151 10 17MB
English Pages 573 [561] Year 2022
Lecture Notes in Applied and Computational Mechanics 98
Jörg Schröder Peter Wriggers Editors
Non-standard Discretisation Methods in Solid Mechanics
Lecture Notes in Applied and Computational Mechanics Volume 98
Series Editors Peter Wriggers, Institut für Kontinuumsmechanik, Leibniz Universität Hannover, Hannover, Niedersachsen, Germany Peter Eberhard, Institute of Engineering and Computational Mechanics, University of Stuttgart, Stuttgart, Germany
This series aims to report new developments in applied and computational mechanics - quickly, informally and at a high level. This includes the fields of fluid, solid and structural mechanics, dynamics and control, and related disciplines. The applied methods can be of analytical, numerical and computational nature. The series scope includes monographs, professional books, selected contributions from specialized conferences or workshops, edited volumes, as well as outstanding advanced textbooks. Indexed by EI-Compendex, SCOPUS, Zentralblatt Math, Ulrich’s, Current Mathematical Publications, Mathematical Reviews and MetaPress.
More information about this series at https://link.springer.com/bookseries/4623
Jörg Schröder · Peter Wriggers Editors
Non-standard Discretisation Methods in Solid Mechanics
Editors Jörg Schröder Institute of Mechanics University of Duisburg-Essen Essen, Germany
Peter Wriggers Institute of Continuum Mechanics Leibniz Universität Hannover Hannover, Germany
ISSN 1613-7736 ISSN 1860-0816 (electronic) Lecture Notes in Applied and Computational Mechanics ISBN 978-3-030-92671-7 ISBN 978-3-030-92672-4 (eBook) https://doi.org/10.1007/978-3-030-92672-4 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Numerical simulation techniques are an essential component for the development, design, and optimization of innovative products in the field of key technologies. These techniques are required for predictive engineering calculations as well as for the simulation of production processes and demand quality, reliability and performances in order to be able to make trustworthy decisions for target-oriented designs. Challenges include, for example, the reliable handling of incompressibility, anisotropy, nonlinearities, and discontinuities. This was the reason for us to initiate a research group at our German Research Foundation (DFG) together with Carsten Carstensen, Stefanie Reese, and Gerhard Starke. Consequently, the main goal of the DFG Priority Programme 1748: “Reliable simulation techniques in solid mechanics. Development of non-standard discretisation methods, mechanical and mathematical analysis”
was to develop novel discretization methods based, for example, on mixed finite element methods, isogeometric approaches as well as discontinuous Galerkin formulations, including sound mathematical analysis for geometrically as well as for physically nonlinear problems. The Priority Programme 1748, funded by the DFG from 2014 to 2021, has established an international framework for mechanical and applied mathematical research to pursue open challenges in the area of discretization techniques on an inter-disciplinary level. The results achieved in the disciplines of mechanics and applied mathematics have brought a new quality to the research approaches in the field of non-conventional discretization methods by bundling the expertise of mechanics and mathematics, creating new networks and strengthening existing networks. Within the framework of this cooperation, experiences were exchanged between the working groups, synergies were created, and thus the efficiency of the research efforts was increased. Specifically, the Priority Programme has advanced research in the following directions of non-conventional finite element formulations: • An in-depth mathematical understanding for reliable (non-conforming) finite element and isogeometric formulations for finite deformations. v
vi
Preface
• Mathematically sound variational formulations and robust discretizations at finite deformations for (quasi-)incompressible material behavior. • Accurate approximation of all process variables in extreme cases. • Insensitive behavior to significant mesh deformation and distortion. • Convergence analysis of adaptive mesh refinement. • Establishment of a variational basis as well as appropriate discretization techniques for discontinuities: Convergence, stability, and approximation properties. • Novel crack growth and crack branching models, contact formulations. This book contains the results of the individual projects and represents the final outcome of the Priority Programme. The compiled results can be understood as state of the art in the research field and show promising ways of further research in the respective areas. In addition, the references of the publications, which have been published during the funding periods, can be found in the following articles and are useful for in-depth study in the respective research fields. Furthermore, a benchmark paper entitled “A selection of benchmark problems in solid mechanics and applied mathematics” has been issued, which provides a guideline for research in the field of reliable and stable discretization methods. We thank the editors of Springer’s Lecture Notes in Applied and Computational Mechanics for including our collection in this series as well as the Springer Verlag. In addition, on behalf of all the researchers involved, we would like to thank the German Research Foundation for funding. Furthermore, we thank Alexander Schwarz for his forward-thinking management of the research group in all respects. Essen, Germany Hannover, Germany August 2021
Jörg Schröder Peter Wriggers
Contents
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H. R. Bayat, J. Krämer, S. Reese, C. Wieners, B. Wohlmuth, and L. Wunderlich Novel Finite Elements - Mixed, Hybrid and Virtual Element Formulations at Finite Strains for 3D Applications . . . . . . . . . . . . . . . . . . . Jörg Schröder, Peter Wriggers, Alex Kraus, and Nils Viebahn Robust and Efficient Finite Element Discretizations for Higher-Order Gradient Formulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . Johannes Riesselmann, Jonas Wilhelm Ketteler, Mira Schedensack, and Daniel Balzani Stress Equilibration for Hyperelastic Models . . . . . . . . . . . . . . . . . . . . . . . . . F. Bertrand, M. Moldenhauer, and G. Starke
1
37
69
91
Adaptive Least-Squares, Discontinuous Petrov-Galerkin, and Hybrid High-Order Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 Philipp Bringmann, Carsten Carstensen, and Ngoc Tien Tran Least-Squares Finite Element Formulation for Finite Strain Elasto-Plasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 M. Igelbüscher, J. Schröder, A. Schwarz, and G. Starke Hybrid Mixed Finite Element Formulations Based on a Least-Squares Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 Maximilian Igelbüscher and Jörg Schröder Adaptive and Pressure-Robust Discretization of Incompressible Pressure-Driven Phase-Field Fracture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 Seshadri Basava, Katrin Mang, Mirjam Walloth, Thomas Wick, and Winnifried Wollner
vii
viii
Contents
A Phase-Field Approach to Pneumatic Fracture . . . . . . . . . . . . . . . . . . . . . . 217 C. Bilgen, A. Kopaniˇcáková, R. Krause, and K. Weinberg Adaptive Isogeometric Phase-Field Modeling of Weak and Strong Discontinuities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 Paul Hennig, Markus Kästner, Roland Maier, Philipp Morgenstern, and Daniel Peterseim Phase Field Modeling of Brittle and Ductile Fracture . . . . . . . . . . . . . . . . . 283 Charlotte Kuhn, Timo Noll, Darius Olesch, and Ralf Müller Adaptive Quadrature and Remeshing Strategies for the Finite Cell Method at Large Deformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327 Wadhah Garhuom, Simeon Hubrich, Lars Radtke, and Alexander Düster The Finite Cell Method for Simulation of Additive Manufacturing . . . . . 355 Stefan Kollmannsberger, Davide D’Angella, Massimo Carraturo, Alessandro Reali, Ferdinando Auricchio, and Ernst Rank Error Control and Adaptivity for the Finite Cell Method . . . . . . . . . . . . . . 377 Paolo Di Stolfo and Andreas Schröder Frontiers in Mortar Methods for Isogeometric Analysis . . . . . . . . . . . . . . . 405 Christian Hesch, Ustim Khristenko, Rolf Krause, Alexander Popp, Alexander Seitz, Wolfgang Wall, and Barbara Wohlmuth Collocation Methods and Beyond in Non-linear Mechanics . . . . . . . . . . . . 449 F. Fahrendorf, S. Shivanand, B. V. Rosic, M. S. Sarfaraz, T. Wu, L. De Lorenzis, and H. G. Matthies Approximation Schemes for Materials with Discontinuities . . . . . . . . . . . . 505 Sören Bartels, Marijo Milicevic, Marita Thomas, Sven Tornquist, and Nico Weber
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems H. R. Bayat, J. Krämer, S. Reese, C. Wieners, B. Wohlmuth, and L. Wunderlich
Abstract We introduce novel hybrid discontinuous Galerkin methods for applications in solid mechanics. Different methods are introduced and numerically evaluated for several benchmark scenarios which show that our new approaches are more efficient and in many applications more robust than lowest order conforming finite elements. We consider different methods with discontinuous ansatz spaces in the cells with different concepts to achieve approximate continuity on cell interfaces. In one approach we select adaptively constraints on the faces. This corresponds to a weakly conforming finite element space defined by primal and dual face degrees of freedom. For the hybrid formulation, the element bubble degrees of freedom can be locally eliminated. Here we show robustness of the hybrid method in the nearly incompressible limit and for thin structures. Non-linear applications including contact, plasticity, and large strain elasticity show the flexibility of this discretization. Then, a locking-free incomplete interior penalty Galerkin (IIPG) variant of the discontinuous Galerkin (DG) method with reduced integration on the boundary terms is introduced. Based on the idea of this element formulation, a novel low-order hybrid DG method for geometrically non-linear problems is proposed which eliminates the locking effects. The drawback is the non-symmetric structure of the stiffness matrix. Next, the symmetric version of the aforementioned element formulation is presented based on a finite element technology with reduced integration and hourglass stabilization. Furthermore, the free (penalty) scalar parameter is transformed to a matrix form that is analytically obtained from the finite element technology. Finally, the IIPG method in combination with a cohesive zone model is applied to model failure at the interface. H. R. Bayat · S. Reese (B) RWTH Aachen University, Mies-van-der-Rohe-Str. 1, 52074 Aachen, Germany e-mail: [email protected] J. Krämer · C. Wieners KIT Karlsruhe, Englerstr. 2, 76131 Karlsruhe, Germany e-mail: [email protected] B. Wohlmuth · L. Wunderlich TU München, Boltzmannstr. 3/III, 85748 Garching b. München, Germany e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_1
1
2
H. R. Bayat et al.
1 Introduction Modern finite element methods play an important role in the construction, design and development of new materials, innovative products and production processes. However, it remains a challenge to construct stable and efficient discretizations which are robust for a wide range of applications in solid mechanics, for example geometrical and material non-linearities, nearly incompressible, anisotropic and generalized materials, inelasticity as well as contact problems. Discontinuous Galerkin (DG) methods may be seen as generalizations of continuous methods, thus offering additional features and options for the improvement of numerical computations in the aforementioned fields. This comes at a cost, as DG methods require far more degrees of freedom and memory consumption than continuous discretizations on the same mesh. To improve this issue, we investigate hybrid discontinuous Galerkin methods allowing for a significant reduction of global degrees of freedom via static condensation. Discontinuous Galerkin methods are presented in [7, 8, 24, 25], their adaption to linear elasticity is studied, e.g., in [21, 27, 28, 30]. The specific numerical problems arising within elastic formulations are treated in [18, 26, 27] with discontinuous formulations. Our approach extends the higher order generalization of Crouzeix– Raviart finite elements in [23] for the diffusion equation to elasticity. A family of hybrid non-conforming discretizations similar to the one presented in this work is described in [1–3], where the application to plasticity and hyperelastic materials is considered. In [22], a hybrid method for contact problems is presented. Here, we summarize and extend the results about hybrid discretizations in solid mechanics presented in [17, 29, 35, 47]. We introduce and evaluate newly developed methods including the hybrid weakly conforming scheme and the hybrid discontinuous Galerkin method for linear and non-linear problems in solid mechanics. Our design criteria are the following: • We aim for robust methods to improve known deficits of classical methods. • We aim for consistency to recover optimal convergence order for problems with smooth solution. • We aim for efficiency by constructing hybrid schemes where the global number of degrees of freedom is reduced in the assembling phase without reducing the robustness or optimality of the method. The new methods are introduced and briefly discussed in Sect. 2. This includes a novel cohesive discontinuous Galerkin (CDG) method to model brittle material failure at interfaces. Matching as well as non-matching discretizations are applied for the CDG elements. Then, the methods are benchmarked in Sect. 3. In the first step we consider linear materials and smooth problems in order to verify the optimal convergence behavior of the non-conforming and hybrid formulations. In the following, we provide tests to establish robustness with respect to the incompressible limit, for thin structures and anisotropic elements, and for interface problems. Finally, we demonstrate the application to more challenging non-linear problems including large deformations, plasticity and damage, and contact problems.
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
3
2 Continuous, Discontinuous and Hybrid Discretizations We introduce different discontinuous and hybrid discretizations for elastic problems. We start with a hybrid weakly conforming method for linear elasticity and compare it to conforming finite elements and a discontinuous Galerkin formulation. Then, we introduce the incomplete interior penalty Galerkin method using the technique of reduced integration, and we derive a corresponding hybrid discontinuous Galerkin method. Next, the symmetric counterpart is considered, and finally we define a cohesive discontinuous Galerkin method.
2.1 A Weakly Conforming Method The configuration is determined by the reference domain ⊂ Rdim , the Neumann boundary N ⊂ ∂, and the Dirichlet boundary D = ∂ \ N with |D |dim−1 > 1 0. For given Dirichlet data uD ∈ H 2 (D ; Rdim ), body forces f ∈ L2 (, Rdim ), and ) traction forces tN ∈ L2 (N ; Rdim ), we determine the stress σ ∈ H(div, ; Rdim×dim sym and the displacement u ∈ H1 (; Rdim ) satisfying − div σ = f , σ n = tN ,
in , on N ,
(1a) (1b)
u = uD ,
on D ,
(1c)
and the constitutive relation σ = σ (u). For isotropic linear elasticity, we have σ = Cε(u) depending on the linearized strain ε(u) = sym(Du) and the elasticity tensor Cε = 2με + λ tr(ε)I depending on the Lamé parameters μ > 0 and λ ≥ 0. We define the linear space V = H1 (; Rdim ), the affine space V (uD ) = v ∈ V : v = uD on D , the bilinear form and the linear functional a(u, v) = Cε(u), ε(v) 0, , , v = f, v 0, + tN , v 0,N , u, v ∈ V . (2) This determines the weak solution of (1) in variational form: find u ∈ V (uD ) solving a(u, v) = , v ,
v ∈ V0 = V (0) .
(3)
Finite element discretizations are based on a decomposition h = K ∈Kh K into open convex cells K ∈ Kh with skeleton ∂h = \ h . Locally in every cell K ∈ Kh , we select a polynomial space VK ⊂ P(K ; Rdim ), e.g., VK = P1 (K ; R2 ) for linear elements on triangles, or bilinear polynomials on quadrilaterals. This defines the
4
H. R. Bayat et al.
discontinuous space VhDG = ah (uh , vh ) =
VK ⊂ P(h ; Rdim ), and a(·, ·) extends to
Cε(uh ), ε(vh ) 0,K ,
u, v ∈ H1 (h ; Rdim ) ,
(4)
K
where H1 (h ; Rdim ) = v ∈ L2 (; Rdim ) : v K = v| K ∈ H1 (K ; Rdim ) for all K . For the conforming Galerkin method, we select continuous ansatz functions Vhcf = V ∩ VhDG ⊂ C0 (; Rdim ) ,
Vhcf (uD,h ) = vh ∈ Vhcf : vh (x) = uD,h (x) for all nodal points x ∈ D , with a suitable continuous approximation uD,h ∈ C0 ( D ; Rdim ) of the Dirichlet data, cf and the conforming discrete solution ucf h ∈ Vh (uD,h ) is determined by a(ucf h , vh ) = , vh ,
cf vh ∈ V0,h = Vhcf (0) .
In the non-conforming case, the continuity is relaxed. Here, we discuss two methods: For the discontinuous Galerkin method, the bilinear form is extended to ahDG (·, ·), DG is determined by and the discrete solution uDG h ∈ Vh DG ahDG (uDG h , vh ) = h , vh ,
vh ∈ VhDG .
(5)
For the weakly conforming method, the ansatz space is constraint to Vhwc ⊂ VhDG , wc and the discrete solution uwc h ∈ Vh (uD ) is determined by ah (uwc h , vh ) = , vh ,
wc vh ∈ V0,h = Vhwc (0) .
(6)
In the discontinuous case (5), we use the symmetric interior penalty method, see, e.g., [28]. Therefore, let F K be the set of faces F ⊂ ∂ K , and define Fh = F K . We assume that D = F⊂D F, so that the mesh resolves the boundary decomposition. We set h F = diam F, and we select a fixed orientation n F . For inner faces F ∈ F K ∩ , let K F be the neighboring cell. Finally, we set h K = diam(K ) and h = max h K . We observe for the solution of (1) and broken test functions v ∈ H1 (h ; Rdim ) , vh = − div σ (u), vh 0, + σ (u)n K , vh 0,N Cε(u), ε(v K ) 0,K − σ (u)n K , v K 0,F = K
= ah (u, vh ) −
(7)
F∈F K \N
σ (u), [[vh ]] F 0,F
F∈Fh \N
with jump terms [[vh ]] F = v K ⊗ n K + v K F ⊗ n K F on inner faces F = ∂ K ∩ ∂ K F and [[vh ]] F = v K ⊗ n K on boundary faces F = ∂ K ∩ ∂ using v K = vh | K , and
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
5
where a ⊗ b = (ai b j )i, j=1,...dim denotes the tensor product for vectors a = (ai )i=1,...dim and b = (b j ) j=1,...dim . For the symmetric interior penalty method a consistent extension of (7) to discontinuous ansatz functions is defined by ahDG (uh , vh ) = ah (u, vh ) −
{{σ (uh )}} F , [[vh ]] F 0,F + [[uh ]] F , {{σ (vh )}} F 0,F
F∈Fh \N
+
F∈Fh \N
DG h , vh = , vh +
θ [[uh ]] F , [[vh ]] F 0,F , hF
F∈Fh ∩D
(8)
θ uD , vh 0,F hF
depending on a penalty parameter θ > 0 and with {{σ h }} F = 21 σ K + σ K F on inner faces F = ∂ K ∩ ∂ K F and {{σ h }} F = σ K on F = ∂ K ∩ ∂. For the weakly conforming ansatz space, we select on every face a multiplier space M F ⊂ P(F; Rdim ), and we define
Vhwc = vh ∈ VhDG : v K , μ F 0,F = v K F , μ F 0,F for μ F ∈ M F , F ∈ Fh ∩ , (9)
Vhwc (uD ) = vh ∈ Vhwc : vh , μ F 0,F = uD , μ F 0,F for μ F ∈ M F , F ∈ Fh ∩ D .
We note that for all choices of Mh = F∈Fh \N M F we get Vhcf ⊂ Vhwc ⊂ VhDG . However, only for Mh large enough, the broken bilinear form (4) is coercive and the consistency error is at least the same order as the best approximation error. On the other hand, if Mh is too large, we get Vhcf = Vhwc and we cannot profit from the improved robustness properties of nonconforming schemes, and no local static condensation to face degrees of freedom is possible. The analysis of the well-posedness is based on a broken Korn inequality, cf. [19]. Note that our approach extends the higher order generalization of Crouzeix–Raviart finite elements in [23] for the diffusion equation to elasticity. An optimal weakly conforming approximation (6) allows for hybridization, i.e., introducing inter element Lagrange multipliers for the weak continuity constraint and face moments of the displacement uˆ h ∈ Mh . By doing so, static condensation to a positive definite Schur complement system for the skeleton degrees of freedom can ˆ h in a post-processing be carried out. Then, the solution uwc h is reconstructed from u step. For the evaluation of the numerical costs of the different methods, we have to quantify the approximation error in comparison to the degrees of freedom in the global linear system. The degrees of freedom in 2D are illustrated in Fig. 1 and summarized in Table 1: • The ansatz functions for linear conforming P1 elements and for bilinear conforming Q1 elements methods are defined by the nodal values on the vertices in Rdim , and for the quadratic P1 elements on simplices or Serendepity Q2 elements on
6
H. R. Bayat et al.
Fig. 1 Degrees of freedom on individual quadrilateral cells for all linear (top) and higher order (bottom) schemes: From left to right: Conforming, discontinuous Galerkin, weakly conforming Table 1 Allocation and asymptotic number of the degrees of freedom (DoFs) and matrix entries per row on uniform quadrilateral meshes (dim = 2) with NKh cells Discretization
DoFs
Total DoFs
Matrix entries per row
Conforming Serendepity Q1 Conforming Serendepity Q2 Linear DG Quadratic DG P2 P0+ hybrid wc P3 P1 hybrid wc
2 per vertex
2 · N Kh
18
2 per vertex, 2 per edge 8 per cell 18 per cell 3 per face 4 per face
6 · N Kh
42
8 · N Kh 18 · NKh 6 · N Kh 8 · N Kh
40 90 21 28
quadrilaterals and hexahedra in addition on the edge midpoints. The matrix graph couples all degrees of freedom of cells connected by a nodal point. • Discontinuous elements use the same degrees of freedom independently on every cell. The matrix graph couples all degrees of freedom of the cells connected by a face. • The lowest order hybrid method P2 P0+ requires dim + 1 degrees of freedom per face which is combined with quadratic polynomials in the cells, i.e., we use M F = P0 (F; Rdim ) + n F P1 (F) and VhDG = P2 (h ). The P3 P1 hybrid method uses linear multiplier on the faces and discontinuous cubic polynomials in the cells. The matrix graph only couples degrees of freedom on faces with a common cell.
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
7
In the following tests, the linear, bilinear, and lowest order hybrid method are cf DG wc , Vh,1 , Vh,1 , respectively, and the higher order family is denoted denoted by Vh,1 cf DG wc by Vh,2 , Vh,2 , Vh,2 . In our numerical test, we investigate the approximation error of the different discretizations with respect to the L2 norm and the energy (semi)-norm v 0, =
v, v 0, ,
|v|E,h =
Cε(v), ε(v) 0,h .
(10)
2.2 Incomplete Interior Penalty Galerkin Method The incomplete interior penalty Galerkin variant of the Eq. 8 is considered in this part as in Bayat et al. [11, 14–17]. This means that the third term on the right hand side of this equation is omitted. Here, the weak discontinuity divides the body into finite subdomains e as illustrated in Fig. 2. Starting with the strong form in Eq. 1, we obtain the new weak form given by
σ : δε d V +
[[δu]] · {σ } n d +
=
f · δu d V +
θ [[δu]] · [[u]] d
(11)
tp · δu d A ,
∂t
where θ = ϑ E/ h [N/m3 ] is a penalty parameter which depends on the Young’s modulus (E), the mesh size of the structure (h) and a sufficiently large positive number (ϑ). Here, σ is the Cauchy stress tensor and ε is the strain tensor. These quantities are related to each other by Hooke’s law as in Sect. 2. The weak form of the boundary value problem is discretized by standard bilinear shape functions. In this way, we obtain the residual force vector Redisc for the discontinuous part of the Eq. 11 by
Fig. 2 The domain divided by discontinuity into finite subdomains e
8
H. R. Bayat et al.
Redisc =
+ + + 1 N+T ud − − C B C B n d e −T −N u− 2 d e +T + N u N+ − N− de d− . + θ −T −N ud e
(12)
Here, N is the matrix of the shape functions acting on the positive (•)+ and negative (•)− sides of the discontinuities. The derivative of the these shape functions with respect to the position vector is given by B. Accordingly, the DG element stiffness reads as: matrix K disc e +T + + + 1 N+T N − − N C B de + θ −T n C B −T −N −N 2 e e
K disc = e
− N− de .
(13) For a detailed derivation of the quantities please refer to [10, 14]. The novelty of the work lies in the idea of the numerical integration on the boundary terms. In addition to the conventional nodal and full Gaussian integration, reduced as well as mixed integration schemes are introduced in this study. The first term of the Eq. 11, denoted as the bulk term, is integrated in a full manner referring to the standard bilinear 4-node quadrilateral element formulation Q1. Furthermore, a reduced integration of the bulk term enhanced with hourglass stabilization (see [34] or Sect. 2.4) is applied. This element formulation is free of both volumetric and shear locking effects, see [33]. The integration schemes are illustrated in Fig. 3. The numerical integration of the stresses on the boundary are carried out in two different ways. Initially, they were computed directly from the adjacent bulk elements. In the later works, the stresses on the Gauss points of the bulk elements are first extrapolated to the mutual nods on the discontinuity . Next, the average of them on each node is computed. Upon the choice of integration schemes (either full or reduced integration scheme), they are later interpolated on . This is pictured in Fig. 3.
Fig. 3 Various integration schemes for the boundary terms as well as the bulk term
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
9
Fig. 4 The domain divided by discontinuity into finite subdomains e
2.3 Hybrid Discontinuous Galerkin Method A low-order hybrid discontinuous Galerkin method for large deformations based on the IIPG variant of the DG methods was introduced by Wulfinghoff et al. [46, 47]. The interior of the subdomain e is kinematically separated from the skeleton as illustrated in Fig. 4. In this way, displacement jumps can emerge between them. Unlike the traditional DG methods, this hybrid approach has the same global degrees of freedom (the displacements at the corners on the skeleton) as those of the continuous Galerkin methods. This is due to the condensation of the local degrees of freedom on the element level. The method is free of both volumetric and shear locking effects. In this method, the balance equation (Eq. 1) along with its boundary conditions, namely the continuation of the displacements (u − u = 0 on ) as well as the continuation of the traction vectors (t+ + t− = 0 on ) are fulfilled in a weak form given by P : δF d = (PN + θ (u − u)) · δu d S
e
∂e
(14)
t+ + t− + θ (u+ − u ) + θ (u− − u ) · δu d = 0
Here, P is the first Piola-Kirchhoff stress and F is the deformation gradient. The normal vector to the discontinuity is denoted by N. The semi-analytical determination of the penalty parameter θ is explained in the work of [47]. For the case of volumetric locking, θ = μ/2h was proposed whereas in the presence of shear lock2 ¯ . Here, E¯ = E/(1 − ν 2 ) is the plane strain version of ing, it was set to θ = 2 Eh/3l Young’s modulus. Displacements of the subdomains are distinguished from those of the skeleton by denoting them to u and u , respectively. An unconventional linear interpolation of the displacements in the subdomain e (see Fig. 4) is carried out assuming constant deformations within the subdomain. In this way, displacements are computed by
10
H. R. Bayat et al.
u = u0 + H(X − X0 ),
(15)
where u0 is the current displacement of some material point with reference position vector X0 with H being the displacement gradient tensor. The nodal degrees of freedom are replaced by six subdomain degrees of freedom u0 and H. By computation of the first weak form in Eq. 14 on the subdomain level, the internal degrees of freedom are condensed out later at the skeleton level . Next, a trapezoidal rule for the numerical integration of the tractions in the second weak form is applied to form the residual force vectors. Initially, the element stiffness matrix for the HDG method is numerically obtained by perturbation of the nodal displacements u . The structure of the stiffness matrix becomes unsymmetric in this method. An analytic consistent symmetric tangent was later introduced by the authors in [5]. In addition, Geometrically nonlinear single crystal viscoplasticity was embedded in the framework of HDG method by Alipour et al. [4, 6].
2.4 Symmetric Hybrid Discontinuous Galerkin Method The hybrid DG formulation of Wulfinghoff et al. [47] was further developed by Reese et al. [35] in the context of finite elasticity. The idea of the latter work was based on the establishment of an equivalence between methods of reduced integration with hourglass stabilization and the HDG method introduced in [47]. Although these two ideas root from different backgrounds, they share significant similarities when it comes to facing locking-dominated problems. They outconverge other methods in terms of overcoming locking effects. By equalizing these method, a symmetric HDG is achieved. In addition, the vague penalty parameter was replaced by a “free” parameter, which was analytically determined from the hourglass stabilization from the work of [33, 36, 37]. As a result, this parameter is no longer a scalar, but a matrix of 2 × 2 in case of 2D problems. The idea of the reduced integration with hourglass stabilization introduced by Reese et al. [34] includes a two-field variational weak form given by
P : δHenh d Ve = 0 , e
P : Grad δu d V −
t : δu d A = 0.
(16)
∂t
where Henh is the enhanced displacement gradient derived from bubble functions. In this method, the Jacobian matrix J is replaced by its equivalence, J0 evaluated at the center of the element. In addition, a Taylor expansion around the center of the element is used to linearize the first Piola-Kirchhoff stress, ˆ 0 (H ˆ hg + H ˆ enh ). Pˆ = Pˆ 0 + A
(17)
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
11
ˆ hg is the Here, Pˆ 0 is the constant part of Pˆ evaluated at the center of element and H ˆ 0 , is hourglass part of the displacement gradient. The simplified material tangent, A given by Eq. 38 in the work of Reese et al. [35]. ˆ 0 must be large Similar to the free (penalty) parameter of the DG methods, A enough to get numerical stability while being small enough to avoid locking. The element formulation of this method is called Q1SP. Here, “Q” stands for quadrilateral, “1” for the first-order polynomial, “S” for stabilization and “P” for idea of equivalent parallelogram. For a detailed explanation of the method, refer to [33, 34, 36, 37]. By evaluation of the first weak form in Eq. 16, the extra internal degrees of freedom are condensed out on the element level e . The residual force vector obtained from the second weak form needs to be evaluated on the domain . This necessitates the assembly procedure. Evaluation of both weak forms resembles the HDG method. In order to obtain an analytical determination of free parameter matrix , we need cg to define two types of displacements. The first one is u hg , which is the displacement that results from the hourglass contribution in the Q1SP element formulation. The second one is the gap displacement in the HDG method that is defined by ugap = ue − ue . Finally, under the assumption of the equality of the aforementioned displacements, cg
ugap = u hg ,
(18)
one can obtain the analytical definition of the free parameter matrix by ¯ stab =K
det J0 . stot
(19)
¯ stab is the stabilization Here, stot is the circumference of the element boundary e and K stiffness matrix. For more, please refer to [35]. In this way, the modified weak form of the new HDG yields
(t+ + t− + [u − u− ] + [u − u+ )] · δu d A = 0 P · Grad δu d Ve − (t + [u − u]) · δu d Ae = 0
e
(20)
∂e
The rest of the procedure remains similar to that of the initial HDG. In case of parallelogram shaped elements, the modified HDG and the Q1SP deliver the same results upon the determination of the free parameter as explained above. As a result, the tangent is symmetric. For general element shapes, there exist different possibilities. A special choice is to apply the idea of equivalent parallelogram to still obtain symmetric method. Otherwise, symmetry cannot be achieved. A detailed clarification does not fit the scope of this report. Please see [35] for more information.
12
H. R. Bayat et al.
2.5 Cohesive Discontinuous Galerkin Method The incomplete interior penalty Galerkin method from Sect. 2.2 in combination with a novel cohesive zone model was applied to model failure at interfaces. Cohesive zone (CZ) models have proved to be a reliable simple method to capture failure especially at interfaces. Due to the application of the DG method, there is no initial stiffness in the pre-failure regime. Unlike the extrinsic CZ in the framework of continuous Galerkin methods, our cohesive discontinuous Galerkin (CDG) method do not require a remeshing of the crack path during the crack propagation. In addition, the embedding of the DG elements in the bulk as well as at the interface with reduced integration on the boundary terms leads to the elimination of the locking effects. Consequently, a realistic behavior of the crack initiation as well as crack propagation is captured in the model. Discretization of the continuous problem is performed for both matching and nonmatching meshes (meshes with hanging nodes). In this way, an elaborate refinement of the mesh for instance on different sides of crack path is not necessary anymore. The new CDG method with different integration schemes and discretizations is compared to a standard intrinsic cohesive zone model in an example in this report. As pictured in Fig. 5, the body is divided by either weak or strong discontinuities. The former necessitates the continuation of displacements and tractions on as explained before. For the case of strong discontinuities as in cracks at interfaces, only the continuation of the tractions is maintained. To include both scenarios, namely the pre- and post-failure, a unified weak form was suggested in Bayat et al. [13]. This is given by
σ : δε d V +
(1 − α) [[δu]] · {σ } · n d+
+
(1 − α) θ [[δu]] · [[u]] d
α [[δu]] · t([[u]]) d =
∂t
tp · δu d A +
f · δu d V,
(21)
Fig. 5 Body with boundary conditions, a weak discontinuity, b strong discontinuity
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
13
where α = 0 denotes the pre-failure regime while α = 1 refers to the failure regime. The cohesive traction t([[u]]) is determined by traction separation law (TSL) as follows: geff λf − λmax m λ 0 + η g˙ . (22) −θ tcz (g) = t0 −gn n λ λf λmax Here, g is the gap which is nothing else than the jump in a rotated coordinate system. The sign • denotes the Macaulay brackets. The effective gap vector is defined by
geff
βgs = . gn
(23)
The factor β is to control the contribution of the shear separation. The effective separation reads as: λ = geff =
gn 2 + β 2 gs2 .
(24)
In addition, λmax is the maximum effective separation reached before full failure while λf is the effective separation at full failure. The convexity of the TSL on the drop of the traction is controlled by the material parameter m. Here, n is a numerical parameter to maintain stability in face of contact. It is noticeable that θ is the same as that of DG penalty term. The last term of the Eq. 22 is the viscous term to take the viscous effects on the interface into account as well as to assure numerical stability if needed e.g. in the presence of snap-backs. A detailed description of the form of the TSL depending on the effective separation is given in Ref. [13]. The discretization of the continuous problem for non-matching meshes is based on the idea of Paggi and Wriggers [32] as introduced in Bayat et al. [13]. In the aforementioned work, the non-matching meshes are applied in both pre- and postfailure regimes. As illustrated in Fig. 6, the position (ξ P ) of the point 3 on the neighboring side to the hanging node 3 is known in the undeformed configuration. This information is lated used to find the point 3 in the deformed configuration/failure regime. In this way, the value of the displacement jump at the hanging node can be obtained. The next data for calculation is the traction at the point 3 . This is calculated by interpolation of the tractions from the adjacent nodes to point 3 . Finally, the average value of the tractions can be easily computed. For a detailed description of the procedure, please refer to [13]. Having the discretized weak form in hand, after linearization, the residual force vector of the boundary terms (given in Eq. 25) is solved by Newton-Raphson procedure. The implementation of the method is explained in [13].
14
H. R. Bayat et al.
Fig. 6 Non-matching meshes in the a undeformed and b deformed configurations
R =
(1 − α) NJT n NA d + e
e
(1 − α) θ NJT NJ U d
RDG
+
e
α
NJT
(25)
R tCZ d . T
RCZ
Here, NJ and NA are matrices of shape function for the jump and average operators, respectively. The nodal stresses are denoted by and the nodal displacements are represented by U. Here, R is the rotation matrix. The stiffness matrices of the boundary terms are given by ∂RDG =0+ θ NJT NJ d ∂U e ∂RCZ ∂g ¯ R NJ d. = = NJT RT K ∂g ∂U e
KDG = KCZ
(26)
¯ see [13]. For the definition of the K,
3 Numerical Evaluation of the Numerical Schemes Now we present a in series of representative examples the performance of the new discretization methods. This serves as benchmarks to investigate the ability of the presented methods to overcome the various challenges which arise in models of solid mechanics.
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
15
Fig. 7 Initial mesh with 16 quadrilaterals (left) and deformation u (right) in the domain = (0, 10)2 \ [2, 8]2
The examples in the following are computed by using the finite element analysis program FEAP [44]. The weakly conforming method is realized with the parallel finite element system [9, 45] using the parallel direct linear solver [31].
3.1 An Illustrating Smooth Example in 2D First we evaluate the quality and efficiency of the three methods in Sect. 2.1 for smooth problem with known solution. This poses no severe numerical challenges and it is to be expected, that a conforming methods are an adequate tool to solve this problem. The solution to this problem is given by u
x sin(x) cos(y) = , y cos(x) sin(y)
= (0, 10)2 \ [2, 8]2 .
(27)
To test the performance of the different schemes we compute the solutions for each one on a series of uniformly refined meshes, starting with the mesh depicted in Fig. 7. We use E = 0.25, ν = 0.25 and Dirichlet boundary conditions on D = ∂ obtained from the solution (27). The convergence is tested for conforming approximations in Vhcf , for discontinuous Galerkin approximations in VhDG , and for the new weakly conforming finite element space Vhwc . The results are depicted in Fig. 8. We observe that • in the lowest order case, the conforming method is more efficient than the discontinuous Galerkin method. This behavior shows that such a smooth example is being already sufficiently well discretized by conforming methods and there is no need for the additional computational effort which is introduced by discontinuous methods. This observation holds only true for sufficiently smooth examples, as we show in further examples.
16
H. R. Bayat et al.
Fig. 8 Convergence study of the smooth example for all linear (top row), quadratic (middle row) and cubic (bottom row) schemes
There is no truly linear weakly conforming method, the lowest order method uses a quadratic ansatz space in the local cells, which explains the high efficiency compared to the linear methods. • In the quadratic case the relation between the conforming and nonconforming methods are very similar, where again the increased global size of the nonconforming DG method cannot translate into additional accuracy. The weakly conforming method can compete with the conforming method, the reduction of global degrees of freedom increases its efficiency enough to be on the level of a conforming method. • For the cubic schemes the DG and weakly conforming methods show again the same behavior. On the same mesh refinement level the errors are nearly the same,
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
17
Fig. 9 Geometric configuration of a junction in a 3D structure and stress distribution |σ (u)| illustrating strong corner and edge singularities (right)
Fig. 10 Convergence of u 0,B and the error for conforming and weakly conforming P3 P1 approximations compared with the results for the p-adaptive weakly conforming method
but the DG method has a much larger global system, which leads to a loss in efficiency.
3.2 A Corner Singularity in 3D In the next example we consider a junction of three bars where one bar is fixed at the end and traction forces are applied at the ending B of the other two bars, cf. Fig. 9. In this configuration the singularities result in low regularity of the solution. Here we use a linear elastic material with E = 2.5 MPa and ν = 0.25. The convergence for the displacement uh 0,B is illustrated in Fig. 10, and by extrapolation we obtain the asymptotic value u 0,B ≈ 0.11875 which is used for the error measure. The weakly conforming method of lowest order is in this case slightly less efficient than the conforming method, while the higher order ansatz is superior to the quadratic conforming method. Since we observe multiple singularities of various
18
H. R. Bayat et al.
Fig. 11 Geometry, boundary conditions, loading and discretization of the Cook’s membrane
magnitude in this example, the p-adaptive scheme yields a significant improvement, achieving the same accuracy as by uniform refinement with only a fraction of global degrees of freedom.
3.3 Cook’s Membrane: A Benchmark Problem In this section, Cook’s membrane is considered to investigate the convergence rate of different finite element technologies. This benchmark problem is of high significance since near-incompressibility, shear- and bending-dominated deformation as well as slightly distorted mesh occur. Figure 11 shows geometry, boundary conditions, loading and discretization of the Cook’s membrane. It is fixed on its left side and pulled up on its right side by a distributed load F. Linear elastic material as well as hyperelastic material are studied here. Linear Elasticity for Nearly Incompressible Materials The material parameters of the linear elastic case are set to E = 240.565 MPa and ν = 0.4999 (see [14]). A load of F = 0.125 N is applied. The vertical displacement of the point P is plotted against the total number of degrees of freedom in Fig. 12. As it is clearly seen, the application of the discontinuous Galerkin method in combination with standard Q1 elements with full integration on the boundary terms worsen the convergence in comparison to continuous Q1 elements. This is due to the increase of the number of degrees of freedom in DG method. Nonetheless, as long as a reduced or mixed integration of the boundary terms is utilized, the convergence rate is highly improved, comparable to that of Q1SP finite elements. Interestingly, the combination of the Q1SP with DG elements (reduced integration) delivers the fastest convergence rate. A Hyperelastic Application The strain energy function of the hyperelastic material is given by
tip displacement [mm]
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
19
0.009
0.006
0.003 10
100
1000
10000
100000
total number of degrees of freedom
Fig. 12 Displacement of the point P in y-direction in terms of total number of degrees of freedom for Q1, Q1SP, DG and their combinations from [14] Fig. 13 Displacement of the point P in y-direction in terms of number of elements in each direction for different HDG element formulations from [35]
ψ = μ (tr b − 3) − μ ln J +
2 (J − 1 − 2 ln J ). 4
(28)
The shear modulus and Lamé constant are set to μ = 80.194 MPa and = 400889.8 MPa, respectively. These parameters denote near incompressibility (ν = 0.4999). Furthermore, b = FFT is the left Cauchy-Green tensor and J = det F denotes the determinant of deformation gradient F. The vertical tip displacement of the point P with respect to the number of the elements in each direction is given in Fig. 13 for the original HDG (HDG - nonsym.) [47] as well as the improved HDG (HDG symm.) [35] methods. In addition, the influence of the choice of the analytically-defined free parameter (see Eq. 19) is compared to the scalar case given by its representative form θ = (θ11 + θ22 ) I. Here, I is the two by two identity matrix. The new symmetric HDG method (blue lines in Fig. 13) outperform the original HDG method in terms of the rate of convergence. Moreover, the convergence behavior of both HDG methods is improved by definition of the free parameter as a matrix (solid lines without dots) rather than a scalar (solid lines with dots).
20
H. R. Bayat et al.
Fig. 14 Configuration for Cook’s membrane in 3D (left) and distribution of |σ (u)| (right) clearly indicating the stress singularity at the top edge of the Dirichlet boundary
The 3-Dimensional Problem for Nearly Incompressible Materials In this example use a 3D version of Cook’s membrane to demonstrate the robustness of the weakly conforming method with respect to incompressibility. The 3D membrane is given by = conv{(0, 0), (0, 44), (48, 44), (48, 60)} × (0, 1), cf. Fig. 14, with elasticity module E = 2.5 MPa and three different Poisson ratios ν ∈ {0.25, 0.49, 0.49999}. The domain is fixed on D = {0} × (0, 44) × (0, 1) with homogeneous Dirichlet boundary conditions uD = 0, and on the left side {48} × (44, 60) × (0, 1) the traction force t = (0.002, −0.02, 0) N is applied. The resulting displacement at the face x1 = 48 is compared for the different discretizations and materials. We observe that • for the compressible material with ν = 0.25 all tested discretizations resolve the problem sufficiently well with a reasonable computational effort, but the weakly conforming method is the most efficient with respect to the number of DoFs. • For ν = 0.49, the error increases all discretizations, but this effect is stronger for the conforming methods. The weakly conforming method again achieves a smaller error with less global degrees of freedom. • For the incompressible material ν = 0.49999 the conforming methods are completely locking with a large relative and absolute error. However, the weakly conforming methods provides robust results with respect to incompressibility. This shows that the error of the weakly conforming approximation does not depend on material parameters. This is confirmed by the comparison in Fig. 15. The quality of the approximation seems to be nearly independent of the Poisson number for the weakly conforming method, whereas the results for the conforming method depend on ν.
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
21
Fig. 15 Relative errors for the quadratic conforming and P3 P1 weakly conforming methods
Fig. 16 Geometry, boundary conditions, loading as well as discretization of the single-material ring
3.4 Elasto-Plastic Deformation of an Annulus In this example, we consider a ring under a partial surface loading from Bayat et al. [12]. Geometry, boundary conditions, loading as well as discretization of the ring is illustrated in Fig. 16. The discretization is carried out in a homogeneous pattern, i.e. 1 × 5, 2 × 10, 4 × 20, 8 × 40, ... . Due to the symmetry conditions of the problem, only a quarter of the ring is considered. The ring is made out of steel with Young’s modulus and Poisson’s ratio set to E = 200 GPa and ν = 0.285, respectively. The material is elasto-plastic, with its elastic portion limited in small deformations. Here, von-Mises plasticity model is applied. The yield condition is given by = σ −
2/3 σ y ,
(29)
where σ is the deviatoric part of the stress. The yield stress σ y is computed by a linear isotropic hardening rule as follows:
22
H. R. Bayat et al.
Fig. 17 Relative vertical displacement of the point A for different finite element methods - elastoplastic model [12]
σ y = σ y0 + H ξ.
(30)
Here, σ y0 = 800 MPa is the initial yield stress and H = 0.3E is the hardening modulus. The accumulated plastic strain ξ is defined by the flow rule given by ξ˙ =
2/3 ˙ε p
(31)
where ε˙ p is the plastic strain rate. The relative vertical displacement of the point A with respect to its converged solution u ex 2 (A) is plotted in Fig. 17 for various finite element formulations. The weakly-conforming method P3 P1 converges very similarly to the quadratic conforming elements. The low order elements, namely Q1SP and HDG deliver outstanding convergence rates comparable to those of the higher-order methods. This signifies that as along as finite element technologies are utilized, there is no need to apply costly higher-order element formulations.
3.5 Material Discontinuities at Interfaces: A Ring with Different Materials For a robustness test of the hybrid method with respect to material interfaces we now consider a bimaterial ring consisting of two layers: an incompressible rubber-type inner layer with E = 10 MPa and ν = 0.499, and a compressible metal like outer layer with E = 20000 MPa and ν = 0.285. The ring has an inner radius of 50 cm and
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
23
Fig. 18 Geometric configuration of the bimaterial ring (left), distribution of |σ (u)|, visualized on the quarter of the ring (right) which is used for the computations
Fig. 19 Convergence of uh 110 for the different methods (left) and the error (right) estimated by extrapolation
and an outer radius of 110 cm and the material boundary between the two layers has a radius of 100 cm, see Fig. 18. We apply a pressure force from top and bottom. Due to the symmetry of the solution, the computation can be reduced to one quarter of the ring, and we use symmetry boundary conditions for x1 = 0 and x2 = 0. We apply on the outer boundary 110 a constant pressure force (−1, 0, 0) N, see Fig. 18 for the resulting stress distribution. The values of the stress σ (u) are much higher in the metal part, so that we clearly identify the material interface. The displacement results u 110 and the error with respect to the problem size are shown in Fig. 19. We observe that only the linear conforming method fails to provide an accurate approximation of the estimated functional value, all other methods approximate the configuration with material jump very accurate.
24
H. R. Bayat et al.
Fig. 20 a Geometry, boundary conditions and loading of the composite b propagation of the crack (magnified displacements in x-direction - non-matching meshes) Table 2 Uniaxial tension of a 2D fiber composite E in MPa
ν
ϑ
λ0 in mm
λ f in mm
β
η
t0 in MPa
n
m
Matrix
1000
0.4999 —–
—–
—–
—–
—–
—–
—–
—–
Fiber
200000 0.2
DG in matrix
1000
—–
—–
—–
—–
—–
—–
—–
—–
0.4999 500
—–
—–
—–
—–
—–
—–
—–
—–
—–
—–
—–
—–
—–
—–
0.017
1.0
0.0
1.5
1.2
1.0
0.017
1.0
0.0
1.5
1.2
1.0
DG in fiber 200000 0.2
500
Extrinsic CDG
100500 0.3
100E/ h —–
Intrinsic CZ
100500 0.3
105
10−4
3.6 A Fiber Composite with Nearly Incompressible Inclusions A well-known failure in composites is the initiation of crack between the fiber and the matrix. This phenomenon can be appropriately modeled by the cohesive zone approach since the crack path is known in advance. To this end, we have modeled a crack propagation within a composite structure in this example. Geometry, boundary conditions and loading of the structure as well as the crack propagation in nonmatching meshes are illustrated in Fig. 20. The fiber is 200 times stiffer than the matrix. The matrix is a rubber-like material with near incompressible behavior with Poisson’s ratio ν = 0.4999. The material and the numerical parameters are given in Table 2. A typical mesh of the quarter of the geometry (due to dual symmetry) is given for matching as well as non-matching meshes in Fig. 21. Due to the incompressible property of the matrix in this example, volumetric locking effects are expected. To overcome this problem, reduced integration on the
Hybrid Discretizations in Solid Mechanics for Non-linear and Non-smooth Problems
(a)
25
(b)
Fig. 21 Matching (a) and non-matching (b) meshes for the quarter of the composite geometry
boundary terms are exploited. Furthermore, a locking-free element formulation from Reese et al. [34] is employed in combination with an intrinsic cohesive zone model (see Rezaei et al. [38]). The convergence behavior of these element formulations are investigated in Fig. 22 in terms of the reaction force-displacement curves. The number of elements in each plot denotes the number of elements once in the fiber and once in the matrix separately on the x and y-directions. According to Fig. 22, application of the reduced integration in CDG method results in tremendous improvement in convergence rate of the elements. The results are comparable to those of the locking-free element Q1SP with intrinsic cohesive zone model [38]. In addition, the overestimated reaction forces as in CDG with full integration on the boundaries as well as in intrinsic CZ model are avoided. Consequently, realistic behavior of the crack propagation is obtained with much fewer number of elements. In conventional discretization techniques, matching meshes on different sides of the interface must be applied. Consequently, unnecessary fine mesh is needed for different materials with e.g. different stiffness properties. This can be computationally very costly. By use of non-matching meshes instead, considerable CPU time can be saved. In this example, due to the fact that the fibers are far stiffer than the matrix, the major deformations occur in the matrix. Therefore, the fiber does not have to be as finely meshed as the matrix. A comparison of the reaction force-displacement of the different discretization techniques, namely matching and non-matching, are carried out in Fig. 23. Please note that the Poisson’s ratio of the matrix is set to ν = 0.3 in this example. The rest of the parameters remain the same as those in Table 2. By application of the non-matching meshes, a redundantly fine mesh in the fiber is avoided without changing the behavior of the entire system. This is proven in Fig. 23, where both meshes deliver the same reaction force-displacement response of the loaded system.
Fig. 22 Reaction force-displacement curves for different element formulations. a CDG with full integration on the boundary terms, b intrinsic cohesive zone model from [38], c CDG with reduced integration on the boundary terms, d comparison of different element formulations with the same number of elements (16) as that of the converged CDG with reduced integration on the boundary terms (Q1SP with 8 elements)
Fig. 23 Reaction force-displacement curves for different discretization techniques. Matching meshes: 32 elements in each direction for fiber and 32 elements in each direction for matrix. Non-matching meshes: only 8 elements in each direction for fiber and 32 elements in each direction for matrix
Fig. 24 Geometry, boundary conditions, loading as well as discretization of the square thin plate
3.7 A Benchmark Configuration for Thin Structures
Within this project, a benchmark problem of a thin 3D plate under surface loading was analyzed, see [39]. Due to the vast applications of such structures in industry, various methods have been developed to account for their thin geometry. Here, a standard finite element formulation as well as two finite element technologies were exploited. The conventional 3D eight-node brick element (Q1) with tri-linear shape functions is used in this example for the computation of the shell-like structure. In addition, the locking-free element from Sect. 2.4, first introduced by Reese et al. [34], is applied to overcome the shear locking effects. This solid element (Q1SP) benefits from reduced integration in the center of the element and an hourglass stabilization technique. Furthermore, an eight-node solid-shell element (Q1STs) with one integration point in the shell plane and a minimum of two integration points over the thickness is applied [40, 41]. In addition to the enhanced assumed strain concept as in Q1SP, the idea of assumed natural strains is used to overcome the transverse shear and curvature thickness locking effects. Geometry, boundary conditions, loading as well as the discretization of the square thin plate are shown in Fig. 24. The plate is fixed at all its outer sides and a distributed load of q = 0.0002 MPa acts on the upper face. The thickness h of the plate is varied in the computations while the side lengths are set to a = 1 mm. The deflection w_P of the point P is investigated for various geometrical aspect ratios a/h as well as different finite element methods. A Neo-Hookean material for hyper-elastic isotropy is considered. The material parameters (Lamé constants) are set to Λ = 144.2307692 MPa and μ = 96.1538 MPa. The strain energy function reads as follows:
W = μ/2 (trace C − 3) − μ ln(√(det C)) + Λ/4 (det C − 1 − 2 ln(√(det C))) .
(32)
Here, C = FT F is the right Cauchy-Green tensor with F denoting the deformation gradient.
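A minimal sketch, not the implementation used by the authors, of how the strain energy (32) can be evaluated for a given deformation gradient; the values of μ and Λ are taken from the text above, everything else is an illustrative assumption.

```python
import numpy as np

# Evaluate the Neo-Hookean strain energy of Eq. (32) for a given deformation gradient F.
mu, Lam = 96.1538, 144.2307692  # MPa, values from the text

def strain_energy(F):
    C = F.T @ F                    # right Cauchy-Green tensor
    detC = np.linalg.det(C)
    J = np.sqrt(detC)              # J = det F = sqrt(det C)
    return (0.5 * mu * (np.trace(C) - 3.0) - mu * np.log(J)
            + 0.25 * Lam * (detC - 1.0 - 2.0 * np.log(J)))

# quick check with a homogeneous stretch (illustrative input)
print(strain_energy(1.1 * np.eye(3)))
```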
Fig. 25 Deflection at the central point of the plate for different numbers of elements (n_e) in the planar as well as in the thickness direction with a/h = 100
Fig. 26 Vertical displacement w P at the point P with different thicknesses h
Figure 25 depicts the convergence behavior of the plate for the geometrical aspect ratio a/h = 100, normalized to the converged solution w_P,conv. As can be seen, the standard Q1 elements suffer from severe shear locking and need far too many elements to converge. On the contrary, the Q1SP elements show a tremendously fast convergence as long as a sufficient number of elements in the thickness direction (here n_z = 4) is used. This is a well-known problem: solid elements with only one integration point over the thickness cannot capture bending correctly. However, with a solid-shell element formulation (Q1STs), convergence is obtained even with a single element over the thickness. As mentioned above, this element possesses at least three integration points in the thickness direction. Finally, the behavior of the plate with respect to different thicknesses is studied in Fig. 26. The mesh refinement level is set to the lowest level at which the Q1SP and the Q1STs elements converge. For this level of discretization, the Q1 elements exhibit severe locking.
Fig. 27 Geometric configuration of the square specimen (left), cell structure of the computed part (right)
3.8 An Inelastic Model Combining Plasticity and Damage
The following configuration is taken from [20] and realized with an elasto-plastic damage model for small deformations in [42, 43]. We compute a square specimen of size 200 × 200 mm² with a circular hole of 100 mm diameter in the middle. The specimen is pulled on both sides with a force of 12 N. Since the domain is symmetric, we compute only one quarter, cf. Fig. 27. On the outer left boundary of the computed domain the mesh consists of small rectangles, while on the rest of the domain it consists of larger rectangles, since most of the stress and the arising damage are expected at the left side. To avoid hanging nodes, a transition area between the two regions consists of rectangles and triangles. In Fig. 28 the resulting distributions of the equivalent plastic strain εp and of the damage variable d are shown. We computed this example with two conforming methods and two configurations of our weakly conforming method. The results are given in Fig. 29. We observe that the linear conforming method suffers from locking and is therefore not well suited for such a complex computation. On the other hand, the weakly conforming methods and the quadratic conforming method perform similarly in terms of accuracy and produce comparable results.
Fig. 28 Resulting distribution of the equivalent plastic strain εp (left) and of the damage variable d (right)
Fig. 29 Square specimen example: A comparison of conforming methods with the weakly conforming methods. Top row: The L2 norm of the equivalent plastic strain (left) and the L2 norm of the damage variable d (right) are depicted. Bottom row: For both measures the errors are estimated, based on extrapolated values rex and dex
Fig. 30 Two bodies which are in contact. The parabolic shaped object on the top is pressed down on the lower object, which is fixed on the bottom
Fig. 31 Displacement (left), and stress distribution (right) of the contact benchmark
3.9 A Hybrid Approximation of a Contact Problem
In this contact benchmark problem, a parabolic-shaped object is shifted downwards, pressing it against another object, which is fixed with Dirichlet boundary conditions along its bottom side, cf. Fig. 30. The deformations are limited to small strains, hence linear elasticity is sufficient for the underlying model and only the contact renders the problem non-linear. The resulting deformation and stress distribution, induced by the contact inequalities, can be seen in Fig. 31. Table 3 lists the results of a computation with the weakly conforming method for various measurements.
Table 3 Reference values on 5 mesh levels with an estimation of the accuracy by the difference between two consecutive mesh levels. Here we use the P3P1 weakly conforming method, where in the hybridization a local contact problem is solved

Degrees of freedom   | 33024   | 131584    | 525312    | 2099200   | 8392704
Contact length       | 0.18395 | 0.18386   | 0.17932   | 0.17244   | 0.17015
  (difference)       |         | −9.088e−5 | −0.00454  | −0.00688  | −0.00229
‖n · σ(u)n‖_{0,C}    | 0.01127 | 0.01123   | 0.01123   | 0.01119   | 0.01118
  (difference)       |         | −4.335e−5 | 3.419e−6  | −4.523e−5 | −3.875e−6
Max. traction        | 0.03795 | 0.03766   | 0.03728   | 0.03716   | 0.03711
  (difference)       |         | −2.918e−4 | −3.831e−4 | −1.139e−4 | −5.327e−5
Pos. max. traction   | 0.73413 | 0.74206   | 0.73822   | 0.74020   | 0.74022
  (difference)       |         | 0.00794   | −0.00384  | 0.00198   | 1.555e−5
4 Conclusion
Standard conforming methods are designed for energy minimization and not for the optimal approximation of stresses. They may fail at singularities, discontinuities, and in robustness tests, in particular in the lowest order case. Over the last decades many nonconforming, discontinuous and hybrid schemes have been constructed to overcome these deficits. Nevertheless, many of these constructions come with increased complexity, requiring more degrees of freedom. Here, we presented new schemes with a focus on hybridization and an efficient realization, so that we achieve optimal and robust convergence behavior with fewer degrees of freedom than established schemes. This is verified by a series of linear and nonlinear benchmark configurations.
Acknowledgements Financial support of this work, related to the project "Hybrid discretizations for nonlinear and nonsmooth problems in solid mechanics" funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 255721882 - SPP 1748, is gratefully acknowledged.
References 1. M. Abbas, A. Ern, N. Pignet, Hybrid high-order methods for finite deformations of hyperelastic materials. Comput. Mech. 62(4), 909–928 (2018) 2. M. Abbas, A. Ern, N. Pignet, A hybrid high-order method for finite elastoplastic deformations within a logarithmic strain framework. Numer. Methods Eng. 120(3), 303–327 (2019) 3. M. Abbas, A. Ern, N. Pignet, A hybrid high-order method for incremental associative plasticity with small deformations. Comput. Methods Appl. Mech. Eng. 346, 891–912 (2019)
4. A. Alipour, S. Wulfinghoff, H.R. Bayat, S. Reese, Geometrically nonlinear crystal plasticity implemented into a discontinuous Galerkin element formulation. PAMM 17(1), 753–754 (2017) 5. A. Alipour, S. Wulfinghoff, H.R. Bayat, S. Reese, B. Svendsen, The concept of control points in hybrid discontinuous Galerkin methods-application to geometrically nonlinear crystal plasticity. Int. J. Numer. Methods Eng. 114(5), 557–579 (2018) 6. A. Alipour, S. Wulfinghoff, B. Svendsen, S. Reese, Geometrically nonlinear single crystal viscoplasticity implemented into a hybrid discontinuous Galerkin framework, in Proceedings of the 7th GACM Colloquium on Computational Mechanics (2017) 7. D.N. Arnold, F. Brezzi, B. Cockburn, L.D. Marini, Unified analysis of discontinuous Galerkin methods for elliptic problems. SIAM J. Numer. Anal. 39(5), 1749–1779 (2002) 8. C.E. Baumann, J.T. Oden, A discontinuous hp finite element method for convection-diffusion problems. Comput. Methods Appl. Mech. Eng. 175(3–4), 311–341 (1999) 9. N. Baumgarten, C. Wieners, The parallel finite element system M++ with integrated multilevel preconditioning and multilevel Monte Carlo methods. Comput. Math. Appl. (subm.) (2019). Manuscript available at http://www.math.kit.edu/user/~wieners/BaumgartenWieners2019.pdf 10. H.R. Bayat, Failure modeling of interfaces and sheet metals. Dissertation, RheinischWestfälische Technische Hochschule Aachen, Aachen (2020). https://doi.org/10.18154/ RWTH-2020-04847. https://publications.rwth-aachen.de/record/788898 11. H.R. Bayat, S. Kastian, S. Wulfinghoff, S. Reese, Discontinuous Galerkin (DG) method in 3D linear elasticity with application in problems with locking. PAMM 17(1), 19–22 (2017) 12. H.R. Bayat, J. Krämer, L. Wunderlich, S. Wulfinghoff, S. Reese, B. Wohlmuth, C. Wieners, Numerical evaluation of discontinuous and nonconforming finite element methods in nonlinear solid mechanics. Comput. Mech. 62(6), 1413–1427 (2018). https://doi.org/10.1007/s00466018-1571-z 13. H.R. Bayat, S. Rezaei, T. Brepols, S. Reese, Locking-free interface failure modeling by a cohesive discontinuous Galerkin method for matching and nonmatching meshes. Int. J. Numer. Methods Eng. 121(8), 1762–1790 (2020) 14. H.R. Bayat, S. Wulfinghoff, S. Kastian, S. Reese, On the use of reduced integration in combination with discontinuous Galerkin discretization: application to volumetric and shear locking problems. Adv. Model. Simul. Eng. Sci. 5(1), 10 (2018). https://doi.org/10.1186/s40323-0180103-x 15. H.R. Bayat, S. Wulfinghoff, S. Reese, Application of the discontinuous Galerkin finite element method in small deformation regimes. PAMM 15(1), 171–172 (2015) 16. H.R. Bayat, S. Wulfinghoff, S. Reese, Discontinuous Galerkin analysis of displacement discontinuities for linear elasticity, in 3rd ECCOMAS Young Investigators Conference and 6th GACM Colloquium on Computational Mechanics, RWTH-2015-04002. Lehrstuhl und Institut für Angewandte Mechanik (2015) 17. H.R. Bayat, S. Wulfinghoff, S. Reese, F. Cavaliere, The discontinuous Galerkin method with reduced integration scheme for the boundary terms in almost incompressible linear elasticity. PAMM 16(1), 189–190 (2016) 18. J. Bramwell, L. Demkowicz, J. Gopalakrishnan, W. Qiu, A locking-free hp DPG method for linear elasticity with symmetric stresses. Numerische Mathematik 122(4), 671–707 (2012) 19. S.C. Brenner, Korn’s inequalities for piecewise H1 vector fields. Math. Comput. pp. 1067–1087 (2004) 20. T. Brepols, Theory and numerics of gradient-extended damage coupled with plasticity. 
Dissertation, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen (2018) 21. L.J. Bridgeman, T. Wihler, Stability and a posteriori error analysis of discontinuous Galerkin methods for linearized elasticity. Comput. Methods Appl. Mech. Eng. 200(13), 1543–1557 (2011) 22. F. Chouly, A. Ern, N. Pignet, A hybrid high-order discretization combined with Nitsche’s method for contact and Tresca friction in small strain elasticity (2019). hal.archivesouvertes.fr/hal-02283418
23. P. Ciarlet Jr., C.F. Dunkl, S.A. Sauter, A family of Crouzeix-Raviart finite elements in 3D. Anal. Appl. 16(05), 649–691 (2018) 24. B. Cockburn, G. Kanschat, D. Schötzau, C. Schwab, Local discontinuous Galerkin methods for the Stokes system. SIAM J. Numer. Anal. 40(1), 319–343 (2002) 25. B. Cockburn, G.E. Karniadakis, C.W. Shu, Discontinuous Galerkin Methods: Theory, Computation and Applications, vol. 11 (Springer Science & Business Media, 2012) 26. D.A. Di Pietro, A. Ern, A hybrid high-order locking-free method for linear elasticity on general meshes. Comput. Methods Appl. Mech. Eng. 283, 1–21 (2015) 27. D.A. Di Pietro, S. Nicaise, A locking-free discontinuous Galerkin method for linear elasticity in locally nearly incompressible heterogeneous media. Appl. Numer. Math. 63, 105–116 (2013) 28. P. Hansbo, M.G. Larson, Discontinuous Galerkin methods for incompressible and nearly incompressible elasticity by Nitsche’s method. Comput. Methods Appl. Mech. Eng. 191(17), 1895–1908 (2002) 29. J. Krämer, C. Wieners, B. Wohlmuth, L. Wunderlich, A hybrid weakly nonconforming discretization for linear elasticity. PAMM 16(1), 849–850 (2016) 30. R. Liu, M. Wheeler, C. Dawson, A three-dimensional nodal-based implementation of a family of discontinuous Galerkin methods for elasticity problems. Comput. Struct. 87(3–4), 141–150 (2009) 31. D. Maurer, C. Wieners, A parallel block LU decomposition method for distributed finite element matrices. Parallel Comput. 37(12), 742–758 (2011) 32. M. Paggi, P. Wriggers, Node-to-segment and node-to-surface interface finite elements for fracture mechanics. Comput. Methods Appl. Mech. Eng. 300, 540–560 (2016) 33. S. Reese, On the equivalent of mixed element formulations and the concept of reduced integration in large deformation problems. Int. J. Nonlinear Sci. Numer. Simul. 3(1), 1–34 (2002) 34. S. Reese, On a consistent hourglass stabilization technique to treat large inelastic deformations and thermo-mechanical coupling in plane strain problems. Int. J. Numer. Methods Eng. 57(8), 1095–1127 (2003) 35. S. Reese, H. Bayat, S. Wulfinghoff, On an equivalence between a discontinuous Galerkin method and reduced integration with hourglass stabilization for finite elasticity. Comput. Methods Appl. Mech. Eng. 325, 175–197 (2017) 36. S. Reese, P. Wriggers, A stabilization technique to avoid hourglassing in finite elasticity. Int. J. Numer. Methods Eng. 48(1), 79–109 (2000) 37. S. Reese, P. Wriggers, B.D. Reddy, A new locking-free brick element technique for large deformation problems in elasticity. Comput. Struct. 75(3), 291–304 (2000) 38. S. Rezaei, S. Wulfinghoff, S. Reese, Prediction of fracture and damage in micro/nano coating systems using cohesive zone elements. Int. J. Solids Struct. 121, 62–74 (2017) 39. J. Schröder, T. Wick, S. Reese, P. Wriggers, R. Müller, S. Kollmannsberger, M. Kästner, A. Schwarz, M. Igelbüscher, N. Viebahn, H.R. Bayat, S. Wulfinghoff, K. Mang, E. Rank, T. Bog, D. DAngella, M. Elhaddad, P. Hennig, A. Düster, W. Garhuom, S. Hubrich, M. Walloth, C. Wollner Winnifried Kuhn, T. Heister, A selection of benchmark problems in solid mechanics and applied mathematics. Archives of Computational Methods in Engineering, pp. 1–39 (2020) 40. M. Schwarze, S. Reese, A reduced integration solid-shell finite element based on the EAS and the ANS concept - geometrically linear problems. Int. J. Numer. Methods Eng. 80(10), 1322–1355 (2009) 41. M. Schwarze, S. 
Reese, A reduced integration solid-shell finite element based on the EAS and the ANS concept - large deformation problems. Int. J. Numer. Methods Eng. 85, 289–329 (2011) 42. R. Shirazi Nejad, C. Wieners, Parallel inelastic heterogeneous multi-scale simulations, in Multiscale Simulation of Composite Materials (Springer, 2019), pp. 57–96 43. J. Spahn, H. Andrä, M. Kabel, R. Müller, A multiscale approach for modeling progressive damage of composite materials using fast fourier transforms. Comput. Methods Appl. Mech. Eng. 268, 871–883 (2014) 44. R.L. Taylor, FEAP - finite element analysis program (2014). http://projects.ce.berkeley.edu/ feap/
45. C. Wieners, A geometric data structure for parallel finite elements and the application to multigrid methods with block smoothing. Comput. Vis. Sci. 13(4), 161–175 (2010) 46. S. Wulfinghoff, H.R. Bayat, A. Alipour, S. Reese, Investigation of a locking-free hybrid discontinuous Galerkin element that is very easy to implement into fe-codes. PAMM 17(1), 87–90 (2017) 47. S. Wulfinghoff, H.R. Bayat, A. Alipour, S. Reese, A low-order locking-free hybrid discontinuous Galerkin element formulation for large deformations. Comput. Methods Appl. Mech. Eng. 323(Supplement C), 353–372 (2017)
Novel Finite Elements - Mixed, Hybrid and Virtual Element Formulations at Finite Strains for 3D Applications Jörg Schröder, Peter Wriggers, Alex Kraus, and Nils Viebahn
Abstract The main goal of this research project is to develop new finite element formulations as a suitable basis for the stable calculation of modern isotropic and anisotropic materials with complex nonlinear material behavior. New ideas are pursued in a strict variational framework, based either on a mixed or a virtual FE approach. A novel extension of the classical Hellinger-Reissner formulation to nonlinear applications is developed. Herein, the constitutive relation between the interpolated stresses and strains is determined with the help of an iterative procedure. The extension of the promising virtual element method (VEM) is part of the further investigation. In particular, different stabilization methods, needed in the framework of complex nonlinear constitutive behavior, are investigated in detail. Furthermore, the interpolation functions for the VEM are extended from linear to quadratic functions to obtain better convergence rates. Especially in this application the flexibility of the VEM regarding mesh generation constitutes a huge benefit. As a common software development platform the AceGen environment is applied, providing a flexible tool for the generation of efficient finite element code.
J. Schröder (B) · N. Viebahn Institute of Mechanics, University Duisburg-Essen, Duisburg, Germany e-mail: [email protected] N. Viebahn e-mail: [email protected] P. Wriggers · A. Kraus Institute of Continuum Mechanics, Leibniz Universität Hannover, Hannover, Germany e-mail: [email protected] A. Kraus e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_2
1 Introduction and State of the Art The finite element method (FEM) represents one of the most popular methods for the approximation of boundary value problems. Its main benefit is its applicability to an enormous range of applications including linear and nonlinear problems, complex shaped domains, varying material coefficients, and different boundary conditions. A variational weak form of the differential equation is the starting point for the finite element discretization. In this contribution we focus on the general field of elasticity, where the classical variational approach considers the displacements as the sole unknown quantity. This procedure represents a suitable approach for the FEM in many different applications. However, in various situation this so-called primal finite element scheme suffers due to poor approximation properties. This is for example the case in the framework of (nearly) incompressibility, (nearly) inextensibility or high slenderness of the considered domain. In these constrained situations locking effects occur, see i.a. [1], resulting in very poor approximations of the boundary value problems. Mixed variational approaches have been proven to be able to overcome these deficiencies if a suitable incorporation of such constraints is considered. First developments concerning mixed variational frameworks, where additional unknown field quantities are treated in a variational sense, are given in [2–4]. A few years later [5, 6] have proposed independently an even more general variational principle. Finite elements based on such mixed variational formulations are denoted as mixed finite elements. Unfortunately, the method of mixed finite elements comes in hand with a couple of difficulties. Probably, the most crucial drawback is the conditional stability, depending on the choice of the finite element discretization spaces. From a mathematical point of view this difficulty is well understood in the framework of linear problems and deduced into the theorem of [7, 8]. The two critical conditions in this theorem are a special condition on coercivity and an inf-sup condition. Together with boundedness, which in the framework of a conforming discretization is a priori satisfied, they ensure the existence and uniqueness of the solution and thus guarantee the stability of the mixed finite element approximation. The proof of these stability criteria may be quite complex and technical depending on the formulation and especially the chosen discrete interpolation spaces. For engineering applications a rather simple numerical validation has been established, based on the work of [9, 10]. The extension of Babuška’s and Brezzi’s theorem to hyperelasticity is rather difficult since in this framework solutions are not restricted to be unique. Thus, approximation schemes that are proven to be stable and optimal in the linear case could still lead to unphysical and unstable formulations in the large deformation framework, as reported i.a. by [11–13]. Despite these well-known uncertainties, mixed formulations are widely used in engineering applications also in the nonlinear framework. Due to its direct extensibility to large deformations, elements based on the displacement-pressure or the Hu-Washizu principle are prevalent. Within this project of the SPP1748 one of the main goals was to establish further knowledge on the stability behavior of mixed finite element technology for the large deforma-
tions regime. In addition, especially stress-displacement based finite elements related to the variational principle of Hellinger-Reissner have been of great interest since they seem to be beneficial in terms of robustness for load stepping, since the highly nonlinear stress-strain relationship is solved already on element level, see [14, 15]. An evolution of the mimetic finite difference methods led to the virtual element methods (VEM). Virtual elements maintain the generality related to element shapes which can have arbitrary geometrical forms. Thus virtual elements allow very complex meshes within a Galerkin type approximation which yields simpler formulations. The work on virtual element methods started with the seminal work [16]. Some early contributions in the area of mathematical analysis and engineering can be found in [17–22]. The above mentioned contributions illustrate that the virtual element method is a relatively recent development. The method permits the use of polygonal elements for problems in two dimensions and polyhedral elements in three dimensions. Furthermore, there is no need for a restriction to convex elements, nor is it necessary to avoid degeneracies such as element sides having an interior angle close to π radians. All kind of element shapes are possible for a discretization using virtual elements. Thus it is even possible to use animal shaped elements, as shown in e.g. [23–25]. Despite this variety of different element geometries the mesh provides a continuous C 0 discretization. Thus the method permits the direct use of Voronoi meshing tools, and as an example, crystals in a polycrystalline materials can be represented by single elements. Key examples of the method in elasticity can be found in [18, 22, 26]. Applications to engineering problems where solids undergo finite strains are presented in [24, 27–31]. Despite being only less than a decade under development, the application range in engineering of virtual elements has been widened to other applications. Especially in the following cases the virtual element method provides advantages: • Many applications in engineering include a combination of different materials or different layers. The mathematical modeling leads to interfaces between the layers or materials. In a discretization, parts related to one material or layer can be meshed in various ways and with dissimilar methods. This leads at the interface to non-matching meshes. Virtual elements allow arbitrary number of nodes and thus coupling can be done in a continuous manner, even for non-matching meshes. It was shown that such coupling fulfills the patch test and thus yields a stable discretization scheme. • In crack propagation problems it is possible to insert directly into a virtual element a crack. This insertion produces a virtual element that has additional vertices. Such adding of node is not permitted in finite element method but does not provide any problem when using virtual element technology. The idea and algorithmic aspects of this method was presented in [25] for the treatment of crack-propagation for 2D elastic solids at small strains. It is also possible to split virtual elements into two elements while maintaining the mesh structure of the existing discretization. This cutting technique for virtual elements can be combined with a phase field approach, see [25, 32, 33].
• In numerical solution schemes for contact mechanics one of the major difficulties is that on the interface the meshes related to the discretization of two contacting bodies do not match. This led to numerous investigations and formulations within finite element methodologies like node-to-segment, segment-to-segment or mortar methods, see e.g. early work [34–38]. Node-to-segment approaches are simple, but usually do not fulfill the contact patch test. Mortar methods fulfill the contact patch test, but have a high complexity with regard to coding. The virtual element allows to have arbitrary number of nodes within an element. This feature can be employed to add additional nodes in a contact interface. Hence a very simple node-to-node contact formulation is achieved, even for non-matching meshes. Node-to-node contact on one hand simplifies the coding and on the other hand fulfills the contact patch test. Applications of the virtual element method in the area of contact mechanics can be found in [39] for small, in [40] for large strains, and in [41] for virtual elements with curved boundaries. • Another big advantage of the virtual element methods comes into play when microstructures of crystalline materials have to be modeled. This is the case in homogenization problems of crystalline anisotropic materials or polycrystals in metal plasticity. Such discretization leads to a huge number of elements and thus needs considerable computing power for its numerical solution. A threedimensional polycrystal discretized using virtual elements can efficiently reduce the number of elements and unknowns, see e.g. [42]. Here only one virtual element per grain is needed having arbitrary number of nodes and faces which reduced the element number drastically and with this the total number of unknowns. Beyond the development of virtual element formulations for various fields of application the stability of the formulations has been of great importance within this project. While [43] investigated the stability of the virtual element method for linear problems, we focussed on the analysis of the virtual element’s ability to accurately predict the stability range in the framework of large deformations using a nearly incompressible material model. Especially in this case some widely employed and optimally performing methods are known to fail the detection of the bifurcation point. The analysis was restricted to displacement based and two-fields mixed virtual elements using second order polynomial basis with different approaches for the stabilization term.
2 Brief Continuum-Mechanical Background
We consider a body B_0 ⊂ IR³ in the reference configuration at time t_0, parametrized in X, and are interested in the description of its motion at time t > t_0, including deformations, translations and rotations. The body in the current configuration at time t is denoted as B ⊂ IR³ and is parametrized in x. The deformation from B_0 to B is described by the nonlinear, continuous and one-to-one transformation map
Fig. 1 Sketch of the body in reference on the left and current configuration on the right
ϕ : B0 → B
(1)
which maps points of B0 onto points of B, i.e. φ : X → x. The deformation gradient F follows as the gradient of ϕ F := ∇ X ϕ t (X) with J := det F > 0,
(2)
whereas the operator ∇ X denotes the gradient with respect to X and J the Jacobian. These important relations are summarized in Fig. 1. An important quantity in the continuum mechanical description of the deformation is the symmetric and positive definite right Cauchy-Green tensor C defined as C := F T F ,
(3)
which is, in contrast to the deformation gradient, free of rigid body rotations and thus useful for the description of strain measures in the large deformation case. Such a useful strain measure is, for example, the Green-Lagrange strain tensor E,
E := ½ (C − I) ,
(4)
whereas I denotes the second order identity tensor. Restricting ourselves to the framework of hyperelasticity it is reasonable to assume the existence of a Helmholtz free energy function ψ = ψ(F), defined per unit reference volume and depending solely on the deformation gradient. Furthermore, the internal dissipation is assumed to be zero, considering only perfectly reversible processes, which reduces the Clausius-Planck inequality to
D_int = P : Ḟ − ψ̇ = (P − ∂ψ/∂F) : Ḟ ≥ 0 ,   (5)
whereas the dot denotes the material time derivative and P the first Piola-Kirchhoff stress tensor. Based on this, the constitutive relation follows as
P = ∂ψ/∂F .
(6)
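A minimal sketch of the kinematic quantities of Eqs. (2)-(4) for a given deformation gradient; the numerical values of F are purely illustrative assumptions.

```python
import numpy as np

# Kinematics of Eqs. (2)-(4): Jacobian J, right Cauchy-Green tensor C,
# Green-Lagrange strain E for a prescribed deformation gradient F.
F = np.array([[1.10, 0.05, 0.00],
              [0.00, 0.98, 0.00],
              [0.00, 0.00, 1.02]])

J = np.linalg.det(F)          # J = det F, Eq. (2); must be > 0
C = F.T @ F                   # right Cauchy-Green tensor, Eq. (3)
E = 0.5 * (C - np.eye(3))     # Green-Lagrange strain, Eq. (4)

assert J > 0.0
print(J); print(C); print(E)
```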
The balance of linear momentum, which constitutes the starting point of the variational approach, is given by
Div P = − f ,   (7)
where Div denotes the divergence operator with respect to X and f the body forces. Let furthermore the strain energy be restricted to a representation such that it is additively decomposed into a compressible part ψ^comp and a penalization term regarding the volumetric deformation,
ψ = ψ^comp(C) + λ/2 ϑ(J)² .   (8)
Therein, ϑ(J) is a function solely depending on J such that
ϑ(J) = 0 if and only if J = 1 ,   ϑ′(1) ≠ 0   with   ϑ′(J) = ∂ϑ/∂J .
(9)
Based on this restriction, the second Piola-Kirchhoff stress S = F⁻¹ P follows as
S(C) = 2 ∂ψ^comp(C)/∂C + λ ϑ(J) ϑ′(J) J C⁻¹ .
(10)
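A hedged sketch of Eq. (10) under two illustrative assumptions that are not prescribed by this section: ψ^comp(C) = μ/2 (tr C − 3) − μ ln J (for which 2 ∂ψ^comp/∂C = μ (I − C⁻¹)) and ϑ(J) = J − 1. Material parameters are placeholders.

```python
import numpy as np

# Sketch of Eq. (10): S(C) = 2 dpsi_comp/dC + lam * theta(J) * theta'(J) * J * C^{-1}
# with the assumed choices psi_comp = mu/2 (tr C - 3) - mu ln J and theta(J) = J - 1.
mu, lam = 80.0, 400.0  # illustrative parameters

def second_pk_stress(F):
    C = F.T @ F
    Cinv = np.linalg.inv(C)
    J = np.linalg.det(F)
    theta, dtheta = J - 1.0, 1.0
    return mu * (np.eye(3) - Cinv) + lam * theta * dtheta * J * Cinv

print(second_pk_stress(np.diag([1.05, 1.00, 0.98])))
```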
3 Mixed FE Technology for Large Deformations
The terminus mixed describes the introduction of an independent field quantity into the fundamental strong form, which substitutes a particular term of the underlying equation. In this contribution we focus on two different mixed formulations for the large deformation case of elasticity. First, a variationally consistent stabilization technique for displacement-pressure based finite elements is discussed. This is followed by an overview of the incorporation of large deformations into the framework of Hellinger-Reissner based elements.
3.1 Consistent Stabilization for Displacement-Pressure Elements The terminus stability of mixed finite element formulations for linear problems is tantamount to the proof of existence and uniqueness of a solution. Mixed elements are declared to be stable if the popular Babuška-Brezzi conditions are fulfilled and thus are ensured to lead to physical meaningful approximations of our boundary value problems. The particular choice of the appropriate discretization of the unknown fields, such that those conditions are fulfilled, represents a wide research field in finite element technology and is excellently discussed and summarized in [44]. Despite
that, the following discussion on stability is related to the large deformation regime and must be considered separately from the conditions of Babuška-Brezzi. A series of publications [12, 45, 46] reported instabilities of displacement-pressure based elements in the framework of nonlinear elasticity. The observed deficiencies also occur for element formulations which are proven to fulfill the stability requirements in linear elasticity. In the following a variationally consistent formulation is presented, which overcomes the reported deficiencies, as shown by the ensuing numerical example. In the classical formulation related to this family of elements a scalar-valued substitution of the pressure p := λ ϑ(J) in Eq. (10) is considered, such that the second Piola-Kirchhoff stresses remain as
S(C, p) = 2 ∂ψ^comp(C)/∂C + p ϑ′(J) J C⁻¹ .
(11)
The essential idea of the variationally consistent stabilized formulation, proposed already in [47], is based on a slightly different substitution of the pressure, given by p := λ ϑ(J) ϑ′(J). This formulation leads to the representation of the second Piola-Kirchhoff stresses as
S(C, p) = 2 ∂ψ^comp(C)/∂C + p J C⁻¹ .
(12)
The related strong form of the boundary value problem reads, under the additional assumption of suitable boundary conditions, as
Div[F S] + f = 0 ,   J = ϕ(p)   ∀ X ∈ B_0 .
(13)
Unfortunately, this substitution requires a complementary description of the pressure p = λ ϑ(J) ϑ′(J) in the form J = ϕ(p), which does not necessarily exist in an explicit manner, e.g. in the case of a nonlinear relationship between the pressure p and the volumetric deformation J. Nonetheless, this requirement can be fulfilled using a local iterative solution procedure based on the introduced internal variable θ := ϕ(p), solving the residual
r(θ, p) := λ ϑ(θ) ϑ′(θ) − p = 0 .   (14)
For a fixed pressure the linearization is obtained as
Lin[r(θ, p)] = r(θ_n, p) + λ (ϑ′(θ_n) ϑ′(θ_n) + ϑ(θ_n) ϑ″(θ_n)) Δθ = 0 ,
(15)
where θ_n denotes the value of θ from the last iteration, which leads to the increment
Δθ = −(λ (ϑ′(θ_n) ϑ′(θ_n) + ϑ(θ_n) ϑ″(θ_n)))⁻¹ r(θ_n, p) .   (16)
A consistent update algorithm follows by
θn+1 = c1 + p c2
(17)
where the abbreviations
c_1 = θ_n − ϑ(θ_n) ϑ′(θ_n) / ( ϑ′(θ_n) ϑ′(θ_n) + ϑ(θ_n) ϑ″(θ_n) ) ,
c_2 = 1 / ( λ ( ϑ′(θ_n) ϑ′(θ_n) + ϑ(θ_n) ϑ″(θ_n) ) )
have been used.
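A minimal sketch of this local iteration, Eqs. (14)-(17), for the assumed choice ϑ(J) = ln J, for which no explicit inverse exists; the value of λ, the load level p and the starting point are illustrative assumptions.

```python
import numpy as np

# Local Newton iteration of Eqs. (14)-(16): find theta = phi(p) from
# lam * theta(t) * theta'(t) = p, here with theta(J) = ln J.
lam = 1000.0  # illustrative penalty parameter

def theta(t):   return np.log(t)
def dtheta(t):  return 1.0 / t
def ddtheta(t): return -1.0 / t**2

def local_update(p, t0=1.0, tol=1e-12, max_iter=25):
    t = t0
    for _ in range(max_iter):
        r = lam * theta(t) * dtheta(t) - p                       # Eq. (14)
        if abs(r) < tol:
            break
        k = lam * (dtheta(t)**2 + theta(t) * ddtheta(t))         # Eq. (15)
        t -= r / k                                               # Eq. (16)
    return t

t = local_update(p=25.0)
print(t, lam * theta(t) * dtheta(t))   # second value should return ~25
```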
With this in hand, the related weak forms follow as
G_u := ∫_{B_0} ½ S : δC dV − ∫_{B_0} f · δu dV − ∫_{∂B_{0,t}} t_0 · δu dA = 0   ∀ δu ,
G_p := ∫_{B_0} (J − c_1 − c_2 p) δp dV = 0   ∀ δp ,   (18)
with the increments as 1 1 ∂S S : δC + : C + p J C −1 : δC dV ,
G u = 2 ∂C 2 B0 1
G p = J C −1 : C − c2 p δp dV . 2 B0
(19)
Equations (18) and (19) represent the essential equations which have to be discretized by means of the finite element method. In this manner the geometry, the unknown fields and their virtual counterparts are approximated by
X_h = Σ_{I}^{n_u-nodes} N_u^I X̂^I = IN_u X̂ ,   u_h = Σ_{I}^{n_u-nodes} N_u^I d_u^I = IN_u d_u ,   δu_h = Σ_{I}^{n_u-nodes} N_u^I δd_u^I = IN_u δd_u ,
p_h = Σ_{I}^{n_p-nodes} N_p^I d_p^I = IN_p d_p ,   δp_h = Σ_{I}^{n_p-nodes} N_p^I δd_p^I = IN_p δd_p ,   (20)
whereas n u-nodes and n p-nodes are the number of displacement and pressure related nodes, NuI and N pI are the nodal shape functions related to the displacement and the I pressure approximation, Xˆ , duI and d pI denote the nodal coordinates and the nodal degrees of freedom related to the displacements and pressure. Note that the approximation of the unknown fields and its virtual counterparts coincide. The substitution of the discretized fields into (18) and (19) leads to the discrete weak forms and their linearizations in form of
Fig. 2 Smallest positive Eigenvalue of global stiffness matrix K over load multiplier γ
G_u,h = r_u · δd_u ,   G_p,h = r_p · δd_p ,
ΔG_u,h = (k_uu Δd_u + k_up Δd_p) · δd_u ,   ΔG_p,h = (k_pp Δd_p + k_pu Δd_u) · δd_p .
(21)
The standard finite element assembling procedure over the number of elements num_ele leads to the global discrete system of equations
A_{e=1}^{num_ele} [ δd_u ; δd_p ]ᵀ ( [ k_uu  k_up ; k_pu  k_pp ] [ Δd_u ; Δd_p ] + [ r_u ; r_p ] ) = 0 ,
where the assembled global matrix is denoted by K.
(22)
It should be highlighted that in case of perfect incompressibility (λ → ∞) the constant c2 vanishes which leads to the classical saddle point structure of (22) with a zero matrix instead of k pp . In the following numerical example two different interpolation schemes of the unknown fields are considered in order to emphasize the distinction of the observed instabilities to the well known Babuška-Brezzi conditions. The Q1 P0 element which is associated to a continuous, piecewise bilinear interpolation of the displacements and a discontinuous piecewise constant interpolation of the pressure represents a non-stable discretization. In contrast the T2 P1 element, with continuous piecewise quadratic and continuous piecewise linear approximations for displacements and pressure, is well known to be stable in the linear regime. In accordance to the numerical example in [46], an incompressible rectangular domain with a side length of l = 50 is considered. It is clamped at the lower, left and right edge whereas the top edge is traction free. The Young’s modulus is set to E = 5 and the body force vector is given by f = γ (0, 1)T by means of a dead load, whereas γ ≥ 0 has the meaning of a loading parameter. Due to the incompressibility of the material, the displacement solution remains trivial u = 0. The main focus of this example remains on the correct approximation of the first critical loading and its related eigenmode. Figure 2 depicts the progression of the smallest positive eigenvalue of the global stiffness matrix K over the loading parameter γ . It should
Fig. 3 Eigenmodes corresponding to the first critical loading for the Q1P0-classic, Q1P0-stabilized, T2P1-classic and T2P1-stabilized formulations
be remarked that, as usual, the mixed interpolation leads to an indefinite system of equations, where the negative eigenvalues are related to the pressure approximation. The blue and green graphs in Fig. 2 depict the crucial deficiency of the classical displacement-pressure approximation. After an initially meaningful progression of the critical eigenvalue, a sudden instability occurs at around γ = 0.8 or γ = 2. Figure 3 visualizes the corresponding eigenmodes at this critical loading. In case of the classical interpolation these eigenmodes clearly depict nonphysical deformation states. Contrarily, the proposed stabilized finite element schemes show a meaningful progression of the critical eigenvalue over the increasing load parameter. The critical loading occurs at around γ = 5.2 or γ = 5.4, where the small disagreement is explicable as the usual discretization error and should vanish with increasing mesh density. The physically meaningful deformation states of the corresponding eigenmodes related to the critical loading emphasize the reliability of the considered formulation.
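A hedged sketch of the stability check behind Fig. 2: track the smallest positive eigenvalue of the global stiffness matrix over the load multiplier γ. The assembly routine `assemble_global_stiffness` is a hypothetical placeholder for the FE code of the considered formulation.

```python
import numpy as np

# Monitor the smallest positive eigenvalue of the (indefinite) stiffness matrix K(gamma).
def smallest_positive_eigenvalue(K):
    evals = np.linalg.eigvalsh(0.5 * (K + K.T))   # symmetrize for a real spectrum
    positive = evals[evals > 0.0]
    return positive.min() if positive.size else np.nan

def critical_load_curve(assemble_global_stiffness, gammas):
    return [(g, smallest_positive_eigenvalue(assemble_global_stiffness(g)))
            for g in gammas]

# usage (with a user-supplied assembly routine):
# curve = critical_load_curve(assemble_K, np.linspace(0.0, 6.0, 61))
```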
3.2 Hellinger-Reissner Principle for Large Deformations
In the following subsection a large deformation extension of the Hellinger-Reissner principle for elasticity is discussed. Parts of the following section have already been published in [14, 15]. The Hellinger-Reissner form for large deformations is introduced based on the complementary form of the constitutive relation in Eq. (10). Assuming the existence of a complementary strain energy function χ(S), we obtain the relation
∂_S χ(S) := E .   (23)
The most trivial case is the complementary form of St. Venant type nonlinear elasticity, E = C⁻¹ : S, which simply yields
χ(S) = ½ S : C⁻¹ : S .
(24)
The dual form of the boundary value problem of hyperelasticity is described by the constitutive relation (23) and the balance of momentum equation Div F S + f = 0 on B .
(25)
The solutions for u and S of (23) and (25) are equivalent to the stationary point of the Hellinger-Reissner functional
F(S, u) = ∫_B (S : E − χ(S)) dV + F^ext(u) ,   (26)
with the external potential F^ext(u) given by
F^ext(u) = − ∫_B f · u dV − ∫_{∂B_t} t · u dA ,
(27)
where t denotes the prescribed traction vector on the Neumann boundary. The stationary point corresponds to the roots of the first variations of (26) with respect to the unknown fields u and S, which are given in detail as
G_u := δ_u F = ∫_B δE : S dV − ∫_B δu · f dV − ∫_{∂B_t} δu · t dA = 0 ,   (28)
G_S := δ_S F = ∫_B δS : (E − ∂_S χ(S)) dV = 0 ,
where δu denotes the virtual deformation and δS the virtual stress field. The corresponding virtual strains follow by δE = ½ (δFᵀ F + Fᵀ δF) with δF = ∇_X δu. By means of a finite element approach a discretization is utilized. The corresponding discretized weak forms follow as G_u^h = Σ_e G_u^e and G_S^h = Σ_e G_S^e, where we obtain for a typical element
G_u^e = δdᵀ ∫_{B^e} IBᵀ S dV − δdᵀ ∫_{B^e} INᵀ f dV − δdᵀ ∫_{∂B_t^e} INᵀ t dA ,   (29)
G_S^e = δβᵀ ∫_{B^e} Lᵀ (E − ∂_S χ(S)) dV .
with δ E = IB δd, where IB is a suitable matrix containing the derivatives of the shape functions and d denotes the nodal displacements. Furthermore L contains the stress related shape functions and β are the nodal stress unknowns. Further details of the stress approximation are discussed in the following chapter. Unfortunately the assumption of existence of a complementary strain energy function χ (S) is only true in very special cases. In general finite deformation elasticity, such an explicit complementary strain energy function does not exist. Nonetheless, the postulated formulation can be slightly modified such that the partial derivative ∂ S χ (S) is computed in an iterative manner. Therefore we introduce a constitutive
counterpart of the Green-Lagrange strain tensor as ∂ S χ (S) =: E cons . We compute E cons by the evaluation of the residual r(E cons ; S) = S − ∂ E ψ(E)| E cons ≈ 0
(30)
at fixed S in each integration point. Thus we obtain the update in form of E cons ⇐ E cons + [∂ E2 E ψ(E)| E cons ]−1 r(E cons ; S) =: D
(31)
until r(E cons ; S) ≈ 0. The linearization of the weak forms, LinG e = G e (d, β) +
G e ( d, β), yields the increments
G eu = δd T
G eS = δβ T
Be Be
S dV d + δd T
Be
LT IB dV d − δβ T
IBT L dV β ,
Be
LT D L dV β ,
(32)
where is defined by B = d. Based on this, we obtain the related system of equations as
Lin G^e = [ δdᵉ ; δβᵉ ]ᵀ ( [ K_uu^e  K_uS^e ; K_uS^{eT}  K_SS^e ] [ Δd ; Δβ ] + [ r_u^e ; r_S^e ] ) ,
(33)
with the element matrices and right hand side vectors as K euu := S dV, K eu S := LT IB dV, K eSS := LT D L dV , Be Be Be r eu := IBT S dV − IN T f dV − IN T t dA and Be ∂Bte Be r eS := LT (E − E cons ) dV .
(34)
Be
The global system of equations is obtained by assembling over the number of elements num_ele,
A_{e=1}^{num_ele} [ δdᵉ ; δβᵉ ]ᵀ ( [ K_uu^e  K_uS^e ; K_uS^{eT}  K_SS^e ] [ Δd ; Δβ ] + [ r_u^e ; r_S^e ] ) = δD · (K ΔD + R) = 0 ,   (35)
and therefore the nodal unknowns are computed via
D = −K −1 R .
(36)
Table 1 Nested algorithmic treatment for a single element
ELEMENT LOOP
(1) Update displacements and stresses (Newton iteration k+1): d = d_n^(k) + Δd, β = β_n^(k) + Δβ
INTEGRATION LOOP
(2) Compute the stresses S and the Green-Lagrange strain tensor E at each Gauss point: S = L β, E = IB d; read E_cons from the history
CONSTITUTIVE LOOP
(3) Compute the residuum: r(E_cons; S) = S − ∂_E ψ(E)|_{E_cons}
(4) Update: E_cons = E_cons + D : r(E_cons, S) with D = [∂²_{EE} ψ(E)|_{E_cons}]⁻¹
(5) Check convergence: if ‖r(E_cons; S)‖ ≤ tol, update the history variable E_cons and exit the CONSTITUTIVE LOOP
(6) Check divergence: if n_iter > n_tol, stop the calculation
(7) Determine and export the element stiffness matrix and right-hand-side vector
Due to the elementwise discontinuous interpolation of the stresses, the unknowns
β in (33) can already be eliminated on element level. This leads to a global system of equations with the same number of unknowns, and almost the same computational cost, as a displacement based trilinear element. Table 1 sketches the nested algorithmic treatment for a typical element for the case that a complementary stored energy is not known.
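A minimal sketch of the constitutive loop of Table 1 in a one-dimensional setting: for a given stress S, the strain E_cons is recovered from S = ∂ψ/∂E by a local Newton iteration. The strain energy ψ(E) = a/2 E² + b/4 E⁴ is an illustrative assumption, not the material model used in the chapter.

```python
# 1D illustration of Table 1, steps (3)-(6): invert S = dpsi/dE iteratively.
a, b = 100.0, 5000.0   # illustrative material parameters

def dpsi(E):  return a * E + b * E**3        # stress from the assumed law
def ddpsi(E): return a + 3.0 * b * E**2      # consistent tangent

def constitutive_loop(S, E_cons=0.0, tol=1e-10, n_tol=20):
    for n_iter in range(n_tol):
        r = S - dpsi(E_cons)                 # step (3): residuum
        if abs(r) <= tol:                    # step (5): convergence check
            return E_cons
        D = 1.0 / ddpsi(E_cons)              # inverse tangent
        E_cons += D * r                      # step (4): update
    raise RuntimeError("constitutive loop did not converge")  # step (6)

print(constitutive_loop(S=12.0))
```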
3.2.1
Stress Interpolation
A well known and very efficient stress discretization for the linear elastic counterpart of the proposed HR formulation is the 18 parameter based interpolation scheme proposed by [48], which is a 3D extension of the element by [49]. Here the individual interpolation vectors are given by
50
J. Schröder et al.
L18 ξ ξ = (1, η, ζ, ηζ ) L18 ηη = (1, ξ, ζ, ξ ζ ) L18 ζ ζ = (1, ξ, η, ξ η) L18 ξ η = (1, ζ ) 18 Lηζ = (1, ξ ) L18 ξ ζ = (1, η)
(37)
18 stress modes Another proposed stress interpolation is the 30-parameter based discretization L30 ξ ξ = (1, η, ζ, ηζ ) L30 ηη L30 ζζ L30 ξη L30 ηζ L30 ξζ
= = = = =
(1, (1, (1, (1, (1,
ξ, ξ, ξ, ξ, ξ,
ζ, ξ ζ ) η, ξ η) η, ζ, ηζ, ξ ζ ) η, ζ, ξ η, ξ ζ ) η, ζ, ξ η, ηζ )
(38)
30 stress modes It is closely related to the EAS approach published recently by [50], which does not suffer to hourglassing modes and is free from volumetric locking. In particular we consider the interpolation scheme which can be nested in between the interpolations from Eqs. (37) and (38). The corresponding matrices follow by L24 ξξ L24 ηη L24 ζζ L24 ξη L24 ηζ L24 ξζ
= (1, = (1, = (1, = (1, = (1, = (1,
η, ζ, ηζ ) ξ, ζ, ξ ζ ) ξ, η, ξ η) ζ, ηζ, ξ ζ ) ξ, ξ η, ξ ζ ) η, ξ η, ηζ )
(39)
24 stress modes
3.2.2
Numerical Examples
The following numerical examples, which take into account the proposed finite element family, are a compendium of computations published in [51]. They investigate the formulation with respect to locking phenomena, stability, robustness and efficiency. The assumed stress elements are compared to a non-mixed lowest order element, the well known H1P0 (see [52]) and a couple of enhanced assumed strain element formulations (Table 2).
Novel Finite Elements - Mixed, Hybrid and Virtual Element …
51
Table 2 Overview of considered elements H1 Isoparametric eight-node hexahedral element AS-18 Assumed stress element with 18 stress modes, see Eq. (37) EAS-21 Enhanced assumed strain element with 21 modes, see [53] AS-30 Assumed stress element with 30 stress modes, see Eq. (38) *EAS-9 Enhanced assumed strain element with 9 modes, see [50] AS-24 Assumed stress element with 24 stress modes, see Eq. (39) EAS-15 Enhanced assumed strain element with 15 modes, see [54] H1P0 Displacement-pressure approach with piecewise constant pressure, see [52]
Fig. 4 Cook’s membrane problem; exemplary reference mesh and deformed body on the left and boundary conditions and parameters on the right
Hyperelastic Nearly Incompressible Cook’s Membrane The Cook’s Membrane represents a well known benchmark problem for finite element analysis. It represents a bulk related boundary value problem with a slight amount of bending. The geometry, boundary conditions and material parameter, representing a nearly incompressible material, are summarized in Fig. 4. The convergence of the tip displacement considering a regular mesh refinement in x and y direction are shown in Fig. 5a. The strong volumetric locking of the displacement based element (H1) can be recognized. In contrast, all other mixed finite elements achieve a comparably good displacement convergence behavior. The advantage of the proposed Hellinger-Reissner based elements is depicted in Fig. 5b. It shows the necessary load steps (the minimum number of load steps required to achieve convergence) which have been required for the different element formulations. It can be seen that the assumed stress elements are able to deal with large load steps also in the case of nearly incompressibility, whereas the compared finite element formulations suffer due to a high amount the load increments. For this boundary value problem the considered Hellinger-Reissner based elements require only a single load step, independent of the mesh size. The level of necessary load steps in case of the H1
52
J. Schröder et al.
Fig. 5 Cook’s membrane problem: Convergence of tip displacement a and number of necessary load steps b over the number of elements
element is moderate but it should be kept in mind that their performance by means of displacement accuracy is insufficient. Hyperelastic Fiber Reinforced Cook’s Membrane Problem The second numerical example depicts that the proposed algorithm is also applicable to more complex constitutive equations. We consider again a boundary value problem with the geometry of the Cook’s membrane but assuming a fiber reinforced material. Therefore, the underlying strain energy function is split into an isotropic and an anisotropic part (40) ψ = ψiso + ψaniso . The isotropic part is represented by the following strain energy of Mooney-Rivlin type ψiso =
α β tr[C]2 + tr[CofC]2 − γ ln J + 1 (det[C]2 + det[C]−2 − 2) , (41) 2 2
where α, β, γ , 1 and 2 are material parameter and the cofactor of a second order tensor is defined by Cof A = det[ A] A−T . For the formulation of anisotropic free
Novel Finite Elements - Mixed, Hybrid and Virtual Element …
53
Fig. 6 Cooks membrane problem: Geometry, representative mesh, deformed configuration and the boundary conditions
Fig. 7 Cook’s membrane problem, displacement convergence (left) and the number of necessary load steps (right)
energies as isotropic tensor functions we apply the concept of structural tensors, see e.g. [55]. Considering here the case of transverse isotropy we introduce a preferred direction vector a of unit length and the structural tensor M = a ⊗ a. The anisotropic part of the strain energy is given by ψaniso = g0
1 1 1 −g tr[C M]gc +1 + tr[Cof C M]gh +1 + I3 θ gc + 1 gh + 1 gθ
(42)
with the material parameters g0 , gc , gh , gθ . The geometry, boundary conditions and the material parameters are depicted in Fig. 6. The convergence of the displacements at node x = (48, 60, 0)T and the corresponding necessary load steps are depicted in Fig. 7. The results correspond to the isotropic case.
54
J. Schröder et al.
4 Virtual Element Technology for Large Deformations In order to introduce the virtual element method (VEM) we consider the potential functional
μ λ (43) Πu = [I : C − 3] − μ ln J + ϑ(J )2 dX + F ext (u) 2 B0 2 for the displacement-based formulation. We assume that the boundary of the domain ∂B0 is split into non-overlapping regions ∂B0,t and ∂B0,u with applied traction and displacement boundary conditions. The external loads are given in the form of F (u) = ext
B0
f · u dX
(44)
with a dead body load f . In the variational framework, the weak form of the problem reads as F − F −T : ∇δu dX δΠu (u, δu) = μ B 0 +λ ϑ(J )ϑ (J )J F −T : ∇δu dX + F ext (δu) = 0 ∀δu,
(45)
B0
where δu is the variation of u. The linearisation of (45) yields the bilinear form Lin δΠu ( u, δu) = μ ∇ u: ∇δu dX B 0 + μ − λϑ(J )ϑ (J )J (F −1 ∇ u)T : (F −1 ∇δu) dX (46) B0 +λ (J )(F −T : ∇ u)(F −T : ∇δu) dX B0
with (J ) = ϑ(J )[ϑ (J )J 2 + ϑ (J )J ] + (ϑ (J )J )2 , see [46], and u denoting the linearisation of displacements u. The mixed two-field form of the functional can be written as
μ 1 2 p dX + F ext (u) (47) Πm = [I : C − 3] − μ ln J + pϑ(J ) − 2λ B0 2 with the introduced pressure-like variable p := λ ϑ(J ). We use a mixed formulation for a quasi incompressible material to obtain a generalized displacement formulation using static condensation of the pressure field. The first variation of the mixed functional with respect to displacements and pressure is given as
Novel Finite Elements - Mixed, Hybrid and Virtual Element …
55
F − F −T : ∇δu dX + δΠm = μ p ϑ (J )J F −T : ∇δu dX B B 0 0 p δp dX + F ext (δu) = 0 ∀δu, δp + ϑ(J ) − λ B0
(48)
with δp being the variation of pressure p. The linearisation of the first variation for the mixed problem (47) yields the bilinear form μ − pϑ (J )J (F −1 ∇ u)T : (F −1 ∇δu) dX Lin δΠm = μ ∇ u: ∇δu dX + B B 0 0 +λ (ϑ (J )J 2 + ϑ (J )J )(F −T : ∇ u)(F −T : ∇δu) dX B0
+ −
B0
ϑ (J )J F −T : ∇ uδp dX +
1
p δp dX λ B0
B0
p ϑ (J )J F −T : ∇δu dX
(49) Following [46, 47], we are interested in the stability of the virtual element formulation in the nearly incompressible regime for different choices of ϑ(J ), namely ϑ(J ) = J − 1, ϑ(J ) = log(J ), ϑ(J ) = 1 −
1 . J
(50)
4.1 Displacement VEM Space and Projector Operators Let the domain B0 be decomposed into non-overlapping polygonal elements B E where |E| is the area of the element and e ∈ ∂B E is one edge of the element. We assume the boundary of the mesh ∂B E to be compatible with the applied boundary conditions. For each polygon B E , we define the local virtual space VE = {uh ∈ H 1 (B E ) ∩ C 0 (B E ) : uh |e ∈ P2 (e) ∀e ∈ ∂B E }
(51)
that consists of functions that are second order polynomials on each polygon B E in such a way that it is a second order polynomial on each edge e of the element E. Moreover, VE contains all polynomials of second order in the interior of the element but can contain other functions. Following the notation as presented in [56], the complete set of degrees of freedom for a second order virtual element with Nn nodes, denoted as X i = {X i , Yi }T , i = 1, ..., Nn is given by • •
nodal displacements ui = uh (X i ) ∀X i node of B E , moments m := uh dX BE
(52) (53)
56
J. Schröder et al.
for any uh ∈ VE . In this approach we make use of the projector operator, denoted as Π E∇ , which projects from the virtual element space VE onto P2 (E) defined as BE
∇(uh − Π E∇ uh ) : ∇ p dX = 0 ∀ p ∈ P2 (B E ) ,
(54)
where p denotes the variation of the projected displacements Π E∇ uh . For the sake of simplicity, the superscript ∇ and the subscript E will be omitted. In order to determine the polynomial Π uh we perform a simple manipulation on the orthogonality condition (54) and obtain ∇Π uh : ∇ p dX = ∇uh : ∇ p dX ∀ p ∈ P2 (B E ) . (55) BE
BE
Considering the left and right hand side separately, performing integration by parts and taking the divergence theorem into account we obtain
∇Π uh : ∇ p dX =
BE
BE
∇uh : ∇ p dX =
Π uh · ∇ p · ne dX − uh · ∇ p · ne dX −
∂B E
∂B E
BE
BE
Π uh · ∇ · (∇ p) dX , (56)
uh · ∇ · (∇ p) dX
(57)
with ne being the outward normal vector to the boundary ∂B E of the element B E at edge e. Note that the term p = ∇ · (∇ p) in the last expressions is constant. Thus, making use of Greens theorem and Laplace operator leads to
BE
Π uh · p dX = p
∂B E
Π uh dX dY
(58)
with Π uh dX being the undefined integral of Π uh with respect to X , the first component of X. In consideration of the definition of the degrees of freedom we obtain the final equations for the calculation of the projection:
∇Π uh : ∇ p dX =
BE
∂B E
∇ p · ne · Π uh dX − p
∂B E
∇uh : ∇ p dX =
Π uh dX dY , (59)
BE
∂B E
∇ p · ne · uh dX − p · m .
(60)
Since the Eqs. (59) and (60) define Π uh only up to a constant, two additional equations are necessary. Therefore we choose BE
(uh − Π uh ) dX = 0 ,
(61)
Novel Finite Elements - Mixed, Hybrid and Virtual Element …
57
ensuring that the mean value of the projection Π u_h equals the mean value of the displacements u_h. The last missing part needed to determine the polynomial Π u_h explicitly is the choice of the basis for Π u_h ∈ P_2 and p ∈ P_2. In this approach we choose for Π u_h a complete polynomial up to order 2,

Π u_h = H a = \begin{pmatrix} 1 & 0 & X & 0 & Y & 0 & XY & 0 & X^2 & 0 & Y^2 & 0 \\ 0 & 1 & 0 & X & 0 & Y & 0 & XY & 0 & X^2 & 0 & Y^2 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_{12} \end{pmatrix} ,   (62)

with unknown coefficients a which have to be determined. We define p according to

p = h α = \begin{pmatrix} X & 0 & Y & 0 & XY & 0 & X^2 & 0 & Y^2 & 0 \\ 0 & X & 0 & Y & 0 & XY & 0 & X^2 & 0 & Y^2 \end{pmatrix} \begin{pmatrix} α_1 \\ α_2 \\ \vdots \\ α_{10} \end{pmatrix} .   (63)
Since only the gradient of p enters (54), the constant parts in the polynomial basis can be neglected. Furthermore, the components of α are arbitrary. Substituting the above definitions of Π u_h and p into Eqs. (59)–(61) and neglecting α yields

b = \begin{pmatrix} m \\ \int_{∂B_E} ∇h^T · n_e · u_h \, dX − Δh^T · m \end{pmatrix}     (64)
as the right hand side of the linear system of equations. The integral in Eq. (64) is evaluated using Gauss-Lobatto quadrature. Before we state the left hand side we integrate the last term in (59) using the ansatz (62) and define

m_H := \int H \, dX = \begin{pmatrix} X & 0 & \frac{X^2}{2} & 0 & XY & 0 & \frac{X^2 Y}{2} & 0 & \frac{X^3}{3} & 0 & X Y^2 & 0 \\ 0 & X & 0 & \frac{X^2}{2} & 0 & XY & 0 & \frac{X^2 Y}{2} & 0 & \frac{X^3}{3} & 0 & X Y^2 \end{pmatrix} .   (65)
Thus, the left hand side is given by

G a := \begin{pmatrix} \int_{∂B_E} m_H \, dY \\ \int_{∂B_E} ∇h^T · n_e · H \, dX − Δh^T \int_{∂B_E} m_H \, dY \end{pmatrix} a ,   (66)
where G is a matrix of dimension 12 × 12. With this in hand we are able to construct and solve the linear system of equations

a = G^{-1} b .                                                                           (67)

This yields the unknown parameters a of Π u_h as a function of the nodal unknowns u_i and the moments m. We remark that the projection Π u_h is computable using only geometrical information of the element E and the degrees of freedom (52) and (53). With the defined projection, the deformation u_h can be decomposed into polynomials up to order two and remaining higher-order components according to

u_h = Π u_h + (u_h − Π u_h) .                                                            (68)
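As an illustration of Eqs. (62)–(67), the following Python sketch sets up the monomial matrices H, h and m_H and performs the final solve a = G^{-1} b. It is a minimal sketch only: the boundary integrals that actually fill G and b according to (64) and (66) are assumed to have been evaluated elsewhere, e.g. by Gauss-Lobatto quadrature along the element edges.

import numpy as np

def H_mat(X, Y):
    # Complete second-order polynomial basis of the projected displacement, Eq. (62)
    row = [1.0, X, Y, X * Y, X**2, Y**2]
    H = np.zeros((2, 12))
    H[0, 0::2] = row
    H[1, 1::2] = row
    return H

def h_mat(X, Y):
    # Basis of the test polynomial p without constant parts, Eq. (63)
    row = [X, Y, X * Y, X**2, Y**2]
    h = np.zeros((2, 10))
    h[0, 0::2] = row
    h[1, 1::2] = row
    return h

def mH_mat(X, Y):
    # Entry-wise antiderivative of H with respect to X, Eq. (65)
    row = [X, X**2 / 2, X * Y, X**2 * Y / 2, X**3 / 3, X * Y**2]
    mH = np.zeros((2, 12))
    mH[0, 0::2] = row
    mH[1, 1::2] = row
    return mH

def projector_coefficients(G, b):
    # Eq. (67): coefficients a of Pi u_h from the assembled 12 x 12 system
    return np.linalg.solve(G, b)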
For the two-field formulation of the virtual element the pressure field p is assumed to be constant over the polygonal element E.
4.2 Construction of Displacement Based and Two-Field Mixed VEM Approximation

The next step is the construction of a stable method using the virtual element ansatz spaces. Since it is not possible to achieve a formulation of full rank for an arbitrary number of nodes using only the ansatz Π u_h, it is necessary to stabilize the formulation, see [18]. Thus, the weak form has to be extended by a stabilization term

a(u_h, v_h) = a(Π u_h, Π v_h) + S(u_h − Π u_h, v_h − Π v_h) .                             (69)

The standard procedure in the formulation of virtual elements is to replace the uncomputable term S by any symmetric positive definite bilinear form, e.g.

S(u_h − Π u_h, v_h − Π v_h) ≈ α \sum_{i=1}^{n_v} [u_i − Π u_h(X_i)] · [v_i − Π v_h(X_i)] ,   (70)
which is easily computable. It only requires the selection of an appropriate stabilization parameter α, see e.g. [26], and then assures the stability of the formulation. Another starting point for the stabilization is the decomposition of the potential functional, see [28],

Π_u^E(u_h) = Π_u^{E,c}(Π u_h) + Π_u^{E,s}(u_h − Π u_h) .                                  (71)
Thus, the potential functional of a generic element E consists of a consistency part ΠuE,c and a stabilization part ΠuE,s . The consistency and the stabilization parts are discussed separately in Sects. 4.2.1 and 4.2.2 and special features are described therein.
4.2.1 Consistency Term
Using the given potential functional (43), the consistency part for the displacement based virtual element formulation is given by

Π_u^{E,c}(Π u_h) = \int_{B_E} \Big\{ \frac{μ}{2} [I : C(Π u_h) − 3] − μ \ln J(Π u_h) \Big\} dX + \int_{B_E} \frac{λ}{2} ϑ(J(Π u_h))^2 \, dX + F^{ext}(Π u_h) .   (72)
All terms in the consistency part depend only on Π u_h according to

C(Π u_h) = (I + ∇Π u_h)^T (I + ∇Π u_h) ,                                                  (73)
J(Π u_h) = \det(I + ∇Π u_h)                                                               (74)
and therefore can be easily calculated. In the following, the argument of C and J will be omitted when no confusion can arise. Since the integrals in the consistency part cannot be shifted to boundary integrals in the case of finite deformations, we need to evaluate area integrals. The method used in this approach is a triangulation T^E of the polygonal area into N_T = N_n − 2 triangles T_k^E, k = 1, ..., N_T, and subsequent evaluation of the integral using Gauss-Lobatto quadrature as shown in Fig. 8. Additionally, we can mimic the selective reduced integration concept known from the FEM literature. Based on that, one-point integration is used for the term related to the incompressibility of the material, which is then evaluated at the center of the element X_c and multiplied by the area of the element |E| according to

Π_u^{E,c}(Π u_h) = \int_{B_E} \Big\{ \frac{μ}{2} [I : C − 3] − μ \ln J \Big\} dX + |E| \frac{λ}{2} ϑ(J(X_c))^2 + F^{ext}(Π u_h) .   (75)
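The selective reduced integration in (75) translates into a short quadrature loop. The Python sketch below illustrates this under simplifying assumptions: the hypothetical callable F_of(X) is assumed to return the full plane-strain deformation gradient I + ∇Π u_h (as a 3 x 3 matrix) at a 2D point X, the triangle integrals are approximated by one-point centroid quadrature instead of the Gauss-Lobatto rule used in the chapter, and ϑ(J) = J − 1 is chosen as penalty function.

import numpy as np

def element_energy_sri(vertices, F_of, mu, lam):
    """Consistency energy (75): isochoric part integrated over the triangle fan T_k^E,
    volumetric part evaluated once at the element center X_c and scaled by |E|."""
    X = np.asarray(vertices, dtype=float)       # polygon vertices, counter-clockwise
    Xc = X.mean(axis=0)                         # element center
    W_iso, area = 0.0, 0.0
    for k in range(1, len(X) - 1):              # fan triangulation into N_n - 2 triangles
        P0, P1, P2 = X[0], X[k], X[k + 1]
        A_k = 0.5 * abs((P1[0] - P0[0]) * (P2[1] - P0[1])
                        - (P2[0] - P0[0]) * (P1[1] - P0[1]))
        Xg = (P0 + P1 + P2) / 3.0               # centroid quadrature point
        F = F_of(Xg)
        C = F.T @ F
        J = np.linalg.det(F)
        W_iso += A_k * (0.5 * mu * (np.trace(C) - 3.0) - mu * np.log(J))
        area += A_k
    Jc = np.linalg.det(F_of(Xc))
    W_vol = area * 0.5 * lam * (Jc - 1.0) ** 2  # theta(J) = J - 1 assumed
    return W_iso + W_vol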
The first term in (72) is integrated using Gauss-Lobatto quadrature as depicted in Fig. 8. The consistency part of the mixed formulation of the virtual element has the form

Π_m^{E,c}(Π u_h, p) = \int_{B_E} \Big\{ \frac{μ}{2} [I : C − 3] − μ \ln J \Big\} dX + \int_{B_E} \Big\{ p \, ϑ(J) − \frac{1}{2λ} p^2 \Big\} dX + F^{ext}(Π u_h) .   (76)
Again, a one-point integration can be applied for the last term. For the construction of the loading terms we consider (44). The body load is discretized by

\int_{B_0} f · Π u_h \, dX = \sum_{i}^{N_n^E} w_i^E \, f(X_i) · Π u_h(X_i) ,              (77)

where w_i^E denotes the weight associated with the i-th node according to the Gauss-Lobatto quadrature. This formulation is sufficient for optimal convergence rates since only constant body forces are investigated in this approach, see [18].

Fig. 8 Triangulation T^E of the polygonal domain B_E
4.2.2 Stabilization Term

Since Π_u^{E,s}(u_h − Π u_h) is not computable using the introduced ansatz spaces, we replace it by an elemental bilinear form in (69), written here for an element as S_E(·,·) and multiplied by a stabilization parameter α_E, see (70), according to

Π_u^{E,s}(u_h, Π u_h) ≈ α_E S_E(u_h − Π u_h, u_h − Π u_h)                                 (78)

as suggested by [57]. The idea is to construct a stabilization term which depends on material parameters and geometrical information of E. Following the idea proposed therein, the bilinear form S_E is given by

S_E(u_h − Π u_h, u_h − Π u_h) = h_E^{d−2} \sum_{i=1}^{N_n} (u_i − Π u(X_i)) · (u_i − Π u(X_i)) ,   (79)

where h_E = |E|^{1/d} and d is the spatial dimension. In the first approach the stabilization parameter α_E is given by the expression

α_{E,1} = \Big|\Big|\Big| \frac{∂^2 ψ}{∂C \, ∂C} \Big|_{I + ∇Π u_h} \Big|\Big|\Big| ,      (80)

where |||·||| represents a norm of the Hessian of ψ. The norm used in our approach corresponds to the Euclidean vector norm. The second approach for the stabilization factor α_E investigated in this work was proposed in [58],
α_{E,2} = \frac{1}{d^2} \, \mathrm{Tr}\Big( \frac{∂^2 ψ}{∂C \, ∂C} \Big|_{I + ∇Π u_h} \Big) ,   (81)

based on the trace of the Hessian of ψ.
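Since the stabilization factors (80) and (81) only require the Hessian of the strain energy with respect to C at the projected state, they can be approximated numerically. The following Python sketch is an illustration only (the chapter's AceGen implementation obtains the Hessian by automatic differentiation); it evaluates α_{E,2} for a Neo-Hookean energy with ϑ(J) = J − 1 by central finite differences on the components of C.

import numpy as np

def psi_of_C(C, mu, lam):
    # Strain energy as a function of the right Cauchy-Green tensor C (cf. (72)),
    # with penalty function theta(J) = J - 1 and J = sqrt(det C).
    J = np.sqrt(np.linalg.det(C))
    return 0.5 * mu * (np.trace(C) - 3.0) - mu * np.log(J) + 0.5 * lam * (J - 1.0) ** 2

def alpha_E2(F, mu, lam, d, eps=1e-6):
    # Eq. (81): alpha_{E,2} = Tr(d^2 psi / dC dC) / d^2 at C = F^T F, approximated by
    # central differences of psi in each component direction C_ij.
    C0 = F.T @ F
    trace_hess = 0.0
    for i in range(3):
        for j in range(3):
            dC = np.zeros((3, 3)); dC[i, j] = eps
            trace_hess += (psi_of_C(C0 + dC, mu, lam)
                           - 2.0 * psi_of_C(C0, mu, lam)
                           + psi_of_C(C0 - dC, mu, lam)) / eps ** 2
    return trace_hess / d ** 2

# Example: undeformed state with mu = 40 and lambda = 1e5 as in Sect. 4.3
print(alpha_E2(np.eye(3), 40.0, 1.0e5, d=2))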
4.3 Numerical Example

In order to analyse the stability of the virtual element formulation we consider a square domain as proposed in [46]. A quadratic plate with the domain B_0 = (−1, 1) × (−1, 1) is considered. It is clamped at the left, right and bottom edges, while the upper edge is traction free, as depicted in Fig. 9. The material parameters of the strain energy function are set to μ = 40 and λ = 10^5 for the displacement based and mixed virtual element formulations. The stability analysis is performed using regular meshes with 16 × 16 or 32 × 32 elements, followed by Voronoi type meshes with 2024 or 8132 elements. According to (44), a body force f = γ e_2 is applied within the domain. The factor γ is increased progressively. The numerical critical value γ_cr is the load factor at which the smallest eigenvalue of the global stiffness matrix reaches zero; algorithmically it is detected by a change of sign of the smallest eigenvalue (a minimal detection loop is sketched after the following list). The stability range for this boundary value problem is, according to [13], S_h = (−∞, γ_crit). For the given parameters of the boundary value problem a good estimate can be computed: γ̃_crit ≈ 6.6. In this approach we investigate the stability range of the following virtual element formulations:

• DF α_{E,S}: displacement based virtual element formulation with full integration and stabilization function S, see (72);
• DI α_{E,S}: displacement based virtual element formulation with selective reduced integration and stabilization function S, see (75);
• LM α_{E,S}: mixed two-field virtual element formulation with constant pressure field and stabilization function S, see (76).
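The sign-change criterion for γ_cr can be wrapped around any finite element or virtual element code as a simple load-stepping loop. The sketch below is purely illustrative; assemble_tangent(gamma) stands for a user-supplied routine returning the global tangent stiffness K at the converged equilibrium state for load factor γ, and the critical value is refined by bisection once a sign change is found.

import numpy as np

def critical_load(assemble_tangent, gamma_max, steps=100, tol=1e-3):
    """Track the smallest eigenvalue of K over the load factor gamma and
    locate the first sign change by bisection (numerical estimate of gamma_cr)."""
    def smallest_eig(gamma):
        K = assemble_tangent(gamma)
        return np.linalg.eigvalsh(K).min()

    gammas = np.linspace(0.0, gamma_max, steps + 1)
    for g_prev, g_next in zip(gammas[:-1], gammas[1:]):
        if smallest_eig(g_prev) > 0.0 >= smallest_eig(g_next):
            lo, hi = g_prev, g_next
            while hi - lo > tol:              # bisection on the detected sign change
                mid = 0.5 * (lo + hi)
                if smallest_eig(mid) > 0.0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
    return None                               # no instability detected up to gamma_max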
Fig. 9 Boundary value problem and the first eigenform for regular and Voronoi type meshes
Table 3 Results for different penalty functions ϑ(J) for regular meshes

Element       ϑ(J)        γ_cr (16 × 16)   γ_cr (32 × 32)
DF α_{E,1}    J − 1       8.46             5.62
              log(J)      8.46             5.50
              1 − 1/J     8.46             5.50
DI α_{E,1}    J − 1       6.65             6.80
              log(J)      6.75             6.80
              1 − 1/J     6.75             6.79
DF α_{E,2}    J − 1       7.48             6.49
              log(J)      7.48             6.46
              1 − 1/J     7.48             6.49
DI α_{E,2}    J − 1       6.66             6.62
              log(J)      6.74             6.62
              1 − 1/J     6.74             6.62
LM α_{E,2}    J − 1       6.74             6.62
              log(J)      4.98             2.45
              1 − 1/J     1.28             1.10
The results of the stability analysis for regular meshes are summarized in Table 3. In the case of the displacement based formulation in combination with full integration we observe that the critical value γ_cr is overestimated when the coarse mesh is used. Moreover, a sudden instability occurs at the value γ ≈ 5.5 or γ ≈ 6.5 for the fine mesh, as visualized in Fig. 10. In contrast to the fully integrated element formulations, a correct estimation of the bifurcation point γ_cr is achieved when the selective reduced integration technique is used. The critical value is correctly estimated for all penalty functions and meshes; the associated eigenmode is shown in Fig. 11a.
Fig. 10 Smallest Eigenvalue of global stiffness matrix K over load multiplier γ
Fig. 11 a Physical eigenmode at γ = 6.62 using the DI α_{E,2} element. b Eigenmode of the last converged solution at γ = 1.1 using the LM α_{E,2} element and ϑ(J) = 1 − 1/J
As extensively described in [47] and also mentioned in Sect. 3, the mixed functional (76) is only consistent with the penalty function ϑ(J) = J − 1. For this choice of the penalty function the critical load is estimated correctly by the mixed virtual element formulation, while the other penalty functions lead to the onset of instability at different values of the load multiplier γ. We remark that no stable mixed formulation was achieved when the stabilization factor α_{E,1} was used.

Table 4 Results for different penalty functions ϑ(J) for Voronoi type meshes

Element       ϑ(J)        γ_cr (2024)   γ_cr (8132)
DF α_{E,1}    J − 1       11.08         5.24
              log(J)      11.08         4.88
              1 − 1/J     10.87         5.00
DI α_{E,1}    J − 1       7.99          6.77
              log(J)      8.03          6.88
              1 − 1/J     8.03          6.77
DF α_{E,2}    J − 1       7.97          5.18
              log(J)      8.05          5.06
              1 − 1/J     7.97          5.06
DI α_{E,2}    J − 1       7.04          6.62
              log(J)      7.14          6.62
              1 − 1/J     7.10          6.68
LM α_{E,2}    J − 1       6.58          6.58
              log(J)      6.12          6.32
              1 − 1/J     4.63          3.44
The results for the Voronoi type meshes, shown in Table 4, differ slightly from the results for regular meshes. In general, a stiffer behaviour of the elements is observed, which can be explained by the mix of different element sizes in the mesh. The displacement based, fully integrated elements clearly show locking and fail to predict the correct stability range. The reduced integration of the volumetric part of the strain energy function removes the locking behaviour and significantly improves the reliability of the virtual element formulation. The accuracy of the elements with selective reduced integration is comparable to the mixed formulation, while the deficiency of the mixed formulation is avoided. The sensitivity of the mixed approach with respect to the choice of ϑ(J) can, however, be circumvented by using the formulation developed in Sect. 3.1 for the virtual element formulation.
5 Conclusion and Outlook

The ability of second order virtual elements to correctly estimate the stability range for large deformation problems considering nearly incompressible material has been investigated. Due to the necessity of avoiding rank deficiency, different approaches for the stabilization of the method have been taken into account, based either on the norm or on the trace of the Hessian of the strain energy function. It was shown that the choice of the stabilization is of crucial importance for the performance and the reliability of the method for the considered problem. While the classical two-field mixed formulation fails to correctly predict the stability range if arbitrary penalty functions are used, the displacement based method might show volumetric locking in the purely incompressible regime. In order to avoid both deficiencies we proposed an alternative formulation. Therefore we used selective underintegration of the strain energy function, where the volumetric part was integrated at the center of the polygon. Regular and Voronoi type meshes were used for this analysis, showing the robustness and, in general, the ability of the virtual element method to deal with star-shaped polygonal meshes. Several extensions of the virtual element method are of interest, for instance the development of more advanced stabilization methods or the extension of higher-order virtual element formulations to three dimensions.

Acknowledgements The authors gratefully acknowledge the support by the Deutsche Forschungsgemeinschaft in the Priority Program 1748 “Reliable Simulation Techniques in Solid Mechanics, Development of Non-standard Discretization Methods, Mechanical and Mathematical Analysis” for the project “Novel finite elements for anisotropic media at finite strain” (Project number: 255432295, IDs: WR 19/50-1, SCHR 570/23-1).
References 1. I. Babuška, M. Suri, Locking effects in the finite element approximation of elasticity problems. Numerische Mathematik 62(1), 439–463 (1992) 2. E. Hellinger, Die Allgemeinen Ansätze der Mechanik der Kontinua, in Encyklopädie der mathematischen Wissenschaften mit Einschluss ihrer Anwendungen, vol. 4, ed. by F. Klein, C. Müller (Vieweg+Teubner Verlag, Wiesbaden, 1907). https://doi.org/10.1007/9783-663-16028-1_9 3. G. Prange, Das Extremum der Formänderungsarbeit (Habilitationsschrift, Technische Hochschule Hannover, 1916) 4. E. Reissner, On a variational theorem in elasticity. J. Math. Phys. 29, 90–95 (1950) 5. H.C. Hu, On some variational principles in the theory of elasticity and the theory of plasticity. Sci. Sinica 4, 33–54 (1955) 6. K. Washizu, On the variational principles of elasticity and plasticity. Technical report, Aeroelastic and Structures Research Laboratory, Massachusetts Institute of Technology, Cambridge (1955) 7. I. Babuška, The finite element method with Lagrangian multipliers. Numerische Mathematik 20(3), 179–192 (1973) 8. F. Brezzi, On the existence, uniqueness and approximation of saddle-point problems arising from Lagrangian multipliers. Revue française d’automatique, informatique, recherche opérationnelle. Analyse numérique 8(2), 129–151 (1974) 9. D. Chapelle, K. Bathe, The inf-sup test. Comput. Struct. 47, 537–545 (1993) 10. K.J. Bathe, The inf-sup condition and its evaluation for mixed finite element methods. Comput. Struct. 79, 243–252 (2001) 11. P. Wriggers, S. Reese, A note on enhanced strain methods for large deformations. Comput. Methods Appl. Mech. Eng. 135, 201–209 (1996) 12. F. Auricchio, L. Beirão da Veiga, C. Lovadina, A. Reali, A stability study of some mixed finite elements for large deformation elasticity problems. Comput. Methods Appl. Mech. Eng. 194, 1075–1092 (2005) 13. F. Auricchio, L. Beirao da Veiga, C. Lovadina, A. Reali, The importance of the exact satisfaction of the incompressibility constraint in nonlinear elasticity: mixed FEMs versus NURBS-based approximations. Comput. Methods Appl. Mech. Eng. 199, 314–323 (2010) 14. N. Viebahn, J. Schröder, P. Wriggers, An extension of assumed stress finite elements to a general hyperelastic framework, in Advanced Modeling and Simulation in Engineering Sciences (2019) 15. N. Viebahn, J. Schröder, P. Wriggers, A concept for the extension of the assumed stress finite element method to hyperelasticity, in Novel Finite Element Technologies for Solids and Structures (2019). https://doi.org/10.1007/978-3-030-33520-5_4 16. L. Beirão da Veiga, F. Brezzi, A. Cangiani, G. Manzini, L. Marini, A. Russo, Basic principles of virtual element methods. Math. Models Methods Appl. Sci. 23(01), 199–214 (2013) 17. B. Ahmad, A. Alsaedi, F. Brezzi, L. Marini, A. Russo, Equivalent projectors for virtual element methods. Comput. Math. Appl. 66, 376–391 (2013) 18. L. Beirão da Veiga, F. Brezzi, L. Marini, Virtual elements for linear elasticity problems. SIAM, J. Numer. Anal. 51, 794–812 (2013) 19. L. Beirão da Veiga, F. Brezzi, L.D. Marini, A. Russo, The hitchhiker’s guide to the virtual element method. Math. Models Methods Appl. Sci. 24(8), 1541–1573 (2014) 20. F. Brezzi, L.D. Marini, Virtual element methods for plate bending problems. Comput. Methods Appl. Mech. Engrg. 253, 455–462 (2013) 21. A.L. Gain, Polytope-based topology optimization using a mimetic-inspired method. Dissertation, University of Illinois at Urbana-Champaign (2013) 22. A.L. Gain, C. Talischi, G.H. 
Paulino, On the virtual element method for three-dimensional linear elasticity problems on arbitrary polyhedral meshes. Comput. Methods Appl. Mech. Eng. 282, 132–160 (2014) 23. G. Paulino, A.L. Gain, Bridging art and engineering using Escher-based virtual elements. Struct. Multidiscip. Optim. 51, 867–883 (2015)
24. H. Chi, L. Beirão da Veiga, G. Paulino, Some basic formulations of the virtual element method (VEM) for finite deformations. Comput. Methods Appl. Mech. Eng. 318, 148–192 (2017) 25. A. Hussein, F. Aldakheel, B. Hudobivnik, P. Wriggers, P.-A. Guidault, O. Allix, A computational framework for brittle crack propagation based on an efficient virtual element method. Finite Elem. Anal. Design 159, 15–32 (2019) 26. E. Artioli, L. Beirão da Veiga, C. Lovadina, E. Sacco, Arbitrary order 2d virtual elements for polygonal meshes: part i, elastic problem. Comput. Mech. 60, 355–377 (2017) 27. L. Beirão da Veiga, C. Lovadina, D. Mora, A virtual element method for elastic and inelastic problems on polytope meshes. Comput. Methods Appl. Mech. Eng. 295, 327–346 (2015) 28. P. Wriggers, B. Reddy, W. Rust, B. Hudobivnik, Efficient virtual element formulations for compressible and incompressible finite deformations. Comput. Mech. 60, 253–268 (2017) 29. P. Wriggers, B. Hudobivnik, A low order virtual element formulation for finite elasto-plastic deformations. Comput. Methods Appl. Mech. Eng. 327, 459–477 (2017) 30. B. Hudobivnik, F. Aldakheel, P. Wriggers, Low order 3d virtual element formulation for finite elasto-plastic deformations. Comput. Mech. 63, 253–269 (2018) 31. M. De Bellis, P. Wriggers, B. Hudobivnik, Serendipity virtual element formulation for nonlinear elasticity. Comput. Struct. 223, 106094 (2019) 32. F. Aldakheel, B. Hudobivnik, A. Hussein, P. Wriggers, Phase-field modeling of brittle fracture using an efficient virtual element scheme. Comput. Methods Appl. Mech. Eng. 341, 443–466 (2018) 33. A. Hussein, B. Hudobivnik, P. Wriggers, A combined adaptive phase field and discrete cutting method for the prediction of crack paths. Comput. Methods Appl. Mech. Eng. submitted (2020) 34. J.O. Hallquist, NIKE2d: An implicit, finite-deformation, finite element code for analysing the static and dynamic response of two-dimensional solids, University of California, Lawrence Livermore National Laboratory, UCRL–52678 (1979) 35. P. Wriggers, J. Simo, A note on tangent stiffnesses for fully nonlinear contact problems. Commun. Appl. Numer. Methods 1, 199–203 (1985) 36. J.C. Simo, P. Wriggers, R.L. Taylor, A perturbed Lagrangian formulation for the finite element solution of contact problems. Comput. Methods Appl. Mech. Eng. 50, 163–180 (1985) 37. F. Ben Belgacem, P. Hild, P. Laborde, Approximation of the unilateral contact problem by the mortar finite element method. C. R. Acad. Sci., Paris, Ser I 324, 123–127 (1997) 38. B.I. Wohlmuth, A mortar finite element method using dual spaces for the Lagrange multiplier. SIAM, J. Numer. Anal. 38, 989–1012 (2000) 39. P. Wriggers, W. Rust, B. Reddy, A virtual element method for contact. Comput. Mech. 58, 1039–1050 (2016) 40. P. Wriggers, W. Rust, A virtual element method for frictional contact including large deformations. Eng. Comput. 36, 2133–2161 (2019) 41. F. Aldakheel, B. Hudobivnik, E. Artioli, L. Beirão da Veiga, P. Wriggers, Curvilinear virtual elements for contact mechanics. Comput. Methods Appl. Mech. Eng. submitted (2020) 42. M. Marino, B. Hudobivnik, P. Wriggers, Computational homogenization of polycrystalline materials with the virtual element method. Comput. Methods Appl. Mech. Eng. 355, 349–372 (2019) 43. L. Beirão da Veiga, C. Lovadina, A. Russo, Stability analysis for the virtual element method. Math. Models Methods Appl. Sci. 27(13), 2557–2594 (2017) 44. D. Boffi, F. Brezzi, M. 
Fortin, Mixed Finite Element Methods and Applications (Springer, Heidelberg, 2013) 45. F. Auricchio, L. Beirão da Veiga, C. Lovadina, A. Reali, An analysis of some mixed-enhanced finite element for plane linear elasticity. Comput. Methods Appl. Mech. Eng. 194, 2947–2968 (2005) 46. F. Auricchio, L. Beirão da Veiga, C. Lovadina, A. Reali, R. Taylor, P. Wriggers, Approximation of incompressible large deformation elastic problems: some unresolved issues. Comput. Mech. 52(5), 1153–1167 (2013) 47. J. Schröder, N. Viebahn, P. Wriggers, F. Auricchio, K. Steeger, On the stability analysis of hyperelastic boundary value problems using three- and two-field mixed finite element formulations. Comput. Mech. 60(3), 479–492 (2017)
48. T.H.H. Pian, P. Tong, Relations between incompatible displacement model and hybrid stress model. Int. J. Numer. Methods Eng. 22, 173–181 (1986) 49. T.H.H. Pian, K. Sumihara, A rational approach for assumed stress finite elements. Int. J. Numer. Methods Eng. 20, 1685–1695 (1984) 50. A. Krischok, C. Linder, On the enhancement of low-order mixed finite element methods for the large deformation analysis of diffusion in solids. Int. J. Numer. Methods Eng. 106, 278–297 (2016) 51. N. Viebahn, J. Schröder, P. Wriggers, Application of assumed stress finite elements in hyperelasticity, in Report of the Workshop 1843 at the “Mathematisches Forschungsinstitut Oberwolfach” entitled “Computational Engineering”, organized by O. Allix, A. Buffa C. Carstensen, J. Schröder (2018) 52. J.C. Simo, R.L. Taylor, K.S. Pister, Variational and projection methods for the volume constraint in finite deformation elasto-plasticity. Comput. Methods Appl. Mech. Eng. 51, 177–208 (1985) 53. U. Andelfinger, E. Ramm, EAS-elements for two-dimensional, three-dimensional, plate and shell structures and their equivalence to HR-elements. Int. J. Numer. Methods Eng. 36, 1311– 1337 (1993) 54. D. Pantuso, K.J. Bathe, A four-node quadrilateral mixed-interpolated element for solids and fluids. Math. Models Methods Appl. Sci. (M3AS) 5(8), 1113–1128 (1995) 55. J.P. Boehler, A simple derivation of respresentations for non-polynomial constitutive equations in some cases of anisotropy. Zeitschrift für angewandte Mathematik und Mechanik 59, 157–167 (1979) 56. L. Beirão da Veiga, F. Dassi, A. Russo, High-order virtual element method on polyhedral meshes. Comput. Math. Appl. 74(5), 1110–1122 (2017) 57. L. Beirão da Veiga, C. Lovadina, D. Mora, A Virtual Element Method for elastic and inelastic problems on polytope meshes. Comput. Methods Appl. Mech. Eng. 295, 327–346 (2015) 58. H. Chi, L. Beirão da Veiga, G. Paulino, Some basic formulations of the virtual element method (VEM) for finite deformations. Comput. Methods Appl. Mech. Eng. 318, 148–192 (2017). ISSN 0045-7825
Robust and Efficient Finite Element Discretizations for Higher-Order Gradient Formulations

Johannes Riesselmann, Jonas Wilhelm Ketteler, Mira Schedensack, and Daniel Balzani
Abstract In this contribution a novel mixed finite element discretization scheme for gradient enhanced formulations, namely gradient elasticity and gradient damage, is introduced. The approach is based on a split of the Lagrange multiplier, which enforces compatibility between the mixed variables. Through this, a decoupled set of variational equations is obtained. Stability both of the continuous formulation and of various discrete subspaces is shown in the small strain gradient elasticity framework. Numerical tests for gradient elasticity and gradient damage at finite strains show convergence and improved computational efficiency. Advantages obtained through the gradient enrichment, such as the ability to avoid singularities and to yield mesh-independent results, are shown.
1 Introduction

Local classical formulations for solid material modeling are functionals of the first-order gradients of the displacements. Although proven to be sufficient for many applications in the elasticity framework, in some scenarios the stresses predicted by these local models may become infinite at some points, while in reality finite stresses are observed at these points. This is due to the fact that the finite resolution of the microstructure and its influence on the global elastic behavior is not taken into account. For most elasticity applications, nevertheless, corresponding local finite element simulations yield sufficiently accurate results. However, when it comes to the modeling of specialized materials (such as metamaterials), in which the material heterogeneities approach the scale of the body of the modeled specimen, local
formulations fail to produce accurate results. In these cases, nonlocal effects can be taken into account through second-order gradient enrichment of the formulation. Furthermore, when modeling materials undergoing strain- or stress-softening, accurate numerical solutions using local models may become even more problematic. Namely, local damage models may lose ellipticity at the onset of localized zones of high strain intensity. Corresponding finite element solution schemes feature a strong mesh-dependency and may fail to converge. Gradient damage formulations, on the other hand, have been shown to produce mesh-independent results in these cases due to the regularizing effect of the gradient enrichment. The main challenge in the research of corresponding gradient enhanced finite element formulations, however, is the treatment of the C^1 continuity condition of the displacement solution field. A possible approach in this context is the use of isogeometric formulations, see for instance [1] for an IGA model for finite strain gradient elasticity, or [2] and [3], where the gradient elasticity concept is extended to capture flexoelectric effects. Yet, in IGA a remaining research challenge is the discretization of complex structures. As a remedy, mixed C^0 continuous finite elements may be used. For small strains the works of [4] and [5] investigate mixed finite element formulations for gradient elasticity consisting of three solution fields. A corresponding mathematical investigation with respect to inf-sup stability and an extension to finite strains is given in [6]. A remaining challenge of these formulations is the relatively high computational cost due to a large number of solution variables. For gradient damage a common approach is to use a formulation which is enhanced by the gradient of a mixed variable corresponding to the internal damage variable, see e.g. [7] for small strains and [8] for a finite strain approach. While these approaches yield mesh-independent results, extensive comparative studies with respect to computing efficiency and numerical robustness have yet to be carried out. This contribution presents a mixed approach, in which the mixed variables appear in a decoupled set of variational equations (cf. [9], where the approach is introduced for the finite strain gradient elasticity problem, see also the first ideas in [10] and [11]; in this context, see also [12], where a corresponding split is introduced for the biharmonic equation). In Sect. 2 (following [9]) the proposed formulation and corresponding discretizations are investigated for the gradient elasticity framework with respect to mathematical stability. Moreover, numerical tests of Sect. 2.4 analyze numerical efficiency in the finite strain framework by comparing with the formulation proposed in [6]. In Sect. 3 an analogous approach is introduced for the finite strain damage framework and again comparative numerical studies are shown in Sect. 3.3.
1.1 Definitions

In this section some necessary definitions are given. For the dimension d ∈ {2, 3} we define the L^2 inner product and the L^2-norm of square integrable n-order tensor-valued functions T ∈ L^2(B, (IR^d)^n) by

(δT, T)_{L^2(B)} := \int_B δT · T \, dV   and   ||T||^2_{L^2(B)} := (T, T)_{L^2(B)} .     (1)

For d = 2, let ∧ : (IR^2)^n × IR^2 → (IR^2)^{n−1} be the cross product of an n-order tensor T and a vector S; the coefficients of T ∧ S are

(T ∧ S)_{(i_1,...,i_{n−1})} := T_{i_1,...,i_{n−1},1} S_2 − T_{i_1,...,i_{n−1},2} S_1 .

For d = 3, i.e. ∧ : (IR^3)^n × IR^3 → (IR^3)^n, the coefficients of T ∧ S are

(T ∧ S)_{(i_1,...,i_{n−1},1)} := T_{i_1,...,i_{n−1},2} S_3 − T_{i_1,...,i_{n−1},3} S_2 ,
(T ∧ S)_{(i_1,...,i_{n−1},2)} := −T_{i_1,...,i_{n−1},1} S_3 + T_{i_1,...,i_{n−1},3} S_1 ,
(T ∧ S)_{(i_1,...,i_{n−1},3)} := T_{i_1,...,i_{n−1},1} S_2 − T_{i_1,...,i_{n−1},2} S_1 .

With e_{i_j} being Cartesian base vectors we define the row-wise applied rotation operator

Rot T^{(n)} := −∂_j T_{i_1 ... i_n} \, e_{i_1} ⊗ ... ⊗ e_{i_n} ∧ e_j .                    (2)
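To make the tensor cross product underlying the rotation operator (2) concrete, the following Python sketch (an illustration of the stated component formulas, not code from the chapter) implements T ∧ S for a second-order tensor T in two and three dimensions; the Rot operator then applies this operation row-wise to the partial derivatives of a tensor field.

import numpy as np

def wedge_2d(T, S):
    # (T ^ S)_i = T_{i1} S_2 - T_{i2} S_1 for a second-order tensor T and a vector S
    return T[:, 0] * S[1] - T[:, 1] * S[0]

def wedge_3d(T, S):
    # Row-wise cross product, matching the three component formulas given above
    return np.cross(T, S)        # broadcasts the cross product over the rows of T

# Example: constant second-order tensor and a direction vector
T = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
S = np.array([0.0, 0.0, 1.0])
print(wedge_3d(T, S))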
We introduce the following Sobolev spaces

L^{2(n)}              := { T ∈ L^2(B, (IR^d)^n) } ,                                        (3)
L_0^{2(n)}            := { T ∈ L^{2(n)} : \int_B T \, dV = 0 } ,                           (4)
H_{(•)}^{1(n)}        := { T ∈ L^{2(n)} : ∇T ∈ L^{2(n+1)} and T|_{(•)} = 0 } ,             (5)
H_{(•)}^{2(n)}        := { T ∈ H_{(•)}^{1(n)} : ∇T ∈ H^{1(n+1)} } ,                        (6)
H_{(•)}(Div)^{(n)}    := { T ∈ L^{2(n)} : Div T ∈ L^{2(n−1)} and T N|_{(•)} = 0 } ,        (7)
H_{(•)}(Div^0)^{(n)}  := { T ∈ H_{(•)}(Div)^{(n)} : Div T = 0 in B } ,                     (8)
H_{(•)}(Rot)^{(n)}    := { T ∈ L^{2(n)} : Rot T ∈ L^{2(n)} and T ∧ N|_{(•)} = 0 } ,        (9)
H_{(•)}(Rot^0)^{(n)}  := { T ∈ H_{(•)}(Rot)^{(n)} : Rot T = 0 in B } ,                     (10)

where (•) denotes the boundary domain on which the function traces corresponding to the above definitions vanish. Thus, (•) ⊆ ∂B is a subdomain of the boundary, with the special case (•) = ∂B abbreviated by the subscript 0 and omission of the subscript in the case (•) = ∅. For example, the space of a function u which is sought in H^1 and prescribed on a boundary subdomain D ⊆ ∂B is denoted by u ∈ H^1_D. Furthermore, let H^{−1(n)} denote the dual space of H_0^{1(n)}, i.e., the space of all linear and continuous mappings from H_0^{1(n)} to IR, and define the space

H^{−1}(Div)^{(n)} := { T ∈ H^{−1(n)} : Div T ∈ H^{−1(n)} } ,                               (11)
which is the dual space of H_0(Rot)^{(n)}. Define

G := { δg ∈ H_0^1 } ,                                                                      (12)
V := { δH ∈ H_0^{1(2)} } ,                                                                 (13)
Q := L_0^2  if d = 2 ,    Q := H_0(Div^0)^{(2)}  if d = 3 .                                (14)

Let

P_k^{(n)} := P_k(T ; (IR^d)^{(n)})                                                         (15)

denote the space of k-order piecewise polynomial functions.
2 Formulation for Finite Strain Gradient Elasticity

In this section the finite strain gradient elasticity finite element formulation of [9] is described. An introduction to the finite strain gradient elasticity theory is given in Sect. 2.1. In Sect. 2.2 the continuous mixed variational framework is introduced and investigated with respect to stability. This is followed by a discussion of corresponding suitable discretization spaces in Sect. 2.3. Finally, in Sect. 2.4 the proposed discretizations are tested in numerical examples where large strains are considered.
2.1 Gradient Elasticity Fundamentals

The small strain gradient elasticity approach of [13] and [14] is extended to the finite strain framework in the following way. Let ψ := ψ(F(∇u)) be a (local) hyperelastic strain energy function, u ∈ H^2_D be the displacement and F = ∇ϕ = ∇u + 1 with F ∈ H^1 ∩ H_D(Rot^0)^{(2)} be the gradient of the deformation map ϕ : B → S from the body B in the reference configuration to the deformed configuration S. Then, the gradient enrichment is included in the energy by ψ := ψ(F(∇u), ∇F(∇u)) (cf. [6], see also the finite strain approach of [1]). In addition to the Dirichlet and Neumann boundaries ∂B = D ∪ N, a boundary decomposition corresponding to the higher order quantities is denoted by ∂B = H ∪ M. We define the space

U := { u ∈ H^2_D : ∇u N|_H = 0 }                                                           (16)

and seek the displacement solution u ∈ U as the minimizer of the elastic potential
Π[u] = Π^{int}[u] + Π^{ext}[u] ⇒ \min_u ,  with                                            (17)

Π^{int}[u] = \int_B ψ(F(∇u), ∇F(∇u)) \, dV ,                                               (18)

Π^{ext}[u] = −(u, f)_{L^2(B)} − (u, t)_{L^2(N)} .                                          (19)
Note that, for the sake of simplicity, higher order surface tractions defined on M are assumed to be zero. Therefore, Π^{ext}[u] only consists of surface tractions t and volume loads f as in classical elasticity formulations. The variational problem corresponding to (17) seeks u ∈ U so that

(∇δu, P)_{L^2(B)} + (∇∇δu, G)_{L^2(B)} = −Π^{ext}[δu]                                      (20)
for all δu ∈ U. Here, P := ∂ F ψ denotes the first Piola Kirchhoff stress tensor and G := ∂∇ F ψ denotes a stress tensor corresponding to the higher order quantities.
2.2 Rot-Free Finite Element Formulation In the following, B is assumed to be a bounded, simply connected domain with homogeneous boundary. The extension to mixed boundary conditions is discussed in Sect. 2.3.3. We introduce the following C 0 continuous mixed reformulation of the elastic potential (17) Π [u, H, ] = Π int [H] + Π ext [u] + Π cst with Π cst = − , ∇u − H .
(21) (22)
Here H ∈ H01(2) denotes the mixed displacement gradient variable and ∈ H −1 (Div)(2) with H −1 (Div)(2) defined in (11) denotes the Lagrange multiplier, through which compatibility is enforced, while •, • denotes the dual pairing between H −1 (Div)(2) and H0 (Rot)(2) . Note that in fact ∇u − H ∈ H0 (Rot)(2) and thus, the Lagrange multiplier has to be sought in H −1 (Div)(2) . Moreover, [6, Proposition 1] proves that the formulation would not be stable, if the Lagrange multiplier were sought in L 2(2) ⊆ H −1 (Div)(2) instead of H −1 (Div)(2) . Through Helmholtz decomposition of the Lagrange multiplier = −∇ g + Rot , the constraint can be rewritten as Π cst = (∇ g, ∇u − H) L 2 (B) + (, Rot H) L 2 (B) ,
(23)
with g ∈ H01 and ∈ Q as defined in Sect. 1.1. Due to the L 2 -orthogonality of rot and gradient tensor functions the displacement vanishes from the second term of (23) and the mixed variables are decoupled in the corresponding variational problem:
Rot free variational equation
Preprocessing: For a given f ∈ L 2 , find g ∈ G such that (∇δu, ∇ g) L 2 (B) = −Π ext [δu]
(24)
for all δu ∈ G. Main Step (2D): For g ∈ G, find (H, ) ∈ V × Q such that (δ H, P) L 2 (B) + (∇δ H, G) L 2 (B) + (Rot δ H, ) L 2 (B) = (δ H, ∇ g) L 2 (B) , (δ, Rot H) L 2 (B) = 0, for all (δ H, δ) ∈ V × Q.
(25)
Postprocessing: For H ∈ V, find u ∈ G such that (∇δ g, ∇u) L 2 (B) = (∇δ g, H) L 2 (B) ,
(26)
for all δ g ∈ G.
Note that this set of variational problems is quite similar to the one already derived in [15] where Kirchhoff’s equations of thin plate bending were discretized by C0 finite elements. The displacement Eqs. (24) and (26) are simple Laplace type equations and can be viewed as pre- and postprocessing step respectively. Equation (25)2 represents a constraint term enforcing H to be rotation free. In order to maintain well posedness in the limit case of vanishing nonlocal contribution G → 0, an augmentation term is added to (25)1 : Stabilization Term (added to (25)1 ): (27) β(Rot δ H, Rot H) L 2 (B) . Here, β ∈ R+ denotes a stabilization parameter. A corresponding numerical study, in which the influence of the stabilization term is numerically investigated, can be found in [9], where also a discussion of suitable choices for the numerical value of β is given. In order to enforce vanishing divergence of the Lagrange multiplier ∈ H0 (Div0 ) in the three dimensional case d = 3 we introduce a second Lagrange multiplier μ ∈ M and the Sobolev spaces ˜ := { ∈ H0 (Div)(2) } (for d = 3) and Q M :=
L 20 .
(28) (29)
Thus, (25) is modified as follows: ˜ × M such that 3D-Formulation: Find (H, , μ) ∈ V × Q (δ H, P) L 2 (B) + (∇δ H, G) L 2 (B) + (Rot δ H, ) L 2 (B) = (δ H, ∇ g) L 2 (B) , (δ, Rot H) L 2 (B) + (Div δ, μ) L 2 (B) = 0, (δμ, Div ) L 2 (B) = 0, ˜ for all (δ H, δ, δμ) ∈ V × Q × M.
(30)
The crucial inf-sup condition of the bilinear form (Rot δ H, ) L 2 (B) follows for d = 2 from the equivalence of the operators Rot and Div up to a transformation of coordinates and the Ladyzhenskaya Lemma [16]. For d = 3 the inf-sup condition of the bilinear form (Rot δ H, ) L 2 (B) basically relies on the fact that Rot : H01 (B; IR3×3 ) → H0 (Div0 , B; IR3×3 ) is a bounded and surjective map [17, Proposition A.1]. If there exists a unique solution to the original problem (20), then this proves the unique existence of a solution of (24)–(26) and the solutions to the problems coincide. Moreover, since the bilinear form (δμ, Div δ) L 2 (B) satisfies an inf-sup condition (compare again with the Ladyzhenskaya lemma [16]), the same holds true for the modified problem (24), (30), (26). A crucial point of (20), (25) and (30) is that these equations are singularly perturbed. To further analyze this, we consider for the mathematical analysis the case that the displacements are small and the internal energy is additively decomposed into a term quadratic in ∇u and a term quadratic in ∇ 2 u. Furthermore, we assume homogeneous boundary conditions ∂B = D = H , in which both the Dirichlet boundary D and the higher-order Dirichlet boundary H take up the whole boundary. We define the bilinear form a(∇δu, ∇u) := c1 (∇ 2 δu, ∇ 2 u) L 2 (B) + (sym(∇δu), C : sym(∇u)) L 2 (B) +β(Rot δ H, Rot H) L 2 (B) ,
where C denotes a constant fourth-order elasticity tensor, c1 > 0 a constitutive parameter associated with the higher-order stress response β > 0 the stabilization parameter introduced in (27). Then problem (20) simplifies to finding u ∈ U for a given f ∈ L 2 so that (31) a(∇δu, ∇u) = −Π ext [δu] for all δu ∈ U (note that Rot(∇δu) = 0) and analogously problem (25) (with added stabilization) becomes: Find (H, ) ∈ V × Q s.t. a(H, δ H) + (Rot δ H, ) L 2 (B) = (∇ g, δ H) L 2 (B) (Rot H, δ) L 2 (B) = 0
(32)
for all (δ H, δ) ∈ V × Q. Furthermore, we define an energy norm on V that depends on the parameter c1 describing the nonlocal contribution by 1/2 . |||δ H||| := c1 ∇δ H2L 2 (B) + β Rot δ H2L 2 (B) + C1/2 sym δ H2L 2 (B) Note that, if Rot H = 0, then H is a gradient of a function in H01 . Therefore, Korn’s inequality implies that for min{c1 , β} > 0 this is in fact a norm. It can be shown that a is continuous and coercive on V with respect to this norm and that (Rot δ H, δ) L 2 (B) satisfies an inf-sup condition and is continuous with continuity constant min{β −1 , c1−1 }. This together with Brezzi’s splitting lemma [18] proves the following proposition, see [9] for details. Proposition 1 Let max{β, c1 } > C > 0. There exists a unique solution (H, ) ∈ V × Q to problem (32). Moreover, if u ∈ U is a solution of (31), then there exists ∈ Q and g ∈ G such that (u, ∇u, , g) ∈ G × V × Q × G solves (24), (32) and (26). On the other hand, if (u, H, , g) ∈ G × V × Q × G solves (24), (32) and (26), then u solves (31). For d = 3, the formulation with two Lagrange multipliers reads in the situation considered here: Find (H, , μ) ∈ V × Q˜ × M such that a(H, δ H) + (Rot δ H, ) L 2 (B) = (∇ g, δ H) L 2 (B) , (δ, Rot H) L 2 (B) + (Div δ, μ) L 2 (B) = 0, (δμ, Div ) L 2 (B) = 0,
(33)
for all (δ H, δ, δμ) ∈ V × Q˜ × M. As mentioned above, the inf-sup condition for the bilinear form (δμ, Div δ) L 2 (B) together with the arguments mentioned above leads to the following result, which is proved in detail in [9]. Proposition 2 Let max{β, c1 } > C > 0. There exists a unique solution to (33). Furthermore, if (H, , μ) ∈ V × Q˜ × M is a solution to (33), then (H, ) ∈ V × Q is a solution to (32). On the other hand, if (H, ) ∈ V × Q is a solution to (32), then there exists μ ∈ M such that (H, , μ) ∈ V × Q˜ × M is a solution to (33).
2.3 Finite Element Discretization For a partition of B into a set T = e Te of simplices, suitable finite element spaces corresponding to the previously introduced formulations (25) and (30) are discussed. Note that for the discretization of the Laplace equations (24) and (26) a standard node-based triangular and tetrahedral interpolation with Lagrange shape functions is used.
2.3.1
2D Discretization
As in the continuous 2D situation, a coordinate transformation proves the discrete inf-sup condition of (Rot δ H h , δh ) L 2 (B) for stable finite elements for the Stokes equations. This and similar arguments as in the continuous situation prove the following proposition, see [9] for details. Note that the coercivity of a(H h , δ H h ) is guaranteed by the added stabilization term. This means that in the two dimensional case d = 2, any finite element pairing, which is stable for the Stokes equations is a suitable choice for (25). Proposition 3 Let c1 > 0, max{β, c1 } > c > 0 and max{β, c1 } < C < ∞. If Vh × Q h is a stable finite element pair for the Stokes equations, then Vh × Qh is a stable pairing for the discretization of (32) for d = 2. Therefore, there exists a unique solution (H h , h ) ∈ Vh × Qh of the discretization with |||H − H h ||| + − h L 2 (B)
inf
(δ H h ,δh )∈Vh ×Qh
|||H − δ H h ||| + − δh L 2 (B) ,
where (H, ) ∈ V × Q is the solution to problem (32) and the constant hidden in only depends on c and C, but not on c1 . An overview of the used discrete spaces is given in Table 1, where B3 (T , IR2 )(2) is the space of cubic bubble functions. For the implementation of the bubble function, the cubic Lagrangian shape function corresponding to the interior node of the P3 triangular element is used. Remark 1 The rot operator (2) applied to H yields a vector in the two dimensional case. Therefore the Lagrange multiplier is reduced to a vector and holds two nodal degrees of freedom.
2.3.2
3D Discretization
Since in the three dimensional case the Lagrange multiplier is required to be divergence free, the finite element discretization scheme used in this case incorporates two Lagrange multipliers corresponding to (28) and (29). Therefore, we define the following discrete subspaces:
Table 1 Finite element spaces for the discretization of (25) Element name Discrete space Vh P1B H -P1 (Mini)
V∩
P2 H -P1 (Taylor-Hood)
V∩
P3 H -P2 (Taylor-Hood)
V∩
(2) P1 P2(2) (2) P3
⊕ B3
(T , IR2 )(2)
Discrete space Qh Q ∩ P1∗ Q ∩ P1∗ Q ∩ P2∗
Vh := V ∩ P1(2) ⊕ B3 (F , IR3 )(2) , ˜ ∩ RT0 (T ; IR3 )(2) and Qh := Q
(34)
Mh := L 20 ∩ P0 ,
(36)
(35)
where B3 (F , IR3 )(2) denotes the space of cubic face bubble functions, for which Lagrangian shape functions corresponding to the midface nodes of P3 -tetrahedral elements can be used. Moreover, RT0 (T ; IR3 )(2) is the lowest-order Raviart Thomas finite element space, whose elements are continuous in normal direction across the interelement boundaries. The discretization corresponding to the discrete spaces (34) through (36) is named P1FB H -RT0 -P0μ in the following. Since Div Qh = Mh (compare [19]), the bilinear form (Div δh , δμh ) satisfies an inf-sup condition. The discrete inf-sup condition of (Rot H h , h ) follows similarly as for the Bernardi-Raugel finite elements for the Stokes equations [19], see also [12]. This together with similar arguments as above prove the following Proposition, see [9]. Proposition 4 Let c1 > 0, max{β, c1 } > c > 0 and max{β, c1 } < C < ∞. The discretization of (33) with the above choice of spaces has a unique solution (H h , h , μh ) ∈ Vh × Qh × Mh satisfying |||H − H h ||| + − h L 2 (B)
inf
|||H − δ H h ||| + − δh L 2 (B) ,
(δ H h ,δh )∈Vh ×(Qh ∩Q)
where (H, ) ∈ V × Q is the solution to (32) and only depends on c and C, but not on c1 . In addition to the previously discussed discretization, the alternative P1FB H RT0 -P0μ discretization of [9] is used in the numerical examples. Here, the secˆ and ond term of (23) is replaced by the duality pairing c , H with c ∈ Q 0 (2) 0 (2) 0 (2) −1 −1 −1 ˆ Q := H (Div ) , where H (Div ) := {T ∈ H (Div ) : Div T = 0}. The ˆ is discretized with Q ˆ ∩Q ˆ h and space for H ∈ V remains unchanged. The space Q 3 (2) h (2) h (2) ˆ ˆ Q := RT0 (T ; IR ) ∩ H (Div) . Since the space Q is in H (Div) and thus, ˆ the duality pairing is replaced by the has more smoothness than requested by Q, 2 ˆ is incorporated with L product. Moreover, the divergence-free condition from Q an additional Lagrange multiplier. Hence the corresponding discrete problem seeks ˆh ×M ˆ h so that (H h , ch , μh ) ∈ Vh × Q (δ H h , P h ) L 2 (B) + (∇δ H h , G h ) L 2 (B) + (δ H h , ch ) L 2 (B) = (δ H h , ∇ g h ) L 2 (B) , (δch , H h ) L 2 (B) + (Div δch , μh ) L 2 (B) = 0, (δμ
h
, Div ch ) L 2 (B)
= 0,
(37)
Robust and Efficient Finite Element Discretizations for Higher … Table 2 Finite element spaces for the discretization of (25) Element name Discrete space Vh Space of ch /h (2) ˜ ∩ RT0 (T ; IR3 )(2) P1FB H -RT0 -P0μ V∩ P ⊕ Q 1
B3 (F , IR3 )(2)
P1FB H -RT0 -P0μ
(2)
ˆ ∩ RT0 (T ; IR3 )(2) ∩ Q H (Div)(2)
V ∩ P1 ⊕ B3 (F , IR3 )(2)
79
Space of μh L 20 ∩ P0 L 2 ∩ P0
ˆh ×M ˆ h . Herein, M ˆ h := L 2 ∩ P0 is the discrete for all (δ H h , δch , δμh ) ∈ Vh × Q subspace for the second Lagrange multiplier. An overview of the used discrete spaces is given in Table 2. A discretization in H (Div)(2) allows for an easy implementation of the divergencefree condition and the chosen discretization yields good results in the numerical experiments. However, other discretizations are possible, but are beyond the scope of this paper.
2.3.3
Mixed Boundary Conditions
In order to take into account mixed boundary conditions with surface tractions on the Neumann boundary, the following modifications are made: G := {δ g ∈ H1D },
(38)
V := {δ H ∈ H : δ H ∧ N| D = 0 and δ H N| H = 0}, δ ∈ L 20 if d = 2, Q := δ ∈ H H (Div0 ) if d = 3,
(39)
1(2)
(40)
˜ := {δ ∈ H (Div)(2) } (for d = 3), Q H
(41)
ˆh
(42)
Q := {δ ∈ H N (Div)
(2)
3 (2)
∩ RT0 (T ; IR ) }.
Note that the above relations hold under the assumption that the Dirichlet boundaries D and H are connected subdomains on ∂B.
2.4 Numerical Examples In the following, the proposed discretizations are numerically tested for gradient elasticity problems in 2D and 3D. The AceGen/AceFEM software package has been used for the finite element generation, which is based on automatic differentiation (cf. [20]). For the evaluation of the element tangent and residual matrices, numerical Gauss integration over the corresponding reference coordinate space is used. An
Table 3 Overview of the analyzed finite elements and corresponding Gauss integration schemes Element name # Gauss Points(5) Pre-/Postprocessing (1)
(2)
P1B H -P1 (Mini,2D) P2 H -P1 (Taylor-Hood, 2D) P3 H -P2 (Taylor-Hood, 2D)
3(35) 7(42) 12(39)
P2 g,u (2D) P3 g,u (2D) P4 g,u (2D)
P1FB H -RT0 -P0μ (3D) P1FB H -RT0 -P0μ (3D) P2u -P1B H -P1 (non-decoupled, cf. [6])
4(18) 4(18) 3(35)
P2 g,u (3D) P2 g,u (3D) –
(1)
(3)
(4)
(1) B/FB:
enrichment by volume/- face bubble function (•)th order Lagrange nodal interpolation (for (•) ≥ 1) (3) Interpolation with lowest order Raviart Thomas functions (4) Piecewise constant interpolation (5) Paranthesized numbers are AceGen IDs for associated Gauss integration schemes (2) P(•):
overview of the used interpolation functions discussed in the previous section as well as the corresponding Gauss integration codes can be found in Table 3. For the solution of the nonlinear global system a standard incremental Newton-Raphson load step solution procedure is used. The constitutive model used is the Neo-Hookean ansatz

ψ^{loc} = \frac{μ}{2} (I_1 − 3) + g(J)   with   g(J) = \frac{λ}{4}(J^2 − 1) − \frac{λ}{2} \ln J − μ \ln J    (43), (44)

for the local part of the elastic free energy, with I_1 = tr C and J = det F and the Lamé constants λ and μ. Unless stated otherwise throughout this section, the elastic parameters E = 500 MPa and ν = 0.3 are considered. For the gradient enhancement term the quadratic ansatz

ψ^{nloc} = \frac{c_1}{2} ∇F · ∇F                                                           (45)
2.4.1
Unit Square
In order to investigate the convergence behavior of the proposed discretizations the unit square domain B with dimensions 1 × 1 mm2 with homogeneous boundary is considered. For this simple geometry, a smooth reference solution is prescribed in terms of the large displacements u = (0, −X 2 Y 2 (X − 1)2 (Y − 1)2 )T 103 mm
(46)
Robust and Efficient Finite Element Discretizations for Higher …
81
to compute f analytically via the strong form of (20) − Div( P − Div G) = f .
(47)
With the strain energy functions (43) and (45) we obtain the expressions λ 2 (J − 1)F −T + μ(F − F −T ) 2 Div G = c1 F P=
(48) (49)
for the first Piola Kirchhoff stress tensor and the divergence of the higher order stress tensor respectively. By imposing f to the right hand side of (24) and solving the discretized equations (24) through (26) the error measure ||∇u − H h || L 2 (B) is analyzed. Here, H h is the finite element solution of the discretized equation (25). Convergence of this error is shown in Fig. 1a for a uniform mesh refinement (a depiction of the coarsest cross-pattern mesh can be found on the lower left hand side of the corresponding figure). In addition to the proposed elements the comparative P2u -P1B H -P1 element of the non decoupled approach of [6] based on [5] and [4] (cf. Table 3) is analyzed. For all investigated elements the observed H 1 convergence rates are approximately h k+1 , where h is the element size and k is the polynomial order of H h . In Fig. 1b the finite element displacement solution u hy at the point A = (0.5, 0.5) mm relative to the reference solution u y (A) = −0.1953 mm is plotted over the computing time.1 In order to take into account variations of the calculating capacity of the computer, the average time of multiple simulations is taken for each refinement step. Figure 1c depicts a visualization of the computing times corresponding to the refinement step at which the convergence criterion | u| ≤ 0.001 mm is fulfilled (marked with a bullet in Fig. 1b). A computational advantage of the proposed approach compared to the mixed, non decoupled P2u -P1B H -P1 element can be observed. An evaluation of the computing time of the P3 H -P2 discretization, in which the computing time of the P4u,g pre and postprocessing step is included, is marked with an asterisk in Fig. 1c. Note, that since the ||∇u − H h || L 2 (B) -convergence plots of the P3 H -P2 and the P3 H -P2∗ computation are identical, the additional plot of the latter is not shown in Fig. 1a.
2.4.2
3D Cook’s Problem
In this section the proposed 3D discretizations are numerically tested on the (modified2 ) 3D Cook’s problem (cf. Fig. 2a). Here, the boundary is decomposed with ∂B = D ∪ N = M , where at position X = 0 the Dirichlet boundary condi1
The evaluated computing time is the time required for the solution of the discretized main problem (25). 2 Here, in order to introduce an additional asymmetry, the side face of the geometry and the X Z plane is slightly skewed (cf. Fig. 2a).
Fig. 1 Unit square convergence study. a Convergence of the error ||∇u − H^h||_{L^2(B)}. b Relative displacement at point A versus computing time. c Computing times at the refinement steps marked by bullets in b, which correspond to comparable accuracies
tions are given by u| D = 0 and H ∧ N| D = 0 and no prescribed higher-order boundary conditions. The boundary conditions for the Lagrange multipliers are given with N| D = 0 and c N| N = 0 respectively (cf. Sect. 2.3.3). The load t = (0, 0, 50 MPa) is applied to the surface at positions X = L = 48 mm. In Fig. 2b– d contourplots of the element average Cauchy stress σ e (cf. [9]) are shown in order to visualize the ability of the proposed approach to avoid stress localization. In Fig. 2b the contourplot of a comparative local (P2u ) displacement element is depicted. A decreased stress localization around the singularity point B = (0, 0, 44) mm of the proposed P1FB H -RT0 -P0μ element (Fig. 2c, d) compared to the local formulation (Fig. 2b) can be observed. In Fig. 2e the displacement response of the proposed formulations at the point A = (48, 20, 60) mm is evaluated for various nonlocal parameters l relative to the length of the geometry L and again compared to the local classical elasticity element P2u . The present mesh refinement stage (5120 elements)
Fig. 2 3D Cook’s problem: a geometry description. Non-smoothed σ e contour plots of the b local displacement element P2u and of the P1FB H -RT0 -P0μ element for c l = 0.0005L and for d l = 0.05L. Size effect plot e shows the displacement response at point A for varying length ratios l/L. In f the computing time corresponding to the computation marked with a bullet in plot e is visualized for both proposed elements
corresponds to the depicted contourplot of Fig. 2b–d. For l/L = 0 mm in Fig. 2e it can be seen that the solutions of the proposed formulations coincide with the converged solution u z (A) = 21.41 mm of the local P2u displacement formulation. The varying displacement response for increasing ratio l/L can be related to the modeling of size effects (cf. [22]). The computing time depicted in Fig. 2f corresponds to the simulation marked with a bullet in Fig. 2e and unveils a comparable computational effort for both elements.
3 Gradient Enhanced Damage at Finite Strains In this section the approach of the previous section is extended to the gradient damage framework. In Sect. 3.1 a continuum mechanical framework for gradient damage is described. In the following Sect. 3.2, a corresponding finite element treatment is discussed. Finally, numerical tests of the proposed schemes are performed in Sect. 3.3.
3.1 Continuum Damage Mechanics with Gradient Enhancement In order to take into account stress- and strain-softening behavior due to material deterioration we consider the gradient-enhanced strain energy function (cf [8]) ψ = (1 − d(α))ψ 0 +
cd ∇α 2 2
(50)
with constant cd > 0, the damage function d ∈ [0, 1), the internal damage variable α ∈ IR+ and ψ 0 being the hyperelastic strain energy function corresponding to the fictively, undamaged state. For discontinuous damage evolution (cf. [23]) damage is assumed to only evolve under first loading paths, so that for s ∈ [0, t] the internal variable is given by αs = maxs∈[0,t] ψs0 . The damage function is expressed in terms of the internal variable α with f d := 1 − d = exp(−ηd α),
(51)
where ηd can be denoted as damage saturation parameter (cf. [8]). In the following, we restrict ourselves to monotonically increasing first loading paths only and thus, αs = ψs0 in s ∈ [0, t] for each point. In this case, the gradient enhancement in (50) can be written as c2d ∇αs2 = c2d (∇ψs0 )2 . The corresponding displacement solution u ∈ H 2 is sought through minimization of the potential Π [u] = Π int [u] + Π ext [u] ⇒ min with Π int [u] = u
B
ψ dV
(52)
and Π ext given by (19). The corresponding variational problem seeks u ∈ H 2 such that (53) (∇δu, P + cd ∇∂ F ψ 0 ∇ψ 0 ) L 2 (B) + (∇∇δu, G) L 2 (B) = −Π ext [δu]
is solved for all δu ∈ H 2 . Here, the corresponding stress measures P and G are defined as follows: P = f d ∂ F ψ 0 and G = cd ∂ F ψ 0 ⊗ ∇ψ 0 with ∇ψ 0 = ∂ F ψ 0 : ∇ F
(54)
3.2 Finite Elements for Gradient Damage As discussed in the previous section, the variational problem seeking the displacement solution for the considered gradient-enhanced damage model takes a structurally similar form as (20). Additionally to the first-order displacement gradients, we have ∇ F. Therefore, here the displacement solution variables are also approximated with the rot free discretization scheme discussed in Sect. 2.2. Since the rot-free constraint is linear, the inf-sup condition from Sect. 2.2 proves that for cd > 0, there exists a solution to the rot free formulation if there exists a solution u ∈ H 2 to the original problem. Moreover, in this case the solutions coincide. For the following 2D numerical analysis, the elements P1B H -P1 and P2 H -P1 are used (cf. Table 4). In order to account for discontinuous damage the following simple update of the internal history variable αk is considered: Update of the Internal Variable 0 Given ψk+1 from Newton-Raphson FE solution at current time step/ load step k+1 For each Gauss point: 1. Initialize αk+1 = αk 0 2. If φk+1 = ψk+1 − αk+1 ≥ 0 0 , f d (αk+1 ) Update αk+1 = ψk+1 Else stop Update tangent matrix and residual
Table 4 Overview of the analyzed finite elements (gradient damage)

Element name                     # Gauss points    AceGen ID      Pre-/Postprocessing
P1B_H-P1 (Mini, 2D)              3                 P2_g,u (2D)    35
P2_H-P1 (Taylor-Hood, 2D)        7                 P3_g,u (2D)    42
P2_u-P1_h-pen^(1) (cf. [8])      3                 –              35

The same naming convention as in Table 3 is used. ^(1) Enforcement of the constraint condition with a penalty term.
Hence, the main step (25) is extended by the above update procedure of the internal variable, where ψ^0_{k+1}(H^h_{k+1}) is computed in terms of the solution H^h_{k+1} corresponding to each load step k + 1.
3.3 Numerical Examples In this section the previously discussed finite element scheme for gradient damage is numerically tested. As constitutive framework, the Neo-Hooke energy function ψ^loc from Sect. 2.4 is used for the virtually undamaged energy function ψ^0 := ψ^loc. Throughout this section, the values E = 500 MPa and ν = 0.3 are used for the corresponding elasticity parameters. Analogously to Sect. 2.4, the 2D finite element implementation is done under the plane strain assumption. Furthermore, the incremental Newton-Raphson load step solution procedure of Sect. 2.4 is extended by the update procedure of the internal variable described in Sect. 3.2. An overview of the used elements is given in Table 4.
3.3.1 Unit Square
In order to investigate the convergence behavior of the proposed elements the unit square domain of Sect. 2.4.1 with reference displacement solution (46) is considered. For the analytical computation of the right-hand side f, the strong form of the variational equation (53) is used. Here, P and G are given by (54). Similar to Sect. 2.4.1 the L^2 norm of the displacement gradient error ||∇u − H^h||_{L^2(B)} is shown in Fig. 3a. The comparative P2_u-P1_h-pen element denotes a discretization similar to the mixed approach of [8]. Here, the free energy function takes the form

ψ = (1 − d(α)) ψ^0(∇u^h) + (c_d/2) |∇h^h|^2 + (p/2) (h^h − α)^2   (55)

with u^h ∈ H^1_0 ∩ P2 and h^h ∈ H^1 ∩ P1 : ∫_B h^h dV = 0,   (56)

where h^h is a mixed variable corresponding to the internal variable α and compatibility is enforced with a penalty parameter p. Figure 3c shows the convergence behavior of the damage function f_d at the point A = (0.5, 0.25) mm (cf. Fig. 3b) with respect to the overall computing time of the iterative solution procedure at each refinement step. Note that again (cf. Sect. 2.4.1) the computing time is averaged over several calculations in order to take into account variations of the calculation capacity of the computer, since the size of the problem is relatively small. The colored bullets in Fig. 3c mark the previously discussed computing time corresponding to the refinement stage at which the convergence criterion |Δu| ≤ 0.001 mm is reached. The corresponding values of computing time are also illustrated in Fig. 3d. It can be observed that in this example the proposed elements P1B_H-P1 and P2_H-P1 require less computing time compared to the P2_u-P1_h-pen approach of (56).
Fig. 3 Unit square convergence study: a Convergence of the ||∇u − H^h||_{L^2(B)} error with respect to element size h. b Contourplot of f_d of the P2_H-P1 element. c Convergence study of f_d at point A = (0.5, 0.25) mm. d Computing time in the converged refinement state (marked with bullet in c). Convergence criterion is Δf_d ≤ 0.001. Material parameters: η_d = 0.003 mm^2/N, α_0 = 0, c_d = 10^−5 mm^4/N, f_d,conv(A) = 0.6238, A = (0.5, 0.25) mm, p = 4 mm^2/N
This is despite the fact that the total number of degrees of freedom of the P2_u-P1_h-pen element is smaller compared to the proposed elements. Moreover, for the P2_u-P1_h-pen element divergence of the incremental load step solution procedure was observed for penalty parameters exceeding p ≥ 5 mm^2/N. The contourplot example in Fig. 3b visualizes the distribution of the damage function values f_d of the P2_H-P1 element in the deformed state.
3.3.2 Plate with Hole
The aim of this numerical test is to investigate the ability of the proposed formulation to produce mesh-independent solutions. For this, the plate with hole benchmark
Fig. 4 Plate with hole benchmark problem: a Material data used. b Geometry of the problem ([length] = mm). c Contourplot of the damage function f_d. d Stress-deformation curve at point B
problem (cf. Fig. 4b) is considered. Due to existing symmetries, only the upper right quarter of the plate is analyzed. Zero displacements u_x = 0 at the left edge (X = 0) and u_y = 0 at the lower edge (Y = 0) are prescribed. At the upper edge (Y = 100 mm) a constant surface load p_0 = 50 MPa and zero displacement u_x = 0 in horizontal direction are considered. The boundary conditions on H are chosen correspondingly, e.g., H_y · τ = 0 at the lower edge (Y = 0), where τ denotes the tangent for that edge. Note that the respective parts of the boundary, where Dirichlet boundary conditions on u_x and u_y are prescribed, are connected. This implies that for every rot-free H satisfying the boundary conditions prescribed on H, there exists in fact a v ∈ U with the boundary conditions prescribed for u and H = ∇v. This can be seen by a Helmholtz decomposition of H, and it implies that the original problem is in fact equivalent to the rot-free formulation under the conditions mentioned in Sect. 3.2.
For the numerical analysis, the P1B_H-P1 discretization scheme (cf. Table 4) is considered. In order to consider a damage initialization value α_0, to be exceeded by α before material softening occurs, the internal variable is modified according to [24], yielding the modified damage function f_d for monotonic loading as follows (cf. [8]):

f_d := 1 − d = exp(−η_d ⟨α − α_0⟩_+)   (57)

Here, the Macaulay brackets ⟨•⟩_+ := (• + |•|)/2 filter out positive values. A contourplot of the values of the damage function f_d is visualized in Fig. 4c. The damage function approaches low values corresponding to high material degradation in the surrounding of point B = (50, 0) mm. A corresponding stress-strain response is shown in Fig. 4d, where the values corresponding to the second diagonal entry of the tensors P and F, i.e. P_22 and F_22, are plotted^3 for varying element sizes h_el. Coinciding stress-strain curves for varying mesh sizes are observed, showing the mesh-independence resulting from the gradient-enhanced formulation.
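For completeness, the thresholded damage function (57) is a one-liner; the following NumPy sketch (illustrative only, parameter values are placeholders) shows that no softening occurs below the initialization value α_0:

```python
import numpy as np

def macaulay(x):
    """Macaulay bracket <x>_+ = (x + |x|)/2, i.e. the positive part of x."""
    return 0.5 * (x + np.abs(x))

def damage_function(alpha, alpha0, eta_d):
    """Modified integrity f_d = 1 - d = exp(-eta_d * <alpha - alpha0>_+), cf. (57)."""
    return np.exp(-eta_d * macaulay(alpha - alpha0))

# example: for alpha <= alpha0 the material stays undamaged (f_d = 1)
print(damage_function(np.array([0.5, 1.0, 2.0]), alpha0=1.0, eta_d=0.003))
```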
4 Conclusion A mixed finite element formulation based on the Helmholtz decomposition of the Lagrange multiplier of the constraint term, which enforced compatibility of the mixed variables, was developed for gradient elasticity. The obtained set of decoupled variational equations was shown to be stable in the continuous case as well as for several introduced discrete finite element subspaces. Moreover, a reduced computational cost compared to the non-decoupled mixed gradient elasticity approach for finite deformations of [6] (based on [4] and [5] for small deformations) was shown in numerical tests of Sect. 2. The proposed approach was extended to the gradient damage framework, and an appropriate convergence behavior, increased computational efficiency with respect to a comparative mixed approach, and the ability to yield mesh-independent results were shown in corresponding numerical tests of Sect. 3. Acknowledgements The authors highly appreciate financial funding from the German Research Foundation (Deutsche Forschungsgemeinschaft DFG) within the Priority Program 1748 “Reliable simulation techniques in solid mechanics. Development of non-standard discretization methods, mechanical and mathematical analysis” under the project “Robust and Efficient Finite Element Discretizations for Higher-Order Gradient Formulations” (Project number 392564687, project IDs BA2823/15-1 and SCHE1885/1-1).
^3 In order to obtain the nodal values at point B the AceFEM postprocessing functionality of least-squares interpolation of the Gauss point values is used.
References
1. S. Rudraraju, A. Van der Ven, K. Garikipati, Three-dimensional isogeometric solutions to general boundary value problems of Toupin’s gradient elasticity theory at finite strains. Comput. Methods Appl. Mech. Eng. 278, 705–728 (2014)
2. C. Liu, J. Wang, G. Xu, M. Kamlah, T.Y. Zhang, An isogeometric approach to flexoelectric effect in ferroelectric materials. Int. J. Solids Struct. (2019)
3. B.H. Nguyen, X. Zhuang, T. Rabczuk, NURBS-based formulation for nonlinear electro-gradient elasticity in semiconductors. Comput. Methods Appl. Mech. Eng. (2018)
4. J.Y. Shu, W.E. King, N.A. Fleck, Finite elements for materials with strain gradient effects. Int. J. Numer. Methods Eng. 44, 373–391 (1999)
5. L. Zybell, U. Mühlich, M. Kuna, Z.L. Zhang, A three-dimensional finite element for gradient elasticity based on a mixed-type formulation. Comput. Mater. Sci. 52, 268–273 (2012)
6. J. Riesselmann, J.W. Ketteler, M. Schedensack, D. Balzani, Three-field mixed finite element formulations for gradient elasticity at finite strains. GAMM-Mitteilungen 43, e202000002 (2020)
7. B.J. Dimitrijevic, K. Hackl, A method for gradient enhancement of continuum damage models. Technol. Mech. 28(1) (2008)
8. T. Waffenschmidt, C. Polindara, A. Menzel, S. Blanco, A gradient-enhanced large-deformation continuum damage model for fibre-reinforced materials. Comput. Methods Appl. Mech. Eng. 268, 801–842 (2013)
9. J. Riesselmann, J.W. Ketteler, M. Schedensack, D. Balzani, Rot-free mixed finite elements for gradient elasticity at finite strains. Int. J. Numer. Methods Eng. 122(6), 1602–1628 (2021)
10. J. Riesselmann, J. Ketteler, M. Schedensack, D. Balzani, C0-continuous finite elements for gradient elasticity at finite strains, in Proceedings of the 8th GACM Colloquium on Computational Mechanics for Young Scientists from Academia and Industry, August 28–30, 2019, Kassel, Germany (2019)
11. J. Riesselmann, J. Ketteler, M. Schedensack, D. Balzani, A new C0-continuous FE-formulation for finite gradient elasticity. Proc. Appl. Math. Mech. 19, e201900341 (2019)
12. D. Gallistl, Stable splitting of polyharmonic operators by generalized Stokes systems. Math. Comput. 86(308), 2555–2577 (2017)
13. R. Mindlin, Micro-structure in linear elasticity. Arch. Ration. Mech. Anal. 16, 51–78 (1964)
14. R. Toupin, Theories of elasticity with couple-stress. Arch. Ration. Mech. Anal. 17, 85–112 (1964)
15. M. Ortiz, G.R. Morris, C0 finite element discretization of Kirchhoff’s equations of thin plate bending. Int. J. Numer. Methods Eng. 26, 1551–1566 (1988)
16. C. Amrouche, V. Girault, Problèmes généralisés de Stokes. Portugal. Math. 49(4), 463–503 (1992)
17. Z. Lou, A. McIntosh, Hardy space of exact forms on R^N. Trans. Am. Math. Soc. 357(4), 1469–1496 (2005)
18. F. Brezzi, On the existence, uniqueness and approximation of saddle-point problems arising from Lagrangian multipliers. Rev. Française Automat. Informat. Recherche Opérationnelle Sér. Rouge 8(R-2), 129–151 (1974)
19. D. Braess, Finite Elemente (Springer, 2007)
20. J. Korelc, P. Wriggers, Automation of Finite Element Methods (Springer, 2016)
21. N. Triantafyllidis, E.C. Aifantis, A gradient approach to localization of deformation. I. Hyperelastic materials. J. Elast. 16, 225–237 (1986)
22. H. Askes, E.C. Aifantis, Gradient elasticity in statics and dynamics: an overview of formulation, length scale identification procedures, finite element implementations and new results. Int. J. Solids Struct. 48, 1962–1990 (2011)
23. C. Miehe, Discontinuous and continuous damage evolution in Ogden-type large-strain elastic materials. Eur. J. Mech. A Solids 14, 697–720 (1995)
24. D. Balzani, J. Schröder, D. Gross, Simulation of discontinuous damage incorporating residual stresses in circumferentially overstretched atherosclerotic arteries. Acta Biomater. 2(6), 609–618 (2006)
Stress Equilibration for Hyperelastic Models F. Bertrand, M. Moldenhauer, and G. Starke
Abstract Stress equilibration is investigated for hyperelastic deformation models in this contribution. From the displacement-pressure approximation computed with a stable finite element pair, an H(div)-conforming approximation to the first Piola-Kirchhoff stress tensor is computed. This is done in the usual way in a vertex-patch-wise manner involving local problems of small dimension. The corresponding reconstructed Cauchy stress is not symmetric, but its skew-symmetric part is controlled by the computed correction. This difference between the reconstructed stress and the stress approximation obtained directly from the Galerkin approximation also serves as an upper bound for the discretization error. These properties are illustrated by computational experiments for an incompressible rigid block loaded on one half of its top boundary.
1 Introduction Stress equilibration is investigated for hyperelastic deformation models in this contribution. In contrast to our recent work [5] we omit the weak symmetry constraint here and show that the direct use of the equilibration approach also leads to satisfactory results. (The funding by the Deutsche Forschungsgemeinschaft (DFG) under grants BE6511/1-1 and STA 402/14-1 within the priority program SPP 1748 is gratefully acknowledged.) The accurate approximation of the stress-tensor is of strong importance in
numerous applications and in particular in the hyperelastic material model this paper is concerned with. The mathematical foundations of hyperelastic material models in solid mechanics are covered, e.g., in the books by Marsden and Hughes [20] and Ciarlet [11]. The numerical treatment of the associated variational problems is investigated in detail by Le Tallec [17]. Specifically for incompressible hyperelasticity, issues connected to the use of displacement-pressure formulations are discussed in [2]. A priori analysis of numerical methods are available under restrictive assumptions, see Carstensen and Dolzmann [10] and, for a least-squares finite element approach, Müller et al. [21]. The stress approximations obtained from the common displacement-based formulations or, in the incompressible regime, mixed displacement-pressure approaches for this model are not H (div)-conforming. This means that discontinuities of the normal components occur on the interface between two elements. In particular, this means that momentum is not conserved exactly and that the normal component of the boundary traces is not well-defined and therefore the approximation of the surface traction forces is problematic. This paper investigates the construction of an H (div)conforming reconstructed stress-tensor by equilibration of the displacement-based direct approximation. The idea of reconstructing the matrix-valued stress and vector-valued flux goes back to the hypercircle theorem by Prager and Synge [24] (see also Sect. III.9 in Braess’ book [7] for a presentation in modern mathematical language). Besides the accurate approximation in an H (div)-conforming space, the stress or flux reconstruction builds the basis of an a posteriori error estimator, which was actually already one of the motivations of Prager and Synge [24]. Over the years, a posteriori error estimators based on flux reconstruction were explored in detail in many contributions [3, 13, 18, 19]. An important algorithmic innovation was given by Braess and Schöberl [8] by the equilibration procedure which is completely local and provides the link to residual error estimation. An important aspect of the use of reconstruction-based error estimation of the above type is that it provides guaranteed upper bounds for the error with accessible constants. Another important aspect is that these a posteriori error estimators are valid for any approximation that is inserted into the procedure. In particular, it does not assume that the underlying finite-dimensional variational problems are solved to high precision. The extension of reconstruction strategies to linear elasticity was the subject of a number of contributions in the last two decades [1, 12, 15, 16, 22, 23], stress reconstruction in the context of Stokes flow was also studied recently [14]. More recently, a posteriori error estimation based on the reconstruction of weakly symmetric stresses was investigated in our earlier work [5] and [4]. The recent paper by Botti and Riedlbeck [6] should also be mentioned here. It treats nonlinear elasticity restricted to a geometrically linear situation. In that case, the (Piola-Kirchhoff) stress is still symmetric which allows the use of symmetric stress elements as it is done in the approach by Botti and Riedlbeck [6]. The stress equilibration for the geometrically and materially nonlinear situation associated with hyperelastic material models poses some new challenges not present in the linear elastic case. 
Firstly, the stresses computed directly from displacement and, possibly, pressure approximations are no longer piecewise polynomial due to
the nonlinearity of the model. Therefore, in order to get a stress reconstruction in an appropriate H(div)-conforming finite element space, a suitable projection to piecewise polynomial stresses needs to be carried out first. Secondly, the development of an a posteriori error estimator based on the stress equilibration for hyperelastic material models and, in particular, its analysis is also more complicated and requires rather restrictive assumptions. After all, it is well-known that the solution of the variational problem may not be unique (see the examples in Chap. 5 in [11]). The results in [21] allow us to interpret the correction associated with the stress equilibration as an a posteriori error estimator. The outline of this paper is as follows. We start with the variational formulation of elastic deformations governed by hyperelastic material models and the stress equilibration in Sect. 2. Section 3 presents the local equilibration algorithm and its well-posedness. The use of the stress equilibration procedure for error control is discussed in Sect. 4. Finally, Sect. 5 presents computational experiments illustrating the properties of the equilibrated stress reconstructions.
2 Hyperelasticity and Stress Equilibration We consider the deformation of an open, bounded and connected reference domain Ω ⊂ IR^d (d = 2, 3) with Lipschitz-continuous boundary under a hyperelastic material law. The boundary consists of two disjoint non-empty subsets Γ_D and Γ_N. Homogeneous displacement boundary conditions u = 0 are prescribed on Γ_D, while on Γ_N surface traction forces P(u) · n = g are imposed. For an appropriate subspace V with W^{1,∞}_{Γ_D}(Ω)^d ⊂ V ⊂ H^1_{Γ_D}(Ω)^d, the deformation is then modelled by the variational problem of finding u ∈ V such that

(P(u), ∇v)_{L^2(Ω)} = (f, v)_{L^2(Ω)} + ⟨g, v⟩_{0,Γ_N}   (1)
holds for all v ∈ V, where P(u) = ∂_F ψ(B) denotes the first Piola-Kirchhoff stress tensor with respect to the stored energy function ψ : IR^{d×d}_sym → IR. Here, the deformation gradient is given by F(u) = I + ∇u and B(u) = F(u)F(u)^T denotes the left Cauchy-Green strain tensor. In the sequel, the inner product in L^2(Ω) with respect to the reference configuration will be abbreviated simply by ( · , · ). Moreover, f and g stand for volume and surface forces, transformed back to the reference configuration. An example of a stored energy function which we will also use later in the context of a posteriori error estimation is associated with the Neo-Hookean model

ψ_NH(B) = (μ/2) tr B + (λ/4) det(B) − (μ + λ/2) (1/2) ln(det(B)).   (2)

The corresponding Piola-Kirchhoff stress tensor is given by
P(u) = ∂_F ψ_NH(B(u)) = μ F(u) + ((λ/2)(det(B(u)) − 1) − μ) F(u)^{−T}.   (3)
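As a quick plausibility check of the constitutive relations, (3) is the derivative of (2) with respect to F. The following self-contained NumPy sketch verifies this numerically by central finite differences; material parameters and the deformation gradient are illustrative placeholders, not data from this chapter:

```python
import numpy as np

MU, LAM = 1.0, 10.0   # illustrative material parameters

def psi_nh(F):
    """Neo-Hookean stored energy (2), evaluated via B = F F^T."""
    B = F @ F.T
    return 0.5 * MU * np.trace(B) + 0.25 * LAM * np.linalg.det(B) \
        - (MU + 0.5 * LAM) * 0.5 * np.log(np.linalg.det(B))

def piola_kirchhoff(F):
    """Closed-form first Piola-Kirchhoff stress (3)."""
    B = F @ F.T
    return MU * F + (0.5 * LAM * (np.linalg.det(B) - 1.0) - MU) * np.linalg.inv(F).T

# central finite differences of (2) with respect to F reproduce (3)
F = np.array([[1.10, 0.05, 0.00], [0.00, 0.95, 0.02], [0.00, 0.00, 1.00]])
eps, P_fd = 1e-6, np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        dF = np.zeros((3, 3)); dF[i, j] = eps
        P_fd[i, j] = (psi_nh(F + dF) - psi_nh(F - dF)) / (2.0 * eps)
assert np.allclose(P_fd, piola_kirchhoff(F), atol=1e-6)
```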
In order to deal with materials in the incompressible parameter regime (λ ≫ μ), the pressure may be treated separately, e.g. by setting p = λ(det(F(u)) − 1). In the small strain limit, this additional constraint is consistent with the familiar linear pressure constraint p = λ div u. In terms of u and p, the Piola-Kirchhoff stress in the Neo-Hookean model reads

P(u, p) = μ F(u) + (p (1 + p/(2λ)) − μ) F(u)^{−T}   (4)

due to the fact that det(B(u)) − 1 = det(F(u))^2 − 1 = (det(F(u)) − 1)(det(F(u)) + 1) holds. With an appropriate pressure space Q, the variational problem turns into a saddle point problem which consists in finding u ∈ V and p ∈ Q such that

(P(u, p), ∇v) = (f, v) + ⟨g, v⟩_{0,Γ_N}   for all v ∈ V,
(det(F(u)) − 1, q) − (1/λ)(p, q) = 0   for all q ∈ Q   (5)
holds. With respect to a triangulation T_h, let V_h ⊂ V be the subspace of continuous piecewise polynomials of degree k + 1, k ≥ 1, for each component of V_h. This assumes that the underlying domain is polyhedral. Curved boundaries would need to be approximated in an appropriate way and be treated as a variational crime (cf. [9, Sect. 10.2]). In fact, one would have to use isoparametric elements in order to avoid a degeneracy of the convergence order near the boundary. The finite-dimensional variational problem corresponding to (1) consists in finding u_h ∈ V_h such that

(P(u_h), ∇v_h) = (f, v_h) + ⟨g, v_h⟩_{0,Γ_N}   (6)

is satisfied for all v_h ∈ V_h. In the incompressible regime, a discrete pressure space Q_h consisting of continuous piecewise polynomials of degree k may be used to define a corresponding discrete saddle point problem. This consists in finding (u_h, p_h) ∈ V_h × Q_h such that

(P(u_h, p_h), ∇v_h) = (f, v_h) + ⟨g, v_h⟩_{0,Γ_N}   for all v_h ∈ V_h,
(det(F(u_h)) − 1, q_h) − (1/λ)(p_h, q_h) = 0   for all q_h ∈ Q_h   (7)
holds. The direct use of P(uh ) or, in the incompressible regime, P(uh , ph ) as an approximation for the Piola-Kirchhoff stress is, however, not H (div)-conforming with the implication that conservation of momentum is not controlled. This deficiency is analogous to the situation with respect to linear elasticity where stress equilibration is used to this end. Most importantly, P(uh ) · n is not continuous at interfaces between elements of the underlying triangulation implying that traction forces are
not well-defined. This motivates the need to construct an H(div)-conforming stress reconstruction P_h^R similar to the linear elasticity case. The idea of equilibration consists in computing the reconstructed stress P_h^R in the Raviart-Thomas space of degree k, which is H(div)-conforming, from P(u_h) by adding a correction. This is done using the broken Raviart-Thomas space of degree k for each row leading to

Σ_h^Δ = {P_h : Ω → IR^{d×d} with P_h|_T ∈ P_k(T)^{d×d} + P_k(T)^d x^T},

where P_k(T) denotes the space of polynomials of degree k on the triangle (d = 2) or tetrahedron (d = 3) T. In other words, each row of the stress tensor P_h ∈ Σ_h^Δ is element-wise given by a function in the Raviart-Thomas space. Unfortunately, in contrast to the linear elasticity situation, P(u_h) ∈ Σ_h^Δ does not hold, in general, due to the nonlinearity of the stress-strain relation. For the Neo-Hookean model in (3), P(u_h) is not even piecewise polynomial, in general. Therefore, we first project P(u_h) to an element P̂_h(u_h) ∈ Σ_h^Δ. In particular, we set P̂_h(u_h) = Π_h^k P(u_h), where Π_h^k denotes the component-wise and element-wise L^2-orthogonal projection onto P_k(T). In a similar way as in the weakly symmetric equilibration procedure from [5], we perform the construction for the difference ΔP_h := P_h^R − P̂_h(u_h) between the reconstructed and the projected original stress. Recall that the reconstruction of an H(div)-conforming stress P_h^R requires the equilibration condition div ΔP_h = −f − div P̂_h(u_h) in each element and furthermore a jump condition for ΔP_h · n between elements to hold. In order to write this jump condition in a precise way, let S_h denote the set of all sides (edges in 2D and faces in 3D) of the triangulation T_h and S_h^∗ the set of sides not contained in Γ_D,

S_h^∗ := {S ∈ S_h : S ⊄ Γ_D}.

Furthermore, for all sides S ∈ S_h, let n be the normal direction associated with S (depending on its orientation), T_+ and T_− the elements adjacent to S (such that n points into T_+) and the jump of P_h over S defined by

[[P_h · n]]_S = P_h · n|_{T_−} − P_h · n|_{T_+}.   (8)
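The projection P̂_h(u_h) = Π_h^k P(u_h) introduced above is computed element by element from a local mass-matrix solve with numerical quadrature. The following generic Python sketch treats one scalar component on one element; the array names and quadrature interface are assumptions for illustration only:

```python
import numpy as np

def l2_project_on_element(values_at_qp, basis_at_qp, weights):
    """Element-wise L^2 projection of a (generally non-polynomial) function onto P_k(T).

    values_at_qp : (nqp,)       function values at the quadrature points of T
    basis_at_qp  : (nqp, ndof)  values of a P_k(T) basis at the quadrature points
    weights      : (nqp,)       quadrature weights including the Jacobian determinant
    Returns the coefficient vector of the projection in the chosen basis.
    """
    M = basis_at_qp.T @ (weights[:, None] * basis_at_qp)   # local mass matrix
    b = basis_at_qp.T @ (weights * values_at_qp)           # local load vector
    return np.linalg.solve(M, b)
```

Applied component-wise to P(u_h), this yields the piecewise polynomial stress that enters the equilibration conditions below.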
For sides S ⊂ Γ_N located on the Neumann boundary we assume that n points outside of Ω and define the jump by [[P_h · n]]_S = P_h · n|_{T_−}. In order to use the same formulas also for patches adjacent to the Neumann boundary Γ_N we define the auxiliary jump by

[[P_h · n]]_S^∗ = P_h · n|_{T_−} − g,   if S ⊂ Γ_N,
[[P_h · n]]_S^∗ = [[P_h · n]]_S,        if S ⊄ Γ_N.   (9)
With this, the jump condition for the correction reads [[ΔP_h · n]]_S = −[[P̂_h(u_h) · n]]_S^∗ for all sides S ∈ S_h^∗. Rewriting the equilibration and jump conditions in a weak form leads to the following conditions for ΔP_h:

(div ΔP_h, z)_T = −(f + div P̂_h(u_h), z)_T               for all z ∈ P_k(T)^d, T ∈ T_h,
⟨[[ΔP_h · n]]_S, ζ⟩_S = −⟨[[P̂_h(u_h) · n]]_S^∗, ζ⟩_S      for all ζ ∈ P_k(S)^d, S ∈ S_h^∗.   (10)
In [5], an additional symmetry condition for the related Cauchy stress tensor σ (u) = P(u)F(u)T / det(F(u)) is imposed weakly in order to obtain a reconstructed stress with better symmetry properties.
3 Localized Stress Equilibration We localize the problem using a partition of unity in order to be able to efficiently compute the stress reconstruction. The commonly used partition of unity with respect to the set V_h of all vertices of T_h,

1 ≡ Σ_{z∈V_h} φ̃_z on Ω,   (11)

consists of continuous piecewise linear functions φ̃_z. In this case, the support of φ̃_z is restricted to

ω̃_z := ∪ {T ∈ T_h : z is a vertex of T}.   (12)

As in the stress equilibration procedure described in [4] for the linear elasticity case and in [5] for the hyperelastic case, we modify this classical partition of unity in order to exclude patches formed by vertices z ∈ Γ_N, where the local problems may possess too few degrees of freedom to be solvable. To this end, let V_h' = {z ∈ V_h : z ∉ Γ_N} denote the subset of vertices which are not located on a side (edge/face) of Γ_N. The modified partition of unity is defined by

1 ≡ Σ_{z∈V_h'} φ_z on Ω.   (13)

For z ∈ V_h' not connected by an edge to Γ_N the function φ_z is equal to φ̃_z. Otherwise, the function φ_z has to be modified in order to account for unity at the connected vertices on Γ_N. For each z_N ∈ Γ_N one vertex z_I ∉ Γ_N connected by an edge with z_N is chosen and φ̃_{z_I} is extended by the value 1 along the edge from z_I to z_N to obtain the modified function φ_{z_I}. The support of φ_z is denoted by
ω_z := ∪ {T ∈ T_h : φ_z = 1 for at least one vertex of T}.   (14)
For the partition of unity (13) to hold, the triangulation T_h is required to be such that each vertex on Γ_N is connected to an interior edge. Note that this restriction is not unreasonable since it arises in other contexts where one makes use of patch decompositions, e.g. for proving discrete inf-sup conditions, cf. [9, Theorems 12.6.6 and 12.6.7]. This is also of interest in the context of hyperelasticity where generalized inf-sup conditions occur, cf. [17, Sect. 14]. For the localized equilibration algorithm the local subspaces

Σ_{h,z}^Δ = {q_h ∈ Σ_h^Δ : q_h · n = 0 on ∂ω_z \ ∂Ω, q_h ≡ 0 on Ω \ ω_z}   (15)

for all z ∈ V_h' will also be needed. Moreover, the local sets of sides S_{h,z} := {S ∈ S_h : S ⊂ ω_z} will also be needed. The conditions in (10) can be restated for a sum of patch-wise contributions ΔP_{h,z},

ΔP_h = Σ_{z∈V_h'} ΔP_{h,z},   (16)
leading, for each z ∈ V_h', to the following minimization problem:

‖ΔP_{h,z}‖_{ω_z} → min!  among all ΔP_{h,z} ∈ Σ_{h,z}^Δ subject to the constraints
(div ΔP_{h,z}, z)_T = −((f + div P̂_h(u_h)) φ_z, z)_T             for all z ∈ P_k(T)^d, T ⊂ ω_z,
⟨[[ΔP_{h,z} · n]]_S, ζ⟩_S = −⟨[[P̂_h(u_h) · n]]_S^∗ φ_z, ζ⟩_S      for all ζ ∈ P_k(S)^d, S ∈ S_{h,z}.   (17)

At this point, we may introduce the local orthogonal projections P_{h,T}^k : L^2(T) → P_k(T)^d and P_{h,S}^k : L^2(S) → P_k(S)^d which means that the constraints in (17) can be written shortly as

div ΔP_{h,z} = −P_{h,T}^k((f + div P̂_h(u_h)) φ_z),
[[ΔP_{h,z} · n]]_S = −P_{h,S}^k([[P̂_h(u_h) · n]]_S φ_z).
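Together with the quadratic objective in (17), these element- and side-wise constraints determine a small equality-constrained least-squares problem per patch. A generic dense sketch of such a solve via the KKT system is given below; the assembly of the local basis, constraint matrix and right-hand side is omitted, and all matrix names are hypothetical:

```python
import numpy as np

def solve_patch_problem(M, C, g):
    """Minimise 0.5 * x^T M x subject to C x = g via the KKT system.

    M : (n, n) symmetric positive definite Gram matrix of the local broken
        Raviart-Thomas basis (realises the L^2 norm on the patch omega_z)
    C : (m, n) matrix of the element-wise divergence and side-wise jump constraints
    g : (m,)   right-hand side built from f, the projected stress and phi_z
    """
    n, m = M.shape[0], C.shape[0]
    K = np.block([[M, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), g])
    sol = np.linalg.lstsq(K, rhs, rcond=None)[0]   # lstsq copes with redundant constraints
    return sol[:n]                                  # coefficients of the patch correction
```

Using a least-squares solver instead of a direct factorisation is a deliberate (and assumed) choice here, since for interior patches the constraints are not independent, as discussed next.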
For each z ∈ V_h', (17) constitutes a low-dimensional quadratic minimization problem with linear constraints for which standard methods are available for the efficient solution. Note that it is not guaranteed at this point that (17) has a solution at all. We will now ensure that for every right-hand side, a function ΔP_{h,z} ∈ Σ_{h,z}^Δ exists such that the constraints in (17) are satisfied. The left-hand side in (17) defines a linear
operator L_{h,z} : Σ_{h,z}^Δ → (Z_{h,z} × S_{h,z})', where Z_{h,z} = {z ∈ P_k(T)^d : T ⊂ ω_z} and where S_{h,z} = {ζ ∈ P_k(S)^d : S ∈ S_{h,z}} denotes the trace space on the interior sides and (·)' stands for the dual space. The subspace R_{h,z}^⊥ ⊆ Z_{h,z} × S_{h,z} orthogonal to the range of L_{h,z}, i.e., the null space of its adjoint L_{h,z}^∗, is obviously of interest for the
solvability since the linear functionals on the right-hand side in (17) need to vanish on R_{h,z}^⊥. The subspace R_{h,z}^⊥, defined by
R_{h,z}^⊥ = {(z_{h,z}, s_{h,z}) ∈ Z_{h,z} × S_{h,z} : Σ_{T⊂ω_z} (div ΔP_{h,z}, z_{h,z})_T − Σ_{S∈S_{h,z}} ⟨[[ΔP_{h,z} · n]]_S, s_{h,z}⟩_S = 0 for all ΔP_{h,z} ∈ Σ_{h,z}^Δ},   (18)
can be characterized as follows:

R_{h,z}^⊥ = {(ρ, {ρ}_{S∈S_{h,z}}) : ρ ∈ IR^d}   if |∂ω_z ∩ Γ_D| = 0,
R_{h,z}^⊥ = {(0, 0)}                            if |∂ω_z ∩ Γ_D| > 0.   (19)
Basic linear algebra tells us that the right-hand side of the linear system (17) is in the range of the operator L_{h,z} if it is orthogonal to R_{h,z}^⊥, the null space of L_{h,z}^∗. The characterization (19) implies that this is the case for patches ω_z with |∂ω_z ∩ Γ_D| > 0 since R_{h,z}^⊥ only contains zero in that case. In the case of interior patches ω_z in the sense that |∂ω_z ∩ Γ_D| = 0, we may insert the representation of R_{h,z}^⊥ into the right-hand side of (17). This leads to

((f + div P̂_h(u_h)) φ_z, ρ)_{ω_z} − Σ_{S∈S_{h,z}} ⟨[[P̂_h(u_h) · n]]_S φ_z, ρ⟩_S
  = (f, ρ φ_z)_{ω_z} + ⟨g, ρ φ_z⟩_{∂ω_z ∩ Γ_N} − (P̂_h(u_h), ρ ⊗ ∇φ_z)_{ω_z},   (20)

which needs to vanish for all ρ ∈ IR^d. That this is indeed the case follows from (7) and the definition of P̂_h(u_h).
4 Error Estimation In the linear elasticity case, an a posteriori error estimator based on the local size of the correction associated with the stress equilibration can be derived, cf. [4]. The discussion in this section is meant to justify the use of ‖ΔP_h‖_T as an error indicator in the hyperelastic case. Let us start by showing that the skew-symmetric part of the reconstructed Cauchy stress P_h^R F(u_h)^T is controlled locally by ΔP_h. If we denote by as(Q) = (Q − Q^T)/2 the skew-symmetric part of a matrix Q ∈ IR^{d×d}, then, using the fact that the exact Cauchy stress P(u)F(u)^T is symmetric, we obtain

as(P_h^R F(u_h)^T) = as(P_h^R F(u_h)^T − P(u)F(u)^T)
  = as((P_h^R − P(u)) F(u_h)^T + P(u)(F(u_h) − F(u))^T)
  = as(ΔP_h F(u_h)^T + P(u) ∇(u_h − u)^T).   (21)
This implies that

‖as(P_h^R F(u_h)^T)‖_T ≤ ‖ΔP_h F(u_h)^T‖_T + ‖P(u) ∇(u − u_h)^T‖_T   (22)
holds. Using the notation ≲ to indicate that an inequality is satisfied up to a generic constant which is independent of h (and of λ where applicable), (22) leads to

‖as(P_h^R F(u_h)^T)‖_T ≲ ‖ΔP_h‖_T + ‖∇(u − u_h)‖_T,   (23)
i.e., the skew-symmetric part of the reconstructed Cauchy stress is small if the stress correction and the displacement error are both small. This justifies our stress equilibration approach without the additional weak symmetry condition described in [5]. Reference [21, Theorem 4.4] implies that, under the assumption that ‖P(u)‖_{L^∞(Ω)} and ‖∇u‖_{L^∞(Ω)} are sufficiently small,

‖P(u) − P_h^R‖^2_{H(div,Ω)} + ‖u − u_h‖^2_{H^1(Ω)} ≲ ‖div P_h^R + f‖^2 + ‖A(P_h^R F(u_h)^T) − B(u_h)‖^2   (24)

holds, where A : IR^{d×d} → IR^{d×d} denotes the (Cauchy) stress to (Cauchy-Green) strain mapping such that A(P F(u)^T) = B(u) is valid. Since the first term on the right-hand side in (24) vanishes, we are led to

‖u − u_h‖_{H^1(Ω)} ≲ ‖A(P_h^R F(u_h)^T) − B(u_h)‖.   (25)
The right-hand side in (25) may be rewritten as

A(P_h^R F(u_h)^T) − B(u_h) = A(P_h^R F(u_h)^T) − A(P(u_h) F(u_h)^T)
  = ∫_0^1 d/ds A((P(u_h) + s(P_h^R − P(u_h))) F(u_h)^T) ds
  = ∫_0^1 A'((P(u_h) + s(P_h^R − P(u_h))) F(u_h)^T)[(P_h^R − P(u_h)) F(u_h)^T] ds
  = ∫_0^1 A'((P(u_h) + s ΔP_h) F(u_h)^T)[ΔP_h F(u_h)^T] ds.   (26)
For the Neo-Hookean material law (3), the explicit formula [21, (4.3)] leads to

A'((P(u_h) + s ΔP_h) F(u_h)^T)[ΔP_h F(u_h)^T]
  = (1/μ) ( ΔP_h F(u_h)^T − [ λ (Cof(A((P(u_h) + s ΔP_h) F(u_h)^T)) : ΔP_h F(u_h)^T) / (2μ + λ tr(Cof(A((P(u_h) + s ΔP_h) F(u_h)^T)))) ] I ).   (27)

We therefore get that, under the above assumptions on P(u),

‖A'((P(u_h) + s ΔP_h) F(u_h)^T)[ΔP_h F(u_h)^T]‖ ≲ ‖ΔP_h F(u_h)^T‖   (28)
and therefore, using (26),

‖A(P_h^R F(u_h)^T) − B(u_h)‖ ≲ ‖ΔP_h F(u_h)^T‖ ≲ ‖ΔP_h‖   (29)
holds which proves our claim.
5 Computational Experiments This section contains the results of our computational experiments with the stress reconstruction algorithm presented above. We consider the incompressible limit λ = ∞ and set μ = 1 for the hyperelastic system (5) in the rectangular domain depicted in Fig. 1. The boundary conditions are as follows: the displacement is set to zero at the bottom, and its horizontal component is set to zero at the left and right boundary. The vertical surface force is also set to zero on the left and right boundary, while on the top boundary a unit surface load is prescribed which points downward on the left half and vanishes on the right half. The deformed configuration in Fig. 1 clearly shows that this test is well within the nonlinear regime of hyperelasticity.
Fig. 1 Triangulation of reference and current configuration
Fig. 2 Size of the reconstructed stress P_h^R
For the above triangulation, the size of the reconstructed Piola-Kirchhoff stress P_h^R is shown in Fig. 2 and the size of the stress correction ΔP_h is shown in Fig. 3. Figure 3 shows that the correction is concentrated around the midpoint of the top boundary segment where we expect the largest error due to the discontinuity in the prescribed surface forces. This supports the use of the size of ΔP_h as an a posteriori error indicator for adaptive mesh refinement.
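If the local contributions ‖ΔP_h‖_T are used in this way, a standard bulk (Dörfler) marking step selects the elements to refine. The chapter itself only reports the indicator; the marking sketch below is a common, assumed choice and not part of the presented method:

```python
import numpy as np

def doerfler_marking(eta_T, theta=0.5):
    """Return indices of a minimal set M with sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2.

    eta_T : (nT,) array of local indicators, e.g. ||Delta P_h||_T per element
    theta : bulk parameter in (0, 1]
    """
    order = np.argsort(eta_T)[::-1]              # sort elements by decreasing indicator
    cumulative = np.cumsum(eta_T[order] ** 2)
    n_marked = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
    return order[:n_marked]
```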
Fig. 3 Error indicator based on the stress correction ΔP_h
The individual components of the reconstructed stress P_h^R are shown in Fig. 4. The pictures show that the largest stress components are achieved in the left half of the block and that the (2, 2)-component shows a pattern of mostly vertical stripes.
Fig. 4 Components of the reconstructed stress P_h^R
We finally investigate the approximate symmetry of the reconstructed Cauchy stress by plotting the skew-symmetric part of P_h^R F(u_h)^T in Fig. 5. Obviously, the size of the skew-symmetric part is already small relative to the overall stress at this level of refinement.
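The quantity plotted in Fig. 5 is evaluated pointwise at quadrature points; a two-function NumPy sketch (illustrative only) reads:

```python
import numpy as np

def skew_part(P_R, F):
    """Skew-symmetric part as(Q) = (Q - Q^T)/2 of the product Q = P_R F^T at one point."""
    Q = P_R @ F.T
    return 0.5 * (Q - Q.T)

def skew_norm(P_R, F):
    """Frobenius norm of the symmetry defect, the scalar field shown in Fig. 5."""
    return np.linalg.norm(skew_part(P_R, F))
```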
Fig. 5 Skew-symmetric part of the reconstructed Cauchy stress P_h^R F(u_h)^T
References 1. M. Ainsworth, A. Allendes, G.R. Barrenechea, R. Rankin, Computable error bounds for nonconforming Fortin-Soulie finite element approximation of the Stokes problem. IMA J. Numer. Anal. 32, 417–447 (2012) 2. F. Auricchio, L. Beirão da Veiga, C. Lovadina, A. Reali, R. Taylor, P. Wriggers, Approximation of incompressible large deformation elastic problems: some unresolved issues. Comput. Mech. 52, 1153–1167 (2013) 3. M. Ainsworth, J.T. Oden, A unified approach to a posteriori error estimation using element residual methods. Numer. Math. 65, 23–50 (1993) 4. F. Bertrand, B. Kober, M. Moldenhauer, G. Starke, Weakly symmetric stress equilibration and a posteriori error estimation for linear elasticity. Numer. Methods Partial Differ. Equ. 37, 2783–2802 (2021) 5. F. Bertrand, M. Moldenhauer, G. Starke, Weakly symmetric stress equilibration for hyperelastic material models. GAMM-Mitteilungen 43, e202000007 (2020) 6. M. Botti, R. Riedlbeck, Equilibrated stress tensor reconstruction and a posteriori error estimation for nonlinear elasticity. Comput. Methods Appl. Math. 20, 39–59 (2020) 7. D. Braess, Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics, 3rd edn. (Cambridge University Press, Cambridge, 2007) 8. D. Braess, J. Schöberl, Equilibrated residual error estimator for edge elements. Math. Comput. 77, 651–672 (2008)
9. S.C. Brenner, L.R. Scott, The Mathematical Theory of Finite Element Methods, 3rd edn. (Springer, New York, 2008) 10. C. Carstensen, G. Dolzmann, An a priori error estimate for finite element discretizations in nonlinear elasticity for polyconvex materials under small loads. Numer. Math. 97, 67–80 (2004) 11. P.G. Ciarlet, Mathematical Elasticity Volume I: Three–Dimensional Elasticity (North-Holland, Amsterdam, 1988) 12. P. Dörsek, J.M. Melenk, Symmetry-free, p-robust equilibrated error indication for the hpversion of the FEM in nearly incompressible linear elasticity. Comput. Methods Appl. Math. 13, 291–304 (2013) 13. A. Ern, M. Vohralík, Polynomial-degree-robust a posteriori error estimates in a unified setting for conforming, nonconforming, discontinuous Galerkin, and mixed discretizations. SIAM J. Numer. Anal. 53, 1058–1081 (2015) 14. A. Hannukainen, R. Stenberg, M. Vohralík, A unified framework for a posteriori error estimation for the Stokes equation. Numer. Math. 122, 725–769 (2012) 15. K.-Y. Kim, Guaranteed a posteriori error estimator for mixed finite element methods of linear elasticity with weak stress symmetry. SIAM J. Numer. Anal. 49, 2364–2385 (2011) 16. K.-Y. Kim, A posteriori error estimator for linear elasticity based on nonsymmetric stress tensor approximation. J. KSIAM 16, 1–13 (2011) 17. P. LeTallec, Numerical Methods for Nonlinear Three-Dimensional Elasticity (1994); Handb. Numer. Anal. III, P.G. Ciarlet and J. L. Lions eds. (North-Holland, Amsterdam), pp. 465–662 18. P. Ladevèze, D. Leguillon, Error estimate procedure in the finite element method and applications. SIAM J. Numer. Anal. 20, 485–509 (1983) 19. R. Luce, B. Wohlmuth, A local a posteriori error estimator based on equilibrated fluxes. SIAM J. Numer. Anal. 42, 1394–1414 (2004) 20. J.E. Marsden, T.J.R. Hughes, Mathematical Foundations of Elasticity (Prentice Hall, Englewood Cliffs, 1983) 21. B. Müller, G. Starke, A. Schwarz, J. Schröder, A first-order system least squares method for hyperelasticity. SIAM J. Sci. Comput. 36, B795–B816 (2014) 22. S. Nicaise, K. Witowski, B. Wohlmuth, An a posteriori error estimator for the Lamé equation based on equilibrated fluxes. IMA J. Numer. Anal. 28, 331–353 (2008) 23. N. Parés, J. Bonet, A. Huerta, J. Peraire, The computation of bounds for linear-functional outputs of weak solutions to the two-dimensional elasticity equations. Comput. Methods Appl. Mech. Eng. 195, 406–429 (2006) 24. W. Prager, J.L. Synge, Approximations in elasticity based on the concept of function space. Quart. Appl. Math. 5, 241–269 (1947)
Adaptive Least-Squares, Discontinuous Petrov-Galerkin, and Hybrid High-Order Methods Philipp Bringmann, Carsten Carstensen, and Ngoc Tien Tran
Abstract The success of mixed finite element methods (FEM) in the linear elasticity with focus on the accuracy of the stress variable on the one hand and surprising results on nonconforming FEMs for nonlinear partial differential equations with guaranteed lower eigenvalue bounds or lower energy bounds in convex minimization problems on the other hand motivate the research of three nonstandard discretization schemes within the Priority Program SPP 1748 of the German Research Foundation (DFG). The least-squares (LS) FEM and the discontinuous Petrov-Galerkin (dPG) methods are minimal residual methods with built-in error estimation even for inexact solve. The hybrid high-order (HHO) methodology generalizes nonconforming and mixed schemes to arbitrary polynomial degrees with general mesh-design. This paper presents explicit residual-based a posteriori estimators for these three numerical schemes and outlines the proof of optimal rates for the adaptive LS and dPG methods for linear model problems. The application of the LS and dPG methods to a nonlinear model problem leads to reliable and efficient error control. The HHO method without stabilization for a class of degenerate convex minimization problem allows for guaranteed error control with a superlinearly convergent lower energy bound. The three methods are numerically investigated in adaptive algorithms for effective mesh-refinement. Mathematics Subject Classification (2000) 47H05 · 49M15 · 65N12 · 65N15 · 65N30
1 Introduction 1.1 Motivation The discrete stress approximation in a conforming finite element method (FEM) is the symmetric part of the gradient of a piecewise polynomial (say of degree k + 1 for k ∈ N0 ) and so converges with order k + 1 in the Lebesgue norm of L 2 towards a sufficiently smooth solution in elasticity. Besides this reduced convergence (the approximation has order k + 2 in the L 2 norm), the discrete stress approximation has no extra advantages: There is no local equilibrium condition or any other conservation principle satisfied. In computational mechanics, the stress variable is often the main quantity of interest in a numerical simulation and the local equilibrium is a prime objective. This motivates a mixed finite element methodology based on various variational principles, e.g., Hellinger-Reissner, Hu-Washizu [8]. In linear elasticity, this requires a discrete stress approximation that is pointwise symmetric and H (div) conforming. The latter means equilibrium in the sense that at an edge (or triangular side in 3D) E = ∂ T+ ∩ ∂ T− that is shared by two finite element domains T+ and T− , the traction vectors t± := ±σh |T± n E derived from the stress approximation σh |T± in T± are in equilibrium t+ + t− = 0 along E. This means continuity of the normal components of the stress in the sense that the jump [σh ] E n E := (σh |T+ − σh |T− )n E vanishes along E (notice the different signs of t± to see this). The other demanded property is the pointwise symmetry σh (x) ∈ S for almost every point x ∈ in the mechanical body ; S denotes the symmetric 2 × 2 (or 3 × 3) matrices. The two properties can be written simultaneously as σh ∈ H (div, ; S) and, since Newton and Euler, are fundamental in mechanics as first principles known as law of balance of linear momentum and law of balance of angular momentum. The major difficulty in the design of those finite element stress functions is that it is simply impossible to write down a low-order version of piecewise polynomials with those properties. Said differently, the simplest positive example is an Arnold-Winther finite element AW1 (T ) := {τh ∈ P3 (T ; S) ∩ H (div, ; S) : div τh ∈ P1 (T ; R2 )}, where Pk denotes the algebraic polynomials of total degree at most k and Pk (T ) denote the piecewise polynomials in Pk , piecewise with respect to a triangulation T of the domain into triangles in 2D. The 24 degrees of freedom (dof) for the lowestorder Arnold-Winther FEM stress are depicted in Fig. 1a, an implementation of the mixed finite element scheme with a local stiffness matrix of dimension 30 × 30 per triangle can be found in [28]. An analogy to the C 1 conforming finite element schemes for the biharmonic plate problem may illustrate the complexity. The C 1 conforming lowest-order scheme is the quintic Argyris finite element A5 (T ) with the dof depicted in Fig. 1c. One key observation in the analysis and in the design of AW1 (T ) in [2] is that Curl Curl v A = σh ∈ AW1 (T ) for v A ∈ A5 (T ) is a typical divergence-free
Fig. 1 Illustration of the 30 degrees of freedom of the lowest-order Arnold-Winther FEM in Fig. 1a–b and of the 21 degrees of freedom of the Argyris FEM in Fig. 1c with courtesy from [9]
Arnold-Winther finite element function. In other words, the implementation of A5 (T ) appears as cumbersome as that of AW1 (T ). The computational benchmarks in [24] reveal that a primal approximation of conforming quintic FEM is locking-free, simpler to realize with standard software, and leads to accurate approximations. One of the reasons is that adaptive meshrefinement is necessary and available for the primal FEMs but wide open for the Arnold-Winther FEM. There have been attempts for adaptive mesh-refinement, e.g., in [26] that includes the first residual-based a posteriori error estimate and computational results indicate optimal (and locking-free) convergence rates. The proof of optimal convergence rates becomes possible now and is announced in [53]. As the state of the art, the local discrete equilibrium conditions are available solely at high computational costs and this explains the desire for alternatives for linear and also for nonlinear problems in computational mechanics. The project starts with the two alternatives that generalized mixed finite element schemes may either violate the symmetry condition or the conformity in H (div). There are former suggestions to add the symmetry not in the form of pointwise almost everywhere but add this with a Lagrange multiplier in a weak form. The easiest and oldest example is PEERS with an a posteriori error control in [21] and adaptive mesh-refining in [22]. More recent approaches include the least-squares methods (LSFEM) with an easy conceptional imbedding of the symmetry condition for the stress variable through an L 2 penalty term. If the pointwise symmetry of σh is enforced in the ansatz functions, then the H (div) conformity may be violated and enforced by Lagrange multipliers in a discontinuous Petrov Galerkin (dPG) scheme, that is a residual minimization method with broken test functions. The skeletal schemes replace the conformity through penalization of jump terms and one particular realization is the hybrid highorder (HHO) scheme in a version without stabilization. All three schemes will be addressed in this paper and have in common that no penalty parameter has to be selected: All those schemes are parameter-free, allow for lowest-order versions, and for application to nonlinear problems in general.
1.2 Three Nonstandard Discretizations The universality of the LSFEM has enjoyed an ongoing attention in the mathematical and the engineering community over the years (cf. [7] and the references therein). The straight-forward approach applies to first-order systems of partial differential equations and minimizes the (possibly weighted) sum of the squared norms of their residuals in the least-squares functional. Once the fundamental equivalence of the least-squares functional with norms of the underlying function spaces is established, any conforming discretization results in a symmetric and positive definite linear system of equations. Another key feature is a natural (sometimes called built-in) a posteriori error estimator by evaluating the local contributions to the least-squares functional at no additional computational costs. The reliability and efficiency follows immediately from the fundamental equivalence and does not require an exact solution of the discrete problem such as standard residual-based error estimators do. This is a particular advantage for nonlinear problems which usually involve an inexact solve of the nonlinear equation. The dPG method was originally designed from the demand of optimal test functions in wave propagation or in convection-dominated problems [44, 45, 47, 72] and leads to a minimal residual method based on a primal, dual, or ultraweak formulation in a setting with broken test spaces. The minimal residual ansatz generalizes the least-squares finite element approach in the higher flexibility of the test spaces. It stands out due to instant stability properties and a built-in error estimator in terms of a computed residual variable plus data approximation terms. A recent generalization of mixed and nonconforming finite element schemes is the emerging methodology under the label skeletal methods with variables on cells (the finite element domains) and the skeleton (the edges or faces between the element domains). The variables on the cells act independent of each other and may even be eliminated by Schur complements, this is called static condensation in engineering, such that only the skeletal variables remain. The list of examples include hybrid discontinuous Galerkin methods (also analyzed under the label weak Galerkin methods), (conforming and nonconforming) virtual finite element methods (VEM), and hybrid high-order (HHO) methods. One aspect of the skeletal methods is that the element domains are no longer simplices or tetrahedra but rather arbitrary polygons or polytopes – that explains the alternative name polytopal methods for this class of problems when the focus is on general mesh-design. The focus in this work is on simplicial schemes and on avoiding stability terms – this is convenient because a non-quadratic growth for nonlinear elasticity requires a non-quadratic stabilization and so no stabilization leads to a simple realization.
Fig. 2 Adaptively refined mesh with 1 987 triangles (ndof = 3 975) generated by the NALSFEM algorithm in Sect. 3.2 with bulk parameter θ = 0.3 for the convex minimization problem from Sect. 4.3 below
1.3 Adaptive Mesh-Refinement Nonconvex domains or mixed boundary conditions may lead to reduced Sobolev regularity of solutions to partial differential equations. The finite element approximation of such solutions results in suboptimal convergence rates for uniformly refined meshes. Suitable adaptive mesh-refinement strategies automatically detect the singularities of the solution in Sect. 4.3 below such as at the reentrant corner in Fig. 2. Numerical experiments confirm that this results in approximations with an optimal convergence rate. The many contributions in the theoretical convergence analysis with rates base on the notion of nonlinear approximation classes and led to the framework of the axioms of adaptivity [25] for algorithms with collective Dörfler marking. The details of the algorithms will follow below in Sect. 3.2 and in Sect. 3.3 for two examples of the two marking strategies: collective and separate marking. The latter concerns data approximation terms without a positive power of the mesh-size and an algorithm for the reduction of this approximation in optimal complexity. This is the case in mixed FEM and in LSFEM, when the data error term f − kT f L 2 () is part of the total error and error estimator for a given source term f ∈ L 2 () and the piecewise L 2 orthogonal projection kT onto the piecewise polynomials Pk (T ) of degree at most k. The investigations of the optimal rates leads to highly technical results called quasi-orthogonality and discrete reliability for nonstandard discretizations as one main achievement in this project.
1.4 Outline of the Presentation The remaining parts of this paper are organized as follows. Section 2 recalls basic notations for Sobolev spaces and discrete spaces used throughout this work. Section 3 starts with the introduction of the LSFEM and its natural a posteriori control. The latter is surprisingly asymptotically exact and motivates an adaptive scheme in Sect. 3.2 with plain convergence under mild assumptions. Despite the benefit of the natural error estimator, the lack of a positive power of the mesh-size prevents the proof of rate-optimality in the current state of the art [25, 39]. An alternative error estimator for linear elasticity in Sect. 3.3 circumvents this issue and allows for the verification of the axioms of adaptivity for separate marking in Sect. 3.4 to establish the rate-optimality of the adaptive algorithm. The application of the LSFEM to a nonlinear model problem in Sect. 4 leads to an efficient and reliable error estimator. Although the corresponding convex minimization problem has a unique solution, the uniqueness of the discrete solution is an open question. The numerical benchmark in Sect. 4.3 indicates that the corresponding adaptive scheme recovers the optimal convergence rates in the presence of singular solutions. Section 5 is devoted to the design and analysis of the dPG FEM. The three assumptions (H1)–(H3) in Sect. 5.1 provide a general framework for the a posteriori analysis of dPG methods with a built-in a posteriori error control. The design of a dPG method is carried out for the Poisson model problem in Sect. 5.2. While the adaptive scheme driven by the built-in error estimator recovers optimal rates for singular solutions in numerical benchmarks, the proof of rate-optimality fails due to the lack of a positive power of the mesh-size. The alternative error estimator in Sect. 5.3 for the lowestorder dPG scheme for the Poisson model problem enable the first proof of optimal convergence rates for a dPG scheme [31]. The lowest-order dPG method in Sect. 6 for the nonlinear model problem in Sect. 4.1 allows for reformulations of the standard dPG FEM in Sect. 6.2, to a weighted LSFEM and a mixed formulation. It provides a sufficient a posteriori condition for the uniqueness of the discrete solution as pointed out in the first work of the dPG methodology for a nonlinear problem [18]. The focus of the final sections is the HHO method introduced in Sects. 7.1 and 7.2. While the a priori analysis for this method is well-established, current a posteriori estimates accept the stabilization with a negative power of the mesh-size as a computable contribution. This prevents the reduction property of the error estimator as in earlier sections. The analysis in Sect. 7.4 gives rise to general conditions, which are sufficient for a stabilization-free residual-based a posteriori error analysis for the Poisson model problem on simplicial triangulations. The numerical benchmark in Sect. 7.5 indicate the efficiency and reliability of the novel error estimator. The relaxation in the calculus of variation motivates the numerical analysis of a class of degenerate convex minimization problems with non-strictly convex energy densities with some convexity control and two-sided p-growth in Sect. 8.1. The unstabilized HHO method in Sect. 8.2 approximates the deformation gradient by a
reconstruction with piecewise Raviart-Thomas finite element functions on a regular triangulation into simplices. The application of this HHO method allows for a unique H (div) conforming stress approximation σh . The comparison to the mixed FEM leads to a priori and a posteriori estimates for the stress error σ − σh L p () including a computable lower energy bound in Sects. 8.3 and 8.4. The numerical example in Sect. 8.5 displays higher convergence rates for higher polynomial degrees and provides empirical evidence for the first superlinear convergence rates of lower energy bounds [41].
2 Notation Let ⊂ Rd denote a polyhedral Lipschitz domain with outward unit normal vector n : ∂ → Rd . Standard notations for Sobolev and Lebesgue functions apply throughout this paper. In particular, (•, •) L 2 () denotes the scalar product of L 2 (). Let the function space W p (div, ; M) = W p (div, )m be the matrix-valued version of W p (div, ) = τ ∈ L p (; Rd ) : div τ ∈ L p () for 1 ≤ p ≤ ∞ with the convention H (div, ) = W 2 (div, ) and H (div, ; M) = H (div, )m . For any A, B ∈ M = Rm×d , A : B denotes the Euclidean scalar product of A and B, which induces the Frobenius norm |A| = (A : A)1/2 in M. Let Id×d ∈ M be the identity matrix. For any quadratic matrix A ∈ M, let tr(A) = dj=1 A j j denote the trace and dev A = A − tr(A)/d Id×d the deviatoric (or trace-free) part of the matrix A. For 1 < p < ∞, p = p/( p − 1) denotes the Hölder conjugate of p with 1/ p + 1/ p = 1. The notation A B abbreviates A ≤ C B for a generic constant C independent of the mesh-size and A ≈ B abbreviates A B A. A regular triangulation T of in the sense of Ciarlet is a finite set (of cardinality |T | ∈ N) of closed simplices T of positive volume |T | > 0 with boundary ∂ T and outer unit normal n T such that ∪T ∈T T = and two distinct simplices are either disjoint or share one common (lower-dimensional) subsimplex (vertex or edge in 2D and vertex, edge, or face in 3D). Let F(T ) denote the set of the n + 1 hyperfaces of T , called sides of T , and define the set of all sides F = ∪T ∈T F(T ) and the set of interior sides F() = F \ {F ∈ F : F ⊂ ∂} in T . For any interior side F ∈ F(), there exist exactly two simplices T+ , T− ∈ T such that ∂ T+ ∩ ∂ T− = F. The orientation of the outer normal unit n F = n T+ | F = −n T− | F along F is fixed. Define the side patch ω F = int(T+ ∪ T− ) of F and let [v] F = (v|T+ )| F − (v|T− )| F ∈ L 1 (F) denote the jump of v ∈ L 1 (ω F ) with v ∈ W 1,1 (T+ ) and v ∈ W 1,1 (T− ) across F. For any boundary side F ∈ F(∂) = F \ F(), n F = n T is the exterior unit vector for F ∈ F(T ) with T ∈ T and [v] F = (v|T )| F . The differential operators divpw and Dpw depend on the triangulation T and denote the piecewise application of div and D without explicit reference to the triangulation T .
For a simplex or a side M ⊂ R^d of diameter h_M, let P_k(M) denote the space of polynomials of maximal order 0 ≤ k regarded as functions defined in M. The L² projection Π^k_M v ∈ P_k(M) of v ∈ L¹(M) satisfies

∫_M φ_k (1 − Π^k_M) v dx = 0 for any φ_k ∈ P_k(M).

The space of Raviart-Thomas finite element functions on a simplex T ∈ T reads RT_k(T) = P_k(T; R^d) + x P_k(T) ⊂ P_{k+1}(T; R^d). Let P_k(T), P_k(F), and RT^pw_k(T) denote the spaces of piecewise functions (with respect to T and F) with restrictions to T or F in P_k(T), P_k(F), and RT_k(T), and with the L² projections Π^k_T, Π^k_F, and Π^{RT^pw_k(T)} onto the respective discrete spaces. For vector-valued functions v ∈ L¹(Ω; R^m) = L¹(Ω)^m, the L² projection Π^k_T onto the piecewise polynomials P_k(T; R^m) = P_k(T)^m applies componentwise. This applies to the L² projections onto P_k(M; R^m), P_k(F; R^m) = P_k(F)^m, RT^pw_k(T; M) = RT^pw_k(T)^m etc. The local mesh-sizes give rise to the piecewise constant function h_T ∈ P_0(T) with h_T|_T ≡ h_T in T ∈ T. Let osc_k(f, T) = ‖h_T (1 − Π^k_T) f‖_{L^p(Ω)} denote the data oscillation of f in T. Let mid(T) denote the barycenter of a simplex T in the definition of the piecewise affine function • − mid(T) ∈ P_1(T; R^d) by (• − mid(T))(x) = x − mid(T) for x ∈ T ∈ T. Define the H(div) conforming Raviart-Thomas finite element space by RT_k(T) = RT^pw_k(T) ∩ H(div, Ω). Let S^{k+1}_0(T) = P_{k+1}(T) ∩ W^{1,1}_0(Ω) denote the space of piecewise but globally continuous polynomials of maximal degree k + 1 with vanishing traces. The space of the non-conforming Crouzeix-Raviart finite element functions exhibits a continuity condition along the faces, that is,

CR^1_0(T) = {v_h ∈ P_1(T) : v_h is continuous at mid(F) for all F ∈ F(Ω) and v_h(mid(F)) = 0 for all F ∈ F(∂Ω)}.
3 Least-Squares Finite Element Methods in Computational Mechanics

This section recalls the least-squares formulations for the linear elasticity problem and discusses the least-squares functional as an a posteriori error estimator for adaptive mesh-refinement. The main contribution is an adaptive algorithm with an alternative residual-based error estimator and a separate marking strategy allowing for optimal convergence rates. The proof is sketched in the framework of the axioms of adaptivity.
3.1 Least-Squares Finite Element Methods

This section recalls the least-squares formulation and the remarkable asymptotic exactness property of the natural least-squares estimator [40]. Given a right-hand side f ∈ L²(Ω), let R(f; •) : H(div, Ω) × H¹_0(Ω) → L²(Ω) × L²(Ω; R^d) denote the residual of some first-order formulation of a PDE, that is, R(f; σ, u) = 0. Its solution is equivalent to the minimization of the least-squares functional

LS(f; σ, u) = ‖R(f; σ, u)‖²_{L²(Ω)}.  (1)

The LSFEM computes the minimizer of this least-squares functional in the conforming finite element function spaces RT_k(T) ⊂ H(div, Ω) and S^{k+1}_0(T) ⊂ H¹_0(Ω),

(σ_LS, u_LS) ∈ arg min_{(q_LS, v_LS) ∈ RT_k(T) × S^{k+1}_0(T)} LS(f; q_LS, v_LS).  (2)
The well-posedness of this formulation is typically based on a fundamental equivalence of the form, for every σ, τ ∈ H(div, Ω) and u, v ∈ H¹_0(Ω),

‖R(f; σ, u) − R(f; τ, v)‖²_{L²(Ω)} ≈ ‖σ − τ‖²_{H(div,Ω)} + |||u − v|||².  (3)
Since the exact solution (σ, u) to the PDE satisfies R(f; σ, u) = 0, the equivalence (3) shows that the least-squares functional provides a natural a posteriori error control, for every τ_RT ∈ RT_k(T) and v_C ∈ S^{k+1}_0(T),

LS(f; τ_RT, v_C) ≈ ‖σ − τ_RT‖²_{H(div,Ω)} + |||u − v_C|||².  (4)
This natural error estimator turned out to provide guaranteed upper error bounds (up to carefully computed reliability constants) and is even asymptotically exact [40, 66] in that

LS(f; σ_LS, u_LS) / (‖σ − σ_LS‖²_{H(div,Ω)} + |||u − u_LS|||²) → 1 as ‖h_T‖_{L^∞(Ω)} → 0.  (5)
This asymptotic exactness has apparently been overlooked for decades and surprised the community. It follows for several linear model problems from a spectral decomposition of the ansatz space X = H(div, Ω) × H¹_0(Ω) with an appropriate norm ‖•‖_X and the Galerkin orthogonality of the LSFEM and, so, holds for all kinds of conforming discretizations. As a consequence, the convergence in (5) is independent of the right-hand side f and the polynomial degree k of the underlying discretization. One possible application is the least-squares formulation for the linear elasticity problem from [14] with stress σ ∈ Σ = {τ ∈ H(div, Ω; M) : ∫_Ω tr(τ) dx = 0} and displacement u ∈ H¹_0(Ω; R^d) [40]. For the Lamé parameters λ, μ > 0 and τ ∈ M, consider the linear material law
Cτ = 2μ τ + λ tr(τ) I_{d×d}.  (6)
Any discretization based on the least-squares functional LS(f; •) : Σ × H¹_0(Ω; R^d) → R with

LS(f; σ, u) = ‖f + div σ‖²_{L²(Ω)} + ‖C^{−1/2} σ − C^{1/2} ε(u)‖²_{L²(Ω)}

and the weighted norms

‖τ‖_{H(div,Ω)} = (‖C^{−1/2} τ‖²_{L²(Ω)} + ‖div τ‖²_{L²(Ω)})^{1/2} and |||v||| = ‖C^{1/2} ε(v)‖_{L²(Ω)}

satisfies (5). The application to the Stokes problem in [66] requires the restriction of the ansatz space for the velocity to divergence-free vector fields u ∈ Z = {v ∈ H¹_0(Ω; R^d) : div v = 0}. The least-squares functional LS(f; •) : Σ × Z → R from [15] with

LS(f; σ, u) = ‖f + div σ‖²_{L²(Ω)} + ‖dev σ − D u‖²_{L²(Ω)}

exhibits asymptotically exact convergence with respect to the norms defined by

‖τ‖_{H(div,Ω)} = (‖dev τ‖²_{L²(Ω)} + ‖div τ‖²_{L²(Ω)})^{1/2} and |||v||| = ‖D v‖_{L²(Ω)}.
3.2 Natural Adaptive Mesh-Refinement

Due to the reliability and efficiency of the least-squares functional in (4), the local contributions

η²(T_ℓ, T) = ‖R(f; σ_LS, u_LS)‖²_{L²(T)}  (7)

can be used as refinement indicators in adaptive algorithms. This section addresses the plain convergence of adaptive least-squares FEMs. Considering the triangulation T_ℓ on the level ℓ ∈ N_0, the Dörfler marking [50] for some bulk parameter 0 < θ ≤ 1 allows to select a set M_ℓ ⊆ T_ℓ marked for refinement, for η²(T_ℓ, M_ℓ) = Σ_{T∈M_ℓ} η²(T_ℓ, T) and η²(T_ℓ) = η²(T_ℓ, T_ℓ), by the criterion

θ η²(T_ℓ) ≤ η²(T_ℓ, M_ℓ).  (8)

Subsequently, the newest-vertex bisection (NVB) from [56, 65, 67] generates the smallest regular refinement T_{ℓ+1} of T_ℓ such that all simplices in M_ℓ ⊆ T_ℓ \ T_{ℓ+1} are refined.

Algorithm NALSFEM
Input: Initial regular triangulation T_0 with some initial condition (cf. [65, Sect. 4]) and bulk parameter 0 < θ ≤ 1.
for any level ℓ = 0, 1, 2, . . . do
Solve LSFEM with respect to the triangulation T_ℓ for the solution (σ_ℓ, u_ℓ).
Compute error estimator η(T_ℓ, T) from (7) for every T ∈ T_ℓ.
Select a subset M_ℓ ⊆ T_ℓ of (almost) minimal cardinality with (8).
Compute smallest regular refinement T_{ℓ+1} of T_ℓ by NVB.
od
Output: Sequences of discrete solutions (σ_ℓ, u_ℓ)_{ℓ∈N_0} and triangulations (T_ℓ)_{ℓ∈N_0}.
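The selection of M_ℓ with (almost) minimal cardinality in (8) amounts to a greedy sweep over the sorted local contributions. The following Python sketch illustrates this marking step under the assumption that the local values η²(T_ℓ, T) have already been computed; the names doerfler_marking, eta_sq, and theta are illustrative and not taken from the cited implementations.

# Hypothetical sketch of the Doerfler marking step (8): select a set of
# (almost) minimal cardinality whose contributions exceed the bulk of the total.
def doerfler_marking(eta_sq, theta):
    """eta_sq: dict element -> local contribution eta^2(T_l, T);
    theta: bulk parameter in (0, 1]. Returns the marked set M_l."""
    total = sum(eta_sq.values())
    marked, acc = [], 0.0
    # sort by decreasing contribution for a greedy (almost) minimal set
    for elem, val in sorted(eta_sq.items(), key=lambda kv: -kv[1]):
        if acc >= theta * total:
            break
        marked.append(elem)
        acc += val
    return set(marked)

# example: three elements, theta = 0.5 marks only the dominant one
print(doerfler_marking({'T1': 4.0, 'T2': 1.0, 'T3': 0.5}, 0.5))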
The recent publication [51] follows [63] to establish the plain convergence (without any rate of convergence) for this algorithm solely requiring mild assumptions on the PDE, the marking strategy, and the mesh-refinement. For the 2D Poisson model problem with residual R(f; σ, u) = (f + div σ, σ − ∇u) in (1), the lowest-order (modified) least-squares functional converges Q-linearly [35, Theorem 4.1], in that, for sufficiently large bulk parameter θ and the L² orthogonal projection Π_ℓ = Π^0_{T_ℓ} on the level 0 ≤ ℓ, there exist a reduction constant 0 < ρ < 1 and a positive generic constant C with

LS(Π_{ℓ+1} f; p_{ℓ+1}, u_{ℓ+1}) + C ‖(1 − Π_{ℓ+1}) p_{ℓ+1}‖²_{L²(Ω)} ≤ ρ (LS(Π_ℓ f; p_ℓ, u_ℓ) + C ‖(1 − Π_ℓ) p_ℓ‖²_{L²(Ω)}).  (9)
The proof utilizes the supercloseness result

‖Π^0_T f + div p_LS‖_{L²(Ω)} ≲ ‖h_T^s‖_{L^∞(Ω)} ‖p_LS − ∇u_LS‖_{L²(Ω)}

with the reduced elliptic regularity parameter s for the possibly non-convex polygonal domain as well as the flux representation formula

q_RT = Π^0_T q_RT + div q_RT (• − mid(T))/2

for any q_RT ∈ RT_0(T) with v_CR ∈ CR^1_0(T) and w_C ∈ S¹(T)/R in Π^0_T q_RT = ∇_pw v_CR + Curl w_C.
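As a quick plausibility check of the flux representation formula, the following numpy snippet evaluates an affine RT_0 field q(x) = a + b x on one triangle and verifies q = Π^0_T q + div q (x − mid(T))/2 at a sample point; the triangle, the coefficients a and b, and the sample point are arbitrary illustrative choices.

# Minimal numerical check of the RT0 flux representation on one triangle:
# q(x) = a + b*x with div q = 2b in 2D, and Pi^0_T q = a + b*mid(T).
import numpy as np

verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # triangle T
a, b = np.array([0.3, -0.7]), 1.5                        # RT0 coefficients
q = lambda x: a + b * x                                  # q in RT0(T)
mid = verts.mean(axis=0)                                 # barycenter mid(T)
mean_q = a + b * mid                                     # Pi^0_T q (exact for affine q)
x = np.array([0.2, 0.3])                                 # sample point in T
lhs = q(x)
rhs = mean_q + (2 * b) / 2 * (x - mid)                   # Pi^0_T q + div q (x - mid(T))/2
assert np.allclose(lhs, rhs)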
3.3 Alternative A posteriori Error Control

This section motivates the necessity of an alternative a posteriori error estimator for the convergence analysis with rates for adaptive least-squares FEMs. Exemplarily, it presents an adaptive algorithm for the linear elasticity problem with optimal convergence rate from [13], a joint work of two PIs in the SPP 1748. The requirement of a sufficiently large bulk parameter θ for (9) contrasts the established analysis [25, 39] for optimal rates, where a sufficiently small bulk parameter
is demanded. In fact, although the least-squares functional is a reliable and efficient error estimator, it does not involve any mesh-size factor that reduces under refinement. This seemingly prevents its reduction property, which is a crucial part in all known quasi-optimality proofs. It is therefore necessary to base the adaptive algorithm on some novel explicit residual-based a posteriori error estimator η²(T) = Σ_{T∈T} η²(T, T) with exact solve as it was first suggested in [34] for the Poisson model problem with homogeneous Dirichlet boundary data. The extension of this approach to the Stokes equations in two dimensions with inhomogeneous Dirichlet boundary data has been established in [11] and for higher polynomial degrees in an h-adaptive algorithm in [12]. Bringmann summarized these publications and [13] in the Ph.D. thesis [10] establishing the optimal convergence rates of an adaptive LSFEM for a generalized linear model problem in three spatial dimensions. The analysis covers discretizations with arbitrary polynomial degree and inhomogeneous Dirichlet and Neumann boundary conditions. Suppose that the boundary ∂Ω of Ω ⊂ R³ is partitioned into the compact subset Γ_D with positive 2-dimensional Hausdorff measure |Γ_D| > 0 and the (nonempty) relatively open subset Γ_N = ∂Ω \ Γ_D. Let the set F of sides be subordinated to Γ_D and Γ_N in that F_D = F(Γ_D) = {F ∈ F : F ⊂ Γ_D} and F_N = F(Γ_N) = F(∂Ω) \ F_D partition the set F(∂Ω) of all sides on the boundary ∂Ω. For the three-dimensional linear elasticity, consider the inverse of the linear material law from (6), for the Lamé parameters λ, μ > 0 and τ ∈ M,

C^{−1} τ = (1/(2μ)) (τ − λ/(3λ + 2μ) (tr τ) I_{3×3})

in the least-squares functional

LS(f; σ, u) = ‖f + div σ‖²_{L²(Ω)} + ‖C^{−1} σ − ε(u)‖²_{L²(Ω)}

as well as the homogeneous Dirichlet boundary data u|_{Γ_D} ≡ 0 and the inhomogeneous Neumann boundary data σ n|_{Γ_N} = g ∈ L²(Γ_N; R³). This first-order formulation from [16] avoids volumetric locking as λ → ∞. The alternative a posteriori error estimator reads [13, Sect. 4.1]

η²(T, T) = |T|^{2/3} ‖div(sym(C^{−1} σ_LS − ε(u_LS)))‖²_{L²(T)}
+ |T|^{2/3} ‖curl(C^{−1}(C^{−1} σ_LS − ε(u_LS)))‖²_{L²(T)}
+ |T|^{1/3} Σ_{F∈F(T)\F_D} ‖[sym(C^{−1} σ_LS − ε(u_LS))]_F n_F‖²_{L²(F)}
+ |T|^{1/3} Σ_{F∈F(T)\F_N} ‖[C^{−1}(C^{−1} σ_LS − ε(u_LS))]_F × n_F‖²_{L²(F)}
+ |T|^{1/3} Σ_{F∈F(T)∩F_N} ‖g − Π^k_{F_N} g‖²_{L²(F)}.  (10)
From an engineering viewpoint, the use of the L² norm for the strain difference C^{−1}σ − ε(u) is questionable because it is not an energy term. But it leads to a locking-free discretization and so is perfectly justified. The resulting estimator terms C^{−1}(C^{−1}σ_LS − ε(u_LS)) in (10) are a consequence of this ansatz and justified by the convergence theorem below. Since the first-order divergence least-squares FEM measures the flux errors in H(div), the least-squares functional includes the data resolution error μ²(T) = Σ_{T∈T} μ²(T) with μ²(T) = ‖f − Π^k_T f‖²_{L²(T)}. On the contrary, the error estimator η(T) does not involve any data error of f and cannot ensure the reduction of μ(T). Hence, a separate marking strategy [38] is required in the adaptive algorithm.

Algorithm ALSFEM
Input: Initial regular triangulation T_0 with some initial condition (cf. [65, Sect. 4]) and parameters 0 < θ ≤ 1, 0 < ρ < 1, and 0 < κ < ∞.
for any level ℓ = 0, 1, 2, . . . do
Solve LSFEM with respect to the triangulation T_ℓ for the solution (σ_ℓ, u_ℓ).
Compute error estimator η(T_ℓ, T) from (10) for every T ∈ T_ℓ.
if μ²(T_ℓ) ≤ κ η²(T_ℓ) (Case A)
Select a subset M_ℓ ⊆ T_ℓ of (almost) minimal cardinality with (8).
Compute smallest regular refinement T_{ℓ+1} of T_ℓ by NVB.
else (Case B)
Compute an admissible refinement T_{ℓ+1} of T_ℓ with (almost) minimal cardinality |T_{ℓ+1}| and μ(T_{ℓ+1}) ≤ ρ μ(T_ℓ).
fi
od
Output: Sequences of discrete solutions (σ_ℓ, u_ℓ)_{ℓ∈N_0} and triangulations (T_ℓ)_{ℓ∈N_0}.
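The separate marking decision in ALSFEM only compares the two computable quantities μ²(T_ℓ) and η²(T_ℓ). A minimal sketch of one adaptive step is given below; the callables mark, refine, and approx_data stand for Dörfler marking, NVB refinement, and the data approximation algorithm of Case B, and are assumptions of this illustration, not code from [13].

# Sketch of one loop of the separate-marking driver (Case A vs Case B).
def adaptive_step(mesh, eta_sq, mu_sq, theta, kappa, mark, refine, approx_data):
    """mark/refine/approx_data are user-supplied callables (illustrative)."""
    eta_total = sum(eta_sq.values())
    mu_total = sum(mu_sq.values())
    if mu_total <= kappa * eta_total:          # Case A: estimator-driven marking
        return refine(mesh, mark(eta_sq, theta))
    return approx_data(mesh, 0.5 * mu_total)   # Case B: reduce the data error mu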
The formal statement of optimal convergence rates employs, for any regularity parameter 0 < s < ∞, the notion of the nonlinear approximation class A_s consisting of all triples (σ, u, f) ∈ H(div, Ω; M) × H¹(Ω; R³) × L²(Ω; R³) such that u ≡ 0 on Γ_D, σ n = g on Γ_N, and

|(σ, u, f)|_{A_s} = sup_{N∈N} (N + 1)^s min_{T∈T(N)} (η²(T) + μ²(T))^{1/2} < ∞.
Theorem 1 (optimal convergence rates) There exist a maximal bulk parameter 0 < θ_0 < 1 and a maximal separation parameter 0 < κ_0 ≤ ∞ such that for all 0 < θ ≤ θ_0, for all 0 < κ ≤ κ_0, and for all 0 < s < ∞, the output ((σ_ℓ, u_ℓ) : ℓ ∈ N) of ALSFEM and (σ, u, f) ∈ A_s satisfy

sup_{ℓ∈N} (|T_ℓ| − |T_0| + 1)^s (η²(T_ℓ) + μ²(T_ℓ))^{1/2} ≈ |(σ, u, f)|_{A_s}.
The maximal parameters θ0 and κ0 depend exclusively on the initial triangulation T0 and the polynomial degree k and are λ-independent.
A careful analysis of the discrete reliability in [30] provides an explicit upper bound for θ for the adaptive Courant FEM for the Poisson model problem with optimal convergence rates. When the error of the approximation of the variable σ is measured in the L 2 norm (instead of the full H (div) norm), an adaptive LSFEM employing collective marking for an alternative residual-based explicit error estimator even converges with the optimal rate [17]. With respect to such weaker norms, adaptive mixed FEMs employing collective marking also converge with the optimal rate, such as in [33] for non-selfadjoint indefinite second-order elliptic PDEs.
3.4 Axioms of Adaptivity

The adaptive algorithm with separate marking at hand requires a modification of the axiomatic framework from [39] established by [23]. This section provides an overview of the axioms in the context of the linear elasticity problem from Sect. 3.3. The presentation goes beyond the publication [13] in that it includes the discretization with higher polynomial degree from [10]. The proof of optimal convergence rates is based on the seven axioms (A1)–(A4), (B1)–(B2), and (QM). The ten included positive generic constants Λ_j for j = 1, . . . , 7, Λ̃_3, Λ_ref, and ρ_2 < 1 depend on the initial triangulation T_0 as well as the polynomial degree k ∈ N_0. The axioms (A1)–(A3), (QM), and (B2) concern an admissible refinement T̂ ∈ T(T) of an arbitrary triangulation T ∈ T. The distance between these triangulations is defined as the global value

δ²(T̂, T) = LS(0; σ̂_LS − σ_LS, û_LS − u_LS) ≥ 0.

The stability axiom asserts

|η(T̂, T̂ ∩ T) − η(T, T̂ ∩ T)| ≤ Λ_1 δ(T̂, T)  (A1)

and the reduction axiom

η(T̂, T̂ \ T) ≤ ρ_2 η(T, T \ T̂) + Λ_2 δ(T̂, T).  (A2)

The proofs of (A1)–(A2) for the error estimator η from (10) follow from standard arguments as the triangle inequality and the discrete jump control from [39, Lemma 5.2]. The discrete reliability axiom supposes the existence of some set T \ T̂ ⊆ R ⊆ T of coarse simplices with |R| ≤ Λ_ref |T \ T̂| and

δ²(T̂, T) ≤ Λ_3 η²(T, R) + μ²(T) + Λ̃_3 η²(T).  (A3)
The proof of (A3) departs with the construction of two intermediate functions τ_RT ∈ RT_k(T; M) and τ̂_RT ∈ RT_k(T̂; M) with the Neumann boundary data τ_RT n = (Π^k_{F_N} − Π^k_{F̂_N}) g and τ̂_RT n = 0 on Γ_N and the divergences div τ_RT = (1 − Π^k_T) div(σ̂_LS − σ_LS) and div τ̂_RT = Π^k_T div(σ̂_LS − σ_LS) in Ω. The functions τ_RT and τ̂_RT and a discrete exact sequence with partial homogeneous boundary conditions allow for an algebraic split of the left-hand side with some Nédélec function of first kind β_Ned ∈ N_k(T̂; M) in

δ²(T̂, T) = ‖(1 − Π^0_T) div(σ̂_LS − σ_LS)‖²_{L²(Ω)}
+ (C^{−1}(σ̂_LS − σ_LS) − ε(û_LS − u_LS), C^{−1} τ_RT)_{L²(Ω)}
+ (C^{−1} σ_LS − ε(u_LS), ε(û_LS − u_LS) − C^{−1} curl β_Ned)_{L²(Ω)}.  (11)
The Scott-Zhang quasi-interpolation operator [62] and a modification of the Nédélec quasi-interpolation operator from [71] ensuring the preservation of partial homogeneous boundary conditions enable the localized upper bound in (A3). The subsequent axioms (B1)–(B2) refer to the data approximation algorithm in Case B of the ALSFEM algorithm. The data approximation of rate s > 0 requires that, for all given tolerances Tol > 0, there exists an admissible triangulation T_Tol ∈ T satisfying

|T_Tol| − |T_0| ≤ Λ_5 Tol^{−1/(2s)} and μ²(T_Tol) ≤ Tol.  (B1)

The thresholding second algorithm from [5, 6] plus a completion step allows for quasi-optimal data approximation (B1) [39, Theorem 3.3] and is one possible realization of Case B of the ALSFEM. The data approximation error μ from above satisfies the required quasi-monotonicity

μ(T̂) ≤ Λ_6 μ(T).  (B2)
The quasi-orthogonality axiom solely concerns the outcome (T_ℓ : ℓ ∈ N_0) of the ALSFEM algorithm and reads

Σ_{j=ℓ}^∞ δ²(T_{j+1}, T_j) ≤ Λ_4 (η²(T_ℓ) + μ²(T_ℓ)).  (A4)
In the case of homogeneous boundary conditions, the variational formulation (2) provides a conforming discretization. The Galerkin orthogonality immediately implies the quasi-orthogonality axiom (A4) [34, Theorem 4.1]. However, for triangulations T_j and T_{j+1}, the (possibly) different Neumann data approximations on the refined boundary sides F_j(Γ_N) \ F_{j+1}(Γ_N) prevent the Galerkin orthogonality. A stable
extension of the approximation error allows for a remedy to prove a quasi-Pythagoras lemma [10, Lemma 4.15] with a positive generic constant C_QP such that every α > 0 satisfies

δ²(T̂, T) ≤ (1 + α) LS(f; σ_LS, u_LS) − (1 − α) LS(f; σ̂_LS, û_LS) + (C_QP/α) (osc²(g, F(Γ_N)) − osc²(g, F̂(Γ_N))).  (12)
This enables the proof of (A4) via a weakened version of the quasi-orthogonality with a parameter ε > 0 (cf. [39, Sect. 3.1]). The quasi-monotonicity axiom on η + μ requires

η(T̂) + μ(T̂) ≤ Λ_7 (η(T) + μ(T))  (QM)

and follows from the quasi-Pythagoras lemma as well.
4 Least-Squares Finite Element Methods in Nonlinear Computational Mechanics

This section presents the least-squares discretization for a nonlinear model problem. Numerical experiments with the natural adaptive algorithm employing a Newton scheme in a nested iteration exhibit optimal convergence rates. Some concluding comments address the lack of a uniqueness result for the discrete solution.
4.1 Convex Energy Minimization

This subsection introduces a scalar nonlinear model example with some Hilbert space setting and a nonlinearity with quadratic growth in the gradient. It stands for a larger class of Hencky materials [69, Sect. 62.8] and is regarded as a first model problem on the way towards real-life applications with a matrix-valued stress σ(F) given as a nonlinear function of some deformation gradient F (such as the gradient ∇u of the displacement u) and the remaining equilibration equation

f + div σ(∇u) = 0 a.e. in Ω  (13)

for some prescribed source term f in the domain Ω. The model problem involves a nonlinear function φ ∈ C²(0, ∞) with 0 < γ_1 ≤ φ(t) ≤ γ_2 and 0 < γ_1 ≤ φ(t) + t φ'(t) ≤ γ_2 for all t ≥ 0 and universal positive constants γ_1, γ_2. Given f ∈ L²(Ω) and the convex function ϕ, ϕ(t) = ∫_0^t s φ(s) ds for t ≥ 0, the model problem minimizes the energy functional
E(v) = ∫_Ω ϕ(|∇v(x)|) dx − ∫_Ω f v dx among all v ∈ H¹_0(Ω).
The convexity of ϕ and the above assumptions on φ lead to growth conditions and sequential weak lower semicontinuity of E and guarantee the unique existence of a minimizer u of E in H¹_0(Ω) [70, Theorem 25.D]. The equivalent Euler-Lagrange equation reads

∫_Ω φ(|∇u|) ∇u · ∇v dx = ∫_Ω f v dx for all v ∈ H¹_0(Ω)  (14)
and has a unique solution u in H¹_0(Ω). The stress variable σ(A) = φ(|A|) A defines a function σ ∈ C¹(R^d; R^d) with the Fréchet derivative

D σ(A) = φ(|A|) I_{d×d} + φ'(|A|) |A| sign(A) ⊗ sign(A)  (15)
with the sign function sign(A) = A/|A| for A ∈ R^d \ {0} and the closed unit ball sign(0) = B(0, 1) in R^d. The prefactor φ'(|A|)|A| makes D σ a continuous function in R^d. In fact, D σ ∈ C⁰(R^d; S) is bounded with eigenvalues in the compact interval [γ_1, γ_2] ⊂ (0, ∞). Hence, for A, B ∈ R^d, the fundamental theorem of calculus

σ(A) − σ(B) = ∫_0^1 D σ(sA + (1 − s)B)(A − B) ds

and (15) imply the global Lipschitz continuity of σ with Lip(σ) ≤ γ_2,

|σ(A) − σ(B)| ≤ ∫_0^1 |D σ(sA + (1 − s)B)(A − B)| ds ≤ γ_2 |A − B|.

A formal calculation with s(j) = (sign A)_j, s(j, k) = (sign A)_j (sign A)_k etc. and the Kronecker symbol δ_{jk} for j, k, ℓ = 1, . . . , d leads at A ∈ R^d to

H σ(A)_{j,k,ℓ} = φ'(|A|)(δ_{jk} s(ℓ) + δ_{jℓ} s(k) + δ_{kℓ} s(j)) + (φ''(|A|)|A| − φ'(|A|)) s(j, k, ℓ).

Although H σ(A) may be bounded, it may be discontinuous for A → 0.
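The derivative formula (15) is easy to test numerically: the following Python sketch compares (15) with a central finite-difference Jacobian of σ(A) = φ(|A|)A for the particular φ(t) = 2 − (1 + t²)^{−1} used in Sect. 4.3 below; the sample point A and the step size h are arbitrary illustrative choices.

# Finite-difference check of the derivative formula (15) for a sample phi.
import numpy as np

phi  = lambda t: 2.0 - 1.0 / (1.0 + t**2)
dphi = lambda t: 2.0 * t / (1.0 + t**2)**2          # phi'(t)
sigma = lambda A: phi(np.linalg.norm(A)) * A

def D_sigma(A):
    t = np.linalg.norm(A)
    s = A / t                                        # sign(A) for A != 0
    return phi(t) * np.eye(len(A)) + dphi(t) * t * np.outer(s, s)

A, h = np.array([0.4, -1.2]), 1e-6
fd = np.column_stack([(sigma(A + h*e) - sigma(A - h*e)) / (2*h)
                      for e in np.eye(2)])
assert np.allclose(fd, D_sigma(A), atol=1e-6)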
4.2 Least-Squares Formulation

The least-squares formulation involves the nonlinear residual R(f; •) : H(div, Ω) × H¹_0(Ω) → L²(Ω) × L²(Ω; R^d) for the first-order system of (14), that is defined, for (p, u) ∈ H(div, Ω) × H¹_0(Ω), by R(f; p, u) = (f + div p, p − σ(∇u)). The least-squares functional
LS(f; p, u) = ‖R(f; p, u)‖²_{L²(Ω)} = ‖f + div p‖²_{L²(Ω)} + ‖p − σ(∇u)‖²_{L²(Ω)}

on the finite element function spaces RT_0(T) ⊂ H(div, Ω) and S¹_0(T) ⊂ H¹_0(Ω) leads to the discrete nonlinear minimization problem

(p_LS, u_LS) ∈ arg min_{(q_LS, v_LS) ∈ RT_0(T) × S¹_0(T)} LS(f; q_LS, v_LS).  (16)
The least-squares functional satisfies the fundamental equivalence [18, Lemma 4.2], for (p, u), (q, v) ∈ H(div, Ω) × H¹_0(Ω),

‖R(f; p, u) − R(f; q, v)‖²_{L²(Ω)} ≈ ‖p − q‖²_{H(div,Ω)} + |||u − v|||²

and, thus, provides a reliable and efficient a posteriori error estimator as in (4). The discrete minimizers (p_LS, u_LS) in (16) are determined by the solution of the discrete equation

0 = D LS(f; p_LS, u_LS; q_LS, v_LS) = ∫_Ω (f + div p_LS) div q_LS dx + ∫_Ω (p_LS − σ(∇u_LS)) · (q_LS − D σ(∇u_LS) ∇v_LS) dx  (17)
for every (q_LS, v_LS) ∈ RT_k(T) × S^{k+1}_0(T). However, the closeness of discrete solutions (p_LS, u_LS) to the regular solution (p, u) remains open to the best of the authors' knowledge. The Newton-Kantorovich theorem [68, Sect. 5.2] is a frequently used tool for the proof of the existence of discrete solutions, but the necessary higher Fréchet derivatives do not exist for this model problem in the required functional analytical setting. This is due to the fact that the last term in the second derivative

D² LS(f; p, u; q, v, q̃, ṽ) = ∫_Ω (div q div q̃ + q · q̃) dx + ∫_Ω (q · D σ(∇u) ∇ṽ + q̃ · D σ(∇u) ∇v) dx + ∫_Ω (D σ(∇u) ∇v) · (D σ(∇u) ∇ṽ) dx − ∫_Ω (p − σ(∇u)) · H σ(∇u)[∇v, ∇ṽ] dx

is not well-defined on the continuous level for u, v, ṽ ∈ H¹_0(Ω), because the product of three Lebesgue functions in L²(Ω) is, in general, not in L¹(Ω). Nonetheless, this second derivative is well-defined for discrete arguments and allows for the application of a Newton-Raphson scheme. Other nonlinear least-squares FEMs follow a Gauss-Newton approach by discretizing a linearization of the PDE (13), cf. [57].
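Once the discrete derivatives are assembled, the Newton-Raphson scheme reduces to the standard update x ← x − (D²LS)^{−1} DLS. The following generic driver illustrates this structure; residual and jacobian stand for the assembled first and second derivatives and are supplied by the caller, and the toy example at the end is purely illustrative.

# Generic Newton-Raphson driver of the kind used for the discrete system (17).
import numpy as np

def newton(x0, residual, jacobian, tol=1e-12, maxit=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jacobian(x), r)
    return x

# toy usage: decoupled 2x2 system F(x) = x**3 - [1, 8]
F  = lambda x: x**3 - np.array([1.0, 8.0])
dF = lambda x: np.diag(3.0 * x**2)
print(newton(np.array([1.5, 1.5]), F, dF))   # approx [1, 2]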
Fig. 3 Convergence history plot for the convex minimization problem from Sect. 4.3
4.3 Numerical Experiments

Let f ≡ 1 on the L-shaped domain Ω = (−1, 1)² \ [0, 1]² ⊂ R². The unknown exact solution u ∈ H¹_0(Ω) to (14) satisfies homogeneous Dirichlet boundary conditions. For φ(t) = 2 − (1 + t²)^{−1} with γ_1 = 1 < γ_2 = 4, 0 ≤ φ' ≤ 2 is bounded as well as φ'' and D σ from (15) is globally Lipschitz continuous with Lip(D σ) ≤ 2. Moreover, φ'(0) = 0 and H σ is continuous with H σ(0) = 0. The P_1-conforming finite element solutions u_h to (14) on uniformly refined triangulations provide a monotonically decreasing sequence (E(u_h))_h of approximated energies. A post-processing with the Aitken Δ² extrapolation leads to the reference energy E(u) = −1.017047884936 × 10^{−1}. The discrete solution to (17) is computed by a Newton scheme following [18, Sect. 5.1]. The least-squares finite element solution (p̃_RT, ũ_C) to the linear Poisson model problem with respect to the initial triangulation T_0 is scaled by the factor ‖σ(|∇ũ_C|)‖_{L^∞(Ω)} to serve as initial iterate in the Newton scheme. Each loop of the nested iteration over successive mesh-refinements starts with the prolongated solution from the coarser triangulation. Following this strategy, a single Newton iteration suffices for the solution of (17) on each level up to machine precision. The convergence history plot in Fig. 3 illustrates that the optimal convergence rate of 0.5 is already achieved for rather moderate bulk parameters θ ≤ 0.7 in the NALSFEM.
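For completeness, a minimal Python sketch of the Aitken Δ² post-processing used to obtain such reference energies is given below; the geometric toy sequence only serves to verify the implementation and does not reproduce the computation above.

# Aitken Delta^2 extrapolation of a convergent sequence of energies.
def aitken(seq):
    """Return the Aitken Delta^2 transform of a sequence (length >= 3)."""
    out = []
    for a0, a1, a2 in zip(seq, seq[1:], seq[2:]):
        denom = a2 - 2.0 * a1 + a0
        out.append(a2 - (a2 - a1) ** 2 / denom if denom != 0 else a2)
    return out

# toy usage: E_h = E + C*q**h converges linearly, Aitken recovers E exactly
E, C, q = -0.1017, 0.03, 0.5
seq = [E + C * q**h for h in range(6)]
print(aitken(seq))   # all entries equal E up to round-off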
4.4 Comments

All discrete solutions (p_LS, u_LS) in the numerical experiments from Sect. 4.3 converge to the unique exact solution (p, u). In particular, there is no empirical evidence for a second discrete solution. Nonetheless, the uniqueness of the solution to the discrete problem (17) is still open. In the context of the nonlinear discontinuous Petrov-Galerkin method in Sect. 6 below, Theorem 3 provides an a posteriori uniqueness result. If a discrete solution satisfies the computable criterion (32), the theorem proves that the discrete problem has at most one solution. It is a goal for future research to determine whether such a criterion can be established for the nonlinear LSFEM as well. This might be of particular interest for other PIs in the SPP 1748 who analyzed nonlinear LSFEMs for hyperelasticity [57, 58, 61], elasto-plasticity [54, 64], and elasto-viscoplasticity [60]. The ellipticity of these least-squares discretizations for much more complex material laws is remarkable, but they lack discrete uniqueness results as well.
5 Discontinuous Petrov-Galerkin

The following section outlines the discontinuous Petrov-Galerkin (dPG) methodology and discusses the derivation of dPG schemes by breaking the test spaces and the bilinear forms of weak formulations. An alternative error estimator in an adaptive dPG algorithm allows for the proof of optimal convergence rates in the framework of the axioms of adaptivity.
5.1 Optimal Test Functions

This section exposes the derivation of a dPG scheme and the functional analytic framework from [19] for a general a posteriori error analysis under three assumptions (H1)–(H3). Let X denote a real reflexive Banach space and let Y be a real Hilbert space with dual space Y*. Given a right-hand side F ∈ Y*, the weak formulation of a PDE involves a bounded bilinear form b : X × Y → R with

‖b‖ = sup_{x∈X, ‖x‖_X=1} sup_{y∈Y, ‖y‖_Y=1} b(x, y) < ∞  (H1)

and seeks x ∈ X with

b(x, y) = F(y) for all y ∈ Y.  (18)
The well-posedness of the weak formulation (18) follows from the continuous inf-sup condition 0 < β = inf_{x∈X, ‖x‖_X=1} sup_{y∈Y, ‖y‖_Y=1} b(x, y). The output of the adaptive dPG algorithm and all s > 0 satisfy

sup_{ℓ∈N_0} (1 + |T_ℓ| − |T_0|)^s η(T_ℓ) ≈ sup_{N∈N_0} (1 + N)^s min_{T∈T(N)} η(T).
The proof is part of the Ph.D. thesis [52] of F. Hellwig, who won the Körper prize of the GAMM for it.
5.4 Axioms of Adaptivity

This section outlines the axioms of adaptivity for the ADPGFEM algorithm and addresses some of the key aspects in their proofs. The notion of a distance δ(T̂, T) for a triangulation T and its admissible refinement T̂ and corresponding solutions (u_C, v_CR), (û_C, v̂_CR) to (23) is defined by

δ²(T̂, T) = |||v̂_CR − v_CR|||²_pw + ‖v̂_CR − v_CR‖²_{L²(Ω)}.

The reduction of the mesh-size prefactors of the explicit residual-based error estimator η in (24) and standard arguments prove the axioms of stability and reduction with positive generic constants Λ_1 and Λ_2 [31, Theorems 16 and 17],
|η̂(T̂ ∩ T) − η(T̂ ∩ T)| ≤ Λ_1 δ(T̂, T),  (A1)

η̂(T̂ \ T) − 2^{−1/(2d)} η(T \ T̂) ≤ Λ_2 δ(T̂, T).  (A2)
Let R_1 ⊂ T denote the set of refined simplices in T \ T̂ plus one additional layer of simplices around it. The proof of the discrete reliability axiom with the positive generic constant Λ_3 in

δ²(T̂, T) ≤ Λ_3 (η²(R_1) + h_0² η²)  (A3)

is based on an approximation v*_CR ∈ CR^1_0(T̂) with I_NC v*_CR = v_CR and |||v*_CR − v̂_CR|||_pw ≲ η(R_1) from [31, Lemma 20]. Although neither the error estimator η nor the distance function δ include the conforming component u_C ∈ S¹_0(T) of the solution (v_CR, u_C) to (23), the Scott-Zhang quasi-interpolation is necessary to control the error |||û_C − u_C||| in the proof of discrete reliability by [31, Proposition 11]

|||û_C − u_C||| ≲ δ(T̂, T) + η(R_1).

Let ω = int(∪R_1). The quasi-orthogonality is based essentially on the following bound for the energies E = −½ (f, v_CR)_{L²(Ω)} and Ê = −½ (f, v̂_CR)_{L²(Ω)} from [31, Theorem 18]

Ê − E ≤ κ²_NC δ²(T̂, T) + ¼ ‖h_T (f − v_CR)‖²_{L²(ω)}.

For h_0 sufficiently small and any 0 < δ, λ ≤ 1, set Λ_4 = 4 max{C_rel, 16 κ²_NC + C²_dP/δ} and ε = 4 max{δ C_rel, 8 κ²_NC λ}. The output (T_k)_{k∈N_0} of the ADPGFEM algorithm satisfies, for all ℓ, m ∈ N_0, that

Σ_{k=ℓ}^{ℓ+m} δ²(T_{k+1}, T_k) ≤ Λ_4 η²_ℓ + ε Σ_{k=ℓ}^{ℓ+m} η²(T_k).  (A4_ε)
6 Discontinuous Petrov-Galerkin in Nonlinear Computational Mechanics

This section investigates a primal dPG formulation for the nonlinear model problem from Sect. 4.2 and examines the existence of discrete solutions. Their uniqueness is much more challenging and the only result up to now is a computable a posteriori criterion. It allows us to decide after the computation whether the discrete equations have a unique solution as demonstrated in a concluding numerical example.
6.1 Nonlinear Discontinuous Petrov-Galerkin

This section delineates the nonlinear dPG methodology from [18] for the energy minimization problem from Sect. 4.1. This nonlinear model problem concerns the nonlinear map σ : R^d → R^d. A piecewise integration by parts in (14) and the introduction of the new variable t = σ(∇u) · n on ∂T lead to the nonlinear primal dPG method with F(v) = ∫_Ω f v dx and b : X × Y → R for X = H¹_0(Ω) × H^{−1/2}(∂T) and Y = H¹(T) defined by

b(u, t; v) = ∫_Ω σ(∇u) · ∇_pw v dx − ⟨t, v⟩_{∂T} =: ⟨B(u, t), y⟩_Y  (25)
for all x = (u, t) ∈ X = H¹_0(Ω) × H^{−1/2}(∂T) and y = v ∈ Y = H¹(T) with associated norms and the scalar product a in Y. Given the subspaces X_h = S¹_0(T) × P_0(E) and Y_h = P_1(T), the discrete problem minimizes the residual norm and seeks (u_h, t_h) = x_h ∈ X_h with

‖F − B(x_h)‖_{Y_h^*} = min_{ξ_h∈X_h} ‖F − B(ξ_h)‖_{Y_h^*}.  (26)
Although the existence of discrete solutions x_h to (19) follows almost immediately, the closeness of x_h to some continuous solution x is completely open (cf. the discussion in Sect. 4.2). The derivative D σ : R^d → M gives rise to the map

b'(u, t; w, s, v) = ∫_Ω ∇w · (D σ(∇u) ∇_pw v) dx − ⟨s, v⟩_{∂T}.  (27)
This defines a bounded bilinear form b'(u, t; •) : X × Y → R for any x = (u, t) ∈ X and the operator B associated with b belongs to C¹(X; Y*). Recall the equivalent mixed formulation from (22) for the model problem at hand, which seeks (u_h, t_h) ∈ X_h and v_h ∈ Y_h with

a(v_h, η_h) + b(u_h, t_h; η_h) = F(η_h) for all η_h ∈ Y_h,
b'(u_h, t_h; w_h, s_h, v_h) = 0 for all (w_h, s_h) ∈ X_h.  (28)
One critical point is the role of the stability condition in the nonlinear setting for a regular solution and its low-order discretizations (as the most natural first choice for nonlinear problems, partly because of limited known regularity properties). Since D σ(∇u) ∈ L^∞(Ω; S) is uniformly positive definite, the splitting lemma from the linear theory [20, Theorem 3.3] implies the inf-sup condition for the nondegenerate bilinear form b'(x; •, •) : X × Y → R. Hence, the solution x ∈ X to B(x) = F is regular. The discrete stability follows from the stability of the continuous form for piecewise constant ∇u_h and so the local discrete stability simply follows from the linearization.
6.2 Alternative Formulations

The analysis of dPG discretizations often relies on alternative formulations to (28) such as the reduced formulation similar to (23) seeking (u_h, v_h) ∈ S¹_0(T) × CR^1_0(T) with, for all w_CR ∈ CR^1_0(T) and w_C ∈ S¹_0(T),

a(v_h, w_CR) + ∫_Ω σ(∇u_h) · ∇_pw w_CR dx = ∫_Ω f w_CR dx,
∫_Ω ∇w_C · D σ(∇u_h) ∇_pw v_h dx = 0.  (29)
An equivalent least-squares formulation involves the piecewise constant matrix S_0 ∈ P_0(T; M) with S_0 = Π^0_T((• − mid(T)) ⊗ (• − mid(T))) and the linear operator H_0 : L²(Ω) → P_0(T; R^d) defined, for f ∈ L²(Ω), by H_0 f = Π^0_T(f (• − mid(T))). It can be shown that any x_h = (u_C, t_0) ∈ X_h and p_RT ∈ RT_0(T) with p_RT · n = t_0 on ∂T satisfy [18, Theorem 3.11]

‖F − b(x_h; •)‖²_{Y_h^*} = ‖(I_{d×d} + S_0)^{−1/2}(Π^0_T p_RT − σ(∇u_C) + H_0 f)‖²_{L²(Ω)} + ‖Π^0_T f + div p_RT‖²_{L²(Ω)}.  (30)
Consequently, any solution x h = (u C , t0 ) ∈ X h to (28) and pRT · n = t0 in ∂T minimizes the weighted least-squares functional (30).
6.3 Existence and Uniqueness of Discrete Solutions

This section sketches the proof of existence of discrete solutions in [18]. In order to overcome the lack of a theoretical uniqueness result, a computable criterion is presented for the check whether a discrete solution is unique or not. For q_RT ∈ RT_0(T) and v_C ∈ S¹_0(T), the isomorphism between RT_0(T) and P_0(E) from [27, Lemma 3.2] motivates the abbreviation b(v_C, q_RT; •) = b((v_C, (q_RT · n_T)_{T∈T}); •). Let u ∈ H¹_0(Ω) denote the exact solution of the model problem (14) with stress p = σ(∇u) ∈ H(div, Ω). The characterization of the residual as a least-squares functional in (30), a spectral analysis of the matrix I_{d×d} + S_0, and the fundamental equivalence (3) prove the a posteriori estimate, for any discrete (v_C, q_RT) ∈ S¹_0(T) × RT_0(T), [18, Theorem 4.1]

‖p − q_RT‖²_{H(div,Ω)} + |||u − v_C|||² ≈ ‖F − b(v_C, q_RT; •)‖²_{Y_h^*} + ‖(1 − Π^0_T) f‖²_{L²(Ω)} + ‖(1 − Π^0_T) q_RT‖²_{L²(Ω)}.

This allows to establish the growth condition
lim_{‖ξ_h‖_X → ∞} ‖F − B(ξ_h)‖_{Y_h^*} = ∞.  (31)
The existence of discrete solutions to (26) follows with the direct method in the calculus of variations and, in the present case of finite dimensions, from the global minimum of a continuous functional on a compact set from the growth condition (31). The uniqueness of the exact solution (u, t) on the continuous level does not imply the uniqueness of discrete solutions. There is, however, a sufficient condition for a global unique discrete solution [18, Theorem 4.4] to the reduced formulation (29). Notice that v_h = v = 0 on the continuous level h = 0 satisfies (32).

Theorem 3 (a posteriori uniqueness) Suppose that (u_h, v_h) ∈ S¹_0(T) × CR^1_0(T) solves (29) with D σ ∈ C(R^d; S) globally Lipschitz continuous and

Lip(D σ)(1 + C_F²)/γ_1² ‖∇_pw v_h‖_{L^∞(Ω)} < 1  (32)

with the Friedrichs constant C_F from ‖•‖_{L²(Ω)} ≤ C_F |||•||| in H¹_0(Ω). Then (29) has exactly one solution (u_h, v_h) ∈ S¹_0(T) × CR^1_0(T).
6.4 Numerical Experiments

This section presents numerical experiments with the adaptive DPGFEM for the nonlinear model problem from Sect. 4.1 on the L-shaped domain Ω. Analogously to Sect. 4.3, the discrete solution is computed by the minimization of the generalized least-squares functional

GLS(f; p_LS, u_LS) = ‖(I_{d×d} + S_0)^{−1/2}(Π^0_T p_RT − σ(∇u_C) + H_0 f)‖²_{L²(Ω)} + ‖Π^0_T f + div p_RT‖²_{L²(Ω)}

from (30). The adaptive algorithm ADPGFEM is driven by the built-in least-squares error estimator

η²(T, T) = ‖(I_{d×d} + S_0)^{−1/2}(Π^0_T p_RT − σ(∇u_C) + H_0 f)‖²_{L²(T)} + ‖Π^0_T f + div p_RT‖²_{L²(T)} + ‖h_T f‖²_{L²(T)}.

For the L-shaped domain, the smallest eigenvalue λ_1 = 9.6397238 of the Laplacian with homogeneous Dirichlet boundary conditions yields the Friedrichs constant C_F = 1/√λ_1 = 0.32208293. Since ‖∇_pw v_ℓ‖_{L^∞(Ω)} < γ_1² Lip(D σ)^{−1}(1 + C_F²)^{−1} = 0.45300630 solely holds for the first three levels ℓ ∈ N_0 of the adaptive computation in Fig. 4 (and the first level in case of uniform refinement), Theorem 3 applies and provides the global uniqueness of the respective discrete solutions. The error estimator converges with the optimal rate of 0.5 in the adaptive computation. In the case of the uniform refinement, the term ‖h_T f‖_{L²(Ω)} dominates the remaining
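The criterion (32) is a one-line computation once ‖∇_pw v_h‖_{L^∞(Ω)} is known. A hypothetical helper with the constants quoted above (γ_1 = 1, Lip(D σ) ≤ 2, λ_1 = 9.6397238) is sketched below; the sample value 0.3 is invented for illustration.

# Sketch of the computable uniqueness check (32).
import math

def uniqueness_criterion(grad_v_inf, lip_Dsigma=2.0, gamma_1=1.0,
                         lambda_1=9.6397238):
    C_F = 1.0 / math.sqrt(lambda_1)                    # Friedrichs constant
    bound = gamma_1**2 / (lip_Dsigma * (1.0 + C_F**2))
    return grad_v_inf < bound, bound                   # True => unique discrete solution

print(uniqueness_criterion(0.3))   # bound approx 0.45300630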
Fig. 4 Convergence history plot for the convex minimization problem from Sect. 6.4
error estimator and the displayed optimal rate is still a preasymptotic behavior. The suboptimal convergence rate of about 0.4 of the energy difference √(E(u_ℓ) − E(u)) supports this interpretation.
7 Hybrid High-Order Method

The hybrid high-order (HHO) method is a new class of emerging spatial discretizations that is flexible in the stabilization and allows polytopal element domains with polynomials of any order. The HHO methodology introduces two types of unknowns in the approximation of a variable u ∈ W^{1,p}_0(Ω; R^m); the first u_T is associated to the cells and the second u_F to the sides [48]. In the weak formulation, R u_h and G u_h replace u and D u with an analogue treatment of the test functions. The variables u_T and u_F are linked by a positive semi-definite symmetric bilinear form called stabilization in the HHO context.

7.1 Discrete Ansatz Space

The discrete ansatz space V_h for W^{1,p}_0(Ω; R^m) with 1 < p < ∞ is fairly general, for instance, V_h = P_k(T; R^m) × P_k(F(Ω); R^m) with the space of piecewise polynomials P_k(T; R^m) in each cell and the piecewise polynomials P_k(F; R^m) along each side of degree at most k ∈ N_0. The interior sides F(Ω) give rise to P_k(F(Ω); R^m) as the subspace of all (v_F)_{F∈F} ∈ P_k(F; R^m) with the convention that v_F = 0 on any
boundary side F ∈ F(∂Ω) for homogeneous boundary conditions. In other words, the notation v_h ∈ V_h means that v_h = (v_T, v_F) = ((v_T)_{T∈T}, (v_F)_{F∈F}) for some v_T ∈ P_k(T; R^m) and v_F ∈ P_k(F(Ω); R^m) with the identification v_T = v_T|_T ∈ P_k(T; R^m) and v_F = v_F|_F ∈ P_k(F; R^m). The discrete space V_h is endowed with the norm ‖•‖_h defined for v_h = (v_T, v_F) ∈ V_h by

‖v_h‖_h^p = ‖D v_T‖^p_{L^p(Ω)} + Σ_{T∈T} Σ_{F∈F(T)} h_F^{1−p} ‖v_F − v_T‖^p_{L^p(F)}.

The interpolation I : W^{1,p}_0(Ω; R^m) → V_h maps v ∈ W^{1,p}_0(Ω; R^m) onto I v = (Π^k_T v, Π^k_F v) ∈ V_h.
∫_Ω D_pw R v_h : D_pw φ_{k+1} dx = −∫_Ω v_T · Δ_pw φ_{k+1} dx + Σ_{F∈F} ∫_F v_F · [D_pw φ_{k+1} n_F]_F ds  (33)

for any φ_{k+1} ∈ P_{k+1}(T; R^m). The right-hand side of (33) vanishes for piecewise constant test functions φ_{k+1} ∈ P_0(T; R^m), so (33) defines R v_h uniquely up to an additive constant per simplex that is fixed by

∫_T R v_h dx = ∫_T v_T dx for any T ∈ T.  (34)
The unique solution R v_h to (33) and (34) defines the potential reconstruction operator R : V_h → P_{k+1}(T; R^m). The gradient reconstruction operator G : V_h → Σ_h for a linear subspace Σ_h with P_k(T; M) ⊆ Σ_h ⊆ P_{k+1}(T; M) maps v_h = (v_T, v_F) ∈ V_h onto G v_h ∈ Σ_h such that, for any τ_h ∈ Σ_h,

∫_Ω G v_h : τ_h dx = −∫_Ω v_T · div τ_h dx + Σ_{F∈F} ∫_F v_F · [τ_h n_F]_F ds  (35)

with the normal jump [τ_h n_F]_F of τ_h across F. In particular, G v_h is the Riesz representation of the linear functional on the right-hand side of (35) in the Hilbert space Σ_h endowed with the L² scalar product. The positive semi-definite symmetric bilinear form s : V_h × V_h → R serves as a face-based penalization of v_T and v_F with the properties ‖∇_pw R v_h‖^p_{L^p(Ω)} + s(v_h, v_h) ≈ ‖v_h‖^p_h and s(I φ_{k+1}, v_h) = 0 for
any v_h ∈ V_h, φ_{k+1} ∈ S^{k+1}_0(T; R^m). An explicit construction of s can be found in [48, Eq. (25)].
7.3 HHO in Computational Mechanics

This section introduces an HHO method for the elliptic Poisson model problem −Δu = f with homogeneous Dirichlet boundary conditions on a polyhedral bounded Lipschitz domain Ω ⊂ R³ with right-hand side f ∈ L²(Ω). The discrete space V_h, the operators G, R, and the stabilization s from Sect. 7 with m = 1, p = 2, and Σ_h = P_k(T; R³) provide a prototypical skeletal method for the Poisson model problem: Find u_h ∈ V_h such that any v_h ∈ V_h satisfies

∫_Ω G u_h · G v_h dx + s(u_h, v_h) = ∫_Ω f R v_h dx.  (36)
The state-of-the-art for rate-optimal adaptive schemes in [25, 39] (see also Sect. 3.4) points out that existing a posteriori error estimators for the HHO method [49] exhibit a (partly hidden) negative power of the mesh-size in the stabilization term. This excludes the reduction properties despite the numerical evidence for optimal empirical convergence rates. A remedy is the design of an alternative stabilization-free a posteriori error control as for the rate-optimal adaptive least-squares FEMs in Sect. 3.3.
7.4 Reliable and Efficient Error Control

Given an approximation q ∈ H¹(T; R³) of ∇u for the weak solution u ∈ H¹_0(Ω) to the Poisson model problem, suppose that q satisfies the following two abstract conditions.
(a) Any v_C ∈ S¹_0(T) satisfies (q, ∇v_C)_{L²(Ω)} = (f, v_C)_{L²(Ω)}.
(b) Any p_RT ∈ RT_0(T; R³) with div p_RT = 0 satisfies (q, p_RT)_{L²(Ω)} = 0.
The following theorem uses the notation in (but is not restricted to) three space dimensions. For any T ∈ T with the volume |T|, define the local error contribution

η²(T) = |T|^{2/3} (‖curl q‖²_{L²(T)} + ‖f + div q‖²_{L²(T)}) + |T|^{1/3} Σ_{F∈F(T)∩F(Ω)} ‖[q]_F‖²_{L²(F)} + |T|^{1/3} Σ_{F∈F(T)∩F(∂Ω)} ‖[q × n_F]_F‖²_{L²(F)}.  (37)

Then η² = Σ_{T∈T} η²(T) is reliable and efficient.
Theorem 4 (reliability and efficiency) Any q ∈ H¹(T; R³) with (a)–(b) is reliable in the sense that ‖q − ∇u‖²_{L²(Ω)} ≲ η². Conversely, any piecewise polynomial q ∈ P_r(T; R³) of degree r ∈ N_0 and the oscillation osc_r²(f, T) = Σ_{T∈T} h_T² ‖f − Π^r_T f‖²_{L²(T)} satisfy efficiency in the sense that η² ≲ ‖q − ∇u‖²_{L²(Ω)} + osc_r²(f, T). The first key observation is that the HHO method satisfies (a)–(b) for q = G u_h with Σ_h = P_k(T; R³) in (35) and an appropriate stabilization s from [48, Eq. (25)]. The second key observation is that (a)–(b) allow for a stabilization-free a posteriori control. Since the novel error estimator has positive powers of the mesh-size, the proof of the axioms (A1)–(A2) in Sect. 3.4 appears routine work, while novel ideas are required for (A3)–(A4).
7.5 Numerical Experiment on L-Shaped Domain with Corner Singularity

The approximation of the Poisson model problem on the non-convex L-shaped domain Ω = (−1, 1)² \ [0, 1)² with constant right-hand side f ≡ 1 leads to reduced elliptic regularity of the solution u ∈ H^{1+s}(Ω) with 1/2 ≤ s < 1. The following numerical example compares the performance of the adaptive HHO algorithms driven by the novel a posteriori control η from (37) and by the a posteriori estimate ε from [49]. Undisplayed experiments show a suboptimal convergence rate of 0.3 on uniform meshes for both estimators and any polynomial degree, while Figs. 5 and 6 display optimal convergence rates η ∼ ε ∼ ndof^{−(k+1)/2} with k = 0, . . . , 5 for both adaptive algorithms.
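Empirical rates such as the exponent in η ∼ ndof^{−(k+1)/2} are typically read off as the slope of a log-log least-squares fit. A small sketch with synthetic data follows; the data values are invented for illustration only and do not reproduce Figs. 5 and 6.

# Least-squares fit of an empirical convergence rate from (ndof, estimator) pairs.
import numpy as np

def empirical_rate(ndof, eta):
    slope, _ = np.polyfit(np.log(ndof), np.log(eta), 1)
    return -slope

ndof = np.array([1e2, 4e2, 1.6e3, 6.4e3])
eta = 5.0 * ndof**(-1.0)                 # optimal rate (k+1)/2 = 1 for k = 1
print(empirical_rate(ndof, eta))         # approx 1.0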
8 HHO in Nonlinear Computational Mechanics

Nonlinear phenomena are much richer than a simulation via a sequence of linear problems could possibly describe: degeneracy, enforced oscillations (called microstructures) with measure-valued generalized solutions, or the Lavrentiev gap phenomenon highlight additional difficulties. These limit the use of standard (conforming) FEMs; there are examples in nonlinear elasticity, where standard FEMs approximate a totally wrong solution [59]. A remedy is the use of nonstandard finite element functions, which overcome the reliability-efficiency gap [32] or the Lavrentiev gap phenomenon [59].
Fig. 5 Convergence history plot of η and ε in adaptive HHO method for the Poisson model problem in Sect. 7.3 driven by η from (37)
Fig. 6 Convergence history plot of η and ε in adaptive HHO method for the Poisson model problem in Sect. 7.3 driven by ε from [49]
8.1 A Class of Degenerate Convex Minimization Problems

The relaxation procedure in the calculus of variations [43] applies to minimization problems with non-convex energies and enforced microstructures [3] and provides an upscaling to a macroscopic model with a quasi-convexified energy density. In some model problems in nonlinear elasticity, multi-well problems, and topology optimization, the resulting energy density W ∈ C¹(M) with M = R^{m×d} is degenerate
convex with a two-sided growth of order 1 < p < ∞ plus a convexity control with parameters 1 < p, p', r < ∞, 0 ≤ s < ∞, 1/p + 1/p' = 1: There exist positive constants c_1, . . . , c_3 and non-negative constants c_4, c_5 such that, for any A, B ∈ M,

c_1 |A|^p − c_2 ≤ W(A) ≤ c_3 |A|^p + c_4,  (38)
|D W(A) − D W(B)|^r ≤ c_5 (1 + |A|^s + |B|^s) (W(B) − W(A) − D W(A) : (B − A)).  (39)
Given a right-hand side f ∈ L^{p'}(Ω; R^m) in a bounded polyhedral Lipschitz domain Ω ⊂ R^d, the minimal energy

E(v) = ∫_Ω W(D v) dx − ∫_Ω f · v dx amongst v ∈ V = W^{1,p}_0(Ω; R^m)  (40)
is attained, but there may be more than one minimizer. Nevertheless, the convexity control (39) leads to a unique stress σ = D W(D u) ∈ W^{1,p'}_loc(Ω; M) ∩ W^{p'}(div, Ω; M) with f + div σ = 0. This motivates the definition of the dual energy

E*(τ) = −∫_Ω W*(τ) dx for τ ∈ L^{p'}(Ω; M),

with the convex conjugate W* ∈ C(M) of W. The duality D u ∈ ∂W*(σ) means D u : σ = W(D u) + W*(σ) a.e. in Ω and an integration by parts verifies that σ maximizes E* in Q = {τ ∈ W^{p'}(div, Ω; M) : f + div τ = 0} without duality gap

E(u) = min E(V) = max E*(Q) = E*(σ).

The non-uniqueness of the minimizers (on both the continuous and discrete level) means no control on the primal variables and leads to (reliable) upper error bounds that converge with a lower rate than (efficient) lower error bounds. The only estimate known to overcome this so-called reliability-efficiency gap so far results from a discrete mixed FEM [32], which is equivalent to a Crouzeix-Raviart FEM without a discrete duality gap. The recent HHO method in [41] generalizes the results from [32] to higher-order discretizations below.
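The convex conjugate W*(τ) = sup_{A∈M}(τ : A − W(A)) behind the dual energy can be approximated by a discrete Legendre transform. A scalar Python illustration with the quadratic toy density W(a) = a²/2, whose conjugate W*(s) = s²/2 is known in closed form, is given below; it is an assumption-free toy and not part of the cited analysis.

# Discrete Legendre transform W*(s) = sup_a (s*a - W(a)) on a grid.
import numpy as np

def conjugate(W, a_grid, s_grid):
    A, S = np.meshgrid(a_grid, s_grid)          # rows: s values, columns: a values
    return np.max(S * A - W(A), axis=1)

a = np.linspace(-5.0, 5.0, 2001)
W = lambda x: 0.5 * x**2                        # then W*(s) = s**2/2
s = np.array([-1.0, 0.0, 2.0])
print(conjugate(W, a, s))                       # approx [0.5, 0.0, 2.0]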
8.2 The Unstabilized HHO Method

The gradient reconstruction (35) in the space Σ_h = RT^pw_k(T; M) of piecewise Raviart-Thomas finite elements ensures stability in the sense that ‖v_h‖_h ≈ ‖G v_h‖_{L^p(Ω)}, so no additional stabilization s is required [1]. The discrete problem minimizes the discrete energy

E_h(v_h) = ∫_Ω W(G v_h) dx − ∫_Ω f · v_T dx amongst v_h = (v_T, v_F) ∈ V_h.
Let u_h be a minimizer of E_h in V_h. In this class of degenerate convex functions with convexity control (39), the associated discrete stress σ_h = Π_{Σ_h} D W(G u_h) is unique (i.e., independent of the choice of u_h) with the L² orthogonal projection Π_{Σ_h} : L¹(Ω; M) → Σ_h. This projection acts cellwise, but leads to σ_h ∈ W^{p'}(div, Ω; M) and Π^k_T f + div σ_h = 0 globally in the sense of distributions. This may surprise at first glance, but it is a universal consequence of the discrete Euler-Lagrange equations

∫_Ω σ_h : G v_h dx = ∫_Ω f · v_T dx for any v_h = (v_T, v_F) ∈ V_h.  (41)

The choice v_h = (0, v_F) for any v_F ∈ P_k(F(Ω); R^m) in (41) and the definition of G in (35) provide the L² orthogonality [σ_h n_F]_F ⊥ P_k(F; R^m) for all F ∈ F(Ω). In particular, the normal jump [σ_h n_F]_F vanishes along any inner side. It is well established that the continuity of the normal components of σ_h ∈ Σ_h leads to σ_h ∈ H(div, Ω; M); the same arguments prove σ_h ∈ W^{p'}(div, Ω; M).
8.3 A priori Analysis

The H(div) conformity of the discrete stress σ_h motivates a comparison to the mixed Raviart-Thomas FEM that leads to the a priori control in Theorem 5 with the discrete analogue Q_h = {τ_h ∈ RT_k(T; M) : Π^k_T f + div τ_h = 0} to Q and the data oscillation osc_k(f, T) = ‖h_T (1 − Π^k_T) f‖_{L^{p'}(Ω)}. The overall assumption (p − 1) r = p + s on the parameters p, r, s follows a rule of thumb on the convexity control (39) of W and holds in all relevant examples. Let u_h minimize E_h in V_h.

Theorem 5 (a priori) There exist positive constants C_1, . . . , C_4 such that the (unique) discrete stress σ_h = Π_{Σ_h} D W(G u_h) satisfies

C_1^{−1} ‖σ − σ_h‖^r_{L^{p'}(Ω)} + C_2^{−1} ‖σ − D W(G u_h)‖^r_{L^{p'}(Ω)} ≤ E(u) − max E*(Q_h) + C_3 osc_k(f, T) + C_4 min_{v_h∈V_h} ‖D u − G v_h‖^r_{L^p(Ω)}.  (42)
The a priori estimate (42) is similar to the best-approximation result in [36] for the lowest-order conforming FEMs. This guarantees the convergence rate ‖σ − σ_h‖_{L^{p'}(Ω)} + ‖σ − D W(G u_h)‖_{L^{p'}(Ω)} ≲ h_max^{(k+1)/r}, which can be improved if further regularity of or control over the primal variable u is present.
8.4 A posteriori Analysis

It is well-known for conforming FEMs that the convexity control (39) provides the bound ‖σ − D W(D v)‖^r_{L^{p'}(Ω)} ≲ E(v) − E(u) for any v ∈ V. Provided E(u) =
min E(V) has a known lower energy bound, this leads to an a posteriori stress error estimate in a conforming discretization for the approximation v ∈ V (even for inexact solve) and its (computable) energy E(v). Nonconforming, mixed, and HHO discretizations can be utilized for lower energy bounds (LEBs).

Theorem 6 (guaranteed lower-energy bound) Under the assumption of Theorem 5, the discrete stress σ_h = Π_{Σ_h} D W(G u_h) satisfies

C_5^{−1} ‖σ − σ_h‖^r_{L^{p'}(Ω)} ≤ E(u) − E*(σ_h) + C_3 osc_k(f, T) =: LEB.

The numerical benchmark in Sect. 8.5 displays superlinear convergence rates of LEB. Since conforming FEMs lead to upper bounds, convexity allows for error control of the stress variable. The a posteriori estimate enables guaranteed error control without additional information and motivates an adaptive scheme.

Theorem 7 (a posteriori) Under the assumption of Theorem 5, the discrete stress σ_h = Π_{Σ_h} D W(G u_h) satisfies

C_6^{−1} ‖σ − σ_h‖^r_{L^{p'}(Ω)} + C_7^{−1} ‖σ − D W(G u_h)‖^r_{L^{p'}(Ω)} ≤ E_h(u_h) − E*(σ_h) + C_3 osc_k(f, T) + C_8 min_{v∈V} ‖G u_h − D v‖^r_{L^p(Ω)} =: RHS.  (43)
Notice that the right-hand side of (43) is computable with some post-processing v ∈ V (a suitable choice utilizes an averaging technique on a potential reconstruction of u h ) and motivates an adaptive scheme.
8.5 A Topology Optimization Problem: Optimal Design

The optimal design problem seeks the optimal distribution of two materials with fixed amounts to fill a given domain for maximal torsion stiffness [4, 55]. For fixed parameters λ = 0.0145, μ_1 = 1, μ_2 = 2, t_1 = √λ, and t_2 = 2 t_1, the energy density W(a) = ψ(t), a ∈ R², t = |a| ≥ 0 with

ψ(t) = μ_2 t²/2 if 0 ≤ t ≤ t_1,
ψ(t) = t_1 μ_2 (t − t_1/2) if t_1 ≤ t ≤ t_2,
ψ(t) = μ_1 t²/2 − t_1 μ_2 (t_1/2 − t_2/2) if t_2 ≤ t

satisfies (38)–(39) with the parameters r = p = 2, s = 0, and the constants c_1 = μ_1/2, c_2 = c_4 = 0, c_3 = μ_2/2, and c_5 = 2μ_2 from [4, Proposition 4.2]. Let Ω = (0, 1)² \ [0, 1) × (−1, 0] and f ≡ 1 with the reference value min E(V) = −0.074551285. The material distribution consists of an interior region (blue), a boundary region (yellow), and a transition layer, also called microstructure zone with a fine mixture of the two materials as depicted in Fig. 7.
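The piecewise definition of ψ translates directly into code. The following sketch implements ψ and checks the two-sided growth (38) with c_1 = μ_1/2 and c_3 = μ_2/2 at a few sample points; it is an illustration only and not taken from the implementation behind Figs. 7–9.

# Degenerate energy density psi from Sect. 8.5 and a check of the growth (38).
import math

lam, mu1, mu2 = 0.0145, 1.0, 2.0
t1, t2 = math.sqrt(lam), 2.0 * math.sqrt(lam)

def psi(t):
    if t <= t1:
        return mu2 * t**2 / 2.0
    if t <= t2:
        return t1 * mu2 * (t - t1 / 2.0)
    return mu1 * t**2 / 2.0 - t1 * mu2 * (t1 / 2.0 - t2 / 2.0)

for t in [0.0, t1 / 2, t1, (t1 + t2) / 2, t2, 1.0, 3.0]:
    assert mu1 / 2.0 * t**2 <= psi(t) + 1e-14 <= mu2 / 2.0 * t**2 + 1e-12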
Fig. 7 Material distribution for the optimal design problem in Sect. 8 on an adaptive mesh (k = 0) of the L-shaped domain with 5808 triangles
Fig. 8 Convergence history plot of RHS in (43) (up to some multiplicative constant) for the optimal design problem in Sect. 8 on adaptive (solid line) and uniform (dashed line) meshes
The a posteriori error control η converges suboptimally with a convergence rate 2/3 on uniform meshes for any polynomial degree k in Fig. 8. The adaptive algorithm refines towards the reentrant corner as well as the microstructure zone in Fig. 7. This
Fig. 9 Convergence history plot of LEB for the optimal design problem in Sect. 8 on adaptive (solid line) and uniform (dashed line) meshes
improves the convergence rate of η to 3/4 for k = 0 and 5/4 for k = 3. Figure 9 provides evidence for the first superlinearly convergent (guaranteed) LEB.

Acknowledgements Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 255510958 – SPP 1748.
References 1. M. Abbas, A. Ern, N. Pignet, Hybrid high-order methods for finite deformations of hyperelastic materials. Comput. Mech. 62(4), 909–928 (2018) 2. D.N. Arnold, R. Winther, Mixed finite elements for elasticity. Numer. Math. 92(3), 401–419 (2002) 3. J.M. Ball, R.D. James, Fine phase mixtures as minimizers of energy. Arch. Ration. Mech. Anal. 100(1), 13–52 (1987) 4. S. Bartels, C. Carstensen, A convergent adaptive finite element method for an optimal design problem. Numer. Math. 108(3), 359–385 (2008) 5. P. Binev, W. Dahmen, R. DeVore, Adaptive finite element methods with convergence rates. Numer. Math. 97(2), 219–268 (2004) 6. P. Binev, R. DeVore, Fast computation in adaptive tree approximation. Numer. Math. 97(2), 193–217 (2004) 7. P.B. Bochev, M.D. Gunzburger, Least-squares finite element methods, Applied Mathematical Sciences, vol. 166. (Springer, New York, 2009) 8. D. Braess, Finite Elements, 3rd edn. (Cambridge University Press, Cambridge, 2007). Theory, fast solvers, and applications in elasticity theory. Translated from the German by Larry L, Schumaker 9. S.C. Brenner, L.R. Scott, The Mathematical Theory of Finite Element Methods, Texts in Applied Mathematics, vol. 15, 2nd edn. (Springer, New York, 2002)
10. P. Bringmann, Adaptive least-squares finite element method with optimal convergence rates. Ph.D. thesis, Humboldt-Universität zu Berlin (2020) 11. P. Bringmann, C. Carstensen, An adaptive least-squares FEM for the Stokes equations with optimal convergence rates. Numer. Math. 135(2), 459–492 (2017) 12. P. Bringmann, C. Carstensen, h-adaptive least-squares finite element methods for the 2D Stokes equations of any order with optimal convergence rates. Comput. Math. Appl. 74(8), 1923–1939 (2017) 13. P. Bringmann, C. Carstensen, G. Starke, An adaptive least-squares FEM for linear elasticity with optimal convergence rates. SIAM J. Numer. Anal. 56(1), 428–447 (2018) 14. Z. Cai, J. Korsawe, G. Starke, An adaptive least squares mixed finite element method for the stress-displacement formulation of linear elasticity. Numer. Methods Partial Differ. Equ. 21(1), 132–148 (2005) 15. Z. Cai, B. Lee, P. Wang, Least-squares methods for incompressible Newtonian fluid flow: linear stationary problems. SIAM J. Numer. Anal. 42(2), 843–859 (2004) 16. Z. Cai, G. Starke, Least-squares methods for linear elasticity. SIAM J. Numer. Anal. 42(2), 826–842 (2004) 17. C. Carstensen, Collective marking for adaptive least-squares finite element methods with optimal rates. Math. Comp. 89(321), 89–103 (2020) 18. C. Carstensen, P. Bringmann, F. Hellwig, P. Wriggers, Nonlinear discontinuous Petrov-Galerkin methods. Numer. Math. 139(3), 529–561 (2018) 19. C. Carstensen, L. Demkowicz, J. Gopalakrishnan, A posteriori error control for DPG methods. SIAM J. Numer. Anal. 52(3), 1335–1353 (2014) 20. C. Carstensen, L. Demkowicz, J. Gopalakrishnan, Breaking spaces and forms for the DPG method and applications including Maxwell equations. Comput. Math. Appl. 72(3), 494–522 (2016) 21. C. Carstensen, G. Dolzmann, A posteriori error estimates for mixed FEM in elasticity. Numer. Math. 81(2), 187–209 (1998) 22. C. Carstensen, G. Dolzmann, S.A. Funken, D.S. Helm, Locking-free adaptive mixed finite element methods in linear elasticity. Comput. Methods Appl. Mech. Engrg. 190(13–14), 1701– 1718 (2000) 23. C. Carstensen, A.K. Dond, H. Rabus, Quasi-optimality of adaptive mixed FEMs for nonselfadjoint indefinite second-order linear elliptic problems. Comput. Methods Appl. Math. 19(2), 233–250 (2019) 24. C. Carstensen, M. Eigel, J. Gedicke, Computational competition of symmetric mixed FEM in linear elasticity. Comput. Methods Appl. Mech. Engrg. 200(41–44), 2903–2915 (2011) 25. C. Carstensen, M. Feischl, M. Page, D. Praetorius, Axioms of adaptivity. Comput. Math. Appl. 67(6), 1195–1253 (2014) 26. C. Carstensen, D. Gallistl, J. Gedicke, Residual-based a posteriori error analysis for symmetric mixed Arnold-Winther FEM. Numer. Math. 142(2), 205–234 (2019) 27. C. Carstensen, D. Gallistl, F. Hellwig, L. Weggler, Low-order dPG-FEM for an elliptic PDE. Comput. Math. Appl. Int. J. 68(11), 1503–1512 (2014) 28. C. Carstensen, D. Günther, J. Reininghaus, J. Thiele. The Arnold-Winther mixed FEM in linear elasticity. Part i: implementation and numerical verification. Comput. Methods Appl. Mech. Eng. 197(33), 3014–3023 (2008) 29. C. Carstensen, F. Hellwig, Low-order discontinuous Petrov-Galerkin finite element methods for linear elasticity. SIAM J. Numer. Anal. 54(6), 3388–3410 (2016) 30. C. Carstensen, F. Hellwig, Constants in discrete Poincaré and Friedrichs inequalities and discrete quasi-interpolation. Comput. Math. Appl. 18(3), 433–450 (2018) 31. C. Carstensen, F. 
Hellwig, Optimal convergence rates for adaptive lowest-order discontinuous Petrov-Galerkin schemes. SIAM J. Numer. Anal. 56(2), 1091–1111 (2018) 32. C. Carstensen, D.J. Liu, Nonconforming FEMs for an optimal design problem. SIAM J. Numer. Anal. 53(2), 874–894 (2015) 33. C. Carstensen, R. Ma, Adaptive mixed finite element method for non-selfdajoint indefinite second-order elliptic PDEs with optimal rates. Submitted to SIAM J. Numer. Anal. (2020)
34. C. Carstensen, E.-J. Park, Convergence and optimality of adaptive least squares finite element methods. SIAM J. Numer. Anal. 53(1), 43–62 (2015) 35. C. Carstensen, E.J. Park, P. Bringmann, Convergence of natural adaptive least squares finite element methods. Numer. Math. 136(4), 1097–1115 (2017) 36. C. Carstensen, P. Plecháˇc, Numerical solution of the scalar double-well problem allowing microstructure. Math. Comp. 66(219), 997–1026 (1997) 37. C. Carstensen, S. Puttkammer, A low-order discontinuous Petrov-Galerkin method for the Stokes equations. Numer. Math. 140(1), 1–34 (2018) 38. C. Carstensen, H. Rabus, An optimal adaptive mixed finite element method. Math. Comp. 80(274), 649–667 (2011) 39. C. Carstensen, H. Rabus, Axioms of adaptivity with separate marking for data resolution. SIAM J. Numer. Anal. 55(6), 2644–2665 (2017) 40. C. Carstensen, J. Storn, Asymptotic exactness of the least-squares finite element residual. SIAM J. Numer. Anal. 56(4), 2008–2028 (2018) 41. C. Carstensen, N.T. Tran, Unstabilized Hybrid High-Order method for a class of degenerate convex minimization problems. Submitted (2020) 42. A. Cohen, W. Dahmen, G. Welper, Adaptivity and variational stabilization for convectiondiffusion equations. ESAIM. Math. Model. Numer. Anal. 46(5), 1247–1273 (2012) 43. B. Dacorogna, Direct Methods in the Calculus of Variations, Applied Mathematical Sciences, vol. 78, 2nd edn. (Springer, New York, 2008) 44. L. Demkowicz, J. Gopalakrishnan, A class of discontinuous Petrov-Galerkin methods. Part I: the transport equation. Comput. Methods Appl. Mech. Engrg. 199(23–24), 1558–1572 (2010) 45. L. Demkowicz, J. Gopalakrishnan, A class of discontinuous Petrov-Galerkin methods. II. optimal test functions. Numer. Methods Partial Differ. Equ. 27(1), 70–105 (2011) 46. L. Demkowicz, J. Gopalakrishnan, A primal DPG method without a first-order reformulation. Comput. Math. Appl. Int. J. 66(6), 1058–1064 (2013) 47. L. Demkowicz, J. Gopalakrishnan, A.H. Niemi, A class of discontinuous Petrov-Galerkin methods. Part III: adaptivity. Appl. Numer. Math. 62(4), 396–427 (2012) 48. D.A. Di Pietro, A. Ern, A hybrid high-order locking-free method for linear elasticity on general meshes. Comput. Methods Appl. Mech. Engrg. 283, 1–21 (2015) 49. D.A. Di Pietro, R. Specogna, An a posteriori-driven adaptive mixed high-order method with application to electrostatics. J. Comput. Phys. 326, 35–55 (2016) 50. W. Dörfler, A convergent adaptive algorithm for Poisson’s equation. SIAM J. Numer. Anal. 33(3), 1106–1124 (1996) 51. T. Führer, D. Praetorius, A short note on plain convergence of adaptive least-squares finite element methods. Comput. Math. Appl. Int. J. 80(6), 1619–1632 (2020) 52. F. Hellwig, Adaptive Discontinuous Petrov-Galerkin Finite-Element-Methods. Ph.D. thesis, Humboldt-Universität zu Berlin (2018). Humboldt-Universität zu Berlin 53. J. Hu, G. Yu, A unified analysis of quasi-optimal convergence for adaptive mixed finite element methods. SIAM J. Numer. Anal. 56(1), 296–316 (2018) 54. M. Igelbüscher, A. Schwarz, K. Steeger, J. Schröder, Modified mixed least-squares finite element formulations for small and finite strain plasticity. Int. J. Numer. Methods Eng. 117(1), 141–160 (2019) 55. R.V. Kohn, G. Strang, Optimal design and relaxation of variational problems. I. Commun. Pure Appl. Math. 39(1), 113–137 (1986) 56. J.M. Maubach, Local bisection refinement for n-simplicial grids generated by reflection. SIAM J. Sci. Comput. 16(1), 210–227 (1995) 57. B. Müller, G. Starke, A. Schwarz, J. 
Schröder, A first-order system least squares method for hyperelasticity. SIAM J. Sci. Comput. 36(5), B795–B816 (2014) 58. B. Müller, Mixed Least Squares Finite Element Methods Based on Inverse Stress-Strain Relations in Hyperelasticity. Ph.D. thesis, Universität Duisburg-Essen (2015) 59. C. Ortner, Nonconforming finite-element discretization of convex variational problems. IMA J. Numer. Anal. 31(3), 847–864 (2011)
Adaptive Least-Squares, Discontinuous Petrov-Galerkin …
147
60. A. Schwarz, J. Schröder, G. Starke, Least-squares mixed finite elements for small strain elastoviscoplasticity. Int. J. Numer. Methods Eng. 77(10), 1351–1370 (2009) 61. A. Schwarz, K. Steeger, M. Igelbüscher, J. Schröder, Different approaches for mixed LSFEMs in hyperelasticity: application of logarithmic deformation measures. Int. J. Numer. Methods Eng. 115(9), 1138–1153 (2018) 62. L.R. Scott, S. Zhang, Finite element interpolation of nonsmooth functions satisfying boundary conditions. Math. Comp. 54(190), 483–493 (1990) 63. K.G. Siebert, A convergence proof for adaptive finite elements without lower bound. IMA J. Numer. Anal. 31(3), 947–970 (2011) 64. G. Starke, An adaptive least-squares mixed finite element method for elasto-plasticity. SIAM J. Numer. Anal. 45(1), 371–388 (2007) 65. R. Stevenson, The completion of locally refined simplicial partitions created by bisection. Math. Comput. 77(261), 227–241 (2008) 66. J. Storn, Topics in Least-Squares and Discontinuous Petrov-Galerkin Finite Element Analysis. Ph.D. thesis, Humboldt-Universität zu Berlin (2019) 67. C.T. Traxler, An algorithm for adaptive mesh refinement in n dimensions. Computing 59(2), 115–137 (1997) 68. E. Zeidler, Nonlinear Functional Analysis and Its Applications. I (Springer, New York, 1986). Fixed-point theorems. Translated from the German by Peter R. Wadsack 69. E. Zeidler. Nonlinear functional analysis and its applications. IV. (Springer, New York, 1988). Applications to mathematical physics, Translated from the German and with a preface by Juergen Quandt 70. E. Zeidler, Nonlinear Functional Analysis and Its Applications. II/B (Springer, New York, 1990) 71. L. Zhong, L. Chen, S. Shu, G. Wittum, J. Xu, Convergence and optimality of adaptive edge FEMs for time-harmonic Maxwell equations. Math. Comp. 81(278), 623–642 (2012) 72. J. Zitelli, I. Muga, L. Demkowicz, J. Gopalakrishnan, D. Pardo, V.M. Calo, A class of discontinuous Petrov-Galerkin methods. Part IV: the optimal test norm and time-harmonic wave propagation in 1D. J. Comput. Phys. 230(7), 2406–2432 (2011)
Least-Squares Finite Element Formulation for Finite Strain Elasto-Plasticity M. Igelbüscher, J. Schröder, A. Schwarz, and G. Starke
Abstract This work presents a mixed least-squares finite element formulation for rate-independent elasto-plasticity at finite strains. In this context, the stress-displacement formulation is defined by the L²(B)-norm minimization of a first-order system of differential equations written in residual form. The utilization of the least-squares method (LSM) provides some well-known advantages. For the proposed rate-independent elasto-plastic material law a straightforward application of the LSM leads to discontinuities within the first variation of the formulation, owing to the non-smoothness of the constitutive relation. Therefore, a modification by means of a modified first variation is necessary to guarantee a continuous weak form, which is done in terms of the considered test spaces. In addition, an antisymmetric displacement gradient is added to the test space, because the stress symmetry condition is not fulfilled a priori as a result of the stress approximation with Raviart-Thomas functions. The resulting formulation is validated by a numerical test and compared to a standard displacement finite element formulation.
M. Igelbüscher · J. Schröder · A. Schwarz (B)
Institute of Mechanics, University of Duisburg-Essen, Universitätsstraße 15, 45141 Essen, Germany
e-mail: [email protected]
M. Igelbüscher e-mail: [email protected]
J. Schröder e-mail: [email protected]
G. Starke
Faculty of Mathematics, University of Duisburg-Essen, Thea-Leymann-Straße 9, 45127 Essen, Germany
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_6
1 Introduction

The investigation of elasto-plastic material behavior has been and still is an important part of research. Over the past decades the fundamental theory of plasticity has been elaborated in several monographs, for example in the textbooks [1–4]. An extensive amount and major impact of the research has to be attributed to the literature of [5–12]. The basic concepts of small strain plasticity are described in detail in [11]. An extension of these insights to finite strains is discussed in the publications [5, 6], based on the principle of maximum plastic dissipation and the concept of multiplicative decomposition. The finite plasticity analysis in [7] presents a multiplicative plastic model which preserves the classical return mapping scheme of the small strain theory, an enhanced strain mixed method is investigated in [9], in [10] an extension to finite thermoplasticity is performed, and an overview of all these topics is given in [8]. For the simulation of elasto-plastic material models standard displacement formulations are used, among others. In these formulations the stress field, which is significantly responsible for the occurrence of plastic effects, is determined as a function of the displacements and might be inaccurate and discontinuous between elements. Therefore, mixed variational principles, where e.g. the displacement and stress fields are approximated directly, leading to more accurate results for the stress field, could be used, e.g. approaches based on the two-field Hellinger-Reissner or three-field Hu-Washizu functionals, compare e.g. [13–17]. These mixed formulations result in a saddle point structure, which reveals the crucial point and major challenge in the construction of mixed finite elements, because existence and uniqueness of a solution cannot be guaranteed in general and have to be proven by the so-called LBB- and ellipticity conditions, see [18–20], or compare e.g. [21, 22]. The advantages of the proposed mixed least-squares finite element method (LSFEM) are, inter alia, that the formulation results in a minimization problem and is therefore not restricted by the LBB condition. Further advantages of the LSFEM are an inherent a posteriori error estimator and flexibility in the construction of functionals with suitable unknown field variables (given in solid mechanics e.g. by stresses and displacements). An overview of mixed least-squares finite element formulations regarding the theoretical foundations is given e.g. in [23, 24]. The least-squares (LS) approach considered in the following, in terms of the displacement and stress fields, applying a stress approximation by Raviart-Thomas functions, cf. [25], has been considered for linear elasticity by [26–28] and further investigated by [29, 30]. An extension to geometrically nonlinear problems can be found e.g. in [31–36]. Furthermore, in the publications [37–39] LS formulations for elasto-plasticity at small strains are analyzed and elasto-viscoplastic effects are investigated in [40]. As described by [41] and further investigated by [42], the application of the LSFEM to rate-independent elasto-plastic constitutive relations can lead to problems in the application of the standard Newton-Raphson method, in the form of oscillations in the norm of the residuals, and therefore to no solution or only an inaccurate solution
of the problem. This drawback results from the non-smoothness of the constitutive relation, which could lead to kink-like points in the functional and consequently to discontinuities in the first variation of the functional. In [38, 39, 41] approaches for avoiding these problems are given by an adaptive refinement strategy, smoothing algorithms and improved iteration schemes. The main aspect of this contribution is the construction of a mixed LS formulation including a modification of the first variation to guarantee a continuous weak form, cf. [43]. Therein, the discontinuous first variation is modified without changing the underlying system of equations in terms of the test spaces. This leads to an unsymmetric but continuous formulation and will be further improved by adding the antisymmetric displacement gradient in the test space, see [36]. The numerical validation is performed with a RT m Pk finite element type, where m denotes the polynomial order of the stress approximation using vector-valued Raviart-Thomas functions and k is the interpolation order of the displacements considering standard Lagrange functions. A hyperelastic Neo-Hookean type material is considered with an isotropic von Mises yield criterion with linear hardening, see [44]. The paper is structured as follows: First, the basic concept of the considered elasto-plastic model at finite strains is derived and discussed. The application of the finite elasto-plastic model within the least-squares finite element formulation as well as the modification for guaranteeing a continuous weak form are presented afterwards. In Sect. 5, a mathematical analysis of the presented formulation by means of the ellipticity is pointed out. The last section demonstrates a numerical investigation of the proposed least-squares formulation compared to a standard displacement formulation.
2 Elasto-Plasticity for the Framework of Finite Strains The derivation and determination of the applied standard associated plasticity model is discussed in the following briefly, based on the monographs [8, 11, 45]. The general idea of finite elasto-plasticity is based on a multiplicative decomposition of the deformation tensor into an elastic and plastic part F = Fe · F p , first introduced in the works of [46, 47]. For a more detailed description of the continuum mechanical and kinematical foundations for multiplicative elasto-plasticity see e.g. [2, 8, 11, 48]. The application of a multiplicative decomposition of F yields expressions for an elastic left and a plastic right Cauchy-Green tensor given by be = Fe · Fe T = F · C p −1 · F T and C p = F p T · F p = F T · be −1 · F. The thermodynamic consistent finite elasto-plasticity model is derived based on the fulfillment of the second law of thermodynamics. In order to guarantee this for the dissipative effects of elasto-plastic deformations, process and internal variables have to be introduced, which are, for the here considered isothermal and isotropic model, given by the elastic left Cauchy-Green tensor be , the plastic right Cauchy-Green tensor C p and the internal plastic variable α. At a first step the Clausius-Duhem inequality for finite isothermal, isotropic deformation processes is given by
D_int = τ : d − ψ̇ ≥ 0   (1)
with a free energy function ψ(b^e, α) additively divided into an elastic part ψ^e(b^e) and a plastic part ψ^p(α). Furthermore, the Kirchhoff stress tensor is denoted by τ and d is the symmetric part of the spatial velocity gradient l = Ḟ · F^{−1}. The material time derivative yields

ψ̇(b^e, α) = ψ̇^e(b^e) + ψ̇^p(α) = ∂ψ^e/∂b^e : ḃ^e + ∂ψ^p/∂α α̇ ,   (2)
where the material time derivative of the elastic left Cauchy-Green tensor is given by ḃ^e = (F · C^{p−1} · F^T)˙ = l · b^e + £(b^e) + b^e · l^T. Furthermore, ḃ^e is given in terms of the Lie derivative of b^e, defined by £(b^e) = F · ∂/∂t(C^{p−1}) · F^T = F · Ċ^{p−1} · F^T, see e.g. the works of [8, 49]. Therefore, the Clausius-Duhem inequality is reformulated by introducing ψ̇ with respect to the definition of ḃ^e and the relation for the symmetric part of the spatial velocity gradient d = 1/2 (l + l^T) into

D_int = τ : d − ψ̇ ≥ 0
      = τ : d − ∂ψ^e/∂b^e : ḃ^e − ∂ψ^p/∂α α̇ ≥ 0
      = τ : d − ∂ψ^e/∂b^e : (l · b^e) − ∂ψ^e/∂b^e : (b^e · l^T) − ∂ψ^e/∂b^e : £(b^e) − ∂ψ^p/∂α α̇ ≥ 0
      = ( τ − 2 ∂ψ^e/∂b^e · b^e ) : d − 2 ( ∂ψ^e/∂b^e · b^e ) : ( 1/2 £(b^e) · b^{e−1} ) − ∂ψ^p/∂α α̇ ≥ 0 .   (3)

Here, the fulfillment of (3) directly leads to the stress relation for the Kirchhoff stress tensor τ, cf. [50], with

τ = 2 ∂ψ^e/∂b^e · b^e ,   (4)

which results in

D_int = −τ : ( 1/2 £(b^e) · b^{e−1} ) + β α̇ ≥ 0 .   (5)
Further an abbreviation for the thermodynamical force related to the internal variable is introduced as the conjugated internal variable β := −∂α ψ p . For the derivation of the evolution equations for C p −1 and α, which fulfill the Clausius-Duhem inequality, we postulate the existence of a convex yield surface in the stress space. This yield surface characterizes the boundary of the elastic domain of the material deformation and simultaneously the space of admissible stress states. The area where elastic deformation occurs is dependent on the choice of the yield criterion. For the choice of a von Mises yield criterion the area of elastic deformation is described in the form of a cylinder in the principle stress space. One important aspect for the fulfillment of the Clausius-Duhem inequality with respect to the evolution equations is the so-called principle of maximum plastic dissipation, which states that the dissipation becomes maximal for the actual stress state in comparison with all other stress states, cf. e.g.
[8]. From a mathematical point of view the principle yields an optimization problem (−D_int ≤ 0) with the constraint condition (Φ = 0), which is given as a Lagrangian functional

L(τ, β, γ) = τ : ( 1/2 £(b^e) · b^{e−1} ) + β α̇ + γ Φ(τ, β) → stat.   with γ ≥ 0 ,   (6)

where γ denotes a Lagrange multiplier. Enforcing the stationarity conditions for the Lagrangian functional, ∂_{τ,β,γ} L = 0, yields

1/2 £(b^e) · b^{e−1} = −γ ∂_τ Φ ,   α̇ = −γ ∂_β Φ   and   Φ = 0 ,   (7)
under consideration of the Kuhn-Tucker conditions in combination with the consistency condition, which are defined by

γ ≥ 0 ,   Φ ≤ 0 ,   γ Φ = 0   and   γ Φ̇ = 0 .   (8)
The different states are divided into an elastic unloading (Φ̇ < 0; γ = 0), plastic loading (Φ̇ = 0; γ > 0) and a neutral stress state (Φ̇ = 0; γ = 0), cf. [8, 11]. A reformulation of the flow rule (7)_1 and the hardening law (7)_2 with respect to the internal variables (C^p, α) yields

Ċ^{p−1} = −2 γ ( F^{−1} · ∂_τ Φ(τ, β) · F ) · C^{p−1}   and   α̇ = √(2/3) γ .   (9)
As already mentioned, the area of elastic deformation is dependent on the chosen yield criterion. In the proposed contribution a von Mises yield criterion for finite deformations, based on the Kirchhoff stresses τ, is defined by Φ(τ, β) = ‖dev τ‖ − √(2/3) (y_0 + β). For simplicity, an isotropic linear hardening, where the conjugated internal variable is β = h α, is chosen. The evolution equations for the internal variable α̇ and for the plastic flow, given through the inverse plastic right Cauchy-Green tensor Ċ^{p−1}, are determined by applying different time integration schemes. For the time integration of the evolution equation for the internal variable we consider a backward Euler scheme yielding

α_{n+1} = α_n + √(2/3) Δt γ ,   (10)

where the abbreviation Δt γ := γ Δt is introduced with Δt = t_{n+1} − t_n. Furthermore, Δt γ as the increment of the plastic multiplier has to be determined from the condition Φ_{n+1} = 0, with Φ_{n+1} = ‖dev τ_{n+1}‖ − √(2/3) (y_0 + h α_{n+1}), if the elastic domain is exceeded (Φ > 0). This is done by regarding an inconsistent trial state of the yield criterion as

Φ^trial = ‖dev τ_{n+1}‖ − √(2/3) (y_0 + h α_n) ,   (11)
with the directly approximated stresses τ_{n+1} and α^trial = α_n. The relation for Δt γ is obtained by inserting the hardening law (10) into the condition Φ_{n+1} = 0, as

Δt γ = 3 Φ^trial / (2 h) = 3 ( ‖dev τ_{n+1}‖ − √(2/3) (y_0 + h α_n) ) / (2 h) .   (12)
In contrast to the utilization of a backward Euler scheme for the hardening law, the time integration used for Ċ^{p−1} is an exponential map algorithm within an implicit time integration scheme, first introduced by [51, 52] and further regarded e.g. in [7, 50, 53]. A major advantage of an implicit exponential time integration is e.g. the exact fulfillment of plastic incompressibility when a von Mises type yield criterion is considered, see e.g. [7]. Furthermore, the implementation is based on a closest-point-projection algorithm and a radial return method for the associated plasticity model is utilized. The resulting flow rule yields

C^{p−1}_{n+1} = exp[ −2 Δt γ  F^{−1}_{n+1} · ∂Φ_{n+1}/∂τ_{n+1} · F_{n+1} ] · C^{p−1}_n .   (13)

A simplification of the expression for the exponential function is obtained by regarding the identity exp[A · B · A^{−1}] = A · exp[B] · A^{−1}, valid for any invertible A, as

C^{p−1}_{n+1} = F^{−1}_{n+1} · exp[ −2 Δt γ n ] · F_{n+1} · C^{p−1}_n ,   (14)

with n as the outward normal on the deviatoric stress plane defined by n := ∂_τ Φ_{n+1}. For the sake of completeness, the requirement of plastic incompressibility, given by det C^p = 1, is verified with det C^{p}_{n+1} = det C^{p}_n. This is achieved by applying the det-operator to equation (14), with det(exp[A]) = exp[tr(A)], yielding tr[−2 Δt γ ∂_τ Φ], which is equal to zero based on the chosen von Mises yield criterion in terms of the deviatoric stresses with tr(dev τ) = 0, see e.g. [7, 50]. For further derivations the index n + 1 is omitted for notational simplicity for quantities at the actual time step t_{n+1}. Quantities at the previous time step t_n are still denoted with the index n.
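To make the local update steps (10)-(14) concrete, the following Python sketch performs the stress-driven radial-return update for a given, directly approximated Kirchhoff stress. It is an illustration added here, not the authors' implementation (which uses AceGen/AceFEM within Mathematica); the function name `update_internal_variables`, the use of NumPy/SciPy and the assumption h > 0 are choices of this sketch.

```python
import numpy as np
from scipy.linalg import expm

def update_internal_variables(tau, F, Cp_inv_n, alpha_n, y0, h):
    """Stress-driven local update for von Mises plasticity with linear
    isotropic hardening: trial yield function (11), plastic multiplier from
    the consistency condition (12), hardening update (10) and exponential
    update (14) of the inverse plastic right Cauchy-Green tensor."""
    dev_tau = tau - np.trace(tau) / 3.0 * np.eye(3)
    norm_dev = np.linalg.norm(dev_tau)                              # ||dev tau||
    phi_trial = norm_dev - np.sqrt(2.0 / 3.0) * (y0 + h * alpha_n)  # Eq. (11)
    if phi_trial <= 0.0:                                            # elastic step
        return Cp_inv_n, alpha_n, 0.0
    dgamma = 3.0 * phi_trial / (2.0 * h)                            # Eq. (12)
    alpha = alpha_n + np.sqrt(2.0 / 3.0) * dgamma                   # Eq. (10)
    n = dev_tau / norm_dev                                          # flow direction
    F_inv = np.linalg.inv(F)
    Cp_inv = F_inv @ expm(-2.0 * dgamma * n) @ F @ Cp_inv_n         # Eq. (14)
    return Cp_inv, alpha, dgamma
```

Since n is deviatoric, exp[−2 Δtγ n] has unit determinant, so the update preserves det C^p, in line with the incompressibility argument above.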
3 Least-Squares Finite Element Formulation for Finite Strain Elasto-Plasticity In the following section the previously derived foundation for finite elasto-plasticity is incorporated within a least-squares finite element formulation. For further steps we introduce the body of interest B parameterized in x ∈ IR3 . Furthermore, we define the scalar multiplication of two tensors A, B ∈ IR3×3 as A : B = tr ABT . The boundary of the domain ∂B consists of two subsets, namely the Dirichlet boundary ∂B D and
Neumann boundary ∂B_N with the definitions

∂B_D ∪ ∂B_N = ∂B   and   ∂B_D ∩ ∂B_N = ∅ .   (15)
The least-squares functional is defined by means of the squared L²(B)-norm applied to the first-order system of n differential equations given in residual form R_i as

F = Σ_{i=1}^{n} 1/2 ω_i² ‖R_i‖²_{L²(B)} = Σ_{i=1}^{n} 1/2 ω_i² ∫_B R_i · R_i dV → min .   (16)
Here ω_i denote the weighting factors and the L²(B)-norm is introduced as

‖ • ‖²_{L²(B)} = ∫_B | • |² dV ,   (17)
see beside others [24], where the L^p(B)-norms for vector functions v ∈ [L^p(B)]^n and tensor functions Υ ∈ [L^p(B)]^{n×m} are defined analogously to scalar quantities, with all components v_i, Υ_ij ∈ L^p(B), cf. e.g. [54] and [55]. The least-squares functional for finite elasto-plasticity is based on an extension of a hyperelastic functional, see e.g. [36], where the underlying first-order system of equations is defined by the momentum balance, the constitutive relation and a stress symmetry condition by

R_1 = Div P + f ,   R_2 = P · F^T − 2 ∂ψ^e/∂b^e · b^e   and   R_3 = P · F^T − F · P^T ,   (18)

where P denotes the first Piola-Kirchhoff stress tensor, f the body force and ψ^e the elastic free energy function. The spatial differential operators (Div and ∇) are taken with respect to X in the reference configuration. For hyperelastic formulations different LSFEM approaches were proposed e.g. in [35] and [36]. For completeness, the stress symmetry condition is not fulfilled a priori due to the utilization of RT_m functions for the stress approximation and is therefore enforced in a weak sense, cf. e.g. [56] and [57]. However, an additional control of this constraint leads to an improvement of the numerical performance, as e.g. shown in [58]. Therefore, the third residual is formulated in terms of the symmetric Kirchhoff stress tensor τ = P · F^T. The considered hyperelastic isotropic Neo-Hookean material law formulated in b^e is defined by

ψ^e := λ/4 det b^e + μ/2 tr b^e − ( λ/2 + μ ) ln √(det b^e) .   (19)
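The following short Python sketch, added for illustration and not part of the original chapter, evaluates the energy (19) and the Kirchhoff stress following from (4), which for this law takes the closed form τ = μ(b^e − 1) + λ/2 (det b^e − 1) 1 (cf. the mapping G in Sect. 4); the Lamé parameters and the finite-difference consistency check are assumptions of the sketch.

```python
import numpy as np

mu, lam = 80.19, 110.74   # hypothetical Lame parameters

def psi_e(be):
    """Elastic free energy of Eq. (19)."""
    J2 = np.linalg.det(be)
    return lam / 4.0 * J2 + mu / 2.0 * np.trace(be) - (lam / 2.0 + mu) * 0.5 * np.log(J2)

def kirchhoff_stress(be):
    """tau = 2 dpsi/dbe . be = mu (be - 1) + lam/2 (det be - 1) 1, cf. Eqs. (4), (19)."""
    return mu * (be - np.eye(3)) + 0.5 * lam * (np.linalg.det(be) - 1.0) * np.eye(3)

# finite-difference check of tau = 2 dpsi/dbe . be for a random symmetric be
rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(3, 3))
be = np.eye(3) + A @ A.T                     # symmetric positive definite
dpsi, eps = np.zeros((3, 3)), 1e-6
for i in range(3):
    for j in range(3):
        d = np.zeros((3, 3)); d[i, j] = eps
        dpsi[i, j] = (psi_e(be + d) - psi_e(be - d)) / (2 * eps)
print(np.allclose(2.0 * dpsi @ be, kirchhoff_stress(be), atol=1e-4))
```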
Application of the squared L2 (B)-norm on the first-order system of differential equations yields the least-squares functional for finite elasto-plasticity as
F(P, u) = 1/2 ∫_B ω_1² (Div P + f) · (Div P + f) dV
        + 1/2 ∫_B ω_2² ( P · F^T − 2 ∂ψ^e/∂b^e · b^e ) : ( P · F^T − 2 ∂ψ^e/∂b^e · b^e ) dV
        + 1/2 ∫_B ω_3² ( P · F^T − F · P^T ) : ( P · F^T − F · P^T ) dV .   (20)
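As an illustration (not taken from the chapter's implementation), the pointwise integrand of (20) can be coded as follows; the default weights correspond to the values ω_{1,2,3} = {1, 1, 10} used later in Sect. 5, and the function name `ls_density` as well as the argument names are hypothetical.

```python
import numpy as np

def ls_density(div_P, f, P, F, tau_const, w1=1.0, w2=1.0, w3=10.0):
    """Pointwise integrand of the least-squares functional (20): momentum
    balance, constitutive residual and Kirchhoff stress symmetry, each
    squared and weighted.  tau_const stands for 2 dpsi/dbe . be."""
    R1 = div_P + f                       # momentum balance residual (vector)
    R2 = P @ F.T - tau_const             # constitutive residual (tensor)
    R3 = P @ F.T - F @ P.T               # stress symmetry residual (tensor)
    return 0.5 * (w1**2 * R1 @ R1
                  + w2**2 * np.tensordot(R2, R2)
                  + w3**2 * np.tensordot(R3, R3))

# element contribution: sum of ls_density(...) times quadrature weight and detJ
```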
For solving the nonlinear minimization problem the condition δP,u F (P, u, δP, δu) = 0 is utilized. The associated first variations, presented for simplicity without the weighting factors, are obtained with respect to the displacements and the stresses as
δ_u F = ∫_B ( P · δF^T − 2 ( δ_u(∂ψ^e/∂b^e) · b^e + ∂ψ^e/∂b^e · δ_u b^e ) ) : ( P · F^T − 2 ∂ψ^e/∂b^e · b^e ) dV
      + ∫_B ( P · δF^T − δF · P^T ) : ( P · F^T − F · P^T ) dV ,   (21)

δ_P F = ∫_B Div δP · (Div P + f) dV
      + ∫_B ( δP · F^T − 2 ( δ_P(∂ψ^e/∂b^e) · b^e + ∂ψ^e/∂b^e · δ_P b^e ) ) : ( P · F^T − 2 ∂ψ^e/∂b^e · b^e ) dV
      + ∫_B ( δP · F^T − F · δP^T ) : ( P · F^T − F · P^T ) dV .   (22)

At this point the problem of the non-smoothness of the constitutive relation at the transition from purely elastic to elasto-plastic material behavior arises, which can lead to problems within the minimization scheme, e.g. with the standard Newton-Raphson method. This non-smooth relation could lead to kink-like points in the functional and furthermore to a discontinuous first variation. Based on this, oscillations in the norm of the residuals could arise and therefore no solution, or only an inaccurate one, is obtained by application of the standard or damped Newton-Raphson method. The occurring problems in connection with rate-independent elasto-plasticity have already been described in the work of [41] and further investigated by [42]. However, for a formulation described by a rate-dependent material behavior, which allows overstresses in the model, a further investigation of this discontinuity is in general unnecessary. In order to guarantee a continuous first variation and to overcome the resulting discontinuity, an unsymmetric formulation is constructed, which leaves the underlying system of differential equations unchanged. To ensure this, whether elasticity or plasticity occurs in the formulation, a modification of the plastic strain tensor within the stress and displacement test spaces is introduced. The modification of the first variation is based on the consideration of the transition from purely elastic to elasto-plastic material response. The discontinuity within the first variation originates in the constitutive relation, in the variation of the plastic strains depending on
the Kirchhoff stresses τ defined in (4). For an obvious illustration of this statement a case distinction is performed and therefore a second definition of b^e is introduced. In the purely elastic case the elastic left Cauchy-Green tensor is further denoted as b̃^e and defined by

b^e = F · C^{p−1}_n · F^T := b̃^e   for lim_{Φ→0^−} ,   (23)

which is only a function of the displacement field u. For the case of elasto-plasticity b^e is defined by

b^e = F · C^{p−1} · F^T   for lim_{Φ→0^+} ,   (24)

thus it depends on the displacements u as well as on the stresses P. Here, lim_{Φ→0^−} denotes the elastic range, in which the yield function is not exceeded, and lim_{Φ→0^+} defines the range of plastic loading. The resulting case distinction for the first variation of the second residual δR_2, cf. (18)_2, at the elasto-plastic transition based on (23) and (24) yields

1. Case:  R_2 : ( δ_{P,u}(P · F^T) − δ_u [ 2 ∂ψ^e/∂b̃^e · b̃^e ] ) ,
2. Case:  R_2 : ( δ_{P,u}(P · F^T) − δ_{P,u} [ 2 ∂ψ^e/∂b^e · b^e ] ) .   (25)

As can be seen directly in (25), the first part in the variation of the first-order equation (18)_2 is the same for both cases, here abbreviated to δ_{P,u}(P · F^T) = δP · F^T + P · δF^T. Based on the procedure for small strain deformations, see [42] and [43], a modification of the test spaces is performed. The extension to finite strain formulations leads to a neglection of the variation of the plastic right Cauchy-Green tensor, δ_u C^{p−1} and δ_P C^{p−1}. However, without any change in the underlying system of equations, only a variation of the elastic part is performed. This yields the so-called modified weak forms (δ_u Ĝ and δ_P Ĝ) of the least-squares functional, which are continuous at the boundary of elastic (lim_{Φ→0^−}) and plastic (lim_{Φ→0^+}) loading based on the definition of τ(b̃^e), for simplicity without the respective weighting factors, given by
δ_u Ĝ = ∫_B ( P · δF^T − 2 ( δ_u(∂ψ^e/∂b̃^e) · b̃^e + ∂ψ^e/∂b̃^e · δ_u b̃^e ) ) : ( P · F^T − 2 ∂ψ^e/∂b^e · b^e ) dV
      + ∫_B ( P · δF^T − δF · P^T ) : ( P · F^T − F · P^T ) dV ,

δ_P Ĝ = ∫_B Div δP · (Div P + f) dV + ∫_B ( δP · F^T ) : ( P · F^T − 2 ∂ψ^e/∂b^e · b^e ) dV
      + ∫_B ( δP · F^T − F · δP^T ) : ( P · F^T − F · P^T ) dV .   (26)

It has to be mentioned that the advantage of the LSFEM given by symmetric system matrices is no longer valid due to the modification in (26). As investigated in the works of [29] and [30], a modification of the first variation by introducing the antisymmetric displacement gradient in the test space, resulting in an unsymmetric formulation, leads to an improved performance and is well posed. Furthermore, this type of formulation yields a smaller error in the momentum balance compared to a standard least-squares formulation, as analyzed in [30]. An extension of the presented
modified weak form (26) is performed by introducing the approach suggested in [29] and [30] for linear elasticity and in [36] for hyperelasticity. This is based on the not a priori fulfilled stress symmetry condition of τ, related to the utilized stress approximation with Raviart-Thomas functions, and on an improved approximation of the momentum balance. The second modified weak form, denoted by δ_u G, motivated by a scalar multiplication of a symmetric stress measure Σ^sym and an antisymmetrically defined tensor function H^as(δu) in terms of the displacement test function, is given by introducing an additional weak form δG̃ in the form

δ_u G := δ_u Ĝ + δG̃( H^as(δu), Σ^sym )   and   δ_P G := δ_P Ĝ .   (27)
A suitable approach for Σ^sym and H^as(δu) are the Kirchhoff stresses τ = P · F^T and the antisymmetric part of the gradient of the displacement test functions with respect to the actual configuration,

∇^as_x(δu) = 1/2 ( ∇_x(δu) − (∇_x(δu))^T ) .   (28)
The resulting weak form δG including the second modification is then given by
δ_u G := δ_u Ĝ − ∫_B ω_as² ∇^as_x(δu) : ( P · F^T − 2 ∂ψ^e/∂b^e · b^e ) dV   and   δ_P G := δ_P Ĝ .   (29)

As mentioned before, the modified formulation presented here is no longer a classical LSFEM. However, for a characteristic element length h_e → 0 it leads to P · F^T = F · P^T and thus to δ_u G = δ_u Ĝ. Furthermore, for the framework of linear elasticity the well-posedness of the formulation and its applicability to adaptive finite element computations has been shown by [30]. For solving (P, u) ∈ 𝒮 × V a discretization of the stresses P and the displacements u in d dimensions in conforming approximation spaces (𝒮_h, V_h) is applied. Here the stresses are approximated by vector-valued Raviart-Thomas functions in W^q(div, B) and the displacements with Lagrange functions in the function space W^{1,p}(B), where W^q(div, B) and W^{1,p}(B) are defined by

W^q(div, B) = { P ∈ L^q(B)^d : Div P ∈ L^q(B)^d }   for 1 ≤ q < ∞ ,
W^{1,p}(B)  = { u ∈ L^p(B)^d : ∇u ∈ L^p(B)^d }       for 1 ≤ p < ∞ ,   (30)

compare e.g. [34, 54, 59]. However, the value of the exponents p and q depends on the arising order of the unknowns in the underlying functional. The considered finite element spaces 𝒮_h and V_h are in the following given by

𝒮_h^m = { P ∈ W^q(div, B)^d : P|_{B_e} ∈ RT_m(B_e)^d  ∀ B_e } ⊂ 𝒮 ,
V_h^k  = { u ∈ W^{1,p}(B)^d : u|_{B_e} ∈ P_k(B_e)^d  ∀ B_e } ⊂ V .   (31)
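For readers unfamiliar with the stress space in (31), the following sketch shows the standard lowest-order Raviart-Thomas basis on a triangle, whose edge-wise constant normal components are what provide the normal continuity of the (row-wise) stress approximation. This is a textbook construction added only for illustration; it is not code from the chapter.

```python
import numpy as np

def rt0_basis(p0, p1, p2):
    """Standard lowest-order Raviart-Thomas (RT_0) basis on the triangle
    (p0, p1, p2).  phi_i is associated with the edge opposite vertex i: its
    normal component equals one on that edge and vanishes on the other two."""
    p = [np.asarray(p0, float), np.asarray(p1, float), np.asarray(p2, float)]
    area = 0.5 * abs((p[1][0] - p[0][0]) * (p[2][1] - p[0][1])
                     - (p[2][0] - p[0][0]) * (p[1][1] - p[0][1]))
    edge = [np.linalg.norm(p[(i + 2) % 3] - p[(i + 1) % 3]) for i in range(3)]
    def phi(i, x):
        return edge[i] / (2.0 * area) * (np.asarray(x, float) - p[i])
    div_phi = [edge[i] / area for i in range(3)]   # element-wise constant
    return phi, div_phi

phi, div_phi = rt0_basis((0, 0), (1, 0), (0, 1))
print(phi(0, (0.5, 0.5)), div_phi[0])   # evaluation on the edge opposite p0
```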
Therefore, the resulting finite element structure is denoted by RT_m P_k, where m and k are the polynomial orders of the basis functions of the resulting finite element. In order to solve the nonlinear problem with the Newton-Raphson method we have to linearize the modified weak form. The resulting Newton tangent can be established analytically, by an automated differentiation approach (used in this work) or numerically, e.g. in the form of a standard difference quotient procedure. Finally, the complete system of algebraic equations is obtained by a standard assembly operation and it should be noted that there is no local elimination of unknowns on the element level involved. The finite element implementations and computations have been done using the AceGen and AceFEM packages (version 6.503), see [60–62], of Mathematica (version 10.1), see [63].
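The difference-quotient alternative mentioned above can be sketched as follows; this is a generic illustration with hypothetical names, not the automated-differentiation tangent actually used in the chapter.

```python
import numpy as np

def numerical_tangent(residual, d, eps=1e-7):
    """Forward difference-quotient approximation of the Newton tangent
    K = dR/dd, one column per degree of freedom.  `residual` maps the global
    dof vector d to the assembled residual vector R(d)."""
    R0 = residual(d)
    K = np.zeros((R0.size, d.size))
    for j in range(d.size):
        d_pert = d.copy()
        d_pert[j] += eps
        K[:, j] = (residual(d_pert) - R0) / eps
    return K
```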
4 The Least-Squares Functional as an Error Estimator

Based on the theory developed in [34], the least-squares functional (20) is considered for the Neo-Hookean material law (19). Ignoring the third term in the functional, this gives

F(P, u) = 1/2 ∫_B ω_1² | Div P + f |² dV
        + 1/2 ∫_B ω_2² | P · F(u)^T − μ(b^e(u) − 1) − λ/2 (det b^e(u) − 1) 1 |² dV ,   (32)

where F(u) = 1 + ∇u and b^e(u) = F(u) · (C^p(u))^{−1} · F(u)^T. Note that C^p also depends on u due to its implicit connection with b^e via the yield criterion. The third term may be neglected since it can be bounded from above by the second one as

1/2 ∫_B ω_3² | P · F(u)^T − F(u) · P^T |² dV
  = 1/2 ∫_B ω_3² | ( P · F(u)^T − μ(b^e(u) − 1) − λ/2 (det b^e(u) − 1) 1 )
                  − ( P · F(u)^T − μ(b^e(u) − 1) − λ/2 (det b^e(u) − 1) 1 )^T |² dV
  ≤ 2 ω_3²/ω_2² ∫_B ω_2² | P · F(u)^T − μ(b^e(u) − 1) − λ/2 (det b^e(u) − 1) 1 |² dV ,   (33)

due to the fact that b^e(u) and 1 are symmetric. Choosing ω_3 much larger than ω_2 leads to a stronger enforcement of the symmetry of the stress tensor. However, if ω_3 and ω_2 are of comparable size, one may as well erase the third term and work with (32).
Our aim is to show that, for a given approximation (P, u), the evaluation of F(P, u) with the functional (32) constitutes, up to a generic constant, an upper bound for the error with respect to the exact solution. To this end, we rely on the analysis in [34] and present a short review of this work in the following. In [34], instead of (32) the functional

F̃(P, u) = 1/2 ∫_B | Div P + f |² dV + 1/2 ∫_B | A(P · F(u)^T) − b^e(u) |² dV   (34)

is considered, where we skipped the weights ω_1 and ω_2 for simplicity. Here, A : IR^{d×d} → IR^{d×d} denotes the (Cauchy) stress to (Cauchy-Green) strain mapping such that A(P · F(u)^T) = b^e(u) is valid. A is the inverse of the nonlinear strain-to-stress mapping

G(b) = μ(b − 1) + λ/2 (det b − 1) 1 .   (35)

For the purely elastic case, i.e., for C^p ≡ 1, [34, Theorem 4.4] states that, under the assumption that ‖P‖_{L^∞(B)}, ‖∇u‖_{L^∞(B)} and, for the exact solution (P*, u*), ‖P*‖_{L^∞(B)} and ‖∇u*‖_{L^∞(B)} are sufficiently small,

‖P* − P‖²_{H(div,B)} + ‖u* − u‖²_{H¹(B)} ≲ F̃(P, u)   (36)

holds, where ≲ is an abbreviation for an inequality up to a generic constant. The term in the second integral in (34) may be rewritten as

A(P · F(u)^T) − b^e(u) = A(P · F(u)^T) − A(G(b^e(u)))
  = ∫_0^1 d/ds [ A( G(b^e(u)) + s ( P · F(u)^T − G(b^e(u)) ) ) ] ds
  = ∫_0^1 A′( (1 − s) G(b^e(u)) + s P · F(u)^T )[ P · F(u)^T − G(b^e(u)) ] ds .   (37)
For the Neo-Hookean material law (19), the explicit formula [34, (4.3)] leads to

A′( (1 − s) G(b^e(u)) + s P · F(u)^T )[ P · F(u)^T − G(b^e(u)) ]
  = 1/μ [ P · F(u)^T − G(b^e(u))
          − ( λ Cof( A((1 − s) G(b^e(u)) + s P · F(u)^T) ) : ( P · F(u)^T − G(b^e(u)) ) )
            / ( 2μ + λ tr( Cof( A((1 − s) G(b^e(u)) + s P · F(u)^T) ) ) ) · 1 ] .   (38)

We therefore get that, under the above assumptions on P,

| A′( (1 − s) G(b^e(u)) + s P · F(u)^T )[ P · F(u)^T − G(b^e(u)) ] | ≲ | P · F(u)^T − G(b^e(u)) |   (39)

and therefore, using (37),

| A(P · F(u)^T) − b^e(u) | ≲ | P · F(u)^T − G(b^e(u)) |   (40)

holds. This implies F̃(P, u) ≲ F(P, u), which implies

‖P* − P‖²_{H(div,B)} + ‖u* − u‖²_{H¹(B)} ≲ F(P, u) .   (41)
So far, this is all derived only for the elastic case C^p ≡ 1 with the functional

F^e(P, u) = 1/2 ∫_B ( | Div P + f |² + | P · F(u)^T − G(F(u) · F(u)^T) |² ) dV .   (42)

In order to show that (41) is also valid in the more general plastic case with the functional

F^p(P, u) = 1/2 ∫_B ( | Div P + f |² + | P · F(u)^T − G(F(u) · C^p(u)^{−1} · F(u)^T) |² ) dV ,   (43)

we may restrict ourselves to the divergence-free subspace such that Div P + f = Div(P − P*) = 0. The remaining second part of the functional in (43), rewritten in terms of F^e(u) = F(u) · F^p(u)^{−1}, turns into

F^p(P, u) = 1/2 ∫_B | P · F^p(u)^T · F^e(u)^T − G(F^e(u) · F^e(u)^T) |² dV .   (44)

For this functional, using (41) we obtain

‖P* · F^p(u*) − P · F^p(u)‖²_{L²(B)} + ‖F^e(u*) − F^e(u)‖²_{L²(B)} ≲ F^p(P, u) ,   (45)

which finally proves that the error is controlled in this case, too.
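Since the element contributions of the functional are thus reliable error indicators, they can drive a standard adaptive loop. The following sketch shows bulk (Dörfler) marking based on these indicators; the marking routine is a generic AFEM ingredient added here for illustration and is not taken from this chapter.

```python
def doerfler_marking(eta_squared, theta=0.5):
    """Bulk (Doerfler) marking driven by the element-wise least-squares
    functional contributions eta_squared[K] = F(P_h, u_h)|_K: mark the
    smallest set of elements carrying at least a fraction theta of the
    total functional value."""
    order = sorted(eta_squared, key=eta_squared.get, reverse=True)
    total, acc, marked = sum(eta_squared.values()), 0.0, []
    for K in order:
        marked.append(K)
        acc += eta_squared[K]
        if acc >= theta * total:
            break
    return marked
```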
5 Numerical Analysis

For the numerical analysis of the formulation an element type of lower order (RT_0 P_2) is applied, i.e., linear (lowest-order) RT spaces combined with piecewise quadratic conforming finite elements (Lagrange functions). However, the performance of low-order elements is improved by the introduced modified weak forms and the applied residual weightings ω_{1,2,3} = {1, 1, 10}, as investigated for linear elasticity in [29, 58] and for hyperelasticity in [35, 36]. The boundary value problem, investigated in terms of the resulting stress for a prescribed displacement u_2(t) = 1.8 dm, is given by a plate with a circular hole depicted in Fig. 1. Due to symmetry properties only one quarter of the plate is considered with corresponding boundary conditions, see Fig. 1 for a description of the BVP and the material setup. The approximation orders reported in Tables 1 and 2 are computed as log(F^l / F^{l−1}) / log(neq_l / neq_{l−1}), with F^l as the value of the LS functional and neq_l as the number of equations at the refinement level l.
Fig. 1  Geometrical and material setup for the perforated plate problem

Table 1  Reduction of the least-squares functional, 2 elements in x_3-direction

  level | # elem. | dim 𝒮_h | dim V_h | F(P_h, u_h)  (order) | ‖div P_h‖² | ‖σ_as‖²
  l = 0 |     160 |    1979 |    1120 | 1.3903e-3  (–)       | 1.3828e-3  | 7.4975e-6
  l = 1 |     640 |    7675 |    4544 | 3.1863e-4  (1.0725)  | 3.1641e-4  | 2.2173e-6
  l = 2 |    1440 |   17083 |   10272 | 1.5185e-4  (0.9189)  | 1.5076e-4  | 1.0930e-6
  l = 3 |    2560 |   30203 |   18304 | 9.6025e-5  (0.7996)  | 9.5304e-5  | 7.2132e-7
  l = 4 |    4000 |   47035 |   28640 | 7.0166e-5  (0.7051)  | 6.9608e-5  | 5.5808e-7
An optimal order of convergence is achievable only for sufficiently regular problems, cf. [21], which cannot be expected for the boundary value problem considered here. However, the convergence orders achieved in Tables 1 and 2 lie in the expected range close to the optimal order of 1. Furthermore, the convergence of the functional and additionally of the single functional parts F_i, i = 1, 2, 3, with F_i = ‖R_i‖²_{L²(B)}, is depicted in Fig. 2. Herein it can be seen that the convergence of the LS functional is limited in this case by the convergence of the constitutive equation in F_2, since the lines are almost coinciding. The comparison of the force-displacement curves for the perforated plate problem, for the LSFEM (RT_0 P_2) and a standard Galerkin finite element formulation (P_2), is illustrated in Fig. 3, utilizing the same number of finite elements for both formulations with nel = 4000. Both formulations yield the same solutions for the resulting force on the lower face of the perforated plate at x_2 = 0.
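The quoted convergence orders can be reproduced from the tabulated data, as the following sketch shows. The sign convention (reporting the positive rate) and the assumption that neq_l equals dim 𝒮_h + dim V_h are choices of this illustration; with them, the computed values match the tabulated orders up to the rounding of the functional values.

```python
import math

# Experimental order of convergence as used for Tables 1 and 2:
#   order_l = -log(F^l / F^(l-1)) / log(neq_l / neq_(l-1)),
# evaluated here for the Table 1 data with neq_l = dim S_h + dim V_h.
F   = [1.3903e-3, 3.1863e-4, 1.5185e-4, 9.6025e-5, 7.0166e-5]
neq = [1979 + 1120, 7675 + 4544, 17083 + 10272, 30203 + 18304, 47035 + 28640]
for l in range(1, len(F)):
    order = -math.log(F[l] / F[l - 1]) / math.log(neq[l] / neq[l - 1])
    print(f"l={l}: order = {order:.4f}")
```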
Table 2  Reduction of the least-squares functional, 4 elements in x_3-direction

  level | # elem. | dim 𝒮_h | dim V_h | F(P_h, u_h)  (order) | ‖div P_h‖² | ‖σ_as‖²
  l = 0 |     320 |    3623 |    2368 | 1.5044e-3  (–)       | 1.4979e-3  | 6.5577e-6
  l = 1 |    1280 |   14039 |    9600 | 2.8062e-4  (1.2207)  | 2.7875e-4  | 1.8654e-6
  l = 2 |    2880 |   31239 |   21696 | 1.1778e-4  (1.0755)  | 1.1693e-4  | 8.5514e-7
  l = 3 |    5120 |   55223 |   38656 | 6.7736e-5  (0.9647)  | 6.7233e-5  | 5.0223e-7
  l = 4 |    8000 |   85991 |   60480 | 4.5620e-5  (0.8879)  | 4.5279e-5  | 3.4048e-7
Fig. 2 Convergence of the LS functional and the single functional parts for RT 0 P2
Fig. 3 Force-displacement curve for the perforated plate problem comparing LSFEM (RT 0 P2 ) and standard displacement formulations (P2 ) for nel = 4000
Fig. 4 Von Mises stress σ v M for least-squares (a) and standard displacement formulation (b)
Fig. 5 Equivalent plastic def. α for least-squares (a) and standard displacement formulation (b)
Additionally, the resulting distributions of the von Mises stress σ_vM and the equivalent plastic deformation α are depicted in Figs. 4 and 5. The results obtained under the given load further illustrate the agreement of the LS formulation with the results of the standard Galerkin formulation.
6 Conclusion

The presented modified approach for the mixed LS formulation for rate-independent elasto-plasticity at finite strains overcomes the arising problems of kink-like points in the functional and of discontinuities in the first variation when a standard Newton-Raphson method is considered. The constructed continuous modified weak form is achieved by neglecting the plastic variables within the first variation of the functional without changing the underlying system of differential equations. A comparison of the presented LSFEM and a pure displacement approach illustrates a good agreement with respect to displacements and stresses. For the investigation of rate-dependent elasto-plastic effects the presented modification is not needed, because therein overstresses are allowed, which also leads to a continuous formulation.
Acknowledgements The authors gratefully acknowledge the support by the Deutsche Forschungsgemeinschaft in the Priority Program 1748 “Reliable simulation techniques in solid mechanics. Development of non- standard discretization methods, mechanical and mathematical analysis” under the project “First-order system least squares finite elements for finite elasto-plasticity” (Project number 255798245, project IDs SCHR 570/24-1, SCHW 1355/2-1, STA 402/12-1).
References 1. R. Hill, The Mathematical Theory of Plasticity (Oxford at the Clarendon Press, 1950) 2. J. Lubliner, Plasticity theory (Macmillan Publishing Company, New York, 1990) 3. G.A. Maugin, The Thermomechanics of Plasticity and Fracture (Cambridge University Press, 1992) 4. W. Han, B.D. Reddy, Plasticity: Mathematical Theory and Numerical Analysis (Springer, New York, 1999) 5. J.C. Simo, A framework for finite strain elastoplasticity based on maximum plastic dissipation and the multiplicative decomposition: Part I. continuum formulation. Comput. Methods Appl. Mech. Eng. 66, 199–219 (1988) 6. J.C. Simo, A framework for finite strain elastoplasticity based on maximum plastic dissipation and the multiplicative decomposition. part II: computational aspects. Comput. Methods Appl. Mech. Eng. 68, 1–31 (1988) 7. J.C. Simo, Algorithms for static and dynamic multiplicative plasticity that preserve the classical return mapping schemes of the infinitesimal theory. Comput. Methods Appl. Mech. Eng. 99, 61–112 (1992) 8. J.C. Simo. Numerical analysis and simulation of plasticity. In P.G. Ciarlet and J.L. Lions, editors, Handbook of Numerical Analysis, volume VI, pages 183–499. 1998 9. J.C. Simo, F. Armero, Geometrically non-linear enhanced strain mixed methods and the method of incompatible modes. Int. J. Numer. Meth. Eng. 33, 1413–1449 (1992) 10. J.C. Simo, C. Miehe, Associative coupled thermoplasticity at finite strains: formulation, numerical analysis and implementation. Comput. Methods Appl. Mech. Eng. 96, 133–171 (1992) 11. J.C. Simo, T.J.R. Hughes, Computational Inelasticity (Springer, 1998) 12. E.A. de Souza Neto, D. Peri´c, D.R.J. Owen, Computational Methods for Plasticity: Theory and Applications, 1st edn. (Wiley, 2008) 13. G. Prange, Das Extremum der Formänderungsarbeit (Habilitationsschrift, Technische Hochschule Hannover, 1916) 14. E. Reissner, On a variational theorem in elastictiy. J. Math. Phys. 29, 90–95 (1950) 15. H.-C. Hu, On some variational methods on the theory of elasticity and the theory of plasticity. Sci. Sinica 4, 33–54 (1955) 16. K. Washizu, On the variational principles of elasticity and plasticity. Aeroelastic and Structure Research Laboratory, Technical Report 25-18, MIT, Cambridge (1955) 17. K. Washizu, Variational Methods in Elasticity and Plasticity, 2nd edn. (Pergamon Press, 1975) 18. O. Ladyzhenskaya, The Mathematical Theory of Viscous Incompressible Flow, vol. 76 (Gordon and Breach New York, 1969) 19. I. Babuška, The finite element method with lagrangian multipliers. Numer. Math. 20(3), 179– 192 (1973) 20. F. Brezzi, On the existence, uniqueness and approximation of saddle-point problems arising from lagrangian multipliers. Revue française d’automatique, informatique, recherche opérationnelle. Analyse numérique 8(2), 129–151 (1974) 21. D. Boffi, F. Brezzi, M. Fortin, Mixed Finite Element Methods and Applications (Springer, Heidelberg, 2013) 22. F. Auricchio, F. Brezzi, C. Lovadina. Encyclopedia of Computational Mechanics, 2nd edn. (Wiley, 2004)
23. B.-N. Jiang, The Least-Squares Finite Element Methods (Springer, Berlin, 1998) 24. P.B. Bochev, M.D. Gunzburger, Least-Squares Finite Element Methods, 1st edn. (Springer, New York, 2009) 25. P.A. Raviart, J.M. Thomas, A mixed finite element method for 2-nd order elliptic problems. Mathematical Aspects of Finite Element Methods. Lecture Notes in Mathematics (Springer, New York, 1977), pp. 292–315 26. Z. Cai, G. Starke, First-order system least squares for the stress-displacement formulation: Linear elasticity. SIAM J. Numer. Anal. 41, 715–730 (2003) 27. Z. Cai, G. Starke, Least-squares methods for linear elasticity. SIAM J. Numer. Anal. 42, 826– 842 (2004) 28. Z. Cai, J. Korsawe, G. Starke, An adaptive least squares mixed finite element method for the stress-displacement formulation of linear elasticity. Numer. Methods Partial Differ. Equ. 21, 132–148 (2005) 29. A. Schwarz, J. Schröder, G. Starke, A modified least-squares mixed finite element with improved momentum balance. Int. J. Numer. Meth. Eng. 81, 286–306 (2010) 30. G. Starke, A. Schwarz, J. Schröder, Analysis of a modified first-order system least squares method for linear elasticity with improved momentum balance. SIAM J. Numer. Anal. 49(3), 1006–1022 (2011) 31. T.A. Manteuffel, S.F. McCormick, J.G. Schmidt, C.R. Westphal, First-order system least squares (FOSLS) for geometrically nonlinear elasticity. SIAM J. Numer. Anal. 44, 2057–2081 (2006) 32. A. Schwarz, J. Schröder, G. Starke, K. Steeger, Least-squares mixed finite elements for hyperelastic material models, in Report of the Workshop 1207 at the “Mathematisches Forschungsinstitut Oberwolfach” entitled “Advanced Computational Engineering”, organized by O. Allix, C. Carstensen, J. Schröder, P. Wriggers, pp. 14–16 (2012) 33. G. Starke, B. Müller, A. Schwarz, J. Schröder, Stress-displacement least squares mixed finite element approximation for hyperelastic materials, in Report of the Workshop 1207 at the “Mathematisches Forschungsinstitut Oberwolfach” entitled “Advanced Computational Engineering”, organized by O. Allix, C. Carstensen, J. Schröder, P. Wriggers, pp. 11–13 (2012) 34. B. Müller, G. Starke, A. Schwarz, J. Schröder, A first-order system least squares method for hyperelasticity. SIAM J. Sci. Comput. 36, 795–816 (2014) 35. J. Schröder, A. Schwarz, K. Steeger. Least-squares finite element formulations for isotropic and anisotropic elasticity at small and large strains, in Advanced Finite Element Technologies ed. by J. Schröder, P. Wriggers, CISM Courses and Lectures (Springer, 2016), pp. 131–175 36. A. Schwarz, K. Steeger, M. Igelbüscher, J. Schröder, Different approaches for mixed LSFEMs in hyperelasticity: application of logarithmic deformation measures. Int. J. Numer. Meth. Eng. 115, 1138–1153 (2018) 37. K.C. Kwon, S.H. Park, S.K. Youn, The least-squares meshfree method for elasto-plasticity and its application to metal forming analysis. Int. J. Numer. Meth. Eng. 64, 751–788 (2005) 38. G. Starke, An adaptive least-squares mixed finite element method for elasto-plasticity. SIAM J. Numer. Anal. 45, 371–388 (2007) 39. G. Starke, Adaptive least squares finite element methods in elasto-plasticity, in LSSC 2009, Lecture Notes in Computer Science, vol. 5910 (Springer, Heidelberg, 2010), pp. 671–678 40. A. Schwarz, J. Schröder, G. Starke, Least-squares mixed finite elements for small strain elastoviscoplasticity. Int. J. Numer. Meth. Eng. 77, 1351–1370 (2009) 41. J. Kubitz, Gemischte least-squares-FEM für Elastoplastizität. 
Dissertation, Leibniz Universität Hannover, Fakultät für Mathematik und Physik (2007) 42. A. Schwarz, Least-squares mixed finite elements for solid mechanics. University DuisburgEssen, Ph.D. thesis (2009) 43. M. Igelbüscher, A. Schwarz, K. Steeger, J. Schröder, Modified mixed least-squares finite element formulations for small and finite strain plasticity. Int. J. Numer. Meth. Eng. 117, 141–160 (2018) 44. R. von Mises, Mechanik der Festkörper im plastisch deformablen Zustand (Nachrichten der Gesellschaft für Wissenschaften Göttingen, Mathematisch-physikalische Klasse, 1913), pp. 582–592
45. S. Klinkel, Theorie und Numerik eines Volumen-Schalen-Elementes bei finiten elastischen und plastischen Verzerrungen. Ph.D. Thesis, Karlsruhe (2000) 46. E.H. Lee, D.T. Liu, Finite strain elastic-plastic theory with application to plane-wave analysis. 38(1), 19–27 (1967) 47. E.H. Lee, Elastic-plastic deformation at finite strain. J. Appl. Mech. 36, 1–6 (1969) 48. F. Armero, Elastoplastic and viscoplastic deformations in solids and structures. Encyclopedia of Computational Mechanics, Vol. 1:Chapter 7 (Wiley, 2004) 49. J.E. Marsden, J.R. Hughes, Mathematical Foundations of Elasticity. (Prentice-Hall, 1983) 50. C. Miehe, E. Stein, A canonical model of multiplicative elasto-plasticity formulation and ascpects of the numerical implementation. Eur. J. Mech. A. Solids 11, 25–43 (1992) 51. G. Weber, L. Anand, Finite deformation constitutive equations and a time integration procedure for isotropic, hyperelastic-viscoelastic solids. Comput. Methods Appl. Mech. Eng. 79, 173–202 (1990) 52. A.L. Eterovic, K.-J. Bathe, A hyperelastic-based large strain elasto-plastic constitutive formulation with combined isotropic-kinematic hardening using the logarithmic stress and strain measures. Int. J. Numer. Meth. Eng. 30(6), 1099–1115 (1990) 53. A.M. Cuitino, M. Ortiz, A material independent method for extending stress update algorithms from small-strain plasticity to finite plasticity with multiplicative kinematics. Eng. Comput. 9, 437–451 (1992) 54. F. Brezzi, M. Fortin, Mixed and Hybrid Finite Element Methods (Springer, New York, 1991) 55. D.N. Arnold, F. Brezzi, J. Douglas, PEERS: A new mixed finite element for plane elasticity. Jpn. J. Appl. Math. 1, 347–367 (1984) 56. D. Boffi, F. Brezzi, M. Fortin, Reduced symmetry elements in linear elasticity. Commun. Pure Appl. Anal. 8(1), 95–121 (2009) 57. B. Cockburn, J. Gopalakrishnan, J. Guzm´n, A new elasticity element made for enforcing weak stress symmetry. Math. Comput. 79, 1331–1349 (2010) 58. A. Schwarz, K. Steeger, J. Schröder, Weighted overconstrained least-squares mixed finite elements for static and dynamic problems in quasi-incompressible elasticity. Comput. Mech. 54(1), 603–612 (2014) 59. S.C. Brenner, L.R. Scott, The Mathematical Theory of Finite Element Methods (Springer, 1994) 60. J. Korelc, Automatic generation of finite-element code by simultaneous optimization of expressions. Theor. Comput. Sci. 187(1), 231–248 (1997) 61. J. Korelc, Multi-language and multi-environment generation of nonlinear finite element codes. Eng. Comput. 18, 312–327 (2002) 62. J. Korelc, P. Wriggers, Automation of Finite Element Methods (Springer International Publishing Switzerland, 2016) 63. Inc. Wolfram Research. Mathematica. Wolfram Research, Inc., version 10.1 edition, 2015. Champaign, Illinois
Hybrid Mixed Finite Element Formulations Based on a Least-Squares Approach Maximilian Igelbüscher and Jörg Schröder
Abstract In this contribution we focus on the relaxation of continuity conditions and the enforcement of these continuity constraints for the considered fields via Lagrange multipliers. Therefore, a stress-displacement least-squares formulation F (σ , u) is considered, which is defined by the squared L2 (B)-norm applied to the first-order system of differential equations, given by the balance of momentum and the constitutive equation as well as an additional (mathematically redundant) residual for the enforcement of the moment of momentum. In general the continuity conditions are enforced by the conforming discretization of the individual fields. The conforming discretization, which demands continuity of the displacements and normal continuity of the stresses, is given by polynomial functions of Lagrange type for the displacements, i.e. uh ∈ H 1 (B), and a stress approximation e.g. with Raviart–Thomas functions, i.e. σh ∈ H(div, B). A non-conforming discretization of the stresses and displacements considering discontinuous Raviart–Thomas and discontinuous Lagrange approximation functions with σh ∈ dRT m and uh ∈ dPk yield a relaxation of the continuity conditions. However the fulfillment of these relaxed constraints is enforced by the introduction of Lagrange multipliers. Additionally, a continuous as well as a discontinuous stress approximation with σh ∈ H 1 (B) and σh ∈ L2 (B) is considered.
M. Igelbüscher (B) · J. Schröder
Institute of Mechanics, University of Duisburg-Essen, Universitätsstraße 15, D-45141 Essen, Germany
e-mail: [email protected]
J. Schröder e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_7

1 Introduction

From an engineering point of view the goal is to have low-order elements, which are easy to implement, robust, and which lead to sparse, well-conditioned stiffness matrices. These properties generally do not hold for the least-squares finite element
method (LSFEM), since the approximation quality of low order elements is moderate and they are neither robust nor accurate, see e.g. [1, 2]. Nevertheless, the mixed least-squares (LS) finite element method provides some inherent advantages, which are, e.g. the LS method yields an a posteriori error estimator which is given by the functional itself which can be applied for adaptive mesh refinement algorithms. Furthermore, the formulations lead to positive definite system matrices and they are not restricted by the LBB condition regarding the choice of the polynomial order of the finite element interpolation, compare [3, 4]. A drawback of the least-squares formulations, besides the performance of low order elements, are the large system matrices in comparison with standard displacement formulations and as well mixed principles as the Pian–Sumihara or PEERS elements [5, 6]. Since low order LS elements lead in the field of solid mechanics to a poor performance, higher order interpolation functions have to be considered which increases the number of degrees of freedom rapidly, which make the calculations costly, especially in 3D applications. However, an improvement in terms of accuracy of elements with lowest-order Raviart–Thomas stress approximation can be achieved by several approaches. These are e.g. an introduction of an additional stress symmetry enforcement within the LS functional, see e.g. [7–9], a modification of the resulting weak form, cf. [10, 11], or the utilization of weighting factors separately for the underlying boundary value problems e.g. shown in [8]. For the field of nonlinear materials in solid mechanics these LS approaches are discussed for hyperelasticity and elasto-plasticity at small and finite strains in [2, 12]. An additional approach of mixed formulations is the hybrid finite element method, where the general idea as well as applications are given e.g. in the textbooks [13–17]. In conforming finite element formulations certain continuity requirements at interelement boundaries have to be fulfilled, which are e.g. the normal continuity of the stress field between inter-element boundaries or the continuity of the displacements. Hybrid finite element formulations are based on the relaxation of continuity requirements. They regard these continuities as constraints and enforce them through the method of Lagrange multipliers, compare [13]. The hybrid finite element method is characterized by the simultaneous approximation of at least one field defined on element level and the Lagrange multiplier defined on the union of the boundaries of the elements, compare [13, 15]. Further discussion concerning mixed hybrid FEM can be found e.g. for hybrid stress elements in [18, 19] and an investigation on the existence and stability of general mixed hybrid elements is presented in [20, 21]. The enforcement of relaxed continuity conditions can be achieved additionally by the direct introduction of the conditions within the least-squares functional (LSF) in so-called equivalent norms. An approach with an enriched discrete space by discontinuous elements in the vicinity of singularities is studied for the first-order Poisson problem by [22] and in [23] a discontinuous LS formulation is investigated for the div-curl problem on non-convex domains. Further, [24] introduce a discontinuous LS formulation for general polytopal meshes and provides rigorous error analysis. For the direct application of boundary conditions within the LSF see e.g. [25].
The proposed idea of the current work is the discussion of a mixed hybrid finite element approach based on a mixed least-squares finite element formulation. Therefore, the continuity of the approximated fields of a standard stress-displacement least-squares formulation ((u, σ ) ∈ H 1 (B) × H(div, B)) is relaxed, by introducing discontinuous stress and displacement approximations, which no longer guarantee the normal continuity of the stresses and continuity of the displacements on the interelement boundaries. This continuity will be enforced by applying Lagrange multipliers at the inter-element boundaries, the so-called skeleton of the finite element mesh, for the relaxed fields. The approximation of the underlying approaches are chosen to be continuous as well as discontinuous for the displacements with uh ∈ Pk and uh ∈ dPk respectively and discontinuous for the stresses σh ∈ dRT m , which is compared to a conforming LS formulation. Here, RT and P denote the approximation utilizing Raviart–Thomas and Lagrange approximation functions and the character d emphasizes a discontinuous approximation. Furthermore, the Lagrange multipliers are approximated at each edge separately λh ∈ Pn and μh ∈ Po . For the application of boundary traction and displacements the same Lagrange multipliers can be used. The idea of a non-conforming discretization and an application of Lagrange multiplier techniques for ensuring the continuity are for example discussed in [16]. As a result of the discontinuous approximation of the stresses, a static condensation of the related fields can be performed for the global system of equations, which results in reduced system matrices.
2 Continuous Least-Squares Finite Element Formulation

In the following, general assumptions and construction aspects of the mixed LSFEM are presented briefly. The LSFEM is introduced based on minimizing the L²(B)-norm of the residuals of the first-order system of differential equations, cf. e.g. [3, 4]. Let B be a bounded domain parameterized in x ∈ R^d with the boundary ∂B, with ∂B = ∂B_D ∪ ∂B_N and ∂B_D ∩ ∂B_N = ∅ decomposed into a Dirichlet and a Neumann boundary ∂B_D and ∂B_N. A general construction rule for the least-squares minimization problem is

F = Σ_{i=1}^{n} 1/2 ω_i² ‖R_i‖²_{L²(B)} = Σ_{i=1}^{n} 1/2 ω_i² ∫_B ⟨R_i, R_i⟩ dV → min ,   (1)

which is obtained by application of the squared L²(B)-norm to a first-order system of n (differential) equations given in residual form R_i, with ⟨•, •⟩ defining the scalar product of the quantity •. The functional depends on scalar weighting factors denoted by ω_i and the L²(B)-norm given by

‖ • ‖_{L²(B)} = ‖ • ‖_{0,B} = ( ∫_B | • |² dV )^{1/2} .   (2)
An equivalent representation is denoted by the sum over all elements K within the triangulation T with ‖ • ‖_{L²(T)} := Σ_{K∈T} ‖ • ‖_{L²(K)}. The construction of the first-order system in terms of the displacements u and the Cauchy stress field σ is described for the theory of linear elasticity e.g. in [7]. Furthermore, the setup of the linear elasticity problem is defined by the following set of equations

div σ + f = 0          on B ,
C⁻¹ : σ − ε(u) = 0     on B ,
σ − σ^T = 0            on B ,
u = ū                  on ∂B_D ,
σ · n = t̄              on ∂B_N .   (3)

Here, the body force is defined by f, ε = ∇^s u denotes the strain tensor equal to the symmetric displacement gradient, n the outward normal vector, t̄ is the boundary traction and ū the boundary displacement. Furthermore, C⁻¹ denotes the material compliance tensor, making use of the special dyadic product between two second-order tensors defined by (A ⊠ B)_{ijkl} = A_{ik} B_{jl}, see [26]. Here, C⁻¹ is given in terms of the second-order unity tensor I and the Lamé constants λ and μ by

C⁻¹ = − λ/(4μ² + 4λμ) I ⊗ I + 1/(2μ) I ⊠ I .   (4)
For the two dimensional framework plane strain condition is assumed with the stress component σ33 = (ε11 + ε22 ). Based on the setup of Eq. (3) the resulting leastsquares functional F (σ , u) is obtained with 1 div σ + f 2 2 + C−1 : σ − ε 2 2 + C−1 : (σ − σ T )2 2 . L (K ) L (K ) L (K ) 2 K ∈T (5) The here considered LSFEM is based on an approximation of the stress field by conforming vector-valued Raviart–Thomas functions σ ∈ H(div, B). Therefore, the stress symmetry condition of the Cauchy stress tensor σ is not fulfilled a priori and is enforced in a weak sense, see e.g. [27, 28]. Nevertheless, it is controlled through the right side of the inequality
F =
σ − σ T 2L2 (B) ≤ c σ − ∂ε ψ(ε) 2L2 (B) = c σ − C : ε 2L2 (B) ,
(6)
where ψ(ε) = 21 ε : C : ε denotes the free energy function and c is a positive constant, cf. [7]. However, an additional control of this symmetry constraint, as suggested by [8], is performed with the introduction of the asymmetric stress tensor, which will be considered in the proposed formulations, cf. Eq. (5). In [7] the boundary conditions are a priori fulfilled by the applied ansatz functions for σ and u. A consideration of boundary conditions within the least-squares functional (5) can be introduced in a straightforward manner, see e.g. [25], which is omitted for convenience. Fur-
Hybrid Mixed Finite Element Formulations Based …
173
thermore, the scalar valued weighting parameters corresponding to every functional part denoted by ωi are not explicitly included for the representation but have to be considered within the numerical investigation.
3 Hybrid Mixed Finite Element Based on a Least-Squares Approach The conforming discretization of the solution spaces for the displacements and the stresses imply different continuity conditions for the individual fields. The following suggestions are based on the idea of a relaxation of these conditions and a weak enforcement of the necessary continuities. Therefore, we apply Lagrange multipliers for the enforcement of the continuity condition, which are the normal continuity of the stresses and the continuity of the displacement field. For the analysis of mixed hybrid finite element formulations the body of interest B and the boundary ∂B are further subdivided. The triangulation of the placement B with finite elements is denoted by T and E defines the set of all sides in 2D (edges in 3D), including interior sides of the triangulation. In the following := N ∪ D denotes the triangulation of the outer boundary ∂B, where N and D denote the triangulation of the Neumann and Dirichlet boundary. The interior boundaries are defined by i = E\, where for completeness E\ ∩ N = ∅ as well as E\ ∩ D = ∅. The continuous mixed least-squares formulation (5) is extended by means of the corresponding continuity conditions enforced by Lagrange multipliers. Therefore, the two sides of an arbitrary inter-element boundary, are denoted by the characters (+) and (−). For a discontinuous approximation of the stress and displacement field within the LS formulation (5) the balance of momentum, the material law as well as stress symmetry condition are fulfilled on each local element, but the traction continuity (7) (σ · n)+ + (σ · n)− = 0 on E , and the displacement compatibility u+ = u− on E ,
(8)
are not fulfilled a priori. Therefore, both have to be specified over the boundaries of the local elements in order to achieve the continuity conditions in a weak sense. The consideration and fulfillment of these continuity conditions is discussed in detail e.g. in [20]. The normal continuity of the stresses at inter-element boundaries is relaxed and simultaneously enforced by means of the Lagrange multiplier λ, applied for the jump of the traction vector [[σ · n]] on the skeleton i and for the boundary tractions on the outer stress boundary N , which is defined by
174
M. Igelbüscher et al.
λ · [[σ · n]] = 0 on i and λ · (σ · n − ¯t ) = 0 on N .
(9)
In a similar manner, the displacement compatibility is introduced w. r. t. the Lagrange multiplier μ, applied for the jump of the displacement vector u on i and for the boundary displacements on D ¯ = 0 on D . μ · [[u]] = 0 on i and μ · (u − u)
(10)
The resulting hybrid mixed formulation F h based on a mixed LS approach is given, with respect to Eq. (5), by F h (σ , u, λ, μ) = F (σ , u) + F t (σ , λ) + F u (u, μ) ,
(11)
where F t and F u are the functional parts enforcing the traction reciprocity (7) and displacement compatibility (8) on the inter-element boundaries as well as for the boundary conditions on the outer boundary . Here, F t and F u are defined by F t (σ , λ) =
E
E∈i
E
E∈i
F u (u, μ) =
[[σ · n]] · λ dA +
E∈ N
[[u]] · μ dA +
E∈ D
(σ · n − ¯t ) · λ dA , E
¯ · μ dA . (u − u)
(12)
E
3.1 Weak Form and Linearization of the Hybrid Mixed Formulation The first variations of the hybrid mixed formulation F h (σ , u, λ, μ) are determined with respect to σ , u, λ and μ by δσ F h =
B
B
+ +
div δσ · (div σ + f ) dV +
B
C−1 : δσ : (C−1 : σ − ∇ s u) dV
(C−1 : (δσ − δσ T )) : (C−1 : (σ − σ T )) dV
i
N
[[δσ · n]] · λ dA = 0 ,
[[δu]] · μ dA = 0 , δu F h = − ∇ s δu : (C−1 : σ − ∇ s u) dV + B i D h δλ · [[σ · n]] dA + δλ · (σ · n − ¯t ) dA = 0 , δλ F = i N h ¯ dA = 0 , δμ · [[u]] dA + δμ · (u − u) δμ F = i
D
(13)
Hybrid Mixed Finite Element Formulations Based …
175
k n o and the solution (σh , uh , λh , μh ) ∈ Sm h × Vh × Xh × Yh , see (20), is seeked under the assumption of suitable boundary conditions. Here, the variation of the jump of the traction vector and displacement vector at the skeleton and the outer stress and outer displacement boundary respectively are summarized with i ∪ N and i ∪ D . The first variation (13) is solved by find (σ , u, λ, μ) ∈ S × V × X × Y such that δσ,u,λ,μ F = 0 ∀(δσ , δu, δλ, δμ) ∈ S × V × X × Y with S := [H(div, B)]d×d , V := [H 1 (B)]d , X := [H −1/2 (∂B)]d and Y := [H 1/2 (∂B)]d . The choice of approximation orders for the single fields are presented later on. For completeness, the linearization of the mixed hybrid formulation (11) reads
σ δσ F h =
B
−1
σ δλ F h u δσ F h u δu F h u δμ F h λ δσ F h μ δu F h
B
(C−1 : δσ ) : (C−1 : σ ) dV ,
(C : (δσ − δσ )) : (C−1 : (σ − σ T )) dV , B = − (C−1 : δσ ) : ∇ s u dV , B = δλT · [[σ · n]]T dA , i N = − ∇ s δu : (C−1 : σ ) dV , B = ∇ s δu : ∇ s u dV , B = δμ · [[u]] dA , i D = [[δσ · n]] · λ dA , i N = [[δu]] · μ dA , +
σ δu F h
div δσ · div σ dV +
i
T
(14)
D
where u δλ F h = λ δu F h = λ δλ F h = 0, μ δσ F h = σ δμ F h = μ δμ F h = 0 as well as λ δμ F h = μ δλ F h = 0, based on the properties of Lagrange multipliers. The extension of the least-squares functional (11) in terms of the continuity condition leads to a loss of at least some of the a priori given advantages of the least-squares method, e.g. the loss of a positive (semi-) definite system matrix. This observation is crucial, especially for the restriction due to the inf-sup condition. The choice of the polynomial order for the Lagrange multipliers in combination with the considered approximation order of displacements and stresses is discussed later on. For a static condensation of the system matrices with respect to u and σ , we have to ensure that the corresponding matrix A, see (19), is invertible.
176
M. Igelbüscher et al.
3.2 Discretization and Implementation Aspects For the representation of the discretized unknown fields and their variational counterparts in matrix notation the below listed relations are introduced. σ h = S dσ div σ h = S dσ ˆ σ as h = S dσ uh = Nu du ∇ s u h = B du λ h = N λ dλ μ h = N μ dμ
δσ h = S δdσ div δσ h = S δdσ ˆ δσ as h = S δdσ δuh = Nu δdu ∇ s δuh = B δdu δλh = Nλ δdλ δμh = Nμ δdμ
σ h = S dσ div σ h = S dσ ˆ σ as h = S dσ uh = Nu du ∇ s uh = Bdu λh = Nλ dλ μh = Nμ dμ
(15)
The nodal degrees of freedom for the displacements, stresses and Lagrange multipliers are denoted by du , dσ , dλ and dμ , respectively. Furthermore, the matrices Nu , Nλ and Nμ include the Lagrange shape functions and S and Sˆ contain Raviart–Thomas shape functions. The matrix B includes the derivatives of the Lagrange interpolation functions and S contains directional derivatives of the Raviart–Thomas functions. The non-vanishing parts of the discrete formulations, for convenience without the corresponding weighting parameters ωi , are therefore
[(S T (S dσ + f ) + ST (C−1 S dσ − Nu du )] dV T + [Sˆ C−T (C−1 Sˆ dσ )] dV + ST n Nλ dλ dA , B i N NuT Nμ dμ dA , reu = − BT (C−1 S dσ − B du ) dV + i D B NλT (S dσ n) dA + NλT (S dσ n − ¯t ) dA , reλ = i N ¯ dA , reμ = NμT Nu du dA + NμT (Nu du − u) reσ =
B
i
(16)
D
and further
T
ˆ dV , [S T S + ST C−T C−1 S + Sˆ C−T C−1 S] B = − ST C−T B dV , keσ λ = ST n Nλ dA , B i N = − BT C−1 S dV , keuu = BT B dV , B B = NuT Nμ dA , keλσ = NλT nT ST dA , i N i D = NμT Nu dA .
keσ σ = keσ u keuσ keuμ keμu
i
D
(17)
Hybrid Mixed Finite Element Formulations Based …
177
The resulting system of equations reads ⎡ e⎤ ⎤⎡ ⎤ 0 rσ dσ ⎢ du ⎥ ⎢ reu ⎥ keuμ ⎥ dσ,u rσ,u A DT ⎥ ⎢ ⎥ = − ⎢ e ⎥ or =− . ⎣ rλ ⎦ dλ,μ rλ,μ 0 ⎦ ⎣ dλ ⎦ D 0 e rμ dμ 0 (18) The resulting system matrix illustrates the typical saddle point structure with ⎡
keσ σ ⎢ keuσ ⎢ e ⎣ kλσ 0
keσ u keuu 0 keμu
A=
keσ λ 0 0 0
keσ σ keσ u keuσ keuu
,D=
keλσ 0 0 keμu
,
dσ,u = [dσ , du ]T , reσ,u = [reσ , reu ]T , (19) dλ,μ = [dλ , dμ ]T , reλ,μ = [reλ , reμ ]T .
For the solution spaces a conforming choice is considered demanding continuity of the Lagrange multipliers and allowing jumps of the stresses and displacements at inter-element boundaries (edges in 2D and faces in 3D), which are specified by d×d : σh | K ∈ [dRT m ]d×d ∀ K ∈ T } , Sm h := {σh ∈ [H(div, T )] k 1 d Vh := {uh ∈ [H (T )] : uh | K ∈ [dPk ]d ∀ K ∈ T } , Xnh := {λh ∈ [H −1/2 (E)]d : λh | E ∈ [Pn ]d ∀ E ∈ E} , Yho := {μh ∈ [H 1/2 (E)]d : μh | E ∈ [Po ]d ∀ E ∈ E} ,
(20)
where Xnh and Yho are the spaces of continuous piecewise polynomial functions of order n, o ≥ 0, chosen as Lagrange functions, denoted by Pn,o . Furthermore, RT m denotes vector-valued Raviart–Thomas functions of mth order, where dRT m should emphasize that the approximation of σh is discontinuous. For Vhk , a discontinuous approximation of u is applied, denoted by dPk , where discontinuous Lagrange type functions of order k are considered, which are not restricted by any continuity condition between inter-element boundaries. Finally, the chosen finite element solution spaces lead to a finite element type of the form dRT m dPk Pn Po . One corresponding element type is exemplarily depicted in Fig. 1. Fig. 1 Hybrid mixed finite element with edge based d. o. f. for the Lagrange multipliers based on the approach of u ∈ dP2 , σ ∈ dRT 1 , λ ∈ P0 and μ ∈ P1 , which can be used as a basis for a discontinuous approximation of stresses and displacements
As an alternative to the Raviart–Thomas approximation, with vector-valued functions, a scalar approximation of each component of the stress tensor is applied,
178
M. Igelbüscher et al.
where each interpolation site has nine degrees of freedom determining the complete stress tensor. The drawback of an approximation of the stresses σh ∈ Sh with Sh ⊆ [H 1 (B)]d×d is based on the continuity requirements of the function space and the stress field. Since the stresses are only restricted to fulfill normal continuity between two adjacent element edges in 2D or faces in 3D. This drawback can be overcome by introducing the stress approximation in a discontinuous manner on element level and enforce only normal continuity of the stresses across the element interface. Thus the restrictions of H 1 (B) functions are reduced. The advantage of this choice of interpolation functions for the stresses is given by the straight-forward application to any polynomial order and dimension. This is not directly the case for the more sophisticated approach of higher order Raviart–Thomas functions, because the construction of RT functions is based on outer and inner moments. The same explanation applies to the choice of BDM and BDFM functions or similar approximation approaches, see [16, 29, 30]. In the course of the numerical analysis five different element approaches are investigated. The elements with continuous approaches for u and σ , are defined as RT m Pk and Pm Pk , with u ∈ Pk and σ ∈ RT m and σ ∈ Pm , respectively. The element types dRT m Pk Pn and dPm Pk Pn are characterized by a discontinuous stress approximation and continuous approaches for u and λ. Furthermore, dRT m dPk Pn Po denotes the formulation with discontinuous functions for σ and u and continuous functions for λ and μ, cf. (20). n Obviously, a choice of (σh , λh ) ∈ Sm h × Yh with n = m for a dRT m Pk Pn formulation (where only σ is discontinuous and u is continuous), leads to the same solution as for a standard continuous σ -u LS finite element formulation, based on the limitation of the solution spaces. This is related to the limitation principle for mixed formulations, which states that “if the mixed formulation is capable of producing the same approximation of that produced by direct displacement form then it will in fact reproduce that form exactly and give identical and therefore not improved results”, cf. [31]. This choice of Yh0 and S0h with a constant enforcement of the continuity conditions of the stresses over the inter-element boundaries, which holds for m ≥ 0 and n = m, covers the properties of the continuous space H(div, B). The application of this principle, states that a hybrid mixed LS formulation gives identical results as the standard least-squares formulation if the hybrid mixed form is capable to produce the same approximation as the standard least-squares formulation with only continuous approximation functions. In analogy to the count criterion, presented e.g. by [32], for mixed finite element formulations, the resulting matrix structure can be evaluated. The criterion aims to restrict matrices to be non-singular in a purely algebraic manner and can be seen as an algebraic requirement for stability, which is necessary but not sufficient. Therefore, the reduced system in (18) is considered as a basis, where the number of unknowns n λ + n μ in dλ,μ have to be smaller or equal to the number of unknowns n σ + n u in dσ ,u to have a necessary but not sufficient criterion for stability. 
Furthermore, regarding the idea of a static condensation of the system matrices, the number of degrees of freedom for one element will be less or at least equal to the standard σ − u least-squares formulation, which is not considered in the further
Hybrid Mixed Finite Element Formulations Based …
179
course. Additionally, modification of the interpolation functions may lead to efficient finite element formulations as it is the case in the family of assumed stress elements, see e.g. [5].
4 Numerical Analysis for Hybrid Mixed Formulations The following numerical examples are investigated with respect to different finite element types, where RT m Pk and Pm Pk denote the continuous LS formulation, dRT m Pk Pn and dPm Pk Pn define formulations with discontinuous functions for σ and continuous functions for u and λ. The dRT m dPk Pn Po element type denotes a discontinuous approximation of σ , u and continuous functions for λ, μ, cf. (20). The finite element implementations and computations have been done using the AceGen and AceFEM packages (version 6.503), see [33–35], of Mathematica (version 10.1), see [36]. For the visualization Paraview (version 4.3.1), see [37], has been used.
4.1 Cook’s Membrane Problem The Cook’s membrane problem is investigated by considering plane strain condition, see Fig. 2. The left side of the Cook’s membrane is clamped for u and λ, μ is fixed to zero on all other boundaries ∂B\∂B D and a body force is applied with f = (0, 0.1)T . For the numerical analysis different finite element types, continuous and discontinuous approaches, are considered and compared with each other as well as with a quadratic displacement element P2 . The results for the investigated element types are illustrated for a setup of scalar weighting parameters with ωi = {1, 1, 10} with i = 1, 2, 3, since previous investigations, cf. e.g. [2, 38], show the significant influence of weighting parameters
Fig. 2 Material parameters, boundary conditions and geometry of cantilever beam
180
M. Igelbüscher et al.
Fig. 3 Convergence of displacement u y (left) and the functional F (σh , uh ) (right) for a continuous LS formulation RT m Pk
Fig. 4 Convergence of displacement u y (left) and the functional F (σh , uh ) (right) for a discontinuous mixed hybrid formulation dRT m Pk Pn
as a general drawback of least-squares formulations applied to problems in solid mechanics. For the comparison of the convergence of the formulation the functional F (u, σ ) is chosen as an indicator for the different element formulations. The representation of the functional error in Figs. 3, 4, 5, 6 result for the continuous as well as all three hybrid mixed LS approaches and all considered combinations of polynomial orders into an equivalent convergence of the LS functional for finer meshes, which is additionally compared in Tables 1, 2 and 3. All investigated element formulations show a similar and satisfying performance in terms of displacement convergence, see Figs. 3, 4, 5, 6. However, the critical point of the choice of polynomial order for the Lagrange multipliers can be seen in Figs. 4 and 6. Here, the formulations dRT 1 P2 P0 , dP1 P2 P0 and dP2 P2 P0 for the considered number of equations (neq) does not achieve the desired displacement accuracy. The formulations with discontinuous stress approach and the choice of λ ∈ P0 leads to reduced element performance in terms of the displacement convergence. The convergence of the functional for these element types is equivalent in all cases, with
Hybrid Mixed Finite Element Formulations Based …
181
Fig. 5 Convergence of displacement u y (left) and the functional F (σh , uh ) (right) for a discontinuous mixed hybrid formulation dRT m dPk Pn Po
Fig. 6 Convergence of displacement u y (left) and the functional F (σh , uh ) (right) for a discontinuous mixed hybrid formulation dPm Pk Pn
a decreased functional error for elements with λ ∈ P0 , see Figs. 5, 6. The convergence of F shows similar results for all formulations, which seems to be independent of the selected polynomial degree. This behavior can be explained by the selected irregular problem as well as the utilized regular refinement strategy and is already shown in [2]. An improvement of the convergence, especially for higher polynomial orders can be shown, utilizing adaptive mesh refinement. In general the slight decline in the element performance for coarser meshes can be explained by the additionally introduced degrees of freedom for the Lagrange multipliers λ and μ. The number of equations for the hybrid mixed LS elements can be significantly reduced based on the application of static condensation of the discontinuous stress and displacement field. Due to the a priori positive definite system matrices of u and σ , the procedure is applicable in a straight-forward manner, but is not applied for the presented numerical results. As previously noted, the limitation of the analyzed discontinuous formulations are based on a proper relation between the number of unknowns of the discontinuous
182
M. Igelbüscher et al.
Table 1 Reduction of the LS functional for RT 2 P3 RT 2 P3 l=0 l=1 l=2 # elem. dim Vh dim Sh F (σh , uh ) (order) div σh + f 2 B |σ12 − σ21 | dV
8 84 156 9.58567 e-5 (–) 1.68738 e-6 7.89691 e-3
32 312 648 3.85669 e-5 (0.656758) 2-75797 e-7 3.19002 e-3
128 1200 2640 1.59614 e-5 (0.636389) 4.81207 e-8 1.33493 e-3
Table 2 Reduction of the LS functional for dRT 2 P3 P1 dRT 2 P3 P1 l=0 l=1 l=2 # elem. dim Vh dim Sh dim Xh F (σh , uh ) (order) div σh + f 2 B |σ12 − σ21 | dV
8 84 240 56 7.02981 e-5 (–) 9.21398 e-7 6.17359 e-3
32 312 960 208 2.63875 e-5 (0.720679) 1.27933 e-7 2.24544 e-3
128 1200 3840 800 106124 e-5 (0.663566) 2.08832 e-8 8.91415 e-4
Table 3 Reduction of the LS functional for dRT 2 dP3 P2 P2 dRT 2 dP3 P2 P2 l=0 l=1 l=2 # elem. dim Vh dim Sh dim Xh dim Yh F (σh , uh ) (order) div σh + f 2 B |σ12 − σ21 | dV
8 140 240 84 48 9.53281 e-5 (–) 1.65651 e-6 7.76106 e-3
32 600 960 312 240 3.84864 e-5 (0.640049) 2.74403 e-7 3.17387 e-3
128 2480 3840 1200 1056 1.59432 e-5 (0.628884) 4.80153 e-8 1.33305 e-3
l=3
l=4
512 4704 10656 6.71905 e-6 (0.624128) 8.62968 e-9 5.6743 e-4
1152 10512 24048 4.06624 e-6 (0.619322) 3.17388 e-9 3.44781 e-4
l=3
l=4
512 4704 15360 3136 4.40293 e-6 (0.637768) 3.64023e-9 3.70225 e-4
1152 10512 34560 7008 1.85437 e-6 (0.625325) 6.52478e-10 1.56625 e-4
l=3
l=4
512 10080 15360 4704 4416 6.71271 e-6 (0.620656) 8.61495 e-9 5.66944 e-4
1152 22800 34560 10512 10080 4.0625 e-6 (0.617416) 3.16853 e-9 3.44494 e-4
Hybrid Mixed Finite Element Formulations Based …
183
stress and displacement field and the Lagrange multipliers. Here, we follow the basic idea of the count criterion for mixed formulations, where the degrees of freedom of the primary variables have to be greater or equal to the constraint variables (Lagrange multiplier type variables), presented e.g. in [32, 39]. Based on this, we assume that the ratio of the degrees of freedom is limited by n u,σ ≥ n λ,μ . Here, n u,σ and n λ,μ denote the total number of degrees of freedom for the discontinuous primary variables of u, σ and Lagrange multipliers λ, μ respectively.
4.2 Quartered Plate Example As a second example a plate with four different Young’s moduli is investigated under displacement controlled boundary conditions. The approximation of stresses are naturally given in H(div, B), since physically only normal continuity of the stresses is enforced. For the introduced approximation of σ ∈ H 1 (B) for the Pm Pk element types the continuity of all stress components can lead to problems, e.g. at material transitions. Therefore, the quartered plate (x1 ∈ [−1, 1], x2 ∈ [−1, 1]) is investigated dealing with several material transitions based on the chosen Young’s moduli of the material, cf. Fig. 7. The plate is subjected to a uniform elongation, where the displacements in normal direction of the outer edges are set to 0.1 and the shear stresses on the edges are set to 0. Based on the fact that the Lagrange multiplier corresponds to the displacement on the element edge, λ in normal direction of the outer edges is 0. The resulting deformation of the quartered plate is illustrated in Fig. 8 with an exemplarily depicted finite element mesh. Therein, the influence of the different
Fig. 7 Material parameters, boundary conditions and geometry of the quartered plate
184
M. Igelbüscher et al.
Fig. 8 Illustration of a finite element mesh with 20 × 20 elements on the deformed configuration (left) and the deformed configuration of the example (right) with a scaling of the deformation of 10
material reactions can be clearly seen, e.g. in the comparison of the deformation of the materials with E 1 and E 4 , where the softer material undergoes larger deformations. In Figs. 9 and 10 the stress distribution of σ22 is shown on the undeformed configuration, where it is obvious that the Pm Pk element not yield the physically correct results. The σ22 stresses have to be continuous for material transitions in x2 -direction and discontinuous across the vertical material transition, cf. [40], since physically the normal components of the stress tensor are continuous and the tangential component could be discontinuous across material transitions. This relation is presented by the continuous element with σ ∈ RT m as well as the discontinuous elements with σ ∈ dRT m and σ ∈ dPm . The course of the σ22 stresses over a section of the plate (at line A-B) is additionally presented for each element combination and illustrates the physically incorrect solution for the Pm Pk elements. Furthermore, Figs. 11 and 12 show the stress results in an “out-of-plane” value plot, which show exactly the course of the stresses on the vertical material interface. Here, the enforced continuity of all stress components by the H 1 (B) approximation of the Pm Pk element type can be clearly seen, where the sharp interface yields a smoothed transition of σ22 stresses with averaged values between the two materials, which can have a crucial impact, e.g. for elasto-plastic models. In contrast to that all other presented element formulations are able to give the physically correct results.
Hybrid Mixed Finite Element Formulations Based …
185
Fig. 9 Distribution of σ22 stress in x1 -σ22 -plane on the line A-B (A = (−1, 0.5), B = (1,0.5)) (top) and over the undeformed domain in x1 -x2 -plane (bottom) for element type P2 P3 (left) and dP2 P3 P1 (right) with 20 × 20 elements
5 Conclusion The proposed hybrid mixed finite element formulations based on a least-squares finite element approach show a satisfying performance. A reduction of the element performance can be observed for the choice of low order approximations for the Lagrange multipliers. Nevertheless, the results and performance are in accordance with the standard σ − u least-squares formulation. However, the idea of hybrid mixed finite elements in combination with an approximation of σ ∈ [L2 (B)]3×3 can overcome the physically incorrect representation of stress distribution, based on the too restrictive requirement of the function space H 1 (B) for the stress approximation. Therefore, a straight-forward application of stress and displacement approximation is possible with continuous piecewise polynomial functions of Lagrange type. The limitation of the formulation concerning the balancing of function spaces have to be considered in further studies.
186
M. Igelbüscher et al.
Fig. 10 Distribution of σ22 stress in x1 -σ22 -plane on the line A-B (A = (−1, 0.5), B = (1, 0.5)) (top) and over the undeformed domain in x1 -x2 -plane (bottom) for element type RT 2 P3 (left) and dRT 2 P3 P1 (right) with 20 × 20 elements
Fig. 11 “Out-of-plane” value plot of σ22 stress for element type P2 P3 (left) and dP2 P3 P1 (right) with 20 × 20 elements
Hybrid Mixed Finite Element Formulations Based …
187
Fig. 12 “Out-of-plane” value plot of σ22 stress for element type RT 2 P3 (left) and dRT 2 P3 P1 (right) with 20 × 20 elements
Acknowledgements The authors gratefully acknowledge the support by the Deutsche Forschungsgemeinschaft in the Priority Program 1748 “Reliable simulation techniques in solid mechanics. Development of non- standard discretization methods, mechanical and mathematical analysis” under the project “Approximation and reconstruction of stresses in the deformed configuration for hyperelastic material models” (Project number 392587488, project ID SCHR 570/34-1).
References 1. J.P. Pontaza, Least-squares variational principles and the finite element method: theory, form, and model for solid and fluid mechanics. Ph.D. Thesis, Texas A&M University (2003) 2. A. Schwarz, K. Steeger, M. Igelbüscher, J. Schröder, Different approaches for mixed lsfems in hyperelasticity: application of logarithmic deformation measures. Int. J. Numer. Methods Eng. 115, 1138–1153 (2018) 3. P.B. Bochev, M.D. Gunzburger, Least-Squares Finite Element Method (Springer, Berlin, 2009) 4. B.-N. Jiang, The Least-Squares Finite Element Method (Springer, Berlin, 1998) 5. T.H.H. Pian, K. Sumihara, A rational approach for assumed stress finite elements. Int. J. Numer. Meth. Eng. 20, 1685–1695 (1984) 6. D.N. Arnold, F. Brezzi, J. Douglas, PEERS: a new mixed finite element for plane elasticity. Jpn. J. Appl. Math. 1, 347–367 (1984) 7. Z. Cai, G. Starke, Least-squares methods for linear elasticity. SIAM J. Numer. Anal. 42, 826– 842 (2004) 8. A. Schwarz, K. Steeger, J. Schröder, Weighted overconstrained least-squares mixed finite elements for static and dynamic problems in quasi-incompressible elasticity. Comput. Mech. 54(1), 603–612 (2014) 9. J. Schröder, A. Schwarz, K. Steeger, Least-squares finite element formulations for isotropic and anisotropic elasticity at small and large strains, in Advanced Finite Element Technologies, ed. by J. Schröder, P. Wriggers. CISM Courses and Lectures (Springer, Berlin, 2016), pp. 131–175
188
M. Igelbüscher et al.
10. A. Schwarz, J. Schröder, G. Starke, A modified least-squares mixed finite element with improved momentum balance. Int. J. Numer. Meth. Eng. 81, 286–306 (2010) 11. G. Starke, A. Schwarz, J. Schröder, Analysis of a modified first-order system least squares method for linear elasticity with improved momentum balance. SIAM J. Numer. Anal. 49(3), 1006–1022 (2011) 12. M. Igelbüscher, A. Schwarz, K. Steeger, J. Schröder, Modified mixed least-squares finite element formulations for small and finite strain plasticity. Int. J. Numer. Meth. Eng. 117, 141–160 (2018) 13. G.F. Carey, J.T. Oden, Finite Elements: A Second Course (Prentice Hall, Inc., Hoboken, 1983) 14. S.N. Atluri, R.H. Gallagher, O. Zienkiewicz, Hybrid and Mixed Finite Element Methods (Wiley, Chichester, 1983) 15. J.E. Roberts, J.-M. Thomas, Mixed and hybrid methods, in Handbook of Numerical Analysis, no. 2, ed. by P.G. Ciarlet, J.L. Lions (Elsevier Science, Amsterdam, 1991) 16. F. Brezzi, M. Fortin, Mixed and Hybrid Finite Element Methods (Springer, New York, 1991) 17. S.N. Atluri, P. Tong, H. Murakawa, Recent studies of hybrid and mixed finite element methods in mechanics, in Hybrid and Mixed Finite Element Methods, ed. by S.N. Atluri, R.H. Gallagher, O.C. Zienkiewicz (Wiley, New York, 1983), pp. 51–72 18. T.H.H. Pian, D.-P. Chen, Alternative ways for formulation of hybrid stress elements. Int. J. Numer. Meth. Eng. 18, 1679–1684 (1982) 19. E.F. Punch, S.N. Atluri, Development and testing of stable, invariant, isoparametric curvilinear 2- and 3-d hybrid-stress elements. Comput. Methods Appl. Mech. Eng. 47, 331–356 (1984) 20. W.-M. Xue, L.A. Karlovitz, S.N. Atluri, On the existence and stability conditions for mixedhybrid finite element solutions based on reissner’s variational principle. Int. J. Solids Struct. 21, 97–116 (1985) 21. W.-M. Xue, S.N. Atluri, Existence and stability, and discrete BB and rank conditions, for general mixed-hybrid finite elements in elasticity, in Hybrid and Mixed Finite Element Methods, vol. 73, ed. by R.L. Spilker, K.W. Reed (ASME, AMD, 1985), pp. 91–112 22. R. Bensow, M.G. Larson, Discontinuous/continuous least-squares finite element methods for elliptic problems. Math. Models Methods Appl. Sci. 15(6), 825–842 (2005) 23. R. Bensow, M.G. Larson, Discontinuous least-squares finite element method for the div-curl problem. Numer. Math. 101, 601–617 (2005) 24. X. Ye, S. Zhang, A discontinuous least-squares finite-element method for second-order elliptic equations. Int. J. Comput. Math. 96, 601–617 (2018) 25. G. Starke, Multilevel boundary functionals for least-squares mixed finite element methods. SIAM J. Numer. Anal. 36, 1065–1077 (1999) 26. P.R. Halmos, Finite-Dimensional Vector Spaces (Van Nostrand, New York, 1958) 27. D. Boffi, F. Brezzi, M. Fortin, Reduced symmetry elements in linear elasticity. Commun. Pure Appl. Anal. 8, 95–121 (2009) 28. B. Cockburn, J. Gopalakrishnan, J. Guzmán, A new elasticity element made for enforcing weak stress symmetry. Math. Comput. 79, 1331–1349 (2010) 29. F. Brezzi, J. Douglas, L.D. Marini, Two families of mixed finite elements for second order elliptic problems. Numer. Math. 47, 217–235 (1985) 30. F. Brezzi, J. Douglas, M. Fortin, D. Marini, Efficient rectangular mixed finite elements in two and three space variables. Math. Modell. Numer. Anal. 21, 581–604 (1987) 31. F.B. de Veubeke, Displacement and equilibrium models in finite element methods, in Stress Analysis, ed. by G. Holister, O. Zienkiewicz (Wiley, New York, 1965), pp. 145–197 32. O.C. Zienkiewicz, S. Qu, R.L. 
Taylor, S. Nakazawa, The patch test for mixed formulations. Int. J. Numer. Meth. Eng. 23, 1873–1883 (1986) 33. J. Korelc, Automatic generation of finite-element code by simultaneous optimization of expressions. Theoret. Comput. Sci. 187(1), 231–248 (1997) 34. J. Korelc, Multi-language and multi-environment generation of nonlinear finite element codes. Eng. Comput. 18, 312–327 (2002) 35. J. Korelc, P. Wriggers, Automation of Finite Element Methods (Springer International Publishing Switzerland, Cham, 2016)
Hybrid Mixed Finite Element Formulations Based …
189
36. Inc. Wolfram Research, Mathematica, version 10.1 edn. (Wolfram Research, Inc., Champaign, 2015) 37. J. Ahrens, B. Geveci, C. Law, ParaView: An End-User Tool for Large Data Visualization, Visualization Handbook, version 10.1 edn. (Elsevier, Champaign, 2005) 38. M. Igelbüscher, J. Schröder, A. Schwarz, A mixed least-squares finite element formulation with explicit consideration of the balance of moment of momentum, a numerical study. GAMMMitteilungen 43, e202000009 (2020). https://doi.org/10.1002/gamm.202000009 39. O.C. Zienkiewicz, R.L. Taylor, The Finite Element Method - Volume I: The Basis, 5th edn. (Butterworth Heinemann, Oxford, 2000) 40. K. Steeger, Least-squares mixed finite elements for geometrically nonlinear solid mechanics. Ph.D. Thesis, Univesity of Duisburg-Essen (2017)
Adaptive and Pressure-Robust Discretization of Incompressible Pressure-Driven Phase-Field Fracture Seshadri Basava, Katrin Mang, Mirjam Walloth, Thomas Wick, and Winnifried Wollner
Abstract In this work, we consider pressurized phase-field fracture problems in nearly and fully incompressible materials. To this end, a mixed form for the solid equations is proposed. To enhance the accuracy of the spatial discretization, a residual-type error estimator is developed. Our algorithmic advancements are substantiated with several numerical tests that are inspired from benchmark configurations. Therein, a primal-based formulation is compared to our newly developed mixed phase-field fracture method for Poisson ratios approaching ν → 0.5. Finally, for ν = 0.5, we compare the numerical results of the mixed formulation with a pressure robust modification.
1 Introduction This work is devoted to pressurized fractures in nearly and fully incompressible solids using an adaptive finite element discretization. Pressurized fracture problems modeled with a phase-field method is currently a topic being investigated by many groups; see for instance [12, 20, 27, 33, 47], to name a few. We further extended our pressurized phase-field fracture approach to non-isothermal configurations [39]. A recent overview on pressurized and fluid-filled fractures is provided in [48]. However, S. Basava · M. Walloth · W. Wollner (B) Technische Universität Darmstadt, Darmstadt, Germany e-mail: [email protected] S. Basava e-mail: [email protected] M. Walloth e-mail: [email protected] K. Mang · T. Wick Leibniz Universität Hannover, Hanover, Germany e-mail: [email protected] T. Wick e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_8
191
192
S. Basava et al.
all these contributions deal with compressible solids in which Poisson’s ratio is significantly less than 0.5, i.e., the incompressible limit. Incompressible solids are however an important field in solids mechanics [23, 24, 26, 40, 44]. In [32] a model and robust discretization using a phase-field method for fractures in solids mechanics was proposed. A well-known challenge in phasefield methods is the relationship between the model regularization ε > 0 and the spatial mesh size h. To obtain accurate discretizations for small ε around the fracture and specifically at the fracture tip adaptive mesh refinement is a useful tool. First studies date back to [10, 11] investigating residual-type error estimators. A predictorcorrector mesh refinement algorithm with a focus on crack-oriented refinement was developed in [21] and extended to three spatial dimensions in [27]. In [4], anisotropic mesh refinement was studied. Goal-oriented adjoint-based a posteriori error estimation was subject in [49]. Based on a recent approach for residual-type a posteriori estimators for contact problems [25, 46] we developed in [45] a reliable and efficient estimator for a singularly-perturbed obstacle problem taking into account the robustness (in terms of ε). We tested the resulting residual-type estimator for different fracture phase-field problems enforcing the irreversibility condition in [31] and further for nearly incompressible solids in [30]. The main objective of the current work is two-fold. We first develop a phase-field model using a mixed system for pressurized fractures. Therein the methodology from [32] is combined with pressurized fractures as proposed in [35, 36, 47]. Our second aim is to apply adaptive refinement based on our residual-type error estimator [31, 45] to this mixed-system phase-field fracture approach. These algorithmic concepts are substantiated with the help of several numerical examples and mesh convergence studies comparing classical primal formulations and our newly developed mixed formulation. Finally, we will test a pressure-robust modification of the discrete mixed formulation, inspired by the works [28, 29] for the Stokes problem. As this book chapter summarizes our efforts within the German Priority Programme 1748 (DFG SPP 1748), in the project ‘Structure Preserving Adaptive Enriched Galerkin Methods for Pressure-Driven 3D Fracture Phase-Field Models’, we briefly mention the other research directions, which were related to our own overall goal of developing pressure robust discretization schemes for phase-field fracture models. In [9], we considered a stabilized decoupled iteration scheme, a so-called Lscheme. Therein constant stabilization parameters were introduced including both numerical analysis and computational verification. An enhancement in efficiency by using dynamically chosen stabilization parameters during the iteration was subsequently proposed in [15]. We published our open-source parallel computing paper with heuristic adaptive mesh refinement [22]. The open-source programming code was used in the SPP benchmark collection [41]. Several comparisons of different stress-splitting methods were done in [16]. The predictor-corrector approach from [21] inspired an adaptive non-intrusive global-local approach in [38], a paper, which is also a collaboration within the SPP 1748 with the group of Peter Wriggers.
Adaptive and Pressure-Robust Discretization of Incompressible …
193
In the work [45] the basis for a provably reliable and efficient error estimator for fracture phase-field models has been set. The resulting residual-type error estimator has been used to steer solely the adaptive refinement and thus the resolution of the critical region around the crack without any prior knowledge about the problem in [30, 31]. The outline of this paper is as follows. In Sect. 2 the notation and equations are introduced. Next, in Sect. 3, both the discretization and the numerical solution are addressed. In Sect. 4, a residual-type error estimator for pressurized fractures is presented. In the final Sect. 5 several numerical tests are conducted. We summarize our findings in Sect. 6.
2 Notation and Equations In this section, we introduce the basic notation and the underlying equations. In the following, let ⊂ R2 the total domain wherein C ⊂ R denotes the fracture and ⊂ R2 is the intact domain. The outer boundary is denoted by ∂. The inner fracture := C. boundary is denoted by ∂ F Using a phase-field approach, the one-dimensional fracture C is approximated on ∈ R2 with the help of an elliptic (Ambrosio–Tortorelli) functional [1, 2]. This yields an approximate inner fracture boundary ∂ F ≈ C. For fracture formulations posed in a variational setting, this has been first proposed in [6] based on the model developed in [18]. Finally, we denote the L 2 scalar product with (·, ·) as frequently used in the literature. Variational phase-field fracture starts with an energy functional and the motion of the body under consideration is then determined by the Euler–Lagrange equations, which are obtained by differentiation with respect to the unknowns. Therefore, in phase-field-based fracture propagation, the unknown solution variables are vector-valued displacements u : → R2 and a smoothed scalar-valued indicator phase-field function ϕ : → [0, 1]. Here ϕ = 0 denotes the crack region and ϕ = 1 characterizes the unbroken material. The intermediate values constitute a smooth transition zone dependent on a regularization parameter ε. The physics of the underlying problem ask to enforce a crack irreversibility condition (the crack can never heal) yielding the inequality constraint ϕ ≤ ϕ n−1 . Here, ϕ n−1 denote the previous time step solution and ϕ the current solution.
194
S. Basava et al.
2.1 Pressurized Phase-Field Fracture in a Displacement Formulation In this work, we are specifically interested in pressurized fractures in which a given pressure acts on the fracture boundary ∂ F . Using classical interface coupling conditions, namely kinematic and dynamic coupling conditions, for the pressure and balance of contact forces, a pressure pg can be prescribed. However, due to the smeared zone of size ε in which 0 < ϕ < 1, the exact location of the fracture interface is not known and leaves some freedom where to put it. In [35, Sect. 2] or [36, Sect. 3.2], we used the divergence theorem to transform pg from ∂ F into the entire domain . This procedure avoids knowledge of the exact fracture boundary location, but is mathematically rigorous. Mathematical analysis [35, 36] and numerous computations, e.g., in [22, 41, 47], have shown that this approach is justified. As a consequence of the transformation, the pressure pg : → R is added as domain integral to the Euler–Lagrange equations. Let V := H01 (; R2 ) and W := H 1 () the usual Hilbert spaces and the convex set K := K n = {w ∈ W | w ≤ ϕ n−1 ≤ 1 a.e. on } including the inequality constraint. Note that the latter constraint ϕ n−1 ≤ 1 is provided for convenience, only. The Euler–Lagrange system for pressurized phase-field fracture reads [36]: Problem 1 Let pg ∈ W 1,∞ () be given. For the loading steps n = 1, 2, 3, . . . , N : Find vector-valued displacements and a scalar-valued phase-field variable {u, ϕ} := {u n , ϕ n } ∈ V × K such that
g(ϕ) σ (u) , e(v) + (ϕ 2 pg , div v) + (ϕ 2 ∇ pg , v) = 0 ∀v ∈ V,
(1)
and (1 − κ)(ϕ σ (u) : e(u) , ψ−ϕ) + 2(ϕ pg div u, ψ−ϕ) + 2 (ϕ∇ pg · u, ψ−ϕ) 1 + G c − (1 − ϕ, ψ−ϕ) + ε(∇ϕ, ∇(ψ − ϕ)) ≥ 0 ∀ψ ∈ K . ε Here,
(2)
g(ϕ) = (1 − κ)ϕ 2 + κ
is the so-called degradation function with a small regularization parameter κ, G c is the critical energy release rate, and we use the well-known Hook’s law for the linear stress-strain relationship of isotropic materials: σ (u) := 2μ e(u) + λ tr e(u) I,
(3)
Adaptive and Pressure-Robust Discretization of Incompressible …
195
where μ and λ denote the Lamé coefficients, e(u) = 21 (∇u + ∇u T ) is the linearized strain tensor and I is the identity matrix.
2.2 Pressurized Phase-Field Fracture in a Mixed Formulation Following [32], we now derive a mixed formulation for pressurized fractures. To this end, we need to split the stress tensor (3) into the shear part and the volumetric part. In nearly incompressible materials with Poisson’s ratio going to 0.5, for the volumetric parameter, it holds λ → ∞. To cope with volumetric locking, one possibility is to introduce a Lagrange multiplier, e.g., [7], with p ∈ P := L 2 () such that p := λ tr e(u).
Remark 1 This solution variable p should not be confused with the given pressure pg from before. With that, we obtain for the stress tensor: σ (u, p) := 2μ e(u) + p I, as it has been analyzed in our work [32] without the given pressure pg . Adding this fracture pressure pg , we obtain the following reformulation: Problem 2 Let pg ∈ W 1,∞ () be given. For the loading steps n = 1, 2, 3, . . . , N : Find vector-valued displacements, a scalar-valued pressure, and a scalar-valued phase-field variable {u, p, ϕ} := {u n , p n , ϕ n } ∈ V × P × K such that
g(ϕ) σ (u, p) , e(v) + (ϕ 2 pg , div v) + (ϕ 2 ∇ pg , v) = 0 ∀v ∈ V,
and (tr e(u) , q) −
1 ( p, q) = 0 ∀q ∈ P, λ
(4)
(5)
and (1 − κ)(ϕ σ (u, p) : e(u) , ψ−ϕ) + 2(ϕ pg div u, ψ−ϕ) + 2 (ϕ∇ pg · u, ψ−ϕ) 1 + G c − (1 − ϕ, ψ−ϕ) + ε(∇ϕ, ∇(ψ − ϕ)) ≥ 0 ∀ψ ∈ K . ε
(6)
196
S. Basava et al.
3 Discrete Formulation As the structure remains the same for all time steps, we consider one time step n for simplicity. For the discretization in space, we decompose the polygonal domain by a (family of) meshes M = Mn consisting of shape regular rectangles e, such that all meshes share a common coarse mesh. To allow for local refinement, in particular of rectangular elements, we allow for one hanging node per edge at which degrees of freedom will be eliminated to assert H 1 -conformity of the discrete spaces. To each mesh, we associate the mesh size function h, i.e., h|e = h e = diam e for any element e ∈ M. The set of nodes q is given by N and we distinguish between the set N of nodes at the boundary and the set of interior nodes N I . Later on, for the derivation of the estimator, we need the following definitions. For a point q ∈ N, we define a patch ωq as the interior of the union of all elements sharing the node q. We call the union of all sides in the interior of ωq , not including the boundary of ωq , skeleton and denote it by γqI . For boundary nodes, we denote the intersections between and ∂ωq by γq := ∩ ∂ωq . Further, we will make use of ωs which is the union of all elements sharing a side s. We need the definition of the jump term [∇ψh ] := ∇|e ψh · n e − ∇|e˜ ψh · n e where e, e˜ are neighboring elements and n e is the unit outward normal on the common side of the two elements. For the discretization, we consider (bi)-linear (Q1 (e)), (bi)-quadratic (Q2 (e)) and linear (P1 (e)) shape functions. Thus, the finite element spaces are given by Wh := Whn = {vh ∈ C0 () | ∀e ∈ M, vh |e ∈ Q1 (e)} ⊂ W, Ph := Phn = { ph ∈ P | ∀e ∈ M, ph |e ∈ P1 (e)} ⊂ P, and by Vh := Vhn = {vh ∈ C0 (; R2 ) | ∀e ∈ M, vh |e ∈ Q1 (e)2 and vh = 0 on } ⊂ V for the discrete analog of Problem 1 and by Vh := Vhn = {vh ∈ C0 (; R2 ) | ∀e ∈ M, vh |e ∈ Q2 (e)2 and vh = 0 on } ⊂ V, for the discrete analogon of Problem 2, respectively. By this choice the pair (Vh , Ph ) satisfies the inf-sup condition asserting stability of the discrete analogon of Problem 2. We define the respective nodal interpolation operators as Ihn , and define the discrete feasible set for the phase-field by K h := K hn = {ψh ∈ Wh | ψh (q) ≤ (Ihn ϕhn−1 )(q), ∀q ∈ N}. The nodal basis functions of the finite element space Wh are denoted by φq . Analogous to Problem 1, we define the spatially discretized time step problem:
Adaptive and Pressure-Robust Discretization of Incompressible …
197
Problem 3 (Discrete formulation of Problem 1) Let pg ∈ W 1,∞ () be given. For the loading steps n = 1, 2, 3, . . . , N : Find vector-valued displacements and a scalarvalued phase-field variable {u h , ϕh } := {u nh , ϕhn } ∈ Vh × K h such that
g(ϕh ) σ (u h ) , e(vh ) + (ϕh2 pg , div vh ) + (ϕh2 ∇ pg , vh ) = 0 ∀vh ∈ Vh ,
(7)
and (1 − κ)(ϕh σ (u h ) : e(u h ) , ψh −ϕh ) + 2(ϕh pg div u h , ψh −ϕh ) + 2 (ϕh ∇ pg · u h , ψh −ϕh ) (8) 1 + G c − (1 − ϕh , ψh −ϕh ) + ε(∇ϕh , ∇(ψh − ϕh )) ≥ 0 ∀ψ ∈ K . ε Analogous to Problem 2, we define the spatially discretized mixed time step problem: Problem 4 (Discrete formulation of Problem 2) Let pg ∈ W 1,∞ () be given. For the loading steps n = 1, 2, 3, . . . , N : Find vector-valued displacements, a scalar-valued pressure, and a scalar-valued phase-field variable {u h , ph , ϕh } := {u nh , phn , ϕhn } ∈ Vh × Ph × K h such that
g(ϕh ) σ (u h , ph ) , e(vh ) + (ϕh 2 pg , div vh ) + (ϕh 2 ∇ pg , vh ) = 0 ∀vh ∈ Vh ,
and (tr e(u h ) , qh ) −
1 ( ph , qh ) = 0 ∀qh ∈ Ph , λ
(9)
(10)
and (1 − κ)(ϕh σ (u h , ph ) : e(u h ) , ψh − ϕh ) + 2(ϕh pg div u h , ψh − ϕh ) + 2 (ϕh ∇ pg · u h , ψh − ϕh ) (11) 1 + G c − (1 − ϕh , ψh − ϕh ) + ε(∇ϕh , ∇(ψh − ϕh )) ≥ 0 ∀ψh ∈ K h . ε Finally, following the work of [28, 29], we propose a pressure robust modification of Problem 4. To this end, we define the divergence conforming space of Raviart– Thomas finite elements, see, e.g., [8, Sect. III.3.2], on the unit square (−1, 1)2 by RT1 = Q21 + xQ1 . As usual, for elements e ∈ M, the space RT1 (e)
198
S. Basava et al.
is then obtained by mapping of the shape functions utilizing a Piola transform. With this, we can define the global space h = {vh ∈ C0 (; R2 ) | ∀e ∈ M, vh |e ∈ RT1 (e)} V h . Now, following [28, 29], the together with the interpolation operator IRT : Vh → V pressure robust reformulation of Problem 4 is the problem Problem 5 (Pressure robust formulation of Problem 4) Let pg ∈ W 1,∞ () be given. For the loading steps n = 1, 2, 3, . . . , N : Find vector-valued displacements, a scalar-valued pressure, and a scalar-valued phase-field variable {u h , ph , ϕh } := {u nh , phn , ϕhn } ∈ Vh × Ph × K h such that
g(ϕh ) σ (u h , ph ) , e(vh ) + (ϕh 2 pg , div IRT vh ) + (ϕh 2 ∇ pg , IRT vh ) = 0 ∀vh ∈ Vh ,
(12)
as well as (10) and (11) hold.
4 Residual-Type a Posteriori Error Estimator We propose an estimator for the phase-field inequality (8) or (11), respectively, to obtain a good resolution of the fracture growth. Utilizing either σhn := σ (u nh , phn ) for the mixed form or σhn := σ (u nh ) for the nonmixed form, we introduce the bilinear form ah, (ζ, ψ) :=
Gc (ζ, ψ) + (1 − κ)(σhn : e(u nh ) ζ, ψ) + 2( pg div u nh ζ, ψ) + 2 (∇ pg · u nh ζ, ψ) + G c (∇ζ, ∇ψ).
(13)
Thus, the discretized variational inequality in a time step n is given by Problem 6 (Discrete variational inequality) Let u nh , phn and ϕhn−1 be given, then find ϕh ∈ K h such that ah, (ϕh , ψh − ϕh ) ≥
Gc (1, ψh − ϕh ) ∀ψh ∈ K h .
(14)
We define the discrete constraining force density h ∈ Wh∗ of Problem 6 as
h , ψh −1,1 :=
Gc (1, ψh ) − ah, (ϕh , ψh ) ∀ψh ∈ Wh .
(15)
The solution of Problem 6 is the discrete approximation of the auxiliary problem:
Adaptive and Pressure-Robust Discretization of Incompressible …
199
Problem 7 Let u nh , phn and ϕhn−1 be given, then find ϕˆ ∈ K (Ihn (ϕhn−1 )) := {ψ ∈ W | ψ ≤ Ihn (ϕhn−1 )} such that ˆ ψ − ϕ) ˆ ≥ ah, (ϕ,
Gc (1, ψ − ϕ) ˆ ∀ψ ∈ K (Ihn (ϕhn−1 )).
(16)
ˆ ∈ W ∗ of Problem 7 is The corresponding constraining force density ˆ ψ ,
−1,1
:=
Gc (1, ψ) − ah, (ϕ, ˆ ψ) ∀ψ ∈ W.
Remark 2 As the bilinear form ah, (·, ·) depends on the approximation u nh of u n and phn of p n and the constraints depend on the approximation Ihn (ϕhn−1 ) of ϕ n−1 , the solution ϕˆ of (16) is an approximation to the solution ϕ n of (2) or (6), respectively. Despite the finite dimensional data, (16) is posed on a subset of W , not Wh and hence ˆ can not be computed. Yet, if it were known then ˆ ψ
R(ϕh ) , ψ−1,1 := −,
−1,1
+
Gc (1, ψ) − ah, (ϕh , ψ)
would define the linear residual to the corresponding equation. Thus, R(ϕh ) = 0 if ˆ Further, we are interested in the error in the constraining forces. and only if ϕh = ϕ. As h is not a functional on W , but a functional on Wh , it is not uniquely defined ˆ ∈ W ∗ with a discrete how h acts on W . Thus, to compare the constraining force ∗ counterpart, we choose a functional on W called quasi-discrete constraining force, h ∈ W ∗ . Therefore, we follow the approach used in [17, 25, 37, 45] and denoted by distinguish between full-contact nodes q ∈ N f C and semi-contact nodes q ∈ N sC . Full-contact nodes are those nodes for which the solution is fixed to the obstacle ϕh = Ihn (ϕhn−1 ) on ωq and the sign condition h , ψ−1,1,ωq ≥ 0 ∀ψ ≥ 0 ∈ H01 (ωq ) is fulfilled. Semi-contact nodes are those nodes for which ϕh (q) = Ihn (ϕhn−1 )(q) holds but not the conditions of full-contact. Based on this classification, we define the quasidiscrete constraining force, where φq denotes the nodal basis of Wh ,
h , ψ
−1,1
:=
qh , ψφq
q∈N sC
−1,1
+
qh , ψφq
q∈N f C
with the local contributions which are for full-contact nodes
qh , ψφq
and for semi-contact nodes
−1,1
:= h , ψφq −1,1
−1,1
,
(17)
200
S. Basava et al.
qh , ψφq
with cq (ψ) =
ω˜ q
ψφq
ω˜ q
φq
−1,1
:= h , φq −1,1 cq (ψ)
, where ω˜ q is a proper subset of ωq . Therefore, we define the
so-called Galerkin functional ˆ − h , ψ
G, ψ−1,1 := R(ϕh ) , ψ−1,1 + −1,1
G c h , ψ ,ψ − = − ah, (ϕh , ψ). −1,1 We note that in the case that pg = const and div(u nh ) = 0, i.e., the material is incompressible, the bilinear form ah, (ζ, ψ) defined in (13) is elliptic; and the corresponding energy norm is given by ⎧ ⎨
⎫1 21 G 2 ⎬ 2 c + (1 − κ)σ (u nh ) : e(u nh ) (·) · := G c ∇(·)2 + . ⎭ ⎩ sup
(18)
·,ψ
We denote the corresponding dual norm by · ∗, := ψ∈Wψ −1,1 . For the definition of the error estimator contributions, we use the abbreviation of the interior residual r (ϕh ) :=
Gc Gc + G c ϕh − ϕh − (1 − κ)(σhn : e(u nh ))ϕh + 2 pg div u nh ϕh + 2∇ pg · u nh ϕh
and set αq := min x∈ωq
Gc n n + (1 − κ)(σ (u h ) : e(u h )) .
(19)
(20)
Deriving an upper bound of G∗, as, e.g., in [45], we end up with the error indicator η which is the sum of the following contributions η12 :=
2 η1,q ,
q∈N\N f C
η22 :=
q∈N\N
η32
:=
2 η2,q , fC
q∈N\N f C
hq −1 , αq 2 r (ϕh )ωq η1,q :=min √ Gc
(21)
1 hq 1 −1 2 , αq 2 (G c )− 4 G c [∇ϕh ]γqI η2,q :=min √ Gc
(22)
2 η3,q ,
η3,q
hq −1 , αq 2 :=min √ Gc
21
(G c )− 4 G c ∇ϕh γqN 1
(23)
In the case that pg = const and the material is incompressible div(u nh ) = 0, we can derive a robust upper bound of the error measure
Adaptive and Pressure-Robust Discretization of Incompressible …
ˆ − h ∗, ϕˆ − ϕh + in terms of the estimator η :=
4
ηk
201
(24)
(25)
k=1
which consists of the estimator contributions (21), (22), (23) and 2 2 η4,q , η4,q :=sq (Ihn (ϕhn−1 ) − ϕh )φq . η42 := ωq
q∈N sC
Theorem 1 (Reliability) Assuming that pg = const and div(u nh ) = 0, the error estimator η provides a robust upper bound of the error measure, i.e. ˆ − h ∗, ≤ Cη ϕˆ − ϕh + otherwise the estimator constitutes an upper bound of the dual norm of the Galerkin functional G∗, ≤ Cη, where C does not depend on . If pg = const and div(u nh ) = 0, the local estimator contributions constitute local lower bounds with respect to the local error measure (24). The proof to show reliability as well as efficiency follows the ideas of [45].
5 Numerical Tests In this section, we investigate some examples all motivated by the theoretical calculations of Sneddon [42] and Sneddon and Lowengrub [43] considering a pressuredriven cavity. Our implementation is based on the open-source software DOpElib [19] and the finite elements from deal.II [3, 5]. The refinement strategy follows [31, Sect. 4.2]. This strategy allows to flag certain cells based on the cell-wise error indicators to reach a grid that is optimal with respect to an objective function that tries to balance reducing the error and increasing the numerical cost due to the added unknowns. Setup We follow the setup from [41], where the case ν = 0.2 is discussed. We assume a twodimensional domain = (−10, 10)2 as sketched in Fig. 1. In this domain, an initial crack with length l0 = 2.0 and thickness d of two cells on c = [−1, 1] × [−d, d] ⊂ is prescribed by help of the phase-field function ϕ, i.e., ϕ = 0 in c and ϕ = 1
202
S. Basava et al.
Fig. 1 Domain (in 2D) with Dirichlet boundaries ∂, an initial crack C of length 2l0 and a crack width , where the phase-field function ϕ is defined
√ in \ c . Note that the thickness of 2d corresponds to 2h/ 2, where h is the cell diameter. For the numerical realization, ϕh0 = Ih0 (ϕ 0 ) is utilized. As boundary conditions, the displacements u are set to zero on ∂. For the phasefield variable, we use homogeneous Neumann conditions (so-called traction free conditions), i.e., ∂n ϕ = 0 on ∂. √ For all tests in the following sections the crack bandwidth is set as = 4 2d, the regularization parameter κ is determined sufficiently small with κ = 10−8 . The fracture toughness of the observed material is G c = 1.0 and the Young’s modulus E = 1.0. The numerical tests in the following are based on three configurations derived from Sneddon’s setup as discussed in detail in [41] using the solving strategy described below for the discrete formulations of Sect. 3 and adaptively refined meshes based on the error estimator in Sect. 4: • Example 1: Constant given pressure with pg = 10−3 and ν = 0.2 to ν = 0.5 using Problem 3, called Example 1A, compared to Problem 4, called Example 1B, in Sect. 5.1; • Example 2: Constant given pressure with pg = 10−3 , ν = 0.2 to ν = 0.5 and a compressible layer around the finite domain as well as in the prescribed fracture using Problem 3, called Example 2A, compared to Problem 4, called Example 2B, in Sect. 5.2, where details on the layer will be given; • Example 3: Non-constant given pressure pg , ν = 0.5 and a compressible layer around the finite domain as well as in the prescribed fracture using Problem 4, called Example 3A, compared to Problem 5, called Example 3B, in Sect. 5.3. Solution Algorithm The coupled inequality system in Problems 3, 4, and 5 is formulated as a complementarity system as shown in [32]. Therein a Lagrange multiplier is introduced for
treating the inequality constraint. The Lagrange multiplier τ is discretized in the dual basis to the Q1 space, denoted by Q1*, and the corresponding discrete function space is denoted as X_h. The discrete form is then solved in a monolithic fashion, but noticing that ϕ is time-lagged in the first term of the displacement equation. This means that in Problems 3, 4, and 5 we replace in (9), (7) and (12), respectively, the term g(ϕ_h) by g(ϕ_h^{n−1}) and (ϕ_h^n)² by (ϕ_h^{n−1})². This procedure helps in relaxing the nonlinearity. Of course, a temporal discretization error is introduced, which however is not significant in the steady-state tests considered here. To this end, we formulate a compact form by summing up all equations: Given the initial data ϕ⁰; for the loading steps n = 1, 2, …, N: Find U_h := U_h^n = (u_h, p_h, ϕ_h, τ_h) ∈ Y_h := (V_h × P_h × W_h × X_h) such that A_{ϕ^{n−1}}(u_h, p_h, ϕ_h, τ_h) = 0. To solve A_{ϕ^{n−1}}(·) = 0, we formulate a residual-based Newton scheme, e.g., [50]. The concrete scheme (and its implementation) can be found in [14, 19]. The occurring linear systems are solved with a direct method provided by UMFPACK [13].

Quantities of Interest For all examples, we compare the following quantities of interest:
• Total crack volume (TCV);
• Bulk energy E_b;
• Crack energy E_c.
It will turn out in the discussion below that focusing on the TCV is sufficient. For the TCV, manufactured reference values can be computed for an infinite domain from the formulae presented in [43, Sect. 2.4]. Numerical values on the cut-off domain in Fig. 1 and ν = 0.2 can be found in [41]. Numerically, the total crack volume can be computed by
$$\mathrm{TCV} = \int_{\Omega} u(x, y) \cdot \nabla\varphi(x, y)\, \mathrm{d}(x, y). \qquad (26)$$
Using the exact representation of u_y (cf. [43], p. 29) applied to our parameter settings as in [41], we consequently obtain the reference values listed in Table 1 for an infinite domain. As a second quantity of interest, the bulk energy E_b is defined as
$$E_b := \int_{\Omega} \frac{g(\varphi)}{2}\, \sigma : e(u)\, \mathrm{d}x, \qquad (27)$$
where σ := σ(u) for Problem 3 and σ := σ(u, p) for Problems 4 and 5.
Table 1 Manufactured reference values of the TCV computed with the help of the formula in [41] for an infinite domain and different Poisson ratios up to the incompressible limit

ν         TCV2d (reference)
0.2       6.03186 × 10⁻³
0.49      4.77459 × 10⁻³
0.49999   4.71245 × 10⁻³
0.5       4.71239 × 10⁻³
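For orientation, the tabulated values are consistent with the classical Sneddon-type closed-form expression TCV = 2π pg l0² (1 − ν²)/E for a pressurized crack of half-length l0 = 1 (the prescribed crack on (−1, 1)) in an infinite plane-strain domain. The following short check is our own illustrative sketch, not part of the authors' implementation:

```python
import numpy as np

def tcv_reference(p_g, l0, E, nu):
    """Sneddon-type reference value for the total crack volume of a
    pressurized crack of half-length l0 in an infinite plane-strain domain."""
    return 2.0 * np.pi * p_g * l0**2 * (1.0 - nu**2) / E

# Parameters of the present setup: p_g = 1e-3, E = 1.0, crack on (-1, 1), i.e. l0 = 1.
for nu in (0.2, 0.49, 0.49999, 0.5):
    print(f"nu = {nu:<8} TCV_2d = {tcv_reference(1e-3, 1.0, 1.0, nu):.5e}")
# Reproduces the reference values of Table 1 (6.03186e-3, ..., 4.71239e-3).
```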
As a third quantity of interest, the crack energy is computed via
$$E_c := \frac{G_c}{2} \int_{\Omega} \Big( \frac{(\varphi - 1)^2}{\epsilon} + \epsilon\, |\nabla\varphi|^2 \Big)\, \mathrm{d}x. \qquad (28)$$
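All three quantities of interest are plain domain integrals of the discrete solution and can be approximated by numerical quadrature over the mesh. The sketch below illustrates this for nodal data on a uniform grid with NumPy; it is only a schematic stand-in for the actual DOpElib/deal.II evaluation, and the field arrays u_x, u_y, phi as well as the grid are hypothetical inputs.

```python
import numpy as np

def quantities_of_interest(u_x, u_y, phi, h, G_c, eps):
    """Evaluate TCV (26) and the crack energy E_c (28) for nodal fields on a
    uniform grid with spacing h; E_b (27) follows the same pattern once the
    stress is assembled from a constitutive law."""
    dphi_dx, dphi_dy = np.gradient(phi, h, h)
    tcv = np.sum(u_x * dphi_dx + u_y * dphi_dy) * h**2              # TCV = ∫ u·∇φ
    e_c = 0.5 * G_c * np.sum((phi - 1.0)**2 / eps
                             + eps * (dphi_dx**2 + dphi_dy**2)) * h**2
    return tcv, e_c

# Hypothetical usage with fields sampled on a 401 x 401 grid of (-10, 10)^2:
n, h = 401, 20.0 / 400
x = np.linspace(-10.0, 10.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.where((np.abs(X) <= 1.0) & (np.abs(Y) <= 2 * h), 0.0, 1.0)  # initial crack
u_x = np.zeros_like(phi)
u_y = np.zeros_like(phi)
print(quantities_of_interest(u_x, u_y, phi, h, G_c=1.0, eps=4 * np.sqrt(2) * h))
```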
5.1 Sneddon-Inspired Test Cases (Example 1)

In this first set of numerical examples, we compare Example 1A with Example 1B. The prescribed pressure is pg = 10⁻³ and the Poisson ratios are ν = 0.2, 0.49, 0.49999 and ν = 0.5 (only for the mixed formulation, Example 1B). The starting meshes are once globally uniformly refined and three times further uniformly refined around the crack. The following three meshes are either uniformly refined in a zone around the crack (geometric refinement) or adaptively refined based on the estimator proposed in Sect. 4. Tables 2 and 3 show the resulting values for the TCV and E_b on the starting mesh and the following three geometrically or adaptively refined meshes with the parameters ε and d adjusted according to [41].

Remark 3 Considering adaptively refined meshes, the parameters ε and d are decreased by a factor of two after each refinement. These values are the same as for the computations on geometrically refined meshes, which allows a fair comparison of results coming from geometrically and adaptively refined meshes.

For ν = 0.2 the TCV and E_b computed with Problem 3, rounded to three significant digits, match the numbers given in [41]; hence we conclude the correctness of our implementation. The fracture energy E_c is identical to the values in [41]. On the coarsest mesh this corresponds to E_c ≈ 2.895 and on the finest mesh we have E_c ≈ 2.423. As the numbers for E_c are independent of ν and the chosen formulation, they are not listed separately. In the following, we will focus on the behavior of the TCV for different Poisson ratios and compare it to the reference values of Table 1 on an infinite domain. First, we see in Tables 2 and 3 that both quantities of interest are numerically stable under mesh refinement. This shows the robustness of our proposed models and their numerical realization. Second, we observe that more incompressible materials yield smaller
Table 2 The number of degrees of freedom (DoF) and the TCV for four different Poisson ratios for Examples 1A and 1B

                      Example 1A                                    Example 1B
                      Geometric              Adaptive               Geometric               Adaptive
ν        d            DoF        TCV         DoF       TCV          DoF         TCV         DoF        TCV
0.2      0.0625       29,988     0.00818     29,988    0.00818      96,436      0.00821     96,436     0.00821
0.2      0.03125      74,852     0.00691     36,964    0.00691      241,860     0.00693     118,916    0.00693
0.2      0.015625     241,156    0.00639     49,044    0.00639      781,604     0.00640     157,900    0.00640
0.2      0.0078125    880,740    0.00616     69,428    0.00616      2,858,788   0.00617     223,692    0.00617
0.49     0.0625       29,988     0.00601     29,988    0.00601      96,436      0.00620     96,436     0.00620
0.49     0.03125      74,852     0.00492     36,580    0.00491      241,860     0.00504     117,668    0.00504
0.49     0.015625     241,156    0.00440     48,436    0.00438      781,604     0.00448     155,948    0.00447
0.49     0.0078125    880,740    0.00415     68,628    0.00413      2,858,788   0.00421     221,116    0.00421
0.49999  0.0625       29,988     2.28E-5     29,988    2.28E-5      96,436      2.38E-5     96,436     2.38E-5
0.49999  0.03125      74,852     2.33E-5     37,332    2.29E-5      241,860     2.39E-5     120,124    2.39E-5
0.49999  0.015625     241,156    2.35E-5     47,844    2.28E-5      781,604     2.39E-5     154,036    2.39E-5
0.49999  0.0078125    880,740    2.36E-5     70,164    2.28E-5      2,858,788   2.39E-5     226,108    2.39E-5
0.5      0.0625       –          –           –         –            96,436      1.44E-5     96,436     1.44E-5
0.5      0.03125      –          –           –         –            241,860     −1.15E-6    118,292    −2.03E-7
0.5      0.015625     –          –           –         –            781,604     −1.29E-7    155,284    −1.33E-7
0.5      0.0078125    –          –           –         –            2,858,788   −3.38E-8    227,356    −3.46E-8
Table 3 The number of degrees of freedom (DoF) and the bulk energy Eb for four different Poisson ratios for Examples 1A and 1B

                      Example 1A                                    Example 1B
                      Geometric              Adaptive               Geometric               Adaptive
ν        d            DoF        Eb          DoF       Eb           DoF         Eb          DoF        Eb
0.2      0.0625       29,988     4.06E-6     29,988    4.06E-6      96,436      4.07E-6     96,436     4.07E-6
0.2      0.03125      74,852     3.38E-6     36,964    3.38E-6      241,860     3.39E-6     118,916    3.39E-6
0.2      0.015625     241,156    3.14E-6     49,044    3.13E-6      781,604     3.14E-6     157,900    3.14E-6
0.2      0.0078125    880,740    3.04E-6     69,428    3.04E-6      2,858,788   3.05E-6     223,692    3.05E-6
0.49     0.0625       29,988     3.00E-6     29,988    3.00E-6      96,436      3.09E-6     96,436     3.09E-6
0.49     0.03125      74,852     2.46E-6     36,580    2.45E-6      241,860     2.52E-6     117,668    2.52E-6
0.49     0.015625     241,156    2.20E-6     48,436    2.19E-6      781,604     2.23E-6     155,948    2.23E-6
0.49     0.0078125    880,740    2.08E-6     68,628    2.06E-6      2,858,788   2.10E-6     221,116    2.10E-6
0.49999  0.0625       29,988     1.14E-8     29,988    1.14E-8      96,436      1.19E-8     96,436     1.19E-8
0.49999  0.03125      74,852     1.16E-8     37,332    1.14E-8      241,860     1.19E-8     120,124    1.19E-8
0.49999  0.015625     241,156    1.17E-8     47,844    1.14E-8      781,604     1.19E-8     154,036    1.19E-8
0.49999  0.0078125    880,740    1.18E-8     70,164    1.14E-8      2,858,788   1.19E-8     226,108    1.19E-8
0.5      0.0625       –          –           –         –            96,436      4.06E-9     96,436     4.06E-9
0.5      0.03125      –          –           –         –            241,860     −3.25E-10   118,292    −5.74E-11
0.5      0.015625     –          –           –         –            781,604     −3.65E-11   155,284    −3.76E-11
0.5      0.0078125    –          –           –         –            2,858,788   −9.56E-12   227,356    −9.77E-12
values of the TCV much smaller than the predicted values in Table 1. Physically, this is to be expected if we think of incompressible material in a closed box, because the material cannot move. Due to the cut-off of the computational domain and the use of an incompressible material, no movement can be expected for ν ≈ 0.5. This led us to suggest the setting of Sect. 5.2 where we add an artificial compressible layer around the (nearly) incompressible domain and inside the prescribed fracture (−1, 1) × (−d, d).
5.2 Incompressible Material Surrounded with a Compressible Layer (Example 2)

As we have seen in the previous example in terms of the total crack volume, for ν = 0.49999 the fracture in incompressible solids does not open anymore and the TCV is almost 0. On the other hand, the formulae in [43, Sect. 2.4] suggest a value greater than zero; the reason is that an infinite domain was assumed therein. To study incompressible solids in larger domains, we add a compressible layer as surrounding area to allow an opening of the fracture as it would be possible on an infinite domain. Considering Fig. 1, we now work in a domain (−20, 20)², which contains the previously defined domain (−10, 10)². The surrounding layer of width 10 is defined as a compressible material with ν = 0.2. All other parameters, namely E, G_c, κ and ε, are kept as before with the values listed in the first paragraph of Sect. 5. The same compressible material is used inside the prescribed fracture on the set (−1, 1) × (−d, d). In Fig. 2, the ranges of the x- and y-displacements as well as of the pressure values are depicted for Example 2B, where a perfect symmetry of the test setup can be observed. In Table 4, for the primal-based form (Example 2A), the TCV is underestimated for ν ≈ 0.5, while the mixed form (Example 2B) gives results consistent with the computations for ν = 0.5. Compared to Table 1, the TCV values based on the mixed form (Example 2B) are very similar to the reference values for the four listed Poisson ratios. Keep in mind at this point that the reference values are given analytically considering an infinite domain. Further, the TCV values in Table 4 on adaptively refined meshes coincide satisfactorily with those on geometrically refined meshes. Note, however, that, as has to be expected, the primal formulation (Problem 3) provides unreliable values for ν close to 0.5. To give an impression of the used meshes and to see the difference between geometrically and adaptively refined meshes, Fig. 3 shows a coarser starting mesh (geometrically prerefined) on the left and the mesh after three additional adaptive refinements (based on the error estimator) on the right. We observe that the adaptively refined meshes, obtained by the error estimator of Sect. 4, have roughly a tenth of the unknowns on the finest refinement level while providing similar results for the TCV.
Fig. 2 Example 2B: The x- and y-displacements and the pressure p for ν = 0.5
5.3 Nonhomogeneous Pressure Test Case with a Compressible Layer (Example 3)

In this third example, we prescribe a nonhomogeneous pressure pg in the form of a bump that resembles a fluid-filled fracture situation (e.g., [34]). In this situation, we can no longer expect our pressure p to be almost constant. As has been observed, e.g., in [28, 29] for Stokes flow, for incompressible situations the difficulty in approximating the pressure can negatively influence the approximation of the displacement field. Hence, for the third example, we will focus on the case ν = 0.5 and compare the numerical results from Problem 4 with the pressure-robust Problem 5. For this setting we consider the following given pressure: pg(x, y) = f(x) g(y), where
Table 4 The number of degrees of freedom (DoF) and the TCV for four different Poisson ratios for Examples 2A and 2B

                      Example 2A                                    Example 2B
                      Geometric              Adaptive               Geometric               Adaptive
ν        d            DoF        TCV         DoF       TCV          DoF         TCV         DoF        TCV
0.2      0.0625       49,508     0.00836     49,508    0.00836      159,316     0.00839     159,316    0.00839
0.2      0.03125      94,372     0.00703     58,036    0.00702      304,740     0.00704     186,828    0.00704
0.2      0.015625     260,676    0.00648     72,420    0.00648      844,484     0.00649     233,300    0.00649
0.2      0.0078125    900,260    0.00624     93,220    0.00624      2,921,668   0.00625     300,420    0.00625
0.49     0.0625       49,508     0.00808     49,508    0.00808      159,316     0.00842     159,316    0.00842
0.49     0.03125      94,372     0.00622     58,420    0.00620      304,740     0.00640     188,076    0.00640
0.49     0.015625     260,676    0.00540     72,804    0.00537      844,484     0.00551     234,548    0.00551
0.49     0.0078125    900,260    0.00503     93,796    0.00500      2,921,668   0.00511     301,668    0.00511
0.49999  0.0625       49,508     0.000913    49,508    0.000913     159,316     0.00840     159,316    0.00840
0.49999  0.03125      94,372     0.00129     57,236    0.000890     304,740     0.00636     188,076    0.00636
0.49999  0.015625     260,676    0.00188     71,044    0.000860     844,484     0.00545     234,548    0.00545
0.49999  0.0078125    900,260    0.00237     91,220    0.000838     2,921,668   0.00505     301,668    0.00505
0.5      0.0625       –          –           –         –            159,316     0.00840     159,316    0.00840
0.5      0.03125      –          –           –         –            304,740     0.00636     188,076    0.00636
0.5      0.015625     –          –           –         –            844,484     0.00545     234,548    0.00545
0.5      0.0078125    –          –           –         –            2,921,668   0.00505     301,668    0.00505
Fig. 3 The mesh on the starting grid (left) and after three levels of adaptive refinement (right) for Example 2B with ν = 0.5 zoomed to the crack zone
$$f(x) = \begin{cases} -0.002\, x^2 (x - 1.5) & 0 \le x < 1,\\ 0.001 & 1 \le x < 2,\\ 0.002\, (x - 3)^2 (x - 1.5) & 2 \le x < 3,\\ 0 & \text{otherwise}, \end{cases} \qquad g(y) = \begin{cases} 1 & |y| < 0.5,\\ 2\,(|y| - 1.5)^2\, |y| & 0.5 \le |y| < 1.5,\\ 0 & \text{otherwise}. \end{cases}$$
All other parameters are chosen as in Example 2. The solution is shown in Fig. 4, where the nonsymmetry of the setup can be clearly seen in the x-displacements. It should be noted that, similar to Example 2, the pressure is relatively simple, and the jump in the pressure on the prescribed fracture is aligned with the mesh. Hence, no difficulty in the pressure approximation is expected, and thus the pressure-robust results should not deviate too much. Indeed, as the numbers in Table 5 show, the pressure-robust discretization yields similar numerical results; the values differ only in later digits, which is not visible in the table. Since the given pressure only enters the equation on the boundary of the approximate fracture, i.e., the region where ∇ϕ ≠ 0, this rather similar behavior of Problems 4 and 5 has to be expected. It remains a subject of future research whether this remains the case for growing fractures or other forcings.
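The bump pressure is straightforward to evaluate pointwise. The following sketch implements f, g, and pg exactly as defined above (the function names are ours, not from the original implementation) and can be used, e.g., to tabulate or plot the prescribed pressure; the printed checks confirm continuity at the breakpoints.

```python
import numpy as np

def f(x):
    """Piecewise polynomial profile in x, continuous at x = 1 and x = 2."""
    x = np.asarray(x, dtype=float)
    return np.select(
        [(0.0 <= x) & (x < 1.0), (1.0 <= x) & (x < 2.0), (2.0 <= x) & (x < 3.0)],
        [-0.002 * x**2 * (x - 1.5),
         0.001 * np.ones_like(x),
         0.002 * (x - 3.0)**2 * (x - 1.5)],
        default=0.0)

def g(y):
    """Even profile in y, decaying to zero for |y| >= 1.5."""
    y = np.abs(np.asarray(y, dtype=float))
    return np.select([y < 0.5, (0.5 <= y) & (y < 1.5)],
                     [np.ones_like(y), 2.0 * (y - 1.5)**2 * y],
                     default=0.0)

def p_g(x, y):
    """Non-constant given pressure p_g(x, y) = f(x) g(y) of Example 3."""
    return f(x) * g(y)

# quick continuity checks at the breakpoints
print(f(1.0 - 1e-9), f(1.0), f(2.0 - 1e-9), f(2.0))   # all approximately 0.001
print(g(0.5 - 1e-9), g(0.5))                           # both approximately 1.0
```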
Fig. 4 Example 3: The x- and y-displacements (top row), and the pressure p for ν = 0.5 and the final locally adapted mesh
6 Conclusions

In this work, we developed a pressurized phase-field fracture model in mixed form for solids up to the incompressible limit ν = 0.5. In addition, a residual-type error estimator is presented for the variational inequality, in this context especially for fractures in solids which are (nearly) incompressible. Estimating the error in the phase-field variable allows us to obtain a good resolution especially of the fracture zone. We investigated the performance of the mixed phase-field fracture formulation and the error estimator with the help of three numerical configurations, all based on Sneddon's and Lowengrub's setup [42, 43], from which reference values for the total crack volume on an infinite domain can be obtained. These reference values have been used to assess the quality of the finite-dimensional approximation and of the adaptive refinement based on the error estimator. In a second numerical configuration we added a compressible layer around the (nearly) incompressible cavity to allow computing similar results for the TCV as given by the exact formula on an infinite domain. The findings observed on a com-
Table 5 The number of degrees of freedom (DoF) and the TCV for Examples 3A and 3B

                      Example 3A                                    Example 3B
                      Geometric              Adaptive               Geometric               Adaptive
ν        d            DoF        TCV         DoF       TCV          DoF         TCV         DoF        TCV
0.5      0.0625       159,316    0.00372     159,316   0.00372      159,316     0.00372     159,316    0.00372
0.5      0.03125      304,740    0.00314     187,744   0.00314      304,740     0.00314     187,744    0.00314
0.5      0.015625     844,484    0.00273     233,280   0.00273      844,484     0.00273     233,280    0.00273
0.5      0.0078125    2,921,668  0.00252     299,092   0.00252      2,921,668   0.00252     299,092    0.00252
pressible layered cavity, which is incompressible in the inner square and around the crack zone, are very convincing. Going even further, as a third numerical example, we added a non-constant pressure to the layered Sneddon configuration to provide results for a configuration which is not totally symmetric, and compared the results with a pressure-robust modification. It turned out that in the benchmark setup the pressure approximation has no significant influence on the displacement fields and thus a pressure-robust discretization is not necessary. It will be the subject of further studies to check whether the situation remains similar considering a fracture which is not only opening in width but also growing in length.

Acknowledgements Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Projektnummer 392587580—SPP 1748.
References 1. L. Ambrosio, V. Tortorelli, Approximation of functionals depending on jumps by elliptic functionals via γ - convergence. Comm. Pure Appl. Math. 43, 999–1036 (1990) 2. L. Ambrosio, V. Tortorelli, On the approximation of free discontinuity problems. Boll. Un. Mat. Ital. B 6, 105–123 (1992) 3. D. Arndt, W. Bangerth, T.C. Clevenger, D. Davydov, M. Fehling, D. Garcia-Sanchez, G. Harper, T. Heister, L. Heltai, M. Kronbichler, R.M. Kynch, M. Maier, J.-P. Pelteret, B. Turcksin, D. Wells, The textttdeal.II library, version 9.1. J. Numer. Math. 27, 203–213 (2019) 4. M. Artina, M. Fornasier, S. Micheletti, S. Perotto, Anisotropic mesh adaptation for crack detection in brittle materials. SIAM J. Sci. Comput. 37, B633–B659 (2015) 5. W. Bangerth, R. Hartmann, G. Kanschat, deal.II – a general purpose object oriented finite element library. ACM Trans. Math. Softw. 33, 24/1–24/27 (2007) 6. B. Bourdin, G. Francfort, J.-J. Marigo, Numerical experiments in revisited brittle fracture. J. Mech. Phys. Solids 48, 797–826 (2000) 7. D. Braess, Finite Elemente (Springer, Berlin, 2007). vierte, überarbeitete und erweiterte ed. 8. F. Brezzi, M. Fortin, Mixed and Hybrid Finite Element Methods. Springer Series in Computational Mathematics, vol. 15 (Springer, Berlin, 1991) 9. M.K.t. Brun, T. Wick, I. Berre, J.M. Nordbotten, F.A. Radu, An iterative staggered scheme for phase field brittle fracture propagation with stabilizing parameters. Comput. Methods Appl. Mech. Engrg. 361, 112752, 22 (2020) 10. S. Burke, C. Ortner, E. Süli, An adaptive finite element approximation of a variational model of brittle fracture. SIAM J. Numer. Anal. 48, 980–1012 (2010) 11. S. Burke, C. Ortner, E. Süli, An adaptive finite element approximation of a generalized Ambrosio-Tortorelli functional. Math. Models Methods Appl. Sci. 23, 1663–1697 (2013) 12. C. Chukwudozie, B. Bourdin, K. Yoshioka, A variational phase-field model for hydraulic fracturing in porous media. Comput. Methods Appl. Mech. Engrg. 347, 957–982 (2019) 13. T.A. Davis, I.S. Duff, An unsymmetric-pattern multifrontal method for sparse LU factorization. SIAM J. Matrix Anal. Appl. 18, 140–158 (1997) 14. The Differential Equation and Optimization Environment: DOpElib. http://www.dopelib.net 15. C. Engwer, S.I. Pop, T. Wick, Dynamic and weighted stabilizations of the l-scheme applied to a phase-field model for fracture propagation (2019). arXiv:1912.07096 16. M. Fan, Y. Jin, T. Wick, A phase-field model for mixed-mode fracture, preprint, Institutionelles Repositorium der Leibniz Universität Hannover (2019) 17. F. Fierro, A. Veeser, A posteriori error estimators for regularized total variation of characteristic functions. SIAM J. Numer. Anal. 41, 2032–2055 (2003)
214
S. Basava et al.
18. G. Francfort, J.-J. Marigo, Revisiting brittle fracture as an energy minimization problem. J. Mech. Phys. Solids 46, 1319–1342 (1998) 19. C. Goll, T. Wick, W. Wollner, DOpElib: differential equations and Optimization Environment. A goal oriented software library for solving PDEs and optimization problems with PDEs. Arch. Numer. Softw. 5, 1–14 (2017) 20. Y. Heider, S. Reiche, P. Siebert, B. Markert, Modeling of hydraulic fracturing using a porousmedia phase-field approach with reference to experimental data. Eng. Fract. Mech. 202, 116– 134 (2018) 21. T. Heister, M.F. Wheeler, T. Wick, A primal-dual active set method and predictor-corrector mesh adaptivity for computing fracture propagation using a phase-field approach. Comput. Methods Appl. Mech. Engrg. 290, 466–495 (2015) 22. T. Heister, T. Wick, Parallel solution, adaptivity, computational convergence, and open-source code of 2d and 3d pressurized phase-field fracture problems. PAMM 18, e201800353 (2018) 23. G.A. Holzapfel, Nonlinear solid mechanics: a continuum approach for engineering science. Meccanica 37, 489–490 (2002) 24. G.A. Holzapfel, R. Eberlein, P. Wriggers, H.W. Weizsäcker, Large strain analysis of soft biological membranes: Formulation and finite element analysis. Comput. Methods Appl. Mech. Engrg. 132, 45–61 (1996) 25. R. Krause, A. Veeser, M. Walloth, An efficient and reliable residual-type a posteriori error estimator for the Signorini problem. Numer. Math. 130, 151–197 (2015) 26. A. Kubo, Y. Umeno, Velocity mode transition of dynamic crack propagation in hyperviscoelastic materials: a continuum model study. Sci. Rep. 7, 42305 (2017) 27. S. Lee, M.F. Wheeler, T. Wick, Pressure and fluid-driven fracture propagation in porous media using an adaptive finite element phase field model. Comput. Methods Appl. Mech. Engrg. 305, 111–132 (2016) 28. A. Linke, G. Matthies, L. Tobiska, Robust arbitrary order mixed finite element methods for the incompressible Stokes equations with pressure independent velocity errors. M2AN Math. Model. Numer. Anal. 50, 289–309 (2016) 29. A. Linke, C. Merdon, W. Wollner, Optimal L 2 velocity error estimates for a modified pressurerobust Crouzeix-Raviart Stokes element. IMA J. Numer. Anal. 37, 354–374 (2017) 30. K. Mang, M. Walloth, T. Wick, W. Wollner, Adaptive numerical simulation of a phase-field fracture model in mixed form tested on an l-shaped specimen with high poisson ratios (2020). arXiv:2003.09459 31. K. Mang, M. Walloth, T. Wick, W. Wollner, Mesh adaptivity for quasi-static phase-field fractures based on a residual-type a posteriori error estimator. GAMM-Mitt. 43, e202000003, 22 (2020) 32. K. Mang, T. Wick, W. Wollner, A phase-field model for fractures in nearly incompressible solids. Comput. Mech. 65, 61–78 (2020) 33. C. Miehe, S. Mauthe, S. Teichtmeister, Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture. J. Mech. Phys. Solids 82, 186–217 (2015) 34. A. Mikeli´c, M.F. Wheeler, T. Wick, Phase-field modeling of a fluid-driven fracture in a poroelastic medium. Comput. Geosci. 19, 1171–1195 (2015) 35. A. Mikeli´c, M.F. Wheeler, T. Wick, A quasi-static phase-field approach to pressurized fractures. Nonlinearity 28, 1371–1399 (2015) 36. A. Mikeli´c, M.F. Wheeler, T. Wick, Phase-field modeling through iterative splitting of hydraulic fractures in a poroelastic medium. GEM Int. J. Geomath. 10, 2, 33 (2019) 37. K.-S. Moon, R.H. Nochetto, T. von Petersdorff, C.-S. 
Zhang, A posteriori error analysis for parabolic variational inequalities. M2AN Math. Model. Numer. Anal. 41, 485–511 (2007) 38. N. Noii, F. Aldakheel, T. Wick, P. Wriggers, An adaptive global-local approach for phase-field modeling of anisotropic brittle fracture. Comput. Methods Appl. Mech. Engrg. 361, 112744, 45 (2020) 39. N. Noii, T. Wick, A phase-field description for pressurized and non-isothermal propagating fractures. Comput. Methods Appl. Mech. Engrg. 351, 860–890 (2019)
Adaptive and Pressure-Robust Discretization of Incompressible …
215
40. J. Schröder, P. Neff, D. Balzani, A variational approach for materially stable anisotropic hyperelasticity. Int. J. Solids Struct. 42, 4352–4371 (2005) 41. ...J. Schröder, T. Wick, S. Reese, P. Wriggers, R. Müller, S. Kollmannsberger, M. Kästner, A. Schwarz, M. Igelbüscher, N. Viebahn, H.R. Bayat, S. Wulfinghoff, K. Mang, E. Rank, T. Bog, D. D’Angella, M. Elhaddad, P. Hennig, A. Düster, W. Garhuom, S. Hubrich, M. Walloth, W. Wollner, C. Kuhn, T. Heister, A selection of benchmark problems in solid mechanics and applied mathematics. Arch. Comput. Methods Eng. (2020) 42. I.N. Sneddon, The distribution of stress in the neighbourhood of a crack in an elastic solid. Proc. Roy. Soc. London Ser. A 187, 229–260 (1946) 43. I.N. Sneddon, M. Lowengrub, Crack Problems in the Classical Theory of Elasticity (Wiley, New York, 1969) 44. R.L. Taylor, Isogeometric analysis of nearly incompressible solids. Int. J. Numer. Methods Engrg. 87, 273–288 (2011) 45. M. Walloth, Residual-type a posteriori estimators for a singularly perturbed reaction-diffusion variational inequality – reliability, efficiency and robustness (2018). arXiv:1812.01957 46. M. Walloth, Residual-type a posteriori error estimator for a quasi-static Signorini contact problem. IMA J. Numer. Anal. 40, 1937–1971 (2020) 47. M. Wheeler, T. Wick, W. Wollner, An augmented-Lagangrian method for the phase-field approach for pressurized fractures. Comp. Methods Appl. Mech. Engrg. 271, 69–85 (2014) 48. M.F. Wheeler, T. Wick, S. Lee, IPACS: integrated phase-field advanced crack propagation simulator. An adaptive, parallel, physics-based-discretization phase-field framework for fracture propagation in porous media. Comput. Methods Appl. Mech. Engrg. 367, 113124, 35 (2020) 49. T. Wick, Goal functional evaluations for phase-field fracture using PU-based DWR mesh adaptivity. Comput. Mech. 57, 1017–1035 (2016) 50. T. Wick, An error-oriented Newton/inexact augmented Lagrangian approach for fully monolithic phase-field fracture propagation. SIAM J. Sci. Comput. 39, B589–B617 (2017)
A Phase-Field Approach to Pneumatic Fracture C. Bilgen, A. Kopaniˇcáková, R. Krause, and K. Weinberg
Abstract Phase-field models have been proven to be reliable methods for the simulation of complex crack patterns and crack propagation. In this contribution we investigate the phase-field model in linear and finite elasticity and summarize the influences of model specific parameters. Furthermore, externally driven fracture processes, in particular in the context of pneumatic fracture, are examined in detail. The focus is on fracture induced by pressure and anisotropic crack growth. Besides the modeling, the solution process is analyzed by applying multilevel methods. Within a series of parametric studies and numerical examples the versatility of phase-field models is demonstrated.
1 Introduction

Every crack in a solid involves the creation of new internal surfaces with a priori unknown location and evolution. Besides different approaches to overcome the challenge of unknown crack paths, like the cohesive zone model [56, 58, 66] or extended finite element methods [45, 60], phase-field models have gained much attention recently, cf. [14, 37, 39, 46, 49, 63]. One main characteristic of the phase-field
model is that the crack boundaries are ‘smeared’ over a small but finite width lc such that it constitutes a diffuse interface approach. The basis of the phase-field model is traced back to Francfort and Marigo [27] who presented a formulation based on Griffith’s theory [31] within an energy minimization problem. The convergence of the diffuse interface approach to the sharp interface model has been shown by Bourdin et al. [16]. During the years many expansions and applications have been investigated within the phase-field model, for example ductile fracture [2, 3, 41], hydraulic fracture [34, 47, 65] or anisotropic fracture [11, 61]. In this paper we will give an overview about the basics of the general phasefield model and focus on the influences of a series of parameters. Those parameters, for example the length-scale parameter or the degradation function can change the results concerning the load at cracking or the location of crack initialization. Detailed investigations of the influence of those various parameters can be found in literature, cf. [8, 39, 42, 46, 59]. Moreover, we will focus on pressure driven processes. In a compressive setting fracture can only happen when it is subjected to additional physical fields which induce a local state of tension. For example, the creation of cracks starts when a high-pressure fluid or gas is injected into compressed soil. Such hydraulic or pneumatic fracturing is a commonly used method to stimulate natural gas reservoirs. While hydraulic fracturing has been examined detailed by means of the phasefield approach, cf. [48, 51, 65], this contribution deals with pneumatic fracturing where the natural gas is extracted by injecting air or another gas at a sufficient pressure. These applications do not require any support that the crack will remain open. Thus, pneumatic fracturing can be considered as a simplification of hydraulic fracturing. According to Bourdin et al. [15], who considered a variational formulation for pressure-driven phase-field fracture propagation by exerting the work by the pressure inside a crack on the fracture surface, we introduce the loading pressure as a moving boundary condition, cf. [10]. In the context of geological or biological materials the occurrence of the material, for example anisotropic crack growth, is also an issue. There are different approaches to simulate the crack propagation in anisotropic material, cf. [20, 34, 48, 51, 52, 65]. While [28] introduced various critical energy densities, commonly a structural tensor is included in the crack density function which influences the direction of crack growth, cf. [10, 21, 43, 61]. Furthermore, we will examine the numerical properties of the phase-field model with respect to the various influencing parameters and the proposed expansions. The focus is on semi-geometric multigrid method and the recursive multilevel trust region method, in particular within both solution schemes—the monolithic and staggered scheme—and the different influencing parameters, cf. [7, 8, 26, 30, 38]. The structure of this paper is as follows: In Sect. 2 we introduce the basics and fundamentals of the phase-field approach with respect to linear and finite elasticity and the discretization. After that the multilevel solution strategies are demonstrated. Section 4 deals with the discussion of the phase-field model concerning the influ-
encing parameters and the extensions to externally driven fracture processes. Within parametric studies and numerical examples in Sect. 5 the applicability of the proposed model is shown. Finally, the paper is summarized.
2 Phase-Field Model

At the beginning we introduce the standard phase-field model in linear and finite elasticity with a focus on various formulations and adaptions. The phase-field model of fracture is a reliable approach to simulate crack initialization and propagation. Let us introduce the main idea and the basic equations of the phase-field method for brittle fracture. Starting point is a domain Ω ⊂ R³ which undergoes a deformation within a given time interval t ∈ [0, t_tot] with the total time t_tot. The total potential energy consists of the bulk energy of a solid with the elastic free Helmholtz energy density ψ^e and surface contributions as follows
$$E = \int_{\Omega_0} \psi^e \, dV + \int_{\Gamma(t)} \mathcal{G}_c \, d\Gamma. \qquad (1)$$
The surface integral includes the fracture energy contributions, where the Griffith critical energy release rate is given by G_c in the case of brittle fracture. Every growing crack in a solid creates new surfaces Γ(t) of a priori unknown position. Since the new surfaces Γ(t) are unknown, the numerical treatment of the surface integral is challenging. Within the phase-field model the way to overcome this task is to approximate the surface integral by a volume integral with the aid of a crack density function γ(z):
$$\int_{\Gamma(t)} d\Gamma \;\approx\; \int_{\Omega_0} \gamma(z)\, dV \qquad (2)$$
The crack density function is only nonzero along the cracks and depends on the phase-field parameter z(x, t) : Ω_0 × [0, t_tot] → [0, 1], which indicates the state of the material: z = 0 characterizes the unbroken state and z = 1 the fully broken state, i.e., the crack. Although there is no unique way to choose the crack density function, the second-order approach is commonly used: $\gamma(z) = \frac{1}{2 l_c} z^2 + \frac{l_c}{2} |\nabla z|^2$ with a length-scale parameter l_c. Inserting the approximation (2) into the total potential energy (1) results in the following optimization problem:
$$E = \int_{\Omega_0} \big( \psi^e + \mathcal{G}_c\, \gamma(z) \big)\, dV \;\Rightarrow\; \text{optimum}. \qquad (3)$$
The evolution equation of the phase-field parameter is given by
z˙ = MY,
(4)
where the crack driving terms are summarized in Y = Y e − Y f with the crack driving force Y e and the crack resistance Y f = δz γ (z). The mobility is denoted by M [m2 /(Ns)].
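As a small illustration of the regularization (and our own sketch rather than code from this work), the second-order crack density can be evaluated along the standard one-dimensional optimal profile z(x) = exp(−|x|/l_c); integrating γ over the line then yields approximately 1, i.e., one unit of regularized crack surface.

```python
import numpy as np

def crack_density(z, dz_dx, l_c):
    """Second-order crack density gamma(z) = z^2 / (2 l_c) + (l_c / 2) |dz/dx|^2."""
    return z**2 / (2.0 * l_c) + 0.5 * l_c * dz_dx**2

l_c = 2.0
x = np.linspace(-40.0, 40.0, 200001)
z = np.exp(-np.abs(x) / l_c)                  # 1D optimal phase-field profile
gamma = crack_density(z, np.gradient(z, x), l_c)
print(np.trapz(gamma, x))                     # -> approximately 1: one unit of crack surface
```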
2.1 Linear Elasticity

In linear elasticity the strain energy density depends on the displacement field u(x, t) : Ω × [0, t_tot] → R³ and is given by
$$\psi_0^e(\varepsilon) = \frac{1}{2}\, \varepsilon : \mathbb{C} : \varepsilon \qquad (5)$$
with the elasticity tensor ℂ and the strain tensor ε(u) = ½(∇u + (∇u)ᵀ). In order to take into account that only tensile stresses contribute to crack growth, the tensile contributions of the strain energy density are degraded by splitting the strain tensor into positive and negative parts as follows, cf. [46]:
$$\varepsilon^{\pm}(u) = \sum_{a=1}^{3} \varepsilon_a^{\pm}\, n_a \otimes n_a \qquad (6)$$
with the decomposed eigenvalues ε_a^± = ½(ε_a ± |ε_a|). Finally, inserting (6) into the strain energy density (5) results in an additive decomposition of the strain energy density
$$\psi^e(\varepsilon, z) = g(z)\, \psi_0^{e+}(\varepsilon^+) + \psi_0^{e-}(\varepsilon^-) \qquad (7)$$
with a degradation function g(z) which is typically chosen to be of quadratic type, e.g. g(z) = (1 − z)². Based on this decomposition the crack driving force is formulated by
$$Y^e = \delta_z \psi^e(u, z) \qquad (8)$$
in the standard phase-field method. If it is not stated otherwise, this variational type of crack driving force is applied in this contribution.
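To make the split (6)–(8) concrete, the following sketch (ours, for illustration only) performs the spectral decomposition of a small-strain tensor and degrades only the tensile part with g(z) = (1 − z)². The energy expressions use a common Miehe-type isotropic split with illustrative Lamé-type parameters; they are a stand-in for whichever concrete split is chosen in practice.

```python
import numpy as np

def strain_split(eps):
    """Spectral split of a symmetric strain tensor into tensile/compressive parts, cf. (6)."""
    w, v = np.linalg.eigh(eps)
    eps_pos = sum(max(wa, 0.0) * np.outer(v[:, a], v[:, a]) for a, wa in enumerate(w))
    eps_neg = sum(min(wa, 0.0) * np.outer(v[:, a], v[:, a]) for a, wa in enumerate(w))
    return eps_pos, eps_neg

def degraded_energy(eps, z, lam, mu):
    """Degraded strain energy in the spirit of (7) with g(z) = (1 - z)^2,
    using a Miehe-type tensile/compressive split of the isotropic energy."""
    eps_pos, eps_neg = strain_split(eps)
    tr = np.trace(eps)
    psi_pos = 0.5 * lam * max(tr, 0.0)**2 + mu * np.tensordot(eps_pos, eps_pos)
    psi_neg = 0.5 * lam * min(tr, 0.0)**2 + mu * np.tensordot(eps_neg, eps_neg)
    g = (1.0 - z)**2
    return g * psi_pos + psi_neg

# illustrative call: uniaxial tension state, half-broken material (z = 0.5)
eps = np.diag([1e-3, -0.2e-3, -0.2e-3])
print(degraded_energy(eps, z=0.5, lam=1.0e4, mu=2.0e4))
```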
2.2 Finite Elasticity

By now we focused on small deformations. However, this theory can also be adapted to finite deformations. The deformation is mapped by ϕ : Ω × [0, t_tot] → R³ such that the deformation gradient is given by
$$F := \nabla_X \varphi \qquad (9)$$
and its determinant by J = det(F). The first Piola–Kirchhoff stress tensor is formulated by
$$P = \frac{\partial \psi^e}{\partial F}. \qquad (10)$$
Analogous to linear elasticity the asymmetry of fracture has to be taken into account. In this context the strain energy density is formulated depending on the principal invariants, i.e., ψ^e = ψ^e(I₁, I₂, J). Then the invariants are decomposed in an additive way, cf. [35, 62]:
$$I_a^{\pm} = 3 \pm \max(\pm(I_a - 3), 0), \quad a \in \{1, 2\}, \qquad (11)$$
$$J^{\pm} = 1 \pm \max(\pm(J - 1), 0). \qquad (12)$$
Inserting these decomposed invariants into the energy density leads to an analogous description of the energy as proposed in (7); in particular it follows
$$\psi^e = g(z)\, \psi^{e+}(I_1^+, I_2^+, J^+) + \psi^{e-}(I_1^-, I_2^-, J^-). \qquad (13)$$
The classical crack driving force is also chosen to be of variational type as $Y^e = \delta_z \psi^e$.
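The decompositions (11)–(12) are simple max-operations; the following sketch (illustrative only, with a made-up deformation state) computes the split invariants from a given deformation gradient.

```python
import numpy as np

def split_invariants(F):
    """Additive decomposition (11)-(12) of the principal invariants I1, I2 and J."""
    C = F.T @ F                                   # right Cauchy-Green tensor
    I1 = np.trace(C)
    I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))
    J = np.linalg.det(F)
    split = lambda I, ref: (ref + max(I - ref, 0.0), ref + min(I - ref, 0.0))
    return {"I1": split(I1, 3.0), "I2": split(I2, 3.0), "J": split(J, 1.0)}

# e.g. a 10% uniaxial stretch of a nearly volume-preserving state
F = np.diag([1.1, 1.0 / np.sqrt(1.1), 1.0 / np.sqrt(1.1)])
print(split_invariants(F))
```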
2.3 Discretization

Let us begin with the technical fundamentals, in particular with the discretization of the formulated problem. Besides the phase-field evolution Eq. (4), the balance of linear momentum constitutes the main equation of the setting:
$$\operatorname{div}(P) + \bar{B} = \rho_0\, \ddot{u} \qquad (14)$$
with the body force B̄, the density ρ₀ with respect to a volume of the reference configuration and the acceleration ü. The weak formulation of the coupled problem reads:
$$E(u, z; \delta u, \delta z) = E_u(u, z; \delta u) + E_z(u, z; \delta z) = 0, \quad \forall\, \delta(u, z) := \{\delta u, \delta z\} \in \mathcal{V}_0^u \times \mathcal{V}_0^z \qquad (15)$$
The space of admissible test functions for the mechanical field is denoted by $\mathcal{V}_0^u = \{\delta u \in H^1(\Omega_0)\,|\, \delta u = 0 \text{ on } \partial\Omega_0^D\}$ and for the phase-field by $\mathcal{V}_0^z = \{\delta z \in H^1(\Omega_0)\,|\, \delta z = 0 \text{ on } \partial\Omega_0\}$. In this definition the Sobolev space is denoted by $H^1$. The functional $E_u(u, z; \delta u)$ describes the balance of linear momentum and is given by
$$E_u(u, z; \delta u) = \int_{\Omega_0} \rho_0\, \ddot{u} \cdot \delta u\, dV + \int_{\Omega_0} P : \nabla(\delta u)\, dV - \int_{\Omega_0} \bar{B} \cdot \delta u\, dV - \int_{\partial\Omega_0} \bar{T} \cdot \delta u\, dA = 0 \qquad (16)$$
for all δu ∈ V₀ᵘ. Analogously, the phase-field equation applying the second-order crack density function is stated in weak form by
$$E_z(u, z; \delta z) = \int_{\Omega_0} M^{-1}\, \dot{z} \cdot \delta z\, dV - \int_{\Omega_0} Y^e \cdot \delta z\, dV + \int_{\Omega_0} \mathcal{G}_c\, l_c\, \nabla z \cdot \nabla(\delta z)\, dV + \int_{\Omega_0} \frac{\mathcal{G}_c}{l_c}\, z \cdot \delta z\, dV = 0 \qquad (17)$$
for all δz ∈ V₀ᶻ. The finite-element method is based on a subdivision of the domain Ω₀ into non-overlapping elements Ω₀ᵉ. The approximations of the displacement field and the phase-field are given by
$$u \approx u_h = \sum_{i=1}^{n_k} N_i(X)\, \hat{u}_i, \qquad \delta u_h = \sum_{i=1}^{n_k} N_i(X)\, \delta\hat{u}_i, \qquad (18)$$
$$z \approx z_h = \sum_{i=1}^{n_k} \tilde{N}_i(X)\, \hat{z}_i, \qquad \delta z_h = \sum_{i=1}^{n_k} \tilde{N}_i(X)\, \delta\hat{z}_i, \qquad (19)$$
where we use the same basisfunctions for both fields. The number of nodes is denoted by n k and the nodal displacements and phase-fields by uˆ i ∈ R3 or zˆ i ∈ R, respectively. The Euler–Lagrange equations (15) can be solved using either a monolithic [50] or a staggered solution scheme [46]. The monolithic scheme requires the solution of a fully-coupled nonlinear system of equations in each time-step. This is numerically challenging, as the underlying minimization problem is non-convex and classical solution schemes, such as plain Newton’s method, tend to fail [26]. In contrast, the staggered solution scheme splits (15) into two subproblems, related to the displacement (16) and phase-field (17), which are then solved successively. Although the convergence speed of the staggered solution scheme is usually very slow, it is very
popular and often preferred in practical applications. This is due to the fact that both subproblems are convex and therefore can be solved using standard solution strategies.
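The alternation described above can be summarized in a few lines; the sketch below is our own schematic outline, and the two solver callbacks are hypothetical placeholders for the Newton/multigrid solves of the displacement and phase-field subproblems discussed in Sect. 3.

```python
def staggered_step(u, z, solve_displacement, solve_phase_field,
                   tol=1e-7, max_iter=100):
    """One loading step of the staggered scheme: alternately solve the
    displacement subproblem (16) at fixed z and the phase-field subproblem (17)
    at fixed u until the combined residual falls below the tolerance.
    The two solver callbacks are placeholders for the solves of Sect. 3."""
    for it in range(max_iter):
        u, res_u = solve_displacement(u, z)   # convex in u for fixed z
        z, res_z = solve_phase_field(u, z)    # convex (linear system) in z for fixed u
        if max(res_u, res_z) <= tol:
            break
    return u, z, it + 1
```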
3 Multilevel Solution Strategies Although monolithic and staggered solution schemes give rise to different types of minimization problems, they both require the solution of large scale systems in each time-step. This is due to the fact that a sufficiently fine mesh is required to resolve the regularized crack surface. In this chapter, we discuss how to solve the arising linear and nonlinear systems efficiently. To this aim, we employ multilevel solution strategies, as they are of optimal complexity. The multilevel solution strategies employ a hierarchy of usually nested finite element spaces, also called levels. The solution process then combines smoothing and the coarse grid correction step. Smoothing reduces the high-frequency error related to each level, while the coarse grid correction step eliminates the low-frequency error remaining after smoothing on the finer level. Semi-geometric mutligrid method Linear geometric and algebraic multilevel methods have been developed for more than 50 years and they are used extensively in many areas. An introduction to linear multilevel methods can be found for example in [19] or [18]. In the context of phase-field fracture problems, linear multilevel methods were applied in [7, 26, 36]. Here, we propose to use a semi-geometric multigrid (SMG) method [24], as it is suitable for problems with unstructured meshes and complex geometries. The rigorous details about our implementation of the SMG method can be found in [7]. In the context of staggered solution scheme, the SMG method can be used directly to solve the phase-field subproblem, as it requires a solution of a linear system. Since the subproblem related to the displacement field is nonlinear, we employ Newton’s method to tackle the resulting nonlinearity. An iteration of Newton’s method is defined as xk+1 = xk − pk ,
where $p_k = J_k^{-1}\, r_k$, \qquad (20)
where xk ∈ Rn u denotes the solution vector related to the displacement field. The residual rk ∈ Rn u consists of terms corresponding to (16) and J k ∈ Rn u ×n u denotes the Jacobian matrix containing the respective derivatives. The subscript k describes the iteration number. The direct solution of (20)2 is computationally expensive and might be not even permitted, when n u becomes large. Therefore, we propose to obtain search direction pk in more efficient manner, by solving the linear system iteratively J k pk = rk using the SMG method. The application of the SMG method within a monolithic scheme requires more consideration. First of all, the resulting coupled-nonlinear systems are non-convex, therefore globalized variant of Newton’s method has to be employed to ensure con-
224
C. Bilgen et al.
vergence [23]. The globalization can be performed using line-search [55] or trust region methods [22]. Second of all, globalized variants of Newton's method also require the solution of large scale linear systems on each iteration. Here, we note that the arising linear systems have the following block structure
$$J_k = \begin{pmatrix} J_k^{uu} & J_k^{uz} \\ J_k^{zu} & J_k^{zz} \end{pmatrix}, \qquad r_k = \begin{pmatrix} r_k^u \\ r_k^z \end{pmatrix}, \qquad p_k = \begin{pmatrix} p_k^u \\ p_k^z \end{pmatrix}, \qquad (21)$$
where the blocks of the residual vector r_k ∈ Rⁿ consist of the terms corresponding to (16) and (17). The blocks of the Jacobian matrix J_k ∈ Rⁿˣⁿ contain the directional derivatives of r_k in (21). Since the Jacobian matrix of the coupled problem is not necessarily positive definite, it is numerically challenging to solve these linear systems. Following [6], we can exploit the block structure of J_k and employ an iterative scheme based on the Schur complement method [6]. The inverse of the Jacobian (J_k)⁻¹ can then be expressed as
$$(J_k)^{-1} = \begin{pmatrix} (J_k^{uu})^{-1} + (J_k^{uu})^{-1} J_k^{uz}\, S^{-1}\, J_k^{zu} (J_k^{uu})^{-1} & -(J_k^{uu})^{-1} J_k^{uz}\, S^{-1} \\ -S^{-1} J_k^{zu} (J_k^{uu})^{-1} & S^{-1} \end{pmatrix}, \qquad (22)$$
where $S := J_k^{zz} - J_k^{zu} (J_k^{uu})^{-1} J_k^{uz}$ is the Schur complement of J_k with respect to J_k^{uu}. Naturally, we never compute the inverse (J_k)⁻¹ explicitly, but rather directly evaluate the application of (J_k)⁻¹ to the residual vector r_k. One application of (J_k)⁻¹ requires applications of (J_k^{zz})⁻¹ and (J_k^{uu})⁻¹, which we respectively approximate by two V-cycles of our SMG method. Recursive multilevel trust region method Nonlinear multilevel methods provide a computationally cheaper alternative to the standard globalized variants of Newton's method. The majority of nonlinear multilevel methods, such as the full-approximation scheme (FAS) [17], nonlinear multigrid [33], or optimization multigrid (MG-OPT) [54], are proven to converge only for convex minimization and therefore do not apply to the non-convex minimization problems arising in phase-field fracture simulations. The only globally convergent nonlinear multilevel method suitable for non-convex minimization, called recursive multilevel trust region (RMTR), was proposed in [30] and then further developed in [29, 32]. Regarding phase-field fracture problems, a variant of the RMTR method was designed in [38], where the authors demonstrated a significant speed-up compared to standard, single-level solution methods. RMTR combines multilevel minimization with the trust region globalization strategy. On one side, the multilevel framework helps to tackle large-scale, ill-conditioned systems and lowers the overall computational cost. On the other side, the trust-region framework addresses the issue of the non-convexity of the underlying energy functional. As common for multilevel methods, the solution process alternates between nonlinear smoothing and a coarse grid correction step. The RMTR method performs both, smoothing and the coarse-grid step, by minimizing nonlinear level-dependent objective functions using a trust region method. The choice of level-dependent objective functions is of crucial importance, as their minimization should yield a good
correction for the original/fine-level problem. In the context of the phase-field fracture, it is challenging to design efficient level-dependent objective functions as the underlying mathematical model relies on the mesh dependent parameter, i.e, the length-scale parameter. For this reason, the novel level-dependent objective functions were proposed in [38], which combine a fine level description of the crack path with the coarse level discretization. Once the approximate minimization on the given level is terminated, the obtained coarse level correction is brought back to the fine level. However, the transferred correction is not added immediately to the current iterate, but only if it provides a decrease in the original/fine level objective function. For the rigorous implementation details, we refer interested reader to [38, 67].
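Returning to the linear solves of the monolithic scheme, the block elimination behind (22) can be written down directly with dense NumPy blocks. This is only a conceptual sketch of ours: in the actual method the inner inverses are never formed but approximated by V-cycles of the SMG method.

```python
import numpy as np

def apply_block_inverse(J_uu, J_uz, J_zu, J_zz, r_u, r_z):
    """Apply (J_k)^{-1} to a residual (r_u, r_z) via the Schur complement
    S = J_zz - J_zu J_uu^{-1} J_uz, cf. (22).  Dense solves stand in for the
    two SMG V-cycles used in practice."""
    solve_uu = lambda b: np.linalg.solve(J_uu, b)
    S = J_zz - J_zu @ solve_uu(J_uz)
    p_z = np.linalg.solve(S, r_z - J_zu @ solve_uu(r_u))
    p_u = solve_uu(r_u - J_uz @ p_z)
    return p_u, p_z

# consistency check against a monolithic solve on a small test system
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # well-conditioned test matrix
r = rng.standard_normal(6)
p_u, p_z = apply_block_inverse(A[:4, :4], A[:4, 4:], A[4:, :4], A[4:, 4:], r[:4], r[4:])
print(np.allclose(np.concatenate([p_u, p_z]), np.linalg.solve(A, r)))   # True
```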
4 Discussions and Extensions of the Phase-Field Model

In this chapter the focus is on the various model parameters that influence the solution within the phase-field model. Keeping those factors in mind the phase-field model will be later expanded to externally driven fracture processes.
4.1 Influencing Parameters The phase-field fracture model relies on a set of influencing parameters, namely the length-scale parameter, the mobility parameter, the degradation function, and the crack driving force. In this subsection, we discuss how the particular choices of those parameters affect the obtained solution. Besides, we also demonstrate how the choice of influencing parameters affects the solution process. Here, we examine the performance of both staggered and monolithic solution schemes, configured with a semi-geometric multigrid method, set up as discussed in Sect. 3. During all experiments, we used following stopping criterion: rk ≤ 10−7 , where rk denotes the Euclidean norm of the residual rk . For more detailed analysis, we refer the interested reader to [8]. The effect of varying those parameters is demonstrated by means of a mode-I tension test using the material parameters in Table 1 and the boundary conditions in Fig. 1. Unless specified otherwise, we utilize the linear elastic material model and quadratic degradation function. The length-scale parameter is chosen as lc = 2h, where h denotes the mesh size and mobility parameter M is set as M → ∞. Length-scale parameter lc The length-scale parameter lc , which occurs in the crack density function γ (z), is a two-fold parameter which acts as a numerical parameter and a material parameter. On the one hand it is a measure of the width of the diffuse zone and is oriented towards the mesh size h. Because of numerical reasons it is required that the length-scale parameter fulfills the relation lc > h. Commonly it is chosen to be lc = 2h, cf. [46]. On the other hand the length-scale parameter is
Table 1 Material parameters for two-dimensional tension test

Parameter   Value        Unit
E           50 000       [N/mm²]
ν           0.2          [–]
Gc          75 · 10⁻³    [N/mm]
σc          31.6228      [N/mm²]
Fig. 1 Boundary conditions setup for two-dimensional tension test with displacement increments u¯ = 0.02 mm per time step
interpreted as a material parameter since it enters the critical fracture energy density G_c. In particular, the ratio of the critical fracture energy density and the length-scale parameter enters the critical stress determined analytically by the one-dimensional crack solution applying a quadratic degradation function:
$$\sigma_c = \sqrt{\frac{E\, \mathcal{G}_c}{3\, l_c}}. \qquad (23)$$
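As a quick plausibility check (our own, not taken from the paper), (23) reproduces the critical stress listed in Table 1 when the smallest length scale used below, l_c = 1.25 mm, is inserted together with E = 50 000 N/mm² and G_c = 75 · 10⁻³ N/mm.

```python
import math

E, G_c = 50_000.0, 75e-3               # N/mm^2 and N/mm, cf. Table 1
for l_c in (1.25, 2.5, 5.0, 10.0):     # mm, the length scales of Figs. 2 and 3
    sigma_c = math.sqrt(E * G_c / (3.0 * l_c))
    print(f"l_c = {l_c:5.2f} mm  ->  sigma_c = {sigma_c:8.4f} N/mm^2")
# l_c = 1.25 mm yields sigma_c = 31.6228 N/mm^2, the value listed in Table 1.
```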
To demonstrate the influence of the length-scale parameter we perform the mode-I tension test with varying length-scale parameter lc . Considering the crack path itself the width of the diffuse crack zone increases for larger length-scale parameters, see Fig. 2. Investigating the load-deflection curves in more detail, it is observed that for larger length-scale parameters the applied force at cracking decreases, see Fig. 3. It is preferable to use a small length-scale parameter, since the diffuse crack surface converges to a sharp crack for lc → 0, cf. [16]. We remark, that the computational cost increases tremendously for very small length-scale parameter lc as a mesh with higher resolution is required to fulfill requirement, lc > h. We can also see, that the convergence rate of the solution methods deteriorates for smaller values of lc , see Table 2. In particular, the number of
Fig. 2 Phase-field snapshots for the mode-I tension test within various length-scale parameters lc ∈ {1.25, 2.5, 5, 10} mm
Fig. 3 Load-deflection curves for a mode-I tension test with varying length-scale parameter lc ∈ {1.25, 2.5, 5, 10} mm
Table 2 The convergence study of various solution strategies (monolithic scheme and staggered scheme (SSS) subdivided into the displacement subproblem (disp. sub.) and the phase-field subproblem (pf. sub.)) with respect to three different length-scale parameters lc ∈ {2h, 4h, 6h}. An experiment performed using the mode-I tension test

                                 Monolithic scheme        SSS—disp. sub.           SSS—pf. sub.
h/lc                             2h      4h      6h       2h      4h      6h       2h      4h      6h
h0    # cumulative nonl. its.   638     552     298      181     161     146      –       –       –
      # average nonl. its.      7.87    6.98    3.82     1.49    1.46    1.43     –       –       –
      # average linear its.     4.62    4.65    3.96     21.77   21.95   21.95    5.51    3.77    3.34
h0/4  # cumulative nonl. its.   970     907     596      490     460     278      –       –       –
      # average nonl. its.      10.01   8.72    7.18     2.83    2.72    1.97     –       –       –
      # average linear its.     10.77   10.5    10.39    60.49   63.45   64.21    6.73    5.01    4.39
required iterations grows as we vary the ratio between h and lc , while keeping mesh size fixed. Similar behavior can be observed by refining a mesh, while keeping the ratio between h and lc constant. Mobility parameter M The mobility M occurs in the evolution equation of the phase-field (4) and acts as a numerical regularization. In the dimensionless formulation of the evolution equation given by
Fig. 4 Load-deflection curves for a mode-I tension test with varying the mobility parameters M [m2 /(Ns)]
$$\tau\, \dot{z} = \bar{Y}, \qquad (24)$$
where Y¯ [–] is a normalized crack driving force and τ [s] is the retardation time associated with the mobility by τ = lc /(MGc ). The main question is how to choose value of M for the problem at hand. The retardation time should allow the phasefield to evolve within one step of time discretization. For that reason it is commonly oriented towards the order of magnitude of the time increment, i.e., τ=
lc = O( t), Gc M
(25)
In particular, for solving the bar problem in one dimension, it results in a linear coherence between the retardation time and the time increment, namely τ ≈ c t with an additional constant c. This corresponds to a mobility M ≈ 2lc /(c tGc ). Commonly, the constant c = 1/10, . . . , 10 leads to a solution which is nearly identical for the simple 1D bar problem. For multidimensional problems, the choice of constant c might be more complex. In the following the presented mode-I tension test with varying retardation times M ∈ {1, 10, 50, 100, 1 000, 10 000} m2 /(Ns) is performed. The load-deflection curves are demonstrated in Fig. 4 and show the slowing effect of phase-field evolution for increasing retardation times τ . Table 3 demonstrates the convergence properties of solution strategies with respect to varying mobility parameter M. We observe that the convergence speed accelerates as M → ∞, i.e. as we approach the quasi-static limit and obtain a rate-independent model. This is not surprising, as lower values of M slow down the crack propagation and artificially increase the stiffness of the material. Degradation function g(z) Next, let us focus on the degradation function which takes part in the decomposition of the strain energy density. The choice of the degradation function is not unique. There are different investigations concerning the degradation functions, cf. [13, 40, 59]. Besides exponential type degradation functions,
A Phase-Field Approach to Pneumatic Fracture
229
Table 3 The convergence study of various solution strategies (monolithic scheme and staggered scheme (SSS) subdivided into the displacement subproblem (disp. sub.) and the phase-field subproblem (pf. sub.)) with respect to the mobility parameter M. An experiment performed using mode-I tension test M
Monolithic scheme 10
50
SSS—disp. sub.
SSS—pf. sub.
100
10000 10
50
100
10000 10
50
100
10000
6404
2818
6034
5129
4463
1639
−
−
−
−
# average nonl. its 10.94 10.83 10.71 9.89
2.98
2.95
2.92
2.83
−
# average linear 8.52 its
47.12 45.58 43.54 40.28 8.91
# cumulative 14867 8772 nonl. its 8.31
7.93
7.58
−
−
−
8.82
8.74
8.22
Fig. 5 Degradation functions for various parameters a (left) and load-deflection curves for a mode-I tension test with various degradation functions (right)
[59], or single-parameter degradation functions, [64], polynomial functions are often applied, cf. [12, 35, 59]. In this contribution we focus on quadratic or rather cubic degradation functions. At first the degradation function has to fulfill the following conditions: g(0) = 1, g(1) = 0, g (1) = 0. By introducing the additional constraint g(0) = −a a cubic degradation function is proposed as g(z) = (a − 2)(1 − z)3 + (3 − a)(1 − z)2 ,
(26)
with an additional constant a ∈ [0, 2], cf. [14, 35]. For a = 2 it results in the quadratic degradation function which is typically used, see left plot of Fig. 5. Performing the mode-I tension test for various choices of the polynomial degradation functions the influence on the results is clarified, see right plot of Fig. 5. For a decreasing parameter a the applied force and the prescribed displacement increase. The choice of degradation function g(z) also influences the convergence properties of the employed solution strategies, see Table 4. For the monolithic scheme, we observe an increase of cumulative nonlinear and linear iterations as a value of a in (26) decreases. Similar behavior is detected for displacement subproblem within a staggered solution scheme. In contrast, the convergence speed of the SMG method
230
C. Bilgen et al.
Table 4 The convergence study of various solution strategies (monolithic scheme and staggered scheme (SSS) subdivided into the displacement subproblem (disp. sub.) and the phase-field subproblem (pf. sub.)) with respect to the degradation function g(z). As experiment performed using mode-I tension test a Monolithic scheme SSS—disp. sub. SSS—pf. sub. 2 1 0.5 0.01 2 1 0.5 0.01 2 1 0.5 0.01 # cumula- 882 tive nonl. its # average 9.81 nonl. its # average 6.98 linear its
1082 1270 2133 261
301
317
377
−
−
−
−
10.80 13.37 20.71 1.83
1.82
1.81
2.05
−
−
−
−
7.52
35.98 35.89 35.88 5.77
5.76
5.79
5.78
7.54
8.47
36.1
used for the phase-field subproblem does not seem to be affected by the choice of g(z). Crack driving force Y e Finally, the crack driving force constitutes also a varying parameter in the phase-field model which influences the findings. By now the crack driving force of the classical phase-field model based on the variation of the strain energy density given in (8) is used. While the proposed energy decompositions are quite arbitrary and there is no explicit reference, crack driving forces based on established failure criteria of fracture mechanics are introduced. Exemplarily, we focus on the failure criterion of the maximum principal stress which is based on the existing stress state. While the principal stresses σa , a ∈ {1, 2, 3} are ordered by denoting σ I = max(σa ) and σ I I I = min(σa ) as follows σI ≥ σI I ≥ σI I I
(27)
σI e ¯ −1 Y = σc +
(28)
the crack driving force leads to
where σc denotes the critical stress. As soon as the maximum principal stress exceeds the critical stress, the phase-field increases. In many cases the critical stress σc coincides with the tensile resistance strength Rmt , however, the critical stress can be determined by the one dimensional crack solution in (23). Please note, that this relation depends on simple assumptions, in particular on linear material behavior, a quadratic degradation function and the one dimensional case. Generally, for applying the adhoc crack driving forces the thermodynamical consistency also holds true, cf. [8, 9]. One main difference of both proposed crack driving forces is that the energy-based approach drives the crack immediately for small strains, the crack driving force based on the maximum principal stress just
Fig. 6 Various crack driving forces in one dimension for a uniaxial tension test exemplarily (left) and load-deflection curves for the mode-I tension test with different crack driving forces (right)
at a threshold. This is demonstrated for a uniaxial tension test in one dimension in the left plot of Fig. 6 where some further failure criteria of fracture mechanics are shown. Comparing the energy-based and the stress-based crack driving force within the mode-I tension test the load-deflection curves demonstrate that the applied force differs slightly whereas the prescribed displacement is the same. The main difference is the shape of the curve—within the stress-based approach the curve is spikier and shows the abrupt cracking.
4.2 Externally Driven Fracture

Keeping the various influencing factors in mind, we focus on externally driven fracture in the following. Besides pressure driven processes and anisotropic fracture, the combination of both occurrences is examined. Further details can be found in [10]. Let us start with incorporating the pressure in the model.

Pressure driven processes The stimulation of conventional and unconventional natural gas reservoirs increased during the years. Hydraulic or pneumatic fracturing can be traced back to the 1940s, cf. [53]. The main idea is to inject high-pressure water or gas deep in the ground to facilitate crack propagation. Those cracks should allow the extraction of natural gas. In this contribution the focus is on pneumatic fracturing and the main idea is to incorporate the pressure as a kind of moving boundary condition. A similar approach has been proposed by Bourdin et al. [15]. We adapt the stress tensor by means of Terzaghi's principle as follows:
$$\sigma_{\mathrm{mod}} = \sigma - \bar{p}\, \mathbf{1}, \qquad P_{\mathrm{mod}} = P - \bar{p}\, H,$$
(29)
with the modified stresses σ_mod and P_mod and the cofactor of the deformation gradient H. The hydrostatic pressure p̄ is modified in such a way that it is coupled with the phase-field. In particular, it is chosen to be p̄ = (p0 + p(t)) z with a reference
Fig. 7 Boundary conditions and the crack propagation of the pressure driven process applying the split of the invariants in two dimensions
pressure p0 which is increased with time. With this modeling the pressure is induced directly at the crack flanks. To demonstrate the applicability of this pressure driven model, a two dimensional domain of size 100 × 100 mm², divided into 200 × 200 quadratic B-spline elements, is considered. All boundaries are constrained in both directions, and in the middle of the domain a horizontal crack is prescribed by setting z = 1, see the left plot of Fig. 7. The material parameters correspond to concrete, in particular the Young's modulus is E = 50 000 N/mm², the Poisson's ratio is ν = 0.2 and the critical energy release rate is Gc = 75 · 10⁻³ N/mm. The length-scale parameter is given by lc = 1 mm and the reference pressure is chosen to be p0 = 10 N/mm². The evolution of the crack path is presented in Fig. 7. It is observed that the crack propagates horizontally with increasing pressure. These results coincide with several crack paths in the literature, cf. [15, 47, 51]. While these simulations are based on the Neo-Hookean material model with the energetic crack driving force, the stress-based formulation leads to similar results. Anisotropic fracture Next, we examine fracture that depends on the constitution of the material, which might be important in the context of geological or organic materials. In order to take into account a preferred direction of crack growth, anisotropic material behavior is incorporated. There are different ways to model anisotropy, cf. [21, 28, 44, 57, 61]. In this contribution the anisotropic behavior is modeled by the anisotropic crack density function
γ(z, ∇z) = 1/(2 lc) z² + (lc/2) ∇z · A · ∇z
(30)
with a structural tensor A that weights the preferred crack growth directions, cf. [61]. The structural tensor can be adapted to various anisotropies. We focus on transverse isotropy and choose the structural tensor as A = 1 + β(a ⊗ a) with a weighting factor β and a direction vector a. For a detailed examination of the influence of the various parameters, the reader is referred to [10]. Pressure driven anisotropic fracture In this subsection both presented properties are combined and pressure driven crack growth in an anisotropic medium is investigated. The focus here is on the modified stress in (29), such that the pressure
Fig. 8 Boundary conditions (left) and phase-field snapshots of the pressure driven crack growth applying the λ-μ split for different direction vectors a = (2/√5, 1/√5) and a = (1/√5, −2/√5) and a weighting factor β = 50 (right)
is induced at the crack flanks, and on the anisotropic crack density function (30). Analogous to the example demonstrated in Fig. 7, we consider a domain of size 100 × 100 mm², however, with anisotropic material in this case. All material parameters and boundary conditions remain the same as before. In Fig. 8 the crack paths are shown for different direction vectors and a fixed weighting factor β = 50. It can be seen that the crack grows immediately in the preferred direction. Since the crack paths coincide for the energy-based and the stress-based crack driving force, we present the phase-field snapshots for the standard variational formulation as a representative example. For further details the reader is referred to [10].
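To make the two modeling ingredients of this subsection concrete, the following minimal NumPy sketch (not part of the implementation used for the simulations; function names and numerical values are illustrative) evaluates the pressure-modified Cauchy stress from (29) and the anisotropic crack density (30) at a single material point:

import numpy as np

def modified_stress(sigma, z, p0, p_t):
    # pressure-modified Cauchy stress (29): sigma_mod = sigma - p_bar * 1,
    # with the phase-field coupled pressure p_bar = (p0 + p(t)) * z
    p_bar = (p0 + p_t) * z
    return sigma - p_bar * np.eye(3)

def anisotropic_crack_density(z, grad_z, lc, a, beta):
    # anisotropic crack density (30) with structural tensor A = 1 + beta * a (x) a
    a = np.asarray(a, dtype=float)
    a = a / np.linalg.norm(a)                    # unit preferred direction
    A = np.eye(3) + beta * np.outer(a, a)        # transversely isotropic weighting
    return z**2 / (2.0 * lc) + 0.5 * lc * grad_z @ A @ grad_z

# illustrative evaluation at one point
sigma = np.diag([5.0, 1.0, 0.0])                 # N/mm^2
print(modified_stress(sigma, z=0.8, p0=10.0, p_t=2.5))
print(anisotropic_crack_density(0.5, np.array([0.1, 0.0, 0.0]),
                                lc=1.0, a=[2.0, 1.0, 0.0], beta=50.0))

Only the Cauchy-stress form of (29) is shown; in the finite strain setting the first Piola-Kirchhoff stress is modified analogously with the cofactor H.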
5 Numerical Examples In this section we demonstrate the applicability of the presented phase-field model by means of conchoidal fracture examples and the applicability of its extensions by means of a pressure driven example. We begin with conchoidal fracture.
5.1 Conchoidal Fracture Conchoidal fracture is a special type of brittle fracture that commonly occurs in amorphous or fine-grained materials like glasses, rocks or minerals. Besides the rippled and curved crack surface, one main characteristic of conchoidal fracture is that the crack initializes inside the body. From the numerical point of view, simulating crack initialization without any initial kerf or notch typically constitutes a challenge. Therefore, this is an appropriate example to show the applicability of the proposed models, cf. [7, 8]. The solution process is examined in detail in [38].
Fig. 9 Boundary conditions of the structured (left) and unstructured (right) geometry of a rock
Fig. 10 Phase-field snapshot of the structured geometry (left) and isosurfaces of the crack surfaces depicted for the phase-field z = 0.95 for both settings, the structured and the unstructured geometry (right)
We focus on two different types of geometry: on the one hand a block of stone material of size 4a × 4a × 2a with 2a = 1 m, and on the other hand an unstructured geometry of a real rock, see Fig. 9. In both cases a square plate on the upper side is pulled upwards with prescribed incremental displacements. All other boundaries are constrained in all directions, i.e. u = 0. The material parameters are given by the Lamé constants λ = μ = 100 000 N/mm² and a critical energy release rate of Gc = 1 N/mm. The structured geometry is simulated with a Neo-Hookean material model, while the linear material model is applied to the unstructured geometry. For both types of geometry it can be observed that the crack surface is rippled and curved, see Fig. 10. Furthermore, the crack initializes inside the block under the pulled surface, see the left plot of Fig. 10. The influences of the various parameters, in particular the length-scale parameter, the degradation function and the choice of the crack driving force, have been investigated in detail in [7, 8].
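For reference, the Lamé constants used above can be translated into the engineering constants via the standard isotropic elasticity relations; this short check (not taken from the chapter) shows what they correspond to:

def lame_to_engineering(lam, mu):
    # Young's modulus and Poisson's ratio from the Lame constants
    E = mu * (3.0 * lam + 2.0 * mu) / (lam + mu)
    nu = lam / (2.0 * (lam + mu))
    return E, nu

# lambda = mu = 100 000 N/mm^2 as in the conchoidal fracture example
print(lame_to_engineering(1.0e5, 1.0e5))   # (250000.0, 0.25), i.e. E = 250 000 N/mm^2, nu = 0.25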
5.1.1 Convergence Study of Multilevel Solution Strategies
Using a conchoidal fracture example, we investigate the convergence properties of our multilevel solution strategies, i.e. semi-geometric multigrid (SMG) and RMTR method. All investigated solution strategies are set up to fulfill the same stopping
Fig. 11 Convergence study of CG-SMG, performed for conchoidal fracture simulation on cube 50 × 50 × 25. Left: Average number of CG-SMG iterations over all time-steps. Right: Average convergence rates of the CG-SMG method as a function of dofs
criterion. In particular, we terminate the solution process if ‖r_k‖ < 10⁻⁷. Here, the symbol ‖·‖ denotes the Euclidean norm and r_k stands for the residual of the kth iterate. Both iterative methods were implemented using the PETSc [5] backend of the Utopia library [1]. All tests were performed on the local cluster at the Institute of Computational Science (ICS), Università della Svizzera Italiana, consisting of 24 compute nodes, each equipped with two Intel® E5-2650 v3 processors with a clock frequency of 2.60 GHz. Staggered solution scheme with semi-geometric multigrid We employ a conjugate gradient method preconditioned with a semi-geometric multigrid (CG-SMG) to efficiently solve the linear systems arising in the displacement and the phase-field subproblems. Our implementation of the SMG method employs the LU factorization from the package MUMPS [4] on the coarsest level and a Gauss–Seidel smoother configured with three smoothing steps on all other levels. The nonlinearity of the mechanical subproblem is addressed using Newton's method. In Fig. 11 on the left, we show the average number of CG-SMG iterations as a function of time. As we can see, the number of iterations for the displacement field stays almost constant during the whole simulation, while it increases for the phase-field subproblem as the crack propagates. However, this increase is not significant and the convergence rate stays relatively low even in the most demanding loading step. Furthermore, we study the convergence properties of the CG-SMG method with respect to the number of dofs. To this aim, we evaluate the asymptotic convergence rate, defined as ρ = ‖x_{k+1} − x_k‖ / ‖x_k − x_{k−1}‖, where x_k represents the kth iterate. As depicted in Fig. 11 on the right, the CG-SMG method shows h-independence. Monolithic solution scheme with RMTR We solve the nonlinear coupled minimization problems using the RMTR method, see Sect. 3. The RMTR method was configured with one pre/post-smoothing step and two coarse grid iterations. In order to solve the arising QP problems, we employ 10 steps of the projected conjugate gradient method
Fig. 12 Number of nonlinear iterations/V-cycles over time-steps for Left: Conchoidal fracture, rock geometry, 7 866 166 dofs. Right: Pressurized fracture, unit cube geometry, 4 121 204 dofs. RMTR was set up with 3 levels
Table 5 Cumulative number of RMTR iterations over all time steps performed for the conchoidal fracture example with rock geometry. Left: Robustness with respect to the varying length-scale parameter lc. Right: Robustness with respect to different choices of the degradation function g(z)
lc           6h    4h    2h    1.5h
# V-cycles   68    69    69    70

a            2     1     0.5   0.1   0.01
# V-cycles   70    72    73    75    75
with a Jacobi preconditioner. On the coarsest level, we use an active set strategy [25] with the direct solver MUMPS [4]. Figure 12 demonstrates the convergence of the RMTR method and its single level variant, the trust region (TR) method, for all time-steps. As we can observe, the number of V-cycles required by the RMTR method is almost an order of magnitude lower than the number of iterations required by the TR method. This is especially true for the time-steps in which fracture propagation occurs. Further, we demonstrate the robustness of the RMTR method with respect to the model parameters. Table 5 shows the obtained results for varying choices of the degradation function g(z) and the length-scale parameter lc. As we can see, the cumulative number of V-cycles stays almost constant as the parameters change. This is in contrast to the results observed in Sect. 4.1 for the standard Newton based nonlinear solver, where the convergence speed of the method was highly dependent on the parameter choice. Finally, we illustrate the computational complexity and scalability properties of our implementation of the RMTR method. In Fig. 13 on the left, we depict the computational time required to perform one V-cycle while increasing the number of dofs. As expected, the required time increases linearly, reflecting that one V-cycle of the RMTR method is of optimal complexity. Further, we examine the strong scalability of our implementation of the RMTR method. For this reason, we compute the relative speedup as T_20/T_p, where T_20 and T_p denote the time required by 20 cores
Fig. 13 Left: Computational complexity of RMTR method, conchoidal fracture, rock geometry. Right: Strong scaling test, conchoidal fracture, rock geometry, 7 866 166 dofs
Fig. 14 Isosurfaces for the phase-field z = 0.95 of the pressure driven crack growth in three dimensions at different time steps t = {0, 11, 12}[s]
(1 node) and p cores, respectively. The obtained results, reported in Fig. 13 on the right, demonstrate that our implementation gives rise to almost ideal scaling up to 200 cores.
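Both diagnostic quantities used in this study, the asymptotic convergence rate ρ and the relative speedup T_20/T_p, are straightforward to evaluate from logged iterates and timings. The following sketch assumes they are available as plain arrays and dictionaries (the variable names and the sample timings are illustrative, not measured data):

import numpy as np

def asymptotic_convergence_rate(iterates):
    # rho = ||x_{k+1} - x_k|| / ||x_k - x_{k-1}|| for the last three stored iterates
    x_prev, x_curr, x_next = iterates[-3], iterates[-2], iterates[-1]
    return np.linalg.norm(x_next - x_curr) / np.linalg.norm(x_curr - x_prev)

def relative_speedup(timings, base_cores=20):
    # relative speedup T_20 / T_p for each core count p (timings: dict p -> wall time)
    t_base = timings[base_cores]
    return {p: t_base / t_p for p, t_p in sorted(timings.items())}

# illustrative wall times in seconds for one V-cycle on 20 to 200 cores
timings = {20: 100.0, 40: 51.0, 80: 26.5, 160: 14.0, 200: 11.5}
print(relative_speedup(timings))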
5.2 Pressure Driven Crack Growth Concerning the externally driven fracture processes, we present a three dimensional example in the following. A cube of size 0.5 × 0.5 × 0.5 m³ is considered in which five square crack surfaces are prescribed by setting the phase-field to z = 1. All boundaries are fixed in all directions. The pressure is injected at the crack surfaces by applying the modified stress in (29). The simulation is performed with the linear material model and the material parameters are E = 50 000 N/mm², ν = 0.2 and Gc = 75 · 10⁻³ N/mm. The solution technique for the system of equations is based on the multigrid methods proposed in Sect. 3. The isosurfaces for the phase-field value z = 0.95 are shown in Fig. 14 at different times. The crack surfaces grow with increasing pressure such that interesting crack patterns occur in the whole domain.
Fig. 15 Isosurfaces for the phase-field z = 0.95 of the penny-shaped problem in three dimensions at different time steps t applying a direction vector a = (1, 5, 1)
Additionally, the pressure driven crack growth is investigated in a cylinder with anisotropic material behavior by applying the anisotropic crack density function (30). Analogously, one square crack surface is prescribed in the center of the specimen, see the left plot of Fig. 15. The pressure is prescribed at the initial crack. Concerning the anisotropy, the direction vector is chosen to be a = (1, 5, 1) to specify the preferred direction of the material. In Fig. 15 the isosurfaces of the phase-field show the crack propagation along the defined direction such that the crack surface spreads through the whole cylinder.
6 Summary The phase-field model is an established way to simulate crack propagation. While there is a series of parameters that have to be chosen thoughtfully and appropriately, this diffuse interface approach constitutes a reliable way to predict complex fracture patterns. In this contribution we focused on these various influences and extended the standard phase-field model to pneumatic fracture. On the modeling side, externally driven fracture processes are incorporated by moving boundary conditions; the prescribed pressure is applied directly on the crack flanks. Additionally, anisotropic crack growth is investigated in combination with pressure driven fracture to take the constitution of the soil into account within pneumatic fracturing. Concerning the numerical side, the phase-field model is solved using multilevel methods. Related to the given problem and to the various solution schemes, in particular the monolithic and the staggered scheme, different approaches are presented for linear and finite elasticity. The solution schemes are also influenced by the presented parameters. Eventually, the applicability and flexibility of the phase-field model are demonstrated by means of various numerical examples in two and three dimensions.
Acknowledgements The authors gratefully acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) under the project “Large-scale simulation of pneumatic and hydraulic fracture with a phase-field approach” as part of the Priority Program SPP 1748 with the Project Number 255801726.
References 1. Utopia: A C++ embedded domain specific language for scientific computing. Git repository. https://bitbucket.org/zulianp/utopia 2. M. Ambati, L. De Lorenzis, Phase-field modeling of brittle and ductile fracture in shells with isogeometric NURBS-based solid-shell elements. Comput. Methods Appl. Mech. Eng. 312, 351–373 (2016) 3. M. Ambati, T. Gerasimov, L. De Lorenzis, Phase-field modeling of ductile fracture. Comput. Mech. 55, 1017–1040 (2015) 4. P.R. Amestoy, I.S. Duff, J.-Y. L’Excellent, J. Koster, MUMPS: a general purpose distributed memory sparse solver, in International Workshop on Applied Parallel Computing (Springer, Berlin, 2000), pp. 121–130 5. S. Balay, S. Abhyankar, M. Adams, J. Brown, P. Brune, K. Buschelman, V. Eijkhout, W. Gropp, D. Kaushik, M. Knepley, et al., PETSc users manual revision 3.5. Argonne National Laboratory (ANL) (2014) 6. M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems. Acta Numer 14, 1–137 (2005) 7. C. Bilgen, A. Kopaniˇcáková, R. Krause, K. Weinberg, A phase-field approach to conchoidal fracture. Meccanica 1–17 (2017) 8. C. Bilgen, A. Kopaniˇcáková, R. Krause, K. Weinberg, A detailed investigation of the model influencing parameters of the phase-field fracture approach. Surveys for Applied Mathematics and Mechanics (2019), p. e202000005 9. C. Bilgen, K. Weinberg, On the crack-driving force of phase-field models in linearized and finite elasticity. Comput. Methods Appl. Mech. Eng. 353, 348–372 (2019) 10. C. Bilgen, K. Weinberg, Phase-field model to fracture for pressurized and anisotropic behavior (2020). Submitted to 11. J. Bleyer, R. Alessi, Phase-field modeling of anisotropic brittle fracture including several damage mechanisms. Comput. Methods Appl. Mech. Eng. 336, 213–236 (2018) 12. M.J. Borden, Isogeometric analysis of phase-field models for dynamic brittle and ductile fracture. Ph.D. Thesis (2012) 13. M.J. Borden, T.J.R. Hughes, C.M. Landis, C.V. Verhoosel, A higher-order phase-field model for brittle fracture: formulation and analysis within the isogeometric analysis framework. Comput. Methods Appl. Mech. Eng. 273, 100–118 (2014) 14. M.J. Borden, C.V. Verhoosel, M.A. Scott, T.J.R. Hughes, C.M. Landis, A phase-field description of dynamic brittle fracture. Comput. Methods Appl. Mech. Eng. 217–220, 77–95 (2012) 15. B. Bourdin, C.P. Chukwudozie, K. Yoshioka, A variational approach to the numerical simulation of hydraulic fracturing, in SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers (2012) 16. B. Bourdin, G.A. Francfort, J.-J. Marigo, The variational approach to fracture. J. Elast. 91, 5–148 (2008) 17. A. Brandt, Multi-level adaptive solutions to boundary-value problems. Math. Comput. 31, 333–390 (1977) 18. A. Brandt, Algebraic multigrid theory: the symmetric case. Appl. Math. Comput. 19, 23–56 (1986) 19. W.L. Briggs, S.F. McCormick, et al., A Multigrid Tutorial (Siam, 2000)
20. T. Cajuhi, L. Sanavia, L. De Lorenzis, Phase-field modeling of fracture in variably saturated porous media. Comput. Mech. 61, 299–318 (2018) 21. J.D. Clayton, J. Knap, Phase field modeling of directional fracture in anisotropic polycrystals. Comput. Mat. Sci. 98, 158–169 (2015) 22. A.R. Conn, N.I. Gould, P.L. Toint, Trust Region Methods, vol. 1 (Siam, 2000) 23. P. Deuflhard, Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms, vol. 35 (Springer Science & Business Media, Berlin, 2011) 24. T. Dickopf, R. Krause, A study of prolongation operators between non-nested meshes, in Domain Decomposition Methods in Science and Engineering XIX (Springer, Berlin, 2011), pp. 343–350 25. F. Facchinei, J. Júdice, J. Soares, An active set Newton algorithm for large-scale nonlinear programs with box constraints. SIAM J. Optim. 8, 158–186 (1998) 26. P. Farrell, C. Maurini, Linear and nonlinear solvers for variational phase-field models of brittle fracture. Int. J. Numer. Methods Eng. 109, 648–667 (2017) 27. G.A. Francfort, J.-J. Marigo, Revisiting brittle fracture as an energy minimization problem. J. Mech. Phys. Solids 46, 1319–1342 (1998) 28. M. Ghamgosar, D.J. Williams, N. Erarslan, Effect of anisotropy on fracture toughness and fracturing of rocks, in 49th US Rock Mechanics/Geomechanics Symposium, American Rock Mechanics Association (2015) 29. S. Gratton, M. Mouffe, P. Toint, M. Weber Mendonca, A recursive ∞ -trust-region method for bound-constrained nonlinear optimization. IMA J. Numer. Anal. 28, 827–861 (2008) 30. S. Gratton, A. Sartenaer, P.L. Toint, Recursive trust-region methods for multiscale nonlinear optimization. SIAM J. Optim. 19, 414–444 (2008) 31. A.A. Griffith, The phenomena of rupture and flow in solids. Philos. Trans. R. Soc. Lond. 221, 163–198 (1921) 32. C. Groß, R. Krause, On the convergence of recursive trust-region methods for multiscale nonlinear optimization and applications to nonlinear mechanics. SIAM J. Numer. Anal. 47, 3044–3069 (2009) 33. W. Hackbusch, Multi-Grid Methods and Applications, vol. 4 (Springer, Berlin, 1985) 34. Y. Heider, B. Markert, A phase-field modeling approach of hydraulic fracture in saturated porous media. Mech. Res. Commun. 80, 38–46 (2017) 35. C. Hesch, A.J. Gil, R. Ortigosa, M. Dittmann, C. Bilgen, P. Betsch, M. Franke, A. Janz, K. Weinberg, A framework for polyconvex large strain phase-field methods to fracture. Comput. Methods Appl. Mech. Eng. 317, 649–683 (2017) 36. D. Jodlbauer, U. Langer, T. Wick, Matrix-free multigrid solvers for phase-field fracture problems (2019). arXiv:1902.08112 37. A. Karma, D.A. Kessler, H. Levine, Phase-field model of mode III dynamic fracture. Phys. Rev. Lett. 81, 045501 (2001) 38. A. Kopaniˇcáková, R. Krause, A recursive multilevel trust region method with application to fully monolithic phase-field models of brittle fracture. Comput. Methods Appl. Mech. Eng. 360, 112720 (2020) 39. C. Kuhn, R. Müller, A continuum phase field model for fracture. Eng. Fract. Mech. 77, 3625– 3634 (2010) 40. C. Kuhn, R. Müller, Simulation of size effects by a phase field model for fracture. Theor. Appl. Mech. Lett. 4, 051008 (2014) 41. C. Kuhn, T. Noll, R. Müller, On phase field modeling of ductile fracture. Surv. Appl. Math. Mech. 39, 35–54 (2016) 42. C. Kuhn, A. Schlüter, R. Müller, On degradation functions in phase field fracture models. Comput. Mat. Sci. 108, 374–384 (2015) 43. B. Li, C. Peco, D. Millán, I. Arias, M. 
Arroyo, Phase-field modeling and simulation of fracture in brittle materials with strongly anisotropic surface energy. Int. J. Numer. Meth. Eng. 102, 711–727 (2015) 44. Z. Liu, D. Juhre, Phase-field modelling of crack propagation in anisotropic polycrystalline materials. Procedia Struct. Integr. 13, 787–792 (2018)
45. S. Mariani, U. Perego, Extended finite element method for quasi-brittle fracture. Int. J. Numer. Meth. Eng. 58, 103–126 (2003) 46. C. Miehe, M. Hofacker, F. Welschinger, A phase field model for rate-independent crack propagation: robust algorithmic implementation based on operator splits. Comput. Methods Appl. Mech. Eng. 199, 2765–2778 (2010) 47. C. Miehe, S. Mauthe, Phase field modeling of fracture in multi-physics problems. Part III. Crack driving forces in hydro-poro-elasticity and hydraulic fracturing of fluid-saturated porous media. Comput. Methods Appl. Mech. Eng. 304, 619–655 (2016) 48. C. Miehe, S. Mauthe, S. Teichtmeister, Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture. J. Mech. Phys. Solids 82, 186–217 (2015) 49. C. Miehe, L.-M. Schänzel, H. Ulmer, Phase-field modeling of fracture in multi-physics problems. Part I. Balance of crack surface and failure criteria for brittle crack propagation in thermoelasitc solids. Comput. Methods Appl. Mech. Eng. 294, 449–485 (2015) 50. C. Miehe, F. Welschinger, M. Hofacker, Thermodynamically consistent phase-field models of fracture: variational principles and multi-field FE implementations. Int. J. Numer. Meth. Eng. 83, 1273–1311 (2010) 51. A. Mikelic, M.F. Wheeler, T. Wick, A phase-field method for propagating fluid-filled fractures coupled to a surrounding porous medium. Multiscale Model. & Simul. 13, 367–398 (2015) 52. A. Mikeli´c, M.F. Wheeler, T. Wick, Phase-field modeling of a fluid-driven fracture in a poroelastic medium. Comput. Geosci. 19, 1171–1195 (2015) 53. C.T. Montgomery, M.B. Smith, Hydraulic fracturing: history of an enduring technology. J. Petrol. Technol. 62, 26–40 (2010) 54. S.G. Nash, A multigrid approach to discretized optimization problems. Optim. Methods Softw. 14, 99–116 (2000) 55. J. Nocedal, S. Wright, Numerical Optimization (Springer Science & Business Media, Berlin, 2006) 56. M. Ortiz, A. Pandolfi, A class of cohesive elements for the simulation of three-dimensional crack propagation. Int. J. Numer. Meth. Eng. 44, 1267–1282 (1999) 57. A. Raina, C. Miehe, A phase-field model for fracture in biological tissues. Biomech. Model. Mechanobiol. 15, 479–496 (2016) 58. K.L. Roe, T. Siegmund, An irreversible cohesive zone model for interface fatigue crack growth simulation. Eng. Fract. Mech. 70(2), 209–232 (2003) 59. J.M. Sargado, E. Keilegavlen, I. Berre, J.M. Nordbotten, High-accuracy phase-field models for brittle fracture based on a new family of degradation functions. J. Mech. Phys. Solids 111, 458–489 (2018) 60. N. Sukumar, D.J. Srolovitz, T.J. Baker, J.-H. Prevost, Brittle fracture in polycrystalline microstructures with the extended finite element method. Int. J. Numer. Meth. Eng. 56, 2015– 2037 (2003) 61. S. Teichtmeister, D. Kienle, F. Aldakheel, M.-A. Keip, Phase field modeling of fracture in anisotropic brittle solids. Int. J. Non-Linear Mech. 97, 1–21 (2017) 62. M. Thomas, C. Bilgen, K. Weinberg, Analysis and simulations for a phase-field fracture model at finite strains based on modified invariants, accepted ZAMM (2019) 63. C.V. Verhoosel, R. de Borst, A phase-field model for cohesive fracture. Int. J. Num. Methods Eng. (2013) 64. Z.A. Wilson, M.J. Borden, C.M. Landis, A phase-field model for fracture in piezoelectric ceramics. Int. J. Fract. 183, 135–153 (2013) 65. Z.A. Wilson, C.M. Landis, Phase-field modeling of hydraulic fracture. J. Mech. Phys. Solids 96, 264–290 (2016) 66. X.-P. Xu, A. 
Needleman, Numerical simulations of fast crack growth in brittle solids. J. Mech. Phys. Solids 42, 1397–1434 (1994) 67. P. Zulian, A. Kopaničáková, M.G.C. Nestola, A. Fink, N.A. Fadel, J. VandeVondele, R. Krause, Large scale simulation of pressure induced phase-field fracture propagation using Utopia. Submitted to International Conference for High Performance Computing, Networking, Storage, and Analysis (2020)
Adaptive Isogeometric Phase-Field Modeling of Weak and Strong Discontinuities Paul Hennig, Markus Kästner, Roland Maier, Philipp Morgenstern, and Daniel Peterseim
Abstract The development of innovative products demands multi-material lightweight designs with complex heterogeneous local material structures. Their computer-aided engineering relies on the constitutive modeling and, in particular, the numerical simulation of propagating cracks. The underlying numerical techniques have to account for the failure of interfaces and bulk material as well as their interaction in the form of crack branching and coalescence. In order to provide realistic predictions by simulation, the true 3D nature of the problem has to be captured. For this purpose, new numerical models and methods have to be developed that combine adaptive spline-based approximations from isogeometric analysis with phase-field models for crack propagation. The main goals of this work are linked to fundamental challenges in the fields of Computational Mechanics, Numerical Analysis and Material Sciences, e.g., the representation and adaptive refinement of unstructured (water-tight) spline surfaces, the feasible coupling of spline surfaces with structured
The authors gratefully acknowledge support from Deutsche Forschungsgemeinschaft in the Priority Program 1748 Reliable simulation techniques in solid mechanics. Development of non-standard discretization methods, mechanical and mathematical analysis under the projects KA3309/3-2 and PE2143/2-2. P. Hennig · M. Kästner (B) Institute of Solid Mechanics, TU Dresden, Dresden, Germany e-mail: [email protected] P. Hennig e-mail: [email protected] R. Maier · D. Peterseim Department of Mathematics, University of Augsburg, Augsburg, Germany e-mail: [email protected] D. Peterseim e-mail: [email protected] P. Morgenstern Institute of Applied Mathematics, Leibniz Universität Hannover, Hannover, Germany e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_10
bulk meshes, the regularized modeling of heterogeneous materials, and the rigorous error analysis and control in pre-asymptotic regimes.
1 Introduction Environmental challenges demand the development of innovative, energy-efficient and resource-saving products. Lightweight designs and multi-materials with complex characteristic microstructures are the key features for the development of the associated mechanical components. The computer aided engineering (CAE) of these sophisticated components requires a solution of coupled non-linear field problems and an appropriate discretization of weak discontinuities that arise in the field variables due to rapidly changing mechanical properties at material interfaces. Furthermore, the fail-safe design of the components requires the understanding, constitutive modeling and, in particular, the computational modeling of propagating cracks that lead to strong discontinuities in the field variables. Robust, reliable and efficient non-standard numerical methods are required to compute accurate primary field variables and their derivatives across these weak and strong discontinuities. Phase-field models are used to solve interfacial problems and were first introduced in [15] in order to model solidification processes. In this approach, different phases of the system are identified by unique values of a continuous scalar field variable, the so-called order parameter. Sharp interfaces between the phases are regularized by a smooth transition of the order parameter from one unique value to another. The physics of the system and the interface are finally captured by a (higher-order) partial differential equation (PDE) that describes the evolution of the order parameter and thereby the evolution of the phases. The phase-field method has gained increasing popularity in the last decades and has been applied to different physical problems. Among these are brittle [7, 48, 56], ductile [2, 54] and dynamic [6] fracture, where the order parameter is coupled to a mechanical field problem. In this way, complex cracking phenomena such as crack deflection, branching, and coalescence follow implicitly from the solution of the coupled field problem. The implicit representation of interfaces in terms of the order parameter avoids a cumbersome numerical tracking and repeated discretizations of the discontinuities, which makes the phase-field method a suitable tool for solving moving interface problems numerically. However, in combination with finite element methods (FEM), the versatility of the approach comes at the cost of highly refined meshes that are required along the discontinuities to properly resolve the gradients in the order parameter field. Hence, adaptive local mesh refinement and coarsening are essential for efficient computations. Furthermore, the possibly higher-order character of the PDEs demands function spaces of higher continuity to spatially discretize the computational domain. Spline-based approximations fulfill these requirements and are consequently an ideal discretization technique for phase-field models. Isogeometric analysis (IGA) was introduced in [39] as a novel discretization method and as a counterpart to finite element (FE) analysis. The main objective
was to overcome the disjunction between a computational geometry model, commonly described by Non-Uniform Rational B-Splines (NURBS), and an FE model based on Lagrangian polynomial approximations of the geometry and the field variables. In the FEM, the NURBS geometry has to be meshed with finite elements, which can be a tedious, time consuming task that is hard to automate. In contrast, in IGA the B-spline basis functions which represent the geometry are also used to approximate the field variables in the numerical model. In this way, meshing of the geometry is no longer necessary and the total computing time in numerical simulations is reduced. Furthermore, the higher continuity of the B-spline basis reduces the degrees of freedom (DOF) compared to FEM if a certain error level has to be achieved and allows for a direct discretization of higher-order PDEs. In this context, IGA was successfully used to discretize, e.g., Kirchhoff-Love shells [47] or phase-field models for crack propagation [5]. IGA is also an ideal discretization technique to be combined with adaptive mesh refinement since already the coarsest mesh provides an exact computational geometry representation that is preserved during refinement. Tedious interactions with an underlying geometry model, as needed in FEM, are avoided. However, if B-splines or NURBS are considered as a basis, their tensor product nature prohibits a truly local refinement within a single NURBS patch. For that reason, various approaches were developed to overcome the restrictive tensor product structure, including e.g. hierarchical (H)B-splines [78], truncated hierarchical (TH)B-splines [30], locally refined (LR)B-splines [20], T-splines [74, 75], and Hierarchical T-splines [23]. The aim of this work is twofold. The first objective is to provide robust, reliable and efficient methods in the context of IGA that allow for adaptive spatial discretizations of non-linear and time-dependent multi-field problems. The second objective is to develop a unified modeling approach for weak and strong discontinuities in solid mechanics as they arise in the numerical simulation of heterogeneous materials due to rapidly changing mechanical properties at material interfaces or due to propagation of cracks if a specific failure load is exceeded. In this work, we provide an overview of the different achievements towards the above-mentioned objectives that were part of the DFG Priority Program 1748 Reliable simulation techniques in solid mechanics. Development of non-standard discretization methods, mechanical and mathematical analysis. We start by introducing different spline approximations as well as their adaptive refinement (Sect. 2). We focus on THB-splines (Sect. 2.1) and T-splines (Sect. 2.2). Further, we discuss the difficulties with regard to the definition of spline spaces as well as mesh refinement on unstructured meshes without the usual tensor-product structure in Sect. 2.3. In Sect. 3, we justify the use of spline approximations from multiple perspectives. We investigate the spectral superiority of spline approximations compared to classical FE approaches in Sect. 3.1 and briefly cover generalized spline spaces that are used in connection with numerical homogenization in Sect. 3.2. Particularly, we justify the controlled non-locality that is essential for spline-type spaces. Next, we apply the spline approximations and local refinement procedures presented in Sect. 2 in the context of IGA in Sect.
4.1 and discuss mesh adaptivity for incremental solution schemes (Sect. 4.2). Last, we turn to the modeling of weak and strong
discontinuities in solid mechanics. Spline-based approximations are combined with phase-field models for crack propagation. The spline framework allows for an accurate resolution of steep gradients and offers higher-order continuity compared to classical polynomial approximations whereas the phase-field approach relaxes the critical problem of re-meshing. The practical feasibility of this approach relies crucially on the above-mentioned efficient and reliable mesh refinement techniques and is investigated in Sect. 5. We present results in connection with embedded material interfaces in Sect. 5.1, where an appropriate smoothing of interface jumps is necessary in order to avoid spurious oscillations. Further, we consider the modeling of brittle and ductile fracture in homogeneous and heterogeneous materials (Sect. 5.2) and present numerical experiments.
2 Local Mesh Refinement Many physically relevant problems require the resolution of local features such as singularities or steep gradients in the field variables. While uniform mesh refinement and corresponding approximation spaces improve the approximation properties, they also heavily increase the number of DOFs. An alternative to uniform mesh refinement is local mesh refinement, where the mesh is only refined near relevant local features. In many spline based approximations, the tensor product structure of the spline basis prevents a truly local refinement within a single patch. Local refinement with these bases requires the subdivision of the analysis domain into several patches which are then refined uniformly [16, 45]. However, a weak coupling of the patches is needed, which comes at the cost of additional integrals that have to be evaluated along the patch boundaries. For that reason, various approaches were developed to overcome the restrictive tensor product structure, including the above-mentioned (T)HB-splines and T-splines, on which we focus throughout this work. In Fig. 1, local mesh refinement is illustrated for the benchmark case of a sharp internal layer using T-splines and HB-splines. It can be seen that the T-spline-based refinement is not as local as the refinement with HB-splines [21, 23]. The refinement propagates along the parametric directions as a consequence of additional restrictions that are necessary to obtain linear independence of the basis, see also Sect. 2.2. This behavior is further analyzed in Sect. 4, where THB-splines are compared against T-splines. In the following subsections, we present local refinement using THB-splines and T-splines as well as an approach to refinement if the underlying mesh does not involve a (local) tensor-product structure. The corresponding results were published in [11, 37, 57–59].
Fig. 1 Two approaches to local mesh refinement: a Initial mesh with a cubic B-spline basis, DOF = 76, and a sharp internal layer. b Local refinement (four levels) using HB-splines (DOF = 1444) and c T-splines (DOF = 2058)
2.1 Truncated Hierarchical B-Splines Hierarchical B-splines were already introduced in 1988 by [24] and were further developed in [78] to meet the requirements of IGA. However, the hierarchical basis does not satisfy the partition of unity property. Truncated hierarchical B-splines [30] recover this property by an alternative definition of the basis. The corresponding basis functions span the same space but have reduced overlap and hence provide lower condition numbers and sparser system matrices when applied to the discretization of a PDE [43, 69]. Using an element-based approach, we proposed Bézier extraction of truncated hierarchical B-splines in [37]. In this way, the truncation operation does not have to be performed during the integration of the system matrices and standard FE procedures can be seamlessly transferred to the framework of IGA. In the following, explanations are given for univariate B-splines. For more information regarding the generalization to a multivariate or NURBS basis, we refer to [37]. To define the hierarchical basis in an element-based framework, a multi-level mesh and basis are required. To build them, we consider a hierarchy of L knot vectors Ξ^ℓ, ℓ = 0, . . . , L − 1, created by successive uniform knot insertion (h-refinement) within the univariate parametric domain P. The resulting knot vectors are nested, i.e. Ξ^ℓ ⊂ Ξ^{ℓ+1}. The non-zero intervals of the knot vector define elements. Assuming that two consecutive levels result from uniform bisection of each element, a hierarchy of nested elements with a tree-like structure is obtained, as shown in Fig. 2a where element boundaries are indicated by the symbol ×. Furthermore, each knot vector defines a set of B-spline basis functions that span the approximation space N^ℓ of the corresponding level; these spaces are nested, i.e., N^ℓ ⊂ N^{ℓ+1}. The collection of these basis functions over all levels is referred to as the multi-level basis, cf. all basis functions in Fig. 2a. As a consequence of nestedness, any B-spline function in the space N^ℓ can be represented as a linear combination of basis functions in the refined space N^{ℓ+1}
Fig. 2 Different sets of basis functions in the multi-level basis: a A – basis functions belonging to active elements (all colored lines), A− – basis functions with support in coarser active elements (dotted lines), and A+ – basis functions with support in finer active elements (dashed lines), and b the resulting truncated hierarchical basis after the application of the hierarchical subdivision operator
N^ℓ = M_{ℓ,ℓ+1} N^{ℓ+1}
(1)
with the subdivision operator M_{ℓ,ℓ+1} [69]. The relations between the basis functions translate directly into the transformation rules
P^{ℓ+1} = M^T_{ℓ,ℓ+1} P^ℓ ,
(2)
F^{ℓ+1} = M^T_{ℓ,ℓ+1} F^ℓ ,
(3)
for control points P^ℓ and field variables F^ℓ, if the same geometry or data are to be represented in terms of a refined basis, respectively. To create a hierarchical function space A in an element-based framework, elements of different hierarchy levels have to be chosen/activated by some criterion to discretize the analysis domain. To ensure a well graded mesh, the selection of elements is restricted to pre-defined rules as they are introduced in Sect. 4. Furthermore, the so-called active elements have to cover the domain P without any overlap, as illustrated in Fig. 2a where active elements are indicated in green. Every active element is associated with p + 1 basis functions in the multi-level basis.
The union of these basis functions on each level is denoted by A^ℓ and plotted in color in Fig. 2a. However, to ensure a linearly independent basis, not all of these functions can contribute to the hierarchical approximation, i.e. attention has to be paid to basis functions whose support overlaps with the domains of active elements on finer or coarser hierarchy levels. Their contributions are correctly accounted for during the assembly of the hierarchical system of equations. The procedure consists of three steps: 1. First, the element matrices of all active elements are computed using the basis A^ℓ on each individual level and hence without considering information on whether the basis function contributes to the hierarchical basis or not. This ensures the applicability of standard Bézier extraction. 2. Once the element matrices for all active elements of one level have been obtained, they are assembled to form sub-systems K^ℓ F^ℓ = f^ℓ for each hierarchy level. These sub-systems are combined to form the global block-diagonal system of equations,
diag(K^0, K^1, . . . , K^{L−1}) · [F^0, F^1, . . . , F^{L−1}]^T = [f^0, f^1, . . . , f^{L−1}]^T
(4)
3. In the system (4), however, there is no communication between the individual levels. This interconnection is introduced in terms of the hierarchical subdivision operator M_h. It acts as a transformation matrix on the multi-level system by transferring the contributions from all shape functions of the multi-level basis to the hierarchical basis A. Using the simple matrix multiplications K_h = M_h K M_h^T and f_h = M_h f, the hierarchical system of equations
K_h u_h = f_h
(5)
is obtained. It ensures that only basis functions of the THB-spline basis contribute to the approximation. The hierarchical subdivision operator can be constructed using the inter-level subdivision operators (1) and the basis function sets illustrated in Fig. 2a. Based on the definition of the hierarchical subdivision operator, the produced hierarchical basis contains the truncation property, see Fig. 2b.
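In terms of linear algebra, steps 2 and 3 reduce to a block-diagonal assembly followed by a congruence transformation with the hierarchical subdivision operator. A minimal SciPy sketch of this algebraic part is given below; it assumes that the per-level matrices, the per-level right-hand sides and the sparse operator M_h have already been set up by the procedure described above (the function name is illustrative):

import numpy as np
from scipy.sparse import block_diag

def assemble_hierarchical_system(level_matrices, level_rhs, M_h):
    # steps 2-3: stack the per-level sub-systems into the block-diagonal
    # multi-level system (4) and transform it with the hierarchical
    # subdivision operator M_h to obtain the system (5)
    K = block_diag(level_matrices).tocsr()   # no coupling between levels yet
    f = np.concatenate(level_rhs)
    K_h = M_h @ K @ M_h.T                    # K_h = M_h K M_h^T
    f_h = M_h @ f                            # f_h = M_h f
    return K_h, f_h                          # solve K_h u_h = f_h afterwards

The truncation of the basis is contained entirely in M_h, so the element-level routines remain those of a standard Bézier-extraction-based code.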
2.2 T-Splines T-splines [74, 75] were introduced as a new realization of classical B-splines in the context of computer-aided design. They work even on irregular meshes with so-called hanging nodes and also allow for local mesh refinement. For the construction of odd-
degree T-splines in axis-aligned box meshes, each vertex is associated with a knot vector for each dimension, containing the distances to the nearest mesh interfaces. The tensor product of the B-splines that correspond to these knot vectors gives the multivariate spline function associated to the considered vertex, see Fig. 3 for an illustration. For uniform meshes, this construction yields the canonical B-spline basis, while for non-uniform meshes, the properties of the generated functions and the spanned spline space need further investigation. Two main issues have been pointed out in [8]. First, although T-splines showed good performance in practice (see [21]), linear dependencies of the spline functions may occur on individual mesh elements and, in pathological cases, even globally. This issue was solved through the concept of analysis-suitability in [50], which is a topological criterion for the linear independence of T-spline functions and, at that time, had only been characterized in two dimensions [3, 72]. Second, refined meshes may yield non-nested spline spaces.
Fig. 3 Definition of a T-spline basis function on an irregular grid
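The tensor-product construction sketched in Fig. 3 can be reproduced with a few lines of SciPy; the local knot vectors below are purely illustrative and merely stand in for the vertex-local knot vectors extracted from a T-mesh as described above:

import numpy as np
from scipy.interpolate import BSpline

def t_spline_function(knots_x, knots_y):
    # bivariate blending function as the tensor product of the univariate
    # B-splines defined by the vertex-local knot vectors (degree = len(knots) - 2)
    bx = BSpline.basis_element(knots_x, extrapolate=False)
    by = BSpline.basis_element(knots_y, extrapolate=False)
    return lambda x, y: np.nan_to_num(bx(x)) * np.nan_to_num(by(y))

# cubic example: five local knots per direction give degree 3
N = t_spline_function([0.0, 1.0, 2.0, 2.5, 3.0], [0.0, 0.5, 1.0, 2.0, 3.0])
print(N(1.5, 1.0))

For uniform local knot vectors this reduces to the canonical B-spline basis mentioned above; on non-uniform T-meshes, linear independence of the resulting functions is exactly the issue addressed by analysis-suitability.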
In view of these issues, we have developed and implemented an adaptive refinement procedure for T-splines [59] and augmented the theory on their linear independence and the refinement strategy to 3D T-splines [57]. This work was generalized to arbitrary higher dimensions [58], which paves the way to mesh-adaptive T-spline-based space-time discretizations. We proved the optimal complexity of the refinement procedure. Similar arguments also guarantee the optimal complexity of adaptive mesh refinement in the context of hierarchical splines [11]. While the refinement algorithm from [72] is based on a heuristic choice of refinement steps to recover analysis-suitability after a naive refinement, and its complexity analysis is hardly feasible, our alternative refinement routine preserves analysis-suitability by default without checking for it. The refinement strategy is based on three key ideas. First, the refinement of a mesh element is restricted to subdivision in only one dimension, i.e., the element is bisected or cut into several slices. Second, there is a fixed cyclic order of the (two or more) dimensions such that for each refinement level, the direction of subdivision is fixed a priori. Third, there is a distance kept
between inter-level transitions. That is, any interface between elements of two different refinement levels has a fixed neighborhood in which no other refinement levels occur. The size of this neighborhood scales linearly with the polynomial degree of the spline functions and the size of the considered interface. We realized these key ideas in a refinement algorithm that recursively generates a so-called closure of the elements marked for refinement. For each marked element, a neighborhood is checked for coarser elements. If existent, these are marked as well and their neighborhoods are checked as before. The size of the neighborhood again scales linearly with the spline degree and the element size. After generation of the closure, all marked elements are subdivided individually in the direction that corresponds to the respective refinement level. Since our refinement procedure generally works in arbitrary dimensions, it is the key tool for the refinement of unstructured meshes as explained in the subsequent subsection.
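The closure-based refinement described above can be summarized in a few lines of pseudo-Python. The mesh interface used here (neighborhood, level, bisect, dim) is hypothetical and only mirrors the operations named in the text; it is a sketch of the control flow, not of the actual implementation from [57–59]:

def refine(mesh, marked):
    # recursively collect the closure: coarser elements in the neighborhood
    # of a marked element are marked as well
    closure, stack = set(), list(marked)
    while stack:
        elem = stack.pop()
        if elem in closure:
            continue
        closure.add(elem)
        for nb in mesh.neighborhood(elem):     # size scales with degree and element size
            if mesh.level(nb) < mesh.level(elem):
                stack.append(nb)
    # subdivide each element of the closure in the direction fixed a priori
    # by its refinement level (cyclic order of the dimensions)
    for elem in closure:
        direction = mesh.level(elem) % mesh.dim
        mesh.bisect(elem, direction)

The recursion only adds coarser elements and therefore terminates; as shown in [58, 59], the overall procedure is of optimal (linear) complexity.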
2.3 Unstructured T-Splines While classical spline spaces as described above typically are based on a (local) tensor-product mesh, such a structure is not always given. In particular, in the context of design purposes, common computer-aided design software allows for the introduction of so-called extraordinary nodes. These are nodes that neighbor exactly 3 or at least 5 edges in the mesh without being a T-junction (a hanging node). The existence of one or several extraordinary nodes in the mesh hence prohibits the assumption that there is a structured initial mesh, as it is made in [57, 59, 72]. Such meshes are therefore referred to as unstructured. The refinement strategy introduced in [72] only requires a local notion of parametric directions in order to locally check for collision-free horizontal and vertical T-junctions, hence the problem may be marginal for this algorithm if the mesh is not substantially refined at extraordinary nodes. However, extraordinary nodes are likely to represent weak singularities in the geometry. Therefore, it is important to account for the case that the solution of the PDE has a singularity at an extraordinary node. In that case, a mesh-adaptive Galerkin method yields a mesh refinement that concentrates at that extraordinary node, which is a delicate task. We sketched a solution to this problem in [58]. The general idea is to interpret a two-dimensional unstructured mesh as part of a higher-dimensional structured mesh. Combined with appropriate embedding and extraction routines, the refinement strategy of [57, 59] described in Sect. 2.2 provides a refinement algorithm for unstructured meshes, see also the illustration in Fig. 4. The spline basis on these unstructured meshes can be constructed as in [79], and, in the context of refinement and linear independence, we consider unstructured meshes and the associated spline spaces in a manifold setting, which is explained in [68]. The T-splines are constructed on a family (proto-atlas) of rectangular domains (proto-charts) and associated tensor-product meshes. Each proto-chart is associated with a geometry map from the proto-chart
Fig. 4 Example for the refinement of a simple unstructured mesh. In the first refinement step, the lower left element is marked, and in the second step, its right-hand child element
to a part (chart) of the physical domain. The charts are supposed to overlap, and similarly, the proto-charts overlap in an abstract sense which is realized by so-called transition functions. The T-splines that correspond to the unstructured mesh (constructed as in [79]) are linearly independent if on each proto-chart, the corresponding T-splines (including the T-splines from other overlapping proto-charts) are linearly independent. An interesting observation is that the spaces that are obtained in [68] can also be represented by appropriate projections of basis functions of the higher-dimensional mesh that is used for the embedding. Further properties of the proposed refinement scheme are the preservation of shape regularity as well as linear complexity, both inherited from the refinement routine for higher-dimensional meshes which has been introduced and investigated in [58] and was outlined in the previous subsection.
3 Spline-Based Analysis In this section, we study general spline approximations from a different perspective. In particular, we present results that justify the usage of splines in general. While approximation properties of analysis-suitable T-splines have been studied for instance in [4], we focus on the superior eigenvalue approximation of IGA in comparison to classical FEM as well as on approximation results that have been shown for adapted spline-type spaces in the context of certain numerical homogenization methods [38, 53, 61–63]. The work that is presented below was published in [12, 27, 51, 52, 66].
3.1 Spectral Superiority of Splines The superiority of IGA compared to classical finite elements with respect to eigenvalue approximation was conjectured in several publications, including [16–18]. In this regard, our results in [27] show that using B-splines of maximal smoothness, the ratios of numerical eigenvalues and corresponding true eigenvalues are indeed bounded with respect to the polynomial degree p for the majority of the numerical spectrum. However, not all numerical eigenvalues are stable and there exist a few outliers which occur in connection with essential boundary conditions. In the case of a classical FE discretization, the corresponding ratios blow up when p is increased. These observations are illustrated by the numerical experiments presented in Fig. 5. Note that the grids are chosen to obtain a similar number of DOFs for the highest polynomial degree. Regarding the above-mentioned outliers in the B-spline discretization, one can show that for K_p being the dimension of the spline space with polynomial degree p and maximal smoothness on a uniform grid with mesh size h,
Fig. 5 Frequency ratios λ̂_k/λ_k for the Laplace eigenvalue problem on the unit square with Dirichlet boundary conditions, computed with FE functions of degree p on a uniform rectangular grid consisting of 15 × 15 elements (top), and computed with splines of maximum smoothness of degree p on a uniform rectangular grid consisting of 70 × 70 elements (bottom). Reprinted by permission from Springer Nature: Springer, Numerische Mathematik [27], copyright (2017)
the relative approximation error of the last eigenvalue is asymptotically bounded from below by h⁻² in the sense that
liminf_{p→∞} λ̂_{K_p} / λ_{K_p} ≥ C / h²
(6)
with a generic constant C. However, the numerical experiments in [27] show that the square root of the uppermost eigenvalue ratio λ̂_{K_p}/λ_{K_p} seems to increase linearly with the polynomial degree, which indicates that the right-hand side in (6) is not optimal and that the eigenvalue ratio λ̂_{K_p}/λ_{K_p} diverges with rate p² as p tends to infinity. To some extent, these results support the conjectured superiority of IGA over classical finite elements.
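The classical FE reference point in Fig. 5 is easy to reproduce in one dimension. The following sketch (with illustrative problem size) assembles the standard linear FE stiffness and consistent mass matrices for the Laplace eigenvalue problem on (0, 1) with homogeneous Dirichlet conditions and compares the discrete spectrum with the exact eigenvalues λ_k = (kπ)²:

import numpy as np
from scipy.linalg import eigh

def p1_eigenvalue_ratios(n):
    # ratios of discrete to exact eigenvalues for linear finite elements,
    # n interior nodes on (0, 1), homogeneous Dirichlet boundary conditions
    h = 1.0 / (n + 1)
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h                # stiffness matrix
    M = (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) * h / 6.0          # consistent mass matrix
    discrete = eigh(K, M, eigvals_only=True)
    exact = (np.arange(1, n + 1) * np.pi) ** 2
    return discrete / exact

ratios = p1_eigenvalue_ratios(100)
print(ratios[:3], ratios[-1])   # low modes are accurate, the top ratio tends to 12/pi^2 ≈ 1.22

Already for p = 1 the uppermost ratios deviate markedly from one; for higher-degree FE discretizations this deviation grows with p, which is the blow-up visible in the upper plot of Fig. 5, whereas the spline ratios in the lower plot remain bounded apart from the boundary outliers.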
3.2 Adapted Heterogeneous Spline Spaces The stabilizing effect of spline-type discretizations as mentioned in the previous subsection has also been observed in [65, 67] in the context of numerical homogenization. A uniformly accurate approximation of the spectrum is desirable in several applications, e.g., in computational wave propagation. In [41], a relationship between the discrete spectrum and the wavenumber for the Helmholtz equation was established; see also the dispersion analysis in [19]. Note that there is a close connection between discrete eigenvalue spectra and inverse inequalities. Therefore, the so-called Courant-Friedrichs-Lewy (CFL) condition that imposes a restriction on the time step size based on the spatial mesh size is prescribed by the largest numerical eigenvalue. That is, uniformly stable spectra provide a relaxation of the CFL condition. With regard to standard IGA approximations, the improved spectral behavior mentioned in Sect. 3.1 is not sufficient to achieve a CFL relaxation due to the outlier frequencies in connection with Dirichlet boundary conditions. Therefore, in [18, 41] a nonlinear parametrization of the control points is suggested in order to reduce the outlier modes. In [52, 66], we presented another approach that is actually able to exploit a boundedness of discrete spectra. This is achieved using operator-dependent spline-type basis functions for the spatial discretization. Paired with an explicit time-stepping approach, it is possible to overcome restrictive CFL conditions in the context of adaptively refined meshes [66] or if the operator involves spatial fine-scale oscillations [52]. The construction that is used in [52, 66] is based on the multiscale technique known as localized orthogonal decomposition (LOD), introduced in [53] and further refined in [38]. The construction is based on the decomposition of the solution space into a finite-dimensional coarse approximation space and a fine-scale space in the spirit of the variational multiscale method introduced in [40]. The main concept of the LOD is to choose the approximation space as the orthogonal complement of the fine-scale space with respect to an operator-dependent bilinear form. The
resulting space has improved approximation properties compared to classical finite elements with the same number of DOFs. When localizing the above construction, one obtains basis functions with larger support than classical FE basis functions that can be understood as operator-adapted spline-type functions. The smoothing of classical FE functions that is achieved by this approach is for instance further used in [36] (presented in Sect. 5.1) to justify the smearing of interface jumps. In the context of the above-mentioned multiscale method, the results in [12, 51] to some extent indicate a superiority of non-local methods – in the sense of approaches with increased but controlled communication between the DOFs – compared to approaches with only neighbor-to-neighbor communication (local methods) such as the classical first-order FEM. This particularly also justifies the use of (classical) spline-type spaces, which are characterized by such a non-local behavior.
4 Adaptive Isogeometric Analysis In this section, we return to the THB-spline and T-spline discretizations as introduced in Sect. 2 and apply the presented refinement procedures in connection with adaptive IGA. To this end, we follow the standard iterative procedure of adaptive FE analysis. This procedure consists of the steps SOLVE → ESTIMATE → MARK → REFINE/COARSEN, which are described in the following. Solve: given a finite-dimensional function space, compute a Galerkin approximation of the solution of a PDE. Estimate: compute for every active element a local estimate for the error. If no error estimator is available, other significant quantities can be used. Mark: given the results of the previous step, select mesh elements for refinement or coarsening. Refine/Coarsen: refine/coarsen the mesh and construct a new finite-dimensional function space. In the following, different refinement strategies for THB- and T-splines are compared with each other in benchmark problems. Furthermore, the adaptive procedure is generalized to problems that are solved by incremental solution schemes, where field variables have to be transferred from the old to the new mesh. The results presented below were published in [11, 34, 35, 37, 57, 59].
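In code, the adaptive procedure is a short driver loop around these four steps. The sketch below is deliberately generic: 'space' and 'pde' stand for a spline space and a problem definition with the indicated, hypothetical interface, and the marking step implements a simple quantile rule with parameter α (the concrete marking used in Sect. 4.1 is described there):

def adaptive_iga(space, pde, alpha=0.5, max_steps=20, tol=1e-6):
    # SOLVE -> ESTIMATE -> MARK -> REFINE, repeated until the estimated error is small
    # alpha in (0, 1]: fraction of elements marked for refinement
    for _ in range(max_steps):
        u = pde.solve(space)                       # SOLVE: Galerkin approximation
        eta = pde.estimate(space, u)               # ESTIMATE: dict element -> local indicator
        if sum(e**2 for e in eta.values())**0.5 < tol:
            break
        cutoff = sorted(eta.values())[int((1.0 - alpha) * len(eta))]
        marked = [el for el, e in eta.items() if e >= cutoff]   # MARK: quantile marking
        space = space.refine(marked)               # REFINE: e.g. THB- or T-spline refinement
    return u, space

Coarsening, as well as the transfer of field variables between meshes needed for incremental solution schemes, would be added in the REFINE/COARSEN step.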
4.1 THB-Splines or T-Splines – A Computational Comparison

To obtain graded meshes after refinement that meet given minimum quality requirements, we use four different refinement strategies:

1. refine+: refinement based on THB-splines as proposed e.g. in [37, 73], where the mesh only allows for a one-level difference between neighboring mesh elements.
2. refine++: refinement based on THB-splines, where only 2-admissible meshes are allowed. As introduced in [9], in m-admissible meshes, basis functions of up to m different levels are allowed to interact in an element.
3. refine_tspline+: refinement based on T-splines as introduced in [72], where the refinement process is divided into two steps. First, marked elements are refined and, second, an additional refinement step is processed to recover the linear independence of the T-spline functions.
4. refine_tspline++: refinement based on T-splines as introduced in [57, 59] (see also Sect. 2.2), where also the vicinity of the marked element is considered. By defining a class of admissible T-meshes, the proposed refinement preserves the analysis-suitability of the T-splines directly.

The methods above are referred to as greedy refinement (methods 1 and 3) and safe refinement (methods 2 and 4). The safe refinement strategies allow for a mathematical proof of linear complexity [11, 59]. Together with results on the convergence of the adaptive algorithm [9], this facilitates a mathematical proof of optimal convergence rates based on [13]. Regarding such proofs, we refer to [10, 28] in the context of (T)HB-splines and to [29] for the case of T-splines. Here, we focus on a numerical comparison of these approaches and in particular on the influence of the different mesh classes as presented in [35]. For this purpose, the four refinement strategies were compared numerically in terms of achievable convergence rates, mesh grading, and the numerical properties of the stiffness matrices in terms of sparsity and condition number.

In the following, we consider the Poisson model problem. Given a domain Ω ⊂ R², we seek u ∈ C²(Ω) such that

−Δu = f in Ω ,   ∂u/∂ν = g on Γ_N ,   and   u = 0 on Γ_D ,    (7)
where Γ_N is the Neumann boundary and Γ_D the Dirichlet boundary. As an example, we compute on the so-called slit domain

Ω = (−1, 1) × (−1, 1) ,    (8)

with boundaries Γ_D and Γ_N as illustrated in Fig. 6. Further, we choose f and g such that the exact solution reads in polar coordinates (r, φ)
Fig. 6 a Domain and boundaries for the slit domain as well as b the corresponding analytical solution
u = r^{1/2} sin(φ/2) .    (9)

The geometry leads to a singularity of the solution at the re-entrant corner. In this case, classical convergence theory does not hold, and the order of convergence with respect to the total number of DOFs is given by

k = −½ min{p, ½} ,    (10)
see [81]. The optimal order of convergence k = −p/2 can be recovered by local mesh refinement in the vicinity of the singularity. To select elements for refinement, quantile marking with marking parameter α is used. The parameter α is adjusted for each refinement strategy to achieve the best possible convergence rates. For our numerical computations, the initial mesh of the slit domain consists of 64 elements. The Bézier meshes after L refinement steps, as well as the marking parameters α, are illustrated in Fig. 7. As expected, the meshes of the safe refinement routines propagate the refinement area but produce well-graded meshes. In contrast, the greedy T-spline refinement leads to a mesh with little structure and badly shaped elements with aspect ratios up to 64. Concerning the sparsity patterns of the stiffness matrix, only the greedy THB-spline refinement creates matrices with a higher density, due to the increased interaction between the basis functions. For the adaptive local refinement, the error in the H¹-norm is plotted over the total number of DOFs in Fig. 8a. It can be seen that the errors of the greedy refinement routines appear to converge with a higher rate in the pre-asymptotic range and later approach the theoretically predicted rate of k = 1.5. The safe refinement routines have a lower convergence rate in the pre-asymptotic range, but then also converge with the theoretical rate of k = 1.5. A reason for this behavior can be found in the relatively coarse initial mesh, which forces the safe T-spline refinement to refine almost the whole domain in the first refinement steps. As a result, the safe T-spline refinement requires six times more DOFs than the greedy T-spline refinement for the same error level. Further, in Fig. 8b we plot the condition number over the DOFs. Due to the badly shaped elements, the condition number for the greedy T-spline refinement increases
Fig. 7 Slit domain: The marking parameters α, the Bézier meshes and the sparsity patterns of the stiffness matrices after L refinement steps for all a–d refinement strategies
Fig. 8 Slit domain: a The convergence rates as well as b–c the relations between the condition number of the stiffness matrix, the numerical error of the solution and the DOFs are illustrated
the fastest. The THB-spline refinements instead seem to benefit from their hierarchical structure together with the absence of a deforming geometry mapping. At a certain stage of refinement, the condition number does not increase further. This behavior has also been found in [43], where HB-splines are compared against THB- and LR B-splines. In the context of hierarchical finite elements [82], it is known and even proven that the condition number of the stiffness matrix scales with O(log(DOF)) instead of O(DOF), due to orthogonalities with respect to the energy product between basis functions of different levels. In 1D, this leads to block-diagonal stiffness matrices; in higher dimensions, this effect is milder (see e.g. Fig. 7a), but still yields good conditioning. From our observations, it seems that (T)HB-splines share these benefits. Due to the above-mentioned effect, the greedy THB-spline refinement performs best if the numerical error is plotted over the condition number (cf. Fig. 8c). Since only a small number of DOFs is added during the refinement and the condition number grows slowly per DOF, an increased level of accuracy can be reached without increasing the condition number. However, compared to uniform refinement, the T-spline refinements also produce smaller condition numbers.
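The quantile marking with parameter α used in the comparison above admits a very short implementation; the sketch below is a plain reading of the criterion (mark the elements whose indicator lies above the (1 − α)-quantile) and not necessarily the exact variant implemented in [35].

```python
import numpy as np

def quantile_mark(indicators, alpha):
    """Mark the elements whose local error indicator lies above the
    (1 - alpha)-quantile, i.e. roughly the largest alpha-fraction of
    all active elements."""
    threshold = np.quantile(indicators, 1.0 - alpha)
    return np.flatnonzero(indicators >= threshold)

# example: mark roughly the worst 5 percent of the elements
eta = np.random.rand(1000)              # dummy per-element indicators
marked = quantile_mark(eta, alpha=0.05)
```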
4.2 Mesh Adaptivity for Incremental Solution Schemes

In this subsection, we consider more challenging partial differential equations. That is, many boundary value problems that appear in nature are non-linear or time-dependent and require incremental solution schemes. In this case, a projection or transfer of state variables is necessary during mesh-adaptive computations if a re-computation of the problem from the initial state is to be avoided. We distinguish between three different sets of state variables, stored in a state vector consisting of the subsets F, H, and I. The first subset F contains field variables, e.g., displacements or temperatures, which are each stored at element nodes or control points. The second and third subsets contain variables which are only given at integration points. Variables of the second kind, H, possess their own evolution equations at the integration point level, e.g., plastic strains or hardening variables, and are referred to as history variables. Variables of the third kind, I, such as stresses or strains, can be computed from F and H and are referred to as internal variables. For the case of a hierarchical spline basis, in [34] we proposed two different projection methods for field variables and two different transfer operators for history variables and compared them in numerical benchmarks against existing versions. In the following, we review these operators and apply them to phase-field models. It is shown that IGA improves the performance of the projection and transfer operations, as already the coarsest mesh represents the exact geometry and the hierarchical structure allows for quadrature-free projection methods.
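The partition into field, history, and internal variables can be mirrored in a small container; the sketch below is purely illustrative and the attribute names are our own.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class State:
    """State variables of an incremental computation.

    field:    control-point values (e.g. displacements), projected on
              refinement/coarsening via the subdivision operator.
    history:  quadrature-point values with their own evolution equations
              (e.g. plastic strains), transferred by CPT/BFT/WPLSQ.
    internal: quadrature-point values recomputed from field and history
              (e.g. stresses); never transferred directly.
    """
    field: np.ndarray
    history: np.ndarray
    internal: np.ndarray
```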
4.2.1 Local Mesh Refinement and Coarsening Excluding History Variables
In certain applications, adaptive refinement and coarsening are required to increase the efficiency of the computation. This is the case if, e.g., strong gradients move with time, as in phase-field models or contact problems. In the following, we concentrate on problems where the state variables only consist of the field variables F and the internal variables I. Since I can be computed from F, only projection methods for field variables are discussed. After refinement, field variables have to be transferred onto the next finer level ℓ + 1. Due to the nestedness of the basis, the subdivision operator M from Sect. 2.1 can be used. This leads to an error-free projection

F^{ℓ+1} = M_{ℓ,ℓ+1}^T F^{ℓ} .    (11)
However, after coarsening, the field variables have to be transferred onto the next coarser level ℓ − 1. In this case, the projection causes an error since N^{ℓ} ⊄ N^{ℓ−1}. This error can be minimized in the L²-norm, which requires an integration over the domain and leads to a projection between two function spaces. Alternatively, dealing
with a hierarchical basis, the subdivision operator can be used again and the solution is approximated by a least squares approach, i.e.,

min_{F^{ℓ−1}} ‖ F^{ℓ} − M_{ℓ−1,ℓ}^T F^{ℓ−1} ‖²_2 ,    (12)

where the error is minimized in the Euclidean norm. The minimization results in a linear system of equations

F^{ℓ−1} = M̄_{ℓ−1,ℓ} F^{ℓ} ,    (13)

where

M̄_{ℓ−1,ℓ} = ( M_{ℓ−1,ℓ} M_{ℓ−1,ℓ}^T )^{−1} M_{ℓ−1,ℓ}    (14)
is the pseudo-inverse of the subdivision operator, which can be identified as the required transfer operator. This operation is quadrature free. We refer to Eq. (13) as the global discrete least squares method (LSQ); a minimal code sketch of the refinement and coarsening transfer is given after the following list. The computation of F^{ℓ−1} in (13) requires the solution of a linear system that grows with the spline space N^{ℓ−1}. Also, the pseudo-inverse is dense and expensive to compute. For this reason, three possible alternatives are reviewed in the following. Each of these methods has to be executed level-wise from ℓ = L − 1 to ℓ = 1.

1. Subspace discrete least squares method (SLSQ): as proposed in [42], the computational effort can be reduced by limiting the transfer operation to the newly activated elements. In this way, the least squares fit is performed onto a subspace and the size of the resulting linear system reduces. However, for three-dimensional problems the transfer operation can still be large, and in the limiting case, where all elements are selected for coarsening, it corresponds to the LSQ.
2. Local discrete least squares method (LLSQ): in [31], a local L²-projection for an efficient imposition of Dirichlet boundary conditions in IGA was proposed. Instead of inverting the whole Gram matrix, only FE-based sub-matrices of the Gram matrix are inverted. We adapt this idea and apply it to the subdivision operator, which results in a quadrature-free local discrete least squares method. The transfer operation is split into two parts, starting with a projection operation on element level and a subsequent operation, where the resulting discontinuous approximation is smoothed by simple averaging.
3. Weighted local least squares method (LLSQw): the final proposed transfer operation is similar to the previous one but uses other weights in the smoothing operation. Here, we use weights as introduced in [77]. The idea is to weigh the field variable with respect to the impact of the corresponding basis function on element level. We can adapt this technique due to our approach to THB-splines, cf. Sect. 2.1, which is also based on Bézier extraction.
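For a single level, the refinement projection (11) and the global quadrature-free coarsening (13)–(14) reduce to dense linear algebra once the two-scale subdivision operator is available as a matrix. The sketch below assumes such a matrix M of shape (coarse DOFs × fine DOFs) stored as a NumPy array; it is not tied to any particular spline library.

```python
import numpy as np

def refine_field(M, f_coarse):
    """Error-free prolongation (11): F^(l+1) = M^T F^(l)."""
    return M.T @ f_coarse

def coarsen_field_lsq(M, f_fine):
    """Global discrete least squares coarsening (13)-(14):
    minimize ||F^(l) - M^T F^(l-1)||_2 over the coarse coefficients,
    i.e. F^(l-1) = (M M^T)^(-1) M F^(l).  No quadrature is required."""
    return np.linalg.solve(M @ M.T, M @ f_fine)

# round trip on dummy data: coarsening followed by refinement yields the
# best approximation of the fine coefficients in the coarse space
M = np.random.rand(5, 12)        # placeholder subdivision operator
f_fine = np.random.rand(12)
f_back = refine_field(M, coarsen_field_lsq(M, f_fine))
```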
The projection methods above were compared against each other and against the corresponding L²-projections regarding their convergence and error level in [34]. It has been shown that the SLSQ method performs best but can lead to large systems that would be expensive to solve. Comparing the local methods LLSQ and LLSQw, the latter produces smaller errors but requires the computation of the enhanced weights. Also the L²-versions of SLSQ and LLSQw perform better than their corresponding discrete least squares fits but require an integration over all coarsened elements. Depending on the application, the user has to balance the requirements regarding accuracy and computational effort. We show in the following example, where we solve the Cahn-Hilliard equation, that the local discrete least squares fit LLSQ produces accurate results while clearly reducing the computational time.

The Cahn-Hilliard equation describes the spinodal decomposition of binary mixtures. Taking the concentration c ∈ [0, 1] of one of the two components as phase-field order parameter, the total free energy is given by

Ψ = Ψ_bulk + Ψ_int = ∫_Ω [ f(c) + (α/2) |∇c|² ] dΩ .    (15)
The bulk free energy Ψ_bulk drives the nucleation of the decomposition, which dominates the early stages of the evolution process. The interfacial free energy Ψ_int accounts for the influence of the individual interfaces on the system. The constant α is the interface parameter that governs the interface width, and f(c) is chosen here as the logarithmic double-well potential [80]. A detailed description of the model can be found in [34, 46]. The Cahn-Hilliard equation is solved on a bi-unit domain as illustrated in Fig. 9. For the spatial discretization, a C²-continuous THB-spline basis with four hierarchical levels and p = 3 is used. For temporal discretization, a generalized-α method in combination with an adaptive time stepping scheme is employed. Homogeneous Neumann boundary conditions are applied on the whole boundary of the domain. In order to accurately capture the nucleation process, a very fine mesh has to be used in the early stages of the simulation. The initial concentration is linearly distributed along the horizontal direction of the specimen. A random perturbation of magnitude 10−3 is introduced to promote the evolution of the system. In Fig. 10a, the total free energy and its individual contributions are plotted. At the beginning of the computation, the interface energy is zero. When separation starts, interfaces are created and Ψ_int increases. Subsequently, the system reduces the surface energy and the inclusions coarsen. At this point, at t = 1 × 10−4 s, adaptive refinement and coarsening are activated to track the evolving interfaces. The Euclidean norm of the gradient of the order parameter is used as marking criterion

refine if ‖∇c‖ > θ ,   coarsen else ,    (16)
where the threshold is set to θ = 0.5. To project the field variables onto coarsened elements, the LLSQ method is applied.
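A minimal version of the marking criterion (16) is given below; grad_c_norm is assumed to hold, per active element, a representative value of ‖∇c‖ (e.g. the maximum over the element's quadrature points).

```python
import numpy as np

def mark_by_gradient(grad_c_norm, theta=0.5):
    """Marking criterion (16): refine where the phase-field gradient is
    large (interface region), coarsen where it is small (bulk)."""
    refine = np.flatnonzero(grad_c_norm > theta)
    coarsen = np.flatnonzero(grad_c_norm <= theta)
    return refine, coarsen
```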
Fig. 9 Spinodal decomposition process: The concentration and computational mesh are plotted for three different instants of time
Fig. 10 Spinodal decomposition: The total Ψ, interface Ψ_int, and bulk Ψ_bulk energies as well as the average concentration c̃ are plotted versus time a for a reference computation on a uniform fine mesh, and b compared to the mesh adaptive computation. c The required number of elements and computation time is significantly reduced using adaptive meshing
To measure the accuracy of the approach, the relative errors in the individual energies of the adaptive computation with respect to the reference solution are plotted in Fig. 10b. The errors range from 10−5 to 10−2 in the beginning, when the inclusions coarsen fast, but drop to around 10−7 in the end. Furthermore, the error in the average concentration c̃ only increases slightly, up to 10−4, which indicates that the LLSQ projection did not introduce significant artificial diffusion of the transferred field variable c. The number of elements is clearly reduced, and the computation time is cut to 74% of the reference solution, cf. Fig. 10c. Different states of the computation are illustrated in Fig. 9. It can be seen that the refined mesh adaptively follows the interfaces and that areas with c ≈ const. are coarsened.
4.2.2 Local Mesh Refinement Including History Variables
As a next step, local mesh refinement is considered for computations where the state vector includes field variables F, internal variables I, as well as history variables H, e.g., the plastic strain in mechanical computations with inelastic material behavior. Since F are given at control points whereas H are given at quadrature points, the transfer operation has to be split into a projection of F using the procedures of Sect. 4.2.1 and a transfer operation for H that is considered here. To avoid inconsistencies with the constitutive model, I should be computed after the transfer operation from F and H. In the following, the easily and quickly implemented closest point transfer is briefly reviewed. Subsequently, two possible transfer operators suitable for IGA are introduced. A schematic illustration of the three different operators is given in Fig. 11. For more details, the reader is referred to our publication [34].

1. Closest point transfer (CPT): this operator represents the simplest possibility and transfers the history variables directly from the old to the new quadrature points. The history variables for a quadrature point of the newly activated elements are taken from the closest quadrature point of the corresponding parent element, cf. Fig. 11a (a minimal code sketch is given after this list).
2. Basis function transfer (BFT): this operator combines IGA with the shape function transfer, where control points and corresponding basis functions are used in an intermediate step. Therefore, the operator is split into three parts as illustrated in Fig. 11b. In a first step T_{H,1}, the history variables at the integration points are projected level-wise onto the basis N^{ℓ}, i.e., the corresponding control values are computed. Similar to the previous section, a discrete least squares fit is used for this purpose, where the error is minimized in the Euclidean norm. In the second step T_{H,2}, control values of the history variables of all levels are projected, similarly to the field variables, onto the refined basis using the transfer operation for field variables (11). We emphasize
Fig. 11 Schematic illustration of transfer operators for history variables on a bi-quadratic B-spline patch: a Closest point transfer (CPT). b Basis function transfer (BFT). c Weighted patch based least squares fit (WPLSQ)
that this projection is exact in the case of refinement. In the last step T_{H,3}, the history variables at the quadrature points are interpolated from the control points using the shape functions of N^{ℓ+1}.
3. Weighted patch based least squares fit (WPLSQ): the use of the superconvergent patch recovery (SPR) method is proposed in the following to transfer the history variables and improve the quality of the transferred field by a weighted patch based least squares fit. This methodology is adapted from [49], where the SPR technique [83] was extended from standard FEM to IGA to develop a recovery-based error estimator for LR B-splines. There, internal variables are transferred from the superconvergent points to the integration points. Here, this method is adapted to transfer directly from the old to the new quadrature points. The operation is split into three steps as illustrated in Fig. 11c. At first, for all basis functions that have support on a marked element, element patches are created and the corresponding quadrature points are stored in a set Q. In Fig. 11c, the element patches of three control points that belong to the element selected for refinement are exemplarily indicated by color. Subsequently, a least squares fit is used on every patch to approximate the history field from Q using a monomial basis. In a last step, the history variable at the new quadrature point is computed from a weighted sum of these approximations. In Fig. 11c, one integration point of the four newly activated elements is highlighted by the red square.
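Of the three operators, the closest point transfer is simple enough to sketch in a few lines; the basis function transfer and the weighted patch based least squares fit additionally require the spline basis and the patch setup and are therefore omitted. The nearest-neighbour search below is brute force for clarity.

```python
import numpy as np

def closest_point_transfer(old_qp_coords, old_history, new_qp_coords):
    """Closest point transfer (CPT): each new quadrature point inherits the
    history variables of the nearest old quadrature point.  Simple and fast,
    but it introduces the largest transfer diffusion of the three operators."""
    # pairwise distances between new and old quadrature points
    d = np.linalg.norm(new_qp_coords[:, None, :] - old_qp_coords[None, :, :], axis=-1)
    nearest = np.argmin(d, axis=1)
    return old_history[nearest]
```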
In [34], we compared the projection methods above against each other. We showed that the simple CPT method leads to a high diffusion during the projection and therefore to higher error levels. The BFT method only slightly outperforms WPLSQ. To further investigate the influence of the different operators on the stability and efficiency of the computation, they are applied to the adaptive phase-field modeling of ductile fracture in Sect. 5.2.3.
5 Weak and Strong Discontinuities in Solid Mechanics

Based on the results of the previous section, we are able to present an efficient and unified phase-field modeling approach for weak and strong discontinuities in solid mechanics. These discontinuities arise in the numerical simulation of heterogeneous materials due to rapidly changing mechanical properties at material interfaces or due to the propagation of cracks if a specific failure load is exceeded. Using standard FE approaches for such problems, the mesh is either aligned with the material interfaces or decoupled along the cracks to obtain a C⁰- or C⁻¹-continuous basis. For some applications, however, the meshing process is costly and the numerical tracking of the discontinuities cumbersome. To overcome these drawbacks, non-standard discretization methods are used, such as meshless methods [14], where the nodes are not connected, or the extended finite element method [26, 44], where a local enrichment of the approximation space by discontinuous functions is combined with adapted integration techniques. The embedded domain method [64, 70] avoids the meshing process by an implicit representation of the physical domain in a regular background mesh. While good convergence rates are achieved for homogeneous materials, stress oscillations occur in the heterogeneous case because weak discontinuities are modeled in terms of a continuous basis. For that reason, the use of a separate background mesh for every heterogeneity was proposed in [22]. Here, a generalized Ginzburg-Landau phase-field model [7, 48] is used to simulate propagating strong discontinuities, whereas a static phase-field is introduced to regularize weak discontinuities. The regularization leads to a diffuse interface region of finite width ℓ_i between the heterogeneities, where the material properties are not defined. Instead of a simple interpolation, a homogenization is applied to compute the effective material parameters in the diffuse interface region, similar to [60, 71]. To provide an appropriate and efficient approximation, an hℓ_i-adaptive refinement strategy is applied, where the mesh size h and the interface width ℓ_i are reduced simultaneously. Finally, both methods for weak and strong discontinuities are combined to simulate crack propagation in heterogeneous materials. In all examples, the safe mesh refinement strategy for THB-splines from Sect. 4.1 is used to improve the efficiency of the computations. Due to the occurrence of field and history variables, the transfer operators from the previous section also have to be applied. The influence of the different operators on the quality of the solution is investigated for the phase-field model of ductile fracture in Sect. 5.2. The results presented below were published in [33, 36].
5.1 Embedded Material Interfaces in Linear Elasticity

In this subsection, we consider the problem of linear elasticity, which is described by the following PDE,

−∇ · σ(u) = f in Ω ,   σ(u) · ν_N = g on Γ_N ,   and   u = u_D on Γ_D ,

where f is the volume force, g the prescribed traction at the Neumann boundary Γ_N, and u_D the prescribed displacement vector at the Dirichlet boundary Γ_D. The stress tensor σ := C : ε(u) is given by the fourth-order material tensor C, and the strain tensor is described by ε(u) := ½ (∇u + ∇u^T). As illustrated in Fig. 12, the heterogeneity of a body is modeled by a subdivision of its physical domain into n smaller subdomains Ω = ∪_{i}^{n} Ω^(i) with different elasticity tensors C^(i). Material interfaces are represented by internal boundaries Γ_{ij} between the subdomains Ω^(i) and Ω^(j). For any function v defined in Ω, the jump of v across Γ_{ij} is defined by [[v]] := v^(i)|_{Γij} − v^(j)|_{Γij}, where v^(k) is the restriction of v to Ω^(k). Assuming a perfect bonding between the different material phases Ω^(i), the displacement vector has to be continuous, i.e., [[u]] = 0, and the traction vector in normal direction ν to the interface has to fulfill

[[σ · ν]] = 0 .    (17)

However, based on the Hadamard jump condition, the strain is allowed to jump perpendicular to the interface but has to be continuous in tangential direction, i.e.,

[[ε]] = a ⊗ ν ,    (18)
where a is an arbitrary vector. This results in a weak discontinuity in the solution field u across the interface.
Fig. 12 Embedded material interfaces: The heterogeneities are embedded into a non-conforming FE mesh. The material interface Γ_12 is regularized by the order parameter c over a finite length ℓ_i. The near field is defined as the domain that surrounds the interface Γ_12 up to a distance of δ/2 from the interface
To spatially discretize the problem above, an only C⁰-continuous basis has to be used to approximate the weak discontinuity in the displacement field. In the context of a FE analysis, this can be achieved by aligning the computational mesh to the material interface. Here, this restriction is relaxed by the introduction of a static phase-field, thus avoiding the associated meshing process. For this purpose, an order parameter

c̃(x) = 0 for all x ∈ Ω^(1) ,   c̃(x) = 1 for all x ∈ Ω^(2) ,    (19)

is introduced to describe the heterogeneity of a bi-material body Ω = Ω^(1) ∪ Ω^(2). The resulting sharp interface problem can be approximated by means of the Modica-Mortola energy [60]. The minimization of that energy leads to an order parameter field given by

c = ½ ( 1 + tanh( x_sd / ℓ_i ) ) .    (20)

Here, x_sd is the signed distance function that describes Γ_12 with x_sd = 0, and ℓ_i is a length scale parameter that controls the size of the regularized interface. In situations where the geometry description is obtained from imaging techniques, the regularized interface can also be derived directly from gray scale values. The material tensor C is now defined in terms of the static phase-field to provide a smooth transition from the material tensor C^(1) to C^(2). For this purpose, a homogenization approach is applied at material points where both phases are present, i.e., c ∈ (0, 1) (cf. Fig. 12). A representative volume element with a straight interface between the phases Ω^(1) and Ω^(2) is defined in a reference coordinate system (x̄, ȳ, z̄). The indicator functions c and 1 − c are interpreted as the extent in ȳ direction of the individual phases. Six deformation states for three-dimensional and three deformation states for two-dimensional problems have to be considered to compute the components of the resulting effective material tensor C̄(c). To take the orientation of the material interface into account, the material tensor C̄(c) is transformed from the reference coordinate system to the material coordinate system with basis {μ, ν, ξ}. For the illustrated two-dimensional case in Fig. 12, the normalized normal vector ν of the interface can be computed from the indicator function as ν = ∇c / ‖∇c‖. For more details on the homogenization, the reader is referred to [36]. The resulting homogenized material tensor C(c, ∇c) satisfies the static equilibrium (17) at the interface and the kinematic compatibility across the interface (18). Furthermore, it contains the Reuss and Voigt type homogenizations as limiting cases and coincides with the homogenized material tensors proposed in [60, 71].
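Equation (20) and the interface normal ν = ∇c/‖∇c‖ translate directly into code. The sketch below evaluates the regularized order parameter from a given signed distance field and approximates its normal with finite differences on a structured grid of sample points; it is meant as an illustration of the construction, not as the implementation used in [36].

```python
import numpy as np

def static_order_parameter(x_sd, l_i):
    """Regularized interface (20): c = 0.5 * (1 + tanh(x_sd / l_i))."""
    return 0.5 * (1.0 + np.tanh(x_sd / l_i))

def interface_normal(c, spacing):
    """Unit normal nu = grad(c)/|grad(c)| from c sampled on a structured grid;
    spacing is a tuple with the grid spacing per axis."""
    grads = np.gradient(c, *spacing)                  # one array per spatial axis
    g = np.stack(grads, axis=-1)
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(norm, 1e-14)                # avoid division by zero in the bulk
```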
Fig. 13 Bi-material rod: Domain with materials E^(1) = 100 MPa, E^(2) = 0.1 E^(1) and boundary conditions f = 12x2 kN mm⁻¹ and u_D = 0.01 mm
5.1.1 Numerical Example – Bi-Material Rod
The modeling approach above is tested and verified in the following numerical example of a bi-material, uni-axial rod under volume load f, cf. Fig. 13. The solution is compared against the embedded sharp interface representation under local h-refinement similar to [70]. For a better evaluation, the physical domain is divided into a near field Ω_δ and a far field Ω \ Ω_δ. As illustrated in Fig. 12, the near field is the domain that surrounds all interfaces Γ_{ij} up to a distance of δ/2 from the interface. The rod geometry with length 1 mm, boundary conditions and material parameters are given in Fig. 13. The volume force f does not act in the near field Ω_δ, which extends up to a distance of δ/2 = 0.125 mm around the material interface at x = 0.5 mm. The domain is discretized with THB-splines that are C⁰-continuous at the boundary of Ω_δ, to allow for the jump in the volume force f, and C^{p−1}-continuous across element boundaries in general. The numerical performance of the modeling approach is examined for an hℓ_i-adaptive refinement strategy, where the mesh size h and the length scale ℓ_i are reduced simultaneously at a fixed ratio ℓ_i/h = 2. The convergence plot in Fig. 14a shows that optimal convergence rates are obtained in the far field (solid lines) for different polynomial degrees only if an appropriate homogenization is applied in the diffuse interface region. In case of the embedded sharp interface, the error of the near field spreads out to the far field and no optimal rates are obtained. The error in the whole domain (dashed lines) is governed by the error in the near field and converges with
Fig. 14 Bi-material rod under local h-refinement: a H 1 -error for different polynomial degrees in the far field (solid lines) and the whole domain (dashed lines). b Stresses for sharp and diffuse interface representations
Fig. 15 Circular inclusion problem: a Model problem. b Mesh and stress solution for the problem with diffuse interface representation, p = 2, and C 1 -continuous basis after local h-refinement
a rate independent of the polynomial degree. However, the application of the local hℓ_i-refinement makes it possible to reduce the error in the near field to 10−3. The proposed approach also avoids stress oscillations that occur for sharp interface representations, as shown in Fig. 14b, where the exact solution is overestimated by more than 100% in a significant width around the interface. In summary, the diffuse interface representation confines the main error to the near field and in this way allows for optimal convergence rates in the far field. This behavior can be rigorously proven in one spatial dimension with correct parameter choices and serves as motivation for higher dimensions. For more details, we refer to [36]. We also note that the error in the near field can be further reduced by the application of the hℓ_i-refinement strategy to an error range that is typical for embedded methods and suitable for engineering applications. Furthermore, stress oscillations, which would be disadvantageous if the method is combined with a phase-field model for interface failure, are avoided.
5.1.2 Numerical Example – Inclusion Problem
In the next example, illustrated in Fig. 15a, an inclusion problem is analyzed, where a smaller cylinder with domain Ω^(1), radius a = 3 mm and material parameters E^(1) = 10⁵ MPa and ν^(1) = 0.3 is embedded into a larger cylinder with domain Ω^(2), radius b = 15 mm and material parameters E^(2) = 0.1E^(1) and ν^(2) = ν^(1). The outer boundary of the larger cylinder is subjected to a constant displacement load u_D perpendicular to the interface. Due to symmetry, we only simulate on a square with side length c = 8 mm, and the exact solution [76] is applied to its boundary. Again, hℓ_i-adaptive refinement is applied, cf. Fig. 15b. In contrast to the one-dimensional example, no optimal convergence rates are obtained for arbitrarily shaped interfaces in multi-dimensional problems. This probably results from the violation of the homogenization assumptions made above to derive the homogenized material tensor. There, constant stress and strain states in the two different phases along the material interface are assumed. However, as shown in
Fig. 16 Circular inclusion: a Convergence behavior in Ω (solid lines) and Ω \ Ω_δ (dashed lines) using local hℓ_i-refinement. b Stress σ_yy plotted along the x-axis for the sharp and diffuse interface representation with a C¹-continuous basis, p = 2, and ℓ_i = 0.0624 mm
Fig. 16a, the total error is clearly reduced using local hℓ_i-refinement. Furthermore, the diffuse modeling approach is able to improve the convergence rates compared to the embedded sharp interface using local h-refinement. Another important property of the approach is illustrated in Fig. 16b, where the stress σ_yy is plotted along the x-axis. While the jumps in both fields are smoothed due to the diffuse interface representation, high oscillations only occur for the sharp interface approach.
5.2 Brittle and Ductile Fracture in Homogeneous and Heterogeneous Materials

In contrast to standard sharp crack models, in the phase-field approach to fracture the discrete crack is regularized using an order parameter c with an internal length scale parameter ℓ_c for the transition zone, as used in Sect. 5.1. By coupling the scalar order parameter to a mechanical boundary value problem, the initiation and propagation of a sharp crack Γ_c are described by the evolution of the crack phase-field, cf. Fig. 17a. In the following, phase-field models for brittle and ductile fracture are reviewed and applied to well known benchmark problems. To simulate local damage and failure in heterogeneous materials, the modeling approach for embedded interfaces from the previous section is combined with a generalized phase-field model for brittle fracture that also accounts for interface failure between two materials [33].
5.2.1 Brittle Fracture in Homogeneous Materials
The variational formulation of the Griffith theory for brittle fracture, initially introduced by [25], is based on the energy functional
Fig. 17 Phase-field approach to fracture and numerical examples: a Displacement field u and fracture phase-field c are coupled on the solid domain Ω with boundary Γ. The sharp crack Γ_c is regularized over the length ℓ_c. b Numerical model and boundary conditions for the single edge notched shear test. c Numerical model and boundary conditions for the asymmetrically notched specimen. All values are given in mm
Ψ = ∫_{Γ_c} G_c dΓ_c + ∫_Ω ψ_el(ε(u)) dΩ ,    (21)
where Ω is the physical domain of the body that has an outer boundary Γ and an internal discontinuous boundary Γ_c, cf. Fig. 17a. Interpreting Γ_c as a crack, the fracture toughness G_c of the material is integrated over the corresponding crack surface. The linear elastic strain energy density is given by ψ_el := ½ ε(u) : C : ε(u). To avoid the integration over the crack surface, a regularized version of (21) was presented in [7]. The resulting formulation can be interpreted as a generalized Ginzburg-Landau phase-field model [48]. The extended free energy is therefore given by
Ψ(u, c) = ∫_Ω G_c [ (1 − c)²/(4ℓ_c) + ℓ_c |∇c|² ] dΩ + ∫_Ω g(c) ψ_el(ε(u)) dΩ ,    (22)

where the first integral represents the regularized crack surface energy contribution Ψ_surf and the second the elastic contribution Ψ_el.
The variable ℓ_c governs the width of the diffuse crack and the function g(c) = c² degrades the elastic energy density ψ_el where the material is broken. The approach leads to a fully coupled two-field problem for the displacement field u and the phase-field c, given by the Ginzburg-Landau evolution equation ċ = −M δΨ/δc and the equilibrium condition δΨ/δu = 0. The latter leads to the mechanical momentum balance. To find the quasi-static solution, we let M → ∞ and use a staggered solution scheme [55].
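The staggered scheme alternates between the two sub-problems within every displacement increment. The skeleton below is schematic: solve_displacement and solve_phase_field stand for the two Galerkin solves with the respective other field held fixed, both fields are assumed to be NumPy arrays, and the termination criterion on the staggered change is one common choice rather than the specific one used in [55].

```python
import numpy as np

def staggered_step(u, c, load, solve_displacement, solve_phase_field,
                   tol=1e-4, max_iter=50):
    """One load increment of a staggered (alternating) solution scheme:
    solve the momentum balance for u at fixed c, then update the
    phase-field c at fixed u, and repeat until both fields stagnate."""
    for _ in range(max_iter):
        u_new = solve_displacement(c, load)   # delta(Psi)/delta(u) = 0 at fixed c
        c_new = solve_phase_field(u_new)      # phase-field update at fixed u
        change = max(np.abs(u_new - u).max(), np.abs(c_new - c).max())
        u, c = u_new, c_new
        if change < tol:
            break
    return u, c
```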
Fig. 18 Meshes and solution of the single edge notched shear test: a Initial mesh for the adaptive (bottom) and the pre-refined mesh for the reference solution (top). b Adaptively refined meshes and the solution of the phase-field for two different displacement steps
In the following, the phase-field model of brittle fracture is solved for the problem illustrated in Fig. 17b. We consider a square specimen that is notched on the left edge and clamped at the bottom edge, while a given displacement is applied on the top edge. The initial notch is prescribed as a discrete crack in terms of a C⁻¹-continuity line between two patches. The Lamé parameters for the material are λ = 121 150 MPa and μ = 80 770 MPa, and the phase-field parameters are given by the fracture toughness G_c = 2.7 MPa mm and the characteristic length ℓ_c = 0.0075 mm. These parameters are identical to the work of [55], which allows for a quantitative comparison. A staggered, quasi-static solution scheme is used with a displacement increment of 10−5 mm per time step. A fine mesh is required in the vicinity of the crack to properly resolve the gradients in the order parameter. Here, a THB-spline basis with six hierarchical levels is used, leading to an element size for the finest level of h_E = 0.266 ℓ_c. The initial mesh is pre-refined in the vicinity of the initial crack tip, cf. Fig. 18(a, bottom). For the adaptive refinement, the safe refinement strategy of Sect. 4.1 is used in combination with a threshold value of c = 0.5, i.e. elements are marked for refinement if the phase-field parameter at any quadrature point of the element falls below 0.5. The non-linearity of the problem requires a mapping of the solution according to the algorithm presented in Sect. 4.2.1.
Fig. 19 Computational comparison of reference and adaptive solution of the single edge notched shear test: a Force-displacement curve. b Number of elements. c Summarized computation time
In Fig. 18, the adaptively refined mesh and the order parameter c are illustrated for different prescribed displacements. The solution produces the same crack pattern as given by [55]. Furthermore, it can be seen that the adaptive algorithm resolves the crack path correctly. To measure the efficiency of the adaptive approach, the solution is compared to a reference solution obtained on a locally pre-refined mesh, cf. Fig. 18(a, top). As illustrated in Fig. 19a and b, the force-displacement curves obtained with both solutions are very close, but the adaptive approach significantly reduces the number of elements. This improves the efficiency compared to uniform refinement and reduces the computation time by up to 76%, cf. Fig. 19c. Note also that knowledge of the expected crack path was used to generate the uniformly pre-refined mesh, which is impossible for problems of arbitrary complexity.
5.2.2 Brittle Fracture in Heterogeneous Materials
To model brittle fracture in heterogeneous materials, the phase-field model for brittle fracture of the previous paragraph is combined with the diffuse modeling approach for embedded material interfaces from Sect. 5.1. The free energy for a brittle, heterogeneous, linear elastic material then reads

Ψ(u, c) = ∫_Ω { G_c(x_sd, ℓ_i) [ (1 − c)²/(4ℓ_c) + ℓ_c |∇c|² ] + g(c) ψ_el(ε(u), s, ∇s) } dΩ .    (23)

Here, c is still the order parameter of the fracture phase-field, but s is now the static order parameter that describes the pre-defined and unchangeable heterogeneity as introduced in Sect. 5.1. Consequently, (23) has to be minimized only with respect to
Fig. 20 Crack propagation in heterogeneous materials: Three stiffer inclusions are embedded in a homogeneous matrix using the order parameter s. The specimen has an initial crack, indicated by the black solid line, and is subjected to a shear load
u and c. The linear elastic strain energy is redefined in terms of the homogenized material tensor C(s, ∇s) by

ψ_el(ε, s, ∇s) := ½ ε : C(s, ∇s) : ε .    (24)
Furthermore, to account for interface failure between two materials, the fracture toughness G_c(x_sd, ℓ_i) is locally reduced to the interface fracture toughness G_int in the area of the regularized material interface and therefore depends on the signed distance function x_sd and the width ℓ_i of the diffuse material interface. Due to the use of a diffuse crack and a diffuse interface model, an interaction between the length scale parameters ℓ_c and ℓ_i can occur. For that reason, we proposed in [33] to compute a scaling factor that further lowers G_int and thereby compensates for the influence of the fracture toughness of the bulk material on the crack propagation along the interface. To illustrate the capabilities of this approach, the problem shown in Fig. 17b is extended by using a heterogeneous material, cf. Fig. 20. Three inclusions (s = 1) with glass fibre-like material properties λ_gf = 17 392 MPa, μ_gf = 33 761 MPa and G_c^gf = 2 MPa mm are embedded in a matrix (s = 0) with thermoset-like material properties λ_ts = 2299 MPa, μ_ts = 1082 MPa and G_c^ts = 1 MPa mm. The material interface has a reduced fracture toughness of G_int = 0.5 MPa mm. The internal length scales are set to 2ℓ_i = 4ℓ_c = 0.03 mm. The adaptive discretization equals the discretization of the previous homogeneous example, except that the mesh is also pre-refined along the material interfaces. Again, the safe refinement strategy of Sect. 4.1 and the projection operator of Sect. 4.2.1 are used. As illustrated in Fig. 20, two cracks initiate at the weaker material interfaces of two of the inclusions after loading. Subsequently, they propagate into the matrix material, join each other, and form a larger crack with an orientation of around 45° with respect to the loading direction. This is in contrast to the homogeneous case,
where the phase-field crack initiates at the stress singularity at the initial crack tip, as shown in Fig. 18. Please note that in the presented approach no splitting of the strain energy is considered, which would lead to wrong results under mixed loads. Hence, further investigations are required to combine the diffuse material interface representation with a splitting of the strain energy (24) into a tensile and a compressive part. A first approach in this direction is proposed by [32].
5.2.3 Ductile Fracture in Homogeneous Materials
Phase-field modeling of ductile fracture has been the subject of a few investigations in the past, see [1]. Here, the model of [2] is adopted, where the free energy functional (22) is extended by a plastic energy density ψ_p,

Ψ(u, c) = ∫_Ω { G_c [ (1 − c)²/(4ℓ_c) + ℓ_c |∇c|² ] + g(c, p) ψ_el(ε(u)) + ψ_p } dΩ .    (25)
The coupling between plasticity and damage is realized through a modified degradation function

g(c, p) = c^{2p}   with   p = ε_eq^p / ε_eq,crit^p ,

that depends on the von Mises equivalent plastic strain ε_eq^p and a corresponding user-defined threshold value ε_eq,crit^p. In this way, the evolution of the phase-field (and thus the occurrence of fracture) is driven by the accumulated plastic strain.

In the following, an asymmetrically notched specimen is considered. The problem and boundary conditions are illustrated in Fig. 17c. The domain is discretized by 2-admissible, bi-quadratic THB-splines with a maximum of four levels. The parameters are as follows: elastic constants λ = 53 473 MPa and μ = 27 280 MPa, yield strength σ_y = 345 MPa, hardening modulus h = 250 MPa, fracture toughness G_c = 9.31 MPa mm, length scale ℓ_c = 0.08 mm, and equivalent plastic strain threshold ε_eq,crit^p = 0.1. An incremental displacement is applied on the top edge and the solution is computed using a staggered, quasi-static solution scheme. The initial mesh for the adaptive computations as well as the mesh for the non-adaptive reference computation are shown in Fig. 21a. The resolution of the initial mesh is chosen to appropriately capture the plastic effects in the pre-cracked state. The refinement of the mesh is necessary to capture the localization of the plastic strain and to properly approximate the crack phase-field. Since cracking is expected if the equivalent plastic strain exceeds a critical value, the elements are marked for refinement if ε_eq^p > ε_eq,crit^p at any quadrature point. The elements are refined until the finest hierarchical level is reached. Due to the elastic-plastic material model, the transfer operators from Sect. 4.2.2, which also include history variables, are used. All three introduced transfer operators are applied and compared.
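The modified degradation function is a one-liner; the clipping of c to [0, 1] in the sketch below is our own safeguard and not part of the model description.

```python
import numpy as np

def ductile_degradation(c, eps_p_eq, eps_p_eq_crit):
    """Degradation g(c, p) = c**(2*p) with p = eps_p_eq / eps_p_eq_crit.
    Without accumulated plastic strain (p = 0) the elastic energy is not
    degraded; for p = 1 the brittle choice g(c) = c**2 is recovered."""
    p = eps_p_eq / eps_p_eq_crit
    return np.clip(c, 0.0, 1.0) ** (2.0 * p)
```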
Fig. 21 Phase-field modeling of ductile fracture in an asymmetrically notched specimen: a Initial mesh for the adaptive (left) and the pre-refined mesh of the reference solution (right). b Adaptive refinement of the mesh, controlled in terms of the equivalent plastic strain ε_eq^p. c Adaptively refined meshes. d Order parameter c (c = 1: intact material, c = 0: fully cracked material) for three different stages of crack initiation and propagation
Contour plots of the results for the adaptive computation using the BFT operator are shown in Fig. 21b–d. The mesh as well as the distribution of the phase-field variable and the equivalent plastic strain are shown for three different stages of the displacement-controlled loading. It can be seen that the mesh refinement follows the area with the largest ε_eq^p before the crack starts to grow along this path. A quantitative comparison of the performance of the proposed transfer operators from Sect. 4.2.2 is possible using the force-displacement curve, cf. Fig. 22a. The computation with the BFT operator is almost identical to the non-adaptive reference solution, while the CPT and WPLSQ operators lead to a delayed fracture process. Higher numerical diffusion of plastic strain during the data transfer could be a reason for this. Furthermore, the CPT operator leads to unstable computations, which could result from the violation of the constitutive equations and the equilibrium of the system after the transfer. In Fig. 22b, the computation times required to solve the system (blue) and to adaptively refine the mesh (red) are plotted relative to the time needed for the reference computation. All adaptive schemes reduce the time by around 30%. Note that in computations with an unknown crack path, the refinement area has to be
Fig. 22 Phase-field modeling of ductile fracture in an asymmetrically notched specimen: a Force-displacement curves for the different transfer operators. b Computation time required to solve the system (blue) and to adaptively refine the mesh (red) with respect to the normalized time needed for the reference computation. c Comparison with respect to the number of elements
chosen much larger than in the current example, cf. Fig. 21(a, right). Relative to the number of elements required in the computation, the CPT requires the lowest effort followed by BFT and WPLSQ, cf. Fig. 22c. These results indicate that BFT has the best quality/cost ratio.
6 Conclusion

In this contribution, we presented approaches to handle weak and strong discontinuities that arise in the context of solid mechanics. To this end, we considered a spline discretization in the context of isogeometric analysis in combination with a phase-field approach. Since adaptive refinement is the key to fast and accurate modeling in the presence of discontinuities, we introduced refinement techniques for (T)HB-splines and T-splines and compared their performance against other existing refinement schemes. The presented refinement procedures were then used to simulate physically relevant problems such as embedded material interfaces in linear elasticity as well as brittle and ductile fracture in homogeneous and heterogeneous materials, where numerical experiments indicate the feasibility of the presented approaches.
References 1. R. Alessi, M. Ambati, T. Gerasimov, S. Vidoli, L. De Lorenzis, Comparison of phase-field models of fracture coupled with plasticity, in Advances in Computational Plasticity. (Springer, Cham, 2018), pp. 1–21 2. M. Ambati, T. Gerasimov, L. De Lorenzis, Phase-field modeling of ductile fracture. Comput. Mech. 55(5), 1017–1040 (2015) 3. L. Beirão da Veiga, A. Buffa, G. Sangalli, R. Vázquez, Analysis-suitable T-splines of arbitrary degree: definition, linear independence and approximation properties. Math. Mod. Meth. Appl. S. 23(11), 1979–2003 (2013) 4. L. Beirão da Veiga, A. Buffa, G. Sangalli, R. Vázquez, Mathematical analysis of variational isogeometric methods. Acta Numer. 23, 157–287 (2014) 5. M.J. Borden, T.J.R. Hughes, C.M. Landis, C.V. Verhoosel, A higher-order phase-field model for brittle fracture: formulation and analysis within the isogeometric analysis framework. Comput. Methods Appl. Mech. Engrg. 273, 100–118 (2014) 6. M.J. Borden, C.V. Verhoosel, M.A. Scott, T.J.R. Hughes, C.M. Landis, A phase-field description of dynamic brittle fracture. Comput. Methods Appl. Mech. Engrg. 217–220, 77–95 (2012) 7. B. Bourdin, G.A. Francfort, J.-J. Marigo, The variational approach to fracture. J. Elast. 91(1–3), 5–148 (2008) 8. A. Buffa, D. Cho, G. Sangalli, Linear independence of the T-spline blending functions associated with some particular T-meshes. Comput. Methods Appl. Mech. Engrg. 199(23), 1437–1445 (2010) 9. A. Buffa, C. Giannelli, Adaptive isogeometric methods with hierarchical splines: error estimator and convergence. Math. Mod. Meth. Appl. S. 26(01), 1–25 (2016) 10. A. Buffa, C. Giannelli, Adaptive isogeometric methods with hierarchical splines: optimality and convergence rates. Math. Mod. Meth. Appl. S. 27(14), 2781–2802 (2017) 11. A. Buffa, C. Giannelli, P. Morgenstern, D. Peterseim, Complexity of hierarchical refinement for a class of admissible mesh configurations. Comput. Aided Geom. D. 47, 83–92 (2016) 12. A. Caiazzo, R. Maier, D. Peterseim. Reconstruction of quasi-local numerical effective models from low-resolution measurements. WIAS Preprint, No. 2577 (2019) 13. C. Carstensen, M. Feischl, M. Page, D. Praetorius, Axioms of adaptivity. Comput. Math. Appl. 67(6), 1195–1253 (2014) 14. Y. Chen, J. Lee, A. Eskandarian, Meshless Methods in Solid Mechanics (Springer, Heidelberg, 2006) 15. J.B. Collins, H. Levine, Diffuse interface model of diffusion-limited crystal growth. Phys. Rev. B 31, 6119–6122 (1985) 16. J.A. Cottrell, T.J.R. Hughes, Y. Bazilevs, Isogeometric Analysis: Toward Integration of CAD and FEA (Wiley, Chichester, 2009) 17. J.A. Cottrell, T.J.R. Hughes, A. Reali, Studies of refinement and continuity in isogeometric structural analysis. Comput. Methods Appl. Mech. Engrg. 196(41–44), 4160–4183 (2007) 18. J.A. Cottrell, A. Reali, Y. Bazilevs, T.J.R. Hughes, Isogeometric analysis of structural vibrations. Comput. Methods Appl. Mech. Engrg. 195(41–43), 5257–5296 (2006) 19. L. Dedè, C. Jäggli, A. Quarteroni, Isogeometric numerical dispersion analysis for twodimensional elastic wave propagation. Comput. Methods Appl. Mech. Engrg. 284, 320–348 (2015) 20. T. Dokken, T. Lyche, K.F. Pettersen, Polynomial splines over locally refined box-partitions. Comput. Aided Geom. D. 30, 331–356 (2013) 21. M.R. Dörfel, B. Jüttler, B. Simeon, Adaptive isogeometric analysis by local h-refinement with T-splines. Comput. Methods Appl. Mech. Engrg. 199(5–8), 264–275 (2010) 22. M. Elhaddad, N. Zander, T. Bog, L. Kudela, S. Kollmannsberger, J. Kirschke, T. Baum, M. Ruess, E. 
Rank, Multi-level hp-finite cell method for embedded interface problems with application in biomechanics. Int. J. Numer. Methods Biomed. Eng. 34(4), e2951 (2017)
23. E.J. Evans, M.A. Scott, X. Li, D.C. Thomas, Hierarchical T-splines: analysis-suitability, Bézier extraction, and application as an adaptive basis for isogeometric analysis. Comput. Methods Appl. Mech. Engrg. 284, 1–20 (2015) 24. D.R. Forsey, R.H. Bartels, Hierarchical B-spline refinement. SIGGRAPH. Comput. Graph. 22(4), 205–212 (1988) 25. G.A. Francfort, J.-J. Marigo, Revisiting brittle fracture as an energy minimization problem. J. Mech. Phys. Solids 46, 1319–1342 (1998) 26. T.-P. Fries, T. Belytschko, The extended/generalized finite element method: an overview of the method and its applications. Int. J. Numer. Methods Eng. 84(3), 253–304 (2010) 27. D. Gallistl, P. Huber, D. Peterseim, On the stability of Raleigh-Ritz method for eigenvalues. Numer. Math. 137(2), 1–13 (2017) 28. G. Gantner, D. Haberlik, D. Praetorius, Adaptive IGAFEM with optimal convergence rates: hierarchical B-splines. Math. Mod. Meth. Appl. S. 27(14), 2631–2674 (2017) 29. G. Gantner, D. Praetorius, Adaptive IGAFEM with optimal convergence rates: T-splines. ArXiv Preprint 01311, 2019 (1910) 30. C. Giannelli, B. Jüttler, H. Speleers, THB-splines: the truncated basis for hierarchical splines. Comput. Aided Geom. D. 29(7), 485–498 (2012) 31. S. Govindjee, J. Strain, T.J. Mitchell, R.L. Taylor, Convergence of an efficient local leastsquares fitting method for bases with compact support. Comput. Methods Appl. Mech. Engrg. 213–216, 84–92 (2012) 32. A.C. Hansen-Dörr, J. Brummund, M. Kästner, Phase-field modeling of fracture in heterogeneous materials – jump conditions, convergence and crack propagation, in Archive of Applied Mechanics - Special Issue on the 10th German-Greek-Polish Symposium on Recent Advances in Mechanics, submitted 33. A.C. Hansen-Dörr, R. de Borst, P. Hennig, M. Kästner, Phase-field modelling of interface failure in brittle materials. Comput. Methods Appl. Mech. Engrg. 346, 25–42 (2019) 34. P. Hennig, M. Ambati, L. De Lorenzis, M. Kästner, Projection and transfer operators in adaptive isogeometric analysis with hierarchical b-splines. Comput. Methods Appl. Mech. Engrg. 334, 313–336 (2018) 35. P. Hennig, M. Kästner, P. Morgenstern, D. Peterseim, Adaptive mesh refinement strategies in isogeometric analysis - a computational comparison. Comput. Methods Appl. Mech. Engrg. 316, 424–448 (2017) 36. P. Hennig, R. Maier, D. Peterseim, D. Schillinger, B. Verfürth, M. Kästner, A diffuse modeling approach for embedded interfaces in linear elasticity. GAMM-Mitteilungen (2019). online first 37. P. Hennig, S. Müller, M. Kästner, Bézier extraction and adaptive refinement of truncated hierarchical NURBS. Comput. Methods Appl. Mech. Engrg. 305, 316–339 (2016) 38. P. Henning, D. Peterseim, Oversampling for the multiscale finite element method. Multiscale Model. Simul. 11(4), 1149–1175 (2013) 39. T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs, Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Comput. Methods Appl. Mech. Engrg. 194(39–41), 4135–4195 (2005) 40. T.J.R. Hughes, G.R. Feijóo, L. Mazzei, J.-B. Quincy, The variational multiscale method - a paradigm for computational mechanics. Comput. Methods Appl. Mech. Engrg. 166(1–2), 3–24 (1998) 41. T.J.R. Hughes, A. Reali, G. Sangalli, Duality and unified analysis of discrete approximations in structural dynamics and wave propagation: comparison of p-method finite elements with k-method NURBS. Comput. Methods Appl. Mech. Engrg. 197(49–50), 4104–4124 (2008) 42. W. Jiang, J.E. 
Dolbow, Adaptive refinement of hierarchical B-spline finite elements with an efficient data transfer algorithm. Int. J. Numer. Meth. Engng 102, 233–256 (2015) 43. K.A. Johannessen, F. Remonato, T. Kvamsdal, On the similarities and differences between classical hierarchical, truncated hierarchical and LR B-splines. Comput. Methods Appl. Mech. Engrg. 291, 64–101 (2015) 44. M. Joulaian, A. Düster, Local enrichment of the finite cell method for problems with material interfaces. Comput. Mech. 52(4), 741–762 (2013)
45. P. Kagan, A. Fischer, P.Z. Bar-Yoseph, Mechanically based models: adaptive refinement for B-spline finite element. Int. J. Numer. Meth. Engng. 57(8), 1145–1175 (2003) 46. M. Kästner, P. Hennig, T. Linse, V. Ulbricht, Phase-field modelling of damage and fracture – convergence and local mesh refinement, in Advanced Methods of Continuum Mechanics for Materials and Structures, pp 307–324 (Springer, Singapore, 2016) 47. J. Kiendl, K.-U. Bletzinger, J. Linhard, R. Wüchner, Isogeometric shell analysis with kirchhofflove elements. Comput. Methods Appl. Mech. Engrg. 198(49–52), 3902–3914 (2009) 48. C. Kuhn, R. Müller, A continuum phase field model for fracture. Eng. Fract. Mech. 77(18), 3625–3634 (2010) 49. M. Kumar, T. Kvamsdal, K.A. Johannessen, Superconvergent patch recovery and a posteriori error estimation technique in adaptive isogeometric analysis. Comput. Methods Appl. Mech. Engrg. 316, 1086–1156 (2017) 50. X. Li, J. Zheng, T.W. Sederberg, T.J.R. Hughes, M.A. Scott, On linear independence of T-spline blending functions. Comput. Aided Geom. D. 29(1), 63–76 (2012) 51. R. Maier, Computational Multiscale Methods in Unstructured Heterogeneous Media. Ph.D. thesis, Universität Augsburg (2020) 52. R. Maier, D. Peterseim, Explicit computational wave propagation in micro-heterogeneous media. BIT Numer. Math. 59(2), 443–462 (2019) 53. A. Målqvist, D. Peterseim, Localization of elliptic multiscale problems. Math. Comp. 83(290), 2583–2603 (2014) 54. C. Miehe, M. Hofacker, L.-M. Schänzel, F. Aldakheel, Phase field modeling of fracture in multiphysics problems. Part II. Coupled brittle-to-ductile failure criteria and crack propagation in thermo-elastic-plastic solids. Comput. Methods Appl. Mech. Engrg. 294, 486–522 (2015) 55. C. Miehe, M. Hofacker, F. Welschinger, A phase field model for rate-independent crack propagation: robust algorithmic implementation based on operator splits. Comput. Methods Appl. Mech. Engrg. 199(45–48), 2765–2778 (2010) 56. C. Miehe, F. Welschinger, M. Hofacker, Thermodynamically consistent phase-field models of fracture: variational principles and multi-field FE implementations. Int. J. Numer. Meth. Engng. 83(10), 1273–1311 (2010) 57. P. Morgenstern, Globally structured three-dimensional analysis-suitable T-splines: definition, linear independence and m-graded local refinement. SIAM J. Numer. Anal. 54(4), 2163–2186 (2016) 58. P. Morgenstern, Mesh Refinement Strategies for the Adaptive Isogeometric Method. Ph.D. thesis, Universität Bonn (2017) 59. P. Morgenstern, D. Peterseim, Analysis-suitable adaptive T-mesh refinement with linear complexity. Comput. Aided Geom. D. 34, 50–66 (2015) 60. J. Mosler, O. Shchyglo, H. Montazer Hojjat, A novel homogenization method for phase field approaches based on partial rank-one relaxation. J. Mech. Phys. Solids 68, 251 – 266 (2014) 61. H. Owhadi, Bayesian numerical homogenization. Multiscale Model. Simul. 13(3), 812–828 (2015) 62. H. Owhadi, Multigrid with rough coefficients and multiresolution operator decomposition from hierarchical information games. SIAM Rev. 59(1), 99–149 (2017) 63. H. Owhadi, L. Zhang, L. Berlyand, Polyharmonic homogenization, rough polyharmonic splines and sparse super-localization. ESAIM Math. Model. Numer. Anal. 48(2), 517–552 (2014) 64. J. Parvizian, A. Düster, E. Rank, Finite cell method. Comput. Mech. 41(1), 121–133 (2007) 65. D. 
Peterseim, Variational multiscale stabilization and the exponential decay of fine-scale correctors, in Building Bridges: Connections and Challenges in Modern Approaches to Numerical Partial Differential Equations, Lecture Notes in Computational Science and Engineering, vol. 114 (Springer, Cham, 2016), pp. 343–369 66. D. Peterseim, M. Schedensack, Relaxing the CFL condition for the wave equation on adaptive meshes. J. Sci. Comput. 72(3), 1196–1213 (2017) 67. D. Peterseim, R. Scheichl, Robust numerical upscaling of elliptic multiscale problems at high contrast. Comput. Methods Appl. Math. 16(4), 579–603 (2016)
282
P. Hennig et al.
68. G. Sangalli, T. Takacs, R. Vázquez, Unstructured spline spaces for isogeometric analysis based on spline manifolds. Comput. Aided Geom. D. 47, 61–82 (2016) 69. D. Schillinger, L. Dedè, M.A. Scott, J.A. Evans, M.J. Borden, E. Rank, T.J.R. Hughes, An isogeometric design-through-analysis methodology based on adaptive hierarchical refinement of NURBS, immersed boundary methods, and T-spline CAD surfaces. Comput. Methods Appl. Mech. Engrg. 249, 116–150 (2012) 70. D. Schillinger, E. Rank, An unfitted hp-adaptive finite element method based on hierarchical B-splines for interface problems of complex geometry. Comput. Methods Appl. Mech. Engrg. 200(47), 3358–3380 (2011) 71. D. Schneider, O. Tschukin, A. Choudhury, M. Selzer, T. Böhlke, B. Nestler, Phase-field elasticity model based on mechanical jump conditions. Comput. Mech. 55(5), 887–901 (2015) 72. M.A. Scott, X. Li, T.W. Sederberg, T.J.R. Hughes, Local refinement of analysis-suitable Tsplines. Comput. Methods Appl. Mech. Engrg. 213–216, 206–222 (2012) 73. M.A. Scott, D.C. Thomas, E.J. Evans, Isogeometric spline forests. Comput. Methods Appl. Mech. Engrg. 269, 222–264 (2014) 74. T.W. Sederberg, D.L. Cardon, G.T. Finnigan, N.S. North, J. Zheng, T. Lyche, T-spline simplification and local refinement. ACM Trans. Graph. 23(3), 276–283 (2004) 75. T.W. Sederberg, J. Zheng, A. Bakenov, A. Nasri, T-splines and T-NURCCs. ACM Trans. Graph. 22(3), 477–484 (2003) 76. N. Sukumar, D.L. Chopp, N. Moës, T. Belytschko, Modeling holes and inclusions by level sets in the extended finite-element method. Comput. Methods Appl. Mech. Engrg. 190(46–47), 6183–6200 (2001) 77. D.C. Thomas, M.A. Scott, J.A. Evans, K. Tew, E.J. Evans, Bezier projection: a unified approach for local projection and quadrature-free refinement and coarsening of NURBS and T-splines with particular application to isogeometric design and analysis. Comput. Methods Appl. Mech. Engrg. 284, 55–105 (2015) 78. A.-V. Vuong, C. Giannelli, B. Jüttler, B. Simeon, A hierarchical approach to adaptive local refinement in isogeometric analysis. Comput. Methods Appl. Mech. Engrg. 200(49–52), 3554– 3567 (2011) 79. W. Wang, Y. Zhang, M. Scott, T.J.R. Hughes, Converting an unstructured quadrilateral mesh to a standard T-spline surface. Comput. Mech. 48(4), 477–498 (2011) 80. O. Wodo, B. Ganapathysubramanian, Computationally efficient solution to the Cahn-Hilliard equation: adaptive implicit time schemes, mesh sensitivity analysis and the 3D isoperimetric problem. J. Comput. Phys. 230(15), 6037–6060 (2011) 81. Z. Yosibash, Singularities in Elliptic Boundary Value Problems and Elasticity and Their Connection with Failure Initiation (Springer, New York, 2011) 82. H. Yserentant, On the multi-level splitting of finite element spaces. Numer. Math. 49(4), 379– 412 (1986) 83. O.C. Zienkiewicz, J.Z. Zhu, The superconvergent patch recovery and a posteriori error estimates. Part 1: the recovery technique. Int. J. Numer. Methods Eng. 33(7), 1331–1364 (1992)
Phase Field Modeling of Brittle and Ductile Fracture Charlotte Kuhn, Timo Noll, Darius Olesch, and Ralf Müller
Abstract This section describes a phase field model for fracture. For the brittle version of the model the discretisation with finite elements is discussed. Higher order elements and elements based on exponential shape functions, that capture the one dimensional solution behavior, are addressed. For the exponential shape functions special attention is given to the quadrature rule, which plays an important role for the efficiency and accuracy. Furthermore, an adaptive strategy that combines standard bi-linear and exponential elements with higher accuracy is proposed. To extend the physical aspects of the fracture phase field model the existing model is extended to ductile fracture introducing plastic deformation. Depending on the hardening behavior different fracture modes are obtained and discussed.
1 Phase Field Model of Brittle Fracture The starting point of the present investigation is a phase field model of brittle fracture, which goes back to the work of [1]. The variational approach presented in there was setup for a numerical treatment in [2], by introducing a spatial regularisation of the sharp crack interfaces, i.e. the jump discontinuities in the displacement field. Mathematically the energetic equivalency of the regularised and sharp interface model can be proven by -convergence analysis. Conceptually there is a strong relation between C. Kuhn Fakultät 07, Universität Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany e-mail: [email protected] T. Noll · D. Olesch · R. Müller (B) Lehrstuhl für Technische Mechanik, Technische Universität Kaiserslautern, Postfach 3049, 67653 Kaiserslautern, Germany e-mail: [email protected] T. Noll e-mail: [email protected] D. Olesch e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_11
283
284
C. Kuhn et al.
Fig. 1 Sketch of a sharp interface model of a crack, and b phase field representation of a crack
phase field models of fracture to the energetic approach of fracture, as proposed in the seminal works of Griffith, see for example [3]. The intention of the following section is to present a self-contained framework of brittle fracture which is then used for the development of exponential shape functions and for an extension of the model towards ductile fracture. Being an energetic approach the central ingredient in a phase field of fracture is the potential energy or functional F of the system, which depends on the displacement field u and the fracture field s. Note that s = 1 represents the virgin material, while s = 0 is the broken state. In contrast to sharp interface models the crack is represented in a continuous way in a phase field model, see Fig. 1. 1 2 2 g(s)W (ε) + G c dV . (1) F [u, s] = (1 − s) + |∇s| 4
F(ε, s, ∇s) = F e (ε, s) + F s (s, ∇s) The parameters G c and represent the fracture energy, which can be related to the fracture toughness by classical fracture mechanical considerations [4], and a length scale for the diffuse crack width. In the above equation g(s) is a degradation function, which models the reduction of the elastic energy or failure in the broken state. Frequently the following degradation function is employed g(s) = s 2 + η ,
(2)
where the parameter η represents a residual stiffness. If (2) is used the fracture energy G c and the length introduce a 1D fracture stress defined by 3 σc = 16
3E G c , 2
where E is the Young’s modulus of the material, see [5, 6].
(3)
Phase Field Modeling of Brittle and Ductile Fracture
285
For a linear elastic isotropic material the undegraded strain energy W (ε) is given by W (ε) =
λ (tr ε)2 + με : ε 2
(4)
with the Lamé constants λ, μ and the linearised strain tensor ε defined by ε=
1 ∇u + (∇u)T . 2
(5)
From the functional (1) the stress can be computed via σ =
∂W
∂F = s2 + η = s 2 + η (λ trε 1 + 2με) , ∂ε ∂ε
(6)
which demonstrates the meaning of η, when s → 0. In a static setting neglecting volume forces, the stress σ has to satisfy the equilibrium equation, given by divσ = 0 .
(7)
The fracture field s follows a time dependent Ginzburg-Landau equation, which can be understood as a non-local evolution equation. In order to obtain this evolution equation a mobility M is introduced and s˙ = −M
δF , δs
where the variational derivative of F is denoted by
(8) δF δs
and is given by
δF ∂F ∂F 1−s = − div = g (s)W (ε) + G c 2 s − . δs ∂s ∂∇s 2
(9)
The close relation to the Euler-Lagrange equations of calculus of variations is noted. In order to approximate a quasi-static crack propagation the mobility M in (8) has to be chosen sufficiently high compared to other characteristic time scales of the boundary value problem, such as time to apply a load. The equilibrium condition (7) and the evolution equation (8) can be discretised by standard schemes, such as bi-linear 2D finite elements (FE) and an implicit time integration.
286
C. Kuhn et al.
2 Exponential Shape Functions 2.1 Quadratic Shape Functions The accuracy of fracture phase field models relies on a good approximation of the surface energy. In the transition zones steep gradients occur that require a high numerical resolution. Especially the phase field s demands for a better resolution in these transition regions. The main approaches in FE methods to meet the requirements of a sufficiently fine resolution are h-, p- and hp-refinement strategies. In order to avoid a remeshing, higher order shape functions will be tested for the approximation of the phase field. As an example for Lagrange shape functions of a higher order, the linear shape functions will be replaced by quadratic shape functions. The performance of 1d and 2d quadratic shape functions is evaluated in this section. As a first test, the stationary crack field of an unloaded bar with crack at x = 0 is computed, see Fig. 2. Thereby a problem arises due to the fact that the midside and corner nodes of the quadratic shape functions approximate the crack differently. While the corner nodes of the quadratic elements behave similar to the linear elements and approximate a sharp transition from the positive to negative crack surface, the midside node allows only for a continuous transition. However, cracks can be seen as Dirichlet boundaries in a fracture phase field model. For further evaluation the surface energy is observed in a 2d case. Like in the 1d example, no mechanical loads are applied in this test, see Fig. 3. The results show that the surface energy of cracks approximated on midside and corner nodes differ. This renders quadratic shape functions unsuitable for the approximation of an arbitrary fracture field.
2.2 Exponential Shape Functions An alternative to higher order Lagrange shape functions with midside nodes is presented in [7]. Exponential shape functions are introduced into the interpolation of the the phase field variable. The choice is motivated by the analytical
Fig. 2 1d crack tip by linear and quadratic (top: corner node, bottom: midside node) approximation
Phase Field Modeling of Brittle and Ductile Fracture
287
Fig. 3 Comparison of the surface energy of a stationary crack for linear (blue) and quadratic approximation of a stationary crack
Fig. 4 Analytical 1d solution of a stationary crack
solution for a fractured 1d bar, see Fig. 4. Figure 5 illustrates the 1d linear (a) and exponential (b) shape functions. They are defined as functions of the natural coordinate ξ ∈ [−1, 1]. The shape functions of a two noded 1d element read 1 (1 − ξ ), 2 1 N2lin (ξ ) = (1 + ξ ) 2
exp (−δ(1 + ξ )/4) − 1 , exp (−δ/2) − 1 exp (−δ(1 + ξ )/4) − 1 exp N2 (ξ, δ) = . exp (−δ/2) − 1 exp
N1lin (ξ ) =
N1 (ξ, δ) = 1 − and
By the parametrisation of the exponential shape function with δ = h/ (h: element size), the typical 1d solution is captured accurately. For the verification of these spe-
Fig. 5 a Linear and b exponential shape functions (δ = 10) for an 1d element with two nodes
288
C. Kuhn et al.
Fig. 6 Fracture field s for a 1D body with a stationary crack at x = 0
cial shape functions, they are tested in an approximation of the analytical 1d solution of a stationary crack displayed in Fig. 6. The analytical solution is derived from the energy density contribution F s of (1) with the boundary conditions s (±1) = 0, s(0) = 0 and G c = 1 in the domain x ∈ [−1, 1]. The approximation of a stationary crack is independent of the displacement field, thus it is disregarded. The results show, that the exponential elements are superior to the linear elements in approximating diffuse 1d cracks, see Fig. 6. The graph displays the numerical solutions for a stationary one dimensional phase field with a crack at x = 0, defined by the Dirichlet boundary condition s(0) = 0 for different choices of shape functions. The displacement field is zero and can therefore be neglected. In order to obtain the exponential shape functions for the analysis of two-dimensional problems, the 1d shape functions can simply be composed by tensor products as described in [8] for Lagrange elements. The linear combination of these 1d shape functions are composed for different spatial coordinates. It is necessary to consider the allocation of 1D nodal shape functions in the 2D or 3D with respect to the adjoining edges. In this regard, the exponential shape functions need particular attention, because the element edge lengths in the shape functions need to be parameterized, see e.g. [7]. Although the ability of the exponential shape functions to approximate narrow transition zones even with low mesh densities are favourable, there are some peculiarities like the lack of symmetry and the varying accuracy in the numerical integration. These two points will be treated in the following. The major problem, the lack of symmetry requires a proper orientation of the elements. Without a re-orientation cracks would be approximated wrongly unsymmetrical, see Fig. 7. A proper orientation for a simple 1d case with two elements is indicated in Fig. 6. The orientation of the elements in the spatial domain x < 0 is illustrated by the orientation of the triangles (normal orientation: , reversed: orientation).
Phase Field Modeling of Brittle and Ductile Fracture
289
Fig. 7 Approximation with normal shape functions (top), and reoriented shape functions (bottom)
2.3 3d Exponential Shape Functions The extension of 1d exponential shape functions to a 2d setting is presented in [7]. A similar approach for the 3d implementation is straightforward. Therefore the 1d exponential shape functions, belonging to adjacent edges of a respective node, are multiplied. This yields exp
exp
exp
exp
exp
exp
exp
exp
exp
exp
exp
exp
exp
exp
N1 (ξ, η, ζ, δi ) = N1 (ξ, δ1 ) · N1 (η, δ4 ) · N1 (ζ, δ5 ), exp exp exp exp N2 (ξ, η, ζ, δi ) = N2 (ξ, δ1 ) · N1 (η, δ2 ) · N1 (ζ, δ6 ), N3 (ξ, η, ζ, δi ) = N2 (ξ, δ3 ) · N2 (η, δ2 ) · N1 (ζ, δ7 ), exp exp exp exp N4 (ξ, η, ζ, δi ) = N1 (ξ, δ3 ) · N2 (η, δ4 ) · N1 (ζ, δ8 ), exp exp exp exp N5 (ξ, η, ζ, δi ) = N1 (ξ, δ9 ) · N1 (η, δ12 ) · N2 (ζ, δ5 ), N6 (ξ, η, ζ, δi ) = N2 (ξ, δ9 ) · N1 (η, δ10 ) · N2 (ζ, δ6 ), exp exp exp exp N7 (ξ, η, ζ, δi ) = N2 (ξ, δ11 ) · N2 (η, δ10 ) · N2 (ζ, δ7 ), exp
exp
N8 (ξ, η, ζ, δi ) = N1 (ξ, δ11 ) · N2 (η, δ12 ) · N2 (ζ, δ8 ),
Fig. 8 Node/edge numbering of 3D element in global (left) and natural coordinates (right)
290
C. Kuhn et al.
Fig. 9 Shape functions of a 3d exponential element (orientation in local axis direction) with δi = 20
where the element nodes and edges are numbered according to the sketch of the element in the natural coordinates (ξ , η, ζ ) in Fig. 8. Each shape function depends on the ratio δi = h i / of all three adjacent element exp edges and possess the Kronecker delta property, i.e. N I (ξ, η, ζ, δi ) = δ I J . In order to ensure partition of unity the shape functions are adjusted by the correction term R(ξ, η, ζ, δi ) =
8
exp N I (ξ, η, ζ, δi )
−1
I =1
as for the 2d case. This term is then equally distributed to all shape functions exp
exp,old
N I (ξ, η, ζ, δi ) = N I
(ξ, η, ζ, δi ) −
1 R(ξ, η, ζ, δi ). 8
The form of the 3d exponential shape functions resembles the 2d shape functions on each sides of the volume element, see Fig. 9. The orientation issue of the 3d
Fig. 10 3d crack in the x y-plane
Phase Field Modeling of Brittle and Ductile Fracture
291
exponential shape function is more complex compared to 2d element, because of the 8 additional edges. But this problem hasn’t been tackled yet, so that all parallel edges in the same natural coordinate direction share the same orientation (Fig. 10).
2.4 Numerical Test Due to the fact that only global orientation of the exponential shape functions can be chosen, only cracks in one plane are possible. In the first test no mechanical loads are applied. The initial phase field contains a crack with length L. The considered volume [−L , L] × [−L , L] × [−L/10, L/10] is discretised with a regular mesh of brick elements, see Fig. 11. The orientation of the elements are only changed in the domain of the negative crack surface. There the vertical orientation is switched. The plots in Fig. 11 show the surface energy and its relative error of the linear and exponential approximation of the stationary crack for different number of elements. Like for the 2d case the exponential shape functions are superior, even with a too low number of quadrature points. The performance of the 3d exponential shape functions is tested in a half space model of fracture mode I, see Fig. 12. The fracture field contains a crack in the symmetry plane. The discretisation consists of 1 element in depth direction 150 elements in horizontal direction and a varying number of n elements in direction orthogonal to the initial crack, and 5 × 5 Gauß points are used in the numerical integration. Like in the 2d peel off test, the presumed crack path along the bottom plane of the model, an additional layer of smaller elements is introduced, see [7] (Fig. 13).
Fig. 11 Surface energy of a stationary crack approximate by 3d linear and exponential shape functions
292
C. Kuhn et al.
Fig. 12 Comparison of the surface energy for linear (blue) and quadratic approximation of a stationary crack
Fig. 13 Elastic energy obtained with exponential/linear shape functions (left/middle) for a 3d
2.5 Adaptive Numerical Integration The approximation by exponential elements can lead to strongly varying values of the shape functions within one element. In the FE method, it is essential to perform an efficient and sufficiently precise quadrature of the residuals and tangent matrices. Due to the high differences of the fracture field within elements an adaptive numerical integration scheme has been developed and tested. The illustrative 2D boundary value problems are discretised with four-node quadrilateral elements, see Fig. 14. For the calculation in the natural coordinate system ξ − η, it is necessary to compute the determinant of the Jacobian matrix ⎡ ∂x ∂x ⎤ ⎢ ∂η ⎥ J = ⎣ ∂ξ ∂y ∂y ⎦ , ∂ξ ∂η which converts the global x − y coordinate system to the natural element ξ − η coordinate system. Since linlinear shape functions are used for the approximation of N I x I , the Jacobi matrix does not depend on the choice of the geometry x h = the approximation of the s-field.
Phase Field Modeling of Brittle and Ductile Fracture
293
Fig. 14 2D isoparametric element
The general equation for the numerical integration of a 2D integral in the parent space can be describe by
e
f (x, y) dx dy =
−1
f (ξ, η) det(J ) dξ dη ≈
n GP
f (ξ p , η p ) det(J−1 ) w p . (10) p
p=1
The square () represents unit square (ξ ∈ [−1, 1] and η ∈ [−1, 1]) for the natural configuration. The function f represents any arbitrary functions of the coordinates ξ, η. Thus, the integral is approximated by a sum, and f needs only to be evaluated in the quadrature points. The function values are multiplied by weights w p and then summed up for all n GP Gauß points. Like mentioned before, the transformation of the . integration in different coordinate systems requires the inverse Jacobian matrix J−1 p The quality of the numerical integration depends on the continuity of the function and the spatial position and number of the quadrature points. Especially for higher order shape functions, the integration error by an inappropriate quadrature scheme can become crucial. This needs to be counteracted by a sufficient number of quadrature points and an adequate quadrature scheme.
2.5.1
Adaptive Number of Gauß Points
In the standard FE method, the numerical integration is performed by the GaußLegendre quadrature. A scalable straightforward approach for an improvement of the accuracy is to increase the number of quadrature points. An ad-hoc criterion for determining the value of n GP is the shape function parameter δ. In general, this relation can be expressed by a function n GP = f (δmax ).
(11)
294
C. Kuhn et al.
Fig. 15 n GP -function
The construction of the n GP -function is implemented by data of a surface energy evaluation of the stationary crack in the 2d quadratic domain, see Fig. 3, where the δ values of the uniformly meshed domain become the control variable. For this purpose the error of the surface energy is evaluated. When the error reaches a fixed value, here 5%, a higher number of Gauss points is used. The computed data points of the analysis of the stationary crack with regard to δ and the error of the surface energy is used to approximate an interpolation function to compute the number of Gauss points, see Fig. 15. Besides the condition for the total number of Gauß points, a selection is applied to mark the elements, which have to be integrated more accurately. This is achieved by a straightforward adhoc condition Ne
s I < 0.5,
(12)
I =1
which multiplies all Ne nodal values of the fracture field within an element. Practical experiences showed that the value 0.5 provides satisfactory results. This condition is checked in every iteration. The default quadrature rule is a 2 × 2 Gauss-Legendre scheme. An example of the algorithm can be seen in Fig. 16.
2.5.2
Double Exponential Formula
In addition to the Gauß quadrature, a double exponential (DE) formula is tested. The properties of DE formula allows for an improved numerical integration of functions with an infinite number of derivatives, see [9]. How this applies to the exponential shape functions has been evaluated in the following. The DE formula
Phase Field Modeling of Brittle and Ductile Fracture
295
Fig. 16 Local adaptation of the number of quadrature points
Fig. 17 DE formula: for a 1D-reference domain quadrature point positions and weights
I =
1 −1
f (x)dx =
+∞ −∞
wk f (xk ),
1 π sinh(kc) and 2 1 cπ cosh(kc) 2 , wk = 1 π sinh(kh) cosh2 2 is an infinite series of integration points and tends to distribute the major amount of quadrature points towards the integration limits, see Fig. 17. Due to the infinite series, a truncation is necessary, which leads to a deviation e.g. 1.998 instead of 2 for the sum of the weights in a unit interval [−1, 1]. This problem can be corrected xk =
tanh
296
C. Kuhn et al.
Fig. 18 Elastic energy during a fracture mode I for different implementation
by a normalization. It was found that due to the non-monotonous behaviour of the error of the surface energy, it is necessary to apply at least 13 × 13 quadrature points to minimize the error and achieve a robust precision.
2.5.3
Numerical Results
The performance of an adaptive numerical integration is tested in a 2D simulation of a fracture mode I, which is introduced as the second test case in Fig. 12. This is similar to the 3d case except for the missing third dimension. A convergence study of the elastic energy E e during the crack propagation is performed. In Fig. 18 an overall comparison of the different quadrature rules is shown. Also included are the results for standard linear elements. The variants with exponential elements are categorized by quadrature method and adaptive/non-adaptive. While the the DE formula always uses 13 quadrature points, the Gauß-Legendre integration varies with element size from 2– 10 n GP per direction. The adaptive DE formula uses for elements with constant phase field also a Gaußintegration with 2 × 2 GP. In this setting the solution of the elements integrated with the DE formula already converge for a extremely coarse mesh. The improvement by an adaptive number of Gaußpoints reduce the required mesh density by 50%. The difference between the integration methods can be explained partially by the lower amount of quadrature points for the adaptive routines. However, even with the same number the DE formula is more efficient.
Phase Field Modeling of Brittle and Ductile Fracture
297
Fig. 19 Scheme of adaptive shape function with blending elements
2.6 Blending Elements The application of the exponential shape function serves the reduction of the computational effort. This can be complemented by a precise use of the exponential. So, that the exponential approximation of the fracture field is only used in areas, where cracks are present or likely to propagate. For regions far away from the crack prone zone the standard bi-linear elements meet the requirements. In addition to the efficiency increase the linear shape functions are also beneficial for the orientation of the exponential shape functions. Like already mentioned the exponential shape functions need a proper orientation depending on the gradient of the phase field. In areas with a constant phase field the choice of the orientation is rather difficult and can become a stability problem for an adaptive orientation algorithm. But this is prevented with the discretisation of those areas by symmetric bi-linear shape function In order to properly connect the linear and exponential elements, blending elements are developed. This approach is similar to a local p-refinement but without additional nodes, see Fig. 19. To as flexible as possible, the proposed blending elements are capable to have any combination of linear and exponential properties. This means that every element edge can become a linear, normal exponential or mirrored exponential shape function, i.e. for a quadrilateral element 81 combinations exist. But due to restriction in the orientation some cases are impossible. However, most of the 2d blending elements are inversion or rotation version, and therefore, are very similar. So in order to not be repetitive, only the two most important cases are presented. The first one is an element with exponential properties in one direction and linear in orthogonal to that
298
C. Kuhn et al.
Fig. 20 Shape function combinations for crack surface element with δi = 10
Fig. 21 Shape function combinations for crack surface element with δi = 10
exp
exp
N1blend (ξ, η, δi ) = N1lin (ξ ) · N1 (η, δ4 ),
N3blend (ξ, η, δi ) = N2lin (ξ ), ·N2 (η, δ2 ),
exp N2blend (ξ, η, δi ) = N2lin (ξ ) · N1 (η, δ2 ),
N4blend (ξ, η, δi ) = N1lin (ξ ), ·N2 (η, δ4 ).
exp
This element can be used for vertical crack surfaces, see Fig. 20. The second case is the blending element with exponential shape functions in one corner with the same orientation and linear shape functions on the opposite edges. The nodal shape functions are exp
exp
N1blend (ξ, η, δi ) = N1 (ξ, δ1 ) · N1 (η, δ4 ), N2blend (ξ, η, δi )
=
exp N2 (ξ, δ1 )
·
N1lin (η),
N3blend (ξ, η) blend N4 (ξ, η, δi )
= N2lin (ξ ) · N2lin (η), exp
= N1lin (ξ ) · N2 (η, δ4 ).
This element is useful for crack tips, where the crack tip is on the local node 1, see Fig. 21. Like for the fully exponential element, for arbitrary blending elements the
Phase Field Modeling of Brittle and Ductile Fracture
299
Fig. 22 Contour plot of the residual term R over the global element geometry
partition of unity is not fulfilled. In order to ensure the partition of unity and without affecting the continuity, every variant of the blending element shape functions is modified. The same approach stems from the regular 2d and 3d exponential elements. The residual term 4 N Iblend (ξ, η, δi ) + 1. R(ξ, η, δi ) = − I =1
of the partition of unity will be used as a correction term. It is equally distributed over to all shape functions N Iblend (ξ, η, δi ) = N Iblend,old (ξ, η, δi ) +
1 R(ξ, η, δi ). 4
For the two cases the residual term R is analyzed, see Fig. 22a, b. The geometry of the element is approximated by linear shape functions and is the same for both cases. While the combination of the blending elements for a crack tip shows a residual R similar to that of a fully exponential element, see [7], the blending elements with exponential properties only in one direction fulfil the partition of unity without correction. It is also worth mentioning, that the residual term of the crack tip element has a ten times higher residual then the exponential element in this form.
2.7 Adaptive Orientation 2d The key for a wider application of the exponential shape functions is an adaptive orientation. As already mentioned in the introduction, the exponential shape functions
300
C. Kuhn et al.
Fig. 23 Regulating variable for the adaptive shape function choice
are not symmetric and require a proper orientation for the approximation of symmetric crack surfaces. Based on the blending elements an adaptive routine, which not only re-orients, but also chooses the shape function for each edge individually, is developed. Neighbouring elements must be oriented accordingly in order to ensure continuity across the element boundaries. The solution strategy proposes an adaptive choice based on the nodal values of the fracture field of the last converged solution of the Newton scheme. The process of the adaptive routine has two steps. First it computes the marker η=
s s − E exp | |E lin
E s
,
like in an adaptive h-refinement strategy [10]. The value η is the relative difference of the surface energy of the exponential and linear approximation of the element edge, see Fig. 23a. The reference value E s is the difference between the approximation of the surface energy with linear and exponential shape function for two nodes with a phase field difference of 1 s s − E exp | (s1 = 0, s2 = 1).
E s = |E lin
(13)
If the computed η is lower then a specified limit value, the edge is approximated by a linear shape function. If it is larger, then the gradient is examined for the choice of −), normal exponential N exp ( ) its shape function, which can become a linear N lin (− exp or reversed exponential N ( ) shape function, see Fig. 23b. Additionally, the last time step will be computed again if the number of switches from linear to exponential is higher then vice versa. While the routine is robust for stationary cracks, it shows instabilities during crack propagation.
Phase Field Modeling of Brittle and Ductile Fracture
301
Fig. 24 Comparison of the surface energy computed with linear (blue), exponential (red) and adaptive (green) shape functions
Fig. 25 Fracture field s, edge shape functions N lin/exp and orientation of the exponential shape functions for four elements per direction
To test the adaptive algorithm a case without mechanical loads and an initial crack (s = 0) is analysed, see Fig. 2. The domain is a square with an initial crack of length L and linear shape functions. Within an iteration loop a set of shape functions for each edge is established, see Fig. 25. As a validation of the functionality, the surface energy is observed. The difference between the domain discretized by fully exponential elements is small compared to adaptive approximated elements, see Fig. 24, although the exponential approximation is only present at the crack surface and for the edges at the crack surface. It should also be mentioned that the adaptive algorithm modifies the number of quadrature points. So fully linear elements, where
302
C. Kuhn et al.
the phase field is constant, are numerically integrated by 2 × 2 Gauß points to improve the performance.
3 Phase Field Model for Ductile Fracture In this section the extension of the brittle phase field model, see also [11] and [12– 16], towards ductile fracture is presented by consideration of the 1D homogeneous solution and subsequent analysis of a size criterion from the literature, which investigates the so called dog bone shape of the plastic zone at a crack tip. In the literature this scenario is used to determine whether a specimen is suitable for determination of the fracture toughness K I C .
3.1 Phase Field Modeling of Ductile Fracture A desirable way to derive a phase field model for ductile fracture would be to proceed on an analogous route as in the case of brittle fracture, where the variational formulation, embedding the elastic energy density, is regularized. The total strains are splitted additively into an elastic and plastic part by ε = εe + εp .
(14)
However, a variational formulation for ductile fracture based on an elastic-plastic energy functional F˜ ep [εe ; ε p , α] =
(W e (ε e , ε p ) + π p (α)) dV
(15)
comprising an additional plastic contribution π p along with the elastic energy density, is not available. Thus, a regularized formulation of ductile fracture cannot be derived rigorously from a variational model, but is introduced in an ad-hoc fashion based on the energy functional F ep u, s; ε p , α =
e
ψ (ε, ε p , s) + ψ f (s, ∇s) + ψ p (α, s) dV,
(16)
where the plastic contribution ψ p is added to the regularized formulation for brittle fracture (1), see [17–19]. With the decomposition of the total strain into an elastic ε e and a plastic part ε p , the elastic contribution of the energy density reads 1 ψ e (ε, ε p , s) = g(s)W e (ε, ε p ) = g(s) (ε − ε p ) : C (ε − ε p ) , 2
(17)
Phase Field Modeling of Brittle and Ductile Fracture
303
where C is the forth order elasticity tensor which can be expressed by the Lamé constants introduced in (4). The plastic contribution π p (α) =
1 H α 2 + σY α, 2
(18)
where H is the hardening modulus, σY the initial yield strain and α the internal variable accounting for the accumulated plastic strain, is consistent to an associative J2 -plasticity model with linear isotropic hardening. The elastic energy density formulation (17) is not yet capable of differentiating between the influence of tensile and compressive stresses, which might affect crack growth and the stress state near cracks very differently, see e.g. [20]. Thus, this formulation may lead to unphysical in particular for compressive loads. Hence, an ansatz by [20] is used to split the elastic energy density into tensile and compressive parts ψ e (ε, s, ε p ) = g(s)W+e (ε, ε p ) + W−e (ε, ε p )
(19)
with 1 K tr(ε − ε p )2+ + μ (e − ep ) : (e − ep ) and 2 1 W−e (ε, ε p ) = − K tr(ε − εp )2− , 2 W+e (ε, ε p ) =
(20) (21)
where K = λ + 23 μ is the bulk modulus and the expression inside the positive and negative Macaulay-brackets are defined by
x if x ≥ 0
x+ = 0, else
x− =
x if x < 0 0, else.
(22)
The fact that plastic strain is confined to the deviatoric part of the strain is expressed in (20), where e represents the deviatoric part of ε and consequently, ep = ε p . The degradation function g(s) not only models the loss of stiffness in fractured material, but also provides the coupling between the plastic contribution of the energy density π p with the fracture field 1 p p 2 H α + σY α . ψ (α, s) = g(s)π (α) = g(s) (23) 2 The stress is computed within the phase field framework as We ∂ W−e ∂ψ e = g(s) + + ∂ε ∂ε ∂ε = g(s) K tr(ε − ε p )+ 1 + 2μ (e − ep ) + K tr(ε − ε p )− 1
σ =
(24) (25)
304
C. Kuhn et al.
The Ginzburg-Landau type evolution equation derived from the elastic-plastic energy density (16) becomes δψ = −M g (s) (W e + π p ) + ψ f s˙ = −M δs Gc 1 e = M 2G c s − g (s) W + (σY + H α)α + (1 − s) . 2 2
(26)
Thus, the fracture field is not only driven by elastic strains, but also by the accumulated plastic strain α. The von Mises type yield criterion states f (s, α) = s + where q=−
2 q = 0, 3
∂ψ = −g(s) (σ Y + H α) ∂α
(27)
(28)
is the driving force for the plastic deformation. As isotropic material behavior is considered, the relation between deviatoric stress and strain is s = g(s)2μ (e − ep ) .
(29)
With (29) and (28) the yield criterion can be expressed as f (s, α) = g(s) 2μ (e − e ) −g(s) p
2 (σY + H α) = 0 . 3
(30)
The evolution equations of the internal variables in the associative plasticity model are given by ε˙ P = γ
s ∂f = γ N with N = ∂σ
s
(31)
and ∂f α˙ = γ = ∂α
2 γ. 3
(32)
By choosing the same degradation function for the elastic potential W e and the plastic contribution π p in (16) it is feasible to formulate an undegraded counterpart of the yield criterion 2 ∗ ∗ ∗ f (s , α) = s − (33) (σY + H α) = 0 , 3
Phase Field Modeling of Brittle and Ductile Fracture
305
with the undegraded counterpart of the stress deviator s ∗ . The undegraded counterpart of the yield function does not depend on the fracture field and thus allows for direct application of the usual ‘radial return’-algorithm for plasticity [21].
3.2 Analysis of a 1D-Bar Problem The purpose of the following discussion of the presented ductile fracture phase field model in the context of a 1D tension problem is twofold: On the one hand, basic features are presented, which are transferred into a 3D setting later. On the other hand this consideration is useful to investigate the choice of the degradation function that plays a crucial role. The coupling of plastic deformation with the fracture field is modeled by the degradation function. With respect to the constitutive relation for the Cauchy stress (24) the degradation function models the reduction in stiffness for fractured material compared to intact material. Hence, the degradation function needs to meet several requirements. It has to be monotonically increasing from g(0) = 0 to g(1) = 1. Considering the evolution function of the fracture field (26), in which g (s) occurs as a source term, leads to the requirement that g (0) = 0 for fully broken material (s = 0). This eliminates the contributions of the elastic (W+e ) and the plastic energy (π p ) to further crack growth in completely fractured material. A parametrised cubic degradation function can be stated as
g(s) = β s 3 − s 2 + 3 s 2 − 2 s 3 + η,
(34)
where β is a dimensionless parameter ranging from 0 to 2. The latter value for β yields the quadratic degradation function, which is commonly used in many phase field approaches and was introduced in (2). The quadratic degradation function along with the general cubic degradation functions is plotted in Fig. 26. Again as in (2) the residual stiffness parameter η has to be chosen very small, i.e. η 1 and is introduced in order to avoid numerical instabilities, that might occur in completely fractured material.
3.2.1
Quadratic Degradation Function
First the quadratic degradation function is obtained by setting β = 2 is discussed, see e.g. [22, 23]. The case of a 1D bar under tensile loading with linear isotropic hardening modulus H is discussed, see Fig. 27. The resulting strain is thus given by ε(x, t) =
∂u = 2u ∗0 (t)/L := ε0 (t). ∂x
(35)
306
C. Kuhn et al.
The load is assumed to increase monotonously, i.e. ε˙ 0 > 0. Due to this loading only the ‘positive’ energy parts related to the positive Macaulay brackets contribute. Therefore, the subscript + is suppressed for a better readability in the following. In the quasi-static case (M → ∞) the Ginzburg-Landau evolution equation for a spatially homogeneous solution of the fracture field (¯s = s(x) = const) becomes g(s) (W e + π p ) − G c
1−s =0 2
(36)
whereas the elastic energy density in 1D is We =
1 E (ε − εp )2 , 2
(37)
with E being the Young’s modulus. Prior to the onset of fracture, i.e. ε0 < σEY , the quantities εp , α and π p remain zero. Once the yield stress σY is reached and γ > 0 the consistency criterion f˙ =
∂f ∂σ
σ˙ +
sign(σ )=1(tensile loading)
implies
Fig. 26 Degradation functions with several values of β
Fig. 27 Sketch of a 1D bar under tensile loading
∂f α˙ = 0 ∂α
(38)
Phase Field Modeling of Brittle and Ductile Fracture
307
0 = E ε˙ − ε˙p − H α. ˙
(39)
Taking into account that α and εp obey the same evolution equations in 1D ε˙ p = γ
∂f ∂f = γ and α˙ = −γ = γ, ∂σ ∂q
(40)
the relation ε˙p = α˙ =
E ε˙ E+H
(41)
between the rates of the plastic variables and the total strain rate is obtained. Thus, the rate of undegraded stress (∗ indicates undegraded quantities) σ˙∗ = E(˙ε − ε˙p ) =
EH ε˙ E+H
(42)
can be expressed as a function that is solely dependent on the rate of the total strain. With the relation σ˙∗ = and the condition σ ∗ (ε = can be expressed σ ∗ (ε) =
∂σ ∗ σ˙∗ EH ∂σ ∗ ε˙ ⇒ = = , ∂ε ∂ε ε˙ E+H
σY ) E
(43)
= σY the undegraded stress in the elastic-plastic region
ε
∂σ ∗ E d˜ε + σY = (H ε + σY ) . ∂ε E + H σY /E
(44)
Thus, from the relation E (ε − εp ) =
E (H ε + σY ) E+H
(45)
an explicit relation for the plastic strain εp = α =
1 (Eε − σY ) E+H
(46)
is obtained. With (37), (23) and (46) the sum of elastic and plastic energy density can be expressed as a function which solely depends on the total strain. The residual stiffness is set to η = 0 for the following considerations. The sum of the undegraded elastic energy and the plastic dissipation contribution depends on whether the yield stress is reached or not
308
C. Kuhn et al.
1 W +π = e
p
Eε02 2 1 E ε02 2
−
E E+H
ε0 −
σY 2 E
if ε0 ≤ if ε0 >
σY E σY . E
(47)
The solution for the spatially homogeneous fracture field is obtained from (36) as a function of W e + π p , i.e. s¯ =
Gc . 4 (W e + π p ) + G c
(48)
The solution s¯ is always admissible (i.e. 0 ≤ s¯ ≤ 1) since W e + π p ≥ 0 for increasing tensile loading and thus, the denominator is always larger than the numerator in (48). A remarkable observation is that, as soon the specimen is loaded, i.e. W e + π p ≥ 0, the stiffness of the material is reduced, since g(s) < 1 for s < 1. Hence, linear elastic material behavior, even in the modeled linear elastic regime, is not reproduced. Inserting (48) in the quadratic degradation function the stress becomes ⎧ 2 Gc ⎨ · Eε0 e p 4(W +π )+G c 2 σ = g(s)σ ∗ = ⎩ Gc EH · E+H ε0 + 4(W e +π p )+G c
E σ E+H Y
if ε0 ≤
σY E
if ε0 >
σY . E
(49)
For the linear elastic phase field model a maximum value of the stress exists. It has been shown, that slightly above the maximal value of the stress in the homogeneous solution, the homogeneous solution becomes unstable and a bifurcation towards the non-homogeneous fractured state is observed [20, 24, 25]. Thus, this value is regarded as the fracture stress σc . A necessary condition for a local maximum of the stress, corresponding to the fracture stress is given by
d g(s)σ ∗ = 0. dε0
(50)
If σc ≤ σY the problem is entirely linear elastic. From (50) an expression for the fracture strain εc in case of elastic-plastic failure ! εc =
E+H EH
σY2 1 Gc + 3H 6
−
σY H
(51)
is obtained under the condition ≤
3H G c . 6σY2
(52)
The condition guarantees, that the maximum value occurs in the plastic regime. Otherwise the stress decreases right from point when the yield stress is reached. The stress-strain curve resulting from (49) for H = 0.1E is depicted in Fig. 28 on
Phase Field Modeling of Brittle and Ductile Fracture 2
309 10
8 1.5 6 1 4 0.5 2
0
0 0
2
4
6
8
10
0
2
4
6
8
10
Fig. 28 Stress strain curve of the undegraded (σ ∗ ) and degraded stress (σ ) on the left and energy density plot (right) of a 1d bar under tensile loading with a quadratic degradation function
the right. Until the initial yield strain ε0 = εY = σEY is reached, the degraded elastic stress deviates slightly from the undegraded stress represented by the dotted red line. The yield stress in the undegraded case, i.e. σ ∗ (εY ) = σY , which corresponds to the input variable initial yield stress, is notably greater than the nominal yield stress of the model σY = σ (εY ). In the plastic region the difference between modeled and undegraded stress becomes even more obvious, as the modeled stress hardly exhibits hardening behavior. However, the material behavior is strongly dependent on as can be observed in Fig. 29. For a decreasing regularization parameter the undegraded
2 1.8 1.6 1.4 1.2 1 0.8 0.6 0.4 0.2 0 0
2
4
6
8
10
Fig. 29 Load displacement curves for varying regularization parameter with quadratic degradation function
310
C. Kuhn et al.
Gc stress-strain curve is approximated more and more accurately. For ≥ 3H the 6σY2 softening effect due to the degradation function dominates the hardening in the plastic regime such that the stress maximum corresponds to the yield stress σc = σY , see the cyan curve in Fig. 29. The desired stress strain relation prior to fracture as it would be observed in experiment, corresponding the path of the undegraded stress until the maximum value of the stress is reached at ε0 = εc , is not recovered in the model.
3.2.2
Effective Parameters
One way to approximate the constitutive behavior more realistically is obviously to chose the regularization parameter as small as possible. However, choosing a smaller regularization parameter requires finer finite element discretisations and, thus, leads to higher computational costs. Alternatively, effective parameters of the phase field model can be identified and related to the nominal parameters of the model. At the onset of plastic deformation, i.e. σ ∗ = σY , the solution for the fracture field is Gc σY = σ2 sy = s ε = . Y E 2 + G c
(53)
By accounting for the degradation of stiffness caused by a degraded phase field sY , the effective yield stress is defined as pf σY0
σY σY = s y2 σ ∗ ε = = = g(s y )σ ∗ ε = E E
Gc 2
σY2
+ Gc
2 σY .
(54)
Correspondingly, an effective hardening modulus H pf is sought, fulfilling the relation pf = E ep
E pf H pf , E pf + H pf
(55)
in an analogous manner to Cep in conventional von Mises plasticity, where E pf is the effective Young’s modulus and the effective elastic-plastic tangent is defined via the relation between the respective rates of stress and strain EH σY p p 1 − 8s ε˙ . σ˙ = s (ε − ε ) E (ε − ε ) + α + E+H Gc H
2
(56)
pf
E ep
With the effective Young’s modulus E pf = s 2 E,
(57)
Phase Field Modeling of Brittle and Ductile Fracture
311
2
1.5
1
0.5
0 0
2
4
6
8
10
Fig. 30 Load displacement curves of the undegraded stress in red, the phase field model with corresponding nominal plastic parameters in blue and with corresponding effective parameters in green. The vertical lines denote the local maximum of the respective curves
the effective elastic-plastic tangent at the onset of plastic deformation becomes according to (53) with α = 0 and εp = 0 pf
pf
E ep0 =
pf
E 0 H0 pf
pf
E 0 + H0
= sy
EH 2 − 8 sy σY . Gc E + H
E ep0
(58)
ζ
Hence, the effective hardening modulus at the onset of yielding is pf
H0 =
E ep0 − ζ s y E. E − E ep0 + ζ
(59)
The relations (54) and (59) determine the internal variables σY and H such that a desired material behavior is obtained. The stress-strain relation of a model using the 1. the nominal parameters 2. the effective parameters (indicated by the superscript pf ) are compared to the classical (non-phase field) elastic plastic material model in Fig. 30. It is observed using the effective parameter model agrees better with the expected classical model until a significant development of the fracture field. Noteworthy is the shift of the peak stress towards higher strains, when the effective parameters are used instead of the nominal ones.
312
3.2.3
C. Kuhn et al.
Cubic Degradation Function
Although the introduction of effective parameters in combination with the quadratic degradation function allows for a closer approximation of the classical elastic-plastic model, the non-linear behavior in the elastic regime remains still an undesirable feature. This feature can be overcome by using a cubic degradation function. If a cubic degradation function g(s) is applied instead of the quadratic, the Eqs. (35)–(47) remain valid. A particular cubic degradation function is obtained by choosing β = 0, which is studied in the next paragraphs. Dependent on whether the yield stress is already reached or not, the sum of the undegraded elastic energy and the plastic dissipation contribution is given by 1 W +π = e
p
Eε02 2 1 E ε02 2
−
E E+H
σY 2
ε0 −
E
if ε0 ≤ if ε0 >
σY E σY . E
(60)
Two possible homogeneous solutions for the fracture field can be obtained from (36) as a function of W e + π p , s1 = 1
(61)
Gc , s2 = 12 (W e + π p )
(62)
Gc Gc . For W e + π p > 12 the where s2 is only admissible (i.e. s2 ≤ 1) if W e + π p > 12 degraded solution s2 is energetically favorable compared to the unfractured state s1 . By substituting s2 = 1 in the second solution of (36), i.e. (62), the critical strain at the onset of degradation
! εs =
E+H EH
σY2 1 Gc + H 6
−
σY H
(63)
σY H σY H
(64)
is obtained. For the undegraded stress the relation ∗
σ =
Eε0 EH E+H
ε0 +
σY
E H H
if ε0 ≤ if ε0 >
holds. The local maximum of the stress, corresponding to the fracture stress is obtained by (50). Hence, an expression for the fracture strain in case of elastic-plastic failure
Phase Field Modeling of Brittle and Ductile Fracture 2
313 10
8 1.5 6 1 4 0.5 2
0
0 0
5
10
15
20
0
5
10
15
20
Fig. 31 Stress strain curve of the undegraded (σ ∗ ) and degraded stress (σ ) on the left and energy density plot (right) of a 1d bar under tensile loading with a cubic degradation function
" ⎡ ⎤ " # # 2 # 2 2 2 # #1 E + H ⎢ σ2 5 Gc 25 G c ⎥ σY 8 σY G c $ σY Y + εc = # + ⎦− $ 3 E H ⎣ H + 18 + 4 H 9 H 324 H
(65) can be calculated. The stress-strain curve is computed by following the σ (s1 )-path until εs and continuing with the σ (s2 )-path. The result is depicted in Fig. 31. Until the loading reaches the initial yield fracture load ε0 = εY = σHY the elastic stressstrain relation is linear and subsequently transitions to the less steep, however still linear, elastic-plastic regime, described by the continuous blue line. This part of the stress-strain relation is related to s = s1 , which is energetically favorable for ε0 ≤ εs . When the load ε0 = εs is reached, indicated by the vertical blue line, s2 becomes admissible as well as energetically more favorable than s1 (see Fig. 31) and the stressstrain relation is described by the continuous green curve from that point on. The maximum value of the stress is reached at ε0 = εc , denoted by vertical line in green, at which fracture can be expected due to localization of s, see [6]. Due to the existence of two homogeneous solution, including one corresponding to fully intact material (here s1 ), no artificial softening is observed prior to the admissibility of the degraded solution (s2 ). Thus, the phase field model applying a cubic degradation function constitutes a closer approximation of the phenomenologically discrete problem of fracture, than a phase field model using a quadratic degradation function.
3.3 Plane Strain Simulations The finite element simulation under plane strain conditions in the following section review the most important features of the model beyond the homogeneous solution
314
C. Kuhn et al.
Fig. 32 Unrefined 200 × 200 × 1 mesh with boundary conditions (left) for the phase field simulation and 200 × 200 × 1 mesh with geometrically modeled initial crack for the classic von Mises model
discussed in the previous section and lead over to the 3D problems considered in the subsequent section by motivating the model configurations used there. A single notched tension probe subjected to linearly increasing loading is investigated. The monolithic Newton-Raphson solution strategy is chosen for the solution of the non-linear global system of equations for the displacements u and the fracture field s. An adaptive time stepping scheme is used, such that depending on the number of Newton iterations needed to satisfy the convergence criterion, the time step size is increased or decreased, respectively. Healing of fractured material is prevented by fixing nodes where the fracture field drops below a small threshold value (s ≤ 10−8 ) to zero and corresponding nodal values are canceled out from the global system of equations. The initial crack for the phase field simulations is generated by prescribing the initial condition s = 0 from x1 = −l/2 to x1 = 0 in the plane, where x2 = 0, see Fig. 32. A linear increasing vertical displacement loading u ∗ on the bottom and top of the specimen with u∗ =
1 ∗ u t 2 0
(66)
is applied to enable stable crack growth. Furthermore, plane strain conditions are ensured by appropriate boundary conditions. The finite element meshes consisting of 200 × 200 × 1 8-node brick elements are displayed along with the boundary conditions in Fig. 32. The parameters used in the simulations are reported in Table 1. The crack configurations for simulations with the quadratic degradation function and effective parameters, depicted in Figs. 33 and 34, do not differ from those with cubic degradation function. In the 45◦ direction to the virtual elongation of the initial
Phase Field Modeling of Brittle and Ductile Fracture
315
Table 1 Material, geometry and loading properties used in the simulations. The relation of the non-dimensional quantities to E, G c , l and T correspond to dimensionless model in [23] Poisson’s ratio ν: 0.25 Regularization length : 0.01 l Residual stiffness η: 10−4 Deg. function parameter β: 2 Mobility constant M: 10 G lc T Length l: Hardening modulus H : Yield stress σY : Loading rate: u ∗0
1.0l 0.1E % 1.0 G cl E % 1.0 GEc l
Fig. 33 Contour plots of the fracture field (top) and the accumulated plastic strain (bottom) at t = 1.1, t = 1.3 and t = 3.0 (from left to right) for the simulations with effective parameters σY = 1.0 and H = 0.1
crack, where the highest deviatoric stresses appear, bands of plastic deformation are formed. Depending on the choice of the plastic parameters, the crack either kinks in the 45◦ direction, if the hardening modulus is chosen high, or propagates straight, if hardening is low. The two modes correspond either to fracture driven by plastic deformation or an elastic type fracture in an elastic-plastic material. In order to evaluate the probe response to the loading situation, reaction forces are computed on the surfaces where the loading is applied, see Fig. 32 on the left. The reaction force is computed by
316
C. Kuhn et al.
Fig. 34 Contour plots of the fracture field (top) and the accumulated plastic strain (bottom) at t = 1.1, t = 1.3 and t = 2.2 (from left to right) for the simulation with effective parameters σY = 1.0 and H = 0.3
F2 =
+
∂x σ22 K l e K we K ,
(67)
K +
∂x where σ22 K is the outward oriented normal component of the stress in x 2 -direction of node K on the loaded surfaces on the half of the specimen, where it is initially intact over the whole height (i.e. where the notch is not present). The length of the area surrounding node K is denoted le K , while the width is denoted we K . The quantities are illustrated in Fig. 35 on the left. In comparison with the classical von Mises model the stress response in the linear elastic regime is recovered quite well by the phase field models, while using the effective parameters it is yet a little closer than with the nominal parameters. In the plane strain case the use of effective parameters reveals a overestimation of the yield stress, while nominal parameters lead to an underestimation, see Fig. 35 on the right. With the onset of plastic deformation the curves of the reaction force of the phase field models flatten out. For the simulation using the nominal parameters no hardening is observed, while a positive slope is found in the simulation with effective parameters. Even though the simulation with effective parameters cannot recover the load displacement behavior of the classical von Mises behavior in detail, a qualitative agreement is achieved. The degree of agreement depends on the regularization parameter. While for a large regularization parameter = 0.04 not even the usage of effective parameters does yield a hardening effect, the curves of the simulations with decreasing regularization parameter not only reveal a more and
Phase Field Modeling of Brittle and Ductile Fracture
317
Fig. 35 Schematic to illustrate the computation of the reaction force (left). Load-displacement curve of phase field simulations with effective and nominal parameters in comparison with conventional von Mises model (right)
more pronounced hardening effect, but resemble the curve of the simulation with the classic von Mises model closer and closer over the entire range, see Fig. 36. However, even though a smaller and smaller chosen regularization parameter can recover the load displacement behavior of the von Mises model better and better, the choice of a very small regularization parameter comes with the cost of the requirement of a very small finite element mesh, which in turn leads to unacceptable high computational costs. The application of the cubic degradation in the 1D case has been discussed already and rendered a promising result as the homogeneous solution possessed two solutions. Besides one solution with a degrading fracture field also another solution without degradation is possible. Thus, the load-displacement curves for phase field simulations with cubic degradation functions approach their counterpart of standard von Mises model even in the elastic-plastic region reasonably well, see Fig. 37 on the left. Even without an introduction of effective parameters, a hardening regime is observed, whereas the slope is higher the smaller parameter β is chosen. Moreover, not even by investigating low hardening scenarios (H = 0.02, see Fig. 37 on the right) undesired softening behavior is observed. The influence of the mobility constant M is negligible with regard to the load displacement curves, but crucial for the stability of the monolithic solution scheme, see Fig. 37. In Fig. 37 the crosses mark the loading step, in which no convergence could be achieved. More robust staggered solution schemes are an alternative to the monolithic strategy in those cases.
3.4 3D Simulations
A finite element mesh of 20 × 20 8-node brick elements in the x1 − x2-plane, which is refined by a factor of 9 along the path where the crack is expected to grow, is used. This results in a lateral element edge length of du = 0.05l in the unrefined region and dr = 0.0056l in the refined region.
Fig. 36 Load-displacement curve of phase field simulations with effective parameters and varying regularization length in comparison with conventional von Mises model
Fig. 37 Comparison of load-displacement curves of simulations with the classic von Mises model and with degradation function parameters β = 1.0 and β = 10−6 (left). Load-displacement curves for simulations with very low hardening modulus H = 0.02 and varying mobility (right)
In the transition regions from the coarse parts of the mesh to the finer ones a topological cleanup is performed such that the valence of the nodes is reduced to a maximum of 5. The resulting mesh is depicted in Fig. 38; the refinement strategy is described in more detail in [26]. A suitable out-of-plane resolution was determined by comparing the results of simulations in which the domain was subdivided in direction of the notch front by a varying number of elements. Apart from that, the same simulation conditions were used as in the previous simulations. In Fig. 39 the line plots along the crack front of the von Mises stress and the accumulated plastic strain for a specimen thickness of d = 0.4 reveal qualitatively the same symmetric profiles, regardless of
whether the domain is discretised by n3 = 5 or n3 = 8 elements. In contrast to that, the respective profiles for the same discretisations differ significantly for a specimen thickness of d = 0.6, see Fig. 40. Due to the different distribution of the plastic zone in the two discretisations, n3 = 5 is considered as too coarse for a specimen thickness of d = 0.6, see also Fig. 41. Thus, a rough estimate for the required number of elements in x3-direction is given by

$$n_3 \geq \frac{25}{2}\,\frac{d}{l}.$$

Due to the above observations, an out-of-plane discretisation with 8 elements was chosen for specimens with d > 0.4. The fracture field of a specimen with a thickness of d = 0.4l at 4 instances in time is reported in Fig. 43 on the top. The four instances in time correspond to the load states illustrated in the load-displacement plot in Fig. 42 on the left. While at t1, in the linear-elastic regime, the crack front appears to be flat, a slight thumbnail shape forms after the transition to the elastic-plastic regime at t2. Around the instance when the reaction force reaches its maximum, the thumbnail shape is more pronounced (t3) and remains in this form during the propagation of the crack front through the specimen (t4).
Fig. 38 View of the x1 − x2-plane of the refined finite element mesh
Fig. 39 Von Mises stress (left) and accumulated plastic strain (right) at x1 = 0.03 and x2 = 0.0 in out of plane direction of the specimen with d = 0.4 for different meshes in x3 -direction
Fig. 40 Von Mises stress (left) and accumulated plastic strain (right) at x1 = 0.03 and x2 = 0.0 in out of plane direction of the specimen with d = 0.6 for different meshes in x3 -direction
Fig. 41 Slices of the plastic zone of the specimen with d = 0.6 at simulation time t = 2.8 (corresponding to u2 = 1.4) at x3 = d/2 (a) and at x3 = d (b). Each plot on the left originates from a mesh with an out-of-plane discretisation of n3 = 5 elements, the plot on the right from one with n3 = 8
Fig. 42 Load-displacement curve, where F2 is the reaction force acting on the initially undamaged, right part of the structure's x2-surface, and distribution of CPU time for the simulation of a specimen of thickness d = 0.4l
Fig. 43 Fracture field (top) and hardening variable in the upper left quarter of the structure (bottom) at simulation time t1 , t2 , t3 and t4 from left to right
Fig. 44 Plastic zone at simulation time t2 , t3 and t4 (from left to right) for d = 0.4
A widening of the transition zone between cracked and intact material in the region where the crack has passed through, compared to the pre-notched region, is noticeable. The evolution of the accumulated plastic strain in the upper right quarter of the specimen is depicted in Fig. 43. While the plastic zone is only slightly developed at t2, a band of at least slight plastic deformation has formed from the crack tip to the top right corner. When the crack front has propagated almost through the entire specimen at t4, a profile of plastic deformation has formed in which the decay of plasticity becomes less steep in x2-direction. The shape of the plastic zone is presented in Fig. 44. Regions where the amount of accumulated plastic strain is very low (α < 1.0 Gc/(El)) are masked. Even though the amount of plastic deformation is still small at t2, a difference in the lateral extension of the plastic zone on the surface and in the center of the specimen is apparent. When the amount of plasticity increases further (t3), the difference becomes more pronounced. At the surface of the specimen an enlargement of the plastic zone in a 45° direction from the crack plane develops. The onset of plastic deformation at the surface of the structure, accompanying the beginning of fracture in the center, turns out to be the computationally most demanding part of the simulation, see Fig. 42 on the right. Towards the end of the simulation further demanding situations emerge.
Fig. 45 Slices at x3 = 0, x3 = 0.1, x3 = 0.2, x3 = d/2 and x3 = d (from left to right) of the plastic zone at simulation time t3 for the specimen with d = 0.4
Fig. 46 Slices at x3 = 0, x3 = 0.1, x3 = 0.2, x3 = d/2 and x3 = d (from left to right) of the plastic zone at simulation time t3 for the specimen with d = 0.6
In order to assess the lateral expansion of the plastic zone in dependence of the position along the crack front (x3-direction), several slices at the instance when the reaction force is almost at its maximum level (t = t3) are depicted in Fig. 45. At the surface of the specimen the strongly shaped flanks at 45° angles from the crack propagation direction are noticeable; they build up an extraordinarily wide, sickle-resembling, concave shape in front of the crack tip. At x3 = 0.1 the flanks are less pronounced, hence the extension of the plastic zone in x2-direction is smaller and the concavity in front of the crack is less pronounced. In the center of the specimen the shape is almost identical. The larger extension of the plastic zone in x1-direction is due to the fact that the crack tip in the center has already propagated a little further than in the outer parts of the specimen. A similar picture unfolds for the specimen of thickness d = 0.6l shown in Fig. 46. The flanks and the concavity in front of the crack tip become less prominent the larger the distance from the surface; even from x3 = 0.1 to x3 = 0.2 and further to the center a distinction is discernible. Compared to d = 0.4l the surface feature seems to decay less strongly, while due to the larger distance from the surface the effect is almost fully decayed in the center. The trend observed here of a decreasing size of the plastic zone towards the center of the specimen across the crack front is related to the out-of-plane constraint, which describes the structural obstacle against plastic deformation due to the specimen dimension parallel to the crack front [27]. For the specimen with d = 0.6l the out-of-plane constraint increases significantly from the surface to x3 = 0.1 and slightly from x3 = 0.1 to the center. In case of the smallest specimen thickness the out-of-plane constraint reaches its maximum value almost at x3 = 0.1. Hence, the out-of-plane constraint at the center is still closer to that at the surface compared to the thicker specimen.
Fig. 47 Slices of the plastic zone at the time when the reaction force is at maximum for a specimen with d = 0.05 approximating the plane stress state (left) and for a specimen under plane strain conditions (right). In the center a schematic sketch of the dog-bone model
A plastic zone of cylindrical shape, whose cross section is largely dominated by the larger expansion at the surface for thinner specimens, while it is mostly dominated by the smaller extension in the center for thicker specimens, is also observed in [28]. The diminishing extension of the plastic zone at the crack front in the direction of propagation, as predicted by the dog-bone model sketched in Fig. 47 (middle), is not entirely observed. The dog-bone model assumes a plane stress state on the surface and a plane strain state in the center of the specimen. The scenario predicted by the dog-bone model is observed neither in [28] nor in [29], though. Hence the results at hand are in accordance with the observations of the works cited above. A comparison between the shape of the plastic zone at the surface of the 3D specimen and the shape in a plane stress simulation, see Fig. 47 (left), as well as between the shape in the center of the specimen and the shape in a plane strain simulation, see Fig. 47 (right), reveals an obvious distinction. Thus, neither a plane stress state on the surface nor a plane strain state in the center of the structure is established at the instance in time when the reaction force is at its maximum in the 3D simulations. The shape of the crack front at this instance is already thumbnail-like, while in the dog-bone model the crack front is assumed to be straight. Furthermore, the size requirement for KIC measurements—demanding that the specimen thickness is large enough such that plane stress dominated regions on the surface are negligible compared to plane strain dominated regions inside the specimen—states $d \geq 2.5\,(K_{IC}/\sigma_Y)^2$ [28]. Applying this requirement with $G_c = K_{IC}^2\,(1-\nu^2)/E$ and assuming unaltered simulation parameters, a specimen thickness of d ≥ 3.3 would be required. This is more than 5 times larger than the thickest specimen considered above.
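For completeness, the two relations quoted above can be combined so that the thickness requirement is expressed directly in terms of the fracture toughness Gc entering the phase field model; this is only a restatement of the formulas already used, assuming the plane strain relation between Gc and KIC:

$$d \;\geq\; 2.5\,\frac{K_{IC}^2}{\sigma_Y^2} \;=\; 2.5\,\frac{G_c\,E}{(1-\nu^2)\,\sigma_Y^2}.$$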
4 Concluding Remarks
In the present investigation a phase field model for brittle fracture is enhanced by using exponential shape functions. The choice of this type of functions is motivated by the analytical solution in the 1D case. It is demonstrated that the benefit of the exponential shape functions is a much more accurate approximation of the crack face energy. Furthermore, it is shown that this allows for a much more accurate prediction of critical loads for crack propagation, as compared to discretisations with the same number of standard elements. The main drawback lies in an orientation dependency which must be incorporated. Connecting elements with exponential shape functions to standard Lagrange finite elements is shown by constructing blending elements. Another important issue is that within the finite element discretisation the exponential shape functions must be integrated over the elements. The quadrature rule to perform this integration numerically must be chosen carefully. An adaptive Gauß quadrature and a double exponential quadrature formula are discussed.

In a second part the phase field model for brittle fracture is extended towards ductile fracture. A more or less straightforward extension using the same degradation function in the elastic as well as the plastic response is proposed. The choice of the degradation function becomes crucial. Using the frequently employed quadratic approach for the degradation function makes the interpretation of the material and phase field parameters cumbersome. By considering a 1D setup, a relation between effective and nominal plastic parameters, such as yield stress and hardening modulus, is established. Using a cubic degradation function is another option which renders the interpretation of the parameters more straightforward. The cause for this improvement is the existence and stability of several solutions in the elastic, plastic and fractured deformation regimes. The application of the elastic-plastic fracture phase field model to 2D and 3D problems reveals important fracture mechanical features. Depending on the hardening behavior, fracture is either driven by the plastic deformation or elastic fracture in an elastic-plastic material occurs. For 3D specimens a thumbnail-shaped crack front is observed, as well as a width-dependent plastic zone distribution.
References
1. G.A. Francfort, J.-J. Marigo, Revisiting brittle fracture as an energy minimization problem. J. Mech. Phys. Solids 46, 1319–1342 (1998)
2. B. Bourdin, Numerical implementation of the variational formulation of quasi-static brittle fracture. Interfaces Free Bound. 9(3), 411–430 (2007)
3. A.A. Griffith, The phenomena of rupture and flow in solids. Philos. Trans. R. Soc. A 221, 163–198 (1921)
4. D. Gross, Th. Seelig, Fracture Mechanics (Springer, 2018)
5. C. Kuhn, Numerical and Analytical Investigation of a Phase Field Model for Fracture. Ph.D. thesis, Technische Universität Kaiserslautern (2013)
6. C. Kuhn, R. Müller, Interpretation of parameters in phase field models for fracture. Proc. Appl. Math. Mech. 12, 161–162 (2012)
7. C. Kuhn, R. Müller, A new finite element technique for a phase field model of brittle fracture. J. Theor. Appl. Mech., 1115–1133 (2011)
8. O.C. Zienkiewicz, R.L. Taylor, J.Z. Zhu, The Finite Element Method: Its Basis and Fundamentals, 7th edn. (Butterworth-Heinemann, 2013)
9. H. Takahasi, M. Mori, Double exponential formulas for numerical integration. Publ. RIMS, Kyoto Univ. 9, 721–741 (1974)
10. R. Verfürth, A Posteriori Error Estimation Techniques for Finite Element Methods (Oxford University Press, 2013)
11. C. Kuhn, R. Müller, A discussion of fracture mechanisms in heterogeneous materials by means of configurational forces in a phase field fracture model. Comput. Methods Appl. Mech. Eng. 312, 95–116 (2016)
12. B. Bourdin, C. Larsen, C. Richardson, A time-discrete model for dynamic fracture based on crack regularization. Int. J. Fract., 133–143 (2011)
13. M. Hofacker, C. Miehe, A phase field model of dynamic fracture: robust field updates for the analysis of complex crack patterns. Int. J. Numer. Meth. Eng., 276–301 (2013)
14. C. Kuhn, R. Müller, A continuum phase field model for fracture. Eng. Fract. Mech. 77(18), 3625–3634 (2010)
15. S. Nagaraja, M. Elhaddad, M. Ambati, S. Kollmannsberger, L.D. Lorenzis, E. Rank, Comput. Mech. 63(6), 1283–1300 (2019)
16. M.J. Borden et al., A phase-field formulation for fracture in ductile materials: finite deformation balance law derivation, plastic degradation, and stress triaxiality effects. Comput. Methods Appl. Mech. Eng. 312, 130–166 (2016)
17. T. Noll, C. Kuhn, R. Müller, Investigation of a phase field model for elasto-plastic fracture. Proc. Appl. Math. Mech. 16, 157–158 (2016)
18. T. Noll, C. Kuhn, R. Müller, A monolithic solution scheme for a phase field model of ductile fracture. Proc. Appl. Math. Mech. 17, 75–78 (2017)
19. T. Noll, C. Kuhn, R. Müller, Modeling of ductile fracture by a phase field approach. Proc. Appl. Math. Mech. 18, 1–2 (2018)
20. H. Amor, J.-J. Marigo, C. Maurini, Regularized formulation of the variational brittle fracture with unilateral contact: numerical experiments. J. Mech. Phys. Solids 57(8), 1209–1229 (2009)
21. J.C. Simo, T.J.R. Hughes, Computational Inelasticity (Springer, New York, 1998)
22. B. Bourdin, G.A. Francfort, J.-J. Marigo, Numerical experiments in revisited brittle fracture. J. Mech. Phys. Solids 48, 797–826 (2000)
23. C. Kuhn, T. Noll, R. Müller, On phase field modeling of ductile fracture. GAMM-Mitteilungen, 35–54 (2016)
24. C. Kuhn, A. Schlüter, R. Müller, On degradation functions in phase field fracture models. Comput. Mater. Sci. 108, 374–384 (2015)
25. M. Borden et al., A phase-field description of dynamic brittle fracture. Comput. Methods Appl. Mech. Eng. 217–220, 77–95 (2012)
26. T. Noll, C. Kuhn, D. Olesch, R. Müller, 3D phase field simulations of ductile fracture. GAMM-Mitteilungen 43(2) (2019)
27. H. Yuan, W. Brocks, Quantification of constraint effects in elastic-plastic crack front fields. J. Mech. Phys. Solids 46(2), 219–241 (1998)
28. D. Fernández Zúñiga, J.F. Kalthoff, A. Fernández Canteli, J. Grasa, M. Doblaré, Three dimensional finite element calculations of crack tip plastic zones and KIC specimen size requirements, in 17th European Conference on Fracture: Multilevel Approach to Fracture of Materials, Components and Structures (2005)
29. S. Kudari, K. Kodancha, Effect of specimen thickness on plastic zone, in 17th European Conference on Fracture 2008: Multilevel Approach to Fracture of Materials, Components and Structures 1, 530–538 (2008)
Adaptive Quadrature and Remeshing Strategies for the Finite Cell Method at Large Deformations Wadhah Garhuom, Simeon Hubrich, Lars Radtke, and Alexander Düster
Abstract Numerical methods based on a fictitious domain approach, such as the finite cell method, simplify the meshing process significantly as the mesh does not need to conform to the geometry of the underlying problem. However, such methods result in elements/cells which are intersected by the domain boundary. Consequently, special integration methods have to be applied for the broken elements/cells to achieve accurate results. One of the methods that is commonly used in the finite cell method is an adaptive scheme based on a spacetree decomposition. Unfortunately, it results in a large number of integration points which makes the method quite expensive. In the first part of this contribution, we try to overcome this problem by introducing an adaptive scheme for the moment fitting to be able to integrate broken cells efficiently and robustly for nonlinear problems. Furthermore, a remeshing strategy for the finite cell method will be introduced to improve the solution quality and overcome the large distortion of the mesh during the simulation which can cause the analysis to fail. To this end, a new mesh with a good quality is created whenever the old mesh is no longer capable of taking any further deformations. Afterwards, a data transfer between the old and the new meshes is performed with the help of a local radial basis function interpolation. Different numerical examples are presented to study the performance of the proposed methods.
W. Garhuom (B) · S. Hubrich · L. Radtke · A. Düster Numerical Structural Analysis with Application in Ship Technology (M-10), Hamburg University of Technology, Am Schwarzenberg-Campus 4 (C), 21073 Hamburg, Germany e-mail: [email protected] S. Hubrich e-mail: [email protected] L. Radtke e-mail: [email protected] A. Düster e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_12
1 Introduction Fictitious domain methods such as the cut finite element method (cutFEM) [1], immersogeometric analysis [2, 3], and the finite cell method (FCM) [4, 5, 12, 13] decouple the mesh required for the analysis from the geometry approximation, i.e. the mesh does not conform to the boundary of the geometry, allowing for a fast and straightforward discretization. Such methods are suitable for simulating problems with complex geometry or for cases involving voids or material interfaces. In such scenarios, employing the standard finite element method (FEM) requires a huge effort to generate the mesh. Nonetheless, one major downside related to fictitious domain methods is the need for more advanced numerical quadrature schemes to be able to integrate elements/cells that are intersected by the geometry accurately. Standard Gaussian quadrature does not work for such elements/cells since the integrand is discontinuous. Several numerical schemes have been developed to overcome this problem—such as the adaptive integration schemes [5, 14–16], equivalent Legendre polynomials [17, 18], or the moment fitting [19–23]. In this chapter, we focus on the finite cell method, which is based on a combination of the fictitious domain approach and high order finite elements [4, 5]. The finite cell method was successfully implemented in a variety of applications and problems such as, for example, additive manufacturing and 3D printing [6, 7], adaptivity and error estimation [8, 9], elastoplasticity and geometrical nonlinearities [10, 11], and simulations of foam-like materials [37]. For the task of integrating cells that are intersected by the boundary of the physical domain, it is common to employ adaptive integration schemes. They are based on the decomposition of the domain into subdomains and applying the Gaussian quadrature on a subcell level. Such methods work well and robustly. However, they tend to generate a large number of integration points, making these methods rather expensive. Alternatively, the moment fitting quadrature can be used in the FCM utilizing the Lagrange basis functions through Gauss–Legendre points, as proposed by [23]. In doing so, solving a system of equations to setup the moment fitting is avoided with the help of the Kronecker delta feature of the Lagrange basis [21, 22]. This approach performs well for linear problems and can reduce the number of integration points by several orders of magnitude compared to the adaptive spacetree [23]. However, the moment fitting becomes less stable for nonlinear problems, and the overall Newton–Raphson method applied to solve the global set of nonlinear algebraic equations may fail to converge during the analysis. An adaptive version of the moment fitting was proposed in [23], aiming to overcome this problem and to increase its robustness for nonlinear problems. In doing so, every cell is divided into subcells, like in the adaptive spacetree, if the volume fraction of a given integration domain is smaller than a specific tolerance. Consequently, the moment fitting is applied on cell or subcell level. Several numerical examples in elastoplasticity for small and large strain are given to illustrate the above methods. Furthermore, in the finite cell method, the fictitious domain is usually assigned with a material of very low stiffness in order to avoid conditioning problems. As a result, if the cells are badly broken, they can become highly distorted during the
simulation, causing the Newton–Raphson method to fail. The remeshing strategy proposed in [24] aims to overcome this problem and to increase the robustness of the FCM, allowing to proceed with the analysis even for very large deformations. The method is based on a multiplicative split of the deformation gradient in a completely Lagrangian formulation. The main concept is to discretize the structure using the initial mesh until no further deformation can be applied. Usually, self-penetration will occur at a certain point, due to the large distortion of the cells. At this point, a new mesh is created with a good quality that covers the deformed structure. Next, data transfer between the meshes is initiated based on a local radial basis function interpolation scheme. Once the interpolation is completed, the analysis is continued using the new mesh until the desired deformation is obtained or until the domain needs to be meshed again. This chapter is structured as follows: First, the finite cell method is explained in Sect. 2. Afterwards, an overview of the moment fitting and the adaptive moment fitting with different numerical examples in elastoplasticity is illustrated in Sect. 3. Next, in Sect. 4, a remeshing strategy for the finite cell method is introduced with several numerical examples related to hyperelasticity. Finally, the chapter is concluded in Sect. 5.
2 The Finite Cell Method
This section serves to outline the concept of the finite cell method (FCM)—for more information see [4, 5]. The FCM is based on a fictitious domain approach with the use of high order finite elements. The basic concept is illustrated in Fig. 1 for a two-dimensional problem of solid mechanics. Here, Ω defines the physical domain, which is subjected to Dirichlet boundary conditions acting on ∂Ω_D and Neumann boundary conditions t̄ acting on ∂Ω_N, while Ω_e denotes the extended domain and Ω_e \ Ω defines the fictitious domain. The main idea of the FCM is to embed the physical domain into a fictitious domain, which results in an extended domain that can be discretized using a Cartesian grid. This simplifies the mesh generation, but it also results in elements that are intersected by the boundary. In order to distinguish them from boundary conforming elements we call them finite cells, giving the method its name. To account for the geometry, we introduce the indicator function as follows

$$\alpha(\boldsymbol{x}) = \begin{cases} 1, & \text{for } \boldsymbol{x} \in \Omega \\ \alpha_0 = 10^{-q}, & \text{otherwise} \end{cases} \qquad (1)$$
The parameter α0 is used to stabilize the fictitious domain. By setting α0 = 0, the geometry is exactly preserved. Nevertheless, so as to avoid conditioning problems, small values for the indicator function are used, i.e. q = 5,…,12. The starting point for the discretization is based on the weak form of equilibrium
Fig. 1 The concept of the FCM [24]
$$G^{e}_{\alpha} = \int_{\Omega_e} \alpha\, \boldsymbol{P} \cdot \operatorname{Grad} \boldsymbol{\eta} \; \mathrm{d}V \;-\; \int_{\partial\Omega_N} \bar{\boldsymbol{t}} \cdot \boldsymbol{\eta} \; \mathrm{d}A = 0, \qquad (2)$$
where P defines the first Piola–Kirchhoff stress tensor and η denotes the test function, see e.g. [25]. Please note that for reasons of simplicity, volumetric forces are neglected but can be considered in general. The integral in Eq. (2) is discontinuous for cells that are intersected by the boundary of the physical domain. Therefore, special integration methods need to be developed and applied to integrate broken cells accurately, see e.g. [15, 23, 26].
3 Adaptive Quadrature Based on the Moment Fitting
In the first part of this section, the basic idea of the moment fitting is explained. After that, the adaptive version of the moment fitting is introduced. We finally conclude with several numerical examples. For a detailed overview of the moment fitting, the reader is referred to [19, 20, 23, 26].
3.1 Moment Fitting
The main idea of the moment fitting is to set up a quadrature rule for any given domain A by solving the following system of equations

$$\sum_{i=1}^{n} f_j(\boldsymbol{x}_i)\, w_i = \int_{A} f_j(\boldsymbol{x}) \; \mathrm{d}\Omega, \qquad j = 1, \dots, m \qquad (3)$$
where f j (x) represent the m basis functions, xi define the n integration points, and wi are the weights. Equation (3) can also be written in matrix form as follows
Fig. 2 Moment fitting [23]. a Points for an integration order pq = 3. b Adaptive integration based on quadtree of depth k = 4 and Gauss quadrature order pq = 3
$$\begin{bmatrix} f_1(\boldsymbol{x}_1) & \dots & f_1(\boldsymbol{x}_n) \\ \vdots & \ddots & \vdots \\ f_m(\boldsymbol{x}_1) & \dots & f_m(\boldsymbol{x}_n) \end{bmatrix} \begin{Bmatrix} w_1 \\ \vdots \\ w_n \end{Bmatrix} = \begin{Bmatrix} \int_A f_1(\boldsymbol{x})\,\mathrm{d}\Omega \\ \vdots \\ \int_A f_m(\boldsymbol{x})\,\mathrm{d}\Omega \end{Bmatrix} \qquad (4)$$
or in symbolic notation as

$$\boldsymbol{A}\, \boldsymbol{w} = \boldsymbol{b}. \qquad (5)$$
Here, A represents a coefficient matrix, w is a vector containing the weights, and b a vector that contains the moments. The moment fitting system (5) is nonlinear with respect to the position of the integration points {xi, i = 1, ..., n} and linear in terms of the weights. Therefore, one will need to use, for example, the Newton–Raphson method to solve the nonlinear system of equations, which makes the process rather expensive. To overcome this issue, the position of the points can be fixed a priori to transform the nonlinear system into a linear one. Furthermore, it was shown in [23] that choosing those points as the Gauss–Legendre points and the basis functions f_j(x) as the Legendre basis leads to a good conditioning number of the coefficient matrix A. The relation between the number of points n, the number of basis functions m, the quadrature order pq, and the spatial dimension d is defined as

$$n = m = \left(p_q + 1\right)^{d}. \qquad (6)$$
Consequently, the moment fitting equations result in a linear and square system of equations. Figure 2a shows the distribution of points for pq = 3 in the 2D case. Based on this quadrature rule, any polynomial of degree p ≤ pq can be integrated exactly.
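To make the procedure described above concrete, the following is a minimal 1D sketch of Eqs. (3)–(6): the point positions are fixed at the Gauss–Legendre abscissae, so only the linear system A w = b has to be solved. The cut position [a, b] of the "broken" reference cell is an assumed example value, and the Legendre basis is used as suggested in [23].

```python
import numpy as np

pq = 3                                                     # quadrature order
pts, _ = np.polynomial.legendre.leggauss(pq + 1)           # fixed point positions
# coefficient matrix A_ji = P_j(x_i), with P_j the j-th Legendre polynomial
A = np.array([np.polynomial.legendre.legval(pts, np.eye(pq + 1)[j])
              for j in range(pq + 1)])
a, b = -1.0, 0.25                                          # assumed physical part of the cell
# moments b_j = integral of P_j over [a, b], evaluated via the antiderivative
moments = np.array([
    np.diff(np.polynomial.legendre.legval(
        np.array([a, b]),
        np.polynomial.legendre.legint(np.eye(pq + 1)[j])))[0]
    for j in range(pq + 1)])
w = np.linalg.solve(A, moments)                            # moment-fitting weights
```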
To further reduce the effort of the moment fitting, it was proposed in [23] to use the Lagrange basis instead of the Legendre basis functions. In doing so, the basis for the 1D case reads

$$f_j(\xi) = l_j(\xi) \quad \text{with} \quad l_j(\xi) = \prod_{\substack{k=1 \\ k \neq j}}^{p_q+1} \frac{\xi - \xi_k^{GL}}{\xi_j^{GL} - \xi_k^{GL}} \qquad (7)$$
where l_j(ξ) denote the Lagrange polynomials, while ξ_j^GL and ξ_k^GL denote the positions of the Gauss–Legendre points. The 1D case can be easily extended to 2D and 3D using the tensor product. Applying the Lagrange polynomials, Eq. (3) can be rewritten as

$$\sum_{i=1}^{n} l_j(\xi_i^{GL})\, w_i = \int_{A} l_j(\xi)\; \mathrm{d}\xi, \qquad j = 1, \dots, p_q + 1. \qquad (8)$$
Thanks to the Kronecker delta property, the coefficient matrix A reduces to the identity matrix

$$A_{ji} = l_j(\xi_i^{GL}) = \delta_{ji}. \qquad (9)$$

Consequently, the weights

$$w_i = \int_{A} l_j(\xi)\; \mathrm{d}\xi, \qquad i = j = 1, \dots, p_q + 1 \qquad (10)$$

or

$$\begin{Bmatrix} w_1 \\ \vdots \\ w_{p_q+1} \end{Bmatrix} = \begin{Bmatrix} \int_A l_1(\xi)\,\mathrm{d}\xi \\ \vdots \\ \int_A l_{p_q+1}(\xi)\,\mathrm{d}\xi \end{Bmatrix} \qquad (11)$$
can be computed without the need of solving a linear system. In the multi-dimensional case, the position of the points as well as the Lagrange shape functions are simply obtained from the tensor product of the one-dimensional situation. For the 3D case, the Lagrange basis functions are defined as

$$\mathcal{F} = \left\{ l_r(\xi)\, l_s(\eta)\, l_t(\zeta), \quad r, s, t = 1, \dots, p_q + 1 \right\} \qquad (12)$$

and the positions of the points are

$$\mathcal{X} = \left\{ \left(\xi_r^{GL},\, \eta_s^{GL},\, \zeta_t^{GL}\right), \quad r, s, t = 1, \dots, p_q + 1 \right\}. \qquad (13)$$

Finally, the weights are computed by integrating the basis functions

$$w_i = \int_{A} l_r(\xi)\, l_s(\eta)\, l_t(\zeta)\; \mathrm{d}\xi\, \mathrm{d}\eta\, \mathrm{d}\zeta, \qquad r, s, t = 1, \dots, p_q + 1 \qquad (14)$$
which can be done following different approaches, see [18, 21, 22]. In this contribution, we use the adaptive integration schemes to integrate the basis functions. This is done by dividing the cells into subcells until a predefined depth is reached. Then, the standard Gauss–Legendre scheme is employed. Figure 2b shows the quadtree in 2D with a tree depth level k = 4 and integration order pq = 3. This is utilized to compute the weights of the points shown in Fig. 2a. It is important to mention that the integration of polynomials is relatively cheap as they are just scalar valued functions.
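The following 1D sketch illustrates the Kronecker-delta shortcut of Eqs. (10)–(11): each weight is simply the integral of the corresponding Lagrange polynomial over the physical part of the cell, here evaluated on a uniform subdivision that stands in for the space-tree of Fig. 2b. The `inside` membership test is an assumed user-supplied function, not part of any specific code base.

```python
import numpy as np

def lagrange_basis(xi, nodes, j):
    """1D Lagrange polynomial l_j built on the Gauss-Legendre nodes, Eq. (7)."""
    l = 1.0
    for k, xk in enumerate(nodes):
        if k != j:
            l *= (xi - xk) / (nodes[j] - xk)
    return l

def moment_fitting_weights(inside, pq=3, depth=4):
    """Weights w_j = integral of l_j over the physical part of [-1, 1], Eq. (10)."""
    nodes, _ = np.polynomial.legendre.leggauss(pq + 1)   # quadrature point positions
    gp, gw = np.polynomial.legendre.leggauss(pq + 1)     # Gauss rule per sub-interval
    w = np.zeros(pq + 1)
    cells = np.linspace(-1.0, 1.0, 2**depth + 1)         # uniform subdivision
    for a, b in zip(cells[:-1], cells[1:]):
        x = 0.5 * (a + b) + 0.5 * (b - a) * gp           # map Gauss points to sub-interval
        scale = 0.5 * (b - a)
        for j in range(pq + 1):
            vals = np.array([inside(xi) * lagrange_basis(xi, nodes, j) for xi in x])
            w[j] += scale * np.sum(gw * vals)
    return nodes, w

# usage: cell cut at xi = 0.25, physical material to the left of the cut
nodes, w = moment_fitting_weights(lambda xi: 1.0 if xi < 0.25 else 0.0)
```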
3.2 Adaptive Moment Fitting
The moment fitting quadrature rule presented in the previous section works well for linear problems, see e.g. [22]. For nonlinear applications, however, the moment fitting performs less stably—especially for cases where the volume of the fictitious domain in a cell is much larger than that of the physical domain. We refer to those cells as badly broken cells. It turned out that an adaptive version of the moment fitting improves its stability and robustness for nonlinear problems considerably. The main idea of the adaptive moment fitting is to combine the standard adaptive integration schemes (quadtree in 2D, octree in 3D) [27] with the moment fitting. The tree depth level for the adaptive moment fitting is denoted by ka. First, the volume fraction of every cell is computed at ka = 0. If the volume fraction of the cell is less than a specific tolerance, the integration domain is divided into subdomains. If any of the subdomains are not cut by the interface, the standard Gaussian quadrature is applied, as illustrated in Fig. 3a. If the volume fraction of a subdomain is larger than a predefined tolerance, the moment fitting is applied as explained in Sect. 3.1, see Fig. 3a (top left subdomain). If, however, the volume fraction is less than or equal to a given tolerance, the subdomain is subdivided further—and so on. This process is continued until all domains either exhibit a volume fraction larger than the tolerance or until the maximum tree depth level ka = 3 is reached. In [23], different examples were investigated, showing that a good choice for the tolerances is 0.85 for ka = 0 and 0.7 for ka = 1, 2. For the computation of the moment fitting weights, the adaptive spacetree is used, as illustrated in Fig. 2b (for k = 4 and pq = 3). To obtain the same resolution, e.g. the same volume of the geometry, using the adaptive moment fitting as compared to the adaptive spacetree, we need to use a lower tree depth level for the computation of the moments in the subcells. Consider, for example, the subdomain marked in Fig. 3a, which corresponds to a refinement level of ka = 2. For the computation of the weights for this domain, a refinement level of two is used, as depicted in Fig. 3b.
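The decision logic just described can be summarized by the short recursion below. It is only a sketch of the strategy: `cell.subdivide()`, `volume_fraction`, and the two integration routines are assumed interfaces, and the tolerances are those reported in [23].

```python
def integrate_cell(cell, volume_fraction, level=0, max_level=3):
    """Adaptive moment fitting: refine badly broken (sub)cells before applying
    the moment fitting on cell or subcell level."""
    tol = 0.85 if level == 0 else 0.70           # tolerances suggested in [23]
    vf = volume_fraction(cell)
    if vf == 1.0:
        return gauss_quadrature(cell)            # uncut (sub)cell: standard Gauss rule
    if vf > tol or level == max_level:
        return moment_fitting(cell)              # apply the moment fitting here
    return sum(integrate_cell(c, volume_fraction, level + 1, max_level)
               for c in cell.subdivide())        # otherwise subdivide further
```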
Fig. 3 Adaptive moment fitting [23]. a Points position with an integration order pq = 3. b Adaptive integration based on quadtree of depth k = 2 and Gauss quadrature order pq = 3 Fig. 4 Integration points for stabilizing the FCM [23]
3.2.1 Treatment of Points Within the Fictitious Domain
When applying the moment fitting, some of the points are located in the fictitious domain. Detailed numerical experiments suggest to apply the same nonlinear material model as in the physical domain to the fictitious domain [23]. In order to stabilize the FCM, additional integration points are added to the broken cells based on the standard Gaussian quadrature [27, 28]. The points located within the physical domain are neglected, as illustrated in Fig. 4. The constitutive equations used for those points correspond to a simple elastic model (for small strains) or a hyperelastic model (for large strains).
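A compact sketch of the stabilization step described above is given below. The cell interface `cell.gauss_points(order)` and the membership test `inside` are assumed placeholders; the point is only that additional low-order Gauss points are kept in the fictitious part of a broken cell and assigned a simple (hyper)elastic model.

```python
def stabilization_points(cell, inside, order=3):
    """Keep only the standard Gauss points of a broken cell that lie in the
    fictitious domain; these carry the stabilizing elastic material."""
    return [xp for xp in cell.gauss_points(order) if not inside(xp)]
```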
Table 1 Elastoplastic material properties [28]
Bulk modulus (K): 164.206 GPa
Shear modulus (μ): 80.194 GPa
Initial yield stress (σ0): 450.0 MPa
Saturation yield stress (σ∞): 715.0 MPa
Linear hardening parameter (h): 129.24 MPa
Hardening exponent (ω): 16.93
3.3 Numerical Examples
We will investigate the proposed integration schemes using different nonlinear problems in small and large strain. Thereby, J2 elastoplastic material models are used with the following nonlinear isotropic hardening

$$K(\bar{\alpha}) = \sigma_0 + h\,\bar{\alpha} + \left(\sigma_\infty - \sigma_0\right)\left(1 - \exp(-\omega\,\bar{\alpha})\right), \qquad (15)$$
where α¯ denotes the equivalent plastic strain, h the linear hardening parameter, ω the hardening exponent, σ0 the initial yield stress, and σ∞ the saturation yield stress [29]. The material parameters are listed in Table 1.
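For reference, the hardening law (15) with the parameter values of Table 1 can be evaluated directly as follows (stresses in MPa); this is a plain transcription of the formula, with no additional assumptions beyond the quoted parameters.

```python
import numpy as np

sigma_0, sigma_inf, h, omega = 450.0, 715.0, 129.24, 16.93   # Table 1 values

def K(alpha_bar):
    """Isotropic hardening law of Eq. (15)."""
    return sigma_0 + h * alpha_bar + (sigma_inf - sigma_0) * (1.0 - np.exp(-omega * alpha_bar))

print(K(0.0))   # 450.0 MPa -> initial yield stress
print(K(0.5))   # essentially the saturated branch plus the linear hardening term
```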
3.3.1 Small Strain J2 Elastoplasticity
We begin the investigation using a small strain J2 elastoplastic material model, see [25, 29]. Porous domain In the first example we study a porous domain. The motivation is to investigate the performance of the moment fitting presented in Sect. 3.1. The geometry of the model consists of a cube with dimensions of 10 × 10 × 10 mm3 including 27 ellipsoidal holes, as can be seen in Fig. 5a. Symmetry boundary conditions are applied and a displacement of u¯ z = 0.5 mm is prescribed at the top face. The model is discretized using 512 finite cells, as illustrated in Fig. 5a. We apply a tree depth level of k = 3 for the adaptive octree, which is the same depth that is used to compute the weights of the moment fitting. First, the number of integration points is compared between the moment fitting and the adaptive octree by elevating the ansatz order of the shape functions and the corresponding quadrature rules. It can be seen in Fig. 5b that the moment fitting generates much less points (about a factor of 10) as compared to the adaptive octree, hence leading to more efficient quadrature rules. Second, the load-displacement curves are plotted in Fig. 6a using p = 8 and α0 = 10−12 for the adaptive octree and two different versions of the moment fitting. One version uses the same material model for all points in the physical and fictitious domain. The second version called (moment fitting∗ ) uses an elastic model for the points located in the fictitious domain. It can be seen that the first version of the moment fitting shows good agreement with the adaptive octree, while the second version shows a stiffer response when using a linear elastic model. This suggests
Fig. 5 Porous domain [23]. a Geometry, boundary conditions, and FCM mesh. b The total number of integration points using different ansatz orders of the shape functions
Fig. 6 Porous domain [23]. a Load-displacement curves. b The von Mises stress along a cut line at u¯ z = 0.5 mm
that one should use the same material model and parameters for both domains, the physical and the fictitious one. The von Mises stress is plotted as well, along a cutline from (0, 0, 0) to (10, 10, 10) at the last load step, as depicted in Fig. 6b. The black box represents the physical domain, while the white box represents the fictitious part. Again, the first version of the moment fitting shows good agreement with the adaptive octree, while the second version shows a loss of accuracy. The contour plots for the von Mises stress σvM and the equivalent plastic strain α¯ at the last load step are given in Fig. 7. Cube with a cylindrical hole The moment fitting performs less stably for nonlinear problems, which can lead to a divergence of the overall Newton–Raphson method applied to solve the nonlinear set of algebraic equations resulting from the FCM discretization. In this example, we show the need for an adaptive version of the moment fitting to be able to achieve a robust quadrature. In doing so, we investigate a cube with a size of 10 × 10 × 10 mm3 which is cut by a cylinder. The cylinder can be described using the following level set function
Fig. 7 Porous domain contour plots for p = 8 and u¯ z = 0.5 mm [23]. a The von Mises stress. b The equivalent plastic strain
Fig. 8 Cube with a cylindrical hole [23]. a Geometry, boundary conditions, and FCM discretization. b Load-displacement curves
$$\phi(\boldsymbol{x}) = (y - y_c)^2 + (z - z_c)^2 - r^2, \qquad (16)$$
where the center coordinates of the cylinder are yc = 10 mm and z c = 0 mm, and its radius is r = 9 mm. Figure 8a illustrates the geometry, the boundary conditions, and the FCM discretization which consists of one cell. To this end, symmetry boundary conditions are applied and a displacement of u¯ z = 0.5 mm is prescribed at the top face of the cube. In Fig. 8b, load-displacement curves are plotted utilizing an ansatz order of p = 8 and an integration depth k = 5 for the resolution of the geometry. For the adaptive octree and the adaptive moment fitting a small stabilization value of α0 = 10−12 is used. The figure shows a good agreement between the two methods. However, for
Fig. 9 Cube with a cylindrical hole [23]. a The von Mises stress along a cutline at u¯ z = 0.5 mm. b The total number of integration points using different ansatz orders
moment fitting without adaptivity, we need to use a very large stabilization value of α0 = 10−2 in order to be able to reach the final load step, otherwise the Newton–Raphson method fails to converge. This large α0 value results in a very stiff response in the load-displacement curve, as depicted in Fig. 8b. Similar effects can also be seen when plotting the von Mises stress through a cutline from (0, 0, 0) to (10, 10, 10), as illustrated in Fig. 9a. Again, the black box denotes the physical domain while the white box represents the fictitious domain. Finally, in Fig. 9b, we plot the total number of integration points versus different ansatz orders for the adaptive octree, the adaptive moment fitting, and the moment fitting. Tree depth levels of k = 5 and 6 are used for the geometry resolution. It can be seen that the adaptive moment fitting offers a good compromise between efficiency and robustness. Further, it can be observed that, by increasing the tree depth from five to six, the number of integration points in the adaptive moment fitting did not increase. This is due to the fact that in the adaptive moment fitting the tree depth level is only used for the computation of the weights.

3.3.2 Finite J2 Elastoplasticity
In this subsection, we apply a finite J2 elastoplastic model. A detailed description of the model can be found, for example, in [23, 29]. Complex cube connector In the last example, a more complex geometry is investigated. The geometry is constructed from eight individual blocks; each block can be defined using the following level set function

$$\begin{aligned} \phi(\boldsymbol{x}) ={} & \big[(x-x_c)^2 + (y-y_c)^2 - R^2\big]^2 + \big[(y-y_c)^2 + (z-z_c)^2 - R^2\big]^2 + \big[(z-z_c)^2 - r^2\big]^2 \\ & + \big[(x-x_c)^2 + (z-z_c)^2 - R^2\big]^2 + \big[(x-x_c)^2 - r^2\big]^2 + \big[(y-y_c)^2 - r^2\big]^2 - d^4, \end{aligned}$$
where the center coordinates (xc , yc , z c ), the inner radius r , the outer radius R, and the design parameter d for every block are defined in Table 2. Thereby, every
Table 2 Geometry parameters of the complex cube connector [23]

Block id | xc (mm) | yc (mm) | zc (mm) | R (mm) | r (mm) | d (mm⁴)
1 | 1.5 | 1.5 | 1.5 | 1.5 | 1.125 | 5.3
2 | 4.5 | 1.5 | 1.5 | 1.5 | 1.125 | 4.9
3 | 1.5 | 4.5 | 1.5 | 1.5 | 1.125 | 5.1
4 | 4.5 | 4.5 | 1.5 | 1.5 | 1.125 | 4.7
5 | 1.5 | 1.5 | 4.5 | 1.5 | 1.125 | 5.2
6 | 4.5 | 1.5 | 4.5 | 1.5 | 1.125 | 4.8
7 | 1.5 | 4.5 | 4.5 | 1.5 | 1.125 | 5.0
8 | 4.5 | 4.5 | 4.5 | 1.5 | 1.125 | 4.6
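Level set functions of this kind serve as the point-membership test required by the FCM integration. The snippet below is only an illustrative sketch using the block-1 parameters of Table 2: the sign convention (φ < 0 taken as "inside") and the way the design-parameter term enters are assumptions made for the example, not statements about the reference implementation.

```python
def phi_block(x, y, z, xc, yc, zc, R, r, d):
    """Level set of a single connector block (see the formula above)."""
    sq = lambda s: s * s
    return (sq((x - xc)**2 + (y - yc)**2 - R**2)
            + sq((y - yc)**2 + (z - zc)**2 - R**2)
            + sq((z - zc)**2 - r**2)
            + sq((x - xc)**2 + (z - zc)**2 - R**2)
            + sq((x - xc)**2 - r**2)
            + sq((y - yc)**2 - r**2)
            - d**4)

# hypothetical inside-outside test at a point, using block 1 of Table 2
inside = phi_block(1.0, 1.0, 1.0, xc=1.5, yc=1.5, zc=1.5, R=1.5, r=1.125, d=5.3) < 0.0
```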
Fig. 10 Complex cube connector [23]. a Geometry, boundary conditions, and FCM discretization. b Load-displacement curves

Table 3 Total number of integration points [23]

p | n_g^OT | n_g^AMF | Ratio
2 | 10,432,284 | 4,123,260 | ≈2.5
3 | 24,720,696 | 10,391,292 | ≈2.4
4 | 48,285,260 | 21,079,600 | ≈2.3
block has a size of 3 × 3 × 3 mm3 which gives a total size of the model of 6 × 6 × 6 mm3 . Figure 10a shows the geometry, the boundary conditions, and the FCM discretization, which consists of 5376 cells (4464 of which are broken). Within the incremental/iterative solution process we apply an increasing displacement u¯ z on the top surface of the cube connector until the Newton–Raphson method fails to converge. The adaptive moment fitting performs well in this example, while the moment fitting fails. Utilizing the adaptive moment fitting, 1440 of the 4464 broken cells do not need refinements, i.e. ka = 0. In Table 3, the total number of integration points are
Fig. 11 The complex cube connector at the last load step [23]. a von Mises stress. b Equivalent plastic strain
listed for the adaptive moment fitting (AMF) and the adaptive octree (OT) utilizing ansatz order p = 2, 3, 4 and tree depth level of k = 3 and ka = 3. It can be seen that using the adaptive moment fitting can help to reduce the number of points by a factor of 2.5, 2.4, and 2.3 respectively. In Fig. 10b, the load-displacement curves are plotted for the adaptive moment fitting as well as the adaptive octree—utilizing an ansatz order p = 4, and a stabilization factor α0 = 10−5 . It can be seen that both methods show a good agreement. In Fig. 11, the von Mises stress and the equivalent plastic strain are plotted for the last converged load step (u¯ z = 0.28 mm). It can be seen that necking starts to take place at the thin parts of the upper layer with an equivalent plastic strain of about 50%.
4 A Remeshing Strategy for the Finite Cell Method
A remeshing strategy for the finite cell method is introduced to increase the robustness of the FCM and to be able to deform the structure much further for finite strain problems [24].
4.1 Kinematics
The remeshing strategy is based on a Lagrangian formulation that refers all computations in the current deformed configuration Ω_t to the initial undeformed configuration Ω_0. The important kinematic quantities are illustrated in Fig. 12. The total displacement d = x − X, which is the difference between the current position x of a point and its initial position X, can be computed as the sum of the displacement increments of all configurations
Fig. 12 Sequence of configurations during a simulation with remeshing (left) and configurations involved in a computation starting from a pre-deformed configuration n [24]
$$\boldsymbol{d} = \tilde{\boldsymbol{d}}_1 + \tilde{\boldsymbol{d}}_2 + \dots + \tilde{\boldsymbol{d}}_n + \tilde{\boldsymbol{d}}_{n+1} = \boldsymbol{d}_n + \tilde{\boldsymbol{d}}_{n+1}, \qquad (17)$$
where d_n is referred to as the pre-displacement of configuration Ω_n. The total displacement gradient is computed as

$$\boldsymbol{H} = \frac{\partial \boldsymbol{d}}{\partial \boldsymbol{X}}, \qquad (18)$$
and the displacement gradient at configuration Ω_{n−1} is computed as

$$\tilde{\boldsymbol{H}}_n = \frac{\partial \tilde{\boldsymbol{d}}_n}{\partial \boldsymbol{x}_{n-1}}. \qquad (19)$$
For the pre-displacement, the gradient at mesh n is defined as

$$\boldsymbol{H}_n = \frac{\partial \boldsymbol{d}_n}{\partial \boldsymbol{X}}. \qquad (20)$$
If mesh number n is reached during the simulation, the primary unknown becomes d̃_{n+1}. Additionally, since at this stage we discretize the configuration Ω_n, only the term H̃_{n+1} = ∂d̃_{n+1}/∂x_n can be computed directly, since the coordinates x_n are related to the reference configuration of the current mesh. Therefore, to compute the total displacement d, Eq. (17) is used with the help of the pre-displacement d_n from the previous computations. The total displacement gradient is computed as follows

$$\boldsymbol{H} = \frac{\partial \boldsymbol{d}_n}{\partial \boldsymbol{X}} + \frac{\partial \tilde{\boldsymbol{d}}_{n+1}}{\partial \boldsymbol{x}_n}\,\frac{\partial \boldsymbol{x}_n}{\partial \boldsymbol{X}} = \boldsymbol{H}_n + \tilde{\boldsymbol{H}}_{n+1}\left(\boldsymbol{H}_n + \boldsymbol{I}\right). \qquad (21)$$
The data transfer of H_n and d_n from the old to the new mesh will be discussed in detail in Sect. 4.2.3. Similar to the displacement gradient, we introduce the following deformation gradients

$$\boldsymbol{F}_n = \frac{\partial \boldsymbol{x}_n}{\partial \boldsymbol{X}} = \boldsymbol{H}_n + \boldsymbol{I}, \qquad (22)$$

and

$$\tilde{\boldsymbol{F}}_{n+1} = \frac{\partial \boldsymbol{x}_{n+1}}{\partial \boldsymbol{x}_n}, \qquad (23)$$

as illustrated in Fig. 12. The total deformation gradient related to mesh n is determined by

$$\boldsymbol{F} = \tilde{\boldsymbol{F}}_{n+1}\, \boldsymbol{F}_n. \qquad (24)$$
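The multiplicative/additive composition of Eqs. (21)–(24) can be checked with a few lines of code. This is only a consistency sketch under the relation F̃_{n+1} = H̃_{n+1} + I, which follows from the definitions above; the sample gradients are arbitrary.

```python
import numpy as np

def total_gradients(H_n, H_tilde_np1):
    """Compose total displacement and deformation gradients from the stored
    pre-displacement gradient H_n and the gradient on the current mesh."""
    I = np.eye(3)
    F_n = H_n + I                              # Eq. (22)
    H = H_n + H_tilde_np1 @ F_n                # Eq. (21)
    F = (H_tilde_np1 + I) @ F_n                # Eq. (24) with F_tilde = H_tilde + I
    return H, F

# consistency check: F - I must equal H
H, F = total_gradients(np.diag([0.1, 0.0, 0.0]), np.diag([0.0, 0.05, 0.0]))
assert np.allclose(F - np.eye(3), H)
```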
4.2 Remeshing Procedure
To improve the robustness of the FCM for finite strain problems, we introduce in this section a remeshing strategy. First, let us assume, for example, that we want to deform a plate with a hole, presented in Fig. 13, from an initial undeformed configuration Ω_0 to a final deformed configuration Ω_2. However, the simulation could not be continued until the end because of the highly distorted cells, which cause the Newton–Raphson method to fail. To overcome this problem, we introduce an intermediate configuration Ω_1. At this point, we still have convergence and the cells are not too distorted. Later, in Sect. 4.2.2, some remeshing criteria are introduced to be able to identify the right time to stop the computation and to remesh during the simulation. Once configuration Ω_1 is reached, a new mesh of high quality is introduced. Afterwards, the important quantities such as the displacement gradients are transferred from the old distorted mesh to the new one with the help of a local radial basis function interpolation scheme, as illustrated in Sect. 4.2.3. Once this is done, the last converged load step is repeated using the new mesh to ensure equilibrium, as it might have been violated during the data transfer. Then, the analysis is continued until we reach Ω_2. Since the FCM uses Cartesian grids which do not need to conform to the actual geometry, meshing is simplified significantly.
4.2.1 Mesh Generation
The geometry is described using a triangulated surface. Once the analysis is aborted, when the criteria check fails (see Sect. 4.2.2), the displacements at the points of the triangulated surface are evaluated using the cells shape functions. Consequently,
Fig. 13 Remeshing procedure [24]
those displacements are added to the coordinates of the triangulated surface to obtain the deformed surface at this stage. This is done for two important reasons. First, we can now create a bounding box that covers the deformed surface, which can be discretized using a Cartesian grid, as illustrated in Fig. 13. In this study, the same number of subdivisions of the Cartesian grid is used for every new mesh. Second, the triangulated surface is used for the inside-outside test to identify whether a point is located in the physical domain or not. This is needed when performing the numerical integration and removing cells which are completely outside the physical domain. It is also possible to use other types of geometric models than the triangulated surface (B-rep model). For any other models, it is required to be able to identify whether a point is located in the physical domain or not, see [5, 13].
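The mesh-generation step described above amounts to moving the surface points with the computed displacements and building a new Cartesian grid over the bounding box of the deformed surface. The following is only a minimal sketch of that step; the inside-outside test against the deformed surface, which would follow afterwards, is omitted.

```python
import numpy as np

def new_cartesian_grid(surface_pts, displacements, n_div=(20, 20, 20)):
    """Deform the surface triangulation points, take their bounding box and
    return the cell boundaries of a new Cartesian grid covering it."""
    deformed = surface_pts + displacements
    lo, hi = deformed.min(axis=0), deformed.max(axis=0)
    return [np.linspace(lo[k], hi[k], n_div[k] + 1) for k in range(3)]
```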
4.2.2 Remeshing Criteria
We need some criteria to indicate whether the mesh is strongly distorted and remeshing is required. To this end, we define

$$\boldsymbol{J}_n = \frac{\partial \boldsymbol{x}_n}{\partial \boldsymbol{\xi}} = \left[\boldsymbol{G}_1 \;\; \boldsymbol{G}_2 \;\; \boldsymbol{G}_3\right], \qquad (25)$$
where ξ refers to the local coordinates of the cells, as depicted in Fig. 13. Furthermore, we define

$$\boldsymbol{j}_{n+1} = \tilde{\boldsymbol{F}}_{n+1}\, \boldsymbol{J}_n = \left[\boldsymbol{g}_1 \;\; \boldsymbol{g}_2 \;\; \boldsymbol{g}_3\right] \qquad (26)$$
which gives information on the quality of the cells after deformation. In addition, we introduce j_{n+1}^k = j_{n+1}(x_k) and J_n^k = J_n(x_k), and similarly G_i^k and g_i^k, in order to
shorten the notation. The criteria are evaluated in every load step at a certain number of points, usually at the integration points x_k.
Ratio of Jacobians The volumetric deformation ratio between two points is measured in this criterion [30–32] as follows

$$R = \frac{\min\limits_{k} \det \boldsymbol{j}^{\,k}_{n+1}}{\max\limits_{l} \det \boldsymbol{j}^{\,l}_{n+1}}. \qquad (27)$$
Here, R = 1 for a uniform volumetric deformation and R < 1 for a non-uniform volumetric deformation.
Orthogonality Based on the columns of the Jacobi matrix j, the orthogonality of each of the cells can be obtained [33]. First, we define the term

$$b^k_{ij} = \left(\boldsymbol{g}^k_i \cdot \boldsymbol{g}^k_i\right)\left(\boldsymbol{g}^k_j \cdot \boldsymbol{g}^k_j\right). \qquad (28)$$
Then, the criterion can be calculated as

$$O = \min_{i,j,k} \frac{b^k_{ij} - \left(\boldsymbol{g}^k_i \cdot \boldsymbol{g}^k_j\right)^2}{b^k_{ij}}. \qquad (29)$$
Thereby, if the cells are orthogonal then O = 1, otherwise O < 1.
Inverse aspect ratio Based on the columns of the Jacobi matrices J_n and j_{n+1}, we can compute the inverse aspect ratio. As long as the mesh is not yet deformed, the worst aspect ratio can be obtained using

$$A_0 = \min_{i,j,k} \frac{\left\lVert \boldsymbol{G}^k_i \right\rVert}{\left\lVert \boldsymbol{G}^k_j \right\rVert}. \qquad (30)$$
Moreover, during the deformation of the mesh, the worst inverse aspect ratio can be computed as

$$A_t = \min_{i,j,k} \frac{\left\lVert \boldsymbol{g}^k_i \right\rVert}{\left\lVert \boldsymbol{g}^k_j \right\rVert}. \qquad (31)$$
Finally, to take into account the changes in the inverse aspect ratio analogous to the deformation, the criterion is computed as

$$A = \min_{i,j,k} \frac{\left\lVert \boldsymbol{g}^k_i \right\rVert}{\left\lVert \boldsymbol{g}^k_j \right\rVert}\,\frac{\left\lVert \boldsymbol{G}^k_j \right\rVert}{\left\lVert \boldsymbol{G}^k_i \right\rVert}. \qquad (32)$$
Thereby, A = 1 if the aspect ratio continues to be constant during the analysis. If this is not the case then A < 1.
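The three criteria can be evaluated together from the Jacobi matrices stored at the integration points. The sketch below follows the reconstructed forms of Eqs. (27)–(32) above and is not taken from any particular implementation; J_n and j_np1 are assumed to be arrays of shape (npts, 3, 3).

```python
import numpy as np

def remeshing_criteria(J_n, j_np1):
    """Evaluate the distortion criteria R, O and A at a set of points."""
    dets = np.linalg.det(j_np1)
    R = dets.min() / dets.max()                                   # Eq. (27)
    O, A = 1.0, 1.0
    for k in range(len(J_n)):
        for i in range(3):
            for j in range(3):
                if i == j:
                    continue
                gi, gj = j_np1[k][:, i], j_np1[k][:, j]
                Gi, Gj = J_n[k][:, i], J_n[k][:, j]
                b = (gi @ gi) * (gj @ gj)
                O = min(O, (b - (gi @ gj)**2) / b)                # Eq. (29)
                A = min(A, (np.linalg.norm(gi) / np.linalg.norm(gj))
                          * (np.linalg.norm(Gj) / np.linalg.norm(Gi)))  # Eq. (32)
    return R, O, A
```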
4.2.3 Data Transfer
A local radial basis function (RBF) interpolation scheme, as proposed in [34, 35], is employed to be able to transfer data between the old and the new mesh. In elasticity problems, those data are the displacement d and the displacement gradient H. The RBF is established on a cloud of n_s source points x^s_k. In the local RBF, for each target point x_t, the n_n nearest source points are identified. Afterwards, a separate interpolation is established for every target point, based on the nearest source points. On the other hand, in the global RBF, all source points are taken to construct one interpolation, which can be used to evaluate any target point. Once the nearest source points to a target point x_t are identified, their indices are included in the set N. The target value v_t can be computed as

$$v_t = \sum_{i \in N} \lambda_i\, \Phi_i\!\left(\left\lVert \boldsymbol{x}^s_i - \boldsymbol{x}_t \right\rVert\right), \qquad (33)$$
where Φ_i is the radial basis function related to the source point i, and λ_i is the corresponding weight. A modified version of the inverse multiquadric RBF is used as follows:

$$\Phi(r) = \frac{1}{\sqrt{r^2 + 1}}, \qquad (34)$$
where for every source point the argument is scaled separately as

$$r_i(r) = \frac{r}{\beta\, \bar{r}_i}, \qquad (35)$$
with β defining a scaling factor and r̄_i denoting the mean distance of the source point i to its n_r nearest source neighbors. The weights λ_i are calculated from the relation

$$v^s_j = \sum_{i \in N} \lambda_i\, \Phi_i\!\left(\left\lVert \boldsymbol{x}^s_i - \boldsymbol{x}^s_j \right\rVert\right) \quad \text{for } j \in N. \qquad (36)$$
This leads to a linear system of equations with n_n unknowns which has to be solved for every target point. However, this process is parallelizable, which makes the local RBF quite efficient. On the contrary, the global RBF requires the solution of one large system of size n_s. To summarize, the proposed interpolation has three parameters: the number of source points per target point n_n, the number of neighbors n_r used to compute the mean distance r̄_i, and the scaling factor β. Based on numerical experiments, we found that using the parameters n_n = 50, n_r = 3, and β = 1 yields the best interpolation
for our examples. Furthermore, in the old mesh, only the integration points that are located in the physical domain are taken as source points while the fictitious domain points are neglected because those points usually have the worst deformations. In the new mesh all points are taken as target points. According to the choice for the RBF in (35) this yields a displacement field for the mesh, that smoothly approaches zero in the fictitious domain with increasing distance from the domain boundary.
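The local RBF transfer can be condensed into the sketch below, which follows Eqs. (33)–(36) with the parameter values quoted above. It uses a brute-force neighbor search for brevity and assumes a scalar value per source point; it is an illustration of the scheme, not the reference implementation.

```python
import numpy as np

def local_rbf_interpolate(x_src, v_src, x_tgt, n_n=50, n_r=3, beta=1.0):
    """Transfer scalar data v_src at source points x_src to target points x_tgt."""
    def phi(r):                          # modified inverse multiquadric, Eq. (34)
        return 1.0 / np.sqrt(r * r + 1.0)

    v_tgt = np.zeros(len(x_tgt))
    for t, xt in enumerate(x_tgt):
        idx = np.argsort(np.linalg.norm(x_src - xt, axis=1))[:n_n]   # nearest sources
        xs, vs = x_src[idx], v_src[idx]
        d = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=2)  # pairwise distances
        r_bar = np.sort(d, axis=1)[:, 1:n_r + 1].mean(axis=1)        # mean distance, Eq. (35)
        M = phi(d / (beta * r_bar[None, :]))                         # system of Eq. (36)
        lam = np.linalg.solve(M, vs)
        v_tgt[t] = phi(np.linalg.norm(xs - xt, axis=1) / (beta * r_bar)) @ lam  # Eq. (33)
    return v_tgt
```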
4.2.4 Basis Function Removal
Here, we will explain the idea of the basis function removal, aiming to improve the conditioning of the FCM by removing some of the high order modes of broken cells, as illustrated in Fig. 14 for the 2D case. Utilizing the hierarchic shape functions, the basis consists of a linear part (nodal modes) and a high order part (internal and edge modes). Now the task is to remove those high order modes that have a minor contribution to the solution of the problem but deteriorate the conditioning of the resulting equation system significantly. In order to maintain the rigid body motions, the nodal modes are never removed. Those modes can be easily detected when using a hierarchic basis. In the FCM, the broken cells are the source for the ill-conditioning of the problem. The high order shape functions in those cells are marked as affected, as depicted in Fig. 14. Finally, we will now present a global criterion based on the contribution of modes to the diagonal entries of the global stiffness matrix. In doing so, we are able to decide whether an affected mode should be removed or not. To this end, two local cell vectors are set up

$$\boldsymbol{q}_c = \left[q_1^1\; q_2^1\; q_3^1\; \dots\; q_1^n\; q_2^n\; q_3^n\right] \quad \text{and} \quad \boldsymbol{h}_c = \left[h_1^1\; h_2^1\; h_3^1\; \dots\; h_1^n\; h_2^n\; h_3^n\right] \qquad (37)$$
where n denotes the number of shape functions. The entries of those vectors are evaluated as

$$q_1^i = q_2^i = q_3^i = \int_{\Omega_e} \alpha \left(\frac{\partial N_i}{\partial X_1}\frac{\partial N_i}{\partial X_1} + \frac{\partial N_i}{\partial X_2}\frac{\partial N_i}{\partial X_2} + \frac{\partial N_i}{\partial X_3}\frac{\partial N_i}{\partial X_3}\right) \det \boldsymbol{J}_c \; \mathrm{d}V \qquad (38)$$
where α refers to the indicator function, and

$$h_1^i = h_2^i = h_3^i = \int_{\Omega_e} \left(\frac{\partial N_i}{\partial X_1}\frac{\partial N_i}{\partial X_1} + \frac{\partial N_i}{\partial X_2}\frac{\partial N_i}{\partial X_2} + \frac{\partial N_i}{\partial X_3}\frac{\partial N_i}{\partial X_3}\right) \det \boldsymbol{J}_c \; \mathrm{d}V. \qquad (39)$$
The term in the parenthesis gives a positive function which corresponds to the diagonal entries of the matrix G T G with G denoting the discrete gradient operator. The vectors q c and hc give an estimation of a local contribution of the ansatz for the physical and the embedded domains respectively. The values of hc define the maximum values. Next, global auxiliary vectors q and h are set up during the assembly as follows
Fig. 14 Affected and unaffected shape functions of the hierarchical basis of the FCM [24]
$$\boldsymbol{q} = \operatorname*{\mathsf{A}}_{c=1}^{n_c} \boldsymbol{q}_c \qquad \text{and} \qquad \boldsymbol{h} = \operatorname*{\mathsf{A}}_{c=1}^{n_c} \boldsymbol{h}_c. \qquad (40)$$
Using this, we can now estimate the contribution of the basis functions in a global sense by computing the ratio between q_I and h_I as

$$\mu_I = \frac{q_I}{h_I}, \quad \text{with } 0 < \mu_I \leq 1, \qquad (41)$$
where I denotes the global degree of freedom index. Based on μ_I, we remove affected shape functions if

$$\mu_I < \mu_{\min}, \quad \text{with } 0 < \mu_{\min} < 1, \qquad (42)$$
where μmin is a pre-defined threshold.
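Once the global vectors q and h are assembled, the removal decision of Eqs. (41)–(42) is a simple ratio test. The function below is a minimal sketch of that step; `affected` is an assumed boolean array marking the high-order modes supported by broken cells (nodal modes are assumed to be excluded from it beforehand).

```python
import numpy as np

def modes_to_remove(q, h, affected, mu_min=0.1):
    """Return the global dof indices of affected modes with mu_I < mu_min."""
    mu = np.asarray(q) / np.asarray(h)            # Eq. (41)
    return [I for I in range(len(mu)) if affected[I] and mu[I] < mu_min]  # Eq. (42)
```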
4.3 Numerical Examples
In the following numerical examples, we employ a hyperelastic material model proposed in [36], which is based on the strain energy function

$$W = \frac{\mu}{2}\left(\operatorname{tr}(\boldsymbol{C}) - 3\right) + \frac{\lambda}{4}\left(J^2 - 1\right) - \left(\frac{\lambda}{2} + \mu\right)\ln(J), \qquad (43)$$
where J = det(F) and C denotes the right Cauchy-Green tensor. The material parameters are defined as μ = 19.231 N/mm2 and λ = 28.846 N/mm2 .
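As a quick check of the strain energy (43) as reconstructed above, it can be evaluated for a given deformation gradient with the material parameters just quoted; the diagonal test gradient below is an arbitrary example.

```python
import numpy as np

mu_, lam = 19.231, 28.846   # N/mm^2, parameters quoted above

def W(F):
    """Strain energy of Eq. (43) for a deformation gradient F."""
    C = F.T @ F
    J = np.linalg.det(F)
    return (mu_ / 2.0) * (np.trace(C) - 3.0) + (lam / 4.0) * (J**2 - 1.0) \
           - (lam / 2.0 + mu_) * np.log(J)

print(W(np.eye(3)))                   # 0.0 in the undeformed state
print(W(np.diag([1.1, 1.0, 1.0])))    # uniaxial stretch example
```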
Fig. 15 Single cube connector [24]. Geometry, boundary conditions, and discretization
Single cube connector In this example, a single cube connector is investigated. The motivation is to study the influence of the proposed remeshing strategy and the basis function removal. To this end, the geometry can be described using the following level set function

$$\begin{aligned} \phi(\boldsymbol{x}) ={} & \big[(x-x_c)^2 + (y-y_c)^2 - R^2\big]^2 + \big[(y-y_c)^2 + (z-z_c)^2 - R^2\big]^2 + \big[(z-z_c)^2 - r^2\big]^2 \\ & + \big[(x-x_c)^2 + (z-z_c)^2 - R^2\big]^2 + \big[(x-x_c)^2 - r^2\big]^2 + \big[(y-y_c)^2 - r^2\big]^2 - d^4, \end{aligned}$$
with the center coordinates xc = yc = z c = 0 mm, the outer radius R = 15 mm, the inner radius r = 11.25 mm, and the design parameter d = 4.6 × 104 mm4 . Symmetry boundary conditions are applied and a displacement of u¯ y = 7 mm is prescribed at the top face (compression). The geometry is discretized using 129 cells setting α0 = 10−5 , as illustrated in Fig. 15. For the integration, the adaptive octree is utilized with a tree depth of k = 3 and an integration order of p + 1 in the physical domain. In the fictitious domain, a standard Gauss quadrature of order three is used. We set the tolerances for the remeshing criteria (R, O, A) to zero. In the first part, we apply the basis function removal only without remeshing. To this end, in Fig. 16, the energy-displacement curves are plotted using different threshold values μmin = {0.0, 0.1, 0.3} and ansatz orders p = 2 and 4. It can be seen that the basis function removal allows us to go further in the solution, especially for μmin = 0.3. However, the final desired load step of u¯ y = 7 mm could not be achieved. In the second part, we apply both the remeshing strategy and the basis function removal. To this end, energy-displacement curves are plotted using different threshold values μmin = {0.0, 0.1, 0.3} and ansatz orders p = 2 and 4, as illustrated in Fig. 17. It can be seen that by applying the remeshing strategy the final load step can be reached using μmin = 0.1 and 0.3. However, increasing the threshold to μmin = 0.3 shows a stiffer response because more degrees of freedom are removed. The contour plots for the von Mises stress σv M using p = 4 and μmin = 0.1 are shown in Fig. 18 for different load steps.
Fig. 16 Single cube connector. Energy-displacement curves applying only the basis function removal for different μmin [24]
Fig. 17 Single cube connector. Energy-displacement curves for different ansatzes and μmin [24]
Fig. 18 The von Mises stress σv M (MPa) for the single cube connector with p = 4 and μmin = 0.1 at different load steps [24]
Fig. 19 Single pore of a foam [24]. Geometry, boundary conditions, and discretization
Fig. 20 Single pore of a foam [24]. Energy-displacement curves for p = 2
Single pore of a foam In this example, we investigate a single pore of a foam [37] and study the performance of the remeshing strategy. To this end, the geometry is obtained from a CT scan, as illustrated in Fig. 19. The geometry is discretized with 2721 cells in the first mesh. The foam is fixed in all directions at the bottom, and a prescribed displacement of ū_y = 3.9 mm (compression) is applied at the top face, as depicted in Fig. 19. The ansatz order is set to p = 2. First, no remeshing is applied and μ_min = 0.0 is chosen. In doing so, only a displacement of 1.5 mm can be achieved because the Newton–Raphson method fails. However, applying the remeshing strategy and setting μ_min = 0.1, the final deformation of ū_y = 3.9 mm can be reached, as can be seen in Fig. 20. The contour plots of the von Mises stress σ_vM are shown in Fig. 21 for different load steps. Please note that contact is not taken into account and therefore self-penetration of the foam cannot be excluded. However, we plan to include the modeling of self-contact in future work to improve the accuracy of the simulation.
Fig. 21 The von Mises stress σv M (MPa) for a single pore of a foam with p = 2 and μmin = 0.1 at different load steps [24]
5 Conclusion In the first part of this chapter, we introduced an adaptive version of the moment fitting for broken cells in the finite cell method that performs well for nonlinear problems. Therein, the given domain is divided into subdomains based on the volume fraction of the physical domain before the moment fitting is applied on cell or subcell level. Furthermore, the position of the points was chosen a priori as the Gauss–Legendre points and the basis functions as the Lagrange basis, which avoids the need to solve a system of equations to set up the moment fitting quadrature. The performance of the adaptive moment fitting was illustrated using different numerical examples in small and large strain elastoplasticity. It has been shown that more than half of the integration points can be saved by employing the adaptive moment fitting as compared to the adaptive octree. Additionally, we showed that for the points that are located in the fictitious domain, one should assign the same nonlinear material model that is used in the physical domain to achieve better accuracy instead of using a simple elastic model. In the second part, we introduced a remeshing strategy for the finite cell method to improve its robustness for nonlinear finite strain problems and to overcome the issue of severely distorted cells. The remeshing strategy is based on a multiplicative split of the deformation gradient in a total Lagrangian framework. The basic idea is to deform the structure until the mesh is no longer capable of taking further deformation e.g., because self-penetration may occur. Before that happens, the computation is stopped and a new mesh is created that covers the deformed geometry. Afterwards, a data transfer is applied for the important quantities between the meshes with the help of a local radial basis function interpolation. In doing so, we are able to continue the analysis with the help of the new mesh. This process continues until the desired deformation is achieved. The proposed method is illustrated using different numerical examples utilizing a hyperelastic material model.
Acknowledgements The authors gratefully acknowledge support by the Deutsche Forschungsgemeinschaft in the Priority Program 1748 “High order immersed-boundary methods in solid mechanics for structures generated by additive processes”—project number 255496529 (DU 405/8-1&2, RA 624/27-1&2, SCHR 1244/4-1&2). Figures 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11 are reprinted from Computers and Mathematics with Applications, 77, S. Hubrich, A. Düster, Numerical integration for nonlinear problems of the finite cell method using an adaptive scheme based on moment fitting, 1983-1997, Copyright (2020), with permission from Elsevier. Figures 1 and 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 are reprinted from Computers and Mathematics with Applications, 80, W. Garhuom, S. Hubrich, L. Radtke, A. Düster, A remeshing strategy for large deformations in the finite cell method, 2379–2398 Copyright (2020), with permission from Elsevier.
References 1. E. Burman, S. Claus, P. Hansbo, M.G. Larson, A. Massing, Cutfem: discretizing geometry and partial differential equations. Int. J. Numer. Methods Eng. 7, 472–501 (2015) 2. F. Xu, D. Schillinger, D. Kamensky, V. Varduhn, C. Wang, M. Hsu, The tetrahedral finite cell method for fluids. Immersogeometric analysis of turbulent flow around complex geometries. Comput. & Fluids 141, 135–154 (2016) 3. V. Varduhn, M. Hsu, M. Ruess, D. Schillinger, The tetrahedral finite cell method: higher order immersogeometric analysis on adaptive non-boundary-fitted meshes. Int. J. Numer. Meth. Eng. 107, 1054–1079 (2016) 4. J. Parvizian, A. Düster, E. Rank, Finite cell method - h- and p-extension for embedded domain problems in solid mechanics. Comput. Mech. 41, 121–133 (2007) 5. A. Düster, J. Parvizian, Z. Yang, E. Rank, The finite cell method for three-dimensional problems of solid mechanics. Comput. Methods Appl. Mech. Eng. 197, 3768–3782 (2008) 6. S. Kollmannsberger, A. Özcan, M. Carraturo, N. Zander, E. Rank, A hierarchical computational model for moving thermal loads and phase changes with applications to selective laser melting. Comput. & Math. Appl. 75, 1483–1497 (2018) 7. A. Özcan, S. Kollmannsberger, J. Jomo, E. Rank, Residual stresses in metal deposition modeling: discretizations of higher order. Comput. & Math. Appl. 78, 2247–2266 (2019) 8. P. Di Stolfo, A. Düster, S. Kollmannsberger, E. Rank, A. Schröder, A posteriori error control for the finite cell method. PAMM 19, e201900419 (2019) 9. P. Di Stolfo, A. Rademacher, A. Schröder, Dual weighted residual error estimation for the finite cell method. J. Numer. Math. 27, 101–122 (2019) 10. D. Schillinger, M. Ruess, N. Zander, Y. Bazilevs, A. Düster, E. Rank, Small and large deformation analysis with the p- and B-spline versions of the finite cell method. Comput. Mech. 50, 445–478 (2012) 11. S. Kollmannsberger, D. D’Angella, E. Rank, W. Garhuom, S. Hubrich, A. Düster, P. D. Stolfo, A. Schröder, Spline- and hp-basis functions of higher differentiability in the finite cell method. GAMM-Mitteilungen 0:e202000004 (2019) 12. D. Schillinger, M. Ruess, The finite cell method: a review in the context of higher-order structural analysis of cad and image-based geometric models. Comput. Mech. 22, 391–455 (2015) 13. A. Düster, E. Rank, B. Szabo, The p-version of the finite element and finite cell methods, in Encyclopedia of Computational Mechanics, vol. 1, eds. by E. Stein, R. de Borst, T.J.R. Hughes (Wiley, New York, 2017) 14. L. Kudela, N. Zander, T. Bog, S. Kollmannsberger, E. Rank, Efficient and accurate numerical quadrature for immersed boundary methods. Adv. Modeling Simul. Eng. Sci. 2, 10 (2015) 15. L. Kudela, N. Zander, S. Kollmannsberger, E. Rank, Smart octrees: accurately integrating discontinuous functions in 3D. Comput. Methods Appl. Mech. Eng. 306, 406–426 (2016) 16. A. Abedian, Düster, An extension of the finite cell method using boolean operations. Comput. Mech. 59, 877–886 (2017)
17. G. Ventura, E. Benvenuti, Equivalent polynomials for quadrature in Heaviside function enrichment elements. Int. J. Numer. Meth. Eng. 102, 688–710 (2015) 18. A. Abedian, A. Düster, Equivalent Legendre polynomials: numerical integration of discontinuous functions in the finite element methods. Comput. Methods Appl. Mech. Eng. 343, 690–720 (2019) 19. Y. Sudhakar, W.A. Wall, Quadrature schemes for arbitrary convex/concave volumes and integration of weak form in enriched partition of unity methods. Comput. Methods Appl. Mech. Eng. 258, 39–54 (2013) 20. B. Müller, F. Kummer, M. Oberlack, Highly accurate surface and volume integration on implicit domains by means of moment-fitting. Int. J. Numer. Meth. Eng. 96, 512–528 (2013) 21. M. Joulaian, S. Hubrich, A. Düster, Numerical integration of discontinuities on arbitrary domains based on moment fitting. Comput. Mech. 57, 979–999 (2016) 22. S. Hubrich, P. Di Stolfo, L. Kudela, S. Kollmannsberger, E. Rank, A. Schröder, A. Düster, Numerical integration of discontinuous functions: moment fitting and smart octree. Comput. Mech. 60, 863–881 (2017) 23. S. Hubrich, A. Düster, Numerical integration for nonlinear problems of the finite cell method using an adaptive scheme based on moment fitting. Comput. & Math. Appl. 77, 1983–1997 (2019) 24. W. Garhuom, S. Hubrich, L. Radtke, A. Düster, A remeshing strategy for large deformations in the finite cell method. Comput. & Math. Appl. 80, 2379–2398 (2020) 25. P. Wriggers, Nonlinear Finite Element Methods (Springer, Berlin, 2008) 26. A. Düster, O. Allix, Selective enrichment of moment fitting and application to cut finite elements and cells. Comput. Mech. 65, 429–450 (2020) 27. A. Abedian, J. Parvizian, A. Düster, E. Rank, Finite cell method compared to h-version finite element method for elasto-plastic problems. Appl. Math. Mech. 35, 1239–1248 (2014) 28. A. Abedian, J. Parvizian, A. Düster, E. Rank, The finite cell method for the J2 flow theory of plasticity. Finite Elem. Anal. Des. 69, 37–47 (2013) 29. C. Simo, T.J.R. Hughes, Computational Inelasticity (Springer, New York, 1998) 30. W. Kwok, Z. Chen. A simple and effective mesh quality metric for hexahedral and wedge elements, in Proceedings of the 9th International Meshing Roundtable, IMR, New Orleans, Louisiana, USA (2000), pp. 325–333 31. M. Bucki, C. Lobos, Y. Payan, N. Hitschfeld, Jacobian-based repair method for finite element meshes. Eng. Comput. 27, 285–297 (2011) 32. W. Lowrie, V.S. Lukin, U. Shumlak, A priori mesh quality metric error analysis applied to a high-order finite element method. J. Comput. Phys. 230, 5564–5586 (2011) 33. C. Sorger, F. Frischmann, S. Kollmannsberger, E. Rank, TUM.GeoFrame: automated highorder hexahedral mesh generation for shell-like structures. Eng. Comput. 30, 41–56 (2014) 34. A. de Boer, A.H. van Zuijlen, H. Bijl, Radial basis functions for interface interpolation and mesh deformation, in Advanced Computational Methods in Science and Engineering, ed. by B. Koren, K. Vuik (Springer, Berlin, 2010) 35. M. König, L. Radtke, A. Düster, A flexible C++ framework for the partitioned solution of strongly coupled multifield problems. Comput. & Math. Appl. 72, 1764–1789 (2016) 36. P.G. Ciarlet, Mathematical Elasticity, vol. 1. Three-dimensional Elasticity (Elsevier Science Publishers, Amsterdam, 1988) 37. S. Heinze, T. Bleistein, A. Düster, S. Diebels, A. Jung, Experimental and numerical investigation of single pores for identification of effective metal foams properties. ZAMM - J. Appl. Math. Mech. 98, 682–695 (2018)
The Finite Cell Method for Simulation of Additive Manufacturing Stefan Kollmannsberger, Davide D’Angella, Massimo Carraturo, Alessandro Reali, Ferdinando Auricchio, and Ernst Rank
Abstract Additive manufacturing processes are driven by moving laser-induced thermal sources which induce strong heat fluxes and fronts of phase change coupled to mechanical fields. Their numerical simulation poses several challenges, e.g. the evolution of the (possibly complex) domain as the specimen is produced and the differences in scales of the problem. In this work, the first aspect is addressed using the Finite Cell Method, an immersed approach that removes the need for meshing and is able to accurately handle complex geometries. For the second aspect we develop a framework with local refinement to selectively increase accuracy where needed, and derefinement in previously refined regions far from the laser source to keep the overall computational cost constant throughout the simulation. In this work, we present the essential theoretical fundament of the computational framework. Then, we show its application to model additive manufacturing processes in various examples, including experimental validation.
S. Kollmannsberger (B) · D. D’Angella · E. Rank Technische Universität München, Arcisstr. 21, 80333 München, Germany e-mail: [email protected] D. D’Angella e-mail: [email protected] E. Rank e-mail: [email protected] M. Carraturo · A. Reali · F. Auricchio University of Pavia, via Ferrata 3, 27100 Pavia, Italy e-mail: [email protected] A. Reali e-mail: [email protected] F. Auricchio e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_13
1 Introduction Additive Manufacturing (AM), also known as Solid Free Form (SFF) or Rapid Prototyping (RP), is a manufacturing technology which allows for great freedom in the product design—and it is gaining popularity in many branches of industry and design. A huge variety of products from aerospace components to biomedical applications as well as fashion products and jewelry can be produced based on AM technologies. Selective Laser Melting (SLM) or Selective Laser Sintering (SLS) is a powder-bed based AM technology, that is particularly suitable to produce highly precise and complex mechanical components. In fact, the great success of SLM processes is due to the possibility of obtaining geometrically complex product shapes with net-shaped, optimized internal structures that cannot be obtained relying on classical, subtractive manufacturing techniques. However, the diffusion of Selective Laser Melting is still limited by the complexity of defining the correct manufacturing parameters required to produce the desired component. One option to gain insight is to conduct experimental studies. The other is to develop numerical models to simulate SLM processes. None can replace the other, but joining experimental data with modeling and simulations can provide a deeper understanding of the underlying physical phenomena. To this end, the validation of even the most basic characteristics of the process—such as weld pool shapes or cooling rates—is a key issue. This, in turn, will enable improved control of the manufacturing process and, accordingly, will increase the quality of the produced parts [43]. The finite element method (FEM) is a very popular approach to simulate AM thermal processes [11, 19, 56, 59, 61, 68]. For a detailed review on the state of the art of AM thermal simulations, we refer to [73]. The multi-scale nature of SLM in both space and time renders high fidelity simulations computationally challenging even for the most advanced supercomputers. In fact, spatial scales range from decimeters for the entire part down to micrometers for the laser spot size. Similarly, the entire process can take hours or even days since the required size of the time step to resolve the high laser speed locally is only in the order of microseconds. Therefore, model reduction in both space [60, 64] and time [12] have been attempted in the literature, trying to upscale the local information obtained by means of an high fidelity simulation to a full-part scale [13, 31, 40, 54, 55]. An accurate numerical approximation is also required in the opposite direction, i.e. to downscale the local information in order to predict the final microstructure of the product [27, 41, 57, 58, 63]. In the context of parameter optimization, for example, a correct evaluation of both the melt pool geometry and the thermal gradient is crucial for being able to correctly estimate the micro structural characteristics of the material and, consequently, its mechanical properties. The objective of the present work is to introduce a numerical framework alternative to standard FEM for high-fidelity simulation at the laser trace length scale. The complex evolving geometry is handled by the Finite Cell Method (FCM): an immersed FEM that does not require tedious mesh generation processes. This feature of FCM is particularly useful when we need to simulate structures designed
for AM, where standard mesh generation could be especially challenging. In fact, the so-called design for additive manufacturing (DfAM) is mainly driven by functionality and optimization rather than production constraints leading, in general, to quite complex shapes. Local refinement is used to selectively increase the resolution close to the laser, where small-scale characteristics are concentrated. Finally, mesh coarsening is used to reduce the computational costs where these small features of the solution are not present anymore and only the shape of the final artifact needs to be remembered. This way, the number of degrees of freedom is kept bounded independently of the length of the laser path and the computational costs are kept constant throughout the simulation.
2 The Finite Cell Method The basic idea of the Finite Cell Method (FCM) is to circumvent the task of mesh generation by extending the physical domain of interest Ω_phys by a fictitious part Ω_fict. Their union Ω = Ω_phys ∪ Ω_fict forms a simply shaped embedding domain, which can be meshed easily. The concept is depicted in Fig. 1. Integrals over Ω_phys can be computed in terms of integrals over Ω by means of a domain indicator function α:

∫_{Ω_phys} f(x) dV = ∫_{Ω_phys} 1 · f(x) dV + ∫_{Ω_fict} 0 · f(x) dV = ∫_{Ω} α(x) f(x) dV,    α(x) = { 1 if x ∈ Ω_phys, 0 if x ∈ Ω_fict }.
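The effect of the indicator function can be illustrated with a simple quadrature example. The sketch below integrates a function over a one-dimensional "physical" subinterval embedded in a larger cell by weighting a composite Gauss rule with α; it only illustrates the concept and is not the actual FCM implementation (the subinterval, subdivision level, and quadrature order are made-up choices).

```python
import numpy as np

def alpha(x, x_phys=(0.0, 0.6)):
    """Indicator of the physical part of the embedding interval [0, 1]."""
    return 1.0 if x_phys[0] <= x <= x_phys[1] else 0.0

def fcm_integral(f, n_sub=64, n_gauss=4):
    """Integrate alpha * f over the embedding cell [0, 1] with a composite
    Gauss rule (a 1D stand-in for the adaptive subdivision used in the FCM)."""
    xg, wg = np.polynomial.legendre.leggauss(n_gauss)   # points/weights on [-1, 1]
    total = 0.0
    for k in range(n_sub):
        a, b = k / n_sub, (k + 1) / n_sub
        x = 0.5 * (b - a) * xg + 0.5 * (a + b)
        w = 0.5 * (b - a) * wg
        total += sum(wi * alpha(xi) * f(xi) for xi, wi in zip(x, w))
    return total

# exact value of x^2 over the physical part [0, 0.6] is 0.6**3 / 3 = 0.072;
# the small deviation stems from the subcell cut by the domain boundary
print(fcm_integral(lambda x: x**2))
```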
Fig. 1 Illustration of the Finite Cell Method [9]: (a) physical domain Ω_phys, (b) fictitious domain extension Ω_fict, (c) finite cell mesh
In the context of linear elasticity, the derivation of the FCM is based on the principle of virtual work [38]:
δW(u, δu) = ∫_{Ω_phys} σ : (∇^sym δu) dV − ∫_{Ω_phys} δu · b dV − ∫_{Γ_N} δu · t dA = 0,    (1)
where σ, b, u, δu and ∇^sym denote the Cauchy stress, body forces, displacement, virtual displacement, and the symmetric part of the gradient, respectively. On the boundary Γ_N of the physical domain, the traction vector t specifies the Neumann boundary conditions. The domain indicator function α appears in the constitutive tensor C that relates stresses and strains:

σ = α C : ε.    (2)
To cope with severe ill conditioning when cells with a very small physical part of the domain are present, α is often defined as

α(x) = { 1 ∀x ∈ Ω_phys, α_0 ∀x ∈ Ω_fict },    (3)

with a small positive value 0 < α_0 ≪ 1.
where Tt , w
(10)
where Tt is the transient time in the laser trace, defined as the time interval between the laser activation and the time when the maximum weld-pool length value is measured. Finally, we introduce an additional calibration parameter w defined as the semi-major axis rate at steady state.
5.2 Experimental Validation The experimental quantities we will use to validate the computational model are taken from the AMBench2018 website [1]. AMBench2018 consists of a set of experimental measurements obtained for different AM processes at the National Institute of Standards and Technology (NIST) located in Gaithersburg, MD, USA. Its objective is to provide a reliable set of benchmarks to validate numerical models suitable to simulate AM processes. Specifically, we consider the AMB2018-02 set of measurements, which provides experimental values of the melt-pool length, width, and depth of a single laser beam traveling on a bare plate of IN625, a nickel-based alloy
Table 1 Process and material constant parameters [2]

Parameter                        Value       Units
Laser power                      195         W
Laser speed                      800         mm/s
Laser spot diameter D4σ          100         µm
ρ                                8.44e-6     kg/mm³
L                                2.8e5       J/kg
Melting temperature interval     1290–1350   °C
widely employed in SLM processes, annealed at 870 ◦ C for one hour. These measurements are obtained using an EOS M270 (EOS GmbH, Krailling, Germany) modified at NIST Laboratories to host a high-speed short-wave infrared (SWIR) camera. As done on the NIST webpage, we refer to this machine as a Commercial Build Machine (CBM).
5.2.1 Calibration of Parameters
In the used model, we do not consider all the complex physical phenomena occurring in the vicinity of the laser beam [42], which, to be properly modeled, would require the use of discrete numerical methods, such as in Khairallah et al. [42], where each single powder particle is modeled in 3D and a laser tracing heat source is employed to investigate mesoscopic effects in the melt pool. Detailed studies, such as Lee and Zhang [53], use a discrete element method including powder particles for a volume-of-fluid study. Even if one does not intend to model each powder particle, an accurate prediction of the thermal history in the close proximity of the melt pool, of melting and solidification, and of convective effects in the melt pool itself cannot be completely neglected. The process parameters are reported in Table 1, and the temperature-dependent thermal conductivity and heat capacity in Table 2; both are taken from [2]. The phase-change function f_pc is defined by

f_pc(T) = { 0, if T < T_s;  0.5 [1 + tanh(10 (T − T_M)/(T_l − T_s))], if T ∈ [T_s, T_l];  1, if T > T_l },    (11)
where TM is the reference melting temperature. The calibration process fits the numerical results with the measured data in [33], adjusting the three calibration parameters of the model defined in Sect. 5.1—the heat fraction rate f , the axis rate w, and the absorptivity η. Figure 4 describes the size of
Table 2 Temperature dependent thermal property values of IN625 [2]

Temperature (°C)   k (W/(m K))   c_p (J/(kg K))
21                 9.8           410
38                 10.1          427
93                 10.8          456
204                12.5          481
316                14.1          511
427                15.7          536
538                17.5          565
549                19            590
760                20.8          620
871                22.8          645
982                25.2          670
Fig. 4 IN625 substrate and nominal position of scan tracks [1]
the bare plate of IN625 used on the CBM to obtain the experimental measurements, while Fig. 5 depicts the substrate installation within the machine. The relative errors compared to the measured values are 0.12%, 0.68% and 10.29% for the melt pool length, width, and depth, respectively. Moreover, the graphical comparison in Fig. 6 shows that the calibrated model is able to closely recover the melt pool cross section image of the IN625 bare plate. Finally, we would like to remark that the accuracy of the presented thermal model can further be increased by applying anisotropic conductivities inside the melt pool region (see Fig. 6). Such a model is introduced in [46] along with an in-depth validation and discussion.
Fig. 5 Illustration of the substrate on the CBM [1]
Fig. 6 Secondary electron image [1] of a chemically etched cross section of a single laser track on an IN625 bare plate: Comparison of anisotropic (dashed green line) and isotropic (dashed red line) conductivity model results
5.2.2 Adaptive IGA Convergence Study
To investigate the influence of the spatial resolution of the adaptive IGA mesh on the numerical results, we compare the temperature profile using different levels of refinement. We employ a hexahedral mesh with 25 × 25 × 3 quadratic C¹-continuous elements as the initial mesh. This initial mesh is further bisected toward the laser path, up to a given level of refinement d = 1, ..., 4. After 40 time steps, we coarsen the elements with the highest distance from the laser source position. Figure 7a shows that a refined mesh is necessary if we want to correctly capture the local features in the solid-to-liquid transition region. At the same time, Fig. 7b emphasizes the importance of a locally refined mesh in this case. In fact, local mesh refinement and coarsening allow us to keep the number of DOFs constant once the process reaches a steady-state regime, leading to extremely accurate results with a relatively low number of DOFs compared to a uniform linear finite element discretization of the same accuracy. Furthermore, in Fig. 8, we can observe the influence of the mesh refinement on the cross-section profile approximation. Again, the cross-section shape of conduction-mode thermal processes (experimentally measured in Fig. 6) can be numerically recovered by means of a locally refined mesh.
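The refine-towards-the-laser and coarsen-behind-the-laser strategy can be sketched as simple bookkeeping on element refinement depths. The following sketch is purely schematic (a 1D row of elements with made-up refinement radius and coarsening lag) and does not reproduce the multi-level IGA data structures used in the chapter.

```python
def update_depths(centers, laser_x, max_depth=4, radius=0.2, lag=0.8):
    """Assign a refinement depth to each element: full depth close to the
    laser, depth 0 (coarsened) far behind it, graded in between."""
    depths = []
    for xc in centers:
        dist = abs(xc - laser_x)
        if dist <= radius:
            depths.append(max_depth)
        elif dist >= lag:
            depths.append(0)                      # coarsen far from the laser
        else:                                     # graded transition zone
            frac = 1.0 - (dist - radius) / (lag - radius)
            depths.append(round(frac * max_depth))
    return depths

centers = [i / 10 + 0.05 for i in range(10)]      # 10 elements on [0, 1]
for step, laser_x in enumerate([0.15, 0.45, 0.75]):
    print(step, update_depths(centers, laser_x))
```

Because the refined region travels with the laser while trailing elements are coarsened, the total number of unknowns stays roughly constant, which is exactly the behavior reported in Fig. 7b.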
Fig. 7 Convergence study using different multi-level IGA refinement depths d: (a) temperature profile comparison along the path [mm] for the experiment, a uniform FE-mesh, and m-l IGA with d = 1, ..., 4 (the dashed red lines indicate the phase-change interval); (b) evolution of the total number of DOFs over the time steps
Fig. 8 Temperature distribution after 40 time steps on a cross section of the domain using four different refinement depth values (depth = 1, 2, 3, 4)
Fig. 9 Comparison between the experimental and numerical cooling rate (CR) along the path [mm]: CR = 290 [°C] / Δd [mm] · V [mm/s], with Δd = 0.268 mm (experiment) and Δd = 0.223 mm (m-l IGA, d = 5)
Finally, we report the results obtained for the cooling rate (CR) of the process, defined as

CR = (1290 − 1000)/Δd · V,

where V [mm/s] is the laser speed and Δd [mm] is the distance between the two points at which the temperatures of 1290 °C and 1000 °C are attained, respectively. Figure 9 shows that the calibrated model with four levels of refinement leads to a good agreement between the two temperature profiles for temperatures below the melting temperature T_m = 1290 °C. The experimental CR mean value is 8.66 × 10^5 while the simulated CR is equal to 10.4 × 10^5, returning a relative error of approximately 20% (with V = 800 mm/s, the distances Δd = 0.268 mm and Δd = 0.223 mm marked in Fig. 9 correspond to exactly these values). The higher error obtained for the CR can be explained by two main reasons: firstly, we are computing a derived variable, for which the numerical errors are naturally higher (as for stresses in solid mechanics); secondly, the measurement uncertainty of the CR data is significantly higher. The large fluctuation in the experimental data affects the accuracy of the measurement mean value, which is taken as the reference here to compute the relative error. A detailed investigation of the effects of high-order adaptive meshes in transient thermal problems can be found in [8, 45].
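A quick numerical check of the CR definition, using the laser speed from Table 1 and the Δd values reported in Fig. 9, reproduces the numbers quoted above. This is only a sanity-check sketch, not part of the chapter's workflow.

```python
def cooling_rate(delta_d_mm, laser_speed_mm_s=800.0, dT=1290.0 - 1000.0):
    """CR = (1290 - 1000) / delta_d * V, in deg C per second."""
    return dT / delta_d_mm * laser_speed_mm_s

cr_exp = cooling_rate(0.268)   # approx. 8.66e5 (experimental Delta d)
cr_sim = cooling_rate(0.223)   # approx. 1.04e6 (simulated Delta d)
print(cr_exp, cr_sim, abs(cr_sim - cr_exp) / cr_exp)   # relative error ~0.20
```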
5.3 The Finite Cell Method in SLM Process Simulations We now consider the computational modeling of a SLM process for Ti6Al4V as depicted in Fig. 10. The computational domain consists of a solid base plate with an overlying powder layer. The powder is solidified by a laser following the path specified in the illustration. Then, a new layer of powder is added and the process is repeated until 10 layers are completed. Each layer has a thickness of 50 µm. Temperature dependent material coefficients are assigned to the three phases—powder, solid, and melt—as indicated in Fig. 11. The dependency of the heat capacity is assumed to be the same for all three phases, while the conductivity is assigned individually to each phase. The initial temperature of deposited material is T = 200 ◦ C. A Dirichlet boundary is applied at the base plate bottom surface equal to the initial temperature throughout the entire simulation. Radiation and convection boundary conditions are applied at the top surface using an emissivity of 0.8 and a convection coefficient of 5.7. Adiabatic boundary conditions are applied elsewhere. The discretizational treatment of the process itself is best explained by considering a time-step of the simulation process. Two grids are used. The first one, labeled as material grid, describes the presence of different material states (e.g. solid, powder, and air) in a voxel-like fashion (Fig. 12a), while the second one, labeled as discretization grid, defines the high-order shape function supports approximating the unknown temperature field (Fig. 12b). On the material side, we distinguish between four types of domains: air, powder, solid/liquid, and the base-plate. The distinction between air and powder is modeled using the α defined by the finite-cell method (see Eq. 3). This interface is explicitly defined as a geometric input. The change between powder and melt, however, emerges as a result of the power input by the laser beam. Once powder has changed to melt, it cannot change back to powder, which is why the material state can only vary between melt and solid thereafter. Initially, the grid spanning the basis functions discretizing the temperature field depicted in Fig. 12b consists of 8 × 8 × 5 base finite cells and it is refined by recur-
Fig. 10 Setup of the process model [45]
Fig. 11 Temperature-dependent material properties [45]
Fig. 12 Discretization of material and temperature by means of two grids [45]
sively bisecting the elements three times towards a (moving) bounding box in the close proximity of the impact point of the laser using the multi-level hp-method discussed in Sect. 3. The smallest elements have an element size of half of the layer thickness in the z-direction and 62.5 µm in the in-plane direction. The initial grids used for both the material and the discretization grid are identical, i.e., the grid describing the material coefficients is geometrically and topologically congruent to the one used for the temperature discretization. Nevertheless, the two grids refine and de-refine independently of one another, allowing us to separate the material state representation from the approximation of the temperature solution. The maximum refinement of the grid discretizing the state variables is one level finer than that of its thermal counterpart, and it refines towards sudden changes in the material coefficients. The material grid is also used for a partitioned integration of the bilinear forms. The emerging structure (logged in that grid) is depicted along with the temperature in all physical domains at the representative time steps 220, 1000 and 1670 in Fig. 13a–c, respectively. The large gradients in the solution are captured accurately
Fig. 13 Temperature field and its discretization with emerging solidified structure (gray region in the figure) at different time steps and number of degrees of freedom for all time steps throughout the process [45]
by using the multi-level hp-method, and the necessary refinements can be kept local to the impact region of the laser beam. Figure 13d depicts the number of degrees of freedom for each time step. It varies between 6000 and 8000, and increases only marginally throughout the process. The periodic spikes occur at time steps where the laser jumps from one scan path to another while the large plateaus show the change from one layer to another. The complete computation took approximately 6 h for 2000 time steps on a standard desktop computer, whereby only 30 min of CPU time were actually used for solving the resulting non-linear equation system. This clearly indicates that there is room for optimizations.
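The irreversible material-state changes described in this section (powder that melts never reverts to powder; thereafter the state only switches between melt and solid, while air is handled through the indicator α) can be captured by a small state machine. The sketch below is schematic; the transition temperatures are illustrative placeholders for Ti6Al4V and are not taken from the chapter.

```python
from enum import Enum

class State(Enum):
    AIR = 0
    POWDER = 1
    MELT = 2
    SOLID = 3

T_LIQUIDUS = 1650.0   # deg C, illustrative value only
T_SOLIDUS = 1600.0    # deg C, illustrative value only

def update_state(state, temperature):
    """Irreversible powder -> melt transition; afterwards only melt <-> solid."""
    if state in (State.POWDER, State.SOLID) and temperature >= T_LIQUIDUS:
        return State.MELT
    if state is State.MELT and temperature <= T_SOLIDUS:
        return State.SOLID
    return state                      # air cells are never activated here

s = State.POWDER
for T in [300.0, 1700.0, 900.0, 1700.0, 400.0]:
    s = update_state(s, T)
    print(T, s.name)
```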
5.4 Thermo-Mechanical Part-Scale Simulation In this section, we provide an outlook on the use of FCM for thermo-mechanical part-scale simulations. Full details of this approach will be given in [9]. A main objective in part-scale simulations is to compute the residual stresses and deflections of a manufactured structure. To this end, a small-strain thermo-elastoplastic material model is driven by a thermal strain ε_th = α_e T I, where α_e is the temperature-dependent thermal expansion coefficient and I the second-order identity tensor. The FCM simulation follows the ideas presented in the previous sections, but it includes the mechanical effects in a staggered, explicit coupling scheme as depicted in Fig. 14. The computation is started by activating a new layer of cells embedding the geometry in the build direction. Due to the very thin layers in selective laser melting, activating a layer of cells corresponds to activating multiple layers of powder at once. The thermo-mechanical problem is first solved in the new domain to obtain the temperature distribution T and the thermal strains (heating step). Accurate mechanical predictions are obtained by applying an equivalent thermal load and by setting the initial temperature for the newly created material as discussed in [74]. Finally, the residual stresses are obtained in a cooling step, where no energy input is applied.
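The staggered scheme of Fig. 14 can be summarized in a short sketch. The solver calls are placeholders, the expansion coefficient is an illustrative value, and interpreting the temperature in the thermal-strain relation as an increment relative to a reference state is an assumption made here for the sketch; none of this is taken from the chapter's implementation.

```python
import numpy as np

def thermal_strain(alpha_e, T):
    """Small-strain thermal strain eps_th = alpha_e * T * I
    (T understood relative to a reference state in this sketch)."""
    return alpha_e * T * np.eye(3)

def part_scale_simulation(n_layers, solve_thermal, solve_mechanical):
    """Staggered, explicitly coupled layer-by-layer loop (schematic)."""
    state = {"layers": 0, "T": None, "stress": None}
    for _ in range(n_layers):
        state["layers"] += 1                                # activate a new layer of cells
        state["T"] = solve_thermal(state, heating=True)     # heating step
        state["stress"] = solve_mechanical(state)           # driven by thermal strains
        state["T"] = solve_thermal(state, heating=False)    # cooling step, no energy input
        state["stress"] = solve_mechanical(state)           # residual stresses
    return state

def fake_thermal(state, heating):   return 1500.0 if heating else 200.0
def fake_mechanical(state):         return "sigma"

print(thermal_strain(alpha_e=9e-6, T=800.0))
print(part_scale_simulation(3, fake_thermal, fake_mechanical))
```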
Fig. 14 Thermo-mechanical part-scale modeling flowchart [9]
6 Credits and Permissions Part of the text of Sect. 2 was taken (and slightly altered) from [51], with permission from Springer Nature. Part of the text of Sect. 4 was taken (and slightly altered) from [15], with permission from Elsevier. Figure 2 is reprinted from [15], with permission from Elsevier. Part of the text of Sects. 5.1 and 5.2 was taken (and slightly altered) from [46], with permission from Springer Nature. Part of the text of Sect. 5.3 was taken (and slightly altered) from [45], with permission from Elsevier. Figures 10, 11, 12 and 13 are reprinted from [45], with permission from Elsevier.
References 1. AM Bench Benchmark Challenge CHAL-AMB2018-02-MP (2018). https://www.nist.gov/ ambench/amb2018-02-description 2. Special Metals Corporation (2018). http://www.speciametals.com 3. A. Abedian, A. Düster, Equivalent legendre polynomials: Numerical integration of discontinuous functions in the finite element methods. Comput. Methods Appl. Mech. Eng. 343, 690–720 (2019) 4. A. Abedian, J. Parvizian, A. Düster, H. Khademyzadeh, E. Rank, Performance of different integration schemes in facing discontinuities in the finite cell method. Int. J. Comput. Methods 10(03), 1350002 (2013) 5. M.I. Al Hamahmy, I. Deiab, Review and analysis of heat source models for additive manufacturing. Int. J. Adv. Manuf. Technol. 106(3-4), 1223–1238 (2020) 6. I. Babuška, B.A. Szabo, I.N. Katz, The p-version of the finite element method. SIAM J. Numer. Anal. 18, 515–545 (1981) 7. T. Bog, N. Zander, S. Kollmannsberger, E. Rank, Weak imposition of frictionless contact constraints on automatically recovered high-order, embedded interfaces using the finite cell method. Comput. Mech. 61(4), 385–407 (2018) 8. M. Carraturo, C. Giannelli, A. Reali, R. Vázquez, Suitably graded thb-spline refinement and coarsening: Towards an adaptive isogeometric analysis of additive manufacturing processes. Comput. Methods Appl. Mech. Eng. 348, 660–679 (2019) 9. M. Carraturo, J. Jomo, S. Kollmannsberger, A. Reali, F. Auricchio, E. Rank, Modeling and experimental validation of an immersed thermo-mechanical part-scale analysis for laser powder bed fusion processes. Additive Manuf. 36(2020). https://doi.org/10.1016/j.addma.2020. 101498 10. D. Celentano, E. Oñate, S. Oller, A temperature-based formulation for finite element analysis of generalized phase-change problems. Int. J. Numer. Meth. Eng. 37(20), 3441–3465 (1994) 11. B. Cheng, S. Price, J. Lydon, K. Cooper, K. Chou, On process temperature in powder-bed electron beam additive manufacturing: model development and validation. J. Manuf. Sci. Eng. 136(6), 061018 (2014) 12. M. Chiumenti, X. Lin, M. Cervera, W. Lei, Y. Zheng, W. Huang, Numerical simulation and experimental calibration of additive manufacturing by blown powder technology. Part I: thermal analysis. Rapid Protot J 23(2), 448–463 (2017) 13. M. Chiumenti, E. Neiva, E. Salsi, M. Cervera, S. Badia, J. Moya, Z. Chen, C. Lee, C. Davies, Numerical modelling and experimental validation in Selective Laser Melting. Addit. Manuf. 18, 171–185 (2017) 14. J.A. Cottrell, T.J.R. Hughes, Y. Bazilevs, Isogeometric analysis: Towards Integration of CAD and FEM (Wiley, 2009)
15. D. D’Angella, S. Kollmannsberger, E. Rank, A. Reali, Multi-level Bézier extraction for hierarchical local refinement of isogeometric analysis. Comput. Methods Appl. Mech. Eng. 328, 147–174 (2018) 16. D. D’Angella, A. Reali, Efficient extraction of hierarchical b-splines for local refinement and coarsening of isogeometric analysis. Comput. Methods Appl. Mech. Eng. 367, 113131 (2020) 17. D. D’Angella, N. Zander, S. Kollmannsberger, F. Frischmann, E. Rank, A. Schröder, A. Reali, Multi-level hp-adaptivity and explicit error estimation. Adv. Model. Simul. Eng. Sci. 3(1), 33 (2016) 18. M. Dauge, A. Düster, E. Rank, Theoretical and numerical investigation of the finite cell method. Technical Report hal-00850602, CCSD (2013) 19. E.R. Denlinger, J. Irwin, P. Michaleris, Thermomechanical Modeling of Additive Manufacturing Large Parts. J. Manuf. Sci. Eng. 136(6), 061007 (2014) 20. P. Di Stolfo, A. Düster, S. Kollmannsberger, E. Rank, A. Schröder, A posteriori error control for the finite cell method. PAMM 19(1), e201900419 (2019) 21. P. Di Stolfo, A. Rademacher, A. Schröder, Dual weighted residual error estimation for the finite cell method. J. Numer. Math. 27(2), 101–122 (2019) 22. A. Düster, J. Parvizian, Z. Yang, E. Rank, The finite cell method for three-dimensional problems of solid mechanics. Comput. Methods Appl. Mech. Eng. 197, 3768–3782 (2008) 23. A. Düster, O. Allix, Selective enrichment of moment fitting and application to cut finite elements and cells. Comput. Mech. 65(2), 429–450 (2020) 24. A. Düster, E. Rank, B.A. Szabó, The p-version of the finite element method and finite cell methods, in Encyclopedia of Computational Mechanics, vol. 2, ed. by E. Stein, R. Borst, T.J.R. Hughes (Wiley, Chichester, West Sussex, 2017), pp. 1–35 25. M. Elhaddad, N. Zander, T. Bog, L. Kudela, S. Kollmannsberger, J. Kirschke, T. Baum, M. Ruess, E. Rank, Multi-level hp-finite cell method for embedded interface problems with application in biomechanics. Int. J. Numer. Methods Biomed. Eng. 34(4), e2951 (2018) 26. D.R. Forsey, R.H. Bartels, Hierarchical B-spline Refinement, in Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’88, pages 205– 212, New York, NY, USA, 1988. ACM 27. S. Ghosh, L. Ma, N. Ofori-Opoku, J.E. Guyer, On the primary spacing and microsegregation of cellular dendrites in laser deposited Ni-Nb alloys. Modell. Simul. Mater. Sci. Eng. 25(6), 065002 (2017) 28. C. Giannelli, B. Jüttler, S.K. Kleiss, A. Mantzaflaris, B. Simeon, J. Špeh, THB-splines: An effective mathematical technology for adaptive refinement in geometric design and isogeometric analysis. Comput. Methods Appl. Mech. Eng. 299, 337–365 (2016) 29. C. Giannelli, B. Jüttler, H. Speleers, THB-splines: The truncated basis for hierarchical splines. Comput. Aided Geom. Des. 29(7), 485–498 (2012) 30. J. Goldak, A. Chakravarti, M. Bibby, A new finite element model for welding heat sources. Metall. Trans. B 15(2), 299–305 (1984) 31. M. Gouge, E. Denlinger, J. Irwin, C. Li, P. Michaleris, Experimental validation of thermomechanical part-scale modeling for laser powder bed fusion processes. Addit. Manuf. 29, 100771 (2019) 32. G. Greiner, K. Hormann. Interpolating and approximating scattered 3d-data with hierarchical tensor product b-splines, in In Surface Fitting and Multiresolution Methods (Vanderbilt University Press, 1997), pp. 163–172 33. J.C. Heigel, B.M. Lane, Measurement of the melt pool length during single scan tracks in a commercial laser powder bed fusion process. J. Manuf. Sci. 
Eng. 140(5) (2018) 34. S. Hubrich, P. Di Stolfo, L. Kudela, S. Kollmannsberger, E. Rank, A. Schrüder, A. Düster, Numerical integration of discontinuous functions: moment fitting and smart octree. Comput. Mech. 1–19 (2017). https://doi.org/10.1007/s00466-017-1441-0 35. S. Hubrich, A. Düster, Numerical integration for nonlinear problems of the finite cell method using an adaptive scheme based on moment fitting. Comput. Math. Appl. 77(7), 1983–1997 (2019)
36. S. Hubrich, M. Joulaian, P.D. Stolfo, A. Schröder, A. Düster, Efficient numerical integration of arbitrarily broken cells using the moment fitting approach. Pamm 16, 201–202 (2016) 37. L. Hug, S. Kollmannsberger, Z. Yosibash, E. Rank, A 3D benchmark problem for crack propagation in brittle fracture. Comput. Methods Appl. Mech. Eng. 364(2020). https://doi.org/10. 1016/j.cma.2020.112905 38. T.J.R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis (Dover Publications, 2000) 39. T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs, Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Comput. Methods Appl. Mech. Eng. 194, 4135–4195 (2005) 40. N. Keller, F. Neugebauer, H. Xu, V. Ploshikhin, Thermo-mechanical simulation of additive layer manufacturing of titanium aerospace structures, in Proceedings of the LightMAT Conference (2013) 41. T. Keller, G. Lindwall, S. Ghosh, L. Ma, B.M. Lane, F. Zhang, U.R. Kattner, E.A. Lass, J.C. Heigel, Y. Idell, Application of finite element, phase-field, and CALPHAD-based methods to additive manufacturing of Ni-based superalloys. Acta Mater. 139, 244–253 (2017) 42. S.A. Khairallah, A.T. Anderson, A. Rubenchik, W.E. King, Laser powder-bed fusion additive manufacturing: Physics of complex melt flow and formation mechanisms of pores, spatter, and denudation zones. Acta Mater. 108, 36–45 (2016) 43. W.E. King, A.T. Anderson, R.M. Ferencz, N.E. Hodge, C. Kamath, S.A. Khairallah, A.M. Rubenchik, Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges. Appl. Phys. Rev. 2(4), 041304 (2015) 44. S. Kollmannsberger, A. Özcan, J. Baiges, M. Ruess, E. Rank, A. Reali, Parameter-free, weak imposition of Dirichlet boundary conditions and coupling of trimmed and non-conforming patches. Int. J. Numer. Meth. Eng. 101(9), 1–30 (2014) 45. S. Kollmannsberger, A. Özcan, M. Carraturo, N. Zander, E. Rank, A hierarchical computational model for moving thermal loads and phase changes with applications to selective laser melting. Comput. Math. Appl. 75(5), 1483–1497 (2018) 46. S. Kollmannsberger, M. Carraturo, A. Reali, F. Auricchio, Accurate prediction of melt pool shapes in laser powder bed fusion by the non-linear temperature equation including phase changes. Integrat. Mater. Manuf. Innov. 8(2), 167–177 (2019) 47. S. Kollmannsberger, D. D’Angella, E. Rank, W. Garhuom, S. Hubrich, A. Düster, P. D. Stolfo, A. Schrüder, Spline- and hp-basis functions of higher differentiability in the finite cell method. GAMM-Mitteilungen 0(0), e202000004 (2019) 48. N. Korshunova, J. Jomo, G. Lékó, D. Reznik, P. Balázs, S. Kollmannsberger, Image-based material characterization of complex microarchitectured additively manufactured structures (2019). arXiv:1912.07415 49. R. Kraft, Surface Fitting and Multiresolution Methods Adaptive and linearly independent multilevel B-splines. (Vanderbilt University Press, Nashville, 1997) 50. L. Kudela, S. Kollmannsberger, U. Almac, E. Rank, Direct structural analysis of domains defined by point clouds. Comput. Methods Appl. Mech. Eng. 358, 112581 (2020) 51. L. Kudela, N. Zander, T. Bog, S. Kollmannsberger, E. Rank, Efficient and accurate numerical quadrature for immersed boundary methods. Adv. Model. Simul. Eng. Sci. 2(1), 10 (2015) 52. L. Kudela, N. Zander, S. Kollmannsberger, E. Rank, Smart octrees: accurately integrating discontinuous functions in 3d. Comput. Methods Appl. Mech. Eng. (2016) 53. Y. Lee, W. 
Zhang, Modeling of heat transfer, fluid flow and solidification microstructure of nickel-base superalloy fabricated by laser powder bed fusion. Addit. Manuf. 12, 178–188 (2016) 54. C. Li, J. Liu, X. Fang, Y. Guo, Efficient predictive model of part distortion and residual stress in selective laser melting. Addit. Manuf. 17, 157–168 (2017) 55. X. Liang, Q. Chen, L. Cheng, D. Hayduke, A.C. To, Modified inherent strain method for efficient prediction of residual deformation in direct metal laser sintered components. Comput. Mech. 64(6), 1719–1733 (2019)
56. L.-E. Lindgren, Numerical modelling of welding. Comput. Methods Appl. Mech. Eng. 195(48– 49), 6710–6736 (2006) 57. L.-E. Lindgren, A. Lundbäck, M. Fisk, R. Pederson, J. Andersson, Simulation of additive manufacturing using coupled constitutive and microstructure models. Addit. Manuf. 12, 144– 158 (2016) 58. J. Liu, A.C. To, Quantitative texture prediction of epitaxial columnar grains in additive manufacturing using selective laser melting. Addit. Manuf. 16, 58–64 (2017) 59. A. Lundbäck, L.-E. Lindgren, Modelling of metal deposition. Finite Elem. Anal. Des. 47(10), 1169–1177 (2011) 60. N. Patil, D. Pal, H. Khalid Rafi, K. Zeng, A. Moreland, A. Hicks, D. Beeler, B. Stucker, A generalized feed forward dynamic adaptive mesh refinement and derefinement finite element framework for metal laser sintering-part i: formulation and algorithm development. J. Manuf. Sci. Eng. 137(4), 041001 (2015) 61. R.B. Patil, V. Yadava, Finite element analysis of temperature distribution in single metallic powder layer during metal laser sintering. Int. J. Mach. Tools Manuf. 47(7–8), 1069–1080 (2007) 62. M. Petö, F. Duvigneau, S. Eisenträger, Enhanced numerical integration scheme based on imagecompression techniques: application to fictitious domain methods. Adv. Model. Simul. Eng. Sci. 7(1), 21 (2020) 63. P. Promoppatum, S.-C. Yao, P.C. Pistorius, A.D. Rollett, P.J. Coutts, F. Lia, R. Martukanitz, Numerical modeling and experimental validation of thermal history and microstructure for additive manufacturing of an Inconel 718 product. Progress in Additive Manufacturing (2018) 64. D. Riedlbauer, P. Steinmann, J. Mergheim, Thermomechanical finite element simulations of selective electron beam melting processes: performance considerations. Comput. Mech. 54(1), 109–122 (2014) 65. M. Ruess, D. Schillinger, Y. Bazilevs, V. Varduhn, E. Rank, Weakly enforced essential boundary conditions for NURBS-embedded and trimmed NURBS geometries on the basis of the finite cell method. Int. J. Numer. Meth. Eng. 95(10), 811–846 (2013) 66. D. Schillinger, M. Ruess, N. Zander, Y. Bazilevs, A. Düster, E. Rank, Small and large deformation analysis with the p- and B-spline versions of the finite cell method. Comput. Mech. 50, 445–478 (2012) 67. D. Schillinger, I. Harari, M.-C. Hsu, D. Kamensky, S.K.F. Stoter, Y. Yu, Y. Zhao, The nonsymmetric Nitsche method for the parameter-free imposition of weak boundary and coupling conditions in immersed finite elements. Comput. Methods Appl. Mech. Eng. 309, 625–652 (2016) 68. E. Soylemez, High deposition rate approach of selective laser melting through defocused single bead experiments and thermal finite element analysis for Ti-6Al-4V. Addit. Manuf. 31, 100984 (2020) 69. P.D. Stolfo, A. Schröder, N. Zander, S. Kollmannsberger, An easy treatment of hanging nodes in hp-finite elements. Finite Elem. Anal. Des. 121, 101–117 (2016) 70. B. Szabó and I. Babuška, Finite Element Analysis (Wiley, 1991) 71. B. Wassermann, S. Kollmannsberger, T. Bog, E. Rank, From geometric design to numerical analysis: A direct approach using the finite cell method on constructive solid geometry. Comput. Methods Appl. Mech. Eng. 74(7), 1703–1726 (2017). High-Order Finite Element and Isogeometric Methods 2016 72. B. Wassermann, S. Kollmannsberger, S. Yin, S. Kudela, E. Rank, Integrating CAD and numerical analysis: “Dirty geometry” handling using the Finite Cell Method. Comput. Methods Appl. Mech. Eng. 351, 808–835 (2019) 73. Z. Yan, W. Liu, Z. Tang, X. Liu, N. Zhang, M. Li, H. 
Zhang, Review on thermal analysis in laser-based additive manufacturing. Opt. Laser Technol. 106, 427–441 (2018) 74. Y. Yang, M. Allen, T. London, V. Oancea, Residual strain predictions for a powder bed fusion inconel 625 single cantilever part. Integrat. Mater. Manuf. Innov. 8(3), 294–304 (2019) 75. N. Zander, T. Bog, M. Elhaddad, F. Frischmann, S. Kollmannsberger, E. Rank, The multi-level hp-method for three-dimensional problems: Dynamically changing high-order mesh refinement with arbitrary hanging nodes. Comput. Methods Appl. Mech. Eng. 310, 252–277 (2016)
Error Control and Adaptivity for the Finite Cell Method Paolo Di Stolfo and Andreas Schröder
Abstract In this work, we discuss a posteriori error control and adaptivity in the setting of the finite cell method (FCM). For this purpose, we introduce k-times differentiable basis functions for hp-adaptive meshes consisting of paraxial rectangles with arbitrary-level hanging nodes suitable for the immersed-boundary setting of the FCM. Furthermore, we present error control for Poisson’s problem in the context of the finite cell method. To this end, we establish a reliable residual-based estimator for the energy error. Additionally, we introduce a dual-weighted residual estimator capable of separating the discretization error from the quadrature error which poses a second error source typically arising in the FCM. Several numerical experiments illustrate the reliability and efficiency properties of the estimators.
1 Introduction The finite cell method (FCM) introduced by Düster, Parvizian, and Rank [20, 41] is an immersed-boundary method which combines the fictitious domain approach [24, 46] with (higher-order) finite elements. The main idea of the method consists in embedding the possibly complicated physical domain of the problem into an enclosing domain of a simple shape (e.g., a square or a cube) which enables a simple meshing process. Owing to its conceptual simplicity, the FCM has been applied to a vast number of problems such as hyperelasticity [23], thermo-elasticity and thermo-plasticity [40, 57], geometrical non-linearities [47], bio-mechanics [45, 55], elasto-plasticity [2, 53], foamed materials [25, 26], and brittle fracture [29]. The geometry of the physical domain is recovered by multiplying the finite element functions with an indicator function having the value 1 in the physical domain and a P. Di Stolfo · A. Schröder (B) Fachbereich Mathematik, Paris Lodron Universität Salzburg, Hellbrunner Straße 34, 5020 Salzburg, Austria e-mail: [email protected] P. Di Stolfo e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_14
small value in the fictitious domain, which is the difference of the enclosing and the physical domain. Obviously, this approach shifts the problem of meshing a complicated domain to the problem of integrating by means of an appropriate quadrature rule for these functions. In recent years, several numerical integration schemes have been developed, see, e.g., [1, 7, 18, 19, 27, 28, 30, 33, 34]. Naturally, these methods present a tradeoff between accuracy, computational efficiency, and simplicity. In particular, if the quadrature rule is not accurate, the quadrature error is a source of error in addition to the discretization error. In this paper, we deal with adaptivity and a posteriori error control for Poisson’s problem in the setting of the finite cell method. In particular, we consider two aspects in the context of this method: first, the definition of C k basis functions for adaptive meshes with hanging nodes and, second, the derivation of residual-based a posteriori estimates in the energy norm as well as goal-oriented error estimates for user-defined quantities of interest. The proposed C k basis functions are defined in such a way that arbitrary-level hanging nodes for any k ∈ N0 are allowed on meshes consisting of two-dimensional rectangles for the use in an hp-adaptive setting. The approach is based on a combination of Hermite and Gegenbauer polynomials and allows for varying anisotropic polynomial degrees per element. As each basis function is associated with a node of the mesh (i.e., a vertex, an edge, or an element), its support is independent of k. In particular, the supports have the same sizes as the supports of the usual C 0 shape functions (for instance, four elements for vertices, two elements for edges, and one element for an element in the case of uniform meshes). This is in contrast to classical B-spline approaches where the enforcement of higher differentiability properties requires large patches of mesh elements as a support, which, typically, leads to a strong coupling of the basis functions and to a denser structure of the stiffness matrix. We emphasize that the construction of C k basis functions presented in this paper is tailored to meshes consisting of rectangles which are typically applied in the finite cell method. Thus, these basis functions are very appropriated to be used within this method. However, the use of these functions is not restricted to the finite cell method. They can be applied in each method where C k differentiability is required on domains which can be meshed by rectangles. The technique is based on the methods developed in [15, 32] and allows for an extension to any space dimension [13]. While error control is available for many standard finite element approaches, only few publications deal with error control for the FCM or similar cut finite element methods [6]. These include recovery-based error estimates [37], goal-oriented error estimates [9, 21, 22, 38] and implicit error estimates [51]. In this work, we present a reliable residual-based error estimator for the energy error suitable for hp-adaptive finite elements. Here, the quadrature error is supposed to be negligible. Furthermore, we derive a dual-weighted residual error estimator based on [11, 12] which is able to separate the discretization error (measured with respect to a linear functional of interest) and the quadrature error. 
Moreover, we suggest a refinement strategy that performs adaptation of both the finite cell mesh and an associated quadrature rule with the goal to keep the two sources of error in balance. This balancing is to be
understood in the sense that the quadrature rule is adapted in such a way that the quadrature error is sufficiently small to provide a useful discrete solution. The paper is structured as follows. In Sect. 2, the finite cell method in the context of Poisson's problem is presented. The C^k basis functions are introduced in Sect. 3. Sections 4 and 5 introduce the a posteriori error estimates. Finally, we provide a conclusion and an outlook in Sect. 6.
2 The Finite Cell Method for the Poisson Problem In this section, we introduce the finite cell method (FCM) for Poisson's problem on a domain Ω ⊂ R². We suppose that ∂Ω is split into a closed, non-empty boundary part Γ_D for Dirichlet boundary conditions and a boundary part Γ_N for Neumann boundary conditions. We define V := H¹_{Γ_D}(Ω) := {v ∈ H¹(Ω); v = 0 on Γ_D} and assume f ∈ L²(Ω) and g ∈ L²(Γ_N). As usual, Poisson's problem consists in finding a solution u ∈ V with

a(u, v) = F(v)    (1)

for all v ∈ V, where a : V × V → R is the bilinear form a(u, v) := (∇u, ∇v)_Ω and F : V → R is the linear form F(v) := (f, v)_Ω + (g, v)_{Γ_N}. Here, (·, ·)_ω denotes the L² inner product on ω ⊂ R² with the induced norm ‖·‖²_ω := (·, ·)_ω. For the H¹-norm we write ‖·‖²_{1,ω} := ‖·‖²_ω + ‖∇·‖²_ω. In order to determine a discrete solution by means of the FCM, it is typical to specify an enclosing domain Ω̂ with Ω ⊂ Ω̂. In practical applications, the enclosing domain has a simple shape, e.g., it is the union of a few rectangles, so that a finite element mesh T_h of Ω̂ consisting of closed rectangles can easily be constructed. In order to avoid dealing with the weak imposition of Dirichlet boundary conditions, we suppose that Γ_D is a subset of ∂Ω̂ which is compatible with T_h, i.e., Γ_D is the union of some closed edges of T_h, so that the Dirichlet data can be imposed in a strong manner. By Γ̄_N we denote the part of Γ_N whose closure is also assumed to be compatible with T_h. In order to treat domains with complicated boundaries, we allow the boundary part Γ̂_N := Γ_N \ Γ̄_N to have a curved shape (in contrast to the piecewise linear boundary parts Γ_D and Γ̄_N), see Fig. 1. For this purpose, we assume the existence of functions ϕ_i ∈ C¹([a_1^i, a_2^i]) with a_1^i < a_2^i, i = 1, ..., m, and counterclockwise rotations with translation T_i : R² → R² such that the closure of Γ̂_N coincides with ∪_{i=1}^m T_i(Γ_i), where Γ_i is the graph of ϕ_i, i.e. Γ_i := {(x, ϕ_i(x)); a_1^i ≤ x ≤ a_2^i}. These assumptions on the geometry cover a wide range of applications and are, in particular, essential for the residual-based error estimates introduced in Sect. 4. In the classical application of the FCM, the discrete weak formulation of Poisson's problem is replaced by the perturbation: Find u_h ∈ V_h such that

ã(u_h, v_h) = F̃(v_h)    (2)
Fig. 1 Configuration of $\Omega$, $\hat\Omega$, and an FCM mesh
for all $v_h \in V_h$, where $V_h$ is a finite element space of $\hat V := H^1_{\Gamma_D}(\hat\Omega)$ defined on $T_h$ with $V_h|_\Omega \subset V$. The bilinear form $\tilde a$ and linear form $\tilde F$ may have the approximation property

$\tilde a(u_h, v_h) \approx a^\varepsilon(u_h, v_h) := a(u_h|_\Omega, v_h|_\Omega) + \varepsilon\, (\nabla u_h, \nabla v_h)_{\hat\Omega \setminus \Omega}, \qquad \tilde F(v_h) \approx F(v_h|_\Omega)$    (3)
for a fixed $0 < \varepsilon \ll 1$ and reflect perturbations of $a$ and $F$, respectively, which are related to the positive definiteness of the problem and to the inexact integration. The first perturbation, caused by the term $\varepsilon\, (\nabla u_h, \nabla v_h)_{\hat\Omega \setminus \Omega}$, serves as a stabilization ensuring that $a^\varepsilon$ is positive definite on $V_h$. While the solution of (2) with $\tilde a := a^\varepsilon$ exists for any $\varepsilon > 0$, the parameter is often chosen in the range of $10^{-14}$ up to $10^{-8}$ in computational practice. Clearly, a positive $\varepsilon$ leads to a modeling error and it thus should be chosen such that the modeling error and the discretization error are properly balanced. The additional term causes Galerkin orthogonality to be lost, since

$(\nabla u - \nabla u_h, \nabla v_h)_\Omega = \varepsilon\, (\nabla u_h, \nabla v_h)_{\hat\Omega \setminus \Omega}$    (4)
for all $v_h \in V_h$. The second perturbation, indicated by $\approx$ in (3), is caused by inexact integration. The integrals involved in $a^\varepsilon$, $F$ are approximated by a non-standard quadrature scheme due to the fact that $\Omega$, $\hat\Omega \setminus \Omega$ may not coincide with any union of elements of $T_h$. While some of these quadrature schemes provide exact integration up to machine precision, others introduce a quadrature error which should also be in balance with the discretization error. In the remainder of this section, we briefly describe the construction of a quadtree which can be used for an adaptive quadrature scheme. A quadtree $Q$ is a set of paraxial rectangles which is constructed on the basis of $T_h$ and a prescribed depth $\alpha : T_h \to \mathbb{N}_0$ indicating the number of recursive refinements of $K \in T_h$ towards $\partial\Omega$. The following procedure generates a quadtree:

1. Set $i := 0$, $Q_K := \{K\}$.
Fig. 2 Visualization of the quadtree
2. If $i = \alpha(K)$, exit.
3. Replace all rectangles in $Q_K$ with points in $\Omega$ and $\hat\Omega \setminus \Omega$ by their subdivision into $2^d$ paraxial rectangles.
4. Increase $i$ by 1 and go to step 2.

Finally, the quadtree is obtained by $Q := \bigcup_{K \in T_h} Q_K$. Consequently, the usual quadrature rules for finite element functions can be employed upon each $R \in Q$ (e.g., Gauss's rule). Integrals on $\Omega$ and $\hat\Omega \setminus \Omega$ are approximated by quadrature rules on all $R \in Q$ with $R \subset \Omega$ and $R \subset \hat\Omega \setminus \Omega$, respectively. We note that rectangles $R \in Q$ with $|R \cap \Omega| > 0$ and $|R \cap (\hat\Omega \setminus \Omega)| > 0$ are typically used for the computation of integrals either on $\Omega$ or on $\hat\Omega \setminus \Omega$. The result of the procedure is visualized in Fig. 2 for an FCM mesh for a quarter disk $\Omega$, the embedding unit square $\hat\Omega$, which is subdivided into four equally sized elements, and a depth $\alpha$. We note that the procedure for generating a quadtree usually performs the test in step 3 only approximately using sample points (e.g., the four vertices of a rectangle). Due to its simplicity, the technique can be easily applied to complicated geometries. An obvious disadvantage of the quadtree is the fact that it offers only a piecewise constant approximation to $\partial\Omega$. Hence, a high number of recursive refinements may be required to approximate the domain sufficiently well. In the context of the FCM, several improvements of this basic quadtree procedure have been developed, see [7, 34].
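For illustration, the following minimal Python sketch shows how such a depth-based quadtree quadrature could be realized for a single cut cell. It is not the authors' implementation: the corner-based inside test and the center-point classification of the leaves are the kind of approximate sample-point tests mentioned above, and all function names are illustrative.

import numpy as np

def build_quadtree(rect, inside, depth):
    # Recursively subdivide a paraxial rectangle towards the boundary of Omega.
    # rect   -- (x0, y0, x1, y1), an axis-aligned cell
    # inside -- indicator function: inside(x, y) is True for points in Omega
    # depth  -- number of remaining recursive refinements (the depth alpha(K))
    # Returns the list of leaf rectangles forming Q_K.
    x0, y0, x1, y1 = rect
    corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
    flags = [inside(x, y) for (x, y) in corners]
    # Only cells sampled both inside and outside Omega (cut cells) are refined.
    if depth == 0 or all(flags) or not any(flags):
        return [rect]
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    children = [(x0, y0, xm, ym), (xm, y0, x1, ym),
                (x0, ym, xm, y1), (xm, ym, x1, y1)]
    leaves = []
    for child in children:
        leaves += build_quadtree(child, inside, depth - 1)
    return leaves

def integrate_on_omega(rect, f, inside, depth, n_gauss=3):
    # Approximate the integral of f over rect intersected with Omega by a tensor Gauss
    # rule on each quadtree leaf; a leaf is assigned to Omega if its center lies in Omega.
    xg, wg = np.polynomial.legendre.leggauss(n_gauss)
    total = 0.0
    for (x0, y0, x1, y1) in build_quadtree(rect, inside, depth):
        hx, hy = 0.5 * (x1 - x0), 0.5 * (y1 - y0)
        cx, cy = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        if not inside(cx, cy):
            continue                      # leaf counted towards the fictitious part
        for xi, wi in zip(xg, wg):
            for yj, wj in zip(xg, wg):
                total += wi * wj * hx * hy * f(cx + hx * xi, cy + hy * yj)
    return total

# Example: quarter disk Omega inside the unit square; the integral of 1 approaches pi/4
inside = lambda x, y: x * x + y * y <= 1.0
print(integrate_on_omega((0.0, 0.0, 1.0, 1.0), lambda x, y: 1.0, inside, depth=6))

Increasing the depth improves the piecewise constant boundary approximation at the cost of more leaves, which reflects the accuracy/efficiency tradeoff discussed above.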
3 Basis Functions for Finite Cell Meshes with Hanging Nodes

In this section, we construct $k$-times differentiable basis functions defined on meshes with hanging nodes for any $k \in \mathbb{N}_0$ based on a (possibly anisotropic) polynomial degree distribution $p : T_h \to \mathbb{N}^2$. With the application to the finite cell method in mind, it is sufficient to consider meshes that consist of $d$-dimensional rectangles.
Here, the treatment is restricted to $d = 2$. For the general $d$-dimensional case, we refer to [13]. The basis functions are constructed by means of shape functions defined on each element in $T_h$, which, in turn, are specified using tensor products of shape functions defined on $[-1, 1]$. Let $p_1 \in \mathbb{N}$ with $p_1 \ge 2k + 1$. We define the nodes $N_1$ of $[-1, 1]$ by $N_1 := \{\{-1\}, \{1\}, (-1, 1)\}$ and associate with each $n_1 \in N_1$ a set of indices $J_1(n_1, p_1)$ where

$J_1(n_1, p_1) := \begin{cases} \{0, \ldots, k\}, & n_1 \in \{\{-1\}, \{1\}\}, \\ \{2k + 2, \ldots, p_1\}, & n_1 = (-1, 1). \end{cases}$

For $n_1 \in N_1$ with $n_1 = \{x\}$ for $x \in \{-1, 1\}$ and $j_1 \in J_1(n_1, p_1)$, we define $\sigma_{n_1, j_1} \in P_{2k+1}$ as the unique solution of the Hermite interpolation problem

$\sigma_{n_1, j_1}^{(S)}(X) = \delta_{x,X}\, \delta_{j_1,S}, \qquad X \in \{-1, 1\},\ S \in \{0, \ldots, k\}.$
These functions can be computed efficiently and automatically by their representation in the Newton basis [13]. For $n_1 = (-1, 1)$ and $j_1 \in J_1(n_1, p_1)$, we define $\sigma_{n_1, j_1} \in P_{j_1}$ by

$\sigma_{n_1, j_1} := G_{j_1}^{-1/2-k},$

where $G_{j_1}^{\rho}$ are the Gegenbauer polynomials [54] with the property

$\sigma_{n_1, j_1}^{(S)}(X) = 0, \qquad X \in \{-1, 1\},\ S \in \{0, \ldots, k\}.$
The shape functions are visualized in Fig. 3 for $k \in \{0, 1, 2\}$. In order to obtain shape functions of degree $p = (p_1, p_2) \in \mathbb{N}^2$ with $p_1, p_2 \ge 2k + 1$ defined on $[-1, 1]^2$, we take Cartesian products to obtain the two-dimensional nodes $N_2 = \{n_1 \times n_2;\ n_1, n_2 \in N_1\} = \{v_0, v_1, v_2, v_3, E_0, E_1, E_2, E_3, K_0\}$. The nodes in $N_2$ are visualized in Fig. 4a. For $n \in N_2$ we define the indices $J_2(n, p) := J_1(n_1, p_1) \times J_1(n_2, p_2)$. For $n \in N_2$ and $j \in J_2(n, p)$, the shape functions $\tau_{n,j}$ are defined by

$\tau_{n,j}(x) := \sigma_{n_1, j_1}(x_1)\, \sigma_{n_2, j_2}(x_2).$
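As an illustration of these one-dimensional building blocks, the following Python sketch computes the nodal functions by solving the Hermite interpolation conditions directly in the monomial basis (instead of the Newton-basis representation mentioned above) and, as a stand-in for the Gegenbauer polynomials, builds the interior modes as (1 - x^2)^(k+1) times Legendre polynomials, which satisfies the same vanishing-derivative property at x = -1 and x = 1; all names are illustrative.

import math
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import Legendre

def hermite_shape(x_node, deriv_order, k):
    # Nodal shape function sigma in P_{2k+1} with
    # sigma^(S)(X) = delta_{x_node,X} * delta_{deriv_order,S} for X in {-1, 1}, S = 0..k.
    n = 2 * k + 2                                   # dimension of P_{2k+1}
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    row = 0
    for X in (-1.0, 1.0):
        for S in range(k + 1):
            for m in range(n):                      # S-th derivative of x^m at X
                if m >= S:
                    A[row, m] = math.factorial(m) / math.factorial(m - S) * X ** (m - S)
            rhs[row] = 1.0 if (X == x_node and S == deriv_order) else 0.0
            row += 1
    return Polynomial(np.linalg.solve(A, rhs))

def interior_shape(j, k):
    # Interior mode of degree j >= 2k+2 whose derivatives up to order k vanish at -1, 1;
    # built as (1 - x^2)^(k+1) * Legendre, a substitute for the Gegenbauer choice above.
    bubble = Polynomial([1.0, 0.0, -1.0]) ** (k + 1)
    return bubble * Legendre.basis(j - 2 * k - 2).convert(kind=Polynomial)

# Example: all 1D shape functions for k = 1 and p1 = 5
k, p1 = 1, 5
nodal = [hermite_shape(X, S, k) for X in (-1.0, 1.0) for S in range(k + 1)]
interior = [interior_shape(j, k) for j in range(2 * k + 2, p1 + 1)]
x = np.linspace(-1.0, 1.0, 5)
print(nodal[0](x))     # value 1 and slope 0 at x = -1, value and slope 0 at x = +1
print(interior[0](x))  # vanishes to first order at both endpoints

The two-dimensional shape functions then follow from the tensor product given above.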
Fig. 3 Shape functions in P8 for k ∈ {0, 1, 2}
Fig. 4 Nodes
For any rectangle $M = (b_1, b_2) + [-a_1, a_1] \times [-a_2, a_2]$ with $a_1, a_2 > 0$ and $b_1, b_2 \in \mathbb{R}$, the transformation $F_M : [-1, 1]^2 \to M$ is given by $F_M(x_1, x_2) := (a_1 x_1 + b_1, a_2 x_2 + b_2)$ and allows to define shape functions on $M$ for $n \in N_2$, $j \in J_2(n, p)$ by

$\tau_{M,n,j}(x) := a_1^{j_1} a_2^{j_2}\, (\tau_{n,j} \circ F_M^{-1})(x).$
We call FM (N2 ) the nodes of M. The following theorem states that shape functions associated with a node can be extended by zero at nodes which are opposite to that node and that shape functions defined on different rectangles that share a common
node can be glued together on this common node, where in both cases the $C^k$ differentiability is preserved. Let the incident nodes $I_2(n)$ and opposite nodes $O_2(n)$ be defined by

$I_2(n) := \{\tilde n \in N_2;\ \exists i \in \{1, 2\}:\ \tilde n_i = n_i \text{ or } \tilde n_i = (-1, 1)\}, \qquad O_2(n) := N_2 \setminus I_2(n).$

A visualization is provided in Fig. 4b.

Theorem 1 Let $M$ be a rectangle, $p \in \mathbb{N}^2$ and $n, N \in N_2$. Moreover, let $x \in F_M(N)$, $j \in J(n, p)$ and $S_1, S_2 \in \{0, \ldots, k\}$. If $N \in O_2(n)$, then

$\partial_{x_1}^{S_1} \partial_{x_2}^{S_2} \tau_{M,n,j}(x) = 0.$

Let $\tilde M$ be a further rectangle, $\tilde p \in \mathbb{N}^2$, and $\tilde n, \tilde N \in N_2$ with $F_M(n) = F_{\tilde M}(\tilde n)$ and $F_M(N) = F_{\tilde M}(\tilde N)$. If $j \in J(\tilde n, \tilde p)$, then

$\partial_{x_1}^{S_1} \partial_{x_2}^{S_2} \tau_{M,n,j}(x) = \partial_{x_1}^{S_1} \partial_{x_2}^{S_2} \tau_{\tilde M, \tilde n, j}(x).$

Proof The proof relies on the fact that the orientations of common edges match and that the shape functions are hierarchical. For a proof of the general $d$-dimensional version covering the assertion of this theorem see [13].

If $T_h$ is a regular mesh, the pairwise intersection of elements results in common nodes, which implies that Theorem 1 is sufficient to define basis functions. If $T_h$ is not regular (in the sense that it contains hanging nodes), we make use of the concept of constraining nodes. To this end, we define the pairs $P := T_h \times N_2$. The relation $\sim$ on $P$ with

$(K, n) \sim (K', n')\ :\Longleftrightarrow\ F_K(n) \cap F_{K'}(n') \neq \emptyset$

is reflexive and symmetric, but not necessarily transitive. Its transitive closure is an equivalence relation on $P$. We call its equivalence classes the constraining nodes of $T_h$ and denote the set of all constraining nodes by $N^*$. Defining $\bigcap n^* := \bigcap\{F_K(n);\ (K, n) \in n^*\}$, we observe that $\bigcap n^* = J_1 \times J_2$ where $J_i$ is either an interval or a single-numbered set. Figure 5 depicts $\bigcap n^*$ for a constraining node $n^* = \{(K_1, E_2), (K_2, E_0), (K_2, v_1), (K_3, E_0), (K_3, v_0)\}$ with $\bigcap n^* = (0, 2) \times \{1\}$. Equivalent definitions of constraining nodes can be found in, e.g., [8, 48]. We suppose for $T_h$ that for each constraining node $n^*$ there exists a constraining-node patch $\wp(n^*) \subset \{M;\ M \subset \mathbb{R}^2\}$ which fulfills the following properties:

(P1) Each $M \in \wp(n^*)$ is a rectangle and the union of elements of $T_h$.
(P2) For each $M \in \wp(n^*)$, $\bigcap n^*$ is a node of $M$.
(P3) $\wp(n^*)$ is a regular mesh of $\Omega_{n^*} := \bigcup_{M \in \wp(n^*)} M$.
(P4) $\Omega_{n^*}$ is an $\hat\Omega$-relative neighborhood of each point in $\bigcap n^*$, i.e., for each $x \in \bigcap n^*$, there is a ball $B(x)$ with $B(x) \cap \hat\Omega \subseteq \Omega_{n^*}$.
In Fig. 6 constraining-node patches are visualized. We note that constraining-node patches can be determined, e.g., by accessing the refinement history of the mesh, see [13, 15]. We assign a polynomial degree $p(n^*) \in \mathbb{N}^2$ for $n^* \in N^*$ with $\bigcap n^* = J_1 \times J_2$ by the minimum rule as follows:

$p(n^*)_i := \begin{cases} \min\{p(K)_i;\ \exists n \in N_2:\ (K, n) \in n^*\}, & J_i \text{ is an interval}, \\ 2k + 1, & \text{otherwise}. \end{cases}$

According to (P2), we set $n^*_M := F_M^{-1}(\bigcap n^*) \in N_2$ for $M \in \wp(n^*)$ and state the following theorem, which shows how to glue shape functions to $C^k$ finite element basis functions on meshes with hanging nodes.

Theorem 2 Let $n^* \in N^*$ and $\tilde M \in \wp(n^*)$ and define $\phi_{n^*,j}$ for $j \in J(n^*_{\tilde M}, p(n^*))$ by $\phi_{n^*,j}|_M := \tau_{M, n^*_M, j}$ for $M \in \wp(n^*)$ and $\phi_{n^*,j}|_{\hat\Omega \setminus \Omega_{n^*}} := 0$. Then $\partial_{x_1}^{k} \partial_{x_2}^{k} \phi_{n^*,j} \in C^0(\mathrm{cl}(\hat\Omega))$.

Proof This follows from Theorem 1 and the properties of $\wp(n^*)$, see [13].
We emphasize that $\phi_{n^*,j}$ is specified in terms of shape functions defined on $M \in \wp(n^*)$ which is not necessarily an element of $T_h$. To use the basis functions in
Fig. 5 A constraining node $n^*$ in a mesh for $(0, 2)^2$ where $\bigcap n^*$ is an edge
Fig. 6 Constraining-node patches for two constraining nodes of a mesh Th for the L-shape (−1, 1)2 \ [0, 1]2
a typical finite element assembling process, a representation of $\phi_{n^*,j}|_M$ in terms of shape functions defined on $K \in T_h$ with $K \subseteq M$ is required (for instance, by means of connectivity matrices [8, 15]). This is possible by (P1) and can be achieved by collocation or, more efficiently, by employing recursive formulae as introduced in [13].
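To make the role of such connectivity matrices concrete, the following minimal sketch (with a hypothetical data layout) accumulates element matrices, expressed in the local shape functions, into a global matrix once each basis function is represented as a linear combination of element shape functions.

import numpy as np

def assemble_with_connectivity(elements, n_global):
    # elements -- iterable of pairs (A_K, C_K): A_K is the (n_loc x n_loc) element matrix
    #             in terms of local shape functions tau_{K,i}; C_K is the (n_loc x n_global)
    #             connectivity matrix with phi_j restricted to K = sum_i (C_K)_{ij} tau_{K,i}.
    # Returns the global matrix sum_K C_K^T A_K C_K (dense here, for illustration only).
    A = np.zeros((n_global, n_global))
    for A_K, C_K in elements:
        A += C_K.T @ A_K @ C_K
    return A

# Toy usage: two 1D linear elements whose local degrees of freedom are constrained
# to two global ones (a stand-in for a hanging-node constraint)
A_K = np.array([[1.0, -1.0], [-1.0, 1.0]])
C1 = np.array([[1.0, 0.0], [0.5, 0.5]])
C2 = np.array([[0.5, 0.5], [0.0, 1.0]])
print(assemble_with_connectivity([(A_K, C1), (A_K, C2)], n_global=2))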
4 Residual-Based Error Estimation for the Finite Cell Method

4.1 Reliability

In this section we introduce a residual-based estimator $\eta$ for the energy error of Poisson's problem in the context of the finite cell method by

$\eta^2 := \sum_{K \in T_h} \eta_K^2, \qquad \eta_K^2 := \eta_{K;T_h}^2 + \eta_{K;E_h}^2 + \eta_{K;\hat\Gamma_N}^2$    (5)

with

$\eta_{K;T_h}^2 := \frac{h_K^2}{p_K^2}\, \| f + \Delta u_h \|_{K \cap \Omega}^2,$

$\eta_{K;E_h}^2 := \frac{1}{2} \sum_{E \in E_h(K)} \frac{h_E}{p_E}\, \| [\partial_n u_h] \|_{E \cap \Omega}^2 + \sum_{E \in E_h(K),\, E \subseteq \bar\Gamma_N} \frac{h_E}{p_E}\, \| g - \partial_n u_h \|_E^2,$

$\eta_{K;\hat\Gamma_N}^2 := \frac{h_K}{p_K}\, \| g - \partial_n u_h \|_{\hat\Gamma_N \cap K}^2.$
Here, $h_K$ is the element diameter, $h_E$ is the length of $E \in E_h$, where $E_h$ is the set of edges of $T_h$, $p(K) = (p_K, p_K)$ for $p_K \in \mathbb{N}$, $p_E := \min\{p_K;\ E \subset \partial K,\ K \in T_h\}$, and $n$ is a fixed normal unit vector to $E$ which coincides with the outer normal on $\Gamma_N$. Furthermore, the set $E_h(K)$ contains the edges of $K \in T_h$ and $[\cdot]_E$ denotes the jump across $E$. Since we make use of the hp quasi-interpolation operator $I_h : \hat V \to V_h$ introduced in [36], several standard properties of the meshes (quasi-uniformity and no hanging nodes) and the polynomial degree distribution (the neighboring elements' degrees are comparable by a universal multiplicative constant) are required, see [35, 36]. Our reliability statement is based on the simplifying assumption that the quadrature error is negligibly small, which is typically justified by applying suitable FCM quadrature schemes such as the moment-fitting method or a quadtree-based scheme with a sufficiently large depth. The formulation of the reliability statement makes use of a single non-standard requirement on the diameter $\delta_K$ of the mesh layer around $K$ defined by
$\delta_K := \max\{\|y - x\|;\ y \in \partial T_h(K),\ x \in \partial K\}, \quad \text{where} \quad T_h(K) := \bigcup\{K' \in T_h;\ K' \cap K \neq \emptyset\}.$
Theorem 3 Assume that $\delta_K > 0$ for all $K \in T_h$ with $K \cap \hat\Gamma_N \neq \emptyset$. Then,

$\|\nabla u - \nabla u_h\|_\Omega \lesssim \eta + \varepsilon^{1/2}.$
Proof The proof can be found in [14].
Loosely speaking, the assumption $\delta_K > 0$ states that each $K \in T_h$ with $K \cap \hat\Gamma_N \neq \emptyset$ is completely surrounded by elements contained in $T_h(K) \setminus K$. This can be achieved by a sufficiently fine mesh in combination with a sufficiently large $\hat\Omega$ in the neighborhood of $K$. In the remainder of this section, we sketch the key ideas of the proof of Theorem 3. It extends the reliability proof in [36] by several modifications that deal with the geometry of $\Omega$ as assumed in Sect. 2. Defining the error $e := u - u_h|_\Omega$, we use Stein's extension operator $S : V \to \hat V$ from [50] to set $e^* := Se$, which allows for the estimate $\|e^*\|_{1,\hat\Omega} \lesssim \|e\|_{1,\Omega}$. With $w^* := e^* - I_h e^*$, integration by parts leads to the bound

$\|e\|_{1,\Omega}^2 \lesssim \underbrace{\sum_{K \in T_h} \|f + \Delta u_h\|_{K \cap \Omega}\, \|w^*\|_{K \cap \Omega}}_{\mathrm{I}} + \underbrace{\sum_{E \in E_h} \|[\partial_n u_h]\|_{E \cap \Omega}\, \|w^*\|_{E \cap \Omega}}_{\mathrm{II}} + \underbrace{\sum_{E \in E_h} \|g - \partial_n u_h\|_{E \cap \bar\Gamma_N}\, \|w^*\|_{E \cap \bar\Gamma_N}}_{\mathrm{III}} + \underbrace{\sum_{K \in T_h} \|g - \partial_n u_h\|_{\hat\Gamma_N \cap K}\, \|w^*\|_{\hat\Gamma_N \cap K}}_{\mathrm{IV}} + \underbrace{(\nabla e, \nabla I_h e^*)_\Omega}_{\mathrm{V}}.$
The terms I, II, and III are standard and can be estimated using the properties of $I_h$ by

$\mathrm{I} \lesssim \Big( \sum_{K \in T_h} \frac{h_K^2}{p_K^2}\, \| f + \Delta u_h \|_{K \cap \Omega}^2 \Big)^{1/2} \|e^*\|_{1,\hat\Omega},$

$\mathrm{II} \lesssim \Big( \sum_{E \in E_h} \frac{h_E}{p_E}\, \| [\partial_n u_h] \|_{E \cap \Omega}^2 \Big)^{1/2} \|e^*\|_{1,\hat\Omega},$

$\mathrm{III} \lesssim \Big( \sum_{E \in E_h} \frac{h_E}{p_E}\, \| g - \partial_n u_h \|_{E \cap \bar\Gamma_N}^2 \Big)^{1/2} \|e^*\|_{1,\hat\Omega}.$
The terms IV and V require a special treatment. For the bound of $\|w^*\|_{\hat\Gamma_N \cap K}$ in IV, we have $K \cap \hat\Gamma_N = \bigcup_{i=1}^{m_K} T_i^K(\gamma_i^K)$ for some $m_K \in \mathbb{N}$, graphs $\gamma_i^K$, and some counterclockwise rotations with translation $T_i^K : \mathbb{R}^2 \to \mathbb{R}^2$. As the key estimate in [14], we prove the trace inequality

$\|w^*\|_{T_i^K(\gamma_i^K)}^2 \lesssim \frac{p_K}{h_K}\, \|w^*\|_{T_h(K)}^2 + \frac{h_K}{p_K}\, \|\nabla w^*\|_{T_h(K)}^2,$

which relies on the shape regularity and the comparability of degrees of neighboring elements. By the properties of $I_h$ and the fact that $m_K$ is bounded by the total number $m$ of the graphs $\gamma_i$, we conclude

$\sum_{K \in T_h} \frac{p_K}{h_K}\, \|w^*\|_{\hat\Gamma_N \cap K}^2 = \sum_{K \in T_h} \frac{p_K}{h_K} \sum_{i=1}^{m_K} \|w^*\|_{T_i^K(\gamma_i^K)}^2 \lesssim \|e^*\|_{1,\hat\Omega}^2.$

Therefore,

$\mathrm{IV} \lesssim \Big( \sum_{K \in T_h} \frac{h_K}{p_K}\, \| g - \partial_n u_h \|_{\hat\Gamma_N \cap K}^2 \Big)^{1/2} \|e^*\|_{1,\hat\Omega}.$

The term V results from the lack of Galerkin orthogonality (4) and can be estimated by

$\mathrm{V} \lesssim \varepsilon^{1/2}\, \|e^*\|_{1,\hat\Omega}.$
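Before turning to the numerical illustration, the following minimal Python sketch indicates how the indicators of (5) could be evaluated once the element-wise residual norms have been computed; the dictionary keys are a hypothetical data layout and not the interface of any particular code.

import numpy as np

def residual_indicators(cells):
    # cells -- list of dicts with keys
    #   'h', 'p'           : element diameter h_K and degree p_K
    #   'res_K'            : || f + Laplace(u_h) || on K intersected with Omega
    #   'edges'            : list of tuples (h_E, p_E, jump_norm, neumann_norm), where
    #                        jump_norm = || [d_n u_h] || on E intersected with Omega and
    #                        neumann_norm = || g - d_n u_h || on E for edges on the fitted
    #                        Neumann boundary (zero otherwise)
    #   'res_cut_neumann'  : || g - d_n u_h || on the cut Neumann boundary part in K
    # Returns the element indicators eta_K^2 and the total estimator eta.
    eta2 = []
    for c in cells:
        h, p = c['h'], c['p']
        val = (h / p) ** 2 * c['res_K'] ** 2
        for h_E, p_E, jump, neu in c['edges']:
            val += 0.5 * (h_E / p_E) * jump ** 2 + (h_E / p_E) * neu ** 2
        val += (h / p) * c['res_cut_neumann'] ** 2
        eta2.append(val)
    eta2 = np.asarray(eta2)
    return eta2, float(np.sqrt(eta2.sum()))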
4.2 Numerical Example

In this section, we illustrate the reliability of the residual-based error estimator of Sect. 4.1. For this purpose we study Poisson's problem on a modified version of the classical L-shaped domain problem, where the domain is equipped with some additional holes. In numerical experiments we examine the behavior of the error estimator $\eta$ as introduced in (5) applied to various configurations including h-uniform, p-uniform, hp-geometric, and h-adaptive refinements. As the exact solution is known, we may compute the exact error $e := \|\nabla u - \nabla u_h\|_\Omega$ up to machine precision and check the overestimation of the error by the error estimator $\eta$. In particular, we examine the efficiency index
Fig. 7 Domain $\Omega$ resulting from the removal of triangular holes from the L-shaped domain, with initial mesh consisting of three quad elements
$\mathrm{eff} := \frac{\eta}{e}.$

Additionally, we introduce the (maximum) local efficiency index

$\mathrm{loc} := \max_{K \in T_h} \frac{\eta_K}{\|\nabla u - \nabla u_h\|_{K \cap \Omega}}$

to study the local overestimation of the error estimator. We expect a large local overestimation mainly on cut elements or elements in the neighborhood of elements with $K \cap \Omega \neq K$. If we apply only p-refinements with increasing polynomial degree, we expect the estimator to be efficient up to a factor $O(p)$, see [36]. We note that the efficiency of $\eta$ is not proven in theory. Indeed, in [14] an (artificial) counterexample is discussed which shows that efficiency of $\eta$ is not given in general. By applying h-refinements driven by the error estimator, we examine whether optimal algebraic convergence rates can be recovered. The problem configuration is as follows. The domain $\Omega$ is the result of removing $m := 56$ triangular holes $\omega_1, \ldots, \omega_m$ of different sizes from the standard L-shaped domain $\hat\Omega$, see Fig. 7. The weak formulation of the problem reads: Find $u \in V$ such that $(\nabla u, \nabla v)_\Omega = (g, v)_{\Gamma_N}$ for all $v \in V$ with Dirichlet boundary $\Gamma_D := (\{0\} \times [0, 1]) \cup ([0, 1] \times \{0\})$ and Neumann boundaries $\bar\Gamma_N := \partial\hat\Omega \setminus \Gamma_D$ and $\hat\Gamma_N := \bigcup_{i=1}^m \partial\omega_i$. The data $g$ is derived from the nonsmooth solution $u(r, \varphi) := r^{2/3} \sin((2\varphi - \pi)/3)$ given in polar coordinates
Fig. 8 Uniform h-refinements
that has a corner singularity in the origin. The FCM discretization with exact quadrature seeks $u_h \in V_h$ such that

$(\nabla u_h, \nabla v_h)_\Omega + \varepsilon\, (\nabla u_h, \nabla v_h)_{\hat\Omega \setminus \Omega} = (g, v_h)_{\Gamma_N}$

for all $v_h \in V_h$, where $\varepsilon := 10^{-12}$ and $V_h$ is a $C^k$ finite element space as introduced in Sect. 3, which is based on a mesh $T_h$ of $\hat\Omega$. To obtain a (nearly) exact quadrature on $K \cap \Omega$ for $K \in T_h$, we apply a Delaunay triangulation of $K \cap \Omega$ using CGAL [56] along with an appropriate quadrature rule on each triangle of this triangulation. We test the error estimator in four configurations, in all of which we begin with an initial mesh consisting of three quad elements with edge length 1. In the first configuration, uniform h-refinements with fixed degrees $p = 1$ and $p = 2$ are performed with $C^0$ finite elements, i.e., $k = 0$. The decay of the exact error and the error estimator $\eta$ is visualized in Fig. 8a. The convergence order of $O((\mathrm{DOF})^{-1/3}) = O(h^{2/3})$ is attained for $p = 1$ as well as $p = 2$, which is caused by the low regularity of $u \in H^{5/3-\epsilon}(\Omega)$, $\epsilon > 0$ [31]. The global efficiency index shown in Fig. 8b suggests that the estimator captures the error with acceptable indices up to 6. In particular, we observe that the indices lie in a more constricted range for smaller $h$. In contrast, the local efficiency indices fluctuate and hint at a moderately large local overestimation. However, in view of the moderate global efficiency index, the overestimation can only occur on elements that bear a small portion of the global error, so that the overestimation has only little influence on the overall estimate. In the second configuration, the mesh is fixed at the three initial elements and uniform p-refinements are performed. Here, we use $C^1$ finite elements, i.e., $V_h$ is a space of higher differentiability as described in Sect. 3 with $k = 1$. Since the functions in $V_h$ are differentiable, the jump terms $[\partial_n u_h]$ across inner edges $E \in E_h$ disappear.
Fig. 9 Uniform p-refinements
Thus, the evaluation of the estimator $\eta$ is essentially simplified since the generation of quadrature rules for $E \cap \Omega$ is avoided. However, we have to remark that it is not clear whether the hp interpolation operator $I_h$ from [36] is applicable to finite element spaces of higher differentiability. Moreover, the theory of this interpolation operator does not actually cover meshes with hanging nodes. Nevertheless, it seems worthwhile to study the estimator $\eta$ for meshes with hanging nodes and, more interestingly, for $C^1$ finite elements. In Fig. 9a, we display the errors and estimates for $p = 3, \ldots, 19$, which confirm the convergence order of $O((\mathrm{DOF})^{-2/3}) = O(p^{-4/3})$ [52]. The global and local efficiency indices are visualized in Fig. 9b. Again, the residual-based error estimator is not expected to be p-robust [36] and an overestimation by a factor $O(p)$ is possible. The visualization of eff and loc suggests that a p-dependent growth is present. As a comparison, we show the scaled indices eff/p and loc/p, which seem to be constant. Hence, we conclude that the multiplicative factor of $O(p)$ does indeed appear in this example. In the third configuration, we again use $C^1$ finite elements and study hp-geometric refinements as, for instance, introduced in [49]. In geometric refinements, the degree and the mesh size are decreased towards the corner singularity. They lead to an exponential convergence of the error of the form $O(\exp(-b\,(\mathrm{DOF})^{1/3}))$ with some $b > 0$ [49, 52]. This is also confirmed in this configuration where the finite cell method is applied, see Fig. 10a. The global and local efficiency indices are visualized in Fig. 10b. While the indices decrease in the beginning, they seem to become constant after a certain number of refinements. This may be caused by the fact that only the elements touching the reentrant corner are refined and, thus, the geometry of $K \cap \Omega$ only varies during the first few refinements where the new elements intersect the holes. In the fourth configuration, we perform h-adaptive refinements driven by the estimator $\eta$ based on $C^0$ finite elements of degree $p = 1$ and $p = 2$. In the
Fig. 10 Geometric refinements
Fig. 11 h-adaptive refinements, meshes for p = 1, 2
convergence plot in Fig. 12a, we see that the optimal algebraic convergence rate of $O((\mathrm{DOF})^{-p/2})$ is recovered when h-adaptive refinements are applied. The meshes for $p = 1$ and $p = 2$ in Fig. 11 exhibit a high resolution of the corner singularity by the refinements, and only very few unnecessary refinements occur near the boundaries of the holes. The global efficiency index is depicted in Fig. 12b and ranges between 2 and 6, so the overestimation seems to be acceptable. The large local efficiency indices of up to 25 hint at a moderately large local overestimation. However, since the global efficiency index is small and the optimal convergence rate is attained, the overestimation seems to occur on elements sharing a small portion of the overall error.
Fig. 12 h-adaptive refinements
5 Dual Weighted Residual Error Estimation for the Finite Cell Method

As a further approach for error control, we discuss the dual weighted residual (DWR) method for Poisson's problem, which allows for an estimation of the error $\mathrm{err} := J(u) - J(u_h|_\Omega)$, where $J$ is a goal functional representing a user-defined quantity of interest. Typically, $J$ represents point values (of derivatives), averages, norms, or general nonlinear quantities. Here, we focus on the modifications of the standard DWR method that permit a simultaneous treatment of the discretization error and the quadrature error which occur in the FCM when an inexact quadrature scheme is applied. We confine the treatment to linear goal functionals $J \in V^*$ and refer to [12] for the general nonlinear case. The DWR method derives an identity of err that replaces $J$ by a suitable representation $z \in V$. Since $J \in V^*$, we may choose the Riesz representation $z$ solving the so-called dual problem $a(v, z) = J(v)$ for all $v \in V$. The FCM version of the dual problem approximates $z$ by a discrete solution $z_h \in V_h$ fulfilling

$\tilde a(v_h, z_h) = \tilde J(v_h)$    (6)
for all $v_h \in V_h$, where the quadrature scheme from the weak formulation (2) (the so-called primal problem) is applied to compute $\tilde J(v_h) \approx J(v_h|_\Omega)$.
5.1 Error Identity

In this subsection, we prove an error identity that replaces err by a sum of terms each of which we attribute to a specific error source, i.e., to the discretization error and to the quadrature error. Due to the fact that neither $u$ nor $z$ are known, the application of the DWR method typically relies on computable approximations $u_+$ and $z_+$ of $u$ and $z$, respectively, such that the error incurred by replacing $u$ by $u_+$ and $z$ by $z_+$ is negligibly small (e.g., of higher order). In practice, such approximations are obtained by higher-order discrete solutions or higher-order interpolation [3, 4, 42]. Also the bilinear and linear forms $a$, $F$, $J$ cannot be evaluated exactly in practice and are thus substituted by the computable replacements $\tilde a$, $\tilde F$, $\tilde J$. In order to ease the notation, we denote by $v^0$ the extension of $v|_\Omega$ by zero onto $\hat\Omega$ for $v \in V \cup \hat V$. We assume the existence of improvements $a_+$, $F_+$, $J_+$ of $\tilde a$, $\tilde F$, $\tilde J$, respectively, such that

$a(v, w) - a_+(v^0, w^0), \qquad F(v) - F_+(v^0), \qquad J(v) - J_+(v^0)$    (7)
have a negligibly small absolute value for any $v, w \in V$. In this case, we may interpret any error terms consisting solely of differences from (7) as negligibly small and denote them by the subscript $s$. If a depth-based quadrature scheme is used and $a$, $F$, $J$ are approximated by $\tilde a$, $\tilde F$, $\tilde J$ by means of a depth $\alpha$, the approximations $a_+$, $F_+$, $J_+$ can be obtained, e.g., by replacing the quadrature scheme in $\tilde a$, $\tilde F$, $\tilde J$ with a quadrature scheme based on the depth $\alpha_+(K) := \alpha(K) + \ell$ for all $K \in T_h$ for a fixed, sufficiently large $\ell \in \mathbb{N}$. We define the primal residual $A$ and the dual residual $B$ for $v, w \in V$ by

$A(v, w) := F(w) - a(v, w), \qquad B(v, w) := J(v) - a(v, w),$

and, similarly, for $\hat v, \hat w \in \hat V$,

$A_+(\hat v, \hat w) := F_+(\hat w) - a_+(\hat v, \hat w), \qquad B_+(\hat v, \hat w) := J_+(\hat v) - a_+(\hat v, \hat w).$
for the perturbed discrete solutions u h and z h of (2) and (6), respectively, we set the terms related to the discretization error to
$e_D := \tfrac{1}{2}\big[ A_+(u_h^0, z_+^0 - z_h^0) + B_+(u_+^0 - u_h^0, z_h^0) \big],$

$e_{D,s} := \tfrac{1}{2}\big[ A_+(u_h^0, z^0 - z_+^0) + B_+(u^0 - u_+^0, z_h^0) + \big(A(u_h, e^*) - A_+(u_h^0, e^{*0})\big) + \big(B(e, z_h) - B_+(e^0, z_h^0)\big) \big].$

The terms related to the quadrature error are

$e_Q := A_+(u_h, z_h), \qquad e_{Q,s} := A(u_h, z_h) - A_+(u_h^0, z_h^0).$
To see that $e_Q$ is indeed related to the quadrature error, we note that in case of exact integration the equality $a = \tilde a = a_+$ implies $e_Q = 0$ by Galerkin orthogonality. If approximate operators or perturbed discrete solutions are used, the term $e_Q$ is nonzero in general and thus may be regarded as a perturbation error (as, e.g., in [43]). The remaining term is related to the modeling error of Poisson's problem in the FCM context,

$e_{\varepsilon,s} := A_+(u_h^0, z_h^0) - A_+(u_h, z_h),$

which is $O(\varepsilon)$ [10] and therefore considered to be negligibly small.

Theorem 4 (Error identity) $J(u) - J(u_h) = e_D + e_{D,s} + e_Q + e_{Q,s} + e_{\varepsilon,s}$.

Proof By means of $J(w) = a(w, z)$ for any $w \in V$, we obtain

$J(u) - J(u_h) = a(u, z) - a(u_h, z) = a(e, z) = a(e, e^*) + a(e, z_h).$
The term $a(e, e^*)$ transforms into

$a(e, e^*) = a(u - u_h, e^*) = F(e^*) - a(u_h, e^*) = A(u_h, e^*) = A_+(u_h^0, e^{*0}) + \big(A(u_h, e^*) - A_+(u_h^0, e^{*0})\big).$

Expanding $e^{*0} = z^0 - z_h^0 = (z_+^0 - z_h^0) + (z^0 - z_+^0)$ gives

$a(e, e^*) = A_+(u_h^0, z_+^0 - z_h^0) + A_+(u_h^0, z^0 - z_+^0) + \big(A(u_h, e^*) - A_+(u_h^0, e^{*0})\big).$
Similarly, $a(e, e^*) = a(e, z - z_h) = B(e, z_h)$ and thus

$a(e, e^*) = B_+(u_+^0 - u_h^0, z_h^0) + B_+(u^0 - u_+^0, z_h^0) + \big(B(e, z_h) - B_+(e^0, z_h^0)\big).$

In total, we obtain $a(e, e^*) = e_D + e_{D,s}$ and
$J(u) - J(u_h) = e_D + e_{D,s} + a(e, z_h).$    (8)
Noting $a(e, z_h) = A(u_h, z_h)$ we write

$a(e, z_h) = A_+(u_h, z_h) + \big(A_+(u_h^0, z_h^0) - A_+(u_h, z_h)\big) + \big(A(u_h, z_h) - A_+(u_h^0, z_h^0)\big) = e_Q + e_{\varepsilon,s} + e_{Q,s}.$
Replacing a(e, z h ) in (8) completes the proof.
The assumption on the negligibility of the terms $e_{D,s}$, $e_{Q,s}$, and $e_{\varepsilon,s}$ is justified by numerical experiments in [12]. Omitting these terms, we arrive at the approximate error identity $\mathrm{err} \approx \eta := e_D + e_Q$. In order to assess the overestimation of $\eta$ we introduce the efficiency index (or overestimation index) as

$\mathrm{eff} := \frac{\eta}{\mathrm{err}}.$
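A compact sketch of how the two computable terms could be evaluated is given below; the callables and the discrete objects are placeholders for whatever representation of the residual forms and solutions is available, following the definitions of e_D and e_Q above.

def dwr_split(A_plus, B_plus, u_h0, z_h0, u_p0, z_p0, u_h, z_h):
    # A_plus(v, w), B_plus(v, w) -- callables evaluating the improved residuals
    # u_h0, z_h0 -- zero extensions of the restricted discrete solutions
    # u_p0, z_p0 -- zero extensions of the higher-order approximations u_+, z_+
    # u_h, z_h   -- the discrete solutions themselves (nonzero on the fictitious part)
    e_D = 0.5 * (A_plus(u_h0, z_p0 - z_h0) + B_plus(u_p0 - u_h0, z_h0))
    e_Q = A_plus(u_h, z_h)
    eta = e_D + e_Q            # err is approximated by eta = e_D + e_Q
    return e_D, e_Q, eta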
5.2 Refinement Strategy

The following refinement strategy exploits the ability to separate the error into a term $e_D$ representing the discretization error and a term $e_Q$ representing the quadrature error in order to separately adapt the finite cell mesh and an associated depth-based quadrature mesh. This strategy is similar to the one proposed in [43], where the DWR method is used to balance the discretization error with the iteration error of Newton's method in a nonlinear setting. Each step in the Solve–Estimate–Mark–Refine loop for adaptivity is well-examined at least for Poisson's problem with exact quadrature (see, e.g., [17]). However, there are only few strategies available that measure and adapt the accuracy of the quadrature (see, e.g., the a priori strategies in [16]). In practical computations with no quadrature adaptation, a fixed depth must be chosen. However, the depth might be unnecessarily high, which incurs a large computational overhead. We use a modified version of the heuristic strategy described in [12]. Therein, the quadrature mesh is initialized to a low depth $d \in \mathbb{N}_0$ for each element. During each iteration of the Solve–Estimate–Mark–Refine loop, the sufficiency of the quadrature precision is tested for the current computation by checking whether the quadrature error stays below a certain fraction of the discretization error. If this is not the case, the quadrature mesh depth is increased globally and the computation is repeated using the same FCM mesh. Thereby, the quadrature mesh is only adapted if necessary, which keeps the computational overhead as low as possible. The adaptive strategy is outlined in the following steps.
1. Initialization: Set $i := 0$. Initialize the FCM mesh $T_i$. Choose $d \in \mathbb{N}_0$ and initialize the depth $\alpha_i(K) := d$ for all $K \in T_i$. Set $d_{\min} \in \mathbb{N}_0$ with $d_{\min} \le d$ to be the minimum possible depth. Choose a stopping criterion, e.g., stop if the maximum number of degrees of freedom is reached or if a prescribed error tolerance is met. Choose a fraction $\rho \in [0, 1]$.
2. Setup of the quadrature: Construct the quadrature scheme for $T_i$ based on $\alpha_i$.
3. Solve: Compute solutions $u_h$ and $z_h$ by (2) and (6) as well as approximations $u_+$ and $z_+$, respectively.
4. Estimate and localize: Choose $\ell \in \mathbb{N}$ and construct a quadrature scheme on $T_i$ based on $\alpha_+(K) := \alpha_i(K) + \ell$, from which $a_+$, $F_+$, $J_+$ are available. Compute and localize $e_D$ to element-wise indicators $\eta_{D,K}$, $K \in T_i$ (e.g., by algebraic filtering, the partition-of-unity approach, or the integration-by-parts approach [44]). If the stopping criterion is fulfilled, stop. Compute $e_Q$. If $|e_Q| \ge \rho\, |e_D|$, set $\alpha_i(K) := \alpha_i(K) + 1$ for all $K \in T_i$ and go to step 2.
5. Mark: Choose an appropriate marking strategy, such as fixed-fraction, maximum, or Dörfler marking [5], to mark elements with respect to the discretization error.
6. Refine: Refine the marked elements in $T_i$ to obtain $T_{i+1}$. Let $\alpha_{i+1} : T_{i+1} \to \mathbb{N}$ and set

$\alpha_{i+1}(K) := \begin{cases} \alpha_i(K), & K \in T_i, \\ \alpha_i(K') - 1, & K \notin T_i,\ K \subset K' \in T_i,\ \alpha_i(K') > d_{\min}, \\ \alpha_i(K'), & K \notin T_i,\ K \subset K' \in T_i,\ \alpha_i(K') \le d_{\min}. \end{cases}$

Increase $i$ by 1 and go to step 2.
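A skeleton of this loop in Python reads as follows; the callbacks solve, estimate, mark, refine and num_dof stand for an actual FCM implementation and are not specified here.

def adaptive_fcm_loop(initial_mesh, d, d_min, rho, ell, max_dof,
                      solve, estimate, mark, refine, num_dof):
    # solve(mesh, alpha)               -> discrete primal/dual solutions and u_+, z_+
    # estimate(mesh, alpha, ell, sols) -> (local indicators eta_D_K, e_D, e_Q)
    # mark(eta_D_K)                    -> set of marked elements
    # refine(mesh, marked)             -> (new mesh, parent map: child -> parent element)
    # num_dof(mesh)                    -> number of degrees of freedom (stopping criterion)
    mesh = initial_mesh
    alpha = {K: d for K in mesh}                     # quadrature depth per element
    while True:
        while True:                                  # balance quadrature vs. discretization
            sols = solve(mesh, alpha)
            eta_D_K, e_D, e_Q = estimate(mesh, alpha, ell, sols)
            if num_dof(mesh) >= max_dof:
                return mesh, sols
            if abs(e_Q) < rho * abs(e_D):
                break
            alpha = {K: alpha[K] + 1 for K in mesh}  # global depth increase, same mesh
        marked = mark(eta_D_K)
        mesh, parent = refine(mesh, marked)
        # new elements inherit a (possibly reduced) depth from their parent element
        alpha = {K: alpha[K] if K in alpha
                 else max(alpha[parent[K]] - 1, d_min) for K in mesh}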
5.3 Numerical Example

We apply the refinement strategy proposed in Sect. 5.2 to a problem on a nonconvex circular domain $\Omega := B_1(0) \setminus ([0, 1] \times [-1, 0])$. The Dirichlet boundary part is $\Gamma_D := ([0, 1] \times \{0\}) \cup (\{0\} \times [-1, 0])$ and the Neumann boundary part is $\Gamma_N := \partial\Omega \setminus \Gamma_D$. For the discretization via the FCM, we embed $\Omega$ into the L-shaped domain $\hat\Omega := (-1, 1)^2 \setminus ([0, 1] \times [-1, 0])$. The initial mesh $T_0$ consists of 48 square elements of degree $p = 1$ and the initial depth $\alpha_0$ is set to $\alpha_0(K) := 2$ for all $K \in T_0$. The data $g$ is chosen such that the non-smooth solution $u$ of Sect. 4.2 is attained. We choose $J(v) := \int_S v \,\mathrm{d}x$ as a goal functional, where $S := [1/4, 1/2]^2$ is a subset of the domain. The corresponding dual problem in weak form seeks $z \in V$ such that $\int_\Omega \nabla z \cdot \nabla v \,\mathrm{d}x = \int_S v \,\mathrm{d}x$ for all $v \in V$. The exact solutions $u$ and $z$ are approximated by higher-order finite element solutions $u_+$ and $z_+$ for which the same mesh but finite elements of degree $p = 2$ are used. The forms $a_+$, $F_+$, and $J_+$ are computed with $\ell := 3$.
Fig. 13 Decay of the error, the estimated discretization error and the estimated quadrature error

Table 1 Number of degrees of freedom, error, estimated discretization and quadrature error, and effectivity index

DOF      |err_+|          |e_D|            e_Q              eff
21       1.587 · 10^-2    1.843 · 10^-3    8.057 · 10^-6    0.1166
28       5.240 · 10^-4    4.293 · 10^-4    1.466 · 10^-6    0.8221
44       2.981 · 10^-4    2.641 · 10^-4    1.351 · 10^-6    0.8907
85       1.153 · 10^-4    1.015 · 10^-4    6.787 · 10^-7    0.8865
166      6.396 · 10^-5    5.885 · 10^-5    3.198 · 10^-7    0.9252
312      3.226 · 10^-5    2.996 · 10^-5    8.177 · 10^-8    0.9312
621      1.266 · 10^-5    1.170 · 10^-5    7.798 · 10^-8    0.9307
1192     7.029 · 10^-6    6.666 · 10^-6    1.996 · 10^-8    0.9511
2318     3.028 · 10^-6    2.868 · 10^-6    2.230 · 10^-8    0.9546
4385     1.677 · 10^-6    1.622 · 10^-6    2.808 · 10^-9    0.9694
8757     7.775 · 10^-7    7.555 · 10^-7    2.268 · 10^-9    0.9746
16648    4.051 · 10^-7    3.960 · 10^-7    2.452 · 10^-9    0.9835
33749    1.956 · 10^-7    1.881 · 10^-7    6.407 · 10^-10   0.9651
64833    1.038 · 10^-7    9.835 · 10^-7    5.849 · 10^-10   0.9532
132180   5.250 · 10^-8    4.789 · 10^-8    5.031 · 10^-10   0.9217
Since the exact solution $u$ is known and the functional $J$ can be evaluated up to machine precision on $S$, we compute $J(u) = \int_S u \,\mathrm{d}x \approx 0.02047405612656314$. The exact error err is approximated by $\mathrm{err} \approx \mathrm{err}_+ := J(u) - J_+(u_h)$. Figure 13 shows the reduction of the error $\mathrm{err}_+$, the estimated discretization error $e_D$, and the estimated quadrature error $e_Q$. The convergence of order $O((\mathrm{DOF})^{-1})$ delivered by the adaptive strategy is the optimal algebraic order for elements of degree 1, while the uniform refinement achieves a convergence order of only $O((\mathrm{DOF})^{-3/4})$. The numerical values of the error and the estimates along with the efficiency indices are displayed in Table 1. The indices are approximately 0.9, which shows that the exact error is captured well by the estimation.
Fig. 14 Final mesh with 132 180 DOF
By inspecting the vertical lines in the graph representing e Q in Fig. 13, we see that the global quadrature mesh refinement from step 4 of the refinement strategy is performed several times for the same finite cell mesh to keep the quadrature error below a fraction of the discretization error. The final mesh with approx. 130 000 degrees of freedom is shown in Fig. 14. We see that the corner singularity at the origin is resolved by strong refinements. Also, the corners of the subdomain S are resolved by adaptive refinements. In contrast, the circular line, where the quadrature error is nonzero, is ignored by the adaptive mesh refinements which suggests that the separation of the discretization error and the quadrature error is effective.
6 Conclusion and Outlook

In this paper, we discuss error control and adaptivity for the finite cell method. First, we constructed $k$-times differentiable basis functions on meshes with hanging nodes. The basis functions are defined by means of element-wise shape functions associated with the nodes of the element. This procedure allows for a rigorous proof of the differentiability properties. The construction exploits the fact that, owing to the immersed boundary approach of the FCM, complicated domains can be handled with simple meshes consisting of rectangles or squares. Second, we introduced a residual-based and a dual weighted residual (DWR) based error estimator for the finite cell method applied to Poisson's problem. While the residual-based estimator measures the error in the energy norm, the DWR estimator allows for estimating the error with respect to linear quantities of interest suitable to the user's needs. In principle, the DWR method can be extended to also treat nonlinear
problems and nonlinear quantities of interest (see [12] for the details and numerical examples). The DWR estimator presented in this paper is capable of separating the discretization error and the quadrature error, which occur naturally when the finite cell method with inexact integration is used. This allows for a separate adaptation of the FCM mesh and an associated mesh for the quadrature. In the standard finite element case, the residual-based estimator relies on Galerkin orthogonality in Poisson's problem. However, in the FCM situation, this property is disturbed by a non-standard stabilization term controlled by the parameter $\varepsilon$. Although our residual-based estimator is capable of dealing with this stabilization term in the case of exact integration, inexact integration would introduce another perturbation of unknown magnitude. It is a subject of future work how the reliability proof can be adapted to these requirements. While the estimator is provably reliable (with unknown multiplicative constants), it is not efficient in general, as a counterexample in [14] shows. The example is based on a family $T_h$ such that $|K_h \cap \Omega| \to 0$ as $h \to 0$ for some $K_h \in T_h$. However, the numerical examples suggest only a mild overestimation of the error. Compared to the rigor of the reliability proof for the residual-based estimator, the DWR estimator and its computation is based on heuristic extrapolation-type arguments. Although the identity relating the error with the DWR estimator terms is exact, the introduction of improved approximations of the exact primal and dual solutions for the actual evaluation of the estimator terms allows reliability to be proven only in restrictive situations [39].

Acknowledgements Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – SCHR 1244/4-2 – SPP 1748.
References 1. A. Abedian, A. Düster, Equivalent Legendre polynomials: numerical integration of discontinuous functions in the finite element methods. Comput. Methods Appl. Mech. Eng. 343, 690–720 (2019) 2. A. Abedian, J. Parvizian, A. Düster, E. Rank, The finite cell method for the J2 flow theory of plasticity. Finite Elem. Anal. Des. 69, 37–47 (2013) 3. W. Bangerth, R. Rannacher, Adaptive Finite Element Methods for Differential Equations (Birkhäuser, 2013) 4. H. Blum, A. Schröder, F.T. Suttmeier, A posteriori estimates for FE-solutions of variational inequalities, in Numerical Mathematics and Advanced Applications: Proceedings of ENUMATH 2001 the 4th European Conference on Numerical Mathematics and Advanced Applications Ischia, July 2001. ed. by F. Brezzi, A. Buffa, S. Corsaro, A. Murli (Springer Milan, Milano, 2003), pp. 669–680 5. D. Braess, Finite Elemente: Theorie, schnelle Löser und Anwendungen in der Elastizitätstheorie (Springer, 2013) 6. E. Burman, S. Claus, P. Hansbo, M.G. Larson, A. Massing, CutFEM: discretizing geometry and partial differential equations. Int. J. Numer. Methods Eng. 104(7), 472–501 (2015) 7. A. Byfut, A. Schröder, A fictitious domain method for the simulation of thermoelastic deformations in nc-milling processes. Int. J. Numer. Methods Eng. 113(2), 208–229 (2017)
8. A. Byfut, A. Schröder, Unsymmetric multi-level hanging nodes and anisotropic polynomial degrees in H 1 -conforming higher-order finite element methods. Comput. Math. Appl. 73(9), 2092–2150 (2017) 9. D.M. Causon, D.M. Ingram, C.G. Mingham, A Cartesian cut cell method for shallow water flows with moving boundaries. Adv. Water Resour. 24(8), 899–911 (2001) 10. M. Dauge, A. Düster, E. Rank, Theoretical and numerical investigation of the finite cell method. J. Sci. Comput. 65(3), 1039–1064 (2015) 11. P. Di Stolfo, A. Düster, S. Kollmannsberger, E. Rank, A. Schröder, A posteriori error control for the finite cell method. PAMM 19(1), e201900419 (2019) 12. P. Di Stolfo, A. Rademacher, A. Schröder, Dual weighted residual error estimation for the finite cell method. J. Numer. Math. 27(2), 101–122 (2019) 13. P. Di Stolfo, A. Schröder, C k and C 0 hp-finite elements on d-dimensional meshes with arbitrary hanging nodes. Finite Elem. Anal. Des. 192, 103529 (2021) 14. P. Di Stolfo, A. Schröder, Reliable residual-based error estimation for the finite cell method. J. Sci. Comput. 87(12) (2021) 15. P. Di Stolfo, A. Schröder, N. Zander, S. Kollmannsberger, An easy treatment of hanging nodes in hp-finite elements. Finite Elem. Anal. Des. 121, 101–117 (2016) 16. S.C. Divi, C.V. Verhoosel, F. Auricchio, A. Reali, E.H. van Brummelen, Error-estimate-based adaptive integration for immersed isogeometric analysis. Comput. Math. Appl. 80(11), 2481– 2516 (2020). https://doi.org/10.1016/j.camwa.2020.03.026 17. W. Dörfler, A convergent adaptive algorithm for Poisson’s equation. SIAM J. Numer. Anal. 33(3), 1106–1124 (1996) 18. A. Düster, O. Allix, Selective enrichment of moment fitting and application to cut finite elements and cells. Comput. Mech. 65(2), 429–450 (2020) 19. A. Düster, S. Hubrich, Adaptive integration of cut finite elements and cells for nonlinear structural analysis, in Modeling in Engineering Using Innovative Numerical Methods for Solids and Fluids (Springer, 2020), pp. 31–73 20. A. Düster, J. Parvizian, Z. Yang, E. Rank, The finite cell method for three-dimensional problems of solid mechanics. Comput. Methods Appl. Mech. Eng. 197(45), 3768–3782 (2008) 21. K.J. Fidkowski, D.L. Darmofal, Output-based adaptive meshing using triangular cut cells. Technical report, Aerospace Computational Design Laboratory, Dept. of Aeronautics (2006) 22. K.J. Fidkowski, D.L. Darmofal, A triangular cut-cell adaptive method for high-order discretizations of the compressible Navier-Stokes equations. J. Comput. Phys. 225(2), 1653–1672 (2007) 23. W. Garhuom, S. Hubrich, L. Radtke, A. Düster, A remeshing strategy for large deformations in the finite cell method. Comput. Math. Appl. 80(11), 2379–2398 (2020). https://doi.org/10.1016/j.camwa.2020.03.020. http://www.sciencedirect.com/science/article/ pii/S0898122120301243 24. R. Glowinski, T.W. Pan, J. Periaux, A fictitious domain method for Dirichlet problem and applications. Comput. Methods Appl. Mech. Eng. 111(3–4), 283–303 (1994) 25. S. Heinze, T. Bleistein, A. Düster, S. Diebels, A. Jung, Experimental and numerical investigation of single pores for identification of effective metal foams properties. Zeitschrift für Angewandte Mathematik und Mechanik 98, 682–695 (2018). https://doi.org/10.1002/zamm.201700045 26. S. Heinze, M. Joulaian, A. Düster, Numerical homogenization of hybrid metal foams using the finite cell method. Comput. Math. Appl. 70, 1501–1517 (2015). https://doi.org/10.1016/j. camwa.2015.05.009 27. S. Hubrich, P. Di Stolfo, L. Kudela, S. Kollmannsberger, E. 
Rank, A. Schröder, A. Düster, Numerical integration of discontinuous functions: moment fitting and smart octree. Comput. Mech. 60(5), 863–881 (2017) 28. S. Hubrich, A. Düster, Numerical integration for nonlinear problems of the finite cell method using an adaptive scheme based on moment fitting. Comput. Math. Appl. 77(7), 1983–1997 (2019) 29. L. Hug, S. Kollmannsberger, Z. Yosibash, E. Rank, A 3d benchmark problem for crack propagation in brittle fracture. Comput. Methods Appl. Mech. Eng. 364, 112905 (2020)
30. M. Joulaian, S. Hubrich, A. Düster, Numerical integration of discontinuities on arbitrary domains based on moment fitting. Comput. Mech. 57(6), 979–999 (2016) 31. D. Knees, A. Schröder, Global spatial regularity for elasticity models with cracks, contact and other nonsmooth constraints. Math. Methods Appl. Sci. 35(15), 1859–1884 (2012) 32. S. Kollmannsberger, D. D’Angella, E. Rank, W. Garhuom, S. Hubrich, A. Düster, P. Di Stolfo, A. Schröder, Spline- and hp-Basis Functions of Higher Differentiability in the Finite Cell Method (GAMM-Mitteilungen, 2019) 33. L. Kudela, N. Zander, T. Bog, S. Kollmannsberger, E. Rank, Efficient and accurate numerical quadrature for immersed boundary methods. Adv. Model. Simul. Eng. Sci. 2(1), 10 (2015) 34. L. Kudela, N. Zander, S. Kollmannsberger, E. Rank, Smart octrees: accurately integrating discontinuous functions in 3D. Comput. Methods Appl. Mech. Eng. 306, 406–426 (2016) 35. J.M. Melenk, hp-interpolation of nonsmooth functions and an application to hp-a posteriori error estimation. SIAM J. Numer. Anal. 43(1), 127–155 (2005) 36. J.M. Melenk, B.I. Wohlmuth, On residual-based a posteriori error estimation in hp-FEM. Adv. Comput. Math. 15(1–4), 311–331 (2001) 37. E. Nadal, J. Ródenas, J. Albelda, M. Tur, J. Tarancón, F. Fuenmayor, Efficient finite element methodology based on cartesian grids: application to structural shape optimization, in Abstract and Applied Analysis (Hindawi, 2013) 38. M. Nemec, M. Aftosmis, Adjoint error estimation and adaptive refinement for embeddedboundary Cartesian meshes, in 18th AIAA Computational Fluid Dynamics Conference (2007), p. 4187 39. R.H. Nochetto, A. Veeser, M. Verani, A safeguarded dual weighted residual method. IMA J. Numer. Anal. 29(1), 126–140 (2009) 40. A. Özcan, S. Kollmannsberger, J. Jomo, E. Rank, Residual stresses in metal deposition modeling: discretizations of higher order. Comput. Math. Appl. 78(7), 2247–2266 (2019) 41. J. Parvizian, A. Düster, E. Rank, Finite cell method. Comput. Mech. 41(1), 121–133 (2007) 42. A. Rademacher, Adaptive finite element methods for nonlinear hyperbolic problems of second order. Ph.D. thesis, TU Dortmund (2010) 43. R. Rannacher, J. Vihharev, Adaptive finite element analysis of nonlinear problems: balancing of discretization and iteration errors. J. Numer. Math. 21(1), 23–62 (2013) 44. T. Richter, T. Wick, Variational localizations of the dual weighted residual estimator. J. Comput. Appl. Math. 279, 192–208 (2015) 45. M. Ruess, D. Tal, N. Trabelsi, Z. Yosibash, E. Rank, The finite cell method for bone simulations: verification and validation. Biomech. Model. Mechanobiol. 11(3), 425–437 (2012) 46. V. Saul’ev, On the solution of some boundary value problems on high performance computers by fictitious domain method. Sib. Math. J 4(4), 912–925 (1963) 47. D. Schillinger, M. Ruess, N. Zander, Y. Bazilevs, A. Düster, E. Rank, Small and large deformation analysis with the p- and B-spline versions of the finite cell method. Comput. Mech. 50(4), 445–478 (2012). https://doi.org/10.1007/s00466-012-0684-z 48. L.L. Schumaker, L. Wang, Spline spaces on TR-meshes with hanging vertices. Numerische Mathematik 118(3), 531–548 (2011) 49. C. Schwab, p-and hp-Finite Element Methods: Theory and Applications in Solid and Fluid Mechanics (Oxford University Press, 1998) 50. E.M. Stein, Singular integrals and differentiability properties of functions, vol. 2 (Princeton University Press, 1970) 51. H. Sun, D. Schillinger, S. Yuan, Implicit a posteriori error estimation in cut finite elements. Comput. Mech. 
pp. 1–22 (2019) 52. B. Szabó, A. Düster, E. Rank, The p-version of the finite element method, in Encyclopedia of Computational Mechanics (2004) 53. A. Taghipour, J. Parvizian, S. Heinze, A. Düster, The finite cell method for nearly incompressible finite strain plasticity problems with complex geometries. Comput. Math. Appl. 75, 3298–3316 (2018) 54. F.G. Tricomi, Vorlesungen über Orthogonalreihen (Springer, 1955)
55. C. Verhoosel, G. van Zwieten, B. van Rietbergen, R. de Borst, Image-based goal-oriented adaptive isogeometric analysis with application to the micro-mechanical modeling of trabecular bone. Comput. Methods Appl. Mech. Eng. 284, 138–164 (2015) 56. M. Yvinec, 2D triangulation, in CGAL User and Reference Manual, 4.14 edn. (CGAL Editorial Board, 2019). https://doc.cgal.org/4.14/Manual/packages.html#PkgTriangulation2 57. N. Zander, S. Kollmannsberger, M. Ruess, Z. Yosibash, E. Rank, The finite cell method for linear thermoelasticity. Comput. Math. Appl. 64(11), 3527–3541 (2012)
Frontiers in Mortar Methods for Isogeometric Analysis Christian Hesch, Ustim Khristenko, Rolf Krause, Alexander Popp, Alexander Seitz, Wolfgang Wall, and Barbara Wohlmuth
C. Hesch (B)
Chair of Computational Mechanics, Universität Siegen, Paul-Bonatz Str. 9-11, 57068 Siegen, Germany
e-mail: [email protected]

A. Popp
Institute for Mathematics and Computer-Based Simulation, Universität der Bundeswehr München, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany
e-mail: [email protected]

A. Seitz · W. Wall
Institute for Computational Mechanics, Technische Universität München, Boltzmannstr. 15, 85748 Garching, Germany
e-mail: [email protected]
W. Wall e-mail: [email protected]

R. Krause
Chair for Advanced Scientific Computing, Università della Svizzera italiana, Via Giuseppe Buffi 13, 6900 Lugano, Switzerland
e-mail: [email protected]

U. Khristenko · B. Wohlmuth
Institute for Numerical Mathematics, Technische Universität München, Boltzmannstr. 3, 85748 Garching, Germany
e-mail: [email protected]
B. Wohlmuth e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Schröder and P. Wriggers (eds.), Non-standard Discretisation Methods in Solid Mechanics, Lecture Notes in Applied and Computational Mechanics 98, https://doi.org/10.1007/978-3-030-92672-4_15

Abstract Complex geometries as common in industrial applications consist of multiple patches, if spline-based parametrizations are used. The requirements for the generation of analysis-suitable models are increasing dramatically, since isogeometric analysis is directly based on the spline parametrization and is nowadays used for the calculation of higher-order partial differential equations. The computational, or more generally, the engineering analysis necessitates suitable coupling techniques between the different patches. Mortar methods have been successfully applied for the coupling of patches and for contact mechanics in recent years to resolve the arising issues within the interface. We present here current achievements in the design of
mortar technologies in isogeometric analysis within the Priority Program SPP 1748, “Reliable Simulation Techniques in Solid Mechanics. Development of Non-standard Discretisation Methods, Mechanical and Mathematical Analysis”.
1 Introduction

Mortar methods have been developed in the early 1990s of the past century [4], see also [1] in the context of domain decomposition problems, originally applied to spectral and finite element methods, see, among many others, [28, 77, 81]. Domain decomposition techniques provide powerful tools for the coupling of different, in general nonconforming meshes. A wide range of reasons exists to create such interfaces; the characteristic idea of mortar methods relies on a weak, integral condition in contrast to strong point-wise couplings. Within the principle of virtual work and far beyond this mechanical concept, mortar methods enter the corresponding balance equations in a variationally consistent manner. Lagrange multipliers in a dual form have been proposed in [78]. This allows for a cost-efficient and effective way of interface coupling. Several interpretations exist for the condensation, e.g., as elimination of the Lagrange multipliers via the Schur decomposition as presented in [29] or as a null-space reduction scheme as shown in [38]. In the latter citation, mortar methods are used for overlapping domain decomposition methods in the context of fluid-structure interaction problems (FSI), also known as immersed techniques. Moreover, [48] applied mortar methods to boundary-fitted FSI, demonstrating the wide range of applicability of this methodology. For contact mechanics, node-wise enforcement of the non-penetration condition has been used since the 1980s, see, e.g., [32, 33]. As shown in detail in [25], node-wise formulations do not pass the patch test and do not converge correctly, which is a major drawback of these methods. In contrast, mortar methods used as variationally consistent contact interface conditions as considered in [35, 43, 44, 60–64] pass the patch test. Isogeometric Analysis (IgA) as introduced in [45] has become a widely used methodology, see [9–11, 39]. This framework facilitates the usage of NURBS basis functions, emanating from the field of computer-aided design (CAD). Moreover, it allows for the construction of finite element basis functions with adjustable continuity across the element boundaries, in contrast to classical Lagrangian basis functions. This enables the numerical treatment of higher-order partial differential equations (PDEs), e.g. for Cahn-Hilliard or Cahn-Hilliard-like formulations [30], in fracture mechanics [6, 17, 18], in structural mechanics, e.g. in [2, 3, 22–24, 47, 65], and for generalized continua [27]. A major drawback of IgA is the decomposition of the whole domain into patches. In standard industrial geometries, the domain is decomposed into hundreds or even thousands of patches, which necessitates the application of suitable interface conditions. Therefore, the combination of mortar methods and IgA has been proposed in [8, 36]. A wide range of issues arises, mostly related to the suitable choice of
the Lagrange multiplier space. In a series of recent contributions [20, 21, 40, 67], higher-order domain decomposition has been addressed as well, which allows for the application of higher-order PDEs in multi-patch geometries. Finally, several contributions deal with IgA and mortar contact methods, see [13, 14, 19, 37, 69]. Biorthogonal splines for the effective condensation of the Lagrange multipliers have been developed in [82]. Multidimensional coupling has been addressed recently in [46, 72]. The paper is structured as follows. In Sect. 3, basic notations and IgA concepts are introduced within a general framework of unconstrained and constrained elasticity. In Sect. 4, recent trends in IgA-based mortar domain decomposition methods are shown. This is followed in Sect. 5 by recent contributions on IgA mortar contact techniques, along with multidimensional coupling conditions in Sect. 6. Eventually, conclusions are drawn in Sect. 7.
2 Coupled Simulations with Mortar Methods in HPC

Although mortar methods were originally developed for coupling on non-overlapping subdomains, the idea of variational transfer has been applied to a much wider class of problems. A principal advantage of mortar methods is their ability to couple different discretizations, either on interfaces and surfaces, in the volume, or even across dimensions, i.e., between 1D and 3D models or 3D and 2D models. Using variants or derivatives of the mortar methods, coupled multi-physics problems, such as fluid-structure interaction, can be realized using immersed methods with volume coupling [54]. For multi-scale simulations in mechanics, mortar methods can be used to couple molecular dynamics simulations with finite element approximations using an overlapping decomposition approach. Contact problems in mechanics can efficiently be dealt with using mortar methods for the coupling between surfaces [15, 59], and coupling across dimensions or non-conforming meshes can be used for the simulation of flow in fracture networks in the geo-sciences [5, 66, 74, 75]. Finally, mortar methods can also be employed to build multilevel approximation spaces for multigrid methods, thereby serving the construction of linear or non-linear multigrid methods on complex geometries [16, 85]. The flexibility mortar methods provide, however, is tightly connected to the capability of assembling the mortar transfer operator for volume coupling or surface coupling, or combinations thereof. This seemingly practical task turns out to play a pivotal role when more complex applications with possibly several non-connected or overlapping domains are considered. For large-scale simulations, also the parallel assembly of the transfer operator has to be considered. The latter is a challenging task in terms of efficiency and scalability, as in general no a priori information on the connectivity between two non-matching meshes is available, which then has to be generated and dealt with during the assembly. In this section, we will present examples from cardiac simulation, computational mechanics, and fluid-structure interaction in cardiac simulation and geo-sciences,
which illustrate the capabilities of the mortar method. Moreover, we will discuss the tedious and non-trivial assembly of the mortar operator in the context of massively parallel computations in HPC (e.g., [55]), which has been realized in the two libraries [84]. One example of computationally demanding simulations is multi-scale simulation in mechanics. The idea behind these multi-scale simulations is to resolve phenomena such as fracture locally by means of molecular dynamics and to use a continuum mechanics representation for the remaining body. Thus, on the discrete level, molecular dynamics simulations have to be coupled with finite element discretizations. In molecular dynamics (MD), atoms are represented as point masses, which are subject to internal and external forces. The positions of the atoms are assumed to follow Newton's equations of motion. The forces exerted on the atoms are modeled by means of the gradient of a potential, e.g. Lennard–Jones, which describes the behavior of the material under consideration. This leads to a system of ordinary differential equations, which then is solved numerically. For the coupling of molecular dynamics simulations, we have to transfer quantities such as displacements, velocities, and forces between a finite element discretization, which is based on integral quantities, and the MD discretization, which is based on pointwise given information (atoms). In [26] this coupling has been realized by attaching a partition of unity to the atoms and then using a mortar transfer operator for the coupling between the MD and the FEM discretizations. As it turns out, the mortar method in this context can be shown to act as a frequency filter, which effectively removes the high-frequency components of the MD displacements that cannot be represented on the FE mesh. As a consequence, mortar-based multi-scale coupling eliminates a large part of the unphysical wave reflections at the coupling interface and gives rise to a stable coupling between MD and FEM, see Fig. 1 and [26].
Fig. 1 Multi-scale coupling of Molecular Dynamics and Finite Elements via Partition of Unity and L 2 -projection
Frontiers in Mortar Methods for Isogeometric Analysis
409
In the context of fast solution methods, mortar methods have also been used to derive adaptive space-time discretization methods, see [50, 51], which combine the advantages of structured meshes in terms of simple data-structures with the advantages of adaptive discretizations. Here, for the coupling at the interfaces, mortar methods have been employed, allowing for a local (to a single processor or core) treatment of the unknowns. Clearly, the load balancing needs to be adapted to this situation. The resulting decomposition cannot only be used for the design of adaptive parallel methods, but also for the construction of additive Schwarz-preconditioners in space and time based on non-conforming space-time decomposition, see [50, 51]. Whereas in the above example the computation of the transfer operator at the domain interface is straight forward, it becomes more difficult and demanding in the case of coupling between non-matching or warped surfaces or in the case of volume coupling between bodies represented with unstructured meshes. By definition, the assembly of the transfer operator requires the evaluation of integrals on the intersection of two non-matching meshes. We refer to Fig. 6, which illustrates the complexity of the resulting intersections for a fracture network. In the case of surface coupling, meshes need to be projected from one surface onto another [15]. In a parallel setting, involving two or more domains or bodies, a major difficulty is that a priori we don’t know which elements of which subdomain will have a non-empty intersections [52]. We note that in a parallel computation these might be on completely different processors, so that a global search has to be carried out. A global search however is not advisable due to the resulting quadratic complexity. Thus, more efficient strategies, i.e. hierarchic strategies based on kd-trees, are employed for detecting possibly intersecting elements. The resulting cut-candidates are then checked in detail for possible intersections, so that the quadratures on the intersections can be carried out, in order to compute the entries of the mass matrix. From a technical point of view, detection and computation of the intersections has to be handled very carefully, as small cuts or ill-conditioned sub-problems will show up. Additionally, in order to guarantee scalability, the computation of the intersections and the computation of the local integrals are distributed globally to ensure equal load balancing. For surface related coupling, e.g. contact problems, this imbalance is quite obvious. It will, however, also show up for volume coupling. One possibility to ensure a good load balancing, is to use space-filling curves. We refer to [52], where this approach is described in detail as well as to the library MoonoLith [84], which to our knowledge is the only currently available library implementing variational transfer on arbitrary meshes in parallel for surfaces and volumes. As a consequence, the flexibility of mortar methods is counterbalanced by the complex assembly of the transfer operator for complex geometries. Once available, however, variational transfer can be used to design new numerical methods for coupled multiphysics simulations. Here, as an example, we consider immersed methods for fluid structure interaction in cardiac simulation. Figures 2 and 3 show the fluid-structure interaction interaction between the turbulent systolic jet and the leaflets of a bio-prosthetic aortic valve. The fluid-structure interaction
410 Fig. 2 Spatial distribution of the von Mises stresses in the bio-prosthetic aortic valve
Fig. 3 Vortical structures arising from the interaction of the bio-prosthetic aortic valve with the blood flow [54]
C. Hesch et al.
Frontiers in Mortar Methods for Isogeometric Analysis
411
Fig. 4 Spatial distribution of the von Mises stresses in the bio-prosthetic aortic valve during contact
formulation relies on a mortar approach to couple a finite difference discretization of the Navier-Stokes equations, with an finite-element formulation of an anisotropic fiber-reinforced model [34, 54, 83]. In contrast to other immersed approaches, the coupling strategy allows for an implicit treatment of the equations describing the solid dynamics. Moreover, it effectively prevents leakage at the interface, as the basis functions of the multiplier space form a partition of unity. Figures 4 and 5 show the combination of volume coupling with surface coupling, i.e. of FSI with contact for the considered bio-prosthetic aortic valve. Here, transfer on the entire volume as well as on the surface have to be carried out in order to satisfy the equality (FSI) constraint and inequality (contact) constraints (Figs. 6 and 7).
3 Basic Equations and Isogeometric Analysis We start with a short summary of elasticity with constraints to introduce the basic concepts to be dealt with in the following. Therefore, we consider a Lipschitz bounded domain with reference configuration B0 ⊂ Rd , d ∈ {2, 3}, undergoing a motion characterized by a time dependent deformation mapping ϕ : B0 × I → Rd , where I = [0, T ] is the time interval elapsed during the motion. The current configuration is denoted by Bt = ϕt (B0 ), material points are labeled by X ∈ B0 → Rd . Unconstrained elasticity. For the most basic setting in elasticity, we introduce the virtual work of the internal and external contribution and postulate that the principle of virtual work is valid. In a first step, the spaces of solution and weighting functions ¯ on Γ u , ϕ ∈ H1 (B0 ) | ϕ = ϕ V = δϕ ∈ H1 (B0 ) | δϕ = 0 on Γ u , S=
(1) (2)
are defined. Here, we make use of the standard notation for the Sobolev space Hs (B0 ), s ≥ 1, of square integrable functions ϕ with square integrable weak derivatives of the given order. Note that second and third gradient materials require ϕ ∈ H2 (B0 ) and
412
C. Hesch et al.
Fig. 5 Streamlines of the velocity fluid field during the closure of the valve when contact occurs
ϕ ∈ H3 (B0 ), respectively. In accordance with common nomenclature, we denote the Dirichlet boundary conditions Γ u and Neumann conditions Γ n , satisfying ∂B0 = Γ u ∪ Γ n and Γ u ∩ Γ n = ∅ throughout the time interval I. The principle of virtual work now reads: find ϕ ∈ S such that a(ϕ, δϕ) = l(ϕ, δϕ) ∀ δϕ ∈ V, where
(3)
Frontiers in Mortar Methods for Isogeometric Analysis
Fig. 6 Non conforming mesh method for flow in fracture porous media [66]
Fig. 7 Fluid Structure Interaction with contact for geo-sciences (from [74])
413
414
C. Hesch et al.
a(ϕ, δϕ) :=
∇X (δϕ) : P dV, B0
(4)
δϕ · B(ϕ) dV +
l(ϕ, δϕ) := B0
δϕ · T(ϕ) d A,
(5)
Γn
are the internal and external contributions to the virtual work. Here, P denotes the first Piola-Kirchhoff stress tensor, B body forces and T surface loads, acting on Γ n . For linear elasticity, the second term on the right hand side of (4) has to be linearized, hence, a(ϕ, δϕ) is a bi-linear form. Constrained elasticity. Elastic systems can be subject to a wide range of different constraints. For incompressible systems, constraints are defined throughout the whole domain. For overlapping domain decomposition methods as used for fluid-structure interaction (immersed techniques) as well as for solid-solid interaction problems, the constraints are defined on at least parts of the domain. Plasticity takes a special role, as the corresponding constraints are defined as Karush-Kuhn Tucker inequality conditions within the whole domain, almost always locally condensed. Many formulations focus on conditions at certain internal and external interfaces; classical domain decomposition problems act on fixed interior interfaces, whereas boundary fitted fluid-structure interaction formulations rest on moving internal interfaces. On the other hand, constraints acting on external interfaces like Dirichlet and control conditions can be applied as well as contact problems, which, similar to plasticity, are given as a set of Karush-Kuhn Tucker inequality conditions. This principle of virtual work reads now a(ϕ, δϕ) + b(λ, δϕ) = l(ϕ, δϕ) b(μ, ϕ) = 0
∀ δϕ ∈ V,
(6)
∀ μ ∈ N,
(7)
where the form b(μ, ϕ) is to be defined corresponding to the considered constraints. Detailed formulations in the context of the mortar methods under investigations in this article are presented in subsequent sections. All formulations require appropriate definitions for the spaces of solution functions M and weighting functions N of the Lagrange multipliers λ, depending on the chosen problem to be taken into account. Note that this kind of problems leads in general to a saddle point structure, such that the chosen spaces have to obey the inf-sup conditions. B-spline and NURBS spaces. Next, we introduce in a nutshell suitable B-spline and NURBS approximations. We refer to Hesch et al. [37] for more details on the construction of the spaces including hierarchical refinement procedures. A multivariate B-spline basis of degree p = [ p1 , . . . , pd ] is defined by the dyadic product = Θ1 ⊗ · · · ⊗ Θd of univariate knot vectors, built by a sequence of knots l Θl = [ξ1l ≤ ξ2l ≤ · · · ≤ ξn+ pl +1 ], l ∈ {1, . . . , d} and n the number of basis functions. In the absence of repeated knots, the partition [ξi11 , ξi11 +1 ] × · · · × [ξidd , ξidd +1 ] form an
Frontiers in Mortar Methods for Isogeometric Analysis
415
element of the mesh in the parametric domain.1 A single multivariate B-spline B A is then defined by B = A
Bpi (ξ)
=
Bpi (ξ 1 , . . . , ξ d )
=
d
Nil , pl (ξ l ),
(8)
l=1
with multi-index i = [i 1 , . . . , i d ] and supp(B A ) = [ξi11 , ξi11 + p1 +1 ] × · · · × [ξidd , ξidd + pd +1 ], providing the necessary support for the required continuity. The recursive definition of a univariate B-spline is given as follows Nil , pl =
ξ − ξill ξill + pl − ξill
Nil , pl −1 (ξ) +
starting with
Nil ,0 (ξ) =
ξill + pl +1 − ξ ξill + pl +1 − ξill +1
1 if ξill ≤ ξ < ξill +1 . 0 otherwise
Nil +1, pl −1 (ξ),
(9)
(10)
The collection of B-splines B A , A ∈ [1, . . . , n] is defined on and the corresponding spline space, defined as S() = span(B A ). Moreover, the extension to the NURBS space is given by d Nil , pl (ξ l ) wi l=1 , (11) R A = Rpi (ξ) = d l) w N (ξ ˆi ˆi iˆl , pl l=1
along with the corresponding NURBS weights wi . The fundamental properties of a basis are typical for B-spline and NURBS spaces: • Linear independence, c A R A (ξ) ≡ 0 ⇔ c A = 0. AA • Partition of unity, R (ξ) = 1. A
• Local support of B-splines and of NURBS • Smoothness is related to knot multiplicity m i 2 • Nonnegativity, i.e. R A (ξ) ≥ 0 . Please note that the Kronecker delta property R A (ξ i ) = δiA , which is common for Lagrangian basis functions, is in general not fulfilled here. The parametric domain as ˆ independent of the existence defined in the knot vector correlates to the domain B, A of local refinements. The shape functions R can be associated with a net of control We define the number of repetitions in at node i as multiplicity m i . C m continuity of the approximation spaces relates to solution functions u which are at least u ∈ Hm+1 (B), with Hm+1 (B) being the Hilbert space of square integrable functions with m + 1 square integrable derivatives.
1 2
416
C. Hesch et al.
points q A ∈ Rd , such that a geometrical map F : Bˆ0 → B0 can be defined to link the parameter and the physical space ϕh := F(ξ) = R A (ξ) q A = Rpi (ξ) qi ,
(12)
cf. da Veiga et al. [12]. The physical domain is the image of the parametric domain through F, and we note that both domains share the same regularity properties, see Brivadis et al. [8].
4 Mortar Techniques for Isogeometric Analysis To combine mortar with IgA techniques is of special interest for complex domains which require a multi-patch representation. The use of multi-patches can also help to avoid singularities in the domain mapping for relatively simple domains. To allow for patch-wise independent mesh generation, it is a must to tear and interconnect the discrete solution in a suitable way. Different alternatives such as discontinuous Galerkin (DG) based interior penalty approaches or mortar based Lagrange multiplier formulations exist. Here we focus on mortar techniques and guarantee always a variational consistent weak C 0 -coupling at the interior patch boundaries. We review two conceptual different strategies to realize higher-order continuity and address the cost of static elimination of the Lagrange multiplier. A rigorous mathematical analysis of standard mortar IgA techniques can be found in [8].
4.1 Biorthogonal Splines for Isogeometric Analysis This subsection is based on [82]. We provide the abstract construction framework of biorthogonal Lagrange multiplier basis functions. The handling of non-matching meshes in terms of Lagrange multipliers may result in a uniformly stable and variationally consistent discrete formulation, and thus from a theoretical point of view it is well-understood and attractive. However, it gives, in general, rise to a saddle point system, which is more challenging for iterative solvers due to its indefinite algebraic character. The analysis of such a system can be based on a mixed approach and requires continuity of the bilinear forms, approximation properties of the primal and dual space and a uniform inf-sup stability between the discrete spaces. Alternatively the saddle point system can be formally condensed. Then we are in the setting of a positive definite system on a constrained primal space. The dual variable is eliminated, and at the same time constraints are incorporated in the primal space. Now we can apply the theory of nonconforming finite elements and have to analyze the consistency error. In the mortar case, the consistency error is directly related to the jump of the discrete solution across the interface. Due to the weak continuity, which is enforced by the discrete Lagrange multiplier, it can be shown,
Frontiers in Mortar Methods for Isogeometric Analysis
417
−0.5
−0.5
P2−P1 (unstable) P2−P2 (stable)
P2−P1 (unstable) P2−P2 (stable) −1
λh(x)
λh(x)
−1
−1.5
−1.5
−2 0
0.2
0.6
0.4 x
0.8
1
−2 0
0.2
0.6
0.4
0.8
1
x
Fig. 8 Spurious oscillations in the Lagrange multiplier for p = 1 in case of quadratic splines as primal space and uniformly stable results for p = 2: Dirichlet boundary conditions (right) and Neumann boundary conditions (left)
in case of an optimal mortar method, that it is at least of the same order as the best approximation error in the unconstrained primal space. Using a discrete Lagrange multiplier space, which is obtained, up to modifications at the crosspoints, as trace space of the primal space restricted to the slave side, there exists no basis of the discrete constrained space such that all basis functions have a local support. This is related to the fact that a typical mass matrix is sparse but has a dense inverse. Consequently, we obtain basis functions having a support on the slave side, which is local in the direction normal to the interface but global along the interface. This also holds for classical low-order mortar finite elements but is even more pronounced in the higher-order IgA framework. To obtain an optimal mortar IgA approach, which results in a sparse positive definite system on the constrained space, the discrete Lagrange multiplier space has to satisfy four elementary properties: • (BA) – best approximation property with respect to the dual norm, • (LS) – local support of the basis functions, • (BC) – biorthogonality condition between the trace of the primal and the dual basis functions, • (UC) – uniform inf-sup condition. While (LS) and (BC) influence mainly the computational aspect, (BA) and (UC) are essential from the theoretical point of view. To see that (UC) is not only a counting argument of the dimensions of the involved spaces, we give a simple illustration of the effect of a mesh-dependent constant in the inf-sup condition. To do so, we consider p = 2 for the primal space and two different pairings in the discrete Lagrange multiplier space. One is obtained by the trace space with p = 2 and one with p = 1. Here, a counting point argument would be very misleading. In this case, the choice p = 1 for the discrete Lagrange multiplier is unstable while p = 2 is uniformly stable, as illustrated in Fig. 8. The Dirichlet condition case is more pronounced to instabilities due to the strong constraint of the primal space at the boundary. For more details and a theoretical analysis of the uniformly stable pairing, we refer to [8].
418
C. Hesch et al.
In the following, we briefly sketch the main steps in the construction of such a biorthogonal set of basis functions. The technical details can be found in [82]. For simplicity of notation, we consider only one interface Γ . Step I: In a first step, we embed the trace space on the slave side of the interface Γ into the product space of piecewise polynomials having no continuity constraints between the elements, i.e., the spline space of lowest regularity. Using for this higher dimensional product space an elementwise defined basis obtained by multiplying the original basis functions φi , i = 1, . . . I , with elementwise cut off functions χe , e ∈ E. Here I is the dimension of the trace space and E the set of elements. We note that due to the locality of the support of φi many product terms φi χe yield the zero function. The non-trivial ones can be reordered and denoted by φi,e , i = 1, . . . , ( p + 1)(d − 1) where p is the polynomial order of the trace space and d the dimension of the domain. They form the new basis functions of the product space. It is now easy to construct a biorthogonal basis by inverting a local mass matrix of size ( p + 1)(d − 1) × ( p + 1)(d − 1). More precisely, we require ψi,e ∈ Q p (e) such that φi,e ψ j,e ds = φi,e ds δi, j , i, j = 1, . . . , ( p + 1)(d − 1). (13) e
e
Then by construction the biorthogonal basis defined as product space satisfies (BA), (LS) and (BC) but unfortunately not (UC), and thus we cannot guarantee unique solvability of the system. Step II: In a second step, we reduce the dimension such that we have equality in the dimensions of the trace space and the one spanned by our modified biorthogonal basis functions. Each basis function φi of the trace space can be written uniquely as linear combination of the basis functions of the product space φi =
φg(i,e),e , i = 1, . . . , I,
(14)
e∈E,e⊂supp φi
where g(i, e) ≤ ( p + 1)(d − 1) such that φi χe = φg(i,e),e . We note that if i = j then for e ⊂ supp φi ∩ supp φ j =: Si j we get g(i, e) = g( j, e). To define now a smaller set of biorthogonal basis functions, we glue the ones of Step I together by using the same coefficients in the linear combination, i.e., ψg(i,e),e , i = 1, . . . , I. (15) ψi := e∈E,e⊂supp ψi
By doing so, we obtain for each basis function of the trace space one basis function in the dual space and (UC) can be shown. By construction it satisfies (LS), i.e., supp φi = supp ψi , and (BC), i.e.,
Frontiers in Mortar Methods for Isogeometric Analysis
419
Fig. 9 Example of biorthogonal basis functions satisfying (BC), (UC) and (LS) but not (BA)
Γ
φi ψ j ds =
e∈E
=
φi ψ j ds =
e
e∈E
e
φg(i,e),e ψg( j,e),e ds
e∈E,e⊂Si j
e
φg(i,e),e ds δg(i,e),g( j,e)
e∈E,e⊂Si j
=
e
φg(i,e),e ds δi, j =
(16)
Γ
φi ds δi, j .
Unfortunately by reducing the number of basis function, we have gained the property (UC) but lost for p > 1 the property (BA). In Fig. 9, we illustrate the result of Steps I-II for p = 2 and a seven dimensional trace space. The basis functions have exactly the same support as the ones of the trace space, i.e., at most three elements, and are discontinuous across the elements. We note that these basis functions cannot reproduce a linear function with mean value zero, and thus the required best approximation property is not satisfied. As can be easily observed, all interior basis functions φ3 , φ4 , φ5 have the same shape and can be obtained from each other by a simple coordinate shift. Step III: In the third step, we modify the biorthogonal basis that we have obtained in Step II such that (LS), (BC) and (UC) will be preserved and the best approximation property (BA) will be restored. We note that adding to a biorthogonal basis function a function which is orthogonal to the trace space does not destroy (BC). As a preliminary step, we define out of the biorthogonal product space locally defined functions being orthogonal to the trace space. Having these basis functions at hand, we then define coefficients by solving systems of small size. We point out that the system size depends on the order p but not on the meshsize. The small size of the system then guarantees that (LS) is preserved. Let us assume for the moment that we have calculated the coefficients, we then obtain the modified dual basis function by adding a linear combination of globally orthogonal functions where the computed coefficients are used. The system to be solved is given in such a way that the new basis satisfies by construction (BA). In other word the condition (BA) determines the small size system to be solved. Since on a uniform mesh all interior basis function have the same form, only a small number of different systems has to be solved, and this step is, as all other steps, computationally cheap. We point out that we have here to enlarge
420
C. Hesch et al.
Fig. 10 Example of a interior biorthogonal basis function, p = 2
the support from at most p + 1 elements to 2 p + 1 elements. A biorthogonal basis having the same support as the trace basis and optimal reproduction property does not exist for p > 1. In the standard finite element case, we refer to [77] for p = 1 and to [56] for p > 1. Figure 10 illustrates one interior dual basis function for p = 2. In this case the support is enlarged from 3 to 5 elements as indicated by the shadowed regions. Step IV: In case of crosspoints/wirebaskets, we further reduce the dimension of the basis functions such that (UC) holds with respect to a smaller trace space. It is of importance to note that this step has to be worked out carefully such that (BA) is not lost. Although, Steps I-IV are quite technical and can be for d = 3 on non-uniform meshes prone to coding bugs, all steps are local in the sense that the size of all involved systems to be solved does depend on p and d but not on the meshsize. In the rest of this subsection, we illustrate the robustness and flexibility of the approach by a numerical example previously discussed in full detail in [82]. As test case, we consider the well-known 2D benchmark of an infinite plate with a hole with the equations of linear elasticity. Due to symmetry, only a quarter of the plate is considered, and the infinite geometry is cut with the exact traction being applied as a boundary condition. As exemplary geometric setup, we choose two patches with a straight interface, but where the parametrization of the interface is different in the two patches. The entire setting is illustrated in Fig. 11 for a mesh ratio of 2:3. Numerical investigations are outlined here for quadratic ( p = 2) and cubic ( p = 3) splines and three different choices for the Lagrange multiplier bases. Specifically, a so-called standard Lagrange multiplier basis (‘std’), which is constructed as trace space of the primal space restricted to the slave side and therefore does not satisfy the (LS) condition, is compared with the two variants of biorthogonal Lagrange multiplier bases introduced above: the elementwise approach (‘ele dual’) from Fig. 9, which violates the (BA) property, and the approach with slightly enlarged support
Frontiers in Mortar Methods for Isogeometric Analysis
421
Fig. 11 2D plate with a hole - Geometry and setup (left) and two-patch parametrization (right) with an exemplary mesh ratio of 2:3. Reproduced and slightly modified from [82]
Fig. 12 2D plate with a hole - Convergence results for different Lagrange multiplier bases for p = 2 (left) and p = 3 (right) with exemplary mesh ratios of 2:3 and 3:2. Reproduced and slightly modified from [82]
from Step III above (‘optimal’), which satisfies all four conditions (BA), (LS), (BC) and (UC). Convergence results under uniform mesh refinement, with the discretization error of the displacement field u measured in the energy norm u − uh E , are reproduced from [82] in Fig. 12. Several key results can be identified: first and foremost, both the standard Lagrange multiplier basis as well as the optimal biorthogonal basis yield the expected convergence order of O(h p ) in all considered cases, and the absolute error levels are comparable. It should be kept in mind, however, that only the optimal biorthogonal basis from Step III above at the same time guarantees local support of the basis functions (LS). Second, the simple elementwise biorthogonal basis from Step II clearly cannot provide optimal results in all cases, but may exhibit a deteriorated convergence order of O(h 3/2 ). This becomes particularly apparent if the slave mesh is coarser than the master mesh and for higher-order interpolations (here p = 3). The
422
C. Hesch et al.
results underline the importance of all three steps in the construction of biorthogonal Lagrange multiplier basis functions highlighted above.
4.2 Multi-patch Analysis for Kirchhoff–Love Shells As a first prototypical example for the use of IgA concepts and advanced mortar methods in elasticity, we focus on Kirchhoff–Love (KL) shell elements. The developments here are based on [67]. The kinematical assumptions of KL shells rely on the out of plane curvature terms, used to describe the bending of the shell. This approach requires a general G 1 continuity across the whole domain (see [47] for details on G 1 continuity), in contrast to Hellinger-Reissner (HR) beams developed throughout the past decades. As IgA naturally allow us to deal with equations of higher-order, the central drawback of KL shell elements is removed. For complex geometries, the domain is always divided into sub-patches Ωm , m = 1, . . . , M with interfaces Γl , l = 1, . . . , L. In particular, we require G 1 continuous patch connection of the in general non-conform discretized patches. Within a classical mortar method to enforce C 0 continuity across the interface, a Lagrange multiplier space is introduced by the trace space of the displacements restricted to the slave side Γ1 . Now we can state that for a given ϕ(2) h at the interface Γ2 on the mortar side we assume now that a ϕ(1) at the interface Γ 1 on the slave side can be found, such h that the minimization problem (2) 2 (2) 2 ϕ(1) h − ϕh L 2 (Γ1 ) = inf w − ϕh L 2 (Γ1 ) , w∈Wh(1)
(17)
is satisfied. Here, Wh(1) = span{Nr(1) }, where Nr( j) are B-Spline shape functions on side j, restricted to the subset r , see [67], Sect. 3.2 for details. This leads to the classical mortar formulation of the constraints
1 2 1 3 Nr(1) dΓ, (18) Φ 0 := · Nr(1) qr(1) − Nr(1) · Nr(2) qr(2) 2 3 Γ1
where we have made use of ϕ(1) h
=
3 n(1) r =1
qr(1)
Nr(1) ,
ϕ(2) h
=
3 n(2)
qr(2) Nr(2) .
(19)
r =1
Note that we use the discrete Lagrange multiplier space with biorthogonality conditions between the primal and the dual basis functions, see Sect. 4.1 for details. The situation is different for a G 1 continuous coupling. Therefore, we assume again that (1) for a given ϕ(2) h at the interface Γ2 on the mortar side a ϕh at the interface Γ1 on the slave side can be found, such that
Frontiers in Mortar Methods for Isogeometric Analysis
423
Fig. 13 Initial bases (left) and modified bases functions (right)
ϕ(1) h
−
2 ϕ(2) h L 2 (Γ1 )
2 2 2 (1) α (2) + λk ϕh,k dΓ = ϕh,α − α=1 Γ 1
k=1
⎤
2 2 2 2 λαk ϕ(2) dΓ ⎦ inf ⎣w − ϕ(2) w,α − h L 2 (Γ1 ) + h,k ⎡
w∈Wh(1)
α=1 Γ 1
(20)
k=1
is satisfied, where we make use of the notation (•),α for the derivative with respect to the direction α. This is equivalent to enforce 1 Nr(1),1
Φ := Φ + 1
0
j
qr(1) 2
−
1 Nr(1),1
Γ1 1 Nr(1),2
Γ1
·
2 Nr(1),1
·
2 Nr(1),2 qr(1) 2
−
1 Nr(1),2
·
·
2
2
λ1k
k=1 3 λ2k Nr(2),k
3 Nr(2),k
dΓ + qr(2) 3
(21) dΓ. qr(2) 3
k=1
Here, λi , i, j = 1, 2 are four real numbers, defined at each point of the surface to j j control the G 1 continuity. Note that for λi = δi we obtain C 1 continuity. Certain ways exist to enforce the mortar constraints. Here, we provide some information for a local condensation procedure where we calculate modified basis functions to avoid the explicit usage of Lagrange multipliers. Therefore, we distribute points ξi on the parametric domain of the finer-meshed surface and determine the corresponding ¯ (2) ¯ (1) parameters ξi with ϕ h (ξ i ) = ϕ h (ξ i ) using an orthogonal projection. Afterwards we use the information from (20) evaluated at the distributed points to calculate new bases functions, see Fig. 13 for a graphical representation of modified functions. This procedure can be considered as a local null-space reduction scheme, acting on the space of basis functions.
424
C. Hesch et al.
f
Fig. 14 Reference configuration and boundary conditions (left) and von Mises stress distribution with C 0 continuous mortar coupling (right)
To demonstrate the applicability to KL shells, we investigate a cylinder composed of two, initially curved shell patches discretized by 18 × 18 and 20 × 20 cubic NURBS elements, respectively. Note that the NURBS weights are chosen such that two perfect half cylinders with a radius of 1 m, a length of 3 m and a thickness of 0.02 m are obtained. The cylinder is fixed along a bottom line and a line load of 28 N/m is applied on the opposite side as shown in Fig. 14, left side. On the right side, a classical C 0 continuous mortar method is applied, which does not allow for a transfer of bending moments across the interface. The effects on the deformed geometry displayed are obvious. In contrast, the results in Fig. 15 demonstrate that the coupling conditions satisfying (20) at the interface can counterbalance the non-matching meshes. Note that the G 1 coupling conditions are in general linear constraints, such that linear and angular momentum are conserved quantities throughout the interface, providing that the constraints are fulfilled in the reference configuration.
4.3 Weak C n Coupling for Solids In [20], the previously introduced concept of high-order mortar coupling conditions is extended towards the application on Cauchy continua. Moreover, we investigate different evaluations of general C n continuous coupling conditions, written in terms of a saddle point system. The constraints for C 0 and C 1 continuous coupling, respecj j tively, have been introduced in (18) and (21), using λi = δi . The extension for C 2 follows immediately via
3 r2 r1 r3 (1) (2) 1 Φ := Φ + Nr(1), dΓ, · N q − N · N q jl (1), jl r2 (1), jl (2), jl r3 2
1
j,l=1 Γ 1 j≥l
(22)
Frontiers in Mortar Methods for Isogeometric Analysis
425
Fig. 15 Von Mises stress result for G 1 coupling
which can be extended towards general C m continuity in a straight forward manner. One particular challenge in realization of a mortar method is the evaluation of the interface integral. This might be more a technical issue, but extremely important for an efficient implementation. Any quadrature rule based on the slave mesh does not respect the mesh lines of the master mesh and vice versa for a quadrature rule on the master mesh. Therefore it is common to use a quadrature rule based on a merged mesh, i.e. a mesh leads to an exact evaluation of the integral, if it respects the reduced smoothness of the master and slave functions at their respective lines. The standard mortar analysis assumes that the interface is resolved by the mesh on the master and the slave side. In that case, no projection of points on the discrete master side onto the discrete slave side and vice versa is required. Consequently the construction of the common mesh, named segmentation process, is still challenging but does not result in an additional variational crime. The situation is different for curved interfaces where the discrete interfaces do not match in general. Then for the segmentation process a mapping from the vertices of the master side onto the discrete slave interface is required. For the evaluation of the basis function on the master side the quadrature points of the merged mesh have to be mapped back onto the discrete master side, which results in an additional error contribution. In contrast, equidistant sample points on the parametric domain can be taken into account, which can be interpreted as a midpoint quadrature formula on a sub-mesh with respect to a weighted Lebesgue measure. More precisely, we use the Lebesgue measure on the parametric domain assuming a uniform decomposition. Then the quadrature weights are identically given by h 2 and are just a constant scaling which does not alter the least-squares approach. To separate the effect of the approximation error we reconsider a patch test and recall, that the stress is a constant in the domain. Therefore, a fixed number of elements is applied with a curved interface in between, see Fig. 16. Figure 17 shows the results for a C 0 , C 1 and C 2 coupling using different numbers of sample points. The error for the sample point evaluation decays with
426
C. Hesch et al.
Fig. 16 Patch test. Reference configuration (left) and computational mesh (right) for a patch test with curved interface
Fig. 17 Maximum error in the von Mises stress result of a patch test plotted over the total number of sample points per element
the same order independent of the enforced continuity across the interface, whereby the asymptotic limit for C 2 coupling is already reached at 25 sample points per element and dimension. For comparison, a Gauss integration with rising number of Gauss points is presented as well, enforcing C 0 continuity across the interface. To be precise, a Gauss integration on the parameter space as well as on the physical space using the usual transformation rule evaluated at each Gauss point on the interface, i.e. ϕh,ξ1 × ϕh,ξ2 for the area transformation, has been applied. As can be seen, the asymptotic limit is reached for a small number of Gauss points on the physical space, whereas on the parameter space the Gauss integration converges. To demonstrate the applicability on large deformations, we introduce an example composed of two parts, which are bonded together via the basis modification approach. The lower surface is fixed in space and the upper surface is rotated by an angle of φ = 720◦ . For both parts, we apply quadratic as well as cubic B-spline
Frontiers in Mortar Methods for Isogeometric Analysis
427
Fig. 18 Twisted block. Von Mises stress result for the quadratic discretization with full and reduced modification and the cubic discretization with full and reduced modification (from left to right)
based discretization, where the lower part consists of 4 × 4 × 10 elements and the upper part of 5 × 5 × 10 elements. In order to consider locking effects, we apply the full set of dependent degrees of freedom (dof), referred to as full modification, as well as a reduced set of dependent dofs, referred to as reduced modification. The von Mises stress distribution is depicted in Fig. 18. Concerning the result for the quadratic discretization with full modification, we observe a locking behavior at the interface since only 54 degrees of freedom are applied at the interface, reducing the approximation quality significantly. In contrast, the deformation is not suppressed for the reduced modification, where 216 degrees of freedom remain for the approximation of the interface. Moreover, the result of the cubic discretization with full modification shows a non significant locking behavior, visible only as a slightly reduced maximum value of the von Mises stress at the interface. Here, 96 degrees of freedom remain for the approximation at the interface. Eventually, for the cubic discretization with reduced modification, 294 degrees of freedom are used for the approximation at the interface such that we obtain a nearly perfect stress distribution across the interface without any disorders due to the basis modification.
4.4 Crosspoint Modification In a last step concerning higher-order coupling conditions using an extended mortar method, we consider crosspoint modifications at the crosspoints between sub-patches of a multipatch geometry. This subsection is based on [21], where a modification of the Lagrange multipliers is shown to decouple the interfaces, avoid overconstraint situations and resume the best approximation property (BA), as discussed in Sect. 4.1.
428
C. Hesch et al.
Fig. 19 Crosspoint modification. Evaluation of quadratic B-spline basis functions and their first derivatives in normal and tangential direction at the interface (from left to right). Modified functions and derivatives at the crosspoint are colored in red. The dashed curves denote derivatives associated with the interior of the slave patch
Our modification will be carried out on the parametric space of the crosspoint for the slave side of each interface separately. Thus, without restriction it is sufficient to consider one interface and one crosspoint at a time. In case of a weak C l−1 , 1 ≤ l ≤ p coupling we have to remove the first l basis functions on the interface but also functions associated with the interior of the subdomain. While the interior basis functions are affected by normal derivatives, the ones on the interface by tangential derivatives. Now we want to modify the following next p such that the new reduced basis functions are given as Rim
=
l
ci j R j + Ri+l ,
i = 1, . . . , p,
(23)
j=1
Rim = Ri+l ,
i > p,
(24)
with coefficients matrix C ∈ R p×l , [C]i j = ci j , see [21], Sect. 2.2 for details on the definition of the coefficient matrix. Obviously, the new Rim are linearly independent, and in case of maximal continuity C forms a square matrix. Let A1 ∈ R p×l and A2 ∈ R p× p with components [A1 ]i j = ai j and [A2 ]i j = ai j+l . These matrices can be obtained from computing the L 2 scalar products [Q]i j := (qi , R j ) on the corresponding boundary with i = 1, . . . , p, j = 1, . . . , p + l, and [M]i j := (Ri , R j ), i, j = 1, . . . , p + l and setting (A1 , A2 ) = QM−1 . Now, the matrix C can be formally computed from A1 and A2 by C := A−1 2 A1 .
(25)
3 2 for l = 2, [C] = 2 1 for l = 1. [C] = −2 − 23 −1
(26)
For p = 2 it reads as
5 2
In case of p = 3 we can impose up to weak C 2 interface continuity and find
Frontiers in Mortar Methods for Isogeometric Analysis
429
Fig. 20 Grain growth in crystalline materials. Setting of multi-patch problem (left) and initial field (right)
⎡ [C] =
37 6 ⎣− 25 3 19 6
⎤ 5 3 − 19 −3⎦ for l = 3. 3 7 1 3
(27)
Note that we have considered open knot vectors with p + 1 repeated knots at the crosspoint. Further matrices can be found in [21]. Figure 19 illustrates the crosspoint modification of a quadratic B-spline basis. Therein, the bases are evaluated at the interface where modified functions and their derivatives are colored in red. Note that basis functions associated with the interior of the slave patch only contribute to derivatives normal to the interface due to the assumed construction of the parametric domain. To demonstrate the applicability on systems with C 2 continuity requirements, we investigate a phase-field crystal equation defined in terms of an order-parameter ψ(X, t) : B × I → R which describes a local deviation from a reference mass density. Therefore, we introduce a Swift-Hohenberg energy function as F(ψ) = B
1 4 1 1 ψ + (r + 1) ψ 2 + ψ Δψ + ψ ΔΔψ dV, 4 2 2
(28)
where the parameter r represents an undercooling of the system. The phase-field crystal model is derived as a Wasserstein gradient flow of the Swift-Hohenberg energy δF + ˙ ψ=∇· ψ ∇ , ∀ (X, t) ∈ B × I, (29) δψ where the variational derivative reads δF = ψ 3 + (1 + r ) ψ + 2 Δψ + ΔΔψ. δψ
(30)
430
C. Hesch et al.
Fig. 21 Grain growth in crystalline materials. Solution of multi-patch simulation at different times t = [50, 65, 100]
Here, ψ + = ψ − ψ min ≥ 0 denotes a mobility parameter with lower bound ψ min . Moreover, assuming again a quadratic domain B = [a1 , a2 ], the strong form (29) is supplemented by periodic boundary conditions and initial conditions given by ψ(X, 0) = ψ0 (X).
(31)
Therein, ψ0 is a prescribed initial deviation of the mass density. A multi-patch setting is now introduced, where each patch is of size 145 × 145 and we apply 205 × 200 and 200 × 205 cubic B-spline based elements per patch such that each interface is non-conform as shown in Fig. 20. The simulation parameters are specified as follows. For the lower bound we set ψ min = −1.5 and for the parameter regarding the undercooling of the system we set r = −0.35. In addition, for the initial field we apply the configuration illustrated in Fig. 20. Results of the simulation are shown in Fig. 21 at different times. Note that no disturbances at the interface are observed.
4.5 Hybrid Approaches for Higher-Order Continuity Constraints This subsection is based on [40]. In contrast to the previous two subsections, we do not enforce higher-order continuity conditions weakly in terms of Lagrange multipliers. Here we combine ideas from C 0 -mortar based coupling techniques with DG methods. The resulting scheme yields a hybrid approach in the sense that we use discrete Lagrange multipliers only for the handling of the C 0 continuity condition but all higher regularity constraints are included by terms resulting from DG. As already mentioned IgA approaches are natural candidates for the approximation of fourth- and sixth-order partial differential equations. But they also may yield excellent numerical approximation results for second-order elliptic eigenvalue problems.
Frontiers in Mortar Methods for Isogeometric Analysis
431
Fig. 22 Decomposition of a 3D violin bridge into sub-patches (left), the fourth eigenmode: homogeneous (middle) and inlay (right)
Figure 22 shows the fourth eigenmodes of a maple wood violin bridge having nine orthotropic material parameters in the elasticity tensor. The difference between the middle and right picture results from the fact that in the middle picture the material parameters are constant, whereas in the picture on the right an inlay of a harder wood is inserted, and thus only piecewise constant material parameters are assumed. On the left the decomposition into the sub-patches is given for the homogeneous case. We recall that crosspoint/wirebasket modifications of the Lagrange multiplier basis functions have to be worked out. In comparison to classical conforming finite elements, IgA approaches provide much better results for higher frequency modes. However in case of non-matching meshes and weak C 0 continuity constraints across interfaces between sub-patches, one can observe severe outliers. To overcome these shortcomings and to improve quantitatively the error decay, a simple strategy is to enforce higher continuity by penalty terms. While for second-order elliptic problems, it is not mandatory to impose higher-order constraints on the regularity of the discrete solution, it is for higher-order PDEs. For simplicity of presentation, we discuss here only the model problem of a biharmonic equation and refer to [40] for more general situations and numerical results. The approach can be easily adapted to a sixth-order PDE using cubic splines with maximal regularity in the sub-patches. Also for second-order PDEs, we can penalize jumps in the derivatives up to the maximal regularity, i.e., for a cubic spline approach we can introduce a suitable penalization for jumps in the first and second derivatives. However it is important to note that all jump terms have to be scaled properly for not loosing the continuity of the bilinear form. The scaling is dictated by the order of the PDE and the order of the derivative. Having the abstract framework of defining Lagrange multipliers up to maximal weak regularity of the two previous subsections at hand and knowing how to penalize jumps in the derivatives and adding consistency terms in a DG setting allow us in a flexible way to combine both approaches. Of special interest is to keep the first layer of degrees of freedom in the Lagrange multiplier space associated with the nodes which sit physically on the interface between sub-patches but remove the layers which are associated with nodes in the interior of the sub-patches. This simplifies the required data-structure for handling higher-order continuity constraints.
432
C. Hesch et al.
Following [7], the weak formulation of a C 0 symmetric interior penalty formulation for biharmonic equations reads as: a(w, v) :=
T ∈Th
D 2 u : D 2 v dx + T
f ∈Fh
+
f ∈Fh
{wn,n } [vn ] + {vn,n } [wn ] ds
f
f
(32)
τ [vn ] [wn ] ds. hf
Here D 2 stands for the Hessian, Th denotes the set of elements, Fh the set of faces, and ·n and ·n,n denote the first and second normal derivatives on the faces, respectively. The second term on the right of (32) reflects consistency terms involving the first and second derivatives across the faces in normal direction. As it is standard, the parenthesis [·] stands for the jump and {·} for the average. On a boundary face, these terms have to be defined such that the boundary conditions are reflected properly. The last term in (32) depends on the penalty parameter τ ≥ τ0 > 0 and the diameter h f of the face f . If τ0 is large enough well-posedness is guaranteed, and the bilinear form is uniformly elliptic on a suitable space. Using a IgA approach, which is locally on each sub-patch C k , k ≥ 1, the jump terms with respect to faces associated with the interior of the sub-patches vanish and do not have to be taken into account. However for non-matching meshes at the subpatch interfaces, it is, in general, not possible to preserve the strong C 1 continuity within the IgA framework. Thus we adapt the bilinear form defined by (32) in two steps. The first step follows directly from the approach above and reduces the sum over all faces to a sum over all interfaces. It reads as ⎞ ⎛ K τ ⎝ {wn,n } [vn ] + {vn,n } [wn ] ds + [vn ] [wn ] ds ⎠ , (33) Γl f s h f l=1 f ∈Fl
h
where Fls h stands for all faces on the slave side of the interface Γl . To avoid locking of the approach, we relax in a second step the penalty term and only consider the jumps projected onto piecewise constants with respect to the mesh associated with the slave side, i.e., the modified bilinear form reads as a I g A (w, v) :=
M m=1 Ωm
+
K
K {wn,n } [vn ] + {vn,n } [wn ] ds D u : D v dx + 2
τ s h f
l=1 f ∈Fl
2
l=1
f
Γl
π0s [vn ] π0s [wn ] ds,
h
(34) where π0s stands for the projection operator onto piecewise constants with respect to the slave side mesh.
Frontiers in Mortar Methods for Isogeometric Analysis
433
Using standard inverse and trace estimates in combination with DG techniques it is easy to show that the resulting bilinear form is uniformly continuous and elliptic on a suitable space provided that C 0 continuity across the interfaces is given. However, even this is, in general in a mortar context, not possible. In contrast to the normal derivative where a discontinuity is penalized, we impose the C 0 continuity of the solution weakly. As it is standard for mortar techniques this is realized in terms of a Lagrange multiplier space. Here we can use all options which are known for the standard case of a second-order elliptic operator. To summarize, the hybrid formulation for the biharmonic equation guarantees that the solution is in the constraint mortar IgA-space, i.e., it satisfies a weak C 0 continuity but no weak higher-order continuity. The discontinuity in the normal derivative is penalized by the bilinearform a I g A (·, ·). This is the big difference between the hybrid approach discussed in this subsection and the weak C n continuity of Sects. 4.2–4.4. We point out that due to the need of a uniform inf-sup condition, crosspoints in 2D and the wirebasket in 3D have to be very carefully handled within the Lagrange multiplier approach. As it is typical for this situation, the number of degrees of freedom in the Lagrange multiplier space has to be reduced without compromising the approximation property. In the case of the hybrid approach one can also add terms which only penalize jumps at the crosspoints and wirebasket, respectively, see [40]. Due to the scaling different weights have to be used in the penalty formulation. In 2D a crosspoint is a geometrical object of dimension zero while the interface is of dimension one. This difference in the dimension has to be balanced by different mesh-size depending weight factors.
5 Mortar Contact Formulations for Isogeometric Analysis Mortar low-order finite element methods are widely used in contact mechanics. In contrast to penalty methods they allow for a variational consistent formulation of the non-penetration condition and a friction law. An optimal a priori estimate can be derived, for both the displacement and the surface stress being approximated by the Lagrange multiplier, [41]. The performance can be increased by applying adaptive mesh refinement techniques based on a posterior error indicators [79] and by specially designed energy preserving time integration schemes [31]. For an overview of variationally consistent formulations of inequality constrained problems, we refer to [80]. Of special interest are formulations which allow for local static condensation such as biorthogonal based Lagrange multiplier techniques. By this one can easily apply all-at once semi-smooth Newton techniques which can be implemented in form of primal-dual active set strategies, [42, 43]. In each Newton iteration, one has to decide for each node on the slave side of the contact interface the type of boundary condition. In case of a thermo-mechanical contact problem typically nonlinear Robin type conditions occur, and the heat flux can be eliminated locally, [44]. Most theoretical results and algorithmic approaches can be easily adapted to the IgA framework. In the following subsections, we report on recent results for contact mechanics and IgA approaches.
434
C. Hesch et al.
5.1 Biorthogonal Basis Functions Applied to Contact Mechanics This subsection is related to [53, 69]. While the condition (BA) in subsection 4.1 is of crucial importance to obtain optimal order best approximation properties for the constrained IgA space, this condition can be considerably relaxed in case of contact mechanics. Here the solution is typically not of high global regularity. Thus we cannot expect convergence rates of order p, p ≥ 2, in the energy norm if uniform refinement is used. Numerically one observes typically a sub-optimal convergence rate of ≈ 3/2 if quadratic or even cubic basis functions are used. If the numerically convergence rate is bounded not by the best approximation order of the involved discrete spaces but by the regularity of the solution, then the best approximation order might be reduced without loosing the observed convergence order. More precisely, if the solution is in H 5/2 (Ω) but not in H s (Ω) with s > 5/2, then we cannot expect a better order than 3/2 for the error decay in the energy norm. Typically due to the inequality constraints of contact problems such as the non-penetration condition or a friction law which determines about sliding, the solution of a mechanical contact problem is in H s (Ω) with s < 5/2. Therefore the required best approximation property for the discrete Lagrange multiplier space Mh reads as inf ψ − ψh H −1/2 (Γ ) ≤ C h 3/2 ψ H 1 (Γ ) ,
ψh ∈Mh
(35)
where H −1/2 (Γ ) stands here for the dual of the trace space on the possibly contact boundary Γ . This property holds on mild assumptions on the shape of the basis functions ψi of Mh and the following two conditions: • Each ψi is locally supported, in the sense that the support of ψi contains at most K 1 elements and each element is at most contained in the support of K 2 basis functions. Both K 1 and K 2 are supposed to be meshsize independent. • The constant function equal one is an element of Mh . (RE) Let us now consider the dual basis obtained in Sect. 4.1 after Step II (see also Fig. 9). As mentioned it does not satisfy an order p best approximation property for p ≥ 2 but it satisfies (RE). Recalling that Mh ⊂ span {φ i,e } and 1 ∈ span {φi,e }, it is easy to show that ψi form a partition of unity. Let Ψ := i ψi , then we get for all φ j,e that Γ
Ψ φ j,e ds =
i,supp φi ⊃e
=
ψg(i,e),e φ j,e ds = e
φ j,e ds = e
i,supp φi ⊃e
Γ
φ j,e ds δg(i,e), j e
(36)
φ j,e ds.
Thus, we found that Ψ = 1 ∈ Mh . In other words: although the order p best approximation property is lost for p ≥ 2, this dual basis may still preserve the observed
Frontiers in Mortar Methods for Isogeometric Analysis
435
Fig. 23 2D Hertzian contact. Geometry setup, material parameters and the chosen patch parametrization with an exemplary coarse NURBS mesh (level 2). Reproduced and slightly modified from [69]
convergence rate for problems where the convergence is bounded by the regularity of the solution as discussed above. While sub-optimal convergence results are to be expected in IgA patch coupling situations (as has been confirmed in Sect. 4.1, in particular with Fig. 12), the element-wise dual basis is still an attractive candidate for contact problems in IgA. In [53], low-order dual Lagrange multipliers have been applied to a dynamic viscoelastic contact problem with short memory. Existence and uniqueness results have been shown for the associated mixed formulation. For the discretization of the primal space, low-order conforming finite elements have been applied as a special case of IgA. Numerical results for higher-order NURBS-IgA in the mesh tying case but also for finite deformation contact problems can be found in [69]. As can be expected from the lack of a higher-order reproduction property of Mh , the error decay in the mesh tying examples is asymptotically not optimal for general non-matching meshes. To better illustrate the theoretical findings, a two-dimensional Hertzian-type contact example of a cylindrical body (radius R) with a rigid planar surface under plane strain conditions is reproduced and slightly modified from [69]. To avoid singularities in the isogeometric mapping, a small inner radius (radius r ) is introduced, see Fig. 23 for the geometric setting, the material parameters and the parametrization (different IgA patches are marked with different shading). The two horizontal upper boundaries undergo a prescribed vertical displacement. Meshes using second-order and third-order NURBS basis functions are employed, which is also illustrated in Fig. 23 for a very coarse mesh (level 2). In this setup, half of the elements on the potential contact surface are located within one ninth of the circumferential length and C p−1 continuity is ensured over the entire active contact surface. In the convergence study, uniform mesh refinement via knot insertion is performed on each of the patches resulting in a constant local element aspect ratio. Although only relatively small deformations are to be expected, a fully nonlinear description
436
C. Hesch et al.
Fig. 24 2D Hertzian contact. Convergence results for standard and dual Lagrange multiplier bases for p = 2 and p = 3. The biorthogonality construction for the dual case is based on Step I and II in Sect. 4.1. Reproduced and slightly modified from [69]
of the continuum using nonlinear kinematics and a Saint-Venant-Kirchhoff material under plane strain condition is assumed. Figure 24 depicts the convergence behavior in terms of the energy norm. Since no analytical solution is available, the finest mesh (level 7) with standard third-order NURBS is used as a numerical reference solution. In the limit, all methods converge with the expected order of O(h 3/2 ) in the energy norm and also the absolute error values are quantitatively very similar. In the secondorder case ( p = 2) the standard and dual Lagrange multiplier bases yield the same error asymptotically, whereas for third-order NURBS, a slightly elevated error of the dual variant as compared to the standard one can be observed. In view of Fig. 24, the use of a simple (i.e. element-wise) biorthogonal basis for the Lagrange multiplier (as obtained in Sect. 4.1 after Step II) instead of primal ones does not come at the expense of a reduced accuracy for contact problems, but yields equally accurate results while reducing the total system size to the number of displacement degrees of freedom only. In contrast to the IgA patch coupling case in Sect. 4.1, the convergence is now limited by the regularity of the solution, such that both the standard and biorthogonal Lagrange multiplier variants converge with the same order. The use of higher-order NURBS for contact problems with reduced regularity, i.e. third-order in Fig. 24 or even higher seems questionable from this viewpoint, since no faster convergence is gained from the higher-order interpolation.
5.2 Thermomechanical Contact Problems

The isogeometric mortar methods for isothermal contact derived in the previous section can also be extended to include thermal coupling effects consisting of heat conduction across the contact interface, frictional heating and a temperature-dependent coefficient of friction. The following remarks are based on [68, 71], and the interested reader is referred to the original publications for further details. From
the continuum mechanical perspective, the first two coupling effects are included in the contact interface heat fluxes, while the last one enters in Coulomb's law of friction via a temperature-dependent coefficient of friction. The thermomechanical coupling in the bulk continuum (e.g. thermo-elasticity or thermo-plasticity) is not revisited here, but the focus is clearly set on the thermomechanical interface and the choice of a discrete Lagrange multiplier basis in IgA. As in the isothermal case, a Lagrange multiplier field λ is introduced to enforce the mechanical contact constraints and can be identified as the negative slave-side contact traction, i.e. $\lambda = -t_c^{(1)}$. In a similar fashion, a thermal Lagrange multiplier field $\lambda_T$ is now introduced to enforce the thermal interface constraint and will be chosen as the slave-side heat flux $\lambda_T = q_c^{(1)}$. Specifically, the variational formulation of the thermal interface constraint is as follows:

$$\int_{\Gamma} \left[ \lambda_T - \beta_c\, \lambda_n \left( T^{(1)} - T^{(2)} \right) - \delta_c\, \lambda \cdot v_\tau \right] \delta\lambda_T \, d\gamma = 0, \qquad (37)$$
where $v_\tau$ represents the relative tangential velocity (sliding velocity), $\lambda_n$ is the normal part of the mechanical Lagrange multiplier (contact pressure), $\beta_c$ is the contact heat conductivity and $\delta_c$ is the distribution parameter for frictional heat dissipation. In the limit cases $\delta_c = 0$ or $\delta_c = 1$, the entire frictional dissipation is converted to heat on the master or slave side, respectively. Interestingly, the choice of a discrete Lagrange multiplier basis follows the exact same steps for the thermal contact part as for the mechanical contact part described in Sect. 5.1. The main complexity in terms of mortar discretization and algebraic system representation lies in the third and last part of the thermal interface constraint described above, which represents the frictional heat dissipation at the contact interface. Firstly, an objective kinematic measure has to be defined for the relative tangential velocity $v_\tau$. Secondly, and most importantly, the term involves a so-called 'triple' integral, i.e. an integral over a product of three shape functions at the contact interface, since $v_\tau$ introduces interface shape functions besides those of $\lambda$ and $\delta\lambda_T$. This poses very high demands on the quadrature accuracy at the contact interface, especially when dealing with higher-order approximations using Lagrange polynomials or NURBS, see e.g. [19]. Following the work in [44], an appropriate lumping technique is applied to reduce the computational cost without compromising on accuracy. The fully coupled nonlinear system to be solved in each time step is comprised of the structural and thermal equilibrium equations, the nonlinear complementarity (NCP) functions of normal and tangential contact and, finally, the thermal contact interface condition. The global vector of discrete unknowns consists of four groups of degrees of freedom: the vectors containing all nodal values of the displacements D and temperatures T as well as the discrete Lagrange multipliers $\bar{\lambda}$ and $\bar{\lambda}_T$. As in the isothermal case, the system is non-smooth due to the involved NCP functions, but still amenable to semi-smooth versions of Newton's method as discussed in [60, 61, 70]. If biorthogonal basis functions as introduced in Sects. 4.1 and 5.1 are used for the Lagrange multiplier fields $\lambda$ and $\lambda_T$, the local support (LS) property from Sect. 5.1 is again satisfied by construction due to the similar structure of the variational
Fig. 25 Thermomechanical contact. Geometry setup (left) and exemplary temperature solution (right). Reproduced and slightly modified from [68]
formulations including $\lambda$ and $\lambda_T$. Hence, both the usual Lagrange multiplier increments and the thermal Lagrange multiplier increments can be trivially condensed, and therefore the saddle point structure of the system matrix is successfully removed. The condensed linear system to be solved consists of displacement and temperature degrees of freedom only. In an abstract notation, it reads

$$\begin{bmatrix} K_{DD} & K_{DT} \\ K_{TD} & K_{TT} \end{bmatrix} \begin{bmatrix} \Delta D \\ \Delta T \end{bmatrix} = - \begin{bmatrix} R_D \\ R_T \end{bmatrix}. \qquad (38)$$
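The condensed system (38) is a plain two-by-two block linear system in the displacement and temperature increments. The following minimal NumPy sketch only illustrates this structure; all matrix and vector names are placeholders and are not taken from [68].

```python
import numpy as np

def solve_condensed_step(K_DD, K_DT, K_TD, K_TT, R_D, R_T):
    """Solve one Newton step of the condensed system (38): K [dD; dT] = -R."""
    K = np.block([[K_DD, K_DT],
                  [K_TD, K_TT]])          # monolithic block matrix
    R = np.concatenate([R_D, R_T])        # stacked residual
    inc = np.linalg.solve(K, -R)
    n_D = K_DD.shape[0]
    return inc[:n_D], inc[n_D:]           # displacement / temperature increments

# Tiny placeholder example with two displacement and two temperature dofs
dD, dT = solve_condensed_step(np.eye(2) * 10.0, 0.1 * np.ones((2, 2)),
                              0.1 * np.ones((2, 2)), np.eye(2) * 2.0,
                              np.array([1.0, 0.0]), np.array([0.0, 0.5]))
```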
Only one representative numerical example is presented in the following to highlight the most important features of isogeometric mortar methods for thermomechanical contact, see also [68] for further details and results. First, convergence under uniform mesh refinement is analyzed with a two-body contact setup as given in Fig. 25. Both bodies are modeled with a Neo-Hookean material law with $E^{(1)} = 5$, $E^{(2)} = 1$ and $\nu^{(1)} = \nu^{(2)} = 0.2$. Furthermore, thermal expansion is included with the coefficient of thermal expansion being $\alpha_T^{(1)} = \alpha_T^{(2)} = 0.01$, and the thermal conductivities are set to $k_0^{(1)} = 1$ and $k_0^{(2)} = 5$. At the contact interface, frictionless contact is assumed with a contact heat conductivity $\beta_c = 10^3$. The final configuration and temperature distribution are also illustrated in Fig. 25. Figure 26 exemplarily depicts the convergence behavior in the $H^1$ semi-norms of the discrete displacement and temperature fields within the two bodies for mesh sizes $h \in [2^{-7}, 2^{-1}]$. In particular, classical finite elements with Lagrange multiplier bases according to [62] and IgA with biorthogonal Lagrange multiplier bases according to [82] are compared for the quadratic case (p = 2). All variants converge with the optimal order to be expected based on the regularity of the solution, i.e. $O(h^{3/2})$. For second-order NURBS approximation, the absolute error values are slightly larger than for quadratic finite elements when the same mesh size is analyzed. This is not surprising since, at the same mesh size h, the isogeometric approximation has a
Fig. 26 Thermomechanical contact. Convergence results for second-order finite elements (solid lines) and second-order NURBS (dashed lines). For both approximations, dual Lagrange multipliers are employed. Comparison by mesh size (left) and by number of control points/nodes (right). Reproduced and slightly modified from [68]
smaller function space. More specifically, the B-spline basis used for the discretization at a certain mesh size is included entirely in the corresponding quadratic finite element discretization. If, however, the errors are analyzed with respect to the number of nodes or control points, respectively, the isogeometric case is slightly more accurate in the displacement solution, whereas the error in the discrete temperature field is of similar accuracy as compared to finite elements. To analyze the effects of frictional heating, thermoplasticity and nonlinear dynamics including mechanical and thermal energy conservation over an isogeometric contact interface, several more examples have been collected in [68]. The interested reader is referred to the original publication for details on problem geometry, loading and material parameters.
6 Multi-dimensional Coupling

In this last section, we consider a dimension-reduced model for fiber-matrix coupling. The fiber is modeled by a one-dimensional beam theory and is embedded into a three-dimensional body. This approach follows fundamental ideas as introduced in [57, 58]. For more recent contributions, we refer to [38] in the context of immersed finite element methods for fluid-structure interaction problems and to [49, 73] in the context of 3D-1D transport models in microvascular networks. Working with a 1D-3D model has clear advantages with respect to meshing in cases of stochastic fiber distributions. We start with a classical continuum degenerated beam model, assuming that the motion of the beam is given by the restricted position field

$$\tilde{x}(\theta_\alpha, s) = \tilde{\varphi}(s) + \theta_\alpha\, d_\alpha(s). \qquad (39)$$
As usual, Greek indices range over α = 1, 2 and Latin indices over i = 1, 2, 3. Here, the orthonormal triad $d_i$ is related to the reference triad via the rotation tensor $\tilde{R} \in SO(3)$, i.e. $\tilde{R} = d_i \otimes D_i$, orthogonal to the beam cross-section. As strain energy, we use the simple form

$$\tilde{\Psi}(\Gamma, K) = \frac{1}{2}\, \Gamma \cdot K_1 \Gamma + \frac{1}{2}\, K \cdot K_2 K, \qquad (40)$$

where $K_1 = \mathrm{Diag}[G A_1, G A_2, E A]$ and $K_2 = \mathrm{Diag}[E I_1, E I_2, G J]$. The standard beam strain and curvature measures, $\Gamma$ and $K$, are respectively given as

$$\Gamma = \tilde{R}^T \tilde{\varphi}' - D_3, \qquad K = \operatorname{axl}\!\left(\tilde{R}^T \tilde{R}'\right). \qquad (41)$$
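A minimal Python sketch of Eqs. (40) and (41), assuming the rotation tensor, its arc-length derivative and the centerline tangent are given as plain arrays; all function names are illustrative and not part of the formulation in [76] or [46].

```python
import numpy as np

def axl(W):
    """Axial vector of a (nearly) skew-symmetric 3x3 matrix."""
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def beam_strains(R, R_prime, phi_prime, D3=np.array([0.0, 0.0, 1.0])):
    """Strain and curvature measures (41): Gamma = R^T phi' - D3, K = axl(R^T R')."""
    Gamma = R.T @ phi_prime - D3
    K = axl(R.T @ R_prime)
    return Gamma, K

def beam_energy(Gamma, K, GA1, GA2, EA, EI1, EI2, GJ):
    """Strain energy density (40) with K1 = diag(GA1, GA2, EA), K2 = diag(EI1, EI2, GJ)."""
    K1 = np.diag([GA1, GA2, EA])
    K2 = np.diag([EI1, EI2, GJ])
    return 0.5 * Gamma @ K1 @ Gamma + 0.5 * K @ K2 @ K

# Undeformed straight beam (R = I, phi' = D3) gives zero strains and zero energy
Gamma, K = beam_strains(np.eye(3), np.zeros((3, 3)), np.array([0.0, 0.0, 1.0]))
print(beam_energy(Gamma, K, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0))  # -> 0.0
```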
The rotation tensor $\tilde{R}$ is parameterized by quaternions $q \in \mathbb{R}^4$, where $|q| = \sqrt{q \cdot q} = 1$. Let us introduce the beam load and moment $\tilde{n} = \tilde{R} K_1 \Gamma$ and $\tilde{m} = \tilde{R} K_2 K$, respectively. Postulating that the principle of virtual work holds, we obtain the strong form

$$\tilde{n}' + n_{\text{ext}} = 0, \qquad \tilde{m}' + \tilde{\varphi}' \times \tilde{n} + m_{\text{ext}} = 0. \qquad (42)$$

Here, we made use of the resultant contact force $\tilde{n}$ and resultant contact torque $\tilde{m}$, and $n_{\text{ext}}$, $m_{\text{ext}}$ are external contributions. Without loss of generality, we assume $m_{\text{ext}} = 0$. The strong form can be discretized using a method of weighted residuals, also known as a collocation-type method. Here we use the isogeometric collocation method proposed in [76] for the beam discretization. For the matrix, we apply a standard variational IgA approach. We enforce the continuity condition

$$\varphi = \tilde{\varphi} \quad \text{in } \tilde{\Omega}, \qquad (43)$$
for the matrix-fiber deformation fields weakly in terms of Lagrange multipliers. In the absence of other external forces for the beam, $n_{\text{ext}}$ contains only the matrix-fiber interaction force, which corresponds to our Lagrange multiplier. After condensation of the Lagrange multiplier along with the boundary conditions for $\tilde{n}$, we obtain the following matrix-fiber system:

$$\int_\Omega \frac{\partial \Psi}{\partial F} : \nabla \delta\varphi\, dV + \int_{\tilde{\Omega}} \tilde{n} \cdot \frac{\partial\, \delta\varphi}{\partial s}\, ds = 0, \qquad (44)$$

for all $\delta\varphi$ from an appropriate functional space, and
Fig. 27 Deformed fiber-matrix system with p = [2, 2, 4] and n = 17 (left) and the associated von Mises stress of the matrix (right)
$$\tilde{\varphi} = \varphi, \qquad \tilde{m}' + \tilde{\varphi}' \times \tilde{n} = 0, \qquad \tilde{n} = \tilde{n}(\tilde{\varphi}, q), \qquad \tilde{m} = \tilde{m}(q), \qquad q \cdot q = 1, \qquad (45)$$
in the collocation points. Here, $\Psi(F)$ and $F = \nabla\varphi$ denote the strain energy and the deformation gradient of the matrix, respectively. For a numerical example, we consider a simple model problem from [72, Sect. 4.2]: a beam of length 5 m and radius r = 0.125 m with a Young's modulus of 4346 N/m² is embedded into a 1 m × 1 m × 5 m matrix block of Saint-Venant-Kirchhoff material with a Young's modulus of 10 N/m². The Poisson ratio is zero for both materials. The matrix and the beam are both fixed at z = 0, and a moment of −0.025 Nm in x-direction is applied to the beam tip at z = 5. This simple test allows us to illustrate the stability of the proposed approach, its convergence and the model error. The simulation is performed using the Esra code developed at the University of Siegen by the group of C. Hesch. The matrix is discretized with NURBS of degree p = [p_x, p_y, p_z]. We consider the same degree in the x and y directions, p_x = p_y, and the degree in the z direction is the same as for the beam. The number n of elements in the x and y directions is the same and smaller by a factor of five than in the z direction. The deformations and von Mises stress for p = [2, 2, 4] and n = 17 are depicted in Fig. 27. The left picture in Fig. 28 shows convergence of the tip displacement u(tip) with the number n of elements in the x direction increasing, for different spline degrees. On the right, we also compare it with the reference finite element solution u_ref(tip) = 0.19009 m from [72] obtained with 2D-3D (surface-to-volume) coupling. As can be clearly seen, the numerical solution does not converge to the reference one. This results from the reduced model approach, and asymptotically we obtain the model error between a 1D-3D and a computationally more expensive full 3D-3D coupling. For a detailed analysis and a more sophisticated framework with a reduced model error for the 1D-3D coupling, we refer to [46].
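For reference, the diagonal entries of K₁ and K₂ in (40) can be computed from the circular cross-section data of this example (r = 0.125 m, E = 4346 N/m², ν = 0). The sketch below assumes a unit shear correction factor, which is an assumption and not stated in [72].

```python
import math

def circular_section_stiffnesses(r, E, nu=0.0, kappa=1.0):
    """Entries of K1 = diag(GA1, GA2, EA) and K2 = diag(EI1, EI2, GJ) from (40)
    for a circular cross-section of radius r; kappa is an assumed shear factor."""
    G = E / (2.0 * (1.0 + nu))
    A = math.pi * r**2
    I = math.pi * r**4 / 4.0   # second moment of area about either axis
    J = 2.0 * I                # polar moment of a circular section
    GA = kappa * G * A
    return (GA, GA, E * A), (E * I, E * I, G * J)

# Data of the fiber-matrix example: r = 0.125 m, E = 4346 N/m^2, nu = 0
K1_diag, K2_diag = circular_section_stiffnesses(0.125, 4346.0)
print(K1_diag, K2_diag)
```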
Fig. 28 Convergence of the tip displacement u(tip), left, and relative error |u(tip) − u_ref(tip)|/|u_ref(tip)|, right
7 Conclusions

We summarized modern mortar-based IgA methodologies and their application to a large variety of problems in structural mechanics ranging from non-linear contact, thermo-mechanical friction and fracture to fiber-reinforced material simulations. While some of the proposed methods are a mere combination of well-established techniques from the mortar finite element community with IgA approaches, the nature of IgA also brings new challenges. IgA approaches are typically used in the higher-order context and allow higher regularity of the discrete solution. Preserving this higher smoothness in the case of non-matching meshes is not as simple as in the finite element context. A variationally consistent approach requires a careful design of suitable discrete Lagrange multiplier spaces and, for all Lagrange multipliers, a suitable modification at crosspoints. Also, the construction of biorthogonal basis functions is not as local as in the low-order finite element context. However, it can be achieved at the price of enlarging the local support by at most p elements. Mortar-based IgA methodologies provide flexible and robust discretization schemes for approximating a large class of partial differential equations, including higher-order equations such as Kirchhoff-Love shells, which require G¹-continuity, and non-smooth problems such as contact mechanics. Traditional mortar methods are typically based on a non-overlapping domain decomposition of the physical d-dimensional domain and implement a weak coupling in terms of Lagrange multipliers defined on a (d − 1)-dimensional interface. However, the concept is not restricted to such situations and can also be generalized to a multi-dimensional setting, which opens the possibility for many more applications.

Acknowledgements We would like to thank T. Horger and A. Reali as co-authors of [40], L. Wunderlich as co-author of [8, 40, 82], M.D. Alaydin as co-author of [82], A. Matei, S. Sitzmann and K. Willner as co-authors of [53] as well as P. Farah and J. Kremheller as co-authors of [69]. Funds provided by the Deutsche Forschungsgemeinschaft under the contract/grant numbers: PO1883/3-1, WO671/11-1 as well as PO1883/1-1, WA1521/15-1 and WO671/15-1, WO671/15-2 (within the Priority Program SPP 1748, “Reliable Simulation Techniques in Solid Mechanics. Development
of Non-standard Discretisation Methods, Mechanical and Mathematical Analysis”) are gratefully acknowledged. Moreover, we would like to thank S. Schuß and M. Dittmann as co-authors of [20, 21, 67], S. Klinkel as co-author of [67] and ˙I. Temizer and M. Franke as co-authors of [37]. Funds provided by the Deutsche Forschungsgemeinschaft under the contract/grant numbers: HE5943/3-2, HE5943/6-1, HE5943/8-1, DI2306/1-1 and HE5943/5-1 (within the Priority Program SPP 1748, “Reliable Simulation Techniques in Solid Mechanics. Development of Non-standard Discretisation Methods, Mechanical and Mathematical Analysis”) are gratefully acknowledged. R. Krause wants to thank the Swiss National Science Foundation for their support through project 154090 and the Platform for Advanced Scientific Computing for support through the projects FASTER and AVFLOW.
References 1. F.B. Belgacem, The mortar finite element method with lagrange multipliers. Numerische Mathematik 84(2), 173–197 (1999) 2. D.J. Benson, Y. Bazilevs, M.C. Hsu, T.J.R. Hughes, Isogeometric shell analysis: the ReissnerMindlin shell. Comput. Methods Appl. Mech. Eng. 199, 276–289 (2010) 3. D.J. Benson, Y. Bazilevs, M.C. Hsu, T.J.R. Hughes, A large deformation, rotation-free, isogeometric shell. Comput. Methods Appl. Mech. Eng. 200, 1367–1378 (2011) 4. C. Bernardi, Y. Mayday, A.T. Patera, A new nonconforming approch to domain decomposition: the mortar element method, in Nonlinear Partial Differential Equations and Their Applications, pp. 13–51 (1994) 5. I. Berre, W.M. Boon, B. Flemisch, A. Fumagalli, D. Gläser, E. Keilegavlen, A. Scotti, I. Stefansson, A. Tatomir, K. Brenner, S. Burbulla, P. Devloo, O. Duran, M. Favino, J. Hennicker, I-H. Lee, K. Lipnikov, R. Masson, K. Mosthaf, M.G. Chiara Nestola, C.-F. Ni, K. Nikitin, P. Schädle, D. Svyatskiy, R. Yanbarisov, P. Zulian, Verification benchmarks for single-phase flow in three-dimensional fractured porous media (2020) 6. M.J. Borden, T.J.R. Hughes, C.M. Landis, C.V. Verhoosel, A higher-order phase-field model for brittle fracture: formulation and analysis within the isogeometric analysis framework. Comput. Methods Appl. Mech. Eng. 273, 100–118 (2014) 7. S.C. Brenner, L.-Y. Sung, C 0 interior penalty methods for fourth order elliptic boundary value problems on polygonal domains. J. Sci. Comput. 22(23), 83–118 (2005) 8. E. Brivadis, A. Buffa, B. Wohlmuth, L. Wunderlich, Isogeometric mortar methods. Comput. Methods Appl. Mech. Eng. 284, 292–319 (2015). Isogeometric Analysis Special Issue 9. J.A. Cottrell, T.J.R. Hughes, Y. Bazilevs, Isogeometric Analysis: Toward Integration of CAD and FEA (Wiley, 2009) 10. J.A. Cottrell, T.J.R. Hughes, A. Reali, Studies of refinement and continuity in isogeometric structural analysis. Comput. Methods Appl. Mech. Eng. 196(41–44), 4160–4183 (2007) 11. J.A. Cottrell, A. Reali, Y. Bazilevs, T.J.R. Hughes, Isogeometric analysis of structural vibrations. Comput. Methods Appl. Mech. Eng. 195(41–43), 5257–5296 (2006) John H. Argyris Memorial Issue. Part II 12. L.B. da Veiga, D. Cho, L.F. Pavarino, S. Scacchi, Overlapping Schwarz methods for isogeometric analysis. SIAM J. Numer. Anal. 50(3), 1394–1416 (2012) ˙ Temizer, P. Wriggers, G. Zavarise, A large deformation frictional contact 13. L. De Lorenzis, I. formulation using NURBS-based isogeometric analysis. Int. J. Numer. Methods Eng. 87(13), 1278–1300 (2011) 14. L. De Lorenzis, P. Wriggers, G. Zavarise, A mortar formulation for 3D large deformation contact using NURBS-based isogeometric analysis and the augmented Lagrangian method. Comput. Mech. 49(1), 1–20 (2012) 15. T. Dickopf, R. Krause, Efficient simulation of multi-body contact problems on complex geometries: a flexible decomposition approach using constrained minimization. Int. J. Numer. Methods Eng. 77(13), 1834–1862 (2009)
16. T. Dickopf, R. Krause, Numerical study of the almost nested case in a multilevel method based on non-nested meshes, in Domain Decomposition Methods in Science and Engineering XX, ed. by R. Bank. et al. Lecture Notes in Computational Science and Engineering, vol. 91 (Springer, Berlin, 2013), pp. 551–558 17. M. Dittmann, Isogeometric analysis and hierarchical refinement for multi-field contact problems. Ph.D. thesis, University of Siegen (2017) 18. M. Dittmann, F. Aldakheel, J. Schulte, P. Wriggers, C. Hesch, Variational phase-field formulation of non-linear ductile fracture. Comput. Methods Appl. Mech. Eng. 342, 71–94 (2018) ˙ Temizer, C. Hesch, Isogeometric analysis and thermomechanical 19. M. Dittmann, M. Franke, I. Mortar contact problems. Comput. Methods Appl. Mech. Eng. 274, 192–212 (2014) 20. M. Dittmann, S. Schuß, B. Wohlmuth, C. Hesch, Weak C n coupling for multi-patch isogeometric analysis in solid mechanics. Int. J. Numer. Methods Eng. 118, 678–699 (2019) 21. M. Dittmann, S. Schuß, B. Wohlmuth, C. Hesch, Crosspoint modification for multi-patch isogeometric analysis. Comput. Methods Appl. Mech. Eng. 360, 112768 (2020) 22. W. Dornisch, S. Klinkel, B. Simeon, Isogeometric Reissner-Mindlin shell analysis with exactly calculated director vectors. Comput. Methods Appl. Mech. Eng. 253, 491–504 (2013) 23. W. Dornisch, R. Müller, S. Klinkel, An efficient and robust rotational formulation for isogeometric Reissner-Mindlin shell elements. Comput. Methods Appl. Mech. Eng. 303, 1–34 (2016) 24. R. Echter, B. Oesterle, M. Bischoff, A hierarchic family of isogeometric shell finite elements. Comput. Methods Appl. Mech. Eng. 254, 170–180 (2013) 25. N. El-Abbasi, K.J. Bathe, Stability and patch test performance of contact discretizations and a new solution algorithm. Comput. Struct. 79, 1473–1486 (2001) 26. Konstantin Fackeldey, Dorian Krause, Rolf Krause, Christoph Lenzen, Coupling molecular dynamics and continua with weak constraints. SIAM J. Multiscale Model. Simul. 9(4), 1459– 1494 (2011) 27. P. Fischer, M. Klassen, J. Mergheim, P. Steinmann, R. Müller, Isogeometric analysis of 2D gradient elasticity. Comput. Mech. 47(3), 325–334 (2011) 28. B. Flemisch, J.M. Melenk, B.I. Wohlmuth, Mortar methods with curved interfaces. Appl. Numer. Math. 54, 339–361 (2005) 29. B. Flemisch, M.A. Puso, B.I. Wohlmuth, A new dual mortar method for curved interfaces: 2d elasticity. Int. J. Numer. Methods Eng. 63, 813–832 (2005) 30. H. Gomez, V.M. Calo, Y. Bazilevs, T.J.R. Hughes, Isogeometric analysis of the Cahn-Hilliard phase-field model. Comput. Methods Appl. Mech. Eng. 197, 4333–4352 (2008) 31. C. Hager, S. Hüeber, B.I. Wohlmuth, A stable energy conserving approach for frictional contact problems based on quadrature formulas. Int. J. Numer. Methods Eng. 73, 205–225 (2008) 32. J.O. Hallquist, NIKE2D. Technical Report UCRL-52678, University of California, Lawrence Livermore National Laboratory (1979) 33. J.O. Hallquist, G.L. Goudreau, D.J. Benson, Sliding Interfaces with contact-impact in largescale Lagrangian computations. Comput. Methods Appl. Mech. Eng. 51, 107–137 (1985) 34. Rolf Henniger, Dominik Obrist, Leonhard Kleiser, High-order accurate solution of the incompressible navier-stokes equations on massively parallel computers. J. Comput. Phys. 229(10), 3543–3572 (2010) 35. C. Hesch, P. Betsch, A mortar method for energy-momentum conserving schemes in frictionless dynamic contact problems. Int. J. Numer. Methods Eng. 77, 1468–1500 (2009) 36. C. Hesch, P. 
Betsch, Isogeometric analysis and domain decomposition methods. Comput. Methods Appl. Mech. Eng. 213–216, 104–112 (2012) ˙ Temizer, Hierarchical NURBS and a higher-order phase37. C. Hesch, M. Franke, M. Dittmann, I. field approach to fracture for finite-deformation contact problems. Comput. Methods Appl. Mech. Eng. 301, 242–258 (2016) 38. C. Hesch, A.J. Gil, A. Arranz Carreño, J. Bonet, P. Betsch, A Mortar approach for FluidStructure Interaction problems: Immersed strategies for deformable and rigid bodies. Comput. Methods Appl. Mech. Eng. 278, 853–882 (2014)
39. K. Höllig, Finite Element Methods with B-Splines. Society for Industrial and Applied Mathematics Philadelphia (2003) 40. T. Horger, A. Reali, B. Wohlmuth, L. Wunderlich, A hybrid isogeometric approach on multipatches with applications to Kirchhoff plates and eigenvalue problems. Comput. Methods Appl. Mech. Eng. 348, 396–408 (2019) 41. S. Hüeber, M. Mair, B.I. Wohlmuth, A priori error estimates and an inexact primal-dual active set strategy for linear and quadratic finite elements applied to multibody contact problems. Appl. Numer. Math. 54, 555–576 (2005) 42. S. Hüeber, G. Stadler, B.I. Wohlmuth, A primal-dual active set algorithm for three-dimensional contact problems with Coulomb friction. SIAM J. Sci. Comput. 30, 572–596 (2008) 43. S. Hüeber, B.I. Wohlmuth, A primal-dual active set strategy for non-linear multibody contact problems. Comput. Methods Appl. Mech. Eng. 194, 3147–3166 (2005) 44. S. Hüeber, B.I. Wohlmuth, Thermo-mechanical contact problems on non-matching meshes. Comput. Methods Appl. Mech. Eng. 198(15–16), 1338–1350 (2009) 45. T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs, Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Comput. Methods Appl. Mech. Eng. 194(39–41), 4135– 4195 (2005) 46. U. Khristenko, S. Schusß, B. Wohlmuth, C. Hesch, Multidimensional coupling: a variational consistent approach for fiber reinforced materials. Comput. Methods Appl. Mech. Eng. 382, 113869 (2021) 47. J. Kiendl, K.U. Bletzinger, J. Linhard, R. Wüchner, Isogeometric shell analysis with KirchhoffLove elements. Comput. Methods Appl. Mech. Eng. 198, 3902–3914 (2009) 48. T. Klöppel, A. Popp, U. Küttler, W.A. Wall, Fluid-structure interaction for non-conforming interfaces based on a dual mortar formulation. Comput. Methods Appl. Mech. Eng. 200, 3111– 3126 (2011) 49. T. Köppl, E. Vidotto, B. Wohlmuth, A 3D-1D coupled blood flow and oxygen transport model to generate microvascular networks. Int. J. Numer. Methods Biomed. Eng. 36, e3386 (2020) 50. Dorian Krause, Thomas Dickopf, Mark Potse, Rolf Krause, Towards a large-scale scalable adaptive heart model using shallow tree meshes. J. Comput. Phys. 298, 79–94 (2015). (October) 51. Dorian Krause, Rolf Krause, Enabling local time stepping in the parallel implicit solution of reaction–diffusion equations via space-time finite elements on shallow tree meshes. Appl. Math. Comput. 277, 164–179 (2016) 52. Rolf Krause, Patrick Zulian, A parallel approach to the variational transfer of discrete fields between arbitrarily distributed unstructured finite element meshes. SIAM J. Sci. Comput. 38(3), C307–C333 (2016) 53. A. Matei, S. Sitzmann, K. Willner, B.I. Wohlmuth, A mixed variational formulation for a class of contact problems in viscoelasticity. Appl. Anal. 97(8), 1340–1356 (2018) 54. M.G.C. Nestola, B. Becsek, H. Zolfaghari, P. Zulian, D. De Marinis, R. Krause, D. Obrist, An immersed boundary method for fluid-structure interaction based on variational transfer. J. Comput. Phys. 398, 108884 (2019) 55. S. Osborn, P. Zulian, T. Benson, U. Villa, R. Krause, P.S. Vassilevski, Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes. Numer. Linear Algebra Appl. 25(3), e2146 (2018) 56. P. Oswald, B.I. Wohlmuth, On polynominal reproduction of dual FE bases, in Domain Decomposition Methods in Science and Engineering. ed. by N. Debit, M. Garbey, R.H.W. Hoppe, D. Keyes, Y. Kuznetsov, J. Périaux. CIMNE. 
Thirteenth International Conference on Domain Decomposition Methods, Lyon, France (2002), pp. 85–96 57. C.S. Peskin, Flow patterns around heart values: a numerical method. J. Comput. Phys. 10, 252–271 (1972) 58. C.S. Peskin, D.M. Mc Queen, A three-dimensional computational method for blood flow in the heart. I. Immersed elastic fibers in a viscous incompressible fluid. J. Comput. Phys. 81, 372–405 (1989) 59. C. Planta, D. Vogler, P. Zulian, M. Oliver Saar, R. Krause, Solution of contact problems between rough body surfaces with non matching meshes using a parallel mortar method. Submitted to International Journal of Rock Mechanics and Mining (2020). arXiv:1811.02914
60. A. Popp, M. Gitterle, W. Gee, W.A. Wall, A dual mortar approach for 3D finite deformation contact with consistent linearization. Int. J. Numer. Methods Eng. 83, 1428–1465 (2010) 61. A. Popp, A. Seitz, M.W. Gee, W.A. Wall, Improved robustness and consistency of 3D contact algorithms based on a dual mortar approach. Comput. Methods Appl. Mech. Eng. 264, 67–80 (2013) 62. A. Popp, B.I. Wohlmuth, M.W. Gee, W.A. Wall, Dual quadratic mortar finite element methods for 3D finite deformation contact. SIAM J. Sci. Comput. 34, B421–B446 (2012) 63. M.A. Puso, T.A. Laursen, A mortar segment-to-segment contact method for large deformation solid mechanics. Comput. Methods Appl. Mech. Eng. 193(6–8), 601–629 (2004) 64. M.A. Puso, T.A. Laursen, A mortar segment-to-segment frictional contact method for large deformations. Comput. Methods Appl. Mech. Eng. 193(45–47), 4891–4913 (2004) 65. A. Reali, H. Gomez, An isogeometric collocation approach for Bernoulli-Euler beams and Kirchhoff plates. Comput. Methods Appl. Mech. Eng. 284, 623–636 (2015) 66. Philipp Schädle, Patrick Zulian, Daniel Vogler, Bhopalam R. Sthavishtha, Maria Giuseppina Chiara. Nestola, Anozie Ebigbo, Rolf Krause, Martin O. Saar, 3D non-conforming mesh model for flow in fractured porous media using Lagrange multipliers. Comput. Geosci. 132, 42–55 (2019) 67. S. Schuß, M. Dittmann, S. Klinkel, B. Wohlmuth, C. Hesch, Multi-patch isogeometric analysis for Kirchhoff-Love shell elements. Comput. Methods Appl. Mech. Eng. 349, 91–116 (2019) 68. A. Seitz, Computational methods for thermo-elasto-plastic contact. Ph.D. thesis, Technische Universität München (2019) 69. A. Seitz, P. Farah, J. Kremheller, B.I. Wohlmuth, W.A. Wall, A. Popp, Isogeometric dual mortar methods for computational contact mechanics. Comput. Methods Appl. Mech. Eng. 301, 259–280 (2016) 70. A. Seitz, A. Popp, W.A. Wall, A semi-smooth newton method for orthotropic plasticity and frictional contact at finite strains. Comput. Methods Appl. Mech. Eng. 285, 228–254 (2015) 71. A. Seitz, W.A. Wall, A. Popp, A computational approach for thermo-elasto-plastic frictional contact based on a monolithic formulation using non-smooth nonlinear complementarity functions. Adv. Model. Simul. Eng. Sci. 5(1), 5 (2018) 72. I. Steinbrecher, M. Mayr, M.J. Grill, J. Kremheller, C. Meier, A. Popp, A mortar-type finite element approach for embedding 1D beams into 3D solid volumes. arXiv (2019), pp. 1–20 73. E. Vidotto, T. Koch, T. Köppl, R. Helmig, B. Wohlmuth, Hybrid models for simulating blood flow in microvascular networks. Multiscale Model. Simul. 17(3), 1076–1102 (2019) 74. C. von Planta, D. Vogler, X. Chen, M.G.C. Nestola, M.O. Saar, R. Krause, Modelling of hydromechanical processes in heterogeneous fracture intersections using a fictitious domain method with variational transfer operators. Comput. Geosci. (2020). arXiv:2001.02030 75. C. von Planta, D. Vogler, X. Chen, M.G.C. Nestola, M.O. Saar, R. Krause, Simulation of hydromechanically coupled processes in rough rock fractures using an immersed boundary method and variational transfer operators. Comput. Geosci. 23(5), 1125–1140 (2019) 76. O. Weeger, S.K. Yeung, M.L. Dunn, Isogeometric collocation methods for Cosserat rods and rod structures. Comput. Methods Appl. Mech. Eng. 316, 100–122 (2017) 77. B.I. Wohlmuth, A mortar finite element method using dual spaces for the Lagrange multiplier. SIAM J. Numer. Anal. 38, 989–1012 (2000) 78. B.I. Wohlmuth, Discretization Methods and Iterative Solvers based on Domain Decomposition (Springer, 2000) 79. B.I. 
Wohlmuth, An a posteriori error estimator for two-body contact problems on non-matching meshes. J. Sci. Comput. 33, 25–45 (2007) 80. B.I. Wohlmuth, Variationally consistent discretization schemes and numerical algorithms for contact problems. Acta Numerica 20, 569–734 (2011) 81. B.I. Wohlmuth, R. Krause, A Multigrid method based on the unconstrained product space arising form motar finite element discretizations. SIAM J. Numer. Anal. 39, 192–213 (2001) 82. L. Wunderlich, A. Seitz, M.D. Alaydin, B. Wohlmuth, A. Popp, Biorthogonal splines for optimal weak patch-coupling in isogeometric analysis with applications to finite deformation elasticity. Comput. Methods Appl. Mech. Eng. 346, 197–215 (2019)
83. H. Zolfaghari, B. Becsek, M.G.C. Nestola, W.B. Sawyer, R. Krause, D. Obrist, High-order accurate simulation of incompressible turbulent flows on many parallel gpus of a hybrid-node supercomputer. Comput. Phys. Commun. 244, 132–142 (2019) 84. P. Zulian, ParMOONoLith: parallel intersection detection and automatic load-balancing library. Git repository (2016). https://bitbucket.org/zulianp/par_moonolith 85. P. Zulian, Geometry–aware finite element framework for multi–physics simulations. Ph.D. thesis, Università della Svizzera italiana (2017)
Collocation Methods and Beyond in Non-linear Mechanics

F. Fahrendorf, S. Shivanand, B. V. Rosic, M. S. Sarfaraz, T. Wu, L. De Lorenzis, and H. G. Matthies

F. Fahrendorf · T. Wu: Institute of Applied Mechanics, Technische Universität Braunschweig, Pockelsstr. 3, 38106 Braunschweig, Germany, e-mail: [email protected]
S. Shivanand · M. S. Sarfaraz · H. G. Matthies: Institute of Scientific Computing, Technische Universität Braunschweig, Mühlenpfordtstr. 23, 38106 Braunschweig, Germany, e-mail: [email protected]
B. V. Rosic: Faculty of Engineering Technology, University of Twente, Horst-Ring N116, 7500 AE Enschede, The Netherlands, e-mail: [email protected]
L. De Lorenzis: Department of Mechanical and Process Engineering, ETH Zürich, Tannenstr. 3, 8092 Zürich, Switzerland, e-mail: [email protected]

Abstract Within the realm of isogeometric analysis, isogeometric collocation has been driven by the attempt to minimize the cost of quadrature associated with higher-order discretizations, with the goal of achieving higher-order accuracy at low computational cost. While the first applications of isogeometric collocation have mainly concerned linear problems, here the focus is on non-linear mechanics formulations including hyperelasticity, elastoplasticity, contact and geometrically non-linear structural elements. We also address the treatment of locking issues as well as the establishment of a bridge between Galerkin and collocation schemes leading to a new reduced quadrature technique for isogeometric analysis. In stochastic uncertainty computations, the evaluation of full-scale deterministic models is the main computational burden, which may be avoided with cheap-to-evaluate proxy-models. Their construction is a kind of regression, which, when reduced to the minimum number of
samples, turns into collocation or interpolation. It is possible to go well beyond that minimum using ideas from probabilistic numerics and Bayesian updating, which is shown both for constructing proxy-models and for upscaling (coarsening) of highly nonlinear material laws. Another way to reduce costly full-scale model evaluations is to use multi-level hierarchies of models, leading to multi-level Monte Carlo methods. In this chapter, we present the main achievements obtained on the above topics within the DFG Priority Program 1748, Reliable Simulation Techniques in Solid Mechanics.
1 Introduction

The idea to deal with collocation methods in many instances is driven by the desire to limit the number of “samples” of a quantity which may be expensive to evaluate. This can take many forms, and two areas are explored here. One is the use of collocation in the formulation and numerical computation of approximations to the partial differential equations of nonlinear mechanics using isogeometric finite elements (Sect. 1.1). Here the goal is to minimise the number of “quadrature points” (samples) in each element. Another use is in uncertainty quantification (UQ) (Sect. 1.2), where the “samples” are evaluations of computationally expensive deterministic codes. In the construction of proxy-models it turns out that through the use of Bayesian ideas one can reduce the number of samples beyond the minimum required for collocation/interpolation. In the multi-level Monte Carlo (MLMC) method on the other hand, many of the expensive samples are replaced by less expensive ones which come from models with less refinement, i.e. models with coarser grids. This can be seen as a kind of analogue to multi-grid or multi-level algebraic solvers.
1.1 Collocation and Isogeometric Analysis

Isogeometric collocation is a relatively new computational technique enabled by isogeometric analysis. Its emergence has been driven by the attempt to minimize the cost of quadrature associated with higher-order discretizations, with the goal of achieving higher-order accuracy at low computational cost. While the first applications of isogeometric collocation have mainly concerned linear problems, its application to non-linear mechanics has started much more recently. In this chapter, as in the related subproject of the DFG Priority Program 1748, the focus is on the investigation of isogeometric collocation for non-linear mechanics formulations. Sections 2.3 and 2.4 illustrate isogeometric collocation for hyperelastic and elastoplastic material behavior, respectively. Instability issues for the primal elastoplasticity formulation motivate the development of mixed methods, which are also shown to alleviate locking for incompressible elasticity (Sect. 2.4). The investigation is also extended to contact (Sect. 2.5) and to geometrically non-linear structural elements (a summary is given in Sect. 2.6). A promising result for future research in the field is the
establishment of a bridge between Galerkin and collocation schemes (Sect. 2.2.1), which leads to a new reduced quadrature technique for isogeometric analysis (Sect. 2.2.2) and bears a significant potential for further exploration in the future.
1.2 Beyond Collocation in Uncertainty Quantification

This part deals with stochastic models of mechanical situations, modelled with expensive-to-simulate deterministic so-called full models. The aim is to minimise the number of times those full models have to be sampled, and some possibilities are outlined in Sect. 3. When constructing proxy-models [35, 39] one essentially employs some form of regression. Typically the proxy-model is a linear combination of some basis functions, and the coefficients are computed by some form of projection/regression given some samples of the full deterministic model. The minimum number of samples is equal to the number of coefficients, leading to the case of collocation or interpolation. In evaluating the coefficients, one part is the computation of projection integrals resp. inner products, and each sample point/numerical quadrature point is an expensive-to-compute full deterministic model evaluation. As already mentioned, the minimum number of such samples, in the case of collocation/interpolation, is equal to the number of coefficients to be determined. In Sect. 3.1 we sketch how it is possible to go beyond collocation by using ideas from probabilistic numerics, e.g. see [6, 9, 25]. A more detailed account and the corresponding references will be given in Sect. 3.1, as well as connections to the areas of reduced order models (ROMs) and upscaling. The idea of using multi-grid resp. multi-level hierarchies [8, 20, 24] of models adjoined to the full model in order to save on the evaluation of the full model is further developed in Sect. 3.2. Another way to save on full model evaluations when dealing with complex heterogeneous material structure is upscaling, which is addressed in Sect. 3.3. A central part of Bayesian updating is the computation of the conditional expectation, a kind of inverse map for the inverse problem. The exact determination of the conditional expectation map is only rarely possible, and hence one has to seek numerical approximations. The simplest of these is a linear approximation, which is the Gauss-Markov-Kalman filter (GMKF) [43, 49, 51], an extension and generalisation of the well-known Kalman filter. In Sect. 3.4 a nonlinear extension [64] is shown in the context of an inverse identification problem [78].
2 Isogeometric Collocation for Linear and Non-linear Mechanics

In this section we report the progress on isogeometric analysis approaches based on collocation and reduced quadrature methods within the related subproject of the DFG Priority Program 1748. These approaches are applied to linear as well as non-linear
mechanical problems in order to obtain stable solutions with a maximum of efficiency. Before highlighting the major achievements in the corresponding subsections, a short theoretical introduction to the isogeometric collocation method is given.
2.1 Introduction to Isogeometric Collocation

In the following, we provide a short introduction to isogeometric collocation methods, which have their foundation in the concept of isogeometric analysis. The key characteristic of the isogeometric analysis approach is the specific choice of basis functions, which is described first. Subsequently we introduce the isogeometric collocation method using the example of linear elasticity.
2.1.1 Spatial Discretization
A B-spline basis of degree p is constructed by a so-called knot vector Ξ = {ξ1 , ξ2 , . . . , ξn+ p+1 }, where the knots ξi are sorted in non-decreasing order and n denotes the number of basis functions of degree p. We consider open knot vectors, which implies ξ1 = · · · = ξ p+1 and ξn+1 = · · · = ξn+ p+1 . This leads to interpolatory ends, since the continuity of a B-Spline/NURBS basis is C p−k at a knot with the multiplicity k. In the interior of a knot span the continuity is C ∞ . The B-Spline basis functions are defined by means of the Cox-de Boor recursion formula, which reads for p = 0
$$N_{i,0}(\xi) = \begin{cases} 1 & \text{if } \xi_i \le \xi < \xi_{i+1}, \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

and for p ≥ 1

$$N_{i,p}(\xi) = \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i}\, N_{i,p-1}(\xi) + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}}\, N_{i+1,p-1}(\xi), \qquad (2)$$
while adopting the convention 0/0 = 0. A NURBS curve C(ξ) can be expressed as a linear combination of the control points $P_i$ with the corresponding basis functions $R_{i,p}$ of degree p as

$$C(\xi) = \sum_{i=1}^{n} R_{i,p}(\xi)\, P_i \qquad (3)$$

with

$$R_{i,p}(\xi) = \frac{w_i\, N_{i,p}(\xi)}{\sum_{\hat{i}=1}^{n} w_{\hat{i}}\, N_{\hat{i},p}(\xi)} \qquad (4)$$
as the NURBS basis functions with the associated weights $w_i$. Similarly, bivariate NURBS basis functions $R_{i,j}^{p,q}$ of degrees p and q in the two parametric directions ξ and η with the corresponding weights $w_{i,j}$ are defined as

$$R_{i,j}^{p,q}(\xi, \eta) = \frac{N_{i,p}(\xi)\, M_{j,q}(\eta)\, w_{i,j}}{\sum_{\hat{i}=1}^{n} \sum_{\hat{j}=1}^{m} N_{\hat{i},p}(\xi)\, M_{\hat{j},q}(\eta)\, w_{\hat{i},\hat{j}}} \qquad (5)$$

with the control points $P_{i,j}$. Thus a NURBS surface of degree p, q can be described as

$$S(\xi, \eta) = \sum_{i=1}^{n} \sum_{j=1}^{m} R_{i,j}^{p,q}(\xi, \eta)\, P_{i,j}. \qquad (6)$$
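The recursion (1)-(2) and the rational basis (4) translate directly into code. The following Python sketch is a naive, non-optimized evaluation for a single parametric direction; function names are illustrative.

```python
import numpy as np

def bspline_basis(i, p, knots, xi):
    """Cox-de Boor recursion (1)-(2) for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    if knots[i + p] > knots[i]:                      # convention 0/0 = 0
        left = (xi - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, xi)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, xi)
    return left + right

def nurbs_basis(p, knots, weights, xi):
    """Rational basis functions R_{i,p}(xi) of Eq. (4)."""
    N = np.array([bspline_basis(i, p, knots, xi) for i in range(len(weights))])
    wN = weights * N
    return wN / wN.sum()

# Quadratic example: open knot vector, three basis functions, unit weights
knots = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(nurbs_basis(2, knots, np.ones(3), 0.5))  # -> [0.25 0.5 0.25]
```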
2.1.2 Isogeometric Collocation Method
In the following, the isogeometric collocation method is introduced using the example of linear elasticity. Further details and modifications needed for the application of the isogeometric collocation method to non-linear constitutive material models or mixed approaches are shown in the corresponding subsections. Let $\Omega \subset \mathbb{R}^{d_s}$ represent an elastic body B subjected to body forces b, to prescribed displacements $\bar{u}$ on the Dirichlet boundary $\Gamma_D$, and to prescribed tractions t on the Neumann boundary $\Gamma_N$. Under the assumptions of small strains and linear elastic, isotropic material, the stress tensor σ can be related to the displacement vector u as

$$\sigma = C : \nabla^s u = 2\mu\, \nabla^s u + \lambda\, (\nabla \cdot u)\, I \qquad (7)$$
where C is the fourth-order elasticity tensor, $\nabla^s$ is the symmetric gradient operator, I is the identity tensor, and λ and μ are the Lamé constants. The strain tensor is defined as $\varepsilon = \nabla^s u$. The elasticity problem in variational form, derived from the principle of virtual work, reads

$$\int_\Omega \left( C : \nabla^s u \right) : \nabla^s w\, d\Omega = \int_\Omega f \cdot w\, d\Omega + \int_{\Gamma_N} t \cdot w\, d\Gamma \qquad (8)$$

for every test function $w \in \left[ H^1(\Omega) \right]^{d_s}$ satisfying homogeneous Dirichlet boundary conditions. Integration by parts of Eq. (8) leads to

$$\int_\Omega \left[ \operatorname{div}\!\left( C : \nabla^s u \right) + f \right] \cdot w\, d\Omega - \int_{\Gamma_N} \left[ \left( C : \nabla^s u \right) \cdot n - t \right] \cdot w\, d\Gamma = 0. \qquad (9)$$
The approximation of the unknown displacement field u(x) is introduced as $u^h(x) = \sum_{i=1}^{n} \sum_{j=1}^{m} R_{i,j}(x)\, \hat{u}_{i,j}$ with the NURBS basis functions $R_{i,j}$ previously described in this subsection and the corresponding unknown displacement control variables $\hat{u}_{i,j}$. Inserting the discretization $u^h(x)$ into Eq. (9) yields

$$\int_\Omega \left[ \operatorname{div}\!\left( C : \nabla^s u^h \right) + f \right] \cdot w\, d\Omega - \int_{\Gamma_N} \left[ \left( C : \nabla^s u^h \right) \cdot n - t \right] \cdot w\, d\Gamma = 0. \qquad (10)$$
In contrast to Galerkin approaches, Dirac delta functions δ are chosen as test functions in order to deduce the isogeometric collocation approach. These functions satisfy the so-called sifting property, i.e.,
$$\int_\Omega f_\Omega(\tau)\, \delta(\tau - \tau_{ij})\, d\Omega = f_\Omega(\tau_{ij}), \qquad \int_\Gamma f_\Gamma(\tau)\, \delta(\tau - \tau_{ij})\, d\Gamma = f_\Gamma(\tau_{ij}) \qquad (11)$$
for every function $f_\Omega$ continuous about the point $\tau_{ij} \in \Omega$ and for every function $f_\Gamma$ continuous about the point $\tau_{ij} \in \Gamma$ [2, 3, 60]. In the following the collocation points are numbered as $\tau_{kl}$, k = {1, ..., n}, l = {1, ..., m}, and if k = 1, n or l = 1, m the collocation points are located at the boundary Γ. A common choice for the collocation point locations in the isogeometric framework are the Greville abscissae of the knot vectors (see e.g. [3]). Recent strategies to determine abscissae values with improved properties are outlined in Sect. 2.2. Applying the sifting property to Eq. (10) yields

$$\left[ \operatorname{div}\!\left( C : \nabla^s u^h \right) + f \right](\tau_{kl}) = 0, \qquad \tau_{kl} \subset \Omega, \qquad (12a)$$
$$\left[ \left( C : \nabla^s u^h \right) \cdot n - t \right](\tau_{kl}) = 0, \qquad \tau_{kl} \subset \text{edge} \subset \Gamma_N, \qquad (12b)$$
$$\left[ \left( C : \nabla^s u^h \right) \cdot \left( n' + n'' \right) - \left( t' + t'' \right) \right](\tau_{kl}) = 0, \qquad \tau_{kl} \equiv \text{corner} \subset \Gamma_N, \qquad (12c)$$

where n′, n″ and t′, t″ in (12c) refer to the two edges adjacent to the corner.
The Dirichlet boundary conditions are enforced strongly. Regarding the Neumann boundary conditions, a distinction is necessary between the collocation points located at the edges and those at the corners of the domain. For collocation points located on corners within the Neumann boundary, the contributions from the adjacent edges are added as shown in Eq. (12c). This treatment of corner points was proposed in [3].
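The Greville abscissae mentioned above are simple knot averages. A minimal sketch for one parametric direction (function name illustrative):

```python
import numpy as np

def greville_abscissae(knots, p):
    """Greville abscissae of an open knot vector: tau_k = (xi_{k+1} + ... + xi_{k+p}) / p."""
    knots = np.asarray(knots, dtype=float)
    n = len(knots) - p - 1                     # number of basis functions
    return np.array([knots[k + 1:k + p + 1].mean() for k in range(n)])

# Cubic open knot vector with one interior knot
print(greville_abscissae([0, 0, 0, 0, 0.5, 1, 1, 1, 1], 3))
# -> [0.  0.16666667  0.5  0.83333333  1.]
```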
2.1.3 Treatment of Neumann Boundaries in Isogeometric Collocation
In [10] it was shown that the strong imposition of Neumann boundary conditions may lead to oscillations and thus to a loss of accuracy, in particular when nonuniform meshes are used. In order to solve this problem, two alternative strategies were proposed, which are outlined in the following. The first strategy is called hybrid approach (HC), because collocation equations are written at the patch interior (Eq. (12a)) whereas Galerkin-like equations are used at the Neumann boundary. The equations at the Neumann boundary are written by
choosing as test functions some of the shape functions used for the discretization of the unknown displacements. For the evaluation of the governing equations at collocation points on a Neumann boundary, the shape functions $R_a$ are used instead of Dirac delta functions in the hybrid approach, resulting in

$$\int_\Omega \left[ \operatorname{div}\!\left( C : \nabla^s u^h \right) + f \right] R_a\, d\Omega - \int_{\Gamma_{N\bar{k}}} \left[ \left( C : \nabla^s u^h \right) \cdot n_{\bar{k}} - t_{\bar{k}} \right] R_a\, d\Gamma = 0. \qquad (13)$$
In the above formula, $\Gamma_{N\bar{k}}$ denotes the considered edge within the Neumann boundary, and $n_{\bar{k}}$ and $t_{\bar{k}}$ are the respective outward unit normal and applied traction. The second approach is named enhanced collocation (EC) and mimics the hybrid approach without the need for numerical integration. For this approach, the Neumann BCs are written considering a combination of area and edge terms, as follows
$$\left[ \operatorname{div}\!\left( C : \nabla^s u^h \right) + f \right](\tau_{kl}) - \frac{C^*}{h} \left[ \left( C : \nabla^s u^h \right) \cdot n - t \right](\tau_{kl}) = 0, \qquad \tau_{kl} \subset \text{edge} \subset \Gamma_N, \qquad (14)$$

where h is the mesh size in the direction perpendicular to the edge. A suitable calibration of the constant C* in Eq. (14) is required. In [10], C* was calibrated through numerical experiments and an optimal value of C* = 4 was found. The adaptation to corner points can be carried out in analogy to Eq. (12c) and is therefore not reported herein.
2.2 Variational Collocation and a New Reduced Quadrature Rule

One of the major objectives of this project is to improve the general numerical properties of isogeometric collocation methods. The research on the properties of isogeometric collocation methods is very limited and in most cases restricted to numerical studies, which is in contrast to Galerkin approaches. A very promising research direction, which bridges the gap between the Galerkin method and classical collocation methods, has been found through the development of the so-called variational collocation approach. In this section, the general concept of the variational collocation method is introduced and subsequently the application to collocation and Galerkin methods is outlined. This subsection contains a summary of the findings published in [14, 21].
2.2.1 Variational Collocation Method
The idea of the variational collocation method is to establish a direct connection between the Galerkin and the collocation method such that the accuracy of the Galerkin method can be successfully combined with the efficiency of the
collocation method. Under certain requirements, it can be shown that there exists a set of points (denoted as Cauchy-Galerkin points) such that direct collocation of the strong form of the governing partial differential equation at these points produces the Galerkin solution exactly. One requirement lies in a discrete space constructed by smooth and point-wise non-negative basis functions, which can easily be fulfilled with an isogeometric analysis approach due to the characteristics of the underlying basis functions. The starting point for the description of the variational collocation method is the first mean value theorem of integral calculus [27] as given below.

Theorem 1 (Cauchy, 1821) Let Ω be a measurable subset of $\mathbb{R}^d$, where d is the number of spatial dimensions. The functions G : Ω → R and w : Ω → R are considered. If G is continuous and w(x) ≥ 0 for all x ∈ Ω, then there exists τ ∈ Ω such that

$$\int_\Omega w(x)\, G(x)\, dx = G(\tau) \int_\Omega w(x)\, dx. \qquad (15)$$

In order to make the explanations more vivid, we consider a simple example problem in the following, namely the Poisson equation

$$\Delta u + f = 0 \quad \text{in } \Omega, \qquad (16a)$$
$$u = u_D \quad \text{on } \Gamma, \qquad (16b)$$
with the domain Ω, the corresponding boundary Γ, the Laplace operator Δ and the unknown scalar solution field u. We assume to have Dirichlet boundary conditions on the whole boundary. The isogeometric collocation approach is based on the direct evaluation of the strong form of the considered partial differential equation at a certain set of collocation points $\{\tau_A\}_{A=1,\dots,n}$ that are distributed over the domain Ω. A finite-dimensional space $U_h = \mathrm{span}\{R_A\}_{A=1,\dots,n}$ with the linearly independent basis functions $\{R_A\}_{A=1,\dots,n}$ is used for the spatial discretization. So we search for a solution $u_h \in U_h$ such that the Galerkin variational formulation of the aforementioned boundary value problem
$$\int_\Omega \nabla R_A \cdot \nabla u_h\, dx - \int_\Omega R_A\, f\, dx = 0 \qquad \text{for all } A \in \mathcal{D} \qquad (17)$$
is fulfilled, with $\mathcal{D}$ as the set of indices A such that $R_A$ vanishes on Γ. If the basis functions are sufficiently smooth, integration by parts leads to

$$\int_\Omega R_A \left( \Delta u_h + f \right) dx = 0 \qquad \text{for all } A \in \mathcal{D}. \qquad (18)$$

We refer to Eq. (18) as the weighted residual formulation in the following.
Table 1 Superconvergent points referred to the bi-unit domain [−1, 1], for different degrees (adapted from [21])

Degree p    Superconvergent points
3           ±1/√3
4           −1, 0, 1
5           ±√(225 − 30√30)/15
6           −1, 0, 1
7           ±0.5049185675126533
Considering the support $S_A$ of $R_A$, $R_A \ge 0$ on $S_A$, and by the aid of Theorem 1 one can conclude that there exists a point $\tau_A \in S_A$ such that

$$0 = \int_{S_A} R_A \left( \Delta u_h + f \right) dx = \left( \Delta u_h(\tau_A) + f(\tau_A) \right) \int_{S_A} R_A\, dx \qquad \text{for all } A \in \mathcal{D}, \qquad (19)$$
which implies

$$\Delta u_h(\tau_A) + f(\tau_A) = 0 \qquad \text{for all } A \in \mathcal{D}. \qquad (20)$$
In [21] it was proven that these characteristic points (τ A ) A∈D are all distinct and they were named Cauchy-Galerkin (CG) points. At the CG points the Galerkin solution is reproduced exactly, which is the fundamental idea of the variational collocation method [21]. Since the Galerkin solution has to be known to determine the CG points, a suitable a-priori estimation procedure is necessary in practice. As shown e.g. in [21] estimators can be obtained based on superconvergence theory. The procedure is described in detail in [21]. The resulting evaluation points referred to the bi-unit domain [−1, 1] are listed in Table 1. As one can see from Table 1, the superconvergent points are more numerous than needed. Different strategies were proposed to deal with this situation. One possible approach is the least-squares collocation method (see [1]) taking into account all possible superconvergent points and thus leading to an overdetermined system of equations. This approach is abbreviated as “LS-SP” in the legends of the later numerical example in this subsection. In contrast to that, in [21] a subset of the superconvergent points as numerous as the control points was used, which is abbreviated as “C-ASP”, due to the alternating selection of collocation points. Furthermore, a symmetric stencil, leading to a “clustered” structure of the superconvergent collocation points, was proposed in the subsequent study [53] and is abbreviated as “C-CSP”. Collocation at the Greville abscissae is termed “C-GP” in the legends. The resulting orders of convergence for the aforementioned collocation approaches, determined through numerical studies, are shown in Table 2 for comparison purposes.
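To make the statement that the superconvergent points are more numerous than needed concrete, the following Python sketch maps the parent-domain points of Table 1 (odd degrees) into every non-empty knot span of a univariate open knot vector; the subsequent selection (C-ASP, C-CSP) or least-squares treatment (LS-SP) is not reproduced here.

```python
import numpy as np

# Superconvergent points on the parent domain [-1, 1] for odd degrees (cf. Table 1)
PARENT_POINTS = {
    3: np.array([-1.0, 1.0]) / np.sqrt(3.0),
    5: np.array([-1.0, 1.0]) * np.sqrt(225.0 - 30.0 * np.sqrt(30.0)) / 15.0,
    7: np.array([-1.0, 1.0]) * 0.5049185675126533,
}

def superconvergent_points(knots, p):
    """Affinely map the parent-domain points into every non-empty knot span."""
    pts = []
    for a, b in zip(knots[:-1], knots[1:]):
        if b > a:                                  # skip repeated knots
            pts.extend(0.5 * (a + b) + 0.5 * (b - a) * PARENT_POINTS[p])
    return np.array(pts)

# Cubic example with four elements: 8 candidate points but only 7 basis
# functions, so a selection or least-squares step is required.
knots = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], dtype=float)
print(superconvergent_points(knots, 3))
```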
Table 2 Comparisons of orders of convergence: Galerkin, C-GP, LS-SP, C-CSP and C-ASP (adapted from [53])

Norm    Galerkin    C-GP    LS-SP and C-CSP (odd p)    LS-SP and C-CSP (even p)    C-ASP (odd p)    C-ASP (even p)
L2      p + 1       p − 1   p + 1                      p                           p                p
H1      p           p − 1   p                          p                           p                p

2.2.2 New Reduced Quadrature Rule
In the following a new reduced quadrature rule is presented. This new quadrature rule is based on the idea of combining the concept of variational collocation with a weighted residual formulation, to obtain a new reduced quadrature scheme. In general, numerical quadrature applied to a function g(x) can be expressed as
$$\int_a^b g(x)\, dx \approx \frac{b - a}{2} \sum_{i=1}^{n_{qp}} g(x_i)\, \alpha_i, \qquad (21)$$
where $x_i$ are the quadrature points, in number $n_{qp}$, and $\alpha_i$ the corresponding weights, both depending on the specific quadrature rule. Applying this to the Galerkin variational formulation of the Poisson equation (see Eq. (17)), one obtains

$$\sum_{e=1}^{n_{el}} \int_{\tilde{\Omega}_e} \left( \nabla R_A \cdot \nabla u_h - R_A\, f \right) J\, d\tilde{\Omega}_e \approx \sum_{e=1}^{n_{el}} \sum_{i=1}^{n_{qp}} \left[ \nabla R_A(\tilde{\xi}_i) \cdot \nabla u_h(\tilde{\xi}_i) - R_A(\tilde{\xi}_i)\, f(\tilde{\xi}_i) \right] J\, \alpha_i = 0 \qquad (22)$$
for all A ∈ D with n el as the total number of elements and the corresponding domain Ω˜ e in the parent (bi-unit) element space with the associated coordinates ξ˜i of the quadrature points. J is the Jacobian of the mapping from the parent element space to the physical space. Instead of utilizing the Galerkin variational formulation as in all previous studies on reduced quadrature in isogeometric analysis, we use the weighted residual formulation Eq. (18) instead, resulting in
$$\sum_{e=1}^{n_{el}} \int_{\tilde{\Omega}_e} R_A \left( \Delta u_h + f \right) J\, d\tilde{\Omega}_e \approx \sum_{e=1}^{n_{el}} \sum_{i=1}^{n_{qp}} R_A(\tilde{\xi}_i) \left( \Delta u_h(\tilde{\xi}_i) + f(\tilde{\xi}_i) \right) J\, \alpha_i = 0 \qquad \text{for all } A \in \mathcal{D}. \qquad (23)$$
In contrast to a Gaussian quadrature rule, we use the estimated CG points, e.g. the superconvergent points in Table 1, as quadrature points for the weighted residual formulation. We focus on odd degree B-Spline/NURBS approximations, since in this case there exist only two superconvergent points (per parametric direction) in each Bézier element. This results in a quadrature rule with $n_{qp} = 2^d$ quadrature points, d being the dimension of the parametric space, regardless of the (odd) polynomial degree of the discretization. Especially for high polynomial degrees this is a remarkable reduction of the amount of evaluations in comparison to standard Gaussian quadrature. In case of cubic B-Spline/NURBS approximations the superconvergent points coincide with the two Gaussian quadrature points. For the legends in the subsequent numerical example, we use the abbreviations “G-FGP” for the Galerkin variational formulation in case of full Gaussian quadrature and “G-RGP” in case of reduced Gaussian quadrature (with 2 Gaussian quadrature points per parametric direction). For the weighted residual formulation we use the abbreviations “WR-SP” for quadrature at the superconvergent points and “WR-RGP” for reduced Gaussian quadrature (with 2 Gaussian quadrature points per parametric direction).
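The evaluation-count argument can be made explicit with a few lines of Python; the comparison assumes full Gaussian quadrature with p + 1 points per parametric direction, as used above for the G-FGP reference.

```python
def quadrature_point_counts(p, d, n_elements):
    """Evaluations per mesh: full Gauss ((p+1)^d per element) versus the
    reduced weighted-residual rule (2^d per element for odd degrees p)."""
    full = n_elements * (p + 1) ** d
    reduced = n_elements * 2 ** d
    return full, reduced

for p in (3, 5, 7):
    full, red = quadrature_point_counts(p, d=2, n_elements=64)
    print(f"p = {p}: full Gauss {full} evaluations, reduced rule {red} evaluations")
```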
2.2.3 Numerical Example on Variational Collocation Method and New Reduced Quadrature Rule
In the following the performance of the proposed approaches, which are based on the concept of variational collocation, is studied with the aid of a numerical example. A quarter of an annulus is modeled as described e.g. in [4]. The geometry, boundary conditions and simulation parameters are depicted in Fig. 1. Plane strain conditions are assumed. On the complete boundary of the domain homogeneous Dirichlet boundary conditions are assumed and the body forces, which are applied to the domain, are calculated from a “manufactured” displacement field, which is chosen as

$$u_{ref}(x, y) = 10^{-6}\, x^2 y^4 (x^2 + y^2 - 16)(x^2 + y^2 - 1)(5x^4 + 18x^2 y^2 - 85x^2 + 13y^4 + 80 - 153y^2),$$
$$v_{ref}(x, y) = -2 \cdot 10^{-6}\, x y^5 (x^2 + y^2 - 16)(x^2 + y^2 - 1)(5x^4 - 51x^2 + 6x^2 y^2 - 17y^2 + 16 + y^4). \qquad (24)$$
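A small SymPy sketch encoding the manufactured field (24); deriving the associated body forces would additionally require the material parameters of Fig. 1, so only the displacement field and its small-strain tensor are set up here. The vanishing of the field on the circles of (assumed) radii 1 and 4 and on the coordinate axes is consistent with the homogeneous Dirichlet data.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Manufactured displacement field (24)
u_ref = 1e-6 * x**2 * y**4 * (x**2 + y**2 - 16) * (x**2 + y**2 - 1) \
        * (5*x**4 + 18*x**2*y**2 - 85*x**2 + 13*y**4 + 80 - 153*y**2)
v_ref = -2e-6 * x * y**5 * (x**2 + y**2 - 16) * (x**2 + y**2 - 1) \
        * (5*x**4 - 51*x**2 + 6*x**2*y**2 - 17*y**2 + 16 + y**4)

# Small-strain tensor eps = sym(grad u); the body force would follow from
# f = -div(sigma(eps)) once the Lame constants of the example are inserted.
eps = sp.Matrix([[sp.diff(u_ref, x), (sp.diff(u_ref, y) + sp.diff(v_ref, x)) / 2],
                 [(sp.diff(u_ref, y) + sp.diff(v_ref, x)) / 2, sp.diff(v_ref, y)]])

# Both components vanish on the coordinate axes, e.g. at (1, 0) and (0, 2)
print(sp.simplify(u_ref.subs({x: 1, y: 0})), sp.simplify(v_ref.subs({x: 0, y: 2})))
```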
Fig. 1 Quarter of annulus with manufactured solution: Geometry, boundary conditions and simulation setup (adapted from [14])
Fig. 2 Quarter of annulus with manufactured solution: Convergence plots for methods based on variational collocation, degree p = 3 (adapted from [14])
The convergence plots for the relative error in the L^2 norm and the H^1 semi-norm can be found in Figs. 2, 3 and 4 for the considered cubic, quintic and septic NURBS discretizations. From the convergence plots in Figs. 2, 3 and 4 it is evident that collocation at the Greville points (C-GP) leads to the lowest convergence rates for all tested discretizations. In addition the error in both norms is also the highest for this approach. It is also obvious from the plots that for the approaches based on the concept of variational collocation (C-ASP, C-CSP), significantly better convergence rates and errors are obtained. The best results among all approaches are obtained, as expected, by the classical Galerkin method with full Gaussian quadrature (G-FGP). Nevertheless, nearly
Fig. 3 Quarter of annulus with manufactured solution: Convergence plots for methods based on variational collocation, degree p = 5 (adapted from [14])
Fig. 4 Quarter of annulus with manufactured solution: Convergence plots for methods based on variational collocation, degree p = 7 (adapted from [14])
identical convergence rates and errors can be achieved with the proposed reduced quadrature scheme for the weighted residual formulation (WR-SP). In contrast to the G-FGP approach, the computational effort is significantly lower due to the reduced number of evaluations. Especially for higher order NURBS approximations the difference is remarkable; e.g. in case of septic NURBS, only 2^d quadrature points are required for the proposed reduced quadrature approach instead of 8^d for the full Gaussian quadrature. For p = 3 the quadrature points for the reduced Gaussian quadrature and the superconvergent points are identical, thus also the results for the methods WR-RGP and WR-SP coincide. For the tested higher orders, the results of WR-SP are superior to those obtained by the WR-RGP approach. The results of G-RGP are not plotted
for p = 5 and p = 7, since instabilities occur. The obtained results indicate that both the superconvergent points as quadrature points and the weighted residual formulation instead of the classical weak formulation are necessary ingredients to achieve accurate results while dramatically reducing the cost of quadrature.
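Since collocation at the Greville points (C-GP) serves as the baseline in the comparisons above and the Greville abscissae are also used as collocation points in the following sections, a short sketch of how they are obtained from a knot vector may be helpful; the function below is only an illustrative stand-in, assuming an open knot vector.

```python
import numpy as np

def greville_abscissae(knots, p):
    """Greville abscissae of a degree-p B-spline basis on the given (open)
    knot vector: averages of p consecutive knots,
    xi_bar_i = (xi_{i+1} + ... + xi_{i+p}) / p."""
    knots = np.asarray(knots, dtype=float)
    n = len(knots) - p - 1                       # number of basis functions
    return np.array([knots[i + 1:i + p + 1].mean() for i in range(n)])

# Cubic B-splines with one interior knot:
print(greville_abscissae([0, 0, 0, 0, 0.5, 1, 1, 1, 1], p=3))
# -> approximately [0., 0.1667, 0.5, 0.8333, 1.]
```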
2.3 Isogeometric Collocation for Hyperelasticity

In this subsection we explore the application of isogeometric collocation to large deformation elasticity. Especially in the three-dimensional case and for higher order discretizations, efficient approaches such as isogeometric collocation are favourable in the context of hyperelasticity. We first derive the non-linear governing equations for the hyperelastic problem and then show the performance of the approach by a numerical example. For more details the reader is referred to [34], which contains the complete study.
2.3.1 Governing Equations for Hyperelasticity
Since we consider the finite deformation case in the following, two configurations of the body B are considered. The undeformed reference configuration is parameterized in X and the current deformed configuration in x with the mapping x = ϕ(X). The displacement field is defined as u = x − X and the deformation gradient as F = Grad x. The domains of the body B in the reference and the current configuration are denoted as Ω, ω ⊂ R^{d_s}, respectively. The finite deformation elasticity problem in variational form expressed in the reference configuration consists of finding u ∈ U such that for all w ∈ V
∫_Ω P : Grad w dΩ = ∫_Ω B · w dΩ + ∫_{Γ_N} T̂ · w dΓ    (25)
is fulfilled. In this equation P is the first Piola-Kirchhoff stress tensor, B are the body forces, T̂ are the prescribed tractions on the Neumann boundary Γ_N and N is the outward normal unit vector on Γ_N. For the approximation spaces the following definition holds

U = {u | u ∈ (H^1(Ω))^d, u|_{Γ_D} = ū},
V = {w | w ∈ (H^1(Ω))^d, w|_{Γ_D} = 0}.    (26)

Integrating Eq. (25) by parts results in
∫_Ω [Div P + B] · w dΩ − ∫_{Γ_N} [P N − T̂] · w dΓ = 0.    (27)

The discretization of the unknown displacement field u can be expressed as
u_h = ∑_{a=1}^{n} R_a û_a    (28)
where R_a are the NURBS basis functions and û_a are the unknown displacement control variables. Thus Eq. (27) in the discretized form reads

∫_Ω [Div P^h + B] · w dΩ − ∫_{Γ_N} [P^h N − T̂] · w dΓ = 0.    (29)
Analogously to the procedure described in Sect. 2.1 for the linear elastic case, Dirac delta functions are chosen as test functions and the sifting property is applied to transform the integral equations, resulting in

[Div P^h + B]_{τ^{ref}_{kl}} = 0,    τ^{ref}_{kl} ⊂ Ω,    (30a)
[P^h N − T̂]_{τ^{ref}_{kl}} = 0,    τ^{ref}_{kl} ⊂ edge ⊂ Γ_N,    (30b)
[P^h (N′ + N″) − (T̂′ + T̂″)]_{τ^{ref}_{kl}} = 0,    τ^{ref}_{kl} ≡ corner ⊂ Γ_N,    (30c)

where primed and double-primed quantities refer to the two edges meeting at the corner.
The Dirichlet boundary conditions are enforced strongly. Further details on the linearization of the governing equations are omitted herein, but the interested reader is referred to [34], where the theory is explained in more detail. For the determination of the collocation points τ̂_{kl}, the standard Greville abscissae are used in this study. We indicate the physical maps of τ̂_{kl} in the reference and current configurations as τ^{ref}_{kl} and τ^{cur}_{kl}, respectively. The corresponding formulation for the treatment of the Neumann boundaries with the enhanced collocation (EC) approach as outlined in Sect. 2.1 reads

[Div P^h + B]_{τ^{ref}_{kl}} − (C*/h) [P^h N − T̂]_{τ^{ref}_{kl}} = 0,    τ^{ref}_{kl} ⊂ edge ⊂ Γ_N.    (31)

This approach requires a suitable choice for the constant C* in Eq. (31). The counterparts of Eq. (30) in the current configuration read

[div σ^h + b]_{τ^{cur}_{kl}} = 0,    τ^{cur}_{kl} ⊂ ω,    (32a)
[σ^h n − t̂]_{τ^{cur}_{kl}} = 0,    τ^{cur}_{kl} ⊂ edge ⊂ γ_N,    (32b)
[σ^h (n′ + n″) − (t̂′ + t̂″)]_{τ^{cur}_{kl}} = 0,    τ^{cur}_{kl} ≡ corner ⊂ γ_N,    (32c)

and for the enhanced collocation approach

[div σ^h + b]_{τ^{cur}_{kl}} − (C*/h) [σ^h n − t̂]_{τ^{cur}_{kl}} = 0,    τ^{cur}_{kl} ⊂ edge ⊂ γ_N.    (33)
Fig. 5 Quarter of annulus: Geometry, boundary conditions and simulation setup (adapted from [34])
In the former equations, σ is the Cauchy stress tensor, b and t̂ are respectively the body load per unit current volume and the prescribed traction per unit current area on the Neumann boundary γ_N, whereas n is the outward normal unit vector to γ_N.
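The interior collocation equations (30a)/(32a) require the first Piola-Kirchhoff stress (and its divergence) to be evaluated pointwise at every collocation point. As a hedged sketch of such a pointwise constitutive evaluation, the function below computes the first Piola-Kirchhoff stress for a compressible Neo-Hookean model; this specific strain-energy function is a common illustrative choice and is not prescribed by the chapter.

```python
import numpy as np

def neo_hookean_first_pk(F, lam, mu):
    """First Piola-Kirchhoff stress of a compressible Neo-Hookean model,
    P = mu (F - F^{-T}) + lam ln(J) F^{-T}, with J = det F.
    Evaluated pointwise, e.g. at a collocation point tau_kl."""
    J = np.linalg.det(F)
    FinvT = np.linalg.inv(F).T
    return mu * (F - FinvT) + lam * np.log(J) * FinvT

# Example: simple shear deformation state
F = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(neo_hookean_first_pk(F, lam=100.0, mu=80.0))
```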
2.3.2 Numerical Example on Hyperelasticity
In the following the results for a quarter of annulus, subjected to inner pressure or, equivalently, to an outward radial displacement on the inner radius, are investigated. The geometry, boundary conditions and simulation parameters are illustrated in Fig. 5. The imposed displacement on the inner radius is r̂ = 0.5. The relative error in the L^2-norm is investigated for this example. The obtained convergence plots are given in Fig. 6. The Greville abscissae are used to determine the collocation points. The observed convergence rates correspond to those found in the linearly elastic case [3] (see also Table 2). As expected, the Galerkin reference solution exhibits the best accuracy for all tested polynomial degrees and Bézier meshes. The difference between the EC and BC (i.e. the standard collocation) approach seems to be negligible for this test case.
2.4 Mixed Isogeometric Collocation for Nearly Incompressible Elasticity and Elastoplasticity

Isogeometric collocation methods exhibit many positive attributes such as computational efficiency and simplicity; however, for displacement-based approaches instabilities can be observed for strongly non-linear problems such as plasticity. To overcome this deficiency, a mixed approach is investigated herein, in which both stress
Fig. 6 Quarter of annulus: Convergence plots for different polynomial degrees (adapted from [34])
and displacement fields are approximated as primary variables. This approach is also applied to nearly-incompressible elastic problems, where volumetric locking is observed for the displacement-based approach. The following subsection is based on the publication [15].
2.4.1 Governing Equations for Mixed Stress-Displacement Isogeometric Collocation
In Sect. 2.1 an elastic material model has been introduced under the assumption of infinitesimal strains. The material properties for this case have been described by the two Lamé constants λ and μ. In case of nearly incompressible materials, the first Lamé constant λ exhibits large values. Depending on the chosen approach, this can result in the so-called volumetric locking effect, which is characterized by an overly stiff behaviour of the simulated domain and in certain situations even loss of spatial convergence in the solution of the discretized problem. In the second part of this subsection, elastoplastic material is considered. In this case, the total strain tensor ε can be calculated as ε = εe + ε p with the elastic strain tensor εe and the plastic strain tensor ε p . Therefore the stress tensor σ can be calculated as σ = C : (ε − ε p ).
Within this study we consider von Mises plasticity with linear isotropic hardening (see, e.g., [11, 73]), which implies the yield condition

f(σ, α) = ‖s‖ − √(2/3) [σ_Y + K α] ≤ 0    (34)
with the deviatoric stress tensor s = dev[σ] = σ − (1/3) tr[σ] 1, the yield stress σ_Y, the equivalent plastic strain α and the isotropic hardening modulus K. A classical return mapping algorithm (see, e.g., [11, 73]) is applied for the integration of the elastoplastic constitutive equations. In order to derive the mixed stress-displacement isogeometric collocation approach, we consider the mixed weak formulation at the current time step n + 1, which consists of finding u_{n+1} ∈ U_{n+1} and σ̃_{n+1} ∈ S such that for all v ∈ V and w ∈ S the weak momentum balance equation (including boundary conditions)

R^{mom}_{n+1} = ∫_Ω ∇^S v : σ̃_{n+1} dΩ − ∫_Ω v · b_{n+1} dΩ − ∫_{Γ_N} v · t_{n+1} dΓ_N = 0    (35)

and the weak stress coupling equation

R^{str}_{n+1} = ∫_Ω w : (σ̃_{n+1} − σ_{n+1}(u_{n+1})) dΩ = 0    (36)
are fulfilled. The corresponding approximation spaces are defined as

U_{n+1} = {u | u ∈ (H^1(Ω))^d, u|_{Γ_D} = ū_{n+1}},
V = {v | v ∈ (H^1(Ω))^d, v|_{Γ_D} = 0},    (37)
S = {σ̃ | σ̃ ∈ (L^2(Ω))^{d×d}}.
To deduce the proposed isogeometric collocation approach, the weak momentum balance Eq. (35) needs to be integrated by parts, which leads to

R^{mom}_{n+1} = −∫_Ω v · (∇ · σ̃_{n+1} + b_{n+1}) dΩ + ∫_{Γ_N} v · (σ̃_{n+1} · n − t_{n+1}) dΓ_N = 0.    (38)

In contrast to Galerkin approaches, Dirac delta functions δ are chosen as test functions for the isogeometric collocation approach. By applying the sifting property to Eqs. (36) and (38), the following set of equations for the mixed isogeometric collocation approach is obtained:
[∇ · σ̃_{n+1} + b_{n+1}]_{τ^u_{ij}} = 0,    τ^u_{ij} ∈ Ω,    (39a)
[σ̃_{n+1} − σ_{n+1}(u_{n+1})]_{τ^σ_{ij}} = 0,    τ^σ_{ij} ∈ Ω,    (39b)
[σ̃_{n+1} · n − t_{n+1}]_{τ^u_{ij}} = 0,    τ^u_{ij} ∈ Γ_N.    (39c)
The set of collocation points τ^u_{ij} and τ^σ_{ij} is obtained from the discretization of the displacement and stress fields, respectively. Herein the Greville abscissae are used as the collocation points. Other more advanced approaches for the determination of the abscissae values (as e.g. outlined in Sect. 2.2) are not directly applicable to mixed formulations, especially in case of non-linear problems, hence they are not adopted herein. For collocation points located on corners within the Neumann boundary Γ_N, the contributions from the adjacent edges are added in the same vein as in the linear elastic case (see Eq. (12c) in Sect. 2.1). Instabilities have been observed for the pure displacement-based collocation approach. These are most likely induced by the non-differentiability of the algorithmic tangent modulus (of the return mapping algorithm) at the elastoplastic boundary. For the proposed mixed stress-displacement approach, a linearization of the algorithmic tangent modulus is not necessary and thus the instabilities can be avoided. The trial functions u^h_{n+1} and σ̃^h_{n+1} are discretized with NURBS (see Sect. 2.1). The polynomial degrees of the displacement and stress fields are denoted as p_d and p_s, respectively. As a result of preliminary studies, the following selection of discretizations is tested:
1. p_d = p_s, same Bézier mesh/same number of control points
2. p_d = p_s + 1, same number of control points
3. p_s = p_d + 1, same Bézier mesh.
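For reference, the classical radial return mapping for the von Mises model with linear isotropic hardening of Eq. (34), which provides σ_{n+1}(u_{n+1}) in Eq. (39b) at every evaluation point, can be sketched as follows. This is a generic small-strain textbook version, not the authors' implementation, and the variable names are illustrative.

```python
import numpy as np

def radial_return(eps, eps_p_n, alpha_n, lam, mu, sigma_y, K):
    """One step of the radial return mapping for small-strain von Mises
    plasticity with linear isotropic hardening, cf. Eq. (34).
    Returns (sigma, eps_p, alpha) at t_{n+1} from the total strain eps
    and the history (eps_p_n, alpha_n) at t_n."""
    I = np.eye(3)
    eps_e_tr = eps - eps_p_n                                  # trial elastic strain
    sigma_tr = lam * np.trace(eps_e_tr) * I + 2.0 * mu * eps_e_tr
    s_tr = sigma_tr - np.trace(sigma_tr) / 3.0 * I            # deviatoric part
    norm_s = np.linalg.norm(s_tr)
    f_tr = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + K * alpha_n)
    if f_tr <= 0.0:                                           # elastic step
        return sigma_tr, eps_p_n, alpha_n
    dgamma = f_tr / (2.0 * mu + 2.0 / 3.0 * K)                # consistency condition
    n = s_tr / norm_s                                         # flow direction
    return (sigma_tr - 2.0 * mu * dgamma * n,                 # updated stress
            eps_p_n + dgamma * n,                             # updated plastic strain
            alpha_n + np.sqrt(2.0 / 3.0) * dgamma)            # updated hardening variable
```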
2.4.2 Numerical Example on Volumetric Locking
In this section the results obtained for the well-known Cook's membrane are reported. The geometry along with the simulation setup is depicted in Fig. 7. The material parameters and boundary conditions are chosen in accordance with [30]. Thus a tangential traction q = 6.25 is enforced on the right-hand side of the geometry and the left-hand side is modelled as fully clamped. A hybrid collocation-Galerkin treatment of the Neumann boundaries is applied for this test, since a positive effect on the stability and accuracy of the results could be observed. The results without hybrid treatment can be found in [15]. The values for the vertical displacement of point A (see Fig. 7) obtained for different polynomial degrees and discretizations are reported in Figs. 8, 9 and 10. Since no analytical solution exists, the results obtained by a displacement-based Galerkin method are plotted as well. The convergence plots for a discretization with equal degrees can be found in Fig. 8. One can observe that for the lowest polynomial degree, both Galerkin and collocation converge slowly. For the other degrees, collocation converges faster than Galerkin in most cases.
Fig. 7 Cook’s membrane: Geometry, boundary conditions and simulation setup (adapted from [15])
Fig. 8 Cook’s membrane: Vertical displacement of top right corner (point A), equal degree approximation ( ps = pd ) (adapted from [15])
For the discretization approach with an enriched stress field (p_s = p_d + 1, same Bézier mesh) the obtained results are not satisfactory, since only a few of the tested discretizations converge. Therefore this choice for the discretization does not seem to be optimal for this example. For the chosen approach with an enriched displacement field (p_d = p_s + 1, same number of control points) the solutions converge quickly to a constant value and no volumetric locking behaviour can be observed for the collocation approach, as one can see in Fig. 10. Especially for lower polynomial degrees, the proposed mixed isogeometric collocation approach outperforms the displacement-based Galerkin method in this example.
Fig. 9 Cook’s membrane: Vertical displacement of top right corner (point A), enriched stress field ( ps = pd + 1, same Bézier mesh) (adapted from [15])
Fig. 10 Cook’s membrane: Vertical displacement of top right corner (point A), enriched displacement field ( pd = ps + 1, same number of control points) (adapted from [15])
2.4.3 Numerical Example on Elastoplasticity
In the following the performance of the proposed mixed approach is evaluated for the case of elastoplastic material behaviour with the aid of a numerical example. A quarter of annulus as shown in Fig. 11 is simulated herein. The relevant simulation parameters and applied boundary conditions are also reported in Fig. 11. After a loading phase with a total horizontal displacement u = 0.05 at the bottom edge, the specimen is unloaded again to observe the influence of the isotropic hardening law. Here a hybrid collocation-Galerkin approach (Galerkin for the momentum balance equation and collocation for the stress coupling equation) is applied to improve the accuracy of the results, since the additional computational effort is comparatively low. The load-displacement curves for the numerical test described above can be found in Fig. 12. As one can see, the results of the proposed mixed collocation approach and the Galerkin reference solution are in perfect agreement. There are no deviations visible from the plot in the loading and unloading phases.
Fig. 11 Quarter of annulus: Geometry, boundary conditions and simulation setup (adapted from [15])
Fig. 12 Quarter of annulus: Load-displacement curves (adapted from [15])
Additionally the plots of the equivalent plastic strains at the last load step (end of the unloading phase) are visualized in Fig. 13. Except for mild oscillations observed for the mixed collocation approach in case of the discretization with low polynomial degree ( pd = 3, ps = 2), there is an excellent agreement between the results of the proposed collocation approach and the reference solution in the equivalent plastic strain distributions.
2.5 Contact Approaches for Isogeometric Collocation

In the following a large deformation contact formulation is shown and tested in the frictional setting. It continues the previous study [10] on frictionless contact in
Fig. 13 Quarter of annulus: Equivalent plastic strain α, load step 40 (adapted from [15])
the small strain regime. The descriptions and results reported herein summarize the studies shown in [34]. Let ω^(i) ⊂ R^{d_s}, i = 1, 2, represent the domains occupied in the current configuration by two elastic bodies B^(i). In the current configuration, each body is subjected to body forces b^(i), to prescribed displacements û^(i) on the Dirichlet portion of the boundary γ_D^(i), to prescribed tractions t̂^(i) on the Neumann portion of the boundary γ_N^(i), and to contact constraints on the remaining portion γ_C^(i). Thus Eq. (32) for the finite deformation elasticity problem has to be fulfilled for each of the two bodies in the current configuration, supplemented by the contact constraints in the normal direction

g_N ≥ 0,    t_N ≤ 0,    g_N t_N = 0,    ġ_N t_N = 0    (40)

and in the tangential direction

Φ = ‖t_T‖ + μ t_N ≤ 0,    ġ_T = γ̇ t_T/‖t_T‖,    γ̇ ≥ 0,    γ̇ Φ = 0.    (41)
Fig. 14 Ironing problem with friction: Geometry, boundary conditions and simulation setup (adapted from [34])
The contact constraints are both valid on γ_C^(s), with μ as the Coulomb friction coefficient, γ as the incremental plastic slip and a superposed dot indicating time derivation. The gaps in normal and tangential direction are g_N and g_T, respectively. The backward Euler method is used for the time discretization. The contact traction vector t_C in the current configuration, decomposed into the normal and tangential components, is defined as

t_C = t_N + t_T = t_N n̄^(m) + t_{T1} τ̄^{1(m)},    t_N = t_C · n̄^(m)    (42)

with the covariant vector τ_1^(m), the contravariant vector τ^{1(m)} and the normal vector n^(m) to the master surface, and the symbol ( ̄ ) indicating the closest point projection from the slave onto the master surface. The so-called two-half-pass algorithm [71] is applied, which alternatively treats both surfaces as slave and master. The contact constraints can be enforced by treating them as deformation-dependent Neumann boundary conditions. The corresponding equations for each body are obtained by simply substituting the traction t̂ at the Neumann boundary in Eq. (32) with the contact traction vector t_C. To enforce the contact constraints the penalty method is used. The Greville abscissae are used as the collocation points.
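To make the penalty enforcement of the constraints (40)-(41) concrete, the following hedged sketch evaluates a penalty-regularised contact traction with a stick/slip split for Coulomb friction at a single contact point; the penalty parameters and sign conventions are illustrative and do not reproduce the exact algorithm of [34].

```python
import numpy as np

def penalty_contact_traction(g_n, g_t, eps_n, eps_t, mu):
    """Penalty regularisation of Eqs. (40)-(41) at one contact point.
    g_n   : normal gap (negative value = penetration)
    g_t   : incremental tangential gap (2-vector)
    eps_n, eps_t : normal/tangential penalty parameters, mu : friction coefficient."""
    if g_n >= 0.0:                              # open gap: no contact traction
        return 0.0, np.zeros(2)
    t_n = eps_n * g_n                           # compressive normal traction (t_n <= 0)
    t_t_trial = eps_t * np.asarray(g_t)         # trial (stick) tangential traction
    phi = np.linalg.norm(t_t_trial) + mu * t_n  # slip criterion, cf. Eq. (41)
    if phi <= 0.0:                              # stick: trial state admissible
        return t_n, t_t_trial
    # slip: project the tangential traction back onto the Coulomb cone
    t_t = -mu * t_n * t_t_trial / np.linalg.norm(t_t_trial)
    return t_n, t_t
```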
2.5.1 Numerical Example on Frictional Contact
As a numerical example, we consider the so-called ironing problem, where a half-cylindrical body is pressed onto an elastic slab and then dragged in the tangential direction. The geometry, boundary conditions and simulation parameters are given in Fig. 14. A vertical downward displacement v̂ = 0.3 is applied to the upper face of the cylinder in 20 increments and then held constant while a horizontal displacement û = 2.0 is applied in 130 additional increments.
Fig. 15 Ironing problem with friction: Plots of stress σ yy on the deformed configuration at t = 20, t = 50, t = 100 and t = 150 (from top to bottom) (adapted from [34])
Figure 15 shows the contour plot of the stress component σ_yy in the deformed configuration at different time steps for the problem without and with friction (μ = 0.3). No spurious stress oscillations or other unwanted effects are visible in the plot. This is in good agreement with the results obtained with the proposed approach in the small strain regime and for frictionless contact as reported in [10]. The reaction forces in the horizontal and vertical directions, computed through numerical integration on the top surface of the half-cylinder, are shown in Fig. 16. The smoothness of these curves emphasizes the good performance of the proposed collocation contact algorithm.
Fig. 16 Ironing problem with friction: Reaction force history (adapted from [34])
2.6 Isogeometric Collocation for Geometrically Non-linear Structural Elements

In this subsection, we summarize recent contributions from our group within the SPP1748 project on isogeometric collocation applied to non-linear structural elements, including shells and beams. In [33], we present an isogeometric collocation formulation for the Reissner–Mindlin shell problem. The standard approach of expressing the equilibrium equations in terms of the primal variables, followed in the previous sections, is not a suitable way for shells due to the complexity of the underlying equations. We then propose an alternative approach, based on a stepwise formulation, and show its numerical implementation within an isogeometric collocation framework. The formulation is tested successfully on a set of benchmark examples, which comprise important aspects like locking and boundary layers. These tests show that locking effects can be circumvented by using high polynomial degrees. An accompanying study on the computational time also confirms that high polynomial degrees are preferable in terms of computational efficiency. In [77], we focus on spatial rods within the framework of isogeometric collocation and develop a frictionless contact formulation for them. The structural mechanics is described by the Cosserat theory of geometrically nonlinear spatial rods. Contact points are detected by a coarse-level and a refined search for close centerline points, and reaction forces are computed from the actual penetration of rod surface points, so that the enforcement of the contact constraints is performed with the penalty method. An important aspect is the application of contact penalty forces as point loads within the collocation scheme, and methods for this purpose are proposed and evaluated. The overall contact algorithm is successfully applied to several numerical examples. Finally, [36, 37] initiate the study of three-dimensional shear-deformable geometrically exact beam dynamics through explicit and implicit isogeometric collocation
methods, respectively. The explicit formulation we propose is based on a natural combination of the chosen finite rotations representation with an explicit, geometrically consistent Lie group time integrator. We focus on extending the integration scheme, originally proposed for rigid body dynamics, to our nonlinear initial-boundary value problem, where special attention is required by Neumann boundary conditions. The overall formulation is simple and only relies on a geometrically consistent procedure to compute the internal forces once control angular and linear accelerations of the beam cross sections are obtained from the previous time step. The capabilities of the method are shown through numerical applications involving very large displacements and rotations and different boundary conditions. In the implicit study, we adopt the Newmark time integration scheme extended to the rotation group SO(3). The proposed formulation is fully consistent with the underlying geometric structure of the configuration manifold. The method is highly efficient, stable, and does not suffer from any singularity problem due to the (material) incremental rotation vector employed to describe the evolution of finite rotations. Consistent linearization of the governing equations, variables initialization and update procedures are the most critical issues. Numerical applications involving very large motions and different boundary conditions demonstrate the capabilities of the method and reveal the critical role that the high-order approximation in space may have in improving the accuracy of the solution.
3 Beyond Stochastic Collocation

In stochastic computations in the realm of uncertainty quantification (UQ), the underlying assumption is that there is a basic but expensive-to-evaluate computational model of some physical system—also denoted as the deterministic system—which contains some uncertainty modelled as random variables, processes, or fields [39]. With such computationally intensive deterministic models, one of the main computational bottlenecks is the repeated evaluation of the deterministic model with different values of the uncertain parameters. “Beyond collocation” thus means lowering the computational cost below what would be used in a collocation approach, while on the other hand keeping the numerical stability of projection resp. regression methods. The goal is hence to increase the speed of UQ methods by avoiding the costly evaluation of the full-scale deterministic model previously described in Sect. 2 as much as possible. One avenue for this is to reduce the number of evaluation points below the normally minimal set for collocation through the use of “stochastic numerics”, in particular through Bayesian methods for evaluating integrals [16, 64, 70, 74], to be described in a bit more detail in Sect. 3.1. This was often done also in conjunction with other scientifically interesting methods for the reduction of numerical work [16], like for example reduced order models (ROMs) [74], or upscaling [62, 67–70] in the context of multi-scale computations, a bit more of which will be given in Sect. 3.3.
Another avenue is through the use of multi-fidelity methods, a kind of multigrid in the stochastic domain. Best known in this direction is multi-level Monte Carlo (MLMC), which is what was also employed initially here, where performance increases were achieved through novel algorithms. As the cost for each sample is high, a naïve Monte Carlo (MC) approach is usually not considered feasible. One way out of this arises when the deterministic system is a partial differential equation (PDE) discretised by some grid-associated method, like the finite element (FEM), finite volume (FVM), or finite difference method (FDM), or even some abstract Galerkin-like approach. In this case one has computational deterministic models at various levels of refinement. Just as this may be used in the deterministic solution process with the use of multi-grid (MG) or more generally multi-level (ML) methods, where one saves in the evaluation of the finest discretisation level through the clever use and combination of—cheaper—solves on coarser levels, it is also possible to use this idea in the MC method, described in Sect. 3.2, so that one can save on samples on the finest level by using many more cheaper samples on coarser levels. The idea is to estimate statistics of the structural response by using the telescoping sum of statistics on several discretisation grids, each coarser than the previous one. For this purpose we are developing numerical error estimates that allow the algorithm to adapt the number of levels as well as the number of evaluations of the structural response on each of the grids.
Another often used way to reduce the computational burden is to build a proxy- or surrogate model, see e.g. [35, 39]. One of the ways to build such a model is essentially by regression. The coefficients of the regression have to be computed by integration, numerically by quadrature, as already alluded to above. The minimum number of integration points in a deterministic setting is equal to the number of coefficients to be determined, which is actually the case of collocation or interpolation. For a large number of basis vectors, the number of coefficients is so large that even this number of evaluations of the deterministic full-scale model seems too much. In Sect. 3.1 we try to go beyond collocation by using probabilistic ideas, specifically ideas from probabilistic numerics, e.g. see [6, 7, 9, 25, 54, 56, 57, 64, 79, 80]. Here the unknown integral may be viewed as an unobservable quantity, modelled as a random variable (RV), and the observations one has are of a related RV, namely samples of the integrand at some quadrature points. This is using Bayesian ideas in a purely numerical procedure. It turns out that a further circumstance which may be exploited is the fact that not just one, but many—one for each coefficient—such integrals have to be computed.
In such UQ computations one often ends up with a very high-dimensional problem, partly due to a natural formulation—like for other parametric problems—in terms of tensor products [41, 44–48]. To speed up the computations, especially in nonlinear problems, it is possible to use certain algebraic properties of tensor representations [12, 13], and in that way re-use existing algorithms which were originally developed for matrices but are often general enough to be formulated in any associative algebra. This last point is already very close to the general method of using reduced order models (ROMs), in a certain sense a kind of proxy model, see e.g. [41, 45, 48].
These ROMs are a very general way to reduce the computational expense. Somewhat related
to this is another way to achieve reduced order models, namely through upscaling of heterogeneous material laws. Here Bayesian methods may be used as well [62, 67–70], and this will be shown in Sect. 3.3. In uncertainty quantification (UQ), one is looking at a system which has some uncertain properties, and these are modelled as random variables (RVs), processes, or random fields (RFs) [35, 39]. A prime example is the diffusion equation with a random conductivity κ(x, ω):

− div(κ(x, ω)∇u(x)) = f(x),    (43)
i.e. κ may be a random field, and u is the quantity which is diffusing (e.g. temperature), defined for x ∈ G in some domain G ⊂ R^d, f is some forcing function of sinks and sources, the boundary conditions are not explicitly shown for simplicity, and Eq. (43) expresses a conservation law. Here (Ω, A, P) is a probability space, where Ω is the set of all possible realisations of the RF κ(x, ω), A a σ-algebra of measurable subsets of Ω, and P a probability measure. By ω ∈ Ω we designate a realisation of the stochastic variables, and as now κ and possibly f are RFs, so is u(x, ω). In a more abstract setting, one may write Eq. (43) as

A(u; ω) = f(ω)    (44)
with the state in some vector space u ∈ U and the excitation or action in another vector space f ∈ F, and a random or stochastic operator A : U × Ω → F. This can be thought of as representing more involved deterministic models than Eq. (43), such as equations or variational inequalities describing possibly a time evolution. What shall be assumed is that for each choice ω ∈ Ω of the stochastic variables Eq. (44) is well-posed, in that it has a unique solution u(ω), continuously dependent on the data. A quantity of interest (QoI) may be some function Y(u(ω), ω) ∈ Y of the state u. Often the QoI may be taken as the state itself, Y(u(ω), ω) = u(ω). What one is often interested in are expected values of such functions, i.e. Ȳ(u) = E(Y(u(ω), ω)). Computationally the task is then to compute approximations to u(ω), or Y(u(ω), ω), or Ȳ(u). This will be taken up in the following.
3.1 Bayesian Numerics

Here we shall be concerned with the case that one wants to compute an approximation y_a(ω) to Y(u(ω), ω):

y_a(ω) ≈ Y(u(ω), ω) ∈ Y,    (45)

where then y_a(ω) is called a proxy- or surrogate model. Considering as an example the case where one wants to approximate u(x, ω) [35, 39] (i.e. Y(u, ω) = u), one would like a proxy-model
u(x, ω) ≈ u_a(x, ω) = ∑_{j=1}^{J} u_j(x) ψ_j(ω),    (46)

or

u(x, ω) ≈ u_a(x, ω) = ∑_{k=1}^{K} β_k w_k(x) χ_k(ω).    (47)
Both Eqs. (46) and (47) are separated representations, separating the influence of the stochastic variables in ψ_j(ω) resp. χ_k(ω) from the deterministic variables u_j resp. w_k. They both exhibit the basic underlying tensor product structure inherent in any such parametric model [41, 44–48], which allows additional savings in computations [12, 13]. The form Eq. (46) is a linear proxy-model in ω ∈ Ω, where typically the functions ψ_j(ω) are chosen beforehand and the “coefficients” u_j(x) have to be determined, whereas the form Eq. (47) is more like a reduced order model (ROM), where in the simplest case both the deterministic basis w_k and the stochastic one χ_k(ω) are defined beforehand, and the solution process finds just the coefficients β_k. Let us assume that in Eq. (46) the stochastic functions ψ_j(ω) have been chosen to be orthonormal, i.e.

⟨ψ_i, ψ_j⟩ := E(ψ_i ψ_j) = ∫_Ω ψ_i(ω) ψ_j(ω) P(dω) = δ_{ij},    (48)
then an approximation to a solution field u(x, ω) can be found via orthogonal projection by determining the functions

u_j(x) = E(u(x, ·) ψ_j) = ∫_Ω u(x, ω) ψ_j(ω) P(dω).    (49)
Typically the integrals in Eq. (49) have to be determined numerically,

u_j(x) = u_j = E(u ψ_j) = ∫_Ω u(ω) ψ_j(ω) P(dω) ≈ ∑_{ν=1}^{N} u(ω_ν) ψ_j(ω_ν) w_ν,    (50)
by summing over quadrature points ω_ν with integration weights w_ν. It is not difficult to see that Eq. (50) is a particular kind of regression [58], and one would normally need at least as many integration points as coefficients u_j, i.e. N ≥ J, with N = J the case of interpolation resp. collocation. One thread was to investigate collocation as a degenerate form of integration [58] and in this way have a variational characterisation of the error in Eq. (50), but this is still considered as too many integration points. It is possible to compute such integrals with fewer points (N < J), and this is where Bayesian ideas can be brought into the game [6, 9, 25, 54, 56]. If one views the integrals in Eq. (50), Φ_j := E(u ψ_j), as unknown and uncertain quantities which have to be estimated and are hence modelled as RVs, and views the samples of the integrand as “observations” of a related quantity ϕ_j := u(ω_ν) ψ_j(ω_ν), then, by assuming a prior distribution for [Φ_1, …, Φ_j, …, Φ_J], this prior can be updated by Bayesian
Fig. 17 Laplace (sparse) and Gaussian densities
Fig. 18 Pdf of RV
methods. Thus one arrives at Bayesian integration resp. quadrature [7, 19, 31, 32, 55, 57, 59, 79, 80]. One kind of assumption for the prior which has been quite successful is to assume that the vector [Φ_1, …, Φ_j, …, Φ_J] is sparse [64]. This can be done by assigning for example as a prior a Laplace probability density function (PDF) to the vector instead of a Gaussian one, see Fig. 17 in 2D, which gives greater weight to “sparse” vectors, especially in higher dimension. To show the effect of this on a simple example, consider a RV which is given by a polynomial chaos expansion (PCE) with 53,130 terms, the PDF of which is shown in Fig. 18. The experiment is to try to reconstruct this PCE by computing the coefficients as in Eq. (50), where the vector [Φ_1, …, Φ_j, …, Φ_J] is the unobserved quantity, and what one does observe is the vector [ϕ_1, …, ϕ_j, …, ϕ_J]. The results are in Fig. 19, where the errors of the reconstructed PCE are shown; more precisely, on the ordinate is the root mean square error (RMSE), which is also the standard deviation of the difference. On the abscissae are the numbers of integration points resp. samples of the vector [ϕ_1, …, ϕ_j, …, ϕ_J] in relation to J = N_interp. In the left picture (a) the error is shown for a straightforward Monte Carlo (MC) quadrature resp. integration, whereas in the right picture (b) one may see the results for Bayesian quadrature. Observe also the
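The cited Bayesian quadrature with a Laplace (sparsity-promoting) prior is closely related to ℓ1-regularised regression: the maximum a posteriori estimate under a Laplace prior with Gaussian observation error is the solution of a Lasso-type problem. Purely as an illustration of this underlying idea, and not as the authors' algorithm, the following sketch recovers expansion coefficients from fewer samples than coefficients by iterative soft-thresholding (ISTA); all names and parameters are hypothetical.

```python
import numpy as np

def sparse_coefficients_ista(Psi, y, lam=1e-3, iters=500):
    """Solve min_c 0.5*||Psi c - y||^2 + lam*||c||_1 by ISTA, i.e. the MAP
    estimate under a Laplace prior on the coefficient vector c.
    Psi : (N, J) basis functions evaluated at the N sample points (N < J allowed)
    y   : (N,) samples of the integrand / model response."""
    step = 1.0 / np.linalg.norm(Psi, 2) ** 2          # 1 / Lipschitz constant of the gradient
    c = np.zeros(Psi.shape[1])
    for _ in range(iters):
        z = c - step * Psi.T @ (Psi @ c - y)          # gradient step on the quadratic term
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft thresholding
    return c
```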
Fig. 19 RMSE for MC and Bayesian quadrature
Fig. 20 Mesh for quarter with hole and loading regime
different scale for the ordinate, which is logarithmic in the left picture for MC. It may be gleaned from Fig. 19 that in this example the Bayesian quadrature is almost two orders of magnitude more accurate for this low number of samples, ca. 1% of what would be needed for collocation resp. interpolation. Also, to further this topic of Bayesian quadrature, quite a bit of work was done on the development of fast and efficient Bayesian approximation filters [40, 43, 49–51, 63]. Of particular interest is the accurate computation of the conditional expectation [75, 76]. To show a more involved example, consider the quarter disk of elasto-plastic material with uncertain material properties [66] shown in Fig. 20. Subplot (a) shows the mesh, whereas subplot (b) shows the loading regime. Here the stochastic proxy-model is computed in the manner indicated via Bayesian quadrature [64], once with Monte Carlo sampling points (left half (a) of Fig. 21) and once with Smolyak sparse grid sampling points (left half (b) of Fig. 21).
Fig. 21 Integration points—Monte Carlo and sparse grid in 2D
Fig. 22 Convergence
In Fig. 22 the Kullback-Leibler divergence (KLD) of the error (compared to 10^5 Monte Carlo samples) is shown for different numbers of sampling points, and—on the abscissae—different levels of enforced sparsity in the Bayesian prior. One may glean from these graphs that even for high levels of enforced sparsity the error can be kept quite low. As a result, in Fig. 23 the evolution of the plastic strain is shown: in the left (a) subplot the band of the 95% percentile range is shown, whereas in the right (b) subplot one may find the pdf of the plastic strain evolution for different loading steps.
Fig. 23 Total plastic strain
3.2 Multilevel Monte Carlo Method

Let the approximate solution in the deterministic sense be denoted as u_h(x), where h is the discretization parameter. If u_h(x, ω) ≡ u_h is the corresponding solution in stochastic space, then its expected value is determined analytically as

E(u_h) = ∫_Ω u_h dP(ω).    (51)

Due to the difficulty in performing analytical integration, one may estimate the expectation using the Monte Carlo method (MC):

E(u_h) ≈ μ_MC(u_h) := (1/N) ∑_{i=1}^{N} u_h(x, ω_i).    (52)
Here, {u_h(x, ω_i)}_{i=1}^{N} are independent random samples of u_h and N ∈ ℕ denotes the total number of samples. Let us define C(u_h) as the computational time to compute one sample of u_h. Then, the total cost of the MC mean estimator is

C(μ_MC(u_h)) = N C(u_h).    (53)
By virtue of the law of large numbers and the central limit theorem, the mean square error (MSE) of μ_MC(u_h) is expanded in the form:

ε²(μ_MC(u_h)) = Var(u_h)/N + (E(u_h) − E(u))²,    (54)
in which E(u) is the analytical expectation of u(x, ω) ≡ u. Further, the population variance Var(u_h) is approximated by the unbiased sample variance V_MC(u_h). It is clear that the first term in the above equation denotes the statistical error (denoted as ε_s²); the second term represents the deterministic discretization error (signified as ε_d²). Evidently, one may define ε_s² ∝ N^{−1} and |ε_d| ∝ h^α, where α > 0 is the order of convergence. Therefore, to attain overall higher accuracy of μ_MC(u_h), one requires a very small h and a very large N. This demands a tremendous amount of computational effort. To overcome this drawback, a variance reduction technique called the multilevel Monte Carlo method (MLMC) is considered in this study [8, 20, 24]. Let {h_l, l = 0, 1, 2, …, L} be a sequence of nested meshgrids, such that h_{l−1} = s h_l, l > 0, holds. Here, l ∈ ℕ_0 denotes the mesh level, L ∈ ℕ_0 the finest mesh, and s ∈ ℕ \ {1} is the factor of mesh refinement. Subsequently, we consider u_{h_l}(x, ω) ≡ u_{h_l} as the stochastic solution on mesh level l. Because of the linearity of the expectation, the MLMC mean estimation of the solution on level L is expressed as:

E(u_{h_L}) ≈ μ_ML(u_{h_L}) := μ_MC(u_{h_0}) + ∑_{l=1}^{L} μ_MC(u_{h_l} − u_{h_{l−1}}).    (55)
It follows that μ_MC(u_{h_0}) is the estimator of E(u_{h_0}) using N_0 samples on l = 0; μ_MC(u_{h_l} − u_{h_{l−1}}) approximates E(u_{h_l} − u_{h_{l−1}}) with N_l samples at level l > 0. For simplification, one may introduce:

μ_MC(Y_l) = μ_MC(u_{h_0}) for l = 0,   μ_MC(Y_l) = μ_MC(u_{h_l} − u_{h_{l−1}}) for l > 0,    (56)
where the quantities u_{h_l} and u_{h_{l−1}} in the difference term Y_l for l > 0 are sampled using the same random seed. The expansion in Eq. (55) is thus rewritten as

μ_ML(u_{h_L}) = ∑_{l=0}^{L} μ_MC(Y_l).    (57)

Under the assumption that {Y_l}_{l=0}^{L} are modelled as i.i.d. variables, the following MSE estimate
ε²(μ_ML(u_{h_L})) = ∑_{l=0}^{L} Var(Y_l)/N_l + (E(u_{h_L}) − E(u))²    (58)
holds. Similar to the MSE of MC in Eq. (54), the above estimate is also split into a stochastic and a deterministic error. Analogously, Var(Y_l) ≈ V_MC(Y_l) is considered. In order to control the total accuracy of MLMC, the user may state the constraints on both types of errors. If C(Y_l) is the computational time to run a single sample of Y_l on each level l, then the overall computational cost of μ_ML(u_{h_L}) is:
C(μ_ML(u_{h_L})) = ∑_{l=0}^{L} N_l C(Y_l).    (59)
To determine the optimum number of samples on each mesh level with a minimum cost, here a constrained optimization problem is solved. Accordingly, the cost function is given as

f(N_l) = arg min_{N_l} ∑_{l=0}^{L} [ N_l C(Y_l) + τ V_MC(Y_l)/N_l ].    (60)
Here, τ denotes the Lagrange multiplier, which is determined by

τ ≥ ε_s^{−2} ∑_{l=0}^{L} [V_MC(Y_l) C(Y_l)]^{1/2}.    (61)

Hence, the optimal samples are calculated as

N_l = τ [V_MC(Y_l)/C(Y_l)]^{1/2}.    (62)
(62)
Considering, s = 2 and positive constants c1 , c2 , c3 , α, β, γ > 0 such that α ≥ holds, the following error bounds
1 min(β, γ ) 2
|E(uhl ) − E(u)| ≤ c1 2−αl , Var[Y l ] ≤ c2 2−βl ,
(63)
γl
C(Y l ) ≤ c3 2 , are defined. Further, it turns out that, if |E(uhl ) − E(u)| → 0 as l → ∞, then |E(Y l )| ≤ c4 2−αl
(64)
holds good. Here, c4 > 0 is a constant. Based on the values of β and γ , one may also understand the major cost contributor. If β > γ , the maximum cost is controlled by the coarsest level, and if β < γ , the finest level governs the dominant cost. Finally, when β = γ , then the cost on each level is roughly evenly distributed.
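A hedged sketch of the resulting MLMC procedure, combining pilot samples to estimate V_MC(Y_l), the sample allocation of Eqs. (61)-(62), and the telescoping sum of Eq. (57), is given below. It is illustrative only: the chapter additionally adapts the number of levels, and all function names and parameters are assumptions.

```python
import numpy as np

def mlmc_mean(sample_Y, cost, eps_s2, n_pilot=20):
    """MLMC mean estimator, cf. Eqs. (55)-(62).
    sample_Y[l]() returns one sample of Y_l (u_{h_0} for l = 0, otherwise
    u_{h_l} - u_{h_{l-1}} computed with the same random seed on both grids);
    cost[l] is the measured cost C(Y_l) per sample; eps_s2 is the target
    statistical mean-square error."""
    L = len(sample_Y) - 1
    samples = [[sample_Y[l]() for _ in range(n_pilot)] for l in range(L + 1)]
    V = [np.var(s, ddof=1) for s in samples]                    # V_MC(Y_l), unbiased
    tau = sum(np.sqrt(V[l] * cost[l]) for l in range(L + 1)) / eps_s2        # Eq. (61)
    N = [int(np.ceil(tau * np.sqrt(V[l] / cost[l]))) for l in range(L + 1)]  # Eq. (62)
    for l in range(L + 1):                                      # draw the remaining samples
        samples[l] += [sample_Y[l]() for _ in range(max(N[l] - n_pilot, 0))]
    return sum(np.mean(s) for s in samples), N                  # telescoping sum, Eq. (57)
```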
3.2.1 Application to Linear-Elastic Problem
Here, a linear-elastic model problem with homogeneous random elasticity matrices C(ω) is considered. The objective is to determine the random displacement field u(x, ω), such that, the following stochastic partial differential equations
Fig. 24 Boundary conditions
−div σ(x, ω) = f(x),    ∀x ∈ G, ω ∈ Ω,
u(x, ω) = u_0 = 0,    ∀x ∈ Γ_D, ω ∈ Ω,    (65)
σ(x, ω) · n(x) = t(x),    ∀x ∈ Γ_N, ω ∈ Ω,

are fully satisfied, in which σ(x, ω) denotes the Cauchy stress tensor field, f(x) is the body force, t(x) represents the surface tension on the Neumann boundary Γ_N, n(x) is the outward unit normal to Γ_N, and u_0 signifies the homogeneous displacement on the Dirichlet boundary Γ_D. The strain-displacement relationship is further described by

ε(x, ω) = (1/2)(∇u(x, ω) + ∇u(x, ω)^T),    ∀x ∈ G, ω ∈ Ω,    (66)

where ε(x, ω) is the strain tensor field. Finally, the constitutive law is given in the form:

σ(x, ω) = C(ω) : ε(x, ω),    ∀x ∈ G, ω ∈ Ω.    (67)

Carrying out the variational formulation of the above equations on G, one generally seeks an approximate solution u_h(x, ω). Further, the intent is to determine the numerical expectation of u_h(x, ω) using MLMC. This is shown for a practical application in the next Sect. 3.2.2.
3.2.2 Results on 2D Proximal Femur
A proximal femur bone geometry G ⊂ R^d, d = 2, with a body width of approximately 7 cm and 21.7 cm in total height was considered. Further, an in-plane uniform pressure load with a resultant load of 1500 N was applied from the top, and zero displacements were considered at the bottom (see Fig. 24). The deterministic simulation was performed by FEM, using four-noded plane stress elements. To assess the impact of the optimality of different finite element solvers on the cost benefits of MLMC, we used two software packages, namely Plaston (PL) [66] and CalculiX (CCX). The mean elasticity matrix of the matrix-valued random variable C(ω) belonged to orthotropic symmetry, whose values are given in Table 3.
Table 3 Orthotropic material parameters
Young's modulus (MPa): E_1 = 1173.7, E_2 = 875.5
Poisson's ratio: ν_12 = 0.22
Shear modulus (MPa): G_12 = 481.0

Table 4 Mesh specifications
l    Elements    Nodes    DOF
0    171         206      396
1    684         753      1476
2    2736        2873     5688
3    10944       11217    22320
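For orientation, the mean orthotropic plane-stress elasticity matrix corresponding to the parameters of Table 3 can be assembled as in the following sketch (Voigt notation); the random matrix C(ω) of the study fluctuates about such a mean, and the snippet is merely illustrative.

```python
import numpy as np

# Mean orthotropic plane-stress stiffness from Table 3 (Voigt order: 11, 22, 12)
E1, E2, nu12, G12 = 1173.7, 875.5, 0.22, 481.0     # MPa
nu21 = nu12 * E2 / E1                              # symmetry of the compliance matrix
S = np.array([[1.0 / E1, -nu21 / E2, 0.0],
              [-nu12 / E1, 1.0 / E2, 0.0],
              [0.0, 0.0, 1.0 / G12]])              # compliance matrix
C_mean = np.linalg.inv(S)                          # stiffness entering Eq. (67)
print(C_mean)
```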
Fig. 25 Nested meshgrids of 2D femur bone
The implementation of MLMC was carried out on fixed four levels of nested meshes (shown in Fig. 25). Table 4 provides the corresponding mesh specifications. If u_t^L(x, ω) is the total displacement random field on level L (i.e., l = 3), then the objective was to calculate the MLMC mean estimate, μ_ML(u_t^L(x, ω)) = μ_ML(‖u_{h_L}(x, ω)‖_2), for a given sampling MSE ε_s². Note that the difference term Y_l from Eq. (56) was evaluated only at nodes which had common spatial coordinates between all four levels of meshes. The idea here was to avoid interpolation error. Consequently, the expectation of the system response on the finest level was determined only at these common nodes. To have an a priori understanding of the MLMC performance, a so-called screening test using the PL and CCX solvers was conducted, as shown in Fig. 26. It is to be noted that the results displayed pertain only to a certain common node, P. Here, a fixed number of samples (in this case 20) and three levels of meshes {l = 0, 1, 2} were utilized. The top left plot in the figure shows the behaviour of the logarithmic mean of the quantities u_l and Y_l, l > 0, at each level l. Similarly, the logarithm of the variance of the same quantities is plotted in the top right plot. The expected decay in μ_MC(Y_l) and V_MC(Y_l) and the approximately constant behaviour of μ_MC(u_l) and V_MC(u_l) are observed. We can see that both solvers display similar behaviour. The plot at the bottom displays the logarithmic computational time of Y_l against the mesh level l. The corresponding CPU times were recorded on a 2.3 GHz Intel Core i5 processor. It is
Fig. 26 Screening results with three levels of meshes at node P

Table 5 Convergence results
       α       β       γ       c_4     c_2        c_3
CCX    1.712   2.648   1.401   0.079   1.68e-05   0.245
PL     1.712   3.265   0.386   0.077   3.15e-05   0.428
clear that C(Y_l) increases as l increases. The interesting aspect of this plot is that, at l = 0, PL is more expensive than CCX and vice versa at l > 0. In Table 5, we summarize the constants given in Eqs. (63) and (64), determined from the slopes and y-intercepts of log |μ_MC(Y_l)|, log(V_MC(Y_l)) and log(C(Y_l)). It is apparent that the values of α and c_4 are similar for both solvers, except for a slight deviation in β and c_2. What stands out in the table is a significant difference in γ. This value reflects the optimality of the solvers, that is, PL is more optimal than CCX on finer levels (l > 0). A closer inspection of c_3 further shows the behaviour of the solver costs on the coarsest mesh. This implies that PL is suboptimal compared to CCX at l = 0.
Figure 27 presents the performance results of MLMC. The top left plot shows the propagation of the number of samples on each level l, for different values of ε_s², at node P. Evidently, N_l decreases monotonically with increasing l. Also, for stricter accuracies, N_l is higher. The interesting aspect of this plot is that, at l = 0, CCX requires slightly more samples than PL. On the other side, PL needs more samples than CCX for l > 0. This behaviour is due to the different C(Y_l) of the solvers. The top right plot shows the mean estimation cost of MLMC and MC against ε_s². The CPU time to run a single sample of Y_L was clocked at 5.62 s for CCX and 1.42 s for PL. We found that the MLMC estimate using CCX was approximately 13.8 times faster than the corresponding MC estimate at all accuracies. In comparison, the cost benefit of MLMC with regard to the solver PL was only around 2.2 times. Due to the lower computational cost of PL on level L, as expected, the cost of MC estimation was much lower than CCX's. However, it was interesting that, although C(Y_L) of CCX was much higher than PL's, MLMC with CCX still performed roughly 1.39 times faster than with PL. This occurs because the maximum cost is contributed by the coarsest level (l = 0), and C(Y_0) for PL is greater than that of CCX. The bottom two plots compare the MLMC mean estimate of the total displacement on mesh level L between CCX (on the left) and PL (on the right). The results are plotted for the accuracy ε_s² = 2 × 10^{−7}. It is evident that the difference in results between the two solvers is small. The relative error of the mean estimate of PL with respect to CCX was found to be of the order 10^{−4}.
3.3 Stochastic Upscaling

Another way of saving expensive full-scale model evaluations is to use proxy-models of some sort. One special case of this may be seen when looking for large-scale properties of highly inhomogeneous micro- or meso-structures. This is normally the domain of homogenisation, but this pre-supposes that there is a separation of scales. In many materials, this is not the case. In cooperation with Adnan Ibrahimbegović [29, 42] we have developed a stochastic multi-scale formulation, which allows such stochastic upscaling, in order to have simpler macro-scale material models and thus save expensive meso-scale evaluations. In this project attempts are made to upscale the meso-scale heterogeneous material structure of concrete to a macro-scale. In the linear case the homogenisation approaches can be successfully used, in which the macro-scale properties such as the stiffness matrix are evaluated, given the meso-scale structural response of a representative material element. However, in the nonlinear case the upscaling of meso-scale information to the macro-scale is not as straightforward. The so-called size-effect problem, e.g. the problem of determining the size of the representative element, appears and has to be resolved. To overcome this issue, in this project the focus is put on the so-called mesh-in-element approach (MIEL) [28, 29, 42], in which the meso-scale structure is embedded in a macro-scale finite element.
Fig. 27 Performance of MLMC
As this is to be a model for possibly more complex behaviour, we assume that the macro-scale continuum model can be described as a generalised standard material model [18, 22, 28]. This covers pretty much all materials which obey the maximum dissipation hypothesis, and are thus in a sense optimal in fulfilling the requirements of the second law of thermodynamics. This has the advantage that these materials are completely characterised by the specification of two scalar functions, the stored energy resp. Helmholtz free energy density φ, and the dissipation pseudo-potential density ψ. In our view this description is also a key for the connection with the micro-scale behaviour. No matter how the physical and mathematical/computational description on the micro-scale has been chosen, in all cases where the description is based on physical principles it will be possible to define the stored (Helmholtz free) energy and the dissipation (entropy production). These two thermodynamic functions are thus employed as measurements in the Bayesian inference used to identify the macro-scale model parameters given the meso-scale response energy and dissipation. This approach is also a good start for computational procedures [38] (see also [11, 28, 73]), and—at least for the case of hardening plasticity—has been given a fully variational formulation in a Hilbert space context [23]. This description has been subsequently extended to much more general cases of rate-independent behaviour [52].
In a nutshell, for an isothermal small-strain situation with strain ε = ∇^s u, from the Clausius-Duhem inequality it follows that the stress is σ = D_ε φ(ε, w, q), where w are the internal phenomenological variables [18, 22, 38], D_ε is the partial derivative w.r.t. ε, the collection q of tensors of even order which describes the specific material has to be identified, and v = −D_w φ(ε, w, q) are “thermodynamic forces” conjugate to the “thermodynamic fluxes” ẇ—the inner product ⟨v, ẇ⟩ is a rate of (dissipated) energy. The evolution of w—i.e. ẇ—is defined by the dissipation pseudo-potential ψ through a variational inequality:

ẇ ∈ ∂_w ψ(ε, ε̇, w, q).    (68)

Convex analysis and variational inequalities enter, as, for rate-independent material behaviour, ψ cannot be smooth [38]; thus it is appropriate to use the subdifferential ∂_w w.r.t. w in (68)—this is a concise form of writing a variational inequality. In order to use this idea for Bayesian identification, the definition of a generalised standard material (GSM) model has to be extended into the stochastic range. This has been done for the cases treated in [23] in [61, 66], and we follow the prescriptions pioneered there. In our case the collection q of defining tensors will be assumed to be a random variable (RV), making the model described above a stochastic GSM (SGSM). This also allows the loss of resolution inherent in upscaling to be somewhat captured in the variation of the stochastic response, next to the uncertainty inherent in the material model error—as a specific form of φ is postulated as a prior—and the identification process itself. First results were shown in [62, 67], and further details can be seen in [26, 65, 68–70]. If, as in Eq. (44), we write the totality of all equilibrium and evolution equations for the meso-structure with an operator A_μ as

A_μ(u_μ; q_μ(ω)) = f_μ,    u_μ ∈ U_μ, q_μ ∈ Q_μ,    (69)
where u_μ are all the variables needed to describe the meso-structure, and q_μ(ω) may be some parameters describing possibly different random realisations of the meso-structure, which produces an observable output—here that will be the Helmholtz free energy and the dissipation rate—y_μ(ω) = Y_μ(q_μ; u_μ) given the action f_μ, then we want to identify a macro model

A(u_M; q_M(ω)) = f_M,    u_M ∈ U_M, q_M ∈ Q_M,    (70)
where everything is analogous to Eq. (69). The output of the macro-model is y_M = Y_M(q_M; u_M)—matching Y_μ—to be used in a Bayesian fashion [40, 43, 49–51, 63] to update the macro-model parameters q_M ∈ Q_M. The identification works in the following way: given f_M ≅ f_μ, compute y_μ and predict y_M(ω). From the difference between y_M and y_μ one obtains the update for q_M by Bayesian identification methods. In Fig. 28 one may see such an upscaling setup. The meso-scale model consists of 50 × 50 = 2500 elements—shown together with the loading program f_μ in subplot
Fig. 28 a Realisation random field b Meso-Macro model and loadings
(b). The material properties are given by random fields; an example of a realisation of such a field is seen in the left subplot (a) of Fig. 28. The upscaling is to just one finite element, so there is a loss of resolution of about a factor of 50. The model to be identified is an elasto-plastic damage model, described in detail in [62, 67–70]. While the identification of elastic parameters is fairly standard also with homogenisation approaches, more interesting is the identification and upscaling of plastic and damage constants. Here we show in Fig. 29 the Bayesian identification of the bulk modulus K and the shear modulus G, where one sees that the variance practically goes to zero. This means that it is possible to find macro-scale values for those parameters which give exactly the same stored energy as the realisation of a random field for a large enough RVE. This is well known from the fact that the linear elastic strain energy is a quadratic function of the applied strain in the macro-scale case, and an integral or average of local quadratic functions on the meso-scale, which is obviously again a global quadratic function. The only restriction which is imposed on
Fig. 29 Identifying elastic constants
the macro-scale is that we require an isotropic macro-scale model. As the random field is isotropic and the meso-scale linear elastic strain energy is locally isotropic as well, by taking a large enough RVE the bounds can become arbitrarily sharp. This is in marked contrast to the irreversible behaviour such as plasticity and damage, which is highly localised, such that the upscaling also has to describe a loss of resolution. This is shown in Fig. 30 for some plasticity parameters—the probability density functions (pdfs) for the priors and posteriors are displayed—and here it is remarkable that the posteriors are so sharp. Similar results are achieved for the parameters of the damage law. The evolution of the stored energy in the left subplot (a), the plastic dissipation in the middle subplot (b), and the damage dissipation in the right subplot (c) are shown in Fig. 31. The line labelled “fine” is the heterogeneous meso-scale model, which
Fig. 30 Identifying plasticity parameters

Fig. 31 Evolution of a stored elastic energy, b plastic, and c damage dissipation
The line labelled "fine" is the heterogeneous meso-scale model, which the macro-scale model tries to match. The other lines are the median and the ±45% percentiles around the median. Again one may glean from subplot (a) that the stored energy can be captured perfectly before the onset of any irreversible mechanisms; this is not possible after irreversible mechanisms have been activated. The reason for this seems to be that in the heterogeneous meso-scale model the onset of irreversible mechanisms varies from location to location, whereas for the homogeneous macro-scale model such mechanisms can only start, or not start, in the whole region at once. This is captured in the stochastic GSM model through a much larger variance after the onset of irreversible mechanisms, which can also clearly be seen in the right subplot (c) for the damage dissipation.
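The parameter update itself can be sketched compactly. The following NumPy fragment is a minimal, hypothetical illustration of the linear (ensemble) form of the Gauss-Markov-Kalman update referenced above (cf. [49, 51]): prior samples of the macro-scale parameters q_M and the corresponding predictions y_M are corrected towards the observed meso-scale output y_μ. The function name, the sample-based covariance estimates, and the additive observation-error covariance are assumptions made for this sketch and are not taken from the actual computations behind Figs. 29-31.

```python
import numpy as np

def gmk_update(q_prior, y_pred, y_obs, obs_cov):
    """Linear (ensemble) Gauss-Markov-Kalman update of parameter samples.

    q_prior : (N, n_q) prior samples of q_M
    y_pred  : (N, n_y) corresponding model predictions y_M
    y_obs   : (n_y,)   observed meso-scale output y_mu
    obs_cov : (n_y, n_y) assumed covariance of the observation error
    """
    N = q_prior.shape[0]
    dq = q_prior - q_prior.mean(axis=0)
    dy = y_pred - y_pred.mean(axis=0)
    C_qy = dq.T @ dy / (N - 1)               # cross-covariance of q and y
    C_yy = dy.T @ dy / (N - 1) + obs_cov     # covariance of the predicted output
    K = C_qy @ np.linalg.inv(C_yy)           # linear map playing the role of a Kalman gain
    # perturbed observations keep the posterior ensemble spread consistent
    noise = np.random.multivariate_normal(np.zeros(len(y_obs)), obs_cov, size=N)
    return q_prior + (y_obs + noise - y_pred) @ K.T
```

For the elastic constants in Fig. 29 the observable y would be the stored energy over the load program; for the plastic and damage parameters of Figs. 30 and 31 it would additionally contain the dissipated energies, so the same update formula is reused with a richer observation vector.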
3.4 Nonlinear Identification

For Bayesian numerics and upscaling to work efficiently, the appropriate Bayesian identification schemes have to be made computationally fast and reliable. However, this is often not the case. Classical approaches for the estimation of the full Bayes posterior, e.g. Markov chain Monte Carlo and derived methods, are known to be slowly convergent, and thus expensive. Hence, approximate methods are necessary. The simplest approximation is to try to get the conditional mean correct, as this is the most significant descriptor of the Bayesian posterior and is often used in practice. Such algorithms result from an orthogonal decomposition in the space of random variables [40, 43, 49–51, 63], which upon restriction to linear maps gives the Gauss-Markov-Kalman filter. Due to the estimator's linearity, its use in nonlinear mechanics models has to be carefully tuned. Namely, one may implement the estimator in at least two different ways, here distinguished as online (sequential) and offline (non-sequential) approaches [78].

Offline estimation matches the complete history of data with the prediction, and hence may fail to approximate the posterior accurately. However, its implementation is less challenging, as no restarting functionality is needed in the existing finite element software. On the other hand, the online approach, see Fig. 33, provides a natural and consistent way to update the parameters in a sequential manner. The method can be embedded in the experimental machine as software which can be used, e.g., to stabilise the crack patterns in real time by altering the load conditions in the experimental fracture test, or to control the manufacturing process. From a computer implementation perspective the method is more challenging, as it requires restarting procedures in the existing finite element software. Next to this, the accuracy of the estimates is affected by the time increment chosen for updating.

Both identification procedures are validated in [78] on a phase-field model for cement mortar originating from the variational formulation of brittle fracture by Francfort and Marigo [17], with the regularisation term as in Bourdin et al. [5]. The model is described by the energy functional
E(ε, s) = ∫_G ψ_el(ε, s) dG + G_c ∫_G [ (1 − s)²/(4ℓ) + ℓ |∇s|² ] dG,  (71)
in which s is the phase-field parameter describing the state of the material; it varies smoothly between 1 (intact) and 0 (completely broken), 0 ≤ s ≤ 1. Here, ℓ is a length-scale parameter characterising the width of the diffusive approximation of a discrete crack, i.e. the width of the transition zone between the completely broken and intact phases, which tends to zero for a discrete crack. The elastic strain energy density affected by damage, ψ_el, is given by

ψ_el(ε, s) = g(s) ψ_el⁺(ε) + ψ_el⁻(ε),  (72)
in which the split into positive and negative parts is used to differentiate the fracture behaviour in tension and compression, as well as to prevent the interpenetration of the crack faces under compression. Therefore, normal contact between the crack faces is automatically dealt with. However, frictional forces as a result of tangential relative sliding of the crack faces are excluded. For more information please see [78].
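A small sketch may help to make the split (72) concrete. The function below evaluates ψ_el for a given small-strain tensor using a volumetric-deviatoric split of the energy and the degradation g(s) = s² + η; both the split and the form of g are assumptions chosen for illustration here, since the concrete choices of [78] are not repeated in this chapter.

```python
import numpy as np

def psi_el(eps, s, K, G, eta=1e-6):
    """Degraded elastic energy density psi_el(eps, s) = g(s)*psi_plus + psi_minus.

    eps  : (3, 3) small-strain tensor
    s    : phase-field value, 1 intact, 0 completely broken
    K, G : bulk and shear modulus
    Assumed volumetric-deviatoric split: volumetric expansion and the deviatoric
    part are degraded (psi_plus), volumetric compression is not (psi_minus).
    """
    tr = np.trace(eps)
    dev = eps - tr / 3.0 * np.eye(3)
    psi_plus = 0.5 * K * max(tr, 0.0) ** 2 + G * np.tensordot(dev, dev)
    psi_minus = 0.5 * K * min(tr, 0.0) ** 2
    return (s ** 2 + eta) * psi_plus + psi_minus   # assumed g(s) = s**2 + eta
```

Only ψ_el⁺ is multiplied by the degradation function, which is what differentiates tension from compression and prevents interpenetration of the crack faces, while frictional sliding remains outside the model, as noted above.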
Fig. 32 Experimental set up
Fig. 33 Online approach
To perform the phase-field modelling of fracture as described in the previous section, one requires a description of the parameters q ∈ R₊ⁿ consisting of the bulk and shear moduli K and G, the tensile strength f_t, and the fracture energy G_f. Initially, when no experimental data are available, the only information on q originates from expert knowledge, i.e. the modeller's experience. Thus, q is described as uncertain and modelled as a random variable, the probability of which is specified by an a priori probability density function (PDF) π(q). Even though both algorithms essentially work with the same experimental data (see Figs. 32 and 34), the resulting posterior estimates are not necessarily identical; see Figs. 35 and 36 for the update of the tensile strength and the fracture energy. Offline estimation results in narrower posterior PDFs compared to the online estimation strategy. There are two main reasons for this: (i) the predicted uncertainties are underestimated due to the high nonlinearity, and thus large approximation error, and (ii) the online procedure does not use all available measurement data from the load-deflection curve, but only the data at the time points marked as the arrival of new measurement information. Due to this, the online posterior parameter PDFs are wider. On the other hand, both methods give satisfactorily matching results for the posterior mean values. This means that we get the same mean estimate from both approaches, only characterised by different levels of confidence (Fig. 33).
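Schematically, and reusing the hypothetical gmk_update from the upscaling sketch above, the two strategies differ only in how the measurement data enter; run_model_full and run_model_step are placeholders for the phase-field forward model and do not correspond to actual routines of [78].

```python
import numpy as np

def identify_offline(q_prior, y_history, obs_cov, run_model_full):
    # one single update against the complete load-deflection history
    y_pred = np.stack([run_model_full(q) for q in q_prior])
    return gmk_update(q_prior, y_pred, y_history, obs_cov)

def identify_online(q_prior, data_stream, obs_cov, run_model_step):
    # sequential updates: restart the forward model at every data arrival
    q = q_prior
    for t, y_obs_t in data_stream:
        y_pred_t = np.stack([run_model_step(qi, t) for qi in q])
        q = gmk_update(q, y_pred_t, y_obs_t, obs_cov)
    return q
```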
Fig. 34 Experimental deflection-force curve of the three-point bending test
Fig. 35 Probability density function of the posterior of tensile strength a and b correspond to offline and online updates respectively
Fig. 36 Probability density function of the posterior of fracture energy a and b correspond to offline and online updates respectively
4 Conclusion

Isogeometric collocation is a relatively new computational technique enabled by isogeometric analysis and motivated by the desire to minimize the cost of quadrature associated with higher-order discretizations, with the ultimate goal of achieving higher-order accuracy at low computational cost. At the start of the DFG Priority Program 1748, the available applications of isogeometric collocation had mainly concerned linear problems; the goal of the related subproject of the Priority Program was therefore to explore and foster its application to non-linear mechanics. Thus, in the project as well as in this chapter, we have focused on the investigation of isogeometric collocation for non-linear mechanics formulations. We have first focused on non-linear material behavior and shown that isogeometric collocation can be successfully applied to the treatment of problems with hyperelastic and elastoplastic material behavior. Instability issues for the primal elastoplasticity formulation have motivated the development of mixed methods, which have also been shown to alleviate locking for incompressible elasticity. We have also tackled problems with geometric non-linearities, not only by using non-linear kinematics in combination with the hyperelastic material model, but also by developing geometrically non-linear structural elements. Finally, we have addressed problems with non-linearities due to the boundary conditions, investigating contact without and with friction. In all these applications, we have illustrated the possible issues connected with the specific features of isogeometric collocation and proposed strategies for their solution. A particularly promising path for future research in the field is, in our opinion, the establishment of a bridge between Galerkin and collocation schemes, also resulting from this subproject, which has been shown to lead to a new reduced quadrature technique for isogeometric analysis and which bears a significant potential for further exploration in the future.

The investigations for uncertainty quantification started with the idea that, when building proxy-models, typically a kind of regression is performed, with at least as many evaluations of the full deterministic model as there are coefficients to be determined. Obviously, following this line of thought, the minimum number of full deterministic model evaluations is equal to the number of coefficients to be determined, which is the case of collocation or interpolation. This was the original idea of the project, where we sought to view collocation as an imprecise numerical quadrature formula [58], with attempts to quantify the error in the more variational setting and also to determine better collocation points. This is in a similar spirit to the view taken in the isogeometric part of the project, minimising the number of integration points and choosing them as quadrature points. As it turns out, this number of full deterministic model evaluations, although minimal from a deterministic point of view, is in many cases still unacceptably high. To further reduce the number of full deterministic model evaluations, additional conditions are needed. To achieve a sparse representation, i.e. many vanishing coefficients, a probabilistic perspective is employed. This is the use of Bayesian quadrature
rules [57], where the prior is such that sparse sets of coefficients have a higher probability. Evaluations of the integrand are the "observations", and it turns out that good approximations are often achieved already with surprisingly low numbers of full deterministic model evaluations. As the Bayesian method produces a whole distribution, its mean, the conditional expectation, is often taken as a point estimate. This makes methods for computing or approximating the conditional expectation a central technique for our endeavour, and this is where a fairly extensive research effort was directed. These techniques were subsequently also used for other tasks, not just for estimating integrals. But the common thread was always to reduce the computational effort connected with full deterministic model evaluations. Upscaling [42] is one such kind of model reduction, which coarsens the information from fine-scale heterogeneous material models; here a linear approximation to the conditional expectation map was used [68, 69], the Gauss-Markov-Kalman filter [49, 51]. Extensions of this are nonlinear maps [64], used in identification problems. Another way to reduce the number of full deterministic model evaluations is to have hierarchies of models with different evaluation costs. Among the best understood instances of this are situations where the models with different evaluation costs come from discretising a partial differential equation on nested grids; in this case this leads to Multi-Level Monte Carlo methods [8]. These ideas could be combined with the previous ones on Bayesian quadrature, something which has not yet been undertaken.

Acknowledgements We gratefully acknowledge the financial support of the German Research Foundation (DFG) within the DFG Priority Program SPP 1748 "Reliable Simulation Techniques in Solid Mechanics".
References 1. C. Anitescu, Y. Jia, Y.J. Zhang, T. Rabczuk, An isogeometric collocation method using superconvergent points. Comput. Methods Appl. Mech. Eng. 284, 1073–1097 (2015) 2. F. Auricchio, L.B. Da Veiga, T.J. Hughes, A. Reali, G. Sangalli, Isogeometric collocation methods. Math. Models Methods Appl. Sci. 20(11), 2075–2107 (2010) 3. F. Auricchio, L.B. Da Veiga, T.J. Hughes, A. Reali, G. Sangalli, Isogeometric collocation for elastostatics and explicit dynamics. Comput. Methods Appl. Mech. Eng. 249, 2–14 (2012) 4. F. Auricchio, L. Beirao da Veiga, A. Buffa, C. Lovadina, A. Reali, G. Sangalli, A fully “lockingfree” isogeometric approach for plane linear elasticity problems: a stream function formulation. Comput. Methods Appl. Mech. Eng. 197(1), 160–172 (2007) 5. B. Bourdin, G.A. Francfort, J.J. Marigo, Numerical experiments in revisited brittle fracture. J. Mech. Phys. Solids 48, 797–826 (2000) 6. F.X. Briol, M. Girolami, Bayesian numerical methods as a case study for statistical data science, in Statistical Data Science, chap. 6. ed. by N. Adams, E. Cohen (World Scientific, 2018), pp. 99–110. https://doi.org/10.1142/9781786345400_0006 7. F.X. Briol, C.J. Oates, M. Girolami, M.A. Osborne, D. Sejdinovic, Probabilistic integration: a role in statistical computation? Stat. Sci. 34(1), 1–22 (2019). https://doi.org/10.1214/18STS660
8. K.A. Cliffe, M.B. Giles, R. Scheichl, A.L. Teckentrup, Multilevel Monte Carlo methods and applications to elliptic PDEs with random coefficients. Comput. Vis. Sci. 14(1), 3–15 (2011). https://doi.org/10.1007/s00791-011-0160-x 9. J. Cockayne, C. Oates, T. Sullivan, M. Girolami, Bayesian probabilistic numerical methods (2017). arXiv:1702.03673 [stat.ME]. https://arxiv.org/abs/1702.03673 10. L. De Lorenzis, J. Evans, T.J. Hughes, A. Reali, Isogeometric collocation: Neumann boundary conditions and contact. Comput. Methods Appl. Mech. Eng. 284, 21–54 (2015) 11. E.A. de Souza Neto, D. Peric, D.R. Owen, Computational Methods for Plasticity: Theory and Applications (Wiley, 2011) 12. M. Espig, W. Hackbusch, A. Litvinenko, H.G. Matthies, E. Zander, Post-processing of highdimensional data (2019). arXiv:1906.05669 [math.NA]. https://arxiv.org/abs/1906.05669 13. M. Espig, W. Hackbusch, A. Litvinenko, H.G. Matthies, E. Zander, Iterative algorithms for the post-processing of high-dimensional data. J. Comput. Phys. 410, 109,396 (2020). https://doi. org/10.1016/j.jcp.2020.109396 14. F. Fahrendorf, L. De Lorenzis, H. Gomez, Reduced integration at superconvergent points in isogeometric analysis. Comput. Methods Appl. Mech. Eng. 328, 390–410 (2018) 15. F. Fahrendorf, S. Morganti, A. Reali, T.J. Hughes, L. De Lorenzis, Mixed stress-displacement isogeometric collocation for nearly incompressible elasticity and elastoplasticity. Comput. Methods Appl. Mech. Eng. 369, 113,112 (2020) 16. R. Ferrier, M. Kadri, P. Gosselet, H.G. Matthies, A Bayesian approach for uncertainty quantification in elliptic Cauchy problem, in Virtual Design and Validation, ed. by P. Wriggers, O. Allix, C. Weißenfels. Lecture Notes in Applied and Computational Mechanics, vol. 93 (Springer, Cham, 2020), pp. 293–308. https://doi.org/10.1007/978-3-030-38156-1_15 17. G.A. Francfort, J.J. Marigo, Revisiting brittle fractures as an energy minimization problem. J. Mech. Phys. Solids 46, 1319–1342 (1998) 18. P. Germain, Q.S. Nguyen, P. Suquet, Continuum thermodynamics. Trans. ASME 50, 1010– 1020 (1983) 19. A. Gessner, J. Gonzales, M. Mahsereci, Active multi-information source Bayesian quadrature (2019). arXiv: 1903.11331 [cs.LG]. http://arxiv.org/1903.11331 20. M.B. Giles, Multilevel Monte Carlo methods. Acta Numerica 24, 259–328 (2015). https://doi. org/10.1017/S096249291500001X 21. H. Gomez, L. De Lorenzis, The variational collocation method. Comput. Methods Appl. Mech. Eng. 309, 152–181 (2016) 22. B. Halphen, Q.S. Nguyen, Sur les matériaux standards généralisés. J. de Mécanique 14, 39–63 (1975) 23. W. Han, B.D. Reddy, Plasticity: Mathematical Theory and Numerical Analysis, Interdisciplinary Applied Mathematics, vol. 9, 2nd edn. (Springer, 2013). https://doi.org/10.1007/9781-4614-5940-8 24. S. Heinrich, Multilevel Monte Carlo methods, in Large-Scale Scientific Computing. ed. by S. Margenov, J. Wa´sniewski, P. Yalamov. Lecture Notes in Computer Science. (Springer, 2001), pp. 58–67. https://doi.org/10.1007/3-540-45346-6_5 25. P. Hennig, M.A. Osborne, M. Girolami, Probabilistic numerics and uncertainty in computations. Proc. R. Soc. A 471, 20150,142 (2015). https://doi.org/10.1098/rspa.2015.0142 26. T.V. Hoang, B.V. Rosi´c, H.G. Matthies, Characterization and propagation of uncertainties associated with limited data using a hierarchical parametric probability box. PAMM 18(1), e201800,475 (2018). https://doi.org/10.1002/pamm.201800475 27. E.W. Hobson, The theory of functions of a real variable and the theory of Fourier’s series, vol. 
1 (The University Press, 1921) 28. A. Ibrahimbegovi´c, Nonlinear Solid Mechanics: Theoretical Formulations and Finite Element Solution Methods (Springer, 2009). https://doi.org/10.1007/978-90-481-2331-5 29. A. Ibrahimbegovi´c, H.G. Matthies, Probabilistic multiscale analysis of inelastic localized failure in solid mechanics. Comput. Assist. Methods Eng. Sci. 19, 277–304 (2012). http://cames. ippt.gov.pl/pdf/CAMES_19_3_5.pdf
30. C. Kadapa, W. Dettmer, D. Peri´c, Subdivision based mixed methods for isogeometric analysis of linear and nonlinear nearly incompressible materials. Comput. Methods Appl. Mech. Eng. 305, 241–270 (2016) 31. T. Karvonen, C.J. Oates, S. Särkkä, A Bayes-Sard cubature method, in Advances in Neural Information Processing Systems. ed. by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett (Curran Associates, Inc., 2018), pp. 5882–5893. http://papers.nips. cc/paper/7829-a-bayes-sard-cubature-method.pdf 32. T. Karvonen, S. Särkkä, Classical quadrature rules via Gaussian processes, in Proceedings of 27th IEEE International Workshop on Machine Learning for Signal Processing, MLSP (2017), pp. 1–6 . https://doi.org/10.1109/MLSP.2017.8168195 33. J. Kiendl, E. Marino, L. De Lorenzis, Isogeometric collocation for the Reissner-Mindlin shell problem. Comput. Methods Appl. Mech. Eng. 325, 645–665 (2017) 34. R. Kruse, N. Nguyen-Thanh, L. De Lorenzis, T.J. Hughes, Isogeometric collocation for large deformation elasticity and frictional contact problems. Comput. Methods Appl. Mech. Eng. 296, 73–112 (2015) 35. O.P. Le Maître, O.M. Knio, Spectral Methods for Uncertainty Quantification. Scientific Computation (Springer, Cham, 2010) 36. E. Marino, J. Kiendl, L. De Lorenzis, Explicit isogeometric collocation for the dynamics of three-dimensional beams undergoing finite motions. Comput. Methods Appl. Mech. Eng. 343, 530–549 (2019) 37. E. Marino, J. Kiendl, L. De Lorenzis, Isogeometric collocation for implicit dynamics of threedimensional beams undergoing finite motions. Comput. Methods Appl. Mech. Eng. 356, 548– 570 (2019) 38. H.G. Matthies, Computation of constitutive response, in Nonlinear Computational Mechanics—State of the Art. ed. by P. Wriggers, W. Wagner (Springer, 1991) 39. H.G. Matthies, Uncertainty quantification with stochastic finite elements, in Encyclopaedia of Computational Mechanics, vol. 1, ed. by E. Stein, R. de Borst, T.J.R. Hughes (Wiley, 2007). https://doi.org/10.1002/0470091355.ecm071. Part 1. Fundamentals. Encyclopaedia of Computational Mechanics 40. H.G. Matthies, Uncertainty quantification and Bayesian inversion, in Encyclopaedia of Computational Mechanics, vol. 1, 2nd edn., ed. by E. Stein, R. de Borst, T.J.R. Hughes (Wiley, 2017). https://doi.org/10.1002/9781119176817.ecm2071. Part 1. Fundamentals. Encyclopaedia of Computational Mechanics 41. H.G. Matthies, Analysis of probabilistic and parametric reduced order models (2018). arXiv: 1807.02219 [math.NA]. http://arxiv.org/1807.02219 42. H.G. Matthies, A. Ibrahimbegovi´c, Stochastic multiscale coupling of inelastic processes in solid mechanic, in Multiscale Modelling and Uncertainty Quantification of Materials and Structures, vol. 3, ed. by M. Papadrakakis, G. Stefanou (Springer, 2014), pp. 135–157. https:// doi.org/10.1007/978-3-319-06331-7_9 43. H.G. Matthies, A. Litvinenko, B. Rosi´c, E. Zander, Bayesian parameter estimation via filtering and functional approximations (2016). arXiv: 1611.09293 [math.NA]. http://arxiv.org/abs/ 1611.09293 44. H.G. Matthies, R. Ohayon, Analysis of parametric models — linear methods and approximations (2018). arXiv: 1806.01101 [math.NA]. http://arxiv.org/1806.01101 45. H.G. Matthies, R. Ohayon, Analysis of parametric models for coupled systems (2018). arXiv: 1806.07255 [math.NA]. http://arxiv.org/1806.07255 46. H.G. Matthies, R. Ohayon, Analysis of parametric models – linear methods and approximations. Adv. Comput. Math. 45, 2555–2586 (2019). 
https://doi.org/10.1007/s10444-019-09735-4 47. H.G. Matthies, R. Ohayon, Parametric models analysed with linear maps (2019). arXiv: 1911.10155 [math.NA]. http://arxiv.org/1911.10155 48. H.G. Matthies, R. Ohayon, Analysis of parametric models for coupled systems, in IUTAM Symposium on Model Order Reduction of Coupled Systems, ed. by J. Fehr, B. Haasdonk. IUTAM Bookseries, vol. 36 (Springer, 2020), pp. 25–39. https://doi.org/10.1007/978-3-03021013-7_2
49. H.G. Matthies, E. Zander, B.V. Rosi´c, A. Litvinenko, Parameter estimation via conditional expectation: a Bayesian inversion. Adv. Model. Simul. Eng. Sci. 3, 24 (2016). https://doi.org/ 10.1186/s40323-016-0075-7 50. H.G. Matthies, E. Zander, B.V. Rosi´c, A. Litvinenko, O. Pajonk, Inverse problems in a Bayesian setting (2015). arXiv: 1511.00524 [math.PR]. http://arxiv.org/abs/1511.00524 51. H.G. Matthies, E. Zander, B.V. Rosi´c, A. Litvinenko, O. Pajonk, Inverse problems in a Bayesian setting, in Computational Methods for Solids and Fluids — Multiscale Analysis, Probability Aspects and Model Reduction, ed. by A. Ibrahimbegovi´c. Computational Methods in Applied Sciences, vol. 41 (Springer, 2016), pp. 245–286. https://doi.org/10.1007/978-3-319-279961_10 52. A. Mielke, T. Roubiˇcek, Rate Independent Systems: Theory and Application (Springer, 2015) 53. M. Montardini, G. Sangalli, L. Tamellini, Optimal-order isogeometric collocation at Galerkin superconvergent points. Comput. Methods Appl. Mech. Eng. 316, 741–757 (2017) 54. C.J. Oates, J. Cockayne, D. Prangle, T.J. Sullivan, M. Girolami, Optimality criteria for probabilistic numerical methods (2019). arXiv:1901.04326 [stat.ME]. https://arxiv.org/abs/1901. 04326 55. C.J. Oates, M. Girolami, N. Chopin, Control functionals for Monte Carlo integration (2016). arXiv:1410.2392 [stat.ME]. https://arxiv.org/abs/1410.2392 56. C.J. Oates, T.J. Sullivan, A modern retrospective on probabilistic numerics. Stat. Comput. 29, 1335–1351 (2019). https://doi.org/10.1007/s11222-019-09902-z 57. A. O’Hagan, Bayes-Hermite quadrature. J. Stat. Plan. Inference 29(3), 245–260 (1991) 58. J. Rang, H.G. Matthies, Variational formulation with error estimates for uncertainty quantification via collocation, regression, and sprectral projection. PAMM 17, 79–82 (2017). https:// doi.org/10.1002/pamm.201710024 59. C.E. Rasmussen, Z. Ghahramani, Bayesian Monte Carlo, in Proceedings of the 15th International Conference on Neural Information Processing Systems, NIPS’02, ed. by S. Becker (MIT Press, 2002), pp. 505–512. https://doi.org/10.5555/2968618.2968681 60. A. Reali, T.J. Hughes, An introduction to isogeometric collocation methods, in Isogeometric Methods for Numerical Simulation (Springer, 2015), pp. 173–204 61. B. Rosi´c, H.G. Matthies, Variational theory and computations in stochastic plasticity. Arch. Comput. Methods Eng. 22(3), 457–509 (2015). https://doi.org/10.1007/s11831-014-9116-x 62. B. Rosi´c, M.S. Sarfaraz, H.G. Matthies, A. Ibrahimbegovi´c, Stochastic upscaling of random microstructures. PAMM 17, 869–870 (2017). https://doi.org/10.1002/pamm.201710401 63. B. Rosi´c, J. Sýkora, O. Pajonk, A. Kuˇcerová, H.G. Matthies, Comparison of numerical approaches to Bayesian updating, in Computational Methods for Solids and Fluids — Multiscale Analysis, Probability Aspects, and Model Reduction, ed. by A. Ibrahimbegovi´c. Computational Methods in Applied Sciences, vol. 41 (Springer, 2016), pp. 427–461. https://doi.org/ 10.1007/978-3-319-27996-1_16 64. B.V. Rosi´c, Stochastic state estimation via incremental iterative sparse polynomial chaos based Bayesian-Gauss-Newton-Markov-Kalman filter (2019). arXiv:1909.07209 [math.OC]. https:// arxiv.org/abs/1909.07209 65. B.V. Rosi´c, S.K. Shivanand, T.V. Hoang, H.G. Matthies, Iterative spectral identification of bone macroscopic properties described by a probability box. PAMM 18(1), e201800,404 (2018). https://doi.org/10.1002/pamm.201800404 66. B. 
Rosi´c, Variational Formulations and Functional Approximation Algorithms in Stochastic Plasticity of Materials. Ph.D. Thesis, TU Braunschweig (2012). http://www.digibib.tu-bs.de/? docid=00052794 67. M.S. Sarfaraz, B. Rosi´c, H.G. Matthies, Stochastic upscaling of heterogeneous materials. PAMM 16, 679–680 (2016). https://doi.org/10.1002/pamm.201610328 68. S.M. Sarfaraz, B.V. Rosi´c, H.G. Matthies, A. Ibrahimbegovi´c, Stochastic Upscaling via Linear Bayesian Updating, in Multiscale Modeling of Heterogeneous Structures, ed. by J. Sori´c, P. Wriggers, O. Allix. Lecture Notes in Applied and Computational Mechanics, vol. 86 (Springer, 2018), pp. 163–181. https://doi.org/10.1007/978-3-319-65463-8_9
69. S.M. Sarfaraz, B.V. Rosi´c, H.G. Matthies, A. Ibrahimbegovi´c, Stochastic upscaling via linear Bayesian updating. Coupled Syst. Mech. 7(2), 211–232 (2018). https://doi.org/10.12989/csm. 2018.7.2.211 70. S.M. Sarfaraz, B.V. Rosi´c, H.G. Matthies, A. Ibrahimbegovi´c, Bayesian stochastic multi-scale analysis via energy considerations (2019). arXiv:1912.03108 [math.ST]. Submitted to AMSES https://arxiv.org/abs/1912.03108 71. R. Sauer, L.D. Lorenzis, A computational contact formulation based on surface potentials. Comput. Methods Appl. Mech. Eng. 253, 369–395 (2013) 72. J. Simo, K. Pister, Remarks on rate constitutive equations for finite deformation. Comput. Methods Appl. Mech. Eng. 46, 201–215 (1984) 73. J.C. Simo, T.J. Hughes, Computational Inelasticity, vol. 7 (Springer Science & Business Media, 2006) 74. G. Stabile, B. Rosi´c, Bayesian identification of a projection based reduced order model for computational fluid dynamics. Comput. Fluids 201, 104,477 (2020). https://doi.org/10.1016/ j.compfluid.2020.104477 75. J. Vondˇrejc, H.G. Matthies, Accurate computation of conditional expectation for highly nonlinear problems (2018). arXiv: 1806.03234 [math.NA]. http://arxiv.org/1806.03234 76. J. Vondˇrejc, H.G. Matthies, Accurate computation of conditional expectation for highly nonlinear problems. SIAM/ASA J. Uncertain. Quantif. 7, 1349–1368 (2019) 77. O. Weeger, S.K. Yeung, M.L. Dunn, Isogeometric collocation methods for Cosserat rods and rod structures. Comput. Methods Appl. Mech. Eng. 316, 100–122 (2017) 78. T. Wu, B. Rosi´c, L. De Lorenzis, H.G. Matthies, Parameter identification for phase-field modeling of fracture: a Bayesian approach with sampling-free update. Comput. Mech. (submitted in 2020) 79. X. Xi, F.X. Briol, M. Girolami, Bayesian quadrature for multiple related integrals (2018). arXiv:1801.04153 [stat.CO]. https://arxiv.org/abs/1801.04153 80. X. Xi, F.X. Briol, M. Girolami, Bayesian quadrature for multiple related integrals, in Proceedings of the 35th International Conference on Machine Learning, vol. 80 (2018), pp. 5373–5382. http://proceedings.mlr.press/v80/xi18a.html
Approximation Schemes for Materials with Discontinuities

Sören Bartels, Marijo Milicevic, Marita Thomas, Sven Tornquist, and Nico Weber
Abstract Damage and fracture phenomena are related to the evolution of discontinuities both in space and in time. This contribution investigates and devises methods from mathematical and numerical analysis to quantify them: Suitable mathematical formulations and time-discrete schemes for problems with discontinuities in time are presented. For the treatment of problems with discontinuities in space, the focus lies on FE-methods for minimization problems in spaces of functions of bounded variation. The developed methods are used to introduce fully discrete schemes for a rate-independent damage model and for the viscous approximation of a model for dynamic phase-field fracture. Convergence of the schemes is discussed.
1 Introduction

This contribution discusses methods from mathematical and numerical analysis developed for the numerical treatment of damage and fracture models used in engineering applications. Phenomenologically, damage and fracture processes lead to a weakening of the material's ability to bear external loads, to a degradation of internal stresses, and ultimately result in its complete failure. Such material defects appear as discontinuities in the spatial domain, across which kinematic quantities
associated with the deformable body, such as the deformation or the displacement field, may feature jumps with respect to the spatial coordinates. But often cracks also seem to form or to propagate instantaneously in previously undamaged regions, i.e., material points seem to jump in one instant from being sound to being damaged. In other words, the evolution of material defects is not only accompanied by discontinuities in space but also by discontinuities with respect to time. This effect is already reflected by Griffith's fracture criterion for brittle, quasistatic crack growth [48], which states that, in a body Ω_{s0} with a pre-existing crack of length s0, crack growth sets in as soon as the energy released from the body by potential crack growth reaches a critical value given by the fracture toughness Gc, i.e.,

− dΠ(Ω_{s0+s})/ds |_{s=0} = Gc.  (1)
Here, Π(Ω_{s0+s}) denotes the sum of the strain energy and of the energy due to the applied forces of the body with a crack extended by the length s. The left-hand side of (1) defines the energy release rate. The growth of the crack length s is thus formally characterized by the conditions
ṡ ≥ 0,  (2a)
dΠ(Ω_{s0+s})/ds |_{s=0} + Gc ≥ 0,  (2b)
ṡ ( dΠ(Ω_{s0+s})/ds |_{s=0} + Gc ) = 0,  (2c)
where ṡ denotes the time derivative. Condition (2a) states that the crack either keeps its position (ṡ = 0) or grows (ṡ > 0) with time. By condition (2b) the values of the energy release rate can never exceed the fracture toughness. Condition (2c) ensures that crack growth at any positive rate ṡ > 0 is possible if and only if (1) is satisfied. We remark here that (2c) holds true on a formal level only, where it is assumed that the terms involved are sufficiently regular. To simplify the above explanations we have considered for (1) and (2) a two-dimensional setting, i.e. Ω_{s0}, Ω_{s0+s} ⊂ R², so that the crack is a one-dimensional subset and s0, resp. s0 + s, indicates the position of the crack tip along this line. More general crack geometries in higher space dimensions can be described by the Francfort-Marigo model for brittle fracture, cf., e.g., [43], which is formulated as a minimum problem for the total energy of the body Ω ⊂ Rᵈ,

E(Γ_c) = Π(Ω∖Γ_c) + ∫_{Γ_c} Gc dS,  (3)
given as the sum of the elastic bulk energy of the body and the crack surface energy along the crack Γ_c. However, the propagation of general crack geometries, which may also involve effects like crack kinking and branching, is hard to handle in terms of sharp (d − 1)-dimensional manifolds in a d-dimensional body from the viewpoint of mathematical analysis and numerics. Therefore, it has become a well-established
method to regularize the energy with the sharp (d − 1)-dimensional crack surfaces by an energy functional that locates the material degradation in narrow d-dimensional volumes. In the spirit of generalized standard materials [53] this is done with the aid of an internal variable, which indicates the state of material degradation in each point of the domain Ω ⊂ Rᵈ. This so-called damage or phase-field variable z : [0, T] × Ω → [0, 1], with z(t, x) = 1 if the material point x ∈ Ω is undamaged at time t ∈ [0, T] and z(t, x) = 0 in case of maximal damage, is a phase indicator for the damaged and undamaged phase in the body and can be understood as the volume fraction of undamaged material in each point x ∈ Ω. In this way, a possible regularization of (3) is given by the Ambrosio-Tortorelli energy functional [45]

E(t, u, z) := ∫_Ω ( (1/2)(z² + η) Ce(u) : e(u) − f(t)·u ) dx + Gc ∫_Ω ( (1/2)|∇z|² + (1/2)(1 − z)² ) dx.  (4)

Herein, the second integral term can be seen as the volumetric regularization of the crack surface energy in (3). With a small parameter η > 0 the first integral term is an approximation of the elastic bulk energy Π(Ω∖Γ_c) in (3), which in this case takes the form Π(Ω∖Γ_c) := ∫_{Ω∖Γ_c} ( (1/2) Ce(u) : e(u) − f(t)·u ) dx with the displacements u : [0, T] × Ω → Rᵈ and a time-dependent external volume load f(t).

In this work we discuss methods from mathematical and numerical analysis that allow us to numerically handle the discontinuities in space and time exhibited by solutions of damage and fracture models. This will involve energies of the form
E(t, u, z) := ∫_Ω ( (1/2) w_C(z) Ce(u) : e(u) − f(t)·u ) dx + G(z),  (5)
where the degradation function w_C allows for generalizations of the one in (4) and G(z) is a gradient regularization for the damage variable. It can be a volumetric regularization of the crack surface energy as in (4), and thus our results apply to models for phase-field fracture. But we will also address general models for volume damage with
G(z) := ∫_Ω (1/r) |∇z|ʳ dx   for r ∈ (1, ∞),  (6a)
a gradient in the sense of Sobolev spaces and, in the case r = 1,

G(z) := |Dz|(Ω),  (6b)
the total variation of z in Ω, leading to a regularization in BV(Ω), the space of functions of bounded variation; see Sect. 3.1.1 for more details. While gradient regularizations of type (6a) prevent jumps of z in Ω across (d − 1)-dimensional manifolds, such jumps are possible in the limiting case r = 1. Hence, a BV-regularization of type (6b) can be used to sharply distinguish between undamaged and damaged zones in a material. As already indicated along with (2), solutions of problems related to damage and fracture also exhibit discontinuities with respect to time.
In particular, the time-derivative appearing in (2) cannot be understood in the classical sense, but only in the sense of measures, as solutions in general can be shown to be of bounded variation in time only. This low regularity in time is a general feature of rate-independent evolution problems. Thus, in order to have (2c) well-defined, better regularity is required for dΠ(Ω_{s0+s})/ds in order to compensate for the low regularity of ṡ. However, this cannot be expected in general. Therefore, alternative formulations of the evolution problem are required, which can handle the low regularity in time. Such formulations suited for discontinuities in time, and time-discrete approximation schemes thereof, will be the topic of Sect. 2. Subsequently, Sect. 3 is devoted to finite-element methods for problems with discontinuities in space, in particular to the FE-approximation of minimization problems for functionals involving the BV-regularization (6b). Based on the developed FE-algorithms and on the methods of Sect. 2 we will present in Sect. 3.5 an approximation result for a rate-independent damage model and also address the challenges related to the convergence proof of the fully discrete scheme. Finally, in Sect. 4 we present a fully discrete scheme for the viscous approximation of dynamic phase-field fracture in visco-elastic materials and prove convergence of the method. Here, in addition to the elastic bulk energy of the type (5), also the kinetic energy of the body and further viscous potentials will play a role.
2 Mathematical Formulations to Handle Discontinuities in Time

Before introducing abstract mathematical concepts to handle discontinuities in time, we once more address Griffith's model for brittle fracture (2) and motivate with this example some terms and ideas that will reappear in a more general setting later on. At first, we discuss that (2) is a rate-independent process. Rate-independent processes are characterized by the fact that reparametrizations in time of the given data lead to solutions of the problem which are reparametrized in the same way. Thus, if the external loadings change twice as fast, then the solution of the problem will respond twice as fast, i.e., here the crack tip position s will move twice as fast. This effect can be described mathematically by introducing a convex, lower semicontinuous dissipation potential R1 that is positively 1-homogeneous to reflect rate-independence, i.e., for all admissible velocities v and all constants λ > 0 it is

R1(λv) = λ R1(v)   and   R1(0) = 0.  (7)
Indeed, for (2) we may set R1 (v) := Gc |v| + χ[0,∞) (v), where the characteristic function of the interval [0, ∞) ensures that velocities are non-negative, i.e., χ[0,∞) (v) = 0 if v ≥ 0 and χ[0,∞) (v) = ∞ if v < 0. We remark here that R1 is not classically differentiable in v = 0, but generalized derivatives can be defined in the sense of subdifferentials of convex functionals, which here takes the form
∂R1(v) :=
  {Gc}          if v > 0,
  (−∞, Gc]      if v = 0,
  ∅             if v < 0.  (8)
For the above choice of R1 it can easily be checked that (7) is satisfied, and from (8) one can see that ∂R1 is positively homogeneous of degree 0, i.e., ∂R1(λv) = ∂R1(v) for all admissible v and all λ ≥ 0. Hence, reparametrizations in time do not change the evolution law. We now check how (2) can be reformulated in terms of the dissipation potential R1. With the above choice of R1, by multiplication with velocities v ≥ 0, (2b) can be formally rewritten as

v dΠ(Ω_{s0+s})/ds |_{s=0} + R1(v) ≥ 0   for all admissible velocities v with v ≥ 0.  (9)
This provides a local stability condition, restricted to test functions with v ≥ 0. Moreover, given that the terms involved are sufficiently regular, (2c) can be integrated over [0, t] for any t ∈ [0, T]. A formulation in the spirit of (9), together with a time-integrated version of (2c) adapted to the context of phase-field fracture, will be obtained for limit solutions of the approximation procedure in Sect. 4. We further observe that (9) can be assumed to hold true for all admissible velocities v such that the expression v dΠ(Ω_{s0+s})/ds |_{s=0} takes a finite value. Then the inequality is true also if v < 0, since then R1(v) = ∞. If the terms in (2c) are suitably regular, the subtraction of (2c) from (9) results in a variational inequality
(v − ṡ) dΠ(Ω_{s0+s})/ds |_{s=0} + R1(v) − R1(ṡ) ≥ 0   for all admissible velocities v,  (10)

where we also used that R1(ṡ) = Gc ṡ by (2a). In view of the convexity of R1, and provided that all terms are sufficiently regular, the variational inequality (10) is equivalent to the subdifferential inclusion

− dΠ(Ω_{s0+s})/ds |_{s=0} ∈ ∂R1(ṡ).  (11)
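On a formal level, (2) and (11) can be mimicked numerically by a time-incremental minimization in the single scalar variable s: at every load step the new crack length minimizes the sum of the potential energy and the dissipated energy Gc (s − s_prev) among admissible lengths s ≥ s_prev. The brute-force sketch below does exactly this on a grid; the quadratic toy potential Π and the load program are assumptions made only to have something explicit to evaluate.

```python
import numpy as np

def crack_increment(Pi, s_prev, Gc, s_max, n=2001):
    """One rate-independent increment: minimize Pi(s) + Gc*(s - s_prev) over s >= s_prev."""
    s = np.linspace(s_prev, s_max, n)
    return s[int(np.argmin(Pi(s) + Gc * (s - s_prev)))]

def Pi(s, t):
    # assumed toy potential energy of the cracked body under a load growing with t
    return 0.5 * (s - 2.0 * t) ** 2 - 0.5 * (2.0 * t) ** 2

Gc, s = 1.0, 0.0
for t in np.linspace(0.0, 2.0, 21):
    s = crack_increment(lambda x: Pi(x, t), s, Gc, s_max=5.0)
    # the crack starts to grow once -dPi/ds = 2*t - s reaches Gc, cf. (1) and (2)
```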
We point out once more that the above discussion has been carried out on a formal level, in a pointwise sense in space and time, always assuming that all quantities are sufficiently regular. For an elaborate mathematical analysis the reader is referred, e.g., to [58].

Solution concepts for rate-independent systems: Let us now turn to a general setting that allows us also to treat models for rate-independent damage and phase-field fracture in elastically deformable bodies. The state of the body is then characterized by a kinematic variable, such as the displacement field u, and by an internal variable z responsible for the dissipative process. It is assumed that the evolution of z is rate-independent and that u evolves in a quasi-static way. The evolution of the pair (u, z) can then be given as a variational formulation in suitable Banach spaces U and Z
with duals U* and Z*, based on an energy functional E : [0, T] × U × Z → R ∪ {∞}, e.g., of the form (5), and on a convex, lower semicontinuous and positively 1-homogeneous dissipation potential R1 : Z → [0, ∞]:

⟨D_u E(t, u(t), z(t)), ũ⟩_U = 0   for all ũ ∈ U,  (12a)
⟨D_z E(t, u(t), z(t)), z̃ − ż(t)⟩_Z + R1(z̃) − R1(ż(t)) ≥ 0   for all z̃ ∈ Z  (12b)

for a.e. t ∈ (0, T). Here, ⟨·, ·⟩_U and ⟨·, ·⟩_Z denote the dual pairings defined for elements from the Banach spaces U, resp. Z, and their duals. Moreover, D_u E(t, u(t), z(t)) ∈ U* and D_z E(t, u(t), z(t)) ∈ Z* denote the variational derivatives of the energy functional with respect to the variables u and z. In this way, (12a) provides a weak formulation of the quasistatic momentum balance and (12b) characterizes the rate-independent evolution of z in terms of a variational inequality alike (10). Again, it has to be assumed that all quantities in (12b) are sufficiently regular in order to have the dual pairing well-defined. Since this regularity cannot be provided in general, one is interested in reformulations of (12) that avoid time-derivatives and differentials. Such reformulations are based on additional convexity assumptions for the energy functional and they can be deduced as follows:

• Multiplying (12b) by h > 0 and using the test function z̃ := v/h with v ∈ Z leads in the limit h → 0 to the local stability condition

⟨D_z E(t, u(t), z(t)), v⟩_Z + R1(v) ≥ 0   for all v ∈ Z,  (13)
i.e., the analogon of (2b). In fact, with R1(0) = 0 one finds that (13) is equivalent to −D_z E(t, u(t), z(t)) ∈ ∂R1(0). Choosing ũ := û − u(t) for û ∈ U in (12a) and z̃ := ẑ − z(t) for ẑ ∈ Z in (12b), summing up, and exploiting convexity relations of the energy functional results in the global stability condition

E(t, u(t), z(t)) ≤ E(t, û, ẑ) + R1(ẑ − z(t))   for all (û, ẑ) ∈ U × Z  (14)
in case that E(t, ·, ·) : U × Z → R ∪ {∞} is convex. Instead, if the energy is only separately convex, i.e., E(t, ·, z̃) : U → R ∪ {∞} is convex for any fixed z̃ ∈ Z and E(t, ũ, ·) : Z → R ∪ {∞} is convex for any fixed ũ ∈ U, one finds two separate stability conditions for u and z, i.e.,

minimality:    E(t, u(t), z(t)) ≤ E(t, û, z(t))   for all û ∈ U,  (15a)
semistability: E(t, u(t), z(t)) ≤ E(t, u(t), ẑ) + R1(ẑ − z(t))   for all ẑ ∈ Z.  (15b)

• Testing (12b) with the test functions z̃ := 0 and z̃ := 2ż(t) results in

⟨D_z E(t, u(t), z(t)), ż(t)⟩_Z + R1(ż(t)) = 0,  (16)
i.e., the analogon of (2c). Testing (12a) with ũ := u̇(t), provided that u̇(t) ∈ U is an admissible test function, summing the result with (16), and integrating over (t1, t2) for any t1 < t2 ∈ [0, T] results in the energy-dissipation balance

E(t2, u(t2), z(t2)) + Var_{R1}(z; [t1, t2]) = E(t1, u(t1), z(t1)) + ∫_{t1}^{t2} ∂_t E(t, u(t), z(t)) dt.  (17)

Here, ∂_t E(·, u, z) denotes the partial time-derivative of the energy functional and
Var_{R1}(z; [t1, t2]) := sup_{all partitions of [t1,t2]} Σ_{k=1}^{N} R1(z(t_k) − z(t_{k−1})) denotes the total variation in time with respect to the potential R1. The above deduction (13)–(17) motivates:

Definition 1 (Notions of solution for rate-independent systems) Let U, Z be Banach spaces, E : [0, T] × U × Z → R ∪ {∞} an energy functional, and R1 : Z → [0, ∞] a convex, lower semicontinuous, and positively 1-homogeneous dissipation potential. The triple (U × Z, R1, E) is called a rate-independent system. A pair (u, z) : [0, T] → U × Z is called

1. local solution of (U × Z, R1, E), if (12a) and (13) are satisfied for almost all t ∈ (0, T) and if (17) holds true as an upper energy-dissipation estimate, i.e., with '≤' in (17), for all t1 ≤ t2 ∈ [0, T];
2. semistable energetic solution of (U × Z, R1, E), if (15a) is satisfied for almost all t ∈ (0, T), if (15b) is valid for all t ∈ [0, T], and if (17) holds true as an upper energy-dissipation estimate for t1 = 0 and for all t2 ∈ [0, T], i.e.:

E(t2, u(t2), z(t2)) + Diss_{R1}(z; [0, t2]) ≤ E(0, u_0, z_0) + ∫_{0}^{t2} ∂_t E(t, u(t), z(t)) dt;  (18)

3. energetic solution of (U × Z, R1, E), if (14) and (17) are satisfied for all t ∈ [0, T].

We also refer to the monograph [66] for more details on the theory of rate-independent processes, cf. also [66, Definition 3.3.2] for further solution concepts.

Remark 1 (Approximation methods for rate-independent systems) In the setting of infinite-dimensional Banach spaces U × Z it has been shown in [74] that semistable energetic solutions for rate-independent systems (Definition 1, Item 2) can be obtained via alternate minimization: Given a partition τ := {t_τ^k, k = 0, …, N_τ} of the time interval [0, T] with 0 = t_τ^0 ≤ t_τ^1 ≤ ⋯ ≤ t_τ^{N_τ} = T and starting with the given initial datum (u_τ^0, z_τ^0) := (u_0, z_0) ∈ U × Z, for each k ∈ {1, …, N_τ} find a pair (u_τ^k, z_τ^k) via the following staggered time-discrete scheme:

u_τ^k = argmin_{ũ ∈ U} E(t_τ^k, ũ, z_τ^{k−1}),  (19a)
z_τ^k ∈ argmin_{z̃ ∈ Z} ( E(t_τ^k, u_τ^k, z̃) + R1(z̃ − z_τ^{k−1}) ).  (19b)
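To make the structure of (19a)-(19b) explicit, the following fragment runs the staggered scheme on a deliberately simple finite-dimensional toy problem with scalar u and z: a degraded spring energy that is separately convex in u and z, and R1(v) = ρ1|v| restricted to non-positive increments so that z can only decrease (damage is unidirectional). The particular energy, the load program, and the grid search in the z-step are assumptions of this sketch, not ingredients of the analysis in [74].

```python
import numpy as np

def load(t):
    return t                      # assumed monotonically growing force

def E(t, u, z, k=1.0, eta=1e-3, beta=0.5):
    # separately convex toy energy: degraded spring plus a term penalizing damage
    return 0.5 * (z**2 + eta) * k * u**2 - load(t) * u + 0.5 * beta * (1.0 - z)**2

def staggered_step(t, z_prev, rho1=0.1, k=1.0, eta=1e-3, beta=0.5):
    # (19a): minimize E(t, ., z_prev) in u -- explicit for the quadratic toy energy
    u = load(t) / ((z_prev**2 + eta) * k)
    # (19b): minimize E(t, u, .) + R1(. - z_prev) over z <= z_prev (unidirectionality)
    z_grid = np.linspace(0.0, z_prev, 401)
    vals = [E(t, u, z, k, eta, beta) + rho1 * (z_prev - z) for z in z_grid]
    return u, float(z_grid[int(np.argmin(vals))])

u, z = 0.0, 1.0
for t in np.linspace(0.0, 1.5, 16):
    u, z = staggered_step(t, z)   # one staggered step per partition point
```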
Convergence proofs are based on deriving a discrete version of the defining properties ((15) and (17) with t1 = 0 and ‘≤’) for semistable energetic solutions, where the discrete upper energy-dissipation estimate provides suitable compactness properties,
and by subsequently passing to the limit from time-discrete to time-continuous in the defining properties. Here, in particular in the case of non-smooth and discontinuous dissipation potentials, as for damage and fracture problems, the limit passage in (15b) requires techniques for (evolutionary) Γ-convergence such as the construction of a mutual recovery sequence, cf. e.g. [66] and [74, Hyp. 2.5] for further details. The limit passage in the upper energy-dissipation estimate is based on weak lower semicontinuity properties of the functionals and the well-preparedness of the initial data. Instead, if the upper energy-dissipation estimate were required to hold for all t1 ∈ [0, T] on the right-hand side of (17), then convergence of the energy along sequences of approximate solutions would be needed. However, for non-smooth and non-linear energy functionals, as is often the case for damage and fracture problems, this property is not available a priori. In some cases, as in Sect. 4, energy convergence along approximate solutions can be obtained a posteriori, e.g., if the energy-dissipation estimate can be confirmed to be valid as an equality.

Energetic solutions of rate-independent systems (Definition 1, Item 3) can be obtained by approximation with solutions of a time-discrete scheme that minimizes simultaneously with respect to the pair (u, z), i.e., for all k ∈ {1, …, N_τ} it is

(u_τ^k, z_τ^k) ∈ argmin_{(ũ,z̃) ∈ U×Z} ( E(t_τ^k, ũ, z̃) + R1(z̃ − z_τ^{k−1}) ).  (20)

Based on this minimality property such an approximation procedure has been carried out successfully also for energy functionals with weaker convexity properties than required in the deduction (13)–(17). However, if properties (12a) and (13) are employed to determine solutions of (20), then convexity of E(t, ·, ·) in the pair (u, z) is needed to find energetic solutions, whereas separate convexity will in general result in semistable energetic solutions only. Indeed, algorithms based on an FE-discretization in space use (12a) and (13), so that only the approximation of semistable energetic solutions can be expected. In fact, for many applications in damage and phase-field fracture the energy functionals are assumed to be separately convex, but in general lack joint convexity in the pair (u, z), because of multiplicative terms of the form w_C(z)W(e(u)) appearing in the bulk elastic energy, cf. (4) and (5).

Recently the concept of Balanced Viscosity solutions for rate-independent systems has gained attention, see, e.g., [67]. This notion of solution can be obtained by introducing an additional viscous dissipation for z, weighted with a parameter ε. As ε → 0 a selection of solutions of the resulting rate-independent system is made, which is characterized by a local stability condition and an energy-dissipation balance that features, in comparison to (18), additional dissipative terms which become active in particular in the jump regimes of the solutions, see, e.g., also [61] in the setting of damage models. It has been shown in [5, 59] that solutions of this type can be obtained for phase-field fracture problems with the aid of an alternating multi-step algorithm in time. In [3] the convergence of alternating single- and multi-step algorithms in combination with FE-discretization is analyzed in the setting of L²-gradient flows for the Ambrosio-Tortorelli phase-field fracture model with (non-vanishing) viscous regularization of the damage variable.
For this model the authors show that solutions of the limit problem that satisfy the unidirectionality constraint z(s) ≥ z(t) for all
s ≤ t ∈ [0, T] can be approximated by a posteriori truncated solutions of the discrete, unconstrained problems. Finally, we also refer to [1], where the P1 FE-approximation of the quasistatic evolution in terms of semistable energetic solutions is analyzed for the Ambrosio-Tortorelli functional. It is pointed out that the study of the viscous problem as an L²-gradient flow relies on improved regularity results for elliptic systems [52]. For nonlinearly coupled damage problems this restricts the results to d = 2 and to a quasistatic evolution of the displacements. This is why we will employ a different concept for the visco-elastodynamic problem in Sect. 4; it is rather based on the ideas below.

Concepts for rate-independent systems coupled with rate-dependent processes: While the mathematical analysis of purely rate-independent systems (U × Z, R1, E) is well-established by now, results for rate-independent systems coupled with other rate-dependent processes are much less developed. Systems with such a mixed type of evolution arise in mechanics, e.g., if, instead of the quasistatic law (12a), the evolution of the displacements is assumed to be dynamic or subject to dissipative effects in a visco-elastic material, while the evolution of the internal variable z is still governed by a rate-independent dissipation potential R1. In this case, (12a) is replaced by (a weak formulation of) the momentum balance

ρ ü(t) + D_u E(t, u(t), z(t)) + DV(u̇(t)) = 0   in U*,  (21)
with ρ > 0 the mass density and V : U → [0, ∞) a dissipation potential of superlinear growth such that V(0) = 0. In [71] first steps towards the analysis of such coupled rate-independent/rate-dependent systems were made for the case that E is separately convex in u and z, and that V is quadratic. This type of viscous dissipation potential covers Kelvin-Voigt rheology, see also Sect. 4. In this setting [71] provides a notion of solution that consists of the weak formulation of the momentum balance (21), coupled with the semistability inequality (15b), and complemented by an upper energy-dissipation estimate in analogy to (18), see (23) below. In [74] this concept was generalized to non-smooth energy functionals with (lower order) non-convexities based on the notion of Fréchet subdifferentials, also allowing for non-quadratic, convex, lower semicontinuous dissipation potentials V : U → [0, ∞). We denote by K : W → [0, ∞), K(u̇) := ∫_Ω (ρ/2)|u̇|² dx, the kinetic energy, with W a Hilbert space and U a separable Banach space such that U ⊂ W ⊂ U* form an evolution triple. In [74] two different cases are distinguished: the case ρ ≡ 0 in Ω, where inertia is disregarded, and the case ρ > 0 in Ω, where inertia is present. In the first case the coupled system forms a gradient system, whereas the second case is a damped inertial system. The following definition is used:

Definition 2 ([74, Definitions 3.1 and 3.4], semistable energetic solutions for coupled rate-independent/rate-dependent systems) Let U, Z be separable Banach spaces, W a Hilbert space, E : [0, T] × U × Z → R ∪ {∞} an energy functional, K : W → [0, ∞) the kinetic energy functional, and R1 : Z → [0, ∞) and V : U → [0, ∞) convex and lower semicontinuous dissipation potentials with R1 positively 1-homogeneous and V of superlinear growth. A coupled rate-independent/rate-dependent system
characterized by the tuple (U, Z, V, R1, E) is called a gradient system, and a coupled system characterized by (U, W, Z, V, K, R1, E) is called a damped inertial system. A pair (u, z) : [0, T] → U × Z is called a semistable energetic solution of (U, Z, V, R1, E), resp. (U, W, Z, V, K, R1, E), if the following conditions are satisfied:

• subdifferential inclusion for u for almost all t ∈ (0, T):

ρ ü(t) + ∂_u E(t, u(t), z(t)) + ∂V(u̇(t)) ∋ 0   in V*,  (22)
i.e., ρ ü(t) + ξ(t) + ω(t) = 0 with ξ(t) ∈ ∂_u E(t, u(t), z(t)) and ω(t) ∈ ∂V(u̇(t)) for almost all t ∈ (0, T);
• semistability condition (15b) for all t ∈ [0, T];
• upper energy-dissipation estimate for all t ∈ [0, T]:

K(u̇(t)) + ∫_0^t ( V(u̇(r)) + V*(−ξ(r) − ü(r)) ) dr + Var_{R1}(z; [0, t]) + E(t, u(t), z(t))
  ≤ K(u̇(0)) + E(0, u(0), z(0)) + ∫_0^t ∂_r E(r, u(r), z(r)) dr  (23)

with ξ(r) ∈ ∂_u E(r, u(r), z(r)) the selection from (22), i.e., ξ(r) fulfills (22) for almost all r ∈ (0, T).

Moreover, a pair (u, z) : [0, T] → U × Z is called a weak semistable energetic solution of (U, Z, V, R1, E), resp. (U, W, Z, V, K, R1, E), if for all t ∈ [0, T] it satisfies the semistability condition (15b) and the upper energy-dissipation estimate (23).

Above, in (22) the term ∂V(u̇) denotes the subdifferential of the convex, lower semicontinuous potential V. Instead, E may feature non-convex terms of lower order, so that ∂_u E(t, u, z) is rather to be understood in the sense of Fréchet subdifferentials. In (23) the term V* : U* → [0, ∞) denotes the Legendre-Fenchel conjugate of the convex potential V : U → [0, ∞). This term in (23) stems from the De Giorgi principle for gradient flows, see [74] for a derivation.

Remark 2 Using suitable staggered time-discrete schemes alike (19), abstract existence results in the sense of Definition 2 were deduced in [74] for four cases:

• semistable energetic solutions for (U, Z, V, R1, E) for V quadratic, cf. [74, Theorem 4.9];
• weak semistable energetic solutions for (U, Z, V, R1, E) for V with general superlinear growth, cf. [74, Theorem 4.13];
• weak semistable energetic solutions for (U, W, Z, V, K, R1, E) for V with general superlinear growth, cf. [74, Theorem 5.4];
• semistable energetic solutions for (U, W, Z, V, K, R1, E) for V quadratic, cf. [74, Theorem 5.6].

The notion of semistable energetic solutions for coupled systems has been applied in the context of damage models [64] as well as for delamination models [73, 75, 81].
3 Finite Element Approximation for Total Variation Regularized Problems

3.1 Model Problem and Analytical Properties

3.1.1 The Space BV(Ω)
Classes of weakly differentiable functions like those defined by Sobolev spaces are not suitable to describe quantities that are discontinuous. A function space that contains a large class of discontinuous functions is provided by the set of functions of bounded variation, which is the subset of integrable functions on Ω whose distributional derivative is a bounded Radon measure, i.e.,

BV(Ω) = {v ∈ L¹(Ω) : Dv ∈ M(Ω)}.

The condition Dv ∈ M(Ω) is specified by the requirement that Dv is of bounded total variation, i.e.,

|Dv|(Ω) = sup { −∫_Ω v div φ dx : φ ∈ C_c^∞(Ω; Rᵈ), |φ(x)| ≤ 1 } < ∞,

which means that the operator norm of the distributional derivative Dv is bounded as a functional on compactly supported, smooth functions. If v is weakly differentiable then we have |Dv|(Ω) = ‖∇v‖_{L¹(Ω)}. The space BV(Ω) is larger than the Sobolev space W^{1,1}(Ω) as, e.g., characteristic functions of sets with bounded perimeter are contained in BV(Ω). The quantity ‖v‖_{BV(Ω)} = ‖v‖_{L¹(Ω)} + |Dv|(Ω) defines a norm on BV(Ω) for which it is complete. For variational problems it is important to note that the concept of weak* convergence guarantees that bounded sequences admit suitable subsequences with corresponding limits. For analyzing numerical methods an intermediate notion of convergence is needed, which asserts that v_j → v intermediately if v_j → v in L¹(Ω) and |Dv_j|(Ω) → |Dv|(Ω). For this notion of convergence the density of smooth functions can be established. We refer the reader to [2, 4] for details.
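For grid functions the total variation can be evaluated directly from the definition by replacing Dv with difference quotients. The short fragment below, with an ad hoc isotropic discretization, illustrates the point made above: the characteristic function of a square is not weakly differentiable, yet its discrete total variation stays bounded and approximates the perimeter of the square.

```python
import numpy as np

def discrete_tv(v, h):
    """Isotropic discrete total variation of a 2D grid function v on a mesh of size h."""
    dx = np.pad(np.diff(v, axis=0) / h, ((0, 1), (0, 0)))   # forward differences in x
    dy = np.pad(np.diff(v, axis=1) / h, ((0, 0), (0, 1)))   # forward differences in y
    return h**2 * np.sum(np.sqrt(dx**2 + dy**2))

h = 1.0 / 128
x, y = np.meshgrid(np.arange(0.0, 1.0, h), np.arange(0.0, 1.0, h), indexing="ij")
v = ((np.abs(x - 0.5) < 0.25) & (np.abs(y - 0.5) < 0.25)).astype(float)
print(discrete_tv(v, h))   # approximately the perimeter 4 * 0.5 = 2 of the square
```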
516
3.1.2
S. Bartels et al.
Model Problem
A model problem arising in image processing determines a regularized image z ∈ BV () ∩ L 2 () of a noisy image g ∈ L 2 () via minimizing I (z) = |Dz|() +
α z − g2 . 2
Despite the implicit definition of the total variation |Dz|(), the functional has positive analytical features, cf., e.g., [8, 16, 30, 70] for full explanations of the results summarized below. Proposition 1 (Well posedness) (i) Given g ∈ L 2 () there exists a unique minimizer z ∈ BV () ∩ L 2 () for I . In particular, for every y ∈ BV () ∩ L 2 () we have α z − y2 ≤ I (y) − I (z). 2 (ii) If z, z ∈ BV () ∩ L 2 () are minimizers corresponding to the data g, g ∈ L 2 () then we have that z − z ≤ g − g . (iii) If g ∈ L ∞ () then we have that z ∈ L ∞ () with z L ∞ () ≤ g L ∞ () . Proof (Proof (sketched)) The properties are direct consequences of compactness properties of the space BV () and coercivity and strong convexity properties of the functional I .
3.1.3
Dual Problem
Important implicit properties of solutions are provided by the dual formulation of the convex minimization problem I . By using the characterization of the total variation |Dv|(z) as a maximization problem we find by exchanging extrema that
α z − g2 2 p α ≥ sup inf − z div p dx − I K 1 (0) ( p) + z − g2 z 2 p
inf I (z) = inf sup − z
z
z div p dx − I K 1 (0) ( p) +
= sup inf L(z, p). p
z
Here we assumed that p · n = 0 on the boundary ∂. The functional I K 1 (0) denotes the indicator functional of the subset of vector fields p ∈ L 2 (; Rd ) that satisfy | p(x)| ≤ 1 almost everywhere in . Given such a vector field p the optimal z in the saddle-point formulation satisfies ∂z L(z, p) = 0, i.e.,
Approximation Schemes for Materials with Discontinuities
− div p + α(z − g) = 0
⇐⇒
517
z = g + α −1 div p.
This equation complements the condition 0 ∈ ∂ p L(z, p), i.e., the subdifferential inclusion ∇z ∈ ∂ I K 1 (0) ( p) or equivalently p ∈ ∂|∇z|. Inserting the identity for z into L and using that −g div p − α −1 (div p)2 +
α −1 1 α (α div p)2 = − (div p + αg)2 + g 2 2 2α 2
yields the dual functional α 1 div p + αg2 + g2 − I K 1 (0) ( p) 2α 2 1 2 div p g dx − I K 1 (0) ( p). = − div p − 2α
D( p) = −
The derivation of the functional implies that we have the weak duality relation I (z) ≥ D( p) for admissible functions z ∈ BV () ∩ L 2 () and vector fields p ∈ W N2 (div; ). In fact, it can be shown that strong duality applies, i.e., that equality holds at optimality, cf. [51]. Proposition 2 (Strong duality) The functionals I and D satisfy the strong duality relation inf I (z) = sup D( p). z
p
Existence of a dual solution p can be established using the direct method in the calculus of variations, uniqueness cannot be expected in general. While it is difficult to establish general regularity properties for the primal problem, solutions of the dual problem may satisfy classical regularity properties such as Lipschitz continuity. The following example illustrates this aspect, cf., e.g., [8]. Example 1 Let r > 0 be such that Br (0) ⊂ and define g = χ Br (0) . Then z = max 0, 1 − d/(αr ) χ Br (0) is the minimizer for I subject to z|∂ = 0. Assume that d ≤ αr and define p(x) =
−r −1 x for |x| ≤ r, −r x/|x|2 for |x| ≥ r.
Then p ∈ H (div; ) with div p = −(d/r )χ Br (0) and | p| ≤ 1. Moreover, we have z = (1/α) div p + g. Since p = −n on ∂ Br (0) we have for every q ∈ H (div; )
518
S. Bartels et al.
with |q| ≤ 1 that −(z, div(q − p)) = − 1 − d/(αr )
∂ Br (0)
(q − p) · n ds ≤ 0,
i.e., ∇z ∈ ∂ I K 1 (0) ( p). If d ≥ αr , we define p(x) =
−(α/d)x for |x| ≤ r, 2 2 −(α/d)r x/|x| for |x| ≥ r
and verify div p = −αχ Br (0) = −αg, i.e., z = (1/α) div p + g = 0, and | p| ≤ αr/d ≤ 1. Since z = 0 the variational inclusion Dz ∈ ∂ I K 1 (0) ( p) is satisfied.
3.2 Notation in Finite Element Spaces For a sequence of regular triangulations (Th )h>0 , where h > 0 refers to a maximal mesh-size that tends to zero, the set of elementwise polynomial functions or vector fields of maximal polynomial degree k ≥ 0 is defined by Lk (Th ) = vh ∈ L 1 (; R ) : vh |T ∈ Pk (T ) for all T ∈ Th . We let h : L 1 (; R ) → L0 (Th ) denote the L 2 projection onto elementwise constant functions or vector fields and note that h is self-adjoint, i.e.,
h f g dx =
f h g dx
for all f, g ∈ L 1 (). We let Sh denote the set of sides of elements and define the mesh-size function h S | S = h S = diam(S) for all sides S ∈ Sh . We let n S : Sh → Rd denote a unit vector field given for every side S ∈ Sh by nS |S = n S for a fixed unit normal n S on S which is assumed to coincide with the outer unit normal if S ⊂ ∂. The jump and average on a side S of a function vh ∈ Lk (Th ) are for x ∈ S defined for inner sides via vh (x) = lim vh (x − εn S ) − vh (x + εn S ) , ε→0
1 vh (x − εn S ) + vh (x + εn S ) . ε→0 2
{vh }(x) = lim For S ⊂ ∂ we set
Approximation Schemes for Materials with Discontinuities
519
vh = {vh } = vh . The integral means of jumps and averages are denoted by vh h = |S|−1
vh ds, {vh }h = |S|−1 S
{vh } ds, S
which in case of elementwise affine functions coincide with the evaluation at the midpoint x S for every S ∈ Sh . We denote the space of continuous, piecewise linear functions via S1 (Th ) = L1 (Th ) ∩ C(), and the larger space of discontinuous, piecewise linear functions via S1,dg (Th ) = L1 (Th ). A space of discontinuous vector fields is given by RT 0,dg (Th ) = L0 (Th )d + (id −xT )L0 (Th ), where id is the identity and xT = h id ∈ L0 (Th )d the elementwise constant vector field that coincides with the midpoint x T on every T ∈ Th . Differential operators on these spaces are defined elementwise, indicated by a subscript h, i.e., we have ∇h vh |T = ∇(vh |T ), divh z h |T = div(z h |T ) for vh ∈ S1,dg (Th ), z h ∈ RT 0,dg (Th ) and all T ∈ Th . The operators are also applied to weakly differentiable functions and vector fields in which case they coincide with the weak gradient and the weak divergence. By construction, any vector field yh ∈ RT 0,dg (Th ) has a piecewise constant normal component yh · n L along straight lines L with normal n L . Subspaces of elementwise affine functions and vector fields with certain continuity properties on element sides are given by 1,dg (Th ) : vh h | S = 0 for all S ∈ Sh \ Neu }, S1,cr D (Th ) = {vh ∈ S
and RT N0 (Th ) = {yh ∈ RT 0,dg (Th ) : yh · n S h | S = 0 for all S ∈ Sh \ Dir }, which coincide with low order Crouzeix–Raviart and Raviart–Thomas finite element spaces introduced in [35, 72]. These spaces provide quasi-interpolation operators 1, p
q
1,cr 0 rt Jcr h : W D () → S D (Th ), Jh : W N (div; ) → RT N (Th ),
with the projection properties
520
S. Bartels et al. rt ∇h Jcr h v = h ∇v, div Jh y = h div y,
and the interpolation estimates v − Jcr h v L p () ≤ ccr,1 h∇v L p () , cr 2 2 v − Jcr h v L p () + h∇h (v − Jh v) L p () ≤ ccr,2 h D v L p () , 2, p
for v ∈ W D () with 1 ≤ p ≤ ∞, and y − Jrht y L q () ≤ cr t h∇ y L q () q
for y ∈ W N (div; ) with 1 ≤ q ≤ ∞. The standard nodal interpolation operator is denoted by Ih : C() → S1 (Th ) and satisfies the estimate v − Ih v L p () + h∇(v − Ih v) L p () ≤ c p1 h 2 D 2 v L p () . We refer the reader to [10, 13, 26, 29] for details. Elementary calculations lead to the identities ⎧ ⎪ ⎨vh {yh · n S } + {vh }yh · n S if S ⊂ ∂, vh yh · n S = vh {yh · n S } if S ⊂ Dir , ⎪ ⎩ if S ⊂ Neu . {vh }yh · n S By carrying out an elementwise integration by parts we thus find that for vh ∈ S1,dg (Th ) and yh ∈ RT 0,dg (Th ) we have
vh div yh dx + ∇h vh · yh dx = vh h {yh · n S } ds + Sh \Neu
Sh \Dir
(24) {vh }h yh · n S ds.
0 If vh ∈ S1,cr D (Th ) and yh ∈ RT N (Th ) then the terms on the right-hand side are equal to zero.
3.3 Finite Element Discretization Typical finite dimensional spaces of functions on such as spaces of continuous or discontinuous piecewise polynomial finite element functions define subsets of BV () ∩ L 2 (). We show below that their performance in discretizing the model problem can be quite different. We always consider a sequence of regular triangulations (Th )h>0 of consisting of triangles or tetrahedra for d = 2 and d = 3 respec-
Approximation Schemes for Materials with Discontinuities
521
tively. We recall that associated finite element spaces are defined via sets Pk (T ) of polynomials of degree k on the elements T ∈ Th and certain optional continuity conditions across interelement sides. The P0 finite element space of elementwise constant functions is given by L0 (Th ) = {vh ∈ L ∞ () : vh |T ∈ P0 (T ) for all T ∈ Th }. Elementwise affine, globally continuous functions are contained in the space of P1 finite element functions S1 (Th ) = {vh ∈ C() : vh |T ∈ P1 (T ) for all T ∈ Th }. A space of discontinuous functions that are continuous at midpoints of element sides is the Crouzeix–Raviart finite element space S1,cr (Th ) = {vh ∈ L ∞ () : vh |T ∈ P1 (T ) for all T ∈ Th , vh continuous at all x S for all S ∈ Sh }. Low order discontinuous Galerkin methods use the space of elementwise affine functions S1,dg (Th ) = {vh ∈ L ∞ () : vh |T ∈ P1 (T ) for all T ∈ Th }. We discuss below the discretization of the model problem with these finite element spaces, i.e., the minimization of the functional I (z) = |Dz|() +
α z − g2 , 2
restricted to the spaces L0 (Th ), S1 (Th ), S1,cr (Th ), and S1,dg (Th ).
3.3.1
Discontinuous P0 Elements
Using a space of discontinuous functions to discretize the model problem appears attractive as such spaces can reproduce simple discontinuities exactly. However, if the geometry of the underlying sequence of triangulations (Th )h>0 does not approximate the discontinuity set sufficiently accurately then discrete minimizers may fail to converge to the right objects. We note that we have |Dvh |() =
|S||vh S |
S∈Sh \∂
for every vh ∈ L0 (Th ), where vh S is the jump of vh across an inner side S of the triangulation Th whose length or surface area is denoted by |S|.
522
S. Bartels et al.
Fig. 1 Triangulations Th n with h n = 1/n for n = 2 and n = 4 used to illustrate the failure of the P0 method. The length of the discontinuity set of the discontinuous function v(x, y) = sign(x) (indicated via gray shading) is incorrectly approximated by any sequence of piecewise constant functions (vh )h>0 with vh → v in L 1 ()
Proposition 3 (Failure of convergence) Given n ≥ 1 let h = 1/n and Th the triangulation of = (−1, 1)2 as indicated in Fig. 1. Then for v(x1 , x2 ) = χ{x1 >0} (x1 , x2 ) we have for every sequence (vh )h>0 of functions vh ∈ L0 (Th ) the implication vh → v in L 1 ()
=⇒
|Dvh |() → |Dv|()
as h → 0. In particular, the union of finite element spaces ∪h>0 L0 (Th ) is not dense in BV () with respect to intermediate convergence. The proposition implies that in general, it is not possible to correctly approximate minimizers of the model problem I via the minimization I (z h ) =
S∈Sh
|S||z h S | +
α z h − g2 2
in the set of all z h ∈ L0 (Th ) despite the consistency of the method. We refer the reader to [6, 8, 17] for further details.
3.3.2
Continuous P1 Elements
The standard finite element space of piecewise affine, globally continuous functions S1 (Th ) provides the approximation property that for all v ∈ BV () there exists a sequence (vh )h>0 with vh ∈ S1 (Th ) for all h > 0 and vh → v in L 1 () & |Dvh |() → |Dv|() as h → 0. This is an immediate consequence of the intermediate density of smooth functions in BV () and the density of the union of P1 finite element spaces in the space W 1,1 (). In fact, we have that
Approximation Schemes for Materials with Discontinuities
523
|Dvh |() =
|∇vh | dx
for all vh ∈ S1 (Th ). If z h ∈ S1 (Th ) is the minimizer for α I (z h ) = |∇z h | dx + z h − g2 2 in the set of all z h ∈ S1 (Th ) then it follows that α z − z h 2 ≤ I (z h ) − I (z) ≤ I (vh ) − I (z) → 0 2 as h → 0 if the sequence (vh )h>0 is chosen such that vh → z intermediately in BV (). By an explicit construction of an approximating sequence (vh )h>0 for a given function z ∈ BV () ∩ L ∞ () it is possible to determine a convergence rate as in [6, 23, 82]. Proposition 4 (Suboptimal convergence) Assume that g ∈ L ∞ () and is star shaped. Then we have that z − z h ≤ ch 1/4 , where c > 0 depends on the geometry of and the triangulations, as well as α and g L ∞ () . Proof (Proof (sketched)) The strong convexity property of I and a binomial formula lead for arbitrary vh ∈ S1 (Th ) to the estimate α z − z h 2 ≤ I (z h ) − I (z) 2 ≤ I (vh ) − I (z)
α = |Dvh |() − |Dz|() + (vh − g)2 − (z − g)2 dx 2 α ≤ |Dvh |() − |Dz|() + vh − z L 1 () vh + z + 2g L ∞ () . 2
By choosing a regularization z ε ∈ C ∞ () of z and setting vh,ε = Ih z ε one derives the bounds z − vh,ε L 1 () ≤ c h 2 ε−1 + ε |Dz|(), |Dvh,ε |() ≤ |Dz|() + c hε−1 + ε)|Dz|(), vh,ε L ∞ () ≤ z L ∞ () . With these estimates we deduce that α z − z h 2 ≤ c hε−1 + h 2 ε−1 + ε . 2
524
S. Bartels et al.
The choice ε = ε1/2 leads to the asserted error bound.
The estimate can be improved if a total-variation diminishing quasi interpolation operator is available, i.e., on an intermediately dense subset X ⊂ BV () there exists an operator Ih : X → S1 (Th ) with the monotonicity estimate ∇ Ih v L 1 () ≤ |Dv|(), the approximation and stability bounds Ih v L ∞ () ≤ cv L ∞ () , v − Ih v L 1 () ≤ ch, then by following the lines of the proof of the previous proposition one finds, cf. [24], that z − z h ≤ ch 1/2 . Total-variation diminishing interpolation operators can be constructed in onedimensional settings or if the anisotropic variant of the total variation is used on regular partitions. The convergence rate O(h 1/2 ) is optimal for the approximation of a discontinuous function by continuous finite element functions, e.g., for a generic function v ∈ BV () ∩ L ∞ () with discontinuity, we have for the L 2 best approximation inf v − vh ≥ ch 1/2 . vh ∈S1 (Th )
As shown in [8] this can be verified directly in the simple setting = (−1, 1), Th a sequence of symmetric triangulations with respect to the origin, and the function v(x) = sign(x) as illustrated in Fig. 2. To prove the estimate, we first note that the optimal finite element approximation satisfies vh (0) = 0. This follows from the fact vh (x) = −vh (−x) we have, thatfor the unique optimal function vh and its reflection noting that −v(−x) = v(x), vh . v − vh = v − vh )/2 and noting that the L 2 Considering now the convex combination wh = (vh + norm is convex we deduce that v − wh ≤
1 1 v − vh + v − vh = v − vh . 2 2
By uniqueness we obtain that necessarily vh = wh where wh satisfies by construction wh (0) = 0. Moreover, we obtain that vh satisfies vh (−x) = −vh (x). On the two elements adjacent to the origin covering the region (−h, h) we have that vh (x) = ax and hence h 2 (ax − 1)2 dx = 2(a 2 h 3 /3 − ah 2 + h). vh − v ≥ 2 0
Approximation Schemes for Materials with Discontinuities Fig. 2 Oscillations in the approximation of a discontinuous function by continuous, piecewise affine functions. Oscillations occur in a neighborhood of the discontinuity and lead to suboptimal convergence behavior
525
h
The minimal value occurs for a = (3/2)h −1 and equals h/2.
3.3.3
Crouzeix–Raviart Method
The error analysis of the continuous P1 method revealed the importance of a totalvariation diminishing interpolation operator. The Crouzeix–Raviart finite element method provides an inconsistent variant of this property via the quasi-interpolation operator Jcr h with the property ∇h Jcr h v = h ∇v, where ∇h is the elementwise application of the gradient operator and h the orthogonal projection onto elementwise constant vector fields. Jensen’s inequality directly implies the monotonicity property ∇h Jcr h v L 1 () ≤ ∇v L 1 () . Via appropriate density arguments this property can be carried over to functions v ∈ BV (). We follow [12, 34] and use the discrete functional α |∇h z h | dx + h z h − gh 2 Ih (z h ) = 2 on the set S1,cr (Th ) to approximate minimizers of the model problem. The functional Ih is an inconsistent approximation of I since the first term does not coincide with the total variation |Dz h |() of a discontinuous function z h ∈ S1,cr (Th ). Therefore, an error analysis has to control the effect of inconsistency of the method. This is done via a discrete duality argument. We thus consider the discrete dual problem consisting in maximizing the functional Dh ( ph ) = −
1 α div ph + αgh 2 + gh 2 − I K 1 (0) ( h ph ) 2α 2
526
S. Bartels et al.
in the set of discrete vector fields ph ∈ RT 0N (Th ). The indicator functional I K 1 (0) applied to the elementwise average of ph enforces midpoint values ph (x T ) to satisfy | ph (x T )| ≤ 1 for all T ∈ Th . An important feature is the following discrete duality relation. Proposition 5 (Discrete duality) Assume that gh ∈ L0 (Th ). Then, the functionals Ih defined on S1,cr (Th ) and Dh defined on RT 0N (Th ) are in discrete duality, i.e., inf
z h ∈S1,cr (Th )
Ih (z h ) ≥
sup
Dh ( ph ).
ph ∈RT 0N (Th )
Proof We first note that for any vector field ph ∈ RT 0N (Th ) with | ph (x T )| ≤ 1 for all T ∈ Th we have ph · ∇h z h dx ≤ T
|∇h z h | dx T
for all T ∈ Th since ∇h z h is constant on T . We use the discrete integration-byparts (24) formula to verify
α h z h − gh 2 2 α ≥ ph · ∇h z h dx − I K 1 (0) ( h ph ) + h z h − gh 2 2 α = − div ph z h dx − I K 1 (0) ( h ph ) + h z h − gh 2 . 2
|∇h z h | dx +
For the convex function G(s) = (α/2)|s − g|2 for s, g ∈ R we have Fenchel’s inequality G(s) − r s ≥ −G ∗ (r ) with G ∗ (r ) = sup r s − G(s) = s∈R
1 α (r + αg)2 − g 2 . 2α 2
Hence, it follows that α |∇h z h | dx + h z h − gh 2 2 1 α ≥ − div ph + αgh 2 + g2 − I K 1 (0) ( h ph ). 2α 2 Since z h and ph are arbitrary, this implies the asserted estimate.
Since the modulus function and its convex conjugate can be approximated uniformly on their supports by differentiable functions, a strong duality relation can be established, i.e., that in fact equality applies in Proposition 5, cf. [12, 34]. For the
Approximation Schemes for Materials with Discontinuities
527
quasi-optimal error estimate stated below the weak duality result of the proposition is sufficient. Theorem 1 (Quasi-optimality, [12, 34]) If z h ∈ S1,cr (Th ) and z ∈ BV () ∩ L ∞ () are the minimizers of Ih and I , respectively, for some g ∈ L ∞ () and with gh = h g and if there exists a dual solution p ∈ W N2 (div; ) with p ∈ W 1,∞ (; Rd ), then we have the quasi-optimal error estimate z − h z h ≤ ch 1/2 . Proof The discrete functional Ih satisfies the coercivity property α h (yh − z h )2 ≤ Ih (yh ) − Ih (z h ). 2 Using the discrete duality relation Ih (z h ) ≥ Dh ( ph ), and choosing z h = Jcr h z and −1 r t rt ∞ ph (x T )| ≤ 1 for all T ∈ Th ph = γh Jh p, with γh = max{1, Jh p L () } so that | and hence ph is admissible in Dh , we find that α h ( z h − z h )2 ≤ Ih ( z h ) − Dh ( ph ). 2 z h L 1 () ≤ |Dz|() and the identity The monotonicity property ∇h h ( z h − g)2 = h z h − g2 − g − gh 2
h = z − g2 + z h − z h z h + z − 2g dx − g − gh 2
imply that α h ( z h − g)2 2 α α ≤ |Dz|() + z − g2 − g − gh 2 2 2 α + h z h − z L 1 () h z h + z − 2g L ∞ () 2 α = I (z) − g − gh 2 2 α + h z h − z L 1 () h z h + z − 2g L ∞ () . 2
Ih ( z h ) = ∇h z h L 1 () +
Defining p = γh−1 p we have that ph = Jrht p and p + g). div ph + gh = h (div
528
S. Bartels et al.
The identity g2 − gh 2 = g − gh 2 shows that 1 α div ph + αgh 2 + gh 2 2α 2 1 α α ≥ − div p + αg2 + g2 − g − gh 2 . 2α 2 2
ph ) = − Dh (
We use that 1 ≤ γh ≤ 1 + cLh to deduce that 1 α div p 2 − div p g dx − g − gh 2 2α 2 1 −2 α = − γh div p2 − γh−1 g div p dx − g − gh 2 2α 2 1 −2 ≥ − γh div p2 − div p g dx 2α α p − g − gh 2 − (1 − γh−1 )g div 2 α −1 ≥ D( p) − (1 − γh )g div p − g − gh 2 . 2
Dh ( ph ) ≥ −
By combining the estimates, noting that 1 − γh−1 ≤ ch∇ p L ∞ () , and using z −
h z h 2 ≤ ch|Dz|()z∞() , we deduce the asserted error bound. L 3.3.4
Discontinuous Galerkin Method
The discontinuous Galerkin finite element method generalizes the Crouzeix–Raviart method by introducing jump and average terms. It is crucial to use quadrature via midpoint evaluation to obtain a precise duality relation. We follow [11]. Definition 3 (Jumps and averages) Let r, s ≥ 1 and let αS , βS : Sh → R≥0 be piecewise constant. For z h ∈ S1,dg (Th ) and ph ∈ RT 0,dg (Th ) define 1 −1 1 α z h h rL r (Sh \Neu ) + βS {z h }h sL s (Sh \Dir ) , r S s 1 1 K h ( ph ) = αS { ph · n S }rL r (Sh \Neu ) + βS−1 ph · n S sL s (S \ ) , h Dir r s Jh (z h ) =
where we require z h h = 0 if αS = 0 and ph · n S = 0 if βS = 0. For r = 1 or s = 1 the functionals (1/r ) · rL r or (1/s ) · sL s are interpreted as indicator functionals I K 1 (0) of the closed unit ball K 1 (0). We have the following discrete duality result, here stated for the case of the total variation minimization problem. Proposition 6 (dG duality) For u h ∈ S1,dg (Th ) and gh = h g let
Approximation Schemes for Materials with Discontinuities
Ih (u h ) =
|∇h z h | dx +
529
α h z h − gh 2 + Jh (z h ), 2
Then with the discrete dual functional defined for ph ∈ RT 0,dg (Th ) by Dh ( ph ) = −I K 1 (0) ( h ph ) −
1 α divh ph + αgh + gh 2 − K h (z h ) 2α 2
we have Ih (z h ) ≥ Dh ( ph ) and equality holds if and only if z h and ph are optimal for Ih and Dh , respectively. By adapting the arguments that lead to the error estimate in case of the Crouzeix– Raviart method we obtain a similar estimate here. Proposition 7 (Error estimate) Assume that g ∈ L ∞ () and that there exists a Lipschitz continuous solution p ∈ W N2 (div; ) ∩ W 1,∞ () for the dual problem. Moreover, suppose that −1 s r ∞ ∞ h −1 S αS L (Sh ) + h S βS L (Sh ) ≤ ch, where the first term can be omitted if r = 1 and 0 < αS ≤ 1. Then, for the solutions z ∈ BV () ∩ L 2 () and z h ∈ S1,dg (Th ) of the primal and discrete primal problem we have z − h z h ≤ ch 1/2 Mz, p,g , with a factor Mz, p,g that depends on α > 0, g L ∞ () , and ∇ p L ∞ () .
3.3.5
A Posteriori Error Estimates
Duality relations also lead to computable error estimates for conforming discretizations. Proposition 8 (A posteriori error estimate, [7]) Let z ∈ BV () ∩ L 2 () be minimal for I and z h ∈ S1 (Th ) be arbitrary. For every ph ∈ RT 0N (Th ) with | ph | ≤ 1 we have that α 1 2 z − z h ≤ div ph − α(z h − g)2 . |∇z h | − ∇z h · ph dx + 2 2α Proof Given any p ∈ W N2 (div; ) with | p| ≤ 1 almost everywhere in we have by minimality of z and conformity of the P1 method that α z − z h 2 ≤ I (z h ) − I (z). 2 The continuous duality relation yields that I (z) ≥ D( ph ) and hence we have that
530
S. Bartels et al.
I (z h ) − I (z) ≤ I (z h ) − D( ph ) α 1 α = |∇z h | dx + z h − g2 + div ph + αg2 − g2 . 2 2α 2
We use that −
∇z h · ph dx =
z h · div ph dx
and 1 div ph − α(z h − g)2 2α 1 α 2 div ph + αg − (div p + αg)z h dx + z h 2 = 2α 2 1 α α 2 div ph + αg − = z h div p dx + z h − g2 − g2 . 2α 2 2 A combination of the equations yields the asserted estimate.
The a posteriori error estimate is of residual type since the optimal z ∈ BV () ∩ L 2 () and an optimal vector field p are formally related via the identities div p = α(z − g),
p=
∇z . |∇z|
The error estimate is optimal. If ph is the solution of the dual problem restricted to the Raviart–Thomas finite element space with a relaxation of the constraint | ph | ≤ 1. By computing an approximation with the elementwise constraint | ph (x T )| ≤ 1 for ph ∈ RT 0N (Th ) and may then define all T ∈ Th one obtains a vector field ph ph = γh−1 where γh = max{1, ph L ∞ () }. In fact, the elementwise quantity ph L ∞ (T ) − 1}, γh (T ) = max{0, T ∈ Th , may be used as an additional error indicator.
3.3.6
Numerical Experiments
Figures 3, 4, and 5 show the numerical results of the finite element discretization of the model problem using standard P1 finite elements, the Crouzeix–Raviart method, and the discretization of the dual problem using the Raviart–Thomas method. The setting was chosen as in Example 1 with d = 2, = (−1, 1)2 , r = 1/2, and α = 10. The advantages of the nonconforming methods become apparent when the projec-
Approximation Schemes for Materials with Discontinuities
0.8
0.8
0.6
0.6
0.4
0.4
0.2
0.2
0 1
531
0 1
0.5
1
0.5
0.5
0
1 0
-0.5
-0.5 -1
0.5
0
0
-0.5
-0.5 -1
-1
-1
Fig. 3 Numerical solution (left) obtained with the continuous P1 finite element method and its elementwise average (right). Although a reasonably acurate resolution of the jump is obtained its circular geometry is not well resolved
0.8
0.8
0.6
0.6
0.4
0.4
0.2
0.2
0 1
0 1
0.5
1 0.5
0
0
-0.5
-0.5 -1
-1
0.5
1 0.5
0
0
-0.5
-0.5 -1
-1
Fig. 4 Numerical solution (left) obtained with the Crouzeix–Raviart finite element method and its elementwise average (right). While the approxmimation does not obey a maximum principle, the circular discontinuity set is well approximated
tions onto piecewise constant functions are plotted. The P1 function leads to an inaccurate approximation of the circular discontinuity set which is improved by the other methods. The discrete problems were solved with the methods described in the subsequent section and we refer the reader to [18–20] for comparisons of their performances. For iterative methods for the dual problem we refer the reader to [28, 50].
3.4 Iterative Solution Methods The nondifferentiability of the functional I and limited regularity properties of solutions lead to difficulties in the iterative solution of the discretized model problem. We discuss below possible approaches and address aspects such as choice of step sizes, monotonicity properties, and the development of stopping criteria. Throughout what
532
S. Bartels et al.
1
0.8 0.8
0.6
0.6 0.4
0.4 0.2 0
0.2
-0.2
0 1
-0.4 -0.6
0.5
1 0.5
0
-0.8
0
-0.5
-1 -1
-0.5
0
0.5
-0.5 -1
1
-1
Fig. 5 Numerical solution (left) obtained with the Raviart–Thomas method for the dual formulation and the resulting elementwise constant approximation z h = α −1 div ph + gh of the primal variable (right). The approximation is nearly identical with the averages of the Crouzeix–Raviart approximation
follows we use for a step size τ > 0 the difference quotient operator dt a k = τ −1 (a k − a k−1 ) for an arbitrary sequence (a k )k=0,1,... in a linear space X .
3.4.1
Regularized Gradient Descent
A classical gradient descent approach can be used if a regularization of the functional is introduced, e.g., via a regular approximation of the modulus function or the euclidean length, e.g., for ε > 0 and a ∈ Rd we define |a|ε = (|a|2 + ε2 )1/2 . This leads to the regularized functional Iε (z) =
|∇z|ε dx +
α z − g2 . 2
The uniform estimate 0 ≤ |a|ε − |a| ≤ ε for all a ∈ Rd implies that minimizers z for I and z ε for Iε are related via α z − z ε 2 ≤ ε, 2 cf. [41, 44] for related estimates. In a finite element setting this motivates using ε = h. With a suitable inner product (·, ·)∗ and a semi-implicit treatment of the variation δ Iε we obtain the following numerical scheme.
Approximation Schemes for Materials with Discontinuities
533
Algorithm (Regularized gradient descent) Let z 0 ∈ W 1,1 () and choose τ, εstop > 0, set k = 1. (1) Compute z k ∈ W 1,2 () such that (dt z , v)∗ + k
∇z k · ∇v dx + α |∇z k−1 |ε
(z k − g)v dx = 0
for all v ∈ W 1,2 (). (2) Stop if dt z k ∗ ≤ εstop ; otherwise increase k → k + 1 and continue with (1). The semi-implicit treatment eliminates monotonicity properties of the variation δ Iε . Remarkably, an energy decay property can be established unconditionally for ε > 0. Proposition 9 (Energy decay, [14]) For ε > 0 the iterates (z k )k=0,1,... are well defined and satisfy for every K ≥ 0 Iε (z ) + τ K
K
dt z k 2∗ ≤ Iε (z 0 ).
k=1
In particular, we have that dt z k → 0 as k → ∞ and the sequence (z k )k≥0 converges weakly to the unique minimizer z ε of Iε . Proof (Proof (sketched)) To illustrate the main idea of the proof we omit the quadratic term in Iε , i.e., we assume for simplicity α = 0 and note that in this case choosing v = dt z k in Algorithm 4 shows that dt z k 2∗ +
1 2
dt |∇z k |2 + τ |dt ∇z k |2 dx = 0. |∇z k−1 |ε
To identify the regularized energy Iε on the left-hand side we employ elementary formulas related to difference quotient calculus and derive the identity dt |a k |ε = dt
|a k |2ε 1 dt |a k |2ε = + |a k |2ε dt k |a k |ε |a k−1 |ε |a |ε dt |a k |ε dt |a k |2 = k−1 ε − |a k |2ε k−1 |a |ε |a |ε |a k |ε k 2 k dt |a | |a |ε dt |a k |ε = k−1 ε − |a |ε |a k−1 |ε k 2 dt |a | 1 dt |a k |2ε + τ (dt |a k |ε )2 = k−1 ε − |a |ε 2 |a k−1 |ε k 2 1 dt |a |ε 1 τ (dt |a k |ε )2 = − . 2 |a k−1 |ε 2 |a k−1 |ε
534
S. Bartels et al.
Using this formula with a k = ∇z k and noting that dt |a k |2ε = dt |a k |2 for the regularized euclidean length, we find that dt z k 2∗
+ dt
τ |∇z |ε dx + 2
k
|dt ∇z k |2 + (dt |∇z k |ε )2 dx = 0. |∇z k−1 |ε
This implies the asserted bound.
While stability of the iteration is independent of ε and also of a spatial discretization, the development of an efficient stopping criterion, i.e., optimal choice of εstop is difficult. Related error estimates have to control the effect of the semi-implicit treatment of the operator which introduces a critical dependence on ε; we refer the reader to [14, 25] for related estimates.
3.4.2
Primal-Dual Iteration
The use of primal-dual methods in the context of total-variation minimization problems has been proposed in [31–33]. The main idea is to alternatingly update the primal and dual variables z and p in the Lagrange functional L(z, p) = −
z div p +
α z − g2 − I K 1 (0) ( p) 2
via appropriate discretizations of the dynamical system ∂t p = δ p L(z, p),
∂t z = −δz L(z, p).
When the evolution becomes stationary, a saddle-point for L has been detected. In case of a continuous P1 finite element discretization of the primal problem, we may carry out an integration by parts in the first term and consider the discrete Lagrange functional α ph · ∇z h dx + z h − g2 − I K 1 (0) ( ph ) L h (z h , ph ) = 2 where we use elementwise constant vector fields ph ∈ L0 (Th )d . An important aspect here is that the functional is quadratic in z h and nondifferentiable but pointwise in ph . Hence, the separate minimization and maximization in the variables can be realized efficiently. We have that a pair (z h , ph ) ∈ S1 (Th ) × L0 (Th )d is a saddle-point for L h if and only if | ph | ≤ 1 in and ( ph , ∇vh ) = −α(z h − g, vh ), (∇z h , qh − ph ) ≤ 0 for all (vh , qh ) ∈ S1 (Th ) × L0 (Th )d with |qh | ≤ 1 in . The inequality is equivalent to the pointwise variational inclusion
Approximation Schemes for Materials with Discontinuities
535
∇z h ∈ ∂ I K 1 (0) ( ph ). The following algorithm uses appropriate implicit and explicit treatments of the discretized dynamical system to decouple the equations. The use of the extrapolated iterate z hk = z hk−1 + τ dt z hk−1 is crucial to obtain moderate conditions for stability on the involved step size τ > 0. Appropriate choices of the inner product (·, ·)h,s to define the evolution of the primal variable will be discussed below. Algorithm (Primal-dual iteration) Let (·, ·)h,s be an inner product on S1 (Th ), τ > 0, z hk = z hk−1 + (z h0 , ph0 ) ∈ S1 (Th ) × L0 (Th )d , set dt z h0 = 0, and for k = 1, 2, . . . with k−1 τ dt z h solve the equations z hk , qh − phk ) ≤ 0, (−dt phk + ∇ (dt z hk , vh )h,s + ( phk , ∇vh ) + α(z hk − g, vh ) = 0 subject to | phk | ≤ 1 in for all (vh , qh ) ∈ S1 (Th ) × L0 (Th )d with |qh | ≤ 1 in . Stop the iteration if dt z hk h,s ≤ εstop . We have that phk is the unique minimizer of the nondifferentiable mapping qh →
1 qh − phk−1 2 − (qh , ∇ z hk ) + I K 1 (0) (qh ). 2τ
It is straightforward to verify that phk is given by the pointwise truncation operation z hk / max{1, | phk−1 + τ ∇ z hk |}. phk = phk−1 + τ ∇ For this explicit formula the use of the L 2 inner product to define the evolution in the p variable is essential. The iterates of Algorithm 5 converge to a stationary point if τ is sufficiently small. The following result is obtained from arguments developed in [6, 27, 31, 39, 68, 69]. Proposition 10 (Convergence) Let z h ∈ S1 (Th ) be minimal for I in S1 (Th ) and define ∇vh . θ= sup vh ∈S1 (Th )\{0} vh h,s If τ θ ≤ 1, then the iterates of Algorithm 5 converge to z h in the sense that they satisfy for every K ≥ 1 τ
K 1 τ (1−τ 2 θ 2 ) dt z hk 2h,s + αz h − z hk 2 ≤ z h − z h0 2h,s + ph − ph0 2 . 2 2 k=1
536
S. Bartels et al.
In general we cannot expect convergence phk → ph since ph may fail to be unique, e.g., if ∇z h |T = 0 for some T ∈ Th . If (·, ·)h,s is the L 2 inner product then the parameter θ characterizes the constant in an inverse estimate and is given by θ ≤ ch −1 . To avoid the resulting restrictive step size condition τ ≤ ch other choices of the inner product (·, ·)h,s obtained as weighted combinations of the inner product in L 2 () and the semi-inner product in H 1 () are useful. Proposition 11 (Discrete inner products, [9]) For s ∈ [0, 1] and vh , wh ∈ S1 (Th ) define (vh , wh )h,s = (vh , wh ) + h (1−s)/s (∇vh , ∇wh ), where h (1−s)/s = 0 if s = 0. We then have ∇vh ≤ ch − min{1,(1−s)/(2s)} vh h,s for all vh ∈ S1 (Th ) with c = 1 if s > 0. A particular choice of the scalar products (·, ·)h,s has to guarantee that the righthand side in the estimate of Proposition 10 remains bounded, e.g., the choice s = 1 defines the H 1 norm but minimizers for the total variation minimization problem do not belong to this space, i.e., the quantity on the right-hand side will deteriorate as h → 0. For s ≤ 1/2 the upper bounded remains bounded which follows from the discrete interpolation estimate h∇vh 2 ≤ cvh L ∞ () ∇vh L 1 () and the fact that minimizers z h for Ih remain bounded in the set W 1,1 () ∩ L ∞ (). To obtain robustness of the stopping criterion a smallness property of dt z hk h,s has to be checked.
3.4.3
ADMM Iteration
The idea of the alternating direction of multiplier method proposed in [42] for solving convex optimization problems of the form I (z) = F(Bz) + G(z) consists in introducing the variable r = Bz and imposing this identity via a Lagrange multiplier λ and a stabilizing term. In the case of the total variation minimization problem the method is thus based on the augmented Lagrange functional L τ (z, r, λ) =
|r | dx +
α τ z − g2 + (λ, ∇z − r ) H + ∇z − r 2H . 2 2
Here, a suitable Hilbert space and a parameter τ > 0 have to be chosen. We have that inf I (z) = inf sup L τ (z, r, λ). z
z,r
λ
Approximation Schemes for Materials with Discontinuities
537
The ADMM iteration successively minimizes L τ with respect to z and r , and then performs an ascent step with respect to λ. Because of the splitting of the differential operator and the nonquadratic, nondifferentiable functional, the separate optimization in the different variables can be realized efficiently. To explain the algorithm and derive some features we consider the general form as stated above with convex functionals F : X → R ∪ {+∞} and G : Y → R ∪ {+∞} and a bounded linear operator B : X → Y . Possible strong convexity of F or G is characterized by nonnegative functionals F : Y × Y → R and G : X × X → R in the following lemma. Lemma 1 (Optimality conditions) A triple (z, r, λ) is a saddle point for L τ if and only if Bz = r and λ, q − r Y + F(r ) + F (q, r ) ≤ F(q), − λ, B(v − z) Y + G(z) + G (v, z) ≤ G(v), for all (v, q) ∈ X × Y . We approximate a saddle-point using the following iterative scheme which coincides with the scheme introduced in [46] in the case of fixed step sizes. Algorithm (Generalized ADMM) Choose (z 0 , λ0 ) ∈ X × Y such that G(z 0 ) < ∞. Choose τ ≥ τ > 0 and R 0 and set j = 1. (1) Set τ1 = τ and R0 = R. (2) Compute a minimizer r j ∈ Y of the mapping r → L τ j (z j−1 , r ; λ j−1 ). (3) Compute a minimizer z j ∈ X of the mapping z → L τ j (z, r j ; λ j−1 ). (4) Update λ j = λ j−1 + τ j (Bz j − r j ). (5) Define 1/2 R j = λ j − λ j−1 2Y + τ j2 B(z j − z j−1 )2Y . (6) Stop if R j is sufficiently small. (7) Choose step size τ j+1 ∈ [τ , τ ]. (8) Set j → j + 1 and continue with (2).
Further variants and related algorithms are investigated in [36–38, 47, 55, 56, 63, 77]. In [20] a strategy for the adjustment of τ j based on checking certain contraction properties has been developed. Convergence of the iteration of Algorithm 6 is based on comparing the optimality conditions for L τ to the optimality conditions arising from the iteration.
538
S. Bartels et al.
Lemma 2 (Decoupled optimality) With λ j := λ j−1 + τ j (Bz j−1 − r j ) the iterates j j j (z , r , λ ) j=0,1,... satisfy for j ≥ 1 the variational inequalities j λ , q − r j Y + F(r j ) + F (q, r j ) ≤ F(q), − λ j , B(v − z j ) Y + G(z j ) + G (v, z j ) ≤ G(v), for all (v, q) ∈ X × Y . In particular, (z j , r j ; λ j ) is a saddle-point for L τ if and only if λ j − λ j−1 = 0 and B(z j − z j−1 ) = 0. To state a convergence property of the iteration we use the symmetrized coercivity functionals G (z, z ) = G (z, z ) + G (z , z), F (r, r ) = F (r, r ) + F (r , r ). G are given by certain powers of norms of differences, e.g., Typically, F and G (v, w) ∼ v − w2 . Theorem 2 (Termination) Let (z, r ; λ) be a saddle-point for L τ . Suppose that the step sizes satisfy the monotonicity property 0 < τ ≤ τ j+1 ≤ τ j for j ≥ 1. For the iterates (z j , r j ; λ j ), j ≥ 0, of Algorithm 6, the corresponding j j j differences δλ = λ − λ j , δr = r − r j and δz = z − z j , and the distance j
D 2j = δλ 2Y + τ j2 Bδzj 2Y , we have for every J ≥ 1 that 1 1 1 2 DJ + τj G (z, z j ) + F (r, r j ) + G (z j−1 , z j ) + R 2j ≤ D02 . 2 2 2 j=1 J
In particular, R j → 0 as j → ∞ and Algorithm 6 terminates.
3.5 Fully Discrete Approximation of Rate-Independent Damage Processes In [21] a numerical method is developed to determine approximate solutions for a rate-independent damage model (U × Z, R1 , E). Here, the energy functional E : U × X → R is of the form (5) with a gradient regularization of BV -type as in (6b), with finite sublevels on the Banach space U × X. Here, U := {u ∈ H 1 (; R2 ), u = 0 on D }, X := BV (), and Z := L 1 (). The positively 1-homogeneous dissipation
Approximation Schemes for Materials with Discontinuities
539
potential R1 : Z → [0, ∞] is given by
R1 (v) :=
R1 (v) dx ,
with R1 (v) :=
a1 |v| if v ≤ 0, ∞ otherwise.
(25)
With z = 1 for the undamaged state of the material and z = 0 for the maximally damaged state R1 from (25) ensures that z has to decrease with time and thus prevents healing of the material. The non-smoothness of R1 together with the non-smoothness and nonlinearity of E impose a challenge both for numerical and mathematical analysis. To devise an iterative solution method, the staggered time-discrete scheme (19) is combined with a P1-FE discretization in space. To solve for the nonlinear, nonsmooth discrete problem (19b) an ADMM-algorithm as described in Sect. 3.4.3 is used. It is obtained that the approximate solutions satisfy a discrete analogon of the notion of semistable energetic solutions, cf. Definition 1, Item 2, upon an error term arising from the numerical method. Thanks to a result similar to Theorem 2 it can be shown that this error is controlled and vanishes as time-step and mesh size tend to zero. This is the basis to show that the approximate solutions converge to a semistable energetic solution of the rate-independent process. The convergence of the method is shown in [22] for gradient regularizations of the type (6a) and (6b). The convergence proof is based on methods from evolutionary -convergence for rate-independent systems. The interplay of the non-smooth constraint imposed by the dissipation potential with the discrete FE-spaces lead to additional error terms in the discrete semistability inequality, which are shown to vanish as h → 0 if the triangulations tend to a right-angled triangulation.
4 Fully Discrete Approximation of Dynamic Phase-Field Fracture by Viscous Regularization In this section we regularize the rate-independent damage process by a viscous damping. This means that R1 from (25) now is replaced by R M (v) =
R M (v) dx with R M (v) =
M 2 |v| + χ(−∞,0] v , 2
(26)
with M > 0 a viscosity parameter, and with χ(−∞,0] (v) = 0 if v ∈ (−∞, 0] and χ(−∞,0] (v) = ∞ if v > 0 the characteristic function of the interval (−∞, 0] to prevent healing of the material. While R1 from (26) allows solutions to jump in time, this is prevented by the viscous potential (25). A viscous regularization of the evolution law is often used in engineering literature, see e.g., [57, 65, 76] to make numerical simulations more stable. It was used in [78, 79], where convergence of a staggered time-discrete scheme was shown for a phase-field fracture model at finite strains. There, the focus lay on a quasistatic evolution law for the deformation (ρ = 0 and D ≡ 0 in (27a) below) featuring a stress tensor which takes into account the anisotropy of
540
S. Bartels et al.
damage. This is achieved by applying an anisotropic split of the modified principle invariants of the right Cauchy–Green strain tensor. In this framework the existence of solutions was studied using a staggered time-discrete scheme and by showing that the time-discrete solutions converge in a weak sense to a solution of the time-continuous formulation of the model. The main challenge here comes from the non-convexity of the energy functional with respect to the deformation gradient in the finite-strain setting, where in general only polyconvexity is available, combined with the use of modified principle invariants. While complicating mathematical analysis the use of the anisotropic split and modified invariants has proved to lead to better numerical results with good qualitative agreement of simulation and experiment [54]. While the viscosity M > 0 in (26) was kept fixed in [78, 79] and convergence was investigated for a time-discrete scheme, it is the aim of this section to prove the convergence of a discretization of a visco-elastodynamic phase-field fracture model both in time and space such that M(τ ) → 0 as time-step size τ → 0. We will confine the analysis to the setting of small strains, but allow for a visco-elastodynamic evolution of the displacements. More precisely, the model problem in a time interval [0, T] in the reference domain ⊂ Rd , d ∈ N, d > 1, formally reads: ρ u¨ − div D(z)e(u) ˙ + C(z)e(u) = f V ∂R M (˙z ) + C (z)e(u) : e(u) − Gc
1
in (0, T) × ,
(1 − z) − div∇z 0
(27a) in (0, T) × . (27b)
for the displacement u : [0, T ] × → Rd and the phase-field z : [0, T ] × → [0, 1] with z = 1 for the undamaged state and z = 0 for the maximally damaged state of the material. In (27a), e(u) = 21 (∇u + (∇u) ) denotes the linearized strain tensor, ρ > 0 is the (constant) mass density, and f V : [0, T ] × → Rd a given volume force. The parameters and Gc are the characteristic length scale of the crack regularization and the fracture toughness appearing in the phase-field fracture energy functional E of the type (4). Equation (27b) is given as a subdifferential inclusion because of the nonsmooth term χ(−∞,0] . The evolution laws (27a)–(27b) are complemented by the boundary and initial conditions u(t) = 0
(D(z)e(u) ˙ + C(z)e(u) n = f S Gc ∇z · n = 0 u(0) = u 0 u(0) ˙ = u˙ 0 z(0) = z 0
in (0, T) × ∂ D
(27c)
in (0, T) × ∂ N , in (0, T) × ∂, in ,
(27d) (27e) (27f)
in , in ,
(27g) (27h)
Above in (27e), ∂ denotes the boundary of and n the outer unit normal vector to ∂. In (27c), ∂ D defines the Dirichlet boundary for the displacements and ∂ N =
Approximation Schemes for Materials with Discontinuities
541
∂\∂ D the Neumann boundary, where the surface load f S : [0, T ] × ∂ N → Rd is active. The functions u 0 , u˙ 0 , and z 0 are given initial data for u and z, respectively. The phase-field energy functional E : [0, T ] × U × X → R associated with system (27) is very similar to (4) and here takes the form E(t, u, z) :=
−
1 1 C(z)e(u) : e(u) + Gc (1 − z)2 + |∇z|2 − f v (t) · u 2 2 2
∂N
dx
f S · u dS .
(28) As in Sect. 2 we also introduce the kinetic energy K : W → [0, ∞) and the viscous dissipation potential V : U → [0, ∞), which here take the form K(u) ˙ :=
ρ 2 |u| ˙ dx and V(u) ˙ := 2
1 D(z)e(u) ˙ : e(u) ˙ dx . 2
(29)
We formally understand system (27) as a viscous approximation of a rate-independent evolution of the phase-field parameter z. We will thus investigate the limit M → 0 in (26) and hence in (27b). In the limit this leads to a rate-independent, non-smooth potential R : Z → [0, ∞], which is here given by R(v) :=
χ(−∞,0] (v) dx .
(30)
In the rate-independent limit M → 0 the evolutionary inclusion (27b) for z will thus formally turn into (1 − z) + div∇z 0 in (0, T) × . (31) While the potential R M keeps rates z˙ with values in Z M = L 2 (), this regularity will be lost with M → 0 and one will only find that z is of bounded variation in time. For the definition of the above functionals and in the subsequent exposition we will make use of the following abbreviations for function spaces ∂χ(−∞,0] (˙z ) + C (z)e(u) : e(u) − Gc
1
Z := L 1 () , Z M := L 2 () , X := H 1 () , Y := H 1 () ∩ L ∞ () , (32a) U := {v ∈ H 1 (, Rd ), v = 0 on ∂ D }, W := L 2 (; Rd ) .
(32b)
Definition 4 We denote the damped inertial system with viscous regularization from (27) by the tuple (U, W, Z M , V, K, R M , E). The damped inertial system obtained in the rate-independent limit M → 0 will be denoted by (U, W, Z, V, K, R, E). In this section we discuss the convergence of a numerical scheme to find solutions for system (U, W, Z, V, K, R, E). Solutions of (U, W, Z, V, K, R, E) are defined in a weak sense in the following way:
542
S. Bartels et al.
Definition 5 (Solutions of (U, W, Z, V, K, R, E)) A pair (u, z) : [0, T] → U × X is a solution of (U, W, Z, V, K, R, E) if it satisfies the following four conditions: • one-sided variational inequality for z:
Gc 1 C (z(t))e u(t) : e u(t) − 1 − z(t) η + Gc ∇z(t) · ∇η d x ≥ 0 2 (33a)
for all t ∈ [0, T) and for all η ∈ Y such that η ≤ 0 a.e. in ; •unidirectionality:for all t1 < t2 ∈ [0, T] it is z(t2 ) ≤ z(t1 ) a.e. in ;
(33b)
• weak formulation of the momentum balance for all t ∈ [0, T] : t
u(t) ˙ · v(t) dx − ρ u(r ˙ ) · v˙ (r ) dx dr 0 t D(z)e(u) ˙ + C(z)e(u) : e(v) dx dr + 0 t f (r ), v(r ) U∗ ,U dr u(0) ˙ · v(0) dx + =ρ
ρ
0
for all v ∈ L 2 (0, T; U) ∩ W 1,1 (0, T; L 2 (, Rd )) ; (33c) • energy-dissipation balance for all t ∈ [0, T): ρ 2
t |u(t)| ˙ 2 dx + E(t, u(t), z(t)) + D(z)e(u) ˙ : e(u) ˙ dx dr 0 t ρ 2 |u(0)| ˙ dx + E(0, u(0), s(0)) + ∂t E r, u(r ), z(r ) dr . = 2 0
(33d)
Remark 3 (Semistable energetic solution of (U, W, Z, V, K, R, E)) In fact, we obtain that solutions of (U, W, Z, V, K, R, E) in the sense of Definition 5 also satisfy the semistability inequality for all t ∈ [0, T): E(t, u(t), z(t)) ≤ E(t, u(t), z˜ ) + R(˜z − z(t)) for all z˜ ∈ X
(34)
with E from (28) and R from (30). Thus, solutions of (U, W, Z, V, K, R, E) are also semistable energetic solutions in the sense of Definition 2. It is the aim of this section to show the existence of solutions for system (U, W, Z, V, K, R, E) in the sense of Definition 5 by discrete approximation. For this, we will combine a staggered time-discrete scheme with a P1 finite-element discretization in space to find weak solutions of system (U, W, Z M , V, K, R M , E)
Approximation Schemes for Materials with Discontinuities
543
corresponding to (27), see (48). While the numerical computation of solutions for the discrete version of (27a) reduces to solving a linear system of equations, solving for the discrete version of (27b) is more involved. For this we propose to regularize the non-smooth viscous dissipation potential by a smoothened version of the Yosidaregularization, cf. (42) for more details. In this way, e.g., a Newton’s method will be applicable to solve the discretized version of the nonlinear problem, where the nonlinearities stem from the nonlinear dependence of the tensor C on z, cf. (36), and from the Yosida-regularization. We show that the approximate solutions obtained by the staggered Galerkin scheme (48) satisfy a discrete version of the notion of solution given in Definition 5. However, since the discrete nonlinear problem will only be solved approximately, error terms will appear. We derive sufficient conditions to control the error terms, so that convergence of the approximate solutions can be shown. These sufficient conditions can serve as stopping criteria for the numerical algorithm. The outline of this Section is as follows: After specifying the basic assumptions on the domain and given data in Sect. 4.1.1, we introduce the staggered Galerkin scheme (48) in Sect. 4.1.2 and show the existence of approximate solutions, cf. Proposition 12. This already leads to a first set of qualifying conditions, cf. (51). Subsequently, in Sect. 4.1.3 we deduce a second set of qualifying criteria, cf. (55), (56), and (59), that allow it to find uniform bounds for the approximate solutions and we show the convergence of (a subsequence of) approximate solutions to a solution of (U, W, Z, V, K, R, E) in the sense of Definition 5, cf. Theorem 3.
4.1 Basic Assumptions and Main Result 4.1.1
Basic Assumptions
Assumptions on the domain: We assume that ⊂ Rd is a bounded domain with Lipschitz-boundary ∂, such that ∂ D ⊂ ∂ is non-empty and relatively open and ∂ N := ∂ \ ∂ D .
(35)
depend on the Assumptions on the tensors C, D: The tensors C, D : R → Rd×d×d×d sym phase-field parameter z through functions wC , wD : R → [w0 , w∗ ] being prefactors ˜ D, ˜ i.e., to constant tensors C, ˜ C(z) := wC (z)C
and
˜ for all z ∈ R, D(z) := wD (z)D
˜ D. ˜ with constant, symmetric, and positively definite tensors C, For the functions wC , wD we further assume: • Differentiability and boundedness:
(36a) (36b)
544
S. Bartels et al.
wD ∈ C 1 (R, [w0 , w∗ ]), wC ∈ C 2 (R, [w0 , w∗ ]),
(37a)
∗
with constants 0 < w0 < w , • Monotonicity:
wC (z) ≥ 0 and wD (z) ≥ 0 for all z ∈ R,
(37b)
• Locally constant growth: wC (z) = 0 and wD (z) = 0.
(37c)
∗
for all z ∈ (−∞, 0] ∪ [z , ∞), • Local convexity: There are z ∗ ∈ (1, z ∗ ) and w∗ ∈ (w0 , w∗ ) s.t. wC : [0, z ∗ ] → [w0 , w∗ ] is convex.
(37d)
A direct implication of (36) and (37a) is the existence of constants 0 < cD0 < cD∗ and 0 < cC0 < cC∗ such that for all z ∈ R and for all A ∈ Rd×d sym there holds: cD0 |A|2 ≤ D(z)A : A ≤ cD∗ |A|2 and cC0
|A| ≤ C(z)A : A ≤ 2
cC∗
|A| . 2
(38a) (38b)
Remark 4 (Discussion of the assumptions (37)) Assumption (37a) on the boundedness of wC and wD is crucial to guarantee the existence of discrete solutions because it ensures the uniform bounds from below in (38) and thus the coercivity of the energy functional (28) and the viscous dissipation potential (26). We further impose in (37a) the regularity wC ∈ C 2 (R, [w0 , w∗ ]) in order to comply with the requirements of a Newton’s method to numerically solve the nonlinear equation (48a). Monotonicity assumption (37b) reflects the physical property that an increase of damage leads to a decrease of the stresses, since it ensures wC (z 1 ) ≤ wC (z 2 ) as well as wD (z 1 ) ≤ wD (z 2 ) for all z 1 ≤ z 2 , and since an increase of damage is represented by a decrease of the values of z in our model. As a direct implication of the boundedness (37a) and the monotonicity (37b) the functions wC and wD need to be constant on subintervals of R as further stated in (37c). It can be shown for solutions (u, z) of (U, W, Z, V, K, R, E) that z takes values in [0, 1] a.e. in and thus can be understood as the volume fraction of undamaged material, cf. Theorem 3. To obtain this result it is important to make sure in (37c) that the subintervals, where wC and wD have constant growth, do not intersect with the interval [0, 1]. We also refer to [62] where similar growth assumptions and resulting observations have been made. Finally, convexity assumption (37d) allows it to deduce that solutions (u, z) of (U, W, Z, V, K, R, E) satisfy the upper energy-dissipation
Approximation Schemes for Materials with Discontinuities
545
Fig. 6 Qualitative shape of wC : R → [w0 , w∗ ]: The function is constant on (−∞, 0] ∪ [z ∗ , ∞), monotonously increasing on R, and convex on (−∞, z ∗ ) with z ∗ > 1 but non-convex on [z ∗ , z ∗ )
estimate (18), which can be shown even to hold as a balance (33d). This result is important from a thermodynamical point of view and from a general mathematical point of view it provides compactness properties. Alltogether, assumptions (37) in particular imply that wC qualitatively is of the form indicated in Fig. 6. However, monotonicity (37b) together with the boundedness (37a) further require wC to be non-convex on a subinterval, which is given by [z ∗ , z ∗ ] with z ∗ > 1 in Fig. 6. In turn, the non-convexity of wC on the interval [z ∗ , z ∗ ] entails that upper energy-dissipation estimates are not yet available for approximating solutions of the fully discretized problem. Remark 5 (Comparison of wC from (37) with other degradation functions from literature) We finally point out that on the interval [0, 1] the degradation function wC may take any polynomial form commonly used in literature, such as, e.g., wC (z) := η + z 2 with a constant η > 0 in the standard Ambrosio–Tortorelli functional, cf. [45, 57, 76] or wC (z) := (1 − z)2 in [65]. Other variants like wC (z) := (ag − 2)(1 − z)3 + (3 − ag )(1 − z)2 with ag ∈ (0, 2] in [49] or wC (z) := a(z 3 − z 2 ) + 3z 2 − 2z 3
(39)
in [15] are non-convex in the interval [0, 1] for the typical choice of parameters. This is used to model a linear behaviour of the non-fractured material right before crack initiation and accomplished with a horizontal slope at the transition between sound and damaged. However, in our work convexity assumption (37d) is a technical but crucial tool to deduce the convergence of the approximation method (49). In order to comply with the requirements of mechanics and thus also to allow for nonconvex degradation functions we propose here to formulate the degradation function in dependence of the mesh size h, as for example in (39) with a = a(h) to recover convexity in the limit as h → 0. Assumptions on the given data: For the volume force f V in (27a) and the surface force f S in (27d) we assume here the regularity f V ∈ C 1,1 ([0, T]; U∗ ) and f S ∈ C 1,1 ([0, T]; L 2 (∂ N , Rd )), i.e., the external loadings are continuously differentiable in time and the time-derivative is Lipschitz-continuous with values in (a subspace of) the spatial dual. We then define the combined external loading f by
546
S. Bartels et al.
f (t), v U∗ ,U := f V (t), v U∗ ,U +
∂N
f S (t) · v dHd−1 for all v ∈ U .
(40a)
Above regularity assumptions on f V and f S imply the following properties for f : • Regularity: (40b) f ∈ C 1,1 [0, T ]; U∗ , • Boundedness of the time-derivative: sup f˙(t)U∗ < ∞ ,
(40c)
t∈[0,T]
• Lipschitz-continuity of the time-derivative: There is c L > 0 such that for all t1 , t2 ∈ [0, T] : f˙(t1 ) − f˙(t2 )U∗ ≤ c L |t1 − t2 | .
(40d)
Additionally, we impose for the initial data in (27f)–(27h): u 0 ∈ U , u˙ 0 ∈ U,
(41a)
z 0 ∈ X such that z 0 (x) ∈ [0, 1] for almost all x ∈ .
(41b)
Yosida-regularization and mollified max-function: For the numerical method we propose to replace the non-smooth dissipation potential R M by a smooth approximation that allows it to compute second derivatives. This can be achieved using a smoothened variant of the Yosida-regularization and the regularization parameter will be chosen in dependence of the time-step size τ . For this, the characteristic function χ(−∞,0] enforcing unidirectionality in (26) will be approximated by r →
Nτ |m τ (r )|2 2
(42a)
where m τ : R → [0, ∞) denotes a regularization of the function max{·, 0}, given as ⎧ τ ⎪ r ≥τ, ⎨r − 2 3 r r4 m τ (r ) := τ 2 − 2τ 3 r ∈ (0, τ ) , ⎪ ⎩ 0 r ≤ 0.
(42b)
We refer to [60, Sect. 4] for further details on this construction. In this way, R M in (26) will be replaced in the discrete scheme by R Mτ (v) :=
M 2 Nτ |v| + |m τ (v)|2 2 2
and we write R Mτ for the corresponding integral functional.
(42c)
Approximation Schemes for Materials with Discontinuities
4.1.2
547
Discretization of (U, W, Z M , V, K, R M , E) in Space and Time
Our strategy to find weak solutions for (U, W, Z_M, V, K, R_M, E) consists in a discretization in time and space using FEM. In the following we introduce the notation for the discrete setting and present the discrete scheme below in (49). Subsequently, in Sect. 4.1.3 we discuss the convergence of the method.

Discretization in space: For a family (T_h)_h of triangulations of Ω with mesh size h = sup_{T∈T_h} diam T let N_h, E_h be the sets of nodal points and edges, respectively. For the infinite-dimensional Banach space V ∈ {X, U} we consider the finite-element spaces V_h of piecewise affine-linear functions. We assume (T_h)_h to be such that the finite-dimensional spaces contain each other successively as h → 0, i.e., V_{h_1} ⊂ V_{h_2} ⊂ V for any h_2 < h_1. In this way, the finite-dimensional spaces V_h are dense in V, i.e., V = \overline{\bigcup_h V_h}. Further, let N_h be the number of vertices in N_h. Then N_h coincides with the dimension of the FE-space in the scalar case V_h = X_h, while for the vectorial case V_h = U_h the space dimension is given by dN_h. Let φ := (φ_j)_{j=1}^{N_h} denote the vector of (suitably ordered) nodal basis elements for X_h, given by the scalar hat functions with φ_j(x_j) = 1 in node x_j and φ_j(x_i) = 0 for i ≠ j. The dN_h nodal basis functions for U_h are then given as (φ_l)_{l=1}^{dN_h} = (φ_j e_1, …, φ_j e_d)_{j=1}^{N_h}, where e_i, i = 1, …, d, are orthonormal basis vectors of R^d. In this way, the elements z ∈ X_h and u ∈ U_h are represented by linear combinations z = Σ_{j=1}^{N_h} z_j φ_j and u = Σ_{l=1}^{dN_h} u_l φ_l of the basis elements, using coefficient vectors z = (z_j)_{j=1}^{N_h} ∈ R^{N_h} and u = (u_j)_{j=1}^{dN_h} ∈ R^{dN_h}. For functions η ∈ C(Ω̄) and v ∈ C(Ω̄, R^d) we further introduce the scalar and vectorial nodal interpolants as follows
\[
P_h^X : C(\bar\Omega) \to X_h, \qquad P_h^X(\eta) := \sum_{x_i\in\mathcal{N}_h} \eta(x_i)\,\varphi_i, \tag{43a}
\]
\[
P_h^U : C(\bar\Omega,\mathbb{R}^d) \to U_h, \qquad P_h^U(v) := \sum_{x_i\in\mathcal{N}_h}\sum_{l=1}^{d} \big(v(x_i)\cdot e_l\big)\,\varphi_i\, e_l. \tag{43b}
\]
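For the scalar interpolant (43a), a minimal one-dimensional sketch (an illustration under the assumption of a uniform mesh of [0, 1]; not the implementation used here) reduces to collecting nodal values and evaluating the resulting piecewise affine function:

    import numpy as np

    def nodal_coefficients(eta, nodes):
        # Coefficients of P_h^X(eta) with respect to the hat-function basis: eta(x_i)
        return np.array([eta(x) for x in nodes])

    def evaluate_p1(coeffs, nodes, x):
        # A piecewise affine function is linear between neighbouring nodes
        return np.interp(x, nodes, coeffs)

    nodes = np.linspace(0.0, 1.0, 11)
    c = nodal_coefficients(lambda x: x * (1.0 - x), nodes)
    print(evaluate_p1(c, nodes, 0.25))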
Then, P_h^X(η) → η strongly in X for any η ∈ X ∩ (C^∞(R^d)|_Ω), as well as P_h^U(v) → v strongly in U for any v ∈ U ∩ (C^∞(R^d)|_Ω)^d.

Remark 6 By the choice of piecewise affine-linear finite-element spaces, the density of X_h ⊂ X can be assumed for all h > 0. In certain cases these approximations have to meet some additional constraints. For the initial datum z = z_0 ∈ X with z ∈ [0, 1] a.e. in Ω it must be guaranteed that the approximations in the finite-element spaces satisfy this bound as well, which can be justified as follows: First, by density of smooth functions in X one finds a sequence (η_l)_l ⊂ X ∩ (C^∞(R^d)|_Ω) such that η_l → z strongly in X. Now, let (ε_l)_l ⊂ R be such that ε_l → 0 as l → ∞. Projection of η_l onto the box [ε_l, 1 − ε_l] defines the truncated functions η̃_l := min{1 − ε_l, max{η_l, ε_l}}. Then, because of ‖η̃_l‖_X ≤ ‖η_l‖_X, the sequence (η̃_l)_l is uniformly bounded in the separable Hilbert space X and thus admits a (not relabeled) subsequence such that
\[
\tilde\eta_l \rightharpoonup \tilde z \quad\text{weakly in } X. \tag{44}
\]
Now, we have η̃_l → z in L²(Ω), which can be seen by
\[
\begin{aligned}
\int_\Omega |\tilde\eta_l - z|^2 \,\mathrm{d}x
&= \int_{[\eta_l\in[0,1]\setminus[\varepsilon_l,1-\varepsilon_l]]} |\tilde\eta_l - z|^2 \,\mathrm{d}x
+ \int_{[\eta_l\in[\varepsilon_l,1-\varepsilon_l]]} |\tilde\eta_l - z|^2 \,\mathrm{d}x
+ \int_{[\eta_l\in\mathbb{R}\setminus[0,1]]} |\tilde\eta_l - z|^2 \,\mathrm{d}x \\
&\le \int_{[\eta_l\in[0,1]\setminus[\varepsilon_l,1-\varepsilon_l]]} 2\,|\tilde\eta_l - \eta_l|^2 \,\mathrm{d}x
+ \int_{[\eta_l\in[0,1]\setminus[\varepsilon_l,1-\varepsilon_l]]} 2\,|\eta_l - z|^2 \,\mathrm{d}x \\
&\quad + \int_{[\eta_l\in[\varepsilon_l,1-\varepsilon_l]]} |\eta_l - z|^2 \,\mathrm{d}x
+ \int_{[\eta_l\in\mathbb{R}\setminus[0,1]]} 2\,|\eta_l - z|^2 \,\mathrm{d}x + \varepsilon_l\,\mathcal{L}^d(\Omega)\,.
\end{aligned}
\]
The first two terms on the right-hand side converge to 0 as l → ∞ because on the set [η_l ∈ [0, 1] \ [ε_l, 1 − ε_l]] ⊂ Ω we have |η̃_l − η_l| ≤ ε_l, while for the second summand we already know the strong convergence η_l → z in X. For the third term notice that η̃_l = η_l on [η_l ∈ [ε_l, 1 − ε_l]]. For the last term one again knows strong convergence on the whole domain Ω. Altogether it follows that η̃_l → z in L²(Ω) and thus, by (44), z̃ = z. Then lim sup_{l→∞} ‖η̃_l‖_X ≤ lim sup_{l→∞} ‖η_l‖_X = ‖z‖_X, i.e. the convergence of the norms in X, supplemented with the weak convergence η̃_l ⇀ z, implies η̃_l → z strongly in X. At last, mollifying η̃_l one obtains functions η̂_l having the suitable regularity to see that P_h^X(η̂_l) → z strongly in X holds true for the projections (see [40, Corollaries 1.109 and 1.110, p. 61]).

Discretization in time: Let the uniform partition of the time interval [0, T] be given by {0 = t_τ^0 < t_τ^1 < ⋯ < t_τ^{N_τ} = T} with step size τ = T/N_τ. For a function v : [0, T] → V we write v_τ^k := v(t_τ^k) for any node t_τ^k of the partition and introduce the discrete time-derivatives
\[
\mathrm{D}_\tau v_\tau^k := \frac{v_\tau^k - v_\tau^{k-1}}{\tau}\,, \tag{45a}
\]
\[
\mathrm{D}^2_\tau v_\tau^k := \frac{1}{\tau}\big(\mathrm{D}_\tau v_\tau^k - \mathrm{D}_\tau v_\tau^{k-1}\big) = \frac{v_\tau^k - 2\,v_\tau^{k-1} + v_\tau^{k-2}}{\tau^2}\,. \tag{45b}
\]
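In code, the discrete time-derivatives (45) are plain difference quotients; the following lines are a small sketch assuming the values v^k are stored as NumPy arrays on a uniform grid with step τ.

    def D_tau(v_k, v_km1, tau):
        # First discrete derivative (v^k - v^{k-1}) / tau, cf. (45a)
        return (v_k - v_km1) / tau

    def D2_tau(v_k, v_km1, v_km2, tau):
        # Second discrete derivative (v^k - 2 v^{k-1} + v^{k-2}) / tau**2, cf. (45b)
        return (v_k - 2.0 * v_km1 + v_km2) / tau**2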
For the discretization of the given data, and here especially for the external loading f from (40), we use the approximation
\[
f_\tau^k := f(t_\tau^k), \tag{46}
\]
and denote by f_{τh}^k the restriction of f_τ^k ∈ U^* to U_h, where naturally
\[
f_{\tau h}^k \to f_\tau^k \ \text{ strongly in } U^* \text{ as } h\to 0 \quad\text{for all } k\in\{1,\dots,N_\tau\} \text{ and } \tau>0 \text{ fixed}. \tag{47}
\]

Discrete approximation scheme for (U, W, Z_M, V, K, R_M, E): Keep τ > 0 fixed. For the initial data (z_0, u_0, u̇_0) from (41) set z_τ^0 := z_0, u_τ^0 := u_0, and u_τ^{−1} := u_0 − τ u̇_0, and let (z_{τh}^0)_h, (u_{τh}^0)_h, (u_{τh}^{−1})_h with z_{τh}^0 ∈ X_h and u_{τh}^0, u_{τh}^{−1} ∈ U_h for all h > 0
be approximations of the initial data such that z_{τh}^0 → z_τ^0, u_{τh}^0 → u_τ^0 and u_{τh}^{−1} → u_τ^{−1} as h → 0. For each τ, h > 0 fixed, using the discrete initial data (z_{τh}^0, u_{τh}^0, u_{τh}^{−1}), our aim is to find for every time step t_τ^k of the partition solutions z̃_{τh}^k ∈ X_h, ũ_{τh}^k ∈ U_h by solving the following staggered discrete Galerkin scheme: For all k ∈ {1, …, N_τ} find z̃_{τh}^k ∈ X_h, ũ_{τh}^k ∈ U_h such that
\[
\langle \mathrm{D}_z E(t_\tau^k, \tilde u_{\tau h}^{k-1}, \tilde z_{\tau h}^k) + \mathrm{D} R_{M\tau}(\mathrm{D}_\tau \tilde z_{\tau h}^k),\, \eta_h \rangle_{X^*,X} = 0 \quad \text{for all } \eta_h \in Y_h, \tag{48a}
\]
\[
\int_\Omega \rho\, \mathrm{D}^2_\tau \tilde u_{\tau h}^k \cdot v_h \,\mathrm{d}x + \langle \mathrm{D}_u E(t_\tau^k, \tilde u_{\tau h}^k, \tilde z_{\tau h}^k) + \mathrm{D} V(\mathrm{D}_\tau \tilde u_{\tau h}^k),\, v_h \rangle_{U^*,U} = 0 \quad \text{for all } v_h \in U_h. \tag{48b}
\]
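The staggered structure of (48), and of its inexact variant (49) below, can be sketched as the following time-stepping loop; solve_damage_step and solve_momentum_step are hypothetical placeholders for an iterative solver of the nonlinear equation in z and a direct solver of the linear momentum balance, respectively, and are not part of this chapter's implementation.

    def staggered_scheme(z0, u0, um1, times, tau, tol_h, solve_damage_step, solve_momentum_step):
        # times = [t^1, ..., t^N]; returns the approximate trajectory (u^k, z^k)
        z_prev, u_prev, u_prev2 = z0, u0, um1
        history = [(u0, z0)]
        for t_k in times:
            # nonlinear damage equation, solved only up to a residual of size tol_h
            z_k = solve_damage_step(t_k, u_prev, z_prev, tau, tol=tol_h)
            # linear momentum balance with the new z_k as input; solvable exactly
            u_k = solve_momentum_step(t_k, u_prev, u_prev2, z_k, tau)
            history.append((u_k, z_k))
            u_prev2, u_prev, z_prev = u_prev, u_k, z_k
        return history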
We point out that, on an abstract level, it is possible to show the existence of Galerkin solutions (ũ_{τh}^k, z̃_{τh}^k) for system (48); we refer to [80, Proposition 4.2] for a proof. While ũ_{τh}^k is obtained by solving the linear system of Eq. (48b), z̃_{τh}^k is given by the nonlinear system (48a), where the nonlinearity stems from the properties (37) of the degradation function w_C and from the properties (42) of the regularized maximum function m_τ. The abstract existence proof is based on fixed-point arguments for nonlinear systems of equations and verifies that (48a) can be solved exactly. Instead, when applying an iterative method to the nonlinear system (48a), it will only be solved approximately. We denote the approximate solution for (48a) obtained by the numerical method by z_{τh}^k. The approximate solution z_{τh}^k will satisfy (48a) only up to an error, which we will indicate by ε_{τ,h}^k on the right-hand side, see (49a) below. Furthermore, the approximate solution z_{τh}^k is an input in the staggered scheme to solve for the discrete momentum balance, which, due to its linearity, can be solved exactly. With the input z_{τh}^k this results in a solution u_{τh}^k. In conclusion, given the discrete initial data (z_{τh}^0, u_{τh}^0, u_{τh}^{−1}) ∈ X_h × U_h × U_h, the numerical method provides, for any choice of h, τ > 0 fixed and for all k ∈ {1, …, N_τ}, approximate solutions (u_{τh}^k, z_{τh}^k) satisfying
\[
\langle \mathrm{D}_z E(t_\tau^k, u_{\tau h}^{k-1}, z_{\tau h}^k) + \mathrm{D} R_{M\tau}(\mathrm{D}_\tau z_{\tau h}^k),\, \eta_h \rangle_{X^*,X} = \varepsilon_{\tau,h}^k(\eta_h) \quad \text{for all } \eta_h \in Y_h, \tag{49a}
\]
\[
\langle \rho\, \mathrm{D}^2_\tau u_{\tau h}^k + \mathrm{D}_u E(t_\tau^k, u_{\tau h}^k, z_{\tau h}^k) + \mathrm{D} V(\mathrm{D}_\tau u_{\tau h}^k),\, v_h \rangle_{U^*,U} = 0 \quad \text{for all } v_h \in U_h. \tag{49b}
\]
Here, ε_{τ,h}^k(η) indicates that the error induced in (49a) by the numerical method also depends on the test functions η ∈ Y_h, and it has to be ensured by suitable stopping criteria for the numerical algorithm that this error is controlled in such a way that ε_{τ,h}^k(η) ≈ 0. The following proposition provides the existence of approximate solutions for the staggered Galerkin scheme (49) as well as uniform a priori bounds.
Proposition 12 (Existence of approximate solutions and a priori estimates) Let the assumptions (35)–(42) be satisfied. Keep h, τ > 0 and k ∈ {1, …, N_τ} fixed. Then there exists an approximate solution (u_{τh}^k, z_{τh}^k) of the staggered Galerkin scheme (49) for the system (U, W, Z_M, V, K, R_M, E). Moreover, for all k ∈ {1, …, N_τ} there is a constant C̃ so that the approximate solutions (u_{τh}^k, z_{τh}^k) satisfy the a priori bounds
\[
\|u_{\tau h}^k\|_{U} \le \tilde C, \tag{50a}
\]
\[
\|z_{\tau h}^k\|_{X} \le \tilde C \tag{50b}
\]
with a constant C̃ = C̃(k, τ^{−1}) > 0, which is independent of h > 0.

The proof of Proposition 12 is carried out in Sect. 4.2. There it becomes apparent that a suitable stopping criterion for the algorithm to solve (49) must ensure that
\[
\max\big\{ |\varepsilon_{\tau,h}^k(\varphi_j)|,\, |\varepsilon_{\tau,h}^k(z_{\tau h}^k)| \,:\, j = 1,\dots,N_h \big\} \le \mathrm{TOL}(h) \ll 1 \tag{51}
\]
with a suitably chosen tolerance TOL(h) with the property
\[
\mathrm{TOL}(h) \to 0 \quad\text{as } h \to 0. \tag{52}
\]
In (51) the requirement |ε_{τ,h}^k(φ_j)| ≤ TOL(h) for j = 1, …, N_h directly stems from (49a), while |ε_{τ,h}^k(z_{τh}^k)| ≤ TOL(h) is imposed in addition to ensure that the a priori bounds (50) are independent of h > 0. This is important for a limit passage h → 0 while keeping τ and k fixed. For this limit passage the bounds (50) provide sufficient compactness to find suitably convergent subsequences and limit pairs (u_τ^k, z_τ^k), k = 1, …, N_τ, that are solutions of the time-discrete, but space-continuous version of (49). Instead, due to the explicit dependence of C̃ on τ^{−1}, estimate (50) is not sufficient to pass to the limit also with the time-step size τ or to consider the simultaneous limit h = h(τ) as τ → 0, as will be discussed in Sect. 4.1.3 below. For this, further estimates are needed, which can be understood as discrete energy-dissipation estimates perturbed by error terms stemming from the approximation method and from the non-convexity of the degradation function w_C. As we shall see in Sect. 4.1.3, the control of these error terms will lead to additional criteria akin to (51) that impose relations between the fineness of the mesh size h and the time-step size τ.
4.1.3 Convergence of the Staggered Galerkin Scheme (49)
In the following we discuss the convergence of the approximate solutions (u_{τh}^k, z_{τh}^k)_{k=1}^{N_τ} to a pair (u, z) that provides a solution to (U, W, Z, V, K, R, E) in the sense of Definition 5. For this we want to treat a simultaneous limit h → 0 and τ → 0, and thus we consider the mesh size h as a function of the time-step size τ, i.e., from now on we assume that
\[
h = h(\tau). \tag{53}
\]
Yet, we will continue using the notation from Sect. 4.1.2 and only explicitly write h(τ) in sub- or superscripts when relevant. As we shall outline in what follows, the dependence of h on τ can be specified by further criteria akin to (51) that are needed to verify the convergence of the approximate solutions. More precisely, the convergence is obtained from perturbed energy-dissipation estimates that need to be uniform with
respect to the parameters k, τ, and h. We now discuss the main points that provide the criteria for the h(τ)-dependence and refer to Sect. 4.3 for further details: The above-mentioned perturbed energy-dissipation estimate for the approximate solutions is obtained by testing (49b) by τ D_τ u_{τh}^k and (49a) by τ D_τ z_{τh}^k, summing the result and summing up over k ∈ {1, …, N_τ}. Since the criteria for the h(τ)-dependence mainly arise from contributions of (49a), we focus here on these terms. More precisely, applying the above procedure to (49a) results in
\[
\sum_{k=1}^{N_\tau} \big\langle \mathrm{D}_z E(t_\tau^k, u_{\tau h}^{k-1}, z_{\tau h}^k) + \mathrm{D} R_{M\tau}(\mathrm{D}_\tau z_{\tau h}^k),\, z_{\tau h}^k - z_{\tau h}^{k-1} \big\rangle_{X^*,X} = \sum_{k=1}^{N_\tau} \varepsilon_{\tau,h}^k\big(z_{\tau h}^k - z_{\tau h}^{k-1}\big) \tag{54}
\]
and the right-hand side of (54) will appear as a perturbation term in the energy-dissipation estimate. To control this perturbation one has to ensure that
\[
\max\big\{ |\varepsilon_{\tau,h}^k(z_{\tau h}^k - z_{\tau h}^{k-1})| \,:\, k = 1,\dots,N_\tau \big\} \le \mathrm{TOL}(h)\,. \tag{55}
\]
Moreover, making the perturbation term in (54) disappear as h → 0 further requires
\[
N_\tau\, \mathrm{TOL}(h(\tau)) = T\,\frac{\mathrm{TOL}(h(\tau))}{\tau} \to 0 \quad\text{as } \tau \to 0\,, \tag{56}
\]
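As a toy illustration of how (56) constrains the coupling of h and τ (the specific choices below are my own examples, not prescribed by the analysis): if the solver tolerance behaves like TOL(h) = h, then the coupling h(τ) = τ² yields N_τ TOL(h(τ)) ≈ T·τ → 0 as τ → 0.

    def h_of_tau(tau):
        # toy coupling of mesh size and time step; any choice with TOL(h(tau))/tau -> 0 works
        return tau**2

    def check_condition_56(T, tau, TOL=lambda h: h):
        N_tau = int(round(T / tau))
        return N_tau * TOL(h_of_tau(tau))   # approximately T * tau for TOL(h) = h

    print([check_condition_56(1.0, tau) for tau in (0.1, 0.05, 0.01)])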
Condition (56) provides a first refinement of (52) and (53). In addition, a further perturbation of the energy-dissipation estimate arises on the left-hand side of (54) from the term \(\tfrac12\, C'(z_{\tau h}^k)\big(z_{\tau h}^k - z_{\tau h}^{k-1}\big)\, e(u_{\tau h}^{k-1}) : e(u_{\tau h}^{k-1})\), where \(C'(z_{\tau h}^{k}) = w_C'(z_{\tau h}^{k})\,\tilde C\). This error is due to the non-convexity of the degradation function w_C in subsets of Ω where z_* ≤ z_{τh}^k ≤ z^* or z_* ≤ z_{τh}^{k−1} ≤ z^*, cf. (37) and Remark 4. The treatment of this non-convex term requires the control of the integrand term
\[
E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big) := \tfrac12 \Big( C(z_{\tau h}^{k-1}) - C(z_{\tau h}^{k}) + C'(z_{\tau h}^{k})\big(z_{\tau h}^{k} - z_{\tau h}^{k-1}\big) \Big)\, e(u_{\tau h}^{k-1}) : e(u_{\tau h}^{k-1}) \tag{57}
\]
on the set where the non-convexity of C is located, which is the set
\[
B_h^k := \Big( [z_* \le z_{\tau h}^{k} \le z^*] \cap [z_* \le z_{\tau h}^{k-1} \le z^*] \Big) \cup \Big( [z_{\tau h}^{k} \le z_*] \cap [z_{\tau h}^{k-1} \ge z^*] \Big) \cup \Big( [z_{\tau h}^{k} \ge z^*] \cap [z_{\tau h}^{k-1} \le z_*] \Big) \tag{58}
\]
with [f ≤ g] := {x ∈ Ω : f(x) ≤ g(x)}. In other words, the energy-dissipation estimate will also feature the perturbation term \(\sum_{k=1}^{N_\tau} \int_{B_h^k} E_C(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1}))\,\mathrm{d}x\) on its right-hand side. This term can be controlled by (56) if the additional condition
\[
\int_{B_h^k} E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big)\,\mathrm{d}x \le \mathrm{TOL}(h) \tag{59}
\]
is imposed.
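A pointwise check of criterion (59) can be sketched as follows; this is a simplified scalar quadrature-point version with hypothetical names (C_of_z and dC_of_z for the degraded stiffness w_C(z) C and its derivative, strain_sq for e(u^{k−1}) : e(u^{k−1}), weights for quadrature weights), intended only to illustrate the roles of E_C from (57) and of the set B_h^k from (58).

    import numpy as np

    def check_criterion_59(z_k, z_km1, strain_sq, weights, C_of_z, dC_of_z, z_low, z_up, tol_h):
        # E_C from (57), evaluated pointwise with a scalar stiffness for illustration
        E_C = 0.5 * (C_of_z(z_km1) - C_of_z(z_k) + dC_of_z(z_k) * (z_k - z_km1)) * strain_sq
        in_band = lambda z: (z_low <= z) & (z <= z_up)
        # B_h^k from (58): the region where the degradation function is non-convex
        B = (in_band(z_k) & in_band(z_km1)) \
            | ((z_k <= z_low) & (z_km1 >= z_up)) \
            | ((z_k >= z_up) & (z_km1 <= z_low))
        return np.sum(weights[B] * E_C[B]) <= tol_h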
We will deduce in Proposition 13 of Sect. 4.3 that criterion (59) can be met. With the stopping criteria (51) and (55) for the algorithm and with the conditions (56) and (59) on the discretization at hand, we now state the convergence result:

Theorem 3 (Convergence of the staggered Galerkin scheme (49)) Let the assumptions of Proposition 12 be satisfied. Further let the criteria (51) and (55) and the conditions (56) and (59) on the parameters τ and h = h(τ) be satisfied. Then the family of approximate solutions ((u_{τh(τ)}^k, z_{τh(τ)}^k)_{k=1}^{N_τ})_{h(τ)} obtained by the staggered Galerkin scheme (49) provides a subsequence that suitably converges to a limit pair (u, z) ∈ (L^∞(0, T; U) ∩ H^1(0, T; U)) × (L^∞(0, T; X) ∩ BV(0, T; Z)) with the following properties:
1. The pair (u, z) is a solution of (U, W, Z, V, K, R, E) in the sense of Definition 5.
2. There holds 0 ≤ z(t, x) ≤ 1 for L^d-a.e. x ∈ Ω and for all t ∈ [0, T].
3. The pair (u, z) also satisfies the semistability inequality (34).

The proof of Theorem 3 will be discussed in Sect. 4.3. It consists of three major steps, which treat the limit passages in h and τ separately: In a first step the limit passage h → 0 from the fully discrete setting to a space-continuous but time-discrete setting is carried out, leading to the criterion (51). The second step handles the limit passage τ → 0 from the time-discrete to the time-continuous setting. In the third step, the simultaneous limit passage, by suitably choosing a sequence of mesh sizes h = h(τ) → 0 as τ → 0, is justified, and along the lines of this argument the conditions (55), (56), and (59) are observed.
4.2 Proof of Proposition 12

In the following, the parameters h, τ > 0 and k ∈ {1, …, N_τ} are kept fixed. Using the notation of Sect. 4.1.2, the finite-element scheme (49) can be rewritten as a system of (non-)linear equations for the coefficient vectors z_{τh}^k = (z_{τhi}^k)_{i=1}^{N_h} ∈ R^{N_h} and u_{τh}^k ∈ R^{dN_h}. More precisely, testing (49a) with the basis functions φ_j of X_h, j ∈ {1, …, N_h}, we find the nonlinear system of N_h equations
\[
\begin{aligned}
\varepsilon_{\tau,h}^k(\varphi_j)
&= \big\langle \mathrm{D}_z E(t_\tau^k, u_{\tau h}^{k-1}, z_{\tau h}^k) + \mathrm{D} R_{M\tau}(\mathrm{D}_\tau z_{\tau h}^k),\, \varphi_j \big\rangle_{X^*,X} \\
&= \int_\Omega \Big( \tfrac12\, C'(z_{\tau h}^k)\, e(u_{\tau h}^{k-1}) : e(u_{\tau h}^{k-1}) + \tfrac{N_\tau}{2}\, \tfrac{\mathrm{d}}{\mathrm{d}z}\, m_\tau^2\big(\tfrac{1}{\tau}(z_{\tau h}^k - z_{\tau h}^{k-1})\big) \Big)\, \varphi_j \,\mathrm{d}x \\
&\quad + \sum_{i=1}^{N_h} \Big( \int_\Omega \big(\tfrac{M}{\tau} + G_c\big)\, \varphi_i\, \varphi_j \,\mathrm{d}x \Big)\, z_{\tau hi}^k + \sum_{i=1}^{N_h} \Big( \int_\Omega G_c\, \nabla\varphi_i \cdot \nabla\varphi_j \,\mathrm{d}x \Big)\, z_{\tau hi}^k \\
&\quad - \int_\Omega \Big( G_c + \tfrac{M}{\tau}\, z_{\tau h}^{k-1} \Big)\, \varphi_j \,\mathrm{d}x
\end{aligned} \tag{60}
\]
for j ∈ {1, …, N_h}. Recalling that z_{τh}^k = Σ_{i=1}^{N_h} z_{τhi}^k φ_i and that C'(·) and (d/dz) m_τ²(·) are nonlinear functions by assumptions (36), (37), and (42), we observe that the first integral term on the right-hand side constitutes a nonlinear function f(z_{τh}^k) = (f_j(z_{τh}^k))_{j=1}^{N_h} of the coefficient vector z_{τh}^k. Instead, the second and the third term on the right-hand side are linear in z_{τh}^k and can be rewritten as matrix-vector multiplications with matrices M_1 and M_2 collecting the integrals over the basis elements and the parameters. Finally, the last term on the right-hand side is independent of z_{τh}^k and we denote it by p. In this way, the above nonlinear system of equations rewrites as
(61)
Here, a suitable numerical method, such as the Newton’s method, can be applied to approximate roots of the nonlinear function g. A stopping criterion for the algo k Nh (ϕ j ) j=1 ≈ 0. This can be ensured rithm has to be chosen such that g(zkτ h ) = τ,h by enforcing in the stopping criterion that k (ϕ j )|, j = 1, . . . , Nh ≤ TOL(h) 1 max |τ,h
(62)
with a suitably chosen tolerance TOL(h) such that TOL(h) → 0 as h → 0. Similarly, testing (49b) with the nodal basis ϕ j for Uh , j = 1, . . . , dNh , we obtain the system of dNh linear equations 0 =τ 2 ρD2τ u kτ h + Du E(tτk , u kτ h , z τk h ) + DV(Dτ u kτ h ), ϕ j U∗ ,U dNh 2 = u kτ hi ρϕ i ·ϕ j dx + τ C(z τk h ) + τ D(z τk h ) e(ϕ i ) : e(ϕ j ) dx i=1
+ρ
k−2 k−1 k 2 k (−2u k−1 τ h + u τ h ) · ϕ j − τ D(z τ h )e(u τ h ) : e(ϕ j ) dx − τ f τ h , ϕ j U∗ ,U
for j ∈ {1, . . . , dNh }. We see that the first sum on the right-hand side can be rewritten as a matrix-vector multiplication of the coefficient vector ukτ h with two matrices M3 and M4 , which gather the integrals over the basis elements and the material tensors C(z τk h ) and D(z τk h ). Moreover, the remaining terms on the righthand side are independent of ukτ h and we denote them by b. Hence, the above system of linear equations reformulates as (M3 + M4 )ukτ h = b . This linear system is solvable since the matrices M3 and M4 are invertible due to the linear independence of the basis elements and thanks to the coercivity of the tensors C(z τk h ) and D(z τk h ) given in (38). To verify the uniform a priori bounds (50) we argue by induction, i.e., we assume k−1 u + u k−2 + z k−1 ≤ C τh τh τh U U X
(63)
for all h > 0 and show that the approximate solutions (u_{τh}^k, z_{τh}^k) at step k are bounded independently of h. We note that (63) is indeed an outcome of the induction argument below, starting out from uniformly bounded initial data as given by (41). For the argument, we test (49a) and (49b) with the approximate solutions z_{τh}^k and u_{τh}^k. Summing these two relations results in
\[
\begin{aligned}
\varepsilon_{\tau,h}^k(z_{\tau h}^k)
&= \big\langle \mathrm{D}_z E(t_\tau^k, u_{\tau h}^{k-1}, z_{\tau h}^k) + \mathrm{D} R_{M\tau}(\mathrm{D}_\tau z_{\tau h}^k),\, z_{\tau h}^k \big\rangle_{X^*,X}
+ \int_\Omega \rho\, \mathrm{D}^2_\tau u_{\tau h}^k \cdot u_{\tau h}^k \,\mathrm{d}x \\
&\quad + \int_\Omega \Big( D(z_{\tau h}^k)\, e(\mathrm{D}_\tau u_{\tau h}^k) + C(z_{\tau h}^k)\, e(u_{\tau h}^k) \Big) : e(u_{\tau h}^k)\,\mathrm{d}x
- \big\langle f_{\tau h}^k,\, u_{\tau h}^k \big\rangle_{U^*,U}\,.
\end{aligned} \tag{64}
\]
At this point we see that, in order to obtain a uniform bound C̃ that is independent of h as in (50), we have to make sure for the approximate solutions that also
\[
|\varepsilon_{\tau,h}^k(z_{\tau h}^k)| \le \mathrm{TOL}(h) \ll 1 \tag{65}
\]
in addition to (62). Now, the right-hand side of (64) can be further estimated using standard arguments; we refer to [80, Proposition 3.2] for the details. In this way, we ultimately obtain from (64) an estimate of the form
\[
\begin{aligned}
&\int_\Omega \Big( \tfrac{M}{2\tau}\,\big|z_{\tau h}^k\big|^2 + \tfrac{G_c}{2}\,\big|\nabla z_{\tau h}^k\big|^2 \Big)\,\mathrm{d}x + c\,\big\|u_{\tau h}^k\big\|_U^2 \\
&\quad \le \varepsilon_{\tau,h}^k(z_{\tau h}^k) + c_4\, T + \tfrac{G_c}{2}\,\mathcal{L}^d(\Omega) + c_5\,\big\|f_{\tau h}^k\big\|_{U^*}^2
+ \tfrac{M}{2\tau}\,\big\|z_{\tau h}^{k-1}\big\|_{L^2}^2
+ \tfrac{c\,\rho}{\tau^2}\Big( \big\|u_{\tau h}^{k-1}\big\|_{L^2}^2 + \big\|u_{\tau h}^{k-2}\big\|_{L^2}^2 \Big)
+ c\,\big\|e(u_{\tau h}^{k-1})\big\|_{L^2}^2\,,
\end{aligned}
\]
where it was used that on [0, τ), for τ ≪ 1, m_τ and m_τ′ can be estimated from above by τ and 1, respectively, while c, c_4, c_5 > 0 are constants. The right-hand side indeed provides a constant C̃ that depends on the approximate solutions from the previous time step and on τ^{−1}, but which is independent of h thanks to (63) and (65). This finishes the proof of the a priori estimates (50) and completes the proof of Proposition 12.

Remark 7 (Newton's method) For h, τ, k fixed, Newton's method to find roots of the nonlinear equation (61) takes in every iteration step α the form z_α^k = z_{α−1}^k − Dg(z_{α−1}^k)^{−1} g(z_{α−1}^k) =: N_g(z_{α−1}^k) with the Newton operator N_g. Here, the nonlinear part f of g in (61) depends on C′. Thus, for N_g to be meaningful requires w_C ∈ C²(R, [w_0, w^*]), as demanded in (37). If, in addition, w_C ∈ C³(R, [w_0, w^*]), one can use a Taylor expansion near a
root a of g to estimate
\[
\big\| N_g(z_\alpha^k) - a \big\| \le \big\| \mathrm{D}g(z_\alpha^k)^{-1} \big\|\, c\, \big\| \mathrm{D}^2 g(\xi) \big\|\, \big\| z_\alpha^k - a \big\|^2 \tag{66}
\]
with ξ on a straight line between a and z_α^k. If Dg(·)^{−1} and D²g(·) are uniformly bounded in a neighbourhood U(a) of a and defining K := c (inf_{z∈U(a)} |Dg(z)|)^{−1} sup_{z∈U(a)} ‖D²g(z)‖, then (66) implies, with d_α := K‖z_α^k − a‖, that
\[
K\,\big\| z_\alpha^k - a \big\| = d_\alpha \le d_{\alpha-1}^2 \le \dots \le d_0^{\,2^\alpha} = \big( K\,\| z_0^k - a \| \big)^{2^\alpha}. \tag{67}
\]
By induction one finds quadratic convergence to a, provided the initial value z_0^k is located in a neighbourhood of a such that ‖z_0^k − a‖ < 1/K.
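A generic sketch of the Newton iteration discussed in Remark 7, applied to the nonlinear system g(z) = 0 from (61) with the residual-based stopping criterion (62); the callables g and Dg stand for the assembled residual and its Jacobian and are placeholders, not the implementation of this chapter.

    import numpy as np

    def newton_damage_solve(z_init, g, Dg, tol_h, max_iter=50):
        z = np.array(z_init, dtype=float)
        for _ in range(max_iter):
            residual = g(z)                           # equals (eps_{tau,h}^k(phi_j))_j
            if np.max(np.abs(residual)) <= tol_h:     # stopping criterion (62)
                return z
            z = z - np.linalg.solve(Dg(z), residual)  # Newton update z <- z - Dg(z)^{-1} g(z)
        raise RuntimeError("Newton iteration did not reach TOL(h)")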
4.3 Outline of the Proof of Convergence Theorem 3

The strategy of the proof consists in three major steps:

Step 1: For τ > 0 fixed, starting from the approximate solutions ((u_{τh}^k, z_{τh}^k)_{k=1}^{N_τ})_h given by Proposition 12, we pass to the limit h → 0. By compactness arguments we find a limit pair (u_τ^k, z_τ^k)_{k=1}^{N_τ}, for which we show that it satisfies a space-continuous but time-discrete version of (49). The results of this step are summarized in Theorem 4 below, and we refer to [80] for the details of the proof.

Step 2: We pass to the limit τ → 0 and show that a subsequence of the time-discrete solutions ((u_τ^k, z_τ^k)_{k=1}^{N_τ})_τ converges to a limit pair that is a solution of (U, W, Z, V, K, R, E) in the sense of Definition 5. The results of this step are collected in Theorem 5, and we refer to [80] for a proof.

Step 3: We show that the simultaneous limit h = h(τ), τ → 0 can be carried out by selecting a suitable diagonal sequence that complies with the constraints (51), (55), and (59). We further show that the constraint (59) on the discretization can be met. Step 3 will be carried out in Sect. 4.3.2.
4.3.1 Results of Steps 1 and 2
Theorem 4 (Existence of solutions in the space-continuous setting) Let the assumptions of Theorem 3 be satisfied. Keep τ > 0 fixed. Then the following statements hold true:
1. For each k ∈ {1, …, N_τ} there is a (not relabeled) subsequence (u_{τh}^k, z_{τh}^k)_h and limit pairs (u_τ^k, z_τ^k) ∈ U × X such that
\[
u_{\tau h}^k \rightharpoonup u_\tau^k \ \text{ weakly in } U, \tag{68a}
\]
\[
z_{\tau h}^k \rightharpoonup z_\tau^k \ \text{ weakly in } X \tag{68b}
\]
as h → 0.
2. Assume that the discrete initial data satisfy
\[
u_{\tau h}^0 \to u_\tau^0 \ \text{ in } U \ \text{ and } \ u_{\tau h}^{-1} \to u_\tau^{-1} \ \text{ in } U, \tag{69a}
\]
\[
z_{\tau h}^0 \to z_\tau^0 \ \text{ in } X. \tag{69b}
\]
Then, for each k ∈ {1, …, N_τ} the limit pair (u_τ^k, z_τ^k) ∈ U × X is a solution of the time-discrete problem
\[
0 = \big\langle \mathrm{D}_z E(t_\tau^k, u_\tau^{k-1}, z_\tau^k) + \mathrm{D} R_{M\tau}(\mathrm{D}_\tau z_\tau^k),\, \eta \big\rangle_{X^*,X} \quad\text{for all } \eta \in Y, \tag{70a}
\]
\[
0 = \int_\Omega \Big( \rho\,\mathrm{D}^2_\tau u_\tau^k \cdot v + \big( D(z_\tau^k)\, e(\mathrm{D}_\tau u_\tau^k) + C(z_\tau^k)\, e(u_\tau^k) \big) : e(v) \Big)\,\mathrm{d}x - \big\langle f_\tau^k, v \big\rangle_{U^*,U} \quad\text{for all } v \in U. \tag{70b}
\]
3. Suppose that (69) is satisfied. Then, in addition to (68), for each k ∈ {1, …, N_τ} also the following improved convergence results hold true:
\[
u_{\tau h}^k \to u_\tau^k \ \text{ strongly in } U, \tag{71a}
\]
\[
z_{\tau h}^k \to z_\tau^k \ \text{ strongly in } X. \tag{71b}
\]
4. Suppose that z_{τh}^0 ∈ [0, 1] a.e. in Ω. Then, for each k ∈ {1, …, N_τ} the limit function z_τ^k satisfies
\[
z_\tau^k \in Y, \ \text{ in particular } \ 0 \le z_\tau^k \le 1 \ \text{ a.e. in } \Omega. \tag{72}
\]
5. The time-discrete solutions (u_τ^k, z_τ^k)_{k=0}^{N_τ} of (70) satisfy the following upper energy-dissipation estimate for each L ∈ {1, …, N_τ}:
\[
\begin{aligned}
&\frac{\rho}{2}\int_\Omega \big|\mathrm{D}_\tau u_\tau^L\big|^2 \,\mathrm{d}x
+ \sum_{k=1}^{L} \tau \int_\Omega D(z_\tau^k)\, e(\mathrm{D}_\tau u_\tau^k) : e(\mathrm{D}_\tau u_\tau^k)\,\mathrm{d}x \\
&\quad + \int_\Omega \Big( \tfrac12\, C(z_\tau^L)\, e(u_\tau^L) : e(u_\tau^L) + G_c\Big( \tfrac12\big(1 - z_\tau^L\big)^2 + \tfrac12\big|\nabla z_\tau^L\big|^2 \Big) \Big)\,\mathrm{d}x
- \big\langle f_\tau^L, u_\tau^L\big\rangle_{U^*,U}
+ \sum_{k=1}^{L} 2\tau\, R_{M\tau}(\mathrm{D}_\tau z_\tau^k) \\
&\le \frac{\rho}{2}\int_\Omega \big|\mathrm{D}_\tau u_\tau^0\big|^2 \,\mathrm{d}x
+ \int_\Omega \Big( \tfrac12\, C(z_\tau^0)\, e(u_\tau^0) : e(u_\tau^0) + G_c\Big( \tfrac12\big(1 - z_\tau^0\big)^2 + \tfrac12\big|\nabla z_\tau^0\big|^2 \Big) \Big)\,\mathrm{d}x \\
&\quad - \big\langle f_\tau^0, u_\tau^0\big\rangle_{U^*,U}
- \tau \sum_{k=1}^{L} \big\langle \mathrm{D}_\tau f_\tau^k,\, u_\tau^{k-1}\big\rangle_{U^*,U}\,.
\end{aligned} \tag{73}
\]
For the approximate solutions (u_{τh}^k, z_{τh}^k)_{k=1}^{N_τ} obtained by solving (49), piecewise constant interpolants \bar v_{τh}, \underline v_{τh} and piecewise affine interpolants v_{τh} for v ∈ {u, z} are introduced, defined for t ∈ (t_τ^{k−1}, t_τ^k], k = 1, …, N_τ, by
\[
\bar v_{\tau h}(t) = v_{\tau h}^k, \qquad \underline v_{\tau h}(t) = v_{\tau h}^{k-1}, \qquad v_{\tau h}(t) = \frac{t - t_\tau^{k-1}}{\tau}\, v_{\tau h}^k + \frac{t_\tau^k - t}{\tau}\, v_{\tau h}^{k-1}. \tag{74}
\]
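The interpolants (74) are straightforward to evaluate; the following sketch (illustration only, with the values stored as a list indexed by k on the uniform grid t^k = kτ) returns the right- and left-continuous piecewise constant interpolants and the piecewise affine interpolant at a time t ∈ (0, T].

    import numpy as np

    def interpolants(values, tau, t):
        # index k with t in (t^{k-1}, t^k]
        k = max(1, int(np.ceil(t / tau - 1e-12)))
        v_km1, v_k = values[k - 1], values[k]
        theta = (t - (k - 1) * tau) / tau            # equals (t - t^{k-1}) / tau
        v_bar, v_under = v_k, v_km1                  # piecewise constant interpolants
        v_affine = theta * v_k + (1.0 - theta) * v_km1
        return v_bar, v_under, v_affine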
Theorem 5 (Existence of solutions in the space- and time-continuous setting) Let the assumptions of Theorem 4 be satisfied. Further suppose that z_τ^0 = z_0, u_τ^0 = u_0 and u_τ^{−1} = u_0 − τ u̇_0 for all τ > 0. Consider the viscosity parameter M in (26) to depend on τ such that M(τ) → 0 as τ → 0. Then the following results hold true:
1. There exists a limit pair (u, z) : [0, T] → U × X and a (not relabeled) subsequence of interpolants of the approximate solutions (\bar u_τ, \underline u_τ, u_τ, \bar z_τ, \underline z_τ, z_τ)_τ such that
\[
\begin{aligned}
\bar u_\tau,\, \underline u_\tau &\rightharpoonup^* u && \text{weakly-}*\text{ in } L^\infty(0,T;U), && \text{(75a)}\\
u_\tau &\rightharpoonup u && \text{weakly in } H^1(0,T;U), && \text{(75b)}\\
\dot u_\tau &\rightharpoonup^* \dot u && \text{weakly-}*\text{ in } L^\infty\big(0,T;L^2(\Omega,\mathbb{R}^d)\big), && \text{(75c)}\\
\bar u_\tau(t),\, \underline u_\tau(t) &\rightharpoonup u(t) && \text{weakly in } U \text{ for all } t\in[0,T], && \text{(75d)}\\
\dot u_\tau(t) &\rightharpoonup \dot u(t) && \text{weakly in } L^2(\Omega,\mathbb{R}^d) \text{ for all } t\in[0,T], && \text{(75e)}\\
\bar z_\tau,\, \underline z_\tau &\rightharpoonup^* z && \text{weakly-}*\text{ in } L^\infty(0,T;X), && \text{(75f)}\\
\bar z_\tau(t) &\rightharpoonup z(t) && \text{weakly in } X \text{ for all } t\in[0,T], && \text{(75g)}\\
\bar z_\tau(t) &\to z(t) && \text{strongly in } L^2(\Omega) \text{ for all } t\in[0,T], && \text{(75h)}\\
\underline z_\tau(t) &\rightharpoonup z(t) && \text{weakly in } X \text{ for all } t\in[0,T], && \text{(75i)}\\
\underline z_\tau(t) &\to z(t) && \text{strongly in } L^2(\Omega) \text{ for all } t\in[0,T]. && \text{(75j)}
\end{aligned}
\]
2. The limit pair (u, z) is a solution of (U, W, Z, V, K, R, E) in the sense of Definition 5.

Corollary 1 Let the assumptions of Theorem 5 be valid. Then:
1. The limit pair (u, z) from Theorem 5 also satisfies the semistability inequality (34) and thus is a semistable energetic solution in the sense of Definition 2.
2. As a result of the energy-dissipation balance (33d) there also holds
\[
\bar z_\tau(t) \to z(t) \ \text{ strongly in } X \ \text{ for all } t\in[0,T]. \tag{76}
\]

4.3.2 Discussion of Step 3: Simultaneous Limit h = h(τ), τ → 0
In the following we verify that it is possible to select a (diagonal) subsequence of (interpolants of) approximate solutions (\bar u_{τ_j h_j}, \underline u_{τ_j h_j}, u_{τ_j h_j}, \bar z_{τ_j h_j}, z_{τ_j h_j})_j converging to a solution (u, z) of (U, W, Z, V, K, R, E) in the sense of Definition 5. By making use of the convergence results in Theorems 4 and 5 and Corollary 1, we provide in Proposition 13 sufficient conditions (55), (56) and (59), which allow us to deduce a uniform upper energy-dissipation estimate (perturbed by error terms). Under these conditions uniform a priori bounds will be available for the approximate solutions,
which provide sufficient compactness so that convergence results in the topologies of (75) can be concluded. These will be sufficient to pass to the limit in the staggered Galerkin scheme (49) and to find a solution of (U, W, Z, V, K, R, E) in the sense of Definition 5. In addition, we will show in Lemma 3 below that, on an abstract level, a better selection is possible. The idea here is that, in accordance with the necessary stopping criteria (51) and (52), h has to be chosen 'as small as possible', so that the space-continuous solution (u_τ^k, z_τ^k)_{k=1}^{N_τ} from Theorem 4 is already approximated 'as well as possible'. We remark that the solutions obtained from the uniform bound in Proposition 13 need not coincide with the one obtained in Lemma 3, because solutions of (U, W, Z, V, K, R, E) are not unique, since the energy functional E(t, ·, ·) is non-convex.

Proposition 13 (Uniform energy-dissipation estimate and validity of fineness criterion (59)) Let the assumptions of Proposition 12 be satisfied and let ((u_{τh}^k, z_{τh}^k)_{k=1}^{N_τ})_{τh} be approximate solutions obtained by the staggered Galerkin scheme (49) such that condition (51) holds true.
1. Then ((u_{τh}^k, z_{τh}^k)_{k=1}^{N_τ})_{τh} comply with an upper energy-dissipation estimate up to an error:
\[
\begin{aligned}
&\sum_{k=1}^{L} \varepsilon_{\tau,h}^k\big(z_{\tau h}^k - z_{\tau h}^{k-1}\big)
+ \sum_{k=1}^{L} \int_{B_h^k} E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big)\,\mathrm{d}x
+ \tau \sum_{k=1}^{L} \big\langle \mathrm{D}_\tau f_{\tau h}^k,\, u_{\tau h}^{k-1}\big\rangle_{U^*,U} \\
&\qquad + \frac{\rho}{2}\int_\Omega \big|\mathrm{D}_\tau u_{\tau h}^0\big|^2\,\mathrm{d}x
+ \int_\Omega \tfrac12\, C(z_{\tau h}^0)\, e(u_{\tau h}^0) : e(u_{\tau h}^0)\,\mathrm{d}x
+ \int_\Omega G_c\Big( \tfrac12\big(1 - z_{\tau h}^0\big)^2 + \tfrac12\big|\nabla z_{\tau h}^0\big|^2 \Big)\,\mathrm{d}x
- \big\langle f_{\tau h}^0,\, u_{\tau h}^0\big\rangle_{U^*,U} \\
&\quad \ge \frac{\rho}{2}\int_\Omega \big|\mathrm{D}_\tau u_{\tau h}^L\big|^2\,\mathrm{d}x
+ \sum_{k=1}^{L} \tau \int_\Omega D(z_{\tau h}^k)\, e(\mathrm{D}_\tau u_{\tau h}^k) : e(\mathrm{D}_\tau u_{\tau h}^k)\,\mathrm{d}x \\
&\qquad + \int_\Omega \Big( \tfrac12\, C(z_{\tau h}^L)\, e(u_{\tau h}^L) : e(u_{\tau h}^L) + G_c\Big( \tfrac12\big(1 - z_{\tau h}^L\big)^2 + \tfrac12\big|\nabla z_{\tau h}^L\big|^2 \Big) \Big)\,\mathrm{d}x \\
&\qquad - \big\langle f_{\tau h}^L,\, u_{\tau h}^L\big\rangle_{U^*,U}
+ \sum_{k=1}^{L} \tau\, 2 R_{M\tau}(\mathrm{D}_\tau z_{\tau h}^k)
\end{aligned} \tag{77}
\]
for all L ∈ {1, …, N_τ} and with the error terms ε_{τ,h}^k and ∫_{B_h^k} E_C(z_{τh}^{k−1}, z_{τh}^k, e(u_{τh}^{k−1})) dx given in (49a) and (57).
2. Condition (59) can be met.
3. Assume in addition that (55) and (56) as well as (59) are fulfilled. Then the error terms Σ_{k=1}^{N_τ} ε_{τ,h}^k(z_{τh}^k − z_{τh}^{k−1}) and Σ_{k=1}^{N_τ} ∫_{B_h^k} E_C(z_{τh}^{k−1}, z_{τh}^k, e(u_{τh}^{k−1})) dx in (77) vanish as h = h(τ) → 0 and τ → 0. Thus, the upper energy-dissipation
estimate (77) provides a uniform a priori estimate for the approximate solutions ((u_{τh}^k, z_{τh}^k)_{k=1}^{N_τ})_{τh}.

Proof of Proposition 13, Item 1: As mentioned for (54), by testing the discrete evolution Eq. (48b) with τ D_τ u_{τh}^k and (48a) with τ D_τ z_{τh}^k and summing up, one finds
\[
\begin{aligned}
&\sum_{k=1}^{N_\tau} \varepsilon_{\tau,h}^k\big(z_{\tau h}^k - z_{\tau h}^{k-1}\big)
+ \sum_{k=1}^{N_\tau} \big\langle f_{\tau h}^k,\, u_{\tau h}^k - u_{\tau h}^{k-1}\big\rangle_{U^*,U}
+ \frac{\rho}{2}\int_\Omega \big|\mathrm{D}_\tau u_{\tau h}^0\big|^2\,\mathrm{d}x
+ \int_\Omega G_c\Big( \tfrac12\big(1 - z_{\tau h}^0\big)^2 + \tfrac12\big|\nabla z_{\tau h}^0\big|^2 \Big)\,\mathrm{d}x \\
&\quad \ge \frac{\rho}{2}\int_\Omega \big|\mathrm{D}_\tau u_{\tau h}^{N_\tau}\big|^2\,\mathrm{d}x
+ \sum_{k=1}^{N_\tau} \tau \int_\Omega D(z_{\tau h}^k)\, e(\mathrm{D}_\tau u_{\tau h}^k) : e(\mathrm{D}_\tau u_{\tau h}^k)\,\mathrm{d}x
+ \sum_{k=1}^{N_\tau} \tau\, 2 R_{M\tau}(\mathrm{D}_\tau z_{\tau h}^k) \\
&\qquad + \int_\Omega G_c\Big( \tfrac12\big(1 - z_{\tau h}^{N_\tau}\big)^2 + \tfrac12\big|\nabla z_{\tau h}^{N_\tau}\big|^2 \Big)\,\mathrm{d}x \\
&\qquad + \sum_{k=1}^{N_\tau} \int_\Omega \Big( \tfrac12\, C(z_{\tau h}^k)\, e(u_{\tau h}^k) : e(u_{\tau h}^k) - \tfrac12\, C(z_{\tau h}^k)\, e(u_{\tau h}^{k-1}) : e(u_{\tau h}^{k-1}) \Big)\,\mathrm{d}x \\
&\qquad + \sum_{k=1}^{N_\tau} \int_\Omega \tfrac12\, C'(z_{\tau h}^k)\big(z_{\tau h}^k - z_{\tau h}^{k-1}\big)\, e(u_{\tau h}^{k-1}) : e(u_{\tau h}^{k-1})\,\mathrm{d}x\,.
\end{aligned} \tag{78}
\]
In order to make the stored elastic energy at step k−1 appear in (78), we would like to replace in the second last line of (78) the term −½ C(z_{τh}^k) e(u_{τh}^{k−1}) : e(u_{τh}^{k−1}) by −½ C(z_{τh}^{k−1}) e(u_{τh}^{k−1}) : e(u_{τh}^{k−1}). This has to be compensated, and together with the last term in (78) we collect it in the error term E_C from (57), once more recalled:
\[
E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big) := \tfrac12 \Big( C(z_{\tau h}^{k-1}) - C(z_{\tau h}^{k}) + C'(z_{\tau h}^{k})\big(z_{\tau h}^{k} - z_{\tau h}^{k-1}\big) \Big)\, e(u_{\tau h}^{k-1}) : e(u_{\tau h}^{k-1})\,.
\]
In this way, the last two lines in (78) can be rewritten as follows:
\[
\begin{aligned}
&\sum_{k=1}^{N_\tau} \int_\Omega \Big( \tfrac12\, C(z_{\tau h}^k)\, e(u_{\tau h}^k) : e(u_{\tau h}^k) - \tfrac12\, C(z_{\tau h}^k)\, e(u_{\tau h}^{k-1}) : e(u_{\tau h}^{k-1}) \Big)\,\mathrm{d}x
+ \sum_{k=1}^{N_\tau} \int_\Omega \tfrac12\, C'(z_{\tau h}^k)\big(z_{\tau h}^k - z_{\tau h}^{k-1}\big)\, e(u_{\tau h}^{k-1}) : e(u_{\tau h}^{k-1})\,\mathrm{d}x \\
&\quad = \sum_{k=1}^{N_\tau} \int_\Omega \Big( \tfrac12\, C(z_{\tau h}^k)\, e(u_{\tau h}^k) : e(u_{\tau h}^k) - \tfrac12\, C(z_{\tau h}^{k-1})\, e(u_{\tau h}^{k-1}) : e(u_{\tau h}^{k-1}) + E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big) \Big)\,\mathrm{d}x\,.
\end{aligned} \tag{79}
\]
Using (79) in (78) one arrives at the upper energy-dissipation estimate (77).

Proof of Proposition 13, Item 2: To show that condition (59) can be met, we keep τ > 0 fixed and investigate E_C on a partition Ω = B_1 ∪ B_2 ∪ B_3 ∪ B_4 ∪ B_5 with
\[
\begin{aligned}
B_1 &= [z_{\tau h}^{k} \le z_*] \cap [z_{\tau h}^{k-1} \le z_*]\,, \qquad
B_2 = [z_* \le z_{\tau h}^{k} \le z^*] \cap [z_* \le z_{\tau h}^{k-1} \le z^*]\,, \\
B_3 &= [z^* \le z_{\tau h}^{k}] \cap [z^* \le z_{\tau h}^{k-1}]\,, \qquad
B_4 = [z_{\tau h}^{k} \le z_*] \cap [z^* \le z_{\tau h}^{k-1}]\,, \qquad
B_5 = [z^* \le z_{\tau h}^{k}] \cap [z_{\tau h}^{k-1} \le z_*]\,.
\end{aligned}
\]
On B_1, the error E_C(z_{τh}^{k−1}, z_{τh}^k, e(u_{τh}^{k−1})) can be estimated from below by 0, because this is a convex branch of the energy. Instead, on B_3, the error E_C(z_{τh}^{k−1}, z_{τh}^k, e(u_{τh}^{k−1})) = 0 vanishes. For the remaining sets, i.e. for B_h^k = B_2 ∪ B_4 ∪ B_5, we have the inclusion
\[
B_h^k \subset \big[\, |z_{\tau h}^k - z_\tau^k| \ge \delta \,\big] \cup \big[\, |z_{\tau h}^{k-1} - z_\tau^{k-1}| \ge \delta \,\big] \tag{80}
\]
with δ ∈ (0, ½(z^* − 1)). Here, clearly L^d([|z_{τh}^k − z_τ^k| ≥ δ] ∪ [|z_{τh}^{k−1} − z_τ^{k−1}| ≥ δ]) → 0 as h → 0, in consequence of the strong convergence in Z, cf. (71b). Hence, also
\[
\mathcal{L}^d(B_h^k) \to 0 \ \text{ as } h \to 0\,. \tag{81}
\]
Thanks to the strong convergence u_{τh}^{k−1} → u_τ^{k−1} in U given by (71a) one finds
\[
\int_\Omega E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big)\,\mathrm{d}x \;\to\; \int_\Omega E_C\big(z_{\tau}^{k-1}, z_{\tau}^{k}, e(u_{\tau}^{k-1})\big)\,\mathrm{d}x \tag{82}
\]
by continuity of E_C(·, ·, ·), guaranteed by (36) and (37). This provides the tools to verify that condition (59) can be met: Using (82), the error on B_h^k can be estimated as follows:
\[
\begin{aligned}
\int_{B_h^k} E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big)\,\mathrm{d}x
&\le \int_{B_h^k} \Big| E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big) - E_C\big(z_{\tau}^{k-1}, z_{\tau}^{k}, e(u_{\tau}^{k-1})\big) \Big|\,\mathrm{d}x
+ \int_{B_h^k} E_C\big(z_{\tau}^{k-1}, z_{\tau}^{k}, e(u_{\tau}^{k-1})\big)\,\mathrm{d}x \\
&\quad + \int_{\Omega\setminus B_h^k} \Big| E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big) - E_C\big(z_{\tau}^{k-1}, z_{\tau}^{k}, e(u_{\tau}^{k-1})\big) \Big|\,\mathrm{d}x \\
&= \int_{\Omega} \Big| E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big) - E_C\big(z_{\tau}^{k-1}, z_{\tau}^{k}, e(u_{\tau}^{k-1})\big) \Big|\,\mathrm{d}x
+ \int_{B_h^k} E_C\big(z_{\tau}^{k-1}, z_{\tau}^{k}, e(u_{\tau}^{k-1})\big)\,\mathrm{d}x \;\to\; 0\,,
\end{aligned}
\]
where the first term tends to zero by (82) and the second term by (82) and (81). Hence also
\[
\int_{B_h^k} E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big)\,\mathrm{d}x \to 0 \ \text{ as } h \to 0 \quad\text{for all } k\in\{1,\dots,N_\tau\}\,. \tag{83}
\]
From this we see that the perturbation term in (77) stemming from the non-convexity of the energy is controlled if
\[
\sum_{k=1}^{N_\tau} \int_{B_h^k} E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big)\,\mathrm{d}x \to 0\,. \tag{84}
\]
This can be accomplished by ensuring for all k ∈ {1, …, N_τ} that
\[
\int_{B_h^k} E_C\big(z_{\tau h}^{k-1}, z_{\tau h}^{k}, e(u_{\tau h}^{k-1})\big)\,\mathrm{d}x \le \mathrm{TOL}(h) \tag{85}
\]
and in addition
\[
N_\tau\, \mathrm{TOL}(h(\tau)) = T\,\frac{\mathrm{TOL}(h(\tau))}{\tau} \to 0 \quad\text{as } \tau \to 0\,. \tag{86}
\]
This provides conditions (56) and (59). We conclude that (85), and hence (59), can be met thanks to the convergence obtained in (83).

Proof of Proposition 13, Item 3: The error term due to the non-convexity can be controlled and vanishes according to (84) if conditions (85) and (86), i.e., (59) and (56), are satisfied. Similarly, one can see that the error term \(\sum_{k=1}^{N_\tau} \varepsilon_{\tau,h}^k(z_{\tau h}^k - z_{\tau h}^{k-1})\) in (77) tends to zero, provided (56) and (55) are satisfied. Under these conditions the two error terms in (77) are uniformly bounded in h = h(τ) and τ, so that the upper energy-dissipation estimate provides uniform bounds for the approximate solutions.

Lemma 3 (Selection of a diagonal subsequence) Let the conditions (51) and (52) be satisfied. Then it is possible to select a diagonal subsequence (τ_j h_j)_{j∈N} of the time-step and mesh sizes such that the corresponding approximate solutions (\bar u_{τ_j h_j}, \bar z_{τ_j h_j})_{j∈N} : [0, T] → U × X converge to the limit pair (u, z) : [0, T] → U × X obtained in Theorem 5. More precisely, we have for all t ∈ [0, T]:
\[
\bar u_{\tau_j h_j}(t) \rightharpoonup u(t) \ \text{ weakly in } U, \tag{87a}
\]
\[
\bar z_{\tau_j h_j}(t) \to z(t) \ \text{ strongly in } Z \tag{87b}
\]
as j → ∞.
(88)
By strong convergence (76), z¯ τ (t) → z(t) in X pointwise for all t ∈ [0, T ], we find z¯ τ h(τ ) (t) − z(t) ≤ z¯ τ h(τ ) (t) − z¯ τ (t) + ¯z τ (t) − z(t)X ≤ τ + ¯z τ (t) − z(t)X → 0 X X
as τ → 0. We argue in a similar manner to verify (87a), based on the strong convergence (71a) and the weak convergence (75d). For each τ > 0 fixed, with the same arguments as for (88), we select u¯ τ h(τ ) such that for all t ∈ [0, T] u¯ τ h(τ ) (t) − u¯ τ (t) < τ . U
For each v ∈ U with ‖v‖_U ≤ 1 we then deduce
\[
\big|\langle \bar u_{\tau h(\tau)}(t) - u(t),\, v\rangle_{U^*,U}\big|
\le \big|\langle \bar u_{\tau h(\tau)}(t) - \bar u_\tau(t),\, v\rangle_{U^*,U}\big| + \big|\langle \bar u_\tau(t) - u(t),\, v\rangle_{U^*,U}\big|
\le \big\|\bar u_{\tau h(\tau)}(t) - \bar u_\tau(t)\big\|_U\, \|v\|_U + \big|\langle \bar u_\tau(t) - u(t),\, v\rangle_{U^*,U}\big|
\le \tau + \big|\langle \bar u_\tau(t) - u(t),\, v\rangle_{U^*,U}\big| \to 0
\]
as τ → 0, by the weak convergence (75d), ū_τ(t) ⇀ u(t) in U pointwise for all t ∈ [0, T], where we used the usual identification of dual and predual in Hilbert spaces. One may then choose a subsequence τ_j → 0 as j → ∞ and set h_j := h(τ_j) to ultimately conclude (87).

Acknowledgements The authors gratefully acknowledge the support by the Deutsche Forschungsgemeinschaft in the Priority Program 1748 "Reliable simulation techniques in solid mechanics. Development of non-standard discretization methods, mechanical and mathematical analysis" within the project "Reliability of Efficient Approximation Schemes for Material Discontinuities Described by Functions of Bounded Variation", Project Number 255461777 (TH 1935/1-2 and BA 2268/2-2).
References 1. S. Almi, S. Belz, Consistent finite-dimensional approximation of phase-field models of fracture. Ann. Mat. Pura Appl. 198(4), 1191–1225 (2019) 2. H. Attouch, G. Buttazzo, G. Michaille, Variational Analysis in Sobolev and BV Spaces. MPS/SIAM Series on Optimization, vol. 6. (Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA; Mathematical Programming Society (MPS), Philadelphia, 2006). Applications to PDEs and optimization 3. S. Almi, S. Belz, M. Negri, Convergence of discrete and continuous unilateral flows for Ambrosio-Tortorelli energies and application to mechanics. ESAIM M2AN 53(2), 659–699 (2018) 4. L. Ambrosio, N. Fusco, D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems. Oxford Mathematical Monographs (The Clarendon Press, Oxford University Press, New York, 2000) 5. S. Almi, M. Negri, Analysis of staggered evolutions for nonlinear energies in phase field fracture. Arch. Ration. Mech. Anal. (2019) 6. S. Bartels, Total variation minimization with finite elements: convergence and iterative solution. SIAM J. Numer. Anal. 50(3), 1162–1180 (2012) 7. S. Bartels, Error control and adaptivity for a variational model problem defined on functions of bounded variation. Math. Comp. 84(293), 1217–1240 (2015) 8. S. Bartels, Numerical Methods for Nonlinear Partial Differential Equations. Springer Series in Computational Mathematics, vol. 47 (Springer, Cham, 2015) 9. S. Bartels, Broken Sobolev space iteration for total variation regularized minimization problems. IMA J. Numer. Anal. 36(2), 493–502 (2016) 10. S. Bartels, Numerical Approximation of Partial Differential Equations. Texts in Applied Mathematics, vol. 64 (Springer, Cham, 2016) 11. S. Bartels, Error estimates for a class of discontinuous Galerkin methods for nonsmooth problems via convex duality relations (2020). arXiv:2004.09196 12. S. Bartels, Nonconforming discretizations of convex minimization problems and precise relations to mixed methods (2020). arXiv:2002.02359 13. D. Boffi, F. Brezzi, M. Fortin, Mixed Finite Element Methods and Applications, Springer Series in Computational Mathematics, vol. 44. (Springer, Heidelberg, 2013)
14. S. Bartels, L. Diening, R.H. Nochetto, Unconditional stability of semi-implicit discretizations of singular flows. SIAM J. Numer. Anal. 56(3), 1896–1914 (2018) 15. M.J. Borden, T.J.R. Hughes, C.M. Landis, A. Anvari, I.J. Lee, A phase-field formulation for fracture in ductile materials: finite deformation balance law derivation, plastic degradation, and stress triaxiality effects. Comput. Methods Appl. Mech. Eng. 312, 130–166 (2016) 16. K. Bredies, K. Kunisch, T. Pock, Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010) 17. P. Bˇelík, M. Luskin, A total-variation surface energy model for thin films of martensitic crystals. Interfaces Free Bound. 4(1), 71–88 (2002) 18. S. Bartels, M. Milicevic, Stability and experimental comparison of prototypical iterative schemes for total variation regularized problems. Comput. Methods Appl. Math. 16(3), 361– 388 (2016) 19. S. Bartels, M. Milicevic, Iterative finite element solution of a constrained total variation regularized model problem. Discrete Contin. Dyn. Syst. Ser. S 10(6), 1207–1232 (2017) 20. S. Bartels, M. Milicevic, Efficient iterative solution of finite element discretized nonsmooth minimization problems. Comput. & Math. Appl. 80(5), 588–603 (2020) 21. S. Bartels, M. Milicevic, M. Thomas, Numerical approach to a model for quasistatic damage with spatial BV -regularization, in Proceedings of the INdAM-ISIMM Workshop on Trends on Applications of Mathematics to Mechanics, Rome, Italy, September 2016, eds. by E. Rocca, U. Stefanelli, L. Truskinovsky, vol. 27 (Springer International Publishing, Cham, 2018), pp. 179–203 22. S. Bartels, M. Milicevic, M. Thomas, N. Weber, Fully discrete approximation of rateindependent damage models with gradient regularization. WIAS-Preprint 2707 (2020) 23. S. Bartels, R.H. Nochetto, A.J. Salgado, Discrete total variation flows without regularization. SIAM J. Numer. Anal. 52(1), 363–385 (2014) 24. S. Bartels, R.H. Nochetto, A.J. Salgado, A total variation diminishing interpolation operator and applications. Math. Comp. 84(296), 2569–2587 (2015) 25. S. Bartels, M. Ružiˇcka, Convergence of fully discrete implicit and semi-implicit approximations of singular parabolic equations. SIAM J. Numer. Anal. 58(1), 811–833 (2020) 26. S.C. Brenner, L. Ridgway Scott, The Mathematical Theory of Finite Element Methods. Texts in Applied Mathematics, vol. 15, 3rd edn. (Springer, New York, 2008) 27. A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) 28. A. Chambolle, An Algorithm for Total Variation Minimization and Applications, vol. 20(2004), pp. 89–97. Special issue on mathematics and image analysis 29. P.G. Ciarlet, The Finite Element Method for Elliptic Problems. Studies in Mathematics and its Applications, vol. 4 (North-Holland Publishing Co., Amsterdam, 1978) 30. A. Chambolle, P.-L. Lions, Image recovery via total variation minimization and related problems. Numer. Math. 76(2), 167–188 (1997) 31. A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision 40(1), 120–145 (2011) 32. A. Chambolle, T. Pock, An introduction to continuous optimization for imaging. Acta Numer. 25, 161–319 (2016) 33. A. Chambolle, T. Pock, On the ergodic convergence rates of a first-order primal-dual algorithm. Math. Program. 159(1–2, Ser. A), 253–287 (2016) 34. A. Chambolle, T. Pock, Crouzeix-Raviart approximation of the total variation on simplicial meshes. J. 
Math. Imaging Vision 62(6–7), 872–899 (2020) 35. M. Crouzeix, P.-A. Raviart, Conforming and nonconforming finite element methods for solving the stationary Stokes equations. I. Rev. Française Automat. Informat. Recherche Opérationnelle Sér. Rouge 7(R-3), 33–75 (1973) 36. Y.-H. Dai, D. Han, X. Yuan, W. Zhang, A sequential updating scheme of the Lagrange multiplier for separable convex programming. Math. Comp. 86(303), 315–343 (2017) 37. J. Douglas Jr., H.H. Rachford Jr., On the numerical solution of heat conduction problems in two and three space variables. Trans. Amer. Math. Soc. 82, 421–439 (1956)
38. W. Deng, W. Yin, On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66(3), 889–916 (2016) 39. J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Prog. 55(3, Ser. A), 293–318 (1992) 40. A. Ern, J.-L. Guermond, Theory and Practice of Finite Elements, vol. 159 (Springer Science & Business Media, Berlin, 2013) 41. C.M. Elliott, S.A. Smitheman, Numerical analysis of the TV regularization and H −1 fidelity model for decomposing an image into cartoon plus texture. IMA J. Numer. Anal. 29(3), 651– 689 (2009) 42. M. Fortin, R. Glowinski, Augmented Lagrangian, Methods. Studies in Mathematics and its Applications, vol. 15. Applications to the Numerical Solution of Boundary Value Problems (North-Holland Publishing Co., Amsterdam, 1983) (Translated from the French by B. Hunt and D. C, Spicer, 1983) 43. G.A. Francfort, J.-J. Marigo, Revisiting brittle fracture as an energy minimization problem. J. Mech. Phys. Solids 46(8), 1319–1342 (1998) 44. X. Feng, M. von Oehsen, A. Prohl, Rate of convergence of regularization procedures and finite element approximations for the total variation flow. Numer. Math. 100(3), 441–456 (2005) 45. A. Giacomini, Ambrosio-Tortorelli approximation of quasi-static evolution of brittle fractures. Calc. Var. Part. Diff. Equ. 22(2), 129–172 (2005) 46. R. Glowinski, Numerical Methods for Nonlinear Variational Problems. Springer Series in Computational Physics. (Springer, New York, 1984) 47. D. Gabay, B. Mercier, A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. & Math. Appl. 2(1), 17–40 (1976) 48. A.A. Griffith, Vi. The phenomena of rupture and flow in solids, Philos. Trans. R. Soc. Lond. Ser. A Contain. Papers Math. Phys. Character 221(582–593), 163–198 (1921) 49. C. Hesch, A.J. Gil, R. Ortigosa, M. Dittmann, C. Bilgen, P. Betsch, M. Franke, A. Janz, K. Weinberg, A framework for polyconvex large strain phase-field methods to fracture. Comput. Methods Appl. Mech. Eng. 317, 649–683 (2017) 50. M. Herrmann, R. Herzog, S. Schmidt, J. Vidal-Núñez, G. Wachsmuth, Discrete total variation with finite elements and applications to imaging. J. Math. Imaging Vision 61(4), 411–431 (2019) 51. M. Hintermüller, K. Kunisch, Total bounded variation regularization as a bilaterally constrained optimization problem. SIAM J. Appl. Math. 64(4), 1311–1333 (2004) 52. R. Herzog, C. Meyer, G. Wachsmuth, Integrability of displacement and stresses in linear and nonlinear elasticity with mixed boundary conditions. J. Math. Anal. Appl. 382(2), 802–813 (2011) 53. B. Halphen, Q.S. Nguyen, Sur les matériaux standards généralisés. J. Mécanique 14, 39–63 (1975) 54. C. Hesch, S. Schuß, M. Dittmann, M. Franke, K. Weinberg, Isogeometric analysis and hierarchical refinement for higher-order phase-field models. Comput. Methods Appl. Mech. Eng. 303, 185–207 (2016) 55. B. He, X. Yuan, On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method. SIAM J. Numer. Anal. 50(2), 700–709 (2012) 56. S. Kontogiorgis, R.R. Meyer, A variable-penalty alternating directions method for convex optimization. Math. Prog. 83(1, Ser. A), 29–53 (1998) 57. C. Kuhn, R. Müller, A continuum phase field model for fracture. Eng. Fract. Mech. 77(18), 3625–3634 (2010) 58. D. Knees, A. Mielke, C. Zanini, On the inviscid limit of a model for crack propagation. Math. Models Methods Appl. Sci. 18, 1529–1569 (2008) 59. 
D. Knees, M. Negri, Convergence of alternate minimization schemes for phase-field fracture and damage. Math. Models Methods Appl. Sci. 27(9), 1743–1794 (2017) 60. I. Kopacka, MPECs/MPCCs in function space: first order optimality concepts, path-following, and multilevel algorithms. na (2009)
61. D. Knees, R. Rossi, C. Zanini, A vanishing viscosity approach to a rate-independent damage model. Math. Models Methods Appl. Sci. 23(04), 565–616 (2013) 62. D. Knees, R. Rossi, C. Zanini, A vanishing viscosity approach to a rate-independent damage model. Math. Models Methods Appl. Sci. 23(04), 565–616 (2013) 63. P.-L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979) 64. G. Lazzaroni, R. Rossi, M. Thomas, R. Toader, Rate-independent damage in thermoviscoelastic materials with inertia. J. Dyn. Diff. Equ. 30, 1311–1364 (2018) 65. C. Miehe, M. Hofacker, F. Welschinger, A phase field model for rate-independent crack propagation: robust algorithmic implementation based on operator splits. Comput. Methods Appl. Mech. Eng. 199(45–48), 2765–2778 (2010) 66. A. Mielke, T. Roubíˇcek, Rate-independent Systems: Theory and Application. Applied Mathematical Sciences, vol. 193 (Springer, Berlin, 2015) 67. A. Mielke, R. Rossi, G. Savaré, BV solutions and viscosity approximations of rate-independent systems. ESAIM Control Optim. Calc. Var. 18(1), 36–80 (2012) 68. Y. Nesterov, Smooth minimization of non-smooth functions. Math. Program. 103(1, Ser. A), 127–152 (2005) 69. R.T. Rockafellar, Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976) 70. L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, vol. 60 (1992), pp. 259–268. Experimental mathematics: computational issues in nonlinear science (Los Alamos, NM, 1991) 71. T. Roubíˇcek, Rate-independent processes in viscous solids at small strains. Math. Methods Appl. Sci. 32(7), 825–862 (2009) 72. P.-A. Raviart, J.M. Thomas, A mixed finite element method for 2nd order elliptic problems, in Mathematical aspects of finite element methods (Conference Proceedings, Consiglio Naz. delle Ricerche (C.N.R.), Rome, 1975), pp. 292–315. Lecture Notes in Mathematics, vol. 606 (1977) 73. R. Rossi, M. Thomas, From an adhesive to a brittle delamination model in thermo-viscoelasticity. ESAIM Control Optim. Calc. Var. 21, 1–59 (2015) 74. R. Rossi, M. Thomas, Coupling rate-independent and rate-dependent processes: Existence results. SIAM J. Math. Anal. 49(2), 1419–1494 (2017) 75. R. Rossi, M. Thomas, From adhesive to brittle delamination in visco-elastodynamics. Math. Models Methods Appl. Sci. 27(08), 1489–1546 (2017) 76. A. Schlüter, A. Willenbücher, C. Kuhn, R. Müller, Phase field approximation of dynamic brittle fracture. Comput. Mech. 54(5), 1141–1161 (2014) 77. Y. Shen, M. Xu, On the O(1/t) convergence rate of Ye-Yuan’s modified alternating direction method of multipliers. Appl. Math. Comput. 226, 367–373 (2014) 78. M. Thomas, C. Bilgen, K. Weinberg, Analysis and simulations for a phase-field fracture model at finite strains based on modified invariants. WIAS-Preprint 2456 (2017) 79. M. Thomas, C. Bilgen, K. Weinberg, Phase-field fracture at finite strains based on modified invariants: a note on its analysis and simulations. GAMM-Mitteilungen 40(3), 207–237 (2018) 80. M. Thomas, S. Tornquist, Discrete & continuous dynamical systems-S, 14(11), 3865–3924 (2021) 81. M. Thomas, C. Zanini, Cohesive zone-type delamination in visco-elasticity. Discrete & Cont. Dyn. Syst. - S 10(6), 1487–1517 (2017) 82. J. Wang, B.J. Lucier, Error bounds for finite-difference methods for Rudin-Osher-Fatemi image smoothing. SIAM J. Numer. Anal. 49(2), 845–868 (2011)