English, 610 pages, 2021
Ernst P. Stephan • Thanh Tran
Schwarz Methods and Multilevel Preconditioners for Boundary Element Methods
Ernst P. Stephan, Leibniz University Hannover, Hannover, Germany
Thanh Tran, The University of New South Wales, Sydney, NSW, Australia
ISBN 978-3-030-79282-4    ISBN 978-3-030-79283-1 (eBook)
https://doi.org/10.1007/978-3-030-79283-1
Mathematics Subject Classification: 65N30, 65N38, 65N55, 35J25, 45B05

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Dedicated to our wives Karin Sabine and Thanh-Binh.
Preface
Several textbooks and monographs on preconditioners for FEM (finite element methods) are available, but none exists for BEM (boundary element methods). For BEM, the topic has been dealt with only in research papers, each focused on a specific situation. This monograph provides, for the first time, a systematic approach to and presentation of preconditioners for BEM.

BEM have become an important tool for the numerical solution of boundary value problems, covering a rich area of applications in engineering and physics. Roughly speaking, BEM consist of reformulating a boundary value problem as a boundary integral equation, which is then solved by discretising the boundary into finite elements; see the books by Hsiao and Wendland [128], McLean [147], Nédélec [161], Sauter and Schwab [186], Steinbach [199], and the recent book by Gwinner and Stephan [92], which deals especially with contact problems. While the reformulation of boundary value problems as boundary integral equations has the great advantage of reducing the dimension of the problem by one, integral operators are non-local, whereas differential operators are local. As a consequence, boundary element discretisations of first-kind integral equations lead to systems of equations with dense and ill-conditioned matrices whose condition numbers grow with the number of degrees of freedom. (It should be noted that finite element discretisations of partial differential equations also result in ill-conditioned matrices, albeit sparse ones.) The non-locality of boundary integral operators indicates that they can be analysed with the calculus of pseudodifferential operators, where the correct setting of the equations leads to the study of fractional-order Sobolev spaces (like H^{1/2} and H^{-1/2}). This is a crucial difference from FEM, where the underlying differential equations live in standard integer-order Sobolev spaces (like H^1).
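The growth of the condition number mentioned above is easy to observe numerically. The following sketch is our illustration, not taken from the book: it assembles the Galerkin matrix of a weakly singular operator with kernel -log|x - y| (scaling constants omitted) for piecewise-constant boundary elements on a uniform mesh of the interval (0, 1), using the closed-form double integral of the logarithm, and reports the condition number under h-refinement.

```python
import numpy as np

def single_layer_matrix(n):
    """Galerkin matrix A[i, j] = -int_{I_i} int_{I_j} log|x - y| dy dx
    for piecewise constants on a uniform mesh of (0, 1) with n elements.
    Uses the closed-form antiderivative G(u) = u^2/2 log|u| - 3/4 u^2."""
    h = 1.0 / n

    def G(u):
        u = np.abs(u)
        safe = np.where(u == 0.0, 1.0, u)  # avoid log(0); G(0) = 0
        return np.where(u == 0.0, 0.0,
                        0.5 * safe**2 * np.log(safe) - 0.75 * safe**2)

    i = np.arange(n)
    d = np.abs(i[:, None] - i[None, :]) * h  # offset between elements
    return -(G(h - d) - 2.0 * G(d) + G(h + d))

# The condition number grows with the number of degrees of freedom.
for n in (8, 16, 32, 64, 128):
    print(f"n = {n:4d}   cond = {np.linalg.cond(single_layer_matrix(n)):.1f}")
```

The matrix is dense and symmetric positive definite, and the reported condition numbers grow roughly linearly in the number of elements, in line with the O(h^{-1}) behaviour expected for an operator of order -1.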
There are major differences between integer-order and fractional-order Sobolev spaces, which require different analyses of the discretisations (by Galerkin, collocation, or quadrature methods) of boundary integral equations. Moreover, as the convergence analysis for preconditioners requires a sophisticated handling of Sobolev norms, these
differences mean that new analyses are required when studying preconditioners for equations governed by non-local operators.

This book presents the necessary analysis in detail and focusses on additive and multiplicative Schwarz methods (domain-decomposition-type methods) and on multilevel methods to develop optimal preconditioners for the respective solvers. These solvers include the conjugate gradient method (for matrices resulting from the Galerkin boundary element approximation of the weakly singular and hypersingular integral equations governing Dirichlet and Neumann problems for the Laplacian) and GMRES (the generalised minimal residual method) for Helmholtz problems in acoustics. The convergence theory for the preconditioning of boundary integral equations has been developed over more than two decades, with results derived and presented in different ways and published in different papers for different equations. This monograph has the benefit of hindsight to put the whole theory into a systematic framework. It is our hope that this book will enable readers to perform their own analysis for their specific problems. Survey articles on subjects of the book are [204, 206]. The book is addressed to mathematicians and engineers as well as graduate students.

There are other preconditioning techniques and fast solvers for boundary integral equations which are not investigated in this book, e.g., multigrid methods [35, 171, 172], the so-called operator preconditioning which uses operators of opposite orders to construct preconditioners [123, 149, 198], boundary element tearing and interconnecting (BETI) methods [132, 133, 134, 135], fast multipole methods (see, e.g., [9, 165, 194, 217]), panel clustering (see, e.g., [93, 99, 100]), and H-matrices (see, e.g., [95, 97, 98]).

The book has 16 chapters which are grouped into six parts. Part I has two chapters and is the core for the analysis in subsequent chapters. Chapter 1 introduces the problems to be studied in the book.
We are particularly interested in the screen and crack problems on open curves (in two dimensions) and open surfaces (in three dimensions). All three versions of the Galerkin boundary element method, namely the h-version, p-version, and hp-version, are considered. This first chapter sets the scene for the following chapters.

Chapter 2 provides in detail the framework of preconditioners by Schwarz methods. How does one understand the mechanism of these preconditioning techniques for proper implementation? What is required in the analysis? What is the connection between the different forms of the preconditioners, namely the matrix form, the variational form, and the operator form? Each form has its own advantages, and their connections and equivalence need to be clarified. All these issues are addressed in this chapter. Apart from the standard convergence theory for the preconditioned conjugate gradient method, we also present the convergence theory for preconditioners with GMRES in different settings, including problems with block matrices. To the best of our knowledge, this is the first systematic presentation of this topic. The chapter also presents the hybrid modified conjugate residual (HMCR) method for FEM-BEM coupling problems. The chapter ends with a section on the convergence theory for linear iterative methods which are used to solve preconditioned systems.
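To give a flavour of the matrix form of an additive Schwarz preconditioner, here is a minimal self-contained sketch (ours, not from the book; the names pcg and additive_schwarz are our own, and a 1D finite-difference Laplacian stands in for a Galerkin matrix). It applies C^{-1} r = sum_j R_j^T A_j^{-1} R_j r, with Boolean restrictions R_j onto non-overlapping index blocks and local solves A_j = R_j A R_j^T, inside the preconditioned conjugate gradient method.

```python
import numpy as np

def pcg(A, b, apply_prec, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient; returns (solution, iteration count)."""
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Model problem: 1D Laplacian (tridiagonal), standing in for an SPD Galerkin matrix.
n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Non-overlapping subspace decomposition into 4 blocks of 16 unknowns each.
blocks = [np.arange(j, j + 16) for j in range(0, n, 16)]
local_inv = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in blocks]  # A_j^{-1}

def additive_schwarz(r):
    """Apply C^{-1} r = sum_j R_j^T A_j^{-1} R_j r (matrix form)."""
    z = np.zeros_like(r)
    for idx, Ainv in zip(blocks, local_inv):
        z[idx] += Ainv @ r[idx]
    return z

_, it_plain = pcg(A, b, lambda r: r.copy())   # no preconditioner
_, it_as = pcg(A, b, additive_schwarz)        # additive Schwarz
print(f"CG iterations: {it_plain} unpreconditioned, {it_as} with additive Schwarz")
```

In this toy setting the preconditioned iteration count drops well below the unpreconditioned one; for the boundary element matrices treated in the book, the subspace decompositions are of course more elaborate (coarse spaces, overlaps, multilevel splittings).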
Part II covers preconditioners for two-dimensional problems. Chapter 3 presents the methods and analysis for additive and multiplicative Schwarz preconditioners. Both non-overlapping and overlapping methods are studied for the h-version Galerkin discretisation of the hypersingular and weakly singular integral equations. Chapter 4 treats the p-version Galerkin discretisation in the same manner as Chapter 3. Chapters 5 and 6 present multilevel methods for the h- and hp-versions of the hypersingular and weakly singular integral equations. A fully discrete method is the topic of Chapter 7. Chapter 8 deals with indefinite problems, where the boundary integral equations are reformulations of boundary value problems for the Helmholtz equation. Non-overlapping additive Schwarz preconditioners are designed and analysed for the h- and p-version approximations of these boundary integral equations. Chapter 9 provides numerical results for the methods studied in Chapters 3–6 and Chapter 8.

Part III deals with three-dimensional problems. As the reader should by now be familiar with the required analysis, in this part we focus on the most general version of the Galerkin approximation, namely the hp-version. Results for the h-version and the p-version can be directly deduced by fixing p or h, respectively, and disregarding the corresponding treatment in the preconditioner. Two-level additive Schwarz preconditioners are considered in Chapter 10 for rectangular elements, while triangular elements are dealt with in Chapter 11. The last two chapters of this part study the effect of local mesh refinements on the condition number and propose strategies to overcome this side effect. Chapter 12 deals with the preconditioner by diagonal scaling, which effectively reduces the effect of local mesh refinements so that the condition number grows only as in the case of uniform refinements.
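The effect of diagonal scaling on locally refined meshes, as studied in Chapter 12, can be previewed with a small experiment (again our illustration, not from the book): we assemble the weakly singular Galerkin matrix with kernel -log|x - y| (constants omitted) for piecewise constants on a geometrically graded mesh of (0, 1) and compare the condition numbers of A and of the symmetrically scaled matrix D^{-1/2} A D^{-1/2}, where D = diag(A).

```python
import numpy as np

def G(u):
    """Antiderivative G(u) = u^2/2 log|u| - 3/4 u^2 (with G(0) = 0)."""
    u = np.abs(u)
    safe = np.where(u == 0.0, 1.0, u)  # avoid log(0)
    return np.where(u == 0.0, 0.0,
                    0.5 * safe**2 * np.log(safe) - 0.75 * safe**2)

def single_layer_matrix(pts):
    """Galerkin matrix A[i, j] = -int_{I_i} int_{I_j} log|x - y| dy dx
    for piecewise constants on the mesh with breakpoints pts[0] < ... < pts[n]."""
    a, b = pts[:-1], pts[1:]
    return -(G(b[:, None] - a[None, :]) - G(b[:, None] - b[None, :])
             - G(a[:, None] - a[None, :]) + G(a[:, None] - b[None, :]))

# Geometrically graded mesh on (0, 1), refined towards the endpoint 0.
levels = 12
pts = np.concatenate(([0.0], 0.5 ** np.arange(levels, -1, -1.0)))
A = single_layer_matrix(pts)

d = np.sqrt(np.diag(A))
A_scaled = A / np.outer(d, d)  # D^{-1/2} A D^{-1/2}: unit diagonal

print(f"cond(A)               = {np.linalg.cond(A):.3e}")
print(f"cond(D^-1/2 A D^-1/2) = {np.linalg.cond(A_scaled):.3e}")
```

On such strongly graded meshes the condition number of A is driven by the disparity of element sizes, while the diagonally scaled matrix remains far better conditioned.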
Multilevel preconditioners with adaptive mesh refinements are the topic of Chapter 13.

Part IV is devoted to the treatment of preconditioners for the coupling of finite element and boundary element methods (Chapter 14). Both the symmetric and non-symmetric coupling methods with the GMRES solver are studied. A non-standard solver (HMCR) is also studied for the symmetric coupling. The unified theory developed in this chapter can be used to analyse different preconditioners.

Part V aims at showing the robustness of additive Schwarz preconditioners. Departing from the standard boundary integral equations on open boundaries and the standard FEM and BEM considered in the previous parts, we study in this part pseudodifferential equations on the sphere. Chapter 15 investigates preconditioners when spherical splines are used in the approximation. Chapter 16 deviates further from the boundary element context: in this chapter, meshless methods with radial basis functions are employed in the approximation. In both cases, it is shown that additive Schwarz preconditioners provide an efficient tool to solve the problem at low cost. Even though spherical splines do not share all the properties of finite and boundary elements, some modifications of the analysis suffice for this type of approximation. By contrast, a completely different approach has to be employed for meshless methods.

Part VI contains three appendices, where we collect important facts on interpolation and Sobolev spaces (Appendix A), boundary integral operators (Appendix B), and various technical lemmas (Appendix C) in order to make the book self-contained. In particular, in Appendix A we present new results on local and global properties of fractional-order Sobolev norms; see Subsection A.2.9. These new results help fill a gap in the analysis in the current literature. This appendix also provides rigorous proofs of the equivalence of different types of Sobolev norms, which is required in the study of preconditioners.

It should be noted that the screen problem and the crack problem (in linear elasticity) are the cases where the solution is most singular. The preconditioners developed in this book are also applicable to other problems defined on domains with closed boundaries, e.g., on polyhedral domains. Furthermore, even though we present preconditioners for the screen problem only, the same algorithms work for the crack problem (being a system of equations); it suffices to apply the preconditioners to the various components of the vector fields.

Altogether, this book showcases state-of-the-art preconditioning techniques for boundary integral equations together with up-to-date research. Thorough treatments of interpolation spaces and Sobolev spaces of fractional order familiarise the reader with the necessary techniques in the analysis of preconditioners for first-kind boundary integral equations. The concise presentation of adaptive BEM, hp-version BEM, and the coupling of FEM and BEM provides the reader with very efficient computational tools for solving practical problems with applications in mechanical engineering, acoustics, geodesy, and many other fields which involve boundary value problems in unbounded domains.

Hannover, Sydney, April 2021
Ernst P. Stephan
Thanh Tran
Acknowledgements
This book could not have been completed without the tremendous support that we received from many different sources. First and foremost, the support of the Mathematisches Forschungsinstitut Oberwolfach (through its Research-in-Pairs Grant, reference number 1815q) and the Australian Research Council (through DP160101755 and DP190101197) is gratefully acknowledged. A large part of the book was drafted during the authors' stay at MFO in April 2018 and during Stephan's visit to UNSW Sydney in November 2017. We also thank Springer Nature for its understanding of the significant delay in finalising the book due to the COVID-19 pandemic.

The authors thank their colleagues for their collaboration, which has deeply influenced the contents of this book. They are (in alphabetical order): M. Ainsworth, M. Feischl, T. Führer, N. Heuer, T. Le Gia, F. Leydecker, M. Maischak, W. McLean, T.D. Pham, D. Praetorius, I.H. Sloan, and A. Wathen.

We take the opportunity to acknowledge our gratitude to our universities, the Leibniz University of Hannover, Germany, and the University of New South Wales, Sydney, Australia, for supporting our visits.

Last but not least, we thank our wives Karin Sabine and Thanh-Binh for their great understanding and support during the work-intensive time it took us to write this book.
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Part I General Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1 Model Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1.1 Screen problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1.2 Crack problems in elasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Galerkin Boundary Element Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.1 The h-version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.2 The p-version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.3 The hp-version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.4 Graded meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.5 Adaptive schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Condition Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 General Framework of Preconditioners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Finite Dimensional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.1 Galerkin equations and matrix systems . . . . . . . . . . . . . . . . . . . 2.1.2 Solution by CG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.3 Solution by GMRES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Preconditioners . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . 2.2.1 Preconditioned conjugate gradient method . . . . . . . . . . . . . . . . 2.2.2 Preconditioned GMRES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.3 Preconditioning and linear iteration . . . . . . . . . . . . . . . . . . . . . . 2.3 Schwarz Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.1 Required properties of a preconditioner . . . . . . . . . . . . . . . . . . 2.3.2 Ingredients of the Schwarz operators . . . . . . . . . . . . . . . . . . . . . 2.3.2.1 Subspace decomposition . . . . . . . . . . . . . . . . . . . . . . 2.3.2.2 Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.3 Schwarz operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Additive Schwarz Preconditioners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.1 Matrix forms of Pj and Pad . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
vii xi 1 3 4 4 7 9 9 9 10 10 10 10 13 14 15 15 16 18 18 22 24 26 26 26 27 27 28 29 29 xiii
xiv
Contents
2.4.2 Additive Schwarz preconditioner: the matrix form . . . . . . . . . 2.4.3 Additive Schwarz preconditioner: the variational form . . . . . . 2.4.4 Additive Schwarz preconditioner: the operator form . . . . . . . . 2.5 Multiplicative Schwarz Preconditioners . . . . . . . . . . . . . . . . . . . . . . . . . 2.5.1 Multiplicative Schwarz preconditioner: the matrix form . . . . . 2.5.2 Multiplicative Schwarz preconditioner: the operator form . . . 2.5.3 Symmetric multiplicative Schwarz . . . . . . . . . . . . . . . . . . . . . . 2.6 Convergence Theory for Preconditioners with PCG . . . . . . . . . . . . . . . 2.6.1 Extremal eigenvalues of preconditioned matrices . . . . . . . . . . 2.6.2 Condition numbers of Schwarz operators . . . . . . . . . . . . . . . . . 2.6.3 Stability of decomposition – A lower bound for λmin (Pad ) . . . 2.6.4 Coercivity of decomposition – An upper bound for λmax (Pad ) 2.6.5 Strengthened Cauchy–Schwarz inequalities and local stability 2.6.6 An upper bound for λmax (Psmu ) . . . . . . . . . . . . . . . . . . . . . . . . . 2.6.7 A lower bound for λmin (Psmu ) . . . . . . . . . . . . . . . . . . . . . . . . . . 2.6.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.7 Convergence Theory for Preconditioners with GMRES . . . . . . . . . . . . 2.7.1 Preconditioned GMRES with the B-inner product . . . . . . . . . . 2.7.2 Preconditioned GMRES with the C-inner product . . . . . . . . . . 2.7.3 Preconditioned GMRES with block matrices . . . . . . . . . . . . . . 2.8 Other Krylov Subspace Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.8.1 General Krylov subspace methods . . . . . . . . . . . . . . . . . . . . . . . 2.8.2 The generalised three-term CG methods . . . . . . . . . . . . . . . . . . 2.8.3 The hybrid modified conjugate residual (HMCR) method . . . 2.9 Convergence Theory for Linear Iterative Methods with Preconditioners . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . 2.9.1 Symmetric preconditioner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.9.2 Non-symmetric preconditioner . . . . . . . . . . . . . . . . . . . . . . . . . . Part II Two-Dimensional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Two-Level Methods: the h-Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Additive Schwarz Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.1 Non-overlapping methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.1.1 The hypersingular integral equation . . . . . . . . . . . . 3.1.1.2 The weakly-singular integral equation . . . . . . . . . . 3.1.2 Overlapping methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.2.1 The hypersingular integral equation . . . . . . . . . . . . 3.1.2.2 The weakly-singular integral equation . . . . . . . . . . 3.2 Multiplicative Schwarz Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.1 Non-overlapping methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.1.1 The hypersingular integral equation . . . . . . . . . . . . 3.2.1.2 The weakly-singular integral equation . . . . . . . . . . 3.2.2 Overlapping methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.2.1 The hypersingular integral equation . . . . . . . . . . . . 3.2.2.2 The weakly-singular integral equation . . . . . . . . . .
31 31 32 32 33 33 34 35 35 37 38 40 41 42 43 47 48 48 52 52 55 56 57 58 63 63 66 71 73 74 74 74 77 83 83 89 90 91 91 92 92 92 92
Contents
xv
3.2.3
4
5
6
7
A special case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 3.2.3.1 The hypersingular integral equation . . . . . . . . . . . . 93 3.2.3.2 The weakly-singular integral equation . . . . . . . . . . 95 3.3 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Two-Level Methods: the p-Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 4.1 Additive Schwarz Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 4.1.1 Non-overlapping methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 4.1.1.1 The hypersingular integral equation . . . . . . . . . . . . 100 4.1.1.2 The weakly-singular integral equation . . . . . . . . . . 103 4.1.2 Overlapping methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 4.1.2.1 The hypersingular integral equation . . . . . . . . . . . . 104 4.1.2.2 The weakly-singular integral equation . . . . . . . . . . 108 4.2 Multiplicative Schwarz Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 4.2.1 Non-overlapping methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 4.2.1.1 The hypersingular integral equation . . . . . . . . . . . . 108 4.2.1.2 The weakly-singular integral equation . . . . . . . . . . 110 4.2.2 Overlapping methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 4.2.2.1 The hypersingular integral equation . . . . . . . . . . . . 113 4.2.2.2 The weakly-singular integral equation . . . . . . . . . . 114 4.3 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 Multilevel Methods: the h-Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 5.1 Additive Schwarz Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 5.1.1 The hypersingular integral equation . . . . . . . . . . . . . . . . 
. . . . . 117 5.1.2 The weakly-singular integral equation . . . . . . . . . . . . . . . . . . . 123 5.2 Multiplicative Schwarz methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 5.2.1 The hypersingular integral equation . . . . . . . . . . . . . . . . . . . . . 125 5.2.2 The weakly-singular integral equation . . . . . . . . . . . . . . . . . . . 126 5.3 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Additive Schwarz Methods for the hp-Version . . . . . . . . . . . . . . . . . . . . . . . . 127 6.1 Preconditioners with Quasi-uniform Meshes . . . . . . . . . . . . . . . . . . . . . 127 6.1.1 A two-level non-overlapping method . . . . . . . . . . . . . . . . . . . . 128 6.1.2 A two-level overlapping method . . . . . . . . . . . . . . . . . . . . . . . . 130 6.1.3 Multilevel methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 6.2 Preconditioners with Geometric Meshes . . . . . . . . . . . . . . . . . . . . . . . . . 136 6.2.1 A two-level preconditioner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 6.2.2 A multilevel preconditioner . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 6.3 Results for the Weakly-Singular Integral Equation . . . . . . . . . . . . . . . . 139 6.4 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 A Fully Discrete Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 7.1 The Boundary Integral Equation and a Fully Discrete Method . . . . . . 143 7.2 The Fully-Discrete and Symmetric Method . . . . . . . . . . . . . . . . . . . . . . 144 7.3 Two-level Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 7.3.1 A non-overlapping method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 7.3.2 An overlapping method . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . 148 7.4 A Multilevel Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
xvi
Contents
7.5
Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 7.5.1 Implementation issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 7.5.2 Numerical results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 8 Indefinite Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 8.1 General Theory for Indefinite Problems . . . . . . . . . . . . . . . . . . . . . . . . . 154 8.1.1 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 8.1.2 Additive Schwarz operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 8.2 Hypersingular Integral Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 8.2.1 The h-version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 8.2.2 The p-version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 8.3 Weakly-Singular Integral Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 8.3.1 The h-version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 8.3.2 The p-version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 8.4 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 8.5 Indefinite Problems in Three-Dimensions . . . . . . . . . . . . . . . . . . . . . . . 173 9 Implementation Issues and Numerical Experiments . . . . . . . . . . . . . . . . . . . . 175 9.1 Implementation Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 9.2 Numerical Results for the h-Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 9.3 Numerical Results for the p-Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 9.4 Numerical Results for the hp-Version . . . . . . . . 
. . . . . . . . . . . . . . . . . . . 199 Part III Three-Dimensional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 10 Two-Level Methods: the hp-Version on Rectangular Elements . . . . . . . . . . . 205 10.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 10.1.1 Two-level meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 10.1.2 Shape functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206 10.1.3 Boundary element spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 10.2 The Hypersingular Integral Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 10.2.1 A non-overlapping method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 10.2.1.1 Subspace decomposition . . . . . . . . . . . . . . . . . . . . . . 209 10.2.1.2 Bilinear forms on subspaces . . . . . . . . . . . . . . . . . . . 210 10.2.1.3 Auxiliary lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 10.2.1.4 Coercivity and stability of the decomposition . . . . 216 10.2.2 An overlapping method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226 10.2.2.1 The h-version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226 10.2.2.2 The hp-version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234 10.3 The Weakly-Singular Integral Equation . . . . . . . . . . . . . . . . . . . . . . . . . 234 10.4 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239 10.4.1 The hypersingular integral equation . . . . . . . . . . . . . . . . . . . . . 239 10.4.2 The Lam´e equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240 11 Two-Level Methods: the hp-Version on Triangular Elements . . . . . . . . . . . . 243 11.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . 243 11.1.1 Sobolev spaces of functions vanishing on a part of the boundary of a domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 11.1.2 Extension operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Contents
xvii
11.1.3 Construction of basis functions . . . . . . . . . . . . . . . . . . . . . . . . . 253 11.1.3.1 Construction of preliminary vertex basis functions 254 11.1.3.2 Construction of preliminary edge basis functions . 255 11.1.3.3 Construction of preliminary interior basis functions255 11.1.3.4 Representation of a polynomial by preliminary vertex, edge, and interior components . . . . . . . . . . . 256 11.1.4 Change of basis functions and change of the wire basket space257 11.1.5 The wire basket in three dimensions . . . . . . . . . . . . . . . . . . . . . 258 11.1.6 Properties of the interpolation operators IW and IW (Ω ) . . . . 260 11.2 Preconditioners for the Hypersingular Integral Equation . . . . . . . . . . . 267 11.2.1 Subspace decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 11.2.2 Preconditioner I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268 11.2.3 Preconditioner II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 11.3 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 12 Diagonal Scaling Preconditioner and Locally-Refined Meshes . . . . . . . . . . . 277 12.1 Problem Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278 12.2 Preconditioning by Diagonal Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 12.3 Shape-Regular Mesh Refinements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281 12.3.1 Coercivity of the decomposition . . . . . . . . . . . . . . . . . . . . . . . . 282 12.3.2 Stability of the decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . 286 12.3.3 Bounds for the condition numbers . . . . . . . . . . . . . . . . . . . . . . . 289 12.4 Anisotropic Mesh Refinements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290 12.4.1 Technical results . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . 292 12.4.2 Coercivity of the decomposition . . . . . . . . . . . . . . . . . . . . . . . . 294 12.4.3 Stability of the decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . 296 12.4.4 Bounds for the condition numbers . . . . . . . . . . . . . . . . . . . . . . . 298 12.5 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300 13 Multilevel Preconditioners with Adaptive Mesh Refinements . . . . . . . . . . . . 307 13.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308 13.1.1 Mesh refinements and hierarchical structures . . . . . . . . . . . . . . 308 13.1.1.1 Shape-regular triangulations . . . . . . . . . . . . . . . . . . . 308 13.1.1.2 Newest vertex bisection algorithm (NVB algorithm) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309 13.1.1.3 Hierarchical structures and the Scott–Zhang interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312 13.1.2 Level functions and uniform mesh refinements . . . . . . . . . . . . 315 13.2 Multilevel Preconditioners for the Hypersingular Integral Equation . . 320 13.2.1 Local multilevel diagonal preconditioner (LMD preconditioner) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320 13.2.2 Global multilevel diagonal preconditioner (GMD preconditioner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 13.3 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327 13.4 A Remark on the Weakly-Singular Integral Equation . . . . . . . . . . . . . . 328
Part IV FEM–BEM Coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329 14 FEM-BEM Coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 14.1 The Interface Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332 14.1.1 The symmetric FEM-BEM coupling . . . . . . . . . . . . . . . . . . . . . 333 14.1.1.1 Existence and uniqueness of solutions . . . . . . . . . . . 333 14.1.1.2 Galerkin solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 14.1.1.3 Matrix representation . . . . . . . . . . . . . . . . . . . . . . . . 335 14.1.2 Non-symmetric coupling methods . . . . . . . . . . . . . . . . . . . . . . . 338 14.1.2.1 The Johnson–N´ed´elec coupling . . . . . . . . . . . . . . . . 338 14.1.2.2 The Bielak–MacCamy coupling . . . . . . . . . . . . . . . . 340 14.2 Preconditioning for the Symmetric Coupling . . . . . . . . . . . . . . . . . . . . . 341 14.2.1 Preconditioning with HMCR . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 14.2.1.1 Three-block preconditioner: . . . . . . . . . . . . . . . . . . . 342 14.2.1.2 Two-block preconditioner: . . . . . . . . . . . . . . . . . . . . 351 14.2.2 Preconditioning with GMRES . . . . . . . . . . . . . . . . . . . . . . . . . . 354 14.3 Preconditioning for the Non-Symmetric Coupling Methods . . . . . . . . 359 14.4 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363 Part V Problems on the Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 15 Pseudo-differential Equations with Spherical Splines . . . . . . . . . . . . . . . . . . . 369 15.1 Pseudo-differential Operators and Sobolev Spaces . . . . . . . . . . . . . . . . 369 15.1.1 Sobolev spaces on the sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . 369 15.1.2 Pseudo-differential operators on the sphere . . . . . . . . . . . . . 
. . 372 15.2 Solving Pseudo-differential Equations by Spherical Splines . . . . . . . . 373 15.2.1 Spherical splines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373 15.2.1.1 Spherical triangulation . . . . . . . . . . . . . . . . . . . . . . . 373 15.2.1.2 Spherical splines . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374 15.2.1.3 Quasi-interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . 375 15.2.2 Approximate solutions and error estimates . . . . . . . . . . . . . . . . 377 15.3 Additive Schwarz Methods for Equations on the Sphere . . . . . . . . . . . 378 15.3.1 Decomposition of the finite element space . . . . . . . . . . . . . . . . 378 15.3.1.1 Two-level meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378 15.3.1.2 Domain decomposition . . . . . . . . . . . . . . . . . . . . . . . 379 15.3.1.3 Subspace decomposition . . . . . . . . . . . . . . . . . . . . . . 380 15.3.2 The hypersingular integral equation . . . . . . . . . . . . . . . . . . . . . 380 15.3.2.1 The equation and its Galerkin approximation . . . . . 380 15.3.2.2 Technical lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . 382 15.3.2.3 Stability of decomposition: A general result for both odd and even degrees . . . . . . . . . . . . . . . . . . . . 392 15.3.2.4 A better result on stability of the decomposition for even degrees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393 15.3.2.5 Bounds for the condition number of the preconditioned system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395 15.4 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
16 Pseudo-differential Equations with Radial Basis Functions . . . . . . . . . . . . . . 399 16.1 Radial Basis Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400 16.1.1 Positive-definite kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400 16.1.2 Spherical radial basis functions . . . . . . . . . . . . . . . . . . . . . . . . . 401 16.1.3 Native space and reproducing kernel property . . . . . . . . . . . . . 403 16.2 Solving Pseudo-differential Equations by Radial Basis Functions . . . . 403 16.3 Additive Schwarz Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404 16.3.1 Subspace decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 16.3.2 Coercivity of the decomposition . . . . . . . . . . . . . . . . . . . . . . . . 406 16.3.3 Stability of the decomposition and bounds for the minimum eigenvalue of P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408 16.3.4 Bounds for the condition number . . . . . . . . . . . . . . . . . . . . . . . . 412 16.4 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412 16.4.1 An algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412 16.4.2 Numerical results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414 Part VI Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423 A Interpolation Spaces and Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425 A.1 Real Interpolation Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426 A.1.1 Compatible couples and intermediate spaces . . . . . . . . . . . . . . 426 A.1.2 The K-functional . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427 A.2 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . 429 A.2.1 Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429 A.2.2 Sobolev spaces on Rd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429 A.2.2.1 The space H s (Rd ), s ∈ R . . . . . . . . . . . . . . . . . . . . . 429 A.2.2.2 The space W s (Rd ), s ∈ R . . . . . . . . . . . . . . . . . . . . . 430 A.2.3 Sobolev spaces on a Lipschitz domain . . . . . . . . . . . . . . . . . . . 432 A.2.3.1 Lipschitz domains . . . . . . . . . . . . . . . . . . . . . . . . . . . 433 s (Ω ), s ∈ R . . . . 433 A.2.3.2 The spaces H s (Ω ), H0s (Ω ), and H A.2.3.3 Density and duality properties . . . . . . . . . . . . . . . . . 435 s (Ω ), s ∈ R . . . . . . . . . . . 435 A.2.3.4 The spaces W s (Ω ) and W A.2.3.5 The interpolation spaces [H s0 (Ω ), H s1 (Ω )]θ and s1 (Ω )]θ . . . . . . . . . . . . . . . . . . . . . . . . . . 436 s0 (Ω ), H [H A.2.4 Extension operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438 A.2.5 Equivalence of norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439 A.2.6 The weighted norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445 s (Ω ) for A.2.6.1 The weighted norms in H s (Ω ) and H s > 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445 A.2.6.2 The interpolation norms with weights in H s (Ω ) −s (Ω ) for s ∈ [0, 1] and Ω ⊂ R2 . . . . . . . . . . 446 and H A.2.7 Scaling properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446 A.2.8 Important results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450 A.2.9 Comparison of global and local norms . . . . . . . . . . . . . . . . . . . 457 A.2.10 The special case s = 1/2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467 A.2.11 Sobolev spaces on curves and surfaces . . . . . . . . . . . . . . . . . . . 
480 A.2.12 A generalised antiderivative operator in Sobolev spaces . . . . . 482
B Boundary Integral Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 B.1 Boundary Integral Operators and Pseudo-differential Operators on the Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 B.1.1 Boundary potentials and boundary integral operators . . . . . . . 486 B.1.2 Representation of harmonic functions by potentials . . . . . . . . 487 B.1.2.1 Interior problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487 B.1.2.2 Exterior problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 488 B.1.3 Dirichlet-to-Neumann and Neumann-to-Dirichlet operators . . 488 B.1.4 The weakly-singular and hypersingular bilinear forms . . . . . . 491 B.1.5 Representations of solutions to the Laplace equation by spherical harmonics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492 B.1.5.1 Dirichlet problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 494 B.1.5.2 Neumann problems . . . . . . . . . . . . . . . . . . . . . . . . . . 494 B.1.6 Representations of boundary integral operators by spherical harmonics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495 B.1.6.1 DtN and NtD operators . . . . . . . . . . . . . . . . . . . . . . . 495 B.1.6.2 Weakly-singular integral operator . . . . . . . . . . . . . . 496 B.1.6.3 Hypersingular integral operator . . . . . . . . . . . . . . . . 496 B.1.6.4 The operator K and its adjoint K . . . . . . . . . . . . . . . 496 B.2 Discretised Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 B.2.1 Natural embedding operators and biorthonormal bases . . . . . . 497 B.2.2 Discretised operators and matrix representations . . . . . . . . . . . 498 B.2.2.1 Operators from X into Y ∗ . . . . . . . . . . . . . . . . . . . 498 B.2.2.2 Operators from X into X ∗ . . . . . . . . . . . . . . . . . . . 
498 B.2.2.3 Operators from X into X where X is reflexive . 499 B.2.3 Compositions of operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499 C Some Additional Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503 C.1 Conditioning of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503 C.1.1 The hierarchical basis functions for the p-version . . . . . . . . . . 504 C.1.2 The condition numbers of the weakly-singular and hypersingular stiffness matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507 C.1.3 Extremal eigenvalues of equivalent matrices . . . . . . . . . . . . . . 511 C.1.4 Eigenvalues of block matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 512 C.2 Norms of Nodal Basis Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517 C.3 Further Properties of the Hierarchical Basis Functions for the pVersion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520 C.4 Some Properties of Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526 C.4.1 Inverse properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526 C.4.2 Some useful bounds for the h and p finite element functions . 529 C.4.2.1 The h-version in one dimension . . . . . . . . . . . . . . . . 529 C.4.2.2 The h-version in two dimensions . . . . . . . . . . . . . . . 532 C.4.2.3 The p-version and hp-version in one dimension . . 536 C.4.2.4 The p-version and hp-version in two dimensions . 538 C.4.2.5 Functions of zero mean . . . . . . . . . . . . . . . . . . . . . . . 539 C.4.3 Polynomials of low energies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
C.4.4 Discrete harmonic functions and discrete harmonic extension 542 C.5 Some Useful Projections or Projection-like Operators . . . . . . . . . . . . . 544 C.5.1 The L2 -projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544 C.5.2 The standard interpolation operator . . . . . . . . . . . . . . . . . . . . . . 545 C.5.3 Some useful operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553 C.5.4 Cl´ement’s interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558 C.5.5 The Scott–Zhang interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . 561 C.6 Gauss–Lobatto Quadrature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562 C.7 Additional Technical Lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581 Index of Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
Part I
General Theory
Chapter 1
Introduction
This chapter sets the scene for the problems to be studied in the book. We first introduce in Section 1.1 typical problems from which boundary integral equations arise, namely, the scattering of time-harmonic waves and cracks in elastic media. Single-layer and double-layer potentials with the appropriate fundamental solutions are given together with the first-kind integral equations for Dirichlet and Neumann problems, involving weakly-singular and hypersingular boundary integral equations, respectively.

Section 1.2 briefly introduces the corresponding Galerkin schemes for approximately solving these boundary integral equations. It is well known that the solutions of the screen and crack problems in Subsection 1.1.1 and Subsection 1.1.2 are less regular near the screen or crack. The solutions of the boundary integral equations inherit this lack of smoothness. Therefore, to achieve desirable approximations, one uses besides the h-version (where better approximation is obtained by refining the mesh) the p-version (which yields better approximation by increasing the polynomial degree) and the hp-version as a combination of both previous versions. Besides (locally) quasi-uniform meshes, one takes (algebraically or geometrically) graded meshes to obtain even more accurate approximate solutions of the integral equations. If the location where the solution becomes less regular is not known a priori, it pays off to apply adaptive schemes. These will also be mentioned in Section 1.2.

The Galerkin schemes described above result in systems of linear equations with dense and ill-conditioned matrices. As key solvers, we advocate the conjugate gradient method for symmetric positive definite matrices, and GMRES for nonsymmetric matrices. Both methods will be discussed in detail in Chapter 2. The ill-conditioning of the matrices is discussed in Section 1.3.

As the matrices are ill-conditioned, the number of iterations of the solvers blows up quickly with the number of degrees of freedom of the various Galerkin schemes. Therefore, preconditioning must be performed. As the matrices are dense, this is not a straightforward task. The following chapters describe and analyse domain decomposition and multilevel methods as a remedy for these defects.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 E. P. Stephan and T. Tran, Schwarz Methods and Multilevel Preconditioners for Boundary Element Methods, https://doi.org/10.1007/978-3-030-79283-1_1
1.1 Model Problems

Let Γ ⊂ R^d be a (curved) slit when d = 2 or a screen with boundary γ when d = 3, and let n be a normal vector to Γ. To fix the idea, we define

$$\Gamma := \begin{cases} (-1,1) \times \{0\} & \text{if } d = 2,\\ (-1,1) \times (-1,1) \times \{0\} & \text{if } d = 3, \end{cases}$$

and denote by Γ1 and Γ2 the two sides of Γ determined by n; see Figure 1.1.

Fig. 1.1 Screen Γ ⊂ R^d and sides Γ1, Γ2 (left: d = 2, right: d = 3)
1.1.1 Screen problems

Denoting ΩΓ := R^d \ Γ and r := |x| (where |x| is the Euclidean norm of x ∈ R^d), we consider the Dirichlet screen problem

$$\text{(DSP)}\qquad \begin{cases} (\Delta + \kappa^2)\, u = 0 & \text{in } \Omega_\Gamma,\\ u = f_i & \text{on } \Gamma_i,\ i = 1, 2, \end{cases}$$

and the Neumann screen problem

$$\text{(NSP)}\qquad \begin{cases} (\Delta + \kappa^2)\, u = 0 & \text{in } \Omega_\Gamma,\\ \dfrac{\partial u}{\partial n} = g_i & \text{on } \Gamma_i,\ i = 1, 2, \end{cases}$$
where Im(κ) ≥ 0 is the wave number, and the data fi and gi, i = 1, 2, are given functions to be specified later. For each problem, the solution also satisfies the radiation condition at infinity, i.e., the condition when r → ∞,

$$\text{(IC)}\qquad
\begin{aligned}
&\text{For } \kappa = 0: && u(x) = \begin{cases} O(\log r) & \text{if } d = 2,\\ O(1/r) & \text{if } d = 3; \end{cases}\\
&\text{For } \kappa \neq 0: && u(x) = \begin{cases} O(1/\sqrt{r}) & \text{if } d = 2,\\ O(1/r) & \text{if } d = 3, \end{cases}
\qquad
\frac{\partial u}{\partial r} - i\kappa u = \begin{cases} o(1/\sqrt{r}) & \text{if } d = 2,\\ o(1/r) & \text{if } d = 3. \end{cases}
\end{aligned}$$

Let G(x, y) denote the fundamental solution given by

$$G(x,y) = \begin{cases}
-\dfrac{1}{2\pi}\,\log|x-y|, & d = 2,\ \kappa = 0,\\[4pt]
\dfrac{i}{4}\, H_0^{(1)}(\kappa\,|x-y|), & d = 2,\ \kappa \neq 0,\\[4pt]
\dfrac{1}{4\pi}\,\dfrac{e^{i\kappa|x-y|}}{|x-y|}, & d = 3,
\end{cases}$$

where $H_0^{(1)}$ is the Hankel function of order zero and of the first kind. We recall the single-layer and double-layer potential operators

$$Sv(x) := 2\int_\Gamma G(x,y)\, v(y)\, d\sigma_y, \qquad x \in \mathbb{R}^d,$$

$$Dv(x) := 2\int_\Gamma \frac{\partial G(x,y)}{\partial n_y}\, v(y)\, d\sigma_y, \qquad x \in \mathbb{R}^d.$$
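The case distinctions above can be mirrored directly in code. The following Python sketch is an editorial illustration, not part of the book; the function name and signature are chosen for convenience, and `scipy.special.hankel1` supplies $H_0^{(1)}$.

```python
import numpy as np
from scipy.special import hankel1  # Hankel function H_0^(1), used for d = 2, kappa != 0

def G(x, y, kappa=0.0, d=2):
    """Fundamental solution G(x, y) of Delta + kappa^2 (Laplace for kappa = 0),
    following the three cases displayed above."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    if d == 2:
        if kappa == 0.0:
            return -np.log(r) / (2.0 * np.pi)
        return 0.25j * hankel1(0, kappa * r)
    return np.exp(1j * kappa * r) / (4.0 * np.pi * r)
```

For d = 2 and small κ the Helmholtz kernel behaves like −(1/2π) log|x − y| plus a constant depending on κ, consistent with the Laplace case; for d = 3 and κ = 0 it reduces to the Newtonian kernel 1/(4π|x − y|).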
The representation formula states that the solution u of (DSP) and (NSP) with the condition (IC) can be represented as

$$u(x) = \frac{1}{2}\, D[u]_\Gamma(x) - \frac{1}{2}\, S\Big[\frac{\partial u}{\partial n}\Big]_\Gamma(x), \qquad x \in \Omega_\Gamma,$$

where, for any function v, the notation [v]_Γ := γ1 v − γ2 v denotes the jump of v across Γ. Here γi is the trace operator onto Γi, i = 1, 2. Hence, for the (DSP) problem it suffices to find [∂u/∂n]_Γ, whereas for the (NSP) problem [u]_Γ is required.

Let V : H̃^{-1/2}(Γ) → H^{1/2}(Γ) and W : H̃^{1/2}(Γ) → H^{-1/2}(Γ) be defined by

$$Vv(x) := 2\int_\Gamma G(x,y)\, v(y)\, ds_y, \qquad x \in \Gamma,$$

$$Wv(x) := -2\,\frac{\partial}{\partial n_x}\int_\Gamma \frac{\partial G(x,y)}{\partial n_y}\, v(y)\, ds_y, \qquad x \in \Gamma.$$
It is known that V : H̃^{-1/2}(Γ) → H^{1/2}(Γ) and W : H̃^{1/2}(Γ) → H^{-1/2}(Γ) are bijective, provided that Im(κ) ≥ 0. It is noted that in the case that κ = 0 and d = 2, we further assume that cap(Γ) < 1, where cap(Γ) denotes the logarithmic capacity of Γ. This assumption, which can always be obtained by scaling, ensures the positive definiteness of V; see e.g. [147]. For the definition of Sobolev spaces, see Appendix A, and for further details on the properties of V and W, see Appendix B.

We assume that the Dirichlet and Neumann data in (DSP) and (NSP) satisfy f1, f2 ∈ H^{1/2}(Γ) and g1, g2 ∈ H^{-1/2}(Γ) such that f1 − f2 ∈ H̃^{1/2}(Γ), g1 − g2 ∈ H̃^{-1/2}(Γ), and ∫_Γ (g1 − g2) = 0. Letting ϕ = [∂u/∂n]_Γ and ψ = [u]_Γ, the (DSP)–(IC) problem is equivalent to the following weakly-singular integral equation

$$V\varphi(x) = f(x), \qquad x \in \Gamma, \tag{1.1}$$

whereas the (NSP)–(IC) problem is equivalent to the hypersingular integral equation

$$W\psi(x) = -g(x), \qquad x \in \Gamma. \tag{1.2}$$
The right-hand sides of (1.1) and (1.2) are defined by

$$f(x) = f_1(x) + f_2(x) + 2\int_\Gamma \frac{\partial G(x,y)}{\partial n_y}\,\big(f_1(y) - f_2(y)\big)\, d\sigma_y,$$

$$g(x) = g_1(x) + g_2(x) - 2\int_\Gamma \frac{\partial G(x,y)}{\partial n_x}\,\big(g_1(y) - g_2(y)\big)\, d\sigma_y;$$

see [67] for the general case and [202] for the case when f1 = f2 and g1 = g2. We define

$$a_V(\varphi, \psi) := \langle V\varphi, \psi\rangle \quad \forall \varphi, \psi \in \widetilde H^{-1/2}(\Gamma),
\qquad
a_W(\varphi, \psi) := \langle W\varphi, \psi\rangle \quad \forall \varphi, \psi \in \widetilde H^{1/2}(\Gamma).$$

A weak formulation for (1.1) reads: Find ϕ ∈ H̃^{-1/2}(Γ) satisfying

$$a_V(\varphi, \phi) = \langle f, \phi\rangle, \qquad \phi \in \widetilde H^{-1/2}(\Gamma). \tag{1.3}$$

Similarly, a weak formulation for (1.2) reads: Find ψ ∈ H̃^{1/2}(Γ) satisfying

$$a_W(\psi, \phi) = -\langle g, \phi\rangle, \qquad \phi \in \widetilde H^{1/2}(\Gamma). \tag{1.4}$$
The existence and uniqueness of the solutions of (1.3) and (1.4) are studied in [68, 202, 212].
1.1.2 Crack problems in elasticity

We also consider the following crack problems involving the Lamé operator. The Dirichlet crack problem is defined as follows. Given two functions f1, f2 ∈ (H^{1/2}(Γ))^d, d = 2, 3, such that f := f1 − f2 ∈ (H̃^{1/2}(Γ))^d, find u ∈ (H^1_{loc}(Ω))^d satisfying

$$\text{(DCP)}\qquad \begin{cases}
\mu\Delta u + (\lambda + \mu)\,\operatorname{grad}\operatorname{div} u = 0 & \text{in } \Omega_\Gamma,\\
u = f_i & \text{on } \Gamma_i,\ i = 1, 2,\\
u(x) = o(1) & \text{as } |x| \to \infty,\\
\nabla u(x) = o(1/|x|) & \text{as } |x| \to \infty.
\end{cases}$$

The Neumann crack problem is defined as follows. Given two functions g1, g2 ∈ (H^{-1/2}(Γ))^d such that g := g1 − g2 ∈ (H̃^{-1/2}(Γ))^d, find u ∈ (H^1_{loc}(Ω))^d satisfying

$$\text{(NCP)}\qquad \begin{cases}
\mu\Delta u + (\lambda + \mu)\,\operatorname{grad}\operatorname{div} u = 0 & \text{in } \Omega_\Gamma,\\
T u = g_i & \text{on } \Gamma_i,\ i = 1, 2,\\
u(x) = o(1) & \text{as } |x| \to \infty,\\
\nabla u(x) = o(1/|x|) & \text{as } |x| \to \infty,
\end{cases}$$

where the Lamé constants μ and λ satisfy

$$\mu > 0 \quad\text{and}\quad \begin{cases} \lambda + \mu > 0 & \text{for } d = 2,\\ \lambda + 2\mu > 0 & \text{for } d = 3, \end{cases}$$

and where the traction Tu ∈ R^d is defined by

$$T u := \lambda(\operatorname{div} u)\, n + 2\mu\,\frac{\partial u}{\partial n} + \mu\, n \times \operatorname{curl} u. \tag{1.5}$$
It is shown in [201] that the solution of (DCP) admits the integral representation

$$u(x) = \int_\Gamma G(x,y)\,[T u](y)\, d\sigma_y, \qquad x \in \Omega_\Gamma,$$

where

$$G(x,y) := \frac{\lambda + 3\mu}{4\pi(d-1)\mu(\lambda + 2\mu)}
\left( \tilde G(x,y)\, I_{d\times d} + \frac{\lambda + \mu}{\lambda + 3\mu}\,\frac{(x-y)(x-y)^\top}{|x-y|^d} \right)$$

with I_{d×d} being the identity matrix of size d × d and

$$\tilde G(x,y) := \begin{cases} -\log|x-y| & \text{for } d = 2,\\[2pt] \dfrac{1}{|x-y|} & \text{for } d = 3. \end{cases}$$
The jump of the traction Tu across Γ, namely [Tu], is the solution to the following weakly-singular integral equation

$$V\varphi = f \quad \text{on } \Gamma, \tag{1.6}$$

where

$$V\varphi(x) := 2\int_\Gamma G(x,y)\,\varphi(y)\, d\sigma_y, \qquad x \in \Gamma,$$

and f := f1 + f2 + Λ(f1 − f2) with

$$\Lambda v(x) := 2\int_\Gamma T_y G(x,y)\, v(y)\, d\sigma_y.$$
Here, T_y is the traction T defined in (1.5) acting in the y-variable. It is also shown in [67, 201] that the solution of (NCP) admits the representation

$$u(x) = \int_\Gamma [u](y)\; T_y G(x,y)\, d\sigma_y$$

with the jump [u] being the solution of

$$W\psi = -g \quad \text{on } \Gamma, \tag{1.7}$$

where

$$W\psi(x) := -2\, T_x \int_\Gamma \big(T_y G(x,y)\big)\,\psi(y)\, d\sigma_y$$

and where

$$g(x) := g_1(x) + g_2(x) + \Lambda(g_1 - g_2), \qquad \Lambda v(x) := 2\, T_x \int_\Gamma G(x,y)\, v(y)\, d\sigma_y.$$

The existence and uniqueness of weak solutions to (1.6) and (1.7) are studied in [67, 201]. It should be noted that corresponding settings can be formulated for problems on closed curves or closed surfaces. Moreover, even though we choose to present the settings for the screen problem, the preconditioners and analysis in the following chapters are applicable to the crack problem as well.
1.2 Galerkin Boundary Element Methods

The solutions of (1.3) and (1.4) are found numerically by solving these equations in finite-dimensional spaces. More precisely, let Ṽ and V be finite-dimensional subspaces of H̃^{-1/2}(Γ) and H̃^{1/2}(Γ), respectively, to be defined later. The solution ϕ of (1.3) is approximated by ϕ_Ṽ ∈ Ṽ satisfying

$$a_V(\varphi_{\widetilde V}, \phi) = \langle f, \phi\rangle, \qquad \phi \in \widetilde V. \tag{1.8}$$

Similarly, the solution of (1.4) is approximated by ψ_V ∈ V satisfying

$$a_W(\psi_V, \phi) = -\langle g, \phi\rangle, \qquad \phi \in V. \tag{1.9}$$

The choice of Ṽ and V determines different versions of the Galerkin method.
1.2.1 The h-version

Consider a partition Th of Γ into subintervals (when d = 2) or triangles/rectangles (when d = 3). On this mesh, we define Ṽ to be the space of piecewise constant functions and V to be the space of continuous piecewise affine functions. In this version, the accuracy of the approximation is improved by considering a family of partitions {Th}_{h>0} with the mesh size h tending to 0. The convergence of this approximation scheme is standard and goes back to the early works by Nédélec & Planchard [162] and Hsiao & Wendland [127] for the weakly-singular integral equation, and Nédélec [160] for the hypersingular integral equation. For polygonal domains, see [65]; for cracks and screens, see [201, 202].
1.2.2 The p-version

In the p-version, we fix one partition Th and define Ṽ and V to be the spaces of piecewise polynomial functions and of continuous piecewise polynomial functions of degree p, respectively. Accuracy is achieved by increasing p. The stability and convergence of the schemes are proved in [208]; see also [190]. More recent works include [30, 91].
1.2.3 The hp-version

This version uses the same finite element spaces as in the p-version and seeks to increase accuracy by both increasing p and decreasing h. The stability and convergence of the schemes are proved in [209].
1.2.4 Graded meshes

On a geometric mesh, the hp-version yields exponential rates of convergence. Early works on curves include [17, 114, 203]. Other works on surfaces include [112, 124, 125, 142].
1.2.5 Adaptive schemes

If a certain accuracy of the solution ϕ_Ṽ of (1.8) or ψ_V of (1.9) is required, adaptive mesh-refining algorithms of the type

Solve → Estimate → Mark → Refine

are used, where, starting with a given initial triangulation J0, a sequence of locally refined triangulations Jl and corresponding Galerkin solutions ϕl and ψl, respectively, are computed. The adaptive mesh refinement then leads to nested sequences of spaces Ṽ(Jl) ⊂ Ṽ(Jl+1) and V(Jl) ⊂ V(Jl+1), respectively, for l = 0, 1, 2, .... A posteriori error analyses for boundary integral equations with error estimators of residual type are derived in [53, 54, 55]. In recent years, convergence and optimality of adaptive BEM have been shown in [50, 74]; see also [92].
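The loop above can be sketched schematically as follows (an editorial illustration: `solve`, `estimate`, and `refine` are problem-specific callbacks, and the toy data at the bottom are purely hypothetical). The marking step shown is Dörfler (bulk) marking, a common choice in the adaptive BEM literature.

```python
def adaptive_loop(mesh, solve, estimate, refine, theta=0.5, tol=1e-2, max_steps=100):
    """Schematic Solve -> Estimate -> Mark -> Refine loop.  Doerfler marking:
    mark a set of elements whose squared indicators sum to >= theta * total."""
    for _ in range(max_steps):
        u = solve(mesh)                      # Galerkin solution on the current mesh
        eta2 = estimate(mesh, u)             # squared local error indicators
        total = sum(eta2)
        if total ** 0.5 <= tol:
            break
        order = sorted(range(len(eta2)), key=lambda i: -eta2[i])
        marked, acc = [], 0.0
        for i in order:
            marked.append(i)
            acc += eta2[i]
            if acc >= theta * total:
                break
        mesh = refine(mesh, marked)
    return u, mesh

# Toy illustration (hypothetical): elements are subinterval lengths of (0, 1),
# the "solver" is a dummy, and the squared indicator of an element of length h is h^3.
def refine(mesh, marked):
    m = set(marked)
    out = []
    for i, h in enumerate(mesh):
        out += [h / 2.0, h / 2.0] if i in m else [h]
    return out

u, mesh = adaptive_loop([1.0], lambda mesh: None,
                        lambda mesh, u: [h ** 3 for h in mesh], refine)
```

With equal indicators this loop bisects roughly half of the elements per step, so the refinement stays quasi-uniform; with singular solutions the indicators concentrate and the refinement becomes local.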
1.3 Condition Numbers

Equations (1.8) and (1.9) and the corresponding equations for the p- and hp-versions result in matrix systems of the form

$$A\mathbf{u} = \mathbf{f}.$$

When the size of A is large, one resorts to the conjugate gradient (CG) method if A is symmetric (which is the case when the wave number κ in (DSP) and (NSP) is zero) or the generalised minimal residual (GMRES) method if A is unsymmetric (which is the case when κ ≠ 0). These two methods will be discussed in detail in Chapter 2.

As will be seen in Chapter 2, if the matrix A is symmetric and positive definite, then the convergence of the CG method depends on the condition number

$$\kappa_2(A) = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)},$$

where λ_max(A) and λ_min(A) are respectively the maximum and minimum eigenvalues of the matrix A. On the other hand, if A is non-symmetric and positive definite, then the convergence of the GMRES method is dictated by the minimum eigenvalue of the symmetric part of A and the maximum eigenvalue of AᵀA. In the appendices we will show how the condition numbers depend on the mesh size h and the polynomial degree p. As will be seen in Chapter 2, when the condition number is large, more iterations are required to obtain the desired accuracy of the approximation. In order to speed up the convergence, it is essential to apply preconditioning to the matrix. This is the subject of the next chapter.
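The growth of κ2(A) under mesh refinement is easy to observe numerically. The snippet below is an editorial illustration using the tridiagonal matrix tridiag(−1, 2, −1) as a convenient stand-in for an ill-conditioned Galerkin matrix (the rates for the actual boundary element matrices are derived in the appendices); here halving h, i.e. doubling n, roughly quadruples the condition number.

```python
import numpy as np

conds = {}
for n in (16, 32, 64):
    # tridiag(-1, 2, -1): its condition number grows like h^{-2} ~ n^2
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
    conds[n] = lam[-1] / lam[0]          # kappa_2(A) = lambda_max / lambda_min
    print(n, conds[n])                   # approximately 116, 441, 1711
```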
Chapter 2
General Framework of Preconditioners
This chapter provides the general framework of preconditioning by additive and multiplicative Schwarz operators. The theory is developed for a general variational equation on a Hilbert space. Preconditioners are viewed from different angles: in the matrix form, variational form, and operator form. We discuss how to implement the preconditioner with the conjugate gradient method (CG), the generalised minimum residual method (GMRES), and with a linear iterative method. Most importantly, we present the theory of convergence which justifies what needs to be proved to show the efficiency of the designed preconditioner. A thorough investigation of bounds for the condition number and convergence rate is presented. In Section 2.1 we introduce the general weak formulation together with the required assumptions. For convenience, we also present the CG and GMRES algorithms and error bounds for each algorithm. The general concept of employing the CG and GMRES with preconditioners is presented in Section 2.2. Various versions of the preconditioned CG and GMRES algorithms are explained in this section. We also present in this section how a linear iterative method is used to solve the preconditioned system. Domain decomposition methods are designed by using Schwarz operators, which is the subject of Section 2.3. This section is followed by a section on additive Schwarz preconditioners (Section 2.4) and a section for multiplicative Schwarz preconditioners (Section 2.5). In each section we present different forms of the preconditioners: the variational and operator forms (which are used in the analysis) and the matrix form (which is used in the implementation). The last four sections are the core of the chapter and will often be referred to in the remainder of the book. Section 2.6 and Section 2.7 provide important ingredients for carrying out the analysis for preconditioners to be used with CG and GMRES, respectively. 
We present in these two sections the required estimates one has to show to be able to ensure convergence. Section 2.8 extends the commonly used CG and GMRES to other Krylov methods, including ORTHODIR and the hybrid modified conjugate residual (HMCR) method, which will be used for preconditioning of symmetric indefinite matrices (in FEM–BEM coupling problems). The chapter concludes with Section 2.9, which presents the convergence analysis for a linear iterative method when it is used to solve a system preconditioned by the additive or multiplicative Schwarz preconditioner.
2.1 Finite Dimensional Problems

We present in this section the general form that covers all equations discussed in Section 1.2. The weak formulation has the following form. Let a(·, ·) : H̃^m(Γ) × H̃^m(Γ) → R be a bilinear form and ℓ ∈ H^{-m}(Γ), m ∈ R, where Γ is an open curve in R² or an open surface in R³. Find u ∈ H̃^m(Γ) satisfying

$$a(u, v) = \langle \ell, v\rangle \qquad \forall v \in \widetilde H^{m}(\Gamma). \tag{2.1}$$

When m = 1/2, equation (2.1) is the hypersingular integral equation arising from the Helmholtz or Laplace equation with a Neumann boundary condition. On the other hand, when m = −1/2, equation (2.1) is the weakly-singular integral equation arising from the Helmholtz or Laplace equation with a Dirichlet boundary condition. The bilinear form a(·, ·) is assumed to satisfy one of the following assumptions.

Assumption 2.1.
(i) The bilinear form a(·, ·) is symmetric;
(ii) there exists a positive constant C_{A1} such that for any u, v ∈ H̃^m(Γ)
$$|a(u, v)| \le C_{A1}\, \|u\|_{\widetilde H^m(\Gamma)}\, \|v\|_{\widetilde H^m(\Gamma)};$$
(iii) there exists a positive constant C_{A2} such that for any u ∈ H̃^m(Γ)
$$|a(u, u)| \ge C_{A2}\, \|u\|_{\widetilde H^m(\Gamma)}^2.$$

Assumption 2.2.
(i) The bilinear form a(·, ·) can be decomposed as a(·, ·) = b(·, ·) + k(·, ·), where the bilinear form b(·, ·) is symmetric and k(·, ·) is skew-symmetric;
(ii) there exists a positive constant C_{B1} such that for any u, v ∈ H̃^m(Γ)
$$|b(u, v)| \le C_{B1}\, \|u\|_{\widetilde H^m(\Gamma)}\, \|v\|_{\widetilde H^m(\Gamma)},$$
$$|k(u, v)| \le C_{B1}\, \|u\|_{\widetilde H^{m-1/2}(\Gamma)}\, \|v\|_{\widetilde H^m(\Gamma)},$$
$$|k(u, v)| \le C_{B1}\, \|u\|_{\widetilde H^m(\Gamma)}\, \|v\|_{\widetilde H^{m-1/2}(\Gamma)};$$
(iii) there exist positive constants C_{B2}, C_{B3}, and C_{B4} such that for any u ∈ H̃^m(Γ)
$$b(u, u) \ge C_{B2}\, \|u\|_{\widetilde H^m(\Gamma)}^2, \qquad
\Re\big(a(u, u)\big) \ge C_{B3}\, \|u\|_{\widetilde H^m(\Gamma)}^2 - C_{B4}\, \|u\|_{\widetilde H^{m-1/2}(\Gamma)}^2.$$
Returning to the model problems introduced in Chapter 1, it is noted that the weak formulations arising from the weakly-singular and hypersingular integral equations for the screen problem with κ = 0 satisfy Assumption 2.1, while the screen problem with κ ≠ 0 and the crack problem satisfy Assumption 2.2; see [201, 202].
2.1.1 Galerkin equations and matrix systems

Let V be a finite-dimensional subspace of a Hilbert space H which has a basis {Φ1, ..., ΦN}, i.e., V = span{Φ1, ..., ΦN}. Then the Galerkin solution to (2.1) is given by: u ∈ V satisfying

$$a(u, v) = \langle f, v\rangle \qquad \forall v \in V. \tag{2.2}$$

Representing the solution u by u = ∑_{i=1}^N u_i Φ_i, we rewrite (2.2) as

$$\sum_{i=1}^{N} u_i\, a(\Phi_i, \Phi_j) = \langle f, \Phi_j\rangle, \qquad j = 1, \ldots, N.$$

This set of equations can be represented in matrix form as

$$A\mathbf{u} = \mathbf{b}, \tag{2.3}$$

where the matrix A and the vectors u and b are defined respectively by

$$A = \big(a(\Phi_i, \Phi_j)\big)_{i,j=1}^{N}, \qquad \mathbf{u} = (u_1, \ldots, u_N)^\top, \qquad \mathbf{b} = \big(\langle f, \Phi_1\rangle, \ldots, \langle f, \Phi_N\rangle\big)^\top.$$

Here ⊤ denotes the transpose. In practice, the system (2.3) is large and therefore iterative methods are preferable to direct solvers. The next two subsections introduce the conjugate gradient method and the generalised minimal residual method. The reader is referred to [16, 87, 96, 183] for the development of these methods and other iterative methods.
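As a concrete illustration (editorial, not from the book), the following Python sketch assembles (2.3) for the weakly-singular operator V of Section 1.1 with κ = 0 on Γ = (−1, 1), using piecewise-constant basis functions; the entries −(1/π)∬ log|x − y| are evaluated by a closed-form antiderivative. Since cap(Γ) = 1/2 < 1, the matrix is symmetric and positive definite, and the printed condition numbers grow under mesh refinement, which motivates the preconditioners developed in this book.

```python
import numpy as np

def H2(t):
    """Second antiderivative used for int int log|x-y|: H2'(t) = t log|t| - t."""
    return 0.0 if t == 0.0 else 0.5 * t * t * (np.log(abs(t)) - 1.5)

def loglog(a, b, c, d):
    """Exact value of int_a^b int_c^d log|x - y| dy dx."""
    return H2(b - c) - H2(a - c) - H2(b - d) + H2(a - d)

def assemble_V(N):
    """Galerkin matrix A_ij = a_V(Phi_j, Phi_i) = <V Phi_j, Phi_i> on
    Gamma = (-1, 1) with N piecewise-constant basis functions, for the
    2D Laplace kernel G(x, y) = -log|x - y| / (2 pi)."""
    x = np.linspace(-1.0, 1.0, N + 1)
    A = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            A[i, j] = -loglog(x[i], x[i + 1], x[j], x[j + 1]) / np.pi
    return A

conds = {}
for N in (8, 16, 32, 64):
    lam = np.linalg.eigvalsh(assemble_V(N))
    conds[N] = lam[-1] / lam[0]
    print(N, lam[0], lam[-1], conds[N])
```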
2.1.2 Solution by CG

When the matrix A in (2.3) is symmetric and positive definite, the conjugate gradient (CG) method can be used. The method, given in Algorithm 2.1, is a Krylov subspace method originally proposed by Hestenes & Stiefel [104]; see Section 2.8 for other Krylov subspace methods.
Algorithm 2.1: Conjugate Gradient Method

k ← 0; u_0 ← 0; r_0 ← b
while r_k ≠ 0 do
    k ← k + 1
    if k = 1 then
        p_1 ← r_0
    else
        β_k ← ⟨r_{k−1}, r_{k−1}⟩_{R^N} / ⟨r_{k−2}, r_{k−2}⟩_{R^N}
        p_k ← r_{k−1} + β_k p_{k−1}
    end if
    α_k ← ⟨r_{k−1}, r_{k−1}⟩_{R^N} / ⟨A p_k, p_k⟩_{R^N}
    u_k ← u_{k−1} + α_k p_k
    r_k ← r_{k−1} − α_k A p_k
end while
u ← u_k
It is well known (see e.g. [83]) that after k iterations the error in the approximation satisfies

‖u − u_k‖_A ≤ 2 ((√κ₂(A) − 1)/(√κ₂(A) + 1))^k ‖u − u_0‖_A,   (2.4)

where the A-norm is defined by ‖v‖_A = (v^⊤Av)^{1/2} for all v ∈ R^N, and κ₂(A) = λ_max(A)/λ_min(A) is the condition number of A (with respect to the Euclidean norm). It follows that a large value of κ₂(A) results in slow convergence of the CG method. To be precise, given a tolerance ε > 0, it takes approximately √κ₂(A) |log(ε/2)|/2 iterations to ensure that

‖u − u_k‖_A ≤ ε ‖u − u_0‖_A.
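Algorithm 2.1 translates almost line by line into code. The following is an illustrative plain-Python sketch (lists as vectors, not the book's code); `dot` and `matvec` realise the Euclidean inner product ⟨·,·⟩_{R^N} and the product Ap_k used in the loop, and the test matrix is an arbitrary small SPD example.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(A, x):
    return [dot(row, x) for row in A]

def cg(A, b, tol=1e-12, maxit=200):
    """Conjugate gradients for SPD A, started from u_0 = 0 as in Algorithm 2.1."""
    u = [0.0] * len(b)
    r = b[:]                     # r_0 = b since u_0 = 0
    p = r[:]
    rho = dot(r, r)
    for _ in range(maxit):
        if rho <= tol * tol:
            break
        Ap = matvec(A, p)
        alpha = rho / dot(Ap, p)             # alpha_k = <r,r> / <Ap,p>
        u = [ui + alpha * pi for ui, pi in zip(u, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rho_new = dot(r, r)
        beta = rho_new / rho                 # beta_k = <r_k,r_k> / <r_{k-1},r_{k-1}>
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rho = rho_new
    return u

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]   # SPD
b = [1.0, 2.0, 3.0]
u = cg(A, b)
```

In exact arithmetic CG reaches the solution in at most N steps; the bound (2.4) explains why, in practice, the iteration count is governed by √κ₂(A).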
2.1.3 Solution by GMRES

When the matrix A in (2.3) is non-symmetric and/or indefinite, the generalised minimal residual (GMRES) method can be used. The method, originally proposed by Saad & Schultz [184], is also a Krylov subspace method and is given in Algorithm 2.2.
Algorithm 2.2: Generalised Minimal Residual Method
1:  Start: Choose u_0
2:  Arnoldi process: r_0 ← b − Au_0, β ← ‖r_0‖_{R^N}, v_1 ← r_0/β
3:  for j = 1, 2, …, k do
4:    p_j ← Av_j
5:    for i = 1, 2, …, j do
6:      h_{i,j} ← ⟨p_j, v_i⟩_{R^N}
7:      p_j ← p_j − h_{i,j}v_i
8:    end for
9:    h_{j+1,j} ← ‖p_j‖_{R^N}
10:   if h_{j+1,j} = 0 then
11:     j ← k
12:   else
13:     v_{j+1} ← p_j/h_{j+1,j}
14:   end if
15: end for
16: V_k ← [v_1, …, v_k] ∈ R^{N×k}
17: H_k ← (k + 1) × k Hessenberg matrix with nonzero entries h_{i,j}
18: Approximate solution: w_k ← arg min_{w∈R^k} ‖βe_1 − H_k w‖_{R^{k+1}}, where e_1 = [1, 0, …, 0]^⊤ ∈ R^{k+1}
19: u_k ← u_0 + V_k w_k
20: Restart: if stopping criterion satisfied then
21:   u ← u_k; stop
22: else
23:   u_0 ← u_k and go to Step 2.
24: end if
Let A_s be the symmetric part of A, i.e., A_s := (A + A^⊤)/2. It is proved in [73] that if A_s is positive definite then

‖r_k‖_{R^N} ≤ (1 − λ_min(A_s)²/λ_max(A^⊤A))^{k/2} ‖r_0‖_{R^N}.   (2.5)

For this bound to guarantee fast convergence of GMRES, λ_min(A_s) must be bounded away from zero and λ_max(A^⊤A) must not be too large. In Section 2.7 we will discuss how to estimate these eigenvalues.
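One cycle of Algorithm 2.2 can be sketched compactly in plain Python. This is an illustrative sketch under simplifying assumptions: the small least-squares problem of Step 18 is solved via the normal equations H^⊤H w = H^⊤(βe_1) (adequate for tiny examples, not the Givens-rotation update used in production codes), and only "lucky" breakdown at the final Arnoldi step is handled.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(A, x):
    return [dot(row, x) for row in A]

def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(rhs)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            aug[r] = [a - f * b for a, b in zip(aug[r], aug[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][j] * x[j] for j in range(r + 1, n))) / aug[r][r]
    return x

def gmres_cycle(A, b, u0, k):
    """One GMRES cycle of dimension k (Arnoldi + residual minimisation)."""
    r0 = [bi - yi for bi, yi in zip(b, matvec(A, u0))]
    beta = dot(r0, r0) ** 0.5
    V = [[ri / beta for ri in r0]]                     # v_1 = r_0 / beta
    H = [[0.0] * k for _ in range(k + 1)]
    for j in range(k):                                 # Arnoldi process
        p = matvec(A, V[j])
        for i in range(j + 1):
            H[i][j] = dot(p, V[i])
            p = [pi - H[i][j] * vi for pi, vi in zip(p, V[i])]
        H[j + 1][j] = dot(p, p) ** 0.5
        if H[j + 1][j] < 1e-12:                        # lucky breakdown
            break
        V.append([pi / H[j + 1][j] for pi in p])
    # Step 18: minimise ||beta*e1 - H w|| via normal equations
    HtH = [[sum(H[l][i] * H[l][j] for l in range(k + 1)) for j in range(k)]
           for i in range(k)]
    w = solve_dense(HtH, [beta * H[0][j] for j in range(k)])
    return [u0[i] + sum(w[j] * V[j][i] for j in range(k)) for i in range(len(b))]

A = [[2.0, 1.0, 0.0], [0.0, 2.0, 1.0], [1.0, 0.0, 2.0]]   # non-symmetric
b = [1.0, 2.0, 3.0]
u = gmres_cycle(A, b, [0.0, 0.0, 0.0], 3)
```

With k = N = 3 and a full-dimensional Krylov space, the minimal residual is zero and the cycle returns the exact solution up to roundoff.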
2.2 Preconditioners

In this section we explain how to expedite the convergence of the CG and GMRES algorithms by preconditioning. The reader is referred to [24, 87, 96, 183, 233] for more details.
2.2.1 Preconditioned conjugate gradient method

A preconditioner C is a symmetric matrix that satisfies C⁻¹ ≈ A⁻¹. Hence instead of solving Au = b we solve the seemingly simpler equation C⁻¹Au = C⁻¹b. Note however that even though both A and C⁻¹ are symmetric, in general C⁻¹A is not symmetric. There are two approaches:

Approach 1: Since C^{−1/2}AC^{−1/2} is symmetric and positive definite, it is possible to obtain the solution u by successively solving with the CG method (Algorithm 2.1) the following two systems:

Ãũ = b̃,   with   C^{1/2}u = ũ,   (2.6)

where

Ã := C^{−1/2}AC^{−1/2}   and   b̃ := C^{−1/2}b.

It turns out that in practice we do not even have to solve the two systems in (2.6); see Algorithm 2.3.

Approach 2: Note that C⁻¹A is symmetric with respect to the inner product ⟨·,·⟩_C defined by ⟨·,·⟩_C := ⟨C·,·⟩_{R^N}, since for all x, y ∈ R^N

⟨C⁻¹Ax, y⟩_C = y^⊤CC⁻¹Ax = y^⊤Ax = x^⊤Ay = x^⊤CC⁻¹Ay = ⟨x, C⁻¹Ay⟩_C.

Therefore we can solve

Âu = b̂   (2.7)

by the CG method (Algorithm 2.1) with the inner product ⟨·,·⟩_C, where

Â := C⁻¹A   and   b̂ := C⁻¹b.   (2.8)

In the following we show that both approaches in fact entail the same algorithm. It should be noted that in the implementation neither C^{−1/2} (for Approach 1)
nor C⁻¹ (for Approach 2) need be computed. At each iteration step an equation of the form Cz = r is solved. Note however that the matrix C itself is not computed either. The preconditioner is designed so that z can be computed without an explicit form of C; see Section 2.4.2.

Consider the first approach, where we solve (2.6). Algorithm 2.1 applied to this equation reads:

k = 0; ũ_0 = 0; r̃_0 = b̃
while r̃_k ≠ 0
  k = k + 1
  if k = 1
    p̃_1 = r̃_0
  else
    β̃_k = ⟨r̃_{k−1}, r̃_{k−1}⟩_{R^N} / ⟨r̃_{k−2}, r̃_{k−2}⟩_{R^N}
    p̃_k = r̃_{k−1} + β̃_k p̃_{k−1}
  end
  α̃_k = ⟨r̃_{k−1}, r̃_{k−1}⟩_{R^N} / ⟨Ãp̃_k, p̃_k⟩_{R^N}
  ũ_k = ũ_{k−1} + α̃_k p̃_k
  r̃_k = r̃_{k−1} − α̃_k Ãp̃_k
end
u = ũ_k   (2.9)

It can be shown by induction that r̃_k = b̃ − Ãũ_k. If we define

u_k := C^{−1/2}ũ_k,   r_k := b − Au_k,   z_k := C⁻¹r_k,   p_k := C^{−1/2}p̃_k,   α_k := α̃_k,   β_k := β̃_k,   (2.10)

then

r̃_k = C^{−1/2}b − C^{−1/2}AC^{−1/2}C^{1/2}u_k = C^{−1/2}(b − Au_k) = C^{−1/2}r_k,

so that
⟨r̃_k, r̃_k⟩_{R^N} = ⟨z_k, r_k⟩_{R^N}   and   ⟨Ãp̃_k, p̃_k⟩_{R^N} = ⟨Ap_k, p_k⟩_{R^N},

implying

β_k = ⟨z_{k−1}, r_{k−1}⟩_{R^N} / ⟨z_{k−2}, r_{k−2}⟩_{R^N},   p_k = z_{k−1} + β_k p_{k−1},
α_k = ⟨z_{k−1}, r_{k−1}⟩_{R^N} / ⟨Ap_k, p_k⟩_{R^N},   u_k = u_{k−1} + α_k p_k.

It is also clear that if {ũ_k} converges to ũ then {u_k} converges to u. Hence it follows from (2.9) that u_k can be computed by Algorithm 2.3.

Algorithm 2.3: Preconditioned Conjugate Gradient Method
1   k = 0; u_0 = 0; r_0 = b
2   while r_k ≠ 0 do
3     k = k + 1
4     if k = 1 then
5       z_0 = C⁻¹r_0
6       p_1 = z_0
7     else
8       z_{k−1} = C⁻¹r_{k−1}
9       β_k = ⟨z_{k−1}, r_{k−1}⟩_{R^N} / ⟨z_{k−2}, r_{k−2}⟩_{R^N}
10      p_k = z_{k−1} + β_k p_{k−1}
11    end
12    α_k = ⟨z_{k−1}, r_{k−1}⟩_{R^N} / ⟨Ap_k, p_k⟩_{R^N}
13    u_k = u_{k−1} + α_k p_k
14    r_k = r_{k−1} − α_k Ap_k
15  end
16  u = u_k

Steps 5 and 8 require the
solution of Cz = r for some vector r ∈ R^N, and thus C should be designed such that this problem is easy (cheap) to solve; see Sections 2.4.2, 2.5.1, and 2.5.3.

As mentioned earlier, we can also solve (2.7) by Algorithm 2.1 using the inner product ⟨·,·⟩_C instead of the Euclidean inner product. This generates the following vectors and parameters:

p̂_1 = r̂_0,   β̂_k = ⟨r̂_{k−1}, r̂_{k−1}⟩_C / ⟨r̂_{k−2}, r̂_{k−2}⟩_C,   p̂_k = r̂_{k−1} + β̂_k p̂_{k−1},
α̂_k = ⟨r̂_{k−1}, r̂_{k−1}⟩_C / ⟨Âp̂_k, p̂_k⟩_C.

It can be shown that r̂_k = b̂ − Âû_k. Similarly to (2.10), we define

u_k := û_k,   r_k := b − Au_k,   z_k := C⁻¹r_k,   p_k := p̂_k,   α_k := α̂_k,   β_k := β̂_k.

Then

r̂_k = C⁻¹r_k = z_k,

which implies

⟨r̂_k, r̂_k⟩_C = ⟨z_k, r_k⟩_{R^N}   and   ⟨Âp̂_k, p̂_k⟩_C = ⟨Ap_k, p_k⟩_{R^N},

so that

β_k = ⟨z_{k−1}, r_{k−1}⟩_{R^N} / ⟨z_{k−2}, r_{k−2}⟩_{R^N},   α_k = ⟨z_{k−1}, r_{k−1}⟩_{R^N} / ⟨Ap_k, p_k⟩_{R^N},
p_k = z_{k−1} + β_k p_{k−1},   u_k = u_{k−1} + α_k p_k.

We proved that an application of Algorithm 2.1 to solve (2.7) using the inner product ⟨·,·⟩_C is indeed Algorithm 2.3.

In view of (2.4), the error in the approximation of u by u_k in Algorithm 2.3 is

‖u − u_k‖_A ≤ 2 ((√κ₂(Ã) − 1)/(√κ₂(Ã) + 1))^k ‖u − u_0‖_A.   (2.11)
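Algorithm 2.3 is Algorithm 2.1 plus one preconditioner solve Cz = r per iteration. The sketch below passes that solve in as a callable; the Jacobi preconditioner C = diag(A) used here is only an illustrative stand-in for the Schwarz preconditioners of later chapters, and the matrix is an arbitrary small SPD example.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(A, x):
    return [dot(row, x) for row in A]

def pcg(A, b, csolve, tol=1e-12, maxit=200):
    """Preconditioned CG (Algorithm 2.3): csolve(r) returns z with Cz = r."""
    u = [0.0] * len(b)
    r = b[:]
    z = csolve(r)                      # z_0 = C^{-1} r_0  (Step 5)
    p = z[:]
    rho = dot(z, r)
    for _ in range(maxit):
        if dot(r, r) <= tol * tol:
            break
        Ap = matvec(A, p)
        alpha = rho / dot(Ap, p)       # alpha_k = <z,r> / <Ap,p>
        u = [ui + alpha * pi for ui, pi in zip(u, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        z = csolve(r)                  # solve C z = r  (Step 8)
        rho_new = dot(z, r)
        beta = rho_new / rho
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rho = rho_new
    return u

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]
u = pcg(A, b, jacobi)
```

Neither C nor C⁻¹ appears explicitly: only the action r ↦ C⁻¹r is needed, exactly as the text emphasises.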
2.2.2 Preconditioned GMRES

When equation (2.3) arises from (2.1) where the bilinear form satisfies Assumption 2.2, the matrix A is non-symmetric and/or indefinite. The preconditioner C in this case can be non-symmetric, but it has to be positive definite. If we use the GMRES method defined in Algorithm 2.2 to solve the preconditioned system

Âû = b̂,   (2.12)

where Â = C⁻¹A, û = u, and b̂ = C⁻¹b, then we obtain Algorithm 2.4. This algorithm is often used in practice and is called the left preconditioned GMRES; see e.g. [183]. It is possible to design the right preconditioned GMRES to solve

AC⁻¹û = b   where   û = Cu.

In this book we focus only on the left preconditioner. The reader is referred to [183] for more details on the right version.

It should be remarked that the symmetric part Â_s of Â is not necessarily positive definite, in particular with the preconditioners to be defined in this book (Chapter 8), as proved in [46]. Therefore the convergence estimate (2.5) does not hold in general, and thus optimal results may not be achieved for these preconditioners. This loss of optimality can be avoided by using a more appropriate inner product than the Euclidean inner product, as suggested in [185]. Before presenting this inner product and its use to solve (2.12), we note that Step 18 in Algorithm 2.4 is the minimization of the residual norm ‖r̂_k‖_{R^N} over the Krylov space (see [16, 87, 96, 183, 184])

K_k := span{r̂_0, Âr̂_0, …, Â^{k−1}r̂_0}.

Now let B be the stiffness matrix defined by the bilinear form b(·,·) given in Assumption 2.2, namely

B = (b(Φ_i, Φ_j))_{i,j=1}^N.

By Assumption 2.2 this matrix is symmetric and positive definite. Therefore we can define another inner product and its associated norm

⟨x, y⟩_B := ⟨Bx, y⟩_{R^N}   and   ‖x‖_B := ⟨x, x⟩_B^{1/2} = ⟨Bx, x⟩_{R^N}^{1/2}   ∀x, y ∈ R^N.   (2.13)

A new preconditioned GMRES algorithm is designed by replacing the Euclidean inner product and norm in Steps 2, 6, 9, and 18 of Algorithm 2.4 by the B-inner product and the B-norm. As shown in [185], the minimization of ‖r̂_k‖_B over the Krylov space K_k results in the same Step 18 as in Algorithm 2.4. Therefore we have Algorithm 2.5, whose convergence results will be shown in Section 2.7.1.

In the case that the preconditioner C is symmetric and positive definite, it can be used to define the inner product used in Algorithm 2.5; namely, instead of (2.13) we define

⟨x, y⟩_C := ⟨Cx, y⟩_{R^N}   and   ‖x‖_C := ⟨x, x⟩_C^{1/2} = ⟨Cx, x⟩_{R^N}^{1/2}   ∀x, y ∈ R^N.
Algorithm 2.4: Preconditioned GMRES I
1:  Start: Choose u_0
2:  Arnoldi process: r_0 ← C⁻¹(b − Au_0), β ← ‖r_0‖_{R^N}, v_1 ← r_0/β
3:  for j = 1, 2, …, k do
4:    p_j ← C⁻¹Av_j
5:    for i = 1, 2, …, j do
6:      h_{i,j} ← ⟨p_j, v_i⟩_{R^N}
7:      p_j ← p_j − h_{i,j}v_i
8:    end for
9:    h_{j+1,j} ← ‖p_j‖_{R^N}
10:   if h_{j+1,j} = 0 then
11:     j ← k
12:   else
13:     v_{j+1} ← p_j/h_{j+1,j}
14:   end if
15: end for
16: V_k ← [v_1, …, v_k] ∈ R^{N×k}
17: H_k ← (k + 1) × k Hessenberg matrix with nonzero entries h_{i,j}
18: Approximate solution: w_k ← arg min_{w∈R^k} ‖βe_1 − H_k w‖_{R^{k+1}}, where e_1 = [1, 0, …, 0]^⊤ ∈ R^{k+1}
19: u_k ← u_0 + V_k w_k
20: Restart: if stopping criterion satisfied then
21:   u ← u_k; stop
22: else
23:   u_0 ← u_k and go to Step 2.
24: end if
Algorithm 2.5: Preconditioned GMRES II
1:  Start: Choose u_0
2:  Arnoldi process: r_0 ← C⁻¹(b − Au_0), β ← ‖r_0‖_B, v_1 ← r_0/β
3:  for j = 1, 2, …, k do
4:    p_j ← C⁻¹Av_j
5:    for i = 1, 2, …, j do
6:      h_{i,j} ← ⟨p_j, v_i⟩_B
7:      p_j ← p_j − h_{i,j}v_i
8:    end for
9:    h_{j+1,j} ← ‖p_j‖_B
10:   if h_{j+1,j} = 0 then
11:     j ← k
12:   else
13:     v_{j+1} ← p_j/h_{j+1,j}
14:   end if
15: end for
16: V_k ← [v_1, …, v_k] ∈ R^{N×k}
17: H_k ← (k + 1) × k Hessenberg matrix with nonzero entries h_{i,j}
18: Approximate solution: w_k ← arg min_{w∈R^k} ‖βe_1 − H_k w‖_{R^{k+1}}, where e_1 = [1, 0, …, 0]^⊤ ∈ R^{k+1}
19: u_k ← u_0 + V_k w_k
20: Restart: if stopping criterion satisfied then
21:   u ← u_k; stop
22: else
23:   u_0 ← u_k and go to Step 2.
24: end if

The C-inner product and C-norm are then used in Steps 2, 6, 9, and 18. Since in general C is not known explicitly, these steps cannot be implemented by using C directly. However, we can write β and v_1 in Step 2 as follows. Let s_0 = b − Au_0, so that r_0 = C⁻¹s_0. Then

β = ‖r_0‖_C = ⟨Cr_0, r_0⟩_{R^N}^{1/2} = ⟨s_0, C⁻¹s_0⟩_{R^N}^{1/2}   and   v_1 = C⁻¹s_0/β.

Similarly, letting q_j = Av_j = Cp_j (see Step 4 in Algorithm 2.4 or Algorithm 2.5), we have

h_{i,j} = ⟨p_j, v_i⟩_C = ⟨q_j, v_i⟩_{R^N}   and   h_{j+1,j} = ‖p_j‖_C = ⟨Cp_j, p_j⟩_{R^N}^{1/2} = ⟨q_j, p_j⟩_{R^N}^{1/2}.

Even though an explicit form of C⁻¹ is not known, one can compute C⁻¹x for any x ∈ R^N; see e.g. Section 2.4.2. Therefore, Algorithm 2.5 with the C-inner product becomes Algorithm 2.6 below. We remark that one can repeat the proof in [185, Section 5] with the C-inner product to show that Step 18 is still the same.
2.2.3 Preconditioning and linear iteration

A preconditioner C (not necessarily symmetric) can be used with a linear iterative method to find the approximate solution to Au = b as follows:

u_{k+1} = u_k + C⁻¹(b − Au_k),   k = 0, 1, 2, ….   (2.14)

We note here again that, as in the case of PCG or PGMRES, in order to find C⁻¹(b − Au_k)
Algorithm 2.6: Preconditioned GMRES III – Symmetric preconditioner C
1:  Start: Choose u_0
2:  Arnoldi process: s_0 ← b − Au_0, β ← ⟨s_0, C⁻¹s_0⟩_{R^N}^{1/2}, v_1 ← C⁻¹s_0/β
3:  for j = 1, 2, …, k do
4:    q_j ← Av_j, p_j ← C⁻¹q_j
5:    for i = 1, 2, …, j do
6:      h_{i,j} ← ⟨q_j, v_i⟩_{R^N}
7:      p_j ← p_j − h_{i,j}v_i
8:    end for
9:    h_{j+1,j} ← ⟨q_j, p_j⟩_{R^N}^{1/2}
10:   if h_{j+1,j} = 0 then
11:     j ← k
12:   else
13:     v_{j+1} ← p_j/h_{j+1,j}
14:   end if
15: end for
16: V_k ← [v_1, …, v_k] ∈ R^{N×k}
17: H_k ← (k + 1) × k Hessenberg matrix with nonzero entries h_{i,j}
18: Approximate solution: w_k ← arg min_{w∈R^k} ‖βe_1 − H_k w‖, where e_1 = [1, 0, …, 0]^⊤ ∈ R^{k+1}
19: u_k ← u_0 + V_k w_k
20: Restart: if stopping criterion satisfied then
21:   u ← u_k; stop
22: else
23:   u_0 ← u_k and go to Step 2.
24: end if
we solve Cd_k = b − Au_k. The Jacobi and Gauss–Seidel methods are two examples of linear iterative methods with preconditioners. We note that (2.14) can be written as

u_{k+1} = (I − C⁻¹A)u_k + C⁻¹b,   k = 0, 1, 2, ….   (2.15)
The matrix I − C−1 A is called the iteration matrix. See [96] for more details.
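The iteration (2.14) can be sketched in a few lines. With C = diag(A) it is exactly the Jacobi method; the strictly diagonally dominant test matrix below is an illustrative assumption chosen so that the spectral radius of the iteration matrix I − C⁻¹A is below 1 and the iteration converges.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def linear_iteration(A, b, csolve, num_iters=100):
    """The preconditioned linear iteration (2.14):
    u_{k+1} = u_k + C^{-1}(b - A u_k), with csolve(r) solving C d = r."""
    u = [0.0] * len(b)
    for _ in range(num_iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, u))]
        d = csolve(r)                 # solve C d_k = b - A u_k
        u = [ui + di for ui, di in zip(u, d)]
    return u

# Strictly diagonally dominant A, so the Jacobi iteration converges
A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
u = linear_iteration(A, b, lambda r: [ri / A[i][i] for i, ri in enumerate(r)])
```

The same `csolve` callable could be any of the Schwarz preconditioners defined later in this chapter.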
2.3 Schwarz Operators

2.3.1 Required properties of a preconditioner

Both the preconditioned conjugate gradient method and the preconditioned GMRES method involve the solution of Cz = r for a residual vector r; see Algorithms 2.3, 2.4, 2.5, and 2.6. So it is required that C be designed such that this system is easy to solve. There is no need to compute C or C⁻¹. It is also required that the condition numbers of Ã and Â defined in (2.6) and (2.8) be small. It should be pointed out that there is a competition between these two aims. The extreme case C = I results in a quick solution z but no decrease in the condition number. The other extreme, C = A, results in a condition number equal to 1 but in the same difficulty as solving the original equation.

Before defining the Schwarz operators, we recall the definitions in Section 2.1.1 and observe that for all u, v ∈ V there hold

⟨u, v⟩ = v^⊤Mu   and   a(u, v) = v^⊤Au,

where M and A are N×N matrices with entries M_{ij} = ⟨Φ_i, Φ_j⟩ and A_{ij} = a(Φ_i, Φ_j), respectively. Let M, A : V → V be the operators having matrix representations M and A (with respect to the basis {Φ_1, …, Φ_N}), respectively. Then

Mu = ∑_{m=1}^N ⟨u, Φ_m⟩Φ_m   and   ⟨Au, v⟩ = a(u, Mv)   ∀u, v ∈ V.

If we define b_i = ⟨f, Φ_i⟩ and b = ∑_{i=1}^N b_i Φ_i, then we have the following equivalences:

Au = b (matrix form) ⇐⇒ a(u, v) = ⟨f, v⟩ ∀v ∈ V ⇐⇒ Au = b (operator form).

The first equation is used in the implementation, whereas the last two are used for the analysis. They represent three different forms of the problem to be solved, namely the matrix form, the variational form, and the operator form. In the sequel, while designing preconditioners we develop the three corresponding forms.
2.3.2 Ingredients of the Schwarz operators

The Schwarz operators are defined from projections onto subspaces of V. In the next definition, the spaces V_j have dimensions much smaller than that of V.
2.3.2.1 Subspace decomposition

For j = 0, …, J, let

V_j = span{Φ_1^{(j)}, …, Φ_{N_j}^{(j)}}

be finite dimensional spaces with dimensions N_j < N. We denote by R_j : V → V_j the restriction operator from V onto V_j and by R_j^t : V_j → V the prolongation operator from V_j to V. They are defined so that their matrix representations are R_j and the transpose matrix R_j^⊤, respectively. If V_j is a subspace of V then the entries of R_j and R_j^⊤ are 0s and 1s. We denote by B and B_j the corresponding bases of V and V_j, namely

B = {Φ_1, …, Φ_N}   and   B_j = {Φ_1^{(j)}, …, Φ_{N_j}^{(j)}}.

A representation V = R_0^t V_0 + ⋯ + R_J^t V_J is called a subspace decomposition of V. In practice with FEM/BEM, the subspaces V_j, j = 1, …, J, are defined from a decomposition of the spatial domain into subdomains; hence the name domain decomposition method.

Example 2.1. Assume that V = span{Φ_1, Φ_2, Φ_3} and V_1 = span{Φ_1, Φ_3}. The restriction operator R_1 : V → V_1 is defined by

R_1 u = b_1Φ_1 + b_3Φ_3   for any   u = b_1Φ_1 + b_2Φ_2 + b_3Φ_3,

and the prolongation operator R_1^t : V_1 → V is defined by

R_1^t v_1 = a_1Φ_1 + 0Φ_2 + a_3Φ_3   for any   v_1 = a_1Φ_1 + a_3Φ_3.

It can be seen that

[R_1]_{B,B_1} = R_1 = ( 1 0 0 ; 0 0 1 )   and   [R_1^t]_{B_1,B} = R_1^⊤ = ( 1 0 ; 0 0 ; 0 1 )

(semicolons separating matrix rows), where B = {Φ_1, Φ_2, Φ_3} and B_1 = {Φ_1^{(1)}, Φ_2^{(1)}} := {Φ_1, Φ_3}. The coordinate vector of v_1 as an element of V_1 is [v_1]_{B_1} = v_1 = (a_1, a_3)^⊤, whereas the coordinate vector of v_1 as an element of V is [v_1]_B = R_1^⊤v_1 = (a_1, 0, a_3)^⊤.

2.3.2.2 Projections

Let V_j, R_j and R_j^t, j = 0, …, J, be the spaces and operators defined in Subsection 2.3.2.1. Let a_j(·,·) : V_j × V_j → R be a symmetric, positive definite bilinear form on V_j. We define the following projection-like operators
P̃_j : V → V_j,   a_j(P̃_j u, v_j) = a(u, R_j^t v_j)   ∀u ∈ V, v_j ∈ V_j,   j = 0, …, J.   (2.16)

The operators P̃_j are well defined because the bilinear forms a_j(·,·) are coercive. The reason for calling P̃_j a projection-like operator can be seen from the following lemma.

Lemma 2.1. For j = 0, …, J, if a_j(·,·) is chosen to be the bilinear form inherited from that in V, i.e.,

a_j(u_j, v_j) = a(R_j^t u_j, R_j^t v_j)   ∀u_j, v_j ∈ V_j,   (2.17)

then

P̃_j R_j^t P̃_j = P̃_j.   (2.18)

Proof. The required result follows from (2.16) and (2.17) because

a_j(P̃_j R_j^t P̃_j u, v_j) = a(R_j^t P̃_j u, R_j^t v_j) = a_j(P̃_j u, v_j)   ∀u ∈ V, v_j ∈ V_j.

A projection can be defined from P̃_j as shown in the following lemma.

Lemma 2.2. For j = 0, …, J, let P_j : V → V be defined by

P_j = R_j^t P̃_j.   (2.19)

Then the following statements hold:
(i) P_j is symmetric with respect to the bilinear form a(·,·), i.e., a(P_j u, v) = a(u, P_j v) for all u, v ∈ V.
(ii) P_j is positive semidefinite, i.e., a(P_j u, u) ≥ 0 for all u ∈ V.
(iii) If a_j(·,·) is chosen to satisfy (2.17), then P_j is a projection, i.e., P_j² = P_j.

Proof. (i) For any u, v ∈ V we obtain from the definitions of P_j and P̃_j that

a(P_j u, v) = a_j(P̃_j u, P̃_j v) = a(u, R_j^t P̃_j v) = a(u, P_j v).

Hence P_j is symmetric.
(ii) For any u ∈ V we have

a(P_j u, u) = a(R_j^t P̃_j u, u) = a_j(P̃_j u, P̃_j u) ≥ 0

because a_j(·,·) is positive definite.
(iii) With the choice (2.17), the identity P_j² = P_j follows from (2.18) by applying R_j^t to the left of both sides. This completes the proof of the lemma.
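Example 2.1 and Lemma 2.2 can be checked numerically. The sketch below (plain-Python 3×3 linear algebra, with an arbitrary SPD matrix A as an illustrative assumption) builds R_1 for V_1 = span{Φ_1, Φ_3}, forms the inherited local matrix A_1 = R_1 A R_1^⊤ of (2.17)/(2.25), and verifies that P_1 = R_1^⊤A_1⁻¹R_1A is a projection that is symmetric in the a(·,·)-inner product.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def inv2(M):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]   # SPD, illustrative
R1 = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]                    # picks Phi_1, Phi_3
R1t = transpose(R1)

A1 = matmul(R1, matmul(A, R1t))                  # local matrix (2.25)
P1 = matmul(R1t, matmul(inv2(A1), matmul(R1, A)))  # P_1 = R_1^T A_1^{-1} R_1 A
```

Because A_1 is inherited from A, both P_1² = P_1 (Lemma 2.2(iii)) and the a-symmetry P_1^⊤A = AP_1 (Lemma 2.2(i)) hold exactly.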
2.3.3 Schwarz operators

Definition 2.1. The additive Schwarz operator P_ad : V → V is defined by

P_ad = P_0 + ⋯ + P_J.   (2.20)

The non-symmetric multiplicative Schwarz operator P_mu : V → V is defined by

P_mu = I − E_mu,   (2.21)

where E_mu : V → V is the error propagation operator defined by

E_mu = (I − P_J) ⋯ (I − P_0).   (2.22)

Finally, the symmetric multiplicative Schwarz operator P_smu : V → V is defined by

P_smu = I − E_smu,   (2.23)

where E_smu : V → V is defined by

E_smu = E_mu^t E_mu = (I − P_0) ⋯ (I − P_{J−1})(I − P_J)(I − P_J)(I − P_{J−1}) ⋯ (I − P_0).   (2.24)

The following lemma is a consequence of Lemma 2.2.

Lemma 2.3. The additive Schwarz operator P_ad is symmetric with respect to the bilinear form a(·,·), i.e.,

a(P_ad u, v) = a(u, P_ad v)   ∀u, v ∈ V.

Moreover, if (2.17) holds then

E_smu = (I − P_0) ⋯ (I − P_{J−1})(I − P_J)(I − P_{J−1}) ⋯ (I − P_0).
2.4 Additive Schwarz Preconditioners

We shall now describe how the Schwarz operators are used to define preconditioners. First we describe the matrix form of these operators. We then design the additive Schwarz preconditioner in matrix form, which helps to understand the implementation of the method. Finally, we show the variational and operator forms of this preconditioning technique. These forms are useful in the convergence analysis.
2.4.1 Matrix forms of P̃_j and P_ad

Let A_j := (a_j(Φ_k^{(j)}, Φ_l^{(j)}))_{k,l=1}^{N_j} for j = 0, …, J. It can be easily seen that

a_j(u_j, v_j) = v_j^⊤A_j u_j   ∀u_j, v_j ∈ V_j,   j = 0, …, J,
where u_j = [u_j]_{B_j} and v_j = [v_j]_{B_j}. Here B_j is the basis of V_j. If the bilinear form a_j(·,·) in V_j is inherited from the bilinear form a(·,·) in V, then the matrices A_j and A are related as follows.

Lemma 2.4. If (2.17) holds then

A_j = R_j A R_j^⊤.   (2.25)

Proof. For any k, l = 1, …, N_j we have, identifying basis functions with their coordinate vectors,

a_j(Φ_k^{(j)}, Φ_l^{(j)}) = a(R_j^t Φ_k^{(j)}, R_j^t Φ_l^{(j)}) = (R_j^⊤Φ_l^{(j)})^⊤ A (R_j^⊤Φ_k^{(j)}) = (Φ_l^{(j)})^⊤ R_j A R_j^⊤ Φ_k^{(j)}.

This implies (2.25).
Lemma 2.5. The matrix representations P̃_j, P_j, and P_ad of the operators P̃_j, P_j, and P_ad, respectively, are given by

P̃_j = A_j⁻¹R_j A,   P_j = R_j^⊤A_j⁻¹R_j A   and   P_ad = ∑_{j=0}^J R_j^⊤A_j⁻¹R_j A.   (2.26)

Proof. We have the following equivalences from the definition of P̃_j:

a_j(P̃_j u, v_j) = a(u, R_j^t v_j) ⇐⇒ v_j^⊤A_j P̃_j u = (R_j^⊤v_j)^⊤Au ⇐⇒ v_j^⊤A_j P̃_j u = v_j^⊤R_j Au.

As this holds for all u ∈ V and v_j ∈ V_j, we deduce A_j P̃_j = R_j A, or P̃_j = A_j⁻¹R_j A. The matrix representations of P_j and P_ad follow from their definitions (2.19) and (2.20), concluding the proof of the lemma.

Defining the additive Schwarz preconditioner by

C_ad⁻¹ := ∑_{j=0}^J R_j^⊤A_j⁻¹R_j,

we deduce

P_ad = C_ad⁻¹A.   (2.27)
2.4.2 Additive Schwarz preconditioner: the matrix form

Recall from Algorithm 2.3 that at each iteration step one needs to find z ∈ R^N satisfying C_ad z = r for some r ∈ R^N. This is done by the following algorithm, recalling that N is the dimension of V whereas N_j is the dimension of V_j.

Algorithm 2.7: The matrix form of the additive Schwarz preconditioner
To compute z = C_ad⁻¹r for a given r:
for j = 0, …, J do
  Compute R_j r ∈ R^{N_j}
  Solve A_j y_j = R_j r for y_j ∈ R^{N_j}
end for
z ← ∑_{j=0}^J R_j^⊤y_j = ∑_{j=0}^J R_j^⊤A_j⁻¹R_j r ∈ R^N
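Algorithm 2.7 is a gather / local solve / scatter-add loop. In the sketch below, each subdomain is represented by the list of global indices its subspace spans (an illustrative choice of overlapping index sets on an arbitrary small SPD matrix), so R_j r is a gather, R_j^⊤y_j a scatter-add, and A_j = R_j A R_j^⊤ as in (2.25).

```python
def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(rhs)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            aug[r] = [a - f * b for a, b in zip(aug[r], aug[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][j] * x[j] for j in range(r + 1, n))) / aug[r][r]
    return x

def additive_schwarz(A, subdomains):
    """Return csolve with csolve(r) = C_ad^{-1} r = sum_j R_j^T A_j^{-1} R_j r."""
    blocks = [(idx, [[A[p][q] for q in idx] for p in idx]) for idx in subdomains]
    def csolve(r):
        z = [0.0] * len(r)
        for idx, Aj in blocks:
            yj = solve_dense(Aj, [r[p] for p in idx])   # solve A_j y_j = R_j r
            for p, v in zip(idx, yj):
                z[p] += v                               # z += R_j^T y_j
        return z
    return csolve

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [0.0, 0.0, 1.0, 4.0]]
csolve = additive_schwarz(A, [[0, 1, 2], [1, 2, 3]])    # overlapping subdomains
```

Since each A_j is symmetric, C_ad⁻¹ is symmetric and positive definite, so this `csolve` can be plugged directly into the PCG routine of Section 2.2.1.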
2.4.3 Additive Schwarz preconditioner: the variational form

Let C_ad⁻¹ : V → V be the operator whose matrix representation is C_ad⁻¹. For the analysis, it is sometimes more convenient to write Algorithm 2.7 in the following variational form.

Algorithm 2.8: The variational form of the additive Schwarz preconditioner
For any r ∈ V:
for j = 0, …, J do
  Compute u_j ∈ V_j satisfying a_j(u_j, v_j) = ⟨M⁻¹r, R_j^t v_j⟩ ∀v_j ∈ V_j
end for
Compute C_ad⁻¹r = ∑_{j=0}^J R_j^t u_j
2.4.4 Additive Schwarz preconditioner: the operator form

It follows from (2.27) that P_ad = C_ad⁻¹A. The following equivalence then holds:

Au = b ⇐⇒ P_ad u = C_ad⁻¹b.

The right-hand side C_ad⁻¹b has the following representation.

Lemma 2.6. For all j = 0, …, J let g_j ∈ V_j satisfy

a_j(g_j, v_j) = ⟨f, R_j^t v_j⟩   ∀v_j ∈ V_j.   (2.28)

Define g ∈ V by

g = ∑_{j=0}^J R_j^t g_j.

Then g = C_ad⁻¹b.

Proof. Since any v_j ∈ V_j can be represented as v_j = ∑_{k=1}^{N_j}(v_j)_k Φ_k^{(j)}, we have

⟨f, R_j^t v_j⟩ = ∑_{k=1}^{N_j}(v_j)_k ⟨f, R_j^t Φ_k^{(j)}⟩ = ∑_{k=1}^{N_j}(v_j)_k (R_j b)_k = v_j^⊤R_j b.

Hence (2.28) implies

v_j^⊤A_j g_j = v_j^⊤R_j b,   so that   g_j = A_j⁻¹R_j b.

The definition of g gives

g = ∑_{j=0}^J R_j^⊤A_j⁻¹R_j b = C_ad⁻¹b,

yielding the required identity.
Thanks to Lemma 2.6, Algorithms 2.7 and 2.8 can be written in the operator form as in Algorithm 2.9 below.
Algorithm 2.9: The operator form of the additive Schwarz preconditioner
for j = 0, …, J do
  Find g_j ∈ V_j satisfying a_j(g_j, v_j) = ⟨f, R_j^t v_j⟩ ∀v_j ∈ V_j
end for
g ← ∑_{j=0}^J R_j^t g_j ∈ V
Solve P_ad u = g

2.5 Multiplicative Schwarz Preconditioners

In this section we shall describe the matrix form and operator form of the non-symmetric and symmetric multiplicative Schwarz preconditioners.
2.5.1 Multiplicative Schwarz preconditioner: the matrix form

Algorithm 2.10: The matrix form of the multiplicative Schwarz preconditioner
To compute z = C_mu⁻¹r for a given r ∈ R^N:
y_0 ← R_0^⊤A_0⁻¹R_0 r
for j = 1, …, J do
  y_j ← y_{j−1} + R_j^⊤A_j⁻¹R_j(r − Ay_{j−1})
end for
C_mu⁻¹r ← y_J
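Algorithm 2.10 visits the subdomains sequentially, correcting the current iterate by a local solve on the latest residual. The sketch below uses the same index-set representation of R_j as before; the matrix and the non-overlapping subdomain splitting are illustrative assumptions, under which the sweep is exactly one block Gauss–Seidel step.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(rhs)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            aug[r] = [a - f * b for a, b in zip(aug[r], aug[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][j] * x[j] for j in range(r + 1, n))) / aug[r][r]
    return x

def multiplicative_schwarz(A, subdomains, r):
    """z = C_mu^{-1} r via the sweep of Algorithm 2.10: each subdomain in
    turn adds the correction R_j^T A_j^{-1} R_j (r - A y)."""
    y = [0.0] * len(r)
    for idx in subdomains:
        res = [ri - qi for ri, qi in zip(r, matvec(A, y))]   # r - A y_{j-1}
        Aj = [[A[p][q] for q in idx] for p in idx]
        yj = solve_dense(Aj, [res[p] for p in idx])
        for p, v in zip(idx, yj):
            y[p] += v
    return y

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [0.0, 0.0, 1.0, 4.0]]
r = [1.0, 2.0, 3.0, 4.0]
z = multiplicative_schwarz(A, [[0, 1], [2, 3]], r)
```

Because each local solve is exact (A_j is the exact submatrix), the residual r − Az vanishes on the indices of the last subdomain processed, a hallmark of the sequential (Gauss–Seidel-like) character of the method.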
2.5.2 Multiplicative Schwarz preconditioner: the operator form

We first derive a closed form for C_mu⁻¹.

Lemma 2.7. If E_mu is the matrix representation of the error propagation operator E_mu defined in (2.22), then

E_mu = (I − P_J) ⋯ (I − P_0)   (2.29)

and

C_mu⁻¹ = (I − E_mu)A⁻¹.   (2.30)

Proof. The identity (2.29) follows directly from the definition (2.22). We prove (2.30). Let C_j = R_j^⊤A_j⁻¹R_j for j = 0, …, J. Then y_0 = C_0 r and

y_1 = y_0 + C_1(r − Ay_0) = C_0 r + C_1 r − C_1AC_0 r
    = A⁻¹(AC_0 r + AC_1 r − AC_1AC_0 r) = A⁻¹(I − (I − AC_1)(I − AC_0))r.

By induction we have

y_J = A⁻¹(I − (I − AC_J)(I − AC_{J−1}) ⋯ (I − AC_0))r.

Since C_mu⁻¹r = y_J we infer

C_mu⁻¹ = A⁻¹ − A⁻¹(I − AC_J)(I − AC_{J−1}) ⋯ (I − AC_0).

Applying successively the identity A⁻¹(I − AC_j) = (I − C_jA)A⁻¹ we obtain

A⁻¹(I − AC_J)(I − AC_{J−1}) ⋯ (I − AC_0) = (I − C_JA)(I − C_{J−1}A) ⋯ (I − C_0A)A⁻¹,

so that, noting (2.26) and (2.29),

C_mu⁻¹ = (I − (I − C_JA)(I − C_{J−1}A) ⋯ (I − C_0A))A⁻¹ = (I − (I − P_J)(I − P_{J−1}) ⋯ (I − P_0))A⁻¹ = (I − E_mu)A⁻¹.

The lemma is proved.

It follows from Lemma 2.7 that, in operator form as well,

C_mu⁻¹ = (I − E_mu)A⁻¹.

Recalling the definition of P_mu, see (2.21), we deduce, both as operators and as matrices,

P_mu = C_mu⁻¹A   (2.31)

and the equivalence

Au = b ⇐⇒ P_mu u = C_mu⁻¹b.

It is noted that C_mu⁻¹A = I − E_mu = I − (I − P_J) ⋯ (I − P_0) is not symmetric, and therefore in the implementation the GMRES method or a linear iterative method is used instead of CG. A symmetric version of the multiplicative Schwarz preconditioner is designed in the next section.
2.5.3 Symmetric multiplicative Schwarz

The symmetric version of the multiplicative Schwarz preconditioner is given in Algorithm 2.11.
Algorithm 2.11: The matrix form of the symmetric multiplicative Schwarz preconditioner
To compute z = C_smu⁻¹r for a given r ∈ R^N:
1: y_0 ← R_0^⊤A_0⁻¹R_0 r
2: for j = 1, …, J do
3:   y_j ← y_{j−1} + R_j^⊤A_j⁻¹R_j(r − Ay_{j−1})
4: end for
5: y_{J+1} ← y_J
6: for j = J, …, 0 do
7:   y_j ← y_{j+1} + R_j^⊤A_j⁻¹R_j(r − Ay_{j+1})
8: end for
9: C_smu⁻¹r ← y_0

Remark 2.1. If the local bilinear forms a_j(·,·) satisfy (2.17) then Step 5 is not necessary and the loop in Step 6 starts from J − 1 instead of J; cf. Lemma 2.3.

The matrix C_smu⁻¹ has the following closed form:

C_smu⁻¹ = (I − E_smu)A⁻¹,

where

E_smu = (I − P_0) ⋯ (I − P_{J−1})(I − P_J)(I − P_J)(I − P_{J−1}) ⋯ (I − P_0).

Therefore,

P_smu = C_smu⁻¹A.   (2.32)
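Algorithm 2.11 is a forward sweep followed by a backward sweep over the subdomains. A hedged plain-Python sketch (same index-set representation as before; matrix and overlapping subdomains are illustrative assumptions):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(rhs)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            aug[r] = [a - f * b for a, b in zip(aug[r], aug[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][j] * x[j] for j in range(r + 1, n))) / aug[r][r]
    return x

def symmetric_schwarz(A, subdomains, r):
    """z = C_smu^{-1} r via Algorithm 2.11: forward sweep j = 0..J,
    then backward sweep j = J..0 (the doubled middle solve is harmless
    with the inherited local forms, cf. Remark 2.1)."""
    y = [0.0] * len(r)
    for idx in list(subdomains) + list(reversed(subdomains)):
        res = [ri - qi for ri, qi in zip(r, matvec(A, y))]
        Aj = [[A[p][q] for q in idx] for p in idx]
        yj = solve_dense(Aj, [res[p] for p in idx])
        for p, v in zip(idx, yj):
            y[p] += v
    return y

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [0.0, 0.0, 1.0, 4.0]]
subs = [[0, 1, 2], [1, 2, 3]]
cols = [symmetric_schwarz(A, subs, [1.0 if i == j else 0.0 for i in range(4)])
        for j in range(4)]
```

Unlike the one-directional sweep of Algorithm 2.10, the resulting matrix C_smu⁻¹ (whose columns `cols` collects) is symmetric, which is what allows P_smu = C_smu⁻¹A to be used inside PCG.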
2.6 Convergence Theory for Preconditioners with PCG

This section forms the background for the convergence analysis of the Schwarz methods. In view of (2.11), we shall explain what is to be proved to justify that a preconditioner is efficient. We follow the analysis in [219]. In the sequel, by P_pre we mean either P_ad or P_smu; similarly for C_pre⁻¹.
2.6.1 Extremal eigenvalues of preconditioned matrices

Recall from Section 2.2.1 that solving the system Au = b by the PCG method is equivalent to solving the system Ãũ = b̃ with the Euclidean inner product ⟨·,·⟩_{R^N}, or the system Âû = b̂ with the C-inner product ⟨·,·⟩_C := ⟨C_pre·,·⟩_{R^N}, where

Ã = C_pre^{−1/2}AC_pre^{−1/2}   and   Â = C_pre⁻¹A.   (2.33)

According to (2.4), it is necessary to estimate the condition numbers κ₂(Ã) and κ_C(Â), where

κ₂(Ã) := λ_max(Ã)/λ_min(Ã)   and   κ_C(Â) := ‖Â‖_C ‖Â⁻¹‖_C.

The following lemma shows that κ_C(Â) can be estimated by estimating the extremal eigenvalues of Ã.

Lemma 2.8. The following statements hold.
(i) The matrices Ã and Â have the same eigenvalues.
(ii) The extremal eigenvalues of Â and the condition number κ_C(Â) satisfy

λ_max(Â) = ‖Â‖_C = max_{x∈R^N} ⟨Âx, x⟩_C/⟨x, x⟩_C,
λ_min(Â) = 1/‖Â⁻¹‖_C = min_{x∈R^N} ⟨Âx, x⟩_C/⟨x, x⟩_C,
κ_C(Â) = λ_max(Â)/λ_min(Â) = λ_max(Ã)/λ_min(Ã).

Proof. In this proof we write C for C_pre.
(i) The first statement is obvious from

Âu = λu ⇐⇒ C^{−1/2}ÃC^{1/2}u = λu ⇐⇒ ÃC^{1/2}u = λC^{1/2}u ⇐⇒ Ãũ = λũ,

where ũ = C^{1/2}u.
(ii) For the second statement, we first consider λ_max(Â). Due to part (i), the symmetry of Ã, and the symmetry and positive definiteness of C we have, with x̃ = C^{1/2}x,

λ_max(Â) = λ_max(Ã) = max_{x̃∈R^N} ⟨Ãx̃, x̃⟩_{R^N}/⟨x̃, x̃⟩_{R^N}
         = max_{x∈R^N} ⟨C^{−1/2}AC^{−1/2}C^{1/2}x, C^{1/2}x⟩_{R^N}/⟨C^{1/2}x, C^{1/2}x⟩_{R^N}
         = max_{x∈R^N} ⟨Ax, x⟩_{R^N}/⟨Cx, x⟩_{R^N} = max_{x∈R^N} ⟨Âx, x⟩_C/⟨x, x⟩_C.

On the other hand, we have, with w = C^{1/2}v,

‖Â‖²_C = max_{v∈R^N} ⟨Âv, Âv⟩_C/⟨v, v⟩_C = max_{v∈R^N} (Av)^⊤C⁻¹Av / v^⊤Cv
       = max_{v∈R^N} v^⊤AC⁻¹Av / v^⊤Cv = max_{w∈R^N} w^⊤Ã²w / w^⊤w = λ_max(Ã²) = λ_max(Ã)²,

where we used the fact that Ã and Ã² are symmetric. Therefore the results for λ_max(Â) hold. Similarly, the results for λ_min(Â) hold. The last two identities follow from the definition of κ_C(Â) and part (i).
The above lemma tells us that regardless of which approach we use, for convergence analysis it suffices to estimate the extremal eigenvalues of Ã. Lemma C.6 in Appendix C shows that instead of estimating the eigenvalues of Ã, we can estimate the extremal eigenvalues of A^{1/2}C_pre⁻¹A^{1/2}. The next lemma shows how the extremal eigenvalues of A^{1/2}C_pre⁻¹A^{1/2} can be expressed in terms of the bilinear form a(·,·) and the preconditioner operator P_pre. It is this form of expression that will be used in the analysis.

Lemma 2.9. The following identities hold:

λ_min(C_pre^{−1/2}AC_pre^{−1/2}) = inf_{u∈V} a(P_pre u, u)/a(u, u)

and

λ_max(C_pre^{−1/2}AC_pre^{−1/2}) = sup_{u∈V} a(P_pre u, u)/a(u, u).

Proof. It follows from Lemma C.6 in Appendix C and the Rayleigh quotient that

λ_min(C_pre^{−1/2}AC_pre^{−1/2}) = λ_min(A^{1/2}C_pre⁻¹A^{1/2}) = inf_{v∈R^N} v^⊤A^{1/2}C_pre⁻¹A^{1/2}v / v^⊤v.

Recalling that C_pre⁻¹A = P_pre, see (2.27), (2.31), and (2.32), and letting u = A^{−1/2}v, we deduce

λ_min(C_pre^{−1/2}AC_pre^{−1/2}) = inf_{u∈R^N} u^⊤AC_pre⁻¹Au / u^⊤Au = inf_{u∈R^N} u^⊤AP_pre u / u^⊤Au = inf_{u∈V} a(P_pre u, u)/a(u, u).

Similar arguments give the result for λ_max.

The above lemma prompts us to define the extremal eigenvalues and condition number of the Schwarz operator P_pre as in the following subsection.

2.6.2 Condition numbers of Schwarz operators

Definition 2.2. The maximum eigenvalue, minimum eigenvalue, and condition number of the Schwarz operator P_pre are defined by

λ_max(P_pre) := sup_{u∈V} a(P_pre u, u)/a(u, u),
λ_min(P_pre) := inf_{u∈V} a(P_pre u, u)/a(u, u),
κ(P_pre) := λ_max(P_pre)/λ_min(P_pre).

The following lemma is a direct consequence of Lemma 2.9.

Lemma 2.10. If

C_0^{−2} a(u, u) ≤ a(P_pre u, u) ≤ C_1² a(u, u)   ∀u ∈ V,   (2.34)

then

λ_max(P_pre) ≤ C_1²   and   λ_min(P_pre) ≥ C_0^{−2},

so that

κ₂(C_pre^{−1/2}AC_pre^{−1/2}) = κ(P_pre) ≤ C_0²C_1².

Recalling that the condition number κ₂(A) grows significantly with the number of degrees of freedom N (see Section 1.3), the goal is to design preconditioners C⁻¹ so that (2.34) holds with C_0 and C_1 independent of (or depending moderately on) the variables in concern, i.e., J (the number of subspaces) and N.
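The effect described by Lemma 2.10 can be seen on a tiny example. The sketch below (an illustrative 2×2 assumption, with the simplest possible subspace decomposition V = span{Φ_1} + span{Φ_2}, so that C_ad = diag(A)) compares κ₂(A) with the condition number of the preconditioned matrix C_ad^{−1/2}AC_ad^{−1/2}, computed from closed-form 2×2 eigenvalues.

```python
def eig_sym_2x2(M):
    """Eigenvalues (ascending) of a symmetric 2x2 matrix."""
    a, b, d = M[0][0], M[0][1], M[1][1]
    tr, det = a + d, a * d - b * b
    disc = (tr * tr / 4 - det) ** 0.5
    return tr / 2 - disc, tr / 2 + disc

# Badly scaled SPD matrix
A = [[100.0, 1.0], [1.0, 1.0]]
lam_min, lam_max = eig_sym_2x2(A)
kappa_A = lam_max / lam_min

# One-dimensional-subspace additive Schwarz: C_ad = diag(A), so
# C_ad^{-1/2} A C_ad^{-1/2} has unit diagonal and off-diagonal 1/sqrt(100*1)
s = 1.0 / (100.0 ** 0.5)
Ahat = [[1.0, s], [s, 1.0]]
mu_min, mu_max = eig_sym_2x2(Ahat)
kappa_pre = mu_max / mu_min
```

Here κ₂(A) ≈ 101 while κ(P_ad) = κ₂(C_ad^{−1/2}AC_ad^{−1/2}) ≈ 1.22: the preconditioner removes the bad scaling, exactly the kind of N-independent bound (2.34) that the Schwarz theory aims at for the boundary-element matrices of later chapters.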
2.6.3 Stability of decomposition – A lower bound for λ_min(P_ad)

Assumption 2.3 (Stability of decomposition). There exists a constant C_0 such that every u ∈ V admits a decomposition

u = ∑_{j=0}^J R_j^t u_j,   u_j ∈ V_j,   (2.35)

which satisfies

∑_{j=0}^J a_j(u_j, u_j) ≤ C_0² a(u, u).   (2.36)

Proposition 2.1. Let Assumption 2.3 be satisfied. Then the following statements hold.
(i) For any u ∈ V,

a(P_ad u, u) ≥ C_0^{−2} a(u, u).   (2.37)

Hence λ_min(P_ad) ≥ C_0^{−2}.
(ii) P_ad is positive definite.
(iii) For any u ∈ V,

a(P_ad⁻¹u, u) = min_{u_j∈V_j, u=∑_{j=0}^J R_j^t u_j} ∑_{j=0}^J a_j(u_j, u_j).   (2.38)
Proof. We first prove (2.37). The decomposition (2.35), the Cauchy–Schwarz inequality, (2.16), and (2.36) imply

a(u, u) = ∑_{j=0}^J a(u, R_j^t u_j) = ∑_{j=0}^J a_j(P̃_j u, u_j)
  ≤ (∑_{j=0}^J a_j(P̃_j u, P̃_j u))^{1/2} (∑_{j=0}^J a_j(u_j, u_j))^{1/2}
  ≤ C_0 (∑_{j=0}^J a_j(P̃_j u, P̃_j u))^{1/2} a(u, u)^{1/2}.

Hence, using again (2.16), (2.19) and (2.20) we deduce

a(u, u) ≤ C_0² ∑_{j=0}^J a_j(P̃_j u, P̃_j u) = C_0² ∑_{j=0}^J a(R_j^t P̃_j u, u) = C_0² ∑_{j=0}^J a(P_j u, u) = C_0² a(P_ad u, u).

This gives the lower bound for λ_min(P_ad) if one invokes Lemma 2.10. It also implies that P_ad is positive definite.

It remains to show (2.38). Using again (2.35), (2.16), the Cauchy–Schwarz inequality, and (2.20), we obtain

a(P_ad⁻¹u, u) = ∑_{j=0}^J a(P_ad⁻¹u, R_j^t u_j) = ∑_{j=0}^J a_j(P̃_j P_ad⁻¹u, u_j)
  ≤ (∑_{j=0}^J a_j(P̃_j P_ad⁻¹u, P̃_j P_ad⁻¹u))^{1/2} (∑_{j=0}^J a_j(u_j, u_j))^{1/2}
  = (∑_{j=0}^J a(P_ad⁻¹u, P_j P_ad⁻¹u))^{1/2} (∑_{j=0}^J a_j(u_j, u_j))^{1/2}
  = a(P_ad⁻¹u, u)^{1/2} (∑_{j=0}^J a_j(u_j, u_j))^{1/2},

which implies

a(P_ad⁻¹u, u) ≤ min_{u_j∈V_j, u=∑_{j=0}^J R_j^t u_j} ∑_{j=0}^J a_j(u_j, u_j).

Equality occurs when u_j = P̃_j P_ad⁻¹u.
2.6.4 Coercivity of decomposition – An upper bound for λ_max(P_ad)

Assumption 2.4 (Coercivity of decomposition). There exists a constant C_1 such that for every u ∈ V, if

u = ∑_{j=0}^J R_j^t u_j,   u_j ∈ V_j,

then

a(u, u) ≤ C_1² ∑_{j=0}^J a_j(u_j, u_j).

Proposition 2.2. Let Assumption 2.4 and (2.17) be satisfied. Assume further that P_ad is positive definite. Then for any u ∈ V,

a(P_ad u, u) ≤ C_1² a(u, u),   and thus   λ_max(P_ad) ≤ C_1².

Proof. Since P_ad is positive definite, for any u ∈ V we can define v = P_ad^{−1/2}u. Then the symmetry of P_ad yields

a(u, u) = a(P_ad^{1/2}v, P_ad^{1/2}v) = a(P_ad v, v).   (2.39)

On the other hand, for any w ∈ V, since P_ad w = ∑_{j=0}^J R_j^t P̃_j w, Assumption 2.4 and (2.17) imply

a(P_ad w, P_ad w) ≤ C_1² ∑_{j=0}^J a_j(P̃_j w, P̃_j w) = C_1² ∑_{j=0}^J a(P_j w, P_j w).

Hence,

a(P_ad u, u) = a(P_ad^{1/2}u, P_ad^{1/2}u) = a(P_ad v, P_ad v) ≤ C_1² ∑_{j=0}^J a(P_j v, P_j v).

Since P_j is a projection, this implies

a(P_ad u, u) ≤ C_1² ∑_{j=0}^J a(P_j v, v) = C_1² a(P_ad v, v).

This inequality and (2.39) give the required estimate.
2.6.5 Strengthened Cauchy–Schwarz inequalities and local stability

Assumption 2.5 (Strengthened Cauchy–Schwarz inequalities). There exist constants ε_ij ∈ [0, 1], i, j = 1, …, J, such that

|a(R_i^t u_i, R_j^t u_j)| ≤ ε_ij a(R_i^t u_i, R_i^t u_i)^{1/2} a(R_j^t u_j, R_j^t u_j)^{1/2}  ∀u_i ∈ V_i, u_j ∈ V_j.

We denote by ρ(E) the spectral radius of the matrix E = (ε_ij). The inequality holds trivially with ε_ij = 1; however, in that case ρ(E) = J, resulting in a poor upper bound for λ_max(P_ad); see Lemma 2.11 and Proposition 2.2.

Assumption 2.6 (Local stability). There exists ω > 0 such that

a(R_j^t u_j, R_j^t u_j) ≤ ω a_j(u_j, u_j)  ∀u_j ∈ V_j, j = 0, …, J.

This local stability obviously holds with ω = 1 if a_j(·,·) is chosen by (2.17).

Lemma 2.11. Assume that Assumptions 2.5 and 2.6 hold. Then Assumption 2.4 holds.

Proof. For any u = ∑_{j=0}^J R_j^t u_j ∈ V with u_j ∈ V_j, denoting v_0 = R_0^t u_0 and v_1 = ∑_{j=1}^J R_j^t u_j so that u = v_0 + v_1, we have (with the help of the Cauchy–Schwarz inequality)

a(u, u) = a(v_0, v_0) + a(v_1, v_1) + 2a(v_0, v_1)
  ≤ a(v_0, v_0) + a(v_1, v_1) + 2a(v_0, v_0)^{1/2} a(v_1, v_1)^{1/2}
  ≤ 2a(v_0, v_0) + 2a(v_1, v_1)
  = 2a(R_0^t u_0, R_0^t u_0) + 2 ∑_{i,j=1}^J a(R_i^t u_i, R_j^t u_j).

The strengthened Cauchy–Schwarz inequalities then imply

a(u, u) ≤ 2a(R_0^t u_0, R_0^t u_0) + 2 ∑_{i,j=1}^J ε_ij a(R_i^t u_i, R_i^t u_i)^{1/2} a(R_j^t u_j, R_j^t u_j)^{1/2}
  ≤ 2a(R_0^t u_0, R_0^t u_0) + 2ρ(E) ∑_{j=1}^J a(R_j^t u_j, R_j^t u_j)
  ≤ 2(ρ(E) + 1) ∑_{j=0}^J a(R_j^t u_j, R_j^t u_j).

The local stability assumption yields

a(u, u) ≤ 2ω(ρ(E) + 1) ∑_{j=0}^J a_j(u_j, u_j).

Hence Assumption 2.4 holds with C_1^2 = 2ω(ρ(E) + 1).
Therefore, due to Proposition 2.2, Assumptions 2.3, 2.5, and 2.6 provide the following upper bound for the maximum eigenvalue of P_ad:

λ_max(P_ad) ≤ 2ω(ρ(E) + 1).

The following two lemmas give a better upper bound.

Lemma 2.12. Under Assumption 2.6, the following statements hold for all u ∈ V and j = 0, …, J:

a(P_j u, P_j u) ≤ ω^2 a(u, u)  and  a(P_j u, u) ≤ ω a(u, u).

Proof. For j = 0, …, J and u ∈ V, we have from the definitions of P_j and P̃_j, and Assumption 2.6,

a(P_j u, P_j u) = a(R_j^t P̃_j u, R_j^t P̃_j u) ≤ ω a_j(P̃_j u, P̃_j u) = ω a(u, P_j u) ≤ ω a(u, u)^{1/2} a(P_j u, P_j u)^{1/2},

which implies the first inequality. The second inequality is obtained by using the Cauchy–Schwarz inequality:

a(P_j u, u) ≤ a(P_j u, P_j u)^{1/2} a(u, u)^{1/2} ≤ ω a(u, u).

Lemma 2.13. Under Assumptions 2.5 and 2.6, the following estimate holds for all u ∈ V:

a(P_ad u, u) ≤ ω(ρ(E) + 1) a(u, u).

Therefore, λ_max(P_ad) ≤ ω(ρ(E) + 1).

Proof. Let P be defined as in Lemma 2.15, i.e., P = ∑_{j=1}^J P_j. Then

a(P_ad u, u) ≤ a(P_0 u, u) + a(Pu, u).

Lemmas 2.15 and 2.12 give

a(P_ad u, u) ≤ ω a(u, u) + ωρ(E) a(u, u) = ω(1 + ρ(E)) a(u, u).
2.6.6 An upper bound for λ_max(P_smu)

Proposition 2.3. For any u ∈ V the following inequality holds:

a(P_smu u, u) ≤ a(u, u).

Therefore, λ_max(P_smu) ≤ 1.

Proof. It follows from (2.23) and (2.24) that

a(P_smu u, u) = a(u, u) − a(E_mu^t E_mu u, u) = a(u, u) − a(E_mu u, E_mu u) ≤ a(u, u).

Hence λ_max(P_smu) ≤ 1.
2.6.7 A lower bound for λ_min(P_smu)

Lemma 2.14. Assume that Assumptions 2.5 and 2.6 are satisfied. For any j, k = 1, …, J and any u, v ∈ V the following inequalities hold:

a(P_j u, v) ≤ a(P_j u, u)^{1/2} a(P_j v, v)^{1/2},
a(P_j u, P_k v) ≤ ω ε_jk a(P_j u, u)^{1/2} a(P_k v, v)^{1/2}.

Proof. Definitions (2.16) and (2.19), and the Cauchy–Schwarz inequality yield

a(P_j u, v) = a(u, P_j v) = a(u, R_j^t P̃_j v) = a_j(P̃_j u, P̃_j v) ≤ a_j(P̃_j u, P̃_j u)^{1/2} a_j(P̃_j v, P̃_j v)^{1/2}
  = a(u, P_j u)^{1/2} a(v, P_j v)^{1/2} = a(P_j u, u)^{1/2} a(P_j v, v)^{1/2}

and

a(P_j u, P_k v) ≤ ε_jk a(P_j u, P_j u)^{1/2} a(P_k v, P_k v)^{1/2} ≤ ω ε_jk a_j(P̃_j u, P̃_j u)^{1/2} a_k(P̃_k v, P̃_k v)^{1/2}
  = ω ε_jk a(P_j u, u)^{1/2} a(P_k v, v)^{1/2},

proving the lemma.

Lemma 2.15. Under Assumptions 2.5 and 2.6, if P = ∑_{j=1}^J P_j, then

a(Pu, u) ≤ ωρ(E) a(u, u).

Proof. It follows from Lemma 2.14 that

a(Pu, Pu) = ∑_{j,k=1}^J a(P_j u, P_k u) ≤ ω ∑_{j,k=1}^J ε_jk a(P_j u, u)^{1/2} a(P_k u, u)^{1/2}.

Since E is symmetric we have

v^t E v ≤ ρ(E) v^t v  ∀v ∈ R^J.

In particular, with v = (a(u, P_j u)^{1/2})_{j=1}^J we obtain

a(Pu, Pu) ≤ ωρ(E) ∑_{j=1}^J a(u, P_j u) = ωρ(E) a(u, Pu) ≤ ωρ(E) a(u, u)^{1/2} a(Pu, Pu)^{1/2},

so that

a(Pu, Pu)^{1/2} ≤ ωρ(E) a(u, u)^{1/2}.

Hence

a(Pu, u) ≤ a(Pu, Pu)^{1/2} a(u, u)^{1/2} ≤ ωρ(E) a(u, u),

proving the lemma.
Proposition 2.4. Let Assumptions 2.3, 2.5 and 2.6 be satisfied, and assume that ω ∈ (0, 2). Then for any u ∈ V there holds

a(P_smu u, u) ≥ (2 − ω) / ((1 + 2ω̃^2 ρ(E)^2) C_0^2) a(u, u),   (2.40)

where ω̃ = max{1, ω}. Therefore,

λ_min(P_smu) ≥ (2 − ω) / ((1 + 2ω̃^2 ρ(E)^2) C_0^2).   (2.41)

Proof. Let E_{-1} = I and

E_j = (I − P_j) ⋯ (I − P_0),  j = 0, …, J.

We recall from the proof of Proposition 2.3 that

a(P_smu u, u) = a(u, u) − a(E_J u, E_J u).   (2.42)

Inequality (2.40) is proved if, for any u ∈ V,

(2 − ω) ∑_{j=0}^J a(P_j E_{j-1} u, E_{j-1} u) ≤ a(u, u) − a(E_J u, E_J u)   (2.43)

and

a(P_ad u, u) ≤ (1 + 2ω̃^2 ρ(E)^2) ∑_{j=0}^J a(E_{j-1} u, P_j E_{j-1} u).   (2.44)

Indeed, it follows from (2.42)–(2.44) that

a(P_smu u, u) ≥ (2 − ω) a(P_ad u, u) / (1 + 2ω̃^2 ρ(E)^2).

The required result now follows from (2.37).

It remains to show (2.43) and (2.44). Noting the property of telescoping sums we have

a(u, u) − a(E_J u, E_J u) = ∑_{j=0}^J [a(E_{j-1} u, E_{j-1} u) − a(E_j u, E_j u)].

Since E_j = (I − P_j) E_{j-1}, implying E_j u = E_{j-1} u − P_j E_{j-1} u, it follows that

a(u, u) − a(E_J u, E_J u)
  = ∑_{j=0}^J [a(E_{j-1} u, E_{j-1} u) − a(E_{j-1} u − P_j E_{j-1} u, E_{j-1} u − P_j E_{j-1} u)]
  = ∑_{j=0}^J [2a(E_{j-1} u, P_j E_{j-1} u) − a(P_j E_{j-1} u, P_j E_{j-1} u)].

Assumption 2.6, (2.16), and (2.19) imply

a(u, u) − a(E_J u, E_J u) ≥ ∑_{j=0}^J [2a(E_{j-1} u, P_j E_{j-1} u) − ω a_j(P̃_j E_{j-1} u, P̃_j E_{j-1} u)]
  = (2 − ω) ∑_{j=0}^J a(E_{j-1} u, P_j E_{j-1} u).

Finally, to prove (2.44) we first note that E_k = (I − P_k) E_{k-1}, so that

E_{k-1} = E_k + P_k E_{k-1},  k = 1, 2, …, J.

Taking the sum for k from 1 to j − 1 we deduce

I = E_{j-1} + P_0 + ∑_{k=1}^{j-1} P_k E_{k-1}.

Therefore,

a(P_j u, u) = a(P_j u, E_{j-1} u) + a(P_j u, P_0 u) + ∑_{k=1}^{j-1} a(P_j u, P_k E_{k-1} u).

Applying the first inequality of Lemma 2.14 to the first two terms and the second inequality to each term in the sum on the right-hand side, we infer

a(P_j u, u) ≤ a(P_j u, u)^{1/2} [a(P_j E_{j-1} u, E_{j-1} u)^{1/2} + a(P_j P_0 u, P_0 u)^{1/2} + ω ∑_{k=1}^{j-1} ε_jk a(P_k E_{k-1} u, E_{k-1} u)^{1/2}],

so that

a(P_j u, u)^{1/2} ≤ a(P_j E_{j-1} u, E_{j-1} u)^{1/2} + a(P_j P_0 u, P_0 u)^{1/2} + ω ∑_{k=1}^{j-1} ε_jk a(P_k E_{k-1} u, E_{k-1} u)^{1/2}
  ≤ a(P_j P_0 u, P_0 u)^{1/2} + ω̃ ∑_{k=1}^{j} ε_jk a(P_k E_{k-1} u, E_{k-1} u)^{1/2}.

Hence by squaring both sides and using (a + b)^2 ≤ 2a^2 + 2b^2 we deduce

a(P_j u, u) ≤ 2a(P_j P_0 u, P_0 u) + 2ω̃^2 (∑_{k=1}^{J} ε_jk a(P_k E_{k-1} u, E_{k-1} u)^{1/2})^2.

Summing over j = 1, …, J we obtain

a(Pu, u) ≤ 2a(PP_0 u, P_0 u) + 2ω̃^2 ∑_{j=1}^J (∑_{k=1}^{J} ε_jk a(P_k E_{k-1} u, E_{k-1} u)^{1/2})^2 =: T_1 + T_2.

Using successively Lemma 2.15, Assumption 2.6, and the definition of P̃_0 we infer

T_1 ≤ 2ωρ(E) a(P_0 u, P_0 u) ≤ 2ω^2 ρ(E) a_0(P̃_0 u, P̃_0 u) = 2ω^2 ρ(E) a(P_0 u, u).

To estimate T_2 we recall that

‖Ev‖_2^2 ≤ ρ(E)^2 ‖v‖_2^2,  v ∈ R^J,

and thus with v = (a(P_k E_{k-1} u, E_{k-1} u)^{1/2})_{k=1}^J we deduce

∑_{j=1}^J (∑_{k=1}^{J} ε_jk a(P_k E_{k-1} u, E_{k-1} u)^{1/2})^2 ≤ ρ(E)^2 ∑_{j=1}^J a(P_j E_{j-1} u, E_{j-1} u).

Hence

T_2 ≤ 2ω̃^2 ρ(E)^2 ∑_{j=1}^J a(P_j E_{j-1} u, E_{j-1} u).

Therefore, assuming ρ(E) ≥ 1 we obtain

a(Pu, u) ≤ 2ω̃^2 ρ(E)^2 ∑_{j=0}^J a(P_j E_{j-1} u, E_{j-1} u).

Finally,

a(P_ad u, u) = a(P_0 u, u) + a(Pu, u) ≤ (1 + 2ω̃^2 ρ(E)^2) ∑_{j=0}^J a(P_j E_{j-1} u, E_{j-1} u),

proving (2.44), and thus completing the proof of (2.40). Inequality (2.41) is a consequence of (2.40).
2.6.8 Summary

We summarise the above analysis in this section. Table 2.1 shows bounds for the maximum and minimum eigenvalues obtained under various assumptions.

Extremal eigenvalues                                      | 2.3 Stability | 2.4 (∗) Coercivity | 2.5 Strengthened Cauchy–Schwarz | 2.6 Local stability
λ_min(P_ad) ≥ C_0^{-2}                                    | X             |                    |                                 |
λ_max(P_ad) ≤ C_1^2                                       |               | X                  |                                 |
λ_max(P_ad) ≤ ω(ρ(E) + 1)                                 |               |                    | X                               | X
λ_min(P_smu) ≥ (2 − ω)/((1 + 2ω̃^2 ρ(E)^2) C_0^2)         | X             |                    | X                               | X

Table 2.1 Bounds for extremal eigenvalues from assumptions. (∗) Further assumption: P_ad is positive definite.

The following theorems are direct consequences of the above results.

Theorem 2.1. If Assumptions 2.3 and 2.4 hold, then

κ(P_ad) ≤ C_0^2 C_1^2.

If Assumptions 2.3, 2.5, and 2.6 hold, then

κ(P_ad) ≤ C_0^2 ω(ρ(E) + 1).

In particular, if the local bilinear forms a_j(·,·), j = 0, …, J, are chosen by (2.17), then Assumption 2.6 holds with ω = 1 and therefore

κ(P_ad) ≤ C_0^2 (ρ(E) + 1).

Theorem 2.2. If Assumptions 2.3, 2.5, and 2.6 hold, then

κ(P_smu) ≤ (1 + 2ω̃^2 ρ(E)^2) C_0^2 / (2 − ω).

In particular, with the choice (2.17) for the bilinear forms a_j(·,·) we have

κ(P_smu) ≤ (1 + 2ρ(E)^2) C_0^2.
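The practical consequence of Theorems 2.1 and 2.2 is that a well-designed Schwarz preconditioner makes the conjugate gradient method converge quickly. The following minimal PCG sketch (toy data; this is an illustration in the spirit of Algorithm 2.3, not code from the book) uses a block-Jacobi additive Schwarz preconditioner with exact 2×2 local solves and verifies that the preconditioned iteration solves the system to high accuracy within n steps.

```python
# Assumed example: A is an n x n 1D Laplacian, C^{-1} applies exact solves on
# the 2x2 diagonal blocks of A (a non-overlapping additive Schwarz method).

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

n = 8
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n

def Cinv(r):                            # block-Jacobi: exact 2x2 block solves
    z = [0.0] * n
    for s in range(0, n, 2):
        a11, a12 = A[s][s], A[s][s + 1]
        a21, a22 = A[s + 1][s], A[s + 1][s + 1]
        det = a11 * a22 - a12 * a21
        z[s] = (a22 * r[s] - a12 * r[s + 1]) / det
        z[s + 1] = (-a21 * r[s] + a11 * r[s + 1]) / det
    return z

u = [0.0] * n
r = b[:]                                # r_0 = b - A u_0
z = Cinv(r)
p = z[:]
rz = dot(r, z)
for _ in range(n):
    Ap = matvec(A, p)
    alpha = rz / dot(p, Ap)
    u = [ui + alpha * pi for ui, pi in zip(u, p)]
    r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
    if max(abs(x) for x in r) < 1e-12:
        break
    z = Cinv(r)
    rz_new = dot(r, z)
    p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
    rz = rz_new

residual = [bi - yi for bi, yi in zip(b, matvec(A, u))]
print(max(abs(x) for x in residual) < 1e-8)   # → True
```

In exact arithmetic PCG terminates after at most n steps; the bounded condition number κ(P_ad) of the theorems above is what guarantees fast convergence long before that on large problems.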
2.7 Convergence Theory for Preconditioners with GMRES

2.7.1 Preconditioned GMRES with the B-inner product

In this section we derive the convergence results for Algorithm 2.5. Recalling the convergence result (2.5) proved in [73, Theorem 3.3], we define the corresponding transpose and extremal eigenvalues with respect to the B-inner product (2.13) as follows:

⟨Ã^B v, w⟩_B := ⟨v, Ãw⟩_B  ∀v, w ∈ R^N,
Ã_s^B := (Ã + Ã^B)/2,
λ_min^B(Ã_s^B) := inf_{v∈R^N} ⟨Ã_s^B v, v⟩_B / ⟨v, v⟩_B,
λ_max^B(Ã^B Ã) := sup_{v∈R^N} ⟨Ã^B Ã v, v⟩_B / ⟨v, v⟩_B.

We note that

λ_min^B(Ã_s^B) = inf_{v∈R^N} ⟨Ãv, v⟩_B / ⟨v, v⟩_B   (2.45)

and

λ_max^B(Ã^B Ã) = sup_{v∈R^N} ⟨Ãv, Ãv⟩_B / ⟨v, v⟩_B = ‖Ã‖_B^2.   (2.46)

If the preconditioner C is designed such that Ã_s^B is positive definite with respect to the B-inner product, then λ_min^B(Ã_s^B) > 0. By repeating the proof of [73, Theorem 3.3] we can prove that

‖r_k‖_B ≤ [1 − λ_min^B(Ã_s^B)^2 / λ_max^B(Ã^B Ã)]^{k/2} ‖r_0‖_B.   (2.47)

A similar convergence result can also be obtained by applying Algorithm 2.2 to the system

Â û = b̂   (2.48)

where

Â := B^{1/2} Ã B^{-1/2},  û := B^{1/2} u,  b̂ := B^{1/2} b̃.
First we show that solving (2.48) by Algorithm 2.2 (with the Euclidean inner product) yields the same iterates as solving (2.12) by Algorithm 2.4; see [156, Lemma 3.1].

Lemma 2.16. For k ∈ {1, …, N}, let û_k be the approximate solution of (2.48) generated by GMRES with the Euclidean inner product (Algorithm 2.2) and initial guess û_0, and let u_k be the approximate solution of (2.12) generated by PGMRES with the ⟨·,·⟩_B-inner product (Algorithm 2.4) and initial guess u_0 = B^{-1/2} û_0. Then

u_k = B^{-1/2} û_k.

Moreover, for both approaches the corresponding residuals r_i, r̂_i and search directions p_i, p̂_i, i = 1, …, k + 1, satisfy

r_i = B^{-1/2} r̂_i,  p_i = B^{-1/2} p̂_i,  ‖r_i‖_B = ‖r̂_i‖_{R^N},  ‖p_i‖_B = ‖p̂_i‖_{R^N}.

Proof. For simplicity of notation we will denote B^{1/2} by D. Then

Ã = D^{-1} Â D,  b̃ = D^{-1} b̂,  u_0 = D^{-1} û_0.

With the notations of Algorithms 2.2 and 2.5, we have

r_0 = b̃ − Ãu_0 = D^{-1}(b̂ − Âû_0) = D^{-1} r̂_0,
‖r_0‖_B^2 = ⟨Dr_0, Dr_0⟩_{R^N} = ‖r̂_0‖_{R^N}^2,
v_1 = r_0 / ‖r_0‖_B = D^{-1} v̂_1.

For j = 1, …, k and i = 1, …, j we also have

h_{i,j} = ⟨Ãv_j, v_i⟩_B = ⟨DÃv_j, Dv_i⟩_{R^N} = ⟨Âv̂_j, v̂_i⟩_{R^N} = ĥ_{i,j},
p_j = Ãv_j − ∑_{i=1}^j h_{i,j} v_i = D^{-1}(Âv̂_j − ∑_{i=1}^j ĥ_{i,j} v̂_i) = D^{-1} p̂_j,
h_{j+1,j} = ‖p_j‖_B = ⟨Dp_j, Dp_j⟩_{R^N}^{1/2} = ‖p̂_j‖_{R^N} = ĥ_{j+1,j},
v_{j+1} = p_j / h_{j+1,j} = D^{-1} p̂_j / ĥ_{j+1,j} = D^{-1} v̂_{j+1}.

Obviously,

⟨p_i, p_j⟩_B = ⟨Dp_i, Dp_j⟩_{R^N} = ⟨p̂_i, p̂_j⟩_{R^N} = δ_{i,j}  (Kronecker delta).

Furthermore, from

K̃_k := {r_0, Ãr_0, …, Ã^{k-1} r_0} = D^{-1}{r̂_0, Âr̂_0, …, Â^{k-1} r̂_0} =: D^{-1} K̂_k

and, for z ∈ K̃_k and ẑ = Dz ∈ K̂_k,

‖b̃ − Ã(u_0 + z)‖_B = ‖b̂ − Â(û_0 + ẑ)‖_{R^N},

it follows that u_k = D^{-1} û_k, completing the proof.
If Â_s is positive definite, a convergence result similar to (2.47) can be derived by applying (2.5) to r̂_k:

‖r̂_k‖_{R^N} ≤ [1 − λ_min(Â_s)^2 / λ_max(Â^t Â)]^{k/2} ‖r̂_0‖_{R^N}.   (2.49)
In fact, one can show the following lemma.
Lemma 2.17. The following statements hold:

λ_min^B(Ã_s^B) = λ_min(Â_s)  and  λ_max^B(Ã^B Ã) = λ_max(Â^t Â).

Proof. First we note from (2.45) and (2.13) that, with w = B^{1/2} v,

λ_min^B(Ã_s^B) = inf_{v∈R^N} ⟨Ãv, v⟩_B / ⟨v, v⟩_B = inf_{v∈R^N} ⟨B^{1/2}Ãv, B^{1/2}v⟩_{R^N} / ⟨B^{1/2}v, B^{1/2}v⟩_{R^N}
  = inf_{w∈R^N} ⟨Âw, w⟩_{R^N} / ⟨w, w⟩_{R^N} = inf_{w∈R^N} ⟨Â_s w, w⟩_{R^N} / ⟨w, w⟩_{R^N} = λ_min(Â_s).

Similarly, for the maximum eigenvalue we have

λ_max^B(Ã^B Ã) = sup_{v∈R^N} ⟨Ãv, Ãv⟩_B / ⟨v, v⟩_B = sup_{v∈R^N} ⟨B^{1/2}Ãv, B^{1/2}Ãv⟩_{R^N} / ⟨B^{1/2}v, B^{1/2}v⟩_{R^N}
  = sup_{w∈R^N} ⟨Âw, Âw⟩_{R^N} / ⟨w, w⟩_{R^N} = sup_{w∈R^N} ⟨Â^t Âw, w⟩_{R^N} / ⟨w, w⟩_{R^N} = λ_max(Â^t Â).
As with the symmetric case using PCG, we now want to relate these eigenvalues with the bilinear forms defining the matrix system. Let Q be the Schwarz operator designed for an indefinite problem, whose matrix representation is Q = C^{-1}A; see (2.27). (A precise definition of Q will be given in Chapter 8.) Note that in general Q is non-symmetric. Similarly to Lemma 2.9 for the case of symmetric A, we now have the following result for non-symmetric A.

Lemma 2.18. The following statements hold:

sup_{v∈V} b(Qv, Qv)/b(v, v) = λ_max^B(Ã^B Ã) = λ_max(Â^t Â),
inf_{v∈V} b(Qv, v)/b(v, v) = λ_min^B(Ã_s^B) = λ_min(Â_s).

Proof. The required identities are a result of (2.46) and Lemma 2.17, noting that Q has the matrix representation Ã and that

b(Qv, v)/b(v, v) = ⟨Ãv, v⟩_B / ⟨v, v⟩_B  and  b(Qv, Qv)/b(v, v) = ⟨Ãv, Ãv⟩_B / ⟨v, v⟩_B.

As a consequence, we have the following lemma; cf. [185].

Lemma 2.19. Assume that the Schwarz operator Q is designed such that

0 < c_Q ≤ inf_{v∈V} b(Qv, v)/b(v, v)  and  sup_{v∈V} b(Qv, Qv)/b(v, v) ≤ C_Q^2.   (2.50)

Let r_k and r̂_k be the residuals obtained by solving (2.12) and (2.48) by Algorithms 2.5 and 2.2, respectively. Then

‖r̂_k‖_{R^N} = ‖r_k‖_B ≤ (1 − c_Q^2/C_Q^2)^{k/2} ‖r_0‖_B = (1 − c_Q^2/C_Q^2)^{k/2} ‖r̂_0‖_{R^N}.   (2.51)

Furthermore, if r_k is the residual obtained by solving (2.12) with Algorithm 2.4 then

‖r_k‖_{R^N} ≤ κ_2(B) (1 − c_Q^2/C_Q^2)^{k/2} ‖r_0‖_{R^N},   (2.52)

where κ_2(B) = λ_max(B)/λ_min(B).

Proof. Estimate (2.51) is a result of Lemma 2.16, Lemma 2.18, (2.49) and (2.50). Estimate (2.52) is proved in [185].
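The defining property of the B-adjoint above, ⟨Ã^B v, w⟩_B = ⟨v, Ãw⟩_B, is equivalent to the explicit formula Ã^B = B^{-1} Ã^t B for symmetric positive definite B. A small numerical sketch (the matrices here are illustrative, not from the book; a diagonal B is used so that B^{-1} is trivial) verifies this:

```python
# Assumed example: non-symmetric Ã and diagonal SPD B; check the B-adjoint
# identity <Ã^B v, w>_B = <v, Ã w>_B with Ã^B = B^{-1} Ã^t B.

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

B = [3.0, 1.0, 2.0]                                          # diagonal of B
At = [[1.0, 2.0, 0.0], [0.5, -1.0, 1.0], [0.0, 2.0, 3.0]]    # a non-symmetric Ã

def inner_B(v, w):                                           # <v, w>_B = <Bv, w>
    return sum(bi * vi * wi for bi, vi, wi in zip(B, v, w))

def adjoint_B(M):                                            # B^{-1} M^t B for diagonal B
    n = len(M)
    return [[M[j][i] * B[j] / B[i] for j in range(n)] for i in range(n)]

AtB = adjoint_B(At)
v, w = [1.0, -2.0, 0.5], [0.4, 1.5, -1.0]
lhs = inner_B(matvec(AtB, v), w)
rhs = inner_B(v, matvec(At, w))
print(abs(lhs - rhs) < 1e-12)   # → True
```

The same formula with B replaced by C is used in the next subsection.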
2.7.2 Preconditioned GMRES with the C-inner product

When the stiffness matrix A is non-symmetric but the preconditioner C is symmetric (and positive definite), one can use Algorithm 2.6 to solve Au = b; see [78]. Defining λ_min^C(Ã_s^C) and λ_max^C(Ã^C Ã) in the same manner as in (2.45) but with the B-inner product replaced by the C-inner product, and assuming that C is designed such that λ_min^C(Ã_s^C) > 0, we obtain similarly to (2.47)

‖r_k‖_C ≤ [1 − λ_min^C(Ã_s^C)^2 / λ_max^C(Ã^C Ã)]^{k/2} ‖r_0‖_C.   (2.53)

Similarly to Lemma 2.18 we have the following lemma.

Lemma 2.20. Assume that the preconditioner C is defined such that

0 < α ≤ inf_{v∈R^N, v≠0} ⟨Av, v⟩_{R^N} / ⟨Cv, v⟩_{R^N}  and  sup_{v∈R^N, v≠0} ⟨C^{-1}Av, Av⟩_{R^N} / ⟨Cv, v⟩_{R^N} ≤ β^2.   (2.54)

Then

‖r_k‖_C ≤ (1 − α^2/β^2)^{k/2} ‖r_0‖_C.

Proof. The result is a direct consequence of (2.53) and the following identities:

λ_min^C(Ã_s^C) = inf_{v≠0} ⟨Ãv, v⟩_C / ⟨v, v⟩_C = inf_{v≠0} ⟨Av, v⟩_{R^N} / ⟨Cv, v⟩_{R^N},
λ_max^C(Ã^C Ã) = sup_{v≠0} ⟨Ãv, Ãv⟩_C / ⟨v, v⟩_C = sup_{v≠0} ⟨C^{-1}Av, C^{-1}Av⟩_C / ⟨v, v⟩_C = sup_{v≠0} ⟨C^{-1}Av, Av⟩_{R^N} / ⟨Cv, v⟩_{R^N}.
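The two quantities in (2.54) can be estimated numerically. The following sketch (toy data assumed, not from the book; sampling over random vectors only estimates the inf and sup, it does not prove the bounds) evaluates the Rayleigh quotients for a small positive definite but non-symmetric A and a diagonal SPD preconditioner C, and forms the GMRES contraction factor (1 − α^2/β^2)^{1/2} of Lemma 2.20:

```python
# Assumed example: A = 4I + skew part, C = 4I; the sampled alpha and beta^2
# illustrate (2.54) and give a contraction factor strictly below 1.
import random

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

A = [[4.0, 1.0, 0.0], [-1.0, 4.0, 1.0], [0.0, -1.0, 4.0]]  # positive definite, non-symmetric
C = [4.0, 4.0, 4.0]                                        # diagonal preconditioner

random.seed(1)
alpha, beta2 = float("inf"), 0.0
for _ in range(2000):
    v = [random.uniform(-1, 1) for _ in range(3)]
    Av = matvec(A, v)
    cvv = sum(c * x * x for c, x in zip(C, v))             # <Cv, v>
    alpha = min(alpha, sum(a * b for a, b in zip(Av, v)) / cvv)
    beta2 = max(beta2, sum(a * a / c for a, c in zip(Av, C)) / cvv)  # <C^{-1}Av, Av>/<Cv, v>

factor = (1.0 - alpha ** 2 / beta2) ** 0.5
print(0.0 < alpha and alpha ** 2 <= beta2)   # → True
print(0.0 < factor < 1.0)                    # → True
```

For this example the skew-symmetric part of A drops out of ⟨Av, v⟩, so the sampled α is essentially exact, while β^2 reflects the size of the skew coupling.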
2.7.3 Preconditioned GMRES with block matrices

In this subsection, we are concerned with convergence results when GMRES is used to solve the matrix equation

Au = b   (2.55)

where

A := [A_11  A_12; A_21  A_22] ∈ R^{N×N},  u := [u_1; u_2] ∈ R^N  and  b := [b_1; b_2] ∈ R^N.

Here A_ii ∈ R^{n_i×n_i}, A_ij ∈ R^{n_i×n_j}, and u_i, b_i ∈ R^{n_i}, i, j = 1, 2, with n_1 + n_2 = N. We assume that A_ii, i = 1, 2, is symmetric and positive definite, and that A is positive definite but can be non-symmetric. Letting

A_diag := [A_11  0; 0  A_22]  and  A_as := [0  A_12; A_21  0],

then A = A_diag + A_as with A_diag being symmetric and positive definite. This matrix A arises, for example, from the use of FEM–BEM coupling in solving interface problems; see Chapter 14. Readers who are not familiar with or not interested in these problems can skip this section on a first reading.

Assume that C_ii^{-1} ∈ R^{n_i×n_i} is a preconditioner of A_ii, i = 1, 2, which satisfies

d_i ⟨C_ii v_i, v_i⟩_{R^{n_i}} ≤ ⟨A_ii v_i, v_i⟩_{R^{n_i}} ≤ D_i ⟨C_ii v_i, v_i⟩_{R^{n_i}},  d_i > 0, D_i > 0,   (2.56)

for all v_i ∈ R^{n_i}, i = 1, 2. Let

C := [C_11  0; 0  C_22].

Then

d ⟨Cv, v⟩_{R^N} ≤ ⟨A_diag v, v⟩_{R^N} ≤ D ⟨Cv, v⟩_{R^N}  ∀v ∈ R^N   (2.57)

where d := min{d_1, d_2}, D := max{D_1, D_2}. Note that (2.57) is equivalent to

d ⟨A_diag^{-1} w, w⟩_{R^N} ≤ ⟨C^{-1} w, w⟩_{R^N} ≤ D ⟨A_diag^{-1} w, w⟩_{R^N}  ∀w ∈ R^N.   (2.58)

This equivalence follows from the choice w = A_diag^{1/2} v and elementary calculations with the symmetric matrix A_diag^{-1/2} C A_diag^{-1/2}. We will precondition equation (2.55) with C. The following convergence result holds.
Lemma 2.21. Assume that (2.57) holds. Let

c_A := inf_{v∈R^N, v≠0} ⟨Av, v⟩_{R^N} / ⟨A_diag v, v⟩_{R^N},
C_β1 := sup_{v∈R^{n_1}, v≠0} ⟨A_21^t A_22^{-1} A_21 v, v⟩_{R^{n_1}} / ⟨A_11 v, v⟩_{R^{n_1}},
C_β2 := sup_{v∈R^{n_2}, v≠0} ⟨A_12^t A_11^{-1} A_12 v, v⟩_{R^{n_2}} / ⟨A_22 v, v⟩_{R^{n_2}},
C_β3 := sup_{v=(v_1,v_2)∈R^{n_1}×R^{n_2}, v≠0} 2|⟨A_12 v_2, v_1⟩_{R^{n_1}} + ⟨A_21 v_1, v_2⟩_{R^{n_2}}| / ⟨A_diag v, v⟩_{R^N},

and let C := max{C_β1, C_β2},

α := d  if A_12^t = −A_21,  α := c_A d  otherwise,

β^2 := D^2 (C + 1)  if A_12^t = −A_21,  β^2 := D^2 (C + C_β3 + 1)  otherwise.

Then

‖r_k‖_C ≤ (1 − α^2/β^2)^{k/2} ‖r_0‖_C.   (2.59)
Proof. We note that if A_12^t = −A_21, then C_β3 = 0. Due to Lemma 2.20, it suffices to prove that α and β satisfy (2.54). We first consider α.

• If A_12^t = −A_21, then since A = A_diag + A_as we have, for v = (v_1, v_2) ∈ R^{n_1} × R^{n_2},

⟨Av, v⟩_{R^N} = ⟨A_diag v, v⟩_{R^N} + ⟨A_as v, v⟩_{R^N}
  = ⟨A_diag v, v⟩_{R^N} + ⟨A_12 v_2, v_1⟩_{R^{n_1}} + ⟨A_21 v_1, v_2⟩_{R^{n_2}}
  = ⟨A_diag v, v⟩_{R^N} + ⟨v_2, A_12^t v_1⟩_{R^{n_2}} + ⟨A_21 v_1, v_2⟩_{R^{n_2}}
  = ⟨A_diag v, v⟩_{R^N}.

Hence, assumption (2.57) implies

α = d ≤ inf_{v≠0} ⟨A_diag v, v⟩_{R^N} / ⟨Cv, v⟩_{R^N} = inf_{v≠0} ⟨Av, v⟩_{R^N} / ⟨Cv, v⟩_{R^N}.

• If A_12^t ≠ −A_21, then it follows from the definition of c_A and from (2.57) that

⟨Av, v⟩_{R^N} ≥ c_A ⟨A_diag v, v⟩_{R^N} ≥ c_A d ⟨Cv, v⟩_{R^N},

which implies

α = c_A d ≤ inf_{v≠0} ⟨Av, v⟩_{R^N} / ⟨Cv, v⟩_{R^N}.

Next we prove the estimate for β^2. For any v = (v_1, v_2) ∈ R^{n_1} × R^{n_2} we have, by using (2.58), which is equivalent to assumption (2.57),

⟨C^{-1}Av, Av⟩_{R^N} ≤ D ⟨A_diag^{-1} Av, Av⟩_{R^N} = D ⟨A^t A_diag^{-1} A v, v⟩_{R^N}.

Noting that

A^t A_diag^{-1} A = [A_11  A_21^t; A_12^t  A_22] [A_11^{-1}  0; 0  A_22^{-1}] [A_11  A_12; A_21  A_22]
  = A_diag + [A_21^t A_22^{-1} A_21  0; 0  A_12^t A_11^{-1} A_12] + [0  A_12 + A_21^t; A_12^t + A_21  0],

we deduce

⟨A^t A_diag^{-1} A v, v⟩_{R^N}
  = ⟨A_diag v, v⟩_{R^N} + ⟨A_21^t A_22^{-1} A_21 v_1, v_1⟩_{R^{n_1}} + ⟨A_12^t A_11^{-1} A_12 v_2, v_2⟩_{R^{n_2}}
    + ⟨(A_12 + A_21^t) v_2, v_1⟩_{R^{n_1}} + ⟨(A_12^t + A_21) v_1, v_2⟩_{R^{n_2}}
  = ⟨A_diag v, v⟩_{R^N} + ⟨A_21^t A_22^{-1} A_21 v_1, v_1⟩_{R^{n_1}} + ⟨A_12^t A_11^{-1} A_12 v_2, v_2⟩_{R^{n_2}}
    + 2⟨A_12 v_2, v_1⟩_{R^{n_1}} + 2⟨A_21 v_1, v_2⟩_{R^{n_2}}
  ≤ ⟨A_diag v, v⟩_{R^N} + C_β1 ⟨A_11 v_1, v_1⟩_{R^{n_1}} + C_β2 ⟨A_22 v_2, v_2⟩_{R^{n_2}} + C_β3 ⟨A_diag v, v⟩_{R^N}
  ≤ (C + C_β3 + 1) ⟨A_diag v, v⟩_{R^N}.

Altogether we have, using again (2.57),

⟨C^{-1}Av, Av⟩_{R^N} ≤ D (C + C_β3 + 1) ⟨A_diag v, v⟩_{R^N} ≤ D^2 (C + C_β3 + 1) ⟨Cv, v⟩_{R^N} = β^2 ⟨Cv, v⟩_{R^N},

proving the lemma.
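In the simplest case n_1 = n_2 = 1 all blocks are scalars and the constants of Lemma 2.21 have closed forms. The following sketch (example numbers assumed, not from the book) takes A_12^t = −A_21 and exact diagonal preconditioning C_ii = A_ii (so d = D = 1), evaluates C_β1, C_β2, α, β^2, and checks the two bounds of (2.54) on random vectors:

```python
# Assumed scalar-block example of Lemma 2.21: skew off-diagonal coupling, so
# alpha = d = 1 and beta^2 = D^2 (C + 1) with C = max{C_beta1, C_beta2}.
import random

A11, A12, A21, A22 = 2.0, 1.0, -1.0, 3.0          # A12^t = -A21
C11, C22 = A11, A22                               # d = D = 1
Cb1 = (A21 * (1.0 / A22) * A21) / A11             # A21^t A22^{-1} A21 / A11
Cb2 = (A12 * (1.0 / A11) * A12) / A22             # A12^t A11^{-1} A12 / A22
alpha = 1.0                                       # alpha = d
beta2 = 1.0 * (max(Cb1, Cb2) + 1.0)               # beta^2 = D^2 (C + 1) = 7/6

random.seed(0)
for _ in range(1000):
    v1, v2 = random.uniform(-1, 1), random.uniform(-1, 1)
    Av1, Av2 = A11 * v1 + A12 * v2, A21 * v1 + A22 * v2
    cvv = C11 * v1 * v1 + C22 * v2 * v2           # <Cv, v>
    assert Av1 * v1 + Av2 * v2 >= alpha * cvv - 1e-12          # lower bound of (2.54)
    assert Av1 * Av1 / C11 + Av2 * Av2 / C22 <= beta2 * cvv + 1e-12  # upper bound

print("alpha =", alpha, " beta^2 =", beta2)
```

Here the skew coupling cancels exactly in ⟨Av, v⟩, which is why the lower bound holds with α = d, and β^2 = 7/6 gives the GMRES contraction factor (1 − α^2/β^2)^{1/2} ≈ 0.38 in (2.59).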
2.8 Other Krylov Subspace Methods

In this section we introduce some other Krylov subspace methods for symmetric indefinite matrices. These methods, which will be useful for solving FEM–BEM coupling problems in Chapter 14, are Generalised Conjugate Residual methods, which are of Krylov subspace type. More details can be found in [16, 87, 183]. The reader who is not interested in FEM–BEM coupling problems can skip this section.
2.8.1 General Krylov subspace methods

First we follow [183] to explain briefly how the Krylov subspace methods are developed. They belong to a larger class of methods, the projection methods. Recall equation (2.3)

Au = b

where A ∈ R^{N×N} is a nonsingular matrix. Suppose that u_0 is given and a sequence of iteration vectors u_j, j = 1, …, m − 1, has been computed. Let r_j := b − Au_j, j = 0, …, m − 1, denote the residual vectors. In a projection method, the next iterate u_m is defined by

u_m ∈ u_0 + K_m  such that  ⟨r_m, v⟩_{R^N} = 0  ∀v ∈ L_m   (2.60)

where K_m and L_m are two m-dimensional subspaces of R^N. This condition is called the Petrov–Galerkin condition. Assuming exact arithmetic, due to this condition, a projection method gives the exact solution u after at most N iterations.

For any s ∈ R^N, B ∈ R^{N×N}, and m = 1, 2, …, the Krylov subspace of dimension m generated by s and B is defined by

K_m(s, B) := span{s, Bs, …, B^{m-1} s}.   (2.61)

If the subspace K_m in (2.60) is chosen to be K_m(r_0, A), then the projection method is called a Krylov subspace method. Different choices of the space L_m give rise to different Krylov subspace methods. Three commonly seen choices are L_m = K_m, L_m = AK_m, and L_m = K_m(r_0, A^t).

It can be shown, see [183, Proposition 5.2], that if A is symmetric and positive definite, and if L_m = K_m, then u_m satisfies (2.60) if and only if it minimises the error e := u − ũ over the space u_0 + K_m in the A-norm, i.e.,

u_m = arg min_{ũ ∈ u_0 + K_m} ‖u − ũ‖_A.   (2.62)

Here we recall that ‖x‖_A = ⟨x, x⟩_A^{1/2} = ⟨Ax, x⟩_{R^N}^{1/2}. On the other hand, it can be shown, see [183, Proposition 5.3], that if A is an arbitrary square matrix and if L_m = AK_m, then u_m satisfies (2.60) if and only if it minimises the residual ‖b − Aũ‖_{R^N}, i.e.,

u_m = arg min_{ũ ∈ u_0 + K_m} ‖b − Aũ‖_{R^N}.   (2.63)

The CG method (Algorithm 2.1) for a symmetric and positive definite matrix A is a Krylov subspace method with K_m = L_m = K_m(r_0, A). The new iterate u_m satisfies (2.62). The GMRES method (Algorithm 2.2) for an arbitrary square matrix A is a Krylov method with K_m = K_m(v_1, A) and L_m = AK_m, where v_1 := r_0/‖r_0‖_{R^N}. The new iterate u_m satisfies (2.63). In the next subsection we generalise the CG and GMRES methods.
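The equivalence of (2.60) and (2.63) is easy to see in one step. The following sketch (toy matrix assumed, not from the book) performs one minimal-residual projection step with K_1 = span{r_0} and L_1 = AK_1: the least-squares step length has a closed form, and the new residual is orthogonal to A r_0, which is exactly the Petrov–Galerkin condition.

```python
# Assumed example: one projection step u_1 = u_0 + t r_0 minimising ||b - A u_1||.

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[3.0, 1.0, 0.0], [-1.0, 2.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
u0 = [0.0, 0.0, 0.0]

r0 = [bi - yi for bi, yi in zip(b, matvec(A, u0))]
Ar0 = matvec(A, r0)
t = dot(r0, Ar0) / dot(Ar0, Ar0)              # least-squares step along r_0
u1 = [ui + t * ri for ui, ri in zip(u0, r0)]
r1 = [bi - yi for bi, yi in zip(b, matvec(A, u1))]

print(abs(dot(r1, Ar0)) < 1e-12)              # Petrov-Galerkin: r_1 orthogonal to A K_1
print(dot(r1, r1) <= dot(r0, r0))             # the residual does not increase
```

GMRES repeats this construction on the growing spaces K_m(v_1, A).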
2.8.2 The generalised three-term CG methods

In this subsection we consider the equation

Gu = g   (2.64)

where G is a non-singular N × N matrix. We generalise the methods discussed in Subsection 2.8.1.

Let H be a symmetric positive definite matrix that defines a norm, called the H-norm, ‖x‖_H := ⟨x, x⟩_H^{1/2} = ⟨Hx, x⟩_{R^N}^{1/2}. Suppose that u_0 is given and that a sequence of iteration vectors u_j, j = 1, …, m + 1, has been computed. Let r_j := g − Gu_j and e_j := u − u_j be the residual vectors and error vectors, respectively. The iterates u_j are assumed to be computed by minimising the H-norm of the error e_j over the Krylov subspace K_j(r_0, G), namely

u_{j+1} := arg min_{ũ ∈ u_0 + K_{j+1}(r_0, G)} ‖u − ũ‖_H,  j = 0, …, m,   (2.65)

cf. (2.62). This is equivalent to

⟨e_{j+1}, v⟩_H = 0  ∀v ∈ K_{j+1}(r_0, G),  j = 0, …, m.   (2.66)

Noting that u_m ∈ u_0 + K_m ⊂ u_0 + K_{m+1}, and letting d_m := u_{m+1} − u_m ∈ K_{m+1}(r_0, G), we have e_{m+1} = e_m − d_m, so that

⟨d_m, v⟩_H = ⟨e_m, v⟩_H  ∀v ∈ K_{m+1}(r_0, G).

Since

⟨e_m, v⟩_H = 0  ∀v ∈ K_m(r_0, G)

it follows that

⟨d_m, v⟩_H = 0  ∀v ∈ K_m(r_0, G).

This condition means that an H-orthogonal basis of K_{m+1}(r_0, G) will deliver the set of conjugate search directions {p_j}_{j=0}^m. This set can be calculated by means of a standard Gram–Schmidt orthogonalisation process:

p_0 := r_0,  p_{j+1} := Gp_j − ∑_{i=0}^j σ_{ji} p_i,  σ_{ji} := ⟨Gp_j, p_i⟩_H / ⟨p_i, p_i⟩_H,  j = 0, …, m.   (2.67)

Then u_{m+1} = u_m + α_m p_m where α_m must be chosen such that (2.66) holds. This yields

α_m = ⟨e_m, p_m⟩_H / ⟨p_m, p_m⟩_H.

Under conditions to be addressed now, one can avoid working out the expensive recurrence formula (2.67) and turn to a three-term recursion instead:

p_{k+1} = Gp_k − σ_{k,k-1} p_{k-1} − σ_{kk} p_k.   (2.68)

We follow [11] to call this algorithm three-term CG(H, G) for the solution of (2.64). We note that Algorithm 2.1 is three-term CG(A, A) while Algorithm 2.3 is three-term CG(A, C^{-1}A).

Let G^H be the H-adjoint of G, i.e., the matrix which satisfies

⟨G^H x, y⟩_H = ⟨x, Gy⟩_H  ∀x, y ∈ R^N.

Then

G^H = (HGH^{-1})^t = H^{-1} G^t H.   (2.69)

If G^H = G, we say that G is H-symmetric. As a special case of [11, Theorem 2.1], we have

Theorem 2.3. If G is H-symmetric, then three-term CG(H, G) gives the exact solution of (2.64) within a finite number of steps.

Algorithm 2.12, called Orthodir(H, G), implements CG(H, G) for H-symmetric matrices.

Algorithm 2.12: Orthodir(H, G)
1  p_0 ← r_0
2  α_i ← ⟨e_i, p_i⟩_H / ⟨p_i, p_i⟩_H
3  u_{i+1} ← u_i + α_i p_i
4  r_{i+1} ← r_i − α_i Gp_i
5  γ_i ← ⟨Gp_i, p_i⟩_H / ⟨p_i, p_i⟩_H
6  σ_i ← ⟨Gp_i, p_{i-1}⟩_H / ⟨p_{i-1}, p_{i-1}⟩_H
7  p_{i+1} ← Gp_i − γ_i p_i − σ_i p_{i-1}

The computation of α_i in Step 2 involves the unknown quantity e_i. Thus, H must be chosen so that α_i can be expressed in terms of G and g; see [11].
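One computable case is H = G with G symmetric positive definite: then ⟨e_i, p_i⟩_H = ⟨Ge_i, p_i⟩ = ⟨r_i, p_i⟩, so Step 2 needs only G and g. The following runnable sketch of Orthodir(G, G) (toy data assumed, not the book's implementation) demonstrates the finite-termination property of Theorem 2.3:

```python
# Assumed example: Orthodir(H, G) with H = G symmetric positive definite.

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

def dot_H(x, y, H):                          # <x, y>_H = <Hx, y>
    return sum(a * b for a, b in zip(matvec(H, x), y))

G = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]   # SPD
g = [1.0, 0.0, 2.0]
H = G

n = len(g)
u = [0.0] * n
r = g[:]                                     # r_0 = g - G u_0
p_prev, p = None, r[:]                       # p_0 = r_0
for i in range(n):
    alpha = sum(a * b for a, b in zip(r, p)) / dot_H(p, p, H)   # <e,p>_H = <r,p>
    u = [ui + alpha * pi for ui, pi in zip(u, p)]
    Gp = matvec(G, p)
    r = [ri - alpha * qi for ri, qi in zip(r, Gp)]
    gamma = dot_H(Gp, p, H) / dot_H(p, p, H)
    p_new = [a - gamma * c for a, c in zip(Gp, p)]
    if p_prev is not None:
        sigma = dot_H(Gp, p_prev, H) / dot_H(p_prev, p_prev, H)
        p_new = [a - sigma * c for a, c in zip(p_new, p_prev)]
    p_prev, p = p, p_new

residual = [gi - yi for gi, yi in zip(g, matvec(G, u))]
print(max(abs(x) for x in residual) < 1e-10)   # exact solve after n steps
```

The three-term recursion suffices here precisely because G is H-symmetric.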
2.8.3 The hybrid modified conjugate residual (HMCR) method

We now apply Orthodir(H, G) (Algorithm 2.12) to the preconditioned system

C^{-1}Au = C^{-1}b   (2.70)

where A is symmetric and indefinite. We restrict ourselves to the case when C is symmetric and positive definite even though generalisations to more general C are available; see [11]. Let

G := C^{-1}A  and  H := AC^{-1}A.   (2.71)

Then G is H-symmetric. Indeed, it follows from (2.69), (2.71), and the symmetry of A and C that

G^H = (AC^{-1}A)^{-1} (C^{-1}A)^t (AC^{-1}A) = A^{-1}CA^{-1} · AC^{-1} · AC^{-1}A = C^{-1}A = G.

Hence Orthodir(H, G) is applicable and gives the exact solution of (2.70) after a finite number of steps; see Theorem 2.3. The choice (2.71) also guarantees that α_i in Step 2 of Algorithm 2.12 is computable:

α_i = ⟨e_i, p_i⟩_H / ⟨p_i, p_i⟩_H = ⟨AC^{-1}A(u − u_i), p_i⟩ / ⟨AC^{-1}Ap_i, p_i⟩ = ⟨r_i, C^{-1}Ap_i⟩ / ⟨Ap_i, C^{-1}Ap_i⟩.

Therefore, Orthodir(AC^{-1}A, C^{-1}A) yields Algorithm 2.13 for solving (2.70).

Algorithm 2.13: MCR-ODIR(C^{-1}A)
1   Given u_0, r_0 ← b − Au_0 and p_0 ← C^{-1}r_0
2   for i = 0, 1, 2, … do
3     α_i ← ⟨C^{-1}Ap_i, r_i⟩ / ⟨C^{-1}Ap_i, Ap_i⟩
4     u_{i+1} ← u_i + α_i p_i
5     r_{i+1} ← r_i − α_i Ap_i
6     if stopping criterion then
7       stop
8     else
9       γ_i ← ⟨AC^{-1}Ap_i, C^{-1}Ap_i⟩ / ⟨C^{-1}Ap_i, Ap_i⟩
10      σ_i ← ⟨C^{-1}Ap_i, Ap_{i-1}⟩ / ⟨C^{-1}Ap_{i-1}, Ap_{i-1}⟩
11      p_{i+1} ← C^{-1}Ap_i − γ_i p_i − σ_i p_{i-1}
12    end
13  Next i
14  end
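The method is straightforward to prototype. The following sketch (toy data assumed, not the book's implementation) runs an MCR-ODIR-type iteration for a symmetric indefinite A with the trivial preconditioner C = I; for transparency the Gram–Schmidt coefficients γ_i and σ_i are evaluated directly from the H-inner product with H = AC^{-1}A = A^2, which is an algebraically equivalent way of orthogonalising the directions. Since H is SPD, the exact solution is reached after at most n steps (Theorem 2.3).

```python
# Assumed example: symmetric indefinite A, C = I, H = A^2, G = A.

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[2.0, 1.0, 0.0], [1.0, -1.0, 1.0], [0.0, 1.0, 3.0]]  # symmetric, indefinite
b = [1.0, 1.0, 1.0]
n = len(b)

u = [0.0] * n
r = b[:]
p_prev, Ap_prev, p = None, None, r[:]     # p_0 = C^{-1} r_0 = r_0
for i in range(n):
    Ap = matvec(A, p)                     # C = I, so C^{-1}Ap = Ap
    alpha = dot(Ap, r) / dot(Ap, Ap)      # <e_i, p_i>_H / <p_i, p_i>_H
    u = [ui + alpha * pi for ui, pi in zip(u, p)]
    r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
    AAp = matvec(A, Ap)
    gamma = dot(AAp, Ap) / dot(Ap, Ap)    # <Gp_i, p_i>_H / <p_i, p_i>_H
    p_new = [x - gamma * y for x, y in zip(Ap, p)]
    if p_prev is not None:
        sigma = dot(AAp, Ap_prev) / dot(Ap_prev, Ap_prev)   # <Gp_i, p_{i-1}>_H / ...
        p_new = [x - sigma * y for x, y in zip(p_new, p_prev)]
    p_prev, Ap_prev, p = p, Ap, p_new

residual = [bi - yi for bi, yi in zip(b, matvec(A, u))]
print(max(abs(x) for x in residual) < 1e-10)   # exact solve in at most n steps
```

Note that indefiniteness of A is no obstacle here: the minimisation happens in the H-norm, and H = A^2 is positive definite.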
One can define a sequence of Krylov subspaces which is different from the subspace K_j(r_0, G); see Subsection 2.8.2. Let

s_j = C^{-1} r_j,  j = 0, 1, …,

be the preconditioned residuals. We replace K_{j+1}(r_0, G) in (2.65) by K_{j+1}(s_0, G). Under certain conditions to be given in Theorem 2.4, the three-term recursion (2.68) can be replaced by a two-term recursion so that the resulting algorithm converges within a finite number of steps; see Algorithm 2.14.

Algorithm 2.14: MCR-OMIN(C^{-1}A)
1   Given u_0, r_0 ← b − Au_0, s_0 ← C^{-1}r_0, p_0 ← s_0
2   for i = 0, 1, 2, … do
3     α_i ← ⟨C^{-1}Ap_i, r_i⟩ / ⟨C^{-1}Ap_i, Ap_i⟩
4     u_{i+1} ← u_i + α_i p_i
5     r_{i+1} ← r_i − α_i Ap_i
6     s_{i+1} ← s_i − α_i C^{-1}Ap_i
7     if (stopping criterion) then
8       stop
9     else
10      β_i ← ⟨C^{-1}Ap_i, As_{i+1}⟩ / ⟨C^{-1}Ap_i, Ap_i⟩
11      p_{i+1} ← s_{i+1} − β_i p_i
12    end
13  Next i
14  end
We now give a theorem on the relation between MCR-ODIR and MCR-OMIN; see [11].

Theorem 2.4. The following statements are equivalent.
(i) MCR-OMIN(C^{-1}A) and MCR-ODIR(C^{-1}A) are equivalent, namely they generate the same sequence of iterates {u_k}, given the same initial vector u_0;
(ii) MCR-OMIN(C^{-1}A) converges for each starting vector u_0 to the exact solution of (2.70);
(iii) The matrix HC^{-1}A = AC^{-1}AC^{-1}A is definite;
(iv) In MCR-OMIN(C^{-1}A) the scalar α_j is non-zero for all j ≥ 0;
(v) K_{j+1}(s_0, C^{-1}A) = span{s_0, …, s_j}.

MCR-OMIN(C^{-1}A) takes one scalar product less than MCR-ODIR(C^{-1}A) and requires less storage. However, the difficulty with applying MCR-OMIN to indefinite linear systems is that β_i may become zero, in which case the iteration is trapped in the current Krylov subspace. This effect, which is named "stalling", can be avoided by switching from MCR-OMIN to the robust MCR-ODIR each time α_i becomes smaller than a certain prescribed tolerance. This yields the hybrid modified conjugate residual (HMCR) algorithm [58], which is given in Algorithm 2.15.

Algorithm 2.15: HMCR(C^{-1}A)
1   Given u_0 and a tolerance α, set r_0 ← b − Au_0, s_0 ← C^{-1}r_0, p_0 ← s_0
2   for i = 0, 1, 2, … do
3     α_i ← ⟨C^{-1}Ap_i, r_i⟩ / ⟨C^{-1}Ap_i, Ap_i⟩
4     u_{i+1} ← u_i + α_i p_i
5     r_{i+1} ← r_i − α_i Ap_i
6     s_{i+1} ← s_i − α_i C^{-1}Ap_i
7     if (stopping criterion) then
8       stop
9     end
10    if |α_i| < α then          *** do ODIR ***
11      if |α_{i-1}| < α then
12        c ← 1
13      else
14        c ← −1/α_{i-1}
15      end
16      γ_i ← ⟨AC^{-1}Ap_i, C^{-1}Ap_i⟩ / ⟨C^{-1}Ap_i, Ap_i⟩
17      σ_i ← ⟨C^{-1}Ap_i, Ap_{i-1}⟩ / ⟨C^{-1}Ap_{i-1}, Ap_{i-1}⟩
18      p_{i+1} ← C^{-1}Ap_i − γ_i p_i − σ_i p_{i-1}
19    else                        *** do OMIN ***
20      β_i ← ⟨C^{-1}Ap_i, As_{i+1}⟩ / ⟨C^{-1}Ap_i, Ap_i⟩
21      p_{i+1} ← s_{i+1} − β_i p_i
22    end
23  Next i
24  end

We note that MCR-ODIR(C^{-1}A), MCR-OMIN(C^{-1}A), and HMCR(C^{-1}A) are MINRES methods for symmetric indefinite matrices. Since A is symmetric and indefinite, the matrix HC^{-1}A is also indefinite, and thus the MCR-OMIN(C^{-1}A)
part of the foregoing algorithm does not necessarily converge. Hence the method of choice is HMCR(C^{-1}A). The error vector has to be minimised with respect to the H-norm. This implies

‖e_{k+1}‖_H = ⟨C^{-1}A(u − u_{k+1}), A(u − u_{k+1})⟩_{R^N}^{1/2} = ‖b − Au_{k+1}‖_{C^{-1}}
  = min{‖b − Aũ‖_{C^{-1}} : ũ ∈ u_0 + K_{k+1}(C^{-1}r_0, C^{-1}A)}.

Here K_{k+1}(C^{-1}r_0, C^{-1}A) is defined in (2.61). Thus

‖b − Au_{k+1}‖_{C^{-1}} / ‖b − Au_0‖_{C^{-1}} = min_{p∈Π_k} ‖(I − A p(C^{-1}A) C^{-1}) r_0‖_{C^{-1}} / ‖r_0‖_{C^{-1}}
  = min_{p∈Π_{k+1}, p(0)=1} ‖p(AC^{-1}) r_0‖_{C^{-1}} / ‖r_0‖_{C^{-1}}
  ≤ min_{p∈Π_{k+1}, p(0)=1} max_{λ_j ∈ Λ(C^{-1}A)} |p(λ_j)|,

where Π_k is the space of real polynomials of degree at most k and Λ(C^{-1}A) is the set of eigenvalues of C^{-1}A. Hence in order to be able to find a good preconditioner C, one has to investigate the spectra Λ of A and of its submatrices. If E is an inclusion set for the eigenvalues (which are real but may be positive or negative), then

‖b − Au_k‖_{C^{-1}} / ‖b − Au_0‖_{C^{-1}} ≤ min_{p∈Π_k, p(0)=1} max_{x∈E} |p(x)|.

Let E = E(α) := [−a, −bα] ∪ [cα^2, d] where a, b, c, d, and α are positive and α is asymptotically small, and let

e_k := min_{p∈Π_k, p(0)=1} max_{x∈E} |p(x)|.

Then it is proved in [232, Theorem 4.1] that

lim_{k→∞} e_k^{1/k} ≤ 1 − α^{3/2} √(bc/(ad)) + O(α^{5/2}).

Hence we have the following lemma.

Lemma 2.22. Assume that the eigenvalues of C^{-1}A belong to [−a, −bα] ∪ [cα^2, d] where a, b, c, d, and α are positive and α is asymptotically small. Then

lim_{k→∞} (‖r_k‖_{C^{-1}} / ‖r_0‖_{C^{-1}})^{1/k} ≤ 1 − α^{3/2} √(bc/(ad)) + O(α^{5/2}),

where r_k := b − Au_k.

Remark 2.2. The above result is a generalisation of [58, Theorem 3.2] which assumes that the eigenvalues of C^{-1}A belong to symmetric inclusion sets [−a, −b] ∪ [c, d] with −a < −b < 0 < c < d and a − b = d − c. This theorem states that

(‖r_k‖_{R^N} / ‖r_0‖_{R^N})^{1/k} ≤ 2^{1/(2k)} ((1 − √(bc/(ad))) / (1 + √(bc/(ad))))^{1/2}.

Therefore, the number of iterations of the preconditioned minimum residual method required to solve Au = b (with A being symmetric and indefinite) up to a given accuracy is bounded by O(1/√(bc/(ad))).
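The arithmetic of Remark 2.2 is easy to evaluate. In the following sketch the inclusion set parameters a, b, c, d are sample values (assumed for illustration only); the per-step reduction factor and a rough count of iterations needed for a given residual reduction are computed from the bound above.

```python
# Assumed sample values for a symmetric inclusion set [-a,-b] ∪ [c,d] with a-b = d-c.
import math

a, b, c, d = 4.0, 1.0, 1.0, 4.0
q = math.sqrt(b * c / (a * d))                 # sqrt(bc/(ad)) = 0.25 here
factor = (1.0 - q) / (1.0 + q)                 # asymptotic reduction of (||r_k||/||r_0||)^{2/k}

# crude iteration estimate: ||r_k||/||r_0|| <~ sqrt(2) * factor^{k/2} <= 1e-8
iters = math.ceil(2.0 * math.log(1e-8 / math.sqrt(2.0)) / math.log(factor))
print("reduction factor:", factor)
print("estimated iterations for 1e-8 reduction:", iters)
```

The larger bc/(ad), i.e. the better the negative and positive parts of the spectrum are separated from zero relative to their extent, the smaller the factor and the fewer iterations, consistent with the O(1/√(bc/(ad))) count.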
2.9 Convergence Theory for Linear Iterative Methods with Preconditioners

As mentioned in Subsection 2.2.3, a preconditioner can be used to accelerate the convergence of a linear iterative method. We briefly present here the convergence analysis of this preconditioning technique; see [96] for more details.

Recall the definition of u_k and u in Subsection 2.2.3. Defining e_k := u_k − u and using the identity

u = u + C^{-1}(b − Au) = (I − C^{-1}A) u + C^{-1} b,

we deduce from (2.15) that

e_{k+1} = (I − C^{-1}A) e_k,  k = 0, 1, 2, ….

It follows that if there exists a norm such that ‖I − C^{-1}A‖ < 1, then the preconditioned iterative method converges. We call ‖I − C^{-1}A‖ the convergence rate of the method. We show in the following subsections that under some assumptions,

‖I − C^{-1}A‖_A < 1  and  ‖I − C^{-1}A‖_C < 1,

where we recall that the ‖·‖_E-norm of a matrix D is defined by

‖D‖_E^2 = sup_{x∈R^N} ⟨Dx, Dx⟩_E / ⟨x, x⟩_E = sup_{x∈R^N} ⟨EDx, Dx⟩_{R^N} / ⟨Ex, x⟩_{R^N}.

Here, N is the size of the matrices E and D. We consider two different cases: the case when C is symmetric (which arises from the additive Schwarz method or the symmetric multiplicative Schwarz method), and the case when C is non-symmetric (which arises from the non-symmetric multiplicative Schwarz method).
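The error recursion e_{k+1} = (I − C^{-1}A) e_k can be checked directly in a computation. The following sketch (toy data assumed, not from the book) runs the preconditioned iteration (2.15) with a diagonal Jacobi preconditioner C = diag(A), verifies the recursion at every step, and confirms convergence for a diagonally dominant A:

```python
# Assumed example: strictly diagonally dominant A, so the Jacobi iteration converges.

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
u_exact = [1.0, -1.0, 2.0]
b = matvec(A, u_exact)                    # b = A u, so the error is known exactly

u = [0.0, 0.0, 0.0]
e = [ui - xi for ui, xi in zip(u, u_exact)]
for _ in range(50):
    Ae = matvec(A, e)
    e_pred = [e[i] - Ae[i] / A[i][i] for i in range(3)]       # (I - C^{-1}A) e_k
    r = [bi - yi for bi, yi in zip(b, matvec(A, u))]
    u = [u[i] + r[i] / A[i][i] for i in range(3)]             # u_{k+1} = u_k + C^{-1}(b - A u_k)
    e = [u[i] - u_exact[i] for i in range(3)]
    assert max(abs(e[i] - e_pred[i]) for i in range(3)) < 1e-10   # recursion holds

print(max(abs(x) for x in e) < 1e-8)   # → True (the iteration converges)
```

For this example the iteration matrix I − C^{-1}A has spectral radius 1/2, so the error is halved in each step.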
2.9.1 Symmetric preconditioner

In this subsection, the preconditioner $C$ is defined from the additive Schwarz operator, see (2.27), or the multiplicative Schwarz operator, see (2.32), and thus $C$ is symmetric. Recalling the definitions of $\widetilde A$ and $\widehat A$ in (2.33), we first prove the following lemma.

Lemma 2.23. We have
\[
\|I - \widetilde A\|_C = 1 - \lambda_{\min}(\widetilde A) \tag{2.72}
\]
and
\[
\|I - \widetilde A\|_A \le \big(1 - 2\lambda_{\min}(\widetilde A) + \lambda_{\max}^2(\widetilde A)\big)^{1/2}, \tag{2.73}
\]
where $\widetilde A = C^{-1}A$.
Proof. First we note that, for any $v \in \mathbb{R}^N$,
\begin{align*}
\|(I - C^{-1}A)v\|_C^2
&= v^\top (I - C^{-1}A)^\top C (I - C^{-1}A) v
 = v^\top (C - A)(I - C^{-1}A) v \\
&= v^\top C^{1/2}(I - C^{-1/2}AC^{-1/2})C^{1/2}\, C^{-1/2}(I - C^{-1/2}AC^{-1/2})C^{1/2} v
 = w^\top (I - \widehat A)^2 w,
\end{align*}
where $w = C^{1/2}v$ and $\widehat A = C^{-1/2}AC^{-1/2}$. Hence
\[
\|I - \widetilde A\|_C^2
= \max_{v\in\mathbb{R}^N} \frac{\|(I - C^{-1}A)v\|_C^2}{\|v\|_C^2}
= \max_{w\in\mathbb{R}^N} \frac{w^\top (I-\widehat A)^2 w}{w^\top w}
= \lambda_{\max}\big((I-\widehat A)^2\big).
\]
On the other hand, since $\widehat A$ and $\widetilde A$ have the same eigenvalues,
\[
\lambda_{\max}(I - \widehat A) = 1 - \lambda_{\min}(\widetilde A).
\]
This proves (2.72). To prove (2.73), we write
\begin{align*}
\|(I - \widetilde A)v\|_A^2
&= \big\langle A(v - \widetilde A v),\, v - \widetilde A v\big\rangle_{\mathbb{R}^N} \\
&= \langle Av, v\rangle_{\mathbb{R}^N}
  - 2\langle Av, \widetilde A v\rangle_{\mathbb{R}^N}
  + \langle A\widetilde A v, \widetilde A v\rangle_{\mathbb{R}^N}
 = \|v\|_A^2 - 2\langle Av, \widetilde A v\rangle_{\mathbb{R}^N} + \|\widetilde A v\|_A^2,
\end{align*}
so that
\[
\|I - \widetilde A\|_A^2
= \max_{v\in\mathbb{R}^N} \frac{\|(I-\widetilde A)v\|_A^2}{\|v\|_A^2}
\le 1 - 2\min_{v\in\mathbb{R}^N} \frac{\langle Av, \widetilde A v\rangle_{\mathbb{R}^N}}{\langle Av, v\rangle_{\mathbb{R}^N}}
  + \|\widetilde A\|_A^2.
\]
Lemma 2.8 gives $\|\widetilde A\|_A = \lambda_{\max}(\widetilde A)$. Moreover, with $w = A^{1/2}v$ we have
\[
\min_{v\in\mathbb{R}^N} \frac{\langle Av, \widetilde A v\rangle_{\mathbb{R}^N}}{\langle Av, v\rangle_{\mathbb{R}^N}}
= \min_{v\in\mathbb{R}^N} \frac{v^\top A C^{-1} A v}{v^\top A v}
= \min_{w\in\mathbb{R}^N} \frac{w^\top A^{1/2} C^{-1} A^{1/2} w}{w^\top w}
= \lambda_{\min}\big(A^{1/2}C^{-1}A^{1/2}\big).
\]
Lemma C.6 implies
\[
\lambda_{\min}\big(A^{1/2}C^{-1}A^{1/2}\big) = \lambda_{\min}(\widetilde A).
\]
Altogether we obtain (2.73). □
Using the above lemma, we can prove the following convergence result for the additive and symmetric multiplicative Schwarz methods.

Theorem 2.5. If $C$ is a symmetric positive definite preconditioner, then the linear iteration
\[
u_{k+1} = u_k + C^{-1}(b - Au_k), \qquad k = 0,1,2,\ldots,
\]
converges if $\lambda_{\min}(\widetilde A) > 0$, since then (2.72) gives $\|I - \widetilde A\|_C < 1$.
for some $\varepsilon > 0$. Hence
\[
\|Wv\|_{L^2(\Gamma)}^2 \le C\, h_\ell^{-2\varepsilon} h_k^{-1+2\varepsilon}\, \|v\|_{H^{1/2}(\Gamma)}^2. \tag{5.17}
\]
Inequalities (5.16) and (5.17) then give
\[
a_W(T_\ell v, v) \le C\, h_\ell^{1-2\varepsilon} h_k^{-1+2\varepsilon}\, a_W(v, v) = C\gamma^{2(\ell-k)} a_W(v, v),
\]
where $\gamma = (1/2)^{(1-2\varepsilon)/2}$, proving the lemma. □
Using the above strengthened Cauchy–Schwarz inequality, we prove the following result estimating the maximum eigenvalue of the operator $P_{\mathrm{MAS}}$.

Lemma 5.3. The multilevel additive Schwarz operator $P_{\mathrm{MAS}}$ satisfies
\[
a_W(P_{\mathrm{MAS}} v, v) \lesssim a_W(v, v) \quad\text{for any } v \in V^L, \tag{5.18}
\]
where the constant is independent of the number of levels and the number of mesh points.

Proof. The proof is similar to [38, Theorem 3.1] and is included here for completeness. For $\ell = 1,\ldots,L$ let $P_\ell : V^L \to V^\ell$ be defined for any $v \in V^L$ by
\[
a_W(P_\ell v, w) = a_W(v, w) \quad\text{for any } w \in V^\ell.
\]
Then for any $v \in V^L$ we can write
\[
P_\ell v = \sum_{k=1}^{\ell} (P_k - P_{k-1}) v, \tag{5.19}
\]
where $P_0 \equiv 0$. Hence
\[
a_W(P_{\mathrm{MAS}} v, v)
= \sum_{\ell=1}^{L} a_W(T_\ell v, v)
= \sum_{\ell=1}^{L} a_W(T_\ell v, P_\ell v)
= \sum_{\ell=1}^{L}\sum_{k=1}^{\ell} a_W\big(T_\ell v, (P_k - P_{k-1})v\big).
\]
Since $a_W(T_\ell\,\cdot,\cdot)$ is symmetric and positive definite, we can use the Cauchy–Schwarz inequality with respect to that quadratic form to obtain
\[
a_W(P_{\mathrm{MAS}} v, v)
\le \sum_{\ell=1}^{L}\sum_{k=1}^{\ell} a_W(T_\ell v, v)^{1/2}\,
a_W\big(T_\ell (P_k-P_{k-1})v, (P_k-P_{k-1})v\big)^{1/2}.
\]
Since $V^{k-1}\subset V^k \subset V^\ell$, we can invoke Lemma 5.2 to deduce
\[
a_W\big(T_\ell (P_k-P_{k-1})v,(P_k-P_{k-1})v\big)^{1/2}
\le C^{1/2}\gamma^{\ell-k}\, a_W\big((P_k-P_{k-1})v,(P_k-P_{k-1})v\big)^{1/2}
= C^{1/2}\gamma^{\ell-k}\, a_W\big((P_k-P_{k-1})v, v\big)^{1/2},
\]
where the last equality is obtained by a simple calculation. Consequently, with $S := \sum_{i=0}^{\infty}\gamma^{i}$,
\begin{align*}
a_W(P_{\mathrm{MAS}} v, v)
&\le \sum_{\ell=1}^{L}\sum_{k=1}^{\ell} \gamma^{(\ell-k)/2} a_W(T_\ell v, v)^{1/2}\,
      C^{1/2}\gamma^{(\ell-k)/2} a_W\big((P_k-P_{k-1})v, v\big)^{1/2} \\
&\le \sum_{\ell=1}^{L}\sum_{k=1}^{\ell}
      \Big(\frac{1}{2S}\gamma^{\ell-k} a_W(T_\ell v, v)
      + \frac{CS}{2}\gamma^{\ell-k} a_W\big((P_k-P_{k-1})v, v\big)\Big) \\
&\le \frac{1}{2}\sum_{\ell=1}^{L} a_W(T_\ell v, v)
      + \frac{CS^2}{2}\sum_{k=1}^{L} a_W\big((P_k-P_{k-1})v, v\big)
\end{align*}
\[
= \frac{1}{2} a_W(P_{\mathrm{MAS}} v, v) + \frac{CS^2}{2} a_W(P_L v, v)
= \frac{1}{2} a_W(P_{\mathrm{MAS}} v, v) + \frac{CS^2}{2} a_W(v, v),
\]
where in the second last step we used (5.19). Inequality (5.18) then follows. □

Lemma 5.1 and Proposition 2.1 yield $\lambda_{\min}(P_{\mathrm{MAS}}) \gtrsim 1$. Moreover, Lemma 5.3 yields $\lambda_{\max}(P_{\mathrm{MAS}}) \lesssim 1$. Therefore we have the following result.

Theorem 5.1. The multilevel additive Schwarz operator $P_{\mathrm{MAS}}$ defined by (5.5) has a bounded condition number:
\[
\kappa(P_{\mathrm{MAS}}) \lesssim 1.
\]
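A bounded condition number of this kind is what makes Schwarz preconditioners effective in practice. The following sketch is purely illustrative and not from the book: it assembles the matrix form of a generic two-level additive Schwarz preconditioner, $C^{-1} = P_0 A_0^{-1} P_0^\top + \sum_i R_i^\top A_i^{-1} R_i$, for a 1D finite-difference Laplacian (a hypothetical stand-in for a Galerkin matrix, with made-up subdomain sizes), and checks that the preconditioned spectrum is far better conditioned than the original matrix:

```python
import numpy as np

def additive_schwarz_inverse(A, blocks, P0):
    # Matrix form of a two-level additive Schwarz preconditioner:
    # C^{-1} = P0 (P0^T A P0)^{-1} P0^T + sum_i R_i^T (R_i A R_i^T)^{-1} R_i,
    # where range(P0) is the coarse space and each R_i restricts to a subdomain.
    n = A.shape[0]
    Cinv = P0 @ np.linalg.inv(P0.T @ A @ P0) @ P0.T
    for idx in blocks:
        R = np.zeros((len(idx), n))
        R[np.arange(len(idx)), idx] = 1.0
        Cinv += R.T @ np.linalg.inv(R @ A @ R.T) @ R
    return Cinv

n = 63                                                 # interior mesh points
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # stand-in SPD matrix

# overlapping index blocks (subdomains of ~9 points, overlap 2)
blocks = [list(range(max(0, s - 1), min(s + 8, n))) for s in range(0, n, 7)]
# coarse space: piecewise-linear hat functions at every 8th interior node
nodes = np.arange(7, n, 8)
P0 = np.maximum(0.0, 1.0 - np.abs(np.arange(n)[:, None] - nodes[None, :]) / 8.0)

Cinv = additive_schwarz_inverse(A, blocks, P0)
eigs = np.sort(np.linalg.eigvals(Cinv @ A).real)       # spectrum of C^{-1} A
kappa_prec = eigs[-1] / eigs[0]
kappa_raw = np.linalg.cond(A)
print(round(kappa_prec, 2), round(kappa_raw, 1))
```

The design choice mirrors the text: the local solves control $\lambda_{\max}$ while the coarse solve controls $\lambda_{\min}$; dropping `P0` from the sum makes the condition number grow with the number of subdomains.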
5.1.2 The weakly-singular integral equation

We proceed as in the case of the two-level method in Subsection 3.1.1.2. Let $\mathcal N_\ell$, $\ell = 1,2,\ldots,L$, be the meshes defined in Subsection 5.1.1. We define (compare with (3.25) and (3.26)) $\chi_0^1(x) \equiv 1$ and
\[
\chi_i^\ell(x) :=
\begin{cases}
\dfrac{1}{h_\ell}, & x \in (x_{i-1}^\ell, x_i^\ell),\\[4pt]
-\dfrac{1}{h_\ell}, & x \in (x_i^\ell, x_{i+1}^\ell),\\[4pt]
0, & \text{otherwise},
\end{cases}
\]
where $i = 1,\ldots,N_\ell$ and $\ell = 1,\ldots,L$. (We note that, by the way the meshes are defined, $N_\ell$ is an odd number.) The subspaces on level $\ell$ are defined by
\[
V_0^1 = \operatorname{span}\{\chi_0^1\}, \qquad V_i^\ell = \operatorname{span}\{\chi_i^\ell\}, \quad i = 1,\ldots,N_\ell.
\]
By using the same argument as in Subsection 3.1.1.2, see (3.24), we can show that any $v \in V$ can be decomposed as
\[
v = v_0 + \sum_{\ell=1}^{L} \sum_{\substack{i=1\\ i\ \mathrm{odd}}}^{N_\ell} v_i^\ell, \tag{5.20}
\]
where $v_0 \in V_0^1$ and $v_i^\ell \in V_i^\ell$. Indeed, for $i = 1,3,5,\ldots,N_L$, we can define $v_i^L \in V_i^L$ by
\[
v_i^L := \big(v - \mu_{\Gamma_i^L}(v)\big)\big|_{\Gamma_i^L},
\]
where $\Gamma_i^L := (x_{i-1}^L, x_{i+1}^L)$ is a subinterval on level $L-1$. Hence
\[
v = v^{L-1} + \sum_{\substack{i=1\\ i\ \mathrm{odd}}}^{N_L} v_i^L,
\]
where $v^{L-1}$ is a piecewise constant function on the mesh $\mathcal N_{L-1}$ which satisfies $v^{L-1}|_{\Gamma_i^L} := \mu_{\Gamma_i^L}(v)$. Repeating the same argument, we arrive at (5.20).

Consequently, the boundary element space $V$ consisting of piecewise constant functions on the mesh $\mathcal N_L$ can be decomposed as
\[
V = V_0^1 + \sum_{\ell=1}^{L} \sum_{i=1}^{N_\ell} V_i^\ell. \tag{5.21}
\]
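The mean-value construction behind (5.20) can be tested numerically. The sketch below uses a hypothetical setting, not the book's: $\Gamma$ is parametrized over $[0,1]$ with $2^L$ equal cells, so a piecewise constant function is stored as its vector of cell values. One splitting step extracts coarse cell means plus zero-mean details, and summing the prolonged details over all levels recovers $v$ exactly:

```python
import numpy as np

def split(v):
    # One level of the splitting used for (5.20): means over pairs of fine
    # cells (the coarse piecewise constant), plus a detail with zero mean on
    # every coarse cell (a combination of the chi-type functions).
    pairs = v.reshape(-1, 2)
    means = pairs.mean(axis=1)
    detail = (pairs - means[:, None]).ravel()
    return means, detail

rng = np.random.default_rng(1)
v = rng.standard_normal(16)              # piecewise constant on level L = 4
details, cur = [], v
while cur.size > 1:
    cur, d = split(cur)
    details.append(d)

# v = (global mean) + sum over levels of the prolonged zero-mean details
recon = np.full(v.size, cur[0])
for d in details:
    recon += np.repeat(d, v.size // d.size)
print(np.allclose(recon, v))
```

Each detail having zero mean on its coarse cell is what places it in the span of the level's basis functions; the telescoping sum is the discrete analogue of (5.20).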
We note that the functions $\chi_0^1$ and $\chi_i^\ell$, $i = 1,3,5,\ldots,N_\ell$, $\ell = 1,\ldots,L$, form a hierarchical basis for $V$. However, a decomposition with these basis functions does not yield an optimal preconditioner; see [231]. The multilevel additive Schwarz operator is now defined as
\[
P_{\mathrm{MAS}} = P_{V_0^1} + \sum_{\ell=1}^{L} \sum_{i=1}^{N_\ell} P_{V_i^\ell}, \tag{5.22}
\]
where $P_{V_i^\ell} : V \to V_i^\ell$ is defined for any $v \in V$ by
\[
a_V(P_{V_i^\ell} v, w) = a_V(v, w) \quad\text{for all } w \in V_i^\ell. \tag{5.23}
\]
By using Lemma A.19 and the same argument as in Subsection 3.1.1.2, we can derive from Lemma 5.1 and Lemma 5.2 the following results, the proofs of which are left as exercises.

Lemma 5.4. Any $v \in V$ admits a decomposition
\[
v = v_0^1 + \sum_{\ell=1}^{L}\sum_{i=1}^{N_\ell} v_i^\ell, \qquad v_i^\ell \in V_i^\ell,
\]
that satisfies
\[
a_V(v_0^1, v_0^1) + \sum_{\ell=1}^{L}\sum_{i=1}^{N_\ell} a_V(v_i^\ell, v_i^\ell) \lesssim a_V(v, v).
\]
The constant is independent of $L$ and $N_\ell$, $\ell = 1,\ldots,L$.

Lemma 5.5 (Strengthened Cauchy–Schwarz inequality). Let
\[
T_\ell = \sum_{i=1}^{N_\ell} P_{V_i^\ell}.
\]
There exist constants $C > 0$ and $\gamma \in (0,1)$ such that, for any $v \in V_k$ with $1 \le k \le \ell \le L$, the following inequality holds:
\[
a_V(T_\ell v, v) \le C\gamma^{2(\ell-k)} a_V(v, v).
\]
As a consequence, we have the following result.

Theorem 5.2. The multilevel additive Schwarz operator $P_{\mathrm{MAS}}$ defined by (5.22) has a bounded condition number:
\[
\kappa(P_{\mathrm{MAS}}) \lesssim 1.
\]
5.2 Multiplicative Schwarz methods

5.2.1 The hypersingular integral equation

For the multiplicative Schwarz preconditioners for this equation, we use the same subspace decomposition defined by (5.2) and the projections $P_{V_i^\ell}$, $i = 1,\ldots,N_\ell$, $\ell = 1,\ldots,L$, defined by (5.3). The multilevel symmetric multiplicative Schwarz operator is defined by, see (2.23),
\[
P_{\mathrm{MSMS}} = I - E_{\mathrm{MSMS}}. \tag{5.24}
\]
Here, see (2.24),
\[
E_{\mathrm{MSMS}} = E_{\mathrm{MMS}}^{t} E_{\mathrm{MMS}}
\quad\text{with}\quad
E_{\mathrm{MMS}} = \prod_{\ell=1}^{L} T_\ell,
\]
where $T_\ell$ is defined by (5.4). The convergence of the multilevel symmetric multiplicative Schwarz method is given by (2.11). As in previous chapters, Lemma 2.10 implies that it suffices to estimate $\kappa(P_{\mathrm{MSMS}})$. The analysis in Subsection 5.1.1 yields the following theorem.

Theorem 5.3. The condition number of the multilevel symmetric multiplicative Schwarz operator $P_{\mathrm{MSMS}}$ defined by (5.24) satisfies
\[
\kappa(P_{\mathrm{MSMS}}) \lesssim 1.
\]
Proof. The theorem is a consequence of Lemma 5.1 (stability of decomposition), Lemma 5.2 (strengthened Cauchy–Schwarz inequality), and Theorem 2.2, noting that Assumption 2.6 (local stability) is obvious because the bilinear forms on the subspaces are chosen to be $a_W(\cdot,\cdot)$. □
5.2.2 The weakly-singular integral equation

For the symmetric multiplicative Schwarz preconditioner for this equation, we use the same subspace decomposition defined by (5.21). The multilevel multiplicative Schwarz operator is defined by (5.24), where $E_{\mathrm{MMS}}$ is defined by (2.24) with the projections $P_{V_0^1}$ and $P_{V_i^\ell}$, $i = 1,\ldots,N_\ell$, $\ell = 1,\ldots,L$, defined by (5.23). As a consequence of the analysis in Subsection 5.1.2 we obtain the following theorem.

Theorem 5.4. The condition number of the multilevel symmetric multiplicative Schwarz operator $P_{\mathrm{MSMS}}$ defined by (5.24) satisfies
\[
\kappa(P_{\mathrm{MSMS}}) \lesssim 1.
\]
Proof. The theorem is a consequence of Lemma 5.4 (stability of decomposition), Lemma 5.5 (strengthened Cauchy–Schwarz inequality), and Theorem 2.2, noting that Assumption 2.6 (local stability) is obvious because the bilinear forms on the subspaces are chosen to be $a_V(\cdot,\cdot)$. □
5.3 Numerical Results

The following table shows the tables and figures in Chapter 9 which support the theoretical results obtained in this chapter.

Equation          Preconditioner           Theorem   Table      Figure
Hypersingular     Additive Schwarz (PCG)   5.1       9.2, 9.3   9.3
Weakly-singular   Additive Schwarz (PCG)   5.2       9.1        9.2

Table 5.1 Theoretical results in Chapter 5 and relevant numerical results in Chapter 9.
Chapter 6

Additive Schwarz Methods for the hp-Version

In this chapter, we report on additive Schwarz preconditioners for the hp-version of the hypersingular integral equation on a slit or a polygon in $\mathbb{R}^2$. Section 6.1 develops preconditioners when quasi-uniform meshes are used, whereas Section 6.2 deals with geometrically graded meshes. These results first appeared in [120] and [229]. We recall here the Galerkin equation to be considered:
\[
a_W(\psi_V, \phi) = \langle g, \phi\rangle, \qquad \phi \in V, \tag{6.1}
\]
where the boundary element space $V$ is to be defined later. It is noted that if $\Gamma$ is a polygon, then the energy space $\widetilde H^{1/2}(\Gamma)$ for the hypersingular integral operator coincides with the space $H^{1/2}(\Gamma)$. In this case, the vanishing assumption at the endpoints of $\Gamma$ in the definition of the boundary element space is invalid. However, continuity of these functions is still required. In Section 6.1 we only present the results for the case when $\Gamma$ is a slit. Extension to polygons is obvious. Section 6.2 presents results for both the slit and the polygon. We will not elaborate on the different boundary element spaces as they are clear from the context. In Section 6.3 we will briefly mention how the results for the weakly-singular integral equation can be obtained in a similar manner.

6.1 Preconditioners with Quasi-uniform Meshes

We will develop in this section two-level methods (non-overlapping and overlapping) for the hypersingular integral equation (6.1). A similar approach as in Chapter 3 and Chapter 4 can be employed to obtain similar results for the weakly-singular integral equation. We will not repeat that discussion here. Subsection 6.1.1 presents a non-overlapping method, whereas an overlapping preconditioner is presented in Subsection 6.1.2. The two-level preconditioners designed in this section are based on the following two-level mesh and boundary-element spaces.
Definition 6.1 (Two-level mesh and boundary-element spaces).
The coarse mesh: We first divide $\Gamma$ into disjoint subdomains $\Gamma_i$ with length $H_i$, $i = 1,\ldots,J$, so that $\Gamma = \cup_{i=1}^{J}\overline\Gamma_i$, and denote by $H$ the maximum value of $H_i$.
The fine mesh: Each $\Gamma_i$ is further divided into disjoint subintervals $\Gamma_{ij}$, $j = 1,\ldots,N_i$, so that $\overline\Gamma_i = \cup_{j=1}^{N_i}\overline\Gamma_{ij}$. The maximum length of the subintervals $\Gamma_{ij}$ in $\Gamma_i$ is denoted by $h_i$, and the maximum value of $h_i$ is denoted by $h$. We require that the fine mesh is locally quasi-uniform, i.e., it is quasi-uniform in each subdomain $\Gamma_i$.
Boundary element spaces:
\begin{align*}
V &:= \big\{v \in C(\Gamma) : v|_{\Gamma_{ij}} \in \mathcal P_{p_{ij}}(\Gamma_{ij}),\ i = 1,\ldots,J,\ j = 1,\ldots,N_i;\ v(\pm 1) = 0\big\},\\
V_h &:= \big\{v \in C(\Gamma) : v|_{\Gamma_{ij}} \in \mathcal P_1(\Gamma_{ij}),\ i = 1,\ldots,J,\ j = 1,\ldots,N_i;\ v(\pm 1) = 0\big\},\\
V_H &:= \big\{v \in C(\Gamma) : v|_{\Gamma_i} \in \mathcal P_1(\Gamma_i) \text{ for } i = 1,\ldots,J;\ v(\pm 1) = 0\big\},\\
V_h^0(\Gamma_i) &:= \big\{v \in V_h : \operatorname{supp} v \subset \overline\Gamma_i\big\},\\
V_p^0(\Gamma_{ij}) &:= \big\{v \in V : \operatorname{supp} v \subset \overline\Gamma_{ij}\big\},\\
V_p &:= \bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i} V_p^0(\Gamma_{ij}).
\end{align*}
Here $\mathcal P_q(G)$ denotes the set of polynomials of degree at most $q$ defined on the set $G$. For simplicity of notation, in this section we will assume
\[
p_{ij} = p > 0 \qquad \forall j = 1,\ldots,N_i,\ i = 1,\ldots,J.
\]
Recalling the definition in Section 4.1.1.1 of affine images of antiderivatives of Legendre polynomials, we have
\[
V_p^0(\Gamma_{ij}) = \operatorname{span}\{L_{q,ij} : 2 \le q \le p\}, \tag{6.2}
\]
where $L_{q,ij}$ is the affine image of $L_q$ onto $\Gamma_{ij}$. It is clear from Definition 6.1 that $V$ can be decomposed into
\[
V = V_h \oplus V_p. \tag{6.3}
\]
If we further decompose $V_h$ and $V_p$ we will obtain different additive Schwarz methods for equation (6.1).
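The key property of the functions $L_q$, namely that they vanish at both endpoints for $q \ge 2$ (so their affine images belong to $V_p^0(\Gamma_{ij})$), follows from orthogonality of the Legendre polynomials to constants, and can be checked directly. A small sketch, using numpy's Legendre class and the reference functions $L_q(x) = \int_{-1}^{x} P_{q-1}(t)\,dt$ on $I = (-1,1)$:

```python
from numpy.polynomial.legendre import Legendre

# L_q is the antiderivative of the Legendre polynomial P_{q-1} with L_q(-1)=0.
# For q >= 2, L_q(1) = int_{-1}^{1} P_{q-1}(t) dt = 0, since P_{q-1} (with
# q-1 >= 1) is orthogonal to the constant P_0; hence L_q vanishes at both
# endpoints and its affine image has support inside the subinterval.
for q in range(2, 9):
    L = Legendre.basis(q - 1).integ(lbnd=-1)
    assert abs(L(-1.0)) < 1e-13 and abs(L(1.0)) < 1e-13
print("L_q(+-1) = 0 for q = 2..8")
```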
6.1.1 A two-level non-overlapping method

Due to (6.3), we can define the non-overlapping method via a two-level decomposition of $V_h$ as follows. The two blocks $V_h$ and $V_p$ can be decomposed in an obvious way by
\[
V_h = V_H \oplus \Big(\bigoplus_{i=1}^{J} V_h^0(\Gamma_i)\Big) \tag{6.4}
\]
and
\[
V_p = \bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i} V_p^0(\Gamma_{ij}). \tag{6.5}
\]
The above decompositions result in the following two-level decomposition of $V$.

Definition 6.2. The boundary element space $V$ is decomposed by the following direct sum:
\[
V = V_H \oplus \Big(\bigoplus_{i=1}^{J} V_h^0(\Gamma_i)\Big) \oplus \Big(\bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i} V_p^0(\Gamma_{ij})\Big).
\]
The above decomposition is a direct decomposition because the intersection of any two subspaces contains only the zero function. Let $\Pi_H : C(\Gamma) \to V_H$ and $\Pi_h : C(\Gamma) \to V_h$ be the usual interpolation operators. A function $v \in V$ can be uniquely decomposed by
\[
v = v_H + \sum_{i=1}^{J} v_i^h + \sum_{i=1}^{J}\sum_{j=1}^{N_i} v_{ij}^p, \tag{6.6}
\]
where
\[
v_H = \Pi_H v, \qquad \sum_{i=1}^{J} v_i^h = \Pi_h v - v_H, \qquad \sum_{i=1}^{J}\sum_{j=1}^{N_i} v_{ij}^p = v - \Pi_h v. \tag{6.7}
\]
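The splitting (6.6)-(6.7) can be sanity-checked with nodal piecewise-linear interpolation on nested meshes. The sketch below is an illustrative stand-in, not the book's setting: $v$ is a smooth function vanishing at $\pm 1$ rather than a Galerkin solution, and the meshes are uniform rather than the coarse/fine pair of Definition 6.1:

```python
import numpy as np

# Pi_H and Pi_h realized as nodal piecewise-linear interpolation (np.interp)
# on a coarse mesh xH and a nested fine mesh xh of [-1, 1].
def interp(xs, f, t):
    return np.interp(t, xs, f(xs))

v = lambda t: np.cos(np.pi * t) * (1.0 - t**2)   # v(-1) = v(1) = 0
xH = np.linspace(-1, 1, 5)       # coarse mesh
xh = np.linspace(-1, 1, 33)      # fine mesh containing the coarse nodes
t = np.linspace(-1, 1, 641)      # evaluation points

vH = interp(xH, v, t)                 # v_H      = Pi_H v
dh = interp(xh, v, t) - vH            # h-detail = Pi_h v - v_H
dp = v(t) - interp(xh, v, t)          # p-detail = v - Pi_h v
print(np.allclose(vH + dh + dp, v(t)))   # the sum telescopes back to v
# the p-detail vanishes at all fine mesh nodes, as interpolation errors do:
print(np.allclose(interp(xh, v, xh), v(xh)))
```

Because the coarse nodes are also fine nodes, the $h$-detail vanishes at the coarse mesh points and the $p$-detail at the fine mesh points, mirroring the "interior bubble" nature of the $V_h^0$ and $V_p^0$ components.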
As in the previous chapters, we prove the coercivity and stability of this decomposition. The proof of coercivity is simple.

Lemma 6.1 (Coercivity of decomposition). For any $v \in V$,
\[
a_W(v, v) \lesssim a_W(v_H, v_H) + \sum_{i=1}^{J} a_W(v_i^h, v_i^h) + \sum_{i=1}^{J}\sum_{j=1}^{N_i} a_W(v_{ij}^p, v_{ij}^p).
\]
The constant is independent of $v$, $H$, $h$, and $p$.

Proof. Noting the disjoint supports of $v_i^h$, $i = 1,\ldots,J$, and of $v_{ij}^p$, $j = 1,\ldots,N_i$, $i = 1,\ldots,J$, we obtain by recalling (6.6) and using the Cauchy–Schwarz inequality and Theorem A.11
\begin{align*}
\|v\|_{H_I^{1/2}(\Gamma)}^2
&\lesssim \|v_H\|_{H_I^{1/2}(\Gamma)}^2
+ \Big\|\sum_{i=1}^{J} v_i^h\Big\|_{H_I^{1/2}(\Gamma)}^2
+ \Big\|\sum_{i=1}^{J}\sum_{j=1}^{N_i} v_{ij}^p\Big\|_{H_I^{1/2}(\Gamma)}^2 \\
&\lesssim \|v_H\|_{H_I^{1/2}(\Gamma)}^2
+ \sum_{i=1}^{J}\|v_i^h\|_{H_I^{1/2}(\Gamma)}^2
+ \sum_{i=1}^{J}\sum_{j=1}^{N_i}\|v_{ij}^p\|_{H_I^{1/2}(\Gamma)}^2.
\end{align*}
This proves the lemma. □

Next, we prove the stability of the decomposition.

Lemma 6.2 (Stability of decomposition). For any $v \in V$,
\[
a_W(v_H, v_H) + \sum_{i=1}^{J} a_W(v_i^h, v_i^h) + \sum_{i=1}^{J}\sum_{j=1}^{N_i} a_W(v_{ij}^p, v_{ij}^p)
\lesssim (1 + \log p)^2 \max_{1\le i\le J}\Big(1 + \log\frac{H_i}{h_i}\Big)\, a_W(v, v).
\]
The constant is independent of $v$, $H$, $h$, and $p$.

Proof. As before, we need to prove
\[
\|v_H\|_{H_I^{1/2}(\Gamma)}^2
+ \sum_{i=1}^{J}\|v_i^h\|_{H_I^{1/2}(\Gamma_i)}^2
+ \sum_{i=1}^{J}\sum_{j=1}^{N_i}\|v_{ij}^p\|_{H_I^{1/2}(\Gamma_{ij})}^2
\lesssim (1 + \log p)^2 \max_{1\le i\le J}\Big(1 + \log\frac{H_i}{h_i}\Big)\, \|v\|_{H_I^{1/2}(\Gamma)}^2. \tag{6.8}
\]
The first and third terms on the left-hand side are bounded by the right-hand side due to (6.7) and Lemma C.38. To show the boundedness of the second term, we note that $v_i^h$ can be rewritten as
\[
v_i^h = \big(\Pi_h v - \Pi_H(\Pi_h v)\big)\big|_{\Gamma_i}.
\]
Invoking Lemma C.38 with $v$ in that lemma replaced by $\Pi_h v$, a piecewise polynomial of degree 1, we deduce
\[
\sum_{i=1}^{J}\|v_i^h\|_{H_I^{1/2}(\Gamma_i)}^2
\lesssim \max_{1\le i\le J}\Big(1 + \log\frac{H_i}{h_i}\Big)\, \|\Pi_h v\|_{H_I^{1/2}(\Gamma)}^2.
\]
Invoking the same lemma again yields the required bound. □

Lemma 6.1, Lemma 6.2, and Theorem 2.1 give the following result.

Theorem 6.1. The condition number of the two-level additive Schwarz operator $P_{\mathrm{ad}}$ associated with the direct sum decomposition defined in Definition 6.2 satisfies
\[
\kappa(P_{\mathrm{ad}}) \lesssim (1 + \log p)^2 \max_{1\le i\le J}\Big(1 + \log\frac{H_i}{h_i}\Big).
\]
6.1.2 A two-level overlapping method

The overlapping subdomains $\Gamma_i'$ are defined as follows. We extend each subdomain $\Gamma_i$ on both sides (except for the left side of $\Gamma_1$ and the right side of $\Gamma_J$, which are the endpoints of $\Gamma$) so that the length of the overlap between two adjacent extended subdomains $\Gamma_i'$ and $\Gamma_{i+1}'$ is proportional to $\delta$ for some $\delta \in (0, H)$. We assume that the endpoints of $\Gamma_i'$ coincide with the fine mesh points. This means the overlap is a union of subintervals $\Gamma_{ij}$. The overlapping method is defined via the following subspace decomposition.

Definition 6.3. The boundary element space $V$ is decomposed as follows:
\[
V = V_H + \sum_{i=1}^{J} V_h^0(\Gamma_i') + \sum_{i=1}^{J}\sum_{j=1}^{N_i} V_p^0(\Gamma_{ij}).
\]
The above decomposition is coercive and stable, as shown in the next two lemmas.

Lemma 6.3 (Coercivity of decomposition). If $v \in V$ satisfies $v = v_H + \sum_{i=1}^{J} v_i^h + \sum_{i=1}^{J}\sum_{j=1}^{N_i} v_{ij}^p$, where $v_H \in V_H$, $v_i^h \in V_h^0(\Gamma_i')$, and $v_{ij}^p \in V_p^0(\Gamma_{ij})$, then
\[
a_W(v, v) \lesssim a_W(v_H, v_H) + \sum_{i=1}^{J} a_W(v_i^h, v_i^h) + \sum_{i=1}^{J}\sum_{j=1}^{N_i} a_W(v_{ij}^p, v_{ij}^p).
\]
The constant is independent of $v$, $h$, $H$, and $p$.

Proof. The proof is similar to that of Lemma 3.6, using the colouring argument, and is omitted. □

Lemma 6.4 (Stability of decomposition). For any $v \in V$, there exists a decomposition $v = v_H + \sum_{i=1}^{J} v_i^h + \sum_{i=1}^{J}\sum_{j=1}^{N_i} v_{ij}^p$ with $v_H \in V_H$, $v_i^h \in V_h^0(\Gamma_i')$, and $v_{ij}^p \in V_p^0(\Gamma_{ij})$ such that
\[
a_W(v_H, v_H) + \sum_{i=1}^{J} a_W(v_i^h, v_i^h) + \sum_{i=1}^{J}\sum_{j=1}^{N_i} a_W(v_{ij}^p, v_{ij}^p)
\lesssim (1 + \log p)^2 \Big(1 + \log\frac{H}{\delta}\Big)^2 a_W(v, v).
\]
The constant is independent of $v$, $H$, $h$, $\delta$, and $p$.

Proof. Consider a partition of unity consisting of piecewise-linear functions $\theta_i$, $i = 1,\ldots,J$, supported in $\overline{\Gamma_i'}$ as defined in Lemma 3.7. Here $\delta$ is the overlap size. Let $P_H : V \to V_H$ be defined as in (3.35); recall (3.36) and (3.37). We define
\[
v_H := P_H v, \qquad w := v - v_H, \qquad v_i^h := \Pi_h(\theta_i w), \qquad v_{ij}^p := (w - \Pi_h w)|_{\Gamma_{ij}}.
\]
Then, since $\theta_i w$ has support in $\overline{\Gamma_i'}$, so does $v_i^h$, and thus $v_i^h \in V_h^0(\Gamma_i')$. It is also easy to see that $\operatorname{supp} v_{ij}^p \subset \overline\Gamma_{ij}$, implying $v_{ij}^p \in V_p^0(\Gamma_{ij})$. Moreover,
\[
\sum_{i=1}^{J} v_i^h = \Pi_h\Big(\sum_{i=1}^{J}\theta_i w\Big) = \Pi_h w = \Pi_h v - v_H.
\]
Hence
\[
v_H + \sum_{i=1}^{J} v_i^h + \sum_{i=1}^{J}\sum_{j=1}^{N_i} v_{ij}^p
= \Pi_h v + \sum_{i=1}^{J}\sum_{j=1}^{N_i} (w - \Pi_h w)|_{\Gamma_{ij}}
= \Pi_h v + w - \Pi_h(v - v_H)
= \Pi_h v + w - \Pi_h v + v_H = v.
\]
Therefore, the above is a valid decomposition of $v$. Next we prove the inequality
\[
\|v_H\|_{H_I^{1/2}(\Gamma)}^2
+ \sum_{i=1}^{J}\|v_i^h\|_{H_I^{1/2}(\Gamma_i')}^2
+ \sum_{i=1}^{J}\sum_{j=1}^{N_i}\|v_{ij}^p\|_{H_I^{1/2}(\Gamma_{ij})}^2
\lesssim (1 + \log p)^2\Big(1 + \log\frac{H}{\delta}\Big)^2 \|v\|_{H_I^{1/2}(\Gamma)}^2. \tag{6.9}
\]
The first term on the left-hand side is bounded by the right-hand side due to (3.36). Inequality (3.37) gives
\[
\frac{1}{H}\|w\|_{L^2(\Gamma)}^2 \lesssim \|v\|_{H_I^{1/2}(\Gamma)}^2
\quad\text{and}\quad
\|w\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|v\|_{H_I^{1/2}(\Gamma)}^2.
\]
To show that the second term on the left-hand side of (6.9) is bounded by the right-hand side, we invoke Lemma C.38 and Lemma 3.7 to obtain
\begin{align*}
\sum_{i=1}^{J}\|v_i^h\|_{H_I^{1/2}(\Gamma_i')}^2
&\lesssim (1 + \log p)^2 \sum_{i=1}^{J}\|\theta_i w\|_{H_I^{1/2}(\Gamma_i')}^2 \\
&\lesssim (1 + \log p)^2 \sum_{i=1}^{J}\Big(1 + \log\frac{|\Gamma_i'|}{\delta}\Big)^2
   \Big(\frac{1}{|\Gamma_i'|}\|w\|_{L^2(\Gamma_i')}^2 + |w|_{H^{1/2}(\Gamma_i')}^2\Big)
 + \int_\Gamma\frac{|w(x)|^2}{1-x}\,dx + \int_\Gamma\frac{|w(x)|^2}{x+1}\,dx \\
&\lesssim (1 + \log p)^2\Big(1 + \log\frac{H}{\delta}\Big)^2
   \Big(\frac{1}{H}\|w\|_{L^2(\Gamma)}^2 + |w|_{H^{1/2}(\Gamma)}^2
   + \int_\Gamma\frac{|w(x)|^2}{1-x}\,dx + \int_\Gamma\frac{|w(x)|^2}{x+1}\,dx\Big) \\
&\lesssim (1 + \log p)^2\Big(1 + \log\frac{H}{\delta}\Big)^2 \|w\|_{H_I^{1/2}(\Gamma)}^2
 \lesssim (1 + \log p)^2\Big(1 + \log\frac{H}{\delta}\Big)^2 \|v\|_{H_I^{1/2}(\Gamma)}^2.
\end{align*}
Finally, to prove that the third term on the left-hand side of (6.9) is bounded by the right-hand side, we note that
\[
v_{ij}^p = (w - \Pi_h w)|_{\Gamma_{ij}} = (v - \Pi_h v)|_{\Gamma_{ij}}
\]
and invoke Lemma C.38 again. □

Combining Lemma 6.3, Lemma 6.4, and Theorem 2.1, we obtain the following theorem.

Theorem 6.2. The condition number of the two-level additive Schwarz operator $P_{\mathrm{ad}}$ associated with the overlapping decomposition defined in Definition 6.3 satisfies
\[
\kappa(P_{\mathrm{ad}}) \lesssim (1 + \log p)^2\Big(1 + \log\frac{H}{\delta}\Big)^2.
\]
6.1.3 Multilevel methods

We first consider a multilevel method in $p$, which was initially studied in [120]. Noting (6.2) and (6.5), we can decompose $V_p$ by
\[
V_p = \bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i}\bigoplus_{q=2}^{p} W_{ij}^q, \tag{6.10}
\]
where $W_{ij}^q := \operatorname{span}\{L_{q,ij}\}$. We then obtain the following subspace decomposition of $V$:
\[
V = V_h \oplus \Big(\bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i}\bigoplus_{q=2}^{p} W_{ij}^q\Big). \tag{6.11}
\]
Any $u \in V$ admits a unique decomposition
\[
u = u_0 + \sum_{i=1}^{J}\sum_{j=1}^{N_i}\sum_{q=2}^{p} u_{ij}^q, \qquad u_0 \in V_h, \quad u_{ij}^q \in W_{ij}^q. \tag{6.12}
\]
We denote
\[
u_{ij} := \sum_{q=2}^{p} u_{ij}^q \quad\text{and}\quad w_p := \sum_{i=1}^{J}\sum_{j=1}^{N_i} u_{ij}. \tag{6.13}
\]
As usual, we prove the stability and coercivity of this decomposition.
Lemma 6.5 (Coercivity of decomposition). The decomposition (6.12) satisfies
\[
a_W(u, u) \lesssim a_W(u_0, u_0) + \sum_{i=1}^{J}\sum_{j=1}^{N_i}\sum_{q=2}^{p} a_W(u_{ij}^q, u_{ij}^q).
\]
The constant is independent of $u$, $J$, and $p$.

Proof. It follows from (6.12) and (6.13) that $u = u_0 + w_p = u_0 + \sum_{i=1}^{J}\sum_{j=1}^{N_i} u_{ij}$. Noting that $u_{ij} \in \widetilde H^{1/2}(\Gamma_{ij})$ is the restriction of $w_p$ onto $\Gamma_{ij}$ and using Lemma B.3 and Theorem A.11, together with standard arguments, we obtain
\[
a_W(u, u) \lesssim a_W(u_0, u_0) + \sum_{i=1}^{J}\sum_{j=1}^{N_i} a_W(u_{ij}, u_{ij}). \tag{6.14}
\]
It remains to show that, for $i = 1,\ldots,J$ and $j = 1,\ldots,N_i$,
\[
a_W(u_{ij}, u_{ij}) \lesssim \sum_{q=2}^{p} a_W(u_{ij}^q, u_{ij}^q),
\]
or equivalently
\[
\|u_{ij}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \sum_{q=2}^{p}\|u_{ij}^q\|_{H_I^{1/2}(\Gamma)}^2,
\]
where the constant is independent of all parameters involved. This is a direct consequence of Lemma C.15, by mapping the interval $\Gamma_{ij}$ onto $I = (-1,1)$ and using Lemma A.6. □

Lemma 6.6 (Stability of decomposition). Any $u \in V$ admits a unique decomposition (6.12), which satisfies
\[
a_W(u_0, u_0) + \sum_{i=1}^{J}\sum_{j=1}^{N_i}\sum_{q=2}^{p} a_W(u_{ij}^q, u_{ij}^q)
\lesssim p\,(1 + \log p)^2\, a_W(u, u).
\]
The constant is independent of $u$, $J$, and $p$.

Proof. A verbatim proof along the lines of the proof of Lemma 4.2 gives
\[
a_W(u_0, u_0) + \sum_{i=1}^{J}\sum_{j=1}^{N_i} a_W(u_{ij}, u_{ij}) \lesssim (1 + \log p)^2 a_W(u, u).
\]
It remains to show that, for each $j = 1,\ldots,N_i$ and $i = 1,\ldots,J$,
\[
\sum_{q=2}^{p} a_W(u_{ij}^q, u_{ij}^q) \lesssim p\, a_W(u_{ij}, u_{ij}),
\]
or equivalently
\[
\sum_{q=2}^{p}\|u_{ij}^q\|_{H_I^{1/2}(\Gamma_{ij})}^2 \lesssim p\,\|u_{ij}\|_{H_I^{1/2}(\Gamma_{ij})}^2,
\]
where the constant is independent of all parameters involved. Again this is a direct consequence of Lemma C.15 and Lemma A.6. □

Lemma 6.5, Lemma 6.6, and Theorem 2.1 yield the following theorem concerning a multilevel decomposition of the $p$-block $V_p$.

Theorem 6.3. The additive Schwarz operator $P_{\mathrm{ad}}$ associated with the multilevel decomposition (6.11) for the hp-version with quasi-uniform meshes has condition number bounded as
\[
\kappa(P_{\mathrm{ad}}) \lesssim p\,(1 + \log p)^2.
\]
The positive constant is independent of all parameters involved.

If we further decompose the $h$-block $V_h$ by (6.4), then
\[
V = V_H \oplus \Big(\bigoplus_{i=1}^{J} V_h^0(\Gamma_i)\Big) \oplus \Big(\bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i}\bigoplus_{q=2}^{p} W_{ij}^q\Big). \tag{6.15}
\]
Theorem 6.1 and Theorem 6.3 yield:

Theorem 6.4. The condition number of the additive Schwarz operator associated with the decomposition (6.15) satisfies
\[
\kappa(P_{\mathrm{ad}}) \lesssim p\,(1 + \log p)^2 \max_{1\le i\le J}\Big(1 + \log\frac{H_i}{h_i}\Big).
\]
A full multilevel method can be defined with the $h$-block decomposed as in Section 5.1.1, see (5.2), and the $p$-block decomposed as in (6.10). We then obtain from Theorem 5.1 and Theorem 6.3 the following theorem.

Theorem 6.5. The additive Schwarz operator $P_{\mathrm{ad}}$ associated with the multilevel decomposition (5.2) and (6.11) for the hp-version with quasi-uniform meshes has condition number bounded as
\[
\kappa(P_{\mathrm{ad}}) \lesssim p\,(1 + \log p)^2.
\]
The positive constant is independent of all parameters involved.

We note that even though the condition number of the $p$-multilevel additive Schwarz operator grows faster than that of the two-level method, the most efficient method is the full multilevel method in Theorem 6.5.
6.2 Preconditioners with Geometric Meshes

In this section, we design a two-level non-overlapping preconditioner (Subsection 6.2.1) and a multilevel preconditioner (Subsection 6.2.2) for the case when a geometric mesh is used to accelerate the convergence rate of the Galerkin scheme. It is known that this type of mesh yields exponentially fast convergence (in the energy norm), whereas the hp-version with a quasi-uniform mesh converges only at an algebraic rate. Concerning the convergence theory we refer to [17, 114, 115, 208, 209]. Hence a preconditioner for the hp-version with geometric meshes is of practical importance.

We first define the ansatz space for the hp-version with geometric meshes on $(-1,0)$. On $(-1,1)$ the geometric meshes are defined by symmetry. The space on the whole polygon $\Gamma$ can then be obtained by affine mappings of $(-1,1)$ onto the edges of $\Gamma$. The geometric mesh is designed so that it is geometrically graded towards the endpoints $\pm 1$ of $\Gamma$, where singularities of the solutions occur. This mesh is defined by levels. We will define the mesh points on the half interval $[-1,0]$ and use symmetry to obtain the other half. Let $K$ be a positive integer, usually referred to as the level of the geometric mesh, and let $\sigma$ be a constant such that $0 < \sigma < 1$. The mesh points are defined as
\[
x_0^{(K)} = -1, \qquad x_j^{(K)} = -1 + \sigma^{K-j}, \quad j = 1,\ldots,K. \tag{6.16}
\]
We note that the mesh points of level $K$ are obtained from those of level $K-1$ by keeping all the points of level $K-1$, adding one more point near the endpoint $-1$, namely $-1 + \sigma^{K-1}$, and then reordering the index.

The ansatz space $V_\sigma^K$ (on $(-1,0)$) is now spanned by piecewise polynomial functions on $(-1,0)$ whose restrictions to $(x_{j-1}^{(K)}, x_j^{(K)})$ are polynomials of degree at most $p_j = j$, $j = 1,\ldots,K$. On $\Gamma$ the space $V_\sigma^K$ is obtained by affine mappings as mentioned above. For the hypersingular operator it is also required that the trial functions be continuous. We note that the dimension of $V_\sigma^K$ is
\[
M := \dim(V_\sigma^K) = K^2 + K - 1. \tag{6.17}
\]
In order to decompose $V_\sigma^K$ as a direct sum as in Definition 6.2, we need to define a coarse mesh in association with the fine mesh (6.16). For our analysis, it is essential to maintain local quasi-uniformity, i.e., quasi-uniformity in each subdomain. Thus it is appropriate to group a fixed number of subintervals $(x_{j-1}^{(K)}, x_j^{(K)})$ to form a subdomain. Let $k_0 < K$ be a fixed positive integer. There exists an integer $m$ such that
\[
x_1^{(K-mk_0)} \le 0 = x_K^{(K)} < x_1^{(K-(m+1)k_0)}.
\]
The coarse mesh points in $(-1,0)$ are the first points of the fine meshes of different levels $K, K-k_0, K-2k_0, \ldots, K-mk_0$ which are close to $-1$, namely,
\[
-1 = x_0^{(K)} < x_1^{(K)} < x_1^{(K-k_0)} < x_1^{(K-2k_0)} < x_1^{(K-3k_0)} < \cdots < x_1^{(K-mk_0)} \le x_K^{(K)} = 0. \tag{6.18}
\]
Depending on whether $x_1^{(K-mk_0)} < x_K^{(K)}$ or $x_1^{(K-mk_0)} = x_K^{(K)}$, there will be $m+3$ or $m+2$ points, i.e. $m+2$ or $m+1$ subdomains, in the half interval $[-1,0]$. Since
\[
x_1^{(K-ik_0)} = x_{ik_0+1}^{(K)} \quad\text{for } i = 0,1,\ldots,m,
\]
each subdomain
\[
\Gamma_i := \big(x_1^{(K-(i-1)k_0)}, x_1^{(K-ik_0)}\big) = \big(x_{(i-1)k_0+1}^{(K)}, x_{ik_0+1}^{(K)}\big) \tag{6.19}
\]
consists of $k_0$ subintervals $(x_{j-1}^{(K)}, x_j^{(K)})$. Since $k_0$ is fixed with respect to $K$, this procedure ensures quasi-uniformity in each $\Gamma_i$. Moreover, there exists a constant $c$ (depending on $k_0$) such that
\[
\frac{H_i}{h_i} \le c, \tag{6.20}
\]
where, we recall, $H_i$ is the length of $\Gamma_i$ and $h_i$ is the maximum length of the subintervals in $\Gamma_i$. We can now define the coarse-mesh space $V_H$ and the local spaces $V_h^0(\Gamma_i)$ and $V_p^0(\Gamma_{ij})$ as in Definition 6.1. We note that the dimensions of the spaces $V_H$ and $V_h^0(\Gamma_i)$ defined on these coarse and fine meshes are
\[
\dim V_H =
\begin{cases}
2m+3 & \text{if } x_1^{(K-mk_0)} < x_K^{(K)},\\
2m+1 & \text{if } x_1^{(K-mk_0)} = x_K^{(K)},
\end{cases}
\]
and $\dim V_h^0(\Gamma_i) = k_0 - 1$. Note that $m$ is proportional to $k_0^{-1}$. Therefore, if we choose $k_0$ to be small so that $\dim V_h^0(\Gamma_i)$ (and hence the size of the subproblem in this subspace) is small, then $\dim V_H$ (and hence the size of the subproblem in this subspace) is large, and vice versa.
6.2.1 A two-level preconditioner

Let $V_{\sigma,h}$, $V_{\sigma,H}$, $V_{\sigma,h}^0(\Gamma_i)$, $V_p^0(\Gamma_{ij})$, and $V_p$ be defined in the same manner as $V_h$, $V_H$, $V_h^0(\Gamma_i)$, $V_p^0(\Gamma_{ij})$, and $V_p$, respectively; see Definition 6.1. Here $\Gamma_i$ and $\Gamma_{ij}$ are defined by (6.16), (6.18), and (6.19). Then we can define the following two-level subspace decompositions for $V_{\sigma,h}$ and $V_\sigma^K$:
\[
V_{\sigma,h} = V_{\sigma,H} \oplus \Big(\bigoplus_{i=1}^{J} V_{\sigma,h}^0(\Gamma_i)\Big), \tag{6.21}
\]
\[
V_\sigma^K = V_{\sigma,h} \oplus V_p, \tag{6.22}
\]
and thus
\[
V_\sigma^K = V_{\sigma,H} \oplus \Big(\bigoplus_{i=1}^{J} V_{\sigma,h}^0(\Gamma_i)\Big) \oplus \Big(\bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i} V_p^0(\Gamma_{ij})\Big). \tag{6.23}
\]
We deduce from Theorem 6.1 the following result.

Theorem 6.6. The condition number of the additive Schwarz operator $P_{\mathrm{ad}}$ associated with the decomposition (6.23) satisfies
\[
\kappa(P_{\mathrm{ad}}) \lesssim \log^2 M,
\]
where $M$ is the number of degrees of freedom defined by (6.17).

Proof. It is noted that Theorem 6.1 requires local quasi-uniformity of the two-level mesh, and that the way the two-level geometric mesh is designed ensures this property. Therefore, that theorem still applies in this case. Let $J = \dim V_H + 1$ be the number of subdomains. Then it follows from Theorem 6.1 and (6.20) that
\[
\kappa(P_{\mathrm{ad}}) \lesssim (1 + \log p)^2 \max_{1\le i\le J}\Big(1 + \log\frac{H_i}{h_i}\Big) \lesssim (1 + \log p)^2.
\]
By the definition of $V_\sigma^K$ we have
\[
p = \max_{1\le j\le K} p_j = \max_{1\le j\le K} j = K. \tag{6.24}
\]
Hence $\kappa(P_{\mathrm{ad}}) \lesssim \log^2 K$. Since
\[
M = \dim V_\sigma^K = K^2 + K - 1, \tag{6.25}
\]
we have
6.2.2 A multilevel preconditioner The p-block can be decomposed as in (6.10). However, a multilevel decomposition of the h-block by (5.2) is impossible because of the structure of the geometric mesh. Nevertheless, √ dim(Vh ) = 2K M.
6.3 Results for the Weakly-Singular Integral Equation
139
Hence, the size of the subproblem on this h-block is small compared to the size of the p-block. Thus, a two-level preconditioner for the h-block, as designed in Subsection 6.2.1, in conjunction with the multilevel method for the p-block gives an efficient preconditioner for the hp-version with geometric meshes. We state the main theorem of this section. Theorem 6.7. With the two-level geometric mesh defined by (6.16) and (6.18), the condition number of the additive Schwarz operator Pad associated with the decomposition (6.22), (6.21), and (6.10) satisfies √ κ (Pad ) M log2 M, where M is the degrees of freedom defined by (6.17). Proof. Noting (6.24) and (6.25), we derive this theorem directly from Theorem 6.4 and Theorem 6.6.
6.3 Results for the Weakly-Singular Integral Equation

We recall the Galerkin equation for the weakly-singular integral equation:
\[
a_V(\psi_V, \phi) = \langle f, \phi\rangle, \qquad \phi \in V. \tag{6.26}
\]
Results for both types of meshes, namely quasi-uniform and geometric meshes, can be derived. In the case of quasi-uniform meshes (see Definition 6.1), the space $V$ is the space of polynomials of degree $p \ge 0$ on each subinterval $\Gamma_{ij}$, $j = 1,\ldots,N_i$, $i = 1,\ldots,J$. In the case of geometric meshes defined by (6.16) and (6.18), the space $V = V_\sigma^K$ consists of functions which are not necessarily continuous and whose restrictions to $(x_{j-1}^{(K)}, x_j^{(K)})$ are polynomials of degree $p_j = j-1$, $j = 1,\ldots,K$.

Let $V_h$ and $V_{\sigma,h}$ be the spaces of piecewise constant functions on the quasi-uniform mesh and the geometric mesh, respectively. We also define
\[
\widetilde V_p^0(\Gamma_{ij}) = \operatorname{span}\{L_{q,ij} : 1 \le q \le p\}
\quad\text{and}\quad
\widetilde V_p = \bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i}\widetilde V_p^0(\Gamma_{ij}),
\]
where $L_{q,ij}$ denotes the affine image of the Legendre polynomial of degree $q$ onto $\Gamma_{ij}$. Corresponding to the decompositions (6.3) and (6.22) we now have
\[
V = V_h \oplus \widetilde V_p \quad\text{and}\quad V_\sigma^K = V_{\sigma,h} \oplus \widetilde V_p.
\]
Different decompositions of $V_h$, $V_{\sigma,h}$, and $\widetilde V_p$ result in different preconditioners for (6.26). We obtain the following theorems, the proofs of which follow along the lines of the proofs in Chapter 3, Chapter 4, and Chapter 5, using the corresponding results for (6.1). We omit the proofs and refer interested readers to [120].
In the case of quasi-uniform meshes, we obtain the following results.

Theorem 6.8. The condition number of the additive Schwarz operator $P_{\mathrm{ad}}$ associated with the two-level decomposition
\[
V = V_h \oplus \Big(\bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i}\widetilde V_p^0(\Gamma_{ij})\Big),
\]
where $V_h$ is decomposed as in Definition 3.3, satisfies
\[
\kappa(P_{\mathrm{ad}}) \lesssim (1 + \log p)^2 \max_{1\le i\le J}\Big(1 + \log\frac{H_i}{h_i}\Big).
\]
The constant is independent of $N$ and $p$.

If we apply multilevel decompositions for both the $h$-block $V_h$ and the $p$-block $\widetilde V_p$, we obtain the following theorem.

Theorem 6.9. Let $\widetilde W_{ij}^q = \operatorname{span}\{L_{q,ij}\}$. If $P_{\mathrm{ad}}$ is the multilevel additive Schwarz operator associated with the decomposition
\[
V = V_h \oplus \Big(\bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i}\bigoplus_{q=1}^{p}\widetilde W_{ij}^q\Big),
\]
where the $h$-block $V_h$ is decomposed by (5.21), then its condition number satisfies
\[
\kappa(P_{\mathrm{ad}}) \lesssim (p+1)\big(1 + \log(p+1)\big)^2.
\]
The constant is independent of $N$ and $p$.

In the case of geometric meshes, we obtain the following results, recalling property (6.20) of this two-level mesh.

Theorem 6.10. If $P_{\mathrm{ad}}$ is the additive Schwarz operator associated with the two-level decomposition
\[
V_\sigma^K = V_{\sigma,h} \oplus \Big(\bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i}\widetilde V_p^0(\Gamma_{ij})\Big),
\]
where $V_{\sigma,h}$ is decomposed as in Definition 3.3, then the condition number satisfies
\[
\kappa(P_{\mathrm{ad}}) \lesssim \log^2 K \lesssim \log^2 M.
\]
The constants are independent of $K$ and $M$.

Theorem 6.11. If $P_{\mathrm{ad}}$ is the additive Schwarz operator associated with the multilevel decomposition
\[
V_\sigma^K = V_{\sigma,h} \oplus \Big(\bigoplus_{i=1}^{J}\bigoplus_{j=1}^{N_i}\bigoplus_{q=1}^{p}\widetilde W_{ij}^q\Big),
\]
where $V_{\sigma,h}$ is decomposed as in Definition 3.3, then the condition number satisfies
\[
\kappa(P_{\mathrm{ad}}) \lesssim \sqrt{M}\,\log^2 M.
\]
The constants are independent of $M$.
6.4 Numerical Results

The theoretical results in this chapter are corroborated by the numerical results presented in Chapter 9. The following table shows the relevant theorems and figures.

Equation                              | Preconditioner            | Theorem | Figures
Hypersingular                         | Two-level non-overlapping | 6.1     | 9.12–9.15
Hypersingular                         | Two-level overlapping     | 6.2     | 9.12–9.14, 9.16
Weakly-singular, quasi-uniform meshes | Two-level                 | 6.8     | 9.17
Weakly-singular, quasi-uniform meshes | Multilevel                | 6.9     | 9.17
Weakly-singular, geometric meshes     | Two-level                 | 6.10    | 9.18
Weakly-singular, geometric meshes     | Multilevel                | 6.11    | 9.18

Table 6.1 Theoretical results in Chapter 6 and relevant numerical results in Chapter 9.
Chapter 7
A Fully Discrete Method
A fully-discrete and symmetric method for the weakly-singular boundary integral equation is studied in [148]. In this chapter we discuss the use of additive Schwarz methods as preconditioners for the linear system arising from this discretization. Non-overlapping and overlapping two-level methods, as well as a multilevel method, are discussed. We prove that the condition numbers of the preconditioned systems grow at most logarithmically with the degrees of freedom. The results in this chapter were first reported in [221]. We note that for the problem considered in this chapter, due to the smoothness of the solution, a direct method may be sufficient for solving the linear system. However, this study is a step along the way to developing preconditioners for more demanding problems, where they are truly needed. Furthermore, the results in this chapter apply not only to the logarithmic boundary integral equation but also to any equation of the form (7.5) in which the bilinear form $A(\cdot,\cdot)$ is equivalent to the $H^{-1/2}$ norm. In the next section, we reintroduce the Dirichlet boundary value problem on a domain with smooth boundary and present a parametrization of the smooth boundary to derive the boundary integral equation. This derivation is similar to that in Chapter 1.
7.1 The Boundary Integral Equation and a Fully Discrete Method

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. E. P. Stephan and T. Tran, Schwarz Methods and Multilevel Preconditioners for Boundary Element Methods, https://doi.org/10.1007/978-3-030-79283-1_7

Let $\Gamma$ be a smooth closed curve in $\mathbb{R}^2$. The equation to be considered in this chapter has the form
$$
\frac{1}{2\pi} \int_\Gamma \phi(y) \log\frac{\alpha}{|x-y|} \, d\sigma_y = F(x), \qquad x \in \Gamma, \tag{7.1}
$$
where $\alpha$ is a positive parameter to be specified later. Let $\gamma$ be a 1-periodic parametric representation of $\Gamma$ such that $\gamma \in C^\infty$ and $|\gamma'(x)| \ne 0$ for $x \in \mathbb{R}$, and let
$$
u(x) := \frac{1}{2\pi}\, \phi[\gamma(x)]\, |\gamma'(x)|
\qquad\text{and}\qquad
g(x) := F[\gamma(x)].
$$
Then we can recast (7.1) as
$$
\int_0^1 K(x, y)\, u(y) \, dy = g(x), \qquad x \in [0, 1], \tag{7.2}
$$
where
$$
K(x, y) := \log\frac{\alpha}{|\gamma(x) - \gamma(y)|} \qquad \forall x, y \in [0, 1]. \tag{7.3}
$$
For all $s \in \mathbb{R}$, let $H^s$ denote the usual 1-periodic Sobolev space of order $s$, with norm denoted by $\|\cdot\|_s$. With the bilinear form $A(\cdot,\cdot) : H^{-1/2} \times H^{-1/2} \to \mathbb{R}$ defined by
$$
A(v, w) := \int_0^1 \int_0^1 K(x, y)\, v(x)\, w(y) \, dx \, dy, \qquad v, w \in H^{-1/2}, \tag{7.4}
$$
the weak form of (7.2) reads: find $u \in H^{-1/2}$ such that
$$
A(u, v) = (g, v) \qquad \forall v \in H^{-1/2}. \tag{7.5}
$$
Here, $(\cdot,\cdot)$ denotes the inner product in $L^2(0,1)$. The parameter $\alpha$ in the definition (7.3) of $K$ is chosen to satisfy
$$
\alpha \ge \text{logarithmic capacity of } \Gamma, \tag{7.6}
$$
so that there exists a positive constant $\lambda$ satisfying
$$
A(v, v) \ge \lambda \|v\|_{-1/2}^2 \qquad \forall v \in H^{-1/2}. \tag{7.7}
$$
For an explanation of the logarithmic capacity, see e.g. [121]. In practice, a simple way of guaranteeing that (7.6) holds is to choose $\alpha$ greater than the diameter of $\Gamma$.
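As a small illustration (a sketch using the ellipse example of Section 7.5.2; the numbers are ours, not part of the method), both sufficient choices of $\alpha$ can be checked directly: for an ellipse with semi-axes $a_1$ and $a_2$ the logarithmic capacity is $(a_1+a_2)/2$, while the diameter is $2\max(a_1,a_2)$.

```python
# Ellipse of Section 7.5.2: semi-axes a1 = 4, a2 = 2
a1, a2 = 4.0, 2.0

capacity = (a1 + a2) / 2      # logarithmic capacity of this ellipse: 3.0
diameter = 2 * max(a1, a2)    # diameter of Gamma: 8.0

alpha = 3.5                   # value used in the experiments; satisfies (7.6)
assert alpha >= capacity

# Choosing alpha greater than the diameter (e.g. alpha = 8.5) is the simpler
# sufficient condition, requiring no capacity computation.
assert 8.5 > diameter
```

Note that the experimental value $\alpha = 3.5$ exceeds the capacity but not the diameter, so the diameter criterion is sufficient but not necessary.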
7.2 The Fully-Discrete and Symmetric Method

The approximate solution $u_h^*$ of $u$ satisfies
$$
A(u_h^*, v) = (g, v) \qquad \forall v \in S_h, \tag{7.8}
$$
where $S_h$ is a finite-dimensional subspace of $H^{-1/2}$. The stiffness matrix and the right-hand side of the matrix system arising from (7.8) involve complicated integrals that usually cannot be evaluated analytically. Hence, we are led to consider a fully-discrete Galerkin method in which the numerical solution $u_h \in S_h$ satisfies
$$
A_h(u_h, v) = \langle g, v \rangle \qquad \forall v \in S_h. \tag{7.9}
$$
Here $A_h$ and $\langle\cdot,\cdot\rangle$ are certain discrete approximations to $A$ and $(\cdot,\cdot)$. The forms $A_h$ and $\langle\cdot,\cdot\rangle$ are constructed in such a way that $u_h$ achieves the same rates of convergence as $u_h^*$ in the appropriate Sobolev norms. In [148], $A_h$ is designed to preserve the symmetry of the exact problem, so that $A_h(v, w) = A_h(w, v)$ for all $v, w \in S_h$. We choose $S_h$ to be the space of piecewise-constant functions on a 1-periodic uniform mesh with $N$ subintervals between 0 and 1 defined by
$$
t_j := jh, \qquad h := \frac{1}{N}, \qquad j = 0, \dots, N.
$$
The discrete bilinear form $A_h(\cdot,\cdot)$ and the discrete inner product $\langle\cdot,\cdot\rangle$ are defined as follows; see [148, Section 5]. Let either
$$
a = 1/2, \quad b = 0.1262814358793956, \quad c = 0.6841827226000933, \quad d = 1/6,
$$
or
$$
a = 1/2, \quad b = 0.4992036265530720, \quad c = 0.0398933386986113, \quad d = 1/6,
$$
and let
$$
s_1 = s_6 = a, \qquad s_2 = s_5 = 0, \qquad s_3 = s_4 = -a,
$$
$$
\sigma_1 = -\sigma_6 = b, \qquad \sigma_2 = -\sigma_5 = c, \qquad \sigma_3 = -\sigma_4 = b,
$$
$$
w_1 = w_6 = d, \qquad w_2 = w_5 = 1/2 - 2d, \qquad w_3 = w_4 = d.
$$
Let the quadrature weights be $\{w_1, \dots, w_6\}$ and the quadrature points be
$$
(\xi_p, \eta_p) := \tfrac{1}{2}\bigl(1 + s_p - \sigma_p,\; 1 + s_p + \sigma_p\bigr), \qquad p = 1, \dots, 6.
$$
We define
$$
A_h(v, w) := h^2 \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} \sum_{p=1}^{6} w_p\, K(x_{jp}, y_{kp})\, v(y_{kp})\, w(x_{jp}),
$$
where
$$
x_{jp} := t_j + \xi_p h \qquad\text{and}\qquad y_{kp} := t_k + \eta_p h.
$$
We note that the quadrature points $(\xi_p, \eta_p)$, $p = 1, \dots, 6$, are symmetric about both diagonals of the unit square. We next define the discrete inner product $\langle\cdot,\cdot\rangle$ as a two-point Gauss–Legendre quadrature rule of degree of precision 3, with quadrature points $\bar\xi_j$ and weights $\bar w_j$, $j = 1, 2$, given by
$$
\bar\xi_1 = \frac{1}{2}\Bigl(1 - \frac{1}{\sqrt{3}}\Bigr), \qquad
\bar\xi_2 = \frac{1}{2}\Bigl(1 + \frac{1}{\sqrt{3}}\Bigr), \qquad
\bar w_1 = \bar w_2 = 1/2.
$$
Thus the discrete inner product is defined by
$$
\langle v, w \rangle := h \sum_{j=0}^{N-1} \sum_{p=1}^{2} \bar w_p\, v(\bar x_{jp})\, w(\bar x_{jp}),
$$
where $\bar x_{jp} := t_j + \bar\xi_p h$. With $A_h$ and $\langle\cdot,\cdot\rangle$ so defined, Theorem 5.1 in [148] yields, for $h$ sufficiently small, the unique existence of the solution $u_h \in S_h$ of (7.9), which satisfies
$$
\|u_h - u\|_s \le c\, h^{t-s}\, \|u\|_{t + \max(-1-s,\,0)}
$$
for $s < 1/2$, $-1/2 < t$, and $-1 \le s \le t \le 1$. It is also shown in [148] that the discrete Galerkin matrix is positive definite. It is noted that the method designed above is of translation-invariant order 3 and that the orders of precision of the left-hand side and right-hand side are both at least 3; see [148, Definition 4.2 and page 337]. Before describing the preconditioners, we prove the following lemma, which yields the equivalence of the two bilinear forms $A$ and $A_h$ on $S_h \times S_h$.

Lemma 7.1. The following statement holds:
$$
A(v, v) \simeq A_h(v, v) \qquad \forall v \in S_h.
$$
The constants are independent of $h$.

Proof. The boundedness of the form $A(\cdot,\cdot)$ together with Theorem 2.1 (i) and Corollary 3.9 in [148] immediately gives $A(v, v) \lesssim A_h(v, v)$. To prove $A_h(v, v) \lesssim A(v, v)$, we first note that Lemmas 4.4 and 3.5 in [148] yield, respectively (noting Theorem 2.1 (v) in the same paper),
$$
|a_h(v, v)| \lesssim |a(v, v)| + \|v\|_{-1/2}^2 \lesssim \|v\|_{-1/2}^2 \qquad \forall v \in S_h
$$
and
$$
|b_h(v, v)| \lesssim |b(v, v)| + \|v\|_{-1/2}^2 \qquad \forall v \in S_h,
$$
where $a_h(\cdot,\cdot)$ and $b_h(\cdot,\cdot)$ are defined as in [148] so that $A_h(v, w) = a_h(v, w) + b_h(v, w)$ for all $v, w \in S_h$. Therefore, by using (7.7) we obtain, for all $v \in S_h$,
$$
A_h(v, v) \le |a_h(v, v)| + |b_h(v, v)| \lesssim \|v\|_{-1/2}^2 + |b(v, v)| \lesssim \|v\|_{-1/2}^2 + A(v, v) \lesssim A(v, v).
$$
7.3 Two-level Methods

For simplicity, we assume in this chapter a two-level uniform mesh defined as follows.

The coarse mesh: We first divide $I$ into disjoint subdomains $I_i$ of length $H$, $i = 1, \dots, J$, so that $\bar I = \bigcup_{i=1}^{J} \bar I_i$.

The fine mesh: Each $I_i$ is further divided into disjoint subintervals $I_{ij}$ of length $h$, $j = 1, \dots, N$, so that $\bar I_i = \bigcup_{j=1}^{N} \bar I_{ij}$.

Recall that the finite-dimensional space $S = S_h$ is defined as the space of 1-periodic piecewise-constant functions on the fine mesh. We define, for both the non-overlapping and overlapping methods, the space $S_0$ as the space of 1-periodic piecewise-constant functions on the coarse mesh. As in Chapter 3, the subspaces $S_i$, $i = 1, \dots, J$, will be defined as spaces of functions which are derivatives of piecewise-linear functions. On the fine mesh we also define the space $V$ of continuous piecewise-linear functions.
7.3.1 A non-overlapping method

Let
$$
V_i := \bigl\{ v \in V \;:\; \operatorname{supp} v \subset \bar I_i \bigr\}, \qquad i = 1, \dots, J,
$$
and let
$$
S_i := \bigl\{ w \;:\; w = v' \text{ for some } v \in V_i \bigr\}, \qquad i = 1, \dots, J. \tag{7.10}
$$
Bases for $S_i$ can be formed by the Haar basis functions; see Chapter 3. The decomposition of $S$ by
$$
S = S_0 + \dots + S_J, \tag{7.11}
$$
with $S_i$ defined as above, allows us to define the projections $P_j : S \to S_j$ by
$$
L(P_j v, w) = L(v, w) \qquad \forall v \in S, \; w \in S_j, \; j = 0, \dots, J. \tag{7.12}
$$
As usual, the additive Schwarz operator is defined by
$$
P := \sum_{j=0}^{J} P_j. \tag{7.13}
$$
When $L = A$ (Galerkin approximation), we denote $P$ by $P_G$. When $L = A_h$ (fully-discrete method), we denote $P$ by $P_F$.

Theorem 7.1. The condition number of the non-overlapping additive Schwarz operator $P_F$ is bounded as
$$
\kappa(P_F) \lesssim 1 + \log\frac{H}{h}.
$$
Proof. Lemma 3.4 and Lemma 3.5 in Chapter 3 are proved for the bilinear form defined with $L = A$. Due to Lemma 7.1, the same results hold for $L = A_h$. Invoking Theorem 2.1, we obtain the desired result.
7.3.2 An overlapping method

We now extend each subdomain $I_i$ on each side by a fixed number of subintervals, so that the length of the overlap between two extended subdomains $\tilde I_i$ and $\tilde I_{i+1}$ is $2\delta$ for some $\delta \in (0, H]$ which is a multiple of $h$. This implies $|\tilde I_i| \sim H$. Let $\tilde V_i$, $i = 1, \dots, J$, be defined by
$$
\tilde V_i := \bigl\{ v \in V \;:\; \operatorname{supp} v \subset \bar{\tilde I}_i \bigr\}, \qquad i = 1, \dots, J.
$$
Then the subspaces $S_i$, $i = 1, \dots, J$, are defined as in (7.10). The subspace decomposition (7.11) using these subspaces yields an overlapping decomposition which, together with (7.12) and (7.13), completely defines the additive Schwarz operators $P_G$ and $P_F$.
Theorem 7.2. The condition number of the overlapping additive Schwarz operator $P_F$ is bounded as
$$
\kappa(P_F) \lesssim 1 + \log^2\frac{H}{\delta}.
$$
Proof. Similarly to the proof of Theorem 7.1, this is a consequence of Lemma 3.6, Lemma 3.8, Lemma 7.1, and Theorem 2.1.
7.4 A Multilevel Method

The multilevel method is defined in exactly the same manner as in Chapter 5. The following theorem then follows from the results obtained in that chapter and Lemma 7.1.

Theorem 7.3. The condition number of the multilevel additive Schwarz operator $P_F$ is bounded independently of the number of levels $L$ and the number of mesh points.
7.5 Numerical Experiments

7.5.1 Implementation issues

We briefly discuss in this section how the two-level methods are implemented. The implementation of the multilevel method is discussed in [227, 244]. Let $\varphi_j$ be the brick function on the interval $(t_{j-1}, t_j)$, and let $A_{kl}$, $k, l = 1, \dots, J$, be the sub-blocks of the stiffness matrix $A$ arising from (7.9), defined by $A_{kl} := (a_{ij}^{kl})$ with $a_{ij}^{kl} = A_h(\varphi_i, \varphi_j)$. Here $\varphi_i$ and $\varphi_j$ are brick functions such that $\operatorname{supp}\varphi_i \subset \bar I_k$
and $\operatorname{supp}\varphi_j \subset \bar I_l$ for the non-overlapping method, and $\operatorname{supp}\varphi_i \subset \bar{\tilde I}_k$ and $\operatorname{supp}\varphi_j \subset \bar{\tilde I}_l$ for the overlapping method. Then $A$ has the form
$$
\begin{bmatrix}
A_{11} & A_{12} & \dots & A_{1J} \\
       & A_{22} & \dots & A_{2J} \\
       &        & \ddots & \vdots \\
       &        &        & A_{JJ}
\end{bmatrix}.
$$
The matrix representation $B$ of the non-overlapping preconditioner then has the form
$$
\begin{bmatrix}
A_{11}^{-1} &             &        &             \\
            & A_{22}^{-1} &        &             \\
            &             & \ddots &             \\
            &             &        & A_{JJ}^{-1}
\end{bmatrix}
+ R_0 A_0^{-1} R_0^\top,
$$
where $A_0$ is the stiffness matrix on the coarse mesh level, $R_0$ is the matrix representation of the interpolation operator from the coarse mesh level to the fine mesh level, and $R_0^\top$ is its transpose; see Subsection 2.3.2.1. Let $\mathbf{v}$ be the vector representation of a function $v = \sum_{i=0}^{J} v_i \in S$, $v_i \in S_i$. In the implementation, there is no need to compute the matrix $B$ explicitly. To compute $B\mathbf{v}$, it suffices to solve the local problems $A_{ii} \mathbf{w}_i = \mathbf{v}_i$, $i = 1, \dots, J$, and the global problem $A_0 \mathbf{w}_0 = R_0^\top \mathbf{v}_0$, and then compute
$$
B\mathbf{v} = R_0 \mathbf{w}_0 + \sum_{i=1}^{J} \mathbf{w}_i .
$$
However, before doing so, one has to transform the representation of $v$ from the usual basis to the Haar basis; see Section 7.3. After $B\mathbf{v}$ is computed, a reverse transform back to the usual basis is necessary.
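The steps above can be sketched as follows (pure Python with dense solves and made-up toy data; the Haar-basis transform is omitted and, as an extra simplifying assumption, the coarse matrix $A_0$ is formed as the Galerkin product $R_0^\top A R_0$ rather than assembled on the coarse mesh, as the book does):

```python
def solve(A, rhs):
    """Solve A x = rhs by Gaussian elimination with partial pivoting."""
    n = len(rhs)
    M = [list(row) + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def apply_B(A, subdomains, R0, v):
    """B v = sum_i extend(A_ii^{-1} v_i) + R0 A0^{-1} R0^T v."""
    n = len(v)
    out = [0.0] * n
    for idx in subdomains:                       # local solves A_ii w_i = v_i
        Aii = [[A[i][j] for j in idx] for i in idx]
        wi = solve(Aii, [v[i] for i in idx])
        for pos, i in enumerate(idx):
            out[i] += wi[pos]
    J = len(R0[0])                               # coarse solve A0 w0 = R0^T v
    rhs0 = [sum(R0[i][k] * v[i] for i in range(n)) for k in range(J)]
    AR0 = [[sum(A[i][j] * R0[j][k] for j in range(n)) for k in range(J)]
           for i in range(n)]
    A0 = [[sum(R0[i][k] * AR0[i][l] for i in range(n)) for l in range(J)]
          for k in range(J)]
    w0 = solve(A0, rhs0)
    for i in range(n):
        out[i] += sum(R0[i][k] * w0[k] for k in range(J))
    return out

# toy data: 8 fine dofs, 2 subdomains, piecewise-constant interpolation R0
n, subs = 8, [list(range(4)), list(range(4, 8))]
R0 = [[1.0 if i // 4 == k else 0.0 for k in range(2)] for i in range(n)]
A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity, for checking
Bv = apply_B(A, subs, R0, [1.0] * n)
```

With $A = I$ the local solves return $v$ itself and the coarse correction adds another copy, so `Bv` equals $2v$; in the actual method $A$ is the matrix of $A_h$ in the Haar basis and $A_0$ is assembled on the coarse mesh.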
7.5.2 Numerical results

Equation (7.1) is the reformulation (by the boundary integral equation method) of the boundary value problem for the Laplace equation $\Delta U = 0$ on $\Omega \subset \mathbb{R}^2$, subject to a Dirichlet boundary condition $U = f$ on $\Gamma = \partial\Omega$; see Chapter 1. In our numerical experiment we take $\Gamma$ to be the ellipse with semi-axes $a_1 = 4$ and $a_2 = 2$. The boundary data are chosen to be $f(x) = |x_1 + x_2^2|$, where $x = (x_1, x_2)$. The right-hand side $F(x)$ of (7.1) is given by
$$
F(x) := \frac{1}{2} f(x) + \frac{1}{2\pi} \int_\Gamma f(y)\, \frac{\partial}{\partial \nu_y} \log\frac{1}{|x-y|} \, d\sigma_y, \qquad x \in \Gamma.
$$
The logarithmic capacity of $\Gamma$ in this case is $(a_1 + a_2)/2 = 3$. We chose $\alpha = 3.5$. The iterations were stopped when the $l^2$-norm of the residual was less than $10^{-9}$, and all calculations were performed in double precision. The Lanczos algorithm was used to compute the condition numbers. Table 7.1 shows the condition numbers and numbers of iterations for the unpreconditioned systems and for the systems preconditioned by the non-overlapping, overlapping, and multilevel methods designed in Section 7.3 and Section 7.4. Table 7.2 shows the CPU times for each of these methods.
      |              Condition numbers              |   Iterations
DoF   |    (1)       (2)       (3)       (4)        | (1) (2) (3) (4)
4     | 0.366E+01 0.366E+01 0.366E+01 0.234E+01 |  2   2   2   2
8     | 0.474E+01 0.474E+01 0.474E+01 0.457E+01 |  4   4   4   4
16    | 0.116E+02 0.193E+02 0.474E+01 0.665E+01 |  7   8   8   7
32    | 0.242E+02 0.253E+02 0.315E+01 0.290E+01 | 15  12  11  10
64    | 0.489E+02 0.319E+02 0.281E+01 0.105E+02 | 24  14  10  13
128   | 0.977E+02 0.389E+02 0.105E+02 0.122E+02 | 34  15  12  13
256   | 0.195E+03 0.467E+02 0.115E+02 0.144E+02 | 47  16  13  14
512   | 0.390E+03 0.550E+02 0.128E+02 0.164E+02 | 64  17  13  14
1024  | 0.779E+03 0.641E+02 0.145E+02 0.183E+02 | 87  18  14  14

Table 7.1 Condition numbers and iteration counts. (1): unpreconditioned, (2): preconditioned by the non-overlapping method, (3): preconditioned by the overlapping method, (4): preconditioned by the multilevel method
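A quick sanity check on the tabulated data (a sketch reusing the numbers from columns (1) and (4) of Table 7.1): the unpreconditioned condition numbers roughly double with each doubling of the degrees of freedom, i.e. $\kappa = O(N)$, while the multilevel-preconditioned ones grow by small increments only, consistent with Theorem 7.3.

```python
dof        = [64, 128, 256, 512, 1024]
unprecond  = [48.9, 97.7, 195.0, 390.0, 779.0]   # column (1) of Table 7.1
multilevel = [10.5, 12.2, 14.4, 16.4, 18.3]      # column (4) of Table 7.1

# each doubling of DoF roughly doubles the unpreconditioned condition number
ratios = [unprecond[i + 1] / unprecond[i] for i in range(4)]
assert all(1.9 < r < 2.1 for r in ratios)

# the multilevel-preconditioned condition numbers grow only by small increments
growth = [multilevel[i + 1] - multilevel[i] for i in range(4)]
assert all(g < 2.3 for g in growth)
```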
      |                  CPU times
DoF   |    (1)       (2)       (3)       (4)
4     | 0.635E-01 0.704E-03 0.669E-03 0.648E-01
8     | 0.106E-02 0.922E-03 0.980E-03 0.468E-02
16    | 0.153E-02 0.201E-01 0.219E-01 0.106E-01
32    | 0.410E-02 0.343E-01 0.371E-01 0.206E-01
64    | 0.142E-01 0.615E-01 0.536E-01 0.396E-01
128   | 0.572E-01 0.149E+00 0.169E+00 0.695E-01
256   | 0.594E+00 0.608E+00 0.533E+00 0.264E+00
512   | 0.719E+01 0.359E+01 0.281E+01 0.165E+01
1024  | 0.520E+02 0.188E+02 0.150E+02 0.878E+01

Table 7.2 CPU times. (1): unpreconditioned, (2): preconditioned by the non-overlapping method, (3): preconditioned by the overlapping method, (4): preconditioned by the multilevel method
Chapter 8
Indefinite Problems
In this chapter we design additive Schwarz methods for boundary integral equations arising from the Helmholtz equation in 2D; see Section 1.1.1. The integral operators take the forms
$$
V_\kappa \varphi(x) = \frac{i}{2} \int_\Gamma H_0^{(1)}(\kappa |x-y|)\, \varphi(y) \, ds_y, \qquad x \in \Gamma,
$$
$$
W_\kappa \varphi(x) = -\frac{i}{2} \frac{\partial}{\partial n_x} \int_\Gamma \frac{\partial}{\partial n_y} H_0^{(1)}(\kappa |x-y|)\, \varphi(y) \, ds_y, \qquad x \in \Gamma,
$$
with $\kappa$ a nonzero real number and $H_0^{(1)}$ the first-kind Hankel function of order zero. The corresponding equations (1.1) and (1.2) admit the following weak formulation: Given $\ell \in H^{-m}(\Gamma)$, find $\varphi \in \widetilde H^m(\Gamma)$ satisfying
$$
a(\varphi, \psi) = \langle \ell, \psi \rangle \qquad \forall \psi \in \widetilde H^m(\Gamma),
$$
where
$$
a(\varphi, \psi) = a_V(\varphi, \psi) = \langle V_\kappa \varphi, \psi \rangle, \qquad \varphi, \psi \in \widetilde H^{-1/2}(\Gamma), \quad m = -1/2, \tag{8.1}
$$
or
$$
a(\varphi, \psi) = a_W(\varphi, \psi) = \langle W_\kappa \varphi, \psi \rangle, \qquad \varphi, \psi \in \widetilde H^{1/2}(\Gamma), \quad m = 1/2. \tag{8.2}
$$
In the next section we present the general theory for indefinite problems satisfying some required assumptions. In subsequent sections we show that the hypersingular equation (8.2) and the weakly-singular equation (8.1) satisfy these assumptions, and we then apply the proposed preconditioners to these equations.
8.1 General Theory for Indefinite Problems

In this section we consider the following general problem: Given $\ell \in H^{-m}(\Gamma)$, $m \in \mathbb{R}$, find $\varphi \in \widetilde H^m(\Gamma)$ satisfying
$$
a(\varphi, \psi) = \langle \ell, \psi \rangle \qquad \forall \psi \in \widetilde H^m(\Gamma), \tag{8.3}
$$
where the bilinear form $a(\cdot,\cdot) : \widetilde H^m(\Gamma) \times \widetilde H^m(\Gamma) \to \mathbb{C}$ satisfies several assumptions stated below. Let $V$ be a finite-dimensional subspace of $\widetilde H^m(\Gamma)$. The solution $\varphi$ is approximated by $u \in V$ satisfying
$$
a(u, v) = \langle \ell, v \rangle \qquad \forall v \in V. \tag{8.4}
$$
8.1.1 Assumptions

First we assume that the bilinear form $a(\cdot,\cdot)$ satisfies Assumption 2.2 (page 14), which we recall here for the reader's convenience.

Assumption 8.1.
(i) The bilinear form $a(\cdot,\cdot)$ can be decomposed as $a(\cdot,\cdot) = b(\cdot,\cdot) + k(\cdot,\cdot)$, where the bilinear form $b(\cdot,\cdot) : \widetilde H^m(\Gamma) \times \widetilde H^m(\Gamma) \to \mathbb{R}$ is symmetric and the bilinear form $k(\cdot,\cdot) : \widetilde H^m(\Gamma) \times \widetilde H^m(\Gamma) \to \mathbb{C}$ is skew-symmetric;
(ii) There exists a positive constant $C_{B_1}$ such that for any $u, v \in \widetilde H^m(\Gamma)$
$$
|b(u, v)| \le C_{B_1} \|u\|_{\widetilde H^m(\Gamma)} \|v\|_{\widetilde H^m(\Gamma)},
$$
$$
|k(u, v)| \le C_{B_1} \|u\|_{\widetilde H^{m-1/2}(\Gamma)} \|v\|_{\widetilde H^m(\Gamma)},
\qquad
|k(u, v)| \le C_{B_1} \|u\|_{\widetilde H^m(\Gamma)} \|v\|_{\widetilde H^{m-1/2}(\Gamma)};
$$
(iii) There exist positive constants $C_{B_2}$, $C_{B_3}$, and $C_{B_4}$ such that for any $u \in \widetilde H^m(\Gamma)$
$$
b(u, u) \ge C_{B_2} \|u\|_{\widetilde H^m(\Gamma)}^2,
$$
$$
\Re\bigl(a(u, u)\bigr) \ge C_{B_3} \|u\|_{\widetilde H^m(\Gamma)}^2 - C_{B_4} \|u\|_{\widetilde H^{m-1/2}(\Gamma)}^2 \qquad \text{(Gårding's inequality).}
$$
Assumption 8.1 ensures the existence and uniqueness of solutions of (8.3) and (8.4). As usual, domain decomposition methods for (8.4) are designed based on subspace decompositions of $V$. This space is assumed to have the following properties.

Assumption 8.2. The space $V$ is decomposed by $V = V_0 + \dots + V_J$, where $V_j$, $j = 0, \dots, J$, are subspaces of $V$ such that
$$
V_j = \bigl\{ v \in V : \operatorname{supp} v \subset \bar\Gamma_j \bigr\}, \qquad j = 1, \dots, J.
$$
Here $\Gamma_j$, $j = 1, \dots, J$, are subdomains satisfying $\bar\Gamma = \bar\Gamma_1 \cup \dots \cup \bar\Gamma_J$. These subdomains may or may not overlap. Furthermore, we assume that
(i) there exists a positive constant $C_V$ such that for all $v_j \in V_j$, $j = 1, \dots, J$, the following statement holds with $v := v_1 + \dots + v_J$:
$$
b(v, v) \le C_V \sum_{j=1}^{J} b(v_j, v_j); \tag{8.5}
$$
(ii) each $v \in V$ admits a decomposition $v = v_0 + \dots + v_J$ with $v_j \in V_j$ satisfying
$$
\sum_{j=0}^{J} b(v_j, v_j) \le \widetilde C_V\, b(v, v). \tag{8.6}
$$
The constants $C_V$ and $\widetilde C_V$ are independent of $v$ and $v_j$, but may depend on other parameters in the problem.
Finally, we make the following assumption on the relation between the symmetric positive definite bilinear form $b(\cdot,\cdot)$ and the non-symmetric, non-positive definite bilinear form $a(\cdot,\cdot)$.

Assumption 8.3. There exists a positive constant $C_b$ such that for any $v_j \in V_j$ with $j = 1, \dots, J$, the following inequality holds:
$$
b(v_j, v_j) \le C_b\, \Re\bigl(a(v_j, v_j)\bigr).
$$
In Chapter 2, Section 2.7, we presented the convergence theory for preconditioners to be implemented with GMRES. In the next subsection, we design preconditioners that yield the upper and lower bounds in (2.50). This in turn gives the residual estimates (2.51) and (2.52).
8.1.2 Additive Schwarz operators

Lemma 8.1. Under Assumption 8.1, the following projections are well defined:
$$
Q_j : V \to V_j, \qquad a(Q_j w, v_j) = a(w, v_j) \quad \forall w \in V, \; v_j \in V_j, \; j = 0, \dots, J,
$$
$$
P_j : V \to V_j, \qquad b(P_j w, v_j) = \Re\bigl(a(w, v_j)\bigr) \quad \forall w \in V, \; v_j \in V_j, \; j = 1, \dots, J.
$$
Moreover,
$$
\|Q_0 v\|_{\widetilde H^m(\Gamma)} \le C_{Q_0} \|v\|_{\widetilde H^m(\Gamma)} \qquad \forall v \in \widetilde H^m(\Gamma).
$$
Proof. The results are direct consequences of Gårding's inequality (Assumption 8.1 (iii)).
We make the following assumption on the projection $Q_0$.

Assumption 8.4. There exists a positive function $w(\alpha)$ satisfying
$$
\lim_{\alpha \to 0^+} w(\alpha) = 0
$$
such that for any $v \in V$ the following inequality holds:
$$
\|v - Q_0 v\|_{\widetilde H^{m-1/2}(\Gamma)} \le w^{1/2}(\alpha)\, \|v\|_{\widetilde H^m(\Gamma)}.
$$
(In the next section, $\alpha$ is the coarse mesh size $H$ for the $h$-version and the fine mesh size $h$ for the $p$-version.)

Since $a(\cdot,\cdot)$ is non-symmetric, so are $Q_j$ and $P_j$, resulting in the non-symmetry of the following Schwarz operators.

Definition 8.1. The additive Schwarz operators are defined by
$$
Q^{(1)} = Q_0 + Q_1 + \dots + Q_J
\qquad\text{and}\qquad
Q^{(2)} = Q_0 + P_1 + \dots + P_J.
$$
We can now design two algorithms from the operators $Q^{(1)}$ and $Q^{(2)}$.

Algorithm 8.1: Preconditioner $Q^{(1)}$. Find $u \in V$ satisfying
$$
Q^{(1)} u = g^{(1)}, \tag{8.7}
$$
where $g^{(1)} \in V$ is defined by
$$
g^{(1)} := \sum_{j=0}^{J} g_j^{(1)}
\qquad\text{with}\qquad
a(g_j^{(1)}, v_j) = \langle \ell, v_j \rangle \quad \forall v_j \in V_j, \; j = 0, \dots, J.
$$

Algorithm 8.2: Preconditioner $Q^{(2)}$. Find $u \in V$ satisfying
$$
Q^{(2)} u = g^{(2)}, \tag{8.8}
$$
where $g^{(2)} \in V$ is defined by
$$
g^{(2)} := g_0^{(1)} + \sum_{j=1}^{J} g_j^{(2)}
\qquad\text{with}\qquad
b(g_j^{(2)}, v_j) = \langle \ell, v_j \rangle \quad \forall v_j \in V_j, \; j = 1, \dots, J.
$$
Lemma 8.2. Under Assumptions 8.1–8.3, there exist positive constants $c_Q$, $C_Q$, $c_P$, and $C_P$ satisfying
$$
c_Q\, b(v, v) \le \sum_{j=0}^{J} b(Q_j v, Q_j v) \le C_Q\, b(v, v) \qquad \forall v \in V \tag{8.9}
$$
and
$$
c_P\, b(v, v) \le b(Q_0 v, Q_0 v) + \sum_{j=1}^{J} b(P_j v, P_j v) \le C_P\, b(v, v) \qquad \forall v \in V. \tag{8.10}
$$
Proof. The following inequality, obtained from Assumption 8.1 (i)–(ii), will be frequently used in the proof:
$$
|a(u, v)| \le 2 C_{B_1} \|u\|_{\widetilde H^m(\Gamma)} \|v\|_{\widetilde H^m(\Gamma)} \qquad \forall u, v \in \widetilde H^m(\Gamma). \tag{8.11}
$$
We first estimate $b(Q_0 v, Q_0 v)$, which is required in both (8.9) and (8.10). Assumption 8.1 (ii)–(iii) and Lemma 8.1 give
$$
b(Q_0 v, Q_0 v) \le C_{Q_0}^2 C_{B_1} C_{B_2}^{-1}\, b(v, v). \tag{8.12}
$$
To prove the right inequality in (8.9), we define $\widetilde Q := Q_1 + \dots + Q_J$ and use successively Assumption 8.3, the definition of $Q_j$, (8.11), Assumption 8.1 (iii), and Assumption 8.2 (i) to obtain (with generic constants)
$$
\sum_{j=1}^{J} b(Q_j v, Q_j v)
\lesssim \sum_{j=1}^{J} \Re\bigl(a(Q_j v, Q_j v)\bigr)
= \sum_{j=1}^{J} \Re\bigl(a(v, Q_j v)\bigr)
= \Re\bigl(a(v, \widetilde Q v)\bigr)
\le \bigl|a(v, \widetilde Q v)\bigr|
\lesssim \|v\|_{\widetilde H^m(\Gamma)} \|\widetilde Q v\|_{\widetilde H^m(\Gamma)}
\lesssim b(v, v)^{1/2}\, b(\widetilde Q v, \widetilde Q v)^{1/2}
\lesssim b(v, v)^{1/2} \Bigl( \sum_{j=1}^{J} b(Q_j v, Q_j v) \Bigr)^{1/2},
$$
so that
$$
\sum_{j=1}^{J} b(Q_j v, Q_j v) \lesssim b(v, v),
$$
where the constant depends only on $C_b$, $C_{B_1}$, $C_{B_2}$, and $C_V$. Combining the above estimate with (8.12), we obtain the right-hand side of (8.9) with
$$
C_Q = C_Q(C_{Q_0}, C_{B_1}, C_{B_2}, C_b, C_V). \tag{8.13}
$$
Similarly, if $\widetilde P := P_1 + \dots + P_J$, then we have from the definition of $P_j$, (8.11), Assumption 8.1 (iii), and Assumption 8.2 (i) that
$$
\sum_{j=1}^{J} b(P_j v, P_j v)
= \sum_{j=1}^{J} \Re\bigl(a(v, P_j v)\bigr)
= \Re\bigl(a(v, \widetilde P v)\bigr)
\le \bigl|a(v, \widetilde P v)\bigr|
\lesssim b(v, v)^{1/2}\, b(\widetilde P v, \widetilde P v)^{1/2}
\lesssim b(v, v)^{1/2} \Bigl( \sum_{j=1}^{J} b(P_j v, P_j v) \Bigr)^{1/2},
$$
implying
$$
\sum_{j=1}^{J} b(P_j v, P_j v) \lesssim b(v, v).
$$
This inequality and (8.12) yield the right inequality in (8.10) with
$$
C_P = C_P(C_{Q_0}, C_{B_1}, C_{B_2}, C_b, C_V).
$$
To prove the left inequality in (8.9), we employ Gårding's inequality and the boundedness of $b(\cdot,\cdot)$ (see Assumption 8.1 (ii)) to obtain
$$
b(v, v) \le C_{B_5} \Bigl( \Re\bigl(a(v, v)\bigr) + \|v\|_{\widetilde H^{m-1/2}(\Gamma)}^2 \Bigr), \tag{8.14}
$$
where $C_{B_5}$ depends on $C_{B_1}$, $C_{B_3}$, and $C_{B_4}$. Let $v = v_0 + \dots + v_J$ be any decomposition of $v \in V$, where $v_j \in V_j$, $j = 0, \dots, J$. Then the definition of $Q_j$, (8.11), the Cauchy–Schwarz inequality, and Assumption 8.1 (iii) give
$$
\Re\bigl(a(v, v)\bigr)
\le \sum_{j=0}^{J} |a(v, v_j)|
= \sum_{j=0}^{J} |a(Q_j v, v_j)|
\le 2 C_{B_1} \sum_{j=0}^{J} \|Q_j v\|_{\widetilde H^m(\Gamma)} \|v_j\|_{\widetilde H^m(\Gamma)}
\le 2 C_{B_1} \Bigl( \sum_{j=0}^{J} \|Q_j v\|_{\widetilde H^m(\Gamma)}^2 \Bigr)^{1/2} \Bigl( \sum_{j=0}^{J} \|v_j\|_{\widetilde H^m(\Gamma)}^2 \Bigr)^{1/2}
\le 2 C_{B_1} C_{B_2}^{-1} \Bigl( \sum_{j=0}^{J} b(Q_j v, Q_j v) \Bigr)^{1/2} \Bigl( \sum_{j=0}^{J} b(v_j, v_j) \Bigr)^{1/2}.
$$
In particular, if the decomposition $v = v_0 + \dots + v_J$ is the one given by Assumption 8.2 (ii), then
$$
\Re\bigl(a(v, v)\bigr) \le 2 C_{B_1} C_{B_2}^{-1} \widetilde C_V^{1/2} \Bigl( \sum_{j=0}^{J} b(Q_j v, Q_j v) \Bigr)^{1/2} b(v, v)^{1/2}. \tag{8.15}
$$
The final term in (8.14) can be estimated by using the triangle inequality, Lemma 8.1, and Assumption 8.4 as follows:
$$
\|v\|_{\widetilde H^{m-1/2}(\Gamma)}^2
\le 2 \|v - Q_0 v\|_{\widetilde H^{m-1/2}(\Gamma)}^2 + 2 \|Q_0 v\|_{\widetilde H^{m-1/2}(\Gamma)}^2
\le 2 C_{B_2}^{-1} w(\alpha)\, b(v, v) + 2 C_{B_2}^{-1}\, b(Q_0 v, Q_0 v)^{1/2} b(v, v)^{1/2}. \tag{8.16}
$$
Altogether, (8.14), (8.15), and (8.16) yield
$$
\bigl(1 - 2 C_{B_2}^{-1} C_{B_5}\, w(\alpha)\bigr)\, b(v, v)
\le 2 C_{B_2}^{-1} C_{B_5} \Bigl( C_{B_1} \widetilde C_V^{1/2} \Bigl( \sum_{j=0}^{J} b(Q_j v, Q_j v) \Bigr)^{1/2} + b(Q_0 v, Q_0 v)^{1/2} \Bigr) b(v, v)^{1/2}
\le 2 C_{B_2}^{-1} C_{B_5} \bigl( C_{B_1} \widetilde C_V^{1/2} + 1 \bigr) \Bigl( \sum_{j=0}^{J} b(Q_j v, Q_j v) \Bigr)^{1/2} b(v, v)^{1/2}.
$$
Since $w(\alpha) \to 0$ as $\alpha \to 0$, the above inequality implies, for $\alpha$ sufficiently small,
$$
\bigl(1 - 2 C_{B_2}^{-1} C_{B_5}\, w(\alpha)\bigr)^2\, b(v, v)
\le 4 C_{B_2}^{-2} C_{B_5}^2 \bigl( C_{B_1} \widetilde C_V^{1/2} + 1 \bigr)^2 \sum_{j=0}^{J} b(Q_j v, Q_j v),
$$
giving the left inequality in (8.9) with
$$
c_Q^{-1} \simeq 4 C_{B_2}^{-2} C_{B_5}^2 \bigl( C_{B_1} \widetilde C_V^{1/2} + 1 \bigr)^2. \tag{8.17}
$$
To prove the left inequality in (8.10), we estimate the first term on the right-hand side of (8.14) by
$$
\Re\bigl(a(v, v)\bigr)
= \sum_{j=0}^{J} \Re\bigl(a(v, v_j)\bigr)
= \Re\bigl(a(Q_0 v, v_0)\bigr) + \sum_{j=1}^{J} b(P_j v, v_j)
\le 2 C_{B_1} C_{B_2}^{-1} \Bigl( b(Q_0 v, Q_0 v)^{1/2} b(v_0, v_0)^{1/2} + \sum_{j=1}^{J} b(P_j v, P_j v)^{1/2} b(v_j, v_j)^{1/2} \Bigr),
$$
where in the last step we used (8.11) and Assumption 8.1 (iii). The proof then proceeds as above and is omitted.

Assumption 8.5. For all $v \in V$,
$$
\Bigl| \sum_{j=0}^{J} k(Q_j v, v - Q_j v) \Bigr| \le w(\alpha)\, b(v, v),
$$
$$
|k(v - Q_0 v, Q_0 v)| + \Bigl| \sum_{j=1}^{J} k(v, P_j v) \Bigr| \le w(\alpha)\, b(v, v),
$$
where $w(\alpha)$ is the function given in Assumption 8.4.
As the main result of this section, we prove that under Assumptions 8.1–8.5 the estimates (2.50) hold, and thus we can invoke Lemma 2.19 in Chapter 2 for the convergence of the preconditioned GMRES applied to (8.4).

Theorem 8.1. Under Assumptions 8.1–8.5, there exist $\tilde c_Q > 0$ and $\widetilde C_Q > 0$ such that for any $v \in V$ and $\ell = 1, 2$ the following estimates hold:
$$
b(Q^{(\ell)} v, Q^{(\ell)} v) \le \widetilde C_Q^2\, b(v, v) \tag{8.18}
$$
and
$$
\tilde c_Q\, b(v, v) \le b(v, Q^{(\ell)} v). \tag{8.19}
$$
Proof. We first prove (8.18) for $\ell = 1$. By using successively Assumption 8.1 (ii), Assumption 8.2 (i), and (8.9), we obtain
$$
b(Q^{(1)} v, Q^{(1)} v)
\le C_{B_1} \|Q^{(1)} v\|_{\widetilde H^m(\Gamma)}^2
\le 2 C_{B_1} \Bigl( \|Q_0 v\|_{\widetilde H^m(\Gamma)}^2 + \Bigl\| \sum_{j=1}^{J} Q_j v \Bigr\|_{\widetilde H^m(\Gamma)}^2 \Bigr)
\le 2 C_{B_1} C_{B_2}^{-1} \Bigl( b(Q_0 v, Q_0 v) + b\Bigl( \sum_{j=1}^{J} Q_j v, \sum_{j=1}^{J} Q_j v \Bigr) \Bigr)
\le 2 C_{B_1} C_{B_2}^{-1} \Bigl( b(Q_0 v, Q_0 v) + C_V \sum_{j=1}^{J} b(Q_j v, Q_j v) \Bigr)
\le 2 C_{B_1} C_{B_2}^{-1} (1 + C_V) \sum_{j=0}^{J} b(Q_j v, Q_j v)
\le 2 C_{B_1} C_{B_2}^{-1} (1 + C_V)\, C_Q\, b(v, v),
$$
where $C_Q$ is given in (8.13). This means (8.18) holds with
$$
\widetilde C_Q^2 = 2 C_{B_1} C_{B_2}^{-1} (1 + C_V)\, C_Q. \tag{8.20}
$$
A similar result holds for $\ell = 2$ by using (8.10) instead of (8.9).

Next we prove (8.19) for $\ell = 1$. By using the definition of $Q_j$, (8.9), and Assumption 8.5, we obtain
$$
b(v, Q^{(1)} v)
= \sum_{j=0}^{J} b(v, Q_j v)
\ge \sum_{j=0}^{J} b(Q_j v, Q_j v) - \sum_{j=0}^{J} |b(Q_j v, v - Q_j v)|
= \sum_{j=0}^{J} b(Q_j v, Q_j v) - \sum_{j=0}^{J} |k(Q_j v, v - Q_j v)|
\ge c_Q\, b(v, v) - w(\alpha)\, b(v, v).
$$
The required result is proved with
$$
\tilde c_Q = c_Q - w(\alpha), \tag{8.21}
$$
which is positive if we can choose $\alpha > 0$ such that $w(\alpha)$ is sufficiently smaller than $c_Q$.
Finally, for $\ell = 2$ we have, by using the definition of $P_j$, $j = 1, \dots, J$,
$$
b(P_j v, P_j v) = \Re\bigl(a(v, P_j v)\bigr) = b(v, P_j v) + \Re\bigl(k(v, P_j v)\bigr),
$$
so that
$$
b(v, P_j v) = b(P_j v, P_j v) - \Re\bigl(k(v, P_j v)\bigr), \qquad j = 1, \dots, J.
$$
Hence
$$
b(v, Q^{(2)} v) = b(v, Q_0 v) + \sum_{j=1}^{J} b(v, P_j v)
= b(Q_0 v, Q_0 v) + b(v - Q_0 v, Q_0 v) + \sum_{j=1}^{J} b(P_j v, P_j v) - \sum_{j=1}^{J} \Re\bigl(k(v, P_j v)\bigr).
$$
By using the definition of $Q_0$ we deduce $b(v - Q_0 v, Q_0 v) = -k(v - Q_0 v, Q_0 v)$, and therefore we obtain, by rearranging the equation and using (8.10) and Assumption 8.5,
$$
b(v, Q^{(2)} v) = b(Q_0 v, Q_0 v) + \sum_{j=1}^{J} b(P_j v, P_j v) - k(v - Q_0 v, Q_0 v) - \sum_{j=1}^{J} \Re\bigl(k(v, P_j v)\bigr)
\ge c_P\, b(v, v) - |k(v - Q_0 v, Q_0 v)| - \sum_{j=1}^{J} |k(v, P_j v)|
\ge c_P\, b(v, v) - w(\alpha)\, b(v, v).
$$
Hence (8.19) is proved for $\ell = 2$ by choosing $\alpha > 0$ sufficiently small.
As a direct consequence of Theorem 8.1, we prove that (8.7) and (8.8) have unique solutions and that these solutions coincide with the solution of (8.4).

Theorem 8.2. Equations (8.7) and (8.8) have unique solutions. Moreover, equations (8.4), (8.7), and (8.8) are equivalent.

Proof. It is clear from (8.19) and Assumption 8.1 (iii) that, for $\ell = 1, 2$, if $Q^{(\ell)} u = 0$ then $u = 0$. Hence (8.7) and (8.8) have at most one solution. Since these equations are posed in a finite-dimensional space, uniqueness also implies existence. Recall that Assumption 8.1 ensures the existence and uniqueness of the solution of (8.4). Let $u \in V$ be the solution of (8.4). Then it follows from Definition 8.1 that, for all $v_j \in V_j$, $j = 0, \dots, J$,
$$
a(Q_j u, v_j) = a(u, v_j) = \langle \ell, v_j \rangle = a(g_j^{(1)}, v_j).
$$
Hence the above identity implies $Q_j u = g_j^{(1)}$, so that $Q^{(1)} u = g^{(1)}$, i.e., $u$ is also the solution of (8.7). Similarly, for $j = 1, \dots, J$,
$$
b(P_j u, v_j) = \Re\bigl(a(u, v_j)\bigr) = \langle \ell, v_j \rangle = b(g_j^{(2)}, v_j),
$$
implying $b(P_j u - g_j^{(2)}, v_j) = 0$ for all $v_j \in V_j$. The positive definiteness of $b(\cdot,\cdot)$ yields $P_j u = g_j^{(2)}$, so that $Q^{(2)} u = g^{(2)}$, i.e., $u$ is the solution of (8.8). The converse follows by uniqueness: if $u$ solves (8.7) but not (8.4), then the argument above shows that (8.7) has two solutions, a contradiction. Similarly for (8.8).
8.2 Hypersingular Integral Equation

In this section we apply the theory developed in Section 8.1 to the hypersingular integral equation (8.2). We consider different preconditioners for both the $h$-version and the $p$-version. In each case, it suffices to check the assumptions posed in Section 8.1, namely Assumptions 8.1–8.5. First we consider Assumption 8.1, which is universal for both the $h$-version and the $p$-version.

Checking of Assumption 8.1: The bilinear forms are defined, for any $v, w \in \widetilde H^{1/2}(\Gamma)$, by
$$
a_W(v, w) := \langle W_\kappa v, w \rangle, \qquad
b_W(v, w) := \langle W_0 v, w \rangle, \qquad
k_W(v, w) := \langle K_W v, w \rangle, \tag{8.22}
$$
where, for $x \in \Gamma$,
$$
W_\kappa v(x) := -\frac{i}{2} \frac{\partial}{\partial n_x} \int_\Gamma \frac{\partial}{\partial n_y} H_0^{(1)}(\kappa |x-y|)\, v(y) \, d\sigma_y,
$$
$$
W_0 v(x) := -\frac{1}{\pi}\, \text{f.p.} \int_\Gamma \frac{v(y)}{|x-y|^2} \, d\sigma_y
= \frac{1}{\pi} \frac{\partial}{\partial n_x} \int_\Gamma v(y)\, \frac{\partial}{\partial n_y} \log\frac{1}{|x-y|} \, d\sigma_y,
$$
$$
K_W v(x) := W_\kappa v(x) - W_0 v(x).
$$
Here, $\kappa$ is the wave number in the Helmholtz equation; see Subsection 1.1.1. It is noted that the Hankel function $H_0^{(1)}$ has the following well-known expansion:
$$
\frac{i}{2} H_0^{(1)}(\kappa |x-y|)
= -\frac{1}{\pi} \log|\kappa(x-y)| \sum_{j=0}^{\infty} c_j\, |\kappa(x-y)|^{2j}
+ \sum_{j=1}^{\infty} d_j\, |\kappa(x-y)|^{2j}, \tag{8.23}
$$
which converges for $|\kappa(x-y)| < R_0$ for some $R_0 > 0$. It is shown in [235] that Assumption 8.1 holds true with $m = 1/2$; see also [65, 92].
Next we reconsider the subspace decompositions developed in Chapter 3 and Chapter 4, and show that Assumptions 8.2–8.5 hold true in these settings. It should be noted that the bilinear form $a_W(\cdot,\cdot)$ in those chapters is the symmetric positive definite bilinear form $b_W(\cdot,\cdot)$ considered in this chapter.
8.2.1 The h-version

Consider the subspace decomposition of $V$ defined in Definitions 3.1 and 3.2: $V = V_0 + V_1 + \dots + V_J$, where
$$
V := \bigl\{ v \in C(\Gamma) : v(\pm 1) = 0,\; v|_{\Gamma_{ij}} \text{ is affine},\; j = 1, \dots, N_i,\; i = 1, \dots, J \bigr\},
$$
$$
V_0 := \bigl\{ v \in C(\Gamma) : v(\pm 1) = 0,\; v|_{\Gamma_i} \text{ is affine},\; i = 1, \dots, J \bigr\},
$$
$$
V_i := \bigl\{ v \in V : \operatorname{supp} v \subset \bar\Gamma_i \bigr\} = V \cap \widetilde H^{1/2}(\Gamma_i), \qquad i = 1, \dots, J,
$$
with $\Gamma_i$ and $\Gamma_{ij}$ being disjoint subintervals such that
$$
\bar\Gamma = \bigcup_{i=1}^{J} \bar\Gamma_i
\qquad\text{and}\qquad
\bar\Gamma_i = \bigcup_{j=1}^{N_i} \bar\Gamma_{ij}, \qquad i = 1, \dots, J.
$$
Recalling that $H_i = |\Gamma_i|$ and $h_i = \max_{1 \le j \le N_i} |\Gamma_{ij}|$, we denote
$$
H := \max_{1 \le i \le J} H_i
\qquad\text{and}\qquad
1 + \log\frac{H}{h} := \max_{1 \le i \le J} \Bigl( 1 + \log\frac{H_i}{h_i} \Bigr).
$$
We assume that there exists a positive constant $C$ independent of $J$ such that
$$
H^{1/4} \Bigl( 1 + \log\frac{H}{h} \Bigr) \le C. \tag{8.24}
$$
The above assumption dictates how fast $\log(H/h)$ may go to infinity. In fact, if $\varepsilon$ is the positive constant in (8.28), then it suffices to assume $H^{1-3\varepsilon}\bigl(1 + \log(H/h)\bigr) \le C$; see the proof of Theorem 8.3.

Checking of Assumption 8.2: As shown in Lemma 3.1 and Lemma 3.2, inequalities (8.5) and (8.6) hold true with
$$
C_V = C_0 \Bigl( 1 + \log\frac{H}{h} \Bigr)
\qquad\text{and}\qquad
\widetilde C_V = C_1, \tag{8.25}
$$
where $C_0$ and $C_1$ are constants independent of $H_j$ and $h_j$, $j = 1, \dots, J$.
Checking of Assumption 8.3: We show in the following lemma that this assumption is satisfied.

Lemma 8.3. Under assumption (8.24), there exist positive constants $C_b$ and $H_0$ independent of $J$ such that if $0 < H \le H_0$, then for any $v_j \in V_j$ the following inequality holds:
$$
b_W(v_j, v_j) \le C_b\, \Re\bigl(a_W(v_j, v_j)\bigr), \qquad j = 1, \dots, J.
$$
Proof. First we note that if $v_j \in V_j$, $j = 1, \dots, J$, then since $\operatorname{supp} v_j \subset \bar\Gamma_j$ and $|\Gamma_j| = H_j$, we have by using Lemma A.6
$$
\|v_j\|_{L^2(\Gamma)}^2 = \|v_j\|_{H_I^0(\Gamma_j)}^2 \le H_j\, \|v_j\|_{H_I^{1/2}(\Gamma_j)}^2 . \tag{8.26}
$$
Invoking Lemma A.8 and Lemma C.21 and noting that $v_j$ vanishes at the endpoints of $\Gamma_j$, we obtain
$$
\|v_j\|_{H_I^{1/2}(\Gamma_j)}^2
\lesssim \|v_j\|_{H_w^{1/2}(\Gamma_j)}^2
\lesssim \Bigl( 1 + \log\frac{H_j}{h_j} \Bigr)^2 |v_j|_{H^{1/2}(\Gamma_j)}^2 .
$$
Since $|v_j|_{H^{1/2}(\Gamma_j)} \le |v_j|_{H^{1/2}(\Gamma)}$, it follows that
$$
\|v_j\|_{H_I^{1/2}(\Gamma_j)}^2
\lesssim \Bigl( 1 + \log\frac{H_j}{h_j} \Bigr)^2 |v_j|_{H^{1/2}(\Gamma)}^2
\lesssim \Bigl( 1 + \log\frac{H_j}{h_j} \Bigr)^2 \|v_j\|_{H_I^{1/2}(\Gamma)}^2
\lesssim \Bigl( 1 + \log\frac{H_j}{h_j} \Bigr)^2 b_W(v_j, v_j),
$$
where we used again Lemma A.8 and used Assumption 8.1 (iii). This inequality, together with (8.26) and assumption (8.24), implies
$$
\|v_j\|_{L^2(\Gamma)}^2 \lesssim H^{1/2}\, b_W(v_j, v_j). \tag{8.27}
$$
Therefore, Assumption 8.1 (iii) (Gårding's inequality) yields
$$
b_W(v_j, v_j) \lesssim \Re\bigl(a_W(v_j, v_j)\bigr) + \|v_j\|_{L^2(\Gamma)}^2 \lesssim \Re\bigl(a_W(v_j, v_j)\bigr) + H^{1/2}\, b_W(v_j, v_j),
$$
implying
$$
(1 - H^{1/2})\, b_W(v_j, v_j) \lesssim \Re\bigl(a_W(v_j, v_j)\bigr).
$$
This proves the lemma, with $H_0$ chosen to be smaller than 1.
Checking of Assumption 8.4: This assumption is satisfied with $m = 1/2$ and
$$
w(H) = c H^{1 - 2\varepsilon} \tag{8.28}
$$
for some $\varepsilon > 0$, as shown in the following lemma.
8.2 Hypersingular Integral Equation
165
Lemma 8.4. For every ε > 0 there exist positive constants c and H0 such that for any H ∈ (0, H0 ] and v ∈ V the following estimates hold Q0 v − vL2 (Γ ) ≤ cH 1/2−ε vH 1/2 (Γ ) .
Proof. The results are consequences of Gårding's inequality and Nitsche's trick. The detailed proof is omitted. □

We note that the presence of $\varepsilon$ in the estimate is due to the fact that the projection $Q_0$ is defined with the bilinear form $a_W(\cdot,\cdot)$, which inherits the singularity due to $\Gamma$ being an open curve (or a screen in 3D). When $\Gamma$ is a closed curve (or a closed surface in 3D) the estimate is optimal, of order $O(H^{1/2})$; see [235].

In order to prove that Assumption 8.5 is satisfied, we first prove the following lemma.

Lemma 8.5. Under assumption (8.24), for any $v \in V$ the following inequality holds:
\[
\|Q_j v\|_{L^2(\Gamma)}^2 \lesssim H_j^{1/2}\, \|Q_j v\|_{H_I^{1/2}(\Gamma)}^2, \qquad j = 1,\ldots,J.
\]
Proof. The proof follows in the same manner as the proof of (8.27). □

Checking of Assumption 8.5: Finally we verify the assumption on $k_W(\cdot,\cdot)$.

Lemma 8.6. Under assumption (8.24), there exists $H_0 > 0$ such that for $0 < H \le H_0$, the bilinear form $k_W(\cdot,\cdot)$ defined in (8.22) satisfies, for all $v \in V$,
\[
\Bigl|\sum_{j=0}^{J} k_W(v - Q_j v,\, Q_j v)\Bigr| \lesssim H^{1/4}\, b_W(v,v),
\]
\[
|k_W(v - Q_0 v,\, Q_0 v)| + \Bigl|\sum_{j=1}^{J} k_W(v,\, P_j v)\Bigr| \lesssim H^{1/4}\, b_W(v,v).
\]

Proof. Firstly, for any $v \in V$ we have, by using Assumption 8.1 and Lemma 8.4,
\[
|k_W(v - Q_0 v,\, Q_0 v)| \lesssim \|v - Q_0 v\|_{L^2(\Gamma)}\, \|Q_0 v\|_{H^{1/2}(\Gamma)} \lesssim H^{1/2-\varepsilon} \|v\|_{H^{1/2}(\Gamma)}^2 \lesssim H^{1/2-\varepsilon}\, b_W(v,v). \tag{8.29}
\]
Next we have, by using again Assumption 8.1,
\[
\Bigl|\sum_{j=1}^{J} k_W(v - Q_j v, Q_j v)\Bigr| \le \Bigl|\sum_{j=1}^{J} k_W(v, Q_j v)\Bigr| + \Bigl|\sum_{j=1}^{J} k_W(Q_j v, Q_j v)\Bigr| = \Bigl|k_W\Bigl(v, \sum_{j=1}^{J} Q_j v\Bigr)\Bigr| + \Bigl|\sum_{j=1}^{J} k_W(Q_j v, Q_j v)\Bigr|
\]
\[
\lesssim \|v\|_{H_I^{1/2}(\Gamma)} \Bigl\|\sum_{j=1}^{J} Q_j v\Bigr\|_{L^2(\Gamma)} + \sum_{j=1}^{J} \|Q_j v\|_{H_I^{1/2}(\Gamma)}\, \|Q_j v\|_{L^2(\Gamma)}.
\]
Due to the fact that $\operatorname{supp} Q_j v \subset \overline{\Gamma}_j$ for $j = 1,\ldots,J$, we have
\[
\Bigl\|\sum_{j=1}^{J} Q_j v\Bigr\|_{L^2(\Gamma)}
= \Bigl(\sum_{i=1}^{J} \int_{\Gamma_i} \Bigl|\sum_{j=1}^{J} Q_j v(x)\Bigr|^2\, dx\Bigr)^{1/2}
= \Bigl(\sum_{i=1}^{J} \int_{\Gamma_i} \bigl|Q_i v(x)\bigr|^2\, dx\Bigr)^{1/2}
= \Bigl(\sum_{i=1}^{J} \|Q_i v\|_{L^2(\Gamma_i)}^2\Bigr)^{1/2}. \tag{8.30}
\]
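Identity (8.30) hinges only on the supports $\Gamma_j$ being pairwise disjoint: all cross terms in the squared norm of the sum vanish. A small numerical sketch of this mechanism (illustrative only; plain vectors with disjoint index blocks stand in for the locally supported functions $Q_j v$):

```python
# Vectors supported on disjoint index blocks model functions with
# disjoint supports: the norm of the sum equals the square root of
# the sum of squared norms, since all cross products vanish.
import math

J, block = 4, 5                      # J "subdomains", 5 dofs each
vs = []
for j in range(J):
    v = [0.0] * (J * block)          # zero outside its own block
    for i in range(block):
        v[j * block + i] = (j + 1) * 0.1 + i   # arbitrary local values
    vs.append(v)

total = [sum(v[i] for v in vs) for i in range(J * block)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

lhs = norm(total)
rhs = math.sqrt(sum(norm(v) ** 2 for v in vs))
print(abs(lhs - rhs))   # agrees up to rounding
```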
Therefore, by invoking Lemma 8.5 we infer
\[
\Bigl|\sum_{j=1}^{J} k_W(v - Q_j v, Q_j v)\Bigr| \lesssim \|v\|_{H_I^{1/2}(\Gamma)} \Bigl(\sum_{j=1}^{J} \|Q_j v\|_{L^2(\Gamma)}^2\Bigr)^{1/2} + \sum_{j=1}^{J} \|Q_j v\|_{H_I^{1/2}(\Gamma)}\, \|Q_j v\|_{L^2(\Gamma)}
\]
\[
\lesssim H^{1/4}\, \|v\|_{H_I^{1/2}(\Gamma)} \Bigl(\sum_{j=1}^{J} \|Q_j v\|_{H_I^{1/2}(\Gamma)}^2\Bigr)^{1/2} + H^{1/4} \sum_{j=1}^{J} \|Q_j v\|_{H_I^{1/2}(\Gamma)}^2
\]
\[
\lesssim H^{1/4}\, b_W(v,v)^{1/2} \Bigl(\sum_{j=1}^{J} b_W(Q_j v, Q_j v)\Bigr)^{1/2} + H^{1/4} \sum_{j=1}^{J} b_W(Q_j v, Q_j v).
\]
Invoking Lemma 8.2 we deduce
\[
\Bigl|\sum_{j=1}^{J} k_W(v - Q_j v, Q_j v)\Bigr| \lesssim H^{1/4}\, b_W(v,v).
\]
The above inequality and (8.29) give the first inequality in the lemma. The second inequality in the lemma can be shown by estimating $\bigl|\sum_{j=1}^{J} k_W(v, P_j v)\bigr|$ in the same manner as $\bigl|\sum_{j=1}^{J} k_W(v, Q_j v)\bigr|$ above. We omit the details. □

Having proved Assumptions 8.1–8.5, we can now invoke Theorem 8.1 and Lemma 2.19 to obtain the convergence result.

Theorem 8.3. Under assumption (8.24), there exists $H_0 > 0$ such that for any $H \in (0, H_0]$ the following residual estimate holds:
\[
\|r_k\|_{\mathbb{R}^N} \lesssim \Bigl(1 - \frac{1}{2\bigl(1+\log(H/h)\bigr)^2}\Bigr)^{k/2} \|r_0\|_{\mathbb{R}^N},
\]
where $r_k$ is the residual defined in Subsection 2.2.2.

Proof. We recall from Lemma 2.19 that
\[
\|r_k\|_{\mathbb{R}^N} \lesssim \Bigl(1 - \frac{c_Q^2}{C_Q^2}\Bigr)^{k/2} \|r_0\|_{\mathbb{R}^N}, \tag{8.31}
\]
where $c_Q$ and $C_Q$ are given by Theorem 8.1. Due to (8.13), (8.20), and (8.25), we have $C_Q^2 \lesssim C_V(1+C_V) \lesssim 1$. On the other hand, due to (8.17), (8.21), (8.25), and (8.28), we have
\[
c_Q \gtrsim \widetilde C_V^{-1} - w(H)\Bigl(1+\log\frac{H}{h}\Bigr) \gtrsim \frac{1 - H^{1/2} - H^{1-2\varepsilon}\bigl(1+\log(H/h)\bigr)^2}{1+\log(H/h)} \ge \frac{1}{1+\log(H/h)}\Bigl(1 - \frac{C^2}{\bigl(1+\log(H/h)\bigr)^2}\Bigr),
\]
where $C$ is the constant in (8.24). We are only concerned with the case when $1+\log(H/h) \to \infty$, because in this case $c_Q \to 0$, so that the convergence rate goes to 1, which implies a slow convergence of the preconditioned GMRES. However, in this case
\[
\frac{1}{1+\log(H/h)}\Bigl(1 - \frac{C^2}{\bigl(1+\log(H/h)\bigr)^2}\Bigr) \ge \frac{1}{2\bigl(1+\log(H/h)\bigr)}
\]
when $H$ and $h$ are sufficiently small. Consequently,
\[
1 - \frac{c_Q^2}{C_Q^2} \lesssim 1 - \frac{1}{2\bigl(1+\log(H/h)\bigr)^2},
\]
which together with (8.31) proves the theorem. □
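The theorem can be read quantitatively. Taking the convergence factor $q = 1 - 1/\bigl(2(1+\log(H/h))^2\bigr)$ literally, with all generic constants set to one (an assumption of ours, not part of the theorem), the number of preconditioned GMRES iterations needed for a fixed residual reduction grows only polylogarithmically in $H/h$:

```python
import math

def iterations_needed(H_over_h, tol=1e-8):
    """Smallest k with q**(k/2) <= tol for the convergence factor
    q = 1 - 1/(2*(1 + log(H/h))**2) suggested by Theorem 8.3
    (all generic constants suppressed)."""
    L = 1.0 + math.log(H_over_h)
    q = 1.0 - 1.0 / (2.0 * L * L)
    # q**(k/2) <= tol  <=>  k >= 2*log(tol)/log(q)
    return math.ceil(2.0 * math.log(tol) / math.log(q))

counts = {r: iterations_needed(r) for r in (2, 4, 16, 256)}
print(counts)   # grows like (1 + log(H/h))**2, not like the problem size
```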
8.2.2 The p-version

For this version, we consider the subspace decomposition defined in Definition 4.1 and Definition 4.2, namely $V = V_0 \oplus V_1 \oplus \cdots \oplus V_N$, where
\[
V := \bigl\{ v \in C(\Gamma) : v(\pm 1) = 0,\ v|_{\Gamma_j} \in \mathcal P_p,\ j = 1,\ldots,N \bigr\},
\]
\[
V_0 := \bigl\{ v \in C(\Gamma) : v(\pm 1) = 0,\ v|_{\Gamma_j} \in \mathcal P_1,\ j = 1,\ldots,N \bigr\},
\]
\[
V_j := V \cap \widetilde H^{1/2}(\Gamma_j), \qquad j = 1,\ldots,N.
\]
Here, $\mathcal P_p$ is the set of polynomials of degree $p$, and
\[
\Gamma_j := (x_{j-1}, x_j), \qquad x_j = -1 + 2j/N, \qquad j = 1,\ldots,N.
\]
Analogously to assumption (8.24), we assume the following property of the mesh size $h$ and polynomial degree $p$: for any $p_0 > 0$, there exists $h_0 > 0$ such that
\[
h_0^{1/4}\,(1 + \log p_0) \lesssim 1. \tag{8.32}
\]

Checking of Assumption 8.2: It was shown in Lemma 4.1 and Lemma 4.2 that inequalities (8.5) and (8.6) hold with
\[
\widetilde C_V \lesssim (1+\log p)^2 \quad\text{and}\quad C_V = \text{constant}.
\]

Checking of Assumption 8.3: Similarly to the h-version, we have the following lemma.

Lemma 8.7. Under assumption (8.32), there exist positive constants $C_b$ and $h_0$ such that for $j = 1,\ldots,N$ and $0 < h \le h_0$ the following inequality holds:
\[
b_W(v_j, v_j) \le C_b\,\Re\bigl(a_W(v_j, v_j)\bigr), \qquad v_j \in V_j.
\]

Proof. The proof is very similar to that of Lemma 8.3, except that in this version $|\Gamma_j| = h$, and thus $H_j$ is replaced by $h$. Moreover, we invoke Lemma C.26 instead of Lemma C.21, and use assumption (8.32) in lieu of (8.24). The details are omitted. □

Checking of Assumption 8.4: Without giving further detail we state that this assumption is satisfied with $m = 1/2$ and $w(h) = ch^{1-2\varepsilon}$ for some $\varepsilon > 0$.

Checking of Assumption 8.5: Similarly to Lemma 8.6, we have exactly the same result for the p-version, with $H_0$ and $H$ replaced by $h_0$ and $h$. The detailed proof is omitted.

Collecting all the estimates in the above assumption checks, we obtain the following convergence result.

Theorem 8.4. Under assumption (8.32), there exists $h_0 > 0$ such that for any $h$ satisfying $0 < h \le h_0$ and $p = 1,\ldots,p_0$, the following residual estimate holds:
\[
\|r_k\|_{\mathbb{R}^N} \lesssim \Bigl(1 - \frac{1}{(1+\log p)^4}\Bigr)^{k/2} \|r_0\|_{\mathbb{R}^N}.
\]
Proof. Similarly to the h-version, we have
\[
C_Q \lesssim 1 \quad\text{and}\quad c_Q \gtrsim (1+\log p)^{-2} - h^{1-2\varepsilon}.
\]
The proof follows along the lines of that of Theorem 8.3. □
8.3 Weakly-Singular Integral Equation

We now consider the application of the general theory in Section 8.1 to the weakly-singular integral equation (8.1). Similarly to the case of the hypersingular integral equation, it suffices to show that Assumptions 8.1–8.5 are satisfied.

Checking of Assumption 8.1: The bilinear forms are defined, for any $v, w \in \widetilde H^{-1/2}(\Gamma)$, by
\[
a_V(v,w) := \langle V_\kappa v, w\rangle, \qquad b_V(v,w) := \langle V_0 v, w\rangle, \qquad k_V(v,w) := \langle K_V v, w\rangle, \tag{8.33}
\]
where, for $x \in \Gamma$,
\[
V_\kappa v(x) := \frac{i}{2} \int_\Gamma H_0^1(\kappa|x-y|)\, v(y)\, d\sigma_y, \qquad
V_0 v(x) := -\frac{1}{\pi} \int_\Gamma \log|x-y|\, v(y)\, d\sigma_y, \qquad
K_V v(x) := V_\kappa v(x) - V_0 v(x);
\]
see (8.23). It is shown in [212] that Assumption 8.1 holds true with $m = -1/2$. In the following we will reconsider the subspace decompositions developed in Chapter 3 and Chapter 4, and show that Assumptions 8.2–8.5 hold true for these decompositions. As in the case of the hypersingular integral equation, the bilinear form $a_V(\cdot,\cdot)$ in those chapters is the symmetric positive definite bilinear form $b_V(\cdot,\cdot)$ considered in this chapter.
8.3.1 The h-version

Recall the subspace decomposition of
\[
V := \bigl\{ v : \Gamma \to \mathbb R : v|_{\Gamma_{ij}} \text{ is constant},\ j = 1,\ldots,N_i,\ i = 1,\ldots,J \bigr\}
\]
defined in Definitions 3.1 and 3.3:
\[
V = V_{-1} + V_0 + \cdots + V_J.
\]
As in the case of the hypersingular integral equation, it suffices to check that Assumptions 8.1–8.5 are satisfied.

Checking of Assumption 8.2: As was shown in Lemma 3.4 and Lemma 3.5, inequalities (8.5) and (8.6) hold true with
\[
\widetilde C_V = C_0\Bigl(1+\log\frac{H}{h}\Bigr) \quad\text{and}\quad C_V = C_2,
\]
where $C_0$ and $C_2$ are constants independent of $H$ and $h$.
Checking of Assumption 8.3: This assumption can be shown in the same manner as in the proof of Lemma 8.3 by using Gårding's inequality, except that instead of (8.26) we now use the following relation between the $H^{-1}$-norm and the $\widetilde H^{-1/2}$-norm. Noting that $\langle v_j, 1\rangle_{L^2(\Gamma)} = 0$, we can invoke Lemma A.19 and use (8.27) to obtain
\[
\|v_j\|_{H^{-1}(\Gamma)}^2 \lesssim \|I_0 v_j\|_{H^0(\Gamma)}^2 \lesssim H^{1/2}\, \|I_0 v_j\|_{H_I^{1/2}(\Gamma)}^2 \lesssim H^{1/2}\, \|v_j\|_{H_I^{-1/2}(\Gamma)}^2. \tag{8.34}
\]
Checking of Assumption 8.4: This assumption is satisfied with $m = -1/2$ and $w(H) = cH^{1-2\varepsilon}$ for some $\varepsilon > 0$, namely
\[
\|Q_0 v - v\|_{H^{-1}(\Gamma)} \lesssim H^{1/2-\varepsilon}\, \|v\|_{H^{-1/2}(\Gamma)}, \qquad 0 < H \le H_0. \tag{8.35}
\]
The proof is exactly the same as that of Lemma 8.4 and is omitted.

Checking of Assumption 8.5: Finally we check the assumption on the bilinear form $k_V(\cdot,\cdot)$.

Lemma 8.8. For any $\varepsilon > 0$ there exists $H_0 > 0$ such that for all $H \in (0, H_0]$, the bilinear form $k_V(\cdot,\cdot)$ defined in (8.33) satisfies, for all $v \in V$,
\[
\Bigl|\sum_{j=0}^{J} k_V(v - Q_j v, Q_j v)\Bigr| \lesssim H^{1/4}\, b_V(v,v),
\]
\[
|k_V(v - Q_0 v, Q_0 v)| + \Bigl|\sum_{j=1}^{J} k_V(v, P_j v)\Bigr| \lesssim H^{1/4}\, b_V(v,v).
\]

Proof. Firstly, for any $v \in V$ we have, by using Assumption 8.1, Lemma 8.1 (with $m = -1/2$), and (8.35),
\[
|k_V(v - Q_0 v, Q_0 v)| \lesssim \|v - Q_0 v\|_{H^{-1}(\Gamma)}\, \|Q_0 v\|_{H^{-1/2}(\Gamma)} \lesssim H^{1/2-\varepsilon} \|v\|_{H^{-1/2}(\Gamma)}^2 \lesssim H^{1/2-\varepsilon}\, b_V(v,v). \tag{8.36}
\]
Next we have, by using again Assumption 8.1,
\[
\Bigl|\sum_{j=1}^{J} k_V(v - Q_j v, Q_j v)\Bigr| \le \Bigl|\sum_{j=1}^{J} k_V(v, Q_j v)\Bigr| + \Bigl|\sum_{j=1}^{J} k_V(Q_j v, Q_j v)\Bigr| = \Bigl|k_V\Bigl(v, \sum_{j=1}^{J} Q_j v\Bigr)\Bigr| + \Bigl|\sum_{j=1}^{J} k_V(Q_j v, Q_j v)\Bigr|
\]
\[
\lesssim \|v\|_{H^{-1/2}(\Gamma)} \Bigl\|\sum_{j=1}^{J} Q_j v\Bigr\|_{H^{-1}(\Gamma)} + \sum_{j=1}^{J} \|Q_j v\|_{H^{-1/2}(\Gamma)}\, \|Q_j v\|_{H^{-1}(\Gamma)}
\]
\[
\lesssim \|v\|_{H^{-1/2}(\Gamma)} \Bigl\|\sum_{j=1}^{J} Q_j v\Bigr\|_{H^{-1}(\Gamma)} + H^{1/4} \sum_{j=1}^{J} \|Q_j v\|_{H^{-1/2}(\Gamma)}^2,
\]
where in the last step we used (8.34). For the term $\bigl\|\sum_{j=1}^{J} Q_j v\bigr\|_{H^{-1}(\Gamma)}$ we use again Lemma A.19 (noting that $\sum_{j=1}^{J} Q_j v$ has mean zero) and use the same argument as in (8.30) to obtain
\[
\Bigl\|\sum_{j=1}^{J} Q_j v\Bigr\|_{H^{-1}(\Gamma)} \lesssim \Bigl\|\sum_{j=1}^{J} I_0 Q_j v\Bigr\|_{H^0(\Gamma)} = \Bigl(\sum_{j=1}^{J} \|I_0 Q_j v\|_{H^0(\Gamma)}^2\Bigr)^{1/2}.
\]
Using (8.27) and Lemma A.19 we deduce
\[
\Bigl\|\sum_{j=1}^{J} Q_j v\Bigr\|_{H^{-1}(\Gamma)} \lesssim H^{1/4} \Bigl(\sum_{j=1}^{J} \|I_0 Q_j v\|_{H_I^{1/2}(\Gamma)}^2\Bigr)^{1/2} \lesssim H^{1/4} \Bigl(\sum_{j=1}^{J} \|Q_j v\|_{H^{-1/2}(\Gamma)}^2\Bigr)^{1/2},
\]
so that
\[
\Bigl|\sum_{j=1}^{J} k_V(v - Q_j v, Q_j v)\Bigr| \lesssim H^{1/4}\, \|v\|_{H^{-1/2}(\Gamma)} \Bigl(\sum_{j=1}^{J} \|Q_j v\|_{H^{-1/2}(\Gamma)}^2\Bigr)^{1/2} + H^{1/4} \sum_{j=1}^{J} \|Q_j v\|_{H^{-1/2}(\Gamma)}^2.
\]
Hence, by using Assumption 8.1 and Lemma 8.2, we deduce
\[
\Bigl|\sum_{j=1}^{J} k_V(v - Q_j v, Q_j v)\Bigr| \lesssim H^{1/4}\, b_V(v,v).
\]
The above inequality and (8.36) give the first inequality in the lemma. The second inequality in the lemma can be shown by estimating $\bigl|\sum_{j=1}^{J} k_V(v, P_j v)\bigr|$ in the same manner as $\bigl|\sum_{j=1}^{J} k_V(v, Q_j v)\bigr|$ above. □

Having proved Assumptions 8.1–8.5, we can now invoke Theorem 8.1 and Lemma 2.19 to obtain the following convergence result.
Theorem 8.5. Under the assumption (8.24), there exists $H_0 > 0$ such that for any $H \in (0, H_0]$ the following residual estimate holds:
\[
\|r_k\|_{\mathbb{R}^N} \lesssim \Bigl(1 - \frac{1}{\bigl(1+\log(H/h)\bigr)^2}\Bigr)^{k/2} \|r_0\|_{\mathbb{R}^N}.
\]

Proof. The proof is similar to that of Theorem 8.3 and is omitted. □
8.3.2 The p-version

For this version, we consider the subspace decomposition defined in Definition 4.1 and Definition 4.3, namely $V = V_0 \oplus V_1 \oplus \cdots \oplus V_N$, where
\[
V := \bigl\{ v : \Gamma \to \mathbb R : v|_{\Gamma_{ij}} \in \mathcal P_p,\ i = 1,\ldots,N,\ j = 1,\ldots,N_i \bigr\},
\]
\[
V_0 := \bigl\{ v : \Gamma \to \mathbb R : v|_{\Gamma_i} \in \mathcal P_0,\ i = 1,\ldots,N \bigr\},
\]
\[
V_j := V \cap \widetilde H^{-1/2}(\Gamma_j), \qquad j = 1,\ldots,N.
\]
Here, $\mathcal P_p$ is the set of polynomials of degree $p$, and
\[
\Gamma_j := (x_{j-1}, x_j), \qquad x_j = -1 + 2j/N, \qquad j = 1,\ldots,N.
\]

Checking of Assumption 8.2: It was shown in Lemma 4.3 and Lemma 4.4 that inequalities (8.5) and (8.6) hold with
\[
\widetilde C_V \lesssim (1+\log p)^2 \quad\text{and}\quad C_V \lesssim 1.
\]

Checking of Assumption 8.3: Similarly to the h-version, we have the following lemma.

Lemma 8.9. Under assumption (8.32), there exist positive constants $C_b$ and $h_0$ such that for $j = 1,\ldots,N$ and $0 < h \le h_0$ the following inequality holds:
\[
b(v_j, v_j) \le C_b\,\Re\bigl(a(v_j, v_j)\bigr), \qquad v_j \in V_j.
\]

Proof. The proof is exactly the same as that of Lemma 8.3, except that in this version $|\Gamma_j| = h$, and thus $H_j$ is replaced by $h$. □
Checking of Assumption 8.4: Without giving further detail we state that this assumption is satisfied with m = −1/2 and w(h) = ch1−2ε for some ε > 0.
Checking of Assumption 8.5: Similarly to Lemma 8.6, we have exactly the same result for the p-version, with $H_0$ and $H$ replaced by $h_0$ and $h$. The detailed proof is omitted.

Collecting all the estimates in the above assumption checks, we obtain the following convergence result.

Theorem 8.6. Under the assumption (8.32), there exists $h_0 > 0$ such that for any $h$ satisfying $0 < h \le h_0$ and $p = 0, 1,\ldots,p_0$, the following residual estimate holds:
\[
\|r_k\|_{\mathbb{R}^N} \lesssim \Bigl(1 - \frac{1}{\bigl(1+\log(p+1)\bigr)^4}\Bigr)^{k/2} \|r_0\|_{\mathbb{R}^N}.
\]
Proof. The proof follows along the lines of that of Theorem 8.4 and is omitted.
8.4 Numerical Results

The theoretical results obtained in this chapter are supported by numerical results in Chapter 9. The connection is shown in the following table.

Equation          Preconditioner   Theorem   Table
Hypersingular     h-version        8.3       9.6
                  p-version        8.4       9.20
Weakly-singular   h-version        8.5       9.5
                  p-version        8.6       9.19

Table 8.1 Theoretical results in Chapter 8 and relevant numerical results in Chapter 9.
8.5 Indefinite Problems in Three-Dimensions

Since the general theory developed in Section 8.1 is independent of the dimension of the problem, results similar to those derived in Section 8.2 can be obtained in the three-dimensional case. We omit the detailed analysis, which can be found in [207], and only present here some numerical results. We solve equation (1.2) with $\Gamma = (-1,1)\times(-1,1)\times\{0\}$, $\kappa = 5$, and $g(x) \equiv 1$ on a uniform triangular mesh by using the non-overlapping and overlapping preconditioners; see Chapter 10. The results for the non-overlapping preconditioner are reported in Table 8.2, while those for the overlapping preconditioner are reported in Table 8.3. In both tables, the number of iterations and the CPU time (in seconds), for various values of $H/h$, are listed.
Non-overlapping preconditioner:

DoF      WP             H/h = 2        H/h = 4        H/h = 8        H/h = 16
9        6  (0.01)      6  (0.01)
49       17 (0.02)      17 (0.01)      17 (0.02)
225      23 (0.02)      20 (0.03)      20 (0.02)      21 (0.04)
961      31 (0.15)      21 (0.24)      21 (0.12)      23 (0.20)      23 (0.62)
3969     44 (3.02)      21 (4.39)      21 (1.62)      21 (1.75)      26 (4.16)
16129    63 (84.94)     21 (93.72)     21 (32.18)     21 (29.86)     24 (41.06)

Table 8.2 Number of iterations and CPU times (in parentheses). WP: without preconditioner
Overlapping preconditioner:

δ = h:
DoF      H/h = 2        H/h = 4        H/h = 8        H/h = 16
9        6  (0.02)
49       19 (0.01)      18 (0.02)
225      28 (0.04)      27 (0.03)      22 (0.05)
961      30 (0.39)      27 (0.23)      26 (0.34)      25 (0.86)
3969     30 (6.53)      28 (2.50)      27 (2.84)      28 (6.12)
16129    30 (135.47)    28 (43.76)     27 (41.14)     29 (58.46)

δ = 2h:
DoF      H/h = 2        H/h = 4        H/h = 8        H/h = 16
9        6  (0.02)
49       17 (0.03)      20 (0.02)
225      22 (0.09)      26 (0.08)      26 (0.09)
961      26 (0.65)      29 (0.48)      27 (0.57)      27 (1.17)
3969     31 (8.34)      31 (3.92)      28 (4.13)      28 (7.99)
16129    35 (166.31)    31 (53.10)     29 (49.80)     29 (69.04)

Table 8.3 Number of iterations and CPU times (in parentheses)
Chapter 9

Implementation Issues and Numerical Experiments

In this chapter we discuss implementation issues for the two-dimensional problems treated in this part of the book. First we mention some important points in the implementation of the methods, and then present the numerical results for the different methods studied in Chapters 3–6 and Chapter 8. The stiffness matrix arising from the fully-discrete method studied in Chapter 7 is computed slightly differently, and the numerical results for that study have been presented in that chapter. The equations to be solved in this chapter take the form (1.1) and (1.2) on the interval $\Gamma = (-1,1)$. The Galerkin boundary element method solves these equations in a finite-dimensional space $S$, where $S = V$ is the space of piecewise polynomials of degree $p \ge 0$ for (1.1), and $S = V$ is the space of continuous piecewise polynomials of degree $p \ge 1$ vanishing at $\pm 1$ for (1.2); see (1.8) and (1.9). Many of the numerical results reported in this chapter are obtained using the package maiprogs written by Matthias Maischak. See the online documents Book of Numerical Experiments and User Guide Technical Manual at http://people.brunel.ac.uk/~mastmmm/.
9.1 Implementation Issues

There are two different ways to benefit from the Schwarz preconditioners. They can be used in conjunction with the preconditioned conjugate gradient method (Algorithm 2.3) if the matrix is symmetric and positive definite, or with the preconditioned GMRES method (Algorithm 2.4 or Algorithm 2.5) if the matrix is indefinite or nonsymmetric. They can also be used together with a linear iterative method (Richardson, Jacobi, or Gauss–Seidel). The analysis for the first approach requires estimation of the condition number (as done in Section 2.6), while the analysis for the second approach requires estimation of the convergence rate; see Section 2.9. It is noted that estimates for condition numbers give estimates for convergence rates, and vice versa; see [241].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 E. P. Stephan and T. Tran, Schwarz Methods and Multilevel Preconditioners for Boundary Element Methods, https://doi.org/10.1007/978-3-030-79283-1_9
In this chapter, for the additive Schwarz methods we present numerical results using the first approach (PCG or PGMRES), while for the multiplicative Schwarz methods we present results using the second approach (linear iterative methods). The implementation of the additive Schwarz preconditioners is explained in Algorithm 2.7, and the implementation of the multiplicative Schwarz preconditioner, which is like that of multigrid methods, is explained in Algorithm 2.10 or Algorithm 2.11.

For both the additive and multiplicative versions, it is necessary to explain how the restriction and prolongation matrices $\mathbf R_j$ and $\mathbf R_j^\top$ are computed. It suffices to explain the additive Schwarz case. We have to distinguish two different cases: two-level and multilevel preconditioners.

(i) In a two-level method, the space $S$ is decomposed as $S = S_0 + S_1 + \cdots + S_J$, where $S_0$ is the boundary element space defined on a coarse mesh. We recall the matrix representation of the additive Schwarz operator developed in Chapter 2, Section 2.3. Let $\mathbf A_j$ denote the Galerkin matrix corresponding to $S_j$, $j = 0,\ldots,J$, and $\mathbf A$ the matrix corresponding to $S$. It is shown in (2.27) that the matrix representation of the additive Schwarz operator $P_{\mathrm{ad}}$ is $\mathbf P_{\mathrm{ad}} = \mathbf C_{\mathrm{ad}}^{-1}\mathbf A$, where
\[
\mathbf C_{\mathrm{ad}}^{-1} = \sum_{j=0}^{J} \mathbf R_j^\top \mathbf A_j^{-1} \mathbf R_j,
\]
with $\mathbf R_j$ being the matrix representation of the restriction operator from the space $S$ onto $S_j$, and $\mathbf R_j^\top$ its transpose. Assume that $\{\phi_i : i = 1,\ldots,M\}$ is a basis for $S$, and $\{\phi_i : i = N_{j-1}+1,\ldots,N_j\}$ is a basis for $S_j$, $j = 1,\ldots,J$. Then, for $j = 1,\ldots,J$, the $(\ell,k)$-th entry of $\mathbf R_j$, $c_{\ell k}^{(j)}$, is defined by
\[
c_{\ell k}^{(j)} :=
\begin{cases}
1 & \text{if } \ell = k \in \{N_{j-1}+1,\ldots,N_j\},\\
0 & \text{otherwise}.
\end{cases}
\]
The simple form of the projection matrices $\mathbf R_j$ is due to the fact that the same basis functions are used in $S$ and in the subspaces $S_j$, $j = 1,\ldots,J$. In the case of the two-level non-overlapping method, the matrices $\mathbf A_j$ are simply the diagonal blocks of $\mathbf A$, and the preconditioner $\mathbf C_{\mathrm{ad}}^{-1}$ can be written as
\[
\mathbf C_{\mathrm{ad}}^{-1} = \mathbf R_0^\top \mathbf A_0^{-1} \mathbf R_0 +
\begin{pmatrix}
\mathbf A_1 & & & \\
& \mathbf A_2 & & \\
& & \ddots & \\
& & & \mathbf A_J
\end{pmatrix}^{-1}.
\]
Fig. 9.1 Hat basis functions $\phi_i^\ell$ on level $\ell$ and $\phi_{2i-1}^{\ell+1}$, $\phi_{2i}^{\ell+1}$, $\phi_{2i+1}^{\ell+1}$ on level $\ell+1$
In the case of the two-level overlapping method, the subblocks $\mathbf A_j$, $j = 1,\ldots,J$, overlap, and we only have
\[
\mathbf C_{\mathrm{ad}}^{-1} = \mathbf R_0^\top \mathbf A_0^{-1} \mathbf R_0 +
\begin{pmatrix}
\mathbf A_1^{-1} & & & \\
& \mathbf A_2^{-1} & & \\
& & \ddots & \\
& & & \mathbf A_J^{-1}
\end{pmatrix}.
\]
In the implementation, the matrix $\mathbf C_{\mathrm{ad}}^{-1}$ is not assembled explicitly; rather, Algorithm 2.7 is used.
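The action of the two-level preconditioner can be sketched in a few lines (an illustrative sketch in the spirit of Algorithm 2.7, not the maiprogs implementation; index lists play the role of the Boolean matrices $\mathbf R_j$, dense Gaussian elimination stands in for factorizations of the small matrices $\mathbf A_j$, and all function names are ours):

```python
def solve_dense(A, b):
    """Gaussian elimination with partial pivoting (a stand-in for a
    precomputed factorization of a small coarse/subdomain matrix)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def apply_two_level_as(r, A_blocks, dofs, A0, R0):
    """z = R0^T A0^{-1} R0 r + sum_j Rj^T Aj^{-1} Rj r.
    dofs[j] lists the fine dofs of subspace S_j (so each R_j is just an
    index selection); R0 is a dense coarse restriction matrix."""
    n = len(r)
    z = [0.0] * n
    # coarse correction
    r0 = [sum(R0[i][k] * r[k] for k in range(n)) for i in range(len(R0))]
    y0 = solve_dense(A0, r0)
    for i in range(len(R0)):
        for k in range(n):
            z[k] += R0[i][k] * y0[i]
    # local subdomain corrections
    for Aj, idx in zip(A_blocks, dofs):
        yj = solve_dense(Aj, [r[i] for i in idx])
        for loc, i in enumerate(idx):
            z[i] += yj[loc]
    return z

# tiny demo: 1D Laplacian, two non-overlapping blocks, constant coarse space
n = 6
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
dofs = [[0, 1, 2], [3, 4, 5]]
A_blocks = [[[A[i][j] for j in idx] for i in idx] for idx in dofs]
R0 = [[1.0] * n]                       # one coarse basis function
A0 = [[sum(R0[0][i] * A[i][j] * R0[0][j] for i in range(n) for j in range(n))]]
cols = [apply_two_level_as([1.0 if k == i else 0.0 for k in range(n)],
                           A_blocks, dofs, A0, R0) for i in range(n)]
sym_err = max(abs(cols[i][j] - cols[j][i]) for i in range(n) for j in range(n))
print(sym_err)   # C_ad^{-1} is symmetric, so this is ~0
```

Each term $\mathbf R_j^\top \mathbf A_j^{-1}\mathbf R_j$ is symmetric positive semidefinite, which the demo confirms numerically by assembling the columns of $\mathbf C_{\mathrm{ad}}^{-1}$.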
where, for each = 0, . . . , L, the space S has as basis functions φi , i = 1, . . . , N = dim(S ). Here the restriction matrix R from S +1 to S is an N × N+1 matrix whose entries ci j , i = 1, . . . , N and j = 1, . . . , N+1 , are defined via the following relation between the basis functions on level and level + 1:
φi =
N+1
∑ ci j φ j+1 .
j=1
In the case of hat functions on a uniform mesh, as described in Section 5.1.1, the above formula becomes (see Figure 9.1) 1 +1 1 +1 φi = φ2i−1 + φ2i+1 + φ2i+1 , 2 2
i = 1, . . . , N .
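The two-scale relation determines the entries of $\mathbf R_\ell$ row by row. A minimal sketch (assuming hat functions on a uniform mesh of $(-1,1)$ that vanish at $\pm 1$, as in Section 5.1.1; the function names are ours), which also verifies the relation pointwise:

```python
def hat(x, nodes, i):
    """Hat function centered at node i (1-based interior index) of the
    mesh nodes[0..n+1]; it vanishes outside (nodes[i-1], nodes[i+1])."""
    xl, xc, xr = nodes[i - 1], nodes[i], nodes[i + 1]
    if xl <= x <= xc:
        return (x - xl) / (xc - xl)
    if xc < x <= xr:
        return (xr - x) / (xr - xc)
    return 0.0

def restriction(N_coarse):
    """R_l: N_l x N_{l+1} matrix with rows [.., 1/2, 1, 1/2, ..] from
    phi_i^l = 1/2 phi_{2i-1}^{l+1} + phi_{2i}^{l+1} + 1/2 phi_{2i+1}^{l+1}."""
    N_fine = 2 * N_coarse + 1
    R = [[0.0] * N_fine for _ in range(N_coarse)]
    for i in range(1, N_coarse + 1):      # 1-based coarse index
        R[i - 1][2 * i - 2] = 0.5         # fine basis function 2i-1
        R[i - 1][2 * i - 1] = 1.0         # fine basis function 2i
        R[i - 1][2 * i] = 0.5             # fine basis function 2i+1
    return R

# verify the two-scale relation pointwise on a sample of points
Nc = 7
Nf = 2 * Nc + 1
coarse = [-1 + 2 * k / (Nc + 1) for k in range(Nc + 2)]
fine = [-1 + 2 * k / (Nf + 1) for k in range(Nf + 2)]
R = restriction(Nc)
err = 0.0
for i in range(1, Nc + 1):
    for t in range(200):
        x = -1 + 2 * t / 199
        combo = sum(R[i - 1][j - 1] * hat(x, fine, j)
                    for j in range(1, Nf + 1))
        err = max(err, abs(hat(x, coarse, i) - combo))
print(err)   # vanishes up to rounding
```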
In the case of the Haar basis a similar relation holds:
\[
\psi_0^\ell = \psi_0^{\ell+1} \equiv 1, \qquad
\psi_i^\ell = \frac12\, \psi_{2i-1}^{\ell+1} + \psi_{2i}^{\ell+1} + \frac12\, \psi_{2i+1}^{\ell+1}, \qquad i = 1,\ldots,N_\ell - 1.
\]
These relations between basis functions on different levels define the restriction matrix $\mathbf R_\ell$. Each subspace $S_\ell$ is further decomposed as $S_\ell = \operatorname{span}\{\phi_1^\ell\} \oplus \cdots \oplus \operatorname{span}\{\phi_{N_\ell}^\ell\}$. Consequently, the multilevel additive Schwarz preconditioner becomes a multilevel diagonal preconditioner, and Algorithm 2.7 becomes Algorithm 9.1.
Algorithm 9.1: The matrix form of the multilevel additive Schwarz preconditioner

To compute $\mathbf z = \mathbf C_{\mathrm{ad}}^{-1}\mathbf r$ for a given $\mathbf r$:
for $j = 0,\ldots,L$ do
  Compute $\mathbf R_j \mathbf r \in \mathbb R^{N_j}$
  Compute $\mathbf y_j = \mathbf D_j^{-1}\mathbf R_j \mathbf r \in \mathbb R^{N_j}$, where $\mathbf D_j$ is the diagonal matrix defined from the diagonal of $\mathbf A_j$
end for
$\mathbf z \leftarrow \sum_{j=0}^{L} \mathbf R_j^\top \mathbf y_j \in \mathbb R^N$

Note that the $\mathbf R_j$ are sparse matrices. Therefore, in both Algorithm 2.7 and Algorithm 9.1, multiplying $\mathbf R_j$ or $\mathbf R_j^\top$ with a vector costs only $O(N)$ operations, in contrast to $O(N^2)$ in the case of a full matrix.

The following numerical results have been reported in [120, 143, 210, 211, 222, 228, 229]. They are reprinted here with copyright permission.
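Algorithm 9.1 can be sketched as follows (illustrative only; here the restriction from the finest level to level $j$ is realized by composing the level-to-level matrices $\mathbf R_\ell$, `diags[j]` stands for the diagonal of the level-$j$ Galerkin matrix, and the tiny two-level example at the end is ours):

```python
def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

def matvec_T(M, v):
    # multiply by the transpose of M
    out = [0.0] * len(M[0])
    for i in range(len(M)):
        for k in range(len(M[0])):
            out[k] += M[i][k] * v[i]
    return out

def apply_mlas_diag(r, Rs, diags):
    """z = C_ad^{-1} r as in Algorithm 9.1.
    Rs[l] restricts level l+1 to level l (l = 0..L-1); diags[j] is the
    diagonal of A_j on level j; the input r lives on the finest level L."""
    L = len(diags) - 1
    rs = [None] * (L + 1)          # rs[j] = R_j ... R_{L-1} r
    rs[L] = list(r)
    for l in range(L - 1, -1, -1):
        rs[l] = matvec(Rs[l], rs[l + 1])
    z = [0.0] * len(r)
    for j in range(L + 1):
        yj = [rs[j][i] / diags[j][i] for i in range(len(rs[j]))]  # D_j^{-1}
        for l in range(j, L):      # prolongate back to the finest level
            yj = matvec_T(Rs[l], yj)
        for k in range(len(r)):
            z[k] += yj[k]
    return z

# two levels: one coarse hat (row [1/2, 1, 1/2]) over three fine hats
Rs = [[[0.5, 1.0, 0.5]]]
diags = [[1.0], [2.0, 2.0, 2.0]]
z = apply_mlas_diag([1.0, 2.0, 3.0], Rs, diags)
print(z)   # -> [2.5, 5.0, 3.5]
```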
9.2 Numerical Results for the h-Version

In this section, we report on the numerical experiments with the weakly-singular and hypersingular integral equations studied in Chapter 3, Chapter 5, and Chapter 8, when the h-version Galerkin boundary element method is used. The right-hand sides of both equations (1.1) and (1.2) are chosen to be $f(x) \equiv 1$ and $g(x) \equiv 1$. For each equation, we solve with different preconditioners, namely, the two-level non-overlapping additive Schwarz preconditioner, multilevel methods, and the BPX preconditioner, which was introduced in [40] for partial differential equations. In [80] the BPX preconditioner is studied for the weakly-singular integral equation. We note that the condition numbers are computed by using the Lanczos algorithm incorporated in the iterative solvers (CG, PCG, GMRES, and PGMRES); see [83].

The results for Chapter 3 and Chapter 5, involving the boundary integral equations arising from the Laplace equation, are reported in Tables 9.1–9.4 and Figures 9.2–9.5. The conjugate gradient method (Algorithm 2.1) is used to solve the unpreconditioned system, while the preconditioned conjugate gradient method (Algorithm 2.3) is used to solve the preconditioned system. The results for Chapter 8, involving the boundary integral equations arising from the Helmholtz equation, are reported in Table 9.5 and Table 9.6. For these equations, GMRES and preconditioned GMRES (Algorithms 2.2 and 2.4) are used.

                     Condition number                        Number of iterations
N       A           two-level  multilevel  BPX        CG    two-level  multilevel  BPX
16      15.7545     2.1310     5.1005      4.0543     8     8          8           8
32      32.6847     2.6175     6.0171      4.8520     20    12         11          12
64      65.1292     2.9767     6.9188      8.8454     33    16         14          17
128     129.6566    3.1926     7.8182      10.4593    45    19         16          19
256     259.8891    3.3153     8.7199      12.1835    66    21         17          22
512     517.1460    3.3777     9.6253      14.0330    85    21         19          23
1024    1036.1733   3.4108     10.5344     16.0186    125   22         19          24
2048    2066.6248   3.4255     11.4468     18.1483    168   22         19          26
4096    4119.1740   3.4338     12.3619     20.4271    226   23         19          29
8192                3.4361     13.2791     22.8595          23         19          30
16384               3.4370     14.1980     25.4481          23         19          31
32768               3.4379     15.1182     28.1890          23         19          32
65536               3.4406     16.0395     31.1033          24         19          33

Table 9.1 Weakly-singular integral equation (3.2) with f(x) ≡ 1. [Theory: Chapter 3, Section 3.1.1.2 (two-level); Chapter 5, Section 5.1.2 (multilevel); and References [40, 80] (BPX).]
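The roughly linear growth of the condition number of $\mathbf A$ in Table 9.1 is easy to reproduce: for piecewise constants on $\Gamma = (-1,1)$ the entries of the Galerkin matrix of the single-layer operator $V_0$ have a closed form, obtained from an antiderivative $F$ with $F''(t) = \log|t|$, and the extreme eigenvalues can be estimated by (inverse) power iteration. (A self-contained sketch, not the maiprogs implementation; the exact-quadrature shortcut and all names are ours.)

```python
import math

def F(t):
    # antiderivative with F''(t) = log|t|
    return 0.0 if t == 0.0 else 0.5 * t * t * math.log(abs(t)) - 0.75 * t * t

def galerkin_V0(n):
    """Galerkin matrix of V0 v(x) = -(1/pi) int_G log|x-y| v(y) dy for
    piecewise constants on a uniform mesh of Gamma = (-1, 1):
    int_a^b int_c^d log|x-y| dy dx = F(b-c) - F(b-d) - F(a-c) + F(a-d)."""
    h = 2.0 / n
    pts = [-1.0 + k * h for k in range(n + 1)]
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        a, b = pts[i], pts[i + 1]
        for j in range(n):
            c, d = pts[j], pts[j + 1]
            A[i][j] = -(F(b - c) - F(b - d) - F(a - c) + F(a - d)) / math.pi
    return A

def solve(A, b):  # Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rayleigh(A, x):
    Ax = [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(x))]
    return sum(x[i] * Ax[i] for i in range(len(x))) / sum(v * v for v in x)

def cond_estimate(A, iters=200):
    n = len(A)
    x = [1.0 + 0.01 * k for k in range(n)]
    for _ in range(iters):                 # power iteration -> lambda_max
        x = [sum(A[i][k] * x[k] for k in range(n)) for i in range(n)]
        s = max(abs(v) for v in x); x = [v / s for v in x]
    y = [1.0 + 0.01 * k for k in range(n)]
    for _ in range(iters):                 # inverse iteration -> lambda_min
        y = solve(A, y)
        s = max(abs(v) for v in y); y = [v / s for v in y]
    return rayleigh(A, x) / rayleigh(A, y)

c16 = cond_estimate(galerkin_V0(16))
c32 = cond_estimate(galerkin_V0(32))
print(c16, c32, c32 / c16)   # the condition number roughly doubles with N
```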
Fig. 9.2 (condition number versus number of unknowns for CG, the 2-level method, the multilevel (Haar-basis) method, and BPX) Weakly-singular integral equation (3.2) with f(x) ≡ 1. [Theory: Chapter 3, Section 3.1.1.2 (two-level); Chapter 5, Section 5.1.2 (multilevel); and Reference [40, 80] (BPX).]
                    Condition number                 Number of iterations
N       A           two-level  multilevel     CG     two-level  multilevel
15      7.7629      2.1475     3.0353         8      7          8
31      15.5445     2.2072     3.4613         11     11         11
63      31.1092     2.2162     3.7561         17     12         14
127     62.4163     2.2276     3.9714         26     13         16
255     125.0924    2.2299     4.1335         38     13         17
511     250.4733    2.2262     4.2578         55     12         17
1023    501.2394    2.2250     4.3545         78     12         17
2047    1002.7757   2.2236     4.4308         109    12         17
4095    2005.8634   2.2225     4.4917         154    12         18
8191                2.2160     4.5408                11         18
16383               2.2153     4.5808                11         18
32767               2.2150     4.6138                11         18
65535               2.2067     4.6413                10         18

Table 9.2 Hypersingular integral equation (3.1) with g(x) ≡ 1. [Theory: Chapter 3, Section 3.1.1.1 (two-level); Chapter 5, Section 5.1.1 (multilevel).]
                 Condition number                            Iterations
DoF    CG                  MAS     NP       OP        CG    MAS   NP    OP
3      0.2012E+01          1.644                      2     2
7      0.3865E+01 (0.77)   2.407   2.608    3.026     4     4     4     4
15     0.7578E+01 (0.88)   3.035   3.597    2.839     5     6     6     6
31     0.1535E+02 (0.97)   3.461   4.667    2.561     9     8     7     7
63     0.3089E+02 (0.99)   3.755   5.826    2.468     13    9     8     7
127    0.6220E+02 (1.00)   3.971   7.115    2.550     19    10    9     7
255    0.1249E+03 (1.00)   4.131   8.564    2.786     27    10    9     8
511    0.2503E+03 (1.00)   4.255   10.190   3.120     39    11    10    8
1023   0.5010E+03 (1.00)   4.350   12.010   3.505     55    11    11    8
2047   0.1003E+04 (1.00)   4.425   14.010   3.933     79    11    11    8

Table 9.3 Hypersingular integral equation: Condition number and number of iterations of different methods. CG: without preconditioner (number in brackets: order of increase); MAS: multilevel additive Schwarz method; NP: nonoverlapping preconditioner with H = 1/2; OP: overlapping preconditioner with H = 1/2 and δ = h = 2/DoF. [Theory: Chapter 3, Section 3.1.1.1 (NP), Section 3.1.2.1 (OP); Chapter 5, Section 5.1.1 (MAS).]
Fig. 9.3 (condition number versus number of unknowns for CG, the 2-level method, and the multilevel method) Hypersingular integral equation (3.1) with g(x) ≡ 1. [Theory: Chapter 3, Section 3.1.1.1 (two-level); Chapter 5, Section 5.1.1 (multilevel).]
Overlapping additive Schwarz: condition number

           H/δ = 2                       H/δ = 4
DoF    δ=h     δ=2h    δ=4h       δ=h     δ=2h    δ=4h
7      3.026   2.838              2.515
15     3.035   3.262   2.876      2.839   2.554
31     3.087   3.266   3.328      2.930   2.900   2.577
63     3.103   3.294   3.342      2.896   2.986   2.932
127    3.117   3.302   3.367      2.900   2.955   3.015
255    3.123   3.303   3.371      2.895   2.957   2.985
511    3.125   3.302   3.371      2.890   2.952   2.987
1023   3.124   3.299   3.369      2.885   2.947   2.982
2047   3.122   3.298   3.367      2.882   2.943   2.976

           H/δ = 8                       H/δ = 16
DoF    δ=h     δ=2h    δ=4h       δ=h     δ=2h    δ=4h
15     2.298
31     2.561   2.320              2.238
63     2.597   2.594   2.333      2.468   2.219
127    2.621   2.630   2.613      2.480   2.472   2.208
255    2.614   2.654   2.648      2.482   2.495   2.478
511    2.605   2.648   2.673      2.460   2.490   2.501
1023   2.597   2.639   2.666      2.442   2.467   2.496
2047   2.592   2.632   2.657      2.429   2.449   2.474

           H/δ = 32                      H/δ = 64
DoF    δ=h     δ=2h    δ=4h       δ=h     δ=2h    δ=4h
63     2.388
127    2.550   2.272              2.631
255    2.575   2.567   2.259      2.786   2.609
511    2.534   2.568   2.564      2.797   2.764   2.600
1023   2.499   2.529   2.566      2.752   2.775   2.755
2047   2.475   2.495   2.529      2.716   2.731   2.766

           H/δ = 128                     H/δ = 256
DoF    δ=h     δ=2h    δ=4h       δ=h     δ=2h    δ=4h
255    2.954
511    3.120   2.927              3.338
1023   3.116   3.092   2.915      3.505   3.306
2047   3.056   3.090   3.079      3.499   3.473   3.292

           H/δ = 512                     H/δ = 1024
DoF    δ=h     δ=2h               δ=h
1023   3.773
2047   3.933   3.737              4.253

Table 9.4 Hypersingular integral equation: overlapping additive Schwarz, h = 2/DoF fine mesh size, δ: overlap size, H: coarse mesh size. Empty cells correspond to combinations that are not available. [Theory: Chapter 3, Section 3.1.2.1.]
Fig. 9.4 (condition number plotted against H/δ) Hypersingular integral equation: Condition number of the overlapping preconditioned system as a function of H/δ when DoF = 2047. H: coarse mesh size, δ: overlap size. [Theory: Chapter 3, Section 3.1.2.1.]
DoF    H (NP)   H (OP)
7      1/2      1/2
15     1/2      1/2
31     1/2      1/2
63     1/2      1/2
127    1/4      1/4
255    1/8      1/8
511    1/32     1/16
1023   1/64     1/32
2047   1/128    1/64

Fig. 9.5 (CPU time plotted against degrees of freedom) Hypersingular integral equation: CPU time of different methods. CG: without preconditioner, MAS: multilevel additive Schwarz, NP: non-overlapping, OP: overlapping, with H chosen as in the side table and δ = h = 2/DoF for OP. [Theory: Chapter 3 and Chapter 5.]
Number of iterations

                          PGMRES with Q(1)               PGMRES with Q(2)
κ    N      GMRES    H/h=2   4     8     16         H/h=2   4     8     16
2    32     18       16      14    10    7          16      14    10    8
     64     26       21      22    16    11         21      22    16    11
     128    34       21      24    24    18         21      24    24    18
     256    44       22      24    27    27         22      24    27    27
     512    55       22      25    27    29         22      25    27    29
     1024   69       23      25    27    30         23      25    27    30
10   32     17       16      15    11    7          16      16    14    13
     64     28       25      23    17    12         25      23    19    15
     128    37       24      29    26    20         24      29    26    21
     256    46       23      27    32    28         23      27    32    29
     512    58       23      27    30    35         23      27    30    35
     1024   72       24      27    29    33         24      27    29    33

CPU times

                          PGMRES with Q(1)               PGMRES with Q(2)
κ    N      GMRES    H/h=2   4     8     16         H/h=2   4     8     16
2    32     0.00     0.01    0.01  0.00  0.00       0.01    0.01  0.00  0.01
     64     0.01     0.02    0.02  0.01  0.01       0.02    0.02  0.01  0.01
     128    0.04     0.04    0.04  0.04  0.03       0.05    0.05  0.05  0.04
     256    0.23     0.18    0.15  0.16  0.17       0.20    0.18  0.20  0.21
     512    1.77     1.10    0.88  0.89  0.96       1.19    1.01  1.03  1.11
     1024   13.14    6.97    5.08  5.15  5.67       7.01    5.17  5.24  5.77
10   32     0.00     0.01    0.01  0.00  0.00       0.01    0.01  0.01  0.01
     64     0.01     0.02    0.02  0.01  0.01       0.02    0.02  0.02  0.01
     128    0.04     0.05    0.05  0.05  0.04       0.06    0.06  0.05  0.05
     256    0.24     0.20    0.19  0.21  0.19       0.25    0.23  0.28  0.24
     512    1.85     1.15    1.02  1.06  1.24       1.33    1.16  1.25  1.43
     1024   13.69    7.03    5.31  5.38  6.06       7.30    5.86  6.06  6.60

Table 9.5 Weakly-singular integral equation (8.1) for the Helmholtz equation with f(x) ≡ 1 and wave number κ = 2 and κ = 10. The two preconditioners Q(1) and Q(2) are defined in Definition 8.1. [Theory: Chapter 8, Section 8.3.1.]
        Condition number         Number of iterations
N       A        Q(1)        GMRES with A    PGMRES with Q(1)
16      9.04     2.17        8               8
32      12.46    2.22        12              11
64      24.90    2.24        19              14
128     49.77    2.26        29              15
256     99.50    2.26        43              15
512     198.96   2.26        62              15

Table 9.6 Hypersingular integral equation (8.2) with g(x) ≡ 1 and wave number κ = 2. The preconditioner Q(1) is defined in Definition 8.1. [Theory: Chapter 8, Section 8.2.1.]
9.3 Numerical Results for the p-Version

In this section we report on the numerical results obtained when the p-version Galerkin boundary element method is used to solve the boundary integral equations in question. These numerical results underline the theoretical results obtained in Chapter 4, Chapter 6, and Chapter 8 (Sections 8.2.2 and 8.3.2). For additive Schwarz preconditioners, see Tables 9.7–9.20 and Figures 9.6–9.7. For multiplicative Schwarz preconditioners, see Figures 9.8–9.11.
         Minimum eigenvalue               Maximum eigenvalue               Condition number
p     CG        two-level multilevel   CG     two-level multilevel   CG        two-level multilevel
1     5.41e-2   0.55      0.53         0.52   1.43      1.47         9.67      2.59      2.75
2     1.87e-2   0.39      0.38         0.52   1.43      1.47         27.90     3.65      3.88
3     9.10e-3   0.33      0.29         0.52   1.49      1.52         57.57     4.46      5.27
4     5.16e-3   0.29      0.23         0.52   1.49      1.52         101.53    5.15      6.50
5     3.22e-3   0.26      0.20         0.52   1.52      1.54         162.54    5.77      7.82
6     2.15e-3   0.24      0.17         0.52   1.52      1.54         243.82    6.32      9.06
7     1.50e-3   0.23      0.15         0.52   1.54      1.54         348.35    6.83      10.37
8     1.09e-3   0.21      0.13         0.52   1.55      1.55         479.38    7.29      11.62
9     8.19e-4   0.20      0.12         0.52   1.56      1.55         639.93    7.73      12.91
10    6.29e-4   0.19      0.11         0.52   1.56      1.55         833.20    8.14      14.16

Table 9.7 Weakly-singular integral equation (4.2) with f(x) ≡ 1: p-version with two subintervals. [Theory: Chapter 4, Section 4.1.1.2 (two-level) and Chapter 6, Section 6.2 (multilevel).]
N = 16
              Condition number           Iterations      CPU (sec)
p     DoF    CG                 PCG      CG     PCG      CG      PCG
1     32     114.78             3.75     23     17       0.00    0.00
2     48     292.68 (1.35)      4.30     44     18       0.01    0.01
3     64     590.10 (1.73)      5.60     62     22       0.01    0.01
4     80     1013.59 (1.88)     6.02     80     23       0.02    0.01
5     96     1604.98 (2.06)     6.97     103    24       0.03    0.01
6     112    2374.78 (2.15)     7.32     113    25       0.04    0.02
7     128    3365.43 (2.26)     8.08     133    26       0.06    0.02
8     144    4587.78 (2.32)     8.39     153    26       0.08    0.02
9     160    6084.37 (2.40)     9.03     169    26       0.11    0.03
10    176    7865.87 (2.44)     9.31     195    27       0.15    0.03
11    192    9974.92 (2.49)     9.86     210    28       0.19    0.04
12    208    12422.04 (2.52)    10.12    231    28       0.26    0.05
13    224    15249.94 (2.56)    10.61    246    28       0.32    0.05
14    240    18469.06 (2.58)    10.84    272    28       0.40    0.06
15    256    22122.19 (2.62)    11.28    291    29       0.49    0.07
16    272    26219.49 (2.63)    11.50    327    29       0.63    0.08
17    288    30804.54 (2.66)    11.90    354    30       0.76    0.09
18    304    35881.72 (2.67)    12.10    363    30       0.91    0.10
19    320    41507.71 (2.69)    12.47    393    30       1.11    0.11
20    336    47966.68 (2.82)    12.65    415    30       1.26    0.12

Table 9.8 Weakly-singular integral equation with f(x) ≡ 1, p-version. N: number of subintervals, CG: without preconditioner, PCG: non-overlapping additive Schwarz preconditioner. [Theory: Chapter 4, Section 4.1.1.2.]
9.3 Numerical Results for the p-Version
N = 32:

p    DoF  Cond. CG        Cond. PCG  Iter CG  Iter PCG  CPU CG  CPU PCG
1    64   231.20          3.79       48       22        0.01    0.01
2    96   588.05 (1.35)   4.32       76       20        0.02    0.01
3    128  1184.97 (1.73)  5.65       105      24        0.05    0.02
4    160  2034.33 (1.88)  6.06       132      23        0.09    0.03
5    192  3220.56 (2.06)  7.01       156      26        0.14    0.04
6    224  4764.08 (2.15)  7.36       181      25        0.20    0.04
7    256  6750.46 (2.26)  8.13       220      28        0.31    0.06
8    288  9200.85 (2.32)  8.44       247      26        0.43    0.07
9    320  12200.98 (2.40) 9.08       277      27        0.59    0.08
10   352  15771.70 (2.44) 9.36       316      27        0.80    0.10
11   384  19998.87 (2.49) 9.92       347      28        1.27    0.13
12   416  24903.08 (2.52) 10.17      378      28        1.58    0.15
13   448  30570.33 (2.56) 10.66      393      29        1.88    0.18
14   480  37021.07 (2.58) 10.89      446      29        2.43    0.20
15   512  44341.31 (2.62) 11.34      485      30        2.96    0.23
16   544  52553.41 (2.63) 11.55      522      30        3.56    0.25
17   576  61736.76 (2.66) 11.96      573      30        4.32    0.28
18   608  72016.98 (2.69) 12.15      614      30        5.18    0.32
19   640  83129.70 (2.65) 12.53      638      31        5.95    0.36
20   672  99704.00 (3.54) 12.71      657      31        6.71    0.39

Table 9.9 Weakly-singular integral equation with f(x) ≡ 1, p-version. N: number of subintervals, CG: without preconditioner, PCG: non-overlapping additive Schwarz preconditioner. Number in brackets: order of increase. CPU in seconds. [Theory: Chapter 4, Section 4.1.1.2.]
N = 64:

p    DoF   Cond. CG         Cond. PCG  Iter CG  Iter PCG  CPU CG  CPU PCG
1    128   463.25           3.81       72       22        0.03    0.02
2    192   1177.47 (1.35)   4.33       109      20        0.09    0.03
3    256   2372.38 (1.73)   5.66       159      24        0.22    0.06
4    320   4072.30 (1.88)   6.07       198      23        0.42    0.07
5    384   6446.51 (2.06)   7.03       251      26        0.75    0.11
6    448   9535.54 (2.15)   7.38       286      25        1.15    0.14
7    512   13510.86 (2.26)  8.14       330      26        1.73    0.18
8    576   18414.54 (2.32)  8.45       385      26        2.53    0.22
9    640   24418.31 (2.40)  9.10       429      27        3.49    0.27
10   704   31563.67 (2.44)  9.37       470      28        4.64    0.34
11   768   40022.60 (2.49)  9.93       534      28        7.72    0.48
12   832   49836.07 (2.52)  10.18      559      30        10.29   0.63
13   896   61176.35 (2.56)  10.68      658      29        15.11   0.75
14   960   74084.03 (2.58)  10.91      696      29        19.51   0.90
15   1024  88731.70 (2.62)  11.35      717      30        24.10   1.10
16   1088  105137.80 (2.63) 11.56      828      31        31.41   1.28
17   1152  123573.94 (2.67) 11.97      819      31        35.12   1.45
18   1216  143949.95 (2.67) 12.17      936      32        45.32   1.81
19   1280  166169.50 (2.65) 12.54      1016     31        55.22   1.82
20   1344  197207.23 (3.34) 12.71      1106     31        80.17   2.03

Table 9.10 Weakly-singular integral equation with f(x) ≡ 1, p-version. N: number of subintervals, CG: without preconditioner, PCG: non-overlapping additive Schwarz preconditioner. Number in brackets: order of increase. CPU in seconds. [Theory: Chapter 4, Section 4.1.1.2.]
N = 128:

p    DoF   Cond. CG         Cond. PCG  Iter CG  Iter PCG  CPU CG  CPU PCG
1    256   926.93           3.81       106      22        0.15    0.07
2    384   2355.64 (1.35)   4.33       163      20        0.48    0.10
3    512   4745.99 (1.73)   5.66       233      24        1.22    0.19
4    640   8146.45 (1.88)   6.07       268      23        2.17    0.25
5    768   12895.73 (2.06)  7.03       348      26        4.70    0.45
6    896   19074.80 (2.15)  7.38       421      27        9.60    0.72
7    1024  27026.73 (2.26)  8.15       495      26        16.58   0.97
8    1152  36835.55 (2.32)  8.45       579      28        25.01   1.33
9    1280  48844.85 (2.40)  9.10       637      28        34.78   1.65
10   1408  63137.53 (2.44)  9.37       686      29        46.38   2.11
11   1536  80057.70 (2.49)  9.93       750      28        62.21   2.47
12   1664  99687.17 (2.52)  10.18      875      30        86.89   3.15
13   1792  122370.66 (2.56) 10.68      932      29        163.47  3.60
14   1920  148189.27 (2.58) 10.91      1012     31        139.20  4.48
15   2048  177488.10 (2.61) 11.36      1135     31        198.52  5.63
16   2176  210342.04 (2.63) 11.57      1198     32        212.50  5.91
17   2304  247110.42 (2.66) 11.98      1265     31        248.65  6.36
18   2432  287901.31 (2.67) 12.17      1369     32        299.08  7.25
19   2560  332784.60 (2.68) 12.55      1366     31        329.19  7.75
20   2688  384781.09 (2.83) 12.74      1498     33        397.32  9.06

Table 9.11 Weakly-singular integral equation with f(x) ≡ 1, p-version. N: number of subintervals, CG: without preconditioner, PCG: non-overlapping additive Schwarz preconditioner. Number in brackets: order of increase. CPU in seconds. [Theory: Chapter 4, Section 4.1.1.2.]
Numbers of iterations (PCG with multilevel and two-level additive Schwarz preconditioners):

      N = 2                            N = 4
p     CG   multilevel  two-level      CG   multilevel  two-level
1     4    2           2              8    4           4
2     6    3           3              15   6           6
3     9    4           4              20   8           8
4     12   5           5              27   10          10
5     15   6           6              36   12          10
6     17   7           6              43   14          11
7     21   8           6              52   15          11
8     24   9           6              56   17          11
9     27   10          6              65   19          12
10    33   11          6              78   22          13

Table 9.12 Weakly-singular integral equation (4.2) with f(x) ≡ 1: p-version. N: number of subintervals. [Theory: Chapter 4, Section 4.1.1.2 (two-level); Chapter 6, Section 6.2 (multilevel).]
[Figure: condition number versus polynomial degree p = 1, ..., 10; curves for CG, the two-level preconditioner, and the multilevel preconditioner, N = 2.]
Fig. 9.6 Weakly-singular integral equation (4.2) with f (x) ≡ 1: p-version with two subintervals, using CG and PCG with two-level and multilevel additive Schwarz preconditioners. [Theory: Chapter 4, Section 4.1.1.2 (two-level); Chapter 6, Section 6.2 (multilevel).]
p-version with two subintervals (the eigenvalue entries 8.58e-3 and 6.01e-3 for p = 7, 8 are corrected here from a misprint, consistent with the listed condition numbers):

      Minimum eigenvalue              Maximum eigenvalue           Condition number
p     CG       two-lev.  multilev.    CG    two-lev.  multilev.    CG      two-lev.  multilev.
2     0.21     0.55      0.55         0.93  1.27      1.27         4.44    2.31      2.31
3     7.40e-2  0.39      0.39         0.94  1.39      1.39         12.75   3.55      3.55
4     3.62e-2  0.33      0.29         0.94  1.39      1.43         26.04   4.16      4.85
5     2.06e-2  0.29      0.24         0.94  1.40      1.47         45.88   4.84      6.14
6     1.29e-2  0.26      0.20         0.94  1.40      1.48         73.32   5.31      7.41
7     8.58e-3  0.24      0.17         0.94  1.41      1.50         109.94  5.84      8.69
8     6.01e-3  0.23      0.15         0.94  1.40      1.50         156.98  6.21      9.95
9     4.37e-3  0.21      0.13         0.94  1.40      1.51         215.97  6.61      11.24
10    3.27e-3  0.20      0.12         0.94  1.40      1.52         288.23  6.95      12.50

Table 9.13 Hypersingular integral equation (4.1) with f(x) ≡ 1: p-version with two subintervals. [Theory: Chapter 8, Section 8.2.2.]
N = 16:

p    DoF  Cond. CG        Cond. NP  Cond. OP  Iter CG  NP  OP  CPU CG  NP    OP
2    31   11.09           3.75      2.94      17       17  15  0.02    0.04  0.05
3    47   19.98 (1.45)    4.30      3.01      26       17  17  0.04    0.05  0.07
4    63   41.00 (2.50)    5.60      3.09      42       22  18  0.08    0.09  0.10
5    79   69.18 (2.34)    6.02      3.10      52       22  17  0.13    0.11  0.12
6    95   110.14 (2.55)   6.97      3.13      62       23  19  0.20    0.13  0.16
7    111  162.09 (2.51)   7.32      3.13      73       24  18  0.28    0.16  0.18
8    127  230.26 (2.63)   8.08      3.15      91       25  18  0.42    0.20  0.21
9    143  313.13 (2.61)   8.39      3.15      99       25  18  0.54    0.23  0.25
10   159  415.83 (2.69)   9.03      3.16      113      26  19  0.74    0.28  0.30
11   175  536.86 (2.68)   9.31      3.16      124      26  19  0.99    0.34  0.36
12   191  681.36 (2.74)   9.86      3.17      136      27  19  1.25    0.39  0.41
13   207  847.83 (2.73)   10.12     3.17      158      27  19  1.77    0.46  0.48
14   223  1041.39 (2.77)  10.61     3.18      164      27  19  2.11    0.52  0.55
15   239  1260.55 (2.77)  10.84     3.17      184      28  19  2.69    0.60  0.59
16   255  1510.44 (2.80)  11.28     3.18      202      28  19  3.36    0.67  0.67
17   271  1789.53 (2.80)  11.50     3.18      206      28  19  3.99    0.77  0.75
18   287  2103.01 (2.82)  11.90     3.18      221      29  19  4.72    0.87  0.83
19   303  2449.24 (2.82)  12.10     3.18      229      29  19  5.71    1.04  0.92
20   319  2833.71 (2.84)  12.47     3.19      261      29  19  7.38    1.06  0.99

Table 9.14 Hypersingular integral equation with g(x) ≡ 1, p-version. CG: without preconditioner; PCG with non-overlapping preconditioner (NP) and with overlapping preconditioner (OP); N: number of subintervals. Number in brackets: order of increase. CPU in units of 10 seconds. [Theory: Chapter 4, Section 4.1.1.1 and Section 4.1.2.1.]
N = 32:

p    DoF  Cond. CG        Cond. NP  Cond. OP  Iter CG  NP  OP  CPU CG  NP    OP
2    63   18.35           3.79      2.97      29       20  16  0.01    0.01  0.01
3    95   20.14 (0.23)    4.32      3.03      37       19  17  0.01    0.01  0.01
4    127  41.58 (2.52)    5.65      3.11      49       23  18  0.02    0.02  0.02
5    159  69.66 (2.31)    6.06      3.12      63       22  17  0.04    0.02  0.03
6    191  110.93 (2.55)   7.01      3.15      75       23  18  0.06    0.03  0.03
7    223  163.12 (2.50)   7.36      3.16      88       24  18  0.10    0.04  0.04
8    255  231.73 (2.63)   8.13      3.17      106      25  19  0.15    0.05  0.06
9    287  315.03 (2.61)   8.44      3.17      125      25  18  0.22    0.06  0.06
10   319  418.34 (2.69)   9.08      3.19      139      26  19  0.29    0.08  0.08
11   351  540.01 (2.68)   9.36      3.18      156      26  19  0.42    0.10  0.10
12   383  685.33 (2.74)   9.92      3.19      178      27  19  0.58    0.13  0.12
13   415  852.67 (2.73)   10.17     3.19      193      27  19  0.79    0.15  0.13
14   447  1047.28 (2.77)  10.66     3.20      206      28  19  1.00    0.17  0.15
15   479  1267.58 (2.77)  10.89     3.20      218      28  19  1.21    0.19  0.17
16   511  1518.79 (2.80)  11.34     3.20      240      28  19  1.49    0.21  0.19
17   543  1799.42 (2.80)  11.55     3.20      257      28  19  1.82    0.24  0.21
18   575  2114.40 (2.82)  11.96     3.21      274      29  19  2.21    0.27  0.24
19   607  2465.71 (2.84)  12.15     3.21      300      29  19  2.73    0.31  0.27
20   639  2846.90 (2.80)  12.53     3.21      324      30  19  3.29    0.35  0.30

Table 9.15 Hypersingular integral equation with g(x) ≡ 1, p-version. CG: without preconditioner; PCG with non-overlapping preconditioner (NP) and with overlapping preconditioner (OP); N: number of subintervals. Number in brackets: order of increase. CPU in seconds. [Theory: Chapter 4, Section 4.1.1.1 and Section 4.1.2.1.]
N = 64:

p    DoF   Cond. CG        Cond. NP  Cond. OP  Iter CG  NP  OP  CPU CG  NP    OP
2    127   34.90           3.81      2.97      32       19  16  0.01    0.02  0.02
3    191   35.18 (0.02)    4.33      3.04      40       19  17  0.03    0.03  0.04
4    255   43.57 (0.74)    5.66      3.12      56       21  18  0.08    0.05  0.05
5    319   69.78 (2.11)    6.07      3.13      72       22  18  0.15    0.07  0.07
6    383   111.28 (2.56)   7.03      3.16      87       23  18  0.26    0.10  0.10
7    447   163.40 (2.49)   7.38      3.16      102      23  18  0.41    0.13  0.12
8    511   232.18 (2.63)   8.14      3.18      118      25  18  0.62    0.17  0.15
9    575   315.56 (2.60)   8.45      3.18      134      25  18  0.88    0.21  0.19
10   639   419.05 (2.69)   9.10      3.19      155      26  18  1.27    0.27  0.22
11   703   540.88 (2.68)   9.37      3.19      175      26  18  2.22    0.36  0.31
12   767   686.43 (2.74)   9.93      3.20      197      27  19  3.17    0.47  0.40
13   831   854.01 (2.73)   10.18     3.20      213      27  18  4.19    0.58  0.46
14   895   1048.92 (2.77)  10.68     3.20      223      28  19  5.29    0.74  0.58
15   959   1269.53 (2.77)  10.91     3.20      246      28  19  6.83    0.88  0.69
16   1023  1521.11 (2.80)  11.35     3.21      275      28  19  8.94    1.04  0.80
17   1087  1802.05 (2.80)  11.56     3.21      292      28  19  10.90   1.19  0.91
18   1151  2117.51 (2.82)  11.97     3.21      321      29  19  13.68   1.37  1.02
19   1215  2467.33 (2.83)  12.17     3.21      344      29  19  16.72   1.67  1.15
20   1279  2849.45 (2.81)  12.54     3.21      369      29  19  20.28   1.72  1.27

Table 9.16 Hypersingular integral equation with g(x) ≡ 1, p-version. CG: without preconditioner; PCG with non-overlapping preconditioner (NP) and with overlapping preconditioner (OP); N: number of subintervals. Number in brackets: order of increase. CPU in seconds. [Theory: Chapter 4, Section 4.1.1.1 and Section 4.1.2.1.]
N = 128:

p    DoF   Cond. CG        Cond. NP  Cond. OP  Iter CG  NP  OP  CPU CG  NP    OP
2    255   68.59           3.81      2.97      40       19  16  0.06    0.06  0.07
3    383   69.09 (0.02)    4.33      3.04      46       19  16  0.14    0.10  0.10
4    511   70.26 (0.06)    5.66      3.11      62       21  17  0.32    0.17  0.16
5    639   70.29 (0.00)    6.07      3.12      75       21  17  0.61    0.23  0.22
6    767   111.98 (2.55)   7.03      3.15      91       23  17  1.25    0.40  0.34
7    895   163.48 (2.45)   7.38      3.16      109      23  17  2.51    0.62  0.52
8    1023  232.43 (2.64)   8.15      3.17      127      24  18  4.29    0.91  0.75
9    1151  315.70 (2.60)   8.45      3.18      148      24  18  6.47    1.16  0.95
10   1279  419.30 (2.69)   9.10      3.19      170      25  18  9.40    1.50  1.18
11   1407  541.12 (2.68)   9.37      3.19      189      25  18  12.98   1.85  1.45
12   1535  686.76 (2.74)   9.93      3.19      205      26  18  17.23   2.35  1.74
13   1663  854.37 (2.73)   10.18     3.19      220      26  18  22.27   2.80  2.11
14   1791  1049.39 (2.77)  10.68     3.20      242      27  18  29.11   3.53  2.44
15   1919  1270.06 (2.77)  10.91     3.20      266      27  18  37.53   3.99  2.89
16   2047  1521.76 (2.80)  11.36     3.21      291      28  18  111.9   4.78  3.26
17   2175  1802.71 (2.79)  11.57     3.21      305      28  18  55.41   5.30  3.69
18   2303  2118.45 (2.82)  11.98     3.21      341      28  19  68.92   5.87  4.21
19   2431  2468.17 (2.83)  12.17     3.21      360      28  19  80.34   6.52  4.75
20   2559  2852.32 (2.82)  12.55     3.21      394      29  19  97.14   7.41  5.15

Table 9.17 Hypersingular integral equation with g(x) ≡ 1, p-version. CG: without preconditioner; PCG with non-overlapping preconditioner (NP) and with overlapping preconditioner (OP); N: number of subintervals. Number in brackets: order of increase. CPU in seconds. [Theory: Chapter 4, Section 4.1.1.1 and Section 4.1.2.1.]
Numbers of iterations:

p     CG   PCG multilevel  PCG two-level
2     2    2               2
3     3    3               3
4     4    4               4
5     5    5               5
6     7    6               6
7     8    7               7
8     9    8               7
9     12   9               7
10    13   10              7

Table 9.18 Hypersingular integral equation (4.1) with g(x) ≡ 1: p-version. Number of subintervals = 2. [Theory: Chapter 4, Section 4.1.1.1 (two-level) and Chapter 6, Section 6.2 (multilevel).]
Numbers of iterations for GMRES without preconditioner and PGMRES with preconditioners Q(1) and Q(2):

κ = 2:
         N = 16                       N = 32
p   DoF  GMRES  Q(1)  Q(2)      DoF   GMRES  Q(1)  Q(2)
4   80   44     23    23        160   64     25    25
6   112  55     25    25        224   77     27    27
8   144  64     26    27        288   91     29    29
10  176  72     28    28        352   105    31    31
12  208  82     29    29        416   114    32    32
14  240  89     29    29        480   124    33    33
16  272  95     30    30        544   135    33    33
18  304  103    30    30        608   145    34    34
20  336  110    31    31        672   155    35    35

         N = 64                       N = 128
p   DoF  GMRES  Q(1)  Q(2)      DoF   GMRES  Q(1)  Q(2)
4   320  90     26    26        640   119    28    28
6   448  112    28    28        896   149    30    30
8   576  127    30    30        1152  179    32    32
10  704  144    32    32        1408  204    33    33
12  832  159    33    33        1664  222    34    34
14  960  175    34    34        1920  239    35    35
16  1088 190    34    34        2176  258    36    36
18  1216 203    35    35        2432  279    37    37
20  1344 215    36    36        2688  297    38    38

κ = 10:
         N = 16                       N = 32
p   DoF  GMRES  Q(1)  Q(2)      DoF   GMRES  Q(1)  Q(2)
4   80   45     26    27        160   65     31    31
6   112  57     28    29        224   80     33    34
8   144  65     30    31        288   94     35    35
10  176  74     32    32        352   108    37    37
12  208  84     33    33        416   118    38    38
14  240  91     34    34        480   128    39    39
16  272  99     35    35        544   138    40    40
18  304  105    35    36        608   148    40    40
20  336  113    36    37        672   159    40    40

         N = 64                       N = 128
p   DoF  GMRES  Q(1)  Q(2)      DoF   GMRES  Q(1)  Q(2)
4   320  94     29    29        640   124    29    29
6   448  115    32    32        896   154    31    31
8   576  130    34    33        1152  185    33    33
10  704  149    35    35        1408  211    35    35
12  832  164    36    36        1664  226    36    36
14  960  179    37    37        1920  247    37    37
16  1088 195    38    38        2176  268    38    38
18  1216 210    39    39        2432  287    39    39
20  1344 220    40    39        2688  306    39    39

Table 9.19 Weakly-singular integral equation (8.1), p-version with f(x) ≡ 1, wave number κ, number of subintervals N. The preconditioners Q(1) and Q(2) are defined in Definition 8.1. [Theory: Chapter 8, Section 8.3.2.]
      Condition number          Number of iterations
p     GMRES   PGMRES Q(1)       GMRES   PGMRES Q(1)
2     3.99    2.96              2       2
3     14.58   3.46              3       3
4     29.12   4.09              4       4
5     51.20   4.58              5       5
6     81.54   5.06              6       6
7     122.18  5.48              7       7
8     174.29  5.87              8       7
9     239.73  6.23              9       7
10    319.81  6.56              11      7

Table 9.20 Hypersingular integral equation (8.2) with g(x) ≡ 1 and wave number κ = 2: p-version with 2 subintervals. [Theory: Chapter 8, Section 8.2.2.]
[Figure: condition number versus polynomial degree p = 2, ..., 10; curves for CG, the two-level preconditioner, and the multilevel preconditioner, N = 2.]
Fig. 9.7 Hypersingular integral equation (4.1) with f (x) ≡ 1: p-version with two subintervals, using CG and PCG with two-level and multilevel additive Schwarz preconditioners. [Theory: Chapter 4, Section 4.1.1.1 (two-level); Chapter 6, Section 6.2 (multilevel).]
[Figure: iteration number versus polynomial degree p (up to 30); one curve per N = 2, 4, 8, 16, 32, 64, 128.]
Fig. 9.8 Weakly-singular integral equation, p-version, iteration numbers, multiplicative Schwarz preconditioner. [Theory: Chapter 4, Section 4.2.1.2.]
[Figure: computation time (log scale, 0.001-1000 sec) versus polynomial degree p (log scale); one curve per N = 2, 4, 8, 16, 32, 64, 128.]
Fig. 9.9 Weakly-singular integral equation, p-version, computational time, multiplicative Schwarz preconditioner. [Theory: Chapter 4, Section 4.2.1.2.]
[Figure: iteration number versus polynomial degree p (up to 30); one curve per N = 2, 4, 8, 16, 32, 64, 128.]
Fig. 9.10 Hypersingular integral equation, p-version, iteration numbers, multiplicative Schwarz preconditioner. [Theory: Chapter 4, Section 4.2.1.1.]
[Figure: computation time (log scale, 0.001-1000 sec) versus polynomial degree p (log scale); one curve per N = 2, 4, 8, 16, 32, 64, 128.]
Fig. 9.11 Hypersingular integral equation, p-version, computational time, multiplicative Schwarz preconditioner. [Theory: Chapter 4, Section 4.2.1.1.]
9.4 Numerical Results for the hp-Version

The numerical results in this section corroborate the theoretical results developed in Chapter 6. Figures 9.12–9.17 show the results for quasi-uniform meshes, while Figure 9.18 shows the results for geometric meshes.

[Figure: condition number versus polynomial degree (2-20); curves for the non-overlapping preconditioner and the overlapping preconditioners with δ = 2h and δ = H.]
Fig. 9.12 Hypersingular integral equation: Condition numbers when N = 128 and H = 1/2. [Theory: Chapter 6.]
[Figure: number of iterations versus polynomial degree (2-20); curves for the non-overlapping preconditioner and the overlapping preconditioners with δ = 2h and δ = H.]
Fig. 9.13 Hypersingular integral equation: Numbers of iterations when N = 128 and H = 1/2. [Theory: Chapter 6.]
[Figure: CPU time versus polynomial degree (2-20); curves for the non-overlapping preconditioner and the overlapping preconditioners with δ = 2h and δ = H.]
Fig. 9.14 Hypersingular integral equation: CPU times when N = 128 and H = 1/2. [Theory: Chapter 6.]
[Figure: two panels; left: condition number versus H/h (curves for N = 32, 64, 128); right: condition number versus p (curves for N = 32, 64, 128).]
Fig. 9.15 Hypersingular integral equation: non-overlapping additive Schwarz. Condition numbers as a function of H/h when p = 20 (left figure) and as a function of p when H/h = 4 (right figure). [Theory: Chapter 6.]
[Figure: two panels; left: condition number versus H/δ (curve for N = 1024); right: condition number versus p (curves for N = 32, 64, 128).]
Fig. 9.16 Hypersingular integral equation: overlapping additive Schwarz. Condition numbers as a function of H/δ when p = 2 (left figure) and as a function of p when H/h = 4 (right figure). [Theory: Chapter 6.]
[Figure: condition number (log scale, up to 10^4) versus number of unknowns (log scale, 20-500); data points for CG, PCG 2-level, and PCG multilevel.]
Fig. 9.17 Weakly-singular integral equation: condition numbers for the hp-version with quasiuniform meshes. [Theory: Chapter 6, Section 6.2.]
[Figure: condition number (log scale, up to 10^5) versus number of unknowns (log scale, 10-100); data points for CG, PCG 2-level, and PCG multilevel.]
Fig. 9.18 Weakly-singular integral equation: condition numbers for the hp-version with geometric meshes (σ = 0.5). [Theory: Chapter 6, Section 6.2.]
Part III
Three-Dimensional Problems
Chapter 10
Two-Level Methods: the hp-Version on Rectangular Elements
In this chapter, we define different two-level methods for the hp-version of the Galerkin method for both the hypersingular and weakly-singular integral equations on a screen in $\mathbb{R}^3$. As discussed in Chapter 2, it suffices to define a subspace decomposition for each preconditioner. The content of this chapter related to the hypersingular integral equation is based on the papers [2, 3, 116, 118, 119, 223]. However, this chapter corrects an error common to these articles in the proof of coercivity of the decomposition; see Assumption 2.4. We use the newly-derived Theorem A.11 instead of Theorem A.10, which is used in the afore-cited papers. The results for the weakly-singular integral equation in Section 10.3 are new; see also Remark 10.1.
10.1 Preliminaries

10.1.1 Two-level meshes

The coarse mesh. We assume that the screen Γ is quasi-uniformly partitioned into quadrilateral elements $\Gamma_i$, $i = 1, \ldots, J$, and that the mesh is shape regular. The set of all edges of $\Gamma_i$ which are not on the boundary of Γ is denoted by $E_H$. We also denote by $H_i$ the maximum side length of $\Gamma_i$, and by H the maximum value of $H_i$, $i = 1, \ldots, J$.

The fine mesh. We subdivide each $\Gamma_i$ into quadrilateral elements $\Gamma_{ij}$, $j = 1, \ldots, N_i$. In the whole chapter, we will use the generic notation K to indicate $\Gamma_{ij}$ when it is not necessary to clarify the indices i and j. We also assume that this mesh is shape regular and locally quasi-uniform. Let $h_K$ be the maximum side length of element K. We define $h_i = \max_{K \subset \Gamma_i} h_K$ and assume that, for all $i, j = 1, \ldots, J$, if $\Gamma_i$ and $\Gamma_j$ are two adjacent subdomains then $h_i/h_j \simeq 1$. Let h be the maximum value of $h_i$, $i = 1, \ldots, J$, and let $I_h$ denote a suitable indexing set. We define $\mathcal{N}_h = \{x_k : k \in I_h\}$ to be the set of all element vertices which are not on the boundary of Γ, and $E_h$ to be the set of all open element edges which are not on the boundary of Γ. We also define $I_{H,v}$, $I_{H,e}$, and $I_{H,0}$ to be the sets of indices corresponding to the fine-grid nodes located at the subdomain vertices, on the open subdomain edges, and in the open subdomain interiors, respectively; see Figure 10.1. The h-wirebasket $W_h$ is the set of all $x_k \in \mathcal{N}_h$ located at the subdomain vertices or on the open subdomain edges, i.e.,
$$W_h := \{x_k \in \mathcal{N}_h : k \in I_{H,v} \cup I_{H,e}\}.$$

[Figure: a two-level mesh showing h-nodes with indices in $I_{H,0}$ and h-nodes with indices in $I_{H,v} \cup I_{H,e}$.]

Fig. 10.1 Fine-mesh nodal points (h-nodes) in a two-level mesh of Γ

The next subsection introduces shape functions that define the basis functions for the boundary element spaces to be used in the solutions of the hypersingular and weakly-singular integral equations.
10.1.2 Shape functions

In order to define the shape functions on Γ, we first define the shape functions on the reference element $R_{\mathrm{ref}} := (-1,1)^2$. For each $p \in \mathbb{N}$, let $\tau_{p,j}$, $j = 0, \ldots, p$, be the solutions of
$$(1 - t^2)\,L_p'(t) = 0,$$
ordered such that $-1 = \tau_{p,0} < \tau_{p,1} < \cdots < \tau_{p,p} = 1$, where $L_p$ is the Legendre polynomial normalised so that $L_p(1) = 1$. These points $\tau_{p,j}$ are called the Gauss–Lobatto–Legendre (GLL) points. The Lagrange polynomial $\ell_{p,i}$ associated with $\tau_{p,i}$ is the polynomial of degree p satisfying
$$\ell_{p,i}(\tau_{p,j}) = \delta_{ij}, \qquad i, j = 0, \ldots, p.$$

Let $\hat\gamma_1, \ldots, \hat\gamma_4$ denote respectively the bottom, right, top, and left sides of $R_{\mathrm{ref}}$, and let $P = (p_1, \ldots, p_6)$ be the vector of polynomial degrees associated with $R_{\mathrm{ref}}$, in which $p_1, \ldots, p_4$ are the degrees associated with $\hat\gamma_1, \ldots, \hat\gamma_4$, whereas $p_5$ and $p_6$ are degrees in the interior in the ξ- and η-directions, respectively. The GLL points induce the following nodes on $R_{\mathrm{ref}}$ (see Figure 10.2):

1. Side nodes
   (a) on $\hat\gamma_1$: $(\tau_{p_1,j}, -1)$, $j = 0, \ldots, p_1$;
   (b) on $\hat\gamma_2$: $(1, \tau_{p_2,j})$, $j = 0, \ldots, p_2$;
   (c) on $\hat\gamma_3$: $(\tau_{p_3,j}, 1)$, $j = 0, \ldots, p_3$;
   (d) on $\hat\gamma_4$: $(-1, \tau_{p_4,j})$, $j = 0, \ldots, p_4$;
2. Interior nodes $(\tau_{p_5,i}, \tau_{p_6,j})$, $i = 1, \ldots, p_5 - 1$, $j = 1, \ldots, p_6 - 1$.
[Figure: the reference element $R_{\mathrm{ref}}$ with corners $(-1,-1)$, $(1,-1)$, $(1,1)$, $(-1,1)$ and sides $\hat\gamma_1$ (bottom), $\hat\gamma_2$ (right), $\hat\gamma_3$ (top), $\hat\gamma_4$ (left).]
Fig. 10.2 Side nodes and interior nodes (p-nodes) on Rref (p1 = p4 = 2, p2 = 3, p3 = 4, p5 = p6 = 6)
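The GLL points and Lagrange polynomials above are straightforward to compute numerically. The following sketch (our illustration, not code from the book) finds the $\tau_{p,j}$ as the zeros of $(1-t^2)L_p'(t)$ and verifies the delta property $\ell_{p,i}(\tau_{p,j}) = \delta_{ij}$:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gll_points(p):
    """Gauss-Lobatto-Legendre points: the zeros of (1 - t^2) L_p'(t),
    i.e. the endpoints +-1 together with the roots of L_p'."""
    c = np.zeros(p + 1)
    c[p] = 1.0                                 # coefficient vector of L_p
    interior = leg.legroots(leg.legder(c))     # roots of L_p'
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

def lagrange(tau, i, t):
    """Lagrange polynomial ell_{p,i} on the nodes tau, evaluated at t."""
    num = np.prod([t - tau[j] for j in range(len(tau)) if j != i], axis=0)
    den = np.prod([tau[i] - tau[j] for j in range(len(tau)) if j != i])
    return num / den

tau = gll_points(2)          # L_2'(t) = 3t vanishes at 0
print(tau)                   # -> [-1.  0.  1.]
```

For p = 3, the interior points are $\pm 1/\sqrt{5}$, the roots of $L_3'(t) = (15t^2 - 3)/2$.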
The nodal basis functions associated with these nodes are defined as follows.

1. Nodal modes, which are nodal basis functions at vertices of $R_{\mathrm{ref}}$:
   (a) at $(-1,-1)$: $\ell_{p_1,0}(\xi)\,\ell_{p_4,0}(\eta)$;
   (b) at $(1,-1)$: $\ell_{p_1,p_1}(\xi)\,\ell_{p_2,0}(\eta)$;
   (c) at $(1,1)$: $\ell_{p_3,p_3}(\xi)\,\ell_{p_2,p_2}(\eta)$;
   (d) at $(-1,1)$: $\ell_{p_3,0}(\xi)\,\ell_{p_4,p_4}(\eta)$;
2. Side modes, which are nodal basis functions at nodes on sides of $R_{\mathrm{ref}}$:
   (a) on $\hat\gamma_1$: $\ell_{p_1,j}(\xi)\,\ell_{p_6,0}(\eta)$, $j = 1, \ldots, p_1 - 1$;
   (b) on $\hat\gamma_2$: $\ell_{p_5,p_5}(\xi)\,\ell_{p_2,j}(\eta)$, $j = 1, \ldots, p_2 - 1$;
   (c) on $\hat\gamma_3$: $\ell_{p_3,j}(\xi)\,\ell_{p_6,p_6}(\eta)$, $j = 1, \ldots, p_3 - 1$;
   (d) on $\hat\gamma_4$: $\ell_{p_5,0}(\xi)\,\ell_{p_4,j}(\eta)$, $j = 1, \ldots, p_4 - 1$;
3. Internal modes, which are nodal basis functions in the interior of $R_{\mathrm{ref}}$:
   $\ell_{p_5,i}(\xi)\,\ell_{p_6,j}(\eta)$, $i = 1, \ldots, p_5 - 1$, $j = 1, \ldots, p_6 - 1$.

As a generic notation, we will denote in the sequel a node in $R_{\mathrm{ref}}$ by $\hat q$ and the nodal basis function at this node by $\hat\ell_{\hat q}(\xi,\eta)$. In the following subsection, with the aid of bijections $F_K$ that map $R_{\mathrm{ref}}$ onto K, we shall transplant the shape functions defined on $R_{\mathrm{ref}}$ onto the elements K, and thereby define the boundary element space V (for the hypersingular integral equation) and $\widetilde V$ (for the weakly-singular integral equation).
10.1.3 Boundary element spaces

Roughly speaking, the boundary element space V for the hypersingular integral equation consists of continuous functions defined on Γ, vanishing on the boundary of Γ, whose restrictions to each element K are polynomials. Meanwhile, the boundary element space $\widetilde V$ for the weakly-singular integral equation consists of functions (not necessarily continuous) whose restrictions to K are polynomials.

We now define the basis functions in V. For each element K in the fine mesh, we associate a vector of polynomial degrees $P_K = (p_1^K, \ldots, p_6^K)$. Denoting
$$\bar p_K = \max_{1 \le i \le 6} p_i^K \quad\text{and}\quad \underline{p}_K = \min_{1 \le i \le 6} p_i^K,$$
we assume that for any neighbouring elements K and K′ the following property holds:
$$\frac{\bar p_K}{\underline{p}_{K'}} \simeq 1.$$
The nodes on K are the images (under the bijection $F_K$) of the nodes on $R_{\mathrm{ref}}$ defined by the vector of polynomial degrees $P_K$. This gives rise to a set of global nodes on Γ, $\mathcal{N}^p = \{q_k : k \in I^p\}$, where $I^p$ denotes a suitable indexing set. This indexing set can be partitioned into disjoint subsets
$$I^p = I_{p,v} \cup I_{p,e} \cup I_{p,0}, \qquad(10.1)$$
where $I_{p,v}$, $I_{p,e}$ and $I_{p,0}$ consist of indices of nodes located at the element vertices, open element edges and open element interiors, respectively. The p-wirebasket is defined as $W_p = \{q_k \in \mathcal{N}^p : k \in I_{p,v} \cup I_{p,e}\}$. A global nodal basis function $\phi_k^p$ associated with $q_k$ is defined such that the restriction of $\phi_k^p$ to each element K is of the form
$$\phi_k^p|_K = \begin{cases} \hat\ell_{\hat q} \circ F_K^{-1} & \text{if } q_k \in K, \\ 0 & \text{otherwise.} \end{cases} \qquad(10.2)$$
Here $\hat q = F_K^{-1}(q_k)$. The set of all these nodal basis functions is denoted by $\Phi^p$ and the approximation space V is defined as
$$V = \mathrm{span}(\Phi^p) \cap \widetilde H^{1/2}(\Gamma).$$

[Figure: one element K showing p-nodes q in the p-wirebasket and interior p-nodes.]
Fig. 10.3 p-nodes q in one element K
10.2 The Hypersingular Integral Equation

We present in this section two preconditioners for the hypersingular integral equation: a non-overlapping additive Schwarz preconditioner and an overlapping additive Schwarz preconditioner.
10.2.1 A non-overlapping method

10.2.1.1 Subspace decomposition

As in [90] for the finite element method for partial differential equations, we consider a three-level decomposition of V consisting of a coarse level, an h level, and a p level.
The coarse mesh space is defined as
$$V_H^* = \{v : (v|_{\Gamma_j}) \circ F_j \in \mathcal{P}_1 \;\; \forall j = 1, \ldots, J\} \cap \widetilde H^{1/2}(\Gamma)$$
and the h-fine space as
$$V_h = \{v : (v|_K) \circ F_K \in \mathcal{P}_1 \;\; \forall K\} \cap \widetilde H^{1/2}(\Gamma).$$
Here, $\mathcal{P}_1$ is the space of bilinear functions on $R_{\mathrm{ref}}$. Since $V_H^*$ may not be a subspace of $V_h$ (and of V), we need to consider the interpolated subspace $V_H = \Pi_h(V_H^*)$, where $\Pi_h$ is the interpolation operator from the space of continuous functions into $V_h$. The dimension of $V_H$ equals that of $V_H^*$. Indeed, if $\phi_k^H$ is the hat function in $V_H^*$ which takes the value one at node $x_k$ and zero at the other nodes $x_\ell$, $\ell \in I_{H,v}$, then $\{\phi_k^H : k \in I_{H,v}\}$ forms a basis for $V_H^*$, while $\{\Pi_h \phi_k^H : k \in I_{H,v}\}$ forms a basis for $V_H$.

A nodal basis function $\phi_k^h \in V_h$ associated with $x_k \in \mathcal{N}_h$, the set of all element vertices not on the boundary of Γ, is a function in $V_h$ which takes the value 1 at $x_k$ and 0 at the other nodes in $\mathcal{N}_h$. The set of all these functions is denoted by $\Phi^h$. Letting $V^p = V$, we decompose V as
$$V = V_H + V_h + V^p, \qquad(10.3)$$
and further decompose $V_h$ and $V^p$ as
$$V_h = V_{W_h} + \sum_{j=1}^{J} V_{\Gamma_j} \quad\text{and}\quad V^p = V_{W_p} + \sum_{K} V_K, \qquad(10.4)$$
where $V_{W_h}$ and $V_{W_p}$ are the h-wirebasket and p-wirebasket spaces defined by
$$V_{W_h} = \mathrm{span}\{\phi_k^h \in \Phi^h : x_k \in W_h\} = \mathrm{span}\{\phi_k^h \in \Phi^h : k \in I_{H,v} \cup I_{H,e}\},$$
$$V_{W_p} = \mathrm{span}\{\phi_k^p \in \Phi^p : q_k \in W_p\} = \mathrm{span}\{\phi_k^p \in \Phi^p : k \in I_{p,v} \cup I_{p,e}\},$$
and $V_{\Gamma_j}$ and $V_K$ are the interior spaces defined by
$$V_{\Gamma_j} = \mathrm{span}\{\phi_k^h \in \Phi^h : x_k \in \Gamma_j\} \quad\text{and}\quad V_K = \mathrm{span}\{\phi_k^p \in \Phi^p : q_k \in K\}.$$
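In matrix terms, a subspace decomposition of this kind yields an additive Schwarz preconditioner: the residual is restricted to each subspace, a local problem is solved there, and the corrections are summed. The following generic sketch is our illustration (not code from the book); for simplicity each subspace is represented as an index set of nodal unknowns, which matches the interior spaces $V_{\Gamma_j}$ and $V_K$ but not the interpolated coarse space, whose restriction operator is a full matrix:

```python
import numpy as np

def additive_schwarz(A, r, subspaces):
    """One application of z = sum_i R_i^T (R_i A R_i^T)^{-1} R_i r,
    where each subspace is an index set and R_i the corresponding
    boolean restriction of a global vector to that set."""
    z = np.zeros_like(r, dtype=float)
    for idx in subspaces:
        Ai = A[np.ix_(idx, idx)]          # local stiffness block R_i A R_i^T
        z[idx] += np.linalg.solve(Ai, r[idx])
    return z

# toy check: a single subspace containing all unknowns gives the exact solve
A = np.array([[4.0, 1.0], [1.0, 3.0]])
r = np.array([1.0, 2.0])
z = additive_schwarz(A, r, [[0, 1]])
print(np.allclose(A @ z, r))              # True
```

With the singleton subspaces `[[0], [1]]` the same routine reduces to a Jacobi (diagonal) preconditioner, which is the degenerate case of the decomposition.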
10.2.1.2 Bilinear forms on subspaces

The choice of the bilinear forms on $V_H$, $V_{\Gamma_j}$ and $V_K$ is straightforward:
$$\begin{aligned}
b_H(v,w) &:= a_W(\Pi_{V_H^*} v, \Pi_{V_H^*} w) &&\forall v, w \in V_H, \\
b_{\Gamma_j}(v,w) &:= a_W(v,w) &&\forall v, w \in V_{\Gamma_j}, \\
b_K(v,w) &:= a_W(v,w) &&\forall v, w \in V_K,
\end{aligned} \qquad(10.5)$$
where $\Pi_{V_H^*}$ is the standard interpolation operator from the space of continuous functions into $V_H^*$. We note that if $v^* \in V_H^*$ and $v \in V_H$ then
$$v = \Pi_{V_h} v^* \iff v^* = \Pi_{V_H^*} v. \qquad(10.6)$$
Here, $\Pi_{V_h} : C(\Gamma) \to V_h$ is the standard interpolation operator. In the following, when there is no confusion, we shorten the notation to $\Pi_h$. We will also write $\Pi_H^*$ instead of $\Pi_{V_H^*}$. Statement (10.6) implies
$$\Pi_H^* \Pi_h = \mathrm{Id}, \qquad(10.7)$$
where Id is the identity operator.

The bilinear form on the wirebasket space $V_{W_h}$ is chosen such that the problem involved is completely decoupled and only involves inverting a diagonal matrix:
$$b_{W_h}(v,w) = \sum_{j=1}^{J} h_j \sum_{x_k \in \partial\Gamma_j} v(x_k)\,w(x_k), \qquad v, w \in V_{W_h}. \qquad(10.8)$$
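Because (10.8) couples no two distinct nodes, the matrix of $b_{W_h}$ in the nodal basis is diagonal, so the wirebasket "solve" in the preconditioner is a pointwise division by assembled weights. A minimal illustration (the weight and residual values below are made up for the example):

```python
import numpy as np

# The matrix of b_Wh in the nodal basis is diag(d_k), where d_k sums the
# h_j of all subdomains Gamma_j whose boundary contains x_k.  "Inverting"
# it is therefore a pointwise division.
d = np.array([0.25, 0.5, 0.5, 0.25])   # assembled weights, one per node
r = np.array([1.0, 2.0, 3.0, 4.0])     # residual restricted to the wirebasket
z = r / d                              # wirebasket correction
print(z)                               # [ 4.  4.  6. 16.]
```

This O(n) cost is the point of the construction: the only expensive subproblems left in the decomposition are the coarse and interior solves.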
Similarly, we define a bilinear form on $V_{W_p}$ so that the problem involved is solved by inverting a diagonal matrix. This is done by using the quadrature rule defined in Section C.6 in Appendix C for the integral over a side γ of an element K:
$$b_{W_p}(v,w) = \sum_{\gamma \in E_h} h_\gamma \sum_{q_k \in W_p \cap \gamma} \omega_k\,v(q_k)\,w(q_k) \qquad \forall v, w \in V_{W_p}, \qquad(10.9)$$
where we recall that Eh is the set of all element edges not on the boundary of Γ .
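The weights $\omega_k$ in (10.9) come from the GLL quadrature rule of Appendix C. The standard closed form for GLL weights is $\omega_j = 2/\big(p(p+1)L_p(\tau_{p,j})^2\big)$; assuming (C.88) is this rule (an assumption on our part, since the appendix is not reproduced here), they can be computed as follows:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gll_weights(p):
    """GLL nodes tau_j and weights omega_j = 2 / (p(p+1) L_p(tau_j)^2)."""
    c = np.zeros(p + 1)
    c[p] = 1.0                                  # coefficient vector of L_p
    tau = np.concatenate(([-1.0], np.sort(leg.legroots(leg.legder(c))), [1.0]))
    return tau, 2.0 / (p * (p + 1) * leg.legval(tau, c) ** 2)

tau, w = gll_weights(2)                         # tau = [-1, 0, 1]
print(w)                                        # weights 1/3, 4/3, 1/3
```

The weights sum to the interval length 2, and the rule with p + 1 points is exact for polynomials of degree 2p − 1.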
10.2.1.3 Auxiliary lemmas

The first lemma establishes the equivalences of the bilinear forms on the wirebaskets and the corresponding $L_2$-norms.

Lemma 10.1. The bilinear form $b_{W_h}(\cdot,\cdot)$ defined by (10.8) satisfies
$$b_{W_h}(v,v) \simeq \sum_{\gamma \in E_H} \|v\|_{L_2(\gamma)}^2, \qquad v \in V_{W_h}.$$
Similarly, the bilinear form $b_{W_p}(\cdot,\cdot)$ defined by (10.9) satisfies
$$b_{W_p}(v,v) \simeq \sum_{\gamma \in E_h} \|v\|_{L_2(\gamma)}^2, \qquad v \in V_{W_p}.$$
The constants involved are independent of v, H, h and p.

Proof. This is a direct consequence of (C.89) in Appendix C.
The next lemma gives a uniform estimate of an h-wire basket function extended by zero.
Lemma 10.2. Let $\gamma \in E_H$ be an open subdomain edge. We define $\Gamma_\gamma$ to be the union of γ and the pair of subdomains sharing this edge. We also define $I_{h,\gamma}$ to be the set of indices $k \in I_h$ such that $x_k \in \gamma$. If $v \in V_{W_h}$ is of the form
$$v = \sum_{k \in I_{h,\gamma}} v(x_k)\,\phi_k^h,$$
then
$$\|v\|_{H_I^{1/2}(\Gamma_\gamma)} \lesssim \|v\|_{L_2(\gamma)},$$
where the constant involved is independent of H, h and v.

Proof. This lemma is proved in [3, Lemma 15]. The proof is included here for completeness. By using transformations, we may assume that $\gamma = \{0\} \times (0,H)$ and $\Gamma_\gamma = (-H,H) \times (0,H)$. It suffices to show that
$$\|v\|_{H_I^{1/2}(\Gamma_\gamma)} \lesssim \|w\|_{L_2(\gamma)},$$
where v has the form
$$v(x,y) = \begin{cases} (1 + x/h)\,w(y), & (x,y) \in (-h,0) \times (0,H), \\ (1 - x/h)\,w(y), & (x,y) \in (0,h) \times (0,H), \\ 0, & \text{otherwise,} \end{cases}$$
with w being a continuous, piecewise linear function on γ which satisfies $w(0) = w(H) = 0$. Simple calculations reveal
$$\|v\|_{H_I^0(\Gamma_\gamma)}^2 = \|v\|_{L_2(\Gamma_\gamma)}^2 \simeq h\,\|w\|_{L_2(\gamma)}^2$$
and
$$\|v\|_{H_I^1(\Gamma_\gamma)}^2 \simeq \Big\|\frac{\partial v}{\partial x}\Big\|_{L_2(\Gamma_\gamma)}^2 + \Big\|\frac{\partial v}{\partial y}\Big\|_{L_2(\Gamma_\gamma)}^2 \simeq h^{-1}\|w\|_{L_2(\gamma)}^2 + h\,\|w'\|_{L_2(\gamma)}^2.$$
With the aid of the inverse inequality we deduce
$$\|v\|_{H_I^1(\Gamma_\gamma)}^2 \lesssim h^{-1}\|w\|_{L_2(\gamma)}^2.$$
By interpolation,
$$\|v\|_{H_I^{1/2}(\Gamma_\gamma)}^2 \lesssim \|w\|_{L_2(\gamma)}^2.$$
10.2 The Hypersingular Integral Equation
v=
213
∑
qk ∈W p ∩γ
then v 1/2 HI
where
p∗
(Kγ )
v(qk )φkp , p∗ vL2 (γ ) , p∗
and p∗ are the maximum and minimum polynomial orders in Kγ .
Proof. Let Kγ− and Kγ+ be two elements such that Kγ− ∪ γ ∪ Kγ+ = Kγ . Also let Kˆ γ− = (−2, 0)× (−1, 1), Kˆ γ+ = (0, 2)× (−1, 1) and γˆ = {0} × (−1, 1) be the images of Kγ− , Kγ+ and γ (respectively) under appropriate transformations. If Kˆ γ = Kˆ γ− ∪ γˆ ∪ Kˆ γ+ and vˆ is defined on Kˆ γ from v by using the above-mentioned transformations, then it follows from Lemma A.6 in Appendix A and a simple calculation that v 1/2 HI
1/2
(Kγ )
h j v ˆ 1/2 HI
and
(Kˆ γ )
1/2
vL2 (γ ) h j v ˆ L2 (γˆ) ,
where h j is the maximum element size in Γj , the subdomain which contains Kγ− and Kγ+ . Therefore, it suffices to prove v ˆ 1/2 HI
(Kˆ γ )
p∗ v ˆ L2 (γˆ) . p∗
(10.10)
Let $P^- = (p_1^-,\dots,p_6^-)$ and $P^+ = (p_1^+,\dots,p_6^+)$ be the polynomial orders assigned to $K_\gamma^-$ and $K_\gamma^+$, respectively. Then, see Section 10.1.2,
$$\hat v(\xi,\eta) = \begin{cases} \ell_{p_5^-,p_5^-}(1+\xi)\,w(\eta) & \text{on } \hat K_\gamma^-,\\[2pt] \ell_{p_5^+,0}(-1+\xi)\,w(\eta) & \text{on } \hat K_\gamma^+, \end{cases}$$
where $w$ is a polynomial of degree at most $p := p_4^+ = p_2^-$ satisfying $w(\pm 1) = 0$. Therefore we deduce by using (C.88) and (C.90) in Appendix C
$$\|\hat v\|_{L^2(\hat K_\gamma)}^2 \lesssim \big(\|\ell_{p_5^-,p_5^-}\|_{L^2(\hat\gamma)}^2 + \|\ell_{p_5^+,0}\|_{L^2(\hat\gamma)}^2\big)\,\|w\|_{L^2(\hat\gamma)}^2 \lesssim (p_*)^{-2}\,\|\hat v\|_{L^2(\hat\gamma)}^2.$$
By using Schmidt's inequality (C.35) we obtain
$$|\hat v|_{H^1(\hat K_\gamma)}^2 \lesssim \big(\|\ell_{p_5^-,p_5^-}\|_{L^2(\hat\gamma)}^2 + \|\ell_{p_5^+,0}\|_{L^2(\hat\gamma)}^2\big)\,\|w'\|_{L^2(\hat\gamma)}^2 + \big(\|\ell'_{p_5^-,p_5^-}\|_{L^2(\hat\gamma)}^2 + \|\ell'_{p_5^+,0}\|_{L^2(\hat\gamma)}^2\big)\,\|w\|_{L^2(\hat\gamma)}^2$$
$$\lesssim (p^*)^4\big(\|\ell_{p_5^-,p_5^-}\|_{L^2(\hat\gamma)}^2 + \|\ell_{p_5^+,0}\|_{L^2(\hat\gamma)}^2\big)\,\|w\|_{L^2(\hat\gamma)}^2 \lesssim \frac{(p^*)^4}{(p_*)^2}\,\|\hat v\|_{L^2(\hat\gamma)}^2.$$
Inequality (10.10) now follows from Lemma A.12 in Appendix A.
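Schmidt's inequality (C.35) invoked above is an inverse inequality of Markov type: on the reference interval, the $L^2$ norm of the derivative of a polynomial of degree $p$ is bounded by a constant times $p^2$ times the $L^2$ norm of the polynomial itself — this is where the $(p^*)^4$ factor on the squared norms comes from. The following sketch (plain NumPy; the generous constant 3 and the helper name are illustrative assumptions, not from the book) computes the extremal derivative-to-function ratio over the degree-$p$ polynomial space and checks its superlinear growth numerically:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def max_derivative_ratio(p: int, n_quad: int = 200) -> float:
    """Largest ||w'||_{L2(-1,1)} / ||w||_{L2(-1,1)} over polynomials w of degree <= p."""
    x, wq = leg.leggauss(n_quad)
    # Orthonormal Legendre basis phi_k = sqrt((2k+1)/2) P_k, so the mass matrix is I.
    coeffs = [np.sqrt((2 * k + 1) / 2) * np.eye(p + 1)[k] for k in range(p + 1)]
    dvals = np.stack([leg.legval(x, leg.legder(c)) for c in coeffs])
    S = (dvals * wq) @ dvals.T           # Gram matrix of the basis derivatives
    return float(np.sqrt(np.linalg.eigvalsh(S).max()))

for p in (2, 4, 8, 16):
    r = max_derivative_ratio(p)
    assert r <= 3 * p ** 2               # Schmidt/Markov-type inverse inequality
assert max_derivative_ratio(16) > 4 * max_derivative_ratio(4)   # superlinear growth
```

The extremal ratio grows between $p^{3/2}$ and $p^2$ on this range, consistent with a $p^2$-type inverse estimate.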
10 Two-Level Methods: the hp-Version on Rectangular Elements

Lemma 10.4. For any $k \in I_{p,v}$, let $M_k$ be the set of all element edges sharing $q_k$ as one endpoint, and for each $\gamma \in M_k$, let $\omega_\gamma$ be the weight associated with $q_k$ when considered as a quadrature point on $\gamma$; see (C.88) in Appendix C. Then the nodal basis function $\phi_k^p$ associated with $q_k$ satisfies
$$\|\phi_k^p\|_{H_I^{1/2}(S_k)}^2 \lesssim \frac{p^*}{p_*}\sum_{\gamma\in M_k} h_\gamma\,\omega_\gamma,$$
where $S_k = \operatorname{supp}(\phi_k^p)$ while $p^*$ and $p_*$ are the maximum and minimum polynomial orders in $S_k$.

Proof. Thanks to the assumption on the mesh, the cardinality $M$ of $M_k$ is independent of $h$ and $H$. By indexing the edges $\gamma$ in $M_k$ in some order (e.g. clockwise) as $\gamma_1,\gamma_2,\dots,\gamma_M$, we can represent $S_k$ as $S_k = \cup_{i=1}^M K_{i,i+1}$, where $K_{i,i+1}$ is the element in $S_k$ having $\gamma_i$ and $\gamma_{i+1}$ as two of its edges, see Figure 10.4, and where we use a periodic indexing, i.e. $\gamma_{M+1} = \gamma_1$ and $K_{M,M+1} = K_{M,1}$.
Fig. 10.4 $S_k = \operatorname{supp}\phi_k^p$: the elements $K_{12},K_{23},\dots,K_{M,1}$ and the edges $\gamma_1,\dots,\gamma_M$ surrounding the node $q_k$.
Each element $K_{i,i+1}$ can be assumed to be the image of the reference element $\hat K$ by a bijection $F_i$ such that $F_i(\hat q) = q_k$, where $\hat q = (-1,-1)$. Then, see (10.2), $\phi_k^p|_{K_{i,i+1}} = \hat\ell_{\hat q}\circ F_i^{-1}$, where $\hat\ell_{\hat q}(\xi,\eta) = \ell_{p_i,0}(\xi)\,\ell_{p_{i+1},0}(\eta)$, with $p_i$ being such that $p_i$ and $\omega_i$ are related by (C.88) in Appendix C. Here $\omega_i$ is the weight at $q_k$ when considered as a node on $\gamma_i$. A simple calculation reveals
$$\|\phi_k^p\|_{L^2(K_{i,i+1})}^2 \simeq h_{\gamma_i} h_{\gamma_{i+1}}\,\|\hat\ell_{\hat q}\|_{L^2(\hat K)}^2 \quad\text{and}\quad |\phi_k^p|_{H^1(K_{i,i+1})}^2 \simeq |\hat\ell_{\hat q}|_{H^1(\hat K)}^2.$$
Reasoning as in the proof of Lemma 10.3 one can easily establish, noting (C.90),
$$\|\hat\ell_{\hat q}\|_{L^2(\hat K)}^2 \simeq \omega_i\,\omega_{i+1} \quad\text{and}\quad |\hat\ell_{\hat q}|_{H^1(\hat K)}^2 \lesssim \Big(\frac{\omega_i}{\omega_{i+1}} + \frac{\omega_{i+1}}{\omega_i}\Big)\Big(\frac{p^*}{p_*}\Big)^2,$$
resulting in
$$\|\phi_k^p\|_{L^2(K_{i,i+1})}^2 \lesssim h_{\gamma_i} h_{\gamma_{i+1}}\,\omega_i\,\omega_{i+1} \quad\text{and}\quad |\phi_k^p|_{H^1(K_{i,i+1})}^2 \lesssim \Big(\frac{\omega_i}{\omega_{i+1}} + \frac{\omega_{i+1}}{\omega_i}\Big)\Big(\frac{p^*}{p_*}\Big)^2.$$
Recalling that by the periodic indexing $h_{\gamma_{M+1}} = h_{\gamma_1}$ and $\omega_{M+1} = \omega_1$, we obtain by summing over $i = 1,\dots,M$
$$\|\phi_k^p\|_{L^2(S_k)}^2 = \sum_{i=1}^M \|\phi_k^p\|_{L^2(K_{i,i+1})}^2 \lesssim \sum_{i=1}^M h_{\gamma_i} h_{\gamma_{i+1}}\,\omega_i\,\omega_{i+1}$$
and
$$|\phi_k^p|_{H^1(S_k)}^2 = \sum_{i=1}^M |\phi_k^p|_{H^1(K_{i,i+1})}^2 \lesssim \Big(\frac{p^*}{p_*}\Big)^2.$$
Applying Lemma A.12 in Appendix A gives
$$\|\phi_k^p\|_{H_I^{1/2}(S_k)}^2 \lesssim \frac{p^*}{p_*}\Big(\sum_{i=1}^M h_{\gamma_i} h_{\gamma_{i+1}}\,\omega_i\,\omega_{i+1}\Big)^{1/2}.$$
An elementary calculation reveals that
$$\Big(\sum_{i=1}^M h_{\gamma_i} h_{\gamma_{i+1}}\,\omega_i\,\omega_{i+1}\Big)^{1/2} \le \sum_{i=1}^M h_{\gamma_i}\,\omega_i,$$
finishing the proof.
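The elementary calculation at the end can be made explicit: with $x_i = h_{\gamma_i}\omega_i \ge 0$ and periodic indexing, $\sum_i x_i x_{i+1} \le (\max_i x_i)\sum_i x_i \le (\sum_i x_i)^2$, whence the stated square-root bound. A quick randomized sanity check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.random(rng.integers(2, 10))           # nonnegative x_i = h_i * omega_i
    lhs = np.sqrt(np.sum(x * np.roll(x, -1)))     # periodic indexing: x_{M+1} = x_1
    assert lhs <= np.sum(x) + 1e-12
```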
Lemma 10.3 does not hold for the $H^{1/2}$-norm because this norm has different scaling properties. However, a similar estimate holds on the reference element $\hat K$.

Lemma 10.5. If $\hat v$ is of the form
$$\hat v = \sum_{\hat q\in\partial\hat K} \hat v(\hat q)\,\hat\ell_{\hat q},$$
then
$$\|\hat v\|_{H_I^{1/2}(\hat K)}^2 \lesssim \Big(\frac{p^*}{p_*}\Big)^2 \sum_{i=1}^4 \|\hat v\|_{L^2(\hat\gamma_i)}^2,$$
where $p^*$ and $p_*$ are the maximum and minimum polynomial orders in $\hat K$, and $\hat\gamma_i$, $i = 1,\dots,4$, denote the open edges of $\hat K$.

Proof. Denoting $\hat q_1 = (1,-1)$, $\hat q_2 = (1,1)$, $\hat q_3 = (-1,1)$ and $\hat q_4 = (-1,-1)$, we can write $\hat v$ as $\hat v = \hat v_1 + \cdots + \hat v_4$, where
$$\hat v_i = \sum_{\hat q\in\hat\gamma_i} \hat v(\hat q)\,\hat\ell_{\hat q} + \hat v(\hat q_i)\,\hat\ell_{\hat q_i}.$$
Since
$$\|\hat v\|_{H_I^{1/2}(\hat K)}^2 \lesssim \sum_{i=1}^4 \|\hat v_i\|_{H_I^{1/2}(\hat K)}^2,$$
it suffices to prove
$$\|\hat v_i\|_{H_I^{1/2}(\hat K)}^2 \lesssim \Big(\frac{p^*}{p_*}\Big)^2\,\|\hat v\|_{L^2(\hat\gamma_i)}^2.$$
We prove this inequality for $\hat v_1$ only. By definition, see Subsection 10.1.2,
$$\hat v_1(\xi,\eta) = \sum_{j=1}^{p_1-1} \hat v(\tau_{p_1,j},-1)\,\ell_{p_1,j}(\xi)\,\ell_{p_6,0}(\eta) + \hat v(\tau_{p_1,p_1},-1)\,\ell_{p_1,p_1}(\xi)\,\ell_{p_2,0}(\eta).$$
Therefore, with the aid of (C.88), (C.89) and (C.90), we infer
$$\|\hat v_1\|_{L^2(\hat K)}^2 \lesssim (p_*)^{-2}\sum_{j=1}^{p_1}\omega_{p_1,j}\,|\hat v(\tau_{p_1,j},-1)|^2 \le (p_*)^{-2}\,\|\hat v\|_{L^2(\hat\gamma_1)}^2.$$
Using Schmidt's inequality (C.35) we also obtain
$$|\hat v_1|_{H^1(\hat K)}^2 \lesssim \frac{(p^*)^4}{(p_*)^2}\sum_{j=1}^{p_1}\omega_{p_1,j}\,|\hat v(\tau_{p_1,j},-1)|^2 \le \frac{(p^*)^4}{(p_*)^2}\,\|\hat v\|_{L^2(\hat\gamma_1)}^2.$$
Applying Lemma A.12 gives the desired estimate, and thus proves the lemma.
As usual, we now study the coercivity and stability of the decomposition defined by (10.3) and (10.4).

10.2.1.4 Coercivity and stability of the decomposition

Lemma 10.6 (Coercivity of decomposition). For any
$$v = v_H + v_{W_h} + \sum_{j=1}^J v_{\Gamma_j} + v_{W^p} + \sum_K v_K$$
with $v_H \in V_H$, $v_{W_h} \in V_{W_h}$, $v_{\Gamma_j} \in V_{\Gamma_j}$, $j = 1,\dots,J$, $v_{W^p} \in V_{W^p}$, and $v_K \in V_K$, the following inequality holds:
$$a_W(v,v) \lesssim b_H(v_H,v_H) + b_{W_h}(v_{W_h},v_{W_h}) + \sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}) + b_{W^p}(v_{W^p},v_{W^p}) + \sum_K b_K(v_K,v_K).$$
Proof. Let
$$v_h := v_{W_h} + \sum_{j=1}^J v_{\Gamma_j} \quad\text{and}\quad v_p := v_{W^p} + \sum_K v_K.$$
Then $v = v_H + v_h + v_p$, so that by the Cauchy–Schwarz inequality
$$a_W(v,v) \simeq \|v\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|v_H\|_{H_I^{1/2}(\Gamma)}^2 + \|v_h\|_{H_I^{1/2}(\Gamma)}^2 + \|v_p\|_{H_I^{1/2}(\Gamma)}^2 \simeq a_W(v_H,v_H) + a_W(v_h,v_h) + a_W(v_p,v_p).$$
Hence the lemma is proved if we prove the coercivity of the bilinear forms on $V_H$, $V_{W_h}$, $\sum_j V_{\Gamma_j}$, $V_{W^p}$, and $\sum_K V_K$:
$$a_W(v_H,v_H) \lesssim b_H(v_H,v_H), \qquad(10.11)$$
$$a_W(v_h,v_h) \lesssim b_{W_h}(v_{W_h},v_{W_h}) + \sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}), \qquad(10.12)$$
$$a_W(v_p,v_p) \lesssim b_{W^p}(v_{W^p},v_{W^p}) + \sum_K b_K(v_K,v_K). \qquad(10.13)$$
This will be done in the following three lemmas (Lemmas 10.7–10.9); thus, on proving those lemmas, we complete the proof of this lemma. Inequality (10.11) is proved in [3, Lemma 13]. We briefly sketch the proof here.

Lemma 10.7. For any $v_H \in V_H$ we have
$$a_W(v_H,v_H) \lesssim b_H(v_H,v_H).$$
Proof. For any $v_H \in V_H$, there exists $v_H^* \in V_H^*$ such that $v_H = \Pi_h v_H^*$, so that
$$a_W(v_H,v_H) \simeq \|v_H\|_{H_I^{1/2}(\Gamma)}^2 = \|\Pi_h v_H^*\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|v_H^*\|_{H_I^{1/2}(\Gamma)}^2,$$
where the last inequality is given by Lemma C.36. On the other hand, (10.6) gives $v_H^* = \Pi_H^* v_H$, and thus due to the definition of the bilinear form $b_H(\cdot,\cdot)$ in (10.5) we have
$$b_H(v_H,v_H) = a_W(\Pi_H^* v_H, \Pi_H^* v_H) = a_W(v_H^*, v_H^*) \simeq \|v_H^*\|_{H_I^{1/2}(\Gamma)}^2.$$
The required result then follows.

Inequality (10.12) is proved by separately proving the coercivity of the bilinear form $b_{W_h}(\cdot,\cdot)$ and of the sum $\sum_{j=1}^J b_{\Gamma_j}(\cdot,\cdot)$. This is done in [3, Lemmas 14 and 16].
However, the proof of [3, Lemma 14] relies on (A.65) and the equivalence
$$b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}) = a_W(v_{\Gamma_j},v_{\Gamma_j}) \simeq \|v_{\Gamma_j}\|_{H_I^{1/2}(\Gamma_j)}^2.$$
Lemma A.13 shows that the constants in this equivalence depend on $h$. We prove inequality (10.12) in the following lemma, where we use (A.86) instead of (A.65); we are thus able to use the following equivalence, whose constants are independent of $h$:
$$b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}) = a_W(v_{\Gamma_j},v_{\Gamma_j}) \simeq \|v_{\Gamma_j}\|_{H_I^{1/2}(\Gamma)}^2.$$

Lemma 10.8. For any $v_h = v_{W_h} + \sum_{j=1}^J v_{\Gamma_j} \in V_h$ with $v_{W_h} \in V_{W_h}$ and $v_{\Gamma_j} \in V_{\Gamma_j}$ we have
$$a_W(v_h,v_h) \lesssim b_{W_h}(v_{W_h},v_{W_h}) + \sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}).$$
Proof. Let $v_\Gamma := \sum_{j=1}^J v_{\Gamma_j}$. Then $v_h = v_{W_h} + v_\Gamma$, so that
$$a_W(v_h,v_h) \simeq \|v_h\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|v_{W_h}\|_{H_I^{1/2}(\Gamma)}^2 + \|v_\Gamma\|_{H_I^{1/2}(\Gamma)}^2.$$
Hence it suffices to prove
$$\|v_{W_h}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim b_{W_h}(v_{W_h},v_{W_h}) \qquad(10.14)$$
and
$$\|v_\Gamma\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}). \qquad(10.15)$$
First we prove (10.14). This is done by decomposing $v_{W_h}$ into edge and vertex components and proving the required inequality for each component as follows. Let
$$v_{W_h,e} := \sum_{k\in I_{H,e}} v(x_k)\,\phi_k^h \quad\text{and}\quad v_{W_h,v} := \sum_{k\in I_{H,v}} v(x_k)\,\phi_k^h.$$
Then $v_{W_h} = v_{W_h,e} + v_{W_h,v}$, so that
$$\|v_{W_h}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|v_{W_h,e}\|_{H_I^{1/2}(\Gamma)}^2 + \|v_{W_h,v}\|_{H_I^{1/2}(\Gamma)}^2. \qquad(10.16)$$
To deal with the first term on the right-hand side of (10.16), we partition the set $E_H$ of all subdomain edges not on $\partial\Gamma$ into $E_H = E_{H,1}\cup\cdots\cup E_{H,M}$ with the property that for all $m = 1,\dots,M$
$$\gamma,\gamma' \in E_{H,m},\ \gamma\neq\gamma' \implies \overline\Gamma_\gamma\cap\overline\Gamma_{\gamma'} = \emptyset, \qquad(10.17)$$
where $\Gamma_\gamma$ is the union of $\gamma$ and the pair of subdomains sharing this edge. The assumption on the mesh assures us that such a partition is possible with $M$ independent of $J$, $H$, and $h$. With this partition, we further decompose $v_{W_h,e}$ as
$$v_{W_h,e} = \sum_{m=1}^M v_{W_h,e,m}, \quad\text{where}\quad v_{W_h,e,m} = \sum_{\gamma\in E_{H,m}} v_\gamma \quad\text{with}\quad v_\gamma = \sum_{k\in I_{h,\gamma}} v(x_k)\,\phi_k^h.$$
Here $I_{h,\gamma} = \{k\in I_{H,e} : x_k\in\gamma\}$. The triangle and Cauchy–Schwarz inequalities yield
$$\|v_{W_h,e}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim M\sum_{m=1}^M \|v_{W_h,e,m}\|_{H_I^{1/2}(\Gamma)}^2.$$
Note that $\operatorname{supp}(v_\gamma)\subset\overline\Gamma_\gamma$. Property (10.17) implies that for each $m = 1,\dots,M$, the supports of $v_\gamma$ for $\gamma\in E_{H,m}$ are disjoint. Hence we can invoke Theorem A.10 to obtain
$$\|v_{W_h,e,m}\|_{H_I^{1/2}(\Gamma)}^2 = \Big\|\sum_{\gamma\in E_{H,m}} v_\gamma\Big\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \sum_{\gamma\in E_{H,m}} \|v_\gamma\|_{H_I^{1/2}(\Gamma_\gamma)}^2.$$
It then follows from Lemma 10.2 that
$$\|v_{W_h,e,m}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \sum_{\gamma\in E_{H,m}} \|v_\gamma\|_{L^2(\gamma)}^2.$$
Consequently,
$$\|v_{W_h,e}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \sum_{m=1}^M\sum_{\gamma\in E_{H,m}} \|v_\gamma\|_{L^2(\gamma)}^2 = \sum_{\gamma\in E_H} \|v_\gamma\|_{L^2(\gamma)}^2.$$
This and Lemma 10.1 imply
$$\|v_{W_h,e}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim b_{W_h}(v_{W_h,e},v_{W_h,e}). \qquad(10.18)$$
The second term on the right-hand side of (10.16) can be dealt with in the same manner. The index set $I_{H,v}$ is decomposed into $I_{H,v} = I_{v,1}\cup\cdots\cup I_{v,L}$ such that for any $\ell = 1,\dots,L$
$$k,k'\in I_{v,\ell},\ k\neq k' \implies \operatorname{supp}(\phi_k^h)\cap\operatorname{supp}(\phi_{k'}^h) = \emptyset.$$
Denoting by $\gamma_k^h$ the support of $\phi_k^h$ and noting that $\operatorname{diam}(\gamma_k^h)\simeq h_k$, we deduce on invoking Lemma C.12 and recalling definition (10.8) of $b_{W_h}(\cdot,\cdot)$
$$\|v_{W_h,v}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \sum_{k\in I_{H,v}} \|v(x_k)\phi_k^h\|_{H_I^{1/2}(\gamma_k^h)}^2 \lesssim \sum_{k\in I_{H,v}} h_k\,|v(x_k)|^2 = b_{W_h}(v_{W_h,v},v_{W_h,v}).$$
The above inequality, (10.18), and (10.16) yield (10.14). Estimate (10.15) can be shown by invoking Theorem A.11 and then Lemma B.3 to obtain
$$\|v_\Gamma\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \sum_{j=1}^J \|v_{\Gamma_j}\|_{H_I^{1/2}(\Gamma)}^2 \simeq \sum_{j=1}^J a_W(v_{\Gamma_j},v_{\Gamma_j}) = \sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}).$$
This completes the proof of the lemma.
The next lemma, which establishes coercivity of the bilinear forms on the p-wire basket and p-interior spaces ($V_{W^p}$ and $V_K$, respectively), is analogous to [2, Lemmas 9 and 12]. The current proofs assure us that the constants are independent not only of $p$ but also of $H$ and $h$. We note here again that Theorem A.11 is used instead of Theorem A.10, as was done in [2].

Lemma 10.9. For any $v_p = v_{W^p} + \sum_K v_K$ with $v_{W^p}\in V_{W^p}$ and $v_K\in V_K$, the following inequality holds
$$a_W(v_p,v_p) \lesssim b_{W^p}(v_{W^p},v_{W^p}) + \sum_K b_K(v_K,v_K).$$
The constant involved is independent of $v_p$, $p$, $h$ and $H$.

Proof. Let $v_{p,K} := \sum_K v_K$. Then $v_p = v_{W^p} + v_{p,K}$, so that
$$a_W(v_p,v_p) \simeq \|v_p\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|v_{W^p}\|_{H_I^{1/2}(\Gamma)}^2 + \|v_{p,K}\|_{H_I^{1/2}(\Gamma)}^2.$$
Since (due to Theorem A.11 and Lemma B.3)
$$\|v_{p,K}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \sum_K \|v_K\|_{H_I^{1/2}(\Gamma)}^2 \simeq \sum_K b_K(v_K,v_K),$$
it suffices to prove
$$\|v_{W^p}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim b_{W^p}(v_{W^p},v_{W^p}).$$
Similarly to h-wire basket functions, a function $v_{W^p}\in V_{W^p}$ in the p-wire basket can be decomposed into two components, an edge component $v_{W^p,e}$ and a vertex component $v_{W^p,v}$, namely, $v_{W^p} = v_{W^p,e} + v_{W^p,v}$, where
$$v_{W^p,e} = \sum_{k\in I_{p,e}} v_p(q_k)\,\phi_k^p \quad\text{and}\quad v_{W^p,v} = \sum_{k\in I_{p,v}} v_p(q_k)\,\phi_k^p;$$
see (10.1) and Figure 10.3. The proof of
$$\|v_{W^p,e}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim b_{W^p}(v_{W^p,e},v_{W^p,e})$$
follows exactly along the lines of the proof for the component $v_{W_h,e}$ in (10.14), except that Lemma 10.3 is used in lieu of Lemma 10.2. Similarly, the proof of
$$\|v_{W^p,v}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim b_{W^p}(v_{W^p,v},v_{W^p,v})$$
follows exactly along the lines of the proof for the component $v_{W_h,v}$ in (10.14), except that Lemma 10.4 is used in lieu of Lemma C.12. The lemma is proved.
Stability of the decomposition

We now prove the stability of the decomposition. First we prove that there exists a decomposition of a function in $V$ which is stable under the defined bilinear forms. We note that because there is more than one way to define the component in the coarse space $V_H$, the decomposition is not unique. Recalling the definition and properties of Clément's interpolation operator in Subsection C.5.4, we introduce the notations
$$\Pi_h^0 : L^2(\Gamma) \longrightarrow V_h \quad\text{and}\quad \Pi_{H^*}^0 : L^2(\Gamma) \longrightarrow V_H^*.$$
We define for any $v \in V$
$$v_h = \Pi_h^0 v \in V_h, \qquad v_H = \Pi_h\Pi_{H^*}^0 v_h \in V_H, \qquad v_p = v - v_h \in V_p. \qquad(10.19)$$
Letting
$$\tilde v_h = v_h - v_H, \qquad(10.20)$$
we can decompose $\tilde v_h$ uniquely as
$$\tilde v_h = v_{W_h} + \sum_{j=1}^J v_{\Gamma_j}, \qquad v_{W_h}\in V_{W_h},\ v_{\Gamma_j}\in V_{\Gamma_j}. \qquad(10.21)$$
We can also decompose $v_p$ uniquely as
$$v_p = v_{W^p} + \sum_K v_K, \qquad v_{W^p}\in V_{W^p},\ v_K\in V_K. \qquad(10.22)$$
Then it is clear that
$$v = v_H + v_{W_h} + \sum_{j=1}^J v_{\Gamma_j} + v_{W^p} + \sum_K v_K. \qquad(10.23)$$
We now show stability for each component on the right-hand side of (10.23).

Lemma 10.10. Let $v\in V$ and $v_H$ be defined as in (10.19). Then
$$b_H(v_H,v_H) \lesssim a_W(v,v).$$
The constant involved is independent of $v$, $h$, $H$ and $p$.

Proof. It follows from (10.7) that
$$\Pi_H^* v_H = \Pi_H^*\Pi_h\Pi_{H^*}^0 v_h = \Pi_{H^*}^0 v_h.$$
Therefore, the definition of the bilinear form $b_H(\cdot,\cdot)$ in (10.5) implies
$$b_H(v_H,v_H) = a_W(\Pi_{H^*}^0 v_h,\Pi_{H^*}^0 v_h) \simeq \|\Pi_{H^*}^0 v_h\|_{H_I^{1/2}(\Gamma)}^2 = \|\Pi_{H^*}^0\Pi_h^0 v\|_{H_I^{1/2}(\Gamma)}^2.$$
It follows from Lemma C.46 that
$$b_H(v_H,v_H) \lesssim \|v\|_{H_I^{1/2}(\Gamma)}^2 \simeq a_W(v,v).$$
Lemma 10.11. For any $v\in V$, let $v_{W_h}$ be defined as in (10.21). Then
$$b_{W_h}(v_{W_h},v_{W_h}) \lesssim \max_{1\le j\le J}\big(1+\log(H_j/h_j)\big)\,a_W(v,v).$$
The constant involved is independent of $v$, $h$, $H$ and $p$.

Proof. It follows from Lemma 10.1, the definition of $v_{W_h}$, and (10.21) that
$$b_{W_h}(v_{W_h},v_{W_h}) \simeq \sum_{\gamma\in E_H} \|v_{W_h}\|_{L^2(\gamma)}^2 = \sum_{\gamma\in E_H} \Big\|\tilde v_h - \sum_{j=1}^J v_{\Gamma_j}\Big\|_{L^2(\gamma)}^2 = \sum_{\gamma\in E_H} \|\tilde v_h\|_{L^2(\gamma)}^2,$$
where we note that $v_{\Gamma_j}\equiv 0$ on $\gamma$. Invoking Lemma C.24 we deduce
$$b_{W_h}(v_{W_h},v_{W_h}) \lesssim \max_{1\le j\le J}\Big(1+\log\frac{H_j}{h_j}\Big)\sum_{j=1}^J \|\tilde v_h\|_{H_w^{1/2}(\Gamma_j)}^2,$$
where the weighted norm $\|\cdot\|_{H_w^{1/2}(\Gamma_j)}$ is defined in Subsection A.2.6.1. Note that
$$\sum_{j=1}^J \|\tilde v_h\|_{H_w^{1/2}(\Gamma_j)}^2 \lesssim \sum_{j=1}^J H_j^{-1}\|\tilde v_h\|_{L^2(\Gamma_j)}^2 + |\tilde v_h|_{H^{1/2}(\Gamma)}^2.$$
Since we can write $\tilde v_h$ as $\tilde v_h = \Pi_h(v_h - \Pi_H^0 v_h)$, by Lemma C.36 (more precisely, we use (C.64)) and Lemma C.46 we obtain
$$\sum_{j=1}^J \|\tilde v_h\|_{H_w^{1/2}(\Gamma_j)}^2 \lesssim \sum_{j=1}^J H_j^{-1}\|v_h - \Pi_H^0 v_h\|_{L^2(\Gamma_j)}^2 + |v_h - \Pi_H^0 v_h|_{H^{1/2}(\Gamma)}^2 \lesssim |v_h|_{H^{1/2}(\Gamma)}^2 \le \|v_h\|_{H_S^{1/2}(\Gamma)}^2 \lesssim \|v_h\|_{H_I^{1/2}(\Gamma)}^2.$$
Using again the boundedness of $\Pi_h^0$ given by Lemma C.46, we infer
$$\sum_{j=1}^J \|\tilde v_h\|_{H_w^{1/2}(\Gamma_j)}^2 \lesssim \|v\|_{H_I^{1/2}(\Gamma)}^2 \simeq a_W(v,v).$$
The desired result then follows.
Lemma 10.12. Let $v\in V$ and $v_{\Gamma_j}$ be defined as in (10.21). Then
$$\sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}) \lesssim \max_{1\le j\le J}\Big(1+\log\frac{H_j}{h_j}\Big)^2 a_W(v,v).$$
The constant involved is independent of $v$, $h$, $H$ and $p$.

Proof. By the definition of the bilinear form $b_{\Gamma_j}(\cdot,\cdot)$, see (10.5), and by Lemma A.13 (i),
$$\sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}) = \sum_{j=1}^J a_W(v_{\Gamma_j},v_{\Gamma_j}) \simeq \sum_{j=1}^J \|v_{\Gamma_j}\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \sum_{j=1}^J \|v_{\Gamma_j}\|_{H_w^{1/2}(\Gamma_j)}^2.$$
Due to the assumption on the coarse mesh, we can assume that $\Gamma_j = I\times I$ where $I = (0,H)$. Invoking inequality (A.99) in Lemma A.16 we deduce
$$\sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}) \lesssim \sum_{j=1}^J \Big(\int_I \|v_{\Gamma_j}(x,\cdot)\|_{H_w^{1/2}(I)}^2\,dx + \int_I \|v_{\Gamma_j}(\cdot,y)\|_{H_w^{1/2}(I)}^2\,dy\Big). \qquad(10.24)$$
For any $x\in I$, Lemma C.21 gives
$$\int_I \|v_{\Gamma_j}(x,\cdot)\|_{H_w^{1/2}(I)}^2\,dx \lesssim \max_{1\le j\le J}\Big(1+\log\frac{H_j}{h_j}\Big)\int_I |v_{\Gamma_j}(x,\cdot)|_{H^{1/2}(I)}^2\,dx.$$
Using (A.96) we infer
$$\int_I \|v_{\Gamma_j}(x,\cdot)\|_{H_w^{1/2}(I)}^2\,dx \lesssim \max_{1\le j\le J}\Big(1+\log\frac{H_j}{h_j}\Big)\|v_{\Gamma_j}\|_{H_w^{1/2}(\Gamma_j)}^2.$$
The same estimate can be derived for the second integral on the right-hand side of (10.24), which then implies
$$\sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}) \lesssim \max_{1\le j\le J}\Big(1+\log\frac{H_j}{h_j}\Big)\sum_{j=1}^J \|v_{\Gamma_j}\|_{H_w^{1/2}(\Gamma_j)}^2.$$
Noting that $v_{\Gamma_j} = \tilde v_h - v_{W_h}$ on $\Gamma_j$, see (10.21), we deduce
$$\sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}) \lesssim \max_{1\le j\le J}\Big(1+\log\frac{H_j}{h_j}\Big)\sum_{j=1}^J \Big(\|\tilde v_h\|_{H_w^{1/2}(\Gamma_j)}^2 + \|v_{W_h}\|_{H_w^{1/2}(\Gamma_j)}^2\Big). \qquad(10.25)$$
Let $H = \max_{1\le j\le J} H_j$. Since the coarse mesh is quasi-uniform, we deduce from the definition of the $\|\cdot\|_{H_w^{1/2}(\Gamma_j)}$-norm and (10.20) that
$$\sum_{j=1}^J \|\tilde v_h\|_{H_w^{1/2}(\Gamma_j)}^2 \lesssim \frac1H\|\tilde v_h\|_{L^2(\Gamma)}^2 + |\tilde v_h|_{H^{1/2}(\Gamma)}^2 = \frac1H\|v_h - v_H\|_{L^2(\Gamma)}^2 + |v_h - v_H|_{H^{1/2}(\Gamma)}^2.$$
Noting (10.19) and the fact that the standard interpolation operator $\Pi_h$ is a projection, we can write $v_h - v_H = \Pi_h(v_h - \Pi_H^0 v_h)$, so that
$$\sum_{j=1}^J \|\tilde v_h\|_{H_w^{1/2}(\Gamma_j)}^2 \lesssim \frac1H\|v_h - \Pi_H^0 v_h\|_{L^2(\Gamma)}^2 + |v_h - \Pi_H^0 v_h|_{H^{1/2}(\Gamma)}^2.$$
Hence on invoking Lemma C.46 we deduce
$$\sum_{j=1}^J \|\tilde v_h\|_{H_w^{1/2}(\Gamma_j)}^2 \lesssim |v_h|_{H^{1/2}(\Gamma)}^2 \le \|v_h\|_{H_w^{1/2}(\Gamma)}^2 \lesssim \|v_h\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|v\|_{H_I^{1/2}(\Gamma)}^2.$$
This inequality together with (10.25) and Lemma 10.11 yields the required result.

Lemma 10.13. Let $v\in V$ and $v_{W^p}$ be defined as in (10.22). Then
$$b_{W^p}(v_{W^p},v_{W^p}) \lesssim \max_K(1+\log p_K)\,a_W(v,v),$$
where the constant involved is independent of $v$, $H$, $h$ and $p$.

Proof. Due to Lemma 10.1 it suffices to prove
$$\sum_K\sum_{\gamma\in\partial K} \|v_{W^p}\|_{L^2(\gamma)}^2 \lesssim \max_K(1+\log p_K)\,\|v\|_{H_I^{1/2}(\Gamma)}^2.$$
Noting that $v_{W^p} = v_p$ on $\gamma$, by mapping to the reference element we obtain
$$\|v_{W^p}\|_{L^2(\gamma)}^2 = \|v_p\|_{L^2(\gamma)}^2 \simeq h_\gamma\,\|\hat v_p\|_{L^2(\hat\gamma)}^2.$$
It follows from (C.57) (the proof of which is at the end of the proof of Lemma C.30) that
$$\|v_{W^p}\|_{L^2(\gamma)}^2 \lesssim h_\gamma(1+\log p_K)\,\|\hat v_p\|_{H_S^{1/2}(\hat K)}^2 \lesssim h_\gamma(1+\log p_K)\,\|\hat v_p\|_{H_w^{1/2}(\hat K)}^2.$$
Invoking Lemma A.5 gives
$$\|v_{W^p}\|_{L^2(\gamma)}^2 \lesssim (1+\log p_K)\,\|v_p\|_{H_w^{1/2}(K)}^2 = (1+\log p_K)\Big(|v_p|_{H^{1/2}(K)}^2 + h_K^{-1}\|v_p\|_{L^2(K)}^2\Big),$$
where the constant is independent of $p$, $H$, and $h$. Recall that $v_p = v - \Pi_h^0 v$; see (10.19). Summing over $\gamma$ and $K$ and using (C.80) and (C.81), we obtain the desired result.

In the following lemma, to avoid a dependence of the constant on $h$, we employ an approach different from what is used in [2], where the pure p-version is studied.

Lemma 10.14. Let $v\in V$ and $v_K\in V_K$ be defined as in (10.22). Then
$$\sum_K b_K(v_K,v_K) \lesssim \max_K(1+\log p_K)^2\,a_W(v,v).$$
The constant involved is independent of $v$, $H$, $h$ and $p$.

Proof. By using successively Lemma B.3, Lemma A.13, and Lemma A.5, we obtain
$$b_K(v_K,v_K) = a_W(v_K,v_K) \simeq \|v_K\|_{H_I^{1/2}(\Gamma)}^2 \lesssim h_K\,\|\hat v_K\|_{H_w^{1/2}(\hat K)}^2.$$
Here we use the usual notation that if $u$ is a function defined in $K$, then $\hat u$ is defined in the reference element $\hat K$ by $\hat u(x) = u(h_K x)$, where $h_K$ is the scaling factor that defines $K$, namely, $\hat K := \{x/h_K : x\in K\}$. Recalling that $v_K = v_p - v_{W^p}$ on $K$, so that $\hat v_K = \hat v_p - \hat v_{W^p}$, we infer from Lemma 6 in [2]
$$b_K(v_K,v_K) \lesssim h_K\Big(\|\hat v_{W^p}\|_{H_w^{1/2}(\hat K)}^2 + (1+\log p_K)^2\,\|\hat v_p\|_{H_w^{1/2}(\hat K)}^2\Big).$$
We note that a scaling argument at this stage would yield a constant growing as $h_K^{-1}$ for the first term in the sum on the right-hand side. To avoid this, we use Lemma 10.3 to obtain
$$\|\hat v_{W^p}\|_{H_I^{1/2}(\hat K)}^2 \lesssim \sum_{\hat\gamma\in\partial\hat K} \|\hat v_{W^p}\|_{L^2(\hat\gamma)}^2,$$
which implies after a scaling argument and an application of Lemma A.5
$$b_K(v_K,v_K) \lesssim \sum_{\gamma\in\partial K} \|v_{W^p}\|_{L^2(\gamma)}^2 + (1+\log p_K)^2\,\|v_p\|_{H_w^{1/2}(K)}^2.$$
Summing over $K$ and proceeding as in the proof of Lemma 10.13, we obtain the desired result.

We are now able to state the stability of the decomposition (10.3)–(10.4).

Lemma 10.15 (Stability of decomposition). There exists a constant $C_0$ independent of $H$, $h$, and $p$ such that any $v\in V$ admits a decomposition
$$v = v_H + v_{W_h} + \sum_{j=1}^J v_{\Gamma_j} + v_{W^p} + \sum_K v_K$$
satisfying
$$b_H(v_H,v_H) + b_{W_h}(v_{W_h},v_{W_h}) + \sum_{j=1}^J b_{\Gamma_j}(v_{\Gamma_j},v_{\Gamma_j}) + b_{W^p}(v_{W^p},v_{W^p}) + \sum_K b_K(v_K,v_K)$$
$$\lesssim \max_{1\le j\le J}\big(1+\log(H_j p_j/h_j)\big)^2\,a_W(v,v),$$
where $p_j = \max_{K\subset\Gamma_j} p_K$.
Proof. The lemma is a direct consequence of Lemmas 10.10, 10.11, 10.12, 10.13, and 10.14.

As a consequence of Lemma 10.6, Lemma 10.15, and the general theory (Theorem 2.1) we have

Theorem 10.1. The condition number of the additive Schwarz operator $P_{ad}$ is bounded as
$$\kappa(P_{ad}) \lesssim \max_{1\le j\le J}\Big(1+\log\frac{H_j p_j}{h_j}\Big)^2,$$
where $p_j = \max_{K\subset\Gamma_j} p_K$.
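Theorem 10.1 instantiates the abstract additive Schwarz bound of Theorem 2.1, where $\kappa(P_{ad}) = \lambda_{\max}/\lambda_{\min}$ is controlled by the coercivity and stability constants of the subspace decomposition. The toy script below (a 1D finite-difference Laplacian with exact subdomain and coarse solves — an assumed model problem, not the hypersingular boundary integral operator of this chapter) shows how such a condition number is computed and how adding a coarse space improves it:

```python
import numpy as np

n, J = 127, 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Laplacian, Dirichlet BCs

# Local solves: exact inverses on (non-overlapping) index blocks.
B_local = np.zeros((n, n))
for idx in np.array_split(np.arange(n), J):
    R = np.eye(n)[idx]                                   # restriction to the subdomain
    B_local += R.T @ np.linalg.inv(R @ A @ R.T) @ R

# Coarse space: piecewise-linear hat functions on J-1 interior coarse nodes.
nodes = np.linspace(0, n + 1, J + 1)[1:-1]
H = nodes[1] - nodes[0]
R0 = np.maximum(0.0, 1.0 - np.abs(np.arange(1, n + 1)[None, :] - nodes[:, None]) / H)
B_coarse = R0.T @ np.linalg.inv(R0 @ A @ R0.T) @ R0

Lc = np.linalg.cholesky(A)
def cond(B):
    """Condition number of the additive Schwarz operator B A (B an SPD preconditioner)."""
    ev = np.linalg.eigvalsh(Lc.T @ B @ Lc)               # symmetric similarity of B A
    return ev[-1] / ev[0]

k_one = cond(B_local)                # one level: no coarse space, kappa grows with J
k_two = cond(B_local + B_coarse)     # two levels: coarse space keeps kappa moderate
assert k_two < k_one
```

The one-level operator loses control of $\lambda_{\min}$ on globally smooth modes; the coarse space restores it, mirroring the role of $V_H$ in the theory above.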
10.2.2 An overlapping method

10.2.2.1 The h-version

In this subsection we first report on the result for the h-version in [230]. Recall the two-level mesh and the boundary element spaces $V_0 := V_H^*$ and $V := V_h$ defined in Subsection 10.1.1. We also recall the subspaces $V_j := V_{\Gamma_j}$, $j = 1,\dots,J$, defined in that subsection.

Overlapping subdomains: We extend each subdomain $\Gamma_j$ in the following way. First we define, for some positive constant $\delta$, called the overlap size,
$$\tilde V_j = \operatorname{span}\{\phi_k : x_k\notin\overline\Gamma_j,\ \operatorname{dist}(x_k,\partial\Gamma_j)\le\delta\},$$
and denote
$$\Gamma_j' = \operatorname{supp}\{\phi_k : \phi_k\in\tilde V_j\},$$
which is the shaded area in Figure 10.5. Here the basis functions $\phi_k = \phi_k^h$ are defined in Subsection 10.2.1.1, and the distance is defined with the max norm $\|x\| = \max\{|x_1|,|x_2|\}$, where $x = (x_1,x_2)$. The extended subdomain $\tilde\Gamma_j$ is then defined as $\tilde\Gamma_j = \overline\Gamma_j\cup\Gamma_j'$. We note that $\tilde\Gamma_j$ need not be a quadrilateral domain. Also, if $\delta$ is chosen such that $\delta\in(0,H]$, then $\operatorname{diam}(\tilde\Gamma_j)\simeq H$.

Subspace decomposition: The subspace decomposition
$$V = V_0 + V_1 + \cdots + V_J \qquad(10.26)$$
is performed with subspaces $V_j$, $j = 0,1,\dots,J$, defined by
Fig. 10.5 • vertex at a distance $\delta$ to $\overline\Gamma_j$; $\Gamma_j'$: shaded region; $\tilde\Gamma_j = \Gamma_j\cup\Gamma_j'$: overlapping subdomain; $R_j$ (rectangle containing the shaded region): support of the cutoff function $\theta_j$.
$$V_0 := V_H = \Pi_h V_H^* \quad\text{and}\quad V_j := \operatorname{span}\big(V_{\Gamma_j}\cup\tilde V_j\big) = V\cap\tilde H^{1/2}(\tilde\Gamma_j),\quad j = 1,\dots,J,$$
where $\Pi_h$ is the interpolation operator which interpolates continuous functions into functions in $V$. The bilinear forms $b_i(\cdot,\cdot)$ associated with the subspaces $V_i$ are defined by
$$b_0(v,w) = a_W(\Pi_H^* v,\Pi_H^* w) \quad \forall v,w\in V_0,$$
and
$$b_j(v,w) = a_W(v,w) \quad \forall v,w\in V_j,\ j = 1,\dots,J,$$
where $\Pi_H^*$ is the interpolation operator which interpolates continuous functions into functions in $V_H^*$.
where ΠH∗ be the interpolation operator which interpolates continuous functions into functions in V0 = VH∗ . Stability of the decomposition: We first prove the stability of the decomposition. The following results were proved in [72, Lemma 3.5]; however, for the convenience of the reader we present the detailed proof to justify that the results still hold with the weighted norm defined in Subsection A.2.6.1. Lemma 10.16. Let 0 < δ < β and Iβ := (0, β ). (i) If v ∈ H 1/2 (Iβ ), then β δ
|v(x)|2 β 2 dx 1 + log2 v 1/2 . Hw (Iβ ) x δ
(ii) If v ∈ H 1/2 (Iβ × Iβ ), then
(10.27)
228
10 Two-Level Methods: the hp-Version on Rectangular Elements
β β 0
δ
|v(x, y)|2 β 2 dx dy 1 + log2 v 1/2 . Hw (Iβ ×Iβ ) x δ
(10.28)
Proof. We first prove (10.27) for $v_\delta\in V_\delta$, the space of continuous piecewise-linear functions on a uniform mesh of size $\delta$ on $(0,\beta)$. A simple calculation reveals
$$\int_\delta^\beta \frac{|v_\delta(x)|^2}{x}\,dx \le \|v_\delta\|_{L^\infty(0,\beta)}^2 \int_\delta^\beta \frac{dx}{x} = \|v_\delta\|_{L^\infty(0,\beta)}^2\,\log\frac\beta\delta.$$
By invoking Lemma C.19 with $H = \beta$, $h = \delta$, and $I = (0,H)$, we deduce
$$\int_\delta^\beta \frac{|v_\delta(x)|^2}{x}\,dx \lesssim \Big(1+\log^2\frac\beta\delta\Big)\,\|v_\delta\|_{H_w^{1/2}(0,\beta)}^2. \qquad(10.29)$$
Consider now $v\in H^{1/2}(0,\beta)$. Let $v_\delta\in V_\delta$ be the $L^2$-projection of $v$ onto $V_\delta$. It can be shown that, see (C.47) and (C.48) with $R = (0,\beta)$ and $h = \delta$,
$$\|v - v_\delta\|_{L^2(0,\beta)}^2 \lesssim \delta\,|v|_{H^{1/2}(0,\beta)}^2 \quad\text{and}\quad \|v_\delta\|_{H_w^{1/2}(0,\beta)} \lesssim \|v\|_{H_w^{1/2}(0,\beta)}. \qquad(10.30)$$
Hence,
$$\int_\delta^\beta \frac{|v(x)|^2}{x}\,dx \lesssim \int_\delta^\beta \frac{|v(x)-v_\delta(x)|^2}{x}\,dx + \int_\delta^\beta \frac{|v_\delta(x)|^2}{x}\,dx \le \frac1\delta\|v-v_\delta\|_{L^2(0,\beta)}^2 + \int_\delta^\beta \frac{|v_\delta(x)|^2}{x}\,dx. \qquad(10.31)$$
Inequality (10.27) follows immediately from (10.29), (10.30), and (10.31).
Inequality (10.27) follows immediately from (10.29), (10.30), and (10.31). Inequality (10.28) is obtained by using successively (10.27), the definition of the weighted norm (see Subsection A.2.6.1), (A.97), Fubini’s Theorem, and (A.98) β β 0
δ
|v(x, y)|2 β dx dy 1 + log2 x δ
β = 1 + log δ
+
β β 0 0
2
β
v(·, y)2 1/2
Hw (0,β )
dy
0
β 0
1 v(·, y)2L2 (0,β ) β
|v(x, y) − v(x , y)|2 dy dx dx |x − x |2
10.2 The Hypersingular Integral Equation
229
β 1 1 + log2 v2L2 ((0,β )2 ) δ β +
β β v(x, ·) − v(x , ·)2 L2 (0,β )
|x − x |2
0 0
dx dx
β 2 v 1/2 , 1 + log2 Hw ((0,β )×(0,β )) δ finishing the proof of the lemma.
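For intuition on the logarithm in (10.27): already for the constant function $v\equiv 1$, whose weighted norm on $(0,\beta)$ is of order one, the left-hand side equals $\int_\delta^\beta x^{-1}\,dx = \log(\beta/\delta)$, so some logarithmic growth in $\beta/\delta$ is unavoidable. A minimal numerical check of this borderline case (an illustration, not part of the proof):

```python
import numpy as np

beta = 1.0
for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    x = np.geomspace(delta, beta, 20001)                 # log-spaced quadrature grid
    f = 1.0 / x                                          # integrand |v|^2 / x for v = 1
    lhs = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))    # trapezoidal rule
    log_term = np.log(beta / delta)
    assert abs(lhs - log_term) < 1e-3                    # the integral is log(beta/delta)
    # For v = 1, the weighted norm squared on (0, beta) is beta^{-1}||v||_{L2}^2 = 1,
    # so the bound (10.27) cannot hold without a logarithmic factor:
    assert lhs <= 1.0 + log_term ** 2
```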
The above results will be used to prove the following lemma, which is crucial to obtain the stability of the decomposition. Let $R$ be the union of overlapping rectangular subdomains $R_\ell$, $\ell = 1,\dots,J$, of diameters $\beta_\ell$. Assume that the size of the overlap is $\delta$. Let $\{\theta_\ell : \ell = 1,\dots,J\}$ be a partition of unity on $R$ defined by piecewise bilinear functions such that $\operatorname{supp}\theta_\ell = \overline R_\ell$.

Lemma 10.17. If $\delta \le \min_\ell \beta_\ell/2$, then for any $w\in\tilde H^{1/2}(R)$ the following estimate holds
$$\sum_{\ell=1}^J \|\theta_\ell w\|_{H_w^{1/2}(R_\ell)}^2 \lesssim \sum_{\ell=1}^J \Big(1+\log\frac{\beta_\ell}{\delta}\Big)^2\|w\|_{H_w^{1/2}(R_\ell)}^2 + \int_R \frac{|w(x)|^2}{\operatorname{dist}(x,\partial R)}\,dx. \qquad(10.32)$$
Proof. Without loss of generality, we can assume that $R_\ell = I\times I$, where $I = (0,\beta)$, so that $\theta_\ell$ can be defined as
$$\theta_\ell(x,y) = \eta(x)\eta(y), \quad\text{where}\quad \eta(t) = \begin{cases} t/\delta, & 0\le t\le\delta,\\ 1, & \delta < t < \beta-\delta,\\ (\beta-t)/\delta, & \beta-\delta\le t\le\beta. \end{cases}$$
In view of the definition of the norm $\|\cdot\|_{H_w^{1/2}(R_\ell)}$ (see Subsection A.2.6.1) and Lemma A.16, in order to estimate $\|\theta_\ell w\|_{H_w^{1/2}(R_\ell)}^2$ we will estimate
$$T_1 := \int_I\!\int_I \frac{\|(\theta w)(x,\cdot)-(\theta w)(x',\cdot)\|_{L^2(I)}^2}{|x-x'|^2}\,dx\,dx' \quad\text{and}\quad T_2 := \int_{R_\ell} \frac{|(\theta w)(x)|^2}{\operatorname{dist}(x,\partial R_\ell)}\,dx.$$
Splitting the integral over $I = (0,\beta)$ into three integrals over $I_1 := [0,\delta]$, $I_2 := [\delta,\beta-\delta]$, and $I_3 := [\beta-\delta,\beta]$, and denoting, for any $v$,
$$A_{ij}(v) := \int_{I_i}\!\int_{I_j} \frac{\|v(x,\cdot)-v(x',\cdot)\|_{L^2(I)}^2}{|x-x'|^2}\,dx\,dx',$$
we observe that, by symmetry, in order to estimate $T_1$ it suffices to consider $A_{11}(\theta w)$, $A_{12}(\theta w)$, $A_{13}(\theta w)$, $A_{23}(\theta w)$, and $A_{33}(\theta w)$. There is no need to consider $A_{22}(\theta w)$ because $A_{22}(\theta w) = A_{22}(w)$. The symmetric shape of $\theta$ implies the similarity of $A_{11}(\theta w)$ and $A_{33}(\theta w)$, and of $A_{12}(\theta w)$ and $A_{23}(\theta w)$. There remain three terms to be considered: $A_{11}(\theta w)$, $A_{12}(\theta w)$, and $A_{13}(\theta w)$.

Firstly, for $A_{11}$ we have
$$A_{11}(\theta w) \le 2\int_{I_1}\!\int_{I_1} \frac{\|\theta(x,\cdot)-\theta(x',\cdot)\|_{L^2(I)}^2}{|x-x'|^2}\,\|w(x,\cdot)\|_{L^2(I)}^2\,dx\,dx' + 2\int_{I_1}\!\int_{I_1} \frac{\|w(x,\cdot)-w(x',\cdot)\|_{L^2(I)}^2}{|x-x'|^2}\,\|\theta(x',\cdot)\|_{L^2(I)}^2\,dx\,dx'$$
$$\le 2\int_{I_1}\!\int_{I_1} \frac{\|\tfrac{x}{\delta}\eta(\cdot)-\tfrac{x'}{\delta}\eta(\cdot)\|_{L^2(I)}^2}{|x-x'|^2}\,\|w(x,\cdot)\|_{L^2(I)}^2\,dx\,dx' + 2\beta\,A_{11}(w) \lesssim \frac1\delta\|w\|_{L^2(I_1\times I)}^2 + A_{11}(w).$$
It is proved in Lemma C.25 that
$$\frac1\delta\|w\|_{L^2(I_1\times I)}^2 \le c\Big(1+\log\frac\beta\delta\Big)\|w\|_{H_w^{1/2}(R_\ell)}^2, \qquad(10.33)$$
where $c$ is independent of $w$, $\beta$, and $\delta$. Thus $A_{11}(\theta w)$ is bounded by the right-hand side of (10.32). For the term $A_{12}(\theta w)$ we have
$$A_{12}(\theta w) = \frac1{\delta^2}\int_0^\delta\!\int_\delta^{\beta-\delta} \frac{\|x\,w(x,\cdot)-\delta\,w(x',\cdot)\|_{L^2(I)}^2}{|x-x'|^2}\,dx'\,dx$$
$$\le \frac2{\delta^2}\int_0^\delta\!\int_\delta^{\beta-\delta} \frac{|x-\delta|^2}{|x-x'|^2}\,\|w(x,\cdot)\|_{L^2(I)}^2\,dx'\,dx + 2A_{12}(w) \le \frac2\delta\|w\|_{L^2(I_1\times I)}^2 + 2A_{12}(w).$$
Using again (10.33) we deduce that $A_{12}(\theta w)$ is bounded by the right-hand side of (10.32). Finally, for $A_{13}(\theta w)$, since the assumption $\delta\le\beta/2$ implies that $\frac\beta2 - x \ge 0$ for $x\in I_1$ and $x' - \frac\beta2 \ge 0$ for $x'\in I_3$, we find
$$A_{13}(\theta w) = \frac1{\delta^2}\int_0^\delta\!\int_{\beta-\delta}^\beta \frac{\|x\,w(x,\cdot)-(\beta-x')\,w(x',\cdot)\|_{L^2(I)}^2}{|x-x'|^2}\,dx'\,dx$$
$$\le \frac2{\delta^2}\int_0^\delta\!\int_{\beta-\delta}^\beta \frac{|x+x'-\beta|^2}{|x-x'|^2}\,\|w(x,\cdot)\|_{L^2(I)}^2\,dx'\,dx + \frac2{\delta^2}\int_0^\delta\!\int_{\beta-\delta}^\beta \frac{\|w(x,\cdot)-w(x',\cdot)\|_{L^2(I)}^2}{|x-x'|^2}\,|\beta-x'|^2\,dx'\,dx$$
$$\le \frac2{\delta^2}\int_0^\delta\!\int_{\beta-\delta}^\beta \frac{|(x'-\frac\beta2)-(\frac\beta2-x)|^2}{|(x'-\frac\beta2)+(\frac\beta2-x)|^2}\,\|w(x,\cdot)\|_{L^2(I)}^2\,dx'\,dx + 2A_{13}(w) \le \frac2\delta\|w\|_{L^2(I_1\times I)}^2 + 2A_{13}(w).$$
Inequality (10.33) yields the estimate for $A_{13}(\theta w)$, which implies that $T_1$ is bounded by the right-hand side of (10.32).

To estimate $T_2$ we first assume, without loss of generality, that $\operatorname{dist}(\mathbf{x},\partial R_\ell) = x$, where $\mathbf{x} = (x,y)$. Then
$$T_2 \le \int_0^\beta\!\int_0^\delta \frac{x^2}{\delta^2}\,\frac{|w(x,y)|^2}{x}\,dx\,dy + \int_0^\beta\!\int_\delta^\beta \frac{|w(x,y)|^2}{x}\,dx\,dy \le \frac1\delta\|w\|_{L^2(I_1\times I)}^2 + \int_0^\beta\!\int_\delta^\beta \frac{|w(x,y)|^2}{x}\,dx\,dy.$$
The desired estimate for $T_2$ now follows from (10.33) and (10.28).
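The cutoff functions used in the proof are easy to realize concretely. The sketch below (a one-dimensional chain of intervals with uniform overlap $\delta$ — an assumed toy geometry) builds the piecewise-linear profiles $\eta$, checks that away from the global boundary they sum to one (a partition of unity), and verifies the slope bound $|\eta'|\le 1/\delta$ responsible for the $1/\delta$ factors above:

```python
import numpy as np

delta, width, J = 0.1, 1.0, 5
starts = np.arange(J) * (width - delta)          # consecutive intervals overlap by delta
x = np.linspace(0.0, starts[-1] + width, 2001)

def ramp(t):
    """Piecewise-linear eta-profile on [0, width]: up-ramp, plateau, down-ramp."""
    return np.clip(np.minimum(t / delta, (width - t) / delta), 0.0, 1.0)

theta = np.stack([ramp(x - s) for s in starts])
interior = (x >= delta) & (x <= starts[-1] + width - delta)
assert np.allclose(theta[:, interior].sum(axis=0), 1.0)      # partition of unity
slopes = np.abs(np.diff(theta, axis=1)) / np.diff(x)
assert slopes.max() <= 1.0 / delta + 1e-6                    # |eta'| <= 1/delta
```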
Lemma 10.18 (Stability of decomposition). For any $u\in V$ there exist $u_i\in V_i$ satisfying $u = \sum_{i=0}^J u_i$ such that
$$\sum_{i=0}^J b_i(u_i,u_i) \lesssim \Big(1+\log^2\frac H\delta\Big)\,a_W(u,u), \qquad(10.34)$$
where the constant is independent of $u$, $H$, $h$, and $\delta$.

Proof. To define a decomposition for $u\in V$ we need a projection and a partition of unity. Since the operator $-\Delta$ with domain of definition $\tilde H^1(\Gamma) = H_0^1(\Gamma)$ is positive definite and self-adjoint, we can define $\Lambda = \sqrt{-\Delta}$, which in turn is self-adjoint as an operator from $\tilde H_I^{1/2}(\Gamma)$ to $H^{-1/2}(\Gamma)$. Moreover,
$$\langle\Lambda\xi,\xi\rangle \simeq \|\xi\|_{H_I^{1/2}(\Gamma)}^2 \quad \forall\xi\in\tilde H_I^{1/2}(\Gamma).$$
Let $P_H : \tilde H_I^{1/2}(\Gamma) \longrightarrow V_0$ be the projection defined by the inner product $\langle\Lambda\cdot,\cdot\rangle$, i.e.
$$\langle\Lambda P_H v, w\rangle = \langle\Lambda v, w\rangle \quad \forall v\in\tilde H_I^{1/2}(\Gamma),\ w\in V_0.$$
Using standard arguments one can prove that for any $v\in\tilde H_I^{1/2}(\Gamma)$ the following estimates hold:
$$\|P_H v\|_{H_I^{1/2}(\Gamma)} \lesssim \|v\|_{H_I^{1/2}(\Gamma)} \quad\text{and}\quad \|P_H v - v\|_{L^2(\Gamma)} \lesssim H^{1/2}\|v\|_{H_I^{1/2}(\Gamma)}. \qquad(10.35)$$
w = u − u0 . The fact that supp u j ⊂ Γ j so that u j ∈ V j for j = 1, . . . , J is clear from the definitions of R j and Γj . It is also clear that u = ∑Ji=0 ui . We recall from Lemma 10.10 that b0 (u0 , u0 ) aW (u, u).
(10.36)
Moreover, by using Lemma C.36 and (10.35), we obtain the following estimates:
$$\|u_i\|_{H_I^{1/2}(\Gamma)} \lesssim \|\theta_i w\|_{H_I^{1/2}(\Gamma)}, \qquad(10.37)$$
$$\|w\|_{H_I^{1/2}(\Gamma)} = \|\Pi_h(u - P_H u)\|_{H_I^{1/2}(\Gamma)} \lesssim \|u - P_H u\|_{H_I^{1/2}(\Gamma)} \lesssim \|u\|_{H_I^{1/2}(\Gamma)}, \qquad(10.38)$$
$$\|w\|_{L^2(\Gamma)} = \|\Pi_h(u - P_H u)\|_{L^2(\Gamma)} \lesssim \|u - P_H u\|_{L^2(\Gamma)} \lesssim H^{1/2}\|u\|_{H_I^{1/2}(\Gamma)}. \qquad(10.39)$$
Moreover, by invoking Lemma A.13, we obtain
$$\|\theta_i w\|_{H_I^{1/2}(\Gamma)} \lesssim \|\theta_i w\|_{H_w^{1/2}(R_i)}. \qquad(10.40)$$
Hence, by using successively Lemma B.3, (10.36), (10.37), (10.40), and (10.32), we deduce
$$\sum_{i=0}^J b_i(u_i,u_i) \lesssim \|\Pi_H^* u_0\|_{H_I^{1/2}(\Gamma)}^2 + \sum_{i=1}^J \|u_i\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|u\|_{H_I^{1/2}(\Gamma)}^2 + \sum_{i=1}^J \|\theta_i w\|_{H_I^{1/2}(\Gamma)}^2$$
$$\lesssim \|u\|_{H_I^{1/2}(\Gamma)}^2 + \sum_{i=1}^J \|\theta_i w\|_{H_w^{1/2}(R_i)}^2 \lesssim \|u\|_{H_I^{1/2}(\Gamma)}^2 + \Big(1+\log^2\frac H\delta\Big)\sum_{i=1}^J \|w\|_{H_w^{1/2}(R_i)}^2 + \int_\Gamma \frac{|w(x)|^2}{\operatorname{dist}(x,\partial\Gamma)}\,dx. \qquad(10.41)$$
Using (10.26) and the definition of the weighted norms in Subsection A.2.6.1, we can estimate the last two terms on the right-hand side by
$$\Big(1+\log^2\frac H\delta\Big)\sum_{i=1}^J \|w\|_{H_w^{1/2}(R_i)}^2 + \int_\Gamma \frac{|w(x)|^2}{\operatorname{dist}(x,\partial\Gamma)}\,dx$$
$$= \Big(1+\log^2\frac H\delta\Big)\sum_{i=1}^J \Big(\frac1H\|w\|_{L^2(R_i)}^2 + |w|_{H^{1/2}(R_i)}^2\Big) + \int_\Gamma \frac{|w(x)|^2}{\operatorname{dist}(x,\partial\Gamma)}\,dx$$
$$\lesssim \Big(1+\log^2\frac H\delta\Big)\Big(\frac1H\|w\|_{L^2(\Gamma)}^2 + |w|_{H^{1/2}(\Gamma)}^2 + \int_\Gamma \frac{|w(x)|^2}{\operatorname{dist}(x,\partial\Gamma)}\,dx\Big)$$
$$\lesssim \Big(1+\log^2\frac H\delta\Big)\Big(\frac1H\|w\|_{L^2(\Gamma)}^2 + \|w\|_{H_w^{1/2}(\Gamma)}^2\Big) \lesssim \Big(1+\log^2\frac H\delta\Big)\Big(\frac1H\|w\|_{L^2(\Gamma)}^2 + \|w\|_{H_I^{1/2}(\Gamma)}^2\Big).$$
Inequalities (10.38) and (10.39) then give
$$\Big(1+\log^2\frac H\delta\Big)\sum_{i=1}^J \|w\|_{H_w^{1/2}(R_i)}^2 + \int_\Gamma \frac{|w(x)|^2}{\operatorname{dist}(x,\partial\Gamma)}\,dx \lesssim \Big(1+\log^2\frac H\delta\Big)\|u\|_{H_I^{1/2}(\Gamma)}^2.$$
Hence we deduce from (10.41) the required estimate.
Coercivity of the decomposition: Finally we prove the coercivity of the decomposition in the next lemma.

Lemma 10.19 (Coercivity of decomposition). For any $u\in V$, if $u = \sum_{i=0}^J u_i$ for some $u_i\in V_i$, then
$$a_W(u,u) \lesssim \sum_{i=0}^J b_i(u_i,u_i),$$
where the constant is independent of $u$, $h$, $\delta$, and $H$.

Proof. By construction there are at most $\nu$ subdomains $\tilde\Gamma_i$ to which any $x\in\Gamma$ can belong. (The utmost case happens when $\delta = H$.) A standard colouring argument yields, noting that $u_0 = \Pi_h\Pi_H^* u_0$,
$$a_W(u,u) \simeq \|u\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|u_0\|_{H_I^{1/2}(\Gamma)}^2 + \Big\|\sum_{i=1}^J u_i\Big\|_{H_I^{1/2}(\Gamma)}^2 \lesssim \|\Pi_H^* u_0\|_{H_I^{1/2}(\Gamma)}^2 + \nu\sum_{i=1}^J \|u_i\|_{H_I^{1/2}(\Gamma)}^2 \simeq \sum_{i=0}^J b_i(u_i,u_i).$$
Theorem 10.2. The condition number of the additive Schwarz operator $P_{ad}$ is bounded as
$$\kappa(P_{ad}) \lesssim 1 + \log^2(H/\delta).$$
Proof. The above estimate is a result of Theorem 2.1 and Lemmas 10.18 and 10.19.
10.2.2.2 The hp-version

Subspace decomposition for the hp-version: To define an overlapping preconditioner, we decompose $V$ as in (10.3) with $V_p$ decomposed by (10.4), while $V_h$ is decomposed by subspaces defined on overlapping subdomains $\tilde\Gamma_j$, as in Subsection 10.2.2.1. The following estimate for the condition number can be shown in the same manner as for the h-version; we omit the proof.

Theorem 10.3. The condition number of the additive Schwarz operator $P$ is bounded as
$$\kappa(P) \lesssim \max_{1\le j\le J}\big(1+\log(H_j p_j/\delta)\big)^2,$$
where $p_j := \max_{K\subset\Gamma_j} p_K$.
10.3 The Weakly-Singular Integral Equation

In this section we analyse a non-overlapping method for the weakly-singular integral equation. Recall that the two-level mesh is defined in Subsection 10.1.1. On the fine mesh we define $V$ to be the space of piecewise polynomials of degree $p$ on each element $\Gamma_{ij}$, $i = 1,\dots,J$, $j = 1,\dots,N_i$, i.e.,
$$V := \{v : v|_{\Gamma_{ij}}\in P_p\ \forall i = 1,\dots,J,\ j = 1,\dots,N_i\},$$
and on the coarse mesh we define the space of piecewise constant functions, i.e.,
$$V_0 := \{v\in V : v|_{\Gamma_i}\in P_0,\ i = 1,\dots,J\}.$$
We further define, for $i = 1,\dots,J$ and $j = 1,\dots,N_i$,
$$V_i := \{v\in V : \operatorname{supp} v\subset\overline\Gamma_i,\ \langle v,1\rangle_{L^2(\Gamma_i)} = 0,\ v|_{\Gamma_{ij}}\in P_0,\ j = 1,\dots,N_i\},$$
$$V_{ij} := \{v\in V : \operatorname{supp} v\subset\overline\Gamma_{ij},\ \langle v,1\rangle_{L^2(\Gamma_{ij})} = 0\}.$$
A basis for $V_i$ is the set of tensor products of Haar basis functions, whereas a basis for $V_{ij}$ is the set of tensor products of Legendre polynomials of degree $q = 2,\dots,p$, scaled to the element $\Gamma_{ij}$.
Consider a function $v\in V$. Recalling the mean function defined by (3.13), we define, for $j = 1,\dots,N_i$ and $i = 1,\dots,J$,
$$v_{ij} := v|_{\Gamma_{ij}} - \mu_{\Gamma_{ij}}(v|_{\Gamma_{ij}}), \quad \tilde v_i := \sum_{j=1}^{N_i}\mu_{\Gamma_{ij}}(v|_{\Gamma_{ij}}), \quad v_i := \tilde v_i - \mu_{\Gamma_i}(\tilde v_i), \quad v_0 := \sum_{i=1}^J \mu_{\Gamma_i}(\tilde v_i). \qquad(10.42)$$
Then $v_0\in V_0$, $v_i\in V_i$, $v_{ij}\in V_{ij}$, and
$$v = v_0 + \sum_{i=1}^J v_i + \sum_{i=1}^J\sum_{j=1}^{N_i} v_{ij}. \qquad(10.43)$$
Therefore, the space $V$ can be decomposed as
$$V = V_0 \oplus \bigoplus_{i=1}^J V_i \oplus \bigoplus_{i=1}^J\bigoplus_{j=1}^{N_i} V_{ij}. \qquad(10.44)$$
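The mean-value decomposition (10.42)–(10.43) can be mirrored discretely. In the sketch below (a toy setup with equal-sized elements and subdomains, so each mean $\mu$ reduces to a plain average; the variable names are illustrative, not from the book), a function is represented by point samples per element, and the components $v_{ij}$, $v_i$, $v_0$ are formed and verified against (10.43) together with the zero-mean constraints defining $V_i$ and $V_{ij}$:

```python
import numpy as np

rng = np.random.default_rng(1)
J, N, Q = 4, 3, 5                 # subdomains, elements per subdomain, samples per element
v = rng.standard_normal((J, N, Q))            # samples of v on each element Gamma_ij

elem_mean = v.mean(axis=2, keepdims=True)     # mu_{Gamma_ij}(v|Gamma_ij)
v_ij = v - elem_mean                          # zero mean on each element
v_tilde = np.broadcast_to(elem_mean, v.shape) # piecewise-constant part v_tilde_i
sub_mean = v_tilde.mean(axis=(1, 2), keepdims=True)   # mu_{Gamma_i}(v_tilde_i)
v_i = v_tilde - sub_mean                      # zero mean on each subdomain
v_0 = np.broadcast_to(sub_mean, v.shape)      # coarse, piecewise-constant component

assert np.allclose(v_0 + v_i + v_ij, v)       # reproduces (10.43)
assert np.allclose(v_ij.mean(axis=2), 0)      # v_ij has zero mean on Gamma_ij
assert np.allclose(v_i.mean(axis=(1, 2)), 0)  # v_i has zero mean on Gamma_i
```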
We use the same bilinear form $a_V(\cdot,\cdot)$ on each subspace.

Lemma 10.20 (Coercivity of decomposition). For any $v\in V$ with decomposition (10.43) the following statement is true:
$$a_V(v,v) \lesssim a_V(v_0,v_0) + \sum_{i=1}^J a_V(v_i,v_i) + \sum_{i=1}^J\sum_{j=1}^{N_i} a_V(v_{ij},v_{ij}),$$
where the constant involved is independent of $v$, $H$, $h$, and $p$.

Proof. First we note that $v_i$ and $v_{ij}$ have zero integral mean on $\Gamma_i$ and $\Gamma_{ij}$, respectively. Thanks to Lemma B.3, it suffices to prove
$$\|v\|_{H_I^{-1/2}(\Gamma)}^2 \lesssim \|v_0\|_{H_I^{-1/2}(\Gamma)}^2 + \sum_{i=1}^J \|v_i\|_{H_I^{-1/2}(\Gamma)}^2 + \sum_{i=1}^J\sum_{j=1}^{N_i} \|v_{ij}\|_{H_I^{-1/2}(\Gamma)}^2. \qquad(10.45)$$
This is obtained by first using the Cauchy–Schwarz inequality to have
$$\|v\|_{H_I^{-1/2}(\Gamma)}^2 \lesssim \|v_0\|_{H_I^{-1/2}(\Gamma)}^2 + \Big\|\sum_{i=1}^J v_i\Big\|_{H_I^{-1/2}(\Gamma)}^2 + \Big\|\sum_{i=1}^J\sum_{j=1}^{N_i} v_{ij}\Big\|_{H_I^{-1/2}(\Gamma)}^2$$
and then invoking Theorem A.11 (A.88) to deduce (10.45). The constant depends only on the size of the screen $\Gamma$.

We follow [108] to prove the stability of the decomposition. However, it is necessary to use the weighted norm ${}^*\|\cdot\|_{H_I^{-1/2}(\Omega)}$, which is scalable; see Lemma A.7 and compare it with Lemma A.6.
Lemma 10.21 (Stability of the decomposition). For any $v \in V$ the unique decomposition (10.43) satisfies
$$a_V(v_0,v_0) + \sum_{i=1}^{J} a_V(v_i,v_i) + \sum_{i=1}^{J}\sum_{j=1}^{N_i} a_V(v_{ij},v_{ij}) \lesssim \Big(1 + \log^2\frac{H}{h}\Big)\log^2(1+p)\, a_V(v,v),$$
where the constant involved does not depend on $v$, $H$, $h$, and $p$.

Proof. Again, due to Lemma B.3 it suffices to prove
$$\|v_0\|^{*2}_{H_I^{-1/2}(\Gamma)} + \sum_{i=1}^{J} \|v_i\|^{*2}_{H_I^{-1/2}(\Gamma)} + \sum_{i=1}^{J}\sum_{j=1}^{N_i} \|v_{ij}\|^{*2}_{H_I^{-1/2}(\Gamma)} \lesssim \Big(1 + \log^2\frac{H}{h}\Big)\log^2(1+p)\, \|v\|^{*2}_{H_I^{-1/2}(\Gamma)}.$$
On the other hand, (A.82) and (A.83) imply
$$\|v_i\|^{*}_{H_I^{-1/2}(\Gamma)} \le \|v_i\|^{*}_{H_I^{-1/2}(\Gamma_i)} \qquad\text{and}\qquad \|v_{ij}\|^{*}_{H_I^{-1/2}(\Gamma)} \le \|v_{ij}\|^{*}_{H_I^{-1/2}(\Gamma_{ij})}.$$
Hence it suffices to prove
$$\|v_0\|^{*2}_{H_I^{-1/2}(\Gamma)} + \sum_{i=1}^{J} \|v_i\|^{*2}_{H_I^{-1/2}(\Gamma_i)} + \sum_{i=1}^{J}\sum_{j=1}^{N_i} \|v_{ij}\|^{*2}_{H_I^{-1/2}(\Gamma_{ij})} \lesssim \Big(1 + \log^2\frac{H}{h}\Big)\log^2(1+p)\, \|v\|^{*2}_{H_I^{-1/2}(\Gamma)}. \tag{10.46}$$
Let
$$T_1 := \sum_{i=1}^{J} \|v_i\|^{*2}_{H_I^{-1/2}(\Gamma_i)} \qquad\text{and}\qquad T_2 := \sum_{i=1}^{J}\sum_{j=1}^{N_i} \|v_{ij}\|^{*2}_{H_I^{-1/2}(\Gamma_{ij})}.$$
We will first show that $T_1$ and $T_2$ are bounded by the right-hand side of (10.46), and then show that $\|v_0\|^{*2}_{H_I^{-1/2}(\Gamma)}$ has the same property. Consider $T_1$. Lemma A.7 Part (ii) gives
$$\|v_i\|^{*2}_{H_I^{-1/2}(\Gamma_i)} = H_i^3\, \|\widehat v_i\|^{*2}_{H_I^{-1/2}(\widehat\Gamma_i)} \le H_i^3\, \|\widehat v_i\|^{*2}_{H_I^{-1/2+\varepsilon}(\widehat\Gamma_i)} = H_i^{2\varepsilon}\, \|v_i\|^{*2}_{H_I^{-1/2+\varepsilon}(\Gamma_i)}. \tag{10.47}$$
Now since $v_i = \widetilde v_i - \mu_{\Gamma_i}(\widetilde v_i)$, see (10.42), and $\langle \mu_{\Gamma_i}(\widetilde v_i), \varphi\rangle_{\Gamma_i} = \langle \widetilde v_i, \mu_{\Gamma_i}(\varphi)\rangle_{\Gamma_i}$, we have
$$\|v_i\|^{*}_{H_I^{-1/2+\varepsilon}(\Gamma_i)} = \sup_{\varphi \in H_I^{1/2-\varepsilon}(\Gamma_i)} \frac{\langle \widetilde v_i - \mu_{\Gamma_i}(\widetilde v_i), \varphi\rangle_{\Gamma_i}}{\|\varphi\|^{*}_{H_I^{1/2-\varepsilon}(\Gamma_i)}} = \sup_{\varphi \in H_I^{1/2-\varepsilon}(\Gamma_i)} \frac{\langle \widetilde v_i, \varphi - \mu_{\Gamma_i}(\varphi)\rangle_{\Gamma_i}}{\|\varphi\|^{*}_{H_I^{1/2-\varepsilon}(\Gamma_i)}} \le \|\widetilde v_i\|_{H_I^{-1/2+\varepsilon}(\Gamma_i)}\, \sup_{\varphi \in H_I^{1/2-\varepsilon}(\Gamma_i)} \frac{\|\varphi - \mu_{\Gamma_i}(\varphi)\|_{H_I^{1/2-\varepsilon}(\Gamma_i)}}{\|\varphi\|^{*}_{H_I^{1/2-\varepsilon}(\Gamma_i)}}. \tag{10.48}$$
Lemma A.7 and Lemma A.10 yield
$$\|\varphi - \mu_{\Gamma_i}(\varphi)\|_{H_I^{1/2-\varepsilon}(\Gamma_i)} \lesssim H_i^{1/2+\varepsilon}\, \|\widehat\varphi - \mu_{\widehat\Gamma_i}(\widehat\varphi)\|_{H_I^{1/2-\varepsilon}(\widehat\Gamma_i)} \lesssim \frac{H_i^{1/2+\varepsilon}}{\varepsilon}\, \|\widehat\varphi - \mu_{\widehat\Gamma_i}(\widehat\varphi)\|_{H_S^{1/2-\varepsilon}(\widehat\Gamma_i)}, \tag{10.49}$$
where in the last step we used Theorem A.7 and Theorem A.9. The constants are independent of $H_i$ and $\varepsilon$ for $0 < \varepsilon \le \varepsilon_0 < 1/2$. Since
$$\|\mu_{\widehat\Gamma_i}(\widehat\varphi)\|_{H_S^{1/2-\varepsilon}(\widehat\Gamma_i)} = \big|\langle \widehat\varphi, 1\rangle_{\widehat\Gamma_i}\big|\, \|1\|_{H_S^{1/2-\varepsilon}(\widehat\Gamma_i)} \le \|\widehat\varphi\|_{H_S^{1/2-\varepsilon}(\widehat\Gamma_i)}\, \|1\|_{H_S^{-1/2+\varepsilon}(\widehat\Gamma_i)}\, \|1\|_{H_S^{1/2-\varepsilon}(\widehat\Gamma_i)} \lesssim \|\widehat\varphi\|_{H_S^{1/2-\varepsilon}(\widehat\Gamma_i)},$$
we have
$$\|\widehat\varphi - \mu_{\widehat\Gamma_i}(\widehat\varphi)\|_{H_S^{1/2-\varepsilon}(\widehat\Gamma_i)} \lesssim \|\widehat\varphi\|_{H_S^{1/2-\varepsilon}(\widehat\Gamma_i)},$$
which together with (10.49) implies
$$\|\varphi - \mu_{\Gamma_i}(\varphi)\|_{H_I^{1/2-\varepsilon}(\Gamma_i)} \lesssim \frac{H_i^{1/2+\varepsilon}}{\varepsilon}\, \|\widehat\varphi\|^{*}_{H_I^{1/2-\varepsilon}(\widehat\Gamma_i)} = \frac{1}{\varepsilon}\, \|\varphi\|^{*}_{H_I^{1/2-\varepsilon}(\Gamma_i)},$$
where in the last step we used the scaling property proved in Lemma A.7. Therefore, it follows from (10.47) and (10.48) that
$$\|v_i\|^{*2}_{H_I^{-1/2}(\Gamma_i)} \lesssim \frac{H_i^{2\varepsilon}}{\varepsilon^2}\, \|\widetilde v_i\|^2_{H_I^{-1/2+\varepsilon}(\Gamma_i)}, \tag{10.50}$$
which implies, with the help of Theorem A.10 and Lemma C.18,
$$T_1 \lesssim \frac{H^{2\varepsilon}}{\varepsilon^2}\, \Big\|\sum_{i=1}^{J} \widetilde v_i\Big\|^2_{H_I^{-1/2+\varepsilon}(\Gamma)} \lesssim \frac{1}{\varepsilon^2}\Big(\frac{H}{h}\Big)^{2\varepsilon}\, \Big\|\sum_{i=1}^{J} \widetilde v_i\Big\|^2_{H_I^{-1/2}(\Gamma)}.$$
By choosing
$$\varepsilon = \frac{1}{2\log(H/h)}$$
we deduce
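The choice of $\varepsilon$ here balances the two competing factors: with $L = \log(H/h)$ and $\varepsilon = 1/(2L)$, the quantity $(H/h)^{2\varepsilon}/\varepsilon^2$ equals exactly $4e\,L^2$, which is the source of the $\log^2(H/h)$ growth. A quick numeric confirmation (the values of $H/h$ are illustrative):

```python
import math

# With L = log(H/h) and eps = 1/(2L), the factor (H/h)^(2*eps) / eps^2
# equals 4*e*L^2, i.e. it grows only like log^2(H/h).
for ratio in [4, 16, 256, 4096]:          # sample values of H/h
    L = math.log(ratio)
    eps = 1.0 / (2.0 * L)
    factor = ratio ** (2 * eps) / eps ** 2
    assert math.isclose(factor, 4 * math.e * L * L, rel_tol=1e-12)
    print(ratio, round(factor / L**2, 6))  # the constant 4e for every ratio
```

The same balancing argument, with $p$ in place of $H/h$, yields the choice $\varepsilon = 1/(2\log(1+p^2))$ used for $T_2$ below.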
$$T_1 \lesssim \log^2\frac{H}{h}\, \Big\|\sum_{i=1}^{J} \widetilde v_i\Big\|^2_{H_I^{-1/2}(\Gamma)} \le \log^2\frac{H}{h}\, \Big\|v - \sum_{i=1}^{J}\sum_{j=1}^{N_i} v_{ij}\Big\|^2_{H_I^{-1/2}(\Gamma)} \lesssim \log^2\frac{H}{h}\, \Big( \|v\|^2_{H_I^{-1/2}(\Gamma)} + \sum_{i=1}^{J}\sum_{j=1}^{N_i} \|v_{ij}\|^2_{H_I^{-1/2}(\Gamma_{ij})} \Big).$$
Next we treat $T_2$ in the same manner. Instead of (10.50) we now have, noting (10.42),
$$\|v_{ij}\|^{*2}_{H_I^{-1/2}(\Gamma_{ij})} \lesssim \frac{h_{ij}^{2\varepsilon}}{\varepsilon^2}\, \big\| v|_{\Gamma_{ij}} \big\|^2_{H_I^{-1/2+\varepsilon}(\Gamma_{ij})},$$
so that, using again Theorem A.10 and Lemma C.18,
$$T_2 \lesssim \frac{h^{2\varepsilon}}{\varepsilon^2} \sum_{i=1}^{J}\sum_{j=1}^{N_i} \big\| v|_{\Gamma_{ij}} \big\|^2_{H_I^{-1/2+\varepsilon}(\Gamma_{ij})} \lesssim \frac{h^{2\varepsilon}}{\varepsilon^2}\, \|v\|^2_{H_I^{-1/2+\varepsilon}(\Gamma)} \lesssim \frac{p^{4\varepsilon}}{\varepsilon^2}\, \|v\|^2_{H_I^{-1/2}(\Gamma)}.$$
Choosing
$$\varepsilon = \frac{1}{2\log(1+p^2)}$$
we deduce
$$T_2 \lesssim \log^2(1+p)\, \|v\|^2_{H_I^{-1/2}(\Gamma)}.$$
Therefore,
$$T_1 + T_2 \lesssim \Big(1 + \log^2\frac{H}{h}\Big)\log^2(1+p)\, \|v\|^2_{H_I^{-1/2}(\Gamma)}.$$
By the triangle inequality, we have
$$\|v_0\|^2_{H_I^{-1/2}(\Gamma)} = \Big\| v - \sum_{i=1}^{J} v_i - \sum_{i=1}^{J}\sum_{j=1}^{N_i} v_{ij} \Big\|^2_{H_I^{-1/2}(\Gamma)} \lesssim \|v\|^2_{H_I^{-1/2}(\Gamma)} + T_1 + T_2 \lesssim \Big(1 + \log^2\frac{H}{h}\Big)\log^2(1+p)\, \|v\|^2_{H_I^{-1/2}(\Gamma)}.$$
Finally, we need to prove $\|v\|^2_{H_I^{-1/2}(\Gamma)} \lesssim \|v\|^{*2}_{H_I^{-1/2}(\Gamma)}$. This can be done by using duality arguments together with the scaling properties of the norms $\|\cdot\|_{H_I^{1/2}(\Gamma)}$ and $\|\cdot\|^{*}_{H_I^{1/2}(\Gamma)}$ from Lemma A.7, the constant depending only on $|\Gamma|$. Altogether we obtain (10.46). $\square$
By using the general theory, we deduce from Lemma 10.20 and Lemma 10.21 the following theorem.
Theorem 10.4. The condition number of the additive Schwarz operator $P_{\mathrm{ad}}$ defined by the subspace decomposition (10.44) is bounded by
$$\kappa(P_{\mathrm{ad}}) \lesssim \Big(1 + \log^2\frac{H}{h}\Big)\log^2(1+p).$$
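To get a feel for how such additive Schwarz bounds behave in practice, the following sketch assembles the generic operator $P_{\mathrm{ad}} = \sum_k R_k^T (R_k A R_k^T)^{-1} R_k A$ from restriction matrices and computes its condition number. This is our own illustration, not the book's code: a 1D finite-difference Laplacian stands in for the boundary-element Galerkin matrix, with non-overlapping blocks and a piecewise-constant coarse space.

```python
import numpy as np

# Generic additive Schwarz operator P_ad = sum_k R_k^T (R_k A R_k^T)^{-1} R_k A
# for an SPD matrix A and subspaces encoded by restriction matrices R_k.
def schwarz_condition_number(A, restrictions):
    n = A.shape[0]
    P = np.zeros((n, n))
    for R in restrictions:
        Ak = R @ A @ R.T                        # local (or coarse) stiffness matrix
        P += R.T @ np.linalg.solve(Ak, R @ A)   # R^T A_k^{-1} R A
    ev = np.linalg.eigvals(P).real              # P_ad has a real positive spectrum
    return ev.max() / ev.min()

n, nblocks = 32, 4
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # model SPD matrix
size = n // nblocks
blocks = [np.eye(n)[k * size:(k + 1) * size] for k in range(nblocks)]
coarse = np.zeros((nblocks, n))                 # piecewise-constant coarse space
for k in range(nblocks):
    coarse[k, k * size:(k + 1) * size] = 1.0

print(schwarz_condition_number(A, blocks))             # one-level, no coarse space
print(schwarz_condition_number(A, blocks + [coarse]))  # two-level: much smaller
```

Adding the coarse space (the analogue of $V_0$ in (10.44)) is what removes the growth of the smallest eigenvalue bound, just as in the theory above.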
Remark 10.1. It is noted that [108, Theorem 1] contains the same upper bound as that given in Theorem 10.4. However, the decomposition defined in that article, see [108, Equation (6)], results in an additive Schwarz operator for the p-version on a two-level mesh. (Compare that equation with (10.44).) For the p-version, it is common practice to design additive Schwarz preconditioners on a one-level mesh. Therefore, the method proposed in that article can be considered as an additive Schwarz method for the p-version on a two-level mesh (if we keep H and h fixed), and it can also be considered as an additive Schwarz preconditioner for the h-version with a high polynomial order (if we keep p fixed). The subproblems will be large if all of H, h, and p vary.
10.4 Numerical Results

The following numerical experiments were initially presented in [118] and [230]. The results are reprinted here with copyright permission. As in the case of two-dimensional problems (Chapter 9), the preconditioned conjugate gradient method is used to solve the preconditioned systems, and the Lanczos algorithm (the QR method) is used in conjunction with the PCG to compute their eigenvalues.
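The procedure just described, PCG combined with the Lanczos process to estimate the extreme eigenvalues of the preconditioned operator, can be sketched as follows. This is a generic illustration with a diagonal preconditioner, not the actual BEM code; the Lanczos tridiagonal is assembled from the CG coefficients via the standard identities.

```python
import numpy as np

# PCG that also builds the Lanczos tridiagonal T_k from its alpha/beta
# coefficients; the extreme eigenvalues of T_k estimate those of M^{-1} A,
# hence the condition number of the preconditioned system.
def pcg_with_lanczos(A, b, M_inv, tol=1e-6, maxit=200):
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    alphas, betas = [], []
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        alphas.append(alpha)
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        beta = rz_new / rz
        betas.append(beta)
        p = z + beta * p
        rz = rz_new
    k = len(alphas)
    T = np.zeros((k, k))                 # Lanczos tridiagonal from CG coefficients
    for i in range(k):
        T[i, i] = 1.0 / alphas[i]
        if i > 0:
            T[i, i] += betas[i - 1] / alphas[i - 1]
        if i < k - 1:
            T[i, i + 1] = T[i + 1, i] = np.sqrt(betas[i]) / alphas[i]
    ev = np.linalg.eigvalsh(T)
    return x, ev[0], ev[-1]

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # model SPD system
M_inv = np.diag(1.0 / np.diag(A))                      # diagonal preconditioner
x, lam_min, lam_max = pcg_with_lanczos(A, np.ones(n), M_inv)
print(lam_max / lam_min)   # estimated condition number of M^{-1} A
```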
10.4.1 The hypersingular integral equation

In this section, we report on the numerical results when the non-overlapping additive Schwarz preconditioner is used to solve the hypersingular integral equation on the screen $(-1/2,1/2)^2 \times \{0\}$. It is noted that the entries of the stiffness matrices are computed by analytical recurrence formulae; see [141]. For the CG method with the unpreconditioned system, we computed the stiffness matrices by using two different types of basis functions: the standard basis functions defined in Section 10.1 and the hierarchical basis functions built from antiderivatives of Legendre polynomials; see Section C.3. For the PCG, we used two different bilinear forms on the wire basket spaces $V_W^h$ and $V_W^p$, namely the bilinear form $a_W(\cdot,\cdot)$ and the bilinear forms (10.8) and (10.9). Both the CG and the PCG stop when the residual is less than $10^{-6}$.
Table 10.1 Number of iterations of CG and PCG, hp-version with p = 2. (The column layout of this table could not be fully recovered from the source: the CG counts below are as printed, while the PCG counts, all between 3 and 14, for the bilinear forms aW(·,·) and (10.8)–(10.9) with H/h = 1, 2, 4 could not be unambiguously assigned to columns.)

  1/h   DoF   CG (standard basis)   CG (hierarchical basis)
   2      9           3                      3
   4     49          10                     10
   6    121          25                     16
   8    225          40                     20
  10    361          49                     24
  12    529          56                     27
  14    729          58                     29
  16    961          64                     32
Table 10.2 Number of iterations of CG and PCG, hp-version with 1/h = 3

                      Number of iterations
             CG                              PCG
  p   DoF   Standard basis  Hierarchical basis   aW(·,·)   L²-inner product
  2     5         4                 4                4            3
  3    12         8                 8                7            7
  4    32        25                22               10           10
  5    70        52                47               12           12
  6   127       104                80               13           13
  7   202       156               111               14           14
  8   295       227               151               14           14
10.4.2 The Lamé equation

We finish this chapter with some numerical results for the crack problem (Lamé equation) to demonstrate that the overlapping preconditioner (Theorem 10.2) also works for systems of equations. The equation is of the form (1.7) with g(x, y) = (−y, x, 0).
Table 10.3 Condition number for the Lamé equation (entries not reported in the source are left blank)

                              PCG
                        H = 2h            H = 4h
   DoF     h     CG    δ = h   δ = 2h   δ = h   δ = 2h
   147   1/8    4.14    2.38    4.11
   675   1/16   8.24    2.78    4.27     5.15    9.01
  2883   1/32  16.50    3.88    4.31     5.43    9.01
 11907   1/64  33.07    5.36    5.88     5.56    9.01
Table 10.4 Number of iterations for the Lamé equation (entries not reported in the source are left blank)

                              PCG
                        H = 2h            H = 4h
   DoF     h     CG    δ = h   δ = 2h   δ = h   δ = 2h
   147   1/8     24     13      16
   675   1/16    34     16      19       20      22
  2883   1/32    48     19      21       22      25
 11907   1/64    68     23      23       24      26
Chapter 11
Two-Level Methods: the hp-Version on Triangular Elements
In this chapter we report on the paper [111] by Heuer, Leydecker, and Stephan, which analyses a two-level additive Schwarz method for the hp-version on triangular elements applied to the hypersingular integral equation on a screen Γ .
11.1 Preliminaries

11.1.1 Sobolev spaces of functions vanishing on a part of the boundary of a domain

In this chapter, besides the Sobolev spaces defined in Appendix A, we also use the following Sobolev spaces.

Definition 11.1. Let $\Omega$ be a Lipschitz domain in $\mathbb R^2$ or a bounded interval in $\mathbb R$, and let $\gamma$ be a subset of $\partial\Omega$, the boundary of $\Omega$. (In the case of a bounded interval $\Omega$, the subset $\gamma$ is one of the two endpoints of $\Omega$.) We define
$$\widetilde H^{1/2}(\Omega,\gamma) := \big\{ v \in H^{1/2}(\Omega) : |||v|||_{H^{1/2}(\Omega,\gamma)} < \infty \big\},$$
where
$$|||v|||^2_{H^{1/2}(\Omega,\gamma)} := |v|^2_{H^{1/2}(\Omega)} + \int_\Omega \frac{|v(x)|^2}{\operatorname{dist}(x,\gamma)}\, dx.$$

We note that if $v \in \widetilde H^{1/2}(\Omega,\gamma)$, then $v$ vanishes on $\gamma$ but not necessarily on the whole boundary $\partial\Omega$. This norm is a special case of the weighted norm defined in Section A.2.6.1. A proof similar to that of Lemma A.5 gives
$$|||v|||^2_{H^{1/2}(\Omega,\gamma)} = \tau\, |||\widehat v|||^2_{H^{1/2}(\widehat\Omega,\widehat\gamma)}, \tag{11.1}$$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. E. P. Stephan and T. Tran, Schwarz Methods and Multilevel Preconditioners for Boundary Element Methods, https://doi.org/10.1007/978-3-030-79283-1_11
where $\widehat\Omega = \{\widehat x \in \mathbb R^2 : \widehat x = x/\tau,\ x \in \Omega\}$, $\widehat\gamma = \{\widehat x \in \mathbb R^2 : \widehat x = x/\tau,\ x \in \gamma\}$, and $\widehat v$ is defined by $\widehat v(\widehat x) = v(x)$. Here $\tau = \operatorname{diam}(\Omega)$, and we assume that $\tau < 1$.
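The weighted term in Definition 11.1 is what enforces the vanishing of $v$ on $\gamma$. A small quadrature sketch (our own illustration) on $\Omega = (0,1)$ with $\gamma = \{0\}$, so that $\operatorname{dist}(x,\gamma) = x$, shows this: the integral is finite for $v(x) = x$, which vanishes on $\gamma$, but diverges logarithmically for $v \equiv 1$, which does not.

```python
import numpy as np

# Weighted term of |||v|||_{H^{1/2}(Omega,gamma)} on Omega=(0,1), gamma={0}:
# for v(x) = x the integral int_0^1 v^2/x dx = 1/2 is finite, while for
# v = 1 the truncated integral grows like log(1/eps) as the cutoff eps -> 0.
def trapz(fvals, x):
    return float(np.sum(0.5 * (fvals[1:] + fvals[:-1]) * np.diff(x)))

for eps in [1e-4, 1e-8, 1e-12]:
    x = np.geomspace(eps, 1.0, 200_000)     # log-spaced grid resolving x -> 0
    w = trapz(x**2 / x, x)                  # v(x) = x: stays near 1/2
    d = trapz(1.0 / x, x)                   # v = 1: grows like log(1/eps)
    print(eps, round(w, 4), round(d / np.log(1.0 / eps), 3))
```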
11.1.2 Extension operators

As in Chapter 10, the proposed preconditioner requires the use of special basis functions, namely nodal basis functions, edge basis functions, and interior or bubble functions (associated with vertices, edges, and elements, respectively). The nodal and edge basis functions to be used in this chapter are constructed by extensions of polynomials from edges onto elements. The following specific extension operators will be used. We first define these extensions on the reference triangle. Let $\widehat I := (0,1)$,
$$\widehat T := \big\{ (\widehat x,\widehat y) \in \mathbb R^2 : \widehat x \ge 0,\ \widehat y \ge 0,\ \widehat x + \widehat y \le 1 \big\},$$
$$\mathcal P_p(\widehat I) := \operatorname{span}\big\{ \widehat x^i : i = 0,1,\dots,p \big\}, \qquad \mathcal P_p^0(\widehat I) := \big\{ v \in \mathcal P_p(\widehat I) : v(0) = v(1) = 0 \big\},$$
$$\mathcal P_p(\widehat T) := \operatorname{span}\big\{ \widehat x^i \widehat y^j : i,j = 0,1,\dots,p,\ i+j \le p \big\}.$$

Fig. 11.1 The reference triangle $\widehat T$ with vertices $\widehat v_1 = (0,0)$, $\widehat v_2 = (1,0)$, $\widehat v_3 = (0,1)$ and edges $\widehat I_1$ (bottom), $\widehat I_2$ (hypotenuse), $\widehat I_3$ (left).

The vertices and edges of $\widehat T$ are denoted by $\widehat v_1, \widehat v_2, \widehat v_3$ and $\widehat I_1, \widehat I_2, \widehat I_3$, respectively.
We first recall the following extension operators frequently used in finite element analysis, see [18, 19, 139, 196]:
$$F : \mathcal P_p(\widehat I) \to \mathcal P_p(\widehat T), \qquad F\varphi(\widehat x,\widehat y) := \frac{1}{\widehat y} \int_{\widehat x}^{\widehat x+\widehat y} \varphi(t)\, dt,$$
$$E : \mathcal P_p(\widehat I) \to \mathcal P_p(\widehat T), \qquad E\varphi(\widehat x,\widehat y) := \frac{\widehat x}{\widehat y} \int_{\widehat x}^{\widehat x+\widehat y} \frac{\varphi(t)}{t}\, dt \quad \text{if } \varphi(0) = 0.$$
We now define the operators $E^j : \mathcal P_p^0(\widehat I_j) \to \mathcal P_p(\widehat T)$, $j = 1,2,3$, which extend a polynomial $f \in \mathcal P_p(\widehat I_j)$ vanishing at the endpoints of $\widehat I_j$ into a complete polynomial in $\mathcal P_p(\widehat T)$ vanishing on the other two edges $\widehat I_i$, $i \ne j$. More precisely,
$$E^1 f(\widehat x,\widehat y) := \frac{\widehat x(1-\widehat x-\widehat y)}{\widehat y} \int_{\widehat x}^{\widehat x+\widehat y} \frac{f(t,0)}{t(1-t)}\, dt \quad \text{if } f(0,0) = f(1,0) = 0,$$
$$E^2 f(\widehat x,\widehat y) := \frac{\widehat x\widehat y}{1-\widehat x-\widehat y} \int_{\widehat x}^{1-\widehat y} \frac{f(t,1-t)}{t(1-t)}\, dt \quad \text{if } f(0,1) = f(1,0) = 0,$$
$$E^3 f(\widehat x,\widehat y) := \frac{\widehat y(1-\widehat x-\widehat y)}{\widehat x} \int_{\widehat y}^{\widehat x+\widehat y} \frac{f(0,t)}{t(1-t)}\, dt \quad \text{if } f(0,0) = f(0,1) = 0.$$
The above extension operators require that the given polynomial vanishes at both endpoints of the corresponding edge. We will need other extension operators which only require that the given polynomial vanishes at one endpoint of the edge. For a systematic description, we introduce the index addition $\oplus$, which is addition modulo 3:
$$i \oplus k = i + k \pmod 3, \qquad i,k = 1,2,3. \tag{11.2}$$
Under the convention (11.2), for $j = 1,2,3$, an edge $\widehat I_j$ has as endpoints $\widehat v_j$ and $\widehat v_{j\oplus1}$, while a vertex $\widehat v_j$ is shared by the two edges $\widehat I_j$ and $\widehat I_{j\oplus2}$; see Figure 11.1. For $j \in \{1,2,3\}$, let $i = j$ or $i = j\oplus1$ and
$$\mathcal P_p^i(\widehat I_j) := \big\{ v \in \mathcal P_p(\widehat I_j) : v \text{ vanishes at the endpoint } \widehat v_i \text{ of } \widehat I_j \big\}. \tag{11.3}$$
Two new types of extension operators are introduced: $E_j^j : \mathcal P_p^j(\widehat I_j) \to \mathcal P_p(\widehat T)$ and $E_{j\oplus1}^j : \mathcal P_p^{j\oplus1}(\widehat I_j) \to \mathcal P_p(\widehat T)$. These operators are explicitly defined by
$$E_1^1 f(\widehat x,\widehat y) := \frac{\widehat x}{\widehat y} \int_{\widehat x}^{\widehat x+\widehat y} \frac{f(t,0)}{t}\, dt \quad \text{if } f(\widehat v_1) = f(0,0) = 0,$$
$$E_2^1 f(\widehat x,\widehat y) := \frac{1-\widehat x-\widehat y}{\widehat y} \int_{\widehat x}^{\widehat x+\widehat y} \frac{f(t,0)}{1-t}\, dt \quad \text{if } f(\widehat v_2) = f(1,0) = 0,$$
$$E_2^2 f(\widehat x,\widehat y) := \frac{\widehat y}{1-\widehat x-\widehat y} \int_{\widehat x}^{1-\widehat y} \frac{f(t,1-t)}{1-t}\, dt \quad \text{if } f(\widehat v_2) = f(1,0) = 0,$$
$$E_3^2 f(\widehat x,\widehat y) := \frac{\widehat x}{1-\widehat x-\widehat y} \int_{\widehat x}^{1-\widehat y} \frac{f(t,1-t)}{t}\, dt \quad \text{if } f(\widehat v_3) = f(0,1) = 0,$$
$$E_3^3 f(\widehat x,\widehat y) := \frac{1-\widehat x-\widehat y}{\widehat x} \int_{\widehat y}^{\widehat x+\widehat y} \frac{f(0,t)}{1-t}\, dt \quad \text{if } f(\widehat v_3) = f(0,1) = 0,$$
$$E_1^3 f(\widehat x,\widehat y) := \frac{\widehat y}{\widehat x} \int_{\widehat y}^{\widehat x+\widehat y} \frac{f(0,t)}{t}\, dt \quad \text{if } f(\widehat v_1) = f(0,0) = 0.$$
We note that
$$E^1 f(\widehat x,\widehat y) = (1-\widehat x-\widehat y)\, E_1^1 f(\widehat x,\widehat y) + \widehat x\, E_2^1 f(\widehat x,\widehat y),$$
$$E^2 f(\widehat x,\widehat y) = \widehat x\, E_2^2 f(\widehat x,\widehat y) + \widehat y\, E_3^2 f(\widehat x,\widehat y), \tag{11.4}$$
$$E^3 f(\widehat x,\widehat y) = \widehat y\, E_3^3 f(\widehat x,\widehat y) + (1-\widehat x-\widehat y)\, E_1^3 f(\widehat x,\widehat y).$$
Moreover, for $j = 1,2,3$,
$$E^j f = 0 \ \text{on}\ \widehat I_{j\oplus1} \cup \widehat I_{j\oplus2}, \qquad E_j^j f = 0 \ \text{on}\ \widehat I_{j\oplus2}, \qquad E_{j\oplus1}^j f = 0 \ \text{on}\ \widehat I_{j\oplus1}. \tag{11.5}$$
It is easy to see that all the operators defined above are extension operators, extending polynomials of degree $p$ on an edge into polynomials of degree $p$ on $\widehat T$. Furthermore, all the operators which deal with polynomials that vanish at only one vertex, namely $E_j^j$ and $E_{j\oplus1}^j$ for $j = 1,2,3$, are linear transformations of the operator $E = E_1^1$. To see that $E_1^1 f$ is the extension of $f$ (defined on $\widehat I_1$) we use the mean value theorem to obtain
$$E_1^1 f(\widehat x,\widehat y)\big|_{\widehat I_1} = \lim_{\widehat y\to0} E_1^1 f(\widehat x,\widehat y) = \lim_{\widehat y\to0} \frac{\widehat x}{\widehat y} \int_{\widehat x}^{\widehat x+\widehat y} \frac{f(t,0)}{t}\, dt = f(\widehat x,0) \qquad \forall\, \widehat x \in [0,1].$$
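A quick numeric sanity check (our own sketch) of the extension property of $E_1^1$: for a polynomial $f$ with $f(0) = 0$, the quotient $f(t)/t$ is again a polynomial, so the integral in the definition can be evaluated exactly via its antiderivative; the restriction to $\widehat I_1$ recovers $f$, while the restriction to $\widehat I_3$ is zero.

```python
import numpy as np

# E_1^1 f(x, y) = (x/y) * int_x^{x+y} f(t)/t dt  for f with f(0) = 0.
f = np.polynomial.Polynomial([0.0, 2.0, -1.0, 0.5])   # f(0) = 0
g = np.polynomial.Polynomial(f.coef[1:])              # g(t) = f(t)/t, a polynomial
G = g.integ()                                         # exact antiderivative of g

def E11(x, y):
    return x / y * (G(x + y) - G(x))

xs = np.linspace(0.05, 0.9, 10)
# restriction to I_1: E11(x, y) -> f(x) as y -> 0
print(np.max(np.abs(E11(xs, 1e-6) - f(xs))))   # small, O(y)
# E11 vanishes on the edge I_3 = {x = 0}, cf. (11.5)
print(E11(0.0, 0.5))                           # 0.0
```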
The main results concerning these operators are shown in the next theorems where bounds in different norms are obtained. The following theorem states the bounds in the L2 -norm.
Theorem 11.1. For $j = 1,2,3$, the following estimates hold:
$$\|E_j^j f\|_{L^2(\widehat T)} \lesssim \|f\|_{L^2(\widehat I_j)} \qquad \forall f \in \mathcal P_p^j(\widehat I_j), \tag{11.6}$$
$$\|E_{j\oplus1}^j f\|_{L^2(\widehat T)} \lesssim \|f\|_{L^2(\widehat I_j)} \qquad \forall f \in \mathcal P_p^{j\oplus1}(\widehat I_j), \tag{11.7}$$
$$\|E^j f\|_{L^2(\widehat T)} \lesssim \|f\|_{L^2(\widehat I_j)} \qquad \forall f \in \mathcal P_p^0(\widehat I_j). \tag{11.8}$$
Moreover,
$$\|E_j^j f\|_{L^2(\widehat I_{j\oplus1})} \lesssim \|f\|_{L^2(\widehat I_j)} \qquad \forall f \in \mathcal P_p^j(\widehat I_j), \tag{11.9}$$
$$\|E_{j\oplus1}^j f\|_{L^2(\widehat I_{j\oplus2})} \lesssim \|f\|_{L^2(\widehat I_j)} \qquad \forall f \in \mathcal P_p^{j\oplus1}(\widehat I_j). \tag{11.10}$$
The constants are independent of $f$ and $p$.
Proof. We prove the results for $j = 1$ only. The other cases can be proved similarly. For the reader's convenience, we first restate (11.6)–(11.8) for this value of $j$:
$$\|E_1^1 f\|_{L^2(\widehat T)} \lesssim \|f\|_{L^2(\widehat I_1)} \qquad \forall f \in \mathcal P_p(\widehat I_1),\ f(\widehat v_1) = 0,$$
$$\|E_2^1 f\|_{L^2(\widehat T)} \lesssim \|f\|_{L^2(\widehat I_1)} \qquad \forall f \in \mathcal P_p(\widehat I_1),\ f(\widehat v_2) = 0,$$
$$\|E^1 f\|_{L^2(\widehat T)} \lesssim \|f\|_{L^2(\widehat I_1)} \qquad \forall f \in \mathcal P_p(\widehat I_1),\ f(\widehat v_1) = f(\widehat v_2) = 0.$$
By definition,
$$\|E_1^1 f\|^2_{L^2(\widehat T)} = \int_0^1 \int_0^{1-\widehat x} \Big( \frac{\widehat x}{\widehat y} \int_{\widehat x}^{\widehat x+\widehat y} \frac{f(t,0)}{t}\, dt \Big)^2 d\widehat y\, d\widehat x \le \int_0^1 \int_0^{1-\widehat x} \Big( \frac{1}{\widehat y} \int_{\widehat x}^{\widehat x+\widehat y} |f(t,0)|\, dt \Big)^2 d\widehat y\, d\widehat x,$$
where we used $(\widehat x/t)^2 \le 1$ for $\widehat x \le t \le \widehat x+\widehat y$. Invoking Lemma C.52 we obtain (11.6). Analogous arguments yield (11.7) and (11.8).

Next we prove (11.9) and (11.10) for $j = 1$, which now become
$$\|E_1^1 f\|_{L^2(\widehat I_2)} \lesssim \|f\|_{L^2(\widehat I_1)} \qquad \forall f \in \mathcal P_p(\widehat I_1),\ f(\widehat v_1) = 0, \tag{11.11}$$
$$\|E_2^1 f\|_{L^2(\widehat I_3)} \lesssim \|f\|_{L^2(\widehat I_1)} \qquad \forall f \in \mathcal P_p(\widehat I_1),\ f(\widehat v_2) = 0. \tag{11.12}$$
We first prove (11.11). We have
$$\|E_1^1 f\|^2_{L^2(\widehat I_2)} = \int_0^1 |E_1^1 f(1-t,t)|^2\, dt = \int_0^1 \Big| \frac{1-t}{t} \int_{1-t}^{1} \frac{f(s,0)}{s}\, ds \Big|^2 dt = \int_0^1 \Big| \frac{1-t}{t} \int_0^t \frac{f(1-s,0)}{1-s}\, ds \Big|^2 dt \le \int_0^1 \frac{1}{t^2} \Big( \int_0^t |f(1-s,0)|\, ds \Big)^2 dt.$$
Invoking Lemma C.52 we deduce
$$\|E_1^1 f\|^2_{L^2(\widehat I_2)} \lesssim \int_0^1 |f(1-s,0)|^2\, ds = \|f\|^2_{L^2(\widehat I_1)}.$$
Next we prove (11.12). We have, by using again Lemma C.52,
$$\|E_2^1 f\|^2_{L^2(\widehat I_3)} = \int_0^1 |E_2^1 f(0,\widehat y)|^2\, d\widehat y = \int_0^1 \Big| \frac{1-\widehat y}{\widehat y} \int_0^{\widehat y} \frac{f(t,0)}{1-t}\, dt \Big|^2 d\widehat y \le \int_0^1 \frac{1}{\widehat y^2} \Big( \int_0^{\widehat y} |f(t,0)|\, dt \Big)^2 d\widehat y \lesssim \|f\|^2_{L^2(\widehat I_1)}.$$
Results for the other values of $j$ can be obtained similarly. $\square$
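The repeated appeals to Lemma C.52 above use it in the form of Hardy's inequality, $\int_0^1 \big( t^{-1}\int_0^t |g(s)|\,ds \big)^2 dt \le 4 \int_0^1 g(s)^2\, ds$; this is our reading of that lemma, whose precise statement is in Appendix C. A numeric spot check on random polynomial samples:

```python
import numpy as np

# Spot check of Hardy's inequality:
#   int_0^1 ( (1/t) * int_0^t |g(s)| ds )^2 dt  <=  4 * int_0^1 g(s)^2 ds
rng = np.random.default_rng(2)
N = 200_000
dt = 1.0 / N
t = (np.arange(N) + 1) * dt          # grid avoiding t = 0
for _ in range(5):
    coef = rng.standard_normal(4)
    g = np.abs(np.polyval(coef, t))  # |g| for a random cubic polynomial
    avg = np.cumsum(g) * dt / t      # (1/t) * int_0^t |g(s)| ds (Riemann sum)
    lhs = np.sum(avg**2) * dt
    rhs = 4.0 * np.sum(g**2) * dt
    assert lhs <= rhs
print("Hardy bound holds on all samples")
```

The constant 4 is sharp in general, but for bounded integrands such as the polynomials appearing in the proofs there is ample slack, which is why the estimates above absorb it into the generic constant.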
The next theorem states the bounds for the extension operators in the Sobolev norm of Definition 11.1.

Theorem 11.2. For $j = 1,2,3$, the following estimates hold:
$$|||E_j^j f|||_{H^{1/2}(\widehat T,\widehat I_{j\oplus2})} \lesssim (1+\log p)^{1/2}\, \|f\|_{L^2(\widehat I_j)} \qquad \forall f \in \mathcal P_p^j(\widehat I_j), \tag{11.13}$$
$$|||E_{j\oplus1}^j f|||_{H^{1/2}(\widehat T,\widehat I_{j\oplus1})} \lesssim (1+\log p)^{1/2}\, \|f\|_{L^2(\widehat I_j)} \qquad \forall f \in \mathcal P_p^{j\oplus1}(\widehat I_j), \tag{11.14}$$
$$|||E^j f|||_{H^{1/2}(\widehat T,\widehat I_{j\oplus1}\cup\widehat I_{j\oplus2})} \lesssim (1+\log p)^{1/2}\, \|f\|_{L^2(\widehat I_j)} \qquad \forall f \in \mathcal P_p^0(\widehat I_j). \tag{11.15}$$
The constants are independent of $f$ and $p$. (We recall (11.2) for the modular addition $\oplus$.)

Proof. This theorem is proved in [110, Theorem 3.1]. We present the proof here for completeness, but only for the case $j = 1$. For convenience, we rewrite (11.13)–(11.15) for this case:
$$|||E_1^1 f|||_{H^{1/2}(\widehat T,\widehat I_3)} \lesssim (1+\log p)^{1/2}\, \|f\|_{L^2(\widehat I_1)} \qquad \forall f \in \mathcal P_p(\widehat I_1),\ f(\widehat v_1) = 0, \tag{11.16}$$
$$|||E_2^1 f|||_{H^{1/2}(\widehat T,\widehat I_2)} \lesssim (1+\log p)^{1/2}\, \|f\|_{L^2(\widehat I_1)} \qquad \forall f \in \mathcal P_p(\widehat I_1),\ f(\widehat v_2) = 0, \tag{11.17}$$
$$|||E^1 f|||_{H^{1/2}(\widehat T,\widehat I_2\cup\widehat I_3)} \lesssim (1+\log p)^{1/2}\, \|f\|_{L^2(\widehat I_1)} \qquad \forall f \in \mathcal P_p(\widehat I_1),\ f(\widehat v_1) = f(\widehat v_2) = 0. \tag{11.18}$$
We first prove (11.16). Recalling Definition 11.1 and noting that dist( x, I3 ) = x for any x = ( x, y) ∈ T , we have
11.1 Preliminaries
249
((( 1 (((2 (((E1 f ((( 1/2 H
= |E11 f |2H 1/2 (T) + (T,I ) 3
T
|E11 f ( x)|2 d x. x
(11.19)
We will estimate the H 1/2 -seminorm and the L2 -weighted norm on the right-hand side separately. It is proved in [110, Lemmas 3.3 and 3.5] that, for f ∈ P p (I1 ) satisfying f (0, 0) = 0 and γ = {(0, 0)}, E11 f H 1 (T) f H 1/2 (I1 ,γ ) ,
E11 f L2 (T) (1 + log p) f H −1/2 (I1 ) .
Interpolation then gives
E11 f H 1/2 (T) (1 + log p)1/2 f L2 (I1 )
(11.20)
so that |E11 f |H 1/2 (T) is bounded by the right-hand side of (11.16). To estimate the weighted L2 -norm in (11.19) we note that
T
|E11 f ( x)|2 d x= x
0
Invoking Lemma C.52 we deduce
T
|E11 f ( x)|2 d x≤4 x
1 1
⎞2 ⎛ x+y x ⎝ f (t, 0) ⎠ dt d y d x. t y2
1− x 1 0
| f (t, 0)| dt d x=4 2
0 x
(11.21)
x
1 0
t| f (t, 0)|2 dt ≤ 4 f 2L2 (I ) , (11.22) 1
proving (11.16). This result and a linear transformation yield (11.17). To prove (11.18), the techniques proposed in [32] are used. Recalling (11.4) it suffices to prove the required estimate of (1 − x− y)E11 and xE21 . We will prove only for the first term as the second can be dealt with analogously. Noting that 7 8 7 | x + y− 1| 8 √ , x , dist( x, I2 ∪ I3 ) = min dist( x, I2 ), dist( x, I3 ) = min 2
we have, recalling Definition 11.1,
((( ((( ( ( ((((1 − x− y)E11 f (((2 1/2 ≤ ((1 − x− y)E11 f (2 1/2 + A1 + A2 H (T ,I ∪I ) H (T ) 2
where
3
(11.23)
250
11 Two-Level Methods: the hp-Version on Triangular Elements
A1 :=
A2 :=
(2 √ (( 2 (1 − x− y)E 1 f ( x, y)( 1
T
| x + y− 1|
((
d x d y, (11.24)
(2 (1 − x− y)E11 f ( x, y)( d x d y. x
T
We will prove that each term on the right-hand side of (11.23) is bounded by (1 + log p) f 2L2 (I ) . Consider the first term. Note that 1
( (2 ( ( x, y) − (1 − x − y )E11 f ( x , y )( ((1 − x− y)E11 f ( ( ( = ((1 − x− y) E11 f ( x, y) − E11 f ( x , y ) (2 ( x , y )( + (1 − x− y) − (1 − x − y ) E11 f ( ( (2 ( ( (E11 f ( x, y) − E11 f ( x , y )( + | x − x + y− y |2 |E11 f ( x , y )|2 .
As a consequence, the definition of the H 1/2 -seminorm and Fubini Theorem give ( ( ((1 − x− y)E11 f (2 1/2 |E11 f |2 1/2 H (T ) H (T) +
1− y 1 1− 1 y
0
0
0
0
= |E11 f |2H 1/2 (T) + where x , y ) := A3 (
| x − x + y− y |2 |E11 f ( x , y )|2 d x d y d xd y 2 2 |( x − x ) + ( y − y ) |3/2 1 1− y
0
0
A3 ( x , y )|E11 f ( x , y )|2 d x d y
1− y 1 0
| x − x + y− y |2 d x d y. |( x − x )2 + ( y − y )2 |3/2
0
The elementary inequality (a + b)2 ≤ 2(a2 + b2 ) gives
x , y ) ≤ A3 ( =
≤
1− y 1 0
T
B2
0
2 d x d y
|( x − x )2 + ( y − y )2 |1/2 2 d x d y
|( x − x )2 + ( y − y )2 |1/2
( x , y )
2 d x d y
|( x − x )2 + ( y − y )2 |1/2
= 4π
11.1 Preliminaries
251
for all ( x , y ) ∈ T, where B2 ( x , y ) is the disk centered at ( x , y ) and having radius 2, and where in the last step we changed to polar coordinates to calculate the integral. Consequently, ( ( ((1 − x− y)E11 f (2 1/2 |E11 f |2 1/2 + E11 f 2 2 H (T ) H (T) L (T) (1 + log p) f 2L2 (I ) 1
where in the last step we used (11.20). For the second term on the right-hand side of (11.23), we note that A1 E11 f 2L2 (T) ≤ E11 f 2H 1/2 (T)
so that A1 satisfies the required estimate due to (11.20). It remains to prove the same estimate for A2 . We have, noting that 0 ≤ 1 − x− y ≤ 1, A2 =
0
≤
⎞2 ⎛ x+y f (t, 0) ⎠ x(1 − x− y)2 ⎝ dt d x d y y2 t
1− y 1 0
1− y 1 0
⎛ x+y
x ⎝ y2
0 f 2L2 (I ) 1
x
x
⎞2 f (t, 0) ⎠ dt d x d y t
where in the last step we used (11.21) and (11.22). This proves (11.18) completing the proof for j = 1. Similar arguments can be done for j = 2, 3. In the remainder of this section, for any f ∈ C(∂ T), the set of continuous function defined on ∂ T, we denote by fi the restriction of f on the edge Ii , i = 1, 2, 3. In order to construct the basis functions (Section 11.1.3) we need another type of extension operators which extend a continuous function defined on ∂ T whose restriction of one side vanishes and whose restrictions on the other two sides are polynomials of degree at most p. The next theorem defines these extension operators and provides the bounds in the L2 -norm. Theorem 11.3. For j = 1, 2, 3, let 3 4 P p (∂ T, Ij ) := f ∈ C(∂ T) : f j = 0, fi ∈ P p (Ii ), i ∈ {1, 2, 3}, i = j . We define E j : P p (∂ T, Ij ) −→ P p (T) by ( E j f := E jj2 f j2 + E j1 f j1 − E jj2 f j2 (I j1
∀ f ∈ P p (∂ T, Ij ).
Then E j f is the extension of f to T, i.e., the restriction of E j f on ∂ T is f . Moreover, E j f L2 (T) f j1 L2 (I
j1 )
+ f j2 L2 (I
j2 )
.
252
and
11 Two-Level Methods: the hp-Version on Triangular Elements
|E j f |H 1/2 (T) (1 + log p)1/2 f j1 L2 (I
j1
The constant is independent of f and p.
) + f j2 L2 (I
j2 )
.
Proof. To fix ideas, we explicitly write down the extension formula and proof for the case when j = 2, i.e., the extended function vanishes on the side I2 of ∂ T; see Figure 11.1. The operator E 2 : P p (∂ T, I2 ) −→ P p (T) is defined by ( ∀ f ∈ P p (∂ T, I2 ), E 2 f := E21 f1 + E 3 f3 − E21 f1 (I 3
and the required estimates are
and
E 2 f L2 (T) f1 L2 (I1 ) + f3 L2 (I3 )
(11.25)
|E 2 f |H 1/2 (T) (1 + log p)1/2 f1 L2 (I1 ) + f3 L2 (I3 ) .
The idea behind the definition of E 2 is in order to extend f to T, we extend f1 and f3 adjusted to avoid double counting on I3 . Firstly, since f1 ∈ P2p (I1 ), see (11.3) and Figure 11.1, E21 f1 is well defined. Next, due to the continuity of f , we have
f3 (v1 ) − E21 f1 (v1 ) = f (v1 ) − f (v1 ) = 0 and f3 (v3 ) − E21 f1 (v3 ) = 0 − 0 = 0. ( Thus E 3 f3 − E21 f1 (I is also well defined. Clearly E 2 f ∈ P p (T). The restric3 tion of E 2 f on Ii equals fi , i = 1, 2, 3, due to (11.5). In order to prove (11.25) we use the triangle inequality and invoke Theorem 11.1, using in particular (11.7), (11.8), and (11.10) to obtain ' ( ' ' ' E 2 f L2 (T) ≤ E21 f1 L2 (T) + E 3 f3 L2 (T) + 'E 3 E21 f1 (I ' 2 3 L (T) ' 1 ( ' f1 L2 (I1 ) + f3 L2 (I3 ) + ' E2 f1 (I 'L2 (I ) 3
3
f1 L2 (I1 ) + f3 L2 (I3 ) + f1 L2 (I1 ) .
Finally, we invoke Theorem 11.2 to have
( ( (( ( |E 2 f |H 1/2 (T) ≤ |E21 f1 |H 1/2 (T) + |E 3 f3 |H 1/2 (T) + (E 3 E21 f1 (I ( 1/2 3 H (T) ' ' ( (1 + log p)1/2 f1 L2 (I1 ) + f3 L2 (I3 ) + ' E21 f1 (I 'L2 (I ) 3 3 1/2 f1 L2 (I1 ) + f3 L2 (I3 ) (1 + log p)
where in the last step we used (11.10).
11.1 Preliminaries
253
Using Theorem 11.2 and Theorem 11.3, we can prove the following result, which is proved in [110]. Corollary 11.1. Let T be a triangle and let γ be one of its sides or the union of any two sides. If f is a continuous function defined on ∂ T which vanishes on γ and is a polynomial of degree up to p on each remaining side in ∂ T \ γ , then there exists an extension U ∈ P p (T ) such that U = f on ∂ T and |||U|||H 1/2 (T,γ ) (1 + log p)1/2 f L2 (∂ T ) .
The constant is independent of f , p, and the size of T . Here the norm |||·|||H 1/2 (T,γ ) is defined in Definition 11.1. Proof. The result initially proved in [110, Theorem 2.1] is just a consequence of Theorem 11.2, Theorem 11.3 and a scaling argument. First we note the scaling properties, see (11.1), ((( (((2 ((( 1/2 |||U|||2H 1/2 (T,γ ) = τ (((U H (T , γ)
and
f 2L2 (∂ T ) = τ f2L2 (∂ T) ,
where τ is the diameter of T . (Here, for any entity, the hat version is its corresponding entity on the reference triangle T.) Hence, it suffices to prove ((( ((( ((( ((( (1 + log p)1/2 fL2 (∂ T) . (((U ((( 1/2 H
(T, γ)
It also suffices to consider two cases: γ = I2 and γ = I2 ∪ I3 . We denote by fi the restriction of f on Ii , i = 1, 2, 3. := E 2 f where E 2 is defined in Theorem 11.3 In the case γ = I2 we define U which also gives the required estimate. Consider next the case when γ = I2 ∪ I3 . Then f2 ≡ 0 and f3 ≡ 0 while f1 vanishes := E 1 f1 and obtain the required v2 of I1 . Thus we define U at the endpoints v1 and result from (11.15) in Theorem 11.2, noting that f1 L2 (I1 ) ≤ fL2 (∂ T) .
11.1.3 Construction of basis functions Vertex, edge, and interior basis functions are presented in detail in e.g., the book by Babuˇska and Szab´o [215]. In this subsection we present how they are constructed as extensions of polynomials on edges. The edge basis functions are defined as extensions of polynomials defined on an edge, vanishing at the endpoints of that edge. In Figure 11.2(a), a polynomial defined on I vanishing at the endpoints of I is extended to be a piecewise polynomial on T1 ∪ T2 which vanishes at the non-common edges of these two triangles, so that it can be continuously extended by zero outside T1 ∪ T2 . The vertex basis functions are defined on patches as in Figure 11.2(b). Their values at the nodal points on the edges of a patch (the skeleton) are given. These
254
11 Two-Level Methods: the hp-Version on Triangular Elements
values include the value 1 at the center node and 0 at the nodes on the boundary of the patch.
T1
I T2
(a)
(b)
Fig. 11.2 Patches of elements
For a given triangle T , affine transformations of the basis functions defined above on the reference triangle T are used to span the polynomial space P p (T ). Therefore, it suffices to establish all required ingredients on the reference triangle T; see Figure 11.1. 11.1.3.1 Construction of preliminary vertex basis functions We first define three affine mappings α j that maps [0, 1] onto the edges Ij of T (including the endpoints), j = 1, 2, 3,
α1 (t) = (t, 0),
α2 (t) = (1 − t,t),
α3 (t) = (0, 1 − t),
t ∈ [0, 1].
The construction of vertex basis functions uses the special low energy functions φ0 and φ0− defined in Section C.4.3 which were initially introduced in [169]. v j of T, j = 1, 2, 3, we first conTo define the vertex basis function φv j at the vertex struct a function f defined on the boundary ∂ T of T as follows. Let f j , j = 1, 2, 3, be the restriction of f onto Ij . Then we define f j (α j (t)) = φ0 (t),
f j1 (α j1 (t)) = 0,
f j2 (α j2 (t)) = φ0− (t),
t ∈ [0, 1].
11.1 Preliminaries
255
The preliminary vertex basis function φv j is defined by
φv j := E j1 f
where the extension operator E j1 is defined in Theorem 11.3. As an example, the preliminary vertex basis function φv1 is defined by
φv1 := E 2 f
where the restrictions of f on Ij are given by f1 (t, 0) = φ0 (t), f2 (1 − t,t) = 0, and f3 (0, 1 − t) = φ0− (t) or f3 (0,t) = φ0 (t), t ∈ [0, 1]. In Subsection 11.1.4 we will modify these preliminary basis functions to ensure the constant reproduction of the space.
11.1.3.2 Construction of preliminary edge basis functions Recalling the definition of the antiderivative Lq of the Legendre polynomial Lq−1 in (C.4), we consider the affine image LIi ,q of Lq onto the edge Ii of the reference triangle T, i = 1, 2, 3. These affine images vanish at the vertices of T. Therefore we can extend them by using the extension operators E i , i = 1, 2, 3, to define the edge basis functions. More precisely, we define
φIi ,q := E i LIi ,q ,
q = 1, . . . , p − 1.
There are p − 1 basis functions on each edge, and altogether there are 3(p − 1) edge basis functions on T. 11.1.3.3 Construction of preliminary interior basis functions The interior (or bubble) basis functions on the reference triangle T are defined as tensor products of Lq by Lq (2 x − 1) Lm (2 y − 1) (1 − x− y), φq,m ( x, y) = 1 − x 1 − y
q ≥ 2, m ≥ 2, q + m ≤ p + 1.
We note that φq,m vanishes on the boundary of the reference element T and that it is a polynomial of degree q+m−1 with q+m = 4, . . . , p+1. There are (p−1)(p−2)/2 such interior functions. Altogether, there are (p + 1)(p + 2)/2 basis functions for P p (T). A sample set of vertex, edge and interior basis functions is depicted in Figure 11.3.
256
11 Two-Level Methods: the hp-Version on Triangular Elements
0,025 0,1
1,0 0,75
0,0
0,05
0,5
−0,025
0,0
0,25 −0,05
−0,05
0,0 −0,25 1,0
1,0 0,75
0,75 y
0,5
0,5 0,25
0,25 0,0
−0,1 1,0
1,0
y
x
−0,075
0,75
0,75 0,5
0,5 0,25
0,25 0,0
0,0
1,0
x
0,0
0,75
0,75 0,5 y
0,25
0,25 0,0 0,0
1,0
0,5 x
Fig. 11.3 Sample basis functions (vertex, edge, interior) for p = 4.
11.1.3.4 Representation of a polynomial by preliminary vertex, edge, and interior components By using the preliminary basis functions φvi , φIi ,q , and φk,l , we can represent any function u ∈ P p (T) uniquely by 3
3
i=1
i=1
u = ∑ uvi + ∑ uIi + uT
(11.26)
where uvi , uIi , and uT are the vertex, edge, and interior components, respectively, defined by uvi ∈ span{φvi }, i = 1, 2, 3, 3 4 uIi ∈ span φIi ,q : q = 2, . . . , p , i = 1, 2, 3, 3 4 uT ∈ span φk,l : k ≥ 2, l ≥ 2, k + l ≤ p + 1 .
. The set of all vertices and edges is called the wire basket and is denoted by W The space generated by all vertex and edge basis functions is called the wire basket space. Any function of the wire basket space is uniquely defined by its values on the . An interpolation operator I from P p (T) onto the space of wire wire basket W W basket functions is defined by 3
3
i=1
i=1
IW u := uW := ∑ uvi + ∑ uIi
Therefore we can represent any u ∈ P p (T) by
u = uW + uT .
∀u ∈ P p (T).
(11.27)
(11.28)
11.1 Preliminaries
257
11.1.4 Change of basis functions and change of the wire basket space Since the wire basket space does not contain constants on T we redefine the preliminary vertex and edge functions as follows. Writing the representation (11.28) for the constant function 1, we have 1T = 1W + 1T . 1 = IW 1 +
(11.29)
IW u := IW u + 1T μ∂ T (u) ∀u ∈ P p (T)
(11.30)
We define a new interpolation operator by
where μ∂ T (u) := In particular,
is the integral mean of u over the boundary ∂ T of T.
∂ T u/|∂ T |
1T μ∂ T (φvi ), IW φvi = φvi + 1T μ∂ T (φIi ,q ), IW φIi ,q = φIi ,q + IW φk,l = 0,
where we recall that φk,l vanishes on ∂ T. We note that since 1T vanishes on ∂ T, vi , while IW φIi ,q , the interpolant IW φvi , as φvi , vanishes on the edge opposite to as φIi ,q , vanishes on ∂ T \ Ii . Hence we can define the new basis functions for P p (T) as φvi := IW φvi , φIi ,q := IW φIi ,q , φk,l := φk,l .
The new wire basket space is now the span of φvi and φIi ,q , i = 1, 2, 3, q = 2, . . . , p, which is also the range of the operator IW . The new vertex and edge components of u with respect to the new basis functions are denoted by uvi and uIi , i = 1, . . . , 3. They are images under IW of the preliminary components uvi and uIi ; see (11.26). Meanwhile, the new interior component of u is defined by uT := uT − 1T μ∂ T (IW u).
(11.31)
We note that both uT and 1T are linear combinations of φk,l while μ∂ T (IW (u)) is a constant, so uT is also a linear combination of φk,l . Consequently, if u has the decomposition (11.26), then it also has a unique decomposition with respect to the new basis functions: 3 3 3 3 1 μ ( 1 μ ( uv ) + ∑ u − ∑ u ) + u u = ∑ uv − ∑ T
i
i=1
= IW u + uT
i=1
∂T
i
i=1
Ii
T
i=1
∂T
Ii ,q
T
(11.32)
258
11 Two-Level Methods: the hp-Version on Triangular Elements
noting that 3
3
i=1
i=1
∑ μ∂ T (uvi ) + ∑ μ∂ T (uIi ,q ) = μ∂ T (IW (u)).
In particular, writing the representations (11.31) and (11.32) for the constant function 1, we have 1T − 1T μ∂ T (IW 1) = 0 1T =
and thus
IW 1( x) = 1W ( x) = 1
∀ x ∈ cl(T)
(11.33)
where cl(T) denotes the closure of T. In other words, the operator IW reproduces constant functions.
11.1.5 The wire basket in three dimensions

For technical reasons, we sometimes need to extend polynomials defined on two-dimensional surfaces onto three-dimensional domains. This subsection introduces the concepts of vertex functions and edge functions in three dimensions. In the remainder of this chapter, we denote by $\widehat\Omega$ the reference tetrahedron
$$\widehat\Omega := \big\{(x,y,z) \in \mathbb{R}^3 : 0 \le x, y, z \le 1,\ x + y + z \le 1\big\}$$
and define for integer $p$
$$\mathcal P_p(\widehat\Omega) := \operatorname{span}\big\{x^i y^j z^k : 0 \le i + j + k \le p\big\}.$$
Not to be confused with the planar case, the vertices of $\widehat\Omega$ are denoted by $\widehat w_i$, $i = 1, \dots, 4$, the edges by $\widehat e_j$, $j = 1, \dots, 6$, and the faces by $\widehat F_k$, $k = 1, \dots, 4$. To fix ideas, we introduce a numbering for these vertices and edges, as depicted in Figure 11.4, with face $\widehat F_k$ opposite to vertex $\widehat w_k$, $k = 1, \dots, 4$.

First we define the vertex basis functions $\widehat\Phi_{\widehat w_i}$, $i = 1, \dots, 4$. We start by defining $\widehat\Phi_{\widehat w_i}$ on the faces $\widehat F_k$, $k = 1, \dots, 4$: we use the method of Subsection 11.1.3.1 to construct three two-variable vertex basis functions on the three faces sharing this vertex, and we set the function to be zero on the remaining face. Consider for example the vertex $\widehat w_1$. The vertex function $\widehat\Phi_{\widehat w_1}$ is defined on the faces $\widehat F_2$, $\widehat F_3$, $\widehat F_4$, and is set to be zero on $\widehat F_1$. We note that $\widehat\Phi_{\widehat w_1}$ is continuous on $\partial\widehat\Omega = \widehat F_1 \cup \dots \cup \widehat F_4$. The function $\widehat\Phi_{\widehat w_1}$ is then extended into a discrete harmonic function in $\widehat\Omega$; see Subsection C.4.4. The vertex functions at the other vertices are defined similarly.

Similarly, we use the method in Subsection 11.1.3.2 to define the edge basis functions $\widehat\Phi_{\widehat e_j,q}$, $j = 1, \dots, 6$ and $q = 1, \dots, p-1$. For example, consider the edge $\widehat e_1$. We first identify the two faces sharing $\widehat e_1$, namely $\widehat F_2$ and $\widehat F_3$, with $T$. We then define the edge basis functions $\widehat\Phi_{\widehat e_1,q}$, $q = 1, \dots, p-1$, on $\widehat F_2$ and $\widehat F_3$ and extend them by zero
Fig. 11.4 Vertices and edges of the reference tetrahedron $\widehat\Omega$
onto the remaining faces $\widehat F_1$ and $\widehat F_4$. Note again that $\widehat\Phi_{\widehat e_1,q}$ is continuous on $\partial\widehat\Omega$. We then extend these functions into $\widehat\Omega$ as discrete harmonic functions.

Finally, the face basis functions are defined using the method in Section 11.1.3.3 for each face $\widehat F_k$, $k = 1, \dots, 4$, which is now identified with the reference triangle $T$ by some affine transformation. The resulting basis function $\widehat\Phi^{(k)}_{q,m}$ ($q, m \ge 2$, $q + m \le p + 1$) on face $\widehat F_k$, $k = 1, \dots, 4$, is extended by zero onto the other faces. These functions are then extended to be discrete harmonic functions in $\widehat\Omega$. Note that corresponding to each face there are $(p-1)(p-2)/2$ face functions; see Section 11.1.3.3.

The vertex and edge components of $u \in \mathcal P_p(\widehat\Omega)$ are denoted by $u_{\widehat w_i}$, $i = 1, \dots, 4$, and $u_{\widehat e_j}$, $j = 1, \dots, 6$. The wire basket $W(\widehat\Omega)$ of $\widehat\Omega$ contains all four vertices and six edges. The corresponding wire basket interpolation operator $I_{W(\widehat\Omega)}$ is defined by
$$I_{W(\widehat\Omega)} u := \sum_{i=1}^{4} u_{\widehat w_i} + \sum_{j=1}^{6} u_{\widehat e_j} \qquad \forall u \in \mathcal P_p(\widehat\Omega); \qquad (11.34)$$
see (11.27), where
$$u_{\widehat w_i} = u(\widehat w_i)\,\widehat\Phi_{\widehat w_i} \qquad\text{and}\qquad u_{\widehat e_j} = \sum_{q=1}^{p-1} \alpha_{j,q}\,\widehat\Phi_{\widehat e_j,q}, \qquad \alpha_{j,q} \in \mathbb{R}.$$
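As an aside (an illustrative check, not part of the original text), the dimension of the polynomial space $\mathcal P_p(\widehat\Omega)$ introduced in this subsection is the number of monomials $x^i y^j z^k$ with total degree at most $p$, namely $(p+1)(p+2)(p+3)/6$. A short script confirms this count:

```python
from itertools import product

def dim_Pp_tet(p):
    # Number of monomials x^i y^j z^k with 0 <= i + j + k <= p.
    return sum(1 for i, j, k in product(range(p + 1), repeat=3) if i + j + k <= p)

# Closed form: dim P_p(reference tetrahedron) = (p + 1)(p + 2)(p + 3) / 6.
for p in range(1, 8):
    assert dim_Pp_tet(p) == (p + 1) * (p + 2) * (p + 3) // 6
```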
It is known that the vertex, edge, and face basis functions
$$\widehat\Phi_{\widehat w_i},\ i = 1,\dots,4; \qquad \widehat\Phi_{\widehat e_j,q},\ j = 1,\dots,6,\ q = 1,\dots,p-1; \qquad \widehat\Phi^{(k)}_{q,m},\ q, m \ge 2,\ q+m \le p+1,\ k = 1,\dots,4,$$
form a basis for the space of discrete harmonic functions in $\widehat\Omega$. In particular, since the constant function 1 in $\widehat\Omega$ is a discrete harmonic function, we can write
$$1 = I_{W(\widehat\Omega)} 1 + \sum_{k=1}^{4} 1_{\widehat F_k},$$
where the face components $1_{\widehat F_k}$ of 1 are defined by
$$1_{\widehat F_k} := \sum_{(q,m)\in\mathcal I_p} \beta^{(k)}_{q,m}\,\widehat\Phi^{(k)}_{q,m} \qquad\text{for some } \beta^{(k)}_{q,m} \in \mathbb{R}. \qquad (11.35)$$
Here $\mathcal I_p := \{(q,m) : q, m \ge 2,\ q+m \le p+1\}$. As in the two-dimensional case, we define another interpolation operator which reproduces constants:
$$\widetilde I_{W(\widehat\Omega)} u := I_{W(\widehat\Omega)} u + \sum_{k=1}^{4} 1_{\widehat F_k}\,\mu_{\partial\widehat F_k}(u); \qquad (11.36)$$
compare with (11.30).
11.1.6 Properties of the interpolation operators $\widetilde I_W$ and $\widetilde I_{W(\widehat\Omega)}$
In this subsection, we prove some lemmas involving the properties of the wire basket interpolation operators defined above. First we prove an equivalence of norms which will be needed in the proofs of Lemma 11.2 and Lemma 11.3.

Lemma 11.1. Let $u \in \mathcal P_p(T)$ and let $u_{W_j}$ be the preliminary wire basket component of $u$ associated with edge $I_j$, $j = 1, 2, 3$, i.e.,
$$u_{W_j} = u_{v_j} + u_{v_{j+1}} + u_{I_j}$$
(indices understood cyclically); see Figure 11.1. Then
$$\tfrac{3}{4}\,\|u_{W_j}\|_*^2 \le \|u_{W_j}\|_{L^2(I_j)}^2 \le 2\,\|u_{W_j}\|_*^2,$$
where
$$\|u_{W_j}\|_*^2 := \|u_{v_j}\|_{L^2(I_j)}^2 + \|u_{v_{j+1}}\|_{L^2(I_j)}^2 + \|u_{I_j}\|_{L^2(I_j)}^2.$$
Proof. We prove the lemma for $j = 1$. In this case $u_{W_1} = u_{v_1} + u_{v_2} + u_{I_1}$ and the required estimate is
$$\tfrac{3}{4}\,\|u_{W_1}\|_*^2 \le \|u_{W_1}\|_{L^2(I_1)}^2 \le 2\,\|u_{W_1}\|_*^2,$$
where
$$\|u_{W_1}\|_*^2 = \|u_{v_1}\|_{L^2(I_1)}^2 + \|u_{v_2}\|_{L^2(I_1)}^2 + \|u_{I_1}\|_{L^2(I_1)}^2.$$
Due to (C.59) we have
\begin{align*}
\|u_{W_1}\|_{L^2(I_1)}^2 &= \|u_{v_1}\|_{L^2(I_1)}^2 + \|u_{v_2}\|_{L^2(I_1)}^2 + \|u_{I_1}\|_{L^2(I_1)}^2 \\
&\quad + 2\langle u_{v_1}, u_{v_2}\rangle_{L^2(I_1)} + 2\langle u_{v_1}, u_{I_1}\rangle_{L^2(I_1)} + 2\langle u_{v_2}, u_{I_1}\rangle_{L^2(I_1)} \\
&= \|u_{v_1}\|_{L^2(I_1)}^2 + \|u_{v_2}\|_{L^2(I_1)}^2 + \|u_{I_1}\|_{L^2(I_1)}^2 + 2\langle u_{v_1}, u_{v_2}\rangle_{L^2(I_1)} \\
&\le 2\big(\|u_{v_1}\|_{L^2(I_1)}^2 + \|u_{v_2}\|_{L^2(I_1)}^2 + \|u_{I_1}\|_{L^2(I_1)}^2\big).
\end{align*}
On the other hand,
$$\|u_{W_1}\|_{L^2(I_1)}^2 \ge \|u_{v_1}\|_{L^2(I_1)}^2 + \|u_{v_2}\|_{L^2(I_1)}^2 + \|u_{I_1}\|_{L^2(I_1)}^2 - 2\big|\langle u_{v_1}, u_{v_2}\rangle_{L^2(I_1)}\big|. \qquad (11.37)$$
We note that, by the way the vertex basis functions $\varphi_{v_i}$, $i = 1, 2, 3$, are constructed (see Subsection 11.1.3.1), the restrictions of $u_{v_1}$ and $u_{v_2}$ to $I_1$ can be represented by
$$u_{v_1}|_{I_1}(t,0) = c\,\phi_0(t) \quad\text{and}\quad u_{v_2}|_{I_1}(t,0) = c^-\phi_0^-(t), \qquad c, c^- \in \mathbb{R},\ t \in [0,1],$$
where $\phi_0$ and $\phi_0^-$ are defined in Definition C.1. Hence, by using (C.58) and noting that $\|\phi_0\|_{L^2(0,1)} = \|\phi_0^-\|_{L^2(0,1)}$, we deduce
\begin{align*}
2\big|\langle u_{v_1}, u_{v_2}\rangle_{L^2(I_1)}\big| &= 2|c|\,|c^-|\,\big|\langle \phi_0, \phi_0^-\rangle_{L^2(I_1)}\big|
= \frac{1}{p+1}\,|c|\,|c^-|\,\|\phi_0\|_{L^2(0,1)}\|\phi_0^-\|_{L^2(0,1)} \\
&= \frac{1}{p+1}\,\|u_{v_1}\|_{L^2(I_1)}\|u_{v_2}\|_{L^2(I_1)}
\le \tfrac14\big(\|u_{v_1}\|_{L^2(I_1)}^2 + \|u_{v_2}\|_{L^2(I_1)}^2\big) \qquad\text{for } p \ge 1.
\end{align*}
This inequality and (11.37) imply
$$\|u_{W_1}\|_{L^2(I_1)}^2 \ge \tfrac{3}{4}\big(\|u_{v_1}\|_{L^2(I_1)}^2 + \|u_{v_2}\|_{L^2(I_1)}^2 + \|u_{I_1}\|_{L^2(I_1)}^2\big).$$
This completes the proof of the lemma.
The next lemma gives some boundedness results for the operator $\widetilde I_W$.

Lemma 11.2. The interpolation operator $\widetilde I_W$ defined by (11.30) satisfies, for all $u \in \mathcal P_p(T)$,
\begin{align}
\|\widetilde I_W u\|_{L^2(T)} &\lesssim \|u\|_{L^2(\partial T)}, \tag{11.38}\\
|\widetilde I_W u|_{H^{1/2}(T)} &\lesssim (1+\log p)^{1/2}\,\|u\|_{L^2(\partial T)}, \tag{11.39}\\
\|\widetilde I_W u\|_{L^2(T)} &\lesssim (1+\log p)^{1/2}\,\|u\|_{H_S^{1/2}(T)}, \tag{11.40}\\
|\widetilde I_W u|_{H^{1/2}(T)} &\lesssim (1+\log p)\,\|u\|_{H_S^{1/2}(T)}. \tag{11.41}
\end{align}
The constants are independent of $u$ and $p$.
Proof. First we note that (11.40) and (11.41) are direct consequences of (11.38), (11.39) and (C.55) in Lemma C.30. Hence we prove only (11.38) and (11.39). Recalling the definition of $\widetilde I_W$ in (11.30), it suffices to prove the required estimates for $I_W u$ and $\widetilde 1_T\,\mu_{\partial T}(u)$, i.e., to prove
\begin{align}
\|I_W u\|_{L^2(T)} &\lesssim \|u\|_{L^2(\partial T)}, \tag{11.42}\\
|I_W u|_{H^{1/2}(T)} &\lesssim (1+\log p)^{1/2}\,\|u\|_{L^2(\partial T)}, \tag{11.43}
\end{align}
and
\begin{align}
\|\widetilde 1_T\,\mu_{\partial T}(u)\|_{L^2(T)} &\lesssim \|u\|_{L^2(\partial T)}, \tag{11.44}\\
|\widetilde 1_T\,\mu_{\partial T}(u)|_{H^{1/2}(T)} &\lesssim (1+\log p)^{1/2}\,\|u\|_{L^2(\partial T)}. \tag{11.45}
\end{align}
First consider $I_W u$. Due to (11.27), we need to estimate each of $u_{v_i}$ and $u_{I_i}$, the vertex and edge components of $u$. Since these components are linear combinations of, respectively, the basis functions $\varphi_{v_i}$ and $\varphi_{I_i,q}$, $q = 1, \dots, p-1$ (see Subsection 11.1.3.4), and since the extension operators are linear, we have
$$u_{v_i} = E_i^1\big(u_{v_i}|_{\partial T}\big) \qquad\text{and}\qquad u_{I_i} = E_i\big(u_{I_i}|_{I_i}\big).$$
Consequently,
$$I_W u = \sum_{i=1}^{3} E_i^1\big(u_{v_i}|_{\partial T}\big) + \sum_{i=1}^{3} E_i\big(u_{I_i}|_{I_i}\big). \qquad (11.46)$$
The triangle inequality, Theorem 11.1 and Theorem 11.3 yield
$$\|I_W u\|_{L^2(T)} \le \sum_{i=1}^{3} \big\|E_i^1\big(u_{v_i}|_{\partial T}\big)\big\|_{L^2(T)} + \sum_{i=1}^{3} \big\|E_i\big(u_{I_i}|_{I_i}\big)\big\|_{L^2(T)} \lesssim \sum_{i=1}^{3}\big(\|u_{v_i}\|_{L^2(I_i)} + \|u_{v_i}\|_{L^2(I_{i+2})}\big) + \sum_{i=1}^{3}\|u_{I_i}\|_{L^2(I_i)}.$$
Note that the cyclic notation implies
$$\sum_{i=1}^{3} \|u_{v_i}\|_{L^2(I_{i+2})} = \sum_{i=1}^{3} \|u_{v_{i+1}}\|_{L^2(I_i)},$$
so that
$$\|I_W u\|_{L^2(T)} \lesssim \sum_{i=1}^{3}\big(\|u_{v_i}\|_{L^2(I_i)} + \|u_{v_{i+1}}\|_{L^2(I_i)} + \|u_{I_i}\|_{L^2(I_i)}\big).$$
Lemma 11.1 gives
$$\|I_W u\|_{L^2(T)}^2 \lesssim \sum_{j=1}^{3} \|u_{W_j}\|_{L^2(I_j)}^2 = \|I_W u\|_{L^2(\partial T)}^2 = \|u\|_{L^2(\partial T)}^2,$$
proving (11.42). To prove (11.43), we use (11.46) again, but this time we invoke Theorem 11.2 and Theorem 11.3 for the $H^{1/2}$-seminorm:
$$|I_W u|_{H^{1/2}(T)} \lesssim (1+\log p)^{1/2} \sum_{i=1}^{3}\big(\|u_{v_i}\|_{L^2(I_i)} + \|u_{v_i}\|_{L^2(I_{i+2})} + \|u_{I_i}\|_{L^2(I_i)}\big) \lesssim (1+\log p)^{1/2}\,\|I_W u\|_{L^2(\partial T)} = (1+\log p)^{1/2}\,\|u\|_{L^2(\partial T)},$$
proving (11.43).

Next we consider $\widetilde 1_T\,\mu_{\partial T}(u)$. By the definition of $\widetilde 1_T$ we have
$$|\widetilde 1_T\,\mu_{\partial T}(u)| = |1 - I_W 1|\,|\mu_{\partial T}(u)| \le \big(1 + |I_W 1|\big)\,|\mu_{\partial T}(u)|.$$
Hence, by using Lemma C.31 and (11.42) (for the constant function 1) we deduce
$$\|\widetilde 1_T\,\mu_{\partial T}(u)\|_{L^2(T)} \lesssim \big(\|1\|_{L^2(T)} + \|I_W 1\|_{L^2(T)}\big)\,\|u\|_{L^2(\partial T)} \lesssim \|1\|_{L^2(\partial T)}\,\|u\|_{L^2(\partial T)} \lesssim \|u\|_{L^2(\partial T)},$$
proving (11.44). Similarly, by using Lemma C.31 and (11.43) we deduce
$$|\widetilde 1_T\,\mu_{\partial T}(u)|_{H^{1/2}(T)} \lesssim |I_W 1|_{H^{1/2}(T)}\,\|u\|_{L^2(\partial T)} \lesssim (1+\log p)^{1/2}\,\|1\|_{L^2(\partial T)}\,\|u\|_{L^2(\partial T)} \lesssim (1+\log p)^{1/2}\,\|u\|_{L^2(\partial T)},$$
proving (11.45). The lemma is proved.
The following results concerning the interpolation operators on the three-dimensional wire basket were initially proved in [32] (which deals with wire basket preconditioners for the p-version of the finite element method on tetrahedral meshes). However, several results in that reference contain an unknown constant $C(p)$ which depends on the polynomial degree $p$; this unknown constant stems from an unproved extension theorem. The results presented in the remainder of this subsection, originally proved in [111], use Theorems 11.1–11.3 and provide an explicit dependence of the constant on $p$. The following lemma can be compared with [32, Lemma 4.13 and Lemma 4.16].

Lemma 11.3. The following estimates hold:
$$|I_{W(\widehat\Omega)} u|_{H^1(\widehat\Omega)}^2 \lesssim (1+\log p)^3\,\|u\|_{L^2(W(\widehat\Omega))}^2 \qquad \forall u \in \mathcal P_p(\widehat\Omega) \qquad (11.47)$$
and
$$|\widetilde I_{W(\widehat\Omega)} u|_{H^1(\widehat\Omega)}^2 \lesssim (1+\log p)^3\,\|u\|_{L^2(W(\widehat\Omega))}^2 \qquad \forall u \in \mathcal P_p(\widehat\Omega), \qquad (11.48)$$
where $W(\widehat\Omega)$ is the three-dimensional wire basket consisting of all vertices and edges of $\widehat\Omega$; see Subsection 11.1.5. The constants are independent of $p$ and $u$.

Proof. Recalling the definition of $I_{W(\widehat\Omega)} u$ in (11.34) and the definitions of the components $u_{\widehat w_i}$ and $u_{\widehat e_j}$ in Subsection 11.1.5, we confirm that $I_{W(\widehat\Omega)} u$ is the discrete harmonic extension of its boundary value. Lemma C.34 gives
$$\|I_{W(\widehat\Omega)} u\|_{H^1(\widehat\Omega)}^2 \lesssim \|I_{W(\widehat\Omega)} u\|_{H^{1/2}(\partial\widehat\Omega)}^2 \lesssim \sum_{i=1}^{4} \|u_{\widehat w_i}\|_{H^{1/2}(\partial\widehat\Omega)}^2 + \sum_{j=1}^{6} \|u_{\widehat e_j}\|_{H^{1/2}(\partial\widehat\Omega)}^2.$$
Consider for example $\|u_{\widehat w_1}\|_{H^{1/2}(\partial\widehat\Omega)}^2$. Noting the support of $u_{\widehat w_1}$, we have by using (A.106) and Lemma C.28
$$\|u_{\widehat w_1}\|_{H^{1/2}(\widehat F_2\cup\widehat F_3\cup\widehat F_4)}^2 \lesssim \|u_{\widehat w_1}\|_{H^{1/2}(\partial\widehat\Omega)}^2 \lesssim (1+\log p)^2\,\|u_{\widehat w_1}\|_{H^{1/2}(\widehat F_2\cup\widehat F_3\cup\widehat F_4)}^2. \qquad (11.49)$$
Recalling the definition of the preliminary vertex basis functions in Subsection 11.1.3.1, the vertex component $u_{\widehat w_1}$ can be depicted as follows. First, by using affine transformations we map the faces $\widehat F_2$, $\widehat F_3$, and $\widehat F_4$ onto triangles $T_2$, $T_3$, and $T_4$, as in Figure 11.5, such that the vertex $\widehat w_1$ of the tetrahedron, which is also the vertex $v_1$ of the triangles, is located at the center. On each triangle $T_k$, $k = 2, 3, 4$, the vertex function $u_{\widehat w_1}$ is the extension of the values of $u$ on the boundary of the triangles. The definition of the vertex basis function $\varphi_{v_1}$ dictates that
Fig. 11.5 Affine images of $\widehat F_2$, $\widehat F_3$, and $\widehat F_4$

$$u_{\widehat w_1}(x,y) = \begin{cases} E^2 u(x,y), & (x,y) \in T_2, \\ E^2 u(x,-y), & (x,y) \in T_3, \\ E^2 u(-x,y), & (x,y) \in T_4. \end{cases}$$
Hence, by using Theorem 11.3 we deduce
\begin{align*}
\|u_{\widehat w_1}\|_{L^2(\widehat F_2\cup\widehat F_3\cup\widehat F_4)}^2 &= \|E^2 u\|_{L^2(T_2)}^2 + \|E^2 u(x,-y)\|_{L^2(T_3)}^2 + \|E^2 u(-x,y)\|_{L^2(T_4)}^2 \\
&\lesssim \|u\|_{L^2(\partial T_2)}^2 + \|u\|_{L^2(\partial T_3)}^2 + \|u\|_{L^2(\partial T_4)}^2 \lesssim \|u\|_{L^2(W(\widehat\Omega))}^2. \tag{11.50}
\end{align*}
On the other hand, for the $H^{1/2}$-seminorm we have
\begin{align*}
|u_{\widehat w_1}|_{H^{1/2}(\widehat F_2\cup\widehat F_3\cup\widehat F_4)}^2 &\lesssim |E^2 u|_{H^{1/2}(T_2)}^2 + |E^2 u(x,-y)|_{H^{1/2}(T_3)}^2 + |E^2 u(-x,y)|_{H^{1/2}(T_4)}^2 \\
&\quad + \int_{T_2\times T_3} \frac{|E^2 u(x,y) - E^2 u(x',-y')|^2}{|(x,y)-(x',-y')|^3}\,dx\,dy\,dx'\,dy' \\
&\quad + \int_{T_2\times T_4} \frac{|E^2 u(x,y) - E^2 u(-x',y')|^2}{|(x,y)-(-x',y')|^3}\,dx\,dy\,dx'\,dy' \\
&\quad + \int_{T_3\times T_4} \frac{|E^2 u(x,-y) - E^2 u(-x',y')|^2}{|(x,-y)-(-x',y')|^3}\,dx\,dy\,dx'\,dy' \\
&= |E^2 u|_{H^{1/2}(T_2)}^2 + |E^2 u(x,-y)|_{H^{1/2}(T_3)}^2 + |E^2 u(-x,y)|_{H^{1/2}(T_4)}^2 \\
&\quad + 3\int_{T_2\times T_2} \frac{|E^2 u(x,y) - E^2 u(x',y')|^2}{|(x,y)-(x',y')|^3}\,dx\,dy\,dx'\,dy' \\
&\lesssim |E^2 u|_{H^{1/2}(T_2)}^2 + |E^2 u(x,-y)|_{H^{1/2}(T_3)}^2 + |E^2 u(-x,y)|_{H^{1/2}(T_4)}^2 \\
&\lesssim (1+\log p)\,\|u\|_{L^2(W(\widehat\Omega))}^2, \tag{11.51}
\end{align*}
where in the last step we used Theorem 11.3. It follows from (11.49), (11.50), and (11.51) that
$$\|u_{\widehat w_1}\|_{H^{1/2}(\partial\widehat\Omega)}^2 \lesssim (1+\log p)^3\,\|u\|_{L^2(W(\widehat\Omega))}^2.$$
Similar results hold for $u_{\widehat w_i}$, $i = 2, 3, 4$.

Next we consider the edge components $u_{\widehat e_j}$, $j = 1, \dots, 6$. For example, consider $u_{\widehat e_1}$. We have, by using again (A.106) and Lemma C.28,
$$\|u_{\widehat e_1}\|_{H^{1/2}(\widehat F_2\cup\widehat F_3)}^2 \lesssim \|u_{\widehat e_1}\|_{H^{1/2}(\partial\widehat\Omega)}^2 \lesssim (1+\log p)^2\,\|u_{\widehat e_1}\|_{H^{1/2}(\widehat F_2\cup\widehat F_3)}^2.$$
Using again the affine mappings to change the geometries to $T_2$, $T_3$, and $T_4$ (see Figure 11.4 and Figure 11.5), we note that $u_{\widehat e_1}|_{T_2}$ is the extension (by $E^2$) of the values of $u$ on edge $\widehat e_1$ and that
$$u_{\widehat e_1}(x,y) = E^2 u(x,-y), \qquad (x,y) \in T_3.$$
Using the same approach as for $u_{\widehat w_1}$ we can prove that
$$\|u_{\widehat e_1}\|_{H^{1/2}(\partial\widehat\Omega)}^2 \lesssim (1+\log p)^3\,\|u\|_{L^2(W(\widehat\Omega))}^2.$$
The result also holds for $u_{\widehat e_j}$, $j = 2, \dots, 6$. Altogether, we deduce (11.47).

In order to prove (11.48), by (11.35) and (11.36) it remains to prove that
$$\|1_{\widehat F_k}\,\mu_{\partial\widehat F_k}(u)\|_{H^1(\widehat\Omega)}^2 \lesssim (1+\log p)^3\,\|u\|_{L^2(\partial\widehat F_k)}^2, \qquad k = 1,\dots,4. \qquad (11.52)$$
It follows from Lemma C.31 and Lemma C.34 that
$$\|1_{\widehat F_k}\,\mu_{\partial\widehat F_k}(u)\|_{H^1(\widehat\Omega)}^2 = \|1_{\widehat F_k}\|_{H^1(\widehat\Omega)}^2\,|\mu_{\partial\widehat F_k}(u)|^2 \lesssim \|1_{\widehat F_k}\|_{H^{1/2}(\partial\widehat\Omega)}^2\,\|u\|_{L^2(\partial\widehat F_k)}^2.$$
Noting the support of $1_{\widehat F_k}$ and using (A.106) and Lemma C.28, we deduce
$$\|1_{\widehat F_k}\|_{H^{1/2}(\partial\widehat\Omega)}^2 \lesssim (1+\log p)^2\,\|1_{\widehat F_k}\|_{H^{1/2}(\widehat F_k)}^2.$$
Using (11.29), (11.42), and (11.43) (for the constant function 1) we infer
$$\|1_{\widehat F_k}\|_{H^{1/2}(\partial\widehat\Omega)}^2 \lesssim (1+\log p)^2\big(\|1\|_{H^{1/2}(\widehat F_k)}^2 + \|I_W 1\|_{H^{1/2}(\widehat F_k)}^2\big) \lesssim (1+\log p)^3\big(\|1\|_{H^{1/2}(\widehat F_k)}^2 + \|1\|_{L^2(\partial\widehat F_k)}^2\big) \lesssim (1+\log p)^3.$$
This proves (11.52), completing the proof of the lemma.

We are now ready to present the preconditioners for the hypersingular integral equation.
11.2 Preconditioners for the Hypersingular Integral Equation

We recall that the boundary element space for solving the hypersingular integral equation (1.4) by the hp-version of the boundary element Galerkin method is
$$V = \big\{ v \in C(\Gamma) : v|_{\partial\Gamma} = 0 \text{ and } v|_{\Gamma_i} \in \mathcal P_p(\Gamma_i),\ i = 1,\dots,n \big\},$$
where $\Gamma_1, \dots, \Gamma_n$ are triangles forming a conforming, regular, and quasi-uniform triangulation of $\Gamma$. In a standard way, we utilise the local basis functions defined in Section 11.1.3 to generate a basis for $V$. In particular, we use the notation for components in (11.26), and the wire basket interpolation operators (11.27) and (11.30). Each triangular element $\Gamma_i$, $i = 1,\dots,n$, is mapped onto the reference triangle $T$. Thus we can define the global wire basket $W$ from the local wire baskets $W_j$, $j = 1,\dots,n$.

11.2.1 Subspace decomposition

As usual, the additive Schwarz preconditioners are defined from the subspace decomposition of $V$:
$$V = V_0 + V_1 + \cdots + V_n. \qquad (11.53)$$
For our preconditioners, the subspace $V_0$ is the space of wire basket functions with local basis functions $\widetilde\varphi_{v_i}$ and $\widetilde\varphi_{I_i,q}$, $i = 1, 2, 3$ and $q = 2,\dots,p$, defined in Subsection 11.1.4, while $V_j$ consists of functions in $V$ supported in $\Gamma_j$, $j = 1,\dots,n$. Accordingly, any $v \in V$ has a unique representation
$$v = v_W + \sum_{j=1}^{n} v_{\Gamma_j}, \qquad (11.54)$$
where $v_W \in V_0$ and $v_{\Gamma_j} \in V_j$, $j = 1,\dots,n$. This is a non-overlapping decomposition, resulting in an iterative substructuring method. Preconditioners designed on this decomposition have a simple block-diagonal form when the basis functions are numbered appropriately. Depending on the choice of the local bilinear forms, see Subsection 2.3.2.2, these blocks are blocks of the stiffness matrix or specifically assembled blocks. For $j = 1,\dots,n$ the local bilinear form $a_j(\cdot,\cdot) : V_j \times V_j \to \mathbb{R}$ inherits the bilinear form on $V$, namely,
$$a_j(v,w) := a_W(v,w) \qquad \forall v, w \in V_j. \qquad (11.55)$$
Two different choices of $a_0(\cdot,\cdot) : V_0 \times V_0 \to \mathbb{R}$ will result in two different preconditioners.
11.2.2 Preconditioner I

For this preconditioner, we follow [169]. The bilinear form $a_0(\cdot,\cdot)$ is defined by
$$a_0(v,w) := (1+\log p)^3 \sum_{j=1}^{n} \big\langle v - \mu_{\partial\Gamma_j}(v),\, w - \mu_{\partial\Gamma_j}(w) \big\rangle_{L^2(\partial\Gamma_j)} \qquad \forall v, w \in V_0, \qquad (11.56)$$
where we recall that $\mu_{\partial\Gamma_j}(v)$ is the integral mean of $v$ on $\partial\Gamma_j$. We note that, due to Lemma C.29,
$$a_0(v,v) = (1+\log p)^3 \sum_{j=1}^{n} \|v - \mu_{\partial\Gamma_j}(v)\|_{L^2(\partial\Gamma_j)}^2 = (1+\log p)^3 \sum_{j=1}^{n} \inf_{c_j\in\mathbb R} \|v - c_j\|_{L^2(\partial\Gamma_j)}^2 \qquad \forall v \in V_0. \qquad (11.57)$$
We prove the following two lemmas.

Lemma 11.4 (Coercivity of decomposition). For any $v \in V$ satisfying (11.54) the following estimate holds:
$$a_W(v,v) \lesssim a_0(v_W, v_W) + \sum_{j=1}^{n} a_j(v_{\Gamma_j}, v_{\Gamma_j}),$$
where the constant is independent of $v$, $h$, and $p$.
Proof. It follows from Theorem A.11 and Lemma B.3 that
$$a_W(v,v) \lesssim a_W(v_W, v_W) + \sum_{j=1}^{n} a_j(v_{\Gamma_j}, v_{\Gamma_j}).$$
Hence it suffices to prove $a_W(v_W, v_W) \lesssim a_0(v_W, v_W)$ or, equivalently (see (11.57)),
$$\|v_W\|_{H_I^{1/2}(\Gamma)}^2 \lesssim (1+\log p)^3 \sum_{j=1}^{n} \inf_{c_j\in\mathbb R} \|v_W - c_j\|_{L^2(\partial\Gamma_j)}^2. \qquad (11.58)$$
In order to prove this inequality we consider a three-dimensional domain $\Omega$ such that $\Gamma \subset \partial\Omega$. We decompose $\Omega$ into tetrahedra $\Omega_j$, $j = 1,\dots,N$, such that the trace of this mesh on $\Gamma$ is compatible with the given mesh on $\Gamma$. Let $\Omega_i$ be a tetrahedron containing $\Gamma_i$ as a face. By an affine transformation, we map this tetrahedron onto the reference tetrahedron $\widehat\Omega$ such that $\Gamma_i$ is mapped onto the reference triangle $T$, a face of $\widehat\Omega$.

Let $\widehat v$ be the function defined on $T$ from $v_{W_i} := v_W|_{\Gamma_i}$ and the affine mapping, and let $\widehat V$ be the discrete harmonic extension of $\widehat v$ onto $\widehat\Omega$. It can be seen that $\widehat v = \widetilde I_T(\widehat v|_{\partial T})$ and $\widehat V = \widetilde I_{W(\widehat\Omega)}(\widehat v|_{\partial T})$. Hence, Lemma 11.3 gives
$$|\widehat V|_{H^1(\widehat\Omega)}^2 \lesssim (1+\log p)^3\,\|\widehat v\|_{L^2(\partial T)}^2.$$
Since $\widetilde I_T$ and $\widetilde I_{W(\widehat\Omega)}$ reproduce constants we deduce
$$|\widehat V|_{H^1(\widehat\Omega)}^2 = |\widehat V - c|_{H^1(\widehat\Omega)}^2 \lesssim (1+\log p)^3\,\|\widehat v - c\|_{L^2(\partial T)}^2 \qquad \forall c \in \mathbb R.$$
It follows from Lemma A.4 that
$$|V_{W_i}|_{H^1(\Omega_i)}^2 \lesssim (1+\log p)^3\,\|v_W - c_i\|_{L^2(\partial\Gamma_i)}^2 \qquad \forall c_i \in \mathbb R. \qquad (11.59)$$
Here $V_{W_i}$ is the function defined on $\Omega_i$ corresponding to $\widehat V$ via the affine mapping. Patching the functions $V_{W_i}$, $i = 1,\dots,n$, together we obtain a function $V_W$ defined on the whole of $\Omega$ whose trace on the boundary is $v_W$. The trace theorem gives
$$\|v_W\|_{H_I^{1/2}(\Gamma)}^2 = \|v_W\|_{H^{1/2}(\partial\Omega)}^2 \lesssim \|V_W\|_{H^1(\Omega)}^2.$$
Since $v_W$ vanishes on $\partial\Omega \setminus \Gamma$ we can invoke [181, Theorem 2.3.1] to obtain
$$\|V_W\|_{H^1(\Omega)} \lesssim |V_W|_{H^1(\Omega)}$$
and thus
$$\|v_W\|_{H_I^{1/2}(\Gamma)}^2 \lesssim |V_W|_{H^1(\Omega)}^2 = \sum_{i=1}^{n} |V_{W_i}|_{H^1(\Omega_i)}^2.$$
This together with (11.59) implies
$$\|v_W\|_{H_I^{1/2}(\Gamma)}^2 \lesssim (1+\log p)^3 \sum_{i=1}^{n} \|v_W - c_i\|_{L^2(\partial\Gamma_i)}^2 \qquad \forall c_i \in \mathbb R,$$
resulting in (11.58). This completes the proof of the lemma.
Lemma 11.5 (Stability of decomposition). For any $v \in V$ satisfying (11.54) the following estimate holds:
$$a_0(v_W, v_W) + \sum_{j=1}^{n} a_W(v_{\Gamma_j}, v_{\Gamma_j}) \lesssim (1+\log p)^4\,a_W(v,v),$$
where the constant is independent of $v$, $h$, and $p$.

Proof. We first consider $a_0(v_W, v_W)$. Since
$$a_0(v_W, v_W) = (1+\log p)^3 \sum_{i=1}^{n} \|v_W - \mu_{\partial\Gamma_i}(v_W)\|_{L^2(\partial\Gamma_i)}^2 = (1+\log p)^3 \sum_{i=1}^{n} \|v - \mu_{\partial\Gamma_i}(v)\|_{L^2(\partial\Gamma_i)}^2,$$
Lemma C.30 and a scaling argument imply
$$a_0(v_W, v_W) \lesssim (1+\log p)^4\,|v|_{H^{1/2}(\Gamma)}^2.$$
Next we consider (see Lemma A.13)
$$a_W(v_{\Gamma_j}, v_{\Gamma_j}) \lesssim \|v_{\Gamma_j}\|_{H_I^{1/2}(\Gamma)}^2 \le \|v_{\Gamma_j}\|_{H_I^{1/2}(\Gamma_j)}^2, \qquad j = 1,\dots,n.$$
By using an affine mapping which maps $\Gamma_j$ onto $T$, and which transforms $v$, $v_W$, and $v_{\Gamma_j}$ into $\widehat v$, $\widehat v_W$, and $\widehat v_T$, respectively, we can consider $\|\widehat v_T\|_{H_I^{1/2}(T)}^2$ instead of $\|v_{\Gamma_j}\|_{H_I^{1/2}(\Gamma_j)}^2$. Since $\widehat v_T = (\widehat v - \widehat v_W)|_T = (\widehat v - \widetilde I_W \widehat v)|_T$ we have
$$\|\widehat v_T\|_{H_I^{1/2}(T)}^2 = \|\widehat v - \widetilde I_W \widehat v\|_{H_I^{1/2}(T)}^2 \lesssim \|\widehat v\|_{H_I^{1/2}(T)}^2 + \|\widetilde I_W \widehat v\|_{H_I^{1/2}(T)}^2 \lesssim \|\widehat v\|_{H_I^{1/2}(T)}^2 + (1+\log p)^2\,\|\widetilde I_W \widehat v\|_{H_I^{1/2}(T)}^2,$$
where in the last step we used Lemma C.28. It follows from (11.40) and (11.41) that
$$\|\widehat v_T\|_{H_I^{1/2}(T)}^2 = \|\widehat v - \widetilde I_W \widehat v\|_{H_I^{1/2}(T)}^2 \lesssim (1+\log p)^4\,\|\widehat v\|_{H_S^{1/2}(T)}^2,$$
where we used the fact that $\|\cdot\|_{H_I^{1/2}(T)}^2 \simeq \|\cdot\|_{H_S^{1/2}(T)}^2$ with constants depending only on the diameter of $T$. Since $\widetilde I_W$ reproduces constants, we deduce
$$\|\widehat v_T\|_{H_I^{1/2}(T)}^2 = \|(\widehat v + c) - \widetilde I_W(\widehat v + c)\|_{H_I^{1/2}(T)}^2 \lesssim (1+\log p)^4\,\|\widehat v + c\|_{H_S^{1/2}(T)}^2 \qquad \forall c \in \mathbb R.$$
A quotient argument yields
$$\|\widehat v_T\|_{H_I^{1/2}(T)}^2 \lesssim (1+\log p)^4\,|\widehat v|_{H^{1/2}(T)}^2.$$
It follows from Lemma A.6 and (A.53) that
$$\|v_{\Gamma_j}\|_{H_I^{1/2}(\Gamma_j)}^2 \lesssim (1+\log p)^4\,|v|_{H^{1/2}(\Gamma_j)}^2,$$
implying
$$\sum_{j=1}^{n} a_W(v_{\Gamma_j}, v_{\Gamma_j}) \lesssim \sum_{j=1}^{n} \|v_{\Gamma_j}\|_{H_I^{1/2}(\Gamma_j)}^2 \lesssim (1+\log p)^4 \sum_{j=1}^{n} |v|_{H^{1/2}(\Gamma_j)}^2 \le (1+\log p)^4\,|v|_{H^{1/2}(\Gamma)}^2 \lesssim (1+\log p)^4\,a_W(v,v),$$
where in the penultimate step we used the definition of the $H^{1/2}$-seminorm to obtain $\sum_{j=1}^{n} |v|_{H^{1/2}(\Gamma_j)}^2 \le |v|_{H^{1/2}(\Gamma)}^2$. This completes the proof of the lemma.
Lemma 11.4, Lemma 11.5, and Theorem 2.1 give the following result.

Theorem 11.4. The condition number $\kappa(P_I)$ of the additive Schwarz operator $P_I$ associated with the subspace decomposition (11.53) and the choice of bilinear forms (11.55) and (11.56) is bounded by
$$\kappa(P_I) \lesssim (1+\log p)^4,$$
where the constant is independent of $p$ and $h$.
11.2.3 Preconditioner II

This preconditioner uses the same bilinear form $a_W$ for the wire basket space $V_0$, namely,
$$a_0(v,w) := a_W(v,w) \qquad \forall v, w \in V_0. \qquad (11.60)$$
As usual, we prove the coercivity and stability of the decomposition (11.53).

Lemma 11.6 (Coercivity of decomposition). For any $v \in V$ satisfying (11.54) the following estimate holds:
$$a_W(v,v) \lesssim a_W(v_W, v_W) + \sum_{j=1}^{n} a_W(v_{\Gamma_j}, v_{\Gamma_j}),$$
where the constant is independent of $v$, $h$, and $p$.

Proof. The proof is standard, as previously done, by using Lemma B.3 and Theorem A.11. We omit the details.

Lemma 11.7 (Stability of decomposition). For any $v \in V$ satisfying (11.54) the following estimate holds:
$$a_W(v_W, v_W) + \sum_{j=1}^{n} a_W(v_{\Gamma_j}, v_{\Gamma_j}) \lesssim (1+\log p)^4\,a_W(v,v),$$
where the constant is independent of $v$, $h$, and $p$.

Proof. This is a direct consequence of Lemma 11.5 and inequality (11.58).
The above two lemmas and Theorem 2.1 prove the following theorem.

Theorem 11.5. The condition number $\kappa(P_{II})$ of the additive Schwarz operator $P_{II}$ associated with the subspace decomposition (11.53) and the choice of bilinear forms (11.55) and (11.60) is bounded by
$$\kappa(P_{II}) \lesssim (1+\log p)^4,$$
where the constant is independent of $p$ and $h$.

Remark 11.1. We finish this section by commenting on the costs of both preconditioners. For a given polynomial degree $p$, the dimension of the local subspace $V_j$ associated with the element $\Gamma_j$, $j = 1,\dots,n$, equals the number of interior basis functions on $\Gamma_j$, which is $(p-1)(p-2)/2$; see Subsection 11.1.3. Thus the local problems associated with these local subspaces can be solved directly and in parallel. This procedure is analogous to static condensation (albeit it does not affect the wire basket components) and is standard in p-methods. For refined meshes, the dimension of the wire basket space $V_0$ can be much larger: it depends linearly on $p$ and on $h^{-1}$. Therefore, the problem associated with this subspace using the bilinear form $a_W(\cdot,\cdot)$ (Preconditioner II) results in dense matrices, and a direct solution is costly in general. One strategy is to solve this coarse problem approximately by, e.g., the conjugate gradient method. Preconditioner I, on the other hand, yields sparse matrices due to the use of the $L^2$-inner product in $a_0(\cdot,\cdot)$; the associated problem can therefore be solved efficiently. In this sense, Preconditioner II is rather of theoretical interest, whereas Preconditioner I is the preferred method in practice.
11.3 Numerical Results

For our model problem we choose the domain $\Gamma = (-1/2, 1/2)^2 \times \{0\}$, use uniform triangular meshes of the type given in Figure 11.6, and select the right-hand side function $f = 1$ in the hypersingular integral equation. For this right-hand side function the hp-version with quasi-uniform meshes converges like $O(h^{1/2} p^{-1})$ in the energy norm; see [29, 31].

Fig. 11.6 The mesh for the p-version.

Before presenting the numerical results, we comment on some computational issues. Firstly, we note that if the basis functions of the boundary element space $V$ are appropriately ordered, then the matrix form of the preconditioners has the block-diagonal form
$$C := \begin{bmatrix} A_W & 0 & \cdots & 0 & 0 \\ 0 & A_{\Gamma_1} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & A_{\Gamma_{n-1}} & 0 \\ 0 & 0 & \cdots & 0 & A_{\Gamma_n} \end{bmatrix}.$$
Here the submatrix $A_W$ is the discretisation of the bilinear form $a_0(\cdot,\cdot)$, and $A_{\Gamma_j}$ is the discretisation of the bilinear form $a_W(\cdot,\cdot)$ involving the interior basis functions defined on $\Gamma_j$, $j = 1,\dots,n$. However, in the implementation this matrix is not explicitly computed. One only needs to solve the local problem in $V_j$ using the matrix $A_{\Gamma_j}$, $j = 1,\dots,n$, and solve the global problem in $V_0$ using the matrix $A_W$. To construct the submatrix $A_W$ it is necessary to compute
$$a_0(v,w) = (1+\log p)^3 \sum_{j=1}^{n} \big\langle v - \mu_{\partial\Gamma_j}(v),\, w - \mu_{\partial\Gamma_j}(w) \big\rangle_{L^2(\partial\Gamma_j)} \qquad \forall v, w \in V_0.$$
As usual, we carry out this computation on each element $\Gamma_j$ by computing on the reference element $T$ and using the affine transformation; namely, we compute
$$\big\langle \widehat v - \mu_{\partial T}(\widehat v),\, \widehat w - \mu_{\partial T}(\widehat w) \big\rangle_{L^2(\partial T)} \qquad \forall \widehat v, \widehat w \in \widehat V_0.$$
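Since $C$ is block diagonal, applying $C^{-1}$ amounts to independent solves with $A_W$ and each $A_{\Gamma_j}$, which can run in parallel. A minimal sketch of this blockwise application (illustrative helper name and dense solves only; an actual implementation would exploit sparsity):

```python
import numpy as np

def apply_block_diagonal_inverse(blocks, r):
    """Apply C^{-1} to a vector r, where C = diag(A_W, A_G1, ..., A_Gn).

    `blocks` is a list of square matrices; each local solve is independent
    of the others and could be carried out in parallel.
    """
    out = np.empty_like(r)
    offset = 0
    for Ab in blocks:
        n = Ab.shape[0]
        out[offset:offset + n] = np.linalg.solve(Ab, r[offset:offset + n])
        offset += n
    return out
```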
First we recall that the wire basket space on $T$ has $P := 3 + 3(p-1)$ basis functions, consisting of 3 vertex basis functions and $3(p-1)$ edge basis functions; see Section 11.1.3. We denote these basis functions by $\widehat\phi_j$, $j = 1,\dots,P$, and let $M$ be the mass matrix defined by
$$M_{ij} := \int_{\partial T} \widehat\phi_i(x)\,\widehat\phi_j(x)\,dx, \qquad i, j = 1,\dots,P.$$
Let $\widehat{\boldsymbol v}$, $\widehat{\boldsymbol w}$, and $\widehat{\boldsymbol z}$ (respectively) be the coefficient vectors in the representations (with respect to the basis functions $\widehat\phi_j$) of the polynomials $\widehat v$, $\widehat w$, and the constant function 1 (respectively). Then, noting that $1_W = 1$ (since $\widetilde I_W$ reproduces constants; see (11.33)), we have
\begin{align*}
\big\langle \widehat v - \mu_{\partial T}(\widehat v),\, \widehat w - \mu_{\partial T}(\widehat w) \big\rangle_{L^2(\partial T)} &= \langle \widehat v, \widehat w\rangle_{L^2(\partial T)} - \mu_{\partial T}(\widehat w)\,\langle \widehat v, 1\rangle_{L^2(\partial T)} - \mu_{\partial T}(\widehat v)\,\langle \widehat w, 1\rangle_{L^2(\partial T)} + \mu_{\partial T}(\widehat v)\,\mu_{\partial T}(\widehat w)\,\langle 1, 1\rangle_{L^2(\partial T)} \\
&= \widehat{\boldsymbol w}^\top M \widehat{\boldsymbol v} - \mu_{\partial T}(\widehat w)\,\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol v} - \mu_{\partial T}(\widehat v)\,\widehat{\boldsymbol w}^\top M \widehat{\boldsymbol z} + \mu_{\partial T}(\widehat v)\,\mu_{\partial T}(\widehat w)\,\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol z}.
\end{align*}
Simple calculations reveal
$$|\partial T| = \int_{\partial T} 1\,dt = \int_{\partial T} 1_W \cdot 1_W\,dt = \widehat{\boldsymbol z}^\top M \widehat{\boldsymbol z},$$
so that
$$\mu_{\partial T}(\widehat v) = \frac{1}{|\partial T|} \int_{\partial T} 1 \cdot \widehat v = \frac{\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol v}}{\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol z}}.$$
Hence, since $M$ is symmetric,
\begin{align*}
\big\langle \widehat v - \mu_{\partial T}(\widehat v),\, \widehat w - \mu_{\partial T}(\widehat w) \big\rangle_{L^2(\partial T)} &= \widehat{\boldsymbol w}^\top M \widehat{\boldsymbol v} - \frac{(\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol v})(\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol w})}{\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol z}} \\
&= \widehat{\boldsymbol w}^\top M \widehat{\boldsymbol v} - \frac{\widehat{\boldsymbol w}^\top (M\widehat{\boldsymbol z})(M\widehat{\boldsymbol z})^\top \widehat{\boldsymbol v}}{\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol z}} = \widehat{\boldsymbol w}^\top \Big( M - \frac{(M\widehat{\boldsymbol z})(M\widehat{\boldsymbol z})^\top}{\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol z}} \Big) \widehat{\boldsymbol v}.
\end{align*}
Consequently, the $(i,j)$ entry of the submatrix $A_W$ defined by $a_0(\cdot,\cdot)$ is computed as follows:
$$A_{W,ij} = a_0(\widehat\phi_i, \widehat\phi_j) = (1+\log p)^3\,\big\langle \widehat\phi_i - \mu_{\partial T}(\widehat\phi_i),\, \widehat\phi_j - \mu_{\partial T}(\widehat\phi_j) \big\rangle_{L^2(\partial T)}$$
$$= (1+\log p)^3\,\boldsymbol e_j^\top \Big( M - \frac{(M\widehat{\boldsymbol z})(M\widehat{\boldsymbol z})^\top}{\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol z}} \Big) \boldsymbol e_i,$$
where $\boldsymbol e_i = (0,\dots,1,\dots,0)^\top$ denotes the $i$th canonical vector in $\mathbb R^P$. Thus
$$A_W = (1+\log p)^3 \Big( M - \frac{(M\widehat{\boldsymbol z})(M\widehat{\boldsymbol z})^\top}{\widehat{\boldsymbol z}^\top M \widehat{\boldsymbol z}} \Big).$$
By using the affine transformations and the assembly technique, the matrix $A_W$ on the whole of $\Gamma$ can be constructed.

Remark 11.2. To simplify the computation, the submatrix $A_W$ can instead be defined by
$$A_W = (1+\log p)^3 \Big( D - \frac{(D\widehat{\boldsymbol z})(D\widehat{\boldsymbol z})^\top}{\widehat{\boldsymbol z}^\top D \widehat{\boldsymbol z}} \Big),$$
where $D$ is the diagonal matrix whose entries are the diagonal entries of $M$. The matrix $A_W$ in this case is a rank-one perturbation of the diagonal part $D$ of the mass matrix $M$. The two matrices $M$ and $D$ have equivalent spectra; see [169].
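Both formulas for $A_W$ are rank-one modifications that annihilate the constant function: $A_W \widehat{\boldsymbol z} = 0$ by construction, for any vector $\widehat{\boldsymbol z}$. The sketch below (NumPy, illustrative only; $M$ and $\widehat{\boldsymbol z}$ are generic stand-ins, not an actual wire basket mass matrix) assembles both variants and checks this kernel property:

```python
import numpy as np

def wire_basket_matrix(M, z, p):
    # A_W = (1 + log p)^3 * (M - (Mz)(Mz)^T / (z^T M z))
    Mz = M @ z
    return (1 + np.log(p)) ** 3 * (M - np.outer(Mz, Mz) / (z @ Mz))

def wire_basket_matrix_diag(M, z, p):
    # Remark 11.2: replace M by its diagonal part D (rank-one perturbation of D).
    D = np.diag(np.diag(M))
    Dz = D @ z
    return (1 + np.log(p)) ** 3 * (D - np.outer(Dz, Dz) / (z @ Dz))

# Stand-in SPD "mass matrix" and coefficient vector of the constant function.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
M = B @ B.T + 6 * np.eye(6)   # symmetric positive definite
z = np.ones(6)
AW = wire_basket_matrix(M, z, p=4)
assert np.allclose(AW @ z, 0.0, atol=1e-9)  # constants lie in the kernel of A_W
```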
                 CG                     PCG, Preconditioner I     PCG, Preconditioner II
 p   DoF |   κ(A)       # iter |   κ(P_I)       # iter |   κ(P_II)      # iter
 1     1 | 0.100E+01        1  | 0.100E+01          1  | 0.100E+01          1
 2     9 | 0.781E+01        6  | 0.561E+01          7  | 0.100E+01          1
 3    25 | 0.734E+02       29  | 0.443E+02         18  | 0.547E+01         11
 4    49 | 0.739E+03      100  | 0.745E+02         34  | 0.796E+01         17
 5    81 | 0.102E+05      315  | 0.112E+03         44  | 0.103E+02         22
 6   121 | 0.189E+06      993  | 0.150E+03         56  | 0.124E+02         27

Table 11.1 Condition numbers and numbers of iterations for the p-version without and with preconditioning
In Figure 11.7 we present the condition numbers of the preconditioned matrix using Preconditioner I. The numerical results for Preconditioner II behave analogously. In this figure, we also plot the curve of (1 + log p)4 and observe that the numerical results behave similarly in the given range of p. In Figure 11.8 we present condition numbers of the preconditioned matrix PI for the h-version and different polynomial degrees. The results confirm the asymptotic independence of the condition numbers on h.
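The growth claim can also be checked directly against the values in Table 11.1: taking the natural logarithm in $(1+\log p)$ (an assumption on our part), the ratios $\kappa(P_I)/(1+\log p)^4$ stay essentially constant for $p \ge 3$:

```python
import math

# kappa(P_I) values taken from Table 11.1; natural logarithm assumed.
kappa_PI = {3: 44.3, 4: 74.5, 5: 112.0, 6: 150.0}
ratios = {p: k / (1 + math.log(p)) ** 4 for p, k in kappa_PI.items()}
# The ratios vary only mildly (roughly 2.3-2.5), consistent with
# kappa(P_I) = O((1 + log p)^4) in this range of p.
assert max(ratios.values()) / min(ratios.values()) < 1.2
```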
Fig. 11.7 Condition number of the preconditioned Galerkin matrix, p-version. (The plot shows the condition numbers for Preconditioner I together with the curve (1 + log p)^4, for p = 1, ..., 6.)
Fig. 11.8 Condition number of the preconditioned Galerkin matrix, h-version. (The plot shows the condition numbers for p = 4, 5, 6 against the number of unknowns.)
Chapter 12
Diagonal Scaling Preconditioner and Locally-Refined Meshes
In this chapter, we study the effect of mesh refinements on the condition numbers of the stiffness matrices arising from the h-version Galerkin boundary element discretisation of the weakly-singular and hypersingular integral equations, when quasi-uniform and locally refined meshes are used. The condition numbers of the diagonally scaled matrices are also studied. So far in this monograph, our claim that the condition numbers for the h-version grow significantly with the number of degrees of freedom (the reason for the study of preconditioners) has not been analytically supported. The first goal of this chapter is to address this issue. Moreover, when the solution to the boundary integral equation has singularities, it is more efficient to refine the mesh adaptively. Indeed, a uniform triangulation increases the degrees of freedom significantly, which is not suitable for the dense matrices resulting from the boundary element discretisation. An adaptively refined partition, however, may contain elements of widely different sizes. As a consequence, the condition number may grow even more significantly, which means the advantages accrued by adaptivity are dissipated by the cost of solving a highly ill-conditioned linear system. This chapter also confirms the effect of local mesh refinement on the condition number. Furthermore, the chapter shows that a simple diagonal scaling preconditioner is a remedy. This preconditioner is in fact an additive Schwarz preconditioner with the subspaces in the decomposition being the spans of individual basis functions. Section 12.3 presents results for shape-regular refinements. These results were first obtained in [5], where more general operators besides the hypersingular and weakly-singular integral operators are studied. We also present in Section 12.4 results obtained in [86], where anisotropic mesh refinements are studied. Numerical results are reported in Section 12.5.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 E. P. Stephan and T. Tran, Schwarz Methods and Multilevel Preconditioners for Boundary Element Methods, https://doi.org/10.1007/978-3-030-79283-1_12
12.1 Problem Setting

In this chapter, $\Gamma$ is a manifold in $\mathbb R^d$, $d = 2, 3$. When $d = 2$, the manifold $\Gamma$ is a bounded open curve in $\mathbb R^2$, and when $d = 3$ it is an open surface in $\mathbb R^3$. We consider the problem: Find $\varphi \in \widetilde H^m(\Gamma)$ satisfying
$$a(\varphi, \psi) = \langle \ell, \psi\rangle \qquad \forall \psi \in \widetilde H^m(\Gamma), \qquad (12.1)$$
where
$$a(\cdot,\cdot) = \begin{cases} a_V(\cdot,\cdot) & \text{if } m = -1/2, \\ a_W(\cdot,\cdot) & \text{if } m = 1/2. \end{cases} \qquad (12.2)$$
Let $P_h$ be a partition of $\Gamma$ into boundary elements $K$, as defined in Chapter 3 (for $d = 2$) and Chapters 10–11 (for $d = 3$). Let $h_K$ be the diameter of $K$. Let $\mathcal N_h$ be an indexing set of nodal points $x_k$ in the sense of Ciarlet [60]. This means, for piecewise constant elements ($m = -1/2$), $x_k$ is a midpoint of an element, while for continuous piecewise linear elements ($m = 1/2$), $x_k$ is a vertex of an element. Denoting by $\{\phi_k\}_{k\in\mathcal N_h}$ the set of nodal basis functions defined on $P_h$, i.e.,
$$\phi_k(x_\ell) = \delta_{k\ell}, \qquad k, \ell \in \mathcal N_h, \qquad (12.3)$$
we define $\Gamma_k$ to be the support of $\phi_k$ and $h_k$ to be the average of $h_K$, $K \subset \Gamma_k$. The boundary element space is defined by
$$V_h := \operatorname{span}\{\phi_k : k \in \mathcal N_h\}.$$
Let $A$ be the stiffness matrix of size $N \times N$ arising from the Galerkin approximation of (12.1). Here $N = \operatorname{card}(\mathcal N_h)$. In the subsequent sections, we study the effect of local mesh refinements on the condition number of $A$. Recall that
$$\kappa(A) = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)},$$
where
$$\lambda_{\max}(A) = \max_{\boldsymbol v\in\mathbb R^N} \frac{\langle A\boldsymbol v, \boldsymbol v\rangle_{\mathbb R^N}}{\langle \boldsymbol v, \boldsymbol v\rangle_{\mathbb R^N}} \qquad\text{and}\qquad \lambda_{\min}(A) = \min_{\boldsymbol v\in\mathbb R^N} \frac{\langle A\boldsymbol v, \boldsymbol v\rangle_{\mathbb R^N}}{\langle \boldsymbol v, \boldsymbol v\rangle_{\mathbb R^N}}.$$
As in previous chapters, for the analysis it is convenient to evaluate the extremal eigenvalues of $A$ in terms of functions in the boundary element spaces. The isomorphism $\boldsymbol v \mapsto v$ from $\mathbb R^N$ to $V_h$ is defined as follows. For $k = 1,\dots,N$, let $c_k$ be the $k$-th component of the vector $\boldsymbol v$. We define
$$v_k := c_k \phi_k \qquad\text{and}\qquad v := \sum_{k=1}^{N} v_k = \sum_{k=1}^{N} c_k \phi_k. \qquad (12.4)$$
Then, see (12.2),
$$\langle A\boldsymbol v, \boldsymbol v\rangle_{\mathbb R^N} = a(v,v) \simeq \|v\|_{H_I^m(\Gamma)}^2 \qquad (12.5)$$
and, recalling (12.3),
$$\langle \boldsymbol v, \boldsymbol v\rangle_{\mathbb R^N} = \sum_{k=1}^{N} c_k^2 = \sum_{k=1}^{N} c_k^2\,\phi_k(x_k)^2 = \sum_{k=1}^{N} v_k(x_k)^2 = \sum_{k=1}^{N} v(x_k)^2.$$
Therefore,
$$\lambda_{\max}(A) \simeq \max_{v\in V_h\setminus\{0\}} \frac{\|v\|_{H_I^m(\Gamma)}^2}{\sum_{k=1}^{N} |v(x_k)|^2} \qquad\text{and}\qquad \lambda_{\min}(A) \simeq \min_{v\in V_h\setminus\{0\}} \frac{\|v\|_{H_I^m(\Gamma)}^2}{\sum_{k=1}^{N} |v(x_k)|^2}, \qquad (12.6)$$
where we recall that $m = 1/2$ in the case of the hypersingular integral equation, and $m = -1/2$ in the case of the weakly-singular integral equation.
12.2 Preconditioning by Diagonal Scaling

We denote by $D$ the diagonal matrix whose diagonal entries are those of $A$, i.e.,

\[ D_{ij} = A_{ij}\,\delta_{ij}, \quad i, j = 1, \ldots, N, \]

where $\delta_{ij}$ denotes the Kronecker delta. Taking $D$ as the preconditioner, the preconditioned matrix becomes

\[ \widetilde{A} = D^{-1/2} A D^{-1/2}; \]

see Subsection 2.2.1. It is shown in Lemma 2.8 that

\[ \kappa(\widetilde{A}) = \frac{\lambda_{\max}(\widetilde{A})}{\lambda_{\min}(\widetilde{A})} = \frac{\lambda_{\max}(\widehat{A})}{\lambda_{\min}(\widehat{A})}, \]

where

\[ \widehat{A} = D^{-1}A, \quad \lambda_{\max}(\widehat{A}) = \max_{\mathbf{v}\in\mathbb{R}^N} \frac{\langle A\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^N}}{\langle D\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^N}}, \quad \lambda_{\min}(\widehat{A}) = \min_{\mathbf{v}\in\mathbb{R}^N} \frac{\langle A\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^N}}{\langle D\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^N}}. \]

With $v$ and $v_k$ defined in (12.4) we have

\[ \langle D\mathbf{v},\mathbf{v}\rangle_{\mathbb{R}^N} = \sum_{k=1}^N a(v_k, v_k) \simeq \sum_{k=1}^N \|v_k\|^2_{\widetilde{H}^m(\Gamma)}. \]

Therefore, due to (12.5),

\[ \lambda_{\max}(\widehat{A}) \simeq \max_{v\in V} \frac{\|v\|^2_{\widetilde{H}^m(\Gamma)}}{\sum_{k=1}^N \|v_k\|^2_{\widetilde{H}^m(\Gamma)}} \quad\text{and}\quad \lambda_{\min}(\widehat{A}) \simeq \min_{v\in V} \frac{\|v\|^2_{\widetilde{H}^m(\Gamma)}}{\sum_{k=1}^N \|v_k\|^2_{\widetilde{H}^m(\Gamma)}}. \tag{12.7} \]
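The effect of diagonal scaling can be observed on any symmetric positive definite stiffness matrix. The following Python sketch is an illustration only, not the boundary element matrices of this chapter: it uses a one-dimensional finite element Laplacian on a geometrically graded grid as a stand-in, and compares $\kappa(A)$ with $\kappa(D^{-1/2}AD^{-1/2})$. All function names here are ours.

```python
import numpy as np

def stiffness_1d(h):
    """Assemble the 1D FEM Laplacian stiffness matrix for interior hat
    functions on a mesh with element sizes h[0..n-1] (Dirichlet ends)."""
    N = len(h) - 1                        # number of interior nodes
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 1.0 / h[i] + 1.0 / h[i + 1]
        if i + 1 < N:
            A[i, i + 1] = A[i + 1, i] = -1.0 / h[i + 1]
    return A

def cond(A):
    ev = np.linalg.eigvalsh(A)            # ascending eigenvalues
    return ev[-1] / ev[0]

# geometrically graded element sizes, normalised to sum to 1
h = 2.0 ** -np.arange(1, 17)
h = h / h.sum()

A = stiffness_1d(h)
d = np.diag(A)
A_tilde = A / np.sqrt(np.outer(d, d))     # D^{-1/2} A D^{-1/2}

print(f"kappa(A)       = {cond(A):.3e}")
print(f"kappa(A_tilde) = {cond(A_tilde):.3e}")
```

On this strongly graded mesh the scaled matrix has unit diagonal and a much smaller condition number, mirroring the restoration effect that Theorems 12.1–12.4 quantify for the boundary element matrices.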
This preconditioner is in fact an additive Schwarz preconditioner. Indeed, let $V_j$ be the space spanned by $\{\varphi_j\}$, where we recall that $\{\varphi_j : j = 1, \ldots, N\}$ is the set of brick functions in the case $m = -1/2$, and hat functions in the case $m = 1/2$. Then the subspace decomposition

\[ V = V_1 + \cdots + V_N \tag{12.8} \]

defines an additive Schwarz preconditioner which yields exactly the diagonal scaling matrix $D$.

In the remainder of this chapter, we find $c_{\max}$, $c_{\min}$, $C_{\max}$ and $C_{\min}$ such that for any $v \in V \subset \widetilde{H}^m(\Gamma)$ of the form

\[ v = \sum_{k\in\mathcal{N}} v_k, \quad v_k := v(x_k)\varphi_k, \tag{12.9} \]

the following inequalities hold:

\[ \|v\|^2_{\widetilde{H}^m(\Gamma)} \le c_{\max} \sum_{k=1}^N |v(x_k)|^2, \qquad c_{\min} \sum_{k=1}^N |v(x_k)|^2 \le \|v\|^2_{\widetilde{H}^m(\Gamma)}, \tag{12.10} \]

and

\[ \|v\|^2_{\widetilde{H}^m(\Gamma)} \le C_{\max} \sum_{k=1}^N \|v_k\|^2_{\widetilde{H}^m(\Gamma)}, \qquad C_{\min} \sum_{k=1}^N \|v_k\|^2_{\widetilde{H}^m(\Gamma)} \le \|v\|^2_{\widetilde{H}^m(\Gamma)}, \tag{12.11} \]

for $m = \pm 1/2$. Bounds for other values of $m$ can be found in [5]. We recall that (12.11) provides the coercivity and stability of the decomposition (12.8). We will consider shape-regular mesh refinements in Section 12.3 and anisotropic mesh refinements in Section 12.4.
12.3 Shape-Regular Mesh Refinements

In this section, we consider mesh refinements which yield shape-regular elements. We present only results for the hypersingular and weakly-singular integral equations. More general results can be found in [5]. We make the following assumptions on $\mathcal{P}_h$.

Assumption 12.1. The family of partitions $\{\mathcal{P}_h : h > 0\}$ is nondegenerate or shape-regular, i.e., if $h_K$ is the diameter of an element $K$ and $\rho_K$ is the diameter of the largest inscribed ball in $K$, then

\[ \frac{h_K}{\rho_K} \le c \quad \forall K \in \mathcal{P}_h,\ \forall h > 0. \]

Assumption 12.1 implies that the partition $\mathcal{P}_h$ is locally quasi-uniform: the ratio of the diameters of any pair of adjacent elements is uniformly bounded. We note that a locally quasi-uniform partition may still contain elements of greatly different sizes. In other words, if $h_{\max}$ (respectively, $h_{\min}$) denotes the diameter of the largest (respectively, smallest) element in $\mathcal{P}_h$, the ratio $h_{\max}/h_{\min}$ may be arbitrarily large.

Assumption 12.2. The number of elements $K$ intersecting $\Gamma_k$ is uniformly bounded, i.e., if for any $K \in \mathcal{P}_h$

\[ \mathcal{N}_h(K) := \{k \in \mathcal{N}_h : \Gamma_k \cap K \neq \emptyset\}, \qquad M_h := \max\{\operatorname{card}(\mathcal{N}_h(K)) : K \in \mathcal{P}_h\}, \tag{12.12} \]

then

\[ M_h \le M \quad \forall h > 0. \tag{12.13} \]

Assumption 12.2 implies the following properties. The set $\mathcal{N}_h$ can be partitioned into disjoint subsets $\mathcal{N}_{h,1}, \ldots, \mathcal{N}_{h,L_h}$,

\[ \mathcal{N}_h = \mathcal{N}_{h,1} \cup \cdots \cup \mathcal{N}_{h,L_h}, \qquad \mathcal{N}_{h,\ell} \cap \mathcal{N}_{h,\ell'} = \emptyset,\ \ell \neq \ell', \tag{12.14} \]

such that for all $\ell = 1, \ldots, L_h$,

\[ k, k' \in \mathcal{N}_{h,\ell} \implies \operatorname{interior}(\operatorname{supp}\varphi_k) \cap \operatorname{interior}(\operatorname{supp}\varphi_{k'}) = \emptyset. \tag{12.15} \]

The constants $L_h$ are uniformly bounded, i.e., $L_h \le L$ for all $h > 0$. For instance, if $\Gamma$ is an open curve and the Galerkin subspace consists of continuous piecewise polynomials of degree $p$, then $M = L = p + 1$. Due to the above uniform boundedness, in the sequel we omit the subscript $h$ to simplify notation.
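The partition (12.14)–(12.15) is a colouring of the basis functions: two indices may share a colour only if the interiors of their supports are disjoint. For continuous piecewise linear functions on a curve this can be sketched with a greedy interval colouring; the routine below and its names are ours, not from the text.

```python
def colour_supports(supports):
    """Greedy colouring: supports (open intervals (a, b)) sharing a colour
    must have disjoint interiors, as required by (12.15)."""
    colours = []                      # colours[c] = list of indices in class c
    for k, (a, b) in enumerate(supports):
        for cls in colours:
            # open intervals have disjoint interiors iff they do not overlap
            if all(b <= supports[j][0] or supports[j][1] <= a for j in cls):
                cls.append(k)
                break
        else:
            colours.append([k])
    return colours

# hat functions on a uniform mesh of [0, 1] with n elements:
# phi_k is supported on (x_{k-1}, x_{k+1}), k = 1, ..., n-1
n = 10
x = [i / n for i in range(n + 1)]
supports = [(x[k - 1], x[k + 1]) for k in range(1, n)]
classes = colour_supports(supports)
print(len(classes))   # L = p + 1 = 2 colours for piecewise linears (p = 1)
```

The alternating odd/even pattern of hat functions yields exactly the two colour classes predicted by $M = L = p + 1$ with $p = 1$.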
12.3.1 Coercivity of the decomposition

In this section, we find the constants $c_{\max}$ and $C_{\max}$ in (12.10) and (12.11). These estimates have the same effect as coercivity in the decomposition of $v$ by additive Schwarz operators. First we need the following lemma.

Lemma 12.1. Let $v \in V$ be decomposed by (12.9). Then, for $q \in [1, \infty)$,

\[ \sum_{k\in\mathcal{N}} \|v\|^q_{L_q(\Gamma_k)} \lesssim \|v\|^q_{L_q(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} \|v_k\|^q_{L_q(\Gamma)}. \]

Proof. The left inequality is a straightforward consequence of (12.14) and (12.15). Indeed,

\[ \sum_{k\in\mathcal{N}} \|v\|^q_{L_q(\Gamma_k)} = \sum_{\ell=1}^L \sum_{k\in\mathcal{N}_\ell} \|v\|^q_{L_q(\Gamma_k)} \le \sum_{\ell=1}^L \|v\|^q_{L_q(\Gamma)} = L \|v\|^q_{L_q(\Gamma)}. \]

To prove the right inequality, let $x \in K \in \mathcal{P}$. Then, noting (12.12),

\[ |v(x)| = \Big|\sum_{k\in\mathcal{N}} v_k(x)\Big| = \Big|\sum_{k\in\mathcal{N}(K)} v_k(x)\Big| \le \sum_{k\in\mathcal{N}(K)} |v_k(x)|. \]

It follows from Hölder's inequality and (12.13) that

\[ |v(x)|^q \lesssim \sum_{k\in\mathcal{N}(K)} |v_k(x)|^q, \]

where the constant is independent of $q$. Integrating over $x \in K$ and summing over all $K \in \mathcal{P}$ give

\[ \|v\|^q_{L_q(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} \|v_k\|^q_{L_q(\Gamma)}, \]

completing the proof. $\square$
The next lemma gives explicit forms of $c_{\max}$ and $C_{\max}$ in (12.10) and (12.11) for $m = 1/2$.

Lemma 12.2. Let $v$ be decomposed by (12.9). Then for $d = 2, 3$,

\[ \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \tag{12.16} \]

and

\[ \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim h_{\max}^{d-2} \sum_{k\in\mathcal{N}} |v(x_k)|^2. \tag{12.17} \]

Inequality (12.16) is the coercivity of the decomposition (12.8) when $m = 1/2$.

Proof. By (12.14) and the Cauchy–Schwarz inequality,

\[ \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} = \Big\|\sum_{\ell=1}^L \sum_{k\in\mathcal{N}_\ell} v_k\Big\|^2_{\widetilde{H}^{1/2}(\Gamma)} \le \Big(\sum_{\ell=1}^L \Big\|\sum_{k\in\mathcal{N}_\ell} v_k\Big\|_{\widetilde{H}^{1/2}(\Gamma)}\Big)^2 \le L \sum_{\ell=1}^L \Big\|\sum_{k\in\mathcal{N}_\ell} v_k\Big\|^2_{\widetilde{H}^{1/2}(\Gamma)}. \]

Noting (12.15), we can invoke Theorem A.11 to obtain

\[ \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \sum_{\ell=1}^L \sum_{k\in\mathcal{N}_\ell} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} = \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)}, \]

proving (12.16). The above inequality and Lemma C.13 also give

\[ \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} |v(x_k)|^2 \|\varphi_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} |v(x_k)|^2 h_k^{d-2} \le h_{\max}^{d-2} \sum_{k\in\mathcal{N}} |v(x_k)|^2, \]

proving (12.17). $\square$
The following lemma gives explicit forms of $c_{\max}$ and $C_{\max}$ in (12.10) and (12.11) for $m = -1/2$.

Lemma 12.3. Let $v$ be decomposed by (12.9). Then

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \le c_{\max} \sum_{k\in\mathcal{N}} |v(x_k)|^2 \quad\text{and}\quad \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \le C_{\max} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)}, \]

where

\[ c_{\max} \lesssim \begin{cases} N h_{\max}^2 \big(1 + |\log(N h_{\max})|\big), & d = 2, \\ N^{1/2} h_{\max}^3, & d = 3, \end{cases} \tag{12.18} \]

and

\[ C_{\max} \lesssim \begin{cases} N \dfrac{1 + |\log(N h_{\min})|}{1 + |\log h_{\max}|}, & d = 2, \\ N^{1/2}, & d = 3. \end{cases} \tag{12.19} \]

The inequality involving (12.19) is the coercivity of the decomposition (12.8) in the case $m = -1/2$.

Proof. Consider first the case $d = 2$. Applying Lemma A.18 Part (iv) gives, for any $q \in (1, 2)$,

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \simeq \|v\|^2_{H^{-1/2}(\Gamma)} \lesssim \frac{q}{q-1} \|v\|^2_{L_q(\Gamma)} \lesssim \frac{q}{q-1} \Big(\sum_{k\in\mathcal{N}} \|v_k\|^q_{L_q(\Gamma)}\Big)^{2/q}, \tag{12.20} \]

where in the last step we used Lemma 12.1. Here the constants are independent of $q$. Lemma C.12 gives

\[ \sum_{k\in\mathcal{N}} \|v_k\|^q_{L_q(\Gamma)} = \sum_{k\in\mathcal{N}} |v(x_k)|^q \|\varphi_k\|^q_{L_q(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} h_k |v(x_k)|^q \le h_{\max} \sum_{k\in\mathcal{N}} |v(x_k)|^q. \]

Using Hölder's inequality

\[ \sum_{k=1}^N a_k b_k \le \Big(\sum_{k=1}^N a_k^p\Big)^{1/p} \Big(\sum_{k=1}^N b_k^{p'}\Big)^{1/p'} \tag{12.21} \]

with $a_k = 1$, $b_k = |v(x_k)|^q$, $p = \frac{2}{2-q}$, $p' = \frac{2}{q}$, we deduce

\[ \sum_{k\in\mathcal{N}} \|v_k\|^q_{L_q(\Gamma)} \lesssim h_{\max} N^{1-q/2} \Big(\sum_{k\in\mathcal{N}} |v(x_k)|^2\Big)^{q/2}. \tag{12.22} \]

This and (12.20) yield

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \frac{q}{q-1}\, h_{\max}^{2/q} N^{2/q-1} \sum_{k\in\mathcal{N}} |v(x_k)|^2. \]

Let $q'$ be the conjugate of $q$, i.e., $1/q + 1/q' = 1$. Then

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim q'\, h_{\max}^2 h_{\max}^{-2/q'} N^{1-2/q'} \sum_{k\in\mathcal{N}} |v(x_k)|^2 = q' (N h_{\max})^{-2/q'} N h_{\max}^2 \sum_{k\in\mathcal{N}} |v(x_k)|^2. \tag{12.23} \]

Let

\[ A = q' (N h_{\max})^{-2/q'} \quad\text{and}\quad q' = \max\{3, |\log(N h_{\max})|\}. \]

Note that $q' \ge 3$ ensures $q = q'/(q'-1) \le 3/2 < 2$, as required in (12.22). We consider different cases depending on the value of $N h_{\max}$:

• If $N h_{\max} > e^3$, then $q' = \log(N h_{\max})$, so that $(N h_{\max})^{-2/q'} = e^{-2}$, implying $A \simeq \log(N h_{\max}) = |\log(N h_{\max})|$.

• If $1 < N h_{\max} \le e^3$, then $q' = 3$, so that $(N h_{\max})^{-2/q'} = (N h_{\max})^{-2/3} < 1$, implying $A < 3$.

• If $e^{-3} < N h_{\max} \le 1$, then $q' = 3$, so that $1 \le (N h_{\max})^{-2/3} < e^2$. Hence $A < 3e^2$.

• If $0 < N h_{\max} \le e^{-3}$, then $q' = |\log(N h_{\max})|$, so that $(N h_{\max})^{-2/q'} = e^2$, implying $A \simeq |\log(N h_{\max})|$.

In all cases, by choosing $q' = \max\{3, |\log(N h_{\max})|\}$ we obtain $A \lesssim 1 + |\log(N h_{\max})|$, so that (12.18) (for $d = 2$) follows from (12.23).

To obtain (12.19) for $d = 2$, we use Lemma C.12 and Hölder's inequality (12.21) again, now with $a_k = h_k^{1-q}$, $b_k = |v(x_k)|^q h_k^q$, $p = \frac{2}{2-q}$, and $p' = \frac{2}{q}$, to obtain

\[ \sum_{k\in\mathcal{N}} \|v_k\|^q_{L_q(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} |v(x_k)|^q h_k \le \Big(\sum_{k\in\mathcal{N}} h_k^{-2(q-1)/(2-q)}\Big)^{1-q/2} \Big(\sum_{k\in\mathcal{N}} |v(x_k)|^2 h_k^2\Big)^{q/2}. \tag{12.24} \]

Since $1 < q < 2$ we have $2(q-1)/(2-q) > 0$, so that the first factor on the right-hand side is bounded as

\[ \Big(\sum_{k\in\mathcal{N}} h_k^{-2(q-1)/(2-q)}\Big)^{1-q/2} \lesssim N^{1-q/2} h_{\min}^{1-q} = N^{q/2} (N h_{\min})^{1-q}, \]

while the second factor is bounded, using Lemma C.13, by

\[ \sum_{k\in\mathcal{N}} |v(x_k)|^2 h_k^2 \lesssim \sum_{k\in\mathcal{N}} \frac{|v(x_k)|^2 \|\varphi_k\|^2_{H^{-1/2}(\Gamma)}}{1 + |\log h_k|} \le \frac{1}{1 + |\log h_{\max}|} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)}. \]

Therefore, (12.20) and (12.24) imply

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \frac{q}{q-1}\, N (N h_{\min})^{2(1-q)/q} \frac{1}{1 + |\log h_{\max}|} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)} = q' (N h_{\min})^{-2/q'} \frac{N}{1 + |\log h_{\max}|} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)}, \tag{12.25} \]

where $q'$ is the conjugate of $q$, i.e., $1/q + 1/q' = 1$. Letting $q' = \max\{3, |\log(N h_{\min})|\}$ and repeating the argument in the proof of (12.18) above, we deduce

\[ q' (N h_{\min})^{-2/q'} \lesssim 1 + |\log(N h_{\min})|. \]

Thus (12.19) (for $d = 2$) follows from (12.25).

Now consider the case $d = 3$. Lemma A.18 Part (iii) (with $q = 4/3$) and Lemma 12.1 give

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \simeq \|v\|^2_{H^{-1/2}(\Gamma)} \lesssim \|v\|^2_{L_{4/3}(\Gamma)} \lesssim \Big(\sum_{k\in\mathcal{N}} \|v_k\|^{4/3}_{L_{4/3}(\Gamma)}\Big)^{3/2}. \]

Using Hölder's inequality again yields

\[ \sum_{k\in\mathcal{N}} \|v_k\|^{4/3}_{L_{4/3}(\Gamma)} \le N^{1/3} \Big(\sum_{k\in\mathcal{N}} \|v_k\|^2_{L_{4/3}(\Gamma)}\Big)^{2/3}, \]

so that

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim N^{1/2} \sum_{k\in\mathcal{N}} \|v_k\|^2_{L_{4/3}(\Gamma)}. \]

On the one hand, Lemma C.12 gives

\[ \|v_k\|^2_{L_{4/3}(\Gamma)} = |v(x_k)|^2 \|\varphi_k\|^2_{L_{4/3}(\Gamma)} \lesssim |v(x_k)|^2 h_k^3 \le |v(x_k)|^2 h_{\max}^3, \]

and on the other hand Lemma C.13 gives

\[ \|v_k\|^2_{L_{4/3}(\Gamma)} \lesssim |v(x_k)|^2 h_k^3 \lesssim |v(x_k)|^2 \|\varphi_k\|^2_{H^{-1/2}(\Gamma)} \simeq \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)}. \]

Therefore,

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim N^{1/2} h_{\max}^3 \sum_{k\in\mathcal{N}} |v(x_k)|^2 \quad\text{and}\quad \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim N^{1/2} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)}. \quad\square \]
12.3.2 Stability of the decomposition

We now find the constants $c_{\min}$ and $C_{\min}$ in (12.10) and (12.11). These estimates have the same effect as stability in the decomposition of $v$ by additive Schwarz operators. First we prove the result for the $\widetilde{H}^{1/2}$-norm.

Lemma 12.4. Let $v$ be decomposed by (12.9). Then

\[ c_{\min} \sum_{k\in\mathcal{N}} |v(x_k)|^2 \le \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \quad\text{and}\quad C_{\min} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \le \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)}, \]

where

\[ c_{\min}^{-1} \lesssim \begin{cases} N\big(1 + |\log(N h_{\min})|\big), & d = 2, \\ N^{1/2} h_{\min}^{-1}, & d = 3, \end{cases} \qquad C_{\min}^{-1} \lesssim \begin{cases} N\big(1 + |\log(N h_{\min})|\big), & d = 2, \\ N^{1/2}, & d = 3. \end{cases} \]

The inequality involving $C_{\min}$ gives the stability of the decomposition (12.8).

Proof. Recalling that $h_k$ is the average of $h_K$ for $K \subset \Gamma_k$ and that the number of elements $K$ in $\Gamma_k$ is bounded, as assumed in (12.13), we have $\operatorname{meas}(\Gamma_k) \simeq h_k^{d-1}$. Consequently, if $\widehat\Gamma$ is a reference element and $\widehat v$ is defined by $\widehat v(\widehat x) = v(x)$ for $x \in \Gamma_k$ and $\widehat x \in \widehat\Gamma$, then by a change of variables in the integration we have

\[ \|\widehat v\|_{L_q(\widehat\Gamma)} \simeq h_k^{-(d-1)/q} \|v\|_{L_q(\Gamma_k)}, \quad 1 \le q < \infty, \]

with the constants independent of $q$. Therefore, using the equivalence of norms on a finite-dimensional space, we obtain

\[ |v(x_k)|^2 \le \|v\|^2_{L_\infty(\Gamma_k)} = \|\widehat v\|^2_{L_\infty(\widehat\Gamma)} \lesssim \|\widehat v\|^2_{L_q(\widehat\Gamma)} \simeq h_k^{-2(d-1)/q} \|v\|^2_{L_q(\Gamma_k)}, \tag{12.26} \]

where the constants are independent of $q$. This implies

\[ \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim \sum_{k\in\mathcal{N}} h_k^{-2(d-1)/q} \|v\|^2_{L_q(\Gamma_k)}. \tag{12.27} \]

For $d = 2$, we deduce from (12.27) and Hölder's inequality (with $a_k = h_k^{-2/q}$, $b_k = \|v\|^2_{L_q(\Gamma_k)}$, $p = \frac{q}{q-2}$, $p' = \frac{q}{2}$; see (12.21))

\[ \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim \Big(\sum_{k\in\mathcal{N}} h_k^{-2/(q-2)}\Big)^{1-2/q} \Big(\sum_{k\in\mathcal{N}} \|v\|^q_{L_q(\Gamma_k)}\Big)^{2/q}. \]

Lemma 12.1 and Lemma A.18 Part (ii) give

\[ \Big(\sum_{k\in\mathcal{N}} \|v\|^q_{L_q(\Gamma_k)}\Big)^{2/q} \lesssim \|v\|^2_{L_q(\Gamma)} \lesssim q \|v\|^2_{H^{1/2}(\Gamma)} \le q \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)}, \]

so that

\[ \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim q \Big(\sum_{k\in\mathcal{N}} h_k^{-2/(q-2)}\Big)^{1-2/q} \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \le q N^{1-2/q} h_{\min}^{-2/q} \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} = q (N h_{\min})^{-2/q} N \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)}. \]

This holds for any $q > 2$. Repeating the same argument as in the proof of Lemma 12.3 and choosing $q = \max\{3, |\log(N h_{\min})|\}$ we deduce

\[ q (N h_{\min})^{-2/q} \lesssim 1 + |\log(N h_{\min})|, \]

and thus obtain the result for $c_{\min}$ when $d = 2$.

For $d = 3$ and $q = 4$, it follows from (12.27) that

\[ \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim h_{\min}^{-1} \sum_{k\in\mathcal{N}} \|v\|^2_{L_4(\Gamma_k)}. \tag{12.28} \]

For the sum on the right-hand side, we use Hölder's inequality and Lemma 12.1 to obtain

\[ \sum_{k\in\mathcal{N}} \|v\|^2_{L_4(\Gamma_k)} \le N^{1/2} \Big(\sum_{k\in\mathcal{N}} \|v\|^4_{L_4(\Gamma_k)}\Big)^{1/2} \lesssim N^{1/2} \|v\|^2_{L_4(\Gamma)}, \]

which together with Lemma A.18 Part (ii) yields

\[ \sum_{k\in\mathcal{N}} \|v\|^2_{L_4(\Gamma_k)} \lesssim N^{1/2} \|v\|^2_{H^{1/2}(\Gamma)} \le N^{1/2} \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)}. \tag{12.29} \]

Inequalities (12.28) and (12.29) imply $c_{\min}^{-1} \lesssim N^{1/2} h_{\min}^{-1}$ for $d = 3$.

We now prove the result for $C_{\min}$. Due to Lemma C.12,

\[ \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} = \sum_{k\in\mathcal{N}} |v(x_k)|^2 \|\varphi_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} |v(x_k)|^2 h_k^{d-2}. \tag{12.30} \]

For $d = 2$, it follows from (12.30) that $C_{\min}^{-1} \simeq c_{\min}^{-1}$. For $d = 3$, (12.30) and (12.26) yield

\[ \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} h_k^{1-4/q} \|v\|^2_{L_q(\Gamma_k)} \]

for any $q \ge 1$. Choosing $q = 4$ gives

\[ \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} \|v\|^2_{L_4(\Gamma_k)}, \]

which together with (12.29) implies $C_{\min}^{-1} \lesssim N^{1/2}$, completing the proof of the lemma. $\square$
The next lemma provides similar results for the $\widetilde{H}^{-1/2}$-norm.

Lemma 12.5. Let $v$ be decomposed by (12.9). Then

\[ c_{\min} \sum_{k\in\mathcal{N}} |v(x_k)|^2 \le \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \quad\text{and}\quad C_{\min} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \le \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)}, \]

where

\[ c_{\min} \gtrsim h_{\min}^d, \quad d = 2, 3, \qquad\text{and}\qquad C_{\min}^{-1} \lesssim \begin{cases} 1 + |\log h_{\min}|, & d = 2, \\ 1, & d = 3. \end{cases} \]

Proof. Let $x_k \in K$. Then by using Lemma A.6 Part (iii) and the equivalence of norms on a finite-dimensional space, we obtain (noting that the dimension of $\Gamma$ is $d - 1$)

\[ |v(x_k)|^2 \le \|v\|^2_{L_\infty(K)} = \|\widehat v\|^2_{L_\infty(\widehat K)} \lesssim \|\widehat v\|^2_{\widetilde{H}^{-1/2}(\widehat K)} \lesssim h_k^{-d} \|v\|^2_{\widetilde{H}^{-1/2}(K)}. \tag{12.31} \]

Summing over all $k \in \mathcal{N}$ and using Theorem A.10 yield

\[ \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim h_{\min}^{-d} \sum_{K\in\mathcal{P}} \|v\|^2_{\widetilde{H}^{-1/2}(K)} \lesssim h_{\min}^{-d} \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)}, \]

giving the result for $c_{\min}$. To prove the result for $C_{\min}$ we use (12.31) again to obtain

\[ \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \simeq \|v_k\|^2_{H^{-1/2}(\Gamma)} = |v(x_k)|^2 \|\varphi_k\|^2_{H^{-1/2}(\Gamma)} \lesssim h_k^{-d} \|\varphi_k\|^2_{H^{-1/2}(\Gamma)} \|v\|^2_{\widetilde{H}^{-1/2}(K)}. \]

Lemma C.13 gives

\[ \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \begin{cases} \big(1 + |\log h_k|\big) \|v\|^2_{\widetilde{H}^{-1/2}(K)}, & d = 2, \\ \|v\|^2_{\widetilde{H}^{-1/2}(K)}, & d = 3. \end{cases} \]

Summing over $k \in \mathcal{N}$ and using Theorem A.10 give the desired result. $\square$
12.3.3 Bounds for the condition numbers

We are now able to state the main theorems of this chapter. We denote by $A_W$ (respectively, $A_V$) the stiffness matrix arising from the Galerkin approximation of the hypersingular (respectively, weakly-singular) integral equation. Their diagonally scaled counterparts are denoted by $\widetilde{A}_W$ and $\widetilde{A}_V$, respectively.

Theorem 12.1 (Hypersingular integral equation). The condition numbers of $A_W$ and $\widetilde{A}_W$ are bounded by

\[ \kappa(A_W) \lesssim \begin{cases} N\big(1 + |\log(N h_{\min})|\big), & d = 2, \\[2pt] N^{1/2} \dfrac{h_{\max}}{h_{\min}}, & d = 3, \end{cases} \qquad\text{and}\qquad \kappa(\widetilde{A}_W) \lesssim \begin{cases} N\big(1 + |\log(N h_{\min})|\big), & d = 2, \\ N^{1/2}, & d = 3. \end{cases} \]

Proof. The theorem is a consequence of Lemma 12.2 and Lemma 12.4, together with (12.6) and (12.7). $\square$

Theorem 12.2 (Weakly-singular integral equation). The condition numbers of the matrices $A_V$ and $\widetilde{A}_V$ are bounded by

\[ \kappa(A_V) \lesssim \begin{cases} N\big(1 + |\log(N h_{\max})|\big) \Big(\dfrac{h_{\max}}{h_{\min}}\Big)^2, & d = 2, \\[4pt] N^{1/2} \Big(\dfrac{h_{\max}}{h_{\min}}\Big)^3, & d = 3, \end{cases} \qquad\text{and}\qquad \kappa(\widetilde{A}_V) \lesssim \begin{cases} N\big(1 + |\log(N h_{\min})|\big) \dfrac{1 + |\log h_{\min}|}{1 + |\log h_{\max}|}, & d = 2, \\[4pt] N^{1/2}, & d = 3. \end{cases} \]

Proof. The theorem is a consequence of Lemma 12.3 and Lemma 12.5, together with (12.6) and (12.7). $\square$
12.4 Anisotropic Mesh Refinements

In this section we report on the results in [86] on the extremal eigenvalues and condition numbers of the stiffness matrices arising from the Galerkin boundary element discretisation of the weakly-singular and hypersingular integral equations in the case of anisotropic mesh refinements of a screen $\Gamma$ in $\mathbb{R}^3$. Assumption 12.1 is not required; that means each partition $\mathcal{P}_h$ of $\Gamma$ may contain elements $K$ for which the aspect ratio $h_K/\rho_K$ approaches infinity as $h \to 0$.

Each partition $\mathcal{P}_h$ is conforming, and each element $K \in \mathcal{P}_h$ is the image of the reference element $\widehat K$ by a bijective map $\chi_K : \widehat K \to K$, which is an affine mapping if $K$ is a triangle, or a bilinear mapping if $K$ is a quadrilateral. Let $J_K$ denote the $2 \times 2$ Jacobian matrix of $\chi_K$. Then, for any function $f$ defined on $K$,

\[ \int_K f(x)\,dx = \int_{\widehat K} f\big(\chi_K(\widehat x)\big)\, \det\big(J_K(\widehat x)^\top J_K(\widehat x)\big)^{1/2}\,d\widehat x. \]

The following conditions are imposed on the mappings $\chi_K$. For any $K \in \mathcal{P}_h$, we denote by $|K|$ the two-dimensional measure of $K$.

Assumption 12.3. The following statements hold:

\[ |K|^2 \lesssim \det\big(J_K(\widehat x)^\top J_K(\widehat x)\big) \lesssim |K|^2, \qquad \rho_K^2 \lesssim \lambda_{\min}\big(J_K(\widehat x)^\top J_K(\widehat x)\big). \]

The constants are independent of $\widehat x \in \widehat K$, $K \in \mathcal{P}_h$, and $h > 0$.

It is noted that the above assumption holds when $\chi_K$ is an affine mapping. It is also satisfied when $\chi_K$ is a bilinear map if $K$ is not too far from a parallelogram. The next assumption describes how the size and shape of neighbouring elements may vary, even though the meshes are required to be neither quasi-uniform nor shape-regular.

Assumption 12.4. The following equivalences hold:

\[ h_K \simeq h_{K'}, \quad \rho_K \simeq \rho_{K'}, \quad |K| \simeq |K'| \qquad \forall K, K' \in \mathcal{P}_h,\ \overline K \cap \overline{K'} \neq \emptyset. \]

The constants are independent of $K, K' \in \mathcal{P}_h$ and of all $h > 0$.

We also make the following assumption, which is a variant of Assumption 12.2.

Assumption 12.5. There exists a positive constant $M$ such that

\[ \max_{h>0} \max_{x\in\mathcal{N}_h} \operatorname{card}\{K \in \mathcal{P}_h : x \in \overline K\} \le M. \]
Commonly used meshes in the $h$-version of the boundary element method which satisfy Assumption 12.4 and Assumption 12.5 are power-graded meshes. On the screen $\Gamma = (-1,1) \times (-1,1) \times \{0\}$ these meshes are defined as follows. Choosing a grading exponent $\beta \ge 1$, we first define points on $[-1,1]$ which are symmetric about $0$:

\[ t_j = \begin{cases} -1 + \Big(\dfrac{2j}{n}\Big)^\beta, & 0 \le j \le n/2, \\[4pt] -t_{n-j}, & n/2 < j \le n. \end{cases} \tag{12.32} \]

When $\beta = 1$, these points are uniformly distributed on $[-1,1]$. As $\beta$ increases from $1$, the points cluster more strongly at the endpoints $\pm 1$. The length $\Delta t_j = t_j - t_{j-1}$ of the $j$-th interval satisfies

\[ \Delta t_j = \Delta t_{n+1-j} \simeq \frac{1}{n} \Big(\frac{j}{n}\Big)^{\beta-1}, \quad j = 1, \ldots, n/2. \tag{12.33} \]

Using these points, we define vertices on $\Gamma$ by

\[ (x_i, y_j) = (t_i, t_j), \quad i, j = 0, \ldots, n. \tag{12.34} \]

These vertices form $N$ rectangular elements $K$ on $\Gamma$, where $N \sim n^2$; see Figure 12.1.

Fig. 12.1 Power-graded tensor-product mesh with $\beta = 3$ and $n = 8$.

Near a corner, these elements are shape-regular with $h_K \simeq (1/n)^\beta \simeq \rho_K$. Away from the boundary, they are also shape-regular with $h_K \simeq 1/n \simeq \rho_K$. However, near the middle of an edge we have $h_K \simeq 1/n$ while $\rho_K \simeq (1/n)^\beta$. Hence, the maximum aspect ratio grows like $n^{\beta-1}$, so that if $\beta > 1$ then degeneracy occurs.

Similarly to Section 12.3, we denote by $\varphi_k$ the nodal basis function centred at the nodal point $x_k$. In the case of the hypersingular integral equation, this is the hat function, while in the case of the weakly-singular integral equation it is the brick function. We recall that $h_k$ is the average of $h_K$ for $K \subset \Gamma_k$, with $\Gamma_k$ being the support of $\varphi_k$. We also define $\rho_k$ to be the average of $\rho_K$ with $K \subset \Gamma_k$. Assumption 12.4 implies that

\[ \rho_k \simeq \rho_K, \quad K \subset \Gamma_k, \quad k \in \mathcal{N}. \tag{12.35} \]

We also note that

\[ N^{-\beta/2} \lesssim \rho_k \le h_k \lesssim N^{-1/2}. \tag{12.36} \]

The reader is referred to [86, Example 5.5] for a generalisation of this mesh construction.
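The construction (12.32)–(12.34) is easy to reproduce numerically. The following Python sketch (the function name is ours) builds the graded points $t_j$ and checks their symmetry and the clustering of $\Delta t_j$ at the endpoints.

```python
import numpy as np

def graded_points(n, beta):
    """Points t_0 < ... < t_n on [-1, 1], graded towards +-1; see (12.32)."""
    t = np.empty(n + 1)
    for j in range(n + 1):
        if j <= n // 2:
            t[j] = -1.0 + (2.0 * j / n) ** beta
        else:
            t[j] = -t[n - j]
    return t

n, beta = 8, 3
t = graded_points(n, beta)
dt = np.diff(t)

print(t)
# the smallest intervals sit at the endpoints, the largest in the middle,
# consistent with Delta t_j ~ (1/n)(j/n)^(beta-1)
```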
12.4.1 Technical results

We first show two inequalities involving $h_k$ and $\rho_k$.

Lemma 12.6. For the mesh defined by (12.32) and (12.34) we have

\[ \sum_{k\in\mathcal{N}} h_k \rho_k^{-1} \lesssim \begin{cases} N, & 1 \le \beta < 2, \\ N(1 + \log N), & \beta = 2, \\ N^{\beta/2}, & \beta > 2, \end{cases} \qquad\text{and}\qquad \sum_{k\in\mathcal{N}} (h_k \rho_k)^{-1} \lesssim \begin{cases} N^2, & 1 \le \beta < 2, \\ N^2(1 + \log N)^2, & \beta = 2, \\ N^\beta, & \beta > 2. \end{cases} \tag{12.37} \]

Proof. Due to (12.35) we have

\[ \sum_{k\in\mathcal{N}} h_k \rho_k^{-1} \lesssim \sum_{K\in\mathcal{P}_h} h_K \rho_K^{-1}. \]

We number the elements $K$ from left to right, starting from the bottom-left corner, so that $K_{ij} = (t_{i-1}, t_i) \times (t_{j-1}, t_j)$, $i, j = 1, \ldots, n$. Then, by symmetry (see Figure 12.1),

\[ \sum_{k\in\mathcal{N}} h_k \rho_k^{-1} \lesssim \sum_{i=1}^n \sum_{j=1}^i h_{K_{ij}} \rho_{K_{ij}}^{-1}. \]

Note that for $i = 1, \ldots, n$ and $j = 1, \ldots, i$ we have, recalling (12.33),

\[ h_{K_{ij}} = \Delta t_i \simeq \frac{1}{n}\Big(\frac{i}{n}\Big)^{\beta-1} \quad\text{and}\quad \rho_{K_{ij}} = \Delta t_j \simeq \frac{1}{n}\Big(\frac{j}{n}\Big)^{\beta-1}. \]

Therefore,

\[ \sum_{k\in\mathcal{N}} h_k \rho_k^{-1} \lesssim \sum_{i=1}^n \Big(\frac{i}{n}\Big)^{\beta-1} \sum_{j=1}^i \Big(\frac{j}{n}\Big)^{1-\beta} \simeq n^2 \int_{1/n}^1 s^{\beta-1} \int_{1/n}^s t^{1-\beta}\,dt\,ds. \]

This yields the first inequality in (12.37), noting that $N = n^2$. Similarly,

\[ \sum_{k\in\mathcal{N}} (h_k \rho_k)^{-1} \lesssim \sum_{i=1}^n \sum_{j=1}^i \frac{1}{\Delta t_i\, \Delta t_j} \simeq n^2 \sum_{i=1}^n \Big(\frac{i}{n}\Big)^{1-\beta} \sum_{j=1}^i \Big(\frac{j}{n}\Big)^{1-\beta} \simeq n^4 \int_{1/n}^1 s^{1-\beta} \int_{1/n}^s t^{1-\beta}\,dt\,ds, \]

from which the second inequality in (12.37) follows. $\square$

The following lemma is a special case of [86, Theorem 4.1].
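The growth rates in (12.37) can be checked numerically. The sketch below (the helper is ours, built on (12.32)) evaluates $\sum_K h_K \rho_K^{-1}$ over the tensor-product mesh for $\beta = 3$ and doubling $n$; by Lemma 12.6 this sum should scale like $N^{\beta/2} = n^\beta$, i.e. grow by roughly $2^\beta = 8$ when $n$ doubles.

```python
import numpy as np

def aspect_sum(n, beta):
    """Sum of h_K / rho_K over the n x n power-graded tensor mesh (12.32)-(12.34)."""
    j = np.arange(n + 1)
    half = j <= n // 2
    t = np.where(half, -1.0 + (2.0 * j / n) ** beta, 0.0)
    t[~half] = -t[n - j[~half]]          # reflect: t_j = -t_{n-j}
    dt = np.diff(t)
    hx, hy = np.meshgrid(dt, dt)         # element K_ij has side lengths dt[i], dt[j]
    return np.sum(np.maximum(hx, hy) / np.minimum(hx, hy))

beta = 3
s8, s16 = aspect_sum(8, beta), aspect_sum(16, beta)
print(s16 / s8)   # roughly 2**beta = 8, faster than the factor 4 growth of N
```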
Lemma 12.7. The following statements hold:

(i) $\|\varphi_k\|_{\widetilde{H}^{1/2}(\Gamma)} = \|\varphi_k\|_{\widetilde{H}^{1/2}(\Gamma_k)} \simeq \rho_k^{-1/2} \|\varphi_k\|_{L_2(\Gamma_k)}$;

(ii) $\|\varphi_k\|_{\widetilde{H}^{-1/2}(\Gamma_k)} \simeq \rho_k^{1/2} \|\varphi_k\|_{L_2(\Gamma_k)}$.

Proof. The proof, which follows the same lines as the proofs of [85, Theorems 3.2 and 3.6], is carried out for more general results in [86] and is omitted. $\square$

Using the above lemma, one can prove the following lemma, which is a partial generalisation of Lemma C.12 and Lemma C.13 for shape-regular meshes.

Lemma 12.8. For any $k \in \mathcal{N}$ the following statements hold:

(i) $\|\varphi_k\|_{L_p(\Gamma_k)} \simeq |\Gamma_k|^{1/p}$ for $1 \le p \le \infty$;

(ii) $\|\varphi_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \simeq \|\varphi_k\|^2_{\widetilde{H}^{1/2}(\Gamma_k)} \lesssim |\Gamma_k|\rho_k^{-1}$;

(iii) $|\Gamma_k|\rho_k \lesssim \|\varphi_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \simeq \|\varphi_k\|^2_{\widetilde{H}^{-1/2}(\Gamma_k)}$;

(iv) $\|\varphi_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim |\Gamma_k|^{3/2}$.

Proof. See [86, Theorem 4.2]. $\square$

Finally, we need the following result proved in [85].

Lemma 12.9. For any $v \in V$,

\[ \|v\|_{L_2(K)} \lesssim \rho_K^{-1/2} \|v\|_{\widetilde{H}^{-1/2}(K)}, \quad K \in \mathcal{P}_h. \]

Proof. See [85, Theorem 3.6] and [85, Remark 3.8]. $\square$
12.4.2 Coercivity of the decomposition

In this section, we find the constants $c_{\max}$ and $C_{\max}$ in (12.10) and (12.11). These estimates have the same effect as the coercivity in the decomposition associated with additive Schwarz preconditioners. The following lemma gives explicit forms of $c_{\max}$ and $C_{\max}$ in (12.10) and (12.11) when $m = 1/2$. It is noted that estimate (12.38) is an improvement over a similar result proved in [86, Lemma 5.2], in which the constant depends on the mesh type; only in special cases (for example, when the mesh is a power-graded mesh) is that constant independent of all parameters, see [86, Theorem 7.5]. This improvement is due to the fact that in our proof we use the newly-derived estimate (A.86) instead of (A.65) as done in [86].

Lemma 12.10. Let $v$ be decomposed by (12.9). Then

\[ \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \tag{12.38} \]

and

\[ \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \Big(\max_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-1}\Big) \sum_{k\in\mathcal{N}} |v(x_k)|^2. \tag{12.39} \]

Proof. The proof of (12.38) is exactly the same as that of (12.16), because the argument in that proof does not rely on the properties of the mesh and finite elements. In order to prove (12.39), we further estimate $\|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)}$ on the right-hand side of (12.38), using Lemma 12.8 (ii), as follows:

\[ \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} = |v(x_k)|^2 \|\varphi_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim |\Gamma_k|\rho_k^{-1} |v(x_k)|^2. \]

Estimate (12.39) then follows. $\square$
The next lemma shows similar results for the case $m = -1/2$, which is related to the weakly-singular integral equation.

Lemma 12.11. Let $v$ be decomposed by (12.9). Then

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|^3\Big)^{1/2} \sum_{k\in\mathcal{N}} |v(x_k)|^2 \tag{12.40} \]

and

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-2}\Big)^{1/2} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)}. \tag{12.41} \]

Proof. A more general result is proved in [86, Lemma 5.1]; we present the proof only for the special case of interest here. By using Lemma A.18 part (iii) we obtain $\|v\|_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \|v\|_{L_{4/3}(\Gamma)}$. This together with Hölder's inequality and (12.15) gives

\[ \|v\|_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \|v\|_{L_{4/3}(\Gamma)} \le \sum_{\ell=1}^{L_h} \Big\|\sum_{k\in\mathcal{N}_\ell} v_k\Big\|_{L_{4/3}(\Gamma)} \lesssim \sum_{\ell=1}^{L_h} \Big(\sum_{k\in\mathcal{N}_\ell} \|v_k\|^{4/3}_{L_{4/3}(\Gamma)}\Big)^{3/4} \lesssim \Big(\sum_{k\in\mathcal{N}} \|v_k\|^{4/3}_{L_{4/3}(\Gamma)}\Big)^{3/4}. \]

Let $w_k$, $k \in \mathcal{N}$, be any positive weights. Then by using Hölder's inequality again with $1/p + 1/q = 1$, where $p = 3$ and $q = 3/2$, we have

\[ \sum_{k\in\mathcal{N}} \|v_k\|^{4/3}_{L_{4/3}(\Gamma)} \le \Big(\sum_{k\in\mathcal{N}} w_k^2\Big)^{1/3} \Big(\sum_{k\in\mathcal{N}} w_k^{-1} \|v_k\|^2_{L_{4/3}(\Gamma)}\Big)^{2/3}. \]

This inequality and the one above yield

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \Big(\sum_{k\in\mathcal{N}} w_k^2\Big)^{1/2} \sum_{k\in\mathcal{N}} w_k^{-1} \|v_k\|^2_{L_{4/3}(\Gamma)}. \]

By choosing $w_k = |\Gamma_k|^{3/2}$ and applying Lemma 12.8 part (i), we obtain (12.40). On the other hand, by choosing $w_k = |\Gamma_k|^{1/2}\rho_k^{-1}$ and applying Lemma 12.8 parts (i) and (iii), we obtain

\[ \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-2}\Big)^{1/2} \sum_{k\in\mathcal{N}} |\Gamma_k|\rho_k |v(x_k)|^2 \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-2}\Big)^{1/2} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)}, \]

yielding (12.41). $\square$
12.4.3 Stability of the decomposition

In this section, we find the constants $c_{\min}$ and $C_{\min}$ in (12.10) and (12.11). These estimates have the same effect as the stability in the decomposition associated with additive Schwarz preconditioners. The next lemma gives the results when $m = 1/2$. More general results can be found in [86, Lemma 5.2].

Lemma 12.12. Let $v$ be decomposed by (12.9). Then

\[ \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|^{-1}\Big)^{-1/2} \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \tag{12.42} \]

and

\[ \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-2}\Big)^{-1/2} \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)}. \tag{12.43} \]

Proof. Recalling the usual notation $\widehat K$ for the reference element, for any $x_k \in K$ we have

\[ |v(x_k)|^2 \le \|v\|^2_{L_\infty(K)} = \|\widehat v\|^2_{L_\infty(\widehat K)}. \]

By the equivalence of norms on finite-dimensional spaces, we have

\[ \|\widehat v\|^2_{L_\infty(\widehat K)} \lesssim \|\widehat v\|^2_{L_4(\widehat K)} \lesssim |K|^{-1/2} \|v\|^2_{L_4(K)}. \]

Assumption 12.4 implies $|K|^{-1/2} \lesssim |\Gamma_k|^{-1/2}$, so that

\[ |v(x_k)|^2 \lesssim |\Gamma_k|^{-1/2} \|v\|^2_{L_4(\Gamma_k)}. \tag{12.44} \]

Taking the sum over $k \in \mathcal{N}$ and using the Cauchy–Schwarz inequality and Assumption 12.5, we deduce

\[ \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|^{-1}\Big)^{1/2} \Big(\sum_{k\in\mathcal{N}} \|v\|^4_{L_4(\Gamma_k)}\Big)^{1/2} \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|^{-1}\Big)^{1/2} \|v\|^2_{L_4(\Gamma)}. \]

Lemma A.18 part (i) then gives

\[ \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|^{-1}\Big)^{1/2} \|v\|^2_{H^{1/2}(\Gamma)} \le \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|^{-1}\Big)^{1/2} \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)}, \]

proving (12.42). We can prove (12.43) in a similar manner. Indeed, Lemma 12.8 part (ii) and (12.44) imply

\[ \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} = |v(x_k)|^2 \|\varphi_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim |v(x_k)|^2 |\Gamma_k|\rho_k^{-1} \lesssim |\Gamma_k|^{1/2}\rho_k^{-1} \|v\|^2_{L_4(\Gamma_k)}. \]

Therefore, proceeding as above we obtain

\[ \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-2}\Big)^{1/2} \|v\|^2_{L_4(\Gamma)} \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-2}\Big)^{1/2} \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)}, \]

completing the proof of the lemma. $\square$
The following lemma gives similar results when $m = -1/2$. More general results can be found in [86, Lemma 5.1].

Lemma 12.13. Let $v$ be decomposed by (12.9). Then

\[ \Big(\min_{k\in\mathcal{N}} |\Gamma_k|\rho_k\Big) \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \tag{12.45} \]

and

\[ \Big(\min_{k\in\mathcal{N}} |\Gamma_k|^{-1/2}\rho_k\Big) \sum_{k\in\mathcal{N}} \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)}. \tag{12.46} \]

Proof. Choose $K \in \mathcal{P}_h$ such that $x_k \in K$. Then, by the same arguments as in the proof of the previous lemma, we obtain

\[ |v(x_k)|^2 \le \|v\|^2_{L_\infty(K)} = \|\widehat v\|^2_{L_\infty(\widehat K)} \lesssim \|\widehat v\|^2_{L_2(\widehat K)} \lesssim |K|^{-1} \|v\|^2_{L_2(K)}. \]

Lemma 12.9 implies

\[ |v(x_k)|^2 \lesssim |K|^{-1}\rho_K^{-1} \|v\|^2_{\widetilde{H}^{-1/2}(K)} \lesssim |\Gamma_k|^{-1}\rho_k^{-1} \|v\|^2_{\widetilde{H}^{-1/2}(K)}, \tag{12.47} \]

so that, with the help of Theorem A.10,

\[ \sum_{k\in\mathcal{N}} |v(x_k)|^2 \lesssim \Big(\max_{k\in\mathcal{N}} |\Gamma_k|^{-1}\rho_k^{-1}\Big) \sum_{K\in\mathcal{P}} \|v\|^2_{\widetilde{H}^{-1/2}(K)} \lesssim \Big(\max_{k\in\mathcal{N}} |\Gamma_k|^{-1}\rho_k^{-1}\Big) \|v\|^2_{\widetilde{H}^{-1/2}(\Gamma)}. \]

This proves (12.45). To prove (12.46) it suffices to note that

\[ \|v_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)} = |v(x_k)|^2 \|\varphi_k\|^2_{\widetilde{H}^{-1/2}(\Gamma)} \lesssim |\Gamma_k|^{1/2}\rho_k^{-1} \|v\|^2_{\widetilde{H}^{-1/2}(K)}, \]

where in the last step we used (12.47) and Lemma 12.8 part (iv). $\square$
12.4.4 Bounds for the condition numbers

We are now able to state the main theorems of this section. Similarly to Theorem 12.1 and Theorem 12.2, we now have the following results for the anisotropic mesh refinement defined by (12.32) and (12.34). Results for other anisotropic mesh refinements can be found in [86].

Theorem 12.3 (Hypersingular integral equation). For the graded mesh defined by (12.32) and (12.34),

\[ \lambda_{\max}(A_W) \lesssim N^{-1/2}, \qquad \lambda_{\min}(A_W) \gtrsim \begin{cases} N^{-1}, & 1 \le \beta < 2, \\ N^{-1}(1 + \log N)^{-1}, & \beta = 2, \\ N^{-\beta/2}, & \beta > 2, \end{cases} \]

\[ \lambda_{\max}(\widetilde{A}_W) \lesssim 1, \qquad \lambda_{\min}(\widetilde{A}_W) \gtrsim \begin{cases} N^{-1/2}, & 1 \le \beta < 2, \\ N^{-1/2}(1 + \log N)^{-1/2}, & \beta = 2, \\ N^{-\beta/4}, & \beta > 2. \end{cases} \]

Consequently,

\[ \kappa(A_W) \lesssim \begin{cases} N^{1/2}, & 1 \le \beta < 2, \\ N^{1/2}(1 + \log N), & \beta = 2, \\ N^{(\beta-1)/2}, & \beta > 2, \end{cases} \qquad\text{and}\qquad \kappa(\widetilde{A}_W) \lesssim \begin{cases} N^{1/2}, & 1 \le \beta < 2, \\ N^{1/2}(1 + \log N)^{1/2}, & \beta = 2, \\ N^{\beta/4}, & \beta > 2. \end{cases} \]

Proof. It follows from (12.6), (12.7), Lemma 12.10, and Lemma 12.12 that

\[ \lambda_{\max}(A_W) \lesssim \max_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-1}, \qquad \lambda_{\max}(\widetilde{A}_W) \lesssim 1, \]

\[ \lambda_{\min}(A_W) \gtrsim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|^{-1}\Big)^{-1/2}, \qquad \lambda_{\min}(\widetilde{A}_W) \gtrsim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-2}\Big)^{-1/2}. \]

Since $|\Gamma_k| \simeq h_k\rho_k$ for $k \in \mathcal{N}$, we have

\[ \lambda_{\max}(A_W) \lesssim \max_{k\in\mathcal{N}} h_k \lesssim N^{-1/2}, \]

where in the last step we used (12.36). Since $|\Gamma_k|^{-1} \simeq (h_k\rho_k)^{-1}$ and $|\Gamma_k|\rho_k^{-2} \simeq h_k\rho_k^{-1}$, the lower bounds for $\lambda_{\min}(A_W)$ and $\lambda_{\min}(\widetilde{A}_W)$ follow from (12.37), completing the proof of the theorem. $\square$

Theorem 12.4 (Weakly-singular integral equation). For the graded mesh defined by (12.32) and (12.34),

\[ \lambda_{\min}(A_V) \gtrsim N^{-3\beta/2}, \qquad \lambda_{\max}(A_V) \lesssim N^{-1}, \]

\[ \lambda_{\min}(\widetilde{A}_V) \gtrsim N^{-(\beta-1)/2}, \qquad \lambda_{\max}(\widetilde{A}_V) \lesssim \begin{cases} N^{1/2}, & 1 \le \beta < 2, \\ N^{1/2}(1 + \log N)^{1/2}, & \beta = 2, \\ N^{\beta/4}, & \beta > 2. \end{cases} \]

Consequently,

\[ \kappa(A_V) \lesssim N^{(3\beta/2)-1} \qquad\text{and}\qquad \kappa(\widetilde{A}_V) \lesssim \begin{cases} N^{\beta/2}, & 1 \le \beta < 2, \\ N^{\beta/2}(1 + \log N)^{1/2}, & \beta = 2, \\ N^{(3\beta-2)/4}, & \beta > 2. \end{cases} \]

Proof. It follows from (12.6), (12.7), Lemma 12.11, and Lemma 12.13 that

\[ \lambda_{\min}(A_V) \gtrsim \min_{k\in\mathcal{N}} |\Gamma_k|\rho_k, \qquad \lambda_{\max}(A_V) \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|^3\Big)^{1/2}, \]

\[ \lambda_{\min}(\widetilde{A}_V) \gtrsim \min_{k\in\mathcal{N}} |\Gamma_k|^{-1/2}\rho_k, \qquad \lambda_{\max}(\widetilde{A}_V) \lesssim \Big(\sum_{k\in\mathcal{N}} |\Gamma_k|\rho_k^{-2}\Big)^{1/2}. \]

It also follows from (12.36) that

\[ N^{-\beta} \lesssim |\Gamma_k| \lesssim N^{-1}. \]

The bounds for $\lambda_{\min}(A_V)$, $\lambda_{\max}(A_V)$, and $\lambda_{\min}(\widetilde{A}_V)$ now follow from the above inequalities and (12.36). Noting that $|\Gamma_k|\rho_k^{-2} \simeq h_k\rho_k^{-1}$, we also obtain the bound for $\lambda_{\max}(\widetilde{A}_V)$ by using (12.37). The bounds for $\kappa(A_V)$ and $\kappa(\widetilde{A}_V)$ then follow, completing the proof of the theorem. $\square$

Remark 12.1. In the case of the weakly-singular integral equation, if one uses the Haar basis functions instead of the brick functions, so that $v_k$ has integral mean zero, then by invoking Theorem A.11 one can show that the maximum eigenvalue of $\widetilde{A}_V$ is bounded by a constant, as in the case of the hypersingular integral equation; see Chapter 3 and Chapter 10.
12.5 Numerical Results

In this section we present numerical results that illustrate our theory. We consider the weakly-singular and hypersingular integral equations on various boundaries $\Gamma$. In each case, we compute the extremal eigenvalues $\lambda_{\min}(A)$, $\lambda_{\max}(A)$, $\lambda_{\min}(\widetilde A)$, $\lambda_{\max}(\widetilde A)$ and the condition numbers $\kappa(A)$ and $\kappa(\widetilde A)$, both when the mesh is uniform and when it is locally refined, to observe that diagonal scaling in fact restores the condition numbers to those obtained with uniform meshes. The following boundaries $\Gamma$ and equations are considered.

Open curve: $\Gamma$ is the open curve $(-1,1) \times \{0\}$ in $\mathbb{R}^2$. On this boundary a uniform mesh, a mesh graded (with grading parameter 3) at the endpoints $\pm 1$, and a geometric mesh are considered. The weakly-singular and hypersingular integral equations on this boundary have the form

\[ Vu(x) = \frac{1}{\pi} \int_\Gamma u(y) \log\frac{1}{|x-y|}\,d\sigma_y = f(x), \quad x \in \Gamma, \]

\[ Wu(x) = -\frac{1}{\pi} \frac{\partial}{\partial\nu_x} \int_\Gamma u(y) \frac{\partial}{\partial\nu_y} \log\frac{1}{|x-y|}\,d\sigma_y = g(x), \quad x \in \Gamma. \]

Open surface: $\Gamma$ is the open surface $(-1,1) \times (-1,1) \times \{0\}$ in $\mathbb{R}^3$. On this boundary a uniform mesh and a mesh that is refined at the corner $(-1,-1)$ are considered (see Figures 12.2 and 12.3). The weakly-singular and hypersingular integral equations on this boundary have the form

\[ Vu(x) = \frac{1}{2\pi} \int_\Gamma \frac{u(y)}{|x-y|}\,d\sigma_y = f(x), \quad x \in \Gamma, \]

\[ Wu(x) = -\frac{1}{2\pi} \frac{\partial}{\partial\nu_x} \int_\Gamma u(y) \frac{\partial}{\partial\nu_y} \frac{1}{|x-y|}\,d\sigma_y = g(x), \quad x \in \Gamma. \]

The following tables and figures are reproduced from those in [5] (Copyright © 1999 Society for Industrial and Applied Mathematics. Reprinted with permission. All rights reserved).
Geometric mesh (d = 2)

| N      | log λmin(A_V)      | λmax(A_V)         |
|--------|--------------------|-------------------|
| 8      | −0.5071e+1         | 0.2525e+0         |
| 16     | −0.1061e+2 (1.05)  | 0.2523e+0 (0.00)  |
| 24     | −0.1615e+2 (1.03)  | 0.2523e+0 (0.00)  |
| 32     | −0.2170e+2 (1.02)  | 0.2523e+0 (0.00)  |
| 40     | −0.2724e+2 (1.02)  | 0.2523e+0 (0.00)  |
| 48     | −0.3279e+2 (1.02)  | 0.2523e+0 (0.00)  |
| theory | −N                 | N                 |

| N      | λmin(Ã_V)          | λmax(Ã_V)         |
|--------|--------------------|-------------------|
| 8      | 0.3133e+0          | 0.2552e+1         |
| 16     | 0.1720e+0 (−0.94)  | 0.4969e+1 (0.98)  |
| 24     | 0.1160e+0 (−0.99)  | 0.7447e+1 (1.01)  |
| 32     | 0.8706e−1 (−1.00)  | 0.9998e+1 (1.03)  |
| 40     | 0.6948e−1 (−1.02)  | 0.1260e+2 (1.04)  |
| 48     | 0.5767e−1 (−1.01)  | 0.1524e+2 (1.04)  |
| theory | N^{−1}             | N^2               |

| N      | κ(A_V)              | κ(Ã_V)            |
|--------|---------------------|-------------------|
| 8      | 0.4023e+02          | 0.8146e+1         |
| 16     | 0.1021e+05 (10.38)  | 0.2890e+2 (1.92)  |
| 24     | 0.2611e+07 (15.93)  | 0.6419e+2 (2.00)  |
| 32     | 0.6685e+09 (21.48)  | 0.1148e+3 (2.03)  |
| 40     | 0.1712e+12 (27.03)  | 0.1814e+3 (2.06)  |
| 48     | 0.4398e+14 (32.60)  | 0.2642e+3 (2.06)  |
| theory | 2^{2N} N(1+log N)   | N^3               |

Table 12.1 Weakly-singular integral equation on open curve with geometric mesh: $h_{\min} = 2^{-(N-2)}$ and $h_{\max} = 1$. Theory: Lemma 12.3, Lemma 12.5, Theorem 12.2.
Geometric mesh (d = 2)

| N      | λmin(A_W)           | λmax(A_W)          | κ(A_W)             |
|--------|---------------------|--------------------|--------------------|
| 7      | 0.3784e+0           | 0.1171e+1          | 0.3094e+01         |
| 15     | 0.2903e+0 (−0.30)   | 0.1170e+1 (0.00)   | 0.4030e+01 (0.29)  |
| 23     | 0.2615e+0 (−0.21)   | 0.1171e+1 (0.00)   | 0.4476e+01 (0.21)  |
| 31     | 0.2479e+0 (−0.16)   | 0.1171e+1 (0.00)   | 0.4722e+01 (0.16)  |
| 39     | 0.2402e+0 (−0.12)   | 0.1171e+1 (0.00)   | 0.4873e+01 (0.13)  |
| 47     | 0.2355e+0 (−0.10)   | 0.1173e+1 (0.00)   | 0.4980e+01 (0.09)  |
| theory | N^{−2}              | 1                  | N^2                |

| N      | λmin(Ã_W)           | λmax(Ã_W)          | κ(Ã_W)             |
|--------|---------------------|--------------------|--------------------|
| 7      | 0.4200e+00          | 0.1296e+01         | 0.3087e+01         |
| 15     | 0.3197e+00 (−0.30)  | 0.1296e+01 (0.00)  | 0.4055e+01 (0.30)  |
| 23     | 0.2874e+00 (−0.22)  | 0.1297e+01 (0.00)  | 0.4511e+01 (0.22)  |
| 31     | 0.2722e+00 (−0.16)  | 0.1297e+01 (0.00)  | 0.4763e+01 (0.16)  |
| 39     | 0.2637e+00 (−0.13)  | 0.1297e+01 (0.00)  | 0.4918e+01 (0.13)  |
| 47     | 0.2584e+00 (−0.10)  | 0.1298e+01 (−0.02) | 0.5024e+01 (0.08)  |
| theory | N^{−2}              | 1                  | N^2                |

Table 12.2 Hypersingular integral equation on open curve with geometric mesh: $h_{\min} = 2^{-(N-1)}$ and $h_{\max} = 1$. Theory: Lemma 12.2, Lemma 12.4, Theorem 12.1.
12 Diagonal Scaling Preconditioner and Locally-Refined Meshes

Uniform mesh (d = 3)

  N       λmin(AV)           λmax(AV)           κ(AV)
  8       0.32e−1            0.24e+0            0.75e+1
  32      0.35e−2 (−1.59)    0.60e−1 (−0.99)    0.17e+2 (0.60)
  128     0.42e−3 (−1.53)    0.15e−1 (−1.00)    0.36e+2 (0.53)
  512     0.51e−4 (−1.51)    0.38e−2 (−1.00)    0.73e+2 (0.51)
  2048    0.64e−5 (−1.50)    0.94e−3 (−1.00)    0.15e+3 (0.50)
  theory  N^{−3/2}           N^{−1}             N^{1/2}

  N       λmin(ÃV)           λmax(ÃV)           κ(ÃV)
  8       0.40e+0            0.30e+1            0.75e+1
  32      0.35e+0 (−0.09)    0.60e+1 (0.51)     0.17e+2 (0.60)
  128     0.33e+0 (−0.03)    0.12e+2 (0.50)     0.36e+2 (0.53)
  512     0.33e+0 (−0.01)    0.24e+2 (0.50)     0.73e+2 (0.51)
  2048    0.33e+0 (0.00)     0.48e+2 (0.50)     0.15e+3 (0.50)
  theory  1                  N^{1/2}            N^{1/2}

Table 12.3 Weakly-singular integral equation on a screen with uniform mesh: hmin = hmax = O(N^{−1/2}). Theory: Lemma 12.3, Lemma 12.5, Theorem 12.2.
Non-uniform mesh (d = 3)

  N       λmin(AV)           λmax(AV)           κ(AV)
  12      0.43e−2            0.21e+0            0.49e+2
  40      0.68e−4 (−3.45)    0.59e−1 (−1.06)    0.87e+3 (2.39)
  140     0.11e−5 (−3.32)    0.15e−1 (−1.09)    0.14e+5 (2.23)
  528     0.17e−7 (−3.13)    0.38e−2 (−1.04)    0.23e+6 (2.09)
  2068    0.26e−9 (−3.05)    0.94e−3 (−1.01)    0.36e+7 (2.03)
  theory  N^{−3}             N^{−1}             N^2

  N       λmin(ÃV)           λmax(ÃV)           κ(ÃV)
  12      0.37e+0            0.36e+1            0.98e+1
  40      0.35e+0 (−0.06)    0.64e+1 (0.47)     0.18e+2 (0.53)
  140     0.33e+0 (−0.03)    0.12e+2 (0.52)     0.37e+2 (0.55)
  528     0.33e+0 (−0.01)    0.24e+2 (0.52)     0.73e+2 (0.53)
  2068    0.33e+0 (0.00)     0.48e+2 (0.51)     0.15e+3 (0.51)
  theory  1                  N^{1/2}            N^{1/2}

Table 12.4 Weakly-singular integral equation on a screen with non-uniform mesh: hmin = O(N^{−1}) and hmax = O(N^{−1/2}). Theory: Lemma 12.3, Lemma 12.5, Theorem 12.2.
Uniform mesh (d = 3)

  N       λmin(AW)           λmax(AW)           κ(AW)
  9       0.20e+0            0.36e+0            0.18e+1
  49      0.56e−1 (−0.75)    0.19e+0 (−0.37)    0.35e+1 (0.38)
  225     0.14e−1 (−0.89)    0.99e−1 (−0.44)    0.69e+1 (0.45)
  961     0.36e−2 (−0.95)    0.50e−1 (−0.47)    0.14e+2 (0.48)
  theory  N^{−1}             N^{−1/2}           N^{1/2}

  N       λmin(ÃW)           λmax(ÃW)           κ(ÃW)
  9       0.68e+0            0.12e+1            0.18e+1
  49      0.38e+0 (−0.34)    0.13e+1 (0.04)     0.35e+1 (0.38)
  225     0.20e+0 (−0.44)    0.14e+1 (0.02)     0.69e+1 (0.45)
  961     0.99e−1 (−0.47)    0.14e+1 (0.00)     0.14e+2 (0.48)
  theory  N^{−1/2}           1                  N^{1/2}

Table 12.5 Hypersingular integral equation on a screen with uniform mesh: hmin = hmax = O(N^{−1/2}). Theory: Lemma 12.2, Lemma 12.4, Theorem 12.1.
Non-uniform mesh (d = 3)

  N       λmin(AW)           λmax(AW)           κ(AW)
  2       0.33e+0            0.53e+0            0.16e+1
  11      0.81e−1 (−0.82)    0.36e+0 (−0.23)    0.44e+1 (0.59)
  52      0.20e−1 (−0.89)    0.19e+0 (−0.39)    0.96e+1 (0.50)
  229     0.51e−2 (−0.94)    0.99e−1 (−0.45)    0.20e+2 (0.48)
  966     0.13e−2 (−0.96)    0.50e−1 (−0.48)    0.40e+2 (0.49)
  theory  N^{−3/2}           N^{−1/2}           N

  N       λmin(ÃW)           λmax(ÃW)           κ(ÃW)
  2       0.94e+0            0.10e+1            0.11e+1
  11      0.68e+0 (−0.19)    0.12e+1 (0.09)     0.18e+1 (0.28)
  52      0.38e+0 (−0.37)    0.13e+1 (0.05)     0.35e+1 (0.41)
  229     0.20e+0 (−0.45)    0.14e+1 (0.02)     0.69e+1 (0.46)
  966     0.99e−1 (−0.48)    0.14e+1 (0.00)     0.14e+2 (0.48)
  theory  N^{−1/2}           1                  N^{1/2}

Table 12.6 Hypersingular integral equation on a screen with non-uniform mesh of Figure 12.2: hmin = O(N^{−1}) and hmax = O(N^{−1/2}). Theory: Lemma 12.2, Lemma 12.4, Theorem 12.1.
Fig. 12.2 Successive meshes in a sequence of non-uniform meshes on the screen, with local refinement at one corner: hmin = O(N −1 ) and hmax = O(N −1/2 ).
Non-uniform mesh (d = 3)

  N       λmin(AW)             λmax(AW)             κ(AW)
  8       0.1900e+0            0.3826e+0            0.2014e+1
  35      0.6028e−1 (−0.78)    0.3012e+0 (−0.16)    0.4996e+1 (0.62)
  142     0.1680e−1 (−0.91)    0.2369e+0 (−0.17)    0.1410e+2 (0.74)
  548     0.4436e−2 (−0.99)    0.1803e+0 (−0.20)    0.4064e+2 (0.78)
  2139    0.1141e−2 (−1.00)    0.1308e+0 (−0.24)    0.1146e+3 (0.76)
  theory  N^{−1}               N^{−1/4}             N^{3/4}

  N       λmin(ÃW)             λmax(ÃW)             κ(ÃW)
  8       0.7113e+0            0.1119e+1            0.1573e+1
  35      0.4265e+0 (−0.35)    0.1174e+1 (0.03)     0.2752e+1 (0.38)
  142     0.2327e+0 (−0.43)    0.1215e+1 (0.02)     0.5221e+1 (0.46)
  548     0.1226e+0 (−0.47)    0.1209e+1 (0.00)     0.9859e+1 (0.47)
  2139    0.6329e−1 (−0.49)    0.1225e+1 (0.01)     0.1936e+2 (0.50)
  theory  N^{−1/2}             1                    N^{1/2}

Table 12.7 Hypersingular integral equation on a screen with non-uniform mesh of Figure 12.3: hmin = O(N^{−1/2}) and hmax = O(N^{−1/4}). Theory: Lemma 12.2, Lemma 12.4, Theorem 12.1.
Fig. 12.3 Successive meshes in a different sequence of non-uniformly refined meshes on the screen: hmin = O(N −1/2 ) and hmax = O(N −1/4 ).
Chapter 13
Multilevel Preconditioners with Adaptive Mesh Refinements
In Chapter 12 we showed that local mesh refinements have an adverse effect on the condition number of the boundary element matrix: the condition number grows significantly as hmax/hmin grows. We also proved that diagonal scaling at best reduces the condition number to that of a uniform triangulation, i.e., it still grows like O(N^{1/2}) (in three dimensions), where N is the number of degrees of freedom; see also [5] and [86]. In this chapter we study the use of multilevel methods to obtain optimal preconditioners in the case that the mesh is locally refined. Local mesh refinements, particularly important for boundary element discretisations, are carried out by using a posteriori error estimators to mark elements to be refined; see e.g. [13, 14, 15, 50, 52, 54, 55, 92]. Different refinement strategies to partition these marked elements are available, and the advantages of the so-called newest vertex bisection algorithm are explained in e.g. [130, 153]. In this chapter, we are interested in the newest vertex bisection algorithm. It is noted that the article [4] also studies an optimal multilevel preconditioner for local mesh refinements. However, the analysis therein requires some restrictive conditions on the meshes which are not satisfied by the newest vertex bisection algorithm; see [4, page 391]. The analysis to be presented in this chapter extends that in [239], where the same bisection algorithm is studied for the V-cycle multigrid method for second-order elliptic problems. As in previous chapters, the analysis in fractional-order Sobolev spaces for boundary element methods requires different strategies from those used in finite element methods. We report on the results in [75], where multilevel additive Schwarz algorithms are designed and analysed for the hypersingular integral equation. Even though we focus on problems on surfaces in R³, the same analysis can be carried out for problems on curves in R²; see [75] for details.
13.1 Preliminaries

13.1.1 Mesh refinements and hierarchical structures

13.1.1.1 Shape-regular triangulations

Let $\mathcal{T}_\ell$ denote a regular triangulation of $\Gamma$ into planar triangles and let $\mathcal{N}_\ell$ denote the set of nodes of the mesh $\mathcal{T}_\ell$. Each triangle $T \in \mathcal{T}_\ell$ is understood to be a closed set, i.e., $T$ contains its boundary. For any subset $\tau \subset \Gamma$, we define
$$\mathcal{T}_\ell(\tau) := \bigl\{ T \in \mathcal{T}_\ell : T \cap \tau \neq \emptyset \bigr\}. \qquad (13.1)$$
We also define the patch $\omega_\ell^k(\tau) \subset \Gamma$ recursively by
$$\omega_\ell(\tau) := \omega_\ell^1(\tau) := \bigcup_{T \in \mathcal{T}_\ell(\tau)} T, \qquad \omega_\ell^{k+1}(\tau) := \omega_\ell^k\bigl(\omega_\ell(\tau)\bigr) \quad \text{for } k \in \mathbb{N}. \qquad (13.2)$$
For any $z \in \Gamma$, we write $\mathcal{T}_\ell(z)$ and $\omega_\ell^k(z)$ as abbreviations for $\mathcal{T}_\ell(\{z\})$ and $\omega_\ell^k(\{z\})$, respectively. Associated with the triangulation $\mathcal{T}_\ell$, we define the mesh-size function $h_\ell : \Gamma \to \mathbb{R}$ by
$$h_\ell(z) := \max_{T \in \mathcal{T}_\ell(z)} \operatorname{diam}(T) \quad \text{for all } z \in \Gamma.$$
We abuse the notation and write $h_\ell(T) = \operatorname{diam}(T)$ instead of $h_\ell(\mathring{T}) = \{\operatorname{diam}(T)\}$, where $\mathring{T}$ is the interior of $T$. We suppose that $\mathcal{T}_\ell$ is uniformly shape-regular, i.e.,
$$\max\bigl\{ h_\ell(T)^2/|T| : T \in \mathcal{T}_\ell \bigr\} \le \gamma, \qquad (13.3)$$
where $|T|$ denotes the measure of the triangle $T$. It follows from this assumption that
$$h_\ell(T) \le h_\ell(z) \lesssim h_\ell(T) \quad \text{for all } z \in \mathcal{N}_\ell \text{ and } T \in \mathcal{T}_\ell(z), \qquad (13.4)$$
where the hidden constant depends only on the $\gamma$-shape regularity of $\mathcal{T}_\ell$. As usual, for $\ell \ge 0$, we consider the boundary element space
$$V^\ell := \bigl\{ v \in C(\Gamma) : v|_T \in \mathcal{P}_1(T)\ \forall T \in \mathcal{T}_\ell \text{ and } v|_{\partial\Gamma} = 0 \bigr\}. \qquad (13.5)$$
The natural basis of $V^\ell$ consists of hat functions $\eta_z^\ell$, each taking the value 1 at its node $z$ and 0 at all other nodal points:
$$V^\ell = \operatorname{span}\bigl\{ \eta_z^\ell : z \in \mathcal{N}_\ell \bigr\}.$$
These basis functions satisfy
$$\|\nabla \eta_z^\ell\|_{L^\infty(\Gamma)} \lesssim 1/h_\ell(z), \qquad \operatorname{supp}(\eta_z^\ell) = \omega_\ell(z), \qquad z \in \mathcal{N}_\ell, \qquad (13.6)$$
where the hidden constant depends only on the $\gamma$-shape regularity of $\mathcal{T}_\ell$.

13.1.1.2 Newest vertex bisection algorithm (NVB algorithm)

The basic building block of an adaptive refinement algorithm is a method for dividing a triangle. In the newest vertex bisection algorithm, a reference edge $e_T$ is chosen for each triangle $T$, which is called a parent triangle. This triangle $T$ is bisected by joining the midpoint of $e_T$ with its opposite vertex, resulting in two child triangles $T_1$ and $T_2$. The midpoint of $e_T$ becomes the newest vertex of both $T_1$ and $T_2$. The edge opposite the newest vertex of a new triangle becomes its reference edge; see Figure 13.1. We note that repeated applications of this bisection result in only two different triangles up to geometric similarity. This procedure of triangle bisection is described in Algorithm 13.1.
Algorithm 13.1: Bisection of a triangle in NVB
1: Inputs: $T$: given triangle with a reference edge $e_T$.
2: Bisection: Bisect $T$ by joining the midpoint of $e_T$ with its opposite vertex to obtain $T_1$ and $T_2$.
3: Mark the edge opposite to the new vertex to be the reference edge of $T_i$, $i = 1, 2$.
4: In a local mesh refinement, bisect one or both child triangles $T_1$ and $T_2$ in the same manner only if it is required to avoid hanging nodes. In a uniform refinement, bisect both $T_1$ and $T_2$.
5: Outputs: Two, three or four child triangles (four child triangles in the case of uniform refinement). Up to geometric similarity, there are only two different types of triangles. See Figure 13.1.
Fig. 13.1 For each surface triangle T ∈ T , there is one fixed reference edge, indicated by the double line (left, top). Refinement of T is done by bisecting the reference edge, where its midpoint becomes a new node. The reference edges of the child triangles are opposite to this newest vertex (left, bottom). To avoid hanging nodes, iterated newest vertex bisection leads to 2, 3, or 4 child triangles if more than one edge, but at least the reference edge, is marked for refinement (right); see Algorithm 13.1.
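The single-triangle bisection of Algorithm 13.1 can be sketched in a few lines. The following is an illustration rather than the book's code, and the storage convention is an assumption: a triangle is a 3x2 array of vertex coordinates, with the newest vertex stored first so that the reference edge is always the edge opposite vertex 0.

```python
import numpy as np

def bisect(tri):
    """Bisect a triangle by its reference edge (cf. Algorithm 13.1).

    `tri` is a 3x2 array of vertex coordinates; by convention the
    reference edge joins tri[1] and tri[2] (the edge opposite tri[0]).
    The midpoint of the reference edge is the newest vertex; storing it
    at position 0 of each child makes the child's reference edge the
    edge opposite its newest vertex, as the algorithm requires.
    """
    mid = 0.5 * (tri[1] + tri[2])
    return (np.array([mid, tri[0], tri[1]]),
            np.array([mid, tri[2], tri[0]]))

def area(tri):
    u, v = tri[1] - tri[0], tri[2] - tri[0]
    return 0.5 * abs(u[0] * v[1] - u[1] * v[0])
```

Each call halves the area, so after g bisections a descendant has area |T0|/2^g, consistent with the generation count gen(T) introduced in Subsection 13.1.2.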
We now explain the newest vertex bisection (NVB) algorithm. Let $\mathcal{T}_0$ be an initial mesh. For each element $T \in \mathcal{T}_0$ we choose one reference edge $e_T$. There is no condition on the choice of reference edges in this initial mesh, but the longest edge is a good choice. Algorithm 13.2 explains how the mesh $\mathcal{T}_\ell$ is refined into $\mathcal{T}_{\ell+1}$. It is proved in [20, 152, 153] that only a finite number of steps is required to remove hanging nodes. It is also proved in [20] that there exists a positive constant $\theta$ such that if $\theta_T$ is the smallest angle in $T$, then
$$\theta_T \ge \theta \quad \forall T \in \mathcal{T}_\ell,\ \ell = 0, 1, 2, \ldots. \qquad (13.7)$$
The presentation of the NVB algorithm in Algorithm 13.2 follows that of [130]. The while loop in this algorithm terminates after a finite number of iterates because $\mathcal{E}_\ell^{(k-1)} \subset \mathcal{E}_\ell^{(k)} \subset \mathcal{E}_\ell$. It is also clear that the for loop terminates since $\widetilde{\mathcal{M}}_\ell \subset \mathcal{T}_\ell$. The right figures in Figure 13.2 demonstrate the steps in Algorithm 13.2.

Algorithm 13.2: Newest Vertex Bisection (NVB)
1: Inputs: $\mathcal{T}_\ell$ (a given refined mesh)
2: $\mathcal{M}_\ell \subset \mathcal{T}_\ell$ (set of marked elements given by some error indicator)
3: $\mathcal{E}_\ell := \{ e_T : T \in \mathcal{T}_\ell \}$ (set of all reference edges)
4: $\mathcal{E}_\ell^{(-1)} := \emptyset$
5: $\mathcal{E}_\ell^{(0)} := \{ e_T \in \mathcal{E}_\ell : T \in \mathcal{M}_\ell \}$ (set of reference edges of marked elements)
6: $k \leftarrow 0$
7: while $\mathcal{E}_\ell^{(k-1)} \neq \mathcal{E}_\ell^{(k)}$ do
8: $\quad k \leftarrow k+1$
9: $\quad \bar{\mathcal{E}}_\ell^{(k)} := \{ e_T \in \mathcal{E}_\ell \setminus \mathcal{E}_\ell^{(k-1)} : \text{another edge of } T \text{ belongs to } \mathcal{E}_\ell^{(k-1)} \}$ (set of reference edges of triangles marked with blue dashed lines in Figure 13.2)
10: $\quad \mathcal{E}_\ell^{(k)} := \mathcal{E}_\ell^{(k-1)} \cup \bar{\mathcal{E}}_\ell^{(k)}$
11: end while
12: $\mathcal{E}_\ell^{\star} := \mathcal{E}_\ell^{(k)}$ (all reference (red) edges of triangles marked with a star or a blue edge)
13: $\widetilde{\mathcal{M}}_\ell := \{ T \in \mathcal{T}_\ell : e_T \in \mathcal{E}_\ell^{\star} \}$ (all $T$ marked with a star or a blue edge in Figure 13.2)
14: for $T \in \widetilde{\mathcal{M}}_\ell$ do
15: $\quad$ Bisect $T$ by using Algorithm 13.1.
16: $\quad$ If a new reference edge of a new triangle belongs to $\mathcal{E}_\ell^{\star}$ (blue edge in Figure 13.2), continue to bisect this new triangle by using this edge. (This step removes hanging nodes.)
17: end for
18: Outputs: Refined mesh $\mathcal{T}_{\ell+1}$
19: A set of reference edges $\mathcal{E}_{\ell+1} := \{ e_T : T \in \mathcal{T}_{\ell+1} \}$
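The edge-marking closure of steps 4 to 13 can be sketched independently of any geometry. This is an illustrative implementation under assumed data structures: `ref_edge` maps each triangle to its reference-edge identifier, and `edges_of` maps it to the set of its three edge identifiers. The subsequent for loop (steps 14 to 17) then bisects each triangle of the resulting set via Algorithm 13.1.

```python
def nvb_closure(ref_edge, edges_of, marked):
    """Fixed-point marking loop of Algorithm 13.2 (steps 4-13).

    Returns the final edge set E* and the set of triangles whose
    reference edge lies in E* (the set M~ to be bisected).
    """
    E = {ref_edge[t] for t in marked}              # E^(0), step 5
    changed = True
    while changed:                                 # steps 7-11
        changed = False
        for t, e in ref_edge.items():
            # step 9: t's reference edge joins E if another of
            # t's edges already belongs to E
            if e not in E and edges_of[t] & E:
                E.add(e)
                changed = True
    marked_closure = {t for t, e in ref_edge.items() if e in E}   # step 13
    return E, marked_closure
```

The loop terminates because E only grows inside the finite set of reference edges, mirroring the inclusion E^(k-1) ⊂ E^(k) ⊂ E_ℓ noted above.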
Fig. 13.2 Left figures: Uniform triangulations. Right figures: NVB triangulations. Red dot: newest vertex. Asterisk: marked element. Red line: reference edge. Blue dashed line: marked edge. Triangles with dashed blue lines will also be bisected together with marked triangles. (The level function $\operatorname{level}_\ell(z)$ is defined in Subsection 13.1.2, and the value of $m$ indicates the level of uniform refinement.)
13.1.1.3 Hierarchical structures and the Scott–Zhang interpolation

It is clear from the above procedure that $\mathcal{N}_\ell \subset \mathcal{N}_{\ell+1}$. To design an optimal additive Schwarz scheme on locally refined meshes, we also define
$$\widetilde{\mathcal{N}}_0 := \mathcal{N}_0, \qquad \widetilde{\mathcal{N}}_\ell := \bigl(\mathcal{N}_\ell \setminus \mathcal{N}_{\ell-1}\bigr) \cup \bigl\{ z \in \mathcal{N}_{\ell-1} : \omega_\ell(z) \subsetneq \omega_{\ell-1}(z) \bigr\}, \quad \ell \ge 1. \qquad (13.8)$$
Thus $\widetilde{\mathcal{N}}_\ell$ contains all new nodes and the neighbouring old nodes $z \in \mathcal{N}_{\ell-1}$ for which the support of the new basis function at $z$ is a strict subset of the support of the old basis function at that point, namely $\operatorname{supp}(\eta_z^\ell) \subsetneq \operatorname{supp}(\eta_z^{\ell-1})$; see Figure 13.3. It is noted that, for $\ell = 0, 1, \ldots$,
$$\mathcal{N}_\ell = \bigcup_{k=0}^{\ell} \widetilde{\mathcal{N}}_k, \qquad \bigl(\mathcal{N}_\ell \setminus \widetilde{\mathcal{N}}_\ell\bigr) \subset \mathcal{N}_{\ell-1}, \qquad \eta_z^{\ell-1} = \eta_z^\ell \quad \forall z \in \mathcal{N}_\ell \setminus \widetilde{\mathcal{N}}_\ell. \qquad (13.9)$$
For $\ell \ge 0$ we define
$$V_z^\ell := \operatorname{span}\{\eta_z^\ell\}, \quad z \in \mathcal{N}_\ell, \qquad \widetilde{V}^\ell := \operatorname{span}\bigl\{ \eta_z^\ell : z \in \widetilde{\mathcal{N}}_\ell \bigr\} = \sum_{z \in \widetilde{\mathcal{N}}_\ell} V_z^\ell. \qquad (13.10)$$
Fig. 13.3 The left figure shows a mesh $\mathcal{T}_{\ell-1}$, where the two elements in the lower left corner are marked for refinement (green). Bisection of these two elements provides the mesh $\mathcal{T}_\ell$ (right), where two new nodes are created. The set $\widetilde{\mathcal{N}}_\ell$ consists of these new nodes plus those immediate neighbours $z$ with $\operatorname{supp}(\eta_z^\ell) \subsetneq \operatorname{supp}(\eta_z^{\ell-1})$ (red). The union of the supports of basis functions in $\widetilde{V}^\ell$ is given by the light- and dark-green areas in the right figure.
On each triangulation $\mathcal{T}_\ell$ we can define a Scott–Zhang interpolation operator $J_\ell : L^2(\Gamma) \to V^\ell$; see Subsection C.5.5. We note that in the definition of this interpolation, the choice of the triangle $T_z^\ell \in \mathcal{T}_\ell$ for each nodal point $z \in \mathcal{N}_\ell$ is arbitrary. Noting (13.9), we define $J_\ell$ by choosing $T_z^\ell$ such that
$$T_z^{\ell-1} = T_z^\ell \in \mathcal{T}_\ell \cap \mathcal{T}_{\ell-1} \quad \forall z \in \mathcal{N}_\ell \setminus \widetilde{\mathcal{N}}_\ell. \qquad (13.11)$$
Note that we also have
$$\eta_z^\ell = \eta_z^{\ell-1} \quad\text{and}\quad \psi_z^\ell = \psi_z^{\ell-1} \quad \forall z \in \mathcal{N}_\ell \setminus \widetilde{\mathcal{N}}_\ell, \qquad (13.12)$$
where $\psi_z^\ell$ is the $L^2$-dual basis function corresponding to $\eta_z^\ell$; see Subsection C.5.5. For convenience, we rewrite the definition (C.84):
$$J_\ell v(y) = \sum_{z' \in \mathcal{N}_\ell} \eta_{z'}^\ell(y) \int_{T_{z'}^\ell} \psi_{z'}^\ell(x)\, v(x)\, ds(x), \qquad y \in \Gamma,\quad \ell = 0, 1, 2, \ldots. \qquad (13.13)$$
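The weighted-average nature of (13.13) is easy to see in one dimension. The following sketch rests on assumptions not in the text: a 1D mesh on an interval, piecewise-linear basis functions, and the dual basis computed from the inverse local mass matrix. It evaluates the nodal values of such a quasi-interpolant and illustrates the reproduction of piecewise-linear functions, which is the projection property used later in the proof of Lemma 13.5.

```python
import numpy as np

def scott_zhang_values(x, v, nquad=4):
    """Nodal values (J v)(z_i) of a Scott-Zhang type interpolant on a
    1D mesh with nodes x (cf. (13.13)).  For node z_i we choose T_{z_i}
    to be the element on its right (for the last node: on its left) and
    integrate v against the L2-dual basis function psi on that element,
    obtained from the inverse of the local P1 mass matrix."""
    q, w = np.polynomial.legendre.leggauss(nquad)      # rule on [-1, 1]
    vals = np.empty_like(np.asarray(x, dtype=float))
    for i in range(len(x)):
        a, b = (x[i], x[i + 1]) if i + 1 < len(x) else (x[i - 1], x[i])
        h = b - a
        t = 0.5 * (a + b) + 0.5 * h * q                # quadrature points
        lam_a, lam_b = (b - t) / h, (t - a) / h        # local hat functions
        if a == x[i]:                                  # dual function of z_i
            psi = (2.0 / h) * (2.0 * lam_a - lam_b)
        else:
            psi = (2.0 / h) * (2.0 * lam_b - lam_a)
        vals[i] = 0.5 * h * np.dot(w, psi * v(t))      # int_{T_z} psi v
    return vals
```

Because psi is dual to the local hat functions, this J reproduces continuous piecewise linears exactly; in particular it is a projection onto the boundary element space, the property invoked via Lemma C.47 in the proof of Lemma 13.5.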
The sequence $\{J_\ell\}$ has the following properties.

Lemma 13.1. For any $v \in L^2(\Gamma)$ and $\ell = 1, 2, \ldots$,
$$(J_\ell - J_{\ell-1})v(z) = 0 \quad \forall z \in \mathcal{N}_\ell \setminus \widetilde{\mathcal{N}}_\ell, \qquad (13.14)$$
$$(J_\ell - J_{\ell-1})v \in \widetilde{V}^\ell. \qquad (13.15)$$
In particular,
$$|J_\ell v(z)| \lesssim \begin{cases} \dfrac{1}{h_\ell(z)}\,\|v\|_{L^2(\omega_\ell(z))} & \forall z \in \mathcal{N}_\ell, \\[1ex] \dfrac{1}{h_\ell(z)}\,\|v\|_{L^2(\omega_\ell^2(z))} & \forall z \in \mathcal{N}_{\ell+1} \setminus \mathcal{N}_\ell, \end{cases} \qquad (13.16)$$
and
$$|J_\ell v(z) - J_{\ell-1} v(z)| \lesssim \frac{1}{h_{\ell-1}(z)}\,\|v\|_{L^2(\omega_{\ell-1}^2(z))} \quad \forall z \in \widetilde{\mathcal{N}}_\ell. \qquad (13.17)$$
The constants depend only on the $\gamma$-shape regularity of $\mathcal{T}_\ell$.
Proof. It follows from the property $\eta_{z'}^\ell(z) = \delta_{z,z'}$, (13.11), and (13.12) that, for any $z \in \mathcal{N}_\ell \setminus \widetilde{\mathcal{N}}_\ell$,
$$(J_\ell - J_{\ell-1})v(z) = \sum_{z'\in\mathcal{N}_\ell} \eta_{z'}^\ell(z) \int_{T_{z'}^\ell} \psi_{z'}^\ell(x)\, v(x)\,ds(x) - \sum_{z'\in\mathcal{N}_{\ell-1}} \eta_{z'}^{\ell-1}(z) \int_{T_{z'}^{\ell-1}} \psi_{z'}^{\ell-1}(x)\, v(x)\,ds(x) = \int_{T_z^\ell} \psi_z^\ell(x)\, v(x)\,ds(x) - \int_{T_z^{\ell-1}} \psi_z^{\ell-1}(x)\, v(x)\,ds(x) = 0,$$
proving (13.14). It follows that
$$(J_\ell - J_{\ell-1})v = \sum_{z\in\mathcal{N}_\ell} (J_\ell - J_{\ell-1})v(z)\,\eta_z^\ell = \sum_{z\in\widetilde{\mathcal{N}}_\ell} (J_\ell - J_{\ell-1})v(z)\,\eta_z^\ell.$$
This means $(J_\ell - J_{\ell-1})v \in \operatorname{span}\bigl\{\eta_z^\ell : z\in\widetilde{\mathcal{N}}_\ell\bigr\}$, proving (13.15).

Next we prove (13.16). If $z \in \mathcal{N}_\ell$, we have from (C.83) that $\|\psi_z^\ell\|_{L^\infty(T_z^\ell)} \lesssim 1/|T_z^\ell|$. Moreover, $T_z^\ell \subset \omega_\ell(z)$, and thus
$$|J_\ell v(z)| \le \int_{T_z^\ell} |\psi_z^\ell(x)\, v(x)|\,ds(x) \le \|\psi_z^\ell\|_{L^\infty(T_z^\ell)}\, |T_z^\ell|^{1/2}\, \|v\|_{L^2(T_z^\ell)} \lesssim |T_z^\ell|^{-1/2}\, \|v\|_{L^2(\omega_\ell(z))} \lesssim h_\ell(z)^{-1}\, \|v\|_{L^2(\omega_\ell(z))},$$
where in the last step we used (13.3) and (13.4). If $z \in \mathcal{N}_{\ell+1}\setminus\mathcal{N}_\ell$, then $z$ is the midpoint of an edge in $\mathcal{T}_\ell$. Therefore, there exist $z_1, z_2 \in \mathcal{N}_\ell$ such that
$$J_\ell v(z) = \eta_{z_1}^\ell(z) \int_{T_{z_1}^\ell} \psi_{z_1}^\ell(x)\, v(x)\,ds(x) + \eta_{z_2}^\ell(z) \int_{T_{z_2}^\ell} \psi_{z_2}^\ell(x)\, v(x)\,ds(x),$$
implying
$$|J_\ell v(z)| \le \int_{T_{z_1}^\ell} |\psi_{z_1}^\ell(x)\, v(x)|\,ds(x) + \int_{T_{z_2}^\ell} |\psi_{z_2}^\ell(x)\, v(x)|\,ds(x).$$
Repeating the above argument yields
$$|J_\ell v(z)| \lesssim h_\ell(z_1)^{-1}\,\|v\|_{L^2(\omega_\ell(z_1))} + h_\ell(z_2)^{-1}\,\|v\|_{L^2(\omega_\ell(z_2))} \lesssim h_\ell(z)^{-1}\,\|v\|_{L^2(\omega_\ell^2(z))},$$
where in the last step we used $h_\ell(z) \le h_\ell(z_i)$ and $\omega_\ell(z_i) \subset \omega_\ell^2(z)$ for $i = 1, 2$, proving (13.16).

Finally, to prove (13.17) we invoke (13.16) to obtain, for $z \in \widetilde{\mathcal{N}}_\ell$,
$$|J_\ell v(z) - J_{\ell-1} v(z)| \le |J_\ell v(z)| + |J_{\ell-1} v(z)| \lesssim \frac{1}{h_\ell(z)}\,\|v\|_{L^2(\omega_\ell(z))} + \frac{1}{h_{\ell-1}(z)}\,\|v\|_{L^2(\omega_{\ell-1}^2(z))} \lesssim \frac{1}{h_{\ell-1}(z)}\,\|v\|_{L^2(\omega_{\ell-1}^2(z))},$$
where in the last step we used $h_\ell(z) \le h_{\ell-1}(z)$ and $\omega_\ell(z) \subset \omega_{\ell-1}^2(z)$. □
13.1.2 Level functions and uniform mesh refinements

For each element $T \in \mathcal{T}_\ell$, let $T_0 \in \mathcal{T}_0$ be the unique ancestor element satisfying $T \subset T_0$. The generation of $T$ is defined by
$$\operatorname{gen}(T) := \log_2\bigl(|T_0|/|T|\bigr) \in \mathbb{N}_0. \qquad (13.18)$$
Recalling the definition (13.1) of $\mathcal{T}_\ell(z)$, we assign to each node $z \in \mathcal{N}_\ell$ the level
$$\operatorname{level}_\ell(z) := \max_{T\in\mathcal{T}_\ell(z)} \left\lceil \frac{\operatorname{gen}(T)}{2} \right\rceil, \qquad (13.19)$$
where $\lceil\cdot\rceil$ denotes the ceiling function, i.e., $\lceil x\rceil = \min\{n\in\mathbb{N}_0 : x \le n\}$ for $x \ge 0$. Note that (13.19) slightly differs from the level function in [239]. However, both definitions of the level function are equivalent.

Besides the sequence of locally refined triangulations $\mathcal{T}_\ell$, we consider a sequence of uniformly refined triangulations $\widehat{\mathcal{T}}_m$ defined as follows. We start with $\widehat{\mathcal{T}}_0 := \mathcal{T}_0$, in which each triangle is assigned a reference edge (usually the longest side). Having defined $\widehat{\mathcal{T}}_m$, we define $\widehat{\mathcal{T}}_{m+1}$ by bisecting each triangle $T \in \widehat{\mathcal{T}}_m$ three times to obtain four child triangles $T_j \in \widehat{\mathcal{T}}_{m+1}$, $j = 1,\ldots,4$; see the bottom right figure in Figure 13.1 and the left figures in Figure 13.2. As a consequence, if $\widetilde{T} \in \widehat{\mathcal{T}}_m$ and $T_0 \in \widehat{\mathcal{T}}_0$ are such that $\widetilde{T} \subset T_0$ ($T_0$ is the ancestor of $\widetilde{T}$), then
$$\frac{|T_0|}{|\widetilde{T}|} = 4^m, \qquad m = 1, 2, \ldots. \qquad (13.20)$$
If we define
$$\widehat{h}_m := \max_{\widetilde{T}\in\widehat{\mathcal{T}}_m} \operatorname{diam}(\widetilde{T}), \qquad m \ge 0,$$
then
$$\widehat{h}_m \simeq 2^{-m}\, \widehat{h}_0, \qquad m \ge 0, \qquad (13.21)$$
where the constant is independent of $m$. We note that $\widehat{h}_m \simeq \operatorname{diam}(\widetilde{T})$ for all $\widetilde{T}\in\widehat{\mathcal{T}}_m$, $m \ge 0$. We define on each triangulation $\widehat{\mathcal{T}}_m$ a boundary element space $\widehat{V}^m$ of continuous piecewise linear functions on $\widehat{\mathcal{T}}_m$ vanishing on the boundary of $\Gamma$. We then have the following nested sequence:
$$\widehat{V}^0 \subset \widehat{V}^1 \subset \cdots \subset \widehat{V}^m \subset \cdots. \qquad (13.22)$$
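The bookkeeping in (13.18) and (13.19) is elementary arithmetic. A tiny sketch (the helper names are hypothetical, and triangles are represented only by their areas):

```python
import math

def gen(area_T, area_T0):
    # (13.18): gen(T) = log2(|T0| / |T|); each bisection halves the area
    return round(math.log2(area_T0 / area_T))

def level(adjacent_gens):
    # (13.19): level(z) = max over T in T(z) of ceil(gen(T) / 2)
    return max(math.ceil(g / 2) for g in adjacent_gens)
```

For instance, a triangle obtained from three bisections of an initial triangle has gen = 3 and contributes ceil(3/2) = 2 to the level of its vertices; one uniform refinement step (four children per triangle, |T0|/|T| multiplied by 4) raises gen by exactly 2 and hence every level by 1, which is the consistency exploited in Lemma 13.2 below.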
The following lemma gives the relation between this mesh size and the mesh-size function $h_\ell$ defined in Subsection 13.1.1.

Lemma 13.2. For any $z \in \mathcal{N}_\ell$, let $m = \operatorname{level}_\ell(z)$. Then
(i) $z \in \widehat{\mathcal{N}}_m$,
(ii) $\eta_z^\ell \in \widehat{V}^m$,
(iii) $h_\ell(z) \simeq \widehat{h}_m$,
where the constants are independent of $z$, $\ell$, and $L$.

Proof. First we prove (i). Note that for any $\widetilde{T} \in \widehat{\mathcal{T}}_m$, (13.18) and (13.20) give $\operatorname{gen}(\widetilde{T}) = 2m$. Hence it follows from (13.19) that
$$\operatorname{gen}(T) \le \operatorname{gen}(\widetilde{T}) \quad \forall T \in \mathcal{T}_\ell(z),\ \forall \widetilde{T}\in\widehat{\mathcal{T}}_m. \qquad (13.23)$$
Due to the way that both types of refinement (uniform and NVB) bisect triangles (Algorithm 13.1), and due to the initial choice $\widehat{\mathcal{T}}_0 = \mathcal{T}_0$ with the same reference edges, it follows from (13.23) that any $T \in \mathcal{T}_\ell(z)$ is a union of triangles $\widetilde{T}\in\widehat{\mathcal{T}}_m$. In particular, there exists $\widetilde{T}\in\widehat{\mathcal{T}}_m(z)$ such that $\widetilde{T} \subset T$ with $T\in\mathcal{T}_\ell(z)$. This means $z$ is a vertex of $\widetilde{T}$, i.e., $z \in \widehat{\mathcal{N}}_m$.

To prove Part (ii) we note that, for any $\widetilde{T}\in\widehat{\mathcal{T}}_m$, either $\widetilde{T} \subset \omega_\ell(z)$ or $\widetilde{T} \subset \overline{\Gamma\setminus\omega_\ell(z)}$ (i.e., no overlap of $\widetilde{T}$ and $\omega_\ell(z)$ besides the triangle boundary). In the second case, clearly $\eta_z^\ell|_{\widetilde{T}} \equiv 0$. In the first case, $\eta_z^\ell|_{\widetilde{T}} = \bigl(\eta_z^\ell|_T\bigr)|_{\widetilde{T}}$ is a linear function. Therefore, $\eta_z^\ell \in \widehat{V}^m$.

Finally, we prove (iii). For any $z \in \mathcal{N}_\ell$, there exists $T' \in \mathcal{T}_\ell(z)$ such that $\operatorname{gen}(T') \ge \operatorname{gen}(T)$ for all $T\in\mathcal{T}_\ell(z)$. It follows from (13.19) and the definition of $m$ that
$$\frac{\operatorname{gen}(T')}{2} \le m < \frac{\operatorname{gen}(T')}{2} + 1.$$
Noting that $|T'| \simeq h_\ell(T')^2$ and $|T_0| \simeq \widehat{h}_0^2$, we deduce by using (13.18)
$$\log_2\frac{\widehat{h}_0}{h_\ell(T')} \lesssim m \lesssim \log_2\frac{2\,\widehat{h}_0}{h_\ell(T')},$$
implying
$$\frac{h_\ell(T')}{2} \lesssim 2^{-m}\,\widehat{h}_0 \lesssim h_\ell(T').$$
The required result follows from (13.4) and (13.21). □
The analysis in this chapter requires some additional definitions and technical results. For a given node $z \in \widetilde{\mathcal{N}}_\ell$, it may happen that $z \in \widetilde{\mathcal{N}}_{\ell+k}$ with $\operatorname{level}_\ell(z) = \operatorname{level}_{\ell+k}(z)$ for some $k > 0$. We count how often a node $z \in \mathcal{N}_L$ with a fixed level $m \in \mathbb{N}_0$ shows up in the sets $\widetilde{\mathcal{N}}_\ell$. We therefore define, for $z \in \mathcal{N}_L$ and $m \in \mathbb{N}_0$,
$$\widetilde{K}_m(z) := \bigl\{ \ell \in \{0,1,\ldots,L\} : z \in \widetilde{\mathcal{N}}_\ell \text{ and } \operatorname{level}_\ell(z) = m \bigr\}. \qquad (13.24)$$
The following lemma from [239, Lemma 3.1] proves that the cardinality of the set $\widetilde{K}_m(z)$ is uniformly bounded.

Lemma 13.3. There exists a positive constant $d_0$ depending on $\theta$ in (13.7) such that for all positive integers $L$ and $m$, and for all $z\in\mathcal{N}_L$, the cardinality of $\widetilde{K}_m(z)$ satisfies $\operatorname{card}(\widetilde{K}_m(z)) \le d_0$.
Proof. The result is obvious if $\widetilde{K}_m(z)$ is empty. If it is not empty, we prove by contradiction. Assume that
$$\forall d_0 > 0,\ \exists L > 0,\ m > 0,\ z\in\mathcal{N}_L : \quad \operatorname{card}(\widetilde{K}_m(z)) > d_0.$$
Let $d := \operatorname{card}(\widetilde{K}_m(z))$. There exist $\ell_1,\ldots,\ell_d \in \{0,1,\ldots,L\}$ satisfying
$$\ell_1 < \ell_2 < \cdots < \ell_d, \qquad z\in\widetilde{\mathcal{N}}_{\ell_j}, \qquad \operatorname{level}_{\ell_j}(z) = m, \qquad j = 1,\ldots,d.$$
It follows from Part (iii) of Lemma 13.2 that
$$h_{\ell_j}(z) \simeq \widehat{h}_m, \qquad j = 1,\ldots,d.$$
Recalling the definition of $h_\ell(z)$ in Subsection 13.1.1, the above statement implies that there exists a triangle $T\in\mathcal{T}_{\ell_1}(z)$ such that $\operatorname{diam}(T) \simeq h_{\ell_1}(z) \simeq h_{\ell_d}(z)$. In other words, this triangle $T$ also belongs to $\mathcal{T}_{\ell_d}(z)$, i.e., it is not bisected at all while the triangulation $\mathcal{T}_{\ell_1}$ is in the refinement process to yield $\mathcal{T}_{\ell_d}$. On the other hand, if the mesh $\mathcal{T}_{\ell_1}$ is refined to have $\mathcal{T}_{\ell_2}$, then at least one triangle in $\mathcal{T}_{\ell_1}(z)$ is bisected since $z \in \widetilde{\mathcal{N}}_{\ell_1} \cap \widetilde{\mathcal{N}}_{\ell_2}$. The process to go from $\mathcal{T}_{\ell_1}$ to $\mathcal{T}_{\ell_d}$ and the fact that $z\in\widetilde{\mathcal{N}}_{\ell_j}$ for all $j = 1,\ldots,d$ imply that at least $d$ triangles in $\mathcal{T}_{\ell_1}(z)$ (the triangles can be repeated) are bisected. If $d_0$ is large enough, so is $d$, and consequently the triangle next to $T$ which shares the same edge with $T$ is bisected many times on the other two edges; see Figure 13.4. This contradicts the shape-regularity property (13.7). The above argument also implies that the constant $d_0$ depends on $\theta$. This completes the proof of the lemma. □

For each $z\in\mathcal{N}_\ell$, we further define the quantities
$$r_\ell(z) := \min_{T\in\mathcal{T}_{\ell-1}(\omega_{\ell-1}^2(z))} \operatorname{gen}(T) \quad\text{and}\quad R_\ell(z) := \lfloor r_\ell(z)/2 \rfloor, \qquad (13.25)$$
Fig. 13.4 Non-shape-regular triangulations. Note that $T \in \mathcal{T}_{\ell_j}(z)$ and that $h_{\ell_j}(z) \simeq \operatorname{diam}(T)$ for all $j = 1,\ldots,d$.
where $\lfloor x\rfloor := \max\{n\in\mathbb{N}_0 : n \le x\}$ is the floor function for $x \ge 0$. Analogously to the patch $\omega_\ell^k(z)$ defined by (13.2), we define the patch $\widehat{\omega}_m^k(z)$ corresponding to the uniformly refined triangulation $\widehat{\mathcal{T}}_m$.
Lemma 13.4.
(i) There exists a constant $c_0\in\mathbb{N}$ depending only on the initial triangulation $\mathcal{T}_0$ such that for all $z\in\mathcal{N}_\ell$
$$\operatorname{level}_\ell(z) \le R_\ell(z) + c_0.$$
(ii) For all $z\in\mathcal{N}_\ell$, let $m = R_\ell(z)$. Then for any $T\in\mathcal{T}_{\ell-1}(\omega_{\ell-1}^2(z))$, there exists an element $\widetilde{T}\in\widehat{\mathcal{T}}_m$ such that $T \subset \widetilde{T}$.
(iii) There exists $n\in\mathbb{N}_0$ depending only on the initial triangulation $\mathcal{T}_0$ such that for all $z\in\mathcal{N}_\ell$
$$\omega_\ell(z) \subseteq \omega_{\ell-1}^2(z) \subseteq \widehat{\omega}_{\operatorname{level}_\ell(z)}^n(z).$$

Proof. The proofs of (i) and (ii) follow the same lines as the proof of Lemma 3.3 in [239]. We reproduce them here for completeness.
Consider $z\in\mathcal{N}_\ell$. From the definitions of $\operatorname{level}_\ell(z)$ and $R_\ell(z)$ in (13.19) and (13.25), in order to prove Part (i) it suffices to prove
$$\operatorname{gen}(T) \le r_\ell(z) + c_1 \quad \forall T\in\mathcal{T}_\ell(z), \qquad (13.26)$$
where $c_1$ depends only on $\mathcal{T}_0$. For any $T\in\mathcal{T}_\ell(z)$, let $T'\in\mathcal{T}_{\ell-1}(z)$ be the father element of $T$. Then $|\operatorname{gen}(T) - \operatorname{gen}(T')| \le 2$. Furthermore, there exists a constant $k\in\mathbb{N}$ such that
$$|\operatorname{gen}(T') - \operatorname{gen}(T'')| \le k \quad \forall T''\in\mathcal{T}_{\ell-1}(\omega_{\ell-1}^2(z)),$$
i.e., the difference of the generations of two elements of one patch is uniformly bounded due to [82, Theorem 3.3]. Therefore,
$$\operatorname{gen}(T) \le |\operatorname{gen}(T)-\operatorname{gen}(T')| + |\operatorname{gen}(T')-\operatorname{gen}(T'')| + \operatorname{gen}(T'') \le c_1 + \operatorname{gen}(T'').$$
Taking the minimum over $T''\in\mathcal{T}_{\ell-1}(\omega_{\ell-1}^2(z))$ and using the definition of $r_\ell(z)$ in (13.25), we deduce (13.26), and thus we prove (i).

To prove Part (ii) we note that, for any $z\in\mathcal{N}_\ell$ and $T\in\mathcal{T}_{\ell-1}(\omega_{\ell-1}^2(z))$, definitions (13.18) and (13.25) give
$$\operatorname{gen}(T) \ge r_\ell(z) \ge 2R_\ell(z) = 2m.$$
From the definition of the uniform triangulation, for all $\widetilde{T}\in\widehat{\mathcal{T}}_m$ we have $\operatorname{gen}(\widetilde{T}) = 2m$. This implies
$$\operatorname{gen}(\widetilde{T}) \le \operatorname{gen}(T) \quad \forall \widetilde{T}\in\widehat{\mathcal{T}}_m,$$
and thus
$$|T| \le |\widetilde{T}| \quad \forall \widetilde{T}\in\widehat{\mathcal{T}}_m.$$
Since both types of triangulations (the uniform triangulation and the newest vertex bisection algorithm) define new triangles by bisecting a triangle through a midpoint of an edge (resulting in similar triangles), the above inequality implies that there exists $\widetilde{T}\in\widehat{\mathcal{T}}_m$ such that $T \subset \widetilde{T}$.

Finally, to prove Part (iii) we first note that $\omega_\ell(z) \subset \omega_{\ell-1}(z) \subset \omega_{\ell-1}^2(z)$. It remains to prove that there exists an integer $n$ which depends only on $\mathcal{T}_0$ such that
$$\omega_{\ell-1}^2(z) \subset \widehat{\omega}_{\operatorname{level}_\ell(z)}^n(z). \qquad (13.27)$$
Due to Part (ii) above, for any $T\in\mathcal{T}_{\ell-1}(\omega_{\ell-1}^2(z))$ there exists $\widetilde{T}\in\widehat{\mathcal{T}}_m$ such that $T\subset\widetilde{T}$. From the definition of the patch $\omega_{\ell-1}^2(z)$ and the patch $\widehat{\omega}_m^2(z)$, it follows that $\widetilde{T}\in\widehat{\mathcal{T}}_m(\widehat{\omega}_m^2(z))$. Hence $T \subset \widehat{\omega}_m^2(z)$, i.e., $\omega_{\ell-1}^2(z) \subset \widehat{\omega}_m^2(z)$.
On the other hand, with $c_0$ being the integer given in Part (i), each triangle $\widetilde{T}\in\widehat{\mathcal{T}}_m(\widehat{\omega}_m^2(z))$ is bisected into $4^{c_0}$ triangles $\widetilde{T}_j\in\widehat{\mathcal{T}}_{m+c_0}$, $j=1,\ldots,4^{c_0}$, such that
$$\widetilde{T} = \bigcup_{j=1}^{4^{c_0}} \widetilde{T}_j.$$
In particular, there exists $n\in\mathbb{N}$ with $n \le 8 c_0$ such that $\widetilde{T}_j \in \widehat{\mathcal{T}}_{m+c_0}(\widehat{\omega}_{m+c_0}^n(z))$, so that
$$\omega_{\ell-1}^2(z) \subset \widehat{\omega}_{m+c_0}^n(z).$$
Part (i) states that $\operatorname{level}_\ell(z) \le m + c_0$. Hence, by the definition of patches, we have
$$\widehat{\omega}_{m+c_0}^n(z) \subset \widehat{\omega}_{\operatorname{level}_\ell(z)}^n(z).$$
Both set inclusions above imply (13.27), completing the proof of the lemma. □
13.2 Multilevel Preconditioners for the Hypersingular Integral Equation

In this section we present two adaptive schemes for the hypersingular integral equation based on two multilevel additive Schwarz preconditioners. We recall the Galerkin boundary element approximation: For some fixed positive integer $L$, find $u_L \in V^L$ satisfying
$$a_W(u_L, v) = \langle g, v\rangle \quad \forall v \in V^L.$$
Here $V^L$ is defined as in (13.5).
13.2.1 Local multilevel diagonal preconditioner (LMD preconditioner)

A subspace decomposition of the boundary element space $V^L$ in the form
$$V^L = \sum_{\ell=0}^{L} \sum_{z\in\widetilde{\mathcal{N}}_\ell} V_z^\ell \qquad (13.28)$$
as usual defines an additive Schwarz preconditioner, which will be called the local multilevel diagonal (LMD) preconditioner. Here $V_z^\ell$ and $\widetilde{\mathcal{N}}_\ell$ are defined by (13.10) and (13.8), respectively. We note that this subspace decomposition is valid due to (13.9).
Similarly to (5.5) in Chapter 5, the multilevel additive Schwarz operator is defined by
$$P_{\mathrm{LMD}} = \sum_{\ell=0}^{L} \sum_{z\in\widetilde{\mathcal{N}}_\ell} P_{V_z^\ell}, \qquad (13.29)$$
where $P_{V_z^\ell} : V^L \to V_z^\ell$ is defined for all $v\in V^L$ by
$$a_W(P_{V_z^\ell} v, w) = a_W(v, w) \quad \forall w \in V_z^\ell. \qquad (13.30)$$
The matrix form of this preconditioner will be explained in Section 13.3. The analysis also requires the $L^2$-projection $Q_j : L^2(\Gamma) \to \widehat{V}^j$ for $j = 0, 1, \ldots$, defined by
$$\langle Q_j v, w\rangle_{L^2(\Gamma)} = \langle v, w\rangle_{L^2(\Gamma)} \quad \forall v\in L^2(\Gamma),\ w\in \widehat{V}^j. \qquad (13.31)$$
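In coefficient space each projection $P_{V_z^\ell}$ is a rank-one operator, so the additive Schwarz operator can be assembled directly from the Galerkin matrix and the coefficient vectors of the basis functions. The sketch below is illustrative (the function name and data layout are assumptions, not the book's implementation); restricted to a single level with the fine-grid hat functions themselves, it reduces exactly to diagonal (Jacobi) scaling, which is the origin of the name "multilevel diagonal preconditioner".

```python
import numpy as np

def additive_schwarz(A, bases):
    """Matrix of P = sum_j P_j for one-dimensional subspaces span{b_j}
    (cf. (13.29)-(13.30)): in coefficient space,
        P_j v = (b_j^T A v / b_j^T A b_j) b_j,
    where A is the (SPD) Galerkin matrix and each b_j is the coefficient
    vector of one basis function in the fine-grid basis."""
    P = np.zeros_like(A, dtype=float)
    for b in bases:
        P += np.outer(b, b @ A) / (b @ A @ b)
    return P

# one level, unit vectors as "hat functions": P is diagonal scaling D^-1 A
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
P = additive_schwarz(A, list(np.eye(2)))
```

With all levels of the decomposition (13.28) included, Theorem 13.1 below states that the eigenvalues of the resulting operator stay bounded away from 0 and infinity independently of $L$.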
Lemma 13.5 (Stability of the decomposition). Any $v \in V^L$ admits a decomposition
$$v = \sum_{\ell=0}^{L} \sum_{z\in\widetilde{\mathcal{N}}_\ell} v_z^\ell, \qquad v_z^\ell \in V_z^\ell, \qquad (13.32)$$
that satisfies
$$\sum_{\ell=0}^{L} \sum_{z\in\widetilde{\mathcal{N}}_\ell} a_W(v_z^\ell, v_z^\ell) \lesssim a_W(v,v). \qquad (13.33)$$
The constant is independent of $\ell = 0,\ldots,L$.

Proof. Recalling the definition of $J_\ell$ in (13.13), for any $v \in V^L$ with some fixed positive integer $L$ we can write
$$v = J_L v = \sum_{\ell=0}^{L} (J_\ell - J_{\ell-1})v,$$
where we set $J_{-1} := 0$. Here we used the fact that $J_L$ is a projection; see Lemma C.47. Let $v^\ell := (J_\ell - J_{\ell-1})v$, $\ell = 0,\ldots,L$. Then it follows from (13.15) that $v^\ell \in \widetilde{V}^\ell$. Hence we can write
$$v^\ell = \sum_{z\in\widetilde{\mathcal{N}}_\ell} v_z^\ell, \quad\text{where}\quad v_z^\ell := v^\ell(z)\,\eta_z^\ell \in V_z^\ell, \qquad \ell = 0,\ldots,L,$$
so that $v$ admits the decomposition (13.32). Noting (13.6), in order to prove (13.33) it suffices to prove
$$\sum_{\ell=0}^{L} \sum_{z\in\widetilde{\mathcal{N}}_\ell} \|v_z^\ell\|^2_{\widetilde{H}^{1/2}(\omega_\ell(z))} \lesssim \|v\|^2_{\widetilde{H}^{1/2}(\Gamma)}. \qquad (13.34)$$
We have, by using Lemma A.13 and Lemma C.12,
$$\|v_z^\ell\|^2_{\widetilde{H}^{1/2}(\omega_\ell(z))} = |v^\ell(z)|^2\,\|\eta_z^\ell\|^2_{\widetilde{H}^{1/2}(\omega_\ell(z))} \lesssim h_\ell(z)\,|v^\ell(z)|^2. \qquad (13.35)$$
Hence it suffices to estimate $|v^\ell(z)|^2$ for $z\in\widetilde{\mathcal{N}}_\ell$ and $\ell = 0,\ldots,L$. Recalling Lemma 13.4, we define $m_z = \operatorname{level}_\ell(z) - c_0$, which can be a negative integer. For negative integers $j$ we define $\widehat{V}^j := \widehat{V}^0$ and $Q_j := Q_0$; see (13.22) and (13.31). Denoting $r_z = R_\ell(z)$, we deduce from Lemma 13.4 Part (i) that $m_z \le r_z$, so that $Q_{m_z} v \in \widehat{V}^{m_z} \subset \widehat{V}^{r_z}$. It is also given by Part (ii) of this lemma that any $T \in \mathcal{T}_{\ell-1}(\omega_{\ell-1}^2(z))$ is a subset of some $\widetilde{T}\in\widehat{\mathcal{T}}_{r_z}$. This implies that $(Q_{m_z} v)|_T$ is a linear function. Since $J_{\ell-1}$ is a projection, it follows that
$$J_{\ell-1} Q_{m_z} v = Q_{m_z} v \quad\text{on } T,$$
and in particular $J_{\ell-1} Q_{m_z} v(z) = Q_{m_z} v(z)$. On the other hand, since $\omega_\ell(z) \subset \omega_{\ell-1}(z) \subset \omega_{\ell-1}^2(z)$, the restriction of $Q_{m_z} v$ to any triangle $T\in\mathcal{T}_\ell(\omega_\ell(z))$ is also a linear function. Consequently, we also have $J_\ell Q_{m_z} v(z) = Q_{m_z} v(z)$, so that $(J_\ell - J_{\ell-1}) Q_{m_z} v(z) = 0$. Therefore, we can write
$$|v^\ell(z)|^2 = \bigl| (J_\ell - J_{\ell-1})\bigl(v - Q_{m_z} v\bigr)(z) \bigr|^2.$$
It follows from inequality (13.17) that
$$|v^\ell(z)|^2 \lesssim h_\ell(z)^{-2}\, \|v - Q_{m_z} v\|^2_{L^2(\omega_{\ell-1}^2(z))}.$$
Altogether, Lemma 13.2 Part (iii), (13.35), and the above inequality yield
$$\|v_z^\ell\|^2_{\widetilde{H}^{1/2}(\omega_\ell(z))} \lesssim \widehat{h}_{\operatorname{level}_\ell(z)}^{-1}\, \|v - Q_{m_z} v\|^2_{L^2(\omega_{\ell-1}^2(z))}.$$
We can now estimate the left-hand side of (13.34) as follows:
$$\sum_{\ell=0}^{L}\sum_{z\in\widetilde{\mathcal{N}}_\ell} \|v_z^\ell\|^2_{\widetilde{H}^{1/2}(\omega_\ell(z))} \lesssim \sum_{\ell=0}^{L}\sum_{z\in\widetilde{\mathcal{N}}_\ell} \widehat{h}_{\operatorname{level}_\ell(z)}^{-1}\, \|v-Q_{m_z}v\|^2_{L^2(\omega_{\ell-1}^2(z))} = \sum_{m=0}^{\infty}\sum_{\ell=0}^{L} \sum_{\substack{z\in\widetilde{\mathcal{N}}_\ell \\ \operatorname{level}_\ell(z)=m}} \widehat{h}_m^{-1}\, \|v-Q_{m_z}v\|^2_{L^2(\omega_{\ell-1}^2(z))}.$$
Invoking Lemma 13.4 Part (iii) and then Lemma 13.3, we deduce
$$\sum_{\ell=0}^{L}\sum_{z\in\widetilde{\mathcal{N}}_\ell} \|v_z^\ell\|^2_{\widetilde{H}^{1/2}(\omega_\ell(z))} \lesssim \sum_{m=0}^{\infty}\sum_{\ell=0}^{L} \sum_{\substack{z\in\widetilde{\mathcal{N}}_\ell \\ \operatorname{level}_\ell(z)=m}} \widehat{h}_m^{-1}\, \|v-Q_{m_z}v\|^2_{L^2(\widehat{\omega}_m^n(z))} = \sum_{m=0}^{\infty} \sum_{z\in\mathcal{N}_L} \sum_{\ell\in\widetilde{K}_m(z)} \widehat{h}_m^{-1}\, \|v-Q_{m_z}v\|^2_{L^2(\widehat{\omega}_m^n(z))} = \sum_{m=0}^{\infty} \sum_{z\in\mathcal{N}_L\cap\widehat{\mathcal{N}}_m} \sum_{\ell\in\widetilde{K}_m(z)} \widehat{h}_m^{-1}\, \|v-Q_{m_z}v\|^2_{L^2(\widehat{\omega}_m^n(z))},$$
where in the last step we used the fact that $z\in\widehat{\mathcal{N}}_m$ for $z\in\mathcal{N}_\ell$ with $\operatorname{level}_\ell(z)=m$, which is given by Lemma 13.2 Part (i). Moreover, Lemma 13.3 gives $\operatorname{card}(\widetilde{K}_m(z)) \le d_0$. Hence, the uniform shape regularity of $\widehat{\mathcal{T}}_m$ and the definition $Q_j = Q_0$ for $j<0$ yield
$$\sum_{\ell=0}^{L}\sum_{z\in\widetilde{\mathcal{N}}_\ell} \|v_z^\ell\|^2_{\widetilde{H}^{1/2}(\omega_\ell(z))} \lesssim \sum_{m=0}^{\infty} \sum_{z\in\widehat{\mathcal{N}}_m} \widehat{h}_m^{-1}\, \|v-Q_{m_z}v\|^2_{L^2(\widehat{\omega}_m^n(z))} \le \sum_{m=0}^{\infty} \widehat{h}_m^{-1}\, \|v-Q_{m_z}v\|^2_{L^2(\Gamma)} \lesssim \sum_{m=0}^{\infty} \widehat{h}_m^{-1}\, \|v-Q_m v\|^2_{L^2(\Gamma)}.$$
Invoking Lemma C.35, we obtain (13.34) and complete the proof of the lemma. □
We now prove the strengthened Cauchy–Schwarz inequality. Let
$$M := \max_{z\in\mathcal{N}_L} \operatorname{level}_L(z)$$
denote the maximal level of all nodes $z\in\mathcal{N}_L$. It follows from Lemma 13.2 that $\mathcal{N}_L \subset \widehat{\mathcal{N}}_M$, which then implies $V^L \subset \widehat{V}^M$. Defining
$$P_m^L := \sum_{\ell=0}^{L} \sum_{\substack{z\in\widetilde{\mathcal{N}}_\ell \\ \operatorname{level}_\ell(z)=m}} P_{V_z^\ell}, \qquad (13.36)$$
the additive Schwarz operator $P_{\mathrm{LMD}}$ defined by (13.29) can be rewritten as
$$P_{\mathrm{LMD}} = \sum_{m=0}^{M} P_m^L. \qquad (13.37)$$

Lemma 13.6 (Strengthened Cauchy–Schwarz inequality). For $0 \le k \le m \le M$ and for any $v\in\widehat{V}^k$,
$$a_W(P_m^L v, v) \lesssim 2^{-(m-k)}\, a_W(v,v). \qquad (13.38)$$
The constant depends only on $\Gamma$ and the initial triangulation $\mathcal{T}_0$.
Proof. The proof is a generalisation of that of Lemma 5.2. Since $V_z^\ell = \operatorname{span}\{\eta_z^\ell\}$, the projection $P_{V_z^\ell}$ can be explicitly represented as
$$P_{V_z^\ell} v = \frac{a_W(v,\eta_z^\ell)}{a_W(\eta_z^\ell,\eta_z^\ell)}\,\eta_z^\ell,$$
so that
$$a_W(P_m^L v, v) = \sum_{\ell=0}^{L} \sum_{\substack{z\in\widetilde{\mathcal{N}}_\ell \\ \operatorname{level}_\ell(z)=m}} \frac{a_W(v,\eta_z^\ell)^2}{a_W(\eta_z^\ell,\eta_z^\ell)}.$$
Hölder's inequality, Lemma B.3 and Lemma C.12 give
$$\frac{a_W(v,\eta_z^\ell)^2}{a_W(\eta_z^\ell,\eta_z^\ell)} = \frac{\langle Wv,\eta_z^\ell\rangle_{L^2(\Gamma)}^2}{a_W(\eta_z^\ell,\eta_z^\ell)} \lesssim \frac{\|Wv\|^2_{L^2(\omega_\ell(z))}\,\|\eta_z^\ell\|^2_{L^2(\Gamma)}}{\|\eta_z^\ell\|^2_{\widetilde{H}^{1/2}(\omega_\ell(z))}} \lesssim h_\ell(z)\,\|Wv\|^2_{L^2(\omega_\ell(z))} \lesssim 2^{-(m-k)}\,\widehat{h}_k\,\|Wv\|^2_{L^2(\omega_\ell(z))},$$
where in the last step we used Lemma 13.2 Part (iii) and (13.21). Lemma 13.4 Part (iii) implies
$$\frac{a_W(v,\eta_z^\ell)^2}{a_W(\eta_z^\ell,\eta_z^\ell)} \lesssim 2^{-(m-k)}\,\widehat{h}_k\,\|Wv\|^2_{L^2(\widehat{\omega}_m^n(z))}.$$
Consequently, with the help of Lemma 13.3,
$$a_W(P_m^L v, v) \lesssim \sum_{z\in\widehat{\mathcal{N}}_m\cap\mathcal{N}_L} \sum_{\ell\in\widetilde{K}_m(z)} 2^{-(m-k)}\,\widehat{h}_k\,\|Wv\|^2_{L^2(\widehat{\omega}_m^n(z))} \lesssim \sum_{z\in\widehat{\mathcal{N}}_m} 2^{-(m-k)}\,\widehat{h}_k\,\|Wv\|^2_{L^2(\widehat{\omega}_m^n(z))} \lesssim 2^{-(m-k)}\,\widehat{h}_k\,\|Wv\|^2_{L^2(\Gamma)}. \qquad (13.39)$$
It follows from [12, Corollary 3.2] (inverse estimate) that
$$a_W(P_m^L v, v) \lesssim 2^{-(m-k)}\,\|v\|^2_{\widetilde{H}^{1/2}(\Gamma)} \simeq 2^{-(m-k)}\, a_W(v,v).$$
We note that the inverse property holds not only for uniform triangulations $\widehat{\mathcal{T}}_k$ but also for shape-regular triangulations and higher-order polynomials; see [12, Corollary 3.2]. □
Similarly to the case of two-dimensional problems, we deduce from the strengthened Cauchy–Schwarz inequality the following result.

Lemma 13.7. The multilevel additive Schwarz operator $P^{\mathrm{LMD}}$ satisfies
$$a_W(P^{\mathrm{LMD}} v, v) \lesssim a_W(v, v) \quad \text{for any } v \in V^L,$$
where the constant is independent of the number of levels and the number of mesh points.

Proof. The proof follows exactly the same lines as that of Lemma 5.3 and is omitted. $\square$

Lemma 13.5 and Proposition 2.1 yield $\lambda_{\min}(P^{\mathrm{LMD}}) \gtrsim 1$. Moreover, Lemma 13.7 and Lemma 2.10 yield $\lambda_{\max}(P^{\mathrm{LMD}}) \lesssim 1$. Therefore we have the following result.

Theorem 13.1. The multilevel additive Schwarz operator defined by (13.29) has a bounded condition number:
$$\kappa(P^{\mathrm{LMD}}) \lesssim 1.$$
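Theorem 13.1 can be illustrated numerically. The sketch below (Python) builds a multilevel diagonal-scaling preconditioner with the same level-by-level sum structure as the operators above; since the Galerkin matrix of $a_W(\cdot,\cdot)$ is not assembled here, the one-dimensional Laplacian stiffness matrix is used as a stand-in (an assumption for illustration only). The qualitative behaviour is the point: the preconditioned condition number stays bounded while the unpreconditioned one grows with the number of levels.

```python
import numpy as np

def stiffness(n):
    # 1D Laplacian stiffness matrix for hat functions on a uniform mesh with n cells
    h = 1.0 / n
    return (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

def prolongation(nc):
    # linear interpolation from a mesh with nc cells to one with 2*nc cells
    P = np.zeros((2 * nc - 1, nc - 1))
    for j in range(nc - 1):
        P[2 * j, j] = 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] = 0.5
    return P

def multilevel_diagonal(L):
    # B = sum over levels of  I_l diag(A_l)^{-1} I_l^T :
    # a sum of one-dimensional (nodal) corrections realised by diagonal scaling
    n = 2 ** L
    A = stiffness(n)
    B = np.diag(1.0 / np.diag(A))
    I = np.eye(n - 1)
    for l in range(L - 1, 0, -1):
        I = I @ prolongation(2 ** l)      # prolongation from level l to level L
        B = B + I @ np.diag(1.0 / np.diag(stiffness(2 ** l))) @ I.T
    return A, B

def cond(M):
    ev = np.linalg.eigvalsh(M)
    return ev[-1] / ev[0]

for L in range(2, 7):
    A, B = multilevel_diagonal(L)
    # B A is similar to the symmetric matrix B^{1/2} A B^{1/2}, so its spectrum is real
    ev = np.sort(np.linalg.eigvals(B @ A).real)
    print(L, round(cond(A), 1), round(ev[-1] / ev[0], 2))
```

The printed third column (the preconditioned condition number) remains small for all $L$, whereas the second column grows like $h^{-2}$.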
13.2.2 Global multilevel diagonal preconditioner (GMD preconditioner)

For this preconditioner, the boundary element space $V^L$ is decomposed by
$$V^L = \sum_{\ell=0}^{L} \sum_{z \in \mathcal{N}_\ell} V_z. \tag{13.40}$$
See (5.2) for the corresponding two-dimensional version of this decomposition. The global multilevel additive Schwarz operator $P^{\mathrm{GMD}}$ is defined by
$$P^{\mathrm{GMD}} = \sum_{\ell=0}^{L} \sum_{z \in \mathcal{N}_\ell} P_{V_z}, \tag{13.41}$$
where the projection $P_{V_z}$ is defined by (13.30). Since $\mathcal{N}_\ell^* \subset \mathcal{N}_\ell$, see (13.9), the stability of the decomposition (13.40) can be derived from that of (13.28).

Lemma 13.8 (Stability of decomposition). Any $v \in V^L$ admits a decomposition
$$v = \sum_{\ell=0}^{L} \sum_{z \in \mathcal{N}_\ell} v_z, \qquad v_z \in V_z, \tag{13.42}$$
that satisfies
$$\sum_{\ell=0}^{L} \sum_{z \in \mathcal{N}_\ell} a_W(v_z, v_z) \lesssim a_W(v, v). \tag{13.43}$$
The constant is independent of $\ell = 0, \dots, L$.

Proof. The decomposition (13.42) can be chosen to be the same as (13.32) if one notes that $\mathcal{N}_\ell^* \subset \mathcal{N}_\ell$. Thus (13.43) follows. $\square$
A strengthened Cauchy–Schwarz inequality can also be proved. However, this is not an optimal result. Similarly to (13.36) we define
$$P_m^L := \sum_{\ell=0}^{L} \sum_{\substack{z \in \mathcal{N}_\ell \\ \operatorname{level}(z)=m}} P_{V_z}$$
and have
$$P^{\mathrm{GMD}} = \sum_{m=0}^{M} P_m^L.$$

Lemma 13.9 (Strengthened Cauchy–Schwarz inequality). For $0 \le k \le m \le M$ and for any $v \in V_k$,
$$a_W(P_m^L v, v) \lesssim 2^{-(m-k)}(1+L)\, a_W(v, v). \tag{13.44}$$
The constant depends only on $\Gamma$ and the initial triangulation $\mathcal{T}_0$.

Proof. The proof follows along the same lines as that of Lemma 13.6 with some obvious modifications. The set $\mathcal{N}_\ell^*$ is replaced by $\mathcal{N}_\ell$, and the set $K_m^*(z)$ is replaced by $K_m(z)$ defined by, cf. (13.24),
$$K_m(z) := \big\{ \ell \in \{0, 1, \dots, L\} : z \in \mathcal{N}_\ell \text{ and } \operatorname{level}(z) = m \big\}.$$
Differently from Lemma 13.3, we now have
$$\operatorname{card}(K_m) \le 1 + L.$$
This upper bound $1+L$ comes into play in the right-hand side of (13.39), resulting in the final estimate (13.44). $\square$

As a result of this strengthened Cauchy–Schwarz inequality, we have the following lemma, which is similar to Lemma 13.7.

Lemma 13.10. The multilevel additive Schwarz operator $P^{\mathrm{GMD}}$ satisfies
$$a_W(P^{\mathrm{GMD}} v, v) \lesssim (1+L)\, a_W(v, v) \quad \text{for any } v \in V^L,$$
where the constant is independent of the number of levels and the number of mesh points.
Thus we obtain the following bound for the condition number of $P^{\mathrm{GMD}}$.

Theorem 13.2. The multilevel additive Schwarz operator defined by (13.41) has a bounded condition number:
$$\kappa(P^{\mathrm{GMD}}) \lesssim 1 + L.$$

Remark 13.1. The same preconditioner as the GMD preconditioner is studied in [4], where an optimal bound for the condition number is obtained. However, the local mesh refinement used in that study (which partitions a triangle into four subtriangles by connecting the midpoints of its edges) requires some conditions which are not satisfied by the newest vertex bisection algorithm; see [4, pages 391 and 405]. For example, it is required in that study that if $T, T' \in \mathcal{T}_\ell$ share a common edge or node, then $|\mathrm{gen}(T) - \mathrm{gen}(T')| \le 1$. This condition is not satisfied by the NVB algorithm; see the last subfigure in the right column of Figure 13.2. (In the language of [4], $\mathrm{gen}(T)$ is the level of $T$.)
13.3 Numerical Experiments

We report here some numerical experiments carried out in [75]. They are reprinted with copyright permission.

Experiment 1: In this experiment, $\Gamma = (-1,1) \times \{0\}$ is a slit in $\mathbb{R}^2$. The right-hand side of the equation is $g(x) = 1$, and thus the exact solution is $u(x) = 2\sqrt{1-x^2}$. Figure 13.5 shows the condition numbers and numbers of iterations when different preconditioners are used: the local multilevel diagonal and global multilevel diagonal preconditioners, and the hierarchical basis preconditioner; see e.g. [231]. These numbers are compared with the corresponding numbers when no preconditioner (denoted by A) is used.
Experiment 2: In this experiment, $\Gamma$ is the boundary of an L-shaped domain $\Omega$ in $\mathbb{R}^3$ defined by $\Omega = \big([-1,1] \times [-1,1] \times [0,1]\big) \setminus \big([0,1] \times [0,1] \times [0,1]\big)$. This is a case of a closed boundary $\Gamma$. The analytical solution of the hypersingular integral equation is chosen to be the Neumann trace of the harmonic function given in cylindrical coordinates by $u(r, \varphi, z) = r^{2/3}\cos(2\varphi/3)$. Again, Figure 13.6 shows the condition numbers and numbers of iterations of the two multilevel preconditioners (LMD and GMD) together with the preconditioner by diagonal scaling. These numbers are compared with those of the unpreconditioned matrix A.
Fig. 13.5 Experiment 1: Condition numbers (left) of the preconditioned matrices PLMD , PGMD , PHB and the unpreconditioned matrix A as well as the corresponding number of iterations (right) with Γ = (−1, 1) × {0}
Fig. 13.6 Experiment 2: Condition numbers (left) of the preconditioned matrices $P_{\mathrm{LMD}}$, $P_{\mathrm{GMD}}$, $P_{\mathrm{DIAG}}$ and the unpreconditioned matrix A, as well as the corresponding numbers of iterations (right), with $\Gamma$ being the boundary of an L-shaped domain in $\mathbb{R}^3$
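The right panels of Figures 13.5 and 13.6 report iteration counts as a function of the number of elements $N$. A driver measuring this quantity for the unpreconditioned case has the following shape (Python sketch; the one-dimensional Laplacian replaces the Galerkin BEM matrices, an assumption, so only the qualitative growth of the iteration count with $N$ is meaningful here):

```python
import numpy as np

def laplacian(n):
    # stand-in model matrix; the experiments use Galerkin matrices of W resp. V
    h = 1.0 / n
    return (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

def cg_iterations(A, b, tol=1e-8, maxit=10000):
    # conjugate gradients; only the iteration count is recorded, as in the figures
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    nb = np.linalg.norm(b)
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * nb:
            return k
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return maxit

for n in (16, 32, 64, 128):
    print(n, cg_iterations(laplacian(n), np.ones(n - 1)))
```

Without preconditioning the count grows steadily with $n$; with a good multilevel preconditioner it stays essentially constant, which is what the figures display for LMD and GMD.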
13.4 A Remark on the Weakly-Singular Integral Equation

For the weakly-singular integral equation, a local multilevel additive Schwarz preconditioner for a sequence of locally-refined meshes is studied in [79]. That article also presents an adaptive algorithm which steers the local mesh refinement as well as the stopping of the PCG iteration. We refer interested readers to this paper for details.
Part IV
FEM–BEM Coupling
Chapter 14
FEM-BEM Coupling
The coupling of boundary elements and finite elements is a powerful tool for the numerical treatment of boundary value problems on unbounded domains and, in particular, of interface problems. It allows us to benefit from the best of both worlds: boundary elements for treating unbounded domains and finite elements for treating non-homogeneity and non-linearity of the equations; see [245]. The first mathematical treatment of the FEM-BEM coupling is that of the so-called non-symmetric coupling, which includes the Johnson–Nédélec coupling and the Bielak–MacCamy coupling; see [33, 44, 129]. Even though numerical experiments show that the method works well with general boundaries, the analysis in these papers is confined to smooth boundaries. In the search for coupling methods which are supported by mathematical analysis and suitable for Lipschitz boundaries, the symmetric coupling originated independently in [62, 102]. As a consequence, plenty of studies on preconditioning of the symmetric coupling have been carried out [51, 81, 113, 117, 156]. Meanwhile, little has been known on preconditioning of the non-symmetric coupling. An optimal iterative process is suggested in [150], which requires smooth boundaries. Since the first analysis for the Johnson–Nédélec coupling on Lipschitz boundaries appeared in [187], researchers have paid more attention to preconditioning for this type of coupling. In this chapter, we present preconditioners for the symmetric and non-symmetric FEM-BEM coupling methods. In Section 14.1 we first briefly introduce various variational formulations for the standard transmission problem which give rise to both coupling techniques. The symmetric coupling results from the use of all four operators in the Calderón projector, whereas in the non-symmetric coupling the hypersingular integral operator is not used. More information on FEM-BEM coupling can be found in [92, 205].
In the remaining sections, we discuss in detail the analysis for preconditioning each coupling technique. The symmetric coupling results in a symmetric matrix system with saddle point structure; see (14.14). We investigate how to precondition that matrix with the so-called three-block and two-block preconditioners. The HMCR, see Subsection 2.8.3, is used to solve this system. Interestingly, with a minor modification (by changing the sign in the second equation), the matrix turns out to be
non-symmetric but positive definite and thus can be solved by the GMRES. Both approaches will be discussed in Section 14.2. Obviously, the non-symmetric coupling leads to a non-symmetric matrix system, which will be solved by the GMRES. This is discussed in Section 14.3. Different results can be found in [51, 81, 113, 117, 156] and the references therein. In particular, a study of multilevel preconditioners is carried out in [76]. In this chapter, we present our results in the most general form so that any preconditioner (two-level or multilevel) designed for FEM and BEM separately can be combined to form a preconditioner for the coupled problem. In other words, various preconditioners developed in previous chapters can be applied. In particular, Subsection 14.2.2 presents a general approach in the matrix form, which can be used for any problem with any available preconditioners; see also Subsection 2.7.3.
14.1 The Interface Problem

Let $\Omega \subset \mathbb{R}^d$, $d = 2, 3$, be a bounded domain with Lipschitz boundary $\Gamma := \partial\Omega$, and let $\Omega^c := \mathbb{R}^d \setminus \overline{\Omega}$, with the normal $n$ on $\Gamma$ pointing into $\Omega^c$. Without loss of generality, when $d = 2$ we assume that the logarithmic capacity $\operatorname{cap}(\Gamma)$ is less than one, which can always be obtained by scaling. This ensures positive definiteness of the weakly-singular integral operator; see e.g. [147]. Given $f \in L^2(\Omega)$, $u_0 \in H^{1/2}(\Gamma)$, $t_0 \in H^{-1/2}(\Gamma)$, and $k \in \mathbb{R}$ ($k \ne 0$), we consider the model interface problem: Find $u_1 \in H^1(\Omega)$ and $u_2 \in H^1_{\mathrm{loc}}(\Omega^c)$ such that
$$-\Delta u_1 + k^2 u_1 = f \quad \text{in } \Omega, \tag{14.1a}$$
$$\Delta u_2 = 0 \quad \text{in } \Omega^c, \tag{14.1b}$$
$$u_1 = u_2 + u_0 \quad \text{on } \Gamma, \tag{14.1c}$$
$$\frac{\partial u_1}{\partial n} = \frac{\partial u_2}{\partial n} + t_0 \quad \text{on } \Gamma, \tag{14.1d}$$
$$u_2(x) = \begin{cases} a + b \log|x| + o(1), & d = 2, \\ O(1/|x|), & d = 3, \end{cases} \quad \text{as } |x| \to \infty, \tag{14.1e}$$
for some constants $a$ and $b$. To ensure existence of solutions in the case $d = 2$, we also assume the compatibility condition
$$\langle f, 1 \rangle_\Omega + \langle t_0, 1 \rangle_\Gamma = 0. \tag{14.2}$$
Since the late seventies of the last century, the coupling of the finite element method and the boundary element method has been widely used to solve (14.1) and other interface problems of similar types. The method was first proposed in 1979 [245] and first analysed in 1980 by Johnson and Nédélec [129]. It is therefore often referred to as the Johnson–Nédélec coupling. This coupling and the Bielak–MacCamy coupling [33] are non-symmetric coupling methods. Their analysis requires compactness of the double-layer potential operator $K$, to be defined in Section 14.1.1. This property fails to hold when the interface $\Gamma$ is merely Lipschitz. Due to this lack of analysis, the symmetric coupling was proposed independently by Costabel [62] and Han [102]. Not until 2009 was a mathematical analysis carried out by Sayas [187] for the Johnson–Nédélec coupling with $\Gamma$ being a polygonal or polyhedral boundary. In the next two subsections, we briefly explain these two coupling methods.
14.1.1 The symmetric FEM-BEM coupling

The mathematical boundary element community initially preferred the use of the symmetric coupling in the case of Lipschitz interfaces or for the elasticity system, thanks to the availability of a mathematical analysis. In this subsection, we present this coupling technique.
14.1.1.1 Existence and uniqueness of solutions

We introduce the following integral operators
$$K\psi(x) := 2 \int_\Gamma \frac{\partial G(x,y)}{\partial n_y}\, \psi(y)\, d\sigma_y, \qquad K'\psi(x) := 2\, \frac{\partial}{\partial n_x} \int_\Gamma G(x,y)\, \psi(y)\, d\sigma_y, \qquad x \in \Gamma, \tag{14.3}$$
and recall the definitions of the single-layer and double-layer potential operators
$$Sv(x) := 2 \int_\Gamma G(x,y)\, v(y)\, d\sigma_y, \qquad Dv(x) := 2 \int_\Gamma \frac{\partial G(x,y)}{\partial n_y}\, v(y)\, d\sigma_y, \qquad x \in \mathbb{R}^d.$$
The solution in the exterior domain can be represented by
$$u_2 = \frac12 D(\gamma u_2) - \frac12 S(\partial_n u_2) \quad \text{in } \Omega^c, \tag{14.4}$$
where $\gamma$ and $\partial_n$ denote the trace and normal trace operators on $\Gamma$; see [147]. By taking the trace on $\Gamma$ and using the well-known jump relations, we deduce
$$V(\partial_n u_2) + (I - K)u_2 = 0 \quad \text{on } \Gamma, \tag{14.5}$$
where $I$ is the identity operator. On the other hand, by taking the normal trace on $\Gamma$, we obtain
$$2\partial_n u_2 = -W u_2 + (I - K')\partial_n u_2 \quad \text{on } \Gamma. \tag{14.6}$$
By multiplying equation (14.1a) by $2v$, integrating over $\Omega$, and formally using integration by parts, we deduce (noting (14.6), (14.1c), and (14.1d))
$$2a_\Delta(u_1, v) + \big\langle (K'-I)(\partial_n u_2), v \big\rangle_\Gamma + a_W(u_1, v) = 2\langle f, v \rangle_\Omega + 2\langle t_0, v \rangle_\Gamma + a_W(u_0, v),$$
where, for any $u$, $v$, and $w$,
$$a_\Delta(u,v) := \int_\Omega \big(\nabla u \cdot \nabla v + k^2 uv\big)\, dx, \qquad \langle v, w \rangle_\Omega := \int_\Omega vw\, dx, \qquad \langle v, w \rangle_\Gamma := \int_\Gamma vw\, d\sigma.$$
Furthermore, equation (14.5) and conditions (14.1c), (14.1d) yield formally
$$a_V(\partial_n u_2, \psi) + \big\langle (I-K)u_1, \psi \big\rangle_\Gamma = \big\langle (I-K)u_0, \psi \big\rangle_\Gamma.$$
Therefore, the variational formulation of the interface problem (14.1) reads: Find $(u, \phi) \in H^1(\Omega) \times H^{-1/2}(\Gamma)$ such that, for all $(v, \psi) \in H^1(\Omega) \times H^{-1/2}(\Gamma)$,
$$\begin{aligned} a_{\Delta W}(u, v) + \big\langle (K'-I)\phi, v \big\rangle_\Gamma &= 2\langle f, v \rangle_\Omega + 2\langle t_0, v \rangle_\Gamma + a_W(u_0, v), \\ \big\langle (K-I)u, \psi \big\rangle_\Gamma - a_V(\phi, \psi) &= \big\langle (K-I)u_0, \psi \big\rangle_\Gamma, \end{aligned} \tag{14.7}$$
where $a_{\Delta W}(\cdot,\cdot) : H^1(\Omega) \times H^1(\Omega) \to \mathbb{R}$ is defined by
$$a_{\Delta W}(u, v) := 2a_\Delta(u, v) + a_W(u|_\Gamma, v|_\Gamma) \qquad \forall u, v \in H^1(\Omega). \tag{14.8}$$
Unique solvability and equivalence of (14.1) and (14.7) are proved in [64, 66, 69] and summarised in the following theorem.

Theorem 14.1. Under the compatibility condition (14.2) when $d = 2$, and no condition for $d = 3$, the problem (14.7) is uniquely solvable. Moreover, if $u_1 \in H^1(\Omega)$ and $u_2 \in H^1_{\mathrm{loc}}(\Omega^c)$ satisfy (14.1), then
$$u := u_1 \quad \text{and} \quad \phi := \frac{\partial u_2}{\partial n}$$
satisfy (14.7). Conversely, if $u \in H^1(\Omega)$ and $\phi \in H^{-1/2}(\Gamma)$ satisfy (14.7), then
$$u_1 := u \quad \text{and} \quad u_2 := \frac12 D(u - u_0) - \frac12 S\phi$$
solve the interface problem (14.1).
14.1.1.2 Galerkin solution

Let $\Omega_h$ be a mesh in $\Omega$ whose restriction to $\Gamma$ yields a mesh $\Gamma_h$. For the Galerkin scheme, we choose two finite-dimensional subspaces (which appropriately define the h-, p-, or hp-version)
$$S_\Omega \subset H^1(\Omega) \quad \text{and} \quad \widetilde S_\Gamma \subset H^{-1/2}(\Gamma),$$
and denote $X := S_\Omega \times \widetilde S_\Gamma$. For any function $v \in S_\Omega$ we denote by $v_\Omega$ (resp., $v_\Gamma$) the restriction of $v$ to the interior (resp., the boundary) of $\Omega$. We define the Galerkin solution $(u, \phi) \in X$ to (14.1) by
$$\begin{aligned} a_{\Delta W}(u, v) + \big\langle (K'-I)\phi, v_\Gamma \big\rangle_\Gamma &= 2\langle f, v \rangle_\Omega + 2\langle t_0, v_\Gamma \rangle_\Gamma + a_W(u_0, v_\Gamma), \\ \big\langle (K-I)u_\Gamma, \psi \big\rangle_\Gamma - a_V(\phi, \psi) &= \big\langle (K-I)u_0, \psi \big\rangle_\Gamma, \end{aligned} \tag{14.9}$$
for all $(v, \psi) \in X$. Here, for any $v \in S_\Omega$, we denote by $v_\Gamma$ the trace of $v$ on $\Gamma$. We note that, for notational simplicity, we use the same notation for the Galerkin solution and the solution of (14.7).
14.1.1.3 Matrix representation

Let $\{\eta_1, \dots, \eta_M\}$ be a basis for $S_\Omega$ consisting of continuous piecewise polynomial functions, and $\{\phi_1, \dots, \phi_N\}$ be a basis for $\widetilde S_\Gamma$ consisting of piecewise polynomial functions (not necessarily continuous). The functions $\eta_j$ are ordered such that $\{\eta_1, \dots, \eta_{M_\Omega}\}$ is the set of basis functions which vanish on the boundary $\Gamma$, and $\{\eta_{M_\Omega+1}, \dots, \eta_{M_\Omega+M_\Gamma}\}$ (where $M_\Omega + M_\Gamma = M$) is the set of those basis functions $\eta_j$ which do not vanish on $\Gamma$. We define
$$I_1 := \{1, \dots, M_\Omega\}, \qquad I_2 := \{M_\Omega+1, \dots, M_\Omega+M_\Gamma\}, \qquad I_3 := \{1, \dots, N\}. \tag{14.10}$$
Let $\overline{\mathbf A}_\Delta \in \mathbb{R}^{M \times M}$ be the stiffness matrix arising from the bilinear form $a_\Delta(\cdot,\cdot)$, i.e.,
$$(\overline{\mathbf A}_\Delta)_{ij} := 2a_\Delta(\eta_i, \eta_j), \qquad i, j \in I_1 \cup I_2.$$
This matrix can be decomposed as
$$\overline{\mathbf A}_\Delta = \begin{pmatrix} \mathbf A_\Delta & \mathbf B^\top \\ \mathbf B & \mathbf D \end{pmatrix} \in \mathbb{R}^{M \times M},$$
where
$$\begin{aligned} \mathbf A_\Delta &\in \mathbb{R}^{M_\Omega \times M_\Omega}, & (\mathbf A_\Delta)_{ij} &:= 2a_\Delta(\eta_i, \eta_j), & i, j &\in I_1, \\ \mathbf B &\in \mathbb{R}^{M_\Gamma \times M_\Omega}, & \mathbf B_{ij} &:= 2a_\Delta(\eta_i, \eta_j), & i \in I_2,\ j &\in I_1, \\ \mathbf D &\in \mathbb{R}^{M_\Gamma \times M_\Gamma}, & \mathbf D_{ij} &:= 2a_\Delta(\eta_i, \eta_j), & i, j &\in I_2. \end{aligned} \tag{14.11}$$
(Here and in the following, the overline distinguishes matrices assembled on all $M$ degrees of freedom from their sub-blocks.)
The matrix $\mathbf A_\Delta$ is in fact the discrete Laplacian for a Dirichlet problem, whereas $\overline{\mathbf A}_\Delta$ is the discrete Laplacian for a homogeneous Neumann problem. Similarly, the stiffness matrices $\mathbf W$, $\mathbf V$, $\mathbf K$, and $\mathbf M$ arising from the bilinear forms defined by the operators $W$, $V$, $K$, and $I$, respectively, are given by
$$\begin{aligned} \mathbf W &\in \mathbb{R}^{M_\Gamma \times M_\Gamma}, & \mathbf W_{ij} &:= a_W(\eta_{i,\Gamma}, \eta_{j,\Gamma}), & i, j &\in I_2, \\ \mathbf V &\in \mathbb{R}^{N \times N}, & \mathbf V_{ij} &:= a_V(\phi_i, \phi_j), & i, j &\in I_3, \\ \mathbf K &\in \mathbb{R}^{N \times M_\Gamma}, & \mathbf K_{ij} &:= \langle K\eta_{j,\Gamma}, \phi_i \rangle_\Gamma, & i \in I_3,\ j &\in I_2, \\ \mathbf M &\in \mathbb{R}^{N \times M_\Gamma}, & \mathbf M_{ij} &:= \langle \eta_{j,\Gamma}, \phi_i \rangle_\Gamma, & i \in I_3,\ j &\in I_2. \end{aligned} \tag{14.12}$$
We note that $\mathbf M$ is the stiffness matrix defined from the bilinear form $\langle I\cdot, \cdot \rangle_\Gamma$, and we can scale the basis functions $\phi_i$ such that $\mathbf M_{ij} = \delta_{ij}$ (Kronecker's delta). However, even in this case $\mathbf M$ is a rectangular matrix and is not an identity matrix. Defining
$$\begin{aligned} \mathbf b^\Omega &\in \mathbb{R}^{M_\Omega}, & \mathbf b^\Omega_i &:= 2\langle f, \eta_i \rangle_\Omega, & i &\in I_1, \\ \mathbf b^\Gamma &\in \mathbb{R}^{M_\Gamma}, & \mathbf b^\Gamma_i &:= 2\langle f, \eta_i \rangle_\Omega + 2\langle t_0, \eta_{i,\Gamma} \rangle_\Gamma + a_W(u_0, \eta_{i,\Gamma}), & i &\in I_2, \\ \widetilde{\mathbf b}^\Gamma &\in \mathbb{R}^N, & \widetilde{\mathbf b}^\Gamma_i &:= \langle (K-I)u_0, \phi_i \rangle_\Gamma, & i &\in I_3, \end{aligned} \tag{14.13}$$
we obtain from (14.9) the $(M+N) \times (M+N)$ linear system
$$\begin{pmatrix} \mathbf A_\Delta & \mathbf B^\top & \mathbf 0_{M_\Omega N} \\ \mathbf B & \mathbf D + \mathbf W & (\mathbf K - \mathbf M)^\top \\ \mathbf 0_{N M_\Omega} & \mathbf K - \mathbf M & -\mathbf V \end{pmatrix} \begin{pmatrix} \mathbf u_\Omega \\ \mathbf u_\Gamma \\ \boldsymbol\phi \end{pmatrix} = \begin{pmatrix} \mathbf b^\Omega \\ \mathbf b^\Gamma \\ \widetilde{\mathbf b}^\Gamma \end{pmatrix}. \tag{14.14}$$
Here $\mathbf 0_{KL}$ is the zero matrix of size $K \times L$. By denoting
$$\mathbf u_{\Omega,\Gamma} := \begin{pmatrix} \mathbf u_\Omega \\ \mathbf u_\Gamma \end{pmatrix} \in \mathbb{R}^M, \qquad \overline{\mathbf W} := \begin{pmatrix} \mathbf 0_{M_\Omega M_\Omega} & \mathbf 0_{M_\Omega M_\Gamma} \\ \mathbf 0_{M_\Gamma M_\Omega} & \mathbf W \end{pmatrix} \in \mathbb{R}^{M \times M},$$
and
$$\overline{\mathbf K} := \begin{pmatrix} \mathbf 0_{N M_\Omega} & \mathbf K - \mathbf M \end{pmatrix} \in \mathbb{R}^{N \times M}, \qquad \mathbf b_{\Omega,\Gamma} := \begin{pmatrix} \mathbf b^\Omega \\ \mathbf b^\Gamma \end{pmatrix} \in \mathbb{R}^M, \tag{14.15}$$
$$\mathbf A_{\Delta W} := \overline{\mathbf A}_\Delta + \overline{\mathbf W}, \tag{14.16}$$
we can rewrite the matrix equation (14.14) as
$$\begin{pmatrix} \mathbf A_{\Delta W} & \overline{\mathbf K}^\top \\ \overline{\mathbf K} & -\mathbf V \end{pmatrix} \begin{pmatrix} \mathbf u_{\Omega,\Gamma} \\ \boldsymbol\phi \end{pmatrix} = \begin{pmatrix} \mathbf b_{\Omega,\Gamma} \\ \widetilde{\mathbf b}^\Gamma \end{pmatrix}. \tag{14.17}$$
We denote by $\mathbf A_s$ the symmetric $(M+N) \times (M+N)$ stiffness matrix in (14.14) (which is also the stiffness matrix in (14.17)), i.e.,
$$\mathbf A_s := \begin{pmatrix} \mathbf A_\Delta & \mathbf B^\top & \mathbf 0_{M_\Omega N} \\ \mathbf B & \mathbf D + \mathbf W & (\mathbf K - \mathbf M)^\top \\ \mathbf 0_{N M_\Omega} & \mathbf K - \mathbf M & -\mathbf V \end{pmatrix} = \begin{pmatrix} \mathbf A_{\Delta W} & \overline{\mathbf K}^\top \\ \overline{\mathbf K} & -\mathbf V \end{pmatrix}. \tag{14.18}$$
Since $\overline{\mathbf A}_\Delta$ and $\overline{\mathbf W}$ are positive semi-definite and $\mathbf V$ is positive definite (provided that $\operatorname{cap}(\Gamma) < 1$ in the case $\Omega \subset \mathbb{R}^2$), the matrix $\mathbf A_s$ is indefinite. Indeed, Sylvester's law of inertia [83] and the congruence transform
$$\mathbf A_s = \begin{pmatrix} \mathbf I_M & -\overline{\mathbf K}^\top \mathbf V^{-1} \\ \mathbf 0_{NM} & \mathbf I_N \end{pmatrix} \begin{pmatrix} \mathbf A_{\Delta W} + \overline{\mathbf K}^\top \mathbf V^{-1} \overline{\mathbf K} & \mathbf 0_{MN} \\ \mathbf 0_{NM} & -\mathbf V \end{pmatrix} \begin{pmatrix} \mathbf I_M & \mathbf 0_{MN} \\ -\mathbf V^{-1} \overline{\mathbf K} & \mathbf I_N \end{pmatrix}$$
imply that $\mathbf A_s$ has $M$ positive and $N$ negative eigenvalues. Here $\mathbf I_k$ denotes the identity matrix of size $k \times k$. Since $\mathbf A_s$ is symmetric and indefinite, we will use a minimum residual method (MINRES) to solve (14.14) and (14.17). More precisely, we will use the HMCR method (Algorithm 2.15) defined in Subsection 2.8.3. Another approach is to solve, instead of (14.17), the equivalent equation
$$\begin{pmatrix} \mathbf A_{\Delta W} & \overline{\mathbf K}^\top \\ -\overline{\mathbf K} & \mathbf V \end{pmatrix} \begin{pmatrix} \mathbf u_{\Omega,\Gamma} \\ \boldsymbol\phi \end{pmatrix} = \begin{pmatrix} \mathbf b_{\Omega,\Gamma} \\ -\widetilde{\mathbf b}^\Gamma \end{pmatrix}. \tag{14.19}$$
This system is positive definite but not symmetric and will be solved by GMRES. We denote by $\mathbf A_u$ this non-symmetric but positive definite matrix, namely,
$$\mathbf A_u := \begin{pmatrix} \mathbf A_{\Delta W} & \overline{\mathbf K}^\top \\ -\overline{\mathbf K} & \mathbf V \end{pmatrix}. \tag{14.20}$$
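The inertia statement and the effect of the sign change can be checked on a small example (Python sketch; the blocks are random SPD stand-ins for $\mathbf A_{\Delta W}$ and $\mathbf V$, an assumption for illustration — not assembled FEM/BEM matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 2                      # illustrative block sizes

def spd(n):
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

A_dw = spd(M)                    # stand-in for A_{ΔW}  (symmetric positive definite)
V = spd(N)                       # stand-in for V
K = rng.standard_normal((N, M))  # stand-in for the coupling block

# (14.18): symmetric saddle-point matrix -- indefinite
A_s = np.block([[A_dw, K.T], [K, -V]])
ev = np.linalg.eigvalsh(A_s)
print('inertia of A_s:', int((ev > 0).sum()), 'positive,', int((ev < 0).sum()), 'negative')

# (14.20): flipping the sign of the second block row gives a non-symmetric but
# positive definite matrix, since the symmetric part of A_u is diag(A_{ΔW}, V)
A_u = np.block([[A_dw, K.T], [-K, V]])
sym = 0.5 * (A_u + A_u.T)
print('lambda_min of sym(A_u):', np.linalg.eigvalsh(sym)[0])
```

The inertia count is exactly ($M$ positive, $N$ negative), as predicted by the congruence transform, and the minimum eigenvalue of the symmetric part of $\mathbf A_u$ is positive.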
It is worth noting that the matrix $\mathbf A_{\Delta W}$ defined by (14.16) is the stiffness matrix of the operator $A_{\Delta W} : H^1(\Omega) \to \widetilde H^{-1}(\Omega)$ defined by
$$\langle A_{\Delta W} v, w \rangle = a_{\Delta W}(v, w), \qquad v, w \in H^1(\Omega), \tag{14.21}$$
where $a_{\Delta W}(\cdot,\cdot) : H^1(\Omega) \times H^1(\Omega) \to \mathbb{R}$ is defined in (14.8). The discretised operator $A_M : S_\Omega \to S_\Omega^*$ is given by (see Subsection B.2.2.2 in Appendix B)
$$\langle A_M v, w \rangle_{S_\Omega^*, S_\Omega} = \langle A_{\Delta W} v, w \rangle, \qquad v, w \in S_\Omega.$$
Note that the coercivity condition (B.30) holds; thus, $A_{\Delta W}^{-1} : \widetilde H^{-1}(\Omega) \to H^1(\Omega)$ is well defined. Similarly, the matrix $\overline{\mathbf K}$ is the stiffness matrix of the operator $\mathcal K$ defined by
$$\mathcal K : H^1(\Omega) \to H^{1/2}(\Gamma), \qquad \mathcal K(v) := K(v|_\Gamma) - I(v|_\Gamma) = K(v|_\Gamma) - v|_\Gamma, \qquad v \in H^1(\Omega). \tag{14.22}$$
Indeed, by referring to Subsection B.2.2.1 with
$$X = H^1(\Omega), \qquad Y^* = H^{1/2}(\Gamma), \qquad X_M = S_\Omega, \qquad Y_N = \widetilde S_\Gamma, \qquad Y_N^* = \widetilde S_\Gamma^*,$$
we define $\mathcal K_{NM} : X_M \to Y_N^*$ by
$$\mathcal K_{NM}(v) := i_{Y_N}^*\, \mathcal K\, i_{X_M}(v) = \big(K_{N M_\Gamma} - I_{N M_\Gamma}\big)(v|_\Gamma).$$
The stiffness matrix arising from $\mathcal K$ has entries
$$\overline{\mathbf K}_{ij} = \langle \mathcal K \eta_j, \phi_i \rangle_{Y_N^*, Y_N}, \qquad i = 1, \dots, N,\ j = 1, \dots, M.$$
Note that $\mathcal K \eta_j = 0$ for $j = 1, \dots, M_\Omega$. Hence, for any $i \in I_3$,
$$\overline{\mathbf K}_{ij} = \begin{cases} 0, & j \in I_1, \\ \langle K\eta_{j,\Gamma} - \eta_{j,\Gamma}, \phi_i \rangle_{Y_N^*, Y_N}, & j \in I_2. \end{cases}$$
Therefore, noting (14.12), we have
$$\overline{\mathbf K} = \begin{pmatrix} \mathbf 0_{N M_\Omega} & \mathbf K - \mathbf M \end{pmatrix} \in \mathbb{R}^{N \times M}.$$
This means that $\overline{\mathbf K}$ defined in (14.15) is the stiffness matrix of the operator $\mathcal K$ defined in (14.22). Finally, for the weakly-singular integral operator: if the condition $\operatorname{cap}(\Gamma) < 1$ holds in the case that $\Omega$ is a two-dimensional domain, then (B.30) also holds for $a_V(\cdot,\cdot)$; see e.g. [92, 147]. Hence it is possible to define the inverse operator $V^{-1} : H^{1/2}(\Gamma) \to \widetilde H^{-1/2}(\Gamma)$ together with its discretised form and stiffness matrix.
14.1.2 Non-symmetric coupling methods

The symmetric coupling method discussed in the previous subsection is not easy to implement since it involves all four boundary integral operators $V$, $W$, $K$, and $K'$ (whose stiffness matrices are, respectively, $\mathbf V$, $\mathbf W$, $\mathbf K$, and $\mathbf K^\top$). This is a costly computation; in particular, the computation of the hypersingular integral operator is not favoured in the engineering community. The analysis in [187] supports the use of non-symmetric coupling methods, even when the interface $\Gamma$ is a polyhedron or a polygon. In this subsection we introduce two non-symmetric FEM-BEM coupling methods, namely the Johnson–Nédélec coupling [129] and the Bielak–MacCamy coupling [33].
14.1.2.1 The Johnson–Nédélec coupling

Again we represent $u_2$ satisfying (14.1b) by (14.4). By multiplying (14.1a) by $v$ and formally using integration by parts, together with (14.1d), we obtain
$$a_\Delta(u_1, v) - \langle \partial_n u_2, v \rangle_\Gamma = \langle f, v \rangle_\Omega + \langle t_0, v \rangle_\Gamma.$$
Moreover, we deduce from (14.5)
$$a_V(\partial_n u_2, \psi) + \big\langle (I-K)u_1, \psi \big\rangle_\Gamma = \big\langle (I-K)u_0, \psi \big\rangle_\Gamma.$$
Therefore, the coupled variational formulation of the interface problem (14.1) reads: Find $(u, \phi) \in H^1(\Omega) \times H^{-1/2}(\Gamma)$ such that, for all $(v, \psi) \in H^1(\Omega) \times H^{-1/2}(\Gamma)$,
$$\begin{aligned} a_\Delta(u, v) - \langle \phi, v \rangle_\Gamma &= \langle f, v \rangle_\Omega + \langle t_0, v \rangle_\Gamma, \\ \big\langle (I-K)u, \psi \big\rangle_\Gamma + a_V(\phi, \psi) &= \big\langle (I-K)u_0, \psi \big\rangle_\Gamma. \end{aligned} \tag{14.23}$$
We note that even though the coupled system (14.23) is non-symmetric, it involves only two boundary integral operators, namely $V$ and $K$, while (14.7) involves four boundary integral operators $V$, $W$, $K$, and $K'$. Unique solvability of (14.23) is proved in [187]; see also [188].

Theorem 14.2. Under the compatibility condition (14.2) when $d = 2$, and no condition for $d = 3$, the problem (14.23) is uniquely solvable. Moreover, if $u_1 \in H^1(\Omega)$ and $u_2 \in H^1_{\mathrm{loc}}(\Omega^c)$ satisfy (14.1), then
$$u := u_1 \quad \text{and} \quad \phi := \frac{\partial u_2}{\partial n}$$
satisfy (14.23). Conversely, if $u \in H^1(\Omega)$ and $\phi \in H^{-1/2}(\Gamma)$ satisfy (14.23), then
$$u_1 := u \quad \text{and} \quad u_2 := \frac12 D(u - u_0) - \frac12 S\phi$$
solve the interface problem (14.1).

With the finite-dimensional space $X$ defined as in Subsection 14.1.1.2, we derive the following Galerkin equations for (14.23): Find $(u, \phi) \in X$ satisfying, for all $(v, \psi) \in X$,
$$\begin{aligned} a_\Delta(u, v) - \langle \phi, v_\Gamma \rangle_\Gamma &= \langle f, v \rangle_\Omega + \langle t_0, v_\Gamma \rangle_\Gamma, \\ \big\langle (I-K)u_\Gamma, \psi \big\rangle_\Gamma + a_V(\phi, \psi) &= \big\langle (I-K)u_0, \psi \big\rangle_\Gamma. \end{aligned} \tag{14.24}$$
Here again, we use the same notation for the solution of (14.23) and (14.24). Recalling the definitions of $\overline{\mathbf A}_\Delta$, $\mathbf V$, $\mathbf M$, and $\overline{\mathbf K}$ in (14.11), (14.12), and (14.15), we introduce further
$$\overline{\mathbf M} := \begin{pmatrix} \mathbf 0_{N M_\Omega} & \mathbf M \end{pmatrix} \in \mathbb{R}^{N \times M}. \tag{14.25}$$
We note the factor 2 in the definitions (14.11). Hence, by multiplying the first equation in (14.24) by 2, we can write this system in matrix form as
$$\begin{pmatrix} \overline{\mathbf A}_\Delta & -2\overline{\mathbf M}^\top \\ -\overline{\mathbf K} & \mathbf V \end{pmatrix} \begin{pmatrix} \mathbf u_{\Omega,\Gamma} \\ \boldsymbol\phi \end{pmatrix} = \begin{pmatrix} \mathbf b_{\Omega,\Gamma} \\ \widetilde{\mathbf b}^\Gamma \end{pmatrix}, \tag{14.26}$$
where, instead of (14.13), the right-hand side vector is defined by
$$\begin{aligned} \mathbf b^\Omega &\in \mathbb{R}^{M_\Omega}, & \mathbf b^\Omega_i &:= 2\langle f, \eta_i \rangle_\Omega, & i &\in I_1, \\ \mathbf b^\Gamma &\in \mathbb{R}^{M_\Gamma}, & \mathbf b^\Gamma_i &:= 2\langle f, \eta_i \rangle_\Omega + 2\langle t_0, \eta_{i,\Gamma} \rangle_\Gamma, & i &\in I_2, \\ \widetilde{\mathbf b}^\Gamma &\in \mathbb{R}^N, & \widetilde{\mathbf b}^\Gamma_i &:= \langle (I-K)u_0, \phi_i \rangle_\Gamma, & i &\in I_3, \end{aligned}$$
with $\mathbf u_{\Omega,\Gamma}$ and $\mathbf b_{\Omega,\Gamma}$ defined by (14.15). We denote
$$\mathbf A_{JN} := \begin{pmatrix} \overline{\mathbf A}_\Delta & -2\overline{\mathbf M}^\top \\ -\overline{\mathbf K} & \mathbf V \end{pmatrix}. \tag{14.27}$$
14.1.2.2 The Bielak–MacCamy coupling

The Bielak–MacCamy coupling has the same property as the Johnson–Nédélec coupling in that it does not involve all four integral operators; in particular, it avoids the hypersingular integral operator $W$. However, differently from the symmetric coupling and the Johnson–Nédélec coupling, its derivation is not initiated from (14.4). Instead, $u_2$ is represented indirectly by
$$u_2 = \frac12 S\phi \quad \text{in } \Omega^c, \tag{14.28}$$
where $\phi \in H^{-1/2}(\Gamma)$ is some unknown function. This representation results in the indirect boundary integral equation method for solving the Laplace equation. Again, using the jump relation we deduce $2u_2 = V\phi$ on $\Gamma$, which together with (14.1c) yields
$$-2\langle u_1, \psi \rangle_\Gamma + a_V(\phi, \psi) = -2\langle u_0, \psi \rangle_\Gamma, \qquad \psi \in H^{-1/2}(\Gamma).$$
Furthermore, taking the normal trace in (14.28) gives
$$2\partial_n u_2 = -\phi + K'\phi \quad \text{on } \Gamma.$$
Inserting this into the weak formulation of (14.1a), after using (14.1d), results in
$$2a_\Delta(u_1, v) + \big\langle (I-K')\phi, v \big\rangle_\Gamma = 2\langle f, v \rangle_\Omega + 2\langle t_0, v \rangle_\Gamma.$$
Therefore, the variational formulation of the Bielak–MacCamy coupling reads: Find $(u, \phi) \in H^1(\Omega) \times H^{-1/2}(\Gamma)$ such that, for all $(v, \psi) \in H^1(\Omega) \times H^{-1/2}(\Gamma)$,
$$\begin{aligned} 2a_\Delta(u, v) + \big\langle (I-K')\phi, v \big\rangle_\Gamma &= 2\langle f, v \rangle_\Omega + 2\langle t_0, v \rangle_\Gamma, \\ -2\langle u, \psi \rangle_\Gamma + a_V(\phi, \psi) &= -2\langle u_0, \psi \rangle_\Gamma. \end{aligned} \tag{14.29}$$

Theorem 14.3. Under the compatibility condition (14.2) when $d = 2$, and no condition for $d = 3$, the problem (14.29) is uniquely solvable.
Moreover, if $u_1 \in H^1(\Omega)$ and $u_2 \in H^1_{\mathrm{loc}}(\Omega^c)$ satisfy (14.1), then there exists $\phi \in H^{-1/2}(\Gamma)$ such that $u_2 = S\phi/2$; moreover, $(u_1, \phi)$ solves (14.29). Conversely, if $u \in H^1(\Omega)$ and $\phi \in H^{-1/2}(\Gamma)$ satisfy (14.29), then
$$u_1 := u \quad \text{and} \quad u_2 := \frac12 S\phi$$
solve the interface problem (14.1).

The Galerkin approximation to (14.29) proceeds as before on the space $X$; see Subsection 14.1.1.2. With the matrices $\overline{\mathbf A}_\Delta$, $\overline{\mathbf M}$, $\overline{\mathbf K}$, and $\mathbf V$ defined as in Subsection 14.1.2.1, we derive the matrix form of the Bielak–MacCamy coupling as
$$\begin{pmatrix} \overline{\mathbf A}_\Delta & -\overline{\mathbf K}^\top \\ -2\overline{\mathbf M} & \mathbf V \end{pmatrix} \begin{pmatrix} \mathbf u_{\Omega,\Gamma} \\ \boldsymbol\phi \end{pmatrix} = \begin{pmatrix} \mathbf b_{\Omega,\Gamma} \\ \widetilde{\mathbf b}^\Gamma \end{pmatrix}, \tag{14.30}$$
where $\mathbf b_{\Omega,\Gamma} = (\mathbf b^\Omega\ \mathbf b^\Gamma)^\top$ and
$$\begin{aligned} \mathbf b^\Omega &\in \mathbb{R}^{M_\Omega}, & \mathbf b^\Omega_i &:= 2\langle f, \eta_i \rangle_\Omega, & i &\in I_1, \\ \mathbf b^\Gamma &\in \mathbb{R}^{M_\Gamma}, & \mathbf b^\Gamma_i &:= 2\langle f, \eta_i \rangle_\Omega + 2\langle t_0, \eta_{i,\Gamma} \rangle_\Gamma, & i &\in I_2, \\ \widetilde{\mathbf b}^\Gamma &\in \mathbb{R}^N, & \widetilde{\mathbf b}^\Gamma_i &:= -2\langle u_0, \phi_i \rangle_\Gamma, & i &\in I_3. \end{aligned}$$
We denote
$$\mathbf A_{BM} := \begin{pmatrix} \overline{\mathbf A}_\Delta & -\overline{\mathbf K}^\top \\ -2\overline{\mathbf M} & \mathbf V \end{pmatrix}. \tag{14.31}$$
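The non-symmetric systems (14.26) and (14.30) are solved by GMRES. A minimal (full, unrestarted) GMRES applied to a small system with the block sign pattern of (14.26) can be sketched as follows (Python; all blocks are random stand-ins, an assumption — not assembled FEM/BEM matrices):

```python
import numpy as np

def gmres(A, b, tol=1e-10):
    # plain full GMRES: Arnoldi process plus a small least-squares problem
    n = b.size
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    y = np.zeros(1)
    for k in range(n):
        v = A @ Q[:, k]
        for j in range(k + 1):               # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        res = np.linalg.norm(H[:k + 2, :k + 1] @ y - e1)
        if res <= tol * beta or H[k + 1, k] == 0.0:
            return Q[:, :k + 1] @ y, k + 1
        Q[:, k + 1] = v / H[k + 1, k]
    return Q[:, :n] @ y, n

rng = np.random.default_rng(1)
M, N = 5, 3
X = rng.standard_normal((M, M)); A_fem = X @ X.T + M * np.eye(M)
Y = rng.standard_normal((N, N)); V = Y @ Y.T + N * np.eye(N)
Mbar = rng.standard_normal((N, M))   # stand-in for the padded mass matrix
Kbar = rng.standard_normal((N, M))   # stand-in for the padded double-layer matrix
A_jn = np.block([[A_fem, -2 * Mbar.T], [-Kbar, V]])
b = rng.standard_normal(M + N)
x, iters = gmres(A_jn, b)
print(iters, np.linalg.norm(A_jn @ x - b))
```

Full GMRES terminates in at most $M + N$ steps on this small dense system; in practice a preconditioned and restarted variant is used.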
14.2 Preconditioning for the Symmetric Coupling

In this section, we design preconditioners for the matrix systems developed above. We adopt the following convention on notation for preconditioners and preconditioned matrices.

Definition 14.1.
1. Let $\mathbf Q$ be a square matrix. A spectrally equivalent matrix to $\mathbf Q$ is denoted by $\breve{\mathbf Q}$. The symmetric preconditioned matrix is defined by
$$\widetilde{\mathbf Q} := \breve{\mathbf Q}^{-1/2}\, \mathbf Q\, \breve{\mathbf Q}^{-1/2}.$$
2. Let $\mathbf P$, $\mathbf Q$, and $\mathbf R$ be three matrices of appropriate sizes. The matrix ${}_{\breve{\mathbf P}}\mathbf Q_{\breve{\mathbf R}}$ is defined by
$${}_{\breve{\mathbf P}}\mathbf Q_{\breve{\mathbf R}} := \breve{\mathbf P}^{-1/2}\, \mathbf Q\, \breve{\mathbf R}^{-1/2}.$$
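Definition 14.1 can be realised numerically via an eigendecomposition (Python sketch; the matrices $\mathbf Q$ and $\breve{\mathbf Q}$ below are random SPD stand-ins, an assumption). The printed interval is the eigenvalue inclusion used later in (14.37):

```python
import numpy as np

def inv_sqrt(Q):
    # Q^{-1/2} for a symmetric positive definite Q, via its eigendecomposition
    w, U = np.linalg.eigh(Q)
    return U @ np.diag(w ** -0.5) @ U.T

def sym_precond(Q, Qb):
    # Definition 14.1, part 1:  Q~ := Qb^{-1/2} Q Qb^{-1/2}
    S = inv_sqrt(Qb)
    return S @ Q @ S

rng = np.random.default_rng(2)
n = 6
X = rng.standard_normal((n, n))
Q = X @ X.T + n * np.eye(n)      # a random SPD "stiffness matrix" (assumption)
Qb = Q + 0.5 * np.eye(n)         # a spectrally equivalent preconditioner (assumption)

ev = np.linalg.eigvalsh(sym_precond(Q, Qb))
# the spectrum lies in [lambda_min(Q)/lambda_max(Qb), lambda_max(Q)/lambda_min(Qb)]
lo = np.linalg.eigvalsh(Q)[0] / np.linalg.eigvalsh(Qb)[-1]
hi = np.linalg.eigvalsh(Q)[-1] / np.linalg.eigvalsh(Qb)[0]
print(ev[0], ev[-1], (lo, hi))
```

When $\breve{\mathbf Q}$ is close to $\mathbf Q$, the eigenvalues of $\widetilde{\mathbf Q}$ cluster near 1, which is exactly the goal of preconditioning.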
342
14 FEM-BEM Coupling
14.2.1 Preconditioning with HMCR

In this subsection we design a three-block preconditioner for (14.14) and a two-block preconditioner for (14.17). Since these coefficient matrices are symmetric but indefinite, we solve the systems by the hybrid modified conjugate residual method (Algorithm 2.15). We will first present the general theory, and then apply the convergence result to the h-, p-, and hp-versions of FEM-BEM coupling.
14.2.1.1 Three-block preconditioner

We now design a preconditioner for solving (14.14). The $3 \times 3$-block representation of $\mathbf A_s$ defined in (14.18), i.e.,
$$\mathbf A_s = \begin{pmatrix} \mathbf A_\Delta & \mathbf B^\top & \mathbf 0 \\ \mathbf B & \mathbf D + \mathbf W & (\mathbf K - \mathbf I)^\top \\ \mathbf 0 & \mathbf K - \mathbf I & -\mathbf V \end{pmatrix},$$
is symmetrically preconditioned by
$$\breve{\mathbf A} := \begin{pmatrix} \breve{\mathbf A}_\Delta & \mathbf 0 & \mathbf 0 \\ \mathbf 0 & \breve{\mathbf D} & \mathbf 0 \\ \mathbf 0 & \mathbf 0 & \breve{\mathbf V} \end{pmatrix}. \tag{14.32}$$
The preconditioned matrix $\widetilde{\mathbf A} = \breve{\mathbf A}^{-1/2}\mathbf A_s \breve{\mathbf A}^{-1/2}$ then has the form
$$\widetilde{\mathbf A} = \begin{pmatrix} \widetilde{\mathbf A}_\Delta & {}_{\breve{\mathbf A}_\Delta}(\mathbf B^\top)_{\breve{\mathbf D}} & \mathbf 0 \\ {}_{\breve{\mathbf D}}\mathbf B_{\breve{\mathbf A}_\Delta} & \widetilde{\mathbf D} + {}_{\breve{\mathbf D}}\mathbf W_{\breve{\mathbf D}} & {}_{\breve{\mathbf D}}(\mathbf K^\top - \mathbf I^\top)_{\breve{\mathbf V}} \\ \mathbf 0 & {}_{\breve{\mathbf V}}(\mathbf K - \mathbf I)_{\breve{\mathbf D}} & -\widetilde{\mathbf V} \end{pmatrix},$$
where we have used the notational convention in Definition 14.1. Defining
$$\widehat{\mathbf A} := \begin{pmatrix} \widetilde{\mathbf A}_\Delta & {}_{\breve{\mathbf A}_\Delta}(\mathbf B^\top)_{\breve{\mathbf D}} \\ {}_{\breve{\mathbf D}}\mathbf B_{\breve{\mathbf A}_\Delta} & \widetilde{\mathbf D} + {}_{\breve{\mathbf D}}\mathbf W_{\breve{\mathbf D}} \end{pmatrix} \quad \text{and} \quad \widehat{\mathbf K} := \begin{pmatrix} \mathbf 0 & {}_{\breve{\mathbf V}}(\mathbf K - \mathbf I)_{\breve{\mathbf D}} \end{pmatrix}, \tag{14.33}$$
and noting that $\big({}_{\breve{\mathbf V}}(\mathbf K - \mathbf I)_{\breve{\mathbf D}}\big)^\top = {}_{\breve{\mathbf D}}(\mathbf K^\top - \mathbf I^\top)_{\breve{\mathbf V}}$, we can rewrite $\widetilde{\mathbf A}$ as
$$\widetilde{\mathbf A} = \begin{pmatrix} \widehat{\mathbf A} & \widehat{\mathbf K}^\top \\ \widehat{\mathbf K} & -\widetilde{\mathbf V} \end{pmatrix}. \tag{14.34}$$
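The symmetric application of the three-block preconditioner (14.32) can be reproduced on a small example (Python; all blocks are random SPD stand-ins, an assumption, and the exact diagonal blocks are used as $\breve{\mathbf A}_\Delta$, $\breve{\mathbf D}$, $\breve{\mathbf V}$). Sylvester's law then predicts $M_\Omega + M_\Gamma$ positive and $N$ negative eigenvalues of $\widetilde{\mathbf A}$:

```python
import numpy as np

rng = np.random.default_rng(3)

def spd(n, shift):
    X = rng.standard_normal((n, n))
    return X @ X.T + shift * np.eye(n)

def inv_sqrt(Q):
    w, U = np.linalg.eigh(Q)
    return U @ np.diag(w ** -0.5) @ U.T

def block_diag(*blocks):
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

mO, mG, N = 4, 3, 3                  # illustrative block sizes
A_D = spd(mO, 10.0)                  # stand-in for A_Δ
D, W, V = spd(mG, 10.0), spd(mG, 10.0), spd(N, 10.0)
B = rng.standard_normal((mG, mO))
KI = rng.standard_normal((N, mG))    # stand-in for K - I

Z = np.zeros((mO, N))
A_s = np.block([[A_D, B.T, Z],
                [B, D + W, KI.T],
                [Z.T, KI, -V]])

# (14.32) with exact diagonal blocks, applied symmetrically as in Definition 14.1
S = block_diag(inv_sqrt(A_D), inv_sqrt(D), inv_sqrt(V))
A_tilde = S @ A_s @ S
ev = np.linalg.eigvalsh(A_tilde)
print(ev[0], ev[-1])
```

The congruence $\widetilde{\mathbf A} = \breve{\mathbf A}^{-1/2}\mathbf A_s\breve{\mathbf A}^{-1/2}$ preserves symmetry and inertia, so the preconditioned matrix remains indefinite and is still solved by a minimum residual method.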
In view of Lemma 2.22, we will find an inclusion set for the eigenvalues of $\widetilde{\mathbf A}$. Due to Lemma C.10 in the appendix, this set is well described if we can estimate the eigenvalues of $\widehat{\mathbf A}$ and $\widetilde{\mathbf V}$, and the singular values of $\widehat{\mathbf K}$. These eigenvalues and singular values are of course determined by the choice of the preconditioner $\breve{\mathbf A}$ in (14.32).

In the following, for any square matrix $\mathbf Q$, we denote by $\Lambda(\mathbf Q)$ the set of all eigenvalues of $\mathbf Q$, and by $[\underline\mu(\mathbf Q), \overline\mu(\mathbf Q)]$ a set that contains $\Lambda(\mathbf Q)$. With these notations introduced, we assume
$$\begin{aligned} \Lambda(\mathbf A_\Delta) &\subset [\underline\mu(\mathbf A_\Delta), \overline\mu(\mathbf A_\Delta)], & \underline\mu(\mathbf A_\Delta) &> 0, \\ \Lambda(\mathbf D) &\subset [\underline\mu(\mathbf D), \overline\mu(\mathbf D)], & \underline\mu(\mathbf D) &> 0, \\ \Lambda(\mathbf V) &\subset [\underline\mu(\mathbf V), \overline\mu(\mathbf V)], & \underline\mu(\mathbf V) &> 0, \\ \Lambda(\mathbf W) &\subset [\underline\mu(\mathbf W), \overline\mu(\mathbf W)], & \underline\mu(\mathbf W) &> 0. \end{aligned} \tag{14.35}$$
These matrices are symmetric and positive definite. If they are constructed from the finite element and boundary element approximations, these bounds are usually well known. We also assume that the preconditioners $\breve{\mathbf A}_\Delta$, $\breve{\mathbf D}$, and $\breve{\mathbf V}$ are chosen such that
$$\begin{aligned} \Lambda(\widetilde{\mathbf A}_\Delta) &\subset [\underline\mu(\widetilde{\mathbf A}_\Delta), \overline\mu(\widetilde{\mathbf A}_\Delta)], & \underline\mu(\widetilde{\mathbf A}_\Delta) &> 0, \\ \Lambda(\widetilde{\mathbf D}) &\subset [\underline\mu(\widetilde{\mathbf D}), \overline\mu(\widetilde{\mathbf D})], & \underline\mu(\widetilde{\mathbf D}) &> 0, \\ \Lambda(\widetilde{\mathbf V}) &\subset [\underline\mu(\widetilde{\mathbf V}), \overline\mu(\widetilde{\mathbf V})], & \underline\mu(\widetilde{\mathbf V}) &> 0. \end{aligned} \tag{14.36}$$
As a consequence of Lemma C.5 in Appendix C, we can choose
$$\underline\mu(\widetilde{\mathbf A}_\Delta) = \frac{\underline\mu(\mathbf A_\Delta)}{\overline\mu(\breve{\mathbf A}_\Delta)}, \quad \overline\mu(\widetilde{\mathbf A}_\Delta) = \frac{\overline\mu(\mathbf A_\Delta)}{\underline\mu(\breve{\mathbf A}_\Delta)}, \quad \underline\mu(\widetilde{\mathbf D}) = \frac{\underline\mu(\mathbf D)}{\overline\mu(\breve{\mathbf D})}, \quad \overline\mu(\widetilde{\mathbf D}) = \frac{\overline\mu(\mathbf D)}{\underline\mu(\breve{\mathbf D})}, \quad \underline\mu(\widetilde{\mathbf V}) = \frac{\underline\mu(\mathbf V)}{\overline\mu(\breve{\mathbf V})}, \quad \overline\mu(\widetilde{\mathbf V}) = \frac{\overline\mu(\mathbf V)}{\underline\mu(\breve{\mathbf V})}. \tag{14.37}$$
With these bounds on eigenvalues, we are ready to estimate the extremal eigenvalues of $\widehat{\mathbf A}$ and the singular values of $\widehat{\mathbf K}$.

The extremal eigenvalues of $\widehat{\mathbf A}$: First we estimate the minimum eigenvalue of $\widehat{\mathbf A}$.
Lemma 14.1. Assume that (14.35) and (14.36) hold. Assume further that $\underline\mu(\mathbf W)$ is sufficiently small. Then the matrix $\widehat{\mathbf A}$ is positive semi-definite. Moreover, the following statement holds:
$$\lambda_{\min}(\widehat{\mathbf A}) \ge \frac12\bigg(\underline\mu(\widetilde{\mathbf A}_\Delta) + \underline\mu(\mathbf W)\frac{\underline\mu(\mathbf D)}{\overline\mu(\mathbf D)} + 1\bigg) - \frac12\sqrt{\bigg(\underline\mu(\widetilde{\mathbf A}_\Delta) + \underline\mu(\mathbf W)\frac{\underline\mu(\mathbf D)}{\overline\mu(\mathbf D)} + 1\bigg)^2 - 4\,\underline\mu(\widetilde{\mathbf A}_\Delta)\,\underline\mu(\mathbf W)\frac{\underline\mu(\mathbf D)}{\overline\mu(\mathbf D)}}.$$
Proof. In this proof only we set
$$\mathbf E_1 := \widetilde{\mathbf A}_\Delta, \qquad \mathbf E_2 := {}_{\breve{\mathbf A}_\Delta}(\mathbf B^\top)_{\breve{\mathbf D}}, \qquad \mathbf E_3 := \widetilde{\mathbf D} + {}_{\breve{\mathbf D}}\mathbf W_{\breve{\mathbf D}} = {}_{\breve{\mathbf D}}(\mathbf D + \mathbf W)_{\breve{\mathbf D}},$$
so that
$$\widehat{\mathbf A} = \begin{pmatrix} \mathbf E_1 & \mathbf E_2 \\ \mathbf E_2^\top & \mathbf E_3 \end{pmatrix}.$$
Let $\lambda$ be an eigenvalue of $\widehat{\mathbf A}$. Then there exist vectors $\mathbf x$ and $\mathbf y$, $(\mathbf x, \mathbf y) \ne (\mathbf 0, \mathbf 0)$, satisfying
$$\mathbf E_1 \mathbf x + \mathbf E_2 \mathbf y = \lambda \mathbf x, \tag{14.38a}$$
$$\mathbf E_2^\top \mathbf x + \mathbf E_3 \mathbf y = \lambda \mathbf y. \tag{14.38b}$$
Since our aim in this lemma is to find a lower bound for $\lambda_{\min}(\widehat{\mathbf A})$, it suffices to consider small values of $\lambda$, say
$$\lambda < \underline\mu(\widetilde{\mathbf A}_\Delta) = \underline\mu(\mathbf E_1). \tag{14.39}$$
For these values of $\lambda$, the matrix $\mathbf E_1 - \lambda\mathbf I$ is invertible, and it follows from (14.38a) that $\mathbf x = -(\mathbf E_1 - \lambda\mathbf I)^{-1}\mathbf E_2 \mathbf y$. Substituting this into (14.38b) and premultiplying the resulting equation by $\mathbf y^\top$ give
$$-\mathbf y^\top \mathbf E_2^\top (\mathbf E_1 - \lambda\mathbf I)^{-1} \mathbf E_2 \mathbf y + \mathbf y^\top \mathbf E_3 \mathbf y = \lambda\, \mathbf y^\top \mathbf y. \tag{14.40}$$
It follows from
$$(\mathbf E_1 - \lambda\mathbf I)^{-1} = \big(\mathbf E_1^{1/2}(\mathbf I - \lambda\mathbf E_1^{-1})\mathbf E_1^{1/2}\big)^{-1} = \mathbf E_1^{-1/2}(\mathbf I - \lambda\mathbf E_1^{-1})^{-1}\mathbf E_1^{-1/2}$$
that
$$-\mathbf y^\top \mathbf E_2^\top (\mathbf E_1 - \lambda\mathbf I)^{-1} \mathbf E_2 \mathbf y = -\underbrace{\mathbf y^\top \mathbf E_2^\top \mathbf E_1^{-1/2}}_{=\,\mathbf z^\top} (\mathbf I - \lambda\mathbf E_1^{-1})^{-1} \underbrace{\mathbf E_1^{-1/2}\mathbf E_2 \mathbf y}_{=:\,\mathbf z} \ge -\lambda_{\max}\big((\mathbf I - \lambda\mathbf E_1^{-1})^{-1}\big)\, \mathbf z^\top\mathbf z = -\frac{\mathbf z^\top \mathbf z}{\lambda_{\min}(\mathbf I - \lambda\mathbf E_1^{-1})}, \tag{14.41}$$
where we have used the symmetry of $\mathbf I - \lambda\mathbf E_1^{-1}$. This symmetry also implies
$$\frac{\mathbf x^\top(\mathbf I - \lambda\mathbf E_1^{-1})\mathbf x}{\mathbf x^\top\mathbf x} = 1 - \lambda\, \frac{\mathbf x^\top \mathbf E_1^{-1}\mathbf x}{\mathbf x^\top \mathbf x} \ge 1 - \lambda\,\lambda_{\max}\big(\mathbf E_1^{-1}\big) = 1 - \frac{\lambda}{\lambda_{\min}(\mathbf E_1)} \ge 1 - \frac{\lambda}{\underline\mu(\mathbf E_1)},$$
so that
$$\lambda_{\min}\big(\mathbf I - \lambda\mathbf E_1^{-1}\big) \ge 1 - \frac{\lambda}{\underline\mu(\mathbf E_1)} = \frac{\underline\mu(\mathbf E_1) - \lambda}{\underline\mu(\mathbf E_1)}. \tag{14.42}$$
It follows from (14.40)–(14.42) that
$$-\frac{\underline\mu(\mathbf E_1)}{\underline\mu(\mathbf E_1) - \lambda}\, \mathbf z^\top\mathbf z + \mathbf y^\top \mathbf E_3 \mathbf y \le \lambda\, \mathbf y^\top \mathbf y. \tag{14.43}$$
By setting $\mathbf w := \breve{\mathbf D}^{-1/2}\mathbf y$ and using the definitions of $\mathbf z$ in (14.41) and of $\mathbf E_i$, $i = 1, 2, 3$, we obtain
$$\mathbf z^\top\mathbf z = \mathbf y^\top \mathbf E_2^\top \mathbf E_1^{-1} \mathbf E_2 \mathbf y = \mathbf y^\top \breve{\mathbf D}^{-1/2}\mathbf B \mathbf A_\Delta^{-1} \mathbf B^\top \breve{\mathbf D}^{-1/2}\mathbf y = \mathbf w^\top \mathbf B \mathbf A_\Delta^{-1} \mathbf B^\top \mathbf w,$$
$$\mathbf y^\top \mathbf E_3 \mathbf y = \mathbf y^\top \breve{\mathbf D}^{-1/2}(\mathbf D + \mathbf W)\breve{\mathbf D}^{-1/2}\mathbf y = \mathbf w^\top(\mathbf D + \mathbf W)\mathbf w, \qquad \mathbf y^\top\mathbf y = \mathbf w^\top \breve{\mathbf D}\mathbf w.$$
Therefore, (14.43) yields
$$-\frac{\underline\mu(\widetilde{\mathbf A}_\Delta)}{\underline\mu(\widetilde{\mathbf A}_\Delta) - \lambda}\, \mathbf w^\top \mathbf B \mathbf A_\Delta^{-1} \mathbf B^\top \mathbf w + \mathbf w^\top(\mathbf D + \mathbf W)\mathbf w \le \lambda\, \mathbf w^\top \breve{\mathbf D}\mathbf w.$$
Then it follows from (14.39) that
$$-\underline\mu(\widetilde{\mathbf A}_\Delta)\, \mathbf w^\top \mathbf B \mathbf A_\Delta^{-1} \mathbf B^\top \mathbf w + \big(\underline\mu(\widetilde{\mathbf A}_\Delta) - \lambda\big)\, \mathbf w^\top(\mathbf D + \mathbf W)\mathbf w \le \lambda\big(\underline\mu(\widetilde{\mathbf A}_\Delta) - \lambda\big)\, \mathbf w^\top \breve{\mathbf D}\mathbf w,$$
or
$$\underline\mu(\widetilde{\mathbf A}_\Delta)\, \mathbf w^\top\big(\mathbf D - \mathbf B \mathbf A_\Delta^{-1} \mathbf B^\top\big)\mathbf w - \lambda\, \mathbf w^\top \mathbf D \mathbf w + \big(\underline\mu(\widetilde{\mathbf A}_\Delta) - \lambda\big)\, \mathbf w^\top \mathbf W \mathbf w \le \lambda\big(\underline\mu(\widetilde{\mathbf A}_\Delta) - \lambda\big)\, \mathbf w^\top \breve{\mathbf D}\mathbf w.$$
Moreover, (14.39) also implies $\mathbf y \ne \mathbf 0$, because otherwise (14.38a) would imply that $\lambda$ is an eigenvalue of $\widetilde{\mathbf A}_\Delta$, which is a contradiction. As a consequence, $\mathbf w \ne \mathbf 0$. Thus, by noting the symmetry of the matrices involved, we deduce
$$\underline\mu(\widetilde{\mathbf A}_\Delta)\, \lambda_{\min}\big(\mathbf D - \mathbf B \mathbf A_\Delta^{-1} \mathbf B^\top\big) - \lambda\, \lambda_{\max}(\mathbf D) + \big(\underline\mu(\widetilde{\mathbf A}_\Delta) - \lambda\big)\lambda_{\min}(\mathbf W) \le \lambda\big(\underline\mu(\widetilde{\mathbf A}_\Delta) - \lambda\big)\lambda_{\max}(\breve{\mathbf D}). \tag{14.44}$$
Bounds for the extremal eigenvalues of $\mathbf D$, $\mathbf W$, and $\breve{\mathbf D}$ are given in (14.35)–(14.37). Let us consider $\lambda_{\min}(\mathbf D - \mathbf B \mathbf A_\Delta^{-1} \mathbf B^\top)$. Recall the stiffness matrix $\overline{\mathbf A}_\Delta$ and its decomposition (14.11). This matrix is a discrete Laplacian for a homogeneous Neumann problem, and thus $\lambda_{\min}(\overline{\mathbf A}_\Delta) = 0$. From the congruence transform
$$\overline{\mathbf A}_\Delta = \begin{pmatrix} \mathbf A_\Delta & \mathbf B^\top \\ \mathbf B & \mathbf D \end{pmatrix} = \begin{pmatrix} \mathbf A_\Delta^{1/2} & \mathbf 0 \\ \mathbf B\mathbf A_\Delta^{-1/2} & \mathbf I \end{pmatrix} \begin{pmatrix} \mathbf I & \mathbf 0 \\ \mathbf 0 & \mathbf D - \mathbf B\mathbf A_\Delta^{-1}\mathbf B^\top \end{pmatrix} \begin{pmatrix} \mathbf A_\Delta^{1/2} & \mathbf A_\Delta^{-1/2}\mathbf B^\top \\ \mathbf 0 & \mathbf I \end{pmatrix}$$
and Sylvester's law of inertia [83] it follows that
$$\mathbf x^\top\big(\mathbf D - \mathbf B\mathbf A_\Delta^{-1}\mathbf B^\top\big)\mathbf x \ge 0 \qquad \forall \mathbf x. \tag{14.45}$$
In particular, $\lambda_{\min}(\mathbf D - \mathbf B\mathbf A_\Delta^{-1}\mathbf B^\top) \ge 0$. Hence (14.44), (14.35), and (14.36) yield
$$-\lambda\, \overline\mu(\mathbf D) + \big(\underline\mu(\widetilde{\mathbf A}_\Delta) - \lambda\big)\underline\mu(\mathbf W) \le \lambda\big(\underline\mu(\widetilde{\mathbf A}_\Delta) - \lambda\big)\overline\mu(\breve{\mathbf D}),$$
or, after rearranging the terms,
$$\lambda^2\, \overline\mu(\breve{\mathbf D}) - \lambda\Big(\underline\mu(\widetilde{\mathbf A}_\Delta)\overline\mu(\breve{\mathbf D}) + \underline\mu(\mathbf W) + \overline\mu(\mathbf D)\Big) + \underline\mu(\widetilde{\mathbf A}_\Delta)\underline\mu(\mathbf W) \le 0.$$
Since $\overline\mu(\breve{\mathbf D}) > 0$ (see (14.37)), this implies
$$\lambda \ge \frac{1}{2\overline\mu(\breve{\mathbf D})}\bigg[\underline\mu(\widetilde{\mathbf A}_\Delta)\overline\mu(\breve{\mathbf D}) + \underline\mu(\mathbf W) + \overline\mu(\mathbf D) - \sqrt{\Big(\underline\mu(\widetilde{\mathbf A}_\Delta)\overline\mu(\breve{\mathbf D}) + \underline\mu(\mathbf W) + \overline\mu(\mathbf D)\Big)^2 - 4\,\underline\mu(\widetilde{\mathbf A}_\Delta)\overline\mu(\breve{\mathbf D})\underline\mu(\mathbf W)}\,\bigg].$$
The right-hand side is non-negative when $\underline\mu(\mathbf W)$ is sufficiently small. Hence $\widehat{\mathbf A}$ is positive semi-definite. The required lower bound for $\lambda_{\min}(\widehat{\mathbf A})$ is now a consequence of (14.37). $\square$

Remark 14.1. In general, for the Galerkin boundary element method, $\underline\mu(\mathbf W) \to 0$ when $h$ goes to 0 and/or $p$ tends to $\infty$. Therefore, the assumption on $\underline\mu(\mathbf W)$ in the above lemma holds. Next we estimate the maximum eigenvalue of $\widehat{\mathbf A}$.
Lemma 14.2. The following statement holds:
\[
\lambda_{\max}(\widetilde A) \;\le\; \max\big\{\overline{\mu}(\widetilde A_\Delta),\, \overline{\mu}(\widetilde D)\big\} + \frac{\overline{\mu}(W)\,\overline{\mu}(\widetilde D)}{\underline{\mu}(D)} + \sqrt{\frac{\overline{\mu}(\widetilde A_\Delta)\,\overline{\mu}(\widetilde D)\,\overline{\mu}(D)}{\underline{\mu}(D)}}.
\]

Proof. Writing $\widetilde A = \widetilde A_1 + \widetilde A_2$ where
\[
\widetilde A_1 := \begin{pmatrix} \widetilde A_\Delta & 0 \\ 0 & \widetilde D + {}_{\breve D}(W)_{\breve D} \end{pmatrix}
\quad\text{and}\quad
\widetilde A_2 := \begin{pmatrix} 0 & {}_{\breve A_\Delta}(B^\top)_{\breve D} \\ {}_{\breve D}(B)_{\breve A_\Delta} & 0 \end{pmatrix},
\]
14.2 Preconditioning for the Symmetric Coupling
we have, due to the symmetry of $\widetilde A_1$ and $\widetilde A_2$, and Lemmas C.7, C.8, and C.9 in Appendix C,
\[
\begin{aligned}
\lambda_{\max}(\widetilde A) &\le \lambda_{\max}(\widetilde A_1) + \lambda_{\max}(\widetilde A_2) \\
&\le \max\big\{\lambda_{\max}(\widetilde A_\Delta),\, \lambda_{\max}(\widetilde D) + \lambda_{\max}\big({}_{\breve D}(W)_{\breve D}\big)\big\} + \sigma_{\max}\big({}_{\breve A_\Delta}(B^\top)_{\breve D}\big) \\
&\le \max\big\{\overline{\mu}(\widetilde A_\Delta),\, \overline{\mu}(\widetilde D) + \overline{\mu}\big({}_{\breve D}(W)_{\breve D}\big)\big\} + \sigma_{\max}\big({}_{\breve A_\Delta}(B^\top)_{\breve D}\big).
\end{aligned} \tag{14.46}
\]
The bounds $\overline{\mu}(\widetilde A_\Delta)$ and $\overline{\mu}(\widetilde D)$ are given in (14.35) and (14.36). It follows from Lemma C.5 that
\[
\overline{\mu}\big({}_{\breve D}(W)_{\breve D}\big) \le \frac{\overline{\mu}(W)}{\underline{\mu}(\breve D)} \le \frac{\overline{\mu}(W)\,\overline{\mu}(\widetilde D)}{\underline{\mu}(D)}.
\]
It remains to estimate $\sigma_{\max}\big({}_{\breve A_\Delta}(B^\top)_{\breve D}\big)$. The singular values of a matrix $E$ are the square roots of the eigenvalues of $E^\top E$; see [83]. Hence
\[
\sigma_{\max}\big({}_{\breve A_\Delta}(B^\top)_{\breve D}\big) = \sqrt{\lambda_{\max}\big({}_{\breve D}(B)_{\breve A_\Delta}\,{}_{\breve A_\Delta}(B^\top)_{\breve D}\big)} = \sqrt{\lambda_{\max}\big(\breve D^{-1/2} B \breve A_\Delta^{-1} B^\top \breve D^{-1/2}\big)}.
\]
For notational convenience, we set $E = \breve D^{-1/2}$, $F = \breve A_\Delta^{-1/2}$, and $G = A_\Delta^{-1}$. For any $x$, if $B^\top E x = 0$, then $E B F^2 B^\top E x = 0$. If $x$ is such that $B^\top E x \ne 0$, then $x^\top E B G B^\top E x \ne 0$ and $x^\top E E x \ne 0$. Therefore we can write
\[
\frac{x^\top E B F^2 B^\top E x}{x^\top x} = \frac{x^\top E B F^2 B^\top E x}{x^\top E B G B^\top E x} \cdot \frac{x^\top E B G B^\top E x}{x^\top E E x} \cdot \frac{x^\top E^2 x}{x^\top x}.
\]
With $y := F B^\top E x$, $z := E x$, and $w := \widetilde A_\Delta^{-1/2} y$ we have
\[
\frac{x^\top E B F^2 B^\top E x}{x^\top x}
= \frac{y^\top y}{y^\top F^{-1} G F^{-1} y} \cdot \frac{z^\top B G B^\top z}{z^\top z} \cdot \frac{x^\top E^2 x}{x^\top x}
= \frac{y^\top y}{y^\top \widetilde A_\Delta^{-1} y} \cdot \frac{z^\top B A_\Delta^{-1} B^\top z}{z^\top z} \cdot \frac{x^\top \breve D^{-1} x}{x^\top x}
= \frac{w^\top \widetilde A_\Delta w}{w^\top w} \cdot \frac{z^\top B A_\Delta^{-1} B^\top z}{z^\top z} \cdot \frac{x^\top \breve D^{-1} x}{x^\top x}.
\]
By using (14.45) we deduce
\[
\frac{x^\top E B F^2 B^\top E x}{x^\top x} \le \frac{w^\top \widetilde A_\Delta w}{w^\top w} \cdot \frac{z^\top D z}{z^\top z} \cdot \frac{x^\top \breve D^{-1} x}{x^\top x} \le \frac{\lambda_{\max}(\widetilde A_\Delta)\,\lambda_{\max}(D)}{\lambda_{\min}(\breve D)} \le \frac{\overline{\mu}(\widetilde A_\Delta)\,\overline{\mu}(D)}{\underline{\mu}(\breve D)}.
\]
Thus
\[
\sigma_{\max}\big({}_{\breve A_\Delta}(B^\top)_{\breve D}\big) = \sqrt{\lambda_{\max}\big(\breve D^{-1/2} B \breve A_\Delta^{-1} B^\top \breve D^{-1/2}\big)} \le \sqrt{\frac{\overline{\mu}(\widetilde A_\Delta)\,\overline{\mu}(D)}{\underline{\mu}(\breve D)}}.
\]
The required result follows from the above inequality, (14.46), and (14.37). □
Remark 14.2. If we use the same approach to show a lower bound for $\lambda_{\min}(\widetilde A)$, then analogously to (14.46) we have
\[
\lambda_{\min}(\widetilde A) \ge \min\big\{\underline{\mu}(\widetilde A_\Delta),\, \underline{\mu}(\widetilde D) + \underline{\mu}\big({}_{\breve D}(W)_{\breve D}\big)\big\} - \sigma_{\max}\big({}_{\breve A_\Delta}(B^\top)_{\breve D}\big),
\]
so that
\[
\lambda_{\min}(\widetilde A) \ge \min\big\{\underline{\mu}(\widetilde A_\Delta),\, \underline{\mu}(\widetilde D) + \underline{\mu}\big({}_{\breve D}(W)_{\breve D}\big)\big\} - \sqrt{\frac{\overline{\mu}(\widetilde A_\Delta)\,\overline{\mu}(\widetilde D)\,\overline{\mu}(D)}{\underline{\mu}(D)}}.
\]
This lower bound could be negative.
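A quick numerical illustration of the last sentence (all numbers below are made up, not taken from the book): a modest bound on the singular-value term can easily exceed the min-term, so this route gives no useful lower bound.

```python
def remark_lower_bound(mu_A, mu_D, mu_W, sigma_bound):
    # min{mu(A_Delta), mu(D) + mu(W)} minus an upper bound for sigma_max,
    # as in Remark 14.2; all inputs are illustrative
    return min(mu_A, mu_D + mu_W) - sigma_bound

lb_neg = remark_lower_bound(1.0, 1.0, 0.1, 2.0)   # negative: bound is useless
lb_pos = remark_lower_bound(1.0, 1.0, 0.1, 0.5)   # positive: bound is useful
```

Whether the bound is informative thus depends entirely on how small the singular-value term can be made.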
The extremal singular values of $\widetilde K$. We now turn to estimates for the extremal singular values of $\widetilde K$; see (14.33) and (14.34).

Lemma 14.3. Assume that the preconditioner $\breve D$ is chosen such that $\breve D$ and $D$ are spectrally equivalent. Then the maximum singular value of $\widetilde K$ defined in (14.33) is bounded independently of all parameters involved in the discretisation.

Proof. Let $S := W + (K' - I)V^{-1}(K - I)$ be the Steklov–Poincaré (or Dirichlet-to-Neumann) operator and $\mathbf S = \mathbf W + (\mathbf K - \mathbf I)^\top \mathbf V^{-1}(\mathbf K - \mathbf I)$ be its matrix representation. Then it is well known [53] that for any vector $\mathbf v$, if $v$ is the finite element function whose coefficient vector is $\mathbf v$, then $\mathbf v^\top \mathbf S \mathbf v \simeq \|v\|^2_{H^{1/2}(\Gamma)}$. It is also well known that $\mathbf v^\top \mathbf D \mathbf v \simeq \|v\|^2_{H^1(\Omega)} \gtrsim \|v|_\Gamma\|^2_{H^{1/2}(\Gamma)}$. Hence, if $\breve D$ is chosen such that $\breve D$ and $D$ are spectrally equivalent, then $\mathbf v^\top \breve D \mathbf v \gtrsim \|v|_\Gamma\|^2_{H^{1/2}(\Gamma)}$. As a consequence, we have
\[
\sigma_{\max}(\widetilde K)^2 \le \max_{\mathbf v \ne 0} \frac{\mathbf v^\top \breve D^{-1/2}(\mathbf K - \mathbf I)^\top \mathbf V^{-1}(\mathbf K - \mathbf I)\breve D^{-1/2}\mathbf v}{\mathbf v^\top \mathbf v}
= \max_{\mathbf w \ne 0} \frac{\mathbf w^\top (\mathbf K - \mathbf I)^\top \mathbf V^{-1}(\mathbf K - \mathbf I)\mathbf w}{\mathbf w^\top \breve D \mathbf w}
\le \max_{\mathbf w \ne 0} \frac{\mathbf w^\top \mathbf S \mathbf w}{\mathbf w^\top \breve D \mathbf w} \le C,
\]
where in the penultimate step we have used the definition of $\mathbf S$ and the fact that $\mathbf W$ is positive semi-definite. □
Convergence of the three-block preconditioner. Due to Lemma 2.22, the convergence rate of the three-block preconditioner is determined by $\lambda_{\min}(\widehat A)$, $\lambda_-(\widehat A)$, $\lambda_+(\widehat A)$, and $\lambda_{\max}(\widehat A)$, which in turn are determined by the extremal eigenvalues of $\widetilde A$ and the extremal singular values of $\widetilde K$; see (14.34) and Lemma C.10.

Theorem 14.4. Assume that the preconditioners $\breve A_\Delta$, $\breve D$, and $\breve V$ are chosen such that (14.36) holds. Then
\[
\Lambda(\widehat A) \subset [\underline{\mu}(\widehat A), \mu_-(\widehat A)] \cup [\mu_+(\widehat A), \overline{\mu}(\widehat A)],
\]
where
\[
\begin{aligned}
\underline{\mu}(\widehat A) &= \tfrac12\Big(\underline{\mu}(\widetilde A) - \overline{\mu}(\widetilde V) - \sqrt{\big(\underline{\mu}(\widetilde A) + \overline{\mu}(\widetilde V)\big)^2 + 4\sigma^2}\Big), & \mu_-(\widehat A) &= -\underline{\mu}(\widetilde V), \\
\mu_+(\widehat A) &= \underline{\mu}(\widetilde A), & \overline{\mu}(\widehat A) &= \tfrac12\Big(\overline{\mu}(\widetilde A) + \sqrt{\overline{\mu}(\widetilde A)^2 + 4\sigma^2}\Big),
\end{aligned}
\]
with $\underline{\mu}(\widetilde A)$, $\overline{\mu}(\widetilde A)$ given by Lemmas 14.1 and 14.2, and $\underline{\mu}(\widetilde V)$, $\overline{\mu}(\widetilde V)$ given by (14.36). Here $\sigma$ is an upper bound of $\sigma_{\max}(\widetilde K)$ given by Lemma 14.3.

Proof. It follows from (14.34) and Lemma C.10 that
\[
\begin{aligned}
\lambda_{\min}(\widehat A) &\ge \tfrac12\Big(\lambda_{\min}(\widetilde A) - \lambda_{\max}(\widetilde V) - \sqrt{\big(\lambda_{\min}(\widetilde A) + \lambda_{\max}(\widetilde V)\big)^2 + 4\,\sigma_{\max}(\widetilde K)^2}\Big), \\
\lambda_+(\widehat A) &\ge \lambda_{\min}(\widetilde A), \\
\lambda_{\max}(\widehat A) &\le \tfrac12\Big(\lambda_{\max}(\widetilde A) + \sqrt{\lambda_{\max}(\widetilde A)^2 + 4\,\sigma_{\max}(\widetilde K)^2}\Big).
\end{aligned}
\]
Bounds for the extremal eigenvalues of $\widetilde A$ are given in Lemma 14.1 and Lemma 14.2. Bounds for the extremal eigenvalues of $\widetilde V$ are given in (14.36). A bound for $\sigma_{\max}(\widetilde K)^2$ is given in Lemma 14.3.

In order to obtain an upper bound for $\lambda_-(\widehat A)$, we use a different approach than that given in Lemma C.10, due to a lack of technique to estimate $\sigma_{\min}(\widetilde K)$. Let $\lambda$ be a negative eigenvalue of $\widehat A$. Then there exist vectors $x$ and $y$, $(x, y) \ne (0, 0)$, satisfying
\[
\widetilde A x + \widetilde K^\top y = \lambda x, \tag{14.47a}
\]
\[
\widetilde K x - \widetilde V y = \lambda y. \tag{14.47b}
\]
Since $\widetilde A$ is positive semi-definite and $\lambda < 0$, the matrix $\widetilde A - \lambda I$ is invertible, so that (14.47a) implies
\[
x = -(\widetilde A - \lambda I)^{-1} \widetilde K^\top y.
\]
Substituting this into (14.47b) and premultiplying the resulting equation by $y^\top$ yield
\[
-y^\top \widetilde K (\widetilde A - \lambda I)^{-1} \widetilde K^\top y - y^\top \widetilde V y = \lambda\, y^\top y.
\]
Since $-y^\top \widetilde K (\widetilde A - \lambda I)^{-1} \widetilde K^\top y \le 0$, there follows
\[
\lambda\, y^\top y \le -y^\top \widetilde V y.
\]
Note that $y \ne 0$ because otherwise (14.47a) implies that $\lambda$ is an eigenvalue of $\widetilde A$, which contradicts the semi-definite property of $\widetilde A$. Hence it follows from the above inequality that $\lambda_-(\widehat A) \le -\lambda_{\min}(\widetilde V)$. This completes the proof of the theorem. □
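Both facts used in the proof — $\lambda_+(\widehat A) \ge \lambda_{\min}(\widetilde A)$ and $\lambda_-(\widehat A) \le -\lambda_{\min}(\widetilde V)$ — are already visible in the smallest possible model: a symmetric 2×2 saddle-point matrix $\begin{pmatrix} a & k \\ k & -v \end{pmatrix}$ with $a \ge 0$, $v > 0$, in which the scalars $a$, $k$, $v$ are hypothetical stand-ins for $\widetilde A$, $\widetilde K$, $\widetilde V$. The sketch below (ours, not the book's) uses the closed-form 2×2 eigenvalues:

```python
import math

def saddle_point_eigs(a, k, v):
    """Eigenvalues of the symmetric matrix [[a, k], [k, -v]] (2x2 closed form)."""
    mean = 0.5 * (a - v)
    rad = 0.5 * math.sqrt((a + v) ** 2 + 4.0 * k * k)
    return mean - rad, mean + rad

lam_minus, lam_plus = saddle_point_eigs(2.0, 3.0, 1.0)
# lam_minus <= -v and lam_plus >= a, mirroring the bounds in Theorem 14.4
```

The negative eigenvalue sits below $-v$ and the positive one above $a$, for any coupling strength $k$.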
Remark 14.3.

(i) Theorem 14.4 is purely at the matrix level, so it can be used for both two- and three-dimensional problems.

(ii) In [113] the hp-version of FEM-BEM coupling for the two-dimensional problem is considered. It is known that (cf. (14.35))
\[
\Lambda(A_\Delta) \subset [c_A h^2 p^{-4}, C_A], \quad \Lambda(W) \subset [c_W h p^{-2}, C_W], \quad \Lambda(D) \subset [c_D, C_D], \quad \Lambda(V) \subset [c_V h^2 p^{-2}, C_V h],
\]
where all the constants are positive and independent of $h$ and $p$. The results for the FEM part can be found in [146, 151, 166, 213] while those of the BEM part can be found in [113]. The authors of [113] employ spectrally equivalent preconditioners $\breve A_\Delta$, $\breve D$, and $\breve V$. This means the sub-blocks $A_\Delta$, $D$, and $V$ are exactly solved. This implies (14.36) holds with bounds independent of $h$ and $p$. As a consequence, Lemma 14.1 and Lemma 14.2 give
\[
\underline{\mu}(\widetilde A) = c_A h p^{-2} \quad\text{and}\quad \overline{\mu}(\widetilde A) = C_A.
\]
Theorem 14.4 yields the following inclusion set for the spectrum of $\widehat A$:
\[
\Lambda(\widehat A) \subset [-a, -b] \cup [c h p^{-2}, d],
\]
where $a$, $b$, $c$, and $d$ are some positive constants independent of $h$ and $p$.

(iii) The case considered above shows that the three-block preconditioner is not a good choice for preconditioning of FEM-BEM coupling, because even though the diagonal blocks are inverted, the coupled preconditioned matrix still has eigenvalues going to zero quickly. This is due to the dependency of $\lambda_{\min}(\widetilde A)$ on $\lambda_{\min}(W)$; see Lemma 14.1. Therefore, in the remainder of this chapter, we consider only two-block preconditioners.
14.2.1.2 Two-block preconditioner

In this subsection, we design a preconditioner for the $2\times2$-block representation of $A_s$ defined in (14.18), i.e.,
\[
A_s = \begin{pmatrix} A_\Delta + W & K^\top \\ K & -V \end{pmatrix},
\]
where we recall that $A_\Delta$ and $W$ are symmetric positive semi-definite while $V$ is symmetric positive definite. The matrix $A_s$ is symmetrically preconditioned by
\[
\breve A := \begin{pmatrix} \breve A_\Delta & 0 \\ 0 & \breve V \end{pmatrix},
\]
where $\breve A_\Delta$ is spectrally equivalent to $A_\Delta + W + \beta M$, $\beta > 0$, and $\breve V$ is spectrally equivalent to $V$. Here, $M$ is an additional mass matrix which is added to make $A_\Delta + W$ positive definite. The preconditioned symmetric matrix then has the form
\[
\widetilde A_s := \breve A^{-1/2} A_s \breve A^{-1/2} = \begin{pmatrix} \widetilde A_\Delta + {}_{\breve A_\Delta}(W)_{\breve A_\Delta} & {}_{\breve A_\Delta}(K^\top)_{\breve V} \\ {}_{\breve V}(K)_{\breve A_\Delta} & -\widetilde V \end{pmatrix},
\]
where we have used the notation convention in Definition 14.1. Similarly to Theorem 14.4 we have the following theorem.

Theorem 14.5. Assume that there exist positive constants $\mu_0$, $\mu_1$, and $\mu_2$ satisfying
\[
\mu_0 \ge \frac{x^\top (A_\Delta + W)\, x}{x^\top (A_\Delta + \beta M)\, x}, \qquad
\mu_2 \ge \frac{x^\top K^\top V^{-1} K\, x}{x^\top (A_\Delta + \beta M)\, x}, \qquad
\mu_1 \le \frac{x^\top (A_\Delta + W + K^\top V^{-1} K)\, x}{x^\top (A_\Delta + \beta M)\, x},
\]
for all nonzero vectors $x$. Then
\[
\Lambda(\widetilde A_s) \subset [\underline{\mu}(\widetilde A), \mu_-(\widetilde A)] \cup [\mu_+(\widetilde A), \overline{\mu}(\widetilde A)]
\]
where
\[
\underline{\mu}(\widetilde A) = \tfrac12\Big(-\overline{\mu}(\widetilde V) - \sqrt{\overline{\mu}(\widetilde V)^2 + 4\mu_2\,\overline{\mu}(\widehat A)\,\overline{\mu}(\widetilde V)}\Big), \qquad
\mu_-(\widetilde A) = -\underline{\mu}(\widetilde V),
\]
\[
\mu_+(\widetilde A) = \tfrac12\Big(-\underline{\mu}(\widetilde V) + \sqrt{\underline{\mu}(\widetilde V)^2 + 4\mu_1\,\underline{\mu}(\widehat A)\,\underline{\mu}(\widetilde V)}\Big),
\]
and
\[
\overline{\mu}(\widetilde A) = \tfrac12\Big(-\underline{\mu}(\widetilde V) + \mu_0\,\overline{\mu}(\widehat A) + \sqrt{\big(\underline{\mu}(\widetilde V) + \mu_0\,\overline{\mu}(\widehat A)\big)^2 + 4\mu_2\,\overline{\mu}(\widehat A)\,\overline{\mu}(\widetilde V)}\Big),
\]
with $\underline{\mu}(\widetilde V)$ and $\overline{\mu}(\widetilde V)$ given by (14.36). Here $\widehat A = \breve A_\Delta^{-1/2}\big(A_\Delta + \beta M\big)\breve A_\Delta^{-1/2}$.

Proof. The theorem is a result of Lemma C.11 in Appendix C. □
In the following lemma, we show how the constants $\mu_0$, $\mu_1$, and $\mu_2$ behave.

Lemma 14.4. The constants $\mu_0$, $\mu_1$, and $\mu_2$ in Theorem 14.5 can be chosen to be independent of all relevant parameters (namely, $h$ and $p$).

Proof. To determine $\mu_0$ we note that
\[
x^\top A_\Delta x \le c\,\|\nabla u\|^2_{L^2(\Omega)} \quad\text{and}\quad x^\top W x \simeq \|u\|^2_{H^{1/2}(\Gamma)/\mathbb R},
\]
where $u$ is the piecewise-polynomial function whose coefficient vector is $x$. As a consequence, there exists a positive constant $c_1$, independent of all relevant parameters, such that
\[
x^\top (A_\Delta + W)\, x \le c_1 \|u\|^2_{H^1(\Omega)}.
\]
Furthermore, there exists a positive constant $c_2$ satisfying
\[
x^\top (A_\Delta + \beta M)\, x \ge c_2 \|u\|^2_{H^1(\Omega)}. \tag{14.48}
\]
Therefore, $\mu_0 = c_1/c_2$.

On the other hand, under the assumption that the logarithmic capacity of the boundary $\Gamma$ is less than 1, the single-layer potential $V$ is positive definite on the space $H^{-1/2}(\Gamma)$; see [63, 65]. Moreover, since $V : H^{-1/2}(\Gamma) \to H^{1/2}(\Gamma)$ is bijective and $K - I : H^{1/2}(\Gamma) \to H^{1/2}(\Gamma)$ is continuous, it follows that
\[
x^\top K^\top V^{-1} K\, x \le \big\langle (K' - I)V^{-1}(K - I)u, u \big\rangle \le c\,\|u\|^2_{H^{1/2}(\Gamma)} \le c_3 \|u\|^2_{H^1(\Omega)}.
\]
This together with (14.48) gives $\mu_2 = c_3/c_2$.

Finally, to determine $\mu_1$ we follow [63] to define an extension operator $\mathcal E$ with the following properties: given $v \in H^{1/2}(\Gamma)$, there exists $w := \mathcal E v$ satisfying $w = v$ on $\Gamma$ and $\|w\|_{H^1(\Omega)} \le c\,\|v\|_{H^{1/2}(\Gamma)}$. For all $u \in H^1(\Omega)$, by using the first Friedrichs inequality and the above inequality we obtain
\[
\begin{aligned}
\|u\|^2_{H^1(\Omega)} &\le 2\|u - \mathcal E(u|_\Gamma)\|^2_{H^1(\Omega)} + 2\|\mathcal E(u|_\Gamma)\|^2_{H^1(\Omega)} \\
&\le c\,|u - \mathcal E(u|_\Gamma)|^2_{H^1(\Omega)} + 2\|\mathcal E(u|_\Gamma)\|^2_{H^1(\Omega)} \\
&\le c\,|u|^2_{H^1(\Omega)} + c\,\|\mathcal E(u|_\Gamma)\|^2_{H^1(\Omega)}
\end{aligned}
\]
\[
\le c\,\big(|u|^2_{H^1(\Omega)} + \|u\|^2_{H^{1/2}(\Gamma)}\big).
\]
It is proved in [53] that the operator $W + (K' - I)V^{-1}(K - I) : H^{1/2}(\Gamma) \to H^{-1/2}(\Gamma)$ is elliptic. Hence the above inequality yields, for all finite element functions $u$,
\[
\|u\|^2_{H^1(\Omega)} \le c_4\, x^\top \big(A_\Delta + W + K^\top V^{-1} K\big)\, x.
\]
This together with (14.48) yields $\mu_1 = c_4/c_2$. □
Remark 14.4. Theorem 14.5 can be used for both two- and three-dimensional problems. For two-dimensional problems, different choices of $\breve A_\Delta$ and $\breve V$ have been studied with this two-block preconditioner.

(i) In [113] the authors study the hp-version of the coupled FEM-BEM with quasi-uniform meshes. They use preconditioners $\breve A_\Delta$ and $\breve V$ which are spectrally equivalent to $A_\Delta + W + \beta M$ and $V$, respectively. Recalling that
\[
\widetilde V = \breve V^{-1/2} V \breve V^{-1/2} \quad\text{and}\quad \widehat A = \breve A_\Delta^{-1/2}\big(A_\Delta + \beta M\big)\breve A_\Delta^{-1/2},
\]
it follows that
\[
\underline{\mu}(\widetilde V) = c_V, \quad \overline{\mu}(\widetilde V) = C_V, \quad \underline{\mu}(\widehat A) = c_A, \quad \overline{\mu}(\widehat A) = C_A,
\]
where all the constants are positive and independent of $h$ and $p$. Theorem 14.5 and Lemma 14.4 yield
\[
\Lambda(\widetilde A_s) \subset [-a, -b] \cup [c, d]
\]
where $a$, $b$, $c$, and $d$ are positive and independent of $h$ and $p$. This two-block preconditioner is clearly superior to the corresponding three-block preconditioner; see Remark 14.3.

(ii) The authors of [113] also use non-overlapping additive Schwarz methods for both preconditioners $\breve A_\Delta$ and $\breve V$. As proved in Part II and Part III of the book,
\[
\underline{\mu}(\widetilde V) = c_V \big(1 + \log(Hp/h)\big)^{-2} \quad\text{and}\quad \overline{\mu}(\widetilde V) = C_V,
\]
where $c_V$ and $C_V$ are positive constants independent of $H$, $h$, and $p$. It is proved in [89, 90] that
\[
\underline{\mu}(\widehat A) = c_A \big(1 + \log(Hp/h)\big)^{-2} \quad\text{and}\quad \overline{\mu}(\widehat A) = C_A,
\]
where $c_A$ and $C_A$ are positive constants independent of $H$, $h$, and $p$. Theorem 14.5 and Lemma 14.4 yield
\[
\Lambda(\widetilde A_s) \subset \big[-a,\, -b\big(1 + \log(Hp/h)\big)^{-2}\big] \cup \big[c\big(1 + \log(Hp/h)\big)^{-2},\, d\big],
\]
where $a$, $b$, $c$, and $d$ are positive constants independent of the meshsize $h$ and polynomial degree $p$.

(iii) Theorem 14.5 can also be applied when multigrid methods are used instead of additive Schwarz preconditioners. Consider for simplicity the h-version only. In [81] the V-cycle multigrid method with Gauss–Seidel smoother for $\breve A_\Delta$ and the V-cycle multigrid method for $\breve V$ are employed. It is proved in [38] that
\[
\underline{\mu}(\widehat A) = c_A \big(1 + |\log h|\big)^{-1} \quad\text{and}\quad \overline{\mu}(\widehat A) = C_A.
\]
It is also proved in [172] that
\[
\underline{\mu}(\widetilde V) = c_V \big(1 + |\log h|\big)^{-1} \quad\text{and}\quad \overline{\mu}(\widetilde V) = C_V.
\]
As a consequence of Theorem 14.5 and Lemma 14.4 we have
\[
\Lambda(\widetilde A_s) \subset \big[-a,\, -b\big(1 + |\log h|\big)^{-1}\big] \cup \big[c\big(1 + |\log h|\big)^{-1},\, d\big].
\]
14.2.2 Preconditioning with GMRES

In this subsection we use the GMRES method to solve the unsymmetric positive definite system (14.19), namely $A_u u = b$, which is rewritten here for convenience:
\[
\begin{pmatrix} A_{\Delta W} & K^\top \\ -K & V \end{pmatrix} \begin{pmatrix} u_{\Omega,\Gamma} \\ \phi \end{pmatrix} = \begin{pmatrix} b_{\Omega,\Gamma} \\ -b_\Gamma \end{pmatrix}. \tag{14.49}
\]
The analysis developed in Subsection 2.7.3 will be applied. Let $A_{\rm diag}$ be the block-diagonal matrix of $A_u$ defined by (14.20), i.e.,
\[
A_{\rm diag} := \begin{pmatrix} A_{\Delta W} & 0 \\ 0 & V \end{pmatrix}. \tag{14.50}
\]
The preconditioner
\[
C := \begin{pmatrix} C_{\Delta W} & 0 \\ 0 & C_V \end{pmatrix} \tag{14.51}
\]
is defined by applying various preconditioners $C_{\Delta W}$ for $A_{\Delta W}$ and $C_V$ for $V$. As long as $C_{\Delta W}$ and $C_V$ satisfy (2.56), we can invoke Lemma 2.21 to obtain the convergence rate for the GMRES iteration. The preconditioner $C_V$ can be any preconditioner for the weakly-singular integral equation designed in Chapter 3, Chapter 4, Chapter 5, Chapter 6, Chapter 10, and Chapter 12. Let $\overline{\lambda}_V$ and $\underline{\lambda}_V$ be an upper bound of $\lambda_{\max}(P_V)$ and a lower bound of $\lambda_{\min}(P_V)$, respectively, where $P_V$ is the Schwarz operator associated with the weakly-singular integral operator $V$. This means
\[
\underline{\lambda}_V\, a_V(\phi, \phi) \le a_V(P_V \phi, \phi) \le \overline{\lambda}_V\, a_V(\phi, \phi), \qquad \phi \in \widetilde S_\Gamma. \tag{14.52}
\]
For example, if $P_V$ is the h-version two-level additive Schwarz operator, then it is proved in Subsection 3.1.1.2 that
\[
\underline{\lambda}_V \simeq \Big( \max_{1 \le i \le J} \big(1 + H_i/h_i\big) \Big)^{-1} \quad\text{and}\quad \overline{\lambda}_V \simeq 1.
\]
Recall that $\mathbf P_V = C_V^{-1} V$ with $\mathbf P_V$ being the matrix representation of $P_V$. Then (14.52) together with Lemma 2.8 and Lemma 2.9 implies
\[
\underline{\lambda}_V \langle C_V v, v\rangle_{\mathbb R^N} \le \langle V v, v\rangle_{\mathbb R^N} \le \overline{\lambda}_V \langle C_V v, v\rangle_{\mathbb R^N} \qquad \forall v \in \mathbb R^N.
\]
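The displayed two-level bound is easily evaluated; the Python sketch below (our illustration, not the book's code) computes $1/\max_i(1 + H_i/h_i)$ for given subdomain diameters $H_i$ and mesh sizes $h_i$:

```python
def two_level_lambda_min(H, h):
    # Lower bound (up to the constant hidden in "~") for lambda_min of the
    # h-version two-level additive Schwarz operator P_V.
    return 1.0 / max(1.0 + Hi / hi for Hi, hi in zip(H, h))

bound = two_level_lambda_min([0.5, 0.5], [0.125, 0.25])
```

Refining $h$ while keeping $H$ fixed makes the bound deteriorate like $h/H$, which is exactly the behaviour stated above.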
This is (2.56) for the $V$ component in $A_{\rm diag}$. It remains to show the same estimate for $A_{\Delta W}$ so that Lemma 2.21 can be invoked. Assume that $P_\Delta$ is some Schwarz operator defined on some subspace decomposition of $S_\Omega$, namely $S_\Omega = S_{\Omega,1} \oplus \cdots \oplus S_{\Omega,J}$ for some positive integer $J$. Similarly, assume that $P_W$ is some Schwarz operator defined on some subspace decomposition of
\[
S_\Gamma := {\rm span}\big\{\eta_{j,\Gamma} : j \in I_2\big\},
\]
namely $S_\Gamma = S_{\Gamma,1} \oplus \cdots \oplus S_{\Gamma,J_0}$ for some positive integer $J_0$. Then
\[
P_{\Delta W} := \begin{pmatrix} P_\Delta & 0 \\ 0 & P_W \end{pmatrix}
\]
is the corresponding Schwarz operator defined on the subspace decomposition
\[
\mathcal S = \bigoplus_{i=1}^{J} S_{\Omega,i} \times \bigoplus_{j=1}^{J_0} S_{\Gamma,j}.
\]
Their matrix representations satisfy
\[
\mathbf P_\Delta = C_\Delta^{-1} A_\Delta, \qquad \mathbf P_W = C_W^{-1} W, \qquad \mathbf P_{\Delta W} = C_{\Delta W}^{-1} A_{\Delta W}, \quad\text{where}\quad C_{\Delta W} = \begin{pmatrix} C_\Delta & 0 \\ 0 & C_W \end{pmatrix}.
\]
If the following estimates hold
\[
\underline{\lambda}_\Delta\, a_\Delta(u, u) \le a_\Delta(P_\Delta u, u) \le \overline{\lambda}_\Delta\, a_\Delta(u, u), \qquad u \in S_\Omega, \tag{14.53}
\]
and
\[
\underline{\lambda}_W\, a_W(u_\Gamma, u_\Gamma) \le a_W(P_W u_\Gamma, u_\Gamma) \le \overline{\lambda}_W\, a_W(u_\Gamma, u_\Gamma), \qquad u_\Gamma \in S_\Gamma, \tag{14.54}
\]
then, with $\underline{\lambda}_{\Delta W} := \min\{\underline{\lambda}_\Delta, \underline{\lambda}_W\}$ and $\overline{\lambda}_{\Delta W} := \max\{\overline{\lambda}_\Delta, \overline{\lambda}_W\}$,
\[
\underline{\lambda}_{\Delta W}\, a_{\Delta W}\big((u,v),(u,v)\big) \le a_{\Delta W}\big(P_{\Delta W}(u,v),(u,v)\big) \le \overline{\lambda}_{\Delta W}\, a_{\Delta W}\big((u,v),(u,v)\big)
\]
for all $(u,v) \in \mathcal S$. Again, Lemma 2.8 and Lemma 2.9 imply
\[
\underline{\lambda}_{\Delta W} \langle C_{\Delta W} v, v\rangle_{\mathbb R^M} \le \langle A_{\Delta W} v, v\rangle_{\mathbb R^M} \le \overline{\lambda}_{\Delta W} \langle C_{\Delta W} v, v\rangle_{\mathbb R^M} \qquad \forall v \in \mathbb R^M.
\]
We need the following notations. For any $w = (w_1, \ldots, w_M)^\top \in \mathbb R^M$, recalling the definition of $I_1$ and $I_2$ in (14.10), we define
\[
\begin{aligned}
\mathbf w_1 &:= (w_1, \ldots, w_{M_\Omega})^\top \in \mathbb R^{M_\Omega}, & \mathbf w_2 &:= (w_{M_\Omega+1}, \ldots, w_M)^\top \in \mathbb R^{M_\Gamma}, \\
w &:= \sum_{i \in I_1 \cup I_2} w_i \eta_i \in S_\Omega, & w_\Gamma &:= w|_\Gamma = \sum_{i \in I_2} w_i \eta_{i,\Gamma} \in S_\Gamma.
\end{aligned} \tag{14.55}
\]
We first prove the following lemma.

Lemma 14.5. There exist positive constants $C_1$ and $C_2$ independent of $M$ and $N$ such that
\[
\langle K^\top V^{-1} K w, w\rangle_{\mathbb R^M} \le C_1 \langle A_{\Delta W} w, w\rangle_{\mathbb R^M} \qquad \forall w \in \mathbb R^M \tag{14.56}
\]
and
\[
\langle K A_{\Delta W}^{-1} K^\top w, w\rangle_{\mathbb R^N} \le C_2 \langle V w, w\rangle_{\mathbb R^N} \qquad \forall w \in \mathbb R^N. \tag{14.57}
\]

Proof. We first prove (14.56). Let $K_{NM_\Gamma}$, $K'_{M_\Gamma N}$, $I_{NM_\Gamma}$, $I'_{M_\Gamma N}$, and $V_N^{-1}$, respectively, be the discretised operators defined from $K$, $K'$, $I$, $I'$, and $V$; see Subsection B.2.2. Recall that $\mathbf K$, $\mathbf K^\top$, $\mathbf M$, $\mathbf M^\top$, and $\mathbf V^{-1}$, respectively, are the matrix representations of these operators. We define, see Subsection B.2.2,
\[
\begin{aligned}
\widetilde E_{M_\Gamma} &:= \big(K'_{M_\Gamma N} - I'_{M_\Gamma N}\big) V_N^{-1} \big(K_{NM_\Gamma} - I_{NM_\Gamma}\big) \in \mathcal L(S_\Gamma, S_\Gamma^*), \\
E &:= (K' - I)V^{-1}(K - I) \in \mathcal L\big(H^{1/2}(\Gamma), H^{-1/2}(\Gamma)\big), \\
E_{M_\Gamma} &:= i_{S_\Gamma}^* E\, i_{S_\Gamma} \in \mathcal L(S_\Gamma, S_\Gamma^*).
\end{aligned}
\]
Note that in general $E_{M_\Gamma} \ne \widetilde E_{M_\Gamma}$. It follows from Lemma B.5 that the matrix representation of $\widetilde E_{M_\Gamma}$ is $(\mathbf K - \mathbf M)^\top \mathbf V^{-1} (\mathbf K - \mathbf M)$. Hence, recalling (14.15) and (14.55), we deduce
\[
\begin{aligned}
\langle K^\top V^{-1} K w, w\rangle_{\mathbb R^M} &= \big\langle (\mathbf K - \mathbf M)^\top \mathbf V^{-1}(\mathbf K - \mathbf M)\mathbf w_2, \mathbf w_2\big\rangle_{\mathbb R^{M_\Gamma}} = \langle \widetilde E_{M_\Gamma} w_\Gamma, w_\Gamma\rangle_{S_\Gamma^*, S_\Gamma} \\
&= \langle E_{M_\Gamma} w_\Gamma, w_\Gamma\rangle_{S_\Gamma^*, S_\Gamma} + \big\langle (\widetilde E_{M_\Gamma} - E_{M_\Gamma}) w_\Gamma, w_\Gamma\big\rangle_{S_\Gamma^*, S_\Gamma}.
\end{aligned} \tag{14.58}
\]
The first term on the right-hand side of (14.58) is easily estimated by
\[
\langle E_{M_\Gamma} w_\Gamma, w_\Gamma\rangle_{S_\Gamma^*, S_\Gamma} = \langle E w_\Gamma, w_\Gamma\rangle_{H^{-1/2}(\Gamma), H^{1/2}(\Gamma)}
\]
\[
\begin{aligned}
&= \big\langle V^{-1}(K - I) w_\Gamma, (K - I) w_\Gamma \big\rangle_{H^{-1/2}(\Gamma), H^{1/2}(\Gamma)} \le c_{V^{-1}} \|(K - I) w_\Gamma\|^2_{H^{1/2}(\Gamma)} \\
&\le c_{V^{-1}} c_K^2 \|w_\Gamma\|^2_{H^{1/2}(\Gamma)} \le c_{V^{-1}} c_K^2 \|w\|^2_{H^1(\Omega)} \le c_{V^{-1}} c_K^2 c_{\Delta W}\, a_{\Delta W}(w, w) = \frac{C_1}{2}\langle A_{\Delta W} w, w\rangle_{\mathbb R^M},
\end{aligned} \tag{14.59}
\]
where $C_1 := 2 c_{V^{-1}} c_K^2 c_{\Delta W}$. Here $c_{V^{-1}}$ and $c_K$ are the norms of $V^{-1}$ and $K - I$, respectively, and $c_{\Delta W}^{-1}$ is the coercivity constant of the bilinear form $a_{\Delta W}(\cdot,\cdot)$.

For the second term on the right-hand side of (14.58), we need to invoke Lemma B.4 in Appendix B with
\[
A = V, \quad B = (K - I)', \quad C = K - I, \quad X = H^{1/2}(\Gamma) = Y', \quad Y = H^{-1/2}(\Gamma) = X', \quad X_h = S_\Gamma, \quad Y_h = \widetilde S_\Gamma,
\]
to have
\[
\begin{aligned}
\big\langle (\widetilde E_{M_\Gamma} - E_{M_\Gamma}) w_\Gamma, w_\Gamma\big\rangle_{S_\Gamma^*, S_\Gamma} &\le \|(\widetilde E_{M_\Gamma} - E_{M_\Gamma}) w_\Gamma\|_{S_\Gamma^*}\, \|w_\Gamma\|_{S_\Gamma} \\
&\le \inf_{z \in \widetilde S_\Gamma} \|V^{-1}(K - I) w_\Gamma - z\|_{H^{-1/2}(\Gamma)}\, \|w_\Gamma\|_{S_\Gamma} \\
&\le \|V^{-1}(K - I) w_\Gamma\|_{H^{-1/2}(\Gamma)}\, \|w_\Gamma\|_{S_\Gamma} \\
&\le c_{V^{-1}} \|(K - I) w_\Gamma\|_{H^{1/2}(\Gamma)}\, \|w_\Gamma\|_{S_\Gamma} \\
&\le c_{V^{-1}} c_K \|w_\Gamma\|^2_{H^{1/2}(\Gamma)} \le c_{V^{-1}} c_K c_{\Delta W}\, a_{\Delta W}(w, w) \le \frac{C_1}{2}\langle A_{\Delta W} w, w\rangle_{\mathbb R^M}.
\end{aligned} \tag{14.60}
\]
Thus (14.56) follows from (14.58), (14.59), and (14.60).

We prove (14.57) in a similar manner. Recalling the definition of the linear bounded operator $K \in \mathcal L\big(H^1(\Omega), H^{1/2}(\Gamma)\big)$ by (14.22), and thus of its adjoint operator $K' \in \mathcal L\big(H^{-1/2}(\Gamma), \widetilde H^{-1}(\Omega)\big)$, we define
\[
\begin{aligned}
F &:= K A_{\Delta W}^{-1} K' \in \mathcal L\big(H^{-1/2}(\Gamma), H^{1/2}(\Gamma)\big), \\
F_N &:= i_{Y_N}^* F\, i_{Y_N} \in \mathcal L(Y_N, Y_N^*), \\
\widetilde F_N &:= K_{NM}\, A_{\Delta W}^{-1}\, K'_{MN} \in \mathcal L(Y_N, Y_N^*),
\end{aligned}
\]
and note that the matrix representation of $\widetilde F_N$ is $\mathbf K \mathbf A_{\Delta W}^{-1} \mathbf K^\top$; see Lemma B.5. The proof follows along exactly the same lines as that of (14.56). □

The above results yield the following theorem.
Theorem 14.6. Assume that $P_\Delta$, $P_W$, and $P_V$ are preconditioning operators defined with the corresponding bilinear forms $a_\Delta(\cdot,\cdot)$, $a_W(\cdot,\cdot)$, and $a_V(\cdot,\cdot)$, respectively, such that (14.52), (14.53), and (14.54) hold. Then the preconditioned GMRES method applied to (14.49) with the preconditioner $C$ defined by (14.51) has the following convergence rate:
\[
\|r_k\|_C \le \Big(1 - \frac{\widetilde\alpha^2}{\widetilde\beta^2}\Big)^{k/2} \|r_0\|_C,
\]
where $r_0$ and $r_k$ are, respectively, the initial and $k$th residuals of the iteration, and where
\[
\widetilde\alpha = \min\{\underline{\lambda}_\Delta, \underline{\lambda}_V, \underline{\lambda}_W\} \quad\text{and}\quad \widetilde\beta = C_1^{1/2} C_2^{1/2} \max\{\overline{\lambda}_\Delta, \overline{\lambda}_V, \overline{\lambda}_W\},
\]
with $C_1$, $C_2$ being the constants given in Lemma 14.5.

Proof. The theorem is proved by invoking Lemma 2.21 with
\[
A_{11} = A_{\Delta W} \in \mathbb R^{M \times M}, \qquad A_{12} = K^\top = -A_{21}^\top \in \mathbb R^{M \times N}, \qquad A_{22} = V \in \mathbb R^{N \times N}.
\]
The constants $\alpha$ and $\beta$ defined in (2.54) have the forms
\[
\alpha = \min\{\underline{\lambda}_{\Delta W}, \underline{\lambda}_V\} = \min\{\underline{\lambda}_\Delta, \underline{\lambda}_V, \underline{\lambda}_W\}
\]
and
\[
\beta = C_1^{1/2} C_2^{1/2} \max\{\overline{\lambda}_{\Delta W}, \overline{\lambda}_V\} = C_1^{1/2} C_2^{1/2} \max\{\overline{\lambda}_\Delta, \overline{\lambda}_V, \overline{\lambda}_W\},
\]
where
\[
C_1 := \sup_{\substack{v \in \mathbb R^M \\ v \ne 0}} \frac{\langle K^\top V^{-1} K v, v\rangle_{\mathbb R^M}}{\langle A_{\Delta W} v, v\rangle_{\mathbb R^M}}
\quad\text{and}\quad
C_2 := \sup_{\substack{v \in \mathbb R^N \\ v \ne 0}} \frac{\langle K A_{\Delta W}^{-1} K^\top v, v\rangle_{\mathbb R^N}}{\langle V v, v\rangle_{\mathbb R^N}}.
\]
It is clear that $\alpha/\beta \ge \widetilde\alpha/\widetilde\beta$, and thus the theorem follows from Lemma 2.21. □
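The theorem translates directly into an a priori iteration count: $\|r_k\|_C \le {\rm tol}\cdot\|r_0\|_C$ is guaranteed once $(1 - \widetilde\alpha^2/\widetilde\beta^2)^{k/2} \le {\rm tol}$. A minimal Python helper (ours; the values fed in below are arbitrary examples, not quantities from the book):

```python
import math

def gmres_iterations(alpha, beta, tol):
    """Smallest k with (1 - alpha^2/beta^2)^(k/2) <= tol."""
    rho = 1.0 - (alpha / beta) ** 2
    return math.ceil(2.0 * math.log(1.0 / tol) / -math.log(rho))

k_good = gmres_iterations(0.9, 1.0, 1e-3)   # well-clustered spectrum
k_poor = gmres_iterations(0.5, 1.0, 1e-3)   # weaker bound, more iterations
```

The better the ratio $\widetilde\alpha/\widetilde\beta$, the fewer guaranteed iterations, which is the whole point of the preconditioner bounds collected in Remark 14.5.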
Remark 14.5. Analogously to Remark 14.4, we collect here the different results obtained in various papers for various preconditioners $C_{\Delta W}$ and $C_V$.

(i) Consider the p-version for the solution of the interface problem (14.1). If the two-level additive Schwarz preconditioners for all three operators $\Delta$, $W$, and $V$ are used, then (14.52), (14.53), and (14.54) hold with
\[
\overline{\lambda}_V \simeq 1, \quad \underline{\lambda}_V \simeq \frac{1}{\log^2(p+1)}, \qquad
\overline{\lambda}_W \simeq 1, \quad \underline{\lambda}_W \simeq \frac{1}{\log^2 p}, \qquad
\overline{\lambda}_\Delta \simeq 1, \quad \underline{\lambda}_\Delta \simeq \frac{1}{\log^2 p};
\]
see Lemma 4.3 and Lemma 4.4 for $V$, Lemma 4.1 and Lemma 4.2 for $W$, and [18, Lemmas 3.1–3.3] for $\Delta$. Moreover, the constants $C_1$ and $C_2$ given by Lemma 14.5 are independent of $p$. Therefore, Theorem 14.6 gives
\[
\widetilde\alpha \simeq \frac{1}{(1 + \log p)^2} \quad\text{and}\quad \widetilde\beta \simeq 1,
\]
so that
\[
\|r_k\|_C \le \Big(1 - \frac{c}{(1 + \log p)^4}\Big)^{k/2} \|r_0\|_C.
\]
This preconditioner is studied in [117].

(ii) Consider hierarchical basis block preconditioners for problem (14.1). The constants in (14.52), (14.53), and (14.54) are
\[
\overline{\lambda}_V \simeq 1, \quad \underline{\lambda}_V \simeq (L+1)^{-2}, \qquad
\overline{\lambda}_W \simeq 1, \quad \underline{\lambda}_W \simeq (L+1)^{-2}, \qquad
\overline{\lambda}_\Delta \simeq 1, \quad \underline{\lambda}_\Delta \simeq (L+1)^{-2},
\]
where $L$ is the number of levels. The results for $\Delta$ are proved in [243, Theorem 4.1] while those for $W$ and $V$ are proved in [231, Theorem 1 and Theorem 2]. Therefore, it follows from Theorem 14.6 that
\[
\|r_k\|_C \le \Big(1 - \frac{c}{(L+1)^4}\Big)^{k/2} \|r_0\|_C.
\]
This preconditioner is studied in [156].

(iii) In the same manner, other preconditioners for problem (14.1) can be derived as long as relevant preconditioners are studied for $\Delta$, $W$, and $V$.

Remark 14.6. As mentioned in the introduction of this chapter, our analysis holds not only for the particular problem considered in this chapter, but also for any problem whose FEM-BEM coupling matrix is of the form (14.18) or (14.20). Indeed, the same procedure can be applied to the time-harmonic eddy current problem, as studied in [145].
14.3 Preconditioning for the Non-Symmetric Coupling Methods

We recall that in the non-symmetric coupling methods, namely the Johnson–Nédélec coupling and the Bielak–MacCamy coupling, one has to solve the matrix system with the matrix being $A_{JN}$ and $A_{BM}$ defined by (14.27) and (14.31), respectively. Noting that $A_{BM} = A_{JN}^\top$, it suffices to consider $A_{JN}$. We define
\[
A_{\rm diag} := \begin{pmatrix} A_\Delta & 0 \\ 0 & V \end{pmatrix} \quad\text{and}\quad C := \begin{pmatrix} C_\Delta & 0 \\ 0 & C_V \end{pmatrix} \tag{14.61}
\]
where CV and CΔ are designed such that (14.52) and (14.53) hold. As in Subsection 14.2.2, we will invoke Lemma 2.21 with A = AJN ,
A11 = AΔ ,
A12 = −2M ,
A21 = −K,
A22 = V,
and thus the task is to estimate the constants cA and Cβi , i = 1, 2, 3, defined in Lemma 2.21. For any v = (v1 , · · · , vM+N ) ∈ RM+N we define, recalling the definition of I j , j = 1, 2, 3, in (14.10), v11 := (v1 , . . . , vMΩ ), v1 := (v11 , v12 ), vΩ :=
∑ vi ηi ,
v12 := (vMΩ +1 , . . . , vM ), v2 := (vM+1 , · · · , vM+N ), vΓ :=
i∈I1
∑ vi ηi,Γ ,
(14.62)
i∈I2
v := vΩ + vΓ ,
v :=
∑ vM+i φi , i∈I3
where we recall that ηi,Γ is the restriction of ηi on Γ . Then v = (v1 , v2 ) ∈ RM × RN , vΩ ∈ H01 (Ω ), vΓ ∈ H 1/2 (Γ ), v ∈ H 1 (Ω ), and ψv ∈ H −1/2 (Γ ). Since 0MΩ N
v2 = Mv12 , v2 RN M v2 , v1 M = v11 v12 R M and - . v11 0 K − M
Kv1 , v2 RN = v K − M v12 = v NM 2 2 Ω v12 = Kv12 , v2 RN − Mv12 , v2 RN ,
we have
A12 v2 , v1 Rn1 + A21 v1 , v2 Rn2 = −2 M v2 , v1 RM − Kv1 , v2 RN = − (M + K)v12 , v2 RN
Hence due to Lemma 2.21, it suffices to find constants cA and Cβi , i = 1, 2, 3, such that cA Adiag v, v RM+N ≤ AJN v, vRM+N , ∀v ∈ RM+N , (14.63) ∀v ∈ RM , (14.64) K V−1 Kv, v M ≤ Cβ1 AΔ v, vRM R
\[
\langle M A_\Delta^{-1} M^\top \mathbf v, \mathbf v\rangle_{\mathbb R^N} \le C_{\beta_2} \langle V \mathbf v, \mathbf v\rangle_{\mathbb R^N} \qquad \forall \mathbf v \in \mathbb R^N, \tag{14.65}
\]
\[
\big|\langle (M + K)\mathbf v_{12}, \mathbf v_2\rangle_{\mathbb R^N}\big| \le C_{\beta_3} \langle A_{\rm diag} \mathbf v, \mathbf v\rangle_{\mathbb R^{M+N}} \qquad \forall \mathbf v \in \mathbb R^{M+N}. \tag{14.66}
\]
We first deal with (14.66) because it will be useful to prove (14.63).

Lemma 14.6. There exists a constant $C_{\beta_3} \in (0, 1)$ such that for any $\mathbf v \in \mathbb R^{M+N}$ we have
\[
\big|\langle (M + K)\mathbf v_{12}, \mathbf v_2\rangle_{\mathbb R^N}\big| \le C_{\beta_3} \langle A_{\rm diag} \mathbf v, \mathbf v\rangle_{\mathbb R^{M+N}}.
\]
Here $\mathbf v_{12}$ and $\mathbf v_2$ are defined from $\mathbf v$ by (14.62).
Proof. Recalling definition (14.62), we first note that
\[
\langle (M + K)\mathbf v_{12}, \mathbf v_2\rangle_{\mathbb R^N} = \langle (I + K)v_\Gamma, \psi_v\rangle_\Gamma.
\]
Let $T_{\rm int} : H^{1/2}(\Gamma) \to H^{-1/2}(\Gamma)$ be the interior Steklov–Poincaré operator defined by (B.11). Then it follows from Lemma B.2 that there exists a constant $c_K \in (1/2, 1)$ such that
\[
\|(I + K)w\|^2_{V^{-1}} \le 2 c_K \langle T_{\rm int} w, w\rangle_\Gamma \qquad \forall w \in H^{1/2}(\Gamma).
\]
Here, the norm $\|\cdot\|_{V^{-1}}$ is defined by the inverse operator $V^{-1}$, i.e., $\|w\|_{V^{-1}} := \langle V^{-1} w, w\rangle_\Gamma^{1/2}$. It then follows that
\[
\begin{aligned}
\big|\langle (M + K)\mathbf v_{12}, \mathbf v_2\rangle_{\mathbb R^N}\big| &\le \|(I + K)v_\Gamma\|_{V^{-1}}\, \|\psi_v\|_V \\
&\le \sqrt{2}\,\sqrt{c_K}\, \langle T_{\rm int} v_\Gamma, v_\Gamma\rangle_\Gamma^{1/2}\, \|\psi_v\|_V \\
&\le c_K \langle T_{\rm int} v_\Gamma, v_\Gamma\rangle_\Gamma + \tfrac12 \|\psi_v\|^2_V \\
&\le c_K \langle T_{\rm int}(\widetilde v|_\Gamma), \widetilde v|_\Gamma\rangle_\Gamma + \tfrac12 \|\psi_v\|^2_V,
\end{aligned} \tag{14.67}
\]
where $v_\Gamma$, $\widetilde v$, and $\psi_v$ are defined in (14.62). Lemma B.1 in Appendix B states that there exists a unique $v_D \in H^1(\Omega)$ such that
\[
-\Delta v_D = 0 \ \text{ in } \Omega, \qquad v_D = \widetilde v \ \text{ on } \Gamma, \qquad \frac{\partial v_D}{\partial n} = T_{\rm int}(\widetilde v|_\Gamma) \ \text{ on } \Gamma,
\]
and that $\|\nabla v_D\|^2_{L^2(\Omega)} \le \|\nabla \widetilde v\|^2_{L^2(\Omega)}$. Therefore, by using Green's identity we can estimate the first term on the right-hand side of (14.67) by
\[
\langle T_{\rm int}(\widetilde v|_\Gamma), \widetilde v|_\Gamma\rangle_\Gamma = \Big\langle \frac{\partial v_D}{\partial n}, v_D \Big\rangle_\Gamma = \|\nabla v_D\|^2_{L^2(\Omega)} \le \|\nabla \widetilde v\|^2_{L^2(\Omega)} \le a_\Delta(\widetilde v, \widetilde v) = \langle A_\Delta \mathbf v_1, \mathbf v_1\rangle_{\mathbb R^M}.
\]
This and (14.67), together with the identity $\|\psi_v\|^2_V = \langle V \mathbf v_2, \mathbf v_2\rangle_{\mathbb R^N}$, imply
14 FEM-BEM Coupling
( ( M + K)v12 , v2
RN
( ( ≤ cK AΔ v1 , v1 M + Vv2 , v2 N R R
This completes the proof with Cβ3 := cK .
= cK Adiag v, v RM+N .
We can now prove (14.63).
Lemma 14.7. There exists a positive constant cA independent of M and N such that
AJN v, vRM+N ≥ cA Adiag v, v RM+N ∀v ∈ RM+N . Proof. Simple calculation, with definition (14.62), reveals that
AJN v, vRM+N = AΔ v1 , v1 RM − 2 M v2 , v1 M − Kv1 , v2 RN + Vv2 , v2 RN R
= Adiag v, v RM+N − 2 M v2 , v1 M − Kv1 , v2 RN . R
Since
M v2 , v1
RM
=
v 11
v 12
0MΩ N M
v2 = Mv12 , v2 RN
and - . v11 0 K − M
Kv1 , v2 RN = v K − M v12 = v NMΩ 2 2 v12 = Kv12 , v2 RN − Mv12 , v2 RN ,
we have
AJN v, vRM+N = Adiag v, v RM+N − M + K)v12 , v2 RN ( ( ≥ Adiag v, v RM+N − ( M + K)v12 , v2 RN (.
It follows from Lemma 14.6 that
AJN v, vRM+N ≥ (1 − Cβ3 ) Adiag v, v RM+N ,
completing the proof of the lemma with cA := 1 − Cβ3 > 0.
Finally, the following lemma gives (14.64) and (14.65).
Lemma 14.8. There exist positive constants C1 and C2 independent of M and N such that K V−1 Kw, w M ≤ C1 AΔ w, wRM ∀w ∈ RM R
and
K A−1 Δ K w, w
RN
≤ C2 Vw, wRN
∀w ∈ RN .
14.4 Numerical Results
363
Proof. Note that the stiffness matrix AΔ differs from AΔ W in that the former is defined with the bilinear from aΔ (·, ·) instead of aΔ W (·, ·). Thus, the proof follows exactly along the same lines of that of Lemma 14.5 and is omitted. We can now derive the convergence result for the Johnson–N´ed´elec and Bielak– MacCamy FEM-BEM coupling methods. Theorem 14.7. Assume that PΔ and PV are preconditioning operators defined with the corresponding bilinear form aΔ (·, ·) and aV (·, ·), respectively, such that (14.52) and (14.53) hold. Then the preconditioned GMRES method applied to (14.26) and to (14.30) with the preconditioner C defined by (14.61) has the following convergence rate 2 k/2 α r0 C rk C ≤ 1 − β2 where r0 and rk are, respectively, the initial and kth residuals of the iteration, and where = cA d and β = D2 max{Cβ1 , Cβ2 } + Cβ3 + 1 α
with cA and Cj , j = 1, 2, 3, being the constants given in Lemma 14.6, Lemma 14.7, and Lemma 14.8.
Proof. The theorem is a direct consequence of Lemma 2.21 in Chapter 2 and Lemma 14.6, Lemma 14.7, and Lemma 14.8.
Remark 14.7. Analogously to Remark 14.4 and Remark 14.5, it is remarked here that Theorem 14.7 is a general result in that as soon as preconditioners are designed for Δ and V, then they can be applied for the non-symmetric coupling (Johnson– N´ed´elec or Bielak–MacCamy) and relevant convergence rates can be derived.
14.4 Numerical Results In this section, we report on the numerical experiments carried out in [113], where HMCR is used as solver. The following tables are reprinted with copyright permission. Numerical results for the preconditioners with GMRES designed in Subsection 14.2.2 and preconditioners for non-symmetric coupling methods designed in Section 14.3 can be found in [76]. Problem (14.1) is solved with Ω being an L-shaped domain in R2 having vertices (0, 0), (0, 1/2), (−1/2, 1/2), (−1/2, −1/2), (1/2, −1/2), and (1/2, 0). The functions f , u0 , and t0 are chosen such that u1 (x, y) = ℑ(z2/3 ) for z = x + iy
and
u2 (x, y) = log |(x, y) + (0.3, 0.3)|.
Detailed descriptions of mesh generation and boundary / finite element spaces are given in [113]. Tables 14.1–14.6 show bounds of the extreme eigenvalues of the un-
364
14 FEM-BEM Coupling
preconditioned and preconditioned matrices with two-block and three-block preconditioners, preconditioner by additive Schwarz methods (see Remark 14.4 part (ii), and diagonal scaling. Table 14.1 The extreme eigenvalues of the unpreconditioned matrix A 1/h 2 2 2 2 2 2 2 2 4 8 16 32
p 2 4 6 8 10 12 14 16 1 1 1 1
M+N 37 97 181 289 421 577 757 961 37 97 289 961
λmin -0.9850 -0.9857 -0.9857 -0.9857 -0.9857 -0.9857 -0.9857 -0.9857 -0.5643 -0.2465 -0.1056 -0.0455
α
λ− -0.1270E-1 -0.4967E-2 -0.2624E-2 -0.1615E-2 -0.1092E-2 -0.7867E-3 -0.5933E-3 -0.4646E-3 -0.1531E-1 1.19 -0.3996E-2 1.22 -0.1021E-2 1.21 -0.2575E-3
α -1.35 -1.57 -1.68 -1.75 -1.80 -1.83 -1.83 1.93 1.96 1.98
λ+ 0.2472 0.5687E-1 0.1834E-1 0.7487E-2 0.3592E-2 0.1932E-2 0.1130E-2 0.7045E-3 0.3354 0.1229 0.4092E-1 0.1259E-1
α -2.12 -2.79 -3.11 -3.29 -3.40 -3.48 -3.53 1.44 1.58 1.70
λmax 6.521 6.568 6.568 6.568 6.568 6.568 6.568 6.568 6.724 7.526 7.857 7.961
−1
˘3 A Table 14.2 The extreme eigenvalues of the three-block preconditioned matrix A 1/h 2 2 2 2 2 2 2 2 4 8 16 32
p 2 4 6 8 10 12 14 16 1 1 1 1
M+N 37 97 181 289 421 577 757 961 37 97 289 961
λmin -6.854 -6.880 -6.883 -6.884 -6.884 -6.884 -6.884 -6.885 -1.655 -1.433 -1.370 -1.356
λ− -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -0.999 -1.000 -1.000 -1.000 -1.000
λ+ 0.1909 0.1040 0.6896E-1 0.5067E-1 0.3963E-1 0.3231E-1 0.2713E-1 0.2347E-1 0.4401 0.2419 0.1308 0.6891E-1
α -0.87 -1.01 -1.07 -1.10 -1.12 -1.13 -1.08 0.86 0.88 0.92
λmax 5.872 5.899 5.902 5.903 5.904 5.904 5.904 5.904 5.350 5.365 5.396 5.410
14.4 Numerical Results
365
˘ −1 Table 14.3 The extreme eigenvalues of the two-block preconditioned matrix A 2 A 1/h 2 2 2 2 2 2 2 2 4 8 16 32
p 2 4 6 8 10 12 14 16 1 1 1 1
M+N 37 97 181 289 421 577 757 961 37 97 289 961
λmin -6.858 -6.884 -6.889 -6.891 -6.891 -6.892 -6.892 -6.893 -5.798 -5.827 -5.839 -5.843
λ− -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -0.999 -1.000 -1.000 -1.000 -1.000
λ+ 0.9853 0.9850 0.9849 0.9850 0.9850 0.9850 0.9850 0.9850 1.947 1.939 1.937 1.937
λmax 5.877 5.905 5.910 5.912 5.913 5.913 5.914 5.914 4.952 5.019 5.049 5.064
˘ −1 Table 14.4 The extreme eigenvalues of the matrix A ASM A (1/h = 2, 3 elements) p 2 4 6 8 10 12 14 16
M+N 37 97 181 289 421 577 757 961
λmin -6.815 -6.818 -6.819 -6.820 -6.820 -6.821 -6.821 -6.822
λ− -0.4151 -0.2995 -0.2494 -0.2207 -0.2014 -0.1869 -0.1755 -0.1663
α -0.47 -0.45 -0.42 -0.41 -0.40 -0.40 -0.40
λ+ 0.2957 0.1969 0.1519 0.1290 0.1151 0.1057 0.9879E-1 0.9351E-1
α -0.58 -0.64 -0.56 -0.51 -0.46 -0.43 -0.41
λmax 5.836 5.839 5.839 5.840 5.840 5.840 5.840 5.841
˘ −1 Table 14.5 The extreme eigenvalues of the matrix A diag A (1/h = 2, 3 elements) p 2 4 6 8 10 12 14 16
M+N 37 97 181 289 421 577 757 961
λmin λ− α λ+ -6.815 -0.4151 0.2957 -6.817 -0.2422 -0.77 0.1781 -6.817 -0.1708 -0.86 0.1271 -6.817 -0.1321 -0.89 0.1002 -6.817 -0.1077 -0.91 0.8354E-1 -6.817 -0.9089E-1 -0.93 0.7214E-1 -6.817 -0.7857E-1 -0.94 0.6377E-1 -6.818 -0.6935E-1 -0.93 0.5732E-1
α -0.73 -0.83 -0.82 -0.81 -0.80 -0.80 -0.79
λmax 5.836 5.838 5.838 5.838 5.838 5.838 5.838 5.838
366
14 FEM-BEM Coupling
Table 14.6 The numbers of minimum residual iterations which are required to reduce the initial residual by a factor of 10−3 1/h 2 2 2 2 2 2 2 2 4 8 16 32 64 128
p 2 4 6 8 10 12 14 16 1 1 1 1 1 1
˘ −1 ˘ −1 ˘ −1 ˘ −1 M+N A A 3 A A2 A AASM A Adiag A 37 36 21 11 30 30 97 138 34 12 47 52 181 285 40 13 54 67 289 536 45 13 62 77 421 892 50 13 66 94 577 1374 55 13 74 111 757 2328 60 13 82 135 961 > 9999 79 17 102 197 37 17 15 10 97 38 23 13 289 67 32 13 961 130 45 14 3457 243 60 15 13057 476 75 15
Part V
Problems on the Sphere
Chapter 15
Pseudo-differential Equations with Spherical Splines
The aim of this chapter is to report on the design of additive Schwarz methods when spherical splines are used to solve the hypersingular integral equation on the sphere. On this geometry, this equation belongs to a wider class of the so-called pseudodifferential equations on the sphere; see Appendix B.1. These equations have long been used [126, 131] as a modern and powerful tool to tackle linear boundary-value problems. Svensson [214] introduces this approach to geodesists who study [77, 84] these problems on the sphere which is taken as a model of the earth. Efficient solutions to pseudodifferential equations on the sphere become more demanding when given data are collected by satellites. For completeness, we will first introduce the general form of pseudodifferential equations, show how they can be solved approximately by spherical splines, and present additive Schwarz preconditioners for the hypersingular integral equation. Detailed results can be found in the articles [174, 175, 179]. More details on spherical splines can be found in the thesis [177]. Results for the Laplace–Beltrami equation can also be derived, but it is not in the scope of this book. The reader who is interested in this equation is referred to the article [175].
15.1 Pseudo-differential Operators and Sobolev Spaces 15.1.1 Sobolev spaces on the sphere Throughout this chapter and the next chapter, we denote by S the unit sphere in R3 , i.e., S := {x ∈ R3 : |x| = 1} where | · | is the Euclidean norm in R3 . A spherical harmonic of order on S is the restriction to S of a homogeneous harmonic polynomial of degree in R3 . The space of all spherical harmonics of order is the eigenspace of the Laplace–Beltrami operator ΔS (see Example 15.1)
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 E. P. Stephan and T. Tran, Schwarz Methods and Multilevel Preconditioners for Boundary Element Methods, https://doi.org/10.1007/978-3-030-79283-1_15
corresponding to the eigenvalue λ_ℓ = −ℓ(ℓ + 1). The dimension of this space being 2ℓ + 1, see e.g. [155], one may choose for it an orthonormal basis {Y_{ℓ,m}}_{m=−ℓ}^{ℓ}. The collection of all spherical harmonics Y_{ℓ,m}, m = −ℓ,...,ℓ and ℓ = 0, 1,..., forms an orthonormal basis for L²(S).

The basis {Y_{ℓ,m}}_{m=−ℓ}^{ℓ} allows us to define Sobolev spaces on the sphere using Fourier coefficients of functions. Spaces defined this way are equivalent to those defined in Appendix A; see e.g. [138]. For s ∈ R, the Sobolev space H^s is defined by

  H^s := { v ∈ D′(S) : ∑_{ℓ=0}^{∞} ∑_{m=−ℓ}^{ℓ} (ℓ + 1)^{2s} |v̂_{ℓ,m}|² < ∞ },

where D′(S) is the space of distributions on S and v̂_{ℓ,m} are the Fourier coefficients of v defined by

  v̂_{ℓ,m} = ∫_S v(x) Y_{ℓ,m}(x) dσ_x,  m = −ℓ,...,ℓ,  ℓ = 0, 1,....

Here dσ_x is the element of surface area. The space H^s is equipped with the following norm and inner product:

  ‖v‖_s := ( ∑_{ℓ=0}^{∞} ∑_{m=−ℓ}^{ℓ} (ℓ + 1)^{2s} |v̂_{ℓ,m}|² )^{1/2},  (15.1)

  ⟨v, w⟩_s := ∑_{ℓ=0}^{∞} ∑_{m=−ℓ}^{ℓ} (ℓ + 1)^{2s} v̂_{ℓ,m} ŵ_{ℓ,m}.

When s = 0 we write ⟨·,·⟩ instead of ⟨·,·⟩_0; this is in fact the L²-inner product. We note that

  |⟨v, w⟩_s| ≤ ‖v‖_s ‖w‖_s  ∀v, w ∈ H^s, ∀s ∈ R,  (15.2)

and

  ‖v‖_{s₁} = sup_{w ∈ H^{s₂}, w ≠ 0} ⟨v, w⟩_{(s₁+s₂)/2} / ‖w‖_{s₂}  ∀v ∈ H^{s₁}, ∀s₁, s₂ ∈ R.  (15.3)
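The Fourier-coefficient definition of the H^s norm in (15.1) translates directly into code. The following sketch (our own illustration, not part of the original text) computes ‖v‖_s from a list of coefficient arrays indexed by the degree ℓ:

```python
import numpy as np

def sobolev_norm(v_hat, s):
    """H^s norm (15.1) of a function on the sphere, given its Fourier
    coefficients v_hat[l] = array of the 2l+1 coefficients (m = -l..l)."""
    total = 0.0
    for l, coeffs in enumerate(v_hat):
        total += (l + 1.0) ** (2 * s) * np.sum(np.abs(np.asarray(coeffs)) ** 2)
    return np.sqrt(total)

# Example: v = Y_{0,0} + Y_{1,0}; then ||v||_s^2 = 1 + 2^{2s}.
v_hat = [np.array([1.0]), np.array([0.0, 1.0, 0.0])]
print(sobolev_norm(v_hat, 0.5))  # sqrt(3)
```

For s = 0 this reduces to the L²(S) norm, reflecting that the Y_{ℓ,m} form an orthonormal basis of L²(S).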
When s = 1, we can define an equivalent norm

  ‖v‖²_{H¹(S)} := ∫_S |∇*v|² dσ + ∫_S |v|² dσ,  where  |v|²_{H¹(S)} := ∫_S |∇*v|² dσ,

with ∇*, called the surface gradient, being the restriction of the gradient operator to S, and dσ being the Lebesgue measure on S.
In the case s = k ∈ Z⁺, the set of non-negative integers, the Sobolev space H^k can also be defined by using an atlas for the unit sphere S; see [159]. This construction uses the concept of homogeneous extensions.

Definition 15.1. Given any spherical function v, its homogeneous extension of degree n ∈ Z⁺ to R³\{0} is the function v_n defined by

  v_n(x) := |x|^n v(x/|x|).

Let {(Γ_j, φ_j)}_{j=1}^J be an atlas for S, i.e., a finite collection of charts (Γ_j, φ_j), where the Γ_j are open subsets of S covering S, and where φ_j : Γ_j → B_j are infinitely differentiable mappings whose inverses φ_j⁻¹ are also infinitely differentiable. Here B_j, j = 1,...,J, are open subsets of R². Also, let {α_j}_{j=1}^J be a partition of unity subordinate to the atlas {(Γ_j, φ_j)}_{j=1}^J, i.e., a set of infinitely differentiable functions α_j on S vanishing outside the sets Γ_j, such that ∑_{j=1}^J α_j = 1 on S. For any k ∈ Z⁺, the Sobolev space H^k on the unit sphere is defined by

  H^k := {v : (α_j v) ∘ φ_j⁻¹ ∈ H^k(B_j), j = 1,...,J},

which is equipped with a norm defined by

  ‖v‖*_k := ∑_{j=1}^J ‖(α_j v) ∘ φ_j⁻¹‖_{H^k(B_j)}.  (15.4)

This norm is equivalent to the norm defined in (15.1); see [138].

Let v ∈ H^k, k ≥ 1. For each ℓ = 1,...,k, let v_{ℓ−1} denote the unique homogeneous extension of v of degree ℓ − 1. Then

  |v|_ℓ := ∑_{|α|=ℓ} ‖D^α v_{ℓ−1}‖_{L²(S)}  (15.5)

is a Sobolev-type seminorm of v in H^k. Here ‖D^α v_{ℓ−1}‖_{L²(S)} is understood as the L²-norm of the restriction of the trivariate function D^α v_{ℓ−1} to S. When ℓ = 0 we define |v|_0 := ‖v‖_{L²(S)}, which can now be used together with (15.5) to define another norm in H^k:

  ‖v‖_k := ∑_{ℓ=0}^{k} |v|_ℓ.

This norm turns out to be equivalent to the Sobolev norm defined in (15.4); see [159].
15.1.2 Pseudo-differential operators on the sphere

Let {L̂(ℓ)}_{ℓ≥0} be a sequence of real numbers. A pseudo-differential operator L is a linear operator that assigns to any distribution v ∈ D′(S) the distribution

  Lv := ∑_{ℓ=0}^{∞} ∑_{m=−ℓ}^{ℓ} L̂(ℓ) v̂_{ℓ,m} Y_{ℓ,m}.  (15.6)

The sequence {L̂(ℓ)}_{ℓ≥0} is referred to as the spherical symbol of L. Let K(L) := {ℓ : L̂(ℓ) = 0}. Then

  ker L = span{Y_{ℓ,m} : ℓ ∈ K(L), m = −ℓ,...,ℓ}.

Denoting M := dim ker L, we assume that 0 ≤ M < ∞. We also assume that L is a strongly elliptic pseudo-differential operator of order 2α, i.e.,

  C₁(ℓ + 1)^{2α} ≤ L̂(ℓ) ≤ C₂(ℓ + 1)^{2α}  for all ℓ ∉ K(L),  (15.7)

for some positive constants C₁ and C₂. More general pseudo-differential operators can be defined via Fourier transforms by using local charts; see e.g. [128, 173]. It can easily be seen by using (15.1), (15.2), and (15.3) that if L is a pseudo-differential operator of order 2α then L maps H^{s+α} into H^{s−α} and is bounded, for all s ∈ R.

Example 15.1. The following pseudo-differential operators are strongly elliptic; see for example [214]. For the derivation of the symbols, see Appendix B.1.

(i) The Laplace–Beltrami operator is an operator of order 2 and has symbol L̂(ℓ) = ℓ(ℓ + 1). This operator is the restriction of the Laplacian to the sphere.

(ii) The hypersingular integral operator (without the minus sign) is an operator of order 1 and has symbol L̂(ℓ) = ℓ(ℓ + 1)/(2ℓ + 1). This operator arises from the boundary-integral reformulation of the Neumann problem with the Laplacian in the interior or exterior of the sphere.

(iii) The weakly-singular integral operator is an operator of order −1 and has symbol L̂(ℓ) = 1/(2ℓ + 1). This operator arises from the boundary-integral reformulation of the Dirichlet problem with the Laplacian in the interior or exterior of the sphere.

The problem we consider in this chapter is posed as follows.

Problem A: Let L be a pseudo-differential operator of order 2α. Given g ∈ H^{−α} satisfying

  ĝ_{ℓ,m} = 0  for all ℓ ∈ K(L), m = −ℓ,...,ℓ,

find u ∈ H^α satisfying

  Lu = g,  ⟨μ_i, u⟩ = γ_i,  i = 1,...,M,  (15.8)
where γ_i ∈ R and μ_i ∈ H^{−α} are given. Here ⟨·,·⟩ denotes the duality inner product between H^{−α} and H^α, which is the usual extension of the H⁰-inner product.

Assume that the functionals μ₁,...,μ_M are unisolvent with respect to ker L, i.e., for any v ∈ ker L, if ⟨μ_i, v⟩ = 0 for all i = 1,...,M, then v = 0. Then Problem A has a unique solution; see [154]. Noting that H^α = ker L ⊕ (ker L)^⊥_{H^α}, the solution u ∈ H^α is sought in the form

  u = u₀ + u₁  where  u₀ ∈ ker L  and  u₁ ∈ (ker L)^⊥_{H^α},  (15.9)

where u₀ ∈ ker L satisfies

  ⟨μ_i, u₀⟩ = γ_i − ⟨μ_i, u₁⟩,  i = 1,...,M,

and u₁ ∈ (ker L)^⊥_{H^α} satisfies

  ⟨Lu₁, v⟩ = ⟨g, v⟩  for all v ∈ H^α.
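The symbols in Example 15.1 make the Fourier side of Problem A concrete: away from the kernel modes, Lu = g is solved mode by mode by dividing each Fourier coefficient by the symbol. The helper names below are our own, and this is only an illustrative sketch of the symbol calculus, not code from the cited references:

```python
import numpy as np

# Spherical symbols from Example 15.1 (hypothetical helper names)
def symbol_laplace_beltrami(l):   # order 2
    return l * (l + 1.0)

def symbol_hypersingular(l):      # order 1
    return l * (l + 1.0) / (2 * l + 1.0)

def symbol_weakly_singular(l):    # order -1
    return 1.0 / (2 * l + 1.0)

def solve_in_fourier_space(symbol, g_hat):
    """Solve Lu = g mode by mode: u_hat = g_hat / L(l), assuming the
    compatibility condition of Problem A (g_hat = 0 whenever L(l) = 0)."""
    u_hat = []
    for l, coeffs in enumerate(g_hat):
        s = symbol(l)
        if s == 0.0:
            u_hat.append(np.zeros_like(np.asarray(coeffs)))  # kernel modes
        else:
            u_hat.append(np.asarray(coeffs) / s)
    return u_hat

# g = 3*Y_{1,0}; the hypersingular symbol at l = 1 is 2/3, so u = 4.5*Y_{1,0}
g_hat = [np.array([0.0]), np.array([0.0, 3.0, 0.0])]
u_hat = solve_in_fourier_space(symbol_hypersingular, g_hat)
print(u_hat[1][1])  # 4.5
```

The kernel modes (ℓ ∈ K(L)) are left to be fixed by the side conditions ⟨μ_i, u⟩ = γ_i, mirroring the split u = u₀ + u₁ in (15.9).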
15.2 Solving Pseudo-differential Equations by Spherical Splines

In this section, we will first recall the definition of spherical splines [6, 7, 8]. We will then present error estimates for the approximation of the solution of Problem A by spherical splines.
15.2.1 Spherical splines

15.2.1.1 Spherical triangulation

Let {x₁, x₂, x₃} be linearly independent vectors in R³. The trihedron T generated by {x₁, x₂, x₃} is defined by

  T := {x ∈ R³ : x = b₁x₁ + b₂x₂ + b₃x₃ with bᵢ ≥ 0, i = 1, 2, 3}.

The intersection τ := T ∩ S is called a spherical triangle. Let Δ = {τᵢ : i = 1,...,T} be a set of spherical triangles. Then Δ is called a spherical triangulation of the sphere S if there hold

(i) ∪_{i=1}^{T} τᵢ = S;
(ii) Each pair of distinct triangles in Δ are either disjoint or share a common vertex or an edge.

A spherical cap centred at x ∈ S and having radius R is defined by C(x, R) := {y ∈ S : cos⁻¹(x · y) ≤ R}. For any spherical triangle τ, let |τ| denote the diameter of the smallest spherical cap containing τ, and ρ_τ denote the diameter of the largest spherical cap contained in τ. We define

  |Δ| := max{|τ| : τ ∈ Δ},  h := tan(|Δ|/2),  ρ_Δ := min{ρ_τ : τ ∈ Δ},  (15.10)

and refer to |Δ| as the mesh size. Throughout the chapter, we assume that |Δ| ≤ 1 and that all triangulations are shape-regular and quasi-uniform, i.e., there exist β > 1 and γ < 1 such that

  |τ| ≤ β ρ_τ  and  |τ| ≥ γ |Δ|  ∀τ ∈ Δ.
Roughly speaking, in a shape-regular triangulation the smallest angle is not too small so that there are no long and thin triangles, while in a quasi-uniform triangulation the sizes of the triangles are not too much different.
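The quantities in (15.10) can be computed for a concrete spherical triangle. The sketch below (our own helper names, not the authors' code) approximates |τ| by the diameter of the spherical cap through the three vertices, which is the smallest enclosing cap when the circumcentre lies over the triangle:

```python
import numpy as np

def geodesic(x, y):
    """Geodesic distance cos^{-1}(x . y) between unit vectors."""
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def cap_diameter(v1, v2, v3):
    """Diameter of the spherical cap through the vertices of a spherical
    triangle: a proxy for |tau| (an illustrative sketch only)."""
    c = np.cross(v2 - v1, v3 - v1)        # normal of the plane of the vertices
    c = c / np.linalg.norm(c)
    if np.dot(c, v1 + v2 + v3) < 0:       # orient the cap centre towards tau
        c = -c
    return 2.0 * geodesic(c, v1)          # all three vertices equidistant from c

# Equilateral spherical triangle at colatitude 0.2 around the north pole
vs = [np.array([np.sin(0.2) * np.cos(t), np.sin(0.2) * np.sin(t), np.cos(0.2)])
      for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
d = cap_diameter(*vs)
h = np.tan(d / 2)   # h = tan(|Delta|/2) as in (15.10)
print(round(d, 3))  # 0.4
```

For this symmetric example the cap centre is the north pole, so the cap diameter is exactly twice the colatitude of the vertices.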
15.2.1.2 Spherical splines

Let H_d denote the space of trivariate homogeneous polynomials of degree d restricted to S. We define S^r_d(Δ) to be the space of homogeneous splines of degree d and smoothness r on a spherical triangulation Δ, that is,

  S^r_d(Δ) := {s ∈ C^r(S) : s|_τ ∈ H_d, τ ∈ Δ}.

Throughout the chapter, for technical reasons we assume that the integers d and r defining S^r_d(Δ) satisfy

  d ≥ 3r + 2 if r ≥ 1,  and  d ≥ 1 if r = 0;  (15.11)

see [6, 7, 8, 179].

Let τ be a spherical triangle with vertices x₁, x₂, and x₃. Since x₁, x₂, and x₃ are linearly independent, for any x ∈ S there exist unique real numbers b_{1,τ}(x), b_{2,τ}(x), and b_{3,τ}(x) (called the spherical barycentric coordinates of x associated with τ) which satisfy

  x = b_{1,τ}(x)x₁ + b_{2,τ}(x)x₂ + b_{3,τ}(x)x₃.

We define the homogeneous Bernstein basis polynomials of degree d on τ to be the polynomials

  B^{d,τ}_{ijk}(x) := (d!/(i! j! k!)) b_{1,τ}(x)^i b_{2,τ}(x)^j b_{3,τ}(x)^k,  i + j + k = d.
As was shown in [6], we can use these polynomials as basis functions for H_d. The space S^r_d(Δ) has the following approximation property.

Theorem 15.1. Assume that Δ is a β-quasiuniform spherical triangulation with |Δ| ≤ 1, and that (15.11) holds. Then for any v ∈ H^s, there exists η ∈ S^r_d(Δ) satisfying

  ‖v − η‖_t ≤ C h^{s−t} ‖v‖_s,

where t ≤ r + 1 and t ≤ s ≤ d + 1. Here C is a positive constant depending only on d and the smallest angle in Δ.

Proof. See [179, Theorem 4.1]. □

We will need the following lemmas.

Lemma 15.1. Let p be a homogeneous polynomial of degree d on a spherical triangle τ, written in Bernstein–Bézier form as

  p(v) = ∑_{i+j+k=d} c_{ijk} B^{d,τ}_{ijk}(v),  v ∈ τ.

Then

  ‖p‖_{L²(τ)} ≃ A_τ^{1/2} |c_τ|.

Here A_τ is the area of τ and c_τ is the vector of components c_{ijk}, i + j + k = d.

Proof. See [21, Lemma 5]. □

Lemma 15.2. Let p be a homogeneous polynomial of degree d on a spherical triangle τ in Δ. Then the following estimate holds:

  |p|_{H¹(τ)} ≲ ρ_τ⁻¹ ‖p‖_{L²(τ)}.

Proof. See [21, Lemma 6]. □
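Spherical barycentric coordinates are obtained from a 3×3 linear solve, after which the homogeneous Bernstein basis polynomials are evaluated directly from their definition. The following sketch (our own helper names, not the authors' code) also checks the multinomial identity ∑_{i+j+k=d} B^{d,τ}_{ijk} = (b₁ + b₂ + b₃)^d:

```python
import numpy as np
from math import factorial

def spherical_barycentric(x, v1, v2, v3):
    """Coordinates b with x = b1*v1 + b2*v2 + b3*v3 (a 3x3 linear solve)."""
    return np.linalg.solve(np.column_stack([v1, v2, v3]), x)

def bernstein(i, j, k, b):
    """Homogeneous Bernstein basis polynomial B^{d,tau}_{ijk}, d = i+j+k."""
    d = i + j + k
    return (factorial(d) / (factorial(i) * factorial(j) * factorial(k))
            * b[0] ** i * b[1] ** j * b[2] ** k)

v1, v2, v3 = np.eye(3)                      # the octant triangle
x = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # a point of that triangle on S
b = spherical_barycentric(x, v1, v2, v3)

d = 2
total = sum(bernstein(i, j, k, b)
            for i in range(d + 1) for j in range(d + 1) for k in range(d + 1)
            if i + j + k == d)
print(np.isclose(total, b.sum() ** d))  # True, by the multinomial theorem
```

Note that on the sphere b₁ + b₂ + b₃ ≠ 1 in general; the coordinates are barycentric with respect to the trihedron, not the planar triangle.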
15.2.1.3 Quasi-interpolation

In this subsection we briefly discuss the construction of a quasi-interpolation operator I_Δ : L²(S) → S^r_d(Δ), which is defined in [159]. This operator will be used frequently in the remainder of this chapter. First we introduce the set of domain points of Δ:

  D := ∪_{τ=⟨x₁,x₂,x₃⟩∈Δ} { ξ^τ_{ijk} = (i x₁ + j x₂ + k x₃)/d : i, j, k ∈ N and i + j + k = d }.
Here, τ = ⟨x₁, x₂, x₃⟩ denotes the spherical triangle whose vertices are x₁, x₂, x₃. We denote the domain points by ξ₁,...,ξ_D, where D = dim S⁰_d(Δ). Let {B_l : l = 1,...,D} be a basis for S⁰_d(Δ) such that the restriction of B_l to a triangle containing ξ_l is the Bernstein–Bézier polynomial of degree d associated with this point, and such that B_l vanishes on all other triangles.

A set M := {ζ_l}_{l=1}^N ⊂ D is called a minimal determining set for S^r_d(Δ) if, for every s ∈ S^r_d(Δ), all the coefficients ν_l(s) in the expression s = ∑_{l=1}^D ν_l(s)B_l are uniquely determined by the coefficients corresponding to the basis functions associated with points in M. Given a minimal determining set, we construct a basis {B*_l}_{l=1}^N for S^r_d(Δ) by requiring

  ν_l(B*_{l′}) = δ_{l,l′},  1 ≤ l, l′ ≤ N.

The Hahn–Banach Theorem extends the linear functionals ν_l, l = 1,...,N, to all functions in L²(S); we continue to use the same symbol for these extensions. The quasi-interpolation operator I_Δ : L²(S) → S^r_d(Δ) is now defined by

  I_Δ v := ∑_{l=1}^N ν_l(v) B*_l,  v ∈ L²(S).

Before introducing the properties of this operator, we need the following definition. For any τ ∈ Δ, we define

  ω_τ := ∪_{i∈I_τ} ω_i,  where  ω_i := supp(B*_i)  and  I_τ := {i ∈ {1,...,N} : τ ⊂ ω_i}.

The following result can be found in [159, 174].

Lemma 15.3. There exists a quasi-interpolation operator I_Δ : L²(S) → S^r_d(Δ) satisfying, for any v ∈ L²(S),

  |I_Δ v|_{H^k(τ)} ≲ h^{−k} ‖v‖_{L²(ω_τ)},  k = 0, 1,

and

  ‖I_Δ v‖_{H^k(S)} ≲ h^{−k} ‖v‖_{L²(S)},  k = 0, 1/2, 1.

In particular, if the degree d is even then for any v ∈ H^{1/2}(S) we have

  ‖I_Δ v‖_{H^{1/2}(S)} ≲ ‖v‖_{H^{1/2}(S)}.  (15.12)

The constants depend only on the smallest angle Θ_Δ of the triangulation Δ and the polynomial degree d.

Proof. See [174, Lemma 2.3]. □

We note here that the assumption that d is even ensures that S^r_d(Δ) contains the constant functions and that I_Δ reproduces these functions.
15.2.2 Approximate solutions and error estimates

We approximate the solution u of Problem A by ũ ∈ H^α having the form

  ũ = ũ₀ + ũ₁  where  ũ₀ ∈ ker L  and  ũ₁ ∈ S^r_d(Δ).  (15.13)

The solution ũ₁ will be found by solving the Galerkin equation

  ⟨L*ũ₁, v⟩ = ⟨g, v⟩  ∀v ∈ S^r_d(Δ),  (15.14)

in which L* is a strongly elliptic pseudo-differential operator of order 2α whose symbol is given by

  L̂*(ℓ) = L̂(ℓ)  if ℓ ∉ K(L),  L̂*(ℓ) = (1 + ℓ)^{2α}  if ℓ ∈ K(L).

Noting (15.7), we have

  L̂*(ℓ) ≃ (1 + ℓ)^{2α}  ∀ℓ ∈ N.

This assures us that the bilinear form a* : H^α × H^α → R defined by

  a*(v, w) := ⟨L*v, w⟩,  v, w ∈ H^α,

is continuous and coercive in H^α. The well-known Lax–Milgram Theorem confirms the unique existence of ũ₁ ∈ S^r_d(Δ). Having found ũ₁, we find ũ₀ ∈ ker L by solving the equations (cf. (15.8))

  ⟨μ_i, ũ₀⟩ = γ_i − ⟨μ_i, ũ₁⟩,  i = 1,...,M,

so that

  ⟨μ_i, ũ⟩ = ⟨μ_i, u⟩,  i = 1,...,M.

The unique existence of ũ₀ follows from Assumption B in exactly the same way as that of u₀ in (15.9); see [179, Theorem 2.1]. The exact solution u to Problem A and the approximate solution ũ given by (15.13) satisfy the following estimate.

Theorem 15.2. Assume that Δ is a β-quasiuniform spherical triangulation with |Δ| ≤ 1 and that (15.11) holds. If the order 2α of the pseudo-differential operator L satisfies α ≤ r + 1, then u and ũ satisfy

  ‖u − ũ‖_t ≤ C h^{s−t} ‖u‖_s,

where s ≤ d + 1 and 2α − d − 1 ≤ t ≤ min{s, α}. Here C is a positive constant depending only on d and the smallest angle in Δ.

Proof. See [179, Theorem 5.1]. □
In the next section, we design additive Schwarz preconditioners for solving (15.14) in the case of the hypersingular integral operator. Results for the Laplace–Beltrami operator can be found in [175].
15.3 Additive Schwarz Methods for Equations on the Sphere

15.3.1 Decomposition of the finite element space

The construction of a preconditioner by the additive Schwarz method, as usual, requires a two-level mesh.
15.3.1.1 Two-level meshes

Let X := {x₁,...,x_K} and Y := {y₁,...,y_J} be two sets of points on S with J < K. We denote by Δ_h (respectively, Δ_H) the spherical triangulation generated by X (respectively, Y), and recall the definitions

  h = tan(|Δ_h|/2)  and  H = tan(|Δ_H|/2);

see (15.10). We assume that |Δ_H| > |Δ_h|. We also assume that both Δ_H and Δ_h are regular and quasi-uniform. To distinguish triangles in the two different meshes, we denote a triangle in the fine mesh by τ, and a triangle in the coarse mesh by τ_H. For each τ ∈ Δ_h, we denote by A_τ its area. To invoke results in [21, 159] we also define

  ς = tan(ρ_{Δ_h}/2),  h_τ = tan(|τ|/2),  and  ς_τ = tan(ρ_τ/2)  for τ ∈ Δ_h,

and, similarly for the coarse mesh,

  ς_H = tan(ρ_{Δ_H}/2),  H_{τ_H} = tan(|τ_H|/2),  and  ς_{τ_H} = tan(ρ_{τ_H}/2)  for τ_H ∈ Δ_H.
It is straightforward to see that, since Δ_h and Δ_H are regular and quasi-uniform, we have

  ρ_τ ≃ |τ| ≃ |Δ_h| ≃ h ≃ h_τ ≃ ς_τ ≃ ς  and  ρ_{τ_H} ≃ |τ_H| ≃ |Δ_H| ≃ H ≃ H_{τ_H} ≃ ς_{τ_H} ≃ ς_H.

Denoting by star¹(x) the union of all triangles in Δ_h that share the vertex x, we define

  star^k(x) := ∪{ star¹(w) : w is a vertex of star^{k−1}(x) },  k > 1,

and

  star^k(τ) := ∪{ star^k(x) : x is a vertex of τ },  k ≥ 1.
We make use of the radial projection defined in [159]. Let Ω be a subset of S. We denote by r_Ω the centre of a spherical cap of smallest possible radius containing Ω, and by Π_Ω the tangential plane touching S at r_Ω. For each point x ∈ Ω, the intersection of Π_Ω and the ray passing through the origin and x is denoted by x̄. We define

  R(Ω) := {x̄ ∈ Π_Ω : x ∈ Ω}.  (15.15)

The radial projection R_Ω is defined by

  R_Ω : R(Ω) → Ω,  x̄ ↦ x := x̄/|x̄|.  (15.16)

It is clear that R_Ω is invertible. The following result is proved in [21, 159].

Lemma 15.4. Under the assumptions that Δ_h is a regular, quasi-uniform triangulation satisfying |Δ_h| ≤ 1 we have, for any τ ∈ Δ_h,

(i) A_τ ≃ h²,
(ii) ν_k(τ) ≲ (2k + 1)²,

where A_τ denotes the area of τ, and ν_k(τ) denotes the number of triangles in star^k(τ). Here, the constants depend only on the smallest angle of the triangulation.
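The radial projection (15.16) and its inverse are simple to realize numerically once the cap centre r_Ω is known. A minimal sketch, with hypothetical helper names and r_Ω assumed given:

```python
import numpy as np

def to_tangent_plane(x, r):
    """Inverse radial projection: map x in Omega (on S) to the tangent
    plane touching S at r, along the ray through the origin and x."""
    return x / np.dot(x, r)   # scaled so that dot(result, r) = 1

def radial_projection(x_bar):
    """R_Omega of (15.16): map a point of the tangent plane back to S."""
    return x_bar / np.linalg.norm(x_bar)

r = np.array([0.0, 0.0, 1.0])                 # cap centre (north pole)
x = np.array([np.sin(0.3), 0.0, np.cos(0.3)]) # a point of Omega on S
x_bar = to_tangent_plane(x, r)
print(np.allclose(radial_projection(x_bar), x))  # True: the maps are inverse
```

The tangent plane through r is {y : y · r = 1}, which is why the scaling by 1/(x · r) lands x̄ exactly on Π_Ω.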
15.3.1.2 Domain decomposition

From the triangulations Δ_H and Δ_h we define the following subdomains. For each triangle τ_H^i ∈ Δ_H, i = 1,...,J, let Ω_i be the union of all triangles in Δ_h that intersect τ_H^i, i.e.,

  Ω_i = ∪{ τ ∈ Δ_h : τ̄ ∩ τ̄_H^i ≠ ∅ }.

Since a triangle in the fine mesh can intersect more than one triangle in the coarse mesh, the subdomains are overlapping. We denote the size of the overlap by δ, which is proportional to h by our construction. It will be assumed that we can partition the index set I = {1,...,J} into M sets I_k (M independent of J) such that for any 1 ≤ k ≤ M, if i, j ∈ I_k and i ≠ j then Ω_i ∩ Ω_j = ∅.

Remark 15.1. The partition above is related to the graph colouring problem (see e.g. [34]). We define an undirected graph G = (V, E) in which the set of vertices V = {V₁,...,V_J} is identified with the set of subdomains Ω_i, and E is the set of edges, where there is an edge between V_i and V_j if Ω_i ∩ Ω_j ≠ ∅. A partition of the index set I satisfying the assumption above is equivalent to a colouring of the vertices of the graph G in which adjacent vertices have different colours. The
minimal number δ(G) of colours required is called the chromatic number of G. The number M can be chosen to be δ(G). When G is neither a complete graph nor an odd cycle, an upper bound for δ(G) is the maximal degree μ(G) of G; see [34, Theorem 3, Chapter 5]. In terms of our subdomains, each subdomain intersects at most μ(G) other subdomains. It is clear that in our subdomain construction, the number μ(G) depends only on the smallest angle of the triangulation Δ_H. Therefore, M is independent of J.

We also assume, as in [219], that for j = 1,...,J, there exists δ_j > 0 such that for any x ∈ Ω_j there exists i ∈ {1,...,J} satisfying

  x ∈ Ω_i  and  dist(x, ∂Ω_i) := min_{y∈∂Ω_i} cos⁻¹(x · y) ≥ δ_j.  (15.17)

Here δ_j measures the amount of overlap in Ω_j. We denote
  Ω_{j,δ_j} := {x ∈ Ω_j : dist(x, ∂Ω_j) ≤ δ_j}.

15.3.1.3 Subspace decomposition

Denoting by V the space S^r_d(Δ_h), we define

  V_i := V ∩ H¹₀(Ω_i),  i = 1,...,J.

We also denote by V₀ the space S^r_d(Δ_H). Since this space is not a subspace of V, we use the quasi-interpolation operator I_{Δ_h} defined in Subsection 15.2.1.3 to define Ṽ₀ := I_{Δ_h}V₀. This results in the subspace decomposition

  V = Ṽ₀ + V₁ + ··· + V_J.  (15.18)

As described in Section 2.3, this subspace decomposition will be used to define the additive Schwarz preconditioner for the hypersingular integral equation studied in the following subsection.
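Once the decomposition (15.18) is fixed, the additive Schwarz operator has the generic matrix form P = ∑_i R_iᵀ A_i⁻¹ R_i A, where R_i restricts to the i-th subspace. The following is a generic dense-matrix sketch of this operator, with index-set subspaces standing in for Ṽ₀, V₁,...,V_J; it is not the spline-specific implementation of this chapter:

```python
import numpy as np

def additive_schwarz(A, subspaces):
    """Additive Schwarz operator P = sum_i R_i^T (R_i A R_i^T)^{-1} R_i A
    for a symmetric positive definite matrix A and index-set subspaces
    (a generic sketch under our own conventions)."""
    n = A.shape[0]
    P = np.zeros((n, n))
    for idx in subspaces:
        R = np.zeros((len(idx), n))
        R[np.arange(len(idx)), idx] = 1.0     # restriction to the subspace
        A_i = R @ A @ R.T                     # local stiffness matrix
        P += R.T @ np.linalg.solve(A_i, R @ A)
    return P

A = np.diag([1.0, 4.0, 9.0, 16.0])
P = additive_schwarz(A, [np.array([0, 1]), np.array([1, 2, 3])])
print(np.linalg.cond(P) < np.linalg.cond(A))  # True for this toy example
```

For this diagonal toy matrix each local solve is exact, so P = diag(1, 2, 1, 1): the only deviation from the identity is the eigenvalue 2 on the index shared by the two overlapping subspaces.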
15.3.2 The hypersingular integral equation

15.3.2.1 The equation and its Galerkin approximation

We recall that the hypersingular integral equation on the sphere is of the form

  −Wu + ω² ∫_S u dσ = f  on S,  (15.19)

where W is the hypersingular integral operator given by

  Wv(x) := −(1/2π) ∂/∂ν_x ∫_S v(y) ∂/∂ν_y (1/|x − y|) dσ_y,

and ω is some nonzero real constant. It can be shown [161] that the operator −W can be represented in the form (15.6) with symbol L̂(ℓ) = ℓ(ℓ + 1)/(2ℓ + 1); see Example 15.1. We note that the second term on the left-hand side of (15.19) is added to ensure strong ellipticity and thus unique existence of the solution. Introducing the bilinear form

  a(u, v) := ⟨−Wu, v⟩ + ω² ⟨u, 1⟩⟨v, 1⟩,  u, v ∈ H^{1/2}(S),

we note that (see [161])

  a(v, v) ≃ ‖v‖²_{H^{1/2}(S)}  ∀v ∈ H^{1/2}(S).  (15.20)
A natural weak formulation of equation (15.19) is: Find u ∈ H^{1/2}(S) satisfying

  a(u, v) = ⟨f, v⟩  ∀v ∈ H^{1/2}(S).

This bilinear form is clearly bounded and coercive (cf. [42]), which guarantees the unique solvability of the equation. The Ritz–Galerkin approximation problem is: Find u_h ∈ S^r_d(Δ_h) satisfying

  a(u_h, v_h) = ⟨f, v_h⟩  ∀v_h ∈ S^r_d(Δ_h).

The following proposition shows that the matrix system arising from this Galerkin approximation is ill-conditioned.

Proposition 15.1. The condition number of the stiffness matrix A is bounded by

  κ(A) ≲ h⁻¹.

Proof. Recall that {φ_i}_{i=1}^N is a basis for S^r_d(Δ_h). Let c = (c_i)_{i=1}^N ∈ R^N. We define v := ∑_{i=1}^N c_i φ_i ∈ S^r_d(Δ_h). Noting (15.20), we have

  cᵀAc = a(v, v) ≃ ‖v‖²_{H^{1/2}(S)}.  (15.21)

We note that I_{Δ_h} v = v for all v ∈ S^r_d(Δ_h), where I_{Δ_h} is the quasi-interpolation operator defined in Subsection 15.2.1.3. It follows from (15.21) and Lemma 15.3 that

  cᵀAc ≃ ‖I_{Δ_h} v‖²_{H^{1/2}(S)} ≲ h⁻¹ ‖v‖²_{L²(S)} = h⁻¹ ∑_{τ∈Δ_h} ‖v‖²_{L²(τ)}.

By using Lemma 15.1 and noting A_τ ≃ |τ|² ≃ h², we obtain

  cᵀAc ≲ h⁻¹ ∑_{τ∈Δ_h} A_τ |c_τ|² ≃ h ∑_{τ∈Δ_h} |c_τ|².  (15.22)

Here, c_τ is a vector whose components are those of the vector c corresponding to the basis functions φ_i whose supports contain τ. Proposition 5.1 in [12] ensures that the support of φ_i lies inside star³(τ) if the domain point corresponding to φ_i belongs to τ. Therefore, if μ = max{ν₃(τ) : τ ∈ Δ_h} then each component of c appears at most μ times in the sum on the right-hand side of (15.22). This together with Lemma 15.4 yields

  cᵀAc ≲ h μ |c|² ≃ h |c|² = h cᵀc,

which implies λ_max(A) ≲ h. Using (15.21) and a similar argument as above, we have

  cᵀAc ≃ ‖v‖²_{H^{1/2}(S)} ≳ ‖v‖²_{L²(S)} = ∑_{τ∈Δ_h} ‖v‖²_{L²(τ)} ≃ ∑_{τ∈Δ_h} A_τ |c_τ|² ≳ h² cᵀc,

implying λ_min(A) ≳ h². An upper bound for κ(A) can now be derived. □
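The h⁻¹ growth proved in Proposition 15.1 can be illustrated in the orthonormal spherical-harmonic basis, where the Galerkin matrix of a(·,·) is diagonal with entries given by the symbol ℓ(ℓ + 1)/(2ℓ + 1) (and ω² on the constant mode). Truncating at degree L ≃ 1/h gives κ ≃ L. This is an illustration in a different basis, not the spline stiffness matrix itself:

```python
import numpy as np

def hypersingular_condition_number(L, omega=1.0):
    """Condition number of the Galerkin matrix of a(.,.) in the truncated
    spherical-harmonic basis (degrees 0..L): diagonal with entries
    l(l+1)/(2l+1) for l >= 1 and omega^2 for l = 0 (our illustration)."""
    l = np.arange(1, L + 1)
    diag = np.concatenate(([omega ** 2], l * (l + 1) / (2 * l + 1)))
    return diag.max() / diag.min()

for L in (8, 16, 32, 64):
    print(L, round(hypersingular_condition_number(L), 2))
# kappa roughly doubles as L doubles, i.e. it grows like L ~ h^{-1}
```

The largest entry grows like L/2 while the smallest stays bounded away from zero, which is exactly the order-1 behaviour of the operator behind the h⁻¹ bound.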
The additive Schwarz preconditioner based on the decomposition (15.18) will be used to precondition the above ill-conditioned system. In order to invoke Theorem 2.1 again we prove the stability and coercivity of the decomposition.
15.3.2.2 Technical lemmas

To prove Assumption 2.3 (Stability of decomposition), the standard analysis used in previous chapters cannot be applied directly. There are two levels of difficulty. First, we have to work with homogeneous polynomials on spherical triangles, which do not have all the properties of polynomials on planar triangles. Second, the space V = S^r_d(Δ_h) does not contain the constant functions when d is odd.

Recalling the construction of the subdomains in Subsection 15.3.1, for each triangle τ_H^i in the coarse mesh Δ_H we defined a corresponding subdomain Ω_i. We now define triangles τ_{H,h}^i satisfying

  τ_H^i ⊂ τ_{H,h}^i ⊂ Ω_i,  i = 1,...,J.  (15.23)
Without loss of generality we can assume that the edges of τ_{H,h}^i are parallel to and at distance h from those of τ_H^i; see Figure 15.1. (We can always choose Ω_i to be a bigger domain so that the assumption holds.) For simplicity of presentation, we assume that the spherical triangles τ_H^i, i = 1,...,J, are equilateral triangles. Then so are the spherical triangles τ_{H,h}^i. The set {τ_{H,h}^i : i = 1,...,J} is a set of overlapping spherical triangles which covers the
Fig. 15.1 Extended spherical triangle τ_{H,h}^i
sphere S (Figure 15.2). We will define a partition of unity {θ_i}_{i=1}^J, whose precise definition is given in Appendix B, satisfying supp(θ_i) ⊂ τ_{H,h}^i,
i = 1, . . . , J.
In the sequel, we denote T_i := supp(θ_i), i = 1,...,J, and by W_i the smallest rectangle which contains T_i and shares a common edge with T_i; see Figure 15.4.

We first define the partition of unity {θ_i}_{i=1}^J. Note here that the partition of unity is defined for the extended spherical triangles {τ_{H,h}^i : i = 1,...,J}, and it is also a partition of unity for the overlapping subdomains {Ω_i : i = 1,...,J}. Let x be a point on the sphere S. Then θ_i(x), for i = 1,...,J, is defined as follows:
• Case 1: x belongs to only one triangle τ_{H,h}^{i₀} for some i₀ ∈ {1,...,J} (e.g., x ∈ A in Figure 15.3). Then

  θ_{i₀}(x) := 1  and  θ_i(x) := 0  for all i ≠ i₀.
Fig. 15.2 Six overlapping extended spherical triangles (triangles with dotted edges: τ_H^i)
• Case 2: x belongs to exactly two extended triangles τ_{H,h}^{i₀} and τ_{H,h}^{i₁} (e.g., x ∈ B in Figure 15.3). Then

  θ_{i₀}(x) := δ⁻¹ dist(x, ∂τ_{H,h}^{i₀}),  θ_{i₁}(x) := δ⁻¹ dist(x, ∂τ_{H,h}^{i₁}),  θ_i(x) := 0 for i ∉ {i₀, i₁},

where ∂τ_{H,h}^i is the boundary of τ_{H,h}^i.
θi (x) := 0, i ∈ / {1, 2, 3, 4, 5, 6} ⎧ −1 ⎪ if x ∈ U3 ∪U7 ⎨δ dist(x, ∂ τ1 ), θ1 (x) = δ −2 dist(x, ∂ τ1 ∩ int(τ5 )) dist(x, ∂ τ1 ∩ int(τ3 )), if x ∈ U1 ∪U2 ∪U5 ⎪ ⎩ 0, if x ∈ U4 ∪U6 ,
15.3 Additive Schwarz Methods for Equations on the Sphere
385
⎧ ⎪ δ −2 dist(x, ∂ τ2 ∩ int(τ6 )) dist(x, ∂ τ2 ∩ int(τ4 )), ⎪ ⎪ ⎪ −2 ⎪ ⎪ ⎨δ dist(x, ∂ τ2 ) dist(x, ∂ τ1 ∩ int(τ5 )), θ2 (x) = δ −3 dist(x, ∂ τ2 ∩ int(τ6 ))× ⎪ ⎪ ⎪ dist(x, ∂ τ2 ∩ int(τ4 )) dist(x, ∂ τ1 ∩ int(τ5 )), ⎪ ⎪ ⎪ ⎩0,
if x ∈ U3 if x ∈ U2 ∪U4 if x ∈ U1 if x ∈ U5 ∪U6 ∪U7 ,
Here, int(τi ) denotes the interior of τi . The function θ4 (x) is defined similarly to θ1 (x), and θ3 (x), θ5 (x), θ6 (x) similarly to θ2 (x). The supports Ti of these functions θi can be one of the four shapes in Figure 15.4. The rectangles Wi mentioned in Lemma 15.6 are also depicted in Figure 15.4. τ1 τ6
U7
B
τ2
A U6
U2 U∗
U1
τ5
U5
U3 U4
τ3 τ4 Fig. 15.3 τi : equilateral triangles, i = 1, . . . , 6
We will frequently use the following results.

Lemma 15.5. Let 0 < α < min{β, β′}. If v ∈ H^{1/2}([0, β] × [0, β′]) then

  ∫₀^{β′} ( ∫_α^β |v(x, y)|²/x dx ) dy ≲ (1 + log²(β/α)) ‖v‖²_{H_w^{1/2}([0,β]×[0,β′])}  (15.24)

and
Fig. 15.4 Supports Ti and rectangles Wi
  (1/α) ‖v‖²_{L²([0,α]×[0,β′])} ≲ (1 + log²(β/α)) ‖v‖²_{H_w^{1/2}([0,β]×[0,β′])}.  (15.25)
Proof. Estimates (15.24) and (15.25) are proved in Lemma 10.16 for β = β′; the results still hold for β ≠ β′. □

Recall that for each subset Ω ⊂ S, the set R(Ω) is the image of Ω under the inverse of the radial projection R_Ω; see (15.15) and (15.16). It is shown in [159, Lemma 3.1] that

  ‖v‖_{L²(Ω)} ≃ ‖v̄₀‖_{L²(R(Ω))}  and  ‖v‖_{H¹(Ω)} ≃ ‖v̄₀‖_{H¹(R(Ω))}

for any v belonging to L²(Ω) and H¹(Ω), respectively. Using interpolation (see [147, Theorem B2]), we obtain

  ‖v‖_{H^{1/2}(Ω)} ≃ ‖v̄₀‖_{H^{1/2}(R(Ω))},  v ∈ H^{1/2}(Ω).  (15.26)

From the definitions of ‖·‖_{H_w^{1/2}(Ω)}, |·|_{H_w^{1/2}(Ω)}, and |·|_{H^{1/2}(Ω)}, we deduce by repeating the argument used in the proof of [159, Lemma 3.1] that for any v ∈ H^{1/2}(Ω)

  ‖v‖_{H_w^{1/2}(Ω)} ≃ ‖v̄₀‖_{H_w^{1/2}(R(Ω))},  |v|_{H_w^{1/2}(Ω)} ≃ |v̄₀|_{H_w^{1/2}(R(Ω))},  |v|_{H^{1/2}(Ω)} ≃ |v̄₀|_{H^{1/2}(R(Ω))},  (15.27)
where the constants are independent of the size of Ω.

Lemma 15.6. For any v ∈ H^{1/2}(S) there holds

  ∑_{i=1}^{J} ‖θ_i v‖²_{H_w^{1/2}(T_i)} ≲ ∑_{i=1}^{J} (1 + log²(H_i/δ)) ‖v‖²_{H_w^{1/2}(W_i)}  (15.28)
where δ is defined in (15.17).
Proof. We prove (15.28) for T_i being of the first shape in Figure 15.4; the cases when T_i is of the other shapes can be proved in the same manner. Equivalences (15.27) and (15.26) allow us to prove, instead of (15.28), the inequality

  ‖θ_i v̄₀‖²_{H_w^{1/2}(R(T_i))} ≲ (1 + log²(H_i/δ)) ‖v̄₀‖²_{H_w^{1/2}(R(W_i))}.

For notational convenience, in this proof we write T_i, W_i, θ_i, and v instead of R(T_i), R(W_i), (θ̄_i)₀, and v̄₀; namely, we think of T_i and W_i as planar regions, and of θ_i and v as functions of two variables. Here H_i and δ are the size of T_i and the size of the overlap, respectively. It is noted that W_i = [0, H_i] × [0, H_i′] where H_i′ = (√3/2)H_i. Recall that

  ‖θ_i v‖²_{H_w^{1/2}(T_i)} = |θ_i v|²_{H^{1/2}(T_i)} + ∫_{T_i} [θ_i v(x)]²/dist(x, ∂T_i) dx,  (15.29)

where

  |θ_i v|²_{H^{1/2}(T_i)} = ∫_{T_i} ∫_{T_i} |θ_i v(x) − θ_i v(x′)|²/|x − x′|³ dx dx′.  (15.30)

We first estimate the second term on the right-hand side of (15.29), which is split into a sum of integrals over the triangles T_i^ℓ := A_ℓ ∪ B_ℓ ∪ C_ℓ ∪ D_ℓ (see Figure 15.5), for ℓ = 1, 2, 3, as follows:

  ∫_{T_i} [θ_i v(x)]²/dist(x, ∂T_i) dx = ∑_{ℓ=1}^{3} ∫_{T_i^ℓ} [θ_i v(x)]²/dist(x, ∂T_i) dx.

We only need to estimate the integral over T_i^1; the other two can be bounded similarly. Recall that
Fig. 15.5 T_i and W_i
  θ_i(x) = 1 if x ∈ A₁;  δ⁻¹ dist(x, ∂T_i) if x ∈ B₁;  δ⁻² dist(x, ∂T_i) dist(x, ℓ₁) if x ∈ C₁;  δ⁻² dist(x, ∂T_i) dist(x, ℓ₂) if x ∈ D₁.

Since δ⁻¹ dist(x, ℓ_k) ≤ 1 for k = 1, 2 and x belonging to C₁ and D₁, respectively, there holds

  θ_i(x) ≤ δ⁻¹ dist(x, ∂T_i)  ∀x ∈ B₁ ∪ C₁ ∪ D₁.

Hence

  ∫_{T_i^1} [θ_i v(x)]²/dist(x, ∂T_i) dx
   = ∫_{A₁} [θ_i v(x)]²/dist(x, ∂T_i) dx + ∫_{B₁∪C₁∪D₁} [θ_i v(x)]²/dist(x, ∂T_i) dx
   ≲ ∫_{A₁} |v(x)|²/y dx + (1/δ) ∫_{B₁∪C₁∪D₁} |v(x)|² dx
   ≤ ∫₀^{H_i} ( ∫_δ^{H_i′} |v(x, y)|²/y dy ) dx + (1/δ) ∫₀^{H_i} ∫₀^{δ} |v(x, y)|² dy dx.

It follows from Lemma 15.5 and H_i ≃ H_i′ that

  ∫_{T_i^1} [θ_i v(x)]²/dist(x, ∂T_i) dx ≲ (1 + log²(H_i/δ)) ‖v‖²_{H_w^{1/2}(W_i)}.
We now need to use Lemma A.16 to estimate the double integral in (15.30). This motivates us to transform the integral over the triangle T_i into an integral over the rectangle W_i. To do so, we first introduce an extension θ̃_i of θ_i to the rectangle W_i as follows (see Figure 15.5):

  θ̃_i(x) := θ_i(x) if x ∈ T_i;  δ⁻¹ dist(x, ℓ_k) if x ∈ F_k, k = 1, 2;  δ⁻² dist(x, ℓ_k) dist(x, ℓ₃) if x ∈ E_k, k = 1, 2;  δ⁻² dist(x, ℓ₁) dist(x, ℓ₂) if x ∈ K_k, k = 1, 2;  1 if x ∈ G_k, k = 1, 2.

The extension θ̃_i is defined in order to preserve the continuity and symmetry of θ_i across the edges ℓ₁ and ℓ₂ of T_i. Since T_i ⊂ W_i and θ̃_i = θ_i on T_i, there holds

  ∫_{T_i} ∫_{T_i} |θ_i v(x) − θ_i v(x′)|²/|x − x′|³ dx dx′ ≤ ∫_{W_i} ∫_{W_i} |θ̃_i v(x) − θ̃_i v(x′)|²/|x − x′|³ dx dx′.

Noting (A.97), we have

  ∫_{W_i} ∫_{W_i} |θ̃_i v(x) − θ̃_i v(x′)|²/|x − x′|³ dx dx′
   ≲ ∫_I ∫_I ‖θ̃_i v(x,·) − θ̃_i v(x′,·)‖²_{L²(I′)}/|x − x′|² dx dx′
    + ∫_{I′} ∫_{I′} ‖θ̃_i v(·,y) − θ̃_i v(·,y′)‖²_{L²(I)}/|y − y′|² dy dy′
   =: I₁ + I₂,

where I = [0, H_i], I′ = [0, H_i′], x = (x, y) and x′ = (x′, y′). We will show that I₁ is bounded by (1 + log²(H_i/δ)) ‖v‖²_{H_w^{1/2}(W_i)}; the term I₂ can be estimated in the same manner. By using the triangle inequality, and noting that θ̃_i ≤ 1, we have

  I₁ ≲ ∫_I ∫_I ‖[θ̃_i(x,·) − θ̃_i(x′,·)] v(x,·)‖²_{L²(I′)}/|x − x′|² dx dx′ + ∫_I ∫_I ‖v(x,·) − v(x′,·)‖²_{L²(I′)}/|x − x′|² dx dx′.  (15.31)
It follows from Lemma A.16 that the second integral on the right-hand side of inequality (15.31) is bounded by ‖v‖²_{H_w^{1/2}(W_i)}. We still need to estimate

  A_{I,I}(θ̃_i v) := ∫_I ∫_I ‖[θ̃_i(x,·) − θ̃_i(x′,·)] v(x,·)‖²_{L²(I′)}/|x − x′|² dx dx′.

We denote I₁ := [0, δ√3], I₂ := [δ√3, H_i − δ√3], and I₃ := [H_i − δ√3, H_i]. In order to estimate A_{I,I}(θ̃_i v), we will prove that A_{I_k,I_ℓ}(θ̃_i v) is bounded by (1 + log²(H_i/δ)) ‖v‖²_{H_w^{1/2}(W_i)} for k, ℓ = 1, 2, 3. By the symmetry of θ̃_i, it is sufficient to prove the estimate for A_{I_k,I_ℓ}(θ̃_i v) for (k, ℓ) ∈ {(1, 1), (1, 2), (1, 3), (2, 2)}.

Let x, x′ ∈ I₁. Elementary (though tedious) calculation reveals

  max_{y∈I′} |θ̃_i(x, y) − θ̃_i(x′, y)| ≲ δ⁻¹ |x − x′|.

Thus

  A_{I₁,I₁}(θ̃_i v) ≲ (1/δ) ∫_{I₁} ‖v(x,·)‖²_{L²(I′)} dx = (1/δ) ‖v‖²_{L²(I₁×I′)}.

Noting that the size of I₁ is defined to be proportional to δ, and by using (15.25), we deduce

  A_{I₁,I₁}(θ̃_i v) ≲ (1 + log²(H_i/δ)) ‖v‖²_{H_w^{1/2}(W_i)}.

We next estimate A_{I₁,I₂∪I₃}(θ̃_i v) by estimating the two integrals A_{I₁,[δ√3, 2δ√3]}(θ̃_i v) and A_{I₁,[2δ√3, H_i]}(θ̃_i v). Repeating the argument used in estimating A_{I₁,I₁}(θ̃_i v), we obtain

  A_{I₁,[δ√3, 2δ√3]}(θ̃_i v) ≲ (1 + log²(H_i/δ)) ‖v‖²_{H_w^{1/2}(W_i)}.

We note here that

  max_{y∈I′} [θ̃_i(x, y) − θ̃_i(x′, y)]² ≤ 4  ∀x, x′ ∈ I,  (15.32)

and

  |x − 2δ√3| ≥ δ√3  ∀x ∈ I₁.  (15.33)

We then deduce from (15.32), (15.33) and (15.25) that
15.3 Additive Schwarz Methods for Equations on the Sphere
$$A_{I_1,[2\delta\sqrt3,\,H_i]}(\widetilde\theta_iv) \lesssim \int_{I_1}\|v(x,\cdot)\|_{L_2(I')}^2\int_{2\delta\sqrt3}^{H_i}\frac{dx'}{(x'-x)^2}\,dx \lesssim \int_{I_1}\frac{H_i-2\delta\sqrt3}{(2\delta\sqrt3-x)(H_i-x)}\,\|v(x,\cdot)\|_{L_2(I')}^2\,dx \lesssim \frac{1}{\delta}\|v\|_{L_2(I_1\times I')}^2 \lesssim \Bigl(1+\log\frac{H_i}{\delta}\Bigr)^2\|v\|_{H_w^{1/2}(W_i)}^2.$$
We note here that the estimation for $A_{I_1,I_1}(\widetilde\theta_iv)$ is based on the fact that the size of $I_1$ is proportional to $\delta$ and
$$|\widetilde\theta_i(x,y)-\widetilde\theta_i(x',y)| \lesssim \delta^{-1}|x-x'| \quad\forall x,x'\in I_1,\ \forall y\in I'. \qquad(15.34)$$
The estimate for $A_{I_1,[\delta\sqrt3,\,H_i]}(\widetilde\theta_iv)$ is then obtained by first splitting it into $A_{I_1,[\delta\sqrt3,\,2\delta\sqrt3]}(\widetilde\theta_iv)$ and $A_{I_1,[2\delta\sqrt3,\,H_i]}(\widetilde\theta_iv)$, in which the former is bounded by using a similar argument as in the proof for $A_{I_1,I_1}(\widetilde\theta_iv)$, requiring the sizes of $I_1$ and $[\delta\sqrt3,\,2\delta\sqrt3]$ to be proportional to $\delta$ and (15.34) to hold for $x\in I_1$, $x'\in[\delta\sqrt3,\,2\delta\sqrt3]$. The latter is estimated by first using (15.32) to write it in a form containing the integral
$$\int_{2\delta\sqrt3}^{H_i}\frac{dx'}{|x-x'|^2},$$
which is then proved to be bounded by $c\delta^{-1}$ for some constant $c>0$. This procedure can be used in estimating $A_{I_2,I_2}(\widetilde\theta_iv)$ by first writing
$$A_{I_2,I_2}(\widetilde\theta_iv) = \int_{I_2}\int_{I_2}\int_0^{H_i}\frac{[\widetilde\theta_i(x,y)-\widetilde\theta_i(x',y)]^2\,v^2(x,y)}{|x-x'|^2}\,dy\,dx\,dx'.$$
The integral is then split into a sum of integrals over subregions in which $(x,y)$ and $(x',y)$ can belong to one of the following sets: $G_1$, $F_1\cup H_1$, $B_3\cup C_3\cup D_2\cup H_2$, $H_1\cup C_3\cup D_2\cup B_2$, $H_2\cup F_2$, $G_2$, and $A_1\cup A_2\cup A_3$ in Figure 15.5. The integral when $(x,y)$ and $(x',y)$ belong to $A_1\cup A_2\cup A_3$ is easily bounded by $\|v\|_{H_w^{1/2}(W_i)}^2$ by using Lemma A.16, noting that $\widetilde\theta_i(x,y) = \widetilde\theta_i(x',y) = 1$. The other integrals can be estimated by using similar arguments as in the proof for $A_{I_1,I_\ell}(\widetilde\theta_iv)$. Combining all these estimates we obtain the desired result. □
15.3.2.3 Stability of decomposition: A general result for both odd and even degrees

We need to introduce an operator $P_H$ from $H^{1/2}(S)$ into $S_d^r(\Delta_H)$ defined by
$$a(P_Hu,v) = a(u,v) \quad\forall v\in S_d^r(\Delta_H)$$
for any $u\in H^{1/2}(S)$. Standard finite element arguments yield
$$\begin{aligned} \|P_Hu-u\|_{H_I^{1/2}(S)} &\lesssim \|u-v\|_{H_I^{1/2}(S)} \quad\forall v\in S_d^r(\Delta_H),\\ \|P_Hu-u\|_{H_I^{1/2}(S)} &\lesssim \|u\|_{H_I^{1/2}(S)},\\ \|P_Hu\|_{H_I^{1/2}(S)} &\lesssim \|u\|_{H_I^{1/2}(S)},\\ \|P_Hu-u\|_{L_2(S)} &\lesssim H^{1/2}\|u\|_{H_I^{1/2}(S)}. \end{aligned} \qquad(15.35)$$
Lemma 15.7 (Stability of decomposition). There exists a positive constant C depending on the smallest angle of $\Delta_h$ and the polynomial degree d such that for any $u\in V$ there exist $u_i\in V_i$, $i=0,\dots,J$, satisfying $u=\sum_{i=0}^{J}u_i$ and
$$\sum_{i=0}^{J}a(u_i,u_i) \lesssim \frac{H}{h}\,a(u,u).$$

Proof. Let $\widetilde u_0 := P_Hu$ and $u_0 := I_{\Delta_h}\widetilde u_0$. Let $\{\theta_i\}_{i=1}^{J}$ be a partition of unity defined on S satisfying $\operatorname{supp}(\theta_i) = \overline\Omega_i$, for $i=1,\dots,J$. We define $w := u-u_0$, and $u_i := I_{\Delta_h}(\theta_iw)$ for $i=1,\dots,J$. Then we can split $u\in V$ as
$$u = u_0 + I_{\Delta_h}(u-u_0) = u_0 + I_{\Delta_h}\Bigl(\sum_{i=1}^{J}\theta_iw\Bigr) = u_0 + u_1 + \cdots + u_J.$$
It is clear that $u_i\in V_i$ for all $i=0,\dots,J$. By Lemma 15.3, we have
$$\|I_{\Delta_h}(P_Hu-u)\|_{H_I^{1/2}(S)} \lesssim h^{-1/2}\|P_Hu-u\|_{L_2(S)}. \qquad(15.36)$$
By writing $u_0 = I_{\Delta_h}(P_Hu-u)+u$ and using the triangle inequality together with inequalities (15.36) and (15.35), we have
$$a(u_0,u_0) \lesssim \|u_0\|_{H_I^{1/2}(S)}^2 \lesssim \Bigl(\|I_{\Delta_h}(P_Hu-u)\|_{H_I^{1/2}(S)} + \|u\|_{H_I^{1/2}(S)}\Bigr)^2 \lesssim \Bigl(h^{-1/2}\|P_Hu-u\|_{L_2(S)} + \|u\|_{H_I^{1/2}(S)}\Bigr)^2 \lesssim \frac{H}{h}\,\|u\|_{H_I^{1/2}(S)}^2 \lesssim \frac{H}{h}\,a(u,u). \qquad(15.37)$$
Applying Lemma 15.3 and noting that $\theta_i$ vanishes outside $\Omega_i$, we obtain
$$a(u_i,u_i) \lesssim \|I_{\Delta_h}(\theta_iw)\|_{H_I^{1/2}(S)}^2 \lesssim h^{-1}\|\theta_iw\|_{L_2(S)}^2 = h^{-1}\|\theta_iw\|_{L_2(\Omega_i)}^2.$$
This together with the fact that $\|\theta_iw\|_{L_2(\Omega_i)} \le \|w\|_{L_2(\Omega_i)}$ implies $a(u_i,u_i) \lesssim h^{-1}\|w\|_{L_2(\Omega_i)}^2$. Summing up the above inequality over all subdomains, we obtain
$$\sum_{i=1}^{J}a(u_i,u_i) \lesssim h^{-1}\|w\|_{L_2(S)}^2. \qquad(15.38)$$
Noting that $w\in S_d^r(\Delta_h)$, so that $I_{\Delta_h}w = w$, and applying Lemma 15.3 and the last inequality in (15.35), we infer
$$\|w\|_{L_2(S)} = \|I_{\Delta_h}(u-I_{\Delta_h}P_Hu)\|_{L_2(S)} = \|I_{\Delta_h}(u-P_Hu)\|_{L_2(S)} \lesssim \|u-P_Hu\|_{L_2(S)} \lesssim H^{1/2}\|u\|_{H_I^{1/2}(S)}.$$
This together with (15.38) gives
$$\sum_{i=1}^{J}a(u_i,u_i) \lesssim \frac{H}{h}\,\|u\|_{H_I^{1/2}(S)}^2 \lesssim \frac{H}{h}\,a(u,u).$$
From this and (15.37) we infer
$$\sum_{i=0}^{J}a(u_i,u_i) \lesssim \frac{H}{h}\,a(u,u),$$
completing the proof. □
15.3.2.4 A better result on stability of the decomposition for even degrees

In this subsection we assume that the degree d of the spherical splines is even. We recall that this assumption on d allows us to invoke (15.12) in Lemma 15.3.

Lemma 15.8. Assume that the polynomial degree d is even. For any $u\in V$, there exist $u_i\in V_i$ for $i=0,\dots,J$ satisfying $u=\sum_{i=0}^{J}u_i$ and
$$\sum_{i=0}^{J}a(u_i,u_i) \lesssim \Bigl(1+\log\frac{H}{\delta}\Bigr)^2 a(u,u), \qquad(15.39)$$
where the constant depends only on the smallest angle of the triangulations.
Proof. Recall that the support $T_i$ of the partition of unity function $\theta_i$ satisfies $T_i\subset\Omega_i$ (see (15.23)). We now define a decomposition of $u\in V$ such that (15.39) holds. For any $u\in V$ let $u_0 := I_{\Delta_h}P_Hu\in V_0$ and $u_i = I_{\Delta_h}(\theta_iw)\in V_i$, $i=1,\dots,J$, where $w = u-u_0$. It is clear that $u=\sum_{i=0}^{J}u_i$. By using Lemma 15.3 and (15.35) we obtain
$$a(u_0,u_0) \lesssim \|u_0\|_{H_I^{1/2}(S)}^2 = \|I_{\Delta_h}P_Hu\|_{H_I^{1/2}(S)}^2 \lesssim \|u\|_{H_I^{1/2}(S)}^2 \lesssim a(u,u)$$
and
$$\|w\|_{H_I^{1/2}(S)} = \|I_{\Delta_h}(u-P_Hu)\|_{H_I^{1/2}(S)} \lesssim \|u-P_Hu\|_{H_I^{1/2}(S)} \lesssim \|u\|_{H_I^{1/2}(S)}, \qquad(15.40)$$
and
$$\|w\|_{L_2(S)} = \|I_{\Delta_h}(u-P_Hu)\|_{L_2(S)} \lesssim \|u-P_Hu\|_{L_2(S)} \lesssim H^{1/2}\|u\|_{H_I^{1/2}(S)}. \qquad(15.41)$$
Lemma A.13 gives
$$\|\theta_iw\|_{H_I^{1/2}(S)} \lesssim \|\theta_iw\|_{H_w^{1/2}(T_i)}. \qquad(15.42)$$
By using successively Lemma 15.3, (15.42), (15.28) and the definition of the $\|\cdot\|_{H_w^{1/2}(T_i)}$-norm, we obtain
$$\begin{aligned} \sum_{i=1}^{J}a(u_i,u_i) &\lesssim \sum_{i=1}^{J}\|u_i\|_{H_w^{1/2}(S)}^2 \lesssim \sum_{i=1}^{J}\|\theta_iw\|_{H_w^{1/2}(S)}^2 \lesssim \sum_{i=1}^{J}\|\theta_iw\|_{H_w^{1/2}(T_i)}^2\\ &\lesssim \sum_{i=1}^{J}\Bigl(1+\log\frac{H_i}{\delta}\Bigr)^2\|w\|_{H_w^{1/2}(W_i)}^2\\ &\lesssim \Bigl(1+\log\frac{H}{\delta}\Bigr)^2\sum_{i=1}^{J}\Bigl(\frac{1}{H}\|w\|_{L_2(W_i)}^2 + |w|_{H^{1/2}(W_i)}^2\Bigr). \end{aligned} \qquad(15.43)$$
It is obvious that $\sum_{i=1}^{J}\|w\|_{L_2(W_i)}^2 \lesssim \|w\|_{L_2(S)}^2$ and, by the definition of the seminorm $|\cdot|_{H^{1/2}(W_i)}$, it is clear that
$$\sum_{i=1}^{J}|w|_{H^{1/2}(W_i)}^2 \lesssim |w|_{H^{1/2}(S)}^2.$$
This together with (15.43), (15.41) and (15.40) implies
$$\sum_{i=1}^{J}a(u_i,u_i) \lesssim \Bigl(1+\log\frac{H}{\delta}\Bigr)^2\Bigl(\frac{1}{H}\|w\|_{L_2(S)}^2 + |w|_{H^{1/2}(S)}^2\Bigr) \lesssim \Bigl(1+\log\frac{H}{\delta}\Bigr)^2\|u\|_{H^{1/2}(S)}^2 \lesssim \Bigl(1+\log\frac{H}{\delta}\Bigr)^2 a(u,u),$$
completing the proof of the lemma. □
15.3.2.5 Bounds for the condition number of the preconditioned system

Lemma 15.9 (Coercivity of the decomposition). There exists a positive constant C depending only on the smallest angle of the triangulation such that for any $v\in V$ satisfying $v=\sum_{i=0}^{J}v_i$ with $v_i\in V_i$ the following estimate holds:
$$a(v,v) \le C\sum_{i=0}^{J}a(v_i,v_i).$$
Proof. By the assumption on the colouring of the subdomains, there are at most M subdomains to which any $x\in S$ can belong. By a standard colouring argument we have
$$a(v,v) \lesssim \|v\|_{H_I^{1/2}(S)}^2 \lesssim \|v_0\|_{H_I^{1/2}(S)}^2 + \Bigl\|\sum_{i=1}^{J}v_i\Bigr\|_{H_I^{1/2}(S)}^2 \le \|v_0\|_{H_I^{1/2}(S)}^2 + M\sum_{i=1}^{J}\|v_i\|_{H_I^{1/2}(S)}^2 \lesssim \sum_{i=0}^{J}a(v_i,v_i).$$
The lemma is proved. □
Combining the results of Lemmas 15.7, 15.8, and 15.9 we obtain the following theorem.

Theorem 15.3. The condition number of the additive Schwarz operator P is bounded by
$$\kappa(P) \lesssim \frac{H}{h}.$$
In particular, if the degree d is even then
$$\kappa(P) \lesssim 1+\log^2\frac{H}{\delta},$$
where the constants depend only on the smallest angle in $\Delta_h$ and the polynomial degree d.
15.4 Numerical Results

The numerical results reported in this section are reprinted from [174] with copyright permission. In [174], we solved (15.19) with
$$f(x) = f(x,y,z) = e^x(2x-xz)$$
by using spherical spline spaces $S_d^0(\Delta_h)$, in which $\Delta_h$ are spherical triangulations of the following types:
(i) Type 1: Uniform triangulations, generated as follows. The initial (spherical) triangulation has vertices (1,0,0), (0,1,0), (0,0,1), (−1,0,0), (0,−1,0), (0,0,−1). Finer meshes are obtained by dividing each triangle of the previous triangulation into four equal equilateral triangles (by connecting the midpoints of the three edges), resulting in triangulations with 18, 66, 258, 1026, and 4098 vertices.
(ii) Type 2: Triangulations with vertices obtained from MAGSAT satellite data. The free STRIPACK package is used to generate the triangulations from these vertices. The numbers of vertices used are 204, 414, 836, and 1635.
Details of how to compute the stiffness matrix A and the right-hand side vector b of the matrix system arising from the Galerkin boundary element approximation can be found in [174, 179].
                d = 1, ω² = 0.01    d = 2, ω² = 0.01     d = 3, ω² = 0.1
N      h        κ(A)      α         κ(A)       α         κ(A)          α
18     0.7071   12.27     --        469.36     --        19109.90      --
66     0.3536   20.90    -0.77      736.53    -0.65      25735.72     -0.42
258    0.1768   39.22    -0.91     1422.77    -0.95      50877.82     -0.98
1026   0.0883   75.93    -0.95     2798.35    -0.97     100876.63     -0.99

Table 15.1 Unpreconditioned systems with uniform triangulations; κ(A) = O(h^α)
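The exponents α in Table 15.1 can be recovered from consecutive rows, since κ(A) = O(h^α) gives α ≈ log(κ₂/κ₁)/log(h₂/h₁). A quick check of the d = 1 column (a verification sketch, not part of the original computation):

```python
import numpy as np

# h and kappa(A) for d = 1, omega^2 = 0.01 (values from Table 15.1)
h = np.array([0.7071, 0.3536, 0.1768, 0.0883])
kappa = np.array([12.27, 20.90, 39.22, 75.93])

# estimated order of growth between consecutive refinement levels
alpha = np.log(kappa[1:] / kappa[:-1]) / np.log(h[1:] / h[:-1])
print(np.round(alpha, 2))
```

The computed values reproduce the tabulated −0.77, −0.91, −0.95, consistent with the expected O(h⁻¹) growth of the condition number for an operator of order −1.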
DoF    h        κ(A)     H        κ(P)
66     0.3536   20.90    0.7071   5.37
258    0.1768   39.22    0.3536   6.05
                         0.7071   6.45
1026   0.0883   75.93    0.1768   6.46
                         0.3536   6.68
                         0.7071   7.45
4098   0.0442   149.40   0.0883   6.71
                         0.1768   6.80
                         0.3536   7.90
                         0.7071   9.05

Table 15.2 Condition numbers when d = 1, ω² = 0.01 with uniform triangulations
DoF    h        κ(A)      H        κ(P)
258    0.3536   736.53    0.7071   7.24
1026   0.1768   1422.77   0.3536   6.92
                          0.7071   7.06
4098   0.0883   2798.35   0.1768   7.22
                          0.3536   6.90
                          0.7071   6.59

Table 15.3 Condition numbers when d = 2, ω² = 0.01 with uniform triangulations
DoF    h        κ(A)        H        κ(P)
578    0.3536   25735.72    0.7071   412.96
2306   0.1768   50877.82    0.3536   13.49
                            0.7071   24.62
9218   0.0883   100876.63   0.1768   7.94
                            0.3536   8.62
                            0.7071   8.13

Table 15.4 Condition numbers when d = 3, ω² = 0.1 with uniform triangulations
DoF    h       κ(A)     H       κ(P)
204    0.511   390.4    1.307   19.5
                        1.670   19.2
                        2.292   26.7
414    0.370   509.9    1.307   17.2
                        1.670   17.0
                        2.292   27.1
836    0.280   785.2    1.307   13.8
                        1.670   14.9
                        2.292   28.3
1635   0.184   1048.5   1.307   11.1
                        1.670   16.3
                        2.292   31.5

Table 15.5 Condition numbers when d = 1, ω² = 0.001 with MAGSAT satellite data
DoF    h       κ(A)     H       κ(P)
810    0.511   1170.9   1.307   23.2
                        1.670   11.7
                        2.292    5.7
1650   0.370   1616.9   1.307   17.7
                        1.670    9.0
                        2.292   10.0
3338   0.280   2230.5   1.307   13.2
                        1.670   11.1
                        2.292    8.1
6534   0.184   3046.7   1.307   13.1
                        1.670    9.0
                        2.292    6.3

Table 15.6 Condition numbers when d = 2, ω² = 0.01 with MAGSAT satellite data

DoF    h       κ(A)     H       κ(P)
1820   0.511   3322.0   1.307    23.9
                        1.670    10.9
                        2.292   116.6
3710   0.370   4696.2   1.307    16.5
                        1.670     7.7
                        2.292   186.9
7508   0.280   6211.0   1.307    12.1
                        1.670    12.0
                        2.292   167.6

Table 15.7 Condition numbers when d = 3, ω² = 1.2 with MAGSAT satellite data
Chapter 16
Pseudo-differential Equations with Radial Basis Functions
In this chapter we consider again pseudo-differential equations on the sphere. However, this chapter departs from the main theme of the other chapters in that boundary elements are not the tool used to solve the equations. Instead, we use radial basis functions (RBFs), which results in a meshless method. The benefit of RBFs lies in the fact that these functions allow us to deal with scattered data (obtained from satellites), which are usually unstructured. It is known that the RBF method converges fast, but it is also known that it yields very ill-conditioned matrix systems. Therefore, preconditioners are particularly important. We present in this chapter the analysis of additive Schwarz preconditioners with RBFs, which is entirely different from the analysis usually seen in the case of boundary element methods. We will consider the same problem and notation as introduced in Section 15.1.2; i.e., we will solve the following problem: Let L be an elliptic pseudo-differential operator of order $2\alpha$. Given, for some $\sigma\ge0$, $g\in H^{\sigma-\alpha}$ satisfying $\widehat g_{\ell,m}=0$ for all $\ell\in\mathscr K(L)$, $m=1,\dots,N(n,\ell)$, find $u\in H^{\sigma+\alpha}$ satisfying
$$Lu = g, \qquad \langle\mu_i,u\rangle = \gamma_i,\quad i=1,\dots,M, \qquad(16.1)$$
where $\gamma_i\in\mathbb R$ and $\mu_i\in H^{-\sigma-\alpha}$ are given. It should be noted that the inclusion of $\sigma\ge0$ in the assumption on g is to allow for the consideration of the collocation method. For the Galerkin method, it suffices to take $\sigma=0$. Existence and uniqueness of (16.1) have been discussed in Section 15.1.2. We define a bilinear form $a(\cdot,\cdot): H^{\alpha+s}\times H^{\alpha-s}\to\mathbb R$, for any $s\in\mathbb R$, by
$$a(w,v) := \langle Lw,v\rangle \quad\text{for all } w\in H^{\alpha+s},\ v\in H^{\alpha-s}.$$
In particular, when $s=\sigma$ we have (see Section 15.1.2)
$$a(u_1,v) = \langle g,v\rangle \quad\text{for all } v\in H^{\alpha-\sigma}. \qquad(16.2)$$
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. E. P. Stephan and T. Tran, Schwarz Methods and Multilevel Preconditioners for Boundary Element Methods, https://doi.org/10.1007/978-3-030-79283-1_16
16.1 Radial Basis Functions

In this section we introduce the definition of RBFs and present their basic properties. The finite-dimensional subspaces to be used in the approximation will be defined from spherical radial basis functions, which in turn are defined from kernels.

16.1.1 Positive-definite kernels

A continuous function $\Theta: S^{n-1}\times S^{n-1}\to\mathbb R$ is called a positive-definite kernel on $S^{n-1}$ if it satisfies
(i) $\Theta(x,y) = \Theta(y,x)$ for all $x,y\in S^{n-1}$,
(ii) for any positive integer N and any set of distinct points $\{y_1,\dots,y_N\}$ on $S^{n-1}$, the $N\times N$ matrix B with entries $B_{i,j} = \Theta(y_i,y_j)$ is positive-semidefinite.
If the matrix B is positive-definite then Θ is called a strictly positive-definite kernel; see [189, 242]. We characterise the kernel Θ by a shape function θ as follows. Let $\theta: [-1,1]\to\mathbb R$ be a univariate function having a series expansion in terms of Legendre polynomials,
$$\theta(t) = \sum_{\ell=0}^{\infty} \omega_{n-1}^{-1} N(n,\ell)\,\widehat\theta(\ell)\,P_\ell(n;t),$$
where $\omega_{n-1}$ is the surface area of the sphere $S^{n-1}$, and $\widehat\theta(\ell)$ is the Fourier–Legendre coefficient,
$$\widehat\theta(\ell) = \omega_{n-1}\int_{-1}^{1}\theta(t)\,P_\ell(n;t)\,(1-t^2)^{(n-3)/2}\,dt.$$
Here, $P_\ell(n;t)$ denotes the degree-ℓ normalised Legendre polynomial in n variables, so that $P_\ell(n;1)=1$, as described in [155]. Using this shape function θ, we define
$$\Theta(x,y) := \theta(x\cdot y) \quad\text{for all } x,y\in S^{n-1}, \qquad(16.3)$$
where $x\cdot y$ denotes the dot product between x and y. By using the well-known addition formula for spherical harmonics [155],
$$\sum_{m=1}^{N(n,\ell)} Y_{\ell,m}(x)\,Y_{\ell,m}(y) = \omega_{n-1}^{-1}\,N(n,\ell)\,P_\ell(n;x\cdot y) \quad\text{for all } x,y\in S^{n-1}, \qquad(16.4)$$
we can write
$$\Theta(x,y) = \sum_{\ell=0}^{\infty}\sum_{m=1}^{N(n,\ell)} \widehat\theta(\ell)\,Y_{\ell,m}(x)\,Y_{\ell,m}(y). \qquad(16.5)$$
Remark 16.1. In [59], a complete characterisation of strictly positive-definite kernels is established: the kernel Θ is strictly positive-definite if and only if $\widehat\theta(\ell)\ge0$ for all $\ell\ge0$, and $\widehat\theta(\ell)>0$ for infinitely many even values of ℓ and infinitely many odd values of ℓ; see also [189] and [242]. In the remainder of this section, we shall define specific shape functions φ and ψ, and corresponding kernels Φ and Ψ, which will be used to define the approximation subspaces. The notations θ and Θ are reserved for future general reference.
16.1.2 Spherical radial basis functions

We choose a shape function φ satisfying, for some $\tau\in\mathbb R$,
$$\widehat\varphi(\ell) \simeq (\ell+1)^{-2\tau} \quad\text{for all }\ell\ge0. \qquad(16.6)$$
A specific choice of φ will be clarified in Section 16.4. We also define from φ another shape function $\psi: [-1,1]\to\mathbb R$ by
$$\psi(t) = \sum_{\ell=0}^{\infty} \omega_{n-1}^{-1}\,N(n,\ell)\,\widehat L(\ell)\,\widehat\varphi(\ell)\,P_\ell(n;t).$$
Since $\widehat\psi(\ell) = \widehat L(\ell)\widehat\varphi(\ell)$, it follows from (15.7) and (16.6) that
$$|\widehat\psi(\ell)| \simeq (\ell+1)^{-2(\tau-\alpha)} \quad\text{for all }\ell\notin\mathscr K(L). \qquad(16.7)$$
The corresponding kernels Φ and Ψ are defined by (16.3), i.e.,
$$\Phi(x,y) = \varphi(x\cdot y) \quad\text{and}\quad \Psi(x,y) = \psi(x\cdot y) \quad\text{for all } x,y\in S^{n-1}.$$
We will define spherical radial basis functions by using the kernels Φ and Ψ. These functions are basis functions for the test and trial spaces in the Galerkin scheme for (16.2). Let $X=\{x_1,\dots,x_N\}$ be a set of data points on the sphere. Two important parameters characterising the set X are the mesh norm $h_X$ and the separation radius $q_X$, defined by
$$h_X := \sup_{y\in S^{n-1}}\min_{1\le j\le N}\cos^{-1}(x_j\cdot y) \quad\text{and}\quad q_X := \frac12\min_{i\ne j}\cos^{-1}(x_i\cdot x_j). \qquad(16.8)$$
The spherical radial basis functions $\Phi_j$ and $\Psi_j$, $j=1,\dots,N$, associated with X and the kernels Φ and Ψ (respectively) are defined by (see (16.5))
$$\Phi_j(x) := \Phi(x,x_j) = \sum_{\ell=0}^{\infty}\sum_{m=1}^{N(n,\ell)} \widehat\varphi(\ell)\,Y_{\ell,m}(x_j)\,Y_{\ell,m}(x),$$
$$\Psi_j(x) := \Psi(x,x_j) = \sum_{\ell=0}^{\infty}\sum_{m=1}^{N(n,\ell)} \widehat L(\ell)\,\widehat\varphi(\ell)\,Y_{\ell,m}(x_j)\,Y_{\ell,m}(x).$$
We note that for $j=1,\dots,N$ there hold $\Psi_j = L\Phi_j$ and
$$(\widehat{\Phi_j})_{\ell,m} = \widehat\varphi(\ell)\,Y_{\ell,m}(x_j) \quad\text{and}\quad (\widehat{\Psi_j})_{\ell,m} = \widehat L(\ell)\,\widehat\varphi(\ell)\,Y_{\ell,m}(x_j),$$
for $m=1,\dots,N(n,\ell)$ and $\ell=0,1,2,\dots$. It follows from (16.6) that, for any $s\in\mathbb R$,
$$\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(n,\ell)}(\ell+1)^{2s}\bigl|(\widehat{\Phi_j})_{\ell,m}\bigr|^2 \lesssim \sum_{\ell=0}^{\infty}\sum_{m=1}^{N(n,\ell)}(\ell+1)^{2(s-2\tau)}\,|Y_{\ell,m}(x_j)|^2.$$
By using (16.4) and noting that $P_\ell(n;x_j\cdot x_j) = P_\ell(n;1) = 1$, we obtain, recalling that $N(n,\ell) = \mathcal O(\ell^{n-2})$,
$$\sum_{\ell=0}^{\infty}\sum_{m=1}^{N(n,\ell)}(\ell+1)^{2s}\bigl|(\widehat{\Phi_j})_{\ell,m}\bigr|^2 \lesssim \sum_{\ell=0}^{\infty}(\ell+1)^{2(s-2\tau)+n-2}.$$
The latter series converges if and only if $s<2\tau+(1-n)/2$. Hence,
$$\Phi_j\in H^s \iff s<2\tau+\frac{1-n}{2}. \qquad(16.9)$$
Similarly, it follows from (16.7) that
$$\Psi_j\in H^s \iff s<2(\tau-\alpha)+\frac{1-n}{2}.$$
Using $\Phi_i$ and $\Psi_i$, we are now able to define the finite-dimensional subspaces for our approximation. For our approximation schemes we consider a family of sets of data points $X_k=\{x_1,\dots,x_{N_k}\}$ for $k\in\mathbb N$, where $N_k<N_{k+1}$, so that $X_k\subset X_{k+1}$ and the corresponding mesh norm $h_k$ (see (16.8)) satisfies $h_k\to0$ as $k\to\infty$. The corresponding finite-dimensional spaces are defined by
$$V_k^\varphi := \operatorname{span}\{\Phi_1,\dots,\Phi_{N_k}\} \quad\text{and}\quad V_k^\psi := \operatorname{span}\{\Psi_1,\dots,\Psi_{N_k}\}.$$
We note that $V_k^\varphi\subset V_{k+1}^\varphi$ and $V_k^\psi\subset V_{k+1}^\psi$ for all $k\in\mathbb N$. The following approximation result is proved in [178, Proposition 3.3]; see also [176, Proposition 3.2 and Remark 3.3].

Proposition 16.1. Assume that (16.6) holds for some $\tau>(n-1)/2$. Let $s^*,t^*\in\mathbb R$ satisfy $t^*\le s^*\le 2\tau$ and $t^*\le\tau$. There exists a positive constant C such that, for any $v\in H^{s^*}$ and $k>0$, there exists $\eta_k\in V_k^\varphi$ satisfying
$$\|v-\eta_k\|_{t^*} \le Ch_k^{s^*-t^*}\|v\|_{s^*}.$$

16.1.3 Native space and reproducing kernel property

In the analysis for the collocation method we will use the native space $\mathscr N_\varphi$ associated with φ, defined by
$$\mathscr N_\varphi := \Bigl\{v\in\mathscr D'(S^{n-1}) : \|v\|_\varphi^2 = \sum_{\ell=0}^{\infty}\sum_{m=1}^{N(n,\ell)}\frac{|\widehat v_{\ell,m}|^2}{\widehat\varphi(\ell)} < \infty\Bigr\}.$$
We equip this space with the inner product and norm
$$\langle v,w\rangle_\varphi = \sum_{\ell=0}^{\infty}\sum_{m=1}^{N(n,\ell)}\frac{\widehat v_{\ell,m}\,\widehat w_{\ell,m}}{\widehat\varphi(\ell)} \quad\text{and}\quad \|v\|_\varphi = \langle v,v\rangle_\varphi^{1/2}.$$

16.2 Solving Pseudo-differential Equations by Radial Basis Functions

The shape function φ is chosen so that $V^\varphi\subset H^\alpha$. We find $\widetilde u_1\in V^\varphi$ by solving the Galerkin equation
$$a(\widetilde u_1,v) = \langle g,v\rangle \quad\text{for all } v\in V^\varphi. \qquad(16.11)$$
Existence of a unique solution $\widetilde u_1$ follows from the positive definiteness of the kernel; see [178, Lemma 5.1]. The following error estimate is also proved in the same article.

Theorem 16.1. Assume that the shape function φ is chosen to satisfy (16.6), (16.10) and $\tau\ge\alpha$, $\tau>(n-1)/2$. Assume further that $u\in H^s$ for some s satisfying $\alpha\le s\le 2\tau$. If $\mu_i\in H^{-t}$ for $i=1,\dots,M$ with $t\in\mathbb R$ satisfying $2(\alpha-\tau)\le t\le\alpha$, then for $h_X$ sufficiently small there holds
$$\|u-\widetilde u\|_t \le Ch_X^{s-t}\|u\|_s.$$
The constant C is independent of u and $h_X$.
16.3 Additive Schwarz Methods

By writing $\widetilde u_1 = \sum_{i=1}^{N}c_i\Phi_i$ we derive from (16.11) the matrix equation $Ac = g$, where
$$A_{ij} = a(\Phi_i,\Phi_j) = \sum_{\ell=0}^{\infty}\sum_{m=1}^{N(n,\ell)} \widehat L(\ell)\,[\widehat\varphi(\ell)]^2\,Y_{\ell,m}(x_i)\,Y_{\ell,m}(x_j). \qquad(16.12)$$
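On $S^2$ (n = 3) the series (16.12) can be evaluated without computing any spherical harmonics: by the addition formula (16.4), $A_{ij}=\sum_\ell \widehat L(\ell)[\widehat\varphi(\ell)]^2\frac{2\ell+1}{4\pi}P_\ell(x_i\cdot x_j)$. The sketch below truncates the series at a finite degree; the coefficient functions `phi_hat` and `L_hat` are illustrative assumptions, not the specific choices made in the text:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def assemble_A(X, phi_hat, L_hat, lmax=60):
    """Truncated Galerkin matrix on S^2 via the addition formula:
    A_ij ~ sum_{l<=lmax} Lhat(l) * phihat(l)^2 * (2l+1)/(4 pi) * P_l(x_i . x_j)."""
    ell = np.arange(lmax + 1)
    coeff = L_hat(ell) * phi_hat(ell) ** 2 * (2 * ell + 1) / (4 * np.pi)
    T = np.clip(X @ X.T, -1.0, 1.0)   # matrix of dot products x_i . x_j
    return legval(T, coeff)           # entrywise Legendre-series evaluation

# illustrative coefficients: phihat(l) ~ (l+1)^(-2 tau), Lhat(l) = (l+1)^(2 alpha)
phi_hat = lambda l: (l + 1.0) ** (-4.0)   # tau = 2 (assumption)
L_hat = lambda l: (l + 1.0)               # alpha = 1/2 (assumption)
```

Since all truncated coefficients are positive, the resulting matrix is symmetric positive definite for distinct points, which is what makes the Cholesky-based computations later in this section possible.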
Here $c = (c_1,\dots,c_N)^T$ and $g = (\langle g,\Phi_1\rangle,\dots,\langle g,\Phi_N\rangle)^T$. It is well known [157] that this matrix is ill-conditioned; see also the numerical results in Section 16.4. In this section we present how an overlapping Schwarz preconditioner is designed. We also carry out the analysis, which, even though it still follows the general framework of Theorem 2.1, requires a completely original approach due to the non-local structure of radial basis functions. In the remainder of this chapter, we return to our standard notations of Chapter 2; namely, we will write V for $V^\varphi$. Moreover, we will use a norm for Sobolev spaces on the sphere S which facilitates the analysis. This norm requires the introduction of the following concepts. A spherical cap of radius α centered at $p\in S$ is defined by
$$C(p,\alpha) := \{x\in S : \theta(p,x)<\alpha\},$$
where $\theta(p,x) = \cos^{-1}(p\cdot x)$ is the geodesic distance between two points $x,p\in S$. Let $\widehat n$ and $\widehat s$ denote the north and south poles of S, respectively. Then a simple cover for the sphere is provided by
$$U_1 = C(\widehat n,\theta_0) \quad\text{and}\quad U_2 = C(\widehat s,\theta_0), \quad\text{where }\theta_0\in(\pi/2,\,2\pi/3).$$
The stereographic projection $\sigma_{\widehat n}$ of the punctured sphere $S\setminus\{\widehat n\}$ onto $\mathbb R^n$ is defined as the mapping that sends $x\in S\setminus\{\widehat n\}$ to the intersection of the equatorial hyperplane $\{z=0\}$ with the extended line that passes through x and $\widehat n$. The stereographic projection $\sigma_{\widehat s}$ based on $\widehat s$ is defined analogously. We set
$$\psi_1 = \frac{1}{\tan(\theta_0/2)}\,\sigma_{\widehat s}\big|_{U_1} \quad\text{and}\quad \psi_2 = \frac{1}{\tan(\theta_0/2)}\,\sigma_{\widehat n}\big|_{U_2}, \qquad(16.13)$$
so that $\psi_k$, $k=1,2$, maps $U_k$ onto $B(0,1)$, the unit ball in $\mathbb R^n$. We conclude that $\mathscr A = \{U_k,\psi_k\}_{k=1}^{2}$ is a $C^\infty$ atlas of covering coordinate charts for the sphere. It is known [180] that the stereographic coordinate charts $\{\psi_k\}_{k=1}^{2}$ as defined in (16.13) map spherical caps to Euclidean balls, although in general concentric spherical caps are not mapped to concentric Euclidean balls. The projection $\psi_k$, $k=1,2$, does not distort the geodesic distance between two points $x,y\in S$ too much, as shown in [136]. With the atlas so defined, we define the map $\pi_k$, which takes a real-valued function g with compact support in $U_k$ into a real-valued function on $\mathbb R^n$, by
$$\pi_k(g)(x) = \begin{cases} g\circ\psi_k^{-1}(x) & \text{if } x\in B(0,1),\\ 0 & \text{otherwise}. \end{cases}$$
Let $\{\chi_k: S\to\mathbb R\}_{k=1}^{2}$ be a partition of unity subordinate to the atlas, i.e., a pair of non-negative infinitely differentiable functions $\chi_k$ on S with compact support in $U_k$, such that $\sum_k\chi_k = 1$. For any function $f: S\to\mathbb R$, we can use the partition of unity to write
$$f = \sum_{k=1}^{2}(\chi_k f), \quad\text{where } (\chi_k f)(p) = \chi_k(p)f(p),\quad p\in S.$$
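A minimal sketch of the scaled charts (16.13) on $S^2$, with hypothetical names; the key property that the boundary circle of $C(\widehat n,\theta_0)$ lands on the unit circle follows from $\sin\theta_0/(1+\cos\theta_0)=\tan(\theta_0/2)$:

```python
import numpy as np

def psi(x, chart, theta0):
    """Scaled stereographic chart (16.13) on S^2: chart 1 projects
    U_1 = C(north, theta0) from the south pole, chart 2 projects
    U_2 = C(south, theta0) from the north pole; the factor 1/tan(theta0/2)
    maps the cap onto the unit disc."""
    x = np.asarray(x, dtype=float)
    if chart == 1:                       # psi_1 = sigma_s / tan(theta0/2) on U_1
        p = x[:2] / (1.0 + x[2])
    else:                                # psi_2 = sigma_n / tan(theta0/2) on U_2
        p = x[:2] / (1.0 - x[2])
    return p / np.tan(theta0 / 2.0)
```

The north pole is sent to the origin by ψ₁, and a point at geodesic distance θ₀ from it is sent to a point of Euclidean norm 1.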
The Sobolev space $H^s$ is defined to be the set
$$\{f\in L_2(S) : \pi_k(\chi_k f)\in H^s(\mathbb R^n)\ \text{for } k=1,2\},$$
which is equipped with the norm
$$\|f\|_{H^s(S)} = \Bigl(\sum_{k=1}^{2}\|\pi_k(\chi_k f)\|_{H^s(\mathbb R^n)}^2\Bigr)^{1/2}. \qquad(16.14)$$
This $H^s(S)$-norm is equivalent to the $H^s$ norm given in Appendix A; see [138].
16.3.1 Subspace decomposition

We assume that we are given a finite set $X=\{x_1,\dots,x_N\}$ of points on the sphere. To define a subspace decomposition of V, we first decompose the data set X in the form $X = X_0\cup\cdots\cup X_J$ as follows.

(1) Select $\alpha\in(0,\pi/3)$ and $\beta\in[\alpha,\pi]$.
(2) Choose the first centre $p_1 = x_1\in X$.
(3) Define $X_1 := \{x\in X : \cos^{-1}(x\cdot p_1)\le\alpha\}$.
(4) Suppose $X_{j-1}$, $j>1$, has been defined. Then $p_j$ is chosen from $X\setminus\{p_1,\dots,p_{j-1}\}$ such that $\cos^{-1}(p_{j-1}\cdot p_j)\ge\beta$ and $\cos^{-1}(p_l\cdot p_j)\ge\alpha$ for $l=1,\dots,j-2$.
(5) Define $X_j := \{x\in X : \cos^{-1}(x\cdot p_j)\le\alpha\}$.
(6) Repeat (4) and (5) until every point in X is in at least one $X_j$.
(7) Define $X_0 := \{p_1,\dots,p_J\}$.

The second condition in Step (4) ensures that the geodesic distance between centres is no less than α, guaranteeing that the algorithm terminates. We note that the subsets $X_j$ overlap. The subspaces $V_j$ can now be defined by
$$V_j = \operatorname{span}\{\Phi_k : x_k\in X_j\},\quad j=0,\dots,J,$$
and the following subspace decomposition is considered:
$$V = V_0 + \cdots + V_J.$$
Assume that the support of $\Phi(p,\cdot)$, which is a spherical cap centered at p, has radius γ. (In the case when the spherical basis functions are unscaled Wendland radial basis functions [234], we have $\gamma=\pi/3$.) Then functions in $V_j$ have supports in $\Gamma_j$, where
$$\Gamma_j := C(p_j,\alpha+\gamma),\quad j=1,\dots,J.$$
As before, we assume that:

Assumption 16.1. We can partition the index set $\{1,\dots,J\}$ into M (with $1\le M\le J$) sets $\mathscr J_m$, for $1\le m\le M$, such that if $i,j\in\mathscr J_m$ and $i\ne j$ then $\Gamma_i\cap\Gamma_j = \emptyset$.
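Steps (1)-(7) above can be sketched as a greedy loop. The function below is a simplified illustration: it scans candidates in index order and raises an error when no admissible centre exists, whereas the text handles this situation by adjusting β (see Section 16.4.1):

```python
import numpy as np

def decompose(X, alpha, beta):
    """Greedy overlapping decomposition of unit rows of X following Steps (1)-(7):
    each new centre is at geodesic distance >= beta from the previous centre and
    >= alpha from all earlier ones; X_j collects the points within alpha of p_j."""
    geo = lambda A, b: np.arccos(np.clip(A @ b, -1.0, 1.0))
    centres = [0]                                      # p_1 = x_1
    subsets = [np.where(geo(X, X[0]) <= alpha)[0]]     # X_1
    covered = set(subsets[0].tolist())
    while len(covered) < X.shape[0]:
        for i in range(X.shape[0]):
            if i not in centres \
                    and geo(X[i], X[centres[-1]]) >= beta \
                    and all(geo(X[i], X[c]) >= alpha for c in centres[:-1]):
                centres.append(i)
                subsets.append(np.where(geo(X, X[i]) <= alpha)[0])
                covered.update(subsets[-1].tolist())
                break
        else:
            raise RuntimeError("no admissible centre; decrease beta")
    return centres, subsets    # X_0 consists of the points X[centres]
```

Because the centres are pairwise at geodesic distance at least α, only finitely many of them fit on the sphere, which is exactly the termination argument given in the text.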
16.3.2 Coercivity of the decomposition

The proof of coercivity of the decomposition is similar to previous proofs for different settings of problems and methods, except that we have to use the projected norm (16.14).

Lemma 16.1 (Coercivity of decomposition). Under Assumption 16.1, there exists a positive constant c independent of the set X such that for every $v\in V$ satisfying $v=\sum_{j=0}^{J}v_j$ with $v_j\in V_j$ for $j=0,\dots,J$ there holds
$$a(v,v) \le cM\sum_{j=0}^{J}a(v_j,v_j).$$

Proof. Using the inequality $|a+b|^2\le2(|a|^2+|b|^2)$, we have
$$\|v\|_{H^\sigma(S)}^2 \le 2\Bigl(\|v_0\|_{H^\sigma(S)}^2 + \Bigl\|\sum_{j=1}^{J}v_j\Bigr\|_{H^\sigma(S)}^2\Bigr).$$
Recalling the definition of the Sobolev norm (16.14), we have
$$\Bigl\|\sum_{j=1}^{J}v_j\Bigr\|_{H^\sigma(S)}^2 = \Bigl\|\sum_{j=1}^{J}\pi_1(\chi_1 v_j)\Bigr\|_{H^\sigma(\mathbb R^2)}^2 + \Bigl\|\sum_{j=1}^{J}\pi_2(\chi_2 v_j)\Bigr\|_{H^\sigma(\mathbb R^2)}^2.$$
Now, from the fact that $v_j\in V_j$ together with Assumption 16.1, we can partition the index set $\{1,\dots,J\}$ into M sets of indices $\mathscr J_m$ so that if $i,j\in\mathscr J_m$, $i\ne j$, then $\operatorname{supp}v_i\cap\operatorname{supp}v_j=\emptyset$. In this proof only, let $g_j = \pi_1(\chi_1 v_j)$ with $\Omega_j := \operatorname{supp}g_j$. By using successively the triangle inequality and the Cauchy–Schwarz inequality, we have
$$\Bigl\|\sum_{j=1}^{J}g_j\Bigr\|_{H^\sigma(\mathbb R^2)}^2 = \Bigl\|\sum_{m=1}^{M}\sum_{j\in\mathscr J_m}g_j\Bigr\|_{H^\sigma(\mathbb R^2)}^2 \le \Bigl(\sum_{m=1}^{M}\Bigl\|\sum_{j\in\mathscr J_m}g_j\Bigr\|_{H^\sigma(\mathbb R^2)}\Bigr)^2 \le M\sum_{m=1}^{M}\Bigl\|\sum_{j\in\mathscr J_m}g_j\Bigr\|_{H^\sigma(\mathbb R^2)}^2.$$
With the partition $\{\Omega_j : j\in\mathscr J_m\}$ of Ω, where $\Omega = \operatorname{supp}\sum_{j\in\mathscr J_m}g_j$, we have, due to Theorem A.11 in Appendix A,
$$\Bigl\|\sum_{j\in\mathscr J_m}g_j\Bigr\|_{H^\sigma(\mathbb R^2)}^2 = \Bigl\|\sum_{j\in\mathscr J_m}g_j\Bigr\|_{\widetilde H^\sigma(\Omega)}^2 \le \sum_{j\in\mathscr J_m}\|g_j\|_{\widetilde H^\sigma(\Omega_j)}^2 = \sum_{j\in\mathscr J_m}\|g_j\|_{H^\sigma(\mathbb R^2)}^2.$$
Thus,
$$\Bigl\|\sum_{j=1}^{J}g_j\Bigr\|_{H^\sigma(\mathbb R^2)}^2 \le M\sum_{m=1}^{M}\sum_{j\in\mathscr J_m}\|g_j\|_{H^\sigma(\mathbb R^2)}^2 = M\sum_{j=1}^{J}\|g_j\|_{H^\sigma(\mathbb R^2)}^2.$$
Hence, by using similar arguments for $\pi_2(\chi_2 v_j)$, we conclude
$$\Bigl\|\sum_{j=1}^{J}v_j\Bigr\|_{H^\sigma(S)}^2 \le cM\Bigl(\sum_{j=1}^{J}\|\pi_1(\chi_1 v_j)\|_{H^\sigma(\mathbb R^2)}^2 + \sum_{j=1}^{J}\|\pi_2(\chi_2 v_j)\|_{H^\sigma(\mathbb R^2)}^2\Bigr) = cM\sum_{j=1}^{J}\|v_j\|_{H^\sigma(S)}^2.$$
Therefore,
$$\|v\|_{H^\sigma(S)}^2 = \Bigl\|\sum_{j=0}^{J}v_j\Bigr\|_{H^\sigma(S)}^2 \le cM\sum_{j=0}^{J}\|v_j\|_{H^\sigma(S)}^2.$$
Using the fact that $a(v,v)\simeq\|v\|_{H^\sigma(S)}^2$, we obtain the result. □
16.3.3 Stability of the decomposition and bounds for the minimum eigenvalue of P

As we have mentioned, to prove stability in this case we have to employ a different approach from that used with boundary element methods, since radial basis functions are not local to the subdomains. We need the following concepts. Firstly, we recall the definition of the angle between closed subspaces of a general Hilbert space.

Definition 16.1. Let $\mathbb V$ be a Hilbert space with inner product and norm denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Assume that $\mathbb U_1$ and $\mathbb U_2$ are two closed subspaces of $\mathbb V$. The angle α between $\mathbb U_1$ and $\mathbb U_2$ is the angle in $[0,\pi/2]$ whose cosine is given by
$$\cos\alpha = \sup\{\langle v,w\rangle : v\in\mathbb U_1\cap\mathbb U^\perp,\ w\in\mathbb U_2\cap\mathbb U^\perp,\ \|v\|\le1,\ \|w\|\le1\},$$
where $\mathbb U = \mathbb U_1\cap\mathbb U_2$, and $\mathbb U^\perp$ is its orthogonal complement, namely
$$\mathbb U^\perp := \{f\in\mathbb V : \langle f,v\rangle = 0\ \forall v\in\mathbb U\}.$$
It follows from the definition of the orthogonal complement that
$$(\mathbb U_1+\mathbb U_2)^\perp = \{f\in\mathbb V : \langle f,v\rangle = 0\ \forall v\in\mathbb U_1+\mathbb U_2\} = \{f\in\mathbb V : \langle f,v_1\rangle = 0 = \langle f,v_2\rangle\ \forall v_1\in\mathbb U_1,\ v_2\in\mathbb U_2\} = \mathbb U_1^\perp\cap\mathbb U_2^\perp. \qquad(16.15)$$
The following theorem [195, Theorem 2.2] is crucial in our estimate of the minimum eigenvalue of P.

Theorem 16.2. Let $\mathbb V_1,\dots,\mathbb V_J$ be closed subspaces of a Hilbert space $\mathbb V$, and $\mathbb W_i := \cap_{j=i}^{J}\mathbb V_j$, $i=1,\dots,J$. If $Q_i:\mathbb V\to\mathbb V_i$ is the orthogonal projection onto $\mathbb V_i$, $i=1,\dots,J$, and $Q:\mathbb V\to\mathbb W_1$ is the orthogonal projection onto $\mathbb W_1$, then
$$\|\widetilde Q^{\,l}f - Qf\| \le c^{\,l}\|f-Qf\| \quad\forall f\in\mathbb V,\ l=1,2,\dots,$$
where $\widetilde Q := Q_J\cdots Q_1$ and
$$c^2 = 1-\prod_{i=1}^{J-1}\sin^2\alpha_i,$$
with $\alpha_i$ being the angle between $\mathbb V_i$ and $\mathbb W_{i+1}$.

We shall apply Theorem 16.2 with $\mathbb V$ being V, which is equipped with the inner product $a(\cdot,\cdot)$ and induced norm $\|\cdot\|_a$, and $\mathbb V_j$ being $V_j^\perp$, $j=1,\dots,J$. If T is a linear operator on V, we denote by $\|T\|_a$ the norm of T defined by $\|\cdot\|_a$, i.e.,
$$\|T\|_a = \sup_{v\in V,\ \|v\|_a\le1}\|Tv\|_a.$$
Proposition 16.2. Let $\widetilde Q := Q_J\cdots Q_1$, where $Q_i$ is the orthogonal projection from V onto $V_i^\perp$, and let $W_i := \cap_{j=i}^{J}V_j^\perp$, $i=1,\dots,J$. Then
$$\|\widetilde Q\|_a \le \Bigl(1-\prod_{i=1}^{J-1}\sin^2\alpha_i\Bigr)^{1/2} < 1,$$
where $\alpha_i$ is the angle between $V_i^\perp$ and $W_{i+1}$.

Proof. First we note that since $X_0\subset X_1\cup\cdots\cup X_J$, there holds $V = V_1+\cdots+V_J$. It follows from (16.15) that
$$W_1^\perp = (V_1+\cdots+V_J)^{\perp\perp} = V^{\perp\perp} = V.$$
Hence $W_1=\{0\}$, which implies that the orthogonal projection Q from V onto $W_1$ is identically zero. Theorem 16.2 then yields
$$\|\widetilde Q\|_a \le \Bigl(1-\prod_{i=1}^{J-1}\sin^2\alpha_i\Bigr)^{1/2}.$$
It remains to show that $\alpha_i\ne0$ for all $i=1,\dots,J-1$. Suppose that $\alpha_i=0$ for some $i\in\{1,\dots,J-1\}$. Then, noting that $V_i^\perp\cap W_{i+1} = W_i$, we obtain from Definition 16.1
$$\sup\{a(v,w) : v\in V_i^\perp\cap W_i^\perp,\ w\in W_{i+1}\cap W_i^\perp,\ \|v\|_a\le1,\ \|w\|_a\le1\} = 1.$$
The spaces being finite dimensional, by compactness there exist $v\in V_i^\perp\cap W_i^\perp$ and $w\in W_{i+1}\cap W_i^\perp$ satisfying
$$\|v\|_a = \|w\|_a = 1 \quad\text{and}\quad a(v,w) = 1.$$
The condition for equality to occur in the Cauchy–Schwarz inequality implies $v=w$. Thus $v\in V_i^\perp\cap W_{i+1} = W_i$. On the other hand $v\in W_i^\perp$, which implies $v=0$. This contradicts the fact that $\|v\|_a=1$, proving the proposition. □

Lemma 16.2. For any $v\in V$ there exist $v_j\in V_j$, $j=0,\dots,J$, satisfying $v=\sum_{j=0}^{J}v_j$ and
$$\sum_{j=0}^{J}a(v_j,v_j) \le \Bigl(1+\frac{J}{(1-\|\widetilde Q\|_a)^2}\Bigr)a(v,v),$$
where $\widetilde Q$ is defined in Proposition 16.2.
Proof. It follows from Proposition 16.2 that $I-\widetilde Q$ is invertible and satisfies
$$\|(I-\widetilde Q)^{-1}\|_a \le \frac{1}{1-\|\widetilde Q\|_a},$$
where I is the identity operator on V. We define, for any $v\in V$,
$$v_0 = P_0v,\qquad w = v-v_0,\qquad v_1 = P_1(I-\widetilde Q)^{-1}w,\qquad v_j = P_jQ_{j-1}\cdots Q_1(I-\widetilde Q)^{-1}w,\quad j=2,\dots,J,$$
where $P_j = I-Q_j$ is the orthogonal projection from V onto $V_j$, $j=0,\dots,J$, defined in Subsection 2.3.2.2. It is easy to check that $\sum_{j=1}^{J}v_j$ (being a telescoping sum) equals w, and therefore $\sum_{j=0}^{J}v_j = v$. A crude estimate yields, for $j=1,\dots,J$,
$$a(v_j,v_j) \le \|(I-\widetilde Q)^{-1}\|_a^2\,a(v,v) \le \frac{1}{(1-\|\widetilde Q\|_a)^2}\,a(v,v),$$
resulting in
$$\sum_{j=0}^{J}a(v_j,v_j) = a(v_0,v_0) + \sum_{j=1}^{J}a(v_j,v_j) \le \Bigl(1+\frac{J}{(1-\|\widetilde Q\|_a)^2}\Bigr)a(v,v),$$
proving the lemma. □

The above lemma, with the help of the general theory in Chapter 2, yields the following estimate for the minimum eigenvalue of P:
$$\lambda_{\min}(P) \ge \Bigl(1+\frac{J}{(1-\|\widetilde Q\|_a)^2}\Bigr)^{-1}. \qquad(16.16)$$
This estimate is by no means sharp. We present in Table 16.1 three computed values:
$$\lambda_{\min}(P)^{-1},\qquad 1+\sum_{j=1}^{J}C_j^2,\qquad\text{and}\qquad 1+\frac{J}{(1-\|\widetilde Q\|_a)^2},$$
where for the middle term we explicitly compute $C_j$ as the norm of the operator defining $v_j$, namely,
$$C_1 = \|P_1(I-\widetilde Q)^{-1}\|_a \quad\text{and}\quad C_j = \|P_jQ_{j-1}\cdots Q_1(I-\widetilde Q)^{-1}\|_a,\quad j=2,\dots,J.$$
It is clear from Table 16.1 that $(1+\sum_{j=1}^{J}C_j^2)^{-1}$ is a better approximation to $\lambda_{\min}(P)$. Our experiments show that the projection $P_j$ in the definition of $v_j$ plays a key role in reducing the norm of $(I-\widetilde Q)^{-1}$, but we cannot account for this fact.

In Table 16.1 the norms of the operators were computed by using their matrix representations. For example, we computed $\|\widetilde Q\|_a$ as follows. Recalling the definition of the positive definite matrix A (see (16.12)) and using the Cholesky factorisation $A = L^TL$, we obtain, for any $u=\sum_{i=1}^{N}c_i\Phi_i\in V_X$,
$$\|u\|_a^2 = c^TAc = \|Lc\|_2^2,$$
where $c = (c_1\cdots c_N)^T$. On the other hand, by writing
$$\widetilde Q\Phi_i = \sum_{k=1}^{N}d_{i,k}\Phi_k,$$
one can easily see that
$$\|\widetilde Qu\|_a^2 = c^T\widetilde Q^TA\widetilde Qc = c^T\widetilde Q^TL^TL\widetilde Qc = \|L\widetilde Qc\|_2^2,$$
where (by abuse of notation) $\widetilde Q$ is the matrix representation of the operator $\widetilde Q$, with ith column $(d_{i,1}\cdots d_{i,N})^T$. Therefore,
$$\|\widetilde Q\|_a = \sup_{u\in V}\frac{\|\widetilde Qu\|_a}{\|u\|_a} = \sup_{c\in\mathbb R^N}\frac{\|L\widetilde Qc\|_2}{\|Lc\|_2} = \sup_{c\in\mathbb R^N}\frac{\|L\widetilde QL^{-1}c\|_2}{\|c\|_2} = \|L\widetilde QL^{-1}\|_2.$$
Table 16.1 Upper bounds for λmin(P)⁻¹

N      qX      cos α   cos β    J    λmin(P)⁻¹   1 + ΣCj²   1 + J/(1−‖Q̃‖a)²
1344   π/80    0.90    -0.90    44     120.41    1612.23      98914208.40
               0.80    -0.93    23     285.80     461.67       4003258.71
               0.70    -0.86    17      17.44     135.84        625605.77
               0.60    -0.87    13      13.03      33.11          7481.56
               0.50    -0.83    10       3.58      12.43           219.18
2133   π/100   0.90    -0.85    44      34.97     391.90       1684693.43
               0.80    -0.89    24      20.16      69.22         57266.95
               0.70    -0.89    17       6.56      24.09           712.68
               0.60    -0.85    13       8.18      26.44          4745.12
               0.50    -0.83    10       4.48      13.88           266.90
3458   π/140   0.90    -0.88    46       4.92      68.75          3662.59
               0.80    -0.80    22       5.31      41.58          2649.72
               0.70    -0.81    16       1.82      17.77           101.96
               0.60    -0.85    13       1.02      13.31            47.03
               0.50    -0.83    10       1.70      11.67            58.05
4108   π/160   0.90    -0.88    46       2.17      66.21          4023.85
               0.80    -0.81    24       6.41      36.54          1043.01
               0.70    -0.87    17       4.57      22.21           755.70
               0.60    -0.86    14       1.26      15.31            86.07
               0.50    -0.80    10       4.13      16.85          1388.24
7663   π/200   0.80    -0.89    23       1.37      27.47           154.84
               0.70    -0.88    17       1.68      19.32           130.77
               0.60    -0.81    11       5.54      17.35           609.86
               0.50    -0.84    10       3.27      12.78           170.82
16.3.4 Bounds for the condition number Lemma 16.1 and Lemma 16.2 yield an upper bound for the condition number of the Schwarz operator. Theorem 16.3. The condition number of the additive Schwarz operator P is bounded by J , κ (P) ≤ cM 1 + a )2 (1 − Q is defined as in where c is a constant independent of M, J and the set X, and Q Proposition 16.2.
16.4 Numerical Results

16.4.1 An algorithm

Suppose we number the scattered data following the satellite track as $\{1, \ldots, N\}$. Let $\alpha$ and $\beta$ be parameters satisfying $0 < \alpha < \pi/3$ and $\alpha \le \beta \le \pi$. The algorithm to partition $X$ can be described as follows.

(1) The first center is $p_1 = x_1$.
(2) Assume that the centers $p_1, \ldots, p_l$ have been chosen.
(3) Define $X_k$ as in Section 16.3.1, Item 5. If $X = \cup_{k=1}^l X_k$, then stop the algorithm and set $J = l$. Otherwise, choose the new center $p_{l+1}$ as a point in $X$ satisfying
\[
\theta(p_{l+1}, p_l) \ge \beta \quad \text{and} \quad \theta(p_{l+1}, p_k) \ge \alpha, \quad k = 1, \ldots, l-1.
\]
(4) Set $l = l + 1$ and repeat Step (3).

The coarse set $X_0$ is then defined by $X_0 := \{p_1, \ldots, p_J\}$. The parameter $\beta$ is included to help the condition $X = \cup_{k=1}^l X_k$ in Step (3) be met. For each value of $\alpha$ we chose an appropriate value of $\beta$ by starting with the value $\beta = \pi$ and decreasing it until the above-mentioned condition holds.

For $j = 0, \ldots, J$, let $I_j$ be a subset of the index set $\{1, \ldots, N\}$ such that $m \in I_j \iff x_m \in X_j$. The cardinality of the set $I_j$ is denoted by $s_j$ and the $m$-th element of the set $I_j$ is denoted by $I_j(m)$. For a given vector $v = (v_1, \ldots, v_N)^T$, the restriction of $v$ to $X_j$ is the vector $u = (u_1, u_2, \ldots, u_{s_j})^T$ defined by
\[
u_m := v_{I_j(m)}, \quad m = 1, \ldots, s_j,
\]
and we write $u = R_j(v)$; thus the restriction operation to $X_j$ is denoted by $R_j$. Conversely, for a vector $u = (u_1, \ldots, u_{s_j})^T$, we extend $u$ to $v = (v_1, \ldots, v_N)^T$ by
\[
v_k := \begin{cases} u_m & \text{if } k = I_j(m) \text{ for some } 1 \le m \le s_j, \\ 0 & \text{otherwise,} \end{cases}
\]
and write $v = E_j(u)$, where $E_j$ denotes the extension operation.

A pseudo-code.

INPUT: the scattered set $X$ on the sphere, the right-hand side $f$, and the desired accuracy $\varepsilon$.

SETUP
(1) Partition the scattered set $X$ into $J$ overlapping subsets $X_j$, $j = 1, \ldots, J$, and construct the global coarse set $X_0$.
(2) Set the initial residual vector $r = f$, where $f$ is the vector defined from the right-hand side of (16.11).
(3) Set the initial pseudo-residual vector $\tilde{p} = 0$.
(4) Set the initial solution vector $s = 0$.
(5) Set the iteration counter iter $= 0$.

ITERATIVE SOLUTION
(1) while $\|r\| > \varepsilon \|f\|$
(2)   Set $\tilde{p} = 0$.
(3)   for $j = 1$ to $J$
(4)     Construct the local matrix $C$ with entries $C_{m,n} = A_{I_j(m), I_j(n)}$.
(5)     Set the restricted residual vector $z = R_j(r)$.
(6)     Solve the linear system $Cy = z$.
(7)     Update the pseudo-residual vector $\tilde{p} = \tilde{p} + E_j(y)$.
(8)   end for
(9)   Construct the coarse global matrix $G$ with entries $G_{m,n} = A_{I_0(m), I_0(n)}$.
(10)  Set $z_g = R_0(r)$.
(11)  Solve the linear system $G y_g = z_g$.
(12)  Update the pseudo-residual vector $\tilde{p} = \tilde{p} + E_0(y_g)$.
(13)  If iter $> 0$ then set $\zeta_0 = \zeta_1$.
(14)  Set $\zeta_1 = \tilde{p} \cdot r$.
(15)  Update the counter: iter $=$ iter $+ 1$.
(16)  If iter $= 1$ then set $p = \tilde{p}$, else set $p = \tilde{p} + (\zeta_1/\zeta_0)\, p$.
(17)  Compute $\displaystyle \alpha = \frac{r \cdot p}{p \cdot Ap}$ and update the residual vector $r = r - \alpha\, Ap$.
(18)  Update the solution vector $s = s + \alpha\, p$.
(19) end while
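The iterative solution above is a preconditioned conjugate gradient loop whose preconditioner is one sweep of the additive Schwarz operator: local solves on the overlapping blocks plus a coarse solve, summed via the extension operators. The following NumPy sketch is a hypothetical dense-matrix rendering of this pseudo-code (the function name and the use of pre-computed Cholesky factors for the local solves are our own choices, not from the book); the index sets $I_j$ are plain integer arrays.

```python
import numpy as np

def additive_schwarz_pcg(A, f, subsets, coarse, eps=1e-10, max_iter=500):
    """PCG following the pseudo-code: subsets = [I_1, ..., I_J], coarse = I_0."""
    N = len(f)
    r = f.copy()                      # initial residual (solution s = 0)
    s = np.zeros(N)
    p = np.zeros(N)                   # search direction
    zeta0 = zeta1 = 0.0
    blocks = list(subsets) + [coarse]
    # Factorise the local and coarse matrices C_j = A[I_j, I_j] once.
    factors = [np.linalg.cholesky(A[np.ix_(I, I)]) for I in blocks]
    for it in range(max_iter):
        if np.linalg.norm(r) <= eps * np.linalg.norm(f):
            return s, it
        # Pseudo-residual: sum of extended local solves (steps (2)-(12)).
        ptilde = np.zeros(N)
        for I, L in zip(blocks, factors):
            y = np.linalg.solve(L.T, np.linalg.solve(L, r[I]))  # C_j y = R_j(r)
            ptilde[I] += y                                      # += E_j(y)
        zeta0, zeta1 = zeta1, ptilde @ r                        # steps (13)-(14)
        p = ptilde if it == 0 else ptilde + (zeta1 / zeta0) * p # step (16)
        Ap = A @ p
        alpha = (r @ p) / (p @ Ap)                              # step (17)
        r = r - alpha * Ap
        s = s + alpha * p                                       # step (18)
    return s, max_iter
```

With overlapping subsets whose union covers all indices, the preconditioner is symmetric positive definite and the loop converges to the direct solution of $As = f$.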
16.4.2 Numerical results

In this section we reprint, with copyright permission, the numerical results obtained in [226]. The experiments are based on globally scattered data extracted from a very large data set collected by the NASA satellite MAGSAT. The scattered data sets $X = \{x_1, x_2, \ldots, x_N\}$ are extracted so that the separation radius $q_X$, defined by
\[
q_X := \frac12 \min_{i \ne j} \cos^{-1}(x_i \cdot x_j),
\]
satisfies $q_X = \pi/240$, $\pi/280$, and $\pi/320$. The corresponding sets have cardinality $N = 10443$, $13897$, and $17262$.
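The separation radius is straightforward to evaluate from the Gram matrix of the unit vectors. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def separation_radius(X):
    """q_X = (1/2) min_{i != j} arccos(x_i . x_j) for an N x 3 array of unit vectors."""
    G = np.clip(X @ X.T, -1.0, 1.0)   # pairwise dot products
    np.fill_diagonal(G, -1.0)         # exclude i == j from the maximum
    # smallest pairwise angle corresponds to the largest off-diagonal dot product
    return 0.5 * np.arccos(G.max())
```

For example, two points separated by a right angle give $q_X = \pi/4$.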
Fig. 16.1 Global scattered MAGSAT satellite data
We solved the Dirichlet and Neumann problems in the unbounded domain $B := \{x \in \mathbb{R}^3 : \|x\| > 1\}$, namely,
\[
\Delta U = 0 \text{ in } B, \quad U = U_D \text{ on } S,
\]
and
\[
\Delta U = 0 \text{ in } B, \quad \partial_\nu U = U_N \text{ on } S,
\]
with a vanishing condition at infinity for both problems:
\[
U(x) = O(1/\|x\|) \quad \text{as } \|x\| \to \infty.
\]
Here $\nu$ is the outer normal vector to $S$, and $U_D$ and $U_N$ are chosen such that the exact solution to both problems is
\[
U(x) = \frac{1}{\|x - p\|}, \quad \text{where } p = (0, 0, p) \text{ with } p \in (0, 1). \tag{16.17}
\]
Thus, for $x = (x_1, x_2, x_3) \in S$, we chose
\[
U_D(x) = \frac{1}{(1 + p^2 - 2px_3)^{1/2}}
\quad \text{and} \quad
U_N(x) = \frac{px_3 - 1}{(1 + p^2 - 2px_3)^{3/2}}.
\]
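The formulas for $U_D$ and $U_N$ can be checked numerically: on the unit sphere $\|x - p\|^2 = 1 + p^2 - 2px_3$, and the normal derivative $\partial_\nu U = x \cdot \nabla U$ can be approximated by a central difference along the radial direction. A small verification sketch (the chosen test point and step size are our own):

```python
import numpy as np

pval = 0.5
P = np.array([0.0, 0.0, pval])
U = lambda y: 1.0 / np.linalg.norm(y - P)       # exact solution 1/|x - p|

x = np.array([0.3, 0.4, np.sqrt(0.75)])         # a unit vector on S
x3 = x[2]
UD = 1.0 / np.sqrt(1 + pval**2 - 2 * pval * x3)
UN = (pval * x3 - 1) / (1 + pval**2 - 2 * pval * x3)**1.5

# U_D is the trace of U on S.
assert np.isclose(U(x), UD)

# U_N is the radial (outer normal) derivative: central difference along nu = x.
h = 1e-6
dU = (U(x * (1 + h)) - U(x * (1 - h))) / (2 * h)
assert np.isclose(dU, UN, atol=1e-4)
```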
It is well known that the Dirichlet problem can be reformulated into a weakly-singular integral equation on $S$, which can then be rewritten [225] as a pseudo-differential equation $Lu = f$ with
\[
\widehat{L}(\ell) = \frac{1}{2\ell + 1}
\]
and
\[
f(x) = -\frac12 U_D(x) + \frac{1}{4\pi} \int_S \frac{\partial}{\partial\nu(y)} \frac{1}{\|x - y\|}\, U_D(y)\, d\sigma(y)
= \frac12 \sum_{\ell=1}^\infty \sum_{m=-\ell}^{\ell} \Bigl(-1 - \frac{1}{2\ell+1}\Bigr) (\widehat{U_D})_{\ell m} Y_{\ell m}(x)
= -\sum_{\ell=1}^\infty \sum_{m=-\ell}^{\ell} \frac{\ell + 1}{2\ell + 1} (\widehat{U_D})_{\ell m} Y_{\ell m}(x).
\]
In this case $L$ is a pseudo-differential operator of order $-1$ and hence $\sigma = -1/2$.
Similarly, the Neumann problem can be reformulated into a hypersingular integral equation on $S$, which can then be rewritten [225] as a pseudo-differential equation $Lu = f$ with
\[
\widehat{L}(\ell) = \frac{\ell(\ell + 1)}{2\ell + 1}
\]
and
\[
f(x) = \frac12 U_N(x) + \frac{1}{4\pi} \frac{\partial}{\partial\nu(x)} \int_S \frac{U_N(y)}{\|x - y\|}\, d\sigma(y)
= \frac12 \sum_{\ell=1}^\infty \sum_{m=-\ell}^{\ell} \Bigl(1 - \frac{1}{2\ell+1}\Bigr) (\widehat{U_N})_{\ell m} Y_{\ell m}(x)
= \sum_{\ell=1}^\infty \sum_{m=-\ell}^{\ell} \frac{\ell}{2\ell + 1} (\widehat{U_N})_{\ell m} Y_{\ell m}(x).
\]
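The reductions of the spectral coefficients above are elementary algebra: $-\tfrac12\bigl(1 + \tfrac{1}{2\ell+1}\bigr) = -\tfrac{\ell+1}{2\ell+1}$ for the Dirichlet case and $\tfrac12\bigl(1 - \tfrac{1}{2\ell+1}\bigr) = \tfrac{\ell}{2\ell+1}$ for the Neumann case. A quick exact-arithmetic check (function names are ours, purely illustrative):

```python
from fractions import Fraction

def dirichlet_coeff(l):
    # -(1/2)(1 + 1/(2l+1)), the middle expression in the Dirichlet formula
    return -Fraction(1, 2) * (1 + Fraction(1, 2 * l + 1))

def neumann_coeff(l):
    # (1/2)(1 - 1/(2l+1)), the middle expression in the Neumann formula
    return Fraction(1, 2) * (1 - Fraction(1, 2 * l + 1))

for l in range(1, 100):
    assert dirichlet_coeff(l) == Fraction(-(l + 1), 2 * l + 1)
    assert neumann_coeff(l) == Fraction(l, 2 * l + 1)
```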
In this case $L$ is a pseudo-differential operator of order $1$ and hence $\sigma = 1/2$. The univariate function $\phi$ defining the kernel $\Phi$, see (16.5), is given by $\phi(t) = \rho_m(\sqrt{2 - 2t})$, where the $\rho_m$ are Wendland's functions [234, page 128]. It is proved in [158, Proposition 4.6] that (16.6) holds with $\tau = m + 3/2$. Table 16.2 details the functions $\rho_m$ used in our experiments and the corresponding values of $\tau$.

m   ρ_m(r)              τ
0   (1 − r)²₊           1.5
1   (1 − r)⁴₊ (4r + 1)  2.5
Table 16.2 Wendland's RBFs

The spherical radial basis functions $\Phi_i$, $i = 1, \ldots, N$, are computed by
\[
\Phi_i(x) = \rho_m\bigl(\sqrt{2 - 2\, x \cdot x_i}\bigr), \quad x \in S.
\]
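The spherical basis functions of Table 16.2 are cheap to evaluate; a short NumPy sketch (function names are ours):

```python
import numpy as np

def wendland(m, r):
    """Wendland's functions rho_m from Table 16.2 (m = 0 or 1)."""
    r = np.asarray(r, dtype=float)
    plus = np.maximum(1.0 - r, 0.0)          # truncated power (1 - r)_+
    return plus**2 if m == 0 else plus**4 * (4.0 * r + 1.0)

def sbf(m, x, xi):
    """Spherical RBF Phi_i(x) = rho_m(sqrt(2 - 2 x . xi)) for unit vectors x, xi."""
    return wendland(m, np.sqrt(np.maximum(2.0 - 2.0 * np.dot(x, xi), 0.0)))
```

Note $\sqrt{2 - 2\,x \cdot x_i}$ is the Euclidean (chordal) distance between the unit vectors $x$ and $x_i$, so $\Phi_i$ is supported on a spherical cap around $x_i$ and $\Phi_i(x_i) = \rho_m(0) = 1$.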
Tables 16.3–16.6 show the $\ell^2$-condition number $\kappa(A)$, the CPU time in seconds, and the number of iterations ITER for two different values of $p$, see (16.17), namely $p = 0.5$ and $p = 0.95$. The numbers suggest that when the point $p$ is deeply buried in the sphere ($p = 0.5$), the unpreconditioned method can solve the problems with $m = 0$ but not with $m = 1$, though the solutions are not reliable due to the large condition numbers of the stiffness matrix $A$. The results are even worse when the point $p$ is closer to the surface ($p = 0.95$); the unpreconditioned method can hardly solve the problems. We then tested the overlapping additive Schwarz method as a preconditioner for the conjugate gradient method, with different values of $\alpha$ and $\beta$, see Section 16.3.1, and hence different values of $J$, the number of subproblems. The numbers in Tables 16.7–16.10 show the efficiency of the preconditioner. (In these tables $\kappa(P)$ is the condition number of the preconditioned systems.) The numbers suggest that the
m  N      λ_min       λ_max    κ(A)       CPU    ITER
0  10443  2.729E-10   227.937  8.353E+11  168.7   259
0  13897  1.182E-10   303.418  2.568E+12   98.2    85
0  17262  5.359E-11   377.013  7.035E+12  227.7   128
1  10443  -3.131E-14  167.478  n/a        133.2   209
1  13897  -1.866E-13  222.963  n/a        143.3   124
Table 16.3 Unpreconditioned CG for the Dirichlet problem with p = 0.5
m  N      λ_min       λ_max    κ(A)       CPU     ITER
0  10443  2.729E-10   227.937  8.353E+11  >23000  >41000
0  13897  1.182E-10   303.418  2.568E+12  >40000  >33000
0  17262  5.359E-11   377.013  7.035E+12  >73000  >48000
1  10443  -3.131E-14  167.478  n/a        >23000  >41000
1  13897  -1.866E-13  222.963  n/a        >65000  >55000
Table 16.4 Unpreconditioned systems for the Dirichlet problem with p = 0.95

m  N      λ_min      λ_max    κ(A)       CPU    ITER
0  10443  5.449E-06  149.848  2.750E+07  775.0  1397
0  13897  3.305E-06  204.544  6.189E+07  339.7   299
0  17262  2.040E-06  259.912  1.274E+08  453.7   286
1  10443  1.207E-10  135.598  1.124E+12  447.3   611
1  13897  3.694E-11  185.949  5.034E+12  195.6   195
1  17262  1.217E-11  237.437  1.951E+13  400.6   237
Table 16.5 Unpreconditioned systems for the Neumann problem with p = 0.5
m  N      λ_min      λ_max    κ(A)       CPU      ITER
0  10443  5.449E-06  149.848  2.750E+07   4587.4    6398
0  13897  3.305E-06  204.544  6.189E+07   6402.2    5772
0  17262  2.040E-06  259.912  1.274E+08   9736.9    5743
1  10443  1.207E-10  135.598  1.124E+12  >23000   >41000
1  13897  3.694E-11  185.949  5.034E+12  >55000   >55000
1  17262  1.217E-11  237.437  1.951E+13  >123000  >69000
Table 16.6 Unpreconditioned systems for the Neumann problem with p = 0.95
algorithm is not affected by the smoothness of the kernel. In all cases, for both the weakly-singular and hypersingular operators, the condition numbers and the CPU times are much better than in the unpreconditioned case. The benefit is particularly great for the harder case $p = 0.95$. It should be noted that when $\cos\alpha$ decreases (meaning that $\alpha$ increases), $\lambda_{\max}$ decreases (whereas the behaviour of $\lambda_{\min}$ is influenced by the Lanczos algorithm), but the CPU time first decreases and then increases. We note that a larger value of $\alpha$ results in a larger overlap and a smaller value of $J$ (the number of subproblems to be solved), which in turn implies larger subproblems. As in the case of the differential operator considered in [137], this results in a smaller
m  N      cos α  cos β   J    λ_min      λ_max    κ(P)       CPU     ITER
0  10443  0.95   -0.07    86  2.110E-02   52.945  2.509E+03    86.0    62
0  10443  0.90   -0.63    45  3.048E-01   31.130  1.021E+02   101.0    29
0  10443  0.80   -0.87    20  1.199E-01   15.992  1.334E+02   316.4    31
0  10443  0.60   -0.87    11  5.191E-01   10.365  1.997E+01   716.0    18
0  13897  0.95    0.03    90  6.043E-02   55.499  9.184E+02   168.1    59
0  13897  0.90   -0.76    46  1.961E-01   31.905  1.627E+02   271.4    35
0  13897  0.80   -0.67    26  1.081E+00   20.528  1.898E+01   555.5    19
0  13897  0.60   -0.69    13  1.088E+00   12.031  1.105E+01  1723.9    16
0  17262  0.95   -0.17    90  8.102E-02   55.624  6.866E+02   254.3    50
0  17262  0.90   -0.62    47  1.468E-02   32.666  2.226E+03  1118.4    76
0  17262  0.80   -0.76    24  9.821E-01   19.087  1.943E+01  1006.8    20
0  17262  0.60   -0.85    12  2.635E-01   11.240  4.266E+01  4507.6    23
1  10443  0.99    0.99   424  1.371E-02  223.980  1.634E+04   145.1   202
1  10443  0.98    0.95   201  9.979E-03  113.033  1.133E+04   136.7   179
1  10443  0.97    0.69   138  1.446E-02   81.024  5.605E+03   121.2   132
1  10443  0.95   -0.07    86  1.155E-02   53.828  4.658E+03   163.5   122
1  10443  0.60   -0.87    11  5.299E-02   10.402  1.963E+02  1202.7    31
1  13897  0.99    0.98   395  8.076E-03  210.291  2.604E+04   327.7   259
1  13897  0.98    0.74   217  1.169E-02  122.993  1.052E+04   248.8   168
1  13897  0.97    0.68   138  3.374E-03   81.772  2.424E+04   414.9   234
1  13897  0.95    0.03    90  9.140E-03   56.687  6.202E+03   389.4   134
1  13897  0.60   -0.69    13  1.054E+00   12.085  1.147E+01  1823.4    17
Table 16.7 Preconditioned systems for the Dirichlet problem with p = 0.5
condition number $\kappa(P)$, because the preconditioner is closer to the inverse of the stiffness matrix. However, for an optimal value of $\alpha$ in terms of CPU time, one has to balance the number of subproblems against their sizes. As our experiments show, any value of $\alpha$ such that $\cos\alpha \le 0.6$ is not recommended.
m  N      cos α  cos β   J    λ_min      λ_max    κ(P)       CPU     ITER
0  10443  0.99    0.99   424  3.954E-03  223.130  5.644E+04   160.3   252
0  10443  0.98    0.95   201  4.652E-03  111.679  2.400E+04   109.8   162
0  10443  0.97    0.69   138  5.381E-02   79.763  1.482E+03    64.2    78
0  10443  0.95   -0.07    86  1.280E-02   52.945  4.135E+03   122.2    98
0  10443  0.90   -0.63    45  2.976E-01   31.130  1.046E+02   109.8    34
0  13897  0.99    0.98   395  1.782E-02  208.653  1.171E+04   219.7   173
0  13897  0.98    0.74   217  2.665E-02  120.889  4.536E+03   161.4   110
0  13897  0.97    0.68   138  8.361E-03   80.043  9.574E+03   233.9   134
0  13897  0.95    0.03    90  7.944E-02   55.499  6.987E+02   178.9    63
0  13897  0.90   -0.76    46  1.914E-01   31.905  1.667E+02   298.9    40
0  17262  0.99    0.98   405  5.336E-02  214.422  4.019E+03   172.2   102
0  17262  0.98    0.94   206  2.755E-02  115.125  4.179E+03   221.4   108
0  17262  0.97    0.70   140  8.905E-03   81.374  9.139E+03   394.8   151
0  17262  0.95   -0.17    90  6.687E-02   55.624  8.318E+02   275.4    58
0  17262  0.90   -0.62    47  1.449E-02   32.666  2.255E+03  1209.6    84
1  10443  0.99    0.99   424  2.337E-03  224.019  9.585E+04   371.6   584
1  10443  0.98    0.95   201  1.679E-03  113.144  6.740E+04   289.9   428
1  10443  0.97    0.69   138  1.534E-03   81.042  5.284E+04   231.9   282
1  10443  0.95   -0.07    86  1.408E-03   53.828  3.822E+04   306.4   246
1  10443  0.90   -0.63    45  4.726E-02   31.444  6.653E+02   203.5    63
1  13897  0.99    0.98   395  3.300E-04  210.991  6.395E+05  1228.8  1019
1  13897  0.98    0.74   217  2.545E-03  122.884  4.829E+04   456.4   325
1  13897  0.97    0.68   138  7.030E-04   82.394  1.172E+05   869.3   506
1  13897  0.95    0.03    90  1.050E-02   56.681  5.401E+03   405.1   143
1  13897  0.90   -0.76    46  1.381E-02   32.438  2.349E+03   833.3   111
Table 16.8 Preconditioned systems for the Dirichlet problem with p = 0.95
m  N      cos α  cos β   J    λ_min      λ_max   κ(P)       CPU    ITER
0  10443  0.99    0.99   424  1.998E-03  56.451  2.825E+04  105.6   165
0  10443  0.98    0.95   201  3.350E-04  31.681  9.446E+04  132.1   194
0  10443  0.97    0.69   138  1.300E-04  24.397  1.870E+05  105.7   128
0  10443  0.95   -0.07    86  4.700E-05  18.374  3.917E+05  123.8    99
0  10443  0.90   -0.63    45  6.888E-01  12.711  1.845E+01   71.4    22
0  13897  0.99    0.98   395  1.707E-02  52.756  3.091E+03  125.5   100
0  13897  0.98    0.74   217  5.593E-02  34.114  6.100E+02   93.3    64
0  13897  0.97    0.68   138  4.843E-02  24.713  5.102E+02  108.7    62
0  13897  0.95    0.03    90  2.361E-02  19.316  8.182E+02  179.2    63
0  13897  0.90   -0.76    46  3.453E-01  12.978  3.758E+01  188.6    25
0  17262  0.99    0.98   405  2.565E-02  54.441  2.122E+03  168.2    95
0  17262  0.98    0.94   206  3.661E-02  32.891  8.984E+02  143.5    67
0  17262  0.97    0.70   140  3.908E-02  25.302  6.475E+02  176.0    63
0  17262  0.95   -0.17    90  1.113E-01  19.347  1.738E+02  203.0    41
0  17262  0.90   -0.62    47  7.370E-02  13.570  1.841E+02  701.4    47
1  10443  0.99    0.99   424  1.204E-02  64.025  5.319E+03  128.3   156
1  10443  0.98    0.95   201  5.453E-03  35.481  6.506E+03  147.1   170
1  10443  0.97    0.69   138  5.919E-03  27.191  4.594E+03  154.6   152
1  10443  0.95   -0.07    86  8.853E-03  20.464  2.312E+03  146.4   101
1  10443  0.90   -0.63    45  1.181E-01  13.787  1.168E+02  121.8    35
1  13897  0.99    0.98   395  1.110E-02  59.929  5.398E+03  184.9   165
1  13897  0.98    0.74   217  1.145E-02  38.469  3.360E+03  165.2   125
1  13897  0.97    0.68   138  5.694E-03  27.781  4.879E+03  255.5   158
1  13897  0.95    0.03    90  1.569E-02  21.614  1.378E+03  253.0    93
1  13897  0.90   -0.76    46  3.217E-02  14.137  4.395E+02  376.3    51
1  17262  0.99    0.98   405  1.402E-02  62.282  4.442E+03  272.2   145
1  17262  0.98    0.94   206  8.439E-03  37.329  4.423E+03  355.7   159
1  17262  0.97    0.70   140  1.098E-02  28.467  2.593E+03  341.5   120
1  17262  0.95   -0.17    90  2.380E-02  21.584  9.068E+02  350.9    72
Table 16.9 Preconditioned systems for the Neumann problem with p = 0.5
m  N      cos α  cos β   J    λ_min      λ_max   κ(P)       CPU     ITER
0  10443  0.99    0.99   424  1.982E-03  56.452  2.849E+04   181.3   224
0  10443  0.98    0.95   201  3.340E-04  31.682  9.474E+04   162.9   222
0  10443  0.97    0.69   138  1.300E-04  24.398  1.873E+05   115.8   126
0  10443  0.95   -0.07    86  4.700E-05  18.374  3.924E+05   148.7   105
0  10443  0.90   -0.63    45  1.100E-05  12.772  1.176E+06   173.4    50
0  13897  0.99    0.98   395  9.992E-03  52.758  5.280E+03   144.9   121
0  13897  0.98    0.74   217  9.638E-02  34.115  3.540E+02    88.5    62
0  13897  0.97    0.68   138  4.898E-02  24.714  5.046E+02   119.5    66
0  13897  0.95    0.03    90  2.462E-02  19.317  7.845E+02   170.0    59
0  13897  0.90   -0.76    46  3.435E-01  12.981  3.779E+01   223.8    29
0  17262  0.99    0.98   405  3.767E-02  54.443  1.445E+03   160.8    85
0  17262  0.98    0.94   206  2.676E-02  32.892  1.229E+03   209.8    93
0  17262  0.97    0.70   140  8.573E-02  25.303  2.951E+02   175.4    61
0  17262  0.95   -0.17    90  8.531E-02  19.348  2.268E+02   230.5    46
0  17262  0.90   -0.62    47  8.307E-02  13.570  1.634E+02   636.0    43
1  10443  0.99    0.99   424  6.942E-05  64.025  9.223E+05  1467.0  2316
1  10443  0.98    0.95   201  6.208E-06  35.481  5.715E+06  1087.1  1612
1  10443  0.97    0.69   138  8.176E-07  27.191  3.326E+07   589.6   721
1  10443  0.95   -0.07    86  1.332E-07  20.464  1.537E+08   545.1   440
1  10443  0.90   -0.63    45  1.300E-08  13.787  1.061E+09   331.7   103
1  13897  0.99    0.98   395  2.819E-03  59.929  2.126E+04   343.0   311
1  13897  0.98    0.74   217  7.676E-03  38.469  5.011E+03   184.3   141
1  13897  0.97    0.68   138  4.683E-03  27.781  5.932E+03   276.3   171
1  13897  0.95    0.03    90  2.381E-02  21.614  9.078E+02   222.5    80
1  13897  0.90   -0.76    46  6.049E-02  14.137  2.337E+02   384.9    50
1  17262  0.99    0.98   405  1.701E-02  62.282  3.662E+03   296.8   151
1  17262  0.98    0.94   206  7.684E-03  37.329  4.858E+03   383.9   165
1  17262  0.97    0.70   140  5.230E-03  28.467  5.443E+03   406.4   138
1  17262  0.95   -0.17    90  2.162E-02  21.584  9.981E+02   379.0    76
Table 16.10 Preconditioned systems for the Neumann problem with p = 0.95
Part VI
Appendices
Appendix A
Interpolation Spaces and Sobolev Spaces
Throughout this book, Sobolev spaces of fractional order are the major function spaces. Since these spaces have many properties not possessed by Sobolev spaces of integral order, this appendix provides the necessary background on this topic. Since many properties of Sobolev spaces are derived from their interpolation property, we first recall in Section A.1 the concept of interpolation spaces. The main section is Section A.2, where different definitions of Sobolev spaces and norms (namely the Slobodetski norms, interpolation norms, and weighted norms), together with their properties, are introduced. Each norm plays a different role in the analysis, as observed in the previous chapters. The spatial domains under consideration include both the whole space $\mathbb{R}^d$ and Lipschitz domains in $\mathbb{R}^d$, $d \ge 1$. Equivalences between the different norms are established. Another important property of fractional-order Sobolev spaces is the relation between the local norm and the global norm. This property has to be derived via the interpolation norm; see Theorem A.10 and Theorem A.11. The results in the latter theorem are new; those for positive orders first appeared in [224], and those for negative orders are first presented in this chapter. This new development corrects a mistake in the derivation of the coercivity of the decomposition which is ubiquitously seen in the literature; see Subsection 2.6.4 for the definition of the coercivity of the decomposition. The different uses of norms are possible due to the equivalence of these norms. However, this equivalence may depend on the measure of the domain (on which the function is defined), and since this measure may depend on the problem parameters (mostly the meshsize $h$), care has to be taken. A scaling argument is required so that the norm equivalence is applied only on Sobolev spaces defined on a reference element (which means the constants are independent of $h$).
The scaling property of norms when the domain is rescaled is therefore another important issue and will be discussed in this chapter. The chapter also depicts other features of Sobolev spaces which are used throughout this monograph. The reader who is interested in the topic of Sobolev spaces can find more details in many other sources, e.g. [1, 88, 138, 147], to name a few.
The chapter ends with a section on a generalised antiderivative operator in Sobolev spaces on curves; see Subsection A.2.12. This operator is a mechanism that allows us to go back and forth between the space $\widetilde{H}^{1/2}(\Gamma)$ and the space $\widetilde{H}^{-1/2}(\Gamma)$, so that results for the weakly-singular integral operator (defined on curves) can be derived from corresponding results for the hypersingular integral operator.
A.1 Real Interpolation Spaces

A.1.1 Compatible couples and intermediate spaces

Real interpolation spaces are defined from two normed spaces $A_0$ and $A_1$. These two spaces are said to be a compatible couple $\mathcal{A} = (A_0, A_1)$ if they are subspaces of a Hausdorff (or separated) topological vector space $\mathcal{U}$. Under this assumption the spaces $A_0 \cap A_1$ and $A_0 + A_1$ are well defined; in particular
\[
A_0 + A_1 = \{ u \in \mathcal{U} : u = u_0 + u_1 \text{ for some } u_0 \in A_0 \text{ and } u_1 \in A_1 \}.
\]
These spaces are equipped with the norms
\[
\|u\|_{A_0 \cap A_1} = \bigl( \|u\|_{A_0}^2 + \|u\|_{A_1}^2 \bigr)^{1/2}
\]
and
\[
\|u\|_{A_0 + A_1} = \inf_{\substack{u_0 \in A_0,\, u_1 \in A_1 \\ u_0 + u_1 = u}} \bigl( \|u_0\|_{A_0}^2 + \|u_1\|_{A_1}^2 \bigr)^{1/2}.
\]
If $A_0$ and $A_1$ are complete, then $A_0 \cap A_1$ and $A_0 + A_1$ are also complete.

Consider another compatible couple of normed spaces $B_0$ and $B_1$, $\mathcal{B} = (B_0, B_1)$. The couple $T = (T_0, T_1)$ of bounded operators $T_0 : A_0 \to B_0$ and $T_1 : A_1 \to B_1$ is said to be compatible if $T_0 u = T_1 u$ for all $u \in A_0 \cap A_1$. We abbreviate this property by writing $T : \mathcal{A} \to \mathcal{B}$.

Suppose that $\mathcal{A} = (A_0, A_1)$ is a compatible couple. A normed space $A$ is called an intermediate space between $A_0$ and $A_1$ (or with respect to $\mathcal{A}$) if $A_0 \cap A_1 \subset A \subset A_0 + A_1$ with continuous inclusions. An intermediate space $A$ is called an interpolation space between $A_0$ and $A_1$ (or with respect to $\mathcal{A}$) if for any compatible couple $T : \mathcal{A} \to \mathcal{A}$ there exists a bounded operator $T_A : A \to A$ such that $T_A u = T_0 u = T_1 u$ for all $u \in A_0 \cap A_1$.

If there are two compatible couples $\mathcal{A} = (A_0, A_1)$ and $\mathcal{B} = (B_0, B_1)$, then the intermediate spaces $A$ and $B$ are interpolation spaces with respect to $\mathcal{A}$ and $\mathcal{B}$, respectively, if for any compatible couple $T : \mathcal{A} \to \mathcal{B}$ there exists a unique bounded operator $T_A : A \to B$ that coincides with $T_0$ and $T_1$ on $A_0 \cap A_1$. We note that the existence of an operator $T_A : A \to B$ is easily seen. Indeed, for any $u \in A$ we can define $T_A u = T_0 u_0 + T_1 u_1$, where $u = u_0 + u_1$ is one representation of $u$ in $A_0 + A_1$. If there is another representation $u = u_0' + u_1'$, then $u_0 - u_0' = u_1' - u_1 \in A_0 \cap A_1$. Hence $T_0(u_0 - u_0') = T_1(u_1' - u_1)$, which implies $T_A(u_0 + u_1) = T_A(u_0' + u_1')$.
A.1.2 The K-functional

For any compatible couple $\mathcal{A} = (A_0, A_1)$, the K-functional is defined for $t > 0$ and $u \in A_0 + A_1$ by
\[
K(t, u, \mathcal{A}) := \inf_{\substack{u_0 \in A_0,\, u_1 \in A_1 \\ u_0 + u_1 = u}} \bigl( \|u_0\|_{A_0}^2 + t^2 \|u_1\|_{A_1}^2 \bigr)^{1/2}.
\]
The K-functional is used to define the interpolation space $[A_0, A_1]_{\theta,q}$, $\theta \in (0,1)$ and $q \in [1,\infty)$, as follows:
\[
[A_0, A_1]_{\theta,q} := \bigl\{ u \in A_0 + A_1 : \|u\|_{[A_0,A_1]_{\theta,q}} < \infty \bigr\}
\]
where
\[
\|u\|_{[A_0,A_1]_{\theta,q}} := N_{\theta,q} \Bigl( \int_0^\infty \bigl| t^{-\theta} K(t, u, \mathcal{A}) \bigr|^q \frac{dt}{t} \Bigr)^{1/q} \tag{A.1}
\]
with the factor $N_{\theta,q}$ defined by
\[
N_{\theta,q} := \bigl( q\theta(1-\theta) \bigr)^{1/q}; \tag{A.2}
\]
see [147, page 319]. When $q = 2$ we write $[A_0, A_1]_\theta$ for $[A_0, A_1]_{\theta,2}$.

We state the following results for reference.

Theorem A.1. The following inclusions hold with continuous embeddings:
\[
A_0 \cap A_1 \subset [A_0, A_1]_{\theta,q} \subset A_0 + A_1.
\]
Moreover, if the norm $\|u\|_{[A_0,A_1]_{\theta,q}}$ is defined by (A.1) with the factor (A.2), then
\[
\|u\|_{A_0 + A_1} \le \|u\|_{[A_0,A_1]_{\theta,q}} \le \|u\|_{A_0}^{1-\theta} \|u\|_{A_1}^{\theta} \le \|u\|_{A_0 \cap A_1}.
\]
Proof. See [138, Proposition 2.3]. See also [147, Lemma B.1]. The following theorem is called the duality theorem.
Theorem A.2. Assume that $A_0 \cap A_1$ is dense in $A_0$ and in $A_1$. If $\theta \in (0,1)$ and $q \ge 1$, then $A_0 \cap A_1$ is dense in $[A_0, A_1]_{\theta,q}$ and
\[
[A_0, A_1]_{\theta,q}^* = [A_0^*, A_1^*]_{\theta,q^*} \quad \text{where} \quad \frac{1}{q} + \frac{1}{q^*} = 1,
\]
with equal norms if proper factors $N_{\theta,q}$ and $N_{\theta,q^*}$ are used.

Proof. See [25, Theorem 3.7.1] and [138, Theorem 6.2]. See also [57, Theorem 2.4] and [147, Theorem B.5] for the claim on equality of norms by choosing proper factors $N_{\theta,q}$ and $N_{\theta,q^*}$.

The next theorem is known as the reiteration theorem.

Theorem A.3. For any $\theta, \theta_0, \theta_1 \in [0,1]$ satisfying $\theta_0 \le \theta \le \theta_1$, if $\eta \in [0,1]$ is such that $\theta = (1-\eta)\theta_0 + \eta\theta_1$, then
\[
\bigl[ [A_0, A_1]_{\theta_0}, [A_0, A_1]_{\theta_1} \bigr]_\eta = [A_0, A_1]_\theta
\]
with equality of norms if a proper factor $N_\theta$ is used. The statement also holds if $\theta_j \in \{0, 1\}$ for $j = 0, 1$, provided that in such a case we define $[A_0, A_1]_{\theta_j} = A_{\theta_j}$.

Proof. See [25, Theorem 3.5.3] and [138, Theorem 6.1]. See also [57, Theorem 3.7] and [147, Theorem B.6] for the claim on equality of norms.

The next result establishes the interpolation property of $[A_0, A_1]_{\theta,q}$, regardless of the choice of $N_{\theta,q}$ (provided that it is the same for $\mathcal{A}$ and $\mathcal{B}$).

Theorem A.4. Let $\mathcal{A} = (A_0, A_1)$ and $\mathcal{B} = (B_0, B_1)$ be two compatible couples of normed spaces. If $T = (T_0, T_1) : \mathcal{A} \to \mathcal{B}$ is a compatible couple of linear operators, then there exists a unique bounded linear operator $T_\theta : [A_0, A_1]_{\theta,q} \to [B_0, B_1]_{\theta,q}$ satisfying
\[
T_\theta u = T_0 u = T_1 u, \quad u \in A_0 \cap A_1.
\]
Moreover, if $M_0$ and $M_1$ are positive constants such that
\[
\|T_j u\|_{B_j} \le M_j \|u\|_{A_j}, \quad u \in A_j, \ j = 0, 1,
\]
then
\[
\|T_\theta u\|_{[B_0,B_1]_{\theta,q}} \le M_0^{1-\theta} M_1^{\theta} \|u\|_{[A_0,A_1]_{\theta,q}}, \quad u \in [A_0, A_1]_{\theta,q}.
\]
In particular, if $T : A_j \to B_j$ is a linear operator that maps $A_j$ to $B_j$, $j = 0, 1$, then $T$ maps $[A_0, A_1]_{\theta,q}$ to $[B_0, B_1]_{\theta,q}$ and
\[
\|T\|_{[A_0,A_1]_{\theta,q} \to [B_0,B_1]_{\theta,q}} \le \|T\|_{A_0 \to B_0}^{1-\theta} \|T\|_{A_1 \to B_1}^{\theta}.
\]
Here $\|T\|_{X \to Y}$ denotes the norm of $T$ as an operator from $X$ to $Y$.

Proof. See [147, Theorem B.2]. See also [42, Proposition 14.1.5].
A.2 Sobolev Spaces

In this section, we give detailed constructions of Sobolev spaces and their properties. References on this topic include [1, 43, 88, 138, 147]. We introduce in this section different types of Sobolev norms, each of which has its own use, even though they are equivalent. The Slobodetski norm, with its integral form, is convenient for calculations, in particular for finding bounds of certain expressions. The interpolation norm is useful when properties of Sobolev spaces of integral orders (e.g., the $L^2$ and $H^1$ spaces) are carried over to Sobolev spaces of fractional orders (e.g., the $H^{1/2}(\Gamma)$ and $\widetilde{H}^{1/2}(\Gamma)$ spaces). Weighted norms are used when a norm is not scalable under affine mappings.
A.2.1 Notations

$\mathcal{D}(\mathbb{R}^d)$: the space of all $C^\infty$ functions with compact supports in $\mathbb{R}^d$.
$\mathcal{S}^*(\mathbb{R}^d)$: the space of temperate distributions in $\mathbb{R}^d$.
$\Omega$: a Lipschitz open set or Lipschitz domain (open and connected) in $\mathbb{R}^d$.
$\mathcal{D}(\Omega)$: the space of all $C^\infty$ functions with compact supports in $\Omega$.
$\mathcal{D}^*(\Omega)$: the space of all distributions on $\Omega$.
$L^p(\Omega)$: the Lebesgue spaces on $\Omega$, $1 \le p \le \infty$.

We also denote, for any closed subset $F$ of $\mathbb{R}^d$,
\[
\mathcal{D}(F) = \{ u : u = U|_F \text{ for some } U \in \mathcal{D}(\mathbb{R}^d) \}.
\]
A.2.2 Sobolev spaces on $\mathbb{R}^d$

A.2.2.1 The space $H^s(\mathbb{R}^d)$, $s \in \mathbb{R}$

For $s \in \mathbb{R}$, we define the Sobolev spaces $H^s(\mathbb{R}^d)$ by Fourier transforms
\[
H^s(\mathbb{R}^d) := \Bigl\{ u \in \mathcal{S}^*(\mathbb{R}^d) : \int_{\mathbb{R}^d} (1 + |\xi|^2)^s |\widehat{u}(\xi)|^2 \, d\xi < \infty \Bigr\},
\]
where $\widehat{u}$ is the Fourier transform of $u$ defined by
\[
\widehat{u}(\xi) := \int_{\mathbb{R}^d} e^{-i 2\pi \xi \cdot x} u(x) \, dx \quad \text{for } \xi \in \mathbb{R}^d. \tag{A.3}
\]
The space $H^s(\mathbb{R}^d)$ is equipped with the norm
\[
\|u\|_{H^s(\mathbb{R}^d)} = \|u\|_{H_F^s(\mathbb{R}^d)} := \Bigl( \int_{\mathbb{R}^d} (1 + |\xi|^2)^s |\widehat{u}(\xi)|^2 \, d\xi \Bigr)^{1/2}.
\]
The following duality property is known; see e.g. [147, (3.22)]:
\[
H^{-s}(\mathbb{R}^d) = \bigl( H^s(\mathbb{R}^d) \bigr)^*, \quad s \in \mathbb{R}, \tag{A.4}
\]
and
\[
\|u\|_{H^{-s}(\mathbb{R}^d)} = \sup_{0 \ne v \in H^s(\mathbb{R}^d)} \frac{\langle u, v \rangle_{\mathbb{R}^d}}{\|v\|_{H^s(\mathbb{R}^d)}}, \quad u \in H^{-s}(\mathbb{R}^d). \tag{A.5}
\]
Here $\langle \cdot, \cdot \rangle_{\mathbb{R}^d}$ is the dual pairing which extends the $L^2(\mathbb{R}^d)$-inner product.

It is known that the spaces $H^s(\mathbb{R}^d)$ have the exact interpolation property; see [138, Remark 7.5, page 32] and [147, Theorem B.7]. This means that if $s_0, s_1 \in \mathbb{R}$ and $\theta \in (0,1)$ are such that $s = (1-\theta)s_0 + \theta s_1$, then
\[
H^s(\mathbb{R}^d) = [H^{s_0}(\mathbb{R}^d), H^{s_1}(\mathbb{R}^d)]_\theta \tag{A.6}
\]
with equal norms
\[
\|u\|_{H_F^s(\mathbb{R}^d)} = \|u\|_{H_I^s(\mathbb{R}^d)}, \tag{A.7}
\]
where $\|u\|_{H_I^s(\mathbb{R}^d)}$ denotes $\|u\|_{[H^{s_0}(\mathbb{R}^d), H^{s_1}(\mathbb{R}^d)]_\theta}$ and where the interpolation space and interpolation norm are defined in Section A.1.2, provided that the normalisation factor $N_{\theta,q}$ in (A.2) is chosen to be, see [147, page 329],
\[
N_{\theta,q} = \Bigl( \frac{2 \sin(\pi\theta)}{\pi} \Bigr)^{1/2}. \tag{A.8}
\]
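The normalisation (A.8) can be checked numerically (this check is ours, not from the book). Realising the couple $(L^2, H^1)$ one Fourier mode at a time with weight $w = 1 + |\xi|^2$, the scalar K-functional has the closed form $K(t, a)^2 = a^2 t^2 w / (1 + t^2 w)$, obtained by minimising $|a_0|^2 + t^2 w |a_1|^2$ over $a_0 + a_1 = a$. Inserting this into (A.1) with $q = 2$ and the factor (A.8) should reproduce the weight $a^2 w^\theta$ exactly:

```python
import numpy as np
from scipy.integrate import quad

theta, w, a = 0.3, 5.0, 1.0                        # sample values (ours)
K2 = lambda t: a**2 * t**2 * w / (1 + t**2 * w)    # scalar K-functional squared
N2 = 2 * np.sin(np.pi * theta) / np.pi             # normalisation (A.8), squared

# Interpolation norm squared: N2 * int_0^inf t^(-2 theta) K(t)^2 dt/t
val, _ = quad(lambda t: t**(-2 * theta - 1) * K2(t), 0, np.inf)
# Should equal a^2 * w^theta, i.e. the H^theta Fourier weight
```

The underlying identity is $\int_0^\infty t^{1-2\theta} w/(1+t^2 w)\, dt = \tfrac12 w^\theta \pi/\sin(\pi\theta)$, so the factor $N_{\theta,2}^2 = 2\sin(\pi\theta)/\pi$ is exactly what makes the interpolation norm match the Fourier norm, in line with (A.7).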
A.2.2.2 The space $W^s(\mathbb{R}^d)$, $s \in \mathbb{R}$

Another Sobolev space on $\mathbb{R}^d$ can be defined with the Slobodetski norm as follows. For any non-negative number $s$,
\[
W^s(\mathbb{R}^d) = \bigl\{ u \in L^2(\mathbb{R}^d) : \|u\|_{H_S^s(\mathbb{R}^d)} < \infty \bigr\}, \tag{A.9}
\]
where, for $r = 0, 1, 2, \ldots$ and $\mu \in (0,1)$, the Slobodetski norm $\|u\|_{H_S^s(\mathbb{R}^d)}$ is defined by
\[
\|u\|_{H_S^s(\mathbb{R}^d)} :=
\begin{cases}
\Bigl( \displaystyle\sum_{|\alpha| \le r} a_{\alpha,r} \|\partial^\alpha u\|_{L^2(\mathbb{R}^d)}^2 \Bigr)^{1/2}, & s = r, \\[2ex]
\Bigl( \|u\|_{H_S^r(\mathbb{R}^d)}^2 + \displaystyle\sum_{|\alpha| = r} |\partial^\alpha u|_{H^\mu(\mathbb{R}^d)}^2 \Bigr)^{1/2}, & s = r + \mu, \tag{A.10}
\end{cases}
\]
where $a_{\alpha,r} > 0$ is to be determined later. Here $\partial^\alpha$ is the differential operator defined by
\[
\partial^\alpha u := \frac{\partial^{|\alpha|} u}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}
\quad \text{with } x = (x_1, \ldots, x_d), \ \alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{N}^d \text{ and } |\alpha| = \alpha_1 + \cdots + \alpha_d, \tag{A.11}
\]
and $|\cdot|_{H^\mu(\mathbb{R}^d)}$ is the seminorm defined by
\[
|u|_{H^\mu(\mathbb{R}^d)} := \Bigl( \int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{|u(x) - u(y)|^2}{|x - y|^{d + 2\mu}} \, dx \, dy \Bigr)^{1/2}.
\]
As will be seen in Theorem A.5, the two spaces $H^s(\mathbb{R}^d)$ and $W^s(\mathbb{R}^d)$ are equal. This is the reason we denote the norm of $W^s(\mathbb{R}^d)$ by $\|\cdot\|_{H_S^s(\mathbb{R}^d)}$ instead of $\|\cdot\|_{W^s(\mathbb{R}^d)}$.

The factor $a_{\alpha,r}$ in (A.10) is usually chosen to be one. However, as observed in [57],
\[
\|u\|_{H_F^r(\mathbb{R}^d)}^2 = \sum_{|\alpha| \le r} \binom{r}{|\alpha|} \binom{|\alpha|}{\alpha} \|\partial^\alpha u\|_{L^2(\mathbb{R}^d)}^2,
\quad \text{where} \quad \binom{|\alpha|}{\alpha} := \frac{|\alpha|!}{\alpha_1! \cdots \alpha_d!}.
\]
We choose, as in [57],
\[
a_{\alpha,r} = \binom{|\alpha|}{\alpha} \binom{r}{|\alpha|} \tag{A.12}
\]
to ensure that
\[
\|u\|_{H_F^r(\mathbb{R}^d)} = \|u\|_{H_S^r(\mathbb{R}^d)}. \tag{A.13}
\]
For $s < 0$, the space is defined by duality:
\[
W^s(\mathbb{R}^d) = \bigl( W^{-s}(\mathbb{R}^d) \bigr)^* \tag{A.14}
\]
equipped with the dual norm
\[
\|u\|_{W^s(\mathbb{R}^d)} = \|u\|_{H_S^s(\mathbb{R}^d)} = \sup_{\substack{v \in W^{-s}(\mathbb{R}^d) \\ v \ne 0}} \frac{\langle u, v \rangle_{\mathbb{R}^d}}{\|v\|_{H_S^{-s}(\mathbb{R}^d)}}.
\]
The two spaces $H^s(\mathbb{R}^d)$ and $W^s(\mathbb{R}^d)$ coincide with equivalent norms, as shown in the following theorem.

Theorem A.5. For any $s \in \mathbb{R}$, the spaces $H^s(\mathbb{R}^d)$ and $W^s(\mathbb{R}^d)$ coincide, with equal norms if $s$ is an integer, and equivalent norms otherwise.

Proof. Let $s = r + \mu$, where $r$ is a non-negative integer and $\mu \in [0,1)$. If $\mu = 0$, then the result is given in (A.13). If $\mu > 0$, then it is shown in [147, Lemma 3.15] that
\[
|u|_{H^\mu(\mathbb{R}^d)}^2 = a_\mu \int_{\mathbb{R}^d} |\xi|^{2\mu} |\widehat{u}(\xi)|^2 \, d\xi,
\]
where
\[
a_\mu = \int_0^\infty t^{-2\mu - 1} \int_{|\omega| = 1} |e^{i 2\pi \omega_1 t} - 1|^2 \, d\omega \, dt. \tag{A.15}
\]
Note that
\[
\int_{|\omega| = 1} |e^{i 2\pi \omega_1 t} - 1|^2 \, d\omega =
\begin{cases}
O(t^2) & \text{as } t \to 0, \\
O(1) & \text{as } t \to \infty. \tag{A.16}
\end{cases}
\]
Hence $a_\mu$ is bounded for $\mu \in (0,1)$ and therefore
\[
\|u\|_{H_S^s(\mathbb{R}^d)}^2 = \|u\|_{H_S^r(\mathbb{R}^d)}^2 + \sum_{|\alpha| = r} \int_{\mathbb{R}^d} a_\mu |\xi|^{2\mu} |(2\pi\xi)^\alpha \widehat{u}(\xi)|^2 \, d\xi
\simeq \int_{\mathbb{R}^d} (1 + |\xi|^2)^{r + \mu} |\widehat{u}(\xi)|^2 \, d\xi = \|u\|_{H_F^{r+\mu}(\mathbb{R}^d)}^2.
\]
For $s < 0$, the required result is a consequence of duality arguments, noting (A.5) and (A.14). For $s = 0$, the spaces $H^0(\mathbb{R}^d)$ and $W^0(\mathbb{R}^d)$ are the Lebesgue space $L^2(\mathbb{R}^d)$, with norms equal to the usual $L^2$-norm.

It is noted that the equivalence constants in the above proof depend on $\mu$, and hence on $s$. The following result states a special case of $s$-independence.

Corollary A.1. If $s = r + \mu$ for some non-negative integer $r$ and $\mu \in [\delta, 1/2)$ for some $\delta > 0$, then
\[
\|u\|_{H_S^s(\mathbb{R}^d)}^2 \simeq \|u\|_{H_F^s(\mathbb{R}^d)}^2 \quad \forall u \in H^s(\mathbb{R}^d) = W^s(\mathbb{R}^d),
\]
with the equivalence constants depending on $\delta$ but independent of $s$.

Proof. It suffices to estimate $a_\mu$ defined by (A.15). Due to (A.16),
\[
a_\mu \simeq \int_0^1 t^{-2\mu + 1} \, dt + \int_1^\infty t^{-2\mu - 1} \, dt = \frac{1}{2\mu(1 - \mu)}
\]
(with the constants independent of $\mu$), so that $1/(1 - \delta) \lesssim a_\mu \lesssim 1/\delta$. This proves the required result.
A.2.3 Sobolev spaces on a Lipschitz domain

Let $\Omega$ be an open subset of $\mathbb{R}^d$. The spaces $H^s(\Omega)$, $H_0^s(\Omega)$, and $\widetilde{H}^s(\Omega)$ can be defined either from $H^s(\mathbb{R}^d)$ or from $W^s(\mathbb{R}^d)$. Even though $H^s(\mathbb{R}^d) = W^s(\mathbb{R}^d)$, as proved in Theorem A.5, it is not necessary that both definitions give the same spaces on $\Omega$, unless $\Omega$ is a Lipschitz domain, the set of our main interest in this book. Section A.2.5 justifies the use of unique notations for these spaces. Nevertheless, since different definitions give different norms on the same space, we will differentiate these norms. Readers who are interested in more general sets $\Omega$ are referred to [88, 147].
A.2.3.1 Lipschitz domains

For the reader's convenience, we first recall the definition of Lipschitz domains, following [147, Chapter 3].

Definition A.1. Let $\Omega$ be an open subset of $\mathbb{R}^d$.
(i) $\Omega$ is called a Lipschitz hypograph if
\[
\Omega = \bigl\{ x = (x', x_d) \in \mathbb{R}^d : x_d < \zeta(x'), \ x' = (x_1, \ldots, x_{d-1}) \in \mathbb{R}^{d-1} \bigr\},
\]
where $\zeta : \mathbb{R}^{d-1} \to \mathbb{R}$ is a Lipschitz function with a Lipschitz constant $L$.
(ii) $\Omega$ is a Lipschitz domain if its boundary $\partial\Omega$ is compact and if there exist finite families of open subsets $\{W_j\}_{j=1}^J$ and $\{\Omega_j\}_{j=1}^J$ having the following properties:
(a) The family $\{W_j\}_{j=1}^J$ is a finite cover of $\partial\Omega$, i.e., $\partial\Omega \subset \cup_{j=1}^J W_j$;
(b) Each $\Omega_j$ can be transformed, by a rotation and a translation, into a Lipschitz hypograph such that $W_j \cap \Omega = W_j \cap \Omega_j$.

Remark A.1. (i) A Lipschitz hypograph $\Omega$ is unbounded, its unbounded boundary being
\[
\partial\Omega = \bigl\{ x = (x', x_d) \in \mathbb{R}^d : x_d = \zeta(x'), \ x' = (x_1, \ldots, x_{d-1}) \in \mathbb{R}^{d-1} \bigr\}.
\]
(ii) A Lipschitz domain $\Omega$ can be bounded or unbounded, but its boundary is bounded. If $\Omega$ is a bounded Lipschitz domain, then $\mathbb{R}^d \setminus \overline{\Omega}$ is an unbounded Lipschitz domain.

A.2.3.2 The spaces $H^s(\Omega)$, $H_0^s(\Omega)$, and $\widetilde{H}^s(\Omega)$, $s \in \mathbb{R}$
We give the first definition of Sobolev spaces on a Lipschitz domain Ω .
Definition A.2. Let $\Omega$ be a Lipschitz domain in $\mathbb{R}^d$, $d \ge 1$. For $s \in \mathbb{R}$, we define $H^s(\Omega)$, $H_0^s(\Omega)$, and $\widetilde{H}^s(\Omega)$ by
\[
\begin{aligned}
H^s(\Omega) &:= \bigl\{ u \in \mathcal{D}^*(\Omega) : u = U|_\Omega \text{ for some } U \in H^s(\mathbb{R}^d) \bigr\}, \\
H_0^s(\Omega) &:= \overline{\mathcal{D}(\Omega)}^{\,H^s(\Omega)} = \text{closure of } \mathcal{D}(\Omega) \text{ in } H^s(\Omega), \\
\widetilde{H}^s(\Omega) &:= \bigl\{ u \in H^s(\mathbb{R}^d) : \operatorname{supp} u \subset \overline{\Omega} \bigr\}, \tag{A.17}
\end{aligned}
\]
with the corresponding norms uHFs (Ω ) := inf UH s (Rd ) , u ∈ H s (Ω ), F
u=U|Ω
uH0s (Ω ) := uHFs (Ω ) ,
u ∈ H0s (Ω ),
uH s (Ω ) := uH s (Rd ) ,
s
u ∈ H (Ω ).
F
F
(A.18)
3 4 Notice that there exists U ∗ ∈ X (u) := U ∈ H s (Rd ) : U|Ω = u satisfying 3 4 (A.19) U ∗ H s (Rd ) = inf UH s (Rd ) : U ∈ X (u) = uHFs (Ω ) . F
F
Indeed, let $m = \inf\bigl\{ \|U\|_{H_F^s(\mathbb{R}^d)} : U \in X(u) \bigr\}$. Then there exists a sequence $\{U_k\}$ in $X(u)$ satisfying $\lim_{k\to\infty} \|U_k\|_{H_F^s(\mathbb{R}^d)} = m$. Since this sequence is bounded in the Hilbert space $H^s(\mathbb{R}^d)$, there exists a subsequence $\{U_{k_j}\}$ that converges weakly to $U^* \in H^s(\mathbb{R}^d)$. It can be seen that $U^* \in X(u)$. Hence $m \le \|U^*\|_{H_F^s(\mathbb{R}^d)}$. On the other hand, the weak convergence implies $\|U^*\|_{H_F^s(\mathbb{R}^d)} \le \liminf_{j\to\infty} \|U_{k_j}\|_{H_F^s(\mathbb{R}^d)} = m$. Hence, $U^*$ satisfies (A.19).

It is clear from (A.17) and (A.18) that
\[
H^0(\Omega) = H_0^0(\Omega) = \widetilde{H}^0(\Omega)
\]
and that
\[
\|u|_\Omega\|_{H_F^s(\Omega)} \le \|u\|_{\widetilde{H}_F^s(\Omega)} \quad\text{for}\quad u \in \widetilde{H}^s(\Omega),\ s \in \mathbb{R}.
\tag{A.20}
\]
Moreover, we have the following relations among the three spaces (see [88, Theorem 1.4.2.4 and Theorem 1.4.5.2]):
\[
\begin{aligned}
\widetilde{H}^s(\Omega) &\subset H_0^s(\Omega) \subset H^s(\Omega), & s &\in \mathbb{R}, \\
\widetilde{H}^s(\Omega) &= H_0^s(\Omega), & s &\ge 0,\ s \ne \tfrac12, \tfrac32, \tfrac52, \ldots, \\
\widetilde{H}^s(\Omega) &= H_0^s(\Omega) = H^s(\Omega), & 0 &\le s < \tfrac12.
\end{aligned}
\tag{A.21}
\]
Remark A.2. Assume that $\overline{\Omega} = \overline{\Omega}_1 \cup \cdots \cup \overline{\Omega}_N$ with $\Omega_i \cap \Omega_j = \emptyset$ for $i \ne j$. The following properties are clear from definitions (A.17) and (A.18).

(i) $H^s(\Omega) \subset H^s(\Omega_i)$, and if $u \in H^s(\Omega)$, then $\|u|_{\Omega_i}\|_{H_F^s(\Omega_i)} \le \|u\|_{H_F^s(\Omega)}$ for $i = 1, \ldots, N$. Consequently, the Cauchy–Schwarz inequality implies
\[
\sum_{i=1}^N \|u|_{\Omega_i}\|_{H_F^s(\Omega_i)}^2 \le N \|u\|_{H_F^s(\Omega)}^2.
\tag{A.22}
\]
(ii) $\widetilde{H}^s(\Omega_i) \subset \widetilde{H}^s(\Omega)$ and $\|u\|_{\widetilde{H}_F^s(\Omega)} = \|u\|_{\widetilde{H}_F^s(\Omega_i)}$ for all $u \in \widetilde{H}^s(\Omega_i)$, $i = 1, \ldots, N$.

(iii) If $u \in \widetilde{H}^s(\Omega)$ is such that $u|_{\Omega_i} \in \widetilde{H}^s(\Omega_i)$, $i = 1, \ldots, N$, then
\[
u(\xi) = \sum_{i=1}^N (u|_{\Omega_i})(\xi),
\]
so that, see (A.4),
\[
\|u\|_{\widetilde{H}_F^s(\Omega)}^2 \le N \sum_{i=1}^N \|u|_{\Omega_i}\|_{\widetilde{H}_F^s(\Omega_i)}^2.
\tag{A.23}
\]
The constant $N$ on the right-hand sides of (A.22) and (A.23) is not satisfactory for the analysis of preconditioners. This is the main reason for introducing other spaces and norms in subsequent sections.

A.2 Sobolev Spaces
A.2.3.3 Density and duality properties

The following density properties hold:
\[
H^s(\mathbb{R}^d) = \overline{\mathcal{D}(\mathbb{R}^d)}^{\,H^s(\mathbb{R}^d)},
\qquad
H_0^s(\Omega) = \overline{\mathcal{D}(\Omega)}^{\,H^s(\Omega)},
\qquad
H^s(\Omega) = \overline{\mathcal{D}(\overline{\Omega})}^{\,H^s(\Omega)},
\qquad
\widetilde{H}^s(\Omega) = \overline{\mathcal{D}(\Omega)}^{\,H^s(\mathbb{R}^d)},
\]
where, for any subset $A$ of a normed vector space $X$, the set $\overline{A}^{\,X}$ denotes the closure of $A$ in $X$.

For any $s \in \mathbb{R}$, the spaces $H^s(\Omega)$ and $\widetilde{H}^s(\Omega)$ satisfy the duality relations
\[
H^s(\Omega)^* = \widetilde{H}^{-s}(\Omega)
\quad\text{and}\quad
\widetilde{H}^s(\Omega)^* = H^{-s}(\Omega)
\tag{A.24}
\]
with respect to the dual pairing which is the extension of the $L^2(\Omega)$ inner product.

A.2.3.4 The spaces $W^s(\Omega)$ and $\widetilde{W}^s(\Omega)$, $s \in \mathbb{R}$
Sobolev spaces on $\Omega$ can also be constructed in the same manner as $W^s(\mathbb{R}^d)$ defined by (A.9):
\[
W^s(\Omega) := \bigl\{ u \in L^2(\Omega) : \|u\|_{H_S^s(\Omega)} < \infty \bigr\},
\qquad
\widetilde{W}^s(\Omega) := \bigl\{ u \in L^2(\Omega) : \|u\|_{\widetilde{H}_S^s(\Omega)} < \infty \bigr\},
\]
where the Slobodetski norms $\|u\|_{H_S^s(\Omega)}$ and $\|u\|_{\widetilde{H}_S^s(\Omega)}$ are defined as follows. For any $u \in H^s(\Omega)$, $s = r + \mu$ with $r = 0, 1, 2, \ldots$ and $\mu \in [0, 1)$, we define the seminorm
\[
|u|_{H^s(\Omega)}^2 :=
\begin{cases}
\displaystyle \sum_{|\alpha|=r} a_{\alpha,r} \|\partial^\alpha u\|_{L^2(\Omega)}^2, & \mu = 0, \\[2mm]
\displaystyle \sum_{|\alpha|=r} \int_{\Omega\times\Omega} \frac{|\partial^\alpha u(x) - \partial^\alpha u(y)|^2}{|x-y|^{d+2\mu}} \, dx \, dy, & 0 < \mu < 1,
\end{cases}
\tag{A.25}
\]
where $\partial^\alpha u$ and $a_{\alpha,r}$ are defined in (A.11) and (A.12), respectively. The Slobodetski norm is defined for $u \in H^s(\Omega)$ by
\[
\|u\|_{H_S^s(\Omega)}^2 :=
\begin{cases}
\|u\|_{L^2(\Omega)}^2, & s = 0, \\
\|u\|_{L^2(\Omega)}^2 + |u|_{H^s(\Omega)}^2, & s > 0.
\end{cases}
\tag{A.26}
\]
For $u \in \widetilde{H}^s(\Omega)$ we define
\[
\|u\|_{\widetilde{H}_S^s(\Omega)}^2 := \|u\|_{H_S^s(\Omega)}^2 + \sum_{|\alpha|=r} \int_\Omega \frac{|\partial^\alpha u(x)|^2}{\operatorname{dist}(x, \partial\Omega)^{2\mu}} \, dx.
\tag{A.27}
\]
In $H^{-s}(\Omega)$ and $\widetilde{H}^{-s}(\Omega)$ with $s > 0$, the norms $\|\cdot\|_{H_S^{-s}(\Omega)}$ and $\|\cdot\|_{\widetilde{H}_S^{-s}(\Omega)}$ can be defined by duality, see (A.24), as follows:
\[
\|u\|_{H_S^{-s}(\Omega)} := \sup_{v \in \widetilde{H}^s(\Omega)} \frac{\langle u, v \rangle_\Omega}{\|v\|_{\widetilde{H}_S^s(\Omega)}},
\qquad
\|u\|_{\widetilde{H}_S^{-s}(\Omega)} := \sup_{v \in H^s(\Omega)} \frac{\langle u, v \rangle_\Omega}{\|v\|_{H_S^s(\Omega)}}.
\tag{A.28}
\]
Here $\langle \cdot, \cdot \rangle_\Omega$ is the dual pairing extension of the inner product in $L^2(\Omega)$. The factor $a_{\alpha,r}$ in (A.25) ensures that
\[
\|u\|_{H_S^r(\Omega)} \le \|u\|_{H_F^r(\Omega)}, \qquad r = 0, 1, 2, \ldots.
\tag{A.29}
\]
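For a feel of the extra boundary-weighted term in (A.27), here is a small worked example (ours, not from the book) with $r = 0$, $\mu = s \in (0, 1/2)$ and $v \equiv 1$ on $\Omega = (0,1)$:

```latex
\[
\|1\|_{\widetilde{H}_S^s(0,1)}^2
  = \|1\|_{H_S^s(0,1)}^2 + \int_0^1 \frac{dx}{\operatorname{dist}(x,\{0,1\})^{2s}}
  = 1 + 2 \int_0^{1/2} x^{-2s} \, dx
  = 1 + \frac{2^{2s}}{1 - 2s},
\]
% finite for s < 1/2 but blowing up like (1/2 - s)^{-1} as s -> 1/2; this is
% the same blow-up rate that appears in the injection bound of Lemma A.9.
```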
A.2.3.5 The interpolation spaces $[H^{s_0}(\Omega), H^{s_1}(\Omega)]_\theta$ and $[\widetilde{H}^{s_0}(\Omega), \widetilde{H}^{s_1}(\Omega)]_\theta$

A third construction of Sobolev spaces on $\Omega$ is via the interpolation introduced in Section A.1. In the literature, this interpolation is established with the $K$-functionals defined using the Sobolev norms $\|\cdot\|_{H_F^s(\Omega)}$ and $\|\cdot\|_{\widetilde{H}_F^s(\Omega)}$; see e.g. [57] and [147]. For completeness and also for better understanding, we first use the same definition. However, we will point out that this construction does not serve our purpose in the analysis of additive Schwarz preconditioners. Therefore, we will later introduce another construction.

For any integer $r$, we define the interpolation norms in $H^r(\Omega)$ and in $\widetilde{H}^r(\Omega)$ by
\[
\|u\|_{H_{IF}^r(\Omega)} := \|u\|_{H_F^r(\Omega)}
\quad\text{and}\quad
\|u\|_{\widetilde{H}_{IF}^r(\Omega)} := \|u\|_{\widetilde{H}_F^r(\Omega)},
\]
and the $K$-functionals
\[
\begin{aligned}
K_F(t, u, \mathcal{A}) &:= \inf_{\substack{u_0 \in H^0,\ u_1 \in H^r \\ u_0 + u_1 = u}} \bigl( \|u_0\|_{H_{IF}^0(\Omega)}^2 + t^2 \|u_1\|_{H_{IF}^r(\Omega)}^2 \bigr)^{1/2}, \\
K_F(t, u, \widetilde{\mathcal{A}}) &:= \inf_{\substack{u_0 \in \widetilde{H}^0,\ u_1 \in \widetilde{H}^r \\ u_0 + u_1 = u}} \bigl( \|u_0\|_{\widetilde{H}_{IF}^0(\Omega)}^2 + t^2 \|u_1\|_{\widetilde{H}_{IF}^r(\Omega)}^2 \bigr)^{1/2},
\end{aligned}
\tag{A.30}
\]
with
\[
\mathcal{A} = (H^0(\Omega), H^r(\Omega))
\quad\text{and}\quad
\widetilde{\mathcal{A}} = (\widetilde{H}^0(\Omega), \widetilde{H}^r(\Omega)).
\]
For $\theta \in (0,1)$ and $s = \theta r$, these $K$-functionals yield the following interpolation spaces
\[
H_{IF}^s(\Omega) := [H^0(\Omega), H^r(\Omega)]_\theta
\quad\text{and}\quad
\widetilde{H}_{IF}^s(\Omega) := [\widetilde{H}^0(\Omega), \widetilde{H}^r(\Omega)]_\theta,
\tag{A.31}
\]
and their corresponding norms
\[
\|u\|_{H_{IF}^s(\Omega)} := \|u\|_{[H^0(\Omega), H^r(\Omega)]_\theta}
\quad\text{and}\quad
\|u\|_{\widetilde{H}_{IF}^s(\Omega)} := \|u\|_{[\widetilde{H}^0(\Omega), \widetilde{H}^r(\Omega)]_\theta}.
\tag{A.32}
\]
We use the index $IF$ to make it clear that these interpolation spaces and norms are defined from the norms given in (A.18). We will later prove that if $\Omega$ is a Lipschitz domain, then $H^s(\Omega) = H_{IF}^s(\Omega)$ and $\widetilde{H}^s(\Omega) = \widetilde{H}_{IF}^s(\Omega)$ with equivalent norms.

As will be seen in Lemma A.1 and Theorem A.8, the interpolation properties mentioned above hold. However, the norms $\|\cdot\|_{H_F^s(\Omega)}$ and $\|\cdot\|_{\widetilde{H}_F^s(\Omega)}$ used in the definition of the $K$-functionals in (A.30) are not suitable for the study of the scaling property and other properties of the interpolation norms; see Section A.2.7 and Section A.2.9. For this reason, we introduce another type of interpolation norm, starting with the Slobodetski norm. For any non-negative integer $r$, we define
\[
\|u\|_{H_I^r(\Omega)} := \|u\|_{H_S^r(\Omega)}
\quad\text{and}\quad
\|u\|_{\widetilde{H}_I^r(\Omega)} := |u|_{H^r(\Omega)};
\]
see (A.25). Note that, see (A.29),
\[
\|u\|_{\widetilde{H}_I^r(\Omega)} \le \|u\|_{H_I^r(\Omega)} \le \|u\|_{H_F^r(\Omega)}.
\tag{A.33}
\]
Instead of (A.30), the $K$-functionals are now defined by
\[
\begin{aligned}
K_S(t, u, \mathcal{A}) &:= \inf_{\substack{u_0 \in H^0,\ u_1 \in H^r \\ u_0 + u_1 = u}} \bigl( \|u_0\|_{H_I^0(\Omega)}^2 + t^2 \|u_1\|_{H_I^r(\Omega)}^2 \bigr)^{1/2}, \\
K_S(t, u, \widetilde{\mathcal{A}}) &:= \inf_{\substack{u_0 \in \widetilde{H}^0,\ u_1 \in \widetilde{H}^r \\ u_0 + u_1 = u}} \bigl( \|u_0\|_{\widetilde{H}_I^0(\Omega)}^2 + t^2 \|u_1\|_{\widetilde{H}_I^r(\Omega)}^2 \bigr)^{1/2}.
\end{aligned}
\tag{A.34}
\]
Hence, instead of (A.31) and (A.32), we now have
\[
H_I^s(\Omega) := [H^0(\Omega), H^r(\Omega)]_\theta
\quad\text{and}\quad
\widetilde{H}_I^s(\Omega) := [\widetilde{H}^0(\Omega), \widetilde{H}^r(\Omega)]_\theta,
\]
and their corresponding norms
\[
\|u\|_{H_I^s(\Omega)} := \|u\|_{[H^0(\Omega), H^r(\Omega)]_\theta}
\quad\text{and}\quad
\|u\|_{\widetilde{H}_I^s(\Omega)} := \|u\|_{[\widetilde{H}^0(\Omega), \widetilde{H}^r(\Omega)]_\theta}.
\]
The norms $\|\cdot\|_{H_I^{-s}(\Omega)}$ and $\|\cdot\|_{\widetilde{H}_I^{-s}(\Omega)}$ are defined by duality, see (A.24):
\[
\|u\|_{H_I^{-s}(\Omega)} := \sup_{0 \ne v \in \widetilde{H}^s(\Omega)} \frac{\langle u, v \rangle_\Omega}{\|v\|_{\widetilde{H}_I^s(\Omega)}},
\qquad
\|u\|_{\widetilde{H}_I^{-s}(\Omega)} := \sup_{0 \ne v \in H^s(\Omega)} \frac{\langle u, v \rangle_\Omega}{\|v\|_{H_I^s(\Omega)}}.
\tag{A.35}
\]
In this book, we mostly use these interpolation norms and spaces, not the ones defined by (A.30). Therefore, for simplicity of notation, we do not include the index $S$ in these norms and spaces. This construction has not been studied before in the literature, except in [42] for the special case $s \in (0,1)$; see [42, Theorem 14.2.3]. It turns out that this is the correct construction for Section A.2.7 and Section A.2.9.

Given that the three different definitions of Sobolev spaces on $\mathbb{R}^d$ give the same space with equal norms or equivalent norms (see (A.6), (A.7), and Theorem A.5), what can we say about Sobolev spaces on $\Omega$? It turns out that, thanks to the Lipschitz assumption on the domain $\Omega$, the same results hold. This is due to the existence of extension operators, to be discussed in the next section.
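The $K$-functional in (A.34) is an infimum over all splittings $u = u_0 + u_1$. As a small self-contained illustration (ours, not from the book), consider a finite-dimensional Hilbert couple mimicking an $L^2$/$H^r$ pair diagonalised in a spectral basis, where the infimum can be computed in closed form and compared with a brute-force minimisation; all names in the sketch are hypothetical.

```python
# Toy model of a K-functional of the type (A.30)/(A.34): on R^n take
#   ||u||_{A0}^2 = sum_k u_k^2   and   ||u||_{A1}^2 = sum_k lam_k u_k^2.
# Minimising ||u0||_{A0}^2 + t^2 ||u1||_{A1}^2 over u = u0 + u1 componentwise
# (set u1_k = w, u0_k = u_k - w, and minimise the parabola in w) gives
#   K(t, u)^2 = sum_k  t^2 lam_k / (1 + t^2 lam_k) * u_k^2 .

def K_closed(t, u, lam):
    return sum(t**2 * lk / (1.0 + t**2 * lk) * uk**2
               for uk, lk in zip(u, lam)) ** 0.5

def K_bruteforce(t, u, lam, steps=2000):
    # Grid search for the optimal split in each component; the minimiser
    # w = u_k / (1 + t^2 lam_k) lies between 0 and u_k.
    total = 0.0
    for uk, lk in zip(u, lam):
        total += min((uk - w)**2 + t**2 * lk * w**2
                     for w in (uk * i / steps for i in range(steps + 1)))
    return total ** 0.5

u = [1.0, -2.0, 0.5]
lam = [1.0, 10.0, 100.0]
for t in (0.1, 1.0, 10.0):
    assert abs(K_bruteforce(t, u, lam) - K_closed(t, u, lam)) < 1e-2
```

The closed form makes the two limits visible: $K(t,u) \to 0$ as $t \to 0$ and $K(t,u) \to \|u\|_{A_0}$ as $t \to \infty$, which is exactly the behaviour the weight $t^{-(2\theta+1)}$ in the interpolation norm integrates against.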
A.2.4 Extension operators

In this subsection, we collect some results on extension operators which extend functions defined on a Lipschitz domain $\Omega$ in $\mathbb{R}^d$ to functions defined on $\mathbb{R}^d$. First we recall that an operator $E : W^s(\Omega) \to W^s(\mathbb{R}^d)$ is called an extension operator if $E$ satisfies
\[
(Eu)|_\Omega = u, \qquad u \in W^s(\Omega).
\]
(i) Calderón [47] constructed for each positive integer $r$ an extension operator $E_r : W^r(\Omega) \to W^r(\mathbb{R}^d)$ that is bounded.

(ii) Stein [197, page 181] constructed an extension operator $E : W^r(\Omega) \to W^r(\mathbb{R}^d)$, not depending on $r$, that is bounded for each positive integer $r$.

(iii) Other early developments of extension operators can be found, for example, in the book by Nečas [163].

(iv) Grisvard [88, Theorem 1.4.3.1] introduced for each positive real number $s$ a bounded extension operator $E_S : W^s(\Omega) \to W^s(\mathbb{R}^d)$. Grisvard also mentioned that Aronszajn & Smith [10] and Seeley [192] showed that $E_S$ can be chosen independently of $s$. This means the following estimate holds:
\[
\|E_S u\|_{H_S^s(\mathbb{R}^d)} \lesssim \|u\|_{H_S^s(\Omega)}.
\tag{A.36}
\]
(v) McLean [147, Theorem A.4] modified Calderón's extension to show that for each non-negative integer $k$, there exists an extension operator $E_{M_k} : W^s(\Omega) \to W^s(\mathbb{R}^d)$ that is bounded for $s \in [k, k+1)$. The proof of this theorem reveals that the norm of $E_{M_k}$ depends on $k$ but not on $s$:
\[
\|E_{M_k} u\|_{H_S^s(\mathbb{R}^d)} \lesssim \|u\|_{H_S^s(\Omega)},
\tag{A.37}
\]
where the constant is independent of $u$ and $s \in [k, k+1)$.

(vi) Rychkov [182] further modified Calderón's extension and proved that there exists an extension operator $E_R : H^s(\Omega) \to H^s(\mathbb{R}^d)$, not depending on $s \in \mathbb{R}$, that is continuous for all $s$:
\[
\|E_R u\|_{H_F^s(\mathbb{R}^d)} \lesssim \|u\|_{H_F^s(\Omega)}.
\tag{A.38}
\]
In particular, when $\Omega$ is a Lipschitz hypograph in $\mathbb{R}^d$, McLean [147, Theorem A.1] obtained a simple extension by reflection in the boundary of $\Omega$. Since we use this result in Section A.2.5, we state it (without detailed proof) here.

Theorem A.6. Let $\Omega$ be a Lipschitz hypograph as in Definition A.1, and let $E_M : W^s(\Omega) \to W^s(\mathbb{R}^d)$ be defined by
\[
E_M u(x) =
\begin{cases}
u(x), & x \in \overline{\Omega}, \\
u(x', 2\zeta(x') - x_d), & x \in \mathbb{R}^d \setminus \overline{\Omega},
\end{cases}
\tag{A.39}
\]
where $x' \in \mathbb{R}^{d-1}$ and $x = (x', x_d)$. Then $E_M$ is bounded for $s \in [0,1]$ and the norm of $E_M$ is independent of $s$.

Proof. This result is proved in [147, Theorem A.1]. The proof also reveals that
\[
\|E_M u\|_{H_S^s(\mathbb{R}^d)} \lesssim \|u\|_{H_S^s(\Omega)}, \qquad 0 \le s \le 1,
\]
with a constant independent of $u$ and $s$.
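A minimal executable sketch (ours, not from the book) of the reflection formula (A.39) for $d = 2$, with a hypothetical Lipschitz function $\zeta(x_1) = |x_1|$; it checks that $E_M u$ restricts to $u$ on $\Omega$ and that the two branches agree on the boundary $x_2 = \zeta(x_1)$, so the extension of a continuous $u$ is continuous.

```python
# A sketch (ours, not from the book) of the reflection extension (A.39) for
# d = 2 on the hypograph Omega = { (x1, x2) : x2 < zeta(x1) }, with the
# hypothetical Lipschitz function zeta(x1) = |x1| (Lipschitz constant L = 1).

def extend_by_reflection(u, zeta):
    """Return E_M u, defined on all of R^2, from u defined on the hypograph."""
    def Eu(x1, x2):
        if x2 <= zeta(x1):                    # x in the closure of Omega
            return u(x1, x2)
        return u(x1, 2.0 * zeta(x1) - x2)     # reflected point lies in Omega
    return Eu

zeta = abs
u = lambda x1, x2: x1 + 2.0 * x2              # a sample continuous function
Eu = extend_by_reflection(u, zeta)

assert Eu(0.5, -1.0) == u(0.5, -1.0)          # E_M u restricts to u on Omega
# The two branches agree as x2 -> zeta(x1), so E_M u stays continuous:
assert abs(Eu(0.5, 0.5 + 1e-9) - Eu(0.5, 0.5 - 1e-9)) < 1e-6
```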
A.2.5 Equivalence of norms

Recall that in the case of Sobolev spaces of functions defined on the whole of $\mathbb{R}^d$, the interpolation norm with a proper normalisation factor is equal to the norm defined by Fourier transforms, see (A.7) and (A.8), and both norms are equivalent to the Slobodetski norm; see Theorem A.5. In general, the equivalence constants depend on $s$, but when $s \in [\delta, 1/2)$ for some $\delta \in (0, 1/2)$, the constants depend only on $\delta$; see Corollary A.1. We now use these results to study the equivalence of different Sobolev norms of functions defined on $\Omega$. The main tool is the extension operators discussed in Section A.2.4.
The following theorem shows that if $\Omega$ is a Lipschitz domain or a Lipschitz hypograph, then $H^s(\Omega) = W^s(\Omega)$ with equivalent norms.

Theorem A.7. (i) Assume that $\Omega$ is a Lipschitz domain. For all $s \ge 0$, the two spaces $H^s(\Omega)$ and $W^s(\Omega)$ are equal with equivalent norms, i.e., for any $u \in H^s(\Omega)$,
\[
\|u\|_{H_S^s(\Omega)} \simeq \|u\|_{H_F^s(\Omega)},
\]
where the constants are independent of $u$ but may depend on $\Omega$ and $s$.

(ii) Assume that $\Omega$ is a Lipschitz domain or a Lipschitz hypograph. If $0 < \delta \le s < 1/2$ for some $\delta \in (0, 1/2)$, then part (i) holds true with the equivalence constants independent of $s$ and $u$, but possibly depending on $\delta$ and $\Omega$.

Proof. Part (i) is proved in [138, 147]. The proof is presented here for completeness. Consider $u \in H^s(\Omega)$ and let $U \in H^s(\mathbb{R}^d)$ be such that $u = U|_\Omega$ and $\|u\|_{H_F^s(\Omega)} = \|U\|_{H_F^s(\mathbb{R}^d)}$; see (A.19). Then it follows successively from the definitions of the Slobodetski norms (A.10) and (A.26), and Theorem A.5, that
\[
\|u\|_{H_S^s(\Omega)} \le \|U\|_{H_S^s(\mathbb{R}^d)} \lesssim \|U\|_{H_F^s(\mathbb{R}^d)} = \|u\|_{H_F^s(\Omega)}.
\tag{A.40}
\]
Therefore, $H^s(\Omega) \subset W^s(\Omega)$ with continuous inclusion.

Conversely, consider $u \in W^s(\Omega)$. Let $E_S : W^s(\Omega) \to W^s(\mathbb{R}^d)$ be the extension operator discussed in Section A.2.4. Then (A.18), Theorem A.5, and the boundedness of $E_S$ imply
\[
\|u\|_{H_F^s(\Omega)} \le \|E_S u\|_{H_F^s(\mathbb{R}^d)} \lesssim \|E_S u\|_{H_S^s(\mathbb{R}^d)} \lesssim \|u\|_{H_S^s(\Omega)},
\tag{A.41}
\]
with constants depending on $s$ and the norm of $E_S$. Therefore, $W^s(\Omega) \subset H^s(\Omega)$ with continuous inclusion.

To prove part (ii), we note that the equivalence constants in (A.40) and (A.41) are independent of $s$ if $s \in [\delta, 1/2)$; see Corollary A.1. If $\Omega$ is a Lipschitz domain, we choose the extension operator to be $E_{M_0}$ of Section A.2.4 part (v), and if $\Omega$ is a Lipschitz hypograph, we choose $E$ to be $E_M$ defined by (A.39). Then by (A.37) and Theorem A.6, we obtain the last inequality in (A.41) with a constant independent of $u$ and $s \in [0,1]$, though it may depend on $\delta$. This proves the theorem.

The next results show similar properties for $H_{IF}^s(\Omega)$ and $\widetilde{H}_{IF}^s(\Omega)$, which are defined by (A.31). First we prove a lemma.
Lemma A.1. Assume that $\Omega$ is a Lipschitz domain in $\mathbb{R}^d$. For any integer $r$, if $s = \theta r$ for some $\theta \in [0,1]$, then
\[
H^s(\Omega) = H_{IF}^s(\Omega)
\quad\text{and}\quad
\widetilde{H}^s(\Omega) = \widetilde{H}_{IF}^s(\Omega)
\]
with equivalent norms:
\[
\begin{aligned}
\|u\|_{H_{IF}^s(\Omega)} &\le \|u\|_{H_F^s(\Omega)} \le C_r \|u\|_{H_{IF}^s(\Omega)}, & u &\in H^s(\Omega), \\
C_r^{-1} \|u\|_{\widetilde{H}_{IF}^s(\Omega)} &\le \|u\|_{\widetilde{H}_F^s(\Omega)} \le \|u\|_{\widetilde{H}_{IF}^s(\Omega)}, & u &\in \widetilde{H}^s(\Omega).
\end{aligned}
\]
Proof. Put
\[
\begin{aligned}
A_0 &= H^0(\Omega), & A_1 &= H^r(\Omega), & A_s &= [A_0, A_1]_\theta = H_{IF}^s(\Omega), & \mathcal{A} &= (A_0, A_1), \\
B_0 &= H^0(\mathbb{R}^d), & B_1 &= H^r(\mathbb{R}^d), & B_s &= [B_0, B_1]_\theta = H^s(\mathbb{R}^d), & \mathcal{B} &= (B_0, B_1).
\end{aligned}
\]
For any $u \in H^s(\Omega)$, due to (A.19) and (A.7), there exists $U \in H^s(\mathbb{R}^d)$ such that
\[
u = U|_\Omega
\quad\text{and}\quad
\|u\|_{H_F^s(\Omega)} = \|U\|_{H_F^s(\mathbb{R}^d)} = \|U\|_{H_I^s(\mathbb{R}^d)}.
\]
Since $U \in B_s$, there exist $U_j \in B_j$, $j = 0,1$, satisfying $U = U_0 + U_1$. Define $u_j = U_j|_\Omega$. Then $u = u_0 + u_1$. Recalling the definition of the $K$-functionals in (A.30), we deduce
\[
K_F(t, u, \mathcal{A})^2 \le \|u_0\|_{H_F^0(\Omega)}^2 + t^2 \|u_1\|_{H_F^r(\Omega)}^2 \le \|U_0\|_{H_F^0(\mathbb{R}^d)}^2 + t^2 \|U_1\|_{H_F^r(\mathbb{R}^d)}^2,
\]
and thus $K_F(t, u, \mathcal{A}) \lesssim K(t, U, \mathcal{B})$ with a constant depending on $s$ (and $\Omega$) only. (Note that it is not necessary to distinguish the different $K$-functionals for $U$ because they are all equal.) This together with (A.7) implies $H^s(\Omega) \subset A_s$ and
\[
\|u\|_{H_{IF}^s(\Omega)} \le \|U\|_{H_I^s(\mathbb{R}^d)} = \|U\|_{H_F^s(\mathbb{R}^d)} = \|u\|_{H_F^s(\Omega)}.
\tag{A.42}
\]
Conversely, consider $u \in A_s$. Let $E_R : A_j \to B_j$, $j = 0,1$, be the extension operator defined in Section A.2.4, part (vi), satisfying, see (A.38),
\[
\|E_R u\|_{H_I^{r_j}(\mathbb{R}^d)} = \|E_R u\|_{B_j} \le \lambda_j \|u\|_{A_j} = \lambda_j \|u\|_{H_{IF}^{r_j}(\Omega)}, \qquad j = 0,1,
\]
where $r_0 = 0$ and $r_1 = r$. By Theorem A.4, the operator $E_R$ maps $A_s$ into $B_s = H^s(\mathbb{R}^d)$ such that
\[
\|E_R u\|_{H_I^s(\mathbb{R}^d)} \le \lambda_0^{1-\theta} \lambda_1^\theta \|u\|_{H_{IF}^s(\Omega)} \le \max\{1, \lambda_0 \lambda_1\} \|u\|_{H_{IF}^s(\Omega)}.
\]
Consequently, with $C_r := \max\{1, \lambda_0 \lambda_1\}$,
\[
\|u\|_{H_F^s(\Omega)} \le \|E_R u\|_{H_F^s(\mathbb{R}^d)} = \|E_R u\|_{H_I^s(\mathbb{R}^d)} \le C_r \|u\|_{H_{IF}^s(\Omega)}.
\tag{A.43}
\]
Thus, $A_s \subset H^s(\Omega)$. Therefore $A_s = H^s(\Omega)$ for all real numbers $s = \theta r$, $\theta \in [0,1]$, and the two norms $\|\cdot\|_{H_F^s(\Omega)}$ and $\|\cdot\|_{H_{IF}^s(\Omega)}$ are equivalent. The constants depend on $r$ but not on $s$.

The results for $\widetilde{H}^s(\Omega)$ follow from duality by using Theorem A.2 and (A.24). Indeed, for any $s = \theta r$, it follows from the above that $H^s(\Omega) = [H^0(\Omega), H^r(\Omega)]_\theta$, so that
\[
\widetilde{H}^{-s}(\Omega) = \bigl( [H^0(\Omega), H^r(\Omega)]_\theta \bigr)^* = \bigl[ (H^0(\Omega))^*, (H^r(\Omega))^* \bigr]_\theta = [\widetilde{H}^0(\Omega), \widetilde{H}^{-r}(\Omega)]_\theta.
\]
The norm equivalence can be seen from
\[
\|u\|_{\widetilde{H}_F^{-s}(\Omega)} = \sup_{0 \ne v \in H^s(\Omega)} \frac{\langle u, v \rangle_\Omega}{\|v\|_{H_F^s(\Omega)}}
\]
together with (A.42) and (A.43).
We slightly extend the above lemma to obtain the following result.

Lemma A.2. For any positive integer $r$ and $s \in [-r, r]$, if we define
\[
K_F(t, u, \mathcal{A}) := \inf_{\substack{u_0 \in H^{-r}(\Omega),\ u_1 \in H^r(\Omega) \\ u_0 + u_1 = u}} \bigl( \|u_0\|_{H_F^{-r}(\Omega)}^2 + t^2 \|u_1\|_{H_F^r(\Omega)}^2 \bigr)^{1/2}, \qquad u \in H^{-r}(\Omega),
\]
where $\mathcal{A} := (H^{-r}(\Omega), H^r(\Omega))$, then the corresponding interpolation space satisfies $[H^{-r}(\Omega), H^r(\Omega)]_\theta = H^s(\Omega)$ with equivalent norms, where $\theta = 1/2 + s/(2r)$.

Proof. The proof follows exactly along the lines of that of Lemma A.1 if we replace $H^0(\Omega)$ in that lemma with $H^{-r}(\Omega)$. We omit the details.

The above two lemmas can be extended as shown in the following theorem.

Theorem A.8. Assume that $\Omega$ is a Lipschitz domain in $\mathbb{R}^d$. For any $s_0, s_1 \in \mathbb{R}$, $s_0 \le s_1$, if $s = (1-\eta) s_0 + \eta s_1$ for some $\eta \in (0,1)$, then
\[
H^s(\Omega) = [H^{s_0}(\Omega), H^{s_1}(\Omega)]_{\eta, F}
\quad\text{and}\quad
\widetilde{H}^s(\Omega) = [\widetilde{H}^{s_0}(\Omega), \widetilde{H}^{s_1}(\Omega)]_{\eta, F}
\]
with equivalent norms, where the constants are independent of $u$ and $s$, but may depend on $\Omega$, $s_0$, and $s_1$. Here, the index $F$ in the interpolation space indicates that the $K_F$-functional is used.

Proof. The theorem is proved by using the reiteration theorem, Theorem A.3. We present the proof for $H^s(\Omega)$ only. First consider $0 \le s_0 \le s_1$ and let $r$ be an integer satisfying $s_1 \le r$. Invoking Lemma A.1 and Theorem A.3 with
\[
A_0 = H^0(\Omega), \quad A_1 = H^r(\Omega), \quad \theta = s/r, \quad \theta_j = s_j/r, \quad j = 0,1,
\]
we obtain the required result. A similar result holds for $-r \le s_0 \le s_1 \le 0$.

Now consider the case when $s_0 < 0 \le s_1$. Let $r > 0$ be an integer such that $-r \le s_0 < 0 \le s_1 \le r$. It follows from Lemma A.2 that
\[
H^{s_j}(\Omega) = [H^{-r}(\Omega), H^r(\Omega)]_{\theta_j, F}, \qquad \theta_j = \frac12 + \frac{s_j}{2r}, \quad j = 0,1.
\]
Invoking Theorem A.3 yields
\[
\begin{aligned}
[H^{s_0}(\Omega), H^{s_1}(\Omega)]_{\eta, F}
&= \bigl[ [H^{-r}(\Omega), H^r(\Omega)]_{\theta_0, F},\ [H^{-r}(\Omega), H^r(\Omega)]_{\theta_1, F} \bigr]_{\eta, F} \\
&= [H^{-r}(\Omega), H^r(\Omega)]_{(1-\eta)\theta_0 + \eta\theta_1,\, F}
= [H^{-r}(\Omega), H^r(\Omega)]_{\frac12 + \frac{s}{2r},\, F}
= H^s(\Omega),
\end{aligned}
\]
where in the last step we used Lemma A.2 again. This completes the proof of the theorem.

We now prove similar results for the interpolation spaces $H_I^s(\Omega)$ and $\widetilde{H}_I^s(\Omega)$ constructed with the $K$-functional defined by (A.34).
Lemma A.3. Assume that $\Omega$ is a Lipschitz domain in $\mathbb{R}^d$. For any integer $r$, if $s = \theta r$ for some $\theta \in (0,1)$, then
\[
H^s(\Omega) = H_I^s(\Omega)
\quad\text{and}\quad
\widetilde{H}^s(\Omega) = \widetilde{H}_I^s(\Omega)
\]
with equivalent norms:
\[
\begin{aligned}
\|u\|_{H_I^s(\Omega)} &\le \|u\|_{H_F^s(\Omega)} \le C_r \|u\|_{H_I^s(\Omega)}, & u &\in H^s(\Omega), \\
\|u\|_{\widetilde{H}_I^s(\Omega)} &\le \|u\|_{\widetilde{H}_F^s(\Omega)} \le C_r \|u\|_{\widetilde{H}_I^s(\Omega)}, & u &\in \widetilde{H}^s(\Omega).
\end{aligned}
\]
Proof. The proof follows along the lines of that of Lemma A.1. A slight difference is the use of the Slobodetski norm in the definition of the $K$-functional. Therefore, we first consider the case when $r$ is a non-negative integer. Put
\[
\begin{aligned}
A_0 &= H^0(\Omega), & A_1 &= H^r(\Omega), & A_s &= [A_0, A_1]_\theta, & \mathcal{A} &= (A_0, A_1), \\
B_0 &= H^0(\mathbb{R}^d), & B_1 &= H^r(\mathbb{R}^d), & B_s &= [B_0, B_1]_\theta, & \mathcal{B} &= (B_0, B_1).
\end{aligned}
\]
For any $u \in H^s(\Omega)$, due to (A.19) and (A.7), there exists $U \in H^s(\mathbb{R}^d)$ such that
\[
u = U|_\Omega
\quad\text{and}\quad
\|u\|_{H_F^s(\Omega)} = \|U\|_{H_F^s(\mathbb{R}^d)} = \|U\|_{H_I^s(\mathbb{R}^d)}.
\]
Since $U \in B_s$, there exist $U_j \in B_j$, $j = 0,1$, satisfying $U = U_0 + U_1$. Define $u_j = U_j|_\Omega$. Then $u = u_0 + u_1$. Recalling the definition of the $K$-functionals in (A.34) and noting (A.33), we deduce
\[
K_S(t, u, \mathcal{A})^2 \le \|u_0\|_{H_F^0(\Omega)}^2 + t^2 \|u_1\|_{H_F^r(\Omega)}^2 \le \|U_0\|_{H_F^0(\mathbb{R}^d)}^2 + t^2 \|U_1\|_{H_F^r(\mathbb{R}^d)}^2,
\]
and thus $K_S(t, u, \mathcal{A}) \lesssim K(t, U, \mathcal{B})$ with a constant depending on $s$ (and $\Omega$) only. (Here again, we do not distinguish the $K$-functionals for $U$ because they are all equal.) This together with (A.7) implies $H^s(\Omega) \subset A_s$ and
\[
\|u\|_{H_I^s(\Omega)} \le \|U\|_{H_I^s(\mathbb{R}^d)} = \|U\|_{H_F^s(\mathbb{R}^d)} = \|u\|_{H_F^s(\Omega)}.
\tag{A.44}
\]
Conversely, consider $u \in A_s$. Let $E_S : A_j \to B_j$, $j = 0,1$, be the extension operator defined in Section A.2.4, part (iv), satisfying
\[
\|E_S u\|_{H_I^{r_j}(\mathbb{R}^d)} = \|E_S u\|_{B_j} \le \lambda_j \|u\|_{A_j} = \lambda_j \|u\|_{H_I^{r_j}(\Omega)}, \qquad j = 0,1,
\]
where $r_0 = 0$ and $r_1 = r$. By Theorem A.4, the operator $E_S$ maps $A_s$ into $B_s = H^s(\mathbb{R}^d)$ such that
\[
\|E_S u\|_{H_I^s(\mathbb{R}^d)} \le \lambda_0^{1-\theta} \lambda_1^\theta \|u\|_{H_I^s(\Omega)} \le \max\{1, \lambda_0 \lambda_1\} \|u\|_{H_I^s(\Omega)}.
\]
Consequently,
\[
\|u\|_{H_F^s(\Omega)} \le \|E_S u\|_{H_F^s(\mathbb{R}^d)} = \|E_S u\|_{H_I^s(\mathbb{R}^d)} \le \max\{1, \lambda_0 \lambda_1\} \|u\|_{H_I^s(\Omega)}.
\tag{A.45}
\]
Thus, $A_s \subset H^s(\Omega)$. Therefore $A_s = H^s(\Omega)$ for all real numbers $s \in [0, r]$, and the two norms $\|\cdot\|_{H_F^s(\Omega)}$ and $\|\cdot\|_{H_I^s(\Omega)}$ are equivalent. The constants depend on $r$ but not on $s$.

Differently from Lemma A.1, we cannot repeat the above argument for $r < 0$. We proceed by proving the result for $\widetilde{H}_I^s(\Omega)$ with $s \in [0, r]$. Put
\[
\widetilde{A}_0 = \widetilde{H}^0(\Omega), \quad \widetilde{A}_1 = \widetilde{H}^r(\Omega), \quad \widetilde{A}_s = [\widetilde{A}_0, \widetilde{A}_1]_\theta, \quad \widetilde{\mathcal{A}} = (\widetilde{A}_0, \widetilde{A}_1),
\]
where the $K$-functional is defined by (A.34). Proceeding as in the first part of this proof, with $U \in H^s(\mathbb{R}^d)$ being the zero extension of $u \in \widetilde{H}^s(\Omega)$, we obtain, instead of (A.44),
\[
\|u\|_{\widetilde{H}_I^s(\Omega)} \le \|U\|_{H_I^s(\mathbb{R}^d)} = \|U\|_{H_F^s(\mathbb{R}^d)} = \|u\|_{\widetilde{H}_F^s(\Omega)}.
\]
Conversely, consider $u \in \widetilde{H}^s(\Omega)$. With $E_S$ defined as above, we obtain
\[
\|E_S u\|_{H_I^{r_j}(\mathbb{R}^d)} \le \lambda_j \|u\|_{H_I^{r_j}(\Omega)} \lesssim \|u\|_{\widetilde{H}_I^{r_j}(\Omega)}, \qquad j = 0,1,
\]
where in the last step we used Poincaré's inequality, with the constant depending on $r$. Hence, by interpolation,
\[
\|E_S u\|_{H_I^s(\mathbb{R}^d)} \lesssim \|u\|_{\widetilde{H}_I^s(\Omega)}.
\]
The constant depends on $r$ but is independent of $s \in (0, r)$. Similarly to (A.45) we have, recalling that $U$ is the zero extension of $u$,
\[
\|u\|_{\widetilde{H}_F^s(\Omega)} = \|U\|_{H_F^s(\mathbb{R}^d)} \lesssim \|U\|_{H_S^s(\mathbb{R}^d)} \le \|E_S u\|_{H_S^s(\mathbb{R}^d)} \lesssim \|E_S u\|_{H_I^s(\mathbb{R}^d)} \lesssim \|u\|_{\widetilde{H}_I^s(\Omega)}.
\]
Thus, $\widetilde{A}_s \subset \widetilde{H}^s(\Omega)$. Therefore, $\widetilde{A}_s = \widetilde{H}^s(\Omega)$ for all $s \in [0, r]$ with equivalence of norms.

The results for $r < 0$ for both spaces $H^s(\Omega)$ and $\widetilde{H}^s(\Omega)$ are derived by duality arguments. We omit the details.

Similarly to Theorem A.8, we now state without proof the following theorem.
Theorem A.9. Assume that $\Omega$ is a Lipschitz domain in $\mathbb{R}^d$. For any $s_0, s_1 \in \mathbb{R}$, $s_0 \le s_1$, if $s = (1-\eta) s_0 + \eta s_1$ for some $\eta \in (0,1)$, then
\[
H^s(\Omega) = [H^{s_0}(\Omega), H^{s_1}(\Omega)]_\eta
\quad\text{and}\quad
\widetilde{H}^s(\Omega) = [\widetilde{H}^{s_0}(\Omega), \widetilde{H}^{s_1}(\Omega)]_\eta
\]
with equivalent norms, where the constants are independent of $u$ and $s$, but may depend on $\Omega$, $s_0$, and $s_1$. Here, we recall that the interpolation space is defined with the $K_S$-functional given by (A.34).

Thanks to Theorems A.7, A.8, and A.9, in this book (apart from this subsection) we use only one notation $H^s(\Omega)$ to denote the space with its different equivalent norms; similarly for $\widetilde{H}^s(\Omega)$. We stress that the interpolation norms to be used are defined with the $K_S$-functional. This is the correct choice for the study of the scaling property in Section A.2.7 and the global–local property in Section A.2.9. Finally, it is remarked that the recent paper [109] also shows some results on the equivalence of fractional-order Sobolev seminorms.
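As a concrete instance (ours, not from the book) of Theorem A.9, take $s_0 = 0$, $s_1 = 1$, and $\eta = 1/2$:

```latex
\[
H^{1/2}(\Omega) = [H^0(\Omega), H^1(\Omega)]_{1/2}
\quad\text{and}\quad
\widetilde{H}^{1/2}(\Omega) = [\widetilde{H}^0(\Omega), \widetilde{H}^1(\Omega)]_{1/2}
\]
% with equivalent norms; these half-integer spaces are the energy spaces of the
% hypersingular and weakly-singular boundary integral operators studied in this book.
```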
A.2.6 The weighted norms

As will be seen in Section A.2.7, not all Sobolev norms are scalable. For this reason, we introduce weighted norms.

A.2.6.1 The weighted norms in $H^s(\Omega)$ and $\widetilde{H}^s(\Omega)$ for $s > 0$

As will be seen in Section A.2.7, the $L^2$-norm and the seminorm defining the Slobodetski norm scale differently. In order to ensure scalability, we introduce weighted norms. Different weights have different uses; three different weighted norms are used in this book. For $s = k + \mu > 0$ with $k = 0, 1, 2, \ldots$ and $\mu \in [0, 1)$ we define
\[
\begin{aligned}
\|v\|_{H_w^s(\Omega)}^2 &:= \frac{1}{\tau} \|v\|_{L^2(\Omega)}^2 + |v|_{H^s(\Omega)}^2, \\
{}_*\|v\|_{H_w^s(\Omega)}^2 &:= \frac{1}{\tau^d} \|v\|_{L^2(\Omega)}^2 + |v|_{H^s(\Omega)}^2, \\
\|v\|_{\widetilde{H}_w^s(\Omega)}^2 &:=
\begin{cases}
\displaystyle \|v\|_{H_w^s(\Omega)}^2 + \sum_{|\alpha|=k} \int_\Omega \frac{|\partial^\alpha v(x)|^2}{\operatorname{dist}(x, \partial\Omega)^{2\mu}} \, dx, & k \ne 0, \\[2mm]
\displaystyle |v|_{H^s(\Omega)}^2 + \int_\Omega \frac{|v(x)|^2}{\operatorname{dist}(x, \partial\Omega)^{2s}} \, dx, & k = 0,
\end{cases}
\end{aligned}
\tag{A.46}
\]
where $\tau = \operatorname{diam}(\Omega)$ and $|v|_{H^s(\Omega)}$ is the Slobodetski seminorm defined in (A.25). Note that if $d = 1$ then $\|v\|_{H_w^s(\Omega)} = {}_*\|v\|_{H_w^s(\Omega)}$.
A.2.6.2 The interpolation norms with weights in $H^s(\Omega)$ and $\widetilde{H}^{-s}(\Omega)$ for $s \in [0,1]$ and $\Omega \subset \mathbb{R}^2$

The non-scalability of the norms $\|\cdot\|_{H_I^s(\Omega)}$ and $\|\cdot\|_{\widetilde{H}_I^{-s}(\Omega)}$ for $s > 0$, as will be seen in Section A.2.7, prompts us to define another type of interpolation norm which uses the weighted norms defined in Subsection A.2.6.1 instead of the Slobodetski norm. We restrict ourselves to the case when $s \in [0,1]$ and $\Omega \subset \mathbb{R}^2$. For any $u \in H^s(\Omega)$, $0 \le s \le 1$, recalling the definition of the weighted norm in (A.46), we define
\[
{}_*\|u\|_{H_I^s(\Omega)} :=
\begin{cases}
\|u\|_{L^2(\Omega)}, & s = 0, \\
{}_*\|u\|_{H_w^1(\Omega)}, & s = 1, \\
\|u\|_{[H^0(\Omega), H^1(\Omega)]_{*,\theta}}, & s = \theta \in (0,1),
\end{cases}
\tag{A.47}
\]
where the interpolation norm $\|u\|_{[H^0(\Omega), H^1(\Omega)]_{*,\theta}}$ is defined as in (A.1) with the $K$-functional being
\[
K_*(t, u, \mathcal{A}) := \inf_{\substack{u_0 \in H^0,\ u_1 \in H^1 \\ u_0 + u_1 = u}} \bigl( {}_*\|u_0\|_{H_I^0(\Omega)}^2 + t^2 \, {}_*\|u_1\|_{H_I^1(\Omega)}^2 \bigr)^{1/2}.
\]
The norm ${}_*\|u\|_{\widetilde{H}_I^{-s}(\Omega)}$ is defined by duality:
\[
{}_*\|u\|_{\widetilde{H}_I^{-s}(\Omega)} := \sup_{0 \ne v \in H^s(\Omega)} \frac{\langle u, v \rangle_\Omega}{{}_*\|v\|_{H_I^s(\Omega)}}.
\tag{A.48}
\]
We are particularly interested in the norm ${}_*\|\cdot\|_{\widetilde{H}_I^{-1/2}(\Omega)}$ for the analysis of the weakly-singular integral equation.
A.2.7 Scaling properties

The next lemma shows how the Slobodetski norms scale when the domain $\Omega$ is rescaled.

Lemma A.4. Assume that $\Omega$ is an open and bounded domain in $\mathbb{R}^d$, $d \ge 1$, such that $\tau := \operatorname{diam}(\Omega) < 1$, and that $v$ is a function defined on $\Omega$. Let
\[
\widehat{\Omega} := \bigl\{ \widehat{x} \in \mathbb{R}^d : \widehat{x} = x/\tau,\ x \in \Omega \bigr\}, \qquad \widehat{v}(\widehat{x}) = v(x) \quad \forall \widehat{x} \in \widehat{\Omega},\ x = \tau \widehat{x} \in \Omega.
\]
Then, for $s = k + \mu$, $k = 0, 1, 2, \ldots$, and $\mu \in [0, 1)$, the following relations hold:
\[
|v|_{H^s(\Omega)}^2 = \tau^{d-2s} |\widehat{v}|_{H^s(\widehat{\Omega})}^2,
\qquad
\tau^d \|\widehat{v}\|_{H_S^s(\widehat{\Omega})}^2 \le \|v\|_{H_S^s(\Omega)}^2 \le \tau^{d-2s} \|\widehat{v}\|_{H_S^s(\widehat{\Omega})}^2.
\]
Proof. We first note that
\[
\|v\|_{L^2(\Omega)}^2 = \tau^d \|\widehat{v}\|_{L^2(\widehat{\Omega})}^2
\quad\text{and}\quad
\partial_{\widehat{x}}^\alpha \widehat{v}(\widehat{x}) = \tau^{|\alpha|} \partial_x^\alpha v(x),
\tag{A.49}
\]
where $\partial_x^\alpha$ is defined in (A.11). Hence, if $s = k = 1, 2, \ldots$, then
\[
|v|_{H^s(\Omega)}^2 = \sum_{|\alpha|=k} \|\partial_x^\alpha v\|_{L^2(\Omega)}^2 = \tau^d \sum_{|\alpha|=k} \tau^{-2k} \|\partial_{\widehat{x}}^\alpha \widehat{v}\|_{L^2(\widehat{\Omega})}^2 = \tau^{d-2s} |\widehat{v}|_{H^s(\widehat{\Omega})}^2.
\]
If $s = k + \mu$ with $k = 0, 1, 2, \ldots$ and $\mu \in (0, 1)$, then
\[
\begin{aligned}
|v|_{H^s(\Omega)}^2
&= \sum_{|\alpha|=k} \int_{\Omega\times\Omega} \frac{|\partial_x^\alpha v(x) - \partial_x^\alpha v(y)|^2}{|x-y|^{d+2\mu}} \, dx \, dy \\
&= \sum_{|\alpha|=k} \int_{\widehat{\Omega}\times\widehat{\Omega}} \frac{\tau^{-2k} |\partial_{\widehat{x}}^\alpha \widehat{v}(\widehat{x}) - \partial_{\widehat{x}}^\alpha \widehat{v}(\widehat{y})|^2}{\tau^{d+2\mu} |\widehat{x} - \widehat{y}|^{d+2\mu}} \, \tau^{2d} \, d\widehat{x} \, d\widehat{y} \\
&= \tau^{d-2k-2\mu} \sum_{|\alpha|=k} \int_{\widehat{\Omega}\times\widehat{\Omega}} \frac{|\partial_{\widehat{x}}^\alpha \widehat{v}(\widehat{x}) - \partial_{\widehat{x}}^\alpha \widehat{v}(\widehat{y})|^2}{|\widehat{x} - \widehat{y}|^{d+2\mu}} \, d\widehat{x} \, d\widehat{y}
= \tau^{d-2s} |\widehat{v}|_{H^s(\widehat{\Omega})}^2.
\end{aligned}
\]
This together with the first identity in (A.49) yields the result for the $\|\cdot\|_{H_S^s(\Omega)}$-norm, completing the proof of the lemma.

The non-scalability of the norm $\|\cdot\|_{H_S^s(\Omega)}$ proved in Lemma A.4 is the reason for the weighted norms introduced in Subsection A.2.6.1. The following lemma shows how these weighted norms scale when the domain $\Omega$ is rescaled. We are particularly interested in the cases when they are invariant under scaling, and that is the main reason for the different weights used in the definition above. The three most important cases are the invariances in (A.50), (A.51), and (A.52).

Lemma A.5. Assume that $\Omega$ is an open and bounded domain in $\mathbb{R}^d$, $d \ge 1$, such that $\tau := \operatorname{diam}(\Omega) < 1$, and that $v$ is a function defined on $\Omega$. Let
\[
\widehat{\Omega} := \bigl\{ \widehat{x} \in \mathbb{R}^d : \widehat{x} = x/\tau,\ x \in \Omega \bigr\}, \qquad \widehat{v}(\widehat{x}) = v(x) \quad \forall \widehat{x} \in \widehat{\Omega},\ x = \tau \widehat{x} \in \Omega.
\]
(i) For $s = k + \mu$, $k = 0, 1, 2, \ldots$ and $\mu \in [0, 1)$, the following statements hold:
\[
\begin{aligned}
\tau^{d-2s} \|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 &\le \|v\|_{H_w^s(\Omega)}^2 \le \tau^{d-1} \|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 & &\text{if } s \le \tfrac12, \\
\tau^{d-1} \|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 &\le \|v\|_{H_w^s(\Omega)}^2 \le \tau^{d-2s} \|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 & &\text{if } s > \tfrac12, \\
\tau^{d-2s} \, {}_*\|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 &\le {}_*\|v\|_{H_w^s(\Omega)}^2 \le {}_*\|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 & &\text{if } s \le \tfrac d2, \\
{}_*\|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 &\le {}_*\|v\|_{H_w^s(\Omega)}^2 \le \tau^{d-2s} \, {}_*\|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 & &\text{if } s > \tfrac d2.
\end{aligned}
\]
In particular,
\[
\|v\|_{H_w^{1/2}(\Omega)} = \|\widehat{v}\|_{H_w^{1/2}(\widehat{\Omega})} \quad\text{if } d = 1 \text{ and } s = \tfrac12,
\tag{A.50}
\]
and
\[
{}_*\|v\|_{H_w^1(\Omega)} = {}_*\|\widehat{v}\|_{H_w^1(\widehat{\Omega})} \quad\text{if } d = 2 \text{ and } s = 1.
\tag{A.51}
\]
(ii) For $s = k + \mu$, $k = 1, 2, \ldots$, $\mu \in [0, 1)$, we have
\[
\tau^{d-1} \|\widehat{v}\|_{\widetilde{H}_w^s(\widehat{\Omega})}^2 \le \|v\|_{\widetilde{H}_w^s(\Omega)}^2 \le \tau^{d-2s} \|\widehat{v}\|_{\widetilde{H}_w^s(\widehat{\Omega})}^2,
\]
and for $s = \mu \in [0, 1)$,
\[
\|v\|_{\widetilde{H}_w^s(\Omega)}^2 = \tau^{d-2s} \|\widehat{v}\|_{\widetilde{H}_w^s(\widehat{\Omega})}^2.
\]
In particular, if $d = 1$ and $s = 1/2$ then
\[
\|v\|_{\widetilde{H}_w^{1/2}(\Omega)} = \|\widehat{v}\|_{\widetilde{H}_w^{1/2}(\widehat{\Omega})}.
\tag{A.52}
\]
Proof. Lemma A.4 and (A.49) give
\[
\|v\|_{L^2(\Omega)}^2 = \tau^d \|\widehat{v}\|_{L^2(\widehat{\Omega})}^2
\quad\text{and}\quad
|v|_{H^s(\Omega)}^2 = \tau^{d-2s} |\widehat{v}|_{H^s(\widehat{\Omega})}^2.
\tag{A.53}
\]
It then follows that
\[
\|v\|_{H_w^s(\Omega)}^2 = \tau^{d-1} \|\widehat{v}\|_{L^2(\widehat{\Omega})}^2 + \tau^{d-2s} |\widehat{v}|_{H^s(\widehat{\Omega})}^2
\]
and
\[
{}_*\|v\|_{H_w^s(\Omega)}^2 = \|\widehat{v}\|_{L^2(\widehat{\Omega})}^2 + \tau^{d-2s} |\widehat{v}|_{H^s(\widehat{\Omega})}^2.
\]
Hence, noting that $\tau < 1$ and that
\[
\|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 = {}_*\|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 = \|\widehat{v}\|_{L^2(\widehat{\Omega})}^2 + |\widehat{v}|_{H^s(\widehat{\Omega})}^2
\]
(because $\operatorname{diam}(\widehat{\Omega}) = 1$), we can bound $\|v\|_{H_w^s(\Omega)}^2$ and ${}_*\|v\|_{H_w^s(\Omega)}^2$ below and above by $\|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2$ and ${}_*\|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2$, respectively, depending on the values of $s$. This proves part (i).

Moreover, since $\partial_{\widehat{x}}^\alpha \widehat{v}(\widehat{x}) = \tau^{|\alpha|} \partial_x^\alpha v(x)$, it is also easy to establish
\[
\sum_{|\alpha|=k} \int_\Omega \frac{|\partial_x^\alpha v(x)|^2}{\operatorname{dist}(x, \partial\Omega)^{2\mu}} \, dx = \tau^{d-2s} \sum_{|\alpha|=k} \int_{\widehat{\Omega}} \frac{|\partial_{\widehat{x}}^\alpha \widehat{v}(\widehat{x})|^2}{\operatorname{dist}(\widehat{x}, \partial\widehat{\Omega})^{2\mu}} \, d\widehat{x}.
\]
Hence, for $k \ne 0$ we have
\[
\|v\|_{\widetilde{H}_w^s(\Omega)}^2 = \tau^{d-1} \|\widehat{v}\|_{L^2(\widehat{\Omega})}^2 + \tau^{d-2s} |\widehat{v}|_{H^s(\widehat{\Omega})}^2 + \tau^{d-2s} \sum_{|\alpha|=k} \int_{\widehat{\Omega}} \frac{|\partial_{\widehat{x}}^\alpha \widehat{v}(\widehat{x})|^2}{\operatorname{dist}(\widehat{x}, \partial\widehat{\Omega})^{2\mu}} \, d\widehat{x},
\]
and for $k = 0$ we have
\[
\|v\|_{\widetilde{H}_w^s(\Omega)}^2 = \tau^{d-2s} |\widehat{v}|_{H^s(\widehat{\Omega})}^2 + \tau^{d-2s} \int_{\widehat{\Omega}} \frac{|\widehat{v}(\widehat{x})|^2}{\operatorname{dist}(\widehat{x}, \partial\widehat{\Omega})^{2s}} \, d\widehat{x} = \tau^{d-2s} \|\widehat{v}\|_{\widetilde{H}_w^s(\widehat{\Omega})}^2.
\]
The required estimates in part (ii) then follow.
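The identities (A.53) can be checked numerically; the following sketch (ours, not from the book) verifies, for $d = 1$, $s = 1$, and $v(x) = \sin x$ on $\Omega = (0, \tau)$, that the $L^2$-norm picks up $\tau^d$ while the seminorm picks up $\tau^{d-2s}$:

```python
import math

# Numerical check (a sketch, not from the book) of the scaling identities
# (A.53) for d = 1 and s = 1, where tau^d = tau and tau^{d-2s} = 1/tau:
#   ||v||_{L2(0,tau)}^2 = tau * ||vhat||_{L2(0,1)}^2,
#   |v|_{H1(0,tau)}^2   = (1/tau) * |vhat|_{H1(0,1)}^2,
# with v(x) = sin x on Omega = (0, tau) and vhat(xh) = v(tau * xh).

def midpoint(f, a, b, n=20000):
    """Composite midpoint rule for integrating f over (a, b)."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

tau = 0.3
l2_v = midpoint(lambda x: math.sin(x) ** 2, 0.0, tau)
l2_vhat = midpoint(lambda xh: math.sin(tau * xh) ** 2, 0.0, 1.0)
# chain rule: d/dxh vhat(xh) = tau * cos(tau * xh)
h1_v = midpoint(lambda x: math.cos(x) ** 2, 0.0, tau)
h1_vhat = midpoint(lambda xh: (tau * math.cos(tau * xh)) ** 2, 0.0, 1.0)

assert abs(l2_v - tau * l2_vhat) < 1e-8     # L2 norm scales with tau^d
assert abs(h1_v - h1_vhat / tau) < 1e-8     # H1 seminorm scales with tau^{d-2s}
```

Since both sides of each identity are computed independently, the agreement is a genuine consistency check of the exponents $d$ and $d - 2s$.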
The next lemma shows how the interpolation norms scale when the bounded domain $\Omega$ is rescaled; see [5, Lemma 4.3].

Lemma A.6. Assume that $\Omega$ is an open and bounded domain in $\mathbb{R}^d$, $d \ge 1$, such that $\tau := \operatorname{diam}(\Omega) < 1$, and that $v$ is a function defined on $\Omega$. Let $\widehat{\Omega} := \bigl\{ \widehat{x} \in \mathbb{R}^d : \widehat{x} = x/\tau,\ x \in \Omega \bigr\}$ and
\[
\widehat{v}(\widehat{x}) = v(x) \quad \forall \widehat{x} \in \widehat{\Omega},\ x = \tau \widehat{x} \in \Omega.
\]
Then, for $s \in [0, r]$:

(i) if $v \in \widetilde{H}^s(\Omega)$,
\[
\|v\|_{\widetilde{H}_I^s(\Omega)}^2 = \tau^{d-2s} \|\widehat{v}\|_{\widetilde{H}_I^s(\widehat{\Omega})}^2;
\]
(ii) if $v \in H^s(\Omega)$,
\[
\tau^d \|\widehat{v}\|_{H_I^s(\widehat{\Omega})}^2 \le \|v\|_{H_I^s(\Omega)}^2 \le \tau^{d-2s} \|\widehat{v}\|_{H_I^s(\widehat{\Omega})}^2;
\]
(iii) if $v \in H^{-s}(\Omega)$,
\[
\|v\|_{H_I^{-s}(\Omega)}^2 = \tau^{d+2s} \|\widehat{v}\|_{H_I^{-s}(\widehat{\Omega})}^2;
\]
(iv) if $v \in \widetilde{H}^{-s}(\Omega)$,
\[
\tau^{d+2s} \|\widehat{v}\|_{\widetilde{H}_I^{-s}(\widehat{\Omega})}^2 \le \|v\|_{\widetilde{H}_I^{-s}(\Omega)}^2 \le \tau^d \|\widehat{v}\|_{\widetilde{H}_I^{-s}(\widehat{\Omega})}^2.
\]
Proof. Since $\|v\|_{H_I^r(\Omega)} = \|v\|_{H_S^r(\Omega)}$ and $\|v\|_{\widetilde{H}_I^r(\Omega)} = |v|_{H^r(\Omega)}$ for $r \in \mathbb{N}$, it follows from Lemma A.4 that
\[
\|v\|_{\widetilde{H}_I^r(\Omega)}^2 = \tau^{d-2r} \|\widehat{v}\|_{\widetilde{H}_I^r(\widehat{\Omega})}^2
\quad\text{and}\quad
\tau^d \|\widehat{v}\|_{H_I^r(\widehat{\Omega})}^2 \le \|v\|_{H_I^r(\Omega)}^2 \le \tau^{d-2r} \|\widehat{v}\|_{H_I^r(\widehat{\Omega})}^2.
\tag{A.54}
\]
Since $\|v\|_{L^2(\Omega)}^2 = \tau^d \|\widehat{v}\|_{L^2(\widehat{\Omega})}^2$, by interpolation between $L^2(\Omega)$ and $\widetilde{H}^r(\Omega)$ we prove part (i). Similarly, by interpolation between $L^2(\Omega)$ and $H^r(\Omega)$ we prove part (ii).

Part (iii) then follows from part (i) by duality, thanks to $\langle v, w \rangle_\Omega = \tau^d \langle \widehat{v}, \widehat{w} \rangle_{\widehat{\Omega}}$. Indeed, by duality we have
\[
\|v\|_{H_I^{-s}(\Omega)}^2
= \sup_{w \in \widetilde{H}^s(\Omega)} \frac{\langle v, w \rangle_\Omega^2}{\|w\|_{\widetilde{H}_I^s(\Omega)}^2}
= \sup_{\widehat{w} \in \widetilde{H}^s(\widehat{\Omega})} \frac{\tau^{2d} \langle \widehat{v}, \widehat{w} \rangle_{\widehat{\Omega}}^2}{\tau^{d-2s} \|\widehat{w}\|_{\widetilde{H}_I^s(\widehat{\Omega})}^2}
= \tau^{d+2s} \|\widehat{v}\|_{H_I^{-s}(\widehat{\Omega})}^2.
\]
Similarly, part (iv) follows from part (ii) by using the same argument.
We now prove the scalability of ${}_*\|u\|_{H_I^s(\Omega)}$ and ${}_*\|u\|_{\widetilde{H}_I^{-s}(\Omega)}$ for $s \in [0,1]$.

Lemma A.7. Under the assumptions of Lemma A.6, for $s \in [0,1]$ and $\Omega \subset \mathbb{R}^2$:

(i) if $v \in H^s(\Omega)$,
\[
{}_*\|v\|_{H_I^s(\Omega)}^2 = \tau^{2-2s} \, {}_*\|\widehat{v}\|_{H_I^s(\widehat{\Omega})}^2;
\]
(ii) if $v \in \widetilde{H}^{-s}(\Omega)$,
\[
{}_*\|v\|_{\widetilde{H}_I^{-s}(\Omega)}^2 = \tau^{2+2s} \, {}_*\|\widehat{v}\|_{\widetilde{H}_I^{-s}(\widehat{\Omega})}^2.
\]
Proof. By definition and (A.53),
\[
{}_*\|v\|_{H_I^0(\Omega)}^2 = \tau^2 \, {}_*\|\widehat{v}\|_{H_I^0(\widehat{\Omega})}^2.
\]
This together with (A.51) and interpolation proves part (i). Part (ii) follows from the definition of dual norms and part (i), noting that $\langle v, w \rangle_\Omega = \tau^2 \langle \widehat{v}, \widehat{w} \rangle_{\widehat{\Omega}}$:
\[
{}_*\|v\|_{\widetilde{H}_I^{-s}(\Omega)}^2
= \sup_{w \in H^s(\Omega)} \frac{\langle v, w \rangle_\Omega^2}{{}_*\|w\|_{H_I^s(\Omega)}^2}
= \sup_{\widehat{w} \in H^s(\widehat{\Omega})} \frac{\tau^4 \langle \widehat{v}, \widehat{w} \rangle_{\widehat{\Omega}}^2}{\tau^{2-2s} \, {}_*\|\widehat{w}\|_{H_I^s(\widehat{\Omega})}^2}
= \tau^{2+2s} \, {}_*\|\widehat{v}\|_{\widetilde{H}_I^{-s}(\widehat{\Omega})}^2.
\]
This proves the lemma.
A.2.8 Important results

It is sometimes useful to relate the interpolation norms and the weighted norms.

Lemma A.8. Assume that $\Omega$ is an open bounded domain in $\mathbb{R}^d$ satisfying $\tau := \operatorname{diam}(\Omega) < 1$. Then, for $s = k + 1/2$, $k = 0, 1, 2, \ldots$,
\[
\tau^{2k} \|v\|_{\widetilde{H}_I^s(\Omega)}^2 \lesssim \|v\|_{\widetilde{H}_w^s(\Omega)}^2 \lesssim \|v\|_{\widetilde{H}_I^s(\Omega)}^2 \qquad \forall v \in \widetilde{H}^s(\Omega)
\]
and
\[
\tau^{2k} \|v\|_{H_I^s(\Omega)}^2 \lesssim \|v\|_{H_w^s(\Omega)}^2 \lesssim \tau^{-2s} \|v\|_{H_I^s(\Omega)}^2 \qquad \forall v \in H^s(\Omega).
\]
In particular, when $s = 1/2$ we have
\[
\|v\|_{\widetilde{H}_I^{1/2}(\Omega)} \simeq \|v\|_{\widetilde{H}_w^{1/2}(\Omega)} \qquad \forall v \in \widetilde{H}^{1/2}(\Omega)
\]
and
\[
\|v\|_{H_I^{1/2}(\Omega)}^2 \lesssim \|v\|_{H_w^{1/2}(\Omega)}^2 \lesssim \tau^{-1} \|v\|_{H_I^{1/2}(\Omega)}^2 \qquad \forall v \in H^{1/2}(\Omega),
\]
where the constants are independent of $v$ and $\tau$.

Proof. The lemma is a direct consequence of Lemma A.5 and Lemma A.6, noting that
\[
\|\widehat{v}\|_{\widetilde{H}_w^s(\widehat{\Omega})}^2 \simeq \|\widehat{v}\|_{\widetilde{H}_I^s(\widehat{\Omega})}^2
\quad\text{and}\quad
\|\widehat{v}\|_{H_w^s(\widehat{\Omega})}^2 \simeq \|\widehat{v}\|_{H_I^s(\widehat{\Omega})}^2;
\]
see [138, Theorem 11.7]. The constants in the above two equivalences are independent of $\tau$.
see [138, Theorem 11.7]. The constants in the above two equivalences are indepen dent of τ . s (Ω ) = Recall that if Ω is a Lipschitz domain in Rd and if 0 ≤ s < 1/2, then H for s ∈ (−1/2, 1/2); see (A.21). The next result, first studied in [108, s (Ω ), |s| < 1/2. Lemma 5], gives a bound for the norm of the injection H s (Ω ) → H
H s (Ω )
s (Ω ), |s| < 1/2, Lemma A.9. Let Ω be a Lipschitz domain in Rd . Then for any v ∈ H vH s (Ω ) S
1 vHSs (Ω ) . 1/2 − |s|
The constant is independent of v. In particular, if δ > 0 satisfies δ ≤ |s| < 1/2 then the constant depends on δ but is independent of s. Proof. Consider first the case when 0 < s < 1/2. The proof, carried out in [108, Lemma 5], follows along the lines of the proof of [88, Theorem 1.4.4.4]. See also the proofs of [147, Lemma 3.31 and Lemma 3.32]. None of these proofs gives detailed arguments. We provide them here. Recall the definition of vH s (Ω ) in (A.27): S
v2H s (Ω ) = v2H s (Ω ) + S
S
Ω
|v(x)|2 dx. dist(x, ∂ Ω )2s
(A.55)
It suffices to estimate the second term on the right-hand side. First consider the case when Ω is a Lipschitz hypograph. Recall the definition of ∂ Ω in Remark A.1. Then, for x = (x , xd ) ∈ Ω and y = (y , yd ) ∈ ∂ Ω , |ζ (x ) − xd | ≤ |ζ (x ) − ζ (y )| + |xd − yd | ≤ L|x − y | + |xd − yd | ≤ L2 + 1|x − y|
where we used the Lipschitz property of ζ in the penultimate step and the Cauchy– Schwarz inequality in the last step. Consequently,
452
A Interpolation Spaces and Sobolev Spaces
|ζ (x ) − xd | ≤ L2 + 1 inf |x − y| = L2 + 1 dist(x, ∂ Ω ). y∈∂ Ω
Therefore, we can estimate the second term on the right-hand side of (A.55) by using Fubini’s Theorem and making a change of variables by setting x = ζ (x ) − xd
Ω
|v(x)|2 dx ≤ (L2 + 1)s dist(x, ∂ Ω )2s
ζ (x )
|v(x)|2 dxd dx |ζ (x ) − xd |2s
Rd−1 −∞ ∞ |v(x , ζ (x ) − x)|2 = (L2 + 1)s x2s Rd−1 0 ∞ |v(x , ζ (x ) − x)|2 dx dx x2s Rd−1 0
dx dx
(A.56)
where the constant is independent of s. Here, we note that (L2 + 1)s ≤ L2 + 1 because 0 < s < 1/2. Fix x ∈ Rd−1 and define v(x) := v(x , ζ (x ) − x). We want to estimate ∞ ∞ |v(x , ζ (x ) − x)|2 | v(x)|2 dx = dx. (A.57) x2s x2s 0
0
Making use of the identity (see [88, equations (1,4,4,9)–(1,4,4,10)]) v(x) = −w(x) +
∞ x
w(y) dy y
where
w(x) =
1 x
x
v(t) − v(x) dt,
0
we deduce
∞ 0
| v(x)|2 dx = x2s
∞
−s
− x w(x) + x
−s
x
0
≤2
∞ 0
∞
x−2s w2 (x) dx + 2
w(y) 2 dy dx y
∞
x−s
0
(A.58)
∞ x
w(y) 2 dy dx. y
Invoking Lemma C.52 part (i) (Hardy’s inequality) with r = 2s < 1, p = 2, and with f (y) = w(y)/y, we obtain the following bound for the second term on the right-hand side above, with η (s) := 1/(0.5 − s)2 , ∞
x−s
0
∞ x
∞ w(y) 2 dy dx ≤ η (s) x−2s w2 (x) dx. y 0
This and (A.58) together with the Cauchy–Schwarz inequality imply
A.2 Sobolev Spaces
∞ 0
453
| v(x)|2 dx ≤ 3η (s) x2s
∞
≤ 3η (s)
∞
= 3η (s)
∞ x
x
−2s 2
w (x) dx = 3η (s)
0
x−2(s+1)
0 0 ∞ ∞ 0 0
= 3η (s)
x
−2(s+1)
0
0
≤ 3η (s)
∞
∞ ∞ 0 0
x
dt
0
x 0
| v(x) − v(t)|2 x2s+1
x
2 v(t) − v(x) dt dx
0
| v(x) − v(t)|2 dt dx
dt dx ≤ 3η (s)
∞ x 0 0
| v(x) − v(t)|2 dt dx |x − t|2s+1
| v(x) − v(t)|2 dt dx |x − t|2s+1
|v(x , ζ (x ) − x) − v(x , ζ (x ) − t)|2 dt dx. |x − t|2s+1
Making another change of variables xd = ζ (x ) − x and yd = ζ (x ) − t we deduce ∞ 0
| v(x)|2 dx ≤ 3η (s) x2s
ζ (x ) ζ (x )
−∞ −∞
|v(x , xd ) − v(x , yd )|2 dxd dyd . |xd − yd |2s+1
This inequality, (A.56), (A.57), and Fubini's Theorem imply
$$\begin{aligned}
\int_\Omega \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx &\lesssim \eta(s) \int_{\mathbb{R}^{d-1}} \int_{-\infty}^{\zeta(x')} \int_{-\infty}^{\zeta(x')} \frac{|v(x',x_d) - v(x',y_d)|^2}{|x_d - y_d|^{2s+1}}\,dx_d\,dy_d\,dx' \\
&\le \eta(s) \int_{\mathbb{R}^{d-1}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{|V(x',x_d) - V(x',y_d)|^2}{|x_d - y_d|^{2s+1}}\,dx_d\,dy_d\,dx' \\
&\lesssim \eta(s) \int_{\mathbb{R}^{d-1}} \|V(x',\cdot)\|^2_{H_S^s(\mathbb{R})}\,dx', 
\end{aligned} \tag{A.59}$$
where $V : \mathbb{R}^d \to \mathbb{R}$ is such that $V|_\Omega = v$. It follows from (A.7) and Theorem A.5 that for almost all $x' \in \mathbb{R}^{d-1}$
$$\|V(x',\cdot)\|^2_{H_S^s(\mathbb{R})} \lesssim \int_0^\infty \big|K\big(t, V(x',\cdot)\big)\big|^2 \frac{dt}{t^{2s+1}} \tag{A.60}$$
where
$$K\big(t, V(x',\cdot)\big) = \inf_{(V_1(x',\cdot),\,V_2(x',\cdot)) \in X(V,x')} J^{1/2}\big(t, V_1(x',\cdot), V_2(x',\cdot)\big)$$
with
$$J\big(t, V_1(x',\cdot), V_2(x',\cdot)\big) := \|V_1(x',\cdot)\|^2_{L^2(\mathbb{R})} + t^2 \|V_2(x',\cdot)\|^2_{H^1(\mathbb{R})}$$
and
$$X(V,x') := \big\{ \big(V_1(x',\cdot), V_2(x',\cdot)\big) \in L^2(\mathbb{R}) \times H^1(\mathbb{R}) : V(x',\cdot) = V_1(x',\cdot) + V_2(x',\cdot) \big\}.$$
Notice that the constants in (A.60) are independent of $s$ if $0 < \delta \le s < 1/2$; see Corollary A.1. If we define
$$X(V) := \big\{ (V_1, V_2) \in L^2(\mathbb{R}^d) \times H^1(\mathbb{R}^d) : V = V_1 + V_2 \big\},$$
then $X(V) \subset X(V,x')$ for almost all $x' \in \mathbb{R}^{d-1}$. Hence (A.60) implies
$$\|V(x',\cdot)\|^2_{H_S^s(\mathbb{R})} \lesssim \int_0^\infty \inf_{(V_1,V_2)\in X(V)} J\big(t, V_1(x',\cdot), V_2(x',\cdot)\big) \frac{dt}{t^{2s+1}},$$
so that Fubini's Theorem gives
$$\begin{aligned}
\int_{\mathbb{R}^{d-1}} \|V(x',\cdot)\|^2_{H_S^s(\mathbb{R})}\,dx' &\lesssim \int_0^\infty \inf_{(V_1,V_2)\in X(V)} \int_{\mathbb{R}^{d-1}} J\big(t, V_1(x',\cdot), V_2(x',\cdot)\big)\,dx'\, \frac{dt}{t^{2s+1}} \\
&\le \int_0^\infty \inf_{(V_1,V_2)\in X(V)} \Big( \|V_1\|^2_{L^2(\mathbb{R}^d)} + t^2 \|V_2\|^2_{H^1(\mathbb{R}^d)} \Big) \frac{dt}{t^{2s+1}} = \|V\|^2_{[L^2(\mathbb{R}^d),H^1(\mathbb{R}^d)]_s} = \|V\|^2_{H_F^s(\mathbb{R}^d)},
\end{aligned}$$
where in the last step we used (A.7). Therefore, (A.59) implies
$$\int_\Omega \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx \lesssim \eta(s) \|V\|^2_{H_F^s(\mathbb{R}^d)}.$$
It follows from the definition of $\|v\|_{H_F^s(\Omega)}$, see (A.18), and Theorem A.7 part (ii) that
$$\int_\Omega \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx \lesssim \eta(s) \|v\|^2_{H_F^s(\Omega)} \lesssim \eta(s) \|v\|^2_{H_S^s(\Omega)},$$
where the constants are independent of $v$ and $s \in [\delta, 1/2)$. This proves the required result for a Lipschitz hypograph $\Omega$.

Next we consider the case when $\Omega$ is a general Lipschitz domain. Let $\{W_j : j = 1, \ldots, J\}$ be the family of open sets given by Definition A.1, and let $W_0 := \{x \in \Omega : \operatorname{dist}(x, \partial\Omega) > \beta\}$ with $\beta > 0$ chosen sufficiently small such that $\overline\Omega \subset \cup_{j=0}^J W_j$. Let $\{\varphi_j : j = 0, \ldots, J\}$ be a partition of unity for $\overline\Omega$ such that $\operatorname{supp}(\varphi_j) \subset W_j$, $j = 0, \ldots, J$. The existence of such a partition of unity is given by [147, Corollary 3.22].
If $v_j := \varphi_j v$, then $\operatorname{supp}(v_j) \subset W_j$. Theorem 3.20 in [147] gives
$$\|v_j\|_{H_S^s(\Omega)} \lesssim \|v\|_{H_S^s(\Omega)}, \quad j = 0, \ldots, J, \tag{A.61}$$
where the constant is independent of $s$ and $v$. Thus it follows from the Cauchy–Schwarz inequality that
$$\|v\|^2_{H_w^s(\Omega)} = \Big\| \sum_{j=0}^J v_j \Big\|^2_{H_w^s(\Omega)} \le (J+1) \sum_{j=0}^J \|v_j\|^2_{H_w^s(\Omega)} \lesssim \sum_{j=0}^J \|v_j\|^2_{H_S^s(\Omega)} + \sum_{j=0}^J \int_\Omega \frac{|v_j(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx \lesssim \|v\|^2_{H_S^s(\Omega)} + \sum_{j=0}^J \int_\Omega \frac{|v_j(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx. \tag{A.62}$$
The constant is independent of $s$. Note that for $j = 1, \ldots, J$
$$\int_\Omega \frac{|v_j(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx = \int_{\Omega\cap W_j} \frac{|v_j(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx = \int_{\Omega_j\cap W_j} \frac{|v_j(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx.$$
Since the boundary ∂ Ω is compact by Definition A.1 part (ii), for each x ∈ Ω ∩W j , there exists x∗ ∈ ∂ Ω such that |x − x∗ | = dist(x, ∂ Ω ). By changing W j if necessary, we can assume that x∗ ∈ W j . Since Ω ∩W j = Ω j ∩W j , the part of ∂ Ω contained in W j equals the part of ∂ Ω j contained in W j . Hence x∗ ∈ ∂ Ω j and thus dist(x, ∂ Ω j ) ≤ dist(x, ∂ Ω ); see Figure A.1. This implies
$$\int_\Omega \frac{|v_j(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx \le \int_{\Omega_j\cap W_j} \frac{|v_j(x)|^2}{\operatorname{dist}(x,\partial\Omega_j)^{2s}}\,dx \le \int_{\Omega_j} \frac{|v_j(x)|^2}{\operatorname{dist}(x,\partial\Omega_j)^{2s}}\,dx.$$
Since $\Omega_j$ is a Lipschitz hypograph, the first part of the proof yields, with the help of (A.61),
$$\int_{\Omega_j} \frac{|v_j(x)|^2}{\operatorname{dist}(x,\partial\Omega_j)^{2s}}\,dx \lesssim \eta(s)\|v_j\|^2_{H_S^s(\Omega_j)} = \eta(s)\|v_j\|^2_{H_S^s(\Omega_j\cap W_j)} = \eta(s)\|v_j\|^2_{H_S^s(\Omega\cap W_j)} \le \eta(s)\|v_j\|^2_{H_S^s(\Omega)} \lesssim \eta(s)\|v\|^2_{H_S^s(\Omega)}, \tag{A.63}$$
for $j = 1, \ldots, J$. The constant is independent of $s \in [\delta, 1/2)$. The case when $j = 0$ can be dealt with as follows:
$$\int_\Omega \frac{|v_0(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx = \int_{\Omega\cap W_0} \frac{|v_0(x)|^2}{\operatorname{dist}(x,\partial\Omega)^{2s}}\,dx \le \frac{1}{\beta^{2s}} \int_{\Omega\cap W_0} |v_0(x)|^2\,dx \lesssim \|v\|^2_{L^2(\Omega)} \le \|v\|^2_{H_S^s(\Omega)},$$
where the constant depends only on β . This inequality, (A.62), and (A.63) give the desired result for s ∈ (0, 1/2) and for any Lipschitz domain Ω . The constant is independent of s when s ∈ [δ , 1/2).
Fig. A.1 Lipschitz domain $\Omega$, Lipschitz hypograph $\Omega_1$, open sets $W_0$ and $W_1$
The case when $s \in (-1/2, 0)$ is obtained by duality, see (A.28), as follows:
$$\|v\|_{\widetilde H_S^s(\Omega)} = \sup_{w \in H^{-s}(\Omega)} \frac{\langle v, w\rangle_\Omega}{\|w\|_{H_S^{-s}(\Omega)}} \lesssim \frac{1}{1/2+s} \sup_{w \in \widetilde H^{-s}(\Omega)} \frac{\langle v, w\rangle_\Omega}{\|w\|_{\widetilde H_S^{-s}(\Omega)}} = \frac{1}{1/2+s}\,\|v\|_{H_S^s(\Omega)},$$
where we used $H^{\sigma}(\Omega) = \widetilde H^{\sigma}(\Omega)$ for $\sigma \in (0, 1/2)$. This completes the proof. □
In Lemma A.10 we extend the above lemma to interpolation norms. The following lemma is a consequence of Lemma A.9.

Lemma A.10. Let $\Omega$ be a Lipschitz domain in $\mathbb{R}^d$. Then for any $v \in \widetilde H^s(\Omega)$, $|s| < 1/2$,
$$\|v\|_{\widetilde H_I^s(\Omega)} \lesssim \frac{1}{1/2 - |s|}\,\|v\|_{H_I^s(\Omega)}.$$
The constant is independent of $v$. In particular, if $\delta > 0$ satisfies $\delta \le |s| < 1/2$ then the constant depends on $\delta$ but is independent of $s$.

Proof. The lemma is a consequence of Lemma A.9 and Theorem A.9. □
A.2.9 Comparison of global and local norms

The following theorem concerning properties of the global and local $H_I^s(\Omega)$ and $\widetilde H_I^s(\Omega)$ norms was initially proved in [170, Lemma 3.2, p. 49]. An alternative proof is presented in [5, Theorem 4.1] and is included here for the reader's convenience.

Theorem A.10. Let $\{\Omega_1, \ldots, \Omega_N\}$ be a partition of a bounded Lipschitz domain $\Omega$ into nonoverlapping Lipschitz domains. For any $s \in \mathbb{R}$ the following inequalities hold (assuming that all the norms are well defined):
$$\sum_{j=1}^N \|u\|^2_{H_I^s(\Omega_j)} \le \|u\|^2_{H_I^s(\Omega)} \tag{A.64}$$
and
$$\|u\|^2_{\widetilde H_I^s(\Omega)} \le \sum_{j=1}^N \|u\|^2_{\widetilde H_I^s(\Omega_j)}. \tag{A.65}$$
Proof. Let $r$ be an integer such that $s \in [-r, r]$. Introduce the product spaces
$$\Pi^s := H^s(\Omega_1) \times \cdots \times H^s(\Omega_N) \quad\text{and}\quad \widetilde\Pi^s := \widetilde H^s(\Omega_1) \times \cdots \times \widetilde H^s(\Omega_N)$$
with norms defined from the interpolation norms by
$$\|u\|^2_{\Pi^s} := \sum_{j=1}^N \|u_j\|^2_{H_I^s(\Omega_j)} \quad\text{and}\quad \|u\|^2_{\widetilde\Pi^s} := \sum_{j=1}^N \|u_j\|^2_{\widetilde H_I^s(\Omega_j)},$$
where $u = (u_1, \ldots, u_N)$. Let $\theta = s/r$. Then $\theta \in [0,1]$ and
$$\Pi^s = [\Pi^0, \Pi^r]_\theta \quad\text{and}\quad \widetilde\Pi^s = [\widetilde\Pi^0, \widetilde\Pi^r]_\theta.$$
Moreover,
$$\Pi^{-s} = (\widetilde\Pi^s)^* \quad\text{and}\quad \widetilde\Pi^{-s} = (\Pi^s)^*,$$
with the duality pairing
$$\langle u, v\rangle := \sum_{j=1}^N \langle u_j, v_j\rangle_{\Omega_j},$$
where $v = (v_1, \ldots, v_N)$. Now consider the restriction and sum operators
$$R : H^s(\Omega) \to \Pi^s \quad\text{and}\quad S : \widetilde\Pi^s \to \widetilde H^s(\Omega)$$
defined by
$$Ru := \big( u|_{\Omega_1}, \ldots, u|_{\Omega_N} \big) \quad\text{and}\quad S u := \sum_{j=1}^N u_j.$$
Since
$$\|Ru\|_{\Pi^0} = \|u\|_{H_I^0(\Omega)} \quad\text{and}\quad \|Ru\|_{\Pi^r} = \|u\|_{H_I^r(\Omega)},$$
by interpolation
$$\|Ru\|_{\Pi^s} \le \|u\|_{H_I^s(\Omega)} \quad\text{for } 0 \le s \le r. \tag{A.66}$$
Similarly, since
$$\|S u\|_{\widetilde H_I^0(\Omega)} = \|u\|_{\widetilde\Pi^0} \quad\text{and}\quad \|S u\|_{\widetilde H_I^r(\Omega)} = \|u\|_{\widetilde\Pi^r},$$
by interpolation
$$\|S u\|_{\widetilde H_I^s(\Omega)} \le \|u\|_{\widetilde\Pi^s} \quad\text{for } 0 \le s \le r. \tag{A.67}$$
The inequalities hold also for $-r \le s < 0$ by duality because
$$\langle Ru, v\rangle = \langle u, S v\rangle_\Omega.$$
Indeed,
$$\|Ru\|_{\Pi^s} = \sup_{v \in \widetilde\Pi^{-s}} \frac{\langle Ru, v\rangle}{\|v\|_{\widetilde\Pi^{-s}}} = \sup_{v \in \widetilde\Pi^{-s}} \frac{\langle u, S v\rangle_\Omega}{\|v\|_{\widetilde\Pi^{-s}}} \le \sup_{v \in \widetilde\Pi^{-s}} \frac{\|u\|_{H_I^s(\Omega)}\, \|S v\|_{\widetilde H_I^{-s}(\Omega)}}{\|v\|_{\widetilde\Pi^{-s}}} \le \|u\|_{H_I^s(\Omega)} \quad\text{for } -r \le s < 0, \tag{A.68}$$
where we used (A.67) with $s$ replaced by $-s$. Similarly,
$$\|S u\|_{\widetilde H_I^s(\Omega)} = \sup_{v \in H_I^{-s}(\Omega)} \frac{\langle S u, v\rangle_\Omega}{\|v\|_{H_I^{-s}(\Omega)}} = \sup_{v \in H_I^{-s}(\Omega)} \frac{\langle u, Rv\rangle}{\|v\|_{H_I^{-s}(\Omega)}} \le \sup_{v \in H_I^{-s}(\Omega)} \frac{\|u\|_{\widetilde\Pi^s}\, \|Rv\|_{\Pi^{-s}}}{\|v\|_{H_I^{-s}(\Omega)}} \le \|u\|_{\widetilde\Pi^s} \quad\text{for } -r \le s < 0. \tag{A.69}$$
Now for any function $u$ we define $\mathbf u = (u|_{\Omega_1}, \ldots, u|_{\Omega_N})$. Then $u = S\mathbf u$ because $\{\Omega_1, \ldots, \Omega_N\}$ is a partition of $\Omega$. Assuming that all the following norms are well defined, we deduce from (A.66) and (A.68)
$$\sum_{j=1}^N \|u\|^2_{H_I^s(\Omega_j)} = \sum_{j=1}^N \big\| u|_{\Omega_j} \big\|^2_{H_I^s(\Omega_j)} = \|Ru\|^2_{\Pi^s} \le \|u\|^2_{H_I^s(\Omega)}$$
and from (A.67) and (A.69)
$$\|u\|^2_{\widetilde H_I^s(\Omega)} = \|S\mathbf u\|^2_{\widetilde H_I^s(\Omega)} \le \|\mathbf u\|^2_{\widetilde\Pi^s} = \sum_{j=1}^N \big\| u|_{\Omega_j} \big\|^2_{\widetilde H_I^s(\Omega_j)} = \sum_{j=1}^N \|u\|^2_{\widetilde H_I^s(\Omega_j)},$$
proving (A.64) and (A.65). □
We now want to improve the result of Theorem A.10 to ensure that the norms on the right-hand side of (A.65) can be replaced with $\|u_j\|_{\widetilde H_I^s(\Omega)}$, where $u_j$ is the zero extension of $u|_{\Omega_j}$ onto the exterior of $\Omega_j$. This is the purpose of Theorem A.11. We need to define some new interpolation spaces.

The space $\widetilde H_*^s(\Omega_1)$ for $s \in (0,1)$: Let $\Omega_1 \Subset \Omega \Subset \mathbb{R}^n$ and $\Omega_2 = \Omega \setminus \overline\Omega_1$. We define two spaces
$$A_0 := \big\{ v \in L^2(\Omega) : \operatorname{supp}(v) \subseteq \overline\Omega_1 \big\}, \qquad A_1 := \big\{ v \in \widetilde H^1(\Omega) : \operatorname{supp}(v) \subseteq \overline\Omega_1 \big\},$$
which form a compatible couple $\mathcal A = (A_0, A_1)$. For any $v \in A_0$ we define
$$X(v) := \big\{ (v_0, v_1) \in L^2(\Omega) \times \widetilde H^1(\Omega) : v_0 + v_1 = v \big\}, \qquad X_1(v) := \big\{ (v_0, v_1) \in A_0 \times A_1 : v_0 + v_1 = v \big\}.$$
We note that in the definition of $X(v)$ with $v \in A_0$, the functions $v_0$ and $v_1$ do not have to be zero in $\Omega_2$. Instead, it suffices to have $v_0 = -v_1$ in $\Omega_2$ so that $v = 0$ in $\Omega_2$. Hence, $X_1(v) \subsetneq X(v)$. Let us define two functionals
$$J : \mathbb{R}^+ \times L^2(\Omega) \times \widetilde H^1(\Omega) \to \mathbb{R} \quad\text{and}\quad J_1 : \mathbb{R}^+ \times A_0 \times A_1 \to \mathbb{R}$$
by
$$J(t, v_0, v_1) := \|v_0\|^2_{L^2(\Omega)} + t^2 |v_1|^2_{H^1(\Omega)}, \quad v_0 \in L^2(\Omega),\ v_1 \in \widetilde H^1(\Omega), \\
J_1(t, v_0, v_1) := \|v_0\|^2_{L^2(\Omega_1)} + t^2 |v_1|^2_{H^1(\Omega_1)}, \quad v_0 \in A_0,\ v_1 \in A_1. \tag{A.70}$$
Then, for any $t > 0$,
$$\inf\big\{ J(t, v_0, v_1) : (v_0, v_1) \in X(v) \big\} \le \inf\big\{ J(t, v_0, v_1) : (v_0, v_1) \in X_1(v) \big\} = \inf\big\{ J_1(t, v_0, v_1) : (v_0, v_1) \in X_1(v) \big\}.$$
The infima are in fact minima. The following lemma shows that in general the above inequality is a strict inequality. It suffices to prove the following lemma with $r = 1$. First we recall the definition of the Hölder space $C^{0,\alpha}(\overline\Omega)$ for $\alpha \in (0,1)$:
$$C^{0,\alpha}(\overline\Omega) := \Big\{ v \in C(\overline\Omega) : \sup_{x \ne y} \frac{|v(x) - v(y)|}{|x-y|^\alpha} < \infty \Big\}.$$

Lemma A.11. Let $\Omega$ have a smooth boundary, and let $v \in A_0 \cap C^{0,\alpha}(\overline\Omega)$ satisfy $v \ge 0$ in $\Omega$ and $v \not\equiv 0$. Then for any $t > 0$,
$$\min\big\{ J(t, v_0, v_1) : (v_0, v_1) \in X(v) \big\} < \min\big\{ J_1(t, v_0, v_1) : (v_0, v_1) \in X_1(v) \big\}.$$

Proof. Fix $t > 0$. There exists $(v_0^*, v_1^*) \in X(v)$ such that
$$(v_0^*, v_1^*) = \arg\min\big\{ J(t, v_0, v_1) : (v_0, v_1) \in X(v) \big\}. \tag{A.71}$$
Indeed, let
$$m := \inf\big\{ J(t, v_0, v_1) : (v_0, v_1) \in X(v) \big\} = \inf\big\{ J(t, v - v_1, v_1) : v_1 \in \widetilde H^1(\Omega) \big\}$$
and let $\{v_1^{(k)}\}$ be a sequence in $\widetilde H^1(\Omega)$ satisfying
$$\lim_{k\to\infty} J\big(t, v - v_1^{(k)}, v_1^{(k)}\big) = m.$$
It follows from the definition of the functional $J$ that $\{v_1^{(k)}\}$ is bounded in $H_0^1(\Omega)$. Hence there exists a subsequence $\{v_1^{(k_j)}\}$ converging weakly in $H_0^1(\Omega)$ to some $v_1^*$. From the definition of $m$, it follows that $m \le J(t, v - v_1^*, v_1^*)$. On the other hand, recall the property that if a sequence $\{v_j\}$ in a Banach space $E$ converges weakly to $v \in E$, then $\|v\|_E \le \liminf_{j\to\infty} \|v_j\|_E$. Hence, the weak convergence of $\{v_1^{(k_j)}\}$ to $v_1^*$ implies
$$J(t, v - v_1^*, v_1^*) \le \liminf_{j\to\infty} J\big(t, v - v_1^{(k_j)}, v_1^{(k_j)}\big) = m.$$
Thus $J(t, v - v_1^*, v_1^*) = m$ and therefore $(v_0^*, v_1^*) := (v - v_1^*, v_1^*)$ is a minimiser.
Thus J(t, v − v∗1 , v∗1 ) = m and therefore (v∗0 , v∗1 ) := (v − v∗1 , v∗1 ) is a minimiser. / X1 (v). To prove the required strict inequality, it suffices to show that (v∗0 , v∗1 ) ∈ The problem (A.71) is an optimisation problem with constraint, the constraint being g(v0 , v1 ) = 0 where 1 (Ω ) → L2 (Ω ), g : L2 (Ω ) × H
g(v0 , v1 ) := v0 + v1 − v.
1 (Ω ) × L2 (Ω ) → R is defined by The Lagrangian functional L : L2 (Ω ) × H L (v0 , v1 ; p) := J(t, v0 , v1 ) + G(v0 , v1 , p)
where p is the Lagrangian multiplier and G(v0 , v1 , p) := p, g(v0 , v1 )L2 (Ω ) = p, v0 + v1 − vL2 (Ω ) . If g(v0 , v1 ) = 0, then
sup G(v0 , v1 , p) = 0.
p∈L2 (Ω )
If g(v0 , v1 ) = 0, then we can choose p ∈ L2 (Ω ) such that G(v0 , v1 , p) > 0. Consequently G(v0 , v1 ,t p) = tG(v0 , v1 , p) → ∞ as t → ∞. Hence sup G(v0 , v1 , p) = ∞.
p∈L2 (Ω )
Therefore,
$$\sup_{p \in L^2(\Omega)} \mathcal L(v_0, v_1, p) = \begin{cases} J(t, v_0, v_1) & \text{if } g(v_0, v_1) = 0, \\ \infty & \text{else.} \end{cases}$$
As a direct consequence, we conclude that
$$\min_{(v_0,v_1)\in X(v)} J(t, v_0, v_1) = \inf_{v_0 \in L^2(\Omega),\, v_1 \in \widetilde H^1(\Omega)}\ \sup_{p \in L^2(\Omega)} \mathcal L(v_0, v_1, p). \tag{A.72}$$
For any functional $F : (v, p) \mapsto F(v, p)$, we denote by $\partial_v F(v,p)(\varphi)$ the $v$-Fréchet derivative of $F$ at $(v,p)$ acting on $\varphi$, and by $\partial_p F(v,p)(q)$ the $p$-Fréchet derivative of $F$ at $(v,p)$ acting on $q$. The minimiser $(v_0^*, v_1^*)$ of (A.71), or the solution $(v_0^*, v_1^*, p^*)$ to the minimax problem (A.72), solves the following equations:
$$\partial_{v_0} \mathcal L(v_0, v_1, p) = 0, \qquad \partial_{v_1} \mathcal L(v_0, v_1, p) = 0, \qquad \partial_p \mathcal L(v_0, v_1, p) = 0.$$
Since
$$\begin{aligned}
\partial_{v_0} J(t, v_0, v_1)(\varphi) &= 2\langle v_0, \varphi\rangle_{L^2(\Omega)} &&\forall \varphi \in L^2(\Omega), \\
\partial_{v_1} J(t, v_0, v_1)(\psi) &= 2t^2 \langle \nabla v_1, \nabla\psi\rangle_{L^2(\Omega)} &&\forall \psi \in \widetilde H^1(\Omega), \\
\partial_{v_0} G(v_0, v_1, p)(\varphi) &= \langle p, \varphi\rangle_{L^2(\Omega)} &&\forall \varphi \in L^2(\Omega), \\
\partial_{v_1} G(v_0, v_1, p)(\psi) &= \langle p, \psi\rangle_{L^2(\Omega)} &&\forall \psi \in \widetilde H^1(\Omega), \\
\partial_p G(v_0, v_1, p)(q) &= \langle q, g(v_0, v_1)\rangle_{L^2(\Omega)} &&\forall q \in L^2(\Omega),
\end{aligned}$$
we have
$$\begin{aligned}
\partial_{v_0} \mathcal L(v_0, v_1, p)(\varphi) &= 2\langle v_0, \varphi\rangle_{L^2(\Omega)} + \langle p, \varphi\rangle_{L^2(\Omega)} &&\forall \varphi \in L^2(\Omega), \\
\partial_{v_1} \mathcal L(v_0, v_1, p)(\psi) &= 2t^2 \langle \nabla v_1, \nabla\psi\rangle_{L^2(\Omega)} + \langle p, \psi\rangle_{L^2(\Omega)} &&\forall \psi \in \widetilde H^1(\Omega), \\
\partial_p \mathcal L(v_0, v_1, p)(q) &= \langle q, v_0 + v_1 - v\rangle_{L^2(\Omega)} &&\forall q \in L^2(\Omega).
\end{aligned}$$
Hence, $v_0^*$, $v_1^*$, and $p^*$ satisfy
$$2\langle v_0^*, \varphi\rangle_{L^2(\Omega)} + \langle p^*, \varphi\rangle_{L^2(\Omega)} = 0 \quad \forall \varphi \in L^2(\Omega), \tag{A.73}$$
$$2t^2 \langle \nabla v_1^*, \nabla\psi\rangle_{L^2(\Omega)} + \langle p^*, \psi\rangle_{L^2(\Omega)} = 0 \quad \forall \psi \in \widetilde H^1(\Omega), \tag{A.74}$$
$$\langle q, v_0^* + v_1^* - v\rangle_{L^2(\Omega)} = 0 \quad \forall q \in L^2(\Omega). \tag{A.75}$$
It follows from (A.73) and (A.74) that
$$t^2 \langle \nabla v_1^*, \nabla\psi\rangle_{L^2(\Omega)} - \langle v_0^*, \psi\rangle_{L^2(\Omega)} = 0 \quad \forall \psi \in \widetilde H^1(\Omega).$$
This equation and (A.75) give
$$t^2 \langle \nabla v_1^*, \nabla\psi\rangle_{L^2(\Omega)} + \langle v_1^*, \psi\rangle_{L^2(\Omega)} = \langle v, \psi\rangle_{L^2(\Omega)} \quad \forall \psi \in \widetilde H^1(\Omega).$$
This is a weak formulation of the following boundary value problem:
$$-t^2 \Delta v_1^* + v_1^* = v \ \text{ in } \Omega, \qquad v_1^* = 0 \ \text{ on } \partial\Omega. \tag{A.76}$$
Since $\Omega$ has a smooth boundary and $v \in C^{0,\alpha}(\overline\Omega)$, we deduce that $v_1^* \in C(\overline\Omega) \cap C^2(\Omega)$. Moreover, since $v \ge 0$ in $\Omega$, due to the strong maximum principle, see e.g. [43, Corollary 9.37], either $v_1^* > 0$ in $\Omega$ or $v_1^* \equiv 0$ in $\Omega$. If $v_1^* \equiv 0$ then $v_0^* = v$. Moreover, (A.74) implies $p^* \equiv 0$, so that $v_0^* \equiv 0$ on $\Omega$ due to (A.73). This contradicts the assumption on $v$. Hence $v_1^* > 0$ on $\Omega$, which implies $(v_0^*, v_1^*) \notin X_1(v)$. □

Remark A.3. The above result is consistent with the well-known fact that the solution $v_1^*$ of (A.76) in a subdomain $\Omega_2 \Subset \Omega$ depends on the values of $v$ in all of $\Omega$ and not only on the values of $v$ in $\Omega_2$; see e.g. [43, page 307].

On the compatible couple $\mathcal A = (A_0, A_1)$, see Section A.1.1, we define two K-functionals $K$ and $K_1$ by using the two functionals $J$ and $J_1$ defined in (A.70) as follows:
$$K(t, v, \mathcal A) := \inf_{(v_0,v_1)\in X(v)} \big| J(t, v_0, v_1) \big|^{1/2} \quad \forall v \in A_0, \qquad K_1(t, v, \mathcal A) := \inf_{(v_0,v_1)\in X_1(v)} \big| J_1(t, v_0, v_1) \big|^{1/2} \quad \forall v \in A_0.$$
The corresponding interpolation norms defined from these K-functionals, see (A.1), are
$$\|v\|_{\widetilde H_I^s(\Omega)} := \Big( \int_0^\infty |K(t, v, \mathcal A)|^2 \frac{dt}{t^{1+2s}} \Big)^{1/2}, \qquad \|v\|_{\widetilde H_I^s(\Omega_1)} := \Big( \int_0^\infty |K_1(t, v, \mathcal A)|^2 \frac{dt}{t^{1+2s}} \Big)^{1/2}.$$
Lemma A.11 implies that in general $\|v\|_{\widetilde H_I^s(\Omega)} \le \|v\|_{\widetilde H_I^s(\Omega_1)}$ for $v \in A_0$. Denote
$$\widetilde H_*^s(\Omega_1) := \big\{ v \in A_0 : \|v\|_{\widetilde H_I^s(\Omega)} < \infty \big\} \quad\text{and}\quad \|v\|_{\widetilde H_*^s(\Omega_1)} := \|v\|_{\widetilde H_I^s(\Omega)}. \tag{A.77}$$
It is clear that $\widetilde H^s(\Omega_1)$ is a proper subset of $\widetilde H_*^s(\Omega_1)$. (Here we extend functions in $\widetilde H^s(\Omega_1)$ by zero onto the exterior of $\Omega_1$.) Lemma A.11 shows that $\widetilde H_*^s(\Omega_1)$ is a proper subset of $\widetilde H^s(\Omega)$. This means
$$\widetilde H^s(\Omega_1) \subsetneq \widetilde H_*^s(\Omega_1) \subsetneq \widetilde H^s(\Omega). \tag{A.78}$$
The space $\widetilde H_*^s(\Omega_1)$ was first studied in [224].
The space $\widetilde H_*^{-s}(\Omega_1)$ for $s \in [0,1]$: For $s \in (0,1)$, similarly to the definition of $\widetilde H_*^s(\Omega_1)$, we will in this section construct $\widetilde H_*^{-s}(\Omega_1)$, which contains $\widetilde H^{-s}(\Omega_1)$ as a subspace. First we recall from [138, Theorem 12.5, page 76] that
$$\widetilde H^{-s}(\Omega_1) = \big[ L^2(\Omega_1), \widetilde H^{-1}(\Omega_1) \big]_s; \tag{A.79}$$
see also [57, Corollary 4.10] and [147, Theorem B.9]. Therefore we can equip the space $\widetilde H^{-s}(\Omega_1)$ with different norms, namely, the norm $\|\cdot\|_{\widetilde H_I^{-s}(\Omega_1)}$ defined by (A.35), the norm ${}^*\|\cdot\|_{\widetilde H_I^{-s}(\Omega_1)}$ defined by (A.48), and the interpolation norm intrinsic to the interpolation space on the right-hand side of (A.79). The non-scalability of $\|\cdot\|_{\widetilde H_I^{-s}(\Omega_1)}$, as shown in Lemma A.6 Part (iv), renders this norm least favourable. For this reason, we equip $\widetilde H^{-s}(\Omega_1)$ with the norm defined by (A.48). It is remarked that, by using interpolation, the interpolation norm scales in exactly the same way as ${}^*\|\cdot\|_{\widetilde H_I^{-s}(\Omega_1)}$; see Lemma A.7 Part (ii). Thus (up to some constant independent of the size of $\Omega_1$) these two norms are equal. We define
$$A_0 := \big\{ v \in L^2(\Omega) : \operatorname{supp}(v) \subseteq \overline\Omega_1,\ \langle v, 1\rangle_{\Omega_1} = 0 \big\}, \qquad A_1 := \big\{ v \in \widetilde H^{-1}(\Omega) : \operatorname{supp}(v) \subseteq \overline\Omega_1,\ \langle v, 1\rangle_{\Omega_1} = 0 \big\}, \tag{A.80}$$
where the support is understood in the distributional sense. These two sets form a compatible couple $\mathcal A = (A_0, A_1)$. For any $v \in A_1$ we define
$$X(v) := \big\{ (v_0, v_1) \in L^2(\Omega) \times \widetilde H^{-1}(\Omega) : v_0 + v_1 = v \big\}, \qquad K(t, v, \mathcal A) := \inf_{(v_0,v_1)\in X(v)} \big| J(t, v_0, v_1) \big|^{1/2},$$
and
$$X_1(v) := \big\{ (v_0, v_1) \in A_0 \times A_1 : v_0 + v_1 = v \big\}, \qquad K_1(t, v, \mathcal A) := \inf_{(v_0,v_1)\in X_1(v)} \big| J_1(t, v_0, v_1) \big|^{1/2},$$
where
$$J(t, v_0, v_1) := \|v_0\|^2_{L^2(\Omega)} + t^2\, {}^*\|v_1\|^2_{H_I^{-1}(\Omega)}, \quad v_0 \in L^2(\Omega),\ v_1 \in \widetilde H^{-1}(\Omega), \\
J_1(t, v_0, v_1) := \|v_0\|^2_{L^2(\Omega_1)} + t^2\, {}^*\|v_1\|^2_{H_I^{-1}(\Omega_1)}, \quad v_0 \in A_0,\ v_1 \in A_1.$$
We note that $X_1(v) \subsetneq X(v)$. If $v_1 \in A_1$ then, recalling the definition of $A_1$ and using Hölder's inequality for dual norms, we have
$${}^*\|v_1\|_{H_I^{-1}(\Omega)} = \sup_{w \in H^1(\Omega)} \frac{\langle v_1, w\rangle_\Omega}{{}^*\|w\|_{H_I^1(\Omega)}} \le \sup_{w \in H^1(\Omega)} \inf_{c \in \mathbb{R}} \frac{\langle v_1, w + c\rangle_{\Omega_1}}{|w|_{H^1(\Omega)}}$$
$$\le {}^*\|v_1\|_{H_I^{-1}(\Omega_1)} \sup_{w \in H^1(\Omega)} \inf_{c \in \mathbb{R}} \frac{{}^*\|w + c\|_{H_I^1(\Omega_1)}}{|w|_{H^1(\Omega)}}.$$
Invoking Lemma A.15 Part (ii) we deduce
$${}^*\|v_1\|_{H_I^{-1}(\Omega)} \lesssim {}^*\|v_1\|_{H_I^{-1}(\Omega_1)} \sup_{w \in H^1(\Omega)} \frac{|w|_{H^1(\Omega_1)}}{|w|_{H^1(\Omega)}} \le {}^*\|v_1\|_{H_I^{-1}(\Omega_1)}.$$
This implies
$$J(t, v_0, v_1) \lesssim J_1(t, v_0, v_1), \quad (v_0, v_1) \in A_0 \times A_1.$$
Thus, for $v \in A_1$,
$$|K(t, v, \mathcal A)|^2 \le \inf_{(v_0,v_1)\in X_1(v)} J(t, v_0, v_1) \lesssim |K_1(t, v, \mathcal A)|^2. \tag{A.81}$$
∞
|K(t, u, A )|2
0
dt 1/2
t 1+2s
.
(A.82)
It follows from (A.81) that vH∗−s (Ω1 ) ∗vH −s (Ω1 )
(A.83)
∗−s (Ω1 ) −s (Ω1 ) ⊂ H H 0
(A.84)
I
and hence where, see (A.80),
3 4 σ (Ω1 ) : v, 1 = 0 , 0σ (Ω1 ) := v ∈ H H Ω1
σ < 0.
(A.85)
∗s (Ω1 ), s ∈ [−1, 1], and obWe can now generalise Theorem A.10 for the spaces H tain the following results. This generalisation is crucial for our proofs of coercivity of decompositions; see Assumption 2.4. For example, in the case of the hypersingular integral equation, since the bilinear form a j (u j , u j ) on the right-hand side of this assumption is equivalent to the norm u j 1/2 , while inequality (A.65) corresponds to the norm u j 1/2 HI
HI
(Γj )
(Γ )
, which is a larger norm, Theorem A.10 cannot be
used directly. Similar comment can also be made for the weakly-singular integral equation and the norm u j −1/2 . The following theorem will be useful in these HI (Γ ) cases.
Theorem A.11. Let $\{\Omega_1, \ldots, \Omega_N\}$ be a partition of a bounded Lipschitz domain $\Omega$ into nonoverlapping Lipschitz domains. For any function $u$ defined on $\Omega$, we denote by $u_j$ the zero extension of $u|_{\Omega_j}$ onto the exterior of $\Omega_j$.

(i) Assume that $u \in \widetilde H^s(\Omega)$ satisfies $u_j \in \widetilde H_*^s(\Omega_j)$ for $s \in [-1,1]$. Then
$$\|u\|^2_{\widetilde H_I^s(\Omega)} \le \sum_{j=1}^N \|u_j\|^2_{\widetilde H_I^s(\Omega)}, \quad 0 \le s \le 1, \tag{A.86}$$
and
$${}^*\|u\|^2_{H_I^s(\Omega)} \le \sum_{j=1}^N {}^*\|u_j\|^2_{H_I^s(\Omega)}, \quad -1 \le s \le 0. \tag{A.87}$$

(ii) Assume that $u \in \widetilde H^s(\Omega)$ satisfies $u_j \in \widetilde H^s(\Omega_j)$ for $s \in [0,1]$ and $u_j \in \widetilde H_0^s(\Omega_j)$ for $s \in [-1,0)$; see (A.85). Then (A.86) and (A.87) also hold. Moreover, for $s \in [-1,0)$,
$$\|u\|^2_{\widetilde H_I^s(\Omega)} \lesssim \sum_{j=1}^N \|u_j\|^2_{\widetilde H_I^s(\Omega)} \tag{A.88}$$
where the constant depends on the size of $\Omega$, but not on the size of $\Omega_j$, $j = 1, \ldots, N$.

Proof. Inequality (A.86) is proved in [224] following along the lines of the proof of Theorem A.10. We now prove both (A.86) and (A.87). Introduce the product space
$$\widetilde\Pi^s := \widetilde H_*^s(\Omega_1) \times \cdots \times \widetilde H_*^s(\Omega_N), \quad -1 \le s \le 1,$$
with norm defined from the interpolation norms by
$$\|u\|^2_{\widetilde\Pi^s} := \sum_{j=1}^N \|u_j\|^2_{\widetilde H_*^s(\Omega_j)} = \begin{cases} \displaystyle\sum_{j=1}^N \|u_j\|^2_{\widetilde H_I^s(\Omega)}, & 0 \le s \le 1, \\[2mm] \displaystyle\sum_{j=1}^N {}^*\|u_j\|^2_{H_I^s(\Omega)}, & -1 \le s < 0, \end{cases} \tag{A.89}$$
where $u = (u_1, \ldots, u_N)$. Then
$$\widetilde\Pi^s = [\widetilde\Pi^0, \widetilde\Pi^1]_s.$$
s → X s defined by and consider the sum operator S : Π
(A.90)
466
A Interpolation Spaces and Sobolev Spaces N
S u :=
∑ u j.
j=1
We recall that X s = [X 0 , X 1 ]s for s ∈ (0, 1) and X s = [X 0 , X −1 ]−s for s ∈ (−1, 0); see (A.79). Since S uX 0 = S uH 0 (Ω ) = uΠ 0 , I
S uX 1 = S uH 1 (Ω ) = uΠ 1 , I
by interpolation
S uX s ≤ uΠ s
for 0 ≤ s ≤ 1.
On the other hand, if v j is the zero extension of v|Ω j , then since 2 ∗vH 1 (Ω ) I
=
N
∑ ∗v j 2HI1 (Ω )
j=1
we have from definition (A.48) N
S uX −1 = ∗S uH −1 (Ω ) = I
≤
sup v∈H 1 (Ω )
uΠ −1
S u, vΩ = sup v∈H 1 (Ω ) ∗vH 1 (Ω ) v∈H 1 (Ω ) sup
I
N
∑ ∗v j 2HI1 (Ω )
j=1
∗vH 1 (Ω )
1/2
I
By interpolation between X −1 and X 0 we obtain S uX s ≤ uΠ s
∑
j=1
u j, v j
Ω
∗vH 1 (Ω ) I
= uΠ −1 .
for −1 ≤ s < 0.
Now for any function u we define u = (u1 , . . . , uN ). Then u = S u and thus, for s ∈ [−1, 1], u2X s = S u2X s ≤ u2Π s =
N
∑ u j 2H∗s (Ω j ) .
j=1
This together with (A.89) and (A.90) proves Part (i).

For $j = 1, \ldots, N$, recall that $\widetilde H^s(\Omega_j) \subset \widetilde H_*^s(\Omega_j)$ for $s \in [0,1]$ and $\widetilde H_0^s(\Omega_j) \subset \widetilde H_*^s(\Omega_j)$ for $s \in [-1,0)$; see (A.78) and (A.84). Hence, the first statement in Part (ii) holds true. In particular, when $s \in [-1,0)$, it follows from inequality (A.87) and a scaling argument that
$$\|u\|^2_{\widetilde H_I^s(\Omega)} \lesssim {}^*\|u\|^2_{H_I^s(\Omega)} \le \sum_{j=1}^N {}^*\|u_j\|^2_{H_I^s(\Omega)} \lesssim \sum_{j=1}^N \|u_j\|^2_{\widetilde H_I^s(\Omega)},$$
proving (A.88). The equivalence constants depend on the size of $\Omega$. □
A.2.10 The special case s = 1/2

In this book, two frequently-used norms are the $H_I^{1/2}(\Omega)$-norm and the $\widetilde H_I^{1/2}(\Omega)$-norm. We write their definitions here, see (A.1), for convenience:
$$\|u\|_{H_I^{1/2}(\Omega)} = \Big( \int_0^\infty \big| K(t, u, \mathcal A) \big|^2 \frac{dt}{t^2} \Big)^{1/2}, \qquad \|u\|_{\widetilde H_I^{1/2}(\Omega)} = \Big( \int_0^\infty \big| K(t, u, \widetilde{\mathcal A}) \big|^2 \frac{dt}{t^2} \Big)^{1/2}, \tag{A.91}$$
where
$$K(t, u, \mathcal A) = \inf_{\substack{u_0 \in H^0,\ u_1 \in H^1 \\ u_0 + u_1 = u}} \Big( \|u_0\|^2_{H_I^0(\Omega)} + t^2 \|u_1\|^2_{H_I^1(\Omega)} \Big)^{1/2}, \qquad K(t, u, \widetilde{\mathcal A}) = \inf_{\substack{u_0 \in \widetilde H^0,\ u_1 \in \widetilde H^1 \\ u_0 + u_1 = u}} \Big( \|u_0\|^2_{\widetilde H_I^0(\Omega)} + t^2 \|u_1\|^2_{\widetilde H_I^1(\Omega)} \Big)^{1/2}$$
with
$$\mathcal A = \big( H^0(\Omega), H^1(\Omega) \big) \quad\text{and}\quad \widetilde{\mathcal A} = \big( \widetilde H^0(\Omega), \widetilde H^1(\Omega) \big).$$
First we show how the interpolation norm $\|\cdot\|_{H_I^{1/2}(\Omega)}$ (resp., $\|\cdot\|_{\widetilde H_I^{1/2}(\Omega)}$) is related to the norms $\|\cdot\|_{L^2(\Omega)}$ and $\|\cdot\|_{H_I^1(\Omega)}$ (resp., $\|\cdot\|_{L^2(\Omega)}$ and $\|\cdot\|_{\widetilde H_I^1(\Omega)}$); cf. [138, Proposition 2.3].

Lemma A.12. If $v \in H^1(\Omega)$ then
$$\|v\|^2_{H_I^{1/2}(\Omega)} \le 2\|v\|_{L^2(\Omega)} \|v\|_{H_I^1(\Omega)}.$$
Similarly, if $v \in \widetilde H^1(\Omega) = H_0^1(\Omega)$ then
$$\|v\|^2_{\widetilde H_I^{1/2}(\Omega)} \le 2\|v\|_{L^2(\Omega)} \|v\|_{\widetilde H_I^1(\Omega)}.$$
Proof. Suppose that $v \in H^1(\Omega)$. It is easy to see that the K-functional can be estimated as
$$K(t, v, \mathcal A)^2 \le \min\big\{ \|v\|^2_{L^2(\Omega)},\ t^2 \|v\|^2_{H_I^1(\Omega)} \big\}.$$
Hence, with $\alpha = \|v\|_{L^2(\Omega)} / \|v\|_{H_I^1(\Omega)}$, we deduce from (A.91)
$$\|v\|^2_{H_I^{1/2}(\Omega)} = \int_0^\alpha K(t, v, \mathcal A)^2 \frac{dt}{t^2} + \int_\alpha^\infty K(t, v, \mathcal A)^2 \frac{dt}{t^2} \le \int_0^\alpha t^2 \|v\|^2_{H_I^1(\Omega)} \frac{dt}{t^2} + \int_\alpha^\infty \|v\|^2_{L^2(\Omega)} \frac{dt}{t^2} = 2\|v\|_{L^2(\Omega)} \|v\|_{H_I^1(\Omega)}.$$
The result for $\|v\|^2_{\widetilde H_I^{1/2}(\Omega)}$ can be proved similarly. □
(i) The following relations between norms of v hold v 1/2 HI
(Ω )
v 1/2
Hw (Ω )
v 1/2
Hw (Ω )
v 1/2 HI
(Ω )
where the constants are independent of v, τ , and the size of Ω . (ii) The opposite inequality is not true, i.e., there is no constant C independent of v, τ , and the size of Ω such that v 1/2 ≤ Cv 1/2 . Hw (Ω )
Hw (Ω )
Proof. Lemma A.8 gives the equivalences
$$\|v\|_{\widetilde H_I^{1/2}(\Omega)} \eqsim \|v\|_{H_w^{1/2}(\Omega)} \quad\text{and}\quad \|v\|_{\widetilde H_I^{1/2}(\Omega')} \eqsim \|v\|_{H_w^{1/2}(\Omega')}.$$
It follows from Theorem A.10 that
$$\|v\|^2_{\widetilde H_I^{1/2}(\Omega)} \le \|v\|^2_{\widetilde H_I^{1/2}(\Omega')} + \|v\|^2_{\widetilde H_I^{1/2}(\Omega\setminus\overline{\Omega'})} = \|v\|^2_{\widetilde H_I^{1/2}(\Omega')}.$$
Thus we also have
$$\|v\|_{H_w^{1/2}(\Omega)} \lesssim \|v\|_{H_w^{1/2}(\Omega')}.$$
We present here another proof of this inequality. Recall the definition of $\|v\|_{H_w^{1/2}(\Omega)}$:
$$\|v\|^2_{H_w^{1/2}(\Omega)} = |v|^2_{H^{1/2}(\Omega)} + \int_\Omega \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\Omega)}\,dx.$$
Clearly,
$$\int_\Omega \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\Omega)}\,dx = \int_{\Omega'} \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\Omega)}\,dx \le \int_{\Omega'} \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\Omega')}\,dx. \tag{A.92}$$
On the other hand, since $\operatorname{supp}(v) \subset \overline{\Omega'}$,
$$|v|^2_{H^{1/2}(\Omega)} = \int_{\Omega'}\int_{\Omega'} \frac{|v(x) - v(y)|^2}{|x-y|^{d+1}}\,dx\,dy + 2\int_{\Omega'} \Big( \int_{\Omega\setminus\Omega'} \frac{dy}{|x-y|^{d+1}} \Big) |v(x)|^2\,dx. \tag{A.93}$$
Since
$$\int_{\Omega\setminus\Omega'} \frac{dy}{|x-y|^{d+1}} \lesssim \frac{1}{\operatorname{dist}(x,\partial\Omega')}, \tag{A.94}$$
see Lemma A.14, the following holds:
$$|v|^2_{H^{1/2}(\Omega)} \lesssim |v|^2_{H^{1/2}(\Omega')} + \int_{\Omega'} \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\Omega')}\,dx = \|v\|^2_{H_w^{1/2}(\Omega')},$$
so that, with the help of (A.92),
$$\|v\|^2_{H_w^{1/2}(\Omega)} \lesssim \|v\|^2_{H_w^{1/2}(\Omega')} + \int_{\Omega'} \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\Omega')}\,dx \lesssim \|v\|^2_{H_w^{1/2}(\Omega')}.$$
This proves part (i).

Part (ii) is proved by the following counter-example, which is a modification of the counter-example in the Appendix of [2]. Consider $D$ to be the upper half of the unit disk in the $(x,y)$-plane, i.e.,
$$D = \big\{ (r, \theta) : r \in [0,1],\ \theta \in [0,\pi] \big\},$$
where $(r, \theta)$ denote polar coordinates. Then define
$$\Omega = \big\{ (r, \theta) : r \in [0,1],\ \theta = 0 \text{ or } \theta = \pi \big\} = [-1,1] \times \{0\}, \qquad \Omega_\varepsilon = \big\{ (r, \theta) : r \in [\varepsilon, 3/4],\ \theta = 0 \big\} = [\varepsilon, 3/4] \times \{0\}.$$
For $\varepsilon \in (0, 1/2)$, define $U_\varepsilon : D \to \mathbb{R}$ and $U : D \to \mathbb{R}$ by
$$U_\varepsilon(r, \theta) = \begin{cases} 0, & 0 \le r < \varepsilon, \\ (-\log r)^{-1/2} - (-\log\varepsilon)^{-1/2}, & \varepsilon \le r < 1/2, \\ (3 - 4r)\big( (\log 2)^{-1/2} - (-\log\varepsilon)^{-1/2} \big), & 1/2 \le r < 3/4, \\ 0, & 3/4 \le r \le 1, \end{cases}$$
and
$$U(r, \theta) = \begin{cases} 0, & r = 0, \\ (-\log r)^{-1/2}, & 0 < r < 1/2, \\ (3 - 4r)(\log 2)^{-1/2}, & 1/2 \le r < 3/4, \\ 0, & 3/4 \le r \le 1. \end{cases}$$
These two functions were first studied in [2]. We now define $V_\varepsilon : D \to \mathbb{R}$ by
$$V_\varepsilon(r, \theta) = \begin{cases} U_\varepsilon(r, \theta)\cos\theta, & 0 \le \theta < \pi/2, \\ 0, & \pi/2 \le \theta \le \pi. \end{cases}$$
Let $u_\varepsilon$ be the trace of $V_\varepsilon$ on the boundary of $D$. Then $\operatorname{supp}(u_\varepsilon) = [\varepsilon, 3/4] \times \{0\} = \overline\Omega_\varepsilon$. For $(r, \theta) \in D$,
$$|V_\varepsilon(r,\theta)| \le |U_\varepsilon(r,\theta)| \le |U(r,\theta)|, \qquad \Big| \frac{\partial V_\varepsilon}{\partial r}(r,\theta) \Big| \le \Big| \frac{\partial U_\varepsilon}{\partial r}(r,\theta) \Big| \le \Big| \frac{\partial U}{\partial r}(r,\theta) \Big|, \qquad \Big| \frac{\partial V_\varepsilon}{\partial \theta}(r,\theta) \Big| \le |U_\varepsilon(r,\theta)| \le |U(r,\theta)|. \tag{A.95}$$
We note that
$$\frac{\partial U}{\partial r}(r,\theta) = \begin{cases} \tfrac12 r^{-1}(-\log r)^{-3/2}, & 0 < r < 1/2, \\ -4(\log 2)^{-1/2}, & 1/2 < r < 3/4, \\ 0, & 3/4 < r < 1, \end{cases}$$
so that $U \in H^1(D)$. Indeed,
$$\int_D |U(r,\theta)|^2\,dx\,dy = \pi \int_0^{1/2} \frac{r}{-\log r}\,dr + \pi \int_{1/2}^{3/4} \frac{(3-4r)^2}{\log 2}\, r\,dr < \infty$$
and
$$\int_D \Big| \frac{\partial U}{\partial r}(r,\theta) \Big|^2 dx\,dy = \frac{\pi}{4} \int_0^{1/2} \frac{dr}{r(-\log r)^3} + \frac{16\pi}{\log 2} \int_{1/2}^{3/4} r\,dr < \infty.$$
It follows from (A.95) that $V_\varepsilon, U_\varepsilon \in H^1(D)$ and
$$\|V_\varepsilon\|_{H^1(D)} \lesssim \|U_\varepsilon\|_{H^1(D)} \le \|U\|_{H^1(D)}, \quad 0 < \varepsilon < 1.$$
Consequently, by the definition of the Slobodetski norm and the trace theorem,
$$\|u_\varepsilon\|_{H_S^{1/2}(\Omega)} \le \|u_\varepsilon\|_{H_S^{1/2}(\partial D)} \eqsim \|u_\varepsilon\|_{H^{1/2}(\partial D)} \lesssim \|V_\varepsilon\|_{H^1(D)} \lesssim 1.$$
Since supp(uε ) = [ε , 3/4] × {0}, we have
$$\|u_\varepsilon\|^2_{H_w^{1/2}(\Omega)} = |u_\varepsilon|^2_{H^{1/2}(\Omega)} + \int_{-1}^1 \frac{|u_\varepsilon(r,0)|^2}{\min\{1-r,\,1+r\}}\,dr = |u_\varepsilon|^2_{H^{1/2}(\Omega)} + \int_\varepsilon^{3/4} \frac{|u_\varepsilon(r,0)|^2}{\min\{1-r,\,1+r\}}\,dr.$$
Due to
$$\|u_\varepsilon\|^2_{L^2(\Omega)} = \|u_\varepsilon\|^2_{L^2(\Omega_\varepsilon)} \eqsim \int_\varepsilon^{3/4} \frac{|u_\varepsilon(r,0)|^2}{\min\{1-r,\,1+r\}}\,dr,$$
we deduce
$$\|u_\varepsilon\|^2_{H_w^{1/2}(\Omega)} \eqsim |u_\varepsilon|^2_{H^{1/2}(\Omega)} + \|u_\varepsilon\|^2_{L^2(\Omega)} = \|u_\varepsilon\|^2_{H_S^{1/2}(\Omega)},$$
so that $\|u_\varepsilon\|^2_{H_w^{1/2}(\Omega)} \lesssim 1$. On the other hand, a simple calculation reveals that
$$\|u_\varepsilon\|^2_{H_w^{1/2}(\Omega_\varepsilon)} \ge \int_{\Omega_\varepsilon} \frac{|u_\varepsilon(r,0)|^2}{\operatorname{dist}(r,\partial\Omega_\varepsilon)}\,dr > \int_\varepsilon^{1/2} \frac{|u_\varepsilon(r,0)|^2}{r}\,dr = \log|\log\varepsilon| + \frac{4(\log 2)^{1/2}}{(\log(1/\varepsilon))^{1/2}} - \frac{\log 2}{\log(1/\varepsilon)} - \log|\log 2| - 3.$$
Hence, $\|u_\varepsilon\|_{H_w^{1/2}(\Omega_\varepsilon)} \to \infty$ as $\varepsilon \to 0^+$, while $\|u_\varepsilon\|_{H_w^{1/2}(\Omega)}$ stays bounded. This proves part (ii). □

We now prove the claim (A.94).

Lemma A.14. Let $\Omega'$ and $\Omega$ be two open bounded domains in $\mathbb{R}^d$, $d = 1, 2, 3$, satisfying $\Omega' \Subset \Omega$, and let $x \in \Omega'$.

(i) The following inequality holds:
$$\int_{\Omega\setminus\Omega'} \frac{dy}{|x-y|^{d+1}} \lesssim \frac{1}{\operatorname{dist}(x,\partial\Omega')}.$$

(ii) The opposite inequality is not true, i.e., there is no constant $C$ independent of $x$ such that
$$\frac{1}{\operatorname{dist}(x,\partial\Omega')} \le C \int_{\Omega\setminus\Omega'} \frac{dy}{|x-y|^{d+1}}.$$

Proof. To prove part (i), we first observe that
$$\Omega \setminus \Omega' \subset \mathbb{R}^d \setminus B_{\rho(x)}(x), \quad x \in \Omega',$$
A Interpolation Spaces and Sobolev Spaces
where ρ (x) = dist(x, ∂ Ω ) and Bρ (x) (x) is the ball centred at x having radius ρ (x). By using spherical coordinates centred at x we obtain
Ω \Ω
dy ≤ |x − y|n+1
Rn \Bρ (x) (x)
dy = ωn |x − y|n+1
∞ n−1 r dr
ρ (x)
rn+1
=
ωn . dist(x, ∂ Ω )
To prove part (ii), we revisit the proof of part (i) of Lemma A.13. If the opposite of (A.94) holds, then 1 dy n+1 |x − y| dist(x, ∂ Ω ) Ω \Ω
with constants independent of x and Ω . Consider a function u satisfying the assumptions in Lemma A.13. It follows from the definition of u 1/2 that HS (Ω )
u2 1/2
Hw (Ω )
|u|21/2,Ω +
Ω
Ω \Ω
dy |u(x)|2 dx. |x − y|n+1
This together with equation (A.93) yields u2 1/2
Hw (Ω )
|u|2H 1/2 (Ω ) ≤ u2 1/2
Hw (Ω )
,
which contradicts part (ii) of Lemma A.13.
The equivalence of the $H^s$-seminorm, $s = 1/2, 1$, and the quotient norm is useful for the analysis.

Lemma A.15. (i) Let $I$ be a bounded interval in $\mathbb{R}$. For any $v \in H^{1/2}(I)$, if we define
$$\|v\|_{H_q^{1/2}(I)} := \inf_{\alpha\in\mathbb{R}} \|v + \alpha\|_{H_w^{1/2}(I)},$$
then
$$|v|_{H^{1/2}(I)} \lesssim \|v\|_{H_q^{1/2}(I)} \lesssim |v|_{H^{1/2}(I)}.$$

(ii) Let $R$ be a bounded domain in $\mathbb{R}^2$. For any $v \in H^1(R)$, if we define
$$\|v\|_{H_q^1(R)} := \inf_{\alpha\in\mathbb{R}} {}^*\|v + \alpha\|_{H_w^1(R)},$$
then
$$|v|_{H^1(R)} \lesssim \|v\|_{H_q^1(R)} \lesssim |v|_{H^1(R)}.$$
The constants are independent of $v$ and the size of $I$ and $R$. Here the weighted norms $\|\cdot\|_{H_w^{1/2}(I)}$ and ${}^*\|\cdot\|_{H_w^1(R)}$ are defined in Subsection A.2.6.1.
Proof. It suffices to prove Part (i); Part (ii) is proved in exactly the same manner. It follows from Lemma A.4 and Lemma A.5 that the seminorm $|\cdot|_{H^{1/2}(I)}$ and the quotient norm $\|\cdot\|_{H_q^{1/2}(I)}$ are invariant under scaling. Hence, to ensure that the constants are independent of the size of $I$, we prove the inequalities for $I = (0,1)$. The left inequality is obvious from the definition of the weighted norm $\|\cdot\|_{H_w^{1/2}(I)}$. The right inequality can be obtained by using the same argument as in the proof of [60, Theorem 3.1.1]. □

We finish this subsection with a lemma on different forms of the $H^{1/2}$-norm.

Lemma A.16. Let $I = (0,H)$ and $R = I \times I$ for some $H > 0$. Then for any $v \in H^{1/2}(R)$,
$$\int_I \|v(x,\cdot)\|^2_{H_w^{1/2}(I)}\,dx + \int_I \|v(\cdot,y)\|^2_{H_w^{1/2}(I)}\,dy \lesssim \|v\|^2_{H_w^{1/2}(R)}, \tag{A.96}$$
$$|v|^2_{H^{1/2}(R)} \le 4\int_R \frac{\|v(x_1,\cdot) - v(x_2,\cdot)\|^2_{L^2(I)}}{|x_1-x_2|^2}\,dx_1\,dx_2 + 4\int_R \frac{\|v(\cdot,y_1) - v(\cdot,y_2)\|^2_{L^2(I)}}{|y_1-y_2|^2}\,dy_1\,dy_2, \tag{A.97}$$
and
$$\|v\|^2_{H_w^{1/2}(R)} \eqsim \|v\|^2_{L_w^2(R)} + \int_R \frac{\|v(x_1,\cdot) - v(x_2,\cdot)\|^2_{L^2(I)}}{|x_1-x_2|^2}\,dx_1\,dx_2 + \int_R \frac{\|v(\cdot,y_1) - v(\cdot,y_2)\|^2_{L^2(I)}}{|y_1-y_2|^2}\,dy_1\,dy_2, \tag{A.98}$$
where $\|v\|^2_{L_w^2(R)} := \frac1H \|v\|^2_{L^2(R)}$. Moreover, if $v \in \widetilde H^{1/2}(R)$ then
$$\|v\|^2_{H_w^{1/2}(R)} \lesssim \int_I \|v(x,\cdot)\|^2_{H_w^{1/2}(I)}\,dx + \int_I \|v(\cdot,y)\|^2_{H_w^{1/2}(I)}\,dy \tag{A.99}$$
and
$$\int_I \|v(x,\cdot)\|^2_{H_w^{1/2}(I)}\,dx + \int_I \|v(\cdot,y)\|^2_{H_w^{1/2}(I)}\,dy \eqsim \|v\|^2_{L_w^2(R)} + \|v\|^2_{H_w^{1/2}(R)}. \tag{A.100}$$
The constants are independent of $H$ and $v$.
Proof. Inequality (A.98) is partially proved in [163, Lemma 5.3, page 95]; see also [163, Exercise 5.1, page 96]. We present the full proof here for completeness.

Let $\hat v$ be defined by $\hat v(\hat x, \hat y) = v(H\hat x, H\hat y)$ for all $(\hat x, \hat y) \in \hat R = (0,1) \times (0,1)$, and set $\hat I = (0,1)$. It follows from Lemma A.5 (i) that
$$\|\hat v(\hat x, \cdot)\|^2_{H_w^{1/2}(\hat I)} \eqsim \|v(H\hat x, \cdot)\|^2_{H_w^{1/2}(I)}.$$
Since
$$\int_I \|v(x,\cdot)\|^2_{H_w^{1/2}(I)}\,dx = H \int_{\hat I} \|\hat v(\hat x,\cdot)\|^2_{H_w^{1/2}(\hat I)}\,d\hat x \quad\text{and}\quad \|v\|^2_{H_w^{1/2}(R)} = H \|\hat v\|^2_{H_w^{1/2}(\hat R)},$$
it suffices to prove (A.96) with $H = 1$. Similarly,
$$\|v\|^2_{L_w^2(R)} = \frac{\|v\|^2_{L^2(R)}}{H} = H \|\hat v\|^2_{L^2(\hat R)} = H \|\hat v\|^2_{L_w^2(\hat R)},$$
$$\int_R \frac{\|v(x_1,\cdot) - v(x_2,\cdot)\|^2_{L^2(I)}}{|x_1-x_2|^2}\,dx_1\,dx_2 = H \int_{\hat R} \frac{\|\hat v(\hat x_1,\cdot) - \hat v(\hat x_2,\cdot)\|^2_{L^2(\hat I)}}{|\hat x_1-\hat x_2|^2}\,d\hat x_1\,d\hat x_2, \\
\int_R \frac{\|v(\cdot,y_1) - v(\cdot,y_2)\|^2_{L^2(I)}}{|y_1-y_2|^2}\,dy_1\,dy_2 = H \int_{\hat R} \frac{\|\hat v(\cdot,\hat y_1) - \hat v(\cdot,\hat y_2)\|^2_{L^2(\hat I)}}{|\hat y_1-\hat y_2|^2}\,d\hat y_1\,d\hat y_2.$$
Hence it suffices to prove (A.97) and (A.98) with $H = 1$.

Finally, Lemma A.5 (ii) gives $\|v\|^2_{H_w^{1/2}(R)} = H \|\hat v\|^2_{H_w^{1/2}(\hat R)}$. On the other hand,
$$\int_I \|v(x,\cdot)\|^2_{H_w^{1/2}(I)}\,dx = H \int_{\hat I} \|\hat v(\hat x,\cdot)\|^2_{H_w^{1/2}(\hat I)}\,d\hat x \quad\text{and}\quad \int_I \|v(\cdot,y)\|^2_{H_w^{1/2}(I)}\,dy = H \int_{\hat I} \|\hat v(\cdot,\hat y)\|^2_{H_w^{1/2}(\hat I)}\,d\hat y.$$
v(x, ·)2 1/2
Hw (I)
dx
I
HI
I
=
∞ I
where, for almost all x ∈ I,
v(x, ·)2 1/2
0
(I)
dx
|K(t, v(x, ·))|2
dt dx t2
(A.101)
A.2 Sobolev Spaces
475
|K(t, v(x, ·))|2 =
inf
(w1 (x),w2 (x))∈X (v,x)
w1 (x, ·)2L2 (I) + t 2 w2 (x, ·)2H 1 (I)
with X (v, x) := If we define
3
4 w1 (x, ·), w2 (x, ·) ∈ L2 (I) × H 1 (I) : v(x, ·) = w1 (x, ·) + w2 (x, ·) .
|K ∗ (t, v)|2 = with
inf
(w1 ,w2 )∈X (v)
w1 2L2 (R) + t 2 w2 2H 1 (R)
4 3 X (v) := (w1 , w2 ) ∈ L2 (R) × H 1 (R) : v = w1 + w2 ,
then |K(t, v(x, ·))| ≤ |K ∗ (t, v)| for almost all x ∈ I. Hence (A.101) implies
v(x, ·)2 1/2
Hw (I)
dx ≤
I
1 ∞ 0
Similarly,
|K ∗ (t, v)|2
0
v(·, y)2 1/2
Hw (I)
dt dx = v2 1/2 v2 1/2 . HI (R) Hw (R) t2
dy v2 1/2
Hw (R)
,
I
implying (A.96). Next we prove (A.97). Recalling the definition of the H 1/2 -seminorm and using the triangle and Cauchy–Schwarz inequalities we obtain, with x1 = (x1 , y1 ) and x2 = (x2 , y2 ), |v|2H 1/2 (R) =
R
R
|v(x1 ) − v(x2 )|2 dx1 dx2 ≤ 2(T1 + T2 ) |x1 − x2 |3
(A.102)
where T1 =
=
1
R
R
R
0
and T2 =
R
R
|v(x1 , y1 ) − v(x2 , y1 )|2 dx1 dx2 |x1 − x2 |3
|v(x1 , y1 ) − v(x2 , y1 )|2
1 0
dy2 ( (3/2 dy1 dx1 dx2 ((x1 − x2 )2 + (y1 − y2 )2 (
|v(x2 , y1 ) − v(x2 , y2 )|2 dx1 dx2 |x1 − x2 |3
476
A Interpolation Spaces and Sobolev Spaces
=
1 R
|v(x2 , y1 ) − v(x2 , y2 )|2
0
1 0
dx1 ( (3/2 dx2 dy1 dy2 . ((x1 − x2 )2 + (y1 − y2 )2 (
Consider first $T_1$. Simple calculations reveal that
$$\int_0^1 \frac{dy_2}{\big( a^2 + (y_1-y_2)^2 \big)^{3/2}} = \frac{1}{a^2} \bigg( \frac{1-y_1}{\sqrt{a^2 + (1-y_1)^2}} + \frac{y_1}{\sqrt{a^2 + y_1^2}} \bigg),$$
where we shortened the notation by writing $a^2$ in lieu of $(x_1-x_2)^2$. Let
$$f(t) = \frac{t}{\sqrt{a^2 + t^2}} \quad\text{and}\quad g(t) = f(t) + f(1-t), \quad t \in [0,1].$$
It can be easily shown that
$$\frac{1}{\sqrt{a^2+1}} = g(0) \le g(t) \le g(1/2) = \frac{1}{\sqrt{a^2 + 1/4}} \quad \forall t \in [0,1].$$
Since $a^2 = (x_1-x_2)^2 \in [0,1]$ it follows that
$$\frac{1}{\sqrt2} \le g(t) \le 2 \quad \forall t \in [0,1],\ \forall a^2 \in [0,1].$$
Hence
$$\frac{1}{\sqrt2\, a^2} \le \int_0^1 \frac{dy_2}{\big( (x_1-x_2)^2 + (y_1-y_2)^2 \big)^{3/2}} = \frac{g(y_1)}{a^2} \le \frac{2}{a^2}. \tag{A.103}$$
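The two-sided bound on $g$ used in (A.103) is elementary to verify numerically; the following illustrative sketch (grid sizes are my own choice) samples $g(t) = f(t) + f(1-t)$ with $f(t) = t/\sqrt{a^2+t^2}$ over $t \in [0,1]$ and $a^2 \in (0,1]$.

```python
import math

def g(t, a2):
    """g(t) = f(t) + f(1-t) with f(u) = u / sqrt(a2 + u^2)."""
    f = lambda u: u / math.sqrt(a2 + u * u)
    return f(t) + f(1.0 - t)

# Sample t on [0,1] and a2 on (0,1]; a2 = 0, t = 0 is excluded (0/0 there).
samples = [g(i / 200.0, (j + 1) / 200.0)
           for i in range(201) for j in range(200)]
lo, hi = min(samples), max(samples)
print(lo, hi)   # expect 1/sqrt(2) <= lo and hi <= 2
```

The minimum is attained at $t = 0$, $a^2 = 1$ (value $1/\sqrt2$) and the maximum approaches $2$ at $t = 1/2$ as $a^2 \to 0$, matching the bounds in the text.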
Consequently, recalling that $a^2 = (x_1-x_2)^2$,
$$T_1 \le 2\int_R \frac{\|v(x_1,\cdot) - v(x_2,\cdot)\|^2_{L^2(I)}}{|x_1-x_2|^2}\,dx_1\,dx_2.$$
Similarly,
$$\frac{1}{\sqrt2\,|y_1-y_2|^2} \le \int_0^1 \frac{dx_1}{\big( (x_1-x_2)^2 + (y_1-y_2)^2 \big)^{3/2}} \le \frac{2}{|y_1-y_2|^2}, \tag{A.104}$$
so that
$$T_2 \le 2\int_R \frac{\|v(\cdot,y_1) - v(\cdot,y_2)\|^2_{L^2(I)}}{|y_1-y_2|^2}\,dy_1\,dy_2.$$
It follows from (A.102) that
It follows from (A.102) that
$$|v|^2_{H^{1/2}(R)} \le 4\int_R \frac{\|v(x_1,\cdot)-v(x_2,\cdot)\|^2_{L^2(I)}}{|x_1-x_2|^2}\,dx_1\,dx_2 + 4\int_R \frac{\|v(\cdot,y_1)-v(\cdot,y_2)\|^2_{L^2(I)}}{|y_1-y_2|^2}\,dy_1\,dy_2,$$
proving (A.97).

The inequality in (A.98) is a direct consequence of (A.97) and the definitions of the weighted norms. To prove the opposite inequality we use (A.96) to obtain
$$\int_R \frac{\|v(x_1,\cdot)-v(x_2,\cdot)\|^2_{L^2(I)}}{|x_1-x_2|^2}\,dx_1\,dx_2 + \int_R \frac{\|v(\cdot,y_1)-v(\cdot,y_2)\|^2_{L^2(I)}}{|y_1-y_2|^2}\,dy_1\,dy_2$$
$$= \int_I |v(\cdot,y)|^2_{H^{1/2}(I)}\,dy + \int_I |v(x,\cdot)|^2_{H^{1/2}(I)}\,dx \le \int_I \|v(\cdot,y)\|^2_{H^{1/2}_w(I)}\,dy + \int_I \|v(x,\cdot)\|^2_{H^{1/2}_w(I)}\,dx \lesssim \|v\|^2_{H^{1/2}_w(R)}.$$
It is obvious that $\|v\|^2_{L^2(R)} \le \|v\|^2_{H^{1/2}_w(R)}$, yielding (A.98).
In order to prove (A.99) we note that the definition of the $H^{1/2}_w$-norm and (A.97) yield
$$\|v\|^2_{H^{1/2}_w(R)} \lesssim \int_R \frac{\|v(x_1,\cdot)-v(x_2,\cdot)\|^2_{L^2(I)}}{|x_1-x_2|^2}\,dx_1\,dx_2 + \int_R \frac{\|v(\cdot,y_1)-v(\cdot,y_2)\|^2_{L^2(I)}}{|y_1-y_2|^2}\,dy_1\,dy_2 + \int_R \frac{|v(\mathbf x)|^2}{\operatorname{dist}(\mathbf x,\partial R)}\,d\mathbf x,$$
where $\mathbf x=(x,y)\in R$. Since
$$\int_R \frac{\|v(x_1,\cdot)-v(x_2,\cdot)\|^2_{L^2(I)}}{|x_1-x_2|^2}\,dx_1\,dx_2 = \int_I |v(\cdot,y)|^2_{H^{1/2}(I)}\,dy, \qquad \int_R \frac{\|v(\cdot,y_1)-v(\cdot,y_2)\|^2_{L^2(I)}}{|y_1-y_2|^2}\,dy_1\,dy_2 = \int_I |v(x,\cdot)|^2_{H^{1/2}(I)}\,dx,$$
it suffices to prove
$$\int_R \frac{|v(\mathbf x)|^2}{\operatorname{dist}(\mathbf x,\partial R)}\,d\mathbf x \simeq \int_I\int_I \frac{|v(x,y)|^2}{\operatorname{dist}(x,\partial I)}\,dx\,dy + \int_I\int_I \frac{|v(x,y)|^2}{\operatorname{dist}(y,\partial I)}\,dy\,dx. \tag{A.105}$$
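As a quick numerical illustration of the two-sided bound (A.105), both sides can be compared by quadrature; the sample function and midpoint grid below are assumptions for the illustration, not from the book. (Pointwise, $\operatorname{dist}(\mathbf x,\partial R)=\min\{d_x,d_y\}$ with $d_x=\operatorname{dist}(x,\partial I)$, $d_y=\operatorname{dist}(y,\partial I)$, and $1/\min\{d_x,d_y\}\le 1/d_x+1/d_y\le 2/\min\{d_x,d_y\}$, so the ratio must lie in $[1,2]$.)

```python
import numpy as np

# Illustration (assumed sample function v = x(1-x)y(1-y), midpoint quadrature):
# compare the two sides of the equivalence (A.105) on the unit square.
n = 400
t = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(t, t, indexing='ij')
V2 = (X * (1 - X) * Y * (1 - Y))**2

dist_R = np.minimum(np.minimum(X, 1 - X), np.minimum(Y, 1 - Y))
lhs = (V2 / dist_R).mean()                                     # left-hand side
rhs = (V2 / np.minimum(X, 1 - X)).mean() + (V2 / np.minimum(Y, 1 - Y)).mean()

ratio = rhs / lhs
assert 1.0 <= ratio <= 2.0   # consistent with the pointwise bound above
```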
We partition the square R into four equal triangles by using the diagonals of R. Label them by $R_1$, $R_2$, $R_3$, and $R_4$, starting from the bottom triangle and moving counterclockwise; see Figure A.2.

[Figure A.2 not reproduced here.]

Fig. A.2 $R = R_1\cup\cdots\cup R_4$, $R_1 = R_{11}\cup R_{12}$, $R_2 = R_{21}\cup R_{22}$, $R_3 = R_{31}\cup R_{32}$, $R_4 = R_{41}\cup R_{42}$
Let
$$x^* := \max\{x,1-x\}, \quad x_* := \min\{x,1-x\}, \quad y^* := \max\{y,1-y\}, \quad y_* := \min\{y,1-y\}.$$
Then
$$\int_R \frac{|v(\mathbf x)|^2}{\operatorname{dist}(\mathbf x,\partial R)}\,d\mathbf x = \int_{R_1} \frac{|v(x,y)|^2}{y}\,dx\,dy + \int_{R_2} \frac{|v(x,y)|^2}{1-x}\,dx\,dy + \int_{R_3} \frac{|v(x,y)|^2}{1-y}\,dx\,dy + \int_{R_4} \frac{|v(x,y)|^2}{x}\,dx\,dy$$
$$= \int_I\int_0^{x_*} \frac{|v(x,y)|^2}{y}\,dy\,dx + \int_I\int_{y^*}^1 \frac{|v(x,y)|^2}{1-x}\,dx\,dy + \int_I\int_{x^*}^1 \frac{|v(x,y)|^2}{1-y}\,dy\,dx + \int_I\int_0^{y_*} \frac{|v(x,y)|^2}{x}\,dx\,dy.$$
Noting that $0\le x_*,y_*\le 1/2$ and $1/2\le x^*,y^*\le 1$, we deduce
$$\int_R \frac{|v(\mathbf x)|^2}{\operatorname{dist}(\mathbf x,\partial R)}\,d\mathbf x \le \int_I\int_0^{1/2} \frac{|v(x,y)|^2}{y}\,dy\,dx + \int_I\int_{1/2}^1 \frac{|v(x,y)|^2}{1-x}\,dx\,dy + \int_I\int_{1/2}^1 \frac{|v(x,y)|^2}{1-y}\,dy\,dx + \int_I\int_0^{1/2} \frac{|v(x,y)|^2}{x}\,dx\,dy$$
$$= \int_I\int_I \frac{|v(x,y)|^2}{\operatorname{dist}(y,\partial I)}\,dy\,dx + \int_I\int_I \frac{|v(x,y)|^2}{\operatorname{dist}(x,\partial I)}\,dx\,dy.$$
It remains to prove the opposite inequality of (A.105). Consider first the first term on the right-hand side of (A.105). By Fubini's Theorem, recalling the definition of the triangles $R_1,\dots,R_4$ in Figure A.2, we have
$$\int_I\int_I \frac{|v(x,y)|^2}{\operatorname{dist}(x,\partial I)}\,dx\,dy = \int_0^{1/2}\int_0^1 \frac{|v(x,y)|^2}{x}\,dy\,dx + \int_{1/2}^1\int_0^1 \frac{|v(x,y)|^2}{1-x}\,dy\,dx$$
$$= \int_0^{1/2}\left( \int_0^{x} \frac{|v(x,y)|^2}{x}\,dy + \int_x^{1-x} \frac{|v(x,y)|^2}{x}\,dy + \int_{1-x}^1 \frac{|v(x,y)|^2}{x}\,dy \right) dx$$
$$\quad + \int_{1/2}^1\left( \int_0^{1-x} \frac{|v(x,y)|^2}{1-x}\,dy + \int_{1-x}^{x} \frac{|v(x,y)|^2}{1-x}\,dy + \int_{x}^1 \frac{|v(x,y)|^2}{1-x}\,dy \right) dx$$
$$\le \int_0^{1/2}\left( \int_0^{x} \frac{|v(x,y)|^2}{y}\,dy + \int_x^{1-x} \frac{|v(x,y)|^2}{x}\,dy + \int_{1-x}^1 \frac{|v(x,y)|^2}{1-y}\,dy \right) dx$$
$$\quad + \int_{1/2}^1\left( \int_0^{1-x} \frac{|v(x,y)|^2}{y}\,dy + \int_{1-x}^{x} \frac{|v(x,y)|^2}{1-x}\,dy + \int_{x}^1 \frac{|v(x,y)|^2}{1-y}\,dy \right) dx$$
$$= \int_{R_{11}} \frac{|v(x,y)|^2}{y}\,dx\,dy + \int_{R_4} \frac{|v(x,y)|^2}{x}\,dx\,dy + \int_{R_{32}} \frac{|v(x,y)|^2}{1-y}\,dx\,dy + \int_{R_{12}} \frac{|v(x,y)|^2}{y}\,dx\,dy + \int_{R_2} \frac{|v(x,y)|^2}{1-x}\,dx\,dy + \int_{R_{31}} \frac{|v(x,y)|^2}{1-y}\,dx\,dy$$
$$= \sum_{i=1}^4 \int_{R_i} \frac{|v(\mathbf x)|^2}{\operatorname{dist}(\mathbf x,\partial R)}\,d\mathbf x = \int_R \frac{|v(\mathbf x)|^2}{\operatorname{dist}(\mathbf x,\partial R)}\,d\mathbf x.$$
The same estimate can be derived for the second term on the right-hand side of (A.105), proving this equivalence. Inequality (A.100) is a direct consequence of (A.98) and (A.105).
A.2.11 Sobolev spaces on curves and surfaces

Since this book involves boundary integral equations, we mostly work with Sobolev spaces on curves and surfaces. It is worth stating the definition and properties of these spaces here.

Let Γ be the boundary of a Lipschitz domain Ω in $\mathbb R^d$, d = 2, 3. Then there is a surface measure σ, and the outward unit normal n exists σ-almost everywhere on Γ. For $s\in[0,1]$, the space $H^s(\Gamma)$ is defined via a partition of unity subordinate to an atlas of local coordinate patches. Roughly speaking, as a point on Γ can be represented (via local charts) by $(x',\zeta(x'))$ with $x'\in\mathbb R^{d-1}$, where $\zeta : x'\mapsto\zeta(x')\in\mathbb R$ is a Lipschitz function, we can define $u_\zeta(x') := u(x',\zeta(x'))$. Then $u\in H^s(\Gamma)$ if $u\in L^2(\Gamma)$ and $u_\zeta\in H^s(\mathbb R^{d-1})$ (via local charts). A more precise definition can be found in [147]. The space $H^{-s}(\Gamma)$ is defined by duality:
$$\|u\|_{H^{-s}(\Gamma)} = \sup_{0\ne v\in H^{s}(\Gamma)} \frac{|\langle u,v\rangle_\Gamma|}{\|v\|_{H^{s}(\Gamma)}}, \qquad -1\le s\le 1.$$
If Γ is an open subset of the boundary $\widetilde\Gamma$ of a Lipschitz domain Ω (i.e., Γ is an open curve if d = 2 and an open surface if d = 3), then we define
$$H^{s}(\Gamma) := \big\{ U|_\Gamma : U\in H^{s}(\widetilde\Gamma) \big\}, \qquad \widetilde H^{s}(\Gamma) := \text{closure of } \mathcal D(\Gamma) \text{ in } H^{s}(\widetilde\Gamma), \qquad H_0^{s}(\Gamma) := \text{closure of } \mathcal D(\Gamma) \text{ in } H^{s}(\Gamma).$$
Here, $\mathcal D(\Gamma)$ is defined by
$$\mathcal D(\Gamma) := \big\{ u\in\mathcal D(\widetilde\Gamma) : \operatorname{supp} u\subset\Gamma \big\},$$
while $\mathcal D(\widetilde\Gamma)$ is defined by (A.3).

All the properties of Sobolev spaces on Lipschitz domains in $\mathbb R^d$ carry over to Sobolev spaces on Γ when the order s belongs to $[-1,1]$. In particular, in parallel
to (A.18) we have
$$\|u\|_{H^s(\Gamma)} \simeq \|u\|_{H^s(\widetilde\Gamma)} \qquad \forall u\in\widetilde H^s(\Gamma). \tag{A.106}$$
Finally, we state the following trace result proved by Costabel [63].

Lemma A.17. Let Γ be the boundary of a Lipschitz domain Ω in $\mathbb R^d$. Then the trace operator $\gamma : \mathcal D(\overline\Omega)\to\mathcal D(\Gamma)$, defined by $\gamma u := u|_\Gamma$, has a unique extension to a bounded operator $\gamma : H^s(\Omega)\to H^{s-1/2}(\Gamma)$ for $1/2<s<3/2$, which has a continuous right inverse.

As a consequence of this lemma, we have, in addition to (A.21),
$$H_0^s(\Omega) = \big\{ u\in H^s(\Omega) : \gamma(u)=0 \big\}, \qquad \tfrac12<s\le1.$$
We end this section with a result on Sobolev embedding.

Lemma A.18. Let Γ be a bounded, $(d-1)$-dimensional, open or closed manifold in $\mathbb R^d$, where $d\ge2$.
(i) If $0\le 2s<d-1$ and $q=2(d-1)/(d-1-2s)$, then $q\in[2,\infty)$ and
$$\|v\|_{L^q(\Gamma)} \lesssim \|v\|_{H^s(\Gamma)} \qquad \forall v\in H^s(\Gamma).$$
(ii) If $2s=d-1$, then for all $q\in[2,\infty)$,
$$\|v\|_{L^q(\Gamma)} \lesssim \sqrt q\,\|v\|_{H^s(\Gamma)} \qquad \forall v\in H^s(\Gamma),$$
where the constant is independent of q.
(iii) If $0\le 2s<d-1$ and $q=2(d-1)/(d-1+2s)$, then $q\in(1,2]$ and
$$\|v\|_{H^{-s}(\Gamma)} \lesssim \|v\|_{L^q(\Gamma)} \qquad \forall v\in L^q(\Gamma).$$
(iv) If $2s=d-1$, then for all $q\in(1,2]$,
$$\|v\|_{H^{-s}(\Gamma)} \lesssim \sqrt{\frac{q}{q-1}}\,\|v\|_{L^q(\Gamma)} \qquad \forall v\in L^q(\Gamma),$$
where the constant is independent of q.
Proof. The result is proved in [5, Theorem 4.2]. We include the proof here for completeness. Part (i) follows immediately from the embedding $H^s(\Gamma)\hookrightarrow L^q(\Gamma)$ when $q=2(d-1)/(d-1-2s)$; see e.g. [1]. To prove Part (ii), we first note that for any $q\ge2$ the following estimate holds:
$$\|v\|^2_{L^q(\mathbb R^{d-1})} \lesssim q\,\|v\|^2_{H^{(d-1)/2}(\mathbb R^{d-1})},$$
where the constant is independent of q. The analogous result holds for Γ thanks to its construction in terms of local coordinate patches in $\mathbb R^{d-1}$. Part (iii) (respectively, Part (iv)) is a result of Part (i) (respectively, Part (ii)) obtained by using Hölder's inequality and the duality properties (A.24).
A.2.12 A generalised antiderivative operator in Sobolev spaces

We need to create a mechanism to go back and forth between the $\widetilde H^{-1/2}$-norm and the $\widetilde H^{1/2}$-norm in order to carry over some results proved for the hypersingular integral equation to the weakly-singular integral equation. This can be done by introducing a generalised antiderivative operator, which was first introduced in [209] and further developed in [101]. We recall here the results on that operator.

Lemma A.19. Let $\Gamma=(a,b)$ be an interval in $\mathbb R$. For all $0\le s\le1$ the following statements are valid:
(i) The generalised differentiation operator $D_s : \widetilde H^s(\Gamma)\to\widetilde H^{s-1}(\Gamma)$ is bounded with norm independent of the measure of Γ.
(ii) There exists a bounded linear operator $I_s : \widetilde H_0^{s-1}(\Gamma)\to\widetilde H^s(\Gamma)$ satisfying $D_sI_s\chi=\chi$ for any $\chi\in\widetilde H_0^{s-1}(\Gamma)$, where
$$\widetilde H_0^{s-1}(\Gamma) := \big\{ \chi\in\widetilde H^{s-1}(\Gamma) : \langle\chi,1\rangle_{\widetilde H^{s-1},H^{1-s}} = 0 \big\}. \tag{A.107}$$
Proof. Part (i) is proved in [209, Lemma 3.5] and Part (ii) in [101, Lemma 3]. The proof is included here for completeness. The idea is to define $D_0$, $D_1$, $I_0$, and $I_1$ by using the usual differentiation and integration operators, together with their adjoints, then use interpolation to define $D_s$ and $I_s$ for $s\in(0,1)$.

Proof of (i): Let $D : H^1(\Gamma)\to H^0(\Gamma)$ be the weak derivative operator $D\chi=\chi'$. Then D is bounded and $\|D\|=1$. If $D^* : H^0(\Gamma)\to\widetilde H^{-1}(\Gamma)$ denotes the adjoint of D,
$$\langle D^*\chi,\varphi\rangle_{\widetilde H^{-1},H^1} = \langle\chi,D\varphi\rangle_{H^0} \qquad \forall\chi\in H^0(\Gamma),\ \varphi\in H^1(\Gamma),$$
then $D^*$ is bounded and $\|D^*\|=\|D\|$.

Similarly, let $I : H^0(\Gamma)\to H^1(\Gamma)$ be the integral operator defined for any $\chi\in H^0(\Gamma)$ by
$$I\chi(x) := \int_a^x \chi(y)\,dy, \qquad x\in\Gamma.$$
Since
$$\|I\chi\|^2_{H^0(\Gamma)} = \int_a^b \Big|\int_a^x \chi(y)\,dy\Big|^2\,dx \le \int_a^b (x-a)\,dx \int_a^b |\chi(y)|^2\,dy = \frac{(b-a)^2}{2}\|\chi\|^2_{H^0(\Gamma)},$$
we have
$$\|I\chi\|^2_{H^1(\Gamma)} = \|I\chi\|^2_{H^0(\Gamma)} + \|\chi\|^2_{H^0(\Gamma)} \le \frac{(b-a)^2+2}{2}\|\chi\|^2_{H^0(\Gamma)}.$$
The adjoint $I^* : \widetilde H^{-1}(\Gamma)\to H^0(\Gamma)$ of I, defined by
$$\langle I^*\chi,\varphi\rangle_{H^0} = \langle\chi,I\varphi\rangle_{\widetilde H^{-1},H^1} \qquad \forall\chi\in\widetilde H^{-1}(\Gamma),\ \varphi\in H^0(\Gamma),$$
is also bounded with $\|I^*\|=\|I\|$.

We now prove the first statement of the lemma. Consider the case $s=1$. The operator $D_1 : \widetilde H^1(\Gamma)\to H^0(\Gamma)$ is defined to be $D_1 := D|_{\widetilde H^1(\Gamma)}$. Clearly, $D_1$ is bounded and
$$\|D_1\chi\|_{H^0(\Gamma)} = \|\chi\|_{\widetilde H_I^1(\Gamma)}, \tag{A.108}$$
implying $\|D_1\|=1$. Now for $s=0$ we define $D_0 : \widetilde H^0(\Gamma)\to\widetilde H^{-1}(\Gamma)$ by $D_0 := -D^*$. Then $\|D_0\|=\|D^*\|=1$. Moreover, if $\chi,\varphi\in\widetilde H^1(\Gamma)$ then by using integration by parts we deduce
$$\langle\chi,D_0\varphi\rangle_{H^1,\widetilde H^{-1}} = -\langle D_1\chi,\varphi\rangle_{H^0} = -\int_\Gamma \chi'(x)\varphi(x)\,dx = \int_\Gamma \chi(x)\varphi'(x)\,dx = \langle\chi,D_1\varphi\rangle_{H^1,\widetilde H^{-1}}.$$
This implies $D_0\varphi=D_1\varphi$ for all $\varphi\in\widetilde H^1(\Gamma)$, i.e., $D_1$ is the restriction of $D_0$ onto $\widetilde H^1(\Gamma)$. By interpolation (see Theorem A.4), for each $0<s<1$ there exists a unique $D_s : \widetilde H^s(\Gamma)\to\widetilde H^{s-1}(\Gamma)$ with $\|D_s\chi\|_{\widetilde H_I^{s-1}(\Gamma)}\le\|\chi\|_{\widetilde H_I^s(\Gamma)}$, implying $\|D_s\|\le1$. Moreover, $D_1\varphi=D_s\varphi$ for all $\varphi\in\widetilde H^1(\Gamma)$.
Proof of (ii): To prove the second part of the lemma, we start again with $s=1$ and define $I_1 : \widetilde H_0^0(\Gamma)\to\widetilde H^1(\Gamma)$ by $I_1 := I|_{\widetilde H_0^0(\Gamma)}$. Since
$$I_1\chi(a)=0 \quad\text{and}\quad I_1\chi(b)=\langle\chi,1\rangle_{H^0}=0,$$
$I_1$ maps $\widetilde H_0^0(\Gamma)$ into $\widetilde H^1(\Gamma)$. By the definition of the $\widetilde H_I^s$-norm, we have
$$\|I_1\chi\|_{\widetilde H_I^1(\Gamma)} = |I_1\chi|_{H^1(\Gamma)} = \|\chi\|_{H^0(\Gamma)}.$$
Hence, $I_1$ is bounded as an operator from $\widetilde H_0^0(\Gamma)$ into $\widetilde H^1(\Gamma)$ with $\|I_1\|=1$. For $s=0$, we define $I_0 : \widetilde H_0^{-1}(\Gamma)\to\widetilde H^0(\Gamma)$ by $I_0 := I^*|_{\widetilde H_0^{-1}(\Gamma)}$. Then $\|I_0\|\le1$. By considering the functional $\chi\mapsto\langle\chi,1\rangle$ we can show that $\widetilde H_0^{s-1}(\Gamma)$ is the interpolation space between $\widetilde H_0^0(\Gamma)$ and $\widetilde H_0^{-1}(\Gamma)$. Hence, there exists $I_s : \widetilde H_0^{s-1}(\Gamma)\to\widetilde H^s(\Gamma)$ with $\|I_s\|\le1$ such that $I_s\chi=I_1\chi$ for all $\chi\in\widetilde H_0^0(\Gamma)$.

Finally, we prove that $D_sI_s\chi=\chi$. First we note that $D_1I_1\chi=\chi$ for all $\chi\in\widetilde H_0^0(\Gamma)$ (because $DI\chi=\chi$ for all $\chi\in H^0(\Gamma)$). Next we show that $D_0I_0\chi=\chi$ for all $\chi\in\widetilde H_0^{-1}(\Gamma)$. By the definition of $D_0$ and $I_0$ we have, noting that $\widetilde H_0^{-1}\subset\widetilde H^{-1}\subset H^{-1}$,
$$\langle D_0I_0\chi,\varphi\rangle_{H^{-1},H^1} = \langle\chi,I_1D_1\varphi\rangle_{H^{-1},H^1} \qquad\forall\varphi\in\widetilde H^1(\Gamma).$$
Hence it suffices to prove $I_1D_1\varphi=\varphi$ for all $\varphi\in\widetilde H^1(\Gamma)$. Note that $I_1D_1\varphi$ is well defined for $\varphi\in\widetilde H^1(\Gamma)$ because
$$\langle D_1\varphi,1\rangle_{H^0} = \varphi(b)-\varphi(a) = 0,$$
so that $D_1\varphi\in\widetilde H_0^0(\Gamma)$. Let $\psi=I_1D_1\varphi$. Then $D_1\psi=D_1\varphi$ and thus $\psi=\varphi$ because both functions vanish at the endpoints a and b. By interpolation, $D_sI_s\chi=\chi$ for all $\chi\in\widetilde H_0^{s-1}(\Gamma)$.
Remark A.4. 1. We note that for any $s\in[0,1)$, $I_s$ is the extension of $I_1$ onto $\widetilde H_0^{s-1}(\Gamma)$, i.e., $I_sv=I_1v$ for any $v\in\widetilde H_0^0(\Gamma)$.
2. Recall that $I_1 : \widetilde H_0^0(\Gamma)\to\widetilde H^1(\Gamma)$ is the usual integral operator. Therefore, if v is a piecewise-constant function on Γ, then $I_1v$ is piecewise linear on the same mesh.
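Point 2 of the remark is easy to check numerically; the mesh, element values, and helper function below are assumptions made for the illustration, not from the book.

```python
import numpy as np

# Illustration (assumed setup): Gamma = (0, 1) split into 4 equal elements, and
# v piecewise constant with zero mean, so v lies in the domain of I_1.
breaks = np.linspace(0.0, 1.0, 5)
vals = np.array([1.0, -2.0, 3.0, -2.0])   # element values, mean zero
assert abs(vals.mean()) < 1e-14           # <v, 1> = 0

def I1(x):
    """I_1 v(x) = int_0^x v(y) dy, evaluated on the mesh above."""
    x = np.atleast_1d(x)
    out = np.zeros_like(x, dtype=float)
    acc = 0.0
    for k in range(4):
        a, b = breaks[k], breaks[k + 1]
        inside = (x >= a) & (x <= b)
        out[inside] = acc + vals[k] * (x[inside] - a)   # linear on each element
        acc += vals[k] * (b - a)
    return out

# I_1 v is continuous, piecewise linear on the same mesh, and vanishes at both
# endpoints, consistent with I_1 v lying in the tilde-H^1 space.
print(I1(np.array([0.0, 1.0])))   # both endpoint values vanish
```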
Appendix B
Boundary Integral Operators
This chapter collects some important facts about boundary integral operators which have been used in the book. In the first section, Section B.1, we define the four operators that form the Calderón projector, namely, the weakly-singular integral operator, the hypersingular integral operator, the double-layer potential, and its adjoint. We show how they can be used to solve the Laplace equation on bounded and unbounded domains with Dirichlet and Neumann boundary conditions. We also derive these operators as pseudo-differential operators when the surface is a sphere. In Lemma B.3 we present the coercivity and Gårding inequalities for the bilinear forms defined from the weakly-singular integral operator and the hypersingular integral operator. This section will be useful for the reader who is not familiar with the integral equation method for the solution of boundary value problems. In the second section of this appendix, Section B.2, we discuss general discretised operators and their matrix representations. The results will be applied to the boundary integral operators introduced in Section B.1.
B.1 Boundary Integral Operators and Pseudo-differential Operators on the Sphere

In this section we first define the boundary potentials and boundary integral operators on general boundaries. We then relate these operators to pseudo-differential operators when the boundary is a sphere. These operators on the sphere can be represented in the general form
$$Lv := \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} \widehat L(\ell)\,\widehat v_{\ell m}\,Y_{\ell m};$$
see Section 15.1.2. We will determine the symbol $\widehat L(\ell)$ for each operator. More details can be found in the book [161].
B.1.1 Boundary potentials and boundary integral operators

Let Γ be the boundary of a bounded domain Ω in $\mathbb R^d$, d = 2, 3. We also denote $\Omega^- := \Omega$ and $\Omega^+ := \mathbb R^d\setminus\overline\Omega$. Let n denote the exterior normal to Γ. The fundamental solution of the equation $(\Delta+\kappa^2)u=0$ is given by
$$G(x,y) = \begin{cases} -\dfrac{1}{2\pi}\log|x-y|, & d=2,\ \kappa=0,\\[4pt] \dfrac{i}{4}H_0^{(1)}(\kappa|x-y|), & d=2,\ \kappa\ne0,\\[4pt] \dfrac{1}{4\pi}\dfrac{e^{i\kappa|x-y|}}{|x-y|}, & d=3. \end{cases} \tag{B.1}$$
The single-layer potential $\mathcal S$ and the double-layer potential $\mathcal D$ are defined by
$$\mathcal Sv(x) := 2\int_\Gamma G(x,y)v(y)\,d\sigma_y, \qquad \mathcal Dv(x) := 2\int_\Gamma v(y)\frac{\partial}{\partial n_y}G(x,y)\,d\sigma_y, \qquad x\notin\Gamma,$$
where $n_y$ denotes the exterior normal to Γ at $y\in\Gamma$, and $d\sigma_y$ denotes the arc-length element (d = 2) or surface element (d = 3). Associated with these potentials, we define the following boundary integral operators:
$$\begin{aligned} Vv(x) &= 2\int_\Gamma G(x,y)v(y)\,d\sigma_y, & Kv(x) &= 2\int_\Gamma v(y)\frac{\partial}{\partial n_y}G(x,y)\,d\sigma_y,\\ K'v(x) &= 2\frac{\partial}{\partial n_x}\int_\Gamma G(x,y)v(y)\,d\sigma_y, & Wv(x) &= -2\frac{\partial}{\partial n_x}\int_\Gamma v(y)\frac{\partial}{\partial n_y}G(x,y)\,d\sigma_y, \end{aligned} \qquad x\in\Gamma. \tag{B.2}$$
It is well known that if Γ is smooth then, for all $s\in\mathbb R$, the mappings
$$V : H^{s-1/2}(\Gamma)\to H^{s+1/2}(\Gamma), \qquad W : H^{s+1/2}(\Gamma)\to H^{s-1/2}(\Gamma),$$
$$K : H^{s+1/2}(\Gamma)\to H^{s+1/2}(\Gamma), \qquad K' : H^{s-1/2}(\Gamma)\to H^{s-1/2}(\Gamma)$$
are bounded operators. The operator V is a weakly-singular integral operator and W is a hypersingular integral operator. If Γ is Lipschitz, then the above operators are bounded when $s\in(-1/2,1/2)$; see [63].
It is remarked that our definition (B.2) differs from relevant definitions in some other books and articles, e.g., [128], by a factor of 2. This is to avoid a factor of 1/2 in the following well-known jump relations:
$$(\mathcal Sv)|_\Gamma = Vv, \qquad \partial_n^\pm(\mathcal Sv) = (\mp I+K')v, \qquad (\mathcal Dv)|_\Gamma^\pm = (\pm I+K)v, \qquad \partial_n^\pm(\mathcal Dv) = -Wv, \tag{B.3}$$
where + denotes the limit taken from the exterior domain, − the limit taken from the interior domain, and I the identity operator.
B.1.2 Representation of harmonic functions by potentials

If $U\in H^1_{\mathrm{loc}}(\mathbb R^3\setminus\Gamma)$ satisfies $\Delta U=0$ in $\mathbb R^3\setminus\Gamma$ and $U(x)=O(1/|x|)$ as $|x|\to\infty$, then for $x\notin\Gamma$ the following representation formula holds:
$$U(x) = \frac12\mathcal S([\partial_nU])(x) - \frac12\mathcal D([U])(x), \tag{B.4}$$
where $[v] := v^--v^+$.

B.1.2.1 Interior problems

It follows from (B.4) that
$$U(x) = \frac12\mathcal S(\partial_n^-U)(x) - \frac12\mathcal D(U)(x), \qquad x\in\Omega. \tag{B.5}$$
Dirichlet problem. Let the Dirichlet data $U_D$ be given. By taking the limit on both sides of (B.5) as x approaches a point on the boundary Γ and using the jump relations (B.3) we obtain
$$U_D(x) = \frac12 V(\partial_n^-U)(x) + \frac12 U_D(x) - \frac12 KU_D(x), \qquad x\in\Gamma,$$
implying
$$V(\partial_n^-U)(x) = U_D(x) + KU_D(x), \qquad x\in\Gamma. \tag{B.6}$$
Neumann problem. Let the Neumann data $U_N$ be given. By taking the normal derivative $\partial_n^-$ on both sides of (B.5) and using the jump relations (B.3), we obtain
$$\partial_n^-U(x) = \frac12\partial_n^-U(x) + \frac12 K'\partial_n^-U(x) + \frac12 WU(x), \qquad x\in\Gamma,$$
implying
$$WU(x) = U_N(x) - K'U_N(x), \qquad x\in\Gamma. \tag{B.7}$$
B.1.2.2 Exterior problems

It follows from (B.4) that
$$U(x) = \frac12\mathcal D(U)(x) - \frac12\mathcal S(\partial_n^+U)(x), \qquad x\in\mathbb R^3\setminus\overline\Omega. \tag{B.8}$$
Dirichlet problem. By taking the limit on both sides of (B.8) as x approaches a point on the boundary Γ and using the jump relations (B.3) we obtain
$$U_D(x) = \frac12 U_D(x) + \frac12 KU_D(x) - \frac12 V(\partial_n^+U)(x), \qquad x\in\Gamma,$$
implying
$$V(\partial_n^+U)(x) = -U_D(x) + KU_D(x), \qquad x\in\Gamma. \tag{B.9}$$
Neumann problem. By taking the normal derivative $\partial_n^+$ on both sides of (B.8) and using the jump relations (B.3) we obtain
$$\partial_n^+U(x) = -\frac12 WU(x) + \frac12\partial_n^+U(x) - \frac12 K'(\partial_n^+U)(x), \qquad x\in\Gamma,$$
implying
$$WU(x) = -U_N(x) - K'U_N(x), \qquad x\in\Gamma. \tag{B.10}$$
B.1.3 Dirichlet-to-Neumann and Neumann-to-Dirichlet operators

Given u defined on Γ, we define the interior and exterior Dirichlet-to-Neumann (DtN) operators by
$$T_{\mathrm{int}} : u\mapsto U_{\mathrm{int}}\mapsto\partial_n^-U_{\mathrm{int}}, \qquad T_{\mathrm{ext}} : u\mapsto U_{\mathrm{ext}}\mapsto\partial_n^+U_{\mathrm{ext}},$$
where $U_{\mathrm{int}}$ (respectively, $U_{\mathrm{ext}}$) is the solution to the interior (respectively, exterior) Dirichlet problem with $U_D=u$.

On the one hand, (B.6) gives $\partial_n^-U_{\mathrm{int}} = V^{-1}(I+K)u$, so that $T_{\mathrm{int}}u = V^{-1}(I+K)u$. On the other hand, it follows from (B.5) and (B.6) that
$$U_{\mathrm{int}} = \frac12\mathcal SV^{-1}(I+K)u - \frac12\mathcal Du,$$
so that
$$T_{\mathrm{int}}u = \partial_n^-U_{\mathrm{int}} = \frac12(I+K')V^{-1}(I+K)u + \frac12 Wu.$$
Hence
$$T_{\mathrm{int}} = V^{-1}(I+K) = \frac12(I+K')V^{-1}(I+K) + \frac12 W. \tag{B.11}$$
Similarly, on the one hand, (B.9) gives $\partial_n^+U_{\mathrm{ext}} = V^{-1}(-I+K)u$, so that $T_{\mathrm{ext}}u = V^{-1}(-I+K)u$. On the other hand, it follows from (B.8) and (B.9) that
$$U_{\mathrm{ext}} = \frac12\mathcal Du + \frac12\mathcal SV^{-1}(I-K)u,$$
so that
$$T_{\mathrm{ext}}u = \partial_n^+U_{\mathrm{ext}} = -\frac12 Wu - \frac12(I-K')V^{-1}(I-K)u.$$
Hence
$$T_{\mathrm{ext}} = -V^{-1}(I-K) = -\frac12 W - \frac12(I-K')V^{-1}(I-K). \tag{B.12}$$
It is noted that even though the boundary integral operators V, W, K, and K′ defined by (B.2) are twice the corresponding operators defined in some other books and articles, the DtN operators $T_{\mathrm{int}}$ and $T_{\mathrm{ext}}$ remain the same.

The inverses $T_{\mathrm{int}}^{-1}$ and $T_{\mathrm{ext}}^{-1}$ are the Neumann-to-Dirichlet (NtD) operators. The DtN operator is also called the Steklov–Poincaré operator. We summarise in the next two lemmas some of its important properties. We focus on $T_{\mathrm{int}}$ only.

Lemma B.1. Let Ω be a bounded domain with Lipschitz boundary Γ. For any $v_\Gamma\in H^{1/2}(\Gamma)$, there exists a unique $v_D\in H^1(\Omega)$ satisfying
$$\Delta v_D = 0 \ \text{in}\ \Omega, \qquad v_D = v_\Gamma \ \text{on}\ \Gamma, \qquad \frac{\partial v_D}{\partial n} = T_{\mathrm{int}}(v_\Gamma) \ \text{on}\ \Gamma. \tag{B.13}$$
Moreover, if $v\in H^1(\Omega)$ is any extension of $v_\Gamma$ (i.e., $v|_\Gamma=v_\Gamma$), then
$$\|\nabla v_D\|^2_{L^2(\Omega)} \le \|\nabla v\|^2_{L^2(\Omega)}. \tag{B.14}$$
Proof. The unique existence of $v_D\in H^1(\Omega)$ satisfying (B.13) is clear, noting the definition of the DtN operator $T_{\mathrm{int}}$. To prove (B.14), let $v\in H^1(\Omega)$ be such that its trace on Γ equals $v_\Gamma$. Then $v_0 := v-v_D\in H_0^1(\Omega)$. It follows from Green's identity and the first equation in (B.13) that
$$\langle\nabla v_D,\nabla v_0\rangle_\Omega = -\langle\Delta v_D,v_0\rangle_\Omega = 0.$$
Therefore,
$$\|\nabla v\|^2_{L^2(\Omega)} = \|\nabla v_D\|^2_{L^2(\Omega)} + \|\nabla v_0\|^2_{L^2(\Omega)} \ge \|\nabla v_D\|^2_{L^2(\Omega)}.$$
This completes the proof of the lemma.

Before proving the next lemma, we recall the definition of the $\|\cdot\|_{V^{-1}}$-norm:
$$\|v\|_{V^{-1}} := \sqrt{\langle V^{-1}v,v\rangle_\Gamma} \qquad \forall v\in H^{1/2}(\Gamma).$$
Lemma B.2. There exists a constant $c_K\in(1/2,1)$ such that for any $v\in H^{1/2}(\Gamma)$,
$$\frac{1}{2c_K}\|(I+K)v\|^2_{V^{-1}} \le \langle T_{\mathrm{int}}v,v\rangle_\Gamma \le \frac{1}{2(1-c_K)}\|(I+K)v\|^2_{V^{-1}}. \tag{B.15}$$
Proof. The result is proved in [164, Lemma 2.1]; see also [200, Proposition 5.2]. We include the proof here for the reader’s convenience. It is noted that the factor 2 in (B.16) and (B.17) below is due to the factor 2 in our definition (B.2). One should also note that the ·V−1 -norm differs by a factor of 1/2. Theorem 5.6.11 in [128] and Theorem 5.1 in [200] give (I + K)vV−1 ≤ 2cK vV−1
∀v ∈ H 1/2 (Γ ),
(I + K)vV−1 ≥ 2(1 − cK )vV−1
∀v ∈ H0 (Γ ),
where 1 cK = + 2
5
1/2
(B.16)
1 − cV cW < 1. 4
Here, cV ∈ (0, 1/2] and cW ∈ (0, 1/2] are given in [128, Lemma 5.6.10] which satisfy
Vv, vΓ ≥ 2cV v2H −1/2 (Γ ) ∀v ∈ H −1/2 (Γ ), (B.17) 1/2
Ww, wΓ ≥ 2cW w2H 1/2 (Γ ) ∀w ∈ H0 (Γ ), with
3 4 1/2 H0 (Γ ) := w ∈ H 1/2 (Γ ) : w, 1Γ = 0 .
For any v ∈ H −1/2 (Γ ), v = 0, we have
B.1 Boundary Integral Operators and Pseudo-differential Operators on the Sphere
VvH 1/2 (Γ ) =
sup u∈H −1/2 (Γ ) u=0
491
Vv, uΓ
Vv, vΓ ≥ ≥ 2cV vH −1/2 (Γ ) . uH −1/2 (Γ ) vH −1/2 (Γ )
Hence, for any w ∈ H 1/2 (Γ ) and u ∈ H 1/2 (Γ ), by letting v = V−1 u or u = Vv we deduce −1 −1 V w, u Γ V w, Vv Γ −1 = sup V wH −1/2 (Γ ) = sup uH 1/2 (Γ ) VvH 1/2 (Γ ) u∈H 1/2 (Γ ) Vv∈H 1/2 (Γ ) u=0
=
sup Vv∈H 1/2 (Γ ) Vv=0
≤
Vv=0
w, vΓ 1 ≤ VvH 1/2 (Γ ) 2cV
sup Vv∈H 1/2 (Γ ) Vv=0
w, vΓ vH −1/2 (Γ )
1 wH 1/2 (Γ ) . 2cV
Therefore, −1 1 V w, w Γ ≤ V−1 wH −1/2 (Γ ) wH 1/2 (Γ ) ≤ w2H 1/2 (Γ ) . 2cV
Noting that cV cW = cK (1 − cK ), we obtain by using successively (B.17), the above inequality, and (B.16) that
Ww, wΓ ≥ 4cV cW V−1 w, w Γ = 4cK (1 − cK )w2V−1 1 − cK ≥ (I + K)w2V−1 . cK Hence, the left inequality in (B.15) is now easily seen by using the symmetric form in (B.11) 1 1 1
Tint w, wΓ = (I + K)w2V−1 + Ww, wΓ ≥ (I + K)w2V−1 . 2 2 2cK The right inequality in (B.15) is derived from the non-symmetric representation of Tint in (B.11) and (B.16)
Tint w, wΓ = V−1 (I + K)v, v Γ ≤ (I + K)vV−1 vV−1 1 (I + K)v2V−1 . ≤ 2(1 − cK )
B.1.4 The weakly-singular and hypersingular bilinear forms The weakly-singular and hypersingular operators define the following bilinear forms
492
B Boundary Integral Operators
−1/2 (Γ ) × H −1/2 (Γ ) → R, aV (·, ·) : H
1/2 (Γ ) → R, 1/2 (Γ ) × H aW (·, ·) : H
−1/2 (Γ ), aV (v, w) := Vv, w ∀v, w ∈ H
1/2 (Γ ). aW (v, w) := Wv, w ∀v, w ∈ H
These bilinear forms have the following properties.
−1/2
1/2 (Γ ) and w ∈ H Lemma B.3. Recalling (B.1), when κ = 0, for any v ∈ H 0 the following statements hold aW (v, v) v2 1/2 HI
aV (w, w) w2 −1/2 HI
(Γ )
(Γ )
(Γ ),
,
∗w2 −1/2 HI
(Γ )
.
The constants are independent of v and w but may depend on the size of Γ . −1/2 (Γ ) defined in (A.85) is the subspace of H −1/2 (Γ ) consisting of all Here H 0 functions of integral mean zero. When κ = 0, we have aV (ϕ , ϕ ) ≥ γV ϕ 2H −1/2 (Γ ) − ηV ϕ 2H −1 (Γ )
aW (ψ , ψ ) ≥ γW ψ 2H 1/2 (Γ ) − ηW ϕ 2L2 (Γ )
−1/2 (Γ ), ∀ϕ ∈ H 1/2 (Γ ), ∀ψ ∈ H
where γV , γW , ηV , and ηW are constants independent of ϕ and ψ .
Proof. The equivalence between the bilinear forms and the non-weighted norms is given in [186] for d = 3. It is originally proved for V and d = 2 by Hsiao and Wendland [127]; see also [147]. The equivalence between the non-weighted norms and weighted norms is obvious from their definitions, see Subsection A.2.5 and Subsection A.2.6, noting that the equivalence constants may depend on the size of Γ which is fixed. The result when κ = 0 can be found in [92, 202, 235].
B.1.5 Representations of solutions to the Laplace equation by spherical harmonics In the remainder of this section, Γ is the unit sphere S in R3 . Let (r, θ , ϕ ) be the spherical coordinates of a point x = (x1 , x2 , x3 ) ∈ R3 , where r ≥ 0 is the radius and where θ ∈ [0, π ], ϕ ∈ [0, 2π ] are the two Euler angles ⎧ ⎪ ⎨x1 = r sin θ cos ϕ , x2 = r sin θ sin ϕ , ⎪ ⎩ x3 = r cos θ . The vectors
B.1 Boundary Integral Operators and Pseudo-differential Operators on the Sphere
493
∂x = (sin θ cos ϕ , sin θ sin ϕ , cos θ ), ∂r 1 ∂x = (cos θ cos ϕ , cos θ sin ϕ , − sin θ ), eθ = r ∂θ 1 ∂x = (− sin ϕ , cos ϕ , 0) eϕ = r sin θ ∂ ϕ er =
are unitary, and they form an orthonormal basis for R3 . In these coordinates, the gradient operator and the Laplace operator have the form 1 ∂ ∇ = e r + ∇S , ∂r r (B.18) 1 ∂ 2∂ 1 r + 2 ΔS Δ= 2 r ∂r ∂r r where ∇S and ΔS are, respectively, the surface gradient and the Laplace–Beltrami operator on the unit sphere S defined by 1 ∂ ∂ eϕ + eθ , sin θ ∂ ϕ ∂θ 1 1 ∂ ∂ ∂2 sin . ΔS := 2 + θ ∂θ sin θ ∂ ϕ 2 sin θ ∂ θ
∇S :=
(B.19)
Define H,m (r, θ , ϕ ) := rY,m (θ , ϕ ) and
K,m (r, θ , ϕ ) :=
1 Y,m (θ , ϕ ) r+1
where Y,m , m = −, . . . , and = 0, 1, . . ., are spherical harmonics which form an orthonormal basis for L2 (S); see e.g. [155]. Theorem 2.4.1 in [161] gives
ΔSY,m + ( + 1)Y,m = 0. Hence by using (B.18) and (B.19) it is easy to check that H,m is harmonic, smooth at the origin, and tends to ∞ as r → ∞, whereas K,m is harmonic, not smooth at the origin, and tends to 0 as r → ∞; see e.g. [161]. Moreover, by using (B.19) one can show that, see [161, (2.5.36) & (2.5.37)],
∂ H,m = Y,m , ∂n ∂ K,m = −( + 1)Y,m , ∂n
(B.20)
For any v ∈ D (S), the Fourier coefficients of v are defined by v,m =
S
v(x)Y,m (x) d σx ,
m = −, . . . , ,
= 0, 1, . . . .
494
B Boundary Integral Operators
Here d σx is the element of surface area. In the following we derive the representations of the solutions of the Dirichlet and Neumann problems in terms of the Fourier coefficients of the given data; see Subsection B.1.2.1 and Subsection B.1.2.2.
B.1.5.1 Dirichlet problems If the Dirichlet data UD has an expansion as a sum of spherical harmonics UD (θ , ϕ ) =
∞
∑ ∑
=0 m=−
(U D ),mY,m (θ , ϕ ),
then the solution of the interior problem can be represented by U(r, θ , ϕ ) = =
∞
∑ ∑
=0 m=−
∞
∑ ∑
=0 m=−
(U D ),m H,m (r, θ , ϕ ) r (U D ),mY,m (θ , ϕ ),
(B.21)
where the series is absolutely convergent for r < 1. The solution of the exterior problem can be represented by U(r, θ , ϕ ) = =
∞
∑ ∑
=0 m=−
(U D ),m K,m (r, θ , ϕ )
∞
1 (UD ),mY,m (θ , ϕ ), +1 r m=−
∑ ∑
=0
where the series is absolutely convergent for r > 1.
B.1.5.2 Neumann problems If the Neumann data UN has an expansion as a sum of spherical harmonics UN (θ , ϕ ) =
∞
∑ ∑
=0 m=−
(U N ),mY,m (θ , ϕ ),
then by using (B.20) one can show that for the interior problem U(r, θ , ϕ ) =
∞
1 (UN ),m H,m (r, θ , ϕ ) m=−
∑ ∑
=0
(B.22)
B.1 Boundary Integral Operators and Pseudo-differential Operators on the Sphere
=
495
r (UN ),mY,m (θ , ϕ ), m=−
∞
∑ ∑
=0
(B.23)
and for the exterior problem U(r, θ , ϕ ) = =
∞
∑ ∑
=0 m=− ∞
∑ ∑
=0 m=−
−
1 (UN ),m K,m (r, θ , ϕ ) +1
−
1 (U N ),mY,m (θ , ϕ ). ( + 1)r+1
(B.24)
B.1.6 Representations of boundary integral operators by spherical harmonics When Γ = S, the unit sphere in R3 , we can represent the operators Tint , Text , T−1 int , , and W in terms of spherical harmonics. , V, K, K T−1 ext
B.1.6.1 DtN and NtD operators By using (B.21) and (B.23) for the interior problems, (B.22) and (B.24) for the exterior problems, for any given function u(θ , ϕ ) =
∞
uˆ,mY,m (θ , ϕ ),
∑ ∑
=0 m=−
we can easily derive that Tint u = T−1 int u =
∞
∑ ∑
=0 m=−
uˆ,mY,m
∞
1 uˆ,mY,m =0 m=−
∑ ∑ ∞
Text u = − ∑
∑
( + 1)uˆ,mY,m
=0 m=− ∞
1 uˆ,mY,m . =0 m=− + 1
T−1 ext u = − ∑
∑
(B.25)
496
B Boundary Integral Operators
B.1.6.2 Weakly-singular integral operator From the non-symmetric form in (B.11) and in (B.12) we deduce 1 Tint − Text . 2
V−1 = Therefore, by using (B.25) we have V−1 u =
∞
∑ ∑
=0 m=−
∞
+ 1/2 uˆ,mY,m ,
(B.26)
1 uˆ,mY,m . Vu = ∑ ∑ =0 m=− + 1/2 B.1.6.3 Hypersingular integral operator From (B.7) and (B.10) we deduce
−1 −1 T−1 int UN = Uint = W (UN ) − W K (UN ) −1 −1 T−1 ext UN = Uext = −W (UN ) − W K (UN ),
so that
1 −1 Tint UN − T−1 ext UN . 2
W−1 (UN ) = Therefore, W−1 u =
∞
+ 1/2 uˆ,mY,m ( + 1) =0 m=−
∑ ∑
∞
( + 1) uˆ,mY,m . Wu = ∑ ∑ =0 m=− + 1/2 B.1.6.4 The operator K and its adjoint K It follows from (B.6) and (B.26) Ku = VTint u − u =
∞
∑ ∑
=0 m=−
1 − 1 uˆ,mY,m = − Vu. + 1/2 2
Note that we can use (B.9) and (B.26) to obtain the same formula. Similarly, from (B.7) and (B.27) we deduce K u = −WT−1 int u + u =
∞
∑ ∑
=0 m=−
−
+1 1 + 1 uˆ,mY,m = − Vu. + 1/2 2
(B.27)
B.2 Discretised Operators
497
The same formula can be derived from (B.10) and (B.27).
B.2 Discretised Operators Let X and Y be two Banach spaces. Let XM and YN be two finite-dimensional subspaces of X and Y , respectively. Let dim(XM ) = M and dim(YN ) = N.
B.2.1 Natural embedding operators and biorthonormal bases The natural embedding operators are denoted by iX M : X M → X ,
iXM u = u,
and
iYN : YN → Y ,
iYN v = v.
Let X ∗ , Y ∗ , XM∗ , and YN∗ be, respectively, the dual spaces of X , Y , XM , and YN . The adjoint operators of iXM and ıYN are defined by ∗ iXM v, w X ∗ ,X = v, iXM w X ∗ ,X , v ∈ X ∗ , w ∈ XM , i∗XM : X ∗ → XM∗ , M ∗ M iYN v, w Y ∗ ,Y = v, iYN w Y ∗ ,Y , v ∈ Y ∗ , w ∈ YN , i∗YN : Y ∗ → YN∗ , N
N
where ·, ·X ∗ ,X and similar notations denote dual inner products. The dual pair (XM , XM∗ ) possesses the following biorthonormal bases BXM := {η1 , . . . , ηM } where
ηi∗ , η j
XM∗ ,XM
∗ BXM∗ := {η1∗ , . . . , ηM }
and = δi j ,
i, j = 1, . . . , M.
Similarly, for (YN , YN∗ ) we have BYN := {φ1 , . . . , φN } with
BYN∗ := {φ1∗ , . . . , φN∗ }
and
∗ φ j , φi Y ∗ ,Y = δ ji , N
i, j = 1, . . . , N.
N
It is known that any x ∈ XM , x∗ ∈ XM∗ , y ∈ YN , and y∗ ∈ YN∗ can be represented as M
x = ∑ ηi∗ , xX ∗ ,XM ηi , i=1 N
y=∑
i=1
M
φi∗ , yY ∗ ,YN N
φi ,
M
x∗ = ∑ x∗ , ηi X ∗ ,XM ηi∗ , ∗
i=1 N
y = ∑ y i=1
M
(B.28) ∗
, φi Y ∗ ,YN φi∗ . N
498
B Boundary Integral Operators
B.2.2 Discretised operators and matrix representations B.2.2.1 Operators from X into Y ∗ Given A ∈ L (X , Y ∗ ), we define ANM : XM → YN∗ by
ANM v, wY ∗ ,YN = AiXM v, iYN w Y ∗ ,Y ∀v ∈ XM , ∀w ∈ YN . N
On the other hand, AiXM v, iYN w Y ∗ ,Y = i∗YN AiXM v, w Y ∗ ,Y
N
N
∀v ∈ XM , ∀w ∈ YN .
Hence
ANM v, wY ∗ ,YN = i∗YN AiXM v, w Y ∗ ,Y N
N
This means
N
∀v ∈ XM , ∀w ∈ YN .
ANM = i∗YN AiXM .
(B.29)
The matrix representation of ANM has the form
( j) ANM
= [ANM η j ]BY ∗ N
( j)
⎛
⎞
⎛
i∗YN AiXM η j , φ1
⎞
ANM η j , φ1 Y ∗ ,Y ⎜ YN∗ ,YN ⎟ N N ⎟ ⎜ ⎟ .. . ⎟=⎜ ⎟ .. . ⎟ ⎠ ⎜ ⎝ ∗ ⎠ ANM η j , φN Y ∗ ,Y iYN AiXM η j , φN ∗ N N YN ,YN ⎞ ⎛ AiXM η j , iYN φ1 Y ∗ ,Y N N⎟ ⎜ .. ⎟ =⎜ . ⎠ ⎝ AiXM η j , iYN φN Y ∗ ,Y
⎜ =⎜ ⎝
where ANM denotes the jth -column of the matrix ANM and [ANM η j ]BY ∗ denotes the N vector coefficient of ANM η j with respect to the basis BYN∗ ; see (B.28). B.2.2.2 Operators from X into X ∗ In practice, A = W (the hypersingular integral operator) or A = V (the weaklysingular integral operator), we have Y = X so that Y ∗ = X ∗ . In this case M = N, YN = XM , and it follows from (B.29) that AM : XM → XM∗ ,
AM = i∗XM AiXM .
The Galerkin matrix, which is also the matrix representation of AM , has entries
B.2 Discretised Operators
499
Ai j = AiXM η j , iXM ηi X ∗ ,X = AM η j , ηi X ∗ ,X , M
M
i, j = 1, . . . , M.
B.2.2.3 Operators from X into X where X is reflexive Consider A ∈ L (X , X ). Let Y = X ∗ so that Y ∗ = X . Then A ∈ L (X , Y ∗ ). The discretised operator ANM : XM → YN∗ defined by (B.29) satisfies
ANM v, wY ∗ ,YN = iYN w, AiXM v X ∗ ,X ∀v ∈ XM , ∀w ∈ YN . N
The jth -column matrix representation of ANM with respect to the bases BXM and BYN∗ is ( j)
ANM = [ANM η j ]BY ∗ N
⎞ ⎛ ⎞ ANM η j , φ1 Y ∗ ,Y iYN φ1 , AiXM η j X ∗ ,X N N ⎜ ⎟ ⎟ .. .. ⎟=⎜ =⎜ ⎠. . . ⎝ ⎠ ⎝ ANM η j , φN Y ∗ ,Y iYN φN , AiXM η j X ∗ ,X ⎛
N
N
B.2.3 Compositions of operators We consider the following linear operators and their discretisations A ∈ L (Y , Y ∗ ), B ∈ L (Y , X ∗ ), C ∈ L (X , Y ∗ ),
AN = i∗YN AiYN ∈ L (YN , YN∗ ), BMN = i∗XM BiYN ∈ L (YN , XM∗ ), CNM = i∗YN CiXM ∈ L (XM , YN∗ ).
Assume that
Av, vY ∗ ,Y ≥ cA v2Y
∀v ∈ Y .
(B.30)
According to the Lax–Milgram Theorem, A−1 ∈ L (Y ∗ , Y ) and
∗ A−1 N ∈ L (YN , YN ).
Hence we are able to define E := BA−1 C EM := i∗XM EiXM
∈ L (X , X ∗ ), ∈ L (XM , XM∗ ),
M := BMN A−1 CNM ∈ L (XM , X ∗ ). E M N
M . The following lemma gives an estimate for the We note that in general EM = E difference.
500
B Boundary Integral Operators
Lemma B.4. Under assumption (B.30), there exists a positive constant $c_E$ independent of $M$ such that for any $v\in X_M$
\[
\big\|(E_M - \widetilde E_M)v\big\|_{X_M^*} \le c_E \inf_{w\in Y_N}\big\|A^{-1}Ci_{X_M}v - i_{Y_N}w\big\|_Y.
\]

Proof. This result is in fact [49, Lemma 2.4]. Recall that
\[
\big\|(E_M-\widetilde E_M)v\big\|_{X_M^*} = \sup_{\substack{w\in X_M\\ w\ne0}}\frac{\langle(E_M-\widetilde E_M)v,w\rangle_{X_M^*,X_M}}{\|w\|_{X_M}}.
\]
For any $v,w\in X_M$, we have
\begin{align*}
\big|\langle(E_M-\widetilde E_M)v,w\rangle_{X_M^*,X_M}\big|
&= \big|\langle i_{X_M}^*B\big(A^{-1}-i_{Y_N}A_N^{-1}i_{Y_N}^*\big)Ci_{X_M}v,\, w\rangle_{X_M^*,X_M}\big|\\
&= \big|\langle B\big(A^{-1}-i_{Y_N}A_N^{-1}i_{Y_N}^*\big)Ci_{X_M}v,\, i_{X_M}w\rangle_{X^*,X}\big|\\
&= \big|\langle B^*i_{X_M}w,\, \big(A^{-1}-i_{Y_N}A_N^{-1}i_{Y_N}^*\big)Ci_{X_M}v\rangle_{Y^*,Y}\big|\\
&\le \|B^*i_{X_M}w\|_{Y^*}\,\big\|\big(A^{-1}-i_{Y_N}A_N^{-1}i_{Y_N}^*\big)Ci_{X_M}v\big\|_Y\\
&\le \|B^*\|_{X\to Y^*}\,\|i_{X_M}w\|_X\,\big\|\big(A^{-1}-i_{Y_N}A_N^{-1}i_{Y_N}^*\big)Ci_{X_M}v\big\|_Y\\
&= \|B\|_{Y\to X^*}\,\|w\|_{X_M}\,\big\|\big(A^{-1}-i_{Y_N}A_N^{-1}i_{Y_N}^*\big)Ci_{X_M}v\big\|_Y.
\end{align*}
Hence
\[
\big\|(E_M-\widetilde E_M)v\big\|_{X_M^*} \le \|B\|_{Y\to X^*}\big\|\big(A^{-1}-i_{Y_N}A_N^{-1}i_{Y_N}^*\big)Ci_{X_M}v\big\|_Y.
\]
It remains to estimate $\|(A^{-1}-i_{Y_N}A_N^{-1}i_{Y_N}^*)Ci_{X_M}v\|_Y$. Let
\[
f := Ci_{X_M}v \in Y^* \qquad\text{and}\qquad u := A^{-1}Ci_{X_M}v\in Y,
\]
so that $u$ is the solution of the equation $Au = f$ in $Y^*$. The corresponding Galerkin equation is $A_Nu_N = i_{Y_N}^*f$ in $Y_N^*$, and thus $u_N := A_N^{-1}i_{Y_N}^*f = A_N^{-1}i_{Y_N}^*Ci_{X_M}v\in Y_N$ is the Galerkin approximation to $u$. By C\'ea's lemma we have
\[
\big\|\big(A^{-1}-i_{Y_N}A_N^{-1}i_{Y_N}^*\big)Ci_{X_M}v\big\|_Y = \|u - i_{Y_N}u_N\|_Y
\le \frac{\|A\|_{Y\to Y^*}}{c_A}\inf_{w\in Y_N}\|u - i_{Y_N}w\|_Y
= \frac{\|A\|_{Y\to Y^*}}{c_A}\inf_{w\in Y_N}\big\|A^{-1}Ci_{X_M}v - i_{Y_N}w\big\|_Y.
\]
This proves the lemma. $\square$

We now consider the special case when
\[
B = C^*,
\]
i.e.,
\[
E := C^*A^{-1}C \in \mathcal{L}(X,X^*), \qquad
\widetilde E_M := C_{MN}^*A_N^{-1}C_{NM} \in \mathcal{L}(X_M,X_M^*),
\]
and denote by $\widetilde{\mathbf E}_M$ the matrix representation of $\widetilde E_M$. The next lemma shows how this matrix can be represented in terms of the matrix representations $\mathbf A$ and $\mathbf C$ of $A_N$ and $C_{NM}$, where $\mathbf A_{ik} = \langle A_N\phi_k,\phi_i\rangle_{Y_N^*,Y_N}$ and $\mathbf C_{ij} = \langle C_{NM}\eta_j,\phi_i\rangle_{Y_N^*,Y_N}$.

Lemma B.5. The following identity holds:
\[
\widetilde{\mathbf E}_M = \mathbf C^\top\mathbf A^{-1}\mathbf C.
\]

Proof. This result is in fact [51, Theorem 5]. We reproduce the proof here for completeness. Since
\[
\big(\widetilde{\mathbf E}_M\big)_{ij} = \langle\widetilde E_M\eta_j,\eta_i\rangle_{X_M^*,X_M}, \qquad i,j=1,\dots,M,
\]
it suffices to show that, for $i,j=1,\dots,M$,
\[
\langle\widetilde E_M\eta_j,\eta_i\rangle_{X_M^*,X_M} = \big(\mathbf C^\top\mathbf A^{-1}\mathbf C\big)_{ij}. \tag{B.31}
\]
In view of $\widetilde E_M\eta_j = C_{MN}^*A_N^{-1}C_{NM}\eta_j\in X_M^*$, we define
\[
\psi_j^* := C_{NM}\eta_j\in Y_N^* \qquad\text{and}\qquad \xi_j := A_N^{-1}\psi_j^*\in Y_N. \tag{B.32}
\]
The definition of $\psi_j^*$ implies
\[
\langle\psi_j^*,\phi_i\rangle_{Y_N^*,Y_N} = \langle C_{NM}\eta_j,\phi_i\rangle_{Y_N^*,Y_N} = \mathbf C_{ij}, \tag{B.33}
\]
while that of $\xi_j$ implies
\[
\psi_j^* = A_N\xi_j. \tag{B.34}
\]
It follows from (B.28) that
\[
\xi_j = \sum_{k=1}^N\langle\phi_k^*,\xi_j\rangle_{Y_N^*,Y_N}\,\phi_k,
\]
which together with (B.34) gives
\[
\psi_j^* = A_N\sum_{k=1}^N\langle\phi_k^*,\xi_j\rangle_{Y_N^*,Y_N}\,\phi_k = \sum_{k=1}^N\langle\phi_k^*,\xi_j\rangle_{Y_N^*,Y_N}\,A_N\phi_k.
\]
This implies
\[
\langle\psi_j^*,\phi_i\rangle_{Y_N^*,Y_N} = \sum_{k=1}^N\langle\phi_k^*,\xi_j\rangle_{Y_N^*,Y_N}\langle A_N\phi_k,\phi_i\rangle_{Y_N^*,Y_N}
= \sum_{k=1}^N\mathbf A_{ik}\langle\phi_k^*,\xi_j\rangle_{Y_N^*,Y_N}
= \left(\mathbf A\begin{pmatrix}\langle\phi_1^*,\xi_j\rangle_{Y_N^*,Y_N}\\ \vdots\\ \langle\phi_N^*,\xi_j\rangle_{Y_N^*,Y_N}\end{pmatrix}\right)_i.
\]
Comparing with (B.33) we deduce
\[
\mathbf C^{(j)} = \mathbf A\begin{pmatrix}\langle\phi_1^*,\xi_j\rangle_{Y_N^*,Y_N}\\ \vdots\\ \langle\phi_N^*,\xi_j\rangle_{Y_N^*,Y_N}\end{pmatrix},
\]
where $\mathbf C^{(j)}$ denotes the $j$th column of $\mathbf C$. Consequently
\[
\mathbf A^{-1}\mathbf C^{(j)} = \begin{pmatrix}\langle\phi_1^*,\xi_j\rangle_{Y_N^*,Y_N}\\ \vdots\\ \langle\phi_N^*,\xi_j\rangle_{Y_N^*,Y_N}\end{pmatrix}.
\]
Therefore
\[
\big(\mathbf C^\top\mathbf A^{-1}\mathbf C\big)_{ij} = \sum_{k=1}^N\mathbf C_{ki}\big(\mathbf A^{-1}\mathbf C\big)_{kj} = \sum_{k=1}^N\mathbf C_{ki}\langle\phi_k^*,\xi_j\rangle_{Y_N^*,Y_N}. \tag{B.35}
\]
On the other hand we have
\[
\langle\widetilde E_M\eta_j,\eta_i\rangle_{X_M^*,X_M}
= \langle C_{MN}^*A_N^{-1}C_{NM}\eta_j,\eta_i\rangle_{X_M^*,X_M}
= \langle C_{NM}\eta_i,\, A_N^{-1}C_{NM}\eta_j\rangle_{Y_N^*,Y_N}
= \langle\psi_i^*,\xi_j\rangle_{Y_N^*,Y_N}.
\]
From (B.28) and (B.32) it follows that
\[
\psi_i^* = \sum_{k=1}^N\langle\psi_i^*,\phi_k\rangle_{Y_N^*,Y_N}\,\phi_k^*
= \sum_{k=1}^N\langle C_{NM}\eta_i,\phi_k\rangle_{Y_N^*,Y_N}\,\phi_k^*
= \sum_{k=1}^N\mathbf C_{ki}\,\phi_k^*,
\]
implying
\[
\langle\widetilde E_M\eta_j,\eta_i\rangle_{X_M^*,X_M} = \sum_{k=1}^N\mathbf C_{ki}\langle\phi_k^*,\xi_j\rangle_{Y_N^*,Y_N}.
\]
This identity and (B.35) yield (B.31), completing the proof of the lemma. $\square$
Appendix C
Some Additional Results
In this chapter we collect the technical lemmas that are not essential for following the main arguments in the analysis of preconditioning, but which are needed for the rigorous analysis in the previous chapters. Some results of independent interest are proved here for the first time; these include Lemma C.18 on inverse properties of polynomials.
C.1 Conditioning of Matrices

In this section we derive a number of results on the extremal eigenvalues of the stiffness matrices arising from the Galerkin boundary element discretisation of boundary integral equations. First we study the condition numbers of the stiffness matrices $\mathbf A_V$ and $\mathbf A_W$ defined by the weakly-singular integral operator $V$ and the hypersingular integral operator $W$. The results for the $h$-version were studied in Chapter 12 and will not be repeated here. For the $p$-version, it is shown in [106, Lemma 2] that when the boundary $\Gamma$ is a curve in $\mathbb R^2$, the condition numbers of $\mathbf A_V$ and $\mathbf A_W$ grow at most like $p^3$. It is then proved in [228, Lemma A.1] that they grow at most as $p^3$ and at least as $p^2$. Moreover, if the basis functions are properly scaled, these condition numbers grow as $p^2$ for both $\mathbf A_V$ and $\mathbf A_W$ in the case $\Gamma = (-1,1)\times\{0\}$. We reproduce the proof of these results in Lemma C.4, where the scaled Legendre polynomials and their antiderivatives are used as basis functions. We also prove in that lemma that when $\Gamma = (-1,1)\times(-1,1)\times\{0\}$, the condition number of $\mathbf A_V$ still behaves like $p^2$, while that of $\mathbf A_W$ is bounded above by $p^6$. We first introduce these basis functions.
C.1.1 The hierarchical basis functions for the p-version

Let $I = (-1,1)$. For $q = 0,1,2,\dots$, let
\[
L_q^* = \sqrt{\frac{2q+1}{2}}\,L_q,
\]
where $L_q$ is the Legendre polynomial of degree $q$. Then
\[
\langle L_q,L_r\rangle_I = \frac{2}{2q+1}\,\delta_{q,r} \qquad\text{and}\qquad \langle L_q^*,L_r^*\rangle_I = \delta_{q,r}, \tag{C.1}
\]
where $\langle\cdot,\cdot\rangle_I$ is the $L^2$-inner product on $I$. Note also that (see [27, (3.8)]) for $1\le k\le q$
\[
L_q^{(k)}(1) = \frac{(q-k+1)(q-k+2)\cdots q\,(q+1)\cdots(q+k-1)(q+k)}{2^k\,k!}. \tag{C.2}
\]
Moreover, the well-known property $L_q(-x) = (-1)^qL_q(x)$ for all $x\in\mathbb R$ implies
\[
L_q^{(k)}(-x) = (-1)^{q+k}L_q^{(k)}(x), \qquad x\in\mathbb R. \tag{C.3}
\]
For $q = 2,3,\dots$, we define
\[
\mathcal L_q(x) = \int_{-1}^xL_{q-1}(s)\,ds, \qquad
\widetilde L_q(x) = \frac{\mathcal L_q(x)}{\|\mathcal L_q\|_{L^2(I)}}, \qquad
L_q^*(x) = \int_{-1}^xL_{q-1}^*(s)\,ds. \tag{C.4}
\]
Simple calculations give the following lemma.

Lemma C.1. For $q,r = 2,3,\dots$, we have
\[
\langle\mathcal L_q,\mathcal L_r\rangle_I =
\begin{cases}
\dfrac{4}{(2q-3)(2q-1)(2q+1)}, & q=r,\\[4pt]
\dfrac{-2}{(2q-1)(2q+1)(2q+3)}, & r=q+2,\\[4pt]
\dfrac{-2}{(2q-5)(2q-3)(2q-1)}, & r=q-2,\\[4pt]
0, & \text{otherwise},
\end{cases} \tag{C.5}
\]
\[
\langle L_q^*,L_r^*\rangle_I =
\begin{cases}
\dfrac{2}{(2q-3)(2q+1)}, & q=r,\\[4pt]
\dfrac{-1}{(2q+1)\sqrt{(2q-1)(2q+3)}}, & r=q+2,\\[4pt]
\dfrac{-1}{(2q-3)\sqrt{(2q-5)(2q-1)}}, & r=q-2,\\[4pt]
0, & \text{otherwise},
\end{cases} \tag{C.6}
\]
\[
\langle\widetilde L_q,\widetilde L_r\rangle_I =
\begin{cases}
1, & q=r,\\[4pt]
-\dfrac12\sqrt{\dfrac{(2q-3)(2q+5)}{(2q-1)(2q+3)}}, & r=q+2,\\[4pt]
-\dfrac12\sqrt{\dfrac{(2q-7)(2q+1)}{(2q-5)(2q-1)}}, & r=q-2,\\[4pt]
0, & \text{otherwise}.
\end{cases} \tag{C.7}
\]

Proof. Due to the well-known recurrence formula, see e.g. [27],
\[
(2q+1)L_q = L_{q+1}' - L_{q-1}', \qquad q = 1,2,3,\dots, \tag{C.8}
\]
and the fact that $L_q(-1) = (-1)^q$, we have
\[
(2q+1)\mathcal L_{q+1} = L_{q+1} - L_{q-1}, \qquad q = 1,2,3,\dots,
\]
or
\[
(2q-1)\mathcal L_q = L_q - L_{q-2}, \qquad q = 2,3,\dots.
\]
Hence, for $q,r = 2,3,\dots$,
\begin{align*}
\langle\mathcal L_q,\mathcal L_r\rangle_I
&= \frac{1}{(2q-1)(2r-1)}\big\langle L_q - L_{q-2},\, L_r - L_{r-2}\big\rangle_I\\
&= \frac{1}{(2q-1)(2r-1)}\Big(\frac{2}{2q+1}\delta_{q,r} - \frac{2}{2q+1}\delta_{q,r-2} - \frac{2}{2q-3}\delta_{q-2,r} + \frac{2}{2q-3}\delta_{q-2,r-2}\Big).
\end{align*}
This proves (C.5). This identity together with (C.4) implies (C.6) and (C.7). $\square$

A similar result can be proved for the derivatives $\{L_k' : k\ge1\}$.

Lemma C.2. For $k,\ell\ge1$, we have
\[
\langle L_k',L_\ell'\rangle_I = 2\sum_{m=0}^{\lfloor(k-1)/2\rfloor}\sum_{n=0}^{\lfloor(\ell-1)/2\rfloor}\big(2(\ell-2n)-1\big)\,\delta_{k-1-2m,\,\ell-1-2n}.
\]
Proof. From the recurrence formula (C.8) we have
\[
L_k' = \sum_{m=0}^{\lfloor(k-1)/2\rfloor}\big(2(k-2m)-1\big)L_{k-1-2m},
\]
so that
\begin{align*}
\langle L_k',L_\ell'\rangle_I
&= \sum_{m=0}^{\lfloor(k-1)/2\rfloor}\sum_{n=0}^{\lfloor(\ell-1)/2\rfloor}\big(2(k-2m)-1\big)\big(2(\ell-2n)-1\big)\langle L_{k-1-2m},L_{\ell-1-2n}\rangle_I\\
&= \sum_{m=0}^{\lfloor(k-1)/2\rfloor}\sum_{n=0}^{\lfloor(\ell-1)/2\rfloor}\big(2(k-2m)-1\big)\big(2(\ell-2n)-1\big)\frac{2\,\delta_{k-1-2m,\,\ell-1-2n}}{2(k-1-2m)+1}\\
&= 2\sum_{m=0}^{\lfloor(k-1)/2\rfloor}\sum_{n=0}^{\lfloor(\ell-1)/2\rfloor}\big(2(\ell-2n)-1\big)\,\delta_{k-1-2m,\,\ell-1-2n},
\end{align*}
where in the last step we used $2(k-1-2m)+1 = 2(k-2m)-1$. $\square$
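The inner products in (C.5) and the derivative expansion used in the proof of Lemma C.2 are easy to verify numerically. The following sketch is illustrative only (not part of the original text); it uses NumPy's Legendre-series utilities, representing $\mathcal L_q$ by its Legendre coefficients.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def antiderivative(q):
    """Legendre coefficients of L_q(x) = int_{-1}^x L_{q-1}(s) ds."""
    c = np.zeros(q)
    c[q - 1] = 1.0                       # coefficients of L_{q-1}
    return leg.legint(c, lbnd=-1)        # antiderivative vanishing at x = -1

def inner(cu, cv):
    """L^2(-1,1) inner product of two polynomials given as Legendre series."""
    F = leg.legint(leg.legmul(cu, cv), lbnd=-1)
    return leg.legval(1.0, F)

q = 5
# diagonal entry of (C.5)
assert np.isclose(inner(antiderivative(q), antiderivative(q)),
                  4.0 / ((2*q - 3) * (2*q - 1) * (2*q + 1)))
# off-diagonal entry r = q + 2 of (C.5)
assert np.isclose(inner(antiderivative(q), antiderivative(q + 2)),
                  -2.0 / ((2*q - 1) * (2*q + 1) * (2*q + 3)))

# expansion L_k' = sum_m (2(k-2m)-1) L_{k-1-2m} from the proof of Lemma C.2
k = 6
c = np.zeros(k + 1); c[k] = 1.0
expected = np.zeros(k)
for m in range((k - 1) // 2 + 1):
    expected[k - 1 - 2*m] = 2*(k - 2*m) - 1
assert np.allclose(leg.legder(c), expected)
```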
The following lemma concerning the extremal eigenvalues of the mass matrices defined by $\mathcal L_q$ and $L_q^*$ is proved in [146]. The proof is reproduced here for convenience.

Lemma C.3. Let $I := (-1,1)$ and let $\mathbf M,\mathbf M^*\in\mathbb R^{(p-1)\times(p-1)}$ be defined by $\mathbf M_{qr} = \langle\mathcal L_q,\mathcal L_r\rangle_I$ and $\mathbf M^*_{qr} = \langle L_q^*,L_r^*\rangle_I$, $q,r = 2,\dots,p$. Then the extremal eigenvalues of the matrices $\mathbf M$ and $\mathbf M^*$ satisfy
\[
p^{-5} \lesssim \lambda_{\min}(\mathbf M) \le \lambda_{\max}(\mathbf M) \lesssim 1,
\]
whereas
\[
\lambda_{\min}(\mathbf M^*) \simeq p^{-4} \qquad\text{and}\qquad \lambda_{\max}(\mathbf M^*) \simeq 1.
\]

Proof. For any $\mathbf x = (c_2,\dots,c_p)^\top\in\mathbb R^{p-1}$, let $\phi = \sum_{q=2}^pc_q\mathcal L_q$, so that $\phi' = \sum_{q=2}^pc_qL_{q-1}$. Then $\|\phi\|^2_{H^0(I)} = \mathbf x^\top\mathbf M\mathbf x$ and, due to (C.1),
\[
|\phi|^2_{H^1(I)} = \|\phi'\|^2_{H^0(I)} = \sum_{q,r=2}^pc_qc_r\langle L_{q-1},L_{r-1}\rangle_{H^0(I)} = \sum_{q=2}^p\frac{2}{2q-1}\,c_q^2.
\]
The last identity implies
\[
\frac{2}{2p-1}\,\mathbf x^\top\mathbf x \le |\phi|^2_{H^1(I)} \le \frac23\,\mathbf x^\top\mathbf x,
\]
or
\[
\frac32\,|\phi|^2_{H^1(I)} \le \mathbf x^\top\mathbf x \le \frac{2p-1}{2}\,|\phi|^2_{H^1(I)}. \tag{C.9}
\]
Therefore,
\[
p^{-5} \lesssim \frac{2}{2p-1}\,\frac{\|\phi\|^2_{H^0(I)}}{|\phi|^2_{H^1(I)}} \le \frac{\mathbf x^\top\mathbf M\mathbf x}{\mathbf x^\top\mathbf x} \le \frac23\,\frac{\|\phi\|^2_{H^0(I)}}{|\phi|^2_{H^1(I)}} \lesssim 1,
\]
where in the first inequality we used the inverse inequality (C.35) in Subsection C.4.1. It follows that $p^{-5}\lesssim\lambda_{\min}(\mathbf M)\le\lambda_{\max}(\mathbf M)\lesssim1$.

The results for $\mathbf M^*$ can be obtained similarly: now $\phi = \sum_{q=2}^pc_qL_q^*$ and, since $\langle L_{q-1}^*,L_{r-1}^*\rangle_I = \delta_{qr}$, instead of (C.9) we have $\mathbf x^\top\mathbf x = |\phi|^2_{H^1(I)}$. Hence
\[
p^{-4} \lesssim \frac{\mathbf x^\top\mathbf M^*\mathbf x}{\mathbf x^\top\mathbf x} = \frac{\|\phi\|^2_{H^0(I)}}{|\phi|^2_{H^1(I)}} \lesssim 1,
\]
implying $p^{-4}\lesssim\lambda_{\min}(\mathbf M^*)\le\lambda_{\max}(\mathbf M^*)\lesssim1$.

To show an upper bound for $\lambda_{\min}(\mathbf M^*)$ we follow [26, 146] and choose
\[
\phi = L_{p+2}'' - \frac{(p+3)(p+4)}{4}\,L_{p+1}'. \tag{C.10}
\]
First we note that $\phi$ is a polynomial of degree $p$ vanishing at $\pm1$. Indeed, (C.2) gives $\phi(1) = 0$. Moreover, (C.3) gives
\[
\phi(-1) = (-1)^{p+4}L_{p+2}''(1) - \frac{(p+3)(p+4)}{4}(-1)^{p+2}L_{p+1}'(1) = (-1)^{p+2}\phi(1) = 0.
\]
Therefore, there exists $\mathbf x = (c_q)_{q=2}^p\in\mathbb R^{p-1}$ such that $\phi = \sum_{q=2}^pc_qL_q^*$. It is shown in [146, Lemma 2] that $\|\phi\|^2_{H^0(I)}\simeq p^{-4}|\phi|^2_{H^1(I)}$, implying $\lambda_{\min}(\mathbf M^*)\lesssim p^{-4}$.

A lower bound for $\lambda_{\max}(\mathbf M^*)$ can be shown by choosing $\mathbf x = (1,0,\dots,0)^\top$, so that $\|\phi\|^2_{H^0(I)}\simeq|\phi|^2_{H^1(I)}\simeq1$, implying $\lambda_{\max}(\mathbf M^*)\gtrsim1$. This completes the proof of the lemma. $\square$
C.1.2 The condition numbers of the weakly-singular and hypersingular stiffness matrices

Since the number of elements in the $p$-version is fixed, in the following we assume for simplicity that $\Gamma = (-1,1)^d$ consists of a single element, $d = 1,2$. For $d = 1$, the boundary element spaces for the weakly-singular integral equation and the hypersingular integral equation are, respectively,
\[
V := \operatorname{span}\{L_q^* : q = 0,\dots,p\} \qquad\text{and}\qquad V := \operatorname{span}\{L_q^* : q = 2,\dots,p\},
\]
where in the first space $L_q^*$ denotes the scaled Legendre polynomials and in the second the antiderivatives defined in (C.4). For $d = 2$,
\[
V = \operatorname{span}\{L_{q_1}^*\otimes L_{q_2}^* : q_1,q_2 = 0,\dots,p\} \qquad\text{and}\qquad V = \operatorname{span}\{L_{q_1}^*\otimes L_{q_2}^* : q_1,q_2 = 2,\dots,p\},
\]
where the tensor product $\otimes$ is defined for any one-variable functions $f$ and $g$ by
\[
(f\otimes g)(x,y) = f(x)g(y), \qquad (x,y)\in\Gamma.
\]
When $d = 1$ we provide both upper and lower bounds for the condition numbers, while for $d = 2$ we only give an upper bound, as shown in the following lemma.

Lemma C.4. Let $\mathbf A_V^d$ and $\mathbf A_W^d$ be the stiffness matrices defined by
\[
\mathbf A_V^d =
\begin{cases}
\big[a_V(L_q^*,L_r^*)\big]_{q,r=0}^p, & d=1,\\[4pt]
\big[a_V(L_{q_1}^*\otimes L_{q_2}^*,\ L_{r_1}^*\otimes L_{r_2}^*)\big]_{q_1,q_2,r_1,r_2=0}^p, & d=2,
\end{cases}
\]
\[
\mathbf A_W^d =
\begin{cases}
\big[a_W(L_q^*,L_r^*)\big]_{q,r=2}^p, & d=1,\\[4pt]
\big[a_W(L_{q_1}^*\otimes L_{q_2}^*,\ L_{r_1}^*\otimes L_{r_2}^*)\big]_{q_1,q_2,r_1,r_2=2}^p, & d=2.
\end{cases}
\]
Then, for the weakly-singular integral equation,
\[
\lambda_{\min}(\mathbf A_V^d)\simeq p^{-2} \qquad\text{and}\qquad \lambda_{\max}(\mathbf A_V^d)\simeq1, \qquad d=1,2.
\]
For the hypersingular integral equation,
\[
\lambda_{\min}(\mathbf A_W^1)\simeq p^{-2}, \qquad p^{-6}\lesssim\lambda_{\min}(\mathbf A_W^2), \qquad \lambda_{\max}(\mathbf A_W^d)\simeq1,\quad d=1,2.
\]
Consequently,
\[
\kappa(\mathbf A_V^d)\simeq\kappa(\mathbf A_W^1)\simeq p^2, \quad d=1,2, \qquad\text{and}\qquad \kappa(\mathbf A_W^2)\lesssim p^6.
\]

Proof. Consider first the matrix $\mathbf A_V^1$. For any $\mathbf x = (c_0,\dots,c_p)^\top\in\mathbb R^{p+1}$ let $\phi = \sum_{q=0}^pc_qL_q^*$. Then, since $\langle L_q^*,L_r^*\rangle_\Gamma = \delta_{qr}$, we have
\[
\frac{\mathbf x^\top\mathbf A_V^1\mathbf x}{\mathbf x^\top\mathbf x} \simeq \frac{\|\phi\|^2_{H_I^{-1/2}(\Gamma)}}{\|\phi\|^2_{H_I^0(\Gamma)}}.
\]
The inverse inequality (see [27, Theorem 5.2] and Section C.4.1) and Lemma A.19 give
\[
p^{-2}\|\phi\|^2_{H_I^0(\Gamma)} \lesssim \|\phi\|^2_{H_I^{-1/2}(\Gamma)} \le \|\phi\|^2_{H_I^0(\Gamma)}.
\]
Hence
\[
p^{-2}\lesssim\lambda_{\min}(\mathbf A_V^1)\le\lambda_{\max}(\mathbf A_V^1)\lesssim1.
\]
In particular, taking $\phi = L_p$ one obtains (see [27, page 254])
\[
\frac{\|L_p\|^2_{H_I^{-1/2}(\Gamma)}}{\|L_p\|^2_{H_I^0(\Gamma)}} \lesssim \frac{p^{-3}}{p^{-1}} = p^{-2},
\]
hence $\lambda_{\min}(\mathbf A_V^1)\lesssim p^{-2}$. Taking $\mathbf x = (1,0,\dots,0)^\top\in\mathbb R^{p+1}$, so that $\phi$ is constant, we deduce $\mathbf x^\top\mathbf A_V^1\mathbf x/\mathbf x^\top\mathbf x\simeq1$, implying $\lambda_{\max}(\mathbf A_V^1)\gtrsim1$.

Similar arguments work for $\mathbf A_W^1$. For any $\mathbf x = (c_2,\dots,c_p)^\top\in\mathbb R^{p-1}$ let $\phi = \sum_{q=2}^pc_qL_q^*$. Then $\phi' = \sum_{q=2}^pc_qL_{q-1}^*$, so that $\mathbf x^\top\mathbf x = \|\phi'\|^2_{H_I^0(\Gamma)}\simeq\|\phi\|^2_{H_I^1(\Gamma)}$. Therefore, by using the inverse inequality again we obtain
\[
p^{-2} \lesssim \frac{\mathbf x^\top\mathbf A_W^1\mathbf x}{\mathbf x^\top\mathbf x} \simeq \frac{\|\phi\|^2_{H_I^{1/2}(\Gamma)}}{\|\phi\|^2_{H_I^1(\Gamma)}} \lesssim 1.
\]
This means $p^{-2}\lesssim\lambda_{\min}(\mathbf A_W^1)\le\lambda_{\max}(\mathbf A_W^1)\lesssim1$. By choosing $\phi$ as in (C.10) we can show that
\[
\frac{\mathbf x^\top\mathbf A_W^1\mathbf x}{\mathbf x^\top\mathbf x} \simeq \frac{\|\phi\|^2_{H_I^{1/2}(\Gamma)}}{\|\phi\|^2_{H_I^1(\Gamma)}} \lesssim \frac{\|\phi\|_{H_I^0(\Gamma)}}{\|\phi\|_{H_I^1(\Gamma)}} \simeq p^{-2},
\]
implying $\lambda_{\min}(\mathbf A_W^1)\lesssim p^{-2}$. Similarly, by choosing $\mathbf x = (1,0,\dots,0)^\top\in\mathbb R^{p-1}$ we can show that $\lambda_{\max}(\mathbf A_W^1)\gtrsim1$. This proves the result for $\mathbf A_W^1$.

We now consider $\mathbf A_V^2$. For any $\mathbf x = (c_{q_1,q_2})_{q_1,q_2=0}^p\in\mathbb R^{(p+1)^2}$ let
\[
\phi(x,y) = \sum_{q_1,q_2=0}^pc_{q_1,q_2}\,(L_{q_1}^*\otimes L_{q_2}^*)(x,y).
\]
Then $\|\phi\|^2_{H_I^{-1/2}(\Gamma\times\Gamma)}\simeq a_V(\phi,\phi) = \mathbf x^\top\mathbf A_V^2\mathbf x$ and
\[
\|\phi\|^2_{H_I^0(\Gamma\times\Gamma)}
= \sum_{q_1,q_2,r_1,r_2=0}^pc_{q_1,q_2}c_{r_1,r_2}\langle L_{q_1}^*\otimes L_{q_2}^*,\ L_{r_1}^*\otimes L_{r_2}^*\rangle_{\Gamma\times\Gamma}
= \sum_{q_1,q_2,r_1,r_2=0}^pc_{q_1,q_2}c_{r_1,r_2}\,\delta_{q_1,r_1}\delta_{q_2,r_2}
= \mathbf x^\top\mathbf x.
\]
It follows that
\[
p^{-2} \lesssim \frac{\mathbf x^\top\mathbf A_V^2\mathbf x}{\mathbf x^\top\mathbf x} \simeq \frac{\|\phi\|^2_{H_I^{-1/2}(\Gamma\times\Gamma)}}{\|\phi\|^2_{H_I^0(\Gamma\times\Gamma)}} \lesssim 1,
\]
where we used again the inverse inequality. By choosing $\phi$ such that $\phi(x,y) = L_p(x)$ for all $x,y\in\Gamma$, we deduce, as in the case of $\mathbf A_V^1$, that $\lambda_{\min}(\mathbf A_V^2)\lesssim p^{-2}$. Similarly, the same argument as for $d=1$ shows $\lambda_{\max}(\mathbf A_V^2)\gtrsim1$.

Finally, we prove the result for $\mathbf A_W^2$. For any $\mathbf x = (c_{q_1,q_2})_{q_1,q_2=2}^p\in\mathbb R^{(p-1)^2}$ let
\[
\phi(x,y) = \sum_{q_1,q_2=2}^pc_{q_1,q_2}\,(L_{q_1}^*\otimes L_{q_2}^*)(x,y).
\]
Then $\|\phi\|^2_{H_I^{1/2}(\Gamma\times\Gamma)}\simeq a_W(\phi,\phi) = \mathbf x^\top\mathbf A_W^2\mathbf x$ and, using $(L_q^*)' = L_{q-1}^*$,
\begin{align*}
|\phi|^2_{H^1(\Gamma\times\Gamma)} &= \langle\nabla\phi,\nabla\phi\rangle_{\Gamma\times\Gamma}\\
&= \sum_{q_1,q_2,r_1,r_2=2}^pc_{q_1,q_2}c_{r_1,r_2}\Big(\langle L_{q_1-1}^*,L_{r_1-1}^*\rangle_\Gamma\langle L_{q_2}^*,L_{r_2}^*\rangle_\Gamma + \langle L_{q_1}^*,L_{r_1}^*\rangle_\Gamma\langle L_{q_2-1}^*,L_{r_2-1}^*\rangle_\Gamma\Big)\\
&= \underbrace{\sum_{q_1,q_2,r_2=2}^pc_{q_1,q_2}c_{q_1,r_2}\langle L_{q_2}^*,L_{r_2}^*\rangle_\Gamma}_{=:T_1}
+ \underbrace{\sum_{q_1,q_2,r_1=2}^pc_{q_1,q_2}c_{r_1,q_2}\langle L_{q_1}^*,L_{r_1}^*\rangle_\Gamma}_{=:T_2}.
\end{align*}
First we consider $T_1$. For any $q_1\in\{2,\dots,p\}$, if we define $\mathbf x_{q_1} = (c_{q_1,2},\dots,c_{q_1,p})^\top$ then, with $\mathbf M^*$ defined in Lemma C.3,
\[
T_{1,q_1} := \sum_{q_2,r_2=2}^pc_{q_1,q_2}c_{q_1,r_2}\langle L_{q_2}^*,L_{r_2}^*\rangle_\Gamma = \mathbf x_{q_1}^\top\mathbf M^*\mathbf x_{q_1},
\]
so that
\[
p^{-4}\,\mathbf x_{q_1}^\top\mathbf x_{q_1} \lesssim \lambda_{\min}(\mathbf M^*)\,\mathbf x_{q_1}^\top\mathbf x_{q_1} \le T_{1,q_1} \le \lambda_{\max}(\mathbf M^*)\,\mathbf x_{q_1}^\top\mathbf x_{q_1} \lesssim \mathbf x_{q_1}^\top\mathbf x_{q_1}.
\]
Since $\mathbf x = (\mathbf x_{q_1})_{q_1=2}^p$, it follows that
\[
p^{-4}\,\mathbf x^\top\mathbf x \lesssim T_1 = \sum_{q_1=2}^pT_{1,q_1} \lesssim \mathbf x^\top\mathbf x.
\]
Similar estimates hold for $T_2$. Consequently,
\[
|\phi|^2_{H^1(\Gamma\times\Gamma)} \lesssim \mathbf x^\top\mathbf x \lesssim p^4\,|\phi|^2_{H^1(\Gamma\times\Gamma)}.
\]
Therefore,
\[
p^{-6} \lesssim p^{-4}\,\frac{\|\phi\|^2_{H_I^{1/2}(\Gamma\times\Gamma)}}{|\phi|^2_{H^1(\Gamma\times\Gamma)}} \lesssim \frac{\mathbf x^\top\mathbf A_W^2\mathbf x}{\mathbf x^\top\mathbf x} \lesssim \frac{\|\phi\|^2_{H_I^{1/2}(\Gamma\times\Gamma)}}{|\phi|^2_{H^1(\Gamma\times\Gamma)}} \lesssim 1,
\]
where we have used the inverse inequality
\[
\|\phi\|^2_{H_I^{1/2}(\Gamma\times\Gamma)} \gtrsim p^{-2}\,|\phi|^2_{H^1(\Gamma\times\Gamma)}.
\]
A constant lower bound for $\lambda_{\max}(\mathbf A_W^2)$ can be shown as in the case $d=1$ and is omitted. $\square$
C.1.3 Extremal eigenvalues of equivalent matrices

Lemma C.5. Let $\mathbf A$ and $\mathbf C$ be two symmetric positive definite matrices of the same size and let $\mathbf B := \mathbf C^{-1/2}\mathbf A\mathbf C^{-1/2}$. Then
\[
\frac{\lambda_{\min}(\mathbf A)}{\lambda_{\max}(\mathbf B)} \le \lambda_{\min}(\mathbf C) \le \lambda_{\max}(\mathbf C) \le \frac{\lambda_{\max}(\mathbf A)}{\lambda_{\min}(\mathbf B)}
\]
and
\[
\frac{\lambda_{\min}(\mathbf A)}{\lambda_{\max}(\mathbf C)} \le \lambda_{\min}(\mathbf B) \le \lambda_{\max}(\mathbf B) \le \frac{\lambda_{\max}(\mathbf A)}{\lambda_{\min}(\mathbf C)}.
\]

Proof. For any $\mathbf x\ne\mathbf 0$ we have, with $\mathbf y := \mathbf C^{1/2}\mathbf x\ne\mathbf 0$ (so that $\mathbf y^\top\mathbf y = \mathbf x^\top\mathbf C\mathbf x$ and $\mathbf y^\top\mathbf B\mathbf y = \mathbf x^\top\mathbf A\mathbf x$),
\[
\frac{\mathbf x^\top\mathbf C\mathbf x}{\mathbf x^\top\mathbf x}
= \frac{\mathbf x^\top\mathbf A\mathbf x}{\mathbf x^\top\mathbf x}\cdot\frac{\mathbf x^\top\mathbf C\mathbf x}{\mathbf x^\top\mathbf A\mathbf x}
= \frac{\mathbf x^\top\mathbf A\mathbf x}{\mathbf x^\top\mathbf x}\cdot\frac{\mathbf y^\top\mathbf y}{\mathbf y^\top\mathbf B\mathbf y}.
\]
Since
\[
\frac{1}{\lambda_{\max}(\mathbf B)} \le \frac{\mathbf y^\top\mathbf y}{\mathbf y^\top\mathbf B\mathbf y} \le \frac{1}{\lambda_{\min}(\mathbf B)}
\qquad\text{and}\qquad
\lambda_{\min}(\mathbf A) \le \frac{\mathbf x^\top\mathbf A\mathbf x}{\mathbf x^\top\mathbf x} \le \lambda_{\max}(\mathbf A),
\]
the first result follows. The second result can be proved similarly. $\square$
Lemma C.6. If $\mathbf B_1$ and $\mathbf B_2$ are two symmetric positive definite matrices of the same size, then $\mathbf B_2^{-1/2}\mathbf B_1\mathbf B_2^{-1/2}$ and $\mathbf B_1^{1/2}\mathbf B_2^{-1}\mathbf B_1^{1/2}$ have the same eigenvalues.

Proof. Assume that $\lambda$ is an eigenvalue of $\mathbf B_2^{-1/2}\mathbf B_1\mathbf B_2^{-1/2}$ with a corresponding eigenvector $\mathbf v$. Then with $\mathbf w = \mathbf B_1^{1/2}\mathbf B_2^{-1/2}\mathbf v$ we have
\[
\mathbf B_1^{1/2}\mathbf B_2^{-1}\mathbf B_1^{1/2}\mathbf w
= \mathbf B_1^{1/2}\mathbf B_2^{-1}\mathbf B_1\mathbf B_2^{-1/2}\mathbf v
= \mathbf B_1^{1/2}\mathbf B_2^{-1/2}\big(\mathbf B_2^{-1/2}\mathbf B_1\mathbf B_2^{-1/2}\big)\mathbf v
= \mathbf B_1^{1/2}\mathbf B_2^{-1/2}(\lambda\mathbf v)
= \lambda\mathbf w,
\]
i.e., $\lambda$ is an eigenvalue of $\mathbf B_1^{1/2}\mathbf B_2^{-1}\mathbf B_1^{1/2}$. Conversely, assume that $\lambda$ is an eigenvalue of $\mathbf B_1^{1/2}\mathbf B_2^{-1}\mathbf B_1^{1/2}$ with a corresponding eigenvector $\mathbf v$. Then with $\mathbf w = \mathbf B_2^{-1/2}\mathbf B_1^{1/2}\mathbf v$ we have
\[
\mathbf B_2^{-1/2}\mathbf B_1\mathbf B_2^{-1/2}\mathbf w
= \mathbf B_2^{-1/2}\mathbf B_1\mathbf B_2^{-1}\mathbf B_1^{1/2}\mathbf v
= \mathbf B_2^{-1/2}\mathbf B_1^{1/2}\big(\mathbf B_1^{1/2}\mathbf B_2^{-1}\mathbf B_1^{1/2}\big)\mathbf v
= \mathbf B_2^{-1/2}\mathbf B_1^{1/2}(\lambda\mathbf v)
= \lambda\mathbf w,
\]
i.e., $\lambda$ is an eigenvalue of $\mathbf B_2^{-1/2}\mathbf B_1\mathbf B_2^{-1/2}$. $\square$
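Lemma C.6 is easily confirmed numerically. The following sketch is illustrative only (the matrices are random SPD test data, not from the text); it compares the two spectra.

```python
import numpy as np

def spd_sqrt(B):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(B)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(0)
n = 6
X = rng.standard_normal((n, n)); B1 = X @ X.T + n * np.eye(n)   # SPD
Y = rng.standard_normal((n, n)); B2 = Y @ Y.T + n * np.eye(n)   # SPD

S2inv = np.linalg.inv(spd_sqrt(B2))     # B2^{-1/2}
S1 = spd_sqrt(B1)                       # B1^{1/2}

lam_a = np.linalg.eigvalsh(S2inv @ B1 @ S2inv)           # B2^{-1/2} B1 B2^{-1/2}
lam_b = np.linalg.eigvalsh(S1 @ np.linalg.inv(B2) @ S1)  # B1^{1/2} B2^{-1} B1^{1/2}
assert np.allclose(np.sort(lam_a), np.sort(lam_b))
```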
C.1.4 Eigenvalues of block matrices

In the following lemmas we adopt these notations. For any indefinite square matrix $\mathbf Q$, we denote the positive and negative eigenvalues of $\mathbf Q$ closest to zero by $\lambda_\pm(\mathbf Q)$. Hence, if $\Lambda(\mathbf Q)$ is the set of all eigenvalues of $\mathbf Q$, then
\[
\Lambda(\mathbf Q) \subset [\lambda_{\min}(\mathbf Q),\lambda_-(\mathbf Q)] \cup [\lambda_+(\mathbf Q),\lambda_{\max}(\mathbf Q)].
\]
If $\mathbf P$ is a rectangular matrix, then $\sigma_{\max}(\mathbf P)$ and $\sigma_{\min}(\mathbf P)$ denote the maximum and minimum singular values of $\mathbf P$.

The following lemma gives bounds for the eigenvalues of the sum of symmetric matrices.

Lemma C.7. Assume that $\mathbf A = \mathbf A_1 + \mathbf A_2$, where $\mathbf A_1$ and $\mathbf A_2$ are symmetric matrices. Let the eigenvalues of each matrix be arranged in non-increasing order. Then the $s$th eigenvalues of $\mathbf A$ and $\mathbf A_1$, denoted by $\lambda_s(\mathbf A)$ and $\lambda_s(\mathbf A_1)$, respectively, satisfy
\[
\lambda_s(\mathbf A_1) + \lambda_{\min}(\mathbf A_2) \le \lambda_s(\mathbf A) \le \lambda_s(\mathbf A_1) + \lambda_{\max}(\mathbf A_2).
\]

Proof. See [238, p. 101]. $\square$

The result can be interpreted as follows: when $\mathbf A_2$ is added to $\mathbf A_1$, each eigenvalue of $\mathbf A_1$ changes by an amount which lies between the smallest and largest eigenvalues of $\mathbf A_2$.

The next lemma expresses the eigenvalues of a block-diagonal matrix in terms of the eigenvalues of each block.

Lemma C.8. Let $\mathbf A$ be the block-diagonal matrix
\[
\mathbf A = \begin{pmatrix}\mathbf A_1 & \mathbf 0\\ \mathbf 0 & \mathbf A_2\end{pmatrix}.
\]
Then $\Lambda(\mathbf A) = \Lambda(\mathbf A_1)\cup\Lambda(\mathbf A_2)$.

Proof. This is a direct consequence of $\det(\mathbf A-\lambda\mathbf I) = \det(\mathbf A_1-\lambda\mathbf I_1)\det(\mathbf A_2-\lambda\mathbf I_2)$, where $\mathbf I$, $\mathbf I_1$, and $\mathbf I_2$ are identity matrices of the same size as $\mathbf A$, $\mathbf A_1$, and $\mathbf A_2$, respectively. $\square$

Lemma C.9. Let
\[
\mathbf A = \begin{pmatrix}\mathbf 0 & \mathbf B\\ \mathbf B^\top & \mathbf 0\end{pmatrix}.
\]
Then $\Lambda(\mathbf A)$ consists of the singular values of $\mathbf B$ with plus and minus signs, together with zeros. In particular, $\lambda_{\min}(\mathbf A) = -\sigma_{\max}(\mathbf B)$ and $\lambda_{\max}(\mathbf A) = \sigma_{\max}(\mathbf B)$.

Proof. See [83, p. 427]. $\square$
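A quick numerical illustration of Lemma C.9 (the block B is random test data; for a 4-by-3 block the spectrum consists of the three pairs plus and minus sigma_i together with one zero):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 3))
A = np.block([[np.zeros((4, 4)), B],
              [B.T, np.zeros((3, 3))]])

lam = np.sort(np.linalg.eigvalsh(A))
s = np.linalg.svd(B, compute_uv=False)
# expected spectrum: {+sigma_i} U {-sigma_i} U {0,...}
expected = np.sort(np.concatenate([s, -s, np.zeros(A.shape[0] - 2 * len(s))]))

assert np.allclose(lam, expected)
assert np.isclose(lam[0], -s.max()) and np.isclose(lam[-1], s.max())
```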
Lemma C.10 ([193, Lemma 2.2]). Let $\mathbf A$ be a symmetric and indefinite matrix of the form
\[
\mathbf A = \begin{pmatrix}\mathbf A_{11} & \mathbf A_{12}\\ \mathbf A_{21} & -\mathbf A_{22}\end{pmatrix},
\]
where $\mathbf A_{ii}$, $i=1,2$, are symmetric positive definite matrices and $\mathbf A_{12} = \mathbf A_{21}^\top$ are rectangular matrices of appropriate sizes. Then the spectrum of $\mathbf A$ satisfies
\[
\Lambda(\mathbf A) \subset [\lambda_{\min}(\mathbf A),\lambda_-(\mathbf A)] \cup [\lambda_+(\mathbf A),\lambda_{\max}(\mathbf A)],
\]
where
\begin{align*}
\lambda_{\min}(\mathbf A) &\ge \tfrac12\Big(\lambda_{\min}(\mathbf A_{11}) - \lambda_{\max}(\mathbf A_{22}) - \sqrt{\big(\lambda_{\min}(\mathbf A_{11})+\lambda_{\max}(\mathbf A_{22})\big)^2 + 4\sigma_{\max}(\mathbf A_{21})^2}\Big),\\
\lambda_-(\mathbf A) &\le \tfrac12\Big(\lambda_{\max}(\mathbf A_{11}) - \sqrt{\lambda_{\max}(\mathbf A_{11})^2 + 4\sigma_{\min}(\mathbf A_{21})^2}\Big),\\
\lambda_+(\mathbf A) &\ge \lambda_{\min}(\mathbf A_{11}),\\
\lambda_{\max}(\mathbf A) &\le \tfrac12\Big(\lambda_{\max}(\mathbf A_{11}) + \sqrt{\lambda_{\max}(\mathbf A_{11})^2 + 4\sigma_{\max}(\mathbf A_{21})^2}\Big).
\end{align*}

Proof. See [193]. $\square$
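With random SPD test blocks one can sanity-check three of the four bounds numerically. This is an illustration only (the data are not from the text); the bound on lambda_minus, which involves sigma_min of the off-diagonal block, is skipped here.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 5, 3
X = rng.standard_normal((n1, n1)); A11 = X @ X.T + np.eye(n1)   # SPD
Y = rng.standard_normal((n2, n2)); A22 = Y @ Y.T + np.eye(n2)   # SPD
A12 = rng.standard_normal((n1, n2))
A = np.block([[A11, A12], [A12.T, -A22]])

lam = np.linalg.eigvalsh(A)                       # ascending
l11 = np.linalg.eigvalsh(A11)
l22 = np.linalg.eigvalsh(A22)
smax = np.linalg.svd(A12, compute_uv=False).max()

lmin_bound = 0.5 * (l11[0] - l22[-1]
                    - np.sqrt((l11[0] + l22[-1])**2 + 4 * smax**2))
lmax_bound = 0.5 * (l11[-1] + np.sqrt(l11[-1]**2 + 4 * smax**2))

assert lam[0] >= lmin_bound - 1e-10           # bound on lambda_min(A)
assert lam[-1] <= lmax_bound + 1e-10          # bound on lambda_max(A)
assert lam[lam > 0].min() >= l11[0] - 1e-10   # lambda_+(A) >= lambda_min(A11)
```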
Lemma C.11. Let
\[
\mathbf A := \begin{pmatrix}\mathbf A_{11} & \mathbf A_{12}\\ \mathbf A_{21} & -\mathbf A_{22}\end{pmatrix}
\qquad\text{and}\qquad
\mathbf B := \begin{pmatrix}\mathbf B_{11} & \mathbf 0\\ \mathbf 0 & \mathbf B_{22}\end{pmatrix},
\]
where $\mathbf A_{11}$ is symmetric positive semi-definite, $\mathbf A_{22}$, $\mathbf B_{11}$ and $\mathbf B_{22}$ are symmetric positive definite, and $\mathbf A_{12} = \mathbf A_{21}^\top$. Assume that $\mathbf A_{11}$ can be written as $\mathbf A_{11} = \mathbf A_{11}^{(1)} + \mathbf A_{11}^{(2)}$, where $\mathbf A_{11}^{(\ell)}$, $\ell=1,2$, are symmetric positive semi-definite. Assume further that there exist a positive definite matrix $\mathbf T$ and positive constants $\theta_1,\theta_2,\theta_3,\Theta_0,\Theta_1,\Theta_2,\Theta_3$ satisfying, for all vectors $\mathbf x$ of appropriate size,
\[
\Theta_0 \ge \frac{\mathbf x^\top\mathbf A_{11}\mathbf x}{\mathbf x^\top\big(\mathbf A_{11}^{(1)}+\mathbf T\big)\mathbf x}
\]
and
\begin{align*}
\theta_1 &\le \lambda_{\min}\big(\mathbf B_{11}^{-1/2}(\mathbf A_{11}^{(1)}+\mathbf T)\mathbf B_{11}^{-1/2}\big), &
\Theta_1 &\ge \lambda_{\max}\big(\mathbf B_{11}^{-1/2}(\mathbf A_{11}^{(1)}+\mathbf T)\mathbf B_{11}^{-1/2}\big),\\
\theta_2 &\le \lambda_{\min}\big(\mathbf B_{22}^{-1/2}\mathbf A_{22}\mathbf B_{22}^{-1/2}\big), &
\Theta_2 &\ge \lambda_{\max}\big(\mathbf B_{22}^{-1/2}\mathbf A_{22}\mathbf B_{22}^{-1/2}\big),\\
\theta_3 &\le \frac{\mathbf x^\top\big(\mathbf A_{11}+\mathbf A_{12}\mathbf A_{22}^{-1}\mathbf A_{21}\big)\mathbf x}{\mathbf x^\top\big(\mathbf A_{11}^{(1)}+\mathbf T\big)\mathbf x}, &
\Theta_3 &\ge \frac{\mathbf x^\top\mathbf A_{12}\mathbf A_{22}^{-1}\mathbf A_{21}\mathbf x}{\mathbf x^\top\big(\mathbf A_{11}^{(1)}+\mathbf T\big)\mathbf x}.
\end{align*}
Then the spectrum of $\widetilde{\mathbf A} := \mathbf B^{-1/2}\mathbf A\mathbf B^{-1/2}$ satisfies
\[
\Lambda(\widetilde{\mathbf A}) \subset [\lambda_{\min}(\widetilde{\mathbf A}),\lambda_-(\widetilde{\mathbf A})] \cup [\lambda_+(\widetilde{\mathbf A}),\lambda_{\max}(\widetilde{\mathbf A})],
\]
where
\begin{align*}
\lambda_{\min}(\widetilde{\mathbf A}) &\ge -\tfrac12\Big(\theta_2 + \sqrt{\theta_2^2 + 4\Theta_1\Theta_2\Theta_3}\Big),\\
\lambda_-(\widetilde{\mathbf A}) &\le -\theta_2,\\
\lambda_+(\widetilde{\mathbf A}) &\ge \tfrac12\Big(-\theta_2 + \sqrt{\theta_2^2 + 4\theta_1\theta_2\theta_3}\Big),\\
\lambda_{\max}(\widetilde{\mathbf A}) &\le \tfrac12\Big(-\theta_2 + \Theta_0\Theta_1 + \sqrt{(\theta_2+\Theta_0\Theta_1)^2 + 4\Theta_1\Theta_2\Theta_3}\Big).
\end{align*}

Proof. Elementary calculations reveal
\[
\widetilde{\mathbf A} = \begin{pmatrix}\mathbf B_{11}^{-1/2}\mathbf A_{11}\mathbf B_{11}^{-1/2} & \mathbf B_{11}^{-1/2}\mathbf A_{12}\mathbf B_{22}^{-1/2}\\ \mathbf B_{22}^{-1/2}\mathbf A_{21}\mathbf B_{11}^{-1/2} & -\mathbf B_{22}^{-1/2}\mathbf A_{22}\mathbf B_{22}^{-1/2}\end{pmatrix}.
\]
Let
\[
\widetilde{\mathbf A}_{11} := \mathbf B_{11}^{-1/2}\mathbf A_{11}\mathbf B_{11}^{-1/2}, \quad
\widetilde{\mathbf A}_{12} := \mathbf B_{11}^{-1/2}\mathbf A_{12}\mathbf B_{22}^{-1/2}, \quad
\widetilde{\mathbf A}_{21} := \mathbf B_{22}^{-1/2}\mathbf A_{21}\mathbf B_{11}^{-1/2}, \quad
\widetilde{\mathbf A}_{22} := \mathbf B_{22}^{-1/2}\mathbf A_{22}\mathbf B_{22}^{-1/2},
\]
and let $\lambda$ be an eigenvalue of $\widetilde{\mathbf A}$. Then there exist vectors $\mathbf x$ and $\mathbf y$ of appropriate size satisfying $(\mathbf x,\mathbf y)\ne(\mathbf 0,\mathbf 0)$ and
\begin{align}
\widetilde{\mathbf A}_{11}\mathbf x + \widetilde{\mathbf A}_{12}\mathbf y &= \lambda\mathbf x, \tag{C.11a}\\
\widetilde{\mathbf A}_{21}\mathbf x - \widetilde{\mathbf A}_{22}\mathbf y &= \lambda\mathbf y. \tag{C.11b}
\end{align}
Consider $\lambda<0$. Then the matrix $\mathbf I - \lambda^{-1}\widetilde{\mathbf A}_{11}$ is invertible with eigenvalues greater than or equal to $1$, which together with (C.11a) implies $\mathbf x = \lambda^{-1}(\mathbf I-\lambda^{-1}\widetilde{\mathbf A}_{11})^{-1}\widetilde{\mathbf A}_{12}\mathbf y$. It then follows from (C.11b) that
\[
\mathbf y^\top\widetilde{\mathbf A}_{21}\big(\mathbf I-\lambda^{-1}\widetilde{\mathbf A}_{11}\big)^{-1}\widetilde{\mathbf A}_{12}\mathbf y - \lambda\,\mathbf y^\top\widetilde{\mathbf A}_{22}\mathbf y = \lambda^2\,\mathbf y^\top\mathbf y.
\]
Since the eigenvalues of $(\mathbf I-\lambda^{-1}\widetilde{\mathbf A}_{11})^{-1}$ are less than or equal to $1$, the following inequality holds:
\[
\mathbf y^\top\widetilde{\mathbf A}_{21}\big(\mathbf I-\lambda^{-1}\widetilde{\mathbf A}_{11}\big)^{-1}\widetilde{\mathbf A}_{12}\mathbf y \le \mathbf y^\top\widetilde{\mathbf A}_{21}\widetilde{\mathbf A}_{12}\mathbf y.
\]
We infer from the above two relations that
\[
\mathbf y^\top\widetilde{\mathbf A}_{21}\widetilde{\mathbf A}_{12}\mathbf y - \lambda\,\mathbf y^\top\widetilde{\mathbf A}_{22}\mathbf y \ge \lambda^2\,\mathbf y^\top\mathbf y.
\]
Note that $\mathbf y\ne\mathbf 0$, because otherwise (C.11a) would imply that $\lambda$ is an eigenvalue of $\widetilde{\mathbf A}_{11}$, which is a contradiction because $\widetilde{\mathbf A}_{11}$ is positive semi-definite while $\lambda$ is negative. Hence
\[
\sigma_{\max}(\widetilde{\mathbf A}_{21})^2 - \lambda\,\lambda_{\min}(\widetilde{\mathbf A}_{22}) \ge \lambda^2,
\]
implying
\[
\lambda^2 + \theta_2\lambda - \sigma_{\max}(\widetilde{\mathbf A}_{21})^2 \le 0.
\]
This means
\[
-\tfrac12\Big(\theta_2 + \sqrt{\theta_2^2 + 4\sigma_{\max}(\widetilde{\mathbf A}_{21})^2}\Big) \le \lambda < 0. \tag{C.12}
\]
Let us now estimate $\sigma_{\max}(\widetilde{\mathbf A}_{21})$. For any nonzero vector $\mathbf x$ of appropriate size, we have, with $\mathbf y = \mathbf B_{22}^{-1/2}\mathbf A_{21}\mathbf B_{11}^{-1/2}\mathbf x$ and $\mathbf z = \mathbf B_{11}^{-1/2}\mathbf x$,
\[
\frac{\mathbf x^\top\widetilde{\mathbf A}_{12}\widetilde{\mathbf A}_{21}\mathbf x}{\mathbf x^\top\mathbf x}
= \frac{\mathbf x^\top\mathbf B_{11}^{-1/2}\mathbf A_{12}\mathbf B_{22}^{-1}\mathbf A_{21}\mathbf B_{11}^{-1/2}\mathbf x}{\mathbf x^\top\mathbf x}
= \frac{\mathbf y^\top\mathbf y}{\mathbf y^\top\widetilde{\mathbf A}_{22}^{-1}\mathbf y}\cdot\frac{\mathbf z^\top\mathbf A_{12}\mathbf A_{22}^{-1}\mathbf A_{21}\mathbf z}{\mathbf z^\top\big(\mathbf A_{11}^{(1)}+\mathbf T\big)\mathbf z}\cdot\frac{\mathbf x^\top\mathbf B_{11}^{-1/2}\big(\mathbf A_{11}^{(1)}+\mathbf T\big)\mathbf B_{11}^{-1/2}\mathbf x}{\mathbf x^\top\mathbf x}
\le \Theta_2\Theta_3\Theta_1. \tag{C.13}
\]
This implies $\sigma_{\max}(\widetilde{\mathbf A}_{21})^2\le\Theta_1\Theta_2\Theta_3$, which together with (C.12) yields the required estimate for $\lambda_{\min}(\widetilde{\mathbf A})$.

Next, by pre-multiplying (C.11a) by $\mathbf x^\top$, (C.11b) by $\mathbf y^\top$, subtracting the resulting equations, and noting that $\mathbf x^\top\widetilde{\mathbf A}_{12}\mathbf y = \mathbf y^\top\widetilde{\mathbf A}_{21}\mathbf x$, we obtain
\[
\mathbf x^\top\widetilde{\mathbf A}_{11}\mathbf x + \mathbf y^\top\widetilde{\mathbf A}_{22}\mathbf y = \lambda\,\mathbf x^\top\mathbf x - \lambda\,\mathbf y^\top\mathbf y.
\]
Since $\theta_2\le\lambda_{\min}(\widetilde{\mathbf A}_{22})$ and $\mathbf x^\top\widetilde{\mathbf A}_{11}\mathbf x\ge0$, it follows that $\theta_2\,\mathbf y^\top\mathbf y \le \lambda\,\mathbf x^\top\mathbf x - \lambda\,\mathbf y^\top\mathbf y$, or
\[
(\theta_2+\lambda)\,\mathbf y^\top\mathbf y \le \lambda\,\mathbf x^\top\mathbf x \le 0.
\]
Since $\mathbf y^\top\mathbf y>0$, this results in $\lambda\le-\theta_2$, yielding the upper bound for $\lambda_-(\widetilde{\mathbf A})$.

Consider now $\lambda>0$ in (C.11). Then $\widetilde{\mathbf A}_{22}+\lambda\mathbf I$ is invertible. Thus by eliminating $\mathbf y$ in (C.11) and pre-multiplying the resulting equation by $\mathbf x^\top$ we obtain
\[
\lambda\,\mathbf x^\top\mathbf x = \mathbf x^\top\widetilde{\mathbf A}_{11}\mathbf x + \mathbf x^\top\widetilde{\mathbf A}_{12}\big(\widetilde{\mathbf A}_{22}+\lambda\mathbf I\big)^{-1}\widetilde{\mathbf A}_{21}\mathbf x. \tag{C.14}
\]
Therefore,
\begin{align*}
\lambda\,\mathbf x^\top\mathbf x
&= \mathbf x^\top\widetilde{\mathbf A}_{11}\mathbf x + \mathbf x^\top\widetilde{\mathbf A}_{12}\widetilde{\mathbf A}_{22}^{-1/2}\big(\mathbf I+\lambda\widetilde{\mathbf A}_{22}^{-1}\big)^{-1}\widetilde{\mathbf A}_{22}^{-1/2}\widetilde{\mathbf A}_{21}\mathbf x\\
&\ge \mathbf x^\top\widetilde{\mathbf A}_{11}\mathbf x + \lambda_{\min}\big((\mathbf I+\lambda\widetilde{\mathbf A}_{22}^{-1})^{-1}\big)\,\mathbf x^\top\widetilde{\mathbf A}_{12}\widetilde{\mathbf A}_{22}^{-1}\widetilde{\mathbf A}_{21}\mathbf x\\
&\ge \mathbf x^\top\widetilde{\mathbf A}_{11}\mathbf x + \frac{1}{1+\lambda/\lambda_{\min}(\widetilde{\mathbf A}_{22})}\,\mathbf x^\top\widetilde{\mathbf A}_{12}\widetilde{\mathbf A}_{22}^{-1}\widetilde{\mathbf A}_{21}\mathbf x\\
&\ge \mathbf x^\top\widetilde{\mathbf A}_{11}\mathbf x + \frac{\theta_2}{\theta_2+\lambda}\,\mathbf x^\top\widetilde{\mathbf A}_{12}\widetilde{\mathbf A}_{22}^{-1}\widetilde{\mathbf A}_{21}\mathbf x\\
&\ge \frac{\theta_2}{\theta_2+\lambda}\,\mathbf x^\top\big(\widetilde{\mathbf A}_{11}+\widetilde{\mathbf A}_{12}\widetilde{\mathbf A}_{22}^{-1}\widetilde{\mathbf A}_{21}\big)\mathbf x\\
&= \frac{\theta_2}{\theta_2+\lambda}\,\mathbf x^\top\mathbf B_{11}^{-1/2}\big(\mathbf A_{11}+\mathbf A_{12}\mathbf A_{22}^{-1}\mathbf A_{21}\big)\mathbf B_{11}^{-1/2}\mathbf x\\
&\ge \frac{\theta_2\theta_3}{\theta_2+\lambda}\,\mathbf x^\top\mathbf B_{11}^{-1/2}\big(\mathbf A_{11}^{(1)}+\mathbf T\big)\mathbf B_{11}^{-1/2}\mathbf x\\
&\ge \frac{\theta_1\theta_2\theta_3}{\theta_2+\lambda}\,\mathbf x^\top\mathbf x.
\end{align*}
If $\mathbf x=\mathbf 0$ then (C.11b) implies that $-\lambda<0$ is an eigenvalue of $\widetilde{\mathbf A}_{22}$, which is a contradiction. Hence $\mathbf x^\top\mathbf x>0$, and the above inequality yields
\[
\lambda^2 + \theta_2\lambda - \theta_1\theta_2\theta_3 \ge 0,
\]
resulting in the required lower bound for $\lambda_+(\widetilde{\mathbf A})$.

Finally, it follows from (C.14) and (C.13) that
\[
\lambda\,\mathbf x^\top\mathbf x
\le \Theta_0\,\mathbf x^\top\mathbf B_{11}^{-1/2}\big(\mathbf A_{11}^{(1)}+\mathbf T\big)\mathbf B_{11}^{-1/2}\mathbf x + \lambda_{\max}\big((\widetilde{\mathbf A}_{22}+\lambda\mathbf I)^{-1}\big)\,\mathbf x^\top\widetilde{\mathbf A}_{12}\widetilde{\mathbf A}_{21}\mathbf x
\le \Big(\Theta_0\Theta_1 + \frac{\Theta_1\Theta_2\Theta_3}{\theta_2+\lambda}\Big)\mathbf x^\top\mathbf x,
\]
which implies
\[
\lambda^2 + (\theta_2-\Theta_0\Theta_1)\lambda - \theta_2\Theta_0\Theta_1 - \Theta_1\Theta_2\Theta_3 \le 0.
\]
This yields the required upper bound for $\lambda_{\max}(\widetilde{\mathbf A})$. $\square$
C.2 Norms of Nodal Basis Functions

In this section we present results on norms of the nodal basis functions of the boundary element spaces used to approximate the solution of the hypersingular integral equation. Let $\Gamma$ be the boundary of a Lipschitz domain in $\mathbb R^d$, $d = 2,3$. Assume that $\Gamma$ is partitioned into triangular or rectangular elements. On this mesh one can define nodal basis functions $\phi_k$ which take the value $1$ at one nodal point and zero at all other nodal points.

Lemma C.12. Let $\Gamma_k$ be a $(d-1)$-dimensional manifold in $\mathbb R^d$, $d = 2,3$, which is the support of a nodal basis function $\phi_k$, and let $h_k$ be the average size of the elements in $\Gamma_k$. Then for $s\ge0$,
\[
\|\phi_k\|^2_{H_I^{-s}(\Gamma_k)} \simeq h_k^{d-1+2s}, \qquad \|\phi_k\|^2_{H_I^{s}(\Gamma_k)} \simeq h_k^{d-1-2s},
\]
and
\[
\|\phi_k\|_{L^q(\Gamma_k)} = \|\phi_k\|_{L^q(\Gamma)} \simeq h_k^{(d-1)/q}, \qquad 1\le q\le\infty.
\]
The constants are independent of $\Gamma_k$ and $q$.

Proof. Let $\hat\phi$ be the scaled image of $\phi_k$ on a reference element $\hat\Gamma$. Then, noting that the dimension of $\Gamma_k$ is $d-1$, it follows from Lemma A.6 that
\[
\|\phi_k\|^2_{H_I^{-s}(\Gamma_k)} = h_k^{d-1+2s}\|\hat\phi\|^2_{H_I^{-s}(\hat\Gamma)} \simeq h_k^{d-1+2s},
\qquad
\|\phi_k\|^2_{H_I^{s}(\Gamma_k)} = h_k^{d-1-2s}\|\hat\phi\|^2_{H_I^{s}(\hat\Gamma)} \simeq h_k^{d-1-2s}.
\]
The final result can be shown by a simple calculation. $\square$
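The $L^q$-scaling can be illustrated for $d = 2$ (a one-dimensional boundary) with the standard hat function $\phi(x) = 1 - |x|/h$ on $(-h,h)$, for which $\|\phi\|_{L^q}^q = 2h/(q+1)$, i.e. $\|\phi\|_{L^q}\simeq h^{1/q} = h^{(d-1)/q}$. The following is a small illustrative check (the closed form is elementary; the quadrature is a plain trapezoidal rule):

```python
import numpy as np

def hat_Lq_norm(h, q, n=200_001):
    """L^q norm of the hat function 1 - |x|/h on (-h, h), trapezoidal rule."""
    x = np.linspace(-h, h, n)                 # n odd: the kink at 0 is a node
    f = (1.0 - np.abs(x) / h) ** q
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    return integral ** (1.0 / q)

for q in (1, 2, 4):
    for h in (0.1, 0.01):
        exact = (2.0 * h / (q + 1)) ** (1.0 / q)   # closed form of ||phi||_{L^q}
        assert abs(hat_Lq_norm(h, q) - exact) < 1e-8 * exact
```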
As a consequence of the above lemma, we have the following results. More general results can be found in [5, Theorem 4.8].

Lemma C.13. Let $h_k$ and $\phi_k$ be given as in Lemma C.12. Then
\[
\|\phi_k\|^2_{H_F^{1/2}(\Gamma)} \simeq h_k^{d-2}, \qquad d = 2,3, \tag{C.15}
\]
and
\[
\|\phi_k\|^2_{H_F^{-1/2}(\Gamma)} \simeq
\begin{cases}
h_k^2\big(1+|\log h_k|\big), & d = 2,\\
h_k^3, & d = 3,
\end{cases} \tag{C.16}
\]
where the norms are defined by (A.18).

Proof. We recall that the dimension of $\Gamma$ is $d-1$. First we prove (C.15). It follows from (A.20) and Lemma A.3 that
\[
\|\phi_k\|^2_{H_F^{1/2}(\Gamma)} \lesssim \|\phi_k\|^2_{H_I^{1/2}(\Gamma)}.
\]
Since $\operatorname{supp}(\phi_k) = \overline\Gamma_k$, Theorem A.10 yields $\|\phi_k\|_{H_I^{1/2}(\Gamma)} \simeq \|\phi_k\|_{H_I^{1/2}(\Gamma_k)}$, so that the above inequality and Lemma C.12 imply
\[
\|\phi_k\|^2_{H_F^{1/2}(\Gamma)} \lesssim h_k^{d-2}. \tag{C.17}
\]
On the other hand, the equivalence of norms given in Subsection A.2.5, the definition of the Slobodetski norm in (A.26), and Lemma A.4 on the scaling property of the $H^{1/2}$-seminorm give
\[
\|\phi_k\|^2_{H_F^{1/2}(\Gamma)} \simeq \|\phi_k\|^2_{H_S^{1/2}(\Gamma)} \gtrsim |\phi_k|^2_{H^{1/2}(\Gamma)} \ge |\phi_k|^2_{H^{1/2}(\Gamma_k)} = h_k^{d-2}\,|\hat\phi|^2_{H^{1/2}(\hat\Gamma)} \simeq h_k^{d-2}. \tag{C.18}
\]
Inequalities (C.17) and (C.18) yield (C.15).

Next we prove (C.16). Noting that $\operatorname{supp}(\phi_k) = \overline\Gamma_k$, we deduce from Lemma C.12 and Theorem A.10 that
\[
h_k^d \simeq \|\phi_k\|^2_{H_I^{-1/2}(\Gamma_k)} \lesssim \|\phi_k\|^2_{H_I^{-1/2}(\Gamma)}.
\]
The equivalence of norms given in Subsection A.2.5 and (A.20) yield
\[
\|\phi_k\|^2_{H_I^{-1/2}(\Gamma)} \lesssim \|\phi_k\|^2_{H_F^{-1/2}(\Gamma)},
\]
so that
\[
h_k^d \lesssim \|\phi_k\|^2_{H_F^{-1/2}(\Gamma)}. \tag{C.19}
\]
On the other hand, Lemma A.18 Parts (iii)–(iv) and Lemma C.12 give
\[
\|\phi_k\|^2_{H_F^{-1/2}(\Gamma)} \lesssim
\begin{cases}
\|\phi_k\|^2_{L^{4/3}(\Gamma)} \simeq h_k^3, & d = 3,\\[4pt]
\dfrac{q}{q-1}\,\|\phi_k\|^2_{L^q(\Gamma)} \simeq \dfrac{q}{q-1}\,h_k^{2/q} \quad\text{for all } q\in(1,2], & d = 2.
\end{cases}
\]
Therefore, for $d = 3$ this and (C.19) yield (C.16). For $d = 2$, let $q'$ be the conjugate of $q$, i.e., $1/q+1/q' = 1$. Then $q'\ge2$ and
\[
\|\phi_k\|^2_{H_F^{-1/2}(\Gamma)} \lesssim q'\,h_k^2\,h_k^{-2/q'}.
\]
If $e^{-2}\le h_k\le1$ then $|\log h_k|\le2$. We choose $q'$ such that $q' = 2$. Then $q'h_k^{-2/q'} = 2h_k^{-1}\le2e^2$, so that
\[
\|\phi_k\|^2_{H_F^{-1/2}(\Gamma)} \lesssim h_k^2.
\]
If $0<h_k<e^{-2}$ then $|\log h_k|>2$. We choose $q'$ such that $q' = |\log h_k|$. Then
\[
q'h_k^{-2/q'} = |\log h_k|\,h_k^{-2/|\log h_k|} = e^2|\log h_k|,
\]
so that
\[
\|\phi_k\|^2_{H_F^{-1/2}(\Gamma)} \lesssim h_k^2\,|\log h_k|.
\]
In any case, we choose $q' = \max\{2,|\log h_k|\}$ to obtain
\[
\|\phi_k\|^2_{H_F^{-1/2}(\Gamma)} \lesssim h_k^2\big(1+|\log h_k|\big).
\]
Combining this with (C.19) gives
\[
h_k^2 \lesssim \|\phi_k\|^2_{H_F^{-1/2}(\Gamma)} \lesssim h_k^2\big(1+|\log h_k|\big).
\]
By using [5, Lemma 4.5], a sharper left inequality can be obtained for sufficiently small $h_k$:
\[
h_k^2\big(1+|\log h_k|\big) \lesssim \|\phi_k\|^2_{H_F^{-1/2}(\Gamma)} \lesssim h_k^2\big(1+|\log h_k|\big);
\]
see the proof of [5, Theorem 4.8]. This yields (C.16) for $d = 2$, completing the proof of the lemma. $\square$
C.3 Further Properties of the Hierarchical Basis Functions for the p-Version

The following three lemmas give bounds for Sobolev norms of functions belonging to the spans of $L_k$, $\mathcal L_k$, and $\widetilde L_k$, respectively. These lemmas are used in Chapters 4 and 6.

Lemma C.14. Let $I := (-1,1)$ and $u = \sum_{k=1}^pu_kL_k$, where $L_k$ is the Legendre polynomial of degree $k$. Then
\[
\|u\|^2_{H_I^{1/2}(I)} \lesssim \sum_{k=1}^pk\,u_k^2 \tag{C.20}
\]
and
\[
\|u\|^2_{H_I^{-1/2}(I)} \gtrsim \sum_{k=1}^p\frac{u_k^2}{k^3}. \tag{C.21}
\]
The constant is independent of $u$ and $p$.
Proof. We first prove (C.20). From the definition of the interpolation norm (A.91), noting that $u\in H^1(I)$, we have
\[
\|u\|^2_{H_I^{1/2}(I)}
= \int_0^\infty\inf_{v\in H^1(I)}\Big(\|v\|^2_{L^2(I)} + t^2\|u-v\|^2_{H_I^1(I)}\Big)\frac{dt}{t^2}
\le \int_0^\infty\inf_{v\in\mathbb P}\Big(\|v\|^2_{L^2(I)} + t^2\|u-v\|^2_{H_I^1(I)}\Big)\frac{dt}{t^2}, \tag{C.22}
\]
where $\mathbb P := \operatorname{span}\{L_k : k = 1,\dots,p\}$. By the orthogonality property of the Legendre polynomials we have, for $v = \sum_{k=1}^pv_kL_k$,
\[
\|v\|^2_{L^2(I)} = \sum_{k=1}^p\frac{2v_k^2}{2k+1}.
\]
On the other hand,
\begin{align*}
\|u'-v'\|^2_{L^2(I)}
&= \sum_{k=1}^p\sum_{\ell=1}^p(u_k-v_k)(u_\ell-v_\ell)\langle L_k',L_\ell'\rangle_I\\
&= \sum_{k=1}^p\sum_{\ell=1}^pk^{3/2}(u_k-v_k)\,\ell^{3/2}(u_\ell-v_\ell)\,k^{-3/2}\ell^{-3/2}\langle L_k',L_\ell'\rangle_I\\
&\le \big\|\{k^{-3/2}\ell^{-3/2}\langle L_k',L_\ell'\rangle_I\}_{k,\ell=1,\dots,p}\big\|_2\,\sum_{k=1}^pk^3(u_k-v_k)^2. \tag{C.23}
\end{align*}
It follows from the definition of matrix norms (the matrix being symmetric) that
\[
\big\|\{k^{-3/2}\ell^{-3/2}\langle L_k',L_\ell'\rangle_I\}\big\|_2 \le \big\|\{k^{-3/2}\ell^{-3/2}\langle L_k',L_\ell'\rangle_I\}\big\|_\infty
= \max_{1\le k\le p}k^{-3/2}\sum_{\ell=1}^p\ell^{-3/2}\langle L_k',L_\ell'\rangle_I. \tag{C.24}
\]
Lemma C.2 and a straightforward calculation give
\begin{align*}
k^{-3/2}\sum_{\ell=1}^p\ell^{-3/2}\langle L_k',L_\ell'\rangle_I
&= 2k^{-3/2}\sum_{\ell=1}^p\ell^{-3/2}\sum_{m=0}^{\lfloor(k-1)/2\rfloor}\sum_{n=0}^{\lfloor(\ell-1)/2\rfloor}\big(2(\ell-2n)-1\big)\delta_{k-1-2m,\,\ell-1-2n}\\
&\le 2k^{-3/2}\sum_{\ell=1}^p\ell^{-3/2}\sum_{m=0}^{k-1}\sum_{n=0}^{\ell-1}\big(2(\ell-n)-1\big)\delta_{k-1-m,\,\ell-1-n}.
\end{align*}
Since, by a change of variables,
\[
\sum_{m=0}^{k-1}\sum_{n=0}^{\ell-1}\big(2(\ell-n)-1\big)\delta_{k-1-m,\,\ell-1-n}
= \sum_{m=0}^{k-1}\sum_{n=0}^{\ell-1}(2n+1)\delta_{m,n}
= \sum_{m=0}^{k-1}(2m+1)\sum_{n=0}^{\ell-1}\delta_{m,n},
\]
we have
\begin{align*}
k^{-3/2}\sum_{\ell=1}^p\ell^{-3/2}\langle L_k',L_\ell'\rangle_I
&\le 2k^{-3/2}\sum_{m=0}^{k-1}(2m+1)\sum_{\ell=m+1}^p\ell^{-3/2}\\
&\le 4k^{-3/2}\sum_{m=0}^{k-1}(2m+1)(m+1)^{-1/2}\\
&\le 8k^{-3/2}\sum_{m=0}^{k-1}(m+1)^{1/2} \le 8. \tag{C.25}
\end{align*}
Inequalities (C.23), (C.24), and (C.25) yield
\[
\|u'-v'\|^2_{L^2(I)} \lesssim \sum_{k=1}^pk^3(u_k-v_k)^2,
\]
so that
\[
\|u-v\|^2_{H_I^1(I)} \lesssim \sum_{k=1}^p\Big(\frac1k + k^3\Big)(u_k-v_k)^2 \lesssim \sum_{k=1}^pk^3(u_k-v_k)^2.
\]
This inequality and (C.22) imply
\[
\|u\|^2_{H_I^{1/2}(I)}
\lesssim \int_0^\infty t^{-2}\inf_{(v_k)\in\mathbb R^p}\sum_{k=1}^p\Big(\frac{v_k^2}{k} + t^2k^3(u_k-v_k)^2\Big)\,dt
= \sum_{k=1}^p\int_0^\infty t^{-2}\inf_{v_k\in\mathbb R}\Big(\frac{v_k^2}{k} + t^2k^3(u_k-v_k)^2\Big)\,dt.
\]
Note that if $f(x) = ax^2 + b(x-c)^2$ with $a,b>0$, then $\min_{x\in\mathbb R}f(x) = f\big(bc/(a+b)\big) = abc^2/(a+b)$. Hence,
\[
\|u\|^2_{H_I^{1/2}(I)} \lesssim \sum_{k=1}^p\int_0^\infty\frac{k^3u_k^2}{1+k^4t^2}\,dt = \frac{\pi}{2}\sum_{k=1}^pk\,u_k^2,
\]
proving (C.20).

Inequality (C.21) is proved by duality as follows:
\[
\|u\|_{H_I^{-1/2}(I)} = \sup_{v\in H_I^{1/2}(I)}\frac{\langle u,v\rangle_I}{\|v\|_{H_I^{1/2}(I)}} \ge \frac{\langle u,v\rangle_I}{\|v\|_{H_I^{1/2}(I)}} \qquad \forall v\in\mathbb P.
\]
In particular, for $v = \sum_{k=1}^pv_kL_k$ with $v_k = u_k/k^2$ we have, thanks to (C.1) and (C.20),
\[
\|u\|_{H_I^{-1/2}(I)} \gtrsim \frac{\sum_{k=1}^pu_k^2/k^3}{\big(\sum_{k=1}^pu_k^2/k^3\big)^{1/2}} = \Big(\sum_{k=1}^pu_k^2/k^3\Big)^{1/2},
\]
completing the proof of the lemma. $\square$
Lemma C.15. Let $I = (-1,1)$ and $u_k = c_k\widetilde L_k$, $k\ge2$, where $\widetilde L_k$ is defined in (C.4). If $u = \sum_{k=2}^pu_k$ for some $p\ge2$, then
\[
\sum_{k=2}^pc_k^2 \lesssim \|u\|^2_{H_I^{1/2}(I)} \lesssim \sum_{k=2}^pk\,c_k^2. \tag{C.26}
\]
Consequently,
\[
\|u\|^2_{H_I^{1/2}(I)} \lesssim \sum_{k=2}^p\|u_k\|^2_{H_I^{1/2}(I)} \lesssim p\,\|u\|^2_{H_I^{1/2}(I)}. \tag{C.27}
\]
The constants are independent of $u$ and $p$.

Proof. Recalling the definition of $\widetilde L_k$ in Section C.1.1 and the orthogonality of the Legendre polynomials, we have
\[
|u_k|^2_{H_I^1(I)} = c_k^2\,\frac{\|L_{k-1}\|^2_{L^2(I)}}{\|\mathcal L_k\|^2_{L^2(I)}}
\]
and
\[
|u|^2_{H_I^1(I)} = \Big\|\sum_{k=2}^p\frac{c_k}{\|\mathcal L_k\|_{L^2(I)}}L_{k-1}\Big\|^2_{L^2(I)} = \sum_{k=2}^p\frac{c_k^2}{\|\mathcal L_k\|^2_{L^2(I)}}\,\|L_{k-1}\|^2_{L^2(I)}.
\]
It follows from (C.1) and (C.5) that
\[
|u|^2_{H_I^1(I)} \simeq \sum_{k=2}^pk^2c_k^2. \tag{C.28}
\]
Next we show that
\[
\sum_{k=2}^p\frac{c_k^2}{k^2} \lesssim \|u\|^2_{L^2(I)} \lesssim \sum_{k=2}^pc_k^2. \tag{C.29}
\]
Recalling the values of $\langle\widetilde L_k,\widetilde L_\ell\rangle_I$ in (C.7), we obtain
\[
\|u\|^2_{L^2(I)}
= \sum_{k=2}^p\sum_{\ell=2}^pc_kc_\ell\langle\widetilde L_k,\widetilde L_\ell\rangle_I
= \sum_{k=2}^pc_k^2 + 2\sum_{k=2}^{p-2}c_kc_{k+2}\langle\widetilde L_k,\widetilde L_{k+2}\rangle_I
= \sum_{k=2}^pc_k^2 - \sum_{k=2}^{p-2}c_kc_{k+2}\sqrt{\frac{(2k-3)(2k+5)}{(2k-1)(2k+3)}}. \tag{C.30}
\]
Since $(2k-3)(2k+5)\le(2k-1)(2k+3)$ we deduce
\[
\|u\|^2_{L^2(I)} \le \sum_{k=2}^pc_k^2 + \frac12\sum_{k=2}^{p-2}c_k^2 + \frac12\sum_{k=2}^{p-2}c_{k+2}^2 \le 2\sum_{k=2}^pc_k^2,
\]
which proves the right inequality in (C.29). To prove the left inequality in (C.29) we observe that (C.30) and the Cauchy–Schwarz inequality give
\begin{align*}
\|u\|^2_{L^2(I)}
&\ge \sum_{k=2}^pc_k^2 - \Big(\sum_{k=2}^{p-2}c_k^2\sqrt{\tfrac{(2k-3)(2k+5)}{(2k-1)(2k+3)}}\Big)^{1/2}\Big(\sum_{k=2}^{p-2}c_{k+2}^2\sqrt{\tfrac{(2k-3)(2k+5)}{(2k-1)(2k+3)}}\Big)^{1/2}\\
&= \sum_{k=2}^pc_k^2 - \Big(\sum_{k=2}^{p-2}c_k^2\sqrt{\tfrac{(2k-3)(2k+5)}{(2k-1)(2k+3)}}\Big)^{1/2}\Big(\sum_{k=4}^{p}c_k^2\sqrt{\tfrac{(2k-7)(2k+1)}{(2k-5)(2k-1)}}\Big)^{1/2}\\
&\ge \sum_{k=2}^pc_k^2 - \frac12\sum_{k=2}^{p-2}c_k^2\sqrt{\tfrac{(2k-3)(2k+5)}{(2k-1)(2k+3)}} - \frac12\sum_{k=4}^{p}c_k^2\sqrt{\tfrac{(2k-7)(2k+1)}{(2k-5)(2k-1)}}.
\end{align*}
It suffices to prove that
\[
\sum_{k=2}^pc_k^2 - \frac12\sum_{k=2}^{p-2}c_k^2\sqrt{\tfrac{(2k-3)(2k+5)}{(2k-1)(2k+3)}} - \frac12\sum_{k=4}^{p}c_k^2\sqrt{\tfrac{(2k-7)(2k+1)}{(2k-5)(2k-1)}} \gtrsim \sum_{k=2}^p\frac{c_k^2}{k^2}.
\]
This can be seen by noting that
\[
\lim_{k\to\infty}k^2\Big(1 - \frac12\sqrt{\tfrac{(2k-3)(2k+5)}{(2k-1)(2k+3)}} - \frac12\sqrt{\tfrac{(2k-7)(2k+1)}{(2k-5)(2k-1)}}\Big) = \frac32.
\]
Thus we have shown the left inequality of (C.29). Inequality (C.26) is a consequence of (C.28) and (C.29) and interpolation, noting that the interpolation space between two spaces of polynomials is a space of polynomials; see [140, Theorem 1]. Inequality (C.27) follows from (C.26) if one notes that $\|u_k\|^2_{L^2(I)} = c_k^2$ and
\[
|u_k|^2_{H_I^1(I)} = c_k^2\,\frac{\|L_{k-1}\|^2_{L^2(I)}}{\|\mathcal L_k\|^2_{L^2(I)}} \simeq k^2c_k^2,
\]
so that $\|u_k\|^2_{H_I^{1/2}(I)} \simeq k\,c_k^2$. $\square$

Lemma C.16. Let $I = (-1,1)$ and $u = \sum_{k=2}^pc_k\mathcal L_k$, $p\ge2$, where $\mathcal L_k$ is defined in (C.4). Then
\[
\sum_{k=2}^p\frac{c_k^2}{k^3} \lesssim \|u\|^2_{H_I^{1/2}(I)} \lesssim \sum_{k=2}^p\frac{c_k^2}{k^2}.
\]
The constants are independent of $u$ and $p$.

Proof. The lemma is a consequence of Lemma C.15, noting that
\[
u = \sum_{k=2}^pc_k\,\|\mathcal L_k\|_{L^2(I)}\,\widetilde L_k
\]
and that $\|\mathcal L_k\|_{L^2(I)}\simeq k^{-3/2}$; see (C.5). $\square$
The next lemma provides an upper bound for the bilinear form aV (·, ·) acting on Legendre polynomials. Lemma C.17. Let I and J be two bounded and open intervals on the real line satisfying I ∩ J = ∅. Let L pi ,I (respectively, L p j ,J ) be the transformation of the Legendre polynomial L pi (respectively, L p j ) onto I (respectively, J). The functions L pi ,I and L p j ,J are extended by zero outside I and J if necessary. Then aV (L pi ,I , L p j ,J ) ≤
2|I| |J| max(|I|/2, |J|/2) pi +p j 1 . π dist(I, J) pi +p j (2pi + 1)(2p j + 1)
Here |I| and |J| are the lengths of I and J.
C.3 Further Properties of the Hierarchical Basis Functions for the p-Version
525
Proof. Let $x_0$ be the midpoint of $I$ and $y_0$ be the midpoint of $J$. Using the Taylor expansion of $\log|x-y|$ there holds
$$
\log|x-y| = \sum_{\nu=0}^{p_i+p_j-1} \sum_{|\alpha|=\nu} \frac{\partial_x^{\alpha_1} \partial_y^{\alpha_2} \log|x_0-y_0|}{\nu!}\, (x-x_0)^{\alpha_1} (y-y_0)^{\alpha_2} + R_{p_i+p_j}(x,y)
$$
with
$$
R_{p_i+p_j}(x,y) = \sum_{|\alpha|=p_i+p_j} \frac{\partial_x^{\alpha_1} \partial_y^{\alpha_2} \log|x_0+\theta(x-x_0)-y_0-\theta(y-y_0)|}{(p_i+p_j)!}\, (x-x_0)^{\alpha_1} (y-y_0)^{\alpha_2},
$$
where $\theta \in (0,1)$. Due to the orthogonality property of the Legendre polynomials the contribution of the first sum to the integral vanishes, and we have
$$
a_V(L_{p_i,I}, L_{p_j,J}) = -\frac{1}{\pi} \int_{I\times J} L_{p_i,I}(x)\, L_{p_j,J}(y) \log|x-y| \, ds_y\, ds_x
= -\frac{1}{\pi} \int_{I\times J} L_{p_i,I}(x)\, L_{p_j,J}(y)\, R_{p_i+p_j}(x,y) \, ds_y\, ds_x .
$$
Applying the Cauchy–Schwarz inequality two times we obtain
$$
\begin{aligned}
\bigl| a_V(L_{p_i,I}, L_{p_j,J}) \bigr|
&\le \frac{1}{\pi} \Bigl( \int_I L_{p_i,I}^2(x)\, ds_x \Bigr)^{1/2} \Bigl( \int_J L_{p_j,J}^2(y)\, ds_y \Bigr)^{1/2} \Bigl( \int_{I\times J} R_{p_i+p_j}^2(x,y)\, ds_y\, ds_x \Bigr)^{1/2} \\
&= \frac{|I|^{1/2} |J|^{1/2}}{\pi} \Bigl( \frac{2}{(2p_i+1)(2p_j+1)} \Bigr)^{1/2} \Bigl( \int_{I\times J} R_{p_i+p_j}^2(x,y)\, ds_y\, ds_x \Bigr)^{1/2} \\
&\le \frac{2\,|I|\,|J|}{\pi} \, \frac{1}{\sqrt{(2p_i+1)(2p_j+1)}} \sup_{(x,y)\in I\times J} \bigl| R_{p_i+p_j}(x,y) \bigr|,
\end{aligned}
\tag{C.31}
$$
using in the last step that $\int_{I\times J} R^2 \le |I|\,|J| \sup_{I\times J} R^2$. Since
$$
\bigl| \partial_x^{\alpha_1} \partial_y^{\alpha_2} \log|x-y| \bigr| \le \frac{(\alpha_1+\alpha_2-1)!}{|x-y|^{\alpha_1+\alpha_2}}
$$
we have
$$
\begin{aligned}
\sup_{(x,y)\in I\times J} \bigl| R_{p_i+p_j}(x,y) \bigr|
&\le \sup_{(x,y)\in I\times J} \sum_{|\alpha|=p_i+p_j} \frac{(\alpha_1+\alpha_2-1)!}{(p_i+p_j)!}\, \frac{|x-x_0|^{\alpha_1} |y-y_0|^{\alpha_2}}{|x_0+\theta(x-x_0)-y_0-\theta(y-y_0)|^{p_i+p_j}} \\
&\le \sup_{(x,y)\in I\times J} \frac{1}{p_i+p_j} \sum_{|\alpha|=p_i+p_j} \frac{(|I|/2)^{\alpha_1} (|J|/2)^{\alpha_2}}{|x_0+\theta(x-x_0)-y_0-\theta(y-y_0)|^{p_i+p_j}} \\
&\le \frac{1}{p_i+p_j}\, \frac{1}{\operatorname{dist}(I,J)^{p_i+p_j}} \sum_{|\alpha|=p_i+p_j} (|I|/2)^{\alpha_1} (|J|/2)^{\alpha_2} \\
&\le \frac{\max(|I|/2, |J|/2)^{p_i+p_j}}{\operatorname{dist}(I,J)^{p_i+p_j}},
\end{aligned}
\tag{C.32}
$$
since the last sum has $p_i+p_j+1$ terms, each bounded by $\max(|I|/2,|J|/2)^{p_i+p_j}$.
Inequalities (C.31) and (C.32) imply
$$
a_V(L_{p_i,I}, L_{p_j,J}) \le \frac{2\,|I|\,|J|}{\pi} \, \frac{1}{\sqrt{(2p_i+1)(2p_j+1)}} \, \frac{\max(|I|/2, |J|/2)^{p_i+p_j}}{\operatorname{dist}(I,J)^{p_i+p_j}} .
$$
This completes the proof.
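The exponential decay in the separation of $I$ and $J$ established by Lemma C.17 is easy to observe numerically. The sketch below is an illustration only: it assumes the single-layer bilinear form $a_V(u,v) = -\frac{1}{\pi}\int_I\int_J u(x)\,v(y)\log|x-y|\,dy\,dx$ as used in the proof, evaluates the double integral by tensor Gauss–Legendre quadrature (the kernel is smooth since $I$ and $J$ are well separated), and checks the bound; all function and variable names are ours.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def a_V(p_i, p_j, I, J, n=60):
    # a_V(L_{p_i,I}, L_{p_j,J}) = -(1/pi) * int_I int_J L_{p_i,I}(x) L_{p_j,J}(y) log|x-y| dy dx
    t, w = leggauss(n)
    ax, bx = I
    ay, by = J
    x = 0.5 * (bx - ax) * t + 0.5 * (bx + ax)   # quadrature nodes mapped to I
    wx = 0.5 * (bx - ax) * w
    y = 0.5 * (by - ay) * t + 0.5 * (by + ay)   # quadrature nodes mapped to J
    wy = 0.5 * (by - ay) * w
    Lx = Legendre.basis(p_i)(t)                  # transformed Legendre polynomial on I
    Ly = Legendre.basis(p_j)(t)                  # transformed Legendre polynomial on J
    K = np.log(np.abs(x[:, None] - y[None, :]))
    return -(wx * Lx) @ K @ (wy * Ly) / np.pi

I, J = (0.0, 1.0), (3.0, 4.0)                    # |I| = |J| = 1, dist(I, J) = 2
for p_i, p_j in [(2, 2), (3, 4), (5, 5)]:
    m = p_i + p_j
    bound = (2.0 / np.pi) / np.sqrt((2 * p_i + 1) * (2 * p_j + 1)) * (0.5 / 2.0) ** m
    assert abs(a_V(p_i, p_j, I, J)) <= bound + 1e-14
```

The entries decay like $4^{-(p_i+p_j)}$ for this geometry, which is the mechanism exploited later for compressing far-field blocks.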
C.4 Some Properties of Polynomials

We collect in this section some useful results on polynomials.

C.4.1 Inverse properties

In this subsection we present the inverse property of piecewise polynomial functions. First we state a classical theorem by Andrey Markov from the 1890s, the so-called Markov theorem:
$$
\max_{-1\le x\le 1} |f'(x)| \le p^2 \max_{-1\le x\le 1} |f(x)|
\tag{C.33}
$$
where $f$ is a polynomial of degree $p$, $p \ge 1$. Later, in 1932, Erhard Schmidt stated (without proof) in the Sitzungsberichte der Preussischen Akademie the following inequality:
$$
\Bigl( \int_{-1}^{1} |f'(x)|^2\, dx \Bigr)^{1/2} \le k_p\, p^2 \Bigl( \int_{-1}^{1} |f(x)|^2\, dx \Bigr)^{1/2}
\tag{C.34}
$$
where $\lim_{p\to\infty} k_p = 1$. In their 1937 paper [122], Hille, Szegő, and Tamarkin gave various proofs for both inequalities (C.33) and (C.34). In particular, they proved that $\lim_{p\to\infty} k_p = 1/\pi$.
R. Bellman, in his 1944 paper [22], provided an elegant proof of (C.34) with an explicit upper bound:
$$
\Bigl( \int_{-1}^{1} |f'(x)|^2\, dx \Bigr)^{1/2} \le \frac{(p+1)^2}{\sqrt{2}} \Bigl( \int_{-1}^{1} |f(x)|^2\, dx \Bigr)^{1/2} .
\tag{C.35}
$$
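Both inequalities can be tested numerically. The sketch below (an illustration under our own naming, not part of the original text) checks the Markov inequality (C.33) on Chebyshev polynomials, which attain equality at $x=\pm1$, and Bellman's bound (C.35) on random Legendre expansions, computing the $L^2$ norms with Gauss–Legendre quadrature.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev
from numpy.polynomial.legendre import Legendre, leggauss

xs = np.linspace(-1.0, 1.0, 20001)

# Markov (C.33): max|f'| <= p^2 max|f| on [-1,1]; T_p attains equality,
# with |T_p'(+-1)| = p^2 while max|T_p| = 1.
for p in (3, 5, 8):
    Tp = Chebyshev.basis(p)
    assert np.max(np.abs(Tp.deriv()(xs))) <= p**2 * np.max(np.abs(Tp(xs))) + 1e-8

# Bellman (C.35): ||f'||_{L^2(-1,1)} <= (p+1)^2 / sqrt(2) * ||f||_{L^2(-1,1)}.
rng = np.random.default_rng(0)
for p in (3, 5, 10):
    f = Legendre(rng.standard_normal(p + 1))
    t, w = leggauss(p + 1)              # exact for polynomials of degree <= 2p+1
    norm_f = np.sqrt(np.sum(w * f(t) ** 2))
    norm_df = np.sqrt(np.sum(w * f.deriv()(t) ** 2))
    assert norm_df <= (p + 1) ** 2 / np.sqrt(2) * norm_f
```

The $p^2$ growth of the constant is exactly what drives the $(p+1)^{2(r-s)}$ factors in the inverse estimates of Lemma C.18 below.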
We now extend the above results to two-dimensional Lipschitz domains with fractional-order Sobolev norms. The generalisation also includes piecewise polynomial functions.

Lemma C.18. Let $\Gamma$ be a rectangle in $\mathbb{R}^2$ which is partitioned into subrectangles $\Gamma_j$ whose side lengths are proportional to $H \in (0,1)$, $j = 1, \dots, J$. Let
$$
V := \bigl\{ v : v|_{\Gamma_j} \text{ is a polynomial of degree } p \ge 1,\ j = 1, \dots, J \bigr\}.
$$
(i) For any $0 \le s \le r \le 1$,
$$
\|v\|_{H_I^r(\Gamma)} \le H^{-(r-s)} (p+1)^{2(r-s)} \|v\|_{H_I^s(\Gamma)}, \quad v \in V \cap \widetilde H^r(\Gamma),
$$
$$
{}_*\|v\|_{H_I^r(\Gamma)} \le H^{-(r-s)} (p+2)^{2(r-s)} \, {}_*\|v\|_{H_I^s(\Gamma)}, \quad v \in V \cap H^r(\Gamma).
$$
(ii) For any $-1 \le s \le r \le 0$,
$$
{}_*\|v\|_{H_I^r(\Gamma)} \lesssim H^{-(r-s)} (p+2)^{2(r-s)} \, {}_*\|v\|_{H_I^s(\Gamma)}, \quad v \in V \cap \widetilde H^r(\Gamma),
$$
$$
\|v\|_{H_I^r(\Gamma)} \lesssim H^{-(r-s)} (p+1)^{2(r-s)} \|v\|_{H_I^s(\Gamma)}, \quad v \in V \cap H^r(\Gamma).
$$
The constants are independent of $v$, $H$, $p$, $s$, and $r$.

Proof. This lemma was presented and proved in [108, Lemma 4] with slightly different norms. Our choice of norms is to ensure scalability. We first prove Part (i). By invoking Lemma A.6 Part (i), we obtain, with $\widehat\Gamma_j = (-1,1)^2$ being the reference element that rescales $\Gamma_j$,
$$
\|v\|_{H_I^1(\Gamma)}^2 = |v|_{H^1(\Gamma)}^2 = \sum_{j=1}^{J} \bigl| v|_{\Gamma_j} \bigr|_{H^1(\Gamma_j)}^2 = \sum_{j=1}^{J} \bigl| \widehat{v|_{\Gamma_j}} \bigr|_{H^1(\widehat\Gamma_j)}^2, \quad v \in V \cap \widetilde H^1(\Gamma).
$$
It follows from (C.35) and Lemma A.6 Part (i) that
$$
\bigl| \widehat{v|_{\Gamma_j}} \bigr|_{H^1(\widehat\Gamma_j)}^2 \le (p+1)^4 \bigl\| \widehat{v|_{\Gamma_j}} \bigr\|_{H_I^0(\widehat\Gamma_j)}^2 = H^{-2} (p+1)^4 \bigl\| v|_{\Gamma_j} \bigr\|_{H_I^0(\Gamma_j)}^2 .
$$
Altogether, we deduce
$$
\|v\|_{H_I^1(\Gamma)} \le H^{-1} (p+1)^2 \|v\|_{H_I^0(\Gamma)}, \quad v \in V \cap \widetilde H^1(\Gamma).
$$
Let
$$
A_0 = V \cap \widetilde H^0(\Gamma) \quad\text{and}\quad B_0 = A_1 = B_1 = V \cap \widetilde H^1(\Gamma).
$$
Note that $V \cap \widetilde H^0(\Gamma)$ and $V \cap \widetilde H^1(\Gamma)$ are algebraically equal, but they are equipped with different norms. Hence it is possible to define
$$
T_j : A_j \to B_j, \quad T_j u = u, \quad u \in A_j, \quad j = 0, 1.
$$
Making use of the fact that interpolating between spaces of polynomials yields spaces of polynomials (see [140] for the case of one-dimensional domains and [23] for higher dimensions), we deduce by invoking Theorem A.4, with $M_0 = H^{-1}(p+1)^2$, $M_1 = 1$, and $\theta = s$, that $A_s = [A_0, A_1]_\theta = V \cap \widetilde H^s(\Gamma)$ and
$$
\|v\|_{H_I^1(\Gamma)} \le H^{-(1-s)} (p+1)^{2(1-s)} \|v\|_{H_I^s(\Gamma)}, \quad v \in V \cap \widetilde H^1(\Gamma),
$$
noting that $T_\theta u = u$. In the same manner, let
$$
A_0 = B_0 = A_1 = V \cap \widetilde H^s(\Gamma) \quad\text{and}\quad B_1 = V \cap \widetilde H^1(\Gamma).
$$
Another application of Theorem A.4 with $\theta = (r-s)/(1-s)$, $M_0 = 1$, and $M_1 = H^{-(1-s)}(p+1)^{2(1-s)}$ yields
$$
\|v\|_{H_I^r(\Gamma)} \le H^{-(r-s)} (p+1)^{2(r-s)} \|v\|_{H_I^s(\Gamma)}, \quad v \in V \cap \widetilde H^r(\Gamma),
$$
proving the first inequality in Part (i).

To prove the second inequality in Part (i), we use the same argument as above. Firstly, it follows from the definition of the ${}_*\|\cdot\|_{H_I^1(\Gamma)}$-norm in (A.46) and (A.47) and the scaling property proved in Lemma A.7 that
$$
{}_*\|v\|_{H_I^1(\Gamma)}^2 \le \sum_{j=1}^{J} {}_*\bigl\| v|_{\Gamma_j} \bigr\|_{H_I^1(\Gamma_j)}^2 = \sum_{j=1}^{J} {}_*\bigl\| \widehat{v|_{\Gamma_j}} \bigr\|_{H_I^1(\widehat\Gamma_j)}^2 .
$$
It follows from (C.35) and Lemma A.7 Part (i) that
$$
{}_*\bigl\| \widehat{v|_{\Gamma_j}} \bigr\|_{H_I^1(\widehat\Gamma_j)}^2
= \frac{1}{|\widehat\Gamma_j|} \bigl\| \widehat{v|_{\Gamma_j}} \bigr\|_{H_I^0(\widehat\Gamma_j)}^2 + \bigl| \widehat{v|_{\Gamma_j}} \bigr|_{H^1(\widehat\Gamma_j)}^2
\le \bigl( (p+1)^4 + 1 \bigr) \bigl\| \widehat{v|_{\Gamma_j}} \bigr\|_{H_I^0(\widehat\Gamma_j)}^2
\le H^{-2} (p+2)^4 \bigl\| v|_{\Gamma_j} \bigr\|_{H_I^0(\Gamma_j)}^2 .
$$
Altogether we deduce
$$
{}_*\|v\|_{H_I^1(\Gamma)}^2 \le H^{-2} (p+2)^4 \|v\|_{H_I^0(\Gamma)}^2 = H^{-2} (p+2)^4 \, {}_*\|v\|_{H_I^0(\Gamma)}^2, \quad v \in V \cap H^1(\Gamma).
$$
Noting the definition of ${}_*\|v\|_{H_I^s(\Gamma)}$ by (A.47) and invoking Theorem A.4, we obtain the required inequality.

Part (ii) is proved by duality arguments. Indeed, let $-1 \le s \le r \le 0$. Then
$$
{}_*\|w\|_{H_I^{-s}(\Gamma)} \lesssim H^{-(r-s)} (p+2)^{2(r-s)} \, {}_*\|w\|_{H_I^{-r}(\Gamma)}
$$
so that
$$
\begin{aligned}
{}_*\|v\|_{H_I^r(\Gamma)}
&= \sup_{0 \ne w \in H^{-r}(\Gamma)} \frac{\langle v, w\rangle_\Gamma}{{}_*\|w\|_{H_I^{-r}(\Gamma)}}
\lesssim H^{-(r-s)} (p+2)^{2(r-s)} \sup_{0 \ne w \in H^{-r}(\Gamma)} \frac{\langle v, w\rangle_\Gamma}{{}_*\|w\|_{H_I^{-s}(\Gamma)}} \\
&\le H^{-(r-s)} (p+2)^{2(r-s)} \sup_{0 \ne w \in H^{-s}(\Gamma)} \frac{\langle v, w\rangle_\Gamma}{{}_*\|w\|_{H_I^{-s}(\Gamma)}}
= H^{-(r-s)} (p+2)^{2(r-s)} \, {}_*\|v\|_{H_I^s(\Gamma)} .
\end{aligned}
$$
Similar arguments prove the second inequality in Part (ii), completing the proof of the lemma.
C.4.2 Some useful bounds for the h and p finite element functions

C.4.2.1 The h-version in one dimension

The following result gives a bound for the $L^\infty$-norm by the weighted $H^{1/2}$-norm. The corresponding p-version is Theorem C.1, (C.49), and the corresponding h-version in two dimensions is Lemma C.23 and Lemma C.25.

Lemma C.19. Let $I$ be an interval of length $H$ which is partitioned into subintervals $J$ which form a quasi-uniform mesh of size $h$. If $v$ is a continuous function defined on $I$ whose restriction onto each subinterval is a polynomial of degree 1, then
$$
\|v\|_{L^\infty(I)} \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr)^{1/2} \|v\|_{H_w^{1/2}(I)}
\tag{C.36}
$$
where, see Subsection A.2.6.1,
$$
\|v\|_{H_w^{1/2}(I)}^2 := \frac{1}{H} \|v\|_{L^2(I)}^2 + |v|_{H^{1/2}(I)}^2 .
$$
Moreover, if $J$ is one of the subintervals with length $h$ which partition $I$, then
$$
\|v\|_{L^2(J)} \lesssim h^{1/2} \Bigl( 1 + \log\frac{H}{h} \Bigr)^{1/2} \|v\|_{H_w^{1/2}(I)} .
\tag{C.37}
$$
The constants are independent of $v$, $H$, and $h$.
Proof. Inequality (C.36) is proved in [70, Lemma 1]. We include the proof here for completeness. As in the proof of Lemma C.23 we consider the reference element $\widehat I = (0,1)$ and $\widehat v(\widehat x) = v(H\widehat x)$ for all $\widehat x \in \widehat I$. The partition of $I$ is transplanted into a partition of $\widehat I$ with size $\widehat h = h/H$, which is then extended into a partition of $\widehat\Omega$, the triangle of vertices $(0,0)$, $(1,0)$, and $(0,1)$. We extend $\widehat v$ to a continuous piecewise linear function $\widehat V$ on $\widehat\Omega$ such that
$$
{}_*\|\widehat V\|_{H_w^1(\widehat\Omega)} \lesssim \|\widehat v\|_{H_w^{1/2}(\widehat I)} .
$$
(Note that the weights in the definitions of ${}_*\|\cdot\|_{H_w^1(\widehat\Omega)}$ and $\|\cdot\|_{H_w^{1/2}(\widehat\Omega)}$ are 1.) It then follows from Lemma C.23, more precisely from (C.42), that
$$
\|v\|_{L^\infty(I)} = \|\widehat v\|_{L^\infty(\widehat I)} \le \|\widehat V\|_{L^\infty(\widehat\Omega)}
\lesssim \Bigl( 1 + \log\frac{1}{\widehat h} \Bigr)^{1/2} {}_*\|\widehat V\|_{H_w^1(\widehat\Omega)}
\lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr)^{1/2} \|\widehat v\|_{H_w^{1/2}(\widehat I)}
= \Bigl( 1 + \log\frac{H}{h} \Bigr)^{1/2} \|v\|_{H_w^{1/2}(I)},
$$
where in the last step we used (A.50) in Lemma A.5. This proves (C.36). Inequality (C.37) is a consequence of (C.36) and
$$
\|v\|_{L^2(J)}^2 \le h \|v\|_{L^\infty(J)}^2 \le h \|v\|_{L^\infty(I)}^2 .
$$
The next lemma provides a useful bound for the weighted norm $\|\cdot\|_{H_w^{1/2}(I)}$. The analogous p-version is (C.50).

Lemma C.20. Let $I = (0,H)$ be partitioned into subintervals which form a quasi-uniform mesh of size $h$. If $v$ is a continuous function defined on $I$ whose restriction onto each subinterval is a polynomial of degree 1, and if $v(0) = v(H) = 0$, then
$$
\|v\|_{H_w^{1/2}(I)} \lesssim |v|_{H^{1/2}(I)} + \Bigl( 1 + \log\frac{H}{h} \Bigr)^{1/2} \|v\|_{L^\infty(I)}
\tag{C.38}
$$
where (see Subsection A.2.6.1)
$$
\|v\|_{H_w^{1/2}(I)}^2 := |v|_{H^{1/2}(I)}^2 + \int_I \frac{|v(x)|^2}{\operatorname{dist}(x, \partial I)}\, dx .
$$
The constant is independent of $v$, $H$, and $h$.

Proof. Inequality (C.38) is proved in [3, Lemma 10]; see also the proof of [236, Lemma 3.2]. We include the proof here for completeness. First we note that
$$
\|v\|_{H_w^{1/2}(I)}^2 = |v|_{H^{1/2}(I)}^2 + \int_0^{H/2} \frac{|v(x)|^2}{x}\, dx + \int_{H/2}^{H} \frac{|v(x)|^2}{H-x}\, dx .
$$
On the other hand,
$$
\int_0^{H/2} \frac{|v(x)|^2}{x}\, dx = \int_0^{h} \frac{|v(x)|^2}{x}\, dx + \int_h^{H/2} \frac{|v(x)|^2}{x}\, dx
\le \int_0^{h} \frac{|v(x) - v(0)|^2}{x}\, dx + \|v\|_{L^\infty(I)}^2 \int_h^{H} \frac{dx}{x}
\lesssim h^2 \|v'\|_{L^\infty(I)}^2 + \|v\|_{L^\infty(I)}^2 \log\frac{H}{h} .
$$
By using the inverse inequality, see e.g. [28],
$$
\|v'\|_{L^\infty(I)}^2 \lesssim \frac{1}{h^2} \|v\|_{L^\infty(I)}^2 ,
$$
we deduce
$$
\int_0^{H/2} \frac{|v(x)|^2}{x}\, dx \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr) \|v\|_{L^\infty(I)}^2 .
$$
Similarly we can prove
$$
\int_{H/2}^{H} \frac{|v(x)|^2}{H-x}\, dx \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr) \|v\|_{L^\infty(I)}^2 ,
$$
completing the proof of (C.38).

Combining Lemma C.19 and Lemma C.20 we have
Lemma C.21. Let $I = (0,H)$ be partitioned into subintervals which form a quasi-uniform mesh of size $h$. If $v$ is a continuous function defined on $I$ whose restriction onto each subinterval is a polynomial of degree 1, and if $v(0) = v(H) = 0$, then
$$
\|v\|_{H_w^{1/2}(I)} \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr) |v|_{H^{1/2}(I)}
\tag{C.39}
$$
where we recall that, see Subsection A.2.6.1,
$$
|v|_{H^{1/2}(I)}^2 := \int_{I\times I} \frac{|v(x)-v(y)|^2}{|x-y|^2}\, dx\, dy
\quad\text{and}\quad
\|v\|_{H_w^{1/2}(I)}^2 := |v|_{H^{1/2}(I)}^2 + \int_I \frac{|v(x)|^2}{\operatorname{dist}(x, \partial I)}\, dx .
$$
The constant is independent of $v$, $H$, and $h$.

Proof. In view of Lemma C.20, in order to prove (C.39) it suffices to prove
$$
\|v\|_{L^\infty(I)} \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr)^{1/2} |v|_{H^{1/2}(I)} .
\tag{C.40}
$$
Let $\widehat v$ be the scaled function defined from $v$ when $I$ is rescaled into $\widehat I = (0,1)$, i.e., $\widehat v(\widehat x) = v(H\widehat x)$ for all $\widehat x \in \widehat I$. Then for any $\alpha \in \mathbb{R}$ we have, noting that $\widehat v(0) = v(0) = 0$,
$$
\|v\|_{L^\infty(I)} = \|\widehat v\|_{L^\infty(\widehat I)} \le \|\widehat v + \alpha\|_{L^\infty(\widehat I)} + |\alpha|
= \|\widehat v + \alpha\|_{L^\infty(\widehat I)} + |\widehat v(0) + \alpha| \le 2\,\|\widehat v + \alpha\|_{L^\infty(\widehat I)} .
$$
The partition of $I$ induces a partition of $\widehat I$ with size $\widehat h = h/H$. Invoking Lemma C.19 for $\widehat v + \alpha$ defined on $\widehat I$ with mesh size $\widehat h$ we infer
$$
\|v\|_{L^\infty(I)} \lesssim \Bigl( 1 + \log\frac{1}{\widehat h} \Bigr)^{1/2} \|\widehat v + \alpha\|_{H_w^{1/2}(\widehat I)}
= \Bigl( 1 + \log\frac{1}{\widehat h} \Bigr)^{1/2} \|\widehat v + \alpha\|_{H_S^{1/2}(\widehat I)} .
\tag{C.41}
$$
Following along the lines of [60, Theorem 3.1.1] one can prove
$$
\inf_{\alpha\in\mathbb{R}} \|\widehat v + \alpha\|_{H_S^{1/2}(\widehat I)} = |\widehat v|_{H^{1/2}(\widehat I)} .
$$
Hence, by taking the infimum of both sides of (C.41) over $\alpha \in \mathbb{R}$ and by using Lemma A.4, we infer
$$
\|v\|_{L^\infty(I)} \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr)^{1/2} |\widehat v|_{H^{1/2}(\widehat I)} = \Bigl( 1 + \log\frac{H}{h} \Bigr)^{1/2} |v|_{H^{1/2}(I)},
$$
proving (C.40) and thus completing the proof of (C.39).
C.4.2.2 The h-version in two dimensions

We first present a Sobolev inequality, which is proved in [240] and will be useful for the proof of Lemma C.23. We recall that $W^{1,\infty}(D)$ is the Sobolev space of functions belonging to $L^\infty(D)$ whose weak first-order partial derivatives also belong to $L^\infty(D)$.

Lemma C.22. Let $D$ be a bounded domain in $\mathbb{R}^2$ with Lipschitz boundary $\partial D$. Then
$$
\|v\|_{L^\infty(D)} \lesssim |\log\varepsilon|^{1/2} \|v\|_{H^1(D)} + \varepsilon \|v\|_{W^{1,\infty}(D)}
\quad \forall v \in W^{1,\infty}(D),\ \varepsilon \in (0, e^{-2}).
$$
The constant depends on the size of $D$.

Proof. The proof follows along the lines of the proofs of [240, Theorem 2.1 and Corollary 2.1]. Consider $x_0 \in D$ and let $D_\varepsilon = \{x \in D : |x - x_0| < \varepsilon\}$. We note that $|D_\varepsilon| \simeq \varepsilon^2$, where $|D_\varepsilon|$ is the measure of $D_\varepsilon$. The Sobolev inequality (see e.g. [43, Corollary 9.14])
$$
|v(x) - v(x_0)| \lesssim \|v\|_{W^{1,\infty}(D)} |x - x_0| \quad \forall x \in D
$$
implies
$$
|v(x_0)| \lesssim |v(x)| + \|v\|_{W^{1,\infty}(D)} |x - x_0| .
$$
In particular, for $x \in D_\varepsilon$,
$$
|v(x_0)| \lesssim |v(x)| + \varepsilon \|v\|_{W^{1,\infty}(D)},
$$
so that, by taking the $L^2(D_\varepsilon)$-norm on both sides,
$$
|D_\varepsilon|^{1/2} |v(x_0)| \lesssim \|v\|_{L^2(D_\varepsilon)} + \varepsilon \|v\|_{W^{1,\infty}(D)} |D_\varepsilon|^{1/2} .
$$
This implies, noting that $|D_\varepsilon|^{-1/2} \lesssim \varepsilon^{-1}$,
$$
|v(x_0)| \lesssim \varepsilon^{-1} \|v\|_{L^2(D_\varepsilon)} + \varepsilon \|v\|_{W^{1,\infty}(D)} .
$$
Hölder's inequality yields, for $r \in [2,\infty)$,
$$
\|v\|_{L^2(D_\varepsilon)} \le |D_\varepsilon|^{\frac12-\frac1r} \|v\|_{L^r(D_\varepsilon)} \lesssim \varepsilon^{1-2/r} \|v\|_{L^r(D)} .
$$
It is well known that $H^1(D)$ is continuously embedded into $L^r(D)$ for all $r \in [2,\infty)$. Moreover, by tracing the constant, see e.g. [43, Corollaries 9.11 and 9.13], we have
$$
\|v\|_{L^r(D)} \lesssim r^{1/2} \|v\|_{H^1(D)} .
$$
Therefore
$$
\|v\|_{L^2(D_\varepsilon)} \lesssim \varepsilon^{1-2/r} r^{1/2} \|v\|_{H^1(D)} .
$$
By choosing $r = |\log(1/\varepsilon)| \ge 2$ we have $\varepsilon^{-2/r} = e^2 \simeq 1$, so that
$$
\|v\|_{L^2(D_\varepsilon)} \lesssim \varepsilon |\log\varepsilon|^{1/2} \|v\|_{H^1(D)} .
$$
Altogether we deduce the required result.
The following well-known result appears in various forms in various places; cf. [37, 36, 41, 218, 236, 243]. Its proof is included for completeness.

Lemma C.23. Let $v$ be a finite element function defined on a substructure $\Omega \subset \mathbb{R}^2$ of diameter $H$, and let the minimum diameter of the elements in $\Omega$ be $h$. Then
$$
\|v\|_{L^\infty(\Omega)} \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr)^{1/2} {}_*\|v\|_{H_w^1(\Omega)}
$$
where (see Subsection A.2.6.1)
$$
{}_*\|v\|_{H_w^1(\Omega)}^2 := \frac{1}{H^2} \|v\|_{L^2(\Omega)}^2 + |v|_{H^1(\Omega)}^2 .
$$
The constant is independent of $v$, $H$, and $h$, but may depend on the polynomial degree $p$ of $v$.

Proof. Let $\widehat\Omega$ be the reference substructure defined by
$$
\widehat\Omega := \bigl\{ \widehat x \in \mathbb{R}^2 : \widehat x = x/H,\ x \in \Omega \bigr\}
$$
and let $\widehat v(\widehat x) = v(H\widehat x)$ for all $\widehat x \in \widehat\Omega$. We partition $\widehat\Omega$ into elements of minimum diameter $\widehat h = h/H$. Then $\|v\|_{L^\infty(\Omega)} = \|\widehat v\|_{L^\infty(\widehat\Omega)}$. Moreover, Lemma A.5 gives ${}_*\|v\|_{H_w^1(\Omega)} = {}_*\|\widehat v\|_{H_w^1(\widehat\Omega)}$. Hence it suffices to prove
$$
\|\widehat v\|_{L^\infty(\widehat\Omega)} \lesssim \Bigl( 1 + \log\frac{1}{\widehat h} \Bigr)^{1/2} {}_*\|\widehat v\|_{H_w^1(\widehat\Omega)} .
\tag{C.42}
$$
Invoking Lemma C.22 with $D = \widehat\Omega$, $\varepsilon = \widehat h$, and with $v$ being $\widehat v$, we deduce
$$
\|\widehat v\|_{L^\infty(\widehat\Omega)} \lesssim |\log\widehat h|^{1/2} \|\widehat v\|_{H^1(\widehat\Omega)} + \widehat h \|\widehat v\|_{W^{1,\infty}(\widehat\Omega)} .
$$
The inverse property of the finite element functions (see e.g. [28, Proposition 5.1, page 157]),
$$
\|\varphi\|_{W^{1,\infty}(D)} \lesssim \widehat h^{-2/q} \|\varphi\|_{W^{1,q}(D)}, \quad 1 \le q \le \infty,
$$
yields (C.42).
A bound for the $L^2$-norm on an edge by the weighted $H^{1/2}$-norm on the domain is the subject of the next lemma.

Lemma C.24. Let $G = (0,H) \times (0,H)$ and let $\gamma$ be an edge of $G$. Assume that $G$ is partitioned into subsquares of size $h$. If $v$ is a continuous piecewise linear function on $G$ then
$$
\|v\|_{L^2(\gamma)}^2 \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr) \|v\|_{H_w^{1/2}(G)}^2
$$
where $\|\cdot\|_{H_w^{1/2}(G)}$ is defined in Section A.2.6.1. The constant is independent of $H$, $h$, and $v$.

Proof. This result is proved in [3, Lemma 11], see also [2, Lemma 3], and is included here for completeness. Without loss of generality, we assume that $\gamma = (0,H) \times \{0\}$. Let $\widetilde\gamma = (0,H)$. Then
$$
\|v\|_{L^2(\gamma)}^2 = \int_0^H |v(x,0)|^2\, dx \le \int_0^H \max_{y\in\widetilde\gamma} |v(x,y)|^2\, dx .
$$
It follows from (C.36) that
$$
\|v\|_{L^2(\gamma)}^2 \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr) \int_0^H \|v(x,\cdot)\|_{H_w^{1/2}(\widetilde\gamma)}^2\, dx .
$$
Invoking Lemma A.16 (A.96) we obtain the required result.
The following results are first proved in [72]. We state them here for convenience.
Lemma C.25. Let $R := (0,H) \times (0,H)$ and $R_h := (0,H) \times (0,h)$ for $0 < h < H$. Then
$$
\|v\|_{L^2(R_h)}^2 \lesssim \frac{h}{H} \|v\|_{L^2(R)}^2 + h\,(h+H)\, |v|_{H^1(R)}^2 \quad \forall v \in H^1(R)
\tag{C.43}
$$
and
$$
\|v\|_{L^2(R_h)}^2 \lesssim h \Bigl( 1 + \log\frac{H}{h} \Bigr) \|v\|_{H_w^{1/2}(R)}^2 \quad \forall v \in H^{1/2}(R).
\tag{C.44}
$$
The constants are independent of $v$, $H$, and $h$.
Proof. The lemma is proved in [72, Lemma 3.1 and Lemma 3.4]. We include the proof here for completeness. First we prove (C.43). For any $v \in H^1(R)$, since
$$
v(x,y) = v(x,0) + \int_0^y \frac{\partial v(x,s)}{\partial s}\, ds,
\tag{C.45}
$$
we have, by using the Cauchy–Schwarz inequality and by integrating over $R_h$,
$$
\|v\|_{L^2(R_h)}^2 \le 2h \int_0^H |v(x,0)|^2\, dx + h^2 |v|_{H^1(R)}^2 .
\tag{C.46}
$$
On the other hand, it follows from (C.45) that
$$
|v(x,0)|^2 \le 2\,|v(x,y)|^2 + 2y \int_0^H \Bigl| \frac{\partial v(x,s)}{\partial s} \Bigr|^2 ds,
$$
so that, after integrating over $R$ and dividing by $H$,
$$
\int_0^H |v(x,0)|^2\, dx \le \frac{2}{H} \|v\|_{L^2(R)}^2 + H\, |v|_{H^1(R)}^2 .
$$
This inequality and (C.46) imply (C.43).

Next we prove (C.44). Consider a mesh of size $h$ on $R$ and let $V_h$ be the finite element space consisting of continuous piecewise linear functions defined on this mesh. Let $Q_h : L^2(R) \to V_h$ be the $L^2$-projection onto $V_h$, namely, for any $v \in L^2(R)$ we define $Q_h v$ by
$$
\langle Q_h v, w\rangle_{L^2(R)} = \langle v, w\rangle_{L^2(R)} \quad \forall w \in V_h .
$$
Let $\widehat R := (0,1) \times (0,1)$. For any function $u$ defined on $R$, if $\widehat u(\widehat x) = u(H\widehat x)$, then it can be easily checked that $\widehat{Q_h v}$ is the $L^2$-projection of $\widehat v$ onto $\widehat V_{\widehat h}$, i.e., $\widehat{Q_h v} = \widehat Q_{\widehat h} \widehat v$. It is well known that $Q_h$ is bounded in $L^2(R)$ and in $H^1(R)$; see e.g. [41]. Hence, by interpolation, it is bounded in $H_I^{1/2}(\widehat R)$, i.e., $\|\widehat Q_{\widehat h} \widehat v\|_{H_I^{1/2}(\widehat R)} \lesssim \|\widehat v\|_{H_I^{1/2}(\widehat R)}$. Hence, it follows from Lemma A.5 that
$$
\|Q_h v\|_{H_w^{1/2}(R)}^2 \le H \|\widehat Q_{\widehat h} \widehat v\|_{H_I^{1/2}(\widehat R)}^2
\lesssim H \|\widehat v\|_{H_I^{1/2}(\widehat R)}^2 \le \|v\|_{H_w^{1/2}(R)}^2 .
\tag{C.47}
$$
On the other hand, a standard argument yields
$$
\|v - Q_h v\|_{L^2(R)}^2 \lesssim h\, |v|_{H^{1/2}(R)}^2 .
\tag{C.48}
$$
Therefore,
$$
\|v\|_{L^2(R_h)}^2 \lesssim \|Q_h v\|_{L^2(R_h)}^2 + \|v - Q_h v\|_{L^2(R_h)}^2 \lesssim \|Q_h v\|_{L^2(R_h)}^2 + h\, |v|_{H^{1/2}(R)}^2 .
$$
Since $Q_h v \in H^1(R)$, we can invoke (C.46) (with $v$ replaced by $Q_h v$) to obtain
$$
\|v\|_{L^2(R_h)}^2 \lesssim h \int_0^H |Q_h v(x,0)|^2\, dx + h^2 |Q_h v|_{H^1(R)}^2 + h\, |v|_{H^{1/2}(R)}^2
\lesssim h \int_0^H |Q_h v(x,0)|^2\, dx + h\, |Q_h v|_{H^{1/2}(R)}^2 + h\, |v|_{H^{1/2}(R)}^2,
$$
where in the last step we used the inverse inequality. This inequality and (C.47) yield
$$
\|v\|_{L^2(R_h)}^2 \lesssim h \int_0^H |Q_h v(x,0)|^2\, dx + h\, \|v\|_{H_w^{1/2}(R)}^2 .
$$
The first term on the right-hand side can be bounded by invoking Lemma C.24 and (C.47), completing the proof of (C.44). The proof for the case when $R = (0,H)$ and $R_h = (0,h)$ is completely analogous and is omitted.
C.4.2.3 The p-version and hp-version in one dimension

We now present corresponding results for the p-version. The results in the following theorem are proved in [18]. We note that on a reference element $\widehat I$ the weight in the definition of $\|\cdot\|_{H_w^{1/2}(\widehat I)}$ can be chosen to be 1, and thus $\|\cdot\|_{H_w^{1/2}(\widehat I)} = \|\cdot\|_{H_S^{1/2}(\widehat I)}$. Note the analogy between (C.36), (C.38), (C.39) and (C.49), (C.50), (C.51).

Theorem C.1. Let $P_p(\widehat I)$ be the space of polynomials of degree $p$ on the interval $\widehat I = (-1,1)$. Then for any $v \in P_p(\widehat I)$
$$
\|v\|_{L^\infty(\widehat I)} \lesssim (1 + \log p)^{1/2} \|v\|_{H_S^{1/2}(\widehat I)} .
\tag{C.49}
$$
Moreover, if $v(\pm 1) = 0$ then
$$
\|v\|_{H_w^{1/2}(\widehat I)} \lesssim |v|_{H^{1/2}(\widehat I)} + (1 + \log p)^{1/2} \|v\|_{L^\infty(\widehat I)}
\tag{C.50}
$$
and
$$
\|v\|_{H_w^{1/2}(\widehat I)} \lesssim (1 + \log p)\, |v|_{H^{1/2}(\widehat I)} .
\tag{C.51}
$$
In each case the constant is independent of $v$ and $p$. Here we recall that
$$
\|v\|_{H_S^{1/2}(\widehat I)}^2 = \|v\|_{L^2(\widehat I)}^2 + \int_{\widehat I\times\widehat I} \frac{|v(x)-v(y)|^2}{|x-y|^2}\, dx\, dy
$$
and
$$
\|v\|_{H_w^{1/2}(\widehat I)}^2 = \int_{\widehat I\times\widehat I} \frac{|v(x)-v(y)|^2}{|x-y|^2}\, dx\, dy + \int_{\widehat I} \frac{|v(x)|^2}{\operatorname{dist}(x,\partial\widehat I)}\, dx ;
$$
see Subsection A.2.6.1.
Proof. These results are proved in Theorem 6.2, Theorem 6.5, and Theorem 6.6 in [18].

The following result, which will be useful in our analysis for the p-version, slightly extends (C.49) and (C.51) in the above theorem. (Compare this result with Lemma C.21.)

Lemma C.26. Let $I$ be an interval in $\mathbb{R}$. If $v$ is a polynomial of degree at most $p$ in $I$, then
$$
\|v\|_{L^\infty(I)} \lesssim (1 + \log p)^{1/2} \|v\|_{H_w^{1/2}(I)} .
\tag{C.52}
$$
Assume further that $v$ vanishes at the endpoints of $I$. Then
$$
\|v\|_{L^\infty(I)} \lesssim (1 + \log p)^{1/2} |v|_{H^{1/2}(I)}
\tag{C.53}
$$
and
$$
\|v\|_{H_w^{1/2}(I)} \lesssim (1 + \log p)\, |v|_{H^{1/2}(I)} .
\tag{C.54}
$$
The constants are independent of $v$, $p$, and of the length of $I$.

Proof. Let $\widehat v$ be the scaled version of $v$ when $I$ is rescaled into $\widehat I = (-1,1)$, i.e., $\widehat v(\widehat x) = v(x)$ for all $x \in I$, where $\widehat x$ is the corresponding point on $\widehat I$ after the rescaling. Then $\|v\|_{L^\infty(I)} = \|\widehat v\|_{L^\infty(\widehat I)}$ and $\|v\|_{H_w^{1/2}(I)} \simeq \|\widehat v\|_{H_w^{1/2}(\widehat I)}$; see Lemma A.5 (i). Hence (C.52) is a consequence of (C.49). For any $\alpha \in \mathbb{R}$,
$$
\|v\|_{L^\infty(I)} = \|\widehat v\|_{L^\infty(\widehat I)} \le \|\widehat v + \alpha\|_{L^\infty(\widehat I)} + |\alpha|
= \|\widehat v + \alpha\|_{L^\infty(\widehat I)} + |\widehat v(1) + \alpha| \le 2\,\|\widehat v + \alpha\|_{L^\infty(\widehat I)} .
$$
It follows from (C.49) that
$$
\|v\|_{L^\infty(I)} \lesssim (1 + \log p)^{1/2} \|\widehat v + \alpha\|_{H_S^{1/2}(\widehat I)} .
$$
Following along the lines of [60, Theorem 3.1.1] one can prove
$$
\inf_{\alpha\in\mathbb{R}} \|\widehat v + \alpha\|_{H_S^{1/2}(\widehat I)} = |\widehat v|_{H^{1/2}(\widehat I)} .
$$
The above results and Lemma A.4 give (C.53). To prove (C.54), we invoke successively Lemma A.5, Theorem C.1, and Lemma A.4 to obtain
$$
\|v\|_{H_w^{1/2}(I)} \simeq \|\widehat v\|_{H_w^{1/2}(\widehat I)}
\lesssim |\widehat v|_{H^{1/2}(\widehat I)} + (1 + \log p)^{1/2} \|\widehat v\|_{L^\infty(\widehat I)}
= |v|_{H^{1/2}(I)} + (1 + \log p)^{1/2} \|v\|_{L^\infty(I)} .
$$
Inequality (C.54) now follows from (C.53).

Combining Lemma C.19 and Lemma C.26 we obtain the following result.
Lemma C.27. Let $I$ be an interval of length $H$. Assume that $I$ is partitioned into subintervals of size $h$ such that the mesh is quasi-uniform. Let $v$ be a continuous function defined on $I$ such that its restriction on each subinterval is a polynomial of degree $p$. Then
$$
\|v\|_{L^\infty(I)}^2 \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr) (1 + \log p)\, \|v\|_{H_w^{1/2}(I)}^2 .
$$
The constant is independent of $H$, $h$, $p$, and $v$.
Proof. First we note that there exists a subinterval $J$ (of $I$) of length $h$ such that
$$
\|v\|_{L^\infty(I)} = \|v\|_{L^\infty(J)} = \|\widehat v\|_{L^\infty(\widehat J)},
$$
where $\widehat v$ is the affine image of $v$ onto the reference interval $\widehat J$ of length 1. Since $v$ is a polynomial of degree $p$ on $J$, so is $\widehat v$, and thus Theorem C.1 and Lemma A.5 give
$$
\|v\|_{L^\infty(I)}^2 \lesssim (1 + \log p)\, \|\widehat v\|_{H_w^{1/2}(\widehat J)}^2 \simeq (1 + \log p)\, \|v\|_{H_w^{1/2}(J)}^2 .
$$
It follows from the definition of the $\|\cdot\|_{H_w^{1/2}(J)}$-norm and (C.37) that
$$
\|v\|_{H_w^{1/2}(J)}^2 = \frac{1}{h} \|v\|_{L^2(J)}^2 + |v|_{H^{1/2}(J)}^2 \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr) \|v\|_{H_w^{1/2}(I)}^2 .
$$
Altogether, we have the desired result.
C.4.2.4 The p-version and hp-version in two dimensions

A two-dimensional analogue of Lemma C.26 is the following lemma.

Lemma C.28. Let $\widehat T$ be the reference triangle. Then
$$
\|v\|_{\widetilde H_I^{1/2}(\widehat T)} \lesssim (1 + \log p)\, \|v\|_{H_I^{1/2}(\widehat T)} \quad \forall v \in P_p(\widehat T).
$$
The constant is independent of $v$ and $p$.

Proof. See [108, Lemma 6].
C.4.2.5 Functions of zero mean

Functions of zero mean are frequently considered in the study of the weakly singular integral equation; see Subsection A.2.12. The following property is elementary.

Lemma C.29. For any function $v \in L^2(D)$, $D \subset \mathbb{R}^d$, $d \ge 1$, with finite measure $|D|$, let $\mu_D(v)$ denote the integral mean of $v$ defined by
$$
\mu_D(v) := \frac{1}{|D|} \int_D v(x)\, dx .
$$
Then
$$
\|v - \mu_D(v)\|_{L^2(D)} = \min_{c\in\mathbb{R}} \|v - c\|_{L^2(D)} .
$$

Proof. Defining $f(c) := \|v - c\|_{L^2(D)}^2$, we can write $f$ as a quadratic function of $c$:
$$
f(c) = |D|\, c^2 - 2|D|\, \mu_D(v)\, c + \|v\|_{L^2(D)}^2 .
$$
The result follows by finding the minimum value of this quadratic function.
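A discrete version of this minimizing property is immediate to verify; the short sketch below (our own illustration, with a uniform quadrature standing in for the integral over $D$) checks that the sample mean minimizes $c \mapsto \|v-c\|_{L^2(D)}^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(1000)          # samples of v on D (uniform quadrature weights)
mu = v.mean()                          # discrete integral mean mu_D(v)
best = np.sum((v - mu) ** 2)           # f(mu) = ||v - mu||^2 (up to the weight h)
for c in np.linspace(mu - 2.0, mu + 2.0, 401):
    assert np.sum((v - c) ** 2) >= best - 1e-9   # f(c) >= f(mu) for every c
```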
Lemma C.30. Let $\widehat I$ be one side of the reference triangle $\widehat T$. Then for any polynomial $v$ of degree $p$ on $\widehat T$ the following inequality holds:
$$
\|v\|_{L^2(\widehat I)}^2 \lesssim (1 + \log p)\, \|v\|_{H_S^{1/2}(\widehat T)}^2 .
\tag{C.55}
$$
Moreover,
$$
\|v - \mu_{\partial\widehat T}(v)\|_{L^2(\partial\widehat T)}^2 \lesssim (1 + \log p)\, |v|_{H^{1/2}(\widehat T)}^2
\tag{C.56}
$$
where $\mu_{\partial\widehat T}(v)$ is defined in Lemma C.29. The constants are independent of $v$ and $p$.
Proof. Without loss of generality, we can assume that $\widehat I = (0,1) \times \{0\}$ and that the reference triangle $\widehat T$ has vertices $(0,0)$, $(1,0)$, and $(0,1)$. Let $\widehat Q$ denote the reference square $(0,1) \times (0,1)$. First, assume that for any polynomial $v$ of degree $p$ on $\widehat Q$ the following estimate holds:
$$
\|v\|_{L^2(\widehat I)}^2 \lesssim (1 + \log p)\, \|v\|_{H_S^{1/2}(\widehat Q)}^2 .
\tag{C.57}
$$
Now for any $v \in P_p(\widehat T)$ we extend $v$ from $\widehat T$ onto $\widehat Q$ by reflecting it about $\widehat I_2$; see Figure 11.1. The reflected function on the reflected triangle $\widetilde T$ is denoted by $\widetilde v$. By symmetry the coupling term between $v$ and $\widetilde v$ in the $H^{1/2}$-seminorm can be shown to satisfy
$$
\int_{\widetilde T} \int_{\widehat T} \frac{|v(x) - \widetilde v(\widetilde y)|^2}{|x - \widetilde y|^3}\, dx\, d\widetilde y
\lesssim \int_{\widehat T} \int_{\widehat T} \frac{|v(x) - v(y)|^2}{|x - y|^3}\, dx\, dy = |v|_{H^{1/2}(\widehat T)}^2 .
$$
Therefore we deduce from (C.57) that
$$
\|v\|_{L^2(\widehat I)}^2 \lesssim (1 + \log p) \bigl( \|v\|_{L^2(\widehat T)}^2 + \|\widetilde v\|_{L^2(\widetilde T)}^2 + |v|_{H^{1/2}(\widehat T)}^2 + |\widetilde v|_{H^{1/2}(\widetilde T)}^2 \bigr)
\lesssim (1 + \log p)\, \|v\|_{H_S^{1/2}(\widehat T)}^2 ,
$$
proving (C.55). To prove (C.56) we use the minimizing property of $\mu_{\partial\widehat T}(v)$ shown in Lemma C.29, inequality (C.55), and a quotient space argument as follows:
$$
\|v - \mu_{\partial\widehat T}(v)\|_{L^2(\partial\widehat T)}^2
= \inf_{c\in\mathbb{R}} \|v - c\|_{L^2(\partial\widehat T)}^2
= \inf_{c\in\mathbb{R}} \sum_{i=1}^{3} \|v - c\|_{L^2(\widehat I_i)}^2
\lesssim (1 + \log p) \inf_{c\in\mathbb{R}} \|v - c\|_{H_S^{1/2}(\widehat T)}^2
= (1 + \log p)\, |v|_{H^{1/2}(\widehat T)}^2 .
$$
It remains to prove (C.57). For $v \in P_p(\widehat Q)$ we have
$$
\|v\|_{L^2(\widehat I)}^2 = \int_0^1 |v(x,0)|^2\, dx \le \int_0^1 \|v(x,\cdot)\|_{L^\infty(0,1)}^2\, dx
\lesssim (1 + \log p) \int_0^1 \|v(x,\cdot)\|_{H_S^{1/2}(0,1)}^2\, dx,
$$
where in the last step we used Theorem C.1. Invoking Lemma A.16 (A.96) we deduce
$$
\|v\|_{L^2(\widehat I)}^2 \lesssim (1 + \log p) \int_0^1 \|v(x,\cdot)\|_{H_w^{1/2}(0,1)}^2\, dx \lesssim (1 + \log p)\, \|v\|_{H_S^{1/2}(\widehat Q)}^2 .
$$
This proves (C.57) and completes the proof of the lemma.

Lemma C.31. Let $T_h$ be a triangle of diameter $h$ and let $\widehat T$ be the reference triangle. Then
$$
\mu_{\partial T_h}^2(v) \lesssim h^{-1} \|v\|_{L^2(\partial T_h)}^2 \quad \forall v \in H^{1/2}(T_h)
$$
and
$$
\mu_{\partial\widehat T}^2(v) \lesssim (1 + \log p)\, \|v\|_{H_S^{1/2}(\widehat T)}^2 \quad \forall v \in P_p(\widehat T).
$$
The constants are independent of $v$, $h$, and $p$. Here the integral means $\mu_{\partial T_h}(v)$ and $\mu_{\partial\widehat T}(v)$ are defined as in Lemma C.29.
Proof. For any $v \in H^{1/2}(T_h)$, Hölder's inequality gives
$$
\mu_{\partial T_h}^2(v) \le \frac{1}{|\partial T_h|^2} \int_{\partial T_h} ds \int_{\partial T_h} |v(s)|^2\, ds \lesssim h^{-1} \|v\|_{L^2(\partial T_h)}^2 ,
$$
which is the first assertion of the lemma. This result applied to the reference triangle $\widehat T$ and Lemma C.30 give the second assertion.
C.4.3 Polynomials of low energies

The following space introduced in [169] is useful in the definitions of basis functions in Chapter 11.

Definition C.1. Let $P_p(\widehat I)$ be the space of polynomials of degree at most $p$ in $\widehat I := (0,1)$. The low energy polynomials $\varphi_0, \varphi_0^- \in P_p(\widehat I)$ are defined by
$$
\varphi_0 = \arg\min \bigl\{ \|\varphi\|_{L^2(\widehat I)} : \varphi \in P_p(\widehat I),\ \varphi(0) = 1,\ \varphi(1) = 0 \bigr\},
$$
$$
\varphi_0^- = \arg\min \bigl\{ \|\varphi\|_{L^2(\widehat I)} : \varphi \in P_p(\widehat I),\ \varphi(0) = 0,\ \varphi(1) = 1 \bigr\}.
$$
We note that $\varphi_0^-(x) = \varphi_0(1-x)$. For illustration, see Figure C.1 for the plot of $\varphi_0$ with $p = 10$. The following results are proved in [169, Lemmas 4.1–4.2].

Lemma C.32. Let $\varphi_0$ and $\varphi_0^-$ be defined as in Definition C.1. Then
$$
\varphi_0(x) = \sum_{k=0}^{p} \frac{r_k}{2}\, L_k(1-2x), \quad x \in [0,1],
$$
where $L_k$ is the Legendre polynomial of degree $k$ and
$$
r_k = \begin{cases} (2k+1)\, r_0 & \text{if } k \text{ is even}, \\ (2k+1)\, r_1/3 & \text{if } k \text{ is odd}, \end{cases}
$$
with
$$
r_0 = \frac{2}{(p+1)(p+2)} \quad\text{and}\quad \frac{r_1}{3} = \frac{2}{p(p+1)} .
$$
[Fig. C.1 Low energy function $\varphi_0$ with $p = 10$.]
Moreover,
$$
\|\varphi_0\|_{L^2(0,1)}^2 = \frac{1}{p(p+1)}
\quad\text{and}\quad
\langle \varphi_0, \varphi_0^- \rangle_{L^2(0,1)} = \frac{(-1)^{p+1}}{2(p+1)}\, \|\varphi_0\|_{L^2(0,1)}^2 ,
\tag{C.58}
$$
and
$$
\langle \varphi_0, v \rangle_{L^2(0,1)} = \langle \varphi_0^-, v \rangle_{L^2(0,1)} = 0 \quad \forall v \in P_p^0(\widehat I)
\tag{C.59}
$$
where $P_p^0(\widehat I)$ is the space of polynomials in $P_p(\widehat I)$ which vanish at the endpoints 0 and 1.

Proof. See [169, Lemma 4.1].
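The defining minimization is small enough to solve numerically. The sketch below (our own illustration; all names are hypothetical) constructs $\varphi_0$ directly from the constrained least-squares problem of Definition C.1 in the shifted Legendre basis, rather than from the closed-form series, and then checks the endpoint conditions, the orthogonality (C.59), and the leading series coefficient $a_0 = r_0/2 = 1/((p+1)(p+2))$ of Lemma C.32.

```python
import numpy as np
from numpy.polynomial import legendre as L

def low_energy_phi0(p):
    # phi(x) = sum_k a_k L_k(1 - 2x), so ||phi||_{L^2(0,1)}^2 = sum_k a_k^2/(2k+1):
    # a diagonal Gram matrix G, with two linear endpoint constraints C a = (1, 0).
    k = np.arange(p + 1)
    G = np.diag(1.0 / (2 * k + 1))
    C = np.vstack([np.ones(p + 1),          # phi(0) = sum_k a_k        = 1
                   (-1.0) ** k])            # phi(1) = sum_k (-1)^k a_k = 0
    KKT = np.block([[2 * G, C.T], [C, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(p + 1), [1.0, 0.0]])
    return np.linalg.solve(KKT, rhs)[: p + 1]

p = 10
a = low_energy_phi0(p)
phi0 = lambda x: L.legval(1.0 - 2.0 * np.asarray(x), a)

assert abs(phi0(0.0) - 1.0) < 1e-10 and abs(phi0(1.0)) < 1e-10
assert abs(a[0] - 1.0 / ((p + 1) * (p + 2))) < 1e-12   # a_0 = r_0/2
# (C.59): phi0 is L^2(0,1)-orthogonal to all polynomials vanishing at 0 and 1
t, w = np.polynomial.legendre.leggauss(2 * p)
x, w = 0.5 * (t + 1.0), 0.5 * w                        # Gauss rule on (0, 1)
for j in range(p - 1):
    assert abs(np.sum(w * phi0(x) * x * (1.0 - x) * x**j)) < 1e-10
```

The orthogonality (C.59) is exactly the Lagrange condition of the minimization: perturbing $\varphi_0$ by any interior polynomial cannot decrease the $L^2$ norm.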
C.4.4 Discrete harmonic functions and the discrete harmonic extension

In this subsection we explain the concept of discrete harmonic functions and introduce the discrete harmonic extension; see e.g. [107]. Let $\widehat\Omega$ be the reference tetrahedron in $\mathbb{R}^3$, let $P_p(\widehat\Omega)$ be the space of polynomials of degree at most $p$ in three variables, and let $P_p^0(\widehat\Omega)$ be the space of polynomials in $P_p(\widehat\Omega)$ which vanish on the boundary $\partial\widehat\Omega$ of $\widehat\Omega$. The space of discrete harmonic functions $V_p$ is defined by
$$
V_p := \bigl\{ v \in P_p(\widehat\Omega) : a(v,w) = 0 \ \forall w \in P_p^0(\widehat\Omega) \bigr\}
$$
where
$$
a(v,w) := \int_{\widehat\Omega} \nabla v(x) \cdot \nabla w(x)\, dx .
$$
If $f$ is a polynomial of degree $p$ defined on $\partial\widehat\Omega$, then the discrete harmonic extension of $f$ is the function $u \in V_p$ whose restriction on $\partial\widehat\Omega$ is $f$. A discrete harmonic function fulfils the minimizing property stated in the following lemma.

Lemma C.33. If $u \in V_p$ then for any $v \in P_p(\widehat\Omega)$ satisfying $v = u$ on $\partial\widehat\Omega$ the following inequality holds:
$$
|u|_{H^1(\widehat\Omega)} \le |v|_{H^1(\widehat\Omega)} .
$$

Proof. Let $w := u - v$. Then $w \in P_p^0(\widehat\Omega)$ and therefore $a(u,w) = 0$. Consequently,
$$
|v|_{H^1(\widehat\Omega)}^2 = a(v,v) = a(u-w, u-w) = a(u,u) + a(w,w) \ge a(u,u) = |u|_{H^1(\widehat\Omega)}^2 .
$$
The constant is independent of u and p.
. By [196, Theorem 1], Proof. Let v be the Mu˜noz–Sola extension of u|∂ Ω f onto Ω vH 1 (Ω ) uH 1/2 (∂ Ω ) ,
which implies, due to Lemma C.33,
|u|H 1 (Ω ) uH 1/2 (∂ Ω ) .
(C.60)
On the other hand, the triangle inequality and Poincar´e–Friedrich’s inequality give uL2 (Ω ) ≤ u − vL2 (Ω ) + vL2 (Ω ) |u − v|H 1 (Ω ) + vL2 (Ω ) ≤ |u|H 1 (Ω ) + vH 1 (Ω ) uH 1/2 (∂ Ω ) .
The required result follows from (C.60) and (C.61).
(C.61)
544
C Some Additional Results
C.5 Some Useful Projections or Projection-like Operators

In this section we present some results on the different operators which have been used in the book.
C.5.1 The $L^2$-projection

The following result on the $L^2$-projection is used in Chapter 13. Let $\mathcal{T}_0$ be an initial triangulation on $\Gamma \subset \mathbb{R}^2$ with mesh size $h_0$ being the maximum diameter of triangles in $\mathcal{T}_0$. We consider a sequence of uniform triangulations $\{\mathcal{T}_j\}_{j\ge 0}$ with mesh size $h_j = 2^{-j} h_0$. On each mesh $\mathcal{T}_j$ let $V_j$ be the boundary element space of continuous piecewise linear functions vanishing on the boundary of $\Gamma$. These spaces form a nested sequence $V_0 \subset V_1 \subset \cdots \subset V_j \subset \cdots$.

Lemma C.35. Let $Q_j : L^2(\Gamma) \to V_j$ be the $L^2$-orthogonal projection onto $V_j$ defined by
$$
\langle Q_j v, w\rangle_{L^2(\Gamma)} = \langle v, w\rangle_{L^2(\Gamma)} \quad \forall v \in L^2(\Gamma),\ \forall w \in V_j .
$$
Then for all $v \in \widetilde H^{1/2}(\Gamma)$ the following estimate holds:
$$
\sum_{j=0}^{\infty} h_j^{-1} \|v - Q_j v\|_{L^2(\Gamma)}^2 \lesssim \|v\|_{H_I^{1/2}(\Gamma)}^2 .
$$
The constant depends only on $\Gamma$ and the initial triangulation $\mathcal{T}_0$.
lim v − Qk vL2 (Γ ) = 0
∀v ∈ L2 (Γ ).
k→∞
Next we note that Q j − Q j−1 maps L2 (Γ ) into V j for all j = 1, 2, . . .. Consequently, if i < j then Vi ⊆ Vj−1 ⊂ Vj and thus it follows from the definition of Q j that, for any v ∈ L2 (Γ ), Q j − Q j−1 v, Qi − Qi−1 v L2 (Γ ) = Q j v − v, Qi − Qi−1 v L2 (Γ ) − Q j−1 v − v, Qi − Qi−1 v L2 (Γ ) = 0.
This orthogonality and the limiting property above imply ∞
∑
i= j+1
' Qi − Qi−1 v2L2 (Γ ) = lim ' J→∞
J
∑
i= j+1
'2 Qi − Qi−1 v'L2 (Γ )
C.5 Some Useful Projections or Projection-like Operators
545
= lim QJ v − Q j v2L2 (Γ ) J→∞ = lim QJ v − v + v − Q j v 2L2 (Γ ) J→∞
= v − Q j v2L2 (Γ ) .
By using the above identity and by interchanging the order of summation we deduce ∞
∞
∞
j=0
j=0
i= j+1
2 −1 ∑ ∑ h−1 j v − Q j vL2 (Γ ) = ∑ h j ∞
Qi − Qi−1 v2L2 (Γ )
i−1 h−1 = ∑ Qi − Qi−1 v2L2 (Γ ) ∑ j . i=1
j=0
The second sum on the right-hand side satisfies
so that
i−1
i−1
j=0
j=0
j −1 −1 i −1 ∑ h−1 j = h0 ∑ 2 < h0 2 = hi
∞
∞
j=0
i=1
2 −1 ∑ h−1 j v − Q j vL2 (Γ ) < ∑ hi
Qi − Qi−1 v2L2 (Γ )
(C.62)
On the other hand, Theorem 4 in [167] (see also [4, Theorem 5]) states that for s ∈ s (Γ ) [0, 1] and for all v ∈ H ∞ 2 h−2s v2H s (Γ ) Q0 v2H s (Γ ) + ∑ i Qi − Qi−1 vL2 (Γ ) I
I
(C.63)
i=1
)0 , and s. In particwhere the constants depend only on Γ , the initial triangulation T ular, for s = 1/2 we deduce from (C.62) and (C.63) ∞
2 2 ∑ h−1 s (Γ ) , j v − Q j vL2 (Γ ) vH I
j=0
proving the lemma.
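The telescoping argument above uses only that the spaces are nested and that each $Q_j$ is an $L^2$-orthogonal projection. The sketch below illustrates it with an intentionally simplified setting (our own choice, not the piecewise-linear spaces $V_j$): piecewise-constant spaces on dyadic meshes of $(0,1)$, where the projection $Q_j$ is plain block averaging. It checks that the increments $(Q_j - Q_{j-1})v$ are pairwise orthogonal and that they telescope back to the finest-level projection.

```python
import numpy as np

levels = 8
n = 2 ** levels
xm = (np.arange(n) + 0.5) / n
v = np.sin(np.pi * xm) * xm            # v, represented by its fine-cell values

def Q(j):
    # L^2-projection onto piecewise constants on 2^j equal cells: block average
    m = 2 ** j
    avg = v.reshape(m, n // m).mean(axis=1)
    return np.repeat(avg, n // m)

inc = [Q(0)] + [Q(j) - Q(j - 1) for j in range(1, levels + 1)]
h = 1.0 / n
dot = lambda f, g: h * np.dot(f, g)    # exact L^2 inner product for these functions
for i in range(len(inc)):
    for j in range(i + 1, len(inc)):
        assert abs(dot(inc[i], inc[j])) < 1e-12   # increments are L^2-orthogonal
assert np.allclose(sum(inc), v)        # telescoping sum recovers Q_L v = v
```

Because of this orthogonality, $\|v - Q_j v\|^2$ equals the tail sum $\sum_{i>j}\|(Q_i-Q_{i-1})v\|^2$, which is exactly the identity used before (C.62).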
C.5.2 The standard interpolation operator

In this subsection we provide some properties of the standard interpolation operator. We note that, on the one hand, this operator is a projection, a property which is useful in some proofs. On the other hand, it is not bounded as an operator on $H^1(\Omega)$; in fact, the operator is not even well-defined on $H^1(\Omega)$. The following lemma is recalled here for convenience.

Lemma C.36. Assume that $\mathcal{T}_h$ and $\mathcal{T}_H$ are two shape-regular (not necessarily nested) meshes in a domain $\Omega$ in $\mathbb{R}^d$, $d = 1, 2, 3$. Assume that the mesh sizes $h$ of $\mathcal{T}_h$ and $H$ of $\mathcal{T}_H$ satisfy $0 < h < H$. Let $V_h$ and $V_H$ be two corresponding finite element spaces consisting of continuous piecewise polynomials defined on $\Omega$ with respect to the two meshes. Let $\Pi_h : C(\Omega) \to V_h$ be the standard interpolation operator. Then
$$
h^{-1} \|\Pi_h v - v\|_{L^2(\Omega)} + |\Pi_h v|_{H^1(\Omega)} \lesssim |v|_{H^1(\Omega)} \quad \forall v \in V_H .
$$
Consequently,
$$
\|\Pi_h v\|_{H_I^{1/2}(\Omega)} \lesssim \|v\|_{H_I^{1/2}(\Omega)} \quad \forall v \in V_H
$$
and
$$
\|\Pi_h v\|_{\widetilde H_I^{1/2}(\Omega)} \lesssim \|v\|_{\widetilde H_I^{1/2}(\Omega)} \quad \forall v \in V_H \cap \widetilde H^{1/2}(\Omega).
$$
The constants are independent of $v$, $h$, and $H$.

Proof. We note that since the meshes $\mathcal{T}_h$ and $\mathcal{T}_H$ are not necessarily nested, in general $V_H \not\subseteq V_h$. For alternative proofs of the first estimate, the reader is referred to [45, Lemma 2.1] and [56, Lemma 1]. By using the triangle inequality and the inverse property, it is also clear from the first estimate that
$$
\|\Pi_h v\|_{L^2(\Omega)} \lesssim \|v\|_{L^2(\Omega)} \quad \forall v \in V_H .
\tag{C.64}
$$
Hence, the next two estimates follow by interpolation.

The following lemma is proved in [229, Lemma 4.3 and Lemma 4.4].
Lemma C.37. Assume that $I$ is an interval of length $H$ and that there is a quasi-uniform mesh of size $h$ on $I$. Let $v$ be a continuous function on $I$ and let $\Pi_H v$ be the affine function which interpolates $v$ at the endpoints of $I$.

(i) If $v$ is a piecewise polynomial of degree $p \ge 1$ on this mesh, then
$$
\|\Pi_H v\|_{H_w^{1/2}(I)}^2 \lesssim \Bigl( 1 + \log\frac{H}{h} \Bigr) (1 + \log p)\, \|v\|_{H_w^{1/2}(I)}^2 .
$$
(ii) If $v$ is a polynomial of degree $p \ge 1$ on $I$, then
$$
\|\Pi_H v\|_{H_w^{1/2}(I)}^2 \lesssim (1 + \log p)\, \|v\|_{H_w^{1/2}(I)}^2 .
$$
The constants are independent of $v$, $H$, $h$, and $p$.

Proof. First we recall the definition of the weighted norm defined in Section A.2.6.1:
$$
\|\Pi_H v\|_{H_w^{1/2}(I)}^2 = \frac{1}{H} \int_I |\Pi_H v(x)|^2\, dx + \int_{I\times I} \frac{|\Pi_H v(x) - \Pi_H v(y)|^2}{|x-y|^2}\, dx\, dy .
$$
Let $I = (a,b)$ so that $b - a = H$. Since $\Pi_H v$ is an affine function on $I$, we have for all $x, y \in I$
$$
\frac{|\Pi_H v(x) - \Pi_H v(y)|^2}{|x-y|^2} = \frac{|\Pi_H v(a) - \Pi_H v(b)|^2}{|a-b|^2} \lesssim \frac{\|\Pi_H v\|_{L^\infty(I)}^2}{H^2} .
$$
Hence
$$
\|\Pi_H v\|_{H_w^{1/2}(I)}^2 \lesssim \frac{\|\Pi_H v\|_{L^\infty(I)}^2}{H} \int_I dx + \frac{\|\Pi_H v\|_{L^\infty(I)}^2}{H^2} \int_{I\times I} dx\, dy
= 2\,\|\Pi_H v\|_{L^\infty(I)}^2 \le 2\,\|v\|_{L^\infty(I)}^2 .
$$
The first required result is now a consequence of Lemma C.27. The second required result is obtained by taking $H = h$.

For the next lemma we define the following two-level mesh on an interval $\Gamma$.

The coarse mesh: We first divide $\Gamma$ into disjoint subdomains $\Gamma_i$ of length $H_i$, $i = 1, \dots, J$, so that $\overline\Gamma = \cup_{i=1}^{J} \overline\Gamma_i$, and denote by $H$ the maximum value of $H_i$.

The fine mesh: Each $\Gamma_i$ is further divided into disjoint subintervals $\Gamma_{ij}$, $j = 1, \dots, N_i$, so that $\overline\Gamma_i = \cup_{j=1}^{N_i} \overline\Gamma_{ij}$. The maximum length of the subintervals $\Gamma_{ij}$ in $\Gamma_i$ is denoted by $h_i$, and the maximum value of $h_i$ is denoted by $h$. We require that the fine mesh is locally quasi-uniform, i.e., it is quasi-uniform in each subdomain $\Gamma_i$.
(i) If ΠH v is the piecewise linear function which interpolates v at the coarse mesh points and if wi := (v − ΠH v)|Γi , then ΠH v2 1/2 + H (Γ ) I
J
∑
i=1
wi 2 1/2 H (Γi )
(1 + log p) max
I
2
1≤i≤J
-
Hi 1 + log hi
.
v2 1/2 HI
(Γ )
.
(ii) If Πh v is the piecewise linear function which interpolates v at the fine mesh points and if wi j := (v − Πh v)|Γi j , then Πh v2 1/2 HI
J
(Γ )
+∑
Ni
∑ wi j 2HI1/2 (Γi j ) (1 + log p)2 v2HI1/2 (Γ ) .
i=1 j=1
The constants are independent of v, H, h, and p.
Proof. It suffices to prove the first statement and then set Hi = hi to obtain the second statement. For each i = 1, . . . , J, since ΠH v|Γi is the affine function which interpolates v at the endpoints of Γi , Lemma C.37 gives
C Some Additional Results
$$ \|\Pi_H v\|_{H_w^{1/2}(\Gamma_i)}^2 \lesssim (1+\log p)\Bigl(1+\log\frac{H_i}{h_i}\Bigr)\|v\|_{H_w^{1/2}(\Gamma_i)}^2, $$
which implies
$$ |\Pi_H v|_{H^{1/2}(\Gamma_i)}^2 \lesssim (1+\log p)\Bigl(1+\log\frac{H_i}{h_i}\Bigr)\|v\|_{H_w^{1/2}(\Gamma_i)}^2. $$
Consequently, for any $\alpha\in\mathbb R$ we have
$$ |\Pi_H v|_{H^{1/2}(\Gamma_i)}^2 = |\Pi_H v+\alpha|_{H^{1/2}(\Gamma_i)}^2 = |\Pi_H(v+\alpha)|_{H^{1/2}(\Gamma_i)}^2 \lesssim (1+\log p)\Bigl(1+\log\frac{H_i}{h_i}\Bigr)\|v+\alpha\|_{H_w^{1/2}(\Gamma_i)}^2. $$
Taking the infimum over $\alpha\in\mathbb R$ and using Lemma A.15 we deduce
$$ |\Pi_H v|_{H^{1/2}(\Gamma_i)}^2 \lesssim (1+\log p)\Bigl(1+\log\frac{H_i}{h_i}\Bigr)|v|_{H^{1/2}(\Gamma_i)}^2, $$
where the constant is independent of v, p, h, and H. Let $w := v-\Pi_H v$ and let $\widetilde w_i$ be the affine image of $w_i = w|_{\Gamma_i}$ on the reference element I when $\Gamma_i$ is mapped onto I. It follows from Lemma A.4, the triangle inequality, and the above inequality that
$$ |\widetilde w_i|_{H^{1/2}(I)}^2 = |w_i|_{H^{1/2}(\Gamma_i)}^2 \lesssim (1+\log p)\Bigl(1+\log\frac{H_i}{h_i}\Bigr)|v|_{H^{1/2}(\Gamma_i)}^2. $$
On the other hand, since w vanishes at the fine mesh points, which include the coarse mesh points, we have $w_i\in\widetilde H^{1/2}(\Gamma_i)$ and thus $\widetilde w_i\in\widetilde H^{1/2}(I)$. The equivalence of norms in $\widetilde H^{1/2}(I)$ (with constants independent of all the parameters), inequality (C.50), and the above inequality yield
$$ \|\widetilde w_i\|_{\widetilde H_I^{1/2}(I)}^2 \simeq \|\widetilde w_i\|_{\widetilde H_w^{1/2}(I)}^2 \lesssim |\widetilde w_i|_{H^{1/2}(I)}^2 + \|\widetilde w_i\|_{L^\infty(I)}^2 \lesssim (1+\log p)\Bigl(1+\log\frac{H_i}{h_i}\Bigr)|v|_{H^{1/2}(\Gamma_i)}^2 + \|\widetilde w_i\|_{L^\infty(I)}^2. $$
Note that, for any arbitrary point $x_i$ in $\Gamma_i$,
$$ \|\widetilde w_i\|_{L^\infty(I)}^2 = \|w\|_{L^\infty(\Gamma_i)}^2 \lesssim \|v-v(x_i)\|_{L^\infty(\Gamma_i)}^2 + \|\Pi_H(v-v(x_i))\|_{L^\infty(\Gamma_i)}^2 \lesssim \|v-v(x_i)\|_{L^\infty(\Gamma_i)}^2. $$
There exists $j\in\{1,\dots,N_i\}$ such that $\|v-v(x_i)\|_{L^\infty(\Gamma_i)}^2 = \|v-v(x_i)\|_{L^\infty(\Gamma_{ij})}^2$. Since $v-v(x_i)$ is a polynomial of degree at most p on $\Gamma_{ij}$, if we choose $x_i = x_{ij}$ to be a point in $\Gamma_{ij}$ at which $v-v(x_{ij})$ vanishes, then we can invoke Lemma C.26 to obtain
$$ \|\widetilde w_i\|_{L^\infty(I)}^2 \lesssim (1+\log p)\,|v-v(x_{ij})|_{H^{1/2}(\Gamma_{ij})}^2 = (1+\log p)\,|v|_{H^{1/2}(\Gamma_{ij})}^2 \le (1+\log p)\,|v|_{H^{1/2}(\Gamma_i)}^2. $$
Altogether, we obtain
$$ \|\widetilde w_i\|_{\widetilde H_I^{1/2}(I)}^2 \lesssim (1+\log p)^2\Bigl(1+\log\frac{H_i}{h_i}\Bigr)|v|_{H^{1/2}(\Gamma_i)}^2. $$
Lemma A.6 yields
$$ \sum_{i=1}^J \|w_i\|_{\widetilde H_I^{1/2}(\Gamma_i)}^2 \simeq \sum_{i=1}^J \|\widetilde w_i\|_{\widetilde H_I^{1/2}(I)}^2 \lesssim (1+\log p)^2 \max_{1\le i\le J}\Bigl(1+\log\frac{H_i}{h_i}\Bigr)\sum_{i=1}^J |v|_{H^{1/2}(\Gamma_i)}^2 $$
$$ \le (1+\log p)^2 \max_{1\le i\le J}\Bigl(1+\log\frac{H_i}{h_i}\Bigr)|v|_{H^{1/2}(\Gamma)}^2 \le (1+\log p)^2 \max_{1\le i\le J}\Bigl(1+\log\frac{H_i}{h_i}\Bigr)\|v\|_{H_S^{1/2}(\Gamma)}^2 \lesssim (1+\log p)^2 \max_{1\le i\le J}\Bigl(1+\log\frac{H_i}{h_i}\Bigr)\|v\|_{\widetilde H_I^{1/2}(\Gamma)}^2, $$
proving one part of the required inequality. Theorem A.10 then gives
$$ \|w\|_{\widetilde H_I^{1/2}(\Gamma)}^2 \lesssim (1+\log p)^2 \max_{1\le i\le J}\Bigl(1+\log\frac{H_i}{h_i}\Bigr)\|v\|_{\widetilde H_I^{1/2}(\Gamma)}^2. $$
The remaining part of the required inequality follows from the triangle inequality.

The following result is first proved in [228, Lemma 3.5].
Lemma C.39. Let Γ be a bounded interval in $\mathbb R$ which is uniformly partitioned by points $x_j$, $j = 0,\dots,N$, into subintervals $\Gamma_j = (x_{j-1},x_j)$, $j = 1,\dots,N$, of length h. We define
$$ V := \bigl\{ u\in C(\overline\Gamma) : u|_{\Gamma_j}\in\mathcal P_p,\ j = 1,\dots,N \bigr\}, $$
where $\mathcal P_p$ denotes the space of polynomials of degree at most p. For any $u\in V$, if $\Pi_h u$ is the piecewise linear function which interpolates u at the mesh points $x_j$, then
$$ \sum_{j=1}^N |\Pi_h u|_{H^{1/2}(\Gamma_j)}^2 \lesssim (1+\log p)\sum_{j=1}^N |u|_{H^{1/2}(\Gamma_j)}^2. $$
The constant is independent of u, p, and the mesh size h.

Proof. We denote by $u_0$ the interpolant $\Pi_h u$ and generate from Γ a rectangular region $\Omega = \Gamma\times(0,h)$, which is partitioned into sub-rectangular regions
$$ \Omega_j = \Gamma_j\times(0,h),\quad j = 1,\dots,N; $$
see Figure C.2 for $N = 4$.

Fig. C.2 Generating Ω from $\Gamma = \cup_{i=1}^4\Gamma_i$

By [18, Theorem 7.8], there exist $U_j\in\mathcal P_p(\Omega_j)$ such that
$$ U_j = \begin{cases} u & \text{on } \Gamma_j\times\{0\},\\ 0 & \text{on } \Gamma_j\times\{h\}. \end{cases} $$
Let $U_{0,j}$ be the bilinear interpolant of $U_j$ at the vertices of $\Omega_j$ and let
$$ U_0 := \sum_{j=1}^N U_{0,j}. $$
Then $U_0|_\Gamma = u_0$. Indeed, both $U_0|_\Gamma$ and $u_0$ are piecewise linear functions on Γ which take the same values $u(x_i)$ at the nodal points $x_i$. Using the trace theorem and Lemma A.6, we deduce
$$ |u_0|_{H^{1/2}(\Gamma_j)} \lesssim \|U_0\|_{H_I^1(\Omega_j)} = \|U_{0,j}\|_{H_I^1(\Omega_j)} \lesssim \|\widehat U_{0,j}\|_{H_I^1(\widehat\Omega_j)} \simeq {}^*\|\widehat U_{0,j}\|_{H_w^1(\widehat\Omega_j)}, $$
where $\widehat U_{0,j}$ is the affine image of $U_{0,j}$ on the scaled element $\widehat\Omega_j$ of side length 1. Moreover, $\widehat U_{0,j}$ is the bilinear interpolant of $\widehat U_j$, the affine image of $U_j$ on $\widehat\Omega_j$ under the scaling. Hence, invoking Lemma C.40 gives
$$ |u_0|_{H^{1/2}(\Gamma_j)} \lesssim (1+\log p)^{1/2}\,{}^*\|\widehat U_j\|_{H_w^1(\widehat\Omega_j)} \lesssim (1+\log p)^{1/2}\|\widehat U_j\|_{H^1(\widehat\Omega_j)}, $$
where we used the equivalence of norms in $H^1(\widehat\Omega_j)$, with constants depending only on the diameter of $\widehat\Omega_j$, which is a constant. Theorem 7.8 in [18] gives
$$ |u_0|_{H^{1/2}(\Gamma_j)} \lesssim (1+\log p)^{1/2}\|u\|_{H_I^{1/2}(\Gamma_j)}. $$
Using Lemma A.5 we deduce
$$ |u_0|_{H^{1/2}(\Gamma_j)} \lesssim (1+\log p)^{1/2}\|u\|_{H_w^{1/2}(\Gamma_j)}. $$
For any $\alpha\in\mathbb R$ we write the above inequality for $u+\alpha\in V$ instead of u to have
$$ |u_0|_{H^{1/2}(\Gamma_j)} = |u_0+\alpha|_{H^{1/2}(\Gamma_j)} \lesssim (1+\log p)^{1/2}\|u+\alpha\|_{H_w^{1/2}(\Gamma_j)}. $$
Taking the infimum over α and using Lemma A.15 we then deduce
$$ |u_0|_{H^{1/2}(\Gamma_j)}^2 \le c(1+\log p)\,|u|_{H^{1/2}(\Gamma_j)}^2. $$
By summing over j we obtain
$$ \sum_{j=1}^N |u_0|_{H^{1/2}(\Gamma_j)}^2 \le c(1+\log p)\sum_{j=1}^N |u|_{H^{1/2}(\Gamma_j)}^2, $$
proving the lemma.
Lemma C.40. Let R be a rectangular domain in $\mathbb R^2$ of side length $h < 1$. If $u_p$ belongs to $\mathcal P_p(R)$, the space of polynomials of degree at most p in each variable, and $u_0$ is the bilinear interpolant of $u_p$ at the vertices of R, then
$$ {}^*\|u_0\|_{H_w^1(R)} \lesssim (1+\log p)^{1/2}\,{}^*\|u_p\|_{H_w^1(R)}, $$
where the constant is independent of $u_p$, p, and h.

Proof. Due to Lemma A.5, it suffices to prove
$$ {}^*\|\widehat u_0\|_{H_w^1(\widehat R)} \lesssim (1+\log p)^{1/2}\,{}^*\|\widehat u_p\|_{H_w^1(\widehat R)} \qquad\text{(C.65)} $$
where $\widehat R$ has side length 1. A simple calculation reveals
$$ {}^*\|\widehat u_0\|_{H_w^1(\widehat R)} \simeq \|\widehat u_0\|_{H_S^1(\widehat R)} \lesssim \max_{1\le i\le 4}|\widehat u_0(\widehat x_i)|, $$
where $\widehat x_i$, $i = 1,\dots,4$, are the vertices of $\widehat R$. Since $\widehat u_0$ is bilinear on $\widehat R$, it attains its maximum value at one of its vertices. Moreover, since $\widehat u_0$ interpolates $\widehat u_p$ at these vertices, this maximum value is not greater than the maximum value of $\widehat u_p$. Therefore,
$$ {}^*\|\widehat u_0\|_{H_w^1(\widehat R)} \lesssim \max_{1\le i\le 4}|\widehat u_0(\widehat x_i)| = \|\widehat u_0\|_{L^\infty(\widehat R)} \le \|\widehat u_p\|_{L^\infty(\widehat R)}. \qquad\text{(C.66)} $$
Lemma C.22 gives
$$ \|\widehat u_p\|_{L^\infty(\widehat R)} \lesssim |\log\varepsilon|^{1/2}\|\widehat u_p\|_{H_S^1(\widehat R)} + \varepsilon\|\widehat u_p\|_{W^{1,\infty}(\widehat R)},\quad \varepsilon\in(0,1), $$
where the constant is independent of p, $u_p$, and h. By Markov's inequality (C.33),
$$ \max_{x,y}\Bigl|\frac{\partial\widehat u_p}{\partial x}(x,y)\Bigr| \le p^2\max_{x,y}|\widehat u_p(x,y)| \quad\text{and}\quad \max_{x,y}\Bigl|\frac{\partial\widehat u_p}{\partial y}(x,y)\Bigr| \le p^2\max_{x,y}|\widehat u_p(x,y)|, $$
implying
$$ \|\widehat u_p\|_{W^{1,\infty}(\widehat R)} \lesssim p^2\|\widehat u_p\|_{L^\infty(\widehat R)}. $$
By choosing $\varepsilon = 1/(2p^2)$ we deduce
$$ \|\widehat u_p\|_{L^\infty(\widehat R)} \lesssim (1+\log p)^{1/2}\|\widehat u_p\|_{H_S^1(\widehat R)} \simeq (1+\log p)^{1/2}\,{}^*\|\widehat u_p\|_{H_w^1(\widehat R)}. $$
This together with (C.66) gives (C.65), completing the proof of the lemma.

We finish this subsection by proving the boundedness in the $H^{1/2}$-seminorm of the interpolation operator which interpolates continuous functions at the Gauss–Lobatto points $z_1, z_2, \dots, z_{p+1}$, the zeros of $\widehat L_{p+1}$ defined in (C.4). To assure ourselves that $\widehat L_{p+1}$ has $p+1$ distinct zeros in $[-1,1]$, we first recall from [216, Theorem 4.2.1] that the Legendre polynomial $L_p$ satisfies the differential equation
$$ (1-x^2)L_p''(x) - 2xL_p'(x) + p(p+1)L_p(x) = 0. $$
As a consequence, $\widehat L_{p+1}$ satisfies
$$ (1-x^2)L_p'(x) + p(p+1)\widehat L_{p+1}(x) = 0. $$
This implies that $x = \pm1$ are zeros of $\widehat L_{p+1}$. Moreover, as is well known, $L_p$ has p distinct zeros in $(-1,1)$. Therefore, the above identity implies that $\widehat L_{p+1}$ has $p+1$ distinct zeros in $[-1,1]$; the zeros besides $\pm1$ are the zeros of $L_p'$, which interleave those of $L_p$. We denote the zeros of $\widehat L_{p+1}$ by
$$ -1 = z_1 < z_2 < \cdots < z_p < z_{p+1} = 1. \qquad\text{(C.67)} $$
Lemma C.41. Let $J = [-1,1]$, let $\mathcal P_p(J)$ be the space of polynomials of degree at most p on J, and let $\widetilde{\mathcal P}_p(J) := \mathcal P_p(J)/\mathbb R$. For any continuous function v on J, we denote by $T_p v$ the polynomial of degree at most p that interpolates v at the points $z_1,\dots,z_{p+1}$ defined in (C.67). Then $T_p : \widetilde{\mathcal P}_{p+1}(J)\to\widetilde{\mathcal P}_p(J)$ is uniformly bounded in the $|\cdot|_{H^{1/2}(J)}$-norm, i.e.,
$$ |T_p f|_{H^{1/2}(J)} \lesssim |f|_{H^{1/2}(J)} \quad \forall f\in\widetilde{\mathcal P}_{p+1}(J). \qquad\text{(C.68)} $$
The constant is independent of f and p.

Proof. For any $f\in\widetilde{\mathcal P}_{p+1}(J)$, let $F\in\widetilde{\mathcal P}_{p+1}(J\times J)$ be the extension of f onto $J\times J$ given by [18, Theorem 7.8], so that
$$ |F|_{H^1(J\times J)} \lesssim |f|_{H^{1/2}(J)}. $$
Let $G\in\widetilde{\mathcal P}_p(J\times J)$ be the interpolant of F at the points $(z_i,z_j)\in J\times J$, $i,j = 1,\dots,p+1$. It is shown in [168, Lemma 2] that
$$ |G|_{H^1(J\times J)} \lesssim |F|_{H^1(J\times J)}. $$
On the other hand, since $G|_J = T_p(f)$, it follows from the trace theorem that $|T_p(f)|_{H^{1/2}(J)} \lesssim |G|_{H^1(J\times J)}$. Here we note that on the quotient spaces the seminorms are norms which are equivalent to the usual norms. Altogether, we obtain (C.68).
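The basic property underlying $T_p$ can be checked numerically: interpolation at the $p+1$ Gauss–Lobatto points reproduces every polynomial of degree at most p, but in general not one of degree $p+1$. The following standalone sketch (not code from the text) hard-codes the nodes for $p = 4$, namely $\pm1$, 0, and $\pm\sqrt{3/7}$, the zeros of $L_4'$.

```python
import math

# Gauss-Lobatto points for p = 4: the endpoints plus the zeros of L_4'.
z = [-1.0, -math.sqrt(3.0 / 7.0), 0.0, math.sqrt(3.0 / 7.0), 1.0]

def interpolate(f, nodes, x):
    """Evaluate at x the Lagrange interpolant of f at the given nodes."""
    total = 0.0
    for i, zi in enumerate(nodes):
        li = 1.0
        for j, zj in enumerate(nodes):
            if j != i:
                li *= (x - zj) / (zi - zj)
        total += f(zi) * li
    return total

xs = [-0.9 + 0.1 * k for k in range(19)]

# T_p reproduces polynomials of degree at most p = 4 ...
f = lambda x: 3 * x**4 - 2 * x**3 + x - 0.5
err = max(abs(interpolate(f, z, x) - f(x)) for x in xs)
assert err < 1e-12
# ... but not, in general, a polynomial of degree p + 1:
g = lambda x: x**5
assert max(abs(interpolate(g, z, x) - g(x)) for x in xs) > 1e-3
```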
C.5.3 Some useful operators

In this subsection we define some useful transformations and prove their important properties. It is known that a polynomial on the boundary of a rectangular element can be extended to a polynomial in the rectangle, with appropriate bounds on the norms; see e.g. [18]. The following lemma, which extends this result to piecewise polynomial functions, is first proved in [222]. The next two lemmas require the following sets, depicted in Figure C.3:
$$ \begin{aligned} &\Omega_1^- := [-1,0]\times[0,1], &&\Omega_1^+ := [0,1]\times[0,1], &&\Omega_1 := \Omega_1^-\cup\Omega_1^+,\\ &\Omega_2^- := [-1,0]\times[-1,0], &&\Omega_2^+ := [0,1]\times[-1,0], &&\Omega_2 := \Omega_2^-\cup\Omega_2^+,\\ &\gamma_1^- := [-1,0]\times\{0\}, &&\gamma_1^+ := [0,1]\times\{0\}, &&\gamma_1 := \gamma_1^-\cup\gamma_1^+,\\ &\gamma_2^- := \{0\}\times[0,1], &&\gamma_3^- := [-1,0]\times\{1\}, &&\gamma_4^- := \{-1\}\times[0,1],\\ &\gamma_2^+ := \{0\}\times[0,1], &&\gamma_3^+ := [0,1]\times\{1\}, &&\gamma_4^+ := \{1\}\times[0,1],\\ &\gamma^- := \gamma_1^-\cup\cdots\cup\gamma_4^-, &&\gamma^+ := \gamma_1^+\cup\cdots\cup\gamma_4^+, &&\Omega := \Omega_1\cup\Omega_2. \end{aligned} \qquad\text{(C.69)} $$
We also identify $\gamma_1$ and $\gamma_1^\pm$ with the corresponding one-dimensional intervals.

Lemma C.42. Let f be a continuous function defined on $\gamma_1$ such that $f(\pm1) = 0$ and $f^\pm := f|_{\gamma_1^\pm}$ are polynomials of degree p. Then there exists $F\in H_0^1(\Omega)$ such that
$$ F|_{\Omega_i^\pm}\in\mathcal P_p(\Omega_i^\pm),\ i = 1,2,\qquad F|_{\partial\Omega} = 0,\qquad F|_{\gamma_1} = f, $$
and
$$ \|F\|_{H_I^1(\Omega_i)} \lesssim \|f\|_{\widetilde H_I^{1/2}(\gamma_1)},\quad i = 1,2, \qquad\text{(C.70)} $$
where the constant is independent of f and p.

Fig. C.3 Sets defined in (C.69): $\Omega_i = \Omega_i^+\cup\Omega_i^-$, $i = 1,2$, $\Omega = \Omega_1\cup\Omega_2$, $\gamma^\pm = \gamma_1^\pm\cup\dots\cup\gamma_4^\pm$, $\gamma_1 = \gamma_1^-\cup\gamma_1^+$

Proof. Let $\widetilde f^\pm$ be the functions defined on $\gamma^\pm$ by
$$ \widetilde f^\pm := \begin{cases} f^\pm & \text{on } \gamma_1^\pm,\\ f^+\circ\eta & \text{on } \gamma_2^\pm,\\ 0 & \text{on } \gamma_3^\pm\cup\gamma_4^\pm, \end{cases} $$
where η is the rotation that maps $\gamma_2^+$ onto $\gamma_1^+$. We note that $\widetilde f^+$ and $\widetilde f^-$ take the same values on $\gamma_2^\pm$ and are continuous on $\gamma^\pm$, respectively. By [18, Theorem 7.5] there exist $F_1^\pm\in\mathcal P_p(\Omega_1^\pm)$ such that $F_1^\pm|_{\gamma^\pm} = \widetilde f^\pm$ and
$$ \|F_1^\pm\|_{H_I^1(\Omega_1^\pm)} \simeq \|F_1^\pm\|_{H_S^1(\Omega_1^\pm)} \lesssim \|\widetilde f^\pm\|_{H_S^{1/2}(\gamma^\pm)} \simeq \|\widetilde f^\pm\|_{H_I^{1/2}(\gamma^\pm)}, \qquad\text{(C.71)} $$
where the constant is independent of f and p. Since $F_1^+|_{\gamma_2^+} = F_1^-|_{\gamma_2^-}$, if we define
$$ F_1 := \begin{cases} F_1^+ & \text{on } \Omega_1^+,\\ F_1^- & \text{on } \Omega_1^-, \end{cases} $$
then $F_1$ is continuous on $\Omega_1$; thus $F_1\in H^1(\Omega_1)$. Similarly, we can define $F_2\in H^1(\Omega_2)$ with similar properties as $F_1$, and then define $F\in H^1(\Omega)$ such that $F|_{\Omega_i} = F_i$, $i = 1,2$. Clearly, F vanishes on the boundary of Ω. In view of (C.71), the required inequality (C.70) will follow if
$$ \|\widetilde f^\pm\|_{H_I^{1/2}(\gamma^\pm)} \lesssim \|f\|_{\widetilde H_I^{1/2}(\gamma_1)}. \qquad\text{(C.72)} $$
It is clear from the definition of $\widetilde f^\pm$ that
$$ \|\widetilde f^-\|_{H_I^{1/2}(\gamma^-)} \lesssim \|\widetilde f^-\|_{\widetilde H^{1/2}(\gamma^-)} \lesssim \|f\|_{\widetilde H^{1/2}(\gamma_1)} \lesssim \|f\|_{\widetilde H_I^{1/2}(\gamma_1)} $$
and
$$ \|\widetilde f^+\|_{H_I^{1/2}(\gamma^+)} \lesssim \|\widetilde f^+\|_{\widetilde H^{1/2}(\gamma^+)} \lesssim \|f^+\|_{\widetilde H^{1/2}(\gamma_1^+)} \lesssim \|f\|_{\widetilde H_I^{1/2}(\gamma_1)}. $$
Thus (C.72) holds and the lemma is proved.
In the sequel, if v is a function defined on an interval, then we denote by $\widehat v$ its affine image on the reference interval $[-1,1]$.

Lemma C.43. Let f be a continuous function on $[-1,1]$ vanishing at $\pm1$ such that $f^- := f|_{[-1,0]}$ and $f^+ := f|_{[0,1]}$ are polynomials of degree $p+1$. Let $T_p^*(f)$ be defined by
$$ T_p^*(f) := \begin{cases} g^- & \text{on } [-1,0),\\ g^+ & \text{on } [0,1], \end{cases} $$
where $g^\pm$ are the polynomials of degree p whose affine images $\widehat g^\pm$ are defined by $\widehat g^\pm := T_p(\widehat f^\pm)$, respectively. (Here $T_p$ is the interpolation operator defined in Lemma C.41.) Then $T_p^*(f)$ is continuous on $[-1,1]$ and
$$ \|T_p^*(f)\|_{\widetilde H_I^{1/2}(-1,1)} \lesssim \|f\|_{\widetilde H_I^{1/2}(-1,1)}, \qquad\text{(C.73)} $$
where the constant is independent of f and p.

Proof. We note that the functions $g^\pm$ so defined are polynomials of degree p. Moreover, since $f(\pm1) = 0$ and f is continuous at 0, we have $g^-(-1) = g^+(1) = 0$ and $g^-(0) = g^+(0)$. Hence $T_p^*(f)$ is continuous and belongs to $\widetilde H^{1/2}(-1,1)$. We now prove (C.73). Recall the sets defined by (C.69). By Lemma C.42, there exists a function $F\in H_0^1(\Omega)$ such that
$$ F|_{\Omega_i^\pm}\in\mathcal P_{p+1}(\Omega_i^\pm),\ i = 1,2,\qquad F|_{\gamma_1} = f,\qquad \|F\|_{H_I^1(\Omega)} \lesssim \|f\|_{\widetilde H_I^{1/2}(\gamma_1)}, \qquad\text{(C.74)} $$
where the constant is independent of f and p. By using the two-dimensional version of the interpolation operator $T_p$ discussed in [168], we can define $G_i^\pm\in\mathcal P_p(\Omega_i^\pm)$ such that
$$ |G_i^\pm|_{H^1(\Omega_i^\pm)} \lesssim |F|_{H^1(\Omega_i^\pm)}. $$
Let G denote the function defined on Ω such that $G|_{\Omega_i^\pm} = G_i^\pm$. Then $G\in H_0^1(\Omega)$ (because $F\in H_0^1(\Omega)$) and
$$ |G|_{H^1(\Omega)} \lesssim |F|_{H^1(\Omega)}. \qquad\text{(C.75)} $$
Now let $\partial\Omega_1$ be the boundary of $\Omega_1$ and let $\Gamma := \partial\Omega_1\setminus\gamma_1$. We denote by $\widetilde g$ the extension of $T_p^*(f)$ by 0 onto Γ. On each of the intervals $[-1,0]$ and $[0,1]$, the functions G and $T_p^*(f)$ are polynomials of degree p which coincide at the interpolation points $z_1,\dots,z_{p+1}$ defined in (C.67). Therefore, $G|_{\gamma_1} = T_p^*(f) = \widetilde g|_{\gamma_1}$. On the other hand, $G|_\Gamma\equiv0\equiv\widetilde g|_\Gamma$. Thus $G|_{\partial\Omega_1} = \widetilde g$, which implies
$$ \|T_p^*(f)\|_{\widetilde H_I^{1/2}(-1,1)} \simeq \|\widetilde g\|_{H^{1/2}(\partial\Omega_1)} \lesssim \|G\|_{H^1(\Omega)} \simeq |G|_{H^1(\Omega)}. \qquad\text{(C.76)} $$
The estimate (C.73) hence follows from (C.74), (C.75), and (C.76).
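The structural properties of $T_p^*$ claimed at the start of the proof of Lemma C.43 (continuity at 0, zeros at $\pm1$, and a genuine drop in degree) can be illustrated numerically. The sketch below is a standalone check, not code from the text; the Gauss–Lobatto nodes are hard-coded for $p = 4$ and the mapped nodes on $[-1,0]$ and $[0,1]$ are built by the affine changes of variables used in the lemma.

```python
import math

# GLL points for p = 4 on the reference interval [-1, 1].
z = [-1.0, -math.sqrt(3.0 / 7.0), 0.0, math.sqrt(3.0 / 7.0), 1.0]

def lagrange(nodes, values, x):
    """Evaluate the Lagrange interpolant through (nodes, values) at x."""
    total = 0.0
    for i, zi in enumerate(nodes):
        li = 1.0
        for j, zj in enumerate(nodes):
            if j != i:
                li *= (x - zj) / (zi - zj)
        total += values[i] * li
    return total

# f: degree p + 1 = 5, continuous, vanishing at +-1.
f = lambda x: (1 - x * x) * (x**3 + 0.3)

nodes_minus = [(zk - 1) / 2 for zk in z]      # GLL nodes mapped to [-1, 0]
nodes_plus = [(zk + 1) / 2 for zk in z]       # GLL nodes mapped to [0, 1]
g_minus = lambda x: lagrange(nodes_minus, [f(t) for t in nodes_minus], x)
g_plus = lambda x: lagrange(nodes_plus, [f(t) for t in nodes_plus], x)

# T_p^*(f) is continuous at 0 and vanishes at +-1 ...
assert abs(g_minus(0.0) - g_plus(0.0)) < 1e-14
assert abs(g_minus(-1.0)) < 1e-14 and abs(g_plus(1.0)) < 1e-14
# ... while the degree has genuinely dropped from 5 to 4:
assert abs(g_plus(0.3) - f(0.3)) > 1e-4
```

The continuity and endpoint conditions hold automatically because the mapped node sets contain the points $-1$, $0$, and $1$, exactly as argued in the proof.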
We finish this section with a result concerning the invariance of the $\widetilde H^{1/2}$-norm under a special transformation, which will be used in the analysis of the p-version. First we recall that the norm $\|v\|_{H_w^{1/2}(\alpha,\beta)}$ is defined in Section A.2.6.1 by
$$ \|v\|_{H_w^{1/2}(\alpha,\beta)}^2 = |v|_{H^{1/2}(\alpha,\beta)}^2 + \int_\alpha^\beta \frac{v^2(x)}{\beta-x}\,dx + \int_\alpha^\beta \frac{v^2(x)}{x-\alpha}\,dx, \qquad\text{(C.77)} $$
with $|v|_{H^{1/2}(\alpha,\beta)}$ defined in (A.25).

Lemma C.44. Let $a < b < c$ and let $A : u\mapsto\widetilde u$ be the mapping from $\widetilde H^{1/2}(a,c)$ onto $\widetilde H^{1/2}(-1,1)$ defined by
$$ \widetilde u := \begin{cases} \widetilde u_1 & \text{on } [-1,0],\\ \widetilde u_2 & \text{on } [0,1], \end{cases} $$
where $\widetilde u_1$ and $\widetilde u_2$ are the affine images of $u_1 := u|_{[a,b]}$ and $u_2 := u|_{[b,c]}$ on $[-1,0]$ and $[0,1]$, respectively. Then
$$ \frac{1}{\mu}\|u\|_{H_w^{1/2}(a,c)}^2 \le \|Au\|_{H_w^{1/2}(-1,1)}^2 \le \mu\|u\|_{H_w^{1/2}(a,c)}^2, \qquad\text{(C.78)} $$
where
$$ \mu := \max\Bigl\{\frac{c-b}{b-a},\ \frac{b-a}{c-b}\Bigr\}. $$
Proof. Assume without loss of generality that $\mu = (c-b)/(b-a)$. For any $t\in[a,b]$ and $\tau\in[b,c]$, if
$$ s = \frac{t-b}{b-a}\in[-1,0] \quad\text{and}\quad \sigma = \frac{\tau-b}{c-b}\in[0,1], \qquad\text{(C.79)} $$
then $\widetilde u_1(s) = u_1(t)$ and $\widetilde u_2(\sigma) = u_2(\tau)$. To prove (C.78), in view of (C.77) we consider the three terms
$$ T_1 := \int_{-1}^1\int_{-1}^1 \frac{|\widetilde u(x)-\widetilde u(y)|^2}{|x-y|^2}\,dx\,dy,\qquad T_2 := \int_{-1}^1 \frac{|\widetilde u(x)|^2}{1-x}\,dx,\qquad T_3 := \int_{-1}^1 \frac{|\widetilde u(x)|^2}{1+x}\,dx. $$
By symmetry we have $T_1 = T_{11}+T_{12}+2T_{13}$ where
$$ T_{11} := \int_0^1\int_0^1 \frac{|\widetilde u_2(\sigma)-\widetilde u_2(\sigma')|^2}{|\sigma-\sigma'|^2}\,d\sigma\,d\sigma',\qquad T_{12} := \int_{-1}^0\int_{-1}^0 \frac{|\widetilde u_1(s)-\widetilde u_1(s')|^2}{|s-s'|^2}\,ds\,ds', $$
$$ T_{13} := \int_0^1\int_{-1}^0 \frac{|\widetilde u_1(s)-\widetilde u_2(\sigma)|^2}{|s-\sigma|^2}\,ds\,d\sigma. $$
Noting (C.79) we have
$$ T_{11} = \int_b^c\int_b^c \frac{|u_2(\tau)-u_2(\tau')|^2}{|\tau-\tau'|^2}\,d\tau\,d\tau',\qquad T_{12} = \int_a^b\int_a^b \frac{|u_1(t)-u_1(t')|^2}{|t-t'|^2}\,dt\,dt', $$
$$ T_{13} = \mu\int_b^c\int_a^b \frac{|u_1(t)-u_2(\tau)|^2}{|\tau-\mu t+(\mu-1)b|^2}\,dt\,d\tau. $$
Since $a\le t\le b\le\tau\le c$ and $\mu\ge1$, we have
$$ |\tau-\mu t+(\mu-1)b| = \tau-t+(\mu-1)(b-t), $$
which implies
$$ \tau-t \le |\tau-\mu t+(\mu-1)b| \le \mu(\tau-t). $$
Therefore,
$$ \frac{1}{\mu}\int_a^b\int_b^c \frac{|u_2(\tau)-u_1(t)|^2}{|\tau-t|^2}\,d\tau\,dt \le T_{13} \le \mu\int_a^b\int_b^c \frac{|u_2(\tau)-u_1(t)|^2}{|\tau-t|^2}\,d\tau\,dt. $$
Altogether we obtain
$$ \frac{1}{\mu}|u|_{H^{1/2}(a,c)}^2 \le T_1 \le \mu|u|_{H^{1/2}(a,c)}^2. $$
Similarly, we have
$$ T_2 = \int_a^b \frac{|u_1(t)|^2}{2b-a-t}\,dt + \int_b^c \frac{|u_2(\tau)|^2}{c-\tau}\,d\tau. $$
Under the assumption $b-a\le c-b$, for $t\in[a,b]$ we have
$$ \frac{c-t}{\mu} \le 2b-a-t \le c-t. $$
Hence
$$ \int_a^c \frac{|u(z)|^2}{c-z}\,dz \le T_2 \le \mu\int_a^c \frac{|u(z)|^2}{c-z}\,dz. $$
Finally, a similar argument yields
$$ \frac{1}{\mu}\int_a^c \frac{|u(z)|^2}{z-a}\,dz \le T_3 \le \int_a^c \frac{|u(z)|^2}{z-a}\,dz, $$
completing the proof of the lemma.
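The elementary kernel bounds on which the proof of Lemma C.44 rests can be verified directly. The following standalone sketch samples random configurations with $b-a \le c-b$ (so that $\mu = (c-b)/(b-a) \ge 1$, the case treated in the proof) and checks the identity and the two-sided estimate for $|\tau-\mu t+(\mu-1)b|$.

```python
import random

random.seed(1)
# Random configurations with b - a <= c - b, so mu = (c-b)/(b-a) >= 1.
for _ in range(1000):
    a = random.uniform(-5.0, 0.0)
    b = a + random.uniform(0.1, 1.0)
    c = b + random.uniform(b - a, 3.0)          # ensures c - b >= b - a
    mu = (c - b) / (b - a)
    t = random.uniform(a, b)
    tau = random.uniform(b, c)
    val = abs(tau - mu * t + (mu - 1) * b)
    # the identity used in the proof:
    assert abs(val - ((tau - t) + (mu - 1) * (b - t))) < 1e-9
    # the resulting two-sided bound:
    assert val >= tau - t - 1e-9
    assert val <= mu * (tau - t) + 1e-9
```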
C.5.4 Clément's interpolation

As has been seen in Subsection C.5.2, the standard interpolation is not bounded in $H^1(\Gamma)$ nor in $L^2(\Gamma)$. In fact, it is not even well defined in these spaces. It is sometimes necessary, in the analysis, to employ other types of interpolation operators which are bounded in these spaces (and thus in $H^{1/2}(\Gamma)$ and $\widetilde H^{1/2}(\Gamma)$). In this subsection we recall some properties of Clément's interpolation operator, introduced in [61] by using local averaging. Let $V_h$ be the space of continuous piecewise linear functions defined on a mesh of Γ. We define $\Pi_h^0 : L^2(\Gamma)\to V_h$ by
$$ \Pi_h^0 v = \sum_{x_k\in\mathcal N_h} (P_k^0 v)\,\varphi_k^h, $$
where $P_k^0 v$, the constant associated with $S_k = \operatorname{supp}\varphi_k^h$ (the patch of all elements K sharing the vertex $x_k$), satisfies
$$ \int_{S_k} (v-P_k^0 v)\,w\,dx = 0 \quad\text{for all } w\in\mathcal P_0(S_k). $$
The operator $\Pi_h^0$ is a local averaging interpolation operator. We note that it is not a projection. The following results are well known and are recalled here for convenience. See [61]; see also [3, 56, 90].

Lemma C.45. For any $v\in H_0^1(\Gamma)$ the following estimates hold:
$$ \|\Pi_h^0 v\|_{L^2(\Gamma)} \lesssim \|v\|_{L^2(\Gamma)},\qquad |\Pi_h^0 v|_{H_0^1(\Gamma)} \lesssim |v|_{H_0^1(\Gamma)}, $$
and
$$ \sum_K h_K^{-2}\|v-\Pi_h^0 v\|_{L^2(K)}^2 + |v-\Pi_h^0 v|_{H^1(\Gamma)}^2 \lesssim |v|_{H^1(\Gamma)}^2. $$
The constants are independent of v and h. Analogous bounds in $\widetilde H^{1/2}(\Gamma)$ and $H^{1/2}(\Gamma)$ can be drawn.
Lemma C.46. For any $v\in\widetilde H^{1/2}(\Gamma)$ there hold
$$ \|\Pi_h^0 v\|_{\widetilde H_I^{1/2}(\Gamma)} \lesssim \|v\|_{\widetilde H_I^{1/2}(\Gamma)} \qquad\text{(C.80)} $$
and
$$ \sum_K h_K^{-1}\|v-\Pi_h^0 v\|_{L^2(K)}^2 + |v-\Pi_h^0 v|_{H^{1/2}(\Gamma)}^2 \lesssim |v|_{H^{1/2}(\Gamma)}^2. \qquad\text{(C.81)} $$

Proof. The results for quasi-uniform meshes are proved in [3, Lemma 8]. We prove the results when the mesh is only locally quasi-uniform. It follows from Lemma C.45 that
$$ \|\Pi_h^0 v\|_{L^2(\Gamma)} \lesssim \|v\|_{L^2(\Gamma)} \quad\text{and}\quad |\Pi_h^0 v|_{H^1(\Gamma)} \lesssim |v|_{H^1(\Gamma)}. $$
So (C.80) is obtained by interpolation. To prove (C.81) we first note that the inequality
$$ |v-\Pi_h^0 v|_{H^{1/2}(\Gamma)}^2 \lesssim |v|_{H^{1/2}(\Gamma)}^2 $$
can be obtained by following the argument of the proof of [3, Lemma 8]. We now prove
$$ \sum_K h_K^{-1}\|v-\Pi_h^0 v\|_{L^2(K)}^2 \lesssim |v|_{H^{1/2}(\Gamma)}^2. $$
For each element K, let $X_K = L^2(K)$ equipped with the norm $\|\cdot\|_{X_K} = \|\cdot\|_{L^2(K)}$ and $Y_K = L^2(K)$ equipped with the norm $\|\cdot\|_{Y_K} = h_K^{-1}\|\cdot\|_{L^2(K)}$. Let $Z_K$ be defined by Hilbert space interpolation as $Z_K := [X_K,Y_K]_{1/2}$, with norm
$$ \|v\|_{Z_K}^2 = \int_0^\infty \frac{K(t,v)^2}{t^2}\,dt, $$
where the K-functional is defined, for $v\in X_K+Y_K$, by
$$ K(t,v)^2 = \inf_{v=v_0+v_1}\bigl(\|v_0\|_{X_K}^2 + t^2\|v_1\|_{Y_K}^2\bigr). $$
Then
$$ \|v\|_{Z_K}^2 \simeq \|v\|_{X_K}\|v\|_{Y_K} = h_K^{-1}\|v\|_{L^2(K)}^2. \qquad\text{(C.82)} $$
Now if we define
$$ \mathcal X = \prod_K X_K,\qquad \mathcal Y = \prod_K Y_K,\qquad \mathcal Z = \prod_K Z_K, $$
with norms
$$ \|v\|_{\mathcal X}^2 = \sum_K \|v_K\|_{X_K}^2,\qquad \|v\|_{\mathcal Y}^2 = \sum_K \|v_K\|_{Y_K}^2,\qquad \|v\|_{\mathcal Z}^2 = \sum_K \|v_K\|_{Z_K}^2, $$
where $v = (v_K)_K$, then $\mathcal Z = [\mathcal X,\mathcal Y]_{1/2}$ with the K-functional satisfying
$$ K(t,v)^2 = \sum_K K(t,v_K)^2. $$
Let T be the operator defined by $Tv = v-\Pi_h^0 v$. Then it is given by Lemma C.45 that
$$ \|Tv\|_{\mathcal X}^2 = \sum_K \|v-\Pi_h^0 v\|_{L^2(K)}^2 \lesssim \|v\|_{L^2(\Gamma)}^2 $$
and
$$ \|Tv\|_{\mathcal Y}^2 = \sum_K h_K^{-2}\|v-\Pi_h^0 v\|_{L^2(K)}^2 \lesssim |v|_{H^1(\Gamma)}^2. $$
By interpolation and noting (C.82), we infer
$$ \sum_K h_K^{-1}\|v-\Pi_h^0 v\|_{L^2(K)}^2 \simeq \|Tv\|_{\mathcal Z}^2 \lesssim \|v\|_{H^{1/2}(\Gamma)}^2. $$
Since $T\alpha = 0$ for all $\alpha\in\mathbb R$, the above inequality implies, with the aid of Lemma A.6 and Lemma A.5,
$$ \sum_K h_K^{-1}\|v-\Pi_h^0 v\|_{L^2(K)}^2 = \|T(v+\alpha)\|_{\mathcal Z}^2 \lesssim \|v+\alpha\|_{H_w^{1/2}(\Gamma)}^2. $$
Taking the infimum over α and applying [3, Lemma 5] yields
$$ \sum_K h_K^{-1}\|v-\Pi_h^0 v\|_{L^2(K)}^2 \lesssim |v|_{H^{1/2}(\Gamma)}^2, $$
proving the desired result.
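The defining patch-average construction of $\Pi_h^0$, and the fact that it is not a projection, are easy to see in one dimension. The following standalone sketch (our own illustration, not code from the text) works with nodal values of piecewise linear functions on a uniform mesh; the patch mean at an interior node $x_k$ of such a function is $(v_{k-1}+2v_k+v_{k+1})/4$, by exact integration of the hat-function representation.

```python
def clement(vals):
    """Nodal values of Pi_h^0 applied to the piecewise linear function
    with nodal values `vals` on a uniform 1D mesh: P_k^0 v is the mean
    of v over the patch S_k of elements touching the node x_k."""
    n = len(vals) - 1
    out = []
    for k in range(n + 1):
        if k == 0:
            out.append((vals[0] + vals[1]) / 2.0)      # mean over [x_0, x_1]
        elif k == n:
            out.append((vals[n - 1] + vals[n]) / 2.0)  # mean over [x_{n-1}, x_n]
        else:
            out.append((vals[k - 1] + 2 * vals[k] + vals[k + 1]) / 4.0)
    return out

# Constants are reproduced (the hat functions sum to 1):
v_const = [3.0] * 6
assert clement(v_const) == v_const

# Pi_h^0 is not a projection: applying it twice changes the result.
hat = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
assert clement(hat) != hat
assert clement(clement(hat)) != clement(hat)
```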
C.5.5 The Scott–Zhang interpolation

The Scott–Zhang interpolation is defined in [191], with a variant considered in [14] for the Sobolev space $\widetilde H^s(\Gamma)$, $0\le s\le1$. This interpolation, also a local averaging interpolation operator like Clément's interpolation defined in Subsection C.5.4, differs from the latter in that it naturally preserves homogeneous boundary conditions.

Let $\mathcal T_h$ be a triangulation of Γ, not necessarily quasi-uniform but shape regular. Let $V_h$ be the boundary element space defined on this mesh $\mathcal T_h$ by
$$ V_h := \bigl\{ v\in C(\Gamma) : v|_T\in\mathcal P_1\ \forall T\in\mathcal T_h,\ v = 0 \text{ on } \partial\Gamma \bigr\}. $$
Let $\mathcal N_h := \{ z_j : j = 1,\dots,N_h \}$ be the set of nodal points in $\mathcal T_h$. For any $j = 1,\dots,N_h$, let $\eta_j\in V_h$ be the hat function which takes the value 1 at $z_j$ and zero at the other nodal points. These functions $\eta_j$, $j = 1,\dots,N_h$, form a basis for $V_h$. For each $z_j\in\mathcal N_h$, let $T_{z_j}\in\mathcal T_h$ be a triangle containing $z_j$ as a vertex. We consider the $L^2$-dual basis for $V_h$ consisting of functions $\psi_j$ which satisfy
$$ \int_{T_{z_j}} \psi_j(x)\eta_i(x)\,ds(x) = \delta_{ij},\quad i = 1,\dots,N_h. $$
It is shown in [191, Lemma 3.1] that
$$ \|\psi_j\|_{L^\infty(T_{z_j})} \lesssim |T_{z_j}|^{-1},\quad j = 1,\dots,N_h. \qquad\text{(C.83)} $$
The Scott–Zhang interpolation operator $J_h : L^2(\Gamma)\to V_h$ is defined by
$$ J_h v = \sum_{j=1}^{N_h} \eta_j \int_{T_{z_j}} \psi_j(x)v(x)\,ds(x) \quad\forall v\in L^2(\Gamma). \qquad\text{(C.84)} $$
Different choices of the triangles $T_{z_j}$ result in different operators $J_h$. The following properties of $J_h$ are proved in [191, Theorem 3.1 and Theorem 4.1]; see also [14].

Lemma C.47. The operator $J_h : L^2(\Gamma)\to V_h$ has the following properties:
$$ J_h v = v \quad\forall v\in V_h, $$
$$ \|J_h v\|_{\widetilde H_I^s(\Gamma)} \lesssim \|v\|_{\widetilde H_I^s(\Gamma)} \quad\forall v\in\widetilde H^s(\Gamma),\ 0\le s\le1, $$
$$ \|v-J_h v\|_{L^2(T)} \lesssim \operatorname{diam}(T)\,\|\nabla v\|_{L^2(\omega_h(T))} \quad\forall v\in\widetilde H^1(\Gamma), $$
$$ \|\nabla(v-J_h v)\|_{L^2(T)} \lesssim \|\nabla v\|_{L^2(\omega_h(T))} \quad\forall v\in\widetilde H^1(\Gamma), $$
where $\omega_h(T) := \bigcup\{ T'\in\mathcal T_h : \overline{T'}\cap\overline T\ne\emptyset \}$. The constant depends only on the shape regularity of $\mathcal T_h$.
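The reproduction property $J_h v = v$ on $V_h$ can be verified in a one-dimensional analogue with $\mathcal P_1$ elements. In the sketch below (our own illustration; the book's setting is a triangulation of Γ), each node picks one adjacent element as $T_{z_j}$, and the dual function $\psi_j$ is obtained by inverting the 2×2 element mass matrix: for the endpoint hat $\lambda$ on an element of length h, $\psi = (2/h)(2\lambda_{\text{end}} - \lambda_{\text{other}})$.

```python
def sz_nodal_values(nodes, vals):
    """1D Scott-Zhang nodal values for the piecewise linear function with
    nodal values `vals`; node j uses the element to its left as T_{z_j}
    (the element to its right for the first node)."""
    def psi_integral(xl, xr, vl, vr, at_left):
        # Integrate psi * v over [xl, xr] by Simpson's rule, which is
        # exact here because the integrand is a quadratic polynomial.
        h = xr - xl
        lam_l = lambda x: (xr - x) / h
        lam_r = lambda x: (x - xl) / h
        v = lambda x: vl * lam_l(x) + vr * lam_r(x)
        if at_left:
            psi = lambda x: (2.0 / h) * (2 * lam_l(x) - lam_r(x))
        else:
            psi = lambda x: (2.0 / h) * (2 * lam_r(x) - lam_l(x))
        xm = 0.5 * (xl + xr)
        return (h / 6.0) * (psi(xl) * v(xl) + 4 * psi(xm) * v(xm) + psi(xr) * v(xr))

    out = []
    for j in range(len(nodes)):
        if j == 0:
            out.append(psi_integral(nodes[0], nodes[1], vals[0], vals[1], True))
        else:
            out.append(psi_integral(nodes[j - 1], nodes[j], vals[j - 1], vals[j], False))
    return out

nodes = [0.0, 0.2, 0.5, 0.6, 1.0]      # a non-uniform (shape-regular) mesh
vals = [0.0, 1.3, -0.7, 2.0, 0.0]      # a piecewise linear v in V_h
out = sz_nodal_values(nodes, vals)
assert max(abs(a - b) for a, b in zip(out, vals)) < 1e-12   # J_h v = v on V_h
```

Because the first and last nodal values of a function in $V_h$ are zero, the computed interpolant also vanishes on the boundary, which is the feature distinguishing $J_h$ from Clément's operator.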
C.6 Gauss–Lobatto Quadrature

Let $L_p$ be the Legendre polynomial normalised so that $L_p(1) = 1$. For each $p\in\mathbb N$, let $\tau_{p,j}$, $j = 0,\dots,p$, be the solutions of
$$ (1-t^2)L_p'(t) = 0, $$
ordered such that
$$ -1 = \tau_{p,0} < \tau_{p,1} < \cdots < \tau_{p,p} = 1. \qquad\text{(C.85)} $$
These points $\tau_{p,j}$ are called the Gauss–Lobatto–Legendre (GLL) points. The Lagrange polynomial $\ell_{p,i}$ associated with $\tau_{p,i}$ is the polynomial of degree p satisfying
$$ \ell_{p,i}(\tau_{p,j}) = \delta_{ij},\quad i,j = 0,\dots,p. \qquad\text{(C.86)} $$
It is well known that the GLL points defined in (C.85) can be used to define the $(p+1)$-point Gauss–Lobatto quadrature rule
$$ \int_{-1}^1 f(t)\,dt \approx \sum_{k=0}^p \omega_{p,k}\,f(\tau_{p,k}), $$
where the weights $\omega_{p,k}$ are given by
$$ \omega_{p,k} = \frac{2}{p(p+1)L_p^2(\tau_{p,k})},\quad k = 0,\dots,p, \qquad\text{(C.87)} $$
so that in particular
$$ \omega_{p,0} = \omega_{p,p} = \frac{2}{p(p+1)}. \qquad\text{(C.88)} $$
The quadrature rule is exact when f is a polynomial of degree at most $2p-1$. Moreover, if f is a polynomial of degree at most p, there holds [48]
$$ \|f\|_{L^2(-1,1)}^2 \le \sum_{k=0}^p \omega_{p,k}\,[f(\tau_{p,k})]^2 \le (2+1/p)\|f\|_{L^2(-1,1)}^2. \qquad\text{(C.89)} $$
It follows from (C.89) that
$$ \|\ell_{p,i}\|_{L^2(-1,1)}^2 \le \omega_{p,i} \le (2+1/p)\|\ell_{p,i}\|_{L^2(-1,1)}^2, \qquad\text{(C.90)} $$
where $\ell_{p,i}$ is defined by (C.86). When there is no confusion present, we write $\omega_k$ for $\omega_{p,k}$.

Let $\gamma\subset\partial K$ be a side of an element K with associated polynomial order p. A node $q_k$ on $\overline\gamma$ (the union of γ and its two endpoints) is defined from a node $\tau_{p,k}\in[-1,1]$ by $F_K$. Associated with this node is a weight $\omega_k$ defined by (C.87). Therefore, if f is a function defined on γ, we can approximate the integral of f on γ by
$$ \int_\gamma f(x)\,d\gamma_x \approx \frac{h_\gamma}{2}\sum_{q_k\in W_p\cap\overline\gamma} \omega_k\,f(q_k), $$
where $h_\gamma = \operatorname{meas}(\gamma)$.
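The quadrature rule above is easy to reproduce numerically. The following standalone sketch computes the GLL nodes by Newton iteration on $L_p'$ (the initial guesses $-\cos(\pi k/p)$ and the recurrence-based evaluation are implementation choices of this sketch, not from the text), assembles the weights via (C.87), and checks exactness up to degree $2p-1$ for $p = 4$.

```python
import math

def legendre_and_derivs(p, x):
    """Return (L_p(x), L_p'(x), L_p''(x)) via the three-term recurrence
    and the Legendre differential equation; assumes |x| < 1."""
    if p == 0:
        return 1.0, 0.0, 0.0
    pm, pc = 1.0, x                      # L_0, L_1
    for k in range(2, p + 1):
        pm, pc = pc, ((2 * k - 1) * x * pc - (k - 1) * pm) / k
    d1 = p * (x * pc - pm) / (x * x - 1.0)
    d2 = (2.0 * x * d1 - p * (p + 1) * pc) / (1.0 - x * x)
    return pc, d1, d2

def gll_rule(p):
    """GLL nodes (zeros of (1-t^2)L_p'(t)) and weights (C.87)."""
    nodes = [-1.0]
    for k in range(1, p):                # interior nodes: Newton on L_p'
        x = -math.cos(math.pi * k / p)   # initial guess (an assumption)
        for _ in range(50):
            _, d1, d2 = legendre_and_derivs(p, x)
            x -= d1 / d2
        nodes.append(x)
    nodes.append(1.0)
    weights = []
    for t in nodes:
        if abs(t) == 1.0:
            lp = 1.0 if (t > 0 or p % 2 == 0) else -1.0   # L_p(+-1) = (+-1)^p
        else:
            lp = legendre_and_derivs(p, t)[0]
        weights.append(2.0 / (p * (p + 1) * lp * lp))
    return nodes, weights

p = 4
nodes, weights = gll_rule(p)
assert abs(sum(weights) - 2.0) < 1e-12           # the rule integrates 1 exactly
# exactness for monomials up to degree 2p - 1 = 7:
for d in range(2 * p):
    exact = 2.0 / (d + 1) if d % 2 == 0 else 0.0
    quad = sum(w * t**d for t, w in zip(nodes, weights))
    assert abs(quad - exact) < 1e-12
# degree 2p is no longer integrated exactly:
quad8 = sum(w * t**8 for t, w in zip(nodes, weights))
assert abs(quad8 - 2.0 / 9.0) > 1e-3
```

For $p = 4$ this produces the nodes $\{-1, -\sqrt{3/7}, 0, \sqrt{3/7}, 1\}$ with weights $\{1/10, 49/90, 32/45, 49/90, 1/10\}$.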
C.7 Additional Technical Lemmas

Lemma C.48. Let
$$ F(x) = \begin{cases} x^2\log|x| & \text{if } x\ne0,\\ 0 & \text{if } x = 0, \end{cases} $$
and let
$$ f(x) = F(x+1) + F(x-1) - 2F(x) \quad\forall x\in\mathbb R. $$
Then

(i) $f''(x) < 0$ if and only if $|x| > 1/\sqrt2$ and $|x|\ne1$;
(ii) $f'(x) > 0$ for any $x > 0$ and $f(x) > 0$ for any $x\ne0$;
(iii) $f(2x)/2 - f(x) < 0$ for $x > 1/\sqrt2$.

Proof. We first note that
$$ F'(x) = 2x\log|x| + x \quad\text{and}\quad F''(x) = 2\log|x| + 3 \quad\text{for all } x\ne0. $$
Hence, for $x\ne0$ and $x\ne\pm1$,
$$ f'(x) = 2(x+1)\log|x+1| + 2(x-1)\log|x-1| - 4x\log|x| $$
and
$$ f''(x) = 2\log\frac{|x^2-1|}{x^2}. $$
Therefore (i) holds. We now prove (ii). Consider a fixed $x > 0$ and define
$$ h(t) := 2(t+x)\log|t+x| - 2(t-x)\log|t-x| \quad\forall t\in[0,1]. $$
Then
$$ h'(t) = 2\log\frac{|t+x|}{|t-x|}. $$
For $x\ge1$, there exists $\theta\in(0,1)$ such that
$$ f'(x) = h(1) - h(0) = h'(\theta) = 2\log\frac{|\theta+x|}{|\theta-x|} > 0. $$
For $0 < x < 1$, there exist $\theta_1\in(x,1)$ and $\theta_2\in(0,x)$ such that
$$ f'(x) = h(1) - h(x) + h(x) - h(0) = (1-x)h'(\theta_1) + xh'(\theta_2) > 0. $$
Hence $f'(x) > 0$ for all $x > 0$. The remainder of the proof of (ii) is straightforward if one notes that $f(0) = 0$ and that f is an even function. To prove (iii) we note that if
$$ k(x) := \frac{f(2x)}{2} - f(x) \quad\forall x > 1/\sqrt2, $$
then, since $f''(x) < 0$ for $x > 1/\sqrt2$ and $x\ne1$, we have
$$ k'(x) = f'(2x) - f'(x) < 0, $$
and therefore $k(x) < k(0) = 0$.
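The three statements of Lemma C.48 can be checked numerically at sample points. The sketch below is a standalone illustration; the formula for $f''$ is the one derived in the proof.

```python
import math

def F(x):
    return 0.0 if x == 0 else x * x * math.log(abs(x))

def f(x):
    return F(x + 1) + F(x - 1) - 2 * F(x)

def fpp(x):
    # f''(x) = 2 log(|x^2 - 1| / x^2), valid for x != 0, +-1
    return 2.0 * math.log(abs(x * x - 1.0) / (x * x))

# (i): f'' changes sign at 1/sqrt(2) ~ 0.7071
assert fpp(0.5) > 0 and fpp(0.6) > 0
assert fpp(0.8) < 0 and fpp(2.0) < 0 and fpp(10.0) < 0
# (ii): f vanishes at 0 and is positive away from 0
assert f(0.0) == 0.0
for x in [0.1, 0.5, 1.0, 1.5, 3.0, 10.0]:
    assert f(x) > 0 and f(-x) > 0
# (iii): f(2x)/2 - f(x) < 0 beyond 1/sqrt(2)
for x in [0.8, 1.0, 1.5, 2.0, 5.0]:
    assert f(2 * x) / 2 - f(x) < 0
```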
In the following lemma we prove a special version of the mean value theorem, which will be used in the next result.

Lemma C.49. Let $\varphi : \mathbb R\to\mathbb R$ be a differentiable function. For any $x\in\mathbb R$, there exists $\theta\in(0,1)$ satisfying
$$ \varphi(x+1) + \varphi(x-1) - 2\varphi(x) = \varphi'(x+\theta) - \varphi'(x-\theta). $$

Proof. Fixing $x\in\mathbb R$, we define
$$ \psi : [0,1]\to\mathbb R,\quad t\mapsto\psi(t) := \varphi(x+t) + \varphi(x-t) - 2\varphi(x). $$
Then there exists $\theta\in(0,1)$ such that
$$ \varphi(x+1) + \varphi(x-1) - 2\varphi(x) = \psi(1) - \psi(0) = \psi'(\theta) = \varphi'(x+\theta) - \varphi'(x-\theta), $$
proving the lemma.

Lemma C.50. Let f be defined as in Lemma C.48. If $G(x) = f(x+1) + f(x-1) - 2f(x)$, then
(i) $G(x) < 0$ and $G'(x) > 0$ for $x\ge1+1/\sqrt2$;
(ii) the following estimate holds:
$$ \sum_{m=4}^\infty |G(m)| \le \frac{\pi^2}{3}. $$

Proof. (i) Due to Lemma C.48, f is concave on $(1/\sqrt2,\infty)$. Therefore, $G(x) < 0$ when $x-1\ge1/\sqrt2$. It follows from Lemma C.49 that there exists $\theta\in(0,1)$ satisfying
$$ G'(x) = f'(x+1) + f'(x-1) - 2f'(x) = f''(x+\theta) - f''(x-\theta) = 2\log\frac{\bigl((x+\theta)^2-1\bigr)(x-\theta)^2}{(x+\theta)^2\,\bigl|(x-\theta)^2-1\bigr|} > 0, $$
proving (i).

(ii) From (i) and Lemma C.49 we deduce, for $m\ge2$,
$$ |G(m)| = 2f(m) - f(m+1) - f(m-1) = f'(m-\theta) - f'(m+\theta) = -\theta\bigl(f''(m-a)+f''(m+a)\bigr) = 2\theta\log\frac{(m^2-a^2)^2}{\bigl((m-1)^2-a^2\bigr)\bigl((m+1)^2-a^2\bigr)}, \qquad\text{(C.91)} $$
where $\theta,\theta'\in(0,1)$ and $a = \theta\theta'$. Since
$$ \frac{(m^2-a^2)^2}{\bigl((m-1)^2-a^2\bigr)\bigl((m+1)^2-a^2\bigr)} = \frac{1}{\Bigl(1-\dfrac{2m-1}{m^2-a^2}\Bigr)\Bigl(1+\dfrac{2m+1}{m^2-a^2}\Bigr)} $$
and
$$ \frac{2m-1}{m^2-a^2} \le \frac{2}{m} \le \frac{2m+1}{m^2-a^2} \quad\text{and}\quad \frac{2}{m} < 1 \quad\text{for } m\ge3, $$
it follows from (C.91) that
$$ |G(m)| \le -2\log\Bigl(1-\frac{4}{m^2}\Bigr) = 2\sum_{i=1}^\infty \frac1i\Bigl(\frac{2}{m}\Bigr)^{2i},\quad m\ge3. $$
Therefore, comparing the sum over m with an integral,
$$ \sum_{m=4}^\infty |G(m)| \le 2\sum_{i=1}^\infty \frac1i \sum_{m=4}^\infty \Bigl(\frac{2}{m}\Bigr)^{2i} \le 2\sum_{i=1}^\infty \frac1i \int_3^\infty \Bigl(\frac{2}{x}\Bigr)^{2i}dx = \sum_{i=1}^\infty \frac{6}{i(2i-1)}\Bigl(\frac{2}{3}\Bigr)^{2i}. $$
The term with $i = 1$ equals $8/3$, while for $i\ge2$ we have $6/(i(2i-1))\le1$, so that
$$ \sum_{m=4}^\infty |G(m)| \le \frac{8}{3} + \sum_{i=2}^\infty \Bigl(\frac{2}{3}\Bigr)^{2i} = \frac{8}{3} + \frac{16}{45} = \frac{136}{45} \le \frac{\pi^2}{3}. $$
The lemma is proved.

Lemma C.51. Let f be defined as in Lemma C.48. For a fixed $n\in\mathbb N$, if
$$ \widetilde G(x) := f(x) - f(x-n),\quad x\in\mathbb R, $$
then
$$ \lim_{n\to\infty}\frac1n\sum_{j=1}^{n-1}|\widetilde G(j)| = 4\log2. $$
Proof. It follows from the parity of f that
$$ \widetilde G(n/2+x) = -\widetilde G(n/2-x) \quad\forall x\in[0,n/2]. $$
Therefore,
$$ \begin{cases} \displaystyle\sum_{j=1}^{[n/2]-1}|\widetilde G(j)| = \sum_{j=[n/2]+1}^{n-1}|\widetilde G(j)|, & n \text{ even},\\[2ex] \displaystyle\sum_{j=1}^{[n/2]}|\widetilde G(j)| = \sum_{j=[n/2]+1}^{n-1}|\widetilde G(j)|, & n \text{ odd}, \end{cases} $$
where $[n/2]$ denotes the integral part of $n/2$. Noting that $\widetilde G([n/2]) = \widetilde G(n/2) = 0$ when n is even, we deduce, for all $n\ge2$,
$$ \sum_{j=1}^{n-1}|\widetilde G(j)| = 2\sum_{j=n^*}^{n-1}|\widetilde G(j)|, $$
where $n^* := [n/2]+1$. On the other hand, the parity of f and Lemma C.48 imply
$$ \widetilde G'(x) = f'(x) + f'(n-x) > 0 \quad\forall x\in(0,n), $$
so that
$$ \widetilde G(x) \ge \widetilde G(n/2) = 0 \quad\forall x\in[n/2,n]. $$
Therefore, by using the definition of $\widetilde G$, we deduce
$$ \sum_{j=1}^{n-1}|\widetilde G(j)| = 2\sum_{j=n^*}^{n-1}\widetilde G(j) = 2\sum_{j=n^*}^{n-1}\bigl(f(j)-f(j-n)\bigr) $$
$$ = 2\sum_{j=n^*}^{n-1}\bigl(F(j+1)-F(j)\bigr) - 2\sum_{j=n^*}^{n-1}\bigl(F(j)-F(j-1)\bigr) - 2\sum_{j=n^*}^{n-1}\bigl(F(j-n+1)-F(j-n)\bigr) + 2\sum_{j=n^*}^{n-1}\bigl(F(j-n)-F(j-n-1)\bigr). $$
Telescoping the sums gives, noting that $F(0) = F(-1) = 0$ and that $n^* = (n+1)/2$ when n is odd,
$$ \sum_{j=1}^{n-1}|\widetilde G(j)| = 2\bigl(F(n)-F(n^*)-F(n-1)+F(n^*-1)+F(n^*-n)-F(n^*-n-1)\bigr) $$
$$ = \begin{cases} 2\bigl(F(n)-F(n/2+1)-F(n-1)+F(n/2-1)\bigr), & n \text{ even},\\ 2\bigl(F(n)-2F((n+1)/2)-F(n-1)+2F((n-1)/2)\bigr), & n \text{ odd}, \end{cases} $$
where we also used the evenness of F. Consider first the case of even n. We have, from the definition of F,
$$ \frac1n\bigl(F(n)-F(n/2+1)-F(n-1)+F(n/2-1)\bigr) $$
$$ = \frac1n\Bigl(n^2\log n - \Bigl(\frac{n^2}{4}+n+1\Bigr)\log\frac{n+2}{2} - (n^2-2n+1)\log(n-1) + \Bigl(\frac{n^2}{4}-n+1\Bigr)\log\frac{n-2}{2}\Bigr) $$
$$ = n\log\frac{n}{n-1} - \frac{n}{4}\log\frac{n+2}{n-2} - \log\frac{(n+2)(n-2)}{(n-1)^2} - \frac1n\log\frac{(n+2)(n-1)}{n(n-2)} - \frac1n\log n + 2\log2 $$
$$ = n\log\Bigl(1+\frac1{n-1}\Bigr) - \frac{n}{4}\log\Bigl(1+\frac{4}{n-2}\Bigr) - \log\frac{(n+2)(n-2)}{(n-1)^2} - \frac1n\log\frac{(n+2)(n-1)}{n(n-2)} - \frac1n\log n + 2\log2. $$
Letting $n\to\infty$ and noting that the first two terms on the right-hand side tend to $1-1 = 0$, whereas each of the next three terms tends to 0, we obtain
$$ \lim_{n\to\infty}\frac1n\bigl(F(n)-F(n/2+1)-F(n-1)+F(n/2-1)\bigr) = 2\log2. $$
Similarly, for the case of odd n, we have
$$ \frac1n\bigl(F(n)-2F((n+1)/2)-F(n-1)+2F((n-1)/2)\bigr) = n\log\Bigl(1+\frac1{n-1}\Bigr) - \frac{n}{2}\log\Bigl(1+\frac{2}{n-1}\Bigr) - \log\frac{n+1}{n-1} - \frac1{2n}\log(n^2-1) + 2\log2. $$
Again, when $n\to\infty$ the first two terms on the right-hand side tend to $1-1 = 0$ and each of the next two terms tends to 0. Therefore
$$ \lim_{n\to\infty}\frac1n\bigl(F(n)-2F((n+1)/2)-F(n-1)+2F((n-1)/2)\bigr) = 2\log2. $$
Altogether we deduce
$$ \lim_{n\to\infty}\frac1n\sum_{j=1}^{n-1}|\widetilde G(j)| = 4\log2, $$
completing the proof of the lemma.
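The limit in Lemma C.51 can be observed numerically; the convergence is slow, of order $(\log n)/n$ by the expansion above, so only moderate accuracy is expected at computable n. The following standalone sketch evaluates the averages directly from the definition of f.

```python
import math

def F(x):
    return 0.0 if x == 0 else x * x * math.log(abs(x))

def f(x):
    return F(x + 1) + F(x - 1) - 2 * F(x)

def avg(n):
    # (1/n) * sum_{j=1}^{n-1} |G~(j)| with G~(x) = f(x) - f(x - n)
    return sum(abs(f(j) - f(j - n)) for j in range(1, n)) / n

limit = 4 * math.log(2)                  # ~ 2.7726
assert abs(avg(2000) - limit) < 0.02
assert abs(avg(4000) - limit) < 0.01
# the error shrinks as n grows:
assert abs(avg(4000) - limit) < abs(avg(1000) - limit)
```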
Lemma C.52. The following statements hold true.

(i) If $p > 1$, $r\ne1$, and F(x) is defined by
$$ F(x) = \begin{cases} \displaystyle\int_0^x f(y)\,dy, & r > 1,\\[1.5ex] \displaystyle\int_x^\infty f(y)\,dy, & r < 1, \end{cases} $$
then
$$ \int_0^\infty x^{-r}F^p(x)\,dx < \Bigl(\frac{p}{|r-1|}\Bigr)^p \int_0^\infty x^{-r}\bigl(xf(x)\bigr)^p\,dx, $$
unless $f\equiv0$. The constant is the best possible. When $p = 1$, the two sides are equal.

(ii) If $0\le x\le1$ and $f\in L^2(x,1)$, then
$$ \int_0^{1-x}\frac1{y^2}\Bigl(\int_x^{x+y} f(t)\,dt\Bigr)^2 dy \le 4\int_x^1 f^2(t)\,dt. $$

Proof. This result, which is a consequence of Hardy's inequality [103, Theorem 330, page 245], is proved in [110, Lemma 3.1].
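Part (ii) can be checked numerically on sample functions; the factor 4 leaves ample room for quadrature error, and for $f\equiv1$ both sides are computable in closed form (left-hand side $1-x$, right-hand side $4(1-x)$). The following sketch uses simple midpoint sums and is a standalone illustration, not code from the text.

```python
import math

def check(f, x, n=2000):
    """Return midpoint-rule approximations of the two sides of (ii)."""
    hy = (1.0 - x) / n
    lhs = 0.0
    for i in range(n):
        y = (i + 0.5) * hy                       # midpoint in (0, 1 - x)
        m = int(200 * y) + 1                     # inner subdivisions
        ht = y / m
        inner = sum(f(x + (k + 0.5) * ht) for k in range(m)) * ht
        lhs += (inner / y) ** 2 * hy
    m = 2000
    ht = (1.0 - x) / m
    rhs = sum(f(x + (k + 0.5) * ht) ** 2 for k in range(m)) * ht
    return lhs, rhs

for f, x in [(lambda t: 1.0, 0.0),
             (lambda t: math.cos(5 * t), 0.2),
             (lambda t: t * t - 0.5, 0.0)]:
    lhs, rhs = check(f, x)
    assert lhs <= 4.0 * rhs

# For f = 1 and x = 0.25: lhs = 1 - x = 0.75 and 4 * rhs = 3.
lhs, rhs = check(lambda t: 1.0, 0.25)
assert abs(lhs - 0.75) < 1e-6 and abs(4 * rhs - 3.0) < 1e-6
```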
References

1. Adams, R.A.: Sobolev Spaces. Academic Press, New York–London (1975). Pure and Applied Mathematics, Vol. 65
2. Ainsworth, M., Guo, B.: An additive Schwarz preconditioner for p-version boundary element approximation of the hypersingular operator in three dimensions. Numer. Math. 85, 343–366 (2000)
3. Ainsworth, M., Guo, B.: Analysis of iterative sub-structuring techniques for boundary element approximation of the hypersingular operator in three dimensions. Appl. Anal. 81, 241–280 (2002)
4. Ainsworth, M., McLean, W.: Multilevel diagonal scaling preconditioners for boundary element equations on locally refined meshes. Numer. Math. 93(3), 387–413 (2003)
5. Ainsworth, M., McLean, W., Tran, T.: The conditioning of boundary element equations on locally refined meshes and preconditioning by diagonal scaling. SIAM J. Numer. Anal. 36, 1901–1932 (1999)
6. Alfeld, P., Neamtu, M., Schumaker, L.L.: Bernstein–Bézier polynomials on spheres and sphere-like surfaces. Comput. Aided Geom. Design 13, 333–349 (1996)
7. Alfeld, P., Neamtu, M., Schumaker, L.L.: Dimension and local bases of homogeneous spline spaces. SIAM J. Math. Anal. 27, 1482–1501 (1996)
8. Alfeld, P., Neamtu, M., Schumaker, L.L.: Fitting scattered data on sphere-like surfaces using spherical splines. J. Comput. Appl. Math. 73, 5–43 (1996)
9. Amini, S., Profit, A.T.J.: Multi-level fast multipole Galerkin method for the boundary integral solution of the exterior Helmholtz equation. In: Current Trends in Scientific Computing (Xi'an, 2002), Contemp. Math., vol. 329, pp. 13–19. Amer. Math. Soc., Providence, RI (2003)
10. Aronszajn, N., Smith, K.T.: Theory of Bessel potentials. I. Ann. Inst. Fourier (Grenoble) 11, 385–475 (1961)
11. Ashby, S.F., Manteuffel, T.A., Saylor, P.E.: A taxonomy for conjugate gradient methods. SIAM J. Numer. Anal. 27(6), 1542–1568 (1990)
12. Aurada, M., Feischl, M., Führer, T., Karkulik, M., Melenk, J.M., Praetorius, D.: Local inverse estimates for non-local boundary integral operators. Math. Comp. 86(308), 2651–2686 (2017)
13. Aurada, M., Feischl, M., Führer, T., Karkulik, M., Praetorius, D.: Efficiency and optimality of some weighted-residual error estimator for adaptive 2D boundary element methods. Comput. Methods Appl. Math. 13(3), 305–332 (2013)
14. Aurada, M., Feischl, M., Führer, T., Karkulik, M., Praetorius, D.: Energy norm based error estimators for adaptive BEM for hypersingular integral equations. Appl. Numer. Math. 95, 15–35 (2015)
15. Aurada, M., Feischl, M., Karkulik, M., Praetorius, D.: A posteriori error estimates for the Johnson–Nédélec FEM-BEM coupling. Eng. Anal. Bound. Elem. 36(2), 255–266 (2012)
16. Axelsson, O.: Iterative Solution Methods. Cambridge University Press, Cambridge (1994)
17. Babuška, I., Guo, B.Q., Stephan, E.P.: On the exponential convergence of the h-p version for boundary element Galerkin methods on polygons. Math. Methods Appl. Sci. 12, 413–427 (1990)
18. Babuška, I., Craig, A., Mandel, J., Pitkäranta, J.: Efficient preconditioning for the p version finite element method in two dimensions. SIAM J. Numer. Anal. 28, 624–661 (1991)
19. Babuška, I., Suri, M.: The h-p version of the finite element method with quasi-uniform meshes. RAIRO Modél. Math. Anal. Numér. 21(2), 199–238 (1987)
20. Bänsch, E.: Local mesh refinement in 2 and 3 dimensions. Impact Comput. Sci. Engrg. 3(3), 181–191 (1991)
21. Baramidze, V., Lai, M.J.: Error bounds for minimal energy interpolatory spherical splines. In: Approximation Theory XI: Gatlinburg 2004, Mod. Methods Math., pp. 25–50. Nashboro Press, Brentwood, TN (2005)
22. Bellman, R.: A note on an inequality of E. Schmidt. Bull. Amer. Math. Soc. 50, 734–737 (1944)
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 E. P. Stephan and T. Tran, Schwarz Methods and Multilevel Preconditioners for Boundary Element Methods, https://doi.org/10.1007/978-3-030-79283-1
23. Ben Belgacem, F.: Polynomial extensions of compatible polynomial traces in three dimensions. Comput. Methods Appl. Mech. Engrg. 116(1-4), 235–241 (1994). ICOSAHOM'92 (Montpellier, 1992)
24. Benzi, M.: Preconditioning techniques for large linear systems: a survey. J. Comput. Phys. 182(2), 418–477 (2002)
25. Bergh, J., Löfström, J.: Interpolation Spaces: An Introduction. Springer-Verlag, Berlin (1976)
26. Bernardi, C., Maday, Y.: Approximations Spectrales de Problèmes aux Limites Elliptiques, Mathématiques & Applications (Berlin), vol. 10. Springer-Verlag, Paris (1992)
27. Bernardi, C., Maday, Y.: Spectral methods. In: Handbook of Numerical Analysis, Vol. V, pp. 209–485. North-Holland, Amsterdam (1997)
28. Bernardi, C., Maday, Y., Rapetti, F.: Discrétisations Variationnelles de Problèmes aux Limites Elliptiques, Mathématiques & Applications (Berlin), vol. 45. Springer-Verlag, Berlin (2004)
29. Bespalov, A., Heuer, N.: The p-version of the boundary element method for hypersingular operators on piecewise plane open surfaces. Numer. Math. 100(2), 185–209 (2005)
30. Bespalov, A., Heuer, N.: The p-version of the boundary element method for weakly singular operators on piecewise plane open surfaces. Numer. Math. 106, 69–97 (2007)
31. Bespalov, A., Heuer, N.: The hp-version of the boundary element method with quasi-uniform meshes in three dimensions. M2AN Math. Model. Numer. Anal. 42, 821–849 (2008)
32. Bică, I.: Iterative substructuring algorithms for the p-version finite element method for elliptic problems. Ph.D. thesis, Courant Institute of Mathematical Sciences, New York University, New York (1997)
33. Bielak, J., MacCamy, R.C.: An exterior interface problem in two-dimensional elastodynamics. Quart. Appl. Math. 41(1), 143–159 (1983/84)
34. Bollobás, B.: Modern Graph Theory. Springer-Verlag, New York (1998)
35. Bramble, J., Leyk, Z., Pasciak, J.: The analysis of multigrid algorithms for pseudodifferential operators of order minus one. Math. Comp. 63, 461–478 (1994)
36. Bramble, J., Pasciak, J., Schatz, A.: The construction of preconditioners for elliptic problems by substructuring. I. Math. Comp. 47, 103–134 (1986)
37. Bramble, J.H.: A second order finite difference analog of the first biharmonic boundary value problem. Numer. Math. 9, 236–249 (1966)
38. Bramble, J.H., Pasciak, J.E.: New estimates for multilevel algorithms including the V-cycle. Math. Comp. 60(202), 447–471 (1993)
39. Bramble, J.H., Pasciak, J.E., Wang, J.P., Xu, J.: Convergence estimates for product iterative methods with applications to domain decomposition. Math. Comp. 57(195), 1–21 (1991)
40. Bramble, J.H., Pasciak, J.E., Xu, J.: Parallel multilevel preconditioners. Math. Comp. 55(191), 1–22 (1990)
41. Bramble, J.H., Xu, J.: Some estimates for a weighted L2 projection. Math. Comp. 56(194), 463–476 (1991)
42. Brenner, S.C., Scott, L.R.: The Mathematical Theory of Finite Element Methods. Springer, Berlin (2002)
43. Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Universitext. Springer, New York (2011)
44. Brezzi, F., Johnson, C.: On the coupling of boundary integral and finite element methods. Calcolo 16(2), 189–201 (1979)
45. Cai, X.C.: The use of pointwise interpolation in domain decomposition methods with nonnested meshes. SIAM J. Sci. Comput. 16(1), 250–256 (1995)
46. Cai, X.C., Zou, J.: Some observations on the l2 convergence of the additive Schwarz preconditioned GMRES method. Numer. Linear Algebra Appl. 9(5), 379–397 (2002)
47. Calderón, A.P.: Lebesgue spaces of differentiable functions and distributions. In: Proc. Sympos. Pure Math., Vol. IV, pp. 33–49. American Mathematical Society, Providence, R.I. (1961)
48. Canuto, C., Quarteroni, A.: Approximation results for orthogonal polynomials in Sobolev spaces. Math. Comp. 38(157), 67–86 (1982)
49. Carstensen, C.: Nonlinear interface problems in solid mechanics – finite element and boundary element coupling. Habilitation thesis, Leibniz Universität Hannover, Hannover (1992)
50. Carstensen, C., Feischl, M., Page, M., Praetorius, D.: Axioms of adaptivity. Comput. Math. Appl. 67(6), 1195–1253 (2014)
51. Carstensen, C., Kuhn, M., Langer, U.: Fast parallel solvers for symmetric boundary element domain decomposition equations. Numer. Math. 79(3), 321–347 (1998)
52. Carstensen, C., Maischak, M., Praetorius, D., Stephan, E.P.: Residual-based a posteriori error estimate for hypersingular equation on surfaces. Numer. Math. 97(3), 397–425 (2004)
53. Carstensen, C., Stephan, E.P.: Adaptive coupling of boundary elements and finite elements. RAIRO Modél. Math. Anal. Numér. 29(7), 779–817 (1995)
54. Carstensen, C., Stephan, E.P.: A posteriori error estimates for boundary element methods. Math. Comp. 64(210), 483–500 (1995)
55. Carstensen, C., Stephan, E.P.: Adaptive boundary element methods for some first kind integral equations. SIAM J. Numer. Anal. 33(6), 2166–2183 (1996)
56. Chan, T.F., Smith, B.F., Zou, J.: Overlapping Schwarz methods on unstructured meshes using non-matching coarse grids. Numer. Math. 73(2), 149–167 (1996)
57. Chandler-Wilde, S.N., Hewett, D.P., Moiola, A.: Interpolation of Hilbert and Sobolev spaces: quantitative estimates and counterexamples. Mathematika 61(2), 414–443 (2015)
58. Chandra, R., Eisenstat, S., Schultz, M.: The Modified Conjugate Residual Method for Partial Differential Equations, pp. 13–19. Defense Technical Information Center, New Brunswick (1977)
59. Chen, D., Menegatto, V.A., Sun, X.: A necessary and sufficient condition for strictly positive definite functions on spheres. Proc. Amer. Math. Soc. 131, 2733–2740 (2003)
60. Ciarlet, P.G.: The Finite Element Method for Elliptic Problems. North-Holland, Amsterdam (1978)
61. Clément, P.: Approximation by finite element functions using local regularization. RAIRO Anal. Numér. 9, 77–84 (1975)
62. Costabel, M.: Symmetric methods for the coupling of finite elements and boundary elements (invited contribution). In: Boundary elements IX, Vol. 1 (Stuttgart, 1987), pp. 411–420. Comput. Mech., Southampton (1987)
63. Costabel, M.: Boundary integral operators on Lipschitz domains: elementary results. SIAM J. Math. Anal. 19, 613–626 (1988)
64. Costabel, M.: On the coupling of finite and boundary element methods. In: Proceedings of the International Symposium on Numerical Analysis (Ankara, 1987), pp. 1–14. Middle East Tech. Univ., Ankara (1989)
65. Costabel, M., Stephan, E.: Boundary integral equations for mixed boundary value problems in polygonal domains and Galerkin approximation. In: Mathematical Models and Methods in Mechanics, Banach Center Publ., vol. 15, pp. 175–251. PWN, Warsaw (1985)
66. Costabel, M., Stephan, E.: A direct boundary integral equation method for transmission problems. J. Math. Anal. Appl. 106(2), 367–413 (1985)
67. Costabel, M., Stephan, E.P.: An improved boundary element Galerkin method for three-dimensional crack problems. Integral Equations Operator Theory 10(4), 467–504 (1987)
68. Costabel, M., Stephan, E.P.: On the convergence of collocation methods for boundary integral equations on polygons. Math. Comp. 49, 461–478 (1987)
69. Costabel, M., Stephan, E.P.: Coupling of finite and boundary element methods for an elastoplastic interface problem. SIAM J. Numer. Anal. 27(5), 1212–1226 (1990)
70. Dryja, M.: A method of domain decomposition for three-dimensional finite element elliptic problems. In: R. Glowinski, G. Golub, G. Meurant, J. Périaux (eds.) First International Symposium on Domain Decomposition Methods for Partial Differential Equations, pp. 43–61. SIAM, Philadelphia, PA (1988)
71. Dryja, M., Widlund, O.B.: Multilevel additive methods for elliptic finite element problems. In: W. Hackbusch (ed.) Parallel Algorithms for Partial Differential Equations (Proc. of the Sixth GAMM-Seminar, Kiel, Germany, January 1990), pp. 58–69. Vieweg, Braunschweig (1991)
72. Dryja, M., Widlund, O.B.: Domain decomposition algorithms with small overlap. SIAM J. Sci. Comput. 15, 604–620 (1994)
73. Eisenstat, S.C., Elman, H.C., Schultz, M.H.: Variational iterative methods for non-symmetric systems of linear equations. SIAM J. Numer. Anal. 20, 345–357 (1983)
74. Feischl, M., Führer, T., Heuer, N., Karkulik, M., Praetorius, D.: Adaptive boundary element methods. Arch. Comput. Methods Eng. 22(3), 309–389 (2015)
75. Feischl, M., Führer, T., Praetorius, D., Stephan, E.P.: Optimal additive Schwarz preconditioning for hypersingular integral equations on locally refined triangulations. Calcolo 54(1), 367–399 (2017)
76. Feischl, M., Führer, T., Praetorius, D., Stephan, E.P.: Optimal preconditioning for the symmetric and nonsymmetric coupling of adaptive finite elements and boundary elements. Numer. Methods Partial Differential Equations 33(3), 603–632 (2017)
77. Freeden, W., Gervens, T., Schreiner, M.: Constructive Approximation on the Sphere with Applications to Geomathematics. Oxford University Press, Oxford (1998)
78. Führer, T.: Zur Kopplung von Finiten Elementen und Randelementen. Ph.D. thesis, Vienna University of Technology, Vienna (2014)
79. Führer, T., Haberl, A., Praetorius, D., Schimanko, S.: Adaptive BEM with inexact PCG solver yields almost optimal computational costs. Numer. Math. 141(4), 967–1008 (2019)
80. Funken, S.A., Stephan, E.P.: The BPX preconditioner for the single layer potential operator. Appl. Anal. 67(3-4), 327–340 (1997)
81. Funken, S.A., Stephan, E.P.: Fast solvers with block-diagonal preconditioners for linear FEM-BEM coupling. Numer. Linear Algebra Appl. 16(5), 365–395 (2009)
82. Gallistl, D., Schedensack, M., Stevenson, R.P.: A remark on newest vertex bisection in any space dimension. Comput. Methods Appl. Math. 14(3), 317–320 (2014)
83. Golub, G.H., Van Loan, C.F.: Matrix Computations, fourth edn. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore, MD (2013)
84. Grafarend, E.W., Krumm, F.W., Schwarze, V.S. (eds.): Geodesy: the Challenge of the 3rd Millennium. Springer, Berlin (2003)
85. Graham, I.G., Hackbusch, W., Sauter, S.A.: Finite elements on degenerate meshes: inverse-type inequalities and applications. IMA J. Numer. Anal. 25(2), 379–407 (2005)
86. Graham, I.G., McLean, W.: Anisotropic mesh refinement: the conditioning of Galerkin boundary element matrices and simple preconditioners. SIAM J. Numer. Anal. 44(4), 1487–1513 (2006)
87. Greenbaum, A.: Iterative Methods for Solving Linear Systems, Frontiers in Applied Mathematics, vol. 17. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA (1997)
88. Grisvard, P.: Elliptic Problems in Nonsmooth Domains. Pitman, Boston (1985)
89. Guo, B., Cao, W.: Additive Schwarz methods for the h-p version of the finite element method in two dimensions. SIAM J. Sci. Comput. 18(5), 1267–1288 (1997)
90. Guo, B., Cao, W.: An additive Schwarz method for the h-p version of the finite element method in three dimensions. SIAM J. Numer. Anal. 35(2), 632–654 (1998)
91. Guo, B., Heuer, N.: The optimal rate of convergence of the p-version of the boundary element method in two dimensions. Numer. Math. 98, 499–538 (2004)
92. Gwinner, J., Stephan, E.P.: Advanced Boundary Element Methods, Springer Series in Computational Mathematics, vol. 52. Springer, Cham (2018). Treatment of boundary value, transmission and contact problems
93. Hackbusch, W.: The panel clustering technique for the boundary element method (invited contribution). In: Boundary elements IX, Vol. 1 (Stuttgart, 1987), pp. 463–474. Comput. Mech., Southampton (1987)
94. Hackbusch, W.: Integral Equations, International Series of Numerical Mathematics, vol. 120. Birkhäuser Verlag, Basel (1995). Theory and numerical treatment, translated and revised by the author from the 1989 German original
95. Hackbusch, W.: A sparse matrix arithmetic based on H-matrices. I. Introduction to H-matrices. Computing 62(2), 89–108 (1999)
96. Hackbusch, W.: Iterative Solution of Large Sparse Systems of Equations, Applied Mathematical Sciences, vol. 95, second edn. Springer, Cham (2016)
97. Hackbusch, W., Börm, S.: Data-sparse approximation by adaptive H2-matrices. Computing 69(1), 1–35 (2002)
98. Hackbusch, W., Khoromskij, B., Sauter, S.A.: On H2-matrices. In: Lectures on applied mathematics (Munich, 1999), pp. 9–29. Springer, Berlin (2000)
99. Hackbusch, W., Khoromskij, B.N., Sauter, S.: Adaptive Galerkin boundary element methods with panel clustering. Numer. Math. 105(4), 603–631 (2007)
100. Hackbusch, W., Nowak, Z.P.: On the fast matrix multiplication in the boundary element method by panel clustering. Numer. Math. 54(4), 463–491 (1989)
101. Hahne, M., Stephan, E.P.: Schwarz iterations for the efficient solution of screen problems with boundary elements. Computing 56(1), 61–85 (1996)
102. Han, H.D.: A new class of variational formulations for the coupling of finite and boundary element methods. J. Comput. Math. 8(3), 223–232 (1990)
103. Hardy, G.H., Littlewood, J.E., Pólya, G.: Inequalities, 2nd edn. Cambridge University Press, Cambridge (1952)
104. Hestenes, M.R., Stiefel, E.: Methods of conjugate gradients for solving linear systems. J. Research Nat. Bur. Standards 49, 409–436 (1952)
105. Heuer, N.: Additive Schwarz methods for weakly singular integral equations in R3 – the p version. In: W. Hackbusch, G. Wittum (eds.) Boundary Elements: Implementation and Analysis of Advanced Algorithms (Proc. of the 12th GAMM-Seminar, Kiel, Germany, January 1996), pp. 126–135. Vieweg-Verlag, Braunschweig (1996)
106. Heuer, N.: Efficient algorithms for the p version of the boundary element method. J. Integral Equations Appl. 8, 337–361 (1996)
107. Heuer, N.: Preconditioners for the p-version of the boundary element Galerkin method in R3. Habilitation thesis, Leibniz Universität Hannover, Hannover (2000)
108. Heuer, N.: Additive Schwarz method for the p-version of the boundary element method for the single layer potential operator on a plane screen. Numer. Math. 88(3), 485–511 (2001)
109. Heuer, N.: On the equivalence of fractional-order Sobolev semi-norms. J. Math. Anal. Appl. 417(2), 505–518 (2014)
110. Heuer, N., Leydecker, F.: An extension theorem for polynomials on triangles. Calcolo 45(2), 69–85 (2008)
111. Heuer, N., Leydecker, F., Stephan, E.P.: An iterative substructuring method for the hp-version of the BEM on quasi-uniform triangular meshes. Numer. Methods Partial Differential Equations 23(4), 879–903 (2007)
112. Heuer, N., Maischak, M., Stephan, E.P.: Exponential convergence of the hp-version for the boundary element method on open surfaces. Numer. Math. 83(4), 641–666 (1999)
113. Heuer, N., Maischak, M., Stephan, E.P.: Preconditioned minimum residual iteration for the h-p version of the coupled FEM/BEM with quasi-uniform meshes. Numer. Linear Algebra Appl. 6(6), 435–456 (1999). Iterative solution methods for the elasticity equations in mechanics and biomechanics, IMMB'98, Part 1 (Nijmegen)
114. Heuer, N., Stephan, E.P.: The hp-version of the boundary element method on polygons. J. Integral Equations Appl. 8(2), 173–212 (1996)
115. Heuer, N., Stephan, E.P.: Boundary integral operators in countably normed spaces. Math. Nachr. 191, 123–151 (1998)
116. Heuer, N., Stephan, E.P.: Iterative substructuring for hypersingular integral equations in R3. SIAM J. Sci. Comput. 20(2), 739–749 (1998)
117. Heuer, N., Stephan, E.P.: Preconditioners for the p-version of the Galerkin method for a coupled finite element/boundary element system. Numer. Methods Partial Differential Equations 14(1), 47–61 (1998)
118. Heuer, N., Stephan, E.P.: An additive Schwarz method for the h-p version of the boundary element method for hypersingular integral equations in R3. IMA J. Numer. Anal. 21(1), 265–283 (2001)
119. Heuer, N., Stephan, E.P.: An overlapping domain decomposition preconditioner for high order BEM with anisotropic elements. Adv. Comput. Math. 19(1-3), 211–230 (2003). Challenges in computational mathematics (Pohang, 2001)
120. Heuer, N., Stephan, E.P., Tran, T.: Multilevel additive Schwarz method for the h-p version of the Galerkin boundary element method. Math. Comp. 67(222), 501–518 (1998)
121. Hille, E.: Analytic Function Theory, vol. II. Ginn, Boston (1962)
122. Hille, E., Szegő, G., Tamarkin, J.: On some generalizations of a theorem of A. Markoff. Duke Math. J. 3, 729–739 (1937)
123. Hiptmair, R.: Operator preconditioning. Comput. Math. Appl. 52(5), 699–706 (2006)
124. Holm, H., Maischak, M., Stephan, E.P.: The hp-version of the boundary element method for Helmholtz screen problems. Computing 57(2), 105–134 (1996)
125. Holm, H., Maischak, M., Stephan, E.P.: Exponential convergence of the h-p version BEM for mixed boundary value problems on polyhedrons. Math. Methods Appl. Sci. 31, 2069–2093 (2008)
126. Hörmander, L.: Pseudodifferential operators. Comm. Pure Appl. Math. 18, 501–517 (1965)
127. Hsiao, G.C., Wendland, W.L.: A finite element method for some integral equations of the first kind. J. Math. Anal. Appl. 58, 449–481 (1977)
128. Hsiao, G.C., Wendland, W.L.: Boundary Integral Equations, Applied Mathematical Sciences, vol. 164. Springer-Verlag, Berlin (2008)
129. Johnson, C., Nédélec, J.C.: On the coupling of boundary integral and finite element methods. Math. Comp. 35(152), 1063–1079 (1980)
130. Karkulik, M., Pavlicek, D., Praetorius, D.: On 2D newest vertex bisection: optimality of mesh-closure and H1-stability of L2-projection. Constr. Approx. 38(2), 213–234 (2013)
131. Kohn, J., Nirenberg, L.: On the algebra of pseudodifferential operators. Comm. Pure Appl. Math. 18, 269–305 (1965)
132. Langer, U., Of, G., Steinbach, O., Zulehner, W.: Inexact data-sparse boundary element tearing and interconnecting methods. SIAM J. Sci. Comput. 29(1), 290–314 (2007)
133. Langer, U., Of, G., Steinbach, O., Zulehner, W.: Inexact fast multipole boundary element tearing and interconnecting methods. In: Domain decomposition methods in science and engineering XVI, Lect. Notes Comput. Sci. Eng., vol. 55, pp. 405–412. Springer, Berlin (2007)
134. Langer, U., Steinbach, O.: Boundary element tearing and interconnecting methods. Computing 71(3), 205–228 (2003)
135. Langer, U., Steinbach, O.: Coupled boundary and finite element tearing and interconnecting methods. In: Domain decomposition methods in science and engineering, Lect. Notes Comput. Sci. Eng., vol. 40, pp. 83–97. Springer, Berlin (2005)
136. Le Gia, Q.T., Narcowich, F.J., Ward, J.D., Wendland, H.: Continuous and discrete least-squares approximation by radial basis functions on spheres. J. Approx. Theory 143, 124–133 (2006)
137. Le Gia, Q.T., Sloan, I.H., Tran, T.: Overlapping additive Schwarz preconditioners for elliptic PDEs on the unit sphere. Math. Comp. 78(265), 79–101 (2009)
138. Lions, J.L., Magenes, E.: Non-Homogeneous Boundary Value Problems and Applications I. Springer-Verlag, New York (1972)
139. Maday, Y.: Relèvements de traces polynomiales et interpolations hilbertiennes entre espaces de polynômes. C. R. Acad. Sci. Paris Sér. I Math. 309(7), 463–468 (1989)
140. Maday, Y.: Relèvements de traces polynomiales et interpolations hilbertiennes entre espaces de polynômes. C. R. Acad. Sci. Paris Sér. I Math. 309(7), 463–468 (1989)
141. Maischak, M.: The analytical computation of the Galerkin elements for the Laplace, Lamé, and Helmholtz equations in 3d-BEM. Tech. Rep. 95-16, Institut für Angewandte Mathematik, Universität Hannover (1995)
142. Maischak, M., Stephan, E.P.: The hp-version of the boundary element method in R3: the basic approximation results. Math. Methods Appl. Sci. 20, 461–476 (1997)
143. Maischak, M., Stephan, E.P., Tran, T.: Domain decomposition methods for boundary integral equations of the first kind: numerical results. Appl. Anal. 63(1-2), 111–132 (1996)
144. Maischak, M., Stephan, E.P., Tran, T.: Multiplicative Schwarz algorithms for the Galerkin boundary element method. SIAM J. Numer. Anal. 38(4), 1243–1268 (2000)
145. Maischak, M., Tran, T.: A block preconditioner for an electromagnetic FEM-BEM coupling problem in R3. In: W. Liu, M. Ng, Z.C. Shi (eds.) Recent Progress in Scientific Computing, pp. 302–318. Science Press, Beijing (2007). Proceedings of the 2nd International Conference on Scientific Computing and Partial Differential Equations, December 2005
146. Maitre, J.F., Pourquier, O.: Condition number and diagonal preconditioning: comparison of the p-version and the spectral element methods. Numer. Math. 74(1), 69–84 (1996)
147. McLean, W.: Strongly Elliptic Systems and Boundary Integral Equations. Cambridge University Press, Cambridge (2000)
148. McLean, W., Sloan, I.H.: A fully discrete and symmetric boundary element method. IMA J. Numer. Anal. 14, 311–345 (1994)
149. McLean, W., Tran, T.: A preconditioning strategy for boundary element Galerkin methods. Numer. Methods Partial Differential Equations 13(3), 283–301 (1997)
150. Meddahi, S.: An optimal iterative process for the Johnson-Nedelec method of coupling boundary and finite elements. SIAM J. Numer. Anal. 35(4), 1393–1415 (1998)
151. Melenk, J.M.: On condition numbers in hp-FEM with Gauss-Lobatto-based shape functions. J. Comput. Appl. Math. 139(1), 21–48 (2002)
152. Mitchell, W.F.: Unified multilevel adaptive finite element methods for elliptic problems. Ph.D. thesis, University of Illinois at Urbana-Champaign, Illinois, USA (1988)
153. Mitchell, W.F.: Optimal multilevel iterative methods for adaptive grids. SIAM J. Sci. Statist. Comput. 13(1), 146–167 (1992)
154. Morton, T.M., Neamtu, M.: Error bounds for solving pseudodifferential equations on spheres. J. Approx. Theory 114, 242–268 (2002)
155. Müller, C.: Spherical Harmonics, Lecture Notes in Mathematics, vol. 17. Springer-Verlag, Berlin (1966)
156. Mund, P., Stephan, E.P.: The preconditioned GMRES method for systems of coupled FEM-BEM equations. Adv. Comput. Math. 9(1-2), 131–144 (1998). Numerical treatment of boundary integral equations
157. Narcowich, F.J., Ward, J.D.: Norms of inverses and condition numbers for matrices associated with scattered data. J. Approx. Theory 64, 69–94 (1991)
158. Narcowich, F.J., Ward, J.D.: Scattered data interpolation on spheres: error estimates and locally supported basis functions. SIAM J. Math. Anal. 33, 1393–1410 (2002)
159. Neamtu, M., Schumaker, L.L.: On the approximation order of splines on spherical triangulations. Adv. Comput. Math. 21, 3–20 (2004)
160. Nédélec, J.C.: Integral equations with nonintegrable kernels. Integral Equations Operator Theory 5, 562–572 (1982)
161. Nédélec, J.C.: Acoustic and Electromagnetic Equations. Springer-Verlag, New York (2000)
162. Nédélec, J.C., Planchard, J.: Une méthode variationnelle d'éléments finis pour la résolution numérique d'un problème extérieur dans R3. Rev. Française Automat. Informat. Recherche Opérationnelle Sér. Rouge 7, 105–129 (1973)
163. Nečas, J.: Les Méthodes Directes en Théorie des Équations Elliptiques. Masson et Cie, Paris (1967)
164. Of, G., Steinbach, O.: Is the one-equation coupling of finite and boundary element methods always stable? ZAMM Z. Angew. Math. Mech. 93(6-7), 476–484 (2013)
165. Of, G., Steinbach, O., Wendland, W.L.: The fast multipole method for the symmetric boundary integral formulation. IMA J. Numer. Anal. 26(2), 272–296 (2006)
166. Olsen, E.T., Douglas Jr., J.: Bounds on spectral condition numbers of matrices arising in the p-version of the finite element method. Numer. Math. 69(3), 333–352 (1995)
167. Oswald, P.: Multilevel norms for H−1/2. Computing 61(3), 235–255 (1998)
168. Pavarino, L.F.: Additive Schwarz methods for the p version finite element method. Numer. Math. 66, 493–515 (1994)
169. Pavarino, L.F., Widlund, O.B.: A polylogarithmic bound for an iterative substructuring method for spectral elements in three dimensions. SIAM J. Numer. Anal. 33(4), 1303–1335 (1996)
170. von Petersdorff, T.: Randwertprobleme der Elastizitätstheorie für Polyeder – Singularitäten und Approximation mit Randelementmethoden. Ph.D. thesis, Technische Hochschule Darmstadt, Darmstadt (1989)
171. von Petersdorff, T., Stephan, E.P.: On the convergence of the multigrid method for a hypersingular integral equation of the first kind. Numer. Math. 57(4), 379–391 (1990)
172. von Petersdorff, T., Stephan, E.P.: Multigrid solvers and preconditioners for first kind integral equations. Numer. Methods Partial Differential Equations 8(5), 443–450 (1992)
173. Petersen, B.E.: Introduction to the Fourier Transform & Pseudodifferential Operators, Monographs and Studies in Mathematics, vol. 19. Pitman (Advanced Publishing Program), Boston, MA (1983)
174. Pham, D., Tran, T.: A domain decomposition method for solving the hypersingular integral equation on the sphere with spherical splines. Numer. Math. 120(1), 117–151 (2012)
175. Pham, D., Tran, T., Crothers, S.: An overlapping additive Schwarz preconditioner for the Laplace-Beltrami equation using spherical splines. Adv. Comput. Math. 37(1), 93–121 (2012)
176. Pham, D.T., Tran, T.: Solving non-strongly elliptic pseudodifferential equations on a sphere using radial basis functions. Comput. Math. Appl. 70(8), 1970–1983 (2015)
177. Pham, T.D.: Pseudodifferential equations on spheres with spherical radial basis functions and spherical splines. Ph.D. thesis, UNSW Sydney, Sydney (2011)
178. Pham, T.D., Tran, T.: Strongly elliptic pseudodifferential equations on the sphere with radial basis functions. Numer. Math. 128(3), 589–614 (2014)
179. Pham, T.D., Tran, T., Chernov, A.: Pseudodifferential equations on the sphere with spherical splines. Math. Models Methods Appl. Sci. 21(9), 1933–1959 (2011)
180. Ratcliffe, J.G.: Foundations of Hyperbolic Manifolds. Springer, New York (1994)
181. Raviart, P.A., Thomas, J.M.: Introduction à l'analyse numérique des équations aux dérivées partielles. Collection Mathématiques Appliquées pour la Maîtrise. Masson, Paris (1989)
182. Rychkov, V.S.: On restrictions and extensions of the Besov and Triebel-Lizorkin spaces with respect to Lipschitz domains. J. London Math. Soc. (2) 60(1), 237–257 (1999)
183. Saad, Y.: Iterative Methods for Sparse Linear Systems, second edn. Society for Industrial and Applied Mathematics, Philadelphia, PA (2003)
184. Saad, Y., Schultz, M.H.: GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 7(3), 856–869 (1986)
185. Sarkis, M., Szyld, D.B.: Optimal left and right additive Schwarz preconditioning for minimal residual methods with Euclidean and energy norms. Comput. Methods Appl. Mech. Engrg. 196(8), 1612–1621 (2007)
186. Sauter, S.A., Schwab, C.: Boundary Element Methods, Springer Series in Computational Mathematics, vol. 39. Springer-Verlag, Berlin (2011). Translated and expanded from the 2004 German original
187. Sayas, F.J.: The validity of Johnson-Nédélec's BEM-FEM coupling on polygonal interfaces. SIAM J. Numer. Anal. 47(5), 3451–3463 (2009)
188. Sayas, F.J.: The validity of Johnson-Nédélec's BEM-FEM coupling on polygonal interfaces. SIAM Rev. 55(1), 131–146 (2013)
189. Schoenberg, I.J.: Positive definite functions on spheres. Duke Math. J. 9, 96–108 (1942)
190. Schwab, C., Suri, M.: The optimal p-version approximation of singularities on polyhedra in the boundary element method. SIAM J. Numer. Anal. 33, 729–759 (1996)
191. Scott, L.R., Zhang, S.: Finite element interpolation of nonsmooth functions satisfying boundary conditions. Math. Comp. 54(190), 483–493 (1990)
192. Seeley, R.T.: Extension of C∞ functions defined in a half space. Proc. Amer. Math. Soc. 15, 625–626 (1964)
193. Silvester, D., Wathen, A.: Fast iterative solution of stabilised Stokes systems. II. Using general block preconditioners. SIAM J. Numer. Anal. 31(5), 1352–1367 (1994)
194. Simpson, R.N., Liu, Z.: Acceleration of isogeometric boundary element analysis through a black-box fast multipole method. Eng. Anal. Bound. Elem. 66, 168–182 (2016)
195. Smith, K.T., Solmon, D.C., Wagner, S.L.: Practical and mathematical aspects of the problem of reconstructing objects from radiographs. Bull. Amer. Math. Soc. 83, 1227–1270 (1977)
196. Muñoz Sola, R.: Polynomial liftings on a tetrahedron and applications to the h-p version of the finite element method in three dimensions. SIAM J. Numer. Anal. 34(1), 282–314 (1997)
197. Stein, E.M.: Singular Integrals and Differentiability Properties of Functions. Princeton Mathematical Series, No. 30. Princeton University Press, Princeton, N.J. (1970)
198. Steinbach, O.: Fast solvers for the symmetric boundary element method. In: Boundary elements: implementation and analysis of advanced algorithms (Kiel, 1996), Notes Numer. Fluid Mech., vol. 54, pp. 232–242. Friedr. Vieweg, Braunschweig (1996)
199. Steinbach, O.: Numerical Approximation Methods for Elliptic Boundary Value Problems. Springer, New York (2008). Finite and boundary elements, translated from the 2003 German original
200. Steinbach, O., Wendland, W.L.: On C. Neumann's method for second-order elliptic systems in domains with non-smooth boundaries. J. Math. Anal. Appl. 262(2), 733–748 (2001)
201. Stephan, E.P.: A boundary integral equation method for three-dimensional crack problems in elasticity. Math. Meth. Appl. Sci. 8, 609–623 (1986)
202. Stephan, E.P.: Boundary integral equations for screen problems in R3. Integral Equations Operator Theory 10, 236–257 (1987)
203. Stephan, E.P.: The h-p boundary element method for solving 2- and 3-dimensional problems. Comput. Methods Appl. Mech. Engrg. 133(3-4), 183–208 (1996)
204. Stephan, E.P.: Multilevel methods for the h-, p-, and hp-versions of the boundary element method. J. Comput. Appl. Math. 125(1-2), 503–519 (2000). Numerical analysis 2000, Vol. VI, Ordinary differential equations and integral equations
205. Stephan, E.P.: Coupling of boundary element methods and finite element methods. In: Encyclopedia of Computational Mechanics, second edn., pp. 1–40. John Wiley & Sons (2017)
206. Stephan, E.P., Maischak, M., Leydecker, F.: Some Schwarz methods for integral equations on surfaces – h and p versions. Comput. Vis. Sci. 8(3-4), 211–216 (2005)
207. Stephan, E.P., Maischak, M., Tran, T.: Domain decomposition algorithms for an indefinite hypersingular integral equation in three dimensions. In: Domain decomposition methods in science and engineering XVII, Lect. Notes Comput. Sci. Eng., vol. 60, pp. 647–655. Springer, Berlin (2008)
208. Stephan, E.P., Suri, M.: On the convergence of the p version of the boundary element Galerkin method. Math. Comp. 52, 31–48 (1989)
209. Stephan, E.P., Suri, M.: The h-p version of the boundary element method on polygonal domains with quasiuniform meshes. Math. Modelling Numer. Anal. (RAIRO) 25, 783–807 (1991)
210. Stephan, E.P., Tran, T.: Domain decomposition algorithms for indefinite hypersingular integral equations: the h and p versions. SIAM J. Sci. Comput. 19(4), 1139–1153 (1998)
211. Stephan, E.P., Tran, T.: Domain decomposition algorithms for indefinite weakly singular integral equations: the h and p versions. IMA J. Numer. Anal. 20, 1–24 (2000)
212. Stephan, E.P., Wendland, W.L.: An augmented Galerkin procedure for the boundary integral method applied to two-dimensional screen and crack problems. Appl. Anal. 18, 183–219 (1984)
213. Strang, G., Fix, G.: An Analysis of the Finite Element Method. Prentice-Hall, Englewood Cliffs, New Jersey (1973)
214. Svensson, S.: Pseudodifferential operators – a new approach to the boundary problems of physical geodesy. Manuscr. Geod. 8, 1–40 (1983)
215. Szabó, B., Babuška, I.: Finite Element Analysis. Wiley-Interscience, New York (1991)
216. Szegő, G.: Orthogonal Polynomials, Amer. Math. Soc. Colloquium Publications, vol. 23. American Mathematical Society, Providence, R.I. (1959)
217. Tausch, J.: The variable order fast multipole method for boundary integral equations of the second kind. Computing 72(3-4), 267–291 (2004)
218. Thomée, V.: Galerkin Finite Element Methods for Parabolic Problems. Springer, Berlin (1997)
219. Toselli, A., Widlund, O.: Domain Decomposition Methods—Algorithms and Theory. Springer Series in Computational Mathematics, vol. 34. Springer-Verlag, Berlin (2005)
220. Tran, T.: Additive Schwarz algorithms and the Galerkin boundary element method. In: Computational Techniques and Applications: CTAC97 (Adelaide), pp. 703–710. World Sci. Publ., River Edge, NJ (1998)
221. Tran, T.: Additive Schwarz preconditioners for a fully-discrete and symmetric boundary element method. ANZIAM J. 42(C), C1420–C1442 (2000)
222. Tran, T.: Overlapping additive Schwarz preconditioners for boundary element methods. J. Integral Eqns Appl. 12, 177–207 (2000)
223. Tran, T.: Additive Schwarz preconditioners for the h-p version boundary-element approximation to the hypersingular operator in three dimensions. Int. J. Comput. Math. 84(10), 1417–1437 (2007)
224. Tran, T.: Remarks on Sobolev norms of fractional orders. J. Math. Anal. Appl. 498(2), Paper No. 124960, 19 pp. (2021)
225. Tran, T., Le Gia, Q.T., Sloan, I.H., Stephan, E.P.: Boundary integral equations on the sphere with radial basis functions: error analysis. Appl. Numer. Math. 59, 2857–2871 (2009)
226. Tran, T., Le Gia, Q.T., Sloan, I.H., Stephan, E.P.: Preconditioners for pseudodifferential equations on the sphere with radial basis functions. Numer. Math. 115(1), 141–163 (2010)
227. Tran, T., Stephan, E.P.: Additive Schwarz methods for the h version boundary element method. Appl. Anal. 60, 63–84 (1996)
228. Tran, T., Stephan, E.P.: Additive Schwarz algorithms for the p version of the Galerkin boundary element method. Numer. Math. 85, 433–468 (2000)
229. Tran, T., Stephan, E.P.: Two-level additive Schwarz preconditioners for the h-p version of the Galerkin boundary element method for 2-d problems. Computing 67(1), 57–82 (2001)
230. Tran, T., Stephan, E.P.: An overlapping additive Schwarz preconditioner for boundary element approximations to the Laplace screen and Lamé crack problems. J. Numer. Math. 12(4), 311–330 (2004)
231. Tran, T., Stephan, E.P., Mund, P.: Hierarchical basis preconditioners for first kind integral equations. Appl. Anal. 65, 353–372 (1997)
232. Wathen, A., Fischer, B., Silvester, D.: The convergence rate of the minimal residual method for the Stokes problem. Numer. Math. 71(1), 121–134 (1995)
233. Wathen, A.J.: Preconditioning. Acta Numer. 24, 329–376 (2015)
234. Wendland, H.: Scattered Data Approximation. Cambridge University Press, Cambridge (2005)
235. Wendland, W.L., Stephan, E.P.: A hypersingular boundary integral method for two-dimensional screen and crack problems. Arch. Rational Mech. Anal. 112, 363–390 (1990)
236. Widlund, O.B.: Iterative substructuring methods: algorithms and theory for elliptic problems in the plane. In: R. Glowinski, G.H. Golub, G.A. Meurant, J. Périaux (eds.) Proc. of the First International Symposium on Domain Decomposition Methods for Partial Differential Equations, pp. 113–128. SIAM, Philadelphia, PA (1988)
237. Widlund, O.B.: Optimal iterative refinement methods. In: T.F. Chan, R. Glowinski, J. Périaux, O.B. Widlund (eds.) Proc. of the Second International Symposium on Domain Decomposition Methods for Partial Differential Equations, pp. 114–125. SIAM, Philadelphia, PA (1989)
238. Wilkinson, J.H.: The Algebraic Eigenvalue Problem. Clarendon Press, Oxford (1965)
239. Wu, H., Chen, Z.: Uniform convergence of multigrid V-cycle on adaptively refined finite element meshes for second order elliptic problems. Sci. China Ser. A 49(10), 1405–1429 (2006)
240. Xu, J.: Theory of multilevel methods. Ph.D. thesis, Cornell University, Ithaca, New York (1989)
241. Xu, J.: Iterative methods by space decomposition and subspace correction. SIAM Rev. 34(4), 581–613 (1992)
242. Xu, Y., Cheney, E.W.: Strictly positive definite functions on spheres. Proc. Amer. Math. Soc. 116, 977–981 (1992)
243. Yserentant, H.: On the multilevel splitting of finite element spaces. Numer. Math. 49(4), 379–412 (1986)
244. Zhang, X.: Multilevel Schwarz methods. Numer. Math. 63, 521–539 (1992)
245. Zienkiewicz, O.C., Kelly, D.W., Bettess, P.: Marriage à la mode—the best of both worlds (finite elements and boundary integrals). In: Energy Methods in Finite Element Analysis, pp. 81–107. Wiley, Chichester (1979)
Index
adaptive mesh refinement, see mesh refinement
additive Schwarz method, see Schwarz preconditioner
additive Schwarz operator, see Schwarz operator
additive Schwarz preconditioner, see Schwarz preconditioner
anisotropic mesh, see mesh
basis function, 208
  bubble basis function, 244
  edge basis function, 244, 253, 255, 258, 274
  face basis function, 259
  global basis function, 209
  interior basis function, 244, 253, 255
  nodal basis function, 207, 210, 244, 517
  vertex basis function, 253, 254, 258, 274
Bernstein basis polynomial, 374
Bernstein–Bézier form, 375
Bernstein–Bézier polynomial, 376
Bielak–MacCamy coupling, see coupling
boundary integral operator, 485, 486
BPX preconditioner, 118, 178
bubble basis function, see basis function
Céa's Lemma, 500
Calderón projector, 331, 485
CG method, 13, 15, 16, 178
  preconditioned, 18, 20, 26, 31, 175, 239
  three term, 57
child triangle, 309
Clément's interpolation, see interpolation operator
closed curve, 8, 143
closed surface, 8
coarse mesh, see mesh
coercivity of the decomposition, see subspace decomposition
colouring argument, 233
compatible couple, 426–428, 462, 463
condition number, 10, 289, 290, 298, 503
  additive Schwarz operator, see Schwarz operator
  multiplicative Schwarz operator, see Schwarz operator
conjugate gradient method, see CG method
convergence rate, 63, 65, 175
coupling
  FEM-BEM coupling, 13, 331, 350, 353
  non-symmetric coupling, 338, 359, 363
    Bielak–MacCamy coupling, 331, 332, 338, 340, 359, 363
    Johnson–Nédélec coupling, 331, 332, 338, 359, 363
  symmetric coupling, 333, 341
crack problem, 7, 8, 15
density, 435
diagonal scaling, 277, 279, 280
Dirichlet crack problem, see crack problem
Dirichlet screen problem, see screen problem
Dirichlet-to-Neumann operator, 348, 361, 488, 495
discrete harmonic extension, 542
discrete harmonic function, 258–260, 542
discrete Laplacian, 336, 345
discretised operator, 497, 498
domain decomposition method, 13, 27, 379
  non-overlapping, 73, 74, 91, 92, 100, 101, 108, 128, 136, 147, 149, 176, 209, 234, 239, 353
  overlapping, 73, 83, 92, 104, 113, 130, 148, 149, 177, 226, 234, 404
  two-level, 73, 99, 137, 147, 176, 205
double-layer potential, 5, 333, 485, 486
duality, 430, 435
duality theorem, 427
edge basis function, see basis function
eigenvalue
  block matrix, 512
  equivalent matrices, 511
  extremal, 35, 37, 290, 343, 349, 503
  maximum, 37, 38, 40, 346
    additive Schwarz operator, 47
    multiplicative Schwarz operator, 42, 47
  minimum, 37, 38, 343
    additive Schwarz operator, 38, 47
    multiplicative Schwarz operator, 43, 47
equivalence of norms, see Sobolev space
error propagation operator, 29
extension operator, 244, 245, 255, 438
face basis function, see basis function
FEM-BEM coupling, see coupling
fine mesh, see mesh
Fourier transform, 429
Fréchet derivative, 461
Friedrich inequality, 352
fully-discrete method, 143
function of zero mean, 539
fundamental solution, 5
Gårding's inequality, 154, 155, 158, 164, 165, 485
Galerkin method, see h-version, p-version, hp-version
Gauss–Legendre quadrature rule, 145
Gauss–Lobatto quadrature, 562
Gauss–Lobatto–Legendre point, 206
Gauss–Seidel method, 175, 354
generalised antiderivative operator, 482
generalised minimum residual method, see GMRES method
generation of element, 315
geometric mesh, see mesh
global basis function, see basis function
global multilevel diagonal preconditioner, see GMD preconditioner
GMD preconditioner, 325
GMRES method, 13, 16, 17, 34, 155, 178, 363
  block matrices, 52, 337, 354
  preconditioned, 22–26, 48, 159, 167, 175, 358
graded mesh, see mesh
h-version, 73, 163, 169, 178, 226, 277, 355
Haar basis function, 147, 177, 234, 299
Hahn–Banach Theorem, 376
Hankel function, 5, 153
Hardy's inequality, 452, 568
Helmholtz equation, 14, 153
hierarchical basis function for the p-version, 504, 520
hierarchical structure, 308, 312
HMCR method, 13, 58, 61, 337, 342, 363
homogeneous extension, 371
homogeneous harmonic polynomial, 369
homogeneous polynomial, 375, 382
hp-version, 127, 199, 205, 234, 243, 267
hybrid modified conjugate residual method, see HMCR method
hypersingular integral equation, 6, 14, 15, 73, 74, 83, 91–93, 99, 100, 104, 108, 113, 117, 118, 125, 127, 162, 208, 209, 239, 267, 279, 281, 289, 290, 298, 320, 369, 380, 381, 508
hypersingular integral operator, 127, 331, 338, 372, 485, 496
indefinite problem, 153, 154, 173
interface problem, 332
interior basis function, see basis function
intermediate space, 426
interpolation norm, 429, 436
  weighted, 446
interpolation operator, 210, 211, 223, 256, 257, 260, 381, 545
  Clément's interpolation, 221, 558
  quasi-interpolation, 375, 376
  Scott–Zhang interpolation, 312, 313, 561
interpolation property, 428, 437
interpolation space, 426, 436, 442, 445
inverse inequality, 507, 508, 526
iteration matrix, 25
iterative method, 15
Jacobi method, 175
Johnson–Nédélec coupling, see coupling
K-functional, 427, 436, 437, 441, 462
Krylov subspace method, 13, 15, 16, 56
L2-projection, 544
Lagrange polynomial, 206
Lagrangian multiplier, 460
Lamé constants, 7
Lamé equation, 240
Lamé operator, 7
Lanczos algorithm, 178, 239
Laplace equation, 14, 150, 372, 485, 492
Laplace–Beltrami equation, 369
Laplace–Beltrami operator, 369, 372, 378, 493
Lax–Milgram Theorem, 377, 499
Legendre polynomial, 100, 139, 206, 234, 400, 504
level function, 315
linear iterative method, 13, 24, 34, 63, 176
Lipschitz domain, 333, 433, 440, 451
Lipschitz hypograph, 433, 439, 440, 451
LMD preconditioner, 320
local mesh refinement, see mesh refinement
local multilevel diagonal preconditioner, see LMD preconditioner
logarithmic capacity, 6, 144, 332, 352
marked element, 310
maximum eigenvalue, see eigenvalue
MCR-ODIR method, 59
MCR-OMIN method, 60
mesh
  adaptive mesh refinement, 320
  anisotropic, 290
  coarse, 74, 128, 147, 205, 547
  fine, 74, 128, 147, 205, 547
  geometric, 136, 139, 140, 199
  graded, 10
  locally quasi-uniform, 205
  quasi-uniform, 127, 139, 140, 199, 353, 374, 378
  shape-regular, 280, 281, 308, 374, 378
  two-level, 74, 128, 140, 147, 205, 378
mesh norm, 401
mesh refinement, 277, 280, 281, 307, 308
  adaptive mesh refinement, 10, 277, 309, 331
  anisotropic mesh refinement, 277, 290, 298
  local mesh refinement, 277, 307, 312
  uniform mesh refinement, 315
minimal determining set, 376
minimax problem, 461
minimum eigenvalue, see eigenvalue
minimum residual method, see MINRES method
MINRES method, 337
multigrid method, 176
multilevel method, 117, 118, 133, 136, 138, 148, 149, 176, 307, 320
  additive Schwarz, 121, 124, 135, 178
  condition number, 123, 125, 126, 135, 148
  diagonal scaling, 178
  multiplicative Schwarz, 125, 126
multiplicative Schwarz method, see Schwarz preconditioner
multiplicative Schwarz operator, see Schwarz operator
native space, 403
natural embedding operator, 497
Neumann crack problem, see crack problem
Neumann screen problem, see screen problem
Neumann-to-Dirichlet operator, 488, 489, 495
newest vertex bisection algorithm, see NVB algorithm
Nitsche's trick, 165
nodal basis function, see basis function
node
  interior node, 207
  internal mode, 208
  nodal mode, 207
  side mode, 208
  side node, 207
non-overlapping method, see domain decomposition method
non-symmetric coupling, see coupling
normal trace operator, 333
NVB algorithm, 307, 309, 310
order of pseudo-differential operator, see pseudo-differential operator
ORTHODIR method, 13, 58
overlapping method, see domain decomposition method
p-version, 99, 108, 167, 172, 185, 358
parent triangle, 309
partition of unity, 105, 231, 232, 383
PCG method, see CG method
polygonal boundary, 333
polyhedral boundary, 333
polynomial of low energy, 541
positive-definite kernel, 400
preconditioned conjugate gradient method, see CG method
preconditioned GMRES, see GMRES method
projection, 27, 228
projection-like operator, 27
prolongation matrix, 176
prolongation operator, 27
pseudo-differential equation, 369, 373, 399
pseudo-differential operator, 369, 372, 377, 485
  elliptic, 399
  order of, 372
  spherical symbol, 372
  strongly elliptic, 372, 377
quadrilateral domain, 232
quasi-interpolation operator, see interpolation operator
quasi-uniform mesh, see mesh
quasi-uniform triangulation, see mesh
radial basis function, see RBF
radial projection, 379
radiation condition, 5
Rayleigh quotient, 37
RBF, 399–401
rectangular element, 205
reference edge, 309
reiteration theorem, 428, 442
representation formula, 5, 7
reproducing kernel, 403
restriction matrix, 176
restriction operator, 27, 176
Richardson method, 175
scaling property, 446
Schmidt's inequality, 213
Schwarz method, see Schwarz preconditioner
Schwarz operator, 13, 26, 28, 156, 354
  additive, 13, 28, 29, 155, 176, 355
    condition number, 47, 130, 133, 138–140, 147, 148, 226, 234, 239, 271, 272, 325, 327, 395, 412
    maximum eigenvalue, 40, 47
    minimum eigenvalue, 38, 47
  condition number, 37
  eigenvalue, 37
  matrix form, 29
  maximum eigenvalue, 37, 73
  minimum eigenvalue, 37
  multilevel, 321
  multiplicative, 13, 29, 114
    closed form, 33
    condition number, 48, 110, 112, 114
    maximum eigenvalue, 42, 47
    minimum eigenvalue, 43, 47
Schwarz preconditioner, 13
  additive, 29–31, 63, 65, 73–75, 99, 100, 117, 127, 153, 176, 239, 267, 280, 312, 353, 354, 358, 364, 369, 378, 404
    matrix form, 31
    operator form, 32, 33
    variational form, 31
  multiplicative, 63, 65, 73, 92, 93, 99, 108, 176
    matrix form, 33
    non-symmetric, 32, 33, 66
    operator form, 33
    symmetric, 32, 34, 35, 91, 92
Scott–Zhang interpolation, see interpolation operator
screen problem, 4, 8, 15
separation radius, 401, 414
shape function, 206
shape-regular mesh, see mesh
shape-regular triangulation, see mesh
single-layer potential, 5, 333, 352, 486
singular value, 347, 348
Slobodetski norm, 429, 430, 435, 437, 445, 446
Sobolev space, 243, 369, 429
  equivalence of norms, 439
  in R^d, 429
  Lipschitz domain, 432, 433
spectral radius, 41, 91, 93, 96
sphere, 369
spherical barycentric coordinates, 374
spherical cap, 374
spherical harmonic, 369, 400, 495
spherical radial basis function, see RBF
spherical spline, 369, 373, 374, 395
spherical symbol, see pseudo-differential operator
spherical triangle, 374, 376, 382, 383
spherical triangulation, 373–375, 377, 378, 395
stability of the decomposition, see subspace decomposition
Steklov–Poincaré operator, see Dirichlet-to-Neumann operator
Strengthened Cauchy–Schwarz inequality, 41, 67, 91, 93, 96, 108, 110, 113, 114, 120, 124, 323, 325, 326
strongly elliptic pseudo-differential operator, see pseudo-differential operator
subspace decomposition, 27, 75, 81, 83, 89, 101, 103, 104, 108, 154, 163, 167, 169, 172, 209
  coercivity of the decomposition, 40, 73, 77, 82, 83, 101, 104, 105, 129, 131, 134, 216, 233, 235, 268, 271, 282, 294, 395, 406
  local stability, 41
  stability of the decomposition, 38, 65, 75, 82, 88, 101, 104, 106, 118, 130, 131, 134, 221, 225, 227, 236, 270, 272, 286, 296, 321, 325, 392, 393, 408
surface gradient, 493
Sylvester's Law of Inertia, 337, 346
symmetric coupling, see coupling
three-block preconditioner, 342, 349, 353, 364
three-level subspace decomposition, 209
trace operator, 333
traction, 7, 8
triangular element, 243
two-block preconditioner, 351, 353, 364
two-level mesh, see mesh
two-level method, see domain decomposition method
unisolvent, 373
unit sphere, 369
vertex basis function, see basis function
weakly-singular integral equation, 6, 8, 14, 15, 73, 77, 89, 92, 95, 99, 103, 108, 110, 114, 123, 126, 139, 143, 169, 208, 234, 279, 281, 290, 299, 328, 354, 508
weakly-singular integral operator, 372, 485, 496
weighted norm, 429, 445
wire basket, 211, 256, 258–260, 264, 267, 272
  h-wire basket, 206, 210, 211, 220
  p-wire basket, 209, 210, 220
wire basket interpolation operator, 259, 260, 267
wire basket space, 256, 257, 271, 272, 274
Index of Notation: Function Spaces
[A_0, A_1]_{θ,q}, 427
[A_0, A_1]_θ, 427
A = (A_0, A_1), 426
C^{0,α}(Ω), 459
D(F), 429
D(Γ), 480
D(Ω), 429
D(R^d), 429
D*(Ω), 429
H^s(R^d), 429
H^s_0(Γ), 480
H^s_0(Ω), 433
H^{-s}(R^d), 430
H^s(Γ), 480
H^s(Ω), 433
H^{-s}_*(Ω_1), 463
H^s_*(Ω_1), 459
H^σ(Ω_1), 464
H̃^s(Γ), 480
H̃^s(Ω), 433
L_p(Ω), 429
S*(R^d), 429
W^{1,∞}(D), 532
W^s(Ω), 435
W^s(R^d), 430
Index of Notation: Norms
‖·‖_{[A_0,A_1]_{θ,q}}, 427
‖·‖_{H^s_F(Ω)}, 434
‖·‖_{H^s_F(R^d)}, 430
‖·‖_{H^s_I(R^d)}, 430
‖·‖_{H^{-s}_S(Ω)}, 436
‖·‖_{H^s_S(Ω)}, 436
‖·‖_{H^s_S(R^d)}, 430
‖·‖_{H^s_0(Ω)}, 434
‖·‖_{H^{-s}(R^d)}, 430
‖·‖_{H^s(R^d)}, 430
‖·‖_{H^s_F(Ω)}, 434
‖·‖_{H^{-s}_S(Ω)}, 436
‖·‖_{H^s_S(Ω)}, 436
‖·‖_{H^s_w(Ω)}, 445
‖·‖_{H^s_w(Ω)}, 445
‖·‖_{W^s(R^d)}, 431
|·|_{H^μ(R^d)}, 431
|·|_{H^s(Ω)}, 435
∗‖·‖_{H^s_I(Ω)}, 446
∗‖·‖_{H^{-s}_I(Ω)}, 446
∗‖·‖_{H^s_w(Ω)}, 445
Index of Notation: Operators and Other Symbols
a_V(·,·), 492
a_W(·,·), 492
C_ad^{-1}, 30
C_mu^{-1}, 33
∂^α, 431
E_mu, 29
E^s_mu, 29
K(t, u, A), 427
κ(P_pre), 38
λ_max(P_pre), 38
λ_min(P_pre), 38
N_{θ,q}, 427, 430
P_j, 28
P_ad, 29
P_mu, 29
P^s_mu, 29
P_ad, 30
P_mu, 34
P^s_mu, 35
P_j, 28
S, 486
D, 486
V, 486
W, 486
K, 486
K′, 486