Wave Phenomena: Mathematical Analysis and Numerical Approximation 3031057929, 9783031057922

This book presents the notes from the seminar on wave phenomena given in 2019 at the Mathematical Research Center in Obe

229 53 5MB

English Pages 367 [368] Year 2023

Report DMCA / Copyright

DOWNLOAD PDF FILE

Table of contents :
Preface
Acknowledgements
Contents
About the Authors
Part I Space-Time Approximations for Linear Acoustic, Elastic, and Electro-Magnetic Wave Equations
Willy Dörfler and Christian Wieners
1 Modeling of Acoustic, Elastic, and Electro-Magnetic Waves
1.1 Modeling in Continuum Mechanics
1.2 The Wave Equation in 1d
1.3 Harmonic, Anharmonic and Viscous Waves
1.4 Elastic Waves
1.5 Visco-Elastic Waves
1.6 Acoustic Waves in Solids
1.7 Electro-Magnetic Waves
2 Space-Time Solutions for Linear Hyperbolic Systems
2.1 Linear Hyperbolic First-Order Systems
2.2 Solution Spaces
2.3 Solution Concepts
2.4 Existence and Uniqueness of Space-Time Solutions
2.5 Mapping Properties of the Space-Time Operator
2.6 Inf-Sup Stability
2.7 Applications to Acoustics and Visco-Elasticity
3 Discontinuous Galerkin Methods for Linear Hyperbolic Systems
3.1 Traveling Wave Solutions in Homogeneous Media
3.2 Reflection of Traveling Acoustic Waves at Boundaries
3.3 Transmission and Reflection of Traveling Waves at Interfaces
3.4 The Riemann Problem for Acoustic Waves
3.5 The Riemann Problem for Linear Conservation Laws
3.6 The DG Discretization with Full Upwind
3.7 The Full Upwind Discretization for the Wave Equation
4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic Systems
4.1 Decomposition of the Space-Time Cylinder
4.2 The Petrov–Galerkin Setting
4.3 Inf-Sup Stability
4.4 Convergence for Strong Solutions
4.5 Convergence for Weak Solutions
4.6 Goal-Oriented Adaptivity
4.7 Reliable Error Estimation for Weak Solutions
Part II Local Wellposedness and Long-Time Behavior of Quasilinear Maxwell Equations
Roland Schnaubelt
5 Introduction and Local Wellposedness on R3
5.1 The Maxwell System
5.2 The Linear Problem on R3 in L2
5.3 The Linear Problem on R3 in H3
5.4 The Quasilinear Problem on R3
5.5 Energy and Blow-Up
6 Local Wellposedness on a Domain
6.1 The Maxwell System on a Domain
6.2 The Linear Problem on R3+ in L2
6.3 The Linear Problem on R3+ in H3
6.4 The Quasilinear Problem on R+3
6.5 The Main Wellposedness Result
7 Exponential Decay Caused by Conductivity
7.1 Introduction and Theorem on Decay
7.2 Energy and Observability-Type Inequalities
7.3 Time Regularity Controls Space Regularity
Part III Error Analysis of Second-Order Time Integration Methods for Discontinuous Galerkin Discretizations of Friedrichs' Systems
Marlis Hochbruck and Jonas Köhler
8 Introduction
Acknowledgment
8.1 Notation
9 Linear Wave-Type Equations
9.1 A Short Course on Semigroup Theory
9.2 Analytical Setting and Friedrichs' Operators
9.3 Examples
9.3.1 Advection Equation
9.3.2 Acoustic Wave Equation
9.3.3 Maxwell Equations
10 Spatial Discretization
10.1 The Discrete Setting
10.2 Friedrichs' Operators in the Discrete Setting
10.3 Discrete Friedrichs' Operators
10.4 The Spatially Semidiscrete Problem
11 Full Discretization
11.1 Crank–Nicolson Scheme
11.2 Leapfrog Scheme
11.3 Peaceman–Rachford Scheme
11.4 Locally Implicit Scheme
11.5 Addendum
12 Error Analysis
12.1 Crank–Nicolson Scheme
12.1.1 Error Recursion
12.1.2 Bounds on the Defect
12.1.3 Error Bounds for the dG-Crank–Nicolson Scheme
12.2 Leapfrog Scheme
12.2.1 Error Recursion
12.2.2 Bounds on and Splitting of the Defect
12.2.3 Error Bounds for the dG-Leapfrog Scheme
12.3 Peaceman–Rachford Scheme
12.3.1 Error Recursion
12.3.2 Bounds on the Defect
12.3.3 Error Bounds for the dG-Peaceman–Rachford Scheme
12.4 Locally Implicit Scheme
12.4.1 Error Recursion
12.4.2 Bounds on and Splitting of the Defect
12.4.3 Error Bounds for the dG-Locally Implicit Scheme
12.5 Concluding Remarks
12.5.1 Less Regular Solutions
12.5.2 Approximations of Initial Values
12.5.3 Approximations at Half Time Steps
13 Appendix
13.1 Friedrichs' Operators Exhibiting a Two-Field Structure
13.2 Full Bounds for the Discrete Derivative Errors
14 List of Definitions
Part IV An Abstract Framework for Inverse Wave Problems with Applications
Andreas Rieder
15 What Is an Inverse and Ill-Posed Problem?
15.1 Electric Impedance Tomography: The Continuum Model
15.2 Seismic Tomography
16 Local Ill-Posedness
16.1 Examples for Local Ill-Posedness
16.1.1 Electric Impedance Tomography
16.1.2 Seismic Tomography
16.2 Linearization and Ill-Posedness
17 Regularization of Linear Ill-Posed Problems in Hilbert Spaces
18 Newton-Like Solvers for Non-linear Ill-Posed Problems
18.1 Decreasing Error and Weak Convergence
18.1.1 A Heuristic for Choosing the Tolerances
18.2 Convergence Without Noise
18.3 Regularization Property of REGINN
19 Inverse Problems Related to Abstract Evolution Equations
19.1 Motivation: Full Waveform Inversion in Seismic Imaging
19.1.1 Elastic Wave Equation
19.1.2 Visco-Elastic Wave Equation
19.1.3 The Inverse Problem of Seismic Imaging in the Visco-Elastic Regime
19.1.4 Visco-Elastic Wave Equation (Transformed)
19.2 Abstract Framework
19.2.1 Existence, Uniqueness, and Regularity
19.2.2 Parameter-to-Solution Map
20 Applications
20.1 Full Waveform Inversion in the Visco-Elastic Regime
20.1.1 Full Waveform Forward Operator
20.1.2 Differentiability and Adjoint
20.2 Maxwell's Equation: Inverse Electromagnetic Scattering
20.2.1 Inverse Electromagnetic Scattering
20.2.2 The Electromagnetic Forward Map
20.2.3 Differentiability and Adjoint
References
Recommend Papers

Wave Phenomena: Mathematical Analysis and Numerical Approximation
 3031057929, 9783031057922

  • 0 0 0
  • Like this paper and download? You can publish your own PDF file online for free in a few minutes! Sign Up
File loading please wait...
Citation preview

Oberwolfach Seminars 49

Wave Phenomena Mathematical Analysis and Numerical Approximation Willy Dörfler Marlis Hochbruck Jonas Köhler Andreas Rieder Roland Schnaubelt Christian Wieners

Oberwolfach Seminars Volume 49

The workshops organized by the Mathematisches Forschungsinstitut Oberwolfach are intended to introduce students and young mathematicians to current fields of research. By means of these well-organized seminars, also scientists from other fields will be introduced to new mathematical ideas. The publication of these workshops in the series Oberwolfach Seminars (formerly DMV seminar ) makes the material available to an even larger audience.

Willy Dörfler • Marlis Hochbruck • Jonas Köhler • Andreas Rieder • Roland Schnaubelt • Christian Wieners

Wave Phenomena Mathematical Analysis and Numerical Approximation

Willy Dörfler Institute for Applied and Numerical Mathematics Karlsruhe Institute of Technology Karlsruhe, Germany

Marlis Hochbruck Institute for Applied and Numerical Mathematics Karlsruhe Institute of Technology Karlsruhe, Germany

Jonas Köhler Institute for Applied and Numerical Mathematics Karlsruhe Institute of Technology Karlsruhe, Germany

Andreas Rieder Institute for Applied and Numerical Mathematics Karlsruhe Institute of Technology Karlsruhe, Germany

Roland Schnaubelt Institute for Analysis Karlsruhe Institute of Technology Karlsruhe, Germany

Christian Wieners Institute for Applied and Numerical Mathematics Karlsruhe Institute of Technology Karlsruhe, Germany

ISSN 1661-237X ISSN 2296-5041 (electronic) Oberwolfach Seminars ISBN 978-3-031-05792-2 ISBN 978-3-031-05793-9 (eBook) https://doi.org/10.1007/978-3-031-05793-9 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The research on wave-type problems is a fascinating and emerging field in mathematical research with many challenging applications in sciences and engineering. Profound investigations on waves require a strong interaction of several mathematical disciplines including functional analysis, partial differential equations, mathematical modeling, mathematical physics, numerical analysis, and scientific computing. The goal of these lecture notes is to present a comprehensive introduction to the research on wave phenomena. Starting with basic models for acoustic, elastic, and electromagnetic waves, we will consider the existence of solutions for linear and some nonlinear material laws, efficient discretizations and solution methods in space and time, and the application to inverse parameter identification problems. Karlsruhe, Germany January 2022

Willy Dörfler Marlis Hochbruck Jonas Köhler Andreas Rieder Roland Schnaubelt Christian Wieners

v

Acknowledgements

Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)— Project-ID 258734477—SFB 1173.

vii

Contents

Part I Space-Time Approximations for Linear Acoustic, Elastic, and Electro-Magnetic Wave Equations Willy Dörfler and Christian Wieners 1

Modeling of Acoustic, Elastic, and Electro-Magnetic Waves . .. . . . . . . . . . . . . . . 1.1 Modeling in Continuum Mechanics .. . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 1.2 The Wave Equation in 1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 1.3 Harmonic, Anharmonic and Viscous Waves . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 1.4 Elastic Waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 1.5 Visco-Elastic Waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 1.6 Acoustic Waves in Solids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 1.7 Electro-Magnetic Waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

3 3 4 7 9 11 12 14

2

Space-Time Solutions for Linear Hyperbolic Systems . . . . . . . . .. . . . . . . . . . . . . . . 2.1 Linear Hyperbolic First-Order Systems . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 2.2 Solution Spaces.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 2.3 Solution Concepts .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 2.4 Existence and Uniqueness of Space-Time Solutions . . . . . . .. . . . . . . . . . . . . . . 2.5 Mapping Properties of the Space-Time Operator .. . . . . . . . . .. . . . . . . . . . . . . . . 2.6 Inf-Sup Stability .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 2.7 Applications to Acoustics and Visco-Elasticity .. . . . . . . . . . . .. . . . . . . . . . . . . . .

19 19 22 24 27 30 32 32

3

Discontinuous Galerkin Methods for Linear Hyperbolic Systems . . . . . . . . . . . 3.1 Traveling Wave Solutions in Homogeneous Media . . . . . . . .. . . . . . . . . . . . . . . 3.2 Reflection of Traveling Acoustic Waves at Boundaries . . . .. . . . . . . . . . . . . . . 3.3 Transmission and Reflection of Traveling Waves at Interfaces . . . . . . . . . . . 3.4 The Riemann Problem for Acoustic Waves . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 3.5 The Riemann Problem for Linear Conservation Laws . . . . .. . . . . . . . . . . . . . . 3.6 The DG Discretization with Full Upwind .. . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 3.7 The Full Upwind Discretization for the Wave Equation.. . .. . . . . . . . . . . . . . .

35 35 36 37 39 40 41 43 ix

x

4

Contents

A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic Systems.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 4.1 Decomposition of the Space-Time Cylinder . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 4.2 The Petrov–Galerkin Setting.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 4.3 Inf-Sup Stability .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 4.4 Convergence for Strong Solutions .. . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 4.5 Convergence for Weak Solutions .. . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 4.6 Goal-Oriented Adaptivity .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 4.7 Reliable Error Estimation for Weak Solutions .. . . . . . . . . . . . .. . . . . . . . . . . . . . .

49 49 50 51 56 58 61 66

Part II Local Wellposedness and Long-Time Behavior of Quasilinear Maxwell Equations Roland Schnaubelt 5

Introduction and Local Wellposedness on R3 . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 5.1 The Maxwell System .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 5.2 The Linear Problem on R3 in L2 . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 5.3 The Linear Problem on R3 in H 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 5.4 The Quasilinear Problem on R3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 5.5 Energy and Blow-Up .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

73 73 76 84 89 99

6

Local Wellposedness on a Domain .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 6.1 The Maxwell System on a Domain .. . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 6.2 The Linear Problem on R3+ in L2 . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 6.3 The Linear Problem on R3+ in H 3 . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 6.4 The Quasilinear Problem on R3+ . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 6.5 The Main Wellposedness Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

107 107 111 117 125 127

7

Exponential Decay Caused by Conductivity.. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 7.1 Introduction and Theorem on Decay . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 7.2 Energy and Observability-Type Inequalities . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 7.3 Time Regularity Controls Space Regularity . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

133 133 137 143

Part III

8

Error Analysis of Second-Order Time Integration Methods for Discontinuous Galerkin Discretizations of Friedrichs’ Systems Marlis Hochbruck and Jonas Köhler

Introduction .. . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 163 8.1 Notation .. . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 165

Contents

9

xi

Linear Wave-Type Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 9.1 A Short Course on Semigroup Theory . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 9.2 Analytical Setting and Friedrichs’ Operators . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 9.3 Examples . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

167 169 176 182

10 Spatial Discretization.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 10.1 The Discrete Setting.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 10.2 Friedrichs’ Operators in the Discrete Setting . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 10.3 Discrete Friedrichs’ Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 10.4 The Spatially Semidiscrete Problem . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

187 187 192 194 197

11 Full Discretization .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 11.1 Crank–Nicolson Scheme .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 11.2 Leapfrog Scheme .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 11.3 Peaceman–Rachford Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 11.4 Locally Implicit Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 11.5 Addendum . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

201 202 208 221 229 239

12 Error Analysis . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 12.1 Crank–Nicolson Scheme .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 12.2 Leapfrog Scheme .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 12.3 Peaceman–Rachford Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 12.4 Locally Implicit Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 12.5 Concluding Remarks .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

243 245 251 259 263 266

13 Appendix . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 269 13.1 Friedrichs’ Operators Exhibiting a Two-Field Structure . . .. . . . . . . . . . . . . . . 269 13.2 Full Bounds for the Discrete Derivative Errors . . . . . . . . . . . . .. . . . . . . . . . . . . . . 271 14 List of Definitions . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 275 Part IV

An Abstract Framework for Inverse Wave Problems with Applications Andreas Rieder

15 What Is an Inverse and Ill-Posed Problem? . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 281 15.1 Electric Impedance Tomography: The Continuum Model .. . . . . . . . . . . . . . . 281 15.2 Seismic Tomography .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 282 16 Local Ill-Posedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 285 16.1 Examples for Local Ill-Posedness . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 288 16.2 Linearization and Ill-Posedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 292

xii

Contents

17 Regularization of Linear Ill-Posed Problems in Hilbert Spaces .. . . . . . . . . . . . . 299 18 Newton-Like Solvers for Non-linear Ill-Posed Problems . . . . . . .. . . . . . . . . . . . . . . 18.1 Decreasing Error and Weak Convergence .. . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 18.2 Convergence Without Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 18.3 Regularization Property of REGINN . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

303 306 312 314

19 Inverse Problems Related to Abstract Evolution Equations . . .. . . . . . . . . . . . . . . 321 19.1 Motivation: Full Waveform Inversion in Seismic Imaging .. . . . . . . . . . . . . . . 321 19.2 Abstract Framework .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 325 20 Applications . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 339 20.1 Full Waveform Inversion in the Visco-Elastic Regime . . . . .. . . . . . . . . . . . . . . 339 20.2 Maxwell’s Equation: Inverse Electromagnetic Scattering ... . . . . . . . . . . . . . . 352 References .. . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 359

About the Authors

Willy Dörfler Institute for Applied and Numerical Mathematics, Karlsruhe Institute of Technology, Karlsruhe, Germany

Marlis Hochbruck Institute for Applied and Numerical Mathematics, Karlsruhe Institute of Technology, Karlsruhe, Germany

xiii

xiv

About the Authors

Jonas Köhler Institute for Applied and Numerical Mathematics, Karlsruhe Institute of Technology, Karlsruhe, Germany

Andreas Rieder Institute for Applied and Numerical Mathematics, Karlsruhe Institute of Technology, Karlsruhe, Germany

Roland Schnaubelt Institute for Analysis, Karlsruhe Institute of Technology, Karlsruhe, Germany

Christian Wieners Institute for Applied and Numerical Mathematics, Karlsruhe Institute of Technology, Karlsruhe, Germany

Part I Space-Time Approximations for Linear Acoustic, Elastic, and Electro-Magnetic Wave Equations Willy Dörfler and Christian Wieners

We introduce and analyze variational approximation schemes in space and time for linear wave equations. The discretization is based on a discontinuous Galerkin approximation in space and a Petrov–Galerkin method in time. This is applied to models for acoustic and elastic waves in solids and for electro-magnetic waves. For the corresponding linear hyperbolic first-order systems we show the existence of strong and weak solutions. Then, based on the exact solution of Riemann problems the upwind flux is constructed. Discrete inf-sup stability is established for the nonconforming space-time discretization, which builds the basis for a priori and a posteriori error estimates.

1

Modeling of Acoustic, Elastic, and Electro-Magnetic Waves

Mathematical modeling of physical processes yields a system of partial differential equations that describes the behavior of a system physically correct and allows for analytical and numerical predictions of the system behavior. Here we start by shortly summarizing modeling principles which are illustrated for simple linear models in one space dimension. Then this is specified for different types of wave equations.

1.1

Modeling in Continuum Mechanics

Describing a model in continuum mechanics is a complex process combining physical principles, parameters and data. For a mathematical framework, we introduce the following terminology: •





Geometric configuration We select a domain in space  ⊂ Rd (d ∈ {1, 2, 3}) and a time interval I ⊂ R, and for the specification of boundary conditions we select boundary parts k ⊂ ∂, k = 1, . . . , m, where m is the number of components of the variables which describe the current state of the physical system. Constituents Which physical quantities determine the model? Which quantities directly depend on these primary quantities? For the mathematical formulation it is required to select a set of primary variables. Parameters Which material data are required for the model? Which properties do these parameters have in order to be physically meaningful?

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 W. Dörfler et al., Wave Phenomena, Oberwolfach Seminars 49, https://doi.org/10.1007/978-3-031-05793-9_1

3

4

1 Modeling of Acoustic, Elastic, and Electro-Magnetic Waves



Balance relations This collects relations between the physical quantities (and external sources) which are derived from basic energetic or kinematic principles. These relations are independent of specific materials and applications. Material laws This collects relations between the physical quantities that have to be determined by measurements and depend on the specific material and application. External forces, boundary and initial data The system behavior is controlled by the initial state at t = 0, by external forces in the interior of the space-time domain I × , and by conditions on the boundary I × ∂.





1.2

The Wave Equation in 1d

This formalism is now specified for the most simple wave model in 1d with constant coefficients. Therefore, we assume that all quantities are sufficiently smooth, so that all derivatives and integrals are well-defined. Configuration We consider an interval  = (0, X) ⊂ R in space and a time interval I = (0, T ) ⊂ R for given X, T > 0. Constituents Here, we consider the simplified situation that material points in R2 move up and down vertically. The state of this physical system is then determined by the vertical displacement u : [0, T ] ×  −→ R   describing the position of the material point x, u(t, x) ∈ R2 at time t, and the tension σ : [0, T ] ×  −→ R describing the forces between the points x ∈ . In this simplified 1d setting with vertical displacements the tension corresponds to the shear stress in higher dimensions. Depending on the primal variable u, we define the velocity v = ∂t u, the acceleration a = ∂t v = ∂t2 u, the strain ε = ∂x u, and the strain rate ∂t ε = ∂x v = ∂x ∂t u. Material Parameters This simple model only depends on the mass density ρ > 0 and √ the stiffness κ > 0; together, this defines c = κ/ρ. We will see that c is the wave speed which characterizes this model. Balance of Momentum Depending on the velocity v and the mass density ρ we define the momentum ρv. Newton’s law states that the temporal change of the momentum in time

1.2 The Wave Equation in 1d

5

equals the sum of all driving forces. Here, without any external forces, this balance relation reads as follows: for all 0 < x1 < x2 < X and 0 < t1 < t2 < T we have 

x2

  ρ(x) v(t2 , x) − v(t1 , x) dx =



x1

t2

  σ (t, x2 ) − σ (t, x1 ) dt .

t1

For smooth functions this yields 

x2 x1



t2

 ρ(x)∂t v(t, x) dt dx =

t1

t2



x2

∂x σ (t, x) dx dt , t1

x1

and since this holds for all 0 < x1 < x2 < X and 0 < t1 < t2 < T , this holds point-wise, i.e., ρ(x)∂t v(t, x) = ∂x σ (t, x) ,

(t, x) ∈ (0, T ) × (0, X) .

(1.1)

Material Law One observes that the tension σ (t, x) only depends on the strain ε(t, x) = ∂x u(t, x). This is formulated as a material law: a material is by definition elastic, if a function exists such that σ = (∂x u), and it is linear elastic, if σ = κε with stiffness κ > 0. In a homogeneous material, the stiffness κ is independent of x ∈ (0, X). Boundary and Initial Data The actual physical state at time t of the system depends on its state at the beginning t = 0 and on constraints at the boundary. Here, assume that at t = 0 the system is given by the initial displacement u(0, x) = u0 (x) and velocity v(0, x) = v0 (x) for x ∈ , and we use homogeneous boundary conditions u(t, 0) = u(t, X) = 0 for t ∈ [0, T ] corresponding to a string that is fixed at the endpoints. Inserting v = ∂t u and ε = ∂x u in (1.1) we obtain the second-order formulation of the wave equation 2∂t2 u(t, x) − c2 ∂x2 u(t, x) = 0

for (t, x) ∈ (0, T ) × (0, X) ,

(1.2a)

u(0, x) = u0 (x)

for x ∈ (0, X) at t = 0 ,

(1.2b)

∂t u(0, x) = v0 (x)

for x ∈ (0, X) at t = 0 ,

(1.2c)

for x ∈ {0, X} and t ∈ (0, T ) .

(1.2d)

u(t, x) = 0

Note that the same equation can be derived for a 1d wave with horizontal displacement, corresponding to an actual position of the material point x + u(x) ∈ R.

6

1 Modeling of Acoustic, Elastic, and Electro-Magnetic Waves

The Solution of the Linear Wave Equation in 1d in Homogeneous Media The equation (1.2) with constant wave speed c > 0 can be solved explicitly. For given initial values (1.2b) and (1.2c) the solution is given within the cone   C = (t, x) ∈ (0, T ) × (0, X) : 0 < x − ct < x + ct < X by the d’Alembert formula u(t, x) =

   1 1 x+ct v0 (ξ ) dξ , u0 (x − ct) + u0 (x + ct) + 2 c x−ct

(t, x) ∈ C .

Now we consider the solution in the bounded interval  = (0, X) of length X = π with homogeneous Dirichlet boundary conditions (1.2d). The solution can be expanded into eigenmodes of the operator −∂x2 u in H10 () ∩ H2 (), so that we obtain u(t, x) =



αn cos(cnt) + βn sin(cnt) sin(nx) , n=1

where the coefficients are determined by the initial values (1.2b) and (1.2c). For the special example with initial values u0 (x) = 1, v0 (x) = 0 for x ∈ (0, π), and wave speed c = 1, we obtain the explicit Fourier representation u(t, x) =

∞     4 1 cos (2n + 1)t sin (2n + 1)x π 2n + 1

(1.3)

n=0

=

1

u0 (x + t) + u0 (x − t) , 2

where the initial function u0 is extended to the periodic function ⎧ ⎪ ⎨ 1 u0 (x) = 0 ⎪ ⎩ −1

x ∈ (0, π) + 2πZ , x ∈ πZ , x ∈ (−π, 0) + 2πZ ,

cf. Fig. 1.1. We observe that this solution solves the wave equation only in a weak sense since it is discontinuous along linear characteristics x ± ct = const.

1.3 Harmonic, Anharmonic and Viscous Waves

7

 Fig. 1.1 Weak solution u ∈ L2 (0, 8) × (0, π)) with initial values for u(0, ·) = 1, ∂t u(0, ·) = 0, and homogeneous Dirichlet boundary values u(·, 0) = u(·, π) = 0

1.3

Harmonic, Anharmonic and Viscous Waves

Special solutions of the linear wave equation (1.2) can be derived by the ansatz u(t, x) = exp(−iωt)a(x) with a fixed frequency ω ∈ R. This yields in case of constant wave speed c =

√ κ/ρ

∂t2 u(t, x) − c2 ∂x2 u(t, x) = − ω2 a(x) + c2 ∂x2 a(x) exp(−iωt) . The equation ω2 a(x) + c2 ∂x2 a(x) = 0 is solved by a(x) = a0 exp(ikx) with k = ω/c and a0 ∈ R, cf. Table 1.1. Interaction with Material: Anharmonic Waves The harmonic wave with constant amplitude is an idealistic model. This contradicts to observations: a wave traveling through material interacts with the particles in some sense, so that the amplitude is decreasing in time. A simple ansatz are waves of the form   u(t, x) = a(t) exp i(kx − ωt) ,

a(t) = a0 exp(−τ t)

(1.4)

depending on wave number k, angular frequency ω, and relaxation time τ > 0. Then, we observe for (1.4) in case of constant ρ and κ   (ρ∂t2 − κ∂x2 )u(x, t) = ρ(τ + iω)2 + κk 2 u(x, t) ,

∂t u(x, t) = −(τ + iω)u(x, t)

which yields with the angular frequency ω=



k 2 κ/ρ + τ 2 ∈ R

(1.5)

  Table 1.1 Characteristic quantities for harmonic waves u(t, x) = a0 exp i(kx − ωt) Wave number Wave speed

k c = ω/k

Angular frequency Wave length

ω λ = c/ν

Frequency Amplitude

ν = ω/2π a0

8

1 Modeling of Acoustic, Elastic, and Electro-Magnetic Waves

a solution of the wave equation with attenuation ρ∂t2 u(t, x) − κ∂x2 u(t, x) + 2τρ ∂t u(t, x) = 0 .

(1.6)

In general, one observes that the wave speed depends on the frequency of the wave, i.e., the wave is dispersive. For the case of constant parameters this is characterized by the dispersion relation ω = ω(k). In this example, we find the dispersion relation (1.5) for the wave equation with attenuation (1.6). For the general description of real media this approach is too simple and applies only for the wave propagation within a limited frequency range, in particular since the relaxation time also depends on the frequency. For viscous waves suitable material laws are constructed where the parameters can be determined from measurements of the dispersion relation at sample frequencies which are relevant for the application. This is now demonstrated for a specific example. A Model for Viscous Waves One approach to characterize waves with dispersion is to use a linear superposition of the constitutive law for a harmonic wave with several relations for anharmonic waves. In this ansatz the material law for the stress is based on a decomposition σ = σ0 + σ1 + · · · + σr with Hooke’s law for σ0 , i.e., σ0 = κ0 ε ,

(1.7a)

and several Maxwell bodies for σ1 , . . . , σr described by the relations ∂t σj + τj−1 σj = κj ∂t ε ,

j = 1, . . . , r .

(1.7b)

This model depends on the stiffness of the components κ0 , . . . , κr and relaxation times τ1 , . . . , τr . Solving the linear ODE (1.7b) with initial value σj (0) = 0 and inserting ∂t ε = ∂x v yields 

t

σj (t) = 0

1 κj exp − (t − s) ∂x v(s) ds , τj

and together with (1.7a) we obtain the retarded material law σ (t) = κ0 ∂x u(t) +

 t r 0 j =1

1 κj exp − (t − s) ∂x v(s) ds . τj

1.4 Elastic Waves

9

This can be summarized to ∂t σ (t) = κ0 ∂x v(t) +

r j =1



t

= κ∂x v(t) +

 t r

κj 1 κj ∂x v(t) − exp − (t − s) ∂x v(s) ds τj τj 0 j =1

κ(t ˙ − s)∂x v(s) ds

0

with the total stiffness κ = κ0 + κ1 + · · · + κr and the retardation kernel κ(s) ˙ =−

s . exp − τj τj

r κj j =1

Together with the balance relation ρ∂t v = ∂x σ this is a model for viscous waves.

1.4

Elastic Waves

In the next step we derive equations for waves in solids. We consider heterogeneous media where the material parameters depend on the position, and we assume that the wave energy is sufficiently small, so that the material law can be approximated by a linear relation. Configuration We consider an elastic body in the spatial domain  ⊂ R3 and we fix a time interval I = (0, T ). The boundary ∂ = V ∪ S is decomposed into parts corresponding to dynamic boundary conditions for the velocity and static boundary conditions for the stress. Constituents The current state of the body is described by the deformation or by the displacement ϕ = id + u : [0, T ] ×  −→ R3 ,

u : [0, T ] ×  −→ R3 ,

i.e., ϕ(t, x) = x + u(t, x) is the actual position of the point x ∈  at time t. Depending on the displacement, we define the velocity v = ∂t u, the strain ε(u) = sym(Du), the acceleration a = ∂t v = ∂t2 u, and the strain rate ε(v) = sym(Dv) = ∂t ε(u). The internal forces in the material are described by the stress tensor σ : [0, T ] ×  −→ R3×3 sym .

10

1 Modeling of Acoustic, Elastic, and Electro-Magnetic Waves

Material Parameters Measurements are required to determine the distribution of the mass density ρ :  −→ (0, ∞) and to determine the material stiffness in all directions which are collected in Hooke’s tensor 3×3 C :  −→ L(R3×3 sym , Rsym ) .

Balance of Momentum Newton’s law postulates equality between the temporal change of the momentum ρv in any time interval (t1 , t2 ) ∈ (0, T ) within any subvolume K ⊂  and the driving forces on the boundary ∂K described by the stress in direction of the outer normal vector n on ∂K. This results in the balance relation (without external loads) 

  ρ(x) v(t2 , x) − v(t1 , x) dx =

K



t2

 σ (t, x)n(x) da dt .

t1

(1.8)

∂K

For smooth functions we obtain by the Gauß theorem   K

t2

 ρ(x)∂t v(t2 , x) dt dx =

t1

t2

 div σ (t, x) dx dt ,

t1

K

and since this holds for all time intervals and subvolumes, we get the pointwise relation ρ∂t v = div σ

in (0, T ) ×  .

(1.9)

Remark 1.1 In the balance relation (1.8) only the normal stress σ (t, x)n for all directions n ∈ S 2 on the boundary of a subvolume K ⊂  is included. This describes the force between material points left and right from x ∈ ∂K with respect to the direction n. The existence of such a vector for all directions and all points is postulated by the Cauchy axiom, and by the Cauchy theorem a tensor representing this force exists; moreover, the symmetry of this tensor is a consequence of the balance of angular momentum. Material Law Since the forces between the material points x1 and x2 only depend on the difference of the actual positions u(t, x2 ) − u(t, x1 ), the stress σ (t, x) only depends on the deformation gradient Dϕ. By definition, a material is elastic, if a function exists such that σ = (Dϕ). Then, ∂t σ = D (Dϕ)[Dv]. In the limit of small strains the material response can be approximated by a linear model, i.e., we assume Dϕ ≈ I, and we use the linear relation ∂t σ = D (I)[Dv]. In addition, we assume that the stress response is objective, i.e., it is

1.5 Visco-Elastic Waves

11

independent of the observer’s position; then it can be shown that it only depends on the symmetric strain ε(u) = sym(Du). Together, we obtain Hooke’s law ∂t σ = Cε(v) .

(1.10)

Boundary and Initial Data We start with u(0) = u0 and v(0) = v0 in  at t = 0, and for t ∈ (0, T ) we use the boundary conditions for the displacement u(t) = uV (t) or the velocity v(t) = vV (t) on the dynamic boundary V , and for the stress σ (t)n = gS on the static boundary S . Including external body forces f, we obtain the second-order formulation of the linear wave equation ρ∂t2 u − div Cε(u) = f

in (0, T ) ×  ,

(1.11a)

u(0) = u0

in  at t = 0 ,

(1.11b)

∂t u(0) = v0

in  at t = 0 ,

(1.11c)

u(t) = uV (t)

on V for t ∈ (0, T ) ,

(1.11d)

Cε(u)n = gS (t)

on S for t ∈ (0, T ) ,

(1.11e)

and, equivalently, the first-order formulation ρ∂t v − div σ = f

in (0, T ) ×  ,

(1.12a)

∂t σ − Cε(u) = 0

in (0, T ) ×  ,

(1.12b)

v(0) = v0

in  at t = 0 ,

(1.12c)

σ (0) = Cε(u0 )

in  at t = 0 ,

(1.12d)

v(t) = ∂t uV (t)

on V for t ∈ (0, T ) ,

(1.12e)

on S for t ∈ (0, T ) .

(1.12f)

σ (t)n = gS (t)

1.5

Visco-Elastic Waves

The balance of momentum (1.9) together with Hooke’s law ∂t σ = Cε(v) describes linear elastic waves. We observe  t  t σ (t) = σ (0) + ∂t σ (s) ds = σ (0) + Cε(v(s)) ds . 0

0

12

1 Modeling of Acoustic, Elastic, and Electro-Magnetic Waves

General linear visco-elastic waves are described by a retarded material law  t σ (t) = σ (0) + C(t − s)ε(v(s)) ds 0

implying 

t

∂t σ (t) = C(0)ε(v(t)) +

  ˙ − s)ε v(s) ds C(t

0

˙ of the elasticity tensor C. with a time-dependent extension C In analogy to the 1d model (1.7), one defines Generalized Standard Linear Solids with the relaxation tensor ˙ C(s) =−

r

1 s exp − Cj , τj τj

C(0) = C0 + C1 + · · · + Cr .

j =1

Introducing the corresponding stress decomposition σ = σ 0 + · · · + σ r with 



t

σ j (t) =

exp 0

s−t τj

 Cj ε(v(s)) ds ,

j = 1, . . . , r,

results in the first-order system for visco-elastic waves   ρ ∂t v − ∇ · σ 0 + · · · + σ r = f ,

(1.13a)

∂t σ 0 − C0 ε(v) = 0 ,

(1.13b)

∂t σ j − Cj ε(v) + τj−1 σ j = 0 ,

j = 1, . . . , r .

(1.13c)

This is complemented by initial and boundary conditions for the velocity v and the total stress σ , which are the observable quantities. The stress components σ 1 , . . . , σ r are inner variables describing the retarded material law; they can be replaced, e.g., by memory variables encoding the material history.

1.6

Acoustic Waves in Solids

In isotropic media, Hooke’s tensor only depends on two parameters, e.g., the Lamé parameters μ, λ Cε = 2με + λ tr(ε)I = 2μ dev(ε) + κ tr(ε)I ,

dev(ε) = ε −

1 tr(ε)I . 3

1.6 Acoustic Waves in Solids

13

For the wave dynamics, one uses a decomposition into components corresponding to shear waves depending on the shear modulus μ, and compressional waves depending on the compression modulus κ = 23 μ + λ. Then, the linear second order elastic wave equation (1.11a) in isotropic and homogeneous media takes the form ρ∂t2 u + μ∇ × ∇ × u − 3κ∇(∇ · u) = f . A vanishing shear modulus μ → 0 leads to the linear acoustic wave equation for the hydrostatic pressure p = 13 tr(σ ) and the velocity, described by the first-order system ρ ∂t v − ∇p = f

in (0, T ) ×  ,

(1.14a)

∂t p − κ∇ · v = 0

in (0, T ) ×  ,

(1.14b)

v(0) = v0

in  at t = 0 ,

(1.14c)

p(0) = p0

in  at t = 0 ,

(1.14d)

n · v(t) = gV (t)

on V for t ∈ (0, T ) ,

(1.14e)

p(t) = pS (t)

on S for t ∈ (0, T ) ,

(1.14f)

where we set pS = n·gS for the static boundary condition and gV = n·vV for the dynamic boundary condition. For acoustics, this corresponds to Dirichlet and Neumann boundary conditions, for elasticity this is reversed. In homogeneous media and for f = 0, (1.14a) and (1.14b) combine to the linear secondorder acoustic wave equation ∂t2 p − c2 p = 0 ,

c=



κ/ρ .

Remark 1.2 Simply neglecting the shear component is only an approximation and not fully realistic for waves in solids, in particular since by reflections compressional waves split in compressional and shear components. Nevertheless, in applications the acoustic wave equation is used also in solids since the system is much smaller so that computations are much faster. Remark 1.3 One obtains the same acoustic wave equations describing compression waves in a fluid or a gas. Note that, historically, the sign conventions for pressure and stress are different in fluid and solid mechanics. Visco-Acoustic Waves Generalized Standard Linear Solids can be reduced to acoustics. The corresponding retarded material law for the hydrostatic pressure takes the form 

t

∂t p(t) = κ∇ · v(t) + 0

κ(t ˙ − s)∇ · v(s) ds ,

κ(s) ˙ =−

s exp − . τj τj

r κj j =1

14

1 Modeling of Acoustic, Elastic, and Electro-Magnetic Waves

Defining κ = κ0 + κ1 + · · · + κr and p = p0 + p1 + · · · + pr with 

t

pj (t) = 0



s−t exp τj

 κj ∇ · v(s) ds ,

j = 1, . . . , r

results in the first-order system for linear visco-acoustic waves ρ ∂t v − ∇(p0 + · · · + pr ) = f , ∂t p0 − κ0 ∇ · v = 0 , ∂t pj − κj ∇ · v + τj−1 pj = 0 ,

j = 1, . . . , r .

This is complemented by initial and boundary conditions (1.14c)–(1.14f).

1.7

Electro-Magnetic Waves

Electric fields induce magnetic fields and vice versa. This is formulated by Maxwell’s equations describing the propagation of electro-magnetic waves. Configuration We consider a spatial domain  ⊂ R3 , a time interval I = (0, T ), and a boundary decomposition ∂ = E ∪ I corresponding to perfect conducting or transmission boundaries. Constituents Electro-magnetic waves are determined by the electric field and the magnetic field intensity E : I ×  → R3 ,

H : I ×  → R3 ,

and by the electric flux density and magnetic induction D : I ×  → R3 ,

B : I ×  → R3 .

Further quantities are the electric current density and the electric charge density J : I ×  → R3 ,

ρ: I ×  → R.

Balance Relations Faraday’s law states that the temporal change of the magnetic induction through a two-dimensional subset A ⊂  induces an electric field along the boundary ∂A, so that for all 0 < t1 < t2 < T  A





B(t2 ) − B(t1 ) · da = −



t2

t1

 E · d dt . ∂A

1.7 Electro-Magnetic Waves

15

Ampere’s law states that the temporal change of the electric flux density together with the electric current density through a two-dimensional manifold A ⊂  induces a magnetic field intensity along the boundary ∂A, i.e., 

  D(t2 ) − D(t1 ) · da + A



t2 t1



 J · da dt = A

t2 t1

 H · d dt . ∂A

Here, we use u · da = u · n da and u · d = u · τ d, the normal vector field n : A → R3 and the tangential vector field τ : ∂A → R3 (where the orientation of ∂A is given by n). The Gauß laws state for all subvolumes K ⊂  the conservation of the magnetic induction  B · da = 0 ∂K

and the equilibrium of electric charge density in the volume with electric flux density across the boundary ∂K 

 D · da = ∂K

ρ dx . K

Together, by the integral theorems of Stokes and Gauß we obtain  

t2

A t1



t2

∂t B · da dt = −



t1



∇ × E · da dt , A

∇ · B dx = 0 ,  

t2

 ∂t D · da dt +

A t1

t2 t1



K

 J · da dt =



A



t2 t1

∇ · D dx = K

 ∇ × H · da dt, A

ρ dx , K

and since this holds for all (t1 , t2 ) ⊂ I and all A, K ⊂ , it results in the Maxwell system ∂t B + ∇ × E = 0 ,

∂t D − ∇ × H = −J ,

∇ ·B = 0,

∇ ·D=ρ.

(1.15)

Note that a combination of the second and fourth equation implies the conservation of charge ∂t ρ + ∇ · J = 0. Material Laws in Vacuum Without the interaction with matter, electric field and the electric flux density, D = ε0 E, and magnetic induction and magnetic field intensity, B = μ0 H, are proportional by multiplication with the constant permittivity ε0 and

16

1 Modeling of Acoustic, Elastic, and Electro-Magnetic Waves

permeability μ0 , respectively, which together results in the linear second-order Maxwell equation for E ∂t2 E − c2 ∇ × ∇ × E = 0 √ with speed of light c = 1/ ε0 μ0 . A corresponding equation holds for H. In vacuum, in the absence of electric currents and electric charges, we find J = 0 and ρ = 0. Effective Material Laws for Electro-Magnetic Waves in Matter The interaction of electro-magnetic waves with the atoms in matter are described by the polarization P and the magnetization M depending on the electric field E and the magnetic induction B. For the electric flux density holds D = ε0 E + P(E, B) , and the magnetic field intensity is given by μ0 H = B − M(E, B) . The electric current density depends on the conductivity σ (Ohm’s law) and the external current J0 , so that J = σ E + J0 . In case of linear materials with instantaneous response, the polarization is proportional to the electric field P = ε0 χE with the susceptibility χ, that yields D = εr E with relative permittivity εr = ε0 (1 + χ). Linear materials with retarded response are given by  P(t) = ε0

t −∞

χ(t − s)E(s) ds .

A special case is the Debye model with χ(t) = exp polarization is determined by τ ∂t P + P = ε0 (εs − ε∞ )E .





(1.16) t εs − ε∞ , so that the τ τ

1.7 Electro-Magnetic Waves

17

This model is dispersive with a dispersion relation similar to the model for viscous elastic waves. The relation (1.16) extends to nonlinear materials by, e.g., t P(t) = ε0

χ1 (t − s)E(s) ds

−∞

t t t +

  χ3 (t − s1 , t − s2 , t − s3 ) E(s1 ), E(s2 ), E(s3 ) ds1 ds2 ds3 .

−∞−∞−∞

For materials of Kerr-type this response is instantaneous, i.e., P = χ1 E + χ3 |E|2 E . In more complex material models, the Maxwell system (1.15) is coupled to evolution equations for polarization or magnetization. E.g., in the Maxwell–Lorentz system the evolution of the polarization is determined by ∂t2 P =

1 (E − P) + |P|2 P . ε02

In the Landau–Lifshitz–Gilbert (LLG) equation the magnetization M is given by ∂t M − αM × ∂t M = −M × Heff ,

|M| = 1 ,

where α > 0 is a damping factor, and the effective field Heff is a combination of the external magnetic field and the demagnetizing field, which is a magnetic field due to the magnetization. Boundary Conditions The Maxwell system is complemented by conditions on ∂. On a perfectly conducting boundary E , we have E×n =0

and

B· n = 0,

and on the impedance (or Silver–Müller) boundary I , we prescribe H × n + ζ (E × n) × n = 0 depending on the given impedance ζ .

18

1 Modeling of Acoustic, Elastic, and Electro-Magnetic Waves

Together, we obtain for general nonlinear instantaneous material laws D(E, H) and B(E, H) the first-order system 2∂t D(E, H) − ∇ × H + σ E = −J0 ,

in (0, T ) ×  ,

(1.17a)

in (0, T ) ×  ,

(1.17b)

E(0) = E0

in  at t = 0 ,

(1.17c)

H(0) = H0

in  at t = 0 ,

(1.17d)

E×n=0

on E for t ∈ (0, T ) ,

(1.17e)

H × n + ζ (E × n) × n = g

on I for t ∈ (0, T ) .

(1.17f)

∂t B(E, H) + ∇ × E = 0

In nonlinear optics, for the special case of an instantaneous nonmagnetic material law D(E) = ε0 E+P(E) and M ≡ 0, the Maxwell system reduces to the second-order equation ∂t2 D(E) + μ−1 0 ∇ × ∇ × E + σ E = −∂t J0 complemented by initial and boundary conditions. Bibliographic Comments The mathematical foundations of modeling elastic solids (including a detailed discussion and a proof of the Cauchy theorem) is given in [27], and more physical background is given in [37]. For generalized standard linear solids we refer to [70]. An overview on modeling of electro-magnetic waves is given in [100], the mathematical aspects of photonics are considered in [47]. The example (1.3) is taken from [119, Example 3.4]. Dispersion relations and the analogy in the modeling of elastic and electro-magnetic waves are collected in [23, Chap. 2 and Chap. 8].

2

Space-Time Solutions for Linear Hyperbolic Systems

The linear wave equation can be analyzed in the framework of symmetric Friedrichs systems as a special case of linear hyperbolic conservation laws. Here, we introduce a general framework for the existence and uniqueness of strong and weak solutions in space and time which applies to general linear wave equations. We consider operators in space and time of the form L = M∂t + A describing a linear hyperbolic system, where A is a first-order operator in space. All results transfer to operators of the form L = M∂t + A + D with an additional positive semi-definite operator D; this applies to visco-acoustic and visco-elastic models, to mixed boundary conditions of Robin type and impedance boundary conditions. In the following, we use standard notations: for open domains G ⊂ Rd in space or G ⊂ R1+d in space-time and functions v, w : G → R we define the inner product  √ (v, w)G = G vw dx, the norm v G = (v, v)G and the Hilbert space L2 (G) of measurable functions v : G → R with v G < ∞.

2.1

Linear Hyperbolic First-Order Systems

Let  ⊂ Rd be a domain in space with Lipschitz boundary, I = (0, T ) a time interval, and we denote the space-time cylinder by Q = (0, T ) × . Boundary conditions will be imposed on k ⊂ ∂ for k = 1, . . . , m, depending on the model, so that the corresponding equations are well-posed. We consider a linear operator in space and time of the form L = M∂t + A with a uniformly positive definite operator M defined by My(x) = M(x)y(x) with a matrix d valued function M ∈ L∞ (; Rm×m sym ), and a differential operator Ay = j =1 Aj ∂j y with d m×m matrices Aj ∈ Rsym . Moreover, we define the matrix An = j =1 nj Aj ∈ Rm×m sym for n ∈ Rd and the corresponding boundary operator (An y)(x) = An y(x). © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 W. Dörfler et al., Wave Phenomena, Oberwolfach Seminars 49, https://doi.org/10.1007/978-3-031-05793-9_2

19

20

2 Space-Time Solutions for Linear Hyperbolic Systems

In the first step, we consider the properties of the operators A and L for smooth functions. Then the operators are extended to Hilbert spaces and, by specifying boundary conditions, we define maximal domains for the operators. Example 2.1 This applies to the linear acoustic wave equation (1.14) with m = d + 1 and   v y= , p





ρv My = κ −1 p



,

−∇p Ay = −∇ · v



 ,

−pn An y = −n · v

 .

(2.1)

For linear elastic waves with v = ∂t u and σ = Cε(u) we have         − div σ v −σ n ρv , Ay = , y= , An y = , My = −ε(v) σ − 12 (nv + vn ) C−1 σ (2.2)     and 12 My · y = 12 ρ|v|2 + σ · C−1 σ = 12 ρ|∂t u|2 + ε(u) · Cε(u) is the kinetic and potential energy. For linear electro-magnetic waves we have   E y= , H and 12 My · y =

 ε0 E , My = μ0 H 

1 2



  −∇ × H Ay = , ∇ ×E



 −n × H An y = , n×E

(2.3)

 ε0 |E|2 + μ0 |H|2 is the electro-magnetic energy.

Linear Conservation Laws Defining A = (A1 , . . . , Ad ) = (Aj,kl )j =1,...,d, k,l=1,...,m ∈ Rd×m×m we observe for the operator Ay = div(Ay) and the matrix An = n · A, so that the system Ly = f takes the form of a linear conservation law M∂t y + div(Ay) = f .

2.1 Linear Hyperbolic First-Order Systems

21

Integration by parts and using the symmetry of Aj yields for differentiable functions with compact support in  (Ay, z) =

d  j =1 

=−

Aj ∂j y · z dx =

d m 

d m 

Aj,kl yl ∂j zk dx = −

j =1 k,l=1 

=−

d  j =1 

Aj,kl (∂j yl )zk dx

j =1 k,l=1  d m 

yl Aj,lk ∂j zk dx

j =1 k,l=1 

y · Aj ∂j z dx = −(y, Az) ,

y, z ∈ C1c (; Rm ) ,

so that A∗ = −A on C1c (; Rm ). On the boundary ∂ with outer unit normal n, integration by parts yields for y, z ∈ C1 (; Rm ) ∩ C0 (; Rm ) (Ay, z) + (y, Az) =

d m 



Aj,kl (∂j yl )zk + yl Aj,lk ∂j zk dx

j =1 k,l=1 

=

d m 

d m    ∂j Aj,kl yl zk dx =

j =1 k,l=1 

 =

∂

nj Aj,kl yl zk da

j =1 k,l=1 ∂

An y · z da = (An y, z)∂ .

Together, we obtain in space and time for L = M∂t + A and its adjoint L∗ = −L (Lv, w)Q − (v, L∗ w)Q     = Mv(T ), w(T )  − Mv(0), w(0)  + (An v, w)(0,T )×∂

(2.4)

for v, w ∈ C1 (Q; Rm ) ∩ C0 (Q; Rm ). Example 2.2 For linear acoustic waves (2.1) we have 

L(v, p), (w, q)

 Q

      + (v, p), L(w, q) Q = ρv(T ), w(T )  + κ −1 p(T ), q(T )      − ρv(0), w(0)  − κ −1 p(0), q(0)  − (p, n · w)(0,T )×∂ − (n · v, q)(0,T )×∂ .

22

2 Space-Time Solutions for Linear Hyperbolic Systems

For linear elastic waves (2.2) we have 

L(v, σ ), (w, τ )

 Q

      + (v, σ ), L(w, τ ) Q = ρv(T ), w(T )  + C−1 σ (T ), τ (T )      − ρv(0), w(0)  − C−1 σ (0), τ (0)  − (σ n, w)(0,T )×∂ − (v, τ n)(0,T )×∂ .

For linear electro-magnetic waves (2.3) we have 

L(E, H), (e, h)

 Q

      + (E, H), L(e, h) Q = ε0 E(T ), e(T )  + μ0 H(T ), h(T )      − ε0 E(0), e(0)  − μ0 H(0), h(0)  − (E × n, h)(0,T )×∂ + (H × n, e)(0,T )×∂ .

Here we use the following calculus: for vectors a, b, c ∈ R3 we have a · (b × c) = (a × b) · c = (c × a) · b, and for vector fields u, v :  → R3 we have ∇ · (u × v) = v · (∇ × u) − u · (∇ × v). Thus, the Gauß theorem gives 

 v · (∇ × u) dx − 

 u · (∇ × v) dx =



∇ · (u × v) dx 





(u × v) · n da =

= ∂

u · (v × n) da . ∂

The formulation in our examples of wave equations as Friedrichs systems yields 0 A˜ j symmetric matrices of the form Aj = with A˜ j ∈ Rm1 ×m2 and m = m1 + m2 .

A˜ j 0 In order to obtain a well-posed problem with a unique solution, boundary conditions are required. Here we select 1 = . . . = m1 ⊂ ∂ and the complement k = ∂ \  1 for k = m1 + 1, . . . , m, as it is specified in the next section for acoustics in Example 2.3.

2.2

Solution Spaces

We define the Hilbert spaces  H(A, ) = y ∈ L2 (; Rm ) : z ∈ L2 (; Rm ) exists with

 (z, w) = (y, A∗ w) for all w ∈ C1c (; Rm ) ,

 H(L, Q) = v ∈ L2 (Q; Rm ) : z ∈ L2 (Q; Rm ) exists with

 (z, w)Q = (v, L∗ w)Q for all w ∈ C1c (Q; Rm ) ,

2.2 Solution Spaces

23

so that for y ∈ H(A, ) and v ∈ H(L, Q) the weak derivatives Ay ∈ L2 (; Rm ) and Lv ∈ L2 (Q; Rm ) exist; the corresponding norms are y H(A,)

 = y 2 + Ay 2 ,

v H(L,Q) =



v 2Q + Lv 2Q .

Depending on homogeneous boundary conditions on k ⊂ ∂, k = 1, . . . , m, we define   Z = w ∈ C1 (; Rm ) ∩ C0 (; Rm ) : (An w)k = 0 on k , k = 1, . . . , m ,  V = w ∈ C1 (Q; Rm ) ∩ C0 (Q; Rm ) : w(0) = 0 ,  (An w)k = 0 on (0, T ) × k , k = 1, . . . , m ,  V∗ = z ∈ C1 (Q; Rm ) ∩ C0 (Q; Rm ) : z(T ) = 0 ,  (An z)k = 0 on (0, T ) × k∗ , k = 1, . . . , m ,

(2.5a) (2.5b)

(2.5c)

where the sets k ⊂ ∂ are chosen such that (Az, z) = 0 ,

z ∈ Z,

(2.6)

and such that for the sets k∗ ⊂ ∂ in the definition of the test space holds (An w, z)(0,T )×∂ =

m   (An w)k , zk (0,T )× , k

w ∈ C1 (Q; Rm ) , z ∈ V∗ .

(2.7)

k=1

This is obtained by taking k∗ ⊂ ∂ minimal such that for homogeneous boundary conditions in V and V∗ (An w, z)(0,T )×∂ = 0 ,

w ∈ V , z ∈ V∗ .

(2.8)

The choice of k and k∗ is essential in order to obtain a well-posed problem; this will be explained for our examples in Sect. 2.7. Since we have A∗ = −A, this implies (Az, z) = 1 ∗ 2 (An z, z)∂ , and we observe k = k . Note that this is specific for our applications to wave problems but does not apply to general linear hyperbolic systems. Let Z ⊂ H(A, Q) be the closure of Z with respect to the norm · H(A,Q) , let V ⊂ H(L, Q) be the closure of V with respect to the norm · H(L,Q), and let V ∗ ⊂ H(L∗ , Q) be the closure of V∗ with respect to the norm · H(L∗ ,Q) . Then, we obtain from (2.4) and (2.7) (Lv, w)Q − (v, L∗ w)Q = 0 ,

v ∈ V , w ∈ V∗.

(2.9)

24

2 Space-Time Solutions for Linear Hyperbolic Systems

Example 2.3 For linear acoustic waves (2.1) we have H(A, ) = H(div, ) × H1 (), and for d = 2 the boundary parts 1 = 2 = S and 3 = V with ∂ = S ∪ V in Example 2.2 yields that (2.9) holds with k = k∗ , and we obtain   Z = (v, p) ∈ H(div, ) × H1 () : v · n = 0 on V , p = 0 on S ,  V ⊃ (v, p) ∈ H1 (0, T ; L2 (; Rm )) ∩ L2 (0, T ; H(div, ) × H1 ()) :

 v(0) = 0 , p(0) = 0 , v · n = 0 on (0, T ) × V , p = 0 on (0, T ) × S ,  V ∗ ⊃ (w, q) ∈ H1 (0, T ; L2 (; Rm )) ∩ L2 (0, T ; H(div, ) × H1 ()) :  w(T ) = 0 , q(T ) = 0 , w · n = 0 on (0, T ) × V , q = 0 on (0, T ) × S .

In Y = L2 (; Rm ) and W = L2 (Q; Rm ) we use the energy norms y Y =



y∈Y,

(My, y) ,

w W =

 (Mw, w)Q ,

w∈W,

and for the L2 adjoints (y, z) = z∈Y \{0} z Y

y Y ∗ = sup



(M −1 y, y) ,

w W ∗ =

 (M −1 w, w)Q .

In V and V ∗ we use the weighted norms v V =



v 2W

+ Lv 2W ∗

,

z V ∗ =



z 2W + L∗ z 2W ∗ ,

v ∈ V , z ∈ V∗.

Remark 2.4 For the extension to visco-acoustic and visco-elastic models the same solution spaces can be used. For mixed boundary conditions of Robin type or impedance boundary conditions a modification is required to include additional conditions on the boundary, see Remark 2.19. This relies on the fact that traces are well-defined for smooth test functions in V∗ , but in general not in V , where traces on mixed boundaries are only defined in distributional sense.

2.3

Solution Concepts

We consider different solution spaces of the equation Lu = f with initial and boundary conditions.

2.3 Solution Concepts

25

Definition 2.5 Depending on regularity of the data, we define: (a) u ∈ C1 (Q; Rm ) ∩ C0 (Q; Rm ) is a classical solution, if Lu = f u(0) = u0 (An u)k = gk

in Q = (0, T ) ×  , in  at t = 0 , on (0, T ) × k , k = 1, . . . , m ,

for f ∈ C0 (Q; Rm ), u0 ∈ C0 (; Rm ), gk ∈ C0 ((0, T ) × k ). (b) u ∈ H(L, Q) is a strong solution, if Lu = f u(0) = u0 (An u)k = gk

in Q = (0, T ) ×  , in  at t = 0 , on (0, T ) × k , k = 1, . . . , m ,

for f ∈ L2 (Q; Rm ), u0 ∈ L2 (; Rm ), gk ∈ L2 ((0, T ) × k ). (c) u ∈ L2 (Q; Rm ) is a weak solution, if   u, L∗ z Q = , z ,

z ∈ V∗ ,

with the linear functional  defined by   , z = (f, z)Q + Mu0 , z(0)  − (g, z)(0,T )×∂ for data f ∈ L2 (Q; Rm ), u0 ∈ L2 (; Rm ), and gk ∈ L2 ((0, T ) × k ). We set g = (gk )k=1,...,m ∈ L2 ((0, T ) × ∂; Rm ) with gk = 0 on ∂ \ k . Remark 2.6 For the variational definition of weak solutions we use smooth test functions V∗ so that the space-time traces on {0}× ⊂ ∂Q and (0, T )×∂ ⊂ ∂Q are well defined; with additional assumptions in Theorems 2.8 and 4.10 this extends to test functions in V ∗ . Example 2.7 A weak solution (v, σ ) ∈ L2 ((0, T )×(0, X); R2 ) of the linear wave equation √ (1.2) in 1d with wave speed c = κ/ρ and homogeneous Dirichlet boundary conditions satisfies     v, −ρ∂t w + ∂x τ (0,T )×(0,X) + σ, −κ −1 ∂t τ + ∂x w (0,T )×(0,X)     = v0 , w(0) (0,X) + σ0 , τ (0) (0,X)

26

2 Space-Time Solutions for Linear Hyperbolic Systems

for all test functions w, τ ∈ C1 ([0, T ] × [0, X]) with w(T , x) = τ (T , x) = 0 for x ∈ (0, X) and w(t, 0) = w(t, X) = 0 for t ∈ (0, T ). This allows for discontinuities of the solution along the characteristics 





t

∈ (0, T ) × R : x0 ± ct ∈  =

x0 ± ct

  t x



    ±c ∈ (0, T ) ×  : · =0 x − x0 1 t

for some x0 ∈ R. Here we illustrate this for a simple example: consider a piecewise constant function ⎧⎛ ⎞ ⎪ ⎪ ⎪⎝vL ⎠ for x < x + ct , [v] = vR − vL , 0 ⎪   ⎪ ⎪ ⎨ σL v(t, x) = ⎛ ⎞ ⎪ σ (t, x) ⎪ v ⎪ ⎪ ⎝ R ⎠ for x > x0 + ct , ⎪ [σ ] = σR − σL . ⎪ ⎩ σ R

Then, we have for all (w, τ ) ∈ Cc ([0, T ] × [0, X], R2 ) 

T 0

    v −ρ∂t w + ∂x τ dx dt · −κ −1 ∂t τ + ∂x w σ 0      ∂t −ρvL w − κ −1 σL τ · = dx dt vL τ + σL w xx0 +ct ∂x      1 −c −ρvL w − κ −1 σL τ = da · √ vL τ + σL w 1 + c2 1 x=x0 +ct      1 c −ρvR w − κ −1 σR τ + da · √ vR τ + σR w 1 + c2 −1 x=x0 +ct      1 c −ρ[v]w − κ −1 [σ ]τ = −√ da · [v]τ + [σ ]w 1 + c2 x=x0 +ct 1             1 ρ 0 [v] 01 [v] w c − · da =√ −1 2 0κ [σ ] 10 [σ ] τ 1 + c x=x0 +ct         1 [v] [v] w cM +A · da . =√ 2 [σ ] [σ ] τ 1 + c x=x0 +ct



X

2.4 Existence and Uniqueness of Space-Time Solutions

27

We observe, that (v, σ ) is a weak solution if the jump ([v], [σ ]) is an eigenvector of 

   [v] [v] A = −cM . [σ ] [σ ] This is equivalent to the jump conditions [σ ] − cρ[v] = 0 and [v] − cκ −1 [σ ] = 0. Based on the jump conditions we construct a weak solution  (v, σ ) ∈ L2 (0, T ) × (0, X), R2 ) with X = cT that is discontinuous along the characteristics (t, j x ±ct) on a special mesh in space and time depending of the wave speed c with x = ct and t = T /N, N ∈ N, cf. Fig. 2.1. Starting with v(0, x) = v 0 1 and σ (0, x) = σ 0 1 for (j − 1)x < x < j x, j− 2

j− 2

we obtain from the jump condition recursively for n = 1, 2, . . . , N n− 12

vj

n− 1 σj 2

vjn− 1 2

σjn− 1 2

1 n−1 v 2 j + 12 1 n− 12 v = 2 j + 12 1 n− 12 v = 2 j 1 n− 12 v = 2 j =

+ v n−11 + σ n−11 − σ n−11 , j− 2

j+ 2

n− 12

n− 12

−v

j − 12



j + 12

j− 2



n− 12

j − 12

n+ 1 + vj −12

n− 1 + σj 2

n− 1 − σj −12

n− 1 − vj −12

n− 1 + σj 2

n− 1 + σj −12

n n v− 1 = −v 1 , 2

,

j = 0, . . . , N ,

2

n n , vN+ 1 = −v N− 1 2



σ−n 1 = σ 1n ,

,

2

,

2

j = 1, . . . , N ,

2

n n σN+ , 1 = σ N− 1 2

2

with suitable extensions for homogeneous Dirichlet boundary conditions for v, see Fig. 2.1 for an example.

2.4

Existence and Uniqueness of Space-Time Solutions

Now we construct strong and weak solutions by a least squares approach. Therefore, we define the quadratic functionals 1 Lv − f 2W ∗ , v ∈ H(L, Q) , 2 1 J ∗ (z) = L∗ z 2W ∗ − , z , z ∈ V∗ . 2 J (v) =

28

2 Space-Time Solutions for Linear Hyperbolic Systems

Fig. 2.1 Illustration of a piecewise constant weak solution in 1d of the wave equation in space and time with jumps along the characteristics. The solution is computed by the explicit time stepping scheme in Example 2.7

Theorem 2.8 Depending on the regularity of the data, we obtain: (a) Assume that CL > 0 exists with v W ≤ CL Lv W ∗ ,

v∈V.

(2.10)

Then, a unique minimizer u ∈ V of J (·) exists, and if L(V ) = W , the minimizer u ∈ V is the unique strong solution of (Lu, w)Q = (f, w)Q ,

w∈W

(2.11)

with homogeneous initial and boundary data. (b) Assume that CL∗ > 0 and C > 0 exists with z W ≤ CL∗ L∗ z W ∗ ,

|, z| ≤ C z V ∗ ,

z ∈ V∗ .

(2.12)

Then, J ∗ (·) extends to V ∗ , a unique minimizer z∗ ∈ V ∗ of J ∗ (·) exists, and if L∗ (V∗ ) ⊂ W is dense, u = L∗ z∗ ∈ L2 (Q; Rm ) is the unique weak solution of (u, L∗ z)Q = , z ,

z ∈ V∗.

(2.13)

2.4 Existence and Uniqueness of Space-Time Solutions

29

Proof ad (a) The functional J (·) > 0 is bounded from below, and any minimizing sequence {un }n∈N ⊂ V with lim J (un ) = inf J (v) := Jinf

n→∞

v∈V

satisfies 2  1  1 1 1   Lun − Luk 2W ∗ = Lun − f 2W ∗ + Luk − f 2W ∗ − L un + uk − f ∗ W 4 2 2 2

1  un + uk = J (un ) + J (uk ) − 2J 2 ≤ J (un ) + J (uk ) − 2Jinf −→ 0

for n, k −→ ∞ .

Condition (2.10) implies the norm equivalence Lv W ∗ ≤ v V =



v 2W + Lv 2W ∗ ≤



1 + CL2 Lv W ∗ ,

v∈V,

(2.14)

so that the minimizing sequence is a Cauchy sequence converging to u ∈ V . Since J (·) is strictly convex, the minimizer is unique. Moreover, since J (·) is differentiable, u is a critical point, i.e., 0 = ∂J (u)[v] = (Lu − f, Lv)W ∗ = (Lu − f, M −1 Lv)Q ,

v∈V.

If L is surjective, this implies (2.11) by inserting w = M −1 Lv ∈ M −1 L(V ) = W . ad (b) By assumption (2.12), J ∗ (·) and  are continuous in V∗ with respect to the norm in V ∗ , so they extend to V ∗ , and we observe that J ∗ (·) is bounded from below by J ∗ (z) =

 1 ∗ 2 1 1  L z W ∗ − , z ≥ z 2V ∗ − C z V ∗ ≥ − C2 1 + CL2 ∗ . 2 2 2 2(1 + CL∗ )

By the same arguments as above a unique minimizer z∗ ∈ V ∗ exists characterized by 0 = ∂J ∗ (z∗ )[z] = (L∗ z∗ , L∗ z)W ∗ − , z ,

z ∈ V∗.

Inserting u = L∗ z∗ implies (2.13). Now assume that u˜ also solves (2.13); then, (u − ˜ L∗ z)Q = 0 for all z ∈ V∗ . Since L∗ (V∗ ) is dense in W , this implies u = u, ˜ so that the u, weak solution is unique.   Remark 2.9 Strong solutions with inhomogeneous initial and boundary data exist, if the initial function u0 in  can be extended to a function u0 ∈ H(L, Q) satisfying the boundary conditions.

30

2.5

2 Space-Time Solutions for Linear Hyperbolic Systems

Mapping Properties of the Space-Time Operator

Lemma 2.10 v W ≤ CL Lv W ∗ for v ∈ V holds with CL = 2T . Proof For v ∈ V we have v(0) = 0, and using (2.6) we obtain  v 2W = 

T 0

  Mv(t), v(t)  dt =

T





T

= 0

=2

0

0

 =2

0

t

T



T 0

    Mv(t), v(t)  − Mv(0), v(0)  dt

  ∂s Mv(s), v(s)  ds dt = 2



T 0

 0

t

  M∂s v(s), v(s)  ds dt

 t

    M∂s v(s), v(s)  + Av(s), v(s)  ds dt 

0

t



Lv(s), v(s)

0

 

 ds dt = 2 0

T

  (T − t) Lv(t), v(t)  dt

≤ 2T Lv W ∗ v W . Since V is dense in V , this extends to V .

 

As a consequence of Lemma 2.10, the operator L : V → L2 (Q; Rm ) is injective and continuous, i.e., L ∈ L(V , W ). Corollary 2.11 L(V ) ⊂ L2 (Q; Rm ) is closed. Proof For any sequence (wn )n∈N ⊂ V with lim Lwn = f ∈ W we have n→∞

wn − wk W + Lwn − Lwk W ∗ ≤ (CL + 1) Lwn − Lwk W ∗ −→ 0 ,

n, k → ∞ ,

so that (wn )n is a Cauchy sequence in V ; since V ⊂ H(L, Q) is closed, the limit w =   lim wn ∈ V with Lw = f exists. Let the domain D(A) = Z ⊂ H(A, ) of the operator A be the closure of Z defined in     (2.5a). Then, (2.6) gives (M + τ A)z, z  = Mz, z  > 0 for all z = 0 and τ ∈ R, i.e., M + τ A is injective on Z. Moreover, we require that M + τ A is surjective on Z, which is achieved in our applications in Sect. 2.7 by a suitable balanced selection of k ⊂ ∂. Lemma 2.12 Assume that M + τ A : Z → L2 (; Rm ) is surjective for all τ > 0. Then, L(V ) ⊂ L2 (Q; Rm ) is dense.

2.5 Mapping Properties of the Space-Time Operator

31

T Proof For f ∈ L2 (Q; Rm ), N ∈ N and tN,n = n N let fN ∈ L2 (Q; Rm ) be piecewise constant in time with fN,n = fN |(tN,n−1 ,tN,n ) so that lim fN −f Q = 0. Since the operator N→∞

M+

T N A:

Z −→ L2 (; Rm ) is surjective, starting with uN,0 = 0 we find uN,n ∈ Z with

T T M + A uN,n = uN,n−1 + fN,n , N N

n = 1, . . . , N .

Let uN ∈ H1 (0, T ; Z) ⊂ V be the piecewise linear interpolation: for n = 1, . . . , N set uN (t) =

tN,n − t t − tN,n−1 uN,n−1 + uN,n , tN,n − tN,n−1 tN,n − tN,n−1

t ∈ (tN,n−1 , tN,n ) .

Then, we observe by construction LuN = fN and thus lim LuN − f Q = 0. N→∞

 

Remark 2.13 Together with Corollary 2.11 we observe L(V ) = L2 (Q; Rm ), i.e., the operator L : V −→ L2 (Q; Rm ) is surjective. A corresponding result can be achieved for L∗ (V ∗ ) as the same arguments as in Lemma 2.10, 2.12, and Corollary 2.11 hold for L∗ and V ∗ . We obtain z W ≤ CL L∗ z W ∗ ,

z ∈ V∗

i.e., CL = CL∗ . By the assumption of Lemma 2.12, L∗ (V ∗ ) ⊂ L2 (Q; Rm ) is dense which implies L∗ (V ∗ ) = L2 (Q; Rm ). Remark 2.14 Since L(V) and L∗ (V∗ ) are dense in W , we have   V = v ∈ H(L, Q) : (Lv, z)Q = (v, L∗ z)Q for z ∈ V∗ ,   V ∗ = z ∈ H(L∗ , Q) : (L∗ z, v)Q = (z, Lv)Q for v ∈ V , i.e., V ∗ is the Hilbert adjoint space of V , and V is the Hilbert adjoint space of V ∗ . Lemma 2.15 For z ∈ V ∗ holds z(0) 2Y ≤ z 2V ∗ . Proof We obtain, using z(T ) = 0,  z(0) 2Y

=

z(0) 2Y



z(T ) 2Y

=− 0

T

∂t z(t) 2Y dt = −2(M∂t z, z)Q

= −2(M∂t z, z)Q − 2(Az, z)Q = 2(L∗ z, z)Q ≤ z 2V ∗ .  

32

2.6

2 Space-Time Solutions for Linear Hyperbolic Systems

Inf-Sup Stability

From the previous section we directly obtain the following results. Theorem 2.16 The bilinear form b : V × W → R, b(v, w) = (Lv, w)Q , is inf-sup stable satisfying b(v, w) 1 ≥ β :=  . v∈V \{0} w∈W \{0} v V w W 1 + CL2 inf

sup

Thus, for all f ∈ L2 (Q, Rm ) a unique Petrov–Galerkin solution u ∈ V of   b(u, w) = f, w Q ,

w ∈ W,

exists, and the solution is bounded by u V ≤ β −1 f W ∗ . Proof For v ∈ V \ {0} we test with w = M −1 Lv, so that with (2.14) b(v, w) b(v, M −1 Lv) 1 ≥ = M −1 Lv W ≥  v V . −1 Lv w M W W w∈W \{0} 1 + CL2 sup

The existence and the a priori bound are now an easy consequence.

 

Corollary 2.17 Due to our previous results on the adjoint operator L∗ we find correspondingly that for all d ∈ L2 (Q, Rm ) the dual problem L∗ z = d admits a unique solution z ∈ V ∗ which is bounded by z V ∗ ≤ β −1 d W ∗ . Corollary 2.18 Additional regularity for the right-hand side f ∈ H1 (0, T ; L2 (; Rm )) implies for the solution the regularity u ∈ H1 (0, T ; L2 (; Rm )) and the estimate ∂t u W ≤ CL ∂t f W ∗ . Proof This simply follows from Lu = f, which formally gives for the derivative in time L∂t u = ∂t f. If ∂t f ∈ W , a solution v ∈ V solving Lv = ∂t f exists, and since the solution   is unique, v = ∂t u.

2.7

Applications to Acoustics and Visco-Elasticity

Acoustic Waves In the setting of Example 2.3 we have A(v, p) = −(∇p, ∇ · v) and     A(v, p), (w, q)  + (v, p), A(w, q)  = −(p, n · w)∂ − (n · v, q)∂ .

2.7 Applications to Acoustics and Visco-Elasticity

33

We now show that the assumption in Lemma 2.12 is satisfied. For all (f, g) ∈ L2 (; Rd+1 ) and τ > 0 we define in the first step p ∈ H1 () with p = 0 on S by solving the elliptic equation         τ ρ −1 ∇p, ∇φ  + κ −1 p, φ  = g, φ  − ρ −1 f, ∇φ 

(2.15)

for φ ∈ H1 () with φ = 0 on S . Then, we define v = ρ −1 (τ ∇p + f) ∈ L2 (; Rd ), and inserting (2.15), we observe 

v, ∇φ

 

    = g, φ  − κ −1 p, φ  ,

φ ∈ C1c () ,

i.e., ∇ · v = −g + κ −1 p ∈ L2 (), and thus     0 = v, ∇φ  + ∇ · v, φ  = n · v, φ∂ ,

φ ∈ C1 () , φ = 0 on S ,

so that n · v = 0 on ∂ \ S = V . Together, (v, p) ∈ Z and (M + τ A)(v, p) = (f, g) . Moreover, the solution is unique, so that M + τ A is injective and surjective.  Visco-Elastic Waves For the system (1.13) we set y = v, σ 0 , . . . , σ r ) and ⎛ ⎞ ρ 0 ··· 0 ⎜ ⎟ ⎜ 0 C−1 ⎟ 0 ⎜ ⎟, M =⎜ . .. ⎟ . . ⎝. ⎠ −1 0 Cr



0 ⎜ ⎜ε A=− ⎜ ⎜ .. ⎝. ε

⎞ div · · · div ⎟ 0 ⎟ ⎟, .. ⎟ . ⎠ 0 0



⎞ 0 0 0 ··· 0 ⎜0 0 0 · · · 0 ⎟ ⎜ ⎟ ⎜ ⎟ −1 ⎜ ⎟ 0 0 τ D= M ⎜ 1 ⎟ ⎜ .. .. ⎟ .. ⎝. . ⎠ . −1 00 τr

with m = 2 + 3(1 + r) components for d = 2 and m = 3 + 6(1 + r) for d = 3, and where D ∈ L∞ (; Rm×m sym ) is a positive semi-definite matrix function. This defines the operator Dy(x) = D(x)y(x), and we have (Dy, y) ≥ 0 for all y ∈ L2 (; Rm ). The space-time setting is extended to the operator L = M∂t + A + D, and formally the adjoint operator is L∗ = −M∂t − A + D. The assumption in Lemma 2.12 can be verified analogously to the acoustic case. Remark 2.19 The extension to mixed boundary conditions on R ⊂ ∂ requires L2 regularity of the traces on the boundary part R . Then, extending the norm · V by a corresponding boundary term again defines V as closure of V with respect to this stronger norm, and the space-time operator L is extended by a dissipative boundary operator D.

34

2 Space-Time Solutions for Linear Hyperbolic Systems

Bibliographic Comments Least squares for linear first-order systems for finite elements are considered in [21, 22], where also the LL∗ technique is established which is used to prove Theorem 2.8 (b). Here this is applied to the space-time setting, see [48, 49, 65, 66]. The extension to mixed boundary conditions is considered in [50]. The inf-sup constant β in Theorem 2.16 is not optimal for the continuous problem; for an improved estimate see [65, Lem. 1]. Here, it relies on the estimate for CL in Lemma 2.10 which is generalized in Theorem 4.1 for the approximation. The suitable choice of boundary conditions for general Friedrichs systems is discussed in [42, Chap. 7.2].

3

Discontinuous Galerkin Methods for Linear Hyperbolic Systems

We develop a space-time method with a discontinuous Galerkin discretization in space for linear wave problems. For the ansatz space we use piecewise polynomials in every cell, where the traces on the cell interfaces can be different from the two sides. Therefore, we need to extend the first-order operator A to discontinuous finite element spaces. Here, we introduce the discrete operator Ah with upwind flux, where the evaluation of the upwind flux is based on solving Riemann problems, i.e., by construction of piecewise constant solutions in space and time. We start with simple examples for interface and transmission problems, and then consider the general case for waves in heterogeneous media.

3.1

Traveling Wave Solutions in Homogeneous Media

We consider linear hyperbolic first-order systems L = M∂t + A introduced in Sect. 2.1, and we start with the case of homogeneous material parameters, so that the operator M is represented by a symmetric positive definite matrix M ∈ Rm×m sym which is constant in . Let (λ, w) ∈ R × Rm be an eigenpair of An w = λMw, and let a ∈ C1 (R) be an amplitude function describing the shape of the traveling wave. Then, we observe for y(t, x) = a(n · x − λt) w ∂t y(t, x) = −λa  (n · x − λt)w , ∂xj y(t, x) = nj a  (n · x − λt)w , Ly(t, x) = M∂t y(t, x) + Ay(t, x)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 W. Dörfler et al., Wave Phenomena, Oberwolfach Seminars 49, https://doi.org/10.1007/978-3-031-05793-9_3

35

36

3 Discontinuous Galerkin Methods for Linear Hyperbolic Systems d

= a  (n · x − λt) − λM + nj Aj w



j =1

= a  (n · x − λt) An − λM w = 0 , so that y solves Ly = 0 for all t ∈ R in  = Rd . Example 3.1 For acoustic waves with wave speed c =   v y= , p



 ρv My = κ −1 p



,

 pn An y = − , v·n

For elastic waves with wave speeds cp = √ cs = μ/ρ for shear waves, we have   v y= , σ λ ∈ {0, ±cp , ±cs } ,

√ κ/ρ we have λ ∈ {0, ±c} ,

  ∓cn w= . κ

√ (2μ + λ)/ρ for compressional waves and

   ρv σn  , , An y = − 1  My =

C−1 σ 2 nv + vn     ∓cp n ∓cs τ , ws = , wp = 2μnn + λI μ(nτ + τ n ) 

where τ ∈ Rd is a tangential unit vector, i.e., τ · n = 0 and |τ | = 1. √ For linear electro-magnetic waves with wave speed c = 1/ εμ we have   E y= , H λ ∈ {0, ±c} ,

3.2



   εE n×H My = , An y = − , μH −n × E √  √   ± ετ εn × τ , w2 = √ . w1 = √ ± μτ μn × τ

Reflection of Traveling Acoustic Waves at Boundaries

In the next step we consider solutions of the acoustic wave equation in the half space 

   ρ∂t v − ∇p 0 = κ −1 ∂t p − ∇ · v 0

  in R = x ∈ Rd : n · x > 0

3.3 Transmission and Reflection of Traveling Waves at Interfaces

37

with initial value 

   v(0, x) cn = a(n · x) p(0, x) κ

depending on a ∈ C1 (R) with a(n·x) = 0 for n·x < ct0 and t0 > 0, i.e., supp a ⊂ [ct0 , ∞]. The wave starts traveling from right to left, and at time t = t0 it reaches the boundary. In case of a homogeneous Neumann boundary condition v · n = 0 it is reflected, i.e., ⎧ ⎛ ⎞ ⎪ ⎪ cn ⎪ a(ct + n · x) ⎝ ⎠ ⎪   ⎪ ⎪ ⎨ κ v(t, x) ⎛ ⎞ ⎛ ⎞ = ⎪ p(t, x) ⎪ cn −cn ⎪ ⎪a(ct + n · x) ⎝ ⎠ + a(ct − n · x) ⎝ ⎠ ⎪ ⎪ ⎩ κ κ

0 < c(t0 − t) < n · x , 0 < n · x < c(t − t0 ) .

This is illustrated in Fig. 3.1. Otherwise, with homogeneous Dirichlet boundary conditions p = 0 the reflection also changes sign, i.e., ⎧ ⎛ ⎞ ⎪ ⎪ cn ⎪ a(ct + n · x) ⎝ ⎠ ⎪   ⎪ ⎪ ⎨ κ v(t, x) ⎛ ⎞ ⎛ ⎞ = ⎪ p(t, x) ⎪ cn −cn ⎪ ⎪ ⎠ a(ct + n · x) ⎝ ⎠ − a(ct − n · x) ⎝ ⎪ ⎪ ⎩ κ κ

0 < c(t0 − t) < n · x , 0 < n · x < c(t − t0 ) .

For smooth amplitude functions this is a classical solution.

3.3

Transmission and Reflection of Traveling Waves at Interfaces

Now we consider solutions of the acoustic wave equation in Rd with an interface 

   ρ∂t v − ∇p 0 = −1 κ ∂t p − ∇ · v 0

in L ∪ R ,

⎧ ⎨ = x ∈ Rd : n · x < 0, L ⎩R = x ∈ Rd : n · x > 0

38

3 Discontinuous Galerkin Methods for Linear Hyperbolic Systems

= 0.0

= 0.50

= 1.0

= 1.5

= 2.0

Fig. 3.1 The evolution of the pressure distribution with reflection at a fixed boundary (left, cf. Sect. 3.2), and reflection and transmission at an interface (right, cf. Sect. 3.3) of traveling waves

with constant coefficients (ρL , κL ) in L and (ρR , κR ) in R defining M L and M R , starting in R with    n v(0, x) , = a(n · x/cR ) ZR p(0, x)



a(n · x) = 0 for n · x < cR t0 , t0 > 0 ,

√ √ where ZL = κL ρL , ZR = κR ρR are the left and right impedances, and where cL = √ √ κL /ρL , cR = κR /ρR are the left and right wave speeds. Note that we use a different scaling of the eigenvectors for the transmission problem.

3.4 The Riemann Problem for Acoustic Waves

39

We state continuity at the interface to determine a classical solution and obtain ⎧ ⎛ ⎞ ⎪ ⎪ n ⎪ ⎪ a(t + n · x/cR )⎝ ⎠ 0 < cR (t0 − t) < n · x, ⎪ ⎪ ⎪ ⎪ ZR ⎪ ⎪ ⎛ ⎞ ⎪ ⎪ ⎪ ⎪ n ⎪ ⎝ ⎠   ⎪ ⎪ ⎨a(t + n · x/cR ) v(t, x) ZR ⎛ ⎞ = ⎪ p(t, x) −n ⎪ ⎪ + βR a(t − n · x/cR )⎝ ⎠ 0 < n · x < cR (t − t0 ), ⎪ ⎪ ⎪ ⎪ ZR ⎪ ⎪ ⎛ ⎞ ⎪ ⎪ ⎪ ⎪ n ⎪ ⎪ β a(t + n · x/cL )⎝ ⎠ cL (t0 − t) < n · x < 0 ⎪ ⎪ ⎩ L ZL with transmission and reflection coefficients βL =

2ZR , ZR + ZL

βR =

ZL − ZR ZR + ZL

derived from the interface condition An [y] = 0, see Fig. 3.1. By the interface condition we obtain L(v, p) ∈ L2,loc(R × Rd ; Rd+1 ), so that (v, p) is a strong solution. We observe that no wave is reflected if the impedance ZL = ZR is continuous. This property can be used to design absorbing boundary layers.

3.4

The Riemann Problem for Acoustic Waves

Now we consider weak solutions in L2,loc(Rd ; Rd+1 ) of the acoustic wave equation 

   ρ∂t v − ∇p 0 = −1 κ ∂t p − ∇ · v 0

in L ∪ R ,

⎧ ⎨ =  x ∈ R d : n · x < 0  L ⎩R = x ∈ Rd : n · x > 0

,

with constant coefficients (ρL , κL ) in L and (ρR , κR ) in R , and with piecewise constant initial values         v(0, x) vL v(0, x) vR = , x ∈ L , , x ∈ R , = pL pR p(0, x) p(0, x)

40

3 Discontinuous Galerkin Methods for Linear Hyperbolic Systems

called Riemann problem. The weak solution is of the form ⎧⎛ ⎞ ⎪ ⎪ ⎪⎝ vL ⎠ ⎪ ⎪ ⎪ ⎪ pL ⎪ ⎪ ⎪ ⎛ ⎞ ⎛ ⎞ ⎪ ⎪ ⎪ ⎪ n v ⎪⎝ L ⎠ + βL ⎝ ⎠ ⎪   ⎪ ⎪ ⎨ pL Z v(t, x) ⎛ L ⎞ = ⎛ ⎞ ⎪ p(t, x) ⎪ v n ⎪ ⎪ ⎝ R ⎠ + βR ⎝ ⎠ ⎪ ⎪ ⎪ ⎪ p −Z R ⎪ ⎪ ⎛ R⎞ ⎪ ⎪ ⎪ ⎪ v ⎪ ⎪ ⎝ R⎠ ⎪ ⎪ ⎩ pR

x · n < −cL t −cL t < x · n < 0 0 < x · n < cR t cR t < x · n

depending on βL , βR ∈ R determined by the flux condition  An

which yields βL =

       vL n vR n + βL = An + βR , ZL −ZR pL pR

(3.1)

[p] + ZR n · [v] [p] − ZL n · [v] , βR = depending on [p] = pR − pL , ZL + ZR ZL + ZR

[v] = vR − vL . For discontinuous initial values the solution is discontinuous along the characteristic linear manifolds x · n + cL t = 0 and x · n − cR t = 0 in the space-time domain, so that we only obtain a weak solution.

3.5

The Riemann Problem for Linear Conservation Laws

We now construct a weak solution of the Riemann problem for general linear conservation laws, i.e., a piecewise constant weak solution of Ly = 0 in L2,loc (Rd ; Rm ) with discontinuous initial values ⎧   ⎨y in L = x ∈ Rd : n · x < 0 , L y0 (x) = yL , yR ∈ Rm , M L , M R ∈ Rm×m sym . ⎩yR in R = x ∈ Rd : n · x > 0,     R Let (λLj , wLj ) j =1,...,m and (λR j , wj ) j =1,...,m be eigensystems, i.e., R R An wLj = λLj M L wLj , An wR j = λj M R wj ,

R wLk · M L wLj = wR k · M R wj = 0 for j = k .

3.6 The DG Discretization with Full Upwind

41

A solution is constructed by a superposition of traveling waves ⎧  ⎪ yL + βjL wLj ⎪ ⎪ ⎨ L j : x·n>λj t y(t, x) =  ⎪ ⎪ y + βjR wR ⎪ j ⎩ R R

x ∈ L , x ∈ R ,

j : x·n 0, and the piecewise constant function y is the unique weak solution of Ly = 0 with initial value y(0) = y0 . In summary, the solution of the Riemann problem defines the upwind flux

upw An y0 = An yL + βjL wLj .

(3.3)

j : λLj 0 exists with (Mh yh , yh ) ≥ cM yh 2W ,

yh ∈ Yh ;

(4.1a)

(b) Dh ∈ L(Yh , Yh ) is monotone, i.e., (Dh yh , yh ) ≥ 0 ,

yh ∈ Yh ;

(4.1b)

(c) Ah ∈ L(Yh , Yh ) is monotone and consistent, i.e., (Ah yh , yh ) ≥ 0 ,

yh ∈ Yh ,

(4.1c)

z ∈ Yh ∩ D(A) .

(4.1d)

(Ah z, yh ) = (Az, yh ) , (Ah yh , z) = −(yh , Az) ,

The operators Mh , Dh , Ah do not depend on the time variable t ∈ (0, T ), i.e., they are defined in Yh ⊂ P(h ; Rm ).

4.3 Inf-Sup Stability

51

In the next step we construct a suitable ansatz space Vh ⊂ P(Qh ; Rm ). In every time slice (tn−1 , tn ) let n,h : L2 (; Rm ) −→ Yn,h be the weighted L2 -projection defined by     Mh n,h y, zh  = Mh y, zh  ,

y ∈ L2 (; Rm ) , zh ∈ Yn,h

corresponding to the norm     yh  = Mh yh , yh  , Y h

yh ∈ Yh .

For vh ∈ P(Ih × h ; Rm ) let vn,h ∈ P([tn−1 , tn ] × h ; Rm ) be the extension of vh |(tn−1 ,tn )×h to [tn−1 , tn ]. Then, we define " Vh = vh ∈

!

PpR ⊗ PqR (K; Rm ) ⊂ P(Ih × h ; Rm ) :

R=(tn−1 ,tn )×K∈Rh

# vh (0) = 0 , vn,h (tn−1 ) = n,h vn−1,h (tn−1 ) for n = 2, . . . , N ⊂ H1 (0, T ; Yh ) . By construction, we have ∂t Vh = Wh in Ih and dim Vh = dim Wh . Note that Vh includes homogeneous initial data. For inhomogeneous initial data u0 we define the affine space " Vh (u0 ) = vh ∈

!

PpR ⊗ PqR (K; Rm ) ⊂ P(Ih × h ; Rm ) :

R=(tn−1 ,tn )×K∈Rh

vh (0) = 1,h u0 for t = 0 ,

#

vn,h (tn−1 ) = n,h vn−1,h (tn−1 ) for n = 2, . . . , N ⊂ H1 (0, T ; Yh ) .

4.3

Inf-Sup Stability

Let h : L2 (Q; Rm ) −→ Wh be the projection defined by 

Mh h v, wh

 Q

  = Mh v, wh Q ,

v ∈ L2 (Q; Rm ) , wh ∈ Wh .

Note that h Mh vh = Mh h vh and h Ah vh = Ah h vh for vh ∈ L2 (0, T ; Yh ).

(4.2)

52

4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic. . .

The analysis of the discretization is based on the norms    wh 

Wh

=



Mh wh , wh



, Q

    fh  ∗ = Mh−1 fh , fh Q , W h

wh , fh ∈ L2 (Q; Rm )

and   vh  = V

$

h

 2   vh  + h M −1 Lh vh 2 , h W W h

h

vh ∈ H1 (0, T ; Yh ) .

(4.3)

Theorem 4.1 The bilinear form bh : H1 (0, T ; Yh ) × L2 (Q; Rm ) −→ R defined by bh (vh , wh ) = (Lh vh , wh )Q is inf-sup stable in Vh × Wh satisfying bh (vh , wh ) ≥ β vh Vh , wh ∈Wh \{0} wh Wh sup

vh ∈ Vh

1 with β = √ . 4T 2 + 1

Thus, for given f ∈ L2 (Q; Rm ), a unique solution uh ∈ Vh of (Lh uh , wh )Q = (f, wh )Q ,

wh ∈ Wh ,

(4.4)

exists satisfying the a priori bound uh Vh ≤ β −1 h f Wh∗ . The stability constant β > 0 is the same as in the continuous case in Theorem 2.16. The proof of the inf-sup stability is based on the following estimates. Lemma 4.2 Let λn,k ∈ Pk , k = 0, 1, 2, . . ., be the orthonormal Legendre polynomials in   L2 (tn−1 , tn ). Then, we have t∂t λn,k , λn,k (t ,t ) ≥ 0. n−1 n

  Proof The orthonormal Legendre polynomials λn,k with respect to ·, · (t ,tn ) are given n−1 by scaling the orthogonal polynomials λ˜ n,k  k λn,k (t) = cn,k λ˜ n,k (t) , λ˜ n,k (t) = ∂tk (t − tn−1 )(t − tn ) , cn,k = λ˜ n,k −1 (tn−1 ,tn ) .   For k = 0 we have ∂t λn,0 = 0 and thus t∂t λn,0 , λn,0 (t 

t∂t λn,k , λn,k

 (tn−1 ,tn )

n−1 ,tn )

= 0. For k ≥ 1 we have

   k = tcn,k ∂tk+1 (t − tn−1 )(t − tn ) , λn,k (t ,tn ) n−1   k+1 2k = tcn,k ∂t t , λn,k (t ,t ) n−1 n     k 2k = cn,k k∂t t , λn,k (t ,tn ) = k λn,k , λn,k (t ,tn ) = k > 0 n−1



using t ∂tk+1 t 2k = t 2k(2k − 1) · · · (k + 1)k t k−1 = k ∂tk t 2k .

n−1

 

4.3 Inf-Sup Stability

53

Lemma 4.3 Let X = (Xkn ) ∈ RN×N sym be a symmetric and positive semidefinite matrix, N×N be a positive semidefinite matrix. Then, and Y = (Ykn ) ∈ R N

X:Y =

Xkm Ykm ≥ 0 .

k,m=1

Proof Let (μn , w n ), n = 1, . . . , N be a complete eigensystem of X with μn ≥ 0 and 

w n = (wnk )k=1,...,N ∈ RN , so that X = N n=1 μn w n w n . Then, we have N

X:Y =

N

Xkm Ykm =

k,m=1

μn wnk wnm Ykm =

k,m,n=1

N

μn w n Y wn ≥ 0 .

n=1

  Lemma 4.4 We have for vh ∈ Vh vh Wh ≤ 2T h Mh−1 Lh vh Wh ,

vh ∈ Vh .

(4.5)

This shows that Lemma 2.10 extends to the discrete estimate also with CL = 2T . Proof Set p = max pR . For vh ∈ Vh in every time slice (tn−1 , tn ) a representation R∈Rh

vn,h (t, x) =

p

λn,k (t)vn,k,h (x) ,

vn,k,h ∈ Yn,h , (t, x) ∈ (tn−1 , tn ) × h

k=0

exists with vn,k,h (x) = 0 for (t, x) ∈ R = (tn−1 , tn ) × K and k > pR , so that h vn,h (t, x) =

p R −1

λn,k (t)vn,k,h (x) =

k=0

p

λn,k (t)ˆvn,k,h (x) ,

(t, x) ∈ R

k=0

with vˆ n,k,h (x) = vn,k,h (x) for k < pR and vˆ n,k,h (x) = 0 for k ≥ pR . The proof of (4.5) relies on the application of Fubini’s theorem 

T 0



t 0



T

φ(s)dsdt =

dT (t)φ(t)dt ,

φ ∈ L1 (0, T )

0

and on estimates with respect to the weighting function in time dT (t) = T − t.

(4.6)

54

4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic. . .

In the first step, we show 

Mh ∂t vh , dT vh



  ≤ Mh ∂t vh , dT h vh Q ,   0 ≤ h Ah vh , dT h vh Q ,   0 ≤ h Dh vh , dT h vh Q .

(4.7)

Q

(4.8) (4.9)

Since Ah and Dh are monotone, we obtain (4.8) and (4.9) from Lemma 4.3 applied to N     h Ah vh , dT h vh Q = h Ah vn,h , dT h vn,h (t

n−1 ,tn )×

n=1

=

N

p R −1 p R −1



n=1 R=(tn−1 ,tn )×K k=0

=

p p N 

  λn,k , dT λn,l (t

 n−1 ,tn )

Ah vn,k,h , vn,l,h

 K

l=0

λn,k , dT λn,l

 (tn−1 ,tn )

  Ah vˆ n,k,h , vˆ n,l,h  ≥ 0 ,

n=1 k=0 l=0

  h Dh vh , dT h vh Q =

N

p R −1 p R −1



n=1 R=(tn−1 ,tn )×K k=0

=

p p N 

  λn,k , dT λn,l (t

 n−1 ,tn )

Dh vn,k,h , vn,l,h

 K

l=0

λn,k , dT λn,l

 (tn−1 ,tn )

  Dh vˆ n,k,h , vˆ n,l,h  ≥ 0 .

n=1 k=0 l=0

    For k ≥ 1 we have dT ∂t λn,k , λn,k (t ,tn ) = − t∂t λn,k , λn,k (t ,tn ) < 0 by Lemma 4.2, n−1 n−1 which gives N     dT Mh ∂t vh , vh − h vh Q = dT Mh ∂t vh , vh − h vh (t

n−1 ,tn )×

n=1

=

N



pR   dT ∂t λn,k , λn,pR (t

n−1 ,tn )

  Mh vn,k,h , vn,pR ,h K

n=1 R=(tn−1 ,tn )×K k=0

=

N



n=1 R=(tn−1 ,tn )×K



dT ∂t λn,pR , λn,pR

 (tn−1 ,tn )

  Mh vn,pR ,h , vn,pR ,h K ≤ 0 .

4.3 Inf-Sup Stability

55

Thus we obtain (4.7) by 

Mh ∂t vh , dT vh

 Q

  = dT Mh ∂t vh , vh Q     ≤ dT Mh ∂t vh , h vh Q = Mh ∂t vh , dT h vh Q .

Finally, we show the assertion (4.5). We have for k = 2, . . . , N       vk,h (tk−1 ) = k,h vk−1,h (tk−1 ) ≤ vk−1,h (tk−1 ) , Y Y Y h

h

h

so that for all t ∈ (tn−1 , tn ) using vh (0) = v1,h (0) = 0 n

         k,h vk−1,h (tk−1 ) 2 − vk,h (tk−1 )2 − v1,h (0)2 vh (t)2 = vh (t)2 + Y Y Y Y Y h h

h

h

h

k=2 n

 2      vk−1,h (tk−1 ) 2 − vk,h (tk−1 )2 − v1,h (0)2 ≤ vh (t)Y + Yh Y Y h

h

h

k=2 n−1

  2  2    vk,h (tk )2 − vk,h (tk−1 )2 = vh (t)Y − vn,h (tn−1 )Y + Y Y h

h

h

h

k=1

 =

n−1   ∂s Mh vn,h (s), vn,h (s)  ds +

t tn−1



tk

k=1 tk−1



t

=2



Mh ∂s vh (s), vh (s)

0

 

  ∂s Mh vn,h (s), vn,h (s)  ds

ds

and thus using (4.6), (4.7), (4.8), and (4.9) we obtain (4.5) by  vh 2Wh =

T 0



=2 0

  Mh vh (t), vh (t)  dt ≤ 2 T

 0

T



t



Mh ∂s vh (s), vh (s)

0

 

dsdt

    dT (t) Mh ∂t vh (t), vh (t)  dt = 2 Mh ∂t vh , dT vh Q

    ≤ 2 Mh ∂t vh , dT h vh Q = 2 Mh h ∂t vh , dT h vh Q     = 2 h Mh ∂t vh , dT h vh Q ≤ 2 h Lh vh , dT h vh Q     = 2 Mh−1 h Lh vh , Mh dT h vh Q = 2 h Mh−1 Lh vh , Mh dT h vh Q ≤ 2 h Mh−1 Lh vh Wh dT h vh Wh ≤ 2T h Mh−1 Lh vh Wh vh Wh .

Now we can prove Theorem 4.1.

 

56

4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic. . .

Proof (Theorem 4.1) For vh ∈ Vh \ {0} we have       bh (vh , wh ) = Lh vh , wh Q = Mh−1 Lh vh , wh W = h Mh−1 Lh vh , wh W , h

h

and we test with wh = h Mh−1 Lh vh , so that bh (vh , h Mh−1 Lh vh ) bh (vh , wh ) ≥ = h Mh−1 Lh vh Wh h Mh−1 Lh vh Wh wh ∈Wh \{0} wh Wh sup

≥ (4T 2 + 1)−1/2 vh Vh . using vh 2Vh = vh 2Wh + h Mh−1 Lh vh 2Wh ≤ (4T 2 + 1) h Mh−1 Lh vh 2Wh inserting the estimate (4.5) in Lemma 4.4.  

4.4

Convergence for Strong Solutions

For the error estimate with respect to the norm in Vh we need to extend the norm · Vh such that the error can be evaluated in this norm. For sufficiently smooth functions the operator Ah can be extended by (3.8), so that Lh and thus the norm in Vh is well-defined. Theorem 4.5 Let u ∈ V be the strong solution of Lu = f, and let uh ∈ Vh be the approximation solving (4.4). If the solution is sufficiently smooth, we obtain the a priori error estimate  p+1  u − uh Vh ≤ C t p + x q ∂t u Q + Dq+1 u Q  −1/2  + β −1 Mh (Mh − M)M −1/2∞ ∂t u W  −1/2  + β −1 Mh (Dh − D)M −1/2 ∞ u W for t, x and p, q ≥ 1 with t ≥ tn − tn−1 , x ≥ diam(K), p ≤ pR and q ≤ qR (for all n, K, R), and with a constant C > 0 depending on β = (4T 2 + 1)−1/2, on the material parameters in M, and on the mesh regularity.   Proof For the solution we assume the regularity u ∈ Hp+1 0, T ; L2 (; Rm ) ∩   L2 0, T ; Hq+1 (; Rm ) , hence there exists an interpolant vh ∈ Vh such that  p+1  u − vh Vh ≤ C t p + x q ∂t u Q + Dq+1 u Q .

(4.10)

4.4 Convergence for Strong Solutions

57

Moreover, Ah u is well-defined and consistent satisfying (3.9). We have bh (vh − uh , wh ) = bh (vh , wh ) − bh (uh , wh ) = bh (vh , wh ) − (f, wh )Q = bh (vh , wh ) − b(u, wh ) = (Lh vh , wh )Q − (Lu, wh )Q       = Lh (vh − u), wh Q − Lu, wh Q + Lh u, wh Q     = Lh (vh − u), wh Q − (M − Mh )∂t u, wh Q     − (D − Dh )u, wh Q − (A − Ah )u, wh Q   = Mh h Mh−1 Lh (vh − u), wh Q     − Mh Mh−1 (M − Mh )∂t u, wh Q − Mh Mh−1 (D − Dh )u, wh Q

    ≤ h Mh−1 Lh (vh − u)W + Mh−1 (M − Mh )∂t uW h h  −1  + Mh (D − Dh )uW wh Wh h

and thus the assertion follows from u − uh Vh ≤ u − vh Vh + vh − uh Vh ≤ u − vh Vh + β −1

bh (vh − uh , wh ) wh Wh wh ∈Wh \{0} sup

≤ u − vh Vh

    + β −1 h Mh−1 Lh (vh − u)W + Mh−1 (M − Mh )∂t uW h h  −1  + M (D − Dh )u h

≤ (1 + β −1 ) u − vh Vh

Wh

  −1/2 + β −1 Mh (M − Mh )∂t uQ   −1/2 + β −1 Mh (D − Dh )uQ

by the interpolation estimate (4.10) and   −1/2   −1/2 M (M − Mh )∂t uQ = Mh (M − Mh )M −1/2 M 1/2 ∂t uQ h  −1/2  ≤ Mh (M − Mh )M −1/2 ∞ ∂t u W .

 

58

4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic. . .

Remark 4.6 The estimate is derived for homogeneous initial and boundary conditions. It transfers to the inhomogeneous case if initial and boundary data u0 and gk can be ˆ x) = u0 (x) for x ∈  extended to H(L, Q), i.e., if uˆ ∈ H(L, Q) exists such that u(0, ˆ x))k = gk (t, x) and for (t, x) ∈ (0, T ) × k , k = 1, . . . , m. Then, the and (An u(t, approximation uh ∈ Vh (u0 ) in the affine space (4.2) is computed by bh (uh , wh ) = h , wh  for wh ∈ Wh , and the strong solution with inhomogeneous initial and boundary ˆ Then, the result data is given by u = u˜ + uˆ ∈ H(L, Q), where u˜ ∈ V solves Lu˜ = f − Lu. in Theorem 4.5 can be extended. Remark 4.7 Since the norm (4.3) in Vh + (H1 (Q; Rm ) ∩ V ) is discrete in the derivatives, the topology in the space V with respect to this norm is equivalent to the topology in L2 with mesh dependent bounds for the norm equivalence. Norm equivalence with respect to · V is obtained in the limit: Let (Vh )h∈H be a shape regular family of discrete spaces with 0 ∈ H such that (Vh ∩ V )h∈H is dense in V . Then, defining v VH = suph∈H v Vh yields a norm, and for sufficiently smooth functions v ∈ H1 (Q; Rm ) ∩ V this norm is equivalent to · V .

4.5

Convergence for Weak Solutions

Qualitative convergence estimates with respect to the norm in V ⊂ H(L, Q) require additional regularity, so that these estimates do not apply to weak solutions with discontinuities or singularities. For weak solutions without additional regularity we only can derive asymptotic convergence. Here, this is shown for simplicity only for homogeneous boundary data. The given data are the right-hand side f ∈ L2 (Q; Rm ) and the initial value u0 ∈ Z. We assume that Lemma 2.12 and dual consistency for Ah in Lemma 3.3 is satisfied. In the first step, we show that the inf-sup stability of the Petrov–Galerkin method yields a uniform a priori bound for the approximation. We define the approximation of the initial value by 1,h u0 ; this is extended to the space-time cylinder by defining u0,h (t) = (1 − t/T )n,h u0 for t ∈ (tn−1 , tn ], n = 1, . . . , N, so that Vh (u0 ) = u0,h + Vh . Lemma 4.8 The discrete solution uh ∈ Vh (u0 ) of the variational space-time equation   bh (uh , wh ) = f, wh Q ,

wh ∈ Wh

is bounded by uh Wh ≤ 2T Mh−1 h f Wh + (1 + 2T ) u0,h Vh .

4.5 Convergence for Weak Solutions

59

Proof For vh = uh − u0,h ∈ Vh the estimate vh Wh ≤ 2T h Mh−1 Lh vh Wh in Lemma 4.4 together with 

h Mh−1 Lh vh , wh

 Wh

    = Mh−1 Lh vh , wh W = Lh vh , wh Q h

= bh (uh , wh ) − bh (u0,h , wh )     = f, wh Q − bh (u0,h , wh ) = h f, wh Q − bh (u0,h , wh ) for wh ∈ Wh yields by duality vh Wh ≤

2T h Mh−1 Lh vh Wh 

= 2T

sup

= 2T

h f, wh

 Q

sup

  h Mh−1 Lh vh , wh W

wh ∈Wh \{0}

h

wh Wh

− bh (u0,h , wh )

wh Wh wh ∈Wh \{0}

≤ 2T Mh−1 h f Wh + h Mh−1 Lh u0,h Wh , so that uh Wh ≤ vh Wh + u0,h Wh

≤ 2T Mh−1 h f Wh + h Mh−1 Lh u0,h Wh + u0,h Wh ≤ 2T Mh−1 h f Wh + (1 + 2T ) u0,h Vh .   Next we show that the dual consistency of the DG operator implies dual consistency of the space-time method. For simplicity, we assume that the parameters in M and D are piecewise constant on all cells K ∈ Kh , so that M = Mh and D = Dh , which implies zh Wh = zh W for zh ∈ L2 (0, T ; Yh ). Lemma 4.9 We have     bh (vh , w) = vh , L∗ w Q − M1,h u0 , w  , vh ∈ Vh (u0 ) , w ∈ Wh ∩ V ∗ . Proof We obtain for vh ∈ Vh (u0 ) ⊂ H1 (0, T ; Yh ) and w ∈ Wh ∩ V ∗       bh (vh , w) = M∂t vh , w Q + Dvh , w Q + Ah vh , w Q h     = Mvh (T ), w(T )  − Mvh (0), w(0) 

(4.11)

60

4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic. . .

      − vh , M∂t w Q + vh , Dw Q − vh , Aw Q     = vh , L∗ w Q − M1,h u0 , w(0)  using M = Mh , D = Dh , integration by parts for vh , w ∈ H1 (0, T ; Yh ), w(T ) = 0 in V ∗ , and the dual consistency of the DG operator Ah with upwind flux (see Lemmas 3.2 and 3.3 for acoustics).   Let (Vh , Wh ), h ∈ H ⊂ (0, h0 ), be a dense family of nested discretizations with Vh ⊂ Vh and Wh ⊂ Wh for h < h, h, h ∈ H and 0 ∈ H. We assume that the assumptions in this section are fulfilled for all discretizations, so that (Vh , Wh ) is uniformly inf-sup stable by Theorem 4.1. We only consider the case P1 (Qh ; Rm ) ⊂ Wh , so that Wh ∩ H1 (Q; Rm )  includes the continuous linear elements and thus h∈H (V ∗ ∩ Wh ) is dense in V ∗ . Theorem 4.10 Assume M = Mh , D = Dh , and that u0,h Vh ≤ C is uniformly bounded for all h ∈ H. Then, the discrete solutions (uh )h∈H are weakly converging to the weak solution u ∈ W of the equation     (u, L∗ z)Q = f, z Q + Mu0 , z(0)  ,

z ∈ V∗ .

(4.12)

Proof By Theorem 4.1 uh − u0,h is uniformly bounded in Vh and thus, by Lemma 4.4, (uh )h∈H is uniformly bounded in W , so that a subsequence H0 ⊂ H and a weak limit u ∈ W exists, i.e.,     lim uh , w W = u, w W ,

h∈H0

w∈W.

The assumption that (Wh ∩ V ∗ )h∈H is dense in V ∗ ⊃ V∗ implies that for all z ∈ V∗ there exists a sequence (wh )h∈H with wh ∈ Wh ∩ V ∗ and lim wh − z V ∗ = 0. This implies h∈H

lim wh (0) − z(0) Y = 0, cf. Remark 2.13. Using the weak convergence of uh , the strong

h∈H

convergence of L∗ wh , and Lemma 4.9 yields       u, L∗ z Q = lim uh , L∗ z Q = lim uh , L∗ wh Q h∈H0

h∈H0

   = lim bh (uh , wh + M1,h u0 , wh (0)  h∈H0

        = lim f, wh Q + Mu0 , z(0)  = f, z Q + Mu0 , z(0)  , h∈H0

so that u is a weak solution of (4.12). Since the weak solution is unique by Theorem 2.8 and Lemma 2.12, this shows that the weak limit of all subsequences in H is the unique weak solution, so that the full sequence is convergent.  

4.6 Goal-Oriented Adaptivity

61

Remark 4.11 Since we assume that (u0,h )h∈H is uniformly bounded in Vh , the initial value u0 extends to H(L, Q), and the weak solution is a strong solution.

4.6

Goal-Oriented Adaptivity

In order to find an efficient choice for the polynomial degrees (pR , qR ), we introduce a dual-weighted residual error indicator with respect to a suitable goal functional. Its construction is based on a dual-primal error representation combined with a priori estimates constructed from an approximation of the dual solution. Note that this corresponds to a problem backward in time, so that the resulting error indicator only refines regions of the space-time domain which are relevant for the evaluation of the chosen goal functional. Dual-Primal Error Bound Let E : W −→ R be a linear error functional. Our goal is to estimate and then to reduce the error with respect to this functional. The dual solution u∗ ∈ V ∗ is defined by (w, L∗ u∗ )Q = E, w ,

w∈W.

For the local representation of E we define the pairing in R ∈ Rh vR , wR ∂R = (LvR , wR )R − (vR , L∗ wR )R ,

vR ∈ H(L, R) , wR ∈ H(L∗ , R) .

Lemma 4.12 Let u ∈ V be the solution of Lu = f, and let uh ∈ Vh be the approximation solving (4.4). Then, the error can be represented by E, u − uh  =

  f − Luh , u∗ R + uh , u∗ ∂R . R∈Rh

If the dual solution is sufficiently regular, the error is bounded for all wh ∈ Wh by N % % %E, u − uh % ≤



     f − (Mh ∂t + A + Dh )uh  u∗ − wh  R R

n=1 R=(tn−1 ,tn )×K

  A uh,R − Aupw  + nK un,h (t nK F ∈FK

+

N−1 n=1

n−1 ,tn )×F

 ∗   u − wh 

(tn−1 ,tn )×F

     Mh un,h (tn ) − n+1,h un,h (tn )  u∗ (tn ) − wn+1,h (tn )  



62

4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic. . . N−1   −1/2 un,h (tn ) − n+1,h un,h (tn ) Y u∗ (tn ) Y + Mh (M − Mh )M −1/2 ∞ n=1

 −1/2     + Mh (M − Mh )M −1/2 ∞ ∂t uh W u∗ W  −1/2     + Mh (D − Dh )M −1/2 ∞ uh W u∗ W . Proof We have by definition of u∗ E, u − uh  = (u − uh , L∗ u∗ )Q = (u, L∗ u∗ )Q − (uh , L∗ u∗ )Q = (Lu, u∗ )Q − (uh , L∗ u∗ )Q = (f, u∗ )Q − (uh , L∗ u∗ )R R∈Rh



= (f, u∗ )Q − (Luh , u∗ )R − uh , u∗ ∂R R∈Rh

  f − Luh , u∗ R + uh , u∗ ∂R . = R∈Rh

Using uh (0) = 0 and u∗ (T ) = 0, we obtain N







(M∂t un,h , u )R + (Mun,h , ∂t u )R =

N 

tn

∂t (Mun,h , u∗ ) dt

n=1 tn−1

n=1 R=(tn−1 ,tn )×K

=

N     Mun,h (tn ), u∗ (tn )  − Mun,h (tn−1 ), u∗ (tn−1 )  n=1

=

N−1

    M un,h (tn ) − un+1,h (tn ) , u∗ (tn ) 

n=1

=

N−1

    M un,h (tn ) − n+1,h un,h (tn ) , u∗ (tn ) 

n=1

and in every time slice (tn−1 , tn ) we obtain, if the dual solution u∗ is sufficiently smooth satisfying u∗ |∂Qh ∈ L2 (∂Qh ; Rm ), for the restriction to the space-time skeleton   Aun,h , u∗ (t

n−1 ,tn )×K

  + un,h , Au∗ (t

K∈Kh

=

 K∈Kh

n−1 ,tn )×K

AnK un,h.K , u∗

 (tn−1 ,tn )×∂K

4.6 Goal-Oriented Adaptivity

63

=



AnK un,h.K , u∗

K∈Kh F ∈FK

=



 (tn−1 ,tn )×F

AnK un,h,K − AnK un,h , u∗ upw

K∈Kh F ∈FK



,

(tn−1 ,tn )×F

where un,h,K is the extension of un,h |K to K. This gives, inserting (3.5),

uh , u∗ ∂R =





Lun,h , u∗

 R

  − un,h , L∗ u∗ R

n=1 R=(tn−1 ,tn )×K

R∈Rh

=

N

N

        M∂t un,h , u∗ R + Mun,h , ∂t u∗ R + Aun,h , u∗ R + un,h , Au∗ R

n=1 R=(tn−1 ,tn )×K

=

N−1

    M un,h (tn ) − n+1,h un,h (tn ) , u∗ (tn ) 

n=1



+

  upw AnK uh,R − AnK un,h , u∗ (t

R=(tn−1 ,tn )×K F ∈FK

n−1 ,tn )×F

,

where uh,R is the extension of uh |R to R. For the discrete solution uh ∈ Vh and any discrete test function wh ∈ Wh we have 

f, wh

+

 Q

        = Lh uh , wh Q = Mh ∂t uh , wh Q + Ah uh , wh Q + Dh uh , wh Q     = Mh ∂t uh , wh Q + Dh uh , wh Q

N



  upw    AnK un,h − AnK uh,R , wh,R (t Auh , wh R + F ∈FK

n=1 R=(tn−1 ,tn )×K

 ,

n−1 ,tn )×F

so that 0=

N



       Auh − f, wh R + Mh ∂t uh , wh R + Dh uh , wh R

n=1 R=(tn−1 ,tn )×K

+

 F ∈FK

upw

AnK un,h − AnK uh,R , wh,R

 (tn−1 ,tn )×F

 .

64

4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic. . .

Together, this gives E, u − uh  =



f − Luh , u∗



+ uh , u∗ ∂R R



R∈Rh

=

N

 



f − (Mh ∂t + A + Dh )uh , u∗

 R

n=1 R=(tn−1 ,tn )×K

    − (M − Mh )∂t uh , u∗ R − (D − Dh )uh , u∗ R    upw AnK uh,R − AnK un,h , u∗ (t ,tn )×F + n−1

F ∈FK

+

N−1

    M un,h (tn ) − n+1,h un,h (tn ) , u∗ (tn ) 

n=1

=

N

 



f − (Mh ∂t + A + Dh )uh , u∗ − wh

 R

n=1 R=(tn−1 ,tn )×K

+



AnK uh,R −

upw AnK un,h , u∗

− wh

F ∈FK

+

N−1



 (tn−1 ,tn )×F

    Mh un,h (tn ) − n+1,h un,h (tn ) , u∗ (tn ) − wn+1,h (tn ) 

n=1

+

N−1

    (M − Mh ) un,h (tn ) − n+1,h un,h (tn ) , u∗ (tn ) 

n=1

    − (M − Mh )∂t uh , u∗ Q − (D − Dh )uh , u∗ Q ≤

N



     f − (Mh ∂t + A + Dh )uh  u∗ − wh  R R

n=1 R=(tn−1 ,tn )×K

+

  A uh,R − Aupw  nK un,h (t nK F ∈FK

+

N−1 n=1

n−1

 ∗  u − wh  ,tn )×F (t

 n−1 ,tn )×F

     Mh un,h (tn ) − n+1,h un,h (tn )  u∗ (tn ) − wn+1,h (tn )  

4.6 Goal-Oriented Adaptivity

+

N−1

65

 −1   M (M − Mh ) un,h (tn ) − n+1,h un,h (tn ) Y u∗ (tn ) Y

n=1

        + (M − Mh )∂t uh W ∗ u∗ W − (D − Dh )uh W ∗ u∗ W .  

This yields the assertion.

Dual-Primal Error Indicator For the evaluation of the error bound the exact solution u∗ of the dual problemis required,  of u∗ can be inserted. Then, the interpolation errors u∗ −wh R and for  ∗wh an interpolation  and u − wh (t ,tn )×F can be estimated by the regularity of the dual solution. n−1 Since u∗ ∈ V ∗ cannot be computed exactly, it is approximated by u∗h ∈ Wh solving the discrete dual solution bh (vh , u∗h ) = E, vh  ,

vh ∈ Vh ,

and the regularity of the dual solution is estimated from the regularity of u∗h . Therefore, we compute the L2 projection 0h : L2 (Q; Rm ) −→ P0 (Qh ; Rm ) and the jump terms [0h u∗h ]F with [yh ]F = yh,KF − yh,K on inner faces, ([yh ]F )j = (An yh )j for F ⊂ j∗ ,  and ([yh ]F )j = 0 for F ⊂ ∂ \ j∗ . Then, the error indicator ηh = R∈Rh ηR for R = (tn−1 , tn ) × K is defined by

  ηR = (Mh ∂t + A + Dh )uh − fR

  1/2  + un,h (tn−1 ) − n,h un−1,h (tn−1 ) K hK [0h u∗h ]F (t ,tn )×∂K n−1   0 ∗   upw +  AnK − AnK uh (tn−1 ,tn )×∂K [h uh ]F (t ,tn )×∂K . n−1

Depending on threshold parameters 0 < ϑ0 < ϑ0 < 1 this results in the following padaptive algorithm: 1: choose low order polynomial degrees on the initial mesh 2: while maxR (pR ) ≤ pmax and maxR (qR ) ≤ qmax do 3: compute uh 4: compute u∗h and the projection 0h u∗h 5: compute ηR on every cell R 6: if the estimated error ηh is small enough, then STOP 7: mark space-time cell R for refinement if ηR > ϑ1 maxR  ηR 

and for derefinement if ηR < ϑ0 maxR  ηR  8: increase/decrease polynomial degrees on marked cells 9: redistribute cells on processes for better load balancing 10: end while

66

4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic. . .

4.7

Reliable Error Estimation for Weak Solutions

Finally, we derive a posteriori estimates for weak solutions based on local conforming reconstructions. Here we consider the general case including inhomogeneous initial and boundary data, where initial data are included in the definition of the affine ansatz space (4.2), and the DG formulation for boundary data is derived in (3.6), see (3.13) for an example. For simplicity, we assume that the parameters in M and D are piecewise constant, so that M = Mh and D = Dh . For the data f ∈ L2 (Q; Rm ), u0 ∈ L2 (; Rm ), gk ∈ L2 ((0, T ) × k ) defining the linear functional  by m     gk , zk (0,T )× , , z = (f, z)Q + Mu0 , z(0)  −

z ∈ V∗ ,

k

k=1

we select piecewise polynomial approximations fh ∈ P(Qh ; Rm ), u0,h ∈ P(h ; Rm ), and gk,h ∈ P((0, T ) × k ) defining the approximated linear functional h by m     gk,h , zk,h (0,T )× , h , zh  = (fh , zh )Q + Mu0,h , zh (0)  − k

zh ∈ V ∗ .

k=1

We assume that  is bounded by (2.12) so that a unique weak solution u ∈ W of   u, L∗ z Q = , z ,

z ∈ V∗

exists by Theorem 2.8 and Corollary 2.17. For the approximation uh ∈ Vh (u0,h ) solving     bh (uh , wh ) = fh , wh Q − Abnd n gh , wh (0,T )×∂ ,

wh ∈ Wh ,

we now construct a conforming reconstruction in a continuous finite element space Vhcf ⊂ H(L, Q) ∩ P(Qh ; Rm ) as described in the following. Here, we set for the right-hand side gh = (gk,h )k=1,...,m ∈ L2 ((0, T ) × ∂; Rm ) with gk,h = 0 on ∂ \ k . The reconstruction is defined on local patches associated to the corners of the spacetime mesh. Therefore, let CK ⊂ K be the corner points in space of the elements K ∈ Kh  such that K = conv CK , and define Ch = K∈Kh CK . For all c ∈ Ch we define Kh,c =    K ∈ Kh : c ∈ CK and open subdomains ωc ⊂  with ωc = K∈Kh,c K. This extends to space-time patches Q0,c = (0, t1 ) × ωc , Qn,c = (tn−1 , tn+1 ) × ωc for n = 1, . . . , N − 1, and QN,c = (tN−1 , T )×ωc . Let ψn,c ∈ C0 (Q)∩P(Qh ) be a corresponding decomposition   of 1 ≡ N n=0 c∈Ch ψn,c with supp ψn,c = Qn,c .

4.7 Reliable Error Estimation for Weak Solutions

67

On every patch we define discrete conforming local affine spaces  cf Vn,c (u0,h , gh ) = vh ∈ Vhcf : supp(vh ) ⊂ Qn,c , vh (0) = ψn,c u0,h in  if n = 0 , vh (tn−1 ) = 0 in  if n > 0 , vh (tn+1 ) = 0 in  if n < N , (An vh )k = ψn,c gk,h on (0, T ) × k , k = 1, . . . , m ,  An vh = 0 on (0, T ) × (∂ωc \ ∂) . cf (u , g ) = ∅, which can be achieved by a suitable choice In the following we assume Vn,c 0,h h of the data approximation u0,h and gk,h depending on the reconstruction space Vhcf . Now, the local conforming reconstruction of the discrete solution uh is defined by ucf h = N  cf , where ucf ∈ V cf (u , g ) is the best approximation of ψ u in the u n,c h n,c n,c 0,h h n=0 c∈Ch n,c topology of W , i.e.,

    ψn,c uh − ucf  ≤ ψn,c uh − vn,c  , n,c W W

cf vn,c ∈ Vn,c (u0,h , gh ) ,

so that ucf n,c is determined by a small local quadratic minimization problem with linear constraints. Lemma 4.13 The approximation error of the weak solution can be estimated by       u − uh  ≤ uh − ucf  + 2T Lucf − fh  ∗ + β −1 h W h W W

 − h , z . z V ∗ z∈V∗ \{0} sup

cf Proof By construction we have for ucf n,c ∈ Vn,c (u0,h , gh )

ucf h (0) =

N

ψn,c u0,h = u0,h

in  ,

ψn,c gk,h = gk,h

on (0, T ) × k , k = 1, . . . , m ,

n=0 c∈Ch



An ucf h

 k

=

N n=0 c∈Ch

so that for all z ∈ V∗ integration by parts and the boundary conditions in V∗ gives        cf ∗  cf cf uh , L z Q = Lucf h , z Q + Muh (0), z(0)  − An uh , z (0,T )×∂   = Lucf h − fh , z Q + h , z .

68

4 A Petrov–Galerkin Space-Time Approximation for Linear Hyperbolic. . .

Since L2 (Q; Rm ) = M −1 L∗ (V ∗ ) and V∗ ⊂ V ∗ is dense, we obtain by duality  u −

ucf h W

= = ≤

 M(u − ucf h ), v Q

  ∗ u − ucf h ,L z Q

= sup −1 ∗ v W z∈V∗ : L∗ z=0 M L z W   cf Luh − fh , z Q +  − h , z

sup

v∈L2 (Q;Rm )\{0}

sup

z∈V∗ : L∗ z=0

sup

L∗ z W ∗  cf    Lu − fh  ∗ z h W W

z∈V∗ : L∗ z=0

L∗ z W ∗

  −1  ≤ 2T Lucf h − fh W ∗ + β

+

 − h , z ∗ z∈V∗ : L∗ z=0 L z W ∗ sup

 − h , z z V ∗ z∈V∗ : L∗ z=0 sup

using the a priori estimate z W ≤ √ 2T L∗ z W ∗ from Remark 2.13 with CL = 2T and −1 ∗ −1 z V ∗ ≤ β L z W ∗ with β = 1 + 4T 2 from Corollary 2.17, so that cf u − uh W ≤ u − ucf h W + uh − uh W

 

yields the assertion. This lemma shows that the corresponding error estimator with local contributions ηn,K =



2 2 ηn−1,c + ηn,c

1/2 ,

c∈CK

 2  −1/2 2 1/2    (Lucf , ηn,c = M 1/2 (ψn,c uh − ucf n,c ) Qn,c + 2T M h − fh ) Qn,c is reliable up to the data approximation error, i.e.,   u − uh 

W



N

n=0 K∈Kh

2 ηn,K

1/2

+ β −1

 − h , z . z V ∗ z∈V∗ \{0} sup

Bibliographic Comments This chapter is based on [48, 49], where also numerical results for the adaptive algorithm are presented. Further applications and several numerical applications are reported in [50, 71, 169]. The extension to estimates for weak solutions is based on the construction of a rightinverse as it is done in [62] for conforming Petrov–Galerkin approximations in reflexive Banach spaces. The estimate for the Legendre polynomials can also be obtained recursively using [1, Lem. 8.5.3], see, e.g., [48, Lem. A.1].

4.7 Reliable Error Estimation for Weak Solutions

69

The error estimation based on dual-weighted residuals transfers the approach in [10] to our space-time framework, and for the general concepts on error estimation by conforming reconstructions we refer to [63]. The results are closely related to the analysis of space-time discontinuous Galerkin methods for acoustics in [8, 98, 127]. Alternative concepts for space-time discretizations for wave equations are collected in [115]. See also the results in [6, 75] and more recently in [139, 160], and the references therein.

Part II Local Wellposedness and Long-Time Behavior of Quasilinear Maxwell Equations Roland Schnaubelt

The Maxwell system is the foundation of electro-magnetic theory. We develop the local wellposedness theory for the (non-autonomous) linear and the quasilinear Maxwell equations in the Sobolev space H 3 , which is the natural state space for energy methods in this case. On R3 these results directly follow from the standard theory of symmetric hyperbolic systems, whereas it was shown only recently on domains. In the first chapter we present the theory on R3 in detail, using energy methods. We also treat the finite speed of propagation and, for the isotropic Maxwell system, a blow-up example in H 1 . The second chapter is devoted to the problem with boundary conditions focusing on the halfspace case. Here the main challenge is to establish regularity in normal direction at the boundary, employing the special structure of the Maxwell system. The general case is studied via localization, where severe additional difficulties arise, so that this step is only sketched. The last chapter then combines these results and methods with an observabilitytype estimate to show global existence and exponential convergence to 0 for small initial fields in the presence of a strictly positive conductivity.

Introduction and Local Wellposedness on R3

5

In this section we develop a local wellposedness theory for the quasilinear Maxwell equations on R3 . Our approach is based on energy methods and a fixed-point argument, which make use of the linear system with time-depending coefficients. One has to work in Sobolev spaces H s with s > 52 in this context, where we take s = 3 for simplicity. Actually we treat general symmetric hyperbolic systems on R3 . In the first subsection we introduce Maxwell equations and discuss some facts used throughout these notes. We then investigate the linear case, first in L2 and then in H 3 , also establishing the finite speed of propagation. Our main tools are energy estimates, duality arguments for existence in L2 , approximation by mollifiers for regularity and uniqueness, and finally a transformation from L2 to H 3 . The non-linear problem is solved by means of fixed-point arguments going back to Kato [106] at least, where the derivation of blow-up conditions in W 1,∞ and the continuous dependence of data in H 3 requires significant additional effort. Finally, for the isotropic Maxwell system, we show the preservation of energy and construct a blow-up example in H 1 . The wellposedness results on R3 are due to Kato [105], but our proof differs from Kato’s and instead uses energy methods from the theory of symmetric hyperbolic PDE, see [7, 11, 26, 123], for instance. This approach is well known, but we think that a detailed presentation in form of lecture notes is quite helpful. In particular, it gives us the opportunity to discuss in a simpler situation some core features of the domain case treated in the Chap. 6.

5.1

The Maxwell System

The Maxwell equations relate the electric field E(t, x) ∈ R3 , the (electric) displacement field D(t, x) ∈ R3 , the magnetic field B(t, x) ∈ R3 and the magnetizing field H(t, x) ∈ R3 © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 W. Dörfler et al., Wave Phenomena, Oberwolfach Seminars 49, https://doi.org/10.1007/978-3-031-05793-9_5

73

5 Introduction and Local Wellposedness on R3

74

via the Maxwell–Ampère and Maxwell–Faraday laws ∂t D = curl H − J,

∂t B = − curl E,

t ≥ 0, x ∈ G,

(5.1)

where G ⊆ R3 is open and J(t, x) ∈ R3 is the current density. (See e.g. [100] for the background in physics.) If G = R3 we have to add boundary conditions to (5.1) which are discussed in the next chapter. We use the standard differential expressions ⎞⎛ ⎞ u1 0 −∂3 ∂2 ⎟⎜ ⎟ ⎜ curl u = ∇ × u = ⎝ ∂3 0 −∂1 ⎠ ⎝u2 ⎠ , −∂2 ∂1 0 u3 ⎛

div u = ∇ · u = ∂1 u1 + ∂2 u2 + ∂3 u3 ,

where the derivatives are interpreted in a weak sense if needed (see Sect. 5.2). Since div curl = 0, solutions to (5.1) fulfill Gauß’ laws 

t

ρ(t) := div D(t) = div D(0) −

div J(s) ds,

div B(t) = div B(0), t ≥ 0.

(5.2)

0

The electric charge density ρ is thus determined by the initial charge and the current density. As there are no magnetic charges in physics, one often requires div B(0) = 0. To complete the Maxwell system (5.1), we have to connect the fields via material laws. They involve the polarization P = D − ε0 E and the magnetization M = B − μ0 H which describe the material response to the fields E and B. Below we set ε0 = μ0 = 1 for simplicity (thus destroying physical units). In these notes we use instantaneous constitutive relations, namely (D, B) = θ (x, E, H) = θ (x, u)

for regular θ : G × R6 → R6 ,

(5.3)

We choose u = (E, H) as state because this fits best to energy estimates. Other choices are possible since transformations like θ (x, ·) are typically invertible. Our main hypothesis will be that ∂u θ (x, u) =: a0 (x, u) is symmetric and a0 ≥ ηI > 0. Finally, the current is modelled as the sum J = σ (x, E, H)E + J0

(5.4)

of a given external current density J0 : R≥0 × G → R3 and a current induced via Ohm’s law for a (possibly state-depending) conductivity σ : G × R6 → R3×3 . Example 5.1 A basic example in nonlinear optics is the (instantaneous) Kerr law D = χ1 (x)E + χ3 (x)|E|2 E,

H = B,

5.1 The Maxwell System

75

for bounded functions χ_j : G → R with χ_1(x) ≥ 2η > 0 for all x, see e.g. [18] and also Example 5.20. It is isotropic; i.e., D(t, x) and E(t, x) are parallel. The Kerr law satisfies our assumption a_0 = a_0^T ≥ ηI for small E (and for all E if χ_3 ≥ 0). The latter also holds for the more general laws D = χ_e(x)E + β_e(x, |E|^2)E and H = χ_m(x)B + β_m(x, |B|^2)B for 3 × 3 matrices χ_j = χ_j^T ≥ 2ηI and smooth scalar β_j with β_j(0) = 0. In physics material laws often also contain a time retardation, see [19] or [68]. Here we stick to the instantaneous case which stays within the PDE framework. (But we expect to tackle the Maxwell system with retardation with variants of our methods.)

It is often convenient to rewrite (5.1) with (5.3) and (5.4) as a quasilinear symmetric hyperbolic system. To this end, we first introduce the matrices

S_1 = [0, 0, 0; 0, 0, −1; 0, 1, 0],    S_2 = [0, 0, 1; 0, 0, 0; −1, 0, 0],    S_3 = [0, −1, 0; 1, 0, 0; 0, 0, 0]

satisfying

curl = S_1 ∂_1 + S_2 ∂_2 + S_3 ∂_3,    a × b = (a_1 S_1 + a_2 S_2 + a_3 S_3) b

for vectors a, b ∈ R^3. We then define

A_j^co = [0, −S_j; S_j, 0],    d = diag(σ, 0),    f = (−J_0, 0),    ∂_t = ∂_0    (5.5)

for j ∈ {1, 2, 3}. Note that the matrices A_j^co are symmetric. Then the Maxwell system (5.1) with material laws (5.3) and (5.4) becomes

L(u)u := a_0(u) ∂_t u + ∑_{j=1}^{3} A_j^co ∂_j u + d(u)u = f.    (5.6)
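To make the block structure in (5.5) and (5.6) concrete, here is a short sketch (an illustration only, assuming NumPy) that assembles S_1, S_2, S_3 and A_j^co, checks the cross-product identity a × b = (a_1 S_1 + a_2 S_2 + a_3 S_3)b on random vectors, and confirms that each A_j^co is symmetric.

```python
import numpy as np

# The matrices S_j from (5.5): curl = S_1 d_1 + S_2 d_2 + S_3 d_3.
S = [
    np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float),   # S_1
    np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float),   # S_2
    np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float),   # S_3
]

# Block matrices A_j^co = [[0, -S_j], [S_j, 0]] acting on u = (E, H).
A = [np.block([[np.zeros((3, 3)), -Sj], [Sj, np.zeros((3, 3))]]) for Sj in S]

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# Cross-product identity a x b = (a_1 S_1 + a_2 S_2 + a_3 S_3) b.
lhs = np.cross(a, b)
rhs = sum(a[j] * S[j] for j in range(3)) @ b
assert np.allclose(lhs, rhs)

# Each A_j^co is symmetric because S_j is antisymmetric.
assert all(np.allclose(Aj, Aj.T) for Aj in A)
print("cross-product identity and symmetry of A_j^co verified")
```

In the same spirit one can check for the Kerr law of Example 5.1 that the Jacobian a_0(E) = ∂_E D = χ_1 I + χ_3( |E|^2 I + 2 E E^T ) is symmetric and positive definite for small |E|.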

Our strategy to solve this problem goes (at least) back to Kato [106]. One freezes a function v from a suitable space E in the nonlinearities, setting A0 = a0 (v) and D = d(v). One then  solves the resulting non-autonomous linear problem L(v)u = 3j =0 Aj ∂j u + Du = f in the space E. For small times (0, T ) one finds a fixed point of the map v "→ u which then solves (5.6) and (5.1). The first linear step is more difficult; here it is crucial to control very well how the constants in the estimates depend on the coefficients. We carry out this program for G = R3 in the following sections.


5.2 The Linear Problem on R^3 in L^2

Let J = (0, T). We will solve the linear problem in the space C(J̄, L^2(R^3, R^6)) = C(J̄, L^2_x) for coefficients and data subject to the assumptions

A_j = A_j^T ∈ W^{1,∞}_{t,x} = W^{1,∞}(J × R^3, R^{6×6}) for j ∈ {0, 1, 2, 3},    A_0 ≥ ηI > 0,
D ∈ L^∞_{t,x} = L^∞(J × R^3, R^{6×6}),    u_0 ∈ L^2_x,    f ∈ L^2_{t,x} = L^2(J × R^3, R^6).    (5.7)

(We often omit range spaces as R6 in the notation. We use the subscript t to indicate a function space over t ∈ J or other time intervals, and x for a space over x ∈ R3 (or over x ∈ U ⊆ Rm ).) Compared to (5.6) we allow for D and f with non-zero ‘magnetic’ components, as needed in our analysis. We also deal with general (symmetric) x-depending coefficients A1 , A2 and A3 , and thus with linear symmetric hyperbolic systems. Those occur in many applications, see [11, 106, 123], or Part I, and our reasoning would not differ much if we restricted to Aj = Aco j . Moreover, when treating the Maxwell system on domains by localization arguments, one obtains x-depending coefficients. It is useful to see them first in an easier case. Assuming (5.7), we look for a solution u ∈ C(J , L2x ) of the system Lu :=

∑_{j=0}^{3} A_j ∂_j u + Du = f,    t ≥ 0,    u(0) = u_0,    (5.8)

with ∂0 = ∂t . Here the derivatives are understood in a weak sense. To explain this, we assume that the reader is familiar with Sobolev spaces W k,p (U ) = W k,p for an open subset U of Rm , k ∈ N0 , and p ∈ [1, ∞]. (See [2] or [17], for instance.) We mostly work  p α p with real scalars, endow W k,p with the (complete) norm v k,p = 0≤|α|≤k ∂ v p (obvious modification for p = ∞), and write H k := W k,2 (which is a Hilbert space), k,p Lp = W 0,p and v p := v 0,p . By W0 (U ) we denote the closure of test functions k,p Cc∞ (U ) in W k,p (U ). If ∂U is compact and C k (or Lipschitz if k = 1)), say, then W0 is the closed subspace in W k,p of functions whose (weak) derivatives of order up to k − 1 k,p have trace 0. One can check that W0 (Rm ) = W k,p (Rm ). −k Let H (U ) be the dual space H0k (U )∗ , where we restrict ourselves to p = 2 for simplicity. For ϕ ∈ L2 (U ), j ∈ {1, . . . , m} and v ∈ H01 (U ), we define the weak derivative ∂j ϕ ∈ H −1 (U ) by setting (∂j ϕ)(v) = v, ∂j ϕH 1 := −∂j v, ϕL2 . 0

(The brackets ·, ·X designate the duality pairing between Banach spaces X and X∗ .) Since |∂j v, ϕ| ≤ v 1,2 ϕ 2 , the linear map ∂j : L2 (U ) → H −1 (U ) is bounded. Iteratively, one obtains bounded maps ∂j : H −k (U ) → H −k−1 (U ), and analogously


∂ α : H −k (U ) → H −k−|α| (U ) for multi-indices α ∈ Nm 0 and k ∈ N0 . The definitions imply that these derivatives commute. For a ∈ W 1,∞ (U ) and ϕ ∈ H −1 (U ), we next define the map aϕ ∈ H −1 (U ) by (aϕ)(v) = v, aϕH 1 := av, ϕH 1 , 0

v ∈ H01 (U ).

0

Because of av 1,2  a 1,∞ v 1,2 , we see as above that the multiplication operator Ma : ϕ "→ aϕ is bounded on H −1 (U ). (Here and below A α B stands for A ≤ cB for a generic constant c = c(α) which is non-decreasing in each component of α ∈ Rn .) These facts easily extend to Rl –valued functions. −1 We infer that Lu ∈ Ht,x if u ∈ L2t,x . If Lu = f is contained in L2t,x , we obtain ∂t u = A−1 0 f −

∑_{j=1}^{3} A_0^{−1} A_j ∂_j u − A_0^{−1} D u ∈ L^2_t H_x^{−1} := L^2(J, H_x^{−1}).    (5.9)

As u belongs to H^1_t H_x^{−1} ↪ C(J̄, H_x^{−1}), the initial condition in (5.8) is taken in H_x^{−1}. We will first show the basic energy (or a priori) estimate. Here we use the temporal weights e_{−γ}(t) = e^{−γt} for γ ≥ 0 and t ∈ J (or t ∈ R) and the weighted spaces L^2_γ H^k_x of functions with (finite) norm ‖v‖_{L^2_γ H^k_x} = ‖e_{−γ} v‖_{L^2_t H^k_x}. On J these norms are equivalent to the unweighted case as ‖v‖_{L^2_γ H^k_x} ≤ ‖v‖_{L^2_t H^k_x} ≤ e^{γT} ‖v‖_{L^2_γ H^k_x}. Taking large γ in these norms, we can produce small constants in front of the contribution of f in the inequality below. This fact will be used to absorb error terms by the left-hand side, for instance. The estimate and the precise form of the constants is also crucial for the nonlinear problem. We write div A = ∑_{j=0}^{3} ∂_j A_j.

Lemma 5.2 Assume that (5.7) is true and that u ∈ H^1(J × R^3) solves (5.8). Let C := (1/2) div A − D, γ ≥ γ_0(L) := max{1, 4‖C‖_∞/η}, and t ∈ J̄. We then obtain

(γη/4) ‖u‖²_{L^2_γ((0,t),L^2_x)} + (η/2) e^{−2γt} ‖u(t)‖²_{L^2_x} ≤ (1/2) ‖A_0(0)‖_∞ ‖u_0‖²_{L^2_x} + (1/(2γη)) ‖f‖²_{L^2_γ((0,t),L^2_x)}.

Proof Set v = e_{−γ} u and g = e_{−γ} f. We have γ A_0 v + Lv = g. Using the symmetry of A_j, we derive

⟨g, v⟩ = γ⟨A_0 v, v⟩ + ∑_{j=0}^{3} ⟨A_j ∂_j v, v⟩ + ⟨Dv, v⟩
       = γ⟨A_0 v, v⟩ + ∑_{j=0}^{3} (1/2) ( ∫_0^t ∫_{R^3} ∂_j(A_j v · v) dx ds − ⟨∂_j A_j v, v⟩ ) + ⟨Dv, v⟩,


where we drop the subscript L^2((0,t), L^2_x) of the brackets and denote the scalar product in R^6 by a dot. Integration yields

γ⟨A_0 v, v⟩ + (1/2)⟨A_0(t)v(t), v(t)⟩_{L^2_x} = (1/2)⟨A_0(0)v(0), v(0)⟩_{L^2_x} + ⟨Cv, v⟩ + ⟨g, v⟩.

We now replace v = e_{−γ} u, g = e_{−γ} f as well as u(0) = u_0, and use (5.7) and ‖C‖_∞ ≤ γη/4. It follows

γη ‖u‖²_{L^2_γ L^2_x} + (η/2) e^{−2γt} ‖u(t)‖²_{L^2_x}
  ≤ (1/2)‖A_0(0)‖_∞ ‖u_0‖²_{L^2_x} + ‖C‖_∞ ‖u‖²_{L^2_γ L^2_x} + √(γη) ‖u‖_{L^2_γ L^2_x} · (1/√(γη)) ‖f‖_{L^2_γ L^2_x}
  ≤ (1/2)‖A_0(0)‖_∞ ‖u_0‖²_{L^2_x} + (γη/4) ‖u‖²_{L^2_γ L^2_x} + (γη/2) ‖u‖²_{L^2_γ L^2_x} + (1/(2γη)) ‖f‖²_{L^2_γ L^2_x},

which implies the assertion. □

Below we use the above estimate for γ ≥ γ_0(r, η) := max{1, 12r/η} ≥ γ_0(L) where ‖∂_j A_j‖_∞, ‖D‖_∞ ≤ r. For γ = 0 its proof yields the energy equality

∫_{R^3} A_0(t)u(t) · u(t) dx = ∫_{R^3} A_0(0)u_0 · u_0 dx + 2 ∫_0^t ∫_{R^3} ( C(s)u(s) + f(s) ) · u(s) dx ds.    (5.10)

In the term with C = 12 div A − D we have damping effects (if D = D  0) and extra errors terms coming from the t- or x-dependence of Aj . Lemma 5.2 yields uniqueness of H 1 –solutions to (5.8). However, we need uniqueness (and the energy estimate) for solutions in C(J , L2x ). This fundamental gap can be closed by a crucial regularization argument based on mollifiers, see e.g. [17]. We set gε (x) = ε−m g(ε−1 x) for a function g on Rm , ε > 0, and x ∈ Rm . Take  0 ≤ ρ ∈ Cc∞ (Rm ) with ρ dx = 1, support supp ρ in the closed unit ball B(0, 1), and ρ(x) = ρ(−x) for x ∈ Rm . Note that ρε 1 = 1. For ε > 0 and v ∈ L1loc (Rm ), we set  Rε v(x) = ρε ∗ v(x) =

∫_{R^m} ρ_ε(x − y) v(y) dy,    x ∈ R^m.

One can check that Rε v ∈ C ∞ (Rm ), supp Rε v ⊆ supp v + B(0, ε), and ∂ α Rε v = Rε ∂ α v for v ∈ W |α|,p (Rm ). Young’s inequality for convolutions yields Rε v k,p ≤ v k,p for p ∈ [1, ∞] and k ∈ N0 . Using this estimate, one derives that Rε v → v in W k,p (Rm ) for v ∈ W k,p (Rm ) as ε → 0 if p < ∞, since this limit is true for test functions v. Differentiating ρε (x − y) in x, one also obtains Rε v k,p ε,k v p .
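The smoothing effect of R_ε is easy to observe numerically. The following sketch (an illustration only; the bump kernel ρ, the grid and the test function are invented for this purpose) forms the discrete analogue of R_ε v = ρ_ε ∗ v on a 1d grid and prints the L^2-distance to the original, which shrinks as ε → 0.

```python
import numpy as np

def bump(x):
    """Smooth, even kernel rho supported in [-1, 1] (normalized on the grid below)."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

h = 1e-3
x = np.arange(-2.0, 2.0, h)
v = np.where(x > 0.0, 1.0, 0.0) + 0.3 * np.sin(20.0 * x)    # rough (discontinuous) function

for eps in (0.5, 0.1, 0.02):
    y = np.arange(-eps, eps + h, h)
    rho_eps = bump(y / eps) / eps              # rho_eps(y) = eps^{-1} rho(y / eps)
    rho_eps /= np.sum(rho_eps) * h             # enforce integral = 1 on the grid
    Rv = np.convolve(v, rho_eps, mode="same") * h            # (rho_eps * v)(x)
    err = np.sqrt(np.sum((Rv - v) ** 2) * h)                 # discrete L^2 distance
    print(f"eps = {eps:5.2f}:  ||R_eps v - v||_L2 = {err:.3f}")
```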


Finally, for ϕ ∈ H −k (Rm ), v ∈ H k (Rm ) and k ∈ N, we put (Rε ϕ)(v) = v, Rε ϕH k := Rε v, ϕH k . This definition is consistent with the symmetry Rε∗ = Rε on L2 (Rm ) which follows from the symmetry of ρ and Fubini’s theorem. By means of its properties in H k (Rm ), one can show that Rε is contractive on H −l (Rm ) and that it maps this space into H k (Rm ) for all l ∈ N. Moreover, it commutes with ∂ α . Hence, the commutator [Rε , Ma ] = Rε Ma −Ma Rε tends to 0 strongly in L2x if a ∈ L∞ x . 1,∞ It even gains a derivative if a ∈ Wx , which is crucial for our analysis. Proposition 5.3 Let a ∈ W 1,∞ (Rm ), u ∈ L2 (Rm ), j ∈ {1, . . . , m}, and ε > 0. Set Cε u := Rε (a∂j u) − a∂j (Rε u). Then there is a constant c = c(ρ) such that Cε u 2 ≤ c a 1,∞ u 2

and    C_ε u → 0 in L^2_x as ε → 0.

Proof Let v ∈ H^1(R^m). Using the above indicated facts, we compute

⟨v, C_ε u⟩_{H^1} = ⟨a R_ε v, ∂_j u⟩_{H^1} − ⟨a v, R_ε ∂_j u⟩_{H^1} = ⟨∂_j(R_ε(av) − a R_ε v), u⟩_{L^2}.

We set C̃_ε v := ∂_j(R_ε(av) − a R_ε v) and write R_ε^j for the convolution with (|∂_j ρ|)_ε. For a.e. x ∈ R^m, differentiation and |x − y| ≤ ε yield

C̃_ε v(x) = ∫_{B(x,ε)} ε^{−m} (∂_j ρ)(ε^{−1}(x − y)) ε^{−1}(a(y) − a(x)) v(y) dy − ∂_j a(x) R_ε v(x),
|C̃_ε v(x)| ≤ ‖a‖_{1,∞} ( |R_ε^j v(x)| + |R_ε v(x)| ).

(Recall that W 1,∞ (Rm ) is isomorphic to the space of bounded Lipschitz functions, [17].) Young’s inequality now implies the first assertion. The second one is true for u in the dense   subspace H 1 (Rm ) and thus on L2 (Rm ) by the uniform estimate. With this tool at hand we can extend Lemma 5.2 to all solutions of (5.8) in C(J , L2x ). Corollary 5.4 Let (5.7) hold and u ∈ C(J , L2x ) solve (5.8). Then the statement of Lemma 5.2 and (5.10) are also valid for u. Hence, (5.8) has at most one solution in C(J , L2x ). Proof We note that Rε u belongs to C(J , Hxk ) for all ε > 0 and k ∈ N. Moreover, Rε u tends to u in C(J , L2x ) as ε → 0 since u(J ) is compact and Rε → I strongly in L2x . As Rε f (t) 2 ≤ f (t) 2 , dominated convergence also yields Rε f → f in L2t,x . Using


Lu = f and (5.9), we compute LRε u = Rε f + [D, Rε ]u +

∑_{j=1}^{3} [A_j, R_ε] ∂_j u + [A_0, R_ε] ∂_t u
  = R_ε f + [D, R_ε] u + [A_0, R_ε] A_0^{−1}(f − Du) + ∑_{j=1}^{3} ( [A_j, R_ε] − [A_0, R_ε] A_0^{−1} A_j ) ∂_j u.    (5.11)

Proposition 5.3 shows that the right-hand side belongs to L^2_{t,x} with uniform bounds. Hence, R_ε u is also contained in H^1_t L^2_x by (5.9). Arguing as above, we further see that the commutator terms tend to 0 in L^2_{t,x} and thus in L^2_γ L^2_x. Lemma 5.2 and (5.10) for R_ε u now lead to the first assertion letting ε → 0. The second one follows from linearity. □

Combining the energy estimate with a clever duality argument, one can also deduce the existence of a solution.

Theorem 5.5 Let (5.7) be true. Then there is a unique function u in C(J̄, L^2_x) solving (5.8). It satisfies the estimate in Lemma 5.2 and (5.10).

Proof (1) We need the (formal) adjoint L° = −∑_{j=0}^{3} A_j ∂_j + D° of L with D° = D − div A. Let V = { v ∈ H^1(J × R^3, R^6) | v(T) = 0 }, v ∈ V, and L°v = h. We introduce ṽ(t) = v(T − t) and f̃(t) = h(T − t) for t ∈ J and the operator L̃ with coefficients Ã_0(t) = A_0(T − t), Ã_j(t) = −A_j(T − t) for j ∈ {1, 2, 3} and D̃(t) = D°(T − t). Note that L̃ṽ = f̃ and ṽ(0) = 0. Applied to L̃, ṽ and γ = γ_0(r, η) at time T − t, Lemma 5.2 yields the estimate

‖v(t)‖_2² = ‖ṽ(T − t)‖_2² ≤ (2 e^{2γ(T−t)} / (η · 2ηγ)) ∫_0^{T−t} e^{−2γτ} ‖h(T − τ)‖_2² dτ ≤ (e^{2γT} / (γη²)) ∫_t^T ‖h(s)‖_2² ds,

and hence

‖v‖_{L^2_{t,x}} ≤ κ √T ‖L°v‖_{L^2_{t,x}},    κ := e^{γT} / (η √γ).    (5.12)

In particular, L° : V → L^2(J × R^3)^6 is injective. We can thus define the functional

ℓ_0 : L°V → R;    ℓ_0(L°v) = ⟨v, f⟩_{L^2_{t,x}} + ⟨v(0), A_0(0)u_0⟩_{L^2_x}.

The Cauchy–Schwarz inequality and estimate (5.12) imply

|ℓ_0(L°v)| ≤ ( ‖f‖_{L^2_{t,x}} + ‖A_0(0)u_0‖_{L^2_x} ) κ (√T + 1) ‖L°v‖_{L^2_{t,x}}.


By the Hahn–Banach theorem, ℓ_0 has an extension ℓ in (L^2_{t,x})^∗ which can be represented by a function u ∈ L^2(J, L^2_x) via

⟨v, f⟩_{L^2_{t,x}} + ⟨v(0), A_0(0)u_0⟩_{L^2_x} = ℓ(L°v) = ⟨L°v, u⟩_{L^2_{t,x}}
  = ⟨v, Du⟩ − ∑_{j=0}^{3} ∫_0^T ∫_{R^3} ∂_j(A_j v) · u dx dt    (∀ v ∈ V).    (5.13)

(2) To evaluate (5.13), we first take v ∈ H^1_0(J × R^3). The definition of weak derivatives then leads to ⟨v, f⟩_{L^2} = ⟨v, Lu⟩_{H^1_0}; i.e., Lu = f in H^{−1}_{t,x}. Hence, u belongs to H^1_t H_x^{−1} because of (5.9) and f ∈ L^2_{t,x}. For v ∈ V, we can now integrate by parts the summand in (5.13) with j = 0 in H_x^{−1}; the others are treated as before. As v(T) = 0, it follows

⟨v, f⟩_{L^2_{t,x}} + ⟨v(0), A_0(0)u_0⟩_{L^2_x} = ⟨v, Lu⟩_{H^1_0} + ⟨A_0(0)v(0), u(0)⟩_{L^2_x}.

Since A0 (0) is symmetric and Lu = f , we have also shown that u(0) = u0 . (3) We next use (5.11) for wn,m = R1/n u − R1/m u. As in the proof of Corollary 5.4, 1 and satisfies Lw 2 Proposition 5.3 implies that wn,m is contained in Ht,x n,m → 0 in Lt,x 2 and wn,m (0) → 0 in Lx as n, m → ∞. So (R1/n u) is a Cauchy sequence in C(J , L2x ) by Lemma 5.2, and it converges to u in L2t,x . Thus, u belongs to C(J , L2x ). The other   assertions were proven in Corollary 5.4. In the time-independent Maxwell case (A0 = A0 (x) and Aj = Aco j ) one can show a similar result if A0 is only bounded and positive definite (even with boundary conditions), see e.g. Theorem 5.2.5 in [4] or §7.8 in [68]. In the non-autonomous case there are blow-up solutions even for the wave equation on G = R with Hölder continuous and x-independent coefficients, as shown in [28]. As indicated in Sect. 5.1 and described in the next example, the above result can easily be applied to the linear Maxwell system ∂t (εE) = curl H − σ E − J0 ,

∂_t(μH) = −curl E,    t ≥ 0, x ∈ R^3,    (5.14)

which is (5.1) on G = R3 with the material laws D = ε(t, x)E and B = μ(t, x)H. We write Rn×n for the space of real n × n matrices M = M ≥ ηI . η 3 3×3 ), ∞ Example 5.6 Let ε, μ ∈ W 1,∞ (J × R3 , R3×3 η ) for some η > 0, σ ∈ L (J × R , R 2 3 3 2 3 3 E0 , H0 ∈ L (R , R ) and J0 ∈ L (J × R , R ). As in (5.5), we set A0 = diag(ε, μ), Aj = Aco j for j = {1, 2, 3}, D = diag(σ + ∂t ε, ∂t μ), f = (−J0 , 0), and u0 = (E0 , H0 ). Theorem 5.5 then yields a unique solution (E, H) ∈ C(J , L2x ) of (5.14) with E(0) = E0


and H(0) = H_0. It satisfies the energy equality

‖ε(t)^{1/2} E(t)‖_2² + ‖μ(t)^{1/2} H(t)‖_2² = ‖ε(0)^{1/2} E_0‖_2² + ‖μ(0)^{1/2} H_0‖_2²
    − ∫_0^t ∫_{R^3} ( ((2σ + ∂_t ε)E + 2J_0) · E + ∂_t μ H · H ) dx ds.
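A discrete counterpart of this energy balance is easy to observe. The following sketch (a 1d toy analogue with ε = μ = 1 and periodic boundary, not the 3d system of the example; NumPy only) propagates a wave with a staggered leapfrog scheme and prints the discrete field energy, which stays approximately constant for σ = 0 (up to the small wobble of the staggered scheme) and decays for σ > 0.

```python
import numpy as np

# 1d toy Maxwell-type system (eps = mu = 1, conductivity sigma >= 0, periodic in x):
#   dE/dt = dH/dx - sigma*E,   dH/dt = dE/dx,
# discretized by a staggered leapfrog scheme. The discrete analogue of
# (1/2)*integral(E^2 + H^2) is nearly conserved for sigma = 0 and decays for sigma > 0.
N, L = 400, 2.0 * np.pi
dx = L / N
dt = 0.5 * dx                       # CFL condition dt <= dx
x = np.arange(N) * dx

def run(sigma, steps=2000):
    E = np.exp(-10.0 * (x - L / 2) ** 2)      # smooth initial electric field
    H = np.zeros(N)                            # H lives on the staggered grid x + dx/2
    energies = []
    for _ in range(steps):
        dHdx = (H - np.roll(H, 1)) / dx
        # semi-implicit treatment of the damping term sigma*E
        E = ((1.0 - 0.5 * dt * sigma) * E + dt * dHdx) / (1.0 + 0.5 * dt * sigma)
        H = H + dt * (np.roll(E, -1) - E) / dx
        energies.append(0.5 * np.sum(E ** 2 + H ** 2) * dx)
    return energies

for sigma in (0.0, 0.5):
    en = run(sigma)
    print(f"sigma = {sigma}:  energy(0) = {en[0]:.4f},  energy(end) = {en[-1]:.4f}")
```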

One of the key features of hyperbolic systems is the finite propagation speed of their solutions. To see a simple example first, we look at the standard wave equation ∂t t u = c2 ∂xx u on R for the wave speed c > 0 equipped with the initial conditions u(0) = u0 and ∂t u(0) = v0 . (As in Part I one can put this second-order equation in the above first-order framework for the new state (∂t u, ∂x u).) The solution of this wave problem is given by d’Alembert’s formula u(t, x) = 12 (u0 (x + ct) + u0 (x − ct)) +

(1/(2c)) ∫_{x−ct}^{x+ct} v_0(s) ds,    t ≥ 0, x ∈ R.
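For a quick numerical sanity check of this formula and of the propagation cone it encodes, the following sketch (illustration only, with invented data u_0, v_0 supported in |x| ≤ 1; NumPy only) evaluates d'Alembert's solution and verifies that u(t, x) vanishes outside the region reached with speed c.

```python
import numpy as np

# Evaluate d'Alembert's formula for compactly supported data (supported in |x| <= 1)
# and check the finite propagation speed: u(t, x) = 0 whenever |x| > 1 + c*t.
c = 2.0

def u0(x):
    return np.where(np.abs(x) < 1.0, (1.0 - x ** 2) ** 2, 0.0)

def v0(x):
    return np.where(np.abs(x) < 1.0, 1.0 - np.abs(x), 0.0)

def dalembert(t, x, n_quad=2000):
    s = np.linspace(x - c * t, x + c * t, n_quad)     # quadrature nodes for the v0 integral
    integral = np.trapz(v0(s), s)
    return 0.5 * (u0(x + c * t) + u0(x - c * t)) + integral / (2.0 * c)

t = 0.7
xs = np.linspace(-6.0, 6.0, 241)
u = np.array([dalembert(t, x) for x in xs])
outside = np.abs(xs) > 1.0 + c * t + 1e-9
print("max |u| outside the cone |x| <= 1 + c*t:", np.max(np.abs(u[outside])))
print("max |u| inside the cone:               ", np.max(np.abs(u[~outside])))
```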

Hence, the solution at (x, t) only depends on the initial data on [x −ct, x +ct]; for instance, u(t, x) = 0 if u0 and v0 vanish on [x − ct, x + ct]. Conversely, the value of u0 and v0 at y influences u at most for (t, x) with |x − y| ≤ ct; i.e., on a triangle with vertex (y, 0) and lateral sides of slope ±1/c. In this sense, c is the speed of propagation. We extend these observations to the system (5.8), assuming (5.7). In the statement we use the backward ‘light’ cone %   (x0 , R, K) = (t, x) ∈ R≥0 × R3 % |x − x0 | < R − Kt . R It has the base B(x0 , R) at t = 0 and the apex ( K , x0 ). Set

k02 = A1 2∞ + A2 2∞ + A3 2∞ √ with the operator norm for | · |2 on R6×6 . Note that k0 = 3 in the Maxwell example. Below we see (for f = 0) that u vanishes on (x0 , R, k0 /η) if u0 = 0 on B(x0 , R). Hence, if two initial functions u0 and u˜ 0 coincide on B(x0 , R) then the corresponding solutions u and u˜ are equal on (x0 , R, k0 /η). In other words, the values of u0 outside B(x0 , R) influence u(t) only off (x0 , R, k0 /η), that is, with maximal speed k0 /η. Our proof is based on energy estimates with an exponential weight, and the arguments are taken from §4.2.2 of [7]. Theorem 5.7 Let (5.7) be true. Assume that u0 = 0 on B(x0 , R) and f = 0 on (x0 , R, k0 /η) for some R > 0 and x0 ∈ R3 . Then the solution u ∈ C(J , L2x ) of (5.8) also vanishes on (x0 , R, k0 /η).


Proof (1) Let δ, R > 0 and x_0 ∈ R^3 be given. There is a function ψ ∈ C^∞(R^3) with |∇ψ| ≤ η/k_0 (for the euclidean norm) and

−2δ + η k_0^{−1}(R − |x − x_0|) ≤ ψ(x) ≤ −δ + η k_0^{−1}(R − |x − x_0|),    x ∈ R^3.    (5.15)

We construct ψ as in Theorem 6.1 of [157]. Take χ(s) = −(3/2)δ + η k_0^{−1}(R − |s|) for s ∈ R. This function is Lipschitz with constant η/k_0. The same is true for the mollified map χ_ε = R_ε χ as ∇χ_ε = R_ε ∇χ. Also, χ_ε tends uniformly to χ as ε → 0 since

|χ_ε(s) − χ(s)| ≤ ∫_R ε^{−1} ρ(ε^{−1}τ) |χ(s − τ) − χ(s)| dτ ≤ η k_0^{−1} ε ∫_R ρ(σ) |σ| dσ.

We fix a small ε > 0 such that χε satisfies (5.15) with s instead of |x − x0 | and 5/3 instead of 2. Then ψ(x) = χε ((δ02 + |x − x0 |2 )1/2 ) does the job, where δ0 = k0 δ(3η)−1 . Set φ(t, x) = ψ(x)−t and uτ = eτ φ u for τ > 0. Inequality (5.15) yields ψ(x) ≤ −δ+t if |x − x0 | ≥ R − k0 t/η (i.e., (t, x) ∈ / (x0 , R, k0 /η)), so that eτ φ ≤ e−τ δ ≤ 1 off τ φ (x0 , R, k0 /η) and e is bounded on J × R3 . We further have ∇eτ φ = τ ∇ψeτ φ and ∂t eτ φ = −τ eτ φ . As a result, uτ is an element of C(J , L2x ) and the right-hand side of

L u_τ = e^{τφ} f − τ ( A_0 − ∑_{j=1}^{3} A_j ∂_j ψ ) u_τ

belongs to L^2_{t,x}. The matrix in parentheses is denoted by M.
(2) For ξ ∈ R^6 we have Mξ · ξ ≥ (η − k_0|∇ψ|)|ξ|² ≥ 0. Set C = (1/2) div A − D and κ = ‖C‖_∞. By Theorem 5.5, the function u_τ satisfies the energy equality

‖A_0(t)^{1/2} u_τ(t)‖²_{L^2_x} = ‖A_0(0)^{1/2} u_τ(0)‖²_{L^2_x} + 2⟨(C − τM)u_τ + e^{τφ} f, u_τ⟩_{L^2_{t,x}}.

Using Cauchy–Schwarz, the above inequalities and Gronwall, we estimate

η ‖u_τ(t)‖²_{L^2_x} ≤ ‖A_0(0)‖_∞ ‖e^{τφ} u_0‖²_{L^2_x} + ‖e^{τφ} f‖²_{L^2_{t,x}} + (2κ + 1) ∫_0^t ‖u_τ(s)‖²_{L^2_x} ds,
‖e^{τφ} u(t)‖²_{L^2_x} ≲_T ‖e^{τφ} u_0‖²_{L^2_x} + ‖e^{τφ} f‖²_{L^2_{t,x}}.

The right-hand side tends to 0 as τ → ∞ since u0 and f vanish on (x0 , R, k0 /η) and eτ φ → 0 uniformly off (x0 , R, k0 /η). Hence, u(t) has to be 0 on {φ > δ} = {ψ > t + δ}. By (5.15), this set includes points (t, x) with |x − x0 | < R − k0 η−1 (t + 3δ). Since δ > 0 is arbitrary here, u equals 0 on (x0 , R, k0 /η).  


5.3 The Linear Problem on R^3 in H^3

As noted in Sect. 5.1, to solve the nonlinear problem (5.6) we will set A_0 = a_0(v) for functions v having the same regularity as the desired solution u. Since A_0 has to be Lipschitz in Theorem 5.5, the same must be true for v. Working in H^k_x spaces, we thus need solutions in L^∞_t H^3_x ∩ W^{1,∞}_t H^2_x at least. We want to reduce the problem in H^3_x to that in L^2_x by means of a transformation. (One could also perform the proof of Theorem 5.5 in H^3_x instead of L^2_x, see e.g. [11] or [26], which would require more work in our context.) To this end, we define the square root Λ = (I − Δ)^{1/2} = F^{−1}(1 + |ξ|²)^{1/2} F of the shifted Laplacian on L^2(R^3), where F is the Fourier transform. Using standard properties of F, one can check that Λ commutes with derivatives and that it can be extended, respectively restricted, to isomorphisms Λ : H^k_x → H^{k−1}_x for k ∈ Z with inverse given by Λ^{−1} = (I − Δ)^{−1/2} = F^{−1}(1 + |ξ|²)^{−1/2} F. Observe that Λ = (I − Δ)Λ^{−1} and that Λ^{−1} is a convolution operator with positive kernel, see Proposition 6.1.2 in [77]. Hence, Λ leaves invariant real-valued functions. Our analysis relies on a commutator estimate for Λ^3 and M_a : ϕ ↦ aϕ which gains a derivative. In Lemma A2 in [106] it is shown that

‖[Λ^3, M_a]‖_{B(H^2(R^3), L^2(R^3))} ≲ ‖∇a‖_{H^2(R^3)}.    (5.16)

Here the space dimension 3 is crucial; on R^m one obtains e.g. an analogous bound for [Λ^k, M_a] : H^{k−1}_x → L^2_x with k > m/2 + 1. (Noninteger k are also allowed here.) Guided by (5.16) and (5.7), we introduce the space

F̃^k(J) = F̃^k(T) = { A ∈ W^{1,∞}(J × R^3, R^{6×6}) | ∇_{t,x} A ∈ L^∞_t H^{k−1}_x },    k ∈ N,

for the coefficients, endowed with its natural norm. We will usually take k = 3. We use the same notation for vector- or scalar-valued functions of the same regularity. The subscript sym will refer to symmetric matrices and η to those with A = A^T ≥ ηI with η > 0. We state the hypotheses of the present section:

A_0 ∈ F̃^3_η(J),    A_1, A_2, A_3 ∈ F̃^3_sym(J),    D ∈ F̃^3(J),
u_0 ∈ H^3_x = H^3(R^3, R^6),    f ∈ Z^3(J) = Z^3(T) := L^2(J, H^3_x) ∩ H^1(J, H^2_x).    (5.17)

Set ‖f‖²_{Z^3_γ(J)} = ‖e_{−γ} f‖²_{L^2_t H^3_x} + ‖e_{−γ} ∂_t f‖²_{L^2_t H^2_x} for γ ≥ 0. We also use the spaces Ĥ^k_x = { v ∈ L^∞(R^3) | ∇_x v ∈ H^{k−1}_x } and G̃^k(J) = G̃^k(T) = C(J̄, H^k_x) ∩ C^1(J̄, H^{k−1}_x) with their natural norms. (Such spaces will also be considered on other time intervals.)
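The gain of a derivative in commutators like (5.16) can be seen in a simple 1d periodic analogue (a numerical illustration only, not part of the argument): with Λ realized as the Fourier multiplier (1 + ξ²)^{1/2}, the commutator [Λ, M_a] applied to an increasingly rough u stays of moderate size in L², in line with the boundedness of [M_a, Λ] on L^2_x for a ∈ W^{1,∞}_x used below in Corollary 5.9, while Λu itself grows. The random test functions and the coefficient a are invented for this sketch.

```python
import numpy as np

# 1d periodic analogue of the commutator gain in (5.16): Lambda is the Fourier
# multiplier (1 + xi^2)^{1/2}; [Lambda, M_a]u stays bounded in L^2 for smooth a,
# while ||Lambda u||_2 grows as u gets rougher.
N, L = 2048, 2.0 * np.pi
x = np.arange(N) * L / N
xi = np.fft.fftfreq(N, d=L / N) * 2.0 * np.pi
symbol = np.sqrt(1.0 + xi ** 2)

def Lam(u):
    return np.real(np.fft.ifft(symbol * np.fft.fft(u)))

a = 2.0 + np.sin(x)                    # smooth coefficient
rng = np.random.default_rng(1)

for s in (1.0, 0.6, 0.3):              # test functions roughly in H^s, rougher as s decreases
    coeffs = rng.standard_normal(N) / (1.0 + np.abs(xi)) ** (s + 0.5)
    u = np.real(np.fft.ifft(coeffs))
    u /= np.sqrt(np.mean(u ** 2))      # normalize the (discrete) L^2 norm
    comm = Lam(a * u) - a * Lam(u)
    print(f"s = {s}:  ||Lambda u|| ~ {np.sqrt(np.mean(Lam(u)**2)):8.2f},"
          f"  ||[Lambda, M_a] u|| ~ {np.sqrt(np.mean(comm**2)):6.2f}")
```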


On R^3 we have the product estimates

‖vw‖_{H^j} ≲ ‖v‖_{H^k} ‖w‖_{H^j}    for v ∈ H^k_x, w ∈ H^j_x, k ≥ max{j, 2}.

j ˜ j (J ) and G ˜ j (J ) (or F˜ j (J )), Here one can replace Hxk by Hˆ xk , as well as Hx and Hxk by G j k k −1 ˜ ˆ ˜ or by F (J ) and F (J ). Also, if A ∈ Hη for k ∈ N, then A belongs to Hˆ ηk with norm bounded by c(η, k)(1+ A Hˆ k )k−1 A Hˆ k . See Lemmas 2.1 and 2.3 in [159]. x

x

We sketch the proof of the first claim, using Sobolev embeddings such as H 2 → Lp for p ∈ [2, ∞] and H 1 → Lq for q ∈ [2, 6] on R3 . By the product rule (and interpolative β α−β inequalities) we have to estimate ∂x v∂x w for multi-indices 0 ≤ β ≤ α with |α| = j . β k−|β| α−β |β| and ∂x w ∈ Hx . This product can be estimated in L2x as Observe that ∂x v ∈ Hx needed if k − |β| ≥ 2 or |β| ≥ 2 since then v respectively w is bounded. As k ≥ 2, only β α−β the case |β| = 1 remains. Here ∂x v and ∂x w belong to Hx1 → L4x and thus the product to L2x . The listed variants are proved similarly. ˜ 3 (J ) of (5.8) assuming (5.17). The basic idea is to solve We look for a solution u ∈ G 3 u in C(J , L2x ). Since the inequality (5.16) only improves a modified problem for w = ˆ = fˆ := A−1 f where Lˆ has space regularity, we first replace the equation Lu = f by Lu 0 −1 −1 the coefficients Aˆ 0 = I , Aˆ j = A0 Aj and Dˆ = A0 D. We then obtain ˆ = Lw

3

Lw = A0

fˆ + 3

3

fˆ +

j =1

[Aˆ j ,

3 j =1

3

ˆ ]∂j u + [D,

A0 [Aˆ j ,

3

3

]u,

ˆ ]∂j u + A0 [D,

3

]u =: g(f, u).

(5.18)

We now replace in g the unknown u by a given function v ∈ C(J , Hx3 ). Theorem 5.5 gives a solution w ∈ C(J , L2x ) of Lw = g(f, v) with w(0) = 3 u0 . The energy estimate from Lemma 5.2 (with a large γ ) then implies that ! : v "→ −3 w is a strict contraction 3 on L∞ γ Hx . This fact will lead to the desired regularity result. Let λ be the maximum of k B(H k ,L2 ) and −k B(L2 ,H k ) for k ∈ {2, 3}. It will be important in the fixed-point argument for the nonlinear problem that the constant c0 in (5.19) only depends on r0 (and η), but not on r. Theorem 5.8 Let (5.17) be true. Then there is a unique solution u ∈ C(J , Hx3 ) ∩  √  C 1 (J , Hx2 ) of (5.8). For t ∈ J and γ ≥ γ1 (r, η) := max γ0 (r, η), c1 we have γ u 2Z 3 (0,t ) + e−2γ t ( u(t) 2H 3 + ∂t u(t) 2H 2 ) γ

x



c0 ( u0 2H 3 x

x

+ f (0) 2H 2 ) + x

c1 γ

f 2Z 3 (0,t ) , γ

(5.19)

where Aj (0) Hˆ 2 , D(0) Hˆ 2 ≤ r0 , Aj F˜ 3 (J ), D F˜ 3 (J ) ≤ r for j ∈ {0, 1, 2, 3}, and x x c0 = c0 (r0 , η) and c1 = c1 (r, η) are constants described in the proof.

5 Introduction and Local Wellposedness on R3

86

Proof (1) Take v ∈ C(J , Hx3 ) and γ ≥ γ0 (r, η) from Lemma 5.2 and the text following it. Using the above product rules and (5.16) we see that the square of the norm in L2γ L2x of g(f, v) from (5.18) is bounded by c1 ( f 2L2 H 3 + v 2L2 H 3 ) for a constant c1 = c1 (r, η). γ

x

γ

x

Theorem 5.5 yields a solution w ∈ C(J , L2x ) of Lw = g(f, v) and w(0) = which satisfies γη 2 4 w L2γ L2x

+ η2 w 2L∞ L2 ≤ c0 u0 2H 3 + γ

x

x

c1  2 2γ η f L2γ Hx3

+ v 2L2 H 3 γ



x

3u

0

=: w0

(5.20)

2

with c0 = λ2 A0 (0) ∞ . The map w also belongs to C 1 (J , Hx−1 ) because of (5.9) and ˜ 3 (J ). Let w satisfy Lw = g(f, v) and w(0) = w0 for f ∈ Z 3 (J ). Set !v = −3 w ∈ G 3 some v ∈ C(J , Hx ). For w − w estimate (5.20) applies with u0 = 0 and f = 0 so that −3

!(v − v) L∞ 3 = γ Hx

(w − w) L∞ 3 ≤ γ Hx

λ



c T √ 1 γη

v − v L∞ 3. γ Hx

3 Fixing a large γ = γ (r, η, T ), we obtain a fixed point u of ! in L∞ γ Hx . It actually belongs 3 ˜ (J ) and satisfies u(0) = u0 . Equation (5.18) implies that Lu = f . Uniqueness of to G solutions was already shown in Corollary 5.4. 3 (2) It remains to establish √ # (5.19). We first insert u = v and w = u in (5.20) and take " 2λ c1 γ ≥ max γ0 (r, η), η . Absorbing u 2L2 H 3 by the left-hand side, we infer γ

γη 2 8 u L2γ Hx3

x

+ η2 u 2L∞ H 3 ≤ c0 λ2 u0 2H 3 + γ

x

x

c1 λ2 2γ η

f 2L2 H 3 . γ

x

(5.21)

If we estimated ∂t u in Hx2 by means of (5.9) and (5.21), we would obtain a constant depending on r in front of the norm of u0 . Instead we use that ∂t u ∈ C(J , Hx2 ) satisfies L∂t u = ∂t f − ∂t Du −

3 j =0

∂t Aj ∂j u =: h,

∂t u(0) = A0 (0)−1 f (0) − A0 (0)−1 D(0)u0 −

3 j =1

A0 (0)−1 Aj (0)∂j u0 =: v0 .

The above product rules yield h(t) Hx2 ≤ ∂t f (t) Hx2 + c(r)( u(t) Hx3 + ∂t u(t) Hx2 , v0 Hx2 ≤ c(r0 , η)( f (0) Hx2 + u0 Hx3 ).


The commutator [Ma , 2 ] = [Ma , −] : Hx1 → L2x is bounded if a ∈ Wx1,∞ and D 2 a ∈ Hx1 → L3x . Starting from L∂t u = h, as in (5.18) and (5.20) we thus deduce γη 2 4 ∂t u L2γ Hx2

+ η2 ∂t u 2L∞ H 2 γ

x

  ≤ cˆ0 λ2 u0 2H 3 + f (0) 2H 2 + x

x

cˆ1 λ2  2 2γ η ∂t f L2γ Hx2

+ u 2L2 H 3 + ∂t u 2L2 H 2 γ

x

γ



x

for constants cˆ0 = cˆ0 (r0 , η) and cˆ1 = cˆ1 (r, η). Set c0 = 16λ2 η−1 (c0 + cˆ0 ) and 2

 ˆ }. We add the above inequality to (5.21) and take γ ≥ γ (r, η) := c1 = 8λ 1 1 2 max{c1 , c  η √    max γ0 (r, η), c1 . Estimate (5.19) follows after some calculations.

In the above result we control more space than time derivatives. Under stronger assumptions on Aj , D and f , one can obtain analogous estimates on ∂t2 u in Hx1 and ∂t3 u in L2x by differentiating (5.8) in time, see (6.23). We discuss variants of the above theorem partly needed below. Corollary 5.9 Let Aj and D be as in Theorem 5.8, as well as u0 ∈ Hx2 and f ∈ L2 (J, Hx2 ). Then there is a unique solution u ∈ C(J , Hx2 ) ∩ C 1 (J , Hx1 ) of (5.8). For  √  t ∈ J and γ ≥ γ˜1 (r, η) := max γ0 (r, η), c˜1 , we have γ u 2L2 ((0,t ),H 2) + +e−2γ t u(t) 2H 2 ≤ c˜0 u0 2H 2 + γ

x

x

x

c˜1 γ

f 2L2 ((0,t ),H 2) γ

x

for constants c˜0 = c˜0 (r0 , η) and c˜1 = c˜1 (r, η). If ∂t f ∈ L2 (J, Hx1 ) we also obtain γ ∂t u 2L2 ((0,t ),H 1) + e−2γ t ∂t u(t) 2H 1 ≤ c˜0 ( u0 2H 2 + f (0) 2H 1 ) + γ

x

x

x

x

c˜1 2 γ f Zγ2 (0,t )

where Z k (J ) := L2 (J, Hxk ) ∩ H 1 (J, Hxk−1 ) for k ∈ N. The result is shown as Theorem 5.8, replacing 3 by 2 in its proof up to (5.21) and 2 by afterwards. For the second part one also uses that the commutator [Ma , ] is bounded on L2x by Proposition 4.1.A in [162] if a ∈ Wx1,∞ . Remark 5.10 In Theorem 5.8 we have focused on the space Hx3 needed for the quasilinear ˜ k (J ) of (5.8) satisfying the problem. Actually, one obtains a unique solution u ∈ G k k analogue of (5.19) if u0 ∈ Hx , f ∈ Z (J ), Aj , D ∈ F˜ k (J ), Aj = A j , A0 ≥ ηI , and k ∈ N \ {2}. For k = 2 one needs another assumption stated below. This can be shown as for k = 3, one only has to take care of estimates for products, inverse matrices and commutators. Indeed, for k > 3 one can use the product and inversion results mentioned above and 1,∞ ) the the higher-order version of (5.16) in [106]. For k = 1 (thus for coefficients in Wt,x


needed product and inversion bounds are easy to check, and we have just seen that [Ma , ] is bounded on L2x if a ∈ Wx1,∞ . For k = 2 the second-order derivatives of Aj also have to 3 2 ] = [M , −] : H 1 → L2 is bounded, belong to L∞ a t Lx . Then the commutator [Ma , x x and the extra condition is preserved by products and inverses. Moreover, there is no problem to change the range space R6 to Rn . Also other spatial domains Rm can be treated analogously, though one has to modify the assumptions on the coefficients in this case. Finally, invoking a bit more harmonic analysis one can also work in fractional Sobolev spaces Hxs instead of Hxk , see [105]. Remark 5.11 In (5.17) we have required that the derivatives of the coefficients belong to j Hx spaces. So local singularities are allowed to some extent, but one enforces a certain decay at infinity which is an unnecessary restriction. Actually Theorem 5.8 remains valid if 3 (J ) = Fˆ 3 (J ) + W 3,∞ , and Hˆ 2 by Hˆ 2 = Hˆ 2 + W 2,∞ . we replace the space Fˆ 3 (J ) by Fˆ∞ x t,x x ∞ x (They have the norm of sums X + Y , namely z X+Y = infz=x+y x X + y Y .) To show this fact, we note that [MA , 2 ] : Hx2 → Hx1 is bounded uniformly in t if A ∈ 3,∞ , and so the same is true for Fˆ 3 (J ) + Wt,x [MA ,

3

] = [MA , ]

2

+

[MA ,

2

] : Hx2 → L2x .

(Recall the boundedness of [Ma , ] on L2x .) One can further show the appropriate bounds 3,∞ ˜ 3 (J ). and Hˆ x2 + Wx2,∞ , as well as G for products and inversions involving Fˆ 3 (J ) + Wt,x The analogue of Theorem 5.8 can now be proven as before. As a preparation for Theorem 5.18 on the wellposedness of the nonlinear problem we show an approximation result for the coefficients. 3 (J ) be Lemma 5.12 Let u0 ∈ L2x , f ∈ L2t,x , n ∈ N ∪ {∞}, j ∈ {0, 1, 2, 3}, Anj ∈ Fˆ∞ 3 (J ). Assume that An n symmetric with An0 ≥ ηI , and D n ∈ Fˆ∞ 1,∞ ≤ r and D L∞ ≤ j Wt,x t,x  n ∂ + Dn . n → D ∞ in L∞ as n → ∞. Set L = r, as well as Anj → A∞ and D A n j t,x j j j We have functions un ∈ C(J , L2x ) with Ln un = f and un (0) = u0 . Then un → u∞ in C(J , L2x ) as n → ∞.

Proof For the given data there are functions u0,m in Hx3 and fm in Z 3 (J ) converging to u0 and f in L2x and L2t,x , respectively, as m → ∞. For these data Theorem 5.8 provides ˜ 3 (J ) satisfying Ln un,m = fm and un,m (0) = u0,m . Fixing γ = γ0 (r, η) functions un,m ∈ G from Lemma 5.2, Corollary 5.4 then shows   2 2 . un − un,m L∞ 2 ≤ c un − un,m L∞ L2 ≤ c u0 − u0,m 2 + f − fm 2 L t Lx L γ x x

t,x


with c = c(r, η, T ). The right-hand side tends to 0 as m → ∞ uniformly for n ∈ N ∪ {∞}. ˜ 3 (J ). We have It is thus enough to take u0 ∈ Hx3 , f ∈ Z 3 (J ), and un ∈ G Ln (un − u∞ ) = L∞ u∞ − Ln u∞ =

3 j =0

n ∞ n (A∞ j − Aj )∂j u∞ + (D − D )u∞ =: gn

˜ 3 (T ), as above Lemma 5.2 yields Since u∞ ∈ G un − u∞ L∞ 2 ≤ c(γ , T ) gn L∞ L2 −→ 0, t Lx γ x

n → ∞.  

5.4 The Quasilinear Problem on R^3

In this section we treat the nonlinear system

L(u)u := ∑_{j=0}^{3} a_j(u) ∂_j u + d(u)u = f,    t ≥ 0, x ∈ R^3,    u(0) = u_0,    (5.22)

under the assumptions

a_j, d ∈ C^3(R^3 × R^6, R^{6×6}),    a_j = a_j^T,    a_0 ≥ ηI, η ∈ (0, 1],
∀ r > 0 :  sup_{|ξ|≤r}  max_{0≤|α|≤3}  ( ‖∂_x^α a_j(·, ξ)‖_{L^∞_x}, ‖∂_x^α d(·, ξ)‖_{L^∞_x} ) < ∞,    j ∈ {0, 1, 2, 3},
u_0 ∈ H^3_x,    ∀ T > 0 :  f ∈ Z^3(T) = Z^3(J) = L^2(J, H^3_x) ∩ H^1(J, H^2_x),  J = (0, T).    (5.23)

One can also treat coefficients only defined for (x, ξ ) ∈ R3 × U and an open subset U ⊆ R6 , see Remark 5.19. This is already needed in the Kerr Example 5.1 if χ3 is not non-negative. To simplify a bit, we focus on the case U = R6 in (5.23). We look for solutions u of (5.22) in C([0, T+ ), Hx3 ) ∩ C 1 ([0, T+ ), Hx2 ) for a maximally chosen final time T+ ∈ (0, ∞]. As indicated in the next section, solutions may blow up and so T+ could be finite. The solutions will be constructed in a fixed-point argument on ˜ k− (J ) = L∞ (J, Hxk ) ∩ W 1,∞ (J, Hxk−1 ) endowed with its natural norm, where the space G k = 3. The overall strategy of this section and many techniques are typical for quasilinear (or semilinear) evolution equations, though there are different (but related) approaches, see e.g. [7], [11], or [97]. We first state basic properties of substitution operators. (Recall Remark 5.11 concerning 3 (J ) and Hˆ 2 .) We set E = L∞ (J, H 2 ) for a moment. F˜∞ γ ∞ γ x


Lemma 5.13 Let a be as in (5.23) and γ ≥ 0. 3 ˜ 3 (J ) with v ∞ ≤ r. Then a(v) ˜ 3 (a) Let v ∈ G F∞ (J ) ≤ κ(r)(1 + v ˜ 3

G (J )

).

2 (b) Let v, w ∈ L∞ t Hx with norm ≤ r. Then a(v) − a(w) Eγ ≤ κ(r) v − w Eγ . Here 2 ˜2 ˜2 we can also replace L∞ t Hx and Eγ by G (J ) and Gγ (J ), respectively. (c) Let v0 ∈ Hx2 with v0 ∞ ≤ r0 . Then a(v0 ) Hˆ 2 ≤ κ0 (r0 )(1 + v0 2H 2 ). ∞

x

(d) Let v0 , w0 ∈ Hx2 with norm ≤ r0 . Then a(v0 ) − a(w0 ) Hx2 ≤ κ0 (r0 ) v0 − w0 2H 2 . x

We only sketch the proof. (See §7.1 in [157] or §2 in [158] for more details.) Take α ∈ N40 with 1 ≤ |α| ≤ 3 and α0 ∈ {0, 1}. The latter refers to the time derivative. It is clear that the function |(∂ β a)(v)| is bounded by c(r) for all 0 ≤ |β| ≤ 3 where β = (βx , βξ ) ∈ N30 × N60 . Note that ∂ α a(v) is a linear combination of products of (∂ β a)(·, v) 1,∞ and j ∈ {0, 1, 2, 3} factors ∂ γi v with βx + γ1 + · · · + γj = α. Since v ∈ Wt,x by α ∞ 2 ∞ Sobolev’s embedding, one can estimate ∂ a(v) in Lt Lx if j ≥ 1 and in Lt,x if j = 0, both by c(r)(1 + v 3˜ 3 ). For (b) we start from the formula G (J )



1

a(v) − a(w) =

∂ξ a(·, v + s(w − v)) (w − v) ds

0

and proceed as above. Parts (c) and (d) are treated similarly. As the space for the fixed-point argument we will use ˜ 3− (J ) | v ˜ 3− ≤ R, v(0) = u0 .} E(R, T ) := {v ∈ G G (J ) for suitable R ≥ u0 H 3 and T > 0. This set is non-empty as it contains the constant function t "→ v(t) = u0 . It is crucial that E(R, T ) is complete for a metric involving only two derivatives, which can be shown by a standard application of the Banach–Alaoglu 2 1 2 theorem. For this we recall that L∞ t Lx is the dual space of Lt Lx , see Corollary 1.3.22 in ∞ [96]. (This is the reason to take L in time instead of C.) Lemma 5.14 The space E(R, T ) is complete with the metric u − v L∞ 2. t Hx Proof Let (un ) be Cauchy in E(R, T ) with this metric. Then (un ) has a limit u in C(J , Hx2 ). Take α ∈ N40 with α0 ≤ 1 and 0 ≤ |α| ≤ 3. Applying Banach–Alaoglu iteratively, we obtain a subsequence (also denoted by (un )) such that ∂ α un tends to a  2 2 2 function vα weak* in L∞ t Lx which also satisfies α vα L∞ L2 ≤ R . It remains to check t

x

that vα = ∂ α u. To this end, take ϕ ∈ H03 (J × R3 ). We compute ∂ α ϕ, u = lim ∂ α ϕ, un  = lim (−1)|α| ϕ, ∂ α un  = (−1)|α| ϕ, vα  n→∞

n→∞

2 α in the duality pairing L1t L2x × L∞ t Lx . There thus exists ∂ u = vα .

 


In the next lemma we perform the core fixed-point argument. Lemma 5.15 Let (5.23) hold and ρ 2 ≥ u0 2H 3 + f (0) 2H 2 + f 2Z 3 (1) . Then there is a x x radius R = R(ρ) > ρ given by (5.24), a time T0 = T0 (ρ) ∈ (0, 1] given by (5.25), and a unique solution u ∈ E(R, T0 ) of (5.22). 2 by some κ (ρ). Proof (1) Lemma 5.13 shows that aj (u0 ) and d(u0 ) are bounded in Hˆ ∞ 0 This yields a constant c0 = c0 (ρ) ≥ 1 in (5.19), in the setting of Remark 5.11. We define

R 2 = R(ρ)2 = ec0 (ρ)ρ 2 + 1 > ρ 2 .

(5.24)

Take v, w ∈ E(R, T ) for some T > 0. Let a ∈ {a0 , a1 , a2 , a3 , d} and γ ≥ 0. By Lemma 5.13 and Hx2 → L∞ x there is a constant κ = κ(R) with a(v) F˜∞ 3 (J ) ≤ κ

and

a(v) − a(w) L∞ 2 ≤ κ v − w L∞ H 2 . γ Hx γ x

Let c1 = c1 (κ, η), c˜1 = c˜1 (κ, η), and γ1 = max{γ1 (κ, η), γ˜1 (κ, η)} be given by Theorem 5.8 and Corollary 5.9. We fix   γ = γ (ρ) = max γ1 , ec1 ρ 2 , 2ec˜1 c2 κ 2 R 2 ,

T0 = T0 (ρ) = min{1, (2γ )−1}, (5.25)

where the constant c > 0 is introduced below. ˜ 3 (J0 ) of L(v)u = f and u(0) = u0 satisfying (2) Theorem 5.8 gives a solution u ∈ G   u(t) 2H 3 + ∂t u(t) 2H 2 ≤ e2γ T0 c0 ( u0 2H 3 + f (0) 2H 2 ) + c1 γ −1 f 2Z 3 (1) ≤ R 2 x

x

x

x

for t ∈ [0, T0 ]. So the map ! : v "→ u =: vˆ leaves invariant E(R, T0 ). Observe that L(v)(vˆ − w) ˆ = (L(w) − L(v))wˆ =

3 j =0

(aj (w) − aj (v))∂j wˆ + (d(w) − d(v))w. ˆ

The right-hand side at time t is bounded in Hx2 by cκR v(t)−w(t) 2,2 due to Lemma 5.13. Since v(0) = w(0) and T0 ≤ 1, Corollary 5.9 then implies !(v) − !(w) 2L∞ H 2 ≤ e2γ T0 !(v) − !(w) 2L∞ H 2 t

x

γ

(5.26)

x

≤ ec˜1 γ −1 c2 κ 2 R 2 T0 v − w 2L∞ H 2 ≤ 12 v − w 2L∞ H 2 . γ

x

The assertion now follows from the contraction mapping principle. □


The above result yields uniqueness only in the ball E(R, T0 ), but the contraction estimate (5.26) itself will lead to a much more flexible uniqueness statement. Before ˜ 3 (J ) to (5.22) showing it, we note that restrictions or translations of a solution u ∈ G 3 ˜ ˜ 3 (J  ) with satisfy (obvious) variants of (5.22). Let u ∈ G (J ) solve (5.22) and v ∈ G   v(T ) = u(T ) solve it on J = (T , T ). Then the concatenation w of u and v belongs to ˜ 3 (0, T  ) and fulfills (5.22). (Use (5.22) to check ∂t w ∈ C([0, T  ], Hx2 ).) G ˜ 3 (J ) and u˜ ∈ G ˜ 3 (J˜) solve (5.22) on J Lemma 5.16 Let (5.23) hold, J˜ = (0, T˜ ), u ∈ G and J˜, respectively. Then u = u˜ on J ∩ J˜ =: Jˆ. Proof Let τ be the supremum of all t ∈ [0, sup Jˆ) for which u = u˜ on [0, t]. Note that u(0) = u0 = u(0). ˜ We suppose that τ < sup Jˆ. Then u = u˜ on [0, τ ] by continuity, and there exists a number δ > 0 with Jδ := [τ, τ + δ] ⊆ Jˆ. Let R be the maximum of the ˜ 3 (J ). Fix γ as in (5.25) (with κ = κ(R) and ρ = 0) and take norms of u and u˜ in G δ δ ∈ (0, δ]. As in (5.26), Corollary 5.9 yields a constant c1 = c˜1 (R) > 0 with u − u ˜ 2L∞ (J γ
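Before moving on, here is a small numerical sketch conveying the flavour of the frozen-coefficient fixed-point construction used in Lemma 5.15 (a 1d toy problem with an invented coefficient a(u) = 1 + 0.3u and a crude upwind discretization, not the construction of the proof): one freezes the time history v, solves the resulting linear transport problem, and iterates the map v ↦ u on a short time slab; the iterates converge rapidly.

```python
import numpy as np

# Toy frozen-coefficient (Kato-type) iteration for u_t + a(u) u_x = 0, a(u) = 1 + 0.3 u,
# with periodic x: freeze a given time history v, solve the LINEAR problem
# u_t + a(v) u_x = 0 by upwinding, and iterate v -> u on a short time slab.
N, L, T, steps = 400, 2.0 * np.pi, 0.3, 400
dx, dt = L / N, T / steps
x = np.arange(N) * dx
u0 = 0.5 * np.sin(x)

def solve_frozen(v_path):
    """March the linear problem with frozen speed a(v(t_n, x)); return the full history."""
    u, history = u0.copy(), []
    for n in range(steps):
        history.append(u.copy())
        a = 1.0 + 0.3 * v_path[n]                      # frozen coefficient a(v) > 0
        u = u - dt * a * (u - np.roll(u, 1)) / dx      # first-order upwind step
    return np.array(history)

v_path = np.tile(u0, (steps, 1))                       # initial guess: constant-in-time history
for it in range(6):
    new_path = solve_frozen(v_path)
    print(f"iteration {it}: max|u_new - u_old| = {np.max(np.abs(new_path - v_path)):.2e}")
    v_path = new_path
```

For larger data or longer time slabs the same toy equation steepens and forms a shock, which is the discrete shadow of the finite maximal existence time T_+ appearing in Theorem 5.18(b).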

2

2 δ ,Hx )

≤ ec1 γ −1 c2 κ 2 R δ u − u ˜ L∞ 2 . γ (Jδ ,Hx )

Choosing a sufficiently small δ > 0, we infer u = u˜ on Jδ . This fact contradicts the definition of τ , so that τ = sup Jˆ as asserted.   We now use the above results to define a maximal solution u to (5.22) assuming (5.23). The maximal existence time is given by ˆ 3 (T ) of (5.22) on [0, T ]} ∈ (0, ∞]. T+ = T+ (u0 , f ) := sup{T ≥ 0 | ∃ solution uT ∈ G Lemma 5.15 shows T+ (u0 , f ) > T0 (ρ) as we can restart the problem at time t0 = T0 (ρ) with the initial value uT (T ). Moreover, by Lemma 5.16 the solutions ut and uT coincide on [0, t] for 0 < t < T < T+ . Setting u(t) = uT (t) for such times thus yields a unique ˜ 3 (T ) for each T ∈ (0, T+ ). solution u of (5.22) on [0, T+ ) which belongs to G In the proof of our main result below, we need the following Moser-type estimates. Lemma 5.17 Let k ∈ N and α, β ∈ Nm 0. (a) For v, w ∈ L∞ (Rm ) ∩ H k (Rm ) and |α| + |β| = k, we have ∂ α v ∂ β w 2 ≤ c v ∞ w k,2 + v k,2 w ∞ .


(b) For v, w ∈ W 1,∞ (Rm ) ∩ H k (Rm ) with ∂ α v, ∂ β w ∈ L2 (Rm ), 1 ≤ |α| ≤ k and |α| + |β| = k + 1, we have ∂ α v∂ β w 2 ≤ c ∇v ∞

m j =1

∂j w k−1,2 + ∇w ∞

m j =1

∂j v k−1,2 .

Proof We first recall the Gagliardo–Nirenberg inequality 1− |α| k

∂ α ϕ 2k/|α| ≤ c ϕ ∞



|α|

|γ |=k

∂ γ ϕ 2k

for ϕ ∈ L∞ (Rm ) with ∂ γ ϕ ∈ L2 (Rm ) for all |γ | = k, see [135]. Assertion (a) is clear if |α| is 0 or k. So let k ≥ 2 and 1 ≤ |α| ≤ k − 1. Note that |β| |α| |α| |β| 1 k = 1 − k . The inequalities of Hölder (with 2 = 2k + 2k ), Gagliardo–Nirenberg and Young yield 1− |α| k

∂ α v∂ β w 2 ≤ ∂ α v 2k/|α| ∂ β w 2k/|β| ≤ c v ∞ = ( v ∞ w k,2 )1−

|α| k

( w ∞ v k,2 )

|α| k

|α|

1− |β| k

k v k,2 w ∞

|β|

k w k,2

≤ c v ∞ w k,2 + v k,2 w ∞ .

In part (b) we can assume that k ≥ 3 and 2 ≤ |α| ≤ k − 1. There are i, j ∈ {1, . . . , m} with α = α  + ei and β = β  + ej , where |α  | + |β  | = k − 1. From (a) we deduce 



∂ α v∂ β w 2 = ∂ α ∂i v ∂ β ∂j w 2 ≤ c ∂i v ∞ ∂j w k−1,2 + ∂i v k−1,2 ∂j w ∞ and thus statement (b).

 

We state the core local wellposedness result for (5.22). Let BT ((u0 , f ), r) be the closed ball in Hx3 × Z 3 (T ) with center (u0 , f ) and radius r > 0. Theorem 5.18 Let (5.23) hold and ρ 2 ≥ u0 2H 3 + f (0) 2H 2 + f 2Z 3 (1) . Then the x x following assertions are true. (a) There is a unique solution u = "(u0 , f ) of (5.22) on [0, T+ ), where T+ = ˜ 3 (T ) for all T+ (u0 , f ) ∈ (T0 (ρ), ∞] with T0 (ρ) > 0 from (5.25) and u ∈ G T ∈ (0, T+ ). (b) Let T+ < ∞. Then limt →T+ u(t) Hx3 = ∞ and lim supt →T+ u(t) W 1,∞ = ∞. x (c) Let T ∈ [0, T+ ). Then there is a radius δ > 0 such that for all (v0 , g) ∈ BT ((u0 , f ), δ) ˜ 3 (T ) is continuous. Moreover, we have T+ (v0 , f ) > T and " : BT ((u0 , f ), δ) → G ˜ 2 (T ) is Lipschitz. " : (BT ((u0 , f ), δ), · Hx2 ×Z 2 (T ) ) → G


Proof (a)/(b) Above we have shown part (a). Let T+ < ∞ and u = "(u0 , f ). (1) Suppose there are tn → T+ with r := supn u(tn ) 3,2 < ∞. Let T = T+ + 1. Set ρ 2 = r 2 + f 2Z 3 (T ) + supn f (tn ) 22,2 < ∞. Let R = R(ρ) > 0 and τ = T0 (ρ) > 0 be given by (5.24) and (5.25), respectively. Fix an index such that tN + τ > T+ . Lemma 5.15 ˜ 3 (tN , tN + τ ) of (5.22) with v(tN ) = u(tN ). We and a time shift yield a solution v ∈ G thus obtain a solution on [0, tN + τ ]. This fact contradicts the definition of T+ , and hence u(t) 3,2 → ∞ as t → T+ . (2) Next, set ω = sup0≤t 0 and data v0 , w0 ∈ Lx , ϕ, ψ ∈ L2t,x , and ω ∈ L2 (J, H 1/2 ())3 with n·ω = 0. Theorem 1.4 of [53] yields a unique solution (v, w) ∈ G0 (J ) with trta (v, w) ∈ L2 (J, H −1/2 ())6 of the linear system a∂t v = curl w − σ v + ϕ,

t ∈ J, x ∈ G,

b∂t v = − curl v + ψ,

t ∈ J, x ∈ G,

trta v = ω,

t ∈ J, x ∈ ,

v(0) = v0 ,

w(0) = w0 ,

x ∈ G.

(Theorem 6.17 deals with the case ω = 0 without the regularity of trta w.) For ω = 0 and G = R3+ , the next lemma is a part of Theorem 6.7. In the present form it follows from Theorem 6.18 by approximation arguments omitted here, see Lemma 4.2 in [116].


Lemma 7.3 Under the assumptions above, for 0 ≤ s ≤ t ≤ T we have 1 2







a(t)v(t) · v(t) + b(t)w(t) · w(t) dx +

G

=

1 2 +

 G  t s

 t s



 a(0)v0 · v0 + b(0)w0 · w0 dx +

σ v · v dx dτ

G  t s



1 G

2 ∂t a v

 ω · trta w dx dτ 

 · v + 12 ∂t b w · w + ϕ · v + ψ · w dx dτ.

In the next proposition we control the energy by the dissipation, i.e., ∂tk E by ∂tk H. Following [54], our approach is based on a Helmholtz decomposition. We use the following spaces on G, where j are the components of  and N denotes the kernel of div and curl as maps from L2x to Hx−1 : N0 (curl) = {v ∈ N(curl) | trta v = 0}, N0 (div) = {v ∈ N(div) | trno v = 0}, %    N (div) = v ∈ N(div) % ∀ j : j trno v dx = 0 , N = N(div) ∩ N0 (curl), 1 (G) = {v ∈ H 1 (G)3 | trta v = 0} = H (div) ∩ H0 (curl). Hta0

The last identity is shown in Theorem XI.1.3 of [36]. The first three spaces are endowed 1 (G) and other subspaces of H 1 . We with the L2 -norm, and we use the H 1 -norm for Hta0 x list some consequences of Theorems 2.8–2.10’ from [24] for simply connected G (see also §IX.1 in [36]), used several times below: 1 curl : Hta0 (G) ∩ N (div) → N0 (div)

is invertible,

(7.18)

curl : H 1 (G)3 ∩ N0 (div) → N (div)

is invertible,

(7.19)

L2 (G)3 = N (div) + ∇H01 (G) + N

(7.20)

N0 (curl) = ∇H01 (G) + N,

with orthogonal sums in L2x . We now establish the desired Helmholtz decomposition. Our result is a variant of Proposition 2 in [54] where the case of time-independent ε and μ and less regular solutions was treated. Lemma 7.4 Let the assumptions of Theorem 7.1 be satisfiedand let (E, H) solve (7.1). 1 (G) ∩ N (div) ∩ C 4 (J , L2 (G))3 , p in Then there exist functions w in C 3 J+ , Hta0 + 3 1 3 C (J+ , H0 (G)) and h in C (J+ , N) with ∂tk E = −∂tk+1 w + ∇∂tk p + ∂tk h,

μk ∂tk H = curl ∂tk w − gk ,

for k ∈ {0, 1, 2, 3}, cf. (7.8) and (7.10), where the sum for ∂tk E is orthogonal in L2x .

(7.21)


Proof Let t ∈ J+ . Equation (7.6) implies that the function μ(H(t))H(t) is contained in N0 (div). Since G is simply connected, (7.18) then yields a vector field w(t) in 1 (G) ∩ N (div) satisfying curl w(t) = μ(H(t))H(t). Moreover, the map w belongs Hta0 1 (G) ∩ N (div)) because of (E, H) ∈ G3 and (7.18). Differentiating to C 3 (J+ , Hta0 curl w = μ(H)H in t, we deduce curl ∂tk w = ∂tk (μ(H)H) = μd (H)∂tk H + gk for k ∈ {1, 2, 3} which shows the second part of (7.21). Comparing this relation for k = 1 with (7.2), we infer curl(E + ∂t w) = 0. Moreover, E + ∂t w belongs to the kernel of trta . From (7.20) we obtain functions p(t) ∈ H01 (G) and h(t) ∈ N such that E(t) = −∂t w(t) + ∇p(t) + h(t) for t ∈ J+ with orthogonal sums. This fact and (E, H) ∈ G3 imply the remaining  regularity assertions. We can now differentiate the above identity in t, proving (7.21).  We can now show the desired observability-type estimate. Let us explain this name. For solutions of (7.1) with σ = 0, ε = ε(x) and μ = μ(x), Lemma 7.3 shows the energy equality e0 (t) = e0 (0) for t ≥ 0. Take σ = 1 in the definition of d0 . Then the next inequality can still be shown with modified constants and z = 0, implying (t −2c3 )e0 (0) ≤ t c2 0 E(τ ) 2L2 dτ . Hence, the initial fields can be determined by observing the electric x

field alone until t > 2c3 .

Proposition 7.5 Let the conditions of Theorem 7.1 be satisfied. For 0 ≤ s ≤ t < T∗ and k ∈ {0, 1, 2, 3}, we can estimate 

∫_s^t e_k(τ) dτ ≤ c_2 ∫_s^t d_k(τ) dτ + c_3 ( e_k(t) + e_k(s) ) + c_4 ∫_s^t z^{3/2}(τ) dτ.

Proof Let k ∈ {0, 1, 2, 3}. To simplify, we take s = 0. Equality (7.21) yields 

 Gt

, μk ∂tk H · ∂tk H d(x, τ ) =

 Gt

curl ∂tk w · ∂tk H d(x, τ ) −

Gt

gk · ∂tk H d(x, τ ),

(7.22)


1 (G)) by Lemma 7.4, we apply (6.7), where Gt = G × (0, t). Using ∂tk w ∈ C(J+ , Hta0 insert the first line of the system (7.9), and integrate by parts in t. It follows

 Gt

curl ∂tk w · ∂tk H d(x, τ ) = ∂tk w, curl ∂tk HL2 (0,t ),H0(curl))

(7.23)

 =

∂tk w, ∂t (, εk ∂tk E)L2 ((0,t ),H0(curl)) 

+ Gt



= G

∂tk w(t) ·, εk (t)∂tk E(t) dx − 

− Gt

G

∂tk w(0) ·, εk (0)∂tk E(0) dx

·, εk ∂tk E d(x, τ ) +

∂tk+1 w

∂tk w · (σ ∂tk E + ∂t fk ) d(x, τ )

 Gt

∂tk w · (σ ∂tk E + ∂t fk ) d(x, τ ).

1 (G)3 ∩ N (div), formula (7.18) yields the Poincaré-type estimate Since ∂tk w(t) ∈ Hta0 k ∂t w(τ ) L2x ≤ c curl ∂tk w(τ ) L2x . From (7.21) and (7.16), we then infer the bound 1/2

∂tk w(τ ) L2x ≤ c curl ∂tk w(τ ) L2x = c μˆ k ∂tk H(τ ) + gk (τ ) L2x ≤ cek (τ ).

(7.24)

The orthogonality in the first part of (7.21) implies ∂tk+1 w(τ ) L2x ≤ ∂tk E(τ ) L2x . For any θ > 0, these inequalities along with (7.23) and (7.16) lead to the estimate % % %

Gt

 % % curl ∂tk w · ∂tk H d(x, τ )% ≤ c(ek (t) + ek (0)) + c  +θ

Gt

 |∂tk w|2 d(x, τ ) + cθ

Gt

|∂tk E|2 d(x, τ ) 

|∂tk E|2 d(x, τ ) + c

Gt

(7.25) t

3

z 2 (τ ) dτ. 0

As in (7.24), we further compute 

 Gt

|∂tk w|2 d(x, τ ) ≤ c

Gt

k curl ∂tk w · , μ−1 k curl ∂t w d(x, τ )

 =c

Gt

% % ≤ c%

Gt

curl ∂tk w · (∂tk H + , μ−1 k gk ) d(x, τ )  t % 3 % curl ∂tk w · ∂tk H d(x, τ )% + c z 2 (τ ) dτ. 0

Fixing a small number θ > 0, the term with |∂tk w|2 in Eq. (7.25) can now be absorbed by the left-hand side and by the integral of z3/2 . So we arrive at % % %

Gt

 t  t % 3 % curl ∂tk w · ∂tk H d(x, τ )% ≤ c(ek (t) + ek (0)) + c dk (τ ) dτ + c z 2 (τ ) dτ, 0

0


also using that d_k(t) is equivalent to max_{j≤k} ‖∂_t^j E(t)‖²_{L^2_x}. This fact, Eq. (7.22), the last inequality, and the estimates (7.16) yield the claim. □

Combining Propositions 7.2 and 7.5, we arrive at the following energy bound.

Corollary 7.6 Under the conditions of Theorem 7.1, we have the inequality

e_k(t) + ∫_s^t e_k(τ) dτ ≤ C_1 e_k(s) + C_2 ∫_s^t z^{3/2}(τ) dτ

for 0 ≤ s ≤ t < T_∗ and k ∈ {0, 1, 2, 3}.

Proof We multiply the inequality in Proposition 7.5 by α = min{c_2^{−1}, (2c_3)^{−1}} and add it to (7.17), obtaining

e_k(t) + 2α ∫_s^t e_k(τ) dτ ≤ 3 e_k(s) + 2(c_1 + α c_4) ∫_s^t z^{3/2}(τ) dτ.    □

For z = 0, from Corollary 7.6 one could easily infer exponential decay by a standard argument, see below. The extra term can be made small since z1/2 (τ ) ≤ δ for τ < T∗ by (7.14). However, z involves space derivatives so that it cannot be absorbed by e that does not contain them. This gap is closed by the next surprising result proved in the next section. It then allows us to show Theorem 7.1. Proposition 7.7 We impose the conditions of Theorem 7.1 with the exception of the simple connectedness of G. Then the solutions (E, H) to (7.1) satisfy 

t

z(t) +

  z(τ ) dτ ≤ c5 z(s) + e(t) + z2 (t) + c6

s



t



 e(τ ) + z3/2(τ ) dτ

s

for all 0 ≤ s ≤ t < T∗ . Proof (Of Theorem 7.1) Proposition 7.7 and Corollary 7.6 show that 

t

z(t) +

 2

z(τ ) dτ ≤ (c5 + C1 (c5 + c6 ))z(s) + c5 z (t) + (c6 + C2 (c5 + c6 ))

s

t

z3/2 (τ ) dτ.

s

Fixing a sufficiently small radius δ ∈ (0, δ0 ], we can now absorb the superlinear terms involving z2 and z3/2 by the left-hand side and hence obtain 

t

z(t) + s

z(τ ) dτ ≤ Cz(s),

for all 0 ≤ s ≤ t < T∗



and some constant C > 0. Since then z(τ ) ≥ C −1 z(t), we infer that (1 + (t − s)C −1 )z(t) ≤ Cz(s).

(7.26)

The differentiated Maxwell system (7.12) and the bounds from (7.16) yield z(0) ≤ c0 (E0 , H0 ) 2H 3 ≤ c0 r 2 for a constant c0 > 0. We now fix the radius " δ # , r := min r(δ), √ 2c0 C where r(δ) was introduced before (7.14). We suppose that T∗ < ∞, yielding z(T∗ ) = δ 2 by (7.15). Because of (7.26), the number z(t) is bounded by Cz(0) ≤ δ 2 /2 for t < T∗ and by continuity also for t = T∗ . This contradiction shows that T∗ = ∞ and hence T+ = ∞. In particular, (7.26) is true for all t ≥ s ≥ 0. Fixing the time T > 0 with C 2 /(C +T ) = 1/2, we derive z(nT ) ≤ 12 z((n − 1)T ) for n ∈ N and then z(nT ) ≤ 2−n z(0) by induction. With (7.26) one then obtains the asserted exponential decay.  

7.3

Time Regularity Controls Space Regularity

We first collect some preparations for the proof of Proposition 7.7. One can bound the H 1 norm of a field v by its norms in H (curl) ∩ H (div) and the H 1/2 -norm of trta v or trno v, see Corollary XI.1.1 of [36]. We need a version of this result with regular, matrix-valued coefficients a (which does not directly follow from the case a = I unless a is scalar). It is stated in Remark 4 of [54] with a brief indication of a proof. We present a (different) proof inspired by Lemma 4.5.5 of [32]. Proposition 7.8 Let a ∈ W 1,∞ (G, R3×3 η ) for some η > 0 and let v ∈ H (curl) fulfill div(av) ∈ L2 (G) and trno (av) ∈ H 1/2 (). Then v belongs to H 1 (G)3 and satisfies   v Hx1 ≤ c v H (curl) + div(av) L2x + trno (av) H 1/2 () =: cκ(v). Proof There exists a finite partition of unity {χi }i on G such that the support of each χi is contained in a simply connected subset of G with a connected smooth boundary. Since each χi is scalar, we obtain the estimate χi v L2x + curl(χi v) L2x + div(aχi v) L2x + trno (aχi v) H 1/2 () ≤ cκ(v).



We can thus assume that  is connected (and G simply connected). In this case, curl v belongs to N (div) and so (7.19) yields a vector field w ∈ H 1 (G) ∩ N0 (div) with curl v = curl w and w Hx1 ≤ c curl v L2x . As the difference v − w belongs to N(curl), it is represented by v − w = ∇ϕ for a function ϕ ∈ H 1 (G) by Proposition IX.1.2 of [36].  Here we can assume that G ϕ dx and so ϕ 2  ∇ϕ 2  v 2 + w 2 by Poincaré’s inequality. We further have div(a∇ϕ) = div(av) − div(aw) ∈ L2 (G), trno (a∇ϕ) = trno (av) − trno (aw) ∈ H 1/2(), because of the assumptions and w ∈ H 1 (G). Due to the uniform ellipticity, ϕ is thus an element of H 2 (G) satisfying   ϕ Hx2 ≤ c v L2x + div(av) L2x + trno (av) H 1/2 () + w Hx1 ≤ cκ(v). The assertion now follows from the equation v = w + ∇ϕ.

 

In the proof of Proposition 7.7, we want to avoid the localization procedure since we need global-in-time estimates. This can be done using a new coordinate system near  = ∂G. (Possibly, one could derive the apriori estimates in Sect. 6.3 in a similar way; but for the regularization this is not clear because of the mollifier arguments.) For a fixed distance ρ > 0, on the collar ρ = {x ∈ G | dist(x, ) < ρ}, we can find smooth functions τ 1 , τ 2 , n : ρ → R3 such that the vectors {τ 1 (x), τ 2 (x), n(x)} form an orthonormal basis of R3 for each point x ∈ ρ and n extends the outer unit normal at . Hence, τ 1 and τ 2 span the tangential planes at . For ξ, ζ ∈ {τ 1 , τ 2 , n}, v ∈ R3 and a ∈ R3×3 , we set ∂ξ =

j

ξj ∂j ,

vξ = v · ξ,

v ξ = vξ ξ,

v τ = vτ 1 τ 1 + vτ 2 τ 2 ,

aξ ζ = ξ aζ.

We state several calculus formulas needed below, where it is always assumed that the functions involved are sufficiently regular. We can switch between the derivatives of the coefficient vξ and the component v ξ up to a zero-order term since ∂ζ v ξ = ∂ζ vξ ξ + vξ ∂ζ ξ. The commutator of tangential derivatives and traces ∂τ trta v = ∂τ (v × n) = trta ∂τ v + v × ∂τ n

on 



is also of lower order. Similarly, the directional derivatives commute ∂ξ ∂ζ v =

j,k

ξj ∂j (ζk ∂k v) = ∂ζ ∂ξ v +



ξj ∂j ζk ∂k v − ζk ∂k ξj ∂j v

j,k

up to a first-order operator with bounded coefficients. The gradient of a scalar function ϕ is expanded as ∇ϕ = so that ∂j =



ξ ξj ∂ξ

ξ



ξ · ∇ϕ ξ =

ξ ∂ξ ϕ,

ξ

for j ∈ {1, 2, 3}. Because of the formulas before (5.5) we have

curl =

j

Sj ∂j =

j,ξ

Sj ξj ∂ξ =:

ξ

S(ξ )∂ξ .

Since the kernel of S(n) is spanned by n, we can write S(n)v = S(n)v τ , and the restriction of S(n) to span{τ 1 , τ 2 } has an inverse R(n). We now provide the tools that allow us transfer to the arguments of Proposition 6.9 from R+ 3 to the present setting. We first isolate the normal derivative of the tangential components of v in the equation curl v = f . By the above expansion curl v = J (n)(∂n v)τ + J (τ 1 )∂τ 1 v + J (τ 2 )∂τ 2 v, we obtain ∂n v τ =

i

(∂n τ i vτ i + τ i ∂n τ i · v) + R(n) f − J (τ i )∂τ i v i

(7.27)

where the first sum only contains zero-order terms. In order to recover the normal derivative of the normal component of v, we resort to the divergence operator. The divergence of a vector field v can be expressed as div v =

j

∂j

ξ

vξ ξj =

  ∂ξ vξ + div(ξ )vξ . ξ

Letting ϕ = div(av) for a matrix-valued function a, we derive div(av) = =



ξ,ζ ξ,ζ

ann ∂n vn = ϕ −

∂ξ (ξ aζ vζ ) +

ξ

div(ξ ) ξ av

(aξ ζ ∂ξ vζ + ∂ξ aξ ζ vζ ) +

(ξ,ζ )=(n,n)

=: ϕ − D(a)v,

aξ ζ ∂ξ vζ −

ξ,ζ

ξ

div(ξ ) ξ av,

∂ξ aξ ζ vζ −



div(ξ ) ξ av

ξ

(7.28)



where D(a)v contains all tangential derivatives and normal derivatives of tangential components of v plus zero-order terms. Next, let a ∈ W 1,∞ (J × G, R3×3 sym ) be positive 1 1 2 definite, v ∈ C (J , Hx ), and ψ ∈ Lt,x . In view of (7.7), we look at the equation 









t

div a(t)v(t) = div a(0)u(0) −



 div(σ u(s)) + ψ(s) ds

(7.29)

0

for 0 ≤ t ≤ T . We set γ = σnn /ann and (t, s) = exp(− and (7.29) yield

t s

γ (τ ) dτ ). Equations (7.28)

    ann (t)∂n vn (t) = div a(0)v(0) − D a(t) v(t)  t    − γ (s)ann (s)∂n vn (s) + D σ )v(s) + ψ(s) ds, 0

cf. (6.25). Differentiating with respect to t and solving the resulting ODE, we obtain ann (t)∂n vn (t)



t

= (t, 0)ann (0)∂n nn (0) −

   (t, s) D(σ )v(s) + ψ(s) + ∂s D(a(s))v(s) ds

0

= (t, 0) div(a(0)v(0)) − D(a(t))v(t)  t     + (t, s) γ (s)D a(s) v(s) − D(σ )v(s) − ψ(s) ds.

(7.30)

0

Before tackling the (quite demanding) proof of Proposition 7.7, we describe our reasoning. We have to bound ∂tk E and ∂tk H in Hx3−k for k ∈ {0, 1, 2} by the L2x -norms j j of ∂t E and ∂t H for j ∈ {0, 1, 2, 3}. The Hx1 -norm of ∂tk H with k ∈ {0, 1, 2} can easily be estimated by means of the curl–div estimates from Proposition 7.8 since we control curl, divergence and normal trace of ∂tk H via the time differentiated Maxwell system (7.9) and (7.11). Aiming at higher space regularity, we can apply the above strategy to tangential derivatives of ∂tk H only, whereas normal derivatives destroy the boundary conditions in (7.9). Here we proceed as in Proposition 6.9: The tangential components of normal derivatives are read off the differentiated Maxwell system using the expansion (7.27) of the curl-operator, while the normal components are bounded employing the divergence condition (7.11) and formula (7.28). In these arguments we have to restrict ourselves to fields localized near the boundary. The localized fields in the interior can be controlled more easily since the boundary conditions become trivial for them. The electric fields E have less favorable divergence properties because of the conductivity term in (7.9). Instead of Proposition 7.8, we thus employ the energy bound of the system (7.32) derived by differentiating the Maxwell equations in time and tangential



directions. The normal derivatives are again treated by the curl-div-strategy indicated in the previous paragraph. However, to handle the extra divergence term in (7.11) caused by the conductivity, we need the more sophisticated divergence formula (7.30) which relies on an ODE derived from (7.11). This program is carried out by iteration on the space regularity. In each step one has to start with the magnetic fields in order to use their better properties when estimating the electric ones. Proof (Of Proposition 7.7) Let (E, H) be a solution of (7.1) on J∗ = [0, T∗ ) satisfying z(t) ≤ δ 2 and Eqs. (7.6) and (7.7). Take k ∈ {0, 1, 2} and 0 ≤ t < T∗ , where we let s = 0 for simplicity. To localize the fields, we choose smooth scalar functions χ and 1 − χ =: ϑ on G having compact support in G \ ρ/2 and ρ , respectively. The proof is divided into several steps following the above outline. (1) Estimate of ∂tk H in Hx1 . The time differentiated Maxwell system (7.9) and (7.11) combined with estimates (7.16) yield    curl ∂ k H(t)

1/2

L2x

≤ cek+1 (t) + cz(t)δk2 ,

L2x

≤ cz(t)δk2 ,

H 1/2 ()

≤ cz(t)δk2 ,

t

   div(, μk ∂ k H(t)) t

   trno (, μk ∂ k H(t)) t

where δk2 = 1 for k = 2 and δk2 = 0 for k ∈ {0, 1}. Proposition 7.8 thus implies   k ∂ H(t)2 1 ≤ cek+1 (t) + cz2 (t)δk2 , t Hx  t  t   k ∂ H(s)2 1 ds ≤ c (ek+1 (s) + z2 (s)δk2 ) ds. t H 0

x

(7.31)

0

We stress the core fact that the inhomogeneities in (7.9) and (7.11) are quadratic in (E, H) and can thus be bounded by z via (7.16). (2) Estimates in the interior for E and H. We look at the localized fields ∂tk (χE) and k ∂t (χH) whose support supp χ is strictly separated from the boundary. Hence, their spatial derivatives satisfy the boundary conditions of the Maxwell system so that we can treat the electric fields via energy bounds and the magnetic ones via the curl-div estimates.



(a) Let α ∈ ℕ_0^3 with |α| ≤ 3 − k. We apply ∂_x^α χ to the Maxwell system (7.12), deriving the equations

ε_d(E) ∂_t ∂_x^α ∂_t^k(χE) = curl ∂_x^α ∂_t^k(χH) − σ ∂_x^α ∂_t^k(χE) + ∂_x^α([χ, curl] ∂_t^k H)
    − Σ_{0≤β<α} \binom{α}{β} ∂_x^{α−β}(σ + ε_d(E)) ∂_x^β ∂_t^k(χE) − ∂_x^α(χ f̃_k), …

… > 1. All families {R_ℓ}_{ℓ∈ℕ_0} from the above examples furnished with the discrepancy principle yield regularization schemes.

Theorem 17.7 (Error Reducing Property [38]) Let {u_k}, u_0 = 0, be generated by the Landweber method with 0 < ϑ < 1/‖T‖^2 or by the steepest descent method w.r.t. T ∈ L(X, Y) and 0 ≠ y ∈ Y. For x ∈ X we have that

‖y − T u_k‖_Y ≥ 2 ‖y − T x‖_Y   ⟹   ‖u_{k+1} − x‖_X < ‖u_k − x‖_X.

A similar result holds for the conjugate gradient method.

Proof Let r_k := y − T u_k. From

‖u_{k+1} − x‖_X^2 − ‖u_k − x‖_X^2 = ϑ_k ( 2⟨y − T x, r_k⟩_Y − ‖r_k‖^2 − ⟨r_{k+1}, r_k⟩_Y )

and ⟨r_{k+1}, r_k⟩_Y > 0 we find that

‖u_{k+1} − x‖_X^2 − ‖u_k − x‖_X^2 < ϑ_k ‖r_k‖_Y ( 2 ‖y − T x‖_Y − ‖r_k‖_Y ) ≤ 0.

The proof for the cg method is more involved and is given in [118].

 

The above theorem reveals the discrepancy principle to be especially suited to stop the Landweber, steepest descent, and cg methods.
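The following minimal NumPy sketch illustrates this stopping rule for the linear Landweber iteration on a synthetic, severely ill-conditioned matrix problem; the matrix T, the noise level δ, the step size ϑ and the factor τ are hypothetical choices made only for this illustration and are not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-dimensional test problem: T is ill-conditioned,
# x_true the sought solution, y_delta the noisy data with ||noise|| ~ delta.
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
T = U @ np.diag(0.9 ** np.arange(n)) @ V.T        # geometrically decaying singular values
x_true = V[:, 0] + 0.5 * V[:, 3]
delta = 1e-3
y_delta = T @ x_true + delta * rng.standard_normal(n) / np.sqrt(n)

# Landweber iteration u_{k+1} = u_k + theta * T^T (y_delta - T u_k),
# stopped by the discrepancy principle ||y_delta - T u_k|| <= tau * delta.
theta = 0.9 / np.linalg.norm(T, 2) ** 2           # 0 < theta < 1/||T||^2
tau = 2.0
u = np.zeros(n)
for k in range(100000):
    r = y_delta - T @ u
    if np.linalg.norm(r) <= tau * delta:
        break
    u = u + theta * T.T @ r

print(f"stopped after {k} iterations, error = {np.linalg.norm(u - x_true):.3e}")
```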

18 Newton-Like Solvers for Non-linear Ill-Posed Problems

Let F : D(F) ⊂ X → Y be continuously Fréchet differentiable between the Hilbert spaces X and Y. We want to solve F(x) = y^δ where ‖y − y^δ‖ ≤ δ, y = F(x^+), and F(·) = y is locally ill-posed at x^+ ∈ D(F). We now derive an iterative procedure to approximate x^+. Assume that x_n is already an approximation which we would like to improve by adding a correction s_n: x_{n+1} = x_n + s_n. Naturally, we want s_n to approximate the error e_n := x^+ − x_n (exact Newton step), which satisfies the linear equation (A_n := F'(x_n))

A_n e_n = y − F(x_n) − E(x^+, x_n) =: b_n

where E(u, v) = F(u) − F(v) − F'(v)(u − v) is the linearization error. The exact right-hand side b_n is unknown. Therefore, we determine s_n from

A_n s = b_n^δ   with b_n^δ := y^δ − F(x_n).     (18.1)

Observe that (18.1) is ill-posed under (16.12), see Theorem 16.18. Hence, we apply regularization operators to stably solve (18.1):

s_n = s_{n,ℓ} = s'_n + R_{n,ℓ}(b_n^δ − A_n s'_n)




where {R_{n,ℓ}} are regularizing operators for A_n^+ and s'_n ∈ X is a starting guess. Recall that¹

lim_{ℓ→∞} s_{n,ℓ} = A_n^+ b_n^δ + P_{N(A_n)} s'_n  if  b_n^δ ∈ R(A_n) ⊕ R(A_n)^⊥,  and ‖s_{n,ℓ}‖ → ∞ otherwise.

Our generic Newton iteration accordingly reads

x_{n+1} = x_n + s'_n + R_{n,m_n}(b_n^δ − A_n s'_n).

To set up a specific scheme requires the choice of {R_{n,ℓ}}, the choice of s'_n, the choice of m_n, and the choice of a stopping criterion for the Newton iteration. Here are some examples.

Example 18.1 (Nonlinear Landweber Method) This iteration is given by

x_{n+1} = x_n + A_n^* b_n^δ,   ‖A_n‖ ≤ 1.

It can be derived from the generic scheme when the choices are R_{n,ℓ} = q_ℓ(A_n^* A_n) A_n^* where q_ℓ(t) = Σ_{j=0}^{ℓ−1} (1 − t)^j, m_n = 1, s'_n = 0.

Example 18.2 (Nonlinear Steepest Descent Method) The iteration

x_{n+1} = x_n + λ_n A_n^* b_n^δ,   λ_n = ‖A_n^* b_n^δ‖_X^2 / ‖A_n A_n^* b_n^δ‖_Y^2,

fits the generic scheme from above with R_{n,ℓ} = q_ℓ(A_n^* A_n, b_n^δ) A_n^* where q_ℓ(·, b_n^δ) ∈ Π_{ℓ−1}, m_n = 1, s'_n = 0.

Example 18.3 (Iteratively Regularized Gauß-Newton Methods) Here,

x_{n+1} = x_0 + R_{n,m_n}(b_n^δ − A_n(x_0 − x_n))

where R_{n,ℓ} = (A_n^* A_n + α_ℓ I)^{−1} A_n^* (in the original version), lim_{ℓ→∞} α_ℓ = 0 (strictly decreasing), {m_n} is chosen a priori (strictly increasing), s'_n = x_0 − x_n.

Please see [104] for more details on the three examples from above as well as references to the original literature.

¹ By P_W : X → X we denote the orthogonal projector onto the closed subspace W ⊂ X.



Example 18.4 (Inexact Newton Methods) These iterations are of the form x_{n+1} = x_n + R_{n,m_n} b_n^δ where {R_{n,ℓ}} may be chosen from the following methods: Landweber, steepest descent, conjugate gradients, implicit iteration. Moreover, {m_n} is chosen a posteriori (using x_n) to allow adaptivity. Further, s'_n = 0.

In this chapter we study the inexact Newton method REGINN (REGularization by INexact Newton methods), see Algorithm 1.

Algorithm 1 REGINN
Input: starting guess x_start; (y^δ, δ); F; F'; {μ_n} ⊂ ]0, 1[; τ > 0;
Output: x_N with ‖y^δ − F(x_N)‖ ≤ τδ;
n := 0; x_0 := x_start;
while ‖b_n^δ‖ > τδ do
    m := 0; s_{n,0} := 0;
    repeat
        m := m + 1; s_{n,m} := R_{n,m} b_n^δ;
    until ‖b_n^δ − A_n s_{n,m}‖_Y < μ_n ‖b_n^δ‖_Y
    x_{n+1} := x_n + s_{n,m};
    n := n + 1;
end while
x_N := x_n;

Remark 18.5 The "while"-loop in Algorithm REGINN realizes the Newton or outer iteration whereas the "repeat"-loop computes the Newton step and is called the inner iteration. Observe that, if REGINN is well defined and terminates, then

m_n = min{ m ∈ ℕ : ‖A_n s_{n,m} − b_n^δ‖_Y < μ_n ‖b_n^δ‖_Y }     (18.2)

and

‖y^δ − F(x_N)‖_Y ≤ τδ < ‖y^δ − F(x_n)‖_Y,   n = 0, …, N − 1.     (18.3)
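As a complement to Algorithm 1, here is a minimal finite-dimensional sketch of the two nested loops of REGINN, using the Landweber method as inner regularization (one of the admissible choices of Example 18.4). The forward map F, its derivative, the constant tolerance μ and the parameter τ are hypothetical choices for this toy problem only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)

def F(x):
    # Hypothetical smooth nonlinear forward operator R^n -> R^n.
    return A @ x + 0.1 * x ** 3

def F_prime(x):
    # Its Jacobian (Frechet derivative) at x.
    return A + np.diag(0.3 * x ** 2)

x_plus = rng.standard_normal(n)
delta = 1e-4
y_delta = F(x_plus) + delta * rng.standard_normal(n) / np.sqrt(n)

tau, mu = 2.0, 0.7            # discrepancy factor and inner tolerance (kept constant here)
x = np.zeros(n)               # starting guess
for outer in range(50):       # "while"-loop: Newton / outer iteration
    b = y_delta - F(x)
    if np.linalg.norm(b) <= tau * delta:
        break
    An = F_prime(x)
    omega = 0.9 / np.linalg.norm(An, 2) ** 2
    s = np.zeros(n)           # "repeat"-loop: Landweber as inner iteration
    while np.linalg.norm(b - An @ s) >= mu * np.linalg.norm(b):
        s = s + omega * An.T @ (b - An @ s)
    x = x + s

print(f"outer steps: {outer}, final residual: {np.linalg.norm(y_delta - F(x)):.2e}")
```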


18.1 Decreasing Error and Weak Convergence

For the analysis of REGINN we require two properties of the regularizing sequence {s_{n,m}}, s_{n,m} = R_{n,m} b_n^δ. If the iterate x_n is well defined then

lim_{m→∞} A_n s_{n,m} = P_{R(A_n)} b_n^δ,     (18.4)

and there exist a + ≥ 1 such that An sn,m Y ≤ + bnδ Y

for all m.

(18.5)

The following methods satisfy (18.4) as well as (18.5): Landweber, Tikhonov, implicit iteration, conjugate gradients (all with constant 1 in (18.5)), and steepest descent (a constant ≤ 2 is proven, but 1 is conjectured).

Lemma 18.6 Let x_n be well defined and assume (18.4) as well as ‖P_{R(A_n)^⊥} b_n^δ‖_Y < ‖b_n^δ‖_Y. Then, for any tolerance

μ_n ∈ ] ‖P_{R(A_n)^⊥} b_n^δ‖_Y / ‖b_n^δ‖_Y , 1 [     (18.6)

the stopping index m_n (18.2) is well defined.

Proof By (18.4),

lim_{m→∞} ‖A_n s_{n,m} − b_n^δ‖_Y / ‖b_n^δ‖_Y = ‖P_{R(A_n)^⊥} b_n^δ‖_Y / ‖b_n^δ‖_Y < μ_n

which completes the proof. □

If the assumption ‖P_{R(A_n)^⊥} b_n^δ‖_Y < ‖b_n^δ‖_Y of the above lemma is violated then REGINN fails (as well as other Newton schemes): under ‖P_{R(A_n)^⊥} b_n^δ‖_Y = ‖b_n^δ‖_Y, that is, P_{R(A_n)} b_n^δ = 0, we have s_{n,m} = 0 for all m.

Lemma 18.7 Let x_n be well defined with ‖P_{R(A_n)^⊥} b_n^δ‖_Y < ‖b_n^δ‖_Y. Assume that μ_n satisfies (18.6). Then, s_{n,m_n} is a descent direction for ϕ : D(F) ⊂ X → ℝ, ϕ(·) := ½ ‖y^δ − F(·)‖_Y^2, at x_n:² ⟨ϕ'(x_n), s_{n,m_n}⟩_X < 0.

² For ease of presentation we identify ϕ'(x_n) ∈ X' with its Riesz representer in X.



Proof By ϕ'(·) = −F'(·)^*(y^δ − F(·)) we find that

⟨ϕ'(x_n), s_{n,m_n}⟩_X = ⟨b_n^δ, −A_n s_{n,m_n}⟩_Y = −‖b_n^δ‖_Y^2 + ⟨b_n^δ, b_n^δ − A_n s_{n,m_n}⟩_Y
    ≤ ‖b_n^δ‖_Y ( ‖b_n^δ − A_n s_{n,m_n}‖_Y − ‖b_n^δ‖_Y ) < ‖b_n^δ‖_Y^2 (μ_n − 1) < 0

and the lemma is verified. □

Remark 18.8 If the assumptions of the above lemma hold true then

ϕ(x_n + λ s_{n,m_n}) < ϕ(x_n)   for all positive λ small enough.

This property explains the term "descent direction". Indeed, by Theorem 16.17,

ϕ(x_n + λ s_{n,m_n}) − ϕ(x_n) = λ ∫_0^1 ⟨ϕ'(x_n + tλ s_{n,m_n}), s_{n,m_n}⟩_X dt.

  As ϕ  is continuous in xn ∈ int D(F ) , the integrand is negative for small λ > 0. We need a further property of the regularizing sequence {sn,m }: ⎫ If xn is well defined then for any m ∈ {1, . . . , mn } there is a vn,m−1 ∈ Y such that ⎪ ⎪ ⎪ ⎪ ⎪ ∗ ⎪ ⎪ sn,m = sn,m−1 + An vn,m−1 . ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ Further, let there be a continuous and monotonically increasing function ⎪ ⎪ ⎪ ⎪ ⎪ " : R → R with t ≤ "(t) for t ∈ [0, 1] such that if ⎪ ⎪ ⎪ ⎬ δ bn − An en Y 1, conjugate gradients: "(t) = 2t.

For our analysis we require the TCC in the following version:

‖E(v, w)‖_Y ≤ L ‖F'(w)(v − w)‖_Y   for one L < 1 and for all v, w ∈ B_r(x^+) ⊂ D(F).     (18.9)

Lemma 18.9 Assume (18.9) to hold with L < 1/2. Then, there is a constant 0 < ϱ ≤ L/(1 − L) < 1 such that

‖P_{R(F'(u))^⊥}(y − F(u))‖_Y ≤ ϱ ‖y − F(u)‖_Y   for all u ∈ B_r(x^+).

Proof We have that

‖P_{R(F'(u))^⊥}(y − F(u))‖_Y = ‖P_{R(F'(u))^⊥}(y − F(u) − F'(u)(x^+ − u))‖_Y ≤ L ‖F'(u)(x^+ − u)‖_Y.

Further,

‖F'(u)(x^+ − u)‖_Y ≤ ‖E(x^+, u)‖_Y + ‖y − F(u)‖_Y ≤ L ‖F'(u)(x^+ − u)‖_Y + ‖F(x^+) − F(u)‖_Y

yielding first

‖F'(u)(x^+ − u)‖_Y ≤ (1/(1 − L)) ‖y − F(u)‖_Y

and then the assertion. □
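The tangential cone condition (18.9) is an analytical assumption; for a concrete finite-dimensional map one can at least probe it empirically. The sketch below samples the ratio ‖E(v, w)‖/‖F'(w)(v − w)‖ on a small ball for the same hypothetical cubic perturbation of a linear map used above; this is only a numerical indication, not a proof of (18.9).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
A = np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)

def F(x):       return A @ x + 0.1 * x ** 3
def F_prime(x): return A + np.diag(0.3 * x ** 2)

def E(u, v):
    # linearization error E(u, v) = F(u) - F(v) - F'(v)(u - v)
    return F(u) - F(v) - F_prime(v) @ (u - v)

# Empirical check of the ratio ||E(v, w)|| / ||F'(w)(v - w)|| on a small ball.
x_center, r = rng.standard_normal(n), 0.1
ratios = []
for _ in range(1000):
    v = x_center + r * rng.standard_normal(n)
    w = x_center + r * rng.standard_normal(n)
    den = np.linalg.norm(F_prime(w) @ (v - w))
    if den > 0:
        ratios.append(np.linalg.norm(E(v, w)) / den)

print("largest observed ratio L ~", max(ratios))
```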



Theorem 18.10 Assume (18.4), (18.5), (18.7). Additionally, assume (18.9) with L satisfying3 "

L + +L < 1−L

for one

< 1.

Further, define μmin := "



1 τ

+L

1 1−L



and choose τ so large that μmin + +L
0) and the noise-free situations. Quantities with a superscript δ refer to the noisy situation: xnδ , bnδ , Aδn , etc. Thus, xn , 0 bn := y − F (xn ), and An originate from exact data y. Note that bn = y − F (xn ) − E(x + , xn ). The starting vector is independent of the noise: x0 = x0δ . REGINN is well defined in the noise-free situation when we set δ = 0, τ = ∞, and τ δ = 0. Except for termination, all results of Theorem 18.10 hold true for even smaller L μmin = "( 1−L ). Further, N(δ) = ∞ if there is no iterate xn with F (xn ) = y (premature termination). Theorem 18.13 Let δ = 0 and adopt all assumptions of Theorem 18.10. Restrict {μn }n to L [μ, −+L] where μ > "( 1−L ). Then, REGINN either stops after finitely many iterations with a solution of F (·) = y or the generated sequence {xn }n ⊂ Br (x + ) converges in norm



to a solution of F(·) = y. If x^+ is the unique solution in B_r(x^+) then lim_{n→∞} ‖x_n − x^+‖_X = 0.

Proof If REGINN terminates prematurely with x_N then F(x_N) − y = 0, hence, x_N is a solution. Otherwise, {x_n}_n is a Cauchy sequence, which we show now. Let ℓ, p ∈ ℕ with ℓ > p. We have that

‖x_ℓ − x_p‖^2 = ‖e_ℓ − e_p‖^2 = 2⟨e_ℓ − e_p, e_ℓ⟩ + ‖e_p‖^2 − ‖e_ℓ‖^2.

From the monotonicity 0 ≤ ‖e_{n+1}‖ < ‖e_n‖ we infer that lim_{n→∞} ‖e_n‖ = ξ ≥ 0. Thus,

‖e_p‖^2 − ‖e_ℓ‖^2 → 0 as p → ∞.     (18.14)

Further, e − ep = −(x − xp ) = −

−1

(18.7)

si,mi =

i=p

−1

A∗i / vi

where / vi = −

i=p

mi

vi,k−1 .

k=1

Hence, e − ep , e  =

−1

/ vi , Ai e  ≤

i=p

−1

/ vi Ai e

i=p

We proceed with Ai e = F  (xi )(x + − x ) ≤ F  (xi )(x + − xi ) + F  (xi )(x − xi ) (18.9)







 1  y − F (xi ) + F (xi ) − F (x ) 1−L

1  2 y − F (xi ) + 1−L

y − F (x ) 0 12 3

≤ y−F (xi ) as i≤, see (18.10)

3 y − F (xi ) 1−L

yielding 3 / vi y − F (xi ) . 1−L −1

e − ep , e  ≤

i=p





For bounding the summands we apply (18.7) once again. Setting n = i in the inequality in (18.7) and summing both sides from m = 1 to mi we end up in ei − si,mi − ei < 0 12 3 2

CM bi0

2

mi

  vi,m−1 "(γi ) − μi

m=1

=ei+1

where we have used that bi0 −Ai si,m−1 / bi0 ≥ μi . The latter displayed inequality leads to

y − F (xi )

mi

vi,m−1
"( 1−L )

/ vi y − F (xi ) ≤

mi

(18.11)



"(γi ). We get

  /M ei 2 − ei+1 2 vi,m−1 y − F (xi ) ≤ C

m=1

and then e − ep , e  ≤

/M   3C ep 2 − e 2 . 1−L

Finally, x − xp 2 ≤

6C /M 1−L

  p→∞ + 1 ep 2 − e 2 −−−−→ 0 (18.14)

which reveals {xn } to be a Cauchy sequence with limit , x ∈ Br (x + ). The proof is finished x ) .   by 0 = limn→∞ y − F (xn ) = y − F (,

18.3 Regularization Property of REGINN

In this section we assume that x^+ is the unique solution of F(x) = y in B_r(x^+).



To validate the regularization property of REGINN we follow a general scheme: 1. Show convergence in the noise-free setting: limn→∞ x + − xn X = 0. This has been established in the previous section. 2. Show a kind of stability for any n ∈ N0 : limδ→0 xnδ = ξn and limn→∞ x + − ξn X = 0 (generically one would expect ξn = xn ). 3. Employ the triangle inequality and monotonicity of the REGINN iterates: For m ≤ N(δ), δ δ δ x + − xN(δ) X ≤ x + − xm X ≤ x + − ξm X + ξn − xm X .

For step 2 we introduce lists Xn with elements from Br (x + ), n ∈ N0 . As we will see later Xn contains just all possible limits of xnδ for δ → 0. Definition 18.14 Set X0 := (x0 ) and determine Xn+1 from Xn as follows: for each ξ ∈ Xn compute the Newton step sn,mn (ξ ) = sn,mn (ξ ) (ξ ) by the repeat–loop of REGINN where, however, An is replaced by F  (ξ ) and bnδ by y − F (ξ ). If y = F (ξ ) then ξ + sn,mn (ξ ) (ξ ) belongs to Xn+1 , otherwise include ξ into Xn+1 . Further, if     F (ξ )sn,m −i (ξ ) − y − F (ξ )  = μn y − F (ξ ) Y n Y for i = 1, . . . , kn < mn = mn (ξ )

(18.15)

then ξ + sn,mn −i (ξ ), i = 1, . . . , kn , belong to Xn+1 as well. We call ξ a predecessor of ξ + sn,mn −i (ξ ), i = 1, . . . , kn , and, in turn, call the latter successors of ξ . Obviously, xn ∈ Xn and generically, Xn will contain only this element. Assume that Xj = (xj ), j = 0, . . . , n, and that sn,mn −1 (xn ) as well as sn,mn −2 (xn ) satisfy (18.15). Then, Xn+1 = (xn+1 , xn + sn,mn −1 (xn ), xn + sn,mn −2 (xn )) and from now on all Xj , j ≥ n + 1, will have three elements at least. By (18.12) x + − ξn+1 X < x + − ξn X whenever ξn+1 ∈ Xn+1 is a successor of ξn ∈ Xn .

(18.16)

As Xn+1 contains only successors of elements in Xn the whole sequence {Xn }n comes with a tree structure. Figure 18.1 sketches such a possible tree. The bold dots indicate the sequence {xn }n generated by REGINN with exact data (δ = 0 and the standard stopping criterion for the repeat-loop).



Fig. 18.1 A possible tree structure of the lists {Xn }n∈N0 from Definition 18.14

Next we require a stability property of the inner iteration of REGINN: Let {δ_i}_{i∈ℕ} be a positive zero sequence and let {x_n^{δ_i}}_{i∈ℕ} be well defined. Then,

lim_{i→∞} x_n^{δ_i} = ξ ∈ X_n   ⟹   lim_{i→∞} s_{n,k}^{δ_i} = s_{n,k}(ξ),  k = 1, …, m_n(ξ),     (18.17)

where s_{n,k}^{δ_i} = s_{n,k}(x_n^{δ_i}). The following methods satisfy (18.17):

• For the Landweber and implicit iterations stability (18.17) can be shown by induction straightforwardly.
• For the nonlinear schemes steepest descent and conjugate gradients the situation is more delicate, see [151, Lemma 3.2] and [83, Lemma 3.4], respectively.

Lemma 18.15 Let all assumptions of Theorem 18.10 hold true. Further, assume (18.17). δ If limi→∞ δi = 0 and n ≤ N(δi ) for i sufficiently large then {xni }i∈N splits into convergent subsequences, all of which converge to elements of Xn . Proof We argue inductively. For n = 0 the assertion holds as X0 = (x0 ) and x0δ = x0 .



δ

Now assume that {xni }i∈N , for an n < N(δi ), i sufficiently large, splits into convergent subsequences with limits in Xn . To simplify the notation, let {xnδi }i∈N itself converge to an element of Xn , say, limi→∞ xnδi = ξ ∈ Xn . By (18.17) and continuity, δi lim Aδni sn,k − bnδi Y = F  (ξ )sn,k (ξ ) − / bn Y ,

i→∞

k = 0, . . . , mn (ξ ),

where / bn Y < μn / bn Y we get bn = y − F (ξ ). Let y = F (ξ ). As F  (ξ )sn,mn (ξ ) (ξ ) − / δi Aδni sn,m − bnδi Y < μn bnδi Y n (ξ )

for i sufficiently large.

δ This yields that mn (xni ) ≤ mn (ξ ) for i sufficiently large. In case of μn / bn Y <  bn Y we also have that F (ξ )sn,mn (ξ )−1 (ξ ) − / δi μn bnδi Y < Aδni sn,m − bnδi Y n (ξ )−1 δ

for i large enough.

δ

Thus, mn (ξ ) − 1 < mn (xni ) ≤ mn (ξ ), i.e., mn (xni ) = mn (ξ ) for i sufficiently large and lim s

δ

i i→∞ n,mn (xn )

(xnδi ) = sn,mn (ξ ) (ξ ).

Hence,  δi lim xn+1 = lim xnδi + sn,m

i→∞

i→∞

δ

i n (xn )

 (xnδi ) = ξ + sn,mn (ξ ) (ξ ) ∈ Xn+1 .

In case (18.15) applies we have that μn / bn Y < F  (ξ )sn,mn (ξ )−(kn +1) (ξ )−/ bn Y . Arguing δ δ as above we see that mn (ξ ) − (kn + 1) < mn (xni ) implying mn (ξ ) − kn ≤ mn (xni ) ≤ δi mn (ξ ) for i large enough. Therefore, the set {mn (xn )}i∈N has limit points in {mn (ξ ) − δi δi = xnδi + s kn , . . . , mn (ξ )}. In any case, all possible limit points of xn+1 δ (x ) as n,mn (xni ) n i → ∞ are in Xn+1 by the very construction of this list. It remains to consider the situation where y = F (ξ ). Since x + is the unique solution in δi X < x + − xnδi X , see Br (x + ) we obtain x + = ξ . Using the monotonicity x + − xn+1 δi = ξ ∈ Xn+1 follows immediately.   Theorem 18.10, the convergence limi→∞ xn+1 The set   S := ξ ∈ XN0 : ξk ∈ Xk ∀k ∧ ξk+1 is a successor of ξk contains all sequences which can be generated following a continuous path in the tree of {Xn }n from the root downwards, see Fig. 18.1. Each sequence in S can be generated by a



run of REGINN with a slightly modified stopping criterion: The ’ n a successor of ξn and any ξ with  < n a predecessor. Lemma 18.16 Let {ξ () }∈N be a sequence in S. Under the assumptions of Theorem 18.13, for any η > 0 there is an M(η) ∈ N0 such that x + − ξn() X < η

for all n ≥ M(η) and all  ∈ N.

(18.19)

Proof Assume (18.19) not to be true. Then, there are an η > 0 and sequences {nj } and {j }, where {nj } is strictly increasing, such that x + − ξnj j X ≥ η ( )

for all j ∈ N.

We distinguish two cases: 1. Let {j } be bounded. Then, {j } has finitely many limit points. Let {jk }k converge to )

∗ , that is, ∗ = jk for k large enough. Hence, x + − ξn(jk X ≥ η which contradicts (18.18). 2. Let {j } be unbounded. Without loss of generality we may consider {j } to be strictly increasing. We rearrange, that is, relabel the sequences in {ξ (i) }i∈N such that x + − ξn() ≥η  X

for all  ∈ N.

(18.20)

()

As Xn1 is finite there are infinitely many  ∈ N such that ξn1 = ρn1 ∈ Xn1 . We collect those  in L1 ⊂ N. As Xn2 is finite too there are infinitely many  ∈ L1 \{1} such that ξn() 2 = ρn2 ∈ Xn2 . We collect those  in L2 ⊂ N\{1}. Since L2 ⊂ L1 , ρn2 is a successor of ρn1 . Proceeding inductively we find {ρnk }k , ρnk ∈ Xnk , and {Lk }k with Lk+1 ⊂ Lk , Lk ⊂ N\{1, . . . , k − 1} unbounded, such that ξn() = ρnk k

for all  ∈ Lk .

(18.21)



Since Lk+1 ⊂ Lk , ρnk+1 is a successor of ρnk . By (18.18), limk→∞ ρnk = x + , that is, there exists a K = K(η): x + − ρnk X < η

for all k ≥ K.

(18.22)

For ℓ ∈ L_K fixed, the errors ‖x^+ − ξ_{n_r}^{(ℓ)}‖_X are decreasing in r ∈ ℕ, see (18.16). Since ℓ ≥ K we have

‖x^+ − ξ_{n_ℓ}^{(ℓ)}‖_X ≤ ‖x^+ − ξ_{n_K}^{(ℓ)}‖_X = ‖x^+ − ρ_{n_K}‖_X < η,

using (18.21) for the equality and (18.22) for the last estimate,

contradicting (18.20).

Theorem 18.17 Adopt all assumptions of Theorem 18.10 and let x + be the only solution of F (x) = y in Br (x + ). Then, δ lim x + − xN(δ) X = 0.

δ→0

Proof Let {δj }j ∈N be a positive zero sequence. Set nj = N(δj ). δ

δ

δ

1. If nj = m as j → ∞ then xnjj = xmj for j sufficiently large and y δj −F (xmj ) Y ≤ τ δj . δ

By Lemma 18.15, {xmj } splits into subsequences which converge to elements of Xm . δ Let {xmjr }r converge to ξm . Thus, F (ξm ) = y and ξm = x + since Xm ⊂ Br (ξ ). This δ δ argumentation holds for any limit point of {xmj } which yields xmj → x + as j → ∞. 2. If nj ≤ m for j → ∞. Then {nj } has limit points in {0, . . . , m} each of which can be dealt with as in case 1. 3. We begin with constructing a special sequence {ξ () }∈N ⊂ S. To this end we name the elements in Xn uniquely: for any n ∈ N0 let Xn = (σn,1 , . . . , σn,kn ). With σn,i we  associate the sequence ξ () ,  = (n, i) = i + n−1 m=0 km , defined by

ξr()

⎧ ⎪ ⎪ ⎨the predessor of σn,i in Xr = σn,i ⎪ ⎪ ⎩ a successor of σn,i in Xr

: r < n, : r = n, : r > n.

Obviously, ξ () ∈ S. Let η > 0. By Lemma 18.16 there is an M = M(η) such that x + − ξM X < η/2 ()

for all  ∈ N.



The sequence {x_M^{δ_j}}_j splits into subsequences which converge to elements of X_M, see Lemma 18.15. Suppose that the subsequence {x_M^{δ_{j_p}}}_p converges to σ_{M,i} for one i ∈ {1, …, k_M}. Then, there is a P = P(η) ∈ ℕ such that

‖σ_{M,i} − x_M^{δ_{j_p}}‖_X < η/2   for all p ≥ P.     (∗)

Further, by the construction of {ξ^{(ℓ)}}_{ℓ∈ℕ} there exists an ℓ_* = ℓ(M, i) such that ξ_M^{(ℓ_*)} = σ_{M,i}. Thus, for p ≥ P so large that n_{j_p} ≥ M we have

‖x^+ − x_{N(δ_{j_p})}^{δ_{j_p}}‖_X < ‖x^+ − x_M^{δ_{j_p}}‖_X ≤ ‖x^+ − ξ_M^{(ℓ_*)}‖_X + ‖σ_{M,i} − x_M^{δ_{j_p}}‖_X < η

where the first estimate is due to the monotone error decrease of the REGINN iterates. As this argumentation holds for any of the finitely many limit points of {x_M^{δ_j}}_j the stated convergence is validated. □

Remark 18.18
(a) From a practical point of view, as an additional safeguarding technique, one would like to limit the number of inner iterations of REGINN. To this end the stopping criterion for the repeat-loop might be replaced by ‖b_n^δ − A_n s_{n,m}‖ < μ_n ‖b_n^δ‖ or m = m_max. In this case, Theorem 18.17 as well as the monotone error decrease still hold true if Landweber, implicit iteration, steepest descent or conjugate gradients are used as inner iterations.
(b) In [164, 165] REGINN is adapted to solve the inverse problem of electrical impedance tomography. For this application all assumptions of this section hold true, especially TCC (18.9) is satisfied [117] (in a semi-discrete setting). The implementation of [165] uses the conjugate gradient method as inner iteration and the tolerances are chosen according to (18.13). A variety of numerical experiments including some with measured data show the performance of the scheme.
(c) For variants of REGINN in a Banach space environment consult [124–126].

19 Inverse Problems Related to Abstract Evolution Equations

In this section we consider a special type of abstract evolution equations and related inverse problems which are motivated by the visco-elastic wave equation. This equation describes wave propagation in a realistic medium incorporating dispersion and attenuation. Thus, it serves as forward model in seismic tomography which entails the reconstruction of material parameters (wave speeds, density etc.) from measurements of reflected wave fields. This section is mainly based on results presented originally in [111, 112, 167] where the underlying differential operators are independent of time. For a wave equation with time-dependent differential operators we refer to [73, 74].

19.1 Motivation: Full Waveform Inversion in Seismic Imaging

19.1.1 Elastic Wave Equation

The propagation of waves in an elastic medium D ⊂ ℝ^3 is governed by the elastic wave equation: Let σ : [0, ∞) × D → ℝ^{3×3}_sym be the stress tensor and v : [0, ∞) × D → ℝ^3 be the velocity field. Then,

∂_t σ(t, x) = C(μ(x), π(x)) ε(v(t, x))   in [0, ∞) × D,     (19.1)
ρ(x) ∂_t v(t, x) = div σ(t, x) + f(t, x)   in [0, ∞) × D,     (19.2)

where ρ : D → ℝ is the mass density, f : [0, ∞) × D → ℝ^3 a volume force, and

C(m, p)ε = 2m ε + (p − 2m) trace(ε) I,   ε ∈ ℝ^{3×3}_sym, m, p ∈ ℝ,





specifies Hooke's law. Further, μ, π : D → ℝ are the relaxed S- and P-wave moduli, respectively. Finally,

ε(v) := ½ ( (∇_x v)^⊤ + ∇_x v )

is the linearized strain rate. Initial and boundary conditions for σ and v have to be specified.

Remark 19.1 The stress in a body is caused by the displacement field u and the body's material. One material law for a standard linear material is Hooke's law:

σ(t, x) = C(μ(x), π(x)) ε(u(t, x)),

see, e.g., [76, Chap. 4]. As the velocity v is the time derivative of the displacement, v = ∂_t u, we get the first equation (19.1) of the elastic wave equation. The second one (19.2) is a balance law for the conservation of momentum, see Sect. 1.4.

Remark 19.2 The first-order system (19.1) and (19.2) can equivalently be formulated as one second-order equation:

ρ(x) ∂_t^2 v(t, x) = div( C(μ(x), π(x)) ε(v(t, x)) ) + g(t, x).     (19.3)

Set

σ(t, x) := C(μ(x), π(x)) ∫_0^t ε(v(s, x)) ds.

Then, (σ, v) solves

∂_t σ(t, x) = C(μ(x), π(x)) ε(v(t, x)),     (19.4)
ρ(x) ∂_t v(t, x) = div σ(t, x) + ∫_0^t g(s, x) ds + ρ(x) ∂_t v(0, x).     (19.5)

Conversely, if (σ, v) solves (19.4) and (19.5) then v is a solution of (19.3). Initial and boundary conditions for both formulations have to be compatible.
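For readers who want to experiment with the constitutive law, the following short NumPy snippet applies the isotropic Hooke operator C(m, p) from the formula above to a volumetric and to a shear strain; the numerical values of the moduli are hypothetical. It confirms that volumetric strains are scaled by 3p − 4m while trace-free (shear) strains are scaled by 2m.

```python
import numpy as np

def hooke(m, p, eps):
    """Isotropic Hooke operator C(m, p) acting on a symmetric 3x3 strain tensor."""
    return 2.0 * m * eps + (p - 2.0 * m) * np.trace(eps) * np.eye(3)

# Hypothetical moduli: m plays the role of mu (shear), p the role of pi (P-wave modulus).
m, p = 30.0e9, 80.0e9

eps_vol = np.eye(3)                                   # purely volumetric strain
eps_shear = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])               # trace-free shear strain

print(np.allclose(hooke(m, p, eps_vol), (3 * p - 4 * m) * np.eye(3)))   # True
print(np.allclose(hooke(m, p, eps_shear), 2 * m * eps_shear))           # True
```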

19.1.2 Visco-Elastic Wave Equation

Waves propagating in the earth exhibit damping (loss of energy) which is not reflected by the elastic wave equation. Thus, the elastic wave equation has to be augmented by a mechanism which models dispersion and attenuation. Several of these mechanisms are



known in the literature, which are all closely related; see [70, Chap. 5], [167, Chap. 2] and Sect. 1.5 for an overview and references. Here, we focus upon the visco-elastic wave equation in the velocity–stress formulation based on the generalized standard linear solid rheology: Using L ∈ ℕ memory tensors η_l : [0, T] × D → ℝ^{3×3}_sym, l = 1, …, L, the new formulation reads

ρ ∂_t v = div σ + f   in ]0, T[ × D,     (19.6a)
∂_t σ = C((1 + Lτ_S)μ, (1 + Lτ_P)π) ε(v) + Σ_{l=1}^L η_l   in ]0, T[ × D,     (19.6b)
−τ_{σ,l} ∂_t η_l = C(τ_S μ, τ_P π) ε(v) + η_l,   l = 1, …, L,   in ]0, T[ × D.     (19.6c)

The functions τ_P, τ_S : D → ℝ are scaling factors for the relaxed moduli π and μ, respectively. They have been introduced by Blanch et al. [12]. Wave propagation is frequency dependent, and the numbers τ_{σ,l} > 0, l = 1, …, L, are used to model this dependency over a frequency band with center frequency ω_0. Within this band the ratio of the full energy to the dissipated energy remains nearly constant. This observation lets us determine the stress relaxation times τ_{σ,l} by a least-squares approach [13, 14]. Now the frequency-dependent phase velocities of P- and S-waves are given by

v_P^2 = (π/ρ)(1 + τ_P α)  and  v_S^2 = (μ/ρ)(1 + τ_S α)  with  α = Σ_{l=1}^L ω_0^2 τ_{σ,l}^2 / (1 + ω_0^2 τ_{σ,l}^2).     (19.7)

19.1.3 The Inverse Problem of Seismic Imaging in the Visco-Elastic Regime

Let T > 0 be the observation time and assume—for the time being—the parameter-to-solution map ! : (ρ, v_S, τ_S, v_P, τ_P) ↦ (σ|_[0,T], v|_[0,T]) to be well defined. Further, we model the measurement process by the linear measurement operator R which records (σ|_[0,T], v|_[0,T]) at finitely many times at the receiver positions, that is, R(σ|_[0,T], v|_[0,T]) ∈ ℝ^N



where N is the overall number of data points (time points × number of receivers). Given a seismogram w ∈ RN find five parameters (ρ, vS , τS , vP , τP ) such that R!(ρ, vS , τS , vP , τP ) = w.

(19.8)

Solving above problem is called full waveform inversion (FWI) of seismic imaging in the visco-elastic regime. In the remainder of this section we explore the properties of ! which are essential for applying a Newton-like solver of Sect. 18 to FWI: We will show that ! is well defined and Fréchet-differentiable. Moreover, we will characterize the adjoint operator of the Fréchetderivative by an adjoint wave equation akin to (19.6). Finally, we will see that (19.8) is locally ill-posed indeed.

19.1.4 Visco-Elastic Wave Equation (Transformed)

By the transformation, see [167],

(v, σ_0, σ_1, …, σ_L) := (v, σ + Σ_{l=1}^L τ_{σ,l} η_l, −τ_{σ,1} η_1, …, −τ_{σ,L} η_L),

we reformulate (19.6) equivalently into

∂_t v = (1/ρ) div Σ_{l=0}^L σ_l + (1/ρ) f   in ]0, T[ × D,     (19.9a)
∂_t σ_0 = C(μ, π) ε(v)   in ]0, T[ × D,     (19.9b)
∂_t σ_l = C(τ_S μ, τ_P π) ε(v) − (1/τ_{σ,l}) σ_l,   l = 1, …, L,   in ]0, T[ × D.     (19.9c)

We augment the above system by initial conditions

v(0) = v_0 and σ_l(0) = σ_{l,0}, l = 0, …, L.     (19.9d)



For a suitable function space X and suitable¹ w = (w, ψ_0, …, ψ_L) ∈ X we define operators A, B, and Q mapping into X by

A w = −( div Σ_{l=0}^L ψ_l , ε(w), …, ε(w) ),
B^{−1} w = ( (1/ρ) w, C(μ, π) ψ_0, C(τ_S μ, τ_P π) ψ_1, …, C(τ_S μ, τ_P π) ψ_L ),
Q w = ( 0, 0, (1/τ_{σ,1}) ψ_1, …, (1/τ_{σ,L}) ψ_L ).     (19.10)

19.2

ρ vP2 1 + τP α

and μ =

ρ vS2 . 1 + τS α

(19.11)

Abstract Framework

Motivated by our previous considerations we investigate an abstract evolution equation in a Hilbert space X: Find u : [0, T ] → X satisfying Bu (t) + Au(t) + BQu(t) = f (t),

t ∈ ]0, T [,

u(0) = u0 .

(19.12)

We assume the following throughout (unless otherwise specified): T > 0, u0 ∈ X, B ∈ L∗ (X) = {J ∈ L(X) : J ∗ = J } satisfies Bx, xX = x, BxX ≥ β x 2X for some β > 0 and for all x ∈ X, A : D(A) ⊂ X → X is a maximal monotone operator: Ax, xX ≥ 0 for all x ∈ D(A) and Id +A : D(A) → X is onto,   Q ∈ L(X), and f ∈ L1 [0, T ], X .

1 A rigorous mathematical formulation will be given in Chap. 20 below.

326

19 Inverse Problems Related to Abstract Evolution Equations

  Above, L1 [0, T ], X denotes the space of X-valued functions over [0, T ] which are measurable and Bochner-integrable. In the sequel we also use other spaces, namely     C [0, T ], D(A) = v : [0, T ] → D(A) : v continuous ,2     C1 [0, T ], X = v : [0, T ] → X : v continuously Fréchet-differentiable ,        W 1,1 [0, T ], X = v ∈ C [0, T ], X : v  ∈ L1 [0, T ], X .     Recursively, we then define Ck [0, T ], X and W k,1 [0, T ], X for k ∈ N, k ≥ 2.

19.2.1 Existence, Uniqueness, and Regularity Our first theorem is fundamental for our analysis of (19.12). Theorem 19.3 (Hille-Yosida) Let A be maximal monotone. If u0 ∈ D(A) then the evolution problem u (t) + Au(t) = 0,

t > 0,

u(0) = u0 ,

(19.13)

has a unique classical solution u ∈ C([0, ∞[, D(A)) ∩ C1 ([0, ∞[, X) satisfying u(t) X ≤ u0 X

and u (t) X ≤ Au0 X

for all t > 0.

Proof The proof consists of four steps: 1. Uniqueness follows from the monotonicity of A. Let u and w be two solutions of (19.13). Then, for d = u − v, 1 d d(t) 2X = d  (t), d(t)X = −Ad(t), d(t)X ≤ 0 2 dt and d(0) = 0. Thus d = 0. 2. For λ > 0 replace A by the bounded operator (Yosida approximation) Aλ = A(I + λA)−1 . 3. The unique solution of v  (t) + Aλ v(t) = 0, t > 0, v(0) = u0 , is given by uλ (t) =

∞ (−t)k k=0

k!

Akλ u0 = e−t Aλ u0 .

2 D(A) is equipped with the Graph norm: v 2 = v 2X + Av 2X . D(A)

19.2 Abstract Framework

327

4. Pass to the limit λ , 0, see, e.g., [17, Chap. 7.2] for all technical details.

 

The bound u(t) X ≤ u0 X implies that the linear operator S(t) : u0 "→ u(t) has a bounded extension S(t) ∈ L(X) with S(t) ≤ 1. Thus the family {S(t)}t ≥0 constitutes a C0 -semigroup of contractions (see, e.g., [17, 99, 138]): S(t + s) = S(t)S(s);

S(t) ≤ 1;

lim S(t)x = x, x ∈ X.

t →0

(19.14)

Further, d S(t)x = AS(t)x = S(t)Ax, dt

x ∈ D(A).

For later reference we present the following result. Lemma 19.4 A maximal monotone operator A is closed, has a dense domain D(A) and the adjoint A∗ is maximal monotone as well. Furthermore,   1 D(A) = w ∈ X : lim [w − S(h)w]/ h ∈ X and Av = lim [v − S(h)v], v ∈ D(A). h→0 h→0 h

Theorem 19.5 Let A be maximal monotone. 1,1 (a) (Classical solution) If u0 ∈ D(A) and f ∈ Wloc ([0, ∞[, X) then

u (t) + Au(t) = f (t),

t > 0,

u(0) = u0 ,

(19.15)

has the unique solution u ∈ C([0, ∞[, D(A)) ∩ C1 ([0, ∞[, X) given by the variationof-constant formula 

t

u(t) = S(t)u0 +

S(t − s) f (s) ds

0

and we have the stability estimate u (t) X ≤ Au0 − f (0) X + f  L1 ([0,t ],X)

for all t > 0.

(19.16)

328

19 Inverse Problems Related to Abstract Evolution Equations

(b) (Mild solution) If only u0 ∈ X and f ∈ L1loc ([0, ∞[, X) then formula (19.16) defines the mild solution satisfying u(t) X  u0 X + f L1 ([0,t ],X)

for all t > 0.

(19.17)

Proof The estimate (19.17) for the mild solution is a direct consequence of (19.16), (16.9), and (19.14). This settles the proof of part (b).  t t To prove part (a) we first show that v(t) := 0 S(t − s) f (s) ds = 0 S(s) f (t − s) ds is in D(A). By our assumptions, v  (t) = S(t)f (0) +



t

S(t − s) f  (s) ds

0

and  1 1 v(t + h) − v(t) = h h



t +h

S(t + h − s) f (s) ds +

t

) 1( S(h) − Id v(t). h

Thus, by ) 1( Id −S(h) v(t) = f (t) − v  (t) h→0 h lim

we get v(t) ∈ D(A) (Lemma 19.4) and v  (t) = f (t) − Av(t). Hence, if u(t) denotes the right hand side of (19.16) then it solves the evolution problem:   u (t) = −AS(t)u0 + f (t) − Av(t) = f (t) − A S(t)u0 + v(t) = f (t) − Au(t). The stability estimate follows from ( ) u (t) = S(t) −Au0 + f (0) +



t

S(t − s) f  (s) ds

0

 

and the proof is done.

Note that the mild solution does not satisfy the differential equation of (19.15). However, it is the unique solution of the weak formulation: Find u ∈ C([0, ∞[, X) such that u(0) = u0 and d u(t), vX + u(t), A∗ vX = f (t), vX dt

for a.a. t > 0,

∀v ∈ D(A∗ ).

19.2 Abstract Framework

329

This new notion of a solution is well defined since the test space D(A∗ ) is dense in X (Lemma 19.4). Lemma 19.6 Let u0 ∈ X and f ∈ L1loc ([0, ∞[, X). Then, the mild solution of (19.15) agrees with the weak solution.  

Proof See [99, Th. 2.18] for details.

Next we consider evolution equations with an operator in front of the time derivative. To this end let   B := J ∈ L(X) : J = J ∗ , ∃β > 0 : β v 2X ≤ J v, vX ∀v ∈ X .

Theorem 19.7 Let A be maximal monotone and B ∈ B. 1,1 (a) If u0 ∈ D(A) and f ∈ Wloc ([0, ∞[, X) then

Bu (t) + Au(t) = f (t),

t > 0,

u(0) = u0 ,

has a unique solution u ∈ C([0, ∞[, D(A)) ∩ C1 ([0, ∞[, X) satisfying u (t) X  Au0 − f (0) X + f  L1 ([0,t ],X)

for all t > 0.

The constant in the above estimate depends on B and B −1 . (b) If only u0 ∈ X and f ∈ L1loc ([0, ∞[, X) then the mild/weak solution u ∈ C([0, ∞[, X) is given by u(0) = u0 , d Bu(t), vX + u(t), A∗ vX = f (t), vX dt

for a.a. t > 0,

∀v ∈ D(A∗ ).

Proof We sketch the main steps and refer to [111, Lem. 3.1] for details. First we show that A + B is onto: Indeed, A and A∗ are maximal monotone (Lemma 19.4). Hence, A∗ + B is one-to-one yielding that R(A + B) is dense in X. Since β v X ≤ (A + B)v X for v ∈ D(A) we see that R(A + B) is closed. Thus, R(A + B) = X. Now, B −1 A is maximal monotone in (X, ·, ·B ) with the weighted inner product ·, ·B = B·, ·X . Finally we apply the Hille-Yosida theorem to u (t) + B −1 Au(t) = B −1 f (t), which proves part (a).

t > 0,

u(0) = u0 ,  

330

19 Inverse Problems Related to Abstract Evolution Equations

Remark 19.8 The inner product ·, ·B = B·, ·X induces a norm on X which is equivalent to the original norm · X . We are now ready to approach our original evolution equation (19.12). Theorem 19.9 Let A be maximal monotone, B ∈ B, Q ∈ L(X), and T > 0. (a) If u0 ∈ D(A) and f ∈ W 1,1 ([0, T ], X) then Bu (t) + Au(t) + BQu(t) = f (t),

u(0) = u0 ,

has a unique solution u ∈ C([0, T ], D(A)) ∩ C1 ([0, T ], X) satisfying     u (t) X  (B −1 A + Q)u0 − B −1 f (0) + f  L1 ([0,t ],X) X

(19.18)

for t ∈ [0, T ] where the constant depends on B , B −1 , Q , and T . (b) If only u0 ∈ X and f ∈ L1 ([0, T ], X) then the mild/weak solution u ∈ C([0, T ], X) is given by u(0) = u0 , d Bu(t), vX + u(t), A∗ vX + BQu(t), vX = f (t), vX dt for a.a. t ∈ ]0, T [ and all v ∈ D(A∗ ). Further, u(t) X  u0 X + f L1 ([0,t ],X)

(19.19)

for t ∈ [0, T ] where the constant depends on B , B −1 , Q , and T . Proof We look at the equivalent evolution equation u (t) + (B −1 A + Q)u(t) = B −1 f (t),

u(0) = u0 ,

As above, B −1 A with D(B −1 A) = D(A) generates a contraction semigroup on (X, ·, ·B ). Further, P = B −1 A + Q, D(P ) = D(A), is the infinitesimal generator of a C0 -semigroup {R(t)}t ≥0 and R(t) B ≤ exp( Q B t),

(19.20)

19.2 Abstract Framework

331

see, e.g., [138, Th. 3.1.1]. Thus, the above evolution equation has the unique classical/weak solution represented by 

t

u(t) = R(t)u0 +

R(t − s)B −1 f (s) ds

0

from which the stability estimate follows as before when using (19.20) and the equivalence   of the norms · B and · X . Higher regularity of u in time can be shown when the data u0 and f are smoother and compatible with the domain of A. Theorem 19.10 Let A be maximal monotone, B ∈ B, Q ∈ L(X), and T > 0. For k ∈ N let f ∈ W k,1 ([0, T ], X) and u0, := (−P ) u0 +

−1 (−P )j B −1 f (−1−j ) (0) ∈ D(A),

 = 0, . . . , k − 1,

j =0

where P = B −1 A + Q. Then, u ∈ Ck−1 ([0, T ], D(A)) ∩ Ck ([0, T ], X) satisfying u() (t) X  u0, X + f () L1 ([0,t ],X)

for all t ∈ [0, T ],

 = 0, . . . , k,

where u0,k := −P u0,k−1 + B −1 f (k−1) (0) ∈ X. Note that f (k−1) is continuous. Proof The u0, ’s satisfy the recursion u0,0 = u0 and u0, = −P u0,−1 + B −1 f (−1)(0),

 = 1, . . . , k.

By induction we are going to show 

t

u (t) = R(t)u0, + ()

R(t − s) B −1 f () (s) ds,

 = 0, . . . , k.

(19.21)

0

The case k = 0 is just (19.19). Assume above formula to be true for  = 0, . . . , k. Let f ∈ W k+1,1 and u0,k ∈ D(A). Then (19.21) holds for  = k; that is, 

t

u(k)(t) = R(t)u0,k + 0

R(s) B −1 f (k) (t − s) ds.

332

19 Inverse Problems Related to Abstract Evolution Equations

The additional differentiability of f yields that u(k) is differentiable and thus, 

u(k+1) (t) = −P R(t)u0,k + R(t)B −1 f (k) (0) + (

= R(t) −P u0,k + B

−1

f

(k)

)

t

R(s) B −1 f (k+1) (t − s) ds

0



t

(0) +

R(t − s) B −1 f (k+1) (s) ds

0

which is formula (19.21) for  = k + 1. From (19.21) the assertion follows immediately in the usual way.

 

19.2.2 Parameter-to-Solution Map We consider the nonlinear map F : D(F ) ⊂ L∗ (X) → C([0, T ], X),

B "→ u,

(19.22)

which maps B to the unique solution of (19.12) where A, Q, u0 , and f are fixed. Its domain is set to D(F ) := {B ∈ L∗ (X) : β− x 2X ≤ Bx, xX ≤ β+ x 2X } for given 0 < β− < β+ < ∞. We first specify sufficient conditions for the corresponding inverse problem to be locally ill-posed, see Definition 16.1. But we do not need the results of Chap. 16 which rely on compactness and weak-'-to-weak continuity of F . Theorem 19.11 Let u0 ∈ D(A) and f ∈ W 1,1 ([0, T ], X). The inverse problem F (·) = u , ∈ D(F ) satisfying F (B) , = u if there are is locally ill-posed at any B • •

an operator family {Ek } ⊂ L(X) with Ek ∼ 1, Ek → 0 point-wise, and an , r > 0 and a γ > 0 such that , + rEk ∈ D(F ), ∀r ∈ ]0,, r ]: B

. , + rEk )v, v . ∀v ∈ X : γ v 2X ≤ (B X

19.2 Abstract Framework

333

, + rEk . Since Bk − B , ∼ r it remains to show that Proof Set Bk := B , in C([0, T ], X). uk := F (Bk ) −−−→ u := F (B) k→∞

The difference dk := u − uk solves   Bk dk (t) + (A + Bk Q)dk (t) = rEk u (t) + Qu(t) ,

dk (0) = 0.

As the above right hand side is in L1 (]0, T [, X), dk is the mild solution given by  dk (t) = r

t

  R(t − s)B −1 Ek u (s) + Qu(s) ds.

0

Thus, 1 sup dk (t) Bk  r sup dk (t) X ≤ γ t ∈[0,T ] t ∈[0,T ]



T 0

  k→∞ Ek u (s) + Qu(s)  ds − −−→ 0. X

The convergence follows by the dominated convergence theorem as the integrand converges point-wise to zero and is uniformly bounded.   Remark 19.12 The local-illposedness result remains true if we furnish F with the larger image space L2 ([0, T ], X) which is more appropriate in the context of ill-posed problems. Our next goal is to show Fréchet differentiability of F . To this end we need continuity of F with respect to the stronger topology of C1 ([0, T ], X). Lemma 19.13 Let B ∈ int(D(F )). Further, let u0 ∈ D(A), f ∈ W 2,1 ([0, T ], X), and u0,1 = (B −1 A + Q)u0 − B −1 f (0) ∈ D(A). Then, F is Lipschitz continuous at B, that is,   / L(X) / C1 ([0,T ],X)  1 + f W 2,1 ([0,T ],X) B − B F (B) − F (B) / in a neighborhood of B. The involved constant depends on u0 , u0,1 , f (0), f  (0), for all B A, Q, T , β− , and β+ . / = B + δB for δB ∈ L∗ (X) sufficiently small and let u = F (B) and / Proof Set B u= / According to our assumptions we have that u ∈ C2 ([0, T ], X) (Theorem 19.10) F (B). u ∈ C1 ([0, T ], X). Especially, both are classical solutions of their respective evolution and / equations. Hence, d = / u − u solves /−1 A + Q)d(t) = B /−1 δB(u (t) + Qu(t)), d  (t) + (B

d(0) = 0.

334

19 Inverse Problems Related to Abstract Evolution Equations

Applying the stability estimate (19.17) for the mild solution yields d(t) X  δB u C1 ([0,T ],X) ,

t ∈ [0, T ].

(19.23)

By (19.18) for the strong solution we get   d  (t) X  δB u (0) + Qu0 X + u C2 ([0,T ],X) ,

t ∈ [0, T ].

(19.24)

Finally, Theorem 19.10 leads to u() (t) X  u0, X + f () L1 ([0,t ],X),

t ∈ [0, T ],

 = 0, 1, 2,

where u0,0 = u0 and u0,2 = (B −1 A + Q)u0,1 − f  (0). These three bounds imply the claimed Lipschitz continuity when plugged into (19.23) and (19.24).   To prove continuity we need a further ingredient, namely the invariance of D(A) under Q:   Q D(A) ⊂ D(A).

(19.25)

Theorem 19.14 Assume (19.25). Then, for u0 ∈ D(A) and f ∈ W 1,1 ([0, T ], X), the map F : D(F ) ⊂ L∗ (X) → C1 ([0, T ], X) is continuous at interior points. Proof Let {Bk }k ⊂ D(F ) converge to B ∈ int(D(F )) and set uk = F (Bk ), u = F (B). Note that uk , u ∈ C1 ([0, T ], X). We define a sequence of linear operators Jk : D(A) × W 1,1 ([0, T ], X) → C1 ([0, T ], X), Jk (u0 , f ) := uk −u. By the previous lemma we obtain  Jk (u0 , f ) → 0 in C1 ([0, T ], X) for (u0 , f ) ∈ D := (u0 , f ) ∈ D(A) × W 2,1 ([0, T ], X) :  B −1 Au0 − B −1 f (0) ∈ D(A) . Further, in view of (19.18) and (19.17), Jk (u0 , f ) C1 ([0,T ],X) ≤ uk C1 ([0,T ],X) + u C1 ([0,T ],X)  u0 D(A) + f W 1,1 ([0,T ],X) , i.e. { Jk }k is uniformly bounded. The continuity of F at B follows now at once from the Banach-Steinhaus theorem since the space D is dense in D(A) × W 1,1 ([0, T ], X) as we show in the remainder of the proof. j Let (u0 , f ) ∈ D(A) × W 1,1 ([0, T ], X). Choose sequences {u0 }j ⊂ D((B −1 A)2 ), j {vj }j ⊂ D(A), and {f/j }j ⊂ W 2,1 ([0, T ], X) such that u0 → u0 in3 D(A), vj → 3 This is possible since D((B −1 A)2 ) is dense in D(B −1 A) = D(A), see, e.g., [17, Lemma 7.2].

19.2 Abstract Framework

335

B −1 f (0) in X, and f/j → f in W 1,1 ([0, T ], X). Additionally, choose ϕ ∈ C∞ (R) with ϕ(0) = 1 and ϕ(t) = 0 for t ≥ t0 and some fixed  t0 ∈ ]0, T [. Define {fj }j ⊂ W 2,1 ([0, T ], X) by fj (t) = f/j (t) + ϕ(t) Bvj − f/j (0) for t ≥ 0. Then, B −1 fj (0) = j vj ∈ D(A) and fj → f in W 1,1 ([0, T ], X). Finally, B −1 Au0 ∈ D(B −1 A) = D(A), that j j is, (u0 , fj ) ∈ D and (u0 , fj ) → (u0 , f ) in D(A) × W 1,1 ([0, T ], X) which settles the proof.   Theorem 19.15 Let u0 ∈ D(A) and B ∈ int(D(F )). (a) Let f ∈ W 1,1 ([0, T ], X) and assume (19.25). Then, F as defined in (19.22) is Fréchet differentiable at B with F  (B)H = u where u is the mild/weak solution of   Bu (t) + (A + BQ)u(t) = −H u (t) + Qu(t) ,

u(0) = 0,

(19.26)

with u = F (B) being the classical solution of (19.12). (b) Let u0 = 0 and f ∈ W 2,1 ([0, T ], X) with f (0) = f  (0) = 0. Then, F as defined in (19.22) is Fréchet differentiable at B with F  (B)H = u where u is now the classical solution of (19.26). Moreover, F  is Lipschitz continuous: / − F  (B) L(L∗ (X),C([0,T ],X))  B / − B L(X) F  (B) / in a neighborhood of B. for all B Proof (a) As H (u + Qu) is in C([0, T ], X), the mild solution u of (19.26) exists. Let H ∈ L∗ (X) such that B + H ∈ D(F ) and let u+ = F (B + H ) be the classical solution of (B + H )u+ (t) + (A + (B + H )Q)u+ (t) = f (t),

u+ (0) = u0 ,

i.e.   u+ (t) + (B −1 A + Q)u+ (t) = B −1 f (t) − H (u+ (t) + Qu+ (t)) . Thus,  u+ (t) = R(t)u0 +

  R(t − s)B −1 f (s) − H (u+ (s) + Qu+ (s)) ds.

t 0

336

19 Inverse Problems Related to Abstract Evolution Equations

Since 

t

R(t − s)B −1 f (t)ds,

u(t) = R(t)u0 + 0

and  u(t) = −

t

R(t − s)B −1 H (u (s) + Qu(s))ds

0

we get, for d := u+ − u,  u+ (t) − u(t) − u(t) =

  R(t − s)B −1 H d  (s) + Qd(s) ds.

t 0

Straightforward estimates yield u+ − u − u C([0,T ],X)  u+ − u C1 ([0,T ],X) = F (B + H ) − F (B) C1 ([0,T ],X) . H The assertion follows from the previous theorem about continuity of F . (b) Under our assumptions, u is C2 ([0, T ], X) (Theorem 19.10) so that the source term of (19.26) is in W 1,1 ([0, T ], X). Hence, u ∈ C1 ([0, T ], X) by Theorem 19.9. Now we check the Lipschitz continuity. To this end let u = F  (B)H and v =  F (B + δB)H . By the regularity assumptions on f , v and u are the classical solutions of (B + δB)v  (t) + (A + (B + δB)Q)v(t) = −H (v  (t) + Qv(t)), t ∈ ]0, T [,

v(0) = 0,

Bu (t) + (A + BQ)u(t) = −H (u (t) + Qu(t)), t ∈ ]0, T [,

u(0) = 0,

where u solves (19.12) and v solves (19.12) with B replaced by B + δB. Hence, d = v − u mildly solves 

Bd (t) + (A + BQ)d(t) = −H (d  (t) + Qd(t)) − δB(v  (t) + Qv(t)), t ∈ ]0, T [,

d(0) = 0,

where d = v − u. By the continuous dependency of d on the right hand side, see (19.19), we get d C([0,T ],X)  H L(X) d C1 ([0,T ],X) + δB L(X) v C1 ([0,T ],X) .

(19.27)

Next we apply the regularity estimate (19.18) to d which solves Bd  (t) + (A + BQ)d(t) = −δB(v  (t) + Qv(t))

t ∈ ]0, T [,

d(0) = 0.

19.2 Abstract Framework

337

Thus, d C1 ([0,T ],X)  δB L(X) v C2 ([0,T ],X)  δB L(X) f W 2,1 ([0,T ],X) where the right bound comes from the regularity of v. In a similar way we get v C1 ([0,T ],X)  H L(X) v C2 ([0,T ],X)  H L(X) f W 2,1 ([0,T ],X) . Plugging these bounds into (19.27) we end up with sup

H ∈L∗ (X)

v − u C([0,T ],X)  f W 2,1 ([0,T ],X) δB L(X) H L(X)

which is the claimed Lipschitz continuity.

 

Remark 19.16 (a) The conclusion of Remark 19.12 applies as well: F is Fréchet differentiable into L2 ([0, T ], X). (b) Fréchet differentiability of F : D(F ) ⊂ L∗ (X) → C1 ([0, T ], X) at B requires stronger regularity assumptions on f and u0 . We do not want to go into more details here but mention that u0 = 0 and f ∈ W 3,1 ([0, T ], X) with f (0) = f  (0) = f  (0) = 0 are sufficient and yield F (B) ∈ C3 ([0, T ], X). In a general concrete setting B contains the parameters of a partial differential equation. So, the conditions of Theorem 19.10 for higher regularity encode smoothness assumptions on these parameters! For the Newton-like solvers of Sect. 18 the adjoint of the Fréchet derivative is needed. We consider F  (B) : L∗ (X) → L2 ([0, T ], X). Since L∗ (X) does not carry a canonical Hilbert space structure the adjoint (or dual) F  (B)∗ maps into the dual of L∗ (X): F  (B)∗ : L2 ([0, T ], X) → L∗ (X) . This adjoint is characterized as follows. Theorem 19.17 Under the notation and assumptions of Theorem 19.15 (a) we have that [F  (B)∗ g]H =



T0

. H (u (t) + Qu(t)), w(t) X dt

338

19 Inverse Problems Related to Abstract Evolution Equations

where w is the mild/weak solution of the adjoint equation Bw (t) − (A∗ + Q∗ B)w(t) = g(t),

t ∈ ]0, T [,

w(T ) = 0,

(19.28)

and u = F (B), i.e., the classical solution of (19.12). The adjoint equation (19.28) is furnished with an end condition which we turn into an /(t) = w(T − t) and / g (t) = g(T − t) initial condition by ‘reversing’ time: introducing w we get the initial value problem Bw / (t) + A∗ w /(t) + Q∗ B w /(t) = −/ g (t), t ∈ ]0, T [,

w /(0) = 0.

Since A∗ is maximal monotone as well (Lemma 19.4) the transformed equation inherits the structure of (19.12). For the visco-elastic wave equation we even have that A∗ = −A (see next section) so that the same subroutine can be used to solve both, the state and the adjoint state equation. Proof (of Theorem 19.17) Recall that F  (B)H = v mildly solves Bv  (t) + (A + BQ)v(t) = −H (u (t) + Qu(t)), t ∈ [0, T ],

v(0) = 0.

T First we note that F  (B)H, gL2 ([0,T ],X) = 0 v(t), g(t)X dt. To work with classical solutions we choose sequences {gk }k , {hk }k ⊂ W 1,1 ([0, T ], X) with gk → g and hk → H (u + Qu) in L2 ([0, T ], X). Further, let wk and vk be the classical solutions when replacing g by gk and H (u + Qu) by hk , respectively. Integrating by parts yields 

T0

. vk (t), gk (t) X dt = 

T-

=− 0



. vk (t), Bwk (t) − (A∗ + Q∗ B)wk (t) X dt

T0

. Bvk (t) + (A + BQ)vk (t), wk (t) X dt =



T0

. hk (t), wk (t) X dt

where the boundary terms vanish due to zero initial and end conditions. Taking the limit in k proves the theorem.

 

20 Applications

In this section we apply the abstract results to FWI (Sect. 19.1.3) and an inverse problem for Maxwell’s equation (electromagnetic scattering in conducting media). For an application to the visco-acoustic wave equation including a multitude of numerical experiments see [15].

20.1 Full Waveform Inversion in the Visco-Elastic Regime

We consider the visco-elastic wave equation in its transformed version (19.9). The underlying Hilbert space is

X = L^2(D, ℝ^3) × L^2(D, ℝ^{3×3}_sym)^{1+L}

furnished with the inner product

⟨(v, σ_0, …, σ_L), (w, ψ_0, …, ψ_L)⟩_X = ∫_D ( v·w + Σ_{l=0}^L σ_l : ψ_l ) dx

where the colon denotes the Frobenius inner product on ℝ^{3×3} and D is a bounded Lipschitz domain. We split the boundary of D into disjoint parts, ∂D = ∂D_D ∪̇ ∂D_N, and define

D(A) = { (w, ψ_0, …, ψ_L) ∈ H^1_D × H(div) : (Σ_{l=0}^L ψ_l) n = 0 on ∂D_N }





with H^1_D = { v ∈ H^1(D, ℝ^3) : v = 0 on ∂D_D } and

H(div) = { σ ∈ L^2(D, ℝ^{3×3}_sym)^{1+L} : div Σ_{l=0}^L σ_l ∈ L^2(D, ℝ^3) }.

Lemma 20.1 Under the settings of this subsection, the operator A : D(A) ⊂ X → X from (19.10) is maximal monotone. Proof The operator A is skew-symmetric and, as such, is monotone: -

. A(v, σ 0 , . . . , σ L ), (v, σ 0 , . . . , σ L ) X = 0.

Indeed, using the identities (σ = σ ) div (σ v) = div σ · v + σ : ∇v and ε(v) : σ = ∇v : σ , as well as the divergence theorem we find for (v, σ 0 , . . . , σ L ) and (w, ψ 0 , . . . , ψ l ) ∈ D(A) that -

. A(v, σ 0 , . . . , σ L ), (w, ψ 0 , . . . , ψ L ) X =−

 & div D



σl

· w + ε(v) :

0 12 3 =: σ

 div (σ w) − σ : ∇w + ε(v) : ψ dx

D





=

 0



∂D



ε(w) : σ − ε(v) : ψ dx +

D

0

 =− D



dx

0 12 3 =: ψ

  ε(v) : ψ − ε(w) : σ dx −

=−

' ψl

l=0

D



L



l=0



=−

L





∂D

div (σ ) · w + ε(v) : ψ) dx

(σ w) · n ds 12 3 =0

(ψv) · n ds 12 3 =0


 =




 div (ψv) − ψ : ∇v + ε(w) : σ dx



 ε(w) : σ + div (ψ) · v dx

D

 =

D

. = (v, σ 0 , . . . , σ L ), −A(w, ψ 0 , . . . , ψ L ) X .

Next we show that Id +A is onto: For (f, g0 , . . . , gL ) (v, σ 0 , . . . , σ L ) ∈ D(A) satisfying v − div

L



σ l = f,

σ l − ε(v) = gl ,



X we need to find

l = 0, . . . , L.

l=0

Assume—for the time being—that the σ l ’s are known. We multiply the equation on the left by a w ∈ HD1 , integrate over D and use the divergence theorem to get  &

v·w+

D

L



 ' σ l : ∇w dx = f · w dx. D

l=0

Now we sum up the L + 1 equations on the right and obtain 



 v · w + (L + 1)ε(v) : ε(w) dx =

D



L f·w− gl : ∇w dx D

∀w ∈ HD1 .

l=0

This is a standard variational problem (cf. displacement ansatz in elasticity) admitting a unique solution v ∈ HD1 by Korn’s inequality and the Lax-Milgram theorem, see, e.g., [16]. Finally, set σ l := gl + ε(v). Thus, σ l = σ l . Plugging gl = σ l − ε(v) into the displacement ansatz yields  f · w dx = D

 &

v·w+

L



D

' σ l : ∇w dx

l=0

3 Since (20.1) holds especially for w ∈ C∞ 0 (D, R ) we have

div

L

l=0

σ l = v − f in L2 .

∀w ∈ HD1 .

(20.1)



Hence, (σ 0 , . . . , σ L ) ∈ H ( div ). Plugging this equality back into (20.1) leads to  0=

div D

which is the weak form of

L

L



σ l w dx

∀w ∈ HD1

l=0

l=0 σ l n

= 0 on ∂DN .

 

Next we show that B ∈ L(X) from (19.10) is well defined with the required properties. To this end we first study1 C : D(C) ⊂ R2 → Aut(R3×3 sym ),

C(m, p)M = 2mM + (p − 2m) trace(M) Id,

(20.2)

with   D(C) := (m, p) ∈ R2 : m ≤ m ≤ m, p ≤ p ≤ p where 0 < m < m and 0 < p < p such that 3p > 4m.

(20.3)

For (m, p) ∈ D(C), / C(m, p) := C(m, p)−1 = C



 p−m 1 , . 4m m(3p − 4m)

Moreover, C(m, p)M : N = M : C(m, p)N. Lemma 20.2 For (p, m) ∈ D(C) we have that min{2m, 3p − 4m}M : M ≤ C(m, p)M : M ≤ max{2m, 3p − 4m} M : M. Proof The assertion follows readily from C(m, p)M : M = 2mM : M + (p − 2m) trace(M) Id : M 1

1 = 2m M : M − trace(M)2 + (3p − 4m) trace(M)2 3 3 1 Aut(R3×3 ) is the space of linear maps from R3×3 into itself (space of automorphisms). sym sym

(20.4)



and M:M≥

1 trace(M)2 . 3

(20.5)

The latter can be seen from trace(M)2 =

3



1 · Mi,i

2



3



i=1

12

3

i=1

3

M2i,i = 3 M2i,i

i=1

i=1

yielding M:M−

3 3

1 trace(M)2 ≥ M : M − M2i,i = M2i,j ≥ 0 3 i,j=1 i=1

i=j

 

which is (20.5). If     ρ(·) > 0 and μ(·), π(·) , τS (·)μ(·), τP (·)π(·) ∈ D(C) a.e. in D

(20.6)

then B given by w





⎞ ρw   ⎟ ⎜ ⎟ ⎜ / ⎟ ⎜ψ0 ⎟ ⎜ C ⎜ ⎟ ⎜  μ, π ψ 0 ⎟ ⎟ ⎜ ⎟ ⎜/ ψ ⎟ ⎜ C τS μ, τP π ψ 1 ⎟ B⎜ ⎟ ⎜ 1⎟ = ⎜ ⎟ ⎜ . ⎟ ⎜ .. ⎟ ⎜ .. ⎟ ⎜ . ⎠ ⎝ ⎠ ⎝   / ψL C τS μ, τP π ψ L ⎛

(20.7)

is in L∗ (X). As Q from (19.10) is just a multiplication operator by real numbers we conclude that Q is in L(X) and satisfies (19.25). Hence, all hypotheses are fulfilled for the visco-acoustic wave equation and all abstract results of the previous subsection apply. Accordingly the following theorem holds true. Theorem 20.3 Under (20.6) we have the following.   (a) A unique mild/weak solution of (19.9) exists in C [0, T ], X for   f ∈ L1 [0, T ], L2 (D, R3 ) ,

v0 ∈ L2 (D, R3 ),

σ l,0 ∈ L2 (D, R3×3 sym ), l = 0, . . . , L.



    (b) A unique classical solution of (19.9) exists in C [0, T ], D(A) ∩ C1 [0, T ], X for   f ∈ W 1,1 [0, T ], L2 (D, R3 ) , v0 ∈ HD1 , (σ 0,0 , . . . , σ L,0 ) ∈ H ( div ), σ l,0 n = 0 on ∂DN . l

    (c) A unique classical solution of (19.9) exists in C1 [0, T ], D(A) ∩ C2 [0, T ], X for   f ∈ W 2,1 [0, T ], L2 (D, R3 ) , (v0 , σ 0,0 , . . . , σ L,0) ∈ D(A)(as in part(b)), ( ) %−1 div σ l,0 + f(0) ∈ HD1 , div C ∈ L2 (D, R3 ), Cn = 0 on ∂DN , l

  where C = C (1 + LτS )μ, (1 + LτP )π ε(v0 ). Proof (a) and (b) are the concrete formulations of Theorem 19.9. (c) We have to check the conditions of Theorem 19.10 for k = 2.

 

Remark 20.4 For the visco-elastic wave equation it even holds that A + BQ : D(A) ⊂ X → X is maximal monotone (The proof of Lemma 20.1 is adapted straightforwardly to establish surjectivity of A + BQ + Id, see also [167, Prop. 70]). As a consequence the solution of (19.9) exists for all times: meaning T = ∞ is allowed in the above theorem if f is locally in the respective space, see Theorem 19.7.

20.1.1 Full Waveform Forward Operator FWI entails the reconstruction of the five parameters (ρ, vS , τS , vP , τP ) from wave field measurements. Therefore we will define a parameter-to-solution map ! which takes these parameters as arguments with the physically meaningful domain of definition  D(!) = (ρ, vS , τS , vP , τP ) ∈ L∞ (D)5 : ρmin ≤ ρ(·) ≤ ρmax , vP,min ≤ vP (·) ≤ vP,max ,  vS,min ≤ vS (·) ≤ vS,max , τP,min ≤ τP (·) ≤ τP,max , τS,min ≤ τS (·) ≤ τS,max a.e. in D where 0 < ρmin < ρmax < ∞, etc. are suitable positive bounds. In view of (19.7) we set μmin :=

ρmin vS2,min 1 + τS,max α

and μmax :=

ρmax vS2,max 1 + τS,min α



which are lower and upper bounds for μ. We define the bounds πmin and πmax for π in the same way replacing S by P. Next we define p, p, m, and m such that (μ, π), (τS μ, τP π) as functions of (ρ, vP , vS , τP , τS ) ∈ D(!) are in D(C). Indeed, p := πmin min{1, τP,min }

and p := πmax max{1, τP,max }

with m and m set correspondingly will do the job. The condition 3p > 4m, see (20.3), recasts into vP2,min 4 ρmax 1 + τP,max α max{1, τS,max } < 2 3 ρmin 1 + τS,min α min{1, τP,min } vS,max which is in agreement with the physical evidence that pressure waves propagate considerably faster than shear waves. For f ∈ W 1,1 ([0, T ], L2 (D, R3 )) and u0 = (v0 , σ 0,0 , . . . , σ L,0 ) ∈ D(A) the full waveform forward operator ! : D(!) ⊂ L∞ (D)5 → L2 ([0, T ], X),

(ρ, vS , τS , vP , τP ) "→ (v, σ 0 , . . . , σ L ),

is well defined where (v, σ 0 , . . . , σ L ) is the unique classical solution of (19.9) with initial value u0 . We factorize ! = F ◦ V with F from (19.22) and V : D(!) ⊂ L∞ (D)5 → L∗ (X),

(ρ, vS , τS , vP , τP ) "→ B,

where B is defined in (20.7) by way of (19.11). Remark 20.5 Note that V maps D(!) into D(F ) by an appropriate choice of β− and β+ in terms of ρmin , ρmax , p, p, m, and m. FWI in the visco-elastic regime is locally ill-posed. We do not rely here on Theorem 19.11 but give a direct proof. Theorem 20.6 The inverse seismic problem !(·) = (v, σ 0 , . . . , σ L ) is locally ill-posed at any interior point p = (ρ, vS , τS , vP , τP ) of D(!). Proof For a point ξ ∈ D define balls Kn = {y ∈ R3 : |y − ξ | ≤ δ/n}, n ∈ N, where δ > 0 is so small that K1 ⊂ D. Let χn be the characteristic function of Kn . Further, for any r > 0 such that pn := p+r(χn , χn , χn , χn , χn ) ∈ D(!) we have that pn −p L∞ (D)5 = r and pn does not converge to p. However, limn→∞ !(pn ) − !(p) L2 ([0,T ],X) = 0 as we demonstrate now.



Let un = !(pn ) and u = !(p). Then, dn = un − u satisfies   V (pn )dn + Adn + V (pn )Qdn = V (p) − V (pn ) (u + Qu),

dn (0) = 0.

By Theorem 19.9 (b), see (19.19), we obtain    dn L2 ([0,T ],X)   V (p) − V (pn ) (u + Qu)L1 ([0,T ],X)  where the constant is independent of n, see Remark 20.5. Next, limn→∞ V (p) − V (pn ) v X = 0 for any v ∈ X by the dominated convergence theorem since pn → p point-wise a.e. in D. Because V (pn ) X  1 uniformly in n a further application of the dominated convergence theorem with respect to the time domain yields 

T 0

   n→∞  V (p) − V (pn ) u (t) + Qu(t)  dt − −−→ 0 X  

and finishes the proof.

20.1.2 Differentiability and Adjoint The chain rule determines the Fréchet derivative of ! by the derivatives of F and V . The / whose structure is closely related to the derivative of the latter needs the derivative of C matrix inverse from Example 16.14. Lemma 20.7 Let C : D(C) ⊂ R2 → Aut(R3×3 sym ) be the mapping defined in (20.2). Its Fréchet derivative at (m, p) ∈ int(D(C)) is given by 7 8 m , / / C (m, p) = −C(m, p) ◦ C(, m, p ,) ◦ C(m, p) p , /

(20.8)

for any (, m, p ,) ∈ R2 . / / p) ◦ C(m, p) = C(m, p) ◦ C(m, p) = Proof Recall that C is a linear operator with C(m, , and p , sufficiently small we have Id. For m / +m / / / C(m ,, p + p ,) − C(m, p) + C(m, p) ◦ C(, m, p ,) ◦ C(m, p)   / +m / = C(m ,, p + p ,) ◦ C(m, p) − C(m + m ,, p + p ,) ◦ C(m, p) / / + C(m, p) ◦ C(, m, p ,) ◦ C(m, p)   / / +m / = C(m, p) − C(m ,, p + p ,) ◦ C(, m, p ,) ◦ C(m, p).

20.1 Full Waveform Inversion in the Visco-Elastic Regime

347

Thus,   / / / / C(m + m ,, p + p ,) − C(m, p) + C(m, p) ◦ C(, m, p ,) ◦ C(m, p)Aut     / +m C(m, p)Aut/ C(m, p) − C(m ,, p + p ,)Aut max{|,  / m|, |, p |}   = o max{|, m|, |, p |} as max{|, m|, |, p|} → 0  

which proves the assertion.

Let p = (ρ, vS , τS , vP , τP ) ∈ int(D(!)) and , p = (, ρ ,, vS ,, τS ,, vP ,, τP ) ∈ L∞ (D)5 . Then, ∗ L (X) is given by

V  (p), p∈



ρ ,w



7 8 ⎟ ⎜ ⎟ ⎜ / μ ρ ,/ ⎟ ⎜  / − ρ C(μ, π)ψ 0 + ρ C (μ, π) ψ0 ⎟ ⎛ ⎞ ⎜ ⎟ ⎜ / π w ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ 8 7 ⎟ ⎜ψ0 ⎟ ⎜ , μ ⎟ ⎜ ⎟ ⎜ ρ, /  / (τS μ, τP π) V (p), p ⎜ . ⎟ = ⎜ − C(τS μ, τP π)ψ 1 + ρ C ψ1 ⎟ ⎟ ⎜ . ⎟ ⎜ ρ , π ⎟ ⎝ . ⎠ ⎜ ⎟ ⎜ .. ⎟ ⎜ ψL . ⎟ ⎜ ⎜ 7 8 ⎟ ⎟ ⎜ , μ ⎠ ⎝ ρ, / / (τS μ, τP π) − ρ C(τS μ, τP π)ψ L + ρ C ψL , π

(20.9)

where / μ, / π, , μ, and , π are abbreviations for the occurring inner derivatives: / μ=

2vS α vS2 , vS − , τS , 1 + τS α (1 + τS α)2

/ π=

2vP α vP2 , vP − , τP , 1 + τP α (1 + τP α)2

(20.10)

, μ=

2τS vS vS2 , vS + , τS , 1 + τS α (1 + τS α)2

, π=

2τP vP vP2 , vP + , τP . 1 + τP α (1 + τP α)2

(20.11)

With these preparations we are ready to characterize the Fréchet derivative of the full waveform forward operator. Theorem 20.8 Under the assumptions made in this section the full waveform forward operator ! is Fréchet differentiable at any interior point p = (ρ, vS , τS , vP , τP ) of D(!): For , p = (, ρ ,, vS ,, τS ,, vP ,, τP ) ∈ L∞ (D)5 we have ! (p), p = u where u = (v, σ 0 , . . . , σ L ) ∈ C([0, T ], X) with u(0) = 0 is the mild solution of ρ ∂t v = div

L

l=0

σl − ρ ,∂t v,

(20.12a)

348

20 Applications

∂t σ 0 = C(μ, π)ε(v) +

ρ , ρ

C(μ, π) + ρ C(/ μ, / π ) ε(v),

(20.12b)

∂t σ l = C(τS μ, τP π)ε(v) −

1 τσ ,l

σl +

(20.12c)

ρ , ρ

C(τS μ, τP π) + ρ C(, μ, , π ) ε(v),

l = 1, . . . , L,

where (v, σ 0 , . . . , σ L ) is the classical solution of (19.9). Proof The chain rule ! (p), p = F  (V (p))V  (p), p together with part (a) of Theorem 19.15 yields ⎛

ρ ∂t v





 L

div ⎜ ⎟ l=0 σ l / ⎜ C(μ, π)∂t σ 0 ⎟ ⎜ ⎟ ⎜ ε(v) ⎜/ ⎟ ⎜ ⎜ C(τS μ, τP π)∂t σ 1 ⎟ = ⎜ ⎜ .. ⎜ ⎟ ⎜ ⎜ ⎟ ⎝ . .. ⎜ ⎟ . ⎝ ⎠ ε(v) / S μ, τP π)∂t σ L C(τ

⎞





0

⎜ ⎟ ⎟ 0 ⎟ ⎜ ⎜ ⎟ ⎜ 1   ⎟ ⎟ ⎟ ⎜ / ⎟ − ⎜ τσ ,1 C τS μ, τP π σ 1 ⎟ ⎟ ⎟ ⎜ ⎟ .. ⎠ ⎜ ⎟ . ⎝ ⎠   1 / τσ ,L C τS μ, τP π σ L ⎡⎛

∂t v





0

⎞⎤

⎢⎜ ⎟ ⎜ ⎟⎥ ⎢⎜ ∂t σ 0 ⎟ ⎜ 0 ⎟⎥ ⎢⎜ ⎟ ⎜ ⎟⎥ ⎢⎜ ⎟ ⎜ 1 ⎟⎥  σ ⎜ ⎜ ⎢ ⎟ ⎟⎥ . ∂ σ 1 t 1 − V (p), p ⎢⎜ ⎟ + ⎜ τσ ,1 ⎟⎥ ⎢⎜ . ⎟ ⎜ . ⎟⎥ ⎢⎜ .. ⎟ ⎜ .. ⎟⎥ ⎣⎝ ⎠ ⎝ ⎠⎦ 1 ∂t σ L τσ ,L σ L This system can be rewritten as (20.12) using (19.9b), (19.9c), (20.9), and (20.8).

 

Now we are able to apply Theorem 19.17 to obtain the following representation of the dual of the Fréchet derivative ! (p)∗ . Theorem 20.9 The assumptions are as in Theorem 20.8. Then, the adjoint ! (p)∗ ∈     L L2 ([0, T ], X), (L∞ (D)5 ) at p = (ρ, vS , τS , vP , τP ) ∈ int D(!) is given by  T  ⎞ 1 0 ∂t v · w − ρ ε(v) : (ϕ 0 + ) dt ⎜ T   ⎟ ⎜2 ⎟ v ⎜ vS 0 − ε(v) : (ϕ 0 + ) + π trace( ) div v dt ⎟ ⎜ ⎟ T   ⎜ 1 τ + π trace( τ ) div v dt ⎟ ∈ L1 (D)5 ! (p)∗ g = ⎜ 1+ατ ε(v) :

⎟ S,2 S,1 0 S ⎜ ⎟  ⎜ ⎟ 2π T v ) div v dt ⎜ ⎟ trace(

− 0 v P ⎝ ⎠  T π τ ) div v dt trace(

P 1+ατP 0 ⎛

(20.13)

20.1 Full Waveform Inversion in the Visco-Elastic Regime

349

  1+L where v is the first for g = (g−1 , g0 , . . . , gL ) ∈ L2 [0, T ], L2 (D, R3 )×L2 (D, R3×3 sym )  component of the solution of (19.9), = L l=1 ϕ l , and

v =

1 τP ϕ0 +

, 3π − 4μ 3τP π − 4τS μ

τS,1 = −

τP =

α τP ϕ0 +

, 3π − 4μ τS (3τP π − 4τS μ)

τS,2 = α ϕ 0 −

1

, τS

α 1 ϕ0 −

. 3π − 4μ 3τP π − 4τS μ

Further, w = (w, ϕ 0 , . . . , ϕ L ) ∈ C([0, T ], X) the mild/weak solution of

1 1 div ϕ l + g−1 , ρ ρ l=0   ∂t ϕ 0 = C(μ, π) ε(w) + g0 , L

∂t w =

  1 ∂t ϕ l = C(τS μ, τP π) ε(w) + gl + ϕ, τσ ,l l

(20.14a) (20.14b) l = 1, . . . , L,

(20.14c)

with w(T ) = 0. Remark 20.10 Please note that each component of the right hand side of (20.13) is still a function depending on the spatial variable and is in L1 (D) as product of two L2 -functions. Hence, ! (p)∗ indeed maps into L1 (D)5 which is a subspace of (L∞ (D)5 ) . Proof (of Theorem 20.9) By A∗ = −A (skew-symmetry), Q∗ = Q, and QB = BQ we see that (20.14) is the visco-elastic formulation of (19.28). Further, by Theorem 19.17, -  ∗ . . ! (p) g,, p (L∞ (D)5 ) ×L∞ (D)5 = F  (V (p))∗ g, V  (p), p L(X) ×L(X) 

T-

= 0

  . V  (p), p u (t) + Qu(t) , w(t) X dt

(20.15)

where u = (v, σ 0 , . . . , σ L ) is the classical solution of (19.9). We will now evaluate the above inner product suppressing its dependence on time. Using (20.9) and (20.8) we find p = (, ρ ,, vS ,, τS ,, vP ,, τP ) that for , -

  V (p), p u + Qu , wX = 



  ρ ,∂t v · w + S0 + S1 + · · · + SL dx D

(20.16)

350

20 Applications

with ' & ρ ,/ / / S0 = − C(μ, π)C(/ μ, / π )C(μ, π)∂t σ 0 : ϕ 0 π)∂t σ 0 − ρ C(μ, ρ and, for l = 1, . . . , L, & ρ

,/ σl Sl = − C(τ S μ, τP π) ∂t σ l + ρ τσ ,l

'

/ S μ, τP π)C(, / S μ, τP π) ∂t σ l + σ l : ϕl . − ρ C(τ μ, , π )C(τ τσ ,l

In view of (19.9b) we may write ' & ρ , / π)C(/ μ, / π )ε(v) : ϕ 0 S0 = − ε(v) − ρ C(μ, ρ ρ , / = − ε(v) : ϕ 0 − ρ C(/ μ, / π )ε(v) : C(μ, π)ϕ 0 ρ and, similarly by (19.9c), ρ , / S μ, τP π)ϕ l , Sl = − ε(v) : ϕ l − ρ C(, μ, , π )ε(v) : C(τ ρ

l = 1, . . . , L.

Next, via (20.4), we obtain / C(/ μ, / π )ε(v) : C(μ, π)ϕ 0   1 2μ − π ϕ0 + trace(ϕ 0 )I = 2/ μ ε(v) + (/ π − 2/ μ) div v I : 2μ 2μ(3π − 4μ)

1 π div v trace(ϕ 0 ) =/ μ ε(v) : ϕ 0 − μ μ(3π − 4μ) +

/ π div v trace(ϕ 0 ) 3π − 4μ

yielding

1 ρ , π div v trace(ϕ 0 ) S0 = − ε(v) : ϕ 0 + ρ/ μ − ε(v) : ϕ 0 + ρ μ μ(3π − 4μ) −

ρ/ π div v trace(ϕ 0 ). 3π − 4μ

20.1 Full Waveform Inversion in the Visco-Elastic Regime

351

Analogously,

ρ , 1 τP π ε(v) : ϕ l + div v trace(ϕ l ) Sl = − ε(v) : ϕ l + ρ, μ − ρ τS μ τS μ(3τP π − 4τS μ) −

ρ, π div v trace(ϕ l ). 3τP π − 4τS μ

Now we group the terms in the sum of (20.16) with respect to the components of , p. To this μ, ρ/ π , ρ, μ, and ρ, π by their expressions from (20.10) and (20.11) which end we replace ρ/ we slightly rewrite introducing μ and π. Thus, ρ/ μ=

2μ αμ , τS , , vS − vS 1 + τS α

ρ/ π=

2π απ , τP , , vP − vP 1 + τP α

ρ, μ=

2τS μ μ , τS , , vS + vS 1 + τS α

ρ, π=

2τP π π , τP . , vP + vP 1 + τP α

A few algebraic rearrangements lead to -

  V (p), p u + Qu , uX = 

 &

1 ρ , ∂t v · w − ε(v) : (ϕ 0 + ) ρ D

2

− ε(v) : (ϕ 0 + ) + π trace( v ) div v vS , τS

+ ε(v) : τS,2 + π trace( τS,1 ) div v 1 + ατS

+, vS

−, vP

' 2π π trace( v ) div v + , τP trace( τP ) div v dx vP 1 + ατP

from which (20.13) follows.

 

The expression for the Fréchet derivative and its adjoints cannot directly be applied to the visco-elastic wave equation in two spatial dimensions. There are two differences to the 3D case: 1 2m − p / trace(I) = 2 and C(m, p)M = C −1 (m, p)M = M+ trace(M)I. 2m 4m(p − m) Taking into account these adjustments we get the following 2D version of Theorem 20.9.

352

20 Applications

Theorem 20.11 Re-defining the three quantities

v =

1 τP ϕ +

, 2(π − μ) 0 2(τP π − τS μ)

τS,1 = −

τP =

α τP ϕ0 +

, 2(π − μ) 2 τS (τP π − τS μ)

α 1 ϕ0 −

, 2(π − μ) 2(τP π − τS μ)

the statement of Theorem 20.9 can be copied without any further changes. Proof In 2D we have that / C(/ μ, / π )ε(v) : C(μ, π)ϕ 0   1 2μ − π ϕ0 + trace(ϕ 0 )I = 2/ μ ε(v) + (/ π − 2/ μ) div v I : 2μ 4μ(π − μ)

1 π / π div v trace(ϕ 0 ) + div v trace(ϕ 0 ) =/ μ ε(v) : ϕ 0 − μ 2μ(π − μ) 2(π − μ)  

and this is the only difference compared to the 3D proof.

20.2

Maxwell’s Equation: Inverse Electromagnetic Scattering

Let E, H : [0, ∞) × D → R3 be the electric and magnetic fields in a bounded Lipschitz domain D ⊂ R3 . We consider the following Maxwell system ε ∂t E − curl H = −Je − σ E

in [0, ∞) × D,

(20.17a)

μ ∂t H + curl E = Jm

in [0, ∞) × D,

(20.17b)

with boundary condition n×E= 0

in [0, ∞) × ∂D

(20.17c)

and initial conditions E(0, ·) = E0 (·)

and H(0, ·) = H0 (·)

in D.

(20.17d)

20.2 Maxwell’s Equation: Inverse Electromagnetic Scattering

353

The scalar functions ε, μ, σ : D → R are the (electric) permittivity, the (magnetic) permeability, and the conductivity which characterize an isotropic electromagnetic medium. Further, Je , Jm : [0, ∞) × D → R3 are the current and magnetic densities. Remark 20.12 In case of a dielectric medium, that is, σ = 0, the conservation equations ∂t div(ε E) + div Je = 0 and ∂t div(μ H) − div Jm = 0 follow directly from (20.17a) and (20.17b), respectively. If div Je = 0 then div(ε E) = 0 provided the initial field satisfies div (εE0 ) = 0. The analog result holds for the magnetic field as well. The additional boundary condition n·H = 0 on ∂D (in the physically relevant case Jm = 0) originates from ∂t (μ n · H) = −n · curl E = − Div(n × E)

on ∂D

and the boundary condition (20.17c). Here, Div denotes the surface divergence, see, e.g., [109, Sec. A.5]. If we define     σ Id − curl ε Id 0 A= , B= , curl 0 0 μ Id

    E −Je , and u = Q = 0, f = Jm H

then the system (20.17a), (20.17b) is of the abstract form (19.12) with u0 = (E0 , H0 ) . As Hilbert space and domain of A we choose X = L2 (D, R3 ) × L2 (D, R3 )

and D(A) = H0 (curl, D) × H (curl, D),

(20.18)

respectively. Here, H (curl, D) is the space of all vector fields which do have a weak curl in L2 , see, e.g., [109, Sec. 4.1.2]. By H0 (curl, D) we denote the closure of the compactly supported C∞ vector fields in H (curl, D). Note that fields w ∈ H0 (curl, D) do not necessarily vanish on ∂D, however, their tangential components do: w × n = 0 on ∂D. We assume that 0 < c ≤ ε, μ ∈ L∞ (D) and 0 ≤ σ ∈ L∞ (D) a.e.

(20.19)

Lemma 20.13 Under (20.19) we have B ∈ L∗ (X) and A : D(A) ⊂ X → X is maximal monotone. Proof The first assertion is clear. We prove now the second.

354

20 Applications

For (E, H) ∈ D(A) we have ?    @     E E (σ E − curl H) · E + curl E · H dx = A , = σ |E|2 dx ≥ 0 H H D D X

by Green’s theorem, see, e.g., [128, Remark 3.28] (no boundary term appears due to E ∈ H0 (curl, D)). It remains to show surjectivity of A + Id. For any Je , Jm ∈ L2 (D, R3 ) we have to find E ∈ H0 (curl, D) and H ∈ H (curl, D) with σ E − curl H + E = Je

and

curl E + H = Jm .

(20.20)

We multiply the first equation by ψ ∈ H0 (curl, D) and  the second by curl ψ, add the equations and integrate over D. Taking into account that D (ψ · curl H − H · curl ψ)dx = 0 we arrive at     Jm · curl ψ + Je · ψ dx. (curl E · curl ψ + (σ + 1)E · ψ) dx = D

D

The theorem of Lax-Milgram in H0 (curl, D) implies existence of a unique solution E ∈ H0 (curl, D). Finally, we define H = Jm − curl E. Then the equation on the right of (20.20) is satisfied and the variational equation becomes 

  (σ + 1)E · ψ − H · curl ψ dx =



D

Je · ψ dx D

 

which is the weak form of the equation on the left of (20.20). Here we are in a situation considered in Theorem 19.7. Theorem 20.14 Under (20.19) we have the following.   (a) A unique mild/weak solution E, H ∈ C [0, ∞[, L2 (D, R3 ) of (20.17) exists for   Je , Jm ∈ L1loc [0, ∞[, L2 (D, R3 ) ,

E0 , H0 ∈ L2 (D, R3 ).

    1 [0, ∞[, L2 (D, R3 ) ∩ C [0, ∞[, H (curl, D) , (b) A unique classical solution E ∈ C 0     H ∈ C1 [0, ∞[, L2 (D, R3 ) ∩ C [0, ∞[, H (curl, D) of (20.17) exists for  1,1  Je , Jm ∈ Wloc [0, ∞[, L2 (D, R3 ) ,

E0 ∈ H0 (curl, D),

H0 ∈ H (curl, D).

20.2 Maxwell’s Equation: Inverse Electromagnetic Scattering

355

    2 [0, ∞[, L2 (D, R3 ) ∩ C1 [0, ∞[, H (curl, D) , (c) A unique classical solution E ∈ C 0     H ∈ C2 [0, ∞[, L2 (D, R3 ) ∩ C1 [0, ∞[, H (curl, D) of (20.17) exists for  2,1  Je , Jm ∈ Wloc [0, ∞[, L2 (D, R3 ) ,

E0 ∈ H0 (curl, D),

( ) ε−1 curl H0 − σ E0 − Je (0) ∈ H0 (curl, D),

H0 ∈ H (curl, D),

( ) μ−1 Jm (0) − curl E0 ∈ H (curl, D).

Proof Parts (a) and (b) directly follow from Theorem 19.7 whereas (c) needs the   application of Theorem 19.10 which holds true for (20.17) even for T = ∞.

20.2.1 Inverse Electromagnetic Scattering We consider the inverse problem of determining the permittitivity and permeability of the medium D from measurements of the electric and magnetic fields at parts of D, the boundary for instance. The involved linear measurement operator is of no interest in what follows and will be neglected therefore, cf. Sect. 19.1.3.

20.2.2 The Electromagnetic Forward Map We define the corresponding electromagnetic forward map by   ! : D(!) ⊂ L∞ (D)2 → L2 [0, T ], L2 (D, R3 )2 ,

(ε, μ) "→ (E, H) ,

where T > 0 is the observation period and (E, H) is the (classical) solution of the   Maxwell system (20.17) under the assumptions that Je , Jm ∈ W 1,1 [0, T ], L2 (D, R3 ) , E0 ∈ H0 (curl, D), H0 ∈ H (curl, D), and that σ satisfies (20.19). Further,   D(!) = (ε, μ) ∈ L∞ (D)2 : c− ≤ ε, μ ≤ c+ a.e. with suitable constants 0 < c− < c+ < ∞. Again it will be convenient to factorize ! = F ◦ V where F is the mapping from (19.22) adapted to the Maxwell system and   V : D(!) ⊂ L∞ (D)2 → L∗ L2 (D, R3 )2 ,



 ε Id 0 (ε, μ) "→ . 0 μ Id

  Here, V D(!) ⊂ D(F ) when setting β± = c± (under the canonical Hilbert norm on L2 (D, R3 )2 ).

356

20 Applications

Theorem 20.15 The inverse electromagnetic scattering problem !(·, ·) = (E, H) is locally ill-posed at any interior point (ε, μ) of D(!). Proof For a point ξ ∈ D define balls Kk = {y ∈ R3 : |y − ξ | ≤ δ/k}, k ∈ N, where δ > 0 is so small that K1 ⊂ D. Let χk be the characteristic function of Kk . Define  χk Id 0 = V (χk , χk ). Ek := 0 χk Id 

Then, Ek is monotone (non-negative) with Ek L(L2 (D,R3 )2 ) = 1, Ek → 0 point-wise as k → ∞ by the dominated convergence theorem. Further, for any r > 0 such that (εk , μk ) := (ε + rχk , μ + rχk ) ∈ D(!), we have that  k→∞    !(εk , μk ) = F V (ε, μ) + rEk −−−→ F V (ε, μ) = !(ε, μ) where the convergence can been shown exactly as in the proof of Theorem 19.11. But (εk , μk ) − (ε, μ) L∞ (D)2 = r for all k.  

20.2.3 Differentiability and Adjoint Since V is a linear operator we immediately get 7 8 , ε V (ε, μ) = V (, ε, , μ) μ , 

  for all (ε, μ) ∈ int D(!) .

(20.21)

Theorem 20.16 Under the assumptions made in this section the electromagnetic forward at any interior point (ε, μ) of D(!): For (, ε, , μ) ∈ operator ! is Fréchet differentiable & '   , ε 3 ∞ 2  2 L (D) we have ! (ε, μ) , = (E, H) where E, H ∈ C [0, T ], L (D, R ) is the mild μ solution of 2ε ∂t E + σ E − curl H = −, ε ∂t E, μ ∂t H + curl E = −, μ ∂t H,

E(0) = 0,

(20.22a)

H(0) = 0,

(20.22b)

with (E, H) = !(ε, μ) being the classical solution of (20.17) in [0, T ] × D.

20.2 Maxwell’s Equation: Inverse Electromagnetic Scattering

357

& ' , ε = F  (V (ε, μ))V (, Proof The chain rule and (20.21) yield ! (ε, μ) , ε, , μ). Now, we μ apply part (a) of Theorem 19.15: the abstract system (19.26) formulated in the present case reads (recall that Q = 0) 

        E ε Id 0 σ Id − curl , ε Id 0 ∂t E ∂t E + =− , H ∂t H ∂t H 0 μ Id curl 0 0 , μ Id



   E(0) 0 = , H(0) 0  

which is the system we stated.

Remark 20.17 Under the additional regularity assumption of part (c) of Theorem 20.14 the mild solution of (20.22) is indeed a classical solution. Theorem 20.18 The assumptions are as in Theorem 20.16. Then, the adjoint ! (ε, μ)∗ ∈       L L2 [0, T ], L2 (D, R3 )2 , (L∞ (D)2 ) at (ε, μ) ∈ int D(!) is given by  ! (ε, μ)∗

gE gH



⎛

 =

T ∂t E(t, ·) · E(t, ·) dt ⎠ ⎝ 0 T ∂ H(t, ·) · H(t, ·) dt t 0

∈ L1 (D)2

(20.23)

  where (E, H) ∈ C [0, T ], L2 (D, R3 )2 is the mild/weak solution of ε ∂t E − σ E − curl H = gE ,

E(T ) = 0,

(20.24a)

μ ∂t H + curl E = gH ,

H(T ) = 0,

(20.24b)

and (E, H) = !(ε, μ) is the classical solution of (20.17) in [0, T ] × D.  Id curl  Proof By A∗ = −σcurl and Q = 0 we see that (20.24) is the formulation of (19.28) 0 in the Maxwell setting. Further, by Theorem 19.17, A

 

! (ε, μ)



 B gE ,(, ε, , μ) ∞ 2  ∞ 2 (L (D) ) ×L (D) gH . = F  (V (ε, μ))∗ g, V (, ε, , μ) L(L2 (D,R3 )2 ) ×L(L2 (D,R3 )2 ) 

TA

= 0

=



   ∂t E E B V (, ε, , μ)) dt , ∂t H H L2 (D,R3 )2

  T  T μ ∂t H · H dt dx , ε ∂t E · E dt + , D

0

0

This equation proves the stated representation (20.23) of ! (ε, μ)∗ .

 

358

20 Applications

Remark 20.19 Remark 20.10 applies to (20.23) accordingly. Remark 20.20 Differentiability of the electric and magnetic fields with respect to the conductivity σ cannot directly be achieved by our abstract theory. A slightly different evolution equation has to be considered, namely, u (t) + B −1 (A + Q)u(t) = B −1 f (t),

t ∈]0, T [,

u(0) = u0 ,

(20.25)

where B, A, and Q are as in the abstract theory, see beginning of Sect. 19.2. Setting 

         0 − curl E ε Id 0 σ Id 0 −Je A= , and u = , B= , Q= , f = Jm curl 0 H 0 μ Id 0 0 we can frame the Maxwell system in the abstract evolution equation (20.25). The spaces X and D(A) are as in (20.18). Then B −1 A is maximal monotone in (X, ·, ·B ) and, by the Hille-Yosida theorem and Theorem 3.1.1 of [138], B −1 A + B −1 Q generates a C 0 -semigroup even for arbitrary σ ∈ L∞ (D) (no sign condition is required). Now, the mapping F : L(X) - Q "→ u ∈ C([0, T ], X) can be approached with the techniques of Sect. 19.2. It is Fréchet differentiable under conditions which are weaker than those of Theorem 19.15. The following holds true: Let u0 ∈ X and f ∈ L1 ([0, T ], X). Then, F  (Q)H = u with u being the mild solution of Bu (t) + (A + Q)u(t) = −H u(t),

t ∈]0, T [,

u(0) = 0,

where u mildly solves (20.25). We apply this result to the mapping ! : L∞ (D) - σ "→ (E, H) ∈ C([0, T ], X) where (E, H) solves (20.17) in [0, T ] × D under the conditions of σ = (E, H) where (E, H) is the mild Theorem 20.14 (a) restricted to [0, T ]. Thus, ! (σ ), solution of ε ∂t E + σ E − curl H = −, σ E, μ ∂t H + curl E = 0,

E(0) = 0, H(0) = 0.

References

1. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Dover Publications (1964). https://doi. org/10.1119/1.15378 2. Adams, R.A., Fournier, J.J.F.: Sobolev Spaces, 2nd edn. Elsevier/Academic Press, Amsterdam (2003) 3. Aregba-Driollet, D., Hanouzet, B.: Kerr-Debye relaxation shock profiles for Kerr equations. Commun. Math. Sci. 9(1), 1–31 (2011). http://projecteuclid.org/euclid.cms/1294170323 4. Assous, F., Ciarlet, P., Labrunie, S.: Mathematical Foundations of Computational Electromagnetism. Applied Mathematical Sciences, vol. 198. Springer, Cham (2018). https://doi.org/10. 1007/978-3-319-70842-3 5. Astala, K., Päivärinta, L.: Calderón’s inverse conductivity problem in the plane. Ann. Math. 163(1), 265–299 (2006). https://doi.org/10.4007/annals.2006.163.265 6. Banjai, L., Georgoulis, E.H., Lijoka, O.: A Trefftz polynomial space-time discontinuous Galerkin method for the second order wave equation. SIAM J. Numer. Anal. 55(1), 63–86 (2017). https://doi.org/10.1137/16M1065744 7. Bahouri, H., Chemin, J.Y., Danchin, R.: Fourier Analysis and Nonlinear Partial Differential Equations. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-16830-7 8. Bansal, P., Moiola, A., Perugia, I., Schwab, C.: Space–time discontinuous Galerkin approximation of acoustic waves with point singularities. IMA J. Numer. Anal. 41(3), 2056–2109 (2021). https://doi.org/10.1093/imanum/draa088 9. Beauchard, K., Zuazua, E.: Large time asymptotics for partially dissipative hyperbolic systems. Arch. Ration. Mech. Anal. 199(1), 177–227 (2011). https://doi.org/10.1007/s00205-010-0321y 10. Becker, R., Rannacher, R.: An optimal control approach to a posteriori error estimation in finite element methods. Acta Numer. 10(1), 1–102 (2001). https://doi.org/10.1017/ S0962492901000010 11. Benzoni-Gavage, S., Serre, D.: Multidimensional Hyperbolic Partial Differential Equations. The Clarendon Press/Oxford University Press, Oxford (2007) 12. Blanch, J.O., Robertsson, J.O.A., Symes, W.W.: Modeling of a constant Q: Methodology and algorithm for an efficient and optimally inexpensive viscoelastic technique. Geophysics 60(1), 176–184 (1995). https://doi.org/10.1190/1.1443744 13. Bohlen, T.: Viskoelastische FD-Modellierung seismischer Wellen zur Interpretation gemessener Seismogramme. Ph.D. thesis, Christian-Albrechts-Universität zu Kiel (1998). https://bit.ly/2LM0SWr © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 W. Dörfler et al., Wave Phenomena, Oberwolfach Seminars 49, https://doi.org/10.1007/978-3-031-05793-9

359

360

References

14. Bohlen, T.: Parallel 3-D viscoelastic finite difference seismic modelling. Comput. Geosci. 28(8), 887–899 (2002). https://doi.org/10.1016/S0098-3004(02)00006-7 15. Bohlen, T., Fernandez, M.R., Ernesti, J., Rheinbay, C., Rieder, A., Wieners, C.: Visco-acoustic full waveform inversion: from a DG forward solver to a Newton-CG inverse solver. Comput. Math. Appl. 100, 126–140 (2021). https://doi.org/10.1016/j.camwa.2021.09.001 16. Braess, D.: Finite Elements. Cambridge University Press, Cambridge (2001) 17. Brezis, H.: Functional analysis, Sobolev spaces and partial differential equations. Universitext. Springer, New York (2011). https://link.springer.com/book/10.1007/978-0-387-70914-7 18. Busch, K., von Freymann, G., Linden, S., Mingaleev, S., Tkeshelashvili, L., Wegener, M.: Periodic nanostructures for photonics. Phys. Rep. 444, 101–202 (2007). https://doi.org/10. 1016/j.physrep.2007.02.011 19. Butcher, P.N., Cotter, D.: The Elements of Nonlinear Optics. Cambridge University Press, Cambridge (1990) 20. Burazin, K., Erceg, M.: Non-stationary abstract Friedrichs systems. Mediterr. J. Math. 13(6), 3777–3796 (2016). https://doi.org/10.1007/s00009-016-0714-8 21. Cai, Z., Lazarov, R., Manteuffel, T.A., McCormick, S.F.: First-order system least squares for second-order partial differential equations: Part I. SIAM J. Numer. Anal. 31(6), 1785–1799 (1994). https://doi.org/10.1137/0731091 22. Cai, Z., Manteuffel, T.A., McCormick, S.F., Ruge, J.: First-order system LL∗ (FOSLL∗ ): Scalar elliptic partial differential equations. SIAM J. Numer. Anal. 39(4), 1418–1445 (2001). https:// doi.org/10.1137/S0036142900388049 23. Carcione, J.M.: Wave Fields in Real Media: Wave Propagation in Anisotropic, Anelastic, Porous and Electromagnetic Media. Elsevier (2014). https://doi.org/10.1016/C2013-0-188939 24. Cessenat, M.: Mathematical Methods in Electromagnetism. World Scientific Publishing Co., River Edge, NJ (1996). https://doi.org/10.1142/2938 25. Chabassier, J., Imperiale, S.: Construction and convergence analysis of conservative second order local time discretisation for linear wave equations. ESAIM Math. Model. Numer. Anal. 55(4), 1507–1543 (2021). https://doi.org/10.1051/m2an/2021030 26. Chazarain, J., Piriou, A.: Introduction to the Theory of Linear Partial Differential Equations. North-Holland Publishing Co., Amsterdam (1982) 27. Ciarlet, P.G.: Mathematical Elasticity. Vol. I: Three-dimensional Elasticity. Studies in Mathematics and its Applications, vol. 20. North-Holland Publishing Co., Amsterdam (1988). https:// doi.org/10.1002/crat.2170250509 28. Colombini, F., De Giorgi, E., Spagnolo, S.: Sur les équations hyperboliques avec des coefficients qui ne dépendent que du temps. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 6(3), 511–559 (1979) 29. Collino, F., Fouquet, T., Joly, P.: A conservative space-time mesh refinement method for the 1-D wave equation. I. Construction. Numer. Math. 95(2), 197–221 (2003). https://doi.org/10. 1007/s00211-002-0446-5 30. Collino, F., Fouquet, T., Joly, P.: A conservative space-time mesh refinement method for the 1D wave equation. II. Analysis. Numer. Math. 95(2), 223–251 (2003). https://doi.org/10.1007/ s00211-002-0447-4 31. Collino, F., Fouquet, T., Joly, P.: Conservative space-time mesh refinement methods for the FDTD solution of Maxwell’s equations. J. Comput. Phys. 211(1), 9–35 (2006). https://doi.org/ 10.1016/j.jcp.2005.03.035 32. Costabel, M., Dauge, M., Nicaise, S.: Corner singularities and analytic regularity for linear elliptic systems. Part I (2010). 
See http://hal.archives-ouvertes.fr/hal-00453934/en/

References

361

33. Courant, R., Friedrichs, K., Lewy, H.: über die partiellen Differenzengleichungen der mathematischen Physik. Math. Ann. 100(1), 32–74 (1928). https://doi.org/10.1007/BF01448839 34. D’Ancona, P., Nicaise, S., Schnaubelt, R.: Blow-up for nonlinear Maxwell equations. Electron. J. Differential Equations, pp. 9, Paper No. 73 (2018) 35. Dautray, R., Lions, J.L.: Mathematical analysis and numerical methods for science and technology, vol. 1. Springer-Verlag, Berlin (1990). Physical origins and classical methods, With the collaboration of Philippe Bénilan, Michel Cessenat, André Gervat, Alain Kavenoky and Hélène Lanchon 36. Dautray, R., Lions, J.L.: Mathematical analysis and numerical methods for science and technology, vol. 3. Springer-Verlag, Berlin (1990). Spectral theory and applications, With the collaboration of Michel Artola and Michel Cessenat 37. Davis, J.L.: Wave propagation in solids and fluids. Springer Science & Business Media (2012). https://doi.org/10.1007/978-1-4612-3886-7 38. Defrise, M., De Mol, C.: A note on stopping rules for iterative regularization methods and filtered SVD. In: Inverse Problems: An Interdisciplinary Study (Montpellier, 1986), Adv. Electron. Electron Phys., Suppl. 19, pp. 261–268. Academic Press, London (1987) 39. Descombes, S., Lanteri, S., Moya, L.: Locally implicit time integration strategies in a discontinuous Galerkin method for Maxwell’s equations. J. Sci. Comput. 56(1), 190–218 (2013). https://doi.org/10.1007/s10915-012-9669-5 40. Descombes, S., Lanteri, S., Moya, L.: Locally implicit discontinuous Galerkin time domain method for electromagnetic wave propagation in dispersive media applied to numerical dosimetry in biological tissues. SIAM J. Sci. Comput. 38(5), A2611–A2633 (2016). https:// doi.org/10.1137/15M1010282 41. Descombes, S., Lanteri, S., Moya, L.: Temporal convergence analysis of a locally implicit discontinuous Galerkin time domain method for electromagnetic wave propagation in dispersive media. J. Comput. Appl. Math. 316, 122–132 (2017). https://doi.org/10.1016/j.cam.2016.09. 038 42. Di Pietro, D.A., Ern, A.: Mathematical Aspects of Discontinuous Galerkin Methods, vol. 69. Springer Science & Business Media (2011). https://doi.org/10.1007/978-3-642-22980-0 43. Di Pietro, D.A., Ern, A.: Mathematical Aspects of Discontinuous Galerkin Methods. Mathématiques & Applications (Berlin) [Mathematics & Applications], vol. 69. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-22980-0 44. Diaz, J., Grote, M.J.: Energy conserving explicit local time stepping for second-order wave equations. SIAM J. Sci. Comput. 31(3), 1985–2014 (2009). https://doi.org/10.1137/070709414 45. Diehl, R., Busch, K., Niegemann, J.: Comparison of low-storage Runge-Kutta schemes for discontinuous Galerkin time-domain simulations of Maxwell’s equations. J. Comput. Theor. Nanosci. 7(8), 1572–1580 (2010). http://www.ingentaconnect.com/content/asp/jctn/ 2010/00000007/00000008/art00035 46. Diestel, J.: Sequences and Series in Banach Spaces. Graduate Texts in Mathematics, vol. 92. Springer-Verlag, New York (1984). https://doi.org/10.1007/978-1-4612-5200-9 47. Dörfler, W., Lechleiter, A., Plum, M., Schneider, G., Wieners, C.: Photonic Crystals: Mathematical Analysis and Numerical Approximation, vol. 42. Springer Science & Business Media (2011). https://doi.org/10.1007/978-3-0348-0113-3 48. Dörfler, W., Findeisen, S., Wieners, C.: Space-time discontinuous Galerkin discretizations for linear first-order hyperbolic evolution systems. Comput. Methods Appl. Math. 
16(3), 409–428 (2016). https://doi.org/10.1515/cmam-2016-0015

362

References

49. Dörfler, W., Findeisen, S., Wieners, C., Ziegler, D.: Parallel adaptive discontinuous Galerkin discretizations in space and time for linear elastic and acoustic waves. In: Langer, U., Steinbach, O. (eds.) Space-Time Methods. Applications to Partial Differential Equations. Radon Series on Computational and Applied Mathematics, vol. 25, pp. 61–88. Walter de Gruyter (2019). https:// doi.org/10.1515/9783110548488-002 50. Dörfler, W., Wieners, C., Ziegler, D.: Space-time discontinuous Galerkin methods for linear hyperbolic systems and the application to the forward problem in seismic imaging. In: Klöfkorn, R., Keilegavlen, E., Radu, F., Fuhrmann, J. (eds.) Finite Volumes for Complex Applications IX – Methods, Theoretical Aspects, Examples. Springer Proceedings in Mathematics & Statistics, vol. 323, pp. 477–485. Springer (2020). https://doi.org/10.1007/978-3-030-436513_44 51. Eilinghoff, J., Schnaubelt, R.: Error analysis of an ADI splitting scheme for the inhomogeneous Maxwell equations. Discrete Contin. Dyn. Syst. 38(11), 5685–5709 (2018). https://doi.org/10. 3934/dcds.2018248 52. Eller, M.: Continuous observability for the anisotropic Maxwell system. Appl. Math. Optim. 55(2), 185–201 (2007). https://doi.org/10.1007/s00245-006-0886-x 53. Eller, M.: On symmetric hyperbolic boundary problems with nonhomogeneous conservative boundary conditions. SIAM J. Math. Anal. 44(3), 1925–1949 (2012). https://doi.org/10.1137/ 110834652 54. Eller, M.: Stability of the anisotropic Maxwell equations with a conductivity term. Evol. Equ. Control Theory 8(2), 343–357 (2019). https://doi.org/10.3934/eect.2019018 55. Eller, M., Rieder, A.: Tangential cone condition and Lipschitz stability for the full waveform forward operator in the acoustic regime. Inverse Problems 37, 085011 (2021). https://doi.org/ 10.1088/1361-6420/ac11c5 56. Eller, M., Lagnese, J.E., Nicaise, S.: Decay rates for solutions of a Maxwell system with nonlinear boundary damping. Comput. Appl. Math. 21(1), 135–165 (2002) 57. Engel, K.J., Nagel, R.: One-Parameter Semigroups for Linear Evolution Equations. Graduate Texts in Mathematics, vol. 194. Springer-Verlag, New York (2000). https://doi.org/10.1007/ b97696 58. Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems. Mathematics and Its Applications, vol. 375. Kluwer Academic Publishers Group, Dordrecht (1996) 59. Ern, A., Guermond, J.L.: Discontinuous galerkin methods for Friedrichs’ systems. I. General theory. SIAM J. Numer. Anal. 44(2), 753–778 (2006). https://doi.org/10.1137/050624133 60. Ern, A., Guermond, J.L.: Discontinuous Galerkin methods for Friedrichs’ systems. II. Secondorder elliptic PDEs. SIAM J. Numer. Anal. 44(6), 2363–2388 (2006). https://doi.org/10.1137/ 05063831X 61. Ern, A., Guermond, J.L.: Discontinuous Galerkin methods for Friedrichs’ systems. III. Multifield theories with partial coercivity. SIAM J. Numer. Anal. 46(2), 776–804 (2008). https://doi.org/10.1137/060664045 62. Ern, A., Guermond, J.L.: A converse to Fortin’s lemma in Banach spaces. C. R. Math. 354(11), 1092–1095 (2016). https://doi.org/10.1016/j.crma.2016.09.013 63. Ern, A., Vohralík, M.: Polynomial-degree-robust a posteriori estimates in a unified setting for conforming, nonconforming, discontinuous Galerkin, and mixed discretizations. SIAM J. Numer. Anal. 53(2), 1058–1081 (2015). https://doi.org/10.1137/130950100 64. Ern, A., Guermond, J.L., Caplain, G.: An intrinsic criterion for the bijectivity of Hilbert operators related to Friedrichs’ systems. Commun. 
Partial Differential Equations 32(1–3), 317– 341 (2007). https://doi.org/10.1080/03605300600718545

References

363

65. Ernesti, J., Wieners, C.: Space-time discontinuous Petrov-Galerkin methods for linear wave equations in heterogeneous media. Comput. Methods Appl. Math. 19(3), 465–481 (2019). https://doi.org/10.1515/cmam-2018-0190 66. Ernesti, J., Wieners, C.: A space-time DPG method for acoustic waves. In: Langer, U., Steinbach, O. (eds.) Space-Time Methods. Applications to Partial Differential Equations. Radon Series on Computational and Applied Mathematics, vol. 25, pp. 89–116. Walter de Gruyter (2019). https://doi.org/10.1515/9783110548488-003 67. Evans, L.C.: Partial differential equations. Graduate Studies in Mathematics, vol. 19, 2nd edn. American Mathematical Society, Providence, RI (2010) 68. Fabrizio, M., Morro, A.: Electromagnetism of Continuous Media. Oxford University Press, Oxford (2003). https://doi.org/10.1093/acprof:oso/9780198527008.001.0001 69. Faragó, I., Horváth, R., Schilders, W.H.: Investigation of numerical time-integrations of Maxwell’s equations using the staggered grid spatial discretization. Int. J. Numer. Model. Electron. Netw. Dev. Fields 18(2), 149–169 (2005). https://doi.org/10.1002/jnm.570 70. Fichtner, A.: Full Seismic Waveform Modelling and Inversion. Advances in Geophysical and Environmental Mechanics and Mathematics. Springer-Verlag, Berlin Heidelberg (2011). https://doi.org/10.1007/978-3-642-15807-0 71. Findeisen, S.: A parallel and adaptive space-time method for Maxwell’s equations. Ph.D. thesis, Dept. of Mathematics, Karlsruhe Institute of Technology (2016). https://doi.org/10.5445/IR/ 1000056876 72. Friedrichs, K.O.: Symmetric positive linear differential equations. Commun. Pure Appl. Math. 11, 333–418 (1958). https://doi.org/10.1002/cpa.3160110306 73. Gerken, T.: Dynamic inverse wave problems—part II: operator identification and applications. Inverse Problems 36(2), 024005 (2020). https://doi.org/10.1088/1361-6420/ab47f4 74. Gerken, T., Grützner, S.: Dynamic inverse wave problems—part I: regularity for the direct problem. Inverse Problems 36(2), 024004 (2020). https://doi.org/10.1088/1361-6420/ab47ec 75. Gopalakrishnan, J., Schöberl, J., Wintersteiger, C.: Mapped tent pitching schemes for hyperbolic systems. SIAM J. Sci. Comput. 39(6), B1043–B1063 (2017). https://doi.org/10.1137/ 16M1101374 76. Gould, P.L., Feng, Y.: Introduction to linear elasticity, 4th edn. Springer, Cham (2018). https:// doi.org/10.1007/978-3-319-73885-7 77. Grafakos, L.: Modern Fourier Analysis, 2nd edn. Springer, New York (2009). https://doi.org/ 10.1007/978-0-387-09434-2 78. Grote, M.J., Michel, S., Sauter, S.A.: Stabilized leapfrog based local time-stepping method for the wave equation. Math. Comput. 90(332), 2603–2643 (2021). https://doi.org/10.1090/mcom/ 3650 79. Guès, O.: Problème mixte hyperbolique quasi-linéaire caractéristique. Comm. Partial Differential Equations 15(5), 595–645 (1990). https://doi.org/10.1080/03605309908820701 80. Hairer, E., Wanner, G.: Solving ordinary differential equations II: Stiff and differentialalgebraic problems. Springer Series in Computational Mathematics, vol. 14, 2nd edn. SpringerVerlag, Berlin (1996). https://doi.org/10.1007/978-3-642-05221-7 81. Hairer, E., Lubich, C., Wanner, G.: Geometric numerical integration illustrated by the StörmerVerlet method. Acta Numer. 12, 399–450 (2003). https://doi.org/10.1017/S0962492902000144 82. Hairer, E., Lubich, C., Wanner, G.: Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations. Springer Series in Computational Mathematics, vol. 31, 2nd edn. 
Springer-Verlag, Berlin (2006). https://doi.org/10.1007/3-540-306668

364

References

83. Hanke, M.: Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems. Numer. Funct. Anal. Optim. 18(9-10), 971–993 (1997). https://doi.org/10.1080/ 01630569708816804 84. Hansen, E., Henningsson, E.: A convergence analysis of the Peaceman-Rachford scheme for semilinear evolution equations. SIAM J. Numer. Anal. 51(4), 1900–1910 (2013). https://doi. org/10.1137/120890570 85. Hesthaven, J.S.: Numerical methods for conservation laws: From analysis to algorithms. SIAM (2017). https://doi.org/10.1137/1.9781611975109 86. Hesthaven, J.S., Warburton, T.: Nodal Discontinuous Galerkin Methods. Algorithms, Analysis, and Applications. Texts in Applied Mathematics, vol. 54 (Springer, New York, 2008). https:// doi.org/10.1007/978-0-387-72067-8 87. Hochbruck, M., Köhler, J.: On the efficiency of the Peaceman–Rachford ADI-dG method for wave-type problems. In: Radu, F., Kumar, K., Berre, I., Nordbotten, J., Pop, I. (eds.) Numerical Mathematics and Advanced Applications ENUMATH 2017. Lecture Notes in Computational Science and Engineering, vol. 126, pp. 135–144. Springer International Publishing (2019). https://doi.org/10.1007/978-3-319-96415-7 88. Hochbruck, M., Köhler, J.: Error Analysis of Discontinuous Galerkin Discretizations of a Class of Linear Wave-type Problems. In: Dörfler, W., et al. (eds.) Mathematics of Wave Phenomena, Trends in Mathematics, pp. 197–218. Birkhäuser, Cham (2020). https://doi.org/10.1007/978-3030-47174-3_12 89. Hochbruck, M., Köhler, J.: Error analysis of a fully discrete discontinuous Galerkin alternating direction implicit discretization of a class of linear wave-type problems. Numer. Math. 150, 893–927 (2022). https://doi.org/10.1007/s00211-021-01262-z 90. Hochbruck, M., Pažur, T.: Implicit Runge-Kutta methods and discontinuous Galerkin discretizations for linear Maxwell’s equations. SIAM J. Numer. Anal. 53(1), 485–507 (2015). https://doi.org/10.1137/130944114 91. Hochbruck, M., Sturm, A.: Error analysis of a second-order locally implicit method for linear Maxwell’s equations. SIAM J. Numer. Anal. 54(5), 3167–3191 (2016). https://doi.org/10.1137/ 15M1038037 92. Hochbruck, M., Pažur, T., Schulz, A., Thawinan, E., Wieners, C.: Efficient time integration for discontinuous Galerkin approximations of linear wave equations [Plenary lecture presented at the 83rd Annual GAMM Conference, Darmstadt, 26th–30th March, 2012]. ZAMM Z. Angew. Math. Mech. 95(3), 237–259 (2015). https://doi.org/10.1002/zamm.201300306 93. Hochbruck, M., Jahnke, T., Schnaubelt, R.: Convergence of an ADI splitting for Maxwell’s equations. Numer. Math. 129(3), 535–561 (2015). https://doi.org/10.1007/s00211-014-0642-0 94. Hofmann, B., Plato, R.: On ill-posedness concepts, stable solvability and saturation. J. Inverse Ill-Posed Probl. 26(2), 287–297 (2018). https://doi.org/10.1515/jiip-2017-0090 95. Hofmann, B., Scherzer, O.: Local ill-posedness and source conditions of operator equations in Hilbert spaces. Inverse Problems 14(5), 1189–1206 (1998). https://doi.org/10.1088/02665611/14/5/007 96. Hytönen, T., van Neerven, J., Veraar, M., Weis, L.: Analysis in Banach spaces. Vol. I. Martingales and Littlewood-Paley Theory. Springer, Cham (2016) 97. Ifrim, M., Tataru, D.: Local well-posedness for quasilinear problems: a primer (2020). Preprint, arXiv:2008.05684 98. Imbert-Gérard, L.M., Moiola, A., Stocker, P.: A space-time quasi-Trefftz DG method for the wave equation with piecewise-smooth coefficients. Math. Comp. 92, 1211–1249 (2023). https:// doi.org/10.1090/mcom/3786

References

365

99. Ito, K., Kappel, F.: Evolution equations and approximations. Series on Advances in Mathematics for Applied Sciences, vol. 61. World Scientific Publishing Co., Inc., River Edge, NJ (2002). https://doi.org/10.1142/9789812777294 100. Jackson, J.D.: Classical Electrodynamics, 3rd edn. Wiley, New York (1999). https://doi.org/10. 1002/3527600434.eap109 101. Jacob, B., Zwart, H.J.: Linear port-Hamiltonian systems on infinite-dimensional spaces. Operator Theory: Advances and Applications, vol. 223. Birkhäuser/Springer Basel AG, Basel (2012). https://doi.org/10.1007/978-3-0348-0399-1 102. Jensen, M.: Discontinuous Galerkin methods for Friedrichs systems with irregular solutions. Ph.D. thesis, University of Oxford (2004). http://sro.sussex.ac.uk/45497/ 103. Joly, P., Rodríguez, J.: An error analysis of conservative space-time mesh refinement methods for the one-dimensional wave equation. SIAM J. Numer. Anal. 43(2), 825–859 (2005). https:// doi.org/10.1137/040603437 104. Kaltenbacher, B., Neubauer, A., Scherzer, O.: Iterative regularization methods for nonlinear illposed problems. Radon Series on Computational and Applied Mathematics, vol. 6. Walter de Gruyter GmbH & Co. KG, Berlin (2008). https://doi.org/10.1515/9783110208276 105. Kato, T.: The Cauchy problem for quasi-linear symmetric hyperbolic systems. Arch. Rational Mech. Anal. 58(3), 181–205 (1975). https://doi.org/10.1007/BF00280740 106. Kato, T.: Quasi-linear equations of evolution, with applications to partial differential equations. In: William N. Everitt (eds.) Spectral Theory and Differential Equations (Proc. Sympos., Dundee, 1974). Lecture Notes in Math., vol. 448, pp. 25–70 (1975) 107. Kato, T.: Abstract Differential Equations and Nonlinear Mixed Problems. Scuola Normale Superiore/Accademia Nazionale dei Lincei, Pisa/Rome (1985) 108. Kirsch, A.: An introduction to the mathematical theory of inverse problems. Applied Mathematical Sciences, vol. 120, 2nd edn. Springer, New York (2011). https://doi.org/10.1007/9781-4419-8474-6 109. Kirsch, A., Hettlich, F.: The mathematical theory of time-harmonic Maxwell’s equations. Applied Mathematical Sciences, vol. 190. Springer, Cham (2015). https://doi.org/10.1007/ 978-3-319-11086-8 110. Kirsch, A., Rieder, A.: On the linearization of operators related to the full waveform inversion in seismology. Math. Methods Appl. Sci. 37, 2995–3007 (2014). https://doi.org/10.1002/mma. 3037 111. Kirsch, A., Rieder, A.: Inverse problems for abstract evolution equations with applications in electrodynamics and elasticity. Inverse Problems 32(8), 085001, 24 (2016). https://doi.org/10. 1088/0266-5611/32/8/085001 112. Kirsch, A., Rieder, A.: Inverse problems for abstract evolution equations II: higher order differentiability for viscoelasticity. SIAM J. Appl. Math. 79(6), 2639–2662 (2019). https:// doi.org/10.1137/19M1269403. Corrigendum under https://doi.org/10.48550/arXiv.2203.01309 113. Köhler, J.: The Peaceman–Rachford ADI-dG method for linear wave-type problems. Ph.D. thesis, Karlsruhe Institute of Technology (2018). https://doi.org/10.5445/IR/1000089271 114. Komornik, V.: Boundary stabilization, observation and control of Maxwell’s equations. PanAmer. Math. J. 4(4), 47–61 (1994) 115. Langer, U., Steinbach, O.: Space-Time Methods: Applications to Partial Differential Equations. Radon Series on Computational and Applied Mathematics, vol. 25. Walter de Gruyter GmbH & Co KG (2019). https://doi.org/10.1515/9783110548488 116. 
Lasiecka, I., Pokojovy, M., Schnaubelt, R.: Exponential decay of quasilinear Maxwell equations with interior conductivity. NoDEA Nonlinear Differential Equations Appl. 26(6), Paper No. 51, 34 (2019). https://doi.org/10.1007/s00030-019-0595-1

366

References

117. Lechleiter, A., Rieder, A.: Newton regularizations for impedance tomography: convergence by local injectivity. Inverse Problems 24(6), 065009, 18 (2008). https://doi.org/10.1088/02665611/24/6/065009 118. Lechleiter, A., Rieder, A.: Towards a general convergence theory for inexact Newton regularizations. Numer. Math. 114(3), 521–548 (2010). https://doi.org/10.1007/s00211-009-02560 119. Leis, R.: Initial Boundary Value Problems in Mathematical Physics. Courier Corporation (2013). https://doi.org/10.1007/978-3-663-10649-4 120. Liu, Y., Shu, C.W., Zhang, M.: Sub-optimal convergence of discontinuous galerkin methods with central fluxes for linear hyperbolic equations with even degree polynomial approximations. J. Comput. Math. 39(4), 518–537 (2021). https://doi.org/10.4208/jcm.2002-m2019-0305 121. Louis, A.K.: Inverse und schlecht gestellte Probleme. B. G. Teubner, Stuttgart (1989). https:// doi.org/10.1007/978-3-322-84808-6 122. Lucente, S., Ziliotti, G.: Global existence for a quasilinear Maxwell system. Rend. Istit. Mat. Univ. Trieste 31(suppl. 2), 169–187 (2000) 123. Majda, A.: Compressible Fluid Flow and Systems of Conservation Laws in Several Space Variables. Springer-Verlag, New York (1984). https://doi.org/10.1007/978-1-4612-1116-7 124. Margotti, F.: Inexact Newton regularization combined with gradient methods in Banach spaces. Inverse Problems 34(7), 075007, 26 (2018). https://doi.org/10.1088/1361-6420/aac21f 125. Margotti, F., Rieder, A.: An inexact Newton regularization in Banach spaces based on the nonstationary iterated Tikhonov method. J. Inverse Ill-Posed Probl. 23(4), 373–392 (2015). https://doi.org/10.1515/jiip-2014-0035 126. Margotti, F., Rieder, A., Leitão, A.: A Kaczmarz version of the reginn-Landweber iteration for ill-posed problems in Banach spaces. SIAM J. Numer. Anal. 52(3), 1439–1465 (2014). https://doi.org/10.1137/130923956 127. Moiola, A., Perugia, I.: A space-time Trefftz discontinuous Galerkin method for the acoustic wave equation in first-order formulation. Numer. Math. 138(2), 389–435 (2018). https://doi. org/10.1007/s00211-017-0910-x 128. Monk, P.: Finite Element Methods for Maxwell’s Equations. Numerical Mathematics and Scientific Computation. Oxford University Press, New York (2003). https://doi.org/10.1093/ acprof:oso/9780198508885.001.0001 129. Montseny, E., Pernet, S., Ferriéres, X., Cohen, G.: Dissipative terms and local time-stepping improvements in a spatial high order discontinuous Galerkin scheme for the time-domain Maxwell’s equations. J. Comput. Phys. 227(14), 6795–6820 (2008). https://doi.org/10.1016/ j.jcp.2008.03.032 130. Moya, L.: Temporal convergence of a locally implicit discontinuous Galerkin method for Maxwell’s equations. ESAIM Math. Model. Numer. Anal. 46(5), 1225–1246 (2012). https:// doi.org/10.1051/m2an/2012002 131. Muñoz Rivera, J.E., Racke, R.: Mildly dissipative nonlinear Timoshenko systems—global existence and exponential stability. J. Math. Anal. Appl. 276(1), 248–278 (2002). https:// doi.org/10.1016/S0022-247X(02)00436-5 132. Namiki, T.: A new FDTD algorithm based on alternating-direction implicit method. IEEE Trans. Microw. Theory Tech. 47(10), 2003–2007 (1999). https://doi.org/10.1109/22.795075 133. Nicaise, S., Pignotti, C.: Boundary stabilization of Maxwell’s equations with space-time variable coefficients. ESAIM Control Optim. Calc. Var. 9, 563–578 (2003). https://doi.org/ 10.1051/cocv:2003027 134. Nicaise, S., Pignotti, C.: Internal stabilization of Maxwell’s equations in heterogeneous media. Abstr. Appl. Anal. 
7, 791–811 (2005). https://doi.org/10.1155/AAA.2005.791

References

367

135. Nirenberg, L.: On elliptic partial differential equations. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3) 13, 115–162 (1959) 136. Ostermann, A., Schratz, K.: Error analysis of splitting methods for inhomogeneous evolution equations. Appl. Numer. Math. 62, 1436–1446 (2012). https://doi.org/10.1016/j.apnum.2012. 06.002 137. Pažur, T.: Error analysis of implicit and exponential time integration of linear Maxwell’s equations. Ph.D. thesis, Karlsruhe Institute of Technology (2013). https://doi.org/10.5445/IR/ 1000038617 138. Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations. Applied Mathematical Sciences, vol. 44. Springer, New York (1983). https://doi.org/10.1007/ 978-1-4612-5561-1 139. Perugia, I., Schöberl, J., Stocker, P., Wintersteiger, C.: Tent pitching and Trefftz-DG method for the acoustic wave equation. Comput. Math. Appl. (2020). https://doi.org/10.1016/j.camwa. 2020.01.006 140. Peaceman, D.W., Rachford Jr., H.H.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Indust. Appl. Math. 3, 28–41 (1955). https://doi.org/10.1137/0103003 141. Phillips, R.S.: Dissipative operators and hyperbolic systems of partial differential equations. Trans. Amer. Math. Soc. 90, 193–254 (1959). https://doi.org/10.2307/1993202 142. Phung, K.D.: Contrôle et stabilisation d’ondes électromagnétiques. ESAIM Control Optim. Calc. Var. 5, 87–137 (2000). https://doi.org/10.1051/cocv:2000103 143. Picard, R.H., Zajaczkowski, ˛ W.M.: Local existence of solutions of impedance initial-boundary value problem for non-linear Maxwell equations. Math. Methods Appl. Sci. 18(3), 169–199 (1995). https://doi.org/10.1002/mma.1670180302 144. Piperno, S.: Symplectic local time-stepping in non-dissipative DGTD methods applied to wave propagation problems. M2AN Math. Model. Numer. Anal. 40(5), 815–841 (2006). https://doi. org/10.1051/m2an:2006035 145. Pokojovy, M., Schnaubelt, R.: Boundary stabilization of quasilinear Maxwell equations. J. Differential Equations 268(2), 784–812 (2020). https://doi.org/10.1016/j.jde.2019.08.032 146. Racke, R.: Lectures on Nonlinear Evolution Equations. Friedr. Vieweg, Braunschweig (1992). https://doi.org/10.1007/978-3-663-10629-6 147. Rauch, J.: L2 is a continuable initial condition for Kreiss’ mixed problems. Comm. Pure Appl. Math. 25, 265–285 (1972). https://doi.org/10.1002/cpa.3160250305 148. Rieder, A.: Keine Probleme mit inversen Problemen. Friedr. Vieweg & Sohn, Braunschweig (2003). https://doi.org/10.1007/978-3-322-80234-7 149. Rudin, W.: Real and Complex Analysis, 3rd edn. McGraw-Hill Book Co., New York (1987) 150. Scherzer, O.: The use of Morozov’s discrepancy principle for Tikhonov regularization for solving nonlinear ill-posed problems. Computing 51(1), 45–60 (1993). https://doi.org/10.1007/ BF02243828 151. Scherzer, O.: A convergence analysis of a method of steepest descent and a two-step algorithm for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 17(1–2), 197–214 (1996). https:// doi.org/10.1080/01630569608816691 152. Schnaubelt, R., Spitz, M.: Local wellposedness of quasilinear Maxwell equations with absorbing boundary conditions. Evol. Equ. Control Theory 10(1), 155–198 (2021). https://doi.org/10. 3934/eect.2020061 153. Schnaubelt, R., Spitz, M.: Local wellposedness of quasilinear Maxwell equations with conservative interface conditions. Commun. Math. Sci. 20(8), 2265–2313 (2022). https://doi.org/10. 48550/arXiv.1811.08714

368

References

154. Schulz, E., Hiptmair, R.: First-kind boundary integral equations for the Dirac operator in 3dimensional Lipschitz domains. SIAM J. Math. Anal. 54(1), 616–648 (2022). https://doi.org/ 10.1137/20M1389224 155. Secchi, P.: Well-posedness of characteristic symmetric hyperbolic systems. Arch. Rational Mech. Anal. 134(2), 155–197 (1996). https://doi.org/10.1007/BF00379552 156. Speck, J.: The nonlinear stability of the trivial solution to the Maxwell-Born-Infeld system. J. Math. Phys. 53(8), 083703, 83 (2012). https://doi.org/10.1063/1.4740047 157. Spitz, M.: Local wellposedness of nonlinear Maxwell equations. Ph.D. thesis, Karlsruhe Institute of Technology (2017). https://doi.org/10.5445/IR/1000078030 158. Spitz, M.: Local wellposedness of nonlinear Maxwell equations with perfectly conducting boundary conditions. J. Differential Equations 266(8), 5012–5063 (2019). https://doi.org/10. 1016/j.jde.2018.10.019 159. Spitz, M.: Regularity theory for nonautonomous Maxwell equations with perfectly conducting boundary conditions. J. Math. Anal. Appl. 506(1), Paper No. 125646, 43 (2022). https://doi. org/10.1016/j.jmaa.2021.125646 160. Steinbach, O., Zank, M.: Coercive space-time finite element methods for initial boundary value problems. Electron. Trans. Numer. Anal. 52, 154–194 (2020). https://doi.org/10.1553/etna_ vol52s154 161. Sturm, A.: Locally implicit time integration for linear Maxwell’s equations. Ph.D. thesis, Karlsruhe Institute of Technology (2017). https://doi.org/10.5445/IR/1000069341 162. Taylor, M.E.: Pseudodifferential Operators and Nonlinear PDE. Birkhäuser, Boston, MA (1991). https://doi.org/10.1007/978-1-4612-0431-2 163. Verwer, J.G.: Component splitting for semi-discrete Maxwell equations. BIT 51(2), 427–445 (2011). https://doi.org/10.1007/s10543-010-0296-y 164. Winkler, R.: Tailored interior and boundary parameter transformations for iterative inversion in electrical impedance tomography. Inverse Problems 35(11), 114007 (2019). https://doi.org/10. 1088/1361-6420/ab2783 165. Winkler, R., Rieder, A.: Model-aware Newton-type inversion scheme for electrical impedance tomography. Inverse Problems 31(4), 045009, 27 (2015). https://doi.org/10.1088/0266-5611/ 31/4/045009 166. Zeidler, E.: Nonlinear Functional Analysis and Its Applications. I. Springer-Verlag, New York (1986) 167. Zeltmann, U.: The viscoelastic seismic model: Existence, uniqueness and differentiability with respect to parameters. Ph.D. thesis, Karlsruhe Institute of Technology (2018). https://doi.org/ 10.5445/IR/1000093989 168. Zhen, F., Chen, Z., Zhang, J.: Toward the development of a three-dimensional unconditionally stable finite-difference time-domain method. IEEE Trans. Microw. Theory Technol. 48(9), 1550–1558 (2000). https://doi.org/10.1109/22.869007 169. Ziegler, D.: A parallel and adaptive space-time discontinuous Galerkin method for visco-elastic and visco-acoustic waves. Ph.D. thesis, Karlsruhe Institute of Technology (KIT) (2019). https:// doi.org/10.5445/IR/1000110469