Mathematical Methods in Elasticity Imaging (ISBN 9781400866625)

This book is the first to comprehensively explore elasticity imaging and examines recent, important developments in asymptotic imaging, modeling, and analysis of deterministic and stochastic elastic wave propagation phenomena.


Mathematical Methods in Elasticity Imaging

PRINCETON SERIES IN APPLIED MATHEMATICS. Series Editors: Ingrid Daubechies (Duke University); Weinan E (Princeton University); Jan Karel Lenstra (Centrum Wiskunde & Informatica, Amsterdam); Endre Süli (University of Oxford). The Princeton Series in Applied Mathematics publishes high-quality advanced texts and monographs in all areas of applied mathematics. Books include those of a theoretical and general nature as well as those dealing with the mathematics of specific application areas and real-world situations. A list of titles in this series appears at the back of the book.

Mathematical Methods in Elasticity Imaging

Habib Ammari, Elie Bretin, Josselin Garnier, Hyeonbae Kang, Hyundae Lee, and Abdul Wahab

PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD

Copyright © 2015 by Princeton University Press. Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540. In the United Kingdom: Princeton University Press, 6 Oxford Street, Woodstock, Oxfordshire OX20 1TW. press.princeton.edu. All Rights Reserved. ISBN 978-0-691-16531-8. British Library Cataloging-in-Publication Data is available. This book has been composed in LaTeX. The publisher would like to acknowledge the authors of this volume for providing the camera-ready copy from which this book was printed. Printed on acid-free paper. Printed in the United States of America. 10 9 8 7 6 5 4 3 2 1

Contents

Introduction 1

1 Layer Potential Techniques 4
1.1 Sobolev Spaces 4
1.2 Elasticity Equations 6
1.3 Radiation Condition 10
1.4 Integral Representation of Solutions to the Lamé System 11
1.5 Helmholtz-Kirchhoff Identities 21
1.6 Eigenvalue Characterizations and Neumann and Dirichlet Functions 27
1.7 A Regularity Result 32

2 Elasticity Equations with High Contrast Parameters 33
2.1 Problem Setting 34
2.2 Incompressible Limit 34
2.3 Limiting Cases of Holes and Hard Inclusions 36
2.4 Energy Estimates 38
2.5 Convergence of Potentials and Solutions 42
2.6 Boundary Value Problems 45

3 Small-Volume Expansions of the Displacement Fields 48
3.1 Elastic Moment Tensor 48
3.2 Small-Volume Expansions 55

4 Boundary Perturbations due to the Presence of Small Cracks 66
4.1 A Representation Formula 66
4.2 Derivation of an Explicit Integral Equation 69
4.3 Asymptotic Expansion 71
4.4 Topological Derivative of the Potential Energy 75
4.5 Derivation of the Representation Formula 76
4.6 Time-Harmonic Regime 79

5 Backpropagation and Multiple Signal Classification Imaging of Small Inclusions 80
5.1 A Newton-Type Search Method 80
5.2 A MUSIC-Type Method in the Static Regime 82
5.3 A MUSIC-Type Method in the Time-Harmonic Regime 82
5.4 Reverse-Time Migration and Kirchhoff Imaging in the Time-Harmonic Regime 84
5.5 Numerical Illustrations 86

6 Topological Derivative Based Imaging of Small Inclusions in the Time-Harmonic Regime 91
6.1 Topological Derivative Based Imaging 91
6.2 Modified Imaging Framework 102

7 Stability of Topological Derivative Based Imaging Functionals 112
7.1 Statistical Stability with Measurement Noise 112
7.2 Statistical Stability with Medium Noise 118

8 Time-Reversal Imaging of Extended Source Terms 125
8.1 Analysis of the Time-Reversal Imaging Functionals 127
8.2 Time-Reversal Algorithm for Viscoelastic Media 129
8.3 Numerical Illustrations 137

9 Optimal Control Imaging of Extended Inclusions 148
9.1 Imaging of Shape Perturbations 149
9.2 Imaging of an Extended Inclusion 152

10 Imaging from Internal Data 160
10.1 Inclusion Model Problem 160
10.2 Binary Level Set Algorithm 162
10.3 Imaging Shear Modulus Distributions 164
10.4 Numerical Illustrations 165

11 Vibration Testing 168
11.1 Small-Volume Expansions of the Perturbations in the Eigenvalues 169
11.2 Eigenvalue Perturbations due to Shape Deformations 181
11.3 Splitting of Multiple Eigenvalues 192
11.4 Reconstruction of Inclusions 193
11.5 Numerical Illustrations 195

A Introduction to Random Processes 201
A.1 Random Variables 201
A.2 Random Vectors 202
A.3 Gaussian Random Vectors 203
A.4 Conditioning 204
A.5 Random Processes 205
A.6 Gaussian Processes 206
A.7 Stationary Gaussian Random Processes 208
A.8 Multi-valued Gaussian Processes 208

B Asymptotics of the Attenuation Operator 210
B.1 Stationary Phase Theorem 210
B.2 Derivation of the Asymptotics 211

C The Generalized Argument Principle and Rouché's Theorem 213
C.1 Notation and Definitions 213
C.2 Generalized Argument Principle 214
C.3 Generalization of Rouché's Theorem 214

References 217

Index 229

Introduction

Elasticity imaging is used to determine the characteristics of structures inside an elastic body from observations of the displacement field made on part of the boundary surface or inside the body. The aim is to recover certain material and geometric parameters characteristic of these structures. The main motivations of elasticity imaging are non-destructive testing of elastic structures for material impurities, exploration geophysics, and medical diagnosis, in particular the detection of potential tumors of diminishing size.

Elasticity imaging for medical diagnosis aims at providing a quantitative visualization of the mechanical properties of human tissues using the relation between the wave propagation velocity and the tissue viscoelastic properties. Different imaging modalities can be used to measure the displacement field in the interior of tissue in response to a time-harmonic or a dynamic excitation. The two major techniques are based on magnetic resonance imaging and on ultrasound. When magnetic resonance imaging is used, the excitation is at a relatively low frequency, and the time-harmonic elastic response of the tissue is measured. When ultrasound is used, on the other hand, the excitation is dynamic and broadband; even in this case, the dynamic displacement field is resolved into its time-harmonic components via a Fourier transform. The time-harmonic displacement in the interior of the tissue offers a wealth of information that may be used to characterize the tissue viscoelastic properties by solving an imaging problem. These properties in turn carry information about the tissue composition, micro-structure, physiology, and pathology. Changes in tissue elasticity are generally correlated with pathological phenomena such as weakening of vessel walls or cirrhosis of the liver. Many cancers appear as extremely hard nodules because of the recruitment of collagen during tumorigenesis. It is therefore very interesting and challenging for diagnostic applications to find ways of generating resolved images that depict tissue elasticity or stiffness.

Elasticity imaging also plays an essential role in a wide range of industrial areas and at almost any stage in the production or life cycle of many components. It is important to determine the integrity of a structure, to quantitatively measure characteristics of an object, and to ensure the cost-effective operation, safety, and reliability of a plant, with resulting benefit to the community. As a number of infrastructures currently in service reach the end of their expected serviceable life, elasticity testing methods to evaluate their durability, and thus to ensure their structural integrity, have received gradually increasing attention. Concrete structures are designed, in particular, on the basis of the compression strength of the concrete. Concrete often degrades through corrosion of the embedded steel-reinforcing bars, which can lead to internal stress. When reinforcing bars in concrete become corroded (i.e., rusted), rust is generated on the surface of the bars, and small cracks begin to grow as the rusting expands. A change in the physical properties of the concrete due to degradation of a concrete structure affects the concrete's compression strength.


Elasticity imaging, which uses the direct relationship between the velocities of elastic waves and the elastic properties of the material through which they propagate, is the method of choice for estimating the compression strength of concrete.

In environmental sciences, a major application of elasticity imaging is the monitoring of potentially dangerous structures such as active fault zones prone to damaging earthquakes or volcanic edifices. Oil reservoir monitoring is also of primary interest for the oil industry in order to gain insight into the depletion dynamics of a reservoir. Quantitative elasticity imaging of the elastic properties of the subsurface is essential for oil- and gas-reservoir characterization and for monitoring carbon dioxide sequestration with time-lapse acquisitions. Indeed, fluids and gases have significant effects on the elastic properties of the subsurface in terms of Poisson's ratio anomalies. This quantitative imaging is also required for near-surface imaging in the framework of civil engineering applications because the shear properties of the shallow weathered layers strongly impact the elastic wavefield.

Many challenging mathematical problems arise in elasticity imaging techniques and pose interesting mathematical riddles that often lead to the investigation of fundamental problems in various branches of mathematics. This book covers recent mathematical, numerical, and statistical approaches for elasticity imaging of inclusions and cracks with waves at zero frequency or at single or multiple non-zero frequencies. Inclusions and cracks of small size are believed to be the starting point of fatigue failure in elastic materials. An inclusion or a crack is called small when the product of its characteristic size with the operating frequency is less than one, while it is called extended when this product is much larger than one. There are two interesting problems: one is finding small elastic inclusions and the other is reconstructing shape deformations of an extended elastic inclusion. In both situations, we are interested in imaging small perturbations with respect to known situations.

Recently, there have been important developments on asymptotic imaging, stochastic modeling, and analysis of both deterministic and stochastic elastic wave propagation phenomena. The aim of this book is to put them together in a coherent way. Emphasis is placed on deriving the best possible imaging functionals for small inclusions and cracks in the sense of stability and resolution. For imaging extended elastic inclusions, we design accurate optimal control methodologies and evaluate the effect of uncertainties in the geometric or physical parameters on their stability and resolution properties.

We also provide an asymptotic framework for vibration testing. Localized damage to a mechanical structure affects its dynamic characteristics. The modification is characterized by changes in the eigenparameters, i.e., the eigenvalues and the associated eigenvectors. Considerable effort has been spent in obtaining a relationship between the changes in the eigenparameters and the damage location, characteristics, and size. In this book, we relate the measured eigenparameters to the location, orientation, and size of an elastic inclusion or crack. We design a method that can be used to identify, locate, and estimate inclusions and cracks in elastic structures by measuring their modal characteristics.

The book is organized as follows.
In Chapter 1, after reviewing some well-known results on the solvability and layer potentials for static and time-harmonic elasticity equations, we proceed to prove representation formulas for solutions of the elasticity equations. Then we establish Helmholtz-Kirchhoff identities. These formulas are our main tool in later chapters for analyzing the resolution of elastic wave imaging approaches. Chapter 2 collects some recent results on the elasticity equations with
high contrast coefficients. Chapter 3 covers the method of small-volume expansions. It provides the leading-order terms in the asymptotic expansions of the solutions to the static and time-harmonic elasticity equations with respect to the size of a small inclusion. We also introduce the concept of the elastic moment tensor associated with an elastic inclusion and present its main properties. The results of Chapter 2 are used in Chapter 3 in order to show that the asymptotic expansion of the solution in the presence of small inclusions holds uniformly with respect to the material parameters. The results of this chapter can be extended to anisotropic inclusions. Chapter 4 deals with the perturbations of the displacement (or traction) vector that are due to the presence of a small crack with homogeneous Neumann boundary conditions in an elastic medium. An asymptotic formula for the boundary perturbations of the displacement as the length of the crack tends to zero is derived. It carries information about the location, size, and orientation of the crack.

Chapter 5 is devoted to direct imaging of small inclusions and cracks in the static regime. It focuses on MUSIC- and migration-type algorithms for detecting the small defects. Chapter 6 introduces a topological derivative based imaging framework for detecting elastic inclusions in the time-harmonic regime. Based on a weighted Helmholtz decomposition of the topological derivative based imaging functional, we achieve optimal resolution imaging. Its stability properties with respect to both measurement and medium noise are investigated in Chapter 7.

Chapters 8 and 9 discuss imaging techniques for extended elastic inclusions. We start with inverse source problems and introduce time-reversal techniques. Then we focus on reconstructing shape changes of an extended target. We introduce several algorithms and analyze their resolution and stability for the linearized reconstruction problem. Finally, we describe optimal control approaches for solving the nonlinear problem. Chapter 8 also extends time-reversal techniques to imaging in viscoelastic media. Chapter 10 proposes efficient methods for reconstructing both the shape and the elasticity parameters of an inclusion using internal displacement measurements. Chapter 11 is devoted to vibration testing. Following the asymptotic formalism developed in this book, we derive asymptotic formulas for eigenvalue perturbations due to small inclusions, cracks, and shape deformations. We propose efficient algorithms for detecting small elastic inclusions and cracks or perturbations in the interface of an inclusion from modal measurements.

In Appendix A we review useful probabilistic tools for elastic imaging in the presence of noise. In Appendix B we derive, based on the stationary phase theorem, asymptotics of the attenuation operator. In Appendix C we recall the main results of Gohberg and Sigal in [100] concerning the generalization to operator-valued functions of classical results in complex analysis.

The book opens the door to a mathematical and numerical framework for elasticity imaging of nano-particles and cellular structures.

Some of the material in this book is from our wonderful collaborations with Elena Beretta, Yves Capdeboscq, Elisa Francini, Pierre Garapon, Wenjia Jing, Vincent Jugnon, Eunjoo Kim, Kyoungsun Kim, Mikyoung Lim, Jisun Lim, Graeme Milton, Gen Nakamura, Kazumi Tanuma, and Habib Zribi. We feel indebted to all of them.
We would also like to acknowledge the support of the European Research Council Project MULTIMOD and of the Korean Ministry of Education, Science, and Technology through grants NRF 2010-0004091 and 2010-0017532.

Chapter One

Layer Potential Techniques

The asymptotic theory for elasticity imaging described in this book relies on layer potential techniques. In this chapter we prepare the way by reviewing a number of basic facts and preliminary results regarding the layer potentials associated with both the static and the time-harmonic elasticity systems. The most important results in this chapter are, on one hand, the decomposition formulas for the solutions to transmission problems in elasticity and the characterization of eigenvalues of the elasticity system as characteristic values of layer potentials and, on the other hand, the Helmholtz-Kirchhoff identities. As will be shown later, the Helmholtz-Kirchhoff identities play a key role in the analysis of resolution in elastic wave imaging. We also note that when dealing with exterior problems for time-harmonic elasticity, one should introduce a radiation condition, known as the Sommerfeld radiation condition, in order to select the physical solution to the problem.

This chapter is organized as follows. In Section 1.1 we first review commonly used function spaces. Then we introduce in Section 1.2 the equations of linear elasticity and use the Helmholtz decomposition theorem to decompose the displacement field into the sum of an irrotational (curl-free) and a solenoidal (divergence-free) field. Section 1.3 is devoted to the radiation condition for time-harmonic elastic waves, which is used to select the physical solution to exterior problems. In Section 1.4 we introduce the layer potentials associated with the operators of static and time-harmonic elasticity, study their mapping properties, and prove decomposition formulas for the displacement fields. In Section 1.5 we derive the Helmholtz-Kirchhoff identities, which play a key role in the resolution analysis in Chapters 7 and 8. In Section 1.6 we characterize the eigenvalues of the elasticity operator on a bounded domain with Neumann or Dirichlet boundary conditions as the characteristic values of certain layer potentials which are meromorphic operator-valued functions. We also introduce Neumann and Dirichlet functions and write their spectral decompositions. These results will be used in Chapter 11. Finally, in Section 1.7 we state a generalization of Meyer's theorem concerning the regularity of solutions to the equations of linear elasticity, which will be needed in Chapter 11 in order to establish an asymptotic theory of elastic eigenvalue problems.

1.1 SOBOLEV SPACES

Throughout the book, symbols of scalar quantities are printed in italic type, symbols of vectors are printed in bold italic type, symbols of matrices or 2-tensors are printed in bold type, and symbols of 4-tensors are printed in blackboard bold type.

The following Sobolev spaces are needed for the study of the mapping properties of layer potentials for the elasticity equations. Let $\partial_i$ denote $\partial/\partial x_i$. We use $\nabla = (\partial_i)_{i=1}^d$ and $\partial^2 = (\partial^2_{ij})_{i,j=1}^d$ to denote the gradient and the Hessian, respectively.

Let $\Omega$ be a smooth domain in $\mathbb{R}^d$, with $d = 2$ or $3$. We define the Hilbert space $H^1(\Omega)$ by
$$H^1(\Omega) = \big\{ u \in L^2(\Omega) : \nabla u \in L^2(\Omega) \big\},$$
where $\nabla u$ is interpreted as a distribution and $L^2(\Omega)$ is defined in the usual way, with
$$\|u\|_{L^2(\Omega)} = \Big( \int_\Omega |u|^2 \, dx \Big)^{1/2}.$$
The space $H^1(\Omega)$ is equipped with the norm
$$\|u\|_{H^1(\Omega)} = \Big( \int_\Omega |u|^2 \, dx + \int_\Omega |\nabla u|^2 \, dx \Big)^{1/2}.$$
If $\Omega$ is bounded, another Banach space $H^1_0(\Omega)$ arises by taking the closure in $H^1(\Omega)$ of $\mathcal{C}^\infty_0(\Omega)$, the set of infinitely differentiable functions with compact support in $\Omega$. We will also need the space $H^1_{\mathrm{loc}}(\mathbb{R}^d \setminus \bar\Omega)$ of functions $u \in L^2_{\mathrm{loc}}(\mathbb{R}^d \setminus \bar\Omega)$, the set of locally square summable functions in $\mathbb{R}^d \setminus \bar\Omega$, such that
$$h u \in H^1(\mathbb{R}^d \setminus \bar\Omega) \quad \forall\, h \in \mathcal{C}^\infty_0(\mathbb{R}^d \setminus \bar\Omega).$$
Furthermore, we define $H^2(\Omega)$ as the space of functions $u \in H^1(\Omega)$ such that $\partial^2_{ij} u \in L^2(\Omega)$, for $i, j = 1, \ldots, d$, and the space $H^{3/2}(\Omega)$ as the interpolation space $[H^1(\Omega), H^2(\Omega)]_{1/2}$ (see, for example, the book by Bergh and Löfström [49]).

It is known that the trace operator $u \mapsto u|_{\partial\Omega}$ is a bounded linear surjective operator from $H^1(\Omega)$ into $H^{1/2}(\partial\Omega)$, where $H^{1/2}(\partial\Omega)$ is the collection of functions $f \in L^2(\partial\Omega)$ such that
$$\int_{\partial\Omega} \int_{\partial\Omega} \frac{|f(x) - f(y)|^2}{|x - y|^d} \, d\sigma(x)\, d\sigma(y) < +\infty.$$
We set $H^{-1/2}(\partial\Omega) = (H^{1/2}(\partial\Omega))^*$ and let $\langle\,,\,\rangle_{1/2,-1/2}$ denote the duality pairing between these dual spaces.

We introduce a weighted norm, $\|u\|_{H^1_w(\mathbb{R}^d \setminus \bar\Omega)}$, in two dimensions. Let
$$\|u\|^2_{H^1_w(\mathbb{R}^2 \setminus \bar\Omega)} := \int_{\mathbb{R}^2 \setminus \bar\Omega} \frac{|u(x)|^2}{\sqrt{1 + |x|^2}} \, dx + \int_{\mathbb{R}^2 \setminus \bar\Omega} |\nabla u(x)|^2 \, dx. \qquad (1.1)$$
This weighted norm is introduced because, as will be shown later, the solutions of the static elasticity equation behave like $O(|x|^{-1})$ in two dimensions as $|x| \to \infty$. For convenience, we set
$$W(\mathbb{R}^d \setminus \bar\Omega) := \begin{cases} H^1_w(\mathbb{R}^2 \setminus \bar\Omega) & \text{for } d = 2, \\ H^1(\mathbb{R}^3 \setminus \bar\Omega) & \text{for } d = 3. \end{cases} \qquad (1.2)$$
In three dimensions, $W(\mathbb{R}^d \setminus \bar\Omega)$ is the usual Sobolev space.

We also define the Banach space $W^{1,\infty}(\Omega)$ by
$$W^{1,\infty}(\Omega) = \big\{ u \in L^\infty(\Omega) : \nabla u \in L^\infty(\Omega) \big\}, \qquad (1.3)$$
where $\nabla u$ is interpreted as a distribution and $L^\infty(\Omega)$ is defined in the usual way, with
$$\|u\|_{L^\infty(\Omega)} = \inf\big\{ C \ge 0 : |u(x)| \le C \ \text{a.e. } x \in \Omega \big\}.$$

We will need the following Hilbert spaces for deriving the Helmholtz decomposition theorem:
$$H_{\mathrm{curl}}(\Omega) := \{ u \in L^2(\Omega)^d,\ \nabla \times u \in L^2(\Omega)^d \},$$
equipped with the norm
$$\|u\|_{\mathrm{curl}(\Omega)} = \Big( \int_\Omega |u|^2 \, dx + \int_\Omega |\nabla \times u|^2 \, dx \Big)^{1/2},$$
and
$$H_{\mathrm{div}}(\Omega) := \{ u \in L^2(\Omega)^d,\ \nabla \cdot u \in L^2(\Omega) \},$$
equipped with the norm
$$\|u\|_{\mathrm{div}(\Omega)} = \Big( \int_\Omega |u|^2 \, dx + \int_\Omega |\nabla \cdot u|^2 \, dx \Big)^{1/2}.$$

Finally, let $T_1, \ldots, T_{d-1}$ be an orthonormal basis for the tangent plane to $\partial\Omega$ at $x$ and let
$$\partial/\partial T = \sum_{p=1}^{d-1} (\partial/\partial T_p)\, T_p \qquad (1.4)$$
denote the tangential derivative on $\partial\Omega$. We say that $f \in H^1(\partial\Omega)$ if $f \in L^2(\partial\Omega)$ and $\partial f/\partial T \in L^2(\partial\Omega)^{d-1}$. Furthermore, we define $H^{-1}(\partial\Omega)$ as the dual of $H^1(\partial\Omega)$ and the space $H^s(\partial\Omega)$, for $0 \le s \le 1$, as the interpolation space $[L^2(\partial\Omega), H^1(\partial\Omega)]_s$ or, equivalently, as the set of functions $f \in L^2(\partial\Omega)$ such that
$$\int_{\partial\Omega} \int_{\partial\Omega} \frac{|f(x) - f(y)|^2}{|x - y|^{d-1+2s}} \, d\sigma(x)\, d\sigma(y) < +\infty;$$
see again [49].
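As a concrete illustration of the norms just introduced, the following short script approximates $\|u\|_{L^2(\Omega)}$ and $\|u\|_{H^1(\Omega)}$ for a sample function on $\Omega = (0,1)^2$ using finite differences and a Riemann sum. This is only a numerical sketch; the grid, the test function, and the use of Python/numpy are choices made here, not material from the book.

```python
# Minimal numerical illustration of the L^2 and H^1 norms of Section 1.1,
# assuming Omega = (0,1)^2 and centered finite differences on a uniform grid.
import numpy as np

n = 200                                      # grid points per direction
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

u = np.sin(np.pi * X) * np.sin(np.pi * Y)    # sample function on Omega

ux, uy = np.gradient(u, h, h)                # approximate grad u

l2_sq = np.sum(u**2) * h**2                  # int |u|^2 dx
h1_sq = l2_sq + np.sum(ux**2 + uy**2) * h**2 # int |u|^2 + |grad u|^2 dx

# For u = sin(pi x) sin(pi y): ||u||_{L^2}^2 = 1/4 and
# ||grad u||_{L^2}^2 = pi^2/2, so ||u||_{H^1}^2 = 1/4 + pi^2/2.
print(np.sqrt(h1_sq), np.sqrt(0.25 + np.pi**2 / 2))
```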

1.2 ELASTICITY EQUATIONS

Let $\Omega$ be a domain in $\mathbb{R}^d$, $d = 2, 3$. Let $\lambda$ and $\mu$ be the Lamé constants for $\Omega$ satisfying the strong convexity condition
$$\mu > 0 \quad \text{and} \quad d\lambda + 2\mu > 0. \qquad (1.5)$$
The constants $\lambda$ and $\mu$ are respectively referred to as the compression modulus and the shear modulus. The compression modulus measures the resistance of the material to compression and the shear modulus measures the resistance to shearing. We also introduce the bulk modulus $\beta := \lambda + 2\mu/d$. We refer the reader to [122, p. 11] for an explanation of the physical significance of (1.5).

In a homogeneous isotropic elastic medium, the elastostatic operator corresponding to the Lamé constants $\lambda, \mu$ is given by
$$\mathcal{L}^{\lambda,\mu} u := \mu \Delta u + (\lambda + \mu) \nabla \nabla \cdot u, \qquad u : \Omega \to \mathbb{R}^d. \qquad (1.6)$$
If $\Omega$ is bounded with a connected Lipschitz boundary, then we define the conormal derivative $\partial u/\partial \nu$ by
$$\frac{\partial u}{\partial \nu} = \lambda (\nabla \cdot u)\, n + \mu (\nabla u + \nabla u^t)\, n, \qquad (1.7)$$
where $\nabla u$ is the matrix $(\partial_j u_i)_{i,j=1}^d$ with $u_i$ being the $i$-th component of $u$, the superscript $t$ denotes the transpose, and $n$ is the outward unit normal to the boundary $\partial\Omega$. Note that the conormal derivative has a direct physical meaning:
$$\frac{\partial u}{\partial \nu} = \text{traction on } \partial\Omega.$$
The vector $u$ is the displacement field of the elastic medium having the Lamé coefficients $\lambda$ and $\mu$, and the symmetric gradient $\nabla^s u := (\nabla u + \nabla u^t)/2$ is the strain tensor.

In $\mathbb{R}^d$, $d = 2, 3$, let
$$\mathbf{I} := \delta_{ij}\, e_i \otimes e_j, \qquad \mathbb{I} := \frac{1}{2} (\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})\, e_i \otimes e_j \otimes e_k \otimes e_l,$$
with $(e_1, \ldots, e_d)$ being the canonical basis of $\mathbb{R}^d$ and $\otimes$ denoting the tensor product between vectors in $\mathbb{R}^d$. Here, $\mathbf{I}$ is the $d \times d$ identity matrix or 2-tensor while $\mathbb{I}$ is the identity 4-tensor. Define the elasticity tensor $\mathbb{C} = (C_{ijkl})_{i,j,k,l=1}^d$ for $\mathbb{R}^d$ by
$$C_{ijkl} = \lambda \delta_{ij}\delta_{kl} + \mu(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}),$$
which can be written as
$$\mathbb{C} := \lambda\, \mathbf{I} \otimes \mathbf{I} + 2\mu\, \mathbb{I}.$$
With this notation, we have
$$\mathcal{L}^{\lambda,\mu} u = \nabla \cdot \mathbb{C} \nabla^s u,$$
and
$$\frac{\partial u}{\partial \nu} = (\mathbb{C} \nabla^s u)\, n = \sigma(u)\, n, \qquad (1.8)$$
where $\sigma(u)$ is the stress tensor given by $\sigma(u) = \mathbb{C} \nabla^s u$.
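The following minimal sketch assembles the objects appearing in (1.6)-(1.8) for a single displacement gradient: the strain $\nabla^s u$, the stress $\sigma(u) = \lambda(\nabla\cdot u)\mathbf{I} + 2\mu\nabla^s u$, and the traction $\sigma(u)\,n$. The numerical values of the Lamé constants, the gradient, and the normal are illustrative only.

```python
# Sketch of strain, stress, and traction from (1.6)-(1.8); all numbers are
# illustrative and not taken from the book.
import numpy as np

lam, mu = 2.0, 1.0                 # Lame constants, mu > 0, d*lam + 2*mu > 0
grad_u = np.array([[0.10, 0.02, 0.00],
                   [0.00, -0.05, 0.03],
                   [0.01, 0.00, 0.04]])   # constant displacement gradient (d_j u_i)

eps = 0.5 * (grad_u + grad_u.T)    # symmetric gradient (strain) nabla^s u
sigma = lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps   # stress C : nabla^s u

n = np.array([0.0, 0.0, 1.0])      # outward unit normal
traction = sigma @ n               # conormal derivative du/dnu = sigma(u) n

print(sigma)
print(traction)                    # equals lam*(div u)*n + mu*(grad_u + grad_u^T) n
```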

Now, we consider the elastic wave equation
$$\rho\, \partial_t^2 U - \mathcal{L}^{\lambda,\mu} U = 0,$$
where the positive constant $\rho$ is the density of the medium. We obtain a time-harmonic solution $U(x, t) = \Re e\,[e^{-i\omega t} u(x)]$ if the space-dependent part $u$ satisfies the time-harmonic elasticity equation for the displacement field,
$$(\mathcal{L}^{\lambda,\mu} + \omega^2 \rho)\, u = 0, \qquad (1.9)$$
with $\omega$ being the angular frequency. The time-harmonic elasticity equation (1.9) has a special family of solutions called p- and s-plane waves:
$$U_p(x) = e^{i\omega\sqrt{\rho/(\lambda+2\mu)}\, x\cdot\theta}\, \theta \quad \text{and} \quad U_s(x) = e^{i\omega\sqrt{\rho/\mu}\, x\cdot\theta}\, \theta^\perp, \qquad (1.10)$$
for $\theta \in \mathbb{S}^{d-1} := \{\theta \in \mathbb{R}^d : |\theta| = 1\}$ the direction of the wavevector and $\theta^\perp$ such that $|\theta^\perp| = 1$ and $\theta^\perp \cdot \theta = 0$. Note that $U_p$ is irrotational while $U_s$ is solenoidal.

Taking the limit $\omega \to 0$ in (1.9) yields the static elasticity equation
$$\mathcal{L}^{\lambda,\mu} u = 0. \qquad (1.11)$$

In a bounded domain $\Omega$, the equations (1.9) and (1.11) need to be supplemented with boundary conditions at $\partial\Omega$. If $\partial\Omega$ is a stress-free surface, the traction acting on $\partial\Omega$ vanishes:
$$\frac{\partial u}{\partial \nu} = 0.$$
This boundary condition is appropriate when the surface $\partial\Omega$ forms the outer boundary of the elastic body that is surrounded by empty space.

In a homogeneous, isotropic medium, using the Helmholtz decomposition theorem, the displacement field can be decomposed into the sum of an irrotational and a solenoidal field. Assume that $\Omega$ is simply connected and its boundary $\partial\Omega$ is connected. The Helmholtz decomposition states that for $w \in L^2(\Omega)^d$ there exist $\phi_w \in H^1(\Omega)$ and $\psi_w \in H_{\mathrm{curl}}(\Omega) \cap H_{\mathrm{div}}(\Omega)$ such that
$$w = \nabla \phi_w + \nabla \times \psi_w. \qquad (1.12)$$
The Helmholtz decomposition (1.12) can be found by solving the following weak Neumann problem in $\Omega$ [38, 78]:
$$\int_\Omega \nabla \phi_w \cdot \nabla p \, dx = \int_\Omega w \cdot \nabla p \, dx \qquad \forall\, p \in H^1(\Omega). \qquad (1.13)$$
The function $\phi_w \in H^1(\Omega)$ is uniquely defined up to an additive constant. In order to uniquely define the function $\psi_w$, we impose that it satisfies the following properties [53]:
$$\begin{cases} \nabla \cdot \psi_w = 0 & \text{in } \Omega, \\ \psi_w \cdot n = (\nabla \times \psi_w) \cdot n = 0 & \text{on } \partial\Omega. \end{cases} \qquad (1.14)$$
The boundary condition $(\nabla \times \psi_w) \cdot n = 0$ on $\partial\Omega$ shows that the gradient and curl parts in (1.12) are orthogonal.


We define, respectively, the Helmholtz decomposition operators $\mathcal{H}^p$ and $\mathcal{H}^s$ for $w \in L^2(\Omega)^d$ by
$$\mathcal{H}^p[w] := \nabla \phi_w \quad \text{and} \quad \mathcal{H}^s[w] := \nabla \times \psi_w, \qquad (1.15)$$
where $\phi_w$ is a solution to (1.13) and $\psi_w$ satisfies $\nabla \times \psi_w = w - \nabla \phi_w$ together with (1.14). The $L^2$-projectors $\mathcal{H}^p$ and $\mathcal{H}^s$ are pseudo-differential operators of symbols $\pi_p(x, \xi)$ equal to the orthogonal projector onto $\mathbb{R}\xi$ and $\pi_s = I - \pi_p$; see [161]. The following lemma holds.

Lemma 1.1 (Properties of the Helmholtz decomposition operators). Let the Lamé parameters $(\lambda, \mu)$ be constants satisfying (1.5). We have the orthogonality relations
$$\mathcal{H}^s \mathcal{H}^p = \mathcal{H}^p \mathcal{H}^s = 0. \qquad (1.16)$$
Moreover, $\mathcal{H}^s$ and $\mathcal{H}^p$ commute with $\mathcal{L}^{\lambda,\mu}$: for any smooth vector field $w$ in $\Omega$,
$$\mathcal{H}^\alpha[\mathcal{L}^{\lambda,\mu} w] = \mathcal{L}^{\lambda,\mu} \mathcal{H}^\alpha[w], \qquad \alpha = p, s. \qquad (1.17)$$

Proof. We only prove (1.17). The orthogonality relations (1.16) are easy to see. Let $\mathcal{H}^p[w] = \nabla \phi_w$ and let $\mathcal{H}^s[w] = \nabla \times \psi_w$. Then we have
$$\mathcal{L}^{\lambda,\mu} w = (\lambda + 2\mu)\, \nabla \Delta \phi_w + \mu\, \nabla \times \Delta \psi_w,$$
and therefore,
$$\mathcal{H}^p[\mathcal{L}^{\lambda,\mu} w] = (\lambda + 2\mu)\, \nabla \Delta \phi_w = \mathcal{L}^{\lambda,\mu} \mathcal{H}^p[w] \quad \text{and} \quad \mathcal{H}^s[\mathcal{L}^{\lambda,\mu} w] = \mu\, \nabla \times \Delta \psi_w = \mathcal{L}^{\lambda,\mu} \mathcal{H}^s[w],$$
as desired. $\square$
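As a numerical illustration of the projectors $\mathcal{H}^p$ and $\mathcal{H}^s$ and of the orthogonality relations (1.16), the sketch below computes the decomposition with the FFT on a periodic cube, where the Fourier symbols are exactly the projections $\pi_p$ and $\pi_s$ described above. This replaces the bounded domain and the conditions (1.13)-(1.14) by periodicity, so it is only an analogue of the construction in the text; the grid size and the random field are illustrative.

```python
# FFT-based Helmholtz decomposition on a periodic cube: H^p is the Fourier
# projection onto R*xi and H^s = I - H^p.  A sketch, not the book's construction.
import numpy as np

n = 32
k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
K = np.stack([KX, KY, KZ])                       # (3, n, n, n)
K2 = np.sum(K**2, axis=0)
K2[0, 0, 0] = 1.0                                # avoid division by zero at xi = 0

rng = np.random.default_rng(0)
w = rng.standard_normal((3, n, n, n))            # a sample vector field w

W = np.fft.fftn(w, axes=(1, 2, 3))
P = (np.sum(K * W, axis=0) / K2) * K             # projection of W onto R*xi
wp = np.real(np.fft.ifftn(P, axes=(1, 2, 3)))    # H^p[w]  (curl-free part)
ws = w - wp                                      # H^s[w]  (divergence-free part)

# check the decomposition and the orthogonality (1.16):
print(np.allclose(wp + ws, w))
print(abs(np.sum(wp * ws)) / np.sum(w * w))      # discrete L^2 inner product, ~ 0
```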

It is worth emphasizing that in the exterior (unbounded) domain $\mathbb{R}^d \setminus \bar\Omega$ or in the free space $\mathbb{R}^d$, the Helmholtz decomposition (1.12) stays valid with $H^1(\Omega)$ replaced by $\{ v \in L^2_{\mathrm{loc}} : \nabla v \in L^2 \}$; see, for instance, [93, 88].

In the time-harmonic regime, if the medium is infinite, then the irrotational and solenoidal fields solve two separate Helmholtz equations with different wavenumbers. As will be shown in the next section, radiation conditions should be imposed in order to select the physical solutions. The irrotational field is called the compressional wave (p-wave) and the solenoidal field is called the shear wave (s-wave). The displacement field associated with the p-wave is in the same direction as the wave propagates, while the displacement field associated with the s-wave is orthogonal to the direction of propagation of the wave. Note that, in three dimensions, the s-wave has two directions of oscillation. Note also that if the medium is bounded, then the p- and s-waves are coupled by the boundary conditions at the boundary of the medium.

Let the wave numbers $\kappa_s$ and $\kappa_p$ be given by
$$\kappa_s = \frac{\omega}{c_s} \quad \text{and} \quad \kappa_p = \frac{\omega}{c_p}, \qquad (1.18)$$
where $c_s$ is the wave velocity for shear waves and $c_p$ is the wave velocity for compressive waves:
$$c_s = \sqrt{\frac{\mu}{\rho}} \quad \text{and} \quad c_p = \sqrt{\frac{\lambda + 2\mu}{\rho}}. \qquad (1.19)$$
The $\alpha$-wave, $\alpha = p, s$, propagates through space with the wave number $\kappa_\alpha$, and the corresponding wave velocity is $c_\alpha$. Note that if $\lambda > 0$, then $c_p$ is larger than $c_s$ provided that (1.5) holds. This means that the p-wave arrives faster than the s-wave in the time domain.

Finally, it is worth mentioning that by the antiplane elasticity equation we mean the conductivity equation $\nabla \cdot \mu \nabla u_3 = 0$, where $u_3$ is the $x_3$-component of the displacement field $u$. When the elastic material is invariant under the transformation $x_3 \to -x_3$, the equations of linearized elasticity can be reduced to the antiplane elasticity equation.
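A short numerical companion to (1.10) and (1.18)-(1.19): the script computes $c_p$, $c_s$, $\kappa_p$, $\kappa_s$ from illustrative values of $(\lambda, \mu, \rho, \omega)$ and checks the dispersion relations $(\lambda+2\mu)\kappa_p^2 = \omega^2\rho$ and $\mu\kappa_s^2 = \omega^2\rho$, which are what make the plane waves in (1.10) solutions of (1.9). The material values are illustrative only.

```python
# Wave speeds and wavenumbers (1.18)-(1.19) and the dispersion relations behind
# the plane waves (1.10); material values are illustrative.
import numpy as np

lam, mu, rho = 2.0, 1.0, 1.0
omega = 3.0

cp = np.sqrt((lam + 2.0 * mu) / rho)     # pressure wave speed, (1.19)
cs = np.sqrt(mu / rho)                   # shear wave speed, (1.19)
kp, ks = omega / cp, omega / cs          # wavenumbers, (1.18)

print(cp > cs)                                            # True whenever lam > 0
print(np.isclose((lam + 2.0 * mu) * kp**2, omega**2 * rho))
print(np.isclose(mu * ks**2, omega**2 * rho))

# The p-plane wave of (1.10), U_p(x) = exp(i*kp*x.theta) * theta, is
# irrotational; applying L^{lam,mu} to it gives -(lam + 2*mu)*kp**2 * U_p,
# so (L^{lam,mu} + omega**2*rho) U_p = 0 by the first relation above.
```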

1.3 RADIATION CONDITION

Let us formulate the radiation condition for the time-harmonic elastic waves when $\Im m\, \omega \ge 0$ and $\omega \ne 0$.

Since $\mathcal{H}^s$ and $\mathcal{H}^p$ commute with $\mathcal{L}^{\lambda,\mu}$, as shown in Lemma 1.1, it follows from the Helmholtz decomposition (1.12) that any smooth solution $u$ to the constant-coefficient equation $(\mathcal{L}^{\lambda,\mu} + \omega^2 \rho)\, u = 0$ can be decomposed as follows [120, Theorem 2.5]:
$$u = u_p + u_s, \qquad (1.20)$$
where $u_p$ and $u_s$ satisfy the equations
$$\begin{cases} (\Delta + \kappa_p^2)\, u_p = 0, & \nabla \times u_p = 0, \\ (\Delta + \kappa_s^2)\, u_s = 0, & \nabla \cdot u_s = 0. \end{cases} \qquad (1.21)$$
In fact, $u_p$ and $u_s$ are given by $u_p = \mathcal{H}^p[u]$ and $u_s = \mathcal{H}^s[u]$.

In order to select the physical solutions, we impose on $u_p$ and $u_s$ the radiation condition for solutions of the Helmholtz equation by requiring, as $r = |x| \to +\infty$, that
$$\begin{cases} \partial_r u_p(x) - i\kappa_p\, u_p(x) = O(r^{-(d+1)/2}), \\ \partial_r u_s(x) - i\kappa_s\, u_s(x) = O(r^{-(d+1)/2}). \end{cases} \qquad (1.22)$$
We say that $u$ satisfies the Sommerfeld-Kupradze radiation condition if it can be decomposed in the form (1.20) with $u_p$ and $u_s$ satisfying (1.21) and (1.22). We recall the following uniqueness result for the exterior problem [120].

Lemma 1.2 (Uniqueness result). Let $u$ be a solution to $(\mathcal{L}^{\lambda,\mu} + \omega^2 \rho)\, u = 0$ in $\mathbb{R}^d \setminus \bar\Omega$ satisfying the Sommerfeld-Kupradze radiation condition (1.22). If either $u = 0$ or $\partial u/\partial\nu = 0$ on $\partial\Omega$, then $u$ is identically zero in $\mathbb{R}^d \setminus \bar\Omega$.

1.4 INTEGRAL REPRESENTATION OF SOLUTIONS TO THE LAMÉ SYSTEM

1.4.1 Fundamental Solutions

In dimension $d$, the Kupradze matrix $\Gamma^\omega = (\Gamma^\omega_{ij})_{i,j=1}^d$ of the fundamental solution to the operator $\mathcal{L}^{\lambda,\mu} + \omega^2\rho$ satisfies
$$(\mathcal{L}^{\lambda,\mu} + \omega^2\rho)\, \Gamma^\omega(x - y) = \delta_y(x)\, \mathbf{I}, \qquad x \in \mathbb{R}^d,\ x \ne y, \qquad (1.23)$$
where $\delta_y$ is the Dirac mass at $y$ and $\mathbf{I}$ is the $d \times d$ identity matrix. See [120, Chapter 2]. The function $\Gamma^\omega$ can be decomposed into shear and pressure components [3]:
$$\Gamma^\omega(x) = \Gamma^\omega_s(x) + \Gamma^\omega_p(x), \qquad x \in \mathbb{R}^d,\ x \ne 0, \qquad (1.24)$$
where
$$\Gamma^\omega_p(x) = -\frac{1}{\mu\kappa_s^2}\, \mathbf{D}\, \Gamma^\omega_p(x) \quad \text{and} \quad \Gamma^\omega_s(x) = \frac{1}{\mu\kappa_s^2}\, \big(\kappa_s^2\, \mathbf{I} + \mathbf{D}\big)\, \Gamma^\omega_s(x). \qquad (1.25)$$
Here, the tensor $\mathbf{D}$ is defined by
$$\mathbf{D} = \nabla \otimes \nabla = (\partial^2_{ij})_{i,j=1}^d, \qquad (1.26)$$
where the (scalar) function $\Gamma^\omega_\alpha$ on the right-hand sides of (1.25) is the fundamental solution to the Helmholtz operator, i.e.,
$$(\Delta + \kappa_\alpha^2)\, \Gamma^\omega_\alpha(x) = \delta_0(x), \qquad x \in \mathbb{R}^d,\ x \ne 0,$$
subject to the Sommerfeld radiation condition:
$$\partial_r \Gamma^\omega_\alpha(x) - i\kappa_\alpha\, \Gamma^\omega_\alpha(x) = O(r^{-(d+1)/2}) \quad \text{as } r = |x| \to +\infty.$$
Note that $\nabla \cdot \Gamma^\omega_s = 0$ and $\nabla \times \Gamma^\omega_p = 0$. Moreover, $\Gamma^\omega$ satisfies the Sommerfeld-Kupradze radiation condition (1.22). See [2] and [120, Chapter 2]. Here, the vector field $\nabla \cdot \Gamma^\omega_s$ and the matrix field $\nabla \times \Gamma^\omega_p$ are defined by
$$(\nabla \cdot \Gamma^\omega_s)\, p = \nabla \cdot (\Gamma^\omega_s\, p) \quad \text{and} \quad (\nabla \times \Gamma^\omega_p)\, p = \nabla \times (\Gamma^\omega_p\, p)$$
for all $p \in \mathbb{R}^d$.

The function $\Gamma^\omega_\alpha$ is given by
$$\Gamma^\omega_\alpha(x) = \begin{cases} -\dfrac{i}{4}\, H_0^{(1)}(\kappa_\alpha |x|), & d = 2, \\[2mm] -\dfrac{e^{i\kappa_\alpha|x|}}{4\pi|x|}, & d = 3, \end{cases} \qquad (1.27)$$
where $H_0^{(1)}$ is the Hankel function of the first kind of order 0. For the definition and properties of the Hankel function we refer, for instance, to [125]. The only relevant fact we shall recall here is the following behavior of the Hankel function near 0:
$$-\frac{i}{4}\, H_0^{(1)}(\kappa_\alpha |x|) = \frac{1}{2\pi} \ln(\kappa_\alpha |x|) + \tau + \sum_{n=1}^{+\infty} \big( b_n \ln(\kappa_\alpha |x|) + c_n \big) (\kappa_\alpha |x|)^{2n}, \qquad \alpha = p, s, \qquad (1.28)$$
where
$$b_n = \frac{(-1)^n}{2\pi}\, \frac{1}{2^{2n}(n!)^2}, \qquad c_n = -b_n \Big( \gamma - \ln 2 - \frac{\pi i}{2} - \sum_{j=1}^n \frac{1}{j} \Big),$$
and the constant $\tau = (1/2\pi)(\gamma - \ln 2) - i/4$, $\gamma$ being the Euler constant.

It is known (see, for example, [75, 125]) that, as $t \to +\infty$, we have
$$H_0^{(1)}(t) = \sqrt{\frac{2}{\pi t}}\, e^{i(t - \frac{\pi}{4})} \Big( 1 + O\Big(\frac{1}{t}\Big) \Big), \qquad \frac{d}{dt} H_0^{(1)}(t) = \sqrt{\frac{2}{\pi t}}\, e^{i(t + \frac{\pi}{4})} \Big( 1 + O\Big(\frac{1}{t}\Big) \Big). \qquad (1.29)$$
Using (1.29), one can see that in the two-dimensional case
$$\hat{x} \cdot \nabla H_0^{(1)}(\kappa_\alpha |x|) - i\kappa_\alpha\, H_0^{(1)}(\kappa_\alpha |x|) = O(|x|^{-3/2}), \qquad (1.30)$$
where $\hat{x} := x/|x|$. This is exactly the two-dimensional Sommerfeld radiation condition one should impose in order to select the physical solution of the Helmholtz equation.
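The asymptotic behavior (1.29) is easy to check numerically; the snippet below compares $H_0^{(1)}(t)$ with its leading term for a few values of $t$, assuming scipy is available for the Hankel function.

```python
# Quick numerical check of the large-argument behavior (1.29) of H_0^(1).
import numpy as np
from scipy.special import hankel1

for t in [10.0, 100.0, 1000.0]:
    exact = hankel1(0, t)
    lead = np.sqrt(2.0 / (np.pi * t)) * np.exp(1j * (t - np.pi / 4.0))
    print(t, abs(exact - lead) / abs(exact))   # relative error decays like 1/t
```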

In the three-dimensional case, the Kupradze matrix $\Gamma^\omega = (\Gamma^\omega_{ij})_{i,j=1}^3$ is given by
$$\Gamma^\omega_{ij}(x) = -\frac{\delta_{ij}}{4\pi\mu|x|}\, e^{i\kappa_s|x|} + \frac{1}{4\pi\omega^2\rho}\, \partial_i\partial_j\, \frac{e^{i\kappa_p|x|} - e^{i\kappa_s|x|}}{|x|}, \qquad (1.31)$$
where $\kappa_\alpha$, $\alpha = p, s$, is given by (1.18). One can easily show that $\Gamma^\omega_{ij}$ has the series representation
$$\Gamma^\omega_{ij}(x) = -\frac{1}{4\pi\rho} \sum_{n=0}^{+\infty} \frac{i^n}{(n+2)\, n!} \Big( \frac{n+1}{c_s^{n+2}} + \frac{1}{c_p^{n+2}} \Big)\, \omega^n |x|^{n-1}\, \delta_{ij} + \frac{1}{4\pi\rho} \sum_{n=0}^{+\infty} \frac{i^n (n-1)}{(n+2)\, n!} \Big( \frac{1}{c_s^{n+2}} - \frac{1}{c_p^{n+2}} \Big)\, \omega^n |x|^{n-3}\, x_i x_j. \qquad (1.32)$$
If $\omega = 0$, then $\Gamma := \Gamma^0$ is the Kelvin matrix of the fundamental solution to the Lamé system; i.e.,
$$\Gamma_{ij}(x) = -\frac{\gamma_1}{4\pi}\, \frac{\delta_{ij}}{|x|} - \frac{\gamma_2}{4\pi}\, \frac{x_i x_j}{|x|^3}, \qquad (1.33)$$
where
$$\gamma_1 = \frac{1}{2} \Big( \frac{1}{\mu} + \frac{1}{2\mu + \lambda} \Big) \quad \text{and} \quad \gamma_2 = \frac{1}{2} \Big( \frac{1}{\mu} - \frac{1}{2\mu + \lambda} \Big). \qquad (1.34)$$
In the two-dimensional case, the Kupradze matrix $\Gamma^\omega = (\Gamma^\omega_{ij})_{i,j=1}^2$ of the fundamental solution to the operator $\mathcal{L}^{\lambda,\mu} + \omega^2\rho$, $\omega \ne 0$, is given by
$$\Gamma^\omega_{ij}(x) = -\frac{i}{4\mu}\, \delta_{ij}\, H_0^{(1)}(\kappa_s|x|) + \frac{i}{4\omega^2\rho}\, \partial_i\partial_j \Big( H_0^{(1)}(\kappa_p|x|) - H_0^{(1)}(\kappa_s|x|) \Big). \qquad (1.35)$$
For $\omega = 0$, we set $\Gamma$ to be the Kelvin matrix of fundamental solutions to the Lamé system; i.e.,
$$\Gamma_{ij}(x) = \frac{\gamma_1}{2\pi}\, \delta_{ij} \ln|x| - \frac{\gamma_2}{2\pi}\, \frac{x_i x_j}{|x|^2}. \qquad (1.36)$$
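The following sketch evaluates the three-dimensional Kupradze matrix (1.31), with the Hessian approximated by centered finite differences, and the Kelvin matrix (1.33)-(1.34), and checks that the former approaches the latter as $\omega \to 0$. The material constants, the evaluation point, and the finite-difference step are illustrative choices, not values from the book.

```python
# Numerical evaluation of the 3D Kupradze matrix (1.31) and the Kelvin matrix
# (1.33)-(1.34); Gamma^omega -> Gamma as omega -> 0.  All numbers illustrative.
import numpy as np

lam, mu, rho = 2.0, 1.0, 1.0

def kupradze(x, omega, h=1e-2):
    cp, cs = np.sqrt((lam + 2 * mu) / rho), np.sqrt(mu / rho)
    kp, ks = omega / cp, omega / cs
    f = lambda y: (np.exp(1j * kp * np.linalg.norm(y))
                   - np.exp(1j * ks * np.linalg.norm(y))) / np.linalg.norm(y)
    G = np.zeros((3, 3), dtype=complex)
    e = np.eye(3)
    r = np.linalg.norm(x)
    for i in range(3):
        for j in range(3):
            # centered second difference approximating d_i d_j f at x
            dij = (f(x + h * e[i] + h * e[j]) - f(x + h * e[i] - h * e[j])
                   - f(x - h * e[i] + h * e[j]) + f(x - h * e[i] - h * e[j])) / (4 * h * h)
            G[i, j] = (-e[i, j] * np.exp(1j * ks * r) / (4 * np.pi * mu * r)
                       + dij / (4 * np.pi * omega**2 * rho))
    return G

def kelvin(x):
    g1 = 0.5 * (1 / mu + 1 / (2 * mu + lam))
    g2 = 0.5 * (1 / mu - 1 / (2 * mu + lam))
    r = np.linalg.norm(x)
    return (-g1 / (4 * np.pi) * np.eye(3) / r
            - g2 / (4 * np.pi) * np.outer(x, x) / r**3)

x = np.array([0.3, -0.2, 0.5])
print(np.max(np.abs(kupradze(x, omega=1e-3) - kelvin(x))))   # small, O(omega)
```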

1.4.2 Single- and Double-Layer Potentials

Let $\Omega$ be a bounded domain in $\mathbb{R}^d$, $d = 2, 3$, with a connected Lipschitz boundary. The single- and double-layer potentials for the operator $\mathcal{L}^{\lambda,\mu} + \omega^2\rho$ are given by
$$\mathcal{S}_\Omega^\omega[\varphi](x) = \int_{\partial\Omega} \Gamma^\omega(x - y)\, \varphi(y)\, d\sigma(y), \qquad x \in \mathbb{R}^d, \qquad (1.37)$$
$$\mathcal{D}_\Omega^\omega[\varphi](x) = \int_{\partial\Omega} \frac{\partial \Gamma^\omega}{\partial \nu(y)}(x - y)\, \varphi(y)\, d\sigma(y), \qquad x \in \mathbb{R}^d \setminus \partial\Omega, \qquad (1.38)$$
for $\varphi \in L^2(\partial\Omega)^d$, where $\partial/\partial\nu$ denotes the conormal derivative defined in (1.7). Thus, for $i = 1, \ldots, d$,
$$\big(\mathcal{D}_\Omega^\omega[\varphi](x)\big)_i = \int_{\partial\Omega} \bigg[ \lambda\, \frac{\partial \Gamma^\omega_{ij}}{\partial y_j}(x - y)\, \varphi(y)\cdot n(y) + \mu \Big( \frac{\partial \Gamma^\omega_{ij}}{\partial y_l} + \frac{\partial \Gamma^\omega_{il}}{\partial y_j} \Big)(x - y)\, n_j(y)\, \varphi_l(y) \bigg] d\sigma(y).$$
The following formulas give the jump relations satisfied by the conormal derivative of the single-layer potential and by the double-layer potential:
$$\frac{\partial(\mathcal{S}_\Omega^\omega[\varphi])}{\partial\nu}\bigg|_\pm(x) = \Big( \pm\frac{1}{2}\, I + (\mathcal{K}_\Omega^\omega)^* \Big)[\varphi](x) \quad \text{a.e. } x \in \partial\Omega, \qquad (1.39)$$
$$\big(\mathcal{D}_\Omega^\omega[\varphi]\big)\big|_\pm(x) = \Big( \mp\frac{1}{2}\, I + \mathcal{K}_\Omega^\omega \Big)[\varphi](x) \quad \text{a.e. } x \in \partial\Omega, \qquad (1.40)$$
where $\mathcal{K}_\Omega^\omega$ is the operator defined by
$$\mathcal{K}_\Omega^\omega[\varphi](x) = \mathrm{p.v.} \int_{\partial\Omega} \frac{\partial \Gamma^\omega}{\partial\nu(y)}(x - y)\, \varphi(y)\, d\sigma(y) \qquad (1.41)$$
and $(\mathcal{K}_\Omega^\omega)^*$ is the $L^2$-adjoint of $\mathcal{K}_\Omega^{-\omega}$; that is,
$$(\mathcal{K}_\Omega^\omega)^*[\varphi](x) = \mathrm{p.v.} \int_{\partial\Omega} \frac{\partial \Gamma^\omega}{\partial\nu(x)}(x - y)\, \varphi(y)\, d\sigma(y).$$
See [77, 120]. Here, p.v. stands for the Cauchy principal value and the subscripts $+$ and $-$ indicate the limits from outside and inside $\Omega$, respectively. The operators $\mathcal{K}_\Omega^\omega$ and $(\mathcal{K}_\Omega^\omega)^*$ are called the Neumann-Poincaré operators. By a straightforward calculation, one can see that the single- and double-layer potentials $\mathcal{S}_\Omega^\omega[\varphi]$ and $\mathcal{D}_\Omega^\omega[\varphi]$ for $\varphi \in L^2(\partial\Omega)^d$ satisfy the time-harmonic elasticity equation in $\Omega$ and $\mathbb{R}^d \setminus \bar\Omega$ together with the Sommerfeld-Kupradze radiation condition (1.22). We refer to [1, 120] for details.

Let $\mathcal{S}_\Omega$, $\mathcal{D}_\Omega$, $\mathcal{K}_\Omega^*$, and $\mathcal{K}_\Omega$ be the layer potentials for the static operator $\mathcal{L}^{\lambda,\mu}$. Analogously to (1.39) and (1.40), the following formulas give the jump relations obeyed by $\mathcal{D}_\Omega[\varphi]$ and by $\partial(\mathcal{S}_\Omega[\varphi])/\partial\nu$ on general Lipschitz domains for $\varphi \in L^2(\partial\Omega)^d$:
$$\frac{\partial(\mathcal{S}_\Omega[\varphi])}{\partial\nu}\bigg|_\pm(x) = \Big( \pm\frac{1}{2}\, I + \mathcal{K}_\Omega^* \Big)[\varphi](x) \quad \text{a.e. } x \in \partial\Omega, \qquad (1.42)$$
$$\big(\mathcal{D}_\Omega[\varphi]\big)\big|_\pm(x) = \Big( \mp\frac{1}{2}\, I + \mathcal{K}_\Omega \Big)[\varphi](x) \quad \text{a.e. } x \in \partial\Omega. \qquad (1.43)$$
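To give a concrete feel for the single-layer potential, the following sketch discretizes the static potential $\mathcal{S}_D[\varphi](x)$ on the unit circle with the two-dimensional Kelvin matrix (1.36) and a uniform quadrature; the evaluation points are taken off the boundary, where the kernel is smooth, so no principal value is needed. The density and all numerical parameters are illustrative.

```python
# Nystrom-type evaluation of the 2D static single-layer potential with the
# Kelvin matrix (1.36) on the unit circle; a sketch, evaluation off-boundary.
import numpy as np

lam, mu = 2.0, 1.0
g1 = 0.5 * (1 / mu + 1 / (2 * mu + lam))
g2 = 0.5 * (1 / mu - 1 / (2 * mu + lam))

def kelvin2d(x):
    r2 = np.dot(x, x)
    return (g1 / (2 * np.pi) * np.log(np.sqrt(r2)) * np.eye(2)
            - g2 / (2 * np.pi) * np.outer(x, x) / r2)

n = 400
t = 2 * np.pi * np.arange(n) / n
ys = np.stack([np.cos(t), np.sin(t)], axis=1)       # quadrature nodes on the circle
ds = 2 * np.pi / n                                  # arclength weight

phi = np.stack([np.cos(2 * t), np.sin(t)], axis=1)  # a smooth density phi on the boundary

def single_layer(x):
    # S_D[phi](x) = sum_j Kelvin(x - y_j) phi_j ds, for x not on the boundary
    return sum(kelvin2d(x - y) @ p for y, p in zip(ys, phi)) * ds

print(single_layer(np.array([2.0, 0.5])))   # exterior evaluation point
print(single_layer(np.array([0.1, 0.2])))   # interior evaluation point
```

For points on the boundary, or for the operators $\mathcal{K}_\Omega$ and $(\mathcal{K}_\Omega)^*$, the weak singularity of the kernel has to be handled with a proper singular quadrature, which this sketch does not attempt.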

Again, the layer potentials $\mathcal{S}_\Omega[\varphi]$ and $\mathcal{D}_\Omega[\varphi]$ for $\varphi \in L^2(\partial\Omega)^d$ satisfy
$$\mathcal{L}^{\lambda,\mu} \mathcal{S}_\Omega[\varphi] = \mathcal{L}^{\lambda,\mu} \mathcal{D}_\Omega[\varphi] = 0 \quad \text{in } \Omega \cup (\mathbb{R}^d \setminus \bar\Omega).$$
We emphasize that the singular integral operators $\mathcal{K}_\Omega^\omega$ and $\mathcal{K}_\Omega$ are not compact, even on smooth domains. This causes some difficulties in solving the elasticity system using layer potential techniques.

Let $\Psi$ be the vector space of all linear solutions to the equation $\mathcal{L}^{\lambda,\mu} u = 0$ satisfying $\partial u/\partial\nu = 0$ on $\partial\Omega$, or, equivalently,
$$\Psi = \big\{ \psi \in H^1(\Omega)^d : \partial_i \psi_j + \partial_j \psi_i = 0,\ 1 \le i, j \le d \big\} = \big\{ a + Bx,\ a \in \mathbb{R}^d,\ B \in M^d_A \big\}, \qquad (1.44)$$
where $M^d_A$ is the space of antisymmetric matrices. One has $\dim \Psi = d(d+1)/2$. Define a subspace of $L^2(\partial\Omega)^d$ by
$$L^2_\Psi(\partial\Omega) = \Big\{ f \in L^2(\partial\Omega)^d : \int_{\partial\Omega} f \cdot \psi \, d\sigma = 0 \ \ \forall\, \psi \in \Psi \Big\}. \qquad (1.45)$$
In particular, since $\Psi$ contains constant functions, we get
$$\int_{\partial\Omega} f \, d\sigma = 0$$
for any $f \in L^2_\Psi(\partial\Omega)$. Define
$$H^{-1/2}_\Psi(\partial\Omega) := \big\{ \varphi \in H^{-1/2}(\partial\Omega)^d : \langle \varphi, \psi \rangle_{1/2,-1/2} = 0 \ \ \forall\, \psi \in \Psi \big\}. \qquad (1.46)$$

Then the following result holds.

Lemma 1.3 (Mapping properties of $\mathcal{K}_\Omega^*$). The operator $\pm\frac{1}{2} I + \mathcal{K}_\Omega^*$ is invertible on $H^{-1/2}_\Psi(\partial\Omega)$. Moreover, there exists a positive constant $C$ such that
$$\|\mathcal{S}_\Omega[\varphi]\|_{W(\mathbb{R}^d)} \le C\, \|\varphi\|_{H^{-1/2}(\partial\Omega)} \qquad (1.47)$$
for all $\varphi \in H^{-1/2}(\partial\Omega)^d$. Furthermore, the null space of $-\frac{1}{2} I + \mathcal{K}_\Omega$ on $H^{-1/2}(\partial\Omega)$ is $\Psi$.

The following invertibility results will also be needed.

Lemma 1.4 (Mapping properties of $(\mathcal{K}_\Omega^\omega)^*$). The operator $\frac{1}{2} I + (\mathcal{K}_\Omega^\omega)^*$ is invertible on $H^{-1/2}(\partial\Omega)^d$. If $\omega^2\rho$ is not a Dirichlet eigenvalue for $-\mathcal{L}^{\lambda,\mu}$ on $\Omega$, then $-\frac{1}{2} I + (\mathcal{K}_\Omega^\omega)^*$ is invertible on $H^{-1/2}(\partial\Omega)^d$ as well.

Next, we recall Green's formulas for the Lamé system, which can be obtained by integration by parts. The first formula is
$$\int_{\partial\Omega} u \cdot \frac{\partial v}{\partial\nu} \, d\sigma = \int_\Omega u \cdot \mathcal{L}^{\lambda,\mu} v \, dx + Q(u, v), \qquad (1.48)$$
where $u \in H^1(\Omega)^d$, $v \in H^{3/2}(\Omega)^d$, and
$$Q(u, v) = \int_\Omega \big( \lambda (\nabla\cdot u)(\nabla\cdot v) + 2\mu\, \nabla^s u : \nabla^s v \big) dx. \qquad (1.49)$$
Here and throughout this book, $\mathbf{A} : \mathbf{B} = \sum_{i,j=1}^d a_{ij} b_{ij}$ for matrices $\mathbf{A} = (a_{ij})$ and $\mathbf{B} = (b_{ij})$. The strong convexity condition (1.5) shows that the quadratic form $u \mapsto Q(u, u)$ is positive definite. Note that $H^1(\Omega)^d$ is the closure of this quadratic form since $u \mapsto \nabla^s u$ is elliptic of order 1.

Formula (1.48) yields Green's second formula
$$\int_{\partial\Omega} \Big( u \cdot \frac{\partial v}{\partial\nu} - v \cdot \frac{\partial u}{\partial\nu} \Big) d\sigma(x) = \int_\Omega \big( u \cdot \mathcal{L}^{\lambda,\mu} v - v \cdot \mathcal{L}^{\lambda,\mu} u \big) dx \qquad (1.50)$$
for $u, v \in H^{3/2}(\Omega)^d$. Formula (1.50) shows that if $u \in H^{3/2}(\Omega)^d$ satisfies $\mathcal{L}^{\lambda,\mu} u = 0$ in $\Omega$, then $\partial u/\partial\nu|_{\partial\Omega} \in L^2_\Psi(\partial\Omega)$.

The following formulation of Korn's inequality will be of interest to us. See [144] and [73, Theorem 6.3.4].

Lemma 1.5 (Korn's inequality). Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$. Let $u \in H^1(\Omega)^d$ satisfy
$$\int_\Omega \big( u \cdot \psi + \nabla u : \nabla\psi \big) dx = 0 \quad \text{for all } \psi \in \Psi. \qquad (1.51)$$
Then there is a constant $C$ depending only on the Lipschitz character of $\Omega$ such that
$$\int_\Omega \big( |u|^2 + |\nabla u|^2 \big) dx \le C \int_\Omega |\nabla^s u|^2 \, dx. \qquad (1.52)$$
Here, $|\nabla u|^2 = \nabla u : \nabla u$ and $|\nabla^s u|^2 = \nabla^s u : \nabla^s u$.

Finally, we prove using Green's formulas that $-\mathcal{S}_\Omega$ is positive.

Lemma 1.6. The operator $-\mathcal{S}_\Omega : L^2(\partial\Omega)^d \to L^2(\partial\Omega)^d$ is positive and self-adjoint.

Proof. It is clear that $\mathcal{S}_\Omega$ is self-adjoint. Let $\varphi \in L^2(\partial\Omega)^d$. Since
$$\begin{cases} \mathcal{L}^{\lambda,\mu} \mathcal{S}_\Omega[\varphi] = 0 & \text{in } \Omega \cup (\mathbb{R}^d \setminus \bar\Omega), \\ \mathcal{S}_\Omega[\varphi](x) = O(|x|^{1-d}) & \text{as } |x| \to +\infty, \end{cases}$$
we have
$$-\int_{\partial\Omega} \frac{\partial}{\partial\nu} \mathcal{S}_\Omega[\varphi]\Big|_+ \cdot \mathcal{S}_\Omega[\varphi] \, d\sigma = \int_{\mathbb{R}^d \setminus \bar\Omega} \big( 2\mu\, |\nabla^s \mathcal{S}_\Omega[\varphi]|^2 + \lambda\, |\nabla\cdot \mathcal{S}_\Omega[\varphi]|^2 \big) dx,$$
and
$$\int_{\partial\Omega} \frac{\partial}{\partial\nu} \mathcal{S}_\Omega[\varphi]\Big|_- \cdot \mathcal{S}_\Omega[\varphi] \, d\sigma = \int_\Omega \big( 2\mu\, |\nabla^s \mathcal{S}_\Omega[\varphi]|^2 + \lambda\, |\nabla\cdot \mathcal{S}_\Omega[\varphi]|^2 \big) dx.$$
Summing up the above two identities and using the jump relation
$$\frac{\partial}{\partial\nu} \mathcal{S}_\Omega[\varphi]\Big|_+ - \frac{\partial}{\partial\nu} \mathcal{S}_\Omega[\varphi]\Big|_- = \varphi,$$
we find that
$$-\int_{\partial\Omega} \mathcal{S}_\Omega[\varphi] \cdot \varphi \, d\sigma = \int_{\mathbb{R}^d} \big( 2\mu\, |\nabla^s \mathcal{S}_\Omega[\varphi]|^2 + \lambda\, |\nabla\cdot \mathcal{S}_\Omega[\varphi]|^2 \big) dx.$$
Thus $-\mathcal{S}_\Omega \ge 0$. Moreover, if $\mathcal{S}_\Omega[\varphi] = 0$ on $\partial\Omega$, then by the uniqueness of a solution to both the interior and the exterior Dirichlet boundary value problems for $\mathcal{L}^{\lambda,\mu}$ it follows that $\mathcal{S}_\Omega[\varphi] = 0$ in $\Omega \cup (\mathbb{R}^d \setminus \bar\Omega)$, and therefore, by the jump relation (1.42), $\varphi = 0$. $\square$

1.4.3 Transmission Problem

In this subsection we consider a Lipschitz bounded inclusion $D$ with Lamé parameters $\tilde\lambda, \tilde\mu$ different from those $\lambda$ and $\mu$ of the background medium. We assume that the pair of Lamé parameters $\tilde\lambda, \tilde\mu$ satisfies the strong convexity condition (1.5) and is such that
$$(\lambda - \tilde\lambda)(\mu - \tilde\mu) \ge 0, \qquad (\lambda - \tilde\lambda)^2 + (\mu - \tilde\mu)^2 \ne 0. \qquad (1.53)$$
Let $\tilde{\mathcal{S}}_D^\omega$ denote the single-layer potential defined by (1.37) with $\lambda, \mu$ replaced by $\tilde\lambda, \tilde\mu$. We also denote by $\partial u/\partial\tilde\nu$ the conormal derivative associated with $\tilde\lambda, \tilde\mu$. We now have the following solvability result, which can be viewed as a compact perturbation result of the case $\omega = 0$.

Theorem 1.7. Let $D$ be a Lipschitz bounded domain in $\mathbb{R}^d$. Suppose that $(\lambda - \tilde\lambda)(\mu - \tilde\mu) \ge 0$ and $0 < \tilde\lambda, \tilde\mu < +\infty$. Suppose that $\Im m\, \omega \ge 0$ and $\omega^2\rho$ is not a Dirichlet eigenvalue for $-\mathcal{L}^{\lambda,\mu}$ on $D$. For any given $(F, G) \in H^1(\partial D)^d \times L^2(\partial D)^d$, there exists a unique pair $(f, g) \in L^2(\partial D)^d \times L^2(\partial D)^d$ such that
$$\begin{cases} \tilde{\mathcal{S}}_D^\omega[f]\big|_- - \mathcal{S}_D^\omega[g]\big|_+ = F, \\[1mm] \dfrac{\partial}{\partial\tilde\nu} \tilde{\mathcal{S}}_D^\omega[f]\Big|_- - \dfrac{\partial}{\partial\nu} \mathcal{S}_D^\omega[g]\Big|_+ = G. \end{cases}$$
A positive constant $C$ exists such that
$$\|f\|_{L^2(\partial D)^d} + \|g\|_{L^2(\partial D)^d} \le C \big( \|F\|_{H^1(\partial D)^d} + \|G\|_{L^2(\partial D)^d} \big). \qquad (1.54)$$
Moreover, if $\omega = 0$ and $G \in L^2_\Psi(\partial D)$, then $g \in L^2_\Psi(\partial D)$.

Proof. For $\omega = 0$, the theorem is proved in [84]. Here, we only consider the case $\omega \ne 0$, which can be treated as a compact perturbation of the case $\omega = 0$. In fact, let us define the operators $T, T_0 : L^2(\partial D)^d \times L^2(\partial D)^d \to H^1(\partial D)^d \times L^2(\partial D)^d$ by
$$T(f, g) := \bigg( \tilde{\mathcal{S}}_D^\omega[f]\big|_- - \mathcal{S}_D^\omega[g]\big|_+,\ \frac{\partial}{\partial\tilde\nu} \tilde{\mathcal{S}}_D^\omega[f]\Big|_- - \frac{\partial}{\partial\nu} \mathcal{S}_D^\omega[g]\Big|_+ \bigg)$$
and
$$T_0(f, g) := \bigg( \tilde{\mathcal{S}}_D[f]\big|_- - \mathcal{S}_D[g]\big|_+,\ \frac{\partial}{\partial\tilde\nu} \tilde{\mathcal{S}}_D[f]\Big|_- - \frac{\partial}{\partial\nu} \mathcal{S}_D[g]\Big|_+ \bigg).$$
It is easily checked that $T - T_0$ is a compact operator. Since we know that $T_0$ is invertible, by the Fredholm alternative, it is enough to show that $T$ is injective. Suppose that $T(f, g) = 0$. Then the function $u$ given by
$$u(x) := \begin{cases} \mathcal{S}_D^\omega[g](x), & x \in \mathbb{R}^d \setminus \bar D, \\ \tilde{\mathcal{S}}_D^\omega[f](x), & x \in D, \end{cases}$$
is a solution to the transmission problem
$$\begin{cases} \mathcal{L}^{\lambda,\mu} u + \omega^2\rho\, u = 0 & \text{in } \mathbb{R}^d \setminus \bar D, \\ \mathcal{L}^{\tilde\lambda,\tilde\mu} u + \omega^2\rho\, u = 0 & \text{in } D, \\ u\big|_+ - u\big|_- = 0 & \text{on } \partial D, \\ \dfrac{\partial u}{\partial\nu}\Big|_+ - \dfrac{\partial u}{\partial\tilde\nu}\Big|_- = 0 & \text{on } \partial D, \end{cases}$$
satisfying the radiation condition. By the uniqueness of a solution to this transmission problem (see, for instance, [120, Chapter 3]), we have $u = 0$. From the assumption on $\omega$, we conclude by using Lemma 1.4 that $f = g = 0$. This completes the proof. $\square$


Later in this book, we will consider the following transmission problem:
$$\begin{cases} \mathcal{L}^{\lambda,\mu} u + \omega^2\rho\, u = 0 & \text{in } \Omega \setminus \bar D, \\ \mathcal{L}^{\tilde\lambda,\tilde\mu} u + \omega^2\rho\, u = 0 & \text{in } D, \\ \dfrac{\partial u}{\partial\nu} = g & \text{on } \partial\Omega, \\ u\big|_+ - u\big|_- = 0 & \text{on } \partial D, \\ \dfrac{\partial u}{\partial\nu}\Big|_+ - \dfrac{\partial u}{\partial\tilde\nu}\Big|_- = 0 & \text{on } \partial D, \end{cases} \qquad (1.55)$$
where $D$ and $\Omega$ are Lipschitz bounded domains in $\mathbb{R}^d$ with $\bar D \subset \Omega$. Note that the p- and s-waves cannot be decoupled because of the boundary and transmission conditions. For problem (1.55) the following representation formula holds.

Theorem 1.8 (Representation formula). Let $\Im m\, \omega \ge 0$. Suppose that $\omega^2\rho$ is not a Dirichlet eigenvalue for $-\mathcal{L}^{\lambda,\mu}$ on $D$. Let $u$ be a solution of (1.55) and $f := u|_{\partial\Omega}$. Define
$$H(x) := \mathcal{D}_\Omega^\omega[f](x) - \mathcal{S}_\Omega^\omega[g](x), \qquad x \in \mathbb{R}^d \setminus \partial\Omega. \qquad (1.56)$$
Then $u$ can be represented as
$$u(x) = \begin{cases} H(x) + \mathcal{S}_D^\omega[\psi](x), & x \in \Omega \setminus \bar D, \\ \tilde{\mathcal{S}}_D^\omega[\varphi](x), & x \in D, \end{cases} \qquad (1.57)$$
where the pair $(\varphi, \psi) \in L^2(\partial D)^d \times L^2(\partial D)^d$ is the unique solution of
$$\begin{cases} \tilde{\mathcal{S}}_D^\omega[\varphi] - \mathcal{S}_D^\omega[\psi] = H\big|_{\partial D}, \\[1mm] \dfrac{\partial}{\partial\tilde\nu} \tilde{\mathcal{S}}_D^\omega[\varphi] - \dfrac{\partial}{\partial\nu} \mathcal{S}_D^\omega[\psi] = \dfrac{\partial H}{\partial\nu}\bigg|_{\partial D}. \end{cases} \qquad (1.58)$$
Moreover, we have
$$H(x) + \mathcal{S}_D^\omega[\psi](x) = 0, \qquad x \in \mathbb{R}^d \setminus \bar\Omega. \qquad (1.59)$$

x ∈ Rd \ Ω.

Proof. We consider the following two-phase transmission problem:  S  Lλ,µ v + ω 2 ρv = 0 in (Ω \ D) (Rd \ Ω),     eµ λ,e 2   in D, L v + ω ρv = 0 ∂v − ∂v = g on ∂Ω,  v − v + = f ,  −  − ∂ν ∂ν +    ∂v ∂v  v − v = 0, − = 0 on ∂D, − + − e ∂ν ∂ν +

(1.59)

(1.60)

with the radiation condition. This problem has a unique solution. See [120, Chapter

19

LAYER POTENTIAL TECHNIQUES

e defined by 3]. It is easily checked that both v and v ( u(x), x ∈ Ω, v(x) = 0, x ∈ Rd \ Ω, and  H(x) + S ω [ψ](x), D e(x) = v  eω SD [ϕ](x),

  x ∈ R \ D ∪ ∂Ω , d

x ∈ D,

e, which concludes the proof of the theorem.  are solutions to (1.60). Hence v = v

We now consider the static case. For a given $g \in L^2_\Psi(\partial\Omega)$, let $u$ be the solution of the transmission problem
$$\begin{cases} \mathcal{L}^{\lambda,\mu} u = 0 & \text{in } \Omega \setminus \bar D, \\ \mathcal{L}^{\tilde\lambda,\tilde\mu} u = 0 & \text{in } D, \\ u\big|_+ = u\big|_- & \text{on } \partial D, \\ \dfrac{\partial u}{\partial\nu}\Big|_+ = \dfrac{\partial u}{\partial\tilde\nu}\Big|_- & \text{on } \partial D, \\ \dfrac{\partial u}{\partial\nu}\bigg|_{\partial\Omega} = g, \\ u\big|_{\partial\Omega} \in L^2_\Psi(\partial\Omega). \end{cases} \qquad (1.61)$$
The following representation theorem for the solution of the transmission problem (1.61) holds [28].

Theorem 1.9. Let $H$ be defined by
$$H(x) = \mathcal{D}_\Omega[u|_{\partial\Omega}](x) - \mathcal{S}_\Omega[g](x), \qquad x \in \mathbb{R}^d \setminus \partial\Omega. \qquad (1.62)$$
Then the solution $u$ of (1.61) can be represented by
$$u(x) = \begin{cases} H(x) + \mathcal{S}_D[\psi](x), & x \in \Omega \setminus \bar D, \\ \tilde{\mathcal{S}}_D[\varphi](x), & x \in D, \end{cases} \qquad (1.63)$$
where $(\varphi, \psi)$ is the unique solution in $L^2(\partial D)^d \times L^2_\Psi(\partial D)$ of
$$\begin{cases} \tilde{\mathcal{S}}_D[\varphi]\big|_- - \mathcal{S}_D[\psi]\big|_+ = H\big|_{\partial D} & \text{on } \partial D, \\[1mm] \dfrac{\partial}{\partial\tilde\nu} \tilde{\mathcal{S}}_D[\varphi]\Big|_- - \dfrac{\partial}{\partial\nu} \mathcal{S}_D[\psi]\Big|_+ = \dfrac{\partial H}{\partial\nu}\bigg|_{\partial D} & \text{on } \partial D. \end{cases} \qquad (1.64)$$
There exists a positive constant $C$ such that
$$\|\varphi\|_{L^2(\partial D)^d} + \|\psi\|_{L^2(\partial D)^d} \le C\, \|H\|_{H^1(\partial D)}. \qquad (1.65)$$
For any integer $n$, there exists a positive constant $C_n$ depending only on $c_0 := \mathrm{dist}(D, \partial\Omega)$ and $\lambda, \mu$ (not on $\tilde\lambda, \tilde\mu$) such that
$$\|H\|_{\mathcal{C}^n(\bar D)} \le C_n\, \|g\|_{L^2(\partial\Omega)}. \qquad (1.66)$$
Moreover,
$$H(x) + \mathcal{S}_D[\psi](x) = 0, \qquad x \in \mathbb{R}^d \setminus \bar\Omega. \qquad (1.67)$$

We will also need the following lemma.

Lemma 1.10. Let $\varphi \in \Psi$ and let $D$ be a bounded Lipschitz domain. If the pair $(f, g) \in L^2(\partial D) \times L^2_\Psi(\partial D)$ is the solution of
$$\begin{cases} \tilde{\mathcal{S}}_D[f]\big|_- - \mathcal{S}_D[g]\big|_+ = \varphi\big|_{\partial D}, \\[1mm] \dfrac{\partial}{\partial\tilde\nu} \tilde{\mathcal{S}}_D[f]\Big|_- - \dfrac{\partial}{\partial\nu} \mathcal{S}_D[g]\Big|_+ = 0, \end{cases} \qquad (1.68)$$
then $g = 0$.

Proof. Define $u$ by
$$u(x) := \begin{cases} \mathcal{S}_D[g](x), & x \in \mathbb{R}^d \setminus \bar D, \\ \tilde{\mathcal{S}}_D[f](x) - \varphi(x), & x \in D. \end{cases}$$
Since $g \in L^2_\Psi(\partial D)$, we have $\int_{\partial D} g \, d\sigma = 0$, and hence
$$\mathcal{S}_D[g](x) = O(|x|^{1-d}) \quad \text{as } |x| \to +\infty.$$
Therefore, $u$ is the unique solution of
$$\begin{cases} \mathcal{L}^{\lambda,\mu} u = 0 & \text{in } \mathbb{R}^d \setminus \bar D, \\ \mathcal{L}^{\tilde\lambda,\tilde\mu} u = 0 & \text{in } D, \\ u\big|_+ = u\big|_- & \text{on } \partial D, \\ \dfrac{\partial u}{\partial\nu}\Big|_+ = \dfrac{\partial u}{\partial\tilde\nu}\Big|_- & \text{on } \partial D, \\ u(x) = O(|x|^{1-d}) & \text{as } |x| \to +\infty. \end{cases} \qquad (1.69)$$
Using the fact that the trivial solution $u = 0$ is the unique solution to (1.69), we see that $\mathcal{S}_D[g](x) = 0$ for $x \in \mathbb{R}^d \setminus \bar D$. It then follows that $\mathcal{L}^{\lambda,\mu} \mathcal{S}_D[g](x) = 0$ for $x \in D$ and $\mathcal{S}_D[g](x) = 0$ for $x \in \partial D$. Thus, $\mathcal{S}_D[g](x) = 0$ for $x \in D$. Since by (1.42)
$$g = \frac{\partial(\mathcal{S}_D[g])}{\partial\nu}\bigg|_+ - \frac{\partial(\mathcal{S}_D[g])}{\partial\nu}\bigg|_- \quad \text{on } \partial D,$$
we have that $g = 0$. $\square$

1.5 HELMHOLTZ-KIRCHHOFF IDENTITIES

We now discuss the reciprocity property and derive the Helmholtz-Kirchhoff identities for elastic media. Some of the results presented in this section can be found in [171, 172] in the context of elastodynamic seismic interferometry. Indeed, the elastodynamic reciprocity theorems (Propositions 1.11 and 1.15) will be the key ingredients for understanding the relation between the cross correlations of signals emitted by uncorrelated noise sources and the Green function between the observation points.

Note first that the conormal derivative tensor $\partial\Gamma^\omega/\partial\nu$ means that for all constant vectors $p$,
$$\frac{\partial\Gamma^\omega}{\partial\nu}\, p := \frac{\partial[\Gamma^\omega p]}{\partial\nu}.$$
From now on, we set $\Gamma^\omega(x, y) := \Gamma^\omega(x - y)$ for $x \ne y$.

1.5.1 Reciprocity Property and Helmholtz-Kirchhoff Identities

An important property satisfied by the fundamental solution $\Gamma^\omega$ is the reciprocity property. If the medium is not homogeneous, then the following holds:
$$\Gamma^\omega(y, x) = \big[\Gamma^\omega(x, y)\big]^t, \qquad x \ne y. \qquad (1.70)$$
If the medium is homogeneous, then one can see from (1.31) and (1.35) that $\Gamma^\omega(x, y)$ is symmetric and
$$\Gamma^\omega(y, x) = \Gamma^\omega(x, y), \qquad x \ne y. \qquad (1.71)$$
Identity (1.70) states that the $n$th component of the displacement at $x$ due to a point source excitation at $y$ in the $m$th direction is identical to the $m$th component of the displacement at $y$ due to a point source excitation at $x$ in the $n$th direction.

The following result is from [172, Eq. (73)]. It is the first building block of the resolution analysis in Chapters 5 and 6. Moreover, elements of the proof are used in Proposition 1.12.

Proposition 1.11. Let $\Omega$ be a bounded Lipschitz domain. For all $x, z \in \Omega$, we have
$$\int_{\partial\Omega} \bigg[ \frac{\partial\Gamma^\omega(x, y)}{\partial\nu(y)}\, \overline{\Gamma^\omega(y, z)} - \Gamma^\omega(x, y)\, \frac{\partial\overline{\Gamma^\omega(y, z)}}{\partial\nu(y)} \bigg] d\sigma(y) = -2i\, \Im m\{\Gamma^\omega(x, z)\}. \qquad (1.72)$$

Proof. Our goal is to show that for all real constant vectors $p$ and $q$, we have
$$\int_{\partial\Omega} \bigg[ q \cdot \frac{\partial\Gamma^\omega(x, y)}{\partial\nu(y)}\, \overline{\Gamma^\omega(y, z)}\, p - q \cdot \Gamma^\omega(x, y)\, \frac{\partial\overline{\Gamma^\omega(y, z)}}{\partial\nu(y)}\, p \bigg] d\sigma(y) = -2i\, q \cdot \Im m\{\Gamma^\omega(x, z)\}\, p.$$
Taking the scalar products of the equations
$$(\mathcal{L}^{\lambda,\mu} + \omega^2\rho)\, \Gamma^\omega(y, x)\, q = \delta_x(y)\, q \quad \text{and} \quad (\mathcal{L}^{\lambda,\mu} + \omega^2\rho)\, \overline{\Gamma^\omega(y, z)}\, p = \delta_z(y)\, p$$

with $\overline{\Gamma^\omega(y, z)}\, p$ and $\Gamma^\omega(y, x)\, q$, respectively, subtracting the second result from the first, and integrating with respect to $y$ over $\Omega$, we obtain
$$\int_\Omega \Big[ \big(\overline{\Gamma^\omega(y, z)}\, p\big) \cdot \mathcal{L}^{\lambda,\mu}\big(\Gamma^\omega(y, x)\, q\big) - \mathcal{L}^{\lambda,\mu}\big(\overline{\Gamma^\omega(y, z)}\, p\big) \cdot \big(\Gamma^\omega(y, x)\, q\big) \Big] dy = -p \cdot \big(\Gamma^\omega(z, x)\, q\big) + q \cdot \big(\overline{\Gamma^\omega(x, z)}\, p\big) = -2i\, q \cdot \Im m\{\Gamma^\omega(x, z)\}\, p,$$
where we have used the reciprocity relation (1.70).

Using the form of the operator $\mathcal{L}^{\lambda,\mu}$, this gives
$$-2i\, q \cdot \Im m\{\Gamma^\omega(x, z)\}\, p = \int_\Omega \lambda \Big[ \big(\overline{\Gamma^\omega(y, z)}\, p\big) \cdot \{\nabla\nabla\cdot(\Gamma^\omega(y, x)\, q)\} - \big(\Gamma^\omega(y, x)\, q\big) \cdot \big\{\nabla\nabla\cdot\big(\overline{\Gamma^\omega(y, z)}\, p\big)\big\} \Big] dy + \int_\Omega \mu \Big[ \big(\overline{\Gamma^\omega(y, z)}\, p\big) \cdot \{(\Delta + \nabla\nabla\cdot)(\Gamma^\omega(y, x)\, q)\} - \big(\Gamma^\omega(y, x)\, q\big) \cdot \big\{(\Delta + \nabla\nabla\cdot)\big(\overline{\Gamma^\omega(y, z)}\, p\big)\big\} \Big] dy.$$
We recall that, for two functions $u, v : \mathbb{R}^d \to \mathbb{R}^d$, we have
$$\big(\Delta u + \nabla(\nabla\cdot u)\big)\cdot v = 2\nabla\cdot\big(\nabla^s u\, v\big) - 2\nabla^s u : \nabla^s v, \qquad \nabla(\nabla\cdot u)\cdot v = \nabla\cdot\big((\nabla\cdot u)\, v\big) - (\nabla\cdot u)(\nabla\cdot v),$$
where $\nabla u = (\partial_j u_i)_{i,j=1}^d$. Therefore, we find
$$-2i\, q \cdot \Im m\{\Gamma^\omega(x, z)\}\, p = \int_\Omega \lambda \Big[ \nabla\cdot\big\{[\nabla\cdot(\Gamma^\omega(y, x)\, q)]\,\overline{\Gamma^\omega(y, z)}\, p\big\} - \nabla\cdot\big\{[\nabla\cdot(\overline{\Gamma^\omega(y, z)}\, p)]\,\Gamma^\omega(y, x)\, q\big\} \Big] dy + \int_\Omega \mu \Big[ \nabla\cdot\big\{\big(\nabla(\Gamma^\omega(y, x)\, q) + \nabla(\Gamma^\omega(y, x)\, q)^t\big)\,\overline{\Gamma^\omega(y, z)}\, p\big\} - \nabla\cdot\big\{\big(\nabla(\overline{\Gamma^\omega(y, z)}\, p) + \nabla(\overline{\Gamma^\omega(y, z)}\, p)^t\big)\,\Gamma^\omega(y, x)\, q\big\} \Big] dy.$$
Now, we use the divergence theorem to get
$$-2i\, q \cdot \Im m\{\Gamma^\omega(x, z)\}\, p = \int_{\partial\Omega} \lambda \Big[ n\cdot\big\{[\nabla\cdot(\Gamma^\omega(y, x)\, q)]\,\overline{\Gamma^\omega(y, z)}\, p\big\} - n\cdot\big\{[\nabla\cdot(\overline{\Gamma^\omega(y, z)}\, p)]\,\Gamma^\omega(y, x)\, q\big\} \Big] d\sigma(y) + \int_{\partial\Omega} \mu \Big[ n\cdot\big\{\big(\nabla(\Gamma^\omega(y, x)\, q) + (\nabla(\Gamma^\omega(y, x)\, q))^t\big)\,\overline{\Gamma^\omega(y, z)}\, p\big\} - n\cdot\big\{\big(\nabla(\overline{\Gamma^\omega(y, z)}\, p) + (\nabla(\overline{\Gamma^\omega(y, z)}\, p))^t\big)\,\Gamma^\omega(y, x)\, q\big\} \Big] d\sigma(y).$$
Hence,
$$-2i\, q \cdot \Im m\{\Gamma^\omega(x, z)\}\, p = \int_{\partial\Omega} \Big[ \big(\overline{\Gamma^\omega(y, z)}\, p\big)\cdot\big\{\lambda\,[\nabla\cdot(\Gamma^\omega(y, x)\, q)]\, n + \mu\big(\nabla(\Gamma^\omega(y, x)\, q) + (\nabla(\Gamma^\omega(y, x)\, q))^t\big)\, n\big\} - \big(\Gamma^\omega(y, x)\, q\big)\cdot\big\{\lambda\,[\nabla\cdot(\overline{\Gamma^\omega(y, z)}\, p)]\, n + \mu\big(\nabla(\overline{\Gamma^\omega(y, z)}\, p) + (\nabla(\overline{\Gamma^\omega(y, z)}\, p))^t\big)\, n\big\} \Big] d\sigma(y),$$
and therefore, using the definition of the conormal derivative,
$$-2i\, q \cdot \Im m\{\Gamma^\omega(x, z)\}\, p = \int_{\partial\Omega} \bigg[ \big(\overline{\Gamma^\omega(y, z)}\, p\big)\cdot\frac{\partial\Gamma^\omega(y, x)\, q}{\partial\nu(y)} - \big(\Gamma^\omega(y, x)\, q\big)\cdot\frac{\partial\overline{\Gamma^\omega(y, z)}\, p}{\partial\nu(y)} \bigg] d\sigma(y) = \int_{\partial\Omega} \bigg[ q\cdot\frac{\partial\Gamma^\omega(x, y)}{\partial\nu(y)}\,\overline{\Gamma^\omega(y, z)}\, p - q\cdot\Gamma^\omega(x, y)\,\frac{\partial\overline{\Gamma^\omega(y, z)}}{\partial\nu(y)}\, p \bigg] d\sigma(y),$$
which is the desired result. Note that for establishing the last equality we have used the reciprocity relation (1.70). $\square$

The proof of Proposition 1.11 uses only the reciprocity relation and the divergence theorem. Consequently, Proposition 1.11 also holds in a heterogeneous medium, as shown in [172]. The following proposition from [15] is an important ingredient in the analysis of elasticity imaging.

Proposition 1.12. Let $\Omega$ be a bounded Lipschitz domain. For all $x, z \in \Omega$, we have
$$\int_{\partial\Omega} \bigg[ \frac{\partial\Gamma_s^\omega(x, y)}{\partial\nu(y)}\, \overline{\Gamma_p^\omega(y, z)} - \Gamma_s^\omega(x, y)\, \frac{\partial\overline{\Gamma_p^\omega(y, z)}}{\partial\nu(y)} \bigg] d\sigma(y) = 0. \qquad (1.73)$$

Proof. First, we note that $\Gamma_p^\omega(y, x)$ and $\Gamma_s^\omega(y, x)$ are solutions of
$$(\mathcal{L}^{\lambda,\mu} + \omega^2\rho)\, \Gamma_p^\omega = \mathcal{H}^p[\delta_0\, \mathbf{I}] \quad \text{and} \quad (\mathcal{L}^{\lambda,\mu} + \omega^2\rho)\, \Gamma_s^\omega = \mathcal{H}^s[\delta_0\, \mathbf{I}].$$
Here, $\mathcal{H}^p[\delta_0\, \mathbf{I}] = \nabla\nabla\cdot(\Gamma\, \mathbf{I})$, $\mathcal{H}^s[\delta_0\, \mathbf{I}] = \nabla\times\nabla\times(\Gamma\, \mathbf{I})$, $\Gamma(x) = -1/(4\pi|x|)$ for $d = 3$, and $\Gamma(x) = (1/2\pi)\ln|x|$ for $d = 2$ [155]. Then we proceed as in the proof of the previous proposition to find
$$\int_{\partial\Omega} \bigg[ \frac{\partial\Gamma_s^\omega(x, y)}{\partial\nu(y)}\, \overline{\Gamma_p^\omega(y, z)} - \Gamma_s^\omega(x, y)\, \frac{\partial\overline{\Gamma_p^\omega(y, z)}}{\partial\nu(y)} \bigg] d\sigma(y) = \int_\Omega \Big[ \mathcal{H}^s[\delta_x\, \mathbf{I}](y)\, \overline{\Gamma_p^\omega(y, z)} - \Gamma_s^\omega(x, y)\, \mathcal{H}^p[\delta_z\, \mathbf{I}](y) \Big] dy = \big[\mathcal{H}^s[\delta_0\, \mathbf{I}] * \overline{\Gamma_p^\omega(\cdot, z)}\big](x) - \big[\Gamma_s^\omega(x, \cdot) * \mathcal{H}^p[\delta_0\, \mathbf{I}]\big](z), \qquad (1.74)$$

where $*$ denotes the convolution product. Using the fact that $\Gamma_p^\omega = \mathcal{H}^p[\Gamma^\omega]$ and (1.16), we get
$$\mathcal{H}^s\big[\mathcal{H}^s[\delta_0\, \mathbf{I}] * \overline{\Gamma_p^\omega(\cdot, z)}\big] = 0 \quad \text{and} \quad \mathcal{H}^p\big[\mathcal{H}^s[\delta_0\, \mathbf{I}] * \overline{\Gamma_p^\omega(\cdot, z)}\big] = 0.$$
Therefore, we conclude that
$$\big[\mathcal{H}^s[\delta_0\, \mathbf{I}] * \overline{\Gamma_p^\omega(\cdot, z)}\big](x) = 0.$$
Similarly, we have
$$\big[\Gamma_s^\omega(x, \cdot) * \mathcal{H}^p[\delta_0\, \mathbf{I}]\big](z) = 0,$$
which gives the desired result. $\square$

Finally, the following proposition shows that the elastodynamic reciprocity theorem (Proposition 1.11) holds for each wave component in a homogeneous medium.

Proposition 1.13. Let $\Omega$ be a bounded Lipschitz domain. For all $x, z \in \Omega$ and $\alpha = p, s$,
$$\int_{\partial\Omega} \bigg[ \frac{\partial\Gamma_\alpha^\omega(x, y)}{\partial\nu(y)}\, \overline{\Gamma_\alpha^\omega(y, z)} - \Gamma_\alpha^\omega(x, y)\, \frac{\partial\overline{\Gamma_\alpha^\omega(y, z)}}{\partial\nu(y)} \bigg] d\sigma(y) = -2i\, \Im m\{\Gamma_\alpha^\omega(x, z)\}. \qquad (1.75)$$

Proof. As the two cases $\alpha = p$ and $\alpha = s$ are similar, we only provide a proof for $\alpha = p$. For $\alpha = p$, we have, as in the previous proof,
$$\int_{\partial\Omega} \bigg[ \frac{\partial\Gamma_p^\omega(x, y)}{\partial\nu(y)}\, \overline{\Gamma_p^\omega(y, z)} - \Gamma_p^\omega(x, y)\, \frac{\partial\overline{\Gamma_p^\omega(y, z)}}{\partial\nu(y)} \bigg] d\sigma(y) = \big[\mathcal{H}^p[\delta_0\, \mathbf{I}] * \overline{\Gamma_p^\omega(\cdot, z)}\big](x) - \big[\Gamma_p^\omega(x, \cdot) * \mathcal{H}^p[\delta_0\, \mathbf{I}]\big](z).$$
We can write
$$\big[\mathcal{H}^p[\delta_0\, \mathbf{I}] * \overline{\Gamma_p^\omega(\cdot, z)}\big](x) = \big[\mathcal{H}^p[\delta_0\, \mathbf{I}] * \overline{\Gamma_p^\omega(\cdot)}\big](x - z)$$
and
$$\big[\Gamma_p^\omega(x, \cdot) * \mathcal{H}^p[\delta_0\, \mathbf{I}]\big](z) = \big[\Gamma_p^\omega(\cdot) * \mathcal{H}^p[\delta_0\, \mathbf{I}]\big](z - x) = \big[\mathcal{H}^p[\delta_0\, \mathbf{I}] * \Gamma_p^\omega(\cdot)\big](x - z).$$
Therefore,
$$\int_{\partial\Omega} \bigg[ \frac{\partial\Gamma_p^\omega(x, y)}{\partial\nu(y)}\, \overline{\Gamma_p^\omega(y, z)} - \Gamma_p^\omega(x, y)\, \frac{\partial\overline{\Gamma_p^\omega(y, z)}}{\partial\nu(y)} \bigg] d\sigma(y) = -2i\, \Im m\{\Gamma_p^\omega(x, z)\},$$
where the last equality results from (1.16). $\square$

We emphasize that the proofs of Propositions 1.12 and 1.13 require the medium to be homogeneous (so that $\mathcal{H}^s$ and $\mathcal{H}^p$ commute with $\mathcal{L}^{\lambda,\mu}$), and we cannot expect these propositions to be true in a heterogeneous medium because of mode conversion between pressure and shear waves.

1.5.2 Approximation of the Conormal Derivative

In this subsection, we derive an approximation of the conormal derivative
$$\frac{\partial\Gamma^\omega(x, y)}{\partial\nu(y)}, \qquad y \in \partial\Omega,\ x \in \Omega.$$
In general this approximation involves the angles between the pressure and shear rays and the normal direction on $\partial\Omega$. The approximation becomes simple when $\Omega$ is a ball with very large radius, since in this case all rays are normal to $\partial\Omega$ (Proposition 1.14). It allows us to use a simplified version of the Helmholtz-Kirchhoff identities in order to analyze elasticity imaging.

Proposition 1.14. If $n(y) = \widehat{y - x}\ (:= (y - x)/|x - y|)$ and $|x - y| \gg 1$, then, for $\alpha = p, s$,
$$\frac{\partial\Gamma_\alpha^\omega(x, y)}{\partial\nu} = i\omega c_\alpha\, \Gamma_\alpha^\omega(x, y) + o\Big(\frac{1}{|x - y|^{(d-1)/2}}\Big). \qquad (1.76)$$

1 1 ω \ \ DΓω p (x, y) = 2 Γp (x, y)y − x ⊗ y − x + o ω2 cp



1 |x − y|



so we have Γω p (x, y)q

1 \ \ (x, y) (y − x · q) y −x+o = 2 Γω cp p



1 |x − y|



.

Therefore,  ∂Γω p (x, y)q = λ∇y · Γω p (x, y)q n(y) ∂ν   t ω +µ ∇y (Γω p (x, y)q) + (∇y (Γp (x, y)q)) n(y) h i \ y −x·q \ \ \ \ = iωΓω p (x, y) λ y − x · y − xn + 2µ(y − x ⊗ y − x)n 3 cp  1 +o |y − x|   h i \ 1 y −x·q ω \ \ = iωΓp (x, y) λn + 2µ(y − x · n)y − x + o c3p |y − x| h i   \ y−x·q ω \ \ \ = iωΓ (x, y) λ n − y − x + 2µ ( y − x · n) − 1 y − x p c3p   1 +iωcp Γω (x, y)q + o . p |y − x|

,

26

CHAPTER 1

\ In particular, when n = y − x, we have ∂Γω p (x, y)q = iωcp Γω p (x, y)q + o ∂ν



1 |y − x|



.

Shear components: As Γω s (x, y)

= =

 1 κ2s I + D Γω s (x, y) 2 ω     1 ω 1 \ \ Γ (x, y) I − y − x ⊗ y − x + o , c2s s |x − y|

we have Γω s (x, y)q Therefore,

     1 ω 1 \ . = 2 Γs (x, y) q − y\ −x·q y −x +o cs |x − y|

∂Γω s (x, y)q ∂ν

=

Now, remark that λ∇ · (Γω s (x, y)q) n =

=

ω λ∇y · (Γω s (x, y)q) n(y) + µ [∇y (Γs (x, y)q)  t +(∇y (Γω s (x, y)q)) n(y).

 i h  iω ω \ \ \ y − x · y − x n Γ y − x · q (x, y) q − c3s s   1 +o |x − y|   1 o , |x − y| λ

and   ω t µ ∇(Γω s (x, y)q) + ∇(Γs (x, y)q) n i h  iω \ \ \ \ \ = µ 3 Γω s (x, y) q ⊗ y − x + y − x ⊗ q − 2 y − x · q y − x ⊗ y − x n cs   1 +o |x − y| h i     iω ω \ \ \ \ \ = µ 3 Γs (x, y) y −x·n q+ q·n y −x−2 y −x·q y −x·n y −x cs   1 +o |x − y| h ih i   iω ω \ \ \ = µ 3 Γs (x, y) y −x·n −1 q− y −x·q y −x cs h i   iω \ \ \ +µ 3 Γω q·n− y −x·q y −x·n y − x iωcs Γω s (x, y) s (x, y) cs   1 +o . |x − y|



In particular, when n = \widehat{y - x}, we have
\[
\frac{\partial \Gamma^\omega_s(x,y)q}{\partial\nu} = i\omega c_s\,\Gamma^\omega_s(x,y)q + o\Big(\frac{1}{|y-x|}\Big).
\]

This completes the proof.



The following is a direct consequence of Propositions 1.12, 1.13, and 1.14.

Proposition 1.15 (Helmholtz-Kirchhoff Identities). Let Ω ⊂ R^d be a ball with radius R. Then, for all x, z ∈ Ω, we have
\[
\lim_{R\to+\infty}\int_{\partial\Omega}\Gamma^\omega_\alpha(x,y)\,\overline{\Gamma^\omega_\alpha(y,z)}\,d\sigma(y) = -\frac{1}{\omega c_\alpha}\,\Im m\,\{\Gamma^\omega_\alpha(x,z)\}, \qquad \alpha = p, s,
\tag{1.77}
\]
and
\[
\lim_{R\to+\infty}\int_{\partial\Omega}\Gamma^\omega_s(x,y)\,\overline{\Gamma^\omega_p(y,z)}\,d\sigma(y) = 0.
\tag{1.78}
\]

1.6 EIGENVALUE CHARACTERIZATIONS AND NEUMANN AND DIRICHLET FUNCTIONS

1.6.1 Eigenvalue Characterizations

In this subsection we characterize the eigenvalues of the elasticity operator on a bounded domain with Neumann or Dirichlet boundary conditions as the characteristic values of certain layer potentials which are meromorphic operator-valued functions. For doing so, we first recall the notions of characteristic values and root functions of analytic operator-valued functions. We refer, for instance, to [132] for the details. If B and B ′ are two Banach spaces, we denote by L(B, B ′ ) the space of bounded linear operators from B into B ′ . Let U(ω0 ) be the set of all operator-valued functions with values in L(B, B ′ ) which are holomorphic in some neighborhood of ω0 , except possibly at ω0 . The point ω0 is called a characteristic value of A(ω) ∈ U(ω0 ) if there exists a vector-valued function ϕ(ω) with values in B such that (i) ϕ(ω) is holomorphic at ω0 and ϕ(ω0 ) 6= 0, (ii) A(ω)[ϕ(ω)] is holomorphic at ω0 and vanishes at this point. Here, ϕ(ω) is called a root function of A(ω) associated with the characteristic value ω0 . The vector ϕ0 = ϕ(ω0 ) is called an eigenvector. The closure of the linear set of eigenvectors corresponding to ω0 is denoted by KerA(ω0 ). Let κ be an eigenvalue of −Lλ,µ in Ω with the Neumann condition on ∂Ω and let u denote an eigenvector associated with κ; i.e.,  Lλ,µ u + κu = 0 in Ω, (1.79)  ∂u = 0 on ∂Ω. ∂ν We note that since −Lλ,µ is elliptic, it has discrete eigenvalues of finite multiplicities. The following proposition from [120, Chapter 7] is of importance to us.



Proposition 1.16 (Neumann eigenvalue characterization). The necessary and sufficient condition for (1.79) to have a nontrivial solution is that κ is nonnegative and √κ coincides with one of the characteristic values of
\[
(1/2)\,I - K^\omega_\Omega : L^2(\partial\Omega)^d \to L^2(\partial\Omega)^d.
\]
If κ = ω_0^2 ρ is a Neumann eigenvalue of (1.79) with multiplicity m, then
\[
\big((1/2)\,I - K^{\omega_0}_\Omega\big)[\varphi_0] = 0
\]
has m linearly independent solutions. Moreover, for every eigenvalue κ > 0, √κ is a simple pole of the operator-valued function ω ↦ ((1/2) I − K^ω_Ω)^{-1}.

For the Dirichlet eigenvalue problem, the following characterization holds.

Proposition 1.17 (Dirichlet eigenvalue characterization). Consider the eigenvalue problem with Dirichlet boundary conditions
\[
\begin{cases} \mathcal{L}^{\lambda,\mu} v + \tau v = 0 & \text{in } \Omega,\\ v = 0 & \text{on } \partial\Omega. \end{cases}
\tag{1.80}
\]
The necessary and sufficient condition for (1.80) to have a nontrivial solution is that τ is nonnegative and √τ coincides with one of the characteristic values of
\[
(1/2)\,I + (K^\omega_\Omega)^* : L^2(\partial\Omega)^d \to L^2(\partial\Omega)^d.
\]
If τ = ω_0^2 ρ is a Dirichlet eigenvalue of (1.80) with multiplicity m, then
\[
\big((1/2)\,I + (K^{\omega_0}_\Omega)^*\big)[\psi_0] = 0
\]
has m linearly independent solutions. Moreover, for every eigenvalue τ > 0, √τ is a simple pole of the operator-valued function ω ↦ ((1/2) I + (K^ω_Ω)^*)^{-1}.

1.6.2 Neumann Function

Let 0 = κ_1 ≤ κ_2 ≤ … be the eigenvalues of −L^{λ,µ} in Ω with the Neumann condition on ∂Ω. Note that κ_1 = 0 is of multiplicity d(d + 1)/2, the eigenspace being Ψ. For ω√ρ ∉ {√κ_j}_{j≥1}, let N^ω_Ω(x, z) be the Neumann function for L^{λ,µ} + ω²ρ in Ω corresponding to a Dirac mass at z. That is, for z ∈ Ω, N^ω_Ω(·, z) is the matrix-valued solution to
\[
\begin{cases} (\mathcal{L}^{\lambda,\mu} + \omega^2\rho)\,N^\omega_\Omega(x,z) = -\delta_z(x)\,I, & x \in \Omega,\\[1mm] \dfrac{\partial N^\omega_\Omega}{\partial\nu}(x,z) = 0, & x \in \partial\Omega. \end{cases}
\tag{1.81}
\]
Then the following relation holds (see [26]):
\[
\Big(-\frac{1}{2}\,I + K^\omega_\Omega\Big)[N^\omega_\Omega(\cdot,z)](x) = \Gamma^\omega(x,z), \qquad x \in \partial\Omega,\ z \in \Omega.
\tag{1.82}
\]



Let (u_j)_{j≥1} denote the set of orthogonal eigenvectors associated with (κ_j)_{j≥1}, with ‖u_j‖_{L²(Ω)} = 1. Then we have the following spectral decomposition:
\[
N^\omega_\Omega(x,z) = \sum_{j=1}^{+\infty} \frac{u_j(x)\,u_j(z)^t}{\kappa_j - \omega^2\rho}.
\tag{1.83}
\]
Here we regard u_j as a column vector, and hence u_j(x) u_j(z)^t is a d × d matrix-valued function. We refer the reader to [158] for a proof of (1.83). Note that the eigenvectors (u_j)_{j≥1} have in general a nontrivial decomposition in terms of p- and s-waves.

Let N_Ω(x, z) be the Neumann function for the Lamé system on Ω, namely, for z ∈ Ω, N_Ω(·, z) is the solution to
\[
\begin{cases} \mathcal{L}^{\lambda,\mu}\,N_\Omega(x,z) = -\delta_z(x)\,I, & x \in \Omega,\\[1mm] \dfrac{\partial N_\Omega}{\partial\nu}(x,z) = -\dfrac{1}{|\partial\Omega|}\,I, & x \in \partial\Omega, \end{cases}
\tag{1.84}
\]
subject to the orthogonality condition
\[
\int_{\partial\Omega} N_\Omega(x,z)\,\psi(x)\,d\sigma(x) = 0 \qquad \forall\, \psi \in \Psi.
\]
We have
\[
N_\Omega(x,z) = \sum_{j=1}^{+\infty} \frac{1}{\kappa_j}\,u_j(x)\,u_j(z)^t, \qquad x \neq z.
\tag{1.85}
\]
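The modal sums (1.83) and (1.85) are convenient for numerical evaluation once a finite set of Neumann eigenpairs is available. The sketch below is only illustrative: it assumes a hypothetical array kappa of eigenvalues κ_j and a callable eigvec(j, x) returning u_j(x), e.g. produced by a finite element discretization; the truncation of the series is not discussed in the text and is introduced here purely for illustration.

```python
import numpy as np

def neumann_function_modal(x, z, omega, rho, kappa, eigvec, n_modes):
    """Truncated modal sum (1.83): N_Omega^omega(x,z) ~ sum_j u_j(x) u_j(z)^T / (kappa_j - omega^2 rho).

    kappa   : array of Neumann eigenvalues kappa_j (hypothetical, e.g. from a FEM eigensolver)
    eigvec  : callable (j, point) -> d-vector u_j(point), normalized in L^2(Omega)
    n_modes : number of retained modes (truncation, for illustration only)
    """
    d = np.size(x)
    N = np.zeros((d, d), dtype=complex)
    for j in range(n_modes):
        uj_x = np.asarray(eigvec(j, x), dtype=complex)
        uj_z = np.asarray(eigvec(j, z), dtype=complex)
        N += np.outer(uj_x, uj_z) / (kappa[j] - omega**2 * rho)
    return N

def static_neumann_function_modal(x, z, kappa, eigvec, n_modes, tol=1e-12):
    """Truncated modal sum (1.85) for the static Neumann function; in this numerical
    sketch the modes with kappa_j = 0 (the space Psi) are simply skipped."""
    d = np.size(x)
    N = np.zeros((d, d))
    for j in range(n_modes):
        if abs(kappa[j]) < tol:
            continue  # zero eigenvalue: eigenspace Psi
        N += np.outer(eigvec(j, x), eigvec(j, z)) / kappa[j]
    return N
```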

Moreover, the following identity holds:
\[
\Big(-\frac{1}{2}\,I + K_\Omega\Big)^{-1}[\Gamma(\cdot - z)](x) = N_\Omega(x,z), \qquad x \in \partial\Omega,\ z \in \Omega,
\tag{1.86}
\]

modulo a function in Ψ. See [28, 32, 120] for properties of the Neumann function and a proof of (1.86). Using N_Ω one can derive a representation formula for the solution to (1.61) in terms of the background solution, i.e., the solution to
\[
\begin{cases} \mathcal{L}^{\lambda,\mu} u_0 = 0 & \text{in } \Omega,\\[1mm] \dfrac{\partial u_0}{\partial\nu}\Big|_{\partial\Omega} = g,\\[1mm] u_0|_{\partial\Omega} \in L^2_\Psi(\partial\Omega). \end{cases}
\tag{1.87}
\]

We need to fix some notation. Let D ⋐ Ω ⊂ R^d be two bounded Lipschitz domains. Let us define, for f ∈ L^2_Ψ(∂D),
\[
N_D[f](x) := \int_{\partial D} N_\Omega(x,y)\,f(y)\,d\sigma(y), \qquad x \in \partial\Omega.
\tag{1.88}
\]
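In computations, the operator N_D in (1.88) is typically applied through a boundary quadrature. The snippet below is a minimal sketch under the assumption that quadrature nodes y_k ∈ ∂D with weights w_k, a routine evaluating N_Ω(x, y), and the density f are available from elsewhere; all of these names are hypothetical placeholders, not objects defined in the text.

```python
import numpy as np

def apply_ND(x, nodes, weights, N_Omega, f):
    """Quadrature approximation of (1.88): N_D[f](x) ~ sum_k w_k N_Omega(x, y_k) f(y_k).

    nodes   : (K, d) array of points y_k on the boundary of D (hypothetical)
    weights : (K,) surface quadrature weights
    N_Omega : callable (x, y) -> (d, d) matrix, the Neumann function of Omega
    f       : callable y -> d-vector, a density in L^2_Psi(dD)
    """
    d = np.size(x)
    out = np.zeros(d)
    for yk, wk in zip(nodes, weights):
        out += wk * (np.asarray(N_Omega(x, yk)) @ np.asarray(f(yk)))
    return out
```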

Theorem 1.18. Let u be the solution to (1.61) and u0 the background solution.



Then the following holds:
\[
u(x) = u_0(x) - N_D[\psi](x), \qquad x \in \partial\Omega,
\tag{1.89}
\]

where ψ is defined by (1.64). Proof. By substituting (1.63) into the equation (1.62), we obtain   H(x) = −SΩ [g](x) + DΩ H|∂Ω + SD [ψ] ∂Ω (x), x ∈ Ω.

By using (1.43), we see that

1 1 ( I − KΩ )[H|∂Ω ] = −SΩ [g] + ( I + KΩ )[SD [ψ] ∂Ω ] on ∂Ω. 2 2 ∂Ω

(1.90)

Since u0 (x) = −SΩ [g](x) + DΩ [u0 |∂Ω ](x) for all x ∈ Ω, we have 1 ( I − KΩ )[u0 ]|∂Ω = −SΩ [g] ∂Ω . 2

(1.91)

By Theorem 1.7 and (1.86), we have

1 (− I + KΩ )[(ND [ψ])|∂Ω )(x) = SD [ψ](x), 2

(1.92)

x ∈ ∂Ω,

since ψ ∈ L2Ψ (∂D). We see from (1.90), (1.91), and (1.92) that

h i 1 1 ( I − KΩ ) H|∂Ω − u0 |∂Ω + ( I + KΩ )[(ND [ψ])|∂Ω ] = 0 2 2

on ∂Ω,

and hence, by Lemma 1.3, we obtain that

1 H|∂Ω − u0 |∂Ω + ( I + KΩ )[(ND [ψ])|∂Ω ] ∈ Ψ. 2 Note that

1 ( I + KΩ )[(ND [ψ])|∂Ω ] = (ND [ψ])|∂Ω + (SD [ψ])|∂Ω , 2 which follows from (1.86). Thus, (1.63) gives u|∂Ω = u0 |∂Ω − (ND [ψ])|∂Ω

modulo Ψ.

(1.93)

Since all the functions in (1.93) belong to L2Ψ (∂Ω), we have (1.89). This completes the proof.  In Chapter 3 we will be dealing with elastic inclusions of the form D = ǫB + z, where B is a bounded Lipschitz domain in Rd and ǫ is a small positive parameter. For the purpose of use in Chapter 3, we consider the asymptotic expansion of NΩ (x, z + ǫy) for x ∈ ∂Ω, z ∈ Ω and y ∈ ∂B as ǫ → 0. Recall that if j = (j1 , . . . , jd ) is a multi-index (an ordered d-tuple of nonnegative integers), then we write j! = j1 ! . . . jd !, y j = y1j1 . . . ydjd , |j| = j1 + . . . + jd , and ∂ j = ∂ |j| /∂y1j1 . . . ∂ydjd .



Since
\[
\Gamma(x - \epsilon y) = \sum_{|\beta|=0}^{+\infty} \frac{1}{\beta!}\,\epsilon^{|\beta|}\,\partial^\beta(\Gamma(x))\,y^\beta,
\]
we get from (1.86) that, modulo Ψ,
\[
\Big(-\frac{1}{2}\,I + K_\Omega\Big)\big[N_\Omega(\cdot, \epsilon y + z)\big](x)
= \sum_{|\beta|=0}^{+\infty} \frac{1}{\beta!}\,\epsilon^{|\beta|}\,\partial^\beta(\Gamma(x - z))\,y^\beta
= \Big(-\frac{1}{2}\,I + K_\Omega\Big)\Big[\sum_{|\beta|=0}^{+\infty} \frac{1}{\beta!}\,\epsilon^{|\beta|}\,\partial_z^\beta N_\Omega(\cdot, z)\,y^\beta\Big](x).
\]
Since N_Ω(·, w) ∈ L^2_Ψ(∂Ω) for all w ∈ Ω, we have the following asymptotic expansion of the Neumann function.

Lemma 1.19. For x ∈ ∂Ω, z ∈ Ω, y ∈ ∂B, and ǫ → 0,
\[
N_\Omega(x, \epsilon y + z) = \sum_{|\beta|=0}^{+\infty} \frac{1}{\beta!}\,\epsilon^{|\beta|}\,\partial_z^\beta N_\Omega(x, z)\,y^\beta.
\tag{1.94}
\]
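For numerical use the series (1.94) is truncated at a finite order, and the only bookkeeping required is multi-index arithmetic. The sketch below assumes a hypothetical callable dN(beta, x, z) returning ∂_z^β N_Ω(x, z) (for instance from analytic differentiation or finite differences); it is not part of the text and only illustrates the truncated expansion.

```python
import numpy as np
from itertools import product
from math import factorial

def multi_indices(d, max_order):
    """All multi-indices beta in N^d with |beta| <= max_order."""
    return [b for b in product(range(max_order + 1), repeat=d) if sum(b) <= max_order]

def neumann_taylor(x, z, y, eps, dN, d=2, max_order=2):
    """Truncation of (1.94): sum over |beta| <= max_order of
    eps^|beta| / beta! * dN(beta, x, z) * y^beta, with dN a (d, d)-matrix-valued callable."""
    N = np.zeros((d, d))
    for beta in multi_indices(d, max_order):
        coeff = eps ** sum(beta) / np.prod([factorial(b) for b in beta])
        y_pow = np.prod([y[i] ** beta[i] for i in range(d)])
        N += coeff * y_pow * np.asarray(dN(beta, x, z))
    return N
```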

A proof of Lemma 1.19 can be found in [28].

1.6.3 Dirichlet Function

Now we turn to the properties of the Dirichlet function. Let 0 ≤ τ_1 ≤ τ_2 ≤ … be the eigenvalues of −L^{λ,µ} in Ω with the Dirichlet condition on ∂Ω. For ω√ρ ∉ {√τ_j}_{j≥1}, let G^ω_Ω(x, z) be the Dirichlet function for L^{λ,µ} + ω²ρ in Ω corresponding to a Dirac mass at z. That is, for z ∈ Ω, G^ω_Ω(·, z) is the matrix-valued solution to
\[
\begin{cases} (\mathcal{L}^{\lambda,\mu} + \omega^2\rho)\,G^\omega_\Omega(x,z) = -\delta_z(x)\,I, & x \in \Omega,\\[1mm] G^\omega_\Omega(x,z) = 0, & x \in \partial\Omega. \end{cases}
\tag{1.95}
\]

Then for any x ∈ ∂Ω and z ∈ Ω we can prove, in the same way as (1.82), that
\[
\Big(\frac{1}{2}\,I + (K^\omega_\Omega)^*\Big)\Big[\frac{\partial G^\omega_\Omega}{\partial\nu}(\cdot,z)\Big](x) = -\frac{\partial \Gamma^\omega}{\partial\nu}(x,z).
\tag{1.96}
\]

Moreover, we mention the following important properties of G^ω_Ω:

(i) Let (v_j)_{j≥1} denote the set of orthogonal eigenvectors associated with (τ_j)_{j≥1}, with ‖v_j‖_{L²(Ω)} = 1. Then we have the following spectral decomposition:
\[
G^\omega_\Omega(x,z) = \sum_{j=1}^{+\infty} \frac{v_j(x)\,v_j(z)^t}{\tau_j - \omega^2\rho}.
\tag{1.97}
\]

(ii) For x ∈ ∂Ω, z ∈ Ω, y ∈ ∂B, and ǫ → 0,
\[
G^\omega_\Omega(x, \epsilon y + z) = \sum_{|\beta|=0}^{+\infty} \frac{1}{\beta!}\,\epsilon^{|\beta|}\,\partial_z^\beta G^\omega_\Omega(x, z)\,y^\beta.
\tag{1.98}
\]

1.7 A REGULARITY RESULT

We state a generalization of Meyers' theorem concerning the regularity of solutions to systems with bounded coefficients. This result will be useful in Chapter 11. For p > 1, define H^{1,p}(Ω) by
\[
H^{1,p}(\Omega) := \big\{ v \in L^p(\Omega),\ \nabla v \in L^p(\Omega)^d \big\}
\]
and let H^{−1,q}(Ω) with q = p/(p − 1) be its dual. Introduce
\[
H^{1,p}_{\mathrm{loc}}(\Omega) := \big\{ v \in H^{1,p}(K)\ \ \forall\, K \Subset \Omega \big\}.
\]

Theorem 1.20. Let C ∈ L^∞(Ω) be a strongly convex tensor, i.e., there exists a positive constant C such that CA : A ≥ C‖A‖² for every d × d symmetric matrix A = (a_{ij}), where ‖A‖ = \sqrt{\sum_{i,j} a_{ij}^2}. There exists η > 0 such that if u ∈ H¹(Ω)^d is a solution to
\[
\nabla\cdot(C\nabla^s u) = f \quad \text{in } \Omega,
\]
where f ∈ H^{−1,2+η}(Ω)^d, then u ∈ H^{1,2+η}_{\mathrm{loc}}(Ω)^d and, for any two disks B_ρ ⊂ B_{2ρ} ⋐ Ω,
\[
\|\nabla u\|_{L^{2+\eta}(B_\rho)} \le C'\Big(\|f\|_{H^{-1,2+\eta}(B_{2\rho})} + \rho^{\frac{2}{2+\eta}}\,\|\nabla u\|_{L^2(B_{2\rho})}\Big)
\]
for some positive constant C'. The above theorem was proved in [60, 129].

CONCLUDING REMARKS

In this chapter, we have briefly reviewed layer potential techniques associated with the static and time-harmonic elasticity equations. Our main concern has been to represent the solutions of these equations and to establish Helmholtz-Kirchhoff identities in the time-harmonic case. In the next chapters, these results will be used to establish an asymptotic theory for the perturbations in the displacement measurements due to inclusions or cracks. Further, these results will be indispensable for analyzing the stability and resolution properties of the imaging algorithms which we will design for locating the defects and characterizing them.

Chapter Two

Elasticity Equations with High Contrast Parameters

The purpose of this chapter is to prove basic theorems related to the elasticity equations with high contrast coefficients, namely, to show that when the elastic material parameters tend to their extreme values, the corresponding solutions converge strongly in appropriate norms. These results will be used to show in the next chapter that the asymptotic expansion of the solution in the presence of small inclusions holds uniformly with respect to the material parameters. Moreover, they will be the main ingredients for solving the elasticity imaging problem from internal displacement measurements in soft tissues in Chapter 10. We consider a linear isotropic elastic body containing an inclusion with different elastic parameters. When the compression and shear moduli of the inclusion are finite, the solution satisfies the transmission conditions in (1.55) across the interface (the boundary of the inclusion). If the shear modulus of the inclusion is infinite, then the interface transmission condition is replaced by a null condition on the displacement. If the compression and shear moduli are zero, then it is replaced by the zero-traction condition on the boundary of the inclusion. The objective of this chapter is to prove the convergence, in an appropriate Sobolev space, of the solution to the Lamé system as the compression and shear moduli tend to their extreme values (zero or infinity). Throughout this book, by a hole we mean an inclusion whose Lamé parameters are zero, whereas by a hard inclusion we mean one whose shear modulus is infinite. The methods of this chapter are based on layer potential techniques. The solutions to the Lamé system can be expressed as a single layer potential on the boundary of the inclusion. We show that certain norms of these potentials are bounded uniformly with respect to the Lamé parameters, and the main results follow from this fact. This chapter is organized as follows. In Section 2.1, we set up the problems for finite and extreme moduli. In Section 2.2 we begin by studying the incompressible limit of the elasticity equations. We provide a complete asymptotic expansion with respect to the compressional modulus, first proved in [20]. In Section 2.3 we turn to the limiting cases of holes and hard inclusions and review some basic results. The results stated in this section are from [29]. In Section 2.4, we prove that the energy functional is uniformly bounded. As a consequence, we obtain that the potentials on the boundary of the inclusion are also uniformly bounded. In Section 2.5 we show that these potentials converge as the bulk and shear moduli tend to their extreme values and prove the second main result of this chapter. In Section 2.6, we show that similar boundedness and convergence results hold for the boundary value problem.

2.1 PROBLEM SETTING

Let D be an elastic inclusion which is a bounded domain in Rd (d = 2, 3) with e µ Lipschitz boundary. Let (λ, e) be the pair of Lamé parameters of D while (λ, µ) is that of the background Rd \ D. Then the elasticity tensors for the inclusion and e = (C eijkℓ ) and C = (Cijkℓ ) where the background can be written respectively as C e Cijkℓ and Cijkℓ are defined according to (1.8) and the elasticity tensor for Rd in the presence of the inclusion D is given by e +χ d C χD C R \D

(2.1)

with χD being the indicator function of D. We assume that the strong convexity conditions (1.5) and (1.53) hold for the e µ pairs (λ, µ) and (λ, e) respectively, that are in turn required to have the representation of the displacement vectors in terms of the single layer potential in what e + 2e follows. We also denote by βe the bulk modulus given by βe = λ µ/d. We consider the problem of the Lamé system of the linear elasticity: For a given function h satisfying ∇ · C∇s h = 0 in Rd , ( e + χ d C)∇s u = 0 in Rd , ∇ · (χD C R \D (2.2) u(x) − h(x) = O(|x|1−d ) as |x| → ∞, where ∇s u is the symmetric gradient (or the strain tensor). Equation (2.2) is equivalent to the following problem:   Lλ,µ u = 0 in Rd \ D,    e   Lλ,eµ u = 0 in D,    u|− = u|+ on ∂D, (2.3)  ∂u ∂u   = on ∂D,   e − ∂ν ∂ν +    u(x) − h(x) = O(|x|1−d ) as |x| → ∞.

We study three limiting cases of (2.3): when λ̃ → ∞ while µ̃ is fixed (incompressible limit), when both β̃ and µ̃ tend to 0 (limiting case of holes), and when µ̃ → ∞ while λ̃ is fixed (limiting case of hard inclusions). As said earlier, by a hole we mean that β̃ = µ̃ = 0, whereas by a hard inclusion we mean that µ̃ = ∞.

2.2 INCOMPRESSIBLE LIMIT

In this section we show that if λ̃ → ∞ and µ̃ is fixed, then (2.3) approaches the Stokes system. Roughly speaking, if λ̃ → ∞, then ∇ · u approaches 0 while λ̃∇ · u stays bounded, so (2.3) approaches the Stokes problem. The following result from [20] holds.

Theorem 2.1. Suppose that λ and λ̃ go to +∞ with λ/λ̃ = O(1). Suppose that



∇ · h = 0 in Rd . Let (u∞ , p) be the solution to

 µ∆u∞ + ∇p = 0 in Rd \ D,       e∆u∞ + ∇p = 0 in D, µ      u∞ − = u∞ + on ∂D,     ∂u∞ ∂u∞ (pn + µ e ) = (pn + µ ) on ∂D,  ∂n − ∂n +      ∇ · u∞ = 0 in Rd ,      1−d  ) as |x| → +∞, u∞ (x) − h(x) = O(|x|    p(x) = O(|x|−d ) as |x| → +∞,

(2.4)

where ∂u∞ /∂n|± = ∇s u∞ |± n. There exists a positive constant C independent of e such that the following error estimate holds for λ and λ e large enough: λ and λ



u − u∞

W (Rd )

where W (Rd ) is defined by (1.2).

C

∂h ≤ √ ,

λ ∂n H −1/2 (∂D)

(2.5)

Equations (2.4) are the linearized equations of incompressible fluids or the Stokes system. Existence and uniqueness of a solution to (2.4) can be proved using layer potential techniques; see [20]. We refer the reader to [86, 87] for a unique continuation and regularity results for (2.4). A complete asymptotic expansion can be constructed. For doing so, let uj for j ≥ 1 be defined by   µ∆uj + ∇pj + µ∇pj−1 = 0 in Rd \ D,      e∆uj + ∇pj + µ e∇pj−1 = 0 in D,  µ   !j   e  λ   uj − = uj + on ∂D,    λ   !j ! ! (2.6) e λ ∂uj ∂uj   p | n + µ − p | n + µ e = 0 on ∂D,  j + j −  λ ∂n + ∂n −       ∇ · uj = pj−1 in Rd ,      uj (x) = O(|x|1−d ) as |x| → +∞,     pj (x) = O(|x|−d ) as |x| → +∞. Here, p0 = p given by (2.4). Equations (2.6) are nonhomogeneous. In [20], the following theorem is proved. e such Theorem 2.2. There exists a positive constant C independent of λ and λ e large enough and for all integers that the following error estimate holds for λ and λ J: J

 1 X 1 1 1 

∂h ( j χRd \D + χD ) uj ≤ C .

u − u∞ −

1 + 1 ej eJ+ 2 λ ∂n H −1/2 (∂D) W (Rd ) λJ+ 2 λ λ j=1 (2.7)

2.3 LIMITING CASES OF HOLES AND HARD INCLUSIONS

From now on, we assume that λ̃ is bounded. If β̃ = µ̃ = 0 (or λ̃ = µ̃ = 0), one can easily see what the limiting problem should be. Since ∂u/∂ν̃|_− = 0, we have from the fourth line of (2.3) that ∂u/∂ν|_+ = 0. So the elasticity equation in this case is
\[
\begin{cases} \mathcal{L}^{\lambda,\mu} u = 0 & \text{in } \mathbb{R}^d \setminus D,\\[1mm] \dfrac{\partial u}{\partial\nu}\Big|_{+} = 0 & \text{on } \partial D,\\[1mm] u(x) - h(x) = O(|x|^{1-d}) & \text{as } |x| \to \infty. \end{cases}
\tag{2.8}
\]

e remains bounded, we need to To describe the equation when µ e = ∞ while λ introduce the following functional space: Let Ψ be the d(d+1)/2 dimensional vector space defined by (1.44). We emphasize that Ψ is the space of solutions to Lλ,µ u = 0 in D and ∂u/∂ν = 0 on ∂D for any (λ, µ). Let ψj , j = 1, . . . , d(d + 1)/2, be a basis of Ψ. If µ e → ∞, then from the second and fourth equations in (2.3) we have ∆u + ∇∇ · u = 0

in D,

∇s u n = 0

on ∂D,

which is another elasticity equation (with µ = 1 and λ = 0) with zero traction on the boundary. Thus there are constants αj such that d(d+1)/2

u(x) =

X j=1

αj ψj (x),

x ∈ D.

So, the elasticity problem when µ e = ∞ is  λ,µ d  L u = 0 in R \ D,   d(d+1)/2  X u= αj ψj on ∂D,   j=1    u(x) − h(x) = O(|x|1−d ) as |x| → ∞.

(2.9)

We need extra conditions to determine the coefficients αj . Note that the solution u to (2.3) satisfies Z ∂u d(d + 1) . (2.10) · ψl dσ = 0, l = 1, . . . , ∂ν 2 ∂D +

So, by taking a (formal) limit, one can expect that the solution u to (2.9) should satisfy (2.10), and the constants αj in (2.9) are determined by this orthogonality condition. Let uj for j = 1, . . . , d(d+1) , be defined by 2  λ,µ d  L uj = 0 in R \ D, uj = ψj on ∂D,   uj (x) = O(|x|1−d ) as |x| → ∞.

The following lemma holds.

(2.11)


Lemma 2.3. The matrix A = (ajl ) given by Z d(d + 1) ∂uj · ψl dσ, j, l = 1, . . . , , ajl := 2 ∂D ∂ν +


(2.12)

is invertible.

Proof. The solution uj to (2.11) has the following representation ∂uj uj = SD [ ] − DD [ψj ]. ∂ν +

Taking the limit from outside D and using the jump relation (1.43) yields   ∂uj 1 SD [ ] = I + K on ∂D, D [ψj ] = ψj ∂ν + 2

since, as stated in Lemma 1.3, KD [ψj ] = (1/2)ψj . Hence, Lemma 1.6 shows that Z Z ∂uj −1 SD [ψj ] · ψl dσ. · ψl dσ = ∂D ∂ν + ∂D

Therefore, proving the invertibility of A is equivalent to proving the invertibility of Z  −1 SD [ψj ] · ψl dσ . ∂D

jl

Suppose that α = (α1 , . . . , αd(d+1)/2 )t is such that Aα = 0. Then, X Z −1 αj SD [ψj ] · ψl dσ = 0 ∀ l, ∂D

j

and consequently,

Z

∂D

−1 SD [ψ] · ψ dσ = 0,

P where ψ = j αj ψj . Lemma 1.6 together with the linear independence of the ψj shows that αj = 0 for all j. 

Using Lemma 2.3 it is now clear that the coefficients αj in (2.9) can be uniquely determined by the orthogonality relations (2.10). −1/2

Next, let HΨ (∂D) be the space defined in (1.46). From Theorem 1.9, it follows that the solution u to (2.3) is represented as ( h(x) + SD [ϕ](x), x ∈ Rd \ D, (2.13) u(x) = SeD [ψ](x), x ∈ D,



−1/2

where the pair (ϕ, ψ) ∈ HΨ (∂D) × H −1/2 (∂D) is the solution to   SeD [ψ](x) − SD [ϕ](x) = h(x) for x ∈ ∂D. ∂ Se [ψ] ∂SD [ϕ] ∂h   D (x) (x) − (x) = e − ∂ν ∂ν ∂ν +

(2.14)

Even if β̃ = µ̃ = 0 or µ̃ = ∞, we have a similar representation:
\[
u_{\tilde\beta}(x) = h(x) + S_D[\varphi_{\tilde\beta}](x), \qquad x \in \mathbb{R}^d \setminus D,\ \ \tilde\beta = 0, \infty.
\tag{2.15}
\]
When β̃ = µ̃ = 0, ϕ_0 satisfies
\[
\Big(\frac{1}{2}\,I + K_D^*\Big)[\varphi_0] = -\frac{\partial h}{\partial\nu} \quad \text{on } \partial D,
\tag{2.16}
\]
and if µ̃ = ∞, then ϕ_∞ satisfies
\[
\Big(-\frac{1}{2}\,I + K_D^*\Big)[\varphi_\infty] = -\frac{\partial h}{\partial\nu} \quad \text{on } \partial D.
\tag{2.17}
\]
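Equations (2.16) and (2.17) are second-kind integral equations on ∂D. Once K*_D and ∂h/∂ν have been discretized by some boundary-element scheme (not specified in the text), the densities ϕ_0 and ϕ_∞ are obtained from small dense linear solves. The sketch below assumes hypothetical inputs Kstar (the discretized K*_D acting on nodal values) and dh_dnu (nodal values of ∂h/∂ν); it only illustrates the structure of (2.16)–(2.17).

```python
import numpy as np

def solve_limit_densities(Kstar, dh_dnu):
    """Discrete counterparts of (2.16) and (2.17):
    ( 1/2 I + K*_D) phi_0   = -dh/dnu   (holes, beta~ = mu~ = 0)
    (-1/2 I + K*_D) phi_inf = -dh/dnu   (hard inclusion, mu~ = infinity)

    Kstar  : (n, n) matrix, discretization of K*_D (hypothetical, from a BEM code)
    dh_dnu : (n,) vector of nodal values of the conormal derivative of h on dD
    """
    n = Kstar.shape[0]
    I = np.eye(n)
    phi_0 = np.linalg.solve(0.5 * I + Kstar, -dh_dnu)
    phi_inf = np.linalg.solve(-0.5 * I + Kstar, -dh_dnu)
    return phi_0, phi_inf
```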

We emphasize that ϕ_β̃ ∈ H_Ψ^{−1/2}(∂D). See, for example, [28] for details of the above-mentioned representation of the solutions. As shown in Chapter 1, a similar representation formula holds for the solutions to the boundary value problems. Let Ω be a bounded Lipschitz domain in R^d containing D, which is also Lipschitz. Let u be the solution to
\[
\nabla\cdot\big(\chi_D \tilde{C} + \chi_{\mathbb{R}^d\setminus D}\,C\big)\nabla^s u = 0 \quad \text{in } \Omega,
\tag{2.18}
\]

with either the Dirichlet boundary condition u = f or the Neumann boundary condition ∂u ∂ν = g on ∂Ω. Let Z

∂u h(x) := − Γ(x − y) (y) dσ(y) + ∂ν − ∂Ω

Z

∂Ω

∂Γ(x − y) u(y) dσ(y), ∂ν(y)

x ∈ Ω.

(2.19)

Then u is represented as

u(x) =

(

h(x) + SD [ϕ](x), x ∈ Ω \ D, SeD [ψ](x), x ∈ D,

(2.20)

where the pair (ϕ, ψ) ∈ H_Ψ^{−1/2}(∂D) × H^{−1/2}(∂D) is the solution to (2.14).

2.4 ENERGY ESTIMATES

Let 1 J[u] := 2

Z

e s u : ∇s u dx + 1 C∇ 2 D

Z

Rd \D

C∇s (u − h) : ∇s (u − h) dx.

(2.21)

For the solution u to (2.3), we prove that J[u] is bounded regardless of βe and µ e. More precisely we prove the following lemma.



e ≤ Λ for some constant Λ, then Lemma 2.4. Let u be the solution to (2.3). If λ e such there is a constant C depending on Λ, but otherwise independent of µ e and β, that

2

∂h

. (2.22) J[u] ≤ C

∂ν −1/2 H (∂D) As a consequence of Lemma 2.4, we have the following result.

e ≤ Λ for some constant Lemma 2.5. Let ϕ be the potential defined in (2.13). If λ Λ, then there is a constant C depending on Λ, but otherwise independent of µ e and e such that β,



∂h

kϕkH −1/2 (∂D) ≤ C . (2.23)

∂ν −1/2 H (∂D) Proof. Let v := u − h. Then (2.13) yields v(x) = SD [ϕ](x) for x ∈ Rd \ D. Thus, we have from (1.42) 1  ∂v ∗ I + KD [ϕ] = ∂ν + 2

on ∂D.

−1/2

∗ is invertible on HΨ Since 12 I + KD

kϕkH −1/2 (∂D)

(∂D), we have



∂v

≤C .

∂ν + −1/2 H (∂D)

R Let η be a function in H 1/2 (∂D)d satisfying ∂D η = 0 and let w be the solution to ∆w = 0 in Rd \D with w(x) = O(|x|1−d ) and w = η on ∂D, so that the following estimate holds: k∇s wkL2 (Rd \D) ≤ CkηkH 1/2 (∂D) . Since

Z

Rd \D

we have Z

∂D

C∇s v : ∇s wdx = −

Z

∂D

∂v · ηdσ, ∂ν +

(2.24)

∂v · ηdσ(x) ≤ Ck∇s vkL2 (Rd \D) k∇s wkL2 (Rd \D) ∂ν + ≤ Ck∇s vkL2 (Rd \D) kηkH 1/2 (∂D) .

Since η ∈ H 1/2 (∂D) is arbitrary, we have from (2.22)



∂v

∂h

s

≤ Ck∇ vkL2 (Rd \D) ≤ C ,

−1/2

∂ν −1/2 ∂ν + H (∂D) H (∂D)

and so follows (2.23).



Proof of Lemma 2.4. Let v := u − h. It is known [62] that v is the minimizer in W (Rd ) of the functional Z 1 e + χ d C)(∇s v + χD P∇s h) : (∇s v + χD P∇s h) dx, (2.25) I[v] := (χD C R \Ω 2 Rd



e −1 C and I is the identity 4-tensor. Note that where P = I − (C) Z 1 e s (v + h) − C∇s h) : (∇s (v + h) − (C) e −1 C∇s h) dx I[v] = (C∇ 2 D Z 1 + C∇s v : ∇s v dx 2 Rd \D Z Z 1 e s v : ∇s v + (C e − C)∇s v : ∇s h dx C∇ = 2 D D Z 1 e e −1 C)∇s h : ∇s h dx + (C − 2C + C(C) 2 D Z 1 + C∇s v : ∇s v dx. (2.26) 2 Rd \D

Let v∞ := u∞ − h.

(2.27)

Then v∞ ∈ W (Rd ), and (v∞ + h)|D ∈ Ψ which implies that ∇s (v∞ + h) = 0 in D. So, we have from the first line in (2.26) that Z Z 1 e −1 C∇s h : ∇s h dx + 1 I[v∞ ] = C(C) C∇s v∞ : ∇s v∞ dx. (2.28) 2 D 2 Rd \D

We then have J[u] = J[v + h] Z Z Z 1 e s v : ∇s v dx + e s v : ∇s h dx + 1 e s h : ∇s h dx = C∇ C∇ C∇ 2 D 2 D D Z 1 s s + C∇ v : ∇ v dx 2 Rd \D Z Z = I[v] + C∇s v : ∇s h dx + C∇s h : ∇s h dx D D Z 1 e −1 C∇s h : ∇s h dx − C(C) 2 D Z Z 1 e −1 C∇s h : ∇s h dx. = I[v] + C∇s u : ∇s h dx − C(C) 2 D D

Since I[v] ≤ I[v∞ ], it follows from (2.28) that Z Z 1 e −1 C∇s h : ∇s h dx J[u] ≤ I[v∞ ] + C∇s u : ∇s h dx − C(C) 2 D D Z Z 1 s s = C∇ v∞ : ∇ v∞ dx + C∇s u : ∇s h dx. 2 Rd \D D

(2.29)



e is bounded, we have Note that since λ Z Z Z s s s 2 C∇ u : ∇ h dx = C( |∇ u| dx + |∇s h|2 dx) D D D  Z  Z 1 e s u : ∇s u dx + ≤C C∇ |∇s h|2 dx µ e D  D  Z 1 s 2 J[u] + |∇ h| dx . ≤C µ e D So, if µ e is sufficiently large, then we have from (2.29) that   J[u] ≤ C k∇s v∞ k2L2 (Rd \D) + k∇s hk2L2 (D) for some constant C. Since s

k∇ v∞ kL2 (Rd \D) ≤ Ckϕ∞ kH −1/2 (∂D)



∂h

≤C ,

∂ν −1/2 H (∂D)

we have (2.22) when µ e is large. When βe and µ e are bounded, we need a function which plays the role of v∞ in the above. For that we use ϕ0 in (2.15): define for x ∈ Rd .

v0 (x) := SD [ϕ0 ](x)

(2.30)

It is worth emphasizing that v0 is defined not only on Rd \ D but on Rd . Then one can show as above that Z Z J[u] ≤ I[v0 ] + C∇s v : ∇s h dx + C∇s h : ∇s h dx D D Z 1 e −1 C∇s h : ∇s h dx. − C(C) 2 D

Using (2.26), one can see that Z Z 1 e s v0 : ∇s v0 dx + e − C)∇s v0 : ∇s h dx J[u] ≤ C∇ (C 2 D D Z Z 1 1 s s e C∇s v0 : ∇s v0 dx + C∇ h : ∇ h dx + 2 D 2 Rd \D Z + C∇s v : ∇s h dx.

(2.31)

D

Since ∂(v0 + h)/∂ν = 0 on ∂D, we have +

Z

D

C∇s v : ∇s h dx =

=

Z

Z∂D

∂h · v dσ = − ∂ν

Rd \D

C ≤ ǫ

Z

Z

∂D

C∇s v0 : ∇s v dx

Rd \D

s

s

∂v0 · v dσ ∂ν +

C∇ v0 : ∇ v0 dx + Cǫ

Z

Rd \D

C∇s v : ∇s v dx



for a (small) constant ǫ. If ǫ is sufficiently small, then we obtain by combining this with (2.31) J[u] ≤ C(k∇s v0 k2L2 (Rd ) + k∇s hk2L2 (D) ) (2.32) for some constant C independent of βe and µ e. Since



∂h

, k∇s v0 kL2 (Rd \D) ≤ Ckϕ0 kH −1/2 (∂D) ≤ C

∂ν −1/2 H (∂D)

we have (2.22). This completes the proof.



2.5 CONVERGENCE OF POTENTIALS AND SOLUTIONS

Lemma 2.5 shows that the potential defined in (2.13) is uniformly bounded with e as long as λ e is bounded. We now prove the following lemma. respect to µ e and λ Lemma 2.6. Let ϕ, ϕ0 and ϕ∞ be potentials defined by (2.13), (2.16) and (2.17), respectively.

e ≤ Λ for some constant Λ. There are constants µ1 and C1 such (i) Suppose that λ that

C1 ∂h

kϕ − ϕ∞ kH −1/2 (∂D) ≤ p (2.33) µ e ∂ν H −1/2 (∂D) for all µ e ≥ µ1 .

(ii) There are constants δ and C such that kϕ − ϕ0 kH −1/2 (∂D) eµ for all β, e ≤ δ.



1/4 ∂h e ≤ C(β + µ e) ∂ν H −1/2 (∂D)

(2.34)

Proof. We may assume that k ∂h ∂ν kH −1/2 (∂D) = 1. Let w = u − u∞ so that w satisfies  s d  ∇ · C∇ w = 0 in R \ D, (2.35) w − u|D ∈ Ψ in D,   1−d w(x) = O(|x| ) as |x| → ∞. Since ∇s (w − u) = 0 in D, we have from Lemma 2.4 that Z Z 1 e s w : ∇s wdx |∇s w|2 dx ≤ C∇ 2e µ D D Z 1 e s u : ∇s udx ≤ C . = C∇ 2e µ D µ e

Let ψj , j = 1, . . . , d(d + 1)/2, be a basis of Ψ as before, and let d(d+1)/2

e=

X j=1

β j ψj ,

(2.36)



where βj are chosen so that Z h i (w − e) · ψj + ∇(w − e) : ∇ψj dx = 0,

j = 1, . . . , d(d + 1)/2.

(2.37)

D

We then apply Korn’s inequality (1.52) to w − e to have Z Z Z h i |∇s (w − e)|2 dx = C |∇s w|2 dx (2.38) |w − e|2 + |∇(w − e)|2 dx ≤ C D

D

D

for some constant C independent of µ e. It then follows from (2.36) that C kw − ekH 1 (D) ≤ p , µ e

(2.39)

and from the trace theorem in H 1 (D) that

C kw − ekH 1/2 (∂D) ≤ p µ e

(2.40)

for some constant C independent of µ e. By the strong convexity (1.5) of C, there is a constant C such that Z C∇s w : ∇s wdx k∇s wk2L2 (Rd \D) ≤ C Rd \D Z ∂w = −C · w dσ ∂ν + Z∂D ∂w = −C · (w − e) dσ ∂D ∂ν +

∂w

≤ Ckw − ekH 1/2 (∂D) , ∂ν + H −1/2 (∂D)

where the second equality holds because of the orthogonality property (2.10). It then follows from (2.40) that C

∂w k∇s wk2L2 (Rd \D) ≤ p . µ e ∂ν + H −1/2 (∂D)

(2.41)

We then obtain using the H −1/2 − H 1/2 duality, divergence theorem on Rd \ D, and the trace theorem that

∂w 2 C



∂w ≤ Ck∇s wk2L2 (Rd \D) ≤ p ,

−1/2 ∂ν + H (∂D) µ e ∂ν + H −1/2 (∂D) and so

∂w C

≤ p .

∂ν + H −1/2 (∂D) µ e

(2.42)

Using the representations (2.13) and (2.15), we have w(x) = u(x) − u∞ (x) = SD [ ϕ − ϕ∞ ](x),

x ∈ Rd \ D.

(2.43)



Thus, (1.42) yields  1 ∂w ∗ [ ϕ − ϕ∞ ] on ∂D. I + KD = ∂ν + 2

(2.44)

So, (2.33) follows from (2.42).

To prove (2.34), let v0 be as defined in (2.30) and let z := v − v0 in Rd . Then z = u − u0 in Rd \ D and the following holds: Z Z s 2 |∇ z| dx ≤ C C∇s z : ∇s zdx Rd \D Rd \D Z Z ∂z ∂u = −C · z dσ = −C · z dσ ∂D ∂ν + ∂D ∂ν + Z Z   ∂u ∂u = −C · u dσ − · u0 dσ ∂D ∂ν + ∂D ∂ν + for some constant C > 0. Since Z Z Z ∂u ∂u e s u : ∇s u dx ≥ 0, · u dσ = · u dσ = C∇ e ∂ν ∂ ν + − ∂D ∂D D we have

Z

Rd \D

|∇s z|2 dx ≤ C =C =C

Z

∂D

Z

Z∂D

∂u · u0 dσ ∂ν + ∂u · (v0 + h) dσ ∂ν + ∂u · (v0 + h) dσ e − ∂ν

Z∂D e s u : ∇s (v0 + h) dx. =C C∇ D

By Cauchy-Schwarz inequality, we obtain that R s e s D C∇ u : ∇ (v0 + h) dx Z  1/2  Z 1/2 e s u : ∇s u dx e s (v0 + h) : ∇s (v0 + h) dx ≤C C∇ C∇ . D

D

Thus, from (2.22) it follows that Z Z 1/2 e s (v0 + h) : ∇s (v0 + h) dx |∇s z|2 dx ≤ C C∇ Rd \D

D

q q ≤ C βe + µ ek∇s (v0 + h)kL2 (D) ≤ C βe + µ e

e Therefore, we arrive at for a constant C independent of β.

∂z

≤ Ck∇s zkL2 (Rd \D) ≤ C(βe + µ e)1/4 .

−1/2 ∂ν + H (∂D)

(2.45)

Note that z = u − u0 = SD [ϕ − ϕ0 ] in Rd \ D. So by the same reasoning as above

45

ELASTICITY EQUATIONS WITH HIGH CONTRAST PARAMETERS

we have (2.34), and the proof is complete. ✷ As a consequence of Lemma 2.6, we obtain the second main result of this chapter. Theorem 2.7. Suppose that (1.5) and (1.53) hold. Let u, u∞ and u0 be the solutions to (2.3), (2.9) and (2.8), respectively. e ≤ Λ for some constant Λ. There are constants µ1 and C such (i) Suppose that λ that

C ∂h

(2.46) ku − u∞ kW (Rd ) ≤ p µ e ∂ν H −1/2 (∂D) for all µ e ≥ µ1 .

(ii) There are constants δ and C such that ku − u0 kW (Rd \D) eµ for all β, e ≤ δ.



1/4 ∂h e ≤ C(β + µ e) ∂ν H −1/2 (∂D)

(2.47)

It is worth mentioning that it is not clear if the convergence rates, µ e−1/2 and (βe + µ e)1/4 are optimal or not. Proof of Theorem 2.7. Assume that k ∂h ∂ν kH −1/2 (∂D) = 1. Since u − u0 = SD [ ϕ − ϕ0 ]

on Rd \ D,

(2.47) follows from (2.34). Since u − u∞ = SD [ ϕ − ϕ∞ ] on Rd \ D, we have

Moreover, we have

C ku − u∞ kW (Rd \D) ≤ p . µ e

(2.48)

C ku − u∞ kH 1/2 (∂D) = kSD [ϕ − ϕ∞ ]kH 1/2 (∂D) ≤ Ckϕ − ϕ∞ kH −1/2 (∂D) ≤ p , µ e

and hence

This completes the proof. 2.6

C ku − u∞ kH 1 (D) ≤ p . µ e

(2.49) ✷

BOUNDARY VALUE PROBLEMS

We now show that the results on the boundedness of the energy functional and on the convergence of solutions similar to the previous ones hold for the boundary value problems. Let Ω be a bounded domain in Rd and let D be an open subset in Ω. We assume that Ω and D have Lipschitz boundaries and satisfy dist(D, ∂Ω) ≥ c0

46

CHAPTER 2

for some c0 > 0. We consider  s e +χ  ∇ · (χD C  Ω\D C)∇ u = 0  ∂u =g    ∂ν u|∂Ω ∈ L2Ψ (∂Ω),

in Ω, on ∂Ω,

(2.50)

where g ∈ L2Ψ (∂Ω). The Dirichlet problem can be treated in exactly the same way. The relevant energy functional for the boundary value problem is Z 1 s s e +χ JΩ [u] := (χD C Ω\D C)∇ u : ∇ u dx. 2 Ω

(2.51)

Then the solution u to (2.50) is the minimizer of JΩ over H 1 (Ω) with the given e is bounded). It boundary condition. Let u∞ be the solution when µ e = ∞ (λ satisfies   ∇ · C∇s u∞ = 0 in Ω \ D,     d(d+1)/2  X   u ∞ = αj ψj on ∂D, (2.52) j=1   ∂u ∞   =g on ∂Ω,    ∂ν  u | ∈ L2 (∂Ω), ∞ ∂Ω Ψ with the coefficients αj being determined by the orthogonality relations Z d(d + 1) ∂u∞ . · ψl dσ = 0, l = 1, . . . , ∂ν 2 ∂D + Then we have

1 JΩ [u] ≤ JΩ [u∞ ] = 2

Z



s s e +χ (χD C Ω\D C)∇ u∞ : ∇ u∞ dx.

e s u∞ = 0 in D, and so, Since u∞ |D ∈ Ψ, we have C∇ Z 1 JΩ [u] ≤ C∇s u∞ : ∇s u∞ dx ≤ C kgkH −1/2 (∂Ω) , 2 Ω\D

(2.53)

e where C is independent of µ e, λ.

Using (2.53) one can show as before that C ku − u∞ kH 1 (Ω) ≤ p µ e

e is bounded, and for all µ e ≥ µ1 when λ

ku − u0 kH 1 (Ω\D) ≤ C(βe + µ e)1/4

(2.54)

(2.55)

ELASTICITY EQUATIONS WITH HIGH CONTRAST PARAMETERS

eµ for all β, e ≤ δ. Here u0 is the solution when βe = µ e = 0. It satisfies  ∇ · C∇s u0 = 0 in Ω \ D,     ∂u ∞   =0 on ∂D, ∂ν ∂u0   =g on ∂Ω,     ∂ν u0 |∂Ω ∈ L2Ψ (∂Ω).

47

(2.56)

We also note that (2.53) together with Korn’s inequality (1.52) implies that kukH 1/2 (∂Ω) ≤ C e It then follows from (2.19) that independently of µ e, λ. khkH 1 (Ω) ≤ C

(2.57)

e independently of µ e, λ. Finally, it is worth mentioning that in the incompressible limit, an error estimate similar to the one in Theorem 2.1 holds for the boundary value problem. CONCLUDING REMARKS In this chapter we have rigorously derived limiting models for the elasticity equations with high contrast parameters. We have used layer potential techniques to prove uniform convergence in an appropriate space of functions of solutions to the Lamé system as the bulk and shear moduli of the inclusion tend to extreme values (zero or ∞) provided that the compressional modulus is bounded. Our results in this chapter will be used in the next chapter in order to show that the small-volume asymptotic expansion of the solution due to the presence of diametrically small inclusions is uniform with respect to the bulk and shear moduli. It is worth noticing that a different approach for studying the convergence, as µ e → ∞, to the limiting problem was recently given in [39]. In [39], the convergence e + 2e is proved when µ e → ∞ and dλ µ → ∞ and a variational characterization of the limiting problem is established. The convergence is more general than µ e→∞ e fixed as discussed in this chapter. The approach of [39] is based on while keeping λ [40] where equations rather than systems are studied.

Chapter Three Small-Volume Expansions of the Displacement Fields Consider an elastic medium occupying a bounded domain Ω in Rd , with a connected Lipschitz boundary ∂Ω. Let the constants (λ, µ) denote the background Lamé coefficients, that are the elastic parameters in the absence of any inclusion. Suppose that the elastic inclusion D in Ω is given by D = ǫB + z, where B is a bounded Lipschitz domain in Rd . We assume that there exists c0 > 0 such that inf x∈D dist(x, ∂Ω) > c0 . e µ Suppose that D has the pair of Lamé constants (λ, e) satisfying (1.5) and (1.53). The purpose of this chapter is to find an asymptotic expansion for the displacement field in terms of the reference Lamé constants, the location, and the shape of the inclusion D. This expansion describes the perturbation of the solution caused by the presence of D. Using the method of matched asymptotic expansions we formally derive the first-order perturbation due to the presence of the inclusion. Then we provide a rigorous proof based on layer potential techniques. The asymptotic expansion in this chapter is valid for elastic inclusions with Lipschitz boundaries. It is expressed in terms of the elastic moment tensor which is a geometric quantity associated with the inclusion. Based on this asymptotic expansion, we will derive the algorithms to obtain accurate and stable reconstructions of the location and the order of magnitude of the elastic inclusion. The chapter is organized as follows. In Section 3.1, we introduce the notion of elastic moment tensor. Then we investigate some important properties of the elastic moment tensor such as symmetry and positive-definiteness. We drive formulas for the elastic moment tensors under linear transformations and compute those associated with ellipses and balls. Section 3.2 aims to derive small-volume expansions of the displacement fields. We consider both the static and time-harmonic regimes. We also extend the small-volume asymptotic framework to anisotropic elasticity. This chapter extends earlier results on small-volume expansions for electromagnetic imperfections inside a conductor of known background electromagnetic parameters [26, 28]. These works are on equations rather than on systems. More mathematically challenging problems for systems are discussed here.

3.1

ELASTIC MOMENT TENSOR

The asymptotic expansion of the displacement in the presence of a small-volume inclusion is expressed in terms of the elastic moment tensor (EMT) which is a geometric quantity associated with the inclusion. The EMT associated with the e µ domain B and the Lamé parameters (λ, µ; λ, e) is defined as follows: For i, j =

SMALL-VOLUME EXPANSIONS OF THE DISPLACEMENT FIELDS

1, . . . , d, let fij and gij solve   SeB [fij ]|− − SB [gij ]|+ = xi ej |∂B , ∂ e ∂ ∂(xi ej )  S [f ] − S [g ] |∂B ,  B ij B ij = e ∂ν ∂ν ∂ν −

49

(3.1)

+

where (e1 , . . . , ed ) is the canonical basis of Rd . Then the EMT M := (mijpq )di,j,p,q=1 is defined by Z mijpq :=

∂B

xp eq · gij dσ.

(3.2)

The following lemma holds [28].

e µ Lemma 3.1. Suppose that 0 < λ, e < +∞. For i, j, p, q = 1, . . . , d,  Z  ∂(xp eq ) ∂(xp eq ) + · vij dσ, mijpq = − e ∂ν ∂ν ∂B

(3.3)

where vij is the unique solution of the transmission problem

 λ,µ L vij = 0 in Rd \ B,     e   Lλ,eµ vij = 0 in B,    vij |+ − vij |− = 0 on ∂B,  ∂vij  ∂vij   − = 0 on ∂B,   ∂ν + e − ∂ν    vij (x) − xi ej = O(|x|1−d ) as |x| → +∞.

(3.4)

Proof. Note first that vij defined by vij (x) :=

(

SB [gij ](x) + xi ej , x ∈ Rd \ B, SeB [fij ](x), x ∈ B,

(3.5)

is the solution of (3.4). Using (1.42) and (3.1) we compute Z mijpq = xp eq · gij dσ ∂B Z h ∂ i ∂ = xp eq · SB gij + − SB [gij ] − dσ ∂ν ∂ν ∂B Z Z h ∂ i ∂(xi ej ) ∂ e =− xp eq · dσ − xp eq · SB [gij ] − − SB [fij ] − dσ e ∂ν ∂ν ∂ν Z∂B Z∂B h i ∂(xp eq ) ∂(xp eq ) ∂(xp eq ) e =− · xi ej dσ − · SB [gij ] − · SB [fij ] dσ e ∂ν ∂ν ∂ν ∂B Z ∂B h ∂(x e ) ∂(x e ) i p q p q = − + · SeB [fij ] dσ, e ∂ν ∂ ν ∂B

and hence (3.3) is established.



50

CHAPTER 3

3.1.1

Properties of the EMT

We now provide some important properties of the EMT such as symmetry, positivedefiniteness, and Hashin-Shtrikman bounds. The following theorem holds [28, 36]. Theorem 3.2 (Symmetry). Let M be the EMT associated with the domain B, e µ and (λ, e) and (λ, µ) be the Lamé parameters of B and the background, respectively. Then, for p, q, i, j = 1, . . . , d, mijpq = mijqp ,

and

mijpq = mjipq ,

mijpq = mpqij .

(3.6)

Proof. Let us fix i, j, p, and q. By Theorem 1.7 and the definition (3.1) of gij , we obtain that gij ∈ L2Ψ (∂B). Since xp eq − xq ep ∈ Ψ, we have Z (xp eq − xq ep ) · gij dσ = 0. ∂B

The first identity of (3.6) immediately follows from the above identity. Since xi ej − xj ei ∈ Ψ, we have ∂(xi ej − xj ei )/∂ν = 0 on ∂B. Let g := gij − gji and f := fij − fji .

Then the pair (f , g) satisfies   SeB [f ]|− − SB [g]|+ = (xi ej − xj ei )|∂B , ∂ ∂ e  SB [g] = 0.  SB [f ] − e ∂ν ∂ν −

+

Lemma 1.10 shows that g = 0 or gij = gji . This proves the second identity of (3.6). The third identity is more difficult to establish. We refer to [28] for its proof. ✷ The symmetry property (3.6) implies that M is a symmetric linear transformation on the space MdS of d × d symmetric matrices. We now recall the positive-definiteness property of the EMT. The following holds [28, 36]. Theorem 3.3 (Positivity). Suppose that (1.53) holds. If µ e > µ (e µ < µ , resp.), then M is positive (negative, resp.) definite on the space MdS of d × d symmetric matrices. Set

1 I ⊗ I, d Since for any d × d symmetric matrix A P1 :=

P2 := I − P1 .

I ⊗ I(A) = (A : I) I = tr(A) I and I(A) = A, one can immediately see that P1 P1 = P1 ,

P2 P2 = P2 ,

P1 P2 = 0,

(3.7)

SMALL-VOLUME EXPANSIONS OF THE DISPLACEMENT FIELDS

51

and P2 is then the orthonormal projection from the space of d × d symmetric matrices onto the space of symmetric matrices of trace zero. With notation (3.7), we express the trace bounds satisfied by the EMT in the following theorem. e µ/d. Suppose for simplicity Theorem 3.4 (Bounds). Set β = λ+2µ/d, βe = λ+2e that µ e > µ. We have dβ + 2(d − 1)e µ 1 tr(P1 MP1 ) ≤ d(βe − β) e |B| dβ + 2(d − 1)e µ  2 1 d +d−2 tr (P2 MP2 ) ≤ 2 (e µ − µ) |B| 2

! d−1 d−1 + − 2 (e µ − µ) , 2e µ dβe + 2(d − 1)e µ  dβe + 2(d − 1)µ 1 , |B| tr P1 M−1 P1 ≤ d(βe − β) dβ + 2(d − 1)µ  2  1 d +d−2 |B| tr P2 M−1 P2 ≤ 2(e µ − µ) 2    d−1 d−1 +2 µ e−µ + , 2µ dβ + 2(d − 1)µ where for C = (Cpqij ), tr(C) :=

d X

(3.8)

(3.9) (3.10)

(3.11)

Cijij .

i,j=1

Note that P1 MP1 and P2 MP2 are the bulk and shear parts of M. We also note that tr(P1 ) = 1 and tr(P2 ) = (d(d + 1) − 2)/2.

The bounds (3.8)–(3.11) are called Hashin-Shtrikman bounds for the EMT and are obtained in [62, 131]. The upper bounds for M are also derived in [131]. In [62], it is shown that an inclusion B whose trace is close to the upper bound must be infinitely thin. The upper and lower bounds for the EMT may also be derived as a low volume fraction limit of the Hashin-Shtrikman bounds for the effective moduli of the two phase composites, which was obtained in [138, 176, 177]. In [43, 139], the upper and lower bounds of EMTs when those EMTs happen to be isotropic were obtained. We are now interested in the shape of the inclusion whose EMT satisfies the equality in either (3.10) or (3.11). This is an isoperimetric inequality for the EMT. In this direction we state the following theorem from [17]. Theorem 3.5. Let B be a simply connected bounded domain in Rd with a Lipschitz boundary. Suppose that |B| = 1 and let M be the EMT associated with B. If the equality holds in either (3.10) or (3.11), then B is an ellipse in two dimensions and an ellipsoid in three dimensions. We remark that optimal shapes for a cavity (hole) in two dimension were investigated in [72, 140]. It is worth mentioning that the dimension of the space of symmetric 4-tensors in the three-dimensional space is 21, and hence the equalities (3.10) and (3.11) are

52

CHAPTER 3

satisfied on a 19 (21 − 2) dimensional surface in tensor space. However ellipsoid geometries (with unit volume) only cover a 5 dimensional manifold within that 19 dimensional space.

3.1.2 EMTs under Linear Transformations

In this subsection we recall formulas for EMTs under linear transformations. These formulas were first proved in [28]. Theorem 3.6. Let B be a bounded domain in Rd and let (mij pq (B)) denote the EMT associated with B. Then the following holds: translation formula Let z ∈ Rd . Then, ij mij pq (B + z) = mpq (B),

i, j, p, q = 1, . . . , d;

(3.12)

i, j, p, q = 1, . . . , d;

(3.13)

scaling formula Let ǫ > 0. Then, d ij mij pq (ǫB) = ǫ mpq (B),

rotation formula Let R = (rij ) be a unitary transformation in Rd . Then, mij pq (R(B))

=

d d X X

rpu rqv rik rjl mkl uv (B),

i, j, p, q = 1, . . . , d.

(3.14)

u,v=1 k,l=1
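In coordinates, the transformation rules of Theorem 3.6 are convenient to apply with tensor contractions. The snippet below is a small sketch (not part of the text) that applies the scaling rule (3.13) and the rotation rule (3.14), m^{ij}_{pq}(R(B)) = Σ_{u,v,k,l} r_{pu} r_{qv} r_{ik} r_{jl} m^{kl}_{uv}(B), to an EMT stored as a NumPy array M[i, j, p, q].

```python
import numpy as np

def scale_emt(M, eps, d):
    """Scaling rule (3.13): m_{pq}^{ij}(eps*B) = eps^d * m_{pq}^{ij}(B)."""
    return (eps ** d) * M

def rotate_emt(M, R):
    """Rotation rule (3.14) with M indexed as M[i, j, p, q]:
    M_rot[i,j,p,q] = sum_{k,l,u,v} R[i,k] R[j,l] R[p,u] R[q,v] M[k,l,u,v]."""
    return np.einsum('ik,jl,pu,qv,kluv->ijpq', R, R, R, R, M)

# Example usage: rotating a 2D EMT by an angle theta
# (the translation rule (3.12) leaves M unchanged).
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# M = ...            # a (2, 2, 2, 2) array, e.g. the disk EMT of the next subsection
# M_rot = rotate_emt(M, R)
```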

3.1.3 EMTs for Ellipses and Balls

Let M = (mijpq ) be the EMT for the ellipse B whose semi-axes are on the x1 - and e µ x2 -axes and of length a and b, respectively, and let (λ, e) and (λ, µ) be the Lamé parameters of B and the background, respectively. Then we have m1111 = |B|(λ + 2µ) m2222 = |B|(λ + 2µ) m1122 = |B| m1212 = |B| where

e−λ+µ (e µ − µ)(λ e − µ)[m2 − 2(τ − 1)m] + c , e+µ e+µ (e µ − µ)[3µ + (1 − τ )(λ e)]m2 + (µ + λ e)(µ + τ µ e) e−λ+µ (e µ − µ)(λ e − µ)[m2 + 2(τ − 1)m] + c , e+µ e+µ (e µ − µ)[3µ + (1 − τ )(λ e)]m2 + (µ + λ e)(µ + τ µ e)

e−λ+µ e − λ)(e (λ + 2µ)[(e µ − µ)(λ e − µ)m2 + (λ µ + τ µ) + (e µ − µ)2 ] , e+µ e+µ (e µ − µ)[3µ + (1 − τ )(λ e)]m2 + (µ + λ e)(µ + τ µ e) µ(e µ − µ)(τ + 1) , −(e µ − µ)m2 + µ + τ µ e

e−λ+µ e+µ c = (λ e − µ)(µ + τ µ e) + (τ − 1)(e µ − µ)(µ + λ e),



m = (a − b)/(a + b) and τ = (λ + 3µ)/(λ + µ). The remaining terms are determined by the symmetry properties (3.6). If m = 0, i.e., B is a disk, then
\[
\begin{cases}
m_{1122} = |B|\,\dfrac{(\lambda + 2\mu)\big[(\tilde\lambda - \lambda)(\tilde\mu + \tau\mu) + (\tilde\mu - \mu)^2\big]}{(\mu + \tilde\lambda + \tilde\mu)(\mu + \tau\tilde\mu)},\\[4mm]
m_{1212} = |B|\,\dfrac{\mu(\tilde\mu - \mu)(\tau + 1)}{\mu + \tau\tilde\mu}.
\end{cases}
\tag{3.15}
\]
With notation (3.7), the EMT of a disk given by (3.15) can be rewritten as
\[
M = 2|B|\,\frac{(\lambda + 2\mu)(\tilde\lambda + \tilde\mu - \lambda - \mu)}{\mu + \tilde\lambda + \tilde\mu}\,P_1
+ 2|B|\,\frac{\mu(\tilde\mu - \mu)(\tau + 1)}{\mu + \tau\tilde\mu}\,P_2,
\]
or, equivalently,
\[
M = |B|\,m_1^{(2)}\big(2 m_2^{(2)} P_1 + 2 P_2\big)
\tag{3.16}
\]
with
\[
m_1^{(2)} = \frac{\mu(\tilde\mu - \mu)(\tau + 1)}{\mu + \tau\tilde\mu},
\qquad
m_2^{(2)} = \frac{(\lambda + 2\mu)(\tilde\lambda + \tilde\mu - \lambda - \mu)(\mu + \tau\tilde\mu)}{\mu(\mu + \tilde\lambda + \tilde\mu)(\tilde\mu - \mu)(\tau + 1)}.
\tag{3.17}
\]

Analogously, for a spherical inclusion B, M can be expressed as [33] (3)

(3)

M = |B|m1 (3m2 P1 + 2P2 ),

(3.18)

where (3)

m1

(3) m2

=

=

15µ(µ − µ e)(ν − 1) , 15µ(1 − ν) + 2(µ − µ e)(5ν − 4)

 e 15µλ(1 − ν) + 2λ(µ − µ (λ − λ) e)(5ν − 4)  e − λ(µ − µ 5(µ − µ e) 3λµ(1 − ν) − 3µν(λ − λ) e)(1 − 2ν)

− and ν =

(3.19)

e 2(µ − µ e)(λ(µ − µ e) − 5µν(λ − λ)) , e − λ(µ − µ 5(µ − µ e) 3λµ(1 − ν) − 3µν(λ − λ) e)(1 − 2ν)

λ denotes the Poisson ratio. 2(λ + µ)

Note that from (3.16) and (3.18) it follows that the EMT M of a disk or a sphere is isotropic. One can write M as M = aI + bI ⊗ I

(3.20)

e µ, µ for constants a and b depending only on λ, λ, e and the space dimension d, which can be easily computed. In fact, using (3.7), (3.16), and (3.18), we have (d)

a

= 2|B|m1 ,

b

2 (d) (d) = |B|m1 (m2 − ). d

54

CHAPTER 3

It is worth emphasizing that the EMT of an ellipsoid can be computed explicitly using layer potentials; see [33]. The formula involves too many terms to be written down. As will be shown in the next subsection, in the case of a hole or a hard inclusion e = µ of elliptic shape, taking λ e = 0 or µ e = ∞ yields explicit formulas for their associated EMTs. The EMTs for rotated elliptic holes and hard inclusions can be found using (3.14); see [28]. 3.1.4

Limiting Cases

We begin with the incompressible limit of the EMT. Let wij be the unique solution of the Stokes problem   µ∆wij + ∇p = 0 in Rd \ B,      µ e∆wij + ∇p = 0 in B,     w  ij |+ = wij |− on ∂B,        ∂wij ∂wij pn + µ = pn + µ e on ∂B, (3.21) ∂n + ∂n −    d   δij X   w (x) − x e + xl el = O(|x|1−d ) as |x| → +∞,  ij i j   d  l=1    p(x) = O(|x|−d ) as |x| → +∞. Define the tensor V = (vijpq )di,j,p,q=1 by Z vijpq := (e µ − µ) ∇wij : ∇s (xp eq ) dx,

i, j, p, q = 1, . . . , d.

B

The tensor V, introduced in [20], is called the viscous moment tensor. Again in [20], it is proved that

  d

δij X

e λ → +∞, → 0 as λ, (3.22) wll

vij − wij −

d d l=1

W (R )

where vij is defined by (3.4). Here, the limits are taken under the assumption that e = O(1). Therefore, one can show from (3.22) that λ/λ V=

lim

e λ,λ→+∞

P2 MP2 ,

where P2 , defined by (3.7), is the orthonormal projection from the space of d × d symmetric matrices onto the space of symmetric matrices of trace zero. 0 Now, for the hole case, let vij be the unique solution of the transmission problem  λ,µ 0 L vij = 0 in Rd \ B,     0 ∂vij = 0 on ∂B,  ∂ν +    0 vij (x) − xi ej = O(|x|1−d ) as |x| → +∞,

(3.23)

SMALL-VOLUME EXPANSIONS OF THE DISPLACEMENT FIELDS

55

∞ while for the hard inclusion case, let vij denote the unique solution of the transmission problem

 λ,µ ∞ d  L vij = 0 in R \ B,   d(d+1)/2  X ∞ vij = αlij ψl on ∂B,  +  l=1    ∞ vij (x) − xi ej = O(|x|1−d ) as |x| → +∞.

(3.24)

In (3.24), the coefficients αlij are determined by the orthogonality relations Z

∂B

∞ ∂vij · ψl dσ = 0, ∂ν +

l = 1, . . . ,

d(d + 1) . 2

Using the results of Chapter 2, one can prove that mijpq → m0ijpq and where

m0ijpq

and

m∞ ijpq ,

mijpq → m∞ ijpq

m∞ ijpq

=−

as µ e → +∞,

for i, j, p, q = 1, . . . , d, are respectively defined by m0ijpq = −

and

e µ as λ, e → 0,

Z

∂B

Z

∂B

∂(xp eq ) ∞ · vij dσ, ∂ν

∂(xp eq ) 0 · vij dσ + ∂ν

Z

∂B

∞ ∂vij · (xp eq ) dσ. ∂ν +

(3.25)

(3.26)

From Subsection 3.1.3, explicit formulas for the EMTs in the limiting cases of ellipses, balls, spheres, and ellipsoids can be derived.

3.2 SMALL-VOLUME EXPANSIONS

3.2.1 Static Regime

In this section we consider the effect of a small elastic inclusion on the boundary measurements in the static regime. For a given g ∈ L2Ψ (∂D), let uǫ be the solution of  λ,µ L uǫ = 0 in Ω \ D,     e   Lλ,eµ uǫ = 0 in D,       uǫ − = uǫ + on ∂D,    ∂uǫ ∂uǫ (3.27) = on ∂D,   e − ∂ ν ∂ν  +     ∂uǫ   = g,   ∂ν ∂Ω     uǫ ∂Ω ∈ L2Ψ (∂Ω).

56

CHAPTER 3

Our aim in this section is to derive an asymptotic formula for uǫ as ǫ goes to 0 in terms of the background solution u0 , that is the solution of (1.87). The derivation of such a formula is much more difficult than the one for antiplane elasticity because of the tensorial nature of the fundamental solution of the Lamé system. 3.2.1.1

Formal Derivations of the Asymptotic Expansion

Using the method of matched asymptotic expansions, we first give a formal derivation of the leading-order term in the asymptotic expansion of the displacement field uǫ as ǫ → 0. The outer expansion reads as uǫ (y) = u0 (y) + ǫτ1 u1 (y) + ǫτ2 u2 (y) + . . . ,

for |y − z| ≫ O(ǫ),

where 0 < τ1 < τ2 < . . ., u1 , u2 , . . . , are to be found. The inner expansion is written as ˆ ǫ (ξ) = uǫ (z + ǫξ) = u ˆ 0 (ξ) + ǫu ˆ 1 (ξ) + ǫ2 u ˆ 2 (ξ) + . . . , u

for |ξ| = O(1),

ˆ 0, u ˆ 1 , . . . , are to be found. where u In some overlap domain the matching conditions are given by ˆ 0 (ξ) + ǫu ˆ 1 (ξ) + ǫ2 u ˆ 2 (ξ) + . . . . u0 (y) + ǫτ1 u1 (y) + ǫτ2 u2 (y) + . . . ∼ u If we substitute the inner expansion into the transmission problem (3.27) and formally equate coefficients of ǫ−2 and ǫ−1 , we get: X ˆ 0 (ξ) = u0 (z), ˆ 1 (ξ) = u and u (∂i (u0 )j )(z)vij (ξ), i,j

where vij is the solution to (3.4). Therefore, we arrive at the following inner asymptotic formula: uǫ (x) ≃ u0 (z) + ǫ

X i,j

vij (

x−z )(∂i (u0 )j )(z) ǫ

for x near z.

(3.28)

Note that vij admits the representation (3.5), where the pair (fij , gij ) is the unique solution to (3.1). We now derive the outer expansion. From (1.63), uǫ (x) = H(x) + SD [ψǫ ](x),

x ∈ Ω \ D,

where ψǫ ∈ L2Ψ (∂D) and H(x) = DΩ [uǫ |∂Ω ](x) − SΩ [g](x), which yields 1 ( I − KΩ )[(uǫ − u0 )|∂Ω ] = SD [ψǫ ] on ∂Ω. 2 By using the jump relation (1.42), ∂SD [ψǫ ] ∂SD [ψǫ ] ψǫ = − . ∂ν + ∂ν −

(3.29)

Combining (3.29) together with the transmission conditions satisfied by uǫ , and

57

SMALL-VOLUME EXPANSIONS OF THE DISPLACEMENT FIELDS

Green’s identity

we get

Thus,

  ∂H ∂Γ Γ − H dσ = 0, ∂ν ∂ν ∂D

Z

  ∂ SeD [ϕǫ ] ∂Γ e SD [ψǫ ] = Γ |− − SD [ϕǫ ] dσ e ∂ν ∂ν ∂D Z

e − λ) SD [ψǫ ] = (λ

Z

(e µ − µ) ∇ · Γ ∇ · uǫ + 2 D

Z

D

on ∂Ω.

∇s Γ : ∇s uǫ .

Inserting the inner expansion (3.28) into the above identity and using identity (1.86), we obtain after expanding (NΩ )kq (x, y) ≃ (NΩ )kq (x, z) + ∇z (NΩ )kq (x, z) · (y − z)

for y ∈ ∂D,

that for any x ∈ ∂Ω, (uǫ − u0 )k (x) ≃ −ǫ

d

d X

(∂i (u0 )j )(z)

i,j=1

d X

∂p (NΩ )kq (x, z)

p,q=1

 Z  e − λ) × (λ ∇ · (ξp eq ) ∇ · vij (ξ) dξ B   R +2(e µ − µ) B ∇s (ξp eq ) : ∇s vij (ξ) dξ . Since ξp eq is linear, integrating by parts gives Z e − λ)( ∇ · (ξp eq ) ∇ · vij (ξ) dξ) (λ B  Z s s +2(e µ − µ) ∇ (ξp eq ) : ∇ vij (ξ dξ B   Z ∂ ∂ = vij (ξ) · − (ξp eq ) dσ(ξ). e ∂ν ∂ν ∂B

But, from (3.5) it follows again by integrating by parts that   Z ∂ ∂ vij · − (ξp eq ) dσ e ∂ν ∂ν ∂B   Z ∂ ∂ e SB [fij ] − (ξi ej + SB [gij ]) ξp eq dσ = e ∂ν ∂ν ∂B − −  Z  ∂ ∂ = SB [gij ] − SB [gij ] ξp eq dσ ∂ν ∂ν ∂B + − = mijpq .

Therefore, the following outer formula holds: uǫ (x) ≃ u0 (x) − ǫd ∇u0 (z) : M∇z NΩ (x, z) for x far away from z.

(3.30)



It is worth noticing that, for k = 1, . . . , d, d  d    X X ∇u0 (z) : M∇z NΩ (x, z) = ∂i (u0 )j (z) mijpq ∂p (NΩ )kq (x, z) . k

p,q=1

i,j=1

(3.31)
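The double contraction in (3.31) is easy to mistranscribe, so a direct implementation is a useful reference. The sketch below assumes that the gradient of the background field at z, the EMT M, and the array of derivatives ∂_p(N_Ω)_{kq}(x, z) are available from elsewhere (hypothetical inputs); it simply evaluates the k-th component of ∇u_0(z) : M∇_z N_Ω(x, z) and the corresponding leading-order term of the perturbation in (3.30).

```python
import numpy as np

def contract_emt(grad_u0, M, grad_N):
    """Evaluates (3.31): out[k] = sum_{i,j} d_i(u0)_j(z) sum_{p,q} m_{ijpq} d_p(N_Omega)_{kq}(x,z).

    grad_u0 : (d, d) array, grad_u0[i, j] = d_i (u0)_j at z
    M       : (d, d, d, d) array, the EMT m_{ijpq}
    grad_N  : (d, d, d) array, grad_N[p, k, q] = d/dz_p (N_Omega)_{kq}(x, z)
    """
    return np.einsum('ij,ijpq,pkq->k', grad_u0, M, grad_N)

def leading_order_perturbation(eps, grad_u0, M, grad_N, d=2):
    """Leading term of (3.30): u_eps(x) - u_0(x) ~ -eps^d * (grad u0(z) : M grad_z N_Omega(x,z))."""
    return -(eps ** d) * contract_emt(grad_u0, M, grad_N)
```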

3.2.1.2 Proof of the Outer Asymptotic Expansion

In order to prove the formal expansion (3.30), the following lemma will be useful. Lemma 3.7. For any given (F , G) ∈ H 1 (∂D)d ×L2 (∂D)d , let (f , g) ∈ L2 (∂D)d × L (∂D)d be the solution of  e  SD [f ] − − SD [g] + = F on ∂D, (3.32) ∂ ∂ e  SD [g] = G on ∂D.  SD [f ] − e ∂ν ∂ν 2



+

e µ Then there exists a constant C depending only on λ, µ, λ, e, and B, but not on ǫ, such that   ∂F kL2 (∂D) + kGkL2 (∂D) . kgkL2 (∂D) ≤ C ǫ−1 kF kL2 (∂D) + k (3.33) ∂T Here, ∂/∂T denotes the tangential derivative defined by (1.4).

Proof. Assuming without loss of generality that z = 0, we scale x = ǫy, y ∈ B. Let fǫ (y) = f (ǫy), y ∈ ∂B, etc. Let (ϕǫ , ψǫ ) be the solution to the integral equation  −1  SeB [ϕǫ ]|− − SB [ψǫ ]|+ = ǫ Fǫ on ∂B, ∂ ∂ e  S [ϕ ] − S [ψ ] on ∂B.  B ǫ B ǫ = Gǫ e ∂ν ∂ν −

+

We can show that gǫ = ψǫ . It then follows from (1.54) that   −1 kgǫ kL2 (∂B) = kψǫ kL2 (∂B) ≤ C kǫ Fǫ kH 1 (∂B) + kGǫ kL2 (∂B) ,

where C does not depend on ǫ. By scaling back using x = ǫy, we obtain (3.33). This completes the proof. 

From Theorem 1.18, it follows that uǫ can be represented as uǫ (x) = u0 (x) − ND [ψ](x),

x ∈ ∂Ω,

where ψ is defined by (1.64) and ND is defined by (1.88).

(3.34)

59

SMALL-VOLUME EXPANSIONS OF THE DISPLACEMENT FIELDS

Let H be the function defined in (1.62). Define H (1) by X H (1) (x) : = ∂ α H(z)(x − z)α |α|=0,1

=

 X

|α|=0,1

=

d X

∂ α H1 (z)(x − z)α , . . . ,

X

X

|α|=0,1

∂ α Hd (z)(x − z)α



∂ α Hj (z)(x − z)α ej ,

j=1 |α|=0,1

where (e1 , . . . , ed ) is the canonical basis of Rd . Define ϕ1 and ψ1 in L2 (∂D) by  (1) e  SD [ϕ1 ]|− − SD [ψ1 ]|+ = H |∂D , (1)  ∂ SeD [ϕ1 ] − ∂ SD [ψ1 ] = ∂H ,  e ∂ν ∂ν ∂ν ∂D − + and set

ϕ := ϕ1 + ϕR ,

and ψ := ψ1 + ψR .

Since (ϕR , ψR ) is the solution of the integral equation (3.32) with F = H − H (1) and G = ∂(H − H (1) )/∂ν, it follows from (3.33) that  kψR kL2 (∂D) ≤ C ǫ−1 kH − H (1) kL2 (∂D) + k∇(H − H (1) )kL2 (∂D) . (3.35)

By (1.66),

ǫ−1 kH − H (1) kL2 (∂D) + k∇(H − H (1) )kL2 (∂D)   1/2 −1 (1) (1) ≤ |∂D| ǫ kH − H kL∞ (∂D) + k∇(H − H )kL∞ (∂D) ≤ kHkC 2 (D) ǫ|∂D|1/2

≤ CkgkL2 (∂Ω) ǫ|∂D|1/2 ,

and therefore, kψR kL2 (∂D) ≤ CkgkL2 (∂Ω) ǫ|∂D|1/2 ,

(3.36)

where C is independent of ǫ.

By (3.34), we obtain that uǫ (x) = u0 (x) − ND [ψ1 ](x) − ND [ψR ](x),

x ∈ ∂Ω.

(3.37)

The first two terms in (3.37) are the main terms in our asymptotic expansion and the last term is the error term. We claim that the error term is O(ǫd+1 ). In fact, since ψ, ψ1 ∈ L2Ψ (∂D), in particular, Z Z ψ dσ = ψ1 dσ = 0, ∂D

and we have

∂D

Z

∂D

ψR dσ = 0.

60

CHAPTER 3

It then follows from (3.36) that, for x ∈ ∂Ω, Z   |ND [ψR ](x)| = NΩ (x, y) − NΩ (x, z) ψR (y) dσ(y) ∂D

≤ Cǫ|∂D|1/2 kψR kL2 (∂D)

≤ CkgkL2 (∂Ω) ǫd+1 .

In order to expand the second term in (3.37), we first define some auxiliary functions. Let D0 := D − z, the translate of D by −z. For multi-index α ∈ Nd , |α| = 0, 1, and j = 1, . . . , d, define ϕjα and ψαj by  j j α  SeD0 [ϕα ]|− − SD0 [ψα ]|+ = x ej |∂D0 , (3.38) ∂ ∂(xα ej ) ∂ e j j  SD0 [ψα ] = |∂D0 .  SD0 [ϕα ] − ∂ν ∂ν ∂ν − + Then the linearity and the uniqueness of the solution to (3.38) yield ψ1 (x) =

d X X

j=1 |α|=0,1

∂ α Hj (z)ψαj (x − z),

x ∈ ∂D.

Recall that D0 = ǫB and let the pair (fαj , gαj ) in L2 (∂B) × L2 (∂B) be the solution of  j j α  SeB [fα ]|− − SB [gα ]|+ = x ej |∂B , (3.39) ∂ e j ∂ ∂(xα ej ) j  SB [gα ] = |∂B .  SB [fα ] − e ∂ν ∂ν ∂ν − + Then, we can see that

ψαj (x) = ǫ|α|−1 gαj (ǫ−1 x),

and hence ψ1 (x) =

d X X

j=1 |α|=0,1

∂ α Hj (z)ǫ|α|−1 gαj (ǫ−1 (x − z)),

x ∈ ∂D.

We thus get ND [ψ1 ](x) =

d X X

α

∂ Hj (z)ǫ

|α|+d−2

Z

∂B

j=1 |α|=0,1

NΩ (x, z + ǫy)gαj (y) dσ(y). (3.40)

It then follows from (3.40) and Lemma 1.19 that ND [ψ1 ](x) =

d X X

j=1 |α|=0,1

α

∂ Hj (z)ǫ

|α|+d−2

Z +∞ X 1 |β| β y β gαj (y) dσ(y). ǫ ∂z NΩ (x, z) β! ∂B

|β|=0

61

SMALL-VOLUME EXPANSIONS OF THE DISPLACEMENT FIELDS

Using Lemma 1.10, it follows that g0j = 0. We finally obtain that ND [ψ1 ](x) = ǫd

d X

X

∂ α Hj (z)∂zβ NΩ (x, z)

j=1 |α|,|β|=1

Z

∂B

y β gαj (y) dσ(y).

(3.41)

We now see from the definition (3.3) of the elastic moment tensor, (3.37), and (3.41) that uǫ (x) = u0 (x) − ǫd ∇H(z) : M∇z NΩ (x, z) + O(ǫd+1 ),

x ∈ ∂Ω.

(3.42)

Observe that formula (3.42) still uses the function H which depends on ǫ. Therefore, the remaining task is to transform this formula into a formula which is expressed using only the background solution u0 . Using u0 = −SΩ [g] + DΩ [u0 |∂Ω ] in Ω, substitution of (3.42) into (1.62) yields that, for any x ∈ Ω, H(x) = −SΩ [g](x) + DΩ [uǫ ]|∂Ω (x) = u0 (x) + O(ǫ),

(3.43)

which yields the main result of this chapter. Theorem 3.8. Let uǫ be the solution of (3.27) and u0 the background solution. The following pointwise asymptotic expansion on ∂Ω holds: uǫ (x) = u0 (x) − ǫd ∇u0 (z) : M∇z NΩ (x, z) + O(ǫd+1 ),

x ∈ ∂Ω.

(3.44)

Note that in (3.44) we have used the convention in (3.31). When there are multiple well-separated inclusions Dl = ǫBl + zl ,

l = 1, . . . , m,

where |zl − zl′ | > 2c0 for some c0 > 0, l 6= l′ , then by iterating formula (3.44), we obtain the following theorem. Theorem 3.9. The following asymptotic expansion holds uniformly for x ∈ ∂Ω: uǫ (x) = u0 (x) − ǫd

m X l=1

∇u0 (zl ) : Ml ∇z NΩ (x, zl ) + O(ǫd+1 ),

where Ml is the EMT corresponding to the inclusion Bl , l = 1, . . . , m. In imaging small inclusions from boundary measurements, it is of fundamental importance to catch the boundary signature of the presence of anomalies. In this respect, an asymptotic expansion of the boundary perturbations of the solutions due to the presence of the inclusion, as the diameter of the inclusion tends to zero, has been derived in this chapter. It is important in applications to know that the asymptotic expansion holds uniformly with respect to the pair of Lamé parameters of the inclusion. It can be shown using results of Chapter 2 that the small-volume asymptotic expansion (3.44) is uniform with respect to the pair of Lamé parameters under the assumption that the compressional modulus is bounded, which is necessary. In fact, there exists a e as long as λ e ≤ Λ for some constant Λ such that constant C independent of µ e and λ the remainder O(ǫd+1 ) in (3.44) is bounded by Cǫd+1 ; see [29]. In (3.44), the EMT

62

CHAPTER 3

e → 0) or with a hard M should be replaced by the one associated with a hole (if µ e, λ inclusion (if µ e → ∞) given by (3.25) and (3.26), respectively. For multi-index α ∈ Nd and j = 1, . . . , d, let fjα and gjα be the solution to  α α α  SeB [fj ]|− − SB [gj ]|+ = x ej |∂B , ∂ ∂(xα ej ) ∂ e α α  S [f ] − S [g ] |∂B .  B j B j = e ∂ν ∂ν ∂ν − +

(3.45)

For β ∈ Nd , define the high-order EMT associated with B by Z j Mαβ := y β gjα (y) dσ(y). ∂B

In [28, 36], it is shown that the high-order EMTs are the basic building blocks for the full-asymptotic expansion of uǫ in terms of ǫ. Finally, we refer the reader to [27] for a proof of the inner expansion (3.28).

3.2.2

Time-Harmonic Regime

Let u0 be the background solution associated with (λ, µ, ρ) in Ω, i.e.,  in Ω,  (Lλ,µ + ω 2 ρ)u0 = 0 ∂u 0  =g on ∂Ω, ∂ν

(3.46)

with g ∈ L2 (∂Ω)d .

Suppose that the elastic inclusion D in Ω is given by D = ǫB + z, where B is a bounded Lipschitz domain in Rd . We assume that there exists c0 > 0 such e µ that inf x∈D dist(x, ∂Ω) > c0 . Suppose that D has the pair of Lamé constants (λ, e) satisfying (1.5) and (1.53) and denote by ρe its density. Let uǫ be the solution to  (Lλ,µ + ω 2 ρ)uǫ = 0     e   (Lλ,eµ + ω 2 ρe)uǫ = 0      uǫ = uǫ − + ∂u ∂u  ǫ ǫ  =    e − ∂ν ∂ν +       ∂uǫ = g ∂ν

in Ω \ D, in D,

on ∂D,

(3.47)

on ∂D, on ∂Ω.

Then, the following result can be obtained using arguments analogous to those in Theorem 3.8. Theorem 3.10. Let uǫ be the solution to (3.47), u0 be the background solution defined by (3.46) and ω 2 ρ be different from the Neumann eigenvalues of the operator −Lλ,µ on Ω. Then, for ωǫ ≪ 1, the following asymptotic expansion holds uniformly

63

SMALL-VOLUME EXPANSIONS OF THE DISPLACEMENT FIELDS

for all x ∈ ∂Ω:

 uǫ (x) − u0 (x) = −ǫd ∇u0 (z) : M∇z Nω Ω (x, z)

(3.48)

 +ω 2 (ρ − ρe)|B|Nω (x, z)u (z) + O(ǫd+1 ). 0 Ω

As a direct consequence of expansion (3.48) and identity (1.82), the following result holds. Corollary 3.11. Under the assumptions of Theorem 3.10, we have    1 ω I − KΩ [uǫ − u0 ](x) = ǫd ∇u0 (z) : M∇z Γω (x − z) 2  +ω 2 (ρ − ρe)|B|Γω (x − z)u0 (z) + O(ǫd+1 )

(3.49)

uniformly with respect to x ∈ ∂Ω.

Again, note that we have made use of the following conventions in (3.48) and (3.49):  and

d  d   X X ∇u0 (z) : M∇z Nω = ∂i (u0 )j (z) mijpq ∂p (Nω Ω (x, z) Ω )kq (x, z) , k

p,q=1

i,j=1

d   X Nω = (Nω Ω (x, z)u0 (z) Ω )ki (x, z)(u0 )i (z), k

i=1

for k = 1, . . . , d. We also have an asymptotic expansion of the solutions of the Dirichlet problem. Theorem 3.12. Let ω 2 ρ be different from the Dirichlet eigenvalues of the operator −Lλ,µ on Ω. Let vǫ be the solution to   (Lλ,µ + ω 2 ρ)vǫ = 0 in Ω \ D,    e   (Lλ,eµ + ω 2 ρe)vǫ = 0 in D,     vǫ − = vǫ + on ∂D, (3.50)  ∂vǫ ∂vǫ    = on ∂D,   e − ∂ν ∂ν +     vǫ = f on ∂Ω, and let v0 be the background solution defined by ( (Lλ,µ + ω 2 ρ)v0 = 0 v0 = f

in Ω, on ∂Ω,

(3.51)

with f ∈ H 1/2 (∂Ω)d . Then, for ωǫ ≪ 1, the following asymptotic expansion holds

64

CHAPTER 3

uniformly for all x ∈ ∂Ω:

 ∂vǫ ∂v0 ∂Gω Ω (x) − (x) = −ǫd ∇v0 (z) : M∇z (x, z) (3.52) ∂ν ∂ν ∂ν  ∂Gω Ω (x, z)u0 (z) + O(ǫd+1 ). +ω 2 (ρ − ρe)|B| ∂ν

Moreover,    ∂(vǫ − v0 ) ∂Γω 1 ω ∗ I + (KΩ ) [ ](x) = −ǫd ∇v0 (z) : M∇z (x − z) 2 ∂ν ∂ν  ∂Γω +ω 2 (ρ − ρe)|B| (x − z)v0 (z) + O(ǫd+1 ) ∂ν

(3.53)

uniformly with respect to x ∈ ∂Ω.

Finally, it is worth mentioning that if we consider the scattering problem in free space, then an asymptotic formula for the scattering amplitude perturbations due to a small elastic inclusion can be obtained following the same lines as in the derivation of (3.53). 3.2.3

Anisotropic Inclusions

e are fourth-order anisotropic elasticity tensors Assume that the tensors C and C that satisfy the full symmetry properties Cijpq = Cpqij = Cjipq ,

eijpq = C epqij = C ejipq , C

i, j, p, q = 1, . . . , d,

e such that and are strongly convex, i.e., there exist positive constants C and C

2 e : A ≥ CkAk e CA qP 2 for every d × d symmetric matrix A = (aij ), where kAk = i,j aij . Let D = ǫB + z be a small anisotropic elastic inclusion with elasticity tensor e inside the anisotropic background medium Ω with elasticity C. Consider the C displacement field in the presence of D which satisfies ( s e +χ ∇ · (χD C Ω\D C)∇ uǫ = 0 in Ω,

CA : A ≥ CkAk2 ,

(C∇s uǫ )n = g

on ∂Ω,

where g ∈ L2Ψ (∂Ω) is a given traction on ∂Ω. Let u0 denote the background solution, that is, the solution to ( ∇ · C∇s u0 = 0 in Ω, (C∇s u0 )n = g on ∂Ω. Introduce the Neumann function NΩ (x, y), y ∈ Ω, as the solution to   ∇ · C∇s NΩ (x, y) = −δy (x)I in Ω, 1  (C∇s NΩ )(x, y)n = − I on ∂Ω, |∂Ω|

SMALL-VOLUME EXPANSIONS OF THE DISPLACEMENT FIELDS

65

which satisfies the orthogonality condition (1.85). Let vij be the unique solution of the transmission problem ( e + χ d C)∇s vij = 0 in Rd , ∇ · (χB C R \B (3.54) vij (x) − xi ej = O(|x|1−d ) as |x| → +∞. In the anisotropic case the elastic moment tensor M = (mijpq ) is defined by Z e s vij · xp eq dx. mijpq = (C − C)∇ B

The following asymptotic expansion for the perturbations uǫ − u0 as ǫ → 0 holds uniformly on ∂Ω [45]: (uǫ − u0 )(x) = −ǫd ∇u0 (z) : M∇z NΩ (x, z) + O(ǫd+1 ).

(3.55)

Formula (3.55) can be easily extended to the harmonic case. CONCLUDING REMARKS In this chapter we have rigorously derived first-order asymptotic expansions for the displacement boundary perturbations due to the presence of a small elastic inclusion. These asymptotics are with respect to the size of the perturbation. They are valid for Lipschitz inclusions and for extreme values of the bulk and shear moduli (zero or ∞) provided that the compressional modulus stays bounded. The approach of this chapter can be extended for anisotropic inclusions and for thin inclusions as well; see [46]. However, we expect that, when the inclusion is thin and its thickness tends to zero, the asymptotic expansion is not uniform with respect to its elastic parameters.

Chapter Four Boundary Perturbations due to the Presence of Small Cracks The displacement (or traction) vector can be perturbed due to the presence of a small crack in an elastic medium. The aim of this chapter is to derive an asymptotic formula for the boundary perturbations of the displacement as the length of the crack tends to zero. The focus is on cracks with homogeneous Neumann boundary conditions. We consider the linear isotropic elasticity system in two dimensions and assume that the crack is a line segment of small size. The derivation of the asymptotic formula is based on layer potential techniques. The results of this chapter reveal that the leading order term of the boundary perturbation is ǫ2 where ǫ is the length of the crack, and its intensity is given by the traction force of the background solution on the crack. We also prove that the ǫ3 -order term vanishes. By integrating the boundary perturbation formula against the given traction, we are able to derive an asymptotic expansion for the perturbation of the elastic potential energy. The boundary perturbation formula derived in this chapter carries information about the location, size, and orientation of the crack. The results of this chapter are from [34]. The chapter is organized as follows. In Section 4.1, a representation formula for the solution of the problem in the presence of a Neumann crack is derived. Section 4.2 is devoted to making explicit the hyper-singular character involved in the representation formula. Using analytical results for the finite Hilbert transform, we derive in Section 4.3 an asymptotic expansion of the effect of a small Neumann crack on the boundary values of the solution. Section 4.4 aims to derive the topological derivative of the elastic potential energy functional. In Section 4.5 a useful representation formula for the Kelvin matrix of the fundamental solutions of Lamé system is proved. Section 4.6 gives an asymptotic formula for the effect of a small linear crack in the time-harmonic regime. 4.1

A REPRESENTATION FORMULA

Let Ω ⊂ R2 be an open bounded domain, whose boundary ∂Ω is of class C 1,α for some α > 0. We assume that Ω is a homogeneous isotropic elastic body so that its elasticity tensor C = (Cijkl ) is given by (1.8) with the Lamé coefficients λ and µ satisfying (1.5). Let γǫ ⊂ Ω be a small straight crack. The crack is a segment characterized by its length ǫ, its center z, and its orientation e:  γǫ = x ∈ Ω, x = z + se, s ∈ [−ǫ/2, ǫ/2] . We assume that the crack γǫ is located at some fixed distance c0 from ∂Ω, i.e., dist(γǫ , ∂Ω) ≥ c0 .

BOUNDARY PERTURBATIONS DUE TO THE PRESENCE OF SMALL CRACKS

67

We denote by e⊥ a unit normal to γǫ . Let uǫ be the displacement vector caused by the traction g ∈ L2Ψ (∂Ω), with L2Ψ (∂Ω) being defined by (1.45), applied on the boundary ∂Ω in the presence of γǫ . Then uǫ is the solution to   ∇ · σ(uǫ ) = 0 in Ω \ γ ǫ , (4.1) σ(uǫ ) n = g on ∂Ω,   ⊥ σ(uǫ ) e = 0 on γǫ ,

where n is the outward unit normal to ∂Ω and σ(uǫ ) is the stress defined by σ(uǫ ) = C∇s uǫ :=

1 C(∇uǫ + ∇utǫ ). 2

(4.2)

Note that the functions in Ψ are solutions to the homogeneous problem (4.1) with g = 0. So we impose the orthogonality condition on uǫ to guarantee the uniqueness of a solution to (4.1): Z uǫ · ψ dσ = 0 for all ψ ∈ Ψ. (4.3) ∂Ω

Let u0 be the solution in the absence of the crack, i.e., the solution to ( ∇ · σ(u0 ) = 0 in Ω, σ(u0 ) n = g on ∂Ω,

(4.4)

with the orthogonality condition: u0 |∂Ω ∈ L2Ψ (∂Ω) (or, equivalently, (4.3) with uǫ replaced with u0 ). The solution uǫ to (4.1) belongs to H 1 (Ω \ γǫ ). In fact, we have kuǫ kH 1 (Ω\γǫ ) ≤ C

(4.5)

for some C independent of ǫ. To see this, we introduce the potential energy functional Z 1 Jǫ [u] := − σ(u) : ∇s u dx. (4.6) 2 Ω\γǫ The solution uǫ of (4.1) is the maximizer of Jǫ , i.e., Jǫ [uǫ ] = max Jǫ [u],

(4.7)

where the maximum is taken over all u ∈ H 1 (Ω \ γǫ ) satisfying σ(u) n = g on ∂Ω and σ(u) e⊥ = 0 on γǫ . Let v be a smooth function with a compact support in Ω such that σ(v) e⊥ = −σ(u0 ) e⊥ on γǫ . We may choose v so that Jǫ [v] is independent of ǫ. Since 0 ≥ Jǫ [uǫ ] ≥ Jǫ [u0 + v], we have k∇s uǫ kL2 (Ω\γǫ ) ≤ C. We then have from Korn’s inequality (1.52) that there is a constant C independent of ǫ such that kuǫ − u0 kH 1 (Ω\γǫ ) ≤ C(k∇s (uǫ − u0 )kL2 (Ω\γǫ ) + kuǫ − u0 kH 1/2 (∂Ω) ).

(4.8)

68

CHAPTER 4

Since kuǫ − u0 kH 1/2 (∂Ω) is bounded regardless of ǫ as we shall show later (Theorem 4.1), we obtain (4.5).

Let ϕǫ (x) := uǫ |+ (x) − uǫ |− (x),

(4.9)

x ∈ γǫ ,

where + (resp. −) indicates the limit on the crack γǫ from the given normal direction e⊥ (resp. opposite direction), i.e., u± (x) := lim+ u(x ± te⊥ ). t→0

We sometimes denote σ(u) n, the traction on ∂Ω (or on γǫ ), by ∂u/∂ν as in previous chapters. If Γ = (Γij )2×2 is the Kelvin matrix of the fundamental solutions of Lamé system, then the solution uǫ to (4.1) is represented as Z Z ∂uǫ ∂Γ (x − y)uǫ (y) dσ(y) − Γ(x − y) (y) dσ(y) uǫ (x) = ∂ν(y) ∂ν(y) ∂Ω ∂Ω Z ∂Γ − (x − y)ϕǫ (y) dσ(y), x ∈ Ω \ γǫ . (4.10) ∂ν(y) γǫ The solution u0 to (4.4) is represented as Z Z ∂u0 ∂Γ (x − y)u0 (y) dσ(y) − Γ(x − y) (y) dσ(y). u0 (x) = ∂ν(y) ∂ν(y) ∂Ω ∂Ω Let (4.11)

wǫ := uǫ − u0 .

Since ∂uǫ /∂ν = ∂u0 /∂ν on ∂Ω, by subtracting the above two identities, we have wǫ (x) − DΩ [wǫ ](x) = −Dǫ [ϕǫ ](x),

(4.12)

x ∈ Ω,

where DΩ and Dǫ are the double layer potentials defined by Z ∂Γ DΩ [wǫ ](x) := (x − y)wǫ (y) dσ(y), x ∈ Ω ∂Ω ∂ν(y) and Dǫ [ϕǫ ](x) :=

Z

γǫ

∂Γ (x − y)ϕǫ (y) dσ(y), ∂ν(y)

It then follows from (4.12) that   1 − I + KΩ [wǫ ](x) = Dǫ [ϕǫ ](x), 2

x ∈ Ω.

(4.14)

(4.15)

x ∈ ∂Ω.

Since − 21 I + KΩ is invertible on L2Ψ (∂Ω) (Lemma 1.3), we have Z ∂ 1 wǫ (x) = (− I + KΩ )−1 [Γ(· − y)] (x)ϕǫ (y) dσ(y), 2 γǫ ∂ν(y)

(4.13)

x ∈ ∂Ω.

(4.16)

69

BOUNDARY PERTURBATIONS DUE TO THE PRESENCE OF SMALL CRACKS

Thus we obtain from (1.86) and (4.16) that Z ∂ uǫ (x) = u0 (x) + NΩ (x, y)ϕǫ (y) dσ(y), ∂ν(y) γǫ

(4.17)

x ∈ ∂Ω.

We now describe the scheme to derive an asymptotic expansion of uǫ − u0 on ∂Ω. Since ∂uǫ /∂ν = σ(uǫ )e⊥ = 0 on γǫ , we use (4.12) to obtain ∂ ∂ ∂u0 + DΩ [wǫ ] = Dǫ [ϕǫ ] on γǫ . ∂ν ∂ν ∂ν

(4.18)

We solve this integral equation for ϕǫ and then substitute it into (4.17) to derive an asymptotic expansion of uǫ as ǫ → 0.

4.2

DERIVATION OF AN EXPLICIT INTEGRAL EQUATION

In view of (4.18), we need to compute





∂ ∂Γ ∂ν(x) ∂ν(y) (x − y) on γǫ . As before by e⊥ = (n1 , n2 ). We emphasize that

e⊥

is the unit normal to γǫ , and denoted n1 and n2 are constant since γǫ is a line segment. It is convenient to use the following expression of the conormal derivative: ∂u = V (∂)u, ∂ν

(4.19) ∂ ∂xj ,

where the operator V (∂) = V (∂1 , ∂2 ), with ∂j = V (ξ1 , ξ2 ) :=

" (λ + 2µ)n1 ξ1 + µn2 ξ2

is defined by

µn2 ξ1 + λn1 ξ2

λn2 ξ1 + µn1 ξ2

µn1 ξ1 + (λ + 2µ)n2 ξ2

#

.

(4.20)

We first obtain the following formula whose derivation will be given in Section 4.5. For x 6= y, we have   2  ∂Γ  (xi − yi )(xj − yj ) X nl (xl − yl ) (x − y) = a δij + b ∂ν(y) |x − y|2 |x − y|2 ij l=1

nj (xi − yi ) − ni (xj − yj ) , −a |x − y|2

where a=−

µ , 2π(λ + 2µ)

b=−

(λ + µ) . π(λ + 2µ)

(4.21)

(4.22)

∂Γ Let vij = ( ∂ν(y) (x − y))ij for convenience and let

W (x − y) :=

∂ ∂ν(x)



 ∂Γ (x − y) . ∂ν(y)

(4.23)

70

CHAPTER 4

Then one can use (4.20) to derive W (x − y)11 W (x − y)12

W (x − y)21 W (x − y)22

= =

n1 [(λ + 2µ)∂1 v11 + λ∂2 v21 ] + n2 (µ∂2 v11 + µ∂1 v21 ), n1 [(λ + 2µ)∂1 v12 + λ∂x2 v22 ] + n2 (µ∂2 v12 + µ∂1 v22 ),

= =

n1 (µ∂2 v11 + µ∂1 v21 ) + n2 [λ∂1 v11 + (λ + 2µ)∂2 v21 ], n1 (µ∂2 v12 + µ∂1 v22 ) + n2 [λ∂1 v12 + (λ + 2µ)∂2 v22 ].

Since the crack which we consider is a line segment with length ǫ in the domain Ω ⊂ R2 , we may assume, after rotation and translation if necessary, that it is given by γǫ = {(x1 , 0) : −ǫ/2 ≤ x1 ≤ ǫ/2}. (4.24) In this case, one can check that   1 a+b (x1 − y1 )2 = , ∂2 v11 = a + b 2 2 |x − y| |x − y| (x1 − y1 )2   1 x1 − y1 x1 − y1 a ∂1 v21 = a − 2 =− , 2 2 2 |x − y| |x − y| |x1 − y1 | (x1 − y1 )2   1 x1 − y1 x1 − y1 a ∂1 v12 = −a −2 = , 2 2 2 |x − y| |x − y| |x1 − y1 | (x1 − y1 )2 a ∂2 v22 = (x1 − y1 )2 ∂1 v11 = ∂2 v21 = ∂2 v12 = ∂1 v22 = 0.

Since e⊥ = (n1 , n2 ) = (0, 1), we have W (x − y)11

= =

W (x − y)12 W (x − y)21 W (x − y)22

= = = =

that is,

Note that

µ∂2 v11 + µ∂1 v21 1 1 µb µ(a + b) − µa = , (x1 − y1 )2 (x1 − y1 )2 (x1 − y1 )2 µ∂2 v12 + µ∂1 v22 = 0, λ∂1 v11 + (λ + 2µ)∂2 v21 = 0, λ∂1 v12 + (λ + 2µ)∂2 v22 1 1 2(λ + µ)a λa + (λ + 2µ)a = , (x1 − y1 )2 (x1 − y1 )2 (x1 − y1 )2

" µ(λ+µ) − π(λ+2µ) 1 W (x − y) = (x1 − y1 )2 0

0 µ(λ+µ) − π(λ+2µ)

µ(λ + µ) E = , λ + 2µ 4

#

.

(4.25)

(4.26)

where E is the Young modulus in two dimensions. So, we have W (x − y) = −

E 1 I. 4π (x1 − y1 )2

(4.27)

BOUNDARY PERTURBATIONS DUE TO THE PRESENCE OF SMALL CRACKS

71

So far we have shown that if γǫ is given by (4.24), then Z

E ∂ Dǫ [ϕǫ ](x) = − ∂ν(x) 4π

ǫ/2

−ǫ/2

ϕǫ (y) dy, (x − y)2

x = (x, 0), −ǫ/2 < x < ǫ/2. (4.28)

Here the integral is hyper-singular and should be understood as a finite part in the sense of Hadamard, which will be defined in the next section. So the integral equation (4.18) becomes 1 π

Z

ǫ/2

−ǫ/2

ϕǫ (y) 4 dy = − f (x), 2 (x − y) E

where f (x) =

−ǫ/2 < x < ǫ/2,

∂u0 ∂ (x, 0) + DΩ [wǫ ](x, 0). ∂ν ∂ν

Define

ǫ fǫ (x) := f ( x), 2

(4.29)

(4.30) (4.31)

and

ǫ 2 ϕǫ ( x), ǫ 2 Then the scaled integral equation is

−1 < x < 1.

ψǫ (x) :=

1 π

Z

1

−1

4 ψǫ (y) dy = − fǫ (x), 2 (x − y) E

−1 < x < 1,

(4.32)

(4.33)

which we solve in the next section. 4.3

ASYMPTOTIC EXPANSION

The integral in (4.33) is understood as a finite-part in the sense of Hadamard [134, 135]: for ψ ∈ C 1,α (−1, 1) (0 < α ≤ 1) Z 1 ×

−1

h ψ(y) dy = lim δ→0 (x − y)2

Z

x−δ

−1

Define A[ψ](x) := As shown in [111, 133], we have

ψ(y) dy + (x − y)2

Z

1 x+δ

ψ(y) 2ψ(x) i dy − . (x − y)2 δ

Z 1 1 ψ(y) × dy, π −1 (x − y)2

A[ψ](x) = −

|x| < 1.

d H[ψ](x), dx

(4.34)

(4.35)

(4.36)

where H is the (finite) Hilbert transform, i.e., H[ψ](x) = p.v.

1 π

Z

1

−1

ψ(y) dy. x−y

(4.37)

More properties of finite-part integrals and principal-value integrals can be found in [32, 120, 134, 135, 143, 170].

72

CHAPTER 4

If ψ(−1) = ψ(1) = 0, we have from (4.36) that A[ψ](x) = −H[ψ ′ ](x).

(4.38)

Thus we can invert the operator A using the properties of H. The set Y, given by  Z 1p  Y = ϕ: 1 − x2 |ϕ(x)|2 dx < +∞ , (4.39) −1

is a Hilbert space with the norm ||ϕ||Y =

Z

1

−1

1/2 p 1 − x2 |ϕ(x)|2 dx .

It is shown in [32, section 5.2] that H √ maps Y onto itself and its null space is the one dimensional space generated by 1/ 1 − x2 . Therefore, if we define   0 ′ X = ψ ∈ C ( [ −1, 1 ] ) : ψ ∈ Y, ψ(−1) = ψ(1) = 0 , (4.40) where ψ ′ is the distributional derivative of ψ, then A : X → Y is invertible. We note that X is a Banach space with the norm ||ψ||X = ||ψ||L∞ + ||ψ ′ ||Y . Using the Hilbert inversion formula (see, for example, [32, section 5.2]), we can check that p A−1 [1](x) = − 1 − x2 , (4.41) p x A−1 [y](x) = − 1 − x2 . (4.42) 2 The equation (4.33) can be written as A[ψǫ ](x) = −

4 fǫ (x), E

−1 < x < 1.

(4.43)

The Taylor expansion yields ∂u0 ǫ ∂u0 ǫx ∂ 2 u0 ( x, 0) = (0) + (0) + e1 (x), ∂ν 2 ∂ν 2 ∂T ∂ν

(4.44)

where ∂/∂T denotes the tangential derivative on γǫ . The remainder term e1 satisfies |e1 (x)| ≤ Cǫ2 |x|2 , and in particular, ke1 kY ≤ Cǫ2 .

On the other hand, since ∂ ǫ DΩ [wǫ ]( x, 0) = ∂ν 2

Z

∂Ω

∂ 2 NΩ ǫ (( x, 0), y)wǫ (y) dσ(y), ∂ν(x)∂ν(y) 2

(4.45)

73

BOUNDARY PERTURBATIONS DUE TO THE PRESENCE OF SMALL CRACKS

and γǫ is away from ∂Ω, one can see that





DΩ [wǫ ]( ǫ ·, 0) ≤ Ckwǫ kL∞ (∂Ω) .

∂ν 2 Y

(4.46)

∂u0 ǫx ∂ 2 u0 (0) + (0) + e(x), ∂ν 2 ∂T ∂ν

(4.47)

Therefore, we have

fǫ (x) = where e satisfies

 kekY ≤ C ǫ + kwǫ kL∞ (∂Ω) . 

2

(4.48)

We now obtain from (4.43) that   4 ∂u0 ǫ ∂ 2 u0 −1 −1 −1 ψǫ (x) = − (0)A [1](x) + (0)A [y](x) + A [e](x) . E ∂ν 2 ∂T ∂ν

(4.49)

Note that E1 (x) = A−1 [e](x) satisfies kE1 kX

  2 ≤ CkekY ≤ C ǫ + kwǫ kL∞ (∂Ω) ,

and in particular,   kE1 kL∞ (−1,1) ≤ C ǫ2 + kwǫ kL∞ (∂Ω) .

(4.50)

It then follows from (4.41) and (4.42) that   p ǫ ∂ 2 u0 4 ∂u0 p (0) 1 − x2 + (0)x 1 − x2 + E1 (x) . ψǫ (x) = E ∂ν 4 ∂T ∂ν

Thus we have from (4.32) that   p 2 ∂u0 p 2 1 ∂ 2 u0 ϕǫ (x) = (0) ǫ − 4x2 + (0)x ǫ2 − 4x2 + E(x) , E ∂ν 2 ∂T ∂ν

(4.51)

(x, 0) ∈ γǫ ,

(4.52)

where E(x) = 2ǫ E1 ( 2ǫ x) satisfies

  kEkL∞ (γǫ ) ≤ Cǫ ǫ2 + kwǫ kL∞ (∂Ω) .

Substituting (4.52) into (4.17) we obtain Z p 2 ∂ ∂u0 wǫ (x) = NΩ (x, (y, 0)) ǫ2 − 4y 2 dy (0) E γǫ ∂ν(y) ∂ν Z p 1 ∂ 2 u0 ∂ + NΩ (x, (y, 0))y ǫ2 − 4y 2 dy (0) E γǫ ∂ν(y) ∂T ∂ν Z 2 ∂ + NΩ (x, (y, 0))E(y) dy := I + II + III, E γǫ ∂ν(y)

(4.53)

x ∈ ∂Ω.

(4.54)

74

CHAPTER 4

Since ∂ ∂2 ∂ NΩ (x, (y, 0)) = NΩ (x, 0) + NΩ (x, 0)y + O(y 2 ), ∂ν(y) ∂ν(y) ∂T (y)∂ν(y) we have Z

γǫ

Z p p ∂ ∂ NΩ (x, (y, 0)) ǫ2 − 4y 2 dy = NΩ (x, 0) ǫ2 − 4y 2 dy ∂ν(y) ∂ν(y) γǫ Z p ∂2 + NΩ (x, 0) y ǫ2 − 4y 2 dy + O(ǫ4 ) ∂T (y)∂ν(y) γǫ =

and hence I=

πǫ2 ∂ NΩ (x, 0) + O(ǫ4 ), 2 ∂ν(y)

πǫ2 ∂ ∂u0 NΩ (x, 0) (0) + O(ǫ4 ). E ∂ν(y) ∂ν

(4.55)

Here and throughout this chapter, O(ǫ4 ) is in the sense of the uniform norm on ∂Ω. Similarly, one can show that II = O(ǫ4 ). (4.56) So we obtain that wǫ (x) =

∂u0 πǫ2 ∂ NΩ (x, 0) (0) + O(ǫ4 ) + III. E ∂ν(y) ∂ν

(4.57)

In particular, we have kwǫ kL∞ (∂Ω) ≤ C(ǫ2 + |III|).

But, because of (4.53), we arrive at

|III| ≤ CǫkEkL∞ (γǫ ) ≤ Cǫ2 (ǫ2 + kwǫ kL∞ (∂Ω) ), and hence kwǫ kL∞ (∂Ω) ≤ Cǫ2 (1 + kwǫ kL∞ (∂Ω) ).

So, if ǫ is small enough, then

kwǫ kL∞ (∂Ω) ≤ Cǫ2 .

(4.58)

It then follows from (4.57) that wǫ (x) =

πǫ2 ∂ ∂u0 NΩ (x, 0) (0) + O(ǫ4 ). E ∂ν(y) ∂ν

(4.59)

We obtain the following theorem. Theorem 4.1. Suppose that γǫ is a linear crack of size ǫ and z is the center of γǫ . Then the solution to (4.1) has the following asymptotic expansion: (uǫ − u0 )(x) =

πǫ2 ∂NΩ ∂u0 (x, y) (z) + O(ǫ4 ) E ∂ν(y) y=z ∂ν

uniformly on x ∈ ∂Ω. Here E is the Young modulus defined by (4.26).

(4.60)

75

BOUNDARY PERTURBATIONS DUE TO THE PRESENCE OF SMALL CRACKS

It is worth emphasizing that in (4.60) the error is O(ǫ4 ) and the ǫ3 -term vanishes. One can see from the derivation of (4.60) that the ǫ3 -term vanishes because γǫ is a line segment. If it is a curve, then we expect that the ǫ3 -term does not vanish. We also emphasize that (4.60) is a point-wise asymptotic formula, and it can be used to design algorithms to reconstruct cracks from boundary measurements. We can also integrate this formula against the traction g to obtain the asymptotic formula for the perturbation of the elastic energy as we do in the next section. Similarly, if we consider the Dirichlet problem   ∇ · σ(uǫ ) = 0 in Ω \ γ ǫ , (4.61) uǫ = f on ∂Ω,   σ(uǫ ) e⊥ = 0 on γǫ ,

and denote the Dirichlet function of Lamé system in Ω by GΩ , that is, the solution to ( ∇ · σ(GΩ (x, y)) = −δy (x)I in Ω, GΩ = 0 on ∂Ω, then we get the following asymptotic expansion of its solution uǫ . Theorem 4.2. Suppose that γǫ is a linear crack of size ǫ, located at z. Then the solution to (4.61) has the following asymptotic expansion: ∂ ∂ 2 GΩ πǫ2 ∂u0 (uǫ − u0 )(x) = (x, y) (z) + O(ǫ4 ) ∂ν E ∂ν(x)∂ν(y) y=z ∂ν

(4.62)

uniformly on x ∈ ∂Ω. 4.4

TOPOLOGICAL DERIVATIVE OF THE POTENTIAL ENERGY

The elastic potential energy functional of the cracked body is given by (4.6), while without the crack the energy functional is given by Z 1 J[u0 ] = − σ(u0 ) : ∇s u0 dx. (4.63) 2 Ω By the divergence theorem we have Jǫ [uǫ ] − J[u0 ] = −

1 2

Z

∂Ω

(4.64)

(uǫ − u0 ) · g dσ.

Thus we obtain from (4.60) Jǫ [uǫ ] − J[u0 ] = −

πǫ2 ∂u0 ∂ (z) 2E ∂ν ∂ν(y)

Since u0 (y) =

Z

∂Ω

Z

∂Ω

NΩ (x, y)g(x) dσ(x)

NΩ (x, y)g(x) dσ(x),

y ∈ Ω,

y=z

+ O(ǫ4 ).

76

CHAPTER 4

we have Jǫ [uǫ ] − J[u0 ] = −

2 πǫ2 ∂u0 (z) + O(ǫ4 ). 2E ∂ν

(4.65)

We may write (4.65) in terms of the stress intensity factors. The (normalized) stress intensity factors KI and KII are defined by KI (u0 , e) := σ(u0 )e⊥ · e⊥

and KII (u0 , e) := σ(u0 )e⊥ · e.

(4.66)

So, we have σ(u0 )e⊥ = KI e⊥ + KII e,

(4.67)

and hence

∂u 2 0 2 (z) = |σ(u0 )e⊥ |2 = KI2 + KII . ∂ν We obtain the following result.

(4.68)

Theorem 4.3. We have Jǫ [uǫ ] − J[u0 ] = −

πǫ2 2 2 (K + KII ) + O(ǫ4 ) 2E I

(4.69)

as ǫ → 0. The topological derivative DT Jǫ (z) of the potential energy at the crack location z is defined by [163, 164]   1 d J (4.70) DT Jǫ (z, e) := lim ǫ , ǫ→0 ρ′ (ǫ) dǫ where ρ(ǫ) = πǫ2 and ρ′ (ǫ) = 2πǫ. So, one can immediately see from (4.69) that DT Jǫ (z, e) = −

1 2 (K 2 + KII ). 2E I

(4.71)

This formula is in accordance with the one obtained in [99]. It is worth emphasizing that −DT Jǫ (z, e) can be interpreted as the energy release rate due to the crack growth at z and in the direction e. Moreover, (4.69) shows that the presence of a crack anywhere in Ω and with any orientation provides a lower potential energy compared to the absence of any crack. Finally, (4.71) provides an explicit criterion for the determination of the weakest parts of Ω with respect to crack initiation. The nucleation points z∗ and orientation e∗ for crack initiation can be sought such that DT Jǫ (z∗ , e∗ ) = min DT Jǫ (z, e). z∈Ω,e

See [99]. 4.5

DERIVATION OF THE REPRESENTATION FORMULA

In this section we prove the representation formula (4.21). The Kelvin matrix (1.33) can be rewritten as Γij (x − y) = λ′ δij ln |x − y| + µ′ (xi − yi )

∂ ln |x − y| , ∂yj

i, j = 1, 2,

(4.72)

77

BOUNDARY PERTURBATIONS DUE TO THE PRESENCE OF SMALL CRACKS

where λ′ =

λ + 3µ , 4πµ(λ + 2µ)

µ′ =

λ+µ . 4πµ(λ + 2µ)

Using the operator V (∂) defined by (4.20) one can see that ∂Γ (x − y) = (V (∂y )Γ(x − y))t , ∂ν(y) or

2  ∂Γ  X (x − y) := Vjl (∂y )Γlk (x − y), ∂ν(y) kj

(4.73)

k, j = 1, 2.

(4.74)

l=1

We use the formulas

∂ ln |x − y| ∂yi ∂2 ln |x − y| ∂yi2 ∂2 ln |x − y| ∂yi ∂yj

xi − yi , |x − y|2 (xi − yi )2 1 = −2 + , 4 |x − y| |x − y|2 (xi − yi )(xj − yj ) = −2 if i 6= j. |x − y|4 = −

By (4.74), we have  ∂Γ  ∂ν(y) 11

=

=

  ∂ ∂ (λ + 2µ)n1 + µn2 ∂y1 ∂y2   ∂ ln |x − y| × λ′ ln |x − y| + µ′ (x1 − y1 ) ∂y1   ∂ ∂ ∂ ln |x − y| + µn2 + λn1 µ′ (x1 − y1 ) ∂y1 ∂y2 ∂y2 ∂ ln |x − y| ∂ ln |x − y| λ′ (λ + 2µ)n1 + λ′ µn2 ∂y1 ∂y2   ∂ ln |x − y| ∂ 2 ln |x − y| ′ + µ (λ + 2µ)n1 − + (x1 − y1 ) ∂y1 ∂y1 2 2 ∂ ln |x − y| ∂ ln |x − y| −µµ′ n2 + 2µµ′ n2 (x1 − y1 ) ∂y2 ∂y1 ∂y2 2 ∂ ln |x − y| +λµ′ (x1 − y1 ) . ∂y2 2

Since ∆ ln |x − y| = 0 for x 6= y, we have  ∂Γ  ∂ν(y) 11

=

∂ ln |x − y| ∂ ln |x − y| (λ + 2µ)(λ′ − µ′ )n1 + µ(λ′ − µ′ )n2 ∂y1 ∂y2   2 2 ∂ ln |x − y| ∂ ln |x − y| ′ +2µµ n1 (x1 − y1 ) + n2 (x1 − y1 ) . ∂y1 2 ∂y1 ∂y2

Since (λ + 2µ)(µ′ − λ′ ) + 2µµ′ = −

µ = µ(µ′ − λ′ ), 2π(λ + 2µ)

78

CHAPTER 4

we obtain   2  ∂Γ  (x1 − y1 )2 X xl − yl = µ(µ′ − λ′ ) − 4µµ′ nl . ∂ν(y) 11 |x − y|2 |x − y|2 l=1

Similarly, we can compute    ∂Γ  ∂ ∂ = λn2 + µn1 ∂ν(y) 12 ∂y1 ∂y2   ∂ ln |x − y| ′ ′ × λ ln |x − y| + µ (x1 − y1 ) ∂y1   ∂ ln |x − y| ∂ ∂ + µn1 µ′ (x1 − y1 ) + (λ + 2µ)n2 ∂y1 ∂y2 ∂y2 ∂ ln |x − y| ∂ ln |x − y| + µ(λ′ − µ′ )n1 = λ(λ′ − µ′ )n2 ∂y1 ∂y2 2 ∂ ln |x − y| +2µµ′ n1 (x1 − y1 ) ∂y1 ∂y2 2 ∂ ln |x − y| +2µµ′ n2 (x1 − y1 ) . ∂y2 2 Since λ(µ′ − λ′ ) + 2µµ′ = µ(λ′ − µ′ ), we have  ∂Γ  ∂ν(y) 12

x1 − y1 x2 − y2 + µ(µ′ − λ′ )n1 2 |x − y| |x − y|2 2 (x1 − y1 ) (x2 − y2 ) −4µµ′ n1 |x − y|4 (x1 − y1 )(x2 − y2 )2 −4µµ′ n2 |x − y|4

= [λ(µ′ − λ′ ) + 2µµ′ ]n2

2

(x1 − y1 )(x2 − y2 ) X xl − yl nl |x − y|2 |x − y|2 l=1   n2 (x1 − y1 ) − n1 (x2 − y2 ) ′ ′ − µ(µ − λ ) . |x − y|2

= −4µµ′

We also have  ∂Γ 

  ∂ ∂ ∂ ln |x − y| = λn2 + µn1 µ′ (x2 − y2 ) ∂ν(y) 22 ∂y1 ∂y2 ∂y1   ∂ ∂ + (λ + 2µ)n2 + µn1 ∂y1 ∂y2   ∂ ln |x − y| ′ ′ × λ ln |x − y| + µ (x2 − y2 ) ∂y2   2 2 X xl − yl ′ ′ ′ (x2 − y2 ) = µ(µ − λ ) − 4µµ nl . |x − y|2 |x − y|2 l=1

This proves (4.21).

BOUNDARY PERTURBATIONS DUE TO THE PRESENCE OF SMALL CRACKS

4.6

79

TIME-HARMONIC REGIME

In the time-harmonic regime,  2  ∇ · σ(u) + ω ρu = 0 σ(u) n = g   σ(u) e⊥ = 0

in Ω \ γ ǫ , on ∂Ω, on γǫ ,

(4.75)

we can derive an asymptotic formula for the effect of a small linear crack similar to (4.60) by replacing NΩ with the Neumann function Nω Ω associated with the time-harmonic elasticity system. Theorem 4.4. Suppose that ω 2 ρ is not a Neumann eigenvalue for −Lλ,µ on Ω. We have πǫ2 ∂Nω ∂u0 Ω (uǫ − u0 )(x) = (x, y) (z) + O(ǫ4 ) (4.76) E ∂ν(y) y=z ∂ν uniformly on x ∈ ∂Ω, where u0 is the solution to ( ∇ · σ(u0 ) + ω 2 ρu0 = 0 in Ω, σ(u) n = g on ∂Ω.

CONCLUDING REMARKS In this chapter, we have considered Neumann cracks in elastic bodies and established an asymptotic expansion for the perturbations of the displacement measurements at the surface. The magnitude of the perturbations is of order square the length of the crack. Our asymptotic expansion will be used for designing effective direct approaches for locating a collection of small elastic cracks and estimating their sizes and orientations.

Chapter Five Backpropagation and Multiple Signal Classification Imaging of Small Inclusions In this chapter we apply the asymptotic formulas derived in Chapters 3 and 4 for the purpose of identifying the locations and certain properties of the elastic inclusions and cracks. Using (3.44), (3.48), (4.62), and (4.76), least-squares solutions to the inclusion and crack imaging problems for the static and time-harmonic elasticity equations can be computed. However, the computations are done iteratively and may be difficult because of the nonlinear dependence of the data on the location, the physical parameter, the size, and the orientation of the inclusion or the crack. Moreover, there may be considerable non-uniqueness of the minimizer in the case where all parameters of the inclusions or the cracks are unknown [47]. In order to overcome this difficulty, we construct various direct (non-iterative) reconstruction algorithms that take advantage of the smallness of the elastic inclusions, in particular, MUltiple SIgnal Classification algorithm (MUSIC), reverse-time migration, and Kirchhoff migration. The topological derivative based imaging will be investigated in the next chapter. Even though we consider only the two dimensional case, the same algorithms can be applied in three dimensions, as illustrated numerically in this chapter. These direct location search algorithms have been applied to the imaging of small inclusions in antiplane elasticity and scalar wave propagation [23]. The objective of this chapter is to extend them to the general case of linear isotropic elasticity. The main difficulty encountered centers on the tensorial nature of the fundamental solutions to the elasticity equations and the coupling between the shear and compressional parts of the displacement fields. This chapter is organized as follows. In Section 5.1 we describe least-squares algorithms for locating small elastic inclusions. Section 5.2 presents a MUSICtype location search algorithm in the static regime. This algorithm is extended to the time-harmonic regime in Section 5.3. Section 5.4 is devoted to reversetime migration and Kirchhoff imaging. Three-dimensional simulations to illustrate relevant features of the MUSIC method are presented in Section 5.5. 5.1

A NEWTON-TYPE SEARCH METHOD

For the sake of simplicity, we only consider the two dimensional case. As a direct application of the asymptotic formula (3.9), we start with a least-squares algorithm for locating an elastic inclusion D and retrieving its elastic moment tensor. Let Ω be a bounded domain with Lipschitz boundary in R2 . For a traction of the form g (j) = (CW(j) )n, j = 1, 2, 3, on ∂Ω,

81

BACKPROPAGATION AND MULTIPLE SIGNAL CLASSIFICATION IMAGING

where W

(1)

 1 = 0

 0 , 1

W

(2)

 0 = 1

 1 , 0

W

(3)

 0 = 0

(j)

 0 1 (j)

form a basis of the space of 2×2 symmetric matrices, let uǫ and u0 be the solution of the static elastic problem (3.27) with and without inclusion, respectively. We assume that the inclusion D is located at z, its characteristic size is ǫ, and its EMT is ǫ2 M. We have Z (i) S ∇z u0 (z ) = ∇z NΩ (x, z S )g (i) (x) dσ(x) = W(i) for any z S ∈ Ω. ∂Ω

Then it follows from Theorem 3.9 that Z (j) (i) 2 (j) (u(j) : MW(i) + O(ǫ3 ), ǫ − u0 )(x) · g (x) dσ(x) = −ǫ W

i, j = 1, 2, 3.

∂Ω

(5.1) Therefore, the EMT of the inclusion D, ǫ2 M, can be recovered, modulo O(ǫ). In order to detect the location of D, one may solve the following least-squares problem: min

z S ∈Ω

3 Z X j=1

∂Ω

2 1 (j) 2 (j) S (− I + KΩ )[u(j) : M∇zS Γ(x, z ) dσ(x), ǫ − u0 ](x) − ǫ W 2

since by (1.86) and Theorem 3.9 we have

1 (j) 2 (j) (− I + KΩ )[u(j) : M∇z Γ(x, z) + O(ǫ3 ). ǫ − u0 ](x) = ǫ W 2 In the case of a small crack (with location at z and unit normal e⊥ ), a direct application of (4.62) yields Z (j) (i) (u(j) ǫ − u0 )(x) · g (x) dσ(x) ∂Ω     πǫ2 λ(tr W(i) )e⊥ + 2µW(i) e⊥ · λ(tr W(j) )e⊥ + 2µW(j) e⊥ = E +O(ǫ4 ), for i, j = 1, 2, 3, since (j)

∂u0 (z) = λ(tr W(j) )e⊥ + 2µW(j) e⊥ . ∂ν This allows us to determine ǫ and e⊥ . In order to detect the location of the crack one can solve the least-squares problem: min

z S ∈Ω

1 (− I + KΩ )[u(j) − u(j) ](x) ǫ 0 2 ∂Ω   2 πǫ2 ∂Γ S (j) ⊥ (j) ⊥ − (x, z ) λ(tr W )e + 2µW e dσ(x). S E ∂ν(z )

3 Z X j=1

The proposed least-squares algorithms may and may not converge to the true

82

CHAPTER 5

location of the inclusion depending on initial guess [47]. In order to overcome this difficulty, a MUSIC-type algorithm is proposed in the next section. 5.2

A MUSIC-TYPE METHOD IN THE STATIC REGIME

For the sake of simplicity, we take Ω to be the unit disk centered at the origin and choose N ≫ 1 equi-distributed points xi along the boundary. Let the unit vectors θ1 , . . . , θN be the corresponding observation directions.   Suppose that the measured N × 3 matrix A :=

(j)

(j)

(− 21 I + KΩ )[uǫ − u0 ](xi ) · θi

has the spectral

i,j

decomposition

A=

3 X l=1

σl vl ⊗ wl ,

where σl are the singular values of A and vl and wl are the corresponding left and right singular vectors. Let P : RN → span{v1 , v2 , v3 } be the orthogonal projector P=

3 X l=1

vl ⊗ vl .

Let a ∈ R2 \ {0}. One can prove that for a search point z S ∈ Ω, the vector f (j) (z S ; a) :=

 t (W(j) : M∇zS Γ(x1 , z S )) · a, . . . , (W(j) : M∇zS Γ(xN , z S )) · a

(5.2) in RN lies in the space spanned by columns of A if and only if z S = z [16]. In (5.2), M is estimated using (5.1). Thus one can form an image of the elastic inclusion by plotting, at each point z S , the MUSIC-type imaging functional IMU (z S ) = qP 3

1

,

(j) ](z S ; a)||2 j=1 ||(I − P)[f

where I is the N × N identity matrix. The resulting plot will have a large peak at the location of the inclusion. 5.3

A MUSIC-TYPE METHOD IN THE TIME-HARMONIC REGIME

In this section we extend the MUSIC algorithm to the time-harmonic regime. Let D e and µ be a small elastic inclusion (with location at z and Lamé parameters λ e). Let xi , i = 1, . . . , N be equi-distributed points along the boundary of Ω for N ≫ 1. The array of N elements {x1 , . . . , xN } is used to detect the inclusion. Let θ1 , . . . , θN be the corresponding unit directions of incident fields/observation directions. The array of elements {x1 , . . . , xN } is operating both in transmission and in reception. We choose the background displacement to be such that (j)

u0 (x) = Γω (x, xj )θj ,

x ∈ Ω.

(5.3)

BACKPROPAGATION AND MULTIPLE SIGNAL CLASSIFICATION IMAGING

83

From (3.49), we have  1 (j) ω ω ω 2 (− I + KΩ )[u(j) ǫ − u0 ](x) = −ǫ ∇z Γ (x, z) : M∇z (Γ (z, xj )θj ) 2  +ω 2 (ρ − ρe)|B|Γω (x, z)Γω (z, xj )θj

+ O(ǫ3 ).

The measured data is the N × N matrix given by   1 (j) Aω := (− I + KΩ )[u(j) − u ](x ) · θ . i i ǫ 0 2 i,j

(5.4)

(5.5)

For any point x ∈ Ω, let us introduce the N × 2 matrix of the incident field emitted by the array of N transmitters G(x, ω), which will be called the Green matrix, and the N × 3 matrix of the corresponding independent components of the stress tensors S(x, ω), which will be called the stress matrix: G(x, ω) = (Γω (x, x1 )θ1 , . . . , Γω (x, xN )θN )t , t

S(x, ω) = (s1 (x), . . . , sN (x)) ,

(5.6) (5.7)

where (j)

(j)

(j)

sj (x) = [σ11 (x), σ22 (x), σ12 (x)]t ,

σ (j) (x) = C∇s (Γω (x, xj )θj ).

One can see from (5.4) and (5.5) that the data matrix Aω is factorized as follows: Aω = −ǫ2 H(z, ω) D(ω) Ht (z, ω),

(5.8)

H(x, ω) = [S(x, ω), G(x, ω)]

(5.9)

where and D(ω) is a symmetric 5 × 5 matrix given by   L[M] 0 D(ω) = 0 ω 2 (ρ − ρe)|B|I

(5.10)

for some linear operator L. Consequently, the data matrix Aω is the product of three matrices Ht (z, ω), D(ω) and H(z, ω). The physical meaning of the above factorization is the following: the matrix Ht (z, ω) is the propagation matrix from the transmitter points toward the inclusion located at the point z, the matrix D(ω) is the scattering matrix and H(z, ω) is the propagation matrix from the inclusion toward the receiver points. Recall that MUSIC is essentially based on characterizing the range of data matrix Aω , so-called signal space, forming projections onto its null (noise) spaces, and computing its singular value decomposition. From the factorization (5.8) of Aω and the fact that the scattering matrix D is nonsingular (so, it has rank 5), the standard argument from linear algebra yields that, if N ≥ 5 and if the propagation matrix H(z, ω) has maximal rank 5 then the ranges Range(H(z, ω)) and Range(A) coincide. The following is a MUSIC characterization of the location of the elastic inclusion and is valid if N is sufficiently large.

84

CHAPTER 5

Proposition 5.1. Suppose that N ≥ 5. Let a ∈ C5 \ {0}, then H(z S )a ∈ Range(Aω )

if and only if

z S = z.

In other words, any linear combination of the vector columns of the propagation matrix H(z S , ω) defined by (5.9) belongs to the range of Aω (signal space) if and only if the points z S and z coincide. If the dimension of the signal space, s (≤ 5), is known or is estimated from the t singular value decomposition of Aω , defined by Aω = VΣU , then the MUSIC algorithm applies. Furthermore, if vi denote the column vectors of the matrix V then for any vector a ∈ C5 \ {0} and for any space point z S within the search domain, a map of the estimator IMU (z S ) defined as the inverse of the Euclidean distance from the vector H(z S , ω)a to the signal space by IMU (z S ) = qP N

1

S i=s+1 |vi · H(z , ω)a|

2

(5.11)

peaks (to infinity, in theory) at the center z of the inclusion. The visual aspect of the peak of IMU at z depends upon the choice of the vector a. A common choice which means that we are working with all the significant singular vectors is a = (1, 1, . . . , 1)t . However, we emphasize the fact that a choice of the vector a in (5.11) with dimension (number of nonzero components) much lower than 5 still permits one to image the elastic inclusion with our MUSIC-type algorithm. See the numerical results below. It is worth mentioning that the estimator IMU (z S ) is obtained via the projection of the linear combination of the vector columns of the Green matrix G(z S ) onto the noise subspace of the Aω for a signal space of dimension l if the dimension of a is l. Let us also point out here that the function IMU (z S ) does not contain any information about the shape and the orientation of the inclusion. Yet, if the position of the inclusion is found (approximately at least) via observation of the map of IMU (z S ), then one could attempt, using the decomposition (5.8), to retrieve the EMT of the inclusion (which is of order ǫ2 ). Finally, it is worth emphasizing that in dimension 3, the matrix D is 9 × 9 and is of rank 9. For locating the inclusion, the number N then has to be larger than 9. We also mention that the developed MUSIC algorithm applies to the crack location problem in the time-harmonic regime. 5.4

REVERSE-TIME MIGRATION AND KIRCHHOFF IMAGING IN THE TIME-HARMONIC REGIME

In this section we consider the time-harmonic regime. The perturbations of the boundary measurements due to the presence of a small inclusion or a small crack are given by the asymptotic expansions (3.48) and (4.76). Suppose for simplicity that a small elastic inclusion (with location at z) has only a density contrast and let the background displacement be the field generated by a point source at y ∈ Ω emitting along ej . Again, from (3.48), we have for x, y ∈ ∂Ω: 1 (j) ω 2 2 (− I + KΩ )[u(j) e)|B|Γω (x, z)Γω (z, y)ej + O(ǫ3 ). ǫ − u0 ](x, y) = −ǫ ω (ρ − ρ 2

85

BACKPROPAGATION AND MULTIPLE SIGNAL CLASSIFICATION IMAGING

Thus, for a search point z S ∈ Ω, it follows by using (1.77) that Z 1 (j) S ω (j) Γω α (z , x)(− I + KΩ )[uǫ − u0 ](x, y)dσ(x) 2 ∂Ω ǫ2 S ω ≃ ω(ρ − ρe)|B|(ℑm Γω α (z , z))Γ (z, y)ej . cα

We introduce the reverse-time migration imaging functional IRM,α (z S ) for α = p or s given by Z X Z 1 (j) S S ω (j) Γω (z , y)e · Γω j α α (z , x)(− I + KΩ )[uǫ − u0 ](x, y)dσ(x) dσ(y). 2 ∂Ω ∂Ω j=1,2

(5.12) IRM,α (z S ) consists in backpropagating with the α-Green function the data set   1 (j) ω (j) (− I + KΩ )[uǫ − u0 ](x, y), y ∈ ∂Ω, x ∈ ∂Ω, j = 1, 2 2 both from the source point y and the receiver point x. Using (1.77) and the reciprocity property (1.70) we obtain that IRM,α (z S ) ≃ −

ǫ2 (ρ − ρe)|B||ℑm Γα (z S , z)|2 . c2α

The imaging functional IRM,α (z S ) attains then its maximum (if ρ < ρe) or minimum (if ρ > ρe) at z S = z; see Figure 5.1. −10

−10

−8

−8

−6

−6

−4

−4

−2

−2

0

0

2

2

4

4

6

6

8

8

10 −10

10 −10

−5

0

5

10

−5

0

5

10

2 2 S Figure 5.1: Typical plots of ℑm Γω − z) (on the left) and ℑm Γω (z S − z) p (z s √ (on the right) for z = 0 and cp /cs = 11. The imaging functional IRM,α (z S ) can be simplified as follows to yield the socalled Kirchhoff migration imaging functional IKM,α (z S ) given by Z X Z S 1 (j) ω −iκα |y−z S | e ej · e−iκα |z −x| (− I + KΩ )[u(j) ǫ − u0 ](x)dσ(x) dσ(y). 2 ∂Ω j=1,2 ∂Ω

(5.13) The function IKM,α attains as well its maximum at z S = z. In this simplified version, backpropagation is approximated by travel time migration. The imaging functionals IRM,α and IKM,α can be applied for detecting a small

86

CHAPTER 5

(j)

(j)

crack in the time-harmonic regime. Let u0 be given by (5.3) and let uǫ be the (j) solution of (4.75) with g (j) = ∂u0 /∂ν on ∂Ω. For z S ∈ Ω, IRM,α (z S ) is given by X Z

j=1,2

∂Ω

∂Γω α (y, z S )ej · ∂ν(z)

Z

∂Ω

∂Γω 1 (j) α ω (z S , x)(− I+KΩ )[u(j) ǫ −u0 ](x)dσ(x) dσ(y). ∂ν(z) 2

(5.14) Here, ∂v/∂ν(z) = λ(∇ · v)e⊥ + 2µ∇s ve⊥ . The Kirchhoff migration imaging functional IKM,α (z S ) has the same form as in (5.13). 5.5

NUMERICAL ILLUSTRATIONS

In this section, we consider three-dimensional simulations to illustrate relevant features of the MUSIC method reviewed in the previous section. We present several computations acquired from synthetically generated data. An elastic sphere with radius ǫ = (2πcp )/5ω centered at the origin is considered. N = 144 source and receiver points are chosen. The cases of holes and hard inclusions are both considered here. The first case is illustrated by an epoxy sphere in a steel medium, while a hard inclusion is exemplified by a steel sphere in an epoxy medium. Wave speeds and 3 mass densities for these materials are taken as ρ = 7.54 g/cm , cp = 5.80 × 105 cm/s 3 and cs = 3.10 × 105 cm/s for steel; and ρ = 1.18 g/cm , cp = 2.54 × 105 cm/s and cs = 1.16 × 105 cm/s for epoxy. Within the above setting, the retrieval of the inclusion involves the calculation of the singular value decomposition of the matrix Aω ∈ C144×144 . The distribution of singular values of Aω in the absence of noise and in the case of noisy data with 30dB signal-to-noise ratio is exhibited in Figure 5.2 (hole case) and in Figure 5.6 (hard inclusion case). The 3D plots of the MUSIC estimator IMU (z ω ), in the case of noisy data with 30dB signal-to-noise ratio, computed within a cubical box Ω = [−1, 1]3 ⊂ R3 , are shown in Figures 5.3 and 5.4 (hole case) and in Figures 5.7 and 5.8 (hard inclusion case). The surface displayed is associated with values of half the peak magnitudes. In both cases the MUSIC criterion was tested via the orthogonal projection of the linear combination of vector columns of the Green matrix G(z S ) (Figures 5.3 and 5.7) and the stress matrix S(z S ) (Figures 5.4 and 5.8) onto the noise space of the data matrix Aω for a signal subspace of dimension 1 and 9 (hole case) and 3 and 9 (hard inclusion case). In the absence of noise in the data, it can be observed from the distribution of singular values of Aω that in both cases all of the nine singular values associated with the inclusion appear well discriminated from the rest (linked to the noise subspace). However, six out of nine singular values are lost when the data is corrupted with noise. This result shows also that the magnitudes of the singular values in the hard inclusion case are larger than those of the singular values of Aω in the hole case. In the hole case the inclusion is imaged with MUSIC for a signal subspace of dimension one. But in the hole case the image is obtained for a signal subspace of dimension three. Actually, in this case the results obtained for a signal subspace of dimension one do not produce a satisfactory image of the inclusion. In both cases, the best images are obtained with MUSIC for a signal subspace of dimension nine. As it has been indicated above, the vector columns of the matrix G(z S ) and the stress

BACKPROPAGATION AND MULTIPLE SIGNAL CLASSIFICATION IMAGING

87

matrix S(z S ) are linearly independent but they are not orthogonal, which is why we are able to image the inclusion via both matrices for a signal dimension which is smaller than nine. Singular Values of A (144×144)

10−5

Singular Values of A (144×144)

10−5 10−6

10−10 log( σj )

log( σj )

10−7 10−15

10−8 10−9

10−20 10−10 10 −25

1

50 100 Singular Value Number, σ j

(a)

10−11

1

50 100 Singular Value Number, σ j

(b)

Figure 5.2: (hole case) Distribution of the singular values of Aω for 144 × 144 singly-polarized transmitters and receivers in the absence of noise (a); distribution of the full set in the case of noisy data with 30dB signal-to-noise ratio (b).

(a)

(b)

Figure 5.3: (hole case) 3D plots of the MUSIC estimator IMU (z S ) obtained via the projection of the linear combination of the vector columns of the Green matrix G(z S ) onto the noise subspace of Aω for a signal space of dimension 1 (a) and 9 (b). We give in Figures 5.5(a)–5.5(c) a map of IMU (z S ) corresponding to MUSIC reconstruction via Green’s matrix for a signal space of dimension 9 on some cross sections passing through the origin. CONCLUDING REMARKS In this chapter we have introduced simple direct algorithms for inclusion and crack detection. MUSIC-type algorithms apply to both the static and time-harmonic regimes. However, time-reverse migration and Kirchhoff algorithms apply only to

88

CHAPTER 5

(a)

(b)

Figure 5.4: (hole case) 3D plots of the MUSIC estimator IMU (z S ) obtained via the projection of the linear combination of the vector columns of the stress matrix S(z S ) onto the noise subspace of Aω for a signal space of dimension 1 (a) and 9 (b).

120 1

100

0.5

80

0

100

0.5

80

0 60

−0.5 −1 1

120 1

40 0.5

0

−0.5

Y axis

−1

−1

(a)

−0.5

0

X axis

0.5

1

20

100

0.5

80

0 60

−0.5 −1 1

120 1

40 0.5

0

−0.5

Y axis

−1

−1

(b)

−0.5

0

X axis

0.5

1

20

60

−0.5 −1 1

40 0.5

0

−0.5

Y axis

−1

−1

−0.5

0

0.5

1

20

X axis

(c)

Figure 5.5: (hole case) Maps of IMU (z S ) in cross-sectional planes x1 = 0 (a), x2 = 0 (b) and x3 = 0 (c) (refer to Figure 5.3 b)).

the time-harmonic regime since they are based on extracting phase information from the data in order to locate the inclusion or the crack. The purpose of the next two chapters is to propose a more stable algorithm and to precisely quantify its resolution and stability with respect to medium and measurement noises.

BACKPROPAGATION AND MULTIPLE SIGNAL CLASSIFICATION IMAGING

Singular Values of A (144×144)

100

Singular Values of A (144×144)

10−2 10−3

10−5

10−4

10−10

log( σj )

log( σj )

89

10−5

10−15

10−6

10−20 10−25

10−7 1

50 100 Singular Value Number, σ j

(a)

10−8

1

50 100 Singular Value Number, σ j

(b)

Figure 5.6: (hard inclusion case) Distribution of the singular values of Aω for 144 × 144 singly-polarized transmitters and receivers in the absence of noise (a); distribution of the full set in the case of noisy data with 30dB signal-to-noise ratio (b).

(a)

(b)

Figure 5.7: (hard inclusion case) 3D plots of the MUSIC estimator IMU (z S ) obtained via the projection of the linear combination of the vector columns of the Green matrix G(z S ) onto the noise subspace of Aω for a signal space of dimension 3 (a) and 9 (b).

90

CHAPTER 5

(a)

(b)

Figure 5.8: (hard inclusion case) 3D plots of the MUSIC estimator IMU (z S ) obtained via the projection of the linear combination of the vector columns of the stress matrix S(z S ) onto the noise subspace of Aω for a signal space of dimension 3 (a) and 9 (b).

Chapter Six Topological Derivative Based Imaging of Small Inclusions in the Time-Harmonic Regime The problem of inclusion and crack detection has been addressed in the previous chapter using MUSIC-type algorithms and migration techniques. The focus of the present chapter is on the topological derivative based detection algorithms for elasticity. As shown in [22], in antiplane elasticity, the topological derivative based imaging functional performs well and is robust with respect to noise. It works well even with sparse or limited view measurements. Compared to MUSIC and migration, topological derivative based imaging is more stable in the scalar case, especially with respect to medium noise. The concept of topological derivative (TD), initially proposed for shape optimization in [65, 85, 163, 164], has been recently applied to the imaging of small anomalies; see, for instance, [80, 81, 103, 104, 107, 108, 136] and references therein. However, its use in the context of imaging has been heuristic and lacks mathematical justifications, notwithstanding its usefulness. The objective of this chapter is to extend the concept of TD to the general case of linear isotropic elasticity. Our goal is twofold: (1) to perform a rigorous mathematical analysis of the TD based imaging and (2) to design a modified imaging framework based on the analysis. In the case of a density contrast, the modified framework yields a topological derivative based imaging functional, i.e., deriving from the topological derivative of a discrepancy functional between the measured and computed data. However, in the case where the Lamé coefficients of the small inclusion are different from those of the background medium, the modified functional is rather of a Kirchhoff type. It is based on the correlations between, separately, the shear and compressional parts of the backpropagation of the data and those of the background solution. It cannot be derived as the topological derivative of a discrepancy functional. Most of the results of this chapter are from [11]. This chapter is organized as follows. In Section 6.1 we study the TD imaging functional resulting from the expansion of the filtered quadratic misfit with respect to the size of the inclusion. We show that the imaging functional may not attain its maximum at the location of the inclusion. Moreover, the resolution of the image is below the diffraction limit. Both phenomena are due to the coupling of pressure and shear waves propagating with different wave speeds and polarization directions. In Section 6.2, a modified imaging functional based on the weighted Helmholtz decomposition of the topological derivative is introduced. The sensitivity analysis of the modified framework is presented. 6.1

TOPOLOGICAL DERIVATIVE BASED IMAGING

The principle of TD based acoustic imaging is to create a trial inclusion in the background medium at a given search location z S . Then, a discrepancy functional

92

CHAPTER 6

is considered. The search points that minimize the discrepancy between measured data and the fitted data are then sought. In order to find its minima, the misfit is expanded using the asymptotic expansions due to the perturbation of the displacement field in the presence of an inclusion versus its characteristic size. The first-order term in the expansion is then referred to as the TD of the misfit, which synthesizes its sensitivity relative to the insertion of an inclusion at a given search location. We refer the reader to [22]. In the elastic case, we show that the maximum of the TD of the misfit, which corresponds to the point at which the insertion of the inclusion maximally decreases the misfit, may not be at the location of the true inclusion. Further, it is revealed that its resolution is low due to the coupling of pressure and shear wave modes having different wave speeds and polarization directions. Nevertheless, as it will be shown in the next section, the coupling terms responsible for this degeneracy can be canceled out using a modified imaging framework. Suppose that the elastic inclusion D in Ω is given by D = ǫB + z, where B is a bounded Lipschitz domain in Rd . We assume that there exists c0 > 0 such e µ that inf x∈D dist(x, ∂Ω) > c0 . Suppose that D has the pair of Lamé constants (λ, e) satisfying (1.5) and (1.53) and denote by ρe its density. Let uǫ be the solution to (3.47) and let u0 be the background solution defined by (3.46). We consider a filtered quadratic misfit and introduce a TD based imaging functional resulting therefrom and analyze its performance when identifying true location z of the inclusion D. For a search point z S , let uzS be the solution to (3.47) in the presence of a trial e′ , µ inclusion D′ = ǫ′ B ′ + z S with parameters (λ e′ , ρe′ ), where B ′ is chosen a priori ′ ′ ′ e and ǫ is small. Assume that (λ , µ e ) satisfy (1.5) and (1.53). Consider the filtered quadratic misfit: 1 Ef [u0 ](z ) = 2 S

 2  1 I − Kω [uzS − umeas ](x) dσ(x), Ω 2 ∂Ω

Z

(6.1)

where {umeas (x), x ∈ ∂Ω} are the boundary measurements. In this chapter there is no medium or measurement noise so that umeas = uǫ . As shown for scalar wave equations in [22], the identification of the exact location of true inclusion using the classical quadratic misfit Z 1 (uzS − umeas )(x) 2 dσ(x) (6.2) E[u0 ](z S ) = 2 ∂Ω

cannot be guaranteed and the postprocessing of the data is necessary. The operator ω (1/2)I − KΩ in the definition of the new misfit Ef plays the role of postprocessing. We show in the later part of this section that exact identification can be achieved using filtered quadratic misfit Ef . We emphasize that the postprocessing compensates for the effects of an imposed Neumann boundary condition on the displacement field.

93

TOPOLOGICAL DERIVATIVE BASED IMAGING

6.1.1

Topological Derivative of the Filtered Quadratic Misfit

Analogously to Theorem 3.10, the displacement field uzS , in the presence of the trial inclusion at the search location, can be expanded as  S uzS (x) − u0 (x) = −(ǫ′ )d ∇u0 (z S ) : M′ (B ′ )∇zS Nω Ω (x, z )   S S ′ d+1 , +ω 2 (ρ − ρe′ )|B ′ |Nω Ω (x, z )u0 (z ) + O (ǫ )

for a small ǫ′ > 0, where M′ (B ′ ) is the EMT associated with the domain B ′ and e′ , µ the parameters (λ, µ; λ e′ ). By using (1.82), we obtain that

 2  1 ω I − KΩ [umeas − u0 ](x) dσ(x) 2 ∂Ω  ′ d S ′ ′ +(ǫ ) ℜe ∇u0 (z ) : M (B )∇w(z S ) + ω 2 (ρ − ρe′ )|B ′ |u0 (z S ) · w(z S )   +O (ǫ′ )d+1 ǫd + O (ǫ′ )2d , (6.3) Z

1 Ef [u0 ](z ) = 2 S

where the function w is defined in terms of the measured data (umeas − u0 ) by w(x) = SΩω



  1 ω I − KΩ [umeas − u0 ] (x), 2

x ∈ Ω.

(6.4)

The function w corresponds to backpropagating inside Ω the boundary measurements of umeas − u0 . Substituting (3.49) in (6.4), we find that i h Z  S d w(z ) = ǫ Γω (z S − x) ∇u0 (z) : M(B)∇z Γω (x − z) dσ(x) ∂Ω

+ω 2 (ρ − ρe)|B| +O(ǫ

d+1

hZ

∂Ω

i  Γω (x − z)Γω (x − z S )dσ(x) u0 (z)

).

(6.5)

Definition 6.1 (Topological derivative of Ef ). The TD imaging functional associated with Ef at a search point z S ∈ Ω is defined by ITD [u0 ](z S ) :=



∂Ef [u0 ](z S ) ′d . ∂(ǫ′ )d (ǫ ) =0

(6.6)

The functional ITD [u0 ] (z S ) at every search point z S ∈ Ω synthesizes the sensitivity of the misfit Ef relative to the insertion of an elastic inclusion D′ = z S + ǫ′ B ′ at that point. The maximum of ITD [u0 ] (z S ) corresponds to the point at which the insertion of an inclusion centered at that point maximally decreases the misfit Ef . So, the location of the maximum of ITD [u0 ] (z S ) is expected to be a good estimate of the location z of the true inclusion, D, that determines the measured field umeas . Note that from (6.3) it follows that n ITD [u0 ](z S ) = −ℜe ∇u0 (z S ) : M′ (B ′ )∇w(z S ) o +ω 2 (ρ − ρe′ )|B ′ |u0 (z S ) · w(z S ) , (6.7)

94

CHAPTER 6

where w is given by (6.5). 6.1.2

Sensitivity Analysis of TD

In this section, we explain why TD imaging functional ITD may not attain its maximum at the location z of the true inclusion. Notice that the functional ITD consists of two terms: a density contrast term and an elastic contrast term with background material. For simplicity and purely for the sake of analysis, we consider two special cases when we have only the density contrast or the elastic contrast with reference medium. 6.1.2.1

Case I: Density contrast

e and µ = µ Suppose that λ = λ e. In this case, the wave function w in (6.5) satisfies    Z Γω (x − z)Γω (x − z S )dσ(x) u0 (z) . (6.8) w(z S ) ≃ ǫd ω 2 (ρ − ρe)|B| ∂Ω

Consequently, the imaging functional ITD at z S ∈ Ω reduces to  Z   ITD [u0 ] (z S ) ≃ C ω 4 ℜe u0 (z S ) · Γω (x − z)Γω (x − z S )dσ(x) u0 (z) , ∂Ω

where C = ǫd (ρ − ρe′ )(ρ − ρe)|B ′ ||B|.

(6.9)

Throughout this chapter we assume that

(ρ − ρe′ )(ρ − ρe) ≥ 0.

Recall that the Helmholtz-Kirchhoff identities (1.77) and (1.78) hold when the distance of z^S and z from the boundary ∂Ω is large enough. By virtue of (1.24), (1.77), and (1.78), we can easily get
\[
\mathcal{I}_{\rm TD}[u_0](z^S) \simeq -C\,\omega^3\, \Re e\Big\{ u_0(z^S)\cdot \Im m\Big[ \frac{1}{c_p}\Gamma^\omega_p(z^S - z) + \frac{1}{c_s}\Gamma^\omega_s(z^S - z) \Big] u_0(z) \Big\},
\]
where C is defined by (6.9). Let (e_{θ_1}, e_{θ_2}, ..., e_{θ_n}) be n uniformly distributed directions over the unit disk or sphere, and denote by U_j^p and U_j^s, respectively, the p- and s-plane waves, that is,
\[
U_j^p(x) = e^{i\kappa_p x\cdot e_{\theta_j}}\, e_{\theta_j} \quad\text{and}\quad U_j^s(x) = e^{i\kappa_s x\cdot e_{\theta_j}}\, e_{\theta_j}^{\perp} \tag{6.10}
\]
for d = 2. In three dimensions, we set
\[
U_{j,l}^s(x) = e^{i\kappa_s x\cdot e_{\theta_j}}\, e_{\theta_j}^{\perp,l}, \quad l = 1, 2,
\]
where (e_{θ_j}, e_{θ_j}^{⊥,1}, e_{θ_j}^{⊥,2}) is an orthonormal basis of R³. For ease of notation, in three dimensions, I_TD[U_j^s](z^S) denotes Σ_{l=1}^{2} I_TD[U_{j,l}^s](z^S). We have
\[
\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_\alpha x\cdot e_{\theta_j}} \simeq -4\Big(\frac{\pi}{\kappa_\alpha}\Big)^{d-2} \Im m\, \Gamma^\omega_\alpha(x) \tag{6.11}
\]


for large n; see, for instance, [22] for a proof of this formula. The following proposition holds.

Proposition 6.2. Let U_j^α, α = p, s, be defined in (6.10), where j = 1, 2, ..., n, for n sufficiently large. Then, for all z^S ∈ Ω far from ∂Ω,
\[
\frac{1}{n}\sum_{j=1}^{n} \mathcal{I}_{\rm TD}[U_j^p](z^S) \simeq 4\mu C\omega^3\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\Big[ \frac{1}{c_p}\big|\Im m\,\Gamma^\omega_p(z^S - z)\big|^2 + \frac{1}{c_s}\,\Im m\,\Gamma^\omega_p(z^S - z) : \Im m\,\Gamma^\omega_s(z^S - z)\Big], \tag{6.12}
\]
and
\[
\frac{1}{n}\sum_{j=1}^{n} \mathcal{I}_{\rm TD}[U_j^s](z^S) \simeq 4\mu C\omega^3\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\Big[ \frac{1}{c_s}\big|\Im m\,\Gamma^\omega_s(z^S - z)\big|^2 + \frac{1}{c_p}\,\Im m\,\Gamma^\omega_p(z^S - z) : \Im m\,\Gamma^\omega_s(z^S - z)\Big], \tag{6.13}
\]
where C is given by (6.9).
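Before turning to the proof, note that directional sums such as (6.11) and (6.14), on which the proof below relies, are easy to form numerically. The following sketch (an illustration, not the book's code) only assembles the left-hand sides for n equi-distributed directions in two dimensions; comparing them with the imaginary parts of the Green tensors would in addition require an implementation of Γ^ω_p and Γ^ω_s, which is not attempted here.

```python
# Assemble the directional plane-wave averages appearing in (6.11) and in the
# proof of Proposition 6.2, e.g. (1/n) sum_j exp(i*kappa*x.e_j) e_j (x) e_j.
import numpy as np

def directional_sums(x, kappa, n):
    """Return the scalar and matrix plane-wave averages at the 2-D point x."""
    theta = 2.0 * np.pi * np.arange(n) / n                   # n equi-distributed directions
    e = np.stack([np.cos(theta), np.sin(theta)], axis=1)     # shape (n, 2)
    phases = np.exp(1j * kappa * e @ np.asarray(x))          # shape (n,)
    scalar_avg = phases.mean()                               # left-hand side of (6.11)
    matrix_avg = np.einsum('j,ja,jb->ab', phases, e, e) / n  # left-hand side of (6.14)
    return scalar_avg, matrix_avg

# the averages stabilize quickly as n grows
for n in (8, 32, 128):
    s, M = directional_sums(x=(0.7, -0.3), kappa=5.0, n=n)
    print(n, s, M[0, 0])
```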

Proof. From (6.11) it follows that
\[
\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_p x\cdot e_{\theta_j}}\, e_{\theta_j}\otimes e_{\theta_j} \simeq 4\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Im m\Big[\frac{1}{\kappa_p^2}\, D_x \Gamma^\omega_p(x)\Big] \simeq -4\mu\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\Im m\,\Gamma^\omega_p(x), \tag{6.14}
\]
and
\[
\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_s x\cdot e_{\theta_j}}\, e^{\perp}_{\theta_j}\otimes e^{\perp}_{\theta_j} = \frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_s x\cdot e_{\theta_j}}\big(I - e_{\theta_j}\otimes e_{\theta_j}\big) \simeq -4\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\Im m\Big[\Big(I + \frac{1}{\kappa_s^2} D_x\Big)\Gamma^\omega_s(x)\Big] = -4\mu\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\Im m\,\Gamma^\omega_s(x), \tag{6.15}
\]
where the last equality comes from (1.25). Note that, in three dimensions, (6.15) is to be understood as follows:
\[
\frac{1}{n}\sum_{j=1}^{n}\sum_{l=1}^{2} e^{i\kappa_s x\cdot e_{\theta_j}}\, e^{\perp,l}_{\theta_j}\otimes e^{\perp,l}_{\theta_j} \simeq -4\mu\Big(\frac{\pi}{\kappa_s}\Big)\Im m\,\Gamma^\omega_s(x). \tag{6.16}
\]
Then, using the definition of U_j^p, we compute the imaging functional I_TD for n p-plane waves as
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_{\rm TD}[U_j^p](z^S) = C\omega^4\,\frac{1}{n}\sum_{j=1}^{n}\Re e\Big\{ U_j^p(z^S)\cdot\int_{\partial\Omega}\Gamma^\omega(x - z)\Gamma^\omega(x - z^S)\,d\sigma(x)\, U_j^p(z)\Big\}
\]
\[
\simeq -C\omega^3\,\frac{1}{n}\sum_{j=1}^{n}\Re e\Big\{ e^{i\kappa_p (z^S - z)\cdot e_{\theta_j}}\, e_{\theta_j}\cdot\Im m\Big[\frac{1}{c_p}\Gamma^\omega_p(z^S - z) + \frac{1}{c_s}\Gamma^\omega_s(z^S - z)\Big] e_{\theta_j}\Big\}
\]
\[
\simeq -C\omega^3\,\Re e\Big\{ \Big[\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_p (z^S - z)\cdot e_{\theta_j}}\, e_{\theta_j}\otimes e_{\theta_j}\Big] : \Im m\Big[\frac{1}{c_p}\Gamma^\omega_p(z^S - z) + \frac{1}{c_s}\Gamma^\omega_s(z^S - z)\Big]\Big\}.
\]
Here we used the fact that e_{θ_j}·A e_{θ_j} = e_{θ_j}⊗e_{θ_j} : A for a matrix A, which is easy to check.

Finally, exploiting the approximation (6.14), we conclude that
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_{\rm TD}[U_j^p](z^S) \simeq 4\mu C\omega^3\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\Big[\frac{1}{c_p}\big|\Im m\,\Gamma^\omega_p(z^S - z)\big|^2 + \frac{1}{c_s}\,\Im m\,\Gamma^\omega_p(z^S - z):\Im m\,\Gamma^\omega_s(z^S - z)\Big].
\]
Similarly, we can compute the imaging functional I_TD for n s-plane waves exploiting the approximation (6.15):
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_{\rm TD}[U_j^s](z^S) = C\omega^4\,\frac{1}{n}\sum_{j=1}^{n}\Re e\Big\{ U_j^s(z^S)\cdot\int_{\partial\Omega}\Gamma^\omega(x - z)\Gamma^\omega(x - z^S)\,d\sigma(x)\, U_j^s(z)\Big\}
\]
\[
\simeq -C\omega^3\,\Re e\Big\{ \Big[\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_s (z^S - z)\cdot e_{\theta_j}}\, e^{\perp}_{\theta_j}\otimes e^{\perp}_{\theta_j}\Big] : \Im m\Big[\frac{1}{c_p}\Gamma^\omega_p(z^S - z) + \frac{1}{c_s}\Gamma^\omega_s(z^S - z)\Big]\Big\}
\]
\[
\simeq 4\mu C\omega^3\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\Big[\frac{1}{c_s}\big|\Im m\,\Gamma^\omega_s(z^S - z)\big|^2 + \frac{1}{c_p}\,\Im m\,\Gamma^\omega_p(z^S - z):\Im m\,\Gamma^\omega_s(z^S - z)\Big].
\]

"

In three dimensions, one should use (6.16) to get the desired result. This completes the proof. □

From Proposition 6.2, it is not clear that the imaging functional I_TD attains its maximum at z. Moreover, for both (1/n)Σ_{j=1}^{n} I_TD[U_j^s](z^S) and (1/n)Σ_{j=1}^{n} I_TD[U_j^p](z^S), the resolution at z is not fine enough due to the presence of the term
\[
\Im m\,\Gamma^\omega_p(z^S - z) : \Im m\,\Gamma^\omega_s(z^S - z).
\]

One way to cancel out this term is to combine (1/n)Σ_{j=1}^{n} I_TD[U_j^s](z^S) and (1/n)Σ_{j=1}^{n} I_TD[U_j^p](z^S) as follows:
\[
\frac{1}{n}\sum_{j=1}^{n}\Big[ c_s\Big(\frac{\kappa_p}{\pi}\Big)^{d-2}\Big(\frac{\kappa_p}{\kappa_s}\Big)^{2}\mathcal{I}_{\rm TD}[U_j^p](z^S) - c_p\Big(\frac{\kappa_s}{\pi}\Big)^{d-2}\mathcal{I}_{\rm TD}[U_j^s](z^S)\Big].
\]
However, one arrives at
\[
\frac{1}{n}\sum_{j=1}^{n}\Big[ c_s\Big(\frac{\kappa_p}{\pi}\Big)^{d-2}\Big(\frac{\kappa_p}{\kappa_s}\Big)^{2}\mathcal{I}_{\rm TD}[U_j^p](z^S) - c_p\Big(\frac{\kappa_s}{\pi}\Big)^{d-2}\mathcal{I}_{\rm TD}[U_j^s](z^S)\Big] \simeq 4\mu C\omega^3\Big[\frac{c_s}{c_p}\big|\Im m\,\Gamma^\omega_p(z^S - z)\big|^2 - \frac{c_p}{c_s}\big|\Im m\,\Gamma^\omega_s(z^S - z)\big|^2\Big],
\]

which is not a sum of positive terms and thus cannot guarantee that the maximum of the obtained imaging functional is at the location of the inclusion.

6.1.2.2 Case II: Elastic contrast

Suppose that ρ = ρ̃. Further, we assume for simplicity that M = M′(B′) = M(B).


From the Helmholtz-Kirchhoff identities (1.77) and (1.78) we have
\[
\nabla_z \otimes \nabla_{z^S} \int_{\partial\Omega} \Gamma^\omega(x - z)\Gamma^\omega(x - z^S)\, d\sigma(x) \simeq -\frac{1}{c_s\omega}\, \Im m\big[\nabla_z \otimes \nabla_{z^S}\Gamma^\omega_s(z^S - z)\big] - \frac{1}{c_p\omega}\,\Im m\big[\nabla_z \otimes \nabla_{z^S}\Gamma^\omega_p(z^S - z)\big]. \tag{6.17}
\]
Then, using (6.5) and (6.17), I_TD[U](z^S) at z^S ∈ Ω becomes
\[
\mathcal{I}_{\rm TD}[U](z^S) = -\epsilon^d\, \Re e\big\{ \nabla U(z^S) : M\nabla w(z^S) \big\} = \epsilon^d\, \Re e\Big\{ \nabla U(z^S) : M\Big( \nabla^2_{z z^S}\int_{\partial\Omega}\Gamma^\omega(x - z)\Gamma^\omega(x - z^S)\, d\sigma(x)\Big) : M\nabla U(z) \Big\} \simeq \frac{\epsilon^d}{\omega}\, \Re e\Big\{ \nabla U(z^S) : M\Big(\nabla^2 \Im m\, \widetilde\Gamma^\omega(z^S - z)\Big) : M\nabla U(z) \Big\},
\]
where
\[
\widetilde\Gamma^\omega(z^S - z) = \frac{1}{c_p}\Gamma^\omega_p(z^S - z) + \frac{1}{c_s}\Gamma^\omega_s(z^S - z). \tag{6.18}
\]
Here, we have made use of the convention
\[
\big(\nabla^2 \Gamma^\omega_s\big)_{ijkl} = \partial_{ik}\big(\Gamma^\omega_s\big)_{jl}. \tag{6.19}
\]

Let us define
\[
J_{\alpha,\beta}(z^S) := \Big( M\,\Im m\big[\nabla^2\Gamma^\omega_\alpha(z^S - z)\big]\Big) : \Big( M\,\Im m\big[\nabla^2\Gamma^\omega_\beta(z^S - z)\big]\Big)^t, \tag{6.20}
\]
where A^t = (A_{klij}) if A is the 4-tensor given by A = (A_{ijkl}). Here
\[
A : B = \sum_{ijkl} A_{ijkl} B_{ijkl}
\]
for any 4-tensors A = (A_{ijkl}) and B = (B_{ijkl}). The following result holds.

Proposition 6.3. Let U_j^α, α = p, s, be defined in (6.10), where j = 1, 2, ..., n, for n sufficiently large. Let J_{α,β} be defined by (6.20). Then, for all z^S ∈ Ω far enough from ∂Ω,
\[
\frac{1}{n}\sum_{j=1}^{n} \mathcal{I}_{\rm TD}[U_j^p](z^S) \simeq 4\epsilon^d\,\frac{\mu}{\omega}\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^2 \Big[ \frac{1}{c_p} J_{p,p}(z^S) + \frac{1}{c_s} J_{s,p}(z^S) \Big] \tag{6.21}
\]
and
\[
\frac{1}{n}\sum_{j=1}^{n} \mathcal{I}_{\rm TD}[U_j^s](z^S) \simeq 4\epsilon^d\,\frac{\mu}{\omega}\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\Big[ \frac{1}{c_s} J_{s,s}(z^S) + \frac{1}{c_p} J_{s,p}(z^S) \Big]. \tag{6.22}
\]


Proof. Let us compute I_TD for n p-plane waves, i.e.,
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_{\rm TD}[U_j^p](z^S) = \frac{\epsilon^d}{\omega n}\sum_{j=1}^{n}\Re e\Big\{ \nabla U_j^p(z^S) : M\Big(\Im m\big[\nabla^2\widetilde\Gamma^\omega(z^S - z)\big]\Big) : M\nabla U_j^p(z)\Big\}
\]
\[
\simeq \epsilon^d\,\frac{\omega}{c_p^2}\,\Re e\,\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_p(z^S - z)\cdot e_{\theta_j}}\, \big( e_{\theta_j}\otimes e_{\theta_j}\big) : M\Big(\Im m\big[\nabla^2\widetilde\Gamma^\omega(z^S - z)\big]\Big) : M\big( e_{\theta_j}\otimes e_{\theta_j}\big).
\]
Equivalently, writing A^{θ_j} := e_{θ_j}⊗e_{θ_j} = (A^{θ_j}_{ik})_{i,k=1}^d and using index notation,
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_{\rm TD}[U_j^p](z^S) = \epsilon^d\,\Re e \sum_{i,k,l,m=1}^{d}\;\sum_{i',k',l',m'=1}^{d} m_{lmik}\, m_{l'm'i'k'}\,\Im m\Big[\partial^2_{li'}\widetilde\Gamma^\omega(z^S - z)\Big]_{mk'}\Big(\frac{\omega}{c_p^2}\,\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_p(z^S - z)\cdot e_{\theta_j}} A^{θ_j}_{ik} A^{θ_j}_{l'm'}\Big).
\]
Recall that for n sufficiently large, we have from (6.14)
\[
\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_p x\cdot e_{\theta_j}}\, e_{\theta_j}\otimes e_{\theta_j} \simeq -4\mu\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\Im m\,\Gamma^\omega_p(x)
\]
(with the version (6.16) in dimension 3). Taking the Hessian of the previous approximation leads to
\[
\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_p x\cdot e_{\theta_j}}\, e_{\theta_j}\otimes e_{\theta_j}\otimes e_{\theta_j}\otimes e_{\theta_j} \simeq 4\mu\,\frac{c_p^2}{\omega^2}\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\Im m\big[\nabla^2\Gamma^\omega_p(x)\big] = 4\mu\,\frac{c_p^4}{\omega^2 c_s^2}\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Im m\big[\nabla^2\Gamma^\omega_p(x)\big]. \tag{6.23}
\]
Then, by virtue of (6.14) and (6.23), we obtain
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_{\rm TD}[U_j^p](z^S) \simeq \epsilon^d\,\frac{4\mu}{\omega}\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\sum_{i,k,l,m=1}^{d}\;\sum_{i',k',l',m'=1}^{d} m_{lmik}\, m_{l'm'i'k'}\,\Im m\Big[\partial^2_{li'}\widetilde\Gamma^\omega(z^S - z)\Big]_{mk'}\,\Im m\Big[\partial^2_{l'i}\Gamma^\omega_p(z^S - z)\Big]_{m'k}.
\]
Therefore, by the definition (6.20) of J_{α,β}, we conclude that
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_{\rm TD}[U_j^p](z^S) \simeq \epsilon^d\,\frac{4\mu}{\omega}\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\Big( M\Im m\big[\nabla^2\widetilde\Gamma^\omega(z^S - z)\big]\Big) : \Big( M\Im m\big[\nabla^2\Gamma^\omega_p(z^S - z)\big]\Big)^t = \epsilon^d\,\frac{4\mu}{\omega}\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\Big[\frac{1}{c_p}J_{p,p}(z^S) + \frac{1}{c_s}J_{s,p}(z^S)\Big].
\]
Similarly, consider the case of s-plane waves and compute I_TD for n directions. We have, with B^{θ_j} := e_{θ_j}⊗e^{⊥}_{θ_j},
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_{\rm TD}[U_j^s](z^S) = \frac{\epsilon^d}{\omega n}\sum_{j=1}^{n}\Re e\Big\{ \nabla U_j^s(z^S) : M\Big(\Im m\big[\nabla^2\widetilde\Gamma^\omega(z^S - z)\big]\Big) : M\nabla U_j^s(z)\Big\}
\]
\[
\simeq \epsilon^d\,\Re e \sum_{i,k,l,m=1}^{d}\;\sum_{i',k',l',m'=1}^{d} m_{lmik}\, m_{l'm'i'k'}\,\Im m\Big[\partial^2_{li'}\widetilde\Gamma^\omega(z^S - z)\Big]_{mk'}\Big(\frac{\omega}{c_s^2}\,\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_s(z^S - z)\cdot e_{\theta_j}} B^{θ_j}_{ik} B^{θ_j}_{l'm'}\Big).
\]
Now, recall from (6.15) that for n sufficiently large, we have
\[
\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_s x\cdot e_{\theta_j}}\, e^{\perp}_{\theta_j}\otimes e^{\perp}_{\theta_j} \simeq -4\mu\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\Im m\,\Gamma^\omega_s(x).
\]
Taking the Hessian of this approximation leads to
\[
\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_s x\cdot e_{\theta_j}}\, e^{\perp}_{\theta_j}\otimes e^{\perp}_{\theta_j}\otimes e_{\theta_j}\otimes e_{\theta_j} \simeq 4\mu\,\frac{c_s^2}{\omega^2}\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\Im m\big[\nabla^2\Gamma^\omega_s(x)\big], \tag{6.24}
\]
where we have made use again of the convention (6.19). Then, by using (6.15), (6.24) and arguments similar to those in the case of p-waves, we arrive at
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_{\rm TD}[U_j^s](z^S) \simeq \epsilon^d\,\frac{4\mu}{\omega}\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\Big( M\Im m\big[\nabla^2\widetilde\Gamma^\omega(z^S - z)\big]\Big) : \Big( M\Im m\big[\nabla^2\Gamma^\omega_s(z^S - z)\big]\Big)^t \simeq \epsilon^d\,\frac{4\mu}{\omega}\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\Big[\frac{1}{c_p}J_{p,s}(z^S) + \frac{1}{c_s}J_{s,s}(z^S)\Big].
\]
This completes the proof. □



As observed in Subsection 6.1.2.1, Proposition 6.3 shows that the resolution of I_TD deteriorates due to the presence of the coupling term
\[
J_{p,s}(z^S) = \Big( M\,\Im m\big[\nabla^2\Gamma^\omega_s(z^S - z)\big]\Big) : \Big( M\,\Im m\big[\nabla^2\Gamma^\omega_p(z^S - z)\big]\Big)^t.
\]

6.1.2.3 Summary

To conclude, we summarize the results of this section below.
- Propositions 6.2 and 6.3 indicate that the imaging functional I_TD may not attain its maximum at the true location, z, of the inclusion D.
- In both cases, the resolution of the localization of the elastic inclusion D degenerates due to the presence of the coupling terms ℑm Γ^ω_p(z^S − z) : ℑm Γ^ω_s(z^S − z) and J_{p,s}(z^S), respectively.
- In order to enhance the image resolution to its optimum and to ensure that the imaging functional attains its maximum only at the location of the inclusion, one must eradicate the coupling terms.

6.2 MODIFIED IMAGING FRAMEWORK

In this section, in order to achieve better localization and resolution properties, we introduce a modified imaging framework based on a weighted Helmholtz decomposition of the TD imaging functional. This decomposition will also be useful for studying the time-reversal imaging of extended elastic sources in Chapter 8. It is proved that the modified detection algorithm indeed provides a resolution limit of the order of half a wavelength, since the new functional behaves as the square of the imaginary part of a pressure or shear Green function. For simplicity, we restrict ourselves to the study of two particular situations in which there is only a density contrast or only an elastic contrast. In order to cater to various applications, we provide explicit results for the canonical cases of circular and spherical inclusions. It is also important to note that the formulas of the TD based functionals are explicit in terms of the incident wave and the free space fundamental solution, instead of the Green function in the bounded domain with imposed boundary conditions. This is in contrast with the prior results; see, for instance, [104]. Although a Neumann boundary condition is imposed on the displacement field, the results of this section can be extended to the problem with Dirichlet boundary conditions. We will show that the modified framework leads to both a better localization (in the sense that the modified imaging functional attains its maximum at the location of the inclusion) and a better resolution than the classical TD based sensitivity framework. It is worth mentioning that the classical framework performs quite well for antiplane elasticity [22], and that the resolution and localization deteriorations are purely due to the vectorial nature of the problem, that is, to the coupling of pressure and shear waves propagating with different wave speeds and polarization directions. It should be noted that in the case of a density contrast only, the modified imaging functional is still TD based, i.e., obtained as the TD of a discrepancy functional. This holds because of the nonconversion of waves (from shear to compressional and vice versa) in the presence of only a small inclusion with a contrast in density. However, in the presence of a small inclusion whose Lamé coefficients are different from those of the background medium, there is a mode conversion; see, for instance, [117]. As a consequence, the modified functional proposed here cannot be written as the topological derivative of a discrepancy functional. It is rather a Kirchhoff-type imaging functional.

6.2.1 Weighted Imaging Functional

In this subsection, we introduce a weighted imaging functional I_W, and justify that it provides a better localization of the inclusion D than I_TD in (6.7). This new functional I_W can be seen as a correction based on a weighted Helmholtz decomposition of I_TD. In fact, using (1.12), we find that in the search domain the pressure and the shear components of w, defined by (6.4), can be written as
\[
w = \nabla\times\psi_w + \nabla\phi_w. \tag{6.25}
\]

We recall the Helmholtz decomposition operators Hp and Hs defined by (1.15). Then we multiply the components of w with the weights cp and cs , the background


pressure and the shear wave speeds, respectively. Finally, we define I_W by
\[
\mathcal{I}_W[U] = c_p\, \Re e\Big\{ -\nabla \mathcal{H}^p[U] : M'(B')\nabla \mathcal{H}^p[w] + \omega^2\Big(\frac{\tilde\rho'}{\rho} - 1\Big)|B'|\, \mathcal{H}^p[U]\cdot \mathcal{H}^p[w] \Big\} + c_s\, \Re e\Big\{ -\nabla \mathcal{H}^s[U] : M'(B')\nabla \mathcal{H}^s[w] + \omega^2\Big(\frac{\tilde\rho'}{\rho} - 1\Big)|B'|\, \mathcal{H}^s[U]\cdot \mathcal{H}^s[w] \Big\}. \tag{6.26}
\]
In the next section we rigorously explain why this new functional should be better than the imaging functional I_TD.
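On a discrete search grid, one convenient (though not the only) way to realize the decomposition operators H^p and H^s entering (6.26) is spectrally. The sketch below is a simplification: it uses a periodic FFT-based projection rather than the operators (1.15) defined on the bounded domain Ω, and the function name and grid conventions are assumptions. It splits a sampled vector field into its curl-free (pressure) and divergence-free (shear) parts, which can then be weighted by c_p and c_s.

```python
# Minimal periodic-grid sketch of a Helmholtz decomposition w = H^p[w] + H^s[w].
# This is only an illustration; the book's operators (1.15) act on Omega itself.
import numpy as np

def helmholtz_decomposition(w, h):
    """w: complex array (2, N, N); returns (w_p, w_s) with w = w_p + w_s."""
    n = w.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                   # avoid dividing by zero at k = 0

    w_hat = np.fft.fft2(w, axes=(-2, -1))
    k_dot_w = kx * w_hat[0] + ky * w_hat[1]          # projection onto the k-direction
    wp_hat = np.stack([kx * k_dot_w, ky * k_dot_w]) / k2
    wp_hat[:, 0, 0] = 0.0                            # constant mode left in the shear part
    ws_hat = w_hat - wp_hat
    return np.fft.ifft2(wp_hat, axes=(-2, -1)), np.fft.ifft2(ws_hat, axes=(-2, -1))

# weighted recombination in the spirit of (6.26) or (8.6): c_s * w_s + c_p * w_p
```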

6.2.2 Sensitivity Analysis of Weighted Imaging Functional

In this section, we explain why the imaging functional I_W attains its maximum at the location z of the true inclusion with a better resolution than I_TD. In fact, as shown in this section, I_W behaves like the square of the imaginary part of a pressure or a shear Green function, depending upon the incident wave. Consequently, it provides a resolution of the order of half a wavelength. For simplicity, we once again consider the special cases of only a density contrast and only an elastic contrast.

6.2.2.1 Case I: Density contrast

Suppose that λ = λ̃ and µ = µ̃. Recall that in this case the wave function w is given by (6.8). Note that H^α[Γ^ω] = Γ^ω_α, α = p, s. Therefore, the imaging functional I_W at z^S ∈ Ω turns out to be
\[
\mathcal{I}_W[U](z^S) = C\omega^4\,\Re e\Big\{ c_p\,\mathcal{H}^p[U](z^S)\cdot\Big[\int_{\partial\Omega}\Gamma^\omega(x - z)\Gamma^\omega_p(x - z^S)\,d\sigma(x)\Big]U(z) + c_s\,\mathcal{H}^s[U](z^S)\cdot\Big[\int_{\partial\Omega}\Gamma^\omega(x - z)\Gamma^\omega_s(x - z^S)\,d\sigma(x)\Big]U(z)\Big\},
\]
where C is given by (6.9). By using (1.77) and (1.78), we can easily get
\[
\mathcal{I}_W[U](z^S) \simeq -C\omega^3\,\Re e\Big\{ \mathcal{H}^p[U](z^S)\cdot\big[\Im m\,\Gamma^\omega_p(z^S - z)\big]U(z) + \mathcal{H}^s[U](z^S)\cdot\big[\Im m\,\Gamma^\omega_s(z^S - z)\big]U(z)\Big\}. \tag{6.27}
\]
Consider n uniformly distributed directions (e_{θ_1}, e_{θ_2}, ..., e_{θ_n}) on the unit disk or sphere for n sufficiently large. Then, the following proposition holds.

Proposition 6.4. Let U_j^α, α = p, s, be defined in (6.10), where j = 1, 2, ..., n, for n sufficiently large. Then, for all z^S ∈ Ω far from ∂Ω,
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_W[U_j^p](z^S) \simeq 4\mu C\omega^3\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\big|\Im m\,\Gamma^\omega_p(z^S - z)\big|^2 \tag{6.28}
\]
and
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_W[U_j^s](z^S) \simeq 4\mu C\omega^3\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\big|\Im m\,\Gamma^\omega_s(z^S - z)\big|^2, \tag{6.29}
\]
where C is given by (6.9).

Proof. By using arguments similar to those in Proposition 6.2 and (6.27), we show that the weighted imaging functional I_W for n p-plane waves is given by
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_W[U_j^p](z^S) = -C\omega^3\,\Re e\,\frac{1}{n}\sum_{j=1}^{n} U_j^p(z^S)\cdot\big[\Im m\,\Gamma^\omega_p(z^S - z)\big]U_j^p(z) \simeq -C\omega^3\,\Re e\,\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_p(z^S - z)\cdot e_{\theta_j}}\, e_{\theta_j}\cdot\big[\Im m\,\Gamma^\omega_p(z^S - z)\big]e_{\theta_j} \simeq 4\mu C\omega^3\Big(\frac{\pi}{\kappa_p}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_p}\Big)^{2}\big|\Im m\,\Gamma^\omega_p(z^S - z)\big|^2.
\]
For n s-plane waves,
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_W[U_j^s](z^S) = -C\omega^3\,\Re e\,\frac{1}{n}\sum_{j=1}^{n} U_j^s(z^S)\cdot\big[\Im m\,\Gamma^\omega_s(z^S - z)\big]U_j^s(z) \simeq -C\omega^3\,\Re e\,\frac{1}{n}\sum_{j=1}^{n} e^{i\kappa_s(z^S - z)\cdot e_{\theta_j}}\, e^{\perp}_{\theta_j}\cdot\big[\Im m\,\Gamma^\omega_s(z^S - z)\big]e^{\perp}_{\theta_j} \simeq 4\mu C\omega^3\Big(\frac{\pi}{\kappa_s}\Big)^{d-2}\big|\Im m\,\Gamma^\omega_s(z^S - z)\big|^2,
\]
where one should use the version (6.16) in three dimensions. □

Proposition 6.4 shows that the coupling term
\[
\Im m\,\Gamma^\omega_p(z^S - z) : \Im m\,\Gamma^\omega_s(z^S - z),
\]
responsible for the decreased resolution in I_TD, is absent and I_W attains its maximum at z (see Figure 5.1). Moreover, the resolution using the weighted imaging functional I_W is the Rayleigh one, that is, restricted by the diffraction limit of half a wavelength of the wave impinging upon Ω, thanks to the term |ℑm Γ^ω_α(z^S − z)|². Note that the resolution is higher for the shear version of the imaging functional as the wavelength is smaller. Finally, it is worth mentioning that I_W is a topological derivative based imaging functional. In fact, it is the topological derivative of the discrepancy functional
\[
c_s E_f[U^s] + c_p E_f[U^p],
\]
where U^s is an s-plane wave and U^p is a p-plane wave.


6.2.2.2 Case II: Elastic contrast

Suppose that ρ = ρ̃, and assume for simplicity that M = M′(B′) = M(B). Then, the weighted imaging functional I_W reduces to
\[
\mathcal{I}_W(z^S) = -\epsilon^d\Big[ c_p\,\nabla\mathcal{H}^p[U(z^S)] : M\nabla\mathcal{H}^p[w(z^S)] + c_s\,\nabla\mathcal{H}^s[U(z^S)] : M\nabla\mathcal{H}^s[w(z^S)]\Big]
\]
\[
= -\epsilon^d\Big[ c_p\,\nabla\mathcal{H}^p[U(z^S)] : M\Big(\int_{\partial\Omega}\nabla_z\Gamma^\omega(x - z)\nabla_{z^S}\Gamma^\omega_p(x - z^S)\,d\sigma(x)\Big) : M\nabla U(z) + c_s\,\nabla\mathcal{H}^s[U(z^S)] : M\Big(\int_{\partial\Omega}\nabla_z\Gamma^\omega(x - z)\nabla_{z^S}\Gamma^\omega_s(x - z^S)\,d\sigma(x)\Big) : M\nabla U(z)\Big]
\]
\[
\simeq -\frac{\epsilon^d}{\omega}\Big[ \nabla\mathcal{H}^p[U(z^S)] : M\Big(\Im m\big[\nabla^2\Gamma^\omega_p(z^S - z)\big]\Big) : M\nabla U(z) + \nabla\mathcal{H}^s[U(z^S)] : M\Big(\Im m\big[\nabla^2\Gamma^\omega_s(z^S - z)\big]\Big) : M\nabla U(z)\Big].
\]

We observed in Subsection 6.1.2.2 that the resolution of I_TD is compromised because of the coupling term J_{s,p}(z^S). We can cancel out this term by using the weighted imaging functional I_W. For example, using arguments analogous to those in Proposition 6.3, we can easily prove the following result.

Proposition 6.5. Let U_j^α, α = p, s, be defined in (6.10), where j = 1, 2, ..., n, for n sufficiently large. Let J_{α,β} be defined by (6.20). Then, for all z^S ∈ Ω far enough from ∂Ω,
\[
\frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_W[U_j^\alpha](z^S) \simeq 4\epsilon^d\,\frac{\mu}{\omega}\Big(\frac{\pi}{\kappa_\alpha}\Big)^{d-2}\Big(\frac{\kappa_s}{\kappa_\alpha}\Big)^{2} J_{\alpha,\alpha}(z^S), \quad \alpha = p, s. \tag{6.30}
\]

It can be established that I_W attains its maximum at z^S = z. Consider, for example, the canonical case of a circular or spherical inclusion. The following propositions hold. They give explicit expressions for the point spread functions of the imaging functionals.

Proposition 6.6. Let D be a disk or a ball. Then for all search points z^S ∈ Ω far enough from ∂Ω,
\[
J_{p,p}(z^S) = a^2\big|\nabla^2\,\Im m\,\Gamma^\omega_p(z^S - z)\big|^2 + 2ab\,\big|\Delta\,\Im m\,\Gamma^\omega_p(z^S - z)\big|^2 + b^2\big|\Delta\,\mathrm{tr}\big(\Im m\,\Gamma^\omega_p(z^S - z)\big)\big|^2, \tag{6.31}
\]

where tr represents the trace operator and the constants a and b are defined in (3.20). Proof. Since ∇2 Γω p



ijkl

= ∂ik Γω p



jl

,

(6.32)


it follows from (3.20) that X   M∇2 Γω = mijpq ∇2 Γω p ijkl p pqkl p,q

d X     a ω ∂ik Γω ∂qk Γω p jl + ∂jk Γp il + b p ql δij 2 q=1      t  a ω ω ω ∂k ∇Γp el ij + ∇Γp el ij + b∂k ∇ · Γp el δij , = 2 (6.33) where el is the unit vector in the direction xl .

=

Now, since Γω p el is a p-wave, its rotational part vanishes and the gradient is symmetric, i.e.,   ω ∇ × (Γω ∇Γω (6.34) p el ) = 0 and p el ij = ∇Γp el ji .

Consequently,

       ω ω ω ∇∇ · Γω e = ∇ × ∇ × Γ e + ∆ Γ e = ∆ Γ e p l p l p l p l ,

(6.35)

which, together with (6.33) and (6.34), implies

2 ω ω M∇2 Γω p = a ∇ Γp + b I ⊗ ∆Γp .

(6.36)

2 ω Moreover, by the definition of Γω p , its Hessian, ∇ Γp , is also symmetric. Indeed,



∇2 Γω p

t

ijkl

= ∂ki Γω p



lj

=−

  µ 2 ω ∂kijl Gω . P = ∇ Γp 2 κs ijkl

(6.37)

Therefore, by virtue of (6.36) and (6.37), Jp,p can be rewritten as    S  S ω Jp,p (z S ) = aℑm { ∇2 Γω p (z − z)} + bI ⊗ ℑm { ∆Γp (z − z)}    S  S ω : aℑm { ∇2 Γω (6.38) p (z − z)} + bℑm { ∆Γp (z − z)} ⊗ I .

Finally, we observe that 

2   t 2 ω = ∇2 ℑm Γω ∇2 ℑm Γω p , p : ∇ ℑm Γp

  ω ∇2 ℑm Γω : I ⊗ ∆ℑm Γ = p p =

  ω : ∆ℑm Γ ⊗ I ∇2 ℑm Γω p p d  X    ω ℑm ∂ik Γω p jl δij ∆ℑm Γp kl

i,j,k,l=1

=

(6.39)

d X

k,l=1

d X i=1

ℑm

 

∂ik Γω p il

!

∆ℑm Γω p



kl


or equivalently,   ω ∇2 ℑm Γω p : I ⊗ ∆ℑm Γp

d  X

=

k,l=1

2 ∆ℑm Γω p ,

= and 

   ω I ⊗ ∆ℑm Γω : ∆ℑm Γ ⊗ I p p

=

∆ℑm Γω p

d X

=

d X

i,k=1

∆ℑm Γω p



kk

2 = ∆ tr(ℑm Γω p ) .

kl

(6.40)

δij ∆ℑm Γω p

i,j,k,l=1

 2



δ ∆ℑm kl kl

∆ℑm Γω p

Γω p



ij



ii

(6.41)

We arrive at the conclusion by substituting (6.39), (6.40), and (6.41) in (6.38).  Proposition 6.7. Let D be a disk in dimension two or a ball in dimension three. Then, for all search points z S ∈ Ω far enough from ∂Ω, " 2 (d − 6) 2 a2 1 4 2 ω ω S S S ∇ ℑm Γ ∇ ℑm Γ Js,s (z ) = (z − z) + (z − z) s s µ2 κ4s 4 # 2 κ4s S + ℑmΓω (6.42) s (z − z) , 4

where d is the space dimension and a is the constant as in (3.20), or, equivalently, " 2 a2 1 X ω S S (ℑm Γ )(z − z) (6.43) Js,s (z ) = ∂ ijkl s µ2 κ4s ijkl,k6=l 2 (d − 2) 2 S + )(z − z) ∇ (ℑm Γω s 4 # 2 κ4 S + s ℑmΓω (z − z) (6.44) , s 4 where a is the constant as in (3.20).

Proof. As before, we have    a ω ω ω M∇2 Γω = ∂ (Γ ) + ∂ (Γ ) ik jk s s jl s il + b ∂k ∇ · (Γs el ) δij 2 ijkl  a ω = ∂ik (Γω ) + ∂ (Γ ) (6.45) jk s jl s il , 2


and 

M∇2 Γω s

t

ijkl

=

 a ω ∂ik (Γω ) + ∂ (Γ ) il s jl s jk . 2

(6.46)

ω Here we have used the facts that Γω s el is an s-wave and, Γs and its Hessian are symmetric, i.e., ω ω ω ∂ik (Γω s )jl = ∂ki (Γs )jl = ∂ik (Γs )lj = ∂ki (Γs )lj .

(6.47)

Substituting (6.45) and (6.46) in (6.20), we obtain Js,s (z S )

=

a2 4

d X

i,j,k,l=1

ℑm

n

 S  ∂ik Γω s (z − z) jl +

 S  ∂ik Γω s (z − z) jl +  a2  T1 (z S ) + 2T2 (z S ) + T3 (z S ) , 4 ×ℑm

:=

n

 S  o ∂jk Γω s (z − z) il

 S  o ∂il Γω s (z − z) jk

(6.48)

where  d    X     ω S ω S S  T (z ) = ℑm ∂ Γ (z − z) ℑm ∂ Γ (z − z) ,  1 ik ik s s  jl jl   i,j,k,l=1     d     X   S S S T2 (z ) = ℑm ∂ik Γω ℑm ∂il Γω s jl (z − z) s jk (z − z) ,   i,j,k,l=1     d   X      ω S ω S S  T (z ) = ℑm ∂ Γ (z − z) ℑm ∂ Γ (z − z) .  3 jk il s s il jk  i,j,k,l=1

Notice that

ℑm Γω s (x) =

1 (κ2 I + Dx )ℑm Γω s (x), µκ2s s

and ℑm Γω s satisfies S 2 ω S ∆(ℑm Γω s )(z − z) + κs (ℑm Γs )(z − z) = 0

for z S 6= z.

(6.49)

Therefore, the first term T1 can be computed as follows: 2  S T1 (z S ) = ∇2 ℑm Γω s (z − z)

=

1 2 µ κ4s

d h X

i,j,k,l=1

 2 2  S  S + κ4s δjl ∂ik ℑm Γω ∂ijkl ℑm Γω s (z − z) s (z − z)

i  S  S ω (z − z)∂ ℑm Γ (z − z) . +2κ2s δjl ∂ik ℑm Γω ijkl s s


We also have d X

i,j,k,l=1

   S  S ω 2δjl ∂ik ℑm Γω (z − z) ∂ ℑm Γ (z − z) ijkl s s

! d  d  X X   S S =2 ∂ik ℑm Γω ∂ik ∂ll ℑm Γω s (z − z) s (z − z) i,k=1

= −2κ2s

l=1

d  X

i,k=1

2  S ∂ik ℑm Γω s (z − z) ,

and d X

i,j,k,l=1

d   2 2 X  S  S δjl ∂ik ℑm Γω =d ∂ik ℑm Γω . s (z − z) s (z − z) i,k=1

Consequently, we have 2  S T1 (z S ) = ∇2 ℑm Γω (z − z) = s +

d (d − 2) X  ∂ik µ2 i,k=1

2  S 1 4 ω ∇ ℑm Γ (z − z) s µ2 κ4s 2  S ℑm Γω (z − z) . s

(6.50)

Estimation of the term T2 is quite similar. Indeed, " d  2 X  S 1 S T2 (z ) = 2 4 ∂ijkl ℑm Γω s (z − z) µ κs i,j,k,l=1  S  S ω +2κ2s δjl ∂ik ℑm Γω s (z − z)∂ijkl ℑm Γs (z − z) #     S  S 4 ω ω +κs δjl δjk ∂ik ℑm Γs (z − z) ∂il ℑm Γs (z − z) . Finally, using the identity d X

i,j,k,l=1

=

    S  S ω (z − z) δjl δjk ∂ik ℑm Γω (z − z) ∂ ℑm Γ il s s d  2 X  S ∂ik ℑm Γω (z − z) , s

i,k=1

we obtain that T2 (z S ) =

2 2  S  S 1 2 1 4 ω ω ℑm Γ (z − z) − ℑm Γ (z − z) ∇ ∇ . s s µ2 κ4s µ2

(6.51)


Similarly, d X

1 µ2 κ4s

T3 (z S ) =

i,j,k,l=1

"

2   S ∂ijkl ℑm Γω s (z − z)

   S  S ω (z − z) (z − z) ∂ ℑm Γ +2κ2s δjl ∂ik ℑm Γω ijkl s s +κ4s δil δjk

#     S  S ω ω ∂jk ℑm Γs (z − z) ∂il ℑm Γs (z − z) .

By virtue of d X

i,j,k,l=1

 S   S  δil δjk ∂jk ℑm Γω ∂il ℑm Γω s (z − z) s (z − z) d    X  S  S = ∂kk ℑm Γω ∂ii ℑm Γω s (z − z) s (z − z) i,k=1

 2 S = κ4s ℑm Γω s (z − z) ,

we have T3 (z S )

2 2  S  S 1 4 2 2 ω ω ℑm Γ ℑm Γ (z − z) − (z − z) ∇ ∇ s s µ2 κ4s µ2 2 κ4 S (6.52) + s2 ℑm Γω s (z − z) . µ

=

By substituting (6.50), (6.51) and (6.52) in (6.48) and using again (6.49), we have (6.42), and (6.43) follows by straightforward computation. □ Figure 6.1 shows typical plots of J_{α,α} for α = p, s.


Figure 6.1: Typical plots of J_{p,p} (on the left) and J_{s,s} (on the right) for z = 0 and c_p/c_s = √11.


CONCLUDING REMARKS

In this chapter, we have performed an analysis of the topological derivative based elastic inclusion detection algorithm. We have seen that the standard TD based imaging functional may not attain its maximum at the location of the inclusion, unlike in the case of scalar wave equations. Moreover, we have shown that its resolution does not reach the diffraction limit and identified the responsible terms, which are associated with the coupling of different wave modes. In order to enhance resolution to its optimum, we canceled out these coupling terms by means of a Helmholtz decomposition and thereby designed a weighted imaging functional. We proved that the modified functional behaves like the square of the imaginary part of a pressure or a shear Green function, depending upon the choice of the incident wave, and thus attains its maximum at the true location of the inclusion with a Rayleigh resolution limit, that is, of the order of half a wavelength. It is worth emphasizing that the topological derivative based imaging functional can be easily extended to the crack problem. Moreover, topological derivative based imaging is efficient only in the time-harmonic regime. In the static regime, there is no equivalent to the Helmholtz-Kirchhoff identities. Topological derivatives of misfit functionals can be written in the static case; however, they do not attain their maximum at the location of the elastic defect. In this case, MUSIC-type algorithms seem to be the most appropriate ones for imaging small inclusions or cracks.

Chapter Seven

Stability of Topological Derivative Based Imaging Functionals

Elastic imaging involves the measurement and processing of inclusion and crack signatures. Any practical measurement always contains an undesirable component that is independent of the desired signal. This component is referred to as measurement noise. On the other hand, medium noise models the uncertainty in the physical parameters of the background medium: the physical parameters of the background medium fluctuate spatially around a known background. Of great concern in imaging is the question of how measurement and medium noises are modeled and how the imaging process handles them, that is, whether they are suppressed or amplified. Measurement and medium noises affect the stability of the imaging functionals in very different ways. The principal objective of this chapter is to carry out a detailed stability analysis of the topological derivative based imaging functionals, proposed in Chapter 6, with respect to medium and measurement noises. An understanding of the stability of the reconstruction algorithms is very important for the qualitative analysis of the recovered images. Unfortunately, this issue has not received much attention in the literature, particularly for topological derivative based imaging algorithms. For scalar wave propagation imaging, stability and resolution analysis for direct imaging algorithms has recently been provided in [23]. Resolution analysis is used to estimate the size of the finest detail that can be reconstructed from the data. Moreover, the trade-off between signal-to-noise ratio and resolution has also been quantified. The analysis for elastic imaging is much more delicate than in the scalar case because of the coupling between the shear and pressure waves. This chapter is organized as follows. Section 7.1 is devoted to the stability analysis with respect to measurement noise. In Section 7.2 we analyze the stability with respect to medium noise of the proposed imaging functionals. The probabilistic tools essential for understanding the stability analysis are reviewed in Appendix A.

7.1 STATISTICAL STABILITY WITH MEASUREMENT NOISE

Let U_j^p and U_j^s, for j = 1, 2, ..., n, be given by (6.10). Define
\[
\mathcal{I}_{\rm WF}[\{U_j^\alpha\}](z^S) = \frac{1}{n}\sum_{j=1}^{n}\mathcal{I}_W[U_j^\alpha](z^S), \quad \alpha = p, s. \tag{7.1}
\]

In the previous chapter, we have analyzed the resolution of the imaging functional IWF in the ideal situation where the measurement umeas is accurate. Here, we analyze how the result will be modified when the measurement is corrupted by noise.

7.1.1 Measurement Noise Model

We consider the simplest model for the measurement noise. Let u_true be the accurate value of the elastic displacement field. The measurement u_meas is then
\[
u_{\rm meas}(x) = u_{\rm true}(x) + \nu_{\rm noise}(x), \tag{7.2}
\]

which is the accurate value corrupted by measurement noise modeled as νnoise (x), x ∈ ∂Ω. Note that νnoise (x) is valued in Cd , d = 2, 3.

Let E denote the expectation with respect to the statistics of the measurement noise. We assume that {ν_noise(x), x ∈ ∂Ω} is mean-zero circular Gaussian and satisfies
\[
E[\nu_{\rm noise}(y)\otimes\nu_{\rm noise}(y')] = \sigma_{\rm noise}^2\,\delta_y(y')\, I. \tag{7.3}
\]

This means that, first, the measurement noises at different locations on the boundary are uncorrelated; second, different components of the measurement noise are uncorrelated; and third, the real and imaginary parts are uncorrelated. Finally, the noise has variance σ²_noise.

In the imaging functional I_WF, the elastic medium is probed by multiple plane waves with different propagating directions, and consequently multiple measurements are obtained at the boundary accordingly. We assume that two measurements corresponding to two different plane wave propagations are uncorrelated. Therefore, it holds that
\[
E[\nu^j_{\rm noise}(y)\otimes\nu^l_{\rm noise}(y')] = \sigma_{\rm noise}^2\,\delta_{jl}\,\delta_y(y')\, I, \tag{7.4}
\]

where j and l are labels for the measurements corresponding to the jth and lth plane waves, respectively, and δjl is the Kronecker symbol.
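For numerical experiments, noise with the statistics (7.3)-(7.4) is straightforward to generate once the boundary is discretized. A minimal sketch follows, assuming n_points receivers on ∂Ω and n_illum independent illuminations (both are discretization choices, not quantities defined in the book); splitting the variance equally between the real and imaginary parts makes the noise circular.

```python
# Draw mean-zero circular Gaussian measurement noise, uncorrelated across
# boundary points, components and illuminations, with E|nu|^2 = sigma_noise^2.
import numpy as np

def draw_measurement_noise(n_illum, n_points, d, sigma_noise, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    scale = sigma_noise / np.sqrt(2.0)      # half of the variance in each of Re and Im
    real = rng.normal(0.0, scale, size=(n_illum, n_points, d))
    imag = rng.normal(0.0, scale, size=(n_illum, n_points, d))
    return real + 1j * imag

# usage: corrupt the j-th recorded field, u_meas[j] = u_true[j] + nu[j]
nu = draw_measurement_noise(n_illum=16, n_points=128, d=2, sigma_noise=0.05)
```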

7.1.2 Propagation of Measurement Noise in the Backpropagation Step

The measurement noise affects the TD based imaging functional through the backpropagation step, which builds the function w in (6.4). Due to the noise, we have
\[
w(x) = -\mathcal{S}^\omega_\Omega\Big[\Big(\frac12 I - \mathcal{K}^\omega_\Omega\Big)[u_0 - u_{\rm true} - \nu_{\rm noise}]\Big](x) = w_{\rm true}(x) + w_{\rm noise}(x), \tag{7.5}
\]
for x ∈ Ω. Here, w_true is the result of backpropagating only the accurate data, while w_noise is that of backpropagating the measurement noise:
\[
w_{\rm noise}(x) = \mathcal{S}^\omega_\Omega\Big[\Big(\frac12 I - \mathcal{K}^\omega_\Omega\Big)[\nu_{\rm noise}]\Big](x), \quad x \in \Omega. \tag{7.6}
\]

(7.6)

(7.7)

Then, due to linearity, νnoise,1 is also a mean-zero circular Gaussian random process.


Its covariance function can be calculated as
\[
E[\nu_{\rm noise,1}(y)\otimes\nu_{\rm noise,1}(y')] = \frac14 E[\nu_{\rm noise}(y)\otimes\nu_{\rm noise}(y')] - \frac12 E[\mathcal{K}^\omega_\Omega[\nu_{\rm noise}](y)\otimes\nu_{\rm noise}(y')] - \frac12 E[\nu_{\rm noise}(y)\otimes\mathcal{K}^\omega_\Omega[\nu_{\rm noise}](y')] + E[\mathcal{K}^\omega_\Omega[\nu_{\rm noise}](y)\otimes\mathcal{K}^\omega_\Omega[\nu_{\rm noise}](y')].
\]
The terms on the right-hand side can be evaluated using the statistics of ν_noise and the explicit expression of K^ω_Ω. Let us calculate the last term. It has the expression
\[
E\Big[\int_{\partial\Omega}\int_{\partial\Omega}\Big(\frac{\partial\Gamma^\omega}{\partial\nu(x)}(y - x)\,\nu_{\rm noise}(x)\Big)\otimes\Big(\frac{\partial\Gamma^\omega}{\partial\nu(x')}(y' - x')\,\nu_{\rm noise}(x')\Big)\, d\sigma(x)\, d\sigma(x')\Big].
\]

Using the coordinate representations and the summation convention, we can calculate the jkth element of this matrix by
\[
\int_{\partial\Omega}\int_{\partial\Omega}\Big(\frac{\partial\Gamma^\omega}{\partial\nu(x)}(y - x)\Big)_{jl}\Big(\frac{\partial\Gamma^\omega}{\partial\nu(x')}(y' - x')\Big)_{ks}\, E[\nu_{\rm noise}(x)\otimes\nu_{\rm noise}(x')]_{ls}\, d\sigma(x)\, d\sigma(x')
= \sigma^2_{\rm noise}\int_{\partial\Omega}\Big(\frac{\partial\Gamma^\omega}{\partial\nu(x)}(y - x)\Big)_{js}\Big(\frac{\partial\Gamma^\omega}{\partial\nu(x)}(y' - x)\Big)_{ks}\, d\sigma(x)
= \sigma^2_{\rm noise}\Big(\int_{\partial\Omega}\frac{\partial\Gamma^\omega}{\partial\nu(x)}(y - x)\,\frac{\partial\Gamma^\omega}{\partial\nu(x)}(x - y')\, d\sigma(x)\Big)_{jk}.
\]

In the last step, we used the reciprocity relation (1.70).

The other terms in the covariance function of ν_noise,1 can be similarly calculated. Consequently, we have
\[
E[\nu_{\rm noise,1}(y)\otimes\nu_{\rm noise,1}(y')] = \frac{\sigma^2_{\rm noise}}{4}\,\delta_y(y')\, I - \frac{\sigma^2_{\rm noise}}{2}\Big(\frac{\partial\Gamma^\omega}{\partial\nu(y')}(y - y') + \frac{\partial\Gamma^\omega}{\partial\nu(y)}(y - y')\Big) + \sigma^2_{\rm noise}\int_{\partial\Omega}\frac{\partial\Gamma^\omega}{\partial\nu(x)}(y - x)\,\frac{\partial\Gamma^\omega}{\partial\nu(x)}(x - y')\, d\sigma(x). \tag{7.8}
\]

From the expressions of I_WF and I_W, we see that only the Helmholtz decomposition of w, that is, H^p[w] and H^s[w], is used in the imaging functional. Define w^α = H^α[w], α = p, s. Using the decomposition in (7.5), we can similarly define w^α_true and w^α_noise. In particular, we find that
\[
w^\alpha_{\rm noise}(x) = \int_{\partial\Omega}\Gamma^\omega_\alpha(x - y)\,\nu_{\rm noise,1}(y)\, d\sigma(y), \quad x \in \Omega.
\]
This is a mean-zero C^d-valued circular Gaussian random field defined for x ∈ Ω.


The jkth element of its covariance function is evaluated by
\[
E[w^\alpha_{\rm noise}(x)\otimes w^\alpha_{\rm noise}(x')]_{jk} = \int_{(\partial\Omega)^2}\big(\Gamma^\omega_\alpha(x - y)\big)_{jl}\big(\Gamma^\omega_\alpha(x' - y')\big)_{ks}\, E[\nu_{\rm noise,1}(y)\otimes\nu_{\rm noise,1}(y')]_{ls}\, d\sigma(y)\, d\sigma(y').
\]
Using the statistics of ν_noise,1 derived above, we find that
\[
E[w^\alpha_{\rm noise}(x)\otimes w^\alpha_{\rm noise}(x')] = \frac{\sigma^2_{\rm noise}}{4}\int_{\partial\Omega}\Gamma^\omega_\alpha(x - y)\,\Gamma^\omega_\alpha(y - x')\, d\sigma(y) - \frac{\sigma^2_{\rm noise}}{2}\int_{(\partial\Omega)^2}\Gamma^\omega_\alpha(x - y)\Big[\frac{\partial\Gamma^\omega}{\partial\nu(y)}(y - y') + \frac{\partial\Gamma^\omega}{\partial\nu(y')}(y - y')\Big]\Gamma^\omega_\alpha(y' - x')\, d\sigma(y)\, d\sigma(y') + \sigma^2_{\rm noise}\int_{(\partial\Omega)^3}\Gamma^\omega_\alpha(x - y)\,\frac{\partial\Gamma^\omega}{\partial\nu(y'')}(y - y'')\,\frac{\partial\Gamma^\omega}{\partial\nu(y'')}(y'' - y')\,\Gamma^\omega_\alpha(y' - x')\, d\sigma(y'')\, d\sigma(y)\, d\sigma(y').
\]
Thanks to the Helmholtz-Kirchhoff identities, the above expression is simplified to
\[
E[w^\alpha_{\rm noise}(x)\otimes w^\alpha_{\rm noise}(x')] = -\frac{\sigma^2_{\rm noise}}{4 c_\alpha\omega}\,\Im m\,\Gamma^\omega_\alpha(x - x') + \frac{\sigma^2_{\rm noise}}{2 c_\alpha\omega}\int_{\partial\Omega}\Gamma^\omega_\alpha(x - y)\,\frac{\partial\,\Im m\,\Gamma^\omega_\alpha(y - x')}{\partial\nu(y)}\, d\sigma(y) + \frac{\sigma^2_{\rm noise}}{2 c_\alpha\omega}\int_{\partial\Omega}\frac{\partial\,\Im m\,\Gamma^\omega_\alpha(x - y')}{\partial\nu(y')}\,\Gamma^\omega_\alpha(y' - x')\, d\sigma(y') + \frac{\sigma^2_{\rm noise}}{(c_\alpha\omega)^2}\int_{\partial\Omega}\frac{\partial\,\Im m\,\Gamma^\omega_\alpha(x - y'')}{\partial\nu(y'')}\,\frac{\partial\,\Im m\,\Gamma^\omega_\alpha(y'' - x')}{\partial\nu(y'')}\, d\sigma(y'').
\]
Assuming that x, x′ are far away from the boundary, we have from [15] the asymptotic formula
\[
\frac{\partial\Gamma^\omega_\alpha}{\partial\nu(y)}(x - y) \simeq i c_\alpha\omega\,\Gamma^\omega_\alpha(x - y), \tag{7.9}
\]
where the error is of order o(|x − y|^{1/2−d}). Using this asymptotic formula and the Helmholtz-Kirchhoff identities (taking the imaginary part of the identity), we obtain that
\[
E[w^\alpha_{\rm noise}(x)\otimes w^\alpha_{\rm noise}(x')] = -\frac{\sigma^2_{\rm noise}}{4 c_\alpha\omega}\,\Im m\,\Gamma^\omega_\alpha(x - x'). \tag{7.10}
\]

7.1.3

Stability Analysis

Now we are ready to analyze the statistical stability of the imaging functional IWF . As before, we consider separate cases where the medium has only density contrast

116

CHAPTER 7

or only elastic contrast. 7.1.3.1

Case I: Density contrast

Using the facts that the plane waves U p are irrotational and that the plane waves U s are solenoidal, we see that for a search point z S ∈ Ω, and α = p, s,   ′ ρe α S 2 − 1 |B ′ | IWF [{Uj }](z ) = cα ω ρ n 1X α α × ℜe {Ujα (z S ) · (wj,true (z S ) + wj,noise (z S ))}. n j=1 α We observe the following: The contribution of {wj,true } is exactly that in Proposiα tion 6.4. On the other hand, the contribution of {wj,noise } forms a field corrupting the true image. With Cα := cα ω 2 |B ′ |(e ρ′ /ρ − 1),

the covariance function of the corrupted image can be calculated as follows. Let y S ∈ Ω. We have Cov(IWF [{Ujα }](z S ), IWF [{Ujα }](y S )) n 1 X α α E[ℜe {Ujα (y S ) · wj,noise =Cα2 2 (y S )}ℜe {Ulα (z S ) · wl,noise (z S )}] n =Cα2

j,l=1 n X

1 2n2

j=1

o n α α (y S )]Ujα (y S ) . ℜe Ujα (z S ) · E[wj,noise (z S ) ⊗ wj,noise

α α To get the second equality, we used the fact that wj,noise and wl,noise are uncorrelated unless j = l. Thanks to the statistics (7.10), the covariance of the image is given by

−Cα2

n 2 X S S σnoise 1 ω S S α ℜe eiκα (z −y )·eθj eα θj · [ℑm{Γα (z − y )}eθj ], 2 4cα ω 2n j=1

S ⊥ where eP θj = eθj and eθj = eθj . Using arguments similar to those in the proof of Proposition 6.4, we obtain that

Cov(IWF [{Ujα }](z S ), IWF [{Ujα }](y S )) = Cα′

2 σnoise S S 2 |ℑm Γω α (z − y )| , 2n

(7.11)

where the constant Cα′ = cα ω 3 µ|B ′ |2 (

ρe′ π κs − 1)2 ( )d−2 ( )2 . ρ κα κα

The following remarks hold. √ First, the perturbation due to noise has small typical values of order σnoise / 2n and slightly affects the peak of the imaging functional IWF . Second, the typical shape of the hot spot in the corrupted image due to the noise is exactly of the form of the main peak of IWF obtained in the absence of noise. Third, the use of multiple directional plane waves reduces the effect of measurement noise on the image quality.

STABILITY OF TOPOLOGICAL DERIVATIVE BASED IMAGING FUNCTIONALS

117

From (7.11) it follows that the variance of the imaging functional IWF at the search point z S is given by Var(IWF [{Ujα }](z S )) = Cα′

2 σnoise 2 |ℑm Γω α (0)| . 2n

Define the signal-to-noise ratio (SNR) by E[IWF [{U α }](z)] j SNR := , Var(IWF [{Ujα }](z))1/2

(7.12)

(7.13)

where z is the true location of the inclusion. From (6.28), (6.29), and (7.12), we have p d ρ − ρ| 4 2π d−2 nω 5−d ρ3 cd−1 α ǫ |B||e SNR = |ℑm Γω (7.14) α (0)|. σnoise From (7.14) one can observe that the SNR is proportional to the contrast |e ρ −ρ| and the volume of the inclusion ǫd |B| over the standard deviation of the noise, σnoise . It also increases as the square root of the number of illuminations (which is not surprising since the additive noises are supposed to be uncorrelated for different illuminations).

7.1.3.2

Case II: Elastic contrast

In the case of elastic contrast, the imaging functional becomes for z S ∈ Ω n

IWF [{Ujα }](z S ) = cα

1X α α ∇Ujα (z S ) : M′ (B ′ )(∇wj,true (z S ) + ∇wj,noise (z S )). n j=1

α α Here, wj,true and wj,noise are defined in the previous section. They correspond to the backpropagation of pure data and that of the measurement noise. The α contribution of wj,true is exactly the imaging functional with unperturbed data α and it is investigated in Proposition 6.5. The contribution of wj,noise perturbs the true image. For z S , y S ∈ Ω, the covariance function of the TD noisy image is given by

Cov(IWF [{Ujα }](z S ), IWF [{Ujα }](y S )) n 1 X α α (y S )}] E[ℜe{∇Ujα (z S ) : M′ ∇wj,noise (z S )}ℜe {∇Ulα (y S ) : M′ ∇wl,noise =c2α 2 n =c2α =c2α

j,l=1 n X

1 2n2 1 2n2

j,l=1 n X j=1

α α ℜe E[(∇Ujα (z S ) : M′ ∇wj,noise (z S ))(∇Ulα (y S ) : M′ ∇wl,noise (y S ))]

ℜe

h n io α α (z S )∇wj,noise ∇Ujα (z S ) : M′ E[∇wj,noise (y S )] : M′ ∇Ujα (y S ) .

Using (7.10), we find that α α E[∇wj,noise (z S )∇wj,noise (y S )] = −

2 σnoise S S ℑm ∇zS ∇yS {Γω α (z − y )}. 4cα ω

118

CHAPTER 7

After substituting this term into the expression of the covariance function, we find that it becomes n h n io 2  2 ω S cα σnoise 1 X S ′ α S ′ α (y S ) ℑm ∇ Γ (z − y ) : M ℜe ∇U (z ) : M ∇U . α j j 4ω 2n2 j=1

The sum has exactly the form that was analyzed in the proof of Proposition 6.3. Using similar techniques, we finally obtain that 2 cα 3 π d−2 κs 2 σnoise ( ) Jα,α (z S , y S ), ω κα κα 2n (7.15) where Jα,α is defined by (6.20). The variance of the TD image can also be obtained from (7.15). As in the case of density contrast, the typical shape of hot spots in the image corrupted by noise is the same as the main peak √ of the true image. Further, the effect of measurement noise is reduced by a factor of n by using n plane waves. In particular, the SNR of the TD image is given by √ q √ ǫd µω π  d−2 κs 4 2n 2 Jα,α (z, z). (7.16) SNR = p κα σnoise c3α κα

Cov(IWF [{Ujα }](z S ), IWF [{Ujα }](y S )) = µ

7.2

STATISTICAL STABILITY WITH MEDIUM NOISE

In the previous section, we demonstrated that the proposed imaging functional using multi-directional plane waves is statistically stable with respect to uncorrelated measurement noises in the sense that the SNR increases as the square root of the number of plane waves used. Now we investigate the case of medium noise, where the constitutional parameters of the elastic medium fluctuate around a constant background. 7.2.1

Medium Noise Model

For simplicity, we consider a medium that fluctuates in the density parameter only. That is, ρ(x) = ρ0 [1 + γ(x)], (7.17) where ρ0 is the constant background and ρ0 γ(x) is the random fluctuation in the density. Note that γ is real valued. Throughout this section, we will call the homogeneous medium with parameters (λ0 , µ0 , ρ0 ) the reference medium. The background medium refers to the one without inclusion but with density fluctuation. Consequently, the background Neumann problem of elastic waves is no longer (3.46). Indeed, that equation corresponds to the reference medium, and its solution will be denoted by U (0) . The new background solution is   (Lλ0 ,µ0 + ρ0 ω 2 [1 + γ])U = 0 in Ω, (7.18)  ∂U = g on ∂Ω. ∂ν Similarly, the Neumann function associated with the problem in the reference medium will be denoted by Nω,(0) . We denote by Nω the Neumann function

STABILITY OF TOPOLOGICAL DERIVATIVE BASED IMAGING FUNCTIONALS

associated with the background medium, that is,  λ ,µ 2 ω  (L 0 0 + ρ0 ω [1 + γ(x)])N (x, y) = −δy (x)I, ∂Nω  (x, y) = 0, ∂ν(x)

x ∈ Ω,

x 6= y,

x ∈ ∂Ω.

119

(7.19)

We assume that γ has small amplitude so that the Born approximation is valid. In particular, from the Lippmann-Schwinger representation formula for Nω : Z Nω,(0) (x, y ′ )γ(y ′ )Nω (y ′ , y)dy ′ , Nω (x, y) = Nω,(0) (x, y) + ρ0 ω 2 Ω

we have Nω (x, y) ≃ Nω,(0) (x, y) + ρ0 ω 2

Z

Nω,(0) (x, y ′ )γ(y ′ )Nω,(0) (y ′ , y)dy ′ .

(7.20)



As a consequence, we also have that U ≃ U (0) + U (1) , where Z U (1) (x) = ρ0 ω 2 γ(z)Nω,(0) (x, z)U (0) (z)dz.

(7.21)



Letting σγ denote the typical size of γ (say, its standard deviation), the remainders in the above approximations are of order o(σγ ).

7.2.2

Statistics of the Speckle Field in the Case of a Density Contrast Only

We assume that the inclusion has density contrast only. The backpropagation step constructs w as follows: Z 1 ω )[u (0) ](y)dσ(y), w(x) = Γω (x, y)( I − KΩ x ∈ Ω. (7.22) meas − U 2 ∂Ω We emphasize that the backpropagation step uses the reference fundamental solutions, and the differential measurement is with respect to the reference solution. These are necessary steps because the fluctuations in the background medium are unknown, so that the background solution is unknown. We write the difference between umeas and U (0) as the sum of U − U (0) and umeas − U . These two differences are estimated by U (1) in (7.21) and by (3.48), respectively. Using identity (1.82), we find that, for x ∈ Ω, Z Z 2 ω w(x) = − ρ0 ω Γ (x − y) Γω (y − y ′ )U (0) (y ′ )γ(y ′ )dy ′ dσ(y) ∂Ω Ω Z (7.23) + Cǫd Γω (x − y)Γω (y − z)U (0) (y)dσ(y) + O(σγ ǫd ) + o(σγ ), ∂Ω

where C = ω 2 (ρ − ρe)|B|, as in (6.9). The second term is the leading contribution of umeas − U given by approximating the unknown Neumann function and the background solution by those associated with the reference medium (see Theorem 3.10). The leading error in this approximation is of order O(σγ ǫd ) and can be

120

CHAPTER 7

written explicitly as Z Z ω 2 d Γ (x, y) Cρ0 ω ǫ Γω (y, y ′ )Nω,(0) (y ′ , z)U (0) (z)γ(y ′ )dy ′ dσ(y) ∂Ω Ω Z Z Γω (x, y)Γω (y, z) + Cρ0 ω 2 ǫd Nω,(0) (z, y ′ )U (0) (y ′ )γ(y ′ )dy ′ dσ(y); ∂Ω



it is neglected in what follows. For the Helmholtz decomposition wα , α = p, s, the first fundamental solution Γ (x−z) in the expression (7.23) should be changed to Γω α (x−z). We observe that the second term in (7.23) is exactly (6.8). Therefore, we call this term wtrue and refer to the other term in the expression as wnoise . Using the Helmholtz-Kirchhoff identities, we obtain Z ρ0 ω α (0) (y)dy, wnoise (x) ≃ x ∈ Ω. (7.24) γ(y)ℑm {Γω α (x − y)}U cα Ω ω

α We have decomposed the backpropagation wα into the “true” wtrue which beα haves like in reference medium and the error part wnoise . In the TD imaging functional using multiple plane waves with equi-distributed directions, the contriα bution of wtrue is exactly as the one analyzed in Proposition 6.4. The contribution α of wnoise is a speckle field.

The covariance function of this speckle field, or, equivalently, that of the TD image corrupted by noise, is Cov(IWF [{Ujα }](z S ), IWF [{Ujα }](y S )) n 1 X (0),α S (0),α α α = Cα2 2 E[ℜe{Uj (z ) · wj,noise (z S )}ℜe {Ul (y S ) · wl,noise (y S )}], n j,l=1

for z S , y S ∈ Ω, where Cα is defined to be cα ω 2 |B ′ |(e ρ′ /ρ − 1). Here U (0),α are the reference incoming plane waves (6.10). Using the expression (7.24), we have n

1X α S α U (z ) · wj,noise (z S ) n j=1 j n Z  (0),α S  1X (0),α S γ(y) Uj (z ) ⊗ Uj (y) : ℑm {Γω = bα α (z − y)}dy n j=1 Ω Z n 1 X iκα (zS −y)·eθj α ω S = bα γ(y) e eθ j ⊗ eα θj : ℑm{Γα (z − y)}dy, n j=1 Ω where bα = (ρ0 ω)/cα . Finally, using (6.14) and (6.15) for α = p and s respectively, we obtain that Z n 1 X (0),α S S 2 α (z S ) = −b′α γ(y)|ℑm Γω (7.25) Uj (z ) · wj,noise α (z − y)| dy. n j=1 Ω Here b′α = 4bα µ0 (π/κα )d−2 (κs /κα )2 . Note that the sum above is a real quantity.

121

STABILITY OF TOPOLOGICAL DERIVATIVE BASED IMAGING FUNCTIONALS

The covariance function of the TD image simplifies to Z Z 2 ′ 2 S 2 ω S ′ 2 ′ Cα bα Cγ (y, y ′ )|ℑm Γω α (z − y)| |ℑmΓα (y − y )| dydy , Ω

(7.26)



where Cγ (y, y ′ ) = E[γ(y)γ(y ′ )] is the two-point covariance function of the fluctuations in the density parameter. Note that the SNR defined by (7.13) does not decay with n. This is because the same realization of the random medium perturbs the recorded data for all illuminations. Remark 7.1. The expression in (7.25) shows that the speckle field in the image 2 is essentially the medium noise smoothed by an integral kernel of the form |ℑm Γω α| . Similarly, (7.26) shows that the correlation structure of the speckle field is essentially that of the medium noise smoothed by the same kernel. Because the typical width of this kernel is about half the wavelength, the correlation length of the speckle field is roughly the maximum between the correlation length of medium noise and the wavelength. 7.2.3

Statistics of the Speckle Field in the Case of an Elastic Contrast

The case of elastic contrast can be considered similarly. The covariance function of the TD image is n c2α X (0),α S (0),α α α E[ℜe{∇Uj (z ) : M′ ∇wj,noise (z S )}ℜe {∇Ul (y S ) : M′ ∇wl,noise (y S )}]. n2 j,l=1

α Using the expression of wnoise , we have

Z n n S 1X 1X α ∇Ujα (z S ) : M′ ∇wj,noise iκα eiκα (z −y)·eθj (z S ) = −bα γ(y) n j=1 n j=1 Ω  ′  α α ω S ×eθj ⊗ eθj ⊗ eθj : M ℑm {∇zS Γα (z − y)} dy.

From (6.14) and (6.15), we see that n

1X π d−2 κs 2 α iκα eiκα x·eθj eθj ⊗ eα ( ) ℑm {∇Γω θj ⊗ eθj = −4µ0 α (x)}. n j=1 κα κα

(7.27)

Using this formula, we get n

1X α ∇Ujα (z S ) : M′ ∇wj,noise (z S ) = b′α n j=1

Z



γ(y)Q2α [M′ ](z S − y)dy,

(7.28)

where Q2α [M′ ](x) is a nonnegative function defined as ω Q2p [M](x) = ℑm {∇Γω p (x)} : [Mℑm {∇Γp (x)}]

2 ω 2 = a|ℑm {∇Γω p (x)}| + b|ℑm{∇ · Γp (x)}| ,

(7.29)

where a and b are defined in (3.20). The last equality follows from the expression ω (3.20) of M and the fact that ∂i (Γω p )jk = ∂j (Γp )ik . Note that this symmetry is not

122

CHAPTER 7

2 satisfied for Γω s . Note also that Qα is nonnegative and (7.28) is real valued. The covariance function of the TD image simplifies to Z Z 2 ′ 2 cα b α Cγ (y, y ′ )Q2α [M′ ](z S − y)Q2α [M](y S − y ′ )dydy ′ , z S , y S ∈ Ω. (7.30) Ω



Remark 7.2. If we compare (7.28) with (7.25), then one can see that they are of the same form except that the integral kernel is now Q2α [M′ ]. Therefore, Remark 7.1 applies here as well. We remark √ also that the further reduction of the effect of measurement noise with rate 1/ 2n does not appear in the medium noise case. In this sense, TD imaging is less stable with respect to medium noise. 7.2.4

Random Elastic Medium

In this section we consider the case when the random fluctuations occur in the elastic coefficients. This is a more delicate case because it is well known that inhomogeneity in the Lamé coefficients causes mode conversion. Nevertheless, we demonstrate below that as long as the random fluctuations are weak so that the Born approximation is valid, the imaging functional we proposed remains moderately stable. To simplify the presentation, we assume that random fluctuations occur only in the shear modulus µ while the density ρ0 and the compressional modulus λ0 are homogeneous. The equation for a time-harmonic elastic wave is then ρ0 ω 2 u + λ0 ∇(∇ · u) + 2∇ · (µ(x)∇s u) = 0,

(7.31)

with the same boundary condition as before. The inhomogeneous shear modulus is given by µ(x) = µ0 [1 + γ(x)], (7.32) where γ(x) is a random process modeling the fluctuation. Born approximation. The equation for the elastic wave above can be written as Lλ0 ,µ0 u + ρ0 ω 2 u = −2µ0 ∇ · (γ(x)∇s u).

Assume that the random fluctuation γ is small enough so that the Born approximation is valid. We then have u ≃ u0 + u1 , where u0 solves the equation in the reference medium and u1 , the simple scattered wave, solves the above equation with u on the right-hand side replaced by u0 . More precisely, we have Z u1 (x) = 2µ0 Nω,(0) (x, y)∇ · (γ(y)∇s u0 (y))dy. (7.33) Ω

Here, Nω,(0) is the known Neumann function in the reference medium without random fluctuation. Postprocessing step. As seen before, the postprocessing (6.4) is a critical step in our method. As discussed in Section 7.2.2, even when the medium is random, we have to use the reference Green’s function and the reference solution associated with the homogeneous medium in this postprocessing step since we do not know the medium. Following the analysis in Section 7.2.2, we see that as in (7.23) the function w contains two main contributions: First, backpropagating the difference between the measurement and the background solution in the random medium

STABILITY OF TOPOLOGICAL DERIVATIVE BASED IMAGING FUNCTIONALS

123

without inclusion contributes to the detection of inclusion. Second, backpropagating the difference between the background solution and the reference solution in the homogeneous medium amounts to a speckle pattern in the image. The first contribution corresponds to the case with exact data and is discussed in Section 7.1. We focus on the second contribution, which accounts for the statistical stability. This part of the postprocessed function w has the expression 1 α ω )u ](x) wnoise (x) = Hα [SΩω ( I − KΩ 1 2 Z Z 1 ω) = Hα [ Γω (x, y)[( I − KΩ Nω,(0) (·, x′ )v(x′ )dx′ ](y)dσ(y)], 2 ∂Ω Ω where v(x) = 2µ0 ∇ · (γ(x)∇s u0 (x))

is referred to as the first scattering source. Using (1.82) and the Helmholtz-Kirchhoff identities again, we obtain that Z 2µ0 α s wnoise (x) = (7.34) ℑm {Γω α (x, y)}∇ · [γ(y)∇ u0 (y)]dy. cα ω Ω Remark 7.3. Compare the above expression with that in (7.24). The first scattering source in (7.24) is exactly the incident wave in the case with density fluctuation but is more complicated in the case with elastic fluctuation; see (7.35) below. This shows that the Born approximation in an inhomogeneous medium indeed captures weak mode conversion. Nevertheless, (7.34) shows that our method, due to the Helmholtz-Kirchhoff identities and our proposal of using Helmholtz decomposition, extracts only the modes that are desired by the imaging functional. As we will see, this is crucial to the statistical stability of the imaging functional.

The speckle field. For simplicity of presentation, we consider only the case of density inclusion and the usage of pressure waves in (7.1). For a pressure wave U p = eiκp x·eθ eθ , the first scattering source is v(x) = 2iκp µ0 ∇ · (γ(x)eiκp x·eθ eθ ⊗ eθ ) = 2iκp µ0 (∇γ · eθ )U p (x) − 2κ2p µ0 γ(x)U p (x). (7.35) The speckle field in the imaging functional with a set of pressure waves {UjP } is, for z S ∈ Ω, IWF,noise [{Ujp }](z S ) = cp ω 2 ( ρe′ = ω( − 1)|B ′ |ℜe ρ −2

1 n

n X j=1

Z



n X ρe′ 1 p − 1)|B ′ | ℜe Ujp (z S ) · wnoise (z S ) ρ n j=1 n

−2κ2p γ(x)

iκp eiκp (z

S

−x)·eθj

1 X iκp (zS −x)·eθj S e eθj ⊗ eθj : ℑm Γω p (z , x) n j=1

S eθj ⊗ eθj ⊗ eθj : [∇γ(x) ⊗ ℑm Γω p (z , x)]dx.

Using the summation formulas (6.14) and (7.27), we can rewrite the above quantity

124

CHAPTER 7

as C1p

Z



S 2 ω S ω S 2κ2p γ(x)|ℑmΓω p (z , x)| +2ℑm {∇z S Γp (z , x)} : [∇γ(x)⊗ℑmΓp (z , x)]dx.

Here, the constant is C1p = 4µ0 (

ω3 ′ π d−2 κs 2 ρe′ ρ − ρ)|B ′ |. ) ( ) ω( − 1)|B ′ | = 4π d−2 d (e κp κp ρ κp

Assuming that γ = 0 near the boundary and using the divergence theorem, we can further simplify the expression of the speckle field to Z S 2 C1p γ(x)[(2κ2p I + ∆x )|ℑm Γω p (z , x)| ]dx Ω Z p S 2 = C1 [(2κ2p I + ∆)γ(x)]|ℑmΓω p (z , x)| dx. Ω

This expression is again of the form of (7.25) and (7.28) except that the integral kernel is more complicated. Its correlation can be similarly calculated. Furthermore, Remarks 7.1 and 7.3 apply here. Remark 7.4. For other settings such as elastic inclusion or when a set of shear waves are used in the imaging functional, the expressions for the speckle field and its statistics are more complicated. The salient feature of the speckle field does not change. It is essentially the medium noise (or the gradient of the medium noise) smoothed by an integral kernel whose width is of the order of the wavelength. The correlation length of the speckle field is of the order of the maximum between that of the medium noise and the wavelength. CONCLUDING REMARKS In this chapter, we have shown that the proposed topological derivative based imaging functionals are very stable with respect to measurement noise and moderately stable with respect to medium noise.

Chapter Eight

Time-Reversal Imaging of Extended Source Terms

In this chapter we design time-reversal techniques for imaging extended source terms from far-field measurements. A target or a source term is called extended when its characteristic size times the operating frequency (wavenumber) is much larger than one. For the scalar wave equation, the main idea of time reversal is to take advantage of its reversibility in order to backpropagate signals to the sources that emitted them. In the context of scalar inverse source problems, one measures the emitted wave on a closed surface surrounding the source, and retransmits it through the background medium in a time-reversed chronology. Then the perturbation will travel back to the location of the source; see, for instance, [23]. Motivated by the time reversibility property of elastic waves, a time-reversal imaging technique can be implemented to reconstruct the source distribution in an elastic medium [63, 148, 150]. In a general setting, however, it is not guaranteed that it provides a good reconstruction of the source distribution. Indeed, the problem is that the recorded displacement field at the surface of the domain is a mixture of pressure and shear wave components. By time reversing and backpropagating these signals, a blurred image is obtained due to the fact that the pressure and shear wave speeds are different. In this chapter, we consider the homogeneous isotropic elastic wave equation in a d-dimensional open medium with d = 2, 3:
\[
\begin{cases} \dfrac{\partial^2 u}{\partial t^2}(x, t) - \mathcal{L}^{\lambda,\mu} u(x, t) = \dfrac{d\delta_0(t)}{dt}\, F(x), & (x, t) \in \mathbb{R}^d\times\mathbb{R},\\ u(x, t) = \dfrac{\partial u}{\partial t}(x, t) = 0, & x \in \mathbb{R}^d,\ t < 0, \end{cases} \tag{8.1}
\]
where (λ, µ) are the Lamé coefficients of the medium and the density of the medium is supposed to be equal to one.

Our first aim is to design efficient algorithms for reconstructing the compactly supported source function F from the recorded data
\[
g(y, t) = u(y, t), \quad t \in [0, T],\ y \in \partial\Omega, \tag{8.2}
\]
where Ω is supposed to strictly contain the support of F. We are interested in the following time-reversal functional:
\[
\mathcal{I}(x) = \int_0^T v_s(x, T)\, ds, \quad x \in \Omega, \tag{8.3}
\]


where the vector field v_s is defined as the solution of
\[
\begin{cases} \dfrac{\partial^2 v_s}{\partial t^2}(x, t) - \mathcal{L}^{\lambda,\mu} v_s(x, t) = \dfrac{d\delta_s(t)}{dt}\, g(x, T - s)\,\delta_{\partial\Omega}(x), & (x, t) \in \mathbb{R}^d\times\mathbb{R},\\ v_s(x, t) = \dfrac{\partial v_s}{\partial t}(x, t) = 0, & x \in \mathbb{R}^d,\ t < s. \end{cases} \tag{8.4}
\]
Here, δ_∂Ω is the surface Dirac mass on ∂Ω and g = u on ∂Ω × R is the measured displacement field. Our second goal is to give a regularized time-reversal imaging algorithm for source reconstruction in attenuating media and to show that it leads to a first-order approximation, in terms of the viscosity parameters, of the source term. The envisaged problem is quite challenging, indeed, because time reversibility of the wave equation breaks down in lossy media. Further, if not accounted for, these losses produce serious blurring in source reconstruction using classical time-reversal methods. In this chapter, we use a thermoviscous law model for the attenuation losses. We refer, for instance, to [54, 118, 119] for detailed discussions on the attenuation models in wave propagation and their causality properties.
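The refocusing mechanism behind (8.3)-(8.4) is already visible in a scalar one-dimensional toy experiment, which is all the sketch below attempts: it is not the elastic algorithm of this chapter, and in particular it ignores the mixing of pressure and shear components that motivates the weighted functional introduced next. All parameter values are arbitrary illustration choices.

```python
# Toy 1-D scalar time reversal: emit a pulse, record it at the two ends of the
# interval, then re-inject the time-reversed records; the re-emitted field
# refocuses approximately on the original source location.
import numpy as np

c, L, n, n_steps = 1.0, 1.0, 401, 900
h = L / (n - 1); dt = 0.5 * h / c
x = np.linspace(0.0, L, n)

def step(u_prev, u_curr):
    """One leapfrog step of u_tt = c^2 u_xx with first-order absorbing ends."""
    u_next = np.empty_like(u_curr)
    lam2 = (c * dt / h) ** 2
    u_next[1:-1] = 2*u_curr[1:-1] - u_prev[1:-1] + lam2*(u_curr[2:] - 2*u_curr[1:-1] + u_curr[:-2])
    k = (c*dt - h) / (c*dt + h)
    u_next[0]  = u_curr[1]  + k * (u_next[1]  - u_curr[0])
    u_next[-1] = u_curr[-2] + k * (u_next[-2] - u_curr[-1])
    return u_next

# forward run: localized pulse at x = 0.35, traces recorded at both ends
u_prev = np.exp(-((x - 0.35) / 0.02) ** 2); u_curr = u_prev.copy()
rec = np.zeros((n_steps, 2))
for t in range(n_steps):
    u_prev, u_curr = u_curr, step(u_prev, u_curr)
    rec[t] = u_curr[0], u_curr[-1]

# time-reversed re-emission: inject the reversed traces as boundary sources
v_prev = np.zeros(n); v_curr = np.zeros(n)
for t in range(n_steps):
    v_prev, v_curr = v_curr, step(v_prev, v_curr)
    v_curr[0]  += rec[n_steps - 1 - t, 0]
    v_curr[-1] += rec[n_steps - 1 - t, 1]

print("refocusing peak near the source location 0.35:", x[np.argmax(np.abs(v_curr))])
```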

(8.6)

where the Helmholtz decomposition operators Hp and Hs are defined by (1.15). Then, for regularizing time-reversal in attenuating media, we express, using results from [12] based on the stationary phase theorem, the relationship between the Green tensors in attenuating and non-attenuating media. Then, with the help of a new version of Helmholtz-Kirchhoff identities in attenuating media, we prove that a regularized image of the source can be obtained. Finally, we present a variety of numerical illustrations to compare different time-reversal algorithms and to highlight the potential of the regularization approach. It is worth mentioning that there are several other models for attenuation. Empirically, attenuation in viscoelastic media obeys a frequency power law. The thermoviscous model considered here is a special case. In this regard, we refer to [54, 149, 168] and references therein. We concentrate on the thermoviscous model known as Kelvin-Voigt model, which is well justified in modeling the viscoelastic behavior of a large class of elastic materials including soft tissues [64]. In the acoustic case, the acoustic counterpart of the present inverse problem with generalized types of power-law attenuation is dealt with in [14, 12]. The analysis in this chapter is presented first for the thermoviscous model and then a simple procedure is devised to extend the analysis therein to general power-law models using an argument of fractional Laplacian and dominated convergence theorem of Lebesgue. Here, a very similar remark holds, since a viscoelastic wave can be converted to

TIME-REVERSAL IMAGING OF EXTENDED SOURCE TERMS

127

two acoustic type waves using Helmholtz decompositions, and the results presented in this chapter can be extended easily to general viscoelastic models. For acoustic power-law attenuation models and reconstruction algorithms, see [112]. This chapter is organized as follows. In Section 8.1 we analyze the time-reversal imaging functionals. In Section 8.2 we investigate the inverse source problem in an elastic attenuating medium. In Section 8.2 we present numerical illustrations and describe our algorithms for numerical resolution of the inverse source problem. 8.1

ANALYSIS OF THE TIME-REVERSAL IMAGING FUNCTIONALS

We rigorously explain why this new functional $\tilde I$ should be better than the original one $I$, and we substantiate this argument with numerical illustrations. The functional $I(x)$ defined by (8.3) can be expressed in the form
$$
\begin{aligned}
I(x) &= \frac{1}{2\pi}\, \Re e \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega^2 \int_{\partial\Omega} \Gamma^\omega(x,y)\, \overline{\Gamma^\omega(y,z)}\, d\sigma(y)\, d\omega\, F(z)\, dz\\
&= \frac{1}{4\pi} \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega^2 \int_{\partial\Omega} \Big[ \Gamma^\omega(x,y)\, \overline{\Gamma^\omega(y,z)} + \overline{\Gamma^\omega(x,y)}\, \Gamma^\omega(y,z) \Big]\, d\sigma(y)\, d\omega\, F(z)\, dz.
\end{aligned}
$$
Using the decomposition (1.24) of $\Gamma^\omega$ into shear and compressional components, we obtain that
$$
I(x) = \mathcal{H}_p[I](x) + \mathcal{H}_s[I](x) \tag{8.7}
$$
with $\mathcal{H}_p[I]$ and $\mathcal{H}_s[I]$ being respectively defined by
$$
\mathcal{H}_p[I](x) := \frac{1}{4\pi} \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega^2 \int_{\partial\Omega} \Big[ \Gamma_p^\omega(x,y)\, \overline{\Gamma^\omega(y,z)} + \overline{\Gamma_p^\omega(x,y)}\, \Gamma^\omega(y,z) \Big]\, d\sigma(y)\, d\omega\, F(z)\, dz
$$
and
$$
\mathcal{H}_s[I](x) := \frac{1}{4\pi} \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega^2 \int_{\partial\Omega} \Big[ \Gamma_s^\omega(x,y)\, \overline{\Gamma^\omega(y,z)} + \overline{\Gamma_s^\omega(x,y)}\, \Gamma^\omega(y,z) \Big]\, d\sigma(y)\, d\omega\, F(z)\, dz.
$$
Finally, the integral formulation of the modified imaging functional $\tilde I$ defined by (8.6) reads
$$
\tilde I(x) = \frac{1}{2\pi}\, \Re e \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega^2 \int_{\partial\Omega} \big[ c_s \Gamma_s^\omega(x,y) + c_p \Gamma_p^\omega(x,y) \big]\, \overline{\Gamma^\omega(y,z)}\, d\sigma(y)\, d\omega\, F(z)\, dz. \tag{8.8}
$$
Assume that $\Omega$ is a ball of radius $R$ in $\mathbb{R}^d$ and that the support, $\operatorname{supp} F$, of $F$ is sufficiently localized at the center of $\Omega$ so that for all $x \in \operatorname{supp} F$ and for all $y \in \partial\Omega$,
$$
\widehat{y - x} = n(y) + o\Big( \frac{1}{|y - x|} \Big).
$$

Then, we have the following theorem.


Theorem 8.1. Let $x \in \Omega$ be sufficiently far from the boundary $\partial\Omega$ (with respect to wavelengths) and let $\tilde I$ be defined by (8.6). Then,
$$
\tilde I(x) \simeq F(x). \tag{8.9}
$$

Proof. From (8.8) we have
$$
\tilde I(x) = \frac{1}{4\pi} \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega^2 \int_{\partial\Omega} \Big[ \tilde\Gamma^\omega(x,y)\, \overline{\Gamma^\omega(y,z)} + \overline{\tilde\Gamma^\omega(x,y)}\, \Gamma^\omega(y,z) \Big]\, d\sigma(y)\, d\omega\, F(z)\, dz,
$$
where
$$
\tilde\Gamma^\omega(x,y) = c_s \Gamma_s^\omega(x,y) + c_p \Gamma_p^\omega(x,y).
$$
Proposition 1.15 allows us to neglect the coupling terms between the shear and compressional components of $\tilde\Gamma^\omega$ and $\Gamma^\omega$ in the surface integral. Proposition 1.14 then gives
$$
\begin{aligned}
\tilde I(x) &\simeq -\frac{1}{4\pi} \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega \int_{\partial\Omega} \bigg[ \frac{\partial \Gamma^\omega(x,y)}{\partial\nu(y)}\, \overline{\Gamma^\omega(y,z)} - \Gamma^\omega(x,y)\, \overline{\frac{\partial \Gamma^\omega(y,z)}{\partial\nu(y)}} \bigg]\, d\sigma(y)\, d\omega\, F(z)\, dz\\
&\simeq -\frac{1}{2\pi} \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega\, \Im m\big[\Gamma^\omega(x,z)\big]\, d\omega\, F(z)\, dz \simeq F(x).
\end{aligned}
$$
The last approximation results from the identity
$$
\frac{1}{2\pi} \int_{\mathbb{R}} i\omega\, \Gamma^\omega(x,z)\, d\omega = \delta_x(z)\, \mathbf{I},
$$
which comes from the integration of the time-dependent version of (1.23) between $t = 0^-$ and $t = 0^+$. ✷
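In practice, the weighted combination (8.6) is formed from a computed backpropagation image $I$. The following is a minimal numerical sketch, not taken from this chapter, of how $c_s\,\mathcal{H}_s[I] + c_p\,\mathcal{H}_p[I]$ can be evaluated for a vector field sampled on a periodic grid by a Fourier-domain Helmholtz projection; the grid, the Lamé parameters, and the treatment of the zero mode are illustrative assumptions.

```python
import numpy as np

def weighted_helmholtz(I, cs, cp, length=1.0):
    """Evaluate c_s*H_s[I] + c_p*H_p[I] (cf. (8.6)) for a 2D vector field I of
    shape (2, N, N) sampled on a periodic grid of side `length`.  H_p is the
    Fourier-domain projection onto curl-free (compressional) fields and H_s the
    projection onto divergence-free (shear) fields."""
    N = I.shape[-1]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=length / N)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                              # avoid 0/0; the zero mode is handled below
    I_hat = np.fft.fft2(I, axes=(-2, -1))
    k_dot_I = kx * I_hat[0] + ky * I_hat[1]
    Hp_hat = np.stack([kx, ky]) * k_dot_I / k2  # projection onto span{k}
    Hp_hat[:, 0, 0] = 0.0                       # the constant mean is assigned to the shear part
    Hs_hat = I_hat - Hp_hat
    return np.fft.ifft2(cs * Hs_hat + cp * Hp_hat, axes=(-2, -1)).real

# illustrative use with assumed Lame parameters
lam, mu = 2.0, 1.0
cs, cp = np.sqrt(mu), np.sqrt(lam + 2.0 * mu)
I = np.random.default_rng(0).standard_normal((2, 64, 64))
I_weighted = weighted_helmholtz(I, cs, cp)
```

On a bounded search domain one would instead compute the potentials $\psi_I$ and $\phi_I$ of (8.5) with appropriate boundary conditions; the periodic FFT projection above is only the simplest stand-in.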

Remark 8.2. If the unweighted time-reversal imaging functional $I$ is used instead of $\tilde I$, then crossed terms remain. Using the same arguments as above we find
$$
\begin{aligned}
I(x) &\simeq -\frac{c_s + c_p}{c_s c_p}\, \frac{1}{4\pi} \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega\, \Im m\big[(\Gamma_p^\omega + \Gamma_s^\omega)(x,z)\big]\, d\omega\, F(z)\, dz\\
&\quad - \frac{c_s - c_p}{c_s c_p}\, \frac{1}{4\pi} \int_{\mathbb{R}^d} \int_{\mathbb{R}} \omega\, \Im m\big[(\Gamma_p^\omega - \Gamma_s^\omega)(x,z)\big]\, d\omega\, F(z)\, dz\\
&\simeq \frac{c_s + c_p}{2 c_s c_p}\, F(x) - \frac{c_s - c_p}{2 c_s c_p} \int_{\mathbb{R}^d} B(x,z)\, F(z)\, dz,
\end{aligned} \tag{8.10}
$$
where
$$
B(x,z) = -\frac{1}{2\pi} \int_{\mathbb{R}} \omega\, \Im m\big[(\Gamma_p^\omega - \Gamma_s^\omega)(x,z)\big]\, d\omega \tag{8.11}
$$
is the operator that describes the error in the reconstruction of the source $F$ obtained with $I$ when $c_s \neq c_p$. In particular, the operator $B$ is not diagonal, which means that the reconstruction mixes the components of $F$.

8.2 TIME-REVERSAL ALGORITHM FOR VISCOELASTIC MEDIA

In this section, we investigate the inverse source problem in an elastic attenuating medium. We provide an efficient regularized time-reversal imaging algorithm which corrects for the leading-order terms of the attenuation effect.

Consider the viscoelastic wave equation in an open medium $\Omega \subset \mathbb{R}^d$ with $d = 2,3$, i.e.,
$$
\begin{cases}
\dfrac{\partial^2 u_a}{\partial t^2}(x,t) - \mathcal{L}_{\lambda,\mu} u_a(x,t) - \dfrac{\partial}{\partial t}\mathcal{L}_{\eta_\lambda,\eta_\mu} u_a(x,t) = \dfrac{d\delta_0(t)}{dt}\, F(x), & (x,t) \in \mathbb{R}^d \times \mathbb{R},\\[1ex]
u_a(x,t) = \dfrac{\partial u_a}{\partial t}(x,t) = 0, & t < 0,
\end{cases} \tag{8.12}
$$
where the viscosity parameters $\eta_\mu$ and $\eta_\lambda$ are positive constants and account for losses in the medium. Let $g_a(x,t) := u_a(x,t)$, $x \in \partial\Omega$, $t \in [0,T]$. Again, the inverse problem is to reconstruct $F(x)$, $x \in \Omega$, from $g_a(x,t)$, $x \in \partial\Omega$, $t \in [0,T]$.

For solving the problem, we use a time-reversal approach. The strategy of time-reversal is to consider the functional
$$
I_a(x) = \int_0^T v_{s,a}(x,T)\, ds, \tag{8.13}
$$
where $v_{s,a}$ should be the solution of the adjoint (time-reversed) viscoelastic wave equation, i.e.,
$$
\begin{cases}
\dfrac{\partial^2 v_{s,a}}{\partial t^2}(x,t) - \mathcal{L}_{\lambda,\mu} v_{s,a}(x,t) + \dfrac{\partial}{\partial t}\mathcal{L}_{\eta_\lambda,\eta_\mu} v_{s,a}(x,t) = \dfrac{d\delta_s(t)}{dt}\, g_a(x,T-s)\,\delta_{\partial\Omega}(x), & (x,t) \in \mathbb{R}^d \times \mathbb{R},\\[1ex]
v_{s,a}(x,t) = \dfrac{\partial v_{s,a}}{\partial t}(x,t) = 0, & x \in \mathbb{R}^d,\ t < s.
\end{cases} \tag{8.14}
$$
Further, the idea is to enhance image resolution using $\tilde I_a$, as in purely elastic media, where
$$
\tilde I_a = c_s \mathcal{H}_s[I_a] + c_p \mathcal{H}_p[I_a]. \tag{8.15}
$$


Unfortunately, the adjoint viscoelastic problem is severely ill-posed. Indeed, the frequency components are exponentially increasing due to the presence of the anti-damping term $(+\partial_t \mathcal{L}_{\eta_\lambda,\eta_\mu} v_{s,a})$, which induces instability. Therefore, we need to regularize the adjoint problem by truncating high-frequency components either in time or in space.

Let us introduce the Green tensor $\Gamma_a^\omega$ associated with the viscoelastic wave equation,
$$
\big(\mathcal{L}_{\lambda,\mu} - i\omega \mathcal{L}_{\eta_\lambda,\eta_\mu} + \omega^2\big)\, \Gamma_a^\omega(x,y) = \delta_y(x)\, \mathbf{I}, \qquad x,y \in \mathbb{R}^d, \tag{8.16}
$$
and let $\Gamma_{-a}^\omega$ be the adjoint viscoelastic Green tensor, that is, the solution to
$$
\big(\mathcal{L}_{\lambda,\mu} + i\omega \mathcal{L}_{\eta_\lambda,\eta_\mu} + \omega^2\big)\, \Gamma_{-a}^\omega(x,y) = \delta_y(x)\, \mathbf{I}, \qquad x,y \in \mathbb{R}^d. \tag{8.17}
$$

We introduce an approximation $v_{s,a,\rho}$ of the adjoint wave $v_{s,a}$ by
$$
v_{s,a,\rho}(x,t) = \frac{1}{2\pi} \int_{|\omega| \le \rho} \bigg( \int_{\partial\Omega} i\omega\, \Gamma_{-a}^\omega(x,y)\, g_a(y,T-s)\, d\sigma(y) \bigg) \exp\big(-i\omega(t-s)\big)\, d\omega, \tag{8.18}
$$
where $\rho \in \mathbb{R}^+$ is the cut-off parameter. The regularized time-reversal imaging functional defined by
$$
I_{a,\rho}(x) = \int_0^T v_{s,a,\rho}(x,T)\, ds \tag{8.19}
$$
can be written as
$$
I_{a,\rho}(x) = -\int_{\partial\Omega} \int_0^T \frac{\partial \Gamma_{-a}^\rho}{\partial t}(x,y,T-s)\, g_a(y,T-s)\, ds\, d\sigma(y), \tag{8.20}
$$
where
$$
\Gamma_{-a}^\rho(x,y,t) = \frac{1}{2\pi} \int_{|\omega| \le \rho} \Gamma_{-a}^\omega(x,y)\, \exp(-i\omega t)\, d\omega. \tag{8.21}
$$

Remark 8.3. Let $\mathcal{S}'$ be the space of tempered distributions, i.e., the dual of the Schwartz space $\mathcal{S}$ of rapidly decreasing functions [110]. The function $v_{s,a,\rho}(x,t)$ can be identified as the solution of the following wave equation:
$$
\frac{\partial^2 v_{s,a,\rho}}{\partial t^2}(x,t) - \mathcal{L}_{\lambda,\mu} v_{s,a,\rho}(x,t) + \frac{\partial}{\partial t}\mathcal{L}_{\eta_\lambda,\eta_\mu} v_{s,a,\rho}(x,t) = S_\rho\bigg[ \frac{d\delta_s(t)}{dt}\, g_a(x,T-s) \bigg]\, \delta_{\partial\Omega}(x), \tag{8.22}
$$
where the operator $S_\rho$ is defined on the space $\mathcal{S}'$ by
$$
S_\rho[\psi](t) = \frac{1}{2\pi} \int_{|\omega| \le \rho} \exp(-i\omega t)\, \hat\psi(\omega)\, d\omega, \tag{8.23}
$$
with
$$
\hat\psi(\omega) = \int_{\mathbb{R}} \psi(t)\, \exp(i\omega t)\, dt. \tag{8.24}
$$
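The cut-off operator $S_\rho$ is, in effect, a sharp low-pass filter. The sketch below is an assumption-laden illustration, not code from the book; it applies $S_\rho$ to a sampled signal with the FFT, and for real-valued signals the result does not depend on the sign convention chosen in (8.24). The sampling grid and cut-off value are arbitrary.

```python
import numpy as np

def S_rho(phi, t, rho):
    """Band-limit a signal sampled on the uniform grid t to |omega| <= rho,
    i.e., apply the operator S_rho of (8.23)."""
    dt = t[1] - t[0]
    omega = 2.0 * np.pi * np.fft.fftfreq(t.size, d=dt)
    phi_hat = np.fft.fft(phi)
    phi_hat[np.abs(omega) > rho] = 0.0
    return np.fft.ifft(phi_hat).real

# illustrative use: regularize noisy boundary data before re-emission
t = np.linspace(0.0, 10.0, 2048)
signal = np.exp(-40.0 * (t - 5.0) ** 2)
noisy = signal + 0.05 * np.random.default_rng(1).standard_normal(t.size)
band_limited = S_rho(noisy, t, rho=30.0)
```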


8.2.1 Green's Tensor in Viscoelastic Media

As in (1.24), we decompose $\Gamma_{\pm a}^\omega$ in the form
$$
\Gamma_{\pm a}^\omega = \Gamma_{\pm a,s}^\omega + \Gamma_{\pm a,p}^\omega, \tag{8.25}
$$
where $\Gamma_{\pm a,s}^\omega$ and $\Gamma_{\pm a,p}^\omega$ are respectively the fundamental solutions of
$$
\big(\mathcal{L}_{\lambda,\mu} \mp i\omega \mathcal{L}_{\eta_\lambda,\eta_\mu} + \omega^2\big)\, \Gamma_{\pm a,\alpha}^\omega(x,y) = \mathcal{H}^\alpha[\delta_y \mathbf{I}], \qquad \alpha = p,s. \tag{8.26}
$$
Let us also introduce the decomposition of the operator $\mathcal{L}_{\lambda,\mu}$ into shear and pressure components as
$$
\mathcal{L}_{\lambda,\mu} = \mathcal{L}^s_{\lambda,\mu} + \mathcal{L}^p_{\lambda,\mu} \quad \text{and} \quad \mathcal{L}_{\eta_\lambda,\eta_\mu} = \mathcal{L}^s_{\eta_\lambda,\eta_\mu} + \mathcal{L}^p_{\eta_\lambda,\eta_\mu}, \tag{8.27}
$$
where
$$
\mathcal{L}^s_{\lambda,\mu} u = c_s^2\, \big[\Delta u - \nabla(\nabla\cdot u)\big] \quad \text{and} \quad \mathcal{L}^p_{\lambda,\mu} u = c_p^2\, \nabla(\nabla\cdot u), \tag{8.28}
$$
and
$$
\mathcal{L}^s_{\eta_\lambda,\eta_\mu} u = \nu_s^2\, \big[\Delta u - \nabla(\nabla\cdot u)\big] \quad \text{and} \quad \mathcal{L}^p_{\eta_\lambda,\eta_\mu} u = \nu_p^2\, \nabla(\nabla\cdot u). \tag{8.29}
$$
Here, $\nu_s^2 = \eta_\mu$ and $\nu_p^2 = \eta_\lambda + 2\eta_\mu$. Therefore, the tensors $\Gamma_{\pm a,s}^\omega$ and $\Gamma_{\pm a,p}^\omega$ can also be seen as the solutions of
$$
\big(\mathcal{L}^\alpha_{\lambda,\mu} \mp i\omega \mathcal{L}^\alpha_{\eta_\lambda,\eta_\mu} + \omega^2\big)\, \Gamma_{\pm a,\alpha}^\omega(x,y) = \mathcal{H}^\alpha[\delta_y \mathbf{I}](x), \qquad \alpha = p,s. \tag{8.30}
$$

The corrected regularized time-reversal imaging functional defined by
$$
\tilde I_{a,\rho} = c_s \mathcal{H}_s[I_{a,\rho}] + c_p \mathcal{H}_p[I_{a,\rho}] \tag{8.31}
$$
is then given by
$$
\tilde I_{a,\rho}(x) = -\int_{\partial\Omega} \int_0^T \frac{\partial}{\partial t}\Big[ c_p \Gamma_{-a,p}^\rho(x,y,T-s) + c_s \Gamma_{-a,s}^\rho(x,y,T-s) \Big]\, g_a(y,T-s)\, ds\, d\sigma(y), \tag{8.32}
$$
where
$$
\Gamma_{-a,\alpha}^\rho(x,y,t) = \frac{1}{2\pi} \int_{|\omega| \le \rho} \Gamma_{-a,\alpha}^\omega(x,y)\, \exp(-i\omega t)\, d\omega, \qquad \alpha = p,s. \tag{8.33}
$$

In the next subsection we express the relationship between the data $g_a$ and the ideal measurements $g$ obtained in the non-attenuated case. By doing so, we prove with the help of a new version of the Helmholtz-Kirchhoff identities that a regularized image of the source $F$ can be obtained.

8.2.2 Attenuation Operator and Its Asymptotics

Recall that $u$ and $u_a$ are, respectively, the solutions of the wave equations
$$
\frac{\partial^2 u}{\partial t^2}(x,t) - \mathcal{L}_{\lambda,\mu} u(x,t) = \frac{d\delta_0(t)}{dt}\, F(x) \tag{8.34}
$$
and
$$
\frac{\partial^2 u_a}{\partial t^2}(x,t) - \mathcal{L}_{\lambda,\mu} u_a(x,t) - \frac{\partial}{\partial t}\mathcal{L}_{\eta_\lambda,\eta_\mu} u_a(x,t) = \frac{d\delta_0(t)}{dt}\, F(x), \tag{8.35}
$$
with the initial conditions
$$
u(x,t) = u_a(x,t) = \frac{\partial u}{\partial t}(x,t) = \frac{\partial u_a}{\partial t}(x,t) = 0, \qquad t < 0. \tag{8.36}
$$

We decompose $u$ and $u_a$ as
$$
u = u^s + u^p = \mathcal{H}_s[u] + \mathcal{H}_p[u] \quad \text{and} \quad u_a = u_a^s + u_a^p = \mathcal{H}_s[u_a] + \mathcal{H}_p[u_a]. \tag{8.37}
$$
The Fourier transforms $u_\omega^\alpha$ and $u_{a,\omega}^\alpha$ of the vector functions $u^\alpha$ and $u_a^\alpha$ are respectively solutions of
$$
\big(\omega^2 + \mathcal{L}^\alpha_{\lambda,\mu}\big) u_\omega^\alpha = i\omega\, \mathcal{H}^\alpha[F] \quad \text{and} \quad \big(\kappa_\alpha(\omega)^2 + \mathcal{L}^\alpha_{\lambda,\mu}\big) u_{a,\omega}^\alpha = i\,\frac{\kappa_\alpha(\omega)^2}{\omega}\, \mathcal{H}^\alpha[F] \tag{8.38}
$$
for $\alpha = p,s$, where
$$
\kappa_\alpha(\omega) = \frac{\omega}{\sqrt{1 - i\omega\, \nu_\alpha^2/c_\alpha^2}} \tag{8.39}
$$
is the principal square root. In particular, it implies that
$$
u_a^s = \mathcal{A}_{\nu_s^2/c_s^2}[u^s] \quad \text{and} \quad u_a^p = \mathcal{A}_{\nu_p^2/c_p^2}[u^p], \tag{8.40}
$$
where $\mathcal{A}_a$, for $a > 0$, is the attenuation operator
$$
\mathcal{A}_a[\phi](t) = \frac{1}{2\pi} \int_{\mathbb{R}} \frac{\kappa_a(\omega)}{\omega} \bigg( \int_{\mathbb{R}} \phi(s)\, \exp\{i\kappa_a(\omega) s\}\, ds \bigg) \exp\{-i\omega t\}\, d\omega, \tag{8.41}
$$
with
$$
\kappa_a(\omega) = \frac{\omega}{\sqrt{1 - i\omega a}}.
$$
We also define the operator $\mathcal{A}_{-a,\rho}$ by
$$
\mathcal{A}_{-a,\rho}[\phi](t) = \frac{1}{2\pi} \int_{\mathbb{R}^+} \phi(s) \bigg( \int_{|\omega| \le \rho} \frac{\kappa_{-a}(\omega)}{\omega}\, \exp\{i\kappa_{-a}(\omega) s\}\, \exp\{-i\omega t\}\, d\omega \bigg)\, ds, \tag{8.42}
$$
which is associated with $\kappa_{-a}(\omega) = \dfrac{\omega}{\sqrt{1 + i\omega a}}$. Its adjoint operator $\mathcal{A}^*_{-a,\rho}$ is given by
$$
\mathcal{A}^*_{-a,\rho}[\phi](t) = \frac{1}{2\pi} \int_{|\omega| \le \rho} \frac{\kappa_{-a}(\omega)}{\omega}\, \exp\{i\kappa_{-a}(\omega) t\} \bigg( \int_{\mathbb{R}^+} \phi(s)\, \exp\{-i\omega s\}\, ds \bigg)\, d\omega. \tag{8.43}
$$
Then, the following results hold; see Appendix B.

Proposition 8.4.


• Let $\phi(t) \in \mathcal{S}([0,\infty))$, with $\mathcal{S}$ being the Schwartz space. Then
$$
\mathcal{A}_a[\phi](t) = \phi(t) + \frac{a}{2}\,\big(t\phi'\big)'(t) + o(a) \quad \text{as } a \to 0. \tag{8.44}
$$

• Let $\phi(t) \in \mathcal{D}([0,\infty))$, where $\mathcal{D}([0,\infty))$ is the space of $C^\infty$-functions of compact support on $[0,\infty)$. Then, for all $\rho > 0$,
$$
\mathcal{A}^*_{-a,\rho}[\phi](t) = S_\rho[\phi](t) - \frac{a}{2}\, S_\rho\big[(t\phi')'\big](t) + o(a) \quad \text{as } a \to 0. \tag{8.45}
$$

• Let $\phi(t) \in \mathcal{D}([0,\infty))$. Then, for all $\rho > 0$,
$$
\mathcal{A}^*_{-a,\rho}\mathcal{A}_a[\phi](t) = S_\rho[\phi](t) + o(a) \quad \text{as } a \to 0. \tag{8.46}
$$
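The expansion (8.44) can be checked numerically. The following self-contained sketch, which is only illustrative (the pulse, the grids, and the value of $a$ are assumptions), discretizes $\mathcal{A}_a$ directly from (8.41) by quadrature and compares the result with the first-order prediction.

```python
import numpy as np

a = 0.01                                    # small attenuation parameter (assumed)
s = np.linspace(0.0, 12.0, 1201)            # grid for the inner integral
t = np.linspace(0.5, 11.5, 441)             # evaluation times
w = np.linspace(-20.0, 20.0, 1601)          # frequency grid
ds, dw = s[1] - s[0], w[1] - w[0]

phi_s = np.exp(-((s - 5.0) ** 2))           # smooth causal pulse
kappa = w / np.sqrt(1.0 - 1j * w * a)       # kappa_a(omega), principal square root
ratio = np.ones_like(kappa)
nz = w != 0.0
ratio[nz] = kappa[nz] / w[nz]               # kappa_a(omega)/omega, with limit 1 at omega = 0

# inner integral:  Phi(kappa(w)) = int phi(s) exp(i kappa(w) s) ds
Phi = (phi_s[None, :] * np.exp(1j * np.outer(kappa, s))).sum(axis=1) * ds
# outer integral:  A_a[phi](t) = (1/2pi) int (kappa/omega) Phi(kappa) exp(-i w t) dw
A_phi = ((ratio * Phi)[None, :] * np.exp(-1j * np.outer(t, w))).sum(axis=1) * dw
A_phi = A_phi.real / (2.0 * np.pi)

# first-order prediction of (8.44):  phi + (a/2) d/dt( t phi'(t) )
phi_t = np.exp(-((t - 5.0) ** 2))
pred = phi_t + 0.5 * a * np.gradient(t * np.gradient(phi_t, t), t)

print("max deviation from phi    :", np.abs(A_phi - phi_t).max())
print("max deviation from (8.44) :", np.abs(A_phi - pred).max())
```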

We extend the operators $\mathcal{A}_a$, $\mathcal{A}_{-a,\rho}$, and $\mathcal{A}^*_{-a,\rho}$ to tensors $\Gamma$, i.e., for all vectors $\mathbf{p} \in \mathbb{R}^d$,
$$
\mathcal{A}_a[\Gamma \mathbf{p}] = \mathcal{A}_a[\Gamma]\,\mathbf{p}, \qquad \mathcal{A}_{-a,\rho}[\Gamma \mathbf{p}] = \mathcal{A}_{-a,\rho}[\Gamma]\,\mathbf{p}, \qquad \mathcal{A}^*_{-a,\rho}[\Gamma \mathbf{p}] = \mathcal{A}^*_{-a,\rho}[\Gamma]\,\mathbf{p}.
$$
By the definition of the operators $\mathcal{A}_a$ and $\mathcal{A}_{-a,\rho}$, we have, for $\alpha = p,s$,
$$
\frac{\partial \Gamma_a^\alpha}{\partial t}(x,y,t) = \mathcal{A}_{\nu_\alpha^2/c_\alpha^2}\Big[ \frac{\partial \Gamma^\alpha}{\partial t}(x,y,\cdot) \Big](t), \tag{8.47}
$$
$$
\frac{\partial \Gamma_{-a,\rho}^\alpha}{\partial t}(x,y,t) = \mathcal{A}_{-\nu_\alpha^2/c_\alpha^2,\rho}\Big[ \frac{\partial \Gamma^\alpha}{\partial t}(x,y,\cdot) \Big](t). \tag{8.48}
$$

Recall that $g = u$ and $g_a = u_a$ on $\partial\Omega \times \mathbb{R}$. It then follows from (8.40) that, for $\alpha = p,s$,
$$
\mathcal{A}^*_{-\nu_\alpha^2/c_\alpha^2,\rho}\, \mathcal{A}_{\nu_\alpha^2/c_\alpha^2}[g_a^\alpha] = S_\rho[g^\alpha] + o(a), \tag{8.49}
$$
where
$$
g_a^\alpha = \mathcal{H}^\alpha[g_a], \qquad g^\alpha = \mathcal{H}^\alpha[g]. \tag{8.50}
$$

Identity (8.49) proves that $\mathcal{A}^*_{-\nu_\alpha^2/c_\alpha^2,\rho}$ is an approximate inverse of $\mathcal{A}_{\nu_\alpha^2/c_\alpha^2}$. Moreover, it plays a key role in showing that the regularized time-reversal algorithm provides a first-order correction of the attenuation effect.

8.2.3 Helmholtz-Kirchhoff Identities in Attenuating Media

In this subsection we derive new Helmholtz-Kirchhoff identities in elastic attenuating media. For doing so, let us introduce the conormal derivatives $\partial u/\partial\nu_a$ and $\partial u/\partial\nu_{-a}$ as follows:
$$
\frac{\partial u}{\partial\nu_{\pm a}} := \big(\lambda (\nabla\cdot u)\, n + 2\mu\, \nabla^s u\, n\big) \mp i\omega \big(\eta_\lambda (\nabla\cdot u)\, n + 2\eta_\mu\, \nabla^s u\, n\big). \tag{8.51}
$$
Note also that, for a tensor $\Gamma$, the conormal derivative $\partial\Gamma/\partial\nu_{\pm a}$ means that for all constant vectors $\mathbf{p}$,
$$
\bigg[\frac{\partial \Gamma}{\partial\nu_{\pm a}}\bigg]\,\mathbf{p} := \frac{\partial [\Gamma \mathbf{p}]}{\partial\nu_{\pm a}}. \tag{8.52}
$$
The following properties hold.


Proposition 8.5. For all $x, z \in \Omega$, we have
$$
\int_{\partial\Omega} \bigg[ \frac{\partial \Gamma_{-a,s}^\omega(x,y)}{\partial\nu_{-a}(y)}\, \Gamma_{a,p}^\omega(y,z) - \Gamma_{-a,s}^\omega(x,y)\, \frac{\partial \Gamma_{a,p}^\omega(y,z)}{\partial\nu_a(y)} \bigg]\, d\sigma(y) = 0. \tag{8.53}
$$

Proof. Note that
$$
\begin{aligned}
J &:= \int_{\partial\Omega} \bigg[ \frac{\partial \Gamma_{-a,s}^\omega(x,y)}{\partial\nu_{-a}}\, \Gamma_{a,p}^\omega(y,z) - \Gamma_{-a,s}^\omega(x,y)\, \frac{\partial \Gamma_{a,p}^\omega(y,z)}{\partial\nu_a} \bigg]\, d\sigma(y)\\
&= \int_{\partial\Omega} \bigg[ \frac{\partial \Gamma_{-a,s}^\omega(x,y)}{\partial\nu_{-a}}\, \Gamma_{a,p}^\omega(y,z) - \Gamma_{-a,s}^\omega(x,y)\, \frac{\partial \Gamma_{a,p}^\omega(y,z)}{\partial\nu_{-a}} \bigg]\, d\sigma(y)\\
&= \int_{\Omega} \big[ \mathcal{L}_{\lambda,\mu} \Gamma_{-a,s}^\omega(x,y) + i\omega \mathcal{L}_{\eta_\lambda,\eta_\mu} \Gamma_{-a,s}^\omega(x,y) \big]\, \Gamma_{a,p}^\omega(y,z)\, dy\\
&\qquad - \int_{\Omega} \Gamma_{-a,s}^\omega(x,y)\, \big[ \mathcal{L}_{\lambda,\mu} \Gamma_{a,p}^\omega(y,z) - i\omega \mathcal{L}_{\eta_\lambda,\eta_\mu} \Gamma_{a,p}^\omega(y,z) \big]\, dy.
\end{aligned}
$$
Since $\Gamma_{a,p}^\omega(x,y)$ and $\Gamma_{-a,s}^\omega(x,y)$ are solutions of equations (8.26) with $\alpha = p,s$, respectively, it follows that
$$
J = \big[\mathcal{H}^s[\delta_0 \mathbf{I}] * \Gamma_{a,p}^\omega(\cdot,z)\big](x) - \big[\Gamma_{-a,s}^\omega(x,\cdot) * \mathcal{H}^p[\delta_0 \mathbf{I}]\big](z).
$$
As in the proof of Proposition 1.12, one can show that
$$
\big[\mathcal{H}^s[\delta_0 \mathbf{I}] * \Gamma_{a,p}^\omega(\cdot,z)\big](x) = 0 \quad \text{and} \quad \big[\Gamma_{-a,s}^\omega(x,\cdot) * \mathcal{H}^p[\delta_0 \mathbf{I}]\big](z) = 0,
$$
which completes the proof of the proposition. ✷

We now give an approximation of the attenuating conormal derivative.

Proposition 8.6. If $n = \widehat{y - x}$, then for $\alpha = p,s$ we have
$$
\frac{\partial \Gamma_{\pm a,\alpha}^\omega(x,y)}{\partial\nu_{\pm a}} \simeq \frac{i c_\alpha\, \omega^2}{\kappa_\mp^\alpha(\omega)}\, \Gamma_{\pm a,\alpha}^\omega(x,y), \tag{8.54}
$$
where
$$
\kappa_\mp^\alpha(\omega) = \frac{\omega}{\sqrt{1 \mp i\omega\, \nu_\alpha^2/c_\alpha^2}}. \tag{8.55}
$$

Proof. Indeed, notice that
$$
\Gamma_{\pm a,\alpha}^\omega(x,y) = \bigg(\frac{\kappa_\mp^\alpha(\omega)}{\omega}\bigg)^{\!2}\, \Gamma_\alpha^{\kappa_\mp^\alpha(\omega)}(x,y), \qquad \alpha = p,s. \tag{8.56}
$$
Then, from Proposition 1.14, we obtain
$$
\frac{\partial \Gamma_{\pm a,\alpha}^\omega(x,y)}{\partial\nu_{\pm a}}
\simeq i c_\alpha\, \kappa_\mp^\alpha(\omega)\, \big(1 \mp i\omega\, \nu_\alpha^2/c_\alpha^2\big)\, \Gamma_{\pm a,\alpha}^\omega(x,y)
\simeq \frac{i c_\alpha\, \omega^2}{\kappa_\mp^\alpha(\omega)}\, \Gamma_{\pm a,\alpha}^\omega(x,y),
$$
since $1 \mp i\omega\, \nu_\alpha^2/c_\alpha^2 = \omega^2/\kappa_\mp^\alpha(\omega)^2$. This completes the proof. ✷

In particular, the following estimate holds as a direct consequence of Propositions 8.5 and 8.6.

Proposition 8.7. Let $\Omega \subset \mathbb{R}^d$ be a ball with large radius $R$. Then for all $x, z \in \Omega$ sufficiently far (with respect to wavelengths) from the boundary $\partial\Omega$, we have
$$
\Re e \int_{\partial\Omega} \Gamma_{-a,s}^\omega(x,y)\, \overline{\Gamma_{a,p}^\omega(y,z)}\, d\sigma(y) \simeq \Re e \int_{\partial\Omega} \Gamma_{-a,p}^\omega(x,y)\, \overline{\Gamma_{a,s}^\omega(y,z)}\, d\sigma(y) \simeq 0. \tag{8.57}
$$

8.2.4 Analysis of the Regularized Time-Reversal Imaging Algorithm

The aim of this subsection is to justify that the regularized time-reversal imaging functional $\tilde I_{a,\rho}$ provides a correction of the attenuation effect.

Theorem 8.8. The regularized time-reversal imaging functional $\tilde I_{a,\rho}$ defined by (8.32) satisfies
$$
\tilde I_{a,\rho}(x) = \tilde I_\rho(x) + o\big(\nu_s^2/c_s^2 + \nu_p^2/c_p^2\big),
$$
where
$$
\tilde I_\rho(x) := -\int_{\partial\Omega} \int_0^T \bigg[ c_s\, \frac{\partial}{\partial t}\Gamma_s(x,y,t) + c_p\, \frac{\partial}{\partial t}\Gamma_p(x,y,t) \bigg]\, S_\rho[g(y,\cdot)](t)\, dt\, d\sigma(y),
$$

with $S_\rho$ being defined by (8.23).

Proof. We can decompose the functional $\tilde I_{a,\rho}$ as follows:
$$
\begin{aligned}
\tilde I_{a,\rho}(x) &= -\int_{\partial\Omega} \int_0^T \frac{\partial}{\partial t}\Big[ c_p \Gamma_{-a,p}^\rho(x,y,T-t) + c_s \Gamma_{-a,s}^\rho(x,y,T-t) \Big]\, g_a(y,T-t)\, dt\, d\sigma(y)\\
&= -\int_{\partial\Omega} \int_0^T \frac{\partial}{\partial t}\Big[ c_p \Gamma_{-a,p}^\rho(x,y,t) + c_s \Gamma_{-a,s}^\rho(x,y,t) \Big]\, \big[g_a^s(y,t) + g_a^p(y,t)\big]\, dt\, d\sigma(y)\\
&= I_{a,\rho}^{ss}(x) + I_{a,\rho}^{sp}(x) + I_{a,\rho}^{ps}(x) + I_{a,\rho}^{pp}(x),
\end{aligned}
$$
where
$$
I_{a,\rho}^{\alpha\beta}(x) = -\int_{\partial\Omega} \int_0^T \frac{\partial}{\partial t}\Big[ c_\alpha \Gamma_{-a,\alpha}^\rho(x,y,t) \Big]\, g_a^\beta(y,t)\, dt\, d\sigma(y), \qquad \alpha, \beta = p,s.
$$


Similarly, we can decompose the functional $\tilde I_\rho$ as
$$
\tilde I_\rho(x) = I_\rho^{ss}(x) + I_\rho^{sp}(x) + I_\rho^{ps}(x) + I_\rho^{pp}(x),
$$
with
$$
I_\rho^{\alpha\beta}(x) = -\int_{\partial\Omega} \int_0^T \frac{\partial}{\partial t}\big[ c_\alpha \Gamma_\alpha(x,y,t) \big]\, S_\rho[g^\beta(y,\cdot)](t)\, dt\, d\sigma(y), \qquad \alpha, \beta = p,s.
$$
The first term $I_{a,\rho}^{ss}(x)$ satisfies
$$
\begin{aligned}
I_{a,\rho}^{ss}(x) &= -\int_{\partial\Omega} \int_0^T \mathcal{A}_{-\nu_s^2/c_s^2,\rho}\Big[ \frac{\partial}{\partial t}\big[c_s \Gamma_s(x,y,\cdot)\big] \Big](t)\, \mathcal{A}_{\nu_s^2/c_s^2}[g^s(y,\cdot)](t)\, dt\, d\sigma(y)\\
&= -\int_{\partial\Omega} \int_0^T \frac{\partial}{\partial t}\big[c_s \Gamma_s(x,y,t)\big]\, \mathcal{A}^*_{-\nu_s^2/c_s^2,\rho}\,\mathcal{A}_{\nu_s^2/c_s^2}[g^s(y,\cdot)](t)\, dt\, d\sigma(y)\\
&= -\int_{\partial\Omega} \int_0^T \frac{\partial}{\partial t}\big[c_s \Gamma_s(x,y,t)\big]\, S_\rho[g^s(y,\cdot)](t)\, dt\, d\sigma(y) + o\big(\nu_s^2/c_s^2\big)\\
&= I_\rho^{ss}(x) + o\big(\nu_s^2/c_s^2\big),
\end{aligned}
$$
by using Proposition 8.4. Similarly, we get
$$
I_{a,\rho}^{pp}(x) = I_\rho^{pp}(x) + o\big(\nu_p^2/c_p^2\big).
$$

Ω′

|∇µ| dx,

with ηR ′ being the regularization parameter. The derivative of the regularization term Ω′ |∇µ| dx is given by ∇µ −∇ · ( ). (10.13) |∇µ|

Setting the values of η ′ is a sensitive task. If η ′ is too high, the numerical solution of the optimization problem will be homogeneous and gives no information about the medium, while if there is not enough regularization, the reconstruction becomes unstable; see [18]. It is also worth mentioning that (10.13) should be either regularized as follows ∆µ (Dµ∇µ) · ∇µ −p + , 2 ′′ 2 (|∇µ|2 + (η ′′ )2 )3/2 |∇µ| + (η )

where D is defined by (1.26) and η ′′ is a small regularization parameter, or computed using dual or primal-dual approaches; see [66, 67]. The described optimal control approach can be interpreted as a Landweber iteration scheme. Let F map the shear distribution µ onto the displacement field u. Let µtrue denote the true shear distribution. One can prove that J ′ [µ] = ℜe (F ′ [µ])∗ (F [µ] − F [µtrue ]), where (F ′ [µ])∗ denotes the adjoint of the derivative of F with respect to µ. Hence, (10.12) is nothing else than the Landweber iteration scheme µn+1 (x) = µn (x) − ηℜe (F ′ [µ])∗ (F [µ] − F [µtrue ])(x), 10.4

x ∈ Ω′ ,

n ≥ 1. (10.14)

NUMERICAL ILLUSTRATIONS

We present several examples of the shear modulus distribution reconstruction. For computations, the background medium $\Omega$ is assumed to be the unit disk centered at the origin and the window $\Omega'$ is equal to $\Omega$. The density distribution is $\rho = 1$ in $\Omega$ and the frequency is $\omega = 10^2$. Figure 10.1 shows shear modulus distribution reconstructions using the optimal control scheme (10.12) after 100 iterations.

CONCLUDING REMARKS

In this chapter we have proposed an algorithm for imaging small elastic inclusions from internal measurements of the displacement field. The algorithm is based on an inner expansion of the perturbations due to the presence of the inclusion. It avoids the differentiation of the noisy measured displacement fields. It can be extended to anisotropic and viscoelastic systems. For shear modulus distributions, we have presented a gradient descent scheme and discussed its regularization. It would be very interesting to prove the local convergence of the optimal control scheme (10.12)



Figure 10.1: The left columns are the true distributions while the right columns are the reconstructed ones.


or, equivalently, of the Landweber iteration (10.14). Based on the recent results in [109], it suffices to establish a Hölder-type stability estimate for F ′ [µ]. Internal measurements at multiple frequencies may be needed in order to ensure such an estimate.

Chapter Eleven

Vibration Testing

Vibration testing aims to identify inclusions, cracks, or shape changes in a structure by measuring its modal characteristics. The measured eigenparameters are related to the location, orientation, and size of the defect or damage. Considerable effort has been spent in designing efficient and robust reconstruction methods; see, for instance, [96, 127, 153, 156, 157]. In this chapter a rigorous asymptotic framework is developed, and a new technique for identifying the location of small-size damage in an elastic structure from perturbations in its modal characteristics is presented.

Let $\Omega$ be an elastic medium in $\mathbb{R}^3$ with a connected Lipschitz boundary whose Lamé constants are $\lambda, \mu$. We first consider the eigenvalue problem for the Lamé system of linear elasticity:
$$
\mathcal{L}_{\lambda,\mu} u + \kappa u = \mu \Delta u + (\lambda + \mu)\nabla\nabla\cdot u + \kappa u = 0 \quad \text{in } \Omega, \tag{11.1}
$$

with the boundary condition ∂u/∂ν = 0 on ∂Ω. Suppose that Ω contains a small inclusion D of the form D = ǫB + z, where B is a bounded Lipschitz domain containing the origin, ǫ is a small parameter, and z indicates the location of the inclusion. Due to the presence of the inclusion D, the eigenvalues of the domain Ω are perturbed. Our first goal in this chapter is to give an asymptotic expansion for the perturbation of eigenvalues due to the presence of the inclusion. Let κ1 ≤ κ2 ≤ . . . be the Neumann eigenvalues of (11.1) and let κǫ1 ≤ κǫ2 ≤ . . . be the Neumann eigenvalues in the presence of the inclusion. The first result of this chapter is an asymptotic expansion of κǫj − κj as ǫ → 0. The main ingredients in deriving the results of this chapter are the integral equations and the theory of meromorphic operator-valued functions. Using integral representations of solutions to the harmonic oscillatory linear elastic equation, we reduce this problem to the study of characteristic values of integral operators in the complex planes. The generalized argument principle and Rouché’s theorem (see Appendix C) are combined with asymptotic analysis of integral kernels to obtain an asymptotic expansion for eigenvalues. The elastic inclusions we deal with are of three kinds: holes, hard inclusions, and soft inclusions. As shown in Chapter 2, a hole D is characterized by a homogeneous Neumann boundary condition on its boundary ∂D. For the eigenvalue problem, it can be easily seen that a hard inclusion D is characterized by the homogeneous Dirichlet boundary condition on ∂D. A soft inclusion is characterized by the transmission conditions on its boundary. In all of these cases, we will explicitly calculate the leading-order term. It turns out that the leading-order term for the hard inclusion is of order the characteristic size of the inclusion, and is expressed in terms of the eigenvectors and a quantity related to the capacity of the inclusion. On the other hand, the leading-order term in the case of soft inclusion or a hole is of order the volume of the inclusion, and is expressed in terms of the eigenvectors and the elastic moment tensor. The leading-order terms can be used for identifying the


small inclusions by taking eigenvalue measurements. We propose a new algorithm for locating the inclusion and reconstructing its characteristics. In the case of a crack, the same methodology can be developed.

In the second part of this chapter, we consider eigenvalue perturbations that are due to deformations in the shape of an inclusion. We derive a first-order asymptotic expansion and use it in order to reconstruct features of the changes. We also consider the reconstruction problem using incomplete data. We present some numerical results to illustrate the resolution and the stability of the proposed algorithm. Most of the results of this chapter are from [10, 30, 32].

This chapter is organized as follows. Section 11.1 is devoted to imaging small-volume defects from eigenparameter changes. The derivations of the asymptotic formulas are based on the generalized argument principle and Rouché's theorem together with asymptotic analysis of integral kernels. In Section 11.2 we consider the problem of identifying shape deformations or damages from changes in modal characteristics. The proof of the asymptotic formula for eigenvalue perturbations makes use of Osborn's theorem and the regularity result in Subsection 1.7. In Section 11.3 we address the splitting problem in the evaluation of the perturbations in the eigenvalues of the Lamé equations that are due to the presence of a small inclusion. In Section 11.4 we develop efficient algorithms to detect small inclusions or recover interface changes from variations of modal parameters. In Section 11.5 we present several two-dimensional examples for the interface reconstruction problem.

11.1 SMALL-VOLUME EXPANSIONS OF THE PERTURBATIONS IN THE EIGENVALUES

In this section, we confine our attention to the eigenvalues of the Neumann boundary value problem. The Dirichlet boundary case can be treated in a similar way with only minor modifications of the techniques presented here. We also confine our attention to the three-dimensional case. The two-dimensional case can be dealt with in an almost identical way.

11.1.1 Hard Inclusion Case

Let $0 \le \kappa_1 \le \kappa_2 \le \cdots$ be the eigenvalues of $-\mathcal{L}_{\lambda,\mu}$ in $\Omega$ with the Neumann condition on $\partial\Omega$. Suppose that $\Omega$ contains a small hard inclusion $D$ of the form $D = \epsilon B + z$, where $B$ is a bounded Lipschitz domain in $\mathbb{R}^3$ containing the origin. Let $\kappa^\epsilon$ be a Neumann eigenvalue of $-\mathcal{L}_{\lambda,\mu}$ in the presence of the inclusion, and let $u_\epsilon$ be a corresponding eigenvector, i.e.,
$$
\begin{cases}
\mathcal{L}_{\lambda,\mu} u_\epsilon + \kappa^\epsilon u_\epsilon = 0 & \text{in } \Omega \setminus D,\\[0.5ex]
\dfrac{\partial u_\epsilon}{\partial\nu} = 0 & \text{on } \partial\Omega,\\[0.5ex]
u_\epsilon = 0 & \text{on } \partial D.
\end{cases} \tag{11.2}
$$
Let $\psi_\epsilon := u_\epsilon|_{\partial\Omega}$, $\phi_\epsilon := \partial u_\epsilon/\partial\nu|_{\partial D}$, and $\omega_\epsilon = \sqrt{\kappa^\epsilon}$. By Green's formula, one can see that the solution $u_\epsilon$ of (11.2) can be represented as
$$
u_\epsilon(x) = \mathcal{D}_\Omega^{\omega_\epsilon}[\psi_\epsilon](x) + \mathcal{S}_D^{\omega_\epsilon}[\phi_\epsilon](x), \tag{11.3}
$$
where $\psi_\epsilon$ and $\phi_\epsilon$ satisfy the system of integral equations
$$
\begin{cases}
\big(\tfrac12 I - \mathcal{K}_\Omega^{\omega_\epsilon}\big)[\psi_\epsilon] - \mathcal{S}_D^{\omega_\epsilon}[\phi_\epsilon] = 0 & \text{on } \partial\Omega,\\[0.5ex]
\mathcal{D}_\Omega^{\omega_\epsilon}[\psi_\epsilon] + \mathcal{S}_D^{\omega_\epsilon}[\phi_\epsilon] = 0 & \text{on } \partial D.
\end{cases} \tag{11.4}
$$

Here, $\rho = 1$ in the definitions of the layer potentials. Conversely, if a nonzero pair $(\psi_\epsilon, \phi_\epsilon) \in L^2(\partial\Omega)^3 \times L^2(\partial D)^3$ satisfies (11.4), then $u_\epsilon$ defined by (11.3) is a solution to (11.2). We can assume that $\omega_\epsilon^2$ is not a Dirichlet eigenvalue for $-\mathcal{L}_{\lambda,\mu}$ on $D$, since the Dirichlet eigenvalues for $D$ tend to $+\infty$ as $\epsilon$ tends to $0$. In this case, the resulting $u_\epsilon$ is also nonzero. Therefore, the square roots of the eigenvalues correspond exactly to the characteristic values of the following operator-valued function:
$$
\omega \mapsto \begin{pmatrix} \tfrac12 I - \mathcal{K}_\Omega^\omega & -\mathcal{S}_D^\omega \\[0.5ex] \mathcal{D}_\Omega^\omega & \mathcal{S}_D^\omega \end{pmatrix}.
$$
In order to derive an asymptotic expansion for $\kappa^\epsilon$, we begin by establishing the following.

Lemma 11.1. Let $\psi \in L^2(\partial\Omega)^3$ and $\phi \in L^2(\partial D)^3$. If $\tilde\phi(x) = \epsilon\,\phi(\epsilon x + z)$ for $x \in \partial B$, then we have
$$
\mathcal{S}_D^\omega[\phi](x) = \sum_{n=0}^{+\infty} (-1)^n \epsilon^{n+1} \sum_{|\alpha|=n} \frac{1}{\alpha!}\, \partial^\alpha \Gamma^\omega(x-z) \int_{\partial B} y^\alpha\, \tilde\phi(y)\, d\sigma(y), \qquad x \in \partial\Omega, \tag{11.5}
$$
$$
\mathcal{D}_\Omega^\omega[\psi](\epsilon x + z) = \sum_{n=0}^{+\infty} \epsilon^n \sum_{|\alpha|=n} \frac{1}{\alpha!}\, \partial^\alpha\big(\mathcal{D}_\Omega^\omega[\psi]\big)(z)\, x^\alpha, \qquad x \in \partial B, \tag{11.6}
$$
and, for $x \in \partial B$ and $i = 1,2,3$,
$$
\begin{aligned}
\mathcal{S}_D^\omega[\phi]_i(\epsilon x + z) = -\frac{1}{4\pi} \sum_{j=1}^{3} \sum_{n=0}^{+\infty} \frac{i^n (\epsilon\omega)^n}{(n+2)\, n!} \bigg[ & \Big( \frac{n+1}{c_s^{n+2}} + \frac{1}{c_p^{n+2}} \Big) \delta_{ij} \int_{\partial B} |x-y|^{n-1}\, \tilde\phi_j(y)\, d\sigma(y)\\
& - \Big( \frac{n-1}{c_s^{n+2}} - \frac{n-1}{c_p^{n+2}} \Big) \int_{\partial B} |x-y|^{n-3} (x_i - y_i)(x_j - y_j)\, \tilde\phi_j(y)\, d\sigma(y) \bigg],
\end{aligned} \tag{11.7}
$$
where $\mathcal{S}_D^\omega[\phi]_i$ denotes the $i$-th component of $\mathcal{S}_D^\omega[\phi]$.

Proof. The series (11.6) is exactly a Taylor series expansion of $\mathcal{D}_\Omega^\omega[\psi](\epsilon x + z)$ at $z$. By a change of variables, we have that, for any $x \in \partial\Omega$,
$$
\mathcal{S}_D^\omega[\phi](x) = \int_{\partial D} \Gamma^\omega(x - \tilde y)\, \phi(\tilde y)\, d\sigma(\tilde y) = \epsilon \int_{\partial B} \Gamma^\omega(x - z - \epsilon y)\, \tilde\phi(y)\, d\sigma(y).
$$
Using the Taylor series expansion of $\Gamma^\omega(x - z - \epsilon y)$ at $x - z$, we readily get (11.5). Similarly, (11.7) immediately follows from a change of variables and (1.32). This completes the proof. ✷


By Lemma 11.1, (11.4) can be written in the form Aω ǫ

  ψǫ eǫ = 0, φ

eǫ (x) = ǫφǫ (ǫx + z), where for φ

 0 , SB



0

  Aω =  1 X 1 n  ω ∂ α DΩ [·](z)xα ωn α!

(−1)n ωn

(ωǫ)n Aω n

n=0

1 ω I − KΩ ω  A0 = 2 ω DΩ [·](z)

and for n = 1, 2, . . ., 

+∞ X

Aω ǫ =

X

|α|=n−1

1 α ω ∂ Γ (x − z) α! Sn

|α|=n

Here the operator Sn is given by

ei = Sn [φ]

3 X j=1

Sn



ij

ej ] [φ

Z

∂B

y α · dσ(y)



  . 

(11.8)

e ∈ L2 (∂B)3 and i = 1, 2, 3 with, for x ∈ ∂B, for φ

Z n + 1 in 1  1 ej (y)dσ(y) + δ |x − y|n−1 φ ij 4π (n + 2)n! cn+2 cn+2 s p ∂B Z 1 in (n − 1)  1 1  ej (y)dσ(y). + − |x − y|n−3 (xi − yi )(xj − yj )φ 4π (n + 2)n! cn+2 cn+2 s p ∂B (11.9) The following full asymptotic formula is from [32].

Sn



ej ](x) = − [φ ij

Theorem 11.2 (Eigenvalue perturbations). Let κj be a simple Neumann eigenǫ value for −Lλ,µ in Ω without the inclusion. p the inclusion and let κj be that with √ √ Let ω0 := κj and ωǫ := κǫj . Let Vj be a small neighborhood of κj such that ω0 is the only characteristic value of ω 7→ Aω 0 in Vj . Then we have ωǫ − ω0 =

Z +∞ +∞ 1 X1X n ǫ tr Bn,p (ω)dω, 2iπ p=1 p n=p ∂Vj

(11.10)

X

(11.11)

where Bn,p (ω) = (−1)p

n1 +...+np =n ni ≥1

−1 ω −1 ω (Aω An1 . . . (Aω Anp ω n . 0) 0)

It should be emphasized that (11.10) is a complete asymptotic expansion. All the terms can be computed even though the computation requires some endeavor. Here, let us compute the leading-order term as an example.



Theorem 11.3 (Leading-order term). Let κj be a simple Neumann eigenvalue for −Lλ,µ in Ω without the inclusion, let uj be the corresponding eigenvector such that kuj kL2 (Ω) = 1, and let κǫj be the Neumann eigenvalue with the inclusion. Then we have Z  −1 ǫ t SB [I] dσ uj (z) + O(ǫ2 ). (11.12) κj − κj = −ǫuj (z) ∂B

Proof. By virtue of Theorem 11.2, Z ǫ −1 ω ωǫ − ω0 = − tr (Aω A1 ωdω + O(ǫ2 ). 0) 2iπ ∂Vj Moreover, since 

 A1 (ω) =  

and

−1 (Aω 0)

it follows that ωǫ − ω0 = −

0 1 ω ∇DΩ [·](z) · x ω

Z  1 − Γω (x − z) · dσ  ω ∂B  Z  C · dσ ∂B



1 ω −1 ( I − KΩ ) 2  =  −1 ω −1 ω 1 −SB [I] DΩ ) [·] (z) ( I − KΩ 2

ǫ tr 2iπ

Z

∂Vj

0 −1 SB



 ,

 −1 ω 1 ω −1 ω SB [I] DΩ ( I − KΩ ) [Γ (· − z)] (z) 2

Z

∂B

· dσdω

+ O(ǫ2 ) Z Z ǫ −1 ω 2 = tr SB [I] dσ DΩ [Nω Ω (·, z)](z) dω + O(ǫ ). 2iπ ∂B ∂Vj By Green’s formula, the following relation holds: Z ω DΩ [uj ](z) = uj (z) + (κj − ω 2 ) Γω (z − y)uj (y)dy.

(11.13)



Combining (11.13) with (1.83), we obtain Z Z 1 1 1 ω ω t D [N (·, z)](z)dω = uj (z)uj (z) dω 2iπ ∂Vj Ω Ω 2iπ κ − ω2 ∂Vj j −1 = √ uj (z)uj (z)t , 2 κj and therefore, κǫj

− κj = −ǫ tr

"Z

∂B

−1 SB [I] dσ



t

uj (z)uj (z)



#

+ O(ǫ2 ),

(11.14)



or equivalently, κǫj

t

− κj = −ǫuj (z)

Z

∂B

 −1 SB [I]dσ uj (z) + O(ǫ2 ),

which completes the proof of the theorem.



If B is a ball with center at zero, then (11.12) takes a particularly simple form. It is easy to see from the symmetry of the ball and (1.33) that for i, j = 1, 2, 3, Z Z Z γ1 δij 1 γ2 (xi − yi )(xj − yj ) Γij (x − y)dσ(y) = − dσ(y) − dσ(y) 4π ∂B |x − y| 4π ∂B |x − y|3 ∂B Z Z 1 γ2 δij (xi − yi )2 γ1 δij dσ(y) − dσ(y). =− 4π ∂B |x − y| 4π ∂B |x − y|3 Once again, by the symmetry property, we have Z Z (xi − yi )2 1 1 dσ(y) = dσ(y), 3 3 ∂B |x − y| ∂B |x − y| and hence Z

∂B

Γij (x − y)dσ(y) = −δij

Since

1 4π

Z

∂B

γ2  1 γ1 + 3 4π

Z

∂B

1 dσ(y). |x − y|

1 dσ(y) = r, |x − y|

where r is the radius of B, for all x ∈ B, it follows from (1.34) that Z 3µ + 2λ Γij (x − y)dσ(y) = −rδij , 3µ(2µ + λ) ∂B which in turn implies that −1 SB [I] = −

3µ(2µ + λ) I. r(3µ + 2λ)

We then conclude from (11.12) that the following corollary holds. Corollary 11.4. Suppose that B is the unit ball. Then, κǫj − κj = ǫ

3µ(2µ + λ) 4π|uj (z)|2 + O(ǫ2 ). (3µ + 2λ)

(11.15)

Note that −4π is the capacity of the unit ball B. 11.1.2

Transmission Problem

We now investigate the perturbation of eigenvalues due to the presence of a small soft elastic inclusion. Suppose that the elastic medium Ω contains a small inclusion e µ e D of the form D = ǫB +z, whose Lamé constants are λ, e satisfying (λ− λ)(µ− µ e) ≥ λ,µ ǫ e µ 0 and 0 < λ, e < +∞. Let κl be an eigenvalue of −L and let κl be the perturbed



eigenvalue in the presence of the inclusion. Then the eigenvector uǫ corresponding to the (simple) eigenvalue κǫl is the solution to (1.55) with ωǫ2 = κǫl :   Lλ,µ uǫ + ω 2 uǫ = 0 in Ω \ D,     e λ,e µ 2   L uǫ + ω uǫ = 0 in D,     ∂uǫ =g on ∂Ω, (11.16) ∂ν     on ∂D, uǫ + − uǫ − = 0     ∂uǫ ∂u  ǫ  − = 0 on ∂D. e − ∂ν + ∂ν

We may assume as before that ωǫ2 is not a Dirichlet eigenvalue for −Lλ,µ on D. By Theorem 1.8, uǫ can be represented as ( ω ω DΩ [ψǫ ] + SD [φǫ ] in Ω \ D, uǫ = (11.17) ω Se [θǫ ] in D, D

where ψǫ , φǫ , and θǫ satisfy the system of integral equations  1 ω ω  ( I − KΩ )[ψǫ ] − SD [φǫ ] = 0  2   ω ω ω DΩ [ψǫ ] + SD [φǫ ] − SeD [θǫ ] = 0   ω ω ω  [φǫ ]) ∂(SeD [θǫ ]) [ψǫ ]) ∂(SD  ∂(DΩ  + − =0 e ∂ν ∂ν ∂ν + −

on ∂Ω, on ∂D,

(11.18)

on ∂D.

Conversely, (ψǫ , φǫ , θǫ ) ∈ L2 (∂Ω)3 × L2 (∂D)3 × L2 (∂D)3 satisfying (11.18) yields the solution to (11.16) via the representation formula (11.17).

eǫ (x) = ǫφǫ (ǫx + z) and θeǫ (x) = ǫθǫ (ǫx + z). Then using Lemma 11.1, Let φ (11.18) can be written as follows: 

where

 ψǫ eǫ  = 0, φ Aω ǫ θeǫ 

1 ω  2 I − KΩ   ω Aω 0 =  DΩ [·](z)  0

Aω ǫ =

+∞ X

(ωǫ)n Aω n,

n=0

0

0

SB 1 I + (KB )∗ 2

−SeB

1 e B )∗ I − (K 2



  ,  



and for n = 1, 2, . . ., Aω n is equal to



0

   1 X 1  ω xα ∂ α DΩ [·](z)  α!  ωn |α|=n   1 X 1 ∂(xα I) ω  ∂ α DΩ [·](z) ωn α! ∂ν

(−1)n ωn

X

|α|=n−1

1 α ω ∂ Γ (x − z) α!

Z

Sn Kn

|α|=n

∂B

y α · dσ(y)

Theorem 11.5. Let κl be a simple Neumann eigenvalue for −Lλ,µ in Ω without the inclusion, let ul be the corresponding eigenvector such that kul kL2 (Ω) = 1, and let κǫl be the Neumann eigenvalue with the inclusion. Then we have 3 X

mijpq ∂i (ul )j (z)∂p (ul )q (z) + O(ǫ4 ),

(11.19)

i,j,p,q=1

where (ul )j denotes the j-th component of ul and M = (mijpq ) is the elastic moment e and µ tensor associated with B and the elastic parameters λ e.

As in Chapter 3, we note that because of the symmetry of the elastic moment tensor mijpq = mjipq = mijqp (see (3.6)), (11.19) can be written in a more compact form using the standard notation of the contraction and the strain for tensors: κǫl − κl = ǫ3 ∇s ul (z) : M∇s ul (z) + O(ǫ4 ).



    e − Sn  .    e n −K

Here Sn is the operator from L2 (∂B)3 into H 1 (∂B)3 defined by (11.8) and (11.9) e n and K e n are defined in exactly the same way and Kn = ∂Sn /∂ν. The operators S q p e + 2e with cs and cp replaced by e cT = µ e and e cL = λ µ. With this notation, Theorem 11.2 remains valid. We now derive from (11.10) the leading-order term. Unlike the hard inclusion case, the leading-order term in this case turns out to be of order ǫ3 , the order of the volume of the inclusion. Moreover, it is determined by the elastic moment tensor associated with the inclusion given by (3.2). We now state the following theorem.

κǫl − κl = ǫ3

0

(11.20)

Moreover, it is worth mentioning that, according to Subsection 3.1.1, if the e ≥ λ inclusion is harder (softer, resp.) than the background, i.e., µ e > µ and λ e (e µ < µ and λ ≤ λ, resp.), then M is positive (negative, resp.) definite, and hence κǫl > κl (κǫl < κl , resp) provided that ǫ is small enough and ∇s ul (z) 6= 0. Formula (11.20) makes it possible to deduce the sign of the variation of a given eigenvalue in terms of the elastic parameters of the inclusion. √ Proof of Theorem 11.5. Let Vl be a small neighborhood of κl such that √ ω ω0 := κl is the only characteristic value of ω 7→ A0 in Vl . We first observe from (11.10) that the ǫ-order term is given by Z 1 −1 ω − tr (Aω A1 ωdω, (11.21) 0) 2iπ ∂Vl

176

CHAPTER 11

the ǫ2 -order term is given by  Z   1 1 ω −1 ω ω −1 ω 2 ω 2 dω, tr − (A0 ) A2 + (A0 ) A1 2iπ 2 ∂Vl

(11.22)

and the ǫ3 -order term is given by Z  1 −1 ω −1 ω −1 ω tr − (Aω A3 + (Aω A1 (Aω A2 0) 0) 0) 2iπ ∂Vl   1 ω −1 ω 3 ω 3 dω. − (A0 ) A1 3

(11.23)

Introduce 

1 2

SB I + (KB )∗

−SeB

1 e B )∗ I − (K 2

−1 

=



 A2 , A4

A1 A3

(11.24)

where the invertibility is guaranteed by Theorem 1.7. As another direct consequence of this theorem, we also have that A1 [f ], A2 [g] ∈ L2Ψ (∂B)

(11.25)

for any f ∈ H 1 (∂B)3 and g ∈ L2Ψ (∂B) and A1 [f ] = 0 for any f ∈ Ψ. Explicit calculations show that Aω 0

Aω 0

−1



−1

(11.26)

takes the following form:

 1 ω −1 I − KΩ  2  0 =   1 −1 ω ω −1 SeB [I] DΩ ) [·] (z) ( I − KΩ 2

0 A1 A3

Since Ai , i = 1, 2, 3, 4, are independent of ω, we have Z −1 ω n tr (Aω An ω dω = 0 0)

0



  A2  .  A4

(11.27)

∂Vl

for any integer n. From (11.25) and (11.26) we readily find that Aω 0 

0  h ∂(xα I) i X α ω ∂ α DΩ [·](z) A1 [x I] + A2   ∂ν |α|=1 X  h ∂(xα I) i  ω A3 [xα I] + A4 [·](z) ∂ α DΩ ∂ν |α|=1

−1

Aω 1 ω is equal to

T1 0

T2

0



   ,  e1 −ωA3 S 0

177

VIBRATION TESTING

where Z  1 ω −1 ω I − KΩ Γ (x − z) · dσ(y), 2 ∂B Z h 1 i  −1 ω ω −1 ω e T2 = −SB DΩ I − KΩ Γ (x − z) (z) · dσ(y) + A3 S1 . 2 ∂B

T1 = −

Using (11.25), we can write that Z  h ∂(xα I) i α A1 [x I] + A2 (y) dσ(y) = 0 ∂ν ∂B

if |α| = 1

and easily check that tr

Z

∂Vl

n  −1 ω ω n dω = 0, (Aω ) A 0 1

(11.28)

for any integer n. Now combining (11.21)–(11.23), (11.27), and (11.28) gives Z ǫ3 −1 ω −1 ω 3 ωǫ − ω0 = tr (Aω A1 (Aω A2 ω dω + O(ǫ4 ). (11.29) 0) 0) 2iπ ∂Vl

Indeed, we have

where

 0  −1 ω ω 2  A0 A2 ω = T5 T6

T3 2 ω (A1 S2 + A2 K2 ) T4

0



e 2 + A2 K e 2 ) , −ω 2 (A1 S 2 e e 2) −ω (A3 S2 + A4 K

Z  1 ω −1 α ω I − KΩ ∂ Γ (x − z) y α · dσ(y), 2 ∂B |α|=1 Z h 1 i X  −1 ω ω −1 α ω e SB [I]DΩ I − KΩ T4 = ∂ Γ (x − z) (z) y α · dσ(y) 2 ∂B

T3 =

X

|α|=1

+ ω 2 (A3 S2 + A4 K2 ), h ∂(xα I) i X 1 ω T5 = A1 [xα I] + A2 ∂ α DΩ [·](z), α! ∂ν |α|=2

T6 =

h ∂(xα I) i X 1 ω A3 [xα I] + A4 ∂ α DΩ [·](z). α! ∂ν

|α|=2

Using the following identity, whose proof will be given later, Z   1 e 2 + A2 K e 2 ) dω = 0, tr T1 T5 − ω 2 T2 (A1 S 2iπ ∂Vl

(11.30)

178

CHAPTER 11

it follows from (11.29) that Z 1 −1 ω −1 ω 3 tr (Aω A1 (Aω A2 ω dω 0) 0) 2iπ ∂Vl Z  h ∂(xα I) i X 1 ω tr A1 [xα I] + A2 ∂ α DΩ [T3 [·]](z) dω = 2iπ ∂ν ∂V l |α|=1 h ∂(xα I) i Z X  1 ω = tr A1 [xα I] + A2 ∂ α DΩ [T3 [·]](z) dω. 2iπ ∂ν ∂Vl

(11.31)

|α|=1

By (1.82) and (1.83) we have   1 1 ω −1 α ω ω −1 ω ∂ Γ (· − z)(x) = −∂zα I − KΩ [Γ (· − z)](x) I − KΩ 2 2 +∞ X 1 = u (x)∂ α uj (z)t . 2 j κ − ω j j=1

(11.32)

We also have from (11.13) that ω ∂ α DΩ [ul ](z) = ∂ α ul (z) + (κl − ω 2 )

Z



∂ α Γω (z − y)ul (y) dy.

(11.33)

Using (11.32) and (11.33), it follows that Z Z 1 1 X α β t α ω ∂ ul (z)∂ ul (z) y β · dσ(y). ∂ DΩ [T3 [·]](z) dω = − √ 2iπ ∂Vl 2 κl ∂B |β|=1

(11.34)

Substituting (11.34) into (11.31), we obtain Z 1 −1 ω −1 ω 3 tr (Aω A1 (Aω A2 ω dω (11.35) 0) 0) 2iπ ∂Vl Z h ∂(xα I) i X  1 A1 [xα I] + A2 ∂ α ul (z)∂ β ul (z)t y β · dσ(y) = − √ tr 2 κl ∂ν ∂B |α|=|β|=1 Z  h α i X ∂(x I) 1 ∂ α ul (z)∂ β ul (z)t y β A1 [xα I] + A2 (y)dσ(y) = − √ tr 2 κl ∂ν ∂B |α|=|β|=1 Z   h ∂(xα I) i X 1 β α β t =− √ ∂ ul (z) y A1 [x I] + A2 (y)dσ(y) ∂ α ul (z). 2 κl ∂ν ∂B |α|=|β|=1

But, by the definition of A1 and A2 , the (i, j)-component of Z  h ∂(xα I) i y β A1 [xα I] + A2 (y)dσ(y) ∂ν ∂B is equal to −mαjβi . Now, plugging (11.35) into (11.29), we arrive as desired at the

179

VIBRATION TESTING

following asymptotic formula: ǫ3 ωǫ − ω0 = √ 2 κl

3 X

mαjβi ∂β (ul )i (z)∂α (ul )j (z) + O(ǫ4 ).

i,j,α,β=1

In order to complete the proof of the theorem, we verify identity (11.30). As before, it is easy to see that Z 1 tr T1 T5 dω 2iπ ∂Vl Z   h ∂(xα I) i 1 X 1 =− √ A1 [xα I] + A2 (y)dσ(y) ∂ α ul (z) ul (z)t 2 κl α! ∂ν ∂B |α|=2

1 = − √ ul (z)t 2 κl (2)

where ul (x) = 1 tr 2iπ

Z

Z

A2

∂B

h ∂u(2) i l (y) dσ(y), ∂ν

X 1 xα ∂ α ul (z). Using (11.14), we also have α!

|α|=2

e 2 + A2 K e 2 ) dω ω 2 T2 (A1 S Z Z h 1 i  1 2 ω ω −1 ω e 2 Se−1 [I](y)dσ(y) dω =− tr ω DΩ I − KΩ Γ (· − z) (z) A2 K B 2iπ 2 ∂Vl ∂B Z √ κl e 2 Se−1 [I](y) dσ(y) tr ul (z)ul (z)t A2 K =− B 2 ∂B Z √ κl e 2 Se−1 [I] (y) dσ(y)ul (z). ul (z)t A2 K =− B 2 ∂B ∂Vl

Inserting the Taylor expansion of ul at z into (Lλ,µ + κl )ul = 0 yields (2)

Lλ,µ ul (x) + κl ul (z) = 0, ω Since SeB = SeB +

+∞ X

n=1

x ∈ B.

e n , we get ωnS e

e 2 Se−1 [I] + I = 0. Lλ,eµ S B

(11.36)

(11.37)

By the definition of An and the jump relation of a single-layer, we have A2 [f ] = and hence

∂ ∂ e ∂ ∂ 0 SB A2 [f ] − SB A2 [f ] = f + SB A4 [f ] − SB A2 [f ] , e ∂ν ∂ν ∂ν ∂ν + − − − Z

∂B

A2 [f ]dσ =

Z

∂B

f dσ.

(11.38)

180

CHAPTER 11

From (11.36), (11.37), and (11.38), we conclude that Z

 h (2) i  ∂ul −1 e e A2 (y) − κl A2 K2 SB [I] (y)ul (z) dσ(y) ∂ν ∂B  Z  (2) ∂ul (y) e 2 Se−1 [I] (y)ul (z) dσ(y) = − κl K B ∂ν ∂B  Z  e e e−1 (2) S [I] (y)u (z) dy = 0, = Lλ,µ ul (y) − κl Lλ,eµ S 2 B l B

which completes the proof by (11.36) and (11.37). 11.1.3



Hole Case

Suppose that Ω contains a small hole D of the form D = ǫB + z, where B is a bounded Lipschitz domain in R3 containing the origin. Let κǫ be a Neumann eigenvalue of −Lλ,µ in the presence of the hole, and let uǫ be a corresponding eigenvector, i.e.,   Lλ,µ uǫ + κǫ uǫ = 0 in Ω \ D,    ∂uǫ (11.39) =0 on ∂Ω,  ∂ν    ∂uǫ on ∂D. ∂ν = 0 From Chapter 1, we know that uǫ can be represented as ωǫ ωǫ uǫ (x) = DΩ [ψǫ ](x) + SD [φǫ ](x),

x ∈ Ω \ D,

where the densities ψǫ and φǫ satisfy the following system of integral equations  1 ωǫ ωǫ   ( I − KΩ )[ψǫ ] − SD [φǫ ] = 0 on ∂Ω, 2   ( 1 I − (Kωǫ )∗ )[φǫ ] + ∂ Dωǫ [ψǫ ] = 0 on ∂D, D 2 ∂ν Ω √ for ωǫ = κǫ . As before, we can reduce the eigenvalue problem to the calculation of the asymptotic expansions of the characteristic values of the operator-valued function   1 ω ω I − KΩ −SD   ω 7→  2 . ∂ ω 1 ω ∗ ǫ DΩ ǫ( I − (KD ) ) ∂ν 2

Using the following asymptotic expansions

+∞ X X 1 ∂ ω ∂xα ω [ψ](z) DΩ [ψ](ǫx + z) = ǫn ∂ α DΩ , ∂ν α! ∂ν n=1 |α|=n

and

ω ∗ ∗ e ǫ(KD ) [φ](ǫx + z) = KB [φ](x) + h.o.t.,

x ∈ ∂B,

e where φ(x) = ǫφ(ǫx + z) for x ∈ ∂B, the asymptotic expansion of κǫ can be

181

VIBRATION TESTING

obtained exactly in the same manner as in Theorem 11.5 and the formula is exactly the same as (11.19) where M is the EMT of the hole B given by (3.25). Here h.o.t. stands for higher-order term. 11.1.4

Crack Case

Let γǫ ⊂ Ω be a small straight crack of length ǫ, center z, and orientation e. We assume that the crack γǫ is located at some fixed distance from ∂Ω and denote by e⊥ a unit normal to γǫ . Let κǫ be a Neumann eigenvalue of −Lλ,µ in the presence of the crack, and let uǫ be a corresponding eigenvector, i.e.,  ǫ  ∇ · σ(uǫ ) + κ uǫ = 0 in Ω \ γ ǫ , (11.40) σ(uǫ ) n = 0 on ∂Ω,   ⊥ σ(uǫ ) e = 0 on γǫ ,

where n is the outward unit normal to ∂Ω and σ(uǫ ) is the stress defined by (4.2). Using the asymptotic framework in Chapter 4, one can prove exactly in the same manner as in Theorem 11.5 that the following expansion holds. Theorem 11.6. Let κl be a simple Neumann eigenvalue for −Lλ,µ in Ω without the crack, let ul be the corresponding eigenvector such that kul kL2 (Ω) = 1, and let κǫl be the Neumann eigenvalue with the inclusion. Then we have κǫl − κl = −

πǫ2 ∂ul | (z)|2 + O(ǫ4 ), E ∂ν

(11.41)

where E is the Young modulus given by (4.26) and ∂/∂ν is defined by (4.20). 11.2

EIGENVALUE PERTURBATIONS DUE TO SHAPE DEFORMATIONS

In this section we confine our attention to the two dimensional case. We first recall Osborn’s theorem; refer to [151]. The following result concerning estimates for the eigenvalues of a sequence of self-adjoint compact operators holds. Theorem 11.7 (Osborn’s theorem). Let H be a (real) Hilbert space and suppose we have a compact, self-adjoint linear operator T : H → H along with a sequence of compact, self-adjoint linear operators Tǫ : H → H such that Tǫ → T pointwise as ǫ → 0 and the sequence {Tǫ } is collectively compact. Let κ be a non-zero eigenvalue of T of multiplicity m. Then, for small ǫ, each Tǫ has a set of m eigenvalues j (counted according to their multiplicity), {κ1ǫ , . . . , κm ǫ }, such that for each j, κǫ → κ as ǫ → 0. Define the average m 1 X j κǫ = κ . m j=1 ǫ If φ1 , . . . , φm is an orthonormal basis of eigenvectors associated with the eigenvalue κ, then there exists a constant C such that the following estimate holds:

m X



 j j κ − κǫ − 1



, (T − Tǫ )[φ ], φ ≤ C (T − Tǫ ) span{φj }

1≤j≤m m j=1

182 where (T −Tǫ ) span{φj }

CHAPTER 11

1≤j≤m

j

denotes the restriction of (T −Tǫ ) to the m-dimensional

vector space spanned by {φ }1≤j≤m and ( , ) is the scalar product in H.

Let Ω ⊂ R2 . We consider Dǫ to be an ǫ-perturbation of D ⋐ Ω. The boundary ∂Dǫ is then given by   e:x e = x + ǫh(x)n(x), x ∈ ∂D , ∂Dǫ = x

where h ∈ C 1 (∂D). Consider the following Dirichlet eigenvalue   Lλ,µ uǫ + ωǫ2 uǫ = 0     eµ λ,e 2   L u ǫ + ω ǫ u ǫ = 0   uǫ = 0    uǫ + − uǫ − = 0        ∂uǫ − ∂uǫ = 0 + e − ∂ν ∂ν R with the normalization Ω |uǫ |2 = 1. The following theorem from [10] holds.

problem: in Ω \ Dǫ ,

in Dǫ ,

on ∂Ω,

(11.42)

on ∂Dǫ , on ∂Dǫ ,

Theorem 11.8. The leading-order term in the perturbations of the eigenvalues due to the interface changes is given by Z ωǫ2 − ω02 = ǫ h(x) M[ue0 ](x) : ∇s ue0 (x) dσ(x) + o(ǫ), ∂D

where the symmetric matrix M[ue0 ] on ∂D is given by (9.8). We will prove Theorem 11.8 using Theorem 11.7. e D and (u0 , ω 2 ) ∈ H 1 (Ω)2 × R+ be the solution to the Let CD = CχΩ\D + Cχ 0 following eigenvalue problem  s 2  ∇ · (CD ∇ u0 ) = −ω0 u0 in Ω, u0 = 0 on ∂Ω, (11.43)  ||u0 ||L2 (Ω) = 1.

e Dǫ and consider the solution (uǫ , ω 2 ) ∈ H 1 (Ω)2 × R+ Let CDǫ = CχΩ\Dǫ + Cχ ǫ of the eigenvalue problem on the perturbed domain:  s 2  ∇ · (CDǫ ∇ uǫ ) = −ωǫ uǫ in Ω, uǫ = 0 on ∂Ω, (11.44)  ||uǫ ||L2 (Ω) = 1.

Let

ue0 = u0 |Ω\D

and ui0 = u0 |D .

The field u0 satisfies the transmission conditions along the interface ∂D:  ui0 = ue0 , e s ui )n = (C∇s ue )n, (C∇ 0 0

(11.45)

(11.46)

183

VIBRATION TESTING

where n is the outer normal unit vector field to ∂D. We denote by T the unit tangential vector field to ∂D. The first identity in (11.46) shows that (∇ui0 )T = (∇ue0 )T

on ∂D,

and hence (∇s ui0 )T · T = (∇s ue0 )T · T

Therefore, we have on ∂D  (∇s ui0 )T · T   e · ui ) + 2e λ(∇ µ(∇s ui0 )n · n 0   µ e(∇s ui0 )n · T

on ∂D.

= (∇s ue0 )T · T ,

= λ(∇ · ue0 ) + 2µ(∇s ue0 )n · n, s

= µ(∇

ue0 )n

(11.47)

· T.

Observe that

∇ · ui0 = tr(∇s ui0 ) = (∇s ui0 )T · T + (∇s ui0 )n · n, where tr denotes the trace. It thus follows that ∇ · ui0 =

2(e µ − µ) s e λ + 2µ ∇ · ue0 + (∇ u0 )T · T . e + 2e e + 2e λ µ λ µ

(11.48)

We then obtain from (11.47) and (11.48) that

e · ui )T + 2e e s ui )T = λ(∇ (C∇ µ(∇s ui0 )T 0 0 e · ui )T + 2e = λ(∇ µ((∇s ui0 )T · T )T + 2e µ((∇s ui0 )T · n)n 0 e µ − µ) e + 2µ) 2λ(e λ(λ (∇ · ue0 )T + ((∇s ue0 )T · T )T = e + 2e e + 2µ λ µ λ + 2e µ((∇s ue0 )T · T )T + 2µ((∇s ue0 )T · n)n = p(∇ · ue0 )T + 2µ(∇s ue0 )T + q(∇s ue0 )T · T τ,

(11.49)

where τ denotes the curvature of ∂D and p and q are given by (9.6). Using the tensor K defined by (9.7), (11.49) can be rewritten in the following condensed form: e s ui )T = (K∇s ue )T (C∇ 0 0

on ∂D.

(11.50)

Let T : L2 (Ω)2 → L2 (Ω)2 be the operator given by T [f ] = v0 where v0 is the solution to  ∇ · (CD ∇s v0 ) = f in Ω, (11.51) v0 = 0 on ∂Ω, and let Tǫ : L2 (Ω) → L2 (Ω) be the operator given by Tǫ [f ] = vǫ where vǫ is the solution to  ∇ · (CDǫ ∇s vǫ ) = f in Ω, (11.52) vǫ = 0 on ∂Ω.

Clearly T (:= T0 ) and {Tǫ }ǫ>0 are linear and self-adjoint operators.

We claim that Tǫ is a compact operator. In fact, by standard energy estimates

184

CHAPTER 11

based on Korn and Poincaré inequalities, we have that for all ǫ ≥ 0, kTǫ [f ]kH 1 (Ω)2 = kvǫ kH 1 (Ω)2 ≤ Ck∇vǫ kL2 (Ω) ≤ Ck∇s vǫ kL2 (Ω) ≤ Ckf kL2 (Ω)2 , where the constant C is independent of ǫ. Since the embedding of H 1 (Ω)2 into L2 (Ω)2 is compact, we conclude that Tǫ is compact. Moreover, since the constant C is independent of ǫ, the sequence of operators (Tǫ )ǫ≥0 is collectively compact. We now prove that Tǫ [f ] converges to T [f ] in L2 (Ω)2 for every f ∈ L2 (Ω)2 . We first observe a simple relation Z Z e s v0 : ∇s (vǫ − v0 ), (11.53) CDǫ ∇s (vǫ − v0 ) : ∇s (vǫ − v0 ) = (C − C)∇ Ω

Dǫ △D

where △ denotes the symmetric difference.

The strong convexity assumption (1.5) on CDǫ and Korn’s inequality (1.52) yield Z Z Z CDǫ ∇s (vǫ − v0 ) : ∇s (vǫ − v0 ) ≥ C |∇s (vǫ − v0 )|2 ≥ C |∇(vǫ − v0 )|2 , Ω





e + 2e where C depends only on min(µ, µ e), min(2λ + 2µ, 2λ µ), and Ω. On the other hand, by Hölder’s inequality, we get Z   e ∇s v0 : ∇(v b ǫ − v0 ) dx C−C Dǫ △D n o e k∇v0 kL2 (D △D) k∇(vǫ − v0 )kL2 (Ω) . ≤ max 2|µ − µ e|, |λ − λ| ǫ We then obtain from the above two inequalities and (11.53) that k∇(vǫ − v0 )kL2 (Ω) ≤ Ck∇v0 kL2 (Dǫ △D) . It then follows from Poincaré’s inequality that kvǫ − v0 kH 1 (Ω) ≤ Ck∇v0 kL2 (Dǫ △D) .

(11.54)

Since ∇v0 ∈ L2 (Ω) and |Dǫ △D| → 0 as ǫ → 0, we get kvǫ − v0 kH 1 (Ω)2 → 0 as ǫ → 0. In particular, kvǫ − v0 kL2 (Ω) = kTǫ [f ] − T [f ]kL2 (Ω) → 0 as ǫ → 0. So, Theorem 11.7 yields 1  1 2 ω 2 − ω 2 + (T − Tǫ )[u0 ], u0 ≤ Ck(T − Tǫ )[u0 ]kL2 (Ω) , ǫ 0

(11.55)

where C is independent of ǫ and u0 is the solution of (11.43). Furthermore, if uǫ is the solution to (11.44), then kuǫ − u0 kL2 (Ω) ≤ Ck(T − Tǫ )[u0 ]kL2 (Ω) .

(11.56)

Let us state some regularity results on uǫ and u0 that will be used in what follows. Lemma 11.9. Let c0 := dist(D, ∂Ω) and let Ωc0 /2 := {x ∈ Ω : dist(x, ∂Ω) >

185

VIBRATION TESTING

c0 /2}. There is a constant C independent of ǫ such that kuǫ kC 1,α (Dǫ ) + kuǫ kC 1,α (Ωc0 /2 \Dǫ ) ≤ C,

(11.57)

for some α > 0. Proof. This estimate extends the regularity results obtained by De Giorgi and Nash in the scalar case (see, for instance, [95]) to the case of bidimensional elliptic systems. It is proved in [129] that uǫ ∈ C 1,α (D ǫ ) ∩ C 1,α (Ω\Dǫ ) for some α ∈ (0, 1), and there is a constant C depending on the ellipticity constants min(µ, µ e), e + 2e min(2λ + 2µ, 2λ µ), c0 , and C 1,1 norm of Dǫ such that kuǫ kC 1,α (Dǫ ) + kuǫkC 1,α (Ωc0 /2 \Dǫ ) ≤ C(kuǫ kL2 (Ω) + kuǫ kL∞ (Ωc0 /2 ) ).

(11.58)

Since uǫ ∈ H 1 (Ω)2 and its norm is bounded regardless of ǫ, it follows from the Sobolev embedding theorem that uǫ ∈ Lq (Ω)2 for q > 2 independently of ǫ. Then, 2 by Theorem 1.20, it follows that ∇uǫ ∈ L2+η loc (Ω) for some η > 0. Again by γ 2 Sobolev embedding theorem, this implies that uǫ ∈ Cloc (Ω) with γ = 1 − 2+η . Finally, recalling that kuǫ kL2 (Ω) = 1, we obtain (11.57).  Let us now evaluate the right-hand side of (11.56). We know that T [u0 ] = eǫ where v eǫ is the solution to − ω12 u0 and Tǫ [u0 ] = v 0



e 0 = − ω12 u0 , then Let u 0



eǫ ) = ∇ · (CDǫ ∇s v eǫ = v

u0 in Ω, 0 on ∂Ω.

e 0 ) = u0 in Ω, ∇ · (CD ∇s u e 0 = 0 on ∂Ω. u

(11.59)

(11.60)

Hence, one can show in the same way as for (11.54) that

e 0 k2H 1 (Ω) ≤ Ck∇u0 k2L2 (Dǫ △D) , ke vǫ − u

and by the regularity estimates (11.57)

k∇u0 kL2 (Dǫ △D) ≤ C|Dǫ △D|1/2 , which implies e 0 kH 1 (Ω) ≤ C|Dǫ △D|1/2 ke vǫ − u

(11.61)

e 0 kL2 (Ω) ≤ C|Dǫ △D|1/2+η ke vǫ − u

(11.62)

for some constant C independent of ǫ. We now prove the following estimate.

for η > 0. To this end, we need the following lemma.

Lemma 11.10. Let C = (Cijkl ) be an L∞ (Ω) strongly convex elliptic tensor field, e 2 × 2 matrix-valued function, where Ω e ⊂ Ω is a measurable set. Let ϕ F ∈ L∞ (Ω) be a solution to  ∇ · (C∇s ϕ) = ∇ · (χΩ e F ) in Ω, (11.63) ϕ = 0 on ∂Ω.

186

CHAPTER 11

Then,

e 1/2+η kF k ∞ e , kϕkL2 (Ω) ≤ C|Ω| L (Ω)

where η > 0.

(11.64)

Proof. We have Z



C∇s ϕ : ∇s ϕ =

Z



s χΩ e F : ∇ ϕ.

Hence, by the Cauchy–Schwarz inequality and Korn’s inequality (1.52), we immediately get e 1/2 k∇ϕkL2 (Ω) ≤ kF kL∞ (Ω) e |Ω| and therefore,

e 1/2 . kϕkH 1 (Ω) ≤ kF kL∞ (Ω) e |Ω|

Let ψ be the unique solution to  ∇ · (C∇s ψ) = ψ =

ϕ in Ω, 0 on ∂Ω.

(11.65)

We have (11.66)

k∇ψkL2 (Ω) ≤ kϕkH 1 (Ω) .

By Theorem 1.20, since ϕ ∈ H 1 (Ω)2 there exists η > 0 such that k∇ψkL2+η (Ω) e ≤ C(k∇ψkL2 (Ω e ′ ) + kϕkL2+η (Ω e ′ ) ), e ⊂ Ω e ′ ⊂ Ω. Finally, inserting (11.66) into the last inequality and using where Ω Sobolev immersion theorem we readily get k∇ψkL2+η (Ω) e ≤ CkϕkL2+η (Ω) .

By the Gagliardo-Nirenberg inequality, we have that α kϕkL2+η (Ω) ≤ Ck∇ϕk1−α L2 (Ω) kϕkL2 (Ω)

with α =

η η+2 .

Hence 1

η

e η+2 kϕk η+2 kϕkL2+η (Ω) ≤ C|Ω| L2 (Ω) .

Multiplying equation (11.65) for ψ by ϕ, integrating by parts and applying Hölder’s inequality, we obtain Z Z Z 2 s s s |ϕ| dx = − C∇ ϕ · ∇ ψ = χΩ eF · ∇ ψ Ω





187

VIBRATION TESTING

and consequently, Z

η+1



e η+2 |ϕ|2 dx ≤ kF kL∞ (Ω) e k∇ψkL2+η (Ω) e |Ω| η

η+2 e ≤ C|Ω|kϕk L2 (Ω) .

Hence, we get which shows that where γ =

η 2(η+4) .

η+2

e η+4 , kϕkL2 (Ω) ≤ C|Ω|

e 1/2+γ , kϕkL2 (Ω) ≤ C|Ω|

This completes the proof.



eǫ − u e 0 . Observe that v eǫ − u e 0 satisfies We apply the above lemma to the function v  e 0 )) = ∇ · ((CD − CDǫ )∇s v eǫ ) in Ω, ∇ · (C∇s (e vǫ − u eǫ − u e 0 = 0 on ∂Ω, v

and hence we get

e 0 kL2 (Ω) ≤ C|Dǫ △D|1/2+η k∇e ke vǫ − u vǫ kL∞ (Dǫ △D) .

(11.67)

Furthermore, according to (11.58), we have

ke vǫ kC 1,α (Dǫ ) + ke vǫ kC 1,α (Ωc0 /2 \Dǫ ) ≤ C(ke vǫ kL2 (Ω) + ku0 kL∞ (Ωc0 /2 ) ).

(11.68)

Since ke vǫ kH 1 (Ω)2 ≤ Cku0 kL2 (Ω)2 ≤ C, it follows from (11.57) that (11.69)

ke vǫ kC 1,α (Dǫ ) + ke vǫ kC 1,α (Ωc0 /2 \Dǫ ) ≤ C.

The desired estimate (11.62) now follows from (11.67) and (11.69). Then, we conclude that e 0 kL2 (Ω) ≤ Cǫ1/2+η . k(Tǫ − T )[u0 ]kL2 (Ω) = ke vǫ − u (11.70) It also follows from (11.56) that

kuǫ − u0 kL2 (Ω) ≤ Cǫ1/2+η .

(11.71)

The following lemma holds. Lemma 11.11. There exists a constant C independent of ǫ such that α

e 0 )kL∞ (∂Dǫ \D) + k∇(e e 0 )kL∞ (∂Dǫ ∩D) ≤ Cǫ 2(α+2) . k∇(e vǫ − u vǫ − u

(11.72)

Proof. To prove (11.72) we make use of a mean-value property for biharmonic functions (see [46, Theorem 4.1]). Let 2ǫ < δ < c0 /2 and let Ωǫδ := { x ∈ Ω\(D ∪ Dǫ ) : dist(x, ∂(Ω\(D ∪ Dǫ ))) > δ }.

(11.73)

e 0 ) is biharmonic in Ω\(D ∪ Dǫ ), we may apply the mean-value Since ∇(e vǫ − u

188

CHAPTER 11

theorem at points y ∈ Ωǫδ :

 Z 12 4 e 0 )(y) = e 0 ) ⊗ (x − y) dx ∇(e vǫ − u (e vǫ − u π δ 4 B δ (y)  Z 2 1 e 0 ) dx . − 4 |x − y|2 ∇(e vǫ − u δ B δ (y) 2

It then follows from the Hölder inequality and (11.61) that 1

e 0 )kL∞ (Ωǫδ ) ≤ Cδ −2 ǫ 2 , k∇(e vǫ − u

(11.74)

where C is independent of ǫ. Set

eǫe = v eǫ |Ω\D v

eǫi = v eǫ |D , and v

as in (11.45). For y ∈ ∂Dǫ \D, let yδ denote the closest point to y in the set Ωǫδ . By (11.69), we obtain |∇e vǫe (y) − ∇e vǫe (yδ )| ≤ Cδ α . Likewise, we have

|∇e u0 (y) − ∇e u0 (yδ )| ≤ Cδ α .

It then follows from (11.74) that

e e0 )(y)| ≤ |∇e |∇(e vǫe − u vǫe (y) − ∇e vǫe (yδ )| + |∇e vǫe (yδ ) − ∇e ue0 (yδ )| + |∇e ue0 (yδ ) − ∇e ue0 (y)|

≤ C(δ α + δ −2 ǫ1/2 ).

Minimizing the right-hand side of the above inequality with respect to δ, we get α

e e0 )kL∞ (∂Dǫ \D) ≤ Cǫ 2(α+2) . k∇(e vǫe − u

In a similar way one can prove that

α

to complete the proof.

e i0 )kL∞ (∂Dǫ ∩D) ≤ Cǫ 2(α+2) k∇(e vǫi − u



 Proof of Theorem 11.8. We begin by computing the term (T − Tǫ )[u0 ], u0 appearing in (11.56). In view of (11.59) and (11.60), we have   e0 − v eǫ , u0 (T − Tǫ )[u0 ], u0 = u Z Z 1 eǫ =− 2 u20 − u0 v ω0 Ω Ω Z 1 eǫ : ∇s u0 = 2 (CDǫ − CD )∇s v ω0 Ω Z Z 1 1 s i s e e e − C)∇s v eǫ : ∇ u0 − 2 eǫe : ∇s ui0 . = 2 (C − C)∇ v (C ω0 Dǫ \D ω0 D\Dǫ

189

VIBRATION TESTING

Let xt := x + th(x)n(x) for x ∈ ∂D and t ∈ [0, ǫ]. We get, for ǫ small enough, Z 1 e − C)∇s v eǫi : ∇s ue0 dx (C ω02 Dǫ \D Z ǫZ 1 e − C)∇s v eǫi (xt ) : ∇s ue0 (xt ) dσ(x) dt + O(ǫ2 ), h(x)(C = 2 ω0 0 ∂D∩{h>0} (11.75) and Z 1 e − C)∇s v eǫe : ∇s ui0 dx − 2 (C ω0 D\Dǫ Z ǫZ 1 e − C)∇s v eǫe (xt ) : ∇s ui0 (xt ) dσ(x) dt + O(ǫ2 ). = 2 h(x)(C ω0 0 ∂D∩{h 0 and hence eǫi (xǫ ) = ∇s v

α 1 e −1 C ((K∇s ue0 (x)T ) ⊗ T + (C∇s ue0 (x)n) ⊗ n) + O(ǫ 2(α+2) ). ω02

Thus we get Z 1 e − C)∇s v eǫi : ∇s ue0 dx (C ω02 Dǫ \D Z α ǫ h(x)M[∇s ue0 ](x) : ∇s ue0 (x) dσ(x) + O(ǫ1+ 2(α+2) ), = 4 ω0 ∂D∩{h>0} for α > 0, where M[∇s ue0 ] is given by (9.8).

Similarly, we get Z 1 e − C)∇s v eǫi : ∇s ue0 dx − 2 (C ω0 D\Dǫ Z α ǫ = 4 h(x)M[∇s ue0 ](x) : ∇s ue0 (x) dσ(x) + O(ǫ1+ 2(α+2) ). ω0 ∂D∩{h 0. We now prove the following theorem. The asymptotic formula in this theorem can be regarded as a dual formula to that of ωǫ2 − ω02 in (11.44). It plays a key role in our reconstruction procedure in the next section. Theorem 11.12. The following asymptotic formula holds as ǫ → 0: Z Z wg · u0 dx g · C(∇s uǫ − ∇s u0 )n dσ + (ωǫ2 − ω02 ) Ω ∂Ω Z =ǫ h(x)M[∇s ue0 ](x) : ∇s wge (x)dσ(x) + O(ǫ1+β )

(11.79)

∂D

for some β > 0. Here wg is the solution to (11.77) and M[∇s ue0 ] is given by (9.8).

191

VIBRATION TESTING

To prove (11.79), it suffices, thanks to (11.78), to show that Z − (CDǫ − CD )∇s uǫ : ∇s wg dx Ω Z = −ǫ h(x)M[∇s ue0 ](x) : ∇s wge (x) dσ(x) + O(ǫ1+β ). ∂D

This can be proved following the same lines of the proof of Theorem 11.8, as long as we have proper estimates for uǫ and wg . The required estimates are kwg kC 1,α (D) + kwg kC 1,α (Ωc0 /2 \D) ≤ C

(11.80)

and k∇(ueǫ − ue0 )kL∞ (∂Dǫ \D) + k∇(uiǫ − ui0 )kL∞ (∂Dǫ ∩D) ≤ Cǫγ

(11.81)

for some constant C independent of ǫ and γ > 0. The rest of this section is devoted to proving (11.80) and (11.81). The estimate (11.80) holds since ∇ · (CD ∇s ) + ω02 with Dirichlet boundary conditions is well posed on the subspace of H 1 (Ω)2 orthogonal to u0 and, on the other hand, u0 itself satisfies such an estimate. In order to prove (11.81), let 2ǫ < δ < c0 /2 and Ωǫδ be defined as in (11.73). Clearly, the function φǫ := ∇(uǫ − u0 ) is a solution to the following equation in Ω\D ∪ Dǫ : ∇ · (C∇s φǫ ) + ωǫ2 φǫ = (ω02 − ωǫ2 )∇u0 .

By standard regularity results for elliptic systems with constant coefficients, ∇u0 and φǫ belong to L2+η loc for some η > 0. Now, from a generalization of Meyer’s theorem to systems (see Section 1.7) we have   2 k∇φǫ kL2+η (Ωǫδ ) ≤ C d−1+ 2+η k∇φǫ kL2 (Ωǫδ/2 ) + ω02 − ωǫ2 ku0 kH 1 (Ωǫδ/2 ) . (11.82) We now apply Caccioppoli’s inequality on φǫ to have   k∇φǫ kL2 (Ωǫδ/2 ) ≤ C δ −2 kφǫ kL2 (Ωǫδ/3 ) + ω02 − ωǫ2 k∇u0 kL2 (Ωǫδ/3 ) .

√ Since ω02 − ωǫ2 ≤ Cǫ and kφǫ kL2 (Ωǫδ/3 ) ≤ C ǫ, we have

 √ k∇φǫ kL2 (Ωǫδ/2 ) ≤ C δ −2 ǫ + ǫ .

Inserting (11.83) into (11.82), we obtain   2 √ 2 √ k∇φǫ kL2+η (Ωǫδ ) ≤ C δ −3+ 2+η ǫ + ǫ ≤ Cδ −3+ 2+η ǫ.

(11.83)

(11.84)

√ On the other hand, since kφǫ kL2 (Ωǫδ/2 ) ≤ C ǫ, we have from the Sobolev embedding theorem and (11.83) that √ kφǫ kL2+η (Ωǫδ ) ≤ Ckφǫ kH 1 (Ωǫδ/2 ) ≤ Cδ −2 ǫ. (11.85) Using the Sobolev embedding theorem again, it follows from (11.85) and (11.84)

192

CHAPTER 11

that

2

kφǫ kL∞ (Ωǫδ ) ≤ Cδ −3+ 2+η



ǫ.

Now, let y ∈ ∂Dǫ \D and let yδ denote the closest point to y in the set Ωǫδ . From the gradient estimates for uǫ and u0 , we have |∇ueǫ (y) − ∇ueǫ (yδ )| ≤ Cδ α ,

(11.86)

which yields |∇(ueǫ − ue0 )(y)| ≤ |∇ueǫ (y) − ∇ueǫ (yδ )| + |∇ueǫ (yδ ) − ∇ue0 (yδ )| + |∇ue0 (yδ ) − ∇ue0 (y)| 2

≤ C(δ α + δ −3+ 2+η ǫ1/2 ). 1

2

Choosing δ = ǫ 2(3+α− 2+η ) , we get |∇(ueǫ − ue0 )(y)| ≤ Cǫγ , where γ =

α , 2 2(3+α− 2+η )

and hence k∇(ueǫ − ue0 )kL∞ (∂Dǫ \D) ≤ Cǫγ .

In a similar way, one can show that k∇(uiǫ − ui0 )kL∞ (∂Dǫ ∩D) ≤ Cǫγ . 11.3

SPLITTING OF MULTIPLE EIGENVALUES

In this section, we briefly address the splitting problem in the evaluation of the perturbations in the eigenvalues of the Lamé equations that are due to the presence of a small inclusion D. We refer the reader for more details to [32, Section 3.4]. The main difficulty in deriving asymptotic expansions of perturbations in multiple eigenvalues of the unperturbed configuration relates to their continuation. Multiple eigenvalues may evolve, under perturbations, as separated, distinct eigenvalues, and the splitting may only become apparent at high orders in their Taylor series expansions with respect to the perturbation parameter. Let ω02 denote an eigenvalue of the eigenvalue problem for (11.1) with geometric multiplicity m. We call the ω0 -group the totality of the perturbed eigenvalues ωǫ2 in the presence of the inclusion for ǫ > 0 that are generated by splitting from ω02 . We then proceed from the generalized argument principle to investigate the splitting problem. Let V be a small neighborhood of ω0 such that ω0 is the only characteristic value of ω 7→ Aω 0 in V . Let, for l any integer, al (ǫ) denote Z 1 d al (ǫ) = tr (ω − ω0 )l Aǫ (ω)−1 Aǫ (ω)dω. 2iπ dω ∂V By the generalized argument principle, we find al (ǫ) =

m X i=1

(ωǫi − ω0 )l .

193

VIBRATION TESTING

The following theorem from [37] holds. Theorem 11.13 (Splitting of a multiple eigenvalue). There exists a polynomialvalued function ω 7→ Qǫ (ω) of degree m and of the form Qǫ (ω) = ω m + c1 (ǫ)ω m−1 + . . . + ci (ǫ)ω m−i + . . . + cm (ǫ) such that the perturbations ωǫi − ω0 are precisely its zeros. The polynomial coefficients (ci )m i=1 are given by the recurrence relation for l = 0, 1, . . . , m − 1.

al+m + c1 al+m−1 + . . . + cm al = 0

Based on Theorem 11.13, our strategy for deriving asymptotic expansions of the perturbations ωǫi − ω0 relies on finding a polynomial of degree m such that its zeros are precisely the perturbations ωǫi −ω0 . We then obtain asymptotic expansions of the perturbations in the eigenvalues by computing the Taylor series of the polynomial coefficients.

11.4

RECONSTRUCTION OF INCLUSIONS

In this section we propose efficient algorithms for detecting small elastic inclusions or perturbations in the interface of an inclusion from modal measurements.

11.4.1

Reconstruction of Small-Volume Inclusions

For g ∈ L2 (∂Ω)2 satisfying

Z



∂Ω

g · u0 = 0, let wg solve

 Lλ,µ wg + ω02 wg = 0  ∂wg = g ∂ν

We can prove that (ω02 − ωǫ2 )

R

wg · u 0 +

Z

∂Ω

g · (uǫ − u0 ) ≃ −ǫ2

in Ω, on ∂Ω.

2 X

(11.87)

mijpq ∂i (u0 )j (z)∂p (wg )q (z).

i,j,p,q=1

(11.88)

It follows from (11.19) that the following reconstruction formula holds:

Z



wg · u 0 +

ω02

1 − ωǫ2

Z

∂Ω

g · (uǫ − u0 ) ≃

2 X

mijpq ∂i (u0 )j (z)∂p (wg )q (z)

i,j,p,q=1 2 X

i,j,p,q=1

. mijpq ∂i (u0 )j (z)∂p (u0 )q (z)

194

CHAPTER 11

The reconstruction method is then used to minimize the functional Z L Z X 1 wgl ·u0 + 2 ω − ω2 l=1



ǫ

0

∂Ω

gl ·(uǫ −u0 )−

2 X

mijpq ∂i (u0 )j (x)∂p (wgl )q (x) 2 2 X mijpq ∂i (u0 )j (x)∂p (u0 )q (x)

i,j,p,q=1

i,j,p,q=1

R

for L functions gl satisfying ∂Ω gl ·u0 = 0 for l = 1, . . . , L. The minimization is over x ∈ Ω and the elastic moment tensor M = (mijpq ) subject to the Hashin-Shtrikman bounds (3.8)–(3.11) and the symmetry relations (3.6). Based on the asymptotic expansion (11.41), a similar reconstruction procedure can be designed to detect and locate a small crack. However, it does not seem easy to separate cracks from inclusions by vibration testing. 11.4.2

Reconstruction of Shape Deformations

The inverse problem we consider now is to recover some information about h from the variations of the modal parameters (ωǫ − ω0 , (σ(uǫ ) − σ(u0 ))n|∂Ω ) associated R with the Dirichlet eigenvalue problem (11.42). Let gl ∈ L2 (∂Ω)2 satisfy ∂Ω gl · (CD ∇s u0 )n = 0 for l = 1, . . . , L. In order to reconstruct the shape deformation h, one can minimize, using Theorem 11.12, the following functional: L Z X l=1

∂Ω

gl · C(∇s uǫ − ∇s u0 )n + (ωǫ2 − ω02 ) −ǫ

Z

∂D

Z



wg l · u 0

2 s e s e h(x)M[∇ u0 ](x) : ∇ wgl (x)dσ(x) ,

(11.89)

where wgl is a solution of (11.77) with g replaced by gl . Let   Z 2 2 s V(∂Ω) := g ∈ L (∂Ω) : g · (CD ∇ u0 )n = 0 ∂Ω

2

and define Λ : V → L (∂D) by

Λ[g] := M[∇s ue0 ] : ∇s wge

on ∂D,

(11.90)

where wg is the solution to (11.77). The best choice of {g1 , . . . , gL } is the basis of the image space of Λ∗ Λ, where Λ∗ : L2 (∂D) → V(∂Ω) is the adjoint of Λ. Moreover, one should look for the changes h as a linear combination of M[∇s ue0 ] : ∇s wge |∂D for g ∈ Range(Λ∗ Λ): L X h(x) = αl vgl , l=1

where

vgl := M[∇s ue0 ] : ∇s wge l ∗

on ∂D,

l = 1, . . . , L,

(11.91)

L is the dimension of Range(Λ Λ), and gl are the significant singular vectors of Λ. We call the vectors vgl , l = 1, . . . , L, the optimally illuminated coefficients. The minimization procedure reduces then to

195

VIBRATION TESTING

L Z X αl′ ,l =1,...,L

min ′

gl · C(∇s uǫ − ∇s u0 )n + (ωǫ2 − ω02 )

∂Ω l=1 Z L X

−ǫ

αl′

l′ =1

∂D

2 vgl′ (x)vgl (x) .

Z



wg l · u 0

(11.92)

The quadratic minimization problem (11.92) has a unique solution which is stable with respect to the measurement vector given by Z

∂Ω

g1 · C(∇s uǫ − ∇s u0 )n, . . . ,

Z

∂Ω

t gL · C(∇s uǫ − ∇s u0 )n .

This implies that if h is a linear combination of the optimally illuminated coefficients, then it can be uniquely reconstructed from the measurements in a robust way. Moreover, the resolution limit in reconstructing the changes h is given by δ=

11.5

1 . maxl (||∂wgl /∂T ||L2 (∂D)2 /||wgl ||L2 (∂D)2 )

(11.93)

NUMERICAL ILLUSTRATIONS

We present several examples of the interface reconstruction in dimension two. For computations, the background domain Ω is assumed to be the unit disk centered at the origin, and the inclusion D is a disk centered at (0, 0.1) with the radius 0.4. The e µ Lamé constants of Ω \ Dǫ and Dǫ are given by (λ, µ) = (1, 1) and (λ, e) = (1.5, 2), respectively. In the following examples, we assume that ǫ is known and reconstruct h. Example 1. In this example, we represent, as in Subsection 9.1.3, the perturbation function h as 18 X h= ap Φp (θ), p=0

where Φp , for p = 1, . . . , 18, are given by (9.12). The actual (or true) perturbation is given by h(θ) = 1 + 2 cos pθ, p = 0, 3, 6, 9, and ǫ = 0.03. We use the first eigenvalue and the corresponding (two) eigenvectors of D and Dǫ , which are denoted by u0,j and uǫ,j (j = 1, 2), respectively. The eigenvalues, eigenvectors, and wgil in the following are simulated using Matlab. Numerical computation reveals that the first eigenvalue has multiplicity two, which may be two very close simple eigenvalues. Even though formula (11.79) is developed for simple eigenvalues, this does not cause any trouble. We simply superimpose the algebraic systems to minimize the functional (11.89). For the test function wg , which is a solution to (11.77), we use  (cos lθ, 0)t    (0, cos lθt ) gil = (cil , dil )t + (sin lθ, 0)t    (0, sin lθ)t

for for for for

i = 1, i = 2, i = 3, i = 4,

l = 1, . . . , L(= 5),

(11.94)

196

CHAPTER 11

and corresponding solutions are denoted by wgil . They are such that Z wgil · u0,j dx 6= 0. Ω

Moreover, the constants (cil , dil ) are chosen to fulfill the orthogonality conditions Z gil · (CD ∇s u0,j )n dσ = 0, j = 1, 2, i, l = 1, . . . , L. ∂Ω

In order to minimize the functional (11.89), we construct a 40 × 19 matrix A = (Akp ) as Z Akp = ǫ Φp (x)M[∇s ue0,j ](x) : ∇s wge il (x)dσ(x), ∂D

where k = 20(j − 1) + 4(l − 1) + i, 1 ≤ j ≤ 2, 1 ≤ l ≤ 5, 1 ≤ i ≤ 4, and 0 ≤ p ≤ 18. The 40-dimensional measurement vector b = (bk ) is given by Z Z bk = gil · C(∇s uǫ,j − ∇s u0,j )n + (ω02 − ωǫ2 ) wgil · u0,j , ∂Ω



where k = 20(j − 1) + 4(l − 1) + i. We then compute the coefficients ap ’s of h using the formula −1 t (a0 , . . . , a18 ) = At A + ηI A b, (11.95)

where I is the 19 × 19 identity matrix and η > 0 is the regularization parameter. The regularization parameter η is set to be 10−3 , 10−3 , 10−5 , 2 · 10−6 for each p = 0, 3, 6, 9. Figure 11.1 shows results of reconstruction with well chosen η. It shows that the reconstruction algorithm works pretty well if the perturbation h is not highly oscillating. Even when h is highly oscillating, the reconstructed interface e ǫ reveals general information of the shape of the interface. Table 11.1 shows ∂D e ǫ △D| and |Dǫ △D| for ǫ = 0.02, 0.03, 0.04 with the ratio of symmetric differences |D e is the reconstructed inclusion. It various regularization parameters η, where D shows that the ratio is close to 1 for well-chosen η.

Example 2 [Minimization using significant eigenvectors]. The second example is to show the result of minimizing the functional (11.92) using the optimally illuminated coefficients. To compute the significant eigenvalues and eigenvectors, we use the basis given in (11.94). To make the index simpler, we denote gil as gp , p = 1, . . . , 20. For j = 1, 2, let Λj be the operator defined by (11.90) and let Λ∗j Λj [gp ] =

20 X q=1

d(j) pq gq

for p = 1, . . . , 20.

197

VIBRATION TESTING

1

1

−0.1

−0.1

−1

−1 −1

0

1

1

1

−0.1

−0.1

−1

−1

0

1

−1

0

1

−1 −1

0

1

Figure 11.1: The dashed grey curves represent the disks. The solid grey curves represent the interfaces, which are perturbations of the disks. The perturbation is given by ǫh where ǫ = 0.03. The black curves are the reconstructed interfaces.

(j)

We then compute (dpq ) by solving the matrix equation 20 X

q′ =1

(j)

dpq′

Z

∂Ω

gqt gq′ dσ =

Z

Λj [gp ]Λj [gq ]dσ,

p, q = 1, . . . , 20.

(11.96)

∂D (j)

It turns out that, for each j = 1, 2, (dpq ) has six significant eigenvalues (counted according to their multiplicity) as shown in Figure 11.2. (j,i) (j) Let c(j,i) = (cp )20 p=1 , i = 1, . . . , 6, j = 1, 2, be significant eigenvectors of (dpq ), and define 20 X (j) φi = c(j,i) gp (x), j = 1, 2, i = 1, . . . , 6. p p=1

(j)

We note that φi , i = 1, . . . , 6, are significant eigenvectors of Λ∗j Λj , j = 1, 2. (j)

We look for h as a linear combination of Λj [φi ], j = 1, 2, i = 1, . . . , 6. The (1) (1) (2) actual perturbation is given by h = Λ1 [φ3 ] and h = 2Λ1 [φ2 ] − Λ2 [φ1 ]. The example in Figure 11.3 shows the reconstruction of the inclusion. It shows that the minimization using the optimally illuminated coefficients is as effective as that using (11.89) or (11.95) (see also Example 4). We emphasize that in this reconstruc(j) tion h is represented using only 12 basis functions Λj [φi ], while in the previous reconstruction 19 functions (Φp ) are used. Moreover, the representation of h in terms of the optimally illuminated coefficients avoids the computation of a basis for functions defined on the boundary of the unperturbed inclusion.

198

CHAPTER 11

p

η

0

10−2 10−3 10−4 10−5 10−2 10−3 10−4 10−5 10−2 10−3 10−4 10−5 10−2 10−3 10−4 10−5

3

6

9

e ǫ △D| |D |Dǫ △D|

ǫ = 0.02 0.8835 0.5622 0.4527 0.8558 0.7667 0.6484 0.6371 1.1516 0.9977 0.9950 0.9137 1.0286 1.0103 1.0741 1.1330 1.1339

ǫ = 0.03 0.8411 0.4130 0.5210 1.1803 0.7244 0.7769 0.8967 1.6356 1.0196 1.1380 1.1642 1.3878 1.0419 1.2865 1.4803 1.5083

ǫ = 0.04 0.8127 0.3447 0.6647 1.4565 0.7821 1.0457 1.3637 2.2430 1.0577 1.4119 1.6217 1.9081 1.0928 1.6192 1.9743 1.9957

Table 11.1: For h(θ) = 1 + 2 cos pθ, p = 0, 3, 6, 9, the area difference ratio e ǫ is the reconstructed inclusion. is presented, where D

e ǫ △D| |D |Dǫ △D|

3.5 eigenvalues of (d(1)) pq

3

eigenvalues of (d(2) ) pq

2.5 2 1.5 1 0.5 0

0

2

4

6

8

10

12

14

16

18

20

Figure 11.2: Significant eigenvalues of Λ∗j Λj , j = 1, 2. There are 6 such eigenvalues.

Example 3 [Incomplete measurements]. In this example, we use the data only measured on the part of ∂Ω, that is {eiθ : θ ∈ [0, π]}. We look for h as the (j) linear combination of Λj [φi ], j = 1, 2, 1 ≤ i ≤ 6. Here the domain of Λj is restricted to the functions supported on {eiθ : θ ∈ [0, π]}. The example in Figure (1) 11.4 shows the reconstruction of the inclusion, which is given by h = Λ1 [φ3 ] and (1) (2) h = 2Λ1 [φ2 ] − Λ2 [φ1 ]. Even with incomplete data the reconstructions are pretty

199

VIBRATION TESTING

1

1

−0.1

−0.1

−1

−1 −1

0

1

−1

0

1

Figure 11.3: Reconstruction in the case where h is expressed in terms of the significant eigenvectors of Λ∗j Λj , j = 1, 2.

accurate. See the next example for reconstruction of more general shapes. 1

1

−0.1

−0.1

−1

−1 −1

0

1

−1

0

1

Figure 11.4: Reconstruction from incomplete measurements. Example 4. Figure 11.5 shows the reconstruction of an inclusion which is given by ǫh = 0.04(1 + 2 cos 3θ) (the first row), shifted to the top by 0.2 (the second row), and an ellipse (the third row). The left column contains the results obtained using (11.95), the middle one by using significant eigenvectors of Λ∗j Λj , j = 1, 2, and the right column is obtained using the incomplete measurements on {eiθ : θ ∈ [0, π]}. In this example, the left and middle columns give similar results, and the reconstructed images are very close to the true ones. The incomplete measurement gives worse images, but the upper part which is the illuminated region is better reconstructed.

CONCLUDING REMARKS In this chapter we have presented an asymptotic theory for eigenvalue problems to the linear elasticity case. We have presented asymptotic expansions of the perturbations due to the presence of an elastic inclusion. The inclusion may be hard or soft. Small cracks have been also considered. Leading-order terms in these expansions have been explicitly written. An inversion approach for the purpose of identifying from eigenvalue measurements small elastic inclusions and cracks has been developed. It can be extended to the case of partial data where the measure-

200

CHAPTER 11

1

1

1

−0.1

−0.1

−0.1

−1

−1 −1

0

1

−1 −1

0

1

1

1

1

−0.1

−0.1

−0.1

−1

−1 −1

0

1

0

1

1

1

−0.1

−0.1

−0.1

−1 −1

0

1

0

1

−1

0

1

−1

0

1

−1 −1

1

−1

−1

−1 −1

0

1

Figure 11.5: The left column is obtained using (11.95), the middle one by using the significant eigenvectors of Λ∗j Λj , j = 1, 2. In the right column we use incomplete measurements. ments are made on only a part of the boundary. However, our algorithms require a priori knowledge of the unperturbed structure. It would be very interesting to design reconstruction methods which operate solely on the eigenparameters of the damaged elastic structure and do not require a priori knowledge of the undamaged structure. Another challenging problem is to develop an efficient procedure to easily separate cracks from inclusions by vibration testing.

Appendix A Introduction to Random Processes This appendix reviews some statistical concepts essential for understanding stability analysis of the imaging functionals in the presence of noise.

A.1

RANDOM VARIABLES

A characteristic of noise is that it does not have fixed values in repeated measurements or observations. Let us first consider such a scalar (real-valued) quantity. It can be modeled by a random variable, for which the exact value of a realization is not known, but for which the likelihood or empirical frequency of any measurable set of values can be characterized. The statistical distribution of a random variable can be defined as the probability measure over R that quantifies the likelihood that the random variable takes values in a particular interval. In this section we only address so-called continuous random variables, i.e., those whose distributions admit densities with respect to the Lebesgue measure over R, as discrete or other singular random variables have not been encountered in the book. The statistical distribution of a random variable can then be characterized by its probability density function (PDF). The PDF of a (real-valued) random variable Z is denoted by pZ (z): Z b  P Z ∈ [a, b] = pZ (z) dz. a

Note that pZ is a nonnegative function whose total integral is equal to one. Given the PDF it is possible to compute the expectation of a nice function (bounded or positive) of the random variable φ(Z), which is the weighted average of φ with respect to the PDF pZ : Z E[φ(Z)] = φ(z)pZ (z) dz. R

The most important expectations are the first- and second-order moments (we have only considered random variables with finite first- and second-order moments in this book). The mean of the random variable Z is defined as Z E[Z] = zpZ (z) dz. R

It is the first-order statistical moment. It is the deterministic value that best approximates the random variable Z as we will see in Subsection A.4. The variance is defined as   Var(Z) = E |Z − E[Z]|2 = E[Z 2 ] − E[Z]2 ,

202

APPENDIX A

p which is a second-order statistical moment. σZ = Var(Z) is called the standard deviation, which is a measure of the average deviation from the mean. The PDF of measurement noise is not always known in practical situations. We often use parameters such as mean and variance to describe it. It is then usual to assume that the noise has Gaussian PDF. This can be justified by the maximum Zof entropy principle, which claims that the PDF that maximizes the entropy − Z

pZ (z) ln pZ (z) dz with the constraints

pZ (z)dz = 1,

Z

zpZ (z) dz = µ,

and

Z

(z − µ)2 pZ (z)dz = σ 2 ,

is the Gaussian PDF pZ (z) = √

 (z − µ)2  1 exp − , 2σ 2 2πσ

(A.1)

with mean µ and variance σ 2 . If a random variable Z has PDF (A.1), then we write Z ∼ N (µ, σ 2 ). Moreover, the measurement error often results from the cumulative effect of many uncorrelated sources of uncertainty. As a consequence, based on the central limit theorem, most measurement noise can be treated as Gaussian noise. Recall here the central limit theorem: When a random variable Z is the sum of n independent and identically distributed random variables, then the distribution of Z is a Gaussian distribution with the appropriate mean and variance in the limit n → +∞, provided the variances are finite. A.2

RANDOM VECTORS

A d-dimensional random vector Z is a collection of d (real-valued) random variables (Z1 , · · · , Zd )t . The distribution of a random vector is characterized by the PDF pZ : Z P(Z ∈ [a1 , b1 ] × · · · × [ad , bd ]) = pZ (z) dz ∀ aj ≤ b j . [a1 ,b1 ]×···×[ad ,bd ]

The random vector Z = (Z1 , · · · , Zd )t is independent if its PDF can be written as a product of the one-dimensional PDFs of the coordinates of the vector: pZ (z) =

d Y

j=1

pZj (zj )

∀ z = (z1 , · · · , zd )t ∈ Rd ,

or, equivalently,       E φ1 (Z1 ) · · · φd (Zd ) = E φ1 (Z1 ) · · · E φd (Zd )

∀ φ1 , · · · , φd ∈ Cb (R, R).

Example: A d-dimensional normalized Gaussian random vector Z has the Gaussian PDF  |z|2  1 pZ (z) = exp − . 2 (2π)d/2

This PDF can be factorized into the product of one-dimensional Gaussian PDFs, which shows that Z is a vector of independent random normalized Gaussian vari-

203

INTRODUCTION TO RANDOM PROCESSES

ables (Z1 , · · · , Zd )t (normalized means with mean-zero and variance one).

As in the case of random variables, we may not always require or may not be able to give a complete statistical description of a random vector. In such cases, we work only with the first and second statistical moments. Let Z = (Zi )i=1,...,d be a random vector. The mean of Z is the vector µ = (µj )j=1,...,d : µj = E[Zj ]. The covariance matrix of Z is the matrix C = (Cjl )j,l=1,...,d :   Cjl = E (Zj − E[Zj ])(Zl − E[Zl ]) .

These statistical moments are enough to characterize the first two moments of any linear combination of the components of Z. Indeed, if β = (βj )j=1,...,d ∈ Rd , then P the random variable Zβ = β t Z = dj=1 βj Zj has mean: E[Zβ ] = β t µ =

d X

βj E[Zj ],

j=1

and variance: Var(Zβ ) = β t Cβ =

d X

Cjl βj βl .

j,l=1

As a byproduct of this result, we can see that the covariance matrix C is necessarily nonnegative. If the variables are independent, then the covariance matrix is diagonal. In particular: d d X  X Var Zj = Var(Zj ). j=1

j=1

The converse is false in general (i.e., the fact that the covariance matrix is diagonal does not ensure that the vector is independent).

A.3

GAUSSIAN RANDOM VECTORS

A Gaussian random vector Z = (Z1 , . . . , Zd ) with mean µ and covariance matrix R (write Z ∼ N (µ, R)) has the PDF p(z) =

 (z − µ)t R−1 (z − µ)  1 exp − , 2 (2π)d/2 (det R)1/2

(A.2)

provided that R is positive. As mentioned in the case of random variables, the Gaussian statistics is the one that is obtained from the maximum of entropy principle (given that the first two moments of the random vector are specified) and from the central limit theorem. This distribution is characterized by Z  t t λt Rλ  E[eiλ Z ] = eiλ z p(z) dz = exp iλt µ − , λ ∈ Rd , (A.3) 2 d R

204

APPENDIX A

which also shows that, if λ ∈ Rd , then the linear combination λt Z is a real-valued Gaussian random variable with mean λt µ and variance λt Rλ. The Gaussian property is robust: it is stable with respect to any linear transform, and it is also stable with respect to conditioning as we will see below. A.4

CONDITIONING

Let (Y, Z t )t be a 1 + n-dimensional random vector, Z is Rn -valued and Y is Rvalued. In order to introduce the conditional expectation, we start by noticing that the expectation of Y is the deterministic constant that is the best approximation of the random variable Y in the mean-square sense:   E[Y ] = argmin E (Y − a)2 . a∈R

Now assume that we observe Z. We would like to know the best approximation of Y given the observation Z. In other words, we consider the minimization problem:   min E (Y − ψ(Z))2 over all possible ψ : Rn → R . The solution to this minimization problem is the conditional expectation E[Y |Z], that is a random variable of the form ψ0 (Z) for some deterministic function ψ0 : Rn → R. The existence and uniqueness of the conditional expectation E[Y |Z] follows from its interpretation as the orthogonal projection (in the Hilbert space of square integrable random variables) of Y onto the (infinite-dimensional) subspace of square-integrable variables of the form ψ(Z). In the case in which Z and Y are independent, we simply have E[Y |Z] = E[Y ]. In the general case in which (Y, Z t )t has the PDF pY,Z , the conditional expectation is given by R ypY,Z (y, z) dy E[Y |Z] = ψ0 (Z) with ψ0 (z) = RR . R pY,Z (y, z) dy R If the denominator R pY,Z (y, z) dy is zero for some z, then the numerator is also zero and we set ψ0 (z) = 0. We note that it is not always easy to compute such an expression in practice, as it requires the knowledge of the joint PDF of the vector (Y, Z t )t . There is a well-known simplified approximation problem that is much more tractable. Assume that we observe Z. The linear regression problem consists in finding the affine combination of Z that best approximates Y . The problem to be solved is n h X 2 i min E Y − α0 − αj Zj over (αj )nj=0 ∈ Rn+1 . j=1

Note that this is a quadratic minimization problem that is much easier to solve than the computation of the conditional expectation, as it requires only the knowledge of the first two moments of the random vector (Y, Z t )t . But linear regression is clearly sub-optimal in the point of view of approximation theory: the best affine combina-

205

INTRODUCTION TO RANDOM PROCESSES

tion of Z has no reason to be the best approximation of Y given Z. However, in the Gaussian framework, it turns out that this is true. More exactly, if (Y, Z t )t is Gaussian, then the conditional expectation E[Y |Z] is the linear regression of Y on Z. In fact, we have the following result: If (Z1t , Z2t )t is a Gaussian random vector (with Z1 of size d1 and Z2 of size d2 ):      ! Z1 µ1 R11 R12 ∼N , Z2 µ2 R21 R22 with the means µ1 and µ2 of sizes d1 and d2 , the covariance matrices R11 of size d1 × d1 , R12 of size d1 × d2 , R21 = Rt12 of size d2 × d1 , and R22 of size d2 × d2 , then the distribution of Z1 conditionally to the observation Z2 = z2 is Gaussian with the mean µz2 = µ1 + R12 R−1 22 (z2 − µ2 )

that is the conditional expectation of Z1 given Z2 = z2 and with the covariance matrix Q = R11 − R12 R−1 22 R21

that is the Schur complement. This shows the stability of the Gaussian property with respect to conditioning, and this also shows a practical way to compute any conditional expectation E[φ(Z1 )|Z2 ]: Z E[φ(Z1 )|Z2 ] = ψ(Z2 ), ψ(z2 ) = φ(z1 )p(z1 |z2 ) dz1 , Rd 1

with p(z1 |z2 ) = A.5

  1 1 t −1 exp − (z1 − µz2 ) Q (z1 − µz2 ) . 2 (2π)d1 /2 det(Q)1/2

RANDOM PROCESSES

The perturbations in the parameters of an elastic medium, the wave fluctuations recorded by a receiver array, or the noise that appears in an image are described by functions of space (and/or time) with random values, which are known as random (or stochastic) processes. Recall that a random variable is a random number, in the sense that a realization of the random variable is a real number and that the statistical distribution of the random variable is characterized by its PDF. In the same way, a random process (Z(x))x∈Rd is a random function, in the sense that a realization of the random process is a function from Rd to R, and that the distribution of (Z(x))x∈Rd is characterized by the finite-dimensional distributions (Z(x1 ), . . . , Z(xn ))t , for any n, x1 , . . . , xn ∈ Rd (the fact that the finite-dimensional distributions completely characterize the distribution of the random process is not completely trivial and it follows from Kolmogorov’s extension theorem). As in the case of random variables, we may not always require a complete statistical description of a random process, or we may not be able to obtain it even if desired. In such cases, we work with the first and second statistical moments. The most important ones are (i) Mean: E[Z(x)];

206

APPENDIX A

  (ii) Variance: Var(Z(x)) = E |Z(x) − E[Z(x)]|2 ;   (iii) Covariance function: R(x, x′ ) = E (Z(x) − E[Z(x)])(Z(x′ ) − E[Z(x′ )]) .

We say that (Z(x))x∈Rd is a stationary random process if the statistics of the process is invariant to a shift in the origin: for any x0 ∈ Rd , (Z(x0 + x))x∈Rd

distribution

=

(Z(x))x∈Rd .

It is a statistical steady state. A necessary and sufficient condition is that, for any integer n, for any x0 , x1 , . . . , xn ∈ Rd , for any bounded continuous function φ ∈ Cb (Rn , R), we have E [φ(Z(x0 + x1 ), . . . , Z(x0 + xn ))] = E [φ(Z(x1 ), . . . , Z(xn ))] . A.6

GAUSSIAN PROCESSES

We say a random process (Z(x))x∈Rd is Gaussian if any linear combination Pthat n Zλ = i=1 λi Z(xi ) has Gaussian distribution (for any integer n, xi ∈ Rd , λi ∈ R). In this case Zλ has Gaussian distribution with PDF 1 (z − µλ )2  pZλ (z) = √ exp − , 2σλ2 2πσλ

z ∈ R,

where the mean and variance are given by µλ =

n X

λi E[Z(xi )] ,

σλ2 =

i=1

n X

i,j=1

λi λj E[Z(xi )Z(xj )] − µ2λ .

The first two moments of the Gaussian process (Z(x))x∈Rd µ(x1 ) = E[Z(x1 )]

and

R(x1 , x2 ) = E[(Z(x1 ) − E[Z(x1 )])(Z(x2 ) − E[Z(x2 )])]

characterize the finite-dimensional distributions of the process. Indeed, the finitedimensional distribution of (Z(x1 ), . . . , Z(xn ))t has PDF p(z1 , . . . , zn ) that can be characterized by its Fourier transform: Z Pn ei j=1 λj zj p(z1 , . . . , zn ) dz1 · · · dzn Rn Z  P σ2  i n λj Z(xj ) iZλ j=1 = E[e ] = E[e ] = eiz pZλ (z) dz = exp iµλ − λ 2 R n n  X  1 X = exp i λj µ(xj ) − λj λl R(xj , xl ) , 2 j=1 j,l=1

which shows with (A.3) that (Z(x1 ), . . . , Z(xn ))t has a Gaussian PDF with mean (µ(xj ))j=1,...,n and covariance matrix (R(xj , xl ))j,l=1,...,n . As a consequence the distribution of a Gaussian process is characterized by the mean function (µ(x1 ))x1 ∈Rd and the covariance function (R(x1 , x2 ))x1 ,x2 ∈Rd . It is rather easy to simulate a realization of a Gaussian process (Z(x))x∈Rd whose mean µ(x) and covariance function R(x, x′ ) are given. If (x1 , . . . , xn ) is a grid of

207

INTRODUCTION TO RANDOM PROCESSES

points, then the following algorithm is a random generator of (Z(x1 ), . . . , Z(xn ))t : - compute the mean vector mi = E[Z(xi )] and the covariance matrix Cij = E[Z(xi )Z(xj )] − E[Z(xi )]E[Z(xj )]. - generate a random vector Y = (Y1 , . . . , Yn )t of n independent Gaussian random variables with mean 0 and variance 1 (use randn in Matlab, or use the Box-Müller algorithm for instance). - compute Z = m + C1/2 Y . The vector Z has the distribution of (Z(x1 ), . . . , Z(xn ))t because it has Gaussian distribution (since it is the linear transform of the Gaussian vector Y ) and it has the desired mean vector and covariance matrix. Note that the computation of the square root of the matrix C is expensive from the computational point of view, and one usually chooses to use the Cholesky method to compute it. We will see in the next section a faster algorithm when the process is stationary. We now give two important results about Gaussian processes. Let (Z(x))x∈Rd be a Gaussian random process with mean-zero and covariance function E[Z(x)Z(x′ )] = R(x, x′ ). Result 1. The smoothness of the trajectories is related to the smoothness of the covariance function R [76]. If R ∈ C k+η,k+η (Rd × Rd ) for some η > 0, then the trajectories are of class C k (Rd ). For instance, for k = 1, the derivative (∂x1 Z(x))x∈Rd is a Gaussian process with mean-zero and covariance function E[∂x1 Z(x)∂x′1 Z(x′ )] = ∂x21 ,x′1 R(x, x′ ). Result 2. The local extrema have the spatial shape of the covariance function [4]. Given the existence of an extremum at x0 with amplitude z0 , then the Gaussian conditioning result stated in Subsection A.4 gives that (Z(x))x∈Rd is a Gaussian process with mean E[Z(x)|Z(x0 ) = z0 , ∇Z(x0 ) = 0] =

R(x, x0 ) z0 , R(x0 , x0 )

and covariance matrix Cov Z(x), Z(x′ )|Z(x0 ) = z0 , ∇Z(x0 ) = 0 = R(x, x′ ) −



R(x, x0 )R(x0 , x′ ) − ∇x0 R(x, x0 )t H(x0 , x0 )−1 ∇x0 R(x0 , x′ ) , R(x0 , x0 )

d where H(x, x′ ) = ∂xi ∂x′j R(x, x′ ) i,j=1 . This result allows us to visualize the typical behavior of a Gaussian random process. If the process has an extremum at positionpx0 with value z0 that is larger (in absolute value) than the standard deviation R(x0 , x0 ), then the shape of the extremum is locally given by the normalized covariance function x 7→ z0 R(x, x0 )/R(x0 , x0 ). Therefore, the shape and width of the covariance function gives the typical shape and width of the local extrema of a realization of a Gaussian process. In the context of an image, the characterization of the covariance function of the fluctuations of

208

APPENDIX A

the image gives the typical forms of the hot spots of the speckle noise of the image. A.7

STATIONARY GAUSSIAN RANDOM PROCESSES

We here focus our attention on stationary Gaussian processes. Since the distribution of a Gaussian process is characterized by its first two moments, a Gaussian process is stationary if and only if its mean µ(x) is constant and its covariance function R(x, x′ ) depends only on the lag x′ − x. Let us consider a stationary Gaussian process (Z(x))x∈Rd with mean-zero and covariance function c(x) = E[Z(x′ )Z(x′ + x)]. By Bochner’s theorem [94], the Fourier transform of c is necessarily nonnegative. The spectral representation of the real-valued stationary Gaussian process (Z(x))x∈Rd is Z p 1 −ikt x cˆ(k)ˆ nk dk, Z(x) = e (2π)d Rd

where n ˆ k is a complex white noise, i.e., n ˆk is complex-valued, Gaussian, n ˆ −k = n ˆk, E [ˆ nk ] = 0, E [ˆ nk n ˆ k′ ] = 0, and E n ˆkn ˆ k′ = (2π)d δ(k − k′ ) (the representation is ˆk = n formal; one should in fact use stochastic integrals dW ˆ k dk with respect to Brownian motions). A complex white noise is actually the Fourier transform of a R t real white noise: we have n ˆ k = eik x n(x)dx where n(x) is a real white noise, i.e., n(x) real-valued, Gaussian, E [n(x)] = 0, and E [n(x)n(x′ )] = δ(x − x′ ). It is quite easy to simulate a realization of a stationary Gaussian process (with mean-zero and covariance function c(x)) using its spectral representation and fast Fourier transforms (FFT). In dimension d = 1, if we fix a grid of points xi = (i − 1)∆x, i = 1, . . . , n, then one can simulate the vector (Z(x1 ), . . . , Z(xn ))t by the following algorithm: - evaluate the covariance vector c = (c(x1 ), . . . , c(xn ))t . - generate a random vector Y = (Y1 , . . . , Yn )t of n independent Gaussian random variables with mean 0 and variance 1. - filter with the square root of the discrete Fourier transform (DFT) of c: p  Z = IFT DFT(c) × DFT(Y ) .

Then the vector Z is a realization of (Z(x1 ), . . . , Z(xn ))t . In practice one uses FFT and inverse fast Fourier transform (IFFT) instead of DFT and inverse Fourier transform (IFT), and one obtains a periodized version of the random vector (Z(x1 ), . . . , Z(xn ))t , due to the FFT. This is good enough when the size n∆x is much larger than the correlation length of the process (i.e., the width of the covariance function c). It is possible to remove the end points of the grid over a band of thickness of the order of the correlation length to remove this periodization effect. In practice this algorithm is much more efficient than the Cholesky’s method. A.8

MULTI-VALUED GAUSSIAN PROCESSES

We finally introduce Gaussian multi-valued processes, which are natural extensions of the real-valued Gaussian processes discussed in the previous subsections. We say that a RpP -valued process (Z(x))x∈Rd is a Gaussian process if any finite linear combination i λi Zji (xi ) is a real-valued Gaussian random variable, for λi ∈ R, ji ∈ {1, . . . , p}, xi ∈ Rd . Therefore the coordinate functions (Z1 (x))x∈Rd , . . ., (Zp (x))x∈Rd are real-valued random processes, more exactly they are correlated

209

INTRODUCTION TO RANDOM PROCESSES

real-valued Gaussian processes. The distribution of the Rp -valued Gaussian process (Z(x))x∈Rd is characterized by its vector-valued mean function µ(x) = E[Z(x)] and its matrix-valued covariance function R(x, x′ ) = (Rij (x, x′ ))i,j=1,...,p , with Rij (x, x′ ) = E[(Zi (x) − µi (x))(Zj (x′ ) − µj (x′ ))]. In particular, the coordinate functions (Zi (x))x∈Rd and (Zj (x))x∈Rd are independent if and only if Rij (x, x′ ) = 0 for all x, x′ ∈ Rd . We say that a C-valued P process (Z(x)) Px∈Rd is a Gaussian process if any finite linear combination i λi ℜe(Z(xi )) + j λ′j ℑm(Z(x′j )) is a real-valued Gaussian random variable. A C-valued Gaussian process (Z(x))x∈Rd can be seen as t ˜ ˜ a R2 -valued Gaussian process (Z(x)) x∈Rd with Z = (ℜe(Z), ℑm(Z)) . Its dis˜ tribution can be characterized by the vector-valued mean function µ(x) and the ˜ associated to (Z(x)) ˜ matrix-valued covariance function R x∈Rd . It can as well be characterized by the complex-valued mean function µ(x) = E[Z(x)], the covariance function R(x, x′ ) = E[(Z(x) − µ(x))(Z(x′ ) − µ(x′ ))], and the relation function Q(x, x′ ) = E[(Z(x) − µ(x))(Z(x′ ) − µ(x′ ))]. The PDF of the random vector (Z(x1 ), . . . , Z(xn )) (with respect to the Lebesgue measure over Cn ) is p(z)

=

1 πn

det(Γ)1/2

t

det(Γ − C Γ−1 C)1/2   t  −1   Γ C 1 z−m z−m × exp − , t z−m 2 z−m C Γ

where Γij = R(xi , xj ), Cij = Q(xi , xj ), mi = µ(xi ). A circularly symmetric complex Gaussian process is a C-valued Gaussian process such that µ(x) = 0 and Q(x, x′ ) = 0 for any x, x′ ∈ Rd . Its distribution is characterized by its covariance function R(x, x′ ) = E[Z(x)Z(x′ )]. If, additionally, the covariance function R is real-valued, then the real and imaginary parts (ℜe(Z(x)))x∈Rd and (ℑm(Z(x)))x∈Rd are independent and identically distributed Gaussian processes with mean-zero and covariance function R(x, x′ )/2.

Appendix B

Asymptotics of the Attenuation Operator

B.1

STATIONARY PHASE THEOREM

We first recall the stationary phase theorem; see [110].

Theorem B.1 (Stationary phase). Let K ⊂ [0, ∞) be a compact set, X an open neighborhood of K and k a positive integer. If ψ ∈ C02k (K), f ∈ C 3k+1 (X) and ℑm(f ) ≥ 0 in X, ℑm(f (t0 )) = 0, f ′ (t0 ) = 0, f ′′ (t0 ) 6= 0, f ′ 6= 0 in K \ {t0 } then for a > 0 Z X  j ψ(t)eif (t)/a dt − eif (t0 )/a a−1 f ′′ (t0 )/2πi −1/2 a L [ψ] j K j 0 with

(B.1)

ω κa (ω) = √ . 1 − iωa

At low frequencies ω ≪ a1 , κa (ω) ≃ ω(1 +

iaω ), 2

and therefore, the operator Aa can be approximated as follows: Z ∞ Z  a  1 2 1 φ(s) 1 + i ω e− 2 aω s eiω(s−t) dω ds Aa [φ](s) ≃ 2π 0 2 R for φ compactly supported on [0, ∞). Since Z 2 2 1 1 (s−t) 1 1 √ e− 2 aω s eiω(s−t) dω = √ e− 2 as , as 2π R and

1 √ 2π

it follows that

Z

R

a −iaω − 1 aω2 s iω(s−t) e 2 e dω = ∂t 2 2







a  1 Aa [φ] ≃ 1 − ∂t  √ 2 2π

Z

0

+∞



2 1 1 (s−t) √ e− 2 as as

Z

+∞

0

,

 1 (s − t)2 1 −  φ(t) √ e 2 as ds . as

We then investigate the asymptotic behavior of Aea defined by 1 Aea [φ] = √ 2π



1 (s − t)2 1 − φ(s) √ e 2 as ds. as

(B.2)

(B.3)

Since the phase in (B.3) is quadratic and a is small, we can apply the stationary phase theorem to the operator Aea . Note that the integral Z ∞ J(t) = ψ(s)eif (s)/a ds, 0

2

(s−t) √ , f (s) = i with ψ(s) = φ(s) 2s , satisfies J(t) = s vanishes at s = t and satisfies   1 t2 t2 f ′ (s) = i 1 − 2 , f ′′ (s) = i 3 , 2 s s

√ a2π Aea [φ](t). The phase f 1 f ′′ (t) = i . t

212

APPENDIX B

The function gt (s) is given by gt (s) = i

1 (s − t)2 1 (t − s)3 1 (s − t)2 −i =i . 2 s 2 t 2 ts

We can deduce that    (3) ′ (gt ψ)(4) (t) = gx(4) = i 21 0 (t)ψ(t) + 4gx0 (t)ψ (t) (g 2 ψ)(6) (t) = (g 2 )(6) (t)ψ(t) = − 1 6! ψ(t), t x0 4 t4

24 t3 ψ(t)





24 ′ t2 ψ (t)

,

and then, with the same notation as in Theorem B.1,    ′′    1 1 ′′ 1 √ ′′ 1 φ′ (t) 3 φ φ (1)  −1 ′′  √ √ L1 [ψ] = − = (f (t)) ψ (t) = t tφ (t) − + 3/2 ,   i 2 2 2 4t  t t  !     ′     φ(t) 1 ′′ −2 (4) φ(t) 1 (2) (3)  ′ L1 [ψ] = f (t) 3 √ − 3 3/2 gt (t)ψ(t) + 4gt (t)ψ (t) =  8i 2 t t   ′   1 φ (t) 9 φ(t)   , = 3 √ −   2 2 t3/2  t       −1 ′′ −3 2 (6) 1 15 φ(t)  L(3) [ψ] = f (t) (g . ) (t)ψ(t) = t 1 23 2!3!i 2 4 t3/2

The operator L1 is given by L1 [ψ]

(1)

(2)

(3)

= L1 [ψ] + L1 [ψ] + L1 [ψ]     φ′ (t) 3 9 15 φ(t) 1 √ ′′ 1 ′′ tφ (t) + (3 − 1) √ + = − + = √ (tφ(t)) , 3/2 2 4 2 4 t t 2 t

and so,   X √ 1 ′′ 2 J(t) − 2πat φ(t) √ √ + a (tφ(t)) ≤ Ca sup |φ(α) |. t 2 t α≤4

Finally, we arrive at Z ∞   X 1 1 − (s−t)2 a ′′ √ 2as φ(s) √ e ds − φ(t) + (tφ(t)) ≤ Ca3/2 sup |φ(α) |, 2π as 2 0 α≤4

which together with (B.2) proves (8.44). Estimates (8.45) and (8.46) can be easily deduced from (8.44).

Appendix C The Generalized Argument Principle and Rouché’s Theorem In this appendix we review the results of Gohberg and Sigal in [100] concerning the generalization to operator-valued functions of classical results in complex analysis. C.1

NOTATION AND DEFINITIONS

Let B and B ′ be two Banach spaces. Let U(ω0 ) be the set of all operator-valued functions with values in L(B, B ′) which are holomorphic in some neighborhood of ω0 , except possibly at ω0 . Suppose that ω0 is a characteristic value of the function A(ω) and φ(ω) is an associated root function. Then there exists a number m(φ) ≥ 1 and a vector-valued function ψ(ω) with values in B ′ , holomorphic at ω0 , such that A(ω)[φ(ω)] = (ω − ω0 )m(φ) ψ(ω),

ψ(ω0 ) 6= 0.

The number m(φ) is called the multiplicity of the root function φ(ω). For φ0 ∈ KerA(ω0 ), we define the rank of φ0 , denoted by rank(φ0 ), to be the maximum of the multiplicities of all root functions φ(ω) with φ(ω0 ) = φ0 . Suppose that n = dim KerA(ω0 ) < +∞ and that the ranks of all vectors in KerA(ω0 ) are finite. A system of eigenvectors φj0 , j = 1, . . . , n, is called a canonical system of eigenvectors of A(ω) associated to ω0 if their ranks possess the following property: for j = 1, . . . , n, rank(φj0 ) is the maximum of the ranks of all eigenvectors in the direct complement in KerA(ω0 ) of the linear span of the vectors φ10 , . . . , φj−1 0 . We call N (A(ω0 )) :=

n X

rank(φj0 )

j=1

the null multiplicity of the characteristic value ω0 of A(ω). If ω0 is not a characteristic value of A(ω), we put N (A(ω0 )) = 0. Suppose that A−1 (ω) exists and is holomorphic in some neighborhood of ω0 , except possibly at ω0 . Then the number M (A(ω0 )) = N (A(ω0 )) − N (A−1 (ω0 )) is called the multiplicity of ω0 . If ω0 is a characteristic value and not a pole of A(ω), then M (A(ω0 )) = N (A(ω0 )) while M (A(ω0 )) = −N (A−1 (ω0 )) if ω0 is a pole and not a characteristic value of A(ω). Suppose that ω0 is a pole of the operator-valued function A(ω) and the Laurent series expansion of A(ω) at ω0 is given by X A(ω) = (ω − ω0 )j Aj . (C.1) j≥−s

214

APPENDIX C

If in (C.1) the operators A−j , j = 1, . . . , s, have finite-dimensional ranges, then A(ω) is called finitely meromorphic at ω0 . C.2

GENERALIZED ARGUMENT PRINCIPLE

Let V be a simply connected bounded domain with rectifiable boundary ∂V . An operator-valued function A(ω) which is finitely meromorphic and of Fredholm type in V and continuous on ∂V is called normal with respect to ∂V if the operator A(ω) is invertible in V , except for a finite number of points of V which are normal points of A(ω). Lemma C.1. An operator-valued function A(ω) is normal with respect to ∂V if it is finitely meromorphic and of Fredholm type in V , continuous on ∂V , and invertible for all ω ∈ ∂V . Now, if A(ω) is normal with respect to the contour ∂V and ωi , i = 1, . . . , σ, are all its characteristic values and poles lying in V , we put M(A(ω); ∂V ) =

σ X

M (A(ωi )).

(C.2)

i=1

The full multiplicity M(A(ω); ∂V ) of A(ω) in V is the number of characteristic values of A(ω) in V , counted with their multiplicities, minus the number of poles of A(ω) in V , counted with their multiplicities. Theorem C.2 (Generalized argument principle). Suppose that the operatorvalued function A(ω) is normal with respect to ∂V . Then we have Z d 1 tr A−1 (ω) A(ω)dω. (C.3) M(A(ω); ∂V ) = 2iπ dω ∂V By tr we mean the trace of operator which is the sum of all its nonzero characteristic values; see, for instance, [32] for an exact statement. The following general form of the argument principle will be useful. It can be proven by the same argument as the one in Theorem C.2. Theorem C.3. Suppose that A(ω) is an operator-valued function which is normal with respect to ∂V . Let f (ω) be a scalar function which is analytic in V and continuous in V . Then Z σ X 1 d tr f (ω)A−1 (ω) A(ω)dω = M (A(ωj ))f (ωj ), 2iπ dω ∂V j=1 where ωj , j = 1, . . . , σ, are all the points in V which are either poles or characteristic values of A(ω). C.3

GENERALIZATION OF ROUCHÉ’S THEOREM

A generalization of Rouché’s theorem to operator-valued functions is stated below. Theorem C.4 (Generalized Rouché’s theorem). Let A(ω) be an operator-valued function which is normal with respect to ∂V . If an operator-valued function S(ω)

THE GENERALIZED ARGUMENT PRINCIPLE AND ROUCHÉ’S THEOREM

215

which is finitely meromorphic in V and continuous on ∂V satisfies the condition kA−1 (ω)S(ω)kL(B,B) < 1,

ω ∈ ∂V,

then A(ω) + S(ω) is also normal with respect to ∂V and M(A(ω); ∂V ) = M(A(ω) + S(ω); ∂V ).

References [1] M.S. Agranovich, B.A. Amosov, and M. Levitin, Spectral problems for the Lamé system with spectral parameter in boundary conditions on smooth or nonsmooth boundary, Russ. J. Math. Phys., 6 (1999), 247–281. [2] J.F. Ahner and G.C. Hsiao, On the two-dimensional exterior boundary-value problems of elasticity, SIAM J. Appl. Math., 31 (1976), 677–685. [3] K. Aki and P. G. Richards, Quantitative Seismology, Vol. 1, W. H. Freeman & Co., San Francisco, 1980. [4] R.J. Adler, The Geometry of Random Fields, Wiley, New York, 1981. [5] C. Alves and H. Ammari, Boundary integral formulae for the reconstruction of imperfections of small diameter in an elastic medium, SIAM J. Appl. Math., 62 (2001), 94–106. [6] G. Allaire, F. Jouve, and N. Van Goethem, Damage and fracture evolution in brittle materials by shape optimization methods, J. Comput. Phys., 230 (2011), 5010–5044. [7] G. Allaire, F. Jouve, and A.M. Toader, Structural optimization using sensitivity analysis and a level-set method, J. Comput. Phys., 194 (2004), 363–393. [8] H. Ammari, An Introduction to Mathematics of Emerging Biomedical Imaging, Mathematics & Applications, Vol. 62, Springer-Verlag, Berlin, 2008. [9] H. Ammari, M. Asch, V. Jugnon, L. Guadarrama Bustos, and H. Kang, Transient imaging with limited-view data, SIAM J. Imag. Sci., 4 (2011), 1097–1121. [10] H. Ammari, E. Beretta, E. Francini, H. Kang, and M. Lim, Reconstruction of small interface changes of an inclusion from modal measurements II: The elastic case, J. Math. Pures Appl., 94 (2010), 322–339. [11] H. Ammari, E. Bretin, J. Garnier, W. Jing, H. Kang, and A. Wahab, Localization, stability, and resolution of topological derivative based imaging functionals in elasticity, SIAM. J. Imag. Sci., 6 (2013), 2174–2212. [12] H. Ammari, E. Bretin, V. Jugnon, and A. Wahab, Photoacoustic imaging for attenuating acoustic media, in Mathematical Modeling in Biomedical Imaging II, Lecture Notes in Mathematics, Vol. 2035, 57–84, Springer-Verlag, Berlin, 2011. [13] H. Ammari, E. Bretin, J. Garnier, and A. Wahab, Noise source localization in an attenuating medium, SIAM J. Appl. Math., 72 (2012), 317–336.

218

REFERENCES

[14] H. Ammari, E. Bretin, J. Garnier, and A. Wahab, Time reversal in attenuating acoustic media, Mathematical and Statistical Methods for Imaging, Contemporary Mathematics, Vol. 548, 151–163, Amer. Math. Soc., Providence, RI, 2011. [15] H. Ammari, E. Bretin, J. Garnier, and A. Wahab, Time reversal algorithms in viscoelastic media, European J. Appl. Math., 24 (2013), 565–600. [16] H. Ammari, P. Calmon, and E. Iakovleva, Direct elastic imaging of a small inclusion, SIAM J. Imag. Sci., 1 (2008), 169–187. [17] H. Ammari, Y. Capdeboscq, H. Kang, H. Lee, G.W. Milton, and H. Zribi, Progress on the strong Eshelby’s conjecture and extremal structures for the elastic moment tensor, J. Math. Pures Appl., 94 (2010), 93–106. [18] H. Ammari, P. Garapon, and F. Jouve, Separation of scales in elasticity imaging: a numerical study, J. Comp. Math., 28 (2010), 354–370. [19] H. Ammari, P. Garapon, F. Jouve, H. Kang, M. Lim, and S. Yu, A new optimal control approach for the reconstruction of extended inclusions, SIAM J. Cont. Optim., 51 (2013), 1372–1394. [20] H. Ammari, P. Garapon, H. Kang, and H. Lee, A method of biological tissues elasticity reconstruction using magnetic resonance elastography measurements, Quart. Appl. Math., 66 (2008), 139–175. [21] H. Ammari, P. Garapon, H. Kang, and H. Lee, Effective viscosity properties of dilute suspensions of arbitrarily shaped particles, Asympt. Anal., 80 (2012), 189–211. [22] H. Ammari, J. Garnier, V. Jugnon, and H. Kang, Stability and resolution analysis for a topological derivative based imaging functional, SIAM J. Control Optim., 50 (2012), 48–76. [23] H. Ammari, J. Garnier, W. Jing, H. Kang, M. Lim, K. Sølna, and H. Wang, Mathematical and Statistical Methods for Multistatic Imaging, Lecture Notes in Mathematics, Vol. 2098, Springer-Verlag, Berlin, 2013. [24] H. Ammari, J. Garnier, H. Kang, M. Lim, and K. Sølna, Multistatic imaging of extended targets, SIAM J. Imag. Sci., 5 (2012), 564–600. [25] H. Ammari, L. Guadarrama-Bustos, H. Kang, and H. Lee, Transient elasticity imaging and time reversal, Proc. Royal Soc. Edinburgh: Sect. A Math., 141 (2011), 1121–1140. [26] H. Ammari and H. Kang, Reconstruction of Small Inhomogeneities from Boundary Measurements, Lecture Notes in Mathematics, Vol. 1846, SpringerVerlag, Berlin, 2004. [27] H. Ammari and H. Kang, Reconstruction of elastic inclusions of small volume via dynamic measurements, Appl. Math. Optim., 54 (2006), 223–235. [28] H. Ammari and H. Kang, Polarization and Moment Tensors with Applications to Inverse Problems and Effective Medium Theory, Applied Mathematical Sciences, Vol. 162, Springer-Verlag, New York, 2007.

REFERENCES

219

[29] H. Ammari, H. Kang, K. Kim, and H. Lee, Strong convergence of the solutions of the linear elasticity and uniformity of asymptotic expansions in the presence of small inclusions, J. Diff. Equat., 254 (2013), 4446–4464. [30] H. Ammari, H. Kang, and H. Lee, Asymptotic expansions for eigenvalues of the Lamé system in the presence of small inclusions, Comm. Part. Diff. Equat., 32 (2007), 1715–1736. [31] H. Ammari, H. Kang, and H. Lee, Asymptotic analysis of high-contrast phononic crystals and a criterion for the band-gap opening, Arch. Rational Mech. Anal., 193 (2009), 679–714. [32] H. Ammari, H. Kang, and H. Lee, Layer Potential Techniques in Spectral Analysis, Mathematical Surveys and Monographs series, Vol. 153, Amer. Math. Soc., Rhode Island, 2009. [33] H. Ammari, H. Kang, and H. Lee, A boundary integral method for computing elastic moment tensors for ellipses and ellipsoids, J. Comput. Math., 25 (2007), 2–12. [34] H. Ammari, H. Kang, H. Lee, and J. Lim, Boundary perturbations due to the presence of small linear cracks in an elastic body, J. Elasticity, 113 (2013), 75–91. [35] H. Ammari, H. Kang, and M. Lim, Effective parameters of elastic composites, Indiana Univ. Math. J., 55 (2006), 903–922. [36] H. Ammari, H. Kang, G. Nakamura, and K. Tanuma, Complete asymptotic expansions of solutions of the system of elastostatics in the presence of an inclusion of small diameter and detection of an inclusion, J. Elasticity, 67 (2002), 97–129. [37] H. Ammari and F. Triki, Splitting of resonant and scattering frequencies under shape deformation, J. Diff. Equat., 202 (2004), 231–255. [38] C. Amrouche, C. Bernardi, M. Dauge, and V. Girault, Vector potentials in three-dimensional non-smooth domains, Math. Meth. Appl. Sci., 21 (1998), 823–864. [39] J.G. Bao, H.G. Li, and Y.Y. Li, Gradient estimates for solutions of the Lamé system with infinity coefficients, arXiv: 1311.1278v1. [40] E.S. Bao, Y.Y. Li, and B. Yin, Gradient estimates for the perfect conductivity problem, Arch. Rat. Mech. Anal., 193 (2009), 195–226. [41] G. Bao and X. Xu, An inverse random source problem in quantifying the elastic modulus of nanomaterials, Inverse Problems, 29 (2013), 015006. [42] H. Ben Ameur, M. Burger, and B. Hackl, Level set methods for geometric inverse problems in linear elasticity, Inverse Problems, 20 (2004), 673–696. [43] Y. Benveniste, A new approach to the application of Mori-Tanka’s theory in composite materials, Mech. Mater. 6 (1987), 147–157.

220

REFERENCES

[44] J. Bercoff, M. Tanter, M. Muller, and M. Fink, The role of viscosity in the impulse diffraction field of elastic waves induced by the acoustic radiation force, IEEE Trans. Ultrasonics, Ferro., Freq. Control, 51 (2004), 1523–1536. [45] E. Beretta, E. Bonnetier, E. Francini, and A.L. Mazzucato, Small volume asymptotics for anisotropic elastic inclusions, Inverse Probl. Imaging, 6 (2012), 1–23. [46] E. Beretta and E. Francini, An asymptotic formula for the displacement field in the presence of thin elastic inhomogeneities, SIAM J. Math. Anal., 38 (2006), 1249–1261. [47] E. Beretta, E. Francini, E. Kim, and J.-Y. Lee, Algorithm for the determination of a linear crack in an elastic body from boundary measurements, Inverse Problems, 26 (2010), 085015. [48] E. Beretta, E. Francini, and S. Vessella, Determination of a linear crack in an elastic body from boundary measurements–Lipschitz stability, SIAM J. Math. Anal., 40 (2008), 984–1002. [49] J. Bergh and J. Löfström, Interpolation Spaces. An Introduction, Grundlehren der Mathematischen Wissenschaften, Vol. 223, Springer-Verlag, Berlin-New York, 1976. [50] L. Borcea, G. Papanicolaou, and C. Tsogka, Theory and applications of time reversal and interferometric imaging, Inverse Problems, 19 (2003), 134–164. [51] L. Borcea, G. Papanicolaou, and C. Tsogka, Interferometric array imaging in clutter, Inverse Problems, 21 (2005), 1419–1460. [52] L. Borcea, G. Papanicolaou, C. Tsogka, and J. G. Berrymann, Imaging and time reversal in random media, Inverse Problems, 18 (2002), 1247–1279. [53] W. Borchers and H. Sohr, On the equations rot v = g and div u = f with zero boundary conditions, Hokkaido Math. J., 19 (1990), 67–87. [54] E. Bretin, L. Guadarrama Bustos, and A. Wahab, On the Green function in visco-elastic media obeying a frequency power-law, Math. Meth. Appl. Sci., 34 (2011), 819–830. [55] E. Bretin and A. Wahab, Some anisotropic viscoelastic Green functions, Contemp. Math., 548 (2011), 129–149. [56] R. Brossier, S. Operto, and J. Virieux, Seismic imaging of complex onshore structures by 2D elastic frequency-domain full-waveform inversion, Geophys., 74 (2009), 63–76. [57] M. Burger, A level set method for inverse problems, Inverse Problems, 17 (2001), 1327–1355. [58] M. Burger, B. Hackl, and W. Ring, Incorporating topological derivatives into level set methods, J. Comput. Phys., 194 (2004), 344–362. [59] M. Burger and S.J. Osher, A survey on level set methods for inverse problems and optimal design, European J. Appl. Math., 16 (2005), 263–301.

REFERENCES

221

[60] S. Campanato, Sistemi ellittici in forma divergenza. Regolaritá all’interno, Quaderni della Scuola Normale Superiore di Pisa, 1980. [61] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T.A. Zang, Spectral Methods in Fluid Dynamics, Springer-Verlag, New York-Heidelberg-Berlin, 1987. [62] Y. Capdeboscq, and H. Kang, Improved Hashin-Shtrikman bounds for elastic moment tensors and an application, Appl. Math. Opt. 57 (2008), 263–288. [63] S. Catheline, N. Benech, J. Brum, and C. Negreira, Time-reversal of elastic waves in soft solids, Phys. Rev. Lett., 100 (2008), 064301. [64] S. Catheline, J.L. Gennisson, G. Delon, R. Sinkus, M. Fink, S. Abdouelkaram, and J. Culioli, Measurement of visco-elastic properties of solid using transient elastography: An inverse problem approach, J. Acous. Soc. Amer., 116 (2004), 3734–3741. [65] J. Céa, S. Garreau, P. Guillaume and M. Masmoudi, The shape and topological optimization connection, Comput. Meth. Appl. Mech. Engrg., 188 (2001), 703– 726. [66] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vision, 20 (2004), 89–97. [67] A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vision, 40 (2011), 120–145. [68] T.F. Chan and X.C. Tai, Identification of discontinuous coefficients in elliptic problems using total variation regularization, SIAM J. Sci. Comput., 25 (2003), 881–904. [69] T.F. Chan and X.C. Tai, Level set and total variation regularization for elliptic inverse problems with discontinuous coefficients, J. Comput. Phys., 193 (2004), 40–66. [70] Z. Chen and X. Zhang, An anisotropic perfectly matched layer method for three dimensional elastic scattering problems, preprint, 2011. [71] Z. Chen and J. Zou, An augmented Lagrangian method for identifying discontinuous parameters in elliptic systems, SIAM J. Control Optim., 37 (1999), 892–910. [72] A.V. Cherkaev, Y. Grabovsky, A.B. Movchan, and S.K. Serkov, The cavity of the optimal shape under the shear stresses, Int. J. Solids and Structures 35 (1998), 4391–4410. [73] P.G. Ciarlet, Mathematical Elasticity, Vol. I, Norh-Holland, Amsterdam (1988). [74] Y. Colin de Verdière, Elastic wave equation, Actes du Séminaire de Théorie Spectrale et Géométrie, Vol. 25, Année 2006-V2007, 55–69, Sémin. Théor. Spectr. Géom., 25, Univ. Grenoble I, Saint-Martin-d’Hères, 2008.

222

REFERENCES

[75] D. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory, Applied Mathematical Sciences, Vol. 93, 2nd Ed., Springer-Verlag, New York, 1998. [76] H. Cramér and M.R. Leadbetter, Stationary and Related Stochastic Processes - Sample Function Properties and Their Applications, Wiley, New York, 1967. [77] B.E. Dahlberg, C.E. Kenig, and G. Verchota, Boundary value problem for the systems of elastostatics in Lipschitz domains, Duke Math. J., 57 (1988), 795–818. [78] R. Dautray and J.L. Lions, Mathematical Analysis and Numerical Methods for Science and Technology, Vol. 3, Spectral Theory and Applications, SpringerVerlag, Berlin, 1990. [79] X. Deng, V. Joseph, W. Mai, Z.L. Wang, and F. Wu, Statistical approach to quantifying the elastic deformation of nanomaterials, Proc. Nat. Acad. Sci., 106 (2009), 11845–11850. [80] N. Dominguez and V. Gibiat, Non-destructive imaging using the time domain topological energy method, Ultrasonics, 50 (2010), 172–179. [81] N. Dominguez, V. Gibiat, and Y. Esquerrea, Time domain topological gradient and time reversal analogy: An inverse method for ultrasonic target detection, Wave Motion, 42 (2005), 31–52. [82] J.D. Eshelby, The determination of the elastic field of an ellipsoidal inclusion, and related problems, Proc. Roy. Soc. London. Ser. A., 241 (1957), 376–396. [83] J.D. Eshelby, Elastic inclusions and inhomogeneities, In Progress in Solid Mechanics, ed. by I.N. Sneddon and R. Hill, Vol. II, 87–140, North-Holland, Amsterdam, 1961. [84] L. Escauriaza and J.K. Seo, Regularity properties of solutions to transmission problems, Trans. Amer. Math. Soc., 115 (1992), 1069–1076. [85] A. Eschenauer, V.V. Kobelev and A. Schumacher, Bubble method for topology and shape optimization of structures, Struct. Optim., 8 (1994), 42–51. [86] C. Fabre and G. Lebeau, Régularité et unicité pour le problème de Stokes, Comm. Partial Diff. Equat., 27 (2002), 437–475. [87] C. Fabre and G. Lebeau, Prolongement unique des solutions de l’´ quation de Stokes, Comm. Partial Diff. Equat., 21 (1996), 573–596. [88] R. Farwig, H. Kozono, and H. Sohr, On the Helmholtz decomposition in general unbounded domains, Arch. Math., 88 (2007), 239–248. [89] G. Fichera, Existence theorems in elasticity, in Handbuch der Physik, Vol. VI, Springer-Verlag, Berlin, Heidelberg, New York, 1972, 347–389. [90] M. Fink, Time reversed acoustics, Physics Today, 50 (1997), 34. [91] M. Fink and C. Prada, Acoustic time-reversal mirrors, Inverse Problems, 17 (2001), R1–R38.

[92] J.-P. Fouque, J. Garnier, G. Papanicolaou, and K. Sølna, Wave Propagation and Time Reversal in Randomly Layered Media, Springer, New York, 2007.
[93] G.P. Galdi, An Introduction to the Mathematical Theory of the Navier-Stokes Equations, Vol. I, Linearized Steady Problems, Springer-Verlag, New York, 1994.
[94] I.I. Gihman and A.V. Skorohod, The Theory of Stochastic Processes, Springer-Verlag, Berlin, 1974.
[95] D. Gilbarg and N. Trudinger, Elliptic Partial Differential Equations of Second Order, 2nd Ed., Springer-Verlag, 1983.
[96] A.D. Dimarogonas, Vibration of cracked structures: A state of the art review, Engin. Fract. Mech., 55 (1996), 831–857.
[97] D. Gintides, M. Sini, and N.T. Thanh, Detection of point-like scatterers using one type of elastic scattered waves, J. Comput. Appl. Math., 236 (2012), 2137–2145.
[98] D. Gintides and M. Sini, Identification of obstacles using only the pressure parts (or only the shear parts) of the elastic waves, Inverse Probl. Imaging, 6 (2012), 39–55.
[99] N. Van Goethem and A.A. Novotny, Crack nucleation sensitivity analysis, Math. Meth. Appl. Sci., 33 (2010), 1978–1994.
[100] I. Ts. Gohberg and E.I. Sigal, Operator extension of the logarithmic residue theorem and Rouché's theorem, Mat. Sb., 84 (1971), 607–642.
[101] F. de Gournay, G. Allaire, and F. Jouve, Shape and topology optimization of the robust compliance via the level set method, ESAIM Control Optim. Calc. Var., 14 (2008), 43–70.
[102] J.F. Greenleaf, M. Fatemi, and M. Insana, Selected methods for imaging elastic properties of biological tissues, Annu. Rev. Biomed. Eng., 5 (2003), 57–78.
[103] B.B. Guzina and M. Bonnet, Topological derivative for the inverse scattering of elastic waves, Quart. J. Mech. Appl. Math., 57 (2004), 161–179.
[104] B.B. Guzina and I. Chikichev, From imaging to material identification: A generalized concept of topological sensitivity, J. Mech. Phys. Solids, 55 (2007), 245–279.
[105] F. Hastings, J.B. Schneider, and S.L. Broschat, Application of the perfectly matched layer (PML) absorbing boundary condition to elastic wave propagation, J. Acoust. Soc. Am., 100 (1996), 3061–3069.
[106] L. He, C.Y. Kao, and S. Osher, Incorporating topological derivatives into shape derivatives based level set methods, J. Comput. Phys., 194 (2007), 891–909.
[107] M. Hintermüller and A. Laurain, Electrical impedance tomography: From topology to shape, Control Cybernet., 37 (2008), 913–933.

[108] M. Hintermüller, A. Laurain, and A.A. Novotny, Second-order topological expansion for electrical impedance tomography, Adv. Comput. Math., 36 (2012), 235–265.
[109] M.V. de Hoop, L. Qiu, and O. Scherzer, Local analysis of inverse problems: Hölder stability and iterative reconstruction, Inverse Problems, 28 (2012), 045001.
[110] L. Hörmander, The Analysis of Linear Partial Differential Operators. I. Distribution Theory and Fourier Analysis, Classics in Mathematics, Springer-Verlag, Berlin, 2003.
[111] N.I. Ioakimidis, Application of finite-part integrals to the singular integral equations of crack problems in plane and three-dimensional elasticity, Acta Mech., 45 (1982), 31–47.
[112] K. Kalimeris and O. Scherzer, Photoacoustic imaging in attenuating acoustic media based on strongly causal models, Math. Meth. Appl. Sci., 36 (2013), 2254–2264.
[113] H. Kang, Conjectures of Pólya-Szegö and Eshelby, and the Newtonian potential problem: A review, Mechanics of Materials, Milton special issue, 41 (2009), 405–410.
[114] H. Kang, E. Kim, and J. Lee, Identification of elastic inclusions and elastic moment tensors by boundary measurements, Inverse Problems, 19 (2003), 703–724.
[115] H. Kang, E. Kim, and J.-Y. Lee, Numerical reconstruction of a cluster of small elastic inclusions, Inverse Problems, 23 (6) (2007), 2311–2324.
[116] H. Kang and G.W. Milton, Solutions to the Pólya-Szegö conjecture and the weak Eshelby conjecture, Arch. Rational Mech. Anal., 188 (2008), 93–116.
[117] V.A. Korneev and L.R. Johnson, Scattering of P and S waves by a spherically symmetric inclusion, Pageoph, 147 (1996), 675–718.
[118] R. Kowar and O. Scherzer, Photoacoustic imaging taking into account attenuation, in Mathematical Modeling in Biomedical Imaging II, Lecture Notes in Mathematics, Vol. 2035, 85–130, Springer-Verlag, Berlin, 2011.
[119] R. Kowar, O. Scherzer, and X. Bonnefond, Causality analysis of frequency dependent wave attenuation, Math. Meth. Appl. Sci., 34 (2011), 108–124.
[120] V.D. Kupradze, T.G. Gegelia, M.O. Basheleishvili, and T.V. Burchuladze, Three-Dimensional Problems of the Mathematical Theory of Elasticity and Thermoelasticity, North-Holland Publishing Co., Amsterdam-New York, 1979.
[121] O.I. Kwon, C. Park, H.S. Nam, E.J. Woo, J.K. Seo, K.J. Glaser, A. Manduca, and R.L. Ehman, Shear modulus decomposition algorithm in magnetic resonance elastography, IEEE Trans. Med. Imaging, 28 (2009), 1526–1533.
[122] L.D. Landau and E.M. Lifshitz, Theory of Elasticity, Pergamon, London, 1959.

[123] K.J. Langenberg, K. Mayer, and R. Marklein, Nondestructive testing of concrete with electromagnetic and elastic waves: Modeling and imaging, Cement Concrete Comp., 28 (2006), 370–383.
[124] C. Larmat, J.P. Montagner, M. Fink, Y. Capdeville, A. Tourin, and E. Clévédé, Time-reversal imaging of seismic sources and application to the great Sumatra earthquake, Geophys. Res. Lett., 33 (2006), L19312.
[125] N.N. Lebedev, Special Functions and Their Applications, Prentice-Hall, Englewood Cliffs, 1965.
[126] T.H. Lee, C.Y. Ahn, O.I. Kwon, and J.K. Seo, A hybrid one-step inversion method for shear modulus imaging using time-harmonic vibrations, Inverse Problems, 26 (2010), 085014.
[127] Y.S. Lee and M.J. Chung, A study on crack detection using eigenfrequency test data, Comput. Struct., 77 (2000), 327–342.
[128] G. Lerosey, J. de Rosny, A. Tourin, A. Derode, G. Montaldo, and M. Fink, Time-reversal of electromagnetic waves and telecommunication, Radio Sci., 40 (2005), RS6S12.
[129] Y.Y. Li and L. Nirenberg, Estimates for elliptic systems from composite material, Comm. Pure Appl. Math., 56 (2003), 892–925.
[130] M. Lim and S. Yu, Reconstruction of the shape of an inclusion from elastic moment tensors, Contemp. Math., 548 (2011), 61–76.
[131] R. Lipton, Inequalities for electric and elastic polarization tensors with applications to random composites, J. Mech. Phys. Solids, 41 (5) (1993), 809–833.
[132] A.S.F. Markus, Introduction to the Spectral Theory of Polynomial Operator Pencils, Translations of Mathematical Monographs, Vol. 71, Amer. Math. Soc., Providence, RI, 1988.
[133] P.A. Martin and F.J. Rizzo, On boundary integral equations for crack problems, Proc. R. Soc. Lond. A, 421 (1989), 341–355.
[134] P.A. Martin, Exact solution of a simple hypersingular integral equation, J. Int. Eqs. Appl., 4 (1992), 197–204.
[135] P.A. Martin and F.J. Rizzo, Hypersingular integrals: how smooth must the density be?, Int. J. Numer. Meth. Engng., 39 (1996), 687–704.
[136] M. Masmoudi, J. Pommier, and B. Samet, The topological asymptotic expansion for the Maxwell equations and some applications, Inverse Problems, 21 (2005), 547–564.
[137] G.W. Milton, The Theory of Composites, Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2002.
[138] G.W. Milton and R.V. Kohn, Variational bounds on the effective moduli of anisotropic composites, J. Mech. Phys. Solids, 36 (6) (1988), 597–629.
[139] G.W. Milton, R.C. McPhedran, and D.R. McKenzie, Transport properties of arrays of intersecting cylinders, Appl. Phys., 25 (1981), 23–30.

[140] G.W. Milton, S.K. Serkov, and A.B. Movchan, Realizable (average strain, average stress) pairs in a plate with a hole, SIAM J. Appl. Math., 63 (2003), 987–1028.
[141] J.G. Minonzio, F.D. Philippe, C. Prada, and M. Fink, Characterization of an elastic cylinder and an elastic sphere with the time-reversal operator: application to the sub-resolution limit, Inverse Problems, 24 (2008), 025014.
[142] J.G. Minonzio, C. Prada, D. Chambers, D. Clorennec, and M. Fink, Characterization of subwavelength elastic cylinders with the decomposition of the time-reversal operator: Theory and experiment, J. Acoust. Soc. Am., 117 (2005), 789–798.
[143] N.I. Muskhelishvili, Singular Integral Equations, Noordhoff, Groningen, 1953.
[144] J. Nečas, Les Méthodes Directes en Théorie des Équations Elliptiques, Academia, Prague, 1967.
[145] L.K. Nielsen, X.C. Tai, S.I. Aanonsen, and M. Espedal, A binary level set model for elliptic inverse problems with discontinuous coefficients, Int. J. Numer. Anal. Model., 4 (2007), 74–99.
[146] J. Nocedal and S.J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
[147] A.A. Novotny, R.A. Feijóo, E. Taroco, and C. Padra, Topological sensitivity analysis, Comput. Methods Appl. Mech. Engrg., 192 (2003), 803–829.
[148] P.D. Norville and W.R. Scott, Time-reversal focusing of elastic surface waves, J. Acous. Soc. Amer., 118 (2005), 735–744.
[149] S.P. Näsholm and S. Holm, On a fractional Zener elastic wave equation, Fract. Calcul. Appl. Anal., 16 (2013), 26–50.
[150] K.D. Phung and X. Zhang, Time reversal focusing of the initial state for Kirchhoff plate, SIAM J. Appl. Math., 68 (2008), 1535–1556.
[151] J. Osborn, Spectral approximation for compact operators, Math. Comp., 29 (1975), 712–725.
[152] S. Osher and J.A. Sethian, Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys., 79 (1988), 12–49.
[153] A.K. Pandey, M. Biswas, and M.M. Samman, Damage detection from changes in curvature mode shapes, J. Sound Vibrat., 145 (1991), 321–332.
[154] C. Prada, E. Kerbrat, D. Cassereau, and M. Fink, Time reversal techniques in ultrasonic nondestructive testing of scattering media, Inverse Problems, 18 (2002), 1761–1773.
[155] J. Pujol, Elastic Wave Propagation and Generation in Seismology, Cambridge University Press, 2003.
[156] C.P. Ratcliffe, Damage detection using a modified Laplacian operator on mode shape data, J. Sound Vibrat., 204 (1997), 505–517.

[157] P.F. Rizos, N. Aspragathos, and A.D. Dimarogonas, Identification of crack location and magnitude in a cantilever beam from the vibration modes, J. Sound Vibrat., 138 (1990), 381–388.
[158] G.F. Roach, Green's Functions: Introductory Theory with Applications, Van Nostrand Reinhold, London, 1970.
[159] J. de Rosny, G. Lerosey, A. Tourin, and M. Fink, Time reversal of electromagnetic waves, in Lecture Notes in Comput. Sci. Eng., Vol. 59, 2007.
[160] F. Santosa, A level-set approach for inverse problems involving obstacles, ESAIM Control Optim. Calc. Var., 1 (1995/96), 17–33.
[161] G. Schwarz, Hodge Decomposition: A Method for Solving Boundary Value Problems, Lecture Notes in Mathematics, Vol. 1607, Springer-Verlag, Berlin, 1995.
[162] R. Sinkus, M. Tanter, S. Catheline, J. Lorenzen, C. Kuhl, E. Sondermann, and M. Fink, Imaging anisotropic and viscous properties of breast tissue by magnetic resonance-elastography, Mag. Res. Med., 53 (2005), 372–387.
[163] J. Sokolowski and A. Żochowski, On the topological derivative in shape optimization, SIAM J. Control Optim., 37 (1999), 1251–1272.
[164] J. Sokolowski and A. Żochowski, Topological derivatives of shape functionals for elasticity systems, Int. Ser. Num. Math., 139 (2001), 231–244.
[165] R. Song, X. Chen, and Y. Zhong, Imaging small three-dimensional elastic inclusions by an enhanced multiple signal classification method, J. Acoust. Soc. Amer., 132 (2012), 2420–2426.
[166] J. Song, O.I. Kwon, and J.K. Seo, Anisotropic elastic moduli reconstruction in transversely isotropic model using MRE, Inverse Problems, 28 (2012), 115003.
[167] G. Strang, On the construction and comparison of difference schemes, SIAM J. Numer. Anal., 5 (1968), 506–517.
[168] T.L. Szabo and J. Wu, A model for longitudinal and shear wave propagation in viscoelastic media, J. Acous. Soc. Amer., 107 (2000), 2437–2446.
[169] X.C. Tai and H. Li, A piecewise constant level set method for elliptic inverse problems, Appl. Numer. Math., 57 (2007), 686–696.
[170] F.G. Tricomi, Integral Equations, Interscience, New York, 1957.
[171] K. Wapenaar, Retrieving the elastodynamic Green's function of an arbitrary inhomogeneous medium by cross correlation, Phys. Rev. Lett., 93 (2004), 254301.
[172] K. Wapenaar and J. Fokkema, Green's function representations for seismic interferometry, Geophysics, 71 (2006), SI33–SI46.
[173] A. Wiegmann, Fast Poisson, fast Helmholtz and fast linear elastostatic solvers on rectangular parallelepipeds, Technical Report LBNL-43565, Lawrence Berkeley National Laboratory, Berkeley, CA, June 1999.

[174] H. Yuan, B.B. Guzina, and R. Sinkus, Application of topological sensitivity towards tissue elasticity imaging using magnetic resonance data, J. Eng. Mech., to appear.
[175] Y. Zhang, A.A. Oberai, P.E. Barbone, and I. Harari, Solution of the time-harmonic viscoelastic inverse problem with interior data in two dimensions, Int. J. Numer. Meth. Engng., 92 (2012), 1100–1116.
[176] V.V. Zhikov, Estimates for the trace of an averaged tensor, Dokl. Akad. Nauk SSSR, 299 (4) (1988), 796–800. English translation in Soviet Math. Dokl., 37 (1988), 456–459.
[177] V.V. Zhikov, Estimates for the homogenized matrix and the homogenized tensor, Uspekhi Mat. Nauk (= Russ. Math. Surveys), 46 (1991), 49–109. English translation in Russ. Math. Surv., 46 (1991), 65–136.

Index
anisotropic elasticity, 64
attenuation, 129, 131
backpropagation, 91, 117
binary level set, 162
Born approximation, 122, 123, 156
capacity, 168, 173
central limit theorem, 202
characteristic value, 27
compact operator, 16
conditional expectation, 204
conormal derivative, 13
covariance function, 206
covariance matrix, 203
crack, 181
direct imaging, 148
Dirichlet function, 31
displacement field, 7
double-layer potential, 13
duality, 5
eigenvalue, 27
eigenvalue perturbation, 168, 169, 181
eigenvector, 27
elastic moment tensor, 48, 50, 61, 80, 175, 181, 194
elastic wave equation, 8
extended target, 148
far-field measurements, 160
finitely meromorphic operator, 214
fundamental solution, 11, 12
Gaussian process, 207
Gaussian random vector, 203
generalized argument principle, 192, 214
generalized Rouché's theorem, 168, 214
Green's formula, 15
Hankel function, 11
hard inclusion, 34, 36, 55, 168, 169
harmonic elasticity, 8
Hashin-Shtrikman bounds, 51, 194
Helmholtz decomposition, 8
Helmholtz equation, 10
Helmholtz-Kirchhoff identities, 21, 25, 94, 98, 115, 133
high contrast materials, 33
Hilbert inversion formula, 72
Hilbert transform, 71
hole, 34, 36, 54, 62, 180, 181
hopping algorithm, 158
hot spot, 115, 116, 208
incomplete measurements, 198
incompressible fluids, 35
inner expansion, 56, 162
internal data, 160
inverse source problem, 125, 129
isoperimetric inequality, 51
jump formula, 13, 14
Kelvin matrix, 13
Kelvin-Voigt model, 126
Kirchhoff migration, 80, 85
Korn's inequality, 15, 47, 67, 184, 186
Kupradze matrix, 11, 12
Lamé system, 6, 12, 168
Lamé constants, 7
Landweber iteration scheme, 165
Laurent series expansion, 213
layer potentials, 4
level set method, 157, 164
linear regression, 204
Lippmann-Schwinger representation formula, 119, 156
matched asymptotic expansions, 48, 56
measurement noise, 112
medium noise, 118
modified Stokes system, 161
multiple eigenvalue, 192
multiplicity, 213
MUSIC algorithm, 80
near-field measurements, 160
Neumann function, 28, 31
Neumann-Poincaré operator, 13
Newton-type search method, 80
null multiplicity, 213
operator-valued function, 27, 213, 214
optimal control, 148
optimally illuminated coefficients, 195
Osborn's theorem, 181
outer expansion, 56
perfectly matched layer, 137, 143
perimeter regularization, 154
plane waves, 8
Poisson ratio, 161
potential energy functional, 75
power law attenuation, 126
probability density function, 201
radiation condition, 10
random process, 205
random variable, 201
rank, 213
reciprocity property, 21, 24
reconstruction formula, 193, 194
regularization parameter, 151, 154, 158
representation formula, 18
reverse-time migration, 80, 84
root function, 27
scattering amplitude, 64
shape deformation, 182, 194
shape derivative, 154, 182
signal-to-noise ratio, 117, 118, 121
single-layer potential, 13
small crack, 66, 81, 181
Sobolev spaces, 4
Sommerfeld radiation condition, 11
Sommerfeld-Kupradze radiation condition, 10
speckle pattern, 115, 119
spectral decomposition, 29, 31
splitting, 192
splitting of multiple eigenvalues, 192
splitting spectral Fourier approach, 138, 143
stationary phase theorem, 211
statistical moment, 203
Stokes system, 34, 35, 54
strain tensor, 7
Strang's splitting method, 138
thermoviscous law model, 126
thermoviscous model, 126
Tikhonov regularization, 151
time-reversal imaging, 125
topological derivative, 76, 80
total variation, 154, 163
trace operator, 214
transmission problem, 16–19, 173
vibration testing, 168
viscoelastic Green's tensor, 130
viscoelastic medium, 126
viscoelastic wave equation, 129
viscous moment tensor, 54
weighted imaging function, 102
Young's modulus, 70, 74, 181

PRINCETON SERIES IN APPLIED MATHEMATICS
Chaotic Transitions in Deterministic and Stochastic Dynamical Systems: Applications of Melnikov Processes in Engineering, Physics, and Neuroscience, Emil Simiu
Selfsimilar Processes, Paul Embrechts and Makoto Maejima
Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Jiming Peng, Cornelis Roos, and Tamás Terlaky
Analytic Theory of Global Bifurcation: An Introduction, Boris Buffoni and John Toland
Entropy, Andreas Greven, Gerhard Keller, and Gerald Warnecke, editors
Auxiliary Signal Design for Failure Detection, Stephen L. Campbell and Ramine Nikoukhah
Thermodynamics: A Dynamical Systems Approach, Wassim M. Haddad, VijaySekhar Chellaboina, and Sergey G. Nersesov
Optimization: Insights and Applications, Jan Brinkhuis and Vladimir Tikhomirov
Max Plus at Work, Modeling and Analysis of Synchronized Systems: A Course on Max-Plus Algebra and Its Applications, Bernd Heidergott, Geert Jan Olsder, and Jacob van der Woude
Impulsive and Hybrid Dynamical Systems: Stability, Dissipativity, and Control, Wassim M. Haddad, VijaySekhar Chellaboina, and Sergey G. Nersesov
The Traveling Salesman Problem: A Computational Study, David L. Applegate, Robert E. Bixby, Vasek Chvátal, and William J. Cook
Positive Definite Matrices, Rajendra Bhatia
Genomic Signal Processing, Ilya Shmulevich and Edward R. Dougherty
Wave Scattering by Time-Dependent Perturbations: An Introduction, G. F. Roach
Algebraic Curves over a Finite Field, J.W.P. Hirschfeld, G. Korchmáros, and F. Torres
Distributed Control of Robotic Networks: A Mathematical Approach to Motion Coordination Algorithms, Francesco Bullo, Jorge Cortés, and Sonia Martínez
Robust Optimization, Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski
Control Theoretic Splines: Optimal Control, Statistics, and Path Planning, Magnus Egerstedt and Clyde Martin
Matrices, Moments, and Quadrature with Applications, Gene H. Golub and Gérard Meurant
Totally Nonnegative Matrices, Shaun M. Fallat and Charles R. Johnson
Matrix Completions, Moments, and Sums of Hermitian Squares, Mihály Bakonyi and Hugo J. Woerdeman
Modern Anti-windup Synthesis: Control Augmentation for Actuator Saturation, Luca Zaccarian and Andrew W. Teel
Graph Theoretic Methods in Multiagent Networks, Mehran Mesbahi and Magnus Egerstedt
Stability and Control of Large-Scale Dynamical Systems: A Vector Dissipative Systems Approach, Wassim M. Haddad and Sergey G. Nersesov
Mathematical Analysis of Deterministic and Stochastic Problems in Complex Media Electromagnetics, G. F. Roach, I. G. Stratis, and A. N. Yannacopoulos
Topics in Quaternion Linear Algebra, Leiba Rodman
Hidden Markov Processes: Theory and Applications to Biology, M. Vidyasagar
Mathematical Methods in Elasticity Imaging, Habib Ammari, Elie Bretin, Josselin Garnier, Hyeonbae Kang, Hyundae Lee, and Abdul Wahab