Birkhäuser Advanced Texts Basler Lehrbücher
Andreas Rosén
Geometric Multivector Analysis From Grassmann to Dirac
Series editors Steven G. Krantz, Washington University, St. Louis, USA Shrawan Kumar, University of North Carolina at Chapel Hill, Chapel Hill, USA Jan Nekováˇr, Sorbonne Université, Paris, France
More information about this series at http://www.springer.com/series/4842
Andreas Rosén Department of Mathematical Sciences Chalmers University of Technology and the University of Gothenburg Gothenburg, Sweden
ISSN 1019-6242 ISSN 2296-4894 (electronic)
Birkhäuser Advanced Texts Basler Lehrbücher
ISBN 978-3-030-31410-1 ISBN 978-3-030-31411-8 (eBook)
https://doi.org/10.1007/978-3-030-31411-8

Mathematics Subject Classification (2010): 15-01, 15A72, 15A66, 35-01, 35F45, 45E05, 53-01, 58A10, 58A12, 58J20

© Springer Nature Switzerland AG 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Contents

Preface ix

1 Prelude: Linear Algebra 1
1.1 Vector Spaces 2
1.2 Duality 5
1.3 Inner Products and Spacetime 9
1.4 Linear Maps and Tensors 13
1.5 Complex Linear Spaces 17
1.6 Comments and References 21

2 Exterior Algebra 23
2.1 Multivectors 24
2.2 The Grassmann Cone 34
2.3 Mapping Multivectors 40
2.4 Oriented Measure 44
2.5 Multicovectors 47
2.6 Interior Products and Hodge Stars 51
2.7 Mappings of Interior Products 61
2.8 Anticommutation Relations 63
2.9 The Plücker Relations 67
2.10 Comments and References 69

3 Clifford Algebra 73
3.1 The Clifford Product 74
3.2 Complex Numbers and Quaternions 82
3.3 Abstract Clifford Algebras 89
3.4 Matrix Representations 93
3.5 Comments and References 102

4 Rotations and Möbius Maps 105
4.1 Isometries and the Clifford Cone 106
4.2 Infinitesimal Rotations and Bivectors 113
4.3 Euclidean Rotations 117
4.4 Spacetime Rotations 125
4.5 Fractional Linear Maps 134
4.6 Mappings of the Celestial Sphere 142
4.7 Comments and References 150

5 Spinors in Inner Product Spaces 153
5.1 Complex Representations 154
5.2 The Complex Spinor Space 161
5.3 Mapping Spinors 167
5.4 Abstract Spinor Spaces 172
5.5 Comments and References 183

6 Interlude: Analysis 185
6.1 Domains and Manifolds 186
6.2 Fourier Transforms 191
6.3 Partial Differential Equations 197
6.4 Operator Theory 200
6.5 Comments and References 206

7 Multivector Calculus 209
7.1 Exterior and Interior Derivatives 211
7.2 Pullbacks and Pushforwards 216
7.3 Integration of Forms 224
7.4 Vector Fields and Cartan's Formula 235
7.5 Poincaré's Theorem 239
7.6 Hodge Decompositions 242
7.7 Comments and References 253

8 Hypercomplex Analysis 255
8.1 Monogenic Multivector Fields 257
8.2 Spherical Monogenics 265
8.3 Hardy Space Splittings 277
8.4 Comments and References 283

9 Dirac Wave Equations 285
9.1 Wave and Spin Equations 287
9.2 Dirac Equations in Physics 291
9.3 Time-Harmonic Waves 303
9.4 Boundary Value Problems 309
9.5 Integral Equations 319
9.6 Boundary Hodge Decompositions 327
9.7 Maxwell Scattering 332
9.8 Comments and References 339

10 Hodge Decompositions 343
10.1 Nilpotent Operators 344
10.2 Half-Elliptic Boundary Conditions 350
10.3 Hodge Potentials 354
10.4 Bogovskiĭ and Poincaré Potentials 362
10.5 Čech Cohomology 367
10.6 De Rham Cohomology 372
10.7 Comments and References 381

11 Multivector and Spinor Bundles 383
11.1 Tangent Vectors and Derivatives 385
11.2 Multivector Calculus on Manifolds 390
11.3 Curvature and Bivectors 398
11.4 Conformal Maps and ON-Frames 405
11.5 Weitzenböck Identities 408
11.6 Spinor Bundles 413
11.7 Comments and References 421

12 Local Index Theorems 423
12.1 Fredholm Dirac Operators 425
12.2 Normal Coordinates 431
12.3 The Chern–Gauss–Bonnet Theorem 434
12.4 The Atiyah–Singer Index Theorem 441
12.5 Comments and References 448

Bibliography 451

Index 459
Preface

I guess all mathematicians have had their defining moments, some events that led them to devote much of their lives and energy to mathematics. Myself, I vividly recall the spring and summer of 1997, spending my days reading about Clifford algebras in David Hestenes's inspirational books and listening to the Beatles. Don't misunderstand me. To a Swede, there is nothing that beats ABBA, but that summer it happened that the Clifford algebras were enjoyed in this particular way. I was a fourth-year undergraduate student at Linköping University studying the civil engineering program of applied physics and electrical engineering, and the very last course I took there came to change my life in a way that no one could have anticipated. The course was on "applied mathematics", and we were supposed to pursue a math project of our choice, typically to solve some differential equation. One odd topic proposed was learning Clifford algebras, and it appealed to me. I fell deeply in love with the beauty of it all, and I read and I read. I found the biographies [34, 37] about Hermann Grassmann, and I learned what an unfortunate turn mathematics had taken since the 1800s. During my university studies I had had a sense of something missing in the vector calculus that we were taught. I remember students asking me in the linear algebra sessions that I taught how the vector product could have area as dimension while at the same time being a vector. I discovered that Grassmann had figured it all out more than 150 years ago, and now it was all strangely hidden to us students of mathematics, all but the one-dimensional vectors. No one had told me anything about vector products in dimensions other than three, or about determinants of rectangular matrices. My personal relations with the vector product had in fact begun some five years earlier, when I borrowed a telescope from my high school for a science project on satellites.
Using Kepler's laws, I calculated a formula for the altitude of a satellite's orbit, using as input two observations of the satellite's position and the time elapsed between the two observations. Of course you don't need a telescope for this, it's just to look for a slow falling star, but I did other things as well. As you may guess, I stumbled upon a curious expression involving three mixed products, for the plane of rotation of the satellite. It was only the following year, when I had started my university studies, that I learned in the linear algebra lectures that this intriguing formula was called a vector product.
A second defining moment occurred two years later, around May 1999. I was spending a Saturday or Sunday in the library at the mathematics department in Lund, and stumbled upon a friend. We started a discussion that led to a search on this rather new thing called the internet, where I found the perfect PhD supervisor, Alan McIntosh, from Australia, one of the giants in harmonic analysis and operator theory. It was a perfect match, since he was doing real analysis, singular integrals and operator theory, as well as mixing in the algebras of Clifford and Grassmann when needed. And so I ended up down under in Canberra, and spent three years applying singular integrals and Clifford algebra to solve Maxwell boundary value problems on Lipschitz domains with Alan McIntosh. The publications [11, 8, 9, 7, 14, 10] related to my thesis work are perhaps the real starting point for this book. To shed light on the confusion: Axelsson = Rosén before 2011. The reason for telling this story is not that I think the reader is more interested in my personal story than in the subject of the book. I certainly hope not. But nothing is without context, and it may help to know the background to understand this book. The basic algebra is not new; it goes back to the pioneering works of Hermann Grassmann, first published in 1843, whose exterior algebra of multivectors is the topic of Chapter 2, and of William Kingdon Clifford from 1878, whose geometric algebra is the topic of Chapter 3. Although these algebras are geometric and useful enough that one would expect them to fit into the mainstream mathematics curriculum at a not too advanced level, this has not really happened. But over the last century, they have been rediscovered over and over again. Inspired by the Grassmann algebra, Élie Cartan developed his calculus of differential forms in the early 1900s. He was also the first to discover spinors in general in 1913, which is the topic of Chapter 5.
In 1928, Paul Dirac formulated his famous equation that describes massive spin 1/2 particles in relativistic quantum mechanics, which we discuss in Section 9.2, and which makes use of spacetime spinors and matrix representations of Clifford's algebra. In 1963, Michael Atiyah and Isadore Singer rediscovered and generalized the Dirac operator to Riemannian manifolds in connection with their celebrated index theorem, which is the topic of Chapter 12. There are also works by Marcel Riesz from 1958 on spacetime isometries and by Lars Ahlfors from 1985 on Möbius maps, using Clifford algebra, which is the topic of Chapter 4. Mentioned above, David Hestenes has been advocating the use of Clifford algebra, in particular in mathematical physics, since the 1960s. There is also a research field of Clifford analysis, where a higher-dimensional complex analysis using Clifford algebras has been developed, starting from around 1980 and which is the topic of Chapter 8. Included in this book are also some more recent results related to my own research. The material in Sections 9.3 to 10.4 on Dirac integral equations and Hodge decompositions originates with my early thesis work with Alan McIntosh in 2000–2002, and most of the key ideas there are an inheritance from him. Since then, the material covered in this book has been a continued source of inspiration for my research. The following publications of mine in particular make use, explicitly
or implicitly, of the algebras of Grassmann and Clifford in real analysis: Axelsson, Keith, and McIntosh [12]; Auscher, Axelsson, and Hofmann [4]; Auscher, Axelsson, and McIntosh [5]; Axelsson, Kou, and Qian [13]; Rosén [82, 83]; Bandara, McIntosh, and Rosén [17]; Bandara and Rosén [18]; and Rosén [84, 80]. This book was written in four stages. The first part, on the algebras of Grassmann and Clifford, was written around 2008 at Stockholm University and was used as material for a graduate course given there. In the second stage I wrote basically Chapters 7, 8, and 10 for a graduate course given in Linköping in 2010. In the third stage I wrote Chapters 11 and 12 for a graduate course in Gothenburg 2014. In between and after these writing periods, the manuscript was collecting dust until I decided, upon returning to mathematics after an extended period of parental leave in 2018, to prepare this final version for publication. Having been away from math for a while gave me new perspectives on things, and this final preparation turned into a major rewriting of the whole book, which I hope will benefit the reader. A number of mathematicians and friends deserve a sincere thanks for being helpful, directly or indirectly, in the creation of this book. Those who untimely have passed away by now, Peetre, McIntosh, and Passare, will always be remembered fondly by me. In mainly chronological order the following people come to mind. Hans Lundmark, who was my mentor for that very first Clifford algebra project in Linköping. I wonder whether and where I would have discovered this mathematics had he not proposed this project to me. Mats Aigner in Linköping, whom I first met in Lund and with whom I have had uncountably many interesting discussions about the algebras of Clifford and Grassmann. Jaak Peetre, who encouraged me and provided interesting discussions on the subject.
Wulf Staubach at Uppsala University, that friend from the library who changed my life by being well read and knowing about Alan McIntosh. Alan McIntosh at the Australian National University, my mathematical father from whom I have learned so much. I doubt very much that I will ever again meet someone with as deep an understanding of life and mathematics as he possessed. Mikael Passare at Stockholm University, who supported me at a critical stage. Erik Duse, who was a student attending that first course that I gave in Stockholm, and who more recently himself gave a course based on the third version of this book in Helsinki, and who has given me valuable feedback, including some exercises contained in this book. The book is organized so that the reader finds in the introduction to each chapter a description of and a road map to the material in that chapter. Comments and references are collected in the final section of each chapter. There are two parts of the book. In the first part, the affine multivector and spinor algebra and geometry are explained. A key idea here is the principle of abstract algebra, as explained in the introduction to Chapter 1. In the second part, we use multivectors and spinors in analysis, first in affine space and later on manifolds. A key idea here is that of splittings of function spaces, as explained in the introduction to Chapter 6. My intention is that the material covered should be accessible to basically anyone with mathematical maturity corresponding to that of an advanced undergraduate
student, with a solid understanding of standard linear algebra, multi-variable and vector calculus, and complex analysis. My hope is that you will find this beautiful mathematics as useful and inspiring as I have.
Andreas Rosén
Göteborg, August 2019
...the horrible "Vector analysis", which we now see as a complete perversion of Grassmann's best ideas. (It is limited to 3 dimensions, replaces bivectors by the awful "vector product" and trivectors by the no less awful "mixed product", notions linked to the euclidean structure and which have no decent algebraic properties!) / J. Dieudonné
Chapter 1

Prelude: Linear Algebra

Road map: This chapter is not where to start reading this book, which rather is Chapter 2. The material in the present chapter is meant to be used as a reference for some background material and ideas from linear algebra, which are essential to this book, in particular to the first part of it on algebra and geometry consisting of Chapters 2 through 5. The main idea in this part of the book is what may be called the principle of abstract algebra: It is not important what you calculate with, it is only important how you calculate. Let us explain by examples. Consider for example the complex numbers x + iy, where you of course ask what i = √−1 is when you first encounter this mathematical construction. But that uncomfortable feeling of what this strange imaginary unit really is fades away as you get more experienced and learn that C is a field of numbers that is extremely useful, to say the least. You no longer care what kind of object i is but are satisfied only to know that i² = −1, which is how you calculate with i. It is this principle of abstract algebra that one needs to bear in mind for all our algebraic constructions in this book: the exterior algebra of multivectors in Chapter 2, Clifford algebras in Chapter 3, and spinors in Chapter 5. In all cases the construction starts by specifying how we want to calculate. Then we prove that there exist objects that obey these rules of calculation, and that any two constructions are isomorphic. Whenever we know the existence and uniqueness up to isomorphism of the objects, we can regard them as geometric objects with an invariant meaning. Which concrete representation of the objects we have becomes irrelevant. In this chapter, Sections 1.1, 1.2, and 1.4 contain background material for Chapter 2, whereas Sections 1.3 and 1.5 are mainly relevant for Chapters 4 and 5 respectively.
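The principle of abstract algebra can be made concrete in code. The sketch below (a minimal illustration, not from the book) builds the complex numbers as ordered pairs of reals: only the rules of calculation are specified, and the pair playing the role of i then satisfies i² = −1 without i "being" anything in particular.

```python
# The complex numbers modeled as ordered pairs (x, y) = x + iy.
# Only how one calculates is specified; what "i" is never matters.

class C:
    def __init__(self, x, y=0.0):
        self.x, self.y = x, y

    def __add__(self, other):
        return C(self.x + other.x, self.y + other.y)

    def __mul__(self, other):
        # (x1 + i y1)(x2 + i y2) = (x1 x2 - y1 y2) + i (x1 y2 + y1 x2),
        # which encodes precisely the rule i^2 = -1.
        return C(self.x * other.x - self.y * other.y,
                 self.x * other.y + self.y * other.x)

    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

    def __repr__(self):
        return f"{self.x} + {self.y}i"

i = C(0.0, 1.0)
assert i * i == C(-1.0, 0.0)  # the rule of calculation, not the object, is what counts
```

Any other construction obeying the same rules, for example 2 × 2 matrices with i represented by a rotation by 90 degrees, is isomorphic to this one, which is exactly the point of the principle.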
1.1 Vector Spaces
Two general notations which we use throughout this book are the following. By X := Y or Y =: X we mean that X is defined to be / is assigned the value of Y . By A ↔ B we denote a one-to-one correspondence, or an isomorphism between A and B, depending on context. We shall distinguish the concept of a vector space from the more general concept of a linear space. Except for function spaces, which we use later in part two of the book, we shall assume that our linear spaces are finite-dimensional. The difference between linear spaces and vector spaces is only a conceptual one, though. Indeed, any linear space V is naturally an affine space (V, V ), where V acts on itself through the addition in V ; see below. Thus, strictly mathematically speaking, a linear space is the same thing as a vector space. The difference between linear and vector spaces lies in the geometric interpretation of their objects, and we want to make this distinction clear to start with, since we are going to work with linear spaces whose objects are not to be interpreted as geometric vectors. Definition 1.1.1 (Linear space). A real linear space (L, +, ·) is an abelian group (L, +) together with a scalar multiplication R × L → L that is bilinear with respect to addition and a group action of the multiplicative group R∗ = R \ {0} on L. We recall that a group is a set equipped with a binary associative multiplication, containing an identity element and an inverse to each element. For an abelian group, we assume commutativity and write the binary operation as addition. In a linear space, we sometimes write a product xv of x ∈ R and v ∈ L as vx. Since the product of real numbers is commutative, this presents no problem. On the other hand, by a vector space V we mean a linear space consisting of geometric vectors, that is, “one-dimensional directed objects”, which we refer to as vectors. More precisely, this means that V is the space of translations in some affine space X as follows. 
Definition 1.1.2 (Vector space). An affine space (X, V) is a set X on which a real linear space V, the space of translations/vectors in X, acts freely and transitively by addition, that is, there exists an addition-by-vectors map X × V → X that is a (left or right) action of (V, +) on X such that for all x, y ∈ X there exists a unique v ∈ V, the vector denoted by y − x, for which x + v = y.

If x, y ∈ X, then the vector v = y − x has the interpretation of a one-dimensional arrow starting at x and ending at y. Starting at a different point x0 ∈ X, the same vector v also appears as the arrow from x0 to x0 + v. Thus a vector v is characterized by its orientation and length, but not its position in X. In general affine spaces, the notions of length, and more generally of k-volume, have only a relative meaning when we do not have access to an inner product on the space to measure angles and absolute lengths. Thus in general affine spaces, only
the relative lengths of two parallel vectors v1 and v2 can be compared: if v1 = λv2, then v1 is λ times longer than v2. In practice, one often identifies the affine space X and its vector space V. The difference is the origin 0: X is V, but where we have "forgotten" the origin. Given an origin point x0 ∈ X, we can identify the vector v ∈ V with the point x0 + v ∈ X. In particular, x0 ∈ X is identified with 0 ∈ V. The reader will notice that in Chapters 2 and 7 we carefully distinguish between X and its vector space V, but that in the later chapters, we become more pragmatic and often identify X = V.

Definition 1.1.3 (Rn). The vector space Rn is the set of n-tuples Rn := {(x1, . . . , xn) ; xi ∈ R}, with the usual addition and multiplication by scalars. This linear space has a distinguished basis, the standard basis {ei}, where ei := (0, . . . , 0, 1, 0, . . . , 0), with coordinate 1 at the ith position.

We adopt the practical convention that we identify row vectors with column vectors, as is often done in doing analysis in Rn. More precisely, Rn should be the space of column vectors, since matrix multiplication is adapted to this convention. However, whenever no matrix multiplication is involved, it is more convenient to write
\[ \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \quad\text{than}\quad \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix}^t, \]
where ·ᵗ denotes matrix transpose. We will not distinguish between parentheses (·) and brackets [·].

Note the decreasing generality of the notions: an affine space is homogeneous and isotropic, that is, without any distinguished points or directions. A linear space is isotropic, but has a distinguished point: the origin 0. The linear space Rn is neither homogeneous nor isotropic: it has an origin and a distinguished basis, the standard basis. Whenever we have fixed a basis {ei} in a vector space V, there is a natural identification between V and Rn, where a vector v = Σi xi ei corresponds to the coordinate tuple x = (x1, . . . , xn) ∈ Rn.
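The distinction between points of the affine space X and vectors of V can be encoded as two distinct types, as in the following minimal sketch (hypothetical class names, not from the book), which implements the free and transitive action of Definition 1.1.2 in the plane.

```python
# Sketch (not from the book): points and vectors as distinct types in the
# affine plane. Points support p + v and q - p; vectors add to vectors.

class Vector:
    def __init__(self, dx, dy):
        self.dx, self.dy = dx, dy

    def __add__(self, other):
        return Vector(self.dx + other.dx, self.dy + other.dy)

    def __eq__(self, other):
        return (self.dx, self.dy) == (other.dx, other.dy)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, v):        # translation: X x V -> X
        return Point(self.x + v.dx, self.y + v.dy)

    def __sub__(self, other):    # the unique v with other + v = self
        return Vector(self.x - other.x, self.y - other.y)

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

x, y = Point(1, 1), Point(4, 3)
v = y - x                        # the arrow from x to y
assert x + v == y                # free and transitive action
assert (Point(0, 5) + v) - Point(0, 5) == v  # same vector from any base point
```

Note that Point deliberately has no addition of two points and no origin: choosing an origin x0 and mapping v to x0 + v is exactly the identification of X with V described above.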
Recall the notion of direct sums of linear spaces. Define the sum of subspaces V1 + V2 := {v1 + v2 ; v1 ∈ V1 , v2 ∈ V2 } when V1 and V2 are two subspaces of a linear space V . When V1 ∩ V2 = {0}, we write V1 ⊕ V2 and call the sum a direct sum. This is an intrinsic direct sum. In contrast, suppose that we are given two linear spaces V1 and V2 , without any common embedding space V . In this case we define the (extrinsic) direct sum of these spaces as V1 ⊕ V2 := {(v1 , v2 ) ∈ V1 × V2 ; v1 ∈ V1 , v2 ∈ V2 }.
In a natural way, V1 ⊕ V2 is a linear space that contains both spaces V1, V2, under suitable identifications. As an example, Rn is the exterior direct sum of n copies of the one-dimensional linear space R.

Recall the notions of linear independence of a set S ⊂ V and its linear span span(S) ⊂ V. For concrete calculations in a given linear space V, it is often needed to fix a basis {e1, . . . , en} ⊂ V, with n = dim V being the dimension of V. It is conceptually important to understand that a basis in general is an unordered set. But often bases for vector spaces are linearly ordered e1, e2, e3, . . . by the positive integers and considered as ordered sets. In particular, this is needed in order to represent v ∈ V,
\[ v = x_1 e_1 + \cdots + x_n e_n = \sum_{i=1}^{n} x_i e_i = \begin{bmatrix} e_1 & \cdots & e_n \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \]
by its coordinates (x1, . . . , xn) ∈ Rn, and in order to represent a linear map T : V1 → V2 between linear spaces V1, V2,
\[ T(x_1 e_1 + \cdots + x_n e_n) = \sum_{i=1}^{m} \sum_{j=1}^{n} e_i' a_{i,j} x_j = \begin{bmatrix} e_1' & \cdots & e_m' \end{bmatrix} \begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \]
by its matrix A = (a_{i,j}) relative to the bases {e_j} for V1 and {e_i'} for V2. However, many fundamental types of bases used in mathematics do not come with any natural linear order. Indeed, this will be the usual situation in this book, where the basic linear spaces of multivectors, tensors, and spinors have standard bases that are not linearly ordered but rather have some sort of lattice ordering, meaning that the basis elements naturally are indexed by subsets of integers or tuples of integers.

Another central theme in this book is that many basic linear spaces that appear are not only linear spaces, but associative algebras in the sense that they come equipped with an associative, but in general noncommutative, product.
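As a concrete illustration of the matrix representation above, the sketch below (plain Python with illustrative helper names, not from the book) recovers the matrix A = (a_{i,j}) of a linear map on the plane column by column: column j holds the coordinates of T(e_j) in the output basis.

```python
# Sketch: the matrix of a linear map T relative to bases, column by column.
# Helper names (solve2, matrix_of) are illustrative, not from the book.

def solve2(b1, b2, w):
    """Coordinates of w in the basis {b1, b2} of R^2, by Cramer's rule."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = (w[0] * b2[1] - b2[0] * w[1]) / det
    c2 = (b1[0] * w[1] - w[0] * b1[1]) / det
    return (c1, c2)

def matrix_of(T, basis_in, basis_out):
    """Matrix (a_ij) with T(e_j) = sum_i e'_i a_ij: column j of the matrix
    holds the coordinates of T(e_j) in the basis {e'_i}."""
    cols = [solve2(basis_out[0], basis_out[1], T(e)) for e in basis_in]
    return [[cols[j][i] for j in range(2)] for i in range(2)]  # transpose

T = lambda v: (v[0] + 2 * v[1], 3 * v[0] + 4 * v[1])
std = [(1.0, 0.0), (0.0, 1.0)]
assert matrix_of(T, std, std) == [[1.0, 2.0], [3.0, 4.0]]
```

Changing either basis changes the matrix but not the map, which is why the matrix has no invariant meaning before an ordered basis has been fixed.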
A real associative algebra (A, +, ∗, 1), with identity, is a linear space over R equipped with a bilinear and associative product ∗, with identity element 1. Scalars λ ∈ R are identified with multiples λ1 ∈ A of the identity, and it is assumed that (λ1) ∗ v = λv = v ∗ (λ1) for all v ∈ A. Let (A1 , +1 , ∗1 , 11 ) and (A2 , +2 , ∗2 , 12 ) be two algebras. Then a map T : A1 → A2 is said to be an algebra homomorphism if it is linear and satisfies T (v1 ∗1 v2 ) = T (v1 ) ∗2 T (v2 ) for all v1 , v2 ∈ A1 and if T (11 ) = 12 . An invertible homomorphism is called an algebra isomorphism.
Exercise 1.1.5. Let A be an associative algebra. Define the exponential function
\[ \exp(x) := \sum_{k=0}^{\infty} \frac{1}{k!} x^k, \qquad x \in A. \]
Show that exp(x + y) = exp(x) exp(y), provided that x and y commute, that is, if xy = yx. If φ ∈ R, show that
\[ \exp(\varphi j) = \begin{cases} \cos\varphi + j \sin\varphi, & \text{if } j^2 = -1, \\ \cosh\varphi + j \sinh\varphi, & \text{if } j^2 = 1, \\ 1 + \varphi j, & \text{if } j^2 = 0. \end{cases} \]
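The first identity of the exercise, for j² = −1, can be checked numerically by truncating the series; the sketch below (not from the book) uses Python's built-in complex unit 1j in the role of j.

```python
import math

def exp_series(x, terms=30):
    """Truncated exponential series sum_{k < terms} x^k / k!, for any x
    supporting multiplication and scalar division, e.g. a complex number."""
    total, power = 0.0, 1.0  # promoted to complex as soon as x is complex
    for k in range(terms):
        total += power           # add x^k / k!
        power = power * x / (k + 1)
    return total

phi = 0.7
z = exp_series(phi * 1j)  # j = 1j satisfies j^2 = -1
assert abs(z.real - math.cos(phi)) < 1e-12
assert abs(z.imag - math.sin(phi)) < 1e-12

# exp(x + y) = exp(x) exp(y) when x and y commute (complex numbers always do):
a, b = 0.3 + 0.4j, -0.1 + 0.2j
assert abs(exp_series(a + b) - exp_series(a) * exp_series(b)) < 1e-12
```

The same routine applies verbatim once a Clifford or matrix algebra type with multiplication is available, since only the rules of calculation enter the series.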
1.2 Duality
There are several reasons for us to consider inner products and dualities more general than Euclidean ones. A first reason is that we want to study the geometry of multivectors in Minkowski spacetimes, the closest relative to Euclidean spaces among inner product spaces, which are modeled by an indefinite inner product as in Section 1.3. A second reason is that we want to study real Clifford algebras, where the fundamental representation Theorem 3.4.2 involves inner product spaces of signature zero. A third reason is that we want to study spinor spaces, where more general nonsymmetric dualities may appear.

Definition 1.2.1 (Duality and inner product). A duality of two linear spaces V1 and V2 is a bilinear map V1 × V2 → R : (v1, v2) ↦ ⟨v1, v2⟩ that is non-degenerate in the sense that ⟨v1, v2⟩ = 0 for all v1 ∈ V1 only if v2 = 0, and ⟨v1, v2⟩ = 0 for all v2 ∈ V2 only if v1 = 0. In the case V1 = V2 = V, we speak of a duality on V. If a duality on V is symmetric in the sense that ⟨v1, v2⟩ = ⟨v2, v1⟩ for all v1, v2 ∈ V, then we call the duality an inner product and V an inner product space. We use the notation ⟨v⟩² := ⟨v, v⟩ ∈ R. A vector v such that ⟨v⟩² = 0 is called singular. If an inner product has the additional property that ⟨v⟩² > 0 for all 0 ≠ v ∈ V, then we call it a Euclidean inner product, and V is called a Euclidean space. In this case, we define the norm
\[ |v| := \sqrt{\langle v \rangle^2} \ge 0, \]
so that ⟨v⟩² = |v|². If a duality on V is skew-symmetric in the sense that ⟨v1, v2⟩ = −⟨v2, v1⟩ for all v1, v2 ∈ V, then we call the duality a symplectic form and V a symplectic space.

Note carefully that in general, ⟨v⟩² may be negative, as compared to the square of a real number. We do not define any quantity ⟨v⟩, and the square in the notation ⟨v⟩² is only formal.
Exercise 1.2.2. Show that an inner product is Euclidean if ⟨v⟩² ≥ 0 for all v ∈ V.

Let V be a linear space. There is a canonical linear space V∗ and duality ⟨V∗, V⟩, namely the dual space of V defined as

V∗ := {linear functionals θ : V → R}.

Given such a scalar-valued linear function θ ∈ V∗, its value θ(v) ∈ R at v ∈ V will be denoted by

⟨θ, v⟩ := θ(v) ∈ R.

Note that this is indeed a duality: if θ(v) = 0 for all v ∈ V, then θ = 0 by definition. On the other hand, if θ(v) = 0 for all θ, then it follows that v = 0, since otherwise, we can take a complementary subspace V′ ⊂ V so that V = span{v} ⊕ V′ and define the linear functional θ(αv + v′) := α, α ∈ R, v′ ∈ V′, for which θ(v) ≠ 0. If V is a vector space with a geometric interpretation of v ∈ V as in Section 1.1, then θ ∈ V∗, which we refer to as a covector, is best described in V by its level sets

{v ∈ V ; ⟨θ, v⟩ = C},

for different fixed values of C ∈ R. Since θ is linear, these level sets are parallel hyperplanes. The following observation is fundamental in understanding dualities.

Proposition 1.2.3 (Representation of dual space). Fix a linear space V. Then there is a one-to-one correspondence between dualities ⟨V′, V⟩ and invertible linear maps g : V′ → V∗ : v′ ↦ θ, given by

⟨g(v′), v⟩ := ⟨v′, v⟩, v ∈ V.

Here the pairing on the left is the functional value g(v′)v, whereas the pairing on the right is as in Definition 1.2.1. If V′ = V, then V is an inner product/symplectic space if and only if g : V → V∗ is a symmetric/antisymmetric linear map.

With Proposition 1.2.3 in mind, we write a duality between two linear spaces as ⟨V∗, V⟩, where V∗ is not necessarily the dual space of V, but rather a linear space dual to V in the sense of Definition 1.2.1. By Proposition 1.2.3 this abuse of notation presents no problem. In particular, when we have a duality or inner product on V, we shall write θ = v to mean θ = g(v).
Definition 1.2.4 (Orthogonal complement). Consider a linear space V and a duality ⟨V∗, V⟩. If ⟨v′, v⟩ = 0, then we say that v′ ∈ V∗ and v ∈ V are orthogonal. The orthogonal complement of a set S′ ⊂ V∗ is the subspace

(S′)⊥ := {v ∈ V ; ⟨v′, v⟩ = 0 for all v′ ∈ S′} ⊂ V.

For S ⊂ V we similarly define the orthogonal complement

S⊥ := {v′ ∈ V∗ ; ⟨v′, v⟩ = 0 for all v ∈ S} ⊂ V∗.

Definition 1.2.5 (Dual basis). Let {e1, . . . , en} be a basis for V. Then each v ∈ V can be uniquely written v = ∑j xj ej, and we define covectors e∗j by

⟨e∗j, v⟩ := xj = the jth coordinate of v.

We call {e∗1, . . . , e∗n} ⊂ V∗ the dual basis of {e1, . . . , en} ⊂ V.

Note that the dual basis {e∗1, . . . , e∗n} is indeed a basis for V∗ whenever {e1, . . . , en} is a basis for V, and is characterized by the property

⟨e∗i, ej⟩ = 1 if i = j, and ⟨e∗i, ej⟩ = 0 if i ≠ j.

When we have a duality on V, the dual basis is another basis for V.

Exercise 1.2.6. Consider V = R², the Euclidean plane with its standard inner product. Find the dual basis to {(3/2, 0), (1/4, 1/2)} and draw the two bases.

Example 1.2.7 (Crystal lattices). Let {e1, e2, e3} be the standard basis for R³. In solid-state physics one studies crystal structures. These have the atoms arranged/packed in a regular pattern that repeats itself, a lattice, which may differ from crystal to crystal. Mathematically, a crystal lattice is described by a basis {v1, v2, v3} such that the atoms in the crystal are located at the lattice points

{n1 v1 + n2 v2 + n3 v3 ; n1, n2, n3 ∈ Z}.

Two commonly occurring crystal structures are the body-centered cubic lattice, which has basis

{(1/2)(−e1 + e2 + e3), (1/2)(e1 − e2 + e3), (1/2)(e1 + e2 − e3)},

and the face-centered cubic lattice, which has basis

{(1/2)(e2 + e3), (1/2)(e1 + e3), (1/2)(e1 + e2)}.

Except for a factor 2, these two bases are seen to be dual bases: one speaks of reciprocal lattices for crystal lattices.
The names of these lattices are clear if one draws the basis vectors in relation to the unit cube {0 ≤ x1 , x2 , x3 ≤ 1} and its integer translates.
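The duality claim at the end of Example 1.2.7 is easy to check by computer. A minimal Python sketch (variable names are ours, not the book's) verifying that twice the fcc basis is the dual basis of the bcc basis with respect to the standard Euclidean inner product:

```python
from fractions import Fraction as F

# bcc and fcc bases from Example 1.2.7, in the standard basis {e1, e2, e3}.
half = F(1, 2)
bcc = [(-half, half, half), (half, -half, half), (half, half, -half)]
fcc = [(0, half, half), (half, 0, half), (half, half, 0)]

def dot(u, v):
    # Standard Euclidean inner product on R^3.
    return sum(ui * vi for ui, vi in zip(u, v))

# The dual basis of the bcc basis is 2*fcc: <2 fcc[i], bcc[j]> = delta_ij.
for i in range(3):
    for j in range(3):
        assert 2 * dot(fcc[i], bcc[j]) == (1 if i == j else 0)
```

Exact rational arithmetic is used so that the Kronecker-delta check is not clouded by rounding.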
Example 1.2.8 (Basis FEM functions). When solving partial differential equations numerically using the finite element method (FEM), the following problem appears. For a three-dimensional computation we consider simplices D, the closed convex hull of four points. Using one corner as the origin 0, and vectors {v1, v2, v3} along the edges to the other three corners, we wish to construct linear functions fk : D → R such that fk(vk) = 1 and fk = 0 on the opposite face of D, for k = 1, 2, 3. Using the dual basis {v∗1, v∗2, v∗3}, we immediately obtain fk(x) = ⟨v∗k, x⟩.

For practical calculations in an inner product space, we prefer to use the simplest bases: the ON-bases.

Definition 1.2.9 (ON-bases). Let ⟨·, ·⟩ be a duality on V. Then {ei} is called an ON-basis if ⟨ei, ej⟩ = 0 when i ≠ j and if ⟨ei⟩² = ±1 for all i. In terms of dual bases, a basis {ei} is an ON-basis if and only if

e∗i = ±ei,  i = 1, . . . , n.

In particular, for a Euclidean space, a basis is an ON-basis if and only if it coincides with its dual basis.

Proposition 1.2.10 (Existence of ON-bases). Consider a linear space V with a duality ⟨V, V⟩. Then V is an inner product space if and only if there exists an ON-basis for V.

Proof. Clearly V is an inner product space if an ON-basis exists. Conversely, fix any basis {vi} for V, and define the matrix A = (ai,j) of ⟨V, V⟩ in this basis by ai,j := ⟨vi, vj⟩. If V is an inner product space, then A is a symmetric matrix. Using the spectral theorem, we can write D = M∗AM, for some invertible matrix M = (mi,j) and diagonal matrix D with ±1 as diagonal elements. The basis {ei} defined by ei := ∑j vj mj,i is seen to be an ON-basis.

For symplectic spaces, the following is the analogue of ON-bases. Let ⟨·, ·⟩ be a duality on V, with dim V = 2k. Then {e1, . . . , ek} ∪ {e′1, . . . , e′k} is called a Darboux basis if

⟨ei, ej⟩ = 0 = ⟨e′i, e′j⟩,  1 ≤ i, j ≤ k,
⟨ei, e′j⟩ = 0 = ⟨e′i, ej⟩,  i ≠ j, 1 ≤ i, j ≤ k,
⟨e′i, ei⟩ = 1 = −⟨ei, e′i⟩,  1 ≤ i ≤ k.

In terms of dual bases, a basis is clearly a Darboux basis if and only if

e∗i = e′i,  (e′i)∗ = −ei,  for each i = 1, . . . , k.
Exercise 1.2.11 (Existence of Darboux bases). Consider a linear space V with a duality hV, V i. Adapt the proof of Proposition 1.2.10 and prove that V is a symplectic space if and only if there exists a Darboux basis for V . Hint: The spectral theorem for normal complex linear operators applies.
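The proof of Proposition 1.2.10 obtains an ON-basis from the spectral theorem; for a concrete low-dimensional computation one can instead run a Gram–Schmidt-style procedure. A minimal Python sketch, with example data of our own choosing (Gram matrix [[1, 2], [2, −3]] in some basis {v1, v2}), assuming ⟨v1⟩² ≠ 0 so that no pivoting is needed:

```python
import math

# Gram matrix a[i][j] = <v_i, v_j> of an indefinite inner product, in a
# basis {v1, v2}. (Example data of our own, not from the book.)
a = [[1.0, 2.0], [2.0, -3.0]]

def pair(x, y):
    # <x, y> for coordinate vectors x, y in the basis {v1, v2}.
    return sum(x[i] * a[i][j] * y[j] for i in range(2) for j in range(2))

# Gram-Schmidt with normalization to <e_i>^2 = +-1; here <v1, v1> = 1,
# so v1 is already normalized and no pivoting is required.
e1 = [1.0, 0.0]
v2 = [0.0, 1.0]
w = [v2[k] - pair(v2, e1) * e1[k] for k in range(2)]   # v2 - <v2, e1> e1
s = pair(w, w)                                         # here s = -7
e2 = [wk / math.sqrt(abs(s)) for wk in w]

assert abs(pair(e1, e1) - 1.0) < 1e-12   # <e1>^2 = +1
assert abs(pair(e2, e2) + 1.0) < 1e-12   # <e2>^2 = -1: signature 0
assert abs(pair(e1, e2)) < 1e-12         # orthogonal, so an ON-basis
```

The signs ±1 that come out are exactly the data that Sylvester's law of inertia (Proposition 1.3.2 below) shows to be independent of the chosen ON-basis.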
1.3 Inner Products and Spacetime
In this section we consider non-Euclidean inner product spaces, and in particular Minkowski spacetimes, the mathematical model for special relativity theory. Definition 1.3.1. Let V be an inner product space. Let n+ be the maximal dimension of a subspace V+ ⊂ V such that hvi2 > 0 for all v ∈ V+ \ {0}, and let n− be the maximal dimension of a subspace V− ⊂ V such that hvi2 < 0 for all v ∈ V− \ {0}. The signature of V is the integer n+ − n− . We say that a subspace V1 ⊂ V is degenerate if there exists 0 6= v1 ∈ V1 such that hv1 , vi = 0 for all v ∈ V1 . Otherwise, V1 is called nondegenerate. If hu, vi = 0 for all u, v ∈ V1 , then V1 is called totally degenerate. Note that a subspace of an inner product space is itself an inner product space if and only if the subspace is nondegenerate. Also, a subspace of an inner product space is totally degenerate if and only if all its vectors are singular, as is seen through polarization, that is, the identity hu + vi2 − hu − vi2 = 4hu, vi. A nonzero singular vector spans a one-dimensional totally degenerate subspace. Proposition 1.3.2 (Sylvester’s law of inertia). Let h·, ·i be an inner product on an n-dimensional vector space V , and let n+ and n− be as in Definition 1.3.1. For every ON basis {ei } for V , the number of basis vectors with hei i2 = 1 equals n+ , and the number of basis vectors with hei i2 = −1 equals n− . If n0 denotes the maximal dimension of a totally degenerate subspace V0 ⊂ V , then n+ + n− = n,
min(n+ , n− ) = n0 .
Proof. Let V+, V−, and V0 be any Euclidean, anti-Euclidean, and totally degenerate subspaces, respectively. Then clearly V+ ∩ V− = V+ ∩ V0 = V− ∩ V0 = {0}, and it follows that n+ + n− ≤ n, n+ + n0 ≤ n, and n− + n0 ≤ n. Fix an ON-basis {ei} for V and choose V± := span{ei ; ⟨ei⟩² = ±1}. Then dim V+ + dim V− = n and dim V± ≤ n±. It follows that n± = dim V± and n+ + n− = n. From n+ + n− = n, it follows that n0 ≤ min(n − n+, n − n−) = min(n−, n+) =: m. To see that equality is attained, let V0 := span{ei1 − ej1, . . . , eim − ejm}, where ⟨eik⟩² = 1 and ⟨ejk⟩² = −1. Then V0 is seen to be totally degenerate.

Exercise 1.3.3. Generalize Proposition 1.3.2 to degenerate bilinear and symmetric forms B(·, ·). Let Rad(V) := {v ∈ V ; B(v, v′) = 0 for all v′ ∈ V} be the radical of V, and let n00 := dim Rad(V). Show that n+ + n− + n00 = n and n0 = n00 + min(n+, n−).
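The totally degenerate subspace constructed at the end of the proof can be checked directly. A minimal Python sketch for R³ with ⟨x, y⟩ = x1 y1 + x2 y2 − x3 y3 (our example, so n+ = 2 and n− = 1):

```python
# The form <x, y> = x1*y1 + x2*y2 - x3*y3 on R^3 has n+ = 2, n- = 1.
def pair(x, y):
    return x[0] * y[0] + x[1] * y[1] - x[2] * y[2]

# e1 - e3 in the notation of the proof, pairing a +1-vector with a
# -1-vector; it spans a totally degenerate line, so n0 = min(n+, n-) = 1.
v0 = (1, 0, -1)
for s in range(-3, 4):
    for t in range(-3, 4):
        assert pair([s * c for c in v0], [t * c for c in v0]) == 0
```

Since n+ + n0 ≤ n forces n0 ≤ 1 here, this single line already realizes the maximum.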
Geometrically, the most important difference between a general inner product space and Euclidean spaces concerns orthogonal complements. For any subspace V1 of a Euclidean space V, we always have a direct sum decomposition V = V1 ⊕ V1⊥, since V1 ∩ V1⊥ = {0}, because there are no singular vectors. This is not always true in general inner product spaces, but we have the following general result.

Proposition 1.3.4 (Orthogonal sums). Let V1 be a k-dimensional subspace in an n-dimensional inner product space V. Then dim V1⊥ = n − k and (V1⊥)⊥ = V1, and V1 is a nondegenerate subspace if and only if V1 ∩ V1⊥ = {0}, or equivalently, V = V1 ⊕ V1⊥. In particular, if V1 is one-dimensional and is spanned by a vector v, then V = span{v} ⊕ span{v}⊥ if and only if v is a nonsingular vector.

For the remainder of this section, we study the following non-Euclidean inner product spaces.

Definition 1.3.5 (Spacetime). An inner product space (W, ⟨·, ·⟩) is said to be a Minkowski spacetime, or spacetime for short, with n space dimensions if dim W = 1 + n and the signature is n − 1. We always index spacetime ON-bases as {e0, e1, . . . , en}, where ⟨e0⟩² = −1.

Note that in spacetime coordinates,

⟨x0 e0 + x1 e1 + · · · + xn en⟩² = −x0² + x1² + · · · + xn².

To describe the geometry given by such an inner product, we use the following terminology. See Figure 1.1.

• The double cone Wl := {v ∈ W ; ⟨v⟩² = 0} consisting of all singular vectors v is referred to as the light cone in spacetime. Vectors v ∈ Wl are called light-like. We make a choice and declare one of these two cones to be the future light cone Wl+, and the other cone Wl− is the past light cone. Thus Wl = Wl+ ∪ Wl− and Wl+ ∩ Wl− = {0}.

• We denote the interior of the light cone by Wt := {v ∈ W ; ⟨v⟩² < 0}, and it contains the time-like vectors.
Since Wt is disconnected, we write it as the disjoint union of the future time-like vectors Wt+ , which is the interior of the future light cone, and the past time-like vectors Wt− , which is the interior of the past light cone. We always assume that e0 ∈ Wt+ , that is, that e0 is a future-pointing time-like vector. • We denote the exterior of the light cone by Ws := {v ∈ W ; hvi2 > 0}, and it contains the space-like vectors. Except when the space dimension is n = 1,
Ws is connected. The whole spacetime thus can be written as the disjoint union W = Wt+ ∪ Wt− ∪ Ws ∪ Wl+ ∪ Wl− , except for the origin. • The analogue of the Euclidean unit sphere is the spacetime unit hyperboloid H(W ) := {v ∈ W ; hvi2 = ±1}. Except for space dimension n = 1, this hyperboloid has three connected components: the future time-like part H(Wt+ ) := H(W ) ∩ Wt+ , the past time-like part H(Wt− ) := H(W ) ∩ Wt− , and the space-like part H(Ws ) := H(W ) ∩ Ws = {v ∈ W ; hvi2 = +1}.
Figure 1.1: The lightcone partition of spacetime, and the straight line representing an inertial observer. Exercise 1.3.6. Let {e0 , e1 , e2 } be an ON-basis for a Minkowski spacetime W . Calculate the dual basis {v1 , v2 , v3 } ⊂ W to {e0 +e1 , e2 , e0 −e1 }. If instead {e0 , e1 , e2 } were an ON-basis for a Euclidean space V , what would this dual basis be?
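Dual bases with respect to a non-Euclidean inner product can be computed mechanically by solving the linear system ⟨wi, bj⟩ = δij. The Python sketch below does this for the Minkowski Gram matrix diag(−1, 1, 1); the helper names are ours, and we deliberately use a basis different from the one in Exercise 1.3.6, so as not to spoil the exercise.

```python
from fractions import Fraction as F

def solve(A, rhs):
    # Gaussian elimination over the rationals: return x with A x = rhs.
    n = len(A)
    M = [[F(A[i][j]) for j in range(n)] + [F(rhs[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

G = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]     # Gram matrix diag(-1, 1, 1)
basis = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]  # e0, e0 + e1, e2

def pair(u, v):
    return sum(u[i] * G[i][j] * v[j] for i in range(3) for j in range(3))

# Row j of the system for the dual vector w_i is <w, b_j> in coordinates.
A = [[sum(G[m][p] * b[p] for p in range(3)) for m in range(3)]
     for b in basis]
dual = [solve(A, [1 if j == i else 0 for j in range(3)])
        for i in range(3)]

for i in range(3):
    for j in range(3):
        assert pair(dual[i], basis[j]) == (1 if i == j else 0)
# Here dual = [-e0 - e1, e1, e2] in the {e0, e1, e2} coordinates.
```

Note how the indefinite metric flips signs compared with the Euclidean case: the vector dual to e0 is −e0 − e1, not a nonnegative combination.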
A main reason for considering Minkowski spacetime is that it is the mathematical model for Einstein's special relativity theory, when n = 3. Fix an ON-basis {e0, e1, e2, e3} with ⟨e0⟩² = −1. Once an origin is fixed, points in W are identified with vectors

x0 e0 + x1 e1 + x2 e2 + x3 e3.

The coordinates xi are lengths, and we shall use the meter [m] as the unit of length. We shall write the time coordinate x0 as x0 = ct, where t is time measured in seconds [s] and c = 299792458 [m/s] is the exact speed of light. In relativity theory, the points in spacetime are referred to as events, at time t and position x. The entire life of an observer forms a curve γ(s) ∈ W, s ∈ R, containing all the events that he is present at, at least if he has lived and will live forever. For each s ∈ R, the tangent vector γ′(s) ∈ Wt+ will be future-pointing and time-like, since the observer always moves at a speed less than that of light. An observer moving without acceleration is called an inertial observer, and is described by a straight line in spacetime W spanned by a time-like vector. The quantity √(−⟨v⟩²)/c for a time-like vector v has the meaning of time elapsed as measured by an inertial observer present at two events separated by v in spacetime. We refer to the physics literature for further details on relativity theory. See Section 1.6.

In the literature, one often models spacetime as an inner product space with signature 1 − 3, as opposed to the signature convention 3 − 1 used here. An advantage is that the important time-like vectors then have ⟨v⟩² > 0. A disadvantage is that in this case, spacetimes are close relatives of anti-Euclidean spaces rather than of Euclidean spaces. Of course, these differences are minor technical ones rather than real geometrical or physical ones.

A geometric result about spacetime subspaces that we need is the following.

Proposition 1.3.7. Let W be a spacetime and let V ⊂ W be a subspace.
Then V is of exactly one of the following types. (i) A space-like subspace. In this case V is nondegenerate and is a Euclidean space, whereas V ⊥ is a spacetime. (ii) A time-like subspace. In this case V is nondegenerate and is a spacetime, whereas V ⊥ is a Euclidean space. (iii) A light-like subspace. In this case V is a degenerate subspace and contains a unique one-dimensional subspace V0 spanned by a light-like vector. The hyperplane V0⊥ in W is the tangent space to the light cone Wl along the line V0 and V0 ⊂ V ⊂ V0⊥ . If V 0 is a complement of V0 in V , so that V = V0 ⊕V 0 , then V 0 is space-like.
Proof. Consider first the case that V is nondegenerate, and let n′± be the signature indices for V as in Proposition 1.3.2. If n+ = n and n− = 1 are the indices for W, then clearly n′− ≤ n− = 1 and n′+ ≤ n+. Thus two cases are possible. Either n′− = 0, in which case V is a Euclidean space, or n′− = 1, in which case V is a spacetime. Furthermore, if n″± are the indices for V⊥, then n′− + n″− = n−, which proves the statement about V⊥.

On the other hand, if V is a degenerate subspace, write n′00 and n′0 for the dimensions of the radical and a maximal totally degenerate subspace in V as in Exercise 1.3.3. Then

1 ≤ n′00 ≤ n′0 ≤ n0 = min(n−, n+) = 1.

Therefore min(n′+, n′−) = n′0 − n′00 = 1 − 1 = 0, and also n′− ≤ n− = 1. We claim that n′− = 0. To prove this, assume on the contrary that n′− = 1. Then n′+ = 0, so that dim V = n′00 + n′+ + n′− = 1 + 0 + 1 = 2. Let v− ∈ V be a time-like vector, and consider the splitting W = span{v−} ⊕ span{v−}⊥. If v0 ∈ Rad(V) \ {0}, then v0 = αv− + v+, which shows that V contains a space-like vector v+ = v0 − αv− by (ii). This contradicts n′+ = 0. We have proved that

n′− = 0,  n′00 = n′0 = 1,  n′+ = dim V − 1.

Write V0 := Rad(V). Then V0 ⊂ V ⊂ V0⊥. Let t ↦ v(t) ∈ Wl be a curve on the light cone such that v(0) ∈ V0 \ {0}. Then

0 = ∂t⟨v(t), v(t)⟩|t=0 = 2⟨v′(0), v(0)⟩.

This shows that the hyperplane V0⊥ must contain the tangent space to Wl along V0. Since the dimensions are equal, this proves the proposition.
1.4 Linear Maps and Tensors
We denote the set of linear operators between two given linear spaces V1 and V2 by L(V1 ; V2 ) := {T : V1 → V2 ; T is linear}, which itself forms a linear space of dimension dim V1 × dim V2 . For V1 = V2 = V , we write L(V ). The null space of a linear map T is denoted by N(T ), and its range is denoted by R(T ) = T V1 . In this section we discuss a less well known generalization that is essential to this book: the tensor product of linear spaces. Just as a linear operator can be represented by its matrix, a two-dimensional rectangular scheme of numbers, general tensor products can be represented by k-dimensional schemes of numbers. However, we shall restrict ourselves to k = 2 and the relation between operators and tensors. The construction of tensors uses the following maps. Definition 1.4.1 (Multilinearity). A map M : V1 × · · · × Vk → V , where V1 , . . . , Vk and V are linear spaces, is called multilinear, or more precisely k-linear, if for each 1 ≤ j ≤ k, the restricted map Vj 3 vj 7→ M (v1 , . . . , vj , . . . , vk ) ∈ V
is linear for every fixed vi ∈ Vi , i 6= j. When k = 2, we use the name bilinear. The construction of tensors is very similar to that of multivectors in Section 2.1, but is less geometrically transparent. Following the principle of abstract algebra, we proceed as follows to construct the tensor product V ⊗ V 0 of two given linear spaces V and V 0 . • We first note that there exist a linear space VM and a bilinear map M : V × V 0 → VM such that for two given bases {ei }1≤i≤n and {ej0 }1≤j≤n0 for V and V 0 respectively, the set {M (ei , ej0 )}1≤i≤n,1≤j≤n0 forms a basis for VM . To see this, just let VM be any linear space of dimension nn0 and define M (ei , ej ) to be some basis for VM . Then extend M to a bilinear map. • We next note that if {M (ei , e0j )}ij is a basis, then {M (fi , fj0 )}ij is also a basis for VM , for any other choice of bases {fi }i and {fj0 }j for V and V 0 respectively. Indeed, using the bilinearity one checks that {M (fi , fj0 )}ij is a linearly independent set in VM . • If M : V × V 0 → VM maps bases onto bases as above, we note the following. If N : V × V 0 → VN is any other bilinear map, then since {M (ei , e0j )}ij is a basis, setting T (M (ei , ej0 )) := N (ei , e0j ),
1 ≤ i ≤ n, 1 ≤ j ≤ n0 ,
we have the existence of a unique linear map T : VM → VN such that N = T ◦ M . If M has the property that every other bilinear map factors through it in this way, we say that M has the universal property (U). We shall encounter universal properties for other constructions, so more precisely, this is the universal property for tensor products. Conversely, if a given bilinear map M satisfies (U), then it must map bases onto bases as above. Indeed, take any bilinear map N : V × V 0 → VN such that {N (ei , ej0 )}ij is a basis. We now have a unique linear map T : VM → VN mapping {M (ei , e0j )}ij onto a basis. This is possible only if {M (ei , e0j )}ij is a basis. Definition 1.4.2 (Tensor product). Let V and V 0 be linear spaces. Fix any bilinear map M : V × V 0 → VM satisfying (U). The tensor product of V and V 0 is the linear space V ⊗ V 0 := VM . We call elements in V ⊗ V 0 tensors and we write u ⊗ v := M (u, v).
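For readers who want something concrete, the construction above can be modeled in a few lines of Python: take V = Rᵐ and V′ = Rⁿ, and let M(u, v) be the outer product, so that the M(ei, e′j) are the matrix units, which indeed form a basis. All names here are ours, not the book's.

```python
# A concrete model of V (x) V' for V = R^m, V' = R^n: tensors are m x n
# coefficient arrays, and the bilinear map M is the outer product.
def tensor(u, v):
    return [[ui * vj for vj in v] for ui in u]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

u1, u2, v = [1, 2], [3, -1], [5, 0, 7]

# Bilinearity in the first slot: (u1 + u2) (x) v = u1 (x) v + u2 (x) v.
lhs = tensor([a + b for a, b in zip(u1, u2)], v)
rhs = add(tensor(u1, v), tensor(u2, v))
assert lhs == rhs

# A general tensor sum_ij a_ij e_i (x) e'_j is just the array (a_ij);
# simple tensors u (x) v are exactly the arrays of rank at most one.
```

In this model the universal property is the familiar fact that a bilinear map out of Rᵐ × Rⁿ is determined by its mn values on pairs of basis vectors.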
Note that if some other bilinear map N : V × V′ → VN satisfies (U), then the linear map T : VM → VN given by the universal property for M has inverse T⁻¹ : VN → VM given by the universal property for N. Therefore, T provides a unique identification of VM and VN. By the principle of abstract algebra, our definition of V ⊗ V′ makes sense.

If {ei} and {e′j} are bases for V and V′, then a general tensor in V ⊗ V′ is of the form

∑i,j αij ei ⊗ e′j,

for some αij ∈ R.

Proposition 1.4.3 (Operator = tensor). Let V1 and V2 be linear spaces and consider a duality ⟨V1∗, V1⟩. Then there is a unique invertible linear map V2 ⊗ V1∗ → L(V1; V2) such that v ⊗ θ ↦ T, where Tx := ⟨θ, x⟩v, x ∈ V1.

Proof. Consider the bilinear map

V2 × V1∗ → L(V1; V2) : (v, θ) ↦ T,

where T(x) := ⟨θ, x⟩v for all x ∈ V1. According to the universal property for V2 ⊗ V1∗, there exists a unique linear map V2 ⊗ V1∗ → L(V1; V2) such that v ⊗ θ ↦ T. Let {e′i} be a basis for V2, and let {ej} be a basis for V1 with dual basis {e∗j} for V1∗. Then we see that the tensor

∑ij αij e′i ⊗ e∗j
maps onto the linear operator with matrix {αij }ij . This proves the invertibility. The following shows how this translation between tensors and linear operators works. • If T = v ⊗ θ : V1 → V2 and T 0 = v 0 ⊗ θ0 : V2 → V3 , then the composed operator T 0 ◦ T : V1 → V3 corresponds to the tensor (v 0 ⊗ θ0 )(v ⊗ θ) = hθ0 , viv 0 ⊗ θ. This yields a multiplication of tensors, which is referred to as a contraction. • Let T : V → V be a linear operator on a linear space V . Applying the universal property to the pairing V × V ∗ → R : (v, θ) 7→ hθ, vi,
we get a canonical linear map
Tr : L(V) = V ⊗ V∗ → R.

The obtained number Tr(T) ∈ R is called the trace of the operator T. If {ei} is a basis for V, then Tr(T) = ∑i αii if {αij} is the matrix for T.

• If V1 and V2 are two linear spaces, then there is a natural swapping map

S : V2 ⊗ V1∗ → V1∗ ⊗ V2 : v ⊗ θ ↦ θ ⊗ v,

defined using the universal property. Identifying V2 ⊗ V1∗ = L(V1; V2) and V1∗ ⊗ V2 = L(V2∗; V1∗), this map S of tensors corresponds to the operation of taking adjoints of linear operators. Recall that the adjoint, or dual, of a linear operator T ∈ L(V1; V2) is T∗ ∈ L(V2∗; V1∗) given by

⟨T∗θ, v⟩ = ⟨θ, Tv⟩,
θ ∈ V2∗ , v ∈ V1 .
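The first two bullets translate directly into matrix computations. A Python sketch (our names) with v ⊗ θ represented as the rank-one matrix (vi θj)ij:

```python
# Simple tensors as rank-one operators (Prop. 1.4.3), with contraction
# as composition and the trace as the canonical pairing.
def rank_one(v, theta):
    # The operator x |-> <theta, x> v, as a matrix in fixed bases.
    return [[vi * tj for tj in theta] for vi in v]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def dual_pair(theta, v):
    return sum(t * x for t, x in zip(theta, v))

v, theta = [1, 2], [0, 3, -1]        # v (x) theta : R^3 -> R^2
vp, thetap = [4, 5, 6], [2, 1]       # v' (x) theta' : R^2 -> R^3

# Contraction: (v' (x) theta')(v (x) theta) = <theta', v> v' (x) theta.
lhs = matmul(rank_one(vp, thetap), rank_one(v, theta))
scale = dual_pair(thetap, v)
rhs = [[scale * x for x in row] for row in rank_one(vp, theta)]
assert lhs == rhs

# The trace of v (x) theta on V = R^3 is the pairing <theta, v>.
sq = rank_one([1, 2, 3], [4, 5, 6])
assert sum(sq[i][i] for i in range(3)) == dual_pair([4, 5, 6], [1, 2, 3])
```

The swapping map S is simply matrix transposition in this picture, matching the identification of S with the adjoint.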
• Let V be a Euclidean space, and let T = T∗ be a symmetric operator. By the spectral theorem, there exists an ON-basis {ei} for V in which T has a diagonal matrix. Translated to tensors, this result means that if a tensor w ∈ V ⊗ V is fixed by the above swapping map S, then there is an ON-basis in which

w = ∑i αi ei ⊗ ei,

where we as usual identify V and V∗ through the inner product.

• Let V and V′ be two Euclidean spaces, and w ∈ V ⊗ V′. Then there exist ON-bases {ej} for V and {e′j} for V′, and µj ∈ R such that

w = ∑j=1..n µj ej ⊗ e′j.
This follows, by translation to tensors, from the spectral theorem and Proposition 1.4.4 for operators, where µj are the singular values of the corresponding operator. Proposition 1.4.4 (Polar decomposition). Let V1 , V2 be Euclidean spaces, and consider an invertible linear map T ∈ L(V1 , V2 ). Then there exists a unique symmetric map S ∈ L(V1 ) such that hSu, ui > 0 for all u ∈ V1 \ {0}, and a unique isometric map U ∈ L(V1 , V2 ) such that T = U S. Similarly, there exists a unique factorization T = S 0 U 0 of T , where S 0 is positive symmetric on V2 and U 0 : V1 → V2 is isometric. We have U 0 = U and S 0 = U SU ∗ .
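The existence part of the factorization T = US is easy to try numerically. A minimal Python sketch for a 2 × 2 example (helper names are ours; the closed-form square root of a positive 2 × 2 matrix follows from the Cayley–Hamilton theorem):

```python
import math

# Polar decomposition T = U S of an invertible 2x2 map, using
#   sqrt(M) = (M + sqrt(det M) I) / sqrt(tr M + 2 sqrt(det M))
# for a positive 2x2 matrix M (Cayley-Hamilton).
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(a):
    return [[a[j][i] for j in range(2)] for i in range(2)]

def sqrt2x2(m):
    d = math.sqrt(m[0][0] * m[1][1] - m[0][1] * m[1][0])
    t = math.sqrt(m[0][0] + m[1][1] + 2 * d)
    return [[(m[i][j] + (d if i == j else 0.0)) / t for j in range(2)]
            for i in range(2)]

def inv2x2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

T = [[0.0, -2.0], [1.0, 0.0]]           # invertible, not symmetric
S = sqrt2x2(matmul(transpose(T), T))    # S = (T*T)^(1/2), here diag(1, 2)
U = matmul(T, inv2x2(S))                # U = T S^(-1), here a rotation

UtU = matmul(transpose(U), U)
US = matmul(U, S)
for i in range(2):
    for j in range(2):
        assert abs(UtU[i][j] - (1.0 if i == j else 0.0)) < 1e-9  # isometric
        assert abs(US[i][j] - T[i][j]) < 1e-9                    # T = U S
```

For this T, the polar factors are a stretch by 2 in the second coordinate followed by a quarter-turn, exactly the geometric content of the proposition.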
Proof. For such S, U we have T∗T = S(U∗U)S = S². Thus S = (T∗T)^(1/2), so S and U are uniquely determined by T. To show existence, define S := (T∗T)^(1/2) and U := T(T∗T)^(−1/2). Then S is positive, T = US, and

⟨Ux, Uy⟩ = ⟨TS⁻¹x, TS⁻¹y⟩ = ⟨S²S⁻¹x, S⁻¹y⟩ = ⟨x, y⟩.

Similarly, U′ = (TT∗)^(−1/2)T = T(T∗T)^(−1/2) = U, since V A^(−1/2) V⁻¹ = (V A V⁻¹)^(−1/2) for every positive A and invertible V.

1.5 Complex Linear Spaces
The fundamental constructions of exterior algebras and Clifford algebras in this book can be made for linear spaces over more general fields than the real numbers R. We will consider only the field of complex numbers C besides R, which is particularly useful in analysis. We write the complex conjugate of z ∈ C as z^c. Given a complex matrix A = (aij)ij, its conjugate transpose is A∗ := (a^c_ji)ij, as compared to its transpose Aᵗ := (aji)ij.

Definition 1.5.1. A complex linear space (V, +, ·) is an abelian group (V, +) together with a scalar multiplication C × V → V that is bilinear with respect to the addition operations and a group action of the multiplicative group C∗ = C \ {0} on V.

By a complex vector space we shall mean simply a complex linear space, without any interpretation like that in Definition 1.1.2, since this concerns the additive structure of the vector space. Before proceeding with the algebra, an example is in order, to show why complex linear spaces are natural and very useful in analysis.

Example 1.5.2 (Time-harmonic oscillations). Consider a quantity f(t, x) that depends on time t ∈ R and position x in some space X. We assume that f takes values in some real linear space. Fixing a basis there, we can assume that f(t, x) ∈ Rᴺ. One example is the electromagnetic field, in which case N = 6, since it consists of a three-dimensional electric field and a three-dimensional magnetic field. The most convenient way to represent f oscillating at a fixed frequency ω ∈ R is to write

f(t, x) = Re(F(x)e^(−iωt)),

for a function F : X → Cᴺ, where the real part is taken componentwise. In this way, each component fk(t, x), k = 1, . . . , N, at each point x will oscillate at frequency ω. The complex-valued function F has a very concrete meaning: the absolute value |Fk(x)| is the amplitude of the oscillation of component k at the point x, and the argument arg Fk(x) is the phase of this oscillation. Note that we
do not assume that the oscillations at different points have the same phase; this happens only for standing waves.

Since the complex field has two automorphisms, the identity and complex conjugation, there are two types of dualities that are natural to consider. These correspond to linear and antilinear identifications of V′ and the dual space V∗ = {θ : V → C ; θ is complex linear} of V.

• A complex bilinear duality of two complex linear spaces V′ and V is a complex bilinear map V′ × V → C : (v′, v) ↦ ⟨v′, v⟩ that is nondegenerate. When V′ = V, we refer to a bilinear duality as a complex bilinear inner product if it is symmetric, that is, if ⟨x, y⟩ = ⟨y, x⟩. A main difference is that notions like signature are not present in the complex bilinear case, since we can normalize −⟨x, x⟩ = ⟨ix, ix⟩.

• A complex sesquilinear duality of V′ and V is a nondegenerate pairing (·, ·⟩ such that (v′, ·⟩ is complex linear for each v′ ∈ V′ and (·, v⟩ is complex antilinear for each v ∈ V. Note the difference in left and right parentheses, which we use to indicate the sesquilinearity. When V′ = V, we refer to a sesquilinear duality as a complex inner product if it is symmetric, that is, if (x, y⟩ = (y, x⟩^c. A complex inner product is called Hermitian if it is positive definite, that is, (u, u⟩ > 0 for all u ∈ V \ {0}. The norm associated with a Hermitian inner product is |u| := √(u, u⟩.

The existence of the following types of canonical bases can be derived from the spectral theorem for normal complex linear operators.

Proposition 1.5.3 (Complex ON-bases). Let V be a complex linear space.

(i) A sesquilinear duality (·, ·⟩ is symmetric if and only if there exists a basis {ei} that is ON in the sense that (ei, ej⟩ = 0 when i ≠ j and (ei, ei⟩ = ±1.

(ii) A bilinear duality ⟨·, ·⟩ is symmetric in the sense that ⟨v1, v2⟩ = ⟨v2, v1⟩ if and only if there exists a basis {ei} that is ON in the sense that ⟨ei, ej⟩ = 0 when i ≠ j and ⟨ei, ei⟩ = 1.

Exercise 1.5.4.

(i) Prove that a sesquilinear duality (x, y⟩ is skew-symmetric, that is, (x, y⟩ = −(y, x⟩^c, if and only if i(x, y⟩ is an inner product.

(ii) Prove that a bilinear duality ⟨·, ·⟩ is skew-symmetric in the sense that ⟨v1, v2⟩ = −⟨v2, v1⟩ if and only if dim V = 2k and there exists a Darboux basis, that is, a basis {e1, . . . , ek} ∪ {e′1, . . . , e′k} in which the only nonzero pairings are ⟨e′i, ei⟩ = 1, ⟨ei, e′i⟩ = −1, i = 1, . . . , k.
We next consider the relation between real and complex linear spaces. We first consider how any complex linear space can be turned into a real linear space, and how to reverse this process.

• Let V be a complex linear space. Simply forgetting about the possibility of scalar multiplication by nonreal numbers, V becomes a real linear space, which we denote by V. Note that dim_R V = 2 dim_C V, where on the left V is regarded as real and on the right as complex. Besides this real linear structure, V is also equipped with the real linear operator

J : V → V : v ↦ iv,

which has the property that J² = −I. A complex linear map T : V1 → V2 is the same as a real linear map T : V1 → V2 between these spaces regarded as real linear spaces, for which TJ1 = J2T. Given a complex functional θ ∈ V∗, the real linear functional V ∋ v ↦ Re θ(v) ∈ R belongs to the real dual V∗. This gives a real linear one-to-one correspondence between the complex dual and the real dual. In particular, if (·, ·⟩ is a complex inner product on V, taking the real part of the antilinear identification V → V∗, we obtain a real inner product

⟨v′, v⟩_R := Re(v′, v⟩

on V, and ⟨·, ·⟩_R is a Euclidean inner product if and only if (·, ·⟩ is a Hermitian inner product. It is possible but less useful to start with a complex bilinear inner product, since this always leads to a real inner product with signature zero.

• We can reverse the above argument. Let V be a real linear space equipped with a complex structure, that is, a real linear operator J : V → V such that J² = −I. Then

(α + βi)v := αv + βJ(v),
v ∈ V, α, β ∈ R,
defines a complex scalar multiplication, which turns V into a complex linear space V. If dim V is odd, then no such J exists, since we would then have (det J)² = det(−I) = (−1)ⁿ = −1, which is unsolvable over R. If dim V is even, there are infinitely many complex structures among which to choose. Indeed, if {e1, . . . , e2k} is any basis, then J
(∑j=1..k (α2j−1 e2j−1 + α2j e2j)) = ∑j=1..k (−α2j e2j−1 + α2j−1 e2j)

is one such complex structure.
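A quick Python check of this complex structure (our code, with the standard basis of R⁴):

```python
# The complex structure above on R^(2k): J(e_{2j-1}) = e_{2j} and
# J(e_{2j}) = -e_{2j-1}, so J^2 = -I, and (a + bi).v := a v + b J(v)
# defines complex scalar multiplication.
def J(v):
    out = []
    for j in range(0, len(v), 2):
        out += [-v[j + 1], v[j]]
    return out

def cmul(z, v):
    a, b = z.real, z.imag
    return [a * x + b * y for x, y in zip(v, J(v))]

v = [1.0, 2.0, 3.0, 4.0]
assert J(J(v)) == [-x for x in v]            # J^2 = -I
assert cmul(1j, v) == J(v)                   # i acts as J
assert cmul(1j * 1j, v) == [-x for x in v]   # i^2 = -1
```

Different pairings of the basis vectors give different operators J, which is one way to see that the complex structure is far from unique.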
If furthermore the complex structure J on V is an isometry J ∗ J = I, or equivalently skew-adjoint, then polarizing hv 0 , viR = Re(v 0 , vi recovers the sesquilinear duality (v 0 , vi = hv 0 , viR − ihv 0 , JviR . We next consider how any real linear space can be embedded in a complex linear space, and how to reverse this process. • Let V be a real linear space. Define the real linear space V ⊕ V , and consider V as a subspace of V ⊕ V by identifying v ∈ V and (v, 0) ∈ V ⊕ V . Define the standard complex structure J(v1 , v2 ) := (−v2 , v1 ),
(v1 , v2 ) ∈ V ⊕ V.
Then the complex linear space Vc := (V ⊕ V, J) is called the complexification of V. The complex vector (v1, v2) is usually written as the formal sum v1 + iv2, so that complex scalar multiplication becomes

(α + βi)(v1 + iv2) = (αv1 − βv2) + i(αv2 + βv1).

The complexification Vc of a real linear space V is a complex linear space, with dim_C Vc = dim_R V, which comes with two canonical real linear subspaces. Defining a complex conjugation operator (x + iy)^c := x − iy, this is a complex antilinear operation that fixes V ⊂ Vc and squares to the identity. A real linear map T : V → V′ extends to a complex linear map Tc : Vc → V′c by complexification: Tc(v1 + iv2) := Tv1 + iTv2.

The complexification (V∗)c of the real dual can in a natural way be identified with the complex dual (Vc)∗ of the complexification, through the complex linear invertible map given by ⟨θ1 + iθ2, v1 + iv2⟩ := ⟨θ1, v1⟩ − ⟨θ2, v2⟩ + i(⟨θ1, v2⟩ + ⟨θ2, v1⟩). In particular, if ⟨·, ·⟩ is a duality on V, by complexifying the linear identification V → V∗, we obtain a complex bilinear inner product ⟨·, ·⟩_C on Vc, described by Vc → (V∗)c = (Vc)∗. Concretely,

⟨u′ + iv′, u + iv⟩_C := ⟨u′, u⟩ − ⟨v′, v⟩ + i(⟨v′, u⟩ + ⟨u′, v⟩).

Alternatively, we may equip Vc with the complex (sesquilinear) inner product

(u′ + iv′, u + iv⟩_C := ⟨u′, u⟩ + ⟨v′, v⟩ + i(−⟨v′, u⟩ + ⟨u′, v⟩),

which is Hermitian if ⟨·, ·⟩ is Euclidean. We can also complexify a real associative algebra (A, +, ∗, 1), by complexifying the linear space A as well as the bilinear product ∗, to obtain an associative algebra Ac over the complex field.
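Both complexified products can be sanity-checked by identifying (Rⁿ)c with Cⁿ. In the Python sketch below (our names and data), the bilinear product is ∑k z′k zk and the sesquilinear one is ∑k (z′k)^c zk:

```python
# Checking the two complexified products on (R^2)_c = C^2, where <.,.>
# is the Euclidean dot product on R^2.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

up, vp = [1.0, 2.0], [3.0, -1.0]   # u' + i v'
u, v = [0.5, 4.0], [2.0, 1.0]      # u  + i v
zp = [complex(a, b) for a, b in zip(up, vp)]
z = [complex(a, b) for a, b in zip(u, v)]

# Bilinear: <u'+iv', u+iv>_C = <u',u> - <v',v> + i(<v',u> + <u',v>),
# which is sum_k z'_k z_k with no conjugation.
bilinear = complex(dot(up, u) - dot(vp, v), dot(vp, u) + dot(up, v))
assert sum(a * b for a, b in zip(zp, z)) == bilinear

# Sesquilinear: (u'+iv', u+iv>_C = <u',u> + <v',v> + i(-<v',u> + <u',v>),
# which is sum_k conj(z'_k) z_k, Hermitian since <.,.> is Euclidean.
sesqui = complex(dot(up, u) + dot(vp, v), -dot(vp, u) + dot(up, v))
assert sum(a.conjugate() * b for a, b in zip(zp, z)) == sesqui
```

Note that the bilinear product has singular vectors (for example 1 + i in C), while the sesquilinear one is positive definite, illustrating why only the latter can be Hermitian.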
• We can reverse the above argument. Let V be any complex linear space equipped with a real structure, that is, a complex antilinear operator V → V : z ↦ z^c such that (z^c)^c = z. Then V is isomorphic to the complexification Vc of the real subspace V := {z ∈ V ; z^c = z} through

V ∋ z = x + iy ←→ (x, y) = ((1/2)(z + z^c), (1/2i)(z − z^c)) ∈ Vc.

Clearly, on any complex linear space there are infinitely many real structures.

An important advantage over the real theory is that every complex linear operator has an eigenvector, by the fundamental theorem of algebra. For a normal operator, that is, if T∗T = TT∗ on a Hermitian space, we can iterate this result on the orthogonal complement, yielding an ON-basis of eigenvectors. If we apply these results to the complexification of a real linear operator, we obtain the following real results.

• Every real linear map T : V → V has either an eigenvector or an invariant two-dimensional subspace. More precisely, in the latter case there exist α, β ∈ R, with β ≠ 0, and linearly independent vectors v1, v2 ∈ V such that

T(v1) = αv1 − βv2,
T (v2 ) = βv1 + αv2 .
• Let T : V → V be a real linear normal operator, that is, T∗T = TT∗, on a Euclidean space. Then there exists an ON-basis in which the matrix for T is block diagonal, with 2 × 2 and 1 × 1 blocks along the diagonal. Examples include isometries and skew-symmetric maps.
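As a concrete numerical sketch (not from the book), the 2 × 2 rotation-dilation below realizes the invariant-plane alternative: for β ≠ 0 it has no real eigenvector, but it satisfies exactly the two displayed relations for v1 = (1, 0), v2 = (0, 1).

```python
# Sketch (not from the book): T(x, y) = (αx + βy, -βx + αy) realizes
# T(v1) = α v1 - β v2 and T(v2) = β v1 + α v2 for v1 = (1,0), v2 = (0,1).
alpha, beta = 0.5, 2.0

def T(v):
    x, y = v
    return (alpha * x + beta * y, -beta * x + alpha * y)

v1, v2 = (1.0, 0.0), (0.0, 1.0)
assert T(v1) == (alpha, -beta)        # T v1 = α v1 - β v2
assert T(v2) == (beta, alpha)         # T v2 = β v1 + α v2
# No real eigenvector: the characteristic polynomial λ² - 2αλ + (α² + β²)
# has negative discriminant -4β² whenever β ≠ 0.
disc = (2 * alpha) ** 2 - 4 * (alpha ** 2 + beta ** 2)
assert disc == -4 * beta ** 2 < 0
```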
1.6
Comments and References
1.1 A reference for basic algebraic structures such as groups, rings, fields, vector spaces, and algebras is Nicholson [73]. 1.2 I thank Mats Aigner, Linköping University, for suggesting the notation for dualities used in this book, which incorporates the dual space of linear functionals as a special case. 1.3 Spacetime in the sense of Definition 1.3.5 was first constructed by Hermann Minkowski (1864–1909), for Maxwell's equations. He had Albert Einstein as a student, and when Einstein later created his special theory of relativity, Minkowski realized that it could be modeled mathematically by a four-dimensional spacetime. A reference for the theory of relativity is Rindler [79]. The most common sign convention for spacetime in the literature is + − −−, that is, opposite to the sign − + ++ used in this book.
1.4 Tensors and tensor products appear in the work of J.W. Gibbs (1839–1903), although some specific examples of tensors, such as the Cauchy stress tensor and the Riemann curvature tensor, had been found earlier. A reference for our construction of tensor products, using the universal property, is Greub [46]. 1.5 We use the word Hermitian as the complex analogue of Euclidean, with a meaning of positivity. However, in many contexts in the literature, Hermitian refers to conjugate symmetry, without any implied positivity. The proof of Proposition 1.5.3(ii) uses a variant of the spectral theorem known as the Autonne–Takagi factorization. An equivalent way to define the complexification Vc of a real linear space V, which is standard but not used in this book, is as the tensor product Vc := V ⊗ C of real linear spaces.
Chapter 2
Exterior Algebra
Prerequisites: This chapter is where this book starts, and everything else in the book depends on it, except for Section 2.9, which is not needed elsewhere. Chapter 1 is meant to be used as a reference while reading this and later chapters. Otherwise, a solid background in linear algebra should suffice. Section 2.4 requires a small amount of analysis.
Road map: We all know the algebra of vectors, the one-dimensional oriented/directed arrows. Here we construct and develop the algebra for bivectors, the two-dimensional oriented objects, 3-vectors, the three-dimensional oriented objects, and so on, which live in n-dimensional affine space. In total we obtain a linear space of dimension 2^n containing all the multivectors in the space, referred to as the exterior algebra. Algebraically, multivectors are in some sense nothing but rectangular determinants, but it is important to understand the geometry to be able to use the theory. Sections 2.2 and 2.4 aim to convey the geometric meaning of multivectors to the reader. Most applications use Euclidean space, but for a number of practical reasons, including applications to Minkowski spacetime, we allow for more general inner product spaces and dualities. The exterior product u ∧ v can be seen as a higher-dimensional generalization of the vector product, but in a more fundamental way, so that it corresponds to the direct sum [u] ⊕ [v] of subspaces [u] and [v]. Since ∧ is noncommutative, two different but closely related dual products come into play, the right and left interior products v x u and u y v, which geometrically correspond to the orthogonal complement [u]⊥ ∩ [v] of the subspace [u] in a larger subspace [v]. When the larger space is the whole space, we have the Hodge star map, which corresponds to taking orthogonal complements of subspaces.
Developing the algebra of these products of multivectors, we obtain a geometric bird's-eye view of various algebraic results in linear algebra, such as identities for the vector product, Cramer's rule, the cofactor formula for inverses of linear maps, and expansion rules for determinants.
Highlights:
• Simple k-vectors ↔ k-dimensional subspaces: 2.2.3
• Factorization algorithm for k-vectors: 2.2.8
• Geometry of Cramer's rule: 2.3.6
• Algebra for interior product: 2.6.3
• Geometry of cofactor formula: 2.7.1
• Anticommutation relation between exterior and interior products: 2.8.1
2.1
Multivectors
Let us fix an affine space (X, V) of dimension 1 ≤ n < ∞. The letter n will be the standard notation for the dimension of the vector space V. We set out to construct, for any 0 ≤ k ≤ n, a linear space ∧^k V of k-vectors in X. A k-vector w ∈ ∧^k V is to be interpreted as an affine k-dimensional object in X determined by its orientation and k-volume. When k = 1, then ∧^1 V := V and 1-vectors are simply vectors in X, or oriented 1-volumes. We build k-vectors from vectors using certain multilinear maps. See Definition 1.4.1.

Lemma 2.1.1. For a multilinear map M : V × · · · × V → L, the following are equivalent:
(i) M(v1, . . . , vk) = 0 whenever {v1, . . . , vk} are linearly dependent.
(ii) M(v1, . . . , vk) = 0 whenever vi = vj for some i ≠ j.
(iii) M is alternating, that is, for all 1 ≤ i < j ≤ k and vectors {vm}, we have M(v1, . . . , vi, . . . , vj, . . . , vk) = −M(v1, . . . , vj, . . . , vi, . . . , vk).

Proof. That (i) implies (ii) is clear, as is (iii) implies (ii). For (ii) implies (i), recall that if {v1, . . . , vk} are linearly dependent, then vj = Σ_{i≠j} x_i v_i for some j. Making this substitution and expanding by multilinearity shows that all terms have two identical factors. This proves (i), using (ii). Finally, to prove (ii) implies (iii), note that

0 = M(v1, . . . , vi + vj, . . . , vi + vj, . . . , vk) = M(v1, . . . , vi, . . . , vj, . . . , vk) + M(v1, . . . , vj, . . . , vi, . . . , vk),

from which (iii) follows.
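A quick numerical illustration of Lemma 2.1.1 (not from the book), using the familiar alternating multilinear map M = det on R³:

```python
# Illustrates Lemma 2.1.1 for M = det on R^3: the map is alternating, and
# linearly dependent arguments give zero.

def det3(u, v, w):
    # 3x3 determinant with rows u, v, w (the triple product <u, v x w>)
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

u, v, w = (1, 2, 3), (0, 1, 4), (2, -1, 0)
# (iii) alternating: swapping two arguments flips the sign
assert det3(u, v, w) == -det3(v, u, w)
# (i) linearly dependent arguments give 0, e.g. third argument = u + 2v
dep = tuple(a + 2 * b for a, b in zip(u, v))
assert det3(u, v, dep) == 0
```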
The theory of k-vectors can be thought of as a theory of rectangular determinants. Let us start with a definition of the usual concept of a (quadratic) determinant from linear algebra.

Proposition 2.1.2 (Determinant). There exists a unique multilinear map det : R^n × · · · × R^n → R, where the number of copies of R^n is n, with the following properties.
(A) If the vectors {v1, . . . , vn} are linearly dependent, then det(v1, . . . , vn) = 0.
(B) If {ei} is the standard basis, then det(e1, . . . , en) = 1.

Let us sketch the proof of this well-known fact. If det exists, then (A), (B), and multilinearity show that for any vectors vj = Σ_i α_{i,j} e_i, we must have

det(v1, . . . , vn) = Σ_{s1=1}^{n} · · · Σ_{sn=1}^{n} α_{s1,1} · · · α_{sn,n} ε(s1, . . . , sn),   (2.1)

where ε(s1, . . . , sn) is zero if an index is repeated and otherwise denotes the sign of the permutation (s1, . . . , sn) ↦ (1, . . . , n). Hence uniqueness is clear. Note now that if such a det exists, then necessarily it must satisfy (2.1). Thus all that remains is to take (2.1) as the definition and verify properties (A) and (B). Note carefully this frequently useful technique to prove existence, using inspiration from a uniqueness proof.

If vj = Σ_i α_{i,j} e_i and A = (α_{i,j}), then we use the standard notation

det(v1, . . . , vn) = det(A) =
| α_{1,1} · · · α_{1,n} |
|    ⋮     ⋱     ⋮    |
| α_{n,1} · · · α_{n,n} |.
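Formula (2.1) can be turned directly into (inefficient but instructive) code; the sketch below is not from the book, and the helpers `sign` and `prod` are ad hoc. It sums over all permutations exactly as in the uniqueness argument.

```python
from itertools import permutations

# Leibniz-style determinant, following the structure of formula (2.1):
# det(A) = sum over permutations p of sign(p) * A[p(1)][1] * ... * A[p(n)][n].

def sign(perm):
    # sign of a permutation via its inversion count
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def prod(xs):
    result = 1
    for x in xs:
        result *= x
    return result

def det(A):
    n = len(A)
    return sum(sign(p) * prod(A[p[k]][k] for k in range(n))
               for p in permutations(range(n)))

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
# cofactor expansion along the first row: 2*(3*4 - 0*1) - 0 + 1*(1*1 - 3*0) = 25
assert det(A) == 25
```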
We now generalize this construction to fewer than n vectors, replacing the range R by a more general linear space L.

Proposition 2.1.3. Let 2 ≤ k ≤ n and let {e1, . . . , en} be a basis for V. Then there exist a linear space L and a multilinear map ∧^k : V × · · · × V → L, where the number of copies of V is k, that satisfy the following properties.
(A) If the {v1, . . . , vk} are linearly dependent, then ∧^k(v1, . . . , vk) = 0.
(B) The set {∧^k(e_{s1}, . . . , e_{sk})}_{s1<···<sk}

> 0 when X is Euclidean space.
• The function f is C^k-regular in D, k = 0, 1, 2, . . ., if all directional/partial derivatives ∂_{v1} · · · ∂_{vm} f(x) of order m ≤ k exist as continuous functions of x ∈ D, for all directions vi ∈ V. Here

∂_v f(x) := lim_{h→0} (f(x + hv) − f(x))/h.
Given a basis {ei}, we write ∂i := ∂_{ei}. We say that f is C^∞-regular if it is C^k-regular for all k < ∞.
• The function f is Hölder regular of order 0 < α < 1 in D if

|f(x) − f(y)| ≲ |x − y|^α, for all x, y ∈ D,

and we write f ∈ C^α(D; L). For α = 0, f ∈ C^0(D; L) = C(D; L) means that f is continuous on D. When α = 1, we say that f is Lipschitz regular and write f ∈ C^{0,1}(D; L). Note that the precise value of the implicit constant C as in Definition 6.1.1, but not the Hölder or Lipschitz property of f, depends on the choice of Euclidean norm | · | on X.
• A bijective function f : D → D′, with an open set D′ ⊂ L, is a homeomorphism if f ∈ C^0(D; D′) and f^{−1} ∈ C^0(D′; D). Lipschitz diffeomorphisms and C^k-diffeomorphisms are defined similarly. A diffeomorphism refers to a C^∞-diffeomorphism.
• The support of a function defined in X is the closed set supp f := {x ∈ X ; f(x) ≠ 0}. If f ∈ C^∞(D), D an open set, then we write f ∈ C0^∞(D) if supp f is a compact subset of D.
• Write C^k(D) := {F|_D ; F ∈ C^k(X)}, and similarly for C^α and C^∞. When the range L of the function is clear from the context, we suppress L in the notation and abbreviate C^k(D; L) to C^k(D).

Definition 6.1.2 (Total derivative). Let D ⊂ X be an open set in an affine space X with vectors V, and let (X′, V′) be a second affine space. If ρ : D → X′ is differentiable at x ∈ D, then we define its total derivative at x to be the unique linear map ρ_x : V → V′ such that ρ(x + v) − ρ(x) = ρ_x(v) + o(v), where o(v) denotes a function λ(v) such that λ(v)/|v| → 0 when v → 0. With respect to bases {ei} and {e′i}, ρ_x has matrix

| ∂1ρ1(x) · · · ∂kρ1(x) |
|    ⋮      ⋱      ⋮    |
| ∂1ρn(x) · · · ∂kρn(x) |,

where ρ = Σ_i ρ_i e′_i and partial derivatives ∂i = ∂_{ei} are with respect to ei. Equivalently, the total derivative is ρ_x(v) = ∂_v ρ(x). Note that when ρ maps between affine spaces, the total derivative ρ_x maps between the vector spaces, since differences of points are vectors. To simplify notation, we shall often drop subscripts x and write ρ.
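The limit definition of ∂_v f above can be sanity-checked by finite differences; the following sketch is not from the book and uses the ad hoc example f(x1, x2) = x1²x2.

```python
# Finite-difference check of the directional derivative
# ∂_v f(x) = lim_{h→0} (f(x + hv) - f(x))/h, for f(x1, x2) = x1² x2.

def f(x):
    return x[0] ** 2 * x[1]

x, v = (1.0, 2.0), (3.0, 1.0)
h = 1e-6
fd = (f((x[0] + h * v[0], x[1] + h * v[1])) - f(x)) / h
# exact value: ∂1f·v1 + ∂2f·v2 = (2 x1 x2)·3 + (x1²)·1 = 12 + 1 = 13
exact = 2 * x[0] * x[1] * v[0] + x[0] ** 2 * v[1]
assert abs(fd - exact) < 1e-4
```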
The total derivative of a differentiable map between affine spaces extends from a map of vectors to a map of multivectors as in Section 2.3. With our notation, for example, the chain rule takes the form

(ρ2 ∘ ρ1)_x(w) = (ρ2)_{ρ1(x)}((ρ1)_x(w)), w ∈ ∧V1,

for the composition of maps ρ1 : X1 → X2 and ρ2 : X2 → X3.

Definition 6.1.3 (Jacobian). Let ρ : D → X′ be as in Definition 6.1.2, with total derivative ρ_x : V → V′. Denote by ρ_x : ∧V → ∧V′ the induced linear map. Assume that X, X′ are oriented n-dimensional affine spaces with orientations e_n and e′_n respectively. Then its Jacobian J_ρ(x) is the scalar function representing ρ_x|_{∧^n V}, that is, the determinant

J_ρ(x) := ⟨e′∗_n, ρ_x(e_n)⟩

of ρ_x. The main use of Jacobians is in the change of variables formula

∫_{ρ(D)} f(y) dy = ∫_D f(ρ(x)) J_ρ(x) dx   (6.2)

for integrals. For a Lipschitz change of variables ρ, this continues to hold. Note that in this case J_ρ is well defined almost everywhere, since Lipschitz maps ρ are differentiable almost everywhere by Rademacher's theorem. We use the following standard terminology for domains D ⊂ X.

Definition 6.1.4 (Domains). Let D be a domain, that is, an open subset, in an n-dimensional affine space (X, V). We say that D is a C^k-domain, k = 1, 2, . . ., if its boundary ∂D is C^k-smooth in the following sense. At each p ∈ ∂D, we assume that there exists a C^k diffeomorphism ρ : Ω_p → D_p between a neighborhood Ω_p ⊂ R^n of 0 and a neighborhood D_p ⊂ X such that ρ({x ∈ Ω_p ; x_n > 0}) = D_p ∩ D, ρ({x ∈ Ω_p ; x_n = 0}) = D_p ∩ ∂D, and ρ({x ∈ Ω_p ; x_n < 0}) = D_p \ D. Lipschitz domains are defined similarly, by requiring that the local parametrizations ρ be C^{0,1} diffeomorphisms. In a Euclidean space X, we denote by ν the outward-pointing unit normal vector field on ∂D. For a C^k-domain, ν is a C^{k−1}-regular vector field defined on
all of ∂D. For a Lipschitz domain, by Rademacher's theorem, ν is well defined at almost every point p ∈ ∂D.
In many cases it is important to consider domains beyond C^1, such as Lipschitz domains. For example, the intersection and union of two C^1 domains is much more likely to be Lipschitz than C^1. However, as the following example indicates, Lipschitz domains constitute a far wider class than domains with a finite number of corners, edges, etc.

Example 6.1.5 (Lipschitz scale invariance). We consider how a function φ : R → R scales. Assume that φ(0) = 0 and let φ_n(x) := nφ(x/n). Thus the graph of φ_n represents what φ looks like around 0 through a magnifying glass that magnifies n times. If φ is C^1-regular, then

|φ_n(x) − φ′(0)x| ≤ ε_n|x|, |x| < 1,

where ε_n → 0 when n → ∞. This means that φ "looks flat" on small enough scales, since it is well approximated by the straight line y = φ′(0)x. On the other hand, if φ is a Lipschitz function, then φ_n is another Lipschitz function with the same Lipschitz constant C. In contrast to the C^1 case, φ_n will not converge to a linear function, as is seen, for example, from φ(x) = |x|, for which φ_n(x) = |x| for all n. However, this example is very atypical of Lipschitz functions. In general, each φ_n will be an entirely new function. This means that a Lipschitz function is nontrivial, that is, nonflat, on each scale, but still nondegenerate, that is, still a Lipschitz function.

By the implicit function theorem, the boundary of a C^k domain, k = 1, 2, . . ., is locally the graph of a C^k function, in the sense that the local parametrization ρ can be written

ρ(x′, x_n) = (x′, x_n + φ(x′)), x′ ∈ R^{n−1},   (6.3)

in a suitable basis for X = V, where φ : R^{n−1} → R is a C^k-regular function. In stark contrast, this is not true for Lipschitz domains.

Example 6.1.6 (Bricks and spirals). (i) In R^3, let D1 := {(x, y, z) ; −1 < x < 0, −1 < y < 1, −1 < z < 0} and D2 := {(x, y, z) ; −1 < x < 1, −1 < y < 0, 0 < z < 1}. Placing the "brick" D2 on top of D1, consider the two-brick domain D with D = D1 ∪ D2. Then D is a Lipschitz domain, but at the origin ∂D is not the graph of a Lipschitz function.
(ii) In polar coordinates (r, θ) in R^2, consider the logarithmic spiral D := {(r cos θ, r sin θ) ; e^{−(θ+a)} < r < e^{−(θ+b)}, θ > 0}, where b < a < b + 2π are two constants. Then D is a Lipschitz domain, but at the origin ∂D is not the graph of a Lipschitz function.

If D is a Lipschitz domain in which all local parametrizations ρ of ∂D are of the form (6.3) with C^{0,1} functions φ, then we say that D is a strongly Lipschitz domain.
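Returning to the change of variables formula (6.2), it can be sanity-checked numerically. The sketch below is not from the book; it uses the polar map ρ(r, θ) = (r cos θ, r sin θ), with J_ρ = r, on the unit disk, and a simple midpoint rule.

```python
import math

# Numerical check of the change of variables formula (6.2) for the polar map
# ρ(r, θ) = (r cos θ, r sin θ), whose Jacobian is J_ρ(r, θ) = r.
def integrate_polar(f, n=400):
    total, hr, ht = 0.0, 1.0 / n, 2 * math.pi / n
    for i in range(n):
        r = (i + 0.5) * hr
        for j in range(n):
            t = (j + 0.5) * ht
            # f(ρ(r, θ)) · J_ρ(r, θ) dr dθ
            total += f(r * math.cos(t), r * math.sin(t)) * r * hr * ht
    return total

# ∫_disk 1 dA = π, and ∫_disk (x² + y²) dA = π/2
assert abs(integrate_polar(lambda x, y: 1.0) - math.pi) < 1e-3
assert abs(integrate_polar(lambda x, y: x * x + y * y) - math.pi / 2) < 1e-3
```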
Exercise 6.1.7 (Star-shaped domains). We say that a domain D is star-shaped with respect to some point p ∈ D if for each x ∈ D, the line {p + t(x − p) ; t ∈ [0, 1]} is contained in D. Show that every bounded domain in a Euclidean space that is star-shaped with respect to each point in some ball B(p; ε) ⊂ X, ε > 0, is a strongly Lipschitz domain. Conversely, show that every bounded strongly Lipschitz domain is a finite union of such domains that are star-shaped with respect to some balls.

Exercise 6.1.8 (Rellich fields). Let D be a bounded strongly Lipschitz domain in a Euclidean space (X, V). Show that there exists a vector field θ ∈ C0^∞(X; V) such that

inf_{x∈∂D} ⟨ν(x), θ(x)⟩ > 0.

A partition of unity, see below, may be useful.

Besides open subsets, that is, domains in affine space, we also make use of lower-dimensional curved surfaces. More generally, we require the notion of a manifold from differential geometry, for which we now fix notation. We consider only compact manifolds, both with and without boundary, and in many cases embedded in an affine space. For simplicity, we consider only regularity k ≥ 1.

Our notation is the following. Let H^n_+ := {(x′, x_n) ; x′ ∈ R^{n−1}, x_n ≥ 0} and R^n_+ := {(x′, x_n) ; x′ ∈ R^{n−1}, x_n > 0} denote the closed and open upper half-spaces, and identify R^{n−1} and R^{n−1} × {0}. In general, let M be a compact (second countable Hausdorff) topological space, for example a compact subset of an affine space X.
• We assume that M is locally homeomorphic to H^n_+, in the sense that we are given a collection of charts, that is, homeomorphisms {µα : Dα → Mα}α∈I, the atlas for M, between open sets Dα ⊂ H^n_+ and Mα ⊂ M such that M = ⋃_{α∈I} Mα. By compactness, we may assume that the index set I is finite.
• Define open sets Dβα := µα^{−1}(Mβ) ⊂ Dα, and transition maps

µβα : Dβα → Dαβ : x ↦ µβα(x) := µβ^{−1}(µα(x))

for α, β ∈ I. We say that M is a (compact) C^k-manifold if µβα ∈ C^k(Dβα) for all α, β ∈ I. In this case, these transition maps are C^k diffeomorphisms, since µβα^{−1} = µαβ. A manifold refers to a C^∞-manifold. If all these transition maps are orientation-preserving, then we say that M is oriented. When it is possible to find another atlas with all transition maps between its charts orientation-preserving, then we say that M is orientable. More generally, a chart for M refers to any homeomorphism µ′ : D′ → M′ between open sets D′ ⊂ H^n_+ and M′ ⊂ M such that µ′^{−1} ∘ µα ∈ C^k(µα^{−1}(M′)) for all α ∈ I.
• If Dα ⊂ R^n_+ for all α ∈ I, then we say that M is a closed manifold. This means that M is a compact manifold without boundary. If Dα ∩ R^{n−1} ≠ ∅ for some α ∈ I, then we say that M is a manifold with boundary. In this case, the boundary of M, denoted by ∂M, is the closed manifold defined as follows. Let D′α := Dα ∩ R^{n−1}, µ′α := µα|_{R^{n−1}}, and M′α := µ′α(D′α). It suffices to consider α such that D′α ≠ ∅, and we may assume that D′α ⊂ R^{n−1}_+. Then ∂M is the closed manifold ⋃_{α∈I} M′α with atlas {µ′α : D′α → M′α}α∈I.
• When M is a compact n-dimensional C^k-manifold that is also a subset of an affine space X, with the topology inherited from X, then we say that M is an n-surface in X if the total derivative of µα : Dα → Mα ⊂ X is injective at all x ∈ Dα and for all α ∈ I. If µα ∈ C^k(Dα; X), then we say that M is a C^k-regular n-surface in X. By the inverse function theorem, an n-surface is locally the graph of a C^k-regular function in n variables, in a suitably rotated coordinate system for X. As above, n-surfaces may be closed or may have a boundary. If D ⊂ X is a bounded C^k-domain in an affine space X as in Definition 6.1.4, then we see that M = D̄ is a compact C^k-regular n-surface with boundary. More generally but similarly, we can consider n-surfaces M embedded in some, in general higher-dimensional, manifold N.
• For a function f : M → L on a C^k manifold M, with values in a linear space L, we define f ∈ C^j(M; L) to mean that f ∘ µα ∈ C^j(Dα; L) for all α ∈ I, when j ≤ k.

A partition of unity for a C^k-manifold M, subordinate to a finite covering M = ⋃_{α∈I} Mα by open sets Mα ⊂ M, is a collection {ηα}α∈I of functions such that supp ηα ⊂ Mα and Σ_{α∈I} ηα(x) = 1 for all x ∈ M. There exists such a partition of unity with ηα ∈ C^k(M; [0, 1]) on every C^k-manifold M.
The standard use of a partition of unity is to localize problems: given a function f on M, we write

f = Σ_α ηα f.

Here supp ηα f ⊂ Mα, and by working locally in this chart, we can obtain results for ηα f, which we can then sum to a global result for f.
6.2
Fourier Transforms
This section collects computations of certain Fourier transforms that are fundamental to the theory of partial differential equations. Fix a point of origin in an
oriented affine space X and identify it with its vector space V. In particular, V is an abelian group under addition, and as such it comes with a Fourier transform. This is the linear operator

F(f)(ξ) = f̂(ξ) := ∫_V f(x) e^{−i⟨ξ,x⟩} dx, ξ ∈ V∗.

This Fourier transform maps a complex-valued function f on V to another complex-valued function on V∗. If instead f takes values in some complex linear space L, we let F act componentwise on f. Assuming that V is a Euclidean space and V∗ = V, the fundamental theorem of Fourier analysis is Plancherel's theorem, which states that F defines, modulo a constant, an L2 isometry:

∫_V |f(x)|² dx = (2π)^{−n} ∫_V |f̂(ξ)|² dξ.

We recall that the inverse Fourier transform is given by

F^{−1}(f̂)(x) = f(x) = (2π)^{−n} ∫_{V∗} f̂(ξ) e^{i⟨ξ,x⟩} dξ, x ∈ V,

and the basic formulas

F(∂_k f(x)) = iξ_k f̂(ξ),
F(f(x) ∗ g(x)) = f̂(ξ) · ĝ(ξ),

where the convolution of f(x) and g(x) is the function

(f ∗ g)(x) := ∫_V f(x − y) g(y) dy.
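The convolution formula has an exact discrete analogue that is easy to test: the DFT of a circular convolution on Z_n is the pointwise product of the DFTs. The following sketch is not from the book; it verifies the identity for n = 4.

```python
import cmath

# Discrete analogue of F(f ∗ g) = F(f)·F(g): DFT and circular convolution on Z_n.

def dft(a):
    n = len(a)
    return [sum(a[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def circ_conv(a, b):
    n = len(a)
    return [sum(a[(k - m) % n] * b[m] for m in range(n)) for k in range(n)]

f = [1.0, 2.0, 0.0, -1.0]
g = [0.5, 0.0, 1.0, 0.0]
lhs = dft(circ_conv(f, g))                       # DFT of the convolution
rhs = [x * y for x, y in zip(dft(f), dft(g))]    # product of the DFTs
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```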
The most fundamental of Fourier transforms is

F{e^{−|x|²/2}} = (2π)^{n/2} e^{−|ξ|²/2},

that is, the Gauss function e^{−|x|²/2} is an eigenfunction of F.
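This eigenfunction identity is easy to check numerically in dimension n = 1, where it reads F{e^{−x²/2}} = √(2π) e^{−ξ²/2}. The sketch below is not from the book; it evaluates the Fourier integral by a midpoint rule, and since the integrand is even, only the cosine part survives.

```python
import math

# Numerical check of F{e^{-x²/2}} = sqrt(2π) e^{-ξ²/2} in dimension n = 1,
# with the convention F(f)(ξ) = ∫ f(x) e^{-iξx} dx.
def fourier_gauss(xi, R=10.0, n=4000):
    h = 2 * R / n
    total = 0.0
    for k in range(n):
        x = -R + (k + 0.5) * h
        total += math.exp(-x * x / 2) * math.cos(xi * x) * h
    return total

for xi in (0.0, 0.7, 1.5):
    exact = math.sqrt(2 * math.pi) * math.exp(-xi ** 2 / 2)
    assert abs(fourier_gauss(xi) - exact) < 1e-6
```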
Proposition 6.2.1 (Gaussians and homogeneous functions). Let f(x) be a homogeneous polynomial of degree j that is harmonic on an n-dimensional Euclidean space V. Then for every constant s > 0, we have the Fourier transforms

F{f(x) e^{−s|x|²}} = 2^{−j} c s^{−(n/2+j)} f(ξ) e^{−|ξ|²/(4s)},

where c = π^{n/2}(−i)^j. For every constant 0 < α < n, we have the Fourier transforms

F{f(x)/|x|^{n−α+j}} = 2^α c (Γ((α + j)/2)/Γ((n − α + j)/2)) f(ξ)/|ξ|^{α+j},

where Γ(z) := ∫_0^∞ e^{−t} t^{z−1} dt is the gamma function, with Γ(k) = (k − 1)!.
Proof. (i) Calculating the Fourier integral, we have

∫_V f(x) e^{−s|x|²} e^{−i⟨x,ξ⟩} dx = e^{−|ξ|²/(4s)} ∫_V f(x) e^{−s(x+iξ/(2s))²} dx
= e^{−|ξ|²/(4s)} ∫_V f(x − iξ/(2s)) e^{−s|x|²} dx
= e^{−|ξ|²/(4s)} ∫_0^∞ e^{−sr²} r^{n−1} ( ∫_{|ω|=1} f(rω − iξ/(2s)) dω ) dr,

where we have extended f to a polynomial of n complex variables. According to the mean value theorem for harmonic functions,

∫_{|ω|=1} f(rω + y) dω = σ_{n−1} f(y),

for every y ∈ V, where σ_{n−1} is the area of the unit sphere in V. By analytic continuation, this formula remains valid for all complex y ∈ Vc. Since

∫_0^∞ e^{−sr²} r^{n−1} dr = (1/(2s^{n/2})) ∫_0^∞ e^{−u} u^{n/2−1} du = Γ(n/2)/(2s^{n/2})

and σ_{n−1} = 2π^{n/2}/Γ(n/2), the stated identity follows.
(ii) To establish the second Fourier transform identity, we use the identity

∫_0^∞ s^{(n−α+j)/2−1} e^{−sr²} ds = (1/r^{n−α+j}) ∫_0^∞ x^{(n−α+j)/2−1} e^{−x} dx = Γ((n − α + j)/2)/r^{n−α+j}.

Writing r^{−(n−α+j)} as a continuous linear combination of functions e^{−sr²} in this way, we deduce that

F{f(x)/|x|^{n−α+j}} = (1/Γ((n − α + j)/2)) ∫_0^∞ s^{(n−α+j)/2−1} F{f(x) e^{−s|x|²}} ds
= (1/Γ((n − α + j)/2)) ∫_0^∞ s^{(n−α+j)/2−1} 2^{−j} c s^{−(n/2+j)} f(ξ) e^{−|ξ|²/(4s)} ds
= (2^{−j} c f(ξ)/Γ((n − α + j)/2)) ∫_0^∞ s^{−(α+j)/2−1} e^{−(1/s)(|ξ|/2)²} ds
= 2^α c (Γ((α + j)/2)/Γ((n − α + j)/2)) f(ξ)/|ξ|^{α+j}.

The following functions, or more precisely distributions in dimension ≥ 3, appear in solving the wave equation.

Proposition 6.2.2 (Riemann functions). Let Rt, for t > 0, be the Fourier multiplier

F(Rt f)(ξ) = (sin(t|ξ|)/|ξ|) F(f)(ξ).
In low dimensions, the Riemann function Rt has the following expression for t > 0:

Rt f(x) = (1/2) ∫_{|y|<t} f(x − y) dy, dim V = 1.

{x1 > 0, x2 > 0}, then ρ : D1 → D2 is a diffeomorphism. Let F be the constant vector field F(y) = e1 parallel to the y1-axis. To push forward and pull back F to the x1x2-plane, we calculate the derivative

ρ_y = | a cos y2   −a y1 sin y2 |
      | b sin y2    b y1 cos y2 |.

This gives the pushed-forward vector field

ρ∗F = ρ_y(e1) = (a cos y2, b sin y2) = (1/√((x1/a)² + (x2/b)²)) (x1, x2).

On the other hand, pulling back F by ρ^{−1} gives

(ρ^{−1})∗F = (a^{−1} cos y2, b^{−1} sin y2) = (1/√((x1/a)² + (x2/b)²)) (a^{−2} x1, b^{−2} x2).
Note that ρ∗ F is tangent to the radial lines ρ({y2 = constant}), and that (ρ−1 )∗ F is normal to the ellipses ρ({y1 = constant}), in accordance with the discussion above. See Figure 7.1(b)–(c).
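The closed-form pushforward above can be checked numerically. The sketch below is not from the book; it assumes the elliptic polar coordinates ρ(y1, y2) = (a·y1·cos y2, b·y1·sin y2), which is consistent with the derivative matrix displayed above (the map's original definition falls outside this excerpt).

```python
import math

# Check that ρ_y(e1) = ∂ρ/∂y1 agrees with the closed form
# ρ*F = (x1, x2)/sqrt((x1/a)² + (x2/b)²) for the constant field F = e1,
# assuming ρ(y1, y2) = (a y1 cos y2, b y1 sin y2).
a, b = 2.0, 1.0
y1, y2 = 1.5, 0.8

def rho(y1, y2):
    return (a * y1 * math.cos(y2), b * y1 * math.sin(y2))

x1, x2 = rho(y1, y2)
h = 1e-7
# forward-difference approximation of ∂ρ/∂y1
d1 = ((rho(y1 + h, y2)[0] - x1) / h, (rho(y1 + h, y2)[1] - x2) / h)

norm = math.sqrt((x1 / a) ** 2 + (x2 / b) ** 2)   # equals y1
assert abs(d1[0] - x1 / norm) < 1e-5
assert abs(d1[1] - x2 / norm) < 1e-5
```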
Figure 7.1: (a) Change of variables F ∘ ρ^{−1}. (b) Pushforward ρ∗F. (c) Inverse pullback (ρ^{−1})∗F. (d) Normalized pushforward ρ̃∗F. The field has been scaled by a factor 0.3, that is, the plots are for F = 0.3e1.

Since F is constant, it is of course divergence- and curl-free: ∇ y F = 0 = ∇ ∧ F. By direct calculation, we find that (ρ^{−1})∗F is curl-free. For the pushforward, we note that

div(ρ∗F) = 1/√((x1/a)² + (x2/b)²) ≠ 0.

However, the normalized pushforward
ρ̃∗F = (1/(ab((x1/a)² + (x2/b)²))) (x1, x2)

is seen to be divergence-free. See Figure 7.1(d). This is in accordance with Theorem 7.2.9 below.

We now show that in general, pullbacks commute with the exterior derivative, and dually that normalized pushforwards commute with the interior derivative. At first it seems that taking the exterior derivative of a pulled back multicovector field would give two terms: a first-order term when the derivatives hit Θ(ρ(x)) according to the chain rule, and a zero-order term when the derivatives hit ρ_x according to the product rule. However, it turns out that the zero-order term vanishes miraculously, due to the alternating property of the exterior product and the equality of mixed derivatives.

Theorem 7.2.9 (The commutation theorem). Let ρ : D1 → D2 be a C^2 map between open sets D1 ⊂ X1 and D2 ⊂ X2.
(i) If Θ : D2 → ∧V2∗ is a C^1 multicovector field in D2, then the pullback ρ∗Θ : D1 → ∧V1∗ is C^1 and

∇ ∧ (ρ∗Θ)(y) = ρ∗(∇ ∧ Θ)(y) for y ∈ D1, that is, d(ρ∗Θ) = ρ∗(dΘ).

(ii) Further assume that ρ is a C^2 diffeomorphism. If F : D1 → ∧V1 is a C^1 multivector field in D1, then the normalized pushforward ρ̃∗F : D2 → ∧V2 is C^1 and

∇ y (ρ̃∗F)(x) = ρ̃∗(∇ y F)(x) for x ∈ D2, that is, δ(ρ̃∗F) = ρ̃∗(δF).

The proof uses the following lemma, the proof of which we leave as an exercise.

Lemma 7.2.10. Let {ei} and {e′i} be bases for V1 and V2, with dual bases {e∗i} and {e′∗i} respectively. Then the pullback of a covector field θ(x) = Σ_i θi(x) e′∗i is

ρ∗θ(y) = Σ_{i,j} θi(x) ∂jρi(y) e∗j, x = ρ(y) ∈ D2, y ∈ D1,
and the pushforward of a vector field v(y) = Σ_i vi(y) ei is

ρ∗v(x) = Σ_{i,j} vi(y) ∂iρj(y) e′j, x = ρ(y) ∈ D2, y ∈ D1.

Proof of Theorem 7.2.9. Since both y ↦ ρ∗_y and y ↦ Θ(ρ(y)) are C^1, so is ρ∗Θ.
(i) When Θ = f is a scalar field, the formula is the chain rule. Indeed, changing variables x = ρ(y) in the scalar function f(x), for ρ∗f = f ∘ ρ we have

∇_y(f(ρ(y))) = Σ_{i,k} e∗i (∂iρk(y)) (∂_{xk} f)(x) = ρ∗_y(∇f)(x),
using Lemma 7.2.10.
(ii) Next consider a covector field Θ = θ = Σ_i θi e′∗i : D2 → ∧V2∗. Fix bases {ei} and {e′i} for V1 and V2 respectively, and write {e∗i} and {e′∗i} for the dual bases and ∂i and ∂′i for the partial derivatives. From Lemma 7.2.10 we have

∇ ∧ (ρ∗θ) = ∇_y ∧ (Σ_{i,j} θi(ρ(y)) ∂jρi(y) e∗j)
= Σ_{i,j,k} (∂kθi ∂jρi + θi ∂k∂jρi) e∗k ∧ e∗j = Σ_{i,j,k} ∂kθi ∂jρi e∗k ∧ e∗j,
since ∂k∂j = ∂j∂k and e∗k ∧ e∗l = −e∗l ∧ e∗k. This is the key point of the proof. On the other hand, we have

ρ∗(∇ ∧ θ) = ρ∗(Σ_{i,j} ∂′jθi e′∗j ∧ e′∗i) = Σ_{i,j} ∂′jθi ρ∗(e′∗j ∧ e′∗i) = Σ_{i,j,k,l} ∂′jθi ∂kρj ∂lρi e∗k ∧ e∗l.
Note that Σ_j ∂′jθi ∂kρj = ∂kθi by the chain rule. Thus changing the dummy index l to j proves the formula for covector fields.
(iii) Next consider a general multicovector field Θ. By linearity, we may assume that Θ(x) = θ¹(x) ∧ · · · ∧ θ^k(x) for C^1 covector fields θ^j. We need to prove

Σ_{i,j} e∗i ∧ ρ∗θ¹ ∧ · · · ∧ ∂i(ρ∗θ^j) ∧ · · · ∧ ρ∗θ^k = Σ_{i,j} ρ∗(e′∗i) ∧ ρ∗θ¹ ∧ · · · ∧ ρ∗(∂′iθ^j) ∧ · · · ∧ ρ∗θ^k.

For this, it suffices to show that

Σ_i e∗i ∧ ∂i(ρ∗θ) = Σ_j ρ∗(e′∗j) ∧ ρ∗(∂′jθ)
for all C^1 covector fields θ in D2. But this follows from step (ii) of the proof.
(iv) From the hypothesis it follows that x ↦ ρ_{ρ^{−1}(x)}, x ↦ |Jρ(ρ^{−1}(x))|, and x ↦ F(ρ^{−1}(x)) are C^1. Therefore the product rule shows that ρ̃∗F is C^1. Let Θ : D2 → ∧V2∗ be any compactly supported smooth multicovector field. Then Propositions 7.1.7(ii) and 7.2.5(ii) and step (iii) above show that

∫_{D2} ⟨Θ, ∇ y ρ̃∗F⟩ dx = −∫_{D2} ⟨∇ ∧ Θ, ρ̃∗F⟩ dx = −∫_{D1} ⟨ρ∗(∇ ∧ Θ), F⟩ dy
= −∫_{D1} ⟨∇ ∧ ρ∗Θ, F⟩ dy = ∫_{D1} ⟨ρ∗Θ, ∇ y F⟩ dy = ∫_{D2} ⟨Θ, ρ̃∗(∇ y F)⟩ dx.
Since Θ is arbitrary, we must have ∇ y (ρ̃∗F) = ρ̃∗(∇ y F).
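Theorem 7.2.9(i) can be tested numerically for a covector field in two dimensions, where d(ρ∗θ) = ρ∗(dθ) reduces to the scalar identity ∂1(ρ∗θ)2 − ∂2(ρ∗θ)1 = ((∂′1θ2 − ∂′2θ1) ∘ ρ) · J_ρ. The sketch below is not from the book; the polynomial map ρ and the field θ are ad hoc choices.

```python
import math

# Check d(ρ*θ) = ρ*(dθ) in 2D for ρ(y) = (y1² - y2, y1 y2 + 1) and
# θ = sin(x2) e1* + x1 x2 e2*, using exact ∂j ρ and central differences.
def rho(y):
    return (y[0] ** 2 - y[1], y[0] * y[1] + 1.0)

def theta(x):
    return (math.sin(x[1]), x[0] * x[1])

def pullback(y):
    # (ρ*θ)_j(y) = Σ_i θ_i(ρ(y)) ∂_j ρ_i(y), as in Lemma 7.2.10
    x = rho(y)
    t = theta(x)
    d1 = (2 * y[0], y[1])   # ∂ρ/∂y1
    d2 = (-1.0, y[0])       # ∂ρ/∂y2
    return (t[0] * d1[0] + t[1] * d1[1], t[0] * d2[0] + t[1] * d2[1])

y = (0.7, 0.3)
h = 1e-5
# left-hand side: ∂1(ρ*θ)_2 - ∂2(ρ*θ)_1 at y, by central differences
lhs = ((pullback((y[0] + h, y[1]))[1] - pullback((y[0] - h, y[1]))[1]) / (2 * h)
       - (pullback((y[0], y[1] + h))[0] - pullback((y[0], y[1] - h))[0]) / (2 * h))
# right-hand side: (∂1θ2 - ∂2θ1)(ρ(y)) times the Jacobian J_ρ(y)
x = rho(y)
curl_theta = x[1] - math.cos(x[1])         # ∂1(x1 x2) - ∂2(sin x2)
jac = 2 * y[0] * y[0] + y[1]               # det [[2y1, -1], [y2, y1]]
assert abs(lhs - curl_theta * jac) < 1e-6
```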
Example 7.2.11 (Orthogonal curvilinear coordinates). Let ρ : R3 → X be curvilinear coordinates in three-dimensional Euclidean space X. Important examples
treated in the standard vector calculus curriculum are spherical and cylindrical coordinates. The pushforwards of the standard basis vector fields are

ẽi := ρ∗ei = ∂_{yi}ρ(y), i = 1, 2, 3,

where {ei} denotes the standard basis in R³. The frame {ẽi} is in general not an ON-frame in X, but in important examples such as the two mentioned above, these frame vector fields are orthogonal at each point. Assuming that we have such orthogonal curvilinear coordinates, we define hi(y) := |ρ∗ei| and ēi := ẽi/hi, for y ∈ R³. This gives us an ON-frame {ēi(y)} in X. We now show how the well-known formulas for gradient, divergence, and curl follow from Theorem 7.2.9. Note that

ρ = ρ∗ = | h1 0  0  |
         | 0  h2 0  |
         | 0  0  h3 |

with respect to the bases {ei} and {ēi}. For the gradient, we have

∇u = (ρ∗)^{−1} grad(ρ∗u) = (ρ∗)^{−1} (Σ_i (∂iu) ei) = Σ_i h_i^{−1} (∂iu) ēi.

Note that ρ∗ acts on scalar functions just by changing variables, whereas ρ∗ acts on vectors by the above matrix.

For the curl of a vector field F = Σ_i Fi ēi in X, we similarly obtain

∇ ∧ F = (ρ∗)^{−1} curl(ρ∗F) = (ρ∗)^{−1} (∇ ∧ Σ_i hiFi ei) = (ρ∗)^{−1} (Σ_j Σ_i ∂j(hiFi) ej ∧ ei)
= Σ_j Σ_i (hihj)^{−1} ∂j(hiFi) ēj ∧ ēi = (1/(h1h2h3)) ·
| h1ē1  h2ē2  h3ē3 |
| ∂1    ∂2    ∂3   |
| h1F1  h2F2  h3F3 |.
Note that ρ∗ acts on ∧²V by the two-by-two subdeterminants of the above matrix, as in Example 2.3.4.

For the divergence of a vector field F = Σ_i Fi ēi in X, we use instead the normalized pushforward to obtain

∇ y F = ρ̃∗ div (ρ̃∗)^{−1}F = ρ̃∗ (∇ y Σ_i h1h2h3 h_i^{−1} Fi ei) = ρ̃∗ (Σ_i ∂i(h1h2h3 h_i^{−1} Fi))
= (h1h2h3)^{−1} (∂1(h2h3F1) + ∂2(h1h3F2) + ∂3(h1h2F3)).

Note that ρ̃∗ = ρ∗/(h1h2h3) and that ρ∗ acts on vectors by the above matrix and simply by change of variables on scalars.
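As a sanity check (not from the book): in spherical coordinates, (h1, h2, h3) = (1, r, r sin θ), and the divergence formula applied to the radial field F = r ē_r, whose Cartesian counterpart is F(x) = x with div F = 3, indeed returns 3.

```python
import math

# Divergence formula in spherical coordinates for a purely radial field F = F1(r) e_r:
# ∇·F = (h1 h2 h3)^{-1} ∂r(h2 h3 F1), with (h1, h2, h3) = (1, r, r sin θ).
def div_spherical(F1, r, theta, h=1e-6):
    g = lambda r_: r_ * r_ * math.sin(theta) * F1(r_)    # h2 h3 F1
    return (g(r + h) - g(r - h)) / (2 * h) / (r * r * math.sin(theta))

# F1(r) = r corresponds to the Cartesian field F(x) = x, with div F = 3.
assert abs(div_spherical(lambda r: r, 1.3, 0.9) - 3.0) < 1e-8
```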
Example 7.2.12 (Pullback of Laplace equation). To see how the Laplace operator ∆ transforms under a change of variables, let ρ : D1 → D2 be a C^2-diffeomorphism of Euclidean domains and let u : D2 → R be harmonic, that is, ∆u = 0. Changing variables, u corresponds to a function v(y) = u(ρ(y)) = ρ∗u(y) in D1. According to the commutation theorem (Theorem 7.2.9), we have

0 = ∇ y ∇ ∧ ((ρ∗)^{−1}v) = ∇ y ((ρ∗)^{−1}(∇v)) = ρ̃∗ ∇ y (ρ̃∗)^{−1}(ρ∗)^{−1}(∇v)
= ρ̃∗ ∇ y (ρ∗ρ̃∗)^{−1}(∇v) = ρ̃∗ div(A^{−1} grad v).

Since ρ̃∗ is invertible, we see that the Laplace equation transforms into the divergence-form equation div(A^{−1} grad v) = 0. Here the linear map A(y), for fixed y ∈ D1, in an ON-basis {ei} has matrix elements

A_{i,j}(y) = ⟨ei, ρ∗ρ̃∗ej⟩ = |Jρ(y)|^{−1} ⟨ρ_y(ei), ρ_y(ej)⟩ = g(y)^{−1/2} g_{i,j}(y),

where g_{i,j} is the metric in D1 representing the Euclidean metric in D2 and g(y) := det(g_{i,j}(y)) = |Jρ(y)|² is the determinant of the metric matrix. Thus the Laplace equation Σ_i ∂_i²u = 0 transforms to the divergence-form equation

Σ_{i,j} ∂i(√g g^{i,j} ∂j v) = 0,

where g^{i,j} denotes the inverse of the metric matrix.

Example 7.2.12 is a special case of the following pullback formulas for exterior and interior derivatives.

Proposition 7.2.13 (Pullback of interior derivatives). Let X1, X2 be oriented Euclidean spaces, let ρ : D1 → D2 be a C^2-diffeomorphism, and denote by G(y) : ∧V1 → ∧V1, y ∈ D1, the metric of D2 pulled back to D1, that is, ⟨Gw1, w2⟩ = ⟨ρ∗w1, ρ∗w2⟩ for multivector fields w1, w2 in D1. Write g(y) = det G|_{∧¹V1}(y) = |Jρ(y)|². Then we have the pullback formulas

ρ∗(dF) = d(ρ∗F),
ρ∗(δF) = (g^{−1/2} G) δ (g^{1/2} G^{−1}) (ρ∗F).
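The divergence-form equation can be verified by hand in polar coordinates, where √g = r and g^{ij} = diag(1, 1/r²). The sketch below is not from the book; it checks numerically that the harmonic function u(x, y) = x² − y², pulled back to v(r, t) = r² cos 2t, satisfies the equation.

```python
import math

# Polar coordinates ρ(r, t) = (r cos t, r sin t): √g = r, g^{ij} = diag(1, 1/r²).
# For v(r, t) = r² cos 2t (the pullback of the harmonic u = x² - y²), check
# ∂r(r ∂r v) + ∂t((1/r) ∂t v) = 0 by nested central differences.
def v(r, t):
    return r * r * math.cos(2 * t)

def div_form(r, t, h=1e-4):
    fr = lambda r_: r_ * (v(r_ + h, t) - v(r_ - h, t)) / (2 * h)          # √g g^{rr} ∂r v
    ft = lambda t_: (1.0 / r) * (v(r, t_ + h) - v(r, t_ - h)) / (2 * h)   # √g g^{tt} ∂t v
    return ((fr(r + h) - fr(r - h)) / (2 * h)
            + (ft(t + h) - ft(t - h)) / (2 * h))

assert abs(div_form(1.2, 0.5)) < 1e-6
```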
7.3
Integration of Forms
In this section, we develop integration theory for forms over k-surfaces. To avoid technicalities concerning bundles, at this stage we limit ourselves to k-surfaces in affine spaces, but the integration theory we develop generalizes with minor changes to general manifolds.
Definition 7.3.1 (k-Form). A k-form defined on a subset D of an affine space (X, V) is a function
$$\Theta : D \times \widehat\wedge{}^k V \to L,$$
with range in a finite-dimensional linear space L, such that
$$\Theta(x, \lambda w) = \lambda\,\Theta(x, w), \qquad x \in D,\ w \in \widehat\wedge{}^kV,\ \lambda > 0.$$
We say that Θ is a linear k-form if for each x ∈ D, w ↦ Θ(x, w) extends to a linear function of w ∈ ∧^kV.

The idea with k-forms is that in integrating at a point x, the integrand also depends on the orientation at x of the k-surface M that we integrate over.

Definition 7.3.2 (Integral of form). Let M be a compact oriented C¹-regular k-surface in an affine space (X, V), and let Θ : M × ∧̂^kV → L be a continuous k-form. We define the integral of Θ over M to be
$$\int_M \Theta(x, dx) := \sum_{\alpha\in I}\int_{D_\alpha}\eta_\alpha(\mu_\alpha(y))\,\Theta\big(\mu_\alpha(y),\,\underline{\mu_\alpha}_y(e_1\wedge\cdots\wedge e_k)\big)\,dy,$$
where {eᵢ} is the standard basis in R^k and dy is standard Lebesgue measure in R^k ⊃ D_α. Here {μ_α}_{α∈I} is the atlas of M, and {η_α}_{α∈I} denotes a partition of unity for M. Note as in Section 2.4 how the induced action of the derivative \underline{μ_α}_y : ∧^kR^k → ∧^kV maps the oriented volume element e₁∧···∧e_k dy in R^k to the oriented volume element
$$dx = \underline{\mu_\alpha}_y(e_1\wedge\cdots\wedge e_k\,dy) = \underline{\mu_\alpha}_y(e_1\wedge\cdots\wedge e_k)\,dy$$
on M. Note that this infinitesimal k-vector is simple, and hence Θ in general needs to be defined only on the Grassmann cone. However, it is only to linear forms that Stokes's theorem applies, as we shall see.

The following proposition shows that such integrals do not depend on the precise choice of atlas and partition of unity for M, but only on the orientation of M.

Proposition 7.3.3 (Independence of atlas). Consider a compact oriented C¹-regular k-surface M, with atlas {μ_α : D_α → M_α}_{α∈I}, in an affine space (X, V). Let {μ_β : D_β → M_β}_{β∈I′}, I′ ∩ I = ∅, be a second atlas for M such that all transition maps between D_α and D_β, α ∈ I, β ∈ I′, are C¹-regular and orientation-preserving. Further assume that {η_α}_{α∈I} is a partition of unity for {μ_α}_{α∈I} and that {η_β}_{β∈I′}
Chapter 7. Multivector Calculus
226
is a partition of unity for {μ_β}_{β∈I′}. Then for every continuous k-form Θ, we have
$$\sum_{\alpha\in I}\int_{D_\alpha}\eta_\alpha(\mu_\alpha(y))\,\Theta\big(\mu_\alpha(y),\,\underline{\mu_\alpha}_y(e_1\wedge\cdots\wedge e_k)\big)\,dy = \sum_{\beta\in I'}\int_{D_\beta}\eta_\beta(\mu_\beta(z))\,\Theta\big(\mu_\beta(z),\,\underline{\mu_\beta}_z(e_1\wedge\cdots\wedge e_k)\big)\,dz.$$
Proof. Inserting 1 = ∑_{β∈I′}η_β in the integral on the left-hand side and 1 = ∑_{α∈I}η_α in the integral on the right-hand side, it suffices to show that
$$\int_{D_{\alpha\beta}}\Theta_{\alpha\beta}\big(\mu_\alpha(y),\,\underline{\mu_\alpha}_y(e_1\wedge\cdots\wedge e_k)\big)\,dy = \int_{D_{\beta\alpha}}\Theta_{\alpha\beta}\big(\mu_\beta(z),\,\underline{\mu_\beta}_z(e_1\wedge\cdots\wedge e_k)\big)\,dz,$$
where Θ_{αβ}(x, w) := η_α(x)η_β(x)Θ(x, w), since supp η_αη_β ⊂ M_α ∩ M_β. Changing variables z = μ_{βα}(y) in the integral on the right-hand side, we get
$$\int_{D_{\alpha\beta}}\Theta_{\alpha\beta}\big(\mu_\beta(\mu_{\beta\alpha}(y)),\,\underline{\mu_\beta}_{\mu_{\beta\alpha}(y)}(e_1\wedge\cdots\wedge e_k)\big)\,J_{\mu_{\beta\alpha}}(y)\,dy.$$
Since μ_β ∘ μ_{βα} = μ_α, and therefore
$$\underline{\mu_\beta}_{\mu_{\beta\alpha}(y)}(e_1\wedge\cdots\wedge e_k)\,J_{\mu_{\beta\alpha}}(y) = \underline{\mu_\alpha}_y(e_1\wedge\cdots\wedge e_k),$$
the stated formula follows from the homogeneity of w ↦ Θ_{αβ}(x, w). □

Example 7.3.4 (Oriented and scalar measure). The simplest linear k-form in an affine space is
$$\Theta(x, w) = w \in L := \wedge^kV.$$
In this case, ∫_M Θ(x, dx) = ∫_M dx = ∧^k(M) is the oriented measure of M discussed in Section 2.4. In a Euclidean space, given a continuous function f : M → L, the integral of the k-form Θ(x, w) := f(x)|w| is seen to be the standard surface integral ∫_M f(x)dx, where dx = |dx|. Note that these are not linear k-forms. In particular, f = 1 yields the usual (scalar) measure |M| := ∫_M dx of M. Using that dx = |dx| and the usual triangle inequality for integrals, we obtain from the definitions that oriented and scalar measure satisfy the triangle inequality
$$\Big|\int_M dx\Big| \le \int_M dx.$$
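This triangle inequality can be illustrated numerically on the paraboloid patch of Example 2.4.4, whose oriented area element is dx = (e₁₂ + 2y₂e₁₃ − 2y₁e₂₃)dy₁dy₂. The sketch below assumes the unit disk |y| < 1 as parameter domain, which is my choice here.

```python
import math

# Oriented area element dx = (e12 + 2*y2*e13 - 2*y1*e23) dy1 dy2;
# its norm gives the scalar element |dx| = sqrt(1 + 4 y1^2 + 4 y2^2).
N = 800
h = 2.0 / N
oriented = [0.0, 0.0, 0.0]            # components along e12, e13, e23
scalar = 0.0
for i in range(N):
    for j in range(N):
        y1 = -1 + (i + 0.5) * h
        y2 = -1 + (j + 0.5) * h
        if y1 * y1 + y2 * y2 < 1:     # unit disk (assumed parameter domain)
            w = (1.0, 2 * y2, -2 * y1)
            for c in range(3):
                oriented[c] += w[c] * h * h
            scalar += math.hypot(*w) * h * h

# |integral_M dx| <= integral_M |dx|
print(math.hypot(*oriented), scalar)  # ~3.1416 (= pi) and ~5.3304
```

The e₁₃ and e₂₃ components of the oriented measure cancel by symmetry, so its norm is the area π of the disk, strictly smaller than the scalar area of the curved patch.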
We continue with Example 2.4.4, where we calculated the oriented area element dx = (e₁₂ + 2y₂e₁₃ − 2y₁e₂₃)dy₁dy₂. Hence
$$dx = |dx| = \sqrt{1 + 4y_1^2 + 4y_2^2}\;dy_1dy_2,$$
giving an area of the paraboloid equal to
$$\int_{|y|<1}\sqrt{1 + 4y_1^2 + 4y_2^2}\;dy_1dy_2 = 2\pi\int_0^1 t\sqrt{1+4t^2}\,dt = \frac{\pi}{6}\big(5\sqrt5 - 1\big).$$

Chapter 8. Hypercomplex Analysis

The geometry definition: f is analytic if it is an (orientation-preserving)
conformal map, that is, if at each z ∈ D the derivative \underline f_z is of the form
$$\underline f_z = \begin{bmatrix} a & -b \\ b & a \end{bmatrix},$$
where a = ∂₁f₁ and b = ∂₁f₂. This means that \underline f_z is a nonzero multiple of a rotation matrix and can be expressed as complex multiplication by f′(z).

In generalizing to a hypercomplex analysis in higher-dimensional Euclidean spaces, the partial differential equation definition turns out to be most successful, where the Cauchy–Riemann equations are replaced by a Dirac equation
$$\nabla\mathbin{\triangle}F(x) = 0,$$
using the nabla operator induced by the Clifford product. As in Example 7.3.13, we may express this Dirac equation in terms of an integral difference quotient.

Behind the Dirac equation, a fundamental type of splitting of function spaces is lurking: splittings into Hardy subspaces. With a solid understanding of Clifford algebra, these are straightforward generalizations of the classical such splittings in the complex plane. Recall that in the complex plane, any function f : ∂D → C on the boundary of a bounded domain D is in a unique way the sum
$$f = f^+ + f^-,$$
where f⁺ is the restriction to ∂D of an analytic function in D, and f⁻ is the restriction to ∂D of an analytic function in C \ D that vanishes at ∞. The two subspaces consisting of traces of analytic functions from the interior or the exterior domain are the Hardy subspaces, and the Cauchy integral formula
$$\frac{1}{2\pi i}\int_{\partial D}\frac{f(w)}{w-z}\,dw$$
provides the projection operators onto these subspaces. There is one important difference with the Hodge splitting from Section 7.6: the two Hardy spaces are in general not orthogonal subspaces of L₂(∂D), but the angle between them depends on the geometry of ∂D.

We show in Section 8.2 that the algebraic definition can be generalized to give power series expansions in higher dimensions. This is closely related to the classical theory of spherical harmonics. Later in Section 11.4, we shall see that the geometry definition does not generalize well.
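The Hardy splitting and its Cauchy projection can be checked numerically on the unit circle; this is a sketch, and the sample boundary function below, with f⁺(w) = w² and f⁻(w) = 3/w, is my choice.

```python
import cmath
import math

def f(w):
    # Boundary values on |w| = 1 of f^+(w) = w^2 (analytic inside the disk)
    # plus f^-(w) = 3/w (analytic outside, vanishing at infinity).
    return w**2 + 3 / w

def cauchy(z, N=4096):
    """(1/(2 pi i)) * integral over |w| = 1 of f(w)/(w - z) dw, trapezoid rule."""
    s = 0j
    for k in range(N):
        w = cmath.exp(2j * math.pi * k / N)
        s += f(w) / (w - z) * 1j * w          # dw = i*w dt
    return s * (2 * math.pi / N) / (2j * math.pi)

z = 0.4 + 0.2j                                # a point inside the disk
print(cauchy(z), z**2)                        # both ~ (0.12+0.16j)
```

The Cauchy integral reproduces only the interior Hardy part f⁺(z) = z²; the exterior part 3/w is annihilated, exactly as the projection property predicts.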
The higher-dimensional conformal maps are very scarce indeed: the only ones are the Möbius maps!

Highlights:
• The higher-dimensional Cauchy integral formula: 8.1.8
• Möbius pullbacks of monogenic fields: 8.1.14
• Splitting into spherical monogenics: 8.2.6
• Spherical Dirac operator: 8.2.15
• Splittings into Hardy subspaces: 8.3.6
8.1 Monogenic Multivector Fields
In this chapter we work in a Euclidean space (X, V), and we study the following generalization of the Cauchy–Riemann equations.

Definition 8.1.1 (Monogenic fields). Let D be an open set in a Euclidean space X. If F : D → △V is differentiable at x ∈ D, we define the Clifford derivative
$$\nabla\mathbin{\triangle}F(x) = \sum_{i=1}^n e_i^*\mathbin{\triangle}\partial_iF(x)$$
as in Definition 7.1.2, where {e_i^*} is the basis dual to {e_i}, and ∂_i is the partial derivative with respect to the corresponding coordinate x_i. The Dirac operator D : F ↦ ∇△F is the nabla operator induced by Clifford multiplication. If F is a C¹ multivector field for which ∇△F(x) = 0 in all of D, then F is said to be a monogenic field in D.

Let {e_s} be an induced ON-basis for △V and write
$$F(x) = \sum_s F_s(x)\,e_s.$$
If F is a monogenic field, then each scalar component function F_s is a harmonic function. To see this, we note that
$$0 = D^2F(x) = \sum_i\sum_j e_ie_j\,\partial_i\partial_jF(x) = \sum_i\partial_i^2F(x) = \sum_s\Big(\sum_i\partial_i^2F_s(x)\Big)e_s = \sum_s(\Delta F_s(x))\,e_s.$$
This is a consequence of the defining property v² = |v|² for the Clifford product, and it means that D is a first-order differential operator that is a square root of the componentwise Laplace operator. Similar to the situation for analytic functions, a monogenic multivector field consists of 2^n scalar harmonic functions, which are coupled in a certain sense described by the Dirac equation DF = 0. In particular, monogenic fields are smooth, even real analytic.

The Dirac derivative further combines the exterior and interior derivative. Indeed, since e_i^*△w = e_i^*⌟w + e_i^*∧w, it is clear that
$$DF(x) = \nabla\mathbin{\triangle}F(x) = \nabla\mathbin{\lrcorner}F(x) + \nabla\wedge F(x) = \delta F(x) + dF(x).$$
This means that D² = (d+δ)² = d² + δ² + dδ + δd = dδ + δd, by nilpotence. Another way to see this is to put v = θ = ∇ in the anticommutation relation from Theorem 2.8.1.
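The identity D² = ∆ (componentwise) can be verified symbolically with a minimal Clifford product on induced basis blades; the blade encoding as frozensets of indices and the sample field below are illustration choices, not notation from the text.

```python
import sympy as sp

xs = sp.symbols('x1:4')            # coordinates in R^3
n = 3

def e_mul(i, F):
    """Left Clifford multiplication by e_i on F, a dict {blade: expr} with
    blades as frozensets; uses e_i e_j = -e_j e_i (i != j) and e_i^2 = 1."""
    out = {}
    for s, c in F.items():
        sign = (-1) ** sum(1 for j in s if j < i)
        t = s ^ frozenset({i})     # e_i removes or adds the index i
        out[t] = out.get(t, 0) + sign * c
    return out

def dirac(F):
    """D F = sum_i e_i (d/dx_i) F."""
    out = {}
    for i in range(n):
        dF = {s: sp.diff(c, xs[i]) for s, c in F.items()}
        for s, c in e_mul(i, dF).items():
            out[s] = out.get(s, 0) + c
    return out

# A sample multivector field with scalar, bivector and trivector parts.
F = {frozenset(): xs[0]**3 * xs[1],
     frozenset({0, 2}): sp.sin(xs[1]) * xs[2],
     frozenset({0, 1, 2}): xs[0] * xs[1] * xs[2]}

DDF = dirac(dirac(F))
lap = {s: sum(sp.diff(c, v, 2) for v in xs) for s, c in F.items()}
ok = all(sp.simplify(DDF.get(s, 0) - lap.get(s, 0)) == 0
         for s in set(DDF) | set(lap))
print(ok)   # True: D^2 equals the componentwise Laplace operator
```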
As in Chapter 3, in using the Dirac operator, it is in general necessary to work within the full Clifford algebra, since typically DF will not be a homogeneous multivector field even if F is. However, in some applications the fields have values in the even subalgebra, or even are homogeneous k-vector fields.

Example 8.1.2 (Analytic functions). Let X be a two-dimensional Euclidean plane, and let C = △⁰V ⊕ △²V be the standard geometric representation of the complex plane as in Section 3.2. Consider the Dirac equation ∇△f = 0 for a complex-valued function f = u + vj : C → C, where we have fixed an origin and ON-basis {e₁, e₂} in X = V, giving the identification V ↔ C, e₁ ↔ 1, e₂ ↔ j = e₁₂. Writing out the equation, we have
$$\nabla\mathbin{\triangle}f = (e_1\partial_1 + e_2\partial_2)\mathbin{\triangle}(u + e_{12}v) = (\partial_1u - \partial_2v)e_1 + (\partial_1v + \partial_2u)e_2.$$
Thus ∇△f = 0 coincides with the Cauchy–Riemann equations, and f is monogenic if and only if it is analytic. Note also that the only functions f : C → C that satisfy ∇⌟f = 0 = ∇∧f are the locally constant functions, since ∇∧f = grad u and ∇⌟f = −j grad v.

On the other hand, the complex function f corresponds to the vector field F(x) = e₁f(x) under the identification V ↔ C. Reversing this relation gives F(x) = \bar f(x)△e₁. Since the Clifford product is associative, it follows that F(x) is a plane divergence- and curl-free vector field if and only if
$$0 = \nabla\mathbin{\triangle}F(x) = \nabla\mathbin{\triangle}(\bar f(x)\mathbin{\triangle}e_1) = (\nabla\mathbin{\triangle}\bar f(x))\mathbin{\triangle}e_1,$$
that is, if f is antianalytic.

Example 8.1.3 (3D monogenic fields). Let F, G be vector fields and let f, g be scalar functions defined in an open set D in three-dimensional oriented Euclidean space. Then the multivector field f(x) + F(x) + ∗G(x) + ∗g(x) is monogenic if and only if
$$\begin{cases} \operatorname{div}F(x) = 0, \\ \nabla f(x) - \nabla\times G(x) = 0, \\ \nabla\times F(x) + \nabla g(x) = 0, \\ \operatorname{div}G(x) = 0. \end{cases}$$

We note that there is no restriction to assume that a monogenic field F takes values in the even subalgebra △^{ev}V.
Indeed, if F : D → △V is monogenic, we write F = F^{ev} + F^{od}, where F^{ev} : D → △^{ev}V and F^{od} : D → △^{od}V. Then 0 = ∇△F(x) = ∇△F^{ev} + ∇△F^{od}, where ∇△F^{ev} : D → △^{od}V and ∇△F^{od} : D → △^{ev}V, so we conclude that F^{ev} and F^{od} each are monogenic.
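The identification of ∇△f = 0 with the Cauchy–Riemann equations in Example 8.1.2 can be checked with sympy; the analytic test function f = z³ is my choice.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = (x1 + sp.I * x2)**3                      # analytic test function, f = u + jv
u, v = sp.re(sp.expand(f)), sp.im(sp.expand(f))

# Components of nabla f = (d1 u - d2 v) e1 + (d1 v + d2 u) e2
e1_comp = sp.simplify(sp.diff(u, x1) - sp.diff(v, x2))
e2_comp = sp.simplify(sp.diff(v, x1) + sp.diff(u, x2))
print(e1_comp, e2_comp)                      # 0 0: f is monogenic, i.e. analytic
```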
Example 8.1.4 (Stein–Weiss vector fields). If F : D → V = △¹V is a vector field in a general Euclidean space, then F is monogenic if and only if F is divergence- and curl-free. Thus, for vector fields the equation DF = 0 is equivalent to the first-order system
$$\begin{cases} \operatorname{div}F(x) = 0, \\ \operatorname{curl}F(x) = 0. \end{cases}$$
This is a consequence of the fact that ∇⌟F : D → △⁰V and ∇∧F : D → △²V, where △⁰V ∩ △²V = {0}.

We note that Example 8.1.4 generalizes as follows. When F : D → △^kV is a homogeneous multivector field, then ∇△F = 0 if and only if
$$\begin{cases} \nabla\mathbin{\lrcorner}F(x) = 0, \\ \nabla\wedge F(x) = 0. \end{cases}$$
The following proposition shows when all the homogeneous parts of a monogenic field are themselves monogenic.

Proposition 8.1.5 (Two-sided monogenicity). Let F : D → △V be a C¹ multivector field, and write F = F₀ + ··· + F_n, where F_j : D → △^jV. Then the following are equivalent.
(i) All the homogeneous parts F_j are monogenic fields.
(ii) The field F satisfies both ∇∧F = 0 and ∇⌟F = 0.
(iii) The field F is two-sided monogenic, that is, ∇△F = 0 and F△∇ = ∑_i ∂_iF(x)△e_i^* = 0.
Proof. (i) implies (iii): If F_j is monogenic, then ∇∧F_j = 0 = ∇⌟F_j as above, and therefore ∇△F_j as well as F_j△∇ = (−1)^j(∇∧F_j − ∇⌟F_j) is zero. Adding up all F_j proves (iii).
(iii) implies (ii): this is a consequence of the Riesz formulas, which show that
$$\nabla\wedge F = \tfrac12\big(\nabla\mathbin{\triangle}F + \widehat F\mathbin{\triangle}\nabla\big) \quad\text{and}\quad \nabla\mathbin{\lrcorner}F = \tfrac12\big(\nabla\mathbin{\triangle}F - \widehat F\mathbin{\triangle}\nabla\big).$$
(ii) implies (i): If ∇∧F = 0, then 0 = (∇∧F)_{j+1} = ∇∧F_j for all j, since d maps j-vector fields to (j+1)-vector fields. Similarly ∇⌟F_j = 0 for all j. Thus ∇△F_j = ∇⌟F_j + ∇∧F_j = 0. □

We next consider the fundamental solution for the Dirac operator. In order to apply the Fourier transform componentwise as in Section 6.2, we complexify the Clifford algebra △V ⊂ △V_c. We note that the exterior, interior, and Clifford derivatives are the Fourier multipliers
$$\widehat{dF}(\xi) = i\xi\wedge\hat F(\xi), \qquad \widehat{\delta F}(\xi) = i\xi\mathbin{\lrcorner}\hat F(\xi), \qquad \widehat{DF}(\xi) = i\xi\mathbin{\triangle}\hat F(\xi), \qquad \xi\in V.$$
From this it follows that, unlike d and δ, the Dirac operator D is elliptic and has a fundamental solution Ψ(x) with Fourier transform
$$\hat\Psi(\xi) = (i\xi)^{-1} = -i\,\frac{\xi}{|\xi|^2}.$$
Using the formula for the fundamental solution Φ to the Laplace operator ∆ from Example 6.3.1, where Φ̂(ξ) = −1/|ξ|², we obtain the following formula for Ψ(x) = ∇Φ. Note that unlike the situation for Φ, the two-dimensional case does not use any logarithm.

Definition 8.1.6 (Fundamental solution). The fundamental solution to the Dirac operator D in an n-dimensional Euclidean space with origin fixed, n ≥ 1, is the vector field
$$\Psi(x) := \frac{1}{\sigma_{n-1}}\,\frac{x}{|x|^n},$$
where σ_{n−1} := ∫_{|x|=1} dx = 2π^{n/2}/Γ(n/2) is the measure of the unit sphere in V, and Γ(z) := ∫₀^∞ e^{−t}t^{z−1}dt is the gamma function, with Γ(k) = (k−1)!.

Exercise 8.1.7. Show by direct calculation that ∇∧Ψ(x) = 0 and ∇⌟Ψ(x) = δ₀(x) in the distributional sense in V, where δ₀(x) is the Dirac delta distribution.

The following application of the general Stokes theorem is central to hypercomplex analysis.

Theorem 8.1.8 (Cauchy–Pompeiu formula for D). Let D be a bounded C¹-domain in Euclidean space X. If F ∈ C¹(D; △V), then
$$F(x) + \int_D\Psi(y-x)\,(DF)(y)\,dy = \int_{\partial D}\Psi(y-x)\,\nu(y)\,F(y)\,dy,$$
for all x ∈ D, where ν(y) denotes the outward-pointing unit normal vector field on ∂D. In particular, for monogenic multivector fields F, we have the Cauchy reproducing formula
$$F(x) = \int_{\partial D}\Psi(y-x)\,\nu(y)\,F(y)\,dy, \qquad x\in D. \tag{8.1}$$
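Before turning to the proof, formula (8.1) can be tested numerically in the plane (n = 2, σ₁ = 2π), with the Clifford product of the plane hand-coded on (1, e₁, e₂, e₁₂) coefficients; the monogenic test field F = grad(y₁² − y₂²) and the evaluation point are my choices.

```python
import math

def cmul(a, b):
    """Clifford product in the plane on coefficients (1, e1, e2, e12)."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,
            a0*b1 + a1*b0 - a2*b12 + a12*b2,
            a0*b2 + a2*b0 + a1*b12 - a12*b1,
            a0*b12 + a12*b0 + a1*b2 - a2*b1)

def F(y1, y2):
    # Gradient of the harmonic function y1^2 - y2^2: divergence- and curl-free.
    return (0.0, 2*y1, -2*y2, 0.0)

x = (0.3, 0.1)                        # point inside the unit disk D
N = 2048
ds = 2 * math.pi / N                  # arc length element on the unit circle
acc = [0.0, 0.0, 0.0, 0.0]
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N
    y1, y2 = math.cos(t), math.sin(t)
    d1, d2 = y1 - x[0], y2 - x[1]
    r2 = d1*d1 + d2*d2
    Psi = (0.0, d1/(2*math.pi*r2), d2/(2*math.pi*r2), 0.0)   # Psi(y - x)
    nu = (0.0, y1, y2, 0.0)           # outward unit normal on the circle
    val = cmul(cmul(Psi, nu), F(y1, y2))
    for c in range(4):
        acc[c] += val[c] * ds

print([round(a, 6) for a in acc])     # ~ [0, 0.6, -0.2, 0] = F(0.3, 0.1)
```

The boundary integral reproduces the field value F(x) = (0.6, −0.2), with the scalar and bivector parts vanishing as the theorem requires.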
Proof. For fixed x ∈ D, consider the linear 1-form θ(y, v) := Ψ(y−x)△v△F(y). For y ∈ D \ {x}, its nabla derivative is
$$\dot\theta(y, \nabla) = \sum_{i=1}^n \partial_{y_i}\big(\Psi(y-x)\mathbin{\triangle}e_i\mathbin{\triangle}F(y)\big) = \big(\Psi(\dot y - x)\mathbin{\triangle}\nabla\big)\mathbin{\triangle}F(y) + \Psi(y-x)\mathbin{\triangle}\big(\nabla\mathbin{\triangle}\dot F(y)\big) = \Psi(y-x)\mathbin{\triangle}(DF)(y),$$
by associativity of the Clifford product and since Ψ△∇ = ∇⌟Ψ − ∇∧Ψ = 0 by Exercise 8.1.7. To avoid using distribution theory, we consider the domain D_ε := D \ B(x, ε), obtained by removing a small ball around x. On ∂B(x, ε) the outward-pointing unit normal relative to D_ε is (x−y)/|x−y|. The Stokes formula (7.4) gives
$$\int_{D_\varepsilon}\Psi(y-x)(DF)(y)\,dy = \int_{\partial B(x,\varepsilon)}\Psi(y-x)\,\frac{x-y}{|x-y|}\,F(y)\,dy + \int_{\partial D}\Psi(y-x)\,\nu(y)\,F(y)\,dy$$
$$= -\frac{1}{\sigma_{n-1}\varepsilon^{n-1}}\int_{\partial B(x,\varepsilon)}F(y)\,dy + \frac{1}{\sigma_{n-1}}\int_{\partial D}\frac{y-x}{|y-x|^n}\,\nu(y)F(y)\,dy.$$
Upon taking limits ε → 0, the first term on the right-hand side will converge to −F(x), and the Cauchy–Pompeiu formula follows. □

Exercise 8.1.9 (Cauchy integral theorem). Apply Stokes's theorem and prove the general Cauchy theorem
$$\int_{\partial D}G(y)\mathbin{\triangle}\nu(y)\mathbin{\triangle}F(y)\,dy = 0$$
for a left monogenic field F and a right monogenic field G, that is, ∇△F = 0 = Ġ(x)△∇̇, in D. Deduce from this the classical Cauchy theorem ∫_{∂D}f(w)dw = 0 for an analytic function f from complex analysis. See Example 7.3.12.

Example 8.1.10 (Cauchy formula in C). The Cauchy formula for analytic functions in the complex plane is a special case of Theorem 8.1.8. To see this, consider an analytic function f(z) in a plane domain D. As in Example 8.1.2, we identify the vector x ∈ V with the complex number z = e₁x ∈ C = △^{ev}V, y ∈ V with w = e₁y ∈ C, and the normal vector ν with the complex number n = e₁ν. If f(z) : D → C = △^{ev}V is analytic, thus monogenic, and if x ∈ D, then Theorem 8.1.8 shows that
$$f(z) = \frac{1}{\sigma_1}\int_{\partial D}\frac{e_1(w-z)}{|w-z|^2}\,(e_1n(w))\,f(w)\,|dw| = \frac{1}{2\pi j}\int_{\partial D}\frac{f(w)\,(jn(w)|dw|)}{w-z} = \frac{1}{2\pi j}\int_{\partial D}\frac{f(w)\,dw}{w-z}.$$
Here we have used that e₁(w−z)e₁ = \overline{w-z}, that complex numbers commute, and that jn is tangent to a positively oriented curve. We have written |dw| for the scalar length measure on ∂D.

Note that unlike the situation for analytic functions, in higher dimensions the normal vector must be placed in the middle, between the fundamental solution and the monogenic field. This is because the Clifford product is noncommutative. For
analytic functions, the normal infinitesimal element ν(y)dy corresponds to dw/j, which can be placed, for example, at the end of the expression, since complex numbers commute.

As in the complex plane, the Cauchy formula for monogenic fields has a number of important corollaries, of which we next consider a few.

Corollary 8.1.11 (Smoothness). Let F : D → △V be a monogenic field in a domain D. Then F is real analytic, and in particular a C^∞-regular field.

Proof. Fix a ball B(x₀, ε) such that B(x₀, 2ε) ⊂ D. Then
$$F(x) = \frac{1}{\sigma_{n-1}}\int_{|y-x_0|=\varepsilon}\frac{y-x}{|y-x|^n}\,\nu(y)F(y)\,dy, \qquad \text{for all } x\in B(x_0,\varepsilon).$$
The stated regularity now follows from that of the fundamental solution x ↦ (y−x)/|y−x|^n. □

We also obtain a Liouville theorem for entire monogenic fields.

Corollary 8.1.12 (Liouville). Let F : X → △V be an entire monogenic field, that is, monogenic on the whole Euclidean space X. If F is bounded, then F is a constant field.

Proof. Let x₀ ∈ X. For all R > 0 and 1 ≤ k ≤ n we have
$$\partial_kF(x_0) = \frac{1}{\sigma_{n-1}}\int_{|y-x_0|=R}\partial_{x_k}\Big[\frac{y-x}{|y-x|^n}\Big]\Big|_{x=x_0}\,\nu(y)F(y)\,dy.$$
If F is bounded, the triangle inequality for integrals shows that
$$|\partial_kF(x_0)| \lesssim \int_{|y-x_0|=R}R^{-n}\,dy \lesssim 1/R.$$
Taking limits as R → ∞ shows that ∂_kF(x₀) = 0 for all k. Since x₀ was arbitrary, F must be constant. □

Next we consider what further properties monogenic fields do and do not share with analytic functions. In contrast to analytic functions, monogenic fields do not form a multiplication algebra; that is, F(x) and G(x) being monogenic in D does not imply that x ↦ F(x)△G(x) is monogenic. The obstacle here is the noncommutativity of the Clifford product, which causes
$$D(FG) = (DF)G + \dot DF\dot G \ne (DF)G + F(DG).$$
Although monogenic fields in general cannot be multiplied to form another monogenic field, we can do somewhat better than the real linear structure of monogenic fields. Recall that analytic functions form a complex linear space. This generalizes to monogenic fields as follows.
Proposition 8.1.13 (Right Clifford module). Let D be an open set in a Euclidean space X. Then the monogenic fields in D form a right Clifford module; that is, if F(x) is monogenic, then so is x ↦ F(x)△w for every constant w ∈ △V.

Proof. This is a consequence of the associativity of the Clifford product, since
$$\nabla\mathbin{\triangle}(F(x)\mathbin{\triangle}w) = (\nabla\mathbin{\triangle}F(x))\mathbin{\triangle}w = 0\mathbin{\triangle}w = 0. \qquad\Box$$

In contrast to analytic functions, monogenic functions do not form a group under composition; that is, F(y) and G(x) being monogenic in appropriate domains does not imply that x ↦ F(G(x)) is monogenic. Indeed, in general the composition is not even well defined, since the range space △V is not contained in V. Although it does not make sense to compose monogenic fields, the situation is not that different in higher dimensions. Recall that in the complex plane, analytic functions are the same as conformal maps, at least for functions with invertible derivative. In higher dimensions one should generalize so that the inner function G is conformal and the outer function F is monogenic. In this way, we can do the following type of conformal change of variables that preserves monogenicity. Sections 4.5 and 11.4 are relevant here.

Proposition 8.1.14 (Conformal Kelvin transform). Let Tx = (ax+b)(cx+d)^{−1} be a fractional linear map of a Euclidean space X = V, and let D ⊂ X be an open set such that ∞ ∉ T(D). For a field F : T(D) → △V, define a pulled-back field
$$K_T^mF : D \to \triangle V : x \mapsto \frac{cx+d}{|cx+d|^n}\mathbin{\triangle}F(T(x)).$$
Then
$$D(K_T^mF)(x) = \frac{\det_{\triangle}(T)}{|cx+d|^2}\,K_T^m(DF)(x), \qquad x\in D,$$
where det_△(T) := ad − bc. In particular, if F is monogenic, then so is K_T^mF.

Proof. Applying the product rule as in Example 7.1.9 shows that
$$\nabla\mathbin{\triangle}(K_T^mF)(x) = \Big(\nabla\,\frac{x+c^{-1}d}{|x+c^{-1}d|^n}\Big)\frac{c}{|c|^n}\,F(T(x)) + \sum_{i=1}^n e_i\,\frac{cx+d}{|cx+d|^n}\,\partial_i\big(F(T(x))\big).$$
The first term is zero, since the fundamental solution is monogenic outside the origin. For the second term, we note that since T is conformal, the derivative \underline T_x will map the ON-basis {e_i} onto a basis {e_i' = \underline T_xe_i} of orthogonal vectors of equal length. By Exercise 4.5.18, we have
$$e_i' = \big(\det{}_\triangle(T)/(cx+d)\big)\,e_i\,(cx+d)^{-1}.$$
The dual basis is seen to be
$$e_i'^* = \big((cx+d)/\det{}_\triangle(T)\big)\,e_i\,(cx+d),$$
Chapter 8. Hypercomplex Analysis
so that
$$e_i\,(cx+d) = \big(\det{}_\triangle(T)/(cx+d)\big)\,e_i'^*.$$
According to the chain rule, the directional derivatives are ∂_{e_i}(F(T(x))) = (∂_{e_i'}F)(T(x)), so
$$\nabla\mathbin{\triangle}(K_T^mF)(x) = \frac{\det_\triangle(T)}{|cx+d|^2}\,\frac{cx+d}{|cx+d|^n}\sum_{i=1}^n e_i'^*\,(\partial_{e_i'}F)(T(x)) = \frac{\det_\triangle(T)}{|cx+d|^2}\,K_T^m(DF)(x). \qquad\Box$$
Specializing to the inversion change of variables Tx = 1/x, we make the following definition.

Definition 8.1.15 (Kelvin transform). The monogenic Kelvin transform of a field F : D → △V is the field
$$K^mF(x) := \frac{x}{|x|^n}\mathbin{\triangle}F(1/x).$$
Similarly, using the fundamental solution for the Laplace operator, we define the harmonic Kelvin transform of a function u : D → R to be
$$K^hu(x) := |x|^{2-n}\,u(1/x).$$
For the monogenic Kelvin transform, we have shown that DK^m = −|x|^{−2}K^mD. We now use this to obtain a similar result for the harmonic Kelvin transform.

Proposition 8.1.16. The harmonic Kelvin transform satisfies the commutation relation
$$\Delta(K^hu)(x) = |x|^{-4}\,K^h(\Delta u)(x).$$
In particular, the Kelvin transform of a harmonic function is harmonic.

Proof. We note that ∆ = D² and K^hu = xK^mu. Thus
$$\Delta K^hu = DD(xK^mu) = D\big(nK^mu + (2\partial_x - xD)K^mu\big) = nDK^mu + 2(DK^mu + \partial_xDK^mu) - nDK^mu - (2\partial_x - xD)DK^mu$$
$$= 2DK^mu + xD^2K^mu = -2|x|^{-2}K^mDu - xD\big(|x|^{-2}K^mDu\big) = -x^{-1}DK^mDu = x^{-1}|x|^{-2}K^mD^2u = |x|^{-4}K^h\Delta u.$$
Here ∂_xf = ∑_{j=1}^n x_j∂_jf = ⟨x, ∇⟩f denotes the radial directional derivative, and we have used that D∂_xf = ∑_{j=1}^n(∇x_j)∂_jf + ∂_xDf = (1+∂_x)Df by the product rule. □
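Proposition 8.1.16 can be verified directly with sympy for n = 3; the test functions u = x₁² and v = x₁x₂ are my choices, and 1/x = x/|x|² is the Clifford inverse of the vector x.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = (x1, x2, x3)
n = 3
r2 = x1**2 + x2**2 + x3**2

def kelvin_h(u):
    """Harmonic Kelvin transform K^h u(x) = |x|^(2-n) u(x/|x|^2)."""
    sub = {v: v / r2 for v in X}
    return r2**sp.Rational(2 - n, 2) * u.subs(sub, simultaneous=True)

def lap(u):
    return sum(sp.diff(u, v, 2) for v in X)

u = x1**2                                     # non-harmonic: lap u = 2
lhs = lap(kelvin_h(u))
rhs = r2**(-2) * kelvin_h(lap(u))
print(sp.simplify(lhs - rhs))                 # 0: Delta K^h u = |x|^{-4} K^h Delta u

v = x1 * x2                                   # harmonic
print(sp.simplify(lap(kelvin_h(v))))          # 0: K^h preserves harmonicity
```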
Exercise 8.1.17. Consider the special case of Proposition 8.1.14 in which ρ(x) = Tx = q̂xq^{−1} is an isometry, where q ∈ Pin(V). Investigate how the conformal Kelvin transform K_T^mF, the pullback ρ∗F, the pushforward ρ∗^{−1}F and the normalized pushforward ρ̃∗^{−1}F are related. Show that all these four fields are monogenic whenever F is monogenic, and relate this result to Proposition 8.1.13.
8.2 Spherical monogenics
In our n-dimensional Euclidean space X = V with a fixed origin, we denote by S := {x ∈ X ; |x| = 1} the unit sphere. We generalize the well-known theory of Taylor series expansions of analytic functions in the plane. When n = 2, we know that a function analytic at 0 can be written as a convergent power series
$$f(x+iy) = \sum_{k=0}^\infty P_k(x, y), \qquad x^2 + y^2 < \varepsilon^2,$$
for some ε > 0, where P_k ∈ P_k^m := {a_k(x+iy)^k ; a_k ∈ C}. A harmonic function can be written in the same way if we allow terms P_k ∈ P_k^h := {a_k(x+iy)^k + b_k(x−iy)^k ; a_k, b_k ∈ C}. Note that P_0^h and all P_k^m are one-dimensional complex linear spaces, whereas the P_k^h are two-dimensional when k ≥ 1. The spaces P_k^m and P_k^h are subspaces of the space P_k of all homogeneous polynomials of degree k, which has dimension k + 1. A polynomial P ∈ P_k is in particular homogeneous of degree k in the sense that
$$P(rx) = r^kP(x), \qquad \text{for all } r > 0,\ x\in\mathbf R^2.$$
This shows that P is uniquely determined by its restriction to the unit circle |x| = 1 if the degree of homogeneity is known. In the power series for f, the term P_k describes the kth-order approximation of f around the origin. Next consider an n-dimensional space X, and the following generalization of the spaces above.

Definition 8.2.1 (Spherical harmonics and monogenics). Let X = V be a Euclidean space, and let k ∈ N and s ∈ R. Define function spaces
P_k := {P : X → △V ; all component functions are homogeneous polynomials of degree k},
P_s^m := {F : X \ {0} → △V ; DF = 0, F(rx) = r^sF(x), x ≠ 0, r > 0},
P_s^h := {F : X \ {0} → △V ; ∆F = 0, F(rx) = r^sF(x), x ≠ 0, r > 0}.
Let P_k^s ⊂ P_k and P_s^{sh} ⊂ P_s^h be the subspaces of scalar functions F : X \ {0} → △⁰V = R, and let P_s^{em} ⊂ P_s^m be the subspace of functions F : X \ {0} → △^{ev}V that take values in the even subalgebra. Denote by P_s^h(S) the space of restrictions of functions P ∈ P_s^h to the unit sphere S, and by P_s^m(S) the space of restrictions of functions P ∈ P_s^m to the unit sphere S. We refer to these functions as (multivector-valued) spherical harmonics and spherical monogenics, respectively.

Note that the spaces P_k and P_s^h essentially are spaces of scalar functions: each function in these spaces has component functions that belong to the same space, since the conditions on the function do not involve any coupling between the component functions. Even if the definitions of P_s^m and P_s^h are quite liberal, these are essentially spaces of polynomials, as the following shows.

Proposition 8.2.2. Let n := dim X. The monogenic space P_s^m contains nonzero functions only if
$$s \in \{\ldots, -(n+1), -n, -(n-1), 0, 1, 2, \ldots\}.$$
The harmonic space P_s^h contains nonzero functions only if
$$s \in \{\ldots, -(n+1), -n, -(n-1), -(n-2), 0, 1, 2, \ldots\}.$$
If k ∈ N, then P_k^m ⊂ P_k^h ⊂ P_k. The Kelvin transforms give self-inverse one-to-one correspondences
$$K^m : P_s^m \to P_{-(s+n-1)}^m, \qquad K^h : P_s^h \to P_{-(s+n-2)}^h.$$
Proof. (i) First consider the monogenic spaces P_s^m. Apply the Cauchy formula (8.1) to P ∈ P_s^m in the domain D_ε := B(0; 1) \ B(0; ε) for fixed 0 < ε < 1. For x ∈ D_ε, we have
$$P(x) = \int_S\Psi(y-x)\,y\,P(y)\,dy + \int_{|y|=\varepsilon}\Psi(y-x)\,\nu(y)\,P(y)\,dy.$$
For fixed x ≠ 0, the second integral is dominated by ε^{n−1}\sup_{|y|=ε}|P|. Letting ε → 0, this tends to zero if s > −(n−1), and it follows that 0 is a removable singularity of P(x). If −(n−1) < s < 0, Liouville's Theorem 8.1.12 shows that P = 0. Furthermore, generalizing the proof of Liouville's theorem by applying higher-order derivatives shows that if s ≥ 0, then P(x) must be a polynomial. Thus P_s^m ≠ {0} only if s ∈ N. That K^m : P_s^m → P_{−(s+n−1)}^m is bijective and self-inverse is straightforward to verify.
(ii) Next consider the harmonic spaces P_s^h. If P ∈ P_s^h, then DP ∈ P_{s−1}^m. If s ∉ Z or −(n−2) < s < 0, then (i) shows that DP = 0, so that P ∈ P_s^m. Again by (i), we conclude that P = 0. If s ∈ N, then the same argument shows that DP is a polynomial. Here we may assume that P is scalar-valued, so that DP = ∇P. Integrating, we find that P is a polynomial as well. That K^h : P_s^h → P_{−(s+n−2)}^h is bijective and self-inverse is straightforward to verify. □
We next examine the finite-dimensional linear spaces P_k^m and P_k^h for k ∈ N. As we have seen, this also gives information about the spaces P_{−(k+n−1)}^m and P_{−(k+n−2)}^h via the Kelvin transforms. Note that unlike the situation in the plane, there is a gap −(n−1) < s < 0 and −(n−2) < s < 0, respectively, between the nonzero spaces, and that this gap grows with dimension.

A polynomial P(x) ∈ P_k can be written
$$P(x) = \sum_{s\subset\overline n}\;\sum_{\alpha\in\mathbf N^n,\,|\alpha|=k} P_{\alpha s}\,x^\alpha e_s.$$
Here we use multi-index notation x^α = x^{(α_1,...,α_n)} := x_1^{α_1}\cdots x_n^{α_n}, and we shall write δ_i := (0, ..., 0, 1, 0, ..., 0), where 1 is the ith coordinate. We introduce an auxiliary inner product
$$\langle P, Q\rangle_p := \sum_s\sum_\alpha \alpha!\,P_{\alpha s}Q_{\alpha s},$$
where α! = (α_1, ..., α_n)! := α_1!\cdots α_n!.

Proposition 8.2.3. With respect to the inner product ⟨·,·⟩_p on P_k, we have orthogonal splittings
$$P_k = P_k^m \oplus xP_{k-1}, \qquad P_k = P_k^h \oplus x^2P_{k-2},$$
where xP_{k−1} := {x△P(x) ; P ∈ P_{k−1}}, as well as
$$P_k^h = P_k^m \oplus xP_{k-1}^m, \quad k\ge 1, \qquad P_0^h = P_0^m.$$
Proof. (i) The key observation is that
$$P_k \to P_{k-1} : P(x)\mapsto \nabla\mathbin{\triangle}P(x) \qquad\text{and}\qquad P_{k-1}\to P_k : P(x)\mapsto x\mathbin{\triangle}P(x)$$
are adjoint maps with respect to ⟨·,·⟩_p. In fact, the inner product is designed for this purpose. To see this, write P(x) = ∑_{s,α}P_{α,s}x^αe_s and Q(x) = ∑_{t,β}Q_{β,t}x^βe_t. Then ∇△P(x) = ∑_{i=1}^n∑_{s,α}P_{α,s}\,α_ix^{α−δ_i}\,ε(i,s)\,e_{i△s}, so that
$$\langle\nabla\mathbin{\triangle}P, Q\rangle_p = \sum_{i,s,\alpha,t,\beta}P_{\alpha,s}\,\alpha_i\,\epsilon(i,s)\,Q_{\beta,t}\,\langle x^{\alpha-\delta_i}e_{i\triangle s}, x^\beta e_t\rangle = \sum_{i,s,\alpha}P_{\alpha,s}\,\alpha_i\,\epsilon(i,s)\,Q_{\alpha-\delta_i,\,i\triangle s}\,(\alpha-\delta_i)!.$$
On the other hand, x△Q(x) = ∑_{i=1}^n∑_{t,β}Q_{β,t}\,x^{β+δ_i}\,ε(i,t)\,e_{i△t}, so that
$$\langle P, x\mathbin{\triangle}Q\rangle_p = \sum_{i,s,\alpha,t,\beta}P_{\alpha,s}\,Q_{\beta,t}\,\epsilon(i,t)\,\langle x^\alpha e_s, x^{\beta+\delta_i}e_{i\triangle t}\rangle = \sum_{i,s,\alpha}P_{\alpha,s}\,Q_{\alpha-\delta_i,\,i\triangle s}\,\epsilon(i, i\triangle s)\,\alpha!.$$
Since α_i(α−δ_i)! = α! and ε(i, s) = ε(i, i△s), the duality follows.
(ii) We note that P_k^m = N(∇) and that xP_{k−1} = R(x). Since the maps are adjoint, these subspaces are orthogonal complements in P_k. Similarly, P_k = P_k^h ⊕ x²P_{k−2}, since (∇²)∗ = x². Finally, we consider the map P_k^h → P_{k−1}^m : P(x) ↦ ∇△P(x). This is well defined, since ∇△P is monogenic if P is harmonic. The adjoint operator will be P_{k−1}^m → P_k^h : Q(x) ↦ x△Q(x), provided x△Q is harmonic whenever Q is monogenic. To verify that this is indeed the case, we calculate as in the proof of Proposition 8.1.16 that
$$D^2(xQ) = D\big(nQ + (2\partial_x - xD)Q\big) = nDQ + 2(D + \partial_xD)Q - \big(nDQ + (2\partial_x - xD)DQ\big) = (2 + xD)DQ.$$
This proves that P_k^m = N(∇) is the orthogonal complement to xP_{k−1}^m = R(x) in P_k^h. □
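A scalar-valued consequence of the adjointness in the proof, namely that ∆ and multiplication by |x|² are adjoint for ⟨·,·⟩_p, can be checked with sympy; the sample polynomials are my choices.

```python
import sympy as sp

xs = sp.symbols('x1 x2 x3')

def pairing(P, Q):
    """<P, Q>_p = sum over multi-indices alpha of alpha! * P_alpha * Q_alpha."""
    Pd = dict(sp.Poly(sp.expand(P), *xs).terms())
    Qd = dict(sp.Poly(sp.expand(Q), *xs).terms())
    total = sp.Integer(0)
    for mono, c in Pd.items():
        if mono in Qd:
            fact = sp.Integer(1)
            for m in mono:
                fact *= sp.factorial(m)
            total += fact * c * Qd[mono]
    return total

lap = lambda P: sum(sp.diff(P, v, 2) for v in xs)
r2 = sum(v**2 for v in xs)

P = xs[0]**2*xs[1]**2 + 3*xs[0]*xs[1]*xs[2]**2 - xs[2]**4   # P in P_4
Q = 5*xs[0]**2 + 2*xs[0]*xs[1] - xs[2]**2                   # Q in P_2
print(pairing(lap(P), Q), pairing(P, sp.expand(r2 * Q)))    # 56 56
```

Both pairings agree exactly, reflecting that each ∂_i is adjoint to multiplication by x_i in this inner product.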
Corollary 8.2.4 (Dimensions). Let X be an n-dimensional Euclidean space. Then
$$\dim P_k = 2^n\dim P_k^s, \qquad \dim P_k^s = \binom{k+n-1}{n-1}, \qquad \dim P_k^{em} = 2^{n-1}\big(\dim P_k^s - \dim P_{k-1}^s\big), \qquad \dim P_k^m = 2\dim P_k^{em},$$
$$\dim P_k^{sh} = \dim P_k^s - \dim P_{k-2}^s, \qquad \dim P_k^h = 2^n\dim P_k^{sh}.$$

Proof. To find dim P_k^s, note that this is the number of monomials of degree k in n variables. The standard combinatorial argument is as follows. Choose n−1 of the numbers 1, 2, 3, ..., k+n−1, say 1 ≤ m₁ < m₂ < ··· < m_{n−1} ≤ k+n−1. This can be done in $\binom{k+n-1}{n-1}$ ways. Such choices {m_i} are in one-to-one correspondence with monomials
$$x_1^{m_1-1}\,x_2^{m_2-m_1-1}\,x_3^{m_3-m_2-1}\cdots x_n^{k+n-1-m_{n-1}}.$$
From Proposition 8.2.3 the remaining formulas follow.
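The count dim P_k^s = C(k+n−1, n−1), and the resulting spherical-harmonic dimensions, can be checked by brute-force enumeration (a sketch):

```python
from itertools import combinations_with_replacement
from math import comb

def dim_Pks(k, n):
    """Count monomials of degree k in n variables by direct enumeration."""
    return sum(1 for _ in combinations_with_replacement(range(n), k))

for n in range(1, 6):
    for k in range(7):
        assert dim_Pks(k, n) == comb(k + n - 1, n - 1)

# dim P_k^{sh} = dim P_k^s - dim P_{k-2}^s; for n = 3 this is the classical 2k+1
def dim_Pksh(k, n):
    return comb(k + n - 1, n - 1) - (comb(k + n - 3, n - 1) if k >= 2 else 0)

print([dim_Pksh(k, 3) for k in range(5)])   # [1, 3, 5, 7, 9]
```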
Exercise 8.2.5 (Two and three dimensions). Let V be a two-dimensional Euclidean space. In this case dim P_k^{sh} = 2 = dim P_k^{em}. Show that P_k^{em} is a one-dimensional complex linear space with the geometric complex structure j = e₁₂ ∈ △²V. Find bases for these spaces using the complex powers z^k = (x+jy)^k. Identifying vectors and complex numbers as in Section 3.2, write the splitting P_k^h = P_k^m ⊕ xP_{k−1}^m in complex notation.
Let V be a three-dimensional Euclidean space. In this case, dim P_k^{sh} = 2k+1 and dim P_k^{em} = 4(k+1). Find bases for the spherical harmonics P_k^{sh} and for the spherical monogenics P_k^{em}. Note that P_k^{em} is a right vector space over H of dimension k+1.
Recall from Fourier analysis that the trigonometric functions {e^{ikθ}}_{k∈Z}, suitably normalized, form an ON-basis for L₂(S) on the unit circle S ⊂ C = R² in the complex plane. Thus every f ∈ L₂(S) can be uniquely written
$$f(e^{i\theta}) = \sum_{k=-\infty}^\infty a_ke^{ik\theta}.$$
For k ≥ 0, the function e^{ikθ} extends to the analytic function z^k on the disk |z| < 1. For k < 0, the function e^{ikθ} extends to the analytic function z^k on |z| > 1, which vanishes at ∞, or alternatively to the antianalytic and harmonic function z̄^{−k} on |z| < 1. In higher dimensions, we have the following analogue.

Theorem 8.2.6. Let S be the unit sphere in an n-dimensional Euclidean space. The subspaces P_k^h(S), k = 0, 1, 2, ..., of spherical harmonics are pairwise orthogonal with respect to the L₂(S) inner product
$$\langle F, G\rangle := \int_S\langle F(x), G(x)\rangle\,dx.$$
Moreover, within each P_k^h(S), the two subspaces P_k^m(S) and xP_{k−1}^m(S) are orthogonal, and xP_{k−1}^m(S) = P_{2−n−k}^m(S). The Hilbert space L₂(S) splits into finite-dimensional subspaces as
$$L_2(S) = \bigoplus_{k=0}^\infty P_k^h(S) = \bigoplus_{k=0}^\infty P_k^m(S) \oplus \bigoplus_{k=-\infty}^{-(n-1)} P_k^m(S).$$
Proof. Let P ∈ Pkh (S) and Q ∈ Plh (S) with k 6= l. Green’s second theorem, as in Example 7.3.11, shows that Z Z h∂x P (x), Q(x)i − hP (x), ∂x Q(x)i dx = h∆P, Qi − hP, ∆Qi dx = 0. |x| 0. For the two spherical operators we have the following. Proposition 8.2.15. Let V be an n-dimensional Euclidean space, and consider the Hilbert space L2 (S) on the unit sphere S. Then DS defines a self-adjoint operator in L2 (S) with spectrum σ(DS ) = Z \ {−(n − 2), . . . , −1}. The spherical Laplace operator equals ∆S = DS (2 − n − DS ). h In the splitting into spherical harmonics, L2 (S) = ⊕∞ k=0 Pk (S), the spherical Laplace operator acts according to
∆_S ( Σ_{k=0}^{∞} f_k ) = Σ_{k=0}^{∞} k(2 − n − k) f_k,
whereas in the splitting into spherical monogenics,

L2(S) = ⊕_{k=0}^{∞} P_k^m(S) ⊕ ⊕_{k=−∞}^{−(n−1)} P_k^m(S),

the spherical Dirac operator acts according to

D_S ( Σ_{k=0}^{∞} f_k + Σ_{k=−∞}^{−(n−1)} f_k ) = Σ_{k=0}^{∞} k f_k + Σ_{k=−∞}^{−(n−1)} k f_k.
Proof. It remains to prove that ∆_S = D_S(2 − n − D_S). Using polar coordinates x = ry, y ∈ S, we note that

D = r^{−1}y(xD) = r^{−1}y(∂_x − D_S) = y∂_r − r^{−1}yD_S.

Squaring this Euclidean Dirac operator, we get

∆ = D^2 = (y∂_r − r^{−1}yD_S)^2 = y∂_r y∂_r − y∂_r r^{−1}yD_S − r^{−1}yD_S y∂_r + r^{−1}yD_S r^{−1}yD_S
= ∂_r^2 − ∂_r r^{−1}D_S − r^{−1}yD_S y∂_r + r^{−2}yD_S yD_S.

Writing [A, B] = AB − BA for the commutator of operators, we have used that [∂_r, y] = 0 and [D_S, r] = 0. To simplify further, we compute that [∂_r, r] = 1 and [∂_x, D_S] = [∂_x, ∂_x − xD] = −[∂_x, xD] = 0. Thus ∂_r r^{−1}D_S = −r^{−2}D_S + r^{−1}D_S ∂_r, so that

∆ = ∂_r^2 − r^{−1}(D_S + yD_S y)∂_r + r^{−2}(D_S + yD_S yD_S).

Comparing this equation and (8.2), we see that n − 1 = −(D_S + yD_S y) and

∆_S = D_S + yD_S yD_S = D_S + (1 − n − D_S)D_S = D_S(2 − n − D_S),

as claimed.
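On the circle (n = 2), D_S acts as the multiplier k on e^{ikθ} and ∆_S = d^2/dθ^2, so the identity ∆_S = D_S(2 − n − D_S) reduces to −k^2 = k(0 − k). A quick FFT-based check of our own on a trigonometric polynomial:

```python
import numpy as np

n = 2
N = 64
theta = 2 * np.pi * np.arange(N) / N
f = np.cos(3 * theta) + 0.5 * np.sin(7 * theta)   # a trigonometric polynomial

k = np.fft.fftfreq(N, d=1.0 / N)                  # integer Fourier frequencies

DS = lambda g: np.fft.ifft(k * np.fft.fft(g))     # spherical Dirac: multiplier k
lap_via_DS = DS((2 - n) * f - DS(f)).real         # D_S(2 - n - D_S) f
lap_direct = np.fft.ifft(-(k**2) * np.fft.fft(f)).real   # d^2 f / d theta^2

print(np.max(np.abs(lap_via_DS - lap_direct)))    # ~ 0
```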
In three dimensions, it is standard to introduce spherical coordinates (r, θ, φ), and D_S and ∆_S can be expressed in terms of ∂_θ and ∂_φ. The classical expression for the spherical harmonics, obtained by separation of variables, is r^k P_k^m(cos θ)e^{imφ}, where the P_k^m(t) denote the associated Legendre polynomials, m = −k, . . . , −1, 0, 1, . . . , k. The optimal parametrization of the sphere S, though, uses stereographic projection, which is conformal and has only one singular point for the coordinate system.
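As a concrete instance for k = 2, m = 0, the solid harmonic r^2 P_2^0(cos θ) = (3z^2 − r^2)/2 is the Cartesian polynomial z^2 − (x^2 + y^2)/2, whose harmonicity can be checked with central finite differences, which are exact for quadratic polynomials. A sketch of our own:

```python
def u(x, y, z):
    """Solid harmonic r^2 P_2^0(cos theta) = (3 z^2 - r^2)/2 in Cartesian form."""
    return z * z - (x * x + y * y) / 2

def laplacian(f, x, y, z, h=1e-3):
    """Second-order central differences; exact for quadratic polynomials."""
    return (
        f(x + h, y, z) + f(x - h, y, z)
        + f(x, y + h, z) + f(x, y - h, z)
        + f(x, y, z + h) + f(x, y, z - h)
        - 6 * f(x, y, z)
    ) / (h * h)

print(abs(laplacian(u, 0.3, -0.7, 1.2)))  # ~ 0: u is harmonic
```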
Chapter 8. Hypercomplex Analysis
276
Proposition 8.2.16 (Stereographic projection of D_S). Fix an (n − 1)-dimensional subspace V_S ⊂ V and a point p ∈ S orthogonal to V_S, and consider the stereographic parametrization

T : V_S → S : y ↦ T(y) = (py + 1)(y − p)^{−1},

as in (4.4). The monogenic Kelvin transform associated to the stereographic projection T defines an isometry of Hilbert spaces 2^{(n−1)/2} K_T^m : L2(S) → L2(V_S), and the spherical Dirac operator corresponds to

(K_T^m D_S (K_T^m)^{−1})G(y) = −(1/2) p ((|y|^2 + 1)D_y + (y − p)) G(y),   y ∈ V_S,

where D_y denotes the Dirac operator in the Euclidean space V_S.

Proof. According to Proposition 8.1.14, the Kelvin transform

K_T^m F(y) = ((y − p)/|y − p|^n) F((py + 1)(y − p)^{−1})

satisfies D(K_T^m F)(y) = −2|y − p|^{−2} K_T^m(DF)(y). From the definition of D_S we get K_T^m(D_S F) = K_T^m(∂_x F) − K_T^m(xDF), where

K_T^m(xDF)(y) = ((y − p)/|y − p|^n)(py + 1)(y − p)^{−1}(DF)(T(y))
= (y − p)^{−1}(py + 1)K_T^m(DF)(y)
= −(1/2)(y − p)(py + 1)D(K_T^m F)(y)
= (1/2)(1 + |y|^2) p D(K_T^m F)(y).

To rewrite K_T^m(∂_x F), we observe that the vertical derivative of the stereographic parametrization at y ∈ V_S is

T_y(p) = (−2/(1 + |y|^2))(y − p)p(y − p)^{−1} = (2/(1 + |y|^2)) x   (8.3)

according to Exercise 4.5.18. Thus the chain and product rules give

K_T^m(∂_x F)(y) = ((y − p)/|y − p|^n)(∂_x F)(T(y)) = ((y − p)/|y − p|^n) ∂_{y_n}(F(T(y))) (1 + |y|^2)/2
= ((1 + |y|^2)/2)(∂_{y_n} K_T^m F(y) − (p/|y − p|^n) F(T(y)))
= ((1 + |y|^2)/2) ∂_{y_n} K_T^m F(y) − (p/2)(y − p)K_T^m F(y).

Here ∂_{y_n} is the partial derivative in the direction p. Since pD = ∂_{y_n} + pD_y, we obtain the stated formula. To show that the stated map is a Hilbert space isometry, note that by (8.3) the Jacobian is J_T(y) = (2/(1 + |y|^2))^{n−1}, since T is conformal. Thus

∫_S |F(x)|^2 dx = ∫_{V_S} |F(T(y))|^2 (2/(1 + |y|^2))^{n−1} dy = 2^{n−1} ∫_{V_S} |K_T^m F(y)|^2 dy.
8.3 Hardy Space Splittings
Let D = D^+ be a bounded Lipschitz domain in Euclidean space X, with boundary ∂D separating it from the exterior unbounded domain D^− = X \ D. Let ν denote the unit normal vector field on ∂D pointing into D^−. The main operator in this section is the principal value Cauchy integral

Eh(x) := 2 p.v. ∫_{∂D} Ψ(y − x)ν(y)h(y) dy = 2 lim_{ε→0} ∫_{∂D\B(x;ε)} Ψ(y − x)ν(y)h(y) dy,   x ∈ ∂D,

which appears when we let x ∈ ∂D, rather than x ∈ D, in the Cauchy reproducing formula from Theorem 8.1.8. Here we assume only suitable bounds on h, and in particular we do not assume that h is a restriction of a monogenic field. The factor 2 is a technicality that will ensure that E^2 = I. The singularity at y = x in the integral is of order |y − x|^{1−n} on the (n − 1)-dimensional surface ∂D, which makes the definition and boundedness of E a nontrivial matter, and cancellations need to be taken into account. Due to the strong singularity at y = x, we also refer to E as the Cauchy singular integral.

Ignoring these analytic problems for the moment, we first investigate by formal calculations how E is related to the two limits

E^+ h(x) := lim_{z∈D^+, z→x} ∫_{∂D} Ψ(y − z)ν(y)h(y) dy,   x ∈ ∂D,

and

E^− h(x) := lim_{z∈D^−, z→x} ∫_{∂D} Ψ(y − z)(−ν(y))h(y) dy,   x ∈ ∂D,

in the Cauchy reproducing formula (8.1) for D^+ and D^− respectively. Placing z = x infinitesimally close, but interior, to ∂D, we have for E^+

E^+ h(x) = lim_{ε→0} ( ∫_{Σ^0_{x,ε}} Ψ(y − x)ν(y)h(x) dy + ∫_{Σ^1_{x,ε}} Ψ(y − x)ν(y)h(y) dy ) = (1/2)h(x) + (1/2)Eh(x),

where Σ^0_{x,ε} := {y ∈ D^−; |y − x| = ε} and Σ^1_{x,ε} := {y ∈ ∂D; |y − x| > ε}. We have here approximated h(y) ≈ h(x), changed the integration surface from ∂D \ Σ^1_{x,ε} to Σ^0_{x,ε} using Stokes's theorem, and used that Ψ(y − x)ν(y) = 1/(σ_{n−1}ε^{n−1}) on Σ^0_{x,ε} in the first integral. Thus the first term h/2 appears when we integrate around the singularity y = x on an infinitesimal half-sphere. Since −ν is outward pointing from D^−, a similar formal calculation indicates that E^− h(x) = (1/2)h(x) − (1/2)Eh(x), and we deduce the operator relations

(1/2)(I + E) = E^+,   (1/2)(I − E) = E^−,   E^+ + E^− = I,   E^+ − E^− = E.
Moreover, from Theorem 8.1.8 we conclude that

E^+ E^+ = E^+,   E^− E^− = E^−,
since E ± h by definition is the restriction of a monogenic field to ∂D, no matter what h is. This shows that E + and E − are complementary projection operators. For a suitable space of multivector fields H on ∂D, these projections define a splitting H = E + H ⊕ E − H. This means that any given field h on ∂D can be uniquely written as a sum h = h+ +h− , where h+ is the restriction to ∂D of a monogenic field in D+ and h− is the restriction to ∂D of a monogenic field in D− that decays at ∞. We refer to E ± H as Hardy subspaces, and to E ± as Hardy projections. Note also the structure of the Cauchy singular integral operator E = E + − E − : it reflects the exterior Hardy subspace E − H across the interior Hardy subspace E + H. In particular, E 2 = I, as claimed.
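These operator relations can be illustrated concretely in the Fourier multiplier picture on the unit circle (compare Example 8.3.1 below), where E becomes multiplication by ±1 on Fourier modes. The following toy sketch is our own; the convention sgn(0) := 1, putting constants in the interior Hardy subspace, is our choice:

```python
import numpy as np

N = 128
k = np.fft.fftfreq(N, d=1.0 / N)        # integer frequencies
sgn = np.where(k >= 0, 1.0, -1.0)       # multiplier of E (sgn(0) := 1)

def E(h):
    return np.fft.ifft(sgn * np.fft.fft(h))

rng = np.random.default_rng(0)
h = rng.standard_normal(N)

Ep = lambda g: (g + E(g)) / 2           # interior Hardy projection E^+
Em = lambda g: (g - E(g)) / 2           # exterior Hardy projection E^-

print(np.allclose(E(E(h)), h))          # E^2 = I
print(np.allclose(Ep(h) + Em(h), h))    # E^+ + E^- = I
print(np.allclose(Ep(Ep(h)), Ep(h)))    # (E^+)^2 = E^+
```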
Figure 8.1: (a) The piecewise constant vector field h : ∂D → ∧^1 R^2 which equals e1 in the second quadrant and vanishes on the rest of the curve ∂D. (b) The Hardy splitting of h as the sum of two traces of divergence- and curl-free vector fields.
Example 8.3.1 (Constant-curvature boundaries). The most natural space for the singular integral operator E is H = L2(∂D). In the simplest case, in which D is the upper complex half-plane with ∂D the real axis, E is a convolution singular integral, which under the Fourier transform corresponds to multiplication by

sgn(ξ) = 1 for ξ > 0,   sgn(ξ) = −1 for ξ < 0,
at least if h takes values in the even subalgebra and we use the geometric imaginary unit j as in Example 8.1.10. The second simplest example is that in which D is the unit ball |x| < 1 as in Theorem 8.2.6. In this case, E^+ projects onto ⊕_{k=0}^{∞} P_k^m(S), whereas E^− projects onto ⊕_{k=−∞}^{−(n−1)} P_k^m(S). In these examples the Hardy subspaces are orthogonal and ‖E^±‖ = 1.

However, unlike Hodge splittings, the splitting into Hardy subspaces is not orthogonal for more general domains D. When ∂D has some smoothness beyond Lipschitz, Fourier methods apply to prove that E is a bounded operator on L2(∂D), which geometrically means that the angle between the Hardy subspaces, although not straight, is always positive. A breakthrough in modern harmonic analysis was the discovery that this continues to hold for general Lipschitz domains.

Theorem 8.3.2 (Coifman–McIntosh–Meyer). Let D be a bounded strongly Lipschitz domain. Then the principal value Cauchy integral Eh(x) of any h ∈ L2(∂D) is well defined for almost every x ∈ ∂D, and we have bounds

∫_{∂D} |Eh(x)|^2 dx ≲ ∫_{∂D} |h(x)|^2 dx.
This is a deep result that is beyond the scope of this book. There exist many different proofs. A singular integral proof is to estimate the matrix for E in a wavelet basis for L2(∂D) adapted to ∂D. A spectral proof is to identify E^± as spectral projections of a Dirac-type operator on ∂D, generalizing the spherical Dirac operator D_S from Definition 8.2.13. The problem is that for general domains this operator is no longer self-adjoint, but rather has spectrum in a double sector around R, and it becomes a nontrivial matter involving Carleson measures to estimate the spectral projections corresponding to the two sectors. See Section 8.4 for references and further comments.

We remark only that from Theorem 8.3.2 one can prove that for h ∈ L2(∂D), the Cauchy extensions

F^+(x) := ∫_{∂D} Ψ(y − x)ν(y)h(y) dy,   x ∈ D^+,   (8.4)

and

F^−(x) := ∫_{∂D} Ψ(y − x)(−ν(y))h(y) dy,   x ∈ D^−,   (8.5)
have limits as x → ∂D both in an L2(∂D) sense and pointwise almost everywhere, provided that we approach ∂D in a nontangential way.

In the remainder of this section, we perform a rigorous analysis of the splitting of the space of Hölder continuous multivector fields

C^α(∂D) = C^α(∂D; ∧V),   0 < α < 1,

from Example 6.4.1, into Hardy subspaces on a bounded C^1 surface ∂D. This setup is a good starting point for studying Hardy splittings that requires only straightforward estimates. We exclude the endpoint cases α = 0, continuous functions, and α = 1, Lipschitz continuous functions, for the reason that singular integral operators like E are typically not bounded on these spaces. It is also not bounded on L^∞(∂D), something that can be seen in Figure 8.1 if we zoom in at the discontinuities of h.

Proposition 8.3.3 (Hardy projection bounds). Let D be a bounded C^1 domain and 0 < α < 1, and assume that h ∈ C^α(∂D). Define the Cauchy extensions F^± in D^± as in (8.4) and (8.5). Then F^+ is a monogenic field in D^+, and F^− is a monogenic field in D^− with decay F^− = O(|x|^{−(n−1)}) at ∞. At the boundary ∂D, the traces

f^+(y) := lim_{x∈D^+, x→y} F^+(x),   f^−(y) := lim_{x∈D^−, x→y} F^−(x),   y ∈ ∂D,

exist, with estimates ‖f^+‖_α ≲ ‖h‖_α and ‖f^−‖_α ≲ ‖h‖_α. In terms of operators, this means that the Hardy projections E^± : h ↦ f^± are bounded on C^α(∂D).

Proof. (i) That F^+ and F^− are monogenic is a consequence of the associativity of the Clifford product. Indeed, applying the partial derivatives under the integral sign shows that

∇ 4 F^+(x) = ∫_{∂D} ∇_x 4 (Ψ(y − x) 4 ν(y) 4 h(y)) dy = ∫_{∂D} (∇_x 4 Ψ(y − x)) 4 ν(y) 4 h(y) dy = 0,

when x ∉ ∂D. The decay at infinity follows from the fact that ∂D and h are bounded and the decay of the fundamental solution Ψ.

(ii) We next consider the boundary trace of F^+. A similar argument applies to the trace of F^−. Note that in order to estimate ‖f^+‖_α, it suffices to estimate |f^+(x) − f^+(y)| ≲ |x − y|^α ‖h‖_α for |x − y| ≤ δ, provided that |f^+(x)| ≲ ‖h‖_α for all x ∈ ∂D, since ∂D is bounded. Thus we may localize to a neighborhood of a point p ∈ ∂D, in which we can assume that ∂D coincides with the graph of a C^1 function φ. We choose a coordinate system {x_i} so that p is the origin and ∂D is given by x_n = φ(x′), where x′ = (x_1, . . . , x_{n−1}), in the cylinder |x′| < r, |x_n| < s. Let δ < min(r, s) and consider a point x = (x′, x_n) ∈ D ∩ B(0, δ). We claim that

|∂_j F^+(x)| ≲ ‖h‖_α (x_n − φ(x′))^{α−1},   j = 1, . . . , n.   (8.6)

To show this, consider the vertical projection z = (x′, φ(x′)) of x onto ∂D, and note that F^+(x) − h(z) = ∫_{∂D} Ψ(y − x)ν(y)(h(y) − h(z)) dy, since ∫_{∂D} Ψ(y − x)ν(y) dy = 1 according to the Cauchy formula. Thus differentiation with respect to x, with z fixed, gives

|∂_j F^+(x)| ≲ ‖h‖_α ∫_{∂D} |∂_j Ψ(y − x)| |y − z|^α dy = ‖h‖_α (I + II).
Here I denotes the part of the integral inside the cylinder, and II is the part outside. Since |∂_j Ψ(y − x)| ≲ 1/|y − x|^n, the term II is bounded. For the integral I, we change variable from y = (y′, φ(y′)) =: ρ(y′) ∈ ∂D to y′ ∈ R^{n−1}. To find the change of (n − 1)-volume, we calculate

ρ_{y′}(e_1 ∧ ··· ∧ e_{n−1}) = (e_1 + (∂_1 φ)e_n) ∧ ··· ∧ (e_{n−1} + (∂_{n−1} φ)e_n)
= e_{1···(n−1)} + (∂_1 φ)e_{n2···(n−1)} + (∂_2 φ)e_{1n3···(n−1)} + ··· + (∂_{n−1} φ)e_{1···(n−2)n},

the norm of which is √(1 + |∇φ|^2). Since the function φ is C^1, we conclude that |∂_j Ψ(y − x)| ≈ 1/(|y′ − x′| + t)^n, |y − z|^α ≈ |y′ − x′|^α, and dy ≈ dy′, where t = x_n − φ(x′). Therefore

I ≲ ∫_{|y′|<r} (|y′ − x′| + t)^{−n} |y′ − x′|^α dy′ ≲ ∫_0^{∞} (r + t)^{−n} r^α r^{n−2} dr ≲ t^{α−1}.

This proves (8.6). Now fix y = (y′, φ(y′)) ∈ ∂D ∩ B(0, δ) and consider first the vertical limit f^+(y). Since

|F^+(y′, r) − F^+(y′, φ(y′) + t)| ≤ ∫_{φ(y′)+t}^{r} |∂_n F^+(y′, s)| ds ≲ ‖h‖_α ∫_0^{r−φ(y′)} s^{α−1} ds,

it is clear that this limit exists, since the integral is convergent. Moreover, we get the estimate |f^+(y)| ≲ ‖h‖_α, since |F^+(y′, r)| is bounded by ‖h‖_α.

Next we aim to show that {F^+(x)} converges when x → y from D^+ in general, and not only along the vertical direction. Let x_1 = (x′_1, t_1), x_2 = (x′_2, t_2) ∈ D ∩ B(y; ε), and define t := max(t_1, t_2) + 2ε(1 + ‖∇φ‖_∞). Then

F^+(x_2) − F^+(x_1) = ∫_γ ⟨dx, ∇⟩F^+(x),

where γ is the piecewise straight line from x_1 to x_2 via (x′_1, t) and (x′_2, t). The first and last vertical line integrals are dominated by ‖h‖_α ∫_0^{Cε} t^{α−1} dt ≲ ‖h‖_α ε^α as above, whereas in the middle horizontal line integral, the integrand is dominated by ‖h‖_α ε^{α−1}. In total we obtain the estimate |F^+(x_2) − F^+(x_1)| ≲ ‖h‖_α ε^α when x_1, x_2 ∈ D ∩ B(y, ε). This shows the existence of the limit f^+(y) as x → y from D^+. By taking x_1, x_2 ∈ ∂D, it also shows that ‖f^+‖_α ≲ ‖h‖_α, which completes the proof.

Proposition 8.3.4 (Sokhotski–Plemelj jumps). Let D be a bounded C^1 domain and 0 < α < 1. Then the Cauchy principal value integral E : C^α(∂D) → C^α(∂D) is a well-defined and bounded linear operator. The Hardy projections E^± equal

E^± h(x) = (1/2)h(x) ± p.v. ∫_{∂D} Ψ(y − x)ν(y)h(y) dy,   x ∈ ∂D.

In terms of operators, this means that E^± = (1/2)(I ± E).
Proof. We start by verifying the identity E^+ h(x) = (1/2)h(x) + (1/2)Eh(x) for x ∈ ∂D. As in the proof of Proposition 8.3.3, write x = (x′, φ(x′)) in a coordinate system in a cylinder around x. If h ∈ C^α(∂D), the integrand of

∫_{∂D} Ψ(y − (x + te_n))ν(y)(h(y) − h(x)) dy

is seen to be bounded by |y − x|^{α−(n−1)}, uniformly for 0 < t ≤ t_0. Here we view h(x) as a constant function. Letting t → 0+ and applying the Lebesgue dominated convergence theorem, it follows that

E^+ h(x) − h(x) = E^+(h − h(x))(x) = ∫_{∂D} Ψ(y − x)ν(y)(h(y) − h(x)) dy
= lim_{ε→0} ( ∫_{∂D\B(x;ε)} Ψ(y − x)ν(y)h(y) dy ) − ( lim_{ε→0} ∫_{∂D\B(x;ε)} Ψ(y − x)ν(y) dy ) h(x).

The first equality follows from the fact that the Cauchy integral of the constant field h(x) is the constant field h(x) in D^+. It suffices to show that

lim_{ε→0} ∫_{∂D\B(x;ε)} Ψ(y − x)ν(y) dy = 1/2.

Applying the Cauchy formula for the domain D^+ \ B(x; ε) shows that it suffices to prove lim_{ε→0} ∫_{∂B(x;ε)∩D^+} Ψ(y − x)ν(y) dy = 1/2. But

lim_{ε→0} ∫_{∂B(x;ε)∩D^+} Ψ(y − x) ((y − x)/|y − x|) dy = lim_{ε→0} |∂B(x; ε) ∩ D^+| / |∂B(x; ε)|,   (8.7)

and on approximating ∂D by its tangent hyperplane at x, this limit is seen to be 1/2, since ∂D is assumed to be C^1 regular at x. To summarize, we have shown that E^+ = (1/2)(I + E), and Proposition 8.3.3 shows that E = 2E^+ − I is a bounded and well-defined operator. Letting t → 0− instead, we get −E^− h(x) − 0 = Eh(x) − (1/2)h(x), which shows that E^− = (1/2)(I − E).

Exercise 8.3.5. Generalize Proposition 8.3.3 to bounded Lipschitz domains. Show that Proposition 8.3.4 fails for bounded Lipschitz domains.

Summarizing the Hölder estimates in this section, we have the following main result.

Theorem 8.3.6 (Hardy subspace splitting). Let D be a bounded C^1 domain and let 0 < α < 1. Then we have a splitting of the Hölder space C^α(∂D) into Hardy subspaces

C^α(∂D) = E^+ C^α ⊕ E^− C^α.
The Hardy subspaces are the ranges of the Hardy projections E^± : C^α(∂D) → C^α(∂D), which are the spectral projections E^± = (1/2)(I ± E) of the Cauchy singular integral operator E.

The interior Hardy subspace E^+ C^α consists of all traces F^+|_{∂D} of monogenic fields F^+ in D^+ that are Hölder continuous up to ∂D. The exterior Hardy subspace E^− C^α consists of all traces F^−|_{∂D} of monogenic fields F^− in D^− that are Hölder continuous up to ∂D and have limit lim_{x→∞} F^−(x) = 0. In fact, all such F^− have decay O(1/|x|^{n−1}) as x → ∞.

Proof. Proposition 8.3.3 shows that E^± : C^α(∂D) → C^α(∂D) are bounded projection operators. Proposition 8.3.4 shows in particular that they are complementary: E^+ + E^− = I. This shows that C^α(∂D) splits into the two Hardy subspaces. It is clear from the definition and Proposition 8.3.3 that the Hardy subspaces consist of traces of Hölder continuous monogenic fields F^± in D^± respectively. The decay of F^− at ∞ follows from that of Ψ.

Conversely, the fact that the trace of every Hölder continuous monogenic field F^+ in D^+ belongs to E^+ C^α follows from Theorem 8.1.8. For the corresponding result for D^−, we apply the Cauchy reproducing formula to the bounded domain D^−_R := D^− ∩ B(0; R) for large R. We have

F^−(x) = −∫_{∂D} Ψ(y − x)ν(y)F^−(y) dy + ∫_{|y|=R} Ψ(y − x)ν(y)F^−(y) dy,   x ∈ D^−_R.

Since |∂B(0; R)| grows like R^{n−1} and Ψ(x − y) decays like 1/R^{n−1} as R → ∞, the last integral will vanish if lim_{x→∞} F^−(x) = 0, showing that F^− is the Cauchy integral of F^−|_{∂D}, so that F^−|_{∂D} ∈ E^− C^α.
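The reproducing/annihilating dichotomy behind this splitting can be seen in miniature for the classical Cauchy integral on the unit circle: the Cauchy extension reproduces interior Hardy functions in D^+ and kills exterior Hardy functions there. A numerical sketch of our own, using trapezoidal quadrature:

```python
import numpy as np

N = 400
theta = 2 * np.pi * np.arange(N) / N
w = np.exp(1j * theta)                  # points on the unit circle
dw = 1j * w * (2 * np.pi / N)           # dw = i e^{i theta} d theta

def cauchy(h, z):
    """Cauchy extension F(z) = (1/(2 pi i)) * contour integral of h(w)/(w - z) dw, |z| < 1."""
    return np.sum(h / (w - z) * dw) / (2j * np.pi)

z = 0.5
print(abs(cauchy(w, z) - z))            # ~ 0: h(w) = w is reproduced, F^+(z) = z
print(abs(cauchy(1 / w, z)))            # ~ 0: the exterior Hardy function 1/w is killed
```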
8.4 Comments and References
8.1 The higher-dimensional complex analysis obtained from the Dirac equation and Clifford algebra has been developed since the 1980s. This research field is referred to as Clifford analysis. The pioneering work is Brackx, Delanghe, and Sommen [23]. Further references include Gilbert and Murray [42] and Delanghe, Sommen, and Soucek [33]. Div/curl systems like those in Example 8.1.4 have been used to define higher-dimensional harmonic conjugate functions in harmonic analysis. The seminal work is Stein and Weiss [89].

8.2 This material builds on the treatment by Axler, Bourdon, and Ramey [16] of spherical harmonics. We have generalized mutatis mutandis the theory for spherical harmonics to spherical monogenics.

8.3 The classical Lp-based Hardy spaces, named after G.H. Hardy, on the real axis or the unit circle in the complex plane were introduced by F. Riesz in
1923. The function space topologies for p ≤ 1 that they provide are fundamental in modern harmonic analysis.

Theorem 8.3.2 was proved by R. Coifman, A. McIntosh, and Y. Meyer in [28] for general Lipschitz graphs in the complex plane. Earlier, A. Calderón had obtained a proof in the case of small Lipschitz constants. The higher-dimensional result in Theorem 8.3.2 is equivalent to the L2 boundedness of the Riesz transforms on Lipschitz surfaces, and this was known already in [28] to follow from the one-dimensional case by a technique called Calderón's method of rotations. A direct proof using Clifford algebra is in [66]. From Calderón–Zygmund theory, Lp boundedness for 1 < p < ∞ also follows.

A reference for wavelet theory, which is intimately related to Theorem 8.3.2, is Meyer [69]. It is interesting to note that just like induced bases {e_s} for multivectors, wavelet bases for function spaces also do not come with a linear order of the basis functions, but these are rather ordered as a tree. For Clifford algebras and wavelets, see Mitrea [71]. Unpublished lecture notes by the author containing the wavelet proof of Theorem 8.3.2 are [81]. The basic idea behind estimating singular integrals like the Cauchy integral using wavelets is simple: the matrices of such operators in a wavelet basis are almost diagonal in a certain sense. However, the nonlinear ordering of the basis elements and the details of the estimates make the proof rather technical.

There is also a much deeper extension to higher dimensions of the result in [28] that was known as the Kato square root problem. It was finally solved affirmatively by Auscher, Hofmann, Lacey, McIntosh, and Tchamitchian [6], 40 years after it was formulated by Kato and 20 years after the one-dimensional case [28] was solved. As McIntosh used to tell the story, the works on linear operators by T. Kato and J.-L.
Lions closed that field of research in the 1960s; only one problem remained open, and that was the Kato square root problem. A reference for a spectral/functional calculus approach to Theorem 8.3.2 is Axelsson, Keith, and McIntosh [12]. See in particular [12, Consequence 3.6] for a proof of Theorem 8.3.2, and [12, Consequence 3.7] for a proof of the Kato square root problem. This paper illustrates well how Dirac operators and Hodge- and Hardy-type splittings can be used in modern research in harmonic analysis.
Chapter 9

Dirac Wave Equations

Prerequisites: Some familiarity with electromagnetism and quantum mechanics is useful for Section 9.2. A background in partial differential equations, see Section 6.3, and boundary value problems is useful but not necessary for the later sections. For the operator theory that we use, the reader is referred to Section 6.4. Ideally, we would have liked to place Section 9.6 after Chapter 10. But since it belongs to the present chapter, we ask the reader to consult Chapter 10 for more on Hodge decompositions when needed.

Road map: Acting with the nabla symbol through the Clifford product ∇ 4 F(x) on multivector fields, or through a representation ∇.ψ(x) on spinor fields, in Euclidean space we obtain first-order partial differential operators that are square roots of the Laplace operator ∆. However, Paul Dirac first discovered his original equation in 1928 for spin-1/2 massive particles, in the spacetime setting, as a square root of the Klein–Gordon equation, that is, the wave equation with a zero-order term

∂_x^2 ψ + ∂_y^2 ψ + ∂_z^2 ψ − c^{−2} ∂_t^2 ψ = (m^2 c^2/ℏ^2) ψ.

The resulting Dirac wave equation ℏ∇.ψ = mcψ, describing the free evolution of the wave function for the particle, a spinor field ψ : W → 4/W in physical spacetime, has been described as one of the most successful and beautiful equations ever. For example, it predicted the existence of antiparticles some years before these were experimentally found in 1932. In Section 9.2 we survey Dirac's equation, as well as Maxwell's equations from the early 1860s, which describe the evolution of the electric and magnetic fields. We show how, in a very geometric way, the electromagnetic field is a multivector field and that the Maxwell equations, when written in terms of Clifford algebra, form a Dirac wave equation. The four classical
equations correspond to the four spaces ∧^j V of homogeneous multivectors in three-dimensional Euclidean space V.

Motivated by applications to Maxwell's equations, Sections 9.3 to 9.7 develop a theory for boundary value problems (BVPs) for Dirac equations, and they show how it applies to electromagnetic scattering. We consider only time-harmonic waves at a fixed frequency. Our abstract setup for a BVP is to consider two splittings of a space H of functions on the boundary ∂D of the domain D. The first splitting, H = A^+H ⊕ A^−H, encodes the differential equation and generalizes the splitting into Hardy subspaces A^+H = E^+H and A^−H = E^−H from Theorem 8.3.6. The second splitting, H = B^+H ⊕ B^−H, encodes the boundary conditions. Typically the projections B^± are pointwise and determined by the normal vector ν. Relating this to the classical boundary value problems for the Laplace operator, B^+ would encode Dirichlet boundary conditions and B^− would encode Neumann boundary conditions. From this point of view of functional analysis, studying BVPs amounts to studying the geometry between these two different splittings. Well-posedness of BVPs will mean that the subspaces A^±H do not intersect the subspaces B^±H, and in the optimal case, the two reflection operators A and B, where A generalizes the Cauchy principal value integral, anticommute.

In Section 9.5, we formulate integral equations for solving scattering problems for Dirac equations. The aim is to find singular but not hypersingular integral operators that are both bounded and invertible also on Lipschitz boundaries, whenever the scattering problem considered is well posed. A problem that we need to overcome to find an integral equation that is numerically useful is that we cannot easily discretize spaces like the Hardy spaces E^±H, which are defined by a nonlocal integral constraint. We obtain good integral equations on good function spaces for solving BVPs for the Dirac equation.
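The geometry of two splittings can be illustrated by a toy finite-dimensional model of our own (the subspaces and names here are our choices, not from the book): in H = R^2, let A^+H = span{(1, 0)} play the role of a Hardy subspace and B^+H = span{(cos φ, sin φ)} a boundary-condition subspace. Solving the BVP amounts to inverting B^+ restricted to A^+H, which is possible exactly when A^+H does not intersect B^−H, i.e. φ ≠ π/2, and the norm of the inverse blows up as the subspaces approach that degenerate position:

```python
import numpy as np

def restricted_inverse_norm(phi):
    """Toy model: norm of (B^+ restricted to A^+H)^{-1} for A^+H = span{(1,0)},
    B^+H = span{(cos phi, sin phi)}. Blows up as phi -> pi/2."""
    b = np.array([np.cos(phi), np.sin(phi)])   # unit vector spanning B^+H
    a = np.array([1.0, 0.0])                   # unit vector spanning A^+H
    scale = np.dot(a, b)                       # B^+ a = scale * b
    return 1.0 / abs(scale)

for phi in [0.1, 1.0, 1.5]:
    print(phi, restricted_inverse_norm(phi))   # grows as phi -> pi/2
```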
To apply these to scattering problems for electromagnetic waves, we show in Sections 9.6 and 9.7 how we require a third splitting of the boundary function space H: a boundary Hodge decomposition H = R(Γk ) ⊕ R(Γ∗k ), where the Maxwell fields live in the Hodge component R(Γk ). Embedding Maxwell’s equations into the Dirac equation, and solving the BVP with a Dirac integral equation, we give examples in Section 9.7 of how this algorithm performs numerically. Highlights: • Boosting E and B using Clifford algebra: 9.2.4
• Discovery of antiparticles: 9.2.8 • Stratton–Chu as a Clifford–Cauchy integral: 9.3.8 • Well-posedness via operator Clifford algebra 4R2 : 9.4.5 • Rellich spectral sector vs. Lipschitz geometry of boundary: 9.5.1 • Spin integral equation: 9.5.5 • Maxwell fields and boundary Hodge decompositions: 9.7.1
9.1 Wave and Spin Equations
The Dirac operator on a Euclidean space from Definition 8.1.1 generalizes in the obvious way to an inner product space of arbitrary signature.

Definition 9.1.1 (4-Dirac operator). Let (X, V) be an inner product space. The 4-Dirac operator D = D_V acting on multivector fields F : D → 4V defined on some domain D ⊂ V is the nabla operator

(DF)(x) := ∇ 4 F(x) = Σ_{j=1}^{n} e*_j 4 ∂_j F(x)

induced by the Clifford product V × 4V → 4V : (v, w) ↦ v 4 w as in Definition 7.1.2. Here ∂_j are partial derivatives with respect to coordinates in a basis {e_j} for V, with dual basis {e*_j}.

Since 4-Dirac operators are our main type of Dirac operator, we sometimes omit 4 in the notation. When V is a Euclidean space, we speak of a harmonic Dirac operator, while if V = W is a spacetime, we speak of a wave Dirac operator.

For a Euclidean space, we know from Chapter 8 that D^2 = ∆. We have seen that this means that multivector fields F solving DF = 0 in particular have scalar component functions F_s that are harmonic. Turning to the wave Dirac operator, we now have D^2 = □, where □ is the d'Alembertian from Section 6.3. Indeed, in an ON-basis {e_i} we have

D^2 F = (−e_0 ∂_0 + e_1 ∂_1 + ··· + e_n ∂_n)^2 F = (−∂_0^2 + ∂_1^2 + ··· + ∂_n^2)F.

Similar to the Euclidean case, it follows that multivector fields solving the wave Dirac equation DF = 0 have scalar component functions F_s that solve the wave equation □F_s = 0.
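The identity D^2 = □ rests only on the Clifford relations e_0^2 = −1, e_1^2 = 1, e_0e_1 + e_1e_0 = 0, and can be checked on Fourier symbols using a 2×2 real matrix representation in dimension 1 + 1 (a standard choice of matrices, ours here, not from the book):

```python
import numpy as np

e0 = np.array([[0.0, -1.0], [1.0, 0.0]])   # e0^2 = -I (time-like)
e1 = np.array([[0.0, 1.0], [1.0, 0.0]])    # e1^2 = +I (space-like)
I = np.eye(2)

# The Clifford relations:
assert np.allclose(e0 @ e0, -I)
assert np.allclose(e1 @ e1, I)
assert np.allclose(e0 @ e1 + e1 @ e0, 0 * I)

# Symbol of D = -e0 d0 + e1 d1 at frequencies (a, b):
a, b = 0.7, -1.3
M = -a * e0 + b * e1
print(np.allclose(M @ M, (-a**2 + b**2) * I))  # True: D^2 = box on symbols
```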
The wave Dirac operator describes a wave propagation of multivector fields. For the harmonic Dirac operator we saw in Chapter 8 how the fundamental solution Φ to the Laplace operator yielded a fundamental solution Ψ = ∇Φ to D. Similarly, the fundamental solution to the wave equation encoded by the Riemann operators R_t from Proposition 6.2.2 and Example 6.3.2 now yields solution formulas for the wave Dirac operator.

Proposition 9.1.2 (Propagation of Dirac waves). Fix a time-like unit vector e_0 in a spacetime W and let V = [e_0]^⊥ be the space-like complement. Consider the initial value problem for the wave Dirac equation D_W F = G for given initial data F|_V = f and source G. We assume that f and G_{x_0}(·) = G(x_0, ·), for each fixed time x_0, belong to L2(V; 4W). Then the solution F_{x_0}(x) = F(x_0, x) is given by

F(x_0, x) = M_{x_0} F_0(x) + ∫_0^{x_0} M_{x_0−s}(e_0 G_s)(x) ds,   x_0 > 0,

where the Fourier multiplier M_{x_0} on L2(V; 4W) is M_{x_0} g := (∂_0 − e_0 D)R_{x_0} g.

Proof. We apply the partial Fourier transform to D_W F = G in the x-variables, for each fixed x_0. We obtain the ODE −e_0 ∂_0 F̂_{x_0}(ξ) + iξ F̂_{x_0}(ξ) = Ĝ_{x_0}(ξ), ξ ∈ V, with solution

F̂_{x_0}(ξ) = exp(−ie_0 ξ x_0) f̂(ξ) + ∫_0^{x_0} exp(−ie_0 ξ(x_0 − s)) e_0 Ĝ_s(ξ) ds.

We have exp(−ie_0 ξ x_0) = cos(|ξ|x_0) − ie_0 ξ sin(|ξ|x_0)/|ξ| according to Exercise 1.1.5, which is seen to be the symbol of M_{x_0}. The inverse Fourier transformation yields the stated formula for F.

It follows that the time evolution of the wave Dirac equation is quite similar to that for the scalar second-order wave equation: with our scaling, the propagation speed is 1, and in odd dimensions there is a Huygens principle. However, although evolution backward in time is well posed, the wave Dirac equation is not symmetric in the time variable, unlike the scalar wave equation. Another difference is that the only initial datum that we need is F(0, ·), and no normal derivative.

The second type of Dirac operator that we consider are the following spin-Dirac operators.

Definition 9.1.3 (4/-Dirac operator). Let (X, V) be an inner product space, with complex spinor space 4/V. The 4/-Dirac operator D/ = D/_V acting on spinor fields Ψ : X → 4/V is the nabla operator

(D/Ψ)(x) := ∇.Ψ(x) = Σ_{j=1}^{n} e*_j.∂_j Ψ(x),
which is induced by the bilinear map V × 4/V → 4/V : (θ, ψ) ↦ θ.ψ as in Definition 7.1.2. Here ∂_j are partial derivatives with respect to coordinates in a basis {e_j} for V, with dual basis {e*_j}.

When V is a Euclidean space, we speak of a harmonic 4/-Dirac operator, while if V = W is a spacetime, we speak of a wave 4/-Dirac operator. The 4/-Dirac operators are best known for their representations as matrix first-order partial differential operators. Such expressions are straightforward to derive using representations of Clifford algebras. See Section 5.1.

Analogous to the 4-Dirac operator, for a Euclidean space we have D/^2 = ∆, while in spacetime D/^2 = □, and spinor fields solving D/Ψ = 0 have harmonic functions and solutions to the wave equation as component functions, respectively.

Exercise 9.1.4 (Hypercomplex spin analysis). Show how the theory from Chapter 8 for the Cauchy integral, the monogenic Kelvin transform, and spherical monogenics generalizes in a natural way for solutions to the harmonic 4/-Dirac operator. Explain why such solutions do not form a right Clifford module as in Proposition 8.1.13 and why the notion of two-sided monogenicity from Proposition 8.1.5 does not generalize. Show how Proposition 9.1.2 generalizes to describe the free wave evolution for wave 4/-Dirac equations.

We next consider how the wave Dirac equations are related to the harmonic Dirac equations.

Proposition 9.1.5 (4V representation of D_W). Let W be a spacetime and fix a time-like unit vector e_0 and its Euclidean orthogonal complement V = [e_0]^⊥. Identify the even part 4^{ev}W of the spacetime Clifford algebra and the Euclidean Clifford algebra 4V via the isomorphism in Proposition 3.3.5. We write a general spacetime multivector w ∈ 4W as w = w_+ + e_0 w_−, where w_± ∈ 4^{ev}W ↔ 4V.
Identifying a multivector field F in W in this way with a pair (F_+, F_−) of multivector fields in V, and similarly for G, the wave Dirac equation D_W F = G corresponds to

(∂_0 + D_V)F_+ = −G_−,
(∂_0 − D_V)F_− = G_+.

Proof. Since D_W swaps 4^{ev}W and 4^{od}W fields, we obtain in a spacetime ON-basis {e_i} that D_W F = G is equivalent to

(−e_0 ∂_0 + D_0)F_+ = e_0 G_−,
(−e_0 ∂_0 + D_0)(e_0 F_−) = G_+,

where D_0 := Σ_{j=1}^{n} e_j ∂_j. Multiplying the first equation by e_0 and commuting e_0 to the left in the second equation establishes the claim.
Changing notation, an argument as in the proof of Proposition 9.1.5 in the case that dim W is even also shows that the wave 4/-Dirac equation D/_W F = G corresponds to

(∂_0 + D/_V)F_+ = −G_−,
(∂_0 − D/_V)F_− = G_+.

Here we use that 4/V = 4/^+W via the representation ρ : V → 4/^+W : v ↦ e_0 v.(·), and write the spacetime spinor field as F = F_+ + e_0 F_− in the splitting 4/W = 4/^+W ⊕ 4/^−W. Note that e_0 : 4/^+W → 4/^−W is invertible.

We end this section by considering how these Dirac operators are related to exterior and interior derivative operators d and δ. For any inner product space, it is clear from the definitions that the 4-Dirac operators are

D = d + δ.   (9.1)
This holds true also for the wave 4-Dirac operator. We note the following refinement of Proposition 9.1.5. Proposition 9.1.6 (∧V representation of dW , δW ). Let W be a spacetime and use notation as for D in Proposition 9.1.5. Then the differential equations dW F = G and δW F = G correspond to ( ( (T − ∂0 + δV )F+ = −G− , (T + ∂0 + dV )F+ = −G− , − (T + ∂0 − δV )F− = G+ , (T ∂0 − dV )F− = G+ , respectively, where T + f = 12 (f + fb) and T − f = 12 (f − fb) denote the projections onto ∧ev V and ∧od V respectively. Proof. For example, we see that dW F = G is equivalent to ( −e0 ∧ ∂0 F+ + d0 F+ = e0 G− , −e0 ∧ ∂0 (e0 F− ) + d0 (e0 F− ) = G+ , Pn where d0 F := j=1 ej ∧ ∂j F . To relate the exterior product to the Clifford algebra isomorphism 4ev W ≈ 4V , we use the Riesz formula (3.4). We also note that w 7→ −e0 we0 yields an automorphism of 4ev W that negates e0 V . Therefore −e0 we0 corresponds to w b under the isomorphism 4ev W ↔ 4V . This yields for F ∈ ev 4 W ↔ 4V , e0 (e0 ∧ F ) = 21 (−F + e0 F e0 ) ↔ 12 (−F − Fb) = −T + F, e0 ∧ (e0 F ) = 21 (−F − e0 F e0 ) ↔ 12 (−F + Fb) = −T − F, Pn and with nabla calculus using ∇0 = j=1 ej ∂j that e0 (∇0 ∧ F ) = 12 ((e0 ∇0 )F − e0 F e0 (e0 ∇0 )) ↔ 12 (∇F + Fb∇) = dV F, ∇0 ∧ (e0 F ) = 1 (−(e0 ∇0 )F + e0 F0 (e0 ∇0 )) ↔ 1 (−∇F − Fb∇) = −dV F. 2
2
9.2. Dirac Equations in Physics
291
Similar calculations using the Riesz formula (3.3) prove the 4V representation for δW F = G. The 4-Dirac / operator in a general Euclidean or real inner product space, cannot be written as in (9.1). However, in the case of an even-dimensional Euclidean space with a complex structure given as in Example 5.1.5(i), we do have an invariant meaning of such exterior and interior derivative operators. Given a Euclidean space V of dimension n = 2m with an isometric complex structure J, consider the complex exterior algebra ∧V for the complex vector space V = (V, J), which comes with the corresponding Hermitian inner product (·, ·i. As in Example 5.1.5(i), the real linear map V → L(∧V) : v 7→ v y∗ (·) + v ∧ (·) gives a representation of the complex spinor space 4V / = ∧V. But the two terms induce separately the nabla operators Γ1 ψ := ∇ ∧ ψ
and
Γ2 ψ := ∇ y∗ ψ
/ = Γ1 + Γ2 . Fixing a complex ON-basis acting on spinor fields ψ : V → 4V / and D {ej }m for V and writing x for the real coordinates along ej and yj for the real j j=1 coordinates along Jej , we have, since {ej } ∪ {Jej } form a real ON-basis for V from Proposition 7.1.3, that Γ1 ψ =
m X
ej
∧
∂xj ψ +
j=1
and Γ2 ψ =
m X j=1
ej y∗ ∂xj ψ +
m X
iej
∧
∂yj ψ =
j=1
m X
ej
∧
∂zjc ψ
j=1
m m X X (iej ) y∗ ∂yj ψ = ej y∗ ∂zj ψ, j=1
j=1
since Jej = iej in V and (iej )y∗ w = −i(ej y∗ w) by sesquilinearity. Here we used the classical complex analysis operators ∂zj := ∂xj − i∂yj and ∂zjc := ∂xj + i∂yj . Since Γ∗1 = −Γ2 , one can develop a complex version of the theory of Hodge decomposition similar to the real theory in Chapter 10.
9.2
Dirac Equations in Physics
The aim of this section is to briefly review how Dirac equations appear in electromagnetic theory and quantum mechanics in physics. We model our universe by spacetime W with three space dimensions, as in special relativity. See Section 1.3. The unit of length is the meter [m]. Fixing a future-pointing time-like vector e0 with e20 = −1, we write V for the three-dimensional Euclidean space [e0 ]⊥ . We write the coordinate along e0 as x0 = ct,
Chapter 9. Dirac Wave Equations
292
where t is time measured in seconds [s] and c ≈ 2.998 · 108 [m/s] is the speed of light. Our discussion uses SI units. Out of the seven SI base units, we need the meter [m] for length, kilogram [kg] for mass, second [s] for time and ampere [A] for electric current. From this we have the SI derived units newton [N= kg·m/s2 ] for force, coulomb [C=A·s] for electric charge, volt [V= N·m/C] for electric potential, joule [J= Nm] for energy. We consider first Maxwell’s equations, which describe the time evolution of the electric and magnetic fields, which mediate the forces that electric charges in motion exert on each other. The charges that generate the electric and magnetic fields are described by a charge density and electric current density ρ(t, x) ∈ ∧0 V
and J(t, x) ∈ ∧1 V,
measured in units [C/m3 ] andR[A/m2 ] respectively. This means that a given domain D ⊂ V contains the charge D ρdx, and the electric current through a 2-surface R S in V is S hJ, ∗dyi, at time t. Here [dy] is the tangent plane to S and ∗dy is an infinitesimal vector normal to S, in the direction in which we measure the current. Maxwell’s four equations, which we discuss below, describe how ρ and J generate a vector field E(t, x) ∈ ∧1 V,
measured in units [N/C = V/m],
which is called the electric field, and a bivector field B(t, x) ∈ ∧2 V,
measured in units of tesla [T = N/(A · m)],
which we refer to as the magnetic field. The way we measure these fields is by placing a test charge with charge q0 at the point moving with velocity v0 ∈ ∧1 V . The electric and magnetic fields will then exert a force on this test charge given by the Lorentz force F = q0 E + q0 B x v0 . (9.2) Experiments show that the magnetic force is orthogonal to the velocity, and thus is described by a skew symmetric map. Recalling Proposition 4.2.3, this demonstrates that the magnetic field is a bivector field rather than a vector field. In classical vector notation, the magnetic field is described by the Hodge dual vector field ∗B, in which case the magnetic force is given by the vector product q0 v0 × (∗B). The three-dimensional exterior algebra ∧V = ∧0 V ⊕ ∧1 V ⊕ ∧2 V ⊕ ∧3 V provides a natural framework for expressing the four Maxwell equations, or more precisely eight scalar equations, for determining E ∈ ∧1 V and B ∈ ∧2 V from ρ ∈ ∧0 V and J ∈ ∧1 V . The constants of proportionality appearing are the permittivity of free space 0 ≈ 8.854 · 10−12 [C/(V· m)] and permeability of free space µ0 = 4π · 10−7 [V· s/(A· m)].
293
9.2. Dirac Equations in Physics
∧0 Gauss’s law for the electric field states that the flow of the electric field out Rthrough the boundary of a domain D is proportional to the charge Q = ρdx contained in the domain: D Z hE, ∗dyi = −1 0 Q. ∂D
By Stokes’s theorem, Gauss’s law is equivalent to the ∧0 V -valued differential equation 0 ∇ y E = ρ. In classical vector notation this reads 0 h∇, Ei = ρ. ∧1 The Amp`ere–Maxwell law states that Z Z Z hE, ∗dxi µ−1 = hJ, ∗dxi + ∂ hB, ∗dyi 0 t 0 S
S
∂S
for every 2-surface S. In the stationary case that ρ, J, E, and B are timeindependent, R it reduces to Amp`ere’s law, which shows that an electric curhJ, ∗dxi through S produces a magnetic field with circulation rent I := S R hB, ∗dyi = µ0 I. In the ∂S R nonstationary case, Maxwell added the necessary additional term 0 µ0 ∂t S hE, ∗dxi to the equation. By Stokes’s theorem, Amp`ere–Maxwell’s law is equivalent to the ∧1 V valued differential equation 0 ∂t E + µ−1 0 ∇ y B = −J. In classical vector notation this reads 0 ∂t E − µ−1 0 ∇ × (∗B) = −J. ∧2 Faraday’s law of induction states that a change of the integral of the magnetic field B over a 2-surface S induces an electric field around the boundary curve: Z Z hE, dyi = −∂t hB, dxi. ∂S
S
By Stokes’s theorem, Faraday’s law is equivalent to the ∧2 V -valued differential equation ∂t B + ∇ ∧ E = 0. In classical vector notation this reads ∂t (∗B) + ∇ × E = 0. ∧3 Gauss’s law for magnetic fields states that the integral of a magnetic field over the boundary of a domain D vanishes: Z hB, dyi = 0. ∂D
By Stokes’s theorem, the magnetic Gauss’s law is equivalent to the ∧3 V valued differential equation ∇ ∧ B = 0. In classical vector notation this reads h∇, ∗Bi = 0.
294
Chapter 9. Dirac Wave Equations
Figure 9.1: Maxwell’s equations for the electric vector field E and the magnetic bivector field B in ∧R3 . Since the electric and magnetic fields take values in the two different subspaces ∧1 V and ∧2 V of the exterior algebra, we can add them to obtain a sixdimensional total electromagnetic multivector field F . The most natural scaling is such that |F |2 is an energy density, with dimension [J/m3 ]. We set 1/2
−1/2
F := 0 E + µ0
B ∈ ∧1 V ⊕ ∧2 V.
Collecting and rescaling Maxwell’s equations, we have −1/2
∇ ∧ (µ0
B) = 0,
−1/2 1/2 c ∂t (µ0 B) + ∇ ∧ (0 E) 1/2 −1/2 c−1 ∂t (0 E) + ∇ y (µ0 B) 1/2 ∇ y (0 E) −1
= 0, 1/2
= −µ0 J, −1/2
= 0
ρ,
(9.3)
9.2. Dirac Equations in Physics
295
where c = (0 µ0 )−1/2 . Adding these four equations, we see that Maxwell’s equations are equivalent to the Dirac equation c−1 ∂t F + ∇ 4 F = G,
(9.4)
since Maxwell’s equations take values in the different homogeneous subspaces of −1/2 1/2 ∧V . Here G := 0 ρ − µ0 J is a ∧0 V ⊕ ∧1 V -valued multivector field, which we refer to as the electric four-current. From (9.4) it is clear that Maxwell’s equation is a wave Dirac equation for the ∧1 V ⊕ ∧2 V -valued electromagnetic field F . Example 9.2.1 (Static electromagnetic field). Assume that the sources ρ and J and the electromagnetic field are constant with respect to time, and that J is divergence-free. Then Maxwell’s equations reduce to the inhomogeneous Dirac equation ∇ 4 F = G, which by the Cauchy–Pompeiu formula from Theorem 8.1.8 has solution F (x) = Ψ(x) ∗ G(x) if G decays as x → ∞. This amounts to Z 1 y E(x) = ρ(x − y) 3 dy, 4π0 V |y| Z µ0 y B(x) = J(x − y) ∧ 3 dy. 4π V |y| Thus E is the Coulomb field from charge density ρ, and B is determined from J by the law of Biot–Savart. Exercise 9.2.2 (Pauli representation). Using an ON-basis {e1 , e2 , e3 } for V , write 1/2 ˜ 1 e1 + E ˜ 2 e2 + E ˜3 e3 , µ−1/2 B = B ˜1 e23 + B ˜2 e31 + B ˜3 e12 , µ1/2 J = J˜1 e1 + 0 E = E 0 0 −1/2 J˜2 e2 + J˜3 e3 , and 0 ρ = ρ˜. Represent the basis vectors {e1 , e2 , e3 } by the Pauli matrices from Example 3.4.19 and show that Maxwell’s equations become −1 ˜3 + iB ˜3 ˜ 1 − iE ˜2 + iB ˜1 + B ˜2 c ∂t + ∂3 ∂1 − i∂2 E E ˜1 + iE ˜2 + iB ˜1 − B ˜2 ˜ 3 − iB ˜3 ∂1 + i∂2 c−1 ∂t − ∂3 E −E ρ˜ − J˜3 −J˜1 + iJ˜2 = . ˜ ˜ −J1 − iJ2 ρ˜ + J˜3 Note that this representation requires that the components of the fields be realvalued, since we use a real algebra isomorphism 4R V ↔ C(2). For time-dependent electromagnetic fields we can obtain a spacetime Dirac formulation of Maxwell’s equation from Proposition 9.1.5. Namely, the electromagnetic field is really the spacetime bivector field 1/2
−1/2
FW := 0 e0 ∧ E + µ0
B ∈ ∧2 W,
solving the spacetime Dirac equation DW FW = −GW ,
(9.5)
Chapter 9. Dirac Wave Equations
296 −1/2
1/2
where GW := 0 ρe0 + µ0 J ∈ ∧1 W is the spacetime representation of the electric four-current. Since GW is a spacetime vector field and FW is a spacetime bivector field, Maxwell’s equations can equivalently be written as the system ( dW FW = 0, δW FW = −GW , by the mapping properties of DW = dW + δW . The difference between Maxwell’s equations and the Dirac equation is a constraint similar to the one described in Proposition 8.1.5. −1/2
1/2
Proposition 9.2.3 (Maxwell = Dirac + constraint). Let GW = 0 ρe0 + µ0 J ∈ ∧1 W . If FW is a ∧2 W -valued solution to (9.5), then ρ and J satisfy the continuity equation ∂t ρ + div J = 0. Conversely, if this continuity equation holds, then the multivector field FW solving the wave Dirac equation (9.5) described in Proposition 9.1.2 is ∧2 W -valued at all times, provided it is so at t = 0 with div E = ρ/0 and ∇ ∧ B = 0. Recall that the continuity equation ∂t ρ + div J = 0 expresses the fact that total charge is conserved. By Gauss’s theorem it shows that Z Z ∂t ρdx = − hJ, ∗dyi D
∂D
for every domain D ⊂ V . Proof. The necessity of the continuity equation follows from 2 δW GW = −δW FW = 0,
by the nilpotence of the spacetime interior derivative. For the converse, we investigate the proof of Proposition 9.1.2 and compute the ∧0 W and ∧4 W parts of FˆW = Fˆ . The ∧4 W part is −1/2 sin(|ξ|ct)
µ0
|ξ|
ˆ0 = 0, iξ ∧ B
since ∇ ∧ B0 = 0. The ∧0 W part is sin(|ξ|ct) e0 y (ξ y Fˆ0 ) |ξ| Z t 1/2 sin(|ξ|c(t − s)) −1/2 ρs − µ0 iξ y Jˆs ds +c cos(|ξ|c(t − s))ˆ 0 |ξ| 0 sin(|ξ|ct) 1/2 ˆ0 − −1/2 ρˆ0 ) = (0 iξ y E 0 |ξ| Z t sin(|ξ|c(t − s)) −1 −1/2 1/2 −c (c 0 ∂s ρˆs + µ0 iξ y Jˆs )ds, |ξ| 0
i
9.2. Dirac Equations in Physics
297
which vanishes, since div E0 = ρ0 /0 and ∂t ρ + div J = 0. This shows that FW is a homogeneous spacetime bivector field for all times. Example 9.2.4 (Lorentz transformation of E and B). From the spacetime representation FW of the electromagnetic field, we can find how the electric and magnetic fields transform under a change of inertial system. Consider two inertial observers O and O0 , with ON-bases {e0 , e1 , e2 , e3 } and {e00 , e01 , e02 , e30 } respectively. Assume that O sees O0 traveling in direction e3 at speed v. As in Example 4.4.1, the Lorentz boost that maps {ei } to {e0i } is T x = exp(φe03 /2)x exp(−φe03 /2),
tanh φ = v/c.
In ∧2 W we have the electromagnetic field 1/2
µ0 F = c−1 e0 4 E + B = c−1 e00 4 E 0 + B 0 , where E = E1 e1 + E2 e2 + E3 e3 and B = B1 e23 + B2 e31 + B3 e12 are the fields 0 +B20 e031 +B30 e012 are the measured by O and E 0 = E10 e10 +E20 e20 +E30 e30 and B 0 = B10 e23 0 fields measured by O . We now compare the two measurements by identifying the ˜ = E 0 e1 +E 0 e2 +E 0 e3 two bases as in the discussion above Example 4.6.5, letting E 1 2 3 0 0 0 ˜ and B = B1 e23 + B2 e31 + B3 e12 . Then ˜ + B) ˜ exp(−φe03 /2). c−1 e0 4 E + B = exp(φe03 /2)(c−1 e0 4 E Applying the isomorphism 4ev W ≈ 4V from Proposition 3.3.5, we have equivalently ˜ = exp(−φe3 /2)(c−1 E + B) exp(φe3 /2). ˜+B c−1 E Computing the action of x 7→ exp(−φe3 /2)x exp(φe3 /2) on e1 , e2 , e3 , e23 , e31 , e12 , we get p E10 = (E1 − vB2 )/ 1 − v 2 /c2 , p E20 = (E2 + vB1 )/ 1 − v 2 /c2 , E 0 = E , 3 3 p 0 B1 = (B1 + (v/c2 )B2 )/ 1 − v 2 /c2 , p B20 = (B2 − (v/c2 )E1 )/ 1 − v 2 /c2 , 0 B3 = B3 . From this we see that for speeds v comparable to the speed of light c, there is a significant mixing of E and B, which shows that indeed it is correct to speak of the electromagnetic field rather than electric and magnetic fields only, since the latter two depend on the inertial frame. We have seen that Maxwell’s equations can be written as a 4-Dirac wave equation DW FW = 0. However, the electromagnetic field FW is not a general spacetime multivector field, but a bivector field. This means that Maxwell’s equations are not identical to the 4-Dirac equation, but rather that we can embed
298
Chapter 9. Dirac Wave Equations
Maxwell’s equations in a Dirac equation. We show in the remaining sections of this chapter that this is a very useful technique, since in some respects, Dirac equations are better behaved than the Maxwell equations. An equation from physics that truly is a Dirac equation is Dirac’s original equation for the relativistic motion of spin-1/2 particles in quantum mechanics, such as electrons and quarks. With our notation this is a wave 4-Dirac / equation in physical spacetime with a lower-order mass term. Without any external potential, the free Dirac equation reads / = mcψ. (9.6) ~Dψ Here c is the speed of light and ~ = 6.626 · 10−34 [Js] is Planck’s constant. The parameter m is the mass of the particle, which in the case of the electron is m ≈ 9.109 · 10−31 [kg]. Dirac’s original approach was to look for a first-order differential equation that is a square root of the Klein–Gordon equation, that is, the wave equation with a mass term ~2 ψ = m2 c2 ψ,
(9.7)
which is obtained from the relativistic energy–momentum relation E 2 c−2 − p2 = m2 c2 by substituting E → i~∂t and p → −i~∇. Such a scalar first-order differential equation does not exist, but Dirac succeeded by allowing matrix coefficients. Having multivectors and spinors at our disposal, we already know that the 4/ Dirac equation (9.6) for spacetime spinor fields ψ : W → 4W has an invariant / geometric meaning. Exercise 9.2.5 (Matrix representation). Fix an ON-basis {e0 , e1 , e2 , e3 } for spacetime, and represent the dual basis {−e0 , e1 , e2 , e3 } by the imaginary Dirac matrices {iγ 0 , iγ 1 , iγ 2 , iγ 3 }, where γ k are Dirac’s gamma matrices as in Example 5.1.9. Show that Dirac’s equation reads ψ1 ψ1 ∂1 − i∂2 ∂0 0 ∂3 ψ i∂ −∂ ∂ ∂ 0 + 2 3 2 0 1 ψ2 i~ −∂3 ψ3 = mc ψ3 . −∂1 + i∂2 −∂0 0 ψ4 ψ4 −∂0 −∂1 − i∂2 ∂3 0 The physical interpretation of complex-valued wave functions ψ in quantum mechanics is that |ψ|2 represents a probability density for the position of the / , we require an inner particle. For the spinor-valued wave function ψ : W → 4W product on the spinor space 4V / . The following is a version of Proposition 5.3.1 for physical spacetime. Proposition 9.2.6 (Inner product). Let W be four-dimensional spacetime, with chosen future time direction fixed and complex spinor space 4W / . Then there exists a complex inner product (·, ·i on 4W / such that (ψ1 , v.ψ2 i = −(v.ψ1 , ψ2 i
9.2. Dirac Equations in Physics
299
for all ψ1 , ψ2 ∈ 4W / and v ∈ W , and −i(ψ, v.ψi > 0 / \ {0} and v ∈ Wt+ . If (·, ·i0 is any other such inner product, then for all ψ ∈ 4W there is a constant λ > 0 such that (ψ1 , ψ2 i0 = λ(ψ1 , ψ2 i for all ψ1 , ψ2 ∈ 4W / . Proof. The proof is analogous to that of Proposition 5.3.1. We look for a matrix M such that M ρ(v) = −ρ(v)∗ M, which exists, unique up to complex nonzero multiples, by Theorem 5.2.3, since (−ρ(v)∗ )2 = ρ(v 2 )∗ = hvi2 I. Using the representation ρ from Example 5.1.9, we see that we have M = λρ(e0 ), λ ∈ C \ {0}, where e0 is a fixed future-pointing time-like unit vector. For the duality to be an inner product, that is, symmetric, we must choose Re λ = 0, and to have −i(ψ, e0 .ψi > 0, we must have Im λ < 0. This shows uniqueness. Choosing λ = −i and v = e0 + v 0 , he0 , v 0 i = 0, we have −i(ψ, v.ψi = ψ ∗ (1 − ρ(e0 v 0 ))ψ > 0 if |v 0 | < 1, since ρ(e0 v 0 ) is |v 0 | times a C4 isometry. This completes the existence proof. This spacetime spinor inner product is used as follows. Given a wave function solving Dirac’s equation, a spinor field / ψ : W → 4W in spacetime, we define uniquely a vector field jp : W → ∧1 W by demanding hjp , vi = i(ψ, v.ψi for all v ∈ W . This exists by Proposition 1.2.3 and is referred to as the probability four-current. Fixing a future time direction e0 and writing jp = ρp e0 + c−1 Jp , Jp ∈ [e0 ]⊥ , it follows from the properties of (·, ·i that jp is a real vector field with time component ρp ≥ 0. This represents the probability density for the position of the particle. That Jp defines a probability current is clear from the continuity equation ∂t ρp + divV Jp = 0. This holds whenever ψ solves Dirac’s equation (9.6), since c−1 (∂t ρp + divV Jp ) = δW jp = −∂0 hjp , e0 i +
3 X
∂k hjp , ek i
1
=i
3 X 0
/ ψi = 0. / − i(Dψ, ∂k (ψ, e∗k .ψi = i(ψ, Dψi
Chapter 9. Dirac Wave Equations
300
Recall the main reflector from Definition 5.2.1, which for physical spacetime we choose as w4 = ie0123 ∈ 44 W. In physics ρ(w4 ) is referred to as the chiral operator, and spinors in its eigenspaces 4± W are called right- and left-handed spinors respectively. To obtain a Euclidean formulation of Dirac’s equation, we fix a future time direction e0 and rewrite (9.6) as a coupled system of Euclidean 4-Dirac / equations for the right- and left-handed components of the wave function. As in the discussion after Proposition 9.1.5, we obtain ( / + = −mψ (c−1 ∂t + D)ψ ˜ −, − −1 / = mψ ˜ +, (c ∂t − D)ψ +
/ =D / V is the 4-Dirac / / W , k = 1, 2, m / ↔4 ˜ := mc/~, and D where ψ ± (t, x) ∈ 4V operator for the Euclidean three-dimensional space V = [e0 ]⊥ . Exercise 9.2.7. Under the algebra isomorphism −
+
/ 4V / 2 3 (ψ + , ψ − ) ↔ ψ + + e0 ψ − ∈ 4 / W ⊕4 / W = 4W, show by uniqueness, with suitable normalization λ > 0, that the spacetime spinor inner product of ψ1 = ψ1+ + e0 ψ1− and ψ2 = ψ2+ + e0 ψ2− from Proposition 9.2.6 corresponds to i((ψ1+ , ψ2− i − (ψ1− , ψ2+ i), where (·, ·i denotes the Hermitian spinor inner product on 4V / . Using the Hermitian inner product (ψ1+ , ψ2+ i + (ψ1− , ψ2− i on 4V / 2 , we have the following Hilbert space result. Proposition 9.2.8 (Antiparticles and time evolution). Write Dirac’s equation as i~∂t ψ = H0 ψ, where / D m ˜ / 2 ) → L2 (V ; 4V : L2 (V ; 4V / 2 ). H0 := −i~c / −m ˜ −D Then the free Dirac Hamiltonian H0 has spectrum σ(H0 ) = (−∞, −mc2 ]∪[mc2 , ∞). We have an orthogonal splitting of L2 into spectral subspaces L2 (V ; 4V / 2 ) ⊕ L− / 2 ) = L+ / 2 ), 2 (V ; 4V 2 (V ; 4V where / 2) = L± 2 (V ; 4V
n −imψ ˜
/ ± iDψ
√
m ˜ 2 − ∆ψ
t
; ψ ∈ 4V /
o
are the spectral subspaces for the energy intervals [mc2 , ∞) and (−∞, −mc2 ] respectively. The solution to the inital value problem for the wave Dirac equation is m ˜ ψ(t, x) = (c−1 ∂t + H0 /(~c))Rct ψ(0, x),
ψ(t, ·) ∈ L2 (V ; 4V / 2 ),
9.2. Dirac Equations in Physics
301
m ˜ where Rct denotes the Klein–Gordon Riemann function from Corollary 6.2.3, acting component-wise.
/ 2 ), the parts Splitting the wave function ψ = ψ + +ψ − , where ψ ± ∈ L± 2 (V ; 4V − ψ and ψ of positive and negative energy describe a particle and an antiparticle respectively. Note that time evolution by H0 preserves the subspaces L± / 2 ). 2 (V ; 4V It follows from Corollary 6.2.3 that Dirac’s equation has finite propagation speed ≤ c. However, unlike the massless case in Proposition 9.1.2, the Huygens principle is not valid for Dirac’s equation in three spatial dimensions. Compare also this time evolution for Dirac’s equation to that for Schr¨ odinger’s equation in Example 6.3.6, where instead of finite propagation speed, we have the evolution given by an oscillatory quadratic exponential. +
Proof. Applying the Fourier transform in V , Dirac’s equation is turned into the ordinary differential equation iρ(ξ) ˜ m ∂t ψ = −c ψ, −m ˜ −iρ(ξ) p where ρ(ξ) ∈ L(4V / ). For the matrix we obtain eigenvalues ±i |ξ|2 + m ˜ 2 and p t eigenvectors −imψ ˜ −ρ(ξ)ψ ± |ξ|2 + m ˜ 2 ψ . Applying the inverse Fourier transform, this translates to the stated splitting. iρ(ξ) m ˜ To calculate the time evolution, we write j := √ 21 2 and ˜ |ξ| +m −m ˜ −iρ(ξ) note that j 2 = −I. It follows from Exercise 1.1.5 that p p p ˜ 2 j) = cos(ct |ξ|2 + m ˜ 2 ) − j sin(ct |ξ|2 + m ˜ 2 ), exp(−c |ξ|2 + m which under the Fourier transform is equivalent to the stated evolution formula. Example 9.2.9 (Foldy–Wouthuysen transformation). The particle and antiparticle splitting of a solution ψ : W → 4W / to Dirac’s equation (9.6) is independent of the inertial frame for W . Indeed, since H0 ψ = i~∂t ψ, we have p ic−1 ∂t ψ = ± m ˜ 2 − ∆ψ,
(9.8)
with sign +1 for particles, that is, ψ − = 0, and sign −1 for antiparticles, that is, ψ + = 0. Note that (9.6) is a differential equation that is a square root of the Klein–Gordon equation (9.7), and that (9.8) are also square roots of (9.7) although not differential equations. Using the spacetime Fourier transform, we see that the Fourier transforms of wave functions, in the distributional sense, for particles and antiparticles are supported on the two branches of the hyperboloid hξi2 + m ˜ 2 = 0. In particular, this shows the claimed relativistic invariance.
Chapter 9. Dirac Wave Equations
302
Exercise 9.2.10 (Charge conjugation). Consider the spinor space 4W / of physical spacetime, with spinor inner product as in Proposition 9.2.6. Show by generalizing the Euclidean theory from Section 5.3 that there exists an antilinear spinor / conjugation 4W / → 4W : ψ 7→ ψ † such that †
(v.ψ) = v.ψ † ,
v ∈ W, ψ ∈ 4W, /
†
and (ψ † ) = ψ, ψ ∈ 4W / , and that this is unique modulo a complex factor |λ| = 1. Show further that in the representation from Example 5.1.9, we can c choose ψ † = (ρ(e2 )ψ) and that this spinor conjugation is compatible with the spinor inner product as in Lemma 5.3.4. For Dirac’s equation, the operation ψ 7→ ψ † represents charge conjugation in physics, an operation that switches particles and antiparticles, which is readily † seen from (9.8). Mathematically, note that since (ψ † ) = ψ, the spinor conjugation yields a real structure on the spinor space of physical spacetime. This agrees with the fact that with our sign convention, the Clifford algebra 4W is isomorphic to R(4) by Theorem 3.4.13. Recall that a classical particle in an electromagnetic field is acted upon by the Lorentz force (9.2). For a quantum spin-1/2 particle in an electromagnetic field, the Dirac equation is modified by adding a source term and reads / = mcψ + iqAW .ψ. ~Dψ
(9.9)
The vector field AW : W → ∧1 W is a four-potential of the electromagnetic field FW = dW AW and q is the charge of the particle, which in case of the electron is q ≈ −1.602 · 10−19 [C]. A geometric interpretation of (9.9) is that AW provides Christoffel symbols for a covariant derivative as in Definition 11.1.5. The Faraday and magnetic Gauss laws show that the electromagnetic field FW is a closed spacetime bivector field, that is, dW FW = 0. Poincar´e’s theorem (Theorem 7.5.2) shows that locally this is equivalent to the existence of a spacetime vector field AW such that FW = dW AW . As we have seen, at least in the Euclidean setting, in Section 7.6, globally there can be topological obstructions preventing every closed field from being exact. And indeed, the famous Aharonov–Bohm experiment shows that in fact, FW being an exact bivector field is the correct −1/2 1/2 physical law, and not dW FW = 0. Writing AW = 0 Φe0 + µ0 A to obtain a Euclidean expression for potential, where Φ : W → R and A : W → V are scalar and vector potentials of the electromagnetic field, we have ( E = −∇Φ − ∂t A, B = ∇ ∧ A. Returning to (9.9), we note that a solution ψ still yields a probability four-current jp satisfying the continuity equation, as a consequence of AW being a real spacetime vector field. As in the free case, (9.9) describes the time evolution of the wave
9.3. Time-Harmonic Waves
303
functions for a particle and antiparticle pair. What is, however, not immediately clear is how the nonuniqueness of AW influences the solution ψ. To explain this, consider an exact spacetime bivector field FW : W → 42 W representing the electromagnetic field, and let AW , A˜W : W → 41 W be two different vector potentials, so that FW = dW AW = dW A˜W . Another application of Poincar´e’s theorem (Theorem 7.5.2) shows that locally, the closed vector field A˜W − AW is exact, so that A˜W = AW + ∇U, for some scalar potential U : W → 40 W = R. From the product rule, we deduce / = mcψ + iqAW .ψ if and only if that ~Dψ / iqU/~ ψ) = (mc + iq A˜W .)(eiqU/~ ψ). ~D(e Therefore ψ˜ := eiqU/~ ψ is the wave function of the particle in the electromagnetic ˜ = (ψ, v.ψi by sesquilinearity, the ˜ v.ψi field with potential A˜W . However, since (ψ, wave functions for the two choices of electromagnetic four-potential yield the same probability four-current jp . Therefore the physical effects are independent of the choice of electromagnetic four-potential AW .
9.3
Time-Harmonic Waves
Let W be a spacetime and fix a future pointing time-like unit vector e0 , and let V = [e0 ]⊥ . For the remainder of this chapter, we study time-harmonic solutions to the wave 4-Dirac equation DW F = DV F −e0 c−1 ∂t F = 0. We use the complexified spacetime Clifford algebra 4Wc , where the component functions Fs (x) belong to C. With a representation of the time-harmonic field as in Example 1.5.2, the Dirac equation reads (D + ike0 )F (x) = 0, with a wave number k := ω/c ∈ C. This is now an elliptic equation with a zeroorder term ike0 added to D = DV , rather than a hyperbolic equation. Since even the inner product on the real algebra 4W is indefinite, we require the following modified Hermitian inner product for the analysis and estimates to come. Definition 9.3.1 (Hermitian inner product). With V = [e0 ]⊥ ⊂ W as above, define the auxiliary inner product b1 e0−1 , w2 i, hw1 , w2 iV := he0 w
w1 , w2 ∈ 4W.
We complexify both the standard indefinite inner product h·, ·i on 4W and h·, ·iV to sesquilinear inner products (·, ·i and (·, ·iV on 4Wc respectively.
Chapter 9. Dirac Wave Equations
304
= u − e0 v. It We note that if w = u + e0 v, with u, v ∈ 4V , then e0 we b −1 0 follows that (·, ·iV is a Hermitian inner product in which the induced basis {es } is an ON-basis for 4Wc whenever {ej }nj=1 is an ON-basis for V . We use the L2 R norm kf k2L2 = (f (x), f (x)iV dx of complex spacetime multivector fields f . The aim of this section is to generalize Section 8.3 from the static case k = 0 to k ∈ C. Note that (9.10) (D ± ike0 )2 = ∆ + k 2 . Definition 9.3.2 (Fundamental solution). Let Φk be the fundamental solution to the Helmholtz equation from Corollary 6.2.4 for Im k ≥ 0. Define fundamental solutions Ψ± k = (D ± ike0 )Φk to the Dirac operators D ± ike0 . Note the relation + Ψ− k (x) = −Ψk (−x) − between these two families of fundamental solutions, and that Ψ+ 0 = Ψ0 equals ± Ψ from Definition 8.1.6. It is clear from Corollary 6.2.4 that Ψk in general can (1) be expressed in terms of Hankel functions Hν , which in odd dimensions are elementary functions involving the exponential function eik|x| .
Exercise 9.3.3 (Asymptotics). Show that in three dimensions, eik|x| x ik x ± Ψk (x) = − . ± e0 |x|3 |x| |x| 4π ± Note that Ψ± k ≈ Ψ near x = 0, while Ψk ∈ 4Wc is almost in the direction of the x light-like vector |x| ± e0 near x = ∞. Show that in dimension dim V = n ≥ 2 we have −(n−2) Ψ± ), k (x) − Ψ(x) = O(|x|
as x → 0,
−(n−1) as well as ∇ ⊗ (Ψ± ) as x → 0, and that k (x) − Ψ(x)) = O(|x| −ik|x| Ψ± k (x)e
−
π n−1 k (n−1)/2 1 −i 2 2 2e 2π
x |x| ± e0 |x|(n−1)/2
= O(|x|−(n+1)/2 ),
as x → ∞.
Theorem 9.3.4. Let D ⊂ V be a bounded C 1 -domain. If F : D → 4Wc solves (D + ike0 )F = 0 in D and is continuous up to ∂D, then Z Ψ− F (x) = for all x ∈ D. (9.11) k (y − x)ν(y)F (y) dy, ∂D
Note that since Ψ (x) = −Ψ+ k (−x), we can write the reproducing formula equivalently as Z F (x) = − Ψ+ k (x − y)ν(y)F (y) dy. −
∂D
9.3. Time-Harmonic Waves
305
Proof. The proof is analogous to that of Theorem 8.1.8. We define the linear 1-form D \ {x} × V → 4Wc : (y, v) 7→ θ(y, v) := Ψ− k (y − x)vF (y). For y 6= x, its exterior derivative is θ(y, ˙ ∇) = =
n X
∂yi Ψ− k (y − x) 4 ei 4 F (y)
i=1 (Ψ− k (y˙
− x) 4 ∇) 4 F (y) + Ψ− ˙ k (y − x) 4 (∇ 4 F (y)).
Since DF = −ike0 F and ˙ 4 ∇ = (∇ − ike0 ) 4 Φ− Ψk− (x) ˙ 4∇ k (x) ˙ − ike0 )2 + (∇ − ike0 ) 4 ike0 ) = Ψ− = Φ− k (x)((∇ k (x) 4 ike0 , we obtain θ(y, ˙ ∇) = 0. Applying the Stokes formula on the domain D := D \ B(x, ) and using the asymptotics of Ψ− near the origin from Exercise 9.3.3, the rest of the proof follows as for Theorem 8.1.8. It is essential in Theorem 9.3.4 that the domain D = D+ is bounded. In the exterior domain D− = V \ D, we need appropriate decay of F at ∞. When k 6= 0, this takes the form of a radiation condition as follows. Definition 9.3.5 (Radiating fields). Let F be a multivector field that solves (D + ike0 )F = 0 in D− . We say that F radiates at ∞ if Z Ψ− lim k (y − x)ν(y)F (y) dy = 0, R→∞
|y|=R
for every x ∈ D− . Note that by applying Theorem 9.3.4 to the annulus R1 < |x| < R2 , the limit is trivial since the integrals are constant for R > |x|. We need an explicit description of this radiation condition. Note that x ( |x| + e0 )2 = 0.
Proposition 9.3.6 (Radiation conditions). Let F be a multivector field that solves (D + ike0 )F = 0 in D− and is continuous up to ∂D, and assume that Im k ≥ 0 and k 6= 0. If x ( |x| + e0 )F = o(|x|−(n−1)/2 e(Im k)|x| ) as x → ∞, then F radiates. Conversely, if F radiates, then Z F (x) = Ψ+ k (x − y)ν(y)F (y) dy, ∂D
(9.12)
Chapter 9. Dirac Wave Equations
306 for all x ∈ D− . In particular, F = O(|x|−(n−1)/2 e−(Im k)|x| )
and
x ( |x| + e0 )F = O(|x|−(n+1)/2 e−(Im k)|x| )
as x → ∞. Not only does this give an explicit description of the radiation condition, but it also bootstraps it in that the necessary condition is stronger than the sufficient condition. x Proof. Assuming the decay condition on ( |x| + e0 )F , it suffices to prove that Z |F |2V dx = O(e2Im kR ) (9.13) |x|=R
as $R \to \infty$. Indeed, the Cauchy–Schwarz inequality and the asymptotics for $\Psi_k^-$ from Exercise 9.3.3 then show that $F$ radiates. To estimate $|F|_V$, we note that

$$|(\tfrac{x}{|x|} + e_0)F|_V^2 = \langle F, (\tfrac{x}{|x|} - e_0)(\tfrac{x}{|x|} + e_0)F\rangle_V = 2|F|_V^2 + 2\langle F, \tfrac{x}{|x|}e_0F\rangle_V.$$

Applying Stokes's theorem for bodies on the domain $D_R^- := \{x \in D^- ;\ |x| < R\}$, we obtain

$$\int_{|x|=R} \langle F, \tfrac{x}{|x|}e_0F\rangle_V\,dx = \int_{\partial D} \langle F, \nu e_0F\rangle_V\,dx + \int_{D_R^-} \big(\langle -ike_0F, e_0F\rangle_V + \langle F, -e_0(-ike_0F)\rangle_V\big)\,dx.$$
In total, this shows that

$$\int_{|x|=R} |F|_V^2\,dx = \tfrac{1}{2}\int_{|x|=R} |(\tfrac{x}{|x|} + e_0)F|_V^2\,dx - \int_{\partial D} \langle F, \nu e_0F\rangle_V\,dx - 2\operatorname{Im} k\int_{D_R^-} |F|_V^2\,dx,$$

from which (9.13) follows by the hypothesis, since $\partial D$ is independent of $R$ and since $\operatorname{Im} k \geq 0$.

The converse is an immediate consequence of Theorem 9.3.4 applied in $D_R^-$, and of the asymptotics for $\Psi_k^-$ from Exercise 9.3.3. $\square$

There are two important applications of the Dirac equation $(D + ike_0)F = 0$ to classical differential equations, namely time-harmonic acoustic and electromagnetic waves.

Example 9.3.7 (Helmholtz's equation). For a scalar function $u$, define $F = \nabla u + ike_0u$. Then $u$ solves the Helmholtz equation $\Delta u + k^2u = 0$ from Example 6.3.4 if and only if $F$ solves the Dirac equation $DF + ike_0F = 0$. However, note that $F$ is not a general solution to this equation: it is a vector field, $F(x) \in \wedge^1W_c$.
9.3. Time-Harmonic Waves
To investigate the reproducing formula (9.11) for this vector field $F$, we evaluate the time-like and space-like parts of the equation, and get

$$u(x) = \int_{\partial D} \big(\partial_\nu\Phi_k(y-x)u(y) - \Phi_k(y-x)\partial_\nu u(y)\big)\,dy, \tag{9.14}$$

$$\nabla u(x) = \int_{\partial D} \big(\nabla\Phi_k(y-x) \mathbin{\triangle} \nu(y) \mathbin{\triangle} \nabla u(y) + k^2\Phi_k(y-x)u(y)\nu(y)\big)\,dy, \tag{9.15}$$
for $x \in D$, where $\partial_\nu$ denotes the derivative in the normal direction. Equation (9.14) we recognise as the Green second identity for solutions to the Helmholtz equation, whereas (9.15) is an analogue of this for the gradient. This latter equation can be further refined by expanding the triple Clifford product as

$$\nabla\Phi_k \mathbin{\triangle} \nu \mathbin{\triangle} \nabla u = (\partial_\nu\Phi_k)\nabla u + \nabla\Phi_k(\partial_\nu u) - \langle\nabla\Phi_k, \nabla u\rangle\nu + \nabla\Phi_k \wedge \nu \wedge \nabla u.$$

Evaluating the vector part of (9.15), we obtain

$$\nabla u(x) = \int_{\partial D} \big(\partial_\nu\Phi_k(y-x)\nabla u(y) + \nabla\Phi_k(y-x)\partial_\nu u(y) - \langle\nabla\Phi_k(y-x), \nabla u(y)\rangle\nu(y) + k^2\Phi_k(y-x)u(y)\nu(y)\big)\,dy, \quad x \in D.$$
For solutions to the Helmholtz equation $\Delta u + k^2u = 0$, the classical decay condition at $\infty$ is the Sommerfeld radiation condition

$$\partial_ru - iku = o(|x|^{-(n-1)/2}e^{(\operatorname{Im} k)|x|}),$$

with $\partial_r$ denoting the radial derivative. To see its relation to the radiation condition for $D + ike_0$, we compute

$$(\tfrac{x}{|x|} + e_0)F = (\tfrac{x}{|x|} + e_0)(\nabla u + ike_0u) = (1 + e_0 \wedge \tfrac{x}{|x|})(\partial_ru - iku) + (\tfrac{x}{|x|} + e_0) \wedge \nabla_Su,$$

where $\nabla_Su := \tfrac{x}{|x|} \lrcorner (\tfrac{x}{|x|} \wedge \nabla u)$ is the angular derivative. By considering the scalar part of this identity, we see that the Dirac radiation condition entails the Sommerfeld radiation condition. In fact, the two conditions are equivalent. To see this, we can argue similarly to the proof of Proposition 9.3.6 to show that Green's second identity (9.14) holds for the exterior domain $D^-$. This will yield an estimate on $\nabla_Su$, given the Sommerfeld radiation condition.
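As a concrete sanity check (not from the book), the outgoing spherical wave $u(r) = e^{ikr}/r$, a fundamental solution of the Helmholtz equation in three dimensions, satisfies the Sommerfeld condition: $\partial_ru - iku$ decays one power of $r$ faster than $u$ itself, matching $o(r^{-(n-1)/2})$ for $n = 3$. A minimal numerical sketch, with an illustrative real wave number $k = 2$:

```python
import numpy as np

# Outgoing fundamental solution of Helmholtz in R^3 (up to a constant):
# u(r) = exp(i*k*r)/r.  Analytically, du/dr - i*k*u = -exp(i*k*r)/r**2,
# so the Sommerfeld quantity decays like r^(-2), one order faster than
# u itself, which decays like r^(-1) = r^(-(n-1)/2) for n = 3.
k = 2.0

def u(r):
    return np.exp(1j * k * r) / r

def radial_derivative(f, r, h=1e-6):
    # Central finite difference for the radial derivative.
    return (f(r + h) - f(r - h)) / (2 * h)

for r in [10.0, 100.0, 1000.0]:
    s = radial_derivative(u, r) - 1j * k * u(r)
    assert abs(abs(u(r)) * r - 1.0) < 1e-6   # |u| ~ 1/r
    assert abs(s) * r**2 < 1.1               # |du/dr - i*k*u| ~ 1/r^2
```

The exponential factors $e^{(\operatorname{Im}k)|x|}$ in the stated conditions are trivial here, since $k$ is real.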
Example 9.3.8 (Time-harmonic Maxwell's equations). Consider a time-harmonic electromagnetic wave $F$ in a spacetime with three space dimensions. As in Section 9.2, we have $F = \epsilon_0^{1/2}e_0 \wedge E + \mu_0^{-1/2}B \in \wedge^2W_c$ solving $(D + ike_0)F = 0$, with wave number $k := \omega/c = \omega\sqrt{\epsilon_0\mu_0}$. To investigate the reproducing formula (9.11) for this bivector field $F$, we evaluate the time-like and space-like bivector parts of the equation, and obtain two classical equations known as the Stratton–Chu formulas:

$$E(x) = \nabla \times \int_{\partial U} \Phi_k(x-y)\,\nu(y) \times E(y)\,dy - \nabla\int_{\partial U} \Phi_k(x-y)\,\nu(y) \cdot E(y)\,dy + ikc\int_{\partial U} \Phi_k(x-y)\,\nu(y) \times (*B)(y)\,dy,$$

$$*B(x) = \nabla \times \int_{\partial U} \Phi_k(x-y)\,\nu(y) \times (*B)(y)\,dy - \nabla\int_{\partial U} \Phi_k(x-y)\,\nu(y) \cdot (*B)(y)\,dy - (ik/c)\int_{\partial U} \Phi_k(x-y)\,\nu(y) \times E(y)\,dy.$$
For solutions to the time-harmonic Maxwell's equations, the classical decay condition at $\infty$ is the Silver–Müller radiation condition

$$\begin{cases} \tfrac{x}{|x|} \times E(x) - c(*B)(x) = o(|x|^{-(n-1)/2}e^{(\operatorname{Im} k)|x|}), \\ c\,\tfrac{x}{|x|} \times (*B)(x) + E(x) = o(|x|^{-(n-1)/2}e^{(\operatorname{Im} k)|x|}). \end{cases}$$
Since

$$(\tfrac{x}{|x|} + e_0)(e_0E + cB) = e_0\big(cB - \tfrac{x}{|x|}E\big) + \big(c\tfrac{x}{|x|}B - E\big),$$

we see that the Dirac radiation condition for the electromagnetic field is equivalent to the Silver–Müller radiation condition. Note that both radiation conditions also give decay of the radial parts of the vector fields $E$ and $*B$.

Given the Cauchy reproducing formulas for $D + ike_0$, we can extend the theory of Hardy subspaces from Section 8.3 to the case $k \neq 0$. Acting on functions $h : \partial D \to \wedge W_c$, we define traces of Cauchy integrals

$$E_k^+h(x) := \lim_{z\to x,\ z\in D^+} \int_{\partial D} \Psi_k^-(y-z)\nu(y)h(y)\,dy, \quad x \in \partial D,$$

$$E_k^-h(x) := -\lim_{z\to x,\ z\in D^-} \int_{\partial D} \Psi_k^-(y-z)\nu(y)h(y)\,dy, \quad x \in \partial D,$$
and the principal value Cauchy integral

$$E_kh(x) := 2\lim_{\epsilon\to 0^+} \int_{\partial D\setminus B(x;\epsilon)} \Psi_k^-(y-x)\nu(y)h(y)\,dy, \quad x \in \partial D,$$

where the factor $2$ makes $E_k^\pm = \tfrac{1}{2}(I \pm E_k)$ below.
As in the static case $k = 0$, we limit ourselves to proving splittings of Hölder spaces of multivector fields into Hardy subspaces.
Theorem 9.3.9 (Hardy wave subspace splitting). Let $D = D^+ \subset V$ be a bounded $C^1$ domain, with exterior domain $D^- = V \setminus \overline{D}$, and let $\operatorname{Im} k \geq 0$. Consider the function space $C^\alpha = C^\alpha(\partial D; \wedge W_c)$, for fixed regularity $0 < \alpha < 1$. Then the operators $E_k^+$, $E_k^-$, and $E_k$ are well defined and bounded on $C^\alpha(\partial D)$. The operators $E_k^\pm$ are complementary projections, with $E_k^\pm = \tfrac{1}{2}(I \pm E_k)$, and they split $C^\alpha(\partial D)$ into Hardy subspaces

$$C^\alpha(\partial D) = E_k^+C^\alpha \oplus E_k^-C^\alpha.$$

There is a one-to-one correspondence, furnished by the Cauchy integral (9.11) and the trace map, between fields in the interior Hardy subspace $E_k^+C^\alpha$ and fields in $D^+$ solving $DF + ike_0F = 0$ that are Hölder continuous up to $\partial D$. Likewise, there is a one-to-one correspondence, furnished by the Cauchy integral (9.12) and the trace map, between fields in the exterior Hardy subspace $E_k^-C^\alpha$ and fields in $D^-$ solving $DF + ike_0F = 0$, Hölder continuous up to $\partial D$ and radiating at $\infty$.

Proof. Define the operator

$$R_kh(x) = \int_{\partial D} \big(\Psi_k^-(y-x) - \Psi(y-x)\big)\nu(y)h(y)\,dy, \quad x \in V.$$
By the asymptotics at $x = 0$ from Exercise 9.3.3, $R_kh(x)$ is a well-defined convergent integral for all $x \in V$. Furthermore, by differentiating under the integral sign, we have

$$|\partial_jR_kh(x)| \lesssim \|h\|_\infty \int_{\partial D} \frac{dy}{|y-x|^{n-1}} \lesssim \|h\|_\infty \ln(1/\operatorname{dist}(x, \partial D)), \tag{9.16}$$

for $0 < \operatorname{dist}(x, \partial D) < 1/2$ and $j = 1, \ldots, n$, where $\|h\|_\infty := \sup_{\partial D}|h|$. Integrating (9.16) and (8.6) as in Proposition 8.3.3, it follows that $E_k^\pm$ are bounded on $C^\alpha$, since $E_k^\pm = E^\pm \pm R_k$. Note that integrating (9.16) in fact shows that $R_kh$ is Hölder continuous across $\partial D$. Therefore it follows from Proposition 8.3.4 that

$$E_k^\pm h = E^\pm h \pm R_kh = \tfrac{1}{2}h \pm \tfrac{1}{2}(Eh + 2R_kh) = \tfrac{1}{2}(h \pm E_kh)$$

and that $E_k = E + 2R_k$ is a bounded operator on $C^\alpha$. As in Theorem 8.3.6, we conclude that $E_k^+ + E_k^- = I$ and that $E_k^\pm$ are projections, by Theorem 9.3.4 and Proposition 9.3.6 respectively. This proves the splitting into Hardy subspaces for $D + ike_0$. $\square$
9.4 Boundary Value Problems
For the remainder of this chapter, we study boundary value problems (BVPs) for Dirac operators, where the problem is to find a solution $F$ to

$$DF(x) + ike_0F(x) = 0$$

in a domain $D$, satisfying a suitable condition on the trace $F|_{\partial D}$. To make the problem precise, one needs to state assumptions on $\partial D$: How smooth is it? Is it bounded or unbounded? We also need to specify the space of functions on $\partial D$ in which we consider $F|_{\partial D}$, and in what sense the boundary trace $F|_{\partial D}$ is meant. To start with, we postpone these details, and assume only that we are given a Banach space $\mathcal{H}$ of functions on $\partial D$. A concrete example is $\mathcal{H} = C^\alpha(\partial D)$ from Theorem 9.3.9. We assume that the Cauchy integral operator $E_k$ acts as a bounded operator on $\mathcal{H}$, and we recall that $E_k$ is a reflection operator, $E_k^2 = I$, which induces a splitting of $\mathcal{H}$ into Hardy wave subspaces. Solutions $F$ to $DF + ike_0F = 0$ in $D = D^+$ are in one-to-one correspondence with $f = F|_{\partial D}$ in $E_k^+\mathcal{H}$. A formulation of a Dirac BVP is

$$\begin{cases} DF + ike_0F = 0 & \text{in } D, \\ Tf = g & \text{on } \partial D. \end{cases}$$

Here $T : \mathcal{H} \to \mathcal{Y}$ is a given bounded linear operator onto an auxiliary Banach function space $\mathcal{Y}$, which contains the boundary datum $g$ of the BVP. In such an operator formulation, well-posedness of the BVP means that the restricted map

$$T : E_k^+\mathcal{H} \to \mathcal{Y} \tag{9.17}$$

is an isomorphism. Indeed, if so, then for every datum $g \in \mathcal{Y}$ we have a unique solution $f \in E_k^+\mathcal{H}$, or equivalently a solution $F$ to $DF + ike_0F = 0$ in $D$, which depends continuously on $g$. The main goal in studying BVPs is to prove such well-posedness. Almost as good is to prove well-posedness in the Fredholm sense, meaning that $T$ is a Fredholm map. In this case, $g$ needs to satisfy a finite number of linear constraints for $f$ to exist, and $f$ is unique only modulo a finite-dimensional subspace.

Proposition 9.4.1. Let $T : \mathcal{H} \to \mathcal{Y}$ be a surjective bounded linear operator. Then the restriction $T : E_k^+\mathcal{H} \to \mathcal{Y}$ is an isomorphism if and only if we have a splitting $E_k^+\mathcal{H} \oplus \mathsf{N}(T) = \mathcal{H}$.

Proof. If $T : E_k^+\mathcal{H} \to \mathcal{Y}$ is an isomorphism, denote its inverse by $T_0 : \mathcal{Y} \to E_k^+\mathcal{H}$. Then $P := T_0T : \mathcal{H} \to \mathcal{H}$ is a projection with null space $\mathsf{N}(T)$ and range $E_k^+\mathcal{H}$, which proves the splitting. Conversely, if we have a splitting $E_k^+\mathcal{H} \oplus \mathsf{N}(T) = \mathcal{H}$, then clearly $T : E_k^+\mathcal{H} \to \mathcal{Y}$ is injective and surjective. $\square$

Without much loss of generality, we assume from now on that $T$ is a bounded projection on $\mathcal{H}$ with range $\mathcal{Y} \subset \mathcal{H}$. We consider the following abstract formulation of the BVP, in terms of two bounded reflection operators $A$ and $B$ on $\mathcal{H}$:

$$A^2 = I \quad \text{and} \quad B^2 = I.$$
The operator $A$ plays the role of the Cauchy integral $E_k$, so that $A^+ = \tfrac{1}{2}(I + A)$ projects onto traces of solutions to the differential equation in $D^+$, and $A^- = \tfrac{1}{2}(I - A)$ projects onto traces of solutions to the differential equation in $D^-$, with appropriate decay at infinity. The operator $B$ encodes two complementary boundary conditions: either $T = B^+ = \tfrac{1}{2}(I + B)$ or $T = B^- = \tfrac{1}{2}(I - B)$ can be used to define boundary conditions. Note that we have null spaces $\mathsf{N}(B^+) = B^-\mathcal{H}$ and $\mathsf{N}(B^-) = B^+\mathcal{H}$. We note that the algebra for each of the operators $A$ and $B$ is similar to that of $E_k$ in Theorem 9.3.9. We have two different splittings of $\mathcal{H}$:

$$\mathcal{H} = A^+\mathcal{H} \oplus A^-\mathcal{H} \quad \text{and} \quad \mathcal{H} = B^+\mathcal{H} \oplus B^-\mathcal{H},$$

and $A = A^+ - A^-$ and $B = B^+ - B^-$. The core problem in the study of BVPs is to understand the geometry between, on the one hand, the subspaces $A^\pm\mathcal{H}$ related to the differential equation and, on the other hand, the subspaces $B^\pm\mathcal{H}$ related to the boundary conditions.

Example 9.4.2 (BVP = operator $\wedge\mathbb{R}^2$). The algebra of two reflection operators $A$ and $B$ can be viewed as an operator version of the Clifford algebra $\wedge\mathbb{R}^2$ for the Euclidean plane $\mathbb{R}^2$. Indeed, consider two unit vectors $a, b \in V$. Since $a^2 = b^2 = 1$ in $\wedge\mathbb{R}^2$, we have here a very simple example of an abstract BVP. The geometry of $a$ and $b$ is described by the angle $\phi$ between the vectors. We recall that this angle can be calculated from the anticommutator

$$\tfrac{1}{2}(ab + ba) = \cos\phi,$$

or from the exponential $ab = e^{\phi j}$, where $j$ is the unit bivector with the orientation of $a \wedge b$.

Definition 9.4.3 (Well-posedness). Let $A, B : \mathcal{H} \to \mathcal{H}$ be two reflection operators on a Banach space $\mathcal{H}$. Define the cosine operator

$$\tfrac{1}{2}(AB + BA)$$

and the rotation operators

$$AB \quad \text{and} \quad BA = (AB)^{-1}.$$

We say that the $AB$ boundary value problems are well posed (in the Fredholm sense) if the four restricted projections $B^\pm : A^\pm\mathcal{H} \to B^\pm\mathcal{H}$ are all isomorphisms (Fredholm operators).

Exercise 9.4.4 (Simplest abstract BVP). Let $\mathcal{H} = \mathbb{C}^2$ and consider the two orthogonal reflection operators

$$A = \begin{pmatrix} \cos(2\alpha) & \sin(2\alpha) \\ \sin(2\alpha) & -\cos(2\alpha) \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$

for some $0 \leq \alpha \leq \pi/2$. Compute the cosine and rotation operators, and show that the $AB$ BVPs are well posed if and only if $0 < \alpha < \pi/2$. Show that we have spectra $\sigma(\tfrac{1}{2}(AB + BA)) = \{\cos(2\alpha)\}$ and $\sigma(AB) = \{e^{i2\alpha}, e^{-i2\alpha}\}$, and that the $AB$ BVPs fail to be well posed exactly when these spectra hit $\{+1, -1\}$.
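Exercise 9.4.4 can also be checked numerically. The following sketch (an illustration, with a sample angle $\alpha = 0.3$ chosen here) verifies that $A$ and $B$ are reflections, that the cosine operator is the scalar $\cos(2\alpha)$ times the identity, and that the spectrum of $AB$ is $\{e^{\pm i2\alpha}\}$, staying away from $\{+1,-1\}$ for $0 < \alpha < \pi/2$:

```python
import numpy as np

alpha = 0.3  # sample angle with 0 < alpha < pi/2
c, s = np.cos(2 * alpha), np.sin(2 * alpha)
A = np.array([[c, s], [s, -c]])
B = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)

# Both are reflection operators: A^2 = B^2 = I.
assert np.allclose(A @ A, I) and np.allclose(B @ B, I)

# The cosine operator is cos(2*alpha) times the identity.
C = (A @ B + B @ A) / 2
assert np.allclose(C, np.cos(2 * alpha) * I)

# The rotation operator AB has spectrum {exp(2i*alpha), exp(-2i*alpha)}.
eig = np.sort_complex(np.linalg.eigvals(A @ B))
expected = np.sort_complex(np.array([np.exp(2j * alpha), np.exp(-2j * alpha)]))
assert np.allclose(eig, expected)

# Since these spectra avoid {+1, -1}, both I + AB and I - AB are invertible.
assert np.linalg.matrix_rank(I + A @ B) == 2
assert np.linalg.matrix_rank(I - A @ B) == 2
```

At $\alpha = 0$ or $\alpha = \pi/2$ the last two assertions fail, which is exactly the failure of well-posedness in the exercise.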
Chapter 9. Dirac Wave Equations
Figure 9.2: The two splittings encoding an abstract BVP, with associated reflection operators.

For two general reflection operators $A$ and $B$, the associated cosine and rotation operators each contain the necessary information to conclude well-posedness of the $AB$ BVPs. Useful identities include the following, which are straightforward to verify:

$$\tfrac{1}{2}(I + BA) = B^+A^+ + B^-A^-, \tag{9.18}$$

$$\tfrac{1}{2}(I - BA) = B^+A^- + B^-A^+, \tag{9.19}$$

$$2(I + C) = (I + BA)B(I + BA)B, \tag{9.20}$$

$$2(I - C) = (I - BA)B(I - BA)B. \tag{9.21}$$
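When $\mathcal{H}$ is finite-dimensional, the identities (9.18)–(9.21) are plain matrix algebra, so they can be tested on randomly generated reflection operators. The following sketch (illustrative only, not from the book) conjugates $\pm 1$ diagonal matrices by random invertible matrices to produce reflections and checks all four identities:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_reflection():
    # Conjugating a +/-1 diagonal matrix by an invertible matrix gives R^2 = I.
    P = rng.normal(size=(n, n))
    D = np.diag(rng.choice([-1.0, 1.0], size=n))
    return P @ D @ np.linalg.inv(P)

A, B = random_reflection(), random_reflection()
I = np.eye(n)
Ap, Am = (I + A) / 2, (I - A) / 2   # A^+, A^-
Bp, Bm = (I + B) / 2, (I - B) / 2   # B^+, B^-
C = (A @ B + B @ A) / 2             # cosine operator

assert np.allclose((I + B @ A) / 2, Bp @ Ap + Bm @ Am)              # (9.18)
assert np.allclose((I - B @ A) / 2, Bp @ Am + Bm @ Ap)              # (9.19)
assert np.allclose(2 * (I + C), (I + B @ A) @ B @ (I + B @ A) @ B)  # (9.20)
assert np.allclose(2 * (I - C), (I - B @ A) @ B @ (I - B @ A) @ B)  # (9.21)
```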
Proposition 9.4.5 (Well-posedness and spectra). Let $A, B : \mathcal{H} \to \mathcal{H}$ be two reflection operators on a Banach space $\mathcal{H}$. Then the following are equivalent:

(i) The $AB$ BVPs are well posed.

(ii) The spectrum of the rotation operator $BA$ does not contain $+1$ or $-1$.
(iii) The spectrum of the cosine operator $C = \tfrac{1}{2}(AB + BA)$ does not contain $+1$ or $-1$.

Similarly, the $AB$ BVPs are well posed in the Fredholm sense if and only if $I \pm BA$ are Fredholm operators, if and only if $I \pm C$ are Fredholm operators.

Proof. We note that $B^+A^+ + B^-A^-$ is invertible if and only if the BVPs $B^+ : A^+\mathcal{H} \to B^+\mathcal{H}$ and $B^- : A^-\mathcal{H} \to B^-\mathcal{H}$ are well posed, and similarly for $B^+A^- + B^-A^+$. Also, $((I + BA)B)^2$ is invertible if and only if $I + BA$ is invertible, and similarly for $I - BA$. The equivalences follow. $\square$

With this general setup, and Proposition 9.4.5 as our main tool for proving well-posedness of Dirac BVPs, we now consider the two main examples that we have in mind. The boundary condition $B$, unlike $A$, is typically a pointwise defined multiplier, derived from the orientation of the tangent space to $\partial D$, described by the normal vector $\nu$. For the remainder of this section, we assume that $D$ is a bounded $C^2$ domain. In this case, we note that $\nu$ is a $C^1$ smooth vector field on $\partial D$. We will see below that the cosine operators for such smooth BVPs tend to be compact, leading directly to BVPs that are Fredholm well posed by Proposition 9.4.5. Indeed, by the general Fredholm theory outlined in Section 6.4, the operators $I \pm C$ will then be Fredholm operators with index zero. The cosine operators are typically generalizations of the following classical integral operator from potential theory.

Exercise 9.4.6 (Double layer potential). Consider the integral operator

$$Kf(x) := \int_{\partial D} \langle\Psi(y-x), \nu(y)\rangle f(y)\,dy, \quad x \in \partial D,$$

with kernel $k(x, y) = \langle\Psi(y-x), \nu(y)\rangle$. In three dimensions, a physical interpretation of $k(x, y)$ is that of the electric potential from a dipole at $y$, in the direction $\nu(y)$, and for this reason $K$ is called the double layer potential operator. The operator $K$ is weakly singular on smooth domains. More precisely, show that on a $C^2$ boundary $\partial D$ of dimension $n-1$, we have the kernel estimates

$$|k(x, y)| \lesssim |x-y|^{2-n} \quad \text{and} \quad |\nabla'_xk(x, y)| \lesssim |x-y|^{1-n}, \quad x \neq y,\ x, y \in \partial D,$$

where $\nabla'_x$ denotes the tangential gradient in the $x$-variable.

Lemma 9.4.7 (Weakly singular = compact). Let

$$Tf(x) = \int_{\partial D} k(x, y)f(y)\,dy, \quad x \in \partial D,$$

be a weakly singular integral operator with kernel estimates $|k(x, y)| \lesssim |x-y|^{2-n}$ and $|\nabla'_xk(x, y)| \lesssim |x-y|^{1-n}$, $x, y \in \partial D$. Here $\nabla'_x$ denotes the tangential gradient along $\partial D$ in the variable $x$. Then $T$ is a compact operator on $C^\alpha(\partial D)$ for all $0 < \alpha < 1$.
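To see concretely how the singularities of a double layer kernel cancel on a smooth boundary, consider the classical planar Laplace analogue $\langle y-x, \nu(y)\rangle/(2\pi|y-x|^2)$ on the unit circle (a different dimension and normalization from Exercise 9.4.6, used here only as an illustration): the vanishing of the numerator at $y = x$ exactly compensates the singular denominator, and the kernel is in fact constant.

```python
import numpy as np

# Laplace double layer kernel in the plane: <y - x, nu(y)> / (2*pi*|y - x|^2).
# On the unit circle with outward normal nu(y) = y, the numerator equals
# 1 - <x, y> and |y - x|^2 = 2 - 2<x, y>, so the kernel is the constant
# 1/(4*pi): only a weak (here removable) singularity survives.
t = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
pts = np.stack([np.cos(t), np.sin(t)], axis=1)   # points on the unit circle
nu = pts                                         # outward unit normal

x, y = pts[0], pts[1:]                           # fixed target x, all y != x
num = np.sum((y - x) * nu[1:], axis=1)           # <y - x, nu(y)>
den = 2 * np.pi * np.sum((y - x) ** 2, axis=1)   # 2*pi*|y - x|^2
kernel = num / den
assert np.allclose(kernel, 1.0 / (4 * np.pi))

# Integrating the kernel over the circle gives (1/(4*pi)) * 2*pi = 1/2,
# the classical jump constant for the double layer potential.
assert abs(kernel.mean() * 2 * np.pi - 0.5) < 1e-12
```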
Proof. Assume that $x, x' \in \partial D$ with $|x - x'| = \epsilon$. Write

$$Tf(x') - Tf(x) = \int_{|y-x|\leq 2\epsilon} (k(x', y) - k(x, y))f(y)\,dy + \int_{|y-x|>2\epsilon} (k(x', y) - k(x, y))f(y)\,dy =: I_0 + I_1.$$

For $I_1$, we obtain from the mean value theorem the estimate $|k(x', y) - k(x, y)| \lesssim \epsilon|y-x|^{1-n}$ when $|y-x| > 2\epsilon$. This yields $|I_1| \lesssim \epsilon\ln(\epsilon^{-1})\|f\|_{L_\infty}$. For $I_0$, we estimate $|k(x', y) - k(x, y)| \lesssim |y-x|^{2-n}$ and obtain $|I_0| \lesssim \epsilon\|f\|_{L_\infty}$. It follows that $T : C^\alpha(\partial D) \to C^\beta(\partial D)$ is bounded for all $\beta < 1$, and hence that $T : C^\alpha(\partial D) \to C^\alpha(\partial D)$ is a compact operator, by the compactness of the embedding $C^\beta(\partial D) \subset C^\alpha(\partial D)$ for $\beta > \alpha$. $\square$

Example 9.4.8 (Normal/tangential BVP). Our main example of a Dirac BVP occurs when the differential equation is $DF + ike_0F = 0$, that is, $A = E_k$, for a fixed wave number $k \in \mathbb{C}$, and the boundary conditions are encoded by the reflection operator $B = N$ given by

$$Nf(x) := \nu(x) \mathbin{\triangle} \widehat{f}(x) \mathbin{\triangle} \nu(x), \quad x \in \partial D.$$
9.4. Boundary Value Problems
315
that 1 2 (E
Z + N EN )f (x) = p.v.
Ψ(y − x)ν(y)f (y) + ν(x)Ψ(y − x)f (y)ν(y)ν(x) dy
Z∂D = 2p.v. hΨ(y − x), ν(y)if (y)dy ∂D Z + p.v. (ν(x) − ν(y))Ψ(y − x)f (y)dy ∂D Z Ψ(y − x)f (y)(ν(y) − ν(x))dy ν(x). + ν(x) p.v. ∂D
Assume now that D is a bounded C 2 domain. We can then apply Lemma 9.4.7 to each of these three terms, showing that EN + N E is a compact operator on C α . Moreover, the compactness of Ek − E on C α follows by yet another application of Lemma 9.4.7. We conclude that the Ek N BVPs are well posed in the sense of Fredholm in C α (∂D) for C 2 domains D. Example 9.4.9 (Spin BVP). The second example of a Dirac BVP that we shall consider is that in which the boundary conditions are induced by left Clifford multiplication by the normal vector ν. For technical reasons we study boundary conditions encoded by the reflection operator B = S given by Sf (x) := e0 4 ν(x) 4 f (x),
x ∈ ∂D.
Note that $(e_0\nu)^2 = -e_0^2\nu^2 = 1$, so indeed $S$ is a reflection operator, and it is bounded on $C^\alpha$, since $\nu$ is $C^1$ regular. The factor $e_0$ is motivated by Proposition 3.3.5, as in Proposition 9.1.5, and makes $\wedge W^{\mathrm{ev}}$ invariant under $S$. As before, we study the differential equation $DF + ike_0F = 0$, encoded by the reflection operator $A = E_k$. It would be more natural to consider the operators $E_k$ and $S$ acting on spinor fields $\partial D \to \slashed{\wedge}W$, though, since both operators use only left multiplication by multivectors. So the true nature of the $E_kS$ BVPs is that of BVPs for the $\slashed{\wedge}$-Dirac operator. However, we here consider the $\wedge$-Dirac operator, since we aim to combine the $E_kS$ and the $E_kN$ BVPs in Section 9.5. The ranges of the projections

$$S^+f = \tfrac{1}{2}(1 + e_0\nu)f \quad \text{and} \quad S^-f = \tfrac{1}{2}(1 - e_0\nu)f$$

are seen to be the subspaces of multivector fields containing left Clifford factors that are respectively the light-like vectors $\nu \pm e_0$. The advantage of the $S$ boundary conditions is that, in some sense, the $E_kS$ BVPs are the best local BVPs possible for the differential equation $DF + ike_0F = 0$. We will see several indications of this below.
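That $S$ is a reflection can be illustrated in a minimal matrix model (a sketch, not the book's full Clifford algebra for $W$): pick any matrices with $e_0^2 = -1$, $\nu^2 = 1$, and $e_0\nu = -\nu e_0$; then $S = e_0\nu$ squares to the identity and $S^\pm$ are complementary projections.

```python
import numpy as np

# Minimal 2x2 model of a time-like e0 and a space-like unit normal nu:
# (i*sigma_x)^2 = -I, sigma_z^2 = +I, and sigma_x, sigma_z anticommute.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
e0 = 1j * sx
nu = sz + 0j
I = np.eye(2)

assert np.allclose(e0 @ e0, -I)
assert np.allclose(nu @ nu, I)
assert np.allclose(e0 @ nu + nu @ e0, 0 * I)

# Hence S = e0*nu satisfies S^2 = -e0^2 nu^2 = I, so S is a reflection,
S = e0 @ nu
assert np.allclose(S @ S, I)

# and S^+ = (I + S)/2, S^- = (I - S)/2 are complementary projections.
Sp, Sm = (I + S) / 2, (I - S) / 2
assert np.allclose(Sp @ Sp, Sp) and np.allclose(Sm @ Sm, Sm)
assert np.allclose(Sp @ Sm, 0 * I) and np.allclose(Sp + Sm, I)
```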
For the cosine operator $\tfrac{1}{2}(ES + SE)$, we calculate

$$\tfrac{1}{2}(E + SES)f(x) = \mathrm{p.v.}\!\int_{\partial D} \big(\Psi(y-x)\nu(y)f(y) + \nu(x)\Psi(y-x)f(y)\big)\,dy = 2\,\mathrm{p.v.}\!\int_{\partial D} \langle\Psi(y-x), \nu(y)\rangle f(y)\,dy + \mathrm{p.v.}\!\int_{\partial D} (\nu(x) - \nu(y))\Psi(y-x)f(y)\,dy,$$
since $e_0$ anticommutes with the space-like vectors $\nu$ and $\Psi$. As in Example 9.4.8, we conclude from this, using Lemma 9.4.7, that the $E_kS$ BVPs are well posed in the Fredholm sense in $C^\alpha$ on $C^2$ domains.

Having established well-posedness in the Fredholm sense for the $E_kN$ and $E_kS$ BVPs, we know that the BVP maps (9.17) are Fredholm operators, so that the null spaces are finite-dimensional and the ranges are closed subspaces of finite codimension. It remains to prove injectivity and surjectivity, whenever possible.

Proposition 9.4.10 (Injectivity). Let $0 < \alpha < 1$ and $\operatorname{Im} k \geq 0$.

• For the $E_kN$ BVPs, we have

$$E_k^+C^\alpha \cap N^+C^\alpha = E_k^+C^\alpha \cap N^-C^\alpha = E_k^-C^\alpha \cap N^+C^\alpha = E_k^-C^\alpha \cap N^-C^\alpha = \{0\}$$

if $\operatorname{Im} k > 0$. Moreover, if $D^-$ is a connected domain and $k \in \mathbb{R}\setminus\{0\}$, then $E_k^-C^\alpha \cap N^+C^\alpha = E_k^-C^\alpha \cap N^-C^\alpha = \{0\}$.

• For the $E_kS$ BVPs, we have $E_k^+C^\alpha \cap S^+C^\alpha = E_k^-C^\alpha \cap S^-C^\alpha = \{0\}$ whenever $\operatorname{Im} k \geq 0$.

Proof. For the estimates, we require the Hermitian inner product $\langle w_1, w_2\rangle_V := \langle e_0\widehat{w}_1e_0^{-1}, w_2\rangle$ on $\wedge W_c$ from Definition 9.3.1. Consider first the interior BVPs. Given $f = F|_{\partial D} \in E_k^+C^\alpha$, we define the linear 1-form $D \times V \to \mathbb{C} : (y, v) \mapsto \langle e_0vF(y), F(y)\rangle_V$, which has nabla derivative

$$\langle e_0\dot\nabla\dot F(y), \dot F(y)\rangle_V = \langle e_0(\nabla F), F\rangle_V - \langle e_0F, \nabla F\rangle_V = \langle e_0(-ike_0F), F\rangle_V - \langle e_0F, -ike_0F\rangle_V = -2\operatorname{Im} k\,|F|_V^2.$$

From the Stokes formula (7.4), it follows that

$$\int_{\partial D} \langle Sf, f\rangle_V\,dy = -2\operatorname{Im} k\int_{D^+} |F|_V^2\,dx.$$
If $f \in N^\pm C^\alpha$, then $\langle Sf, f\rangle_V = 0$, and we conclude that $F = 0$ if $\operatorname{Im} k > 0$. So in this case, $E_k^+C^\alpha \cap N^\pm C^\alpha = \{0\}$. If $f \in S^+C^\alpha$, then $\langle Sf, f\rangle_V = |f|_V^2$, and we conclude that $f = 0$ whenever $\operatorname{Im} k \geq 0$, so $E_k^+C^\alpha \cap S^+C^\alpha = \{0\}$.

Consider next the exterior BVPs. Let $f = F|_{\partial D} \in E_k^-C^\alpha$, and fix a large radius $R$. From Stokes's theorem applied to the domain $D_R^- := D^- \cap \{|x| < R\}$, we have

$$\int_{|x|=R} \langle e_0\tfrac{x}{|x|}F, F\rangle_V\,dx - \int_{\partial D} \langle Sf, f\rangle_V\,dy = -2\operatorname{Im} k\int_{D_R^-} |F|_V^2\,dx.$$

Furthermore, on the sphere $|x| = R$ we note that

$$|(\tfrac{x}{|x|} + e_0)F|_V^2 = 2|F|_V^2 - 2\langle e_0\tfrac{x}{|x|}F, F\rangle_V,$$

and obtain the identity

$$\int_{|x|=R} \big(|F|_V^2 - \tfrac{1}{2}|(\tfrac{x}{|x|} + e_0)F|_V^2\big)\,dx - \int_{\partial D} \langle Sf, f\rangle_V\,dy = -2\operatorname{Im} k\int_{D_R^-} |F|_V^2\,dx.$$

Using Proposition 9.3.6, we have $\lim_{R\to\infty}\int_{|x|=R} |(\tfrac{x}{|x|} + e_0)F|_V^2\,dx = 0$ for all $\operatorname{Im} k \geq 0$. If $f \in S^-C^\alpha$, then $\langle Sf, f\rangle_V = -|f|_V^2$, and we again conclude that $f = 0$, so $E_k^-C^\alpha \cap S^-C^\alpha = \{0\}$. If $f \in N^\pm C^\alpha$, then $\langle Sf, f\rangle_V = 0$, and we have

$$\int_{|x|=R} |F|_V^2\,dx + 2\operatorname{Im} k\int_{D_R^-} |F|_V^2\,dx = \tfrac{1}{2}\int_{|x|=R} |(\tfrac{x}{|x|} + e_0)F|_V^2\,dx \to 0, \quad R \to \infty.$$

When $\operatorname{Im} k > 0$, this shows that $F = 0$. When $k \in \mathbb{R}\setminus\{0\}$, we have

$$\lim_{R\to\infty}\int_{|x|=R} |F|_V^2\,dy = 0.$$
318
Chapter 9. Dirac Wave Equations
We remark that by applying analytic Fredholm theory, one can prove that in fact, also the interior Ek N BVPs are well posed for k ∈ R, except for a discrete set of resonances. Proof. We make use of the Fredholm theory outlined in Section 6.4. By Example 9.4.8 and Proposition 9.4.5, the Ek N BVPs are well posed in the Fredholm sense for all k. By Proposition 9.4.10 the four maps N ± : Ek± C α → N ± C α are injective when Im k > 0. We conclude that I ± 12 (Ek N + N Ek ) are injective Fredholm operators with index zero, and therefore invertible. So the Ek N BVPs are well posed when Im k > 0. For k ∈ R\{0}, we have injective semi-Fredholm maps N ± : Ek− C α → N ± C α by Proposition 9.4.10. By perturbing Ek− to Im k > 0, Lemma 9.4.12 below proves that they are invertible. The well-posedness of S − : Ek+ C α → S − C α and S + : Ek− C α → S + C α follows from Example 9.4.8 and Proposition 9.4.10, using Proposition 9.4.5. Note that I − 12 (Ek S + SEk ) = ((S − Ek+ + S + Ek− )S)2 is an injective Fredholm operator with index zero, and hence invertible. The following two techniques for proving existence of solutions to BVPs turn out to be useful. Lemma 9.4.12 (Perturbation of domains). Let At , t ∈ [0, 1], and B be reflection operators on a Banach space H, and consider the family of BVPs described by B + : + A+ t H → B H. If these are all semi-Fredholm maps and if t 7→ At is continuous, + + + + then the indices of B + : A+ 0 H → B H and B : A1 H → B H are equal. + Proof. We parametrize the domains A+ t H by the fixed space A0 H. Considering + + + + ˜ At := At : A0 H → At H as one of the four abstract A0 At BVPs, we note that
I + At A0 = 2I + (At − A0 )A0 . If kAt −A0 k ≤ 1/kA0 k, it follows that I +At A0 is invertible, and from (9.18) we see + + + ˜+ in particular for 0 ≤ t ≤ , that A˜+ t is invertible. Let Bt := B : At H → B H. + + + + ˜t A˜t , we conclude that Ind(B ˜ A˜ ) = Applying the method of continuity to B + + ˜ + A˜+ ). Since A˜+ ˜ ˜ Ind(B are invertible, we obtain Ind( B ) = Ind( B ). Repeating t 0 0 0 ˜ + ) = Ind(B ˜ + ). this argument a finite number of times, we conclude that Ind(B 1 0 Lemma 9.4.13 (Subspace duality). Let A and B be two reflection operators on a Banach space H, and consider the BVP described by B + : A+ H → B + H. This map is surjective if and only if the dual BVP described by (B ∗ )− : (A∗ )− H∗ → (B ∗ )− H∗ is an injective map. Proof. Note that A∗ and B ∗ are reflection operators in H∗ . By duality as in Section 6.4, we have (A+ H)⊥ = R(A+ )⊥ = N((A∗ )+ ) = R((A∗ )− ) = (A∗ )− H∗ and similarly (B − H)⊥ = (B ∗ )+ H∗ . Similarly to Proposition 9.4.1, since (A+ H + B − H)⊥ = (A+ H)⊥ ∩ (B − H)⊥ , this translates to the claim.
We end this section with two applications of the techniques of this section to Dirac's equation.

Example 9.4.14 (The MIT bag model). Consider Dirac's equation $i\hbar\partial_t\psi = H_0\psi$ from Proposition 9.2.8 on a bounded domain $D \subset V$. The MIT bag model is used in physics to describe the quarks in a nucleon, that is, a proton or a neutron. The bag $D$ represents the nucleon, and the boundary condition is $\nu.\psi = \psi$, or in the $\wedge V$ representation, $e_0\nu.\psi = \psi$. This boundary condition implies in particular that the probability current $\langle j_p, \nu\rangle = i\langle\psi, \nu.\psi\rangle = i\langle\psi, \psi\rangle$ across $\partial D$ vanishes, since $j_p$ is a real spacetime vector field. We see that, with suitable modifications, such BVPs for time-harmonic solutions to Dirac's equation can be studied with the methods described in this section.

Example 9.4.15 (Chirality of (anti-)particles). What we refer to here as abstract BVPs, namely the algebra of two reflection operators describing the geometry between two splittings of a function space, appears in many places, independently of any BVPs. One of many such examples we saw in connection with Proposition 9.2.8. Consider the Hilbert space $\mathcal{H} := L_2(V; \slashed{\wedge}V_c^2)$, where we saw two different splittings. The reflection operator

$$B = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}$$

encodes the chiral subspaces of right- and left-handed spinors, whereas

$$A = \operatorname{sgn}(H_0) = \frac{1}{\sqrt{\tilde m^2 - \Delta}}\begin{pmatrix} \tilde m & -i\slashed{D} \\ i\slashed{D} & -\tilde m \end{pmatrix}.$$

Using, for example, the representation of $\slashed{\wedge}V$ by Pauli matrices, the Fourier multiplier of the rotation operator $AB$ at frequency $\xi \in V$ is seen to have the four eigenvalues

$$\lambda = (\pm|\xi| \pm i\tilde m)/\sqrt{|\xi|^2 + \tilde m^2}.$$

Therefore the spectrum of $AB$ is precisely the unit circle $|\lambda| = 1$. We conclude that although the spectral subspaces $L_2^\pm(V; \slashed{\wedge}V_c^2)$ do not intersect the chiral subspaces, the angle between them is zero. The problem occurs at high frequencies: particles or antiparticles of high energy may be almost right- or left-handed.
9.5 Integral Equations
The aim of this section is to use the somewhat abstract theory from Section 9.4 to derive integral equations, with good numerical properties, for solving Dirac BVPs; such equations have been discovered only recently.
• It is desirable to extend the theory to nonsmooth domains, whose boundaries may have corners and edges, as is often the case in applications. Ideally, one would like to be able to handle general Lipschitz domains.

• To solve a given BVP, we want to have an equivalent integral formulation

$$\int_{\partial D} k(x, y)f(y)\,dy = g(x), \quad x \in \partial D,$$
where $g$ is given by the boundary datum, and the integral equation is uniquely solvable for $f$ if and only if the BVP to be solved is well posed. Ideally, we want to have a function space without any constraints, meaning a space of functions $\partial D \to L$ with values in a fixed linear space $L$ and with coordinate functions in some classical function space.

In this section we let $D$ be a bounded strongly Lipschitz domain. At this generality, the normal vector field $\nu$ is only a measurable function, without any further smoothness. To extend the theory from Section 9.4 and keep the basic operators $E_k$, $N$, and $S$ bounded, we shall use $L_2 = L_2(\partial D; \wedge W_c)$, which is the most fundamental space for singular integral operators like $E_k$. Indeed, the singular integral operator $E_k$ is bounded on $L_2(\partial D)$ for every Lipschitz domain $D$, by Theorem 8.3.2 and Exercises 9.3.3 and 6.4.3.

We first consider Fredholm well-posedness of the $E_kN$ BVPs in $L_2$ on bounded strongly Lipschitz domains. On such nonsmooth domains it is not true in general that $E_kN + NE_k$, or even the classical double layer potential operator from Exercise 9.4.6, is compact. However, we recall from Proposition 9.4.5 that it suffices to show that the spectrum of $\tfrac{1}{2}(E_kN + NE_k)$ does not contain $\pm 1$.

Theorem 9.5.1 (Rellich estimates). Let $D$ be a bounded strongly Lipschitz domain, and let $\theta$ be a smooth, compactly supported vector field that is transversal to $\partial D$ as in Exercise 6.1.8. Define the local Lipschitz constant $L := \sup_{\partial D}(|\theta\wedge\nu|/\langle\theta, \nu\rangle)$ for $\partial D$. Then $\lambda I + E_kN$ is a Fredholm operator on $L_2(\partial D)$ of index zero whenever $\lambda = \lambda_1 + i\lambda_2$, $|\lambda_2| < |\lambda_1|/L$, $\lambda_1, \lambda_2 \in \mathbb{R}$.

Note that since $E_kN$ and $(E_kN)^{-1} = NE_k$ are bounded, we also know that the spectrum of $E_kN$ is contained in an annulus around $0$. Furthermore, since

$$((\lambda I + E_kN)E_k)^2 = \lambda(\lambda + \lambda^{-1} + E_kN + NE_k),$$

the resolvent set of the cosine operator contains the hyperbolic regions onto which $\lambda \mapsto \tfrac{1}{2}(\lambda + \lambda^{-1})$ maps the double cone $|\lambda_2| < |\lambda_1|/L$.
For $\lambda = \pm 1$, it follows in particular that the $E_kN$ BVPs are well posed in the Fredholm sense in $L_2(\partial D)$.
Proof. To motivate the calculations to come, we consider first the BVP described by $N^+ : E_k^+L_2 \to N^+L_2$. To estimate $\|f\|_{L_2}$ in terms of $\|N^+f\|_{L_2}$, we insert the factor $\langle\theta, \nu\rangle$ and express it with the Clifford product as

$$\|f\|_{L_2}^2 \approx \int_{\partial D} |f|_V^2\langle\theta, \nu\rangle\,dy = \int_{\partial D} \tfrac{1}{2}\langle f, f(\theta\nu + \nu\theta)\rangle_V\,dy = \operatorname{Re}\int_{\partial D} \langle f\nu, f\theta\rangle_V\,dy.$$

We next use the reversed twin of the Riesz formula (3.4) to write $f\nu = 2f\wedge\nu - \nu\widehat{f}$. We estimate the last term so obtained by applying Stokes's theorem with the linear 1-form $(y, v) \mapsto \langle v\widehat{f}(y), f(y)\theta(y)\rangle_V$, giving

$$\int_{\partial D} \langle\nu\widehat{f}, f\theta\rangle_V\,dy = \int_D \Big(\langle -ike_0\widehat{f}, f\theta\rangle_V + \langle\widehat{f}, (-ike_0f)\theta\rangle_V + \sum_{j=1}^n \langle\widehat{f}, e_jf(\partial_j\theta)\rangle_V\Big)\,dy. \tag{9.22}$$

Combining and estimating, we get

$$\|f\|_{L_2(\partial D)}^2 \lesssim \|f\wedge\nu\|_{L_2(\partial D)}\|f\|_{L_2(\partial D)} + \|F\|_{L_2(D_\theta)}^2,$$

where $D_\theta := D \cap \operatorname{supp}\theta$. The Cauchy integral $L_2(\partial D) \to L_2(D_\theta) : f \mapsto F$ can be shown to be a bounded operator by generalizing the Schur estimates from Exercise 6.4.3 to integral operators from $\partial D$ to $D_\theta$. Moreover, such estimates show, by truncation of the kernel, that this Cauchy integral is the norm limit of Hilbert–Schmidt operators, and hence compact. On the first term we can use the absorption inequality $\|N^+f\|\|f\| \leq \frac{1}{2\epsilon}\|N^+f\|^2 + \frac{\epsilon}{2}\|f\|^2$. Choosing $\epsilon$ small leads to a lower bound, showing that $N^+ : E_k^+L_2 \to N^+L_2$ is a semi-Fredholm operator.

Next consider the integral equation $\lambda h + E_kNh = g$, where we need to estimate $\|h\|_{L_2}$ in terms of $\|g\|_{L_2}$. To this end, we note that $E_kNh = g - \lambda h$, so that

$$E_k^\pm L_2 \ni f^\pm := 2E_k^\pm Nh = Nh \pm (g - \lambda h).$$

Applying (9.22) to $f^+$, and the corresponding application of Stokes's theorem to $f^-$, we obtain the estimates

$$\Big|\int_{\partial D} \langle\nu\widehat{f^\pm}, f^\pm\theta\rangle_V\,dy\Big| \lesssim \|F\|_{L_2(\operatorname{supp}\theta)}^2.$$
We now expand the bilinear expressions on the left, writing $f^\pm = Nh \mp \lambda h \pm g$, and observe that the integrals

$$\int_{\partial D} \langle\nu\widehat{h}, h\theta\rangle_V\,dy \quad \text{and} \quad \int_{\partial D} \langle\nu\widehat{Nh}, Nh\,\theta\rangle_V\,dy$$

are bad, in the sense that we have only an upper estimate by $\|h\|_{L_2}^2$, whereas the terms

$$\int_{\partial D} \langle\nu\widehat{Nh}, \lambda h\theta\rangle_V\,dy = \lambda\int_{\partial D} \langle h, h\theta\nu\rangle_V\,dy$$
and

$$\int_{\partial D} \langle\nu\widehat{\lambda h}, Nh\,\theta\rangle_V\,dy = \lambda\int_{\partial D} \langle h, h\theta\nu\rangle_V\,dy$$

are good, in the sense that they are comparable to $\|h\|_{L_2}^2$. To avoid the bad terms, we subtract the identities and obtain

$$\|F\|_{L_2(\operatorname{supp}\theta)}^2 \gtrsim \int_{\partial D} \big(\langle\nu\widehat{f^+}, f^+\theta\rangle_V - \langle\nu\widehat{f^-}, f^-\theta\rangle_V\big)\,dy \gtrsim 2\operatorname{Re}\lambda\int_{\partial D} \langle h, h\theta\nu\rangle_V\,dy - \|h\|_{L_2}\|g\|_{L_2} - \|g\|_{L_2}^2.$$

Writing $\theta\nu = \langle\theta, \nu\rangle + \theta\wedge\nu$, we know that $|\theta\wedge\nu| \leq L\langle\theta, \nu\rangle$ for some $L < \infty$. It follows that $2\operatorname{Re}\lambda\int_{\partial D} \langle h, h\theta\nu\rangle_V\,dy \gtrsim \|h\|_{L_2}^2$ if $|\lambda_2| < |\lambda_1|/L$, and we conclude that in this case $\lambda I + E_kN$ is a semi-Fredholm operator. That it is a Fredholm operator with index zero follows from the method of continuity, by perturbing $\lambda$, for example to $0$, where $E_kN$ is an invertible operator. $\square$

Theorem 9.5.2 ($L_2$ well-posedness for $E_kN$). For the $E_kN$ Dirac BVPs with boundary regularity $L_2(\partial D)$ on bounded strongly Lipschitz domains $D$, we have the following well-posedness results. The four BVPs $N^\pm : E_k^\pm L_2 \to N^\pm L_2$ are well posed for all $\operatorname{Im} k \geq 0$ except for a discrete set of real $k \geq 0$. If the exterior domain $D^-$ is connected, then the exterior BVPs $N^\pm : E_k^-L_2 \to N^\pm L_2$ are well posed for all nonzero $k$ with $\operatorname{Im} k \geq 0$.

Proof. By Theorem 9.5.1 and Proposition 9.4.5, the $E_kN$ BVPs are well posed in the Fredholm sense for all $k$. Proposition 9.4.10 can be verified when the $C^\alpha$ topology is replaced by $L_2$. The proof can now be completed as in Theorem 9.4.11. $\square$

For the remainder of this section, we consider the second problem posed above, namely how to formulate a given Dirac BVP as an integral equation that is good for numerical applications. As a concrete example, we take the exterior BVP with prescribed tangential part, that is,

$$N^+ : E_k^-L_2 \to N^+L_2. \tag{9.23}$$
This BVP has important applications, since a solution of it yields an algorithm for computing, for example, how acoustic and electromagnetic waves are scattered by an object D. Assuming that the exterior domain D− is connected and Im k ≥ 0, k 6= 0, we know that this BVP is well posed. Although it is an invertible linear equation by which we can solve the BVP, it is not useful for numerical applications. The reason is that the solution space Ek− L2 is defined by a nonlocal constraint on f ∈ L2 . What we need is an ansatz, meaning some operator U : Y → Ek− L2 , where Y is a function space that is good for numerical purposes and U has good invertibility properties. Using such a U , we can solve the BVP (9.23) by solving N +U h = g
for $h \in \mathcal{Y}$. This gives the solution $f = Uh \in E_k^-L_2$.

As a first try, we swap the roles of $E_k$ and $N$ and consider $U = E_k^- : N^+L_2 \to E_k^-L_2$. This leads to the operator $N^+E_k^-|_{N^+L_2}$, which can be shown to be closely related to the double layer potential operator from Exercise 9.4.6. The function space $\mathcal{Y} = N^+L_2$ is good, but although this $U$ is a Fredholm operator, it fails to be invertible for a discrete set of real $k$. Indeed, $\mathsf{N}(U) = N^+L_2 \cap E_k^+L_2$ will contain the eigenfunctions of the self-adjoint operator $-ie_0D$ with tangential boundary conditions on the bounded domain $D^+$. This explains a well-known problem in the numerical solution of BVPs by integral equations: the existence of spurious interior resonances $k$, where the integral equation fails to be invertible, even though the BVP it is used to solve is itself well posed.

A better try, which should be more or less optimal, comes from the EkS BVPs. Swapping the roles of $E_k$ and $S$, we consider $U = E_k^- : S^+L_2 \to E_k^-L_2$. Similarly, a good ansatz for an interior Dirac BVP is $U = E_k^+ : S^-L_2 \to E_k^+L_2$. It is important not to swap $S^+$ and $S^-$ in these ansatzes. Maybe the best way to see that the EkS BVPs have well-posedness properties superior to those for the EkN BVPs on $L_2$, even in the Fredholm sense and in particular on Lipschitz domains, is to consider the rotation operator
$$
E_kSf(x) = 2\,\mathrm{p.v.}\!\int_{\partial D} \Psi_k^-(y-x)f(y)\, dy, \qquad x \in \partial D.
$$
Note that we used that $\nu^2 = 1$. Since $E_k - E$ is a weakly singular integral operator, it is compact on $L_2(\partial D)$, and when $k = 0$, we note that $ES$ is a skew-symmetric operator, since $\Psi$ is a space-like vector depending skew-symmetrically on $x$ and $y$. In particular, this means that the spectrum of $ES$ lies on the imaginary axis and that the operators $I \pm ES$ are invertible with $\|(I \pm ES)^{-1}\| \le 1$. By the identities from Proposition 9.4.5, this means, for example, that
$$
\|E^-h\| = \tfrac12\|(I - ES)h\| \ge \tfrac12\|h\|, \qquad h \in S^+L_2.
$$
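The resolvent bound for the skew-symmetric operator $ES$ has an elementary finite-dimensional analogue that can be checked numerically. The following sketch (using numpy; the random matrix is an abstract stand-in, not a discretization of $ES$) verifies that a real skew-symmetric matrix has purely imaginary spectrum and that $\|(I \pm A)^{-1}\| \le 1$, since $\|(I \pm A)x\|^2 = \|x\|^2 + \|Ax\|^2 \ge \|x\|^2$ when $\langle x, Ax\rangle = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = B - B.T                       # real skew-symmetric: A.T == -A
I = np.eye(8)

# The spectrum of a skew-symmetric matrix lies on the imaginary axis.
assert np.allclose(np.linalg.eigvals(A).real, 0.0, atol=1e-8)

# I + A and I - A are invertible, with inverses of operator norm at most 1.
for S in (I + A, I - A):
    assert np.linalg.norm(np.linalg.inv(S), 2) <= 1.0 + 1e-12
print("resolvent bounds for a skew-symmetric matrix verified")
```

The same orthogonality argument is what gives the lower bound $\|(I - ES)h\| \ge \|h\|$ above.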
For general $k$, we note that there is still a major difference in the well-posedness properties of the EkS BVPs as compared to those for the EkN BVPs. The operator $\lambda I + E_kS$ can fail to be Fredholm only when $\operatorname{Re}\lambda = 0$, whereas $\lambda I + E_kN$ can fail to be Fredholm whenever $|\operatorname{Re}\lambda| \le L|\operatorname{Im}\lambda|$, which is not far away from $\lambda = \pm 1$ for large $L$. So, as compared to the EkN BVPs, the well-posedness properties for the EkS BVPs do not essentially depend on the Lipschitz geometry of $\partial D$.

Theorem 9.5.3 (L2 well-posedness for EkS). For the EkS Dirac BVPs with boundary regularity $L_2(\partial D)$ on bounded strongly Lipschitz domains $D$, we have the following well-posedness results. The two spin-Dirac BVPs $S^- : E_k^+L_2 \to S^-L_2$ and
$S^+ : E_k^-L_2 \to S^+L_2$ are well posed for all $\operatorname{Im} k \ge 0$. Equivalently, the ansatzes $E_k^- : S^+L_2 \to E_k^-L_2$ and $E_k^+ : S^-L_2 \to E_k^+L_2$ are invertible for all $\operatorname{Im} k \ge 0$.

Proof. As before, we note the identity $\tfrac12(I - E_kS) = E_k^+S^- + E_k^-S^+$ and its twin $S^+E_k^- + S^-E_k^+ = \tfrac12(I - SE_k) = \tfrac12(E_kS - I)SE_k$. From the discussion above it follows that $I - E_kS$ is a Fredholm operator of index 0, which directly shows that the two BVPs and the two ansatzes are Fredholm maps. By Proposition 9.4.10 adapted to $L_2$, the four maps are injective for all $\operatorname{Im} k \ge 0$. Therefore $I - E_kS$ is injective, hence surjective. We conclude that the two BVPs and the two ansatzes are invertible. □

Example 9.5.4 (Asymptotic APS BVPs). Consider the Cauchy reflection operator $A = E_k$ encoding the Dirac equation $DF + ike_0F = 0$, together with the abstract boundary conditions $B = E_l$, where $k, l \in \mathbf{C}$. Clearly not all four EkEl BVPs are well posed, since $E_l - E_k$ is a compact operator. However, since
$$
\tfrac12(I + E_lE_k) = E_l^+E_k^+ + E_l^-E_k^-
$$
clearly is a Fredholm operator with index zero, the two BVPs $E_l^+ : E_k^+L_2 \to E_l^+L_2$ and $E_l^- : E_k^-L_2 \to E_l^-L_2$ are Fredholm operators. Such BVPs with nonlocal boundary conditions defined by the differential equation itself are essentially the boundary conditions employed by Atiyah, Patodi, and Singer (APS) in their work on index theory for manifolds with boundary. We next let $l \to \infty$ along the upper imaginary axis. The operators $E_l$ are not norm convergent, but for a fixed function $h$, one can show that $E_lh \to -Sh$. Note from the formula for $\Psi_l^-$ how the singular integral operators $E_l$ localize to the pointwise multiplier $-S$. This shows that indeed, the operator $S$ is related to the differential equation as a local asymptotic Cauchy singular integral, and to some extent explains why the EkS BVPs are so remarkably well posed.

Example 9.5.5 (Spin integral equation). We now return to the exterior Dirac BVP (9.23) with prescribed tangential parts, which we know is well posed whenever $\operatorname{Im} k \ge 0$, $k \ne 0$, and $D^-$ is connected. Using the invertible ansatz $E_k^- : S^+L_2 \to E_k^-L_2$ from Theorem 9.5.3, we can solve the BVP (9.23), given datum $g \in N^+L_2$, by solving
$$
N^+E_k^-h = g
\tag{9.24}
$$
for $h \in S^+L_2$, giving the solution $f = E_k^-h \in E_k^-L_2$ and
$$
F(x) = \int_{\partial D} \Psi_k^-(y-x)(-\nu(y))h(y)\, dy, \qquad x \in D^-,
$$
solving DF + ike0 F = 0 in D− with N + F |∂D = g. This is certainly numerically doable, since both spaces N + L2 and S + L2 are defined by a simple pointwise
constraint determined by the normal $\nu$. However, we can enhance the integral equation somewhat as follows. Consider the reflection operator $T$ given by $Tf = -e_0\hat f e_0$. We note that, similarly to $N$, replacing $\nu$ by the time-like vector $e_0$, indeed $T^2 = I$, and $T$ reflects time-like multivectors in the subspace of space-like multivectors. Computing relevant cosine operators, we have
$$
\begin{aligned}
(TS + ST)f &= -e_0(\widehat{e_0\nu f})e_0 + e_0\nu(-e_0\hat f e_0) = 0,\\
(TN + NT)f &= -e_0\nu f\nu e_0 - \nu e_0 f e_0\nu \ne 0,\\
(NS + SN)f &= \nu(\widehat{e_0\nu f})\nu + e_0\nu(\nu\hat f\nu) = 0.
\end{aligned}
$$
By Proposition 9.4.5, this means that we have optimally well posed abstract BVPs TS and NS. In particular, this allows us to parametrize the domain space $S^+L_2$ of the integral equation (9.24), for example, by $T^+L_2 = L_2(\partial D; \wedge V_c)$, the space of space-like multivector fields, which is an ideal space for applications. In fact, we verify that $S^+ : T^+L_2 \to S^+L_2$ is $1/\sqrt2$ times an isometry.

Since $TN + NT \ne 0$, we cannot directly parametrize the range space $N^+L_2$ of (9.24) by $T^+L_2$. However, we can go via the splitting $L_2 = S^+L_2 \oplus S^-L_2$, since, for example, $T^+S^+ : N^+L_2 \to T^+L_2$ is invertible. In fact, both $S^+ : N^+L_2 \to S^+L_2$ and $T^+ : S^+L_2 \to T^+L_2$ are $1/\sqrt2$ times isometries. To summarize, we propose that the exterior BVP (9.23) with prescribed tangential part is best solved using the integral equation
$$
T^+S^+N^+E_k^-S^+h = T^+S^+g,
$$
for $h \in T^+L_2$. Indeed, the derivation above shows that this integral equation is uniquely solvable, and the function space for the variable $h$ and the datum $T^+S^+g$ is simply $T^+L_2 = L_2(\partial D; \wedge V_c)$. To write out this equation more explicitly, we compute that $T^+S^+g = \tfrac12(g_0 + \nu g_1)$ when $g = g_0 + e_0g_1$ and $g_0, g_1 \in N^+L_2 \cap T^+L_2$, so the time-like part is mapped onto a normal part when the original multivector is tangential. We also compute that $T^+S^+N^+S^+T^+ = \tfrac14 T^+$. Writing $E_k^- = \tfrac12(I - E_k)$, the integral equation for $h \in L_2(\partial D; \wedge V_c)$ becomes
$$
\tfrac12 h(x) + M(x)\,\mathrm{p.v.}\!\int_{\partial D} \Psi_k^-(y-x)(\nu(y) - e_0)h(y)\, dy = 2M(x)g(x), \qquad x \in \partial D.
$$
Here $M$ denotes the multiplier that projects onto tangential multivectors and maps tangential time-like multivectors onto normal space-like multivectors by replacing a left factor $e_0$ by $\nu$. We refer to this integral equation as a spin integral equation for solving the BVP (9.23), since the key feature is that it uses an ansatz derived from the EkS BVPs, which, as we have discussed in Example 9.4.9, really are BVPs for the spin Dirac equation $\slashed{D}\psi + ike_0\psi = 0$.

Example 9.5.6 (Transmission problems). Transmission problems generalize boundary value problems in that we look for a pair of fields $F^+ : D^+ \to \wedge W_c$ and $F^- : D^- \to \wedge W_c$ such that
$$
\begin{cases}
DF^+ + ik_2e_0F^+ = 0, & \text{in } D^+,\\
DF^- + ik_1e_0F^- = 0, & \text{in } D^-,\\
Mf^+ = f^- + g, & \text{on } \partial D.
\end{cases}
\tag{9.25}
$$
Here the wave numbers $k_1, k_2 \in \mathbf{C}$ are different in the two domains, with $\operatorname{Im} k_1 \ge 0$ and $\operatorname{Im} k_2 \ge 0$. The relation between the traces $f^+ = F^+|_{\partial D}$ and $f^- = F^-|_{\partial D}$ on $\partial D$ is described by a multiplier $M \in \mathcal{L}(L_2)$ and a given source $g \in L_2$. For solving the transmission problem (9.25), unlike in the case of BVPs, we have a good ansatz directly available, namely
$$
U : L_2 \to E_{k_2}^+L_2 \oplus E_{k_1}^-L_2 : h \mapsto (E_{k_2}^+h, E_{k_1}^-h).
$$
In the case $k_1 = k_2$, it is clear from the $L_2$ analogue of Theorem 9.3.9 that $U$ is invertible. What is somewhat surprising is that $U$ is invertible for all $\operatorname{Im} k_1 \ge 0$ and $\operatorname{Im} k_2 \ge 0$. To prove this, it suffices by the method of continuity to show that $U$ is injective. To this end, note that $Uh = 0$ means that $h = F^+|_{\partial D} = F^-|_{\partial D}$, where $DF^+ + ik_1e_0F^+ = 0$ in $D^+$ and $DF^- + ik_2e_0F^- = 0$ in $D^-$. Applying Stokes's theorem twice to $\int_{\partial D} \langle e_0\nu h, h\rangle_V\, dy$, computations as in the proof of Proposition 9.4.10 give
$$
2\operatorname{Im}k_1 \int_{D^+} |F^+|_V^2\, dx + 2\operatorname{Im}k_2 \int_{D_R^-} |F^-|_V^2\, dx + \int_{|x|=R} |F^-|_V^2\, dx
= \frac12 \int_{|x|=R} \Big| \Big(\tfrac{x}{|x|} + e_0\Big)F^- \Big|_V^2\, dx.
$$
Using radiation conditions and jumps, this shows that $F^+ = F^- = 0$ and therefore $h = 0$. Using this invertible ansatz $U$, we can now solve the transmission problem (9.25) by solving the integral equation
$$
(ME_{k_2}^+ - E_{k_1}^-)h = g
$$
for $h \in L_2$. Note that this is an integral equation in $L_2(\partial D; \wedge W_c)$ without any constraints. From the solution $h$, we finally compute the fields
$$
F^+(x) = \int_{\partial D} \Psi_{k_2}^-(y-x)\nu(y)h(y)\, dy,
\qquad
F^-(x) = -\int_{\partial D} \Psi_{k_1}^-(y-x)\nu(y)h(y)\, dy,
$$
solving the transmission problem. In Section 9.7, we apply this integral equation for Dirac transmission problems to solve scattering problems for electromagnetic waves.
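The projection identities for reflection operators used repeatedly in this section, such as $\tfrac12(I + E_lE_k) = E_l^+E_k^+ + E_l^-E_k^-$, are purely algebraic consequences of $E_l^2 = E_k^2 = I$ and can be sanity-checked in finite dimensions. A numpy sketch (the random reflections are abstract stand-ins, not discretizations of the actual Cauchy integrals):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

def random_reflection(rng, n):
    """A matrix R with R @ R = I: conjugate a random ±1 diagonal."""
    V = rng.standard_normal((n, n)) + n * np.eye(n)  # well conditioned
    D = np.diag(rng.choice([-1.0, 1.0], size=n))
    return V @ D @ np.linalg.inv(V)

I = np.eye(n)
Ek, El = random_reflection(rng, n), random_reflection(rng, n)
Ekp, Ekm = (I + Ek) / 2, (I - Ek) / 2   # spectral projections E_k^±
Elp, Elm = (I + El) / 2, (I - El) / 2

assert np.allclose(Ek @ Ek, I) and np.allclose(El @ El, I)
# 1/2 (I + El Ek) = El^+ Ek^+ + El^- Ek^-
assert np.allclose((I + El @ Ek) / 2, Elp @ Ekp + Elm @ Ekm)
# twin: 1/2 (I - El Ek) = El^+ Ek^- + El^- Ek^+
assert np.allclose((I - El @ Ek) / 2, Elp @ Ekm + Elm @ Ekp)
print("reflection operator identities verified")
```

Expanding the projections as $\tfrac14(I \pm E_l)(I \pm E_k)$ and summing reproduces the identities exactly, which is what the assertions confirm numerically.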
9.6 Boundary Hodge Decompositions
We have considered Dirac BVPs in the previous sections and how to solve them by integral equations. Returning to Examples 9.3.7 and 9.3.8, one important issue remains. We saw there that both the Helmholtz equation and Maxwell's equations can be viewed as special cases of the Dirac equation $DF + ike_0F = 0$. However, in these examples $F$ is a vector field and a bivector field, respectively, and not a general multivector field. If we intend, for example, to solve BVPs for Helmholtz's or Maxwell's equations by a spin integral equation as in Example 9.5.5, or a transmission problem with a Dirac integral equation as in Example 9.5.6, then we need a tool to ensure that the solution multivector field $F$ is in fact a vector or bivector field. It turns out that there exists an exterior/interior derivative operator acting on multivector fields $\partial D \to \wedge W_c$, which we shall denote by $\Gamma_k$, which is the tool needed. Applications to Maxwell scattering are found in Section 9.7.

The point of departure for our explanations is Proposition 8.1.5, where we noted that for a monogenic field $\nabla \mathbin{\triangle} F = 0$, each of its homogeneous component functions $F_j$ is monogenic if and only if $\nabla\wedge F = 0 = \nabla\lrcorner F$. Generalizing this to time-harmonic waves with wave number $k \in \mathbf{C}$, we have the following.

Lemma 9.6.1 (Two-sided k-monogenic fields). Assume that $F : D \to \wedge W_c$ solves $DF + ike_0F = 0$ in some open set $D \subset V$. Write $F = F_0 + F_1 + \cdots + F_{n+1}$, where $F_j : D \to \wedge^j W_c$. Then $DF_j + ike_0F_j = 0$ in $D$ for all $0 \le j \le n+1$ if and only if
$$
\begin{cases}
dF + ike_0\wedge F = 0,\\
\delta F + ike_0\lrcorner F = 0.
\end{cases}
$$

The way we use this result is the following: if we construct $F$ solving $DF + ike_0F = 0$ and some BVP, and if $dF + ike_0\wedge F = 0$, then we can conclude, for example, that $F_2$ is a bivector field solving the Dirac equation, since the homogeneous parts of $F$ decouple, and thus $F_2$ is an electromagnetic field satisfying Maxwell's equations.

Proof. If $(\nabla + ike_0)\wedge F = 0 = (\nabla + ike_0)\lrcorner F$, then
$$
(\nabla + ike_0)\wedge F_j = ((\nabla + ike_0)\wedge F)_{j+1} = 0
$$
and $(\nabla + ike_0)\lrcorner F_j = ((\nabla + ike_0)\lrcorner F)_{j-1} = 0$, and so
$$
(\nabla + ike_0) \mathbin{\triangle} F_j = (\nabla + ike_0)\lrcorner F_j + (\nabla + ike_0)\wedge F_j = 0
$$
for all $j$. Conversely, if $(\nabla + ike_0)\mathbin{\triangle} F_j = 0$ for all $j$, then $(\nabla + ike_0)\wedge F_j = ((\nabla + ike_0)\mathbin{\triangle} F_j)_{j+1} = 0$ and $(\nabla + ike_0)\lrcorner F_j = ((\nabla + ike_0)\mathbin{\triangle} F_j)_{j-1} = 0$. Summing over $j$, we obtain $(\nabla + ike_0)\wedge F = 0 = (\nabla + ike_0)\lrcorner F$. □

To proceed with the analysis, we need to choose a function space. Since our theory for Hodge decompositions as well as for spin integral equations is set in Hilbert spaces, we choose $L_2(\partial D)$.

Definition 9.6.2 (Boundary Γk operator). Consider the Hardy space splitting $L_2(\partial D) = E_k^+L_2 \oplus E_k^-L_2$ on a strongly Lipschitz domain. Define the operator $\Gamma_k$ by $\Gamma_kf := g^+ + g^-$, where $f = E_k^+f + E_k^-f$, $F^\pm$ denote the Cauchy integrals of $f$ in $D^\pm$ so that $E_k^\pm f = F^\pm|_{\partial D}$, and $g^\pm = G^\pm|_{\partial D} \in E_k^\pm L_2$ are such that their Cauchy integrals equal $G^\pm = (\nabla + ike_0)\wedge F^\pm$ in $D^\pm$. The domain of $\Gamma_k$ is the set of $f$ for which such $g^\pm$ exist.

In a series of lemmas, we derive below a more concrete expression for this unbounded operator $\Gamma_k$ as a tangential differential operator on $L_2(\partial D)$. It turns out that $\Gamma_k$ acts by exterior differentiation along $\partial D$ on tangential fields and by interior differentiation along $\partial D$ on normal fields, modulo zero order terms determined by $k$.

Definition 9.6.3 (Tangential derivatives). Consider the Lipschitz boundary $M = \partial D$, which is a Lipschitz manifold in the sense that the transition maps, as in Section 6.1, are Lipschitz regular. As in Definitions 11.2, 11.2.6 and 12.1.1, extending to Lipschitz regularity as in Section 10.2, we define tangential exterior and interior derivative operators $d'$ and $\delta'$ in $L_2(M; \wedge M)$, such that $(d')^* = -\delta'$. In the notation of this chapter, complexifying the bundle $\wedge M$ to $\wedge M_c$, we have
$$
N^+L_2 = \{f_1 + e_0\wedge f_2 \,;\, f_1, f_2 \in L_2(M; \wedge M_c)\},
$$
and we extend $d'$ and $\delta'$ to operators in $N^+L_2$ acting as
$$
d'f := d'f_1 - e_0\wedge(d'f_2), \qquad \delta'f := \delta'f_1 - e_0\wedge(\delta'f_2),
$$
on $f = f_1 + e_0\wedge f_2$, with $f_1, f_2 \in L_2(M; \wedge M)$.

The reader is kindly advised to consult the relevant sections of the following chapters, as indicated in Definition 9.6.3, for further details. Note that the minus sign in the actions on $N^+L_2$ occurs because the time-like $e_0$ and the formally space-like tangential $\nabla'$ anticommute.
Lemma 9.6.4 ($E_k^\pm L_2$ to $N^+L_2$). If $f \in E_k^+L_2 \cap D(\Gamma_k)$, then $N^+f \in D(d')$ and
$$
d'(N^+f) + ike_0\wedge(N^+f) = N^+(\Gamma_kf).
$$
The same holds for $f \in E_k^-L_2 \cap D(\Gamma_k)$.

Proof. Let $f = F|_{\partial D}$, where $F$ is the Cauchy extension of $f$. Write $f = f_1 + e_0\wedge f_2$ and $F = F_1 + e_0\wedge F_2$, where $f_j$ and $F_j$ are space-like fields, $j = 1, 2$. Generalizing Exercise 11.2.3, with methods as in Lemma 10.2.4, to Lipschitz regular hypersurfaces, we have $N^+f_j = \rho^*F_j$, where $\rho : \partial D \to V$ denotes the embedding of $\partial D$ into $V$. The commutation theorem shows that $d'\rho^*F_j = \rho^*(dF_j)$, giving
$$
d'(N^+f) = d'\rho^*F_1 - e_0\wedge d'\rho^*F_2 = N^+(dF).
$$
This proves the first statement, since $e_0\wedge N^+f = N^+(e_0\wedge f)$, and the proof for $E_k^-L_2$ is similar. □

Using Hodge star dualities, we next derive the corresponding result for the normal part. This uses left Clifford multiplication by $\nu$, which is an isometry between $N^-L_2$ and $N^+L_2$.

Lemma 9.6.5 ($E_k^\pm L_2$ to $N^-L_2$). If $f \in E_k^+L_2 \cap D(\Gamma_k)$, then $\nu\lrcorner f = \nu N^-f \in D(\delta')$ and
$$
\delta'(\nu\lrcorner f) + ike_0\lrcorner(\nu\lrcorner f) = \nu\lrcorner(\Gamma_kf).
$$
The same holds for $f \in E_k^-L_2 \cap D(\Gamma_k)$.

Proof. Using nabla calculus with $\nabla_k := \nabla + ike_0$, given $f = F|_{\partial D} \in E_k^+L_2 \cap D(\Gamma_k)$, we write for example $DF + ike_0F = 0$ as $\nabla_k F = \nabla_k \mathbin{\triangle} F = 0$. Extending Proposition 8.1.13, such solutions form a right Clifford module, so $G = F \mathbin{\triangle} w = F\lrcorner w$, writing $w = e_{012\cdots n}$ for the spacetime volume element, with dual volume element $w^* = -w$, is also a solution to $\nabla_kG = 0$ in $D^+$. Moreover,
$$
*(\nabla_k\wedge G) = -w\llcorner(\nabla_k\wedge G) = (-w\llcorner G)\llcorner\nabla_k = F\llcorner\nabla_k = \nabla_k\lrcorner F,
$$
making use of the algebra from Section 2.6. By Lemma 9.6.4 applied to $G$, we have $N^+(\nabla_k\wedge G)|_{\partial D} = \nabla'_k\wedge(N^+g)$ with $g = G|_{\partial D}$, writing $d'$ formally with nabla calculus using $\nabla'_k = \nabla' + ike_0$ along $\partial D$. The spacetime Hodge dual of the left-hand side is $*(N^+(\nabla_k\wedge G)|_{\partial D}) = N^-(\nabla_k\lrcorner F)|_{\partial D}$. For the right-hand side, we note for $h := \nabla'_k\wedge(N^+g) \in N^+L_2$ that $*h = -(\nu w')h = -\nu(w'h) = -\nu(w'\llcorner h)$, where $w' := \nu\lrcorner w$. We used here Corollary 3.1.10 and $w^2 = -1$.
We get
$$
*(\nabla'_k\wedge(N^+g)) = -\nu((w'\llcorner N^+g)\llcorner\nabla'_k) = \nu((\nu\lrcorner f)\llcorner\nabla'_k) = \nu\,\nabla'_k\lrcorner(f\llcorner\nu).
$$
Note that the first step uses a nonsmooth extension of Exercise 11.2.7. Reversing these two equations, multiplying them from the left by $\nu$, and equating them yields
$$
\nu\lrcorner(\nabla_k\lrcorner F)|_{\partial D} = \nu(\nabla'_k\lrcorner(f\llcorner\nu))\nu = \nabla'_k\lrcorner(\hat f\llcorner\nu) = -\nabla'_k\lrcorner(\nu\lrcorner f).
$$
In the second step we used that $\nu h = \hat h\nu$ whenever $h \in N^+L_2$, and in the last step we applied the commutation relation from Proposition 2.6.3. This proves the lemma for $E_k^+L_2$, since $\Gamma_kf = -(\nabla_k\lrcorner F)|_{\partial D}$. The proof for $E_k^-L_2$ is similar. □

We next show the converses of Lemmas 9.6.4 and 9.6.5.

Lemma 9.6.6 ($N^\pm L_2$ to $E_k^\pm L_2$). If $f \in N^+L_2$ and $f \in D(d')$, then $f \in D(\Gamma_k)$ with
$$
\Gamma_kf = d'f + ike_0\wedge f.
$$
Similarly, if $f \in N^-L_2$ and $\nu\lrcorner f \in D(\delta')$, then $f \in D(\Gamma_k)$ with
$$
\Gamma_kf = \nu\wedge(\delta' + ike_0\lrcorner)(\nu\lrcorner f).
$$

Proof. Let $f \in N^+L_2$ and define Cauchy extensions
$$
F^\pm(x) = \pm\int_{\partial D} \Psi_k^-(y-x)\nu(y)f(y)\, dy, \qquad x \in D^\pm.
$$
Differentiating under the integral sign, we have
$$
\nabla_k\wedge F(x) = \mp\int_{\partial D} \Psi_k^-(y-x)(\dot\nabla_k\wedge\nu(y)\wedge f(y))\, dy
= \pm\int_{\partial D} \Psi_k^-(\dot y-x)(\nabla_{-k}\wedge\nu(y)\wedge f(y))\, dy,
$$
where we have used the algebraic anticommutation relation
$$
\nabla_k\wedge(\Psi_k^-h) = \langle\nabla_k, \Psi_k^-\rangle h - \Psi_k^-(\nabla_k\wedge h),
$$
and the first term vanishes, since $\Psi_k^+(\cdot-y) = -\Psi_k^-(y-\cdot)$ is a fundamental solution to $D + ike_0$. Aiming to apply a nonsmooth extension of Exercise 11.2.7, we form the inner product with a fixed multivector $w \in \wedge W_c$ and obtain
$$
\langle w, \nabla_k\wedge F(x)\rangle = \pm\int_{\partial D} \langle \nu(y)\lrcorner(\nabla_k\lrcorner(\Psi_k^+(\dot y-x)w)), f(y)\rangle\, dy.
$$
We choose to use the complex bilinear pairing on $\wedge W_c$, but this is not important. By Lemma 9.6.5, we have $\nu(y)\lrcorner(\nabla_k\lrcorner(\Psi_k^+(\dot y-x)w)) = -(\delta' + ike_0\lrcorner)(\nu(y)\lrcorner(\Psi_k^+(y-x)w))$. Note that $F$ in the proof of Lemma 9.6.5 need not solve a Dirac equation for such a trace result to be true. Duality yields
$$
\langle w, \nabla_k\wedge F(x)\rangle = \pm\int_{\partial D} \langle w, \Psi_k^-(y-x)\nu(y)(d'f + ike_0\wedge f)\rangle\, dy.
$$
Since $w$ is arbitrary, this proves the lemma for $N^+L_2$.

The proof for $f \in N^-L_2$ is similar. We calculate
$$
\nabla_k\wedge F(x) = -\nabla_k\lrcorner F(x) = \mp\int_{\partial D} \Psi_k^-(\dot y-x)(\nabla_{-k}\lrcorner(\nu(y)\lrcorner f(y)))\, dy.
$$
Pairing with $w$ gives
$$
\begin{aligned}
\langle w, \nabla_k\wedge F(x)\rangle
&= \mp\int_{\partial D} \langle \nabla_k\wedge(\Psi_k^+(\dot y-x)w), \nu(y)\lrcorner f(y)\rangle\, dy\\
&= \mp\int_{\partial D} \langle (d' + ike_0\wedge)N^+(\Psi_k^+(\dot y-x)w), \nu(y)\lrcorner f(y)\rangle\, dy\\
&= \pm\int_{\partial D} \langle w, \Psi_k^-(y-x)\nu(y)(\nu(y)\wedge((\delta' + ike_0\lrcorner)(\nu(y)\lrcorner f(y))))\rangle\, dy.
\end{aligned}
$$
The second equality uses that $\nu\lrcorner f \in N^+L_2$ and Lemma 9.6.4. Since $w$ is arbitrary, this proves the lemma for $N^-L_2$. □

Summarizing the above results, we obtain the following concrete expression for $\Gamma_k$. Given Lemmas 9.6.4, 9.6.5, and 9.6.6, the proof is straightforward.

Proposition 9.6.7. The operator $\Gamma_k$ is a nilpotent operator in $L_2(\partial D)$ in the sense of Definition 10.1.1. Its domain equals
$$
D(\Gamma_k) = \{f \in L_2(\partial D) \,;\, N^+f \in D(d') \text{ and } \nu\lrcorner f \in D(\delta')\},
$$
and
$$
\Gamma_kf = (d' + ike_0\wedge)N^+f + \nu\wedge(\delta' + ike_0\lrcorner)(\nu\lrcorner f), \qquad f \in D(\Gamma_k).
$$
The operator $\Gamma_k$ commutes with $E_k$ and with $N$.

Having uncovered this nilpotent operator $\Gamma_k$, we now investigate the Hodge splitting of $L_2(\partial D)$ that it induces. We need a Hermitian inner product on $L_2(\partial D)$, and we choose
$$
\langle f, g\rangle = \int_{\partial D} \langle f(x), g(x)\rangle_V\, dx.
$$

Proposition 9.6.8 (Boundary Hodge decomposition). When $k \ne 0$, the nilpotent operator $\Gamma_k$ induces an exact Hodge splitting
$$
L_2(\partial D) = \mathsf{R}(\Gamma_k) \oplus \mathsf{R}(\Gamma_k^*),
$$
where the ranges $\mathsf{R}(\Gamma_k) = \mathsf{N}(\Gamma_k)$ and $\mathsf{R}(\Gamma_k^*) = \mathsf{N}(\Gamma_k^*)$ are closed. When $k = 0$, the ranges are still closed, but the finite-dimensional cohomology space $\mathsf{H}(\Gamma_k) = \mathsf{N}(\Gamma_k) \cap \mathsf{N}(\Gamma_k^*)$ will be nontrivial.
Proof. Proposition 10.1.2 shows that $\Gamma = \Gamma_k$ induces an orthogonal splitting
$$
L_2(\partial D) = \mathsf{R}(\Gamma_k) \oplus \mathsf{H}(\Gamma_k) \oplus \mathsf{R}(\Gamma_k^*).
$$
When $D$ is smooth and $k = 0$, it follows from Propositions 12.1.3 and 10.1.6 that the ranges are closed and that the cohomology space is finite-dimensional for $\Gamma = \Gamma_k$. Adapting the methods from Theorem 10.3.1 to the manifold setting, this result can be extended to the case that $D$ is merely a Lipschitz domain. However, on nonsmooth boundaries $\partial D$ we do not have $D(d') \cap D(\delta') = H^1(\partial D)$, but still $D(d') \cap D(\delta')$ is compactly embedded in $L_2(\partial D)$.

Assume next that $k \ne 0$ and define the nilpotent operator
$$
\mu f = ike_0\wedge N^+f + \nu\wedge(ike_0\lrcorner(\nu\lrcorner f)),
$$
so that $\Gamma_k = \Gamma + \mu$. We compute
$$
\mu^*f = (ik)^c\, e_0\lrcorner N^+f + \nu\wedge((ik)^c\, e_0\wedge(\nu\lrcorner f)).
$$
As in Example 10.1.7, we note that $\mathsf{N}(\mu) \cap \mathsf{N}(\mu^*) = \{0\}$. Consider the abstract Dirac operators
$$
\Gamma_k + \Gamma_k^* = (\Gamma + \Gamma^*) + (\mu + \mu^*) : D(\Gamma) \cap D(\Gamma^*) \to L_2.
$$
Since $\Gamma + \Gamma^* : D(\Gamma) \cap D(\Gamma^*) \to L_2$ is a Fredholm operator and $\mu + \mu^* : D(\Gamma) \cap D(\Gamma^*) \to L_2$ is a compact operator, it follows from Proposition 10.1.6 that $\Gamma_k + \Gamma_k^* : D(\Gamma) \cap D(\Gamma^*) \to L_2$ is a Fredholm operator. Thus the ranges are closed. To prove that the cohomology space $\mathsf{N}(\Gamma_k) \cap \mathsf{N}(\Gamma_k^*)$ in fact is trivial, we note that $\Gamma\mu^* + \mu^*\Gamma = 0$. Thus, if $\Gamma f + \mu f = 0 = \Gamma^*f + \mu^*f$, then
$$
0 = \langle f, (\Gamma\mu^* + \mu^*\Gamma)f\rangle_V = \langle\Gamma^*f, \mu^*f\rangle_V + \langle\mu f, \Gamma f\rangle_V = -\|\mu^*f\|^2 - \|\mu f\|^2.
$$
This shows that $f = 0$ and completes the proof. □
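The splitting mechanism of Proposition 9.6.8 is already visible for a nilpotent matrix. The following numpy sketch (a toy nilpotent operator, not a discretization of $\Gamma_k$) computes the orthogonal projections onto $\mathsf{R}(G)$, $\mathsf{R}(G^*)$ and the cohomology $\mathsf{H} = \mathsf{N}(G) \cap \mathsf{N}(G^*)$, and verifies the orthogonal splitting:

```python
import numpy as np

# A toy nilpotent operator G with G @ G = 0 on R^5:
# G maps e0 -> e1 and e2 -> 2*e3, and annihilates e1, e3, e4.
G = np.zeros((5, 5))
G[1, 0] = 1.0
G[3, 2] = 2.0
assert np.allclose(G @ G, 0)

def proj_range(A, tol=1e-12):
    """Orthogonal projection onto the range of A, via the SVD."""
    U, s, _ = np.linalg.svd(A)
    r = int((s > tol).sum())
    return U[:, :r] @ U[:, :r].T

P_ran, P_costar = proj_range(G), proj_range(G.T)   # onto R(G), R(G*)
P_H = np.eye(5) - P_ran - P_costar                 # onto H = N(G) ∩ N(G*)

# Orthogonal Hodge splitting: R(G) ⊕ H ⊕ R(G*) is the whole space.
assert np.allclose(P_ran @ P_costar, 0)
assert np.allclose(P_ran + P_costar + P_H, np.eye(5))
# N(G) = R(G) ⊕ H: G annihilates both summands; here H is spanned by e4.
assert np.allclose(G @ (P_ran + P_H), 0)
assert np.isclose(P_H[4, 4], 1.0)
print("Hodge splitting of a nilpotent operator verified")
```

In this toy model the cohomology is one-dimensional; the content of Proposition 9.6.8 is that for $\Gamma_k$ with $k \ne 0$, the zero order perturbation $\mu$ kills the cohomology entirely.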
Exercise 9.6.9. Roughly speaking, Γ∗k acts as interior derivative on N + L2 and as exterior derivative on N − L2 . Write down the details of this, and show that Γ∗k commutes with N , but not with Ek in general.
9.7 Maxwell Scattering
In this section, we demonstrate how classical Helmholtz and Maxwell boundary value and transmission problems can be solved using the operators Ek , N , and Γk . Recall that Ek is the reflection operator for the Hardy space splitting from Theorem 9.3.9, that N is the reflection operator for the splitting into normal and
tangential fields from Example 9.4.8, and that $\Gamma_k$ is the nilpotent operator for the boundary Hodge decomposition from Proposition 9.6.8. The basic operator algebra is that
$$
E_k^2 = N^2 = I, \qquad \Gamma_k^2 = 0, \qquad E_kN \ne NE_k, \qquad \Gamma_kE_k = E_k\Gamma_k, \qquad \Gamma_kN = N\Gamma_k.
$$
The Ek N BVPs are essentially well posed, so by Proposition 9.4.5, roughly speaking Ek and N are closer to anticommuting than to commuting.
Figure 9.3: A rough sketch of the splittings involved in a Dirac BVP. The splitting into $E_k^\pm\mathcal{H}$ encodes the Dirac equation. The splitting into $N^\pm\mathcal{H}$ encodes the boundary conditions. The circle indicates the boundary Hodge splitting, with the interior of the circle illustrating $\mathsf{N}(\Gamma_k)$, where the Maxwell BVP takes place.

We note that the operator $S$ from Example 9.4.9 does not commute with $\Gamma_k$, but we will not need this, since we use only $S$ as a computational tool for solving an EkN BVP.
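How far the two splittings are from being transversal is what controls well-posedness, and this is easy to visualize in the plane. In the following numpy sketch (a two-dimensional toy model, not a discretization of $E_k$ and $N$), two reflections across lines anticommute exactly when the lines meet at angle $\pi/4$, and the restricted projection $N^+ : E^-\mathcal{H} \to N^+\mathcal{H}$ degenerates as the reflections approach commuting ones:

```python
import numpy as np

def refl(alpha):
    """Reflection across the line in R^2 making angle alpha with the x-axis."""
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return np.array([[c, s], [s, -c]])

N = refl(0.0)                 # reflection across the x-axis
Np = (np.eye(2) + N) / 2      # N^+ : orthogonal projection onto the x-axis

for alpha, well_posed in [(np.pi / 4, True), (0.05, False)]:
    E = refl(alpha)
    Em = (np.eye(2) - E) / 2  # E^- : projection onto the normal of the line
    v = np.array([-np.sin(alpha), np.cos(alpha)])  # unit vector spanning E^- H
    assert np.allclose(Em @ v, v)
    # The restricted projection N^+ : E^- H -> N^+ H acts as multiplication
    # by sin(alpha) up to sign; the norm of its inverse blows up as alpha -> 0.
    inv_norm = 1 / abs((Np @ v)[0])
    if well_posed:
        assert inv_norm < 1.5
    else:
        assert inv_norm > 10

# At alpha = pi/4 the two reflections anticommute exactly:
E45 = refl(np.pi / 4)
assert np.allclose(E45 @ N, -N @ E45)
print("transversality of the two splittings controls well-posedness")
```

Anticommuting reflections correspond to subspaces meeting at the maximal angle, which is the finite-dimensional shadow of "Ek and N are closer to anticommuting than to commuting".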
We consider as an example the exterior Dirac BVP (9.23) with prescribed tangential part. The other three EkN BVPs can be analyzed similarly. Using the operator $\Gamma_k$, we have three relevant $L_2(\partial D; \wedge W_c)$-based function spaces, namely
$$
\mathcal{H} = L_2, \qquad \mathcal{H} = D(\Gamma), \qquad \text{and} \qquad \mathcal{H} = \mathsf{N}(\Gamma_k).
$$
Note that $D(\Gamma) = D(\Gamma_k)$ is a dense subspace of $L_2$, which does not depend on $k$, although the equivalent norms $\|f\|_{D(\Gamma_k)} = (\|f\|_{L_2}^2 + \|\Gamma_kf\|_{L_2}^2)^{1/2}$ do depend on $k$. Further note that $\mathsf{N}(\Gamma_k)$ is a closed subspace of $L_2$, as well as of $D(\Gamma)$, which is roughly speaking half of the latter spaces by Hodge decomposition. Since $E_k$ and $N$ commute with $\Gamma_k$, they act as bounded linear operators in each of the three function spaces $\mathcal{H}$, and in each case we see that $E_k^2 = N^2 = I$. Therefore we can consider the BVP (9.23), expressed as the restricted projection $N^+ : E_k^-\mathcal{H} \to N^+\mathcal{H}$, in each of the three function spaces $\mathcal{H}$.

Our aim in this section is to solve BVPs in $\mathcal{H} = \mathsf{N}(\Gamma_k)$. This, however, is a function space defined by a differential constraint, which we may want to avoid numerically. For this reason, we prefer to enlarge the function space to either $L_2$ or to the function space $D(\Gamma)$, in which roughly speaking half of the functions have Sobolev regularity $H^1$, since $\Gamma_k$ is nilpotent, and to solve the integral equation in such a space.

Proposition 9.7.1 (Constrained Dirac BVPs). Consider the exterior Dirac BVP $N^+ : E_k^-L_2 \to N^+L_2$ with prescribed tangential part at $\partial D$, and assume that $\operatorname{Im} k \ge 0$ and $k \ne 0$, so we have $L_2$ well-posedness of this BVP by Theorem 9.5.2. Then the restricted map $N^+ : E_k^-\mathcal{H} \to N^+\mathcal{H}$ is also invertible for each of the function spaces $\mathcal{H} = D(\Gamma)$ and $\mathcal{H} = \mathsf{N}(\Gamma_k)$. For the solution $f \in E_k^-L_2$ to the BVP with datum $g = N^+f \in L_2$, the following holds. If $g \in D(\Gamma)$, then $f \in D(\Gamma)$. If $\Gamma_kg = 0$, then $\Gamma_kf = 0$. If $\Gamma_kg = 0$ and $g$ is a $j$-vector field, then $f$ is a $j$-vector field.

Note that if $g \in N^+L_2$ is a $j$-vector field, then in general the solution $f \in E_k^-L_2$ to the BVP will not be a homogeneous $j$-vector field. The constraint $\Gamma_kg = 0$ is crucial.
Proof. (i) Lower bounds for
$$
N^+ : E_k^-D(\Gamma) \to N^+D(\Gamma)
\tag{9.26}
$$
hold, since
$$
\|f\|_{D(\Gamma)} \approx \|f\|_{L_2} + \|\Gamma_kf\|_{L_2} \lesssim \|N^+f\|_{L_2} + \|N^+\Gamma_kf\|_{L_2} = \|N^+f\|_{L_2} + \|\Gamma_k(N^+f)\|_{L_2} \approx \|N^+f\|_{D(\Gamma)}.
$$
To show surjectivity, we can proceed as follows. First apply Lemma 9.4.12 with $A = E_k$, $B = N$, $\mathcal{H} = D(\Gamma)$, and perturb $k$ into $\operatorname{Im} k > 0$. This shows that it suffices to show surjectivity for $\operatorname{Im} k > 0$. Then we use that $N$ and $E_k$ commute with $\Gamma_k$, and similarly to the above, we derive lower bounds for $\lambda I + E_kN : D(\Gamma) \to D(\Gamma)$ when $|\lambda_1| > L|\lambda_2|$, from Theorem 9.5.1. Therefore the method of continuity shows that $I \pm E_kN$ are Fredholm operators of index zero on $D(\Gamma)$. The argument in Proposition 9.4.10 shows that all four EkN BVPs are injective when $\operatorname{Im} k > 0$, and so it follows from (9.19) that (9.26) is surjective.

If $f \in E_k^-L_2$ solves the BVP with datum $g \in D(\Gamma)$, then let $\tilde f \in E_k^-D(\Gamma)$ be the solution to the well-posed BVP described by (9.26). By uniqueness of the solutions to the $L_2$ BVP, we conclude that $f = \tilde f \in D(\Gamma)$.

(ii) Next consider $N^+ : E_k^-\mathsf{N}(\Gamma_k) \to N^+\mathsf{N}(\Gamma_k)$. This map is clearly bounded and injective with a lower bound. To show surjectivity, let $g \in N^+\mathsf{N}(\Gamma_k) \subset N^+D(\Gamma)$. By (i) there exists $f \in E_k^-D(\Gamma)$ such that $N^+f = g$. Since $N^+(\Gamma_kf) = \Gamma_k(N^+f) = \Gamma_kg = 0$, it follows from $L_2$ well-posedness that $f \in E_k^-\mathsf{N}(\Gamma_k)$.

If furthermore $g \in \mathsf{N}(\Gamma_k)$ is a $j$-vector field, then the solution $f$ satisfies $\Gamma_kf = 0$, and we conclude from Lemma 9.6.1 that each homogeneous component function $f_m$ belongs to $E_k^-L_2$. Since $N^+f_m = g_m = 0$ if $m \ne j$, it follows in this case that $f_m = 0$ by uniqueness of solutions to the BVP. Therefore $f = f_j$ is a $j$-vector field. □

Example 9.7.2 (Helmholtz BVPs). In Example 9.3.7 we saw how the Helmholtz equation for a scalar acoustic wave $u$ is equivalent to the vector field $F = \nabla u + ike_0u$ solving the Dirac equation $DF + ike_0F = 0$.

(i) The Neumann BVP for $u$ amounts to specifying the normal part $N^-f = (\partial_\nu u)\nu$ of $f = F|_{\partial D}$. In this case, by Proposition 9.6.7, the condition $\Gamma_k(N^-f) = 0$ is automatic for a vector field $f$, since $\wedge^{-1}W = \{0\}$.
Therefore, solving the Dirac BVP for $F$ with this prescribed datum on $\partial D$ will produce a vector field $F$ according to Proposition 9.7.1. From Proposition 9.6.8 it follows that $F \in \mathsf{R}(\Gamma_k)$, which means that there exists a scalar function $u$ such that $F = \nabla u + ike_0u$. In particular, $u$ solves $\Delta u + k^2u = 0$ with prescribed Neumann datum.

(ii) The Dirichlet BVP for $u$ amounts to specifying the tangential part $N^+f = \nabla'u + ike_0u$ of $f = F|_{\partial D}$. For a given tangential vector field $g = g_1 + e_0g_0$, where $g_1$ is a space-like vector field and $g_0$ is a scalar function, we note that
$$
\Gamma_kg = \nabla'\wedge g_1 + e_0\wedge(-\nabla'g_0 + ikg_1),
$$
so $g \in \mathsf{N}(\Gamma_k)$ amounts to $ikg_1 = \nabla'g_0$.
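Two facts used here can be sanity-checked numerically by finite differences: that a plane wave with wave number $k$ solves the Helmholtz equation, and that the Dirichlet datum of $F = \nabla u + ike_0u$, with components $g_1 = \nabla'u$ and $g_0 = iku$, automatically satisfies $ikg_1 = \nabla'g_0$. A numpy sketch (the plane wave, step size, and sample point are arbitrary choices, and the boundary is taken to be the flat line $y = 0$ for simplicity):

```python
import numpy as np

k, phi, h = 18.0, np.pi / 7, 1e-4
u = lambda x, y: np.exp(1j * k * (x * np.cos(phi) + y * np.sin(phi)))

# Five-point check that Delta u + k^2 u = 0 at an arbitrary point:
x0, y0 = 0.3, -0.2
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / h**2
assert abs(lap + k**2 * u(x0, y0)) < 1e-2 * k**2

# On the flat boundary y = 0, the Dirichlet datum has components
# g1 = d/dx u(x, 0) (tangential derivative) and g0 = i k u(x, 0),
# and the constraint i k g1 = d/dx g0 holds identically:
dudx = (u(x0 + h, 0.0) - u(x0 - h, 0.0)) / (2 * h)
g0 = lambda x: 1j * k * u(x, 0.0)
dg0dx = (g0(x0 + h) - g0(x0 - h)) / (2 * h)
assert abs(1j * k * dudx - dg0dx) < 1e-6 * k**2
print("Helmholtz equation and tangential constraint verified")
```

The second check illustrates why data coming from an actual acoustic field always lie in $\mathsf{N}(\Gamma_k)$; for arbitrary tangential data the constraint must be imposed.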
Therefore, solving the Dirac BVP for $F$ with such a tangential vector field $g \in \mathsf{N}(\Gamma_k)$ on $\partial D$ as datum will produce a vector field of the form $F = \nabla u + ike_0u$ by Proposition 9.7.1, where $u$ solves the Helmholtz Dirichlet problem.

Example 9.7.3 (Maxwell BVPs). In Example 9.3.8 we saw how Maxwell's equations for an electromagnetic wave $F$ are equivalent to the bivector field $F = \epsilon_0^{1/2}e_0\wedge E + \mu_0^{-1/2}B$ solving the Dirac equation $DF + ike_0F = 0$. We now assume that the interior domain $D^+ \subset \mathbf{R}^3$ is a perfect electric conductor, so that $E = B = 0$ in $D^+$. If Maxwell's equations are to hold in the distributional sense in all of $\mathbf{R}^3$, by the vanishing right-hand sides in the Faraday and magnetic Gauss laws, we need $N^+f = 0$ for $f = F|_{\partial D}$. If the electromagnetic wave in $D^-$ is the superposition of an incoming wave $f_0$ and a reflected wave $f_1 \in E_k^-L_2$, then $f_1$ needs to solve the BVP where $N^+f_1$ is specified to cancel the datum $N^+f_0$. Note that for the classical vector fields $E$ and $*B$, the tangential part $N^+f$ corresponds to the tangential part of $E$ and the normal part of $*B$. For a given tangential bivector field $g = e_0\wedge g_1 + g_2$, where $g_1$ is a space-like tangential vector field and $g_2$ is a space-like tangential bivector field, we note that
$$
\Gamma_kg = e_0\wedge(-\nabla'\wedge g_1 + ikg_2) + \nabla'\wedge g_2,
$$
so $g \in \mathsf{N}(\Gamma_k)$ amounts to $ikg_2 = \nabla'\wedge g_1$. In terms of the electric and magnetic fields, the tangential part of $B$ is given by the tangential curl of the tangential part of $E$. From Proposition 9.7.1 it follows that if we solve the Dirac BVP (9.23) with such a tangential bivector field $g \in \mathsf{N}(\Gamma_k)$ on $\partial D$ as datum, then the solution $f$ will indeed be a bivector field representing an electromagnetic field.

Example 9.7.4 (Maxwell transmission problems). When an electromagnetic wave propagates in a material and not in vacuum, we account for the material's response to the field by replacing $\epsilon_0$ and $\mu_0$ by permittivity and permeability constants $\epsilon$ and $\mu$ depending on the material properties.
These may in general be variable as well as matrices, but we limit ourselves to homogeneous and isotropic materials, for which $\epsilon$ and $\mu$ are constant complex numbers. Similarly to (9.3), we define the electromagnetic field
$$
F := \epsilon^{1/2}e_0\wedge E + \mu^{-1/2}B.
$$
Maxwell's equations in such a material read $DF + ike_0F = 0$, with $k = \omega\sqrt{\epsilon\mu}$. Consider the following transmission problem. We assume that the exterior domain $D^-$ consists of a material with electromagnetic properties described by $\epsilon_1, \mu_1$, giving a wave number $k_1 := \omega\sqrt{\epsilon_1\mu_1}$, and that the interior domain $D^+$ consists of a material with electromagnetic properties described by $\epsilon_2, \mu_2$, giving a wave number $k_2 := \omega\sqrt{\epsilon_2\mu_2}$. We obtain a transmission problem of the form (9.25) for a pair of electromagnetic fields $F^\pm : D^\pm \to \wedge^2W_c$.
Figure 9.4: TM magnetic waves U = B12. ∂Ω parametrized by sin(πs) exp(i(s − 1/2)π/2), 0 ≤ s ≤ 1. (a) Incoming wave U0 = exp(18i(x + y)/√2) from south-west. (b) Wave reflected by a perfect electric conductor, computed with the spin integral equation in Example 9.5.5. (c) Waves reflected into Ω⁻ and transmitted into a dielectric object Ω⁺, computed with a tweaked version of the Dirac integral equation in Example 9.5.6. Wave numbers k1 = 18 and k2 = 27 as in Example 9.7.4. (d) As in (c), but Ω⁺ is now a conducting object described by the Drude model and an imaginary wave number k2 = i18√1.1838. Here the wave decays exponentially into Ω⁺, and surface plasmon waves, excited by the corner singularity, appear near ∂Ω.

The jump condition Mf⁺ = f⁻ + g is found by returning to the original formulation of Maxwell's equations for E and B. For these to hold in the distributional sense across ∂D, Faraday's law and the magnetic Gauss law dictate that ν ∧ E and ν ∧ B do not jump across ∂D. Furthermore, assuming that we do not have any electric charges and currents except for those induced in the material described by ε and µ, the Ampère and Gauss laws require that ν ⌟ (µ⁻¹B) and ν ⌟ (εE) not jump across ∂D. Translating this to spacetime multivector algebra specifies the multiplier

M = √(µ2/µ1) N⁺T⁺ + √(ε1/ε2) N⁺T⁻ + √(µ1/µ2) N⁻T⁺ + √(ε2/ε1) N⁻T⁻,
Figure 9.5: Upper left (a) is the same as Figure 9.4(d), but scaled so that the peaks of the plasmon wave are visible. (b), (c), and (d) show log10 of the estimated absolute error for the three scattering computations. (d) indicates the numerical challenge in computing surface plasmon waves. Here the parameters hit the essential spectrum, where the integral equation fails to be Fredholm.

using the normal reflection operator N and the time reflection operator T. Note how the two commuting reflection operators N and T split the electromagnetic field into these four parts. With this formulation, and with the datum g being the boundary trace g = F0|∂D of an incoming electromagnetic wave F0 in D⁻, we can use the Dirac integral equation proposed in Example 9.5.6 to compute the transmitted wave F⁺ in D⁺ and the reflected wave F⁻ in D⁻.

We end this chapter with some examples of how the integral equations from Examples 9.5.5 and 9.5.6 perform numerically when applied to scattering problems for electromagnetic fields as in Examples 9.7.3 and 9.7.4. Results are shown in Figures 9.4 and 9.5. For simplicity, we consider a two-dimensional scattering problem in which the object represented by the domain D⁺ ⊂ R3 is a cylinder D⁺ = Ω⁺ × R
along the z-axis over the base Ω⁺ ⊂ R2 = [e12] in the xy-plane, and the field is transversal magnetic. This means that we assume that

F = F(x, y) = √ε e0 ∧ (E1(x, y)e1 + E2(x, y)e2) + (1/√µ) B12(x, y)e12.

In classical vector calculus notation, this means that E is parallel to R2 and the vector field ∗B is orthogonal to R2, which explains the terminology. Maxwell's equations, after dividing F by √ε, read

(∇ + ike0)(e0 E + cB) = (c∇B − ikE) + e0(−∇E + iωB) = 0,

where ∇ = e1∂1 + e2∂2 is the nabla symbol for R2, and c := 1/√(εµ), so that k = ω/c. From the space- and time-like parts of this equation, we get

∆B = ∇(∇B) = (ik/c)∇E = (ik/c)iωB = −k²B,

that is, U := B12 solves the Helmholtz equation, and

E = (c/ik)∇B = (c/ik)(∇U)e12.

This means that Maxwell's equations for transversal magnetic fields F are equivalent to the Helmholtz equation for U = B12, and that E is obtained from the gradient ∇U by rotation and scaling. In particular, it follows that for transversal magnetic fields F, the tangential boundary datum N⁺f corresponds to the Neumann datum ∂νU for U.
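As a quick numerical sanity check of this reduction (our own sketch; the wave is the incoming wave of Figure 9.4 and the test point is an arbitrary choice), the plane wave U0 = exp(18i(x + y)/√2) indeed satisfies the Helmholtz equation ∆U + k²U = 0 with k = 18:

```python
import numpy as np

k = 18.0                      # wave number k1 = 18 from Figure 9.4
h = 1e-3                      # finite-difference step

def U0(x, y):
    # incoming TM plane wave from the south-west, as in Figure 9.4(a)
    return np.exp(1j * k * (x + y) / np.sqrt(2))

x, y = 0.3, -0.2              # an arbitrary test point
# five-point finite-difference Laplacian
lap = (U0(x + h, y) + U0(x - h, y) + U0(x, y + h) + U0(x, y - h)
       - 4 * U0(x, y)) / h**2
residual = abs(lap + k**2 * U0(x, y))   # Helmholtz residual
print(residual)               # O(h^2) discretization error, small vs |k^2 U| = 324
```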
9.8 Comments and References
9.2 Building on the work of Michael Faraday and André-Marie Ampère, James Clerk Maxwell (1831–1879) collected and completed the system of equations governing electromagnetic theory in the early 1860s. His Treatise on Electricity and Magnetism was published in 1873. The equations that he obtained showed that electric and magnetic fields propagate at the speed of light, and they were relativistically correct decades before Einstein formulated relativity theory.

The fundamental equation of quantum mechanics, the Schrödinger equation from Example 6.3.6, was first discovered in 1925 and describes physics at small scales. The famous Stern–Gerlach experiment from 1922 showed that the intrinsic angular momentum of particles is quantized. The Pauli equation from 1927 is a modification of the Schrödinger equation that takes this spin phenomenon into account, but neither of these is the correct equation at high speeds; that is, they are not relativistically correct. The Klein–Gordon equation from 1926 is a relativistically correct version of the Schrödinger equation, but it does not incorporate spin. Paul Dirac finally succeeded in
1928 in finding the equation that is correct from the point of view of both quantum mechanics and relativity theory, as well as correctly describing spin-1/2 particles, which include all the elementary particles constituting ordinary matter. The classical derivation of the Dirac equation is to seek matrices γ0, γ1, γ2, γ3 by which one can factorize the Klein–Gordon equation into a first-order wave equation. This amounts to using a matrix representation of the spacetime Clifford algebra, something that the pioneers of quantum mechanics were unaware of. Starting from the 1960s there has been a renewed interest in Clifford's geometric algebra, where in particular David Hestenes [55], Hestenes and Sobczyk [57], and Hestenes [56] have advocated geometric algebra as the preferred mathematical framework for physics. In particular, [55] is a reference for using Clifford algebra to study Maxwell's and Dirac's equations. The formulations (9.4) and (9.5) of Maxwell's equations as △-Dirac wave equations go back to M. Riesz. A further reference for the use of multivectors in electromagnetic theory is Jancewicz [60]. A standard mathematics reference for the analysis of Dirac's equation is Thaller [93]. Further references on Dirac operators and spinors in physics include Benn and Tucker [19] and Hitchin [58].
9.3–9.6 The material covered in these sections, which aim to solve Maxwell BVPs using multivector calculus, builds on the author's PhD thesis and publications [8, 9, 7, 14, 10]. The first basic idea for solving boundary value problems for Maxwell's equations is to embed them into a Dirac equation as in Example 9.3.8. This was first used by McIntosh and M. Mitrea in [67] in connection with BVPs on Lipschitz domains. The second basic idea is to formulate Dirac boundary value problems in terms of Hardy projections Ek± and projections N± encoding boundary conditions, and to show that these subspaces are transversal. This was first worked out by Axelsson, Grognard, Hogan, and McIntosh [11]. The third main idea is to extract a Maxwell solution from the Dirac solution as in Proposition 9.7.1, using the Hodge decomposition on the boundary defined by the operator Γk from Section 9.6. This was worked out in detail in [9]. We have chosen to use the spacetime formulation, but as in Propositions 9.1.5 and 9.1.6, we can equally well use a △V formulation in which the Dirac equation reads DF = ikF for F : D → △Vc. The main reason for our choice is that the operator Γk in Section 9.6 is difficult, although not impossible, to handle using the latter formalism. To minimize the algebra, the △Vc formulation was used in [84, 80], where the spin integral equation from Example 9.5.5 was first introduced.
A main philosophy in [9] and associated publications is to handle the boundary value problems by first-order operators. It is clear what this means for the differential operators: in (9.10) the second-order Helmholtz operator is factored by the first-order Dirac operator. But we also have corresponding factorizations of the boundary integral operators. In the abstract formulation of Proposition 9.4.5, the second-order cosine operator is factored by the first-order rotation operator in (9.20)–(9.21). We think of the rotation operators as being of first order, since they essentially are direct sums of two restricted projections as in (9.18)–(9.19). Similarly, the cosine operator can be seen to be essentially the direct sum of compositions of two restricted projections, hence of second order.

A reference for Bessel functions and Exercise 9.3.3 is Watson [95]. Standard references for the classical double and single layer potential integral equations are Colton and Kress [29, 30] and Kress [62]. The method to prove semi-Fredholm estimates of singular integral operators on Lipschitz domains as in Theorem 9.5.1, using Stokes's theorem and a smooth transversal vector field as in Exercise 6.1.8, goes back to Verchota [94]. The spectral estimates in Theorem 9.5.1 are from [7].

9.7 Figures 9.4 and 9.5 have been produced by Johan Helsing using the spin and tweaked Dirac integral equations. The state-of-the-art numerical algorithm that he uses, RCIP (recursively compressed inverse preconditioning), is described in [51], with applications to Helmholtz scattering in [52] and [53]. Since the Dirac equation is more general than the Helmholtz and Maxwell equations that it embeds, the spin and Dirac integral equations cannot quite compete with the most efficient Kleinman–Martin type integral equation [53, eq. 45] in terms of computational economy.
In terms of achievable numerical accuracy in the solution, however, the two systems of integral equations perform almost on par with each other. Moreover, the spin and Dirac integral equations apply equally well to Maxwell scattering in three dimensions, where the present understanding of integral formulations for Maxwell’s equations is incomplete.
Chapter 10
Hodge Decompositions

Prerequisites: The reader is assumed to have read Sections 7.5 and 7.6, which this chapter develops further. A good understanding of unbounded Hilbert space operators and the material in Section 6.4 is desirable. Some exposure to distribution theory and algebraic topology helps, but is not necessary.

Road map: We saw in Section 7.6 that every multivector field F on a domain D can be decomposed into three canonical parts F = ∇ ∧ U + H + ∇ ⌟ V, where ∇ ∧ H = 0 = ∇ ⌟ H, and H and the potential V are tangential on ∂D. This is the Hodge decomposition of the multivector field F, which amounts to a splitting of the space of all multivector fields F into two subspaces R(d) and R(δ) of exact and coexact fields respectively, and a small subspace Ck(D) of closed and coclosed fields, all with appropriate boundary conditions. Alternatively, we can instead demand that H and the potential U be normal on ∂D. At least four types of questions arise.

(i) Are the subspaces R(d) and R(δ) transversal, that is, do they intersect only at 0, and at a positive angle? This would mean that these subspaces give a splitting of the function space H that we consider, modulo Ck(D). In the case H = L2(D), the only case we consider here, these subspaces are in fact orthogonal, but more generally this problem amounts to estimating singular integral operators realizing the Hodge projections onto these subspaces. We touch on this problem in Proposition 10.1.5 and Example 10.1.8.

(ii) Are the ranges R(d) and R(δ) closed subspaces? This is a main problem that we address in this chapter, and we show that this is indeed the case for
bounded domains D. See Section 10.3 and Example 10.1.8. We saw in Section 7.6 that such closedness yields well-posedness results for boundary value problems.

(iii) What properties, in particular regularity, of the potentials U and V do we have? Note that the parts ∇ ∧ U and ∇ ⌟ V are uniquely determined by F, but not so the potentials U and V. We show in Section 10.4 that the most obvious choice, the Hodge potentials of minimal L2 norm, are not always the best choice. Even more surprising is the fact that there exist Bogovskiĭ potentials V for which we have full Dirichlet boundary conditions V|∂D = 0.

(iv) Is the cohomology space Ck(D) finite-dimensional? More exactly, how do we go about calculating the dimension of this subspace for a given domain D? As compared to the first three questions, which belong to analysis, this fourth question belongs to algebraic topology and is addressed in Section 10.6.

In the analysis of Hodge decompositions on domains, the regularity and curvature of the boundary play an important role through Weitzenböck formulas. Hodge decompositions can also be considered on manifolds, in which case the curvature of the manifold in the interior of the domain also enters the picture. This will be a central idea in Chapters 11 and 12. In the present chapter we avoid the technicalities of vector bundles and limit the discussion to domains in affine spaces.

Highlights:

• Compactness and Hodge decomposition: 10.1.6

• Natural boundary conditions for d and δ: 10.2.3

• Weitzenböck boundary curvature: 10.3.6

• Bogovskiĭ and Poincaré potentials: 10.4.3

• Čech computation of Betti numbers: 10.6.5
10.1 Nilpotent operators
In terms of operators, a splitting of a function space corresponds to a projection P, along with its complementary projection I − P. Somewhat similarly, we show in this section how Hilbert space operators Γ with the property that Γ² = 0 induce splittings of the function space in a natural way, generalizing Hodge decompositions. Usually the condition Γ^k = 0 for some k ∈ Z+ defines nilpotence, but we shall always assume index k = 2.

Definition 10.1.1 (Nilpotent). A linear, possibly unbounded, operator Γ : H → H in a Hilbert space H is said to be nilpotent (with index 2) if it is densely defined, closed, and if R(Γ) ⊂ N(Γ). In particular, Γ²f = 0 for all f ∈ D(Γ). We say that a nilpotent operator Γ is exact if clos R(Γ) = N(Γ).
Recall that the null space N(Γ) is always closed if Γ is closed, but that in general the range R(Γ) is not a closed subspace. If Γ is nilpotent, then we have inclusions R(Γ) ⊂ clos R(Γ) ⊂ N(Γ) ⊂ D(Γ) ⊂ H. Let H0 denote any closed subspace complementary to N(Γ), for example H0 = N(Γ)⊥, so that H = N(Γ) ⊕ H0. Then the restricted map Γ : H0 → R(Γ) ⊂ N(Γ) is injective, which roughly speaking means that N(Γ) is at least half of H. For this reason it is natural to combine a nilpotent operator Γ1 with a "complementary" nilpotent operator Γ2. Ideally one would like to have a splitting of the Hilbert space H = R(Γ1) ⊕ R(Γ2), where R(Γ1) = N(Γ1) and R(Γ2) = N(Γ2). Since N(Γ1∗) = R(Γ1)⊥, the natural choice in a Hilbert space is Γ2 = Γ1∗.

Proposition 10.1.2 (Abstract Hodge decomposition). Let Γ be a nilpotent operator in a Hilbert space H. Then so is Γ∗, and there is an orthogonal splitting into closed subspaces

H = clos R(Γ) ⊕ C(Γ) ⊕ clos R(Γ∗),   (10.1)

where C(Γ) := N(Γ) ∩ N(Γ∗),

N(Γ) = clos R(Γ) ⊕ C(Γ)   and   N(Γ∗) = C(Γ) ⊕ clos R(Γ∗).

Note that C(Γ) = {0} if and only if Γ is exact.

Proof. If T is a densely defined and closed operator in H, then R(T)⊥ = N(T∗) and therefore clos R(T) = N(T∗)⊥. This proves that

clos R(Γ∗) = N(Γ)⊥ ⊂ clos R(Γ)⊥ = N(Γ∗),

showing that Γ∗ is nilpotent and that we have orthogonal splittings H = N(Γ) ⊕ clos R(Γ∗) = clos R(Γ) ⊕ N(Γ∗). But R(Γ) ⊂ N(Γ), since Γ is nilpotent, so using the second splitting in the first, we get

N(Γ) = clos R(Γ) ⊕ (N(Γ) ∩ clos R(Γ)⊥) = clos R(Γ) ⊕ (N(Γ) ∩ N(Γ∗)),

which proves the stated splitting.
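As a finite-dimensional illustration of Proposition 10.1.2 (our own numerical sketch; the particular matrix Γ below is an arbitrary choice, not from the text), the three Hodge projections can be computed with the Moore–Penrose pseudoinverse and checked to give an orthogonal splitting:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
B[:, 0] = 0                      # rank 2, so the cohomology C(Gamma) is nontrivial
Z = np.zeros((3, 3))
G = np.block([[Z, B], [Z, Z]])   # Gamma; strictly block triangular, so Gamma^2 = 0
assert np.allclose(G @ G, 0)

Gp = np.linalg.pinv(G)           # Moore-Penrose pseudoinverse
P_ran = G @ Gp                   # orthogonal projection onto R(Gamma)
P_coran = Gp @ G                 # orthogonal projection onto R(Gamma*)
P_coh = np.eye(6) - P_ran - P_coran   # onto C(Gamma) = N(Gamma) ∩ N(Gamma*)

# orthogonal splitting H = R(Gamma) ⊕ C(Gamma) ⊕ R(Gamma*):
assert np.allclose(P_ran @ P_coran, 0)
assert np.allclose(P_coh @ P_coh, P_coh) and np.allclose(P_coh, P_coh.T)
# N(Gamma) = R(Gamma) ⊕ C(Gamma): Gamma annihilates both summands
assert np.allclose(G @ (P_ran + P_coh), 0)
print(np.trace(P_ran), np.trace(P_coh), np.trace(P_coran))  # dimensions 2, 2, 2
```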
The mapping properties of Γ and Γ∗ are as follows. In the Hodge decomposition (10.1), the operator Γ is zero on clos R(Γ) ⊕ (N(Γ) ∩ N(Γ∗)) = N(Γ), and Γ∗ is zero on (N(Γ) ∩ N(Γ∗)) ⊕ clos R(Γ∗) = N(Γ∗). On the other hand, we see that the
restrictions Γ : clos R(Γ∗) → clos R(Γ) and Γ∗ : clos R(Γ) → clos R(Γ∗) are injective and have dense ranges:

H = clos R(Γ) ⊕ (N(Γ) ∩ N(Γ∗)) ⊕ clos R(Γ∗),
Γ : clos R(Γ∗) → clos R(Γ),   Γ∗ : clos R(Γ) → clos R(Γ∗),   (10.2)

with Γ vanishing on the first two summands and Γ∗ vanishing on the last two.
We have been using the formally skew-adjoint Dirac operator D = d + δ in Chapters 8 and 9. Using instead the anti-Euclidean Clifford product leads to a formally self-adjoint Dirac operator d − δ. For the following results we can use either the abstract Dirac operator Γ − Γ∗ or its self-adjoint analogue Γ + Γ∗. To be able to use resolvents without complexifying the space, we choose to work with Γ − Γ∗. Note from the mapping properties of Γ and Γ∗ that such operators swap the subspaces clos R(Γ) and clos R(Γ∗).

Proposition 10.1.3 (Abstract Hodge–Dirac operators). Let Γ be a nilpotent operator in a Hilbert space H. Consider the operator Π := Γ − Γ∗ with domain D(Π) := D(Γ) ∩ D(Γ∗). Then Π is skew-adjoint, that is, Π∗ = −Π in the sense of unbounded operators, with N(Π) = C(Γ) and R(Π) = R(Γ) + R(Γ∗).

We refer to an operator Π = Γ − Γ∗ derived from a nilpotent operator Γ as an abstract Hodge–Dirac operator. Note that in Euclidean spaces, the △-Dirac operator D from Definition 9.1.1 is an example of a Hodge–Dirac operator, whereas to have the slashed Dirac operator D̸ from Definition 9.1.3 as a Hodge–Dirac operator requires a complex structure on our Euclidean space, as discussed at the end of Section 9.1.

Proof. We use the Hodge decomposition from Proposition 10.1.2. If Πu = Γu − Γ∗u = 0, then Γu = 0 = Γ∗u by orthogonality, from which N(Π) = C(Γ) follows. If f = Γu1 + Γ∗u2, then f = Π(PΓ∗u1 − PΓu2), since ΠPΓ∗u1 = Γu1 and ΠPΓu2 = −Γ∗u2, from which R(Π) = R(Γ) + R(Γ∗) follows. Note that u1 − PΓ∗u1 ∈ N(Γ) ⊂ D(Γ), and similarly for u2.

It is clear that −Π is the formal adjoint of Π. It remains to prove that if ⟨f, Πg⟩ + ⟨f′, g⟩ = 0 for all g ∈ D(Π), then f ∈ D(Π) and f′ = Πf. Writing f = f1 + f2 + f3 in the Hodge splitting, and similarly for f′, we have

⟨f1, Γg⟩ + ⟨f3′, g⟩ = 0,
⟨f2′, g⟩ = 0,
⟨f3, −Γ∗g⟩ + ⟨f1′, g⟩ = 0,

by choosing g ∈ clos R(Γ∗) ∩ D(Γ), g ∈ C(Γ), and g ∈ clos R(Γ) ∩ D(Γ∗) respectively.
Since Γ and Γ∗ are adjoint in the sense of unbounded operators, we conclude that f1 ∈ D(Γ∗), f3′ = −Γ∗f1, f2′ = 0, f3 ∈ D(Γ), and f1′ = Γf3. This shows that f ∈ D(Π) and f′ = Πf.
Definition 10.1.4 (Hodge projections). Let Γ be a nilpotent operator in a Hilbert space H. The associated Hodge projections are the orthogonal projections PΓ and PΓ∗ onto the subspaces clos R(Γ) and clos R(Γ∗) respectively. The orthogonal projection PC(Γ) onto the Γ-cohomology space C(Γ) is PC(Γ) = I − PΓ − PΓ∗.

Proposition 10.1.5 (Formulas for Hodge projections). Let Γ be a nilpotent operator in a Hilbert space H. If Γ is exact, then

PΓf = ΓΠ⁻¹f = −Π⁻¹Γ∗f = −ΓΠ⁻²Γ∗f,
PΓ∗f = −Γ∗Π⁻¹f = Π⁻¹Γf = −Γ∗Π⁻²Γf,

for f ∈ D(Π) ∩ R(Π). If Γ is not exact, let ε ∈ R∖{0}. Then we have PC(Γ)f = lim_{ε→0} ε(εI + Π)⁻¹f, and the Hodge projections are

PΓf = lim_{ε→0} Γ(εI + Π)⁻¹f,
PΓ∗f = −lim_{ε→0} Γ∗(εI + Π)⁻¹f,

with convergence in H, for f ∈ H. We also have PΓf = −lim_{ε→0} (εI + Π)⁻¹Γ∗f for f ∈ D(Γ∗) and PΓ∗f = lim_{ε→0} (εI + Π)⁻¹Γf for f ∈ D(Γ).

Proof. The formulas for exact operators Γ involving Π⁻¹ are immediate from (10.2), and the final second-order formulas follow since PΓ = PΓ² and PΓ∗ = PΓ∗². For nonexact Γ, consider first PC(Γ)f. If f ∈ C(Γ), then ε(εI + Π)⁻¹f = f. If f = Πu ∈ R(Π), then

ε(εI + Π)⁻¹Πu = εu − ε²(εI + Π)⁻¹u → 0 as ε → 0.

We have used the skew-adjointness of Π, which implies that ‖ε(εI + Π)⁻¹‖ ≤ 1. These uniform bounds also allow us to conclude that ε(εI + Π)⁻¹f → 0 for all f in the closure of R(Π). This proves the formula for PC(Γ)f, from which it immediately follows that

Γ(εI + Π)⁻¹f = PΓΠ(εI + Π)⁻¹f → PΓ(f − PC(Γ)f) = PΓf,

and similarly for PΓ∗. Alternatively, for f ∈ D(Γ∗), we have

−(εI + Π)⁻¹Γ∗f = (εI + Π)⁻¹ΠPΓf → (I − PC(Γ))PΓf = PΓf,

and similarly for PΓ∗.
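The regularized resolvent formulas can be tested numerically in the same finite-dimensional setting as before (a sketch with our own arbitrary nilpotent matrix; in infinite dimensions the limits require the stated domains):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
B[:, 0] = 0
Z = np.zeros((3, 3))
G = np.block([[Z, B], [Z, Z]])            # nilpotent Gamma
Pi = G - G.T                              # Hodge-Dirac operator, skew-adjoint
Gp = np.linalg.pinv(G)
P_ran, P_coran = G @ Gp, Gp @ G           # Hodge projections
P_coh = np.eye(6) - P_ran - P_coran       # projection onto C(Gamma)

f = rng.standard_normal(6)
eps = 1e-6
R = np.linalg.inv(eps * np.eye(6) + Pi)   # resolvent (eps I + Pi)^(-1)
err_coh = np.max(np.abs(eps * R @ f - P_coh @ f))
err_ran = np.max(np.abs(G @ R @ f - P_ran @ f))
err_coran = np.max(np.abs(-G.T @ R @ f - P_coran @ f))
print(err_coh, err_ran, err_coran)        # all O(eps)
```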
The following result describes an important property that a nilpotent operator may have, which we will establish for d and δ on bounded Lipschitz domains. Proposition 10.1.6 (Compact potential maps). For a nilpotent operator Γ in a Hilbert space H, the following are equivalent.
(i) The subspaces R(Γ) and R(Γ∗) are closed, C(Γ) is finite-dimensional, and the inverses of Γ : R(Γ∗) → R(Γ) and Γ∗ : R(Γ) → R(Γ∗) are compact.

(ii) There exist compact operators K0, K1 : H → H, with R(K1) ⊂ D(Γ), such that the homotopy relation

ΓK1f + K1Γf + K0f = f

holds for all f ∈ D(Γ).

(iii) The Hilbert space D(Γ) ∩ D(Γ∗), equipped with the norm (‖f‖² + ‖Γf‖² + ‖Γ∗f‖²)^{1/2}, is compactly embedded in H.

Carefully note that unlike (i) and (iii), property (ii) does not involve the adjoint Γ∗. We exploit this in Theorem 10.3.1 below to reduce the problem of existence of potentials from Lipschitz domains to smooth domains. Also note for (i) that when the ranges are closed, Γ : R(Γ∗) → R(Γ) has a compact inverse if and only if there exists a compact operator KΓ : R(Γ) → H such that ΓKΓ = I on R(Γ). Indeed, if we have such a KΓ, then PΓ∗KΓ is a compact operator giving a potential u ∈ R(Γ∗).

Proof. Assume (i). Define compact operators K0 := PC(Γ) and

K1f := Γ⁻¹f ∈ R(Γ∗) for f ∈ R(Γ),   K1f := 0 for f ∈ N(Γ∗).

It is straightforward to verify that ΓK1 = PΓ and K1Γ = PΓ∗, from which (ii) follows.

Assume (ii). Let (fj)_{j=1}^∞ be a sequence such that fj, Γfj, and Γ∗fj all are bounded sequences in H. We have

(I − PΓ)fj = (I − PΓ)(ΓK1fj + K1(Γfj) + K0fj) = (I − PΓ)K1(Γfj) + (I − PΓ)K0fj.

By duality, we also obtain from the homotopy relation that

PΓfj = PΓ(Γ∗K1∗fj + K1∗(Γ∗fj) + K0∗fj) = PΓK1∗(Γ∗fj) + PΓK0∗fj.

This shows that (PΓfj)_{j=1}^∞, (PC(Γ)fj)_{j=1}^∞, and (PΓ∗fj)_{j=1}^∞ have subsequences that converge in H, and (iii) follows.

Assume (iii). The operator I + Π is an isometry between the Hilbert spaces D(Γ) ∩ D(Γ∗) and H, since Π is a skew-adjoint operator. Since the inclusion is compact between these spaces, perturbation theory shows that

Π : D(Γ) ∩ D(Γ∗) → H

is a Fredholm operator, and (i) follows.
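In finite dimensions, the homotopy relation in (ii) can be realized concretely with K1 a pseudoinverse of Γ (our own toy model; compactness is automatic here, so this only illustrates the algebra of the relation, not the analytic content):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
B[:, 0] = 0
Z = np.zeros((3, 3))
G = np.block([[Z, B], [Z, Z]])          # nilpotent Gamma
K1 = np.linalg.pinv(G)                  # potential map K1; G @ K1 = P onto R(Gamma)
K0 = np.eye(6) - G @ K1 - K1 @ G

# K0 is the orthogonal projection onto C(Gamma) = N(Gamma) ∩ N(Gamma*):
assert np.allclose(G @ K0, 0) and np.allclose(G.T @ K0, 0)
assert np.allclose(K0 @ K0, K0) and np.allclose(K0, K0.T)
# the homotopy relation of (ii): Gamma K1 + K1 Gamma + K0 = I
assert np.allclose(G @ K1 + K1 @ G + K0, np.eye(6))
print(round(np.trace(K0)))              # dim C(Gamma) = 2
```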
Nilpotent operators appear naturally from the exterior and interior products, since v ∧ (v ∧ w) = 0 and v ⌟ (v ⌟ w) = 0.

Example 10.1.7 (Algebraic Hodge decomposition). Fix a unit vector v ∈ V in an n-dimensional Euclidean space and define nilpotent linear maps

µ(w) := v ∧ w,   µ∗(w) := v ⌟ w,   w ∈ ∧V.
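The maps µ and µ∗ can be realized as matrices on the 2ⁿ-dimensional space ∧V and the relevant identities verified numerically. The following is our own sketch for n = 3 with v = e1; the bitmask encoding of the basis multivectors is an implementation choice, not from the text.

```python
import numpy as np

n, dim = 3, 8                  # exterior algebra of R^3 has dimension 2^3 = 8

# basis multivectors e_S indexed by bitmasks S of {1,2,3}; since v = e1 is the
# lowest basis vector, prepending it to e_S causes no sign flips
mu = np.zeros((dim, dim))
for S in range(dim):
    if not S & 1:              # e1 not yet a factor of e_S
        mu[S | 1, S] = 1.0     # e1 ^ e_S = e_{S union {1}}
mu_star = mu.T                 # interior product e1 ⌟ (.) is the adjoint

assert np.allclose(mu @ mu, 0)                                   # mu is nilpotent
assert np.allclose(mu @ mu_star + mu_star @ mu, np.eye(dim))     # since |v| = 1
assert np.allclose((mu - mu_star) @ (mu - mu_star), -np.eye(dim))
P_normal, P_tangential = mu @ mu_star, mu_star @ mu              # Hodge projections
print(np.trace(P_normal), np.trace(P_tangential))  # both 2^(n-1) = 4
```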
We apply the abstract theory above to Γ = µ and H the finite-dimensional Hilbert space ∧V. Lemma 2.2.7 shows that R(µ) = N(µ), so in this case µ is exact, and the Hodge decomposition reads

∧V = R(µ) ⊕ R(µ∗),

where R(µ) are the multivectors normal to, and R(µ∗) the multivectors tangential to, the hyperplane [v]⊥, in the sense of Definition 2.8.6. We have (µ − µ∗)² = −1, and the Hodge projections are µµ∗ onto normal multivectors and µ∗µ onto tangential multivectors.

Note that R(µ) and R(µ∗), for the full algebra ∧V, both have dimension 2^{n−1}. However, this is not true in general for the restrictions to ∧kV. For example, the space R(µ) ∩ ∧1V of vectors normal to [v]⊥ is one-dimensional, whereas the space R(µ∗) ∩ ∧1V of vectors tangential to [v]⊥ has dimension n − 1. The smaller k is, the more tangential k-vectors exist as compared to normal k-vectors. At the ends, all scalars are tangential and all n-vectors are normal.

Example 10.1.8 (Rn Hodge decomposition). Consider the exterior and interior derivative operators

dF(x) = ∇ ∧ F(x)
and δF(x) = ∇ ⌟ F(x)

in the Hilbert space H = L2(V; ∧Vc) on the whole Euclidean space X = V, where we complexify the exterior algebra in order to use the Fourier transform. These two nilpotent operators are the Fourier multipliers

F(dF)(ξ) = Σ_{i=1}^n e_i ∧ ∫_X ∂_iF(x) e^{−i⟨ξ,x⟩} dx = iξ ∧ F̂(ξ),
F(δF)(ξ) = Σ_{i=1}^n e_i ⌟ ∫_X ∂_iF(x) e^{−i⟨ξ,x⟩} dx = iξ ⌟ F̂(ξ),

defining the interior product as the sesquilinear adjoint of the exterior product. Define the pointwise multiplication operators

µ_ξ(F̂(ξ)) := ξ ∧ F̂(ξ),   µ∗_ξ(F̂(ξ)) := ξ ⌟ F̂(ξ).

We view µ_ξ, µ∗_ξ : L2(X; ∧Vc) → L2(X; ∧Vc) as multiplication operators by the radial vector field X ∋ ξ ↦ ξ = Σ_{k=1}^n ξ_k e_k ∈ V. Thus we have

F(dF) = iµ_ξ(F(F)),   F(δF) = iµ∗_ξ(F(F)).

In particular, F is closed if and only if F̂ is a radial multivector field, that is, ξ ∧ F̂ = 0, and F is coclosed if and only if F̂ is an angular multivector field, that is, ξ ⌟ F̂ = 0. From Plancherel's theorem it is clear that Γ = d and Γ∗ = −δ, with domains D(Γ) := {F ∈ L2 ; ξ ∧ F̂ ∈ L2} and D(Γ∗) := {F ∈ L2 ; ξ ⌟ F̂ ∈ L2}, are nilpotent operators in H, and that d = −δ∗ in the sense of unbounded operators. In this case, the Hodge decomposition reads

L2(V; ∧Vc) = clos R(d) ⊕ clos R(δ).

That d is exact is a consequence of µ_ξ being exact for each ξ ∈ V ∖ {0}. By considering F̂ near ξ = 0, we see that the ranges are not closed, which is a consequence of the domain X not being bounded. Using the formulas from Proposition 10.1.5, we see that the Hodge projections are the singular integrals

PdF(x) = ∇ ∧ ∫_X Ψ(x − y) ⌟ F(y) dy = (k/n)F(x) + p.v. ∫_X ∇ ∧ (Ψ(ẋ − y) ⌟ F(y)) dy,
PδF(x) = ∇ ⌟ ∫_X Ψ(x − y) ∧ F(y) dy = ((n − k)/n)F(x) + p.v. ∫_X ∇ ⌟ (Ψ(ẋ − y) ∧ F(y)) dy,

for k-vector fields F ∈ L2(X; ∧kVc). We have used the distributional derivative ∂_iΨ(x) = e_iδ(x)/n + p.v. ∂_iΨ(x).
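The Fourier description of the Hodge projections can be illustrated on a periodic grid (our own sketch; this is the torus analogue of the whole-space setting above, for 1-vector fields in the plane, where the splitting is the classical curl-free/divergence-free Helmholtz decomposition):

```python
import numpy as np

N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
F1 = np.cos(X) * np.sin(Y) + np.sin(2 * Y)   # an arbitrary smooth vector field
F2 = np.sin(X) * np.cos(Y) + np.cos(3 * X)

k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # integer wave numbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                       # avoid 0/0 at the mean mode

F1h, F2h = np.fft.fft2(F1), np.fft.fft2(F2)
# P_d: projection onto radial fields xi (xi . Fhat) / |xi|^2, the gradients
dot = KX * F1h + KY * F2h
G1h, G2h = KX * dot / K2, KY * dot / K2              # curl-free part
H1h, H2h = F1h - G1h, F2h - G2h                      # divergence-free part

# checks on the Fourier side: curl of P_d F and divergence of P_delta F vanish
curlG = KX * G2h - KY * G1h
divH = KX * H1h + KY * H2h
print(np.max(np.abs(curlG)), np.max(np.abs(divH)))
```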
10.2 Half-Elliptic Boundary Conditions
For the remainder of this chapter, we study the nilpotent operators d and δ on bounded domains D, at least Lipschitz regular, in a Euclidean space X. The main idea in this section is to use the commutation theorem (Theorem 7.2.9) to reduce the problems to smooth domains. Realizing the operators that are implicit in Definition 7.6.1 as unbounded nilpotent operators, we have the following.

Definition 10.2.1 (d and δ on domains). Let D be a bounded Lipschitz domain in a Euclidean space (X, V). Define unbounded linear operators d, d̲, δ, δ̲ in L2(D) = L2(D; ∧V) as follows. Assume that F, F′ ∈ L2(D) and consider the equation

∫_D (⟨F′(x), φ(x)⟩ + ⟨F(x), ∇ ⌟ φ(x)⟩) dx = 0.

If this holds for all φ ∈ C0∞(D), then we define F ∈ D(d) and dF := F′. If this holds for all φ ∈ C∞(D̄), then we define F ∈ D(d̲) and d̲F := F′.
Assume that F, F′ ∈ L2(D) and consider the equation

∫_D (⟨F′(x), φ(x)⟩ + ⟨F(x), ∇ ∧ φ(x)⟩) dx = 0.
If this holds for all φ ∈ C0∞(D), then we define F ∈ D(δ) and δF := F′. If this holds for all φ ∈ C∞(D̄), then we define F ∈ D(δ̲) and δ̲F := F′.

We recall from Section 7.6 that by Stokes's theorem we interpret F ∈ D(d̲) as being normal at ∂D in a weak sense, and F ∈ D(δ̲) as being tangential at ∂D in a weak sense. Basic properties of these operators are the following.

Proposition 10.2.2 (Nilpotence). Let D be a bounded Lipschitz domain in a Euclidean space. Then the operators d, d̲, δ, δ̲ are well-defined nilpotent operators on L2(D). In particular, they are linear, closed, and densely defined. With the pointwise Hodge star and involution maps, we have

δ̲(F∗) = (d̲F̂)∗ for F ∈ D(d̲),   d̲(F∗) = (δ̲F̂)∗ for F ∈ D(δ̲).

Proof. Consider d̲; the other proofs are similar. That d̲ is linear and closed is clear. It is well defined, since F = 0 implies F′ = 0, because C∞(D̄) is dense in L2(D). To show nilpotence, assume F ∈ D(d̲). Then

∫_D (⟨0, φ(x)⟩ + ⟨d̲F(x), ∇ ⌟ φ(x)⟩) dx = −∫_D ⟨F(x), ∇ ⌟ (∇ ⌟ φ(x))⟩ dx = 0

for all φ ∈ C∞(D̄), which shows that d̲(d̲F) = 0. The stated relations between d̲ and δ̲ follow from Proposition 7.1.7(i).
The goal of this section is to prove the following duality. Recall the definition (6.4) of adjointness in the sense of unbounded operators.

Proposition 10.2.3 (Duality). Let D be a bounded Lipschitz domain in a Euclidean space. Then d and −δ̲ are adjoint in the sense of unbounded operators. Similarly, d̲ and −δ are adjoint in the sense of unbounded operators.

From Propositions 10.1.2 and 10.2.3 we obtain a Hodge decomposition with tangential boundary conditions

L2(D) = clos R(d) ⊕ Ck(D) ⊕ clos R(δ̲), where Ck(D) := N(d) ∩ N(δ̲),

and a Hodge decomposition with normal boundary conditions

L2(D) = clos R(d̲) ⊕ C⊥(D) ⊕ clos R(δ), where C⊥(D) := N(d̲) ∩ N(δ).

We will prove in Section 10.3 that the ranges of the four operators are closed, so the closures here are redundant. For the proof of Proposition 10.2.3, we need the following results.
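The duality underlying Definition 10.2.1 can be sanity-checked symbolically for a 0-vector field F = u and a decaying 1-vector test field φ on R² (our own sketch; Gaussian decay replaces the compact support of the test function, and for a scalar F we have dF = ∇u and ∇ ⌟ φ = div φ):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
u = sp.exp(-x**2 - y**2)                  # F, a 0-vector field
phi1 = x * sp.exp(-x**2 - y**2)           # phi = phi1 e1 + phi2 e2
phi2 = y * sp.exp(-x**2 - y**2)

# <dF, phi> + <F, del . phi>, which should integrate to zero over R^2
integrand = (sp.diff(u, x) * phi1 + sp.diff(u, y) * phi2
             + u * (sp.diff(phi1, x) + sp.diff(phi2, y)))
I = sp.integrate(sp.integrate(integrand, (x, -sp.oo, sp.oo)), (y, -sp.oo, sp.oo))
print(sp.simplify(I))  # 0
```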
Lemma 10.2.4 (Local nonsmooth commutation theorem). Let ρ : D1 → D2 be a Lipschitz diffeomorphism between domains D1 and D2 in Euclidean space. If F ∈ D(d) in D2 with supp F ⊂ D2, then ρ∗F ∈ D(d) with d(ρ∗F) = ρ∗(dF) in D1. Similarly, if F ∈ D(δ) in D1 with supp F ⊂ D1, then ρ̃∗F ∈ D(δ) with δ(ρ̃∗F) = ρ̃∗(δF) in D2.

We recall, for example, that supp F ⊂ D2 means that F = 0 in a neighborhood of ∂D2. Note that for general Lipschitz changes of variables, ρ∗F and ρ̃∗F are defined almost everywhere by Rademacher's theorem.

Proof. By Proposition 7.2.7 it suffices to prove the first statement. Consider first F ∈ C0∞(D2). We mollify and approximate ρ by ρt(x) := ηt ∗ ρ(x), where η ∈ C0∞(X; R) with ∫η = 1 and ηt(x) := t⁻ⁿη(x/t). Note that ρt is well defined on every compact subset of D1 for small t. It follows that ρt ∈ C∞ and that

d(ρt∗F) = ρt∗(dF)

holds by Theorem 7.2.9. From the dominated convergence theorem we conclude that ρt∗F → ρ∗F in L2(D1). Since for the same reason ρt∗(dF) → ρ∗(dF), and d is a closed operator, it follows that ρ∗F ∈ D(d) and d(ρ∗F) = ρ∗(dF).

Next consider general F ∈ D(d) with compact support in D2. Similarly to above, we now mollify and approximate F by Fn ∈ C0∞(D2), with Fn → F and dFn → dF in L2(D2). We have shown above that d(ρ∗Fn) = ρ∗(dFn). Using that ρ∗ : L2(D2) → L2(D1) is bounded and that d is closed, it follows that ρ∗F ∈ D(d) and d(ρ∗F) = ρ∗(dF).

The following shows that the normal and tangential boundary conditions for d̲ and δ̲ are obtained by closure from C0∞.

Proposition 10.2.5 (Half Dirichlet conditions). Let D be a bounded Lipschitz domain in a Euclidean space. If F ∈ D(d̲), then there exists Ft ∈ C0∞(D) such that

Ft → F and d̲Ft → d̲F

in L2(D) as t → 0. Similarly, if F ∈ D(δ̲), then there exists Ft ∈ C0∞(D) such that Ft → F and δ̲Ft → δ̲F in L2(D) as t → 0.

Proof. By Hodge star duality it suffices to consider d̲. By the compactness of D̄, we can localize and assume that supp F ⊂ Dp ∩ D near p ∈ ∂D as in Definition 6.1.4. We note from Definition 10.2.1 that extending F by 0 outside D, we have F ∈ D(d) on X as in Example 10.1.8. Pulling back by the local parametrization ρ, Lemma 10.2.4 shows that ρ∗F ∈ D(d) on Rn. We translate ρ∗F up into Ωp and
pull back by ρ⁻¹ to define F̃t := (ρ∗)⁻¹(ρ∗F(x′, xn − t)). This yields F̃t ∈ D(d) with supp F̃t ⊂ D. Finally, we mollify and approximate F̃t by

Ft(x) := ηt ∗ F̃t(x), x ∈ D,

where η ∈ C0∞(X; R) with ∫η = 1, supp η ⊂ B(0, r), and ηt(x) := t⁻ⁿη(x/t). If r > 0 is chosen small enough depending on the Lipschitz geometry, we obtain Ft ∈ C0∞(D) and can verify that Ft and d̲Ft converge to F and d̲F respectively.

Proof of Proposition 10.2.3. Consider the equation

∫_D (⟨F′(x), φ(x)⟩ + ⟨F(x), ∇ ⌟ φ(x)⟩) dx = 0.
This holds for all F ∈ D(d) with F′ = dF, and all φ ∈ C0∞(D), by Definition 10.2.1. By Proposition 10.2.5 and a limiting argument, this continues to hold for φ ∈ D(δ̲). This shows that d and −δ̲ are formally adjoint. Furthermore, assume that the equation holds for some F and F′ ∈ L2(D) and all φ ∈ D(δ̲). In particular, it holds for all φ ∈ C0∞(D), and it follows by definition that F ∈ D(d) and F′ = dF. This shows that d and −δ̲ are adjoint in the sense of unbounded operators. The proof that d̲ and −δ are adjoint in the sense of unbounded operators is similar.

We next remove the assumption of compact support in Lemma 10.2.4.

Lemma 10.2.6 (Nonsmooth commutation theorem). Let ρ : D1 → D2 be a Lipschitz diffeomorphism between bounded Lipschitz domains D1 and D2 in Euclidean space. If F ∈ D(d) on D2, then ρ∗F ∈ D(d) on D1 with d(ρ∗F) = ρ∗(dF) in D1. Similarly, if F ∈ D(δ) on D1, then ρ̃∗F ∈ D(δ) with δ(ρ̃∗F) = ρ̃∗(δF) on D2.
C0∞ (D1 ).
for φ ∈ By the Lipschitz change of variables formula (6.2), see Section 6.5, and Lemma 10.2.4, this is equivalent to Z hdF, ρ˜∗ φi + hF, ∇ y (˜ ρ∗ φ)i dx = 0, D2
which holds by Proposition 10.2.3.
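The mollification step used in these approximation arguments can be illustrated numerically: convolving a rough field with the rescaled bump η_t = t^{−n}η(·/t) produces smooth functions converging in L². The sketch below is a hypothetical one-dimensional numpy illustration, not part of the book; it mollifies a discontinuous function and checks that the L² error decreases as t → 0.

```python
import numpy as np

def mollify(f, t, x):
    """Approximate (eta_t * f)(x) on the grid x, where eta_t(s) = t^{-1} eta(s/t)
    and eta is a smooth bump supported in (-1, 1), normalized to integral 1."""
    dx = x[1] - x[0]
    s = np.arange(-t, t + dx, dx)
    eta = np.zeros_like(s)
    inside = np.abs(s / t) < 1
    eta[inside] = np.exp(-1.0 / (1.0 - (s[inside] / t) ** 2))
    eta /= eta.sum() * dx  # discrete integral of eta is now 1
    return np.convolve(f, eta, mode="same") * dx

x = np.linspace(-1.0, 1.0, 2001)
f = np.sign(x)  # a discontinuous field in L^2(-1, 1)

# The smooth mollifications converge to f in L^2 as t -> 0.
errs = [np.sqrt(np.sum((mollify(f, t, x) - f) ** 2) * (x[1] - x[0]))
        for t in (0.2, 0.1, 0.05)]
assert errs[0] > errs[1] > errs[2]
```

The translation step F̃_t in the proof above plays the same role as shrinking the support here: it keeps the smoothing inside the domain before the bump is applied.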
It is clear from the definition that D(d) on D can be viewed as a subspace of D(d) on X, by extending F on D by zero to all X. The following existence of extension maps shows that D(d) on D can be identified with the quotient space D(dX )/D(dX\D ).
Chapter 10. Hodge Decompositions
Proposition 10.2.7 (Extensions for d and δ). Let D be a bounded Lipschitz domain in a Euclidean space X. Assume that F ∈ D(d) on D. Then there exists F̃ ∈ D(d) on X such that F̃|_D = F. Furthermore, there exists F_t ∈ C^∞(D̄) such that F_t → F and dF_t → dF in L^2(D) as t → 0. Similarly, assume that F ∈ D(δ) on D. Then there exists F̃ ∈ D(δ) on X such that F̃|_D = F. Furthermore, there exists F_t ∈ C^∞(D̄) such that F_t → F and δF_t → δF in L^2(D) as t → 0.

Proof. As in the proof of Proposition 10.2.5, it suffices to consider d, and we may assume that supp F ⊂ D_p ∩ D̄, a small neighborhood of p ∈ ∂D. By Lemma 10.2.6 we have ρ^*F ∈ D(d) on Ω_p ∩ {x_n > 0}. Define
$$G(x) := \begin{cases} \rho^*F(x), & x_n > 0, \\ R^*\rho^*F(x), & x_n < 0, \end{cases}$$
where R(x', x_n) := (x', −x_n) denotes reflection in R^{n−1}. We claim that G ∈ D(d) on all of Ω_p, across R^{n−1}. To see this, for φ ∈ C_0^∞(Ω_p), we calculate
$$\int_{x_n>0}\big(\langle d\rho^*F, \phi\rangle + \langle \rho^*F, \nabla\,\lrcorner\,\phi\rangle\big)dx + \int_{x_n<0}\big(\langle dR^*\rho^*F, \phi\rangle + \langle R^*\rho^*F, \nabla\,\lrcorner\,\phi\rangle\big)dx$$
$$= \int_{x_n>0}\big(\langle d\rho^*F, \phi + R^*\phi\rangle + \langle \rho^*F, \nabla\,\lrcorner\,(\phi + R^*\phi)\rangle\big)dx.$$
Since φ + R^*φ is tangential on R^{n−1}, we have φ + R^*φ ∈ D(δ) on Ω_p ∩ R^n_+, so by Proposition 10.2.3, the integral vanishes. By Lemma 10.2.4, the field F̃ := (ρ^*)^{-1}G ∈ D(d) on X is an extension of F, and if we mollify and approximate F̃ by
$$F_t(x) := (\eta_t * \tilde F)(x),\qquad x\in D,$$
as above, we obtain F_t ∈ C^∞(D̄) and can verify that F_t and dF_t converge to F and dF respectively. □
10.3 Hodge Potentials

Our main result on Hodge decompositions is the following.

Theorem 10.3.1 (Hodge decompositions on Lipschitz domains). Let D be a bounded Lipschitz domain in a Euclidean space X. Then the operators d, δ, d, δ in L^2(D; ∧V) all have closed ranges, the cohomology spaces Ck(D) = N(d) ∩ N(δ) and C⊥(D) = N(d) ∩ N(δ) are finite-dimensional, and we have the Hodge decompositions
$$L^2(D;\wedge V) = \mathsf{R}(d)\oplus C_k(D)\oplus \mathsf{R}(\delta) = \mathsf{R}(d)\oplus C_\perp(D)\oplus \mathsf{R}(\delta).$$
Moreover, the inverses of d : R(δ) → R(d), δ : R(d) → R(δ), d : R(δ) → R(d), and δ : R(d) → R(δ) are all L^2 compact.
The proof follows from the following reduction and Theorem 10.3.3 below.

Reduction of Theorem 10.3.1 to a ball. We prove that there are compact operators K₀ and K₁ on L^2(D) such that dK₁F + K₁dF + K₀F = F for all F ∈ D(d). By Propositions 10.1.6 and 10.2.2, this will prove Theorem 10.3.1. By Definition 6.1.4 we have a finite covering D = ⋃_α D_α, with Lipschitz diffeomorphisms ρ_α : B → D_α from the unit ball B. Moreover, we have a partition of unity η_α ∈ C^∞(D̄) subordinate to this covering. By Theorem 10.3.3 for the ball B, we have compact maps K₁^B and K₀^B on L^2(B) such that dK₁^BF + K₁^BdF + K₀^BF = F. Note that we need only part (i) in the proof of Theorem 10.3.3 for this. Define
$$K_1F := \sum_\alpha \eta_\alpha\,(\rho_\alpha^*)^{-1}K_1^B(\rho_\alpha^*F|_{D_\alpha}),$$
which is seen to be compact on L^2(D). We calculate
$$dK_1F = \sum_\alpha \eta_\alpha\,(\rho_\alpha^*)^{-1}(I - K_1^Bd - K_0^B)(\rho_\alpha^*F|_{D_\alpha}) + \sum_\alpha \nabla\eta_\alpha\wedge(\rho_\alpha^*)^{-1}K_1^B(\rho_\alpha^*F|_{D_\alpha}) = F - K_1dF - K_0F,$$
where
$$K_0F := \sum_\alpha \eta_\alpha\,(\rho_\alpha^*)^{-1}K_0^B(\rho_\alpha^*F|_{D_\alpha}) - \sum_\alpha \nabla\eta_\alpha\wedge(\rho_\alpha^*)^{-1}K_1^B(\rho_\alpha^*F|_{D_\alpha})$$
is seen to be compact on L^2(D). Note the critical use of Theorem 7.2.9. This proves Theorem 10.3.1 for Lipschitz domains D. □

In the proof of Theorem 10.3.1 we used Proposition 10.1.6(ii). As for the characterization (iii), it is natural to ask whether D(d) ∩ D(δ) ⊂ H^1(D), that is, whether the total derivative ∇⊗F belongs to L^2(D) whenever F, dF, δF ∈ L^2(D). This is not true for general Lipschitz domains, where the irregularities of ∂D may prevent F ∈ D(d) ∩ D(δ) from having full Sobolev H^1 regularity, but it does hold for smooth domains.

Example 10.3.2 (Nonconvex corner). Let D_α ⊂ R² be a bounded domain that is smooth except at 0, in a neighborhood of which D_α coincides with the sector {re^{iφ} ; r > 0, 0 < φ < α}. Define a scalar function u : D_α → R by u = r^{π/α} sin(πφ/α)η, where η ∈ C_0^∞(R²), η = 1 in a neighborhood of 0, and η = 0 where D_α differs from the sector. Consider the gradient vector field F := ∇u ∈ R(d). Using the estimate |F| ≲ r^{π/α−1}, we verify that F ∈ D(d) ∩ D(δ). However,
$$\int_{D_\alpha} |\nabla\otimes F|^2\,dx\,dy \gtrsim \int_0^1 \big(r^{\pi/\alpha-2}\big)^2\,r\,dr.$$
Therefore, when D_α is not convex, that is, when α > π, we have F ∉ H^1(D_α). Figure 10.1 shows the case α = 3π/2.
Figure 10.1: The harmonic function r^{2/3} sin(2φ/3) in quadrants 1–3 of the unit circle, with Dirichlet boundary conditions but infinite gradient at the origin.
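The corner singularity in Example 10.3.2 can be checked symbolically. The following sketch is an illustration, not part of the book: it works in the pure sector and ignores the cutoff η (which equals 1 near the corner). It verifies with sympy that u = r^{π/α} sin(πφ/α) is harmonic and that |∇u| scales like r^{π/α−1}, so the second derivatives scale like r^{π/α−2}, and ∫₀¹(r^{π/α−2})² r dr diverges precisely when α > π.

```python
import sympy as sp

r, phi, alpha = sp.symbols("r phi alpha", positive=True)
u = r ** (sp.pi / alpha) * sp.sin(sp.pi * phi / alpha)

# Laplacian in polar coordinates: u_rr + u_r / r + u_phiphi / r^2
lap = sp.diff(u, r, 2) + sp.diff(u, r) / r + sp.diff(u, phi, 2) / r**2
assert sp.simplify(lap) == 0  # u is harmonic in the sector

# |grad u|^2 = u_r^2 + (u_phi / r)^2 scales like r^(2(pi/alpha - 1))
grad2 = sp.diff(u, r) ** 2 + (sp.diff(u, phi) / r) ** 2
ratio = sp.simplify(grad2 / r ** (2 * (sp.pi / alpha - 1)))
assert not ratio.has(r)  # the ratio (pi/alpha)^2 is independent of r
```

For α = 3π/2, the exponent π/α − 2 = −4/3, and (r^{−4/3})² r = r^{−5/3} is not integrable at 0, matching the figure.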
Theorem 10.3.3 (Full regularity of Hodge potentials). Let D be a bounded C² domain. Then
$$\mathsf{D}(d)\cap \mathsf{D}(\delta) = H^1_k(D) := \{F\in H^1(D)\ ;\ \nu\,\lrcorner\,F|_{\partial D} = 0\}$$
and
$$\mathsf{D}(d)\cap \mathsf{D}(\delta) = H^1_\perp(D) := \{F\in H^1(D)\ ;\ \nu\wedge F|_{\partial D} = 0\}.$$

For the proof of Theorem 10.3.3 we shall prove a Weitzenböck identity for d and δ on D, involving a boundary curvature term. This requires the following definitions from differential geometry, and uses that the boundary ∂D is C² regular. In this case, the unit normal vector field ν on ∂D is C¹, and the curvature of the boundary is a continuous function.

Proposition 10.3.4 (Derivative of normal). Let D be a bounded C² domain, with outward-pointing unit normal vector field ν on ∂D. At p ∈ ∂D, let T_p(∂D) denote the tangent hyperplane. Then the map
$$S^p_{\partial D} : T_p(\partial D)\to T_p(\partial D) : v\mapsto \partial_v\nu$$
is linear and symmetric. Moreover, for any tangential C¹ vector fields u and v on ∂D, at each p ∈ ∂D we have
$$\langle u, S^p_{\partial D}v\rangle = -\langle \partial_uv, \nu\rangle = \langle S^p_{\partial D}u, v\rangle.$$
Proof. We have 0 = ∂_v|ν|² = 2⟨∂_vν, ν⟩, since |ν| = 1 on ∂D, which shows that S^p_{∂D}(v) is a tangential vector. To show the symmetry of S^p_{∂D} at p ∈ ∂D, we note that 0 = ∂_u⟨v, ν⟩ = ⟨∂_uv, ν⟩ + ⟨v, ∂_uν⟩ and 0 = ∂_v⟨u, ν⟩ = ⟨∂_vu, ν⟩ + ⟨u, ∂_vν⟩, since u and v are tangential on ∂D. The symmetry of S^p_{∂D} now follows, since the Lie bracket ∂_uv − ∂_vu = [u, v] = L_uv is tangential. □
Definition 10.3.5 (Second fundamental form). Let D be a bounded C² domain. The symmetric bilinear form
$$B^p_{\partial D} : T_p(\partial D)\times T_p(\partial D)\to \mathbf{R} : (u,v)\mapsto -\langle \partial_uv, \nu\rangle$$
from Proposition 10.3.4 is called the second fundamental form for ∂D. The associated symmetric map
$$S^p_{\partial D} : T_p(\partial D)\to T_p(\partial D) : v\mapsto \partial_v\nu$$
from Proposition 10.3.4 is called the Weingarten map, or shape operator, for ∂D. The eigenvalues {κ₁, ..., κ_{n−1}} of S^p_{∂D} are called the principal curvatures of ∂D at p, and a corresponding ON-basis {e′₁, ..., e′_{n−1}} for T_p(∂D) of eigenvectors of S^p_{∂D} is referred to as the principal directions of curvature at p.

Note that if D is a convex domain, then κ_j ≥ 0.

Theorem 10.3.6 (Weitzenböck identities). Let D be a bounded C² domain, and let e′_j denote the principal directions of curvatures κ_j, j = 1, ..., n−1. Then
$$\int_D |\nabla\otimes F|^2\,dx = \int_D (|dF|^2 + |\delta F|^2)\,dx - \sum_{j=1}^{n-1}\int_{\partial D}\kappa_j\,|e_j'\wedge F|^2\,dy,\qquad F\in H^1_\perp(D),$$
$$\int_D |\nabla\otimes F|^2\,dx = \int_D (|dF|^2 + |\delta F|^2)\,dx - \sum_{j=1}^{n-1}\int_{\partial D}\kappa_j\,|e_j'\,\lrcorner\,F|^2\,dy,\qquad F\in H^1_k(D),$$
where $|\nabla\otimes F|^2 = \sum_{j=1}^n |\partial_jF|^2$.
Example 10.3.7 (Kadlec's formula). Consider a scalar function U : D → R satisfying Poisson's equation ∆U = f in D, with Dirichlet boundary conditions U|_{∂D} = 0. This means that its gradient vector field F = ∇U is normal at the boundary. Assuming that F ∈ H^1_⊥(D), we have Kadlec's formula
$$\sum_{i,j=1}^{n}\int_D |\partial_i\partial_jU|^2\,dx = \int_D |f|^2\,dx - (n-1)\int_{\partial D} H(y)\,|\nabla U|^2\,dy,$$
where H(y) := Tr(S^y_{∂D})/(n−1) is the mean curvature of the boundary. Note that Lagrange's identity, Proposition 3.1.1, shows that |e′_j ∧ F|² = |e′_j|²|F|² − |⟨e′_j, F⟩|² = |F|². If instead U satisfies Neumann boundary conditions ⟨ν, ∇U⟩ = 0, then we get a similar identity
$$\sum_{i,j=1}^{n}\int_D |\partial_i\partial_jU|^2\,dx = \int_D |f|^2\,dx - \sum_{j=1}^{n-1}\int_{\partial D}\kappa_j\,|\langle e_j',\nabla U\rangle|^2\,dy,$$
but where all the principal curvatures appear and not only the mean curvature.

For convex domains, the Weitzenböck identities imply that
$$\int_D |\nabla\otimes F|^2\,dx \le \int_D (|dF|^2+|\delta F|^2)\,dx,\qquad \text{for all } F\in H^1_\perp(D)\cup H^1_k(D),$$
since in this case all κ_j ≥ 0. In general, we have the following estimates.

Corollary 10.3.8 (Gaffney's inequality). Let D be a bounded C² domain. Then
$$\int_D |\nabla\otimes F|^2\,dx \lesssim \int_D (|dF|^2+|\delta F|^2+|F|^2)\,dx,\qquad \text{for all } F\in H^1_\perp(D)\cup H^1_k(D).$$

Proof. For a C² domain, we note that the principal curvatures κ_j are bounded functions, which shows that the boundary integral terms are ≲ ‖F‖²_{L²(∂D)}. To replace this by a term ‖F‖²_{L²(D)}, we apply Stokes's theorem to obtain a standard trace estimate as follows. Let θ ∈ C_0^∞(X; V) be a vector field such that inf_{y∈∂D}⟨θ(y), ν(y)⟩ > 0, that is, θ is uniformly outward pointing on ∂D. Stokes's theorem gives
$$\int_{\partial D} |F|^2\langle\theta,\nu\rangle\,dy = \int_D \big(2\langle\partial_\theta F, F\rangle + |F|^2\operatorname{div}\theta\big)\,dx.$$
Estimating, this shows that
$$\|F\|_{L^2(\partial D)}^2 \lesssim \int_{\partial D} |F|^2\langle\theta,\nu\rangle\,dy \lesssim \int_D \big(|\nabla\otimes F||F| + |F|^2\big)\,dx.$$
It follows from the Weitzenböck identities that
$$\int_D |\nabla\otimes F|^2\,dx \le \int_D (|dF|^2+|\delta F|^2)\,dx + C\int_D |\nabla\otimes F||F|\,dx + C\int_D |F|^2\,dx,\tag{10.3}$$
for some constant C < ∞. We next use an estimate technique called the absorption inequality, which is
$$ab \le \frac{\epsilon}{2}a^2 + \frac{1}{2\epsilon}b^2.$$
This is, of course, nothing deeper than (√ε a − b/√ε)² ≥ 0. To use this, we take a = |∇⊗F(x)|, b = |F(x)|, and ε = C^{-1}. This shows that the second term on the right-hand side in (10.3) is
$$C\int_D |\nabla\otimes F||F|\,dx \le \frac{1}{2}\int_D |\nabla\otimes F|^2\,dx + \frac{C^2}{2}\int_D |F|^2\,dx,$$
where the first term can be moved to the left-hand side in (10.3) and be absorbed there. Gaffney's inequality follows. □

Proof of Theorem 10.3.6. (i) Let first F ∈ C²(D̄) and consider the 1-form
$$\theta(x,v) := \sum_{j=1}^n \langle v, e_j\rangle\langle F(x), \partial_jF(x)\rangle - \langle v\wedge F(x), dF(x)\rangle - \langle v\,\lrcorner\,F(x), \delta F(x)\rangle,$$
for x ∈ D, v ∈ V. We calculate its exterior derivative
$$\dot\theta(x,\nabla) = \big(|\nabla\otimes F|^2 + \langle F,\Delta F\rangle\big) - \big(|dF|^2 + \langle F,\delta dF\rangle\big) - \big(|\delta F|^2 + \langle F,d\delta F\rangle\big) = |\nabla\otimes F|^2 - |dF|^2 - |\delta F|^2,$$
since ∆ = δd + dδ. The Stokes formula (7.4) gives
$$\int_D \big(|\nabla\otimes F|^2 - |dF|^2 - |\delta F|^2\big)dx = \int_{\partial D}\big(\langle F,\partial_\nu F\rangle - \langle \nu\wedge F, dF\rangle - \langle \nu\,\lrcorner\,F, \delta F\rangle\big)dy.$$
We continue and rewrite the right-hand side with nabla calculus as
$$\langle F, \langle\nu,\nabla\rangle F\rangle - \langle F, \nu\wedge(\nabla\,\lrcorner\,F)\rangle - \langle \nu\wedge F, \nabla\wedge F\rangle = \langle F, \nabla\,\lrcorner\,(n\wedge\dot F)\rangle - \langle \nu\wedge F, \nabla\wedge F\rangle \tag{10.4}$$
$$= -\langle F, \nabla\,\lrcorner\,(\dot n\wedge F)\rangle + \langle F, \nabla\,\lrcorner\,(n\wedge F)\rangle - \langle \nu\wedge F, \nabla\wedge F\rangle, \tag{10.5}$$
where n ∈ C¹(X; V) denotes an extension of ν. The first step uses the algebraic anticommutation relation ν∧(∇⌟F) = ⟨ν,∇⟩F − ∇⌟(n∧Ḟ), and the second step uses the analytic product rule ∇⌟(n∧F) = ∇⌟(ṅ∧F) + ∇⌟(n∧Ḟ). At p ∈ ∂D, we calculate the first term in the ON-basis {e′₁, ..., e′_{n−1}, ν} and get
$$\langle F, \nabla\,\lrcorner\,(\dot n\wedge F)\rangle = \sum_{j=1}^{n-1}\kappa_j|e_j'\wedge F|^2 + \langle \nu\wedge F, (\partial_\nu n)\wedge F\rangle.$$
On the other hand, the normal derivatives in the last two terms in (10.4) are
$$\langle F, \nu\,\lrcorner\,\partial_\nu(n\wedge F)\rangle - \langle \nu\wedge F, \nu\wedge\partial_\nu F\rangle = \langle \nu\wedge F, (\partial_\nu n)\wedge F\rangle.$$
Therefore these three terms involving the normal derivatives cancel, and we obtain the identity
$$\int_D \big(|\nabla\otimes F|^2 - |dF|^2 - |\delta F|^2\big)dx = -\sum_{j=1}^{n-1}\int_{\partial D}\kappa_j|e_j'\wedge F|^2\,dy + \int_{\partial D}\big(\langle F, \nabla'\,\lrcorner\,(\nu\wedge F)\rangle - \langle \nu\wedge F, \nabla'\wedge F\rangle\big)dy, \tag{10.6}$$
where $\nabla' := \nu\,\lrcorner\,(\nu\wedge\nabla) = \sum_{j=1}^{n-1}e_j'\partial_{e_j'}$.

(ii) Next consider F ∈ H^1_⊥(D). To obtain the first Weitzenböck identity, we use the fact that C²(D̄) is dense in H¹(D) and take F_j ∈ C²(D̄) such that F_j → F and ∇⊗F_j → ∇⊗F in L²(D). On the C² manifold ∂D, we use the Sobolev spaces H^{1/2}(∂D) and H^{−1/2}(∂D), as discussed in Example 6.4.1, where H^{1/2} ⊂ L² ⊂ H^{−1/2}. As usual, we allow the functions to be multivector fields, and require that each component function belong to such a Sobolev space. We need the following well-known facts.

• The trace map H¹(D) → H^{1/2}(∂D) : F ↦ F|_{∂D} is a bounded linear operator.
• The tangential derivative ∇′ defines a bounded linear operator ∇′ : H^{1/2}(∂D) → H^{−1/2}(∂D).
• Multiplication by a C¹ function like ν is a bounded operation on H^{1/2}(∂D).
• The spaces H^{1/2}(∂D) and H^{−1/2}(∂D) are dual; in particular, we have the estimate
$$\Big|\int_{\partial D}\langle F, G\rangle\,dx\Big| \lesssim \|F\|_{H^{1/2}(\partial D)}\|G\|_{H^{-1/2}(\partial D)}.$$

Given this, we apply (10.6) to F_j and take the limit as j → ∞. Since ν∧F_j → ν∧F = 0 in H^{1/2}(∂D), we obtain the Weitzenböck identity for F ∈ H^1_⊥(D).

(iii) To obtain the Weitzenböck identity for F ∈ H^1_k(D), we instead rewrite θ as
$$\theta(x,\nu) = \langle F, \langle\nu,\nabla\rangle F\rangle - \langle F, \nu\,\lrcorner\,(\nabla\wedge F)\rangle - \langle \nu\,\lrcorner\,F, \nabla\,\lrcorner\,F\rangle = \langle F, \nabla\wedge(n\,\lrcorner\,\dot F)\rangle - \langle \nu\,\lrcorner\,F, \nabla\,\lrcorner\,F\rangle$$
$$= -\langle F, \nabla\wedge(\dot n\,\lrcorner\,F)\rangle + \langle F, \nabla\wedge(n\,\lrcorner\,F)\rangle - \langle \nu\,\lrcorner\,F, \nabla\,\lrcorner\,F\rangle,$$
and proceed as in (i) and (ii). □

We finally prove Theorem 10.3.3. From the Weitzenböck identities, the Gaffney inequalities show that on C² domains we have H^1_k(D) ⊂ D(d) ∩ D(δ) and H^1_⊥(D) ⊂ D(d) ∩ D(δ), and that
$$\|\nabla\otimes F\|^2 + \|F\|^2 \approx \|\nabla\wedge F\|^2 + \|\nabla\,\lrcorner\,F\|^2 + \|F\|^2$$
in L²(D) norm, for all F ∈ H^1_k(D) and all F ∈ H^1_⊥(D). It is important to note that this equivalence of norms does not, without further work, imply that D(d) ∩ D(δ) ⊂ H^1_k(D) and D(d) ∩ D(δ) ⊂ H^1_⊥(D). In particular, the absorption technique in the proof of Corollary 10.3.8 fails to prove this.

Proof of Theorem 10.3.3. It remains to prove D(d) ∩ D(δ) ⊂ H^1_⊥(D). By Hodge duality as in Proposition 10.2.2, this will imply the corresponding result for normal boundary conditions.

(i) Consider first the case that D is the unit ball B := {x ∈ V ; |x| < 1} and let F ∈ D(d) ∩ D(δ). Using a partition of unity, we write F = F₀ + F₁, F₀, F₁ ∈ D(d) ∩ D(δ), where F₀(x) = 0 when |x| > 1/2 and F₁(x) = 0 when |x| < 1/3. We use the inversion R(x) = 1/x in the unit sphere, with derivative R_xh = −x^{-1}hx^{-1}, to extend F₁ to
$$\tilde F_1(x) := \begin{cases} F_1(x), & |x| < 1, \\ R^*F_1(x), & |x| > 1. \end{cases}$$
Arguing as in the proof of Proposition 10.2.7, replacing R^{n−1} by the sphere |x| = 1, we conclude that F̃₁ ∈ D(d) on X. Moreover, R is a conformal map and R^* = |x|^{2(n−1)}R̃_*^{-1}. From this it follows that R^*F₁ ∈ D(δ) on X, with
$$\nabla\,\lrcorner\,R^*F_1(x) = |x|^{2(n-2)}\,x\,\lrcorner\,\tilde R_*^{-1}F_1(x) + |x|^{2(n-1)}\,\tilde R_*^{-1}(\nabla\,\lrcorner\,F_1)(x)$$
for |x| > 1. Recall that F₁, extended by 0 for |x| > 1, belongs to D(δ). We obtain an extension F̃ := F₀ + F̃₁ of F to X, with F̃ = 0 for |x| > 3 and F̃, dF̃, and δF̃ all belonging to L²(X). By Plancherel's theorem and Lagrange's identity, we get
$$(2\pi)^n\int_X |\nabla\otimes\tilde F|^2\,dx = \int_X |\mathcal{F}(\tilde F)|^2|\xi|^2\,d\xi = \int_X\big(|\xi\wedge\mathcal{F}(\tilde F)|^2 + |\xi\,\lrcorner\,\mathcal{F}(\tilde F)|^2\big)\,d\xi < \infty.$$
Recall from Example 10.1.8 that d and δ on X are the Fourier multipliers iμ_ξ and iμ*_ξ. This shows that F ∈ H¹(D).

(ii) Next consider a general bounded C² domain D. Localizing the problem with a partition of unity, we may assume that D is C² diffeomorphic to B. Moreover, we may assume that we have a C² map ρ : [0,1] × B̄ → X such that ρ_t = ρ(t, ·) defines a C² diffeomorphism B → ρ_t(B) =: D_t, with D₀ = B and D₁ = D. For fixed t ∈ [0,1], we consider the inclusion H^1_k(D_t) ⊂ D(d) ∩ D(δ) on the C² domain D_t. We note from Proposition 10.1.3 that I + d + δ : D(d) ∩ D(δ) → L²(D_t) is an invertible isometry, so the inclusion amounts to I + d + δ : H^1_k(D_t) → L²(D_t) being an injective semi-Fredholm operator. See Definition 6.4.9. From (i), we know that it is surjective for the ball at t = 0. To apply the method of continuity and conclude that it is surjective for all t, and in particular for D at t = 1, we note that the normalized pushforward ρ̃_{t*} defines invertible maps H^1_k(B) → H^1_k(D_t) and L²(B) → L²(D_t). The method of continuity therefore applies to the family of semi-Fredholm operators
$$(\tilde\rho_{t*})^{-1}(I + d + \delta)\,\tilde\rho_{t*} : H^1_k(B)\to L^2(B).$$
We conclude that I + d + δ : H^1_k(D_t) → L²(D_t) is invertible, which shows that D(d) ∩ D(δ) = H^1_k(D) and completes the proof of Theorem 10.3.3. □
10.4 Bogovskiĭ and Poincaré Potentials

Recall that exterior and interior potentials in general are highly nonunique. In this section we prove the following surprising results about potentials on strongly Lipschitz domains D.

• We have seen in Example 10.3.2 that, in contrast to smooth domains, the potential U in the subspace R(δ) to F = dU ∈ R(d) may not belong to H¹(D). We refer to this potential U as the Hodge potential for F; it is characterized by its minimal L² norm. It follows from Theorem 10.4.3 below that every exact field F ∈ R(d) on any bounded and strongly Lipschitz domain D nevertheless has a potential Ũ, in general different from the Hodge potential, such that Ũ ∈ H¹(D) and dŨ = F. We refer to such potentials as (regularized) Poincaré potentials for F.

• We have seen that the Hodge potential U ∈ R(d) to F = δU ∈ R(δ) is tangential on ∂D, meaning that half of the component functions of U vanish there. Theorem 10.4.3 below shows that every field F ∈ R(δ) on any bounded and strongly Lipschitz domain D in fact has a potential Ũ, in general different from the Hodge potential, such that Ũ ∈ H₀¹(D) and δŨ = F. This means that all component functions of Ũ vanish on ∂D, and we note that this is a nontrivial result also for smooth domains. We refer to such potentials as Bogovskiĭ potentials for F.

Similarly, and related by the Hodge star, there exist Poincaré potentials Ũ ∈ H¹(D) for F ∈ R(δ), and Bogovskiĭ potentials Ũ ∈ H₀¹(D) for F ∈ R(d). We formulate the results only for d and δ, and leave it to the reader to translate the results in this section to d and δ.

First consider a star-shaped domain D. In what follows, we extend the operators, initially defined on k-vector fields, by linearity to act on general multivector fields. The method we use to construct a Poincaré potential U to a given field F ∈ R(d) on D builds on Poincaré's Theorem 7.5.2. If D is star-shaped with respect to p₀ ∈ D, then this gives the potential
$$T_{p_0}(F)(x) := \int_0^1 (x-p_0)\,\lrcorner\,F(p_0+t(x-p_0))\,t^{k-1}\,dt,\qquad x\in D, \tag{10.7}$$
provided k ≥ 1 and F is a smooth k-vector field. For a scalar function F : D → ∧⁰R, we let T_{p₀}F = 0.

We would like to extend (10.7) to fields that are square integrable, without any assumption on regularity. To obtain a bounded operator, we need to average the formula (10.7) over base points p around p₀. In what follows, we assume that D is star-shaped not only with respect to a point, but with respect to a whole ball. We fix θ ∈ C_0^∞(B(p₀; ε)) and assume that D is star-shaped with respect to each p ∈ B(p₀; ε), where ε > 0 and ∫θ dx = 1. Then define the averaged operator
$$T_DF(x) := \int_{|p-p_0|\le\epsilon}\theta(p)\,T_pF(x)\,dp,\qquad x\in D. \tag{10.8}$$
We rewrite this formula by changing the variables p and t to y := p + t(x − p) and s = 1/(1 − t) − 1. This gives
$$T_DF(x) = \int_D (x-y)\,\lrcorner\,F(y)\,k_\theta(x,y)\,dy, \tag{10.9}$$
where
$$k_\theta(x,y) := \int_0^\infty \theta(y+s(y-x))\,s^{k-1}(1+s)^{n-k}\,ds.$$
This operator T_D constructs the regularized Poincaré potential for an exact k-vector field on a bounded domain that is star-shaped with respect to B(p₀; ε).

Exercise 10.4.1 (Kernel support). Show that k_θ(x, y) ≠ 0 is possible only when y lies on the straight line between x and a point p ∈ supp θ, and that we have the estimates
$$|k_\theta(x,y)| \lesssim \frac{1}{|x-y|^n},\qquad x,y\in D,$$
so that T_D is a weakly singular integral operator. Note how, by averaging with θ, we have replaced the line integral for T_{p₀} by a volume integral over a conical region for T_D.

The adjoint operator
$$T_D^*F(x) = \int_D (y-x)\wedge F(y)\,k_\theta(y,x)\,dy \tag{10.10}$$
constructs the Bogovskiĭ potential for a (k−1)-vector field F ∈ R(δ) on the star-shaped domain D. We see from Exercise 10.4.1 that T_D^*F|_{∂D} = 0, since for T_D^* we integrate over a cone starting at x, away from B(p₀, ε).

For domains D that are C² diffeomorphic to a domain that is star-shaped with respect to a ball, we can pull back and push forward these operators T_D and T_D^* to obtain Bogovskiĭ and regularized Poincaré potentials. Next we extend these constructions to general strongly Lipschitz domains, and provide the necessary analysis.
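Before passing to general Lipschitz domains, the unaveraged formula (10.7) can be sanity-checked symbolically. For k = 1 and base point p₀ = 0, it is the classical Poincaré-lemma potential U(x) = ∫₀¹ x ⌟ F(tx) dt, and dU = F whenever dF = 0. The following sympy sketch in R², with a hypothetical choice of closed 1-form, is an illustration and not part of the book:

```python
import sympy as sp

x, y, t = sp.symbols("x y t")

# A closed 1-form F = P dx + Q dy on R^2 (dF = 0 since Q_x = P_y)
P, Q = 2 * x * y, x**2
assert sp.simplify(sp.diff(Q, x) - sp.diff(P, y)) == 0

# Poincare potential (10.7) with p0 = 0 and k = 1 (so t^{k-1} = 1):
# U(x, y) = integral_0^1 <(x, y), F(t x, t y)> dt
integrand = x * P.subs({x: t * x, y: t * y}) + y * Q.subs({x: t * x, y: t * y})
U = sp.integrate(integrand, (t, 0, 1))

# dU = F recovers the closed form
assert sp.simplify(sp.diff(U, x) - P) == 0
assert sp.simplify(sp.diff(U, y) - Q) == 0
```

Here U = x²y, and the check confirms the homotopy property that the averaged operator T_D inherits from (10.7).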
Definition 10.4.2 (Bogovskiĭ and Poincaré maps). Let D be a bounded and strongly Lipschitz domain. Fix a finite cover D = ⋃_α D_α by domains D_α that are star-shaped with respect to balls B(p_α; ε). Further fix θ_α ∈ C_0^∞(B(p_α; ε)) with ∫θ_α dx = 1, and a partition of unity η_α ∈ C^∞(D̄) subordinate to the covering {D_α}. We assume that η_α = 1 on a neighborhood of supp θ_α.

The regularized Poincaré map, with these choices D_α, θ_α, η_α, for d on D is
$$T_DF(x) = \sum_\alpha \eta_\alpha(x)\,T_{D_\alpha}(F|_{D_\alpha})(x),\qquad x\in D.$$
The Bogovskiĭ map, with these choices D_α, θ_α, η_α, for δ on D is
$$T_D^*F(x) = \sum_\alpha T_{D_\alpha}^*(\eta_\alpha F|_{D_\alpha})(x),\qquad x\in D.$$
Here T_{D_α} and T_{D_α}^* are the Poincaré and Bogovskiĭ maps on the star-shaped domains D_α, constructed as above.
Unlike the star-shaped case, these Bogovskiĭ and regularized Poincaré maps on general strongly Lipschitz domains do not straight away give potentials for (co-)exact fields. We proceed as follows.

Theorem 10.4.3 (Bogovskiĭ and Poincaré homotopies). Let D be a bounded and strongly Lipschitz domain. The regularized Poincaré potential map from Definition 10.4.2 maps T_D : C^∞(D̄) → C^∞(D̄) and extends by continuity to a bounded operator
$$T_D : L^2(D)\to H^1(D).$$
The Bogovskiĭ potential map from Definition 10.4.2 maps T_D^* : C_0^∞(D) → C_0^∞(D) and extends by continuity to a bounded operator
$$T_D^* : L^2(D)\to H^1_0(D).$$
We have the homotopy relations
$$d(T_DF) + T_D(dF) + K_DF = F,\qquad F\in \mathsf{D}(d),$$
$$-\delta(T_D^*F) - T_D^*(\delta F) + K_D^*F = F,\qquad F\in \mathsf{D}(\delta),$$
with perturbation terms
$$K_DF(x) = \sum_\alpha \eta_\alpha(x)\int\theta_\alpha F_0\,dy + \sum_\alpha \nabla\eta_\alpha(x)\wedge T_{D_\alpha}(F|_{D_\alpha})(x),$$
$$K_D^*F(x) = \sum_\alpha \theta_\alpha(x)\int\eta_\alpha F_0\,dy + \sum_\alpha T_{D_\alpha}^*(\nabla\eta_\alpha\,\lrcorner\,F|_{D_\alpha})(x),$$
which are bounded, K_D : L²(D) → H¹(D) and K_D^* : L²(D) → H₀¹(D). Here F₀ denotes the ∧⁰V part of F.

To see how Theorem 10.4.3 implies the existence of Bogovskiĭ and Poincaré potentials, we consider the Hodge decompositions
$$L^2(D) = \mathsf{R}(d)\oplus C_k(D)\oplus \mathsf{R}(\delta) = \mathsf{R}(d)\oplus C_k(D)\oplus \mathsf{R}(\delta),\tag{10.11}$$
where, as in Theorem 10.3.1, d maps R(δ) onto R(d) and δ maps R(d) onto R(δ).

Given F ∈ R(d), we apply the homotopy relation to the Hodge potential U ∈ R(δ), with dU = F, to obtain U = d(T_DU) + T_D(dU) + K_DU, and in particular,
$$F = dU = d(T_DF + K_DU).$$
Therefore the field Ũ := T_DF + K_DU ∈ H¹(D) is a regularized Poincaré potential for F. Similarly, for F ∈ R(δ) we apply the homotopy relation to the Hodge potential U ∈ R(d), with δU = F, to obtain
$$F = \delta U = \delta(-T_D^*F + K_D^*U),$$
where the field Ũ := −T_D^*F + K_D^*U ∈ H₀¹(D) is a Bogovskiĭ potential for F.

Proof of Theorem 10.4.3. (i) Let F ∈ C^∞(D̄). Then F|_{D_α} ∈ C^∞(D̄_α), and we see from (10.8) for the star-shaped domain D_α that T_{D_α}(F|_{D_α}) ∈ C^∞(D̄_α). Note that T_{D_α} acts on C^∞(X), but the values T_{D_α}F(x), for x ∈ D_α, depend only on F|_{D_α}. With the partition of unity η_α, we obtain T_DF ∈ C^∞(D̄).

Let F ∈ C_0^∞(D). Then η_αF|_{D_α} ∈ C_0^∞(D_α), and we see from Exercise 10.4.1 that supp T_{D_α}^*(η_αF|_{D_α}) is compactly contained in D_α. To verify smoothness, we write
$$T_{D_\alpha}^*G(x) = -\int_{D_\alpha} z\wedge G(x-z)\int_0^\infty \theta_\alpha(x+sz)\,s^{k-1}(1+s)^{n-k}\,ds\,dz.$$
Differentiation with respect to x shows that T_{D_α}^*(η_αF|_{D_α}) ∈ C_0^∞(D_α), and therefore that T_D^*F ∈ C_0^∞(D).

Averaging the homotopy relation in Exercise 7.5.6, we obtain
$$d(T_{D_\alpha}F) + T_{D_\alpha}(dF) + K_{D_\alpha}F = F\quad\text{on } D_\alpha,\qquad\text{with}\quad K_{D_\alpha}F := \int\theta_\alpha F_0\,dx.$$
As in the proof of Theorem 10.3.1, the product rule yields d(T_DF) + T_D(dF) + K_DF = F on D, and duality yields the stated formulas for δ.

(ii) It remains to establish H¹ bounds for T_{D_α} and T_{D_α}^*. To this end, assume that D = D_α is star-shaped with respect to a ball and consider the operators (10.9) and (10.10). By Exercise 10.4.1, these are weakly singular integral operators, and Schur estimates as in Exercise 6.4.3 show that T_D is bounded on L²(D). Expanding (1+s)^{n−k} with the binomial theorem, we may further replace k_θ(x, y) by
$$\int_0^\infty \theta(y+s(y-x))\,s^{n-1}\,ds.$$
Indeed, in estimating ‖∇⊗T_DF‖_{L²} the difference will be a weakly singular operator that is bounded as above, and similarly for ‖∇⊗T_D^*F‖_{L²}. Make the change of variables t = s|y−x|, fix a coordinate 1 ≤ i ≤ n, and define
$$k(x,z) := \frac{z_i}{|z|^n}\int_0^\infty \theta\Big(x+t\frac{z}{|z|}\Big)\,t^{n-1}\,dt.$$
Estimating the multivector fields componentwise, we see that it is enough to consider the operators
$$Sf(x) := \int_D k(y, y-x)f(y)\,dy\qquad\text{and}\qquad S^*f(x) := \int_D k(x, x-y)f(y)\,dy,$$
and prove bounds on ‖∇Sf‖_{L²} and ‖∇S^*f‖_{L²}. We note that k(x, z) is homogeneous of degree −n+1 with respect to z. For fixed x, we expand k(x, z/|z|) in a series of spherical harmonics on the unit sphere S. We get
$$k(x,z) = \frac{1}{|z|^{n-1}}\sum_{j=0}^\infty\sum_{m=1}^{h_j}k_{jm}(x)Y_{jm}(z/|z|) = \sum_{j=0}^\infty\sum_{m=1}^{h_j}k_{jm}(x)\frac{Y_{jm}(z)}{|z|^{n-1+j}},$$
where {Y_{jm}}_{m=1}^{h_j} denotes an ON-basis for the space P_j^{sh}(S) of scalar-valued spherical harmonics, for j ∈ N. See Section 8.2. In particular, the coefficients are k_{jm}(x) := ∫_S k(x, z)Y_{jm}(z) dz. Define weakly singular convolution integral operators
$$S_{jm}f(x) := \int_D \frac{Y_{jm}(x-y)}{|x-y|^{n-1+j}}f(y)\,dy.$$
With the k_{jm} as multipliers we have
$$Sf(x) = \sum_{j=0}^\infty\sum_{m=1}^{h_j}(-1)^jS_{jm}(k_{jm}f)(x),\qquad S^*f(x) = \sum_{j=0}^\infty\sum_{m=1}^{h_j}k_{jm}(x)S_{jm}f(x).$$
The main estimate we need is
$$\|S_{jm}\|_{L^2(D)\to H^1(D)} \lesssim (1+j)^{n-2}.\tag{10.12}$$
To see this, we use zonal harmonics as in Section 8.2 to estimate
$$|Y_{jm}(z)| = \Big|\int_S Z_j(z,y)Y_{jm}(y)\,dy\Big| \le \|Z_j(z,\cdot)\|_{L^2(S)}\|Y_{jm}\|_{L^2(S)} \lesssim (1+j)^{n-2}|z|^j,$$
which yields the L² estimate. To bound ∇S_{jm}f in L²(X), we use Proposition 6.2.1 to see that ∇S_{jm} is a Fourier multiplier, with symbol estimated by
$$\Big|\xi\,c\,\frac{\Gamma((1+j)/2)}{\Gamma((n-1+j)/2)}\,\frac{Y_{jm}(\xi)}{|\xi|^{1+j}}\Big| \lesssim (1+j)^{n-2},\qquad \xi\in X,$$
for a dimensional constant c. This proves (10.12).

To estimate the multipliers
$$k_{jm}(x) = \int_S k(x,z)Y_{jm}(z)\,dz,$$
we use that k(x, ·) is smooth on S, while Y_{jm} becomes more oscillatory as j grows, to show that k_{jm} decays in j as follows. By Proposition 8.2.15, the spherical Laplace operator ∆_S is a self-adjoint operator in L²(S) with ∆_SY_{jm} = (2−n−j)jY_{jm}. Using self-adjointness N times shows that
$$k_{jm}(x) = \frac{1}{(2-n-j)^Nj^N}\int_S\big(\Delta_S^Nk(x,z)\big)Y_{jm}(z)\,dz.$$
Since ∆_S^Nk(x, ·), for any fixed N, is bounded, we get the estimate |k_{jm}(x)| ≲ (1+j)^{−2N}. Similarly, we bound
$$\nabla k_{jm}(x) = \int_S \nabla_xk(x,z)Y_{jm}(z)\,dz$$
uniformly by (1+j)^{−2N}. Collecting our estimates, we obtain
$$\|Sf\|_{H^1(D)} \lesssim \sum_{j=0}^\infty h_j\,(1+j)^{n-2}(1+j)^{-2N}\,\|f\|_{L^2(D)} \lesssim \|f\|_{L^2(D)},$$
$$\|S^*f\|_{H^1(D)} \lesssim \sum_{j=0}^\infty h_j\,(1+j)^{-2N}(1+j)^{n-2}\,\|f\|_{L^2(D)} \lesssim \|f\|_{L^2(D)},$$
provided we fix a large enough N. This completes the proof. □
10.5 Čech Cohomology

In this section we collect some tools from algebraic topology that we use in Section 10.6 to calculate the dimensions of the finite-dimensional cohomology spaces N(d) ∩ N(δ), more precisely the Betti numbers b_k(D), from Definition 7.6.3. We also use these tools in Chapters 11 and 12.

Our starting point is the notion of a sheaf, where we only use the following simplified version of this concept. We consider some set D and some fixed finite covering of it by subsets D₁, ..., D_N, so that
$$D = D_1\cup\dots\cup D_N.$$
By a sheaf F on D we mean a collection of linear spaces F(D′), one for each intersection D′ of the subsets D_j. In fact, it is only the additive structure of sheaves that is relevant, and in Chapter 11 we shall use Čech cohomology where the spaces F(D′) are the smallest additive group Z₂ = {0, 1}. The linear spaces that we use in this chapter are supposed to behave like spaces of functions defined on D′, in the sense that we require that there exist linear restriction maps
$$\mathcal{F}(D')\to \mathcal{F}(D'') : f\mapsto f|_{D''}$$
whenever D″ ⊂ D′ ⊂ D. If an intersection D′ is empty, then we require that the linear space F(D′) be trivial, that is, F(D′) = {0}.

The intersections D_s = D_{s₁} ∩ ··· ∩ D_{s_k} of distinct subsets D_{s_j} that we consider are indexed by the 2^N subsets s = {s₁, ..., s_k} ⊂ {1, ..., N}. Since the Čech algebra that we are about to construct is alternating, we choose below to index the intersections not by s, but by auxiliary basis multivectors e_s in ∧R^N. This is only a formal notation, which turns out to be useful, since it allows us to recycle some, by now well known to us, exterior algebra.

Definition 10.5.1 (k-cochains). Let F be a sheaf on D as above, with covering D = {D₁, ..., D_N}. A Čech k-cochain f associates to each (k+1)-fold intersection D_s, |s| = k+1, an element in the linear space F(D_s), which we denote by ⟨f, e_s⟩ ∈ F(D_s). This is not an inner product, but only a convenient notation for the value of f on D_s. We also extend the definition of f homogeneously by letting ⟨f, αe_s⟩ := α⟨f, e_s⟩, for α ∈ R.

The space of all Čech k-cochains f on D with values in F is denoted by C^k(D; F). Viewing C^k(D; F) as ⊕_{s:|s|=k+1}F(D_s), it is clear that this is a linear space. For k < 0 and k ≥ N, we let C^k(D; F) := {0}. The Čech coboundary operator ∂_k : C^k(D; F) → C^{k+1}(D; F) is the linear map defined by
$$\langle \partial_kf, e_s\rangle := \sum_{j=1}^N \langle f, e_j\,\lrcorner\,e_s\rangle|_{D_s},\qquad |s| = k+2,\ f\in C^k(\mathcal{D};\mathcal{F}).$$
For k < 0 and k ≥ N−1, we let ∂_k = 0. We will see that Čech k-cochains and ∂_k behave in many ways like k-covector fields and the exterior derivative d. We need some terminology.
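For the constant sheaf F(D′) = R, with all restriction maps the identity, e_j ⌟ e_s contributes the sign (−1)^{i−1} when j = s_i, so the coboundary becomes the familiar alternating sum over deleted indices, and ∂_{k+1}∂_k = 0 follows just as d² = 0 does. A small numeric sketch (an illustration only, not the book's notation):

```python
import itertools, random

N = 4  # covering D = D_1 u ... u D_4, constant sheaf F(D') = R
idx = range(1, N + 1)

def coboundary(f, k):
    """<d_k f, e_s> = sum_j <f, e_j interior e_s> for |s| = k + 2.
    For j = s_i the interior product carries the sign (-1)^(i-1)
    (0-based: (-1)^i), giving the alternating sum over deleted indices."""
    df = {}
    for s in itertools.combinations(idx, k + 2):
        df[s] = sum((-1) ** i * f[s[:i] + s[i + 1:]] for i in range(len(s)))
    return df

random.seed(0)
for k in (0, 1):
    f = {s: random.uniform(-1, 1) for s in itertools.combinations(idx, k + 1)}
    ddf = coboundary(coboundary(f, k), k + 1)
    assert all(abs(v) < 1e-12 for v in ddf.values())  # d_{k+1} d_k = 0
```

With nonconstant sheaves the same cancellation works because the restriction maps compose consistently; only the bookkeeping of ⟨f, e_s⟩|_{D_s} is added.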
Definition 10.5.2 (Complex of spaces). A complex of linear spaces is a sequence of linear maps between linear spaces
$$\cdots \xrightarrow{\partial_{j-3}} V_{j-2} \xrightarrow{\partial_{j-2}} V_{j-1} \xrightarrow{\partial_{j-1}} V_j \xrightarrow{\partial_j} V_{j+1} \xrightarrow{\partial_{j+1}} V_{j+2} \xrightarrow{\partial_{j+2}} \cdots$$
such that R(∂_{j−1}) ⊂ N(∂_j) in V_j. The complex is said to be exact at V_j if R(∂_{j−1}) = N(∂_j). If it is exact at all V_j, we say that the complex is exact. More generally, the cohomology of the complex at V_j is the quotient space H^j(V) := N(∂_j)/R(∂_{j−1}).

An important special case occurs when V_j = {0} for some j, so that ∂_j = ∂_{j−1} = 0. In this case, exactness at V_{j+1} means that ∂_{j+1} is injective, and exactness at V_{j−1} means that ∂_{j−2} is surjective.

Lemma 10.5.3. If (V_j, ∂_j) is an exact complex of finite-dimensional linear spaces and V_{j₁} = V_{j₂} = 0, then
$$\sum_{j_1<j<j_2}(-1)^j\dim V_j = 0.$$