Computing Highly Oscillatory Integrals 1611975115, 9781611975116

Highly oscillatory phenomena range across numerous areas in science and engineering and their computation represents a d

176 38 6MB

English Pages 182 [187] Year 2018

Report DMCA / Copyright

DOWNLOAD PDF FILE

Table of contents :
1.9781611975123.fm
1.9781611975123.ch1
1.9781611975123.ch2
1.9781611975123.ch3
1.9781611975123.ch4
1.9781611975123.ch5
1.9781611975123.ch6
1.9781611975123.ch7
1.9781611975123.ch8
1.9781611975123.appa
1.9781611975123.bm
Recommend Papers

Computing Highly Oscillatory Integrals
 1611975115, 9781611975116

  • 0 0 0
  • Like this paper and download? You can publish your own PDF file online for free in a few minutes! Sign Up
File loading please wait...
Citation preview

Computing Highly Oscillatory Integrals

OT155_Deano_FM_11-06-17.indd 1

11/21/2017 8:19:09 AM

Computing Highly Oscillatory Integrals Alfredo Deaño Daan Huybrechs Arieh Iserles

Society for Industrial and Applied Mathematics Philadelphia

OT155_Deano_FM_11-06-17.indd 3

11/21/2017 8:19:09 AM

Copyright © 2018 by the Society for Industrial and Applied Mathematics 10 9 8 7 6 5 4 3 2 1 All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA. Maple is a trademark of Waterloo Maple, Inc. MATLAB is a registered trademark of The MathWorks, Inc. For MATLAB product information, please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA, 508-647-7000, Fax: 508-647-7001, [email protected], www.mathworks.com. Publications Director Executive Editor Developmental Editor Managing Editor Production Editor Copy Editor Production Manager Production Coordinator Compositor Graphic Designer

Kivmars H. Bowling Elizabeth Greenspan Gina Rinelli Harris Kelly Thomas Ann Manning Allen Bruce Owens Donna Witzleben Cally A. Shrader Cheryl Hufnagle Lois Sellers

Library of Congress Cataloging-in-Publication Data Please visit www.siam.org/books/ot155 to view the CIP data.

is a registered trademark.

OT155_Deano_FM_11-06-17.indd 4

11/21/2017 8:19:10 AM

To our families

s

OT155_Deano_FM_11-06-17.indd 5

11/21/2017 8:19:10 AM

Contents Preface

ix

1

Introduction 1 1.1 The 3 2 stages of mathematical research . . . . . . . . . . . . . . . . . 1.2 What’s wrong with standard quadrature? . . . . . . . . . . . . . . . . 1.3 An example: Filon quadrature . . . . . . . . . . . . . . . . . . . . . . .

2

Asymptotic theory of highly oscillatory integrals 2.1 Univariate asymptotic expansions . . . . . . . 2.2 How good is an asymptotic method? . . . . . 2.3 Unbounded intervals . . . . . . . . . . . . . . . 2.4 Multivariate asymptotic expansions . . . . . . 2.5 More general oscillators . . . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

5 5 10 13 13 23

Filon 3.1 3.2 3.3 3.4

3

4

5

6

and Levin methods Filon-type quadrature in an interval Multivariate Filon-type quadrature . Levin methods . . . . . . . . . . . . . . Filon or Levin? . . . . . . . . . . . . . .

1 1 1 3

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

29 29 38 42 56

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

59 59 60 67 74

Numerical steepest descent 5.1 The asymptotic method of steepest descent . 5.2 The numerical method of steepest descent . 5.3 Implementation aspects . . . . . . . . . . . . . 5.4 Multivariate integrals . . . . . . . . . . . . . . . 5.5 More general oscillators . . . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

79 . 79 . 87 . 99 . 101 . 106

Extended Filon method 4.1 Extended Filon method . . . . 4.2 Choosing internal nodes . . . . 4.3 Error analysis of FJ and FCC . 4.4 Adaptive Filon . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

Complex-valued Gaussian quadrature 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Complex-valued Gaussian quadrature for oscillatory integrals on [−1, 1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Complex quadrature and uniform asymptotic expansions . . . . . 6.4 A discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

109 109 115 125 133

viii

Contents

7

A highly oscillatory olympics 7.1 Heat 1: No stationary points, eight function evaluations . . . . . . 7.2 Heat 2: A stationary point, 12 function evaluations . . . . . . . . . 7.3 What is the best method? . . . . . . . . . . . . . . . . . . . . . . . . . .

135 135 141 145

8

Variations on the highly oscillatory theme 8.1 Oscillatory integrals in theory and in practice 8.2 A singular integral with multiple oscillators . 8.3 Integral transforms . . . . . . . . . . . . . . . . . 8.4 Postscript . . . . . . . . . . . . . . . . . . . . . . . .

149 149 151 158 162

A

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

Orthogonal polynomials 163 A.1 Orthogonal polynomials on the real line . . . . . . . . . . . . . . . . 163 A.2 Orthogonal polynomials with complex weight function . . . . . . 169

Bibliography

171

Index

179

Preface Numerical computation of highly oscillatory integrals came of age in the last 15 years. All three of us have been actively involved in this “highly oscillatory journey,” which, in our view, has now reached a level of relative completeness, rendering it suitable for a broad review. The box of tricks in the computation of highly oscillatory integrals—indeed, in the computation of a wider range of highly oscillatory phenomena, which includes, for example, highly oscillatory integral and differential equations—is fairly unusual to classically trained numerical analysts. This is because of its strong emphasis on asymptotic analysis in the oscillation parameter as a means to introduce numerical algorithms and, even more so, as a primary tool in understanding algorithms. The underlying mathematics is not very complicated, but it has different flavor to most texts in computational mathematics. Theoretical interest, though, is neither the only nor the main motivation for this monograph. Highly oscillatory integrals feature in numerous applications, from fluid dynamics to mathematical physics, electronic engineering, acoustic and electromagnetic scattering, and so on, and the practical need to evaluate them is ubiquitous, often as part and parcel of a wider computational project. We thus hope that this monograph will be useful as well as illuminating, presenting a range of practical, highly efficient, and affordable algorithms. We find the numerical analysis of highly oscillatory integrals a truly exciting chapter of computational mathematics, replete in unexpected and often counterintuitive ideas. Thus, we are so pleased to share it on these pages. While the theory has reached a stable stage which in our view makes it suitable for a review, nothing in mathematical research is truly final. We expect new ideas to come along (who knows, perhaps stimulated by this book) and add to the excitement. In particular, our review of Gaussian quadrature with a complex-valued measure in Chapter 6 represents work in a state of active exploration and clearly incomplete. We await further developments in this area. All three of us have contributed to the state of the art in highly oscillatory quadrature, but we are far from alone: mathematical research is a collective enterprise. The story of highly oscillatory quadrature is no exception to this rule, and we are delighted to acclaim Andreas Asheim, Jing Gao, Arno Kuijlaars, Nele Lejon, David Levin, Syvert Nørsett, Sheehan Olver, Stefan Vandewalle, Haiyong Wang, and Shuhuang Xiang, whose work and insight have contributed so much to the state of the art and feature widely in this volume. And, of course, we must mention Louis Napoleon George Filon, whose ideas almost a century ago, serially forgotten or misinterpreted, fostered much of the modern theory and whose spirit (benignly, we hope) hovers over this volume. Writing this book was a laborious process because all three of us are active academics, busy with the many preoccupations and challenges of academic existence, and ix

x

Preface

we have had to fit writing this book around our numerous other duties and projects. This means that deadlines have been serially missed, and we are truly grateful to SIAM’s Executive Editor, Elizabeth Greenspan, for her infinite understanding and sorely tested, saintly patience. Finally, we must pay tribute to our families for bearing with us during our work on this book. By now they must be used to our being busy, but in the last year they have had to get used to the three of us being very busy. They deserve our heartfelt thanks. Alfredo Deaño, University of Kent at Canterbury Daan Huybrechs, University of Leuven Arieh Iserles, University of Cambridge

Chapter 1

Introduction

1.1 The 3 21 stages of mathematical research Mathematical research often follows a set pattern of three stages: • First, a problem is identified, and the immediate reaction is “This is trivial!” After all, the problem is a variation on a known body of knowledge, and surely, with few straightforward tricks and tweaks, it should be easily sorted out. • Second, the above known body of knowledge is applied to the problem—and it does not work! It is tweaked to and fro—and still it fails to work, whereby a new outcry goes out: “This is impossible!" • Third, an altogether new approach is devised, analyzing the problem at hand with entirely new criteria and rules of engagement, understanding the nature of mathematics that made it “difficult” in the first place. And the new approach works beautifully, so much so that the “This is trivial!” crowd can exclaim in a self-congratulatory manner that, after all, they were always right. To this one might add another stage: • Having reached stage 3, write a monograph.

1.2 What’s wrong with standard quadrature? A quadrature formula approximates an integral by a finite sum, β ν  F (x) dμ(x) ≈ bk F (ck ), α

(1.1)

k=1

where F is a smooth function and dμ is a Borel measure, while c1 , . . . , cν ∈ [α, β] and b1 , . . . , bν are the nodes and the weights, respectively. Both nodes and weights depend upon dμ but are independent of the function F . Quadrature formulas are very well understood both as mathematical constructs and as practical algorithms [27]. Typically we measure them in terms of order: quadrature formula (1.1) is of order p ≥ 1 if it is exact whenever f itself is a polynomial of degree p − 1, and, other things being equal, the higher the order, the better. This rule 1

2

Chapter 1. Introduction

of thumb is loosely based on the classical theorem of Weierstrass that states that any continuous function in a compact interval can be uniformly approximated by polynomials. Given any distinct nodes c1 , . . . , cν , we can always choose weights so that the order is at least ν, integrating cardinal functions of Lagrange’s interpolation:  β ν x − ci dμ(x), k = 1, . . . , ν. bk = c α i =1 k − ci i =k

Higher-order p = ν + m follows by imposing orthogonality conditions on the nodal polynomial ν  c(x) = (x − ci ), i =1

that is,



β

x α

−1

 c(x) dμ(x) = 0,

 = 1, . . . , m,

β α

x m c(x) dμ(x) = 0.

The highest possible order, p = 2ν, is obtained when c is the ν-degree monic orthogonal polynomial with respect to the measure dμ—standard results from theory of orthogonal polynomials ensure that all its zeros live in (α, β) and are distinct; see the Appendix for details. This is the celebrated Gaussian (or Gauss–Christoffel) quadrature, a standard workhorse of scientific computation, familiar to veterans of undergraduate numerical analysis courses. The subject matter of this monograph is the quadrature of highly oscillatory integrals, e.g., 1 1 2 iωx f (x)e dx and I2 [ f ] = f (x)eiωx dx, I1 [ f ] = −1

−1

where f is a smooth function and ω  1: the integrand oscillates rapidly. Such integrals are ubiquitous in applications, ranging from acoustic and electromagnetic scattering to mathematical physics and fluid dynamics.1 Gaussian quadrature, our algorithm for all seasons, is the natural first port of call. 2 Thus, we let (α, β) = (−1, 1), dμ(x) = dx, and F (x) = f (x)eiωx or F (x) = f (x)eiωx for I1 or I2 , respectively. In Fig. 1.1 we have displayed in logarithmic scale the error once Gaussian quadrature (Gauss–Legendre to be more precise) is applied to I1 and I2 with f (x) = 1/(2 + x) and ν = 4¯ ν , ν¯ = 1, . . . , 8. In other words, we display the number of significant decimal digits in the error,   ν   1    log10  F (x) dx − bk F (ck )   −1 k=1

for the two choices of F . All is well for small ω ≥ 0, and the accuracy of Gaussian quadrature is impressive. All changes, however, once ω grows. The error increases, and, after a while, all accuracy is lost: Gaussian quadrature, even with 32 points, fails to deliver even a single significant digit of accuracy! And, mind you, as frequencies go, ω = 100 is fairly small. 1 Note that the requirement that ω is positive is without loss of generality: if f (x) and the phase function are real-valued, then the case ω < 0 corresponds to the complex conjugate of the highly oscillatory integral, with analogous properties and challenges.

1.3. An example: Filon quadrature

3

Figure 1.1. The errors in logarithmic (to base 10) scale in approximating I1 [ f ] (on the left) and I2 [ f ] with Gaussian quadrature for 0 ≤ ω ≤ 100 and ν = 4, 8, 12, . . . , 32: the colors seamlessly vary from red to blue as the number of quadrature nodes grows.

The reason for the uselessness of Gaussian quadrature in a highly oscillatory setting stares us in the face. The error of (1.1) is dν F (2ν) (θ), where dν is a constant, dependent on ν but not on F , while θ ∈ [α, β]. All this is absolutely fine as long as derivatives of F are small, but a hallmark of high oscillation is the very rapid increase in the amplitude  of derivatives: in our case F (2ν) (θ) =  ω 2ν . The error term for large ω rapidly gets out of hand! All this goes much deeper, to the very heart of numerical analysis. Most numerical methods are implicitly based on the Taylor expansion, the ansatz that locally the behavior of functions is modeled well by polynomials. This, however, is simply not true in the presence of high oscillation (alternatively, it requires “local” to be reduced to tiny intervals, for instance, of size ω −1 ). As will be clear in what follows, in place of the familiar Taylor expansion, in our setting we need to use asymptotic expansions in inverse powers of ω. Such expansions have the remarkable feature that the larger ω, the more precise they are, but the quid pro quo is that they fail for ω → 0. This hints at a more ambitious goal: expansions which are good for small and large ω alike. It is worth mentioning that once the asymptotic framework for oscillatory integrals is clear, Gaussian quadrature will redeem itself in a spectacular fashion, and its optimal polynomial order will be put to good use. We defer this to Chapters 5 and 6.

1.3 An example: Filon quadrature In this monograph we introduce and analyze in detail a long list of effective methods for highly oscillatory quadrature. Here, as an existence proof that highly oscillatory quadrature is possible—indeed, easy—once we adopt the right mathematical frame of mind, we present a single method, the (plain-vanilla) Filon quadrature. Its analysis and all proofs are deferred to Chapter 3; here we just sketch the method and compare it with the woeful performance of Gaussian quadrature in Fig. 1.1. We commence from the integral I1 . Given an integer s ≥ 0, we seek a polynomial p of degree 2s + 1 that satisfies the Hermite interpolation conditions p (i ) (−1) = f (i ) (−1),

p (i ) (1) = f (i ) (1),

i = 0, . . . , s,

(1.2)

4

Chapter 1. Introduction

Figure 1.2. The errors in logarithmic scale in approximating I1 [ f ] (on the left) and I2 [ f ] with the plain-vanilla Filon method ωF,s [ f ] for 0 ≤ ω ≤ 100 and s = 0, 1, . . . , 6: the colors seamlessly vary from green to blue as the number of quadrature nodes grows.

and set

ωF,s [ f

 ]=

1

p(x)eiωx dx.

(1.3)

−1

Note that ωF,s [ f ] can be evaluated explicitly with great ease.2 Fig. 1.2 on the left displays the error log10 | ωF,s [ f ] − I1 [ f ]| for f (x) = 1/(2 + x). Comparison with Fig. 1.1 is striking: as ω grows, so does the accuracy of a Filon method! The interpolation conditions for plain-vanilla Filon method (1.2) require an important modification once we apply this approach to I2 [ f ]. The reason, which will be debated in great detail in Chapter 2, is that the integrand has a stationary point at the origin. We choose a polynomial p of degree 4s + 3 and, in addition to (1.2), impose Hermite interpolation conditions at the origin, p (i ) (0) = f (i ) (0),

i = 0, . . . , 2s + 1,

4s + 4 conditions altogether. We use (1.3) as before, noting that the integral can again be computed explicitly. The outcome for f (x) = 1/(2 + x) is displayed on the right of Fig. 1.2, and evidently the error again decreases rapidly as ω grows. Indeed, the error is significantly smaller than for I1 [ f ], on the left, but we should temper our enthusiasm with the observation that the degree of p is more than doubled for I2 [ f ]. Plain-vanilla Filon quadrature (1.3) is far from being the most powerful method to calculate highly oscillatory integrals: we used it here because it can be described with such great ease. The point we are striving to make is that, once we do things right, high oscillation is no longer an insurmountable problem. The remainder of this volume is devoted to this issue, doing things right and unraveling the fascinating world of highly oscillatory quadrature. Welcome on board! 2 The reason for choosing only x = ±1 as interpolation points and adding derivative values there will become clear in what follows, once we understand the underlying asymptotic behavior of this oscillatory integral.

Chapter 2

Asymptotic theory of highly oscillatory integrals

2.1 Univariate asymptotic expansions Classical numerical analysis rests upon the Taylor theorem. Whether expanding our expressions, equations, and methods in a time step or grid spacing or quantifying the goodness of our approximations, we invariably assume that local behavior is determined by function and derivative values. This, however, is not a useful paradigm insofar as highly oscillatory phenomena are concerned. As with every other mathematical phenomenon, the challenge with high oscillation is to identify and exploit its structural features and regularities—at the first instance recognize that high oscillation is often very regular in its own way. High oscillation intuitively means that something is varying rapidly, but this variance is typically not random: it obeys its own peculiarities, whose exploitation is the key to effective computation. The key to these peculiarities is asymptotic analysis. Asymptotic theory of highly oscillatory integrals is fairly comprehensive and well known (cf. [10, 80, 83, 98, 111]), but in this volume we focus on a variant thereof which is well suited to our ultimate task: accurate and efficient computation of highly oscillatory integrals. Before we commence our exposition, let us develop a certain intuition about the form and the information contained in the asymptotic method, which is the basic method for highly oscillatory integrals. Fig. 2.1 shows an example of an oscillatory function, F (x) = cos(10 − x)eiωx . The usual heuristic that is given in asymptotic analysis is that high oscillation (with average 0) produces substantial cancellation between positive and negative parts inside [a, b ]; hence, the contribution from points inside the interval to the total integral is expected to be “small,” and this effect is more pronounced as ω grows. The exception in this example is given by the endpoints, where this cancellation does not take place. Let us consider the same example as before, but with a different oscillator, namely iωx 2 , in Fig. 2.2. Intuitively speaking, something different happens at x = 0, where e again this local cancellation explained before between positive and negative areas does not happen. The reason why the origin is special in this example is given by the derivative of the oscillator d iω g (x) ) = iω g (x)eiω g (x) . (e dx 5

6

Chapter 2. Asymptotic theory of highly oscillatory integrals

Figure 2.1. Plot of the function cos(10 − x)eiωx for ω = 100 (left) and ω = 200 (right). The real part is shown in red and the imaginary part in blue.

2

3

Figure 2.2. Plot of the function cos(10 − x) eiωx (left) and cos(10 − x) eiωx (right) for ω = 100. The real part is shown in red and the imaginary part in blue.

The derivative is large as ω → ∞, and this suggests highly oscillatory behavior, except if g (c) = 0 for some c ∈ [a, b ]. In such a situation, the integrand is not highly oscillatory locally, and the contribution of x = c has to be considered in the total integral. In our example with g (x) = x 2 , the derivative vanishes at c = 0. Such points are called stationary points, they can be of different orders depending on the number of derivatives vanishing at the point, and they will feature in what follows. We commence gently with the integral  Iω [ f ] =

1

−1

f (x) eiωx dx,

2.1. Univariate asymptotic expansions

7

where ω  1 and f ∈ C∞ [−1, 1]. Integrating repeatedly by parts, we have Iω [ f ] =

1 iω



1

−1

f (x)

f (1)eiω − f (−1)e−iω 1 deiωx I [ f ] + dx = − −iω ω −iω dx

f (1)eiω − f (−1)e−iω f (1)eiω − f (−1)e−iω 1 =− − I [ f ] + 2 (−iω)2 ω (−iω) −iω s  1 1 I [ f (s +1) ]. [ f (k) (1)eiω − f (k) (−1)e−iω ] + = ··· = − s +1 ω k+1 (−iω) (−iω) k=0

This being true for every s ≥ 0, we deduce that 

1 −1

f (x)eiωx dx ∼ −

∞  k=0

1 [ f (k) (1)eiω − f (k) (−1)e−iω ], (−iω)k+1

ω  1.

(2.1)

The “∼” sign means that for every f ∈ C∞ [−1, 1] and  > 0 there exist ω > 0 and n ≥ 0 such that   n   1  1  (k) iω (k) −iω  iωx [ f (1)e − f (−1)e ] < , ω ≥ ω . f (x) e dx +    −1 (−iω)k+1 k=0

Before we contemplate extending (2.1) to more complicated oscillators, it is a good idea to ponder briefly its meaning: for large ω, precisely when classical quadrature methods fail, we can approximate Iω [ f ] to very high accuracy using just the values of f and its derivatives at the endpoints! Moreover,   the approximation error for a method using just f (k) (±1), k = 0, . . . , s, is  ω −s −2 —for large ω the error is actually smaller! The expansion (2.1) can be easily generalized to a substantially greater family of highly oscillatory integrals,  Iω [ f ] =

b

f (x) eiω g (x) dx,

(2.2)

a

where f , g ∈ C∞ [a, b ] and the oscillator g is strictly monotone: g (x) = 0 for all x ∈ [a, b ]. Hence Iω [ f ] =



f (x) deiω g (x) 1 dx = − (x) −iω dx g a  b d f (x) iω g (x) 1 dx. + e −iω a dx g (x)

1 iω

b



f (b ) iω g (b ) f (a) iω g (a) e e − g (a) g (b )



(Note that g = 0 is essential to avoid singularities in the above expression.) Letting f0 (x) = f (x),

we thus have 1 I ω [ f0 ] = − −iω



fk (x) =

d fk−1 (x) , dx g (x)

k ≥ 1,

f0 (b ) iω g (b ) f0 (a) iω g (a) 1 + − I [ f ]. e e −iω ω 1 g (a) g (b )

(2.3)

8

Chapter 2. Asymptotic theory of highly oscillatory integrals

The following theorem, a generalization of (2.1) to the current setting, follows at once by induction. Theorem 2.1. Let f , g ∈ C∞ [a, b ] and g = 0 there. Then  ∞  fk (b ) iω g (b ) fk (a) iω g (a) 1 , − Iω [ f ] ∼ − e e g (a) (−iω)k+1 g (b ) k=0

ω  1,

(2.4)

where the coefficients fk are given by (2.3). Note that



g

1 f1 = − 2 f + f , g g

3g 2

f2 =

g

4



g

g

3

f −

3g

g 3

f +

1 f , g 2

and in general (as can be trivially verified by induction) each fk (x) is a linear combination of f (x), f (x), . . . , f (k) (x) with coefficients that, while depending on x, are nonsingular. Like in (2.1), it is clear that the behavior of Iω [ f ] for ω  1 is determined by f and its derivatives at the endpoints, together with g , which is consistent with the intuition developed earlier in the chapter. This brings us to our first numerical method designed explicitly for highly oscillatory integrals: the asymptotic method  s  fk (b ) iω g (b ) fk (a) iω g (a) 1 [s ] e e , (2.5) − ω [ f ] = − g (a) (−iω)k+1 g (b ) k=0

where s ≥ 0 is given. The asymptotic method is the opposite of Gaussian quadrature: for small ω > 0, Gaussian quadrature is excellent, while the asymptotic method is   [s ] useless, the other way around for large ω, when ω [ f ] = Iω [ f ] +  ω −s −2 . (The definition of “small” and “large” depends upon f and its derivatives—it is easily possible to construct examples of intermediate values of ω when both approaches are very poor.) Another way of looking at the expansion (2.4) is by setting F (t ) =

f ( g −1 (t )) , g ( g −1 (t ))

min{g (a), g (b )} ≤ t ≤ max{g (a), g (b )},

where g −1 is the inverse function of g (whose existence is guaranteed by g = 0). Then, changing variable t = g (x) and using (2.1) (shifted from [−1, 1] to [a, b ]), 

b

iω g (x)

f (x)e a

 dx =

g (b ) g (a) ∞ 

∼−

f (g

−1

(t ))e

iωt

dt = g ( g −1 (t ))



g (b ) g (a)

F (t )eiωt dt



1 F (m) ( g (b ))eiω g (b ) − F (m) ( g (a))eiω g (a) . m+1 m=0 (−iω)

In what follows we prefer the expansion (2.4) because it generalizes in a natural manner to our next setting, asymptotic expansion in the presence of stationary points. Definition 2.2. The point c ∈ [a, b ] is a stationary point of order r ≥ 1 of the oscillator g if g (i ) (c) = 0 for i = 1, 2, . . . , r and g (r +1) (c) = 0.

2.1. Univariate asymptotic expansions

9

Suppose that g has a single stationary point of order 1 in (a, b )—if it has more stationary points, disregarding the pathological case of g having an infinite number of zeros, we can divide [a, b ] into subintervals. The standard analysis of highly oscillatory integrals with stationary points uses the method of stationary phase [111], but, in our experience, it is not very useful in deriving expansions of arbitrary accuracy. We prefer to adopt an alternative approach, pioneered in [66]. Let b x n eiω g (x) dx, n ≥ 0, μn (ω) = a

be the nth moment of the oscillator g . Then b Iω [ f ] = μ0 (ω) f (c) + [ f (x) − f (c)]eiω g (x) dx a

= μ0 (ω) f (c) +

1 iω

 a

b

f (x) − f (c) deiω g (x) dx. dx g (x)



Note that, although g (c) = 0, the expression [ f (x) − f (c)]/ g (c) has a removable singularity there and is C∞ [a, b ]. This means that we can repeat our trick: integrate by parts. Thus, letting f0 (x) = f (x),

we have 1 Iω [ f0 ] = μ0 (ω) f0 (c)− −iω

fk (x) =



d fk−1 (x) − fk−1 (c) , g (x) dx

k ≥ 1,

f0 (b ) − f0 (c) iω g (b ) f0 (a) − f0 (c) iω g (a) 1 + − I [ f ], e e −iω ω 1 g (a) g (b )

and turning the handle repeatedly leads to an asymptotic expansion for oscillators with a stationary point. Theorem 2.3. Suppose that f , g ∈ C∞ [a, b ], g (c) = 0, and g (c) = 0 for some c ∈ (a, b ), while g (x) = 0 for x ∈ [a, b ] \ {c}. Then ∞  fk (c) (−iω)k k=0  ∞  fk (b ) − fk (c) iω g (b ) fk (a) − fk (c) iω g (a) 1 e e − . − g (a) g (b ) (−iω)k+1 k=0

Iω [ f ] ∼ μ0 (ω)

(2.6)

As before, the expansion has its computational counterpart in the asymptotic method s  fk (c) [s ,1] ω [ f ] = μ0 (ω) (2.7) (−iω)k k=0  s  fk (b ) − fk (c) iω g (b ) fk (a) − fk (c) iω g (a) 1 − . − e e g (a) g (b ) (−iω)k+1 k=0

    [s ,1] Because μ0 (ω) ∼  ω −1/2 [98], it follows that ω [ f ] = Iω [ f ] +  ω −s −3/2 . The method (2.7) depends solely upon f and its derivatives at the endpoints and at

10

Chapter 2. Asymptotic theory of highly oscillatory integrals

the stationary point c. Note, however, that, while fk at a or b is, as before, a linear combination of f , f , . . . , f (k) there, this is not the case at c. Since the removal of a singularity uses the l’Hôpital rule, each progression from fk to fk+1 “costs” two function evaluations. Therefore f (k) (c) is a linear combination (with smooth coefficients depending on g and its derivatives) of f (c), f (c), . . . , f (2k) (c). Practical use of the asymptotic method (2.7) is predicated upon the availability of the zeroth moment μ0 (ω). While easily available when g is a quadratic polynomial, this is not the case for general g , and this represents clear restriction on the applicability of this approach. A minor variation on the theme of (2.6) is when a stationary point is at an endpoint, without loss of generality c = a. Since

f (a) fk (x) − fk (a) , = k g (a) g (x)

lim

x→a

(2.6) needs to be replaced with Iω [ f ] ∼ μ0 (ω)

∞  fk (a) k=0

(−iω)k



∞  k=0

1 (−iω)k+1



fk (b ) − fk (a)

g (b )

iω g (b )

e



fk (a)

g (a)

 iω g (a)

e

,

with similar changes to (2.7). The generalization to stationary points of any order r ≥ 1 is straightforward: instead of subtracting and adding f (c), we do so with r −1  (x − c)

!

=0

f () (c).

Defining generalized moments  μn (ω; c) =

1 −1

(x − c)n eiω g (x) dx,

n ≥ 0,

and assuming again that c ∈ (a, b ), we thus obtain Iω [ f ] =

=

r −1  μ (ω; c) =0

!

r −1  μ (ω; c) =0

!

f

()

 b f (x) −

(c) + a

f

()

1 (c) + iω

r −1  (x − c)

!

=0



b a

f (x) −

 r −1

 f

(x−c) ! =0 g (x)

()

(c) eiω g (x) dx

f () (c)

eiω g (x) dx,

and again we have a removable singularity at x = c. This can be again iterated, resulting [s ,r ] in an asymptotic expansion and an asymptotic method ω . We recall that here the superscript r denotes the order of the stationary point.

2.2 How good is an asymptotic method? The narrative so far indicates that an asymptotic method is useless for small values of ω > 0, while being highly effective for ω  1. It is helpful to illustrate this by a

2.2. How good is an asymptotic method?

11

[s ]

Figure 2.3. The magnitude of the error committed by ω [ f ] for s = 0, 1, 2, 3, 4, with no stationary point.

numerical example, at the first instance assuming that there are no stationary points and using method (2.5). Let [a, b ] = [−1, 1], g (x) = 3x + x 2 (hence g = 0 in [−1, 1]), and f (x) = cos πx. [s ] In Fig. 2.3 we have sketched (from the top) the magnitude of the error, log10 |ω [ f ]− Iω [ f ]|, for s = 0, 1, 2, 3, 4, for ω ∈ [0, 100] (the x-axis). There is no surprise at the origin: all methods blow up and they remain completely useless for small ω > 0. However, the transition to the ω  1 regime is surprisingly rapid, and already for ω ≈ 10 the methods do well, and, consistently with theory, they improve for growing ω. It is a sobering thought that all this is based on surpris[4] ingly little information. Thus, ω , which delivers 12 significant digits for ω = 100, requires just 10 pieces of data, the values of f (i ) , i = 0, . . . , 4, at ±1. What happens once we have a stationary point? We keep the same interval and 1 1 1 f ,while using g (x) = 2 x + x 2 ; therefore, g (− 4 ) = 0, g (− 4 ) = 2 = 0—a first-order 1 stationary point at − 4 . Before we repeat the calculations from Fig. 2.3, it is instructive to examine the exact solution of the  two  problems. Note that in the absence of stationary points we have Iω [ f ] ∼  ω −1 , as can be deduced at once from (2.4), while   we have already observed that μ0 (ω) ∼  ω −1/2 , and this represents the asymptotic rate of decay in the presence of a first-order stationary point.3 Fig. 2.4 displays the real and imaginary parts of the two integrals, with and without a stationary point. We observe vividly the different rates of decay, but also that the oscillations on the left are much more regular. This is hardly surprising, because the leading terms in the asymptotic expansion are     1 1 g (x) = 3x + x 2 : Iω [ f ] ∼ e−2iω − e4iω +  ω −2 , 5 iω    1/2   1 (2π) 1 2 + 2 1 3 iω 1 1 iω − 16 iω 2 2 2 g (x) = x + x : Iω [ f ] ∼ e e + e − +  ω −3/2 , 1/2 3 5 iω 2(−iω) 2   the record, for a stationary point of order r ≥ 1 we have μn (ω; c) ∼  ω−1/(r +1) (the van der Corput Lemma, [98]), and this is the rate of decay of Iω [ f ]. 3 For

12

Chapter 2. Asymptotic theory of highly oscillatory integrals

Figure 2.4. Real and imaginary parts of Iω [ f ] for g (x) = 3x + x 2 (no stationary points) 1 on the left, g (x) = 2 x + x 2 (a stationary point) on the right.

where we have used standard asymptotics for the error function to determine the leading term of μ0 (ω); see, for instance, [108, 7.6.1]. [s ,1] The error committed by ω [ f ] is displayed in Fig. 2.5, and the lesson to be drawn is the  same as from  Fig. 2.3. The rate of decay of the error is a tad slower—   ω −s −3/2 vs.  ω −s −2 —but otherwise not much changes. To those used to classical quadrature methods, asymptotic methods are miraculous: they deliver remarkable accuracy for minimal effort in a situation where Gaussian quadrature is useless. However, as we go deeper into this book, we will soon realize that they are strongly inferior to a wide range of other methods: Filon-type, Levin-type, numerical steepest descent (NSD), and complex-valued Gaussian quadrature. Indeed, it will transpire that the great merit of asymptotic quadrature is not as a numerical method on its own but as a method of proof to deduce the behavior of superior methods.

[s ,1]

Figure 2.5. The magnitude of the error committed by ω [ f ] for s = 0, 1, 2, 3, 4, with order-1 stationary point.

2.3. Unbounded intervals

13

2.3 Unbounded intervals What happens once we allow unbounded intervals? Assume first that a = 0 and b = ∞, i.e., that we wish to calculate ∞ f (x) eiω g (x) dx. Iω [ f ] = 0

Assume that, in addition to f ∈ C∞ [0, ∞), it is also true that f remains regular at infinity and f (k) (∞) = 0 for all k ≥ 0. In that case (2.4) and (2.6) become Iω [ f ] ∼

∞  k=0

and Iω [ f ] ∼ μ0 (ω)

fk (0) iω g (0) 1 e k+1 g (0) (−iω)

∞ ∞   fk (0) − fk (c) iω g (0) fk (c) 1 , + e k+1 k g (0) (−iω) (−iω) k=0 k=0

respectively. The remaining case, [a, b ] = , is different. Provided that f is C∞ in a strip around the real axis with rapid decay at ±∞, if g = 0, then Iω [ f ] decays faster than  (ω −s ) for any s > 0: we have spectral convergence! Of course, this does not mean that Iω [ f ] vanishes—just that it decays faster than a reciprocal of any polynomial, a situation familiar to students of trigonometric approximation of periodic functions and of spectral methods in numerical analysis.

2.4 Multivariate asymptotic expansions 2.4.1 The simplest cases: The square Generalizing conventional quadrature methods from univariate to multivariate settings is either very simple or fiendishly difficult: it is simple once the domain of integration is a parallelepiped and we can use a tensor product of univariate rules, and is difficult otherwise [24, 27]. Also in our case parallelepipeds are easier, but far from trivial. Let us consider thus highly oscillatory integrals in the unit square [−1, 1]2 ,  Iω [ f ] =

1

−1



1

−1

f (x1 , x2 ) eiω g (x1 ,x2 ) dx1 dx2 ,

where f , g ∈ C∞ [−1, 1], and their asymptotic expansions. For the time being we assume nothing about the derivatives of g ; our assumptions will emerge naturally from our analysis. In the simplest situation, roughly basing ourselves upon [67], we assume that g (x1 , x2 ) = g1 (x1 ) + g2 (x2 ) (a separable oscillator) and wish to expand à la (2.4) in the inner integral, using integration by parts. This means that for every −1 ≤ x2 ≤ 1 the inner integral is not allowed stationary points—in other words, that g1 (x) = 0, x ∈ [−1, 1]. Letting f0,0 (x1 , x2 ) = f (x1 , x2 ),

fk1 ,0 (x1 , x2 ) =

∂ fk1 −1,0 (x1 , x2 ) , g1 (x1 ) ∂ x1

k1 ≥ 1,

14

Chapter 2. Asymptotic theory of highly oscillatory integrals

we deduce that (using smoothness to justify exchanging integration and summation)  1 ∞  fk1 ,0 (1, x2 ) iω g (x ) 1 e 2 2 dx2 eiω g1 (1) Iω [ f ] ∼ − (1) k +1 1 g (−iω) −1 1 k1 =0   1 f (−1, x ) k1 ,0 2 iω g (x ) iω g1 (−1) 2 2 . dx2 e − e g1 (−1) −1

Having reduced everything to univariate integrals, we assume that they have no stationary points either—in other words, that g2 (x) = 0, x ∈ [−1, 1]—and expand again: Iω [ f ] ∼

∞ 1 eiω g1 (1)  1 k2 +1 k +1 1 (−iω) g (1) (−iω) 1 k =0 =0

∞  k1





fk1 ,k2 (1, 1)

2

∞ 1 eiω g1 (−1)  1 k2 +1 k +1 1 (−iω) g (−1) (−iω) 1 k =0 =0

∞  k1

2

eiω g2 (1) −

g2 (1)  fk1 ,k2 (−1, 1)

g2 (1)

∞  k=0

g2 (−1)

 eiω g2 (−1)

eiω g2 (1) −

=

fk1 ,k2 (1, −1)

fk1 ,k2 (−1, −1)



g2 (−1)

 eiω g2(−1)

k k 1 eiω[ g1 (1)+ g2 (−1)]  eiω[ g1(1)+ g2 (1)]  f (1, −1) f (1, 1) − ,k− g1 (1)g2 (−1) =0 ,k− g1 (1)g2 (1) =0 (−iω)k+2  k k eiω[ g1 (−1)+ g2 (1)]  eiω[ g1(−1)+ g2 (−1)]  − f (−1, −1) , f (−1, 1) + g1 (−1)g2 (−1) =0 ,k− g1 (−1)g2 (1) =0 ,k−

where fk1 ,k2 (x1 , x2 ) =

∂ fk1 ,k2 −1 (x1 , x2 ) , g2 (x2 ) ∂ x2

k2 ≥ 1.

Note that for k1 , k2 ≥ 1 fk1 ,k2 (x1 , x2 ) =

=

∂ ∂ fk1 −1,k2 −1 (x1 , x2 ) 1 ∂ fk1 −1,k2 −1 (x1 , x2 ) ∂ = g2 (x2 ) ∂ x1 g1 (x1 ) ∂ x2 ∂ x2 ∂ x1 g1 (x1 ) g2 (x2 )

∂ fk1 −1,k2 (x1 , x2 ) . g1 (x1 ) ∂ x1

This provides an alternative definition of fk1 ,k2 and demonstrates that it is symmetric in k1 and k2 : had we first expanded with respect to x2 , the outcome would have been identical. The take-home lesson from this expansion is threefold. First, a multivariate expansion can be reduced to a sequence of expansions in increasingly lower-dimensional domains: in our case a bivariate expansion has been reduced to an infinite sum over univariate expansions. Second, like in the univariate case, the expansion depends solely on the values of f and its derivatives at specific points—currently, at the vertices of thesquare.  Thus, they play a role similar to the endpoints in (2.4). Finally, Iω [ f ] ∼  ω −2 —an increase in dimension means that the integral decays faster for ω  1. What about a more complicated model problem, with stationary points? Consider next   Iω [ f ] =

1

1

−1

−1

f (x1 , x2 ) eiω(x1 +x2 ) dx1 dx2 . 2

2

(2.8)

2.4. Multivariate asymptotic expansions

15

Before we expand it, though, let us specialize the univariate expansion (2.6) to g (x) = x 2 and c = 0: It is 1 1 ∞ ∞  fk (0) eiω  1 2 iωx 2 [ f (1)−2 fk (0)+ fk (−1)], − f (x) e dx ∼ eiωx dx k 2 k=0 (−iω)k+1 k (−iω) −1 −1 k=0

where f0 (x) = f (x),

fk (x) =

d fk−1 (x) − fk−1 (0) , 2x dx

k ≥ 1.

This can be somewhat simplified by computing the zeroth moment exactly, 

1

2

−1

eiωx dx =

π1/2 erf((−iω)1/2) , (−iω)1/2

where erf is the standard error function, and realizing that fk (0) =

f (2k) (0) , 4k k!

k ≥ 0.

  Note that, in line with the previous section, the zeroth moment is  ω −1/2 and fk at the stationary points depends on twice the number of derivatives elsewhere. Substituting the univariate expansion into (2.8) and expanding again in a similar manner results (after long and not particularly interesting algebra—see [65] for details) in Iω [ f ] ∼ πerf2 ((−iω)1/2)

∞  k=0



k  1 f (0, 0) (−iω)k+1 =0 ,k−

∞ k  πe erf ((−iω)1/2 )  1 [f (1, 0) + f,k− (0, 1) (−iω)k+1 =0 ,k− 2(−iω)1/2 k=0 iω

+ f,k− (−1, 0) + f,k− (0, −1) − 4 f,k− (0, 0)] +

k ∞  e2iω  1 [f (1, 1) + f,k− (1, −1) + f,k− (−1, 1) 4 k=0 (−iω)k+2 =0 ,k−

+ f,k− (−1, −1) − 2 f,k− (1, 0) − 2 f,k− (0, 1) − 2 f,k− (−1, 0) − 2 f,k− (0, −1) + 4 f,k− (0, 0)], where f0,0 (x1 , x2 ) = f (x1 , x2 ), fk1 ,0 =

∂ fk1 −1,0 (x1 , x2 ) − fk1 −1,0 (0, x2 ) , 2x1 ∂ x1

and fk1 ,k2 (x1 , x2 ) =

∂ fk1 ,k2 −1 (x1 , x2 ) − fk1 ,k2 −1 (x1 , 0) , 2x2 ∂ x2

k1 ≥ 1,

k2 ≥ 1.

Observe that we have again obtained a bivariate expansion by first reducing it to an infinite However, the rate of decay is now just Iω [ f ] ∼ sum of univariateexpansions.    ω −1 , reflecting the  ω −1/2 behavior of the univariate expansion. The expansion depends again on f and its derivatives at a finite number of points: specifically at the vertices (1, 1), (1, −1), (−1, 1), (−1, −1); the center (0, 0); and the

16

Chapter 2. Asymptotic theory of highly oscillatory integrals

midpoints of the four faces, (1, 0), (0, 1), (−1, 0), and (0, −1). It is easy to “interpret” the center: the gradient of the oscillator g (x1 , x2 ) = x12 + x22 vanishes there, and this is a bivariate stationary point. But why the midpoints of the faces? The explanation is that, once (2.8) is expanded in x1 , i.e., expressed as an infinite series of univariate highly oscillatory integrals, the latter have stationary points at the center of each (singly dimensional) face—at the midpoint. This is a reflection of a more general state of affairs: the behavior of “nice”4 multivariate highly oscillatory integrals  f (x) eiω g (x) dx, (2.9) Iω [ f ] = Ω

where Ω ⊂ d is a polytope, while f , g : d →  are C∞ , is determined by three types of critical points: • Vertices of the polytope; • Stationary points x ∈ cl Ω where ∇ g (x) = 0; and • Hidden stationary points (sometimes known as resonance points [65]) x ∈ ∂ Ω which occur once the oscillator g is restricted to the boundary: as an example, g (x1 , x2 ) = x1 + x2 has no stationary points in the unit disc x12 + x22 ≤ 1, but, once  restricted to the boundary x12 + x22 = 1 (i.e., g˜ (x1 ) = x1 ± 1 − x12 ), it has two:     ( 2/2, 2/2) and (− 2/2, − 2/2). It is straightforward to verify that ∇ g is orthogonal to the boundary at a hidden stationary point.

It is trivial to check that midpoints of the four faces of [−1, 1]2 are indeed hidden stationary points.

2.4.2 General “nice” integrals over polytopes “Nice” in the present context means that all stationary points and hidden stationary points of an oscillator g are isolated—this excludes, for example, the integral 

1

−1



1

−1

f (x1 , x2 ) eiω(x2 −x1 ) dx1 dx2 , 2

which possesses an entire line {(x, 1 − x) : −1 ≤ x ≤ 1} of stationary points. The key to asymptotic expansions in univariate settings was integration by parts, and its multivariate counterpart is the Stokes Theorem [78]. Thus, let Ω be an ndimensional orientable manifold and θ an (n − 1)-form acting on Ω with compact support. Then   Ω

dθ =

∂Ω

θ,

where d is the exterior derivative and ∂ Ω the oriented boundary of Ω. This is the moment when we are in danger of losing a significant proportion of our readers and the only remedy is to move into recovery mode: abandon the Stokes Theorem and its special (and relatively simpler) cases, the Green Theorem and the Kelvin–Stokes Theorem, and expand from scratch. Thus, we adopt the elementary, step-by-step procedure from [67]. 4 We

will see in what follows that, like in real life, the niceness of (2.9) is far from universal.

2.4. Multivariate asymptotic expansions

17

We commence from a two-dimensional regular simplex, 2 = {(x1 , x2 ) : 0 ≤ x1 ≤ 1 − x2 , 0 ≤ x2 ≤ 1}, and the integral  Iω [ f ] =

2

f (x1 , x2 ) eiω g (x1 ,x2 ) dV =

 1 0

1−x2 0

f (x1 , x2 ) eiω g (x1 ,x2 ) dx1 dx2 ,

(2.10)

where dV is the volume element, while g has no stationary points on cl 2 . Integrating by parts first in the inner and then in the outer integral,  1 1 2 Iω [g x1 f ] = g (1 − x2 , x2 ) f (1 − x2 , x2 )eiω g (1−x2 ,x2 ) dx2 iω 0 x1   ∂ 1 1 1 ( g x1 f ) , g x1 (0, x2 ) f (0, x2 )eiω g (0,x2 ) dx2 − − Iω ∂ x1 iω iω 0 1 1 Iω [g x22 f ] = g (x , 1 − x1 ) f (x1 , 1 − x1 )eiω g (x1 ,1−x1 ) dx1 iω 0 x2 1   ∂ 1 1 1 iω g (x1 ,0) (g f ) ; g (x , 0) f (x1 , 0)e dx1 − − I iω ω ∂ x2 x2 iω 0 x2 1

therefore, adding the two expressions,

 ∂ ∂ 1 1 I (M + M2 + M3 ) − ( f g x2 ) , ( f g x1 ) + Iω [∇g  f ] = ∂ x2 iω ω ∂ x1 iω 1 2

where  M1 = M2 =

0



 M3 =

1

iω g (x1 ,0) f (x1 , 0) n dx1 , 1 ∇ g (x1 , 0) e



1

2 0 1

0

iω g (x1 ,1−x1 ) f (x1 , 1 − x1 ) n dx1 , 2 ∇ g (x1 , 1 − x1 ) e

iω g (0,x2 ) f (0, x2 ) n dx2 . 3 ∇ g (0, x2 ) e   2 2

Here n1 = [0, −1], n2 = [ 2 , 2 ], and n3 = [−1, 0] are outward unit normals along the three edges of the simplex, and we note that  M1 + M2 + M3 = f (x1 , x2 ) n (x1 , x2 )∇ g (x1 , x2 ) eiω g (x1 ,x2 ) dS, ∂ 2

where ∂ 2 is oriented in a natural way and dS is the surface differential. The length  of the edges, 1, 2, and 1, is subsumed into the surface differential, and n is the unit outward normal vector along the boundary (which we need not define at the vertices). All this allows us to represent a single step of “pushing the integral to the boundary,” similar to integration by parts,  1 2 f (x1 , x2 ) n (x1 , x2 )∇g (x1 , x2 ) eiω g (x1,x2 ) dS Iω [∇ g  f ] = iω ∂ 2  ∂ ∂ 1 ( f g x2 ) . ( f g x1 ) + − Iω ∂ x2 ∂ x1 iω

18

Chapter 2. Asymptotic theory of highly oscillatory integrals

Recalling that ∇g = 0 in cl 2 , we now replace f by f /∇g 2 : the outcome is  f (x1 , x2 ) 1 Iω [ f ] = − eiω g (x1 ,x2 ) dS (2.11) n (x1 , x2 )∇g (x1 , x2 ) ∇g (x1 , x2 )2 −iω ∂ 2   f (x1 , x2 ) 1  ∇g (x1 , x2 ) eiω g (x1 ,x2 ) dV . ∇ + ∇g (x1 , x2 )2 −iω 2

Having plowed through §2.1, the way forward is obvious: iterate (2.11), bearing in mind that each “turn of the handle” multiplies an integral over 2 by (−iω)−1 . However, before we do so, we observe that there is absolutely nothing special in using a regular simplex in 2 : the proof for any simplex there (i.e., a triangle) is identical. There is nothing special in two variables either: the method of proof generalizes at once to any d -dimensional simplex (that is, a nonempty convex hull of points v 0 , v 1 , . . . , v d ∈ d )  , say:   f (x) 1 iω g (x) eiω g (x) dS n (x)∇g (x) f (x)e dV = − Iω [ f ] = ∇g (x)2 −iω ∂     f (x) 1  ∇g (x) eiω g (x) dV . (2.12) ∇ + ∇g (x)2 −iω 

Theorem 2.4. Let  ⊂ d be a simplex, f , g ∈ C∞ [ ], and ∇g =  0 in the closure of  . Set  fk−1 (x) ∇g (x) , k ≥ 1. fk (x) = ∇ f0 (x) = f (x), ∇g (x)2

Then Iω [ f ] ∼ −

∞  k=0

1 (−iω)k+1

 ∂

n (x)∇g (x)

fk (x) eiω g (x) dS. ∇g (x)2

(2.13)

Proof. Starting from (2.12) we implement the usual iterative procedure to prove that for every s ≥ 1 it is true that  s  f (x) 1 Iω [ f ] = − eiω g (x) dS n (x)∇g (x) k 2 k+1 ∇g (x) (−iω) ∂ k=0   fk (x) 1  + ∇ g (x) eiω g (x) dV . ∇ ∇g (x)2 (−iω) s 

The theorem follows by letting s → ∞.

Corollary 2.5. The last theorem remains true if  is replaced by an arbitrary polytope (which need not be convex!) in d . Proof. Every d -dimensional polytope can be represented as the closure of a union of (open) simplexes. An integral over boundary  of a simplex reduces to the sum of integrals over its oriented faces. Once two simplexes share a face, the two faces in question have opposite orientation and the underlying portion of the integral cancels: an integral over the boundary of the polytope is all that is left.

2.4. Multivariate asymptotic expansions

19

Note that, once we are gluing simplexes together, hidden stationary points along their joint faces go away: the hidden stationary points on the boundary of the polytope are the only ones to survive! The way forward is now clear: we have expressed in (2.13) a d -variate highly oscillatory integral as an asymptotic expansion in (d − 1)-variate highly oscillatory integrals. So, let us continue in this vein, expressing each (d − 1) integral as an asymptotic series in (d −2)-dimensional integrals and so on, until we reach zero-dimensional simplexes— the vertices of  . At this level of generality, though, this procedure is careless because it does not reckon with the presence of hidden singularities. For example, consider again the unit square [−1, 1]2 , whereby (2.13) yields Iω [ f ] ∼ −

∞  k=0 1





1 (−iω)k+1



1

−1

∂ g (x1 , −1) fk (x1 , −1) iω g (x1 ,−1) e dx1 ∇g (x, −1)2 ∂ x2

∂ g (1, x2 ) fk (1, x2 ) eiω g (1,x2 ) dx2 2 ∇ g (1, x ) ∂ x −1 2 1 1

∂ g (x1 , 1) fk (x1 , 1) eiω g (x1 ,1) dx1 2 ∇ g (x , 1) ∂ x −1 1 2 1 ∂ g (−1, x2 ) fk (−1, x2 ) iω g (−1,x2 ) e dx2 . + ∇ g (−1, x2 )2 ∂ x1 −1



Thus, we have four different oscillators, one across each of the four faces of [−1, 1]2 . Let g (x1 , x2 ) = x1 + x22 —clearly, its gradient cannot vanish in the square. However, the 1 second oscillator, g (1, x2 ) = 1 + x22 , has a stationary point at x2 = − 2 , an example of a hidden stationary point of the original integral. If—and only if—there are no hidden singularities as we go along, systematically reducing the dimension by “pushing” integration to the boundary, we can ultimately express the asymptotic expansion purely in terms of the function f and its directional derivatives at the vertices of the polytope. However, a word of warning: except in the case of parallelepipeds, this is a fiendishly complicated exercise, replete with truly horrendous expressions. The case of the equilateral triangle, perhaps the simplest nontrivial example of this kind, has been worked out in [58] with a simple oscillator, and it should act as a deterrent to those seeking explicit asymptotic expansions in a multivariate case. And matters are even more complicated once we attempt to cater for stationary and hidden stationary points! In other words, it is practically hopeless to expand asymptotically (at least to more than a small number of terms) highly oscillatory integrals in the multivariate case with the goal of achieving an exponential method à la (2.5) or (2.7). Fortunately, the true practical utility of asymptotic expansions in this setting is altogether different. As we have already mentioned by the end of §2.2, asymptotic expansions are instrumental in the construction of superior numerical methods—more specifically, they present us with the information necessary to construct such methods and analyze their properties. The nature of this information is (a) what are the critical points, and (b) how many functions and derivative values at various critical points do we need to use to ensure that the asymptotic method decays at certain speed for ω  1. The precise knowledge of the coefficients multiplying these values plays no role in the construction of these methods. Thus, consider a highly oscillatory integral over a polytope  ⊂ d with the vertices v 0 , . . . , v n , where n ≥ d .

20

Chapter 2. Asymptotic theory of highly oscillatory integrals

1. According to (2.13), to compute the integrals  along  the (d −1)-dimensional faces of  with absolute5 asymptotic error of  ω −s −2 we require f0 , . . . , f s along the faces, and these in turn can be expressed as a linear combination of ∂ |α| f (x)/∂ α for 0 ≤ |α| ≤ s on the faces in question. Here we have used the multi-index notation

∂ |α| f (x) ∂ |α| f (x) = α , α1 α ∂α ∂ x1 ∂ x2 2 · · · ∂ xd d

|α| = α1 + α2 + · · · + αd .

2. Next we express each (d − 1)-dimensional integral along a face of  as an expansion along the (lower-dimensional) faces of the face: a face of a polytope is itself a polytope and Theorem 2.4 applies! Specifically, suppose that the faces of  are 1 , . . . , L and set f˜k (x) = n (x)∇g (x)

fk (x) , ∇g (x)2

k ≥ 0.

Thus, (2.13) becomes Iω [ f ] ∼ −

∞  k=0

L  1 (−iω)k+1 j =1

 j

f˜k (x)eiω g (x) dV .

We next apply the asymptotic expansion to the integrals over the  j s: the outcome is L  k  ∞   f˜k−, (x) iω g (x) 1 e dS. n (x)∇g (x) Iω [ f ] ∼ (−1)2 ∇g (x)2 (−iω)k+1 =0 j =1 ∂  j k=0

We deduce that the knowledge of f˜k−, ,  = 0, . . . , k, along the (n−2)-dimensional   can new faces results in relative asymptotic error of  ω −s −3 . Since each f˜ k−,

be expressed as a linear combination of ∂ |α| f (x)/∂ α for |α| ≤ s, we conclude that knowledge of

∂ |α| f (x) , |α| ≤ s on the  j s ∂α



  relative decay of  ω −s −3 .

3. We continue in this vein, progressing along the complex defined by the polytope  6 and lowering dimension. In each stage m = 1, 2, . . . , d the knowledge of of (m − 1)the derivatives ∂ |α| f (x)/∂ α for |α| ≤ s along the new generation  dimensional faces results in relative asymptotic decay of  ω −s −m−1 . 4. Once we reach stage m = d , there is no distinction between “relative” and “absolute”: the 0-dimensional faces are just the vertices of  . We thus deduce that ∂ |α| f (x) , |α| ≤ s at v 1 , . . . , v n ∂α   In particular, Iω [ f ] ∼  ω −d . knowledge of

5



  decay of  ω −s −d .

“Absolute” because behavior of the integrals themselves for large ω. As a matter of fact,  we disregard  these integrals are  ω−d +1 : this will be an immediate conclusion of our analysis. 6 That is,  , its oriented faces, the oriented faces of its faces, and so on, down to the vertices.

2.4. Multivariate asymptotic expansions

21

All this is predicated on the assumption that we never encounter stationary points (“plain” or hidden) along the way The outcome of this step-by-step reasoning is that we have identified how many derivatives are needed—and where—to construct an asymptotic expansion to a d dimensional integral over a polytope which has neither stationary nor hidden stationary points. This is not the same as having an explicit asymptotic expansion, because we have not determined the coefficients (which depend on ω, g , and its derivatives) that multiply the derivatives in question. However, while falling short of presenting us with an asymptotic method, this information suffices in the construction of Filontype and Levin-type methods in the next chapter. Note that in a multivariate setting “derivatives at the vertex v i” means all direc  tional derivatives of requisite order, and this translates into i +di −1 directional deriva  tives of order i, altogether s +d derivatives of all orders ≤ s. s Can all this be generalized? First, let Ω ⊂ d be an arbitrary d -dimensional manifold with a piecewise-Jordan boundary. Theorem 2.4 can be extended to such sets, approximating it (e.g., in the Hausdorff metric) by a sequence of polytopes. Alternatively, and with further minor restrictions on the topology of Ω, we could have used the Stokes Theorem. This, however, does not mean that our subsequent construction necessarily makes sense. Suppose that the boundary of Ω is smooth—it is then obvious that ∇ g must be orthogonal to the boundary at least twice, implying presence of hidden stationary points. (Worse is possible: let Ω be the bivariate unit disc and g (x, y) = h(xy) for some h ∈ C∞ [cl Ω]. Then it is trivial to prove that every point along ∂ Ω is a hidden stationary point!) Does this mean that our derivative-counting argument is dead? Not necessarily! All it means is that it becomes much more complicated. We need to compute derivatives at hidden stationary points, as well as at “vertices” (provided they exist), and the required number of derivatives at hidden stationary points is larger and needs be computed on a case-by-case basis.

2.4.3 A few examples The measure of the task of analyzing general-case asymptotics becomes apparent already in the case of the humble isosceles right triangle with vertices at (0, 0), (1, 0), and (0, 1)—the bivariate regular simplex

@ @ @ @ @ @ @ x2 = 0 @ = x2

1

(0, 0)

+ x1

x1 = 0

(0, 1)

(1, 0)

Let first g (x1 , x2 ) = ν1 x1 +ν2 x2 , with ν1 , ν2 constants. While there are neither stationary points nor hidden stationary points along the legs x1 = 0 and x2 = 0, once ν1 = ν2 the oscillator along the hypotenuse degenerates, g (x1 , 1 − x1 ) ≡ 1. Consider next g (x1 , x2 ) = x1 +x22 . Again, there are no stationary points. However, once we examine the three faces, it is clear that there is a hidden stationary point at

22

Chapter 2. Asymptotic theory of highly oscillatory integrals 1 1

( 2 , 2 ). More interestingly, there is also a hidden stationary point at (0, 0), but we need to be careful. Along x2 = 0 the oscillator reduces to g (x1 , 0) = x1 and has no stationary points, while along the leg x1 = 0 it is g (0, x2 ) = x22 , and this gives rise to a stationary point at the origin. Thus, we need to take a hidden stationary point at the origin into account once we expand the integral along x1 = 0, but not along x2 = 0. Insofar as counting the number of required derivatives at (0, 0), we must take context into 1 1 account. Thus, at ( 2 , 2 ) we need just the directional derivatives along the hypotenuse, while at the origin we need twice the number of derivatives with respect to x2 than with respect to x1 . 1 1 Finally, we examine g (x1 , x2 ) = (x1 − 2 )2 + (x2 − 3 )2 , with a “proper” stationary 1 1 1 1 point at ( 2 , 3 ). It has two hidden stationary points, at (0, 3 ) and ( 2 , 0), but, interestingly enough, none along the hypotenuse. Our next example is the Euclidean 2-ball x12 + x22 + x32 ≤ 1 and the linear oscillator  g (x1 , x2 , x3 ) = 3=1 ν x , where the ν s are not all zero. There are no stationary points, but it is an easy exercise that there exist two hidden stationary points on the boundary (the 2-sphere), namely, ⎡ ⎤ ν1 1 ⎣ ν2 ⎦ . ± ν12 + ν22 + ν32 ν3

 If, instead, we stipulate g (x1 , x2 , x3 ) = 3=1 x2 , then the origin is a stationary point, and the entire boundary is a hidden stationary point—more specifically, g ≡ 1 on the 2-sphere, and the integral does not oscillate there at all—we need to compute it by classical, nonoscillatory methods. By our terminology, after a single application of “push to the boundary,” the integrals in the last case are no longer “nice” because their stationary points are not isolated. For the simplest example of this state of affairs we need not go any further than considering 11 f (x1 , x2 ) eiω g (x2 ) dx1 dx2 ; −1

−1

the inner integral does not oscillate at all. Of course, we can expand asymptotically in x2 , and this requires a computation of the inner integral by classical methods like Gaussian quadrature. The next animal in our menagerie is  Iω [ f ] =

1 −1



1 −1

f (x1 , x2 )eiω(x1 +x2 ) dx1 dx2 , 2

2

with an isolated stationary point at the origin and with hidden stationary points at (±1, 0) and (0, ±1). In that case all we need is to expand the univariate integral (2.6) (with a = −1, b = 1, and g (x) = x 2 ) twice, first for x1 and then for x2 . The outcome of elementary, yet lengthy, algebra is ∞ ∞   eiω 1 (ω) f1 ,2 (1, 0) μ 0 2 (−iω)1 +2 +1 (−iω)1 +2 1 =0 2 =0 1 =0 2 =0

+ f1 ,2 (0, 1) + f1 ,2 (−1, 0) + f1 ,2 (0, −1) − 4 f1 ,2 (0, 0)

Iω [ f ] ∼ μ20 (ω)

∞ f ∞   1 ,2 (0, 0)



2.5. More general oscillators

23

∞  ∞ e2iω  1 (1, 1) + f1 ,2 (1, −1) + f1 ,2 (−1, 1) f 4  =0  =0 (−iω)1 +2 +2 1 ,2

+

1

2

+ f1 ,2 (−1, −1) − 2 f1 ,2 (1, 0) − 2 f1 ,2 (0, 1) − 2 f1 ,2 (−1, 0) − 2 f1 ,2 (0, −1)

+ 4 f1 ,2 (0, 0) , where f0,0 (x1 , x2 ) = f (x1 , x2 ), 1 ∂ f1 ,2 (x1 , x2 ) − f1 ,2 (0, x2 ) , x1 2 ∂ x1 1 ∂ f1 ,2 (x1 , x2 ) − f1 ,2 (x1 , 0) f1 ,2 +1 (x1 , x2 ) = , x2 2 ∂ x2 f1 +1,2 (x1 , x2 ) =

and

 μ0 (ω) =

1

−1

2

eiωx dx =

1 , 2 ≥ 0,

    π1/2 erf (−iω)1/2 ∼  ω −1/2 1/2 (−iω)

  [67]. Note that Iω [ f ] ∼  ω −1 . Finally, we have already mentioned an integral with an entire line of stationary points, to wit, 11 2 f (x1 , x2 ) eiω(x2 −x1 ) dx1 dx2 . (2.14) Iω [ f ] = −1

−1

Changing a variable x1 → x2 − x1 , we have  Iω [ f ] =



1

x2 +1 x2 −1

−1

iωx12

f (x2 −x1 , x2 ) e

 dx1 dx2 =

2



−2

x1 +1 x1 −1

 2

f (x2 − x1 , x2 ) dx2 eiωx1 dx1 .

In other words, (2.14) oscillates only in the x1 direction and admits an asymptotic expansion of the form (2.6), except that we need to replace the (univariate!) function f there by  x+1 ˜ f (ξ − x, ξ ) dξ . f (x) = x−1

The lesson of all this is that, once the underlying domain Ω ⊂ d is a polytope, the oscillator is “nice,” and we can rule out hidden stationary points; we know exactly, given M ≥ d + 1, how many derivatives need be computed at each and every vertex to derive an asymptotic expansion of given asymptotic speed of decay  ω −M . Otherwise we must consider each and every combination of the integration domain Ω and oscillator g on a case-by-case basis.

2.5 More general oscillators Not all univariate highly oscillating integrals can be shoehorned into the form (2.2). An example, replete with applications including asymptotic theory itself (see, for example, [108, §9.16]), is the Airy integrals [1] Iω [ f



b

]=

f (x)Ai(−ωx) dx a

and

[2] Iω [ f



b

]=

f (x)Bi(−ωx) dx. (2.15) a

24

Chapter 2. Asymptotic theory of highly oscillatory integrals

The Airy functions Ai and Bi are linearly independent solutions of the Airy equation y − xy = 0 with the initial conditions Ai(0) =

3−2/3 2

Γ(3)

2

Ai (0) = −

,

31/6 Γ ( 3 )



and Bi(0) =

3−1/6 2

Γ(3)

2

Bi (0) =

,

32/3 Γ ( 3 )



.

The Airy functions exhibit exponential behavior as x → +∞ (exponentially small for Ai(x) and exponentially large for Bi(x)), but they are oscillatory as x → −∞. Namely, we have 2

π 4



1 +  (x −3/2 ) ,  1/4 πx 2  π − sin 3 x 3/2 − 4

Bi(−x) = 1 +  (x −3/2 ) ,  1/4 πx 2  π 1/4 x sin 3 x 3/2 − 4

1 +  (x −3/2 ) , Ai (−x) =  π 2  π 1/4

x cos 3 x 3/2 − 4 Bi (−x) = 1 +  (x −3/2 ) ,  π Ai(−x) =

cos

3

x 3/2 −

as x → ∞ (see [108, 9.7.9–9.7.12]). Let us define the following matrix function:  G ω (x) =

Ai(−ωx) Bi(−ωx)



and write (2.15) in a vector form,  I ω[ f ] =

a

b

f (x) G ω (x) dx.

We assume that a > 0. Therefore     G ω (x) ∼  ω −1/4 , G ω (x) ∼  ω 5/4 ,

x > 0 fixed, ω  1,

and G ω obeys the differential equation G ω (x) + ω 3 xG ω (x) = 0;

hence

G ω (x) = −

1 G (x). ω3 x ω

We proceed exactly like earlier in this chapter. Integrating twice by parts and recalling

2.5. More general oscillators

25

that a > 0, I ω[ f ] = −

1 ω3



b

f (x) G ω (x) dx x

a

 b x f (x) − f (x) f (a) 1 1 f (b ) G ω (a) + G ω (b ) − G ω (x) dx =− x2 ω3 a a b ω3  f (a) 1 f (b ) =− G ω (a) G ω (b ) − 3 a b ω  2  a f (a) − f (a) 1 b f (b ) − f (b ) d f (x) 1 + . Iω G ω (a) − G ω (b ) − dx 2 x ω3 a2 b2 ω3

Everything is now set for the recursive process, which, by now, should be familiar. Thus, setting d2 fk−1 (x) , k ≥ 1, f0 (x) = f (x), fk (x) = x dx 2 we finally obtain an asymptotic expansion,

 ∞  fk (a) (−1)k fk (b ) (a) (b ) − G G ω ω a b ω 3(k+1) k=0   ∞  a fk (a) − f (a) (−1)k b fk (b ) − f (b ) G ω (a) . G ω (b ) − + a2 b2 ω 3(k+1) k=0

I ω[ f ] ∼ −

What is the size of I ω [ f ] for ω  1? We should not be carried away by the leading 3 coefficient  −7/4  1/ω because of the growth of G ω : adding these two we obtain I ω [ f ] ∼  ω . Noting further that fk is a linear combination of f , f , . . . , f (2k) , and hence fk is a linear combination of f , f , . . . , f (2k+1) , we can design an asymptotic method in the style of (2.5) and, perhaps more importantly, monitor the location and the size of derivative information necessary to obtain an asymptotic method of given asymptotic decay of the error. The general case, worked out in [87], is similar. Suppose that the vector function y obeys the linear differential equation y = A(x)y and that the asymptotic behavior of y(x) for x  1 is known. Letting G ω (x) = y(ωx), we have G ω (x) =

1 −1 A (ωx)G ω (x); ω

consequently 

b a

1 f (x)G ω (x) dx = ω



b a

f (x)A−1 (ωx)G ω (x) dx

and all is set for integration by parts and recursion. Of course, A(x) must be invertible in [a, b ] and we need to know the asymptotics of G ω (x), A−1 (ωx), and A (ωx) for fixed x ∈ [a, b ] and ω  1. Many things can go wrong, which we must rule out on a case-by-case basis. Thus, let us return to the Airy case (2.15) except that we set a = 0. Because G ω (0) = 0, the integrand in I ω [ f ] is regular, except that we are forbidden to interfere with G ω ;

26

Chapter 2. Asymptotic theory of highly oscillatory integrals

otherwise the entire integration-by-parts trick will go awry. Instead, we proceed like for stationary points, adding and subtracting f (0): b f (x) 1 I ω[ f ] = − G ω (x) dx ω3 0 x b  b G ω (x) f (x) − f (0) 1 1 =− G ω (x) dx dx − f (0) 3 3 x ω 0 x ω 0 b b G ω (x) f (x) − f (0) 1 = f (0) G ω (x) dx, dx − 3 x ω 0 x 0

where we have used the differential equation to simplify the first integral. We can now safely integrate by parts twice in the second integral and the outcome is a single step of recursion. Iterating, we can easily derive the asymptotic expansion, yet we must be b careful! The moment-like integral 0 x −1 G ω (x) dx can be derived explicitly, as can its  −1    asymptotic behavior: it is  ω —actually, its first component is  ω −1 , and the   second is  ω −7/4 . This of course completely changes the asymptotic order and the volume of information (i.e., of derivatives of f at the endpoints) required to construct an asymptotic expansion of requisite order. A highly oscillatory integral for which this approach falls short is b f (x)h(sin ωx) dx, (2.16) Iω [ f ] = a



where f ∈ C [a, b ] and h is an analytic function. In the special case h(y) = eσ y , σ ∈  \ {0}, (2.16) is of great interest in the modeling of high frequency modulated signals in electronic engineering and it has been expanded into asymptotic series in [23] using a serendipitous formula, eσ sin ωx = I0 (σ)+2

∞ 

(−1) m I2m+1 (σ) sin(2(m +1)ωx)+2

m=0

∞ 

(−1) m I2m (σ) cos 2mωx

m=1

(see [108, formula 10.35.3]), where Iν is the modified Bessel function of order ν ∈  [108, formula 10.25.2]. It can be derived in a few simple steps from the identity e z(t −t

−1

)/2

=

∞ 

t m J m (z),

z ∈ ,

t ∈  \ {0}

m=−∞

[108, formula 10.12.1]. We commence by setting t = eiθ , z = −iσ, θ = ωx to obtain eσ sin ωx =

∞ 

eimωx J m (−iσ),

m=−∞

where Jν is the standard Bessel function. Then we convert J m (−iσ) = i m I m (−σ) = (−i) m I m (σ) [108, formula 10.25.2] and conclude with elementary algebra. In the case of a general analytic h, when the magic of Bessel functions no longer works, we adopt an alternative approach, originating in [63]. We commence by plain Taylor expansion of h(sin ωx) in (2.16), b ∞  h (m) (0) where Sω,m [ f ] = f (x) sin m ωx dx. Iω [ f ] = Sω,m [ f ], m! a m=0

2.5. More general oscillators

27

Next we expand each Sω,m [ f ] asymptotically—the easiest (but exceedingly messy) means to this end is to replace sin ωx = (eiωx − e−iωx )/(2i), whereby

  b m 1  m− m Sω,m [ f ] = (−1) f (x)eiω(2−m) dx.  a (2i) m =0 This can be expanded asymptotically using the approach of §2.1, except for the case of an even m and  = m/2: in that case we obtain a nonoscillatory integral! After a very long yet elementary algebra displayed in detail in [63], we obtain an asymptotic expansion of the form Iω [ f ] ∼

ρ0 2

+



b

f (x) dx + a

∞  (−1)k (2k) [f (b )Uk (b ) − f (2k) (a)Uk (a)] 2k+1 ω k=0

(2.17)

∞  (−1)k (2k+1) [f (b )Vk (b ) − f (2k+1) (a)Vk (a)], 2k+2 ω k=0

where ∞ h (+2m) (0) 1 1  , 2−1 m=0 m!(m + )! 4 m ∞ ∞   (−1) (−1) ρ cos(ω(2 + 1)t ), ρ sin(2ωt ) − Uk (t ) = (2 + 1)2k+1 2+1 (2)2k+1 2 =0 =1

ρ =

Vk (t ) =

∞ ∞   (−1) (−1) ρ sin(ω(2 + 1)t ). ρ2 cos(2ωt ) + (2 + 1)2k+2 2+1 (2)2k+2 =0 =1

The applicability and usefulness of (2.17) hinge on a single question: does the seconverge to zero (and how fast) for an analytic function h? Fortuquence {ρ }∞ =0 nately, the answer is positive and, according to [63, Theorem 1], once h is analytic in a disc with the radius of convergence r > 1, then lim→∞ ρ = 0. Moreover, if r < ∞ then ρ = o(r −2 ), while if h is entire and r = +∞ then the ρ ’s decay spectrally fast, i.e., faster than a reciprocal of any polynomial in . For example, for the entire functions h(x) = eσ x and h(x) = sin σ x we have ρ m = 2I m (σ)

ρ2m = 0,

and

ρ2m+1 = 2 (−1) m J2m+1 (σ),

respectively. Since J m (σ), I m (σ) ∼ 

eσ !m , 2πσ 2m 1

σ = 0,

|m|  1

[108, formulas (10.19.1) and (10.41.1)], the decay is indeed faster than exponential. For h(x) = (1 − σ x)−1 , 0 < |σ| < 1, we have m

σ 1 , ρm =   1 − σ 2 /4 1 + 1 − σ 2 /4

exponentially fast decay [63]. All this allows for the infinite expansions of Uk and Vk to be truncated in a controlled manner.

28

Chapter 2. Asymptotic theory of highly oscillatory integrals

Figure 2.6. Plot (in log10 scale) of the error of a truncated asymptotic expansion with 5 1 1 (red), 10 (blue), and 15 (green) terms, respectively, for the integral −1 101 eiωx dx. 100 +x

As can be seen, the main message of this chapter is that in the face of high oscillation, one should change the viewpoint from discretizing the integrand on a certain grid of points to applying techniques from asymptotic analysis in the regime ω  1. To this end one can use repeated integration by parts or more sophisticated techniques [10, 83]. Equally important is the idea that asymptotic expansions are typically divergent; in this context, this means that once we fix ω > 0, adding more terms in the expansion does not necessarily lead to a more accurate approximation. For this reason, the asymptotic method is often not a computational solution in itself, but rather the starting point of other methods that we will see in subsequent chapters, where one tries to improve accuracy by adding internal nodes, or by deforming the path of integration, while always keeping the “asymptotic philosophy” in mind. Like any other mathematical tool, an asymptotic expansion can be foiled by choosing a particularly “bad” example: in our case badness occurs once the derivatives of f grow too rapidly at critical points. An example in point is displayed in Fig. 2.6, where the derivatives of f grow very rapidly at −1. The error is huge unless ω is very large, and adding extra terms to the expansion for small or moderate ω might actually increase the error.

Chapter 3

Filon and Levin methods

3.1 Filon-type quadrature in an interval Louis Napoleon George Filon (1875–1937), his French name and family’s origins notwithstanding, was a British mathematician whose entire career (except for a brief sojourn in Cambridge and a longer one in Flanders in World War I as the commanding officer of 2nd [Reserve] Battalion, London Regiment) was in London, mostly at University College, where he was the Goldsmid Professor and eventually the vicechancellor of the University of London. He was a Fellow of the Royal Society and for a while its vice president. His mathematical œuvre was wide in pure but mostly in applied mathematics: fluid dynamics, elasticity theory, piezoelectricity, and so on [69]. Today Filon is known mostly for a single paper published in 1929 in Proceedings of Royal Society of Edinburgh, where he introduced a method to compute highly oscillatory integrals [41]. Phrased in modern terminology, given the integral b f (x) eiωx dx, Iω [ f ] = a

we first divide the interval [a, b ] into subintervals [xk , xk+1 ], where xk = a + k h, k = 0, . . . , N , and h = (b −a)/(N +1). In each subinterval we approximate the function 1 f by a quadratic pk , say, determining its three parameters by interpolation at xk , 2 (xk + xk+1 ), and xk+1 . Finally, we let N  xk+1  [1]

ω [ f ] = pk (x) eiωx dx. (3.1) k=0

xk

To paraphrase, we interpolate f by a quadratic spline at equispaced knots in [a, b ], replace f by the spline in question in (3.1), and integrate exactly. Note the fundamental idea that renders [1] [ f ] different from conventional quadrature. The Simpson rule, the nearest cousin of the Filon method in the family of conventional quadrature methods, replaces the whole integrand f (x)eiωx by a quadratic spline and integrates exactly. Thus, [2]

ω [ f ] =

N

h 1 f (xk )eiωxk + 4 f ( 2 (xk + xk+1 ))eiω(xk +xk+1 )/2 + f (xk+1 )eiωxk+1 . 6 k=0

29

30

Chapter 3. Filon and Levin methods

[1]

Figure 3.1. The errors in logarithmic scale of the Filon method ω [ f ] (on the left) and [2] the Simpson rule ω [ f ] for f (x) = x/(1 + x 2 ) and 20 ≤ ω ≤ 200. The color progression {plum, violet, red, green, blue, orange} corresponds to N = 2K points, K = 0, . . . , 5.

[1]

[2]

Note that ω and ω use exactly the same information! Yet, they produce altogether different errors. To make our argument more concrete, consider the interval [−1, 1] and the function f (x) = x/(1 + x 2 ). The exact integral Iω [ f ] can be written down explicitly in terms of exponential integrals, and, as we already know from (2.3), in the absence of   stationary points, it behaves asymptotically as Iω [ f ] ∼  ω −1 . We have divided [−1, 1] into N = 2K “panels” for K = 0, . . . , 5 and computed [i ] [i ]

ω [ f ] for i = 1, 2. The logarithmic error log10 | ω [ f ] − Iω [ f ]| is displayed in Fig. 3.1. The difference between the two methods (which, to recollect, are using exactly the same information!) is striking. The main impression is that the Filon method delivers substantially greater accuracy, and, perhaps even more remarkably, this accuracy improves once ω grows. The second observation is that increasing N improves Filon’s error, but the improvement is fairly erratic and inconsistent. Matters are worse with the Simpson rule—we need N ∼  (ω) to counteract high oscillation and deliver small error, and this is prohibitively time consuming for large ω. The intuitive reason why Filon beats Simpson is that polynomial (or spline) interpolation error scales like a derivative of the interpolated function (in our case, the fourth derivative) and the latter is clearly much smaller for f (x) than for the highly oscillatory function f (x) eiωx for large ω. Intuition gained from Taylor expansions and derivatives, however, serves us poorly, and this is but a small part of the story! Inasmuch as Fig. 3.1 demonstrates the superiority of the Filon method (3.1) over its more conventional competition, the truth of the matter is that also the Filon method is vastly substandard! To comprehend what exactly we are missing, we have displayed in Fig. 3.2 to the same scale the logarithmic error committed by four simplest Filontype methods ωF that we are about to encounter. Here we just observe that, while [1] [2] both ω [ f ] and ω [ f ] use 2K +1 units of information, the Filon-type method uses just 2(K + 1). We are getting hugely greater bang for far fewer bucks! The reason is that the original Filon method has been misinterpreted. It has been traditionally treated by the same tools as classical quadrature rules and analyzed using Taylor expansion of the error [1, p. 890]. The true secret of the Filon method, once

3.1. Filon-type quadrature in an interval

31

Figure 3.2. The errors in logarithmic scale of the Filon-type method ωF [ f ] in the same setting as Fig. 3.1. Plum, violet, red, and green correspond to 2, 4, 6, and 8 units of information, respectively.

stripped of misleading ingredients like the division of [a, b ] into subintervals, is that it follows for large ω the logic of the asymptotic expansions of Chapter 2, while for small ω > 0 it recovers the spirit of classical quadrature. This has been first observed in [66], and we follow here along similar lines.

3.1.1 Plain-vanilla Filon-type methods A crucial implication of the expansion (2.3), as we saw before, is that in the absence of stationary points the asymptotic behavior of b f (x) eiω g (x) dx (3.2) Iω [ f ] = a

is determined solely by f and its derivatives at the endpoints; recall Theorem 2.1. It thus makes sense to interpolate there rather than at points—any points—within the open interval (a, b ). In this spirit, given s ≥ 0 we let p be the unique polynomial of degree 2s + 1 such that p ( j ) (a) = f ( j ) (a),

p ( j ) (b ) = f ( j ) (b ),

j = 0, . . . , s.

(3.3)

This is classical Hermite polynomial interpolation. Although it is possible to obtain this polynomial in an explicit form, similarly to the Lagrange interpolation formula ([92, p. 60] and, with greater generality, [97]), we will not pursue this route: it is much quicker to solve a small linear algebraic system. The plain Filon method ωF,s [ f ] is defined by b

ωF,s [ f ] = p(x) eiω g (x) dx, (3.4) a

where the polynomial p(x) satisfies the interpolation conditions (3.3). We note that computing (3.4) requires the knowledge of the moments b x n eiω g (x) dx, n = 0, . . . , 2s + 1, μn (ω) = a

32

Chapter 3. Filon and Levin methods

and this represents a limitation on the scope of a Filon method, although moment-free methods of this kind exist, and we will discuss them in §3.3. What is the asymptotic performance of this plain Filon method ωF,s [ f ]? We would expect that Hermite interpolation will eliminate s + 1 terms in the asymptotic expansion. The following theorem confirms this idea.   Theorem 3.1. For large ω the error of (3.2) decays at the rate of  ω −s −2 , while for ω → 0 the method tends to the Birkhoff–Hermite quadrature [39] with nodes at a and b , both of multiplicity s. Proof. The asymptotic error follows at once from Theorem 2.1. Since p− f ∈ C∞ [a, b ], the linearity of the operator Iω and substitution in the asymptotic expansion (2.3) yield  ∞  φk (b ) iω g (b ) φk (a) iω g (a) 1 F,s e e , −

ω [ f ] − I ω [ f ] = I ω [ p − f ] ∼ − g (a) (−iω)k+1 g (b ) k=0

where φ = p − f . Recall that each φk is a linear combination of φ, φ , . . . , φ(k) . Therefore, because of Hermite interpolation conditions, φk (a) = φk (b ) = 0 for k = 0, . . . , s, and the first assertion  b of the theorem follows. The second claim is self-evident because limω→0 Iω [ f ] = a f (x) dx.

We conclude that for large ω a Filon method is as good as the asymptotic method  [s ] from (2.4), but, unlike the latter, it does not blow up at the origin, that is, when ω = 0. Indeed, the method is good for all ω! The original method  of L. N. G. Filon uses the values of f at the endpoints; hence its error decays like  ω −2 . This is the reason for its advantage vis-à-vis the Simpson rule and other conventional quadrature methods. The interpolation at the midpoint makes no difference to the rate of asymptotic decay; neither does the division into subintervals. It is a sobering thought that, out of N + 1 function evaluations involved in (3.1), just two—at the endpoints—do the real heavy lifting. Having said so, one should not disregard the sheer serendipity in choosing the endpoints in the first place. Had Filon tried to be “clever” and chose Gaussian points instead, his method would have been next to useless. We can extend an identical reasoning to integrals with stationary points. In general, the following is a guiding principle in the construction of the Filon method and its generalizations. Theorem 3.2 (The Filon Paradigm). Given a highly oscillatory integral Iω [ f ], suppose that we know that the function and derivative values q = { f ( j ) (ck ) : j = 0, . . . , ξk − 1, k = 1, . . . , r } determine its asymptotic expansion up to  (ω −q ) for some q > 0. Let p be an interpolating function that matches f and its derivatives at q and let ωF [ f ] = Iω [ p] be the underlying Filon-type method. Then  

ωF [ f ] = Iω [ f ] +  ω −q˜ for some q˜ > q.

3.1. Filon-type quadrature in an interval

33

The proof is an exact replica of that of Theorem 3.1 (where q˜ = q + 1), while the points c1 , . . . , c r must include the critical points of the underlying oscillator. The implications of the Filon Paradigm are profound and this explains our insistence in §2.4 that what really matters is not the explicit asymptotic expansion—which is often fiendishly complicated—but the information needed to derive a requisite number of its terms: in current terminology, the set q . Theorem 3.1 is nothing else but the Filon Paradigm for the integral (3.2) with strictly monotone oscillator g : the critical points are just the endpoints a and b , while  s +1 = { f ( j ) (a), f ( j ) (b ) : j = 0, . . . , s}. Likewise, let c ∈ (a, b ) be a first-order stationary point of g : g (c) = 0, g (c) = 0, and assume g (x) = 0 for x ∈ [a, b ] \ {c}. We know from §2.4 that  s +1/2 = { f ( j ) (a), f ( j ) (b ) : j = 0, . . . , s} ∪ { f ( j ) (c) : j = 0, . . . , 2s}. In other words, the Filon-type method for this eventuality constructs a polynomial p of degree 4s + 2 using Hermite interpolation at  s +1/2 , and its asymptotic error is    ω −s −3/2 . We need no longer assume for simplicity that g has a single stationary point in the interval, while the case of higher-order stationary points can be dealt with in an identical manner.

3.1.2 Constructing Filon methods Although there is nothing to prevent us in Theorem 3.2 from using nonpolynomial interpolating functions, we currently focus on polynomial interpolation. The first requirement of a Filon method is the knowledge of the moments μk (ω) for k = 0, . . . , K, where deg p = K. Noting in passing that nonpolynomial interpolation and momentfree Filon will be dealt with in §3.3.3, we currently assume that the moments are known. There are several ways of constructing a Lagrange interpolating polynomial, of which Lagrange cardinal polynomials and the Newton approach of divided differences are the most familiar. Cardinal polynomials are critical in designing interpolatory quadrature rules, e.g., Gaussian quadrature. Thus, given distinct quadrature points c1 , . . . , c r ∈ [a, b ] the underlying cardinal polynomials of Lagrangian interpolation are k (x) =

r  x − cj j =1 j =k

ck − c j

,

k = 1, . . . , r,

“cardinal” because k (ck ) = 1, k (ci ) = 0 for i = k. The k ’s are all of degree r − 1. The classical interpolatory quadrature then reads 

b

f (x) dμ(x) ≈ a

r  k=1

 bk f (ck ),

where

bk =

a

b

k (x) dμ(x),

k = 1, . . . , r

[27]. All this translates to our setting, although explicit representations of cardinal polynomials are in general more complicated in the Hermite case [97]—in our experience it is easier to derive them on an ad hoc basis than seeking general formulas.

34

Chapter 3. Filon and Levin methods

Observe that, based on the previous discussion, we are fundamentally interested in using the endpoints and stationary points as interpolation nodes, for asymptotic reasons, but the setting that we just presented is completely general. Not wishing to revisit the same grounds with minor differences, we consider the more general setting of extended Filon methods, which will be introduced in Chapter 4 and which, in addition to endpoints and stationary points, interpolate f at additional values. Let q = { f ( j ) (ck ) : j = 0, . . . , ξk − 1, k = 1, . . . , r } be given (where a = c1 , b = c r , and min{ξ1 , ξ r } = q, with suitable amendments for r any stationary points in between) and set K = k=1 ξk − 1. We seek polynomials k, j of degree K such that (j)

k, j (ck ) = 1, (i )

k, j (c m ) = 0,

(i )

k, j (ck ) = 0,

i = 0, . . . , ξk − 1, i = j ,

m = 1, . . . , r, m = k, j = 0, . . . , ξk − 1, i = 0, . . . , ξ m − 1,

for k = 1, . . . , r . These are the cardinal polynomials of our Hermite interpolation (which are available explicitly [97]) and p(x) =

r ξ k −1  k=1 j =0

f ( j ) (ck )k, j (x).

We thus deduce that the underlying Filon method is

F,s

[f ] =

r ξ k −1  k=1 j =0

bk, j (ω) f

(j)

 (ck ), where bk, j (ω) = Iω [k, j ] =

a

b

k, j (x)eiω g (x) dx.

(3.5) r We note that here s = min{ξk }k=1 − 1, which establishes the number of function values and derivatives used in the Filon method. As an example, in the absence of stationary points, letting [a, b ] = [−1, 1], we may use  s +1 = { f ( j ) (−1), f ( j ) (+1) : j = 0, . . . , s},

(3.6)

whereby r = 2, c1 = −1, c2 = 1, ξ1 = ξ2 = s + 1, −1, j (x) = (−1) j 1, j (x), and s =0:

s =1:

s =2:

s =3:

1 1,0 (x) = (1 + x); 2 1 1 1,0 (x) = (1 + x)2 (2 − x), 1,1 (x) = − (1 + x)2 (1 − x); 4 4 1 1 3 2 1,0 (x) = (1 + x) (8 − 9x + 3x ), 1,1 (x) = − (1 + x)3 (1 − x)(5 − 3x), 16 16 1 3 2 1,2 (x) = (1 + x) (1 − x) ; 16 1 1,0 (x) = (1 + x)4 (19 − 37x + 27x 2 − 7x 3 ), 32 1 1,1 (x) = − (1 + x)4 (1 − x)(11 − 14x + 5x 2 ), 32 1 1 1,2 (x) = (1 + x)4 (1 − x)2 (3x − 2), 1,3 (x) = − (1 + x)4 (1 − x)3 . 96 32

3.1. Filon-type quadrature in an interval

35

Thus, for s = 1 it follows from (3.5) that

e−iω i(2e−iω + eiω ) 3i sin ω ie−iω 3i cos ω 3i sin ω + , − + , b1,1 (ω) = − − ω4 ω3 ω2 ω4 ω3 ω eiω i(e−iω + 2eiω ) 3i sin ω ieiω 3i cos ω 3i sin ω b2,0 (ω) = − . − + , b2,1 (ω) = + − ω4 ω3 ω2 ω4 ω3 ω

b1,0 (ω) =

Note that the weights are perfectly well defined for ω → 0: b1,0 (0) = b2,0 (0) = 1, 1 1 b1,1 (0) = 3 , and b2,1 (0) = − 3 . This corresponds to the Birkhoff–Hermite quadrature



1

1 f (x) dx ≈ f (−1) + f (1) + [ f (−1) − f (1)], 3 −1

which is of the classical (i.e., polynomial) order 4. An obvious generalization of the plain-vanilla Filon-type method (3.4) is by adding extra terms to the set q . For example, the original Filon method uses three values: the two members of 1 and f (0). Although   this does not change the asymptotic behavior of the error, which remains  ω −2 , this procedure may conceivably confer some other advantages. To proceed more formally, we consider a set ˜q such that q ⊆ ˜q as a basis for a Filon-type method. In particular, let us examine the case of [a, b ] = [−1, 1], rule out stationary points, and set ˜s +1,m (d) =  s +1 ∪ { f (d j ) : j = 1, . . . , m}, where  s +1 has been given in (3.6) and d1 , . . . , d m are distinct points in (−1, 1). Thus, Filon’s original method (3.1) (with a single panel, N = 0) corresponds to 1,1 (0). An important benefit of generalizing from plain to extended Filon is that it affords a convenient mechanism for error control [64]. Let ˜q,m = q ∪ { f (d j ) : j = 1, . . . , m}, where d1 , . . . , d m ∈ (a, b ) are distinct both from the c j s and from each other. For example, plain Filon (for [a, b ] = [−1, 1]) corresponds to 1 = {−1, 1}, while the original Filon method (with a single panel, N = 0) corresponds to ˜1,1 = {−1, 0, 1}. Denote by ωs ,i [ f ] the Filon-type method using the information in ˜s +1,i , i = 0, 1, 2. Similarly to embedded Runge–Kutta methods [13], we can use a more accurate method to estimate the error in ωs ,0 [ f ], | ωs ,0 [ f ] − Iω [ f ]| ≈ | ωs ,0 [ f ] − ωs ,i [ f ]| for i = 1 or i = 2. Let i ρω

   s ,0 [ f ] − s ,i [ f ]    ω ω = ,  ωs ,0 [ f ] − I [ f ]  ω

i = 1, 2.

Fig. 3.3 displays the ratios ρ1ω and ρ2ω for f (x) = (x + 2) sin x and g (x) = 4x − x 2 and s = 3. Had the estimate been perfect, the ratio would have been identically 1. In our case, in particular for ω3,2 [ f ], it is close enough to 1 to be an excellent error estimate.

36

Chapter 3. Filon and Levin methods

using

i Figure 3.3. Absolute value of the ratio ρω of the computed and the true error of ω4,0 [ f ] 4,2 ] (magenta) and ω [ f ] (green).

ω4,1 [ f

Our next scenario is of an oscillator with stationary points—note that, unlike in the asymptotic analysis of Chapter 2, we do not need to divide the interval into subintervals, each with a single stationary point! As an example, we let 1 3 3 Iω [ f ] = f (x)eiω(x − 4 x) dx. (3.7) −1

Since g (x) = 3(x 2 −

1 ), 4

1

we have two first-order stationary points at ± 2 and

3/2 = { f (−1), f (− 2 ), f (− 2 ), f ( 2 ), f ( 2 ), f (1)}. 1

1

1

1

We thus use a 5th-degree approximating   polynomial in the simplest Filon-type method, with an asymptotic error of  ω −3/2 . As before, we may extend q by adding extra points, thereby increasing the degree of the approximating polynomial. This has two benefits, exactly like when stationary points are absent: decreasing the size of the error, in particular for small ω ≥ 0 (upon which we will dwell at great length in Chapter 4), and providing us with the means to estimate the error. As an example, we have computed ω1,i [e x ] for i = 0, 1, 2, where 0 ˜3/2 = 3,2 ,

1 ˜3/2 = 3,2 ∪ {0},

2 ˜3/2 = 3,2 ∪ {−



29 , 9

 29 }, 9

and the results are reported in Fig. 3.4. The underlying setting is the same as in Fig. 3.3. While |ρ2ω | is very near 1 throughout the range, meaning that the error estimate is very good indeed, ρ1ω is much more uneven: mostly near 1 but occasionally dipping well below. At such points we have a gross underestimate of the error. The reason becomes ˆ where ω1,0 [e x ] = ω1,1 [e x ]; hence, ρ1ωˆ = 0, apparent from the plot: there are points ω ˆ ˆ and the error estimate there becomes useless.

3.1.3 Derivative-free Filon Filon-type methods typically require the computation of derivatives of f at critical points. Occasionally, this might be expensive or problematic, e.g., when f itself origi-

3.1. Filon-type quadrature in an interval

37

3

i Figure 3.4. The ratios ρω for f (x) = e x and g (x) = x 3 − 4 x. The colors have the same meaning as in Fig. 3.3.

nates in a numerical computation. A natural remedy in this case is to replace Hermitetype interpolation to derivatives by interpolation to function values near the point in question; e.g., instead of interpolating to f (a) and f (a), we might interpolate to f (a) and f (a + h) at some suitably small h > 0. Done carelessly, this might damage the asymptotic rate of decay of the error, and it has been in [64] that the correct   proved course of action is to use spacing of the size h =  ω −1 . In that case the asymptotic rate of decay of the error is unharmed, and the damage to accuracy is marginal. We have displayed in Fig. 3.5 the errors committed by two Filon-type methods for [a, b ] = [−1, 1], f (x) = sin x/(1 + x 2 ), g (x) = x, and ω ∈ [10, 200]. On the left is the standard method ωF,2 [ f ], which interpolates f at f (−1), f (−1), f (−1) f (1), f (1), f (1),

Figure 3.5. Filon with derivatives on the left and with derivatives replaced by finite differences on the right: spot the difference!

38

Chapter 3. Filon and Levin methods

while the method on the right replaces derivative information by interpolation at nearby points, f (−1), f (−1 + ω −1 ), f (−1 + 2ω −1 ), f (1 − 2ω −1 ), f (1 − ω −1 ), f (1) (of course, this makes sense only for ω ≥ ω0 for some ω0 > 0—in our case ω0 = 2). We observe that, for all intents and purposes, the two methods deliver virtually identical precision.

3.2 Multivariate Filon-type quadrature The Filon Paradigm remains true in a multivariate setting, although the construction of Filon-type methods is fraught with greater difficulties, both because of the nature of information that we must use in the interpolation procedure and since the interpolation itself is much more complicated once we move away from the univariate case. In particular, we cannot just take for granted that matching the number of degrees of freedom in the interpolating polynomial to the number of interpolation conditions necessarily has a solution and such solution is unique: each case must be examined on its own merit. To illustrate these two types of difficulties, consider again the isosceles right tricase there are angle from §2.4.3. We commence with g (x1 , x2 ) = x1 − x2, in which  neither stationary nor hidden stationary points, Iω [ f ] ∼  ω −2 , and we need to interpolate just at the vertices. The simplest such approximation fits a linear function, p [1] (x1 , x2 ) = p0,0 + p1,0 x1 + p0,1 x2 , to the values of f at the three vertices, and its   asymptotic error is  ω −3 . The interpolation problem is uniquely solvable.   To obtain an asymptotic error of  ω −4 we fit both f and its two directional derivatives at the vertices, nine conditions altogether. However—a situation familiar from the construction of finite element functions on a plane—quadratic polynomials have just six degrees of freedom while cubic polynomials have 10, and we need to 1 1 specify an additional interpolation condition. We opt for the centroid ( 3 , 3 ), whereby  i i the interpolation problem is uniquely solvable by p [2] (x1 , x2 ) = 0≤i1 +i2 ≤3 pi1 ,i2 x11 x22 . We set  1  1−x2 F, j p [ j ] (x1 , x2 )eiω(x1 −x2 ) dx1 dx2 , j = 1, 2.

ω [ f ] = 0

0

We can continue in a similar vein. Increasing asymptotic order of decay by one unit means six derivative conditions at each vertex, altogether 18 conditions, which we need to supplement with a further three (e.g., at the midpoints of the faces) to match the 21 degrees of freedom in a quintic. F,1 x1 −x2 F,2 x1 −x2 Fig. 3.6 (left) displays committed [e ] and ]. Preω [e  −2 the error  −3  by ω F,2 

F,1 −4 dictably, Iω [ f ] ∼  ω , ω [ f ] ∼  ω , and ω [ f ] ∼  ω . Our next example is (2.8): g (x1 , x2 ) = x12 + x22 in the unit square [−1, 1]2 . We already know from §2.4.1 that the problem has a stationary point at the origin, as  well as  four hidden stationary points at the midpoints of its faces, and that Iω [ f ] ∼  ω −3/2 . Our first attempt disregards the presence of hidden stationary points. Thus, we fit function values at the four corners and at the center, as well as the two first derivatives at the center, altogether seven interpolation conditions, using the polynomial p(x1 , x2 ) = p0,0 + p1,0 x1 + p0,1 x2 + p1,1 x1 x2 + p0,2 x22 + p2,1 x12 x2 + p1,2 x1 x22 (some trialand-error is typically needed to choose a good multivariate interpolation polynomial)

3.2. Multivariate Filon-type quadrature

39

F, j

Figure 3.6. The error (to logarithmic scale) committed by ω [ f ] for j = 1 (plum) and j = 2 (violet). On the left, in a triangle, for g (x1 , x2 ) = x1 − x2 , f (x1 , x2 ) = e x1 −x2 . On the right, in a square, for g (x1 , x2 ) = x12 + x22 , f (x1 , x2 ) = x1 e x1 −x2 .

and denote the underlying Filon-type method by ωF,1 [ f ]. Our second attempt is more honest, and we interpolate also at the midpoints. We have altogether 11 interpolation conditions and use the polynomial p(x1 , x2 ) = p0,0 + p1,0 x1 + p0,1 x +2+ p2,0 x12 + p1,1 x1 x2 + p0,2 x22 + p3,0 x13 + p2,1 x12 x2 + p1,2 x1 x22 + p0,3 x23 + p2,2 x12 x22 . This results in the method ωF,2 [ f ]. The errors for both methods are reported in Fig. 3.6  (right),  and the clear implication is that cheating does not pay: while ωF,1 [ f ] ∼  ω −3/2 , decaying at the same rate as Iω [ f ], we have a betterrate of decay once we take hidden stationary points into account, ωF,2 [ f ] ∼  ω −5/2 . Conventional quadrature suffers from the curse of dimensionality [36]: once we generalize a ν-point quadrature rule in an interval to a cube in d using tensor products, we require ν d function evaluations, and this rapidly increasing cost limits the efficacy of this approach. Once d is even moderately large we need to use different methods altogether, e.g., Monte Carlo or quasi–Monte Carlo algorithms. Although the curse of dimensionality afflicts Filon-type methods, it loses much of its sting. Thus, assuming absence of both stationary and hidden stationary points, we need to use in a d -dimensional simplex just d + 1 points. Once we compute all directional derivatives   up to degree s at the d +1 vertices, we need (d +1) s +d points: the number of function s evaluations increases polynomially rather than exponentially! A d -dimensional cube   function evaluations, but even is more expensive, coming with the price tag of 2d s +d d this is considerably less than the tensor-product rule of conventional quadrature and allows us to compute in substantially higher dimensions using this set of ideas. Filon-type methods (and other methods for highly oscillatory integrals), alas, suffer in a multivariate setting from another curse: the curse of special cases. This becomes apparent once we attempt to extend the calculus of the last paragraph and compute the number of function evaluations in the presence of stationary and hidden stationary points. The plot thickens once degenerate oscillators are allowed. Let us revisit an example from §2.4.3, the bivariate simplex with g (x1 , x2 ) = x1 + x2 , and recall that the oscillator degenerates along the hypotenuse. The idea now is to perform the “push towards the

40

Chapter 3. Filon and Levin methods

boundary” (2.12),  1 Iω [ f ] =

0

∼−

1−x2 0

f (x1 , x2 ) eiω g (x1 ,x2 ) dx1 dx2

 1 1 fk (1 − x2 , x2 ) eiω g (1−x2 ,x2 ) dx2 k+1 (−iω) 0 k=0   1 1 1 1 fk (x1 , 0) eiω g (x1 ,0) dx1 , fk (0, x2 ) eiω g (0,x2) dx1 − − 2 0 2 0

∞ 

where  

f0 (x1 , x2 ) = f (x1 , x2 ),

fk (x1 , x2 ) = ∇

fk−1 (x1 , x2 )

∇g (x1 , x2 )2

∇ g (x1 , x2 ) ,

k ≥ 1.

In our case this reduces to Iω [ f ] ∼ −

∞  k=0

 1 1 1 eiω fk (1 − x, x) dx − [ fk (0, x) + fk (x, 0)]eiωx dx , (−iω)k+1 0 0

where fk (x1 , x2 ) =

 1 ∂ fk−1 (x1 , x2 ) ∂ fk−1 (x1 , x2 ) , + ∂ x2 ∂ x1 2

k ≥ 1.

The second integral can be solved by the univariate Filon-type method, while the first, being nonoscillatory, must be computed by a conventional quadrature. This outcome, in numerical terms, is a competition between the errors committed by classical and Filon-type quadrature. Let us denote by ωF,s [ f ] the univariate Filon-type method interpolating to f (i ) , i = 0, . . . , s, at the endpoints and applied to the second integral. We discretize the first integral, appropriately truncated for k = 0, . . . , s, by standard Gauss–Legendre quadrature (shifted to [0, 1]) with ν points, denoted by  [ν,s ] [ f ]. Thus, Iω [ f ] ≈

  1 1 [ν,s ] eiω  [ν,s ] [ f ] − ωF,s [ f ] := ω [ f ], 2 iω

where  #  1 " 1 3 1 3 3 3 1 1 f (2 − 6 , 2 + 6 )+ f (2 + 6 , 2 − 6 ) , 2 # 4   5 " 1 15 1 15 1 1 15 15 1 1 [3,s ] [f ] =  f ( 2 − 10 , 2 + 10 ) + f ( 2 + 10 , 2 − 10 ) + f ( 2 , 2 ), 9 18 where  [4,s ] [ f ] = b1 [ f (c1 , c4 ) + f (c4 , c1 )] + b1 [ f (c2 , c3 ) + f (c3 , c2 )] ,   1 1 30 30 , , b2 = + b1 = − 72 4 72 4     1 1 525 − 70 30 525 + 70 30 c1 = − , , c2 = − 70 2 70 2     1 1 525 + 70 30 525 − 70 30 c3 = + . , c4 = + 70 2 70 2

 [2,s ][ f ] =

3.2. Multivariate Filon-type quadrature

41

[ν,s ]

Figure 3.7. The error (to logarithmic scale) committed by ω [ f ] for f (x1 , x2 ) = [∞,s ] e x1 −x2 . On the left s = 0, while on the right s = 1. The thick black line corresponds to ω [ f ] [ν,s ] while blue, pink, and brown are the errors committed by ω [ f ] for ν = 2, 3, 4, respectively.

Note that, for the sake of simplicity, we have selected f such that along the hypotenuse, fk ≡ 0 for k ≥ 1; hence,  [ν,s ] [e x1 −x2 ] is independent of s. We let [∞,s ]



[f ] =

 1 1 1 f (1 − x, x) dx − F,s [ f ] . eiω 2 iω 0 [ν,s ]

Fig. 3.7 displays the errors committed by different methods ω . The thick black line is the “pure” Filon error, while the thin colorful lines represent a combination of asymptotic and Gaussian quadrature error. Clearly, for small ω ≥ 0 the asymptotic error dominates, while, for increasing ω, after a while the error is dominated by Gaussian quadrature. Gaussian quadrature error is independent of ω, but since the Filon error decreases with ω, eventually the latter becomes smaller. [ν,s ] The larger s, the smaller ν, the sooner the error of ω “peels off” the error of [∞,s ] . This can take a while: for s = 0 and ν = 4 the two trajectories part ways for ω ω ≈ 1400. This example provides a possible avenue towards dealing with at least some degenerate oscillators, as well as with hidden stationary points, using Filon-type methods. Thus, expand (2.12) fully: this reveals degenerate values along the faces, as well as hidden stationary points. Then group the integrals, some of which oscillate rapidly and others of which do not, and finally solve the first with a Filon-type method and the other with Gaussian quadrature or its multivariate counterparts. Of course, all this runs against the main principle enunciated in §2.4, namely that an explicit derivation of a multivariate asymptotic expansion should be avoided if at all possible. Unfortunately, no alternatives to this course of action are at present available, and the only saving grace is that multivariate integrals of this kind have few important applications.

42

Chapter 3. Filon and Levin methods

3.3 Levin methods 3.3.1 Univariate Levin Other things being equal, by this stage of our narrative Filon-type methods are arguably the first port of call once we wish to compute highly oscillatory integrals. Unfortunately, other things are not always equal. In particular, the usability of Filon-type methods is restricted by the requirement to compute the moments μn (ω) of the underlying oscillator, and this is a clear limitation of their scope. The alternative approach of Levin methods has a crucial advantage: no need to compute moments! It also has a disadvantage: being inconsistent with the presence of stationary points. The origins of Levin methods are in two important papers of David Levin [74, 75], the first to observe explicitly asymptotic decay for ω  1, and their theory in both univariate and multivariate settings has been worked out by Sheehan Olver [84]. Focusing our attention at the first instance on the univariate problem (3.2) and assuming that the oscillator g is strictly monotone in [a, b ], we consider the linear ordinary differential equation y + iω g (x)y = f (x),

x ∈ [a, b ].

(3.8)

Supposing that we can find a solution of (3.8) (for an arbitrary initial condition), it follows at once that

d y(x)eiω g (x) = f (x)eiω g (x) , dx

and integration yields Iω [ f ] = y(b )eiω g (b ) − y(a)eiω g (a) .

(3.9)

The general solution of (3.8) can be written at once using the standard variation of constants formula, iω[ g (a)− g (x)]

y(x) = e



x

y(a) +

eiω[ g (ξ )− g (x)] f (ξ ) dξ ,

a

and for a fleeting moment we might entertain an illusion that we are done—an illusion because the above integral is nothing else but e−iω g (x) Iω [ f ]. However, we might attempt to compute (3.8) numerically, using collocation. Before we do so, however, we observe that the solution of (3.8) depends on ω. A highly oscillatory solution, though, might be ill advised because small errors in the solution of the ordinary differential equation might be magnified in (3.9). Worse, a solution that becomes large for ω  1 is inimical to our ends because asymptotic error estimates no longer hold. Fortunately, it has been in [74] that there exists   proved a solution of (3.8) which is slowly oscillatory and  ω −1 for ω  1. There are several alternative ways to define a slowly oscillatory function, the intuitive one being that it has few “ups” and “downs.” It is more helpful to use a more elaborate definition. Thus, assume without loss of generality that 0 < |g (x)| ≤ 1 in [a, b ] (otherwise we rescale ω) and let v be the inverse function of the strictly monotone function g . Following [74], we say that a function h oscillates slowly if there

3.3. Levin methods

43

exists 1 < w0  ω such that  h(v(ζ )) =

w0

−w0

H (t )eiωζ t dt ,

where H is a smooth function, independent of ω. Proposition 3.3 ([74]). Suppose that f (v(ζ ))/ g (v(ζ )) is slowly oscillatory,  w0 f (v(ζ )) H (t ) eiωζ t dt . = g (v(ζ )) −w0

(3.10)

  Then (3.8) has a slowly oscillatory solution y˜ω (x) =  ω −1 . Proof. We commence by noting that (3.10) implies that  w0 f (x) H (t ) eiω g (x)t dt = g (x) −w0

and set



w0

H (t ) iω g (x)t dt , e −w0 1 + t   noting that it is a slowly oscillating function and is  ω −1 for ω  1. Then 1 y˜ω (x) = iω



w0

H (t ) [iω g (x)t ]eiω g (x)t dt 1 + t −w0  w0 H (t ) iω g (x)t + g (x) dt e −w0 1 + t  w0 f (x) = f (x). H (t ) eiω g (x)t dt = g (x) · = g (x) (x) g −w0

yω (x) = y˜ω (x) + iω g (x)˜

1 iω

We deduce that y˜ω is a solution of (3.8), concluding the proof.

As a trivial example, let [a, b ] = [−1, 1], f (x) = e x , and g (x) = x. A general solution of (3.8) being  e−1 ex y(x) = y(−1) − , e−(1+iω)x + 1 + iω 1 + iω

we see at once that, to annihilate the highly oscillatory term, we need to set y(−1) = e−1 /(1 + iω), whereby ex y˜ω (x) = . 1 + iω

Another way of thinking of y˜ follows by examining (2.3) in Chapter 2: it is suggestive of the explicit formula y˜(x) = −

∞  k=0

fk (x) 1 , k+1 g (x) (−iω)

44

Chapter 3. Filon and Levin methods

provided that the infinite sum on the right converges. It is trivial to verify that this is the case with our example and, assuming that everything in sight converges, can be easily proved in general by telescoping series,  ∞  fk (x) d fk (x) 1 + iω g (x) y (x) = − y˜ (x) + iω g (x)˜ g (x) (−iω)k+1 dx g (x) k=0 ∞  1 =− [ f (x) + iω fk (x)] = f0 (x) = f (x), k+1 k+1 (−iω) k=0

demonstrating that y˜ obeys (3.8). To approximate y˜ω by collocation we follow the work of Sheehan Olver [84] and commence by choosing linearly independent functions Ψ = {ψ1 , ψ2 , . . . , ψN } in C∞ [a, b ] and N distinct collocation points c1 , . . . , cN ∈ [a, b ]. Our assumption is that Ψ is a Chebyshev set [90]—in other words, a linear combination of the ψ ’s can be used to interpolate in a unique manner any N units of data in the interval. We are seeking a function ψ(x) =

N  =1

q ψ (x),

which obeys (3.8) at the collocation points: this reduces to the linear system N  =1

[ψ  (cn ) + iω g (cn )ψ (cn )]q = f (cn ),

n = 1, . . . , N .

(3.11)

  Proposition 3.4. The solution of (3.11) is slowly oscillatory and  ω −1 for ω  1. Proof. Written in a vector form, (3.12) becomes (A+iωB) q = f , where the N ×N matrices A and B are independent of ω. Therefore q = (A+ iωB)−1 f and it follows from Kramer’s rule that each q is a rational function in ω (more specifically, a ratio of an (N − 1)-degree and an N -degree polynomials) and   hence slowly oscillatory. Moreover, for ω  1 we have q = (iω)−1 B −1 f +  ω −2 .7

This can be at once generalized, allowing for confluent collocation points. Given M ≤ N distinct Mcollocation points, c1 , . . . , cM ∈ [a, b ], each with multiplicity m ≥ m = N , we impose N collocation conditions by requiring the 1, such that =1 0, 1, . . . , m − 1 derivatives of the differential equation (3.8) to be satisfied by ψ at each collocation point c . In place of (3.11) we obtain the linear system N  ∂j [ψ  (cn )+iω g (cn )ψ (cn )]q = f ( j ) (cn ), j ∂ x =1

j = 0, . . . , mn −1,

n = 1, . . . , M .

(3.12) Note that (3.11) is a special case with M = N and mn ≡ 1. The definition of a Chebyshev set is generalized to confluent points by continuity. Having derived the solution of (3.12) (which, of course, hinges upon the linear system being nonsingular—we will return to this issue in what follows) we define the Levin method

ωL [ f ] = ψ(b )eiω g (b ) − ψ(a)eiω g (a) . 7 The

matrix B is nonsingular once g (cn ) = 0, n = 1, . . . , N —this will be proved in the next theorem.

3.3. Levin methods

45

Theorem 3.5. Suppose that, without loss of generality, c1 = a = −1, cM = b = 1, m1 = mM = s + 1, and, further, g (i ) (cn ) = 0 for i = 1, . . . , mn , n = 1, . . . , M . Then  

ωL [ f ] = Iω [ f ] +  ω −s −2 , ω  1. (3.13) Proof. Let  [ϕ] = ϕ + iω g ϕ. Hence (3.8) reduces to  [y] = f and (3.12) to dj  [ψ](cn ) = f ( j ) (cn ), dx j

j = 0, . . . , mn − 1,

n = 1, . . . , M .

Moreover,  Iω [ [ψ]] =

b





iω g (x)

[ψ (x) + iω g (x)ψ(x)]e

 dx =

a

allowing us to deduce that

a

b

d ψ(x) eiω g (x) dx, dx

ωL [ f ] = Iω [ [ψ]].

(3.14)

On the face of it, we are done: ωL [ f ] − Iω [ f ] = Iω [ [ f ] − f ] and everything reduces to the interpolation conditions at the endpoints, identically to the proof of Theorem 3.1. However, since  depends on ω, this is true only once we can bound the magnitude of the functions d j ( [ψ] − f )/ dx j , j = 0, . . . , s, uniformly for ω  1. We do so in what follows just for the case M = 2, c1 = a, c2 = b , m1 = m2 = s + 1, and N = 2s + 2, noting that a more general proof is identical in concept while replete in a fairly awkward notation. The underlying linear system (3.12) is U q = f , where  j −1   j − 1 (i +1) ( j −i −1) g (−1)ψ (−1), i i =0  j −1   j − 1 (i +1) (j) ( j −i −1) g (+1)ψ (+1), Us +1+ j , = ψ (+1) + iω i i =0 (j) Uj , = ψ (−1) + iω

f j = f ( j −1) (−1), f s +1+ j = f ( j −1) (+1)

for j = 1, . . . , s + 1 and  = 1, . . . , 2s + 2. We write U = A + iωB: since our concern is solely with the boundedness of d j  [ψ]/ dx j for large ω, all we need is to show that det B = 0. In the simplest nontrivial case s = 1 the matrix B reads ⎤ ⎡ g (−1)ψ1 (−1) g (−1)ψ2 (−1) g (−1)ψ3 (−1) g (−1)ψ4 (−1) ⎥ ⎢ χ2 (−1) χ3 (−1) χ4 (−1) ⎥ ⎢ χ1 (−1) ⎣ g (+1)ψ1 (+1) g (+1)ψ2 (+1) g (+1)ψ3 (+1) g (+1)ψ4 (+1) ⎦ , χ1 (+1) χ2 (+1) χ3 (+1) χ4 (+1) where we define χk (x) = g (x)ψk (x) + g (x)ψ k (x). Subtracting a multiple of the first row from the second and a multiple of the third row from the fourth (both multiples are well defined because g (±1) = 0), we have ⎡ ⎤ ψ1 (−1) ψ2 (−1) ψ3 (−1) ψ4 (−1) ⎢ ψ (−1) ψ (−1) ψ (−1) ψ (−1) ⎥ 1 2 3 4 ⎥ det B = g (−1) g (+1) g (−1) g (+1) det ⎢ ⎣ ψ1 (+1) ψ2 (+1) ψ3 (+1) ψ4 (+1) ⎦ . ψ 1 (+1) ψ 2 (+1) ψ 3 (+1) ψ 4 (+1)

46

Chapter 3. Filon and Levin methods

Figure 3.8. The real and imaginary parts of ψ for s = 0, 1, 2, 3 (in plum, violet, red, and olive, respectively) for ω = 50, f (x) = x/(1 + x 2 ), and g (x) = x 3 + x.

The matrix on the right is nonsingular as a trivial consequence of the Chebyshev condition, while our assumption was that g (i ) (±1) = 0: this implies det B = 0. An identical proof can be extended to any s ≥ 1 and, for that matter, to additional collocation points in (a, b ). The proof is complete.

We deduce that, as long as g = 0 and the conditions of Theorem 3.5 hold, the Filon Paradigm is valid for the Levin method—indeed, both methods utilize the same information to obtain the same asymptotic rate of decay. To illustrate the Levin method we consider f (x) = x/(1 + x 2 ) and g (x) = x 3 + x, an oscillator for which the moments are unknown and a Filon-type method cannot be constructed.8 We choose a polynomial basis, ψ (x) = x −1 , and construct the collocating polynomials ψ of degree 2s + 1 for s = 0, 1, . . . , 3:

x 1 + , 2 8iω 32ω 3(48ω 2 + 1)x 21x 2 1 + 6ω 2 3 − + s = 1 : ψ(x) = 8 ω 2 (64ω 2 − 15) 3iω(64ω 2 − 15) 4(64ω 2 − 15) s = 0 : ψ(x) =

+

(3 + 42iω + 64ω 2 − 1120iω 3 − 3840ω 4 )x 3 , 8iω(1 + 14iω + 48ω 2 )(64ω 2 − 15)

 and so on. Note that ψ blows up for s = 1 and ω = 15/8—there is nothing in our construction to prevent failure for small ω ≥ 0, and Theorem 3.5 is an asymptotic statement, ensuring existence (and more!) for ω  1. Fig. 3.8 displays the real and imaginary parts of different polynomials ψ. We denote the Levin method corresponding to (s + 1)-fold collocation at the endpoints by ωL,s . Fig. 3.9 depicts the logarithmic error committed by ωL,s for s = 0, 1, 2, 3 in the same setting as in Figs. 3.1, 3.2, and 3.8. Performance is roughly similar 8 There is a good reason why we do not consider g (x) = x, like in Figs. 3.1 and 3.2: we will return to this point in §3.3.3.

3.3. Levin methods

47

Figure 3.9. Logarithmic error log10 | ωL,s [ f ] − Iω [ f ]| for increasing ω and s = 0, 1, 2, 3.

to that of the Filon-type method in Fig. 3.2, except for small values of ω ≥ 0, when y˜ blows up. In general, the performance of Levin methods is roughly similar to Filon-type methods for large ω, with the crucially important advantage that no moments are needed in their construction. Like with Filon-type methods, we can further enhance accuracy by adding extra collocation points in (a, b ). However, the Levin method has a disadvantage vis-à-vis the Filon-type approach: this approach cannot work in the presence of stationary points. The reason is clear. To attain any reasonable asymptotic decay of the error, it is clear from the proof of Theorem 3.5 that we must collocate at stationary points. Unfortunately, this means that in the presence of a stationary point the matrix B has a row of zeros and cannot be nonsingular. This implies that U −1 cannot be uniformly bounded, and it is impossible to obtain the requisite order of asymptotic decay. In our example we have used a polynomial basis. Inasmuch as using polynomials is the first idea springing to mind, it is not necessarily the best. We will return to this issue in §3.3.3.

3.3.2 Multivariate Levin Given that the Levin method does not coexist with stationary points, its prospects in a multivariate setting are more restricted. Not only stationary points but also hidden stationary points must be ruled out. On the other hand, in the absence of such points it entertains a very significant advantage because multivariate moments, if they exist, are even more difficult to compute in their full generality than their univariate brethren. As in §3.1.3, we commence from the bivariate regular simplex 2 , a.k.a. the right isosceles triangle with vertices at (0, 0), (1, 0), and (0, 1). Suppose that y : 2 → 2 obeys the differential equation ∇ y + iω(∇g ) y = f ,

x ∈ cl 2 ,

the multivariate generalization of (3.8). Then

∇ eiω g (x) y(x) = eiω g (x) ∇ y(x) + iω∇g (x) y(x) = f (x)eiω g (x) ;

(3.15)

48

Chapter 3. Filon and Levin methods

consequently,  Iω [ f ] =

2

&

∇ eiω g (x) y(x) dV =

∂ 2

n(x) y(x)eiω g (x) dS,

the latter following from the Green Theorem or its more general version, the Stokes Theorem [78].9 We have on the right a sum of univariate integrals over the three faces [i ] of the simplex, Iω [ fi ], i = 1, 2, 3,  Iω [ f ] =

1 0

[y1 (1 − x, x) + y2 (1 − x, x)] eiω g (1−x,x) dx −



1

− 0



1 0

y1 (0, x) eiω g (0,x) dx

y2 (x, 0) eiω g (x,0) dx.

Thus, denoting explicitly the dependence of Iω on g , Iω [ f ] = Iω [ f ; g ], and letting g1 (x) = g (1 − x, x), f

[1]

g2 (x) = g (0, x),

(x) = y1 (1 − x, x) + y2 (1 − x, x),

we have Iω [ f ; g ] =

f

g3 (x) = g (x, 0),

[2]

3 

(x) = −y1 (0, x),

f [3] (x) = −y2 (x, 0),

Iω [ f [m] ; g m ],

m=1

a sum of univariate integrals, all without stationary points. Suppose now that Ψ is composed of bivariate basis functions ψ  (x1 , x2 ),  = 1, . . . , N , and that the function N  ψ(x1 , x2 ) = q ψ (x1 , x2 ) =1

is determined by the collocation conditions

∂ | j|  ∇ ψ(c k ) + iω∇g (c k ) ψ(c k ) − f (c k ) = 0 j ∂x for 0 ≤ | j | ≤ mk −1 and distinct c 1 , . . . , c M , such that the overall number of conditions is N . This is a linear system, and its nonsingularity for ω  1 is ensured by the projections of ψ on the faces of the simplex being Chebyshev sets. We define

ωL [ f ; g ] =

3 

ωL [ f [m] ; g m ].

(3.16)

m=1

Once the three vertices are among the collocation points and their corresponding   weights are ≥ s+1, it follows similarly to Theorem 3.5 that ωL [ f ] ∼ Iω [ f ]+ ω −s −3 [85]. In other words, the Filon Paradigm is still valid! All this, as long as there are neither stationary nor hidden stationary points, can be generalized to bounded domains in d with piecewise-smooth boundary; we omit the details because they follow along similar lines. 9 We could have equally well used the elementary approach of §2.4.2 but prefer heavier mathematical machinery for the sake of brevity.

3.3. Levin methods

49

Figure 3.10. Logarithmic error of the Filon method for the bases Ψ 1 (cyan), Ψ 2 (red), Ψ 3 (green), and Ψ 4 (blue).

3.3.3 When Filon met Levin. . . We have defined a Levin method using a general Chebyshev set Ψ = {ψ1 , ψ2 , . . . , ψN }, and, obviously, also a Filon-type method can be defined using such a more general basis. In other words, instead of using a polynomial, we can interpolate with a linear combination from Ψ. The question—for both Filon-type and Levin methods—is how to choose a “good” basis. As always, the phrase “good” admits a variety of often subtle meanings. Do we seek (for Filon) a basis in which generalized moments b ψ m (x) eiω g (x) dx, m = 1, 2, . . . , N , μ m (ω; ΨN ) = a

can be computed? Do we seek (for Levin) a basis in which the linear system (3.12) is always nonsingular? Can we choose a basis that promises higher accuracy? To demonstrate how the choice of a basis can impact accuracy, let us consider the strictly monotone oscillator g (x) = x + x 2 /5 in [−1, 1] and approximate Iω [e x ] with four bases: Ψ 1 = {1, x, x 2 , x 3 }, ( ' 1 1 1 , , Ψ 2 = 1, , 2 + x (2 + x)2 (2 + x)3 ) 3 * 2        2x 2x x2 x2 2x x2 2x , Ψ3 = 1 + , 1 + , 1+ x+ x+ , 1+ x+ 5 5 5 5 5 5 5

Ψ 4 = {1, x, e−x , xe x }, in each case interpolating to f and f at the endpoints. Note that a Filon-type method  uses exactly the same information in each case! Asymptotic accuracy is  ω −3 in all cases, but, as is evident from Fig. 3.10, actual accuracy may differ by orders of magnitude! The outright winner is Ψ 3 —note that this is not the usual polynomial basis.

50

Chapter 3. Filon and Levin methods

Definition 3.6. Let g = 0 in [a, b ]. We call Ψ N = {g (x), g (x) g (x), g (x) g 2 (x), . . . , g (x) g N −1 (x)} the N th natural basis of the oscillator g . Since g is strictly monotone, it is trivial that Ψ N is a Chebyshev set. A natural basis has emerged as a clear winner in Fig. 3.10, but its great advantage is of a different kind. As observed in [88], its generalized moments are trivial to evaluate. Since

+ , d m g (x)eiω g (x) = mψ m (x) + iωψ m+1 (x) eiω g (x) , dx

integration from a to b yields g m (b ) eiω g (b ) − g m (a) eiω g (a) = mμ m (ω) + iωμ m+1 (ω); therefore, μ m+1 (ω) = −

1 m m g (b )eiω g (b ) − g m (a)eiω g (a) , μ m (ω) + iω iω

Since μ0 (ω) =

1 iω



b a

m = 0, . . . , N − 1. (3.17)

deiω g (x) 1 iω g (b ) − eiω g (a) , e dx = iω dx

we have in (3.17) a recurrence relation for the evaluation of all the moments. Advantage Filon! The very method that won in Fig. 3.10 had the basis ψ3 = Ψ 4 , and its moments can be computed at ease for any strictly monotone g . It seems that the unique selling point of a Levin method is gone, but this is illusory. Once we choose a natural basis, a Filon-type method and a Levin method are identical! For simplicity, let us assume that there are no supplementary points—in other words, having chosen the natural basis Ψ 2s +2 , we have

ωF [ f ] = Iω [ p],

ωL [ f ] = ψ(b )eiω g (b ) − ψ(a)eiω g (a) ,

where the coefficients of p(x) = g (x)

2s +2  =1

d g  (x),

ψ(x) = g (x)

2s +2  =1

e g  (x)

are determined by interpolation p ( j ) (a) = f ( j ) (a),

p ( j ) (b ) = f ( j ) (b ),

j = 0, . . . , s,

and collocation

, dj + (a) + iω g (a)ψ(a) = f ( j ) (a), ψ dx j

, dj + ψ (b ) + iω g (b )ψ(b ) = f ( j ) (b ), j dx respectively.

j = 0, . . . , s,

3.3. Levin methods

51

Proposition 3.7. Let ϕ(x) = −

2s +1  =0

Then

2s +1  m! 1 d g m− (x). (−iω)+1 m= (m − )! m+1

ωF [ f ] = ϕ(b )eiω g (b ) − ϕ(a)eiω g (a) .

(3.18)

Proof. We commence from (3.17). Since, by straightforward induction on m, μ m (ω) = −eiω g (b )

m−1  =0

m−1  (m − 1)! g  (a) (m − 1)! g  (b ) , + eiω g (a) (−iω) m− ! (−iω) m− ! =0

we deduce that

ωF [ f ] =

2s +2 

d m μ m (ω) m=1  2s +2 m−1 m−1    (m − 1)! g  (b ) − eiω g (a) d m eiω g (b ) =− m− (−iω) ! m=1 =0 =0 2s +1 m m−   g (b ) m! = −eiω g (b ) d m+1 +1 (−iω) (m − )! m=0 =0 2s +1 m   g m− (a) m! + eiω g (a) d m+1 (m − )! (−iω)+1 m=0 =0 2s +1 2s +1   m! 1 d m+1 = −eiω g (b ) g m− (b ) +1 (m − )! (−iω) m= =0 2s +1 2s +1   m! 1 + eiω g (a) g m− (a) d m+1 +1 (m − )! (−iω) m= =0 iω g (b ) iω g (a)

= ϕ(b )e

− ϕ(a)e

(m − 1)! g  (a) (−iω) m− !



,

and the proof follows.

The formula (3.18) is very suggestive of the Levin method L [ f ], except that we have ϕ in place of ψ. Proposition 3.8. Assuming that the collocation equations (3.12) are nonsingular (as is the case for ω  1 ), the functions ϕ and ψ are identical. Proof. Once the collocation equations are nonsingular, they have a unique solution. Therefore, it is sufficient to prove that ϕ obeys these equations. But, for j = 0, 1, . . . , s, ϕ ( j +1) + iω

2s +1 2s +1   , d j +1 g m− (x) dj + m! 1 (x)ϕ(x) = − g d m+1 dx j +1 (−iω)+1 m= (m − )! dx j =0 , + +1 2s +1   d j g (x) g m− (x) m! 1 2s + d dx j (−iω) m= (m − )! m+1 =0

52

Chapter 3. Filon and Levin methods

=−

2s +2  =1 2s +1 

+1  d m! 1 2s d m+1  (−iω) m= (m − )!

j

+

g (x) g m− (x)

+ j

,

dx j

+1  d g (x) g m− (x) m! 1 2s + d m+1 dx j (−iω) m= (m − )! =0 2s +2  dj  d g (x) g m−1 (x) = p ( j ) (x). = dx j m=1 m

,

Bearing in mind that p ( j ) (a) = f ( j ) (a), p ( j ) (b ) = f ( j ) (b ) for j = 0, . . . , s, we deduce that the function ϕ obeys the collocation conditions and the proof is complete.

All this algebra has not been in vain: we have just proved the following. Theorem 3.9. Using a natural basis, Filon-type and Levin methods are identical. We have alluded in §3.1 that there is a good reason why, in Fig. 3.9, we did not take the same oscillator g as in Figs. 3.1 and 3.2. The reason is that for g (x) = x the natural basis is the standard polynomial basis, which we have used earlier for both Filon-type and Levin methods. In other words, in that case the two approaches coincide. Given a univariate strictly monotone oscillator, it makes every sense to use a natural basis, not least because a wealth of computational results demonstrates that when Filon and Levin meet the error is minimized. Once we consider integrals with stationary points, the Levin method is no longer available. Yet, using the same logic that has led us to the natural basis, Sheehan Olver proposed a basis for the Filon method which, while leading to a Levin-like method, dispenses with the need for compute moments [86]. Let us consider an interval [a, b ], assuming without loss of generality that a < 0 < b and that 0 is a stationary point of order r ≥ 1. It is instructive to commence from the special case g (x) = x r +1 and recall the “Levin operator”  [ϕ] = ϕ + ig ϕ which has been introduced in the proof of Theorem 3.5. A solution of the equation  [y] = x n for n ≥ 1 is

  n+1    exp −iωx r +1 + 2(r +1) iπ   n + 1 n +1 r y(x) = − Γ Γ , 0 , , −iωx r +1 r +1 (r + 1)ω (n+1)/(r +1)

where Γ is the incomplete gamma function, 



Γ (a, z) =

t a−1 e−t dt ,

Re a > 0

z

[108, eqn. 8.2.2]. The incomplete gamma function is an analytic function for z = 0, and it can be readily computed by standard mathematical software. Since  [ϕ](x)eiω g (x) =

d ϕ(x)eiω g (x) , dx

we deduce that  a

b

x n eiω g (x) dx = Iω [x n ] = ϕ(b )eiωb

r +1

− ϕ(a)eiωa

r +1

,

3.3. Levin methods

53

which looks similar to the Levin method yet different because we did not use collocation to form ϕ—and the new “method” is applicable just to f (x) = x n , n = 0, and g (x) = x r +1 . The latter restriction can be lifted readily. Thus, suppose that 0 is a stationary point of g of order r ≥ 1, i.e., g (i ) (0) = 0, i = 1, . . . , r , and g (r +1) (0) = 0. Following [86], we define   n+1      iπ exp −iω g (x) + n +1 n +1 2(r +1) [r ] [r ] , 0 , , −iω g (x) − Γ Γ φn (x) = τn (x) r +1 r +1 (r + 1)ω (n+1)/(r +1)

where

⎧ (−1)n , ⎪ ⎨ [r ] τn (x) = (−1)n e−iπ(n+1)/(r +1) , ⎪ ⎩ −1

x < 0 and r odd, x < 0 and r even, otherwise.

[r ]

(The role of the complex sign function τn is to ensure a branch cut such that (x r +1 )1/(r +1) = x for all x ∈ [a, b ].) [r ]

Proposition 3.10. The function φn is C ∞ [a, b ] for all r ≥ 1, n ≥ 0. Moreover, [r ]

 [φn ] =

and

[r ]

(sgn x) r +n g (x)|g (x)|(n−r )/(r +1) r +1 [r ]

(3.19)

[r ]

Iω [ [φn ]] = φn (b )eiω g (b ) − φn (a)eiω g (a) .

Proof. Away from the origin (3.19) follows from the differential equation for the incomplete Gamma function,   1−a w + 1 + w(z) = Γ (a, z) w = 0, z

[108, eqn. 8.2.12]; we omit the straightforward yet laborious algebra. Near the origin we have  d (sgn x)n+r [r ] (n+1)/(r +1)  [φn ](x) = |g (x)| , x = 0, r +1 dx

and, since (sgn x)

 =

n+r

g r +1 (0) (r + 1)!

(n+1)/(r +1)

|g (x)|

(n+1)/(r +1)

[r ]

= (sgn x)

n+r

(n+1)/(r +1)   g r +1 (0)  r +2   r +1 + x x     (r + 1)!

x n+1 [1 +  (x)](n+1)/(r +1) ,

it follows that  [φn ] is C∞ [a, b ]. (We can assume without loss of generality that g (0) > 0; otherwise replace g (x) by g (x) + α for sufficiently large α > 0, while mul[r ] tiplying the integral by e−iωα .) Finally, the explicit form of Iω [ [φn ]] follows in a standard manner from the fundamental theorem of calculus.

54

Chapter 3. Filon and Levin methods

[2]

Figure 3.11. The functions φn , n = 0, 1, 2, 3, for g (x) = x 3 + x 4 /4 and ω = 50: real parts in red and imaginary parts in blue.

[r ]

Functions φn are sketched in Fig. 3.11 for g (x) = x 3 + x 4 /4; hence r = 2. We note for further use that, while highly oscillatory, they are bounded. Indeed, since Γ (a, z) ∼ z a−1 e−z ,

|z|  1,

| arg z| 0 because it relies on cancellation. It is advised to use a more convenient expression for small ω instead. For example, for the same coefficient b−,0 we obtain

+ 1 b−,0 = (−iω)−7 (2700 − 2340iω − 834ω 2 + 127iω 3 + 11ω 4 )eiω 2 , −(2700 + 3060iω − 1554ω 2 − 441iω 3 + 65ω 4 + 2ω 6 )e−iω 19 = +  (ω) , ω → 0. 105

4.2.2 Filon–Clenshaw–Curtis Filon–Clenshaw–Curtis (FCC) quadrature has been introduced in [89] and analyzed in [38], and it represents a generalization of the popular Clenshaw–Curtis quadrature for nonoscillatory integrals [105]. The idea in [38], which for simplicity we present in the interval [a, b ] = [−1, 1], is to interpolate by a polynomial p at the ν + 2 points ck = cos(kπ/(ν +1)), k = 0, . . . , ν +1 (note that c0 = 1 and cν+1 = −1 are the endpoints), and let 

ωFCC,0,ν [ f ] =

1

p(x) eiω g (x) dx.

(4.4)

−1

Our first observation is that p interpolates f at the endpoints; hence the asymptotic  error is (the rather modest)  ω −2 . Moreover, at ω = 0 classical order is ν +1 for even ν and ν +2 for an odd one [47]. Altogether, FCC does not look overly promising, but, as it turns out, it has a substantial virtue of simplicity and low cost. The underlying reason is that p can be represented explicitly (and very simply) through representation in Chebyshev polynomials, ν+1  pn Tn (x), p(x) = n=0

4.2. Choosing internal nodes

63

where Tn (x) = cos(n arccos x) is the nth Chebyshev polynomial [91, p. 301] and     ν  π 1 1 1 p0 = f cos + f (−1) , f (1) + 2 ν +1 ν +1 2 =1     ν  π 2 nπ (−1)n 1 pn = f cos f (−1) , n = 1, . . . , ν, + cos f (1) + 2 ν ν ν +1 2 =1     ν  (−1)ν π 1 1  f cos pν+1 = f (−1) . (−1) − f (1) + 2 ν +1 ν +1 2 =1

This means that p0 , . . . , pν+1 can be computed in  (ν log ν) operations using the Discrete Cosine Transform DCT-I [105]. Therefore 1 ν+1  FCC,0,ν [f ] = pn bn (ω), where bn (ω) = Tn (x) eiω g (x) dx, n = 0, . . . , ν + 1.

ω −1

n=0

The coefficients bn can be computed (at least for g (x) = x) with a fast and stable algorithm [38]. Alternatively, computing directly for g (x) = x, 2i sin ω , −iω 2 cos ω 2i sin ω b1 (ω) = − , − (−iω)2 −iω  1 8 cos ω 4 b2 (ω) = − , − 2i sin ω + −iω (−iω)3 (−iω)2   9 24 1 24 b3 (ω) = −2 cos ω , − 2i sin ω + + (−iω)2 (−iω)4 −iω (−iω)3   1 192 192 16 80 b4 (ω) = −2 cos ω , − 2i sin ω + + + −iω (−iω)2 (−iω)5 (−iω)2 (−iω)4

b0 (ω) = −

and so on. Comparison with the FJ weight (4.3) demonstrates the overwhelming simplicity of FCC coefficients, simplicity which is richly rewarded once we implement and program the method. This is the moment to pause and assess how FCC compares to, say, FJ: • FCC is considerably cheaper to evaluate for large ν, and its weights can be written in a considerably simper form. First strike to FCC. • The asymptotic rate of decay of FJ can be made  as large as we wish by increasing s, while FCC always decays like  ω −2 . Therefore, FJ wins for ω  1. A draw. • For small ω the classical order of FCC is either ν + 1 or ν + 2 (depending on the parity of ν), while the classical order of FJ is 2(s +ν +1)—even for s = 0, FJ wins. Advantage FJ! It is obvious how to design a method that combines the simplicity of FCC with superior asymptotic behavior of FJ—all we need is to interpolate to f and its derivatives at the endpoints and to Chebyshev points inside. Thus, we seek the unique polynomial p of degree 2s + ν + 1 such that p ( j ) (±1) = f ( j ) (±1),

j = 0, . . . , s,

p(ck ) = f (ck ),

k = 1, . . . , ν,

(4.5)

64

Chapter 4. Extended Filon method

where, like for FCC, ck = cos(kπ/(ν + 1)), and, in place of (4.4), set

ωFCC,s ,ν [ f ] =



1

p(x) eiω g (x) dx.

(4.6)

−1

Not wishing to multiply name-tags, we term (4.6)  FCC. By design, ωFCC,s ,ν [ f ] − Iω [ f ] ∼  ω −s −2 , ω  1, matching the asymptotic order of FJ. Our goal in the remainder of this subsection is to demonstrate, following [46], that the advantage of simplicity and fast computation is retained by this procedure. We note in passing, though, that at ω = 0 FJ always wins by design. Like for the original FCC, we represent the polynomial defined by (4.5) in the form 2s +ν  pn Tn (x), p(x) = n=0

and our next step is to calculate the pn ’s using DCT-I. However, first we note that (j)

Tn (1) =

2 j j !n(n + j − 1)! , (2 j )!(n − j )!

(j)

(j)

Tn (−1) = (−1)n− j Tn (1),

0 ≤ j ≤ n, n + j ≥ 1

[46]. The interpolation conditions (4.5) can be then rewritten as 2s +ν+1 n=1

2s +ν+1 n=1

(2 j )! n(n + j − 1)! pn = j f ( j ) (1), 2 j! (n − j )!

n(n + j − 1)! pn = (n − j )!   2s +ν+1 knπ cos p = ν +1 n n=0

(−1)n− j

(2 j )! ( j ) f (−1), 2j j !   kπ f cos , ν +1

j = 1, . . . , s,

(4.7)

k = 0, . . . , ν + 1.

Let 



 j nπ cos p , hj = fj − ν +1 n n=ν+2

 jπ , f j = f cos ν +1

2s +ν+1

j = 0, . . . , ν + 1.

The third line in (4.7) is ν+1  n=0

 cos

 j nπ p = hj , ν +1 n

j = 0, . . . , ν + 1,

 where ν+1 n=0 α j means that for n = 0 and n = ν + 1 the terms are halved. This is precisely DCT-I, which, written in a vector form, becomes ν+1 p = h. The inverse of −1 DCT-I is itself a scaled DCT-I, ν+1 = [2/(ν + 1)]ν+1 ; therefore we can recover the pn ’s from hn ’s, pn =

  ν+1 mnπ 2  cos hm , ν +1 ν + 1 m=0

where p0 and pν+1 need to be halved.

n = 0, . . . , ν + 1,

4.2. Choosing internal nodes

65

The coefficients hn are unknown and need to be determined from the leading two lines in (4.7). As a reality check, we note that they are determined by the 2s unknowns pn , n = ν + 2, . . . , 2s + ν + 1, matching the number of relevant equations in (4.7). ˇ = ν+1 f, where f is the vector of the We follow the algebra from [46] and set p fn ’s. Therefore it follows from the definition of the hn ’s that, halving the first and the last pn ,     ν  (−1)n k nπ 1 2 cos pn = hν+1 h + h + 2 ν +1 k ν + 1 2 0 k=1       ν 2s (−1)n+m+ν+1 k nπ k mπ 1  2  k + = ˇpn − (−1) cos p + cos 2 ν +1 ν +1 ν + 1 m=2 ν+m+1 2 k=1  ν      ν 2s   k(n − m)π k(n + m)π 1 = ˇpn − + (−1) j cos pν+m+1 (−1)k cos ν +1 ν +1 ν + 1 m=1 k=0 k=0 

+ (−1)n+m+ν+1 − 1 .

Since

 1 + (−1)n−m+ν k(n − m)π , n = 0, . . . , ν + 1, m = 1, . . . , 2s, = 2 ν +1 k=0   ν  k(n + m)π 1 + (−1)n+m+ν k (−1) cos , n + m = ν + 1, = 2 ν +1 k=0   ν  k(n + m)π k (−1) cos = ν + 1, n = ν − 2s + 1, . . . , ν, m = ν − n + 1, ν +1 k=0

ν 



(−1)k cos

everything simplifies beautifully: n = 0, . . . , ν − 2s or n = ν + 1, pn = ˇpn , n = ν − 2s + 1, . . . , ν. pn = ˇpn − p2ν−n+2 ,

(4.8)

Note that most of the coefficients pn are exactly the same as in the “classical” FCC with s = 0. Finally, we need to determine the pn ’s for n = ν + 2, . . . , ν + 2s + 1 using derivative conditions from (4.7). Direct substitution yields a (2s) × (2s) linear system,

2s   (ν + n + 1)(ν + n + j )! (ν − n + 1)(ν − n + j )! pν+n+1 − (ν − n − j + 1)! (ν + n − j + 1)! n=1 ν  (ν + 1)(ν + j )! (2 j )! n(n + j − 1)! ˇp , ˇpn − = j f ( j ) (1) − 2(ν − j + 1)! ν+1 (n − j )! 2 j! n=1



(ν + n + 1)(ν + n + j )! (ν − n + 1)(ν − n + j )! (−1) pν+n+1 − (ν − n − j + 1)! (ν + n − j + 1)! n=1 ν  (ν + 1)(ν + j )! n(n + j − 1)! (2 j )! ˇp ˇpn − (−1)n = (−1)ν+ j +1 j f ( j ) (−1) + (−1)ν 2(ν − j + 1)! ν+1 (n − j )! 2 j! n=1 2s 

n

for j = 1, . . . , s. In particular, the first two nontrivial cases yield

66

Chapter 4. Extended Filon method

s = 1: ν  f (1) − (−1)ν f (−1) 1 [1 + (−1)ν−n ]n 2 ˇpn , − 8(ν + 1) n=1 8(ν + 1) ν  f (1) + (−1)ν f (−1) ν +1 1 ˇp ; [1 − (−1)ν−n ]n 2 ˇpn − pν+3 = − 16 ν+1 16(ν + 1) n=1 16(ν + 1)

pν+2 =

s = 2: pν+2 =

pν+3 =

3 2ν 2 + 4ν + 19 [ f (1) + (−1)ν f (−1)] [ f (1) − (−1)ν f (−1)] − 128(ν + 1) 128(ν + 1) ν  1 − n 2 (2ν 2 + 4ν − n 2 + 20)[1 + (−1)ν−n ] ˇpn , 128(ν + 1) n=1

1 2ν 2 + 4ν + 33 [ f (1) − (−1)ν f (−1)] [ f (1) + (−1)ν f (−1)] − 128(ν + 1) 384(ν + 1) ν  1 − n 2 (2ν 2 + 4ν − n 2 + 34)[1 − (−1)ν−n ] ˇpn 384(ν + 1) n=1

(ν + 1)(ν 2 + 2ν + 33) ˆpν+1 , 384 1 2ν 2 + 4ν + 3 pν+4 = − [ f (1) + (−1)ν f (−1)] [ f (1) − (−1)ν f (−1)] + 128(ν + 1) 384(ν + 1) ν  1 + n 2 (2ν 2 + 4ν − n 2 + 4)[1 + (−1)ν−n ] ˇpn , 384(ν + 1) n=1 −

1 2ν 2 + 4ν + 9 [ f (1) − (−1)ν f (−1)] [ f (1) + (−1)ν f (−1)] + 256(ν + 1) 768(ν + 1) ν  1 + n 2 (2ν 2 + 4ν − n 2 + 10)[1 − (−1)ν−n ] ˇpn 768(ν + 1) n=1

pν+5 = −

+

(ν + 1)(ν 2 + 2ν + 9 ˇpν+1 . 768

Once the pn ’s for n ≥ ν +2 are known, we reconstruct pn for n = 0, . . . , ν +1 from (4.8)—the entire cost of FCC is thus  (ν log ν) for the Discrete Cosine Transform,    s 3 for solving the linear system (unless pn for n ≥ ν + 2 are known, as above), and  (ν s) for sundry calculations. Typically s is small, while ν might be large, and the cost is dominated by the  (ν log ν) term. Therefore, the generalization of FCC to all s ≥ 0 enjoys all the advantages of “plain” FCC from [38] in cost and simplicity, while arbitrarily increasing the asymptotic rate of decay of the error. Translating the pn ’s into the weights bn is identical to the case s = 0.

4.2.3 Stationary points It is convenient to assume that the integral Iω [ f ] has a single stationary point at the left endpoint x = a: for any g with a finite number of derivatives in [a, b ] we can easily split the problem into a sum of such integrals, possibly using linear transformation of variables.

4.3. Error analysis of FJ and FCC

67

Let us assume for simplicity that g (a) = 0, g (a) = 0: an extension of our argument to higher-order stationary points is straightforward. Commencing from FJ, we wish to choose c1 , . . . , cν to maximize the classical order of quadrature at ω = 0, i.e. (cf. (2.1.6) for motivation and (4.2) for comparison), Q0FJ,s ,ν [ f ] =  ≈ I0 [ f ] =

2s  j =0

b−, j (0) f ( j ) (a) +

ν 

b˜k (0) f (ck ) +

k=1

s  j =0

b+, j (0) f ( j ) (b )

b

f (x) dx. a

What is the optimal choice of nodes and weights? Recall the basics of Gaussian quadrature: given a Borel measure dμ in the interval [a, b ], the highest classical order of the quadrature formula b ν  ˜ bk f (ck ) ≈ f (x) dμ(x) a

k=1

is obtained once c1 , . . . , cν are the zeros of the νth-degree orthogonal polynomial with respect to the underlying measure dμ [90, p. 139]. (The weights b˜1 , . . . , b˜ν can be easily derived, e.g., by requiring that the quadrature be exact for f (x) = x i , i = 0, . . . , ν − 1.) This can be easily generalized to Birkhoff–Hermite quadrature of the form s−  j =0

b−, j f ( j ) (a) +

ν 

b˜k f (ck ) +

k=1

s+  j =0

b+, j f ( j ) (b ),

except that now c1 , . . . , cν need to be the zeros of the νth orthogonal polynomial with respect to the Borel measure (x − a) s− +1 (b − x) s+ +1 dμ(x). The proof is identical to that of Gaussian quadrature from [90, p. 139]. Specializing to our case, for a = −1, b = +1 the ν-degree polynomial which is (s +1,2s +1) orthogonal with respect to (1 + x)2s +1 (1 − x) s +1 dx is the Jacobi polynomial Pν [91, p. 259], while for a general finite interval [a, b ] we need a simple linear transformation. This governs the choice of optimal internal nodes at ω = 0 and results in Filon–Jacobi quadrature in the presence of a simple stationary point at x = a. Generalizing FCC to the current setting is even easier. The unique selling point of FCC is the choice of c1 , . . . , cν as Clenshaw–Curtis points, since they lend themselves to cheaper algebra and to much simpler and neater expressions. This does not change in the presence of a stationary point at x = a [37] except that we need to interpolate to f and a suitable number of derivatives at the endpoints.

4.3 Error analysis of FJ and FCC How do FJ and FCC compare in accuracy? We need to consider two regimes: first, ω  1, and, second, small ω ≥ 0. The first regime is more important because after all this is what highly oscillatory quadrature is all about, yet there are also important advantages to a quadrature rule which is uniformly good for all ω ≥ 0. Error estimates for any Filon-type methods for large ω are easy; all we need is to examine the leading term in the asymptotic expansion of ωF [ f ]−Iω [ f ] = Iω [ p− f ], but they are often unsatisfactory. First, such estimates involve the interpolating polynomial p, adding an extra layer of complexity. Second, error estimates are suboptimal— what we really want are tight error bounds, separating explicitly between the influence

68

Chapter 4. Extended Filon method

on the error of the quadrature method and of the function f . These, as well as error bounds for ω = 0, are provided using the methodology of the Peano Kernel Theorem (PKT), as discussed in detail in [47].

4.3.1 The Peano Kernel Theorem Let  be a bounded linear functional acting on functions of bounded variation in [a, b ] and suppose that  [ p] = 0 for every polynomial p of degree ≤ k. Given θ ∈  and n ∈ {0, . . . , k}, we let n , sθ (x) = (x − θ)+

x ∈ [a, b ],

where (z)+ = max{z, 0}. The Peano kernel K is K(θ) =

1  [sθ ], n!

θ ∈ [a, b ].

Theorem 4.1 (The Peano Kernel Theorem). Assuming that K is of bounded variation and f ∈ Cn+1 [a, b ], it is true that b K(θ) f (n+1) (θ) dθ. (4.9)  [f ] = a

See, for example, [90, p. 271] for an elementary proof. The PKT is an amazingly powerful (and often underappreciated) tool in the analysis of “linear” numerical algorithms, not least of quadrature formulas [35, 40]. The reason is that, using Hölder inequality, it follows at once from (4.9) that | [ f ]| ≤ K p  f (n+1) q ,

where

1 1 + = 1. p q

In particular, letting p = 1, q = ∞, we obtain | [ f ]| ≤ K1  f (n+1) ∞ ,

(4.10)

an inequality which will accompany us throughout this section. We use the PKT for both ω  1 and ω = 0. Intriguingly, each case requires a different, subtle argument and, for ω = 0, a mild extension of the theorem. The outcomes, however, are very tight upper bounds on the error and a most welcome hint on how to choose optimal internal nodes. Before we commence our analysis, it behooves us, consistently with the first principle of numerical mathematics, to follow the numbers. In Fig. 4.2 we have computed 1 the integral −1 (1 + x + x 2 )−1 eiωx dx using FJ and FCC with s = 1 and ν = 0 (when both methods are identical and coincide with plain Filon), ν = 4, and ν = 8. Here and in what follows we reserve for FJ and FCC blue and red liveries. While the rate of error attenuation for large ω is the same, FCC is marginally more precise: for example, for ω = 200 the FJ and FCC errors are 1.37−08 and 5.16−09 , respectively. The picture is diametrically different for small ω ≥ 0, as illustrated by Fig. 4.3. Now FJ is considerably more precise than FCC—not very surprising, indeed, given that the internal nodes of FJ have been handcrafted to maximize accuracy for ω = 0. Just examining this single example, it is evident that neither FJ nor FCC is always superior: we need to weigh carefully different considerations to choose between the methods. This observation will be confirmed in the analysis of the two subsequent subsections.

4.3. Error analysis of FJ and FCC

69

Figure 4.2. Errors of FJ (on the left) and FCC (on the right) for f (x) = (1 + x + x 2 )−1 , g (x) = x, [a, b ] = [−1, 1], s = 1, and (in an order of an increasingly dark hue) ν = 0, 4, 8.

Figure 4.3. The same information as in Fig. 4.2 for small ω ≥ 0.

4.3.2 The case ω  1 It is a direct consequence of the asymptotic expansion (2.1.3) that for an arbitrary EFM, e = p − f , and ω  1 the error is

 e s +1 (b ) iω g (b ) e s +1 (a) iω g (a) 1 − e e ωF [ f ] = ωF [ f ] − Iω [ f ] = Iω [e] ∼ − g (a) (−iω) s +2 g (b )  −s −3  + ω , (4.11) where the functions ek have been defined in §2.1, e0 (x) = e(x) = p(x) − f (x),

ek (x) =

d ek−1 , dx g (x)

k ≥ 1.

70

Chapter 4. Extended Filon method

(We assume that g = 0 and that there are no stationary points.) We recall from §2.1 that k  ek = σk, j (x)e ( j ) (x), j ≥ 0, j =0

and it is trivial to prove that σk,k (x) = 1/ g k (x). Since, by design, p ( j ) and f ( j ) match at the endpoints for j = 0, . . . , s, we deduce that e s +1 (y) =

p (s +1) (y) − f (s +1) (y)

g s +1 (y)

,

y ∈ {a, b },

and (4.11) can be simplified to  (s +1) p (b ) − f (s +1) (b ) iω g (b ) p (s +1) (a) − f (s +1) (a) iω g (a) 1 F ω [ f ] ∼ − − e e (−iω) s +2 g s +2 (a) g s +2 (b )   +  ω −s −3 , ω  1. (4.12)

For large ω the leading term ˜ωF [ f ] has all the information we need, and it follows at once from (4.12) that e− e+ , (4.13) ≤ |˜ωF [ f ]| ≤ ω s +2 ω s +2 where    | p (s +1) (b ) − f (s +1) (b )| | p (s +1) (a) − f (s +1) (a)|    − e− =  , s +2 s +2   |g (a)| |g (b )|

e+ =

| p (s +1) (b ) − f (s +1) (b )| | p (s +1) (a) − f (s +1) (a)| . + |g (a)| s +2 |g (b )| s +2

Given that the polynomial p is a by-product of the construction of a Filon method, we can form easily the numbers e± and obtain remarkably faithful lower and upper bounds on the error, provided that ω is sufficiently large. Fig. 4.4 demonstrates the quality of the error bounds (4.13) for s = 2 and the same integrals as in Fig. 4.2. Inasmuch as (4.13) is an excellent error bound, it is not very good once it comes to comparing different methods with the same value of s. Is FCC consistently better? Or does the pecking order between FJ and FCC depend on the function f ? After all, can we trust a single example? To address these questions we need a handle on the interpolation error p − f at the endpoints, and this is precisely where the Peano Kernel Theorem comes into our discussion. The leading term in (4.12) equals e+ whenever ω[g (b ) − g (a)] is an odd multiple of π if [ p (s +1) (b ) − f (s +1) (b )] · [ p (s +1) (a) − f (s +1) (a)] > 0 or an even multiple if the expression is negative. We seek an upper bound on e+ , except that we wish to eliminate any explicit dependence on the interpolation polynomial p. Fortunately, optimal error bounds for Hermite polynomial interpolation have been presented by Shadrin in [94]. We assume for ease of notation that a = −1, b = 1. Given an EFM with given s ≥ 0 and with ν ≥ 0 internal nodes c1 , . . . , cν , we define the nodal polynomial r (x) = (x 2 − 1) s +1

ν  k=1

(x − ck ).

4.3. Error analysis of FJ and FCC

71

Figure 4.4. The true error and the error bounds (4.13) for the example from Fig. 4.2 with s = 2: FJ on the left, FCC on the right, and all errors in logarithmic scale.

Using the Leibnitz rule,  s +1  ν s − j +1   s + 1 dj s +1 s +1 d r (s +1) (x) = (x + 1) (x − ck ); (x − 1) j dx s − j +1 dx j j =0 k=1

therefore r (s +1) (−1) = 2 s +ν+1 (s + 1)!2 s +1

ν 

(1 + ck ),

r (s +1) (+1) = (s + 1)!2 s +1

k=1

ν 

(1 − ck ).

k=1

Using PKT, [94] proves that (letting  [ f ] = p (s +1) (±1) − f (s +1) (±1) and n = ν + s)   |r (s ) (−1)|  (s +1)  (−1) − f (s +1) (−1) ≤  f (2s +ν+2) ∞ p (s + ν + 1)! ν (s + 1)!2 s +1  (1 + ck ) f (2s +ν+2) ∞ , = (2s + ν + 2)! k=1

   (s +1)  (+1) − f (s +1) (+1) ≤ p

r (s ) (+1)  f (2s +ν+2) ∞ (2s + ν + 2)! ν (s + 1)!2 s +1  (1 − ck ) f (2s +ν+2) ∞ . = (2s + ν + 2)! k=1

We deduce our basic error bound for extended Filon methods as follows. Theorem 4.2 ([47]). The leading term of EFM is bounded by 3ν  3ν e f (2s +ν+2) ∞ (s + 1)!2 s +1 k=1 (1 − ck ) k=1 (1 + ck ) . (4.14) + , where e = |g (+1)| s +2 (2s + ν + 2)! |g (−1)| s +2 ω s +2

Note that the bound is the best possible, because one can show that it is satisfied as an equality when f = r , the nodal polynomial [94]. Moreover, it is completely

72

Chapter 4. Extended Filon method

independent of the form of the interpolating polynomial: it is just an outcome of the constant e (that depends solely on the choice of the method) and of  f (2s +ν+2) ∞ . In other words, a good EFM is one where e is small, and, for fixed s and ν, this depends solely on the choice of internal nodes. The analysis so far was fairly general, yet our competition is between 3νFJ and FCC. (1− ck ) = In both cases internal nodes are symmetric about the origin; therefore k=1 3ν (1 + c ), which somewhat simplifies our calculus. Note that the polynomial k k=1 3 (s +1,s +1) t (x) = νk=1 (x − ck ) is a scaled Jacobi polynomial for both FJ and FCC: Pν for 1 1

( , )

FJ and Pν 2 2 for FCC. We note in passing that, for s = 1 and ν = 8, |t (±1)| equals ≈ 0.09145 and ≈ 0.03516 for FJ and FCC, respectively: marginal advantage FCC. ˜ ν(α,β) the Jacobi polynomial, normalized to be monic. Since We denote by P (α,β)



(x) =

(1 + α + β + ν)ν ν x + lower-order terms, ν!2ν

where (z)n is the Pochhammer symbol (a.k.a. generalized binomial symbol), (z)0 = 1, and (z)n = z(z + 1) · · · (z + n − 1) = (z + n − 1)(z)n−1 for n ≥ 1 [91, p. 254], we deduce that ν!2ν (α,α) (α,α) Pν (x). P˜ν (x) = (1 + 2α + ν)ν (α,α)

Since |Pν

(±1)| = (1 + α)ν /ν!, we deduce that 2ν (1 + α)ν ˜ (α,α) , (1) = ρνα := P ν (1 + 2α + ν)ν

α > −1.

1 1

( ,2)

˜ν 2 For FCC we have t = P

; therefore |t (±1)| = (ν + 1)2−ν ,

while for FJ α = β = s + 1 and |t (±1)| = ρνs +1 = 2ν

(s + ν + 1)!(ν + 2s + 2)! . (s + 1)!(2s + 2ν + 2)!

For ν = 0 there is nothing to prove—both methods reduce to plain Filon—while for ν = 1 they coincide because the single internal node is at the origin. For ν ≥ 2, though, and α > −1/2 the function ρνα is strictly monotonically increasing [47]; consequently we have the following theorem. Theorem 4.3 ([47]). The upper bound (4.14) for ν ≥ 2 is always smaller for FCC than for FJ. It is fair, however, to comment that the margin of FCC’s victory is very small, because the function ρνα increases very slowly as α grows with fixed s, e.g., ρ2α = 1 −

1 , 3 + 2α

ρ3α = 1 −

3 , 5 + 2α

ρ4α = 1 −

27 + 12α . (5 + 2α)(7 + 2α)

4.3. Error analysis of FJ and FCC

73

4.3.3 Small ω ≥ 0 The analysis of FJ and FCC for ω = 0 is further complicated by the fact that they have different classical orders. FJ, by design, is of the highest possible order consistent with interpolating derivatives at the endpoints, namely 2s +2ν +2. For FCC, just counting degrees of freedom yields classical order 2s +ν +2, but this is an underestimate. Because internal nodes of FCC are symmetric with respect to the origin and the same number of derivatives is used at the endpoints, it integrates (for ω = 0) odd polynomials to zero. Thus, once ν is odd, the order is automatically increased to 2s + ν + 3. Yet, even then it is always smaller (for ν ≥ 2—for ν = 1 the methods coincide) than that of FJ. Were we to compare optimal PKT constants for the highest possible order, the outcome will not be commensurate—all we can really say is that FJ is of higher order for the same s and ν and hence is superior for small ω ≥ 0. Alternatively, we can compare FJ and FCC but letting the value of n in PKT correspond to the order of FCC. While arguably unfair to FJ, this procedure provides at least a comparison of alike with alike, and this is the approach we pursue in this subsection. Note that, according to [105], Clenshaw–Curtis quadrature is, for very large ν, as good as Gaussian quadrature: marginally less precise for the same number of nodes but much cheaper to implement. However, this has little relevance in our setting because ν is unlikely to be very large. We assume for simplicity that a = −1, b = +1. Recalling that the internal nodes for both FJ and FCC are symmetric with respect to the origin, we base our analysis on [40], which exploits this feature for superior PKT upper bounds. Specifically, we rename internal nodes c(ν+1)/2−1 < · · · < −c˜1 ≤ c˜1 < · · · < c˜(ν+1)/2−1 < c˜(ν+1)/2 −˜ c(ν+1)/2 < −˜ and set c˜(n u+1)/2+1 = 1. Note that c˜1 = 0 if ν is odd; otherwise c˜1 > 0. The underlying Birkhoff–Hermite quadrature, written in a symmetric form, reads

0 [ f ] =

(ν+1)/2+1 k −1  μ k=1

j =0



bk, j f

(j)

j

(˜ ck ) + (−1) f

(j)



(−˜ c k ) ≈ I0 [ f ] =

1

−1

f (x) dx, (4.15)

where μ1 = μ2 = · · · = μ(ν+1)/2 = 1, μ(ν+1)+1 = s + 1. Each function f can be decomposed into f = fE + fO , where fE and fO are even and odd, respectively. Since I0 [ fO ] = 0 [ fO ] = 0, it is sufficient to consider the integration of fE in the interval [0, 1]: trivially, (4.15) reduces to ˜ [f ] = 2

0

(ν+1)/2+1 k −1  μ k=1

j =0

(j)

bk, j fE (˜ ck ) ≈ 2

 0

1

fE (x) dx = I0 [ f ].

Although this construction works for arbitrary multiplicities s1 , . . . , s(ν+1)/2+1 , we will specialize it to our setting. Favati, Lotti, and Romani proved in [40] that, provided the method is of classical order p ≥ 1, the Peano kernel is explicitly Kn (θ) =

n+1 2(1 − θ)+

(n + 1)!



(ν+1)/2 2  n b (˜ c − θ)+ n! k=1 k,0 k

(4.16)

74

Chapter 4. Extended Filon method

for every n ∈ {1, 2, . . . , p − 1}. Integrating |Kn | from (4.16) for θ ∈ [0, 1] results in the PKT error bound 1 (n) | 0 [ f ] − I0 [ f ]| ≤ |Kn (θ)| dθ ·  fE ∞ . 0

Table 4.3.3, which originates in [47], displays the PKT constants for FJ and FCC for different values of s ≥ 0 and ν ≥ 2. Evidently, FJ is a winner, while the winning margin grows rapidly both with s and with ν. Recall that we have badly handicapped FJ in our comparison, because the classical order of FJ is so much greater—and yet it emerges as a clear winner. The situation is completely reversed in comparison with ω  1. Table 4.3.3. PKT constants for FJ, FCC, and n equal to the classical order of FCC. s 0

1

ν 2 3 4 5 6 2 3 4 5 6

n 3 5 5 7 7 5 7 7 9 9

FJ 1.13−03 6.22−06 1.25−06 3.27−09 7.95−10 1.08−05 3.28−08 4.55−09 8.01−12 1.53−12

FCC 2.92−03 2.67−05 3.37−06 1.97−08 4.92−09 1.59−04 6.30−07 8.20−08 2.42−10 2.02−11

s 2

3

ν 2 3 4 5 6 2 3 4 5 6

n 7 9 9 11 11 9 11 11 13 13

FJ 7.53−08 1.37−10 1.35−11 1.67−14 2.52−15 3.96−10 4.58−13 3.36−14 2.98−17 3.61−18

FCC 3.15−06 6.36−09 2.39−09 4.17−12 3.60−13 3.56−08 4.15−11 2.60−11 2.85−14 7.13−15

4.3.4 Stationary points The rule of thumb emerging from the last two subsections—FCC marginally ahead for ω  1 and FJ the winner for small ω ≥ 0—remains valid also in the presence of stationary points. An example is provided in Fig. 4.5. Stationary points lend themselves to analysis using PKT, and this confirms the evidence of Fig. 4.5 [47]. Given that the methodology is identical to the last two subsections, while the details are somewhat more technically challenging, we omit them here and direct the readers to the primary source.

4.4 Adaptive Filon Wishing to calculate a highly oscillatory integral in the interval [−1, 1], we can play the following game: we are allowing ourselves ν + 2 function and derivative evaluations anywhere in [−1, 1], but with locations that may depend on ω! This allows us to adapt the method as ω varies, hopefully catering for both small and large values of the parameter. The natural question in this context is the following: what is a good placement of such points ck (ω), k = 0, 1, . . . , ν +1? We know what the optimal choices are for ω  1 and for ω = 0, except that they are completely different. Suppose for simplicity that ν is even, ν = 2¯ ν . For ω = 0 a top choice is represented by Gaussian points, i.e., the

4.4. Adaptive Filon

75

Figure 4.5. The errors (in logarithmic scale) of FJ (in blue) and FCC (in red) for small 1 2 and large values of ω, respectively, for the integral −1 (1 + x + x 2 )−1 eiωx dx.

zeros of the Legendre polynomial Pν+2 . For ω  1 the ideal option is plain Filon with   s = ν, because it ensures asymptotic error decay of  ω −¯ν −2 . Alternatively, like in [64] and §3.2.1, we might forgo computation of derivatives in favor of suitably chosen finite differences. [0] [0] [0] Let c0 , c1 , . . . , cν+1 be Legendre points, while 4 [1] ck (ω) =

k

k = 0, 1, . . . , ν¯,

−1 + 1+ω ,

1−

2¯ ν −k+1 , 1+ω

k = ν¯ + 1, . . . , 2¯ ν + 1,

are the interpolation points of the derivative-free plain Filon from [64], where we have [1] replaced ω by 1+ω in the denominator so that the ck (0)’s are well defined.11 Clearly, [0]

[1]

the ck are an optimal choice for small ω, while ck (ω) produce an excellent approximation for ω  1. The idea of adaptive Filon (AF) ωAF,ν [ f ] is to choose interpolation points [0] [1] (4.17) ck (ω) = ck κ(ω) + ck (ω)[1 − κ(ω)], where κ is a monotonically decreasing homotopy function, κ(0) = 1 and limω→∞ κ(ω) = 0 [45]. (A similar approach, with a different function κ, features in [112].) How to choose the homotopy function in (4.17)? We wish κ to be fairly flat for small ω ≥ 0, where Gaussian points are likely to be a good choice, but also for large ω, with relatively rapid transition between the two. The suggestion in [45] is to use the homotopy function   π eaω − 1 , κa,b (ω) = cos 2 b + eaω

where a, b > 0. Note that κ is indeed monotone and κ(0) = 1, limω→∞ κ(ω) = 0, 1 as required. After many experiments, the values recommended in [45] are a = 2 and b = 256. 11

c
0; (ii) | f (t )| < Ke b t , where K and b are independent of t , when t is positive and t ≥ a, then there exists a complete asymptotic expansion given by the formula ∞ ∞  f (t )e−ωt dt ∼ a m Γ (m/r )ω −m/r , ω  1. 0

(5.4)

m=1

In other words, the local expansion of f (5.3) about the origin leads to the asymptotic expansion (5.4) of the Laplace-type integral for large ω. In fact, the asymptotic expansion is derived simply by substituting the local expansion of f into the Laplacetype integral and integrating term by term. Mathematically this seems to make no sense: the expansion of f is convergent only for t ≤ a, where the singularity of the integrand is located, while it is being integrated for t ∈ [0, ∞) stretching out to infinity! This asymptotic crime is precisely the reason why the result is only asymptotic, and why an asymptotic expansion is a divergent series rather than a convergent one. The aim of the method of steepest descent is to turn a Fourier-type integral on the real axis into a sum of Laplace-type integrals in the complex plane, which can then be expanded using Watson’s Lemma. This is achieved by deforming the path of integration into the complex plane onto steepest descent paths that include the points that give the main contribution to the integral and along which the oscillatory behavior of the exponential function becomes decaying. Readers seasoned in asymptotic analysis will recognize this as the main idea underlying the classical method of steepest descent (often also called saddle point method), which is presented in many references in the literature. Consider the function eiωx , the oscillator in our integral (5.1), as a function of the complex variable z = a + ib : eiωz = eiω(a+ib ) = eiωa e−ωb . It is rapidly oscillatory as a function of a, the real part of z. However, the function is nonoscillatory, and indeed rapidly decaying, as a function of the imaginary part b . Let us assume that f is analytic in the upper half plane. Instead of integrating from

5.1. The asymptotic method of steepest descent

81 Im z

 −1 + iP

1 + iP

iP

i Re z −1

1

Figure 5.1. Steepest descent paths for the oscillatory integral (5.1): the path goes from −1 upwards in the complex plane as far as analyticity of the integrand allows, then parallel to the real axis, and finally down to +1.

x = −1 to x = +1 along the real line in a highly oscillatory way, we may integrate from x = −1 straight up into the complex plane to −1 + iP , for some P > 0, along a straight line parallel to the real axis to 1 + iP , and then straight down to x = +1, as shown in Fig. 5.1: 1 P 1 iωx iω(−1+i p) f (x)e dx = f (−1 + i p)e idp + f (x + iP )eiω(x+iP ) dx Iω [ f ] = −1

P

− −iω



−1

0



f (1 + i p)eiω(1+i p) i d p

0 P

−ω p

= ie

f (−1 + i p)e 0

 iω

− ie

P

−ωP

dp +e



1

−1

f (x + iP )eiωx dx

f (1 + i p)e−ω p d p.

0

We have a sum of three integrals. The first and the last look a lot like Laplace integrals, while the middle one is as oscillatory as the one we started out with, but multiplied by e−ωP . If we are lucky, we can take the limit of P → ∞, discard the middle integral, and obtain ∞ ∞ f (−1 + i p)e−ω p d p − ieiω f (1 + i p)e−ω p d p. (5.5) Iω [ f ] = ie−iω 0

0

In this case the deformation into Laplace-type integrals is exact. We are not always so lucky, for example because f might have singularities in the upper half of the complex plane or grow too rapidly at infinity (so that we can no longer discard the middle integral). The path deformation has to be justified by Cauchy’s integral theorem: the integrand has to be analytic in the closed region bounded by [−1, 1] and our deformed paths. Yet, even disregarding the limiting case, we still have exactness up to an exponentially small error.

82

Chapter 5. Numerical steepest descent

Proposition 5.2. If f is analytic in an open neighborhood of [−1, 1] and g (x) = x, then there exists a constant P > 0 such that P P   −iω −ω p iω Iω [ f ] = ie f (−1+i p)e d p−ie f (1+i p)e−ω p d p+ e−ωP , ω  1. 0

0

This result follows immediately from the decomposition above extending to ±1 + iP in the complex plane, however small P may be. Subsequently, we can invoke Watson’s Lemma, or rather an extension thereof that caters to finite intervals [0, P ], and this yields the familiar asymptotic expansion (2.1) that we derived before using integration by parts. More general oscillators g (x) can be treated similarly, as long as there are no stationary points where g (x) = 0. The reason for this will be clear in what follows. The integral is now 1 f (x)eiω g (x) dx. Iω [ f ] = −1

Starting at a point x = a on the real line, we recover exponential decay along a path parameterized by z = ha ( p), such that ha (0) = a and additionally g (ha ( p)) = g (a) + i p,

p ≥ 0.

(5.6)

By construction, we have along this path eiω g (z) = eiω g (ha ( p)) = eiω(g (a)+i p) = eiω g (a) e−ω p . Indeed, this is a nonoscillatory function of p. The phase of the oscillations, the real part of g (z), is constant and equal to g (a). Note that the property of constant phase alone is purely geometric and is sufficient to define a contour Γa in the complex plane— the steepest descent path. It is everywhere orthogonal to the curves of constant imaginary part of g (z) in . The function ha ( p) is but one specific parameterization of that path, and it is useful to us because it leads to explicit exponential decay suitable for Watson’s Lemma. With that in mind, the path integral along Γa is  Γa

iω g (z)

f (z)e

iω g (a)



P

dz = e

0

f (ha ( p))ha ( p)e−ω p d p.

As an example, consider g (x) = (x − 2)2 . The steepest descent path parameterization from the point a = 1 should satisfy h1 (0) = 1, as well as g (h1 ( p)) = (h1 ( p) − 2)2 = (1 − 2)2 + i p. In this case an explicit expression can be easily found, h1 ( p) = 2 +



1 + i p.

The steepest descent path  is curved. It starts at the point a = 1 and ultimately tends to infinity at an angle of i = eπi/4 . We can find such paths at both endpoints ±1, discard the path that connects the two, and invoke Watson’s Lemma as before. The following proposition generalizes Proposition 5.2.

5.1. The asymptotic method of steepest descent

83

Proposition 5.3. Assume that f and g are analytic in an open neighborhood of [−1, 1], with g real-valued on that interval, and such that g (x) = 0, x ∈ [−1, 1]. Then there exist a constant P > 0 and paths h±1 ( p), defined by (5.6) for p ∈ [0, P ], such that

    f (z)eiω g (z) dz +  e−ωP , Iω [ f ] = − ω  1, Γ−1

where for a = ±1  Γa

Γ1

f (z)eiω g (z) dz = eiω g (a)



P 0

f (ha ( p))ha ( p)e−ω p d p.

  The  e−ωP term captures how small the integrand is along the line connecting h−1 (P ) with h1 (P ). We omit the proof, since showing that this connecting integral is indeed exponentially small depends crucially on the absence of stationary points. We discuss stationary points separately below. Another comment about P is in order here. From a theoretical point of view, it does not matter how small or large it is. As soon as you enter the complex plane, even if you just dip your toes in, everything is exponentially small in ω. Of course in practice constants do matter, but we postpone a discussion of their impact until later. Suffice it to say at this stage that line integrals can be expanded asymptotically, via Watson’s Lemma as always, and in this way one would recover precisely (2.3). This process leads to a lot of series inversions and is fairly labor intensive, but we are not concerned with it any further since numerical methods are superior to asymptotics in terms of accuracy. An elegant example of such a computation using the Faà di Bruno formula can be found in [109].

5.1.2 Hills, valleys, and stationary points What about stationary points, then? They do feature in asymptotic expansions, and hence we must visit them along the way in our path deformation. The point where the above process fails is when the steepest descent paths at the endpoints end up going in different directions—one to the upper and one to the lower half of the complex plane. The paths are fixed and their direction is not ours to choose. If this happens, the connecting path has to cross the real axis somewhere, and at that point the integrand is  (1) and not exponentially small in ω: it can no longer be discarded. It comes as no surprise, then, that a suitable point to cross the real axis is precisely the stationary point. This is illustrated in Fig. 5.2 for the case g (x) = x 2 . Stationary points required special treatment in Filon and Levin methods, and steepest descent is no exception. The situation shown in Fig. 5.2 is illustrative for any oscillator with a stationary point. Paths always go up or down into the complex plane, depending on which side of the stationary point they are on. In order to show this, let us examine what happens with the defining equation (5.6) of the path for small values of p, i.e., close to the point a on the real line. Substituting the approximation g (x) ≈ g (a) + g (a)(x − a), g (a) + i p = g (ha ( p)) ≈ g (a) + g (a)(ha ( p) − a), we find to first order in p that ha ( p) ≈ a +

ip . g (a)

(5.7)

84

Chapter 5. Numerical steepest descent Γ0,+ Im z

−1

Γ1

Γ−1

Re z

1

 Γ0,−

Figure 5.2. Steepest descent paths for an oscillatory integral with g (x) = x 2 : the path extends from −1 into the bottom left quadrant of the complex plane and then passes along a straight line through the stationary point at x = 0 to the upper-right quadrant, before finally terminating at the real axis at +1.

Thus, since p is always positive, the direction we take depends on the sign of g (a). We go up in the complex plane if g (a) > 0, and down if g (a) < 0. The sign switches precisely at a stationary point, and therefore no stationary point should escape our attention. At the stationary point itself, some more care is called for. It is tempting to reformulate (5.6) as ha ( p) = g −1 ( g (a) + i p), but we have to be careful of what that means. The crux of the problem is that, in the presence of a stationary point, the inverse of g is multivalued. Consider the example g (x) = x 2 , with a stationary point at x = 0. Its inverse is the square root function, and it has two branches. As a result, we can have either  ha ( p) = a 2 + i p

or ha ( p) = −



a 2 + i p.

Which branch of the two is the correct one? That is determined by the condition that ha (0) = a: for a > 0 the correct branch has the plus sign, and for a < 0 it has the minus sign. The corresponding paths go up and down into the  complex plane, respectively. At the stationary point a = 0 itself, both branches ± i p are valid. This means that two suitable paths emanate from the stationary point, and it turns out that we have to take both of them. We arrive at a deformation of the following form:



1 −1

iωx 2

f (x)e

 dx =

 Γ−1





 Γ0,−

+

Γ0,+



Γ1

2

f (z)eiωz dz.

5.1. The asymptotic method of steepest descent

85

3

Figure 5.3. Illustration of the size of the function eiωz in the complex plane (in log10 scale). The modulus of this function divides the plane into sectors where the function is exponentially large, called hills, or sectors where it is exponentially small, so-called valleys. Three paths of steepest descent emanate from the stationary point or saddle point at the origin z = 0 into each of the three valleys. These paths connect one valley with another one. Steepest descent paths from other points on the real line go into one of the valleys in the upper half plane, depending on whether they are to the left or to the right of the origin.

The paths are illustrated in Fig. 5.2. Note that this is the only way to write the integral on [−1, 1] as a sum of nonoscillatory integrals. Indeed, the stationary point is the only point where we can choose to go either up or down in the complex plane along a nonoscillatory steepest descent path, using the different branches of the inverse of g . The endpoint integrals have the same expression as before, as in Proposition 5.3, and they can be expanded asymptotically using Watson’s Lemma. At the stationary point our new notation Γ0,± reflects the branch of the square root. The path integrals are explicitly parameterized by ∞   i iωz 2 f (z)e dz = ± f (± i p)  e−ω p d p. 2 ip Γ0,± 0

One complication arises here: these integrals feature square roots of p. Fortunately, this can be accommodated by choosing r = 2 in Watson’s Lemma. The asymptotic expansion of the integral contains fractional powers of ω as a result, but that is exactly what we have seen before when discussing stationary points. The situation is comparable for a more general oscillator with a simple stationary point. The inverse is always multivalued and at least locally well defined, and the number of branches is the same as in the case of g (x) = x 2 . Matters become more complicated when higher-order derivatives of g vanish as well, but even such degenerate cases can be resolved (with correspondingly larger values of r in Watson’s Lemma). The literature on asymptotic analysis of integrals in the complex plane often uses the terms hills and valleys, while stationary points are called saddle points. The reason for 3 this terminology is clear from Fig. 5.3, which shows the size of the function eiωz in the complex plane (on a logarithmic scale). The function decays exponentially quickly in

86

Chapter 5. Numerical steepest descent

some parts of the complex plane, called valleys. In other parts, so-called hills, it grows exponentially large. In the figure, the stationary point looks like a saddle point that separates the hills from the valleys. Higher-order stationary points lie at the intersection of an even larger number of hills and valleys. Each steepest descent path from an endpoint goes into one of the valleys of the integral. These integrals can be connected by passing from one valley to another through the saddle point.12 At the saddle point, each valley corresponds to a branch of the (local) inverse of g . Thus, the condition that the whole steepest descent path deformation leads to a globally connected path determines all the right branches. For a general oscillator g (x), there can also be any number of stationary points in the interval [−1, 1]. One can ensure global connectedness by choosing appropriate branches of the inverse of g at each stationary point when solving (5.6). At the end of the day, we can treat any number of stationary points of any order, and we have the following proposition. Proposition 5.4. Consider an oscillatory integral Iω [ f ], with f and g analytic in a neighborhood of [−1, 1], and with g real-valued on [−1, 1]. Let {x m }M m=1 be the ordered set of the endpoints ±1 and all stationary points of g inside [−1, 1]. Then there exist a sequence of finite contours Γ xm of constant phase around x m , Γ xm ∩  = {x m }

and

Re g (z) = g (x m ),

and a constant P > 0 such that 1 M   Iω [ f ] = f (x)eiω g (x) dx = −1

m=1 Γ x m

z ∈ Γ xm ,

  f (z)eiω g (z) dz +  e−ωP .

(5.8)

Proof. We observe that g is monotone on each interval Li := [xi , xi +1 ], because by construction there are no stationary points in each Li . Hence, g maps Li to Ri := [yi , yi +1 ] with y m = g (x m ) and it is uniquely invertible on Ri . Denote this inverse by gi−1 , and note that gi−1 is analytic in a neighborhood of Ri , with the exception of possible branch points at yi and yi +1 and associated branch cuts that we may choose to extend from the branch points along the real line and away from Ri . For each x ∈ Li , we can uniquely define a path up to P > 0: h x ( p) = gi−1 ( g (x) + i p),

p ∈ [0, P ].

For fixed and sufficiently small p > 0, and as a function of x, these paths lie entirely in the complex plane, since gi−1 is also monotone on Ri and therefore g and gi−1 form a bijection between Ri and Li . We can deform the oscillatory integral on Li along the paths determined by h xi and h xi+1 , for p ∈ [0, P ], with a path integral connecting the endpoints h xi (P ) and h xi+1 (P ) given explicitly by  xi+1  xi+1 iω g (h x (P )) −ωP f (h x (P ))e dx = e f (h x (P ))eiω g (x) dx. xi

xi

12 The reader may wonder why the critical points are always saddle points, and not local maxima or minima. The reason is that if g (z) is analytic, and not constant, so is eiω g (z) , and then the maximum modulus principle implies that no local extrema can happen inside the domain of analyticity. It follows that the only critical points of g (z) must be saddle points.

5.2. The numerical method of steepest descent

87

The integrand of the latter integral is bounded, M = max z∈C | f (z)| < ∞, where C is a small neighborhood of [−1, 1] in the complex plane, since |eiω g (x) | = 1 for x ∈ .  −ωP Hence, the connecting integral is  e . We can repeat this reasoning for each subinterval of [−1, 1] determined by the points {x m } and choose P to be the minimal value on each subinterval. The contours Γ x±1 correspond to h±1 ( p), while Γ xi consists of the concatenation of h xi ( p) defined for Li −1 and for Li .

Each of the integrals in this decomposition can be parameterized to be a Laplacetype integral, or in the case of stationary points a sum of two Laplace-type integrals. One could compute asymptotic expansions for arbitrarily complicated oscillators this way, but that is not our goal. Instead, we set out to evaluate the path integrals numerically.

5.2 The numerical method of steepest descent The exponential decay in Laplace-type integrals leads to asymptotic expansions, but this property is also very interesting numerically. In addition, these integrals are not at all oscillatory. The situation really could not have been better! What can be easier than evaluating an integral whose integrand is nonoscillatory and tends rapidly to zero? There are many ways to do so, and no doubt several of those lead to excellent computational methods for oscillatory integrals, but the numerical method of steepest descent aims to double down on our profits and exploit these two properties to a maximal extent. It does so using specially crafted quadrature rules, though in the majority of cases the classical Gauss–Laguerre and Gauss–Hermite rules will do (and they need not be specially crafted).13 In particular, the method offers the best bang for the buck when you are in the business of high asymptotic orders. A typical result is that the asymptotic order of numerical steepest descent is roughly twice that of a Filon- or Levin-type method, using the same number of function evaluations. As we shall see in a moment, this property is related to the fact that Gaussian quadrature rules are exact for twice as many polynomials than regular interpolatory quadrature. Of course all of this is still based on analyticity of the original integrand.

5.2.1 From polynomial to asymptotic order The process of path deformation remains the same. For any given integral, we assume that a sequence of path transformations like (5.8) is given, can be found, or can be computed, such that a Fourier integral becomes a sum of Laplace-type integrals. The larger ω, the simpler this becomes, and we elaborate upon some numerical aspects further on in §5.3. For now, let us assume we are faced only with evaluating Laplace-type integrals, and we are kindly requested to do so quickly. A feature that all Laplace-type integrals share is exponential decay of the integrand, so we gain from including that knowledge 13 By Gauss–Hermite quadrature here we mean the Gaussian quadrature associated with the classical

Her2 mite orthogonal polynomials with weight function e−x on the real line; see the Appendix for details. This is not related to a Hermite or Birkhoff–Hermite interpolation problem that involves derivatives: in contrast to the previous chapters, the quadrature rules in this chapter do not employ derivatives.

88

Chapter 5. Numerical steepest descent

into the quadrature rule. The ω-dependence is just a scaling, and it is convenient to factor it out:  Lω [ f ] =



−ωt

f (t ) e

0

1 dt = ω





f 0

t ! −t e dt . ω

We can now regard the function e−t as a fixed (independent of ω) and real weight function and look for quadrature rules for the following type of integrals: 



u(t ) e−t dt ≈ Q[u] :=

n  i =1

0

wi u(ti ).

Evaluating the scaled Laplace-type integral by the quadrature rule leads to the followt 1 ing approximation, with u(t ) = ω f ( ω ):

Lω [ f ] ≈ Qω [ f ] :=

n 1  w f ω i =1 i

ti ! . ω

For an integral along the contour Γa , this results in the numerical approximation  Γa

f (z)eiω g (z) dz ≈

n t ! t !! eiω g (a)  wi f ha i ha i . ω ω ω i =1

(5.9)

For the time being, we will assume that the steepest descent integrals stretch out to infinity. This is not a strong restriction in practice, since the formula indicates that the quadrature points ti are divided by ω. That means we end up evaluating the original integrand at points close to a on the real line—recall that ha (0) = a. When it comes to evaluating integrals with exponential decay, Gauss–Laguerre quadrature is the no-brainer choice. In that case the points ti are the roots of the n-degree Laguerre polynomial, which is orthogonal with respect to e−t on [0, ∞); see the Appendix for details. It this a good choice? Certainly! Among all interpolatory quadrature rules, i.e., quadrature rules based on polynomial interpolation, it has the highest order. It quickly turns out that the same holds true for the asymptotic order of the method. We have the following proposition, where for now we choose α = 1. Proposition 5.5 ([31]). Consider α > 0 and a quadrature rule of polynomial order d with n points ti and weights wi , such that 



α

t k e−t dt =

n  i =1

0

wi tik ,

k = 0, . . . , d − 1.

Define the integral Lαω [u]

 := 0



−ωt α

u(t ) e

dt = ω

−1/α



∞ 0

α

u(t ω −1/α ) e−t dt

5.2. The numerical method of steepest descent

89

and its quadrature rule approximation Qωα [u] := ω −1/α

n  i =1

wi u(ti ω −1/α ).

If Lαω [u] exists for ω ≥ ω0 , and if u is analytic at t = 0, then it is true that   ω → ∞. |Lαω [u] − Qωα [u]| =  ω −(d +1)/α , Proof. Since u is analytic at t = 0, it has a convergent Taylor series for t < R with some R > 0. Denote by d −1 (k)  u (0) k ud (t ) = t k! k=0

the Taylor series truncated after d terms and define u˜e (x) := u(x) − ud (x) as the remainder. We can write the quadrature error as Lαω [u] − Qωα [u] = Lαω [ud ] + Lαω [ u˜e ] − Qωα [ud ] − Qωα [ u˜e ] = Lαω [ u˜e ] − Qωα [ u˜e ]. The latter equality follows from the exactness of the quadrature rule for polynomials up to degree d − 1, which implies that Lαω [ud ] = Qωα [ud ].   We can show that both remaining terms are of size  ω −(d +1)/α . For the first term, we can use Watson’s Lemma. With the change of variables p = t α ,  ∞   1 ∞ 1/α−1 α −ωt α p dt = u˜e ( p 1/α )e −ω p d p =  ω −(d +1)/α , u˜e (t )e Lω [ u˜e ] = α 0 0

because

p 1/α−1 u˜e ( p 1/α ) ∼ p

d +1 −1 α

,

p → 0.

Next, for sufficiently large ω such that ω −1/α ti < R, i = 1, . . . , n, we also have Qωα [ u˜e ] = ω −1/α



= ω

n  i =1

wi u˜e (ti ω −1/α ) = ω −1/α

−(d +1)/α



n  i =1

wi

∞  u (k) (0) k −k/α t ω k! i k=d

.

This proves the result.

We have already seen using Watson’s Lemma that each term in the Taylor series of the integrand around the origin yields a term in the asymptotic expansion. The observation we make here is the following: any quadrature rule that integrates the first d terms of the Taylor series accurately also cancels the first d terms of the asymptotic expansion. Hence, polynomial order leads to asymptotic order! In turn, maximal polynomial order leads to maximal asymptotic order, and this is why Gauss–Laguerre quadrature is such a good choice. Let us combine all ingredients and formally define a numerical method for integral (5.1). Recall that the steepest descent paths are h±1 ( p) = ±1 + i p. Applying Gauss– Laguerre twice to the integrals in (5.5), once for each endpoint of the interval, leads to

90

Chapter 5. Numerical steepest descent

Figure 5.4. The errors in logarithmic scale of the NSD method ωNSD [ f ] for g (x) = x x and f (x) = 5+x 2 . Plum, violet, and red correspond to n = 1, n = 2, and n = 3.

the explicit formula

ωNSD [ f ] =

    n n it it ie−iω  ieiω  wi f 1 + i . wi f −1 + i − ω ω i =1 ω ω i =1

(5.10)

Here, wi and ti are the weights and points of the classical Gauss–Laguerre quadrature rule. The logarithmic error log10 | ωNSD [ f ] − Iω [ f ]| is displayed in Fig. 5.4. Note the rapid decay of the error for increasing ω. We can loosely compare the computational effort to the previous numerical methods in the following way. Consider the case we were studying, where α = 1. A Gaussian quadrature rule with n points requires n evaluations of the integrand, but its polynomial order is twice that, namely 2n. As such, the asymptotic order of the absolute error  numerical evaluation of a Laplace-type integral by our lemma above is  of the  ω −2n−1 . For an oscillatory integral without stationary points, we have to evaluate two integrals. Using n evaluations for each, so 2n in total, we achieve   Laplace-type  ω −2n−1 accuracy for the overall integral. In contrast, using n derivatives of f at the left endpoint and n at the right endpoint—the same number 2n of function eval  uations in total—we only achieve an  ω −n−1 error in the asymptotic method (2.4). This goes to show that if an integrand is analytic, we had better exploit that property and make the most of it! On the other hand, in all honesty, this comparison of computational effort is both loose and unfair. A more elaborate comparison would also take into account the effort required to compute the steepest descent paths. In case the paths, unlike in our simple example, cannot be determined analytically, they have to be computed, and the associated cost can dwarf the cost of the application of the quadrature rule. Whether or not this cost plays a role in applications very much depends on the types of integrals that are encountered, and in particular the types of oscillators. We will elaborate on this point in §5.3.1. Before we move on, a word on the history of the methods in this chapter. The first systematic study of numerical steepest descent and Gaussian rules for general oscillatory integrals arguably started with [60], and follow-up papers include [3, 31, 59, 61]. However, the main idea to evaluate steepest descent integrals numerically, including

5.2. The numerical method of steepest descent

91

even the use of Gauss–Laguerre quadrature, predates this research by several decades. Examples include [15] and [26], and the doubling effect of Gaussian quadrature on asymptotic order has been noted several times before [42, 48, 102, 110].

5.2.2 Stationary points Once we ignore the process of path deformation itself, integrals with stationary points are not all that different. At least, that is the optimistic point of view. As usual the devil is in the details, and since the devil is regularly accused in the literature of having invented asymptotic expansions in the first place [12], we suffer twice. Let us return to the example g (x) = x 2 , for which we found a suitable decomposition that was illustrated in Fig. 5.2. This is a canonical example, and each stationary point of order 1 is just a smooth deformation of this case, as we shall see later on. Recall the earlier result:

 1    2

−1

f (x)eiωx dx =

Γ−1



Γ0,−

+

Γ0,+



2

Γ1

f (z)eiωz dz.

The steepest descent paths Γa are the paths of constant phase, and they are only geometric in nature. They could be parameterized in many ways. With Watson’s Lemma in mind, we have been using a specific parameterization that displays explicit exponential decay, ha ( p), defined by (5.6). But for numerical purposes, we can freely explore other choices, and we will do so for stationary point integrals. The endpoint integrals have the same expression as before, with a = ±1: ∞  iωz 2 iω g (a) f (z)e dz = e f (ha ( p))ha ( p)e−ω p d p. Γa

0

We can also evaluate them with Gauss–Laguerre quadrature as before, using the very same formula (5.9). The two integrals at the stationary point call for a different numerical treatment. They are, with ± denoting the chosen branch of the inverse of the square root,   ∞  i iωz 2 f (z)e dz = ± f (± i p)  e−ω p d p. p 2 Γ0,± 0

First of all, the integrand is weakly singular, due to the appearance of the square root. As a consequence, standard Gauss–Laguerre quadrature will perform poorly. Furthermore, Proposition 5.5 does not apply anymore either, because the condition that the integrand of the Laplace-type integral be analytic at p = 0 is no longer met.14 Fortunately, the situation can be rectified simply by the substitution p = q 2 . The Jacobian cancels the singularity and we arrive at the more pleasant integrals    ∞ 2 2 (5.11) f (± iq)e−ωq dq. f (z)eiωz dz = ± i Γ0,±

0

Alternatively, it is even simpler to avoid the singularity in the first place. Indeed, the substitution p = q 2 corresponds to a new parameterization ˜h0,± (q) = h0,± (q 2 ) of the 14 While the first problem, integration with weak singularity, can be overcome using generalized Gauss–  Laguerre quadrature, the second issue, due to p appearing in the argument of f , is more substantive.

92

Chapter 5. Numerical steepest descent

same contour in the complex plane. We can find ˜h directly if we replace (5.6) at a stationary point ξ by (5.12) g (˜hξ (q)) = g (ξ ) + iq 2 . This is actually a general observation. We can use equation (5.12) instead of (5.6) for any stationary point of order 1, not just for g (x) = x 2 . From here on, the good news seems never-ending! First of all, in the form given by (5.11), the path integral is fully covered by Proposition 5.5 with α = 2. Thus, we could proceed to evaluation with optimal Gaussian quadrature, where the correspond2 ing polynomials are orthogonal with respect to e−t on [0, ∞). They are an example of so-called Freud-type polynomials [76]. These Gaussian quadrature rules are nonstandard, but not hard to construct either, using the three-term recurrence relation of the underlying orthogonal polynomials [49]. In any case, this need be done only once, because the oscillator g (x) = x 2 is canonical. The prime result here is that, with just n points, we achieve  ω −(2n+1)/2 absolute error! The division by two in the exponent is unavoidable: the asymptotic behavior of stationary point contributions is intrinsically  slower than that of endpoint contributions. Each term in the expansion is just ω smaller than the previous term. But it is actually simpler than this. The Freud-type polynomials can also be avoided. It was already evident in Fig. 5.2 that the two half-infinite paths Γ0,± combine into a single straight  line. Indeed,  been  this also follows from our parameterization: we have using both iq and − iq on [0, ∞). Of course, these combine into just iq on (−∞, ∞),

  ∞    ∞  2 i i −ωq 2 f  t e−t dt . dq =  f ( iq) e + = i − ω ω −∞ Γ0,− Γ0,+ −∞

The latter equality suggests numerical evaluation using Gauss–Hermite quadrature, since the classical Hermite polynomials are orthogonal with respect to the weight func2 tion e−t on (−∞, ∞). Let us reconsider our crude assessment of the number of function evaluations versus asymptotic order. Using Gaussian quadrature for Freud-type polynomials, we use n points for each of the two line integrals at the stationary point to achieve  ω −(2n+1)/2 asymptotic error. On the face of it, this is an excellent result. However, since there are two integrals, we have used 2n points in total. In contrast, we can use a single n-point Gauss–Hermite rule to evaluate both integrals simultaneously.   Perhaps surprisingly, the absolute error in this approach is still of size  ω −(2n+1)/2 ! Indeed, Proposition 5.5 can easily be extended to cover integration on (−∞, ∞), and we will do just this later on. With Gauss–Hermite we obtain the same asymptotic error rate, using just half as many points. Hence, Gauss–Hermite quadrature is the method of choice to evaluate steepest descent integrals around stationary points. One exception is if a stationary point happens to coincide with an endpoint of the oscillatory integral. In that case, there is only a single half-infinite integral to evaluate, and Freud-type   Gaussian rules are the best choice. This still leads to  ω −(2n+1)/2 absolute error with just n points, because in this case we need apply Freud-type quadrature only once. An example is in order. We consider again a simple function f (x) = e x and the oscillator g (x) = x 2 on the interval [−1, 1]. We have three line integrals to evaluate. The two half-infinite paths originating at ±1 we can evaluate with Gauss–Laguerre quadrature as before, and the infinite path through the stationary point is evaluated by a single Gauss–Hermite rule. Explicitly, with t L standing for Laguerre and t H for

5.2. The numerical method of steepest descent

93

Figure 5.5. The errors in logarithmic scale of the NSD method ωNSD,n,2n [ f ] for g (x) = x and f (x) = e x . Plum, violet, and red correspond to n = 1, n = 2, and n = 3. 2

Hermite, respectively, this is given by the expression

ωNSD,n,m [ f

n e−iω  wL f ]= ω i =1 i



h−1

tiL



h−1

tiL

(5.13)

ω

L L n t t eiω  L − h1 i w i f h1 i ω ω ω i =1

5 5 m i H i  + wH f t . ω i ω i =1 i ω

    In view of the asymptotic errors  ω −(2n+1) and  ω −(2m+1)/2 for the Laguerre and Hermite parts, we choose m = 2n. This is comparable to evaluating twice as many derivatives at stationary points in Filon quadrature. The logarithmic error is illustrated in Fig. 5.5 and confirms rapid improvement of accuracy as ω increases for very modest values of n. For a more general oscillator g (x) with a stationary point ξ of order 1, we solve (5.12) to find ˜hξ for ξ ∈ (−∞, ∞). The numerical evaluation of the corresponding line integral is  Γξ

iω g (z)

f (z) e

n eiω g (ξ )  wH f dz ≈ ω 1/2 i =1 i



˜h ξ

tiH

ω 1/2



˜h ξ

tiH

ω 1/2

.

(5.14)

In the general case this expression replaces the third term in (5.13).

5.2.3 Higher-order stationary points and a new paradigm An interesting new phenomenon appears once we consider higher-order stationary points. The optimal quadrature rules have their points on seemingly smooth curves in the complex plane, away from the actual steepest descent paths of the integral. The corresponding complex-valued orthogonal polynomials set the stage for the contents of the next chapter.

94

Chapter 5. Numerical steepest descent



Im z i

Γ0,1

Γ0,0

Γ−1

Γ1

−1

Re z

1

Figure 5.6. Steepest descent paths for an oscillatory integral with g (x) = x 3 : the two half-infinite paths through the stationary point at x = 0 are at an angle of 2π/3 with each other.

We will use the canonical example g (x) = x 3 , for which the origin x = 0 is a stationary point of order 2. The inverse of g is triple-valued. This implies that three possible steepest descent paths are eligible, of which we require just two. The paths Γ0,k , k = 0, 1, 2, correspond to the three cubic roots of unity, and one can verify that all solutions to the defining equation (5.6) in this case are  h0,k ( p) = e2πki/3 3 i p, k = 0, 1, 2.

As in the previous section, we can avoid cubic roots of p if we replace (5.6) at a secondorder stationary point ξ by g (˜hξ (q)) = g (ξ ) + iq 3 . Using that

 3

(5.15)

i = eπi/6 , this leads for our example to the straight lines ˜h (q) = e(4k+1)πi/6 q, 0,k

k = 0, 1, 2.

The third path, ˜h0,2 (q) = −iq, goes straight down the imaginary axis. It is clear from Fig. 5.6 that in order to connect the endpoint integrals the relevant paths are h0,0 and h0,1 . These are the ones in the upper half plane. Thus, a suitable path deformation is

 1    3

−1

f (x) eiωx dx =

Γ−1



+

Γ0,1

Γ0,0



3

Γ1

f (z) eiωz dz,

where the middle path integrals are given explicitly by  ∞ 3 3 f (z)eiωz dz = e(4k+1)πi/6 f (e(4k+1)πi/6 q) e−ωq dq Γ0,k

0

=

(4k+1)πi/6

e

ω 1/3







f 0

 e(4k+1)πi/6 3 t e−t dt . ω 1/3

After the usual scaling these integrals can be evaluated with Freud-type quadrature rules, where the polynomials are orthogonal with respect to the Freud-type weight 3 e−t on [0, ∞). Again, these polynomials are not standard, but they can be easily computed, and this need be done only once.

5.2. The numerical method of steepest descent

95

3

Figure 5.7. The roots of the degree 15 polynomial that is orthogonal with respect to eiz , on a contour in the complex plane that consists of Γ0,1 and Γ0,0 shown in Fig. 5.6.

Having said that, the new phenomenon appears when we aim for an analogue of Gauss–Hermite quadrature. Can we combine the two steepest descent integrals into one? Geometrically, it is not as simple as before, because the lines make an angle of 2π/3; see Fig. 5.6. In the Hermite case, we have been lucky without realizing our good fortune, because the fact that we had a single straight line through the stationary point that we could easily rotate back to the real axis does not generalize. Yet, we can be bold and look for polynomials orthogonal with respect to the oscil3 latory weight function eiωz on a complex contour that consists of two separate lines [31]. That is, we consider the monic polynomial pn of degree n that satisfies the orthogonality conditions

  3 + k = 0, 1, . . . , n − 1. (5.16) pn (z)z k eiz dz = 0, − Γ0,1

Γ0,0

Does such a polynomial even exist? There is a priori no guarantee that this is so; we elaborate further on the technical issues in the Appendix. We are, after all, leaving the realm of real-valued and positive weight functions. However, these polynomials do turn out to exist for all degrees n. Their roots, i.e., the Gaussian quadrature points of interest to us, lie in the region of the complex plane between the two steepest descent paths Γ0,0/1 . Once the polynomial exists, though, we can use its roots to numerically evaluate our oscillatory integrals. We generalize Proposition 5.5 as follows. Proposition 5.6. Consider a positive number α ∈  and a scale-invariant contour Γ ⊂ . Furthermore, assume that there exists a quadrature rule of polynomial order d with n points ti and weights wi , i = 1, . . . , n, such that  n  α t k eit dt = wi tik , k = 0, . . . , d − 1. Γ

i =1

Define the integral Lαω [u] :=



iωt α

Γ

u(t ) e

dt = ω

−1/α



α

Γ

u(t ω −1/α ) eit dt

96

Chapter 5. Numerical steepest descent

and its quadrature rule approximation Qωα [u] := ω −1/α

n  i =1

wi u(ti ω −1/α ).

If Lαω [u] exists for ω ≥ ω0 , and if u is analytic at t = 0, then it is true that   |Lαω [u] − Qωα [u]| =  ω −(d +1)/α ,

ω → ∞.

The proof of this proposition is exactly the same as the proof of Proposition 5.5, based on exact integration of a truncated Taylor series. By a scale-invariant contour, we mean any contour Γ such that z ∈ Γ implies az ∈ Γ for any a > 0. In particular, scale-invariant contours include the half line [0, ∞), the real line (−∞, ∞), and any collection of rays that emanate from the origin to infinity or vice versa. Let us consider another example. We have g (x) = x 3 and f (x) = e−x on [−1, 1]. We use n Gauss–Laguerre points for the endpoint integrals and m = 3n points of the newly found quadrature rule (w C , t C ) to evaluate the contribution of the degenerate stationary point at the origin,

ωNSD,n,m [ f

n e−iω  wL f ]= ω i =1 i



h−1

tiL



h−1

tiL



ω



n tiL tiL eiω  L h1 w f h1 − ω ω ω i =1 i

 1/3    m i i 1/3 C C + ti . wi f ω ω i =1 ω

(5.17)

This leads to the result shown in Fig. 5.8. While it may seem like things are getting more complicated, we can actually simplify our approach. For a general oscillator with a second-order stationary point

Figure 5.8. The errors in logarithmic scale of the NSD method ωNSD,n,3n [ f ] for g (x) = x and f (x) = e−x . Plum, violet, and red correspond to n = 1, n = 2, and n = 3. 3

5.2. The numerical method of steepest descent

97

ξ , we might as well do away with the computation of steepest descent paths altogether. The quadrature points are not on the steepest descent paths anyway. Instead, we will locally map the oscillator g (x) to y 3 with a variable substitution. If g (ξ ) = g (ξ ) = 0 = g (ξ ), then the relation g (x(y)) = g (ξ ) + y 3 ,

with x(0) = ξ ,

defines a smooth map from x to y that is uniquely invertible, at least locally. Afterwards, we can proceed with the canonical case. From this point of view, the problem of solving (5.15) is reduced to finding the smooth map x(y). This technique actually works for all of the integrals we have encountered thus far. For endpoint integrals, we can locally map g (x) to y, and for simple stationary points we map g (x) to y 2 . The combination of this map with the canonical steepest descent paths is entirely equivalent to solving (5.6) and (5.12). It is just a two-step procedure to obtain the same result. This leads to another paradigm: each steepest descent integral can be classified using a small number of types, and there exist a canonical form and an optimal quadrature rule for each type. Theorem 5.7 (The steepest descent paradigm). Given a steepest descent deformation as in Proposition 5.4, each integral along Γ xm can be mapped to a canonical integral of the form  r f˜(y)eiωy dy, Γ

with f˜(y) analytic in a neighborhood of y = 0, with r ≥ 1, and Γ a path of constant phase. Application of an n-point quadrature rule to thecanonical form  as in Proposition 5.6 with d = 2n, if it exists, leads to an error of order  ω −(2n+1)/r . Proof. Our main task is to find an analytic map to the canonical form, which we can define by g (x(y)) = g (x m ) + y r , with x(0) = y m . First, consider the case where x m is an endpoint; i.e., we assume that g (x m ) = 0. Then g is locally invertible with analytic inverse. We can choose r = 1, and we find x = g −1 ( g (x m ) + y). This map and its inverse y(x) = g (x) − g (x m ) are analytic. Next, assume x m is a stationary point of order t , i.e., g (x m ) = . . . = g (t ) (x m ) = 0 = g (t +1) (x m ). Then we choose r = t + 1, and we have to satisfy g (x(y)) = g (x m ) + y r . The inverse of g is locally r -valued, but so is the inverse of y r , and the branch cuts cancel. The map x(y) and its inverse y(x) are analytic. In both cases, we can define f˜(y) = eiω g (xm ) f (x(y))x (y) and let Γ be the canonical steepest descent path(s) for y r . The final claim follows from Proposition 5.6.

The paradigm shows that we don’t really need to compute steepest descent paths. All we need is a local map of g to a canonical form at each contributing point. In practice, this map is not unlike finding the inverse of g . The same holds for equations (5.6), (5.12), and (5.15): they essentially amount to inverting g . However, unlike in other numerical methods where g is transformed to a simpler form, in the steepest descent method the transformation need only be computed locally. Computationally, this is a lot simpler than inverting g on [−1, 1]. On the other hand, in the complex plane we have to be careful with branch cuts. Using the paradigm, we can rederive all the rules seen this far. For example, if x0 is an endpoint and g (x0 ) = 0, then we can perform the substitution y = g (x) −

98

Chapter 5. Numerical steepest descent

g (x0 ). This leads to the familiar oscillator eiωy and, subsequently, to Gauss–Laguerre quadrature. Similarly, if g (x0 ) = 0 and g (x0 ) = 0, then we substitute g (x) − g (x0 ) = y 2 , which can be followed by Gauss–Hermite quadrature. We stress again that the substitution is intended to be local, not global: the oscillator in y has rapid decay in the complex plane away from y = 0, and we rapidly truncate the path. This renders the scheme asymptotic in ω, but restricts the original variable x to values near x0 only. More importantly, however, the paradigm allows a natural generalization of steepest descent to more complicated scenarios. A relevant example is when two stationary points are close to each other, possibly coalescing when some other parameter in the integral reaches a critical value. We will encounter this example, along with others, in the next chapter.

5.2.4 Error analysis The first and most important observation to make about numerical steepest descent is that, in spite of its name, it remains an asymptotic method in the parameter ω. There are at least two main reasons for this. The first is to be found in the asymptotic result of Proposition 5.4. This result   indicates that there may be an asymptotic difference of  e−ωP , however small, between the Fourier-type oscillatory integral and the sum of Laplace-type integrals. The numerical method may well converge to the exact value of the Laplace-type integrals by adding quadrature points. It may converge even if the asymptotic expansion fails to do so. Indeed, that is the main reason for performing a numerical computation rather than evaluating an asymptotic expansion. But this does not necessarily imply that  the method converges to the exact value of the original integral. The difference  e−ωP is exponentially small in ω, and in the majority of cases it may not be noticeable, but for any given fixed value of ω, it could still be arbitrarily large. That could be the case, for example, if the integrand has a pole close to the real axis: we refer the reader to Olver’s book [83] for examples. The second reason has to do with our scaling of the path integrals. We have scaled α them in order to remove the dependency on ω from the weight function eiωt . After applying the quadrature rule, we scale back and this leads to the quadrature points being divided by (some power of) ω: Qωα [u] := ω −1/α

n  i =1

wi u(ti ω −1/α ).

Asymptotically all is well: quadrature points are scaled towards 0. For large ω, we end up evaluating our original integrand at a number of points that are very close to the endpoints and stationary points, and locally these quadrature points are distributed in an optimal way. However, as ω becomes smaller, the opposite happens: the scaling pushes the quadrature points further into the complex plane. In particular, the numerical steepest descent method does not have a well-defined limit ω → 0. For these reasons, the error analysis in literature has predominantly focused on asymptotic estimates rather than on strict error bounds as a function of the number of quadrature  this spirit, we make no attempt here to improve on the order  points. In estimate  ω −(2n+1)/r of Theorem 5.7. Instead, further on in this book we focus on alternative numerical methods that combine this asymptotic order with numerical convergence to Iω [ f ] for all ω ≥ 0. Having said that, a loose end has remained in our analysis thus far. We have often assumed that steepest descent paths extend to infinity, and our quadrature rules were

5.3. Implementation aspects

99

designed for that case. Fortunately, there is an easy fix: there is no need to change our quadrature rules; we just augment the theory. Since quadrature points are scaled towards the real axis for increasing ω in all cases considered, one only needs a small section of the steepest descent path for the asymptotic error result to hold. Proposition 5.8 ([59]). Let x, w, Γ , α, d , Lαω , and Qωα be as in Proposition 5.6. Fix  > 0 and let Γ˜ = Γ ∩ B(), where B() is an open ball of size  around the origin. Define the truncated integral  α L˜αω [u] := u(t )eiωt dt . Γ˜

If u is analytic, bounded, and with bounded derivatives as ω → ∞ in B(), then      ˜α  ω → ∞. Lω [u] − Qωα [u] =  ω −(d +1)/α ,

This result allows for finite steepest descent paths. Moreover, the integrand can depend on ω, as long as the dependence is benign: the derivatives of u should not grow as ω increases. Similar to extended Filon in Chapter 4, the accuracy of the numerical steepest descent method can be improved by adding internal points. This was explored in [59] and effectively leads to a Filon method with complex interpolation points, with higher asymptotic order of accuracy than the Filon methods with real points. The addition of internal points ameliorates the analyticity requirements of the steepest descent method to a great extent but does not entirely remove its asymptotic nature.

5.3 Implementation aspects 5.3.1 Computing steepest descent paths Thus far, we have assumed that all oscillatory integrals are presented to us on a silver platter, inclusive of their deformation into Laplace-type integrals. For simple functions g that may be indeed so, but in general steepest descent paths themselves require computation. In fact, this may be the most time-consuming part of the overall evaluation of the integral. As we mentioned before, computing a path amounts to computing a local inverse of g . We focus on the equation (5.6), g (ha ( p)) = g (a) + i p,

p ≥ 0,

with ha (0) = a. Differentiation with respect to p yields g (ha ( p))ha ( p) = i



ha ( p) =

i . g (ha ( p))

The latter equation is an ordinary (nonlinear) differential equation, and standard ODE solvers should have no difficulty with it. It is also a simple expression for ha ( p), which is needed in some of the formulas in this chapter. Fortunately, the complexity of an ODE solver is rarely necessary. Typically, one only needs a few values of ha ( p), equal to the number of quadrature points, and p is

100

Chapter 5. Numerical steepest descent

fairly small. It is straightforward to derive the Taylor series expansion of ha ( p) about p = 0 from the Taylor series of g around x = a. In fact, we have already computed the first two terms in (5.7). Explicit expressions for the higher-order terms can be found in terms of the derivatives of g [3]. One could use these approximate paths ˜ha ( p) as follows:  P ˜ f (z) eiω g (z) dz ≈ f (˜ha ( p))˜ha ( p)eiω g ( ha ( p)) d p Γa

0

 =

0

P

˜

f (˜ha ( p))˜ha ( p)eiω(g ( ha ( p))−i p) e−ω p d p.

Here, we cannot explicitly factor out e−ω p from the integrand, because ˜ha is not on the exact steepest descent path. However, we still have it approximately that g (˜ha ( p)) ≈ g (a) + i p, and hence we can divide and multiply the integrand by e−ω p as we have done above. The final integral can be evaluated with Gauss–Laguerre quadrature just the same. There is some loss in the asymptotic order of accuracy, depending on the number of terms in the Taylor series of ha that were used [3]. Alternatively, the points on the approximate paths are suitable starting values for a Newton–Raphson iteration to solve (5.6) exactly. Only a few iterations are needed to achieve very high accuracy, because this iterative scheme converges quadratically. Unless the computational cost of this scheme is an issue, this is the method of choice. Lack of convergence or loss of accuracy is easily detected by verifying whether (5.6) holds at the computed points.

5.3.2 Branch cuts and computations in the complex plane For large ω, application of the numerical steepest descent method is fairly simple. All quadrature points lie close to the real line, and the scheme outlined above finds them very efficiently. For small ω, the method is not recommended—read on for alternatives. Is there a regime in between? What limits the application of the method? How large should ω be? A frank answer is of course unsatisfactory: it depends. It depends mostly on the behavior of the integrand in the complex plane. Steepest descent paths do not necessarily extend to infinity. They may end in a pole, in another stationary point, or in other kinds of singularities. By Cauchy’s integral theorem, singularities of the integrand enclosed by steepest descent paths are an expected nuisance. Perhaps unexpected are the singularities of the inverse of g . Indeed, since ha ( p) is defined by an inverse of g , any singularity of the latter affects the analyticity of ha , and consequently it affects the numerical convergence rates of our methods. Strictly speaking, any stationary point of g , anywhere in the complex plane, is a singularity of any inverse of g . Fortunately, whatever happens in the complex plane has exponentially small impact (in ω) on our integral. Still, particularly bothersome are stationary points close to the real line. Our steepest descent deformation in Proposition 5.4 does not include them. Yet, their presence implies that g (x) may be small on the real line, and hence the integral may be locally not very oscillatory anymore. In a robust implementation, one may want to verify the magnitude of g (x) on [−1, 1]. If their location is known, stationary points’ contributions in the complex plane can also be evaluated with Gauss–Hermite quadrature in much the same way as for real stationary points—it is only the factor eiω g (ξ ) in front of the expression (5.14) that for complex ξ along a steepest descent path is exponentially decaying.

5.4. Multivariate integrals

101

The possible global structure of steepest descent paths is beyond a systematic review. One practical concern that we want to highlight here is the evaluation of functions in the presence of branch cuts. In the theory of complex analysis, branch cuts can be moved arbitrarily. In practice, though, standard functions have standard branch cuts. It is of course forbidden for a steepest descent path to cross a branch cut. That would violate the golden rule: the path deformation has to be justified by Cauchy’s integral formula, and this requires analyticity in the interior. If crossing a branch cut is about to happen, the cut has to move. How does one go about moving a branchcut? Let us consider the square root the function as an example. It is well known that z has a branch point at z = 0, and  z standard branch cut extends from 0 to −∞ on the real line. The two branches are  and − z. In order to move the branch cut, it suffices to implement a custom square root that checks the location of z in the complex plane and returns either  function  z or − z. For example, in order to move the branch cut to the imaginary axis, it is sufficient to check the argument of z:

6 S(z) =

 − z  z

if π/2 < arg z ≤ π, otherwise.

 The function S(z) agrees with the standard z everywhere, except in the  upper-left quadrant of the complex plane, where it evaluates to the other branch − z.

5.4 Multivariate integrals Complications multiply when considering multivariate integrals for all methods in this book. The derivation of the quadrature scheme is more involved. We should not be surprised: the cost of traditional quadrature methods for oscillatory integrals increases rapidly with the dimension.   If a traditional quadrature in one dimension requires  (n) points, it requires  n d points in d dimensions once generalized, for example, by a product rule—this is known as the curse of dimensionality [36]. The difference with a scheme that requires just  (1) points becomes much larger. Multivariate oscillatory integrals can easily be a bottleneck computational problem when they arise, and the effort invested into their efficient evaluation for large frequency may well be worth your time. Among the multivariate extensions discussed in this book, NSD is perhaps the most general—as long as the integrand involved is analytic in all of its variables. Conceptually, it is a matter of extending steepest descent paths to steepest descent manifolds. Much like the univariate case, the steepest descent manifold is governed by the structure of the local inverses of g . This can become very complicated for degenerate oscillators with many vanishing partial derivatives, in which case one is led to singularity theory [34]. Less degenerate cases, however, can be dealt with using repeated univariate integration. This approach leads to steepest descent integrals with product structure, which are amenable to product quadrature. This is not optimal—product quadrature rules are never optimal in any dimension [24]—but it is practical. We illustrate the ideas with a few examples based on the results reported in [61]. The examples are encouraging, but the reader should be warned that the results of this section are both incomplete and inconclusive by a wide margin.

102

Chapter 5. Numerical steepest descent

5.4.1 Two-dimensional integrals Consider first an integral on a rectangular domain with the oscillator g (x, y) = x + y,  b Iω [ f ] =

a

d

f (x, y)eiω(x+y) dy dx.

c

We can deform the path of integration for the inner integral in y onto its steepest descent paths. As before in this chapter, it is simpler to assume that the paths can extend to infinity. The difference with the preceding results is that the deformed integral still depends on the outer integration variable x. We obtain ∞ ∞  b iωc −ωq iωd −ωq ie f (x, c + iq)e dq − ie f (x, d + iq)e dq eiωx dx. Iω [ f ] = a

0

0

If we define the nonoscillatory function, using the steepest descent path at y = y0 , ∞ f (x, y0 + iq)e−ωq dq, G(x, y0 ) = i 0

then we can write the result more concisely as b [eiωc G(x, c) − eiωd G(x, d )]eiωx dx Iω [ f ] = a

 =



b

iω(x+c)

G(x, c) e a

b

dx −

G(x, d ) eiω(x+d ) dx.

(5.18)

a

We have two integrals that look familiar: they are univariate oscillatory integrals, both with the oscillator eiωx . We can deform onto steepest descent paths again, this time in x. Similarly to G we define the function ∞ F (x0 , y0 ) = i G(x0 + i p, y0 ) e−ω p d p; 0

consequently, we can write the original integral Iω [ f ] as a sum of four contributions originating in the four corners of the rectangle: Iω [ f ] = eiω(a+c) F (a, c) − eiω(a+d ) F (a, d ) − eiω(b +c) F (b , c) + eiω(b +d ) F (b , d ). (5.19) The function F (x0 , y0 ) captures the local contribution at the point (x0 , y0 ) and is given explicitly by a double integral that can be considered to be an integral along a steepest descent manifold,  ∞ ∞ f (x0 + i p, y0 + iq)e−ω( p+q) dq d p. (5.20) F (x0 , y0 ) = i2 0

0

This function can easily be evaluated numerically using tensor-product Gauss–Laguerre quadrature. Generalizing our original integral, interesting new possibilities arise if the integration range for y depends on x as in the following case:  b  d (x) f (x, y) eiω(x+y) dy dx. Iω [ f ] = a

c(x)

5.4. Multivariate integrals

103

With the same definition of G, the analogue of (5.18) becomes b b iω[x+c(x)] G(x, c(x)) e dx − G(x, d (x)) eiω[x+d (x)] dx. Iω [ f ] = a

a

Note that the oscillatory behavior in the variable x has changed! In particular, the oscillators x + c(x) and x + d (x) might well have stationary points, even though the original oscillator g (x, y) = x + y looked innocent in that respect. The oscillator x + c(x) = g (x, c(x)) is actually our original oscillator, but evaluated along the boundary of the domain. Hence, its stationary points correspond to certain points on the boundary of the domain, and they correspond exactly to the so-called hidden stationary points that we have described in §2.4.1, while discussing the asymptotics of multivariate integrals. An example is a Fourier-type integral over the unit disk x 2 + y 2 ≤ 1, which one might write in the following way:  ID [ f ] = The oscillator x +



1 −1



 1−x 2

 − 1−x 2

f (x, y) eiω(x+y) dy dx.

1 − x 2 has a stationary point at x =   2 2 ( 2 , 2 )

 2 . 2

This corresponds to the

hidden stationary point on the boundary of the disk. The other oscillator    2 2 2 x − 1 − x exposes a second hidden stationary point at (− 2 , − 2 ). We do have to reconsider the function F (x0 , y0 ), since the path of steepest descent at x = x0 is no longer just x0 + i p, and because y0 = c(x) now depends on x. This makes F (x) a function of x only, and we include c in the notation as follows:  Fc (x0 ) = G(u, c(u))ei ω[u+c(u)] du Γx

0  =i

Γx

0



f (u, c(u) + iq)e−ωq ei ω[u+c(u)] dq du,

0

where Γ x0 is the steepest descent path that originates at the point x0 . It follows from the formulas that one should be prepared to evaluate c(u) for complex values of u. Indeed, the modulating function of our oscillatory integral in x is G(x, c(x)). We end up evaluating the analytic continuation of the function c(x) that described the boundary of the domain. Following up on the disk example, it turns out that the corner contributions at x = ±1 cancel—after all a disk has no corners—and that the hidden stationary points correspond to a stationary point in x but not in y. The corresponding integrals can be evaluated with Gauss–Hermite quadrature in x and Gauss–Laguerre quadrature in  y. Care has to be taken when evaluating the analytic continuation c(u) = 1 − u 2 : one has to evaluate the right branch of the square root in the complex plane, and it is forbiddento cross any branch cuts. Say the steepest descent path at the stationary point x0 = 2/2 is u = h( p)  with h(0)= ξ . The correct branch in this case is the one that agrees with c(h(0)) = 1 − x02 = 2/2. Our final generalization is to consider the integral with a general oscillator g (x, y),  b  d (x) f (x, y) eiω g (x,y) dy dx. Iω [ f ] = a

c(x)

104

Chapter 5. Numerical steepest descent

In this case, we also have to modify G(x, y0 ), since the path of steepest descent at y = y0 is no longer y0 + iq. Worse still, we are faced with a new complication: the path depends on x! As a matter of fact, so do the contributing points, and in general we should be prepared to consider y0 (x) as a function of x. There may be several functions y m (x) involved, m = 1, . . . , M , two for the endpoints c(x) and d (x) and one for each stationary point in y as a function of x. Hopefully these functions y m are all analytic in x, as we will soon complicate matters further and consider y m (u) for a complex variable u! Explicit expressions for the paths may be difficult to obtain for most oscillators, and one has to resort to numerical computations. Yet, for a contributing point at y = y m (x), we can formally proceed with the definition  f (x, v)eiω g (x,v) dv, Gym (x) = Γym

where Γym is the notation for the steepest descent path at y = y m (x). The oscillator for the outer integral in x is g (x, y m (x)). Again, this function may have several contributing points x m,n , n = 1, . . . , N . We proceed formally by defining the function F as  Fym (x) = Gym (u) ei ω g (u,ym (u)) du Γ x,ym

 =



Γ x,ym

Γym

f (u, v) ei ω g (u,v) dv du,

(5.21)

where Γ x,ym is the steepest descent path at x with respect to the oscillator g (x, y m (x)). The contributions to the original integral are given by Fym (x m,n ), m = 1, . . . , M , n = 1, . . . , N . One can think of the combination Γ xm,n ,ym and Γy,m as the steepest descent manifold at the point (x m,n , y m (x m,n )), and there are M × N of those. Our formal results seem plausible, albeit complicated. But do these definitions actually make sense? Unfortunately, not always! We have made many silent assumptions: for example, the analyticity of all functions involved and the existence of steepest descent paths in y uniformly in x. More importantly, there is no reason to assume implicitly that M and N are independent. Why should g (x, y1 (x)) and g (x, y2 (x)) have the same number of contributing points? That may be so for separable oscillators, but surely not in general. Several other critical observations can be made, and we can only conclude that it is fundamentally problematic to treat general multivariate integrals as repeated univariate integrals. Sometimes the method above applies when the variables are interchanged and one considers steepest descent in x first, as a function of y, but not the other way around. It is fair to say that the possibilities that may arise for bivariate integrals have not been explored to their full extent in the literature. A number of bivariate examples for which the above process does work are presented in greater detail in [61]. These examples include an oscillator with a critical point on a square, and an integral on the half of a circle with a hidden stationary point.

5.4.2 A three-dimensional example By continuing to consider multivariate oscillatory integrals as repeated univariate integrals in higher dimensions, we are only digging a bigger hole for ourselves. Can we

5.4. Multivariate integrals

105

reach a different conclusion? Fortunately, we can, and the solution is to go back to basics. We are fundamentally interested, both in theory and in practice, in the decay of the function eiω g (x,y) for complex values of x and y. In which direction of the complex plane should x and y go? For univariate integrals we were forced to consider intervals on which g is uniquely invertible. The inverse of g to use for a steepest descent path changes as one transitions across a stationary point. This observation does generalize to higher dimensions: the domain should be subdivided into regions where g (x, y) is uniquely invertible both in x and in y. The complicating factor is that the inverses of a multivariate function can be very intricate; hence the appearance of singularity theory in advanced theoretical papers on asymptotics such as [34]. For univariate integrals the breakpoints are points where g (x) = 0—stationary points. In the multivariate case, the subregions are delineated by curves where partial ∂g ∂g derivatives of g vanish, i.e., ∂ x = 0 or ∂ y = 0. Such lines intersect at critical points where ∇ g = 0, or they may intersect the boundary. We shall not make the claim that this observation settles the issue of multivariate oscillatory integrals. On the contrary! It is not sufficient to understand the invertibility structure of g on the integration domain Ω; we have to understand it in a complex neighbourhood of Ω (complex in all variables) and be able to justify globally path deformation onto manifolds. The theory of analytic functions in several variables currently does not offer the tools to do so in any concise manner. Instead, we conclude this section with a positive and convincing 3D example where the extension of the repeated-univariate-integration technique does work: evaluation of a Fourier-type integral on a ball. This integral appeared in the study of light propagation in tiny drops of fluid, as seen through a microscope [104]. The model problem is  f (x, y, z) eiω(x+y+z) dz dy dx, Iω [ f ] = B

where B is the unit ball x + y + z 2 ≤ 1. In the application, f involved, among other things, the refractive index of the fluid. The linear oscillator results from assuming a plane wave incidence. We forgo a complete derivation of the steepest descent integrals but illustrate the process by which to derive the oscillators. It is clear that a ball has no corners and that there are two hidden stationary points on the boundary where ∇ g ⊥ ∂ Ω: they are    ! 2

the points ±

3 3 3 , 3, 3 3

 Iω [ f ] =

1

−1

2

. Consider the following parameterization: 

 1−x 2  − 1−x 2

 1−x 2 −y 2





1−x 2 −y 2

f (x, y, z)eiω(x+y+z) dz dy dx.

 The inner integral in z is free of stationary points, but z varies between ± 1 − x 2 − y 2 . Hence, after deformation, the oscillator for y is   g1 (x, y) = g (x, y, ± 1 − x 2 − y 2 ) = x + y ± 1 − x 2 − y 2 . 7 2 1−x One can verify that its partial derivative with respect to y vanishes at ± ; i.e., 2 there is an x-dependent stationary point in each case. Thus, the oscillator in the outer variable x is



5 5 5  2 1 − x 1 − x2 1 − x2 = x± 2 − 2x 2 . = g x, ± g2 (x) = g1 x, ± , ± 1 − x2 − 2 2 2

106

Chapter 5. Numerical steepest descent

7 2   3 1−x 3 = ± 3 and There is a stationary point at x = ± 3 . In turn, we find that y = ± 2   3 z = ± 1 − x 2 − y 2 = 3 , confirming the two hidden stationary points. By this time we know all the oscillators involved, and hence we can compute the steepest descent paths. We have to bear in mind that in our formulation the path for y depends on x, and the path for z depends on both x and y. Thus, we look for h x ( p),  for hy (x, q), and for h z (x, y, r ). The path h x ( p) for x = 3/3 satisfies, with g2 (x) defined above,   g2 (h x ( p)) = g2 3/3 + i p 2 .

The path hy (x, q) for y satisfies

5 g1 (x, hy (x, q)) = g1 x,

1 − x2 + iq 2 = g2 (x) + iq 2 . 2

Finally, for h z (x, y, r ) we find g (x, y, h z (x, y, r )) = g (x, y,



1 − x 2 − y 2 ) + ir = g1 (x, y) + ir.

Taken together, this means that

  3 3 3 + i p 2 + iq 2 + ir. , , 3 3 3

 g (h x , hy , h z ) = g

Omitting several more technical details, this is what it takes in order to apply Gauss– Hermite quadrature in p and q, and Gauss–Laguerre quadrature in r . We have used n = 3 Gauss–Laguerre points and 2n = 6 Gauss–Hermite points. This leads to 3×6×6 = 108 function evaluations for each hidden stationary point and hence 216 point evaluations in total. Since the paths can be found analytically in this case by solving the equations above, there is little overhead. At a value of ω = 50, the 2 integral for f (x, y, z) = e x+y z (3y + cos z) evaluates to Iω [ f ] ≈ 0.021 − 0.240i. The NSD quadrature scheme with 216 evaluations achieves a relative error of 3.7 × 10−4 . In contrast, a scheme based on tensor-product Gaussian quadrature for the same integral in spherical coordinates required 1.7×106 function evaluations to reach similar accuracy. In other words, we achieved a speedup by a factor of approximately 104 . Of course, the cost of NSD remains the same (with improving accuracy)as the  frequency increases, whereas the cost of naive classical quadrature scales like  ω 3 .

5.5 More general oscillators We have focused in this chapter on Fourier-type integrals, with oscillators of the form eiω g (x) in one dimension. If you are reading this book with a specific application in mind, chances are you have an integral that does not quite fit this pattern. However, there appears to be a deeper generality about the methods of this chapter. Many oscillatory functions, as long as they are also analytic, exhibit oscillations that are intrinsically exponential in nature. One could attempt to substantiate that claim by showing the ubiquity of the exponential function in solutions of differential equations, but here

5.5. More general oscillators

107

we restrict ourselves to some examples where oscillators can put aside their differences and agree on a common ground. In some cases, minor variations just ask for minor modifications. The first variation that comes to mind is integrals involving cosines (or sines, for that matter). The cosine is nonoscillatory in the complex plane in directions orthogonal to the real axis. However, it also grows, rather than decays, exponentially! This can be remedied by writing the integral as a sum of two integrals: 

b a

1 f (x) cos(ωx) dx = 2



b

iωx

f (x) e a

1 dx + 2



b

f (x) e−iωx dx.

a

You can apply the numerical steepest descent method, at the expense of having to evaluate two integrals instead of one. A similar observation holds for other special functions, such as the Bessel function. Oscillatory integrals involving Bessel functions sometimes appear in the computation of Green’s functions. Much like sines and cosines, the Bessel functions of the first and second kinds exhibit exponential growth in the complex plane. However, their complex combination as a Hankel function exhibits exponential decay, and as above one can write 

b a

1 f (x)Jν (ωx) dx = 2



b

f a

(1) (x)Hν (ωx) dx

1 + 2



b

(2)

f (x)Hν (ωx) dx. a

Hankel functions decay in the upper or lower half of the complex plane, respectively, and one can successfully apply Gauss–Laguerre quadrature. Integrals of this kind, with specialized quadrature rules that cater to the specific oscillators like Bessel and Airy functions, are explored in [4].  Sometimes  ω −2n−1 behavior of the error is too much to hope for. In some applications the integrand can be seen to oscillate on the real line and decay in the complex plane, but the exact phase of the oscillations is unknown from an analytical standpoint. Perhaps the integrand is a sum of oscillatory components that cannot be identified individually. Or perhaps the integrand is itself the result of a larger computation, such as the solution of a partial differential equation, and analytic information is not available. Of course, decay in the complex plane can still be exploited by integrating numerically along a nonoscillatory, or less oscillatory, path. It could also be the case that the integrand itself does not decay in the complex plane, but its oscillatory component does. Hence, deformation into the complex plane leads to a nonoscillatory integral. This case appeared in the evaluation of certain frequency-band averaging integrals in vibro-acoustics [25], where integration in the complex plane was orders of magnitude faster than integration along the real line. If the function f in the integrand is not analytic, but the oscillator eiω g (x) is, then so are the integrands of the oscillatory moments of g . After all, polynomials are analytic functions. Hence, one can use numerical steepest descent to compute moments for a Filon method. Problems can arise with large-degree polynomials in this approach, since polynomials tend to grow in the complex plane. This growth may ultimately offset the exponential decay, jeopardizing numerical stability. On the other hand, the need for large-degree moments with Filon is also an indication that f is itself oscillatory. Unless additional knowledge about f is available, one may have to resort to a quadrature rule with many points in order to resolve its oscillations, and one might as well resolve eiω g (x) along with it.

108

Chapter 5. Numerical steepest descent

Finally, we return to the steepest descent paradigm that was formulated as Theorem 5.7 earlier in this chapter. The philosophy of numerical steepest descent is to write the integral as a sum of local contributions, each of which can be mapped to a canonical case and evaluated with an optimal Gaussian quadrature. A case alluded to at the end of §5.2.3 is that of an integral with coalescing stationary points. Here, the  x3 canonical case is g (x) = 3 − δ x, with stationary points at ± δ coalescing for δ = 0. Standard asymptotic expansions break down when δ approaches zero (note that δ is a parameter independent of the frequency ω). This leads in the literature of asymptotic analysis to uniform expansions, which are uniformly valid in the extra parameter δ. Adhering to our newly found paradigm, the numerical analogue is to define a smooth map from any oscillator with coalescing stationary points to the canonical case, and to devise an optimal quadrature rule for the latter [73]. This leads to the study of orthogonal polynomials satisfying  3 pn (z)z k ei(z −δ z) dz, k = 0, . . . , n − 1. Γ

These orthogonality conditions generalize (5.16). In the spirit of numerical steepest descent, the associated Gaussian quadrature rules are the asymptotically optimal way to numerically evaluate oscillatory integrals with coalescing stationary points. But do these quadrature rules even exist? What do we know about complex-valued Gaussian quadrature and the associated polynomials in general? Fairly little, it seems! This is new research area, fostered by highly oscillatory quadrature but conceivably with broader relevance, which we start exploring in the next chapter.

Chapter 6

Complex-valued Gaussian quadrature

6.1 Introduction As we have seen in Chapter 1, classical Gaussian quadrature is of no use in the presence of high oscillation: this is indeed the incentive for the entirety of this book. However, Gaussian quadrature is so effective in the case of classical integration that whether we can somehow extend it to our setting is a natural question. In this chapter we provide an affirmative answer and demonstrate that this results in a very powerful approach to highly oscillatory quadrature, as well as a long list of juicy mathematical challenges. In the previous chapter we have already seen how Gaussian quadrature is naturally connected to oscillatory integrals, via deformation of the contour of integration into the complex plane and then optimal discretization of the resulting contour integrals. In a sense, this application of Gaussian quadrature is modulo the change of variables needed to work along the path of steepest descent, within the classical theory of orthogonal polynomials (involving a positive and integrable weight function). In this chapter we explore a less conventional construction related to Gaussian quadrature: namely, we consider the problem of computing the integral 

1 −1

f (x) eiωx dx

(6.1)

by using Gaussian quadrature directly, with weight function w(x) = eiωx . This is the place to note that 1 f1 (x) f2 (x) eiωx dx 〈 f1 , f2 〉ω = −1

is not, strictly speaking, an inner product: for example, it is entirely possible that 〈 f , f 〉ω = 0 while f = 0. In principle, it is a bilinear form, and the existence of underlying orthogonal polynomials is not guaranteed. We will return to this issue in what follows, but, for the time being, we just assume that the underlying orthogonal polynomials, hence also quadrature formulas, exist. What are the advantages and challenges of such an approach? • The scheme preserves the optimality properties in terms of ω: in the limit as ω → 0+ we recover the classical Gauss–Legendre quadrature, which is optimal 109

110

Chapter 6. Complex-valued Gaussian quadrature

in the sense of polynomial degree of exactness; cf. Appendix A. On the other hand, in the limit as ω → ∞, the construction is optimal in the asymptotic sense of Chapters 2 and 5. • The fact (to which we have already alluded above) that the weight function w(x) = eiωx is no longer positive, or even real for x ∈ [−1, 1], raises both theoretical and computational challenges. In particular, the essential working tool in this chapter is formal orthogonal polynomials pnω with respect to the weight function w, and we will see that some parts of the standard theory apply to them while others do not. The latter require extra care and, indeed, lead to an array of fascinating new phenomena, only some of which are currently understood. As an extra application of this complex-valued Gaussian quadrature, we also show examples of this construction in problems where uniformity in terms of other parameters needs to be studied, for instance in the presence of coalescing stationary points. In Appendix A we collect some results from the standard theory of orthogonal polynomials and its classical connection with Gaussian quadrature on the real line, and we discuss some of the issues that arise in the extension to complex-valued weight functions.

6.1.1 Hermitian vs. non-Hermitian orthogonality Gaussian quadrature in the complex plane has been explored in some detail in the literature, mainly as an extension of classical results on the real line. In principle one is keen on retaining existence and (hopefully) easy computation of the corresponding sequence of orthogonal polynomials pn , exploring the location of their zeros, and retaining optimality (in the standard sense) of the underlying Gaussian quadrature, that is, that polynomials of degree as high as possible are integrated exactly. Our first observation is that there are different ways to construct orthogonal polynomials in : one standard way, to replicate the results obtained on the real line, is to consider a contour Γ ⊂  and then define orthogonality on Γ in terms of a Hermitian inner product  〈f , g〉 =

Γ

f (z) g (z) dμ(z),

for given functions f and g , and a positive measure dμ on Γ . It is clear that this definition ensures that  f  = 〈 f , f 〉1/2 is a norm, and therefore existence and uniqueness of the sequence of monic orthogonal polynomials pn follow from classical theory, e.g., by applying the Gram–Schmidt procedure to the canonical basis of monomials. Thus, pn exists uniquely, has degree equal to n, and satisfies the orthogonality conditions  k = 0, 1, 2, . . . , n − 1. pn (z) z k dμ(z) = 0, Γ

The most prominent example in the literature is the case where Γ =  is the unit circle; if we write z = eiθ , then the orthogonal polynomials with respect to the inner product   (f , g) =



f (z) g (z) dμ(z) =

π

−π

f (eiθ ) g (eiθ ) dμ(θ),

where dμ is in general a distribution function with infinitely many points of increase, are known as Szeg˝ o polynomials. A very rich body of research is devoted to these

6.1. Introduction

111

polynomials and their properties; we refer the reader to the monograph by Gabor Szeg˝ o [99, Chapter XI], Simon’s monograph [95], or the paper of Jones, Njåstad, and Thron [70] for more details. In this setting, the natural mathematical object to work with is trigonometric polynomials on [−π, π], or alternatively Laurent polynomials on . We will not dwell here on the theory of orthogonal polynomials in the unit circle, but we remark that it is an area that has witnessed an enormous development in the last decades from analytic and numerical perspective alike. Another important case that has received attention in the literature concerns Hermitian orthogonality where the support Γ is composed of (a union of) radial rays in the complex plane. We refer the reader to the work of Milovanovi´c and collaborators [82, 81]. It is very interesting to note that one property that is not preserved in this context (and that will feature extensively later) is the location of the zeros of pn . In the real case, it is well known that these zeros are contained in the interval of orthogonality (see, for instance, [49, Theorem 1.46]) (or in the convex hull of it, if, for example, the orthogonality is taken on several intervals). The root distribution in the complex plane follows a similar idea, but a detailed analysis can be much more complicated, as discussed by Saff in [93]. In this chapter we present an altogether different perspective on complex quadrature and complex orthogonal polynomials: namely, we define the orthogonality on the real line, but allow for oscillatory or even complex weight functions. Suppose that b we have an oscillatory integral I [ f ] = a f (x)w(x) dx, where we do not require that w(x) be real valued, let alone positive. It is clear that, as we have already mentioned, this construction does not define an inner product, but defying all the rules applying to the classical theory of orthogonal polynomials, we ask the following question: what happens if we consider w(x) as a (complex) weight function and construct (formal) orthogonal polynomials pn (x) and a (formal) Gaussian quadrature rule? That is, we look for a family of (formal) orthogonal polynomials, 

b a

pn (x)x k w(x) dx = 0,

k = 0, 1, . . . , n − 1.

It is very possible that for some values of n and other parameters present in the weight function (like ω in the case of Fourier-type integrals) it happens that 〈 pn , pn 〉 = 0, which will cause a breakdown in the Gram–Schmidt process of orthogonalization. We will see that determining if (or when) this happens is in general a complicated issue, but in exchange it also showcases some very interesting, if unorthodox, families of orthogonal polynomials and a beautiful pattern of behavior.

6.1.2 Comparison with the method of numerical steepest descent In limited cases the approach of complex-valued Gaussian quadrature is equivalent to the numerical steepest descent method explained in the previous chapter. For instance, consider the integral ∞ f (x) eiωx dx, Iω [ f ] = 0

where we assume that f (x) decays sufficiently quickly as x → ∞ for the integral to

112

Chapter 6. Complex-valued Gaussian quadrature

converge, in the sense that

   R    iωx lim  f (x) e dx  < ∞.  R→∞  0

Applying the Cauchy Theorem, it is not difficult to check that using polynomials (formally) orthogonal to w(x) = eiωx is in fact equivalent to the method of numerical steepest descent. Indeed, if we consider the contour that consists of the interval [0, R], the quarter circle CR = {z = Reiθ , θ ∈ [0, π/2]}, oriented counterclockwise, and the segment [0, iR] along the imaginary axis oriented from top to bottom, then the Cauchy Theorem gives   iR R k iωx k iωx pn (x)x e dx + pn (x)x e dx − pn (x)x k eiωx dx, 0= CR

0

0

and, taking the limit R → ∞, it is clear that the second integral tends to 0. Making the change of variables x = is/ω in the last integral, we have ∞  k+1  ∞   i is k −s pn pn (x)x k eiωx dx = s e ds. ω ω 0 0

Hence, the orthogonal polynomials are just the (scaled and rotated) standard Laguerre polynomials, and the roots that we find for the complex Gaussian quadrature are the same as the ones that one finds in the method of numerical steepest descent. Of course, this is in agreement with the general idea that the main contribution to the above oscillatory integral originates in a neighborhood of the origin. A similar result r can be obtained with the more general oscillator w(x) = eiωx , x ≥ 0. In other examples the method of numerical steepest descent may not be directly applicable, but complex-valued quadrature may yield a similar scheme (in the sense that one obtains quadrature nodes along paths of steepest descent). For example, in [4], Asheim and Huybrechs study several oscillatory integral transforms, including Fourier and Hankel transforms, and the possibility of approximating them using Gaussian quadrature with complex nodes. Some examples are the following: ∞ ∞ ∞ f (x) Jν (ωx) dx, f (x) sin(ωx) dx, f (x) Ai(ωx) dx, 0

0

0

where Ai and Jν are the classical Airy and Bessel functions. Again, we understand the integrals as improper, assuming that f decays sufficiently quickly at infinity, or with some other suitable regularization. Taking the second example, since the sine function grows exponentially in the complex plane, no deformation of the path in the spirit of numerical steepest descent will give a satisfactory result. However, once we write   ∞ 1 ∞ 1 ∞ iωx f (x)e−iωx dx, f (x)e dx − f (x) sin(ωx) dx = 2i 0 2i 0 0

provided f is smooth and decays sufficiently quickly at infinity, the changes of variable x = it /ω and x = −it /ω in the first and second integrals, respectively, yield   ∞  ∞      1 ∞ 1 it −t it it fE f (x) sin(ωx) dx = f − e dt , e−t dt = +f ω ω ω ω 2ω 0 0 0 (6.2) where fE (t ) = [ f (t ) + f (−t )]/2 is the even part of the function f .

6.1. Introduction

113

Consider now the weight function  1 w(x) = x −1/2 e− x , 2

x ∈ (0, ∞),

which is positive and integrable. Then a Gaussian quadrature rule with n nodes t1 , . . . , tν and weights w1 , . . . , wν gives  ν  1 ∞ m−1/2 − x x e dx, m = 0, 1, . . . , 2ν − 1. wk tkm = 2 0 k=1

With the change of variables x = t 2 , we obtain ∞ ν  2m wk tk = t 2m e−t dt , m = 0, 1, . . . , 2ν − 1; k=1

0

hence with ν evaluations of f we obtain a classical order of 4ν for even functions. Tak  ing the nodes {−i tk }νk=1 ∪ {i tk }νk=1 , the quadrature rule is exact for all polynomials up to order 4ν − 1 with 2ν function evaluations, i.e., is Gaussian. See also [4, Theorem 5]. Thus, a clever decomposition of the oscillatory function leads to a quadrature rule with complex nodes but real and positive weights. An example with 8 nodes and weights is displayed in Table 6.1. Table 6.1. Nodes tk and weights wk for complex-valued Gaussian quadrature presented above.

k 1 2 3 4 5 6 7 8

Nodes 0.7038464344085956 11.098601962447075 43.037402946181646 110.45078606998899 233.03054819390852 441.72247098078693 792.879667997572633 1422.95413303522634

Weights 0.8920399241318109 0.1027708524869015 0.5069870675072899−2 0.1181254507161250−3 0.1222515276821382−5 4.7354554641504959−9 4.7658044513536984−12 4.8198220224786556−16

An alternative, presented by the numerical method of steepest descent, would be to apply two rotated Gauss–Laguerre quadrature rules to the second integral in (6.2). It is crucial to observe that this set of nodes and weights and the previous one are different, since the nodes tk do not correspond to the Laguerre weight function. However, by construction, they will have the same asymptotic properties as ω → ∞. As an example, consider the integral ∞ cos x sin ωx dx, (6.3) 1+x 0

which can be evaluated explicitly as a fairly complicated combination of trigonometric functions and sine and cosine integrals. The function oscillates faster and faster with increasing ω inside an envelope that slowly shrinks to zero as x gets large (cf. Fig. 6.1). In Fig. 6.2 we plot the relative errors when computing this integral with eight complex Gauss–Laguerre nodes and weights and increasing ω.

114

Chapter 6. Complex-valued Gaussian quadrature

Figure 6.1. Plot of the integrand in (6.3) with ω = 10 (left) and ω = 100 (right).

Naturally, the previous example can be a bit misleading, since the results depend on several specific features of the oscillatory weight function: first, we can split it into parts that decay conveniently in the complex plane and make use of the resulting symmetry; second, we obtain a nonoscillatory rule that is completely classical. Other examples that are less obvious involve the Hankel transform of order 0: ∞ f (x) J0 (ωx) dx, 0

whereby the analogue of the trigonometric decomposition is iπ J0 (z) = K0 (−iz) − K0 (iz) (see, for example, [108, eqn. 10.27.9]) in terms of modified Bessel functions K0 , which likewise decays along the imaginary axis. The emphasis of this chapter, however, is on cases where the two approaches (path deformation together with discretisation, and direct Gaussian quadrature with a non-

Figure 6.2. The error in logarithmic scale in the computation of (6.3) using complex Gauss–Laguerre quadrature with 8 points.

6.2. Complex-valued Gaussian quadrature for oscillatory integrals on [−1, 1]

115

positive weight function) can be applied but differ in the results. Our favorite, the Fourier integral 1 f (x) eiωx dx, Iω [ f ] = −1

where ω > 0 is a real (typically large) parameter, is a case in point. Since we are working on the real line we consider the bilinear form, 1 〈 f , g 〉ω = f (x) g (x) eiωx dx. (6.4) −1

The method of numerical steepest descent would deform the interval [−1, 1] onto a union of three contours, two vertical, stemming from the endpoints x = a and x = b , and a horizontal one that in general we may disregard from the standpoint of large ω asymptotics. Instead, direct construction of orthogonal polynomials and a complex quadrature rule (if they actually exist!) treats both endpoints in one go. Apart from this optimality, this complex-valued Gaussian quadrature rule will naturally interpolate between the standard case (when ω = 0, which corresponds to Gauss–Legendre quadrature) and the asymptotic regime (as ω → ∞). The first regime is quite straightforward to check, since the bilinear form (6.4) depends analytically on ω, as do all quantities computed from it. As a consequence, we expect to recover the Gauss–Legendre case once we let ω → 0+. The second regime, as ω grows large, is more intricate, but using Hankel determinants (cf. §6.2.3 and §A.1.3) and a remarkable connection with multivariate oscillatory integrals we can prove that the nodes cluster around x = ±1 as ω → ∞, in a fashion that preserves the asymptotic order of the quadrature as well. The immediate concern, and rightly so, is the price to pay for all these good properties. Apart from the already mentioned issue regarding existence of such a family of orthogonal polynomials, the distribution of the zeros of pn is far from obvious. Last but not least, it is unclear how to compute these formal orthogonal polynomials and related quantities.

6.2 Complex-valued Gaussian quadrature for oscillatory integrals on [−1, 1] The construction and properties of formal orthogonal polynomials with respect to (6.4) was explored in [2], and further asymptotic properties as n → ∞ were studied in [30]. This asymptotic regime, and also the case when ω = λn, with λ ≥ 0, is amenable to direct analysis using the Riemann–Hilbert formulation of the corresponding complex-valued orthogonal polynomials and the Deift–Zhou method of steepest descent. However, since the motivation from Gaussian quadrature naturally focuses on a small number of nodes and therefore small n, we concentrate here on the behavior in terms of ω.

6.2.1 Complex-valued orthogonal polynomials Our current goal is to investigate formal monic orthogonal polynomials pnω that satisfy the orthogonality conditions 1 〈 pnω , z k 〉ω = pnω (z)z k eiωz dz = 0, k = 0, 1, . . . , n − 1. −1

116

Chapter 6. Complex-valued Gaussian quadrature ω If we assume that three consecutive orthogonal polynomials pnω and pn±1 exist, then the fact that 〈 p, zq〉 = 〈z p, q〉 implies that the following three-term recurrence relation holds:15 ω ω z pnω (z) = pn+1 (z) + αn pnω (z) + βn pn−1 (z).

(6.5)

And the recurrence coefficients (which depend on ω) can be written in the usual way, αn =

〈z pnω , pnω 〉ω

〈 pnω , pnω 〉ω

βn =

,

〈 pnω , pnω 〉ω

ω , pω 〉 〈 pn−1 n−1 ω

,

(6.6)

ω ω provided that 〈 pn−1 , pn−1 〉ω , 〈 pnω , pnω 〉ω = 0. The underlying reason is that the threeterm recurrence relation (6.5) is an algebraic (as opposed to analytic) construct and hence survives the passage from a “proper” Borel measure to a complex-valued one. Our first straightforward observation is that when ω = 0 we are in the classical setting of Legendre polynomials, with weight function w ≡ 1. Furthermore, the bilinear form depends continuously on ω, so whatever happens next we expect it to be some kind of perturbation of the Legendre case. The moments of the weight function w(z) = eiωz are

μk = μω k =



1

z k eiωz dz,

−1

k ≥ 0,

(6.7)

and they can be easily obtained by the recurrence μω 0 =

2 sin ω , ω

μω m =

e iω − (−1) m e −iω m ω , μ − iω m−1 iω

m ≥ 1,

which is deduced through integration by parts. Alternatively, they can be written in terms of incomplete Gamma functions. In contrast, an explicit construction of formal orthogonal polynomials pnω is quite cumbersome. Let us start with p0ω ≡ 1; then the bilinear form gives 〈 p0ω , p0ω 〉ω =



1

−1

eiωz dz =

2 sin ω . ω

It is clear from this expression that 〈 p0ω , p0ω 〉ω = 0

whenever

ω = ω0,k = kπ,

k ≥ 1.

Once we go back to the recurrence coefficients (6.6), it turns out that the coefficient α0 is not defined for these values ω = ω0,k : α0 = −

iω(cos ω − sin ω) i i + , =− tan ω ω ω sin ω

which indeed has simple poles at ω = ω0,k . At these values p1ω is not defined, since from the recurrence relation we have p1ω (z) = z p0ω (z) − α0 p0ω (z) = z − α0 . 15 This is contrast with the Hermitian orthogonality explained before, where this property does not hold!

6.2. Complex-valued Gaussian quadrature for oscillatory integrals on [−1, 1]

117

What happens with the “norm” of p1ω ? Straightforward calculation yields 〈 p1ω , p1ω 〉ω =

2(ω 2 − sin2 ω) , ω 3 sin ω

which is different from zero for ω > 0, since ω = ± sin(ω) has the only real solution ω = 0. Therefore, in principle we can compute α1 and β1 for any ω > 0. If p2ω (z) = z 2 + b1 z + b0 , we obtain b1 = −α0 − α1 = −

b0 = α0 α1 − β1 =

2i(ω 2 − 2 sin2 ω + ω sin ω cos ω)

ω(ω 2 − sin2 ω)

,

−ω 4 + 2ω 2 cos(2ω) − 2ω sin(2ω) + 2 sin2 ω

ω 2 (ω 2 − sin2 ω)

.

Apart from the fact that the expressions become increasingly complicated, it is crucial to note that both b0 and b1 , and therefore p2ω , are well defined for any ω > 0! The explanation is that a remarkable cancellation of the poles takes place when forming these coefficients, since the term 〈x p1ω , p1ω 〉ω contributes extra singularities to α1 . If we take one last explicit step, we have 〈 p2ω , p2ω 〉ω =

8[ω 2 (3 − ω 2 ) sin ω − 2ω 3 cos ω − sin(ω 3 )]

ω 5 (ω 2 − sin2 ω)

.

The zeros of 〈 p2ω , p2ω 〉ω , say ω2,k , are difficult to compute analytically, but one can have an idea of their behavior from Fig. 6.3, and some information can be given for large values of ω. Dividing the numerator by ω 4 throughout, we have   2 cos(ω) sin(ω)3 3 + 1− = 0. sin(ω) + ω4 ω2 ω ω ω Hence, to leading order thezeros  of 〈 p2 , p2 〉ω are the same as the zeros of sin ω, and −1 once we let ω2,k = kπ +  k then the first correction to this estimation yields

ω2,k = kπ −

  2 +  k −3 , kπ

k  1.

Figure 6.3. Plot of |〈 p2ω , p2ω 〉|ω , in logarithmic scale, as a function of ω.

118

Chapter 6. Complex-valued Gaussian quadrature

At these values of ω, the next polynomial p3ω is undefined. A first bold guess is that ω this pattern repeats itself: the polynomials of even degree, p2n , are always well defined for all ω > 0, thanks to magical cancellations occurring in the recurrence relation, ω , become undefined for certain values whereas the polynomials of odd degree, p2n+1 ∞ ω ω of ω, namely sequences {ω2n, j } j =1 , where 〈 p2n , p2n 〉ω = 0. In what follows we will obtain more information about this phenomenon when ω → ∞.

6.2.2 Root patterns in  We recall that with a complex-valued weight function, the zeros of the orthogonal polynomials are no longer constrained to live on the interval where the orthogonality is defined, which is a key feature of the standard case; see the Appendix. Therefore, a further experiment was carried out in [2], namely tracing the zeros of a low, evenω as a function of ω. The outcome for p4ω and p6ω is displayed degree polynomial p2n in Figs. 6.4 and 6.5, respectively. As ω increases, the nodes depart from the real axis (actually from the values of Gauss–Legendre nodes!) and wander into the complex plane, following strikingly regular patterns. Several observations are in order: • The zeros are symmetric with respect to the imaginary axis. This is expected as a consequence of the fact that the weight function w(z) = eiωz and the interval of integration are symmetric with respect to the imaginary axis. As shown in [2, Lemma 3.1], the fact that w(z) = w(− z), and also that z ∈ Γ ⇔ − z ∈ Γ , where Γ is the original integration contour, implies that

pnω (z) = (−1)n pnω (− z),

z ∈ ;

therefore, if z is a zero of pnω (z), so is − z.

• The trajectories of the roots show a sequence of cusps, whose meaning will become clearer in what follows. • As ω gets large, the zeros seem to tend to the endpoints ±1. We recall from the previous chapter that this behavior is typical of numerical steepest descent for oscillatory integrals. With greater generality, we know from asymptotic analysis that information at the endpoints is crucial when ω  1. What happens in the case of odd degree polynomials? A similar picture is obtained (see Fig. 6.6), but in this case with an extra zero that moves along the imaginary axis, something that is to be expected because of the symmetry of the weight function and of the orthogonal polynomials. As an example, we consider the following integral: 

1

eiωx dx, −1 3 + x

(6.8)

which can be written in terms of the exponential integral. We test the numerical quadrature using the roots of p4ω and p6ω , for ω ∈ [0, 100] and for different numbers of quadrature nodes. The results are displayed in Fig. 6.6. We note in passing that the error decays considerably faster than for other methods using the same volume of

6.2. Complex-valued Gaussian quadrature for oscillatory integrals on [−1, 1]

119

Figure 6.4. The zeros of p4ω as a function of ω, with a close-up near z = +1.

Figure 6.5. The zeros of p6ω as a function of ω, with a close-up near z = +1.

Figure 6.6. The zeros of p3ω (left) and p5ω (right) as a function of ω. (Some points on the imaginary axis are omitted to get a clearer picture.)

120

Chapter 6. Complex-valued Gaussian quadrature

Figure 6.7. Errors in logarithmic scale as a function of ω for complex-valued Gaussian quadrature to compute the integral (6.8), using 4 (violet), 6 (crimson), and 8 (black) nodes and weights.

information. Unlike numerical steepest descent, the error is small already for ω = 0 (where, of course, the method is Gauss–Legendre quadrature), and unlike Filon-type methods there is no “hump” in the error curve for an intermediate value of ω > 0: it seems that we live in the best of all possible worlds! We readily note that, in the best spirit of highly oscillatory quadrature as presented in the previous chapters, the performance of the complex numerical quadrature improves as ω increases. Taking into account the root patterns shown before, where it is apparent that the nodes approach the endpoints z = ±1 as ω → ∞, it is very tempting to relate this scheme to a complex version of the derivative-free Filon method presented in §3.1. Yet another eerily similar idea is the numerical method of steepest descent from Chapter 5, albeit after deformation of the path of integration into the complex plane. This is no coincidence, of course, and the remarkable connection between these ideas will be explained later, after we get a handle on the behavior of the orthogonal polynomials and their roots as ω becomes large. The main result that we need for this purpose is a Gaussian case of [2, Theorem 4.1]. be a 2n-point complex-valued quadrature rule with polyTheorem 6.1. Let {x j , w j }2n j =1 nomial order 4n, i.e., 2n  j =1

 w j x kj =

1

x k eiωx dx,

−1

k = 0, 1 . . . , 4n − 1.

If the nodes x j can be split into two groups {x 1j }nj=1 and {x 2j }nj=1 , such that   x 1j = −1 +  ω −1 ,

  x 2j = 1 +  ω −1 ,

ω → ∞,

for j = 1, . . . , n, then, for an analytic function f the quadrature error has the asymptotic decay 1 2n    w j f (x j ) − f (x) eiωx dx =  ω −2n−1 . j =1

−1

6.2. Complex-valued Gaussian quadrature for oscillatory integrals on [−1, 1]

121

Figure 6.8. Close-up on the zeros of p4ω (in violet) and p5ω (in dark green).

There are two key elements missing at this stage: first, the existence of such a ω quadrature rule, since we need the zeros of p2n to construct it, and, second, its numerical construction We will deal with the first issue in the asymptotic regime ω  1 in what follows, following the approach of [32] and using classical Hankel determinants. Regarding the second point, it is a very important issue that we have to weigh along with all the advantages of this approach (see the analysis in the next chapter); the calculation of complex orthogonal polynomials is the subject of current research, but efficient and reliable algorithms (that do not require extended precision) are not available yet.

6.2.3 Large-ω asymptotic behavior In [32], a further analysis of this family of orthogonal polynomials is carried out, and it is found that considerable insight is gained by studying the connection between pnω and Hankel determinants constructed from the moments of the weight function, given by (6.7) in this case. We refer the reader to Appendix A for the general construction. The Hankel determinant ⎤ ⎡ μ0 μ1 · · · μn ⎢ μ1 μ2 · · · μn+1 ⎥ ⎥ ⎢ n ≥ 0, Dn = Dn (ω) = det⎢ . . .. ⎥ , .. ⎣ .. . ⎦ μn μn+1 · · · μ2n where the moments are given by (6.7), admits the following multiple integral representation: 11 1   1 Dn = ··· (x − xk )2 w(xk ) dxk , (n + 1)! −1 −1 −1 0≤k 0 and LN (c) is the classical Laguerre polynomial [91, p. 200] with parameter α = 0. We note that one of the main technical challenges in the proof of this result, differently to the case of the Hankel determinants, is the fact that the multivariate symmetric function f (x), once we scale around the endpoints, depends on ω. However, this dependence is not highly oscillatory as ω grows large. As an immediate consequence, we can confirm the observed behavior of the roots ω with those in the previous plots, and what is more, we can relate the roots of p2N of (rotated and scaled) Laguerre polynomials, exactly the idea that we pursued in the numerical method of steepest descent in Chapter 5! Namely, we have the following. [N ]

[N ]

[N ]

Corollary 6.6. Let c1 , c2 , . . . , cN > 0 be the zeros of the N th Laguerre polynomial; ω are then as ω → ∞, the zeros of p2N [N ]

zk = ±1 +

The same result holds for

ω p2N +1 ,

ick

ω

  +  ω −2 .

with an added zero on the imaginary axis.

6.2.4 Root patterns in  revisited ˆ where the Hankel determinant We study what happens as ω tends to a critical value ω, of order 2N vanishes. In all cases we choose ω sufficiently large, so that even-degree polynomials under consideration exist but odd-degree polynomials may not, in line ˆ = 0 and with the asymptotic approximations given before. Thus, we have D2N (ω) ˆ = 0 for some N > 0. Using Hankel determinants, we can consider a differD2N −1 (ω) ent normalization of the orthogonal polynomials: ˜pnω (z) = Dn−1 pnω (z). Note that now ˜pnω is always well defined, but since ˜pnω (x) = Dn−1 z n + . . ., this polynomial has degree strictly less than n whenever Dn−1 = 0. The three-term recurrence relation (6.5) can be adapted for the ˜pnω ’s, and, once we consider it with n = 2N , we obtain +  , ω ω 2 2 ˜ω ˜p2N (z)−D2N ˜p2N p2N −1 (z). +1 (z)D2N −1 = D2N D2N −1 z + i D2N D2N −1 − D2N −1 D2N (6.10)

6.3. Complex quadrature and uniform asymptotic expansions

125

Here, we have used the following formulas for the recurrence coefficients in terms of the Hankel determinants:

Dn Dn−2 Dn Dn−1 , βn = , − αn = −i 2 Dn−1 Dn Dn−1

where Dn indicates again differentiation with respect to ω. We note that because of the differential identity the formula for βn in terms of the Hankel determinants is true for all orthogonal polynomials, but the one for αn is specific to the pnω ’s because the differential identity dμk = iμk+1 μ k = dω holds for all k ≥ 0; see [32]. ˆ the expression (6.10) simplifies to Thus, if ω = ω, ˆ ω ˜p2N +1 (z) = i

D2N

D2N −1

ˆ ω ˜p2n (z).

ˆ ω ˜ ωˆ In other words, ˜p2N +1 is a scalar multiple of p2N . Hence, their zeros coincide, and the trajectories of both polynomials “kiss.” This phenomenon is illustrated in Fig. 6.8, and we note that these critical values coincide with the cusps that we observed in previous ˆ ω ˆ the original polynomial p2N plots. At this critical value ω, +1 is not defined, and one way to express this is the fact that one of its zeros (the one on the imaginary axis due to symmetry) becomes infinite—we recall Fig. 6.6. We note that 1 D2N ˆ ω = 0, [ p2N (z)]2 eiωz dz = h2N = D2N −1 −1

which implies the orthogonality property 

1 −1

ˆ p2N (z)z k eiωz dz = 0,

k = 0, 1, . . . , 2N .

ˆ ω ˜ ωˆ Therefore, we can replace ˜p2N +1 (z) by z p2N (z) in order to obtain a complete basis for the space of polynomials. ω }, n = 2, 4, 6, 8, in the upper-right Fig. 6.9 displays the root patterns for { pnω , pn+1 quadrant. Note that the entire action takes place in the closed upper half plane, while i is an axis of symmetry. The “kissing pattern” is clear, as is its fractal nature: since ω asymptotically D2N vanishes near kπ for large k, p2N +1 blows up at a countable number of “kissing points” for ω ∈ (0, ∞). Projected into the complex plane of root patterns, these points accumulate at ±1. The nearer the root pattern is to the endpoints, the more frequent the “kissing points” are, and this accounts for the fractal nature of Fig. 6.9.

6.3 Complex quadrature and uniform asymptotic expansions A recent important use of complex Gaussian quadrature is related to the idea of constructing a numerical method that can handle in a uniform manner more complicated situations like coalescing saddle points or saddle points near endpoints of integration.

126

Chapter 6. Complex-valued Gaussian quadrature

ω Figure 6.9. Root patterns in  for pairs { pnω , pn+1 }, n = 2, 4, 6, 8, in the upper-right quadrant.

Examples of these two cases are given by g being a cubic polynomial (with two stationary points that coincide for a critical value of an extra parameter) and a quadratic polynomial. The theory that we develop in this section is motivated by numerical problems related to quadrature of highly oscillatory integrals, but this is also a very important area in asymptotic analysis, namely the construction and properties of uniform asymptotic expansions. In this context, one tries to compute an asymptotic approximation (for large ω in our case) that is uniform with respect to other parameters in the oscillator (that determines, for instance, the location of stationary points) around one of more critical values where the asymptotic behavior changes. In the literature there are many examples of such constructions, for instance, to name just few standard references, [10, 83, 100, 111]. The general philosophy is that one needs to consider more complicated asymptotic expansions than the standard Poincaré series (in inverse powers of ω) that we have seen before. These uniform expansions typically include classical special functions (such as Airy, Bessel, or error functions) that capture the transition in the asymptotic behavior that happens for large parameters and in a neighborhood of critical points (for example, between monotonic and oscillatory regimes). Which special function is needed in each case is typically dictated by the geometry of the problem and the properties of the phase function g . The result is often a much more precise approximation, at the price of more complicated

6.3. Complex quadrature and uniform asymptotic expansions

127

coefficients that need to be computed symbolically or numerically; we refer the reader to [52, 101] for more details. In the context of complex Gaussian quadrature, such special functions needed for uniformity appear naturally as moments once we consider the oscillatory factor directly as a weight function.

6.3.1 A cubic oscillator A canonical case in uniform asymptotic analysis is the integral 

1 −1

f (x) eiω g (x) dx,

g (x) =

x3 − c x, 3

(6.11)

where cis a real parameter. This oscillator has two stationary points of order 1 at x± = ± c, which coalesce to create a second-order stationary point as c → 0, and which coalesce with the endpoints as c → 1. When applying the numerical method of steepest descent, the standard deformation will be as indicated in Fig. 6.10. Then the analysis proceeds as explained in Chapter 5, parameterizing each integral and using Gauss–Laguerre quadrature along the paths that involve the endpoints and Gauss–Hermite for the ones that include the stationary points. However, we remark that an essential ingredient in the construction of the NSD method (cf. Proposition 5.4 in Chapter 5) is the monotonicity of the phase function g in any interval between endpoints and stationary points. This is easily achieved in this situation if c is bounded away from 0 and 1, but it becomes problematic when c → 0. As studied in [73], a complex-valued Gaussian quadrature scheme in this context reduces to a formula





Γ− ∪Γ+

f (x) e

x3 3

−c x

!

dx ≈

n 

wkc f (xkc ),

(6.12)

k=1

combining the two paths Γ+ and Γ− in one single expression, with some complex nodes and weights. Such a rule uses formal orthogonal polynomials pnω,c defined in

Im z



Γ−1

Γ1 −1

 − c



Γ−

Γ+

c

Re z

1

Figure 6.10. Steepest descent paths for the oscillator g (x) =

x3 3

− c x.

128

Chapter 6. Complex-valued Gaussian quadrature

Figure 6.11. Error in log10 scale when computing the integral of f (x) = sin 4x along the path Γ− ∪ Γ+ , with ω = 100, using the numerical method of steepest descent (royal blue) and the uniform method (turquoise), as a function of c.

the following way: 



Γ− ∪Γ+

pnω,c (x)x k e

x3 3

−c x

!

dx = 0,

k = 0, 1, . . . , n − 1,

(6.13)

assuming that they exist for given values of n and ω. As an example, we ignore for the moment the path integrals stemming from the endpoints z = ±1 in Fig. 6.10, where we can use Gauss–Laguerre quadrature, and we concentrate on the integral along the contours Γ− and Γ+ . For this contour, we have two possible choices: two Gauss–Hermite quadrature rules, one on each path, after mapping to the real axis, or a single Gaussian quadrature rule for both contours simultaneously. In Fig. 6.11 we compare these two calculations, for different values of c between 0 and 1, using n = 6 nodes and weights. The plot shows that the uniform numerical steepest descent (UNSD) method keeps uniform accuracy in terms of c, including small values of c which are problematic for the standard method of steepest descent. Naturally, this improvement comes at the price of a more elaborate computation, using nonstandard orthogonal polynomials. Also, we note that these complex orthogonal polynomials, and therefore the associated complex Gaussian quadrature rule, depend on the value of c; this is certainly a computational drawback because they need to be recomputed if this parameter changes. The analysis of the corresponding orthogonal polynomials appears in [73] and also in [57]. We summarize here the main results, concentrating again on the path Γ− ∪ Γ+ : it is convenient to couple the two parameters c and ω into one, using the following scaling: δ = cω 2/3 , x = uω −1/3 , and subsequently 



Γ− ∪Γ+

f (x) e

x3 3

−c x

!

1 dx = 1/3 ω

 f Γ− ∪Γ+

u ! i e ω 1/3

u3 3

−δ u

!

du.

6.3. Complex quadrature and uniform asymptotic expansions

129

We study now orthogonal polynomials { pnδ }n≥0 defined via the orthogonality conditions !  3 i

Γ− ∪Γ+

pnδ (u)u k e

u 3

−δ u

du = 0,

k = 0, 1, . . . , n − 1.

(6.14)

The moments of this weight function can be written in terms of classical Airy functions, !  u3 i 3 −δ u μ0 = e du = 2π Ai(−δ) (6.15) Γ− ∪Γ+

and subsequently μk+1 (δ) = i

dμk (δ) , dδ

k ≥ 0.

This implies μk (δ) = ik

dk μ (δ) = 2π(−i)k Ai(k) (−δ), dδ k 0

k ≥ 0.

(6.16)

The corresponding Hankel determinants Dn (δ) are then constructed using Airy functions and derivatives.17 Regarding the zeros of Dn , in [73] it is proved that Dn (δ) = 0 for n ≥ 1 and δ < δ0 , where δ0 = 2.3381 . . . is the smallest real zero of the Airy function Ai(−δ). Furthermore, it is true that D2n (δ) = 0 for n ≥ 1 and all δ ∈ . An alternative proof of this last result is presented in [57]. Regarding the quadrature nodes for this Gaussian scheme, we show the distribuδ (z) for different values of δ in Fig. 6.12. It is remarkable that the tion of the zeros of p20 configuration of the roots changes significantly, from what seems like a single smooth curve (when δ = 0) to two disjoint arcs in the left and right half planes when δ is larger. The breaking point can be computed by considering a scaled weight, dependent on n, which is as always the degree of the polynomial, −n −

wn (x) = e

ix 3 3

+iK x

!

,

K ∈ .

In [73], it is proved that there exists a critical value K ∗ = 1.0005424 . . . such that for K < K ∗ the zeros of the orthogonal polynomials (if they exist) accumulate along a single analytic arc as n  1, whereas for K > K ∗ they tend to two disjoint arcs in the complex plane. The case K = K ∗ is a transition where the zeros tend to one arc, which is not analytic at the point of intersection with the imaginary axis. Taking n = 20, we obtain the critical value δ ∗ = K ∗ n 2/3 = 7.372059434 . . . , which is consistent with the transitional behavior observed in Fig. 6.12. We wish to remark that this idea, which for us is arising in numerical analysis, has independent interest in the wider scenario of the theory of non-Hermitian orthogonal polynomials in the complex plane, with weight function of the form w(z) = eV (z) , where the potential V (z) is a polynomial; the case of a cubic phase function has been analyzed in [29, 56], and a more general example was studied in [71]. Asymptotic behavior of orthogonal polynomials and their roots can be studied using powerful tools of potential theory in the complex plane and the notion of S-curves in the presence of an external field ReV (z). We refer the reader to the aforementioned works and references therein for more details. 17 The Hankel determinant can be thought of as a Wronskian determinant as well. This establishes a very fruitful connection between these determinants and special function solutions of Painlevé equations, in this case Painlevé II; see the work of Clarkson in [20].

130

Chapter 6. Complex-valued Gaussian quadrature

δ Figure 6.12. A string of pearls: the distribution of the roots of p20 for δ = 0, 2, 4, . . . , 14 (from top to bottom).

6.3.2 A quadratic oscillator A different situation arises when we consider the previous  function g (x) in (6.11) but as c → 1− . In this case, the two stationary points x± = ± c are not close to each other but are close to the endpoints of the interval of integration. The main idea is similar to that before, with the difference that now we have to work with semi-infinite contours: if c is away from 1, then we can consider numerical steepest descent along each path in Fig. 6.10, parameterize the contour integrals, and use Gauss–Laguerre for the paths stemming from the endpoints and Gauss–Hermite for the paths Γ± . If c is close to 1, however, we may be interested in combining the paths Γ+ ∪ Γ1 and Γ− ∪ Γ−1 , designing uniform schemes. As proposed in [73], this situation can be studied using the simpler oscillator

 Γ

f (x) eiω g (x) dx,

g (x) =

x2 − b x, 2

(6.17)

along a semi-infinite contour that joins 0 with the sector arg x ∈ (0, π/4] at infinity. Now the stationary point is located at x = b , and the steepest descent paths are indicated in Fig. 6.13. It is clear that if b = 0, then the weight becomes a semiclassical Laguerre weight; otherwise, we deal with a truly complex weight function. Similarly as before, we can scale with ω and write 



Γ

f (x)e

x2 2

−b x

!

dx =

1 ω 1/2

 f Γ

u ! i e ω 1/2

where now x = uω −1/2 ,

σ = b ω 1/2 .

u2 2

−σ u

!

du,

6.3. Complex quadrature and uniform asymptotic expansions Im z

131



0

Re z b

Γ0

Γb

Figure 6.13. Steepest descent paths for the oscillator g (x) =

x2 2

− b x.

We can study the corresponding family of orthogonal polynomials, as we did in the previous case: !  u2 σ k i 2 −σ u pn (u)u e du = 0, k = 0, 1, . . . , n − 1. (6.18) Γ

Now the moments are given in terms of error functions, or more generally parabolic cylinder functions. We note that, writing u = eπi/4 t , we obtain ! ∞ 2    u2 t i 2  1 3πi/4 i −σ u μ0 = e 2 du = eπi/4 e− 2 −σe t dt = e− 4 σ +πi/4 π U , σe3πi/4 , 2 Γ 0 using [108, formula 12.5.1]. Then, μk+1 = i

dμk (σ) , dσ

k ≥ 0.

As can be seen, the cubic and the quadratic models share the common idea of constructing a uniform scheme (in a parameter that typically controls the location of the stationary points) by combining paths in the complex plane and using the corresponding special functions that arise from the moments. Attractive as this idea may sound, it is unclear at this stage how to compute these formal orthogonal polynomials and the complex Gaussian quadrature rules that they generate. Apart from brute computation using the moments and the recurrence relation, in the next section we briefly present an alternative that remains largely unexplored, especially from a numerical point of view.

6.3.3 Computational methods The example with the cubic oscillator in the complex plane can be further analyzed to obtain a computational method for the orthogonal polynomials { pnδ }n≥0 that is

132

Chapter 6. Complex-valued Gaussian quadrature

promising but has not been studied in detail so far. It is possible to show that the recurrence coefficients of the orthogonal polynomials pnδ , namely δ δ (z) + αn (δ) pnδ (z) + βn (δ) pn−1 (z), z pnδ (z) = pn+1

satisfy the following nonlinear difference equations (sometimes known as string equations or Freud equations): βn + βn+1 + α2n = δ, βn (αn + αn−1 ) = in.

(6.19)

One possible way to obtain these identities is using compatibility relations between two operations applied to the orthogonal polynomials: shift in the parameter n and differentiation with respect to z. We refer the reader to [6, Section 2.5] for more details. We can compute the initial values for (6.19) directly from p0δ (z) = 1 and p1δ (z) = z − μ1 /μ0 , namely α0 =

μ1 , μ0

β1 = δ = α02 = δ −

μ21

μ20

.

(6.20)

Since the weight and the contour of integration are symmetric with respect to the imaginary axis, we have, as in the previous section, the property pnδ (z) = (−1)n pnδ (− z); this can be used to prove that for real values of δ, the coefficient αn (δ) is purely imaginary and the coefficient βn (δ) is real. If we write αn = iγn , then all coefficients are real and (6.19) becomes

βn + βn+1 = γn2 + δ, βn (γn + γn−1 ) = n, which is equivalent to the formulation in [21]. These identities are very attractive for computational purposes, but unfortunately they suffer from severe instability once used in the forward direction. This is illustrated in Fig. 6.14, created in MAPLE with standard double precision (16 decimal digits).

Figure 6.14. Values of Im αn , on the left, and βn , on the right, calculated with the string equations (6.19) with δ = 1.

6.4. A discussion

133

Can we explain, at least heuristically, this instability? As proved by [21], when solving the string equations subject to the initial conditions (6.20), with the moments given by (6.15) and (6.16), we are approximating the unique positive solution of these nonlinear equations. Based on similar phenomena for linear problems (like the computation of recessive solutions of recurrence relations for special functions; see [52]), such calculation is usually very delicate since small (and essentially unavoidable) numerical errors will result in a generic solution of the equation with very different properties. In this case, this can be understood also by noting that, because of the choice of integration contour, the initial values (6.20) are expressed using only the recessive Airy function Ai; a choice of other contours (no longer motivated by numerical integration) where the cubic oscillator is well defined will yield in general a linear combination of Ai and Bi which behaves very differently. This has been explored by [20] for Airy solutions of the Painlevé II equation, and also in the context of random matrix theory in [7, 8, 9]. An alternative method for computation using the string equations is proposed in [21], based on fixed point iteration; this approach seems to be more promising than direct forward calculation (where extended precision is needed), but at the moment this is a subject of further research.

6.4 A discussion 6.4.1 Have we found an ideal method? The next chapter is devoted to a more far-ranging comparison of highly oscillatory quadrature methods, but it would be beneficial to conclude this chapter with a brief discussion of the advantages and limitations of complex-valued Gaussian quadrature. Does complex-valued Gaussian quadrature trump all alternatives? Before we jump to conclusions, we might compare it with the different extended Filon methods of Chapter 4: Filon–Jacobi (FJ), Filon–Clenshaw–Curtis (FCC), and adaptive Filon (AF). Fig. 6.15 should be seen in tandem with Fig. 4.8: in both we report the computation of the integral 1 eiωx dx (6.21) 2 −1 1 + x + x

Figure 6.15. Errors (in logarithmic scale) in the computation of (6.21) by complex-valued Gaussian quadrature with 4 (violet), 6 (crimson), and 8 (black) nodes.

134

Chapter 6. Complex-valued Gaussian quadrature

for ω ∈ [0, 200]. While in Fig. 4.8 we use different configurations of 10 function and derivative values in [−1, 1], at most 8 function values in  are required for Fig. 4.8— and yet the results are clearly superior! So, does complex-valued Gaussian quadrature knock the opposition into a cocked hat? Not necessarily. First, it is considerably more expensive than, say, extended Filon or NSD. Second, computation with complex-valued Gaussian quadrature requires high accuracy. This is not just a major contribution to its cost but restricts its applicability on standard platforms of scientific computing. Our results have been derived with MAPLE with multiple precision, but practical numerical work on, say, MATLAB or PYTHON is with standard IEEE precision. And anyway, who needs such high accuracy? When solving practical problems, it is typically sufficient to compute integrals to moderate precision, consistent with the problem at hand and its data. Third, extended Filon and numerical stationary descent methods can be generalized with great ease both in the presence of stationary points and in multivariate settings. In principle, complex-valued Gaussian quadrature might be generalized to cater for stationary points, although no hard results are available in the literature. However, its generalization to highly oscillatory integrals over polytopes, say, is fraught with great difficulty—even nonoscillatory quadrature over polytopes is difficult, and classical (i.e., with real weight function) theory of orthogonal polynomials over polytopes is poorly developed. All of this does not mean that complex-valued Gaussian quadrature is of no or little use! It is the newest and an incompletely understood weapon in the highly oscillatory arsenal, and, at least in principle, ways and means might be developed to implement it affordably and in low-precision arithmetic.

Chapter 7

A highly oscillatory olympics

7.1 Heat 1: No stationary points, eight function evaluations So far in this monograph we have introduced and analyzed a long list of methods for highly oscillatory integrals. This might be interesting, indeed comforting to a professional numerical analyst, but of little help to a scientist or an engineer intending to calculate a highly oscillatory integral. Which method from this embarrassment of riches should be used in practical computations? The answer is far from straightforward, and in this chapter we intend to explore it and provide some tentative results. To that end we have selected a set of integrals which, apart from being of course highly oscillatory, appear fairly innocuous. It is a fact of life that integrals in applications are free to combine high oscillation with any other sort of complication, and they usually exercise that freedom with great abandon. Yet, at the current stage of our narrative, such complications may lead us astray, and right now we prefer clear vision—complications will not be ignored but will have to await the next chapter. The first, most obvious criterion is to quantify the cost of a method by the number of function evaluations it requires.18 We will see in what follows that this is a fairly fallible criterion, but it will do as a first attempt at our target. Thus, we get under way by evaluating the integral 1 2 [1] Iω = sin(x 2 + πx) eiω(4x+x ) dx (7.1) −1

for ω ∈ [0, 400] restricting ourselves to exactly eight function evaluations. Note that (7.1) has no stationary points in the interval [−1, 1]. Asymptotic method

We commence from the truncated asymptotic expansion (2.1.3), which, allowing for eight function evaluations (therefore, s = 3), is  s  eiω g (−1) eiω g (1) 1 [3] ω = − f m (1) , − f m (−1) m+1 g (−1) g (1) m=0 (−iω) 18 In

this chapter by “function evaluation” we mean also the evaluation of a derivative.

135

136

Chapter 7. A highly oscillatory olympics

where f0 (x) = sin(x 2 + πx),

f m+1 (x) =

d f m (x) , dx 4 + 2x

m = 0, 1, 2. [3]

Fig. 7.1 displays the error (in the usual logarithmic scale) committed by ω in two plots: for 0 ≤ ω ≤ 50 and 0 ≤ ω ≤ 400. The left-hand plot depicts the initial stage, small ω, before asymptotic behavior sets in, the intermediate stage, and the beginning of the asymptotic regime, while the right-hand plot emphasizes asymptotic behavior for large ω. Other figures in this section display similar information.

[1]

Figure 7.1. The errors in logarithmic scale approximating Iω with the asymptotic [3] method ω for 0 ≤ ω ≤ 50 (on the left) and 0 ≤ ω ≤ 400 (on the right).

Filon–Jacobi

Allocating eight function (and derivative) evaluations in the Filon method, we are faced with four sensible choices of interpolation conditions: p (i ) (±1) = f (i ) (±1),

i = 0, . . . , s,

p(ck ) = f (ck ),

k = 1, . . . , 6 − 2s,

(7.2)

for s = 0, 1, 2, 3. Choosing the internal nodes ck as the zeros of the Jacobi polyno(s +1,s +1) mial P6−2s and consistently with §4.2.1 we denote the corresponding method by

ωFJ,s ,6−2s . The case s = 3 corresponds to plain-vanilla Filon, whereby interpolation is restricted to the endpoints to maximize asymptotic order. Fig. 7.2 displays the error committed by the Filon–Jacobi method for different choices of s. The blue curve corresponds to plain-vanilla Filon, and it displays the least error for large ω, as one can expect from its superior asymptotic order. Yet, everything is inverted for small ω: the smaller s (and more internal nodes), the smaller the error! FJ,0,6 with the least asymptotic order Indeed,  −2 the least uniform error is attained by ω  ω . Needless to say, this does not mean that ωFJ,0,6 is “superior” to the other methods—the answer depends on the question being asked! Comparing Figs. 7.1 and 7.2 it is apparent that for ω  1 the asymptotic method and ωFJ,3,0 deliver very similar precision. Predictably, though, the asymptotic method breaks down for small ω ≥ 0.

7.1. Heat 1: No stationary points, eight function evaluations

137

[1]

Figure 7.2. The errors in logarithmic scale in approximating Iω with the Filon–Jacobi method ωFJ,s ,6−2s for s = 0, 1, 2, 3, 0 ≤ ω ≤ 50 (on the left) and 0 ≤ ω ≤ 400 (on the right). The plain Filon method ωFJ,3,0 is in blue, and decreasing s is reflected by transition to a reddish hue.

Filon–Clenshaw–Curtis

The only difference between Filon–Jacobi and Filon–Clenshaw–Curtis quadrature is in the choice of internal nodes in (7.2): in the latter we choose ck = cos kπ/(7 − 2s). While lowering somewhat the precision for small ω ≥ 0, this renders the computation considerably cheaper. Fig. 7.3 displays the error committed by the FCC quadrature ωFCC,s ,6−s . It is clear that the only substantive difference between Clenshaw–Curtis and Jacobi “flavors” of extended Filon method is that the latter, by design, is more accurate for small ω ≥ 0. This is entirely consistent with the analysis of §4.3.

[1]

Figure 7.3. The errors in logarithmic scale approximating Iω with the Filon–Clenshaw– Curtis method ωFCC,s ,6−2s for s = 0, 1, 2, 3, 0 ≤ ω ≤ 50 (on the left) and 0 ≤ ω ≤ 400 (on the right). The color scheme is the same as in Fig. 7.2.

138

Chapter 7. A highly oscillatory olympics

Levin’s method

Harking back to §3.3, we let ψ(x) = conditions 7  =0

(j)

r (−1)q = f ( j ) (−1),

7  =0

where

r (ck )q = f (ck ),

7

7  =0

q x =0 



and, in place of (7.2), impose the

(j)

r (1)q = f ( j ) (1),

j = 0, . . . , s,

(7.3)

k = 1, . . . , 6 − 2s,

r (x) = xi−1 ω g (x)x  ,

 = 0, . . . , 7.

The Levin method (with eight function evaluations) is ωL,s ,6−2s , s = 0, 1, 2, 3. We choose Jacobi points as our internal nodes c1 , . . . , c6−2s .

[1]

Figure 7.4. The errors in logarithmic scale approximating Iω with the Levin method for s = 0, 1, 2, 3, 0 ≤ ω ≤ 50 (on the left) and 0 ≤ ω ≤ 400 (on the right). The “plain Levin” method ωL,3,0 is in magenta, and decreasing s is reflected by transition to a bluish hue.

ωL,s ,6−2s

Fig. 7.4 depicts the error committed by ωL,s ,6−2s for s = 0, 1, 2, 3—comparing with the two extended Filon methods ωFJ,s ,6−2s and ωFCC,s ,6−2s confirms that there is not much to choose among the three methods. They all display very similar behavior. Filon/Levin with natural basis

§3.3.3 describes a setup whereby Filon and Levin methods become identical, namely when, in place of a polynomial interpolation we use the basis Ψ N = {g (x), g (x) g (x), . . . , g (x) g N −1 (x)}, in our case N = 8. Recalling from §3.3.3 that this is a special case of the Filon– Olver quadrature, we denote the method with interpolation pattern (7.2) (or (7.3)) by

ωFO,s ,6−2s . In Fig. 7.5 we display the error committed by the “Filon meets Levin” quadrature

ωFO,s ,6−2s for s = 0, 1, 2, 3. The accuracy is somewhat better than with other Filon and Levin methods, although the margin of improvement is quite modest.

7.1. Heat 1: No stationary points, eight function evaluations

139

[1]

Figure 7.5. The errors in logarithmic scale approximating Iω with the Filon–Jacobi method ωFO,s ,6−2s for s = 0, 1, 2, 3, 0 ≤ ω ≤ 50 (on the left) and 0 ≤ ω ≤ 400 (on the right). The “plain Levin” method ωFO,3,0 is in dark blue, and decreasing s is reflected by transition to a purple hue.

Numerical steepest descent

Implementing the method of steepest descent from §5.2 is straightforward, and the paths of integration in the complex plane can be derived explicitly,   h−1 ( p) = −2 + 1 + i p, h1 ( p) = −2 + 9 + i p, p ≥ 0.

After some elementary algebra, we have

5  ix ie−3iω ∞ e−x dx [1] Iω = 1 + f −2 + ω (ω + ix)1/2 2ω 1/2 0

5  ix ie5iω ∞ e−x dx , 9 + f −2 + − ω (9ω + ix)1/2 2ω 1/2 0

where f (x) = sin(x 2 +πx). We compute this expression with Gauss–Laguerre quadrature with four nodes—given that we discretize two integrals, this sums up to eight function evaluations. The logarithmic error is displayed in Fig. 7.6, and it is abundantly clear that, except for small ω ≥ 0, where the method is bound to blow up, we obtain much greater precision in comparison with the asymptotic–Filon–Levin breed of methods. This is in line with the theory of Chapter 5. Complex-valued Gaussian quadrature

And so we are coming to the last of the seven highly oscillatory quadrature methods in our race, Gaussian quadrature with a complex-valued measure. Bearing in mind that we are “allocated” eight function evaluations, we form p8ω , the 8th-degree monic orthogonal polynomial with respect to the semi-norm  〈 f1 , f2 〉ω =

1

−1

f1 (x) f2 (x)eiω(4x+x ) dx. 2

140

Chapter 7. A highly oscillatory olympics

Figure 7.6. The errors in logarithmic scale approximating $I_\omega^{[1]}$ with numerical steepest descent $\mathcal{Q}_\omega^{\mathrm{NSD}}$ for $0 \le \omega \le 50$ (on the left) and $0 \le \omega \le 400$ (on the right).

Note that all the moments can be computed explicitly, rendering the formation of $p_8^\omega$ simpler. Let $c_1,\dots,c_8 \in \mathbb{C}$ be the zeros of $p_8^\omega$ and set the weights
\[
b_\ell = \int_{-1}^{1} \prod_{\substack{k=1 \\ k \ne \ell}}^{8} \frac{x - c_k}{c_\ell - c_k}\, e^{i\omega(4x + x^2)}\, \mathrm{d} x, \qquad \ell = 1,\dots,8.
\]
Then
\[
\mathcal{Q}_\omega^{\mathrm{CG},8}[f] = \sum_{\ell=1}^{8} b_\ell f(c_\ell),
\]

where $f(x) = \sin(x^2 + \pi x)$. And so to Fig. 7.7, illustrating the error of complex-valued Gaussian quadrature.

Figure 7.7. The errors in logarithmic scale approximating $I_\omega^{[1]}$ with the complex-valued Gaussian quadrature $\mathcal{Q}_\omega^{\mathrm{CG},8}$ for $0 \le \omega \le 50$ (on the left) and $0 \le \omega \le 400$ (on the right).



The results are truly remarkable: significantly more precise than numerical steepest descent, and hugely better than the other methods for $\omega \gg 1$, while for small $\omega \ge 0$ the method again beats all competition by a large margin. Have we reached our ideal highly oscillatory quadrature, a method to end all methods? Alas, it is not as simple as that! As we will see in what follows, the situation is not so clear-cut, and a comparison of methods while keeping the number of function evaluations constant produces some useful information but falls short of crowning the "best" method. First, however, let us compare (again pitting the number of function evaluations vs. accuracy) our methods in the presence of a stationary point.

7.2 Heat 2: A stationary point, 12 function evaluations

In place of (7.1), we now approximate
\[
I_\omega^{[2]} = \int_{-1}^{1} \cos(x + \pi x^2)\, e^{i\omega x^2}\, \mathrm{d} x, \tag{7.4}
\]

with a first-order stationary point at the origin. To reflect the greater cost of computing integrals with stationary points, we change the rules of the game and allow 12 function evaluations.

Asymptotic method

Let $f(x) = \cos(x + \pi x^2)$ and $g(x) = x^2$. Since
\[
\mu_0(\omega) = \int_{-1}^{1} e^{i\omega x^2}\, \mathrm{d} x = \frac{\pi^{1/2}\, \mathrm{erf}\big((-i\omega)^{1/2}\big)}{(-i\omega)^{1/2}},
\]
the asymptotic method with 12 function evaluations is
\[
\begin{aligned}
\mathcal{Q}_\omega^{[2,3]} = {}& \frac{\pi^{1/2}\, \mathrm{erf}\big((-i\omega)^{1/2}\big)}{(-i\omega)^{1/2}} \left[ f_0(0) + \frac{1}{-i\omega} f_1(0) + \frac{1}{(-i\omega)^2} f_2(0) + \frac{1}{(-i\omega)^3} f_3(0) \right] \\
& - \frac{1}{-i\omega} \left[ \frac{e^{i\omega g(1)}}{g'(1)} [f_0(1) - f_0(0)] - \frac{e^{i\omega g(-1)}}{g'(-1)} [f_0(-1) - f_0(0)] \right] \\
& - \frac{1}{(-i\omega)^2} \left[ \frac{e^{i\omega g(1)}}{g'(1)} [f_1(1) - f_1(0)] - \frac{e^{i\omega g(-1)}}{g'(-1)} [f_1(-1) - f_1(0)] \right] \\
& - \frac{1}{(-i\omega)^3} \left[ \frac{e^{i\omega g(1)}}{g'(1)} [f_2(1) - f_2(0)] - \frac{e^{i\omega g(-1)}}{g'(-1)} [f_2(-1) - f_2(0)] \right],
\end{aligned}
\]

where
\[
f_0(x) = f(x), \qquad f_{m+1}(x) = \frac{\mathrm{d}}{\mathrm{d} x}\, \frac{f_m(x) - f_m(0)}{g'(x)}, \qquad m = 0, 1, 2.
\]
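The recursion lends itself to a computer algebra system; a small sympy sketch (not from the text) follows. The limits at $x = 0$ exist even though each quotient is formally $0/0$ there; the computation may be slow for larger $m$.

```python
import sympy as sp

x = sp.symbols('x')
f0 = sp.cos(x + sp.pi * x**2)
g = x**2
fm = [f0]
for m in range(3):                     # build f_1, f_2, f_3 by the recursion
    prev = fm[-1]
    fm.append(sp.diff((prev - prev.subs(x, 0)) / sp.diff(g, x), x))
for m, fn in enumerate(fm):
    print(m, sp.limit(fn, x, 0))       # the values f_m(0) entering the expansion
```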

Recall that $f_m(x)$ is a linear combination of $f^{(i)}(x)$, $i = 0,\dots,m$, for $x \ne 0$, while $f_m(0)$ is a linear combination of $f^{(i)}(0)$, $i = 0,\dots,2m$. Therefore, $\mathcal{Q}_\omega^{[2,3]}$ indeed requires 12 function evaluations. We display in Fig. 7.8 the logarithm of the error of $\mathcal{Q}_\omega^{[2,3]}$. Predictably, for small $\omega \ge 0$ the method is useless, but once $\omega$ grows, so does the accuracy—and it does so fairly rapidly, like $\mathcal{O}(\omega^{-4})$.



Figure 7.8. The errors in logarithmic scale approximating $I_\omega^{[2]}$ with the asymptotic method $\mathcal{Q}_\omega^{[2,3]}$ for $0 \le \omega \le 50$ (on the left) and $0 \le \omega \le 400$ (on the right).

Filon–Jacobi

In an extended Filon setting, the 12 function evaluations can be parceled out in three different ways:

1. Three function evaluations each at $x = \pm 1$ and six at the stationary point $x = 0$—this is the plain-vanilla Filon method.

2. Two function evaluations each at $x = \pm 1$, four at $x = 0$, and four internal nodes.

3. A single function evaluation at each endpoint, two at the stationary point, and eight internal nodes.

In line with §4.2.3, in case 2 we choose the internal nodes $\pm\xi_1$ and $\pm\xi_2$, where $\xi_1, \xi_2$ are the zeros of $P_2^{(2,4)}(2x-1)$, while in case 3 the internal nodes are $\pm\xi_k$, $k = 1,\dots,4$, where $\xi_1,\dots,\xi_4$ are the zeros of $P_4^{(1,2)}(2x-1)$ (a short sketch of generating these nodes follows below).

The error in the three cases is displayed in Fig. 7.9. It is clear that for small $\omega \ge 0$ case 3, with the greatest number of internal nodes, is the clear winner, while for $\omega \gg 1$ plain-vanilla Filon, with its superior asymptotic order, provides the best solution, of quality similar to the asymptotic method in Fig. 7.8.
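Assuming the $(\alpha,\beta)$ readings above, the internal nodes are easily generated with scipy; a sketch for case 2 (`roots_jacobi` returns the zeros of $P_n^{(\alpha,\beta)}$ on $[-1,1]$):

```python
import numpy as np
from scipy.special import roots_jacobi

xi, _ = roots_jacobi(2, 2, 4)                # zeros of P_2^(2,4) on [-1, 1]
xi = (xi + 1) / 2                            # zeros of P_2^(2,4)(2x - 1) on [0, 1]
nodes = np.sort(np.concatenate((-xi, xi)))   # the internal nodes +-xi_1, +-xi_2
print(nodes)
```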

Filon–Clenshaw–Curtis

The only difference between Filon–Jacobi and Filon–Clenshaw–Curtis is in the choice of the internal nodes. In the latter case they are of the form $\pm\xi$, where $\xi$ is a Chebyshev point mapped to $[0,1]$—in other words, a zero of the Chebyshev polynomial of the second kind $U_s(2x-1)$, where $s = 2$ for case 2 and $s = 4$ for case 3. Fig. 7.10 and its comparison with Fig. 7.9 demonstrate that, while for small $\omega \ge 0$ Filon–Jacobi is somewhat better, for large $\omega$ the two approaches are for all intents and purposes indistinguishable.

Levin and Filon–Olver

The standard Levin method does not work in the presence of stationary points, but this can be remedied by using the Levin-like method (3.3.19) from §3.3.3, pioneered by Sheehan Olver.



Figure 7.9. The errors in logarithmic scale approximating $I_\omega^{[2]}$ with the three Filon–Jacobi methods, $0 \le \omega \le 50$ (on the left) and $0 \le \omega \le 400$ (on the right). The plain Filon method is in blue, case 2 in dark blue, and case 3 in slate blue.

Figure 7.10. The errors in logarithmic scale approximating $I_\omega^{[2]}$ with the Filon–Clenshaw–Curtis method, $0 \le \omega \le 50$ (on the left) and $0 \le \omega \le 400$ (on the right). The colors are the same as in Fig. 7.9.

However, for $g(x) = x^2$ Filon–Olver coincides with the Filon method (whether plain vanilla or extended).

Numerical steepest descent

The steepest descent trajectory is composed of four paths: the first, $h_{-1}(p) = -\sqrt{1 + i p}$, descends from $-1$; the second, $h_{0,-}(p) = -\sqrt{i p}$, ascends to the origin, to be followed by $h_{0,+}(p) = \sqrt{i p}$; and, finally, $h_1(p) = \sqrt{1 + i p}$ descends down to $+1$. Following the reasoning in Chapter 5 on stationary points, we combine the two integrals at the stationary point (i.e., those involving $h_{0,-}$ and $h_{0,+}$) into one integral, since together they make up a single straight line in the complex plane at an angle of $\pi/4$.



We can also sum the two endpoint integrals. Altogether,
\[
I_\omega^{[2]} = -\frac{i e^{i\omega}}{2\omega^{1/2}} \int_0^\infty \left[ f\!\left(-\sqrt{1 + \frac{i x}{\omega}}\right) + f\!\left(\sqrt{1 + \frac{i x}{\omega}}\right) \right] \frac{e^{-x}}{(\omega + i x)^{1/2}}\, \mathrm{d} x
 + \frac{i^{1/2}}{\omega^{1/2}} \int_{-\infty}^{\infty} f\!\left( i^{1/2} \frac{x}{\omega^{1/2}} \right) e^{-x^2}\, \mathrm{d} x.
\]

The first integral is computed with standard Gauss–Laguerre quadrature with three quadrature points, which amounts to six function evaluations. The second integral is evaluated with Gauss–Hermite quadrature with six points, so that altogether we have 12 function evaluations.
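A sketch of this 3 + 6 point rule in Python (the function name is ours, and the signs follow the reconstructed formula above):

```python
import numpy as np

def nsd_heat2(omega):
    """Numerical steepest descent for I_omega^[2]: 3-point Gauss-Laguerre
    for the combined endpoint integral, 6-point Gauss-Hermite at the
    stationary point."""
    f = lambda z: np.cos(z + np.pi * z**2)
    xl, wl = np.polynomial.laguerre.laggauss(3)
    r = np.sqrt(1 + 1j * xl / omega)
    ends = -1j * np.exp(1j * omega) / (2 * np.sqrt(omega)) \
           * np.sum(wl * (f(r) + f(-r)) / np.sqrt(omega + 1j * xl))
    xh, wh = np.polynomial.hermite.hermgauss(6)   # weight e^{-x^2} on the real line
    c = np.exp(1j * np.pi / 4)                    # i^{1/2}
    stat = c / np.sqrt(omega) * np.sum(wh * f(c * xh / np.sqrt(omega)))
    return ends + stat

print(nsd_heat2(100.0))
```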

Figure 7.11. The errors in logarithmic scale in approximating $I_\omega^{[2]}$ with numerical steepest descent $\mathcal{Q}_\omega^{\mathrm{NSD}}$ for $0 \le \omega \le 50$ (on the left) and $0 \le \omega \le 400$ (on the right).

Fig. 7.11 reports the errors for numerical steepest descent, and all is consistent with theory. For small $\omega \ge 0$ the solution is useless but, as $\omega$ grows, the accuracy rapidly overtakes that of the asymptotic and Filon-type methods. This, of course, is a reflection of the asymptotic order being doubled.

Complex-valued Gaussian quadrature

The formulation of the complex-valued Gaussian quadrature $\mathcal{Q}_\omega^{\mathrm{CG},12}$ is straightforward. We derive the 12th-degree monic orthogonal polynomial with respect to the bilinear form
\[
\langle f_1, f_2 \rangle_\omega = \int_{-1}^{1} f_1(x) f_2(x)\, e^{i\omega x^2}\, \mathrm{d} x
\]

and compute its zeros $c_1,\dots,c_{12} \in \mathbb{C}$. Then
\[
\mathcal{Q}_\omega^{\mathrm{CG},12}[f] = \sum_{\ell=1}^{12} b_\ell f(c_\ell),
\]



Figure 7.12. The errors in logarithmic scale approximating $I_\omega^{[2]}$ with the complex-valued Gaussian quadrature $\mathcal{Q}_\omega^{\mathrm{CG},12}$ for $0 \le \omega \le 50$ (on the left) and $0 \le \omega \le 400$ (on the right).

where
\[
b_\ell = \int_{-1}^{1} \prod_{\substack{k=1 \\ k \ne \ell}}^{12} \frac{x - c_k}{c_\ell - c_k}\, e^{i\omega x^2}\, \mathrm{d} x, \qquad \ell = 1,\dots,12.
\]

The error of complex-valued Gaussian quadrature is displayed in Fig. 7.12, and it definitely beats all the competition! Its error for small ω ≥ 0 is already minute, and things only get better once ω increases.
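To illustrate the pipeline just described (moments, monic orthogonal polynomial, zeros, weights), here is a sketch in Python. It computes the moments by brute force, although for this weight they are available explicitly; the Hankel system below becomes numerically fragile as $n$ grows, a cost issue taken up in §7.3.

```python
import numpy as np

def complex_gauss(omega, n=12, mquad=2000):
    """Sketch: n-point Gaussian rule for the complex weight e^{i omega x^2}
    on [-1,1]. Moments via brute-force Gauss-Legendre (fine for moderate
    omega); the Hankel solve is ill-conditioned for larger n."""
    t, w = np.polynomial.legendre.leggauss(mquad)
    weight = np.exp(1j * omega * t**2)
    mu = np.array([np.sum(w * t**k * weight) for k in range(2 * n)])
    # monic orthogonal polynomial: <p_n, x^m>_omega = 0 for m = 0..n-1
    H = np.array([[mu[j + m] for j in range(n)] for m in range(n)])
    c = np.linalg.solve(H, -mu[n:2 * n])
    roots = np.roots(np.concatenate(([1.0], c[::-1])))   # zeros of p_n
    # interpolatory weights from sum_l b_l c_l^k = mu_k, k = 0..n-1
    V = np.vander(roots, n, increasing=True).T
    b = np.linalg.solve(V, mu[:n])
    return roots, b

f = lambda x: np.sin(x**2 + np.pi * x)
nodes, b = complex_gauss(50.0)
print(np.sum(b * f(nodes)))
```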

7.3 What is the best method?

The question "What is the best method for highly oscillatory quadrature?" is fairly meaningless unless we define exactly the meaning of "best." In the last two sections we have reduced everything to a competition for greatest accuracy given a fixed number of function and derivative evaluations. This provides some useful information but falls short of determining which method is preferable in a specific situation. "Computation of highly oscillatory integrals" assumes a different meaning subject to the exact purpose of the exercise and the features of the underlying problem:

1. Do you wish to compute a highly oscillatory integral for a single value (or a small number of values) of $\omega$ or for a large number of such values? (The generation of the plots in this chapter is an extreme case of "a large number of such values.")

2. Is the function available in the complex plane or just along the integration interval?

3. Is it more expensive to compute derivatives in comparison to "plain" function values?

4. Do we wish to compute to very high accuracy, or will moderate accuracy do? The latter is the case, for example, when the calculation of highly oscillatory integrals forms a part of a larger numerical computation, and it is clearly pointless to invest resources in achieving high accuracy which will be degraded by other components of the program.



Of these four questions (and one can easily pose more!), arguably the most important is the first, and it provides a handy rule of thumb in the selection of the right method. Essentially, the cost of any method for highly oscillatory integrals has two components, the setup cost and the unit cost. The setup cost $S$ must be spent just once, whether we compute the integral for a single $\omega$ or for a thousand—determining coefficients, computing functions and derivatives (provided that this computation is independent of $\omega$), etc.—while the unit cost $U$ is spent once for every $\omega$. Thus, the total cost of approximating the integral for $n$ values of $\omega$ is $S + nU$.

To flesh out these concepts, consider a plain-vanilla Filon method using just $f$ and its derivative at the endpoints for the integral $\int_{-1}^{1} f(x)\, e^{i\omega x}\, \mathrm{d} x$. The corresponding Hermite interpolating polynomial is

\[
\begin{aligned}
p(x) = {}& \frac{f(-1) + f(1)}{2} + \frac{f'(-1) - f'(1)}{4} - \left[ \frac{3[f(-1) - f(1)]}{4} + \frac{f'(-1) + f'(1)}{4} \right] x \\
& - \frac{f'(-1) - f'(1)}{4}\, x^2 + \left[ \frac{f(-1) - f(1)}{4} + \frac{f'(-1) + f'(1)}{4} \right] x^3.
\end{aligned}
\]

Now we are faced with two options: either we compute directly
\[
\mathcal{Q}_\omega^{\mathrm{F}}[f] = \int_{-1}^{1} p(x)\, e^{i\omega x}\, \mathrm{d} x \tag{7.5}
\]

\[
\begin{aligned}
= {}& -\frac{1}{-i\omega} \left[ e^{i\omega} f(1) - e^{-i\omega} f(-1) \right] - \frac{1}{(-i\omega)^2} \left[ e^{i\omega} f'(1) - e^{-i\omega} f'(-1) \right] \\
& - \frac{1}{(-i\omega)^3} \left\{ -3\cos\omega\, [f(1) - f(-1)] + (2 e^{i\omega} + e^{-i\omega}) f'(1) + (e^{i\omega} + 2 e^{-i\omega}) f'(-1) \right\} \\
& - \frac{3 i \sin\omega}{(-i\omega)^4} \left[ -f(1) + f(-1) + f'(1) + f'(-1) \right]
\end{aligned}
\]

or we rearrange terms,
\[
\mathcal{Q}_\omega^{\mathrm{F}}[f] = b_{-1,0}(\omega) f(-1) + b_{1,0}(\omega) f(1) + b_{-1,1}(\omega) f'(-1) + b_{1,1}(\omega) f'(1), \tag{7.6}
\]

where
\[
\begin{aligned}
b_{-1,0}(\omega) &= \frac{e^{-i\omega}}{-i\omega} - \frac{3\cos\omega}{(-i\omega)^3} - \frac{3 i \sin\omega}{(-i\omega)^4}, &
b_{1,0}(\omega) &= -\frac{e^{i\omega}}{-i\omega} + \frac{3\cos\omega}{(-i\omega)^3} + \frac{3 i \sin\omega}{(-i\omega)^4}, \\
b_{-1,1}(\omega) &= \frac{e^{-i\omega}}{(-i\omega)^2} - \frac{e^{i\omega} + 2 e^{-i\omega}}{(-i\omega)^3} - \frac{3 i \sin\omega}{(-i\omega)^4}, &
b_{1,1}(\omega) &= -\frac{e^{i\omega}}{(-i\omega)^2} - \frac{2 e^{i\omega} + e^{-i\omega}}{(-i\omega)^3} - \frac{3 i \sin\omega}{(-i\omega)^4}.
\end{aligned}
\]

The sum (7.6) may sound familiar in the context of classical quadrature (provided that the latter is a Birkhoff–Hermite quadrature, using both function and derivative values), except that the weights depend upon ω.
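In code, (7.6) splits cleanly into setup and unit cost; a sketch (the weights below are exactly the $b_{\pm 1,0}$, $b_{\pm 1,1}$ above, and the function names are ours):

```python
import numpy as np

def filon_weights(omega):
    """The omega-dependent weights of (7.6): unit cost, recomputed per omega."""
    iw = -1j * omega
    s, c = 3j * np.sin(omega), 3 * np.cos(omega)
    e, em = np.exp(1j * omega), np.exp(-1j * omega)
    bm0 = em / iw - c / iw**3 - s / iw**4
    bp0 = -e / iw + c / iw**3 + s / iw**4
    bm1 = em / iw**2 - (e + 2 * em) / iw**3 - s / iw**4
    bp1 = -e / iw**2 - (2 * e + em) / iw**3 - s / iw**4
    return bm0, bp0, bm1, bp1

def filon(f, df, omega):
    """Plain-vanilla Filon: f, f' at +-1 belong to the setup stage."""
    bm0, bp0, bm1, bp1 = filon_weights(omega)
    return bm0 * f(-1) + bp0 * f(1) + bm1 * df(-1) + bp1 * df(1)

print(filon(np.cos, lambda x: -np.sin(x), 200.0))
```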



The expression (7.5) is probably simpler unless we wish to compute the integral for many different values of the function $f$, in which case (7.6) comes into its own. Either way, all of this (inclusive of the computation of $f(\pm 1)$ and $f'(\pm 1)$) belongs to the setup stage, while each individual computation of the integral for a different value of $\omega$ requires simple and speedy evaluation of either (7.5) or (7.6). Since the setup cost dominates (and this is true for most flavors of Filon: plain-vanilla, Filon–Jacobi, and Filon–Clenshaw–Curtis, as well as for Levin quadrature), the cost of computing a large number of values of $\omega$ is marginal.

Compare this, however, to the adaptive Filon method from §4.4. While the uniform error of this method is smaller, the quid pro quo is that for every individual $\omega$ we have different interpolation conditions and need to compute an equivalent of (7.5) or (7.6) afresh, as well as new function values. The setup cost is virtually nil, but the unit cost is large, and the total cost of the method scales with the number of different values of $\omega$. This makes it very expensive once we wish to compute for a large number of values of $\omega$, something that we already mentioned by the end of Chapter 4.

What about the remaining methods? There is little virtue in the asymptotic method because, using exactly the same information, even plain-vanilla Filon delivers significantly better performance for small $\omega \ge 0$, while the accuracy for $\omega \gg 1$ is roughly the same. As we have already mentioned in Chapter 2, the great virtue of asymptotic expansions is not as a practical numerical method but as a theoretical tool to understand Filon-type and Levin methods. The setup cost of the Levin method is smaller and, accordingly, the unit cost higher than that of a Filon method, because we need to solve a separate linear system for every $\omega$, yet this is a fairly modest expense.

Numerical steepest descent shares with Filon methods the feature of the cost being dominated by setup. This consists of determining the steepest descent path (which, we assume, can be determined explicitly; otherwise its numerical computation also needs to be relegated to the setup cost) and computing the nodes and the weights of Laguerre-type quadratures. The unit cost consists of function evaluations (which, unlike for Filon, depend upon $\omega$ and cannot be relegated to the setup stage) and computation of the quadrature formulas.

Complex-valued Gaussian quadrature, which has emerged as the undisputed winner of Heats 1 and 2, delivering by far the best accuracy (throughout the range $\omega \ge 0$), is also an extreme case of the unit cost dominating and being very high indeed. For each new $\omega$ we have a new bilinear form and hence need to evaluate the requisite orthogonal polynomial, compute its zeros (which are the nodes of the quadrature), compute the weights, and finally compute the quadrature itself. The first two steps, computation of an orthogonal polynomial and evaluation of its zeros, are expensive. As things stand, to compute the orthogonal polynomial we need to evaluate moments and form and solve a complex-valued linear system, while evaluation of its zeros is standard but far from cheap. And all this must be repeated for every $\omega$! Thus, while exceedingly accurate, complex-valued Gaussian quadrature is unacceptably expensive once we wish to compute a large number of highly oscillatory integrals with different values of $\omega$. In other words, a sleek F1 racer does not make a convenient and affordable family car!
Having said this, we need to hedge this statement. First, complex-valued Gaussian quadrature becomes relatively cheap once we wish to compute a highly oscillatory integral for the same value of $\omega$ (and $g$) and for many values of $f$, a problem we did not consider in this section. Second, the theory of complex-valued Gaussian quadrature is very new, and its practical computational aspects have been subjected to exceedingly little scrutiny. No significant effort has been expended to design fast means to compute



orthogonal polynomials with respect to complex-valued measures and of their zeros, but this might well change in the future. All said and done, if the task is to compute a highly oscillatory integral for a large(ish) value of $\omega$, whether for a single $\omega$ or a large number of $\omega$'s, the current state of the art is clear: the best accuracy vs. cost ratio is delivered by numerical steepest descent. Having said this, once moments are explicitly available, Filon-type methods (e.g., Filon–Jacobi) are much simpler to program and implement (no need to determine integration paths!), they deliver better uniform error, and in computations requiring moderate accuracy they might well be the preferred option.

Chapter 8

Variations on the highly oscillatory theme

8.1 Oscillatory integrals in theory and in practice

Until this point, we have mostly treated oscillatory integrals of the form
\[
I[f] = \int_a^b f(x)\, e^{i\omega g(x)}\, \mathrm{d} x \tag{8.1}
\]

as if they always presented themselves this way! If only they would. Instead, each particular problem has its own peculiarities—such is the nature of particular problems—and more often than not it comes with a variation of (8.1) rather than with (8.1) itself. If you came to this book with an oscillatory integral in mind, you may have already been wondering in the previous chapters how to adapt the methods to fit your case. Indeed, that is the task that lies ahead: adapt the methods, yet bear in mind the underlying principles. What are these principles? In this final chapter, let us recapitulate.

Usually, the most efficient numerical method to evaluate an integral comes from a thorough understanding of how to approximate the integrand. A good approximation scheme leads to a good integration scheme, simply by integrating the approximation. Indeed, virtually all traditional numerical integration schemes are based on polynomial or trigonometric approximations. The major quirk of oscillatory integrals is that this is no longer so! The most efficient numerical method for an oscillatory integral comes from a thorough understanding of its asymptotic behavior, and this game follows a very different set of rules. As a case in point, in the absence of stationary points one can replace the function $f$ in (8.1) with any other function $p$ interpolating $f(a)$ and $f(b)$. For any desired accuracy $\epsilon > 0$ there is a minimal frequency $\omega_0$ such that one cannot tell the difference between $I[f]$ and $I[p]$, i.e.,
\[
|I[f] - I[p]| < \epsilon \qquad \forall \omega > \omega_0.
\]
It is immaterial whether $p$ approximates $f$. Though it can't hurt, at large frequency it doesn't help either!

A second insight is that the asymptotic expansion itself is not a suitable numerical tool. There is even no need to compute the actual coefficients of an asymptotic expansion; it is sufficient to know what they depend on. For the integrals in this book asymptotic expansion coefficients depend on function values and derivatives at endpoints, stationary points, and, in higher dimensions, hidden stationary points. By extension, and this will be obvious without elaborating on the precise expansions, we



can add to this list points of discontinuity, branch points, logarithmic or algebraic singularities, and indeed any other point singularities of $f$ and $g$ (or of their derivatives). It helps that knowing what type of data an expansion depends on is significantly simpler than constructing the expansion.

Third, getting the leading-order behavior right is qualitatively the biggest step: it may mean the difference between a numerical method with increasing cost for increasing frequency and a numerical method with a fixed cost for increasing frequency. For interpolatory quadrature, such as Filon-type quadrature, this first step is to realize that endpoints should be included. Depending on the application, spending fixed computational cost for increasing frequency might well be enough to stop oscillatory integrals from being a computational bottleneck in the overall problem. We often went above and beyond this goal, reaching for rapidly improving accuracy with increasing frequency, which we denoted as high asymptotic order of convergence. Yet, we have been generous to ourselves in this book with model integrals with smooth integrands. The luxury of high asymptotic order may be beyond your needs. Still, in showing how to achieve it, may we kindly request that you attempt to push the limits?

Of course there are limits to the story of asymptotics. Inevitably, at some point, or rather at some combination of (moderate) frequency and (high) accuracy, the approximation of $p$ to $f$ does matter. Indeed, this is the whole point of turning to numerical methods rather than asymptotic expansions: we introduce a parameter that can be chosen larger, such that the error goes to zero no matter what the other parameters are. This is an important point. Consider the model integral (8.1) again and a fixed value of $\omega = \omega_1$. Say the function $f$ itself is somewhat oscillatory, not depending on $\omega$, but every bit as oscillatory as $e^{i\omega_1 g(x)}$ is. Clearly in this case the values and derivatives of $f$ at the endpoints alone fall short of conveying as much information about the value of the integral as they do for larger $\omega$. We are in the pre-asymptotic regime. Even without a more formal definition of what that means—and what it is very much depends on the context—it is clear that asymptotic expansions are of little use in a pre-asymptotic regime.

In applications, at some point we must be pragmatic. In a pre-asymptotic regime of oscillatory integrals it may be more efficient to disregard their oscillatory nature and use a more classical approach. If for some value of $\omega = \omega_1$ one needs to resolve $f$ to achieve sufficient accuracy, one might as well resolve $e^{i\omega g(x)}$ along with it. But there are other pragmatic choices to be made. If $g'$ does not vanish but becomes very small, one is confronted with a near stationary point. It might be better to pretend that it is a real one.¹⁹ In asymptotics such problems are treated using uniform expansions that are uniformly valid for a range of another parameter. We have discussed an example of that for two coalescing stationary points in §6.3. Fortunately, such problems revolve around canonical cases with well-identified special functions, and it suffices to study a finite number of them. Quite possibly, numerical methods can be devised for each of these cases. But from a pragmatic viewpoint this is not always worthwhile. For example, the asymptotic analysis of integrals with several nearly coalescing stationary points is very intricate. Yet, this setting means that $g'$ is small in some nontrivial interval, and therefore the integral is not overly oscillatory to begin with. We always have the brute-force evaluation of the integral as a stick behind the door, and in this case the stick is probably the more efficient choice.

¹⁹On that topic, when dealing with analytic functions, perhaps a true stationary point is nearby in the complex plane? Consider deforming the path to include it. Other near events can be equally annoying: endpoints near stationary points, two (or more) stationary points close to each other, a singularity close to an endpoint/stationary point/cluster of stationary points.



It is not our intention, and indeed it is not possible, to present in this chapter a full overview of variations on the theme of (8.1). We proceed by studying two types of oscillatory integrals that are sufficiently different from (8.1) to warrant further investigation, and we hope that this may convey some strategies or ideas to be used in other contexts as well.

8.2 A singular integral with multiple oscillators

Oscillatory integrals arise naturally in mathematical models of wave scattering and propagation. One of the most direct ways they appear is in boundary element methods (BEM). These are numerical methods for integral equation problems, often used in acoustics and electromagnetics. We make an abstraction of the underlying mathematics and formulate a model integral that frequently appears in two-dimensional problems:
\[
I[f] = \int_a^b f(x;\omega)\, H_\nu^{(1)}(\omega g_1(x))\, e^{i\omega g_2(x)}\, \mathrm{d} x. \tag{8.2}
\]

We explain our rationale behind the notation $f(x;\omega)$ further below. The function $H_\nu^{(1)}(z)$ is the Hankel function of the first kind and of order $\nu$ (typically limited to $\nu = 0, 1$). For $\nu = 0$ it has a logarithmic singularity at $z = 0$, while for larger $\nu$ the singularity is algebraic (see [108, (10.7.2), (10.7.7)]), and this imposes conditions on $f$ for the integrand to be integrable.

The Hankel function arises from the Green's function of the Helmholtz equation, which may be used to model acoustic pressure. It is also seen in three-dimensional problems with circular symmetry. The functions $g_1$ and $g_2$ are two distinct oscillator functions. Bear in mind that $H_\nu^{(1)}$ is an oscillatory function too, like the complex exponential $e^{i z}$ that has featured more prominently in this book. Its argument $g_1(x)$ in (8.2) is multiplied by $\omega$; hence this function is indeed as oscillatory as the exponential.

We refer the reader to [22] for an overview of integral equation methods in scattering theory and to [16, 5, 62] and references therein for a number of specific cases where integrals like (8.2) arise. Suffice to say that $f$, $g_1$, and $g_2$ encapsulate geometric information about the scattering problem, as well as the basis functions that were used in the discretization. Their explicit expressions can be lengthy. The integral can be considered to be highly oscillatory only when the interval $[a,b]$ spans many wavelengths, which in a standard boundary element method with compactly supported basis functions is not necessarily the case. Highly oscillatory integrals arise when using oscillatory basis functions, for example, or when incorporating asymptotic information about the solution of the physical problem.

Importantly, in boundary element methods there is not one integral like (8.2) to evaluate but hundreds, thousands, or even more of them in any given simulation. The integrals depend on additional parameters, not shown here, and this makes their efficient evaluation particularly challenging. Each corner case that could potentially arise with integrals of the form (8.2) will actually do so with high probability, for some combination of parameters. If there are two stationary points, sure enough for some value of a parameter they will coalesce. A canonical example is that of a plane wave impinging on and scattered by a smooth obstacle: the direction of grazing incidence, where an incoming straight ray hits the obstacle tangentially, is associated with a degenerate stationary point.



We return to the problem at hand, since many challenges are present in (8.2):

1. The notation $f(x;\omega)$ is meant to reflect that, unlike in any of the integrals considered before in this book, the function $f$ depends on the parameter $\omega$.

2. There are two oscillatory functions in the problem: the Hankel function and the complex exponential. Accordingly there are two oscillators, $g_1$ and $g_2$. What are the critical points?

3. The integral has a logarithmic or algebraic singularity in one of its oscillators, the Hankel function.

We consider each item in this list separately, though as will become clear they are not entirely independent.

8.2.1 Benign dependency on the frequency parameter

What are the consequences of the amplitude of the envelope function $f$ in (8.2) depending on $\omega$? Certainly, the asymptotic expansions we have used in this book for large $\omega$ are no longer valid. Indeed, for all we know $f$ could be $e^{i\omega x}$. If such is the case, ideally we would factor out the oscillatory behavior from $f$ and include its phase into the oscillator $g$. If the oscillatory nature of $f$ is unknown, that presents a huge difficulty: one cannot exploit asymptotic behavior of the integral without knowing the asymptotic behavior of the integrand. Yet, even when this factorization is successful, the remainder may still depend on $\omega$. We expect it to be less severe, though, and one way to make this precise is as follows. Asymptotic expansions can be rescued if we have guarantees that $f$ and its derivatives remain bounded as $\omega$ increases. That is, we call the dependency of $f$ on the parameter $\omega$ benign if it satisfies
\[
f^{(j)}(x;\omega) = \mathcal{O}(1), \qquad \omega \to \infty, \qquad x \in [a,b], \qquad j = 0, 1, \dots. \tag{8.3}
\]

For example, the function $\cos(2 + x/\omega)$ has a benign dependence on $\omega$, but the function $\cos(\omega x + 2)$ does not, since derivatives of the former remain bounded as $\omega$ increases, whereas a derivative of order $j$ of the latter function grows like $\omega^j$, and this is typical of oscillatory functions. More generally, each function that itself has an expansion of the form
\[
f(x;\omega) \sim \sum_{k=0}^{\infty} \frac{a_k(x)}{\omega^k}, \qquad \omega \to \infty,
\]
has only a benign dependence on $\omega$. We note that we have seen an example of such benign dependence before, in Chapter 6, when studying the asymptotics of the polynomials $p_{2N}^\omega(z)$ in a neighborhood of the endpoints $z = \pm 1$.

Asymptotic expansions are typically obtained using integration by parts. Using the condition above, the derivatives of $f$ that appear in this recursive process do not interfere with the asymptotic decay of the terms. Mathematically, it is frequently useful to define oscillatory behavior in terms of growth of derivatives. Extending (8.3), one could also allow for mild growth of the derivatives of $f$ with $\omega$, for example $f^{(j)}(x;\omega) = \mathcal{O}(\omega^{\alpha j})$ with $\alpha < 1$. Subsequent terms in the integration-by-parts process would become larger, but still decay faster than the previous terms. The expected order of, say, a Filon-type method can be adjusted accordingly.



A good example of benign frequency dependence is the oscillator $H_\nu^{(1)}(\omega g_1(x))$ appearing in (8.2). The Hankel function has a known asymptotic expansion for large argument and fixed order $\nu$ [108, (10.17.5)],
\[
H_\nu^{(1)}(z) \sim \left( \frac{2}{\pi z} \right)^{1/2} e^{i(z - \frac{1}{2}\nu\pi - \frac{1}{4}\pi)} \sum_{k=0}^{\infty} \frac{a_k(\nu)}{z^k}, \qquad z \to \infty. \tag{8.4}
\]

The coefficients $a_k(\nu)$ can be explicitly identified but play no further role here. The main benefit of the expansion is that we can factor out the phase and define a new function
\[
\eta(z) = H_\nu^{(1)}(z)\, e^{-i z}. \tag{8.5}
\]
We can thus write
\[
H_\nu^{(1)}(\omega g_1(x)) = \eta(\omega g_1(x))\, e^{i\omega g_1(x)}. \tag{8.6}
\]
Putting together all of the above, we can conclude that $\eta(\omega g_1(x))$ has only a benign dependence on the parameter $\omega$, in the sense that its derivatives remain bounded for increasing $\omega$. An essential condition, though, is that $g_1(x) > 0$ or $g_1(x) < 0$. It is only when this condition is met that the argument of the Hankel function grows large with increasing $\omega$ uniformly in $x$, so that we can invoke its expansion for large argument.
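The factorization is easy to inspect numerically; a small check with scipy (not from the text):

```python
import numpy as np
from scipy.special import hankel1

z = np.linspace(10, 1000, 5)
eta = hankel1(0, z) * np.exp(-1j * z)        # eta(z) of (8.5)
print(np.abs(eta))                            # smooth, slow decay, no oscillation
print(np.abs(eta) * np.sqrt(np.pi * z / 2))   # tends to 1, consistent with (8.4)
```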

8.2.2 Multiple oscillators

The integral (8.2) has two oscillator functions, $g_1$ and $g_2$. What are the critical points of the integral? We have already partially answered that question when we factored out the oscillations from the Hankel function. Indeed, plugging (8.6) into (8.2) we arrive at the more familiar-looking integral
\[
I[f] = \int_a^b f(x;\omega)\, \eta(\omega g_1(x))\, e^{i\omega [g_1(x) + g_2(x)]}\, \mathrm{d} x.
\]
It is now clear that the oscillatory behavior of the integral is actually governed by the combined oscillator
\[
g(x) = g_1(x) + g_2(x). \tag{8.7}
\]
Thus, the stationary points are points $\xi$ for which $g'(\xi) = 0$ or, equivalently, $g_1'(\xi) = -g_2'(\xi)$. The derivatives of $g_1$ and $g_2$ do not matter independently; it is their sum that we should take into account when designing a numerical scheme.

That was simple enough. But wasn't this just a trick? We have greatly benefited from knowing the asymptotic expansion of the Hankel function for large arguments. How general is this? Suppose that instead of $H_\nu^{(1)}(\omega g_1(x))$ we have a function $h(\omega x)$ in our integral, in addition to other oscillators. How can we determine the combined phase of the integral? Is there even a combined phase, in general?

Fortunately, there is some generality that goes beyond the special functions and their properties that are listed in the Digital Library of Mathematical Functions [108]. A broad setting is one where the function $h$ itself is the solution of a differential equation. For many ordinary differential equations one can easily predict the phase of the solution from the so-called Wentzel–Kramers–Brillouin (WKB) approximation or the Liouville–Green method, popular in mathematical physics. We refer the interested reader to [83] for a detailed account of the theory. More recently, efficient



numerical methods for oscillatory differential equations using so-called nonoscillatory phase functions have started to appear, and these may provide a numerical alternative [55]. For other cases, where $h$ does not satisfy a differential equation, one may still be able to find inspiration in the literature on asymptotic expansions. For example, another starting point may be the Mellin transform of $h$, which is extensively explained in [10].

8.2.3 A singular oscillator

We have not yet fully acknowledged the singular behavior of the integral. The singularity is not confined to the function $f$ in (8.2); it is actually contained in the oscillator $H_\nu^{(1)}(\omega g_1(x))$, and that makes a difference. We encounter the singularity at a point $\xi$ where $g_1(\xi) = 0$. In that case, we can no longer employ the expansion of the Hankel function for large argument. Indeed, its argument is exactly zero, regardless of $\omega$. The Hankel function transitions from being singular to being oscillatory, and increasing $\omega$ speeds up the process. Yet, this is a problem for asymptotic analysis more than it is for numerical methods.

Let us consider the simpler integral with $[a,b] = [0,1]$, $\nu = 0$, $g_1(x) = x$, and $g_2(x) \equiv 0$,
\[
\int_0^1 f(x)\, H_0^{(1)}(\omega x)\, \mathrm{d} x. \tag{8.8}
\]
One can use the fact that the Hankel function satisfies a differential equation, similar to the Airy integrals in Chapter 2, §2.5. The Digital Library of Mathematical Functions also offers an alternative. We can recover an oscillatory exponential via an integral representation of the Hankel function [108, (10.9.10)]:
\[
H_0^{(1)}(z) = \frac{1}{\pi i} \int_{-\infty}^{\infty} e^{i z \cosh t}\, \mathrm{d} t. \tag{8.9}
\]

Substituting the above integral representation into (8.8) results in a double integral, but with an exponential oscillator—a multivariate Fourier-type integral. Further analysis results in an asymptotic expansion in inverse powers of ω, though we have traded the special nature of the oscillator for an additional dimension. The list of special functions with integral representations does not encompass all oscillators arising in applications, but there are some other noteworthy examples: the error function, the Airy function, and, more generally, hypergeometric functions. We also refer the reader to [106] for a case study of oscillatory Hilbert transforms involving a strong singularity. Yet, we do not dwell on the asymptotics, and instead focus on the numerics.

8.2.4 Adapting the numerical methods

We return to our core business: evaluating oscillatory integrals, taking their asymptotic behavior into account as best as we can. We set the stage for the general integral (8.2) by considering the simpler integral (8.8) first. There are two ways to design Filon-type quadrature (or, rather, there are at least two ways—perhaps we lack imagination!). The first one is to approximate $f$ by a polynomial $p$ and to integrate the result exactly,



bearing in mind the interpolation of derivatives at $x = 0$ and $x = 1$. This leads to the approximation
\[
\int_0^1 f(x)\, H_0^{(1)}(\omega x)\, \mathrm{d} x \approx \int_0^1 p(x)\, H_0^{(1)}(\omega x)\, \mathrm{d} x.
\]

The second approach is to ignore the complication of the singularity and to extract the phase as in (8.5), but this time up to and including the point of singularity. The approximation is a regular Fourier-type integral,
\[
\int_0^1 f(x)\, H_0^{(1)}(\omega x)\, \mathrm{d} x = \int_0^1 f(x)\, \eta(\omega x)\, e^{i\omega x}\, \mathrm{d} x \approx \int_0^1 p(x)\, e^{i\omega x}\, \mathrm{d} x.
\]

In this case the function $p$ is found by approximating the logarithmically singular but nonoscillatory function $f(x)\eta(\omega x) = f(x) H_0^{(1)}(\omega x)\, e^{-i\omega x}$. Note that one does not need an explicit expression for $\eta(x)$; it can be evaluated from the Hankel function itself.

The second approach seems simpler, because it leads to an exponential oscillator. Yet, this comes at the cost of having to deal with the singularity in $p$, arising from $\eta$, for which a polynomial won't do. Furthermore, we lose asymptotic accuracy: the dependence of the function $\eta(\omega x)$ on $\omega$ is benign on any interval $[a,b]$ with $a, b > 0$, but it is not so on $[0,b]$. For best asymptotic order, the first approach is better. The price to pay is having to evaluate moments that involve the Hankel function. This conclusion extends to the other methods as well: for best asymptotic order, one has to take into account the specific oscillator. Levin-type methods have also been extended to oscillators that satisfy a differential equation, such as Bessel functions [87]. This can be used to recover a differential equation for the indefinite integral, which is subsequently solved using collocation as in Chapter 3.

For numerical steepest descent, there are again two ways to proceed, and they are quite analogous to the Filon case. Both start from decomposing the integral as a sum of two line integrals. Based on the exponential decay of the Hankel function in the upper half of the complex plane, the steepest descent paths are straight lines from the real line upwards into the complex plane. Parameterized by $h_0(p) = i p$ and $h_1(p) = 1 + i p$, this yields
\[
\begin{aligned}
\int_0^1 f(x)\, H_0^{(1)}(\omega x)\, \mathrm{d} x &\sim i \int_0^\infty f(i p)\, H_0^{(1)}(i\omega p)\, \mathrm{d} p - i \int_0^\infty f(1 + i p)\, H_0^{(1)}(\omega(1 + i p))\, \mathrm{d} p \\
&= \frac{i}{\omega} \int_0^\infty f\!\left( \frac{i t}{\omega} \right) H_0^{(1)}(i t)\, \mathrm{d} t - \frac{i}{\omega} \int_0^\infty f\!\left( 1 + \frac{i t}{\omega} \right) H_0^{(1)}(\omega + i t)\, \mathrm{d} t.
\end{aligned}
\]

As before, for simplicity we assume that the integrals can stretch out to infinity. The two approaches now differ in how these integrals are evaluated. The simplest one is to ignore the special nature of the Hankel function and to evaluate the line integrals numerically by a general numerical integration strategy. Since they are exponentially decaying and nonoscillatory, albeit weakly singular in the case of the first integral, the computational cost is limited. One can even factor out the exponential decay of the Hankel function and apply Gauss–Laguerre. This is especially effective for the second integral, since it is not singular: we write $H_0^{(1)}(\omega + i t) = [H_0^{(1)}(\omega + i t)\, e^{t}]\, e^{-t}$. The final factor $e^{-t}$ goes into the weight function of Gauss–Laguerre, and the remainder is neither oscillatory nor exponentially growing or decaying. In fact, from (8.5) we may write $H_0^{(1)}(\omega + i t)\, e^{t} = e^{i\omega} \eta(\omega + i t)$.
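A sketch of this Gauss–Laguerre evaluation of the second line integral, using scipy's scaled Hankel function `hankel1e`, which is precisely $\eta$ (this is the term entering the decomposition above with a minus sign; the function name is ours):

```python
import numpy as np
from scipy.special import hankel1e   # hankel1e(v, z) = H_v^(1)(z) e^{-iz} = eta(z)

def hankel_endpoint_integral(f, omega, n=10):
    """(i/omega) * int_0^inf f(1 + it/omega) H_0^(1)(omega + it) dt,
    evaluated by writing H_0^(1)(omega+it) = e^{i omega} e^{-t} eta(omega+it)
    and applying n-point Gauss-Laguerre to the smooth remainder."""
    t, w = np.polynomial.laguerre.laggauss(n)
    vals = f(1 + 1j * t / omega) * hankel1e(0, omega + 1j * t)
    return 1j / omega * np.exp(1j * omega) * np.sum(w * vals)

print(hankel_endpoint_integral(lambda z: 1 / (1 + z), 100.0))
```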



The second approach is more intriguing. The true spirit of the numerical steepest descent method as presented in this book is to reduce the integral to a sum of canonical integrals, and to evaluate those using specialized quadrature rules. Can we make a rule that incorporates the Hankel function, much like the Gauss–Laguerre rule incorporates exponential decay? The answer is yes, somewhat surprisingly, and it was explored in [4, 110]. One can compute a Gaussian quadrature rule with $n$ points $t_i$ and weights $w_i$ such that
\[
\int_0^\infty t^k H_0^{(1)}(i t)\, \mathrm{d} t = \sum_{i=1}^{n} w_i t_i^k, \qquad k = 0,\dots,2n-1.
\]
Suitably rescaled, this quadrature rule can be used to evaluate the first steepest descent integral above. By Proposition 5.5 in Chapter 5, this approach comes with high asymptotic order of accuracy in $\omega$.

We can take this idea even further. First, by analytic deformation we can move the integration path from the positive imaginary axis onto any straight line that makes a positive angle with the real line. The limit is the positive half line $[0,\infty)$ itself. The integral $\int_0^\infty t^k H_0^{(1)}(t)\, \mathrm{d} t$ is strictly speaking improper, but can be regularized by the above deformation procedure. Going further still, how about oscillators that do not decay in the complex plane? Several examples were explored in [4], including for example the Bessel function $J_\nu$. It increases exponentially in the upper and lower halves of the complex plane, much like $\cos(x)$ and $\sin(x)$ do. Yet, we can formally define orthogonal polynomials, compute their roots, and construct a Gaussian quadrature rule satisfying
\[
\int_0^\infty t^k J_\nu(t)\, \mathrm{d} t = \sum_{i=1}^{n} w_i t_i^k, \qquad k = 0,\dots,2n-1. \tag{8.10}
\]

(Strictly speaking, we need to regularize this integral since, for example, $\int_0^\infty t J_0(t)\, \mathrm{d} t$ is undefined. We defer a discussion of regularization to §8.3.2.) The corresponding roots are shown in Fig. 8.1. This result means that one can evaluate oscillatory integrals involving Bessel functions with optimal asymptotic order of accuracy! A steepest descent path cannot be defined since there is no decay in the complex plane. Instead, the path of integration is implicit in the roots of the orthogonal polynomials. Their distribution for the Bessel case was analyzed in [28].

Having discussed (8.8), let us finally turn to the wonderfully complicated oscillatory integral (8.2). We now understand its sublime complications. Can we also evaluate it numerically? Two approaches have been described in the literature in the context of boundary element methods: Filon-type quadrature and numerical steepest descent. Both approaches were shown to yield a computational cost that is bounded independently of the frequency—a worthy goal, albeit in itself not optimal—yet each comes with its own pitfalls in a successful implementation.

The Filon-type scheme for (8.2) is the Filon–Clenshaw–Curtis scheme, which we have described before in Chapter 4. Its use for boundary element methods is summarized in [16]. The phase of the Hankel function is factored out, and the remaining singularity in the otherwise smoothly varying amplitude $f$ is treated using hp-type grading. Furthermore, in order to avoid having to evaluate complicated moments, a variable substitution is employed that maps the combined oscillator $g_1(x) + g_2(x)$ to



Figure 8.1. The roots of polynomials orthogonal with respect to J0 (x) (on the left) and J1 (x) (on the right) on the positive half line [0, ∞).

the simple linear oscillator $g(x) = x$. Thus, the complexity of the oscillator is also moved into $f$. The FCC method is not asymptotically optimal, and we refer back to our analysis in Chapter 4. Yet, this approach certainly has the advantage of robustness, which in view of the many parameters involved in the actual application is a substantial benefit indeed. On the other hand, the extension to multivariate integrals is a considerable effort: the variable substitution from $g(x,y)$ to $x + y$ has a major impact on the shape of the integration domain, and there are many corner cases to be covered.

The numerical steepest descent method for integrals of the form (8.2) was explored in [62]. It comes with much higher asymptotic order of accuracy, but is not free of worries either. One concerns the computation of the steepest descent path. This is not unlike computing the map from $g(x) = g_1(x) + g_2(x)$ to $x$, since the steepest descent path is intimately related to the inverse of $g$. This map is also computed in the FCC approach, yet there are two main differences. The first is an advantage in favor of steepest descent: with steepest descent it is sufficient to compute the map locally near each critical point, rather than globally on $[a,b]$. The second difference is a disadvantage: steepest descent requires computations in the complex plane and, thus, analytic continuations of all the functions involved. For multivariate integrals, the possible complications multiply in number. Particular headaches in the implementation of the numerical method of steepest descent are due to branch cuts in the complex plane, which may have to be moved away from their standard definitions in order to allow for analytic path deformation, and to stationary points in the complex plane. The latter may move arbitrarily close to the real line, depending on parameters of the problem or on the other integration variables, and thus have to be taken into account.

Still, the conclusion of ongoing efforts seems to be that the numerical method of steepest descent works like a dream for the majority of integrals of the form (8.2), even when extended to two-dimensional integrals. However, it fails spectacularly, and the dream turns into a nightmare, for a small number of cases. Here too, as with FCC, fixing all the possible corner cases that may arise is a considerable effort not to be taken lightly. While that conclusion may sound far from appealing, one should also consider the alternative: a classical numerical integration



method for a bivariate highly oscillatory integral has a computational cost that scales like a whopping $\mathcal{O}(\omega^2)$. It may not be easy or straightforward to get all the corner cases right, but it is certainly possible, and once achieved the benefit is immense.

8.3 Integral transforms

8.3.1 Fourier transforms

We consider another generalization of our model integral, one that the reader might have expected to see sooner: Fourier transforms! Or, for that matter, even more general oscillatory integral transforms. Needless to say, these are ubiquitous in diverse applications, and their efficient evaluation is often an important ingredient in a wider computation. Can we dispense with the Fast Fourier Transform (FFT), now that we have learned about the efficient methods of this book? We could say "Yes, absolutely!", and we could say "No, of course not!" The true answer is "It depends," though the negative response is probably closer to the truth. As before, we do not embark on a systematic study of numerical schemes for integral transforms, but we comment on the issues with specific examples. If the answer is "It depends," our aim is to illustrate what it depends on.

First and foremost, our model integral (8.1) is said to be a Fourier-type integral because it can be seen as a Fourier transform, at least for $g(x) = x$:
\[
F(\omega) = \int_a^b f(x)\, e^{i\omega x}\, \mathrm{d} x = \int_{-\infty}^{\infty} f(x) \chi_{[a,b]}(x)\, e^{i\omega x}\, \mathrm{d} x, \tag{8.11}
\]

where $\chi_{[a,b]}$ is the indicator function of the interval $[a,b]$. More general phases are encountered in harmonic analysis of Fourier integral operators [98]. Needless to say, the function $f \chi_{[a,b]}$ is discontinuous; hence its Fourier transform has a slowly decaying tail: it exhibits the Gibbs phenomenon. The tail is $\mathcal{O}(\omega^{-1})$, and the leading-order coefficient is determined by the jumps at $a$ and $b$. The jumps in this case are precisely the function values $f(a)$ and $f(b)$, and this is in perfect agreement with the asymptotic expansion of the integral that we have based our methods on.

In general, for discontinuous functions the FFT is both costly (requiring $\mathcal{O}(\omega)$ samples) and inaccurate. In sharp contrast, highly oscillatory quadrature is both efficient and accurate. On the other hand, highly oscillatory quadrature methods require knowledge of the precise location of the jumps. Furthermore, as explained in the previous chapter, the best scheme for one particular value of $\omega$ may not be the best scheme for a range of values of $\omega$. The FFT produces results for a whole range of $\omega$ at once.

However, the example (8.11) is rather contrived. In most cases an oscillatory integral on a finite interval $[a,b]$ or on a bounded domain $\Omega$ does not originate in a Fourier transform that involves an indicator function. The setting is usually quite different. What about a genuine Fourier transform? That is, what about integrals on the real line of the form
\[
F(\omega) = \int_{-\infty}^{\infty} f(x)\, e^{2\pi i\omega x}\, \mathrm{d} x?
\]

Compared to the previous example, there is another point to be made. Why would one compute a Fourier transform in the first place? Typically, F (ω) reveals spectral information about f . If f exhibits nontrivial spectral content, it is by definition oscillatory. Hence, our methods may converge only after these oscillations have been resolved. For example, when using Filon-type methods, we would be inclined to add



many points in the interior, as in Chapter 4. It is to be expected that not much can be gained from explicitly taking the oscillations of $e^{i\omega x}$ into account while disregarding those of comparable frequency in $f$. There is the additional problem of truncating the integral to a finite interval. On the other hand, any lack of smoothness of $f$ precludes rapid decay of the Fourier transform: $F(\omega)$ may have a large, slowly decaying tail. This tail is often a nuisance when computing the Fourier transform numerically. Speaking in loose terms, it does not convey more information than the discontinuity that triggered it, and yet it necessitates using large numbers of degrees of freedom for nonoscillatory quadrature. Such dependence on isolated information is precisely the regime where highly oscillatory quadrature would be very effective. It seems that neither the FFT and nonoscillatory quadrature methods nor highly oscillatory quadrature on their own cover the full spectrum.

Here is another example, one that is at least different from the Gibbs phenomenon. It resembles somewhat, but is much simpler than, the case studied in [11] in quantum mechanics. Consider a simple wave packet with a specific frequency $\omega_0$ which is localized around a point $x_0 \in \mathbb{R}$ through an exponentially decaying amplitude,
\[
f(x) = f_0(x)\, e^{-2\pi i\omega_0 x}\, e^{-(x - x_0)^2}.
\]

The function $f_0$ is a smoothly varying function that is superimposed over the exponential decay. This modulated Gaussian has the Fourier transform
\[
F(\omega) = \int_{-\infty}^{\infty} f_0(x)\, e^{2\pi i(\omega - \omega_0) x}\, e^{-(x - x_0)^2}\, \mathrm{d} x.
\]

It could be approximated using the FFT with a judicious choice of parameters, and with truncation based on the exponential decay of the integrand away from $x_0$. Yet, that would entail a potentially large number of function samples in order to resolve the oscillations of $e^{2\pi i(\omega - \omega_0) x}$. We can improve upon this approach by using our knowledge of the integrand. The function $F(\omega)$ clearly peaks at $\omega = \omega_0$ and depends mostly on the behavior of $f_0(x)$ near $x = x_0$.

What would a Filon-type method look like? You guessed it: we approximate $f_0(x)$ using polynomial interpolation at $x = x_0$, including derivatives, and integrate the result exactly. The computation of the moments is an issue, but for $\omega$ near $\omega_0$ the integral is not very oscillatory.

What about numerical steepest descent? Even if $f_0$ is not analytic, the moments of the above-mentioned Filon-type method do yield analytic integrands, and it makes sense to consider this case too. We can write the integrand as $f_s(x)\, e^{i g(x)}$ and incorporate the exponential decay into the phase function $g$ by choosing it to be complex-valued,
\[
g(x) = 2\pi(\omega - \omega_0) x + i(x - x_0)^2.
\]
This phase has a stationary point! Indeed,
\[
g'(x) = 2\pi(\omega - \omega_0) + 2 i(x - x_0) = 0 \iff x = x_0 + \pi i(\omega - \omega_0).
\]
The stationary point lies in the complex plane, away from the real axis, unless $\omega = \omega_0$. The way to proceed with numerical steepest descent is to compute a steepest descent path and employ Gauss–Hermite quadrature.
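A sketch under the additional assumption that $f_0$ is entire, so that the integration path may be shifted through the complex stationary point; completing the square then reduces $F(\omega)$ to a pure Gauss–Hermite form (the function name is ours):

```python
import numpy as np

def wavepacket_transform(f0, x0, omega0, omega, n=8):
    """F(omega) of the modulated Gaussian above: shift the path through the
    complex stationary point x* = x0 + pi*i*(omega - omega0) and apply
    n-point Gauss-Hermite (weight e^{-t^2})."""
    d = omega - omega0
    xs = x0 + np.pi * 1j * d
    t, w = np.polynomial.hermite.hermgauss(n)
    prefac = np.exp(2j * np.pi * d * x0 - np.pi**2 * d**2)
    return prefac * np.sum(w * f0(xs + t))

print(wavepacket_transform(lambda z: 1 + 0.5 * z, 1.0, 3.0, 3.2))
```

Note how the prefactor already exhibits the Gaussian peak of $F(\omega)$ at $\omega = \omega_0$; the quadrature merely resolves the smooth factor $f_0$ along the shifted path.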



These examples do not represent the wealth of cases that may arise when computing Fourier transforms. We have merely wanted to illustrate that highly oscillatory quadrature may become viable once additional information about the function to be transformed is available. Even then, it is more likely to be an additional tool in the toolbox rather than a complete alternative to FFT-based methods. For more general Fourier integral operators with nonlinear phase functions, an FFT-like alternative is butterfly algorithms [14].

8.3.2 Hankel transforms and Sommerfeld integrals in layered media

We single out a final, specific example that exhibits further interesting variations. It also captures the above-mentioned tension between computing a transform for extracting information about a function, and the artifacts of that transform originating in isolated particularities. The former may unavoidably require many degrees of freedom, whereas for the latter one can exploit the underlying sparsity with highly oscillatory quadrature. High oscillation acts much like rain pouring down on a sandy surface for centuries on end: it tirelessly washes away the sand, eroding increasingly denser sediments, until only isolated sculpture-like features remain, each one unique and together characterizing the landscape.

Abstracting away the underlying model, the integral we consider is
\[
u(x) = \int_0^\infty f(k)\, J_\nu(k x)\, k\, \mathrm{d} k. \tag{8.12}
\]

Unless $f(k)$ decays rapidly at infinity this integral is divergent. However, it is understood in the following regularized sense:
\[
\int_0^\infty f(k)\, J_\nu(k x)\, k\, \mathrm{d} k := \lim_{s \to 0^+} \int_0^\infty f(k)\, J_\nu(k x)\, k\, e^{-s k}\, \mathrm{d} k. \tag{8.13}
\]

In this case, one can allow for mild (algebraic) growth of $f$ at infinity. For example, we can define the moments
\[
\int_0^\infty t^m J_\nu(t)\, \mathrm{d} t = \lim_{s \to 0^+} \int_0^\infty t^m J_\nu(t)\, e^{-s t}\, \mathrm{d} t = 2^m\, \frac{\Gamma\!\left( \frac{1 + \nu + m}{2} \right)}{\Gamma\!\left( \frac{1 + \nu - m}{2} \right)}.
\]

Integral (8.12) represents an inverse Hankel transform, involving the Bessel function $J_\nu$ again, and it arises in seismic imaging in the computation of the Green function in layered media [44]. For the context of the underlying computational problem, one can think of the propagation of (elastic) pressure waves in a soil that contains several planar layers of sediment. The very same integral appears in a very different context, yet also involving layered media, in electromagnetics. Here, it is commonly referred to as a Sommerfeld integral [96, 17]. One can think of a multilayered antenna, where the computational problem is to analyze its radiation pattern.

The function $f$ is typically not known in closed form, and may even be the outcome of a (large) computation for each value of $k$. In both contexts, the Green function for the layered medium is most easily computed in the frequency domain. The connection to layered media arises in that one starts from the Green function for the half space, a single layer. The transform (8.12) represents the inverse Fourier transform that is used to obtain values of the Green function in the time domain. The appearance of the Hankel function is due to using cylindrical coordinates in three dimensions.

There exists a wealth of literature on the efficient evaluation of integrals like (8.12). We refer the reader to [79] (and references therein) for a recent review of numerical



evaluation methods for Sommerfeld integrals. We do not claim that the material below is new: indeed, many ideas have surfaced in this context in the past four decades, including both Filon quadrature [43]—albeit more in the spirit of the original Filon method than in our Filon-type class of methods—and complex plane methods. We will only make the modest claim that our point of view on the asymptotic behavior of oscillatory integrals is not standard in this context.

What are the numerical issues we are facing? The first one is immediate: what is the frequency parameter? There are two candidates, as the argument of the oscillatory Bessel function $J_\nu$ is the product of $k$ and $x$. The variable $k$ represents a physical frequency, but differs from our usual variable $\omega$ in at least two ways. First, $k$ is the integration variable and $\omega$ never was. Second, $\omega$ was large, but finite, whereas $k$ actually tends to infinity along the way. The second candidate is the variable $x$: when $x$ is large, the integrand is again highly oscillatory except at the left endpoint $k = 0$. This is reminiscent of the singularity earlier in this chapter, in §8.2.3.

The case of large $x$ is simplest: one can develop the integral into an asymptotic expansion involving inverse powers of $x$, and this shows rapidly that the only contributing point is the left endpoint $k = 0$. The variable $k$ ranges from 0 to infinity, and the most straightforward way to proceed is to split the integral into two parts:
\[
I[f] = \int_0^\infty f(k)\, J_\nu(k x)\, k\, \mathrm{d} k = \int_0^a f(k)\, J_\nu(k x)\, k\, \mathrm{d} k + \int_a^\infty f(k)\, J_\nu(k x)\, k\, \mathrm{d} k
\]

= I1 [ f ] + I2 [ f ]. Here, a > 0 is a constant to be chosen. We will call the second integral, I2 [ f ], the tail of the transform. How does one choose a? The function f is often analytic but typically has poles and other singularities in the upper or lower half of the complex plane. These poles have a physical interpretation. Interestingly, from physical considerations, the maximal real part of the poles is sometimes known (based on wave speeds and the thickness of the layers, for example). That leads to a natural choice for the constant a. We may have to be careful deforming the integral I1 [ f ] into the complex plane, but for the integral I2 [ f ] it is safe to do so as long as we maintain a real part larger than a. Another view of a is implicit in the name tail for I2 : we can regard I1 [ f ] as capturing the spectral content of f , and I2 [ f ] as being the tail determined by much less information. From this point of view, we choose a such that I2 [ f ] is well approximated using highly oscillatory quadrature. Still from that point of view, the value of a could be smaller if x grows larger. However, letting a depend on x has practical considerations, as the resulting quadrature points will now depend on x. Recall that evaluating f at a point may be a costly operation and we ideally want to share function values for different values of x. Let us consider Filon-type quadrature again. The integral I1 [ f ] is like integral (8.8) and many of the same considerations apply. We can formally write  I1 [ f ] =



a 0

f (k) Jν (k x) k dk ≈ I1 [ p] =

a 0

p(k) Jν (k x) k dk.
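By way of illustration (this sketch is ours, not part of the original discussion), the split can be realized directly with SciPy: I_1[f] by ordinary adaptive quadrature on [0, a], and the tail I_2[f] by summing the contributions between consecutive zeros of J_\nu(kx), which alternate in sign and decay. The model data f(k) = e^{-k}, \nu = 0, x = 50, and a = 1 are arbitrary choices for the demonstration.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import jv, jn_zeros

    nu, x, a = 0, 50.0, 1.0               # illustrative parameters only
    f = lambda k: np.exp(-k)              # smooth, decaying stand-in for f
    integrand = lambda k: f(k) * jv(nu, k * x) * k

    # I1: adaptive quadrature on [0, a]; the integrand oscillates only
    # moderately there.
    I1, _ = quad(integrand, 0, a, limit=200)

    # I2: integrate between consecutive zeros k = j_{nu,m}/x of J_nu(kx);
    # the resulting terms alternate and decay, so the sum converges.
    zeros = jn_zeros(nu, 200) / x
    pts = [a] + [z for z in zeros if z > a]
    I2 = sum(quad(integrand, lo, hi, limit=100)[0]
             for lo, hi in zip(pts[:-1], pts[1:]))

    print(I1 + I2)

This brute-force treatment of the tail only serves to produce a reference value; the Filon and steepest descent approaches discussed next avoid the term-by-term summation altogether.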

Is there a way around resolving f on [0, a]? Probably not, unless x happens to be very large, in which case f at k = 0 matters more than other values. The advantage of, say, a Chebyshev approximation of f on [0, a] is that it can be reused for all x.
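A minimal sketch of this reuse (again our own illustration, with names of our choosing): f is interpolated once by a Chebyshev polynomial on [0, a], and the same interpolant p then serves for every value of x. For simplicity, I_1[p] is evaluated below by adaptive quadrature; a genuine Filon implementation would instead combine the Chebyshev coefficients with precomputed moments of the Bessel oscillator.

    import numpy as np
    from numpy.polynomial import chebyshev as C
    from scipy.integrate import quad
    from scipy.special import jv

    nu, a, deg = 0, 1.0, 20
    f = lambda k: np.exp(-k)

    # Chebyshev interpolant of f on [0, a], computed once and for all x
    p = C.Chebyshev.interpolate(f, deg, domain=[0, a])

    def I1(x):
        # I1[p], evaluated here by brute force purely for illustration;
        # a Filon rule would use moments of J_nu(kx) k instead
        return quad(lambda k: p(k) * jv(nu, k * x) * k, 0, a, limit=400)[0]

    print(I1(50.0), I1(100.0))            # one interpolant, many frequencies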


For the second integral, the tail, the case is different. We proceed in the same way formally:

    I_2[f] = \int_a^\infty f(k)\, J_\nu(kx)\, k \,dk \approx I_2[p] = \int_a^\infty p(k)\, J_\nu(kx)\, k \,dk.

However, in this case one expects high asymptotic order of accuracy if p interpolates derivatives of f at k = a,

    f^{(j)}(a) = p^{(j)}(a),   j = 0, \dots, s-1.

The evaluation of moments of the Bessel function can be avoided by extracting its phase—recall our discussion of the closely related Hankel function in this chapter, in §2.4.

Finally, we turn once more to numerical steepest descent. Path deformation for I_1[f] is a delicate affair, in view of a possible minefield of poles in the complex plane. The poles can be located and residues can be computed, but this usually comes with substantial computational overhead. The case for I_2[f] is easier to make. Let us write

    I_2[f] = \frac12 \int_a^\infty f(k)\, H_\nu^{(1)}(kx)\, k \,dk + \frac12 \int_a^\infty f(k)\, H_\nu^{(2)}(kx)\, k \,dk,

based on J_\nu(z) = \frac12 [H_\nu^{(1)}(z) + H_\nu^{(2)}(z)]. The Hankel function of the first kind decays exponentially in the upper half of the complex plane, while H_\nu^{(2)}(z) does so in the lower half. Hence, we find ourselves in the setting of §2.4 again, and we integrate along the line parallel to the imaginary axis through k = a, upwards and downwards, respectively. Fortunately, this particular path deformation agrees perfectly with the regularization of the integral defined in (8.13).
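The following sketch (our own, under the assumptions that f is analytic, pole-free, and bounded in the half plane Re k \geq a) carries out both path deformations with Gauss–Laguerre quadrature. After the substitution k = a \pm is/x, the factors e^{\pm ikx} that are stripped off by SciPy's scaled routines hankel1e and hankel2e leave behind exactly the Laguerre weight e^{-s}.

    import numpy as np
    from numpy.polynomial.laguerre import laggauss
    from scipy.special import hankel1e, hankel2e

    nu, a = 0, 1.0
    f = lambda k: np.exp(-k)        # assumed analytic for Re k >= a

    s, w = laggauss(30)             # nodes/weights for the weight e^{-s}

    def I2(x):
        # upward path k = a + i s/x for H^(1), downward for H^(2);
        # hankel1e/hankel2e remove e^{ikx} and e^{-ikx}, respectively
        ku = a + 1j * s / x
        kd = a - 1j * s / x
        T1 = (1j / (2 * x)) * np.exp(1j * a * x) * np.sum(
            w * f(ku) * hankel1e(nu, a * x + 1j * s) * ku)
        T2 = (-1j / (2 * x)) * np.exp(-1j * a * x) * np.sum(
            w * f(kd) * hankel2e(nu, a * x - 1j * s) * kd)
        return (T1 + T2).real       # imaginary parts cancel for real f

    print(I2(50.0))

For f real on the real axis the two contributions are complex conjugates, so their sum is real up to rounding, and the accuracy of the rule improves as x grows.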

8.4 Postscript

The reader will note that, compared to the previous chapters, in this one we have painted a picture that was at times incomplete, tentative, or even speculative. We did not present a single, plug-and-play, black-box numerical evaluation scheme for all the oscillatory integrals we are ever to encounter in our scientific lives. That is because, unfortunately, no such scheme exists. Time and again, as elsewhere in mathematics, the idea of low-hanging fruit, a newly developed toolbox that sorts out effortlessly and forever all relevant problems, turns out to be a pipe dream. The field of numerical methods for oscillatory integrals is—this book notwithstanding—still very much open, not least because the universe of highly oscillatory integrals is so huge, rich, and varied. Challenges await the brave souls willing to venture into this fascinating world, and we hope that this volume will form a part of their mathematical GPS system along this journey.

Appendix A

The theory of orthogonal polynomials with complex measure

A.1 Orthogonal polynomials on the real line

The general theory of orthogonal polynomials is a mature area in mathematics, with many standard references available. Some classical texts on (among other topics) orthogonal polynomials on the real line are those of Szegő [99], Chihara [18], Ismail [68], and Gautschi [49]. In this appendix we outline the elements of the general theory that we need to use (or extend) in the context of highly oscillatory quadrature.

A.1.1 Construction and the three-term recurrence relation

Suppose that we have an integral of the form

    I[f] = \int_a^b f(x)\, w(x) \,dx,

where f is for the time being a smooth function and w(x) is a positive weight function^20 such that the moments

    \mu_n = \int_a^b x^n w(x) \,dx,   n \geq 0,   (A.1)

are finite. Of course, this condition is equivalent to saying that the integral of any polynomial times the weight function is finite. The so-called classical orthogonal polynomials on the real line are named after Jacobi, Laguerre, and Hermite, and the weight functions are the following:

    Family     Weight                        Interval            Conditions
    Jacobi     (1-x)^\alpha (1+x)^\beta      (-1, 1)             \alpha, \beta > -1
    Laguerre   x^\alpha e^{-x}               (0, \infty)         \alpha > -1
    Hermite    e^{-x^2}                      (-\infty, \infty)

^20 It is possible to develop the theory in a more general setting, using a Borel measure d\mu(x) (alternatively, formal positive-definite functionals in a Hilbert space), but here we do not require this degree of generality.


Associated with a weight function w(x), we consider the Hilbert space L^2([a, b], w), with the inner product

    \langle f, g \rangle = \int_a^b f(x) g(x) w(x) \,dx.   (A.2)

It is clear that \|f\| = \langle f, f \rangle^{1/2} defines a norm, provided that the weight function is nonnegative and not identically zero on [a, b]. Applying the standard Gram–Schmidt procedure to the canonical basis of monomials, one can generate a basis of polynomials \{p_n(x)\}_{n \geq 0} that are orthogonal in the sense that

    \langle p_n, p_m \rangle = \int_a^b p_n(x) p_m(x) w(x) \,dx = 0,   n \neq m.   (A.3)

We need to choose a normalization for the set of orthogonal polynomials. For instance, we can take p_n(x) to be monic, that is, p_n(x) = x^n + \cdots, and in this case (A.3) can be written as

    \langle p_n, p_m \rangle = \int_a^b p_n(x) p_m(x) w(x) \,dx = \int_a^b p_n(x) x^m w(x) \,dx = 0,   m = 0, 1, \dots, n-1,   (A.4)

and is complemented with

    \langle p_n, p_n \rangle = \int_a^b p_n^2(x) w(x) \,dx = \|p_n\|^2 = h_n \neq 0,   n \geq 0.   (A.5)

We call the set \{p_n(x)\}_{n \geq 0} defined by the conditions (A.3) and (A.5) an orthogonal polynomial sequence (OPS) with respect to the weight function w(x). Alternatively, we can take orthonormal polynomials, so that

    \langle \pi_n, \pi_m \rangle = \delta_{mn},   n \geq 0,   (A.6)

where \delta_{mn} is the Kronecker delta (equal to 1 if n = m and 0 otherwise) and \pi_n(x) has a positive leading coefficient. Clearly, this process can always be carried out provided that the weight function w(x) is positive, but it is not the most efficient way to calculate p_n, since it involves the calculation (exact or numerical) of all the integrals. In this sense, a fundamental property that follows directly from the orthogonality property (A.3) is that the polynomials p_n satisfy a three-term recurrence relation,

    x p_n(x) = p_{n+1}(x) + \alpha_n p_n(x) + \beta_n p_{n-1}(x),   (A.7)

which we write here in monic form, i.e., p_n(x) = x^n + \cdots. The initial data are typically given as p_{-1} \equiv 0 and p_0 \equiv 1. It is not difficult to verify that the recurrence coefficients in (A.7) can be expressed in terms of the inner product \langle \cdot, \cdot \rangle:

    \alpha_n = \frac{\langle x p_n, p_n \rangle}{\langle p_n, p_n \rangle},   n \geq 0,   \qquad   \beta_n = \frac{\langle p_n, p_n \rangle}{\langle p_{n-1}, p_{n-1} \rangle},   n \geq 1.   (A.8)
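As a numerical illustration of (A.7)–(A.8) (a sketch of ours), the Stieltjes-type procedure below generates the monic polynomials and their recurrence coefficients for the Legendre weight w \equiv 1 on [-1, 1], for which the exact values are \alpha_n = 0 and \beta_n = n^2/(4n^2 - 1).

    import numpy as np
    from scipy.integrate import quad

    a, b = -1.0, 1.0
    w = lambda x: 1.0                      # Legendre weight, for illustration

    def inner(u, v):
        return quad(lambda x: u(x) * v(x) * w(x), a, b)[0]

    # monic p_{-1} = 0 and p_0 = 1, advanced through (A.7) with (A.8)
    P = np.polynomial.Polynomial
    X = P([0.0, 1.0])
    polys = [P([0.0]), P([1.0])]
    alphas, betas = [], []
    for n in range(5):
        pn, pnm1 = polys[-1], polys[-2]
        alpha = inner(X * pn, pn) / inner(pn, pn)
        beta = inner(pn, pn) / inner(pnm1, pnm1) if n > 0 else 0.0
        alphas.append(alpha)
        betas.append(beta)
        polys.append(X * pn - alpha * pn - beta * pnm1)

    print(np.round(alphas, 12))            # ~ [0, 0, 0, 0, 0]
    print(betas[1:])                       # ~ [1/3, 4/15, 9/35, 16/63]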

An alternative and useful way to write this recurrence relation is in matrix form,

    x \mathbf{p}_n(x) = J_n \mathbf{p}_n(x) + p_n(x)\, \mathbf{e}_n,   (A.9)

where \mathbf{e}_n = [0, \dots, 0, 1]^\top and \mathbf{p}_n(x) = [p_0(x), p_1(x), \dots, p_{n-1}(x)]^\top is the vector of polynomials, while

    J_n = \begin{bmatrix}
      \alpha_0 & 1        &          &          &              \\
      \beta_1  & \alpha_1 & 1        &          &              \\
               & \beta_2  & \alpha_2 & \ddots   &              \\
               &          & \ddots   & \ddots   & 1            \\
               &          &          & \beta_{n-1} & \alpha_{n-1}
    \end{bmatrix}   (A.10)

is a (tridiagonal) Jacobi matrix. Furthermore, using the orthonormal polynomials \pi_n(x) instead of the monic ones, it is possible to make this matrix symmetric.

A.1.2 Hankel determinants

The existence of the sequence of orthogonal polynomials can be proved using Hankel determinants that are constructed from the moments of the weight function, given by the formulas (A.1),

    D_n = \det \begin{bmatrix} \mu_0 & \mu_1 & \cdots & \mu_n \\ \mu_1 & \mu_2 & \cdots & \mu_{n+1} \\ \vdots & \vdots & & \vdots \\ \mu_n & \mu_{n+1} & \cdots & \mu_{2n} \end{bmatrix} = \det [\mu_{j+k}]_{j,k=0}^n,   n \geq 0.

A classical formula (see, for instance, [68, §2.1]) gives p_n explicitly in terms of this determinant,

    p_n(x) = \frac{1}{D_{n-1}} \det \begin{bmatrix} \mu_0 & \mu_1 & \cdots & \mu_n \\ \mu_1 & \mu_2 & \cdots & \mu_{n+1} \\ \vdots & \vdots & & \vdots \\ \mu_{n-1} & \mu_n & \cdots & \mu_{2n-1} \\ 1 & x & \cdots & x^n \end{bmatrix} = x^n + \cdots,   (A.11)

and a direct consequence is the following standard existence result.

Theorem A.1 ([18, Theorem 3.1]). Given a weight function w \geq 0 with moments \mu_k, k \geq 0, a necessary and sufficient condition for the existence of an OPS with respect to w is that D_n \neq 0 for n \geq 0.

We also remark that the recurrence coefficients can be written in terms of Hankel determinants. Specifically,

    \alpha_n = \frac{\tilde D_n}{D_n} - \frac{\tilde D_{n-1}}{D_{n-1}},   \qquad   \beta_n = \frac{D_n D_{n-2}}{D_{n-1}^2},   (A.12)

where

    \tilde D_n = \det \begin{bmatrix} \mu_0 & \cdots & \mu_{n-1} & \mu_{n+1} \\ \mu_1 & \cdots & \mu_n & \mu_{n+2} \\ \vdots & & \vdots & \vdots \\ \mu_n & \cdots & \mu_{2n-1} & \mu_{2n+1} \end{bmatrix},   n \geq 1.

The formula for \alpha_n follows directly from (A.11), since

    p_n(x) = x^n + \sigma_{n,n-1} x^{n-1} + \cdots,   \qquad   \sigma_{n,n-1} = -\frac{\tilde D_{n-1}}{D_{n-1}},

and comparing coefficients of x^n in the recurrence relation we obtain \alpha_n = \sigma_{n,n-1} - \sigma_{n+1,n}. Next, h_{n-1} \beta_n = h_n, using (A.5) and (A.8), and since

    h_n = \frac{D_n}{D_{n-1}},   (A.13)

we obtain the formula for \beta_n in terms of the Hankel determinants. Again, these formulas are not the most convenient if we have to calculate the recurrence coefficients for a (nonclassical) weight function, since they involve potentially large Hankel determinants, but they are theoretically important.
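By way of a sanity check (our own sketch, and, as just remarked, not a recommended algorithm for large n), (A.12) can be evaluated directly from the moments. For the Legendre weight, \mu_k = 2/(k+1) for even k and \mu_k = 0 for odd k, and the output reproduces \alpha_3 = 0 and \beta_3 = 9/35.

    import numpy as np

    mu = lambda k: 2.0 / (k + 1) if k % 2 == 0 else 0.0   # Legendre moments

    def D(n):
        # Hankel determinant D_n = det [mu_{j+k}], 0 <= j, k <= n
        return np.linalg.det(np.array([[mu(j + k) for k in range(n + 1)]
                                       for j in range(n + 1)]))

    def Dtilde(n):
        # as D_n, but with the last column shifted: mu_0..mu_{n-1}, mu_{n+1}
        cols = list(range(n)) + [n + 1]
        return np.linalg.det(np.array([[mu(j + c) for c in cols]
                                       for j in range(n + 1)]))

    n = 3
    print(Dtilde(n) / D(n) - Dtilde(n - 1) / D(n - 1))    # alpha_3 ~ 0
    print(D(n) * D(n - 2) / D(n - 1) ** 2)                # beta_3 ~ 9/35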

A.1.3 Gaussian quadrature

The classical idea of numerical quadrature of univariate integrals is to construct an approximation for an integral (A.2) as a weighted sum,

    I[f] = \int_a^b f(x) w(x) \,dx = \sum_{k=1}^n w_{n,k} f(x_{n,k}) + R_n[f] = Q_n[f] + R_n[f],   (A.14)

with nodes x_{n,k}, weights w_{n,k}, and an error term R_n[f] that depends both on the properties of the function f(x) and on the choice of nodes and weights. A standard criterion for this choice is to require that the quadrature rule Q_n[f] integrate exactly polynomials of degree as high as possible, in the following sense.

Definition A.2. The quadrature rule (A.14) has degree of exactness equal to d if I[q] = Q_n[q] (or, equivalently, if R_n[q] = 0) for any polynomial q(x) of degree \leq d. (Often we designate d + 1 as the order of the quadrature rule.)

This requirement is reasonably justified by the well-known theorem of Weierstrass regarding the density of polynomials in the space of continuous functions on [a, b]; therefore, we hope for a good approximation of I[f] by requiring that the numerical scheme perform well for polynomials.^21 The crucial twist when imposing a maximum degree of exactness in a quadrature rule is that this requirement brings us to the world of orthogonal polynomials.

Theorem A.3 ([49, Theorem 1.45]). Given an integer k with 0 \leq k \leq n, the quadrature rule (A.14) has degree of exactness equal to d = n - 1 + k if and only if both of the following conditions are satisfied:

1. The quadrature is interpolatory; i.e., it is obtained by integrating the Lagrange interpolating formula

    f(x) = \sum_{k=1}^n f(x_{n,k}) \ell_{n,k}(x) + r_n.

^21 This is by no means the only possible choice. Depending on the structure of the integrand, rational functions, for instance, might offer a good alternative.


2. The nodal polynomial

    p_n(x) = \prod_{k=1}^n (x - x_{n,k})

satisfies

    \int_a^b p_n(x) q(x) w(x) \,dx = 0

for any polynomial q(x) of degree \leq k - 1.

As a direct consequence, the degree of exactness can be maximized by taking k = n, so d = 2n - 1, and this is achieved when p_n(x) is the nth orthogonal polynomial with respect to the weight function w(x), consistently with our definition (A.3). This case is referred to as Gaussian (or Gauss–Christoffel) quadrature. It is not difficult to check that d = 2n - 1 is optimal, in the sense that there exists a polynomial q_{2n}(x) of degree 2n for which I[q_{2n}] \neq Q_n[q_{2n}]. One such choice is [p_n(x)]^2, since the integral of this polynomial times the weight function is strictly positive but the quadrature rule yields zero.

The quadrature weights w_{n,k}, or Christoffel numbers, can be written in different ways, for instance

    w_{n,k} = \int_a^b \ell_{n,k}(x) w(x) \,dx,

applying the Lagrange interpolation formula directly. Again, this formula is not particularly suited to numerical evaluation, but there are alternative formulations related to orthogonal polynomials, for example

    w_{n,k} = \Bigl( \sum_{j=0}^{n-1} \pi_j^2(x_{n,k}) \Bigr)^{-1},   (A.15)

where \pi_n(x) is the nth orthonormal polynomial given by (A.6).

An extra important property of Gaussian nodes and weights is the following.

Theorem A.4 ([49, Theorem 1.46]). If the weight function w(x) is positive in [a, b], then the nodes \{x_{n,k}\}_{k=1}^n of Gaussian quadrature are pairwise distinct and they are contained in the interior of the interval [a, b]. Furthermore, the quadrature weights \{w_{n,k}\}_{k=1}^n are positive.

This result is very important for numerical reasons as well, since it gives stability of the quadrature rule for positive functions; additionally, iterative methods like Newton or fixed-point iteration can be used to compute the zeros of the orthogonal polynomials without the issues associated with multiple or complex roots of polynomials.
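As a quick numerical confirmation of the optimality of d = 2n - 1 (our own check, for the Legendre weight w \equiv 1 on [-1, 1], whose Gaussian rule is Gauss–Legendre quadrature):

    import numpy as np

    n = 5
    xk, wk = np.polynomial.legendre.leggauss(n)         # n-point rule

    exact = lambda m: (1 - (-1) ** (m + 1)) / (m + 1)   # int_{-1}^{1} x^m dx
    for m in (2 * n - 2, 2 * n - 1, 2 * n):
        # error vanishes (to rounding) up to degree 2n-1, but not at 2n
        print(m, wk @ xk ** m - exact(m))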

A.1.4 Methods of computation

In the classical cases the recurrence coefficients are explicit, and the polynomials have, among other representations and functional identities, representations in terms of classical hypergeometric functions. For instance, Jacobi polynomials can be written as

    P_n^{(\alpha,\beta)}(x) = \frac{(\alpha+1)_n}{n!} \, {}_2F_1\!\left( -n, \; n+\alpha+\beta+1; \; \alpha+1; \; \frac{1-x}{2} \right)

in terms of classical Gaussian hypergeometric functions; see [108, Chapter 15, 18.5.7]. In other cases, one typically has to resort to numerical computation. Over the years, several methods have been proposed in the literature to compute orthogonal polynomials with (positive) nonclassical weights, and an excellent source of information is the monograph by Gautschi [49]. One popular algorithm, due to Golub and Welsch [54] (cf. also the more recent analysis of Laurie [72]), exploits the connection with linear algebra given by the matrix formulation of the three-term recurrence relation; see formula (A.7). Evaluating that equation at x = x_{n,k}, which is a root of p_n, we obtain x_{n,k} \mathbf{p}_n(x_{n,k}) = J_n \mathbf{p}_n(x_{n,k}); hence the zeros of p_n(x), say \{x_{n,k}\}_{k=1}^n, correspond to the eigenvalues of the Jacobi matrix J_n; see (A.10). Furthermore, as shown in [54], because of (A.15) the Gaussian weights can be obtained from the first components of the (normalized) eigenvectors of J_n. This turns the analytical problem of computing the orthogonal polynomials and their zeros into a well-studied question in numerical linear algebra.

There are other approaches available in the literature, based on analysis instead of linear algebra. More recently, Glaser, Liu, and Rokhlin in [53] and also Gil and Segura [50, 51] and Townsend, Trogdon, and Olver [103] have proposed methods based on differential and difference-differential identities satisfied by orthogonal polynomials (and also by more general special functions), combined with numerical tools such as fixed-point iteration (or the Newton method).

In any case, it is essential to note that any approach essentially depends on the availability (explicit or numerical) of the recurrence coefficients \alpha_k and \beta_k. This issue is the basis for several approaches presented by Gautschi in [49]. We summarize here some of the main ideas; a sketch of the Golub–Welsch procedure follows the list.

• Compute the recurrence coefficients directly from the moments of the weight function. This is usually a very ill-conditioned problem, as Gautschi points out in great detail, but sometimes the use of modified moments

    \tau_n = \int_a^b \phi_n(x) w(x) \,dx,

where \phi_n(x) is a suitable sequence of functions, can result in a substantial improvement. A standard example is given by modified moments constructed out of Chebyshev polynomials.

• Discretize the inner product that is given by the orthogonality (using, for example, a suitable quadrature rule with N nodes and weights), and generate recurrence coefficients with respect to a discrete inner product, say \alpha_{k,N} and \beta_{k,N}, in such a way that \alpha_{k,N} \to \alpha_k and \beta_{k,N} \to \beta_k sufficiently rapidly as N \to \infty. The discretization of the inner product can be done with Chebyshev nodes, for instance, or with equispaced nodes. A delicate aspect is the interplay between the indices N and k, and the numerical stability of the recurrence relation in the different parameter regimes.
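As announced above, here is a compact sketch of the Golub–Welsch procedure (our own illustration). For the Legendre weight the symmetrized recurrence coefficients are explicit, \alpha_n = 0 and \sqrt{\beta_n} = n/\sqrt{4n^2 - 1}, with \mu_0 = 2; for a nonclassical weight they would first have to be produced by one of the methods in the list above.

    import numpy as np

    def golub_welsch(alpha, sqrt_beta, mu0):
        # symmetric tridiagonal Jacobi matrix: eigenvalues give the nodes,
        # and mu0 times the squared first components of the normalized
        # eigenvectors give the weights, by (A.15)
        J = np.diag(alpha) + np.diag(sqrt_beta, 1) + np.diag(sqrt_beta, -1)
        nodes, V = np.linalg.eigh(J)
        return nodes, mu0 * V[0, :] ** 2

    n = 5
    k = np.arange(1, n)
    nodes, weights = golub_welsch(np.zeros(n),
                                  k / np.sqrt(4.0 * k ** 2 - 1.0), mu0=2.0)

    ref = np.polynomial.legendre.leggauss(n)
    print(np.allclose(nodes, ref[0]), np.allclose(weights, ref[1]))  # True True

The design point is that the analytical difficulty is concentrated entirely in the coefficients \alpha_k and \beta_k; once they are available, nodes and weights reduce to a symmetric tridiagonal eigenvalue problem.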


A recent alternative, not sufficiently explored yet, is to use the point of view of integrable systems: in some cases, recurrence coefficients satisfy deformation equations in terms of parameters in the weight function, and also string equations (sometimes called Freud equations), which are nonlinear relations for the recurrence coefficients themselves. These identities have been worked out for several nonclassical weight functions, deforming in parameter space and starting from initial values where one has information; additionally, these identities are of considerable theoretical interest since they are often related to Painlevé-type equations or Toda lattice equations. We refer the interested reader to [6, 19, 77] for more details of this theory from the perspective of random matrices and orthogonal polynomials.

A.2 Orthogonal polynomials with complex weight function

The extension of the previous theory to non-Hermitian orthogonality in the complex plane, with weight functions that are no longer positive, or even with complex weight functions, has immediate consequences: first and foremost, the Gram–Schmidt procedure to generate orthogonal polynomials p_n(x) will work as long as h_n = \langle p_n, p_n \rangle \neq 0, with the bilinear form given in (A.2). This condition is often difficult to establish for all n, but partial and/or asymptotic results can be obtained in some cases. As an example, when w(x) = e^{i\omega x}, it follows from [30] that the orthogonal polynomial p_n^\omega exists for sufficiently large n and fixed \omega (or \omega coupled linearly with n below a certain critical regime). Furthermore, explicit asymptotic approximations can be computed using the Riemann–Hilbert formulation and the Deift–Zhou method of steepest descent in the complex plane. Complementary results are obtained in [32]; see also Chapter 6, where it is shown that as \omega \to \infty, D_{2N-1} is different from 0 and D_{2N} vanishes (to leading order) at integer multiples of \pi. This gives the corresponding existence result for the orthogonal polynomials p_{2N}^\omega and p_{2N+1}^\omega in this asymptotic regime.

Generally speaking, if one can establish that no zeros of the Hankel determinant D_n associated with the moments of a weight function w may occur in a certain regime of interest, then Theorem A.1 guarantees existence of an OPS for this weight function. Once zeros of D_n occur, degeneracy of the OPS can happen, in the sense that, for example, consecutive polynomials in the sequence have common zeros (a situation that cannot arise in the case of positive weight functions, as stated in Theorem A.4).

Given a weight function of the form w(x) = e^{V(x)}, where V(x) is a polynomial (perhaps complex-valued), is it possible to determine where zeros of the Hankel determinants lie on the real line or in the complex plane? In general, this is a very complicated question, but we observe that for many such weight functions one can write the moments in terms of classical special functions; for example, the moments of our serendipitous Fourier weight,

    \mu_k = \int_{-1}^1 x^k e^{i\omega x} \,dx,   k \geq 0,

can be written in terms of incomplete Gamma functions, or more generally confluent hypergeometric functions, and

    \mu_k = \int_\Gamma x^k e^{i\omega (x^3/3 - cx)} \,dx,   k \geq 0,

where \Gamma is a suitable contour in the complex plane, can be recast in terms of Airy functions and derivatives.
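To make the statement about D_{2N} concrete, the following sketch (ours) computes the moments of the Fourier weight by quadrature of the real and imaginary parts and scans the modulus of the Hankel determinants in \omega. In line with the results quoted above from [32], the even-order determinant should dip near integer multiples of \pi once \omega is large, while the odd-order one stays away from zero.

    import numpy as np
    from scipy.integrate import quad

    def mu(k, omega):
        # moments of w(x) = e^{i omega x} on [-1, 1]
        re = quad(lambda x: x ** k * np.cos(omega * x), -1, 1, limit=200)[0]
        im = quad(lambda x: x ** k * np.sin(omega * x), -1, 1, limit=200)[0]
        return re + 1j * im

    def D(n, omega):
        # Hankel determinant D_n = det [mu_{j+k}], 0 <= j, k <= n
        m = [mu(k, omega) for k in range(2 * n + 1)]
        return np.linalg.det(np.array([[m[j + k] for k in range(n + 1)]
                                       for j in range(n + 1)]))

    # D_4 is an instance of D_{2N} (N = 2), D_3 of D_{2N-1}
    for omega in np.pi * np.array([9.5, 10.0, 10.5]):
        print(f"{omega:8.4f}  |D_4| = {abs(D(4, omega)):.3e}"
              f"  |D_3| = {abs(D(3, omega)):.3e}")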


In some cases, one can use this connection with special functions in order to determine zero-free regions for D_n, but at the moment this area remains to a large extent unexplored, since the location of the complex zeros of classical special functions can be quite involved. Even if this theoretical information is available, the calculation of nodes and weights (or of the polynomials themselves) can be quite challenging. The recurrence relation (A.7) stays in place in the context of non-Hermitian orthogonal polynomials, and so do the Jacobi matrix and the Golub–Welsch approach, with the caveat that the calculation of the weights from the first components of the normalized eigenvectors is no longer possible, since in the complex setting the denominator of (A.15) no longer coincides with the norm \|p_n\|^2. However, the recurrence coefficients \alpha_n and \beta_n are no longer explicit, and their computation can be expensive.

Bibliography

[1] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, volume 55 of National Bureau of Standards Applied Mathematics Series. U.S. Government Printing Office, Washington, DC, 1964. (Cited on p. 30)
[2] A. Asheim, A. Deaño, D. Huybrechs, and H. Wang. A Gaussian quadrature rule for oscillatory integrals on a bounded interval. Disc. Cont. Dyn. Sys. A, 34(3):883–901, 2014. (Cited on pp. 115, 118, 120)
[3] A. Asheim and D. Huybrechs. Asymptotic analysis of numerical steepest descent with path approximations. Found. Comput. Math., 10(6):647–671, 2010. (Cited on pp. 90, 100)
[4] A. Asheim and D. Huybrechs. Complex Gaussian quadrature for oscillatory integral transforms. IMA J. Numer. Anal., 33(4):1322–1341, 2013. (Cited on pp. 107, 112, 113, 156)
[5] P. Bettess. Short-wave scattering: Problems and techniques. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 362(1816):421–443, 2004. (Cited on p. 151)
[6] P. M. Bleher. Lectures on random matrix models. The Riemann–Hilbert approach. In John Harnad, editor, Random Matrices, Random Processes and Integrable Systems, CRM Series in Mathematical Physics, pages 251–349. Springer, 2011. (Cited on pp. 122, 132, 169)
[7] P. M. Bleher and A. Deaño. Topological expansion in the cubic random matrix model. Int. Math. Res. Not. IMRN, 2013(12):2699–2755, 2013. (Cited on p. 133)
[8] P. M. Bleher and A. Deaño. Painlevé I double scaling limit in the cubic random matrix model. Random Matrices Theory Appl., 5(2):1650004, 58, 2016. (Cited on p. 133)
[9] P. M. Bleher, A. Deaño, and M. Yattselev. Topological expansion in the complex cubic log-gas model: One-cut case. J. Stat. Phys., 166(3-4):784–827, 2017. (Cited on p. 133)
[10] N. Bleistein and R. A. Handelsman. Asymptotic Expansions of Integrals. Dover Publications, New York, second edition, 1986. (Cited on pp. 5, 28, 79, 126, 154)
[11] R. Bourquin and V. Gradinaru. Numerical steepest descent for overlap integrals of semiclassical wavepackets. Technical Report 2015-12, Seminar for Applied Mathematics, ETH Zürich, Switzerland, 2015. (Cited on p. 159)
[12] J. P. Boyd. The devil's invention: Asymptotic, superasymptotic and hyperasymptotic series. Acta Appl. Math., 56(1):1–98, 1999. (Cited on p. 91)
[13] J. C. Butcher. Numerical Methods for Ordinary Differential Equations. John Wiley & Sons, Chichester, second edition, 2008. (Cited on p. 35)


[14] E. Candès, L. Demanet, and L. Ying. A fast butterfly algorithm for the computation of Fourier integral operators. Multiscale Model. Simul., 7(4):1727–1750, 2009. (Cited on p. 160)
[15] S. N. Chandler-Wilde and D. C. Hothersall. Efficient calculation of the Green function for acoustic propagation above a homogeneous impedance plane. J. Sound Vibration, 180(5):705–724, 1995. (Cited on p. 91)
[16] S. N. Chandler-Wilde, I. G. Graham, S. Langdon, and E. A. Spence. Numerical-asymptotic boundary integral methods in high-frequency acoustic scattering. Acta Numer., 21:89–305, 2012. (Cited on pp. 151, 156)
[17] W. C. Chew, M. S. Tong, and B. Hu. Integral Equation Methods for Electromagnetic and Elastic Waves. Morgan & Claypool, San Rafael, CA, 2009. (Cited on p. 160)
[18] T. S. Chihara. An Introduction to Orthogonal Polynomials. Dover Publications, New York, 2011. (Cited on pp. 163, 165)
[19] P. A. Clarkson. Painlevé equations—nonlinear special functions. In F. Marcellán and W. Van Assche, editors, Orthogonal Polynomials and Special Functions: Computation and Applications, volume 1883 of Lecture Notes in Mathematics, pages 331–411. Springer-Verlag, 2006. (Cited on p. 169)
[20] P. A. Clarkson. On Airy solutions of the second Painlevé equation. Stud. Appl. Math., 137(1):93–109, 2016. (Cited on pp. 129, 133)
[21] P. A. Clarkson, A. F. Loureiro, and W. Van Assche. Unique positive solution for an alternative discrete Painlevé I equation. J. Diff. Eq. Appl., 22(5):656–675, 2016. (Cited on pp. 132, 133)
[22] D. Colton and R. Kress. Integral Equation Methods in Scattering Theory, volume 72 of Classics in Applied Mathematics. SIAM, Philadelphia, 2013. Reprint of the 1983 original [MR0700400]. (Cited on p. 151)
[23] M. Condon, A. Deaño, A. Iserles, K. Maczyński, and T. Xu. On numerical methods for highly oscillatory problems in circuit simulation. COMPEL, 28(6):1607–1618, 2009. (Cited on p. 26)
[24] R. Cools. Constructing cubature formulae: The science behind the art. Acta Numer., 6:1–54, 1997. (Cited on pp. 13, 101)
[25] R. D'Amico, K. Koo, D. Huybrechs, and W. Desmet. A refined use of the residue theorem for the evaluation of band-averaged input power into linear second-order dynamic systems. J. Sound. Vib., 333(6):1796–1817, 2014. (Cited on p. 107)
[26] K. T. R. Davies. Complex-plane methods for the evaluation of integrals with highly oscillatory integrands. J. Comput. Phys., 80(2):498–505, 1989. (Cited on p. 91)
[27] P. J. Davis and P. Rabinowitz. Methods of Numerical Integration. Dover Publications, Mineola, NY, 2007. Corrected reprint of the second (1984) edition. (Cited on pp. 1, 13, 33)
[28] A. Deaño, A. B. J. Kuijlaars, and P. Román. Asymptotic behavior and zero distribution of polynomials orthogonal with respect to Bessel functions. Constr. Approx., 43(1):153–196, 2016. (Cited on p. 156)
[29] A. Deaño, D. Huybrechs, and A. B. J. Kuijlaars. Asymptotic zero distribution of complex orthogonal polynomials associated with Gaussian quadrature. J. Approx. Theory, 162(12):2202–2224, 2010. (Cited on p. 129)


[30] A. Deaño. Large degree asymptotics of orthogonal polynomials with respect to an oscillatory weight on a bounded interval. J. Approx. Theory, 186:33–63, 2014. (Cited on pp. 115, 169)
[31] A. Deaño and D. Huybrechs. Complex Gaussian quadrature of oscillatory integrals. Numer. Math., 112(2):197–219, 2009. (Cited on pp. 88, 90, 95)
[32] A. Deaño, D. Huybrechs, and A. Iserles. The kissing polynomials and their Hankel determinants. Technical report, DAMTP, University of Cambridge, 2015. (Cited on pp. 121, 122, 123, 124, 125, 169)
[33] P. Deift. Orthogonal Polynomials and Random Matrices: The Riemann–Hilbert Approach, volume 3 of Courant Lecture Notes in Mathematics. Courant Institute of Mathematical Sciences, American Mathematical Society, 2000. (Cited on p. 122)
[34] E. Delabaere and C. J. Howls. Global asymptotics for multiple integrals with boundaries. Duke Mathematical Journal, 112(2):199–264, 2002. (Cited on pp. 101, 105)
[35] R. A. DeVore and L. R. Scott. Error bounds for Gaussian quadrature and weighted-L1 polynomial approximation. SIAM J. Numer. Anal., 21(2):400–412, 1984. (Cited on p. 68)
[36] J. Dick, F. Y. Kuo, and I. H. Sloan. High-dimensional integration: The quasi-Monte Carlo way. Acta Numer., 22:133–288, 2013. (Cited on pp. 39, 101)
[37] V. Domínguez, I. G. Graham, and T. Kim. Filon–Clenshaw–Curtis rules for highly oscillatory integrals with algebraic singularities and stationary points. SIAM J. Numer. Anal., 51(3):1542–1566, 2013. (Cited on p. 67)
[38] V. Domínguez, I. G. Graham, and V. P. Smyshlyaev. Stability and error estimates for Filon–Clenshaw–Curtis rules for highly oscillatory integrals. IMA J. Numer. Anal., 31(4):1253–1280, 2011. (Cited on pp. 62, 63, 66)
[39] N. Dyn. On the existence of Hermite–Birkhoff quadrature formulas of Gaussian type. J. Approx. Theory, 31(1):22–32, 1981. (Cited on pp. 32, 61)
[40] P. Favati, G. Lotti, and F. Romani. Peano kernel behaviour and error bounds for symmetric quadrature formulas. Comput. Math. Appl., 29(6):27–34, 1995. (Cited on pp. 68, 73)
[41] L. N. G. Filon. On a quadrature formula for trigonometric integrals. Proc. Roy. Soc. Edin., 49:38–47, 1929. (Cited on p. 29)
[42] J. Franklin and B. Friedman. A convergent asymptotic representation for integrals. Proc. Cambridge Philos. Soc., 53:612–619, 1957. (Cited on p. 91)
[43] L. N. Frazer and J. F. Gettrust. On a generalization of Filon's method and the computation of the oscillatory integrals of seismology. Geophys. J. R. Astr. Soc., 76:461–481, 1984. (Cited on p. 161)
[44] K. Fuchs and G. Müller. Computation of synthetic seismograms with the reflectivity method and comparison with observations. Geophys. J. R. Astr. Soc., 23:417–433, 1976. (Cited on p. 160)
[45] J. Gao and A. Iserles. Adaptive Filon method for highly oscillatory quadrature. In J. Dick, F. Y. Kuo, and H. Woźniakowski, editors, Festschrift for the 80th Birthday of Ian Sloan, Springer-Verlag, Berlin, 2018. (Cited on p. 75)
[46] J. Gao and A. Iserles. A generalization of Filon–Clenshaw–Curtis quadrature for highly oscillatory integrals. BIT, DOI 10.1007/s10543-017-0682-9, 2017. (Cited on pp. 64, 65)


[47] J. Gao and A. Iserles. Error analysis of the extended Filon-type method for highly oscillatory integrals. Res. Math. Sci., 4:21, DOI 10.1186/s40687-017-0110-4, 2017. (Cited on pp. 61, 62, 68, 71, 72, 74)
[48] W. Gautschi. Efficient computation of the complex error function. SIAM J. Numer. Anal., 7:187–198, 1970. (Cited on p. 91)
[49] W. Gautschi. Orthogonal Polynomials: Computation and Approximation. Clarendon Press, Oxford, 2004. (Cited on pp. 92, 111, 163, 166, 167, 168)
[50] A. Gil and J. Segura. Computing the zeros and turning points of solutions of second order homogeneous linear ODEs. SIAM J. Numer. Anal., 41(3):827–855, 2003. (Cited on p. 168)
[51] A. Gil and J. Segura. Computing the zeros and turning points of solutions of second order homogeneous linear ODEs. J. Symbolic Comput., 35(5):465–485, 2003. (Cited on p. 168)
[52] A. Gil, J. Segura, and N. M. Temme. Numerical Methods for Special Functions. SIAM, Philadelphia, 2007. (Cited on pp. 127, 133)
[53] A. Glaser, X. Liu, and V. Rokhlin. A fast algorithm for the calculation of the roots of special functions. SIAM J. Sci. Comput., 29(4):1420–1438, 2007. (Cited on p. 168)
[54] G. H. Golub and J. H. Welsch. Calculation of Gauss quadrature rules. Math. Comp., 23:221–230, 1969. (Cited on p. 168)
[55] Z. Heitman, J. Bremer, and V. Rokhlin. On the existence of nonoscillatory phase functions for second order ordinary differential equations in the high-frequency regime. J. Comput. Phys., 290:1–27, 2015. (Cited on p. 154)
[56] D. Huybrechs, A. B. J. Kuijlaars, and N. Lejon. Zero distribution of complex orthogonal polynomials with respect to exponential weights. J. Approx. Theory, 184:28–54, 2014. (Cited on p. 129)
[57] D. Huybrechs, A. B. J. Kuijlaars, and N. Lejon. A numerical method for oscillatory integrals with coalescing saddle points. Technical Report TW-586, Department of Computer Science, KU Leuven, 2017. (Cited on pp. 128, 129)
[58] D. Huybrechs, A. Iserles, and S. P. Nørsett. From high oscillation to rapid approximation V: The equilateral triangle. IMA J. Numer. Anal., 31(3):755–785, 2011. (Cited on p. 19)
[59] D. Huybrechs and S. Olver. Superinterpolation in highly oscillatory quadrature. Found. Comput. Math., 12(2):203–228, 2012. (Cited on pp. 90, 99)
[60] D. Huybrechs and S. Vandewalle. On the evaluation of highly oscillatory integrals by analytic continuation. SIAM J. Numer. Anal., 44(3):1026–1048, 2006. (Cited on p. 90)
[61] D. Huybrechs and S. Vandewalle. The construction of cubature rules for multivariate highly oscillatory integrals. Math. Comp., 76(260):1955–1980, 2007. (Cited on pp. 90, 101, 104)
[62] D. Huybrechs and S. Vandewalle. A sparse discretization for integral equation formulations of high frequency scattering problems. SIAM J. Sci. Comput., 29(6):2305–2328, 2007. (Cited on pp. 151, 157)
[63] A. Iserles and D. Levin. Asymptotic expansion and quadrature of composite highly oscillatory integrals. Math. Comp., 80(273):279–296, 2011. (Cited on pp. 26, 27)


[64] A. Iserles and S. P. Nørsett. On quadrature methods for highly oscillatory integrals and their implementation. BIT, 44(4):755–772, 2004. (Cited on pp. 35, 37, 61, 75)
[65] A. Iserles and S. P. Nørsett. On the computation of highly oscillatory multivariate integrals with stationary points. BIT, 46(3):549–566, 2006. (Cited on pp. 15, 16)
[66] A. Iserles and S. P. Nørsett. Efficient quadrature of highly oscillatory integrals using derivatives. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 461(2057):1383–1399, 2005. (Cited on pp. 9, 31)
[67] A. Iserles and S. P. Nørsett. Quadrature methods for multivariate highly oscillatory integrals using derivatives. Math. Comp., 75(255):1233–1258, 2006. (Cited on pp. 13, 16, 23)
[68] M. E. H. Ismail. Classical and Quantum Orthogonal Polynomials in One Variable. Cambridge University Press, Cambridge, 2005. (Cited on pp. 122, 163, 165)
[69] G. B. Jeffery. Louis Napoleon George Filon. 1875–1937. Obituary Notices of Fellows of the Royal Society, 2(7):500–526, 1939. (Cited on p. 29)
[70] W. B. Jones, O. Njåstad, and W. J. Thron. Moment theory, orthogonal polynomials, quadrature, and continued fractions associated with the unit circle. Bull. London Math. Soc., 21:113–152, 1989. (Cited on p. 111)
[71] A. B. J. Kuijlaars and G. L. F. Silva. S-curves in polynomial external fields. J. Approx. Theory, 191:1–37, 2015. (Cited on p. 129)
[72] D. P. Laurie. Computation of Gauss-type quadrature formulas. J. Comp. Appl. Math., 127:201–217, 2001. (Cited on p. 168)
[73] N. Lejon. Analysis and applications of orthogonal polynomials with zeros in the complex plane. Ph.D. thesis, KU Leuven, 2016. (Cited on pp. 108, 127, 128, 129, 130)
[74] D. Levin. Procedures for computing one- and two-dimensional integrals of functions with rapid irregular oscillations. Math. Comp., 38(158):531–538, 1982. (Cited on pp. 42, 43)
[75] D. Levin. Fast integration of rapidly oscillatory functions. J. Comput. Appl. Math., 67(1):95–101, 1996. (Cited on p. 42)
[76] E. Levin and D. S. Lubinsky. Orthogonal Polynomials for Exponential Weights. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, 4. Springer-Verlag, New York, 2001. (Cited on p. 92)
[77] A. P. Magnus. Painlevé-type differential equations for the recurrence coefficients of semiclassical orthogonal polynomials. J. Comput. Appl. Math., 57:215–237, 1995. (Cited on p. 169)
[78] J. E. Marsden and A. Tromba. Vector Calculus. W. H. Freeman, New York, 1976. (Cited on pp. 16, 48)
[79] K. A. Michalski and J. R. Mosig. Efficient computation of Sommerfeld integral tails—methods and algorithms. J. Electromagn. Waves Appl., 30(3):281–317, 2016. (Cited on p. 160)
[80] P. D. Miller. Applied Asymptotic Analysis, volume 75 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2006. (Cited on pp. 5, 79, 80)
[81] G. V. Milovanović. A class of orthogonal polynomials on the radial rays in the complex plane. J. Math. Anal. Appl., 206(1):121–139, 1997. (Cited on p. 111)


[82] G. V. Milovanović, P. M. Rajković, and Z. M. Marjanović. A class of orthogonal polynomials on the radial rays in the complex plane II. Facta Univ. Ser. Math. Inform., 11:29–47, 1996. (Cited on p. 111)
[83] F. W. J. Olver. Asymptotics and Special Functions. Computer Science and Applied Mathematics. Academic Press, New York, London, 1974. (Cited on pp. 5, 28, 79, 80, 98, 126, 153)
[84] S. Olver. Moment-free numerical integration of highly oscillatory functions. IMA J. Numer. Anal., 26(2):213–227, 2006. (Cited on pp. 42, 44)
[85] S. Olver. On the quadrature of multivariate highly oscillatory integrals over nonpolytope domains. Numer. Math., 103(4):643–665, 2006. (Cited on p. 48)
[86] S. Olver. Moment-free numerical approximation of highly oscillatory integrals with stationary points. European J. Appl. Math., 18(4):435–447, 2007. (Cited on pp. 52, 53, 54)
[87] S. Olver. Numerical approximation of vector-valued highly oscillatory integrals. BIT, 47(3):637–655, 2007. (Cited on pp. 25, 155)
[88] S. Olver. Numerical approximation of highly oscillatory integrals. Ph.D. thesis, University of Cambridge, 2008. (Cited on p. 50)
[89] R. Piessens. Computing integral transforms and solving integral equations using Chebyshev polynomial approximations. J. Comput. Appl. Math., 121(1-2):113–124, 2000. Numerical analysis in the 20th century, Vol. I, Approximation theory. (Cited on p. 62)
[90] M. J. D. Powell. Approximation Theory and Methods. Cambridge University Press, Cambridge, New York, 1981. (Cited on pp. 44, 67, 68)
[91] E. D. Rainville. Special Functions. The Macmillan Co., New York, 1960. (Cited on pp. 61, 63, 67, 72, 124)
[92] A. Ralston. A First Course in Numerical Analysis. McGraw-Hill, New York, Toronto, London, 1965. (Cited on p. 31)
[93] E. B. Saff. Orthogonal polynomials from a complex perspective. In Paul Nevai, editor, Orthogonal Polynomials, volume 294 of NATO ASI Series, pages 363–393. Springer, 1990. (Cited on p. 111)
[94] A. Shadrin. Error bounds for Lagrange interpolation. J. Approx. Theory, 80(1):25–49, 1995. (Cited on pp. 70, 71)
[95] B. Simon. Orthogonal Polynomials on the Unit Circle. Part 1: Classical Theory, volume 54 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 2005. (Cited on p. 111)
[96] A. Sommerfeld. Mathematische Theorie der Diffraction. Math. Ann., 47(2-3):317–374, 1896. (Cited on p. 160)
[97] A. Spitzbart. A generalization of Hermite's interpolation formula. Amer. Math. Monthly, 67:42–46, 1960. (Cited on pp. 31, 33, 34)
[98] E. M. Stein. Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals, volume 43 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1993. With the assistance of Timothy S. Murphy, Monographs in Harmonic Analysis, III. (Cited on pp. 5, 9, 11, 158)


[99] G. Szegő. Orthogonal Polynomials, volume 23 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 1959. (Cited on pp. 111, 121, 163)
[100] N. M. Temme. Asymptotic Methods for Integrals, volume 6 of Series in Analysis. World Scientific, Singapore, 2014. (Cited on pp. 79, 126)
[101] N. M. Temme and R. Vidunas. Symbolic evaluation of coefficients in Airy-type asymptotic expansions. J. Math. Anal. Appl., 269:317–331, 2002. (Cited on p. 127)
[102] J. Todd. Evaluation of the exponential integral for large complex arguments. J. Research Nat. Bur. Standards, 52:313–317, 1954. (Cited on p. 91)
[103] A. Townsend, T. Trogdon, and S. Olver. Fast computation of Gauss quadrature nodes and weights on the whole real line. IMA J. Numer. Anal., 36(1):337–358, 2016. (Cited on p. 168)
[104] S. Trattner, M. Feigin, H. Greenspan, and N. Sochen. Can Born approximate the Unborn? A new validity criterion for the Born approximation in microscopic imaging. In IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007. (Cited on p. 105)
[105] L. N. Trefethen. Is Gauss quadrature better than Clenshaw–Curtis? SIAM Rev., 50(1):67–87, 2008. (Cited on pp. 62, 63, 73)
[106] H. Wang, L. Zhang, and D. Huybrechs. Asymptotic expansions and fast computation of oscillatory Hilbert transforms. Numer. Math., 123(4):709–743, 2013. (Cited on p. 154)
[107] G. N. Watson. The harmonic functions associated with the parabolic cylinder. Proc. London Math. Soc., S2-17(1):116–148, 1918. (Cited on p. 80)
[108] Website. Digital Library of Mathematical Functions. http://dlmf.nist.gov, 2016. (Cited on pp. 12, 23, 24, 26, 27, 52, 53, 54, 114, 123, 131, 151, 153, 154, 168)
[109] J. Wojdylo. Computing the coefficients in Laplace's method. SIAM Rev., 48(1):76–96, 2006. (Cited on p. 83)
[110] R. Wong. Quadrature formulas for oscillatory integral transforms. Numer. Math., 39(3):351–360, 1982. (Cited on pp. 91, 156)
[111] R. Wong. Asymptotic Approximations of Integrals. Computer Science and Scientific Computing. Academic Press, Boston, 1989. (Cited on pp. 5, 9, 79, 126)
[112] L. Zhao and C. Huang. An adaptive Filon-type method for oscillatory integrals without stationary points. Numer. Algorithms, 75(3):753–775, 2017. (Cited on p. 75)

Index

acoustics, 151
  vibro-acoustics, 107
Airy functions, 107, 112, 127, 129, 132, 154, 169
Airy integral, 23, 154
asymptotic analysis, 6, 150
  hills, 85
  method of stationary phase, 9
  multivariate analysis, 13
  pre-asymptotic regime, 150
  uniform expansions, 126, 127, 151
  valleys, 85
asymptotic crime, 80
asymptotic localisation, 80
asymptotic method, 8, 135, 141, 147
benign dependency, 152
Bessel functions, 107, 112, 127, 155, 156, 160–162
  modified Bessel functions, 114
Birkhoff–Hermite quadrature, 32, 61, 67, 73, 146
Borel measure, 163
boundary element method, 151
branch points, 150
butterfly algorithms, 160
Chebyshev polynomials, 62, 168
Chebyshev set, 44, 50, 54
Clenshaw–Curtis quadrature, 62, 73
collocation, 42
complex-valued Gaussian quadrature, 77, 108, 109, 115, 120, 139, 144, 147
critical point, 16
Digital Library of Mathematical Functions, 154
Discrete Cosine Transform, 63, 64
electromagnetics, 151, 160
  layered media, 160
error function, 127, 154
Faà di Bruno formula, 83
Fast Fourier Transform (FFT), 158–160
Filon Paradigm, 33, 48, 55, 59
Filon quadrature, 83, 87, 93, 107, 120, 147, 150, 152, 155, 156, 159, 161
  adaptive Filon, 74, 147
  derivative-free, 37, 75
  error bounds, 68
  error control, 35
  extended, 34, 59
  extended Filon, 133, 137
  Filon–Clenshaw–Curtis, 60, 62, 64, 67, 68, 77, 136, 142, 157
  Filon–Jacobi, 60, 64, 67, 68, 77, 136, 141
  Filon–Olver, 55, 138, 142
  multivariate integrals, 38, 157
  original Filon, 30, 161
  plain Filon, 3, 32, 75, 146
Filon, L. N. G., ix, 29
Fourier transform, 112, 158
Freud-type polynomials, 92
Gauss–Hermite quadrature, 87
Gaussian quadrature, 2, 8, 67, 73, 75, 87, 106, 109, 110, 156, 166, 167
  Christoffel numbers, 167
  Freud-type, 94
  Gauss–Hermite, 92–94, 100, 103, 106, 127, 144, 159
  Gauss–Laguerre, 87, 88, 91, 100, 103, 106, 113, 127, 139, 144, 156
    tensor product, 102
  Gauss–Legendre, 96, 109, 115, 120
Gamma function, 169
generalized moments, 49, 168
Gibbs phenomenon, 158, 159
Golub–Welsch algorithm, 168, 170
Gram–Schmidt procedure, 164, 169
Green function, 107, 151, 160
Hankel determinant, 115, 121, 122, 129, 165, 169
Hankel functions, 107, 151–157, 162
Hankel transform, 112, 114, 160
  inverse transform, 160
Helmholtz equation, 151
Hermite polynomials, 92, 163
hidden stationary points, 16, 105, 150
Hilbert transform, 154
Hölder inequality, 68
hypergeometric functions, 154, 167, 169
IEEE precision, 134
incomplete gamma function, 52
integrable systems, 122
integral
  Fourier-type, 80, 98, 111, 154, 155, 158
  Laplace-type, 80, 81, 87, 98, 99
interpolation
  Hermite, 4, 32, 70
  Lagrange, 33, 166
Jacobi matrix, 165, 170
Jacobi polynomials, 59, 61, 67, 72, 163
Laguerre polynomials, 88, 124, 163
Laplace transform, 80
Legendre polynomials, 75, 116
Levin method, 147
Levin quadrature, 42, 83, 87, 138, 142, 147, 155
  multivariate, 47
Liouville–Green method, 154
Mellin transform, 154
natural basis, 50, 138
Newton method, 168
nonoscillatory phase functions, 154
numerical steepest descent, 81, 87, 97, 98, 106, 111, 113, 120, 124, 127, 133, 139, 143, 147, 156, 159, 162
  multivariate integrals, 101, 157
  uniform, 128
orthogonal polynomials, 110, 163
  complex valued, 93, 108, 115, 128, 169
  computation, 167
  location of zeros, 111
  non-Hermitian measures, 129, 169, 170
  on the unit circle, 111
  three-term recurrence, 92, 116, 164, 165
Painlevé equations, 129, 132, 169
Peano Kernel Theorem, 68, 70, 73
Pochhammer symbol, 72
quantum mechanics, 159
random matrix theory, 122, 132
regularization, 156, 160
Riemann–Hilbert problem, 115, 169
saddle point, 85, 126
scattering theory
  integral equation methods, 151
singularities, 150, 151, 154
singularity theory, 101, 105
slowly oscillatory function, 43
soil mechanics, 160
Sommerfeld integrals, 160
stationary point, 8, 9, 85, 141, 150
  near stationary point, 151
steepest descent, 79
  Deift–Zhou method, 115, 169
steepest descent path, 80, 82, 83, 99, 100
steepest descent paradigm, 97
Stokes Theorem, 16
string equations, 131, 133, 169
Toda lattice equations, 169
Watson Lemma, 80, 83, 89, 91
Weierstrass Theorem, 166
Wentzel–Kramers–Brillouin (WKB) approximation, 154
Wronskian determinant, 129

Highly oscillatory phenomena range across numerous areas in science and engineering and their computation represents a difficult challenge. A case in point is integrals of rapidly oscillating functions in one or more variables. The quadrature of such integrals has been historically considered very demanding. Research in the past 15 years (in which the authors played a major role) resulted in a range of very effective and affordable algorithms for highly oscillatory quadrature. This is the only monograph bringing together the new body of ideas in this area in its entirety.

The starting point is that approximations need to be analyzed using asymptotic methods rather than by more standard polynomial expansions. As often happens in computational mathematics, once a phenomenon is understood from a mathematical standpoint, effective algorithms follow. As reviewed in this monograph, we now have at our disposal a number of very effective quadrature methods for highly oscillatory integrals—Filon-type and Levin-type methods, methods based on steepest descent, and complex-valued Gaussian quadrature. Their understanding calls for a fairly varied mathematical toolbox—from classical numerical analysis, approximation theory, and theory of orthogonal polynomials all the way to asymptotic analysis—yet this understanding is the cornerstone of efficient algorithms. The text is intended for advanced undergraduate and graduate students, as well as applied mathematicians, scientists, and engineers who encounter highly oscillatory integrals as a critical difficulty in their computations.

Alfredo Deaño is a Lecturer in Mathematics at the School of Mathematics, Statistics and Actuarial Science, University of Kent (UK). His main research interests include the theory of classical special functions, orthogonal polynomials, and Painlevé equations.

Daan Huybrechs is a Professor at KU Leuven, Belgium, in the section on numerical analysis and applied mathematics (NUMA) in the Department of Computer Science. He is an associate editor of the IMA Journal of Numerical Analysis. His main research interests include oscillatory integrals, approximation theory, and numerical methods for the simulation of wave scattering and propagation.

Arieh Iserles recently retired from the Chair in Numerical Analysis of Differential Equations at the University of Cambridge. He is the managing editor of Acta Numerica, Editor-in-Chief of the IMA Journal of Numerical Analysis and of Transactions of Mathematics and its Applications, and an editor of many other journals and book series. His research interests comprise geometric numerical integration, the computation of highly oscillatory phenomena, computations in quantum mechanics, approximation theory, and the theory of orthogonal polynomials.

ISBN 978-1-611975-11-6