Essential Partial Differential Equations: Analytical and Computational Aspects (Instructor Solution Manual, Solutions) [1 ed.] 3319225685, 9783319225685


English Pages [149] Year 2015


Essential Partial Differential Equations: Analytical and Computational Aspects Solutions to all exercises

David F. Griffiths, John W. Dold and David J. Silvester Springer International Publishing Switzerland, 2015

Solutions to all exercises are available to approved instructors by contacting the publishers.

Springer Undergraduate Mathematics Series ISBN: 978-3-319-22568-5, e-ISBN: 978-3-319-22569-2

Exercises

1  Introduction
2  Boundary and initial data
3  Origins of PDEs
4  Classification of PDEs
5  Boundary value problems in R1
6  Finite difference methods in R1
7  Maximum principles and energy methods
8  Separation of variables
9  The method of characteristics
10 Finite difference methods for elliptic PDEs
11 Finite difference methods for parabolic PDEs
12 Finite difference methods for hyperbolic PDEs
13 Projects

Exercises 1

Introduction

1.1
  Function                      | Comment                                             | Conclusion
  u(x, y) = A(y)                | uy = A′(y)                                          | False
  u(x, y) = A(y)                | uxy = 0                                             | True
  u(x, t) = A(x)B(t)            | uxy = 0 concerns different independent variables!   | False
  u(x, t) = A(x)B(t)            | u uxt = ABA′B′ = ux ut                              | True
  u(x, t, y) = A(x, y)          | ut = ∂t A(x, y) = 0                                 | True
  u(x, t) = A(x+ct) + B(x−ct)   | utt + c^2 uxx = 2c^2 (A″ + B″)                      | False
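The true/false entries above can be spot-checked numerically. The sketch below is an addition to the manual's table: it picks the illustrative choices A(x) = sin x and B(t) = e^t (assumptions, not data from the exercise) for the separable row u(x, t) = A(x)B(t) and confirms u·uxt = ux·ut by central differences.

```python
import math

# Illustrative separable function u(x, t) = A(x) B(t) with A = sin, B = exp.
def u(x, t):
    return math.sin(x) * math.exp(t)

h = 1e-4  # finite-difference step

def d_dx(f, x, t):
    return (f(x + h, t) - f(x - h, t)) / (2 * h)

def d_dt(f, x, t):
    return (f(x, t + h) - f(x, t - h)) / (2 * h)

def d_dxdt(f, x, t):
    # standard four-point stencil for the mixed second derivative
    return (f(x + h, t + h) - f(x + h, t - h)
            - f(x - h, t + h) + f(x - h, t - h)) / (4 * h * h)

x0, t0 = 0.7, 0.3
lhs = u(x0, t0) * d_dxdt(u, x0, t0)      # u * u_xt
rhs = d_dx(u, x0, t0) * d_dt(u, x0, t0)  # u_x * u_t
residual = abs(lhs - rhs)                # should be ~0 for separable u
```

Any other smooth choices of A and B would work equally well, since the identity u uxt = ux ut holds for every product A(x)B(t).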

1.2 These are not the only possible cases; you might find other PDEs:
(a) u(x, t) = e^t cos x: ∂t^n u = e^t cos x = u (any n), ux = −e^t sin x, uxx = −e^t cos x, so ∂t^n u = u or ∂t^n u + uxx = 0, etc.
(b) u(x, y) = x^2 + y^2: uxx = 2, uyy = 2 so uxx − uyy = 0. Also uxy = 0, etc.
(c) u(x, t) = x^2 t: ux = 2xt, ut = x^2, utx = 2x so 2t ut − x ux = 0 or ux = t utx, etc.
(d) u(x, t) = x^2 t^2: utx = 4xt, uttxx = 4 so (utx)^2 = 16u or utttxx = 0, etc.
(e) u(x, y) = e^{−x^2}: ux = −2x e^{−x^2}, uxx = (4x^2 − 2) e^{−x^2}, uy = 0 so uy = 0 or uxx = (4x^2 − 2)u, etc.
(f) u(x, y) = ln(x^2 + y^2): ux = 2x/(x^2 + y^2), uy = 2y/(x^2 + y^2), uxx = 2/(x^2 + y^2) − 4x^2/(x^2 + y^2)^2, uyy = 2/(x^2 + y^2) − 4y^2/(x^2 + y^2)^2, so y ux − x uy = 0, uxx + uyy = 0, etc.

1.3 These are not the only possible cases; you might find other PDEs:
(a) u(x, t) = A(x+ct) + B(x−ct): ut = cA′(x+ct) − cB′(x−ct), ux = A′(x+ct) + B′(x−ct), utt = c^2 A″(x+ct) + c^2 B″(x−ct), uxx = A″(x+ct) + B″(x−ct), so utt − c^2 uxx = 0 (the wave equation).
(b) u(x, t) = A(x) + B(t): ut = B′(t) so utx = 0.
(c) u(x, t) = A(x)/B(t): ln u = ln A(x) − ln B(t) so (ln u)tx = 0 or u utx − ut ux = 0.
(d) u(x, t) = A(xt): ut = xA′(xt), ux = tA′(xt), so t ut − x ux = 0.
(e) u(x, t) = A(x^2 t): ut = x^2 A′(x^2 t), ux = 2xt A′(x^2 t) so 2t ut − x ux = 0.
(f) u(x, t) = A(x^2/t): ut = −(x^2/t^2) A′(x^2/t), ux = (2x/t) A′(x^2/t) so 2t ut + x ux = 0.

1.4 With u(x, y) = f(2x + y^2) + g(2x − y^2) we find, using the chain rule,
    ux = 2f′(2x + y^2) + 2g′(2x − y^2),
    uy = 2y f′(2x + y^2) − 2y g′(2x − y^2),
    uxx = 4f″(2x + y^2) + 4g″(2x − y^2),
    uyy = 2f′(2x + y^2) − 2g′(2x − y^2) + 4y^2 f″(2x + y^2) + 4y^2 g″(2x − y^2)

and the result follows.

1.5 Suppose that u(x, t) = (1/2)c sech^2(z), where z = (1/2)√c (x − ct − x0). Then zt = −(1/2)c^{3/2} and zx = (1/2)c^{1/2} so, by the chain rule, we find
    ∂t u(x, t) = (1/2)c^{5/2} sinh z / cosh^3 z,   ∂x u(x, t) = −(1/2)c^{3/2} sinh z / cosh^3 z,
    ∂t u + 6u ∂x u = (1/2)c^{5/2} (sinh z)(cosh^2 z − 3)/cosh^5 z = −∂x^3 u(x, t)
and so ut + 6u ux + uxxx = 0.

Figure 1: Soliton solutions of the KdV equation with c = 2 (solid) and c = 4 (dashed) for Exercise 1.5 travel with speed c. [Plot over −5 ≤ x ≤ 5 omitted.]

1.6 With u defined by (1.2) we may write
    u(x, t) = (1/√(4π)) ∫_{−∞}^{∞} f(x − s, t) ds,
where f(x, t) = t^{−1/2} e^{−x^2/4t} is the function used in Example 1.3. The partial derivatives of u are given by
    ut = ∫_{−∞}^{∞} ft(x − s, t) ds,   ux = ∫_{−∞}^{∞} fx(x − s, t) ds,   uxx = ∫_{−∞}^{∞} fxx(x − s, t) ds,
and ut − uxx = 0 follows since ft = fxx for each s (see Example 1.3).
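The KdV identity of Exercise 1.5 can also be verified numerically. The following sketch is a supplementary check (not part of the original solutions): it evaluates ut + 6u ux + uxxx for the soliton by finite differences, with the point (x, t) and the speed c being arbitrary illustrative choices.

```python
import math

# Soliton of Exercise 1.5: u(x,t) = (c/2) sech^2( (sqrt(c)/2)(x - c t - x0) ).
def u(x, t, c=2.0, x0=0.0):
    z = 0.5 * math.sqrt(c) * (x - c * t - x0)
    return 0.5 * c / math.cosh(z) ** 2

h = 1e-3
x, t, c = 0.4, 0.1, 2.0

u_t = (u(x, t + h, c) - u(x, t - h, c)) / (2 * h)
u_x = (u(x + h, t, c) - u(x - h, t, c)) / (2 * h)
# five-point central stencil for the third derivative u_xxx
u_xxx = (u(x + 2 * h, t, c) - 2 * u(x + h, t, c)
         + 2 * u(x - h, t, c) - u(x - 2 * h, t, c)) / (2 * h ** 3)

# should vanish up to O(h^2) discretisation error
residual = u_t + 6 * u(x, t, c) * u_x + u_xxx
```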

1.7 Since u = −2 ∂x log φ = −2φx/φ, the partial derivatives are
    ut = −2 φxt/φ + 2 φx φt/φ^2,   ux = −2 φxx/φ + 2 (φx)^2/φ^2,
    uxx = −2 φxxx/φ + 2 φxx φx/φ^2 + 4 (φx/φ)(φxx/φ − (φx)^2/φ^2)
so, with φt = φxx and φxt = φxxx,
    ut = −2 φxxx/φ + 2 φx φxx/φ^2
and ut + u ux = uxx. With φ = t^{−1/2} exp(−x^2/(4t)) we find log φ = −(1/2) log t − x^2/(4t) and so u = x/t, which is a special case of equation (9.32).

1.8 Since φt = a^2 e^{−a(x−at)} + b^2 e^{b(x+bt)} = φxx we see that φ solves the heat equation. Then, by Exercise 1.7,
    u(x, t) = −2 φx/φ = −2 (−a e^{−a(x−at)} + b e^{b(x+bt)}) / (e^{−a(x−at)} + e^{b(x+bt)}).
(a) The term e^{b(x+bt)} → 0 as x → −∞ and so u → 2a. Likewise, the term e^{−a(x−at)} → 0 as x → ∞ and so u → −2b.
(b) Multiplying both numerator and denominator by e^{a(x−at)} leads to
    u(x, t) = 2 (a − b e^{(a+b)(x+(b−a)t)}) / (1 + e^{(a+b)(x+(b−a)t)}),
so that u(x, t) is constant along the lines x + (b − a)t = constant. When a = b, u = −2a tanh ax is independent of t, so the solution is stationary. It is translated a distance a − b to the right or left for each unit of time depending on whether a > b or a < b, respectively.

Figure 2: Solutions of Burgers' equation at t = 0 for Exercise 1.8 for a > b, a < b and a = b (dashed). The arrows indicate direction of travel as t increases. [Plot omitted.]

For x > 0, lim_{y→0} u(x, y) = tan^{−1} 0 = 0. When x, y > 0 we have tan^{−1}(y/x) > 0 and tan^{−1}(y/x) → π/2 as y/x → ∞. There is therefore a discontinuity at the origin. Note that u = θ in polar coordinates. This is independent of r and so u is multivalued as r → 0.

Figure 3: The solution of Burgers' equation at times t = 0, 5, 10, 15 for Exercise 1.9. [Plot over −20 ≤ x ≤ 40 omitted.]

Exercises 2

Boundary and initial data

2.1 (a) u(x, 0) = A(x) + B(x) = f(x). There is not enough information to determine both A(·) and B(·).
(b) u(x, 0) = A(x) + B(0) = f(x), so A(x) = f(x) − B(0). We would only need to know one value of B, namely B(0), to determine A. However, the initial condition gives no information about B.
(c) u(x, 0) = A(x)/B(0) = f(x), so A(x) = B(0)f(x). We would only need to know one value of B, namely B(0), to determine A, but we have no information about B.
(d) u(x, 0) = A(0) = f(x). This tries to set a constant, A(0), equal to something that is not constant, f(x), which is not possible!
(e) (exactly the same)
(f) u(x, 0) = A(∞) = f(x). This tries to set a constant, A(∞), equal to something that is not constant, f(x), which is again not possible!

2.2 (a) u(x, 1) = A(x + c) + B(x − c) = g(x), which provides insufficient information to find both A(·) and B(·).
(b) u(x, 1) = A(x) + B(1) = g(x), so A(x) = g(x) − B(1). We only need to know B(1) to determine A. We obtain no information about B.
(c) u(x, 1) = A(x)/B(1) = g(x), so A(x) = B(1)g(x). We only need to know B(1) to determine A. There is no information about B.
(d) u(x, 1) = A(x) = g(x), so A(x) = g(x) and u(x, t) = g(xt).
(e) u(x, 1) = A(x^2) = g(x). This determines A(ξ) for ξ ≥ 0 provided g(ξ) = g(−ξ).
(f) u(x, 1) = A(x^2) = g(x). Again, this determines A(ξ) for ξ ≥ 0 provided g(ξ) = g(−ξ).

2.3 From (1.2) with g(x) as given,
    u(x, t) = (1/√(4πt)) ∫_{−∞}^{∞} e^{−(x−s)^2/4t} g(s) ds = (1/√(4πt)) ∫_0^{∞} e^{−(x−s)^2/4t} ds = (1/√π) ∫_{−x/√(4t)}^{∞} e^{−z^2} dz,
where we have made the change of variable s = x + z√(4t) in the integrand. Thus u(0, t) = 1/2 from the given result. Also, for x > 0,
    u(x, 0) = lim_{t→0} (1/√π) ∫_{−x/√(4t)}^{∞} e^{−z^2} dz = (1/√π) ∫_{−∞}^{∞} e^{−z^2} dz = (2/√π) ∫_0^{∞} e^{−z^2} dz = 1,
since the integrand is an even function of z.
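The integral in Exercise 2.3 has the closed form u(x, t) = (1 + erf(x/√(4t)))/2, which makes both limits easy to confirm. The snippet below is a supplementary check, not part of the original solution.

```python
import math

# Closed form of the Exercise 2.3 solution for step initial data:
# u(x,t) = (1/sqrt(pi)) * integral_{-x/sqrt(4t)}^{inf} e^{-z^2} dz
#        = (1 + erf(x / sqrt(4 t))) / 2.
def u(x, t):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(4.0 * t)))

half = u(0.0, 1.0)    # u(0, t) = 1/2 for every t > 0
limit = u(1.0, 1e-8)  # u(x, 0+) = 1 for x > 0
```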

2.4 Defining

    L u(x, t) = { utt(x, t) − c^2 uxx(x, t)   for x ∈ (0, 1), t ∈ (0, T]
                { u(0, t)                     for x = 0, 0 < t < T
                { u(1, t)                     for x = 1, 0 < t < T
                { [u(x, 0), ut(x, 0)]         for t = 0, 0 < x < 1

and

    F(x, t) = { 0                             for x ∈ (0, 1), t ∈ (0, T]
              { g0(t)                         for x = 0, 0 < t < T
              { g1(t)                         for x = 1, 0 < t < T
              { [f1(x), f2(x)]                for t = 0, 0 < x < 1

allows (2.10) to be written as L u = F. Note that L and F are vector-valued for t = 0 in order to accommodate the two initial conditions.

2.5 First note that L(au) = −κ(au)xx = a(−κuxx) = aLu. Similarly, B(au) = aBu. Then

    (L au)(x, t) = { (au)t(x, t) + (Lau)(x, t) = a(ut(x, t) + Lu(x, t))   for (x, t) ∈ (0, 1) × (0, T)
                   { (Bau)(x, t) = aBu(x, t)                              for (x, t) ∈ {0, 1} × (0, T)
                   { au(x, 0)                                             for t = 0, x ∈ [0, 1]

and the right-hand side is equal to aL u(x, t).

2.6 We need to show that L(u + v) = L u + L v and L(au) = aL u for any twice-differentiable functions u, v and any constant a (alternatively, one could show that L(au + bv) = aL u + bL v, where b is any constant).

    L(u + v)(x, y) = { uxx(x, y) + vxx(x, y) + uyy(x, y) + vyy(x, y)       0 < x, y < 1
                     { u(0, y) + v(0, y)                                   x = 0, 0 < y < 1
                     { u(1, y) + v(1, y)                                   x = 1, 0 < y < 1
                     { −uy(x, 0) − vy(x, 0)                                y = 0, 0 < x < 1
                     { uy(x, 1) + vy(x, 1) + u(x, 1) + v(x, 1)             y = 1, 0 < x < 1
                   = L u(x, y) + L v(x, y),

    L(au)(x, y) = { a uxx(x, y) + a uyy(x, y)                              0 < x, y < 1
                  { a u(0, y)                                              x = 0, 0 < y < 1
                  { a u(1, y)                                              x = 1, 0 < y < 1
                  { −a uy(x, 0)                                            y = 0, 0 < x < 1
                  { a uy(x, 1) + a u(x, 1)                                 y = 1, 0 < x < 1
                = aL u(x, y).

2.7 First we confirm that u = A T^{1/2} (T−t)^{−1/2} e^{−x^2/4(T−t)} is a solution of ut = −uxx:
    ut = (1/2) A T^{1/2} (T−t)^{−3/2} e^{−x^2/4(T−t)} − (1/4) A T^{1/2} x^2 (T−t)^{−5/2} e^{−x^2/4(T−t)},
    ux = −(1/2) A T^{1/2} x (T−t)^{−3/2} e^{−x^2/4(T−t)},
    uxx = −(1/2) A T^{1/2} (T−t)^{−3/2} e^{−x^2/4(T−t)} + (1/4) A T^{1/2} x^2 (T−t)^{−5/2} e^{−x^2/4(T−t)} = −ut,
so that ut = −uxx. Note that with this solution |u(x, 0)| ≤ A and u(0, t) → ∞ as t → T. Hence, given any ε > 0 and any position (X, T), the solution u = ε T^{1/2} (T−t)^{−1/2} e^{−(x−X)^2/4(T−t)}, satisfying the initial condition u(x, 0) = ε e^{−(x−X)^2/4T}, for which |u(x, 0)| ≤ ε, becomes infinite as (x, t) → (X, T). Because of this, the negative heat equation is ill posed for t > 0 when subjected to initial conditions at t = 0; we can always find solutions that are arbitrarily small initially but that become infinite at any chosen time later on. Suppose that the negative heat equation is subjected instead to final conditions u(x, tf) = g(x) at some time t = tf, say. Then it is well posed for times before the final time, t < tf.

2.8 (a) ut − (x^2 + u)uxx = x − t is 2nd order and quasilinear.
(b) u^2 utt − (1/2)ux^2 + (u ux)x = e^u is 2nd order and quasilinear.
(c) ut − ∇^2 u = u^3 is 2nd order and semilinear.
(d) (uxy)^2 − uxx + ut = 0 is 2nd order and fully nonlinear.
(e) ut + ux − uy = 10 is 1st order, linear and inhomogeneous.

2.9 The highest derivatives take the form a utt + b utx + c uxx (or different subscripts for different independent variables):
(a) ut + utx − uxx + ux^2 = sin u: semilinear.
(b) ux + uxx + uy + uyy = sin(xy): linear and inhomogeneous.
(c) ux + uxx − uy − uyy = cos(xyu): semilinear.
(d) utt + x uxx + ut = f(x, t): linear and inhomogeneous.
(e) ut + u uxx + u^2 utt − utx = 0: quasilinear.
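The ill-posedness mechanism of Exercise 2.7 can be illustrated numerically: the given u satisfies ut = −uxx, stays bounded by A at t = 0, and blows up as t → T. The parameter values below are illustrative assumptions, not part of the manual's solution.

```python
import math

# Exercise 2.7: u = A T^(1/2) (T-t)^(-1/2) exp(-x^2 / (4(T-t))) solves the
# backward heat equation u_t = -u_xx; checked by finite differences.
A, T = 0.1, 1.0

def u(x, t):
    return A * math.sqrt(T) / math.sqrt(T - t) * math.exp(-x ** 2 / (4 * (T - t)))

h = 1e-4
x, t = 0.3, 0.4
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
residual = u_t + u_xx                      # should be ~0 since u_t = -u_xx

small_at_start = abs(u(x, 0.0)) <= A       # |u(x, 0)| <= A
blow_up = u(0.0, T - 1e-12) > 1e4          # u(0, t) grows without bound as t -> T
```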


Exercises 3

Origins of PDEs

3.1 Since H does not depend on t, we can differentiate ht + Hvx = 0 with respect to t:
    htt + Hvxt = 0, i.e. htt + H ∂x vt = 0.
But vt = −ghx, so ∂x vt = −ghxx and htt − c^2 hxx = 0, where c^2 = gH.

3.2 The flow speed will adjust until the resistive force balances the gravitational force: av^2 = bh, or v = Ch^{1/2}. Substituting this into (3.12) gives
    ht + C(h^{3/2})x = r, i.e. ht + (3/2)C h^{1/2} hx = r.
This becomes ut + u^{1/2} ux = f, where u = ((3/2)C)^2 h and f = ((3/2)C)^2 r.

3.3 This simply requires the vector form of differentiation of a product:
    ∇·(v⃗ T) = T ∇·v⃗ + v⃗·∇T.
For example, suppose that v⃗ = (u, v); then, in R^2,
    ∇·(v⃗ T) = (uT)x + (vT)y = ux T + u Tx + vy T + v Ty = T(ux + vy) + (u Tx + v Ty) = T ∇·v⃗ + v⃗·∇T.

3.4 The main difference between the situation leading to (3.8) and that in Section 3.2.5 is that the depth of the water is h(x, t) (rather than H + h(x, t)), so the approximation H + h ≈ H is no longer valid. The argument leading up to (3.8) becomes: for a narrow column of water of depth h(x, t) and width δx centred on the location x, the flux of water in the x-direction is hv, the product of depth and velocity, so the net flux into this strip, per unit time, is "flux in" minus "flux out", that is
    hv|_{x−δx/2} − hv|_{x+δx/2} ≈ −δx (hv)x
by Taylor expansion. If water is neither added nor removed from the tank, conservation of (fluid) mass requires that this net influx be balanced by the rate of change of the volume hδx of the column. Thus ∂t(hδx) = −δx(hv)x so, on dividing by δx, we obtain ht + (hv)x = 0.


Exercises 4

Classification of PDEs

4.1 If ϕ(x) = x/(1 + |x|), then ϕ is a continuous function for x ∈ R and
    ϕ(x) = x/(1+x) for x ≥ 0, ϕ(x) = x/(1−x) for x < 0  ⇒  ϕ′(x) = 1/(1+x)^2 for x ≥ 0, ϕ′(x) = 1/(1−x)^2 for x < 0.

(d) b^2 − 4ac = −4x: elliptic for x > 0, hyperbolic for x < 0, parabolic for x = 0.
(e) ut + u uxx + u^2 utt − utx = 0: b^2 − 4ac = 1 − 4u^3, so elliptic for u^3 > 1/4, hyperbolic for u^3 < 1/4, parabolic for u^3 = 1/4.

4.6 (a) (∂x − ∂y)(∂x + ∂y)u = (∂x − ∂y)(ux + uy) = ∂x(ux + uy) − ∂y(ux + uy) = uxx + uxy − uxy − uyy = uxx − uyy.
(b) ux = vy ⇒ uxx = vxy and uy = vx ⇒ uyy = vxy, hence uxx = uyy.
(c) Under the change of variables s = x + y, t = x − y, the chain rule gives
    ∂x = (∂x s)∂s + (∂x t)∂t = ∂s + ∂t,   ∂y = (∂y s)∂s + (∂y t)∂t = ∂s − ∂t,
so ∂x + ∂y = 2∂s, ∂x − ∂y = 2∂t and, by part (a), uxx − uyy = (∂x − ∂y)(∂x + ∂y)u = 4∂t∂s u. Integrating the PDE ∂t∂s u = 0 with respect to t gives us = f(s) (where f is an arbitrary function); then, integrating with respect to s, u = F(s) + G(t), where G is an arbitrary function and F(s) = ∫^s f(s) ds, being the integral of an arbitrary function, is also an arbitrary function. The general solution of uxx − uyy = 0 is, therefore,
    u(x, y) = F(x + y) + G(x − y).

4.7 With the change of variables s = x + y, t = x − y, it follows from the previous answer that

uxx − uyy = x + y becomes 4ust = s. Integrating with respect to t gives us = (1/4)st + f(s) (where f is an arbitrary function); then, integrating with respect to s, u = (1/8)s^2 t + F(s) + G(t), where G is an arbitrary function and F(s) = ∫^s f(s) ds is also an arbitrary function. The general solution of uxx − uyy = x + y is, therefore,
    u(x, y) = (1/8)(x + y)^2 (x − y) + F(x + y) + G(x − y).
The boundary conditions give:
    u(x, 0) = (1/8)x^3 + F(x) + G(x) = x,
    u(0, y) = −(1/8)y^3 + F(y) + G(−y) = −(1/2)y^3
and, when we replace y by x in the second of these, we find that F(x) + G(−x) = −(3/8)x^3 and so G(x) − G(−x) = x + (1/4)x^3. With the identity G(x) = (1/2)(G(x) − G(−x)) + (1/2)(G(x) + G(−x)) (every function can be written as the sum of an odd and an even function) we have G(x) = (1/2)x + (1/8)x^3 + E(x), where we have used E(x) for the even part of G (which is, as yet, undetermined). Then F(x) = (1/2)x − (1/4)x^3 − E(x), so that the solution is (after some algebra)
    u(x, y) = x(1 − xy) − (1/2)(x + y)y^2 − E(x + y) + E(x − y)
and involves an arbitrary even function E(·).

4.8 Comparing uxx − 2uxy − 3uyy = 0 with (4.12), a = 1, b = −1 and c = −3, and b^2 − ac = 4 > 0, so the equation is hyperbolic. The form given for the general solution suggests the change of variables s = 3x + y, t = x − y, and the chain rule gives
    ∂x = (∂x s)∂s + (∂x t)∂t = 3∂s + ∂t,   ∂y = (∂y s)∂s + (∂y t)∂t = ∂s − ∂t,
so ∂x + ∂y = 4∂s and ∂x − 3∂y = 4∂t. Hence
    uxx − 2uxy − 3uyy = (∂x^2 − 2∂x∂y − 3∂y^2)u = (∂x − 3∂y)(∂x + ∂y)u = 16ust.
Then integrating ust = 0 twice gives u = F(s) + G(t) = F(3x + y) + G(x − y) as the general solution. The initial conditions give
    u(x, 0) = g0(x) = F(3x) + G(x),
    uy(x, 0) = g1(x) = F′(3x) − G′(x).
Integrating the second of these leads to
    (1/3)F(3x) − G(x) = ∫^x g1(s) ds.
Solving for F(3x) and G(x) gives
    F(3x) = (3/4)g0(x) + (3/4)∫^x g1(s) ds,   G(x) = (1/4)g0(x) − (3/4)∫^x g1(s) ds
and so F(x) = (3/4)g0(x/3) + (3/4)∫^{x/3} g1(s) ds. Combining these with the earlier general solution gives
    u(x, y) = (1/4)(3g0(x + y/3) + g0(x − y)) + (3/4)∫_{x−y}^{x+y/3} g1(s) ds.
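As a check on the final formula of Exercise 4.8, one can instantiate it with concrete data. The choices g0(x) = sin x and g1(x) = cos x below are illustrative assumptions (picked so that the integral of g1 is available in closed form); the PDE and both initial conditions are then verified by finite differences.

```python
import math

# Exercise 4.8's solution formula with illustrative data g0 = sin, g1 = cos.
def g0(x): return math.sin(x)
def g1(x): return math.cos(x)
def G1(x): return math.sin(x)   # antiderivative of g1

def u(x, y):
    return (0.75 * g0(x + y / 3) + 0.25 * g0(x - y)
            + 0.75 * (G1(x + y / 3) - G1(x - y)))

h = 1e-3
x, y = 0.5, 0.8
u_xx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h ** 2
u_yy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h ** 2
u_xy = (u(x + h, y + h) - u(x + h, y - h)
        - u(x - h, y + h) + u(x - h, y - h)) / (4 * h ** 2)
pde_residual = u_xx - 2 * u_xy - 3 * u_yy     # should be ~0

ic_value = abs(u(x, 0.0) - g0(x))             # u(x, 0) = g0(x)
u_y0 = (u(x, h) - u(x, -h)) / (2 * h)
ic_slope = abs(u_y0 - g1(x))                  # u_y(x, 0) = g1(x)
```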

4.9 Comparing the equation 2uxx + 5uxt + 3utt = 0 with the template (4.12) we see that a = 2, b = 5/2 and c = 3, so b^2 − ac = 1/4 > 0 and the equation is hyperbolic. The factorisation
    2uxx + 5uxt + 3utt = (2∂x + 3∂t)(∂x + ∂t)u
suggests the change of variables y = x − t, s = 3x − 2t, and the chain rule gives
    ∂x = (∂x y)∂y + (∂x s)∂s = ∂y + 3∂s,   ∂t = (∂t y)∂y + (∂t s)∂s = −∂y − 2∂s,
so ∂x + ∂t = ∂s and 2∂x + 3∂t = −∂y. Hence
    2uxx + 5uxt + 3utt = (2∂x + 3∂t)(∂x + ∂t)u = −∂y∂s u
which, on integrating twice, gives the general solution u = F(y) + G(s) = F(x − t) + G(3x − 2t). The initial conditions give
    F(x) + G(3x) = 0,   −F′(x) − 2G′(3x) = x e^{−x^2}
and, integrating the second of these, we find F(x) + (2/3)G(3x) = (1/2)e^{−x^2} + C, where C is the constant of integration. It follows that
    F(x) = (3/2)e^{−x^2} + 3C,   G(3x) = −(3/2)e^{−x^2} − 3C ⇒ G(x) = −(3/2)e^{−x^2/9} − 3C.
Hence u(x, t) = (3/2)(e^{−(x−t)^2} − e^{−(3x−2t)^2/9}), in which there is no arbitrary constant.

4.10 (a) With u = v/r,
    ur = vr/r − v/r^2,   r^2 ur = r vr − v,   ∂r(r^2 ur) = (vr + r vrr) − vr = r vrr
and so a^2 ∂r(r^2 ur) = r^2 ∂t^2 u becomes a^2 vrr = vtt. The change of independent variables p = r − at, q = r + at gives
    ∂r = (∂r p)∂p + (∂r q)∂q = ∂p + ∂q,   ∂t = (∂t p)∂p + (∂t q)∂q = a(−∂p + ∂q),
so (a∂r)^2 − ∂t^2 = a^2(∂p + ∂q)^2 − a^2(−∂p + ∂q)^2 = 4a^2 ∂p∂q. Integrating vpq = 0 twice gives the general solution v = g(p) + f(q) = g(r − at) + f(r + at) and u = (g(r − at) + f(r + at))/r. The initial condition u(r, 0) = 0 gives g(r) + f(r) = 0 and ut(r, 0) = exp(−r^2) gives (−a g′(r) + a f′(r))/r = exp(−r^2). The first of these gives g(r) = −f(r), so that g′(r) = −f′(r), and the second initial condition then gives
    f′(r) = (1/2a) r e^{−r^2} ⇒ f(r) = −(1/4a) e^{−r^2} + C,
where C is an arbitrary constant. Therefore u(r, t) = (e^{−(r−at)^2} − e^{−(r+at)^2})/(4ar) for r > 0.

4.11 Substituting u(r, t) = r^m f(t − r) into the PDE and collecting terms leads to
    r^{m−2} f(t − r) m(m + n − 2) + r^{m−1} f′(t − r)(2m + n − 1) = 0.
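The solution obtained in Exercise 4.10 can be tested directly against the spherical wave equation a^2 ∂r(r^2 ur) = r^2 utt and its initial conditions. The sketch below is a supplementary numerical check; a = 2 is an arbitrary illustrative choice.

```python
import math

# Exercise 4.10: u(r,t) = (exp(-(r-at)^2) - exp(-(r+at)^2)) / (4 a r).
a = 2.0

def u(r, t):
    return (math.exp(-(r - a * t) ** 2) - math.exp(-(r + a * t) ** 2)) / (4 * a * r)

h = 1e-4
r, t = 1.3, 0.7

u_tt = (u(r, t + h) - 2 * u(r, t) + u(r, t - h)) / h ** 2

def r2ur(r, t):
    # r^2 * u_r by a central difference
    return r ** 2 * (u(r + h, t) - u(r - h, t)) / (2 * h)

lhs = a ** 2 * (r2ur(r + h, t) - r2ur(r - h, t)) / (2 * h)  # a^2 (r^2 u_r)_r
residual = lhs - r ** 2 * u_tt

ic1 = u(1.3, 0.0)                           # u(r, 0) = 0
u_t0 = (u(1.3, h) - u(1.3, -h)) / (2 * h)   # u_t(r, 0) = exp(-r^2)
```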

This will hold for all differentiable functions f if, and only if, m(m + n − 2) = 0 and 2m + n − 1 = 0. Thus, either m = 0 and n = 1 or n = 3 and m = −1 giving the solutions u(r, t) = f (t − r) when n = 1 and u(r, t) = f (t − r)/r when n = 3 (see the previous question).

4.12 Suppose that φ(x, t) satisfies the heat equation φt = φxx and that u(x, t) = φ((x − at)/√κ, t); then, by the chain rule,
    ut = (−a/√κ)φx + φt,   ux = (1/√κ)φx,   uxx = (1/κ)φxx,
so that ut + aux = φt = φxx = κuxx. Thus u satisfies the advection–diffusion equation ut + aux = κuxx. The result follows by choosing φ(x, t) to be the solution of the heat equation given by (4.22).

4.13 The change of independent variables x = s cos α − t sin α, y = s sin α + t cos α gives
    ∂s = (∂s x)∂x + (∂s y)∂y = cos α ∂x + sin α ∂y,   ∂t = (∂t x)∂x + (∂t y)∂y = −sin α ∂x + cos α ∂y,
so
    (∂s^2 + ∂t^2)u = (cos α ∂x + sin α ∂y)^2 u + (−sin α ∂x + cos α ∂y)^2 u = uxx + uyy
since α is constant and cos^2 α + sin^2 α = 1.

4.14 This question can be answered by using the change of variables s = y, t = β(cx − by) (in which β is a scaling constant) and following the procedure in Section 4.2.3. It is rather more economical to observe that the operator L = (1/c)((ac − b^2)∂x^2 + (b∂x + c∂y)^2) is related to (4.27) by interchanging a ↔ c and x ↔ y. Since Q̂ appearing in (4.29) is unaffected by these interchanges, the same solution is obtained in the two cases.

4.15
Figure 4: With a = 2, b = 0 and c = 3, Q̂ = (1/6)(3x^2 + 2y^2) in the solution (4.29). The figure shows a typical elliptical curve x^T Q̂ x = constant. [Plot omitted.]

4.16 Differentiating under the integral sign, we find
    ux = ∫_{−∞}^{∞} kx(x − s, y) g(s) ds,   uy = ∫_{−∞}^{∞} ky(x − s, y) g(s) ds,
    uxx = ∫_{−∞}^{∞} kxx(x − s, y) g(s) ds,   uyy = ∫_{−∞}^{∞} kyy(x − s, y) g(s) ds,
    uxx + uyy = ∫_{−∞}^{∞} (kxx(x − s, y) + kyy(x − s, y)) g(s) ds.
However,
    kxx(x − s, y) = (2/π) y (3(x − s)^2 − y^2) / ((x − s)^2 + y^2)^3 = −kyy(x − s, y)
and it follows that uxx + uyy = 0. Note that there are singularities in kxx and kyy at x = s when y = 0.

4.17 With G(x, y, t) = (1/4π)(log(x^2 + (y + t)^2) − log(x^2 + (y − t)^2)) we find
    ∂t G(x, y, t) = (1/4π)( 2(y + t)/(x^2 + (y + t)^2) + 2(y − t)/(x^2 + (y − t)^2) ),
    ∂t G(x, y, t)|_{t=0} = (1/π) y/(x^2 + y^2) = k(x, y)
from (4.32). With the change of variables s = x + y tan θ the interval −∞ < s < ∞ becomes −π/2 < θ < π/2 and ds = y sec^2 θ dθ, so that
    ∫_{−∞}^{∞} k(x − s, y) ds = (1/π) ∫_{−∞}^{∞} y/((x − s)^2 + y^2) ds = (1/π) ∫_{−π/2}^{π/2} dθ = 1,
where we have used the identity 1 + tan^2 θ = sec^2 θ.

4.18 (a) If g(s) > 0 for s ∈ R the integrand in (4.32) is strictly positive and consequently u(x, y) > 0.
(b) If g(x) = c, a positive constant, in a neighbourhood (x0 − ε, x0 + ε) of a point x0 and is zero otherwise, then
    u(x, y) = c ∫_{x0−ε}^{x0+ε} k(x − s, y) ds > 0
since k(x − s, y) > 0. Hence u(x, y) must be affected at every point in the domain by the value of c. The principle of superposition applies, so any change in the boundary values of u in the neighbourhood of any point x0 on the boundary will affect the solution throughout the domain.
(c) Since k(x, y) > 0 we deduce from (4.32) that
    |u(x, y)| ≤ ∫_{−∞}^{∞} k(x − s, y)|g(s)| ds ≤ ( ∫_{−∞}^{∞} k(x − s, y) ds ) max_s |g(s)| ≤ max_s |g(s)|,
since ∫_{−∞}^{∞} k(x − s, y) ds = 1.
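The normalization used in Exercise 4.17 (and relied on in Exercise 4.18(c)) can be confirmed numerically. The helper below is an illustrative addition: it applies the same substitution s = x + y tan θ as the text, approximating the θ-integral by a midpoint rule.

```python
import math

# Half-plane Poisson kernel k(x, y) = y / (pi (x^2 + y^2)); its integral over
# -inf < s < inf equals 1.  Substituting s = x + y tan(theta) maps the line to
# (-pi/2, pi/2) with ds = y sec^2(theta) dtheta.
def integrate_kernel(x, y, n=20000):
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = -math.pi / 2 + (i + 0.5) * dtheta   # midpoint rule
        s = x + y * math.tan(theta)
        k = y / (math.pi * ((x - s) ** 2 + y ** 2))
        total += k * y / math.cos(theta) ** 2 * dtheta   # k(x-s, y) ds
    return total

mass = integrate_kernel(0.3, 0.7)   # should be very close to 1
```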

With λ = µ^2 > 0, the general solution of the ODE is φ(x) = A sin µx + B cos µx. The boundary condition φ(0) = 0 requires B = 0, and φ′(1) = 0 requires A cos µ = 0. Since A ≠ 0 (that would lead to a trivial solution), it follows that λ = λn := (n − 1/2)^2 π^2, n = 1, 2, ..., with corresponding eigenfunctions φn(x) = sin((n − 1/2)πx).
(b) Suppose that ω^2 = λn for some value of n. If φ(x) is the corresponding eigenfunction, then
    ∫_0^1 φ(x)(−u″(x) + ω^2 u) dx = ∫_0^1 φ(x) f(x) dx.
Integrating the left-hand side by parts twice and using the boundary conditions on both u and φ, we find
    [−φ(x)u′(x)]_0^1 + ∫_0^1 (φ′(x)u′(x) + ω^2 φu) dx = ∫_0^1 φ(x)f(x) dx,
    [−φ(x)u′(x) + φ′(x)u(x)]_0^1 + ∫_0^1 (−φ″(x) + ω^2 φ)u(x) dx = ∫_0^1 φ(x)f(x) dx,
    2φ(1) − φ′(0) = ∫_0^1 φ(x)f(x) dx,
since −φ″(x) + ω^2 φ = 0. The data for the problem are inconsistent unless this condition is satisfied. When ω^2 = λ1 = π^2/4, then φ(x) = sin(πx/2) and, with f(x) = c, we find that c = (π/2)(2 − π/2). The general solution of −u″(x) = ω^2 u(x) + c is
    u(x) = A sin(πx/2) + B cos(πx/2) − c/ω^2.
Applying the BCs:
    u(0) = 1 = B − c/ω^2,   u′(1) = −2 = −(π/2)B,

both of which give the same value B = 4/π because the value of c was carefully chosen. The solution is, therefore,
    u(x) = A sin(πx/2) + (4/π)(cos(πx/2) − 1) + 1,
which is unique up to an arbitrary multiple of φ.

5.16 The chain rule with ξ = x/L gives
    d/dx = (dξ/dx) d/dξ = (1/L) d/dξ,   d^2/dx^2 = (1/L^2) d^2/dξ^2,
so the eigenvalue problem becomes, with v(ξ) = u(Lξ),
    −d^2 v/dξ^2 = (L^2 λ)v, 0 < ξ < 1,   v(0) = v(1) = 0,
in which the eigenvalue has been rescaled to L^2 λ. This new eigenvalue problem is identical to that in Example 5.10, so L^2 λ = (nπ)^2 with corresponding eigenfunctions vn(ξ) = sin nπξ. Thus the given problem has eigenvalues λn = (nπ/L)^2 with corresponding eigenfunctions un(x) = sin(nπx/L), n = 1, 2, .... Note: the frequency of vibration increases as L decreases.

5.17 When u(x) = M(x)w(x) we find
    u′(x) = M′(x)w(x) + M(x)w′(x),

    u″(x) = M″(x)w(x) + 2M′(x)w′(x) + M(x)w″(x),
so that u″ − au′ − bu = Mw″ + (2M′ − aM)w′ + (M″ − aM′ − bM)w. The coefficient of w′ can be made to vanish by choosing M such that 2M′ = aM, in which case 2M″ = a′M + aM′ = (a′ + (1/2)a^2)M and
    M″ − aM′ − bM = −(1/2)(2b + (1/2)a^2 − a′)M.
Thus u″ − au′ − bu = f becomes −w″ + Q(x)w(x) = G(x), where Q(x) = −(M″ − aM′ − bM)/M = (1/2)(2b + (1/2)a^2 − a′), G(x) = −f(x)/M(x) and M(x) = A exp((1/2)∫^x a(s) ds).

5.18 A function u is square integrable if ∫_0^1 (u(x))^2 dx < ∞. When u(x) = x^m,
    ∫_0^1 (u(x))^2 dx = ∫_0^1 x^{2m} dx = [ x^{2m+1}/(2m + 1) ]_0^1,
which is finite provided that 2m + 1 > 0. If m < −1/2,
    ∫_0^1 (u(x))^2 dx = lim_{ε→0+} [ x^{2m+1}/(2m + 1) ]_ε^1
and the limit does not exist (for m = −1/2 the integral is lim_{ε→0+} ln(1/ε), which is likewise infinite).

5.19 With φn(x) = e^{2πinx/L} then, for m ≠ n,
    ⟨φn, φm⟩ = ∫_0^L e^{2πinx/L} e^{−2πimx/L} dx = ∫_0^L e^{2πi(n−m)x/L} dx
             = [ (L/(2πi(n−m))) e^{2πi(n−m)x/L} ]_0^L = (L/(2πi(n−m)))(e^{2πi(n−m)} − 1) = 0,
since n − m is an integer and so e^{2πi(n−m)} = 1. Also, when m = n,
    ⟨φn, φn⟩ = ∫_0^L e^{2πinx/L} e^{−2πinx/L} dx = ∫_0^L dx = L.

5.20 For ϕn = sin(2πnx/L) and ψm = cos(2πmx/L) we use sin A cos B = (1/2)(sin(A + B) + sin(A − B)), so that, for m ≠ n,
    ⟨ϕn, ψm⟩ = (1/2) ∫_0^L ( sin(2(n + m)πx/L) + sin(2(n − m)πx/L) ) dx
             = [ −(L/(4(n + m)π)) cos(2(n + m)πx/L) − (L/(4(n − m)π)) cos(2(n − m)πx/L) ]_0^L = 0.
When m = n,
    ⟨ϕn, ψn⟩ = ∫_0^L sin(2nπx/L) cos(2nπx/L) dx = (1/2) ∫_0^L sin(4nπx/L) dx = [ −(L/(8nπ)) cos(4nπx/L) ]_0^L = 0.
Using cos A cos B = (1/2)(cos(A + B) + cos(A − B)) and supposing that m ≠ n,
    ⟨ψn, ψm⟩ = (1/2) ∫_0^L ( cos(2(n + m)πx/L) + cos(2(n − m)πx/L) ) dx
             = [ (L/(4(n + m)π)) sin(2(n + m)πx/L) + (L/(4(n − m)π)) sin(2(n − m)πx/L) ]_0^L = 0.
Finally, with m = n,
    ⟨ψn, ψn⟩ = ∫_0^L cos^2(2nπx/L) dx = (1/2) ∫_0^L (1 + cos(4nπx/L)) dx = L/2.
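These orthogonality relations are easy to confirm by quadrature. The sketch below is an illustrative addition (the frequencies and the value L = 3 are arbitrary choices); a composite midpoint rule is used, which is highly accurate for periodic integrands.

```python
import math

# Discrete check of Exercise 5.20: <phi_n, psi_m> = 0 and <psi_n, psi_n> = L/2.
def inner(f, g, L, n=4000):
    dx = L / n
    return sum(f(x) * g(x) for x in ((i + 0.5) * dx for i in range(n))) * dx

L = 3.0
phi = lambda n: (lambda x: math.sin(2 * math.pi * n * x / L))
psi = lambda n: (lambda x: math.cos(2 * math.pi * n * x / L))

cross = inner(phi(2), psi(3), L)   # <phi_2, psi_3>, expected 0
same = inner(psi(2), psi(2), L)    # <psi_2, psi_2>, expected L/2
```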

5.21 Suppose that the eigenvalue problem is defined by Lu = λu with boundary conditions Bu = 0. Let v = cu, where c is a constant. By the linearity of L and B, Lv = L(cu) = cLu = cλu = λv and Bv = B(cu) = cBu = 0. Thus v satisfies the same equations as u: Lv = λv and Bv = 0.

5.22 When a0 = a1 = 0, b0 ≠ 0, b1 ≠ 0 and q(x) ≡ 0, the Sturm–Liouville problem (1) becomes
    −(p(x)u′)′ = λw(x)u.

For λ > 4 we have the general solution
    u(x) = A sin(√(λ − 4) x) + B cos(√(λ − 4) x)
and the BCs give
    u(0) = 0 = B and u(π) = 0 = A sin(√(λ − 4) π).
Since A cannot be zero (this would imply that u(x) = 0, the trivial solution), λ must satisfy sin(√(λ − 4) π) = 0. Hence √(λ − 4) π = nπ, n = ±1, ±2, ..., giving λn = 4 + n^2 with corresponding eigenfunctions un(x) = sin nx, n = 1, 2, 3, ... (we do not include negative values of n, since sin(−nx) = −sin(nx) and we would have linearly dependent eigenfunctions).

5.26 Suppose that λ = −µ^2 < 0; then −u″ = −µ^2 u has general solution u(x) = Ae^{µx} + Be^{−µx}. The boundary conditions give
    u(0) − u′(0) = 0 = A + B − µ(A − B) and u(π) − u′(π) = 0 = Ae^{µπ} + Be^{−µπ} − µ(Ae^{µπ} − Be^{−µπ}).
That is, (µ − 1)A = (µ + 1)B and (µ − 1)Ae^{µπ} = (µ + 1)Be^{−µπ}. Substituting the first into the second leads to e^{µπ} = e^{−µπ} and µ = 0. Alternatively, the boundary conditions are satisfied by choosing µ = 1 and B = 0 (giving an eigenvalue λ = −1 with eigenfunction u(x) = e^x). The same solution is obtained with A = 0 and µ = −1.
When λ = 0 we have u(x) = Ax + B and the BCs give A = B = 0.
When λ = µ^2 > 0, then −u″ = µ^2 u has general solution u(x) = A sin µx + B cos µx and, applying the BCs, we find
    u(0) − u′(0) = B − µA = 0 and u(π) − u′(π) = 0 = (A sin µπ + B cos µπ) − µ(A cos µπ − B sin µπ),
which, on elimination of B, gives A(1 + µ^2) sin µπ = 0. Thus either A = 0 (the trivial solution), or µ = ±i (which leads back to the earlier case of λ = −1), or sin µπ = 0, that is µ = n = 1, 2, .... The corresponding eigenfunctions are obtained with B = nA; that is,
    λn = n^2,   un(x) = sin nx + n cos nx.

5.27 With u = Mw the ODE −x^2 u″ + 2xu′ − 2u = λx^2 u becomes (see Exercise 5.17)
    −x^2 (Mw″ + 2M′w′ + M″w) + 2x(M′w + Mw′) − 2Mw = λx^2 Mw,
    −x^2 Mw″ + (2xM − 2x^2 M′)w′ + (−x^2 M″ + 2xM′ − 2M)w = λx^2 Mw,

and the coefficient of w′ vanishes when M − xM′ = 0. Therefore we may choose M(x) = x and the eigenvalue problem becomes −w″ = λw. Since u′ = w + xw′, the BC u′(0) = 0 becomes w(0) = 0 and u(1) = u′(1) becomes w′(1) = 0. The standard argument can be applied to show that λ = 0 and λ < 0 both lead to trivial solutions. For the case λ > 0, let λ = µ^2; then −w″ = µ^2 w has general solution w = A sin µx + B cos µx. The BC w(0) = 0 implies B = 0 and w′(1) = 0 then leads to Aµ cos µ = 0. Choosing A = 0 gives immediately the trivial solution and choosing µ = 0 leads to the earlier case λ = 0, so we are left with cos µ = 0. Therefore µ = (n − 1/2)π, n = 1, 2, ..., and the eigenvalues are λn = (n − 1/2)^2 π^2 with corresponding eigenfunctions wn = sin((n − 1/2)πx), i.e., un(x) = x sin((n − 1/2)πx), n = 1, 2, ....

5.28 −u″ = λu: The standard argument rules out the possibilities λ < 0 and λ = 0 because they both lead to trivial solutions. We therefore suppose that λ > 0 and write λ = µ^2 (µ ∈ R). The general solution may then be written as u(x) = A sin µx + B cos µx and the BC u(0) = 0 implies that B = 0.

Figure 5: The graphs of µ and tan µ. [Plot omitted.]

The BC u(1) = u′(1) leads to A sin µ = Aµ cos µ and, since A cannot be zero (this would lead again to a trivial solution), µ must satisfy µ = tan µ and, for each such µ, the corresponding eigenfunction is u(x) = sin µx. We observe from the graphs of µ and tan µ shown in Fig. 5 that:
1. The roots of µ = tan µ occur in ± pairs. Since the eigenfunction is effectively unchanged when µ is replaced by −µ, we need only consider positive roots.
2. tan µ = 0 at µ = nπ (n = 0, 1, 2, ...).
3. tan µ → +∞ as µ → (n + 1/2)π from the left (i.e., from below), n = 0, 1, 2, ....
4. tan µ and µ are both monotonic increasing for µ ∈ (nπ, nπ + π/2), n = 1, 2, 3, ..., with µ − tan µ > 0 at the left of each interval and < 0 at the right, hence there must be at least one root µn in each of these intervals. Hence there are an infinite number of roots µn, from which we have an infinite number of positive eigenvalues λ = µn^2.
5. For large n, the roots approach the vertical asymptotes of tan µ, i.e., µn → (n + 1/2)π.
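The roots of µ = tan µ described above can be computed numerically; item 4 guarantees a sign change of µ − tan µ on each interval (nπ, nπ + π/2), so simple bisection works. The sketch below is an addition to the manual's discussion.

```python
import math

# Exercise 5.28: eigenvalues are lambda = mu^2 where mu = tan(mu).
# Bisect f(mu) = mu - tan(mu) on (n pi, (n + 1/2) pi), where f changes sign.
def root(n):
    f = lambda mu: mu - math.tan(mu)
    lo = n * math.pi + 1e-9            # f > 0 here
    hi = (n + 0.5) * math.pi - 1e-9    # f < 0 here (tan is huge)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

mus = [root(n) for n in range(1, 6)]
residuals = [abs(mu - math.tan(mu)) for mu in mus]
gap = abs(mus[-1] - 5.5 * math.pi)   # item 5: mu_n approaches (n + 1/2) pi
```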

1 −u′ (x)u∗ (x) + 0

Z

1

0 Z 1

−u′′ (x)u∗ (x) dx = λ u′ (x)(u′ (x))∗ dx = λ

0

|u(1)|2 +

and so

Z

0

Z

1

w(x)u(x)u∗ (x) dx

0

Z

1

w(x)|u(x)|2 dx

0

1

|u′ (x)|2 dx = λ

Z

1

w(x)|u(x)|2 dx

0

R1 |u(1)|2 + 0 |u′ (x)|2 λ = R1 2 0 w(x)|u(x)| dx

in which both numerator and denominator are real and positive. 5.30 Multiplying the differential equation −(p(x)u′ )′ + q(x)u = λw(x)u(x) by u∗ (x) and integrating 28

by parts and applying the BCs u′ (1) = −a1 u(1)/b1 and u′ (0) = a0 u(0)/b0 gives Z

1 ′ ′

(−(p(x)u ) + q(x)u)u (x) dx = λ

0 1

Z

1

w(x)u(x)u∗ (x) dx

0

Z 1 1 Z  ′ 2 2 −p(x)u (x)u (x) + p(x)|u (x)| + q(x)|u(x)| dx = λ w(x)|u(x)|2 dx 0 0 0 Z 1 Z 1  a0 a1 ′ 2 2 2 2 p(x)|u (x)| + q(x)|u(x)| dx = λ w(x)|u(x)|2 dx p(1) |u(1)| + p(0) |u(0)| + b1 b0 0 0 ′

and the result follows since p(x) > 0 (see (5.34)), a0 b0 > 0 and a1 b1 > 0.

5.31 This requires only the insertion of a factor w(x) in each of the integrands in the solution of Exercise 5.11.

5.32 We take the inner product of both sides of f(x) = Σ_{n=1}^∞ an φn(x) with φm(x). That is,
    ⟨f, φm⟩_w = ⟨ Σ_{n=1}^∞ an φn, φm ⟩_w.
Repeated application of Property (d) from Exercise 5.11 to the right-hand side gives
    ⟨f, φm⟩_w = Σ_{n=1}^∞ an ⟨φn, φm⟩_w
but, since ⟨φn, φm⟩_w = 0 whenever n ≠ m, this reduces to
    ⟨f, φm⟩_w = am ⟨φm, φm⟩_w
and the required result follows by replacing m by n.

5.33 Choosing u = φm(x) and v = φn(x), so that

    Lu = λm w φm and Lv = λn w φn,
then by Lagrange's identity (5.25), ∫_0^1 ( v* Lu − u Lv* ) dx = 0. However, the eigenfunctions are real by Exercise 5.30, so
    0 = ∫_0^1 ( v* Lu − u Lv* ) dx = ∫_0^1 ( φn λm w φm − φm λn w φn ) dx = (λm − λn) ⟨φn, φm⟩_w
and therefore ⟨φn, φm⟩_w = 0 provided λm ≠ λn.

Exercises 6

Finite difference methods in R1

6.1 Using Taylor series expansions with remainder terms, (6.13) with x = xm becomes
    v(xm ± h) = v(xm) ± h v′(xm) + (1/2)h^2 v″(xm) ± (1/6)h^3 v‴(xm) + (1/24)h^4 v″″(ξm^±),
where xm − h < ξm^− < xm < ξm^+ < xm + h. Adding these series together we obtain
    v(xm+1) + v(xm−1) = 2v(xm) + h^2 v″(xm) + (1/24)h^4 ( v″″(ξm^−) + v″″(ξm^+) ),
but, by the Intermediate Value Theorem, there must be a point ξm ∈ (ξm^−, ξm^+) such that (1/2)( v″″(ξm^−) + v″″(ξm^+) ) = v″″(ξm). Consequently, on rearranging,
    v″(xm) = h^{−2}( v(xm+1) − 2v(xm) + v(xm−1) ) − (1/12)h^2 v″″(ξm).
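The O(h^2) error just derived is easy to observe numerically: halving h should reduce the error of the centred second difference by roughly a factor of 4. The test function v = sin below is an illustrative choice, not from the exercise.

```python
import math

# Centred second difference of Exercise 6.1, applied to v = sin at x = 0.9.
def second_diff(v, x, h):
    return (v(x + h) - 2 * v(x) + v(x - h)) / h ** 2

v = math.sin
x = 0.9
exact = -math.sin(x)   # v''(x) for v = sin

e1 = abs(second_diff(v, x, 1e-2) - exact)
e2 = abs(second_diff(v, x, 5e-3) - exact)
ratio = e1 / e2        # ~4 for a second-order approximation
```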

6.2
(a) △+△−Um = △+(△−Um) = △+(Um − Um−1) = △+Um − △+Um−1
= (Um+1 − Um) − (Um − Um−1) = Um+1 − 2Um + Um−1 = δ²Um.
△−△+Um = △−(△+Um) = △−(Um+1 − Um) = △−Um+1 − △−Um
= (Um+1 − Um) − (Um − Um−1) = Um+1 − 2Um + Um−1 = δ²Um.
(b) △Um = ½(Um+1 − Um−1) = ½((Um+1 − Um) + (Um − Um−1)) = ½(△+Um + △−Um).
(c) δ²Um = Um+1 − 2Um + Um−1 = (Um+1 − Um) − (Um − Um−1) = △+Um − △−Um.
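The identities (a)–(c) can be checked mechanically on an arbitrary sequence. The short sketch below (our own; the helper names fwd, bwd, ctr, d2 are not from the text) evaluates each residual on random data.

```python
import random

U = [random.random() for _ in range(6)]
m = 3

fwd = lambda k: U[k + 1] - U[k]                  # forward difference  (triangle-plus)
bwd = lambda k: U[k] - U[k - 1]                  # backward difference (triangle-minus)
ctr = lambda k: 0.5 * (U[k + 1] - U[k - 1])      # central difference
d2  = lambda k: U[k + 1] - 2 * U[k] + U[k - 1]   # second difference delta^2

res_a = fwd(m) - fwd(m - 1) - d2(m)       # identity (a)
res_b = ctr(m) - 0.5 * (fwd(m) + bwd(m))  # identity (b)
res_c = d2(m) - (fwd(m) - bwd(m))         # identity (c)
print(res_a, res_b, res_c)                # all zero up to rounding
```

Each residual vanishes up to floating-point rounding, independently of the data.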

6.3 △+△+vm = △+(△+vm) = △+(vm+1 − vm) = △+vm+1 − △+vm
= (vm+2 − vm+1) − (vm+1 − vm) = vm+2 − 2vm+1 + vm = δ²vm+1.

Then the solution to Exercise 6.1 gives

h⁻²△+△+vm = h⁻²δ²vm+1 = v″m+1 + O(h²)

but v″m+1 = v″(xm + h) = v″m + h v‴m + O(h²) = v″m + O(h). Hence h⁻²△+△+vm = v″m + O(h). The corresponding result for △−△−vm is

h⁻²△−△−vm = h⁻²δ²vm−1 = v″m−1 + O(h²) = v″m + O(h).

6.4 By differentiating the exact solution u(x) = cos²(σx) − tan(σx)/tan σ it can be shown that

u″″(x) = 8σ⁴ ( cos 2σx + (tan σx / tan σ)(sec² σx − 3 sec⁴ σx) ).

With σ = 3/2 and 0 ≤ x ≤ 1 we find (rounding up the bounds to avoid fractions)

cos 2σx ≤ 1,  tan σx / tan σ ≤ 1,  sec² σx ≤ 200,  3 sec⁴ σx < 12 × 10⁴,

which combine to show that |u″″(x)| ≤ 3 × 10⁴: the fourth derivative is bounded. Thus the local truncation error is bounded: |Rm| ≤ 250h². From Example 6.13 the stability constant is C = 9/8 and so, from Theorem 6.7, ‖E‖h,∞ ≤ C‖R‖h,∞ ≤ 300h² (we have used the fact that R0 = RM = 0). The global error E therefore converges to zero with h at a second order rate.

6.5 At m = 0, the equation Lh Um = Fh,m gives U0 = α. For 0 < m < M,

−am Um−1 + bm Um − cm Um+1 = fm,

and at m = M, UM = β. With u = [U1, U2, . . . , UM−1]ᵀ, these equations may be combined to give Au = f, where A is the (M−1) × (M−1) tridiagonal matrix

A = h⁻² tridiag(−am, bm, −cm)   (diagonal b1, . . . , bM−1; sub-diagonal −a2, . . . , −aM−1; super-diagonal −c1, . . . , −cM−2),

and

f = [f1 + αa1, f2, . . . , fM−2, fM−1 + βcM−1]ᵀ.

6.6 With Lh Um := −h⁻²δ²Um + x²m Um the local truncation error is Rm = Lh um − fm, where fm = xm. Therefore, using (6.30) with rm = x²m, we find that Rm = O(h²). The BCs u(0) = 1, u(1) = −2 have the exact replacements U0 = 1, UM = −2, so the local truncation error Rh satisfies Rh,m = 0 for m = 0, M and Rh,m = O(h²) for 0 < m < M. Thus Rh = O(h²), showing that the method is consistent of second order.

When M = 4 the finite difference equations give, with h = 1/4,

m = 0 :  U0 = 1
m = 1 :  −(1/16)(U0 − 2U1 + U2) + (1/16)U1 = 1/4
m = 2 :  −(1/16)(U1 − 2U2 + U3) + (1/4)U2 = 1/2
m = 3 :  −(1/16)(U2 − 2U3 + U4) + (9/16)U3 = 3/4
m = 4 :  U4 = −2.

Multiplying the equations for m = 1, 2, 3 by 16 and moving the known values U0 = 1 and U4 = −2 to the right hand side, they may be written as Au = f, where

A = [  3  −1   0 ]      u = [U1]      f = [ 5 ]
    [ −1   6  −1 ],         [U2],         [ 8 ]
    [  0  −1  11 ]          [U3]          [10].
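Tridiagonal systems like the 3 × 3 one above are usually solved with the Thomas algorithm. The following sketch (our own illustration, not from the text) solves the M = 4 system and can be checked against the three equations directly.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (plain Python lists)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# The system from the solution of Exercise 6.6 with M = 4.
U = thomas([0.0, -1.0, -1.0], [3.0, 6.0, 11.0], [-1.0, -1.0, 0.0],
           [5.0, 8.0, 10.0])
print(U)
```

Substituting the computed U back into each row of A u = f gives residuals at rounding level.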

6.7 am = h⁻², bm = 2h⁻² + x²m, cm = h⁻², dm = xm for m = 1, 2, . . . , M − 1. Referring to Definition 6.9, these coefficients are all positive and bm = am + cm + x²m ≥ am + cm, so the corresponding operator Lh is of positive type.

6.8 (a) For 0 < m < M the local truncation error is Rh,m = −h⁻²δ²um − 1 = −u″(xm) − 1 + O(h²), so it is consistent of order two with the ODE −u″(x) = 1, 0 < x < 1. The BC U0 = 0 is clearly an exact representation of u(0) = 0. At m = M, Rh,M = 2h⁻¹(uM − uM−1) − 1. Now xM = 1, xM−1 = 1 − h and, with the Taylor expansion u(1 − h) = u(1) − hu′(1) + ½h²u″(1) + O(h³), we find

Rh,M = 2(u′(1) + O(h)) − 1 = 2u′(1) − 1 + O(h),

which is consistent of first order with the BC 2u′(1) = 1. The overall order of consistency is Rh = O(h).

(b) The scheme may be written in the general form (6.52) with a0 = 0, b0 = 1, c0 = 0, so the inequalities (6.53) are satisfied at m = 0 with strict inequality. For 0 < m < M: am = h⁻², bm = 2h⁻², cm = h⁻², so the inequalities (6.53) are satisfied. For m = M: aM = 2h⁻¹, bM = 2h⁻¹, cM = 0, so the inequalities (6.53) are satisfied. The conditions of Definition 6.17 hold and so, by Theorem 6.18, Lh is of positive type. The numerical solution U satisfies the equations Lh U = Fh, where Fh,m = 0 (m = 0), Fh,m = 1 (0 < m < M) and Fh,M = 1. It follows that U ≥ 0 since Fh ≥ 0.

6.9 Since Φ(x) is a quadratic function of x, h⁻²δ²Φm = Φ″(xm) exactly. Also,

ΦM − ΦM−1 = Φ(1) − Φ(1 − h) = hΦ′(1) − ½h²Φ″(1).

With the choice Φ(x) = x(4 − x) we have Φ″ = −2 and Φ′(1) = 2, so that −h⁻²δ²Φm = 2 ≥ 1 and 2h⁻¹(ΦM − ΦM−1) = 2(Φ′(1) − ½hΦ″(1)) = 4 + 2h ≥ 1, while Φ(0) = 0. Thus Φ(x) = x(4 − x) is a possible comparison function for Exercise 6.8. When Um = ½xm(3 − xm − h) we find U0 = 0, −h⁻²δ²Um = 1 and 2h⁻¹(UM − UM−1) = 1, and so U satisfies exactly the finite difference equations from Exercise 6.8. This is the only solution since Lh is inverse monotone.


The general solution of the differential equation −u″(x) = 1 is u = A + Bx − ½x². The BC u(0) = 0 implies A = 0 while u′(1) = ½ requires B = 3/2. Thus u(x) = ½x(3 − x) and the global error is

Em = u(xm) − Um = ½hxm,

so ‖E‖h,∞ = O(h) — convergence is at a first order rate.

6.10 When the equations are written as Au = f (with the Dirichlet BC at m = 0 incorporated), the first two rows of A are, using (6.40b),

A = [ 2εh⁻²           −(εh⁻² − h⁻¹)   0               · · · ]
    [ −(εh⁻² + h⁻¹)    2εh⁻²          −(εh⁻² − h⁻¹)   0     ]
    [                   ·                ·               ·   ].

The (1, 2) and (2, 1) entries in A differ so A cannot be symmetric.

6.11 The ODE is approximated by Lh Um = fm for m = 1, 2, . . . , M − 1, where

Lh Um := −h⁻²δ²Um + 20h⁻¹△Um = −h⁻²(1 + 10h)Um−1 + 2h⁻²Um − h⁻²(1 − 10h)Um+1

and fm = (mh)². Comparing the coefficients of Lh with those of Definition 6.9, am = h⁻²(1 + 10h) ≥ 0 for all h > 0, cm = h⁻²(1 − 10h) ≥ 0 for h ≤ 1/10 and bm = 2h⁻² = am + cm. Hence Lh is a positive type operator for h ≤ 1/10, i.e., M ≥ 10.

With ϕ(x) = Ax + B and Lu(x) = −u″(x) + 20u′(x) we find Lϕ(x) = 20A ≥ 1 if A ≥ 1/20. Also ϕ(0) ≥ 1 and ϕ(1) ≥ 1 if B ≥ 1 and A + B ≥ 1, respectively. All these conditions can be met by choosing A = 1/20 and B = 1, so ϕ(x) = 1 + x/20 is a comparison function for L with Dirichlet BCs. Since ϕ is a linear function, Lh ϕ(xm) ≥ 1, so it also acts as a comparison function for Lh. C = max₀≤x≤1 ϕ(x) = 21/20 and therefore Lh with Dirichlet BCs is stable by Lemma 6.8 for h ≤ 1/10.

6.12 Since δ²A = △A = 0, Um = A is a solution. For Um = Bρᵐ:

δ²ρᵐ = ρᵐ⁻¹(1 − 2ρ + ρ²) = ρᵐ⁻¹(ρ − 1)²,
△ρᵐ = ½ρᵐ⁻¹(ρ² − 1) = ½ρᵐ⁻¹(ρ − 1)(ρ + 1),

and so

−εh⁻²δ²Um + 2h⁻¹△Um = Bρᵐ⁻¹h⁻¹(ρ − 1)(−εh⁻¹(ρ − 1) + (1 + ρ))

and the right hand side vanishes when ρ = (ε + h)/(ε − h). The equations therefore have a general solution Um = A + Bρᵐ. Clearly ρ < 0 when h > ε, so Um = Bρᵐ changes sign from one grid point to the next. The amplitude of these oscillations will clearly depend on the ratio h/ε and the BCs. Note that the ODE −εu″ + 2u′ = 0 with which the scheme is consistent has the general solution u(x) = a + be^{2x/ε}, so that um = a + b(e^{2h/ε})ᵐ. It may be shown that ρ = e^{2h/ε} + O(h³), in keeping with a scheme that is accurate of second order (ρᵐ = (e^{2h/ε} + O(h³))ᵐ so, by the binomial theorem, ρᵐ = (e^{2h/ε})ᵐ + mh·O(h²) + higher order terms, and mh = xm).
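The claim ρ = e^{2h/ε} + O(h³) can be tested numerically. The sketch below (ours, not from the text) compares ρ = (ε + h)/(ε − h) with e^{2h/ε} for two step sizes; halving h should reduce the gap by roughly a factor of 8, as expected of an O(h³) discrepancy.

```python
import math

def rho(eps, h):
    """Discrete growth factor of the upwinded scheme, rho = (eps + h)/(eps - h)."""
    return (eps + h) / (eps - h)

eps = 1.0
gap_h  = abs(rho(eps, 0.1)  - math.exp(0.2))   # h = 0.1
gap_h2 = abs(rho(eps, 0.05) - math.exp(0.1))   # h = 0.05
print(gap_h / gap_h2)                          # roughly 8 (third-order gap)
```

The ratio is close to 2³ = 8, with the deviation coming from the O(h⁴) terms.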

6.13 The local truncation error is Rh = Lh u − Fh, where Lh Um := −εh⁻²δ²Um + 2h⁻¹△−Um and Fh,m = fm. Using the results in Table 6.1,

Rh,m = −εh⁻²δ²um + 2h⁻¹△−um − fm
     = −ε(u″m + (1/12)h²u″″m + O(h⁴)) + 2(u′m − ½hu″m + O(h²)) − fm
     = (−εu″m + 2u′m − fm) − hu″m + O(h²) = O(h)

since −εu″ + 2u′ = f. The order of consistency is therefore first order. Also

Lh Um = −(εh⁻² + 2h⁻¹)Um−1 + (2εh⁻² + 2h⁻¹)Um − εh⁻²Um+1

so, according to Definition 6.9, Lh is of positive type for all ε > 0 and h > 0. The comparison function ϕ(x) = ½x + 1, being linear in x, also satisfies Lh ϕ(xm) ≥ 1. Therefore Lh is stable by Lemma 6.8 (with stability constant C = 3/2). Finally, convergence at a first order rate is a consequence of Theorem 6.7.

6.14 Consistency: Lh Um := h⁻²(−(1 − h²)Um−1 + 2Um − (1 − h²)Um+1) and so the local truncation error is given by Rh = Lh u. Thus

Rh,m = h⁻²(−(1 − h²)um−1 + 2um − (1 − h²)um+1).

Using um±1 = um ± hu′m + ½h²u″m ± ⅙h³u‴m + (1/24)h⁴u″″(ξm±) (see Exercise 6.1), this reduces to

Rh,m = (−u″m + 2um) + h²u″m − (1/12)h²u″″(ξm) + O(h⁴) = O(h²)

since Lu(x) = −u″(x) + 2u(x) = 0. The local truncation error is O(h²). An alternative approach is to observe that Lh Um = −h⁻²δ²Um + 2Um + δ²Um and to use the Taylor expansion of δ²um.

Stability: The expression Lh Um may be written in the form (6.32) with am = cm = 1 − h², bm = 2, and these satisfy the requirements of a positive type operator in Definition 6.9. The operator Lh defined by (6.34) is inverse monotone. Therefore, to establish stability, we need only exhibit a suitable comparison function Φ (see Lemma 6.8). With Φ = C, a constant,

Lh Φm = C for m = 0, M,  and  Lh Φm = 2C for m = 1, 2, . . . , M − 1,

and we shall have Lh Φ ≥ 1 by choosing C = 1 (say). Thus Lh is a stable operator.

Convergence: This follows from Theorem 6.7.

6.15 From (6.51) the local truncation error is R̂h = L̂h u − F̂h and, since △−um = hu′m − ½h²u″m + O(h³) (see Table 6.1),

R̂h,M = (a + ½bh rM)uM + bh⁻¹△−uM − (β + ½bh fM)
     = (a + ½bh r(1))u(1) + b(u′(1) − ½hu″(1) + O(h²)) − (β + ½bh f(1))
     = (au(1) + bu′(1) − β) + ½hb(−u″(1) + r(1)u(1) − f(1)) + O(h²)

and so R̂h,M = O(h²) since au(1) + bu′(1) = β and −u″(1) + r(1)u(1) = f(1). Had we used △−um = hu′m − ½h²u″m + ⅙h³u‴(ξM), where xM−1 < ξM < xM, we would have found that R̂h,M = ⅙bh²u‴(ξM). In both cases the local truncation error is of second order.

6.16 The finite difference equation (6.19) is a second order approximation of the differential equation −u″(x) + r(x)u(x) = f(x) for 0 < x < 1 and the BC u(1) = β is replicated exactly by the numerical BC UM = β. It only remains to check the local truncation error at m = 0. This is given by

Rh,0 = au0 − bh⁻¹△+u0 − α = au(0) − b(u′(0) + ½hu″(0) + O(h²)) − α = −½bhu″(0) + O(h²)

since au(0) − bu′(0) = α. Thus Rh = O(h) and the approximation is consistent of first order.

6.17 The leading term in the local truncation error from the previous question is −½bhu″(0) and, using the ODE −u″ + ru = f at x = 0, this becomes ½bh(f(0) − r(0)u(0)). Thus, subtracting this term from the left of the BC gives the modified condition

aU0 − bh⁻¹△+U0 − ½bh(f0 − r0U0) = α.

The corresponding local truncation error is

Rh,0 = au0 − bh⁻¹△+u0 − ½bh(f0 − r0u0) − α
     = au0 − b(u′(0) + ½hu″(0) + O(h²)) − ½bh(f0 − r0u0) − α
     = (au(0) − bu′(0) − α) − ½bh(u″(0) − r0u0 + f0) + O(h²) = O(h²)

since au(0) − bu′(0) = α and −u″(0) + r0u0 = f0.

6.18 At m = 0, the equation Lh Um = Fh,m gives b0U0 − c0U1 = f0. For 0 < m < M,

−am Um−1 + bm Um − cm Um+1 = fm,

and at m = M, Lh UM = −aM UM−1 + bM UM = fM. With u = [U0, U1, . . . , UM]ᵀ, these equations may be combined to give Au = f, where A is the (M+1) × (M+1) tridiagonal matrix

A = h⁻² tridiag(−am, bm, −cm)  with first row (b0, −c0, 0, . . .) and last row (. . . , 0, −aM, bM),

and f = [α, f1, . . . , fM−1, β]ᵀ.

6.19 The proof of Theorem 6.10 can be used to prove that Um cannot have a negative minimum for 1 ≤ m ≤ M − 1. It remains to prove that Um cannot have a negative minimum at either m = 0 or m = M. We begin with the left end-point. Suppose that, contrary to the statement of the theorem, Um has a negative minimum at m = 0, so that U1 ≥ U0 (which implies −c0U1 ≤ −c0U0) and

Lh U0 = b0U0 − c0U1 ≤ (b0 − c0)U0 ≤ 0.

If this inequality were strict (because b0 > c0) it would contradict the assumption Lh U ≥ 0 and prove that a negative minimum at m = 0 could not occur. Suppose therefore that equality holds, which means that b0 = c0 and U1 = U0 < 0.

The argument at the right end-point is essentially the same — either a negative minimum cannot occur at m = M or bM = aM and UM−1 = UM < 0. We now turn to the interior grid points. Suppose that, contrary to the statement of the theorem, Um has a negative minimum at m, where 0 < m < M. The argument in Theorem 6.10 proves that either there is a contradiction of Lh Um ≥ 0 or both Um−1 = Um = Um+1 < 0 and bm = am + cm. Combining all three cases we see that either there is a contradiction of Lh U ≥ 0 or the same negative minimum is attained at all grid points and bm = am + cm for all m (recall a0 = cM = 0). However, this last possibility is ruled out by Definition 6.17, which requires that bm > am + cm for at least one value of m.

6.20 When 0 ≤ m < M the operator coincides with the one shown to be of positive type in Example 6.12. When m = M,

Lh UM = aUM + bh⁻¹△−UM = −bh⁻¹UM−1 + (a + bh⁻¹)UM,

which is of the form required by Definition 6.17 with aM = bh⁻¹ and bM = a + bh⁻¹, so both are non-negative if b ≥ 0 and bM ≥ aM if a ≥ 0. Thus Lh is of positive type if a, b ≥ 0. The case when both are negative can be dealt with in the same way if Lh is defined so that Lh UM := −aUM − bh⁻¹△−UM.

6.21 We find that

a0 = 0, b0 = c0 = 1,  so b0 = a0 + c0;
am = 1, bm = 2, cm = 1 (0 < m < M),  so bm = am + cm;
aM = bM = 1, cM = 0,  so bM = aM + cM.

Thus bm = am + cm with equality for every m, so the strict inequality required by Definition 6.17 fails and Lh is not of positive type.

Here a0 = cM = 0 and bm = 65 ≥ am + cm, and so Lh is of positive type. Also

Lh (A 8ᵐ) = A Lh 8ᵐ = A 8ᵐ⁻¹(8 × 8² − 65 × 8 + 8) = 0,
Lh (B 8⁻ᵐ) = B Lh 8⁻ᵐ = B 8⁻ᵐ⁺¹(8 − 65 × 8 + 8 × 8²) = 0,

so both sequences satisfy Lh U = 0. Since Lh is a linear operator, Lh (A 8ᵐ + B 8⁻ᵐ) = A Lh 8ᵐ + B Lh 8⁻ᵐ = 0 and Um = A 8ᵐ + B 8⁻ᵐ is a solution for any A, B. The BCs lead to the equations

A + B = α,  A 8ᴹ + B 8⁻ᴹ = β,

whence

A = 8⁻ᴹ(β − α 8⁻ᴹ)/(1 − 8⁻²ᴹ),  B = (α − β 8⁻ᴹ)/(1 − 8⁻²ᴹ)

and

Um = (8ᵐ⁻ᴹ(β − α 8⁻ᴹ) + 8⁻ᵐ(α − β 8⁻ᴹ)) / (1 − 8⁻²ᴹ).

When M = 10 we find 8⁻¹⁰ ≈ 9 × 10⁻¹⁰ and so, when α = ±1 and β = ±2, we have Um ≈ α 8⁻ᵐ + β 8ᵐ⁻ᴹ. The solutions with α = ±1, β = 2 are shown in Fig. 6. On the left α = 1, β = 2 so that min(0, α, β) = 0 ≤ Um ≤ max(0, α, β) = 2. On the right α = −1, β = 2 so that min(0, α, β) = −1 ≤ Um ≤ max(0, α, β) = 2.

Figure 6: The points Um = α 8⁻ᵐ + β 8ᵐ⁻ᴹ for m = 0, 1, . . . , M and M = 10 with α = 1, β = 2 (left) and α = −1, β = 2 (right).

6.24 If α, β ≥ 0 then min(0, α, β) = 0 and Lh U ≥ 0 then implies the result. Similarly, if α, β ≤ 0 then, by working with −U, Lh (−U) ≥ 0 implies −U ≥ 0 = max(0, α, β). Suppose therefore that α < 0 < β, that Lh U ≥ 0 and that U has a negative minimum. If the minimum occurs at m = 0 then Um ≥ α as required. If the minimum occurs at m = 1:

Lh U1 = (b1 − a1 − c1)U1 + a1(U1 − U0) + c1(U1 − U2),

in which each of the terms on the right hand side is ≤ 0. Strict inequality (< 0) in any term would lead to Lh U1 < 0 and contradict Lh U ≥ 0. Thus a negative minimum can occur at m = 1 if, and only if, b1 = a1 + c1 and U2 = U1 = U0 = α. A similar argument at m = 2 allows us to conclude that a negative minimum can occur at m = 1 or m = 2 if, and only if, bj = aj + cj (j = 1, 2) and U3 = U2 = U1 = U0 = α. Repeating the same argument we see that a negative minimum can occur at 1 ≤ m ≤ M − 1 if, and only if, bj = aj + cj (j = 1, 2, . . . , M − 1) and Um = α, m = 0, 1, . . . , M. But UM = β > 0, so this is not possible. The negative minimum must therefore occur at m = 0, so Um ≥ α for m = 0, 1, . . . , M.

A similar argument with −U shows that −Um ≥ −β, i.e., Um ≤ β for m = 0, 1, . . . , M. The case α > 0 > β follows by a similar argument.

6.25 The operator Lh is of positive type by Example 6.12 (with r(x) = σ² > 0):

Lh U0 = −h⁻¹△+U0 + ½hσ²U0 = (h⁻¹ + ½hσ²)U0 − h⁻¹U1,
Lh Um = −h⁻²δ²Um + σ²Um = h⁻²(−Um−1 + (2 + σ²h²)Um − Um+1),  m = 1, 2, . . . , M − 1,
Lh UM = (σ + ½σ²h)UM + h⁻¹△−UM = (σ + ½σ²h + h⁻¹)UM − h⁻¹UM−1,

which satisfy the criteria of Definition 6.17 for σ > 0.

6.26 Rh = Lh u − Fh. At m = 0, Rh,0 = u0 = 0. For 0 < m < M:

Rh,m = −h⁻²δ²um + um − f(mh)
     = −(u″m + (1/12)h²u″″m + O(h⁴)) + um − f(mh)   (see Table 6.1)
     = (−u″m + um − f(mh)) − (1/12)h²u″″m + O(h⁴) = −(1/12)h²u″″m + O(h⁴)

since −u″ + u = f. For m = M we use uM−1 = u(1 − h) = u(1) − hu′(1) + ½h²u″(1) + O(h³), so that

Rh,M = h⁻¹((1 + ½h²)uM − uM−1) − ½hf(1)
     = h⁻¹((1 + ½h²)u(1) − u(1) + hu′(1) − ½h²u″(1) + O(h³)) − ½hf(1)
     = u′(1) + ½h(−u″(1) + u(1) − f(1)) + O(h²) = O(h²)

since u′(1) = 0 and −u″ + u = f.

To prove that Fh ≥ 0 implies U ≥ 0 if Lh U = Fh: at m = 0, Lh U0 = U0 = Fh,0 = 0, and so U0 ≥ 0. For 0 < m < M, suppose that Um ≥ 0 is false. There must then be grid points at which Um < 0 and, in particular, a grid point m = j, say, at which Um attains a negative minimum. Thus Uj ≤ Uj±1. However, this means that

Lh Uj = h⁻²(−Uj−1 + 2Uj − Uj+1) + Uj ≤ Uj < 0

(since −Uj−1 + 2Uj − Uj+1 = (Uj − Uj−1) + (Uj − Uj+1) and both bracketed terms are ≤ 0). This directly contradicts the assertion that Lh Uj ≥ 0, so a negative minimum cannot occur for 0 < m < M. Finally, suppose that Um has a negative minimum at m = M, so that UM < 0 and UM ≤ UM−1. Then

Lh UM = h⁻¹((1 + ½h²)UM − UM−1) ≤ ½hUM < 0,

again in direct contradiction of the assertion that Lh UM ≥ 0. Thus U cannot have a negative minimum for m = 0, 1, . . . , M, and so Um ≥ 0.

Note: this proof is more straightforward than that given in Theorem 6.10 and Theorem 6.18 (see Exercise 6.19) because, in the notation of Definition 6.17, we have strict inequality in bm > am + cm, independently of h, for every m.

6.27 The argument used in (6.29) and (6.30) established a second order local truncation error for 0 < m < M. The local truncation error is identically zero at m = M. It remains to examine the situation at m = 0. We use the expansion h⁻¹△+u0 = (u(h) − u(0))/h = u′(0) + ½hu″(0) + O(h²) (see Table 6.1):

Rh,0 = −h⁻¹△+u0 − 1 = (−u′(0) − 1) − ½hu″(0) + O(h²) = −½hu″(0) + O(h²)

since −u′(0) = 1. The order of consistency is therefore first order. For the second case,

Rh,0 = −h⁻¹△+u0 − 1 − ½h = (−u′(0) − 1) − ½hu″(0) − ½h + O(h²) = ½h(−u″(0) − 1) + O(h²).

However, the ODE −u″(x) + xu(x) = 1 evaluated at x = 0 gives −u″(0) = 1, so that Rh,0 = O(h²): the order of consistency is second order.

6.28 A second order consistent approximation of the ODE −u″(x) = f(x) is −h⁻²δ²Um = fm (see Example 6.12). At x = 0, h⁻¹△+u0 = (u(h) − u(0))/h = u′(0) + ½hu″(0) + O(h²) and, from the ODE, u″(0) = 0. Hence the numerical BC −h⁻¹△+U0 = 1 is consistent of order two. At x = 1, we use a backward difference: h⁻¹△−uM = h⁻¹(u(1) − u(1 − h)) = u′(1) − ½hu″(1) + O(h²) and, since the ODE evaluated at x = 1 gives −u″(1) = 1, we have u′(1) = h⁻¹△−uM − ½h + O(h²). The finite difference approximation h⁻¹△−UM + UM = 2 + ½h of the BC u′(1) + u(1) = 2 is, by construction, consistent of order two.

6.29 Since h⁻²δ²um = u″m + O(h²) and h⁻¹△um = u′m + O(h²) (see Table 6.1), the given scheme is consistent of second order. At x = 1, we use a backward difference: h⁻¹△−uM = h⁻¹(u(1) − u(1 − h)) = u′(1) − ½hu″(1) + O(h²) and, since the ODE evaluated at x = 1 gives −u″(1) + 4u′(1) = 0, the BC u′(1) = 2 gives u″(1) = 8. Consequently, u′(1) = h⁻¹△−uM + 4h + O(h²) and the finite difference replacement h⁻¹△−UM = 2 − 4h of the BC u′(1) = 2 is consistent of second order.

6.30 Checking the local truncation error:

Rh,0 = h⁻¹(u1 − u0) − Au0 − B = u′(0) + ½hu″(0) − Au0 − B + O(h²)
= u′ (0) − 21 h − Au0 − B + O(h2 ) = (2 − A)u(0) − 21 h − B + O(h2 ) 40

where we have used −u′′ (0) = 1 (from the ODE) and the BC u′ (0) = 2u(0). Thus Rh,0 = O(h2 ) by choosing A = 2 and B = − 21 h leading to h−1 (U1 − U0 ) − 2U0 = − 21 h.

Note: This BC would need to be written in terms of the outward normal derivative: −u′ (0) + 2u(0) = 0 with the approximation −h−1 (U1 − U0 ) + 2U0 = 12 h in order to define an operator of positive type. 6.31

With L the h⁻¹-scaled bidiagonal matrix whose rows are (−1, 1, 0, . . . , 0), (0, −1, 1, 0, . . .), . . . , with last row (0, . . . , 0, −1, ρ), the product with its transpose is

LLᵀ = h⁻² tridiag(−1, 2, −1)  with (M, M) entry 1 + ρ².
      

When r(x) ≡ 0 and aM,M = 1 + ah/b, the matrix A in (6.47) becomes

A = tridiag(−1, 2, −1)  with final diagonal entry 1 + ah/b.

Thus LLᵀ = A by choosing ρ² = ah/b. This leads to a real value of ρ provided that a and b have the same sign. It follows that vᵀAv = vᵀLLᵀv = (Lᵀv)ᵀ(Lᵀv) = wᵀw ≥ 0 when w = Lᵀv (as in Lemma 6.1). This proves that A is positive semi-definite. We need to check that it is not possible to choose a nonzero vector v such that vᵀAv = 0. This can only occur if w = 0, but it is easily verified that the only solution of Lᵀv = 0 is v = 0.

6.32 Since h⁻²δ²vm = v″m + O(h²), we choose vm = u″(xm) = h⁻²δ²um + O(h²). Thus

h⁻²δ²vm = h⁻²δ²(h⁻²δ²um) + O(h²) = u″″m + O(h²), which implies that

h⁻⁴δ⁴um = u″″m + O(h²).

Now, with wm = δ 2 um , δ 4 um = δ 2 wm = wm−1 − 2wm + wm+1 = (um−2 − 2um−1 + um ) − 2(um−1 − 2um + um+1 ) + (um − 2um+1 + um+2 ) = um−2 − 4um−1 + 6um − 4um+1 + um+2

2 (Note Pascal’s triangular numbers). This result can be used to show that h−4 δ 4 um = u′′′′ m +O(h ) by combining the Taylor expansions of um±1 and um±2 appropriately. 1 2 ′′′′ From u′′m = h−2 δ 2 um − 12 h um + O(h4 ), it follows that  1 4 u′′m = h−2 δ 2 − 12 δ um + O(h4 ).

Also

(12δ² − δ⁴)um = 12(um−1 − 2um + um+1) − um−2 + 4um−1 − 6um + 4um+1 − um+2 = −um−2 + 16um−1 − 30um + 16um+1 − um+2

which gives the required result when both sides are divided by 12. Thus, at grid points m = 2, 3, . . . , M − 2 the finite difference replacement  1 4 −h−2 δ 2 − 12 δ Um + rm Um = fm

of −u′′ (xm )+rm u(xm ) = fm is, by construction, consistent of order four and, for each m, involves values of U at the five consecutive grid points xm , xm±1 , xm±2 . It can shown (see Project 13.2) that, when r(x) ≡ 0 and the ODE is approximated at x1 and xM−1 by the standard second order finite difference equations −h−2 δ 2 Um = fm that the convergence rate is fourth order in h provided that u and its first six derivatives are bounded on the interval (0, 1). 6.33 Using (6.60) and the fact that δ 2 fm = 12δ 2 xm = 0 leads to   h−2 −Um−1 + 2Um − Um+1 + Um−1 + 10Um + Um+1 = 12xm , i.e.,

− (M 2 − 1)Um−1 + (2M 2 + 10)Um − (M 2 − 1)Um+1 = 12xm

since h⁻¹ = M. With BCs u(0) = 3 and u(1) = −5, these can be written when M = 4 (so that xm = m/4) as

[ 42  −15    0 ] [U1]   [ 45 + 12x1 ]
[−15   42  −15 ] [U2] = [    12x2   ]
[  0  −15   42 ] [U3]   [−75 + 12x3 ].
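The fourth-order accuracy of the five-point replacement of u″ used above can be confirmed numerically. The sketch below (ours; the helper name d2_fourth is not from the text) applies the stencil (−1, 16, −30, 16, −1)/(12h²) to v(x) = sin x and checks that halving h divides the error by roughly 2⁴ = 16.

```python
import math

def d2_fourth(v, x, h):
    """Fourth-order five-point approximation of v''(x):
    (-v(x-2h) + 16 v(x-h) - 30 v(x) + 16 v(x+h) - v(x+2h)) / (12 h^2)."""
    return (-v(x - 2 * h) + 16 * v(x - h) - 30 * v(x)
            + 16 * v(x + h) - v(x + 2 * h)) / (12 * h**2)

x = 0.3
exact = -math.sin(x)                       # v''(x) for v(x) = sin(x)
e1 = abs(d2_fourth(math.sin, x, 0.1)  - exact)
e2 = abs(d2_fourth(math.sin, x, 0.05) - exact)
print(e1 / e2)                             # roughly 16 for a fourth-order formula
```

The observed error ratio sits close to 16, consistent with the h⁴ remainder of the (12δ² − δ⁴)/12 formula.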

6.34 Suppose that 

Lh um := −h−2 δ 2 um + rm um +

1 2 12 δ (rm um )

and Fh,m = fh +

1 2 12 δ fm .

1 2 ′′′′ h um + O(h6 ), then the local truncation error at internal grid points Using h−2 δ 2 um = u′′m + 12 (given by Rh = Lh u − Fh ) satisfies     1 2 ′′′′ 1 Rh,m = − u′′m + 12 h2 (ru − f )′′m + O(h4 ) h um + O(h6 ) + rm um + 12    2  ′′ 1 2 d + O(h6 ) = −u′′m + rm um − fm + 12 h −u (x) + r(x)u(x) − f (x) dx2 x=xm

so that Rh = O(h6 ) and the leading terms contain derivatives of u up to the 6th and derivatives of f up to the 4th order. A more careful treatment would start with the expansions v(xm ± h) = v(xm ) ± hv ′ (xm ) + 12 h2 v ′′ (xm ) ±

1 3 ′′′ 3! h v (xm ) + 1 4 ′′′′ 1 5 (5) (xm ) 4! h v (xm ) ± 5! h v

− + where xm − h < ξm < xm < ξm < xm + h and deduce that

v ′′ (xm ) = h−2 δ 2 v(xm ) −

1 2 ′′′′ 12 h v (xm )

4 (6) 1 (ξm ). 360 h v

It can then be shown, in conjunction with Exercise 6.1, that 1 h4 u(6) (ξm ) + Rm = − 360

4 4 d 1 144 h dx4 (r(x)u(x)

42

− f (x))

x=ηm

+

1 6 (6) ± (ξm ), 6! h v

where ξm , ηm both lie in the interval (xm−1 , xm+1 ). 6.35 For indices m for which Fh,m 6= 0 we have the standard inequalities: Lh Um = Fh,m ≤ kFh kh,∞ ≤ kFh,m kh,∞ Lh Φm . The same end result also holds for indices m such that Fh,m = 0 since Lh Um = 0 ≤ kFh,m kh,∞ Lh Φm (because the right hand side is automatically non-negative). Thus, Lh Um ≤ kFh,m kh,∞ Lh Φm for all m from which we have  Lh Um − kFh,m kh,∞ Φm ≤ 0

and then Um − kFh,m kh,∞ Φm ≤ 0 by inverse monotonicity.

This result is particularly useful when dealing with the local truncation error since it is frequently identically zero at some grid points (see the following example). 6.36 The local truncation error Rh in Example 6.13 satisfies ( 0, m = 0, M Rh,m = 1 2 ′′′′ 4 − 12 h u (xm ) + O(h ), 0 < m < M. The global error E satisfies Lh E = Rh . With the comparison function Φm = ϕ(xm ) = 12 xm (1 − xm ), which vanishes at the same grid points as Rh , we conclude, via the previous exercise, that Em ≤ kRh kh,∞ Φm A similar argument starting with Lh E = Rh ≥ −kRhkh,∞ leads to Em ≥ −kRhkh,∞ Φm and therefore kEkh,∞ ≤ kRh kh,∞ kΦm kh,∞ . 6.37 Suppose that Lh Um := −am Um−1 + bm Um − cm Um+1 and the coefficients have to be chosen so that this is to be consistent with the operator defined by Lv(x) := −v ′′ (x) + r(x)v(x). We require Lh vm = (Lv(x)) x=x for v(x) = 1, (x − xm ) and (x − xm )2 . Since Mh := Lh − L m is a linear operator (i.e., Mh (u + v) = Mh u + Mh v and Mh (cu) = cMh u for any sufficiently smooth functions u and v and any constant c) it follows that    Mh (A + B(x − xm ) + C(x − xm )2 ) = A Mh 1 + B Mh (x − xm ) + C Mh (x − xm )2 ) = 0

for arbitrary constants A, B and C. Thus Mh p(x) = 0 for any quadratic polynomial p(x). Applying the conditions Lh vm = (Lv(x)) x=xm for v(x) = 1, (x − xm ) and (x − xm )2 we find Lh 1 = −am + bm − cm = rm = (L1) x=xm Lh (x − xm ) = ham − hcm = 0 = (L(x − xm )) x=x m 2 2 2 Lh (x − xm ) = −h am − h cm = −2 = (L(x − xm )2 ) x=xm

The first and third combine to give bm = −2h−2 + rm and the second and third give am = cm = h−2 . Thus Lh coincides with the approximation used in standard finite difference approximation (6.20). 43

6.38 Repeating the previous exercise with Lv(x) := −εv ′′ (x) + 2v ′ (x) we find Lh 1 = −am + bm − cm = 0 = (L1) x=x m Lh (x − xm ) = ham − hcm = 2 = (L(x − xm )) x=x m Lh (x − xm )2 = −h2 am − h2 cm = −2ε = (L(x − xm )2 ) x=xm

leading to am = h−2 (ε + h), cm = h−2 (ε − h), bm = −2h−2 ε and Lh coincides with the operator in (6.40b). 6.39 Case (a) is entirely standard: u(x) = 1 − 16(x − 12 )4 . In case (b), u′′ (x) = 0 for 0 < x < 12 and u(0) = 0 which gives u(x) = Ax for an arbitrary constant A. For 21 < x < 1, −u′′ (x) = 384(x− 21 )2 which, on integrating twice gives u(x) = ax+b−32(x− 12 )4 . The three conditions u(1) = a + b − 2 = 0, continuity of u at x = 21 : u( 21 −) = u( 12 +), i.e., 1 1 1 ′ ′ 1 ′ 1 2 A = 2 a + B and continuity of u at x = 2 : u ( 2 −) = u ( 2 +), i.e., A = a lead to the solution ( 0, 0 ≤ x ≤ 1/2 . u(x) = 2x + 1 4 −32(x − 2 ) , 1/2 < x ≤ 1 A similar procedure in case (c) leads to ( u(x) = 2x +

0, 0 ≤ x ≤ 1/2 . 1 3 −16(x − 2 ) , 1/2 < x ≤ 1

These are shown in Fig. 7. These boundary value problems are solved numerically using the 1

1

x

Figure 7: The solutions for Exercise 6.39: case (a) solid line, case (b) dashed line and case (c) dotted line.

standard “second order” scheme −h−2 δ 2 Um = fm ,

m = 1, 2, . . . , M − 1

with BCs U0 = UM = 0. The values of M chosen for the experiments are 8j , 9j and 5j (j = 1, 2, . . . , 7) and the results are summarised in Fig. 8. On the left are shown graphs of M 2 × global error for M = 9 (dots) and M = 16 (crosses) in cases (a), (b) and (c). We also include the case labelled “Test” where f (x) = 8 and the exact solution u(x) = 4x(1 − x) is a polynomial of degree two, for which the global error should be identically zero. The graphical results show an error of less than 10−14 attributable to roundoff error. Including such a test in numerical experiments is recommended to test the integrity of the code. In cases (a) and (b) the graphs with different values of M are indistinguishable, in keeping with the global error E ∝ 1/M 2 . In case (c) the graphs of M 2 × E appear to be two distinct continuous piecewise linear functions. Note that a grid point lies exactly on the discontinuity in f ′ (x) at x = 1/2 when M = 16 but this is not the case when M = 9. On the right we show loglog plots of the global error as a function of h. For cases (a) and (b) the results are almost identical and lie on a line having slope two, again in keeping with E ∝ h2 The 44

global error in case (c) is more erratic, especially for the larger values of h. However, the error itself is never larger than in cases (a) and (b) and appears to behave more smoothly as h → 0. The theory developed in this chapter requires that the first four derivatives of the exact solution to be bounded in order that the local truncation error be bounded by a multiple of h2 , and for the global error to converge to zero at a second order rate. The results of these experiments suggest that a second order convergence rate is attainable under less onerous constraints, but that goes beyond the scope of this book.

0

10

−14

4

x 10

4

2 0

3

Test

2

−2 −4 0

−2

10

(a)

1 0.5

1

0 0

−4

o sl

10

0.5

pe

=

2

1 −6

10

6

4 3

4

(b)

2

2 0 0

−8

10

(c)

1 0.5

1

0 0

−10

10

−5

10

0.5

−4

10

−3

−2

10

10

−1

10

0

10

h

1

Figure 8: The graphs show M 2 ×E (E is the global error) for M = 9 (dots) and M = 16 (crosses) in the test case and cases (a), (b) and (c). 45

Exercises 7

Maximum principles and energy methods

7.1 With v(x, t) = u(x, t) + ε(τ − t) in place of (7.2) and it follows that −κvxx + vt = −κuxx + ut −κε so that −κvxx + vt < 0 for all positive values of ε and the proof proceeds as before (but restricted to 0 ≤ t ≤ τ ). 7.2 We have ut = κuxx and so: 1. −κuxx + ut ≤ 0 and therefore, by Theorem 7.1, u(x, t) is either constant or else attains its maximum value on Γτ . 2. −κuxx + ut ≥ 0, that is κ(−u)xx − (−u)t ≤ 0 and therefore, by Theorem 7.1, −u(x, t) is either constant or else attains its minimum value on Γτ . Thus, u(x, t) is either constant or else attains its minimum value on Γτ . 7.3 If L u(x, t) ≥ 0 then −κuxx + ut ≥ 0 and therefore, by Theorem 7.1 applied to −u (or part (b) of the previous solution), u(x, t) is either constant or else attains its minimum value on Γτ . But L u(x, t) ≥ 0 also implies that u(x, t) ≥ 0 for (x, t) ∈ Γτ . Hence u(x, t) ≥ 0 for (x, t) ∈ Ωτ . 7.4 Differentiating M (t) and using the PDE gives ′

M (t) =

Z

1

ut dx =

Z

  1 uxx − 2ux dx = ux − 2u

1

x=0

0

0

= 0.

R1 Therefore M (t) = M (0) = 0 6x dx = 3. For the energy E(t) we have, using integration by parts, ′

E (t) =

Z

1

2uut dx = 2 0

= −2

Z

0

Z

0

1

1

 u uxx − 2ux dx

 1 (ux ) dx + 2 uux − 2u 2

2

x=0

Thus E ′ (t) ≤ 0 so that E(t) ≤ E(0) =

R1 0

= −2

Z

1

(ux )2 dx.

0

36x2 dx = 12.

7.5 The PDE ut = xuxx + ux may be written as ut = (xux )x so, multiplying by u and integrating over the interval (0, 1) gives Z

1

uut dx =

0

1 2

d dt

Z

0

Z

1

0

1

u2 dx = −

Z

 1 u(xux )x dx = uxux x=0 −

0

1

Z

1

x(ux )2 dx

0

x(ux )2 dx ≤ 0.

The boundary terms vanish by virtue of the BCs u(0, t) = u(1, t) = 0. Thus, the energy R1 R1 E(t) := 0 u2 dx is a decreasing function of t so E(t) ≤ E(0) = 0 sin2 πx dx = 1/2. 46

7.6 Differentiating under the integral sign we find, on integrating by parts, Z 2 Z 2 Z 2 d u(rur )r dr ruut dr = 2 ru2 dr = 2 dt 1 1 1 Z 2 Z 2  2 = −2 r(ur )2 dr + 2 ruur = −2 r(ur )2 dr r=1

1

since u = 0 at r = 1 and r = 2. Thus, the energy E(t) := R1 so E(t) ≤ E(0) = 0 r(r − 1)2 (2 − r)2 dx = 1/20.

R2 1

1

ru2 dr is a decreasing function of t

7.7 Differentiating under the integral sign we find Z 1 Z 1 d (ux )2 dx = 2 ux uxt dx dt 0 0

but uxt = uxxx , and so d dt

Z

1

(ux )2 dx =

0

2

Z

0

1

ux uxxx dx = −2

Z

1

2

 1 (uxx )2 dr + 2 ux uxx

x=0

.

The boundary terms clearly vanish if homogeneous Neumann conditions ux = 0 are applied at x = 0 and x = 1. When homogeneous Dirichlet conditions u = 0 are applied we deduce, from the PDE, that uxx = ut and boundary terms again vanish because ut (0, t) = ut (1, t) = 0. Thus R1 (ux )2 dx is a decreasing function of t and 0 Z

0

1

(ux (x, t))2 dx ≤

Z

1

(ux (x, 0))2 dx =

Z

1

(g ′ (x))2 dx.

0

0

7.8 When the PDE $-u_{xx} - u_{yy} = f$ is written as $-\operatorname{div}\operatorname{grad} u = f$ and multiplied by $u$, we use the multi-dimensional version of differentiation of a product, $\operatorname{div}(\alpha\vec v) = \alpha\operatorname{div}\vec v + \vec v\cdot\operatorname{grad}\alpha$, with $\vec v = \operatorname{grad} u$ and $\alpha = u$. Thus
$$\int_\Omega u f\,d\Omega = -\int_\Omega u\operatorname{div}\operatorname{grad} u\,d\Omega = -\int_\Omega \operatorname{div}(u\operatorname{grad} u)\,d\Omega + \int_\Omega \operatorname{grad} u\cdot\operatorname{grad} u\,d\Omega.$$
Applying the Divergence Theorem to the first term on the right hand side leads to
$$\int_\Omega u f\,d\Omega = -\int_{\partial\Omega} u\,\vec n\cdot\operatorname{grad} u\,ds + \int_\Omega \big((u_x)^2 + (u_y)^2\big)\,d\Omega = \int_\Omega \big((u_x)^2 + (u_y)^2\big)\,d\Omega$$
since $u = 0$ on $\partial\Omega$.

Suppose that the problem consisting of Poisson's equation $-\operatorname{div}\operatorname{grad} u = f$ in $\Omega$ with BCs $u = 0$ on $\partial\Omega$ has two solutions $u_1, u_2$. Then, by virtue of the linearity of the problem, the difference $v = u_1 - u_2$ satisfies Laplace's equation $-\operatorname{div}\operatorname{grad} v = 0$ in $\Omega$ with BCs $v = 0$ on $\partial\Omega$. The energy identity applied to $v$ (with $f \equiv 0$) then gives
$$\int_\Omega \big((v_x)^2 + (v_y)^2\big)\,d\Omega = 0$$
and so $v_x = v_y = 0$ in $\Omega$. This implies that $v = \text{constant}$ in $\Omega$ but, since $v = 0$ on $\partial\Omega$, the constant must vanish. Hence $v = u_1 - u_2 = 0$ and the solution must be unique.

7.9 With homogeneous Neumann BCs: $u_n := \vec n\cdot\operatorname{grad} u = 0$ on $\partial\Omega$. Thus, after application of the Divergence Theorem the boundary terms still vanish so $v$ again satisfies
$$\int_\Omega \big((v_x)^2 + (v_y)^2\big)\,d\Omega = 0$$
from the previous solution. However, the BCs no longer allow us to conclude that $v = 0$ from $v_x = v_y = 0$ in $\Omega$. In fact the solution is only unique up to an arbitrary constant.

7.10 From Example 7.11

$$E'(t) = 2\int_0^1 u_t\big(u_{tt} - c^2 u_{xx}\big)\,dx + 2c^2\big[u_x u_t\big]_0^1.$$
Using $u_{tt} = c^2 u_{xx}$ and applying the new boundary conditions $u_x(0,t) = (a_0/b_0)u(0,t)$ and $u_x(1,t) = -(a_1/b_1)u(1,t)$ leads to
$$E'(t) = -2c^2\Big(\frac{a_1}{b_1}u(1,t)u_t(1,t) + \frac{a_0}{b_0}u(0,t)u_t(0,t)\Big).$$
However, $2u u_t = \partial_t u^2$ so this expression can be written as
$$\frac{d}{dt}\Big(E(t) + c^2\Big(\frac{a_1}{b_1}u^2(1,t) + \frac{a_0}{b_0}u^2(0,t)\Big)\Big) = 0,$$
so the modified energy $\widetilde E(t) = E(t) + \frac{a_1}{b_1}c^2 u^2(1,t) + \frac{a_0}{b_0}c^2 u^2(0,t)$ is constant in time.

$E(t)$ is clearly a nonnegative function and this property will be inherited by $\widetilde E(t)$ if $a_1/b_1 \ge 0$ and $a_0/b_0 \ge 0$. That is, the BCs should be based on outward normal derivatives at the endpoints.

7.11 Differentiating $m(t)$ and using the PDE gives
$$m'(t) = \int_{-\infty}^{\infty} u_t\,dx = \int_{-\infty}^{\infty}\big(u_{xxx} - 6u u_x\big)\,dx = \int_{-\infty}^{\infty}\big(u_{xxx} - 3(u^2)_x\big)\,dx = \big[u_{xx} - 3u^2\big]_{x=-\infty}^{\infty} = 0$$
since $u$ and its derivatives tend to zero at $\pm\infty$. Therefore $m(t)$ is constant in time. Similarly for $M(t)$:
$$M'(t) = \int_{-\infty}^{\infty} 2u u_t\,dx = 2\int_{-\infty}^{\infty}\big(u u_{xxx} - 6u^2 u_x\big)\,dx$$
and, integrating the term involving $u u_{xxx}$ by parts,
$$M'(t) = 2\big[u u_{xx}\big]_{x=-\infty}^{\infty} - \int_{-\infty}^{\infty}\partial_x\big((u_x)^2 + 4u^3\big)\,dx = \big[2u u_{xx} - (u_x)^2 - 4u^3\big]_{-\infty}^{\infty} = 0$$
since $u$ and its derivatives tend to zero at $\pm\infty$. Therefore $M(t)$ is constant in time.

7.12 Differentiating $E(t)$, we find on integrating by parts (we shall ignore the boundary terms since they all vanish)
$$E'(t) = \int_{-\infty}^{\infty}\big(u_x u_{xt} - 3u^2 u_t\big)\,dx = \int_{-\infty}^{\infty}\big(-u_{xx}u_t - 3u^2 u_t\big)\,dx.$$
Now, using $u_t = -6u u_x - u_{xxx}$,
$$\int_{-\infty}^{\infty} u_{xx}u_t\,dx = -\int_{-\infty}^{\infty} u_{xx}\big(u_{xxx} + 6u u_x\big)\,dx = -\int_{-\infty}^{\infty}\big(\tfrac12\partial_x(u_{xx})^2 + 6u u_x u_{xx}\big)\,dx = -\int_{-\infty}^{\infty} 6u u_x u_{xx}\,dx,$$
$$\int_{-\infty}^{\infty} 3u^2 u_t\,dx = -\int_{-\infty}^{\infty} 3u^2\big(u_{xxx} + 6u u_x\big)\,dx = -\int_{-\infty}^{\infty}\big(3u^2 u_{xxx} + \tfrac92(u^4)_x\big)\,dx = \int_{-\infty}^{\infty} 6u u_x u_{xx}\,dx.$$
Combining these gives $E'(t) = 0$ so $E(t) = \text{constant}$.


Exercises 8

Separation of variables

8.1 As in Example 8.1, $u(x,t) = X(x)T(t)$, where $X$ satisfies $-X'' = \lambda X$ for $0 < x < 1$ but, in this case, the BCs are $X'(0) = X'(1) = 0$. When $\lambda = -\mu^2 < 0$, the general solution is $X = Ae^{\mu x} + Be^{-\mu x}$ and the BC $X'(0) = 0$ gives $\mu(A - B) = 0$. Thus, since $\mu = 0$ is not possible (since $\lambda < 0$), we must have $A = B$. The second BC then gives $A\mu(e^{\mu} - e^{-\mu}) = 0$. But $e^{\mu} \ne e^{-\mu}$ for $\mu \ne 0$, so we are left with $A = 0$, leading to the trivial solution $X(x) = 0$.

When $\lambda = 0$, the general solution is $X = A + Bx$ and the BCs $X'(0) = X'(1) = 0$ both require $B = 0$ with no restriction on $A$. Thus $X(x) = A$ is a nontrivial solution corresponding to an eigenvalue $\lambda = 0$. Since $T'(t) = -\lambda T(t)$, we have $T(t) = \text{constant}$ and the corresponding solution of the heat equation is $u(x,t) = A$.

When $\lambda = \mu^2 > 0$, the general solution is $X = A\sin\mu x + B\cos\mu x$ and the BC $X'(0) = 0$ gives $\mu A = 0$. Thus, since $\mu = 0$ is not possible (since $\lambda > 0$), we must have $A = 0$. The second BC then gives $B\mu\sin\mu = 0$. To avoid the trivial solution, $\mu$ must be chosen so that $\sin\mu = 0$, thus $\mu = n\pi$ for $n = 1, 2, \ldots$. Correspondingly, $T'(t) = -\lambda T(t)$, so $T(t) = Ce^{-(n\pi)^2 t}$, leading to fundamental solutions $e^{-(n\pi)^2 t}\cos n\pi x$. The general solution is a linear combination of all fundamental solutions and so takes the form
$$u(x,t) = \frac{1}{2}A_0 + \sum_{n=1}^{\infty} A_n e^{-n^2\pi^2 t}\cos n\pi x.$$
The factor $1/2$ in the leading term allows all the coefficients to be determined by the same formula
$$A_n = 2\int_0^1 g(x)\cos n\pi x\,dx, \qquad n = 0, 1, 2, \ldots$$
(see Example 8.2). When $g(x) = x$ we find $A_0 = 2\int_0^1 x\,dx = 1$ and, using integration by parts,
$$A_n = 2\int_0^1 x\cos n\pi x\,dx = \Big[\frac{2x\sin n\pi x}{n\pi}\Big]_0^1 - \frac{2}{n\pi}\int_0^1\sin n\pi x\,dx = \Big[\frac{2}{n^2\pi^2}\cos n\pi x\Big]_0^1 = \frac{2}{n^2\pi^2}\big((-1)^n - 1\big).$$
Thus, $A_n = 0$ when $n$ is even and $A_n = -4/(n\pi)^2$ when $n$ is odd.
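As a numerical cross-check of the coefficient formula $A_n = 2\int_0^1 x\cos n\pi x\,dx$ (note that for odd $n$ the integral is negative), an illustrative quadrature sketch:

```python
# Numerical check: for g(x) = x, A_0 = 1, A_n = -4/(n*pi)^2 for odd n,
# and A_n = 0 for even n.
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def A(n):
    return simpson(lambda x: 2 * x * math.cos(n * math.pi * x), 0.0, 1.0)

A0, A1, A2 = A(0), A(1), A(2)
```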

8.2 We use the facts that $|8x-4| = 8x-4$ for $x \ge 1/2$, $|8x-4| = -(8x-4)$ for $x \le 1/2$ and $|8x-4| \le 1$ for $3/8 \le x \le 5/8$. Therefore $1 - |8x-4| < 0$ outside this interval and
$$g(x) = \begin{cases} 0 & 0 \le x \le 3/8 \\ 8x-3 & 3/8 \le x \le 1/2 \\ 5-8x & 1/2 \le x \le 5/8 \\ 0 & 5/8 \le x \le 1. \end{cases}$$
Now
$$\langle g, \sin n\pi x\rangle = \int_0^1 g(x)\sin n\pi x\,dx = \int_{3/8}^{1/2}(8x-3)\sin n\pi x\,dx + \int_{1/2}^{5/8}(5-8x)\sin n\pi x\,dx.$$
Integrating each term by parts,
$$\langle g, \sin n\pi x\rangle = \frac{8}{n^2\pi^2}\big(2\sin\tfrac12 n\pi - \sin\tfrac38 n\pi - \sin\tfrac58 n\pi\big) = \frac{16}{n^2\pi^2}\sin\tfrac12 n\pi\big(1 - \cos\tfrac18 n\pi\big) = \frac{32}{n^2\pi^2}\sin^2\tfrac{1}{16}n\pi\,\sin\tfrac12 n\pi.$$
Then $A_n = \langle g, \sin n\pi x\rangle/\langle\sin n\pi x, \sin n\pi x\rangle$ and $\langle\sin n\pi x, \sin n\pi x\rangle = 1/2$, and so $A_n$ is given by (8.12).
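The closed form $(32/n^2\pi^2)\sin^2(n\pi/16)\sin(n\pi/2)$ can be checked against direct quadrature of $\int_0^1 g(x)\sin n\pi x\,dx$ (an illustrative sketch, not part of the printed solution):

```python
# Numerical check of <g, sin nπx> = (32/n²π²) sin²(nπ/16) sin(nπ/2)
# for g(x) = max(0, 1 - |8x - 4|).
import math

def g(x):
    return max(0.0, 1.0 - abs(8 * x - 4))

def inner(n, m=20000):
    # midpoint rule, fine enough for the piecewise-linear integrand
    h = 1.0 / m
    return sum(g((i + 0.5) * h) * math.sin(n * math.pi * (i + 0.5) * h)
               for i in range(m)) * h

def closed(n):
    return 32 / (n * math.pi) ** 2 \
        * math.sin(n * math.pi / 16) ** 2 * math.sin(n * math.pi / 2)

ok = all(abs(inner(n) - closed(n)) < 1e-6 for n in range(1, 8))
```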

8.3 The mean value theorem states that if $g$ is continuous on an interval $[a,b]$ and differentiable on the open interval $(a,b)$ then there is a point $c \in (a,b)$ such that
$$g'(c) = \frac{g(b) - g(a)}{b - a}.$$
If we choose $b = x$, $a = 1-x$, then, for $x > \tfrac12$,
$$g'(c) = \frac{g(x) - g(1-x)}{2x - 1} = 0, \qquad 1-x < c < x.$$
It follows that $g'(\tfrac12) = 0$ by taking the limit $x \to \tfrac12$.

With the change of variable $x = 1-s$ and $v(s,t) = u(1-s,t)$,
$$v_t = u_t(1-s,t), \qquad v_s = -u_x(1-s,t), \qquad v_{ss} = u_{xx}(1-s,t)$$
and therefore $v_t - \kappa v_{ss} = u_t(1-s,t) - \kappa u_{xx}(1-s,t) = u_t(x,t) - \kappa u_{xx}(x,t) = 0$. Also $v(0,t) = u(1,t) = 0$, $v(1,t) = u(0,t) = 0$ and $v(s,0) = u(1-s,0) = g(1-s) = g(s)$ (since $g$ is symmetric about $x = 1/2$).

8.4 With $u(x,t) = X(x)T(t)$, then $-X'' = \lambda X$ for $0 < x < 1/2$ and the BCs are $X(0) = X'(1/2) = 0$. The standard argument shows that $\lambda = \mu^2 > 0$ and $X(x) = A\sin\mu x$ is the most general solution satisfying the BC $X(0) = 0$. Applying $X'(1/2) = 0$ leads to $\cos\tfrac12\mu = 0$, which means that $\mu$ must be an odd multiple of $\pi$: $\mu = (2m-1)\pi$. Thus $X(x) = A\sin(2m-1)\pi x$, $\lambda = (2m-1)^2\pi^2$ and $T'(t) = -\lambda T(t)$ giving $T(t) = Be^{-(2m-1)^2\pi^2 t}$. The fundamental solutions are $u_m(x,t) = e^{-(2m-1)^2\pi^2 t}\sin(2m-1)\pi x$ and the general solution is a linear combination of these:
$$u(x,t) = \sum_{m=1}^{\infty} A_m e^{-(2m-1)^2\pi^2 t}\sin(2m-1)\pi x.$$

When $g(x) = 0$ for $0 < x < 3/8$ and $g(x) = 1$ for $3/8 < x < 1/2$ we have
$$\langle g, \sin(2m-1)\pi x\rangle = \int_{3/8}^{1/2}\sin(2m-1)\pi x\,dx = -\Big[\frac{\cos(2m-1)\pi x}{(2m-1)\pi}\Big]_{3/8}^{1/2} = \frac{\cos\tfrac38(2m-1)\pi}{(2m-1)\pi} = \frac{\sin\tfrac12(2m-1)\pi\,\sin\tfrac18(2m-1)\pi}{(2m-1)\pi}.$$
Since $\langle\sin(2m-1)\pi x, \sin(2m-1)\pi x\rangle = \int_0^{1/2}\sin^2(2m-1)\pi x\,dx = \tfrac14$, we have, since $A_m = \langle g, \sin(2m-1)\pi x\rangle/\langle\sin(2m-1)\pi x, \sin(2m-1)\pi x\rangle$,
$$u(x,t) = \sum_{m=1}^{\infty}\frac{4}{(2m-1)\pi}\sin\tfrac12(2m-1)\pi\,\sin\tfrac18(2m-1)\pi\,e^{-(2m-1)^2\pi^2 t}\sin(2m-1)\pi x.$$
This is identical with (8.11) when $n = 2m-1$.

The boundary value problem in Example 8.4(a) satisfies the conditions of Exercise 8.3 so that $u$ is symmetric about $x = 1/2$: $u(x,t) = u(1-x,t)$. Hence
$$0 = \lim_{s\to 0}\frac{u(\tfrac12+s,t) - u(\tfrac12-s,t)}{s} = 2u_x(\tfrac12,t)$$
by l'Hôpital's rule.

8.5 From Example 8.1 the general solution of the heat equation with BCs $u = 0$ at both $x = 0$ and $x = 1$ is given by (8.6) and the coefficients by (8.8). When $g(1-x) = g(x)$ we have, making a change of variable $x = 1-s$,
$$\langle g, X_n\rangle = \int_0^1 g(x)\sin n\pi x\,dx = \int_0^1 g(1-x)\sin n\pi x\,dx = \int_0^1 g(s)\sin n\pi(1-s)\,ds$$

$$= -\int_0^1 g(s)\cos n\pi\,\sin n\pi s\,ds = (-1)^{n+1}\int_0^1 g(s)\sin n\pi s\,ds = (-1)^{n+1}\langle g, X_n\rangle.$$

It follows that $\langle g, X_n\rangle = -\langle g, X_n\rangle$ when $n$ is even. Consequently $\langle g, X_n\rangle = 0$, and so $A_n = 0$ when $n$ is even.

8.6 Looking for a separable solution $u(x,t) = X(x)T(t)$, which when substituted into the PDE gives
$$X(x)T'(t) - X''(x)T(t) + 2X'(x)T(t) = 0 \quad\Rightarrow\quad \frac{-X''(x) + 2X'(x)}{X(x)} = -\frac{T'(t)}{T(t)}.$$
The left hand side is a function of $x$ only and the right hand side a function of $t$ only. Hence both sides must be constant, equal to $\lambda$, say. Thus
$$-X''(x) + 2X'(x) = \lambda X(x), \qquad T'(t) = -\lambda T(t).$$
The ODE for $X$ has general solution $X = Ae^{\sigma_1 x} + Be^{\sigma_2 x}$, where $\sigma_1$ and $\sigma_2$ are the roots of the quadratic
$$\sigma^2 - 2\sigma + \lambda = 0 \quad\Rightarrow\quad \sigma_{1,2} = 1 \pm \sqrt{1-\lambda}.$$
1. $\lambda < 1$: The roots are real and distinct and only the trivial solution $X \equiv 0$ can satisfy the BCs.
2. $\lambda = 1$: There is a double root $\sigma = 1$ so the general solution is $X = (Ax+B)e^x$ and the BCs again lead to the trivial solution.
3. $\lambda = 1 + \mu^2 > 1$: the roots are $\sigma_{1,2} = 1 \pm i\mu$ so the general solution is $X = e^x(A\sin\mu x + B\cos\mu x)$. The BC $X(0) = 0$ gives $B = 0$ and $X(1) = 0$ gives $A\sin\mu = 0$. Thus, to avoid a trivial solution, $\mu = n\pi$, $n = 1, 2, \ldots$ and
$$\lambda_n = 1 + n^2\pi^2, \qquad X_n = e^x\sin n\pi x, \qquad T_n(t) = e^{-\lambda_n t},$$
leading to fundamental solutions $u_n = e^{x-(1+n^2\pi^2)t}\sin n\pi x$, $n = 1, 2, \ldots$. These are of the form $u(x,t) = \exp(x-\alpha t)\sin\beta x$ with $\alpha = \lambda_n$ and $\beta = n\pi$. The general solution takes the form
$$u(x,t) = e^x\sum_{n=1}^{\infty} A_n e^{-(1+n^2\pi^2)t}\sin n\pi x$$
for arbitrary constants $\{A_n\}$. At $t = 0$ we require $u(x,0) = 1$, that is,
$$1 = e^x\sum_{n=1}^{\infty} A_n\sin n\pi x \quad\Rightarrow\quad e^{-x} = \sum_{n=1}^{\infty} A_n\sin n\pi x,$$
where the functions $\{\sin n\pi x\}_{n=1}^{\infty}$ are mutually orthogonal with respect to the standard inner product on $(0,1)$. Since
$$\langle e^{-x}, \sin n\pi x\rangle = \int_0^1 e^{-x}\sin n\pi x\,dx = \frac{n\pi\big(1 - e^{-1}(-1)^n\big)}{1 + n^2\pi^2}$$
(integration by parts twice) and $\langle\sin n\pi x, \sin n\pi x\rangle = 1/2$, we have
$$u(x,t) = 2e^x\sum_{n=1}^{\infty}\frac{n\pi\big(1 - e^{-1}(-1)^n\big)}{1 + n^2\pi^2}\,e^{-(1+n^2\pi^2)t}\sin n\pi x.$$
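The inner-product value used above can be verified by quadrature (an illustrative sketch, not part of the printed solution):

```python
# Check of ∫₀¹ e^{-x} sin(nπx) dx = nπ(1 - e^{-1}(-1)^n)/(1 + n²π²).
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def lhs(n):
    return simpson(lambda x: math.exp(-x) * math.sin(n * math.pi * x), 0.0, 1.0)

def rhs(n):
    return n * math.pi * (1 - math.exp(-1) * (-1) ** n) / (1 + (n * math.pi) ** 2)

ok = all(abs(lhs(n) - rhs(n)) < 1e-8 for n in range(1, 6))
```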

8.7 From Exercise 4.20,
$$u_x = \cos\theta\,u_r - \frac{\sin\theta}{r}u_\theta = \cos\theta\,f'(r)$$
since $u = f(r)$. Allowing $x, y \to 0$ would lead to $u_x(x,y)$ being multivalued at the origin (because it would depend on the angle at which the origin was approached) unless $f'(0) = 0$.

8.8 With $x = r\cos\theta\sin\phi$, $y = r\sin\theta\sin\phi$, $z = r\cos\phi$, the chain rule gives
$$u_r = x_r u_x + y_r u_y + z_r u_z = \cos\theta\sin\phi\,u_x + \sin\theta\sin\phi\,u_y + \cos\phi\,u_z,$$
$$u_\theta = x_\theta u_x + y_\theta u_y + z_\theta u_z = r\big(-\sin\theta\sin\phi\,u_x + \cos\theta\sin\phi\,u_y\big),$$
$$u_\phi = x_\phi u_x + y_\phi u_y + z_\phi u_z = r\big(\cos\theta\cos\phi\,u_x + \sin\theta\cos\phi\,u_y - \sin\phi\,u_z\big).$$
Multiplying the first of these by $r\cos\theta\sin\phi$, the second by $-\sin\theta/\sin\phi$ and the third by $\cos\theta\cos\phi$ and adding the results together gives
$$u_x = \frac{1}{r}\Big(r\cos\theta\sin\phi\,u_r - \frac{\sin\theta}{\sin\phi}u_\theta + \cos\theta\cos\phi\,u_\phi\Big)$$
and so $u_x = \cos\theta\sin\phi\,f'(r)$ when $u = f(r)$, a function of $r$ alone. Thus, $u_x$ would be multivalued at the origin (because it would depend on the angle at which the origin was approached) unless $f'(0) = 0$.

8.9 When $g(r) = 1 - (r/b)^2$ for $0 \le r \le b$ and $g(r) = 0$ for $b < r \le a$,
$$\int_0^a r g(r)J_0\Big(\frac{r\xi_n}{a}\Big)\,dr = \int_0^b r\big(1 - (r/b)^2\big)J_0\Big(\frac{r\xi_n}{a}\Big)\,dr.$$
From Exercise D.4 and writing $\omega = b\xi_n/a$,
$$\int_0^b r J_0\Big(\frac{r\xi_n}{a}\Big)\,dr = \Big(\frac{a}{\xi_n}\Big)^2\int_0^\omega x J_0(x)\,dx = \frac{b^2}{\omega}J_1(\omega)$$
and, using $\int x^3 J_0(x)\,dx = 2x^2 J_0(x) + x(x^2-4)J_1(x)$, we find
$$\int_0^b r\Big(\frac{r}{b}\Big)^2 J_0\Big(\frac{r\xi_n}{a}\Big)\,dr = \frac{1}{b^2}\Big(\frac{a}{\xi_n}\Big)^4\int_0^\omega x^3 J_0(x)\,dx = \frac{b^2}{\omega^4}\big(2\omega^2 J_0(\omega) + \omega(\omega^2-4)J_1(\omega)\big).$$
These combine to give the required result.
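The Bessel identity $\int_0^\omega x J_0(x)\,dx = \omega J_1(\omega)$ from Exercise D.4, used above, can be checked numerically with nothing more than the power series for $J_n$ (a pure-Python illustrative sketch; no external libraries assumed):

```python
# Series-based check of ∫₀^ω x J₀(x) dx = ω J₁(ω).
import math

def J(n, x, terms=40):
    """Bessel function of the first kind via its power series."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2) ** (2 * k + n) for k in range(terms))

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

omega = 2.5
lhs = simpson(lambda x: x * J(0, x), 0.0, omega)
rhs = omega * J(1, omega)
```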

8.10 The heat equation with spherical symmetry is given by (8.25), which can be written as $u_t = \frac{1}{r^2}(r^2 u_r)_r$. This is a special case of the generic form (8.13) with $w = r^2$ and $Lu := -(r^2 u_r)_r$. Looking for a separable solution $u(r,t) = R(r)T(t)$ requires
$$\frac{T'(t)}{T(t)} = \frac{(r^2 R'(r))'}{r^2 R(r)} = \text{constant} = -\lambda$$
by the standard argument. The eigenvalue problem for $R$ reads $-(r^2 R'(r))' = \lambda r^2 R(r)$.

$L/(\kappa\pi)$ in the general solution (8.14a)—will decay, leaving a growing solution that becomes smoother in time.

8.15 When $g_0$ is given by (8.43),

$$g_0(x+ct) + g_0(x-ct) = \sum_{n=1}^{\infty} A_n\big(\sin n\pi(x+ct) + \sin n\pi(x-ct)\big) = 2\sum_{n=1}^{\infty} A_n\sin n\pi x\,\cos n\pi ct.$$
When $g_1$ is given by (8.44),
$$\int_{x-ct}^{x+ct} g_1(z)\,dz = \sum_{n=1}^{\infty} B_n cn\pi\int_{x-ct}^{x+ct}\sin n\pi z\,dz = -c\sum_{n=1}^{\infty} B_n\big(\cos n\pi(x+ct) - \cos n\pi(x-ct)\big) = 2c\sum_{n=1}^{\infty} B_n\sin n\pi x\,\sin n\pi ct.$$
Combining these in d'Alembert's solution (4.20) leads to (8.42).

8.16 Substituting $u(x,t) = X(x)T(t)$ into the PDE gives, on dividing by $X(x)T(t)$,
$$\frac{X''(x)}{X(x)} = \frac{T''(t)}{T(t)} = -\lambda,$$
leading to the eigenvalue problem $-X''(x) = \lambda X(x)$, $0 < x < 1$, with $X(0) = X'(1) = 0$. The standard arguments can be used to show that only trivial solutions are possible when $\lambda \le 0$. When $\lambda = \mu^2 > 0$, the general solution is
$$X(x) = A\sin\mu x + B\cos\mu x$$

and $X(0) = 0$ immediately requires $B = 0$. Then $X'(1) = A\mu\cos\mu = 0$ requires $\mu = (n-\tfrac12)\pi$ (an odd multiple of $\pi/2$) in order to avoid a trivial solution. Thus $\lambda_n = (n-\tfrac12)^2\pi^2$ with corresponding eigenfunctions $X_n(x) = \sin(n-\tfrac12)\pi x$. The ODE $T''(t) + \mu^2 T(t) = 0$ has the general solution $T(t) = C\sin(n-\tfrac12)\pi t + D\cos(n-\tfrac12)\pi t$, for arbitrary constants $C$ and $D$. This leads to the general solution
$$u(x,t) = \sum_{n=1}^{\infty}\big(C_n\sin(n-\tfrac12)\pi t + D_n\cos(n-\tfrac12)\pi t\big)\sin(n-\tfrac12)\pi x$$
that satisfies the PDE and all BCs. The initial condition $u(x,0) = 0$ leads to
$$0 = \sum_{n=1}^{\infty} D_n\sin(n-\tfrac12)\pi x,$$
from which $D_n = 0$ (this follows because the eigenfunctions $\{\sin(n-\tfrac12)\pi x\}$ are mutually orthogonal). Therefore
$$u(x,t) = \sum_{n=1}^{\infty} C_n\sin(n-\tfrac12)\pi t\,\sin(n-\tfrac12)\pi x$$
and the coefficients $C_n$ could be determined by imposing an initial condition of the form $u_t(x,0) = g_1(x)$. Then
$$(n-\tfrac12)\pi\,C_n = \frac{\langle g_1, X_n\rangle}{\langle X_n, X_n\rangle}.$$

8.17 Substituting $u(x,t) = X(x)T(t)$ into the PDE gives, on dividing by $X(x)T(t)$,
$$\frac{X''(x)}{X(x)} = \frac{T''(t)}{T(t)} = -\lambda,$$

leading to the eigenvalue problem $-X''(x) = \lambda X(x)$, $-a < x < a$, with $X(-a) = X(a) = 0$. The standard arguments can be used to show that only trivial solutions are possible when $\lambda \le 0$. When the origin is not one of the endpoints the calculations are simplified by writing the general solution for $\lambda = \mu^2 > 0$ as $X(x) = A\sin\mu(x+\beta)$ for arbitrary constants $A$ and $\beta$. The BC $X(-a) = 0$ immediately sets $\beta = a$ and $X(a) = 0$ leads to $A\sin 2\mu a = 0$. Thus, to avoid trivial solutions we must choose $\mu = \tfrac12 n\pi/a$ and the eigenvalues are $\lambda_n = (\tfrac12 n\pi/a)^2$ with corresponding eigenfunctions $X_n(x) = \sin\big(\tfrac12 n\pi(x+a)/a\big)$. These eigenfunctions are closely related to those in Example 8.1. The ODE $T''(t) + \mu^2 T(t) = 0$ has the general solution $T(t) = C\sin\mu t + D\cos\mu t$, for arbitrary constants $C$ and $D$. This leads to the general solution
$$u(x,t) = \sum_{n=1}^{\infty}\Big(C_n\sin\frac{n\pi t}{2a} + D_n\cos\frac{n\pi t}{2a}\Big)\sin\frac{n\pi(x+a)}{2a}$$
that satisfies the PDE and all BCs. The initial conditions $u(x,0) = g_0(x)$ and $u_t(x,0) = 0$ give
$$g_0(x) = \sum_{n=1}^{\infty} D_n\sin\frac{n\pi(x+a)}{2a}, \qquad 0 = \sum_{n=1}^{\infty} C_n\frac{n\pi}{2a}\sin\frac{n\pi(x+a)}{2a}.$$
Since the eigenfunctions are orthogonal on the interval $(-a,a)$,
$$D_n = \frac{\langle g_0, X_n\rangle}{\langle X_n, X_n\rangle}, \qquad C_n = 0, \qquad \text{and}\quad \langle X_n, X_n\rangle = a.$$
When $g_0(x) = \max(0, 1-(x/b)^2)$ ($b < a$), a change of variable $x = bs$ and integrating by parts twice gives
$$D_n = \frac{1}{a}\int_{-b}^{b}\Big(1-\big(\tfrac{x}{b}\big)^2\Big)\sin\frac{n\pi(x+a)}{2a}\,dx = \frac{b}{a}\int_{-1}^{1}(1-s^2)\sin\omega(s+\alpha)\,ds = \frac{4}{\omega}\,\frac{\sin\alpha\omega}{\alpha\omega}\Big(\frac{\sin\omega}{\omega} - \cos\omega\Big),$$
where $\omega = n\pi b/(2a)$ and $\alpha = a/b$.

The solution is shown in Fig. 8.7 (dashed line) at time $t = 0.75$ with $a = 1/2$, $b = 1/4$ (see Example 8.11).

8.18 Since $u_{tt} + 2u_t = u_{xx}$, we suppose that $u(x,t) = X(x)T(t)$; then
$$\frac{T'' + 2T'}{T} = \frac{X''}{X}.$$
The left hand side is a function of $t$ only while the right side is a function of $x$ only, so both sides must be constant (the separation constant). The BCs stipulate that $u = 0$ at $x = 0$ and $x = 1$ so, for nontrivial solutions, the separation constant must be negative, equal to $-\lambda^2$, say. Then $X'' + \lambda^2 X = 0$, which has general solution $X = A\sin\lambda x + B\cos\lambda x$. Applying the BCs: $X(0) = 0$ leads to $B = 0$ and $X(1) = 0$ to $A\sin\lambda = 0$. The choice $A = 0$ would lead to a trivial solution so we must have $\sin\lambda = 0$, i.e., $\lambda = n\pi$ ($n = 1, 2, \ldots$). Then
$$T'' + 2T' + \lambda^2 T = 0,$$
which has solutions $T = e^{\mu t}$ if $\mu$ satisfies $\mu^2 + 2\mu + \lambda^2 = 0$, that is $\mu = -1 \pm i\sqrt{\lambda^2-1}$. These are complex roots since $\lambda \ge \pi$. Thus $T = e^{-t}\big(a\sin\sqrt{\lambda^2-1}\,t + b\cos\sqrt{\lambda^2-1}\,t\big)$ for arbitrary constants $a$ and $b$, and
$$u(x,t) = e^{-t}\big(a\sin\sqrt{\lambda^2-1}\,t + b\cos\sqrt{\lambda^2-1}\,t\big)\sin\lambda x.$$
At $t = 0$ we are given $u(x,0) = \sin\pi x$ so $n = 1$, $\lambda = \pi$ and $b = 1$. Since
$$u_t = -u + \sqrt{\lambda^2-1}\,e^{-t}\big(a\cos\sqrt{\lambda^2-1}\,t - b\sin\sqrt{\lambda^2-1}\,t\big)\sin\lambda x,$$
the initial condition $u_t(x,0) = 0$ gives $a = 1/\sqrt{\pi^2-1}$ and we have
$$u(x,t) = e^{-t}\Big(\frac{1}{\sqrt{\pi^2-1}}\sin\sqrt{\pi^2-1}\,t + \cos\sqrt{\pi^2-1}\,t\Big)\sin\pi x.$$
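The final formula for Exercise 8.18 can be spot-checked by substituting it back into $u_{tt} + 2u_t = u_{xx}$ with finite differences (an illustrative sketch, not part of the printed solution):

```python
# Finite-difference residual check of the damped-wave solution for 8.18.
import math

w = math.sqrt(math.pi ** 2 - 1)

def u(x, t):
    return math.exp(-t) * (math.sin(w * t) / w + math.cos(w * t)) \
        * math.sin(math.pi * x)

h = 1e-4
def residual(x, t):
    utt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return utt + 2 * ut - uxx

r = max(abs(residual(x, t)) for x in (0.3, 0.7) for t in (0.5, 1.0))
```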

8.19 Suppose $u = \Phi(x,y,a,b)$ represents the solution (8.56) of subproblem $P_1$ in Example 8.12, so that
$$\Phi(x,0,a,b) = \sum_{n=1}^{\infty} A_n\sin\Big(\frac{n\pi x}{a}\Big)$$
on $E_1$ and $\Phi = 0$ on the remaining edges. The PDE is unchanged under the linear change of variable $y \mapsto b-y$ ($x$ unchanged), and this maps the rectangle into itself with the edges $E_1$ and $E_3$ interchanged. This gives the solution
$$u(x,y) = \Phi(x,b-y,a,b) = \sum_{n=1}^{\infty} C_n\,\frac{\sinh(n\pi y/a)}{\sinh(n\pi b/a)}\sin\Big(\frac{n\pi x}{a}\Big), \qquad u(x,b) = \sum_{n=1}^{\infty} C_n\sin\Big(\frac{n\pi x}{a}\Big),$$
and $u = 0$ on edges $E_j$, $j = 1, 2, 4$. This is the solution to $P_3$.

The PDE is unchanged under the interchange $x \leftrightarrow y$ and the domain is also unaltered if we make the interchange $a \leftrightarrow b$. Thus
$$u(x,y) = \Phi(y,x,b,a) = \sum_{n=1}^{\infty} D_n\,\frac{\sinh(n\pi(a-x)/b)}{\sinh(n\pi a/b)}\sin\Big(\frac{n\pi y}{b}\Big), \qquad u(0,y) = \sum_{n=1}^{\infty} D_n\sin\Big(\frac{n\pi y}{b}\Big)$$
is the solution to $P_4$. Finally, applying the linear change of variable $x \mapsto a-x$ ($y$ unchanged) gives the solution to $P_2$:
$$u(x,y) = \Phi(y,a-x,b,a) = \sum_{n=1}^{\infty} B_n\,\frac{\sinh(n\pi x/b)}{\sinh(n\pi a/b)}\sin\Big(\frac{n\pi y}{b}\Big), \qquad u(a,y) = \sum_{n=1}^{\infty} B_n\sin\Big(\frac{n\pi y}{b}\Big).$$
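Each term of these series is harmonic, which is easy to confirm numerically. An illustrative sketch (the values $n = 2$, $a = 1.5$ are arbitrary choices, not from the exercise):

```python
# Finite-difference check that v(x, y) = sinh(nπy/a) sin(nπx/a) is harmonic.
import math

n, a = 2, 1.5   # illustrative parameter choices

def v(x, y):
    return math.sinh(n * math.pi * y / a) * math.sin(n * math.pi * x / a)

h = 1e-4
def laplacian(x, y):
    return ((v(x + h, y) - 2 * v(x, y) + v(x - h, y))
            + (v(x, y + h) - 2 * v(x, y) + v(x, y - h))) / h ** 2

r = max(abs(laplacian(x, y)) for x in (0.4, 1.1) for y in (0.3, 0.8))
```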

8.20 Integrating by parts twice, we find
$$A_n = 2\int_0^1 x^2\sin(n\pi x)\,dx = \frac{2}{n\pi}(-1)^{n-1} + \frac{4}{n\pi}\int_0^1 x\cos n\pi x\,dx = \frac{2}{n\pi}(-1)^{n-1} + \frac{4}{n\pi}\Big(\Big[\frac{x\sin n\pi x}{n\pi}\Big]_0^1 - \int_0^1\frac{\sin n\pi x}{n\pi}\,dx\Big) = \frac{2}{n\pi}(-1)^{n-1} + 4\,\frac{\cos n\pi - 1}{n^3\pi^3}$$
as required.

8.21 By virtue of Exercise 8.19 the solution to problem $P_2$ is given by
$$u(x,y) = \sum_{n=1}^{\infty} B_n\,\frac{\sinh(n\pi x)}{\sinh n\pi}\sin(n\pi y), \qquad u(1,y) = \sum_{n=1}^{\infty} B_n\sin(n\pi y)$$
and, with $u(1,y) = 1-y$,
$$B_n = 2\int_0^1 (1-y)\sin(n\pi y)\,dy = \frac{2}{n\pi} - \frac{2}{n\pi}\int_0^1\cos n\pi y\,dy = \frac{2}{n\pi}.$$

8.22 Substituting $u(x,y) = X(x)Y(y)$ into the PDE and dividing by $X(x)Y(y)$ gives

$$-\frac{X''(x)}{X(x)} = \frac{Y''(y)}{Y(y)} - c$$
and, the left hand side being a function of $x$ only, while the right hand side is a function of $y$ only, we deduce that both must be constant. The homogeneous BCs $u(0,y) = u(2,y) = 0$ imply that $X(0) = X(2) = 0$. We therefore look for eigenfunctions in the $x$-variable and set the separation constant to $\lambda$ so that $-X'' = \lambda X$.

(a) $\lambda = -\mu^2 < 0$: then $-X'' = -\mu^2 X$ has general solution $X = Ae^{\mu x} + Be^{-\mu x}$. The only solution satisfying $X(0) = X(2) = 0$ is the trivial solution.

(b) $\lambda = 0$: then $-X'' = 0$ has general solution $X = A + Bx$. Again, the only solution satisfying $X(0) = X(2) = 0$ is the trivial solution.

(c) $\lambda = \mu^2 > 0$: then $-X'' = \mu^2 X$ has general solution $X = A\sin\mu x + B\cos\mu x$. The BC $X(0) = 0$ implies that $B = 0$ and $X(2) = 0$ requires $A\sin 2\mu = 0$. Since $A = 0$ leads to the trivial solution, we must have $\sin 2\mu = 0$, that is, $\mu = \tfrac12 n\pi$. The eigenvalues are therefore $\lambda_n = (\tfrac12 n\pi)^2$ with corresponding eigenfunctions $X_n = \sin(\tfrac12 n\pi x)$, $n = 1, 2, \ldots$. Then $Y$ must satisfy $Y'' - (c+\lambda_n)Y = 0$.

(d) $c + \lambda_n = \nu^2 > 0$: then $Y$ has general solution $Y = Ae^{\nu y} + Be^{-\nu y}$. The only solution satisfying $Y'(0) = Y'(1) = 0$ is the trivial solution.

(e) $c + \lambda_n = 0$: then $-Y'' = 0$ has general solution $Y = A + By$. The only solution satisfying $Y'(0) = Y'(1) = 0$ is the constant solution $Y(y) = A$. This leads to the nontrivial solution $u(x,y) = X_n(x)$ provided that $c = -\lambda_n = -(\tfrac12 n\pi)^2$.

(f) $c + \lambda_n = -\nu^2 < 0$: then $-Y'' = \nu^2 Y$ has general solution $Y = A\sin\nu y + B\cos\nu y$. The BC $Y'(0) = 0$ implies that $A = 0$ and $Y'(1) = 0$ requires $B\nu\sin\nu = 0$. Since $B = 0$ leads to the trivial solution, we must have $\sin\nu = 0$, that is, $\nu = m\pi$. The eigenvalues are therefore $c + \lambda_n = -(m\pi)^2$ with corresponding eigenfunctions $Y_m = \cos(m\pi y)$, $m = 0, 1, 2, \ldots$ (when $m = 0$ this becomes case (e) above). Thus, we have the nontrivial solutions $u_{m,n}(x,y) = \sin\tfrac12 n\pi x\,\cos m\pi y$ when $c = -(m^2 + \tfrac14 n^2)\pi^2$. (These are in fact the eigenfunctions and eigenvalues of the Laplacian $\nabla^2$ on the rectangle with the given BCs.)

8.23 Substituting $u(x,y) = X(x)Y(y)$ into the PDE and dividing by $X(x)Y(y)$ gives
$$\frac{-X''(x) + 2X'(x)}{X(x)} = \frac{Y''(y)}{Y(y)}$$
and, the left hand side being a function of $x$ only, while the right hand side is a function of $y$ only, we deduce that both must be constant. The homogeneous BCs $u(x,0) = u(x,1) = 0$ imply that $Y(0) = Y(1) = 0$. We therefore look for eigenfunctions in the $y$-variable and set the separation constant to $-\lambda$ so that $-Y'' = \lambda Y$. This is the eigenvalue problem solved in Example 8.1 (for $X(x)$), and so the eigenvalues are $\lambda_n = (n\pi)^2$ with corresponding eigenfunctions $Y_n = \sin(n\pi y)$, $n = 1, 2, \ldots$.

The corresponding ODE for $X$ is $-X'' + 2X' + \lambda X = 0$. The general solution is a linear combination of $e^{(1+\sigma)x}$ and $e^{(1-\sigma)x}$, where $\sigma = \sqrt{1+\lambda}$. This may be written in several equivalent ways but, in view of the BC $X'(1) = 0$ being applied at $x = 1$, the most convenient form is
$$X(x) = e^x\big(C\cosh\sigma(1-x) + D\sinh\sigma(1-x)\big),$$
which satisfies $X'(1) = 0$ if $C = \sigma D$. Thus the fundamental solutions are
$$u_n = e^x\big(\sigma\cosh\sigma(1-x) + \sinh\sigma(1-x)\big)\sin n\pi y, \qquad \sigma = \sqrt{1+\lambda_n},$$

for $n = 1, 2, \ldots$ and the general solution is $u(x,y) = \sum_{n=1}^{\infty} D_n u_n(x,y)$.

To match the additional BC $u(0,y) = y(1-y)$, we have
$$y(1-y) = \sum_{n=1}^{\infty} A_n\sin n\pi y, \qquad A_n = D_n(\sigma\cosh\sigma + \sinh\sigma),$$
so that (see the discussion from equation (8.7) to (8.8))
$$A_n = 2\int_0^1 y(1-y)\sin n\pi y\,dy = \frac{4}{(n\pi)^3}\big(1 - (-1)^n\big).$$
Hence,
$$u(x,y) = e^x\sum_{n=1}^{\infty}\frac{4}{(n\pi)^3}\big(1-(-1)^n\big)\,\frac{\sigma\cosh\sigma(1-x) + \sinh\sigma(1-x)}{\sigma\cosh\sigma + \sinh\sigma}\sin n\pi y = e^x\sum_{\substack{n=1 \\ n\ \mathrm{odd}}}^{\infty}\frac{8}{(n\pi)^3}\,\frac{\sigma\cosh\sigma(1-x) + \sinh\sigma(1-x)}{\sigma\cosh\sigma + \sinh\sigma}\sin n\pi y.$$

8.24 Using Exercise 4.20 we find that Laplace's equation in polar coordinates becomes
$$u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0$$
and substituting $u(r,\theta) = R(r)\Theta(\theta)$ into the PDE leads to
$$\frac{r^2 R''(r) + rR'(r)}{R(r)} = -\frac{\Theta''(\theta)}{\Theta(\theta)}.$$
The left hand side is a function of $r$ only, while the right hand side is a function of $\theta$ only, so we deduce that both must be constant. The periodic BCs $u(r,\theta) = u(r,\theta+2k\pi)$ for any integer $k$ suggest that we look for eigenfunctions in the $\theta$-variable and set the separation constant to $\lambda$ so that $-\Theta'' = \lambda\Theta$.

(a) $\lambda = -\mu^2 < 0$: then $-\Theta'' = -\mu^2\Theta$ has general solution $\Theta = Ae^{\mu\theta} + Be^{-\mu\theta}$. This cannot be a periodic function of $\theta$ for any $\mu \ne 0$.

(b) $\lambda = 0$: then $-\Theta'' = 0$ has general solution $\Theta = A + B\theta$. The only solution that is periodic in $\theta$ is the constant solution: $\Theta = A$. Now $R$ satisfies $r^2 R''(r) + rR'(r) = 0$. This may be written as
$$(rR'(r))' = 0 \;\Rightarrow\; rR'(r) = B \;\Rightarrow\; R'(r) = \frac{B}{r} \;\Rightarrow\; R(r) = B\ln r + C,$$
for arbitrary constants $B$ and $C$. Thus $u(r,\theta) = B\ln r + C$ is a solution of Laplace's equation. With $B = 1$ and $C = 0$ it is, in fact, the solution that featured in Example 1.4. If a bounded solution is required at the origin then we must choose $B = 0$.

(c) $\lambda = \mu^2 > 0$: then $-\Theta'' = \mu^2\Theta$ has general solution $\Theta = A\sin\mu\theta + B\cos\mu\theta$. This is periodic in $\theta$ of period $2\pi$ if $\mu = n$, $n = 1, 2, \ldots$, leading to the eigenvalues $\lambda_n = n^2$ and eigenfunctions that can be written as $\Theta_n = \sin(n\theta+\beta)$ for any constant (phase) $\beta$. The corresponding solutions for $R$ satisfy $r^2 R''(r) + rR'(r) = n^2 R$.

Looking for solutions of the form $R = Ar^\alpha$ we find that $r^2 R''(r) + rR'(r) = \alpha^2 Ar^\alpha$ and so $\alpha = \pm n$. There are two solutions and, by linearity of the ODE, a linear combination is also a solution and therefore $R(r) = Dr^n + Er^{-n}$ for arbitrary constants $D$ and $E$. Boundedness of the solution at the origin requires that we set $E = 0$. The fundamental solutions of Laplace's equation in a circular domain are, therefore, $u_0 = 1$ and $u_n(r,\theta) = r^n\sin(n\theta+\beta)$, for $n = 1, 2, \ldots$, and the general solution is
$$u(r,\theta) = D_0 + \sum_{n=1}^{\infty} D_n r^n\sin(n\theta+\beta).$$
When $r = a$ the BC gives $u(a,\theta) = g(\theta)$ and integrating the series over one period gives $D_0 = \frac{1}{2\pi}\int_0^{2\pi} g(\theta)\,d\theta$, and so
$$u(0,0) = \frac{1}{2\pi}\int_0^{2\pi} g(\theta)\,d\theta.$$

8.25 Using Exercise 4.20 we find that Laplace's equation in polar coordinates becomes
$$u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0$$
and substituting $u(r,\theta) = R(r)\Theta(\theta)$ into the PDE leads to
$$\frac{r^2 R''(r) + rR'(r)}{R(r)} = -\frac{\Theta''(\theta)}{\Theta(\theta)}.$$
The left hand side is a function of $r$ only, while the right hand side is a function of $\theta$ only, so we deduce that both must be constant. The BCs $u(r,0) = u(r,\pi/4) = 0$ suggest that we look for eigenfunctions in the $\theta$-variable and set the separation constant to $\lambda$ so that $-\Theta'' = \lambda\Theta$.

(a) $\lambda = -\mu^2 < 0$: then $-\Theta'' = -\mu^2\Theta$ has general solution $\Theta = Ae^{\mu\theta} + Be^{-\mu\theta}$. This cannot satisfy $\Theta(0) = \Theta(\pi/4) = 0$ for any $\mu \ne 0$ unless $A = B = 0$.

(b) $\lambda = 0$: then $-\Theta'' = 0$ has general solution $\Theta = A + B\theta$, which cannot satisfy $\Theta(0) = \Theta(\pi/4) = 0$ unless $A = B = 0$.

(c) $\lambda = \mu^2 > 0$: then $-\Theta'' = \mu^2\Theta$ has general solution $\Theta = A\sin\mu\theta + B\cos\mu\theta$. The BC $\Theta(0) = 0$ requires $B = 0$. Then $\Theta(\pi/4) = 0$ requires $A\sin(\mu\pi/4) = 0$. This will lead to the trivial solution $A = 0$ unless $\mu = 4n$, $n = 1, 2, \ldots$. The eigenvalues are therefore given by $\lambda_n = 16n^2$ with corresponding eigenfunctions $\Theta_n = \sin(4n\theta)$. Then $R$ satisfies $r^2 R''(r) + rR'(r) = 16n^2 R$. Looking for solutions of the form $R = Ar^\alpha$ we find that $r^2 R''(r) + rR'(r) = \alpha^2 Ar^\alpha$ and so $\alpha = \pm 4n$. There are two solutions and, by linearity of the ODE, a linear combination is also a solution and therefore
$$R(r) = Dr^{4n} + Er^{-4n}$$
for arbitrary constants $D$ and $E$. The BC $u(1,\theta) = 0$ implies that $R(1) = 0$ so that $E = -D$, and the general solution satisfying the PDE and the homogeneous BCs is
$$u(r,\theta) = \sum_{n=1}^{\infty} D_n\big(r^{4n} - r^{-4n}\big)\sin 4n\theta.$$
The BC $u(2,\theta) = g(\theta)$ then means that the coefficients are given by
$$D_n\big(2^{4n} - 2^{-4n}\big) = \frac{8}{\pi}\int_0^{\pi/4} g(\theta)\sin 4n\theta\,d\theta$$
due to the mutual orthogonality of the eigenfunctions over $(0,\pi/4)$ and
$$\int_0^{\pi/4}\sin^2 4n\theta\,d\theta = \tfrac18\pi.$$
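Two of the facts used above — the normalisation $\int_0^{\pi/4}\sin^2 4n\theta\,d\theta = \pi/8$ and that $R(r) = r^{4n} - r^{-4n}$ satisfies the Euler ODE $r^2R'' + rR' = 16n^2R$ — are easy to confirm numerically (an illustrative sketch, sampled at a few arbitrary points with $n = 1$):

```python
# Numerical checks for Exercise 8.25.
import math

def simpson(f, a, b, m=2000):
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

n = 1
norm = simpson(lambda t: math.sin(4 * n * t) ** 2, 0.0, math.pi / 4)

def R(r):
    return r ** (4 * n) - r ** (-4 * n)

h = 1e-4
def ode_residual(r):
    Rpp = (R(r + h) - 2 * R(r) + R(r - h)) / h ** 2
    Rp = (R(r + h) - R(r - h)) / (2 * h)
    return r * r * Rpp + r * Rp - 16 * n * n * R(r)

res = max(abs(ode_residual(r)) for r in (1.2, 1.5, 1.9))
```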

8.26 Using Exercise 4.20 we find that Laplace's equation in polar coordinates becomes
$$u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0$$
and substituting $u(r,\theta) = R(r)\Theta(\theta)$ into the PDE leads to
$$\frac{r^2 R''(r) + rR'(r)}{R(r)} = -\frac{\Theta''(\theta)}{\Theta(\theta)}.$$
The left hand side is a function of $r$ only, while the right hand side is a function of $\theta$ only, so we deduce that both must be constant. The BCs $u(1,\theta) = u(2,\theta) = 0$ suggest that we look for eigenfunctions in the $r$-variable and set the separation constant to $-\lambda$ so that $-r^2 R''(r) - rR'(r) = \lambda R$. Following the hint, the change of variable $s = \log r$ means that, by the chain rule,
$$\partial_r = (\partial_r s)\partial_s = \frac{1}{r}\partial_s \quad\Rightarrow\quad r\partial_r = \partial_s.$$
Then, since $r^2 R''(r) + rR'(r) = r\partial_r(r\partial_r R) = \partial_s^2 R$, the ODE for $R$ becomes $-\partial_s^2 R = \lambda R$.

(a) $\lambda = -\mu^2 < 0$: then we have general solution $R = Ae^{\mu s} + Be^{-\mu s} = Ar^{\mu} + Br^{-\mu}$, which cannot satisfy $R(1) = R(2) = 0$ for any $\mu \ne 0$ unless $A = B = 0$.

(b) $\lambda = 0$: then the general solution is $R = A + Bs = A + B\log r$, which cannot satisfy $R(1) = R(2) = 0$ unless $A = B = 0$.

(c) $\lambda = \mu^2 > 0$: then the general solution is $R = A\sin\mu s + B\cos\mu s$. The BC $R = 0$ at $r = 1$ is applied at $s = 0$ and requires $B = 0$. Then $R = 0$ at $r = 2$ is applied at $s = \log 2$ and requires $A\sin(\mu\log 2) = 0$. This will lead to the trivial solution $A = 0$ unless $\mu = n\pi/\log 2$, $n = 1, 2, \ldots$. The eigenvalues are therefore given by $\lambda_n = (n\pi/\log 2)^2$ with corresponding eigenfunctions
$$R_n = \sin\Big(n\pi\frac{\log r}{\log 2}\Big).$$
Then $\Theta$ satisfies $\Theta'' = \lambda_n\Theta$ so that $\Theta = Ce^{\mu\theta} + De^{-\mu\theta}$. The BC $\Theta(\pi/4) = 0$ gives $Ce^{\mu\pi/4} + De^{-\mu\pi/4} = 0$. The solution of this is conveniently written in terms of another arbitrary constant $\gamma$ as $C = \gamma e^{-\mu\pi/4}$, $D = -\gamma e^{\mu\pi/4}$, giving
$$\Theta = 2\gamma\sinh\mu(\theta - \tfrac14\pi), \qquad \mu = \frac{n\pi}{\log 2}.$$
Normalising this by choosing $\gamma$ so that $\Theta(0) = 1$ finally gives the general solution
$$u(r,\theta) = \sum_{n=1}^{\infty} A_n\,\frac{\sinh\mu(\tfrac14\pi - \theta)}{\sinh\tfrac14\mu\pi}\sin\Big(n\pi\frac{\log r}{\log 2}\Big),$$
which satisfies the PDE and the homogeneous BCs.

8.27 Written in polar coordinates, we require the eigenvalues $\lambda$ and eigenfunctions $u$ (not identically zero) such that
$$-\Big(u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta}\Big) = \lambda u.$$
Substituting $u(r,\theta) = R(r)\Theta(\theta)$ into the PDE leads to
$$-\frac{r^2 R''(r) + rR'(r) + \lambda r^2 R(r)}{R(r)} = \frac{\Theta''(\theta)}{\Theta(\theta)}.$$
The left hand side is a function of $r$ only, while the right hand side is a function of $\theta$ only, so we deduce that both must be constant. The periodic BCs $u(r,\theta) = u(r,\theta+2k\pi)$ for any integer $k$ suggest that we look for eigenfunctions in the $\theta$-variable and set the separation constant to $\alpha$ so that $-\Theta'' = \alpha\Theta$.

It is readily shown that there are no periodic solutions for $\alpha < 0$, so we set $\alpha = \nu^2 \ge 0$, so that $\Theta = A\sin\nu\theta + B\cos\nu\theta$. This will be periodic of period $2\pi$ if $\nu = n$, $n = 0, 1, \ldots$ (note that $n = 0$ is permissible). This being the case, then
$$r^2 R'' + rR' + (\lambda r^2 - n^2)R = 0.$$
The simple change of variable $x = r\sqrt{\lambda}$ converts this into Bessel's equation (D.1) with $\nu = n$. The general solution that remains bounded at the centre of the circle is
$$R_n(r) = CJ_n(r\sqrt{\lambda}),$$
where $J_n$ is the Bessel function of the first kind of order $n$ (see Appendix D). It is necessary to have $R(a) = 0$ in order for $u(a,\theta) = 0$. Therefore, since $C = 0$ leads to the trivial solution, we have to choose $\lambda$ in such a way that $a\sqrt{\lambda} = \xi_{n,m}$, the $m$th nonnegative zero of $J_n$, $m = 1, 2, \ldots$. Hence the eigenvalues are $\lambda_{n,m} = (\xi_{n,m}/a)^2$ and the eigenfunctions are as given in the question.

8.28 We look for a solution in the form $u(r,\theta,t) = U(r,\theta)T(t)$, in which case the PDE gives
$$\frac{T''(t)}{c^2 T(t)} = \frac{\nabla^2 U}{U(r,\theta)}.$$
The left hand side is a function of $t$ only, while the right hand side is a function of $(r,\theta)$ only, so we deduce that both must be constant. Setting the separation constant to $-\lambda$, then $U$ should satisfy
$$-\nabla^2 U = \lambda U$$
inside the circle and $U = 0$ on the perimeter. This is the problem solved in the previous exercise, for which the eigenvalues $\lambda_{n,m}$ and corresponding eigenfunctions are given. The associated solution for $T$ then satisfies $T'' = -\lambda c^2 T$. Thus, the general solution is of the form
$$u = \sum_{n=0}^{\infty}\sum_{m=1}^{\infty}\big(A_{m,n}\sin\omega_{n,m}t + B_{m,n}\cos\omega_{n,m}t\big)J_n\Big(\frac{\xi_{n,m}r}{a}\Big)\sin(n\theta+\beta), \qquad \omega_{n,m} = \frac{c\,\xi_{n,m}}{a},$$
for arbitrary constants $A_{m,n}$ and $B_{m,n}$ which can be determined by the initial conditions.


Exercises 9

The method of characteristics

9.1 (a) $u_t + tu_x = u$:
$$\frac{dt}{1} = \frac{dx}{t} = \frac{du}{u} \quad\Rightarrow\quad \frac{dx}{dt} = t, \qquad \frac{du}{dt} = u,$$
giving, in terms of $(t,k)$: $x = k + \tfrac12 t^2$, $u = A(k)e^t$. In terms of $(x,t)$: $u = A(x - \tfrac12 t^2)e^t$.

(b) $tu_t - u_x = 1$:
$$\frac{dt}{t} = \frac{dx}{-1} = \frac{du}{1} \quad\Rightarrow\quad \frac{dt}{dx} = -t, \qquad \frac{du}{dx} = -1,$$
giving, in terms of $(x,k)$: $t = ke^{-x}$ and $u = A(k) - x$. In terms of $(x,t)$: $u = A(te^x) - x$.

(c) $u_t + xu_x = -u$:
$$\frac{dt}{1} = \frac{dx}{x} = \frac{du}{-u} \quad\Rightarrow\quad \frac{dx}{dt} = x, \qquad \frac{du}{dt} = -u,$$
giving, in terms of $(t,k)$: $x = ke^t$ and $u = A(k)e^{-t}$. In terms of $(x,t)$: $u = A(xe^{-t})e^{-t}$.

(d) $xu_t - u_x = t$:
$$\frac{dt}{x} = \frac{dx}{-1} = \frac{du}{t} \quad\Rightarrow\quad \frac{dt}{dx} = -x, \qquad \frac{du}{dx} = -t,$$
giving $t = k - \tfrac12 x^2$ and so $\frac{du}{dx} = -k + \tfrac12 x^2$. Hence in terms of $(x,k)$: $u = A(k) - kx + \tfrac16 x^3$. In terms of $(x,t)$: $u = A(t + \tfrac12 x^2) - (t + \tfrac12 x^2)x + \tfrac16 x^3$.

(e) $tu_t + xu_x = x$:
$$\frac{dt}{t} = \frac{dx}{x} = \frac{du}{x} \quad\Rightarrow\quad \int\frac{dt}{t} = \int\frac{dx}{x}, \qquad \frac{du}{dx} = 1,$$
giving, in terms of $(x,k)$: $t = kx$ and $u = A(k) + x$. In terms of $(x,t)$: $u = A(t/x) + x$.

(f) $tu_t - xu_x = t$:
$$\frac{dt}{t} = \frac{dx}{-x} = \frac{du}{t} \quad\Rightarrow\quad \int\frac{dt}{t} = -\int\frac{dx}{x}, \qquad \frac{du}{dt} = 1,$$
giving, in terms of $(t,k)$: $x = k/t$ and $u = A(k) + t$. In terms of $(x,t)$: $u = A(tx) + t$.

(g) $xu_t - tu_x = xt$:
$$\frac{dt}{x} = \frac{dx}{-t} = \frac{du}{xt} \quad\Rightarrow\quad \int t\,dt = -\int x\,dx, \qquad \frac{du}{dt} = t,$$
giving, in terms of $(t,k)$: $x^2 = k - t^2$ and $u = A(k) + \tfrac12 t^2$. In terms of $(x,t)$: $u = A(x^2 + t^2) + \tfrac12 t^2$.

(h) $xu_t + tu_x = -xu$:
$$\frac{dt}{x} = \frac{dx}{t} = \frac{du}{-xu} \quad\Rightarrow\quad \int t\,dt = \int x\,dx, \qquad \frac{du}{dt} = -u,$$
giving, in terms of $(t,k)$: $x^2 = k + t^2$ and $u = A(k)e^{-t}$. In terms of $(x,t)$: $u = A(x^2 - t^2)e^{-t}$.
giving, in terms of (t, k): x2 = k + t2 and u = A(k)e−t . In terms of (x, t): u = A(x2 − t2 )e−t . 9.2 Note that the solutions below are only valid for those values of (x, t) where characteristics passing through (x, t) also pass through the given boundary data without either going through infinity or crossing other characteristics. Check this against the sketches of the paths of the characteristics. (a) u = A(x− 21 t2 )e−t with u(0, x) = sin(x) gives A(x) = sin(x). Hence u = sin(x− 12 t2 )e−t , for all values of (x, t). (b) u = A(tex ) − x with u(t, 0) = exp(−t2 ) gives A(t) = exp(−t2 ). Hence u = exp(−t2 e2x ) − x, for all values of (x, t). (c) u = A(xe−t )e−t with u(0, x) = x2 gives A(x) = x2 . Hence u = x2 e−3t , for all values of (x, t). (d) u = A(t+ 21 x2 ) − (t+ 12 x2 )x + 61 x3 with u(t, 0) = ln(1 + t2 ) gives A(t) = ln(1 + t2 ). Hence u = ln[1 + (t+ 12 x2 )2 ] − (t+ 12 x2 )x + 61 x3 , for all values of (x, t). (e) u = A(t/x) + x with u(1, x) = x3 gives A(1/x) + x = x3 or A(z) = z −3 − 1/z. Hence u = (x/t)3 − x/t + x, for t > 0. (f) u = A(tx) + t with u(1, x) = 1/(1 + x2 ) gives A(x) + 1 = 1/(1 + x2 ) or A(x) = −x2 /(1 + x2 ). Hence u = −x2 t2 /(1 + x2 t2 ) + t, for t ≥ 0. √ (g) u = A(x2 + t2 ) + 21 t2 with u(0, x) = 1 + x for x ≥ 0 gives A(x2 ) = 1 + x or A(z) = 1 + z. √ Hence u = 1 + x2 + t2 + 21 t2 , for all values of (x, t). √ (h) u = A(x2 − t2 )e−t x) = 1 − x for x ≥ 0 gives A(x2 ) = 1 − x or A(z) = 1 − z. √ with u(0, Hence u = (1 − x2 − t2 )e−t , for |x| ≥ |t|. 9.3 The nonzero component of f1 is equal to the first component of V T g(X − λ1 T ). Thus f1 = D1 V T g(X − λ1 T ), where D1 is the d × d matrix zero matrix except that its (1, 1) entry is equal to one. Hence, u1 = V −T D1 V T g(X − λ1 T ) ⇒ ku1 k2 ≤ kV −T k2 kD1 k2 kV T k2 kg(X − λ1 T )k2 . However, kV −T k2 = kV −1 k2 , kV T k2 = kV k2 , kD1 k2 = 1 and kg(X − λ1 T )k2 ≤ M2 so the result follows. 67

A similar argument holds for V^T uj = fj, where the only nonzero component of fj is its jth component, which is vj^T g(X − λj T). This time we can write fj = Dj V^T g(X − λj T), where Dj is the d × d zero matrix except that its (j, j) entry is equal to one.

The bound on the solution of (9.6) follows by applying the triangle inequality to u = Σ_{j=1}^d uj: ‖u‖₂ ≤ Σ_{j=1}^d ‖uj‖₂. When A is symmetric its eigenvectors are mutually orthogonal and κ₂(V) = 1.

9.4 Suppose that

  A = [a  b]
      [c  d],

then its characteristic polynomial is

  det(A − λI) = (a − λ)(d − λ) − bc = λ² − (a + d)λ + (ad − bc),

and the result follows by observing that tr(A) = a + d and det(A) = ad − bc.

9.5 At P1, where X1 > 2T1 (see Fig. 9, Left):

(a) AP1 is a Γ1-characteristic: x + t = X1 + T1, along which u + w = constant. Thus xA = X1 + T1 and

  u(P) + w(P) = u(A) + w(A).    (9.5a)

(b) BP1 is a Γ2-characteristic: x − t = X1 − T1, along which v + w = constant. Thus xB = X1 − T1 and

  v(P) + w(P) = v(B) + w(B).    (9.5b)

(c) CP1 is a Γ3-characteristic: x − 2t = X1 − 2T1, along which u + v = constant. Thus xC = X1 − 2T1 and

  u(P) + v(P) = u(C) + v(C).    (9.5c)

Figure 9: Left and Centre: the characteristics through the points P1 and P3 for Example 9.2, drawn backwards in time, with reflections when they intersect the t-axis. The dashed lines show the characteristics x − t = 0 and x − 2t = 0 that pass through the origin. Right: the characteristics for Exercise 9.7.

Equations (9.5a–c) are clearly of the form (9.13), where the columns of V are the eigenvectors of A^T and f = [v3^T g(A), v2^T g(B), v1^T g(C)]^T. The initial conditions provide values for u, v and w at A, B and C, and these equations may be solved to give u, v and w at P1.

At P3, where X3 < T3 (see Fig. 9, Centre):

(a) AP3 is a Γ1-characteristic: x + t = X3 + T3, along which u + w = constant. Thus xA = X3 + T3 and

  u(P) + w(P) = u(A) + w(A).    (9.5A)

(b) The Γ2-characteristic through P3 intersects the t-axis at D: x − t = X3 − T3, along which v + w = constant. Thus tD = T3 − X3 and v(P) + w(P) = v(D) + w(D). One of the BCs on x = 0 is w(0, t) = 0, and so v(P) + w(P) = v(D). Now CD is a Γ1-characteristic so xC = tD, u + w is constant along it and, applying the BCs, gives u(C) + w(C) = v(D). Combining these results leads to

  v(P) + w(P) = u(C) + w(C).    (9.5B)

(c) The Γ3-characteristic through P3 intersects the t-axis at E: x − 2t = X3 − 2T3, along which u + v = constant. Thus tE = T3 − ½X3 and u(P) + v(P) = u(E) + v(E). One of the BCs on x = 0 is u(0, t) = v(0, t), and so u(P) + v(P) = 2u(E). Now BE is a Γ1-characteristic so xB = tE, u + w is constant along it and, applying the BCs, gives u(B) + w(B) = u(E). Combining these results leads to

  u(P) + v(P) = 2u(B) + 2w(B).    (9.5C)

Equations (9.5A–C) are clearly of the form (9.13), where the columns of V are the eigenvectors of A^T and the components of f are linear combinations of the values of u evaluated at the points where the characteristics intersect the x-axis.

9.6 When u(x, 0) = w(x, 0) = 0, the condition u(A) + w(A) = u(B) + w(B) along BA leads to u(B) + w(B) = 0, which is a constraint on the initial values of u and w that will not, in general, be valid. The difficulty is caused by specifying the value (here u + w) on an outgoing characteristic at A.

9.7 The three families of characteristics are:

  Γ1: λ1 = −2,  x + 2t = constant,  v1^T u = u + w = constant,
  Γ2: λ2 = −1,  x + t = constant,   v2^T u = v + w = constant,
  Γ3: λ3 = 1,   x − t = constant,   v3^T u = u + v = constant.

In order to find the solution at P(X, T) with X < T (see Fig. 9, Right) we follow the characteristics through P backwards in time until they intersect the initial line t = 0 (x ≥ 0). When a characteristic intersects the t-axis (at E, for example), we also have to include the characteristics through E as shown.

AP: This is a Γ1-characteristic, x + 2t = X + 2T, along which u + w is constant. Thus,

  u(A) + w(A) = u(P) + w(P),  x(A) = X + 2T.    (9.7a)

BP: This is a Γ2-characteristic, x + t = X + T, along which v + w is constant. Thus,

  v(B) + w(B) = v(P) + w(P),  x(B) = X + T.    (9.7b)

EP: This is a Γ3-characteristic, x − t = X − T, along which u + v is constant. Thus,

  u(E) + v(E) = u(P) + v(P),  t(E) = T − X.    (9.7c)

CE: This is a Γ1-characteristic, x + 2t = 2(T − X), along which u + w is constant. The BC specifies w(0, t) = 0, and so,

  u(E) = u(C) + w(C),  x(C) = 2(T − X).    (9.7d)

DE: This is a Γ2-characteristic, x + t = T − X, along which v + w is constant. The BC specifies w(0, t) = 0, and so,

  v(E) = v(D) + w(D),  x(D) = T − X.    (9.7e)

Hence, from (9.7c–e), u(P) + v(P) = u(C) + w(C) + v(D) + w(D). This, together with (9.7a–b), constitutes three linearly independent equations to determine u(P), v(P) and w(P). The coefficient matrix of this system is V^T, whose rows are the eigenvectors of A^T.

9.8 With the given matrix A the system (9.3) becomes

  uy + vx = 0,
  vy + wx = 0,
  a ux + b vx + c wx + wy = 0.

We shall use the last two of these to eliminate w. Differentiating the third with respect to x and substituting wx = −vy gives

  uy + vx = 0,
  a uxx + b vxx − (c vxy + vyy) = 0.

Differentiating the second with respect to x and then substituting vx = −uy gives, on rearranging,

  uyyy + c uxyy − b uxxy + a uxxx = 0.

It can be shown that each component of u satisfies the same PDE. The connection with eigenvalues is that the characteristic polynomial of −A is λ³ + cλ² − bλ + a = 0, whose coefficients are the same as those of the PDE.

9.9 With u = [u, v]^T and f = [f, g]^T, the equations become

  ut + A ux = f,  A = [0 1]
                      [1 0].

The matrix A has eigenvalue λ = 1 with eigenvector v1 = [1, 1]^T and eigenvalue λ = −1 with eigenvector v2 = [1, −1]^T. Multiplying ut + A ux = f by v1^T = [1, 1], and supposing that f = 0, g = Gt, we find that U+ = v1^T u = u + v satisfies Ut+ + Ux+ = Gt, for which the characteristic equation is (see (9.2))

  dt/1 = dx/1 = dU+/Gt,

so that

  dx/dt = 1,  dU+/dt = Gt  ⇒  x = t + k,  U+ = G(x, t) + A(k)  ⇒  u + v = G(x, t) + A(x − t),

where k is an arbitrary constant and A(k) an arbitrary function. Similarly, multiplying ut + A ux = f by v2^T = [1, −1], we find that U− = v2^T u = u − v satisfies Ut− − Ux− = −Gt, for which the characteristic equation is

  dt/1 = dx/(−1) = dU−/(−Gt),

so that

  dx/dt = −1,  dU−/dt = −Gt  ⇒  x = −t + k,  U− = −G(x, t) + B(k)  ⇒  u − v = −G(x, t) + B(x + t),

where B is an arbitrary function. Combining these we have

  u = ½(A(x − t) + B(x + t)),  v = G(x, t) + ½(A(x − t) − B(x + t)).

The initial condition u(x, 0) = 0 gives B(x) = −A(x), and then v(x, 0) = 0 gives A(x) = −G(x, 0). Hence the solution is

  u(x, t) = ½(G(x + t, 0) − G(x − t, 0)),  v(x, t) = G(x, t) − ½(G(x − t, 0) + G(x + t, 0)).

9.10 Comparing 2uxx + 3uxy + uyy = 0 with (4.12) we see that a = 2, b = 3/2 and c = 1, so that b² − ac = ¼ > 0. Consequently the PDE is hyperbolic. The PDE may be factored:

  2uxx + 3uxy + uyy = (2∂x + ∂y)(∂x + ∂y)u = 0.

The first factor has characteristic equations

  Γ1:  dx/2 = dy/1 = dU/0  ⇒  x = 2y + k,  U = ux + uy = A(k) = constant,

and the second factor gives

  Γ2:  dx/1 = dy/1 = dV/0  ⇒  x = y + k,  V = 2ux + uy = B(k) = constant,

so that ux + uy = A(x − 2y), 2ux + uy = B(x − y). Solving and integrating, we find

  ux = B(x − y) − A(x − 2y)  ⇒  u = F(x − y) + G(x − 2y) + C(y),
  uy = 2A(x − 2y) − B(x − y)  ⇒  u = G(x − 2y) + F(x − y) + D(x),

where F(s) = ∫ B(s) ds, G(s) = −∫ A(s) ds, and C(y), D(x) are arbitrary functions. These equations are compatible if, and only if, C(y) = D(x) = constant—this constant can be absorbed into either F or G. Thus the general solution is

  u(x, y) = F(x − y) + G(x − 2y).

The BCs u(x, 0) = g0(x) and uy(x, 0) = g1(x) lead to

  F(x) + G(x) = g0(x),  −F′(x) − 2G′(x) = g1(x)  ⇒  −F(x) − 2G(x) = ∫^x g1(s) ds,

which give

  G(x) = −g0(x) − ∫^x g1(s) ds,  F(x) = 2g0(x) + ∫^x g1(s) ds

and, therefore,

  u(x, y) = 2g0(x − y) − g0(x − 2y) + ∫_{x−2y}^{x−y} g1(s) ds ≡ 2g0(A) − g0(B) + ∫_B^A g1(s) ds

in the notation of Fig. 10. As a check we note that this reduces to u(x, 0) = g0(x) as y → 0.
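A numerical spot-check (an addition, not part of the printed solution): with the arbitrary choices g0 = sin and g1 = cos, the integral of g1 from x − 2y to x − y is sin(x − y) − sin(x − 2y), so the solution above reduces to u = 3 sin(x − y) − 2 sin(x − 2y). This should satisfy the PDE and both boundary conditions.

```python
# Check the Exercise 9.10 formula with g0 = sin, g1 = cos (arbitrary data):
# u = 3*sin(x-y) - 2*sin(x-2y) should satisfy 2u_xx + 3u_xy + u_yy = 0,
# u(x,0) = g0(x) and u_y(x,0) = g1(x).
import math

def u(x, y):
    return 3.0 * math.sin(x - y) - 2.0 * math.sin(x - 2.0 * y)

h = 1e-3
for (x, y) in [(0.4, 0.3), (-1.1, 0.8)]:
    uxx = (u(x + h, y) - 2.0 * u(x, y) + u(x - h, y)) / h**2
    uyy = (u(x, y + h) - 2.0 * u(x, y) + u(x, y - h)) / h**2
    uxy = (u(x + h, y + h) - u(x + h, y - h)
           - u(x - h, y + h) + u(x - h, y - h)) / (4.0 * h**2)
    assert abs(2.0 * uxx + 3.0 * uxy + uyy) < 1e-4    # the PDE
assert abs(u(0.7, 0.0) - math.sin(0.7)) < 1e-12       # u(x,0) = g0(x)
uy0 = (u(0.7, h) - u(0.7, -h)) / (2.0 * h)
assert abs(uy0 - math.cos(0.7)) < 1e-5                # u_y(x,0) = g1(x)
print("9.10 solution checks passed")
```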

Figure 10: The characteristics x = y and x = 2y through a general point P(X, Y) for Exercise 9.10; they meet the x-axis at A = (X − Y, 0) and B = (X − 2Y, 0).

9.11 Suppose that P has coordinates (X, Y).

(a) For X > 2Y > 0 (see Fig. 11) both characteristics through P1 intersect the boundary on the x-axis. Hence u(X, Y) is determined by the initial conditions—the solution to Exercise 9.10 fulfils these conditions.

Figure 11: The characteristics through three general points Pj (j = 1, 2, 3) for Exercise 9.11.

(b) For 0 < X < Y both characteristics through P3 intersect the boundary on the y-axis. Hence u(X, Y) is determined by the boundary conditions. We begin with the general solution u(x, y) = F(x − y) + G(x − 2y) of the PDE derived in the solution to Exercise 9.10. Applying the BCs leads to

  F(−y) + G(−2y) = f0(y),  F′(−y) + G′(−2y) = f1(y)  ⇒  −F(−y) − ½G(−2y) = ∫^y f1(s) ds,

from which we obtain

  G(−2y) = 2f0(y) + 2 ∫^y f1(s) ds  ⇒  G(t) = 2f0(−½t) + 2 ∫^{−t/2} f1(s) ds,
  F(−y) = −f0(y) − 2 ∫^y f1(s) ds  ⇒  F(t) = −f0(−t) − 2 ∫^{−t} f1(s) ds.

The solution satisfying the BCs is, therefore,

  u(x, y) = 2f0(y − ½x) − f0(y − x) + 2 ∫_{y−x}^{y−½x} f1(s) ds.

(c) For 2Y > X > Y > 0 (see Fig. 11) one characteristic through P2 intersects the boundary on each axis. Hence u(X, Y) is determined by both the initial and boundary conditions. Using u(x, y) = F(x − y) + G(x − 2y) we find that, along x = y, u(y, y) = F(0) + G(−y) while, along x = 2y, u(2y, y) = F(y) + G(0). Hence

  F(y) = u(2y, y) − G(0),  G(−y) = u(y, y) − F(0),

and

  u(X, Y) = u(2(X − Y), X − Y) + u(2Y − X, 2Y − X) − F(0) − G(0).

Setting X = Y = 0 shows that F(0) + G(0) = u(0, 0). The characteristics y = x + Y − X and 2y = x + 2Y − X through P2 intersect the characteristics y = x and 2y = x through the origin at Q and R, whose coordinates are (2Y − X, 2Y − X) and (2(X − Y), X − Y), respectively. Hence,

  u(P) = u(Q) + u(R) − u(0, 0).

9.12 Note that

  d/dx (x e^{−x²}) = (1 − 2x²) e^{−x²} = ½ f(x, t).

With c = 1,

  ∫_{x−t+τ}^{x+t−τ} f(ξ, τ) dξ = 2[ξ e^{−ξ²}]_{x−t+τ}^{x+t−τ} = 2(x + t − τ) e^{−(x+t−τ)²} − 2(x − t + τ) e^{−(x−t+τ)²}

and

  ∫_0^t ∫_{x−t+τ}^{x+t−τ} f(ξ, τ) dξ dτ = [e^{−(x+t−τ)²} + e^{−(x−t+τ)²}]_{τ=0}^{τ=t} = 2e^{−x²} − e^{−(x−t)²} − e^{−(x+t)²},

and (9.25) follows.

9.13 The operator associated with the PDE utt + utx − 2uxx = t factorizes:

  utt + utx − 2uxx = (∂t − ∂x)(∂t + 2∂x)u.

Hence, with v = (∂t + 2∂x)u we have (∂t − ∂x)v = t, whose characteristic equations are

  dt/1 = dx/(−1) = dv/t  ⇒  dx/dt = −1,  dv/dt = t,

so the characteristics are x + t = k1, along which v = ½t² + a(k1), where a(·) is an arbitrary function. Then u is the solution of (∂t + 2∂x)u = v, whose characteristic equations are

  dt/1 = dx/2 = du/v  ⇒  dx/dt = 2,  du/dt = v,

so the characteristics are x − 2t = k2, along which u = t³/6 + ∫^t a(x + s) ds + B(k2). This gives the general solution

  u(x, t) = t³/6 + A(x + t) + B(x − 2t),

where A(x + t) = ∫^t a(x + s) ds. The initial conditions u(x, 0) = ut(x, 0) = 0 give

  A(x) + B(x) = 0,  A′(x) − 2B′(x) = 0  ⇒  A(x) − 2B(x) = constant = C,

and therefore A(x) = C/3 = −B(x). Hence u(x, t) = t³/6.

9.14 The operator associated with the PDE utt − uxx = 0 factorizes:

  utt − uxx = (∂t − ∂x)(∂t + ∂x)u.

Hence, with v = (∂t + ∂x)u we have (∂t − ∂x)v = 0, whose characteristic equations are

  dt/1 = dx/(−1) = dv/0  ⇒  dx/dt = −1,  dv/dt = 0,

so the characteristics are x + t = k1, along which v = a(k1), where a(·) is an arbitrary function. Then u is the solution of (∂t + ∂x)u = v, whose characteristic equations are

  dt/1 = dx/1 = du/v  ⇒  dx/dt = 1,  du/dt = v,

so the characteristics are x − t = k2, along which u = ∫^t a(x + s) ds + B(k2). This gives the general solution

  u(x, t) = A(x + t) + B(x − t),

where A(x + t) = ∫^t a(x + s) ds. The boundary condition u(0, t) = 0 gives A(t) + B(−t) = 0, so B(s) = −A(−s) and

  u(x, t) = A(x + t) − A(t − x).

(a) The boundary condition u(π, t) = 0 gives A(π + t) = A(t − π), that is A(s) = A(s + 2π), which is the description of a 2π-periodic function.

(b) The boundary condition ux(π, t) = 0 gives A′(π + t) + A′(t − π) = 0, that is, A′(z) = −A′(z + 2π), from which we deduce that A(z + 2π) = C − A(z), where C is an arbitrary constant. Thus A(z + 4π) = C − A(z + 2π) = C − (C − A(z)) = A(z), which is the description of a 4π-periodic function.
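A numerical spot-check (an addition, not part of the printed solution) of the two periodicity claims in 9.14: with the 2π-periodic choice A = sin, u = A(x+t) − A(t−x) vanishes at both x = 0 and x = π; with A(z) = cos(z/2), which satisfies A(z + 2π) = −A(z) and is 4π-periodic, u vanishes at x = 0 and ux vanishes at x = π.

```python
# Verify the boundary behaviour of u(x,t) = A(x+t) - A(t-x) for two
# representative choices of A (both choices are arbitrary illustrations).
import math

A1 = math.sin                      # 2*pi-periodic: part (a)
A2 = lambda z: math.cos(z / 2.0)   # A(z+2*pi) = -A(z), 4*pi-periodic: part (b)

def u(A, x, t):
    return A(x + t) - A(t - x)

h = 1e-6
for t in [0.0, 0.7, 2.9]:
    assert abs(u(A1, 0.0, t)) < 1e-12           # u(0,t) = 0
    assert abs(u(A1, math.pi, t)) < 1e-12       # part (a): u(pi,t) = 0
    assert abs(u(A2, 0.0, t)) < 1e-12           # u(0,t) = 0
    ux = (u(A2, math.pi + h, t) - u(A2, math.pi - h, t)) / (2 * h)
    assert abs(ux) < 1e-8                       # part (b): u_x(pi,t) = 0
print("9.14 boundary checks passed")
```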

9.15

  L1 L2 = (a∂t + b∂x)(c∂t + d∂x)
        = a∂t(c∂t + d∂x) + b∂x(c∂t + d∂x)
        = a(ct ∂t + c∂t² + dt ∂x + d∂x∂t) + b(cx ∂t + c∂t∂x + dx ∂x + d∂x²)
        = L + (a ct + b cx)∂t + (a dt + b dx)∂x,

and so L1 L2 = L if a ct + b cx = 0 and a dt + b dx = 0. Thus Lu = 0 is equivalent to L1 v = 0, where L2 u = v. For the given PDE, comparing coefficients we find ac = 1, ad + bc = t − 1 and bd = −t. A solution of these is a = c = 1, b = t, d = −1, so that L1 = ∂t + t∂x, L2 = ∂t − ∂x. The characteristic equations of L1 v = 0 are

  dt/1 = dx/t = dv/0  ⇒  dx/dt = t,  dv/dt = 0  ⇒  x = k1 + ½t²,  v = A(k1),

and so v = ut − ux = A(k1) is constant along x − ½t² = k1. The characteristic through the point (x, t) cuts the x-axis at (x − ½t², 0), at which point ut = g1(x − ½t²) and ux = g0′(x − ½t²). Hence,

  ut − ux = A(x − ½t²),  where A(z) = g1(z) − g0′(z).

The PDE L2 u = v has characteristic equations

  dt/1 = dx/(−1) = du/v  ⇒  dx/dt = −1  ⇒  x = k2 − t,
  du/dt = A(x − ½t²) = A(k2 − t − ½t²)  ⇒  u = ∫_0^t A(x + t − s − ½s²) ds + g0(x + t),

using the initial condition u(x, 0) = g0(x).

9.16 In each case we solve, in turn, L1 v = 0 and L2 u = v.

(a) L1 = ∂t + x∂x, L2 = ∂t + ∂x. The characteristic equations are

  L1 v = 0:  dt/1 = dx/x = dv/0  ⇒  dx/dt = x,  dv/dt = 0  ⇒  x = k1 e^t,  v = a(k1),
  L2 u = v:  dt/1 = dx/1 = du/v  ⇒  dx/dt = 1  ⇒  x = k2 + t,
             du/dt = v = a(k1) = a(x e^{−t}) = a((k2 + t) e^{−t}),

so the general solution is u = ∫^t a((k2 + s) e^{−s}) ds + B(k2).

For utt + (1 + x)utx + x uxx = 0, b² − ac = ¼(1 + x)² − x = ¼(1 − x)² > 0 except at x = 1, where both characteristics have speed x′ = 1 and are therefore parallel. The PDE is therefore parabolic at x = 1.

(b) L1 = t∂t + x∂x, L2 = ∂t − ∂x. The characteristic equations are

  L1 v = 0:  dt/t = dx/x = dv/0  ⇒  ∫ dt/t = ∫ dx/x,  dv/dt = 0  ⇒  x = k1 t,  v = a(k1),
  L2 u = v:  dt/1 = dx/(−1) = du/v  ⇒  dx/dt = −1  ⇒  x = k2 − t,
             du/dt = v = a(k1) = a(x/t) = a(k2/t − 1),

so the general solution is u = ∫^t a(k2/s − 1) ds + B(k2).

For t utt + (x − t)utx − x uxx = 0, b² − ac = ¼(x − t)² + xt = ¼(x + t)² > 0 except at x = −t, where both characteristics have speed x′ = −1 and are therefore parallel. The PDE is therefore parabolic along x + t = 0.

(c) L1 = x∂t − t∂x, L2 = ∂t + ∂x. The characteristic equations are

  L1 v = 0:  dt/x = dx/(−t) = dv/0  ⇒  ∫ t dt = −∫ x dx,  dv/dt = 0  ⇒  t² = k1 − x²,  v = a(k1),
  L2 u = v:  dt/1 = dx/1 = du/v  ⇒  dx/dt = 1  ⇒  x = k2 + t,
             du/dt = v = a(k1) = a(x² + t²) = a((k2 + t)² + t²),

so the general solution is u = ∫^t a((k2 + s)² + s²) ds + B(k2).

For x utt + (x − t)utx − t uxx = 0, b² − ac = ¼(x − t)² + xt = ¼(x + t)² > 0 except at x = −t, where both characteristics have speed x′ = 1. The PDE is therefore parabolic along x + t = 0.

(d) L1 = x∂t + ∂x, L2 = ∂t + t∂x. The characteristic equations are

  L1 v = 0:  dt/x = dx/1 = dv/0  ⇒  dt/dx = x,  dv/dx = 0  ⇒  t = k1 + ½x²,  v = a(k1),
  L2 u = v:  dt/1 = dx/t = du/v  ⇒  dx/dt = t  ⇒  x = k2 + ½t²,
             du/dt = v = a(k1) = a(t − ½x²) = a(t − ½(k2 + ½t²)²),

so the general solution is u = ∫^t a(s − ½(k2 + ½s²)²) ds + B(k2).

For x utt + (1 + xt)utx + t uxx = 0, b² − ac = ¼(1 + xt)² − xt = ¼(1 − xt)² > 0 except on xt = 1, where both characteristics have speed x′ = 1/x. The PDE is therefore parabolic along xt = 1.

(e) This case is slightly different because L2 contains an undifferentiated term. L1 = x∂t − t∂x, L2 = t∂t + x∂x − 1. The characteristic equations are

  L1 v = 0:  dt/x = dx/(−t) = dv/0  ⇒  ∫ t dt = −∫ x dx,  dv/dt = 0  ⇒  t² = k1 − x²,  v = a(k1),
  L2 u = v:  dt/t = dx/x = du/(u + v)  ⇒  x = k2 t,
             du/dt = (u + v)/t = (u + a(x² + t²))/t = (u + a((k2² + 1)t²))/t.

Seeking a solution of this ODE in the form u = tA(t) leads to the general solution

  u = t ( ∫^t a((k2² + 1)s²)/s² ds + B(k2) ).

For xt utt + (x² − t²)utx − xt uxx = 0, b² − ac = ¼(x² − t²)² + x²t² = ¼(x² + t²)² > 0 except at x = t = 0, where both characteristics have an undefined speed.
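A numerical spot-check (an addition, not part of the printed solution) of the ODE reduction in 9.16(e): along x = k2 t the solution satisfies u′ = (u + h(t))/t, where h(t) stands for a((k2² + 1)t²). With the arbitrary choice h(t) = t⁴, the form u = t(∫ h(s)/s² ds + const) gives u = t(t³/3 − 1/3 + c), which should satisfy the ODE exactly.

```python
# Check that u = t*(integral of h(s)/s^2 ds + c) solves u' = (u + h)/t,
# with the arbitrary choices h(t) = t**4 and c = 2.
def h(t):
    return t**4

def u(t, c=2.0):
    # integral of s**4/s**2 = s**2 from 1 to t is t**3/3 - 1/3
    return t * (t**3 / 3.0 - 1.0 / 3.0 + c)

for t in [0.5, 1.0, 2.3]:
    dt = 1e-5
    du = (u(t + dt) - u(t - dt)) / (2 * dt)     # numerical u'(t)
    assert abs(du - (u(t) + h(t)) / t) < 1e-7
print("9.16(e) ODE check passed")
```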

9.17 The PDE ut + u ux = −2u has the characteristic equations

  dt/1 = dx/u = du/(−2u),

leading to

  dx/dt = u,  du/dt = −2u.

The second of these has general solution u = C e^{−2t}, from which the first ODE gives x = k − ½C e^{−2t}, where k and C are related arbitrary constants—which we express by writing C = A(k). Consider a characteristic emanating from a point on the boundary given by x = 0 and t = t*, say. We then have k = ½A(k)e^{−2t*} and the BC u(0, t*) = e^{−t*} = A(k)e^{−2t*}, so that A(k) = e^{t*}, k = ½e^{−t*} and therefore kA(k) = ½. These results tie the arbitrary constants for a characteristic to its point of intersection on the t-axis. Furthermore, u = e^{t*−2t} and x = ½(e^{−t*} − e^{t*−2t}). Eliminating t* shows that u is related to x and t via the quadratic equation

  u² + 2xu − e^{−2t} = 0,

whose roots are u = −x ± √(x² + e^{−2t}). Since u > 0 for x = 0 the positive root is appropriate and, by rationalising the result, we obtain

  u = e^{−2t} / (x + √(x² + e^{−2t})).
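A numerical spot-check (an addition, not part of the printed solution): the closed form above should satisfy the quadratic relation, the boundary condition u(0, t) = e^{−t}, and the PDE ut + u ux = −2u.

```python
# Check the Exercise 9.17 solution u = e^{-2t}/(x + sqrt(x^2 + e^{-2t})).
import math

def u(x, t):
    return math.exp(-2 * t) / (x + math.sqrt(x * x + math.exp(-2 * t)))

h = 1e-5
for (x, t) in [(0.0, 0.3), (0.7, 1.1), (2.0, 0.5)]:
    # quadratic relation u^2 + 2xu - e^{-2t} = 0 holds exactly
    assert abs(u(x, t)**2 + 2 * x * u(x, t) - math.exp(-2 * t)) < 1e-12
    # PDE residual u_t + u*u_x + 2u via central differences
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    ux = (u(x + h, t) - u(x - h, t)) / (2 * h)
    assert abs(ut + u(x, t) * ux + 2 * u(x, t)) < 1e-8
assert abs(u(0.0, 0.4) - math.exp(-0.4)) < 1e-12   # BC u(0,t) = e^{-t}
print("9.17 checks passed")
```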

9.18 When g(x) = σx in (9.27) we find

  u = σk e^{−t},  x = k + σk(1 − e^{−t})  ⇒  u(x, t) = σx e^{−t} / (1 + σ(1 − e^{−t})).

Clearly u is a linear function of x for each t. The denominator will remain positive for 0 ≤ t < ∞ provided σ > −1, in which case u(x, t) → 0 as t → ∞. The denominator is zero when t = t*, where

  e^{−t*} = (1 + σ)/σ  ⇒  t* = log(σ/(1 + σ)),

which is real and finite for σ < −1. As t → t*−, |u(x, t)| → ∞ for x ≠ 0. The situation is illustrated in Fig. 12.

Figure 12: The arrows indicate how initial data g(x) = σx (dashed lines) will evolve for Exercise 9.18 as t increases. Initial data in the shaded wedge (σ < −1) give rise to solutions that become unbounded in a finite time.
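A numerical spot-check (an addition, not part of the printed solution) of the blow-up time in 9.18: for σ < −1 the denominator 1 + σ(1 − e^{−t}) vanishes at t* = log(σ/(1 + σ)). The value σ = −2 is an arbitrary test case, giving t* = log 2.

```python
# Verify the finite blow-up time t* = log(sigma/(1+sigma)) for sigma = -2.
import math

sigma = -2.0
tstar = math.log(sigma / (1.0 + sigma))                 # = log 2 here
assert abs(tstar - math.log(2.0)) < 1e-12
assert abs(1.0 + sigma * (1.0 - math.exp(-tstar))) < 1e-12   # denominator = 0
# just before t*, |u(1,t)| = |sigma*e^{-t}/(1 + sigma*(1 - e^{-t}))| is huge
t = tstar - 1e-6
den = 1.0 + sigma * (1.0 - math.exp(-t))
assert abs(sigma * math.exp(-t) / den) > 1e4
print("9.18 blow-up check passed")
```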

9.19 Under the change of variables s = −x, v(s, t) = −u(−s, t) we have vt = −ut and v vs = −u ux (both evaluated at (−s, t)), so v satisfies the same PDE, vt + v vs = −v: the PDE is invariant. The initial condition becomes v(s, 0) = −u(−s, 0) = −g(−s) for s ∈ R. Thus v(s, 0) = g(s) when g is an odd function, in which case v satisfies the same PDE and initial condition as u, so v(s, t) = u(s, t), i.e. u(s, t) = −u(−s, t): u must be an odd function of x.

With g(x) = x/(1 + |x|), which is an odd function, in (9.27) we have, for k ≥ 0,

  u = (k/(1 + k)) e^{−t},  x = k + (k/(1 + k))(1 − e^{−t}).

Defining U = u e^t and a(t) = 1 − e^{−t} to simplify the notation, we find

  U = k/(1 + k),  x = k + U a(t)  ⇒  k = U/(1 − U),  x = U/(1 − U) + U a(t),

which gives the relationship between U, x and t. On rearranging we obtain the quadratic equation

  a(t)U² − (1 + x + a(t))U + x = 0,

whose roots are

  U = [(1 + x + a(t)) ± √((1 + x + a(t))² − 4a(t)x)] / (2a(t)).

Since a(t) → 0 as t → 0, we have to choose the root for which the numerator also vanishes at t = 0—this is the negative root. Thus, since u = U e^{−t},

  u(x, t) = e^{−t} [(1 + x + a(t)) − √((1 + x + a(t))² − 4a(t)x)] / (2a(t))
          = 2x e^{−t} / [(1 + x + a(t)) + √((1 + x + a(t))² − 4a(t)x)],

which is valid for x ≥ 0. The solution for x < 0 follows since u must be an odd function of x.

9.20 With g(x) = σ(x − x0), the solution of Burgers' equation is given by u = σ(k − x0) and the characteristics by x = k + σ(k − x0)t. Consider now two characteristics corresponding to two distinct parameters k1 and k2. These will intersect at time t if

  k1 + σ(k1 − x0)t = k2 + σ(k2 − x0)t  ⇒  (k1 − k2)(1 + σt) = 0.

Therefore, since k1 ≠ k2, they all intersect at the same time t = −1/σ, at the location x = x0.

9.21 The general solution of Burgers' equation takes the form (9.30):

  u = g(k),  x = g(k)t + k.

The characteristics pass through the point x = k at t = 0, u(x, 0) = g(x), and u = constant along characteristics. When g(x) = max{0, 1 − |x|} the initial condition is u = 0 for |x| ≥ 1, and the characteristics there are therefore x = constant if there are no shocks present.

(a) Consider characteristics emanating from the points (x, t) = (k, 0) for 0 ≤ k ≤ 1. Here g(x) = 1 − x and so

  u = 1 − k,  x = (1 − k)t + k  ⇒  u(x, t) = (1 − x)/(1 − t),

so this solution is valid only until t = 1, when the entire family of these characteristics intersect at (1, 1) and a shock forms. Note that u(1, t) = 0 for 0 ≤ t < 1.

(b) Consider characteristics emanating from the points (x, t) = (k, 0) for −1 ≤ k ≤ 0. Here g(x) = 1 + x and so

  u = 1 + k,  x = (1 + k)t + k  ⇒  u(x, t) = (1 + x)/(1 + t),

a solution that is valid for t > 0. Note that u(−1, t) = 0 for t ≥ 0.

(c) We have seen that the characteristics in case (a) intersect to form a shock at x = t = 1. However, the rightmost characteristic from case (b) (i.e., k = 0−) also intersects the characteristic x = 1, and the speed and location of the shock is determined by the interaction of the characteristics of case (b) with those for x > 1, which are vertical lines since u = 0 on each (see Fig. 13). The appropriate Rankine–Hugoniot condition (see Example 9.11) is s′(t) = ½(u+ + u−) with u+ = 0 and u− = 1 + k. Along the (b)-characteristics we have k = (x − t)/(1 + t) and therefore, when x = s(t),

  s′(t) = (1 + s)/(2(1 + t)),  t > 1,  s(1) = 1.

This has solution s(t) = √(2 + 2t) − 1. The solution for t > 1 is therefore given by

  u(x, t) = 0 for x ≤ −1;  u(x, t) = (1 + x)/(1 + t) for −1 < x < s(t);  u(x, t) = 0 for x > s(t).

The solution at selected times is shown in Fig. 13. The triangular profile persists for all time. Prior to shock formation it has a fixed base of length 2 and a constant height u = 1, so the area of the triangle is 1 for 0 ≤ t ≤ 1. After shock formation the base of the triangle has length 1 + s(t) and height u(s(t), t) = (1 + s(t))/(1 + t), so the area is

  ½ (1 + s(t)) × (1 + s(t))/(1 + t) = 1

since 1 + s(t) = √(2 + 2t).

Figure 13: The characteristics (left) and the solutions at selected times t = 0, 1/2, 1, 3/2 (right) for Exercise 9.21.

9.22

  ut + u^{1/2} ux = 0,
  u(0, x) = g(x) = 4 for x ≤ −1;  (1 − x)² for −1 ≤ x ≤ 0;  1 for x ≥ 0.

We have the characteristic equations

  dt/1 = dx/u^{1/2} = du/0  ⇒  du/dt = 0,  dx/dt = u^{1/2},

which show that

  u = A(k),  x = k + t√A(k)

for any constant k. From the initial condition, and the fact that x = k at t = 0, we see that u(x, 0) = A(x) = g(x). It therefore follows that

  u = g(k)  and  x = k + t√g(k).

Characteristics cross when ∂k x = 0. Differentiating gives

  ∂k x = 1 + t (d/dk)√g(k),

where

  √g(k) = 2 for k ≤ −1;  1 − k for −1 ≤ k ≤ 0;  1 for k ≥ 0,
  (d/dk)√g(k) = 0 for k ≤ −1;  −1 for −1 ≤ k ≤ 0;  0 for k ≥ 0.

Hence characteristics cross, for −1 ≤ k ≤ 0, when

  1 + t (d/dk)√g(k) = 1 − t = 0,

which is when t = 1. The characteristics meet where x = k + t√g(k) = k + (1 − k) = 1, which shows that all of these characteristics collide together at the position (x, t) = (1, 1). At this moment we have

  u(x, 1) = 4 for x < 1;  u(x, 1) = 1 for x > 1.

A shock forms at the point (x, t) = (1, 1) and follows a path x = s(t). By writing the PDE in "conservation form",

  ut + (⅔ u^{3/2})x = 0,

we see that, provided u is a conserved quantity across the shock, the shock will travel at the speed

  ds/dt = [⅔ u^{3/2}]/[u] = ⅔ × (8 − 1)/(4 − 1) = 14/9,

so that the shock follows the path x = s(t) = 1 + (14/9)(t − 1) for t ≥ 1.
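Two numerical spot-checks (additions, not part of the printed solutions) of the shock calculations in 9.21(c) and 9.22.

```python
# 9.21(c): s(t) = sqrt(2+2t) - 1 should satisfy the Rankine-Hugoniot ODE
# s' = (1+s)/(2(1+t)) with s(1) = 1, and conserve the triangular area.
import math

def s21(t):
    return math.sqrt(2.0 + 2.0 * t) - 1.0

assert abs(s21(1.0) - 1.0) < 1e-12
h = 1e-6
for t in [1.0, 2.0, 5.0]:
    ds = (s21(t + h) - s21(t - h)) / (2 * h)
    assert abs(ds - (1.0 + s21(t)) / (2.0 * (1.0 + t))) < 1e-8
    # area of the triangle: (1/2)(1+s)^2/(1+t) = 1 for all t > 1
    assert abs(0.5 * (1.0 + s21(t))**2 / (1.0 + t) - 1.0) < 1e-12

# 9.22: with flux q(u) = (2/3)u^{3/2}, the jump from u = 4 to u = 1
# travels at [q]/[u] = (2/3)(8-1)/(4-1) = 14/9.
q = lambda u: (2.0 / 3.0) * u**1.5
assert abs((q(4.0) - q(1.0)) / (4.0 - 1.0) - 14.0 / 9.0) < 1e-12
print("9.21/9.22 shock checks passed")
```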

9.23

  g(x) = 1 − x/(1 + |x|) = 1/(1 + x) for x ≥ 0;  (2x − 1)/(x − 1) for x < 0.

Hence, for k = k+ > 0, (9.41) leads to

  s = k + t/(1 + k)  ⇒  k² − (s − 1)k + t − s = 0

and, therefore,

  k = ½ [s − 1 ± √((s − 1)² − 4(t − s))].

Since k = k+ > 0 we have to choose the positive square root (this is readily checked at t = 0). Similarly, for k = k− < 0, (9.41) leads to

  s = k + t (2k − 1)/(k − 1)  ⇒  k² − (1 + s − 2t)k + s − t = 0

and, therefore,

  k = ½ [1 + s − 2t ± √((1 + s − 2t)² − 4(s − t))].

This time we have to choose the negative square root since k = k− < 0. Clearly, when s(t) = t these give k+ = k− = 0 = k* for t ≥ 1.

This time we have to choose the negative square root since k = k − < 0. Clearly, when s(t) = t these give k + = k − = 0 = k ∗ for t ≥ 1. 9.24 The solution (9.45) is of the form (9.32) with σ = [u] and the constant σx0 replaced by σx0 −εuL . It is therefore a solution of Burgers’ equation. Setting t = 0 in (9.45) gives u(x, 0) = gε (x), as required. 9.25 Differentiating the expression q ′ (u) =

x−x0 t

∂x q ′ (u) = q ′′ (u)ux =

1 , t

with respect to x and t gives ∂t q ′ (u) = q ′′ (u)ut = −

x − x0 t2

so that q ′′ (u)ut = −q ′ (u)q ′′ (u)ux , that is, ut + q ′ (u)ux = 0 provided that q ′′ (u) 6= 0. 9.26 The condition for a shock to be sustainable is given in Section 9.3.3 as q ′ (uL ) >

q(uL ) − q(uR ) > q ′ (uR ). uL − uR 81

When q(u) = 2u^{1/2} this requires

  q′(uL) = 1/√uL > q′(uR) = 1/√uR,

which, in turn, requires uR > uL > 0. Hence the Riemann problem for this flux function cannot sustain a shock if uL > uR. To determine the associated expansion fan we need to solve q′(u) = (x − x0)/t for u:

  q′(u) = 1/√u = (x − x0)/t,

so that

  u = uL for x − x0 < t/√uL;  u = t²/(x − x0)² for t/√uL ≤ x − x0 ≤ t/√uR;  u = uR for t/√uR < x − x0.

[Sketch: the fan joins u = uL to u = uR between x = x0 + t/√uL and x = x0 + t/√uR.]

9.27 q(u) = log u, so q′(u) = 1/u = (x − x0)/t gives the expansion fan

  u = uL for x − x0 < t/uL;  u = t/(x − x0) for t/uL ≤ x − x0 ≤ t/uR;  u = uR for t/uR < x − x0.

[Sketch: the fan joins u = uL to u = uR between x = x0 + t/uL and x = x0 + t/uR.]
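A numerical spot-check (an addition, not part of the printed solutions): with uL > uR, the case in which no shock can be sustained and a fan forms, the expansion-fan formulas of 9.26 and 9.27 should match the constant states at the fan edges. The values uL = 4, uR = 1, t = 2, x0 = 0.5 are arbitrary.

```python
# Check continuity of the expansion fans at their edges.
import math

uL, uR, t, x0 = 4.0, 1.0, 2.0, 0.5

# 9.26, q(u) = 2*sqrt(u): fan u = t^2/(x-x0)^2 on t/sqrt(uL) <= x-x0 <= t/sqrt(uR)
fan26 = lambda x: t**2 / (x - x0)**2
assert abs(fan26(x0 + t / math.sqrt(uL)) - uL) < 1e-12
assert abs(fan26(x0 + t / math.sqrt(uR)) - uR) < 1e-12

# 9.27, q(u) = log(u): fan u = t/(x-x0) on t/uL <= x-x0 <= t/uR
fan27 = lambda x: t / (x - x0)
assert abs(fan27(x0 + t / uL) - uL) < 1e-12
assert abs(fan27(x0 + t / uR) - uR) < 1e-12
print("9.26/9.27 fan edge checks passed")
```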

9.28 For 0 < t < 2: an expansion fan forms at x0 = 0 and q′(u) = (x − x0)/t with q(u) = ½u² leads to (with uL = 0 and uR = 1)

  u = 0 for x < 0;  u = x/t for 0 ≤ x ≤ t;  u = 1 for t < x.

A shock is already present at x = 1, t = 0 with uL = 1, uR = 0. The shock speed is s′(t) = ½(uL + uR) = ½ and its location is therefore x = s(t) = 1 + ½t (since s(0) = 1). The rightmost characteristic from the expansion fan meets the shock wave when x = t = 1 + ½t, i.e., t = 2 when x = 2.

9.29 The behaviour of the solution is broadly similar to that in the example—an expansion fan forms at x = 0 and a shock at x = 1, and they will collide at some point in time. The expansion fan with uL = 1 and uR = 2 is given by

  u = 1 for x < t;  u = x/t for t ≤ x ≤ 2t;  u = 2 for 2t < x.

A shock is already present at x = 1, t = 0 with uL = 2, uR = 1. The shock speed is s′(t) = ½(uL + uR) = 3/2 and its location is therefore x = s(t) = 1 + (3/2)t (since s(0) = 1). The rightmost characteristic from the expansion fan meets the shock wave when x = 2t = 1 + (3/2)t, i.e., when t = 2 and x = 4. The solution is illustrated in Fig. 14 for t = 1/3 (left) and t = 2 (right).

Figure 14: The solution of Exercise 9.29 when t = 1/3 (left) and t = 2 (right). The dashed lines show the initial condition.

9.30 (a) The operator associated with the PDE 3utt + 10utx − 8uxx = 0 factorizes:

  3utt + 10utx − 8uxx = (3∂t − 2∂x)(∂t + 4∂x)u.

Hence, with v = (∂t + 4∂x)u we have (3∂t − 2∂x)v = 0, whose characteristic equations are

  dt/3 = dx/(−2) = dv/0  ⇒  dx/dt = −2/3,  dv/dt = 0,

so the characteristics are² 3x + 2t = k1, along which v = a(k1), where a(·) is an arbitrary function. Then u is the solution of (∂t + 4∂x)u = v, whose characteristic equations are

  dt/1 = dx/4 = du/v  ⇒  dx/dt = 4,  du/dt = a(3x + 2t),

so the characteristics are x − 4t = k2, along which v = a(k1) = a(3x + 2t) = a(3k2 + 14t), so

  u = ∫_0^t a(3k2 + 14s) ds + B(k2) = (1/14) ∫^{3k2+14t} a(θ) dθ + B(k2) = A(k1) + B(k2),

where A(k1) = (1/14) ∫^{k1} a(θ) dθ = (1/14) ∫^{3x+2t} a(θ) dθ depends only on 3x + 2t = k1. This gives the general solution

  u(x, t) = A(3x + 2t) + B(x − 4t).

² Written this way to avoid fractions.

(b) With the initial conditions u(x, 0) = g0(x), ut(x, 0) = g1(x), we have

  A(3x) + B(x) = g0(x),  2A′(3x) − 4B′(x) = g1(x)  ⇒  (2/3)A(3x) − 4B(x) = ∫^x g1(s) ds.

Solving these gives

  A(3x) = (3/14) (4g0(x) + ∫^x g1(s) ds)  ⇒  A(z) = (3/14) (4g0(z/3) + ∫^{z/3} g1(s) ds),
  B(x) = (3/14) ((2/3) g0(x) − ∫^x g1(s) ds),

so that

  u(x, t) = A(3x + 2t) + B(x − 4t)
          = (3/14) (4g0(x + (2/3)t) + (2/3)g0(x − 4t)) + (3/14) ∫_{x−4t}^{x+(2/3)t} g1(s) ds.

In particular,

  u(1, 3) = (1/14) (12g0(3) + 2g0(−11)) + (3/14) ∫_{−11}^{3} g1(s) ds.
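A numerical spot-check (an addition, not part of the printed solution): with the arbitrary data g0 = sin, g1 = cos, the formula of part (b) becomes u = (15/14) sin(x + 2t/3) − (1/14) sin(x − 4t), which should satisfy 3utt + 10utx − 8uxx = 0 together with both initial conditions and the u(1, 3) formula.

```python
# Check the d'Alembert-type formula of 9.30(b) with g0 = sin, g1 = cos.
import math

def u(x, t):
    return 15.0 / 14.0 * math.sin(x + 2.0 * t / 3.0) \
         - 1.0 / 14.0 * math.sin(x - 4.0 * t)

h = 1e-3
for (x, t) in [(0.3, 0.2), (1.0, 3.0)]:
    utt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    utx = (u(x + h, t + h) - u(x + h, t - h)
           - u(x - h, t + h) + u(x - h, t - h)) / (4 * h**2)
    assert abs(3 * utt + 10 * utx - 8 * uxx) < 1e-4    # the PDE
assert abs(u(0.3, 0.0) - math.sin(0.3)) < 1e-12        # u(x,0) = g0(x)
ut0 = (u(0.3, h) - u(0.3, -h)) / (2 * h)
assert abs(ut0 - math.cos(0.3)) < 1e-5                 # u_t(x,0) = g1(x)
# the u(1,3) formula: integral of cos from -11 to 3 is sin(3) - sin(-11)
val = (12 * math.sin(3.0) + 2 * math.sin(-11.0)) / 14.0 \
    + 3.0 / 14.0 * (math.sin(3.0) - math.sin(-11.0))
assert abs(u(1.0, 3.0) - val) < 1e-12
print("9.30(b) checks passed")
```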



(c) The PDEs may be written as

  (3∂t + 5∂x)u + 7∂x v = 0,
  7∂x u + (3∂t + 5∂x)v = 0,

so, subtracting 7∂x × the 2nd from (3∂t + 5∂x) × the 1st gives

  (3∂t + 5∂x)[(3∂t + 5∂x)u + 7∂x v] − 7∂x[7∂x u + (3∂t + 5∂x)v] = 0
  (3∂t + 5∂x)² u − 49∂x² u = 0
  3utt + 10utx − 8uxx = 0,

as required. From part (b) we have u(x, 0) = g0(x). To obtain an initial condition for v we observe that 7∂x v = −(3∂t + 5∂x)u, so

  vx(x, 0) = −(1/7)(3g1(x) + 5g0′(x)),

which may be integrated over the interval to give

  v(x, 0) = (1/7)(5g0(0) − 5g0(x) − 3 ∫_0^x g1(s) ds) + v(0, 0),

where v(0, 0) is, in effect, an arbitrary constant whose value has no effect on u.

(d) The coupled PDEs in part (c) can be written

  3 [ut] + [5ux + 7vx] = [0]  ⇒  ut + A ux = 0,  A = (1/3) [5 7],  u = [u].
    [vt]   [7ux + 5vx]   [0]                               [7 5]       [v]

Substituting u = c φ(x − λt), we find

  ut + A ux = (−λc + Ac) φ′(x − λt),

and the right-hand side will vanish for any differentiable function φ if λ is an eigenvalue of A with corresponding eigenvector c. A short calculation reveals that A has eigenvalues λ = −2/3 and λ = 4 with corresponding eigenvectors c1 = [1, −1]^T and c2 = [1, 1]^T.

(e) The PDE is linear so a linear combination of solutions will also provide a solution. Thus

  u(x, t) = c1 φ(x + (2/3)t) + c2 ψ(x − 4t) = [1, −1]^T φ(x + (2/3)t) + [1, 1]^T ψ(x − 4t),

where φ and ψ are arbitrary scalar functions, and u(x, t) = φ(x + (2/3)t) + ψ(x − 4t) will coincide with the general solution in part (a) by choosing φ(z) = A(3z) and ψ(z) = B(z).

(f) The calculation in part (c) reveals that³ L²u = M²u, where L := 3∂t + 5∂x and M := −7∂x. If v satisfies Mv = Lu then, because the differential operators have constant coefficients, they commute: LM = ML, and

  L²u = LMv = M²u  ⇒  M(Mu − Lv) = 0,

and the equations are satisfied by Mu = Lv. Now

  Lu − Mv = 3ut + 5ux + 7vx = 0,
  Mu − Lv = −7ux − 3vt − 5vx = −(3vt + 7ux + 5vx) = 0,

leading to the coupled system in part (c). Just as there are families of 2 × 2 matrices each having the same (quadratic) characteristic polynomial, there are families of coupled systems all leading to the same second-order PDE. Choosing M = +7∂x gives one alternative member of the family.

(g) With the eigenvectors from part (d), substituting u = V v gives

  ut + A ux = V vt + AV vx = 0  ⇒  vt + V⁻¹AV vx = 0.

Since V = [c1, c2] diagonalizes A, V⁻¹AV = diag(λ1, λ2), so

  wt + λ1 wx = 0  and  zt + λ2 zx = 0

if v = [w, z]^T. With λ1 = −2/3 and λ2 = 4 these have general solutions

  w(x, t) = F(x + (2/3)t),  z(x, t) = G(x − 4t),

for arbitrary functions F and G. Thus

  u = V v = [1  1] [w] = [w + z],
            [−1 1] [z]   [z − w]

giving u(x, t) = F(x + (2/3)t) + G(x − 4t), in agreement with part (a) with F(s) = A(3s) and G(s) = B(s). Agreement with part (e) follows.

³ Note that (L² − M²)u = 3(3utt + 10utx − 8uxx), but the factor 3 is immaterial since we are dealing with a homogeneous PDE—otherwise L and M would need to be multiplied by 1/√3.

Exercises 10

Finite difference methods for elliptic PDEs

10.1 We use the replacements

  uxx = hx⁻² δx²u − (1/12) hx² ∂x⁴u + O(hx⁴),  uyy = hy⁻² δy²u − (1/12) hy² ∂y⁴u + O(hy⁴),

and so

  ∇²u = uxx + uyy = hx⁻² δx²u + hy⁻² δy²u − (1/12)(hx² ∂x⁴u + hy² ∂y⁴u) + O(hx⁴) + O(hy⁴).

Hence the local truncation error for Poisson's equation −∇²u = f is

  Rℓ,m = −(hx⁻² δx²u + hy⁻² δy²u) − f
       = −∇²u − (1/12)(hx² ∂x⁴u + hy² ∂y⁴u) + O(hx⁴) + O(hy⁴) − f
       = −(1/12)(hx² ∂x⁴u + hy² ∂y⁴u) + O(hx⁴) + O(hy⁴).

Our finite difference replacement is

  −(hx⁻² δx² + hy⁻² δy²) Uℓ,m = fℓ,m

and, because δx²Uℓ,m = Uℓ−1,m − 2Uℓ,m + Uℓ+1,m and δy²Uℓ,m = Uℓ,m−1 − 2Uℓ,m + Uℓ,m+1, this becomes, on multiplying by hx hy and setting θ = hy/hx,

  2(θ + θ⁻¹)Uℓ,m − θ(Uℓ−1,m + Uℓ+1,m) − θ⁻¹(Uℓ,m−1 + Uℓ,m+1) = hx hy fℓ,m.

If we use this scheme to solve Poisson's equation on the rectangle {0 < x < Lx, 0 < y < Ly} with hx = Lx/Mx and hy = Ly/My (where Mx and My are specified integers indicating the number of grid points in each direction), the totality of equations can be written in matrix–vector form by defining the My × My tridiagonal matrix

  D = tridiag(−θ⁻¹, 2(θ + θ⁻¹), −θ⁻¹)

and the Mx × Mx block tridiagonal matrix

  A = tridiag(−θI, D, −θI);

then, in the natural column ordering of points, Au = f. This reduces to the standard 5-point formula (10.6) when θ = 1.

10.2 We say that a node (xℓ, ym) is even if ℓ + m is even. The numbering on a grid with h = 1/M and M = 4 is shown below. The numbers outside the box refer to the boundary nodes.

There are Neven = 5 even nodes (numbered 1–5) and Nodd = 4 odd nodes (numbered 6–9).

[Figure: the 3 × 3 interior grid with the even nodes numbered 1–5 and the odd nodes numbered 6–9; the 12 boundary nodes (corner values are not used) are numbered 1–12 around the boundary.]

In terms of this ordering, we define

    u = [U1,1, U1,3, U2,2, U3,1, U3,3, U1,2, U2,1, U2,3, U3,2]ᵀ,
    ueven = [U1, U2, U3, U4, U5]ᵀ,  uodd = [U6, U7, U8, U9]ᵀ,  u = [ueven; uodd].

The boundary nodes are labelled 1–12 as shown (we do not use the corner values) and we denote the value of U at boundary node number j by bj. The equations can be written in this new numbering system as (we give only 3 examples)

    Node 1:  4U1 − U6 − U7 = b1 + b12
    Node 3:  4U3 − U6 − U7 − U8 − U9 = 0
    Node 9:  4U9 − U3 − U4 − U5 = b8.

The key observation is that when Uℓ,m is an even node (it is a component of ueven) then the other terms Uℓ−1,m, Uℓ+1,m, Uℓ,m−1, Uℓ,m+1 in the 5-point replacement of the Laplacian all lie at odd nodes. Hence, writing down all the equations at even nodes,

    4Uℓ,m − Uℓ−1,m − Uℓ+1,m − Uℓ,m−1 − Uℓ,m+1 = h²fℓ,m,

gives rise to the system

    4ueven + Buodd = feven,   B = [ −1 −1  0  0
                                    −1  0 −1  0
                                    −1 −1 −1 −1
                                     0 −1  0 −1
                                     0  0 −1 −1 ],

where feven contains not only the values of h²fℓ,m at even nodes but also the contributions from the boundary conditions. The matrix B has dimension Neven × Nodd and its entries are either −1 or 0. Similarly, writing down all the equations at odd nodes gives rise to the system

    4uodd + Cueven = fodd,   C = [ −1 −1 −1  0  0
                                    −1  0 −1 −1  0
                                     0 −1 −1  0 −1
                                     0  0 −1 −1 −1 ] = Bᵀ,

and the matrix C is Nodd × Neven, whose entries are also either −1 or 0. Putting these two together gives

    [ 4Ie  B ; Bᵀ  4Io ] [ ueven ; uodd ] = [ feven ; fodd ],

where the coefficient matrix is clearly symmetric since C = Bᵀ.
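This red–black block structure can be verified numerically; a minimal hedged sketch (the assembly routine and its name are illustrative, not from the text):

```python
import numpy as np

def five_point_redblack(M):
    """Assemble the 5-point matrix (scaled by h^2) on the (M-1)x(M-1)
    interior grid, listing even (red) nodes before odd (black) ones."""
    even = [(l, m) for l in range(1, M) for m in range(1, M) if (l + m) % 2 == 0]
    odd = [(l, m) for l in range(1, M) for m in range(1, M) if (l + m) % 2 == 1]
    idx = {p: k for k, p in enumerate(even + odd)}
    A = np.zeros((len(idx), len(idx)))
    for (l, m), k in idx.items():
        A[k, k] = 4.0
        for dl, dm in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (l + dl, m + dm) in idx:
                A[k, idx[(l + dl, m + dm)]] = -1.0
    return A, len(even)

A, ne = five_point_redblack(4)
assert np.allclose(A[:ne, :ne], 4 * np.eye(ne))    # even-even block is 4*I
assert np.allclose(A[:ne, ne:], A[ne:, :ne].T)     # C = B^T, so A is symmetric
```

For M = 4 this gives ne = 5 even nodes and 4 odd ones, matching the counts above.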

10.3 We use the standard finite difference operators

    δx²Uℓ,m = Uℓ+1,m − 2Uℓ,m + Uℓ−1,m,   δy²Uℓ,m = Uℓ,m+1 − 2Uℓ,m + Uℓ,m−1

so

    δx²δy²Uℓ,m = (δy²U)ℓ+1,m − 2(δy²U)ℓ,m + (δy²U)ℓ−1,m
               = (Uℓ+1,m+1 − 2Uℓ+1,m + Uℓ+1,m−1) − 2(Uℓ,m+1 − 2Uℓ,m + Uℓ,m−1) + (Uℓ−1,m+1 − 2Uℓ−1,m + Uℓ−1,m−1).

The stencil of this result may be visualized as the "outer" product of a column and a row vector:

    [ 1 ; −2 ; 1 ] [1, −2, 1] = [  1 −2  1
                                  −2  4 −2
                                   1 −2  1 ].

Then

    2h²L+h − δx²δy²:  [ 0 −2  0        [  1 −2  1        [ −1  0 −1
                       −2  8 −2    −     −2  4 −2    =      0  4  0    = 2h²L×h.
                        0 −2  0 ]         1 −2  1 ]        −1  0 −1 ]

We know from equation (10.13) that

    L+h uℓ,m = −∇²uℓ,m + R+ℓ,m,  where  R+ℓ,m = −(1/12)h²(∂x⁴u + ∂y⁴u) + O(h⁴),

so it then follows from L×h Uℓ,m = L+h Uℓ,m − (1/2)h⁻²δx²δy²Uℓ,m that

    L×h uℓ,m = L+h uℓ,m − (1/2)h⁻²δx²δy²uℓ,m = L+h uℓ,m − (1/2)h²(h⁻⁴δx²δy²uℓ,m)
             = −∇²u − (1/12)h²(∂x⁴u + ∂y⁴u) − (1/2)h²(∂x²∂y²u + O(h²)) + O(h⁴)
             = −∇²u − (1/12)h²(∂x⁴u + 6∂x²∂y²u + ∂y⁴u) + O(h⁴).

by choosing λ = 13 . Since −∇2 u = f we have −∇4 u = ∇2 f which gives Lh uℓ,m = f +

1 2 2 12 h ∇ f

+ O(h4 )

and therefore the local truncation error of the scheme Lh Uℓ,m = fℓ,m + 88

1 2 2 12 h ∇ fℓ,m

will be O(h4 ). Since it can be inconvenient to calculate the derivatives in ∇2 fℓ,m , we may use the 2 the further approximation ∇2 fℓ,m = −L+ h f + O(h ) without impairing the order of consistency. This leads to the scheme Lh Uℓ,m = fℓ,m + =

1 12

1 12

(fℓ−1,m + fℓ−1,m + fℓ,m−1 + fℓ,m+1 − 4fℓ,m )

(fℓ−1,m + fℓ−1,m + fℓ,m−1 + fℓ,m+1 + 8fℓ,m)

2 + 1 where Lh Uℓ,m := 31 L× h Uℓ,m + 3 Lh Uℓ,m (since λ = 3 ), that is,

Lh Uℓ,m :=

1 (20Uℓ,m − 4Uℓ−1,m − 4Uℓ−1,m − 4Uℓ,m−1 − 4Uℓ,m+1 6h2 − Uℓ−1,m+1 − Uℓ+1,m+1 − Uℓ−1,m−1 − Uℓ+1,m+1 )

In stencil format we see that we have a  −1 −4 1  −4 20 6h2 −1 −4

9–point formula:   −1 1 1 1 8 −4 Uℓ,m = 12 −1 1

1 fℓ,m .

Note: Suppose that both U and f are functions of x only: Uℓ,m ≡ Uℓ for every m. The method becomes (sum the columns in the 3 × 3 stencil) 1 1 1 [−1, 2, −1]Uℓ = [1, 10, 1]fℓ ⇒ −h−2 δx2 Uℓ = fℓ + δx2 fℓ 2 h 12 12 which is Numerov’s method for solving the ode −u′′ = f (x) (see Section 6.4.2).
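The two 9-point stencils can be sanity-checked numerically: for the quadratic u = x² + y² and the matching data f = −∇²u = −4 the compact scheme is exact, so its residual vanishes. A hedged sketch (the helper names are illustrative):

```python
import numpy as np

def nine_point_residual(U, F, h):
    """Residual of the compact 9-point scheme at the interior points of
    the 2-D grid functions U and F."""
    L = np.array([[-1, -4, -1], [-4, 20, -4], [-1, -4, -1]]) / (6 * h**2)
    R = np.array([[0, 1, 0], [1, 8, 1], [0, 1, 0]]) / 12

    def conv(K, V):  # apply a 3x3 stencil K to the interior of V
        n, p = V.shape
        return sum(K[1 + i, 1 + j] * V[1 + i:n - 1 + i, 1 + j:p - 1 + j]
                   for i in (-1, 0, 1) for j in (-1, 0, 1))

    return conv(L, U) - conv(R, F)

x = np.linspace(0.0, 1.0, 9)
X, Y = np.meshgrid(x, x, indexing="ij")
U, F = X**2 + Y**2, -4.0 * np.ones_like(X)
assert np.allclose(nine_point_residual(U, F, x[1] - x[0]), 0.0)
```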

10.5 In standard column-wise order, the equations L×h U = F form a 9 × 9 system Au = b in which each row has 4 on the diagonal and −1 in the columns corresponding to the (at most four) diagonal neighbours of that node, and the right-hand side collects the boundary contributions:

    b = [b0 + b2 + b14, b1 + b3, b2 + b4 + b6, b13 + b15, 0, b5 + b7, b10 + b12 + b14, b9 + b11, b6 + b8 + b10]ᵀ,

where bj denotes the boundary value at the jth boundary node and these have been numbered clockwise from 0–15 starting at the origin. For general M, the coefficient matrix has the block tridiagonal structure

    A = [ 4I   T                     [  0 −1
          T   4I   T            T =    −1  0 −1
               ⋱    ⋱   ⋱                  ⋱  ⋱  ⋱
                    T   4I ],             −1   0 ].

The diagonal blocks of A are a multiple of the identity I and the nonzero off-diagonal blocks T are themselves tridiagonal.

Switching to an odd–even numbering system where a node (xℓ, ym) is even if ℓ + m is even: the numbering on a grid with h = 1/M and M = 4 is shown below. The numbers outside the box refer to the boundary nodes where, unlike Exercise 10.2, the corner nodes are now involved; they are numbered 0–15. There are Neven = 5 even nodes (numbered 1–5) and Nodd = 4 odd nodes (numbered 6–9). In terms of this ordering, we define

    u = [U1,1, U1,3, U2,2, U3,1, U3,3, U1,2, U2,1, U2,3, U3,2]ᵀ,
    ueven = [U1, …, U5]ᵀ,  uodd = [U6, …, U9]ᵀ,  u = [ueven; uodd].

[Figure: the interior grid with even nodes 1–5 and odd nodes 6–9; boundary nodes 0–15, corners included.]

With this numbering system the equations L×h U = F decouple, because the diagonally rotated 5-point stencil connects an even node only to even nodes and an odd node only to odd nodes. The even nodes satisfy the 5 × 5 system

    [  4  0 −1  0  0              [ b0 + b2 + b14
       0  4 −1  0  0                b2 + b4 + b6
      −1 −1  4 −1 −1   ueven  =     0
       0  0 −1  4  0                b10 + b12 + b14
       0  0 −1  0  4 ]              b6 + b8 + b10 ]

and the odd nodes the 4 × 4 system

    [  4 −1 −1  0              [ b1 + b3
      −1  4  0 −1   uodd  =      b13 + b15
      −1  0  4 −1                b5 + b7
       0 −1 −1  4 ]              b9 + b11 ].

The general pattern is difficult to detect with such a small value of M but, for even nodes (say), the structure is

    [ 4I1   B
      Bᵀ   4I2   B
            Bᵀ   4I1   B
                  ⋱     ⋱    ⋱ ],

where I1 and I2 are identity matrices whose dimensions are the number of even nodes in odd and even numbered columns, respectively. The matrix B is square when there are the same number of nodes in consecutive columns, otherwise it is rectangular. The structure is similar for odd nodes with the roles of the pairs I1 , B and I2 , B T being interchanged. One reason for studying different orderings of unknowns is the profound effect these can have on the time taken to solve the finite difference equations. 10.6 If Lh U ≤ 0 it follows that Lh (−U ) ≥ 0 and, from Theorem 10.2, that either −U is constant on Ωh —in which case U is constant—or else −U achieves its minimum value on ∂Ω h . If −U achieves a minimum it means that U must achieve a maximum and the required result follows. To establish inverse monotonicity we have to show that Lh U ≥ 0 implies that U ≥ 0. If U is constant then, because Lh Uℓ,m = Uℓ,m ≥ 0 for nodes (xℓ , ym ) on the boundary, this constant must be non-negative. Otherwise, U achieves its minimum value on the boundary which is non-negative for the same reason. 10.7 If Q(0) = Q(h) + Ch2 + O(h3 ) for some constant C, then Q(0) = Q(h/2) + C(h/2)2 + O(h3 ). 90

Subtracting these gives Ch² = 4(Q(h/2) − Q(h))/3 + O(h³) and so

    Q(h) = Q(0) + (4/3)(Q(h) − Q(h/2)) + O(h³).

Thus 4(Q(h) − Q(h/2))/3 ≡ 4(U_P^h − U_P^(h/2))/3 gives an estimate of the leading term in the error in Q(h) ≡ U_P^h.

10.8 We construct a table analogous to Table 10.2 with Q(h) := U_P^h.

      M     Q(h)      Q(h/2) − Q(h)   EOC     Q(0)      Q(0) − Q(h)
      8     0.31413   —               —       —         0.0295
      16    0.33203   0.01790        —       —         0.0116
      32    0.33943   0.00740        1.27    0.34465   0.00424
      64    0.34206   0.00263        1.49    0.34351   0.00161
      128   0.34305   0.00099        1.41    0.34365   0.000617
      256   0.34343   0.00038        1.38    0.34367   0.000237
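The derived columns above can be reproduced from the raw values Q(h) alone; a hedged sketch of the computation (the data are the Q(h) column):

```python
import numpy as np

# Q(h) = U_P^h for M = 8, 16, 32, 64, 128, 256, as tabulated above
Q = np.array([0.31413, 0.33203, 0.33943, 0.34206, 0.34305, 0.34343])
diff = Q[1:] - Q[:-1]                  # the Q(h/2) - Q(h) column
eoc = np.log2(diff[:-1] / diff[1:])    # experimental order of convergence
# extrapolated estimates of Q(0) using the observed order:
Q0 = Q[2:] + diff[1:] / (2.0**eoc - 1.0)
```

This reproduces EOC ≈ 1.27, 1.49, 1.41, 1.38 and the estimates Q(0) ≈ 0.34465, …, 0.34367.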

The experimental order of convergence (EOC) is about 1.4 and the estimate Q(0) of uP with M = 256 is 0.34367, suggesting that the numerical solution is in error by approximately 0.000237.

10.9 Suppose that the Neumann BC −ux = g(0, y) is approximated at y = ym by the second-order central difference formula

    −△x U0,m = g(0, mh)

⇒ −U1,m + U−1,m = 2hg(0, mh)

while (10.6) with ℓ = 0 gives

    4U0,m − U0,m+1 − U1,m − U0,m−1 − U−1,m = h²f0,m

and adding these two equations gives

    4U0,m − U0,m+1 − 2U1,m − U0,m−1 = 2hg0,m + h²f0,m,

which is (10.22b), as required.

10.10 Using (10.22a), the local truncation error corresponding to the discrete version of the Neumann BC −ux(0, y) = g(0, y) at y = mh is given by

    R^h_0,m = −h⁻¹△+x u0,m − (1/2)h⁻¹δy²u0,m − g0,m − (1/2)hf0,m.

The Taylor expansion

    u(x + h, y) = u(x, y) + hux(x, y) + (1/2)h²uxx(x, y) + (1/6)h³uxxx(ξ, y),  x < ξ < x + h,

with x = 0 and y = mh gives

    h⁻¹△+x u0,m = ux(0, mh) + (1/2)huxx(0, mh) + (1/6)h²uxxx(ξ, mh)

and, using Exercise 6.1, we deduce that

    uyy(0, ym) = h⁻²δy²u0,m − (1/12)h²uyyyy(0, mh + η),  −h < η < h.

Putting these together we find

    R^h_0,m = −[ux(0, mh) + (1/2)huxx(0, mh) + (1/6)h²uxxx(ξ, mh)]
              − (1/2)h[uyy(0, mh) + (1/12)h²uyyyy(0, mh + η)] − g0,m − (1/2)hf0,m
            = [−ux(0, mh) − g0,m] + (1/2)h[−∇²u(0, mh) − f0,m]
              − (1/6)h²uxxx(ξ, mh) − (1/24)h³uyyyy(0, mh + η),

from which the result follows since −ux = g and −∇²u = f.

10.11 The local truncation error associated with (10.24) is

    R^h_0,M = −h⁻¹△+x u0,M + h⁻¹△−y u0,M − (1/2)hf0,M − g(1+, 0) − g(0−, 1).

We have, from the previous exercise,

    h⁻¹△+x u0,M = ux(0, 1) + (1/2)huxx(0, 1) + (1/6)h²uxxx(ξ, 1)

and, by making the change of variables x ↦ 1 − y, y ↦ 1 − x, h ↦ −h (so that h⁻¹△+ ↦ −h⁻¹△−),

    h⁻¹△−y u0,M = uy(0, 1) − (1/2)huyy(0, 1) + (1/6)h²uyyy(0, 1 − η),  0 < η < h.

Hence,

    R^h_0,M = −[ux(0, 1) + (1/2)huxx(0, 1) + (1/6)h²uxxx(ξ, 1)]
              + [uy(0, 1) − (1/2)huyy(0, 1) + (1/6)h²uyyy(0, 1 − η)] − (1/2)hf0,M − g(1+, 0) − g(0−, 1)
            = [−ux(0, 1) − g(1+, 0)] + [uy(0, 1) − g(0−, 1)] + (1/2)h[−∇²u(0, 1) − f0,M]
              − (1/6)h²[uxxx(ξ, 1) − uyyy(0, 1 − η)]

and therefore R^h_0,M = O(h²) if the third partial derivatives of u are bounded in the neighbourhood of the corner (0, 1).

10.12 If the domain in Fig. 10.6 (left) occupies −1 < x < 1, 0 < y < 1 with a grid size h = 1/M, then it contains N = (2M − 1)(M − 1) grid points in its interior. The subdomain x ≥ 0 contains N+ = M(M − 1) and N+ ≈ N/2 when M is large. The time taken to produce an LR factorization of the coefficient matrix is proportional to the product of the number of grid points and the square of the bandwidth. If the unknowns are ordered by columns the bandwidths in the two situations are the same (M) so the time taken is proportional to the dimension. Thus, taking account of symmetry is expected to halve the computation time. Note that, were the unknowns in the rectangular domain to be ordered by rows rather than columns, the bandwidth would increase to about 2M and the cost of factorization to 8M⁴. For the full square domain −1 ≤ x, y ≤ 1, the number of nodes is about 4M² and the bandwidth 2M, so the factorization cost is proportional to 16M⁴. These results are summarized in the table shown below, the leftmost column indicating the shape of the domain and the ordering of the unknowns within it.

    Domain (ordering)                # nodes   bandwidth   cost
    full square                      4M²       2M          16M⁴
    half: rectangle (by columns)     2M²       M           2M⁴
    half: rectangle (by rows)        2M²       2M          8M⁴
    quarter (with symmetry)          M²        M           M⁴

10.13 The factors are:

Geometry: The domain 0 < x, y < 1 is unaffected by the interchange x ↔ y.
Boundary values: u(x, y) = x² + y² is unaffected by the interchange x ↔ y.
PDE: Neither the source term f(x, y) = xy nor the differential operator −∇² ≡ −(∂x² + ∂y²) is affected by the interchange x ↔ y.

Thus u(x, y) = u(y, x). Since the finite difference grid is also unaffected by the interchange x ↔ y, the numerical solution U must also share the same symmetry, that is, Uℓ,m = Um,ℓ. It follows that Uℓ,ℓ+1 = Uℓ+1,ℓ and Uℓ−1,ℓ = Uℓ,ℓ−1, so the 5-point formula (10.6) applied at the grid point (ℓh, ℓh) gives

    Lh Uℓ,ℓ = h⁻²(4Uℓ,ℓ − Uℓ+1,ℓ − Uℓ,ℓ+1 − Uℓ−1,ℓ − Uℓ,ℓ−1)
            = h⁻²(4Uℓ,ℓ − 2Uℓ+1,ℓ − 2Uℓ,ℓ−1)

and fℓ,ℓ = xℓyℓ = ℓ²h². When M = 4, symmetry reduces the number of unknowns to N = (1/2)M(M − 1) = 6 and these, together with the boundary nodes, are numbered as shown.

[Figure: grid for M = 4 — the six unknowns U1–U6 lie on and below the diagonal, together with the numbered boundary nodes.]

In terms of this ordering,

    u = [U1,1, U2,1, U2,2, U3,1, U3,2, U3,3]ᵀ,
    f = (1/16)[1, 2, 4, 3, 6, 9]ᵀ,
    g = [2g1, g2, 0, g3 + g5, g6, 2g7]ᵀ = (1/16)[2, 4, 0, 26, 20, 50]ᵀ,

where gj is the value of U at the jth boundary node. The numerical solution is then found by solving Au = f + g, where

    A = 16 [  4 −2  0  0  0  0
             −1  4 −1 −1  0  0
              0 −2  4  0 −2  0
              0 −1  0  4 −1  0
              0  0 −1 −1  4 −1
              0  0  0  0 −2  4 ] = 16D S,

where the diagonal matrix D is equal to the identity matrix except for entries corresponding to nodes on the diagonal of the grid (nodes 1, 3 and 6), where Dℓ,ℓ = 2, and S is symmetric. The purpose of introducing D is that A is then the product of a diagonal matrix and a symmetric matrix. When M is large the cost of solving (D⁻¹A)u = D⁻¹(g + f) is approximately half the cost of solving the original system.
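The symmetry Uℓ,m = Um,ℓ can also be confirmed by solving the full (unreduced) 5-point system; a dense, illustrative sketch (not the reduced solver of the text):

```python
import numpy as np

def solve_poisson_5pt(M, f, g):
    """Dense 5-point solve of -lap(u) = f on the unit square with u = g on
    the boundary; returns the interior values as an (M-1)x(M-1) array."""
    h, n = 1.0 / M, M - 1
    A = np.zeros((n * n, n * n))
    rhs = np.zeros(n * n)
    k = lambda l, m: (l - 1) * n + (m - 1)   # row-major index of node (l, m)
    for l in range(1, M):
        for m in range(1, M):
            A[k(l, m), k(l, m)] = 4.0
            rhs[k(l, m)] = h * h * f(l * h, m * h)
            for dl, dm in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 1 <= l + dl < M and 1 <= m + dm < M:
                    A[k(l, m), k(l + dl, m + dm)] = -1.0
                else:  # neighbour on the boundary: move g to the right-hand side
                    rhs[k(l, m)] += g((l + dl) * h, (m + dm) * h)
    return np.linalg.solve(A, rhs).reshape(n, n)

U = solve_poisson_5pt(8, lambda x, y: x * y, lambda x, y: x**2 + y**2)
assert np.allclose(U, U.T)   # U_{l,m} = U_{m,l}, as claimed
```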

10.14 Symmetry about x = 0 implies that u(−x, y) = u(x, y). The change of variables s = −x, t = y, v(s, t) = u(−s, t) should leave the problem unchanged. It is necessary to check: (a) The domain: −1 ≤ x, y ≤ 1 becomes −1 ≤ s, t ≤ 1. (b) The boundary values: u(±1, y) = 0 becomes v(±1, t) = 0 and u(x, ±1) = 0 for x ∈ (−1, 1) becomes v(s, ±1) = 0 for s ∈ (−1, 1). (c) The PDE: ∂x = −∂s so ∂x2 = (−∂s )2 = ∂s2 , ∂y = ∂t and −uxx − uyy = 1 becomes −vss − vtt = 1.

Thus the boundary value problem in v is identical to that in u so that v(s, t) = u(s, t). But v was defined by v(s, t) = u(−s, t) so we conclude that u(s, t) = u(−s, t) and so u is symmetric about s = 0. Symmetry about y = 0 implies that u(x, −y) = u(x, y) and the change of variables is s = x, t = −y, v(s, t) = u(s, −t). Symmetry about x = y implies that u(y, x) = u(x, y) and the change of variables is s = y, t = x, v(s, t) = u(t, s). Symmetry about x = −y implies that u(−y, x) = u(x, y) and the change of variables is s = −y, t = x, v(s, t) = u(−t, s). Following the steps (a)-(c) above reveals the boundary value problems to be unchanged in each case. With h = 1/3 the domain contains 25 grid points in its interior but the 6 unknown grid values in the heavily shaded triangle 0 ≤ x < 1, 0 ≤ y ≤ x can be solved for independently and then the remaining grid values may be deduced by symmetry as shown in the figure.

[Figure: the 25 interior grid values for h = 1/3, labelled by their six symmetry classes — 1 at the centre, 2–5 in between, 6 at the corners — so that only U1, …, U6 need be computed.]

The six independent unknowns satisfy

    [  4 −4  0  0  0  0   [ U1            [ 1
      −1  4 −2 −1  0  0     U2              1
       0 −2  4  0 −2  0     U3    = (1/9)   1
       0 −1  0  4 −2  0     U4              1
       0  0 −1 −1  4 −1     U5              1
       0  0  0  0 −2  4 ]   U6 ]            1 ].

The coefficient matrix becomes symmetric when both sides of this system are multiplied by the diagonal matrix D = diag(1/8, 1/2, 1/2, 1/2, 1, 1/2).

10.15

[Figure: with M = 5 the 4 × 4 interior grid values fall into three symmetry classes, labelled 1 (the four centre points), 2 (edge-adjacent points) and 3 (corner points).]

With M = 5 (h = 2/5) the implications of symmetry are shown in the figure and reveal that there are only 3 independent unknowns. Applying the 5-point approximation of the Poisson equation at nodes 1, 2 & 3 leads to the system:

    (25/4) [  2 −2  0   [ U1     [ 1
             −1  3 −1     U2   =   1
              0 −2  4 ]   U3 ]     1 ].

For general odd values of M, there are (M − 1)² unknowns (1 per grid point) of which only about one eighth are independent. We therefore have to solve for about (1/8)(M − 1)² unknowns.

10.16 With the Robin BC −ux + σu = g(y) along x = 0, the analogue of (10.22a) becomes

    −h⁻¹△+x U0,m + σU0,m − (1/2)h⁻¹δy²U0,m = g0,m + (1/2)hf0,m,  (m = 1, 2, …, M − 1),

which can also be written in the explicit form

    (4 + 2σh)U0,m − 2U1,m − U0,m−1 − U0,m+1 = 2hg0,m + h²f0,m.

10.17

[Figure: triangular region ABC (A at the origin, B on the y-axis, C on the x-axis) with internal grid points 1–6 and active boundary nodes: 7–12 adjacent to the hypotenuse BC, 13–15 on AC and 16–18 on AB.]

The fractional spacings at the irregular points are

    Node   h+     k+
    3      1/32   1/36
    5      2/32   2/36
    6      3/32   3/36

The 6 internal grid points and the 12 active boundary nodes are numbered as shown. The standard 5-point finite difference equations

    −h⁻²(δx² + δy²)Uℓ,m + Uℓ,m = 0

can only be deployed at points 1, 2 and 4 where there is a uniform grid in both directions. These give:

    P1:  65U1 − 16U2 − 16U4 = 16(U15 + U16) = 16[4x(9 − 8x)]_{x=1/4} = 112,
    P2:  −16U1 + 65U2 − 16U3 − 16U5 = 16U17 = 0,
    P4:  −16U1 + 65U4 − 16U5 − 16U6 = 16U14 = 16[4x(9 − 8x)]_{x=1/2} = 160.

We focus first on the approximation of uxx at the points 3, 5 and 6, where we use equation (10.26):

    P3:  h+ = 1/32, h− = h = 1/4:  (uxx)3 ≈ (64/9)[32(u8 − u3) − 4(u3 − u18)],
    P5:  h+ = 2/32, h− = h = 1/4:  (uxx)5 ≈ (64/10)[16(u10 − u5) − 4(u5 − u2)],
    P6:  h+ = 3/32, h− = h = 1/4:  (uxx)6 ≈ (64/11)[(32/3)(u12 − u6) − 4(u6 − u4)].

The uyy derivatives are treated similarly:

    P3:  k+ = 1/36, k− = k = 1/4:  (uyy)3 ≈ (72/10)[36(u7 − u3) − 4(u3 − u2)],
    P5:  k+ = 2/36, k− = k = 1/4:  (uyy)5 ≈ (72/11)[18(u9 − u5) − 4(u5 − u4)],
    P6:  k+ = 3/36, k− = k = 1/4:  (uyy)6 ≈ (72/12)[12(u11 − u6) − 4(u6 − u13)].

Putting these together we find the following finite difference expressions for the approximations to −∇²u + u = 0 at the 3 grid points closest to the hypotenuse:

    P3:  −(64/9)[32(U8 − U3) − 4(U3 − U18)] − (72/10)[36(U7 − U3) − 4(U3 − U2)] + U3 = 0,
    P5:  −(64/10)[16(U10 − U5) − 4(U5 − U2)] − (72/11)[18(U9 − U5) − 4(U5 − U4)] + U5 = 0,
    P6:  −(64/11)[(32/3)(U12 − U6) − 4(U6 − U4)] − (72/12)[12(U11 − U6) − 4(U6 − U13)] + U6 = 0.

Since the boundary values on AB and on the hypotenuse all vanish, these simplify to

    545U3 − (144/5)U2 = 0,
    273U5 − (128/5)U2 − (288/11)U4 = 0,
    (547/3)U6 − (256/11)U4 = 24U13 = 216,

where U13 = [4x(9 − 8x)]_{x=3/4} = 9. These provide 6 equations in the 6 unknowns.

10.18 The forward difference operator △+y (see Table 6.1) gives

    △+y uℓ,0 = uℓ,1 − uℓ,0 = huy|ℓ,0 + (1/2)h²uyy|ℓ,0 + O(h³)

and rearranging this we get

    −uy(ℓh, 0) = −h⁻¹△+y uℓ,0 + (1/2)huyy|ℓ,0 + O(h²).

In order to remove the first-order term in h, we use the PDE uyy = −uxx so that, at the point (ℓh, 0),

    (1/2)huyy|ℓ,0 = −(1/2)huxx|ℓ,0 = −(1/2)h⁻¹δx²uℓ,0 + O(h³).

Combining these results gives

    −uy(ℓh, 0) = −h⁻¹△+y uℓ,0 − (1/2)h⁻¹δx²uℓ,0 + O(h²)

so that the Neumann BC −uy(ℓh, 0) = 0 is approximated by the numerical boundary condition

    −h⁻¹△+y Uℓ,0 − (1/2)h⁻¹δx²Uℓ,0 = 0                       (3)

(ℓ = 1, 2, …, M − 1). It can also be written in the explicit form

    4Uℓ,0 − 2Uℓ,1 − Uℓ−1,0 − Uℓ+1,0 = 0

which, on division by 2, gives the formula (10.29). The alternative derivation described in Exercise 10.9 would combine the second-order central approximation of −uy(ℓh, 0) = 0, that is Uℓ,1 − Uℓ,−1 = 0, with the 5-point approximation of the Laplacian (10.6) at (ℓh, 0):

    4Uℓ,0 − Uℓ+1,0 − Uℓ,1 − Uℓ−1,0 − Uℓ,−1 = 0

to give the same result. The local truncation error in boundary condition (3) is given by

    R^h_ℓ,0 = −h⁻¹△+y uℓ,0 − (1/2)h⁻¹δx²uℓ,0
            = −h⁻¹[huy + (1/2)h²uyy + O(h³)]ℓ,0 − (1/2)h⁻¹[h²uxx + O(h⁴)]ℓ,0
            = −[uy + (1/2)h(uxx + uyy)]ℓ,0 + O(h²)

and it is therefore consistent of second order since uy = 0 and uxx + uyy = 0.

10.19 In view of the identity

    2(Uℓ,1 − Uℓ,0) = (Uℓ,1 − 2Uℓ,0 + Uℓ,−1) + (Uℓ,1 − Uℓ,−1) = δθ²Uℓ,0 + 2△θUℓ,0,

equation (10.33b) may be written

    Bh Uℓ,0 := −∆θ⁻¹△θUℓ,0 − (1/2)rℓ²∆θ Lh Uℓ,0,

where Lh (defined by (10.33)) is an approximation of Lu = (1/r)(rur)r + uθθ/r². The local truncation error of the BC Bh Uℓ,0 = 0 is

    R^h_ℓ,0 := Bh uℓ,0 = −∆θ⁻¹△θuℓ,0 − (1/2)rℓ²∆θ Lh uℓ,0
             = −[uθ + O(∆θ²)]ℓ,0 − (1/2)rℓ²∆θ[Lu + O(h²)]ℓ,0
             = −[uθ + (1/2)rℓ²∆θ Lu]ℓ,0 + O(∆θ²) + O(h²) = O(∆θ²) + O(h²),

as required (since uθ = 0 and Lu = 0 at (ℓh, 0)).

10.20 Part (a): Consistency of Lh is proved in the text and consistency of Bh was established in Exercise 10.19.

Part (b): From (10.33a) we find

    −Lh Uℓ,m = −(1/(h²rℓ))[rℓ+1/2(Uℓ+1,m − Uℓ,m) − rℓ−1/2(Uℓ,m − Uℓ−1,m)]
               − (1/(rℓ²∆θ²))(Uℓ,m+1 − 2Uℓ,m + Uℓ,m−1),

which is of the form (10.30) with ν = 4. The coefficient of Uℓ,m is positive and those of its neighbours are negative, thus satisfying the first of conditions (10.31). The sum of the coefficients is zero since Lh 1 = 0; consequently −Lh satisfies the definition of a positive type operator (Definition 10.8). To establish a maximum principle we mirror the proof of Theorem 10.2. The inequality −Lh Uℓ,m ≤ 0 gives (note that rℓ+1/2 + rℓ−1/2 = 2rℓ)

    2(h⁻² + (rℓ∆θ)⁻²)Uℓ,m ≤ (h⁻²/rℓ)(rℓ+1/2 Uℓ+1,m + rℓ−1/2 Uℓ−1,m) + (1/(rℓ²∆θ²))(Uℓ,m+1 + Uℓ,m−1)
                           ≤ [(h⁻²/rℓ)(rℓ+1/2 + rℓ−1/2) + 2/(rℓ²∆θ²)] max U
                           = 2(h⁻² + (rℓ∆θ)⁻²) max U,

the maximum being taken over the neighbouring nodes. Consequently Uℓ,m ≤ max{Uℓ+1,m, Uℓ,m+1, Uℓ−1,m, Uℓ,m−1} and the proof proceeds exactly as in Theorem 10.2.

Part (c): Φ = 1 − (1/4)r² > 0 on the unit disc r ≤ 1 and, since −Lh Φ = −LΦ = 1, this is a possible comparison function. Then, since Φ is bounded on the domain, Lh must be stable (Corollary 10.3).

Part (d): This is a consequence of Theorem 6.7.

10.21 With

    u(r, θ) = (1/π)[r² log(1/r) sin 2θ + ((π/4) − θ)r² cos 2θ] − (1/4)r²

we find that u(r, 0) = 0, u(r, π/2) = 0 and u(1, θ) = (1/4)(1 − 4θ/π) cos 2θ − 1/4. Consider the functions f(r, θ) = r² log(1/r) sin 2θ, g(r, θ) = ((π/4) − θ)r² cos 2θ and h(r, θ) = −(1/4)r². For f,

    fr = −r(2 log r + 1) sin 2θ,    fθ = 2r² log(1/r) cos 2θ,
    frr = −(2 log r + 3) sin 2θ,    fθθ = 4r² log(r) sin 2θ,

so that ∇²f := frr + (1/r)fr + (1/r²)fθθ = −4 sin 2θ. Similarly,

    gr = 2((π/4) − θ)r cos 2θ,    gθ = −r² cos 2θ − 2((π/4) − θ)r² sin 2θ,
    grr = 2((π/4) − θ) cos 2θ,    gθθ = 4r² sin 2θ − 4((π/4) − θ)r² cos 2θ,

and so ∇²g = 4 sin 2θ. Hence ∇²(f + g) = 0: f + g is a harmonic function. Also ∇²h = −1. Thus −∇²u = 1 since u = (1/π)(f + g) + h. Also, urr = −(2/π) log(r) sin 2θ + terms that remain bounded, so that |urr| → ∞ as r → 0 so long as sin 2θ ≠ 0.
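The claim that f + g is harmonic can be spot-checked numerically; a hedged sketch using second-order central differences for the polar Laplacian (the step size and sample points are arbitrary):

```python
import numpy as np

def u_fg(r, th):
    """f + g from the text: r^2*log(1/r)*sin(2*th) + (pi/4 - th)*r^2*cos(2*th)."""
    return r**2 * np.log(1 / r) * np.sin(2 * th) + (np.pi / 4 - th) * r**2 * np.cos(2 * th)

def polar_laplacian(F, r, th, d=1e-4):
    """Central-difference approximation of F_rr + F_r/r + F_thth/r^2."""
    Frr = (F(r + d, th) - 2 * F(r, th) + F(r - d, th)) / d**2
    Fr = (F(r + d, th) - F(r - d, th)) / (2 * d)
    Ftt = (F(r, th + d) - 2 * F(r, th) + F(r, th - d)) / d**2
    return Frr + Fr / r + Ftt / r**2

assert abs(polar_laplacian(u_fg, 0.6, 0.4)) < 1e-5   # f + g is harmonic
```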

10.22 Suppose that the eight neighbours of the grid point P(xℓ, ym) are labelled Q1, …, Q8, so that their coordinates are Q1(xℓ+1, ym), Q2(xℓ+1, ym+1), etc. Defining the operator Lh to be

    Lh u|P := α0 u|P − Σ_{j=1}^{8} αj u|Qj,

then, Taylor expanding each of the terms u(xℓ±h, ym±h) around the point (xℓ, ym), we find, on collecting terms, that

    R^h|P = Lu|P − [α0 u|P − Σ_{j=1}^{8} αj u|Qj]
          = C0 u + h(C1 ux + C2 uy) + (a − (1/2)h²C3)uxx + (2b − h²C4)uxy + (c − (1/2)h²C5)uyy
            − (1/6)h³(C6 uxxx + 3C7 uxxy + 3C8 uxyy + C9 uyyy) + O(h⁴),

where u and its derivatives on the right-hand side are evaluated at P and Ck (k = 0, 1, …, 9) are linear expressions in the coefficients αj (j = 0, 1, …, 8). In order that R^h = O(h²), the 9 α's should satisfy 10 linear equations, Cα = a say, with right-hand side a = (2/h²)[0, 0, 0, a, b, c, 0, 0, 0, 0]ᵀ. The second and seventh rows of the coefficient matrix, C, are identical, as are the third and tenth. Thus the rank of C cannot exceed 8 (the cause of this reduction in rank will be explained below). By performing elementary row operations on the augmented matrix [C|a] it can be shown that the rank of C is exactly 8 and that there is a one-parameter family of solutions given by

    α0 = −2(a + b + c)/h² + 4α8,
    α1 = α5 = (a − b)/h² − 2α8,
    α2 = α6 = b/h² + α8,
    α3 = α7 = (−b + c)/h² − 2α8,
    α4 = α8,

in which α8 is an arbitrary parameter. This will match the solution (10.39) if α8 = (1/2)(γ − b)/h².

In order to explain the reduction in rank, we first observe that Lh vℓ,m = 0 when v = (x − xℓ−1)(x − xℓ)(x − xℓ+1), simply because v = 0 at each of the points P, Q1, …, Q8. We also note that x − xℓ±1 = (x − xℓ) ∓ h and so

    v = (x − xℓ)[(x − xℓ)² − h²] = (x − xℓ)³ − h²(x − xℓ),

and Lh v = 0 implies that Lh(x − xℓ)³ = h²Lh(x − xℓ). The second observation is that, when u is expanded in a Taylor series about the point (xℓ, ym),

    u(x, y) = u + (x − xℓ)ux + (y − ym)uy + (1/2)[(x − xℓ)²uxx + 2(x − xℓ)(y − ym)uxy + (y − ym)²uyy] + ⋯,

where u := u(xℓ, ym), ux := ux(xℓ, ym), etc., and, applying Lh term by term to this expansion, we find

    Lh u = u(Lh 1) + ux Lh(x − xℓ) + uy Lh(y − ym) + ⋯.

Therefore C1 = h⁻¹Lh(x − xℓ) and C6 = h⁻³Lh(x − xℓ)³, and so C1 = C6—the 2nd and 7th rows of C are identical. A similar calculation with v = (y − ym−1)(y − ym)(y − ym+1) shows that Lh(y − ym)³ = h²Lh(y − ym) and therefore C2 = C9—the 3rd and 10th rows of C are identical.

10.23 We rewrite (10.41) as

1

ρ

a(Uℓ−1,m −2Uℓ,m+Uℓ+1,m )+ 12 b(Uℓ+1,m+1 −Uℓ−1,m+1 +Uℓ−1,m−1 −Uℓ+1,m−1 ) + cρ(Uℓ,m−1 − 2Uℓ,m + Uℓ,m+1 )

 − 21 γ(4Uℓ,m−2Uℓ+1,m +Uℓ+1,m+1 −2Uℓ,m−1+Uℓ−1,m−1 −2Uℓ−1,m+Uℓ−1,m−1 −2Uℓ,m−1+Uℓ+1,m−1 ) . 99

This is of the form

8  X  −1 α U h α U − Lh u P = h−1 j 0 y x Qj P j=1

with (these are easiest to calculate by adjusting the coefficients in Fig. 10.15) α0 = 2a/ρ + 2cρ − 2γ, α3 = cρ − γ,

α6 =

− 21 b

+

1 2 γ,

α1 = a/ρ − γ,

α2 = − 21 b + 21 γ

α4 = 21 b + 12 γ,

α5 = α1

α7 = α3 ,

α8 = 12 b + 21 γ.

These coefficients will be non-negative if γ is chosen such that |b| ≤ γ, γ ≤ a/ρ and γ ≤ cρ. p When ρ = a/c these become √ |b| ≤ γ ≤ ac which can always be achieved when b2 ≤ ac.

10.24 The operator L−h defined by (10.51) becomes, when the finite difference operators are expanded,

    L−h Uℓ,m = (4εh⁻² + h⁻¹)Uℓ,m − εh⁻²(Uℓ+1,m + Uℓ,m+1 + Uℓ,m−1) − (h⁻¹ + εh⁻²)Uℓ−1,m,

which is of the form (10.30) with ν = 4 and clearly has the properties of a positive type operator from Definition 10.8 (the correct sign pattern is evident and the coefficients sum to zero).

10.25 Using △−x = △x − (1/2)δx², the analogue of the operator L−h defined by (10.51) that is appropriate for the PDE −ε∇²u + pux = 0 with p > 0 may be written as

    L−h Uℓ,m = −εh⁻²(δx² + δy²)Uℓ,m + ph⁻¹△−x Uℓ,m
             = −(εh⁻² + (1/2)ph⁻¹)δx²Uℓ,m − εh⁻²δy²Uℓ,m + ph⁻¹△x Uℓ,m
             = −εh⁻²(1 + Peh)δx²Uℓ,m − εh⁻²δy²Uℓ,m + ph⁻¹△x Uℓ,m,

where Peh = ph/(2ε). When p < 0, the backward difference △−x should be replaced by a forward difference △+x and, with the identity △+x = △x + (1/2)δx², the corresponding manipulations are

    L+h Uℓ,m = −εh⁻²(δx² + δy²)Uℓ,m + ph⁻¹△+x Uℓ,m
             = −(εh⁻² − (1/2)ph⁻¹)δx²Uℓ,m − εh⁻²δy²Uℓ,m + ph⁻¹△x Uℓ,m
             = −εh⁻²(1 − Peh)δx²Uℓ,m − εh⁻²δy²Uℓ,m + ph⁻¹△x Uℓ,m.

Here Peh < 0 so that Peh = −|Peh| and therefore both cases are accommodated in the single formula

    L±h Uℓ,m := −εh⁻²(1 + |Peh|)δx²Uℓ,m − εh⁻²δy²Uℓ,m + ph⁻¹△x Uℓ,m.
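In one dimension the upwinded operator amounts to adding |p|h/2 of artificial diffusion to ε before central differencing; a hedged sketch (the helper name is illustrative):

```python
def upwind_coefficients(eps, p, h):
    """Coefficients (west, centre, east) of the 1-D upwinded operator
    -eps*u'' + p*u' at a node: the upwind difference equals a central
    difference plus the extra diffusion eps*|Pe_h| = |p|*h/2."""
    Pe = p * h / (2.0 * eps)
    eps_h = eps * (1.0 + abs(Pe))
    west = -eps_h / h**2 - p / (2.0 * h)
    centre = 2.0 * eps_h / h**2
    east = -eps_h / h**2 + p / (2.0 * h)
    return west, centre, east

# for p > 0 this reproduces the backward-difference convection exactly:
w, c, e = upwind_coefficients(1.0, 2.0, 0.5)
assert (w, c, e) == (-8.0, 12.0, -4.0) and w + c + e == 0.0
```

The coefficients have the sign pattern of a positive type operator (negative off-diagonal, positive centre, zero row sum), which is the point of the upwinding.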

The generalization to the PDE −ε(uxx + uyy) + pux + quy = 0 is

    −εh⁻²(1 + |Peh,x|)δx²Uℓ,m − εh⁻²(1 + |Peh,y|)δy²Uℓ,m + h⁻¹p△xUℓ,m + h⁻¹q△yUℓ,m = 0,

where Peh,x := ph/(2ε) and Peh,y := qh/(2ε).

10.26 From Exercise 4.20 we find that, in polar coordinates,

    uxx + uyy = (1/r)(rur)r + (1/r²)uθθ

and

    −yux + xuy = −r sin θ(cos θ ur − (1/r) sin θ uθ) + r cos θ(sin θ ur + (1/r) cos θ uθ)
               = (cos²θ + sin²θ)uθ = uθ.

Hence the PDE −ε(uxx + uyy) − yux + xuy = 0 becomes

    −ε[(1/r)(rur)r + (1/r²)uθθ] + uθ = 0.

A finite difference scheme that is consistent of second order may be constructed using the approximation (10.33) of the Laplacian and a central difference replacement of the first derivative term. This leads to the scheme Lh Uℓ,m = 0, where

    Lh Uℓ,m := −ε[(1/(h²rℓ))δr(rℓδrUℓ,m) + (1/(rℓ²∆θ²))δθ²Uℓ,m] + ∆θ⁻¹△θUℓ,m,

which is valid for rℓ > 0. It was pointed out in Example 10.14 that u(0, θ) must be independent of θ—and therefore uθ(0, θ) = 0—if u(r, θ) is to be single-valued as r → 0. Thus a possible finite difference scheme for ℓ = 0 is (see (10.35))

    U0 = (1/M) Σ_{m=0}^{M−1} U0,m.

Using (10.33a) we find that

    Lh Uℓ,m = α0 Uℓ,m − α1 Uℓ+1,m − α2 Uℓ,m+1 − α3 Uℓ−1,m − α4 Uℓ,m−1,

where

    α1 = εrℓ+1/2/(rℓh²),   α2 = (1/(rℓ²∆θ²))(ε − (1/2)rℓ²∆θ),
    α3 = εrℓ−1/2/(rℓh²),   α4 = (1/(rℓ²∆θ²))(ε + (1/2)rℓ²∆θ),

and α0 = Σ_{j=1}^{4} αj. The coefficients will clearly be non-negative provided ε ≥ (1/2)rℓ²∆θ. This means that if the motion takes place in a circle of radius a, the azimuthal grid size ∆θ must be chosen such that ∆θ ≤ 2ε/a². The quantity u is advected by a rigid-body rotational flow and so the speed increases with distance from the origin.
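The sign condition on ∆θ is easy to test numerically; a hedged sketch (the coefficient formulas are those derived above, the sample numbers are arbitrary):

```python
import numpy as np

def polar_coeffs(eps, r, h, dtheta):
    """Neighbour coefficients alpha_1..alpha_4 of the polar scheme at
    radius r (illustrative check of the sign condition)."""
    a1 = eps * (r + h / 2) / (r * h**2)
    a2 = (eps - 0.5 * r**2 * dtheta) / (r**2 * dtheta**2)
    a3 = eps * (r - h / 2) / (r * h**2)
    a4 = (eps + 0.5 * r**2 * dtheta) / (r**2 * dtheta**2)
    return np.array([a1, a2, a3, a4])

eps, a = 0.05, 1.0
dtheta = 2 * eps / a**2       # the admissibility bound derived above
assert polar_coeffs(eps, a, 0.1, dtheta).min() >= 0.0   # signs correct at r = a
```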


Exercises 11

Finite difference methods for parabolic PDEs

11.1 Differentiating ut = uxx with respect to t we find utt = uxxt, while differentiating twice with respect to x leads to uxxxx = uxxt. Hence (11.12) becomes

    R^n_m = (1/2)k utt|^n_m − (1/12)h² uxxxx|^n_m + O(k²) + O(h⁴)
          = (1/2)h²(r − 1/6) uxxt|^n_m + O(k²) + O(h⁴).

When r = 1/6 the order of consistency is O(k²) + O(h⁴).

11.2 (a) An FTCS approximation of ut = uxx + f(x, t) is given by (11.10):

    U^{n+1}_m = U^n_m + rδx²U^n_m + kf^n_m,

for which the local truncation error is

    R^n_m = (1/2)k utt|^n_m − (1/12)h² uxxxx|^n_m + O(k²) + O(h⁴).

Differentiating ut = uxx + f with respect to t we find utt = uxxt + ft, while differentiating uxx = ut − f twice with respect to x leads to uxxxx = uxxt − fxx. Hence the LTE becomes

    R^n_m = (1/2)k ft|^n_m + (1/12)h² fxx|^n_m + (1/2)h²(r − 1/6) uxxt|^n_m + O(k²) + O(h⁴),

and there is no increase in order of consistency when r = 1/6 unless f itself satisfies ft + fxx = 0.

(b) An FTCS approximation of ut = uxx − u is given in Example 11.9 (with γ = −1) as

    U^{n+1}_m = rU^n_{m−1} + (1 − 2r − k)U^n_m + rU^n_{m+1},

for which the local truncation error is

    R^n_m = (1/2)k utt|^n_m − (1/12)h² uxxxx|^n_m + O(k²) + O(h⁴).

Differentiating ut = uxx − u with respect to t we find utt = uxxt − ut, while differentiating twice with respect to x leads to uxxxx = uxxt + uxx = uxxt + ut + u. Hence the LTE becomes

    R^n_m = −(1/12)h²[(1 + 6r)ut + u]|^n_m + (1/2)h²(r − 1/6) uxxt|^n_m + O(k²) + O(h⁴),

and there is no increase in order of consistency when r = 1/6.

11.3 Equations (11.8b) give, for m = 1, 2, …, M − 1,

    U1^{n+1} = rg0(nk) + (1 − 2r)U1^n + rU2^n

    U2^{n+1} = rU1^n + (1 − 2r)U2^n + rU3^n
        ⋮
    U_{M−2}^{n+1} = rU_{M−3}^n + (1 − 2r)U_{M−2}^n + rU_{M−1}^n
    U_{M−1}^{n+1} = rU_{M−2}^n + (1 − 2r)U_{M−1}^n + rg1(nk),

which, in matrix–vector form, become

    u^{n+1} = [ 1−2r   r
                 r   1−2r   r
                      ⋱     ⋱    ⋱
                            r   1−2r ] u^n + r[g0(nk), 0, …, 0, g1(nk)]ᵀ,

where u^n = [U1^n, U2^n, …, U_{M−1}^n]ᵀ. The coefficient matrix is clearly the same as that in the BTCS scheme (11.25) with r replaced by −r.
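One step of this iteration can be written compactly; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def ftcs_step(u, r, g0, g1):
    """One FTCS step for u_t = u_xx: u holds the interior values
    U_1..U_{M-1}, and g0, g1 are the boundary values at the current level."""
    ub = np.concatenate(([g0], u, [g1]))       # pad with the boundary data
    return u + r * (ub[:-2] - 2.0 * u + ub[2:])

# with zero boundary data, a flat profile decays only next to the boundary:
u = ftcs_step(np.ones(3), 0.25, 0.0, 0.0)
assert np.allclose(u, [0.75, 1.0, 0.75])
```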

11.4 For any $j$ with $m - n < j < m + n$, the data $U^0_j = (-1)^j$ give
\[
U^1_j = rU^0_{j-1} + (1 - 2r)U^0_j + rU^0_{j+1} = (-1)^j\big[-r + (1 - 2r) - r\big] = (1 - 4r)(-1)^j.
\]
A similar calculation with $m - n + 1 < j < m + n - 1$ gives
\[
U^2_j = rU^1_{j-1} + (1 - 2r)U^1_j + rU^1_{j+1} = (1 - 4r)(-1)^j\big[-r + (1 - 2r) - r\big] = (1 - 4r)^2(-1)^j
\]
and, with $m - n + \ell < j < m + n - \ell$,
\[
U^{\ell+1}_j = rU^\ell_{j-1} + (1 - 2r)U^\ell_j + rU^\ell_{j+1} = (1 - 4r)^{\ell+1}(-1)^j,
\]
for $\ell = 0, 1, \ldots, n - 1$. At $\ell = n - 1$ we find $U^n_m = (1 - 4r)^n(-1)^m$ so, for $r = \tfrac12 + \varepsilon$,
\[
|U^n_m| = |1 + 4\varepsilon|^n \to \infty.
\]
As $P(X, T)$ is a fixed point, $n = T/k \to \infty$ as $k \to 0$ and the sequence $|U^n_m|$ is unbounded. The scheme is therefore unstable when $r > \tfrac12$.
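The geometric growth $(1 + 4\varepsilon)^n$ is easy to observe numerically. The following sketch (my own illustration of the argument, with hypothetical parameter choices) feeds the oscillatory data $U^0_m = (-1)^m$ through FTCS with $r = \tfrac12 + \varepsilon$ and reads off the amplification at a node the boundaries cannot yet influence.

```python
import numpy as np

eps = 0.1
r = 0.5 + eps            # just above the stability limit r = 1/2
M = 40
U = (-1.0) ** np.arange(M + 1)   # U_m^0 = (-1)^m
for n in range(10):
    Unew = U.copy()      # end values held fixed (Dirichlet-type)
    Unew[1:-1] = r * U[:-2] + (1.0 - 2.0 * r) * U[1:-1] + r * U[2:]
    U = Unew
# After 10 steps, boundary effects have travelled only 10 nodes,
# so the midpoint has grown by exactly |1 - 4r|^n = (1 + 4*eps)^n.
growth = abs(U[M // 2])
```

Here `growth` equals $1.4^{10} \approx 28.9$; halving $k$ (doubling $n$ for the same final time) squares the factor, which is the unboundedness used above.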

11.5 The finite difference scheme $U^{n+1}_m = \tfrac13 rU^n_{m-2} + (1 - r)U^n_m + \tfrac23 rU^n_{m+1}$ can be written in the form (11.10) with
\[
L_h U^n_m = -\tfrac13 h^{-2}\big(U^n_{m-2} - 3U^n_m + 2U^n_{m+1}\big).
\]
Using the Taylor expansions of $u^n_{m-2}$ and $u^n_{m+1}$ about the point $(x_m, t_n)$ (all terms on the right-hand side are evaluated at $(x_m, t_n)$), we find
\[
L_h u^n_m = -\tfrac13 h^{-2}\Big[\Big(u - 2hu_x + \tfrac{1}{2!}(2h)^2u_{xx} - \tfrac{1}{3!}(2h)^3u_{xxx} + O(h^4)\Big) - 3u + 2\Big(u + hu_x + \tfrac{1}{2!}h^2u_{xx} + \tfrac{1}{3!}h^3u_{xxx} + O(h^4)\Big)\Big] = -u_{xx} + O(h).
\]
Since $L_h u = Lu + O(h)$, $L_h$ is first-order consistent with $L$ and, from (11.11), the given method is therefore consistent with the heat equation. The stencil and domain of dependence are shown in Fig. 15.

Figure 15: Stencil (left) and domain of dependence (right) for Exercise 11.5.

11.6 The finite difference scheme $U^{n+1}_m = U^n_m + r\big(U^n_{m-2} - 2U^n_{m-1} + U^n_m\big)$ can be written in the form (11.10) with
\[
L_h U^n_m = -h^{-2}\delta_x^2 U^n_{m-1}.
\]
Consistency could be addressed in a manner similar to the previous exercise, but a rather more elegant approach is to observe that
\[
L_h u^n_m = -h^{-2}\delta_x^2 u^n_{m-1} = -u_{xx}\big|^n_{m-1} + O(h^2) = -u_{xx}\big|^n_m + O(h),
\]
so that $L_h$ is consistent of order 1 with $L$.

Figure 16: Stencil (left) and domain of dependence PAB (right) for Exercise 11.6.

The stencil and domain of dependence are shown in Fig. 16. Because the numerical solution at P uses no information along the initial line for $x > X$, it cannot converge to the true solution, by virtue of the CFL condition.

11.7 Replacing $U$ by $-U$ in Theorem 11.5 we have
\[
-h^{-2}\delta_x^2(-U^n_m) + k^{-1}\big((-U^{n+1}_m) - (-U^n_m)\big) \le 0,
\quad\text{i.e.,}\quad
-h^{-2}\delta_x^2 U^n_m + k^{-1}\big(U^{n+1}_m - U^n_m\big) \ge 0
\]
for $(x_m, t_{n+1}) \in \Omega_\tau$. Thus, if $r = k/h^2 \le 1/2$ then $U$ is either constant or else attains its minimum ($-U$ attains its maximum) value on $\Gamma_\tau$.
If $-h^{-2}\delta_x^2 U^n_m + k^{-1}(U^{n+1}_m - U^n_m) = 0$ then $U$ is either constant or else attains both its maximum and minimum values on $\Gamma_\tau$. The case when $U$ is constant can only occur if $U$ is the same constant on $\Gamma_\tau$. So, when $U^n_0 = U^n_M = 0$, the maximum absolute value must occur at $t = 0$.

11.8 To establish inverse monotonicity we have to show that $L_h U \ge 0$ implies that $U \ge 0$.
In view of the discrete minimum principle (replacing $U$ by $-U$ in Theorem 11.5; see also Exercise 11.7), $U$ is either constant or else attains its minimum value on $\Gamma_\tau$. If $U$ is constant then, because $L_h U_{\ell,m} = U_{\ell,m} \ge 0$ for nodes $(x_\ell, t_n)$ on the boundary, this constant must be non-negative. Otherwise, $U$ attains its minimum value on the boundary, which is non-negative for the same reason.

11.9 The local truncation error of the BTCS approximation (11.22) is, by definition,
\[
R_h\big|^n_m := \frac{1}{k}\big(u^{n+1}_m - u^n_m - r\delta_x^2 u^{n+1}_m\big) = \frac{1}{k}\big(u^{n+1}_m - u^n_m\big) - h^{-2}\delta_x^2 u^{n+1}_m
\]

and, using $u^n_m = u^{n+1}_m - k u_t|^{n+1}_m + \tfrac12 k^2 u_{tt}|^{n+1}_m + O(k^3)$ together with $h^{-2}\delta_x^2 u^{n+1}_m = u_{xx}|^{n+1}_m + \tfrac{1}{12}h^2 u_{xxxx}|^{n+1}_m + O(h^4)$, we find
\[
R_h\big|^n_m = \big(u_t - \tfrac12 ku_{tt} + O(k^2)\big)\big|^{n+1}_m - \big(u_{xx} + \tfrac{1}{12}h^2u_{xxxx} + O(h^4)\big)\big|^{n+1}_m
= \big[u_t - u_{xx}\big]^{n+1}_m - \tfrac12 h^2\big[r u_{tt} + \tfrac16 u_{xxxx}\big]^{n+1}_m + O(k^2) + O(h^4),
\]
and so $R_h = O(k) + O(h^2)$. Since $u_{tt} = u_{txx} = \partial_x^2 u_t = u_{xxxx}$, this becomes
\[
R_h\big|^n_m = -\tfrac12 h^2\big(r + \tfrac16\big)u_{txx}\big|^{n+1}_m + O(k^2) + O(h^4).
\]
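The leading term $-\tfrac12 h^2(r + \tfrac16)u_{txx}$ can be confirmed numerically with the exact heat-equation solution $u = e^{-t}\sin x$, for which $u_{txx} = e^{-t}\sin x$. The check below is my own illustration, not part of the book:

```python
import math

def btcs_lte(h, r, x=1.0):
    """Evaluate the BTCS LTE at (x, t=0) for u = exp(-t) sin(x), k = r h^2."""
    k = r * h * h
    u = lambda x_, t_: math.exp(-t_) * math.sin(x_)
    forward = (u(x, k) - u(x, 0.0)) / k                       # k^{-1}(u^{n+1} - u^n)
    second = (u(x - h, k) - 2.0 * u(x, k) + u(x + h, k)) / (h * h)  # h^{-2} d_x^2 u^{n+1}
    return forward - second

h, r = 0.01, 1.0
predicted = -0.5 * h * h * (r + 1.0 / 6.0) * math.exp(-r * h * h) * math.sin(1.0)
observed = btcs_lte(h, r)
```

With $h = 0.01$ the computed LTE agrees with the predicted leading term to a relative accuracy of about $h^2$, as the $O(k^2) + O(h^4)$ remainder suggests.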

11.10 Since $A = I + kT$, where $T$ is given in (6.26), and $v^T A v = v^T v + k\,v^T T v$, then $v^T A v \ge 0$ follows from the positive definiteness of $T$: $v^T T v \ge 0$.

11.11 Replacing $U$ by $-U$ in Theorem 11.11 we deduce that
\[
-h^{-2}\delta_x^2(-U^{n+1}_m) + k^{-1}\big((-U^{n+1}_m) - (-U^n_m)\big) \le 0,
\quad\text{i.e.,}\quad
-h^{-2}\delta_x^2 U^{n+1}_m + k^{-1}\big(U^{n+1}_m - U^n_m\big) \ge 0,
\]
and so $U$ is either constant or else attains its minimum value on $\Gamma_\tau$ when $r = k/h^2 \le 1/2$. Thus, if $-h^{-2}\delta_x^2 U^{n+1}_m + k^{-1}(U^{n+1}_m - U^n_m) = 0$ then $U$ is either constant or else attains both its maximum and minimum values on $\Gamma_\tau$. The case when $U$ is constant can only occur if $U$ is the same constant on $\Gamma_\tau$. So, when $U^n_0 = U^n_M = 0$, the maximum absolute value must occur at $t = 0$.

11.12 The local truncation error of the finite difference scheme is, by definition,
\[
R_h\big|^n_m := \frac{1}{k}\Big(u^{n+1}_m - u^n_m - r\delta_x^2 u^n_m - \tfrac12\gamma k\big(u^n_{m-1} + u^n_{m+1}\big)\Big)
= \frac{1}{k}\big(u^{n+1}_m - u^n_m\big) - h^{-2}\delta_x^2 u^n_m - \tfrac12\gamma\big(u^n_{m-1} + u^n_{m+1}\big).
\]
The first two terms on the right-hand side are the same as for the FTCS method and lead to (11.12). The final term could be dealt with using the Taylor expansions of $u^n_{m\pm1}$ about the point $(x_m, t_n)$, but it is rather more elegant to observe that
\[
u^n_{m-1} + u^n_{m+1} = 2u^n_m + \delta_x^2 u^n_m = 2u^n_m + h^2 u_{xx}\big|^n_m + O(h^4),
\]
and so $u^n_{m-1} + u^n_{m+1} = 2u^n_m + O(h^2)$. Thus, combined with (11.12), the LTE is
\[
R^n_m = \tfrac12 k\,u_{tt}\big|^n_m - \tfrac{1}{12}h^2 u_{xxxx}\big|^n_m - \tfrac12\gamma h^2 u_{xx}\big|^n_m + O(k^2) + O(h^4).
\]

11.13 With $L_h U = -h^{-2}\delta_x^2 U$, the defining equation (11.30) of the $\theta$-method becomes
\[
U^{n+1}_m - \theta r\delta_x^2 U^{n+1}_m = U^n_m + (1 - \theta)r\delta_x^2 U^n_m,
\]

i.e.,
\[
-r\theta U^{n+1}_{m-1} + (1 + 2r\theta)U^{n+1}_m - r\theta U^{n+1}_{m+1}
= r(1 - \theta)U^n_{m-1} + \big(1 - 2(1 - \theta)r\big)U^n_m + r(1 - \theta)U^n_{m+1},
\]
so, with the given BCs, these give, for $m = 1$ and $m = M - 1$,
\[
(1 + 2r\theta)U^{n+1}_1 - r\theta U^{n+1}_2 = \big(1 - 2(1 - \theta)r\big)U^n_1 + r(1 - \theta)U^n_2 + r\big((1 - \theta)g_0(t_n) + \theta g_0(t_{n+1})\big),
\]
\[
-r\theta U^{n+1}_{M-2} + (1 + 2r\theta)U^{n+1}_{M-1} = r(1 - \theta)U^n_{M-2} + \big(1 - 2(1 - \theta)r\big)U^n_{M-1} + r\big((1 - \theta)g_1(t_n) + \theta g_1(t_{n+1})\big).
\]
Thus, for $(M - 1) \times (M - 1)$ matrices $B$ and $C$,
\[
B u^{n+1} = C u^n + f^n,
\]
where
\[
B = \begin{bmatrix}
1+2r\theta & -r\theta   &          &          \\
-r\theta   & 1+2r\theta & -r\theta &          \\
           & \ddots     & \ddots   & \ddots   \\
           &            & -r\theta & 1+2r\theta
\end{bmatrix} = A(\theta r),
\qquad
C = \begin{bmatrix}
1-2r(1-\theta) & r(1-\theta)    &             &          \\
r(1-\theta)    & 1-2r(1-\theta) & r(1-\theta) &          \\
               & \ddots         & \ddots      & \ddots   \\
               &                & r(1-\theta) & 1-2r(1-\theta)
\end{bmatrix} = A\big((\theta - 1)r\big),
\]
where $A \equiv A(r)$ is the coefficient matrix appearing in (11.25). Also,
\[
f^n = r\big[(1 - \theta)g_0(t_n) + \theta g_0(t_{n+1}),\; 0, \ldots, 0,\; (1 - \theta)g_1(t_n) + \theta g_1(t_{n+1})\big]^T.
\]

11.14 In the context of the matrix $A(\theta r)$, the parameters $a$, $b$ and $c$ appearing in (B.11) are
\[
a = c = -\theta r, \qquad b = 1 + 2r\theta,
\]
and so the eigenvalues of $A(\theta r)$ are given by (with dimension $n = M - 1$)
\[
\lambda_j = 1 + 2r\theta - 2r\theta\cos\frac{j\pi}{M} = 1 + 4r\theta\sin^2\frac{j\pi}{2M},
\]
which is clearly positive when $\theta \ge 0$ for all $j = 1, 2, \ldots, M - 1$. Thus $A$ is positive definite since it is symmetric with positive eigenvalues.

11.15 Replacing $U$ by $-U$ in Theorem 11.16 we deduce that
\[
-\tfrac12 h^{-2}\delta_x^2\big((-U^{n+1}_m) + (-U^n_m)\big) + k^{-1}\big((-U^{n+1}_m) - (-U^n_m)\big) \le 0,
\quad\text{i.e.,}\quad
-\tfrac12 h^{-2}\delta_x^2\big(U^{n+1}_m + U^n_m\big) + k^{-1}\big(U^{n+1}_m - U^n_m\big) \ge 0
\]

for $(x_m, t_{n+1}) \in \Omega_\tau$. If $r = k/h^2 \le 1$ then, from Theorem 11.16, $-U$ is either constant or else attains its maximum value (and therefore $U$ attains its minimum value) on $\Gamma_\tau$.
Thus, if $-\tfrac12 h^{-2}\delta_x^2(U^{n+1}_m + U^n_m) + k^{-1}(U^{n+1}_m - U^n_m) = 0$ then $U$ is either constant or else attains both its maximum and minimum values on $\Gamma_\tau$. The case when $U$ is constant can only occur if $U$ is the same constant on $\Gamma_\tau$. So, when $U^n_0 = U^n_M = 0$, the maximum absolute value must occur at $t = 0$.

11.16 With $h = \tfrac12$ and homogeneous Dirichlet BCs applied, the Crank-Nicolson scheme (11.34b) gives
\[
U^{n+1}_m = \frac{1 - r}{1 + r}\,U^n_m
\quad\Rightarrow\quad
U^n_m = \Big(\frac{1 - r}{1 + r}\Big)^n U^0_m,
\]
and so $U^n_m$ is negative on odd-numbered time steps if $r > 1$, thus violating inverse monotonicity. Hence $r \le 1$ is a necessary condition for inverse monotonicity.

h

M X ′′

m=0

|Um |

2

!1/2

h

M X ′′

m=0

1

!1/2

max |Um | = kU·kh,∞ .

0≤m≤M

n M If the largest term among {Um }m=0 occurs at m = 0 or m = M , then kU· kh,∞ = |U0 | or |UM | √ 1 and kU·kh,2 ≥ 2 hkU·kh,∞ . √ If kU· kh,∞ = |Um | with 0 < m < M , then kU· kh,2 ≥ hkU· kh,∞ . Thus, in general, 1 2

√ hkU· kh,∞ ≤ kU· kh,2 ≤ kU· kh,2 .

For the two possible grid functions a simple calculation reveals Case (a) (b)

kU·kh,∞ 1 1

kU· kh,2 1 √ 1 2 h

Right hand bound attained Left hand bound attained

These examples show that the bounds are attained, so they cannot be improved upon.

11.18 We substitute $U^n_m = \xi^n e^{i\kappa mh}$ into $[1 - r\delta_x^2]U^{n+1}_m = U^n_m$ and use the relationships $U^{n+1}_m = \xi U^n_m$ and (11.47b) to give
\[
\big[1 + 4r\sin^2(\tfrac12\kappa h)\big]\xi U^n_m = U^n_m
\quad\Rightarrow\quad
\xi = \frac{1}{1 + 4r\sin^2(\tfrac12\kappa h)}.
\]
Clearly $\xi$ is real and positive for all wavenumbers $\kappa$ and, since $1 + 4r\sin^2(\tfrac12\kappa h) \ge 1$, it follows that $0 < \xi \le 1$ for all mesh ratios $r > 0$ — the scheme is unconditionally $\ell^2$-stable.

11.19 From the solution to Exercise 11.13, the $\theta$-method applied to the heat equation gives
\[
U^{n+1}_m - \theta r\delta_x^2 U^{n+1}_m = U^n_m + (1 - \theta)r\delta_x^2 U^n_m.
\]
We now substitute $U^n_m = \xi^n e^{i\kappa mh}$ and use the relationships $U^{n+1}_m = \xi U^n_m$ and (11.47b) to give (we define $s = \sin^2(\tfrac12\kappa h)$ as a convenient abbreviation)
\[
[1 + 4r\theta s]\,\xi U^n_m = [1 - 4r(1 - \theta)s]\,U^n_m,
\]

so that
\[
\xi = \frac{1 - 4r(1 - \theta)s}{1 + 4r\theta s}.
\]
We can rewrite this as
\[
\xi = 1 - \frac{4rs}{1 + 4r\theta s},
\]
so $\xi \le 1$ for all $r > 0$ and $\theta \ge 0$. In order that $\xi \ge -1$ we require
\[
1 - 4r(1 - \theta)s \ge -1 - 4r\theta s,
\quad\text{i.e.,}\quad
2r(1 - 2\theta)s \le 1.
\]
This holds for all $r > 0$ when $\theta \ge \tfrac12$; otherwise, it will hold for all $s \in [0, 1]$ only when $r$ is restricted by
\[
r \le \frac{1}{2(1 - 2\theta)}.
\]
This reduces, as it should, to $r \le \tfrac12$ when $\theta = 0$ and the method becomes the FTCS scheme.
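The bound $r \le 1/(2(1-2\theta))$ is easy to probe numerically by sampling the amplification factor over $s \in [0,1]$ (a quick check of my own, not from the text); $\theta = 1$ recovers the unconditionally stable BTCS factor of Exercise 11.18.

```python
import numpy as np

def xi_theta(r, theta, s):
    """Amplification factor of the theta-method for the heat equation."""
    return (1.0 - 4.0 * r * (1.0 - theta) * s) / (1.0 + 4.0 * r * theta * s)

s = np.linspace(0.0, 1.0, 401)
theta = 0.25
r_crit = 1.0 / (2.0 * (1.0 - 2.0 * theta))        # predicted limit: r = 1
inside  = np.max(np.abs(xi_theta(0.99 * r_crit, theta, s)))
outside = np.max(np.abs(xi_theta(1.10 * r_crit, theta, s)))
btcs    = np.max(np.abs(xi_theta(50.0, 1.0, s)))  # theta = 1: BTCS, any r
```

`inside` stays at 1 (every scheme has $\xi = 1$ at $s = 0$), `outside` exceeds 1, and the BTCS factor never leaves $(0, 1]$ however large $r$ is taken.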

11.20 n n n n n Since Um−1 + Um+1 = δx2 Um + 2Um , substituting Um = ξ n eiκmh and use the relationships 2 1 n+1 n Um = ξUm and (11.47b) give (we define s = sin ( 2 κh) as a convenient abbreviation) ξ = 1 − k − (4r − 2k)s. Thus ξ(s) is a linear function of s ∈ [0, 1] and will achieve its maximum an minimum values at either s = 0 or s = 1. At s = 0, −1 ≤ ξ(0) = 1 − k ≤ 1 for 0 < k ≤ 2 while, at s = 1, −1 ≤ ξ(1) = 1 − 4r + k ≤ 1 requires k ≤ 4r (i.e., h ≤ 2) and 4r − k ≤ 2. In terms of h and k, the conditions for |ξ| ≤ 1 for all wavenumbers κ are k ≤ 2,

h ≤ 2,

k≤

2h2 . 4 − h2

The standard method (11.18) with γ = 1 has the amplification factor ξ = 1 − k − 4rs. At s = 0, −1 ≤ ξ(0) = 1 − k ≤ 1 for 0 < k ≤ 2 while, at s = 1, −1 ≤ ξ(1) = 1 − 4r − k ≤ 1 requires 4r + k ≤ 2. Thus, in terms of h and k, the conditions for |ξ| ≤ 1 for all wavenumbers κ are 2h2 k ≤ 2, k ≤ . 4 + h2 This limit on k is stricter than for the first method which will therefore allow larger time steps while maintaining stability (the order of consistency is O(h2 ) + O(k) for both schemes). 108

11.21 The FTCS scheme (11.53) n+1 n n n Um = Um + rδx2 Um + c△x Um n n n + (1 − 2r)Um + (r + 21 ρ)Um+1 = (r − 12 ρ)Um−1

is of the form n+1 n n n Um = α−1 Um−1 + α0 Um + α1 Um+1

with α−1 = r − 21 ρ, α0 = 1 − 2r, α1 = r + 12 ρ. Non-negativity of the coefficients requires 1 1 2 ρ ≤ r ≤ 2 . The left inequality simplifies to h ≤ 2ε (so that stability is only possible if the spatial grid size is sufficiently small compared to ε), that is, Peh ≤ 1, where Peh := h/(2ε) is the mesh Peclet number. The right inequality leads to k ≤ h2 /2ε—the stability region is shown in Fig. 17 (Left). The conditions 12 ρ2 ≤ r ≤ 21 for ℓ2 -stability require k ≤ 2ε and k ≤ h2 /2ε, both coinciding when Peh = 1. The corresponding stability region is shown in Fig. 17 (Right). Thus the scheme is ℓ2 -stable for any spatial grid size. k 2ε

ℓ∞ stability region

k 2ε

ℓ2 stability region

2ε h 2ε (Peh = 1) (Peh = 1) Figure 17: The stability regions for Exercise 11.21.

h

The proof of Theorem 11.5 may be used to establish a maximum principle under the conditions 1 1 j+1 2 ρ ≤ r ≤ 2 . The only change necessary is the derivation of an upper bound on Um : This now reads j j j+1 j Um ≤ α−1 Um−1 + α0 Um + α1 Um+1 j j j and, because Um−1 , Um , Um+1 ≤ Kτ and the non-negativity of the coefficients  j+1 Um ≤ α−1 + α0 + α1 Kτ = Kτ

since α−1 + α0 + α1 = 1.

11.22 n We substitute Um = ξ n eiκmh into the finite difference scheme n+1 n+1 n n Um − rδx2 Um = Um − 2c△Um , n+1 n and use the relationships Um = ξUm , (11.47b) and (11.47c) to give

ξ= Then |ξ|2 − 1 =

1 − 2ic sin κh . 1 + 4r sin2 21 κh

(r − 2c2 ) + 2s(r2 + c2 ) 1 + 4c sin2 κh − 1 = −8s , 2 1 (1 + 4r sin 2 κh)2 (1 + 4r sin2 21 κh)2 109

where we have written s = sin2 21 κh. Clearly |ξ|2 − 1 ≤ 0 if the numerator of this expression is non-negative for s ∈ [0, 1]. Since it is linear in s we need only check the end points: s = 0 leads to r ≥ 2c2 and s = 1 imposes no restriction. Thus, |ξ|2 ≤ 1 if r ≥ 2c2 , that is, k ≤ 21 . 11.23 n We substitute Um = ξ n eiκmh into the finite difference scheme n+1 n n n Um = 13 rUm−2 + (1 − r)Um + 32 rUm+1 n+1 n n n n n and use the relationships Um = ξUm , Um−2 = e−2iκmh Um and Um+1 = eiκmh Um to give n n n n + 23 reiκh Um + (1 − r)Um = 31 re−2iκh Um ξUm

ξ = 1 − r + 13 r(2eiκh + e−2iκh ).

Extracting real and imaginary parts of the right hand side we have ξ = (1 − r − r cos κh) + 31 ir sin κh = (1 − 2r sin2 12 κh) + 13 ir sin κh so that, after expressing sin κh in terms of half-angles,   |ξ|2 − 1 = −4rs 1 − r + 91 r(1 − s) ,

s = sin2 12 κh.

Hence |ξ|2 − 1 ≤ 0 if the bracketed term on the right hand side is positive for s ∈ [0, 1], Since it is a linear expression in s, only its end-points need to be examined. It is non-negative at s = 0 if r ≤ 9/8 and at s = 1 if r ≤ 1. The method is therefore ℓ2 stable for r ≤ 1. 11.24 In these types of calculations it is convenient to define θ = κh ∈ [−π, π].

(a) ξ(θ) = 1 − 4r sin2 ( 12 θ) and so ξ(0) = 1 (no information), ξ(π) = 1 − 4r so −1 ≤ ξ ≤ 1 if r < 12 . For small values of θ we have sin2 ( 21 θ) ≈ 41 θ2 so ξ = 1 − kκ 2 + O(κ 3 ). This gives no new conditions.

(b) ξ = 1 − 4r sin2 ( 12 θ) + i ρ sin θ. The conditions for κh = 0, π are the same as part (a). For small values of θ (bearing in mind that r = εk/h2 ), ξ = 1 + ikκ − εk 2 κ 2 + · · · ⇒

|ξ|2 = 1 − (2ε − k)kκ + O(k 2 ).

Hence |ξ|2 > 1 for small wavenumbers unless k ≤ 2ε. This is one of the conditions found in Example 11.24 (c) Little useful information is gained by sampling ξ so we expand numerator and denominator in powers to h (leading to powers of k) ξ=

1 − 2ic sin κh 1 − 2ikκ ≈ 2 1 1 + kκ 2 1 + 4r sin 2 κh

|ξ|2 ≈

1 + 4k 2 κ 2 1 + 4k 2 κ 2 ≈ (1 + kκ 2 )2 1 + 2kκ 2

and the final expression will exceed 1 unless 4k 2 κ 2 ≤ 2kκ 2 , i.e., k ≤ 12 . This condition is also sufficient for stability—see Exercise 11.22. 11.25 With the Robin end condition −ux (0, t) + σu(0, t) = g0 (t), the numerical boundary condition (11.57) is replaced by n − 21 h−1 (−U−1 + U1n ) + σU0n = g0 (nk). 110

n Using (11.59) to eliminate U−1 leads to

 1 U n+1 − (1 − 2r − 2hrσ)U0n − 2rU1n = g0 (nk). 2hr 0

or, for computational purposes,

U0n+1 = (1 − 2r − 2hrσ)U0n + 2rU1n + 2rhg0 (nk). However, it is the former version that is correctly scaled for determining the local truncation error (all the terms involving u on the right hand side in the following expansions are evaluated at (0, nk)): n  1 un+1 − (1 − 2r − 2hrσ)un0 − 2run1 − g0 (nk) Rh 0 = 0 2hr 1 = u + kut + 21 k 2 utt + O(k 3 ) 2hr − (1 − 2r − 2hrσ)u  − 2r(u + hux + 12 h2 uxx + 61 h3 uxxx + O(h4 )) − g0 (nk)

 1 (1 − (1 − 2r − 2hrσ) − 2r)u + kut − 2hrux + 12 k 2 utt − kuxx + · · · − g0 (nk) 2hr  = (σu − ux − g0 (nk)) + 12 h ut − uxx + 61 h(3kutt − 2huxxx) + · · · . =

Hence Rh = O(hk) + O(h2 ) is consistent of second order in h since k = O(h), u(0, t) − ux(0, t) = g0 (t) and ut = uxx . 11.26 Condition (11.57) with n + 1 replaced n gives n+1 + U1n+1 ) = g0 ((n + 1)k) − 12 h−1 (−U−1

and the BTCS scheme (11.22b) with m = 0 gives n+1 −rU−1 + (1 + 2r)U0n+1 − rU1n+1 = U0n . n+1 Multiplying the first of these by 2hr and adding to the second eliminates U−1 and gives

(1 + 2r)U0n+1 − 2rU1n+1 = U0n + 2hrg0 ((n + 1)k). 11.27 The Crank–Nicolson scheme (11.34b) for the heat equation at m = 0 is n+1 n + (1 + r) U0n+1 − 12 rU1n+1 = 21 rU−1 + (1 − r) U0n + 21 rU1n . − 21 rU−1

and using (11.57) at t = nk and also with n replaced by n + 1 gives n + U1n ) = g0 (nk) − 21 h−1 (−U−1

n+1 − 21 h−1 (−U−1 + U1n+1 ) = g0 ((n + 1)k)

n U−1 = U1n + 2hg0 (nk)

n+1 U−1 = U1n+1 + 2hg0 ((n + 1)k).

n+1 n On substituting for U−1 and U−1 the Crank–Nicolson scheme then becomes

 (1 + r) U0n+1 − rU1n+1 = (1 − r) U0n + rU1n + hr g0 (nk) + g0 ((n + 1)k) . 111

11.28 n With Uℓ,m = ξ n ei(κx ℓ+κy m)h (from (11.62)) we find, using (11.47b), n n = (1 − 4r sin2 21 κy h)Uℓ,m (1 + rδy2 )Uℓ,m

n n (1 + rδx2 )(1 + rδy2 )Uℓ,m = (1 − 4r sin2 12 κy h)(1 + rδx2 )Uℓ,m

n = (1 − 4r sin2 12 κy h)(1 − 4r sin2 21 κx h)Uℓ,m

n+1 n which, together with Uℓ,m = ξUℓ,m gives the amplification factor

ξ = (1 − 4r sin2 12 κy h)(1 − 4r sin2 21 κx h). The expression (1 − 4r sin2 21 κx h) is the amplification factor of the FTCS method for solving the one-dimensional heat ut = uxx and its modulus was shown in Example 11.21 to lie in the interval [−1, 1] if, and only if, r ≤ 12 . The same argument applies to the expression (1 − 4r sin2 21 κy h). Since κ x and κ y are independent variables, it follows that |ξ| ≤ 1 requires r ≤ 21 . 11.29 We prove the result by induction on n. Let K0 = kU 0 kh,∞ , then the induction hypothesis kU n kh,∞ ≤ kU 0 kh,∞ is satisfied at n = 0. Let us suppose that it holds up until n = j so that kU j kh,∞ ≤ K0 . We need to prove that kU j+1 kh,∞ ≤ K0 when r ≤ 21 . This is done in a two step process, exploiting the one-dimensional nature of the factors constituting the scheme. Let j+1 j Vℓ,m = (1 + rδy2 )Uℓ,m then j j+1 j j |Vℓ,m | = rUℓ,m−1 + (1 − 2r)Uℓ,m + rUℓ,m+1  ≤ r + |1 − 2r| + r K0 = K0

j+1 j+1 since 1 − 2r ≥ 0. A similar argument with Uℓ,m = (1 + rδy2 )Vℓ,m gives

j j+1 j j |Uℓ,m | = rVℓ−1,m + (1 − 2r)Vℓ,m + rVℓ+1,m  ≤ r + |1 − 2r| + r K0 = K0

and so the induction hypothesis holds with n = j + 1. Thus kU n kh,∞ ≤ K0 holds for n = 0, 1, . . . provided that r ≤ 12 . 11.30 The local truncation error of the first of formulae (11.70) is n  n+1/2 Rh ℓ,m = k −1 (1 − 21 rδy2 )uℓ,m − (1 + 12 rδx2 )unℓ,m

n+1/2

where the factor k −1 has been included to give the correct scaling and Vℓ,m by

n+1/2 uℓ,m .

This can be reorganised to give n  n+1/2 n+1/2 Rh ℓ,m = k −1 (uℓ,m − unℓ,m ) − 12 h−2 δy2 uℓ,m + δx2 unℓ,m n n = 12 ut ℓ,m + O(k) − 21 (uyy + uxx ) ℓ,m + O(h2 )

so that it is a consistent approximation of ut = uxx + uyy . 112

has been replaced

The argument for the second step of (11.70) is entirely similar. 11.31 (a) The given finite difference scheme may be written n+1 n Um = Um +

k h2 rm

 n n n n rm+1/2 (Um+1 − Um ) − rm−1/2 (Um − Um−1 ) ,

Since rm±1/2 = rm ± h/2, this may be written as

 k n n n n n rm (Um+1 − 2Um + Um−1 ) + 12 h(Um+1 − Um−1 ) h2 rm 1 −1 n n+1 n n h △r U m k −1 (Um − Um ) = h−2 δr2 Um + rm n+1 n Um = Um +

which is the same as the scheme obtained by replacing the spatial dervatives in urr + r1 ur with second order accurate differences and the time derivative by a forward difference k −1 △+t . (b) We may also write the scheme as n+1 = Um

k k k h h n )U n + (1 − 2 2 )Um + 2 (1 + )U n (1 − h2 2rm m−1 h h 2rm m+1

which is of positive type provided that k ≤ h2 /2 (the formula is valid only for m ≥ 1 so h/(2rm ) = 1/(2m) ≤ 21 ). It was shown in Exercise 8.7 that ur → 0 as r → 0 when u possesses circular symmetry. Hence ur /r is of the form 0/0 at the origin and, by l’Hˆopital’s rule: lim

r→0

∂r ur ur = lim = urr (0, t) r→0 ∂r r r

so that the PDE becomes ut = 2urr at r = 0. The standard FTCS approximation of this equation is  k k n U0n+1 = U0n + 2 2 δr2 U0n = U0n + 2 2 U−1 − 2U0n + U1n h h n = U1n and so but, because of symmetry, U−1 U0n+1 = U0n + 4

k k k (U n − U0n ) = (1 − 4 2 )U0n + 4 2 U1n h2 1 h h

which is of positive type for k ≤ h2 /4. 11.32 Using (10.33a) and the solution to Exercise 11.31 the finite difference scheme can be written as n+1 n n n n n Uℓ,m = α0 Uℓ,m + α1 Uℓ+1,m + α2 Uℓ,m+1 + α3 Uℓ−1,m + α4 Uℓ,m−1 ,

where h ), α1 = ρ(1 + 2rm

h α3 = ρ(1 − ), 2rm

α2 = α4 =

−2 σrm ,

4 X

αj = 1,

j=0

where ρ = k/h2 and σ = k/∆θ2 . The four coefficients αj , j = 1, 2, 3, 4 are nonnegative (since the finite difference scheme is valid only for rm ≥ h) so the scheme will be of positive type if α0 ≥ 0. Since 4 X −2 αj = 1 − 2ρ − 2σrm α0 = 1 − j=1

113

we require k≤

2 h2 ∆θ2 rm 2 ) 2(h2 + ∆θ2 rm

k≤

h2 ∆θ2 2(1 + ∆θ2 )

when rm = h. 11.33 We use the expressions derived for Lh Uℓ,m derived in the solution to Exercise 10.17—the notation, and numbering of nodes, is taken from that solution. Then  P1 : U1n+1 = U1n − k 65U1n − 16U2n − 16U4n − 112 ,  P2 : U2n+1 = U2n − k − 16U1n + 65U2n − 16U3n − 16U5n , 144 n  P3 : U3n+1 = U3n − k 545U3n − U , 5 2  P4 : U4n+1 = U4n − k − 16U1n + 65U4n − 16U5n − 16U6n − 160 , 128 n 288 n  U − U , P5 : U5n+1 = U5n − k 273U5n − 5 2 11 4  979 n 256 n U − U −9 . P6 : U6n+1 = U6n − k 3 6 11 4

These equations will be of positive type if the coefficient of Ujn is non-negative at the point Pj . The most restrictive condition occurs at P3 where it is required that k ≤ 1/545. The corresponding limit for a regular grid is k ≤ 1/64 (as, for instance, at Pj , j = 1, 2, 4). 11.34 We build on the solution to the previous exercise. Using a backward difference in time at nodes Pj , j = 3, 5, 6, leads to 144 n+1  = U3n , U 5 2 128 n+1 288 n+1  P5 : U5n+1 + k 273U5n+1 − = U5n , U U − 5 2 11 4 979 n+1 256 n+1  P6 : U6n+1 + k U U − = U6n + 9k. 3 6 11 4 P3 : U3n+1 + k 545U3n+1 −

If the computations are organised so that the numerical solution at points on the regular grid (Pj , j = 1, 2, 4) at time level tn+1 are computed first, then the solution at the remaining points is given by 144 n+1  1 , U3n + U 1 + 545k 5 2 1 128 n+1 288 n+1  = U5n + , U U − 1 + 273k 5 2 11 4  3 256 n+1 = U6n + U4 + 9k . 3 + 979k 11

P3 : U3n+1 = P5 : U5n+1 P6 : U6n+1

Thus the entire set of equations is of explicit type and of positive type for k ≤ h2 /4 = 1/64.

114

Exercises 12

Finite difference methods for hyperbolic PDEs

12.1 Differentiating the PDE ut + aux = 0 with respect to t gives: utt + autx = 0

utt + a∂x ut = 0

utt + a∂x (−aux ) = 0

and so utt = a2 uxx . Using this in (12.21) leads to n n Rh m = 21 ah(c − 1)uxx m + O(h2 ) + O(k 2 ),

as required.

12.2 n+1 n n = (1 + c)Um − cUm+1 is explicit and of positive type for −1 ≤ c < 0 and The FTFS scheme Um is therefore inverse monotone (non-negative initial data leads to non-negative solutions). It will therefore be stable in the maximum norm if we can find a suitable comparison function. Such a function is Φnm := 1 + tn . For ℓ2 -stability we first find the amplification factor: ξ = 1 + c − ceiκh so |ξ|2 − 1 = (1 + c − c cos θ)2 − 1 + c2 sin2 θ = 2c(1 + c)(1 − cos θ),

where θ = κh ∈ [−π, π]. Since (1 − cos θ) ≥ 0, we shall have |ξ|2 ≤ 1 if, and only if, c(1 + c) ≤ 0, that is, −1 ≤ c ≤ 1. 12.3 n The product of the coefficients of Um±1 have opposite sign so it is not possible for both to be non-negative. The amplification factor of the scheme is ξ = 1 − ic sin κh

|ξ|2 = 1 + c2 sin2 κh

thus ξ( 21 π) = 1 + c2 and it is not possible to find a constant C, independent of h and k such that |ξ| ≤ 1 + Ck (see Definition 11.20). 12.4 We use the identities △+x = △x + 21 δx2 , △−x = △x − 12 δx2 (see Exercise 6.2) so, when c > 0, n+1 n n n n n n Um = Um − c△x Um + 21 |c| δx2 Um = Um − c△x Um + 12 c δx2 Um  n n n n n = Um − c△−x Um = Um − c △x U m − 21 δx2 Um

which is the FTBS scheme (12.20). Similarly, when c < 0, |c| = −c and

n+1 n n n n n n Um = Um − c△x Um + 12 |c| δx2 Um = Um − c△x Um − 12 c δx2 Um  n n n n n = Um − c△+x Um = Um − c △x U m + 21 δx2 Um

which is the FTFS scheme (12.25). 12.5

(a) The Lax–Friedrichs scheme can be written as4 n n n n + Um + 21 (1 − c)Um−1 Um = 12 (1 + c)Um−1

so its stencil is the same as that of the Lax-Wendroff method (see Fig. 12.6 (left)). 4 Note

n by the average that this can also be deduced from (12.26) by replacing Um

115

1 n (Um−1 2

n + Um+1 ).

(b) The local truncation error is (note the scaling), using the results from Table 6.1, n  Rh m : = k −1 un+1 − unm + c△x unm − 12 δx2 unm m   = ut + 12 kutt + O(k 2 ) + a ux + 61 h2 uxxx + O(h2 ) − 12 (h2 /k)uxx + O(h4 ) = (ut + aux ) + 21 kutt − 12 (h2 /k)uxx + O(k 2 ) + O(h2 ) + O(h4 /k)

so that Rh = O(h) + O(k) if c is fixed—the method is consistent of first order. (c) We see from part (a) that the coefficients on the right hand side are non-negative for −1 ≤ c ≤ 1 in whch case the scheme is of positive type. (d) The amplification factor of the scheme is, writing θ = κh, ξ = 1 − ic sin θ − 2 sin2 21 θ = cos θ − ic sin θ. Therefore, |ξ|2 − 1 = cos2 θ − 1 + c2 sin2 θ = −(1 − c2 ) sin2 θ

and |ξ|2 ≤ 1 for all θ ∈ [−π, π] if, and only if, −1 ≤ c ≤ 1.

(e) The Lax-Friedrichs method has the same order of consistency as the FTBS and FTFS methods so holds no advantage in that department. One major advantage that is does hold is that it is stable for |c| ≤ 1 regardless of the sign of the advection speed so that it can be applied to systems of hyperbolic equations that have both right and left moving characteristics. (f) The end product of the calculation in part (b) may be reorganized to give n  h2 Rh m = ut + aux − 21 (1 + c2 )uxx + O(k 2 ) + O(h2 ) + O(h4 /k) k

where we have used utt = a2 uxx . Thus the local truncation error is consistent of order O(k 2 ) + O(h2 ) (provided that c = ak/h is fixed) with the PDE ut + aux = εuxx , where ε = 21 h2 (1 + c2 )/k. The scheme is consistent with an advection-diffusion equation which is of parabolic type. The numerical solutions will therefore become smoother (damped) as time proceeds. 12.6 With a Courant number c = 21 , the Lax-Wendroff method becomes n n n n Um = 81 (3Um−1 + 6Um − Um+1 )

and, from the given data, we calculate m 1 Um

1 1

2 1.125

3 .375

4 0

This is sketched in Fig. 18 as a solid line. The exact solution (shown shaded) moves a distance ch during each time step so that discontinuity occurs at x = 3h. 12.7 n We substitute Um = ξ n eiκmh into the Lax-Wendroff method scheme n+1 n Um = [1 − c△x + 12 c2 δx2 ] Um

116

1 Um b

b b

b b

x Figure 18: The solution at time level t = k to Exercise 12.6 by the Lax-Wendroff method (solid line). The initial function g(x) is shown as a dashed line and the shaded area represents the exact solution, which will have travelled a distance ch = 12 h in one time step. n+1 n and use the relationships Um = ξUm , (11.47b) and (11.47c) to give n n n n ξUm = Um − ic sin(κh)Um − 2c2 sin2 ( 12 κh)Um

ξ = 1 − ic sin κh − 2c2 sin2 12 κh.

Then, writing s = sin2 21 κh |ξ|2 − 1 = (1 − 2c2 s)2 − 1 + c2 sin2 (κh) = −4c2 s + 4c4 s2 + 4c2 s(1 − s) = −4c2 s2 (1 − c2 ).

Clearly |ξ|2 − 1 ≤ 0 if c2 ≤ 1.

12.8 Referring to the stencil shown in Fig. 12.2: FTBS: corresponds to µ = 1, ν = 0 and has, from (12.17), the coefficients α−1 =

0 Y ℓ+c = c, ℓ+1

α0 =

ℓ=−1 ℓ6=−1

0 Y ℓ+c =1−c ℓ

ℓ=−1 ℓ6=0

n+1 n n and Um = α−1 Um−1 + α0 Um gives the FTBS method (12.20). Lax-Wendroff method: corresponds to µ = 1, ν = 1 and so

α−1 =

1 Y ℓ+c = 21 c(1 + c), ℓ+1

α0 =

1 Y ℓ+c = 1 − c2 , ℓ

α1 =

ℓ=−1 ℓ6=1

ℓ=−1 ℓ6=0

ℓ=−1 ℓ6=−1

1 Y ℓ+c = 12 c(c − 1) ℓ−1

n+1 n n n and Um = α−1 Um−1 + α0 Um + α1 Um+1 gives the Lax-Wendroff method (12.28). 3rd order upwind: corresponds to µ = 2, ν = 1 and so

α−2 =

α0 =

1 Y ℓ+c = − 61 c(1 − c2 ), ℓ+2

α−1 =

ℓ=−2 ℓ6=−2

1 Y ℓ+c = 12 (1 − c2 )(2 − c), ℓ

α1 =

1 Y ℓ+c = 21 c(1 + c)(2 − c), ℓ+1

ℓ=−2 ℓ6=−1

1 Y ℓ+c = 61 c(c − 1)(2 − c) ℓ−1

ℓ=−2 ℓ6=1

ℓ=−2 ℓ6=0

n+1 n n n n and Um = α−2 Um−2 +α−1 Um−1 +α0 Um +α1 Um+1 gives the 3rd order upwind method (12.33b).

12.9 With µ = 2, ν = 0 the formula (12.17) leads to the coefficients α−2 =

0 Y ℓ+c = − 21 c(1−c), ℓ+2

ℓ=−2 ℓ6=−2

α−1 =

0 Y ℓ+c = c(2−c), ℓ+1

ℓ=−2 ℓ6=−1

117

α0 =

0 Y ℓ+c = 21 (1−c)(2−c) ℓ

ℓ=−2 ℓ6=0

n+1 n n n Um = α−2 Um−2 + α−1 Um−1 + α0 Um gives the Warming–Beam scheme (12.64).

12.10 n+1 From (12.15) we have Um = Φ(xm − ch) so s = −c and we have n n+1 + = Um Um

 p  X j −c−1 j

j=1

With p = 1 we find



j−c−1 j

j n . △−x Um

= −c at j = 1 giving n n n+1 − c△−x Um = Um Um

which is the FTBS scheme (12.20). With p = 2 an additional term is added to the right hand side of FTBS. Since − 12 c(1 − c) at j = 2 this leads to



j−c−1 j

=

2 n n+1 n n Um = Um − c△−x Um − 12 c(1 − c) △−x Um

n n n n n n − 2Um−1 + Um ) = Um − c(−Um−1 + Um ) − 21 c(1 − c)(Um−2 n n n + c(2 − c)Um−1 + 12 (1 − c)(2 − c)Um = − 21 c(1 − c)Um−2

which matches the Warming–Beam scheme (12.64). 12.11 (a) The local truncation error of Leith’s method is n  n Rh m : = k −1 (un+1 − unm + c△x Um − (r + 12 c2 )δx2 unm , m

  = ut + 21 kutt + O(k 2 ) + a ux + 16 h2 uxxx + O(h2 ) − (ε + 21 a2 k) uxx + O(h2 )   = ut + aux − εuxx + 12 k(utt − a2 uxx + O(k 2 ) + O(h2 )

and so is consistent of order O(k) + O(h2 ) with the advection-diffusion equation ut + aux = εuxx. (b) Differentiating the PDE with respect to t and x gives utt + auxt = εuxxt uxt + auxx = εuxxx so, subtracting a × second from the first equation leads to utt = a2 uxx + ε(uxxt − auxxx) and, therefore, n Rh = 1 kε(uxxt − auxxx) + O(k 2 ) + O(h2 ). m

2

From the point of view of convergence as h, k → 0 the scheme is clearly consistent of order O(k) + O(h2 ) with the advection-diffusion equation. However, the factor ε multiplying the leading term in the local truncation error means that in computations where ε ≪ 1, the method will, on coarser grids, effectively perform as a second order scheme.

(c) We observe that the amplification factor for Leith’s scheme ξ = 1 − 4(r + 21 c2 ) sin2 ( 21 κh) + i ρ sin (κh) 118

is the same as the FTCS scheme (11.53) with r replaced by (r + 21 c2 ) and ρ replaced by −c = −ak/h. The stability conditions (11.56) become 1 2 2c

≤ r + 12 c2 ≤ 21 ,

so the left inequality is always satisfied leaving 2r + c2 ≤ 1. Written in terms of h and k, this requires 2εk + a2 k 2 ≤ h2 .

If both sides are multiplied by a2 /ε2 we find 2

ka2 ka2 2 ah 2 + ≤ ε ε ε

2b h2 , k+b k2 ≤ b

where the stability region is independent of any parameters when expressed in terms of b k := ka2 /ε and b h := ha/ε (known as non-dimensional variables because their values are k plane the independent of the units used to measure k, h, a and ε). Thus, in the b h-b boundary is a branch of the hyperbola (1 + b h2 = 1 k)2 − b

k = −1, b whose centre is at b h = 0 and has asymptotes b k = −1 ± b h. The region bounded b by this curve and the h-axis is shown shaded in Fig. 19 but the scales shown on the axes are for the grid sizes h and k. Also shown for reference is the curve is k = εh2 /2 (dashed), which is the corresponding stability limit for the FTCS approximation of the heat equation ut = εuxx. We can see that this is the relevant limit when ε/a is large (when we should focus on the stability region near the origin where h and k are small). On the other hand, when ε/a is small—the advection dominated case—we enjoy a limit which is approximately that for the FTCS approximation of the advection equation ut + aux = 0, i.e., k ≤ ah. k 2ε/a2 ε/a2

ε/a

2ε/a

ℓ2 stability region Figure 19: The ℓ2 -stability region for Leith’s method. The dashed curve shows k = εh2 /2 and the dashed line the asymptote k = (ah − ε)/a2 . 3ε/a h

12.12 The LTE of the leapfrog scheme (12.31) is n  n−1 − um Rh m : = k −1 un+1 + c(unm+1 − unm−1 ) = k −1 △t unm − ah−1 △x unm m   = ut + 16 k 2 uttt + O(k 4 ) + a ux + 16 h2 uxxx + O(h4 )   = ut + aux + 61 k 2 uttt + 61 h2 uxxx + O(k 4 ) + O(h4 ) where we have used the expansion for △u from Table 6.1. n n n When Um = ξ n eiκmh we have △x Um = i sin(κh)Um (see (11.47c)) and 1 n n+1 n−1 n − Um ) = 12 (ξ − )Um △t U m = 12 (Um ξ 119

and so the amplification factor ξ satisfies ξ−

1 + 2ic sin(κh) = 0 ξ

ξ 2 + 2icξ sin(κh) − 1 = 0.

Using the familiar “quadratic formula”, the roots are (writing θ = κh) p ξ = −ic sin θ ± 1 − c2 sin2 θ.

Case (a) c2 > 1. In this case there are values of θ such that 1 − c2 sin2 θ < 0 so that both roots are imaginary. Their product is −1 so one must have a modulus exceeding 1 and the method is unstable in this case. Case(b) c2 ≤ 1. Now 1 − c2 sin2 θ ≥ 0 for all θ and p ℜξ = ± 1 − c2 sin2 θ, so that

ℑξ = −c sin θ

|ξ|2 = (ℜξ)2 + (ℑξ)2 = (1 − c2 sin2 θ) + (c sin θ)2 = 1

and the method is stable 12.13

(a) We evaluate the coefficients supplied in (12.33a) at the Courant numbers c = −1, 0, 1, 2 and display the results in the following table.

c  | α−2 = −c(1−c²)/6 | α−1 = c(1+c)(2−c)/2 | α0 = (1−c²)(2−c)/2 | α1 = c(c−1)(2−c)/6
2  | 1 | 0 | 0 | 0
1  | 0 | 1 | 0 | 0
0  | 0 | 0 | 1 | 0
−1 | 0 | 0 | 0 | 1

and so U_m^{n+1} = U_{m−c}^n for c = −1, 0, 1, 2.
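The tabulated values can be checked mechanically. A small sketch using exact rational arithmetic (the coefficient formulas are those quoted from (12.33a)):

```python
from fractions import Fraction as F

def coeffs(c):
    # alpha_{-2}, alpha_{-1}, alpha_0, alpha_1 from (12.33a)
    c = F(c)
    return (-F(1, 6) * c * (1 - c**2),
            F(1, 2) * c * (1 + c) * (2 - c),
            F(1, 2) * (1 - c**2) * (2 - c),
            F(1, 6) * c * (c - 1) * (2 - c))

# At the integer Courant numbers the scheme copies a single grid value,
# reproducing the rows of the table above.
assert coeffs(2)  == (1, 0, 0, 0)
assert coeffs(1)  == (0, 1, 0, 0)
assert coeffs(0)  == (0, 0, 1, 0)
assert coeffs(-1) == (0, 0, 0, 1)
```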

(b) Using (12.33a), the LTE is given by

Rh|_m^n := k⁻¹[u_m^{n+1} − u_m^n + c△x u_m^n − ½c²δx²u_m^n − (1/6)c(1 − c²)△−xδx²u_m^n].

The expansion of all terms except the last on the right-hand side is available in Table 6.1. For the final term we proceed as follows (many other routes to the same destination are possible): since △−x v_m^n = h vx − ½h² vxx + O(h³) (Table 6.1) then, with v = δx²u_m^n = h²uxx + O(h⁴),

△−xδx²u_m^n = h∂x(h²uxx + O(h⁴)) − ½h²∂x²(h²uxx + O(h⁴)) = h³uxxx − ½h⁴uxxxx + O(h⁵).

Now

Rh|_m^n = [ut + ½k utt + (1/6)k²uttt + (1/24)k³utttt + O(k⁴)]
+ a[ux + (1/6)h²uxxx + O(h⁴)]
− ½ka²[uxx + (1/12)h²uxxxx + O(h⁴)]
− (1/6)a(1 − c²)[h²uxxx − ½h³uxxxx + O(h⁴)]

so that

Rh|_m^n = [ut + aux] + ½k[utt − a²uxx] + (1/6)k²[uttt + a³uxxx] + (1/24)k³[utttt − a⁴uxxxx]
+ (1/24)ah³(1 + c)(1 − c)(2 − c)uxxxx + O(h⁴).

But ∂t^j u = (−a∂x)^j u, so that this reduces to

Rh|_m^n = (1/24)ah³(1 + c)(1 − c)(2 − c)uxxxx + O(h⁴).

Notice how the leading coefficient vanishes at c = −1, 1, 2 (as do all subsequent coefficients, in deference to part (a)).

(c) The CFL condition follows immediately from Example 12.3.

(d) We substitute U_m^n = ξ^n e^{iκmh} into the finite difference scheme and use the relationships U_m^{n+1} = ξU_m^n and △−x U_m^n = (1 − e^{−iκh})U_m^n (see (11.47b) and (11.47c)) to give

ξ = 1 − ic sin θ − 2c² sin²(½θ) − (2/3)c(1 − c²)(1 − e^{−iθ}) sin²(½θ),

where θ = κh. We now write s = sin²(½θ) and then 1 − e^{−iθ} = (1 − cos θ) + i sin θ = 2s + i sin θ so that, collecting real and imaginary parts,

ξ = 1 − 2c²s − (4/3)c(1 − c²)s² − i[c + (2/3)c(1 − c²)s] sin θ.

Therefore, since sin²θ = 4s(1 − s), and defining A = (2/3)c(1 − c²)s in order to lighten the complexity,

ξ = 1 − 2c²s − 2As − i(A + c) sin θ,
|ξ|² − 1 = [1 − 2s(A + c²)]² − 1 + 4(A + c)²s(1 − s)
= −4s(A + c²) + 4s²(A + c²)² + 4(A + c)²s − 4(A + c)²s²
= −4s[(A + c²) − (A + c)²] + 4s²[(A + c²)² − (A + c)²]
= −4sA(1 − 2c − A) + 4s²(c² − c)(c² + c + 2A)
= −4sA(1 − 2c − A) − 4s²c²(1 − c²) − 8s²Ac(1 − c)
= −(4/3)c(1 − c²)(2 − c)s²[1 + (4/3)sc(1 − c)].

(e) For ℓ²-stability it is necessary to have |ξ|² − 1 ≤ 0. Since F(s) := (|ξ|² − 1)/s² is linear in s, in order for it to be non-positive for 0 ≤ s ≤ 1 it must be non-positive at s = 0 and s = 1. Now

F(0) = −(4/3)c(1 − c²)(2 − c) ≤ 0 ⇒ c ∈ (−∞, −1] ∪ [0, 1] ∪ [2, ∞),
F(1) = −(4/9)c(1 − c²)(2 − c)(1 + 2c)(3 − 2c) ≤ 0 ⇒ c ∈ [−1, −½] ∪ [0, 1] ∪ [3/2, 2],

and the only interval held in common is c ∈ [0, 1].

12.14 It is more efficient to base the Taylor expansions about the point x = mh, t = nk + k:

u_m^n = u_m^{n+1} − k ut|_m^{n+1} + ½k² utt|_m^{n+1} + O(k³),

so that u_m^{n+1} = u_m^n + k ut|_m^{n+1} − ½k² utt|_m^{n+1} + O(k³). Then, using the expansion △−x u_m^{n+1} = h ux|_m^{n+1} − ½h² uxx|_m^{n+1} + O(h³) (see Table 6.1), the local truncation error is found to be

Rh|_m^n := k⁻¹[u_m^{n+1} − u_m^n + c△−x u_m^{n+1}]
= [ut − ½k utt + O(k²)] + a[ux − ½h uxx + O(h²)],

but utt = −auxt and auxx = −uxt, so that

Rh|_m^n = (ut + aux)|_m^{n+1} + ½h(1 + c)uxt|_m^{n+1} + O(h²) + O(k²),

as required.

12.15 The mth component of Cvj is

(Cvj)_m = −c(vj)_{m−1} + (1 + c)(vj)_m = −c e^{2πi(m−1)j/M} + (1 + c)e^{2πimj/M} = (1 + c − c e^{−2πij/M})(vj)_m.

This holds for all m = 1, 2, ..., M (at m = 1 we need to recognise that (vj)_0 = (vj)_M because of the periodic nature of its components). Thus Cvj = λj vj, where λj = 1 + c − c e^{−2πij/M}. Comparing this with the amplification factor ξ(κh) given by (12.36), it is seen that λj = 1/ξ(2πjh), corresponding to the wavenumber κ = 2πj. This result generalises readily to all constant-coefficient finite difference approximations of parabolic and hyperbolic PDEs with periodic BCs because all circulant matrices of a given dimension share the same set of eigenvectors.

12.16 Replacing △−x by △+x in (12.34a) leads to the BTFS scheme

U_m^{n+1} = U_m^n − c△+x U_m^{n+1}, that is, (1 − c)U_m^{n+1} + c U_{m+1}^{n+1} = U_m^n.

The local truncation error is given by

R_m^n = k⁻¹[u_m^{n+1} − u_m^n + c△+x u_m^{n+1}]
= (ut + aux)|_m^{n+1} − ½(k utt − ah uxx)|_m^{n+1} + O(k²) + O(h²)
= ½ah(1 − c)uxx|_m^{n+1} + O(h²) + O(k²).

There are no CFL restrictions when this scheme is used with periodic BCs, for the same reasons as for the BTBS scheme. With non-periodic BCs the BTFS scheme can be operated in two explicit modes. The first runs from left to right on each time level, that is,

U_{m+1}^{n+1} = (U_m^n − (1 − c)U_m^{n+1})/c,  m = ..., 1, 2, 3, ... .

This is illustrated in Fig. 20 (left), where the target point (U_{m+1}^{n+1}) is denoted by ◦. The CFL condition requires characteristics to lie in the shaded region, that is, c ≥ 1. The second mode applies the scheme from right to left on each time level, that is,

U_m^{n+1} = (U_m^n − c U_{m+1}^{n+1})/(1 − c),  m = ..., 3, 2, 1, ...

(see Fig. 20, right). The characteristics will lie in the shaded region as required by the CFL condition if c ≤ 0. The amplification factor for the scheme is given by ξ(θ) = 1/(1 − c + ce^{iθ}), so that (using half-angle formulae for cos θ and sin θ)

|ξ|² = 1/[(1 − c + c cos θ)² + c² sin² θ] = 1/[1 − 4c(1 − c) sin²(½θ)].

Thus |ξ|² ≤ 1 if the denominator satisfies 1 − 4c(1 − c) sin²(½θ) ≥ 1 for all θ. This will be the case if either c ≤ 0 or c ≥ 1. These conditions coincide with the CFL conditions.

Figure 20: The two modes of operation of the BTFS scheme with target points (◦). Both are stable provided that the characteristics lie in the shaded regions.

(There is no CFL limit for the BTFS scheme with periodic BCs, for the same reason as for the BTBS scheme; its ℓ²-stability conditions can likewise be deduced from those for the BTBS scheme.)

12.17 From (12.22) and Exercise 12.16 we find, respectively,

ξF(θ) = 1 − c + ce^{−iθ} and ξB(θ) = 1/(1 − c + ce^{iθ}) = 1/ξF*(θ)

(where ξF∗ (θ) is the complex conjugate of ξF (θ)) so that ξF∗ (θ)ξB (θ) = 1. Since |ξF (θ)| |ξB (θ)| = 1 it follows that if |ξF (θ)| < 1 (the FTBS scheme is stable) for some scaled wavenumber θ, then |ξB (θ)| > 1 (the BTFS scheme is unstable) at that wavenumber, and vice versa. Their stability regions are therefore complements of each other. Since the FTBS scheme is stable for 0 ≤ c ≤ 1 we deduce that the BTFS scheme is stable for c ≥ 1 and c ≤ 0.
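This reciprocal relationship is easy to see numerically. A small sketch (the formulas for ξF and ξB are those quoted above; the sample values of c and θ are arbitrary):

```python
import cmath

def xi_F(c, theta):
    # FTBS amplification factor (12.22)
    return 1 - c + c * cmath.exp(-1j * theta)

def xi_B(c, theta):
    # BTFS amplification factor, the reciprocal of conj(xi_F)
    return 1 / (1 - c + c * cmath.exp(1j * theta))

# |xi_F| * |xi_B| = 1 at every wavenumber, so the stability regions of the
# two schemes are complements of each other.
for c in (-0.5, 0.3, 1.5):
    for theta in (0.2, 1.1, 2.9):
        assert abs(abs(xi_F(c, theta)) * abs(xi_B(c, theta)) - 1) < 1e-12
```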

12.18 The box scheme (12.38) involves four terms and each has to be expanded in a Taylor series as a function of two variables up to, and including, third derivatives. Since each expansion contains 10 terms, a total of 40 terms is involved. The high order of consistency is only possible because of the cancellation that occurs when these expansions are combined. This is a consequence of the high degree of symmetry/anti-symmetry in the formula, and this has to be exploited in order to avoid unnecessary algebraic complexity. This suggests that all expansions should be based on the "centre" of the stencil, which is X = (m − ½)h, T = (n + ½)k. Then, using ut = −aux we have ∂t^j u = (−a∂x)^j u and we find

u_m^{n+1} = u + ½(k∂t + h∂x)u + (1/8)(k∂t + h∂x)²u + (1/48)(k∂t + h∂x)³u + O(h⁴)
= u + ½(−ak + h)∂x u + (1/8)(−ak + h)²∂x²u + (1/48)(−ak + h)³∂x³u + O(h⁴)
= u + ½h(1 − c)ux + (1/8)h²(1 − c)²uxx + (1/48)h³(1 − c)³uxxx + O(h⁴),

where we have assumed that c = ak/h is fixed, so that k = O(h) and the remainder terms can then be expressed as O(h⁴). Treating the other terms in a similar fashion we find

u_m^{n+1} = u + ½h(1 − c)ux + (1/8)h²(1 − c)²uxx + (1/48)h³(1 − c)³uxxx + O(h⁴),
u_{m−1}^n = u − ½h(1 − c)ux + (1/8)h²(1 − c)²uxx − (1/48)h³(1 − c)³uxxx + O(h⁴),
u_{m−1}^{n+1} = u − ½h(1 + c)ux + (1/8)h²(1 + c)²uxx − (1/48)h³(1 + c)³uxxx + O(h⁴),
u_m^n = u + ½h(1 + c)ux + (1/8)h²(1 + c)²uxx + (1/48)h³(1 + c)³uxxx + O(h⁴).

Subtracting the second from the first and the fourth from the third then gives

u_m^{n+1} − u_{m−1}^n = (1 − c)h ux + (1/24)(1 − c)³h³uxxx + O(h⁵),
u_{m−1}^{n+1} − u_m^n = −(1 + c)h ux − (1/24)(1 + c)³h³uxxx + O(h⁵).

The LTE may be expressed as (see (12.38))

Rh(X, T) := k⁻¹[(1 + c)(u_m^{n+1} − u_{m−1}^n) + (1 − c)(u_{m−1}^{n+1} − u_m^n)]
= k⁻¹{(1 + c)[(1 − c)h ux + (1/24)(1 − c)³h³uxxx + O(h⁵)] − (1 − c)[(1 + c)h ux + (1/24)(1 + c)³h³uxxx + O(h⁵)]}
= (1/24)k⁻¹h³(1 − c²)[(1 − c)² − (1 + c)²]uxxx + O(h⁴) = −(1/6)ah²(1 − c²)uxxx + O(h⁴),

where uxxx is evaluated at (X, T). This point could be changed to any of the four points in the stencil without changing the leading term (but the O(h⁴) would drop to O(h³)).

12.19 With η = (1 − c) + (1 + c)e^{−iκh}, then

η* = (1 − c) + (1 + c)e^{iκh} = e^{iκh}[(1 − c)e^{−iκh} + (1 + c)],

so that (12.39) becomes ξ = e^{−iκh}η/η*. It follows that |ξ| = 1 (for all c) since |η| = |η*| and |e^{−iκh}| = 1.

12.20 The discriminant of the quadratic polynomial (12.44) is −16c²s²(1 − c²s²), where s = sin(½κh).

(i) When c² ≤ 1 the discriminant is non-positive and the roots are

ξ = (1 − 2c²s²) ± 2isc√(1 − c²s²),

which form a complex conjugate pair ξ1, ξ1* say. The product of the roots, ξ1ξ1* = |ξ1|², equals 1 (the constant term in (12.44)), but this can also be verified from the formula given above for the roots.
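A small numerical sketch of the two cases. The roots quoted above have sum 2(1 − 2c²s²) and product 1, so the monic quadratic they satisfy is ξ² − 2(1 − 2c²s²)ξ + 1 = 0 (its discriminant is indeed −16c²s²(1 − c²s²)); the quadratic here is reconstructed from the roots, not copied from (12.44):

```python
import cmath

def roots_1244(c, s):
    # roots of xi^2 - 2(1 - 2 c^2 s^2) xi + 1 = 0
    m = 1 - 2 * c * c * s * s
    d = cmath.sqrt(m * m - 1)
    return (m + d, m - d)

# c^2 <= 1: both roots lie on the unit circle
for s in (0.3, 0.8, 1.0):
    x1, x2 = roots_1244(0.9, s)
    assert abs(abs(x1) - 1) < 1e-12 and abs(abs(x2) - 1) < 1e-12

# c^2 s^2 > 1: the roots are real with product 1, so one lies outside
x1, x2 = roots_1244(1.5, 1.0)
assert max(abs(x1), abs(x2)) > 1
assert abs(x1 * x2 - 1) < 1e-12
```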

(ii) When c² ≥ 1 there will be wavenumbers κ for which c²s² > 1 and the discriminant is positive. The roots, ξ1, ξ2 say, are real and distinct in such situations and, their product being ξ1ξ2 = 1, one root must have a magnitude greater than 1, so the scheme is unstable.

12.21 The amplification factor of the method (12.5) is

ξ = ∑_{j=−µ}^{ν} αj e^{ijκh}

and its LTE is

Rh|_m^n := k⁻¹[u_m^{n+1} − ∑_{j=−µ}^{ν} αj u_{m+j}^n].

The solution of the advection equation ut + aux = 0 with initial condition u(x, 0) = exp(iκx) is u(x, t) = exp(iκ(x − at)), so the LTE becomes

Rh|_m^n = k⁻¹[e^{iκ(x_m − a(n+1)k)} − ∑_{j=−µ}^{ν} αj e^{iκ(x_{m+j} − ank)}]
= k⁻¹[e^{−iaκk} − ∑_{j=−µ}^{ν} αj e^{ijκh}] e^{iκ(x_m − ank)} = k⁻¹[e^{−icθ} − ξ(θ)]u_m^n,

where θ = κh. This shows that equation (12.50) holds more generally than just for the Lax-Wendroff method. Now h^{p+1}∂x^{p+1}u = (iκh)^{p+1}u = (iθ)^{p+1}u, with a similar expression for h^{p+2}∂x^{p+2}u, and the given error expansion gives

k Rh|_m^n = C_{p+1}(iθ)^{p+1}u + C_{p+2}(iθ)^{p+2}u + O(h^{p+3})

and, using (12.50), we obtain

ξ(θ) = e^{−icθ} − C_{p+1}(iθ)^{p+1} − C_{p+2}(iθ)^{p+2} + O(h^{p+3}),

showing that the amplification factor is an order (p + 1) approximation of e^{−icθ}.

(a) p is odd: then (iθ)^{p+1} is real and (iθ)^{p+2} is imaginary. In fact, supposing that p = 2q − 1 (where q is a positive integer), (iθ)^{p+1} = i^{2q}θ^{p+1} = (−1)^q θ^{p+1} = (−1)^{(p+1)/2}θ^{p+1} and (iθ)^{p+2} = (−1)^{(p+1)/2}iθ^{p+2}. Hence

ξ(θ) = [cos cθ − (−1)^{(p+1)/2}C_{p+1}θ^{p+1}] − i[sin cθ + (−1)^{(p+1)/2}C_{p+2}θ^{p+2}] + O(θ^{p+3})

|ξ(θ)|² = [cos cθ − (−1)^{(p+1)/2}C_{p+1}θ^{p+1}]² + [sin cθ + (−1)^{(p+1)/2}C_{p+2}θ^{p+2}]² + O(θ^{p+3})
= cos²cθ + sin²cθ − 2(−1)^{(p+1)/2}C_{p+1}θ^{p+1} cos cθ + O(θ^{p+3})
= 1 − 2(−1)^{(p+1)/2}C_{p+1}θ^{p+1} + O(θ^{p+3}),

where we have used θ^{p+1} cos cθ = θ^{p+1} + O(θ^{p+3}) and θ^{p+2} sin cθ = O(θ^{p+3}).

(b) p is even: then (iθ)^{p+1} is imaginary and (iθ)^{p+2} is real. In fact, supposing that p = 2q (where q is a positive integer), (iθ)^{p+1} = i(i^{2q})θ^{p+1} = (−1)^q iθ^{p+1} = (−1)^{p/2}iθ^{p+1} and (iθ)^{p+2} = (−1)^{p/2+1}θ^{p+2}. Hence

ξ(θ) = [cos cθ − (−1)^{p/2+1}C_{p+2}θ^{p+2}] − i[sin cθ + (−1)^{p/2}C_{p+1}θ^{p+1}] + O(θ^{p+3}),

leading to

|ξ(θ)|² = [cos cθ − (−1)^{p/2+1}C_{p+2}θ^{p+2}]² + [sin cθ + (−1)^{p/2}C_{p+1}θ^{p+1}]² + O(θ^{p+3})
= cos²cθ + sin²cθ − 2(−1)^{p/2+1}C_{p+2}θ^{p+2} cos cθ + 2(−1)^{p/2}C_{p+1}θ^{p+1} sin cθ + O(θ^{p+3})
= 1 − 2(−1)^{p/2+1}C_{p+2}θ^{p+2} + 2(−1)^{p/2}cC_{p+1}θ^{p+2} + O(θ^{p+3})
= 1 − 2(−1)^{p/2+1}(cC_{p+1} + C_{p+2})θ^{p+2} + O(θ^{p+3}),

where we have used θ^{p+2} cos cθ = θ^{p+2} + O(θ^{p+4}) and θ^{p+1} sin cθ = cθ^{p+2} + O(θ^{p+4}).

The bottom line is that for a method whose order of consistency p is odd, the order of dissipation is p + 1 but, if the order of consistency p is even, the order of dissipation is p + 2.

12.22 We do not give a solution since this would depend on the specific computer algebra package used.

12.23 When c > 0, characteristics travel from left to right (see Fig. 21, left) and the domain of dependence of the anchor point (shown as ◦) is shaded. The method must be used as

U_m^{n+1} = (U_m^n + c U_{m−1}^{n+1})/(1 + c),

m = 1, 2, . . . , M

so the BC must be placed at x = 0. The corresponding situation when c ≤ −1 is shown in Fig. 21 (right) and the BC needs to be at x = 1.

Figure 21: The two modes of operation of the BTBS scheme in Exercise 12.23 with target points (◦). Both modes are stable provided that the characteristics lie in the shaded regions.

12.24 The situation is depicted in Fig. 22. The artificial BC implies

δx²U_M^n = 0 ⇒ U_M^n − U_{M−1}^n = U_{M+1}^n − U_M^n ⇒ (U_M^n − U_{M−1}^n)/h = (U_{M+1}^n − U_M^n)/h,

so that the gradient of the line joining (x_{M−1}, U_{M−1}^n) to (x_M, U_M^n) is equal to that of the line joining (x_M, U_M^n) to (x_{M+1}, U_{M+1}^n). The points (x_m, U_m^n) (m = M − 1, M, M + 1) are therefore collinear.

Figure 22: The application of the Lax-Wendroff method at a point (Mh, nk) on the boundary x = 1 involves a grid point (marked ×) outside the domain.

With δx²U_M^n = 0 and the identity △−x = △x − ½δx², the Lax-Wendroff method at x = x_M becomes

U_M^{n+1} = [1 − c△x + ½c²δx²]U_M^n = [1 − c△−x]U_M^n,

which is the FTBS approximation (12.20) of the advection equation.
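The collapse to FTBS at the boundary can be checked directly: with the ghost value chosen so that the last three points are collinear, the averaged difference △x coincides with the backward difference △−x. A small sketch (grid values are arbitrary):

```python
def lw_step(U, m, c):
    # Lax-Wendroff: U - c*central_avg + (c^2/2)*second_difference at node m
    avg = 0.5 * (U[m + 1] - U[m - 1])
    d2 = U[m + 1] - 2 * U[m] + U[m - 1]
    return U[m] - c * avg + 0.5 * c * c * d2

def ftbs_step(U, m, c):
    return U[m] - c * (U[m] - U[m - 1])

U = [0.1, 0.7, 0.4, None]
U[3] = 2 * U[2] - U[1]          # ghost value from collinearity (delta_x^2 U = 0)
for c in (0.2, 0.5, 0.8):
    assert abs(lw_step(U, 2, c) - ftbs_step(U, 2, c)) < 1e-14
```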

12.25 The Lax-Wendroff method is applied at the points (x_m, nk), 0 < m < M, n ≥ 0. At these points the LTE R_m^n = 0 because the exact solution is a quadratic polynomial and the expression for the LTE involves the third derivatives of u (see (12.30)). So

R_m^n = k⁻¹[u_m^{n+1} − u_m^n + c△x u_m^n − ½c²δx²u_m^n] = 0,
U_m^{n+1} = [1 − c△x + ½c²δx²]U_m^n,

and so the global error E = u − U is itself a solution of the Lax-Wendroff scheme:

E_m^{n+1} = [1 − c△x + ½c²δx²]E_m^n = ½c(1 + c)E_{m−1}^n + (1 − c²)E_m^n + ½c(c − 1)E_{m+1}^n,

with starting condition E_m^0 = 0 and a homogeneous BC E_0^n = 0 at x = 0. Thus, by a domain of dependence argument, E_m^n = 0 for x_m + t_n ≤ 1. The LTE of the FTBS scheme (see (12.21)) at x = 1 is, for u(x, t) = (x − t)² and a = 1,

R_M^n = k⁻¹[u_M^{n+1} − cu_{M−1}^n − (1 − c)u_M^n] = −h(1 − c),

and so

E_M^{n+1} = cE_{M−1}^n + (1 − c)E_M^n − hk(1 − c)

for n = 0, 1, 2, ... . Assuming now that E_m^n → A_m as n → ∞, the Lax-Wendroff equations reduce to

(1 + c)A_{m−1} − 2cA_m + (c − 1)A_{m+1} = 0.

Setting

C = ½(1 − c)²h²ρ^{1−M},  ρ = −(1 + c)/(1 − c),

the proposed solution is A_m = Cρ^m. Substituting, we find

(1 + c)A_{m−1} − 2cA_m + (c − 1)A_{m+1} = Cρ^m[(1 + c)/ρ − 2c + (c − 1)ρ] = Cρ^m[(c − 1) − 2c + (c + 1)] = 0,

so these equations are satisfied. When E_m^n → A_m as n → ∞, the FTBS equation reduces to A_M − A_{M−1} = −h²(1 − c). Substituting the putative solution into the left-hand side of this, we find

A_M − A_{M−1} = Cρ^{M−1}(ρ − 1) = (2/(c − 1))Cρ^{M−1} = −h²(1 − c),

 2 A= 1

 1 , 2

(4)

where u = [u, v]T . The matrix A has eigenvalues λ = 1, 3 which are the characteristic speeds. 127

n+1 n n The FTCS method for systems is Um = ρAUm−1 + (I − ρA)Um , where ρ = k/h. This is stable for λρ ≤ 1 and so the largest mesh ratio for which the solution is stable is k ≤ h/3. At x = 5h = 0.5 we have m = 5 and U40 = [.4, .16]T , U50 = [.5, .25]T so, with ρ = 41 , the solution at (5h, k) is         .4 2 −1 .5 .4275 1 0 0 1 1 1 2 1 1 U5 = 4 AU4 + (I − 4 A)U5 = 4 +4 = . 1 2 .16 −1 2 .25 .1800

12.27 The FTCS method is stable only for positive eigenvalues and the FTBS method is stable only for negative eigenvalues but the matix A has eigenvalues ±a of both signs. (a) Applying the FTBS method to (12.66a) and the FTBS method to (12.66b) leads to n n n n n n+1 − Vmn ) ) + (1 − c)(Um − Vm−1 − Vmn ) = c(Um−1 − Vmn ) − c△−x (Um − Vmn+1 ) = (Um (Um n+1 n n n n n (Um + Vmn+1 ) = (Um + Vmn ) + c△+x (Um + Vmn ) = (1 − c)(Um + Vmn ) + c(Um+1 + Vm+1 )

so, by adding and subtracting, we find n+1 n n n n n Um = (1 − c)Um + 21 c(Um−1 − Vm−1 ) + 21 c(Um+1 + Vm+1 ) n + c△x Vmn = (1 + 12 cδx2 )Um

n n n n Vmn+1 = (1 − c)Vmn − c(Um−1 − Vm−1 ) + c(Um+1 + Vm+1 ) n = (1 + 12 cδx2 )Vmn + c△x Um

n+1 Um

= (1 +

n 1 2 2 cδx )Um

n (k/h)A△x Um

=

" (1 + 21 cδx2 ) c△x

c△x (1 + 12 cδx2 )

#

n Um .



   0 −a 1 1 (b) With A = the matrix of eigenvectors is V = and Λ = diag(−a, a). −a 0 1 −1 Therefore |Λ| = |a|I (where I is the 2 × 2 identity matrix) and so |A| = V |Λ|V −1 = |a|V V −1 = |a|I. The method now reads n+1 n 1 n Um = Um − (k/h)A△x Um + 21 |c| δx2 Um

which is identical to the method in part (a) provided a > 0. (c) Following the steps in Example 12.18 the Lax–Friedrichs scheme is found to be n n+1 , Um = [I − (k/h)A△x + 21 δx2 ]Um

which is identical to the method in part (b) when |c| = 1.

12.28 At the boundary x = 0 the outward characteristic equation is (12.66b): (u + v)t − a(u + v)x = 0. Approximating this with the FTFS scheme (12.25) at m = 0 gives

U_0^{n+1} + V_0^{n+1} = (1 − c)(U_0^n + V_0^n) + c(U_1^n + V_1^n),  c = ak/h,

where we have accounted for the fact that the advection speed is −a. But the BC specifies u(0, t) = 0, so this reduces to

V_0^{n+1} = (1 − c)V_0^n + c(U_1^n + V_1^n).
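The reduction is just the BC applied to the combined update; a minimal sketch with made-up grid values (the numbers here are hypothetical, only the formulas come from the solution above):

```python
c = 0.4
U0, V0, U1, V1 = 0.0, 0.7, 0.3, 0.2     # U0 = 0 enforced by the BC u(0, t) = 0

# FTFS update of the outgoing characteristic variable w = u + v at m = 0
w_new = (1 - c) * (U0 + V0) + c * (U1 + V1)

# reduced formula for V0 alone (U0^{n+1} = 0 again by the BC)
V0_new = (1 - c) * V0 + c * (U1 + V1)

assert abs(w_new - V0_new) < 1e-14
```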

12.29 Consider the critical factor in the final term on the right hand side of (12.77)  n n n △−x ϕnm △+x Um = ϕnm △+x Um − ϕnm−1 △+x Um−1

n n but △+x Um−1 = △−x Um so that this becomes  n n n △−x ϕnm △+x Um = ϕnm △+x Um − ϕnm−1 △−x Um n  ϕ n , = ( nm − ϕnm−1 △−x Um ρm n n where ρnm = △−x Um /△+x Um , and the result follows.

12.30 When ϕ(ρ) = 1 it is immediate that (12.77) reduces to the Lax-Wendroff method (12.28). When ϕ(ρ) = ρ n n n = △−x Um = ρnm △+x Um ϕnm △+x Um

and so (12.77) becomes

n n n+1 n − 12 c(1 − c)(△−x )2 Um − c△−x Um = Um Um ,

which, by Exercise 12.10, is the Warming-Beam method. 12.31 The minmod limiter is a special case of the Chakravarthy–Osher limiter.

ϕ(ρ)

Superbee

2 Chakravarthy–Osher

ψ 1

0 0

1

2

3

ρ

Figure 23: The flux limiters for Exercise 12.31. Chakravarthy–Osher: ρ≤0: 0ψ:

max{0, min(ρ, ψ)} = max{0, ρ} = 0 max{0, min(ρ, ψ)} = max{0, ρ} = ρ max{0, min(ρ, ψ)} = max{0, ψ} = ψ.

Superbee: ρ≤0:

0 0—the combined method is unconditionally stable. Pure advection (r = 0) then |ξ|2 = 1 for all s ∈ [0, 1] so that the combined method is nondissipative. When c = 0 the methods can be written in terms of finite difference operators as  n+1 n (1 + r△−x )Um = (1 + r△+x )Um n+2 n ⇒ (1 − r△+x )(1 + r△−x )Um = (1 − r△−x )(1 + r△+x )Um . n+2 n+1 (1 − r△+x )Um = (1 − r△−x )Um The LTE of this scheme is (we divide by 2k because it spans two time levels) n  − (1 − r△−x )(1 + r△+x )unm Rh m = (2k)−1 (1 − r△+x )(1 + r△−x )un+2 m

and using the identities △+x − △−x = △+x △−x = δx2 , this becomes n  − (1 + rδx2 − r2 δx2 )unm Rh m = (2k)−1 (1 − rδx2 − r2 δx2 )un+2 m  = (2k)−1 (1 − rδx2 )un+2 − (1 + rδx2 )unm − (2k)−1 r2 δx2 (un+2 − unm ). m m

The first term on the right (which we denote by R1 ) is the LTE of the Crank-Nicolson method with a time step 2k, so that R1 = O(h2 ) + O(k 2 ). The second term is R2 = −(2k)−1 r2 δx2 (un+2 − unm ) = −ε2 m = −ε2

 k 2 uxxt + O(h2 ) + O(k 2 ) h 134

 k 2 −2 2 h δx k−1 △un+1 m h

so that R2 does not tend to zero with h and k unless k/h → 0. The method will not therefore converge unless k = O(h). n This is a perilous situation because, in contrast with instability when |Um | → ∞ (at least for some m), there is no signal from the numerical results that they are grossly inaccurate. For instance, if the grid is refined with k = αh, the numerical method is consistent with the modified PDE ut = εuxx + ε2 α2 uxxt . In the pure advection case the finite difference equations become  n+1 n = (1 − 21 c△+x )Um (1 + 12 c△−x )Um n+2 n ⇒ (1+ 12 c△+x )(1+ 12 c△−x )Um = (1− 12 c△−x )(1− 12 c△+x )Um . n+2 n+1 = (1 − 21 c△−x )Um (1 + 12 c△+x )Um and, using the identities △+x + △−x = △x , △+x △−x = δx2 , this becomes n n+2 n n+2 ) + 12 c△x (Um − Um )=0 + Um (1 + 14 δx2 )(Um

n+1 n+1 n+1 (1 + 41 δx2 )△t Um + c△x Um + 12 c△x δt2 Um =0

which is consistent of order two with the advection equation ut + aux = 0 when k = O(h). See “Modified AGE methods for the convection-diffusion equation” by Lu Jinfu, Zhang Baolin and Zuo Fengli, (Commun. Applied. Numer. Methods, vol. 14, 1998, pp.65–76). (Note: AGE is an acronym for Alternating Group Explicit). 13.5 Differentiating under the integral sign and then integrating by parts, we find Z Z Z    d uuxx + εvvyy dΩ uut + vvt dΩ = 2 u2 + v 2 dΩ = 2 dt Ω Ω Ω Z Z 1 Z 1 1 1  (ux )2 + ε(vy )2 dΩ vvy ) y=0 dx − uux ) x=0 dy + ε = Ω 0 0 Z  2 2 (ux ) + ε(vy ) dΩ ≤ 0 =− Ω

and so the “energy” integral is bounded below by 0 and is a strictly decreasing function of t if either u 6= 0 or v 6= 0. Hence both u and v tend to zero with time.  n  n+1   n Um Um U n i(κ 1 ℓ+κ 2 m)h =ξ e c⇒ =ξ m Vmn Vmn+1 Vmn

and so the amplification factor of the coupled scheme satisfies   1 − 4rs1 k c, ξc = −k 1 − 4rεs2 where s1 = sin2 12 κ 1 h and s2 = sin2 12 κ 2 h. This is an eigenvalue problem with characteristic polynomial  P (ξ) = ξ 2 + 2(2εs2 + 2s1 − 1)ξ + 16εs1 s2 − 4εs2 + k 2 − 4s1 + 1 . The product of the roots is P (0) = det(A) = (1 − 4s1 )(1 − 4εs2 ) + k 2 and clearly P (0) > 1 when s1 = s2 = 0 so the method is unstable.

135

For the method (13.5b) the amplification factor satisfies    1 − 4rs1 kξ 1 − ξ − 4rs1 ξc = c ⇒ A(ξ)c := −k 1 − 4rεs2 −k

 kξ c = 0. 1 − ξ − 4rεs2

This will have a nontrivial solution if, and only if, det(A(ξ)) = 0. This leads to the characteristic polynomial P1 (ξ) = 0, where P1 (ξ) := (1 − ξ − 4rs1 kξ)(1 − ξ − 4rεs2 ) + k 2 ξ. Applying the Jury conditions, the roots will satisfy |ξ| ≤ 1 if, and only if, (a) P1 (0) = (1 − 4rs1 kξ)(1 − 4rεs2 ) < 1 for all s1 , s2 ∈ [0, 1]: considering P1 (0) − 1 = −4r(1 + ε − 4εr), we conclude that P1 (0) < 1 if r < (1 + ε)/(4ε). (b) P1 (1) = 16εr2 s21 s22 + k 2 > 0 is clearly true for all s1 , s2 ∈ [0, 1] and all r. (c) P1 (−1) = 4(1 − 2rs1 )(1 − 2rεs2 ) − k 2 > 0. There are two cases: (i) 0 < r < 21 : both the factors in the first term of P1 (−1) are positive and (1 − 2rs1 ) ≥ 1 − 2r,

(1 − 2rεs2 ) ≥ 1 − 2rε ⇒ P1 (−1) ≥ 4(1 − 2r)(1 − 2rε) − k 2

so we have the time step limit k 2 < 4(1 − 2r)(1 − 2rε). (ii) r > 12 . There is a value of s1 ∈ (0, 1) such that P1 (−1) = −k 2 < 0 and the process is unstable. The amplification factor of method (13.5c) satisfies    1 − ξ − 4rξs1 1 − 4rξs1 kξ c ⇒ A(ξ)c := ξc = −k −k 1 − 4rεs2

 kξ c = 0. 1 − ξ − 4rεs2

This will have a nontrivial solution if, and only if, det(A(ξ)) = 0. This leads to the characteristic polynomial P1 (ξ) = 0, where  P2 (ξ) := (1 + 4rs1 )ξ − 1) (ξ + 4rεs2 − 1) + k 2 ξ.

Applying the Jury conditions, the roots will satisfy |ξ| ≤ 1 if, and only if, (a) P2 (0) = (1 − 4rεs2 ) < 1 for all s1 , s2 ∈ [0, 1] and r > 0.

(b) P2 (1) = 16εr2 s21 s22 + k 2 > 0 is clearly true for all s1 , s2 ∈ [0, 1] and all r. (c) P2 (−1) = 4(1 + 2rs1 )(1 − 2rεs2 ) − k 2 > 0. There are two cases: (i) 0 < r < 1/(2ε): P2 (−1) ≥ 1 − 2εr − k 2 > 0 if r < (1 − 41 k 2 )/(2ε).

(ii) r > 1/(2ε). There is a value of s2 ∈ (0, 1) such that P2 (−1) = −k 2 < 0 and the process is unstable. 136

r 1 2ε

1 2

Figure 26: The stability regions for method (13.5b) (light shading) and (13.5c) (dark shading).

2 k

The stability regions for methods (13.5b) (light shading) and (13.5c) (dark shading) are shown in Fig. 26. Clearly method (13.5c) has the largest stability region but this is offset by requiring a set of tridiagonal equations to be solved at each time step. The local truncation error of all three methods is O(k) + O(h2 ). 13.6 When u(x, t) = v(t) then v satisfies the ODE v ′ (t) = v(1 − v) and has solution v(t) =

Aet , 1 + Aet

A=

v(0) . 1 − v(0)

When A > 0 (corresponding to 0 < v(0) < 1) then the denominator of v(t) is always positive and v(t) → 1 as t → ∞. When A < 0, the denominator becomes zero at t = t∗ , where ∗

et = −

1 1 =1− . A v(0)

It follows that t∗ > 0 if v(0) < 0. (i) Substituting u(x, t) = φ(x − at) into the PDE gives

ϕ′′ (z) + aϕ′ (z) + ϕ(z) − ϕ2 (z) = 0,

−∞ < z < ∞.

(13.6a)

When ϕ(z) ≈ Ae−λz and z is sufficiently large that −ϕ2 (z) is negligible, then ϕ′′ (z) + aϕ′ (z) + ϕ(z) = 0, so that λ2 − aλ + 1 = 0. That is

⇒ (λ2 − aλ + 1)Ae−λz = 0

1 ≥2 λ since the function λ + 1/λ has a minimum at λ = 1 for all positive values of λ. a=λ+

(ii) When ϕ(z) ≈ 1 − Beµz we find ϕ − ϕ2 = ϕ(1 − ϕ) = (1 − Beµz )Beµz ≈ Beµz as z → −∞. Hence, for large negative values of z, (−µ2 − aµ + 1)Aeµz = 0 so that µ2 + aµ − 1 = 0. That is

1 −µ µ so that −∞ < a < ∞ for 0 < µ < ∞. If the wave speeds at +∞ and ∞ are to agree, then λ and µ have to be related by 1 1 λ + = − µ. (13.6c) λ µ a=

137

(iii) ′

s (t) = −a

Z

L

∞ ϕ′ (x − at) dx = −aϕ(x − at) x=L = aϕ(L − at)

and s′ (t) → a since ϕ(L − at) → 1 as t → ∞ (part (ii)) and ϕ(x − at) → 0 as z → ∞ (part (i)). Substituting the expression (13.6b) into the ODE for ϕ leads to  C exp(bz) 2(ab + b2 − 1) + C(2ab − 4b2 − 1) exp(bz) = 0 (1 + C exp(bz))4

√ which is satisfied if (ab + b2 − 1) = (2ab − 4b2 − 1) = 0. These lead to a = 5b = 5/ 6. Also, ϕ(z)

exp(−2bz) 1 = ≈ C −2 exp(−2bz) as z → ∞ (1 + C exp(bz))2 (exp(−bz) + C)2 1 ≈ (1 − 2C exp(bz)) as z → −∞ ≈ (1 + 2C exp(bz))

so that λ = 2b and µ = b. These satisfy (13.6c) if b2 = 1/6. For some further insights into Fisher’s equation see “An Introduction to Partial Differential Equations” by J. David Logan, Section 4.4 (John Wiley & Sons, 1994). 13.7 Substituting u(x, t) = X(x)/T (t) into ut − u = u(uxx − u) leads to −

XT ′ X X2 XX ′′ − − 2 = 2 2 T T T T

T ′ + T = X − X ′′

and, using the standard separation of variables argument with separation constant C, we obtain X = C + Aex + Be−x and T (t) = C + De−t . Hence u(x, t) =

C + Aex + Be−x C + De−t

(13.7d)

solves the PDE for arbitrary constants A, B, C and D. Now, with a similar argument to the derivation of (11.47b), δx2 e±xm = (eh − 2 + e−h )e±xm = 4 sinh2 ( 12 h)e±xm and so a2 δx2 Xm = Xm − C with a = 1/(2 sinh 12 h) and Xm = X(xm ). Also, with T n = T (tn ) and tn = nk, T n − (1 + b)T n+1 = (C + De−tn ) − (1 + b)(C + De−tn+1 ) = −bC + De−tn (1 − (1 + b)e−k ) = −bC n if b = ek − 1. Hence, if we substitute Um = Xm /T n into the difference between left and right hand sides of (13.7b), rearranged to read,  2 2 n  n+1 n n+1 n Um a δx U m − U m − (1 + b)Um − bUm

we have

Xm (T n n T T n+1

− (1 + b)T n+1 ) − b

which vanishes by virtue of the earlier results. 138

Xm n T T n+1



a2 δx2 Xm − Xm



Consistency follows from the the expansions a2 =

1 = h−2 − 4 sinh2 21 h

1 12

+ O(h2 ),

b = ek − 1 = k + 12 k 2 + O(k 3 ).

With U = 1 + εV we have n n+1 = εVmn+1 (1 + εVmn ) = εVmn+1 + O(ε2 ) )Um (1 − Um

and n+1 2 n Um δx Um = εδx2 Vmn (1 + εVmn+1 ) = εδx2 Vmn + O(ε2 )

so that V satisfies (1 − b)Vmn+1 = Vmn + a2 bδx2 Vmn .

Substituting Vmn = ξ n eiκmh it is readily show that the amplification factor is real and given by ξ=

1 − 4a2 bs , 1+b

where s = sin2 12 κh. Clearly ξ ≤ 1 and we shall have ξ ≥ −1 if b(1 + 4a2 s) ≤ 2. This will be satisfied for all s ∈ [0, 1] if it is satisfied at s = 1, that is, b(1 + 4a2 ) ≤ 2. Using the identity cosh2 x − sinh2 x = 1 we find the stability condition to be b ≤ 2 tanh2 12 h. This reduces to k ≤ 21 h2 when k and h are sufficiently small. Nonstandard finite difference schemes of this type were introduced by Ronald E. Mickens— see his article “Nonstandard Finite Difference Schemes for Differential Equations” (Journal of Difference Equations and Applications, vol. 8: pp. 823–847, 2002) for background material. 13.8 See the article “Discretization of a convection–diffusion equation” by K.W. Morton and I.J. Sobey (IMA Journal of Numerical Analysis, vol.√ 13, pp. 141–160, 1993). With the change of variable s = (ak − 2z εk)/h, the solution at (xm , tn+1 ) may be expressed as Z ∞ 1 −(s−c)2 /4r n+1 K(s) Φ(xm − sh) ds, K(s) = Um = e . (13.8d) 4πr −∞ This is shown in Fig. 27 for r = 1/8, 1/4, 1/2, 1 and c = 0. Consider this in the context of Leith’s K(s) 1

−3

−2

−1

0

1

r = 1/8 r = 1/4 r = 1/2 r=1 2

3

s

Figure 27: The kernel K(s) for Project 13.8 for r = 1/8, 1/4, 1/2, 1 and c = 0. method (which is the FTCS method for the heat equation when c = 0). The interpolant is based on data at the points s = −1, 0, 1 and would not be expected to give a good approximation of the solution outwith an interval much larger than −2 ≤ s ≤ 2, say. When r is small (r = 1/8, for example) the kernel K is very small outside the interval (−1.5, 1.5) and the formula (13.8d) 139

effectively forms a weighted average of the interpolant within its most accurate range. As r increases, the interval over which the weighted average is taken effectively grows and the accuracy of the method is expected to fall away. 13.9 For 0 < m < M the argument used in Example 11.9 may be used with γ = 0 to show that n n+1 = 0, |Um | ≤ kU·n kh,∞ provided that r ≤ 12 . At m = M and using UM+1 2r 2r n n+1 n |UM | = UM−1 + (1 − )UM 1+α α 2r 2r  ≤ + 1 − kU·n+1 kh,∞ . 1+α α There are two cases to consider (a) 0 < 2r ≤ α < 1 then 0< n+1 and so |UM | ≤ kU·n kh,∞ .

2r  2r 1 2r + 1 − = 1 − ≤1 1+α α α 1+α

(b) α < 2r ≤ 1 then 0
0 α2 1+α α (1 + α)

and so the roots of p(z) are real and their product is negative for 0 < α < 1. Since p(−1) =

2(1 − 2α2 ) , α(1 − α)

p(0) < 0,

p(1) =

2 >0 α(1 + α)

and p(z) → +∞ as z → ±∞, it follows that the roots z1 , z2 of p(z) satisfy z1 < −1 < z2 < 1 (so that |z1 | > 1) for 2α2 < 1 and −1 < z1 < 0 < z2 < 1 for 2α2 > 1. There are no solutions with |z| > 1 for 2α2 > 1 so the stability test imposes no restriction apart from the von Neumann condition r ≤ 12 . In the case 2α2 < 1, we solve (13.9e) for z and substitute into (13.9d) to give (ξ − 1)2 =

4r2 α2 (1 − α2 )

2r ξ =1± √ α 1 − α2

so there is clearly one root (ξ1 , say) with ξ1 > 1 and another root (ξ2 , say) that will satisfy −1 ≤ ξ2 ≤ 1 if, and only if, p 2r ≥ −1 ⇒ r ≤ α 1 − α2 . 1− √ 2 α 1−α √ Finally, we need to relate the ξ roots with the z roots when 0 < α < 1/ 2. Substituting ξ = ξ2 into (13.9d) we find  1 α 1− √ z= 1+α 1 − α2

√ so that the corresponding z-root is negative, i.e., z = z1 . Hence, for 0 < α < 1/ 2, we have a √ n solution Um = ξ2n z1m with |z1 | > 1 and |ξ2 | ≤ 1 provided that r ≤ α 1 − α2 . The overall stability region is shown on the right of Fig. 28. If the finite difference approximation at x = xM is replaced by −

2r 2r n+1 2r n+1 n UM−1 U n+1 = UM + (1 + )UM − , 1+α α α(1 + α) M+1

(13.9c)

n+1 (based on a backward difference in time) then, since the value of UM−1 is provided by an explicit n+1 scheme, UM may be computed without the need to solve any linear equations (the BCs mean n+1 that UM+1 = 0). n+1 The argument used in Example 11.9 may again be used with γ = 0 to show that |Um | ≤ n+1 1 n kU· kh,∞ for 0 < m < M provided that r ≤ 2 . Thus, with UM+1 = 0

(1 +

2r 2r n+1 n )|UM | ≤ |UM |U n+1 | |+ α 1 + α M−1 2r  n ≤ 1+ kU· kh,∞ 1+α 141

n+1 and so |UM | ≤ kU·n kh,∞ since

1+ Thus kU·n+1 kh,∞ ≤ kU·n kh,∞ for r ≤ any α ∈ (0, 1].

1 2

2r 2r ≤1+ . α 1+α

and so the scheme is stable in the maximum norm for

13.10 Replacing n by n − 1 in (13.10b) gives

    (1 + 2r)U_m^n = r[U_{m+1}^n + U_{m−1}^n] + U_m^{n−1}

and eliminating U_m^n using this equation and (13.10a) leads to (13.10c). When m + n is odd (say), the sum of the indices in each of the terms in (13.10c) is m + n ± 1 and is therefore even. Consequently, the solution at all grid points where m + n is even may be determined independently of those where m + n is odd, and vice versa.

(a) The domain of dependence of the Du Fort–Frankel (and therefore also the hopscotch) scheme is the same as that of the FTCS method (shown in Fig. 11.4) and, as in Example 11.2, the method cannot converge unless k/h → 0 as h, k → 0.

(b) The LTE of the scheme is (using the expansions given in Table 6.1)

    R_h|_m^n := (1/k)[(1 + 2r)u_m^{n+1} − 2r(u_{m−1}^n + u_{m+1}^n) − (1 − 2r)u_m^{n−1}]
             = (1/k)[2△t u_m^n − 2r(δx² − δt²)u_m^n]
             = 2k^{−1}△t u_m^n − 2h^{−2}δx² u_m^n + 2(k/h)² k^{−2}δt² u_m^n
             = 2[u_t + (1/6)k²u_ttt + O(k⁴)] − 2[u_xx + (1/12)h²u_xxxx + O(h⁴)] + 2(k/h)²[u_tt + O(k²)]
             = 2[u_t − u_xx + (k/h)²u_tt] + O(h²) + O(k²).

Thus the method is consistent with the heat equation u_t = u_xx only if k/h → 0 as h, k → 0. If k = rh² and r is fixed as h → 0, the order of consistency is O(h²). If there is a constant α such that k = αh as h → 0 (for example, halving k whenever h is halved), then the Du Fort–Frankel method is consistent of second order with the PDE u_t + α²u_tt = u_xx and the finite difference solution cannot therefore converge to a solution of the heat equation.

(c) The argument used in Example 11.9 may again be used with γ = 0 to show that |U_m^1| ≤ ‖U·^0‖_{h,∞} for m = 2, 4, . . . provided that r ≤ 1/2. Then, using (13.10b) with m = 1, 3, . . .,

    |U_m^1| ≤ (1/(1 + 2r))[|U_m^0| + r|U_{m−1}^1| + r|U_{m+1}^1|] ≤ ‖U·^0‖_{h,∞}.

Hence ‖U·^1‖_{h,∞} ≤ ‖U·^0‖_{h,∞} for r ≤ 1/2. The same argument holds at all odd time levels and, at even numbered time levels, the roles of the odd and even spatial grid points are reversed.
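The consistency behaviour in (b) is easy to see numerically. The sketch below is an illustration (hypothetical test point (x, t) = (1, 1) and test function u = e^{−t} sin x, which solves u_t = u_xx exactly): along the refinement path k = h²/4 the Du Fort–Frankel residual vanishes as h → 0, while along k = h it approaches 2(k/h)²u_tt = 2u_tt ≠ 0.

```python
import math

def dff_residual(u, x, t, h, k):
    """Du Fort-Frankel local truncation error applied to a smooth u(x, t)."""
    r = k / h**2
    return ((1 + 2 * r) * u(x, t + k)
            - 2 * r * (u(x - h, t) + u(x + h, t))
            - (1 - 2 * r) * u(x, t - k)) / k

# u = exp(-t) sin(x) is an exact solution of the heat equation u_t = u_xx
u = lambda x, t: math.exp(-t) * math.sin(x)
utt = math.exp(-1.0) * math.sin(1.0)              # u_tt = u at (x, t) = (1, 1)

for h in (0.1, 0.05, 0.025):
    Ra = dff_residual(u, 1.0, 1.0, h, h**2 / 4)   # k = r h^2: residual -> 0
    Rb = dff_residual(u, 1.0, 1.0, h, h)          # k = h: residual -> 2 u_tt
    print(h, Ra, Rb)
```

The second column shrinks to zero while the third settles near 2u_tt ≈ 0.619, confirming that with k = αh the scheme is consistent with the perturbed PDE rather than the heat equation.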

(d) To analyse the ℓ²-stability of the scheme we substitute U_m^n = ξ^n e^{iκmh} into (13.10c), which leads to the quadratic equation

    (1 + 2r)ξ² − 4r cos(κh) ξ − (1 − 2r) = 0,

whose roots are

    ξ = (2r cos θ ± √(1 − 4r² sin²θ))/(1 + 2r),    θ = κh.

There are two cases.

(i) Real roots: 1 − 4r² sin²θ ≥ 0. Then 1 − 4r² sin²θ ≤ 1 and so

    |ξ| ≤ (2r + 1)/(1 + 2r) = 1.

(ii) Complex roots: 1 − 4r² sin²θ < 0. Then

    ξ = (2r cos θ ± i√(4r² sin²θ − 1))/(1 + 2r),
    |ξ|² = (4r²(cos²θ + sin²θ) − 1)/(1 + 2r)² = (2r − 1)/(2r + 1) ≤ 1.

Thus |ξ| ≤ 1 for all θ and all r and the method is unconditionally ℓ²-stable. As in Project 13.4, the time step is limited by consistency rather than stability. One possible strategy in two space dimensions is to use the FTCS (BTCS) scheme at spatial grid points where ℓ + m is even (odd) at even numbered time levels and to exchange the roles of the even/odd spatial grid points on alternate time levels.

13.11 The analysis of this scheme, along with its generalisation to d space dimensions and anisotropic diffusion (which is necessary to study the generalisation of Leith's scheme), is contained in the article "The Stability of Explicit Euler Time-Integration for Certain Finite Difference Approximations of the Multi-Dimensional Advection-Diffusion Equation" by A. C. Hindmarsh, P. M. Gresho and D. F. Griffiths (Int. J. Num. Meth. Fluids, vol. 4, 1984, pp. 853–897).

13.12 (i) With f(u) = 0,

    U̿_m^n = U_m^n − c△−x(1 − c△+x)U_m^n = U_m^n − c△−x U_m^n + c²δx² U_m^n

since △−x △+x = δx2 (Exercise 6.2). Hence

    U_m^{n+1} = U_m^n − c△x U_m^n + (1/2)c²δx² U_m^n

(using △−x + △+x = 2△x), which is the Lax–Wendroff method (see Exercise 12.32).

(ii) With f(u) = αu and U_m^n = ξ^n e^{iκmh} we have

    Ū_m^n = U_m^n − [c(e^{iθ} − 1) − αk]U_m^n,
    U̿_m^n = U_m^n − [c(1 − e^{−iθ}) − αk]Ū_m^n,

which lead to

    ξ = 1 − σ + (1/2)σ² − ic(1 − σ) sin θ − 2c² sin²(θ/2),    where σ = −αk.

Defining s = sin²(θ/2) we find that |ξ|² − 1 = (1/4)p(s), where

    p(s) := 16(c² − (1 − σ)²)c²s² − 8σ(2 − σ)c²s − σ(2 − σ)((1 − σ)² + 3),

and so stability will follow if p(s) ≤ 0 for all s ∈ [0, 1]. Now p(0) ≤ 0 for 0 ≤ σ ≤ 2 and it is readily shown that p(1) ≤ 0 for

    c² ≤ (1/4)((1 − σ)² + 3).    (13.12c)

It may be shown (we omit the details) that p(s) may have a stationary point s = s* in the region c² ≥ (1 − σ)² but, since p″(s*) ≥ 0, this must be a local minimum. Hence p(s) ≤ 0 for all s ∈ [0, 1] provided that condition (13.12c) holds and 0 ≤ σ ≤ 2.
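Both the algebraic identity |ξ|² − 1 = p(s)/4 and the stability bound can be spot-checked numerically. The sketch below samples σ ∈ [0, 2] with c chosen on the boundary of (13.12c) (the sampling grid is an arbitrary choice for illustration).

```python
import math

def amp(sigma, c, theta):
    """Amplification factor of the two-stage scheme, as derived above."""
    s = math.sin(theta / 2) ** 2
    re = 1 - sigma + 0.5 * sigma**2 - 2 * c**2 * s
    im = -c * (1 - sigma) * math.sin(theta)
    return complex(re, im)

def p(sigma, c, s):
    """The quartic bound p(s) from the text."""
    return (16 * (c**2 - (1 - sigma)**2) * c**2 * s**2
            - 8 * sigma * (2 - sigma) * c**2 * s
            - sigma * (2 - sigma) * ((1 - sigma)**2 + 3))

for i in range(21):
    sigma = 2 * i / 20
    c = 0.5 * math.sqrt((1 - sigma)**2 + 3)     # equality in (13.12c)
    for j in range(41):
        theta = math.pi * j / 20
        xi = amp(sigma, c, theta)
        s = math.sin(theta / 2) ** 2
        # identity |xi|^2 - 1 = p(s)/4 and the stability bound |xi| <= 1
        assert abs(abs(xi)**2 - 1 - p(sigma, c, s) / 4) < 1e-10
        assert abs(xi) <= 1 + 1e-12
```

Note that on the boundary curve p(1) = 0, so |ξ| = 1 exactly at θ = π, reflecting that (13.12c) is sharp.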

Figure 29: The stability region (13.12c) for MacCormack's method when f(u) = −αu and σ = αk.

The stable region in the σ-c parameter space is shaded in Fig. 29. The dashed line shows a refinement path as k → 0 with c fixed and (1/2)√3 < c < 1. The method is unstable for σ > 2; then, as k decreases, it stabilizes briefly before becoming unstable again. It finally restabilizes when k is sufficiently small. This illustrates the possibility that reducing the time step may destabilise a numerical method.

(iii) Substituting U_m^n = U_m^{n+1} = U* into (13.12b) leads to
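The destabilise/restabilise behaviour along the dashed refinement path can be reproduced directly from (13.12c). The sketch below fixes c = 0.95 (a hypothetical choice with (1/2)√3 < c < 1) and records the stability of each σ as σ = αk decreases towards zero.

```python
def stable(sigma, c):
    """Stability region of the scheme: 0 <= sigma <= 2 together with (13.12c)."""
    return 0 <= sigma <= 2 and c**2 <= 0.25 * ((1 - sigma)**2 + 3)

c = 0.95                                        # fixed Courant number
sigmas = [2.5 - 0.01 * i for i in range(250)]   # sigma decreasing, i.e. k -> 0
flags = [stable(s, c) for s in sigmas]

# collapse consecutive repeats to see the pattern along the refinement path
pattern = [flags[0]]
for f in flags[1:]:
    if f != pattern[-1]:
        pattern.append(f)
print(pattern)   # unstable, stable, unstable, stable
```

For c = 0.95 the stable windows are σ ∈ [1.781, 2] and σ ∈ [0, 0.219] (from (1 − σ)² ≥ 4c² − 3), so decreasing k passes through unstable, stable, unstable and finally stable regimes, exactly as described above.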

    Ū_m^n = U* + kf(U*) =: Ū, say,
    U̿_m^n = U* + kf(Ū) =: U̿, say,

so that

    U* = (1/2)(Ū_m^n + U̿_m^n) = U* + (1/2)k(f(U*) + f(Ū)),

which will be satisfied if f(U*) + f(Ū) = 0, as required. When f(u) = αu(1 − u), it may be shown that the steady states U* satisfy the quadratic equation

    σ²U*² − σ(2 + σ)U* + 2 + σ = 0.

This has real roots for σ = αk ≥ 2. The method therefore has spurious steady-state solutions when α ≫ 1 (otherwise, since k is small, we will have αk < 2). Other spurious behaviour may also be possible and is one of the hazards faced when solving PDEs with strong nonlinearities.

13.13 The differential equation iv′(x) = λv(x) has general solution v = e^{−iλx} and the periodicity requirement v(x + 1) = v(x) necessitates e^{−iλ} = 1, so that λ = ±2jπ for j = 0, 1, 2, . . .. The eigenfunction corresponding to λ = 0 is v(x) = constant.
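The spurious steady states can be verified numerically. The sketch below uses the hypothetical values α = 3, k = 1 (so σ = 3 ≥ 2), solves the quadratic and checks that each root U* satisfies f(U*) + f(Ū) = 0 with Ū = U* + kf(U*).

```python
import math

alpha, k = 3.0, 1.0
sigma = alpha * k                       # sigma >= 2: real spurious roots exist
f = lambda u: alpha * u * (1 - u)

# sigma^2 U^2 - sigma(2 + sigma) U + (2 + sigma) = 0
a, b, c = sigma**2, -sigma * (2 + sigma), 2 + sigma
disc = b * b - 4 * a * c
assert disc >= 0                        # guaranteed for sigma >= 2
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (+1, -1)]

for U in roots:
    Ubar = U + k * f(U)                 # predictor stage at a constant state
    assert abs(f(U) + f(Ubar)) < 1e-10  # fixed point of the two-stage update
print(roots)
```

With σ = 3 the roots are (5 ± √5)/6 ≈ 1.206 and 0.461; note that the predictor maps each spurious root onto the other, so the pair forms a genuine fixed point of the scheme even though neither is a steady state of the ODE.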


To investigate the rank of A1 and A2 it is sufficient to study (L − L^T) and (RL − R^{−1}L^T), respectively (since D is nonsingular). So, using the notation A ∼ B to denote matrices that are equivalent under elementary row operations,

                   [  0   1              −1 ]
                   [ −1   0   1             ]
    A1 ∼ L − L^T = [      ⋱   ⋱   ⋱         ] .
                   [          −1   0    1   ]
                   [  1           −1    0   ]

It is easily shown that (L − L^T)v1 = 0, where v1 = [1, 1, . . . , 1]^T, and so rank(A1) ≤ M − 1. When M is even, (L − L^T)v2 = 0, where v2 = [1, 0, 1, 0, . . . , 1, 0]^T, and so rank(A1) ≤ M − 2. Permuting the rows of L − L^T so that the first becomes the last, we have, when M = 5,

          [ −1   0   1   0   0 ]   [ −1   0   1   0   0 ]
          [  0  −1   0   1   0 ]   [  0  −1   0   1   0 ]
    A1 ∼  [  0   0  −1   0   1 ] ∼ [  0   0  −1   0   1 ]
          [  1   0   0  −1   0 ]   [  0   0   0  −1   1 ]
          [  0   1   0   0  −1 ]   [  0   0   0   0   0 ]

having added row 1 to row 4, row 2 to row 5, row 3 to row 4 and then row 4 to row 5. Hence rank(A1) = 4 when M = 5. When M = 6, similar calculations lead to

          [ −1   0   1   0   0   0 ]   [ −1   0   1   0   0   0 ]
          [  0  −1   0   1   0   0 ]   [  0  −1   0   1   0   0 ]
    A1 ∼  [  0   0  −1   0   1   0 ] ∼ [  0   0  −1   0   1   0 ]
          [  0   0   0  −1   0   1 ]   [  0   0   0  −1   0   1 ]
          [  1   0   0   0  −1   0 ]   [  0   0   0   0   0   0 ]
          [  0   1   0   0   0  −1 ]   [  0   0   0   0   0   0 ]

in which there are two zero rows, and therefore rank(A1) = 4 when M = 6. Since A2v1 = 0 it follows that rank(A2) ≤ M − 1. It is impractical to carry out row operations on A2 in the general case, so we look at the special case hm = αh for m = 1, 2, . . . , M − 1 and hM = βh. When M = 6,
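The rank pattern for L − L^T can be confirmed with exact rational elimination; the sketch below (illustrative code, not part of the original solution) builds the circulant matrix for general M and computes its rank, giving M − 1 for odd M and M − 2 for even M.

```python
from fractions import Fraction

def skew_circulant(M):
    """The M x M matrix L - L^T: +1 on the superdiagonal (wrapping at the
    bottom-left corner), -1 on the subdiagonal (wrapping at the top-right)."""
    A = [[Fraction(0)] * M for _ in range(M)]
    for m in range(M):
        A[m][(m + 1) % M] += 1
        A[m][(m - 1) % M] -= 1
    return A

def rank(A):
    """Rank by exact Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    r = 0
    for col in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, len(A)):
            if A[i][col] != 0:
                f = A[i][col] / A[r][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

print([rank(skew_circulant(M)) for M in range(3, 9)])   # -> [2, 2, 4, 4, 6, 6]
```

The drop of two in rank for even M corresponds to the extra null vector v2 = [1, 0, 1, 0, . . .]^T.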

          [  0     1     0     0      0     −1  ]
          [ −1     0     1     0      0      0  ]
    A2 ∼  [  0    −1     0     1      0      0  ]
          [  0     0    −1     0      1      0  ]
          [  0     0     0   −β/α     0     β/α ]
          [ β/α    0     0     0    −α/β     0  ]

Permuting the rows of this matrix so that the first becomes the last and then carrying out elementary row operations (which involve adding rows together or adding β/α × one row to another), we find

          [ −1     0     1     0      0     0  ]
          [  0    −1     0     1      0     0  ]
    A2 ∼  [  0     0    −1     0      1     0  ]
          [  0     0     0   −β/α     0    β/α ]
          [ β/α    0     0     0    −α/β    0  ]
          [  0     1     0     0      0    −1  ]

          [ −1   0   1    0         0          0  ]
          [  0  −1   0    1         0          0  ]
       ∼  [  0   0  −1    0         1          0  ]
          [  0   0   0  −β/α        0         β/α ]
          [  0   0   0    0   −(α²−β²)/(αβ)    0  ]
          [  0   0   0    0         0          0  ]
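As a check on the elimination, the rank of this special-case matrix can be computed with exact rational arithmetic. The sketch below (illustrative code; the sample ratios α = 1, β = 2 are hypothetical choices) assembles the M = 6 matrix displayed above and reduces it.

```python
from fractions import Fraction

def rank(A):
    """Rank by exact Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    r = 0
    for col in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, len(A)):
            if A[i][col] != 0:
                f = A[i][col] / A[r][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def A2_special(alpha, beta):
    """M = 6 case with h_1 = ... = h_5 = alpha*h and h_6 = beta*h: interior rows
    follow the (-1, 0, 1) pattern of L - L^T, while the rows adjoining the odd
    interval carry the ratios beta/alpha and alpha/beta, as displayed above."""
    a, b = Fraction(alpha), Fraction(beta)
    rows = [[0,    1,  0,    0,    0,   -1  ],
            [-1,   0,  1,    0,    0,    0  ],
            [0,   -1,  0,    1,    0,    0  ],
            [0,    0, -1,    0,    1,    0  ],
            [0,    0,  0, -b/a,    0,  b/a  ],
            [b/a,  0,  0,    0, -a/b,    0  ]]
    return [[Fraction(x) for x in row] for row in rows]

print(rank(A2_special(1, 2)), rank(A2_special(1, 1)))
```

With α = 1, β = 2 the rank is 5, and with α = β the matrix reduces to L − L^T (M = 6 even), so the rank drops to 4.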

             

so that rank(A2) = M − 1 = 5 unless α = β. However, the characteristic polynomial of A2 is

    det(A2 − λI) = (1/16)λ⁶α⁴(α + β) − (1/4)(11α² + 10βα + 3β²)λ⁴α² + (5α + β)λ²,

from which we deduce that it has a double eigenvalue at λ = 0. Hence A2 is defective: it has fewer than M linearly independent eigenvectors.
The eigenvalue problem A1v = λv may be written as i(L − L^T)v = λDv so that

    λ = (v*i(L − L^T)v)/(v*Dv)

and the numerator and denominator are both real since i(L − L^T) and D are Hermitian matrices. Hence λ is real. Taking the complex conjugate of i(L − L^T)v = λDv leads to −i(L − L^T)v̄ = λDv̄, so that v̄ is an eigenvector corresponding to −λ. No corresponding results are known for A2.
We now compute the eigenvalues λj of A1 and µj of A2 using Matlab when M = 47 and M = 48 (chosen because one is prime and the other even). In all computed cases the eigenvalues of both matrices have been found to be real. The relative difference

    (λj − µj)/λj

is shown in Fig. 30 for all non-zero values of λj. It is seen to be of the order of rounding error (which is around 10^{−16}) for both values of M. The computed "zero" eigenvalues of A1 are generally less than 10^{−13} for both values of M (using a number of different random grids). The same is true for A2 when M = 47 but, when M = 48, it has a pair of eigenvalues of the form ±ε, where ε varies in magnitude from about 10^{−8} to 10^{−3} for different random grids. The null space of both matrices is computed exactly as multiples of v1 when M = 47. When M = 48, the null space of A1 is computed as two vectors of the form

    [aj, bj, aj, bj, . . . , aj, bj]^T,    j = 1, 2,

for complex constants a1, b1, a2, b2. These span the same subspace as v1, v2. The situation is more complicated for A2. The computed vectors corresponding to the "null space" each have entries whose magnitude is around 0.5 ± 10^{−7} so as to be, for practical purposes, linearly dependent.
The positive eigenvalues of A1 (solid lines) are compared with those of the ODE (◦) in Fig. 30 when both are ordered by magnitude. It would appear that, apart from the first eigenvalue, the differences are significant. The reason for this can best be understood by calculating the


Figure 30: Left: the relative difference between the eigenvalues of A1 and A2 for M = 47 (solid lines) and M = 48 (broken lines). The positive eigenvalues of A1 (solid lines) are shown for M = 47 (centre) and M = 48 (right). Also shown in the centre and right are the exact positive eigenvalues of the ODE (◦), the discrete eigenvalues on a uniform grid when sorted according to magnitude (×) and when sorted according to wavenumber (+).

eigenvalues of A1 on a uniform grid. The eigenvalue problem A1V = λV is a representation of iM△Vm = λVm and we find that the jth eigenvalue/eigenvector are⁵

    [Vj]m = e^{−2πijx_m},    λj = M sin(2πj/M),    x_m = m/M,

where j = −(1/2)(M − 1), . . . , (1/2)(M − 1) when M is odd and j = −(1/2)M + 1, . . . , (1/2)M when M is even. These are shown with + symbols in Fig. 30 (centre and right) and the first half-dozen or so agree well with the eigenvalues of the ODE. Now, taking M to be odd for illustration, consider

    λ_{(M−1)/2−k} = M sin(π(2k + 1)/M),    [V_{(M−1)/2−k}]_m = (−1)^m e^{−πi(2k+1)x_m},
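The uniform-grid eigenpairs quoted above are easy to verify directly. The sketch below (illustrative code; M = 47 is the value used in the text) applies the periodic scaled central-difference operator iM△ to each Vj and compares with λjVj.

```python
import cmath, math

def apply_iM_central(V):
    """(A1 V)_m on a uniform periodic grid: i M (V_{m+1} - V_{m-1}) / 2."""
    M = len(V)
    return [1j * M * (V[(m + 1) % M] - V[(m - 1) % M]) / 2 for m in range(M)]

M = 47
for j in range(-(M - 1) // 2, (M - 1) // 2 + 1):
    V = [cmath.exp(-2 * math.pi * 1j * j * m / M) for m in range(M)]
    lam = M * math.sin(2 * math.pi * j / M)
    W = apply_iM_central(V)
    # W should equal lam * V up to rounding error
    assert max(abs(w - lam * v) for w, v in zip(W, V)) < 1e-9 * M
```

Sorting the resulting λj by magnitude reproduces the interleaving of low- and high-wavenumber modes described above.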

so, for k = 0, 1, 2, . . . (small), eigenvalues ≈ π, 3π, . . . that are approximately odd multiples of π are generated by eigenvectors with high wavenumber, whereas the low wavenumbers (j = 1, 2, . . .) generate eigenvalues ≈ 2π, 4π, . . .. When these eigenvalues are ordered by magnitude (see × symbols in Fig. 30) there is close agreement with the eigenvalues on a random grid for the lower half of the range of wavenumbers.
We now turn to the characteristic polynomials of A1 and A2. For M = 3, 4, 5, 6 they have identical characteristic polynomials pM(λ) given by

    p3(λ) = det(D)λ³ − 2λ,
    p4(λ) = det(D)λ⁴ − λ²,
    p5(λ) = det(D)λ⁵ − (Σ_{m=1}^{5} Hm Hm+1 Hm+2)λ³ + 2λ,
    p6(λ) = det(D)λ⁶ − (Σ_{m=1}^{6} Hm Hm+1 Hm+2 Hm+3)λ⁴ + λ²,

where det(D) = ∏_{m=1}^{M} Hm, Hm := hm + hm−1 and hm ≡ hm−M for m > M. The matrices A1, A2 and (1/2)(A1 + A2) have been found to have identical characteristic polynomials for all M up to M = 20 but their complexity makes it impractical to display them here.
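The polynomials p3 and p5 can be checked numerically on a random grid with Σ hm = 1. The sketch below evaluates det(λD − i(L − L^T)) by complex Gaussian elimination and compares with the closed forms (the normalisation det(λD − i(L − L^T)), with +1 on the superdiagonal of L − L^T, is an assumption consistent with the leading det(D)λ^M term).

```python
import random

def det(A):
    """Determinant by Gaussian elimination with partial pivoting (complex)."""
    A = [row[:] for row in A]
    n, d = len(A), 1 + 0j
    for c in range(n):
        piv = max(range(c, n), key=lambda i: abs(A[i][c]))
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d
        d *= A[c][c]
        for i in range(c + 1, n):
            f = A[i][c] / A[c][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[c])]
    return d

def pencil(lam, h):
    """The matrix lam*D - i(L - L^T) on a periodic grid with intervals h."""
    M = len(h)
    H = [h[m] + h[m - 1] for m in range(M)]   # H_m = h_m + h_{m-1}, periodic
    A = [[0j] * M for _ in range(M)]
    for m in range(M):
        A[m][m] = lam * H[m]
        A[m][(m + 1) % M] = -1j               # -i * (L - L^T)_{m,m+1}
        A[m][(m - 1) % M] = +1j               # -i * (L - L^T)_{m,m-1}
    return A, H

random.seed(0)
lam = 0.7
for M in (3, 5):
    h = [random.random() for _ in range(M)]
    h = [x / sum(h) for x in h]               # random grid with total length 1
    A, H = pencil(lam, h)
    dD = 1.0
    for x in H:
        dD *= x
    if M == 3:
        pM = dD * lam**3 - 2 * lam
    else:
        S = sum(H[m] * H[(m + 1) % M] * H[(m + 2) % M] for m in range(M))
        pM = dD * lam**5 - S * lam**3 + 2 * lam
    assert abs(det(A) - pM) < 1e-9
```

The constant 2 in p3 and p5 is just Σ Hm = 2Σ hm = 2, which is why the formulas are grid-independent apart from det(D) and the displayed sums.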

⁵Note that λj → 2πj, the eigenvalues of the original ODE, as M → ∞ with j fixed.


http://www.springer.com/978-3-319-22568-5