
SOLUTIONS MANUAL ACCOMPANYING

COMPUTATIONAL ELECTROMAGNETICS WITH MATLAB

FOURTH EDITION

Matthew N. O. Sadiku
Prairie View A&M University

Table of Contents

Chapter 1 ............ 1
Chapter 2 ............ 11
Chapter 3 ............ 63
Chapter 4 ............ 90
Chapter 5 ............ 111
Chapter 6 ............ 135
Chapter 7 ............ 160
Chapter 8 ............ 169
Chapter 9 ............ 184

CHAPTER 1

Prob. 1.1
(a)
$$\nabla\times\nabla\Phi =
\begin{vmatrix} a_x & a_y & a_z\\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] \dfrac{\partial\Phi}{\partial x} & \dfrac{\partial\Phi}{\partial y} & \dfrac{\partial\Phi}{\partial z}\end{vmatrix}
= \left(\frac{\partial^2\Phi}{\partial y\,\partial z}-\frac{\partial^2\Phi}{\partial z\,\partial y}\right)a_x
+ \left(\frac{\partial^2\Phi}{\partial z\,\partial x}-\frac{\partial^2\Phi}{\partial x\,\partial z}\right)a_y
+ \left(\frac{\partial^2\Phi}{\partial x\,\partial y}-\frac{\partial^2\Phi}{\partial y\,\partial x}\right)a_z = 0$$

(b)
$$\nabla\cdot\nabla\times F
= \frac{\partial}{\partial x}\left(\frac{\partial F_z}{\partial y}-\frac{\partial F_y}{\partial z}\right)
+ \frac{\partial}{\partial y}\left(\frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x}\right)
+ \frac{\partial}{\partial z}\left(\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right)$$
$$= \frac{\partial^2 F_z}{\partial x\,\partial y}-\frac{\partial^2 F_y}{\partial x\,\partial z}
+ \frac{\partial^2 F_x}{\partial y\,\partial z}-\frac{\partial^2 F_z}{\partial y\,\partial x}
+ \frac{\partial^2 F_y}{\partial z\,\partial x}-\frac{\partial^2 F_x}{\partial z\,\partial y} = 0$$
since the mixed partial derivatives cancel in pairs.

(c)
$$\nabla(\nabla\cdot F)-\nabla^2 F
= \left[\frac{\partial}{\partial x}\left(\frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}+\frac{\partial F_z}{\partial z}\right)
-\frac{\partial^2 F_x}{\partial x^2}-\frac{\partial^2 F_x}{\partial y^2}-\frac{\partial^2 F_x}{\partial z^2}\right]a_x + \cdots$$
$$= \left[\frac{\partial}{\partial y}\left(\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right)
-\frac{\partial}{\partial z}\left(\frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x}\right)\right]a_x + \cdots
= \nabla\times\nabla\times F$$

Prob. 1.2
Let $A = U\nabla V$ and apply Stokes' theorem:
$$\oint_L U\nabla V\cdot dl = \int_S \nabla\times(U\nabla V)\cdot dS
= \int_S(\nabla U\times\nabla V)\cdot dS + \int_S U(\nabla\times\nabla V)\cdot dS$$
Since $\nabla\times\nabla V = 0$,
$$\oint_L U\nabla V\cdot dl = \int_S(\nabla U\times\nabla V)\cdot dS$$
Similarly, we can show that
$$\oint_L V\nabla U\cdot dl = \int_S(\nabla V\times\nabla U)\cdot dS = -\int_S(\nabla U\times\nabla V)\cdot dS$$
Thus,
$$\oint_L U\nabla V\cdot dl = -\oint_L V\nabla U\cdot dl$$
as required.

Prob. 1.3
Using the divergence theorem,
$$\int_S (U\nabla V)\cdot dS = \int_v \nabla\cdot(U\nabla V)\,dv$$
But $\nabla\cdot(UA) = U\nabla\cdot A + A\cdot\nabla U$, where $A = \nabla V$. Hence
$$\int_S (U\nabla V)\cdot dS = \int_v (U\nabla\cdot\nabla V + \nabla V\cdot\nabla U)\,dv
= \int_v (U\nabla^2 V + \nabla U\cdot\nabla V)\,dv$$

Prob. 1.4
If $J = 0 = \rho_v$, then Maxwell's equations become
$$\nabla\cdot D = 0 \qquad (1)$$
$$\nabla\cdot B = 0 \qquad (2)$$
$$\nabla\times E = -\frac{\partial B}{\partial t} \qquad (3)$$
$$\nabla\times H = \frac{\partial D}{\partial t} \qquad (4)$$
Since $\nabla\cdot(\nabla\times A) = 0$ for any vector field $A$,
$$\nabla\cdot(\nabla\times E) = -\frac{\partial(\nabla\cdot B)}{\partial t} = 0, \qquad
\nabla\cdot(\nabla\times H) = \frac{\partial(\nabla\cdot D)}{\partial t} = 0,$$
showing that (1) and (2) are incorporated in (3) and (4). Thus Maxwell's equations can be reduced to the two curl equations (3) and (4).

Prob. 1.5

If J ≠ 0 ≠ ρv ,

3 ∇gε E = ρv ∇gµ H = 0 ∂H ∇ × E = −µ ∂t ∂E ∇ × H= J + ε ∂t ∂ ∂J ∂2 E ∇ × H = −µ − µε 2 ∂t ∂t ∂t 2 ∂J 1 ∂ E ∇(∇gE ) − ∇ 2 E = − µ − ∂t c 2 ∂t 2 ∂J 1 ∂2 E 2 ∇ E− 2 2 = ∇( ρv / ε ) + µ ∂t c ∂t Similarly, ∂ ∇×∇× H = ∇× J +ε ∇× E ∂t ∂2 H ∇(∇gH ) − ∇ 2 H = ∇ × J − µε 2 ∂t or 1 ∂2 H ∇2 H − 2 = −∇ × J c ∂t 2 It is assumed that the medium is free space so that the medium is homogeneous and 1 = u = c. ∇ × ∇ × E = −µ

µε

Prob. 1.6
$$\nabla\times H = J + \frac{\partial D}{\partial t}$$
$$\nabla\cdot(\nabla\times H) = 0 = \nabla\cdot J + \frac{\partial(\nabla\cdot D)}{\partial t}$$
But $\nabla\cdot D = \rho_v$. Hence
$$\nabla\cdot J = -\frac{\partial\rho_v}{\partial t}$$
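The identity $\nabla\cdot(\nabla\times H) = 0$ used here (and in Prob. 1.1(b)) can also be confirmed symbolically in MATLAB. This is only an optional check and assumes the Symbolic Math Toolbox is available.

```matlab
% Symbolic check of the identity div(curl H) = 0 (Symbolic Math Toolbox assumed)
syms Hx(x,y,z) Hy(x,y,z) Hz(x,y,z)
H = [Hx(x,y,z); Hy(x,y,z); Hz(x,y,z)];
simplify(divergence(curl(H, [x y z]), [x y z]))   % returns 0
```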

4 Prob. 1.7 ∂B ∂H → = −∇ × E µ ∂t ∂t ∂D ∂E ∇× H = J + → ε = ∇× H - J ∂t ∂t Taking the derivative of (2), ∇× E = −

(1) (2)

∂2 E ∂H ∂J −∇× =(3) 2 ∂t ∂t ∂t Substituting (1) into (3) to eliminate H gives

ε

ε

∂2 E ∂E ∂J 1 + ∇× ∇× =2 ∂t ∂t ∂t µ

as required.

Prob. 1.8 (a) ∂D ∂t ∂H ∇ × E = −µ ∂t ∂ ∂ ∂ ∂H ∇ × ∇ × H = ∇ × J + (∇ × D) = ∇ × J + ε (∇ × E ) = ∇ × J − ( µ ) ∂t ∂t ∂t ∂t ∂2 H ∇ × ∇ × H + µε 2 = ∇ × J ∂t ∂ ∂ ∂D ∂J ∂2 E (b) ∇ × ∇ × E = − µ (∇ × H ) = − µ ( J + ) = −µ − µε 2 ∂t ∂t ∂t ∂t ∂t ∇ × H= J +

∇ × ∇ × E + µε

∂2 E ∂J = −µ 2 ∂t ∂t

Prob. 1.9 ∂H ∂t ∂D ∇ × H= J + ∂t Dotting both sides of (2) with E gives ∂D E g(∇ ×= H) E gJ + E g ∂t But for any arbitrary vectors A and B, ∇ × E = −µ

(1) (2)

(3)

5 ∇g( A × B= ) B g(∇ × A) − Ag(∇ × B ) Applying this to the left-hand side of (3) by letting A ≡ H and B ≡ E , we get 1 ∂ (4) ( DgE ) H g(∇ × E ) + ∇g( H ×= E ) E gJ + 2 ∂t From (1), ∂B 1 ∂ H g(∇ × E ) = H g(− ) = − ( B gH ) ∂t 2 ∂t Substituting this into (4) gives 1 ∂ 1 ∂ ( B gH ) − ∇g( E × H ) J gE + ( DgE ) − = 2 ∂t 2 ∂t Rearranging terms and then taking the volume integral of both sides, 1 ∂ − ( E gD + H gB )dv − ∫ J gEdv ∫v ∇g( E × H )dv = 2 ∂t ∫v v Using the divergence theorem, we get ∂W − − J gEdv Ñ ∫ ( E × H )gdS = ∂t ∫v Or

∂W = −Ñ ∫S ( E × H )gdS − ∫v E gJdv ∂t as required. Prob. 1.10

∇gE= 0,

∇gH= 0

∂ ∇ × E = ∂x Ex

∂ ∂y Ey

∂ ∂E ∂E ∂z = − y a x + x a y dz dz 0

= −10k sin(ωt + kz )a x − 20k cos(ωt − kz )a y H=− =

1

µo

∫ ∇ × E ∂t

k

 −10 cos(ωt − kz )a x + 20sin(ωt − kz )a y  ωµo 

which is the given H. Since all of Maxwell’s equations are satisfied by the fields, they are genuine EM fields. Prob. 1.11 ∇ × E = −µ

∂H ∂t



∂H 1 =− ∇ ×εo E ∂t µ oε o

6

1 1 ∂H =− ∇× D = − µ oε o µ oε o ∂t

∂ ∂x Dx ( z , t )

∂ ∂y 0

∂ ∂z 0

1 ∂Dx β Do sin(ωt + β z )a y ay = = − µoε o ∂z µ oε o D H= − o cos(ωt − β z )a y

β

Prob. 1.12

ω Es ∇ ×= H s j=

1 d ( ρ H φ )a z ρ dρ

 H o  1 1/ 2 − j βρ − j βρ 1/ 2 e − j βρ  a z  ρ e ρ ρ   − j βρ Ho  1 =  2 − jβ  e az ρ ρ   − j βρ ∇ × Hs 1 Ho  1 = Es =  2 − jβ  e az jωε jωε ρ  ρ 

=

Prob. 1.13 It is evident that ∇= gE 0 and ∇gH = 0 are satisfied. ∂B ∇× E = −  → ∇ × E s = − jωµ H s ∂t ∂ ∂ ∂ ∂y ∂z = − j β Eo e − j β z a y ∇ × E s = ∂x Eo e − j β z 0 0 E i.e. − jωµ o e − j β z a y = − j β Eo e − j β z a y

η ωµ β= η

∇ × H= J +

= ∇ × Hs

(1)

∂D ∂t

 → ∇ × H s= (σ + jωε ) E s

∂ ∂ ∂ ∂x ∂y ∂z = Eo − j β z 0 e 0

η

j β Eo

η

e− jβ z a x

7 j β Eo − j β z e ax (σ + jωε ) Eo e − j β z a x =

η

β= − jη (σ + jωε ) From (1) and (2), ωµ jωµ = − jη (σ + jωε )  → η 2= η σ + jωε Thus, jωµ ωµ = η = β , σ + jωε η

(2)

Prob. 1.14 The surface current density is K = an × H = an × ( H x , 0, H z ) At x= = 0, an a= K H z (0, y, z , t )a y x, i.e.= K H o cos(ωt − β z )a y At x = a, an = −a x − H z ( a , y , z , t )a y = − H o cos(π ) cos(ωt − β z )a y K= i.e. = K H o cos(ωt − β z )a y At y =0, an = a y K= − H z ( x, 0, z , t )a x + H x ( x, 0, z , t )a z = − H o cos(π x / a ) cos(ωt − β z )a x −

βa H sin(π x / a ) sin(ωt − β z )a z π o

At y =b, an = −a y K= −a y × ( H x , 0, H z ) = H x ( x , b, z , t ) a z − H x ( x , b , z , t ) a x = H o cos(π x / a ) cos(ωt − β z )a x −

βa H sin(π x / a ) sin(ωt − β z )a z π o

Prob. 1.15 ∂2 E (a) ∇ E= µ oε o 2 ∂t 2 ∂ ∂ For E , ∇ 2 E = cos(ωt − β z )a x = [ β sin(ωt − β z )a x = − β 2 cos(ωt − β z )a x 2 ∂z ∂z 2

∂2 E µ oε o 2 = −ωµoε o cos(ωt − β z )a x ∂t Thus, (b)

− β 2= −ω 2 µoε o

 → β= ω µoε o

8 ∂H  → H = − µo ∫ ∇ × Edt ∂t ∂ ∂ ∂ ∂x ∂= y ∂z β sin(ωt − β z )a y = ∇× E cos(ωt − β z ) 0 0 ∇ × E = −µ

µβ ω

o cos(ωt − β z )a y H= − µo ∫ β sin(ωt − β z )dta y =

Prob. 1.16

∂D = ε ∂t 1 = ∇ × Hs r sin θ

∂E jωε E s → ∇× H = s ∂t IL 1  (2sin θ cos θ )  + j β  e-j β r ar 4π r r 

∇×= H



= Es

IL sin θ 4π r

 1  -j β r 1 -j β r   − j β  r + j β  e − r 2 e  aθ    

 IL  1 jβ e-j β r  2 cos θ  2 + 2 4π rjωε r r 

Prob. 1.17

(

20(e jk x x − e − jk x x ) e Es = 2j

−e

jk y y

− jk y y

)

2j

= j5 e x y + e x y − e  which consists of four plane waves. j (k x+k y )

  2 jβ 1   −  aθ  ar − sin θ  β − r r 2    

j ( k x−k y )

j ( kx x −k y y )

−e−

j ( kx x + k y y )

 

∇ × E s = − jωµo H s

Or

∂ ∂x Hs Es = ∇ ×= ωµo ωµo 0 j

=

j

∂ ∂y 0

∂ ∂z Esz ( x, y )

 ∂E j  ∂Esz a x − sz a y   ∂x ωµo  ∂y  20  k y sin(k x x) sin(k y y )a x + k x cos(k x x) cos(k y y )a y  = − ωµ  o

Prob. 1.18 (a) I Re( I s e jωt ) sin π x cos π y cos(ωt − z ) = =

9 (b) V Re(20e − j 2 x e − j 90 e jωt − 10e − j 4 x e jωt ) = o

− j 20e − j 2 x − 10e − j 4 x Vs = 20e − j 2 x e − j 90 − 10e − j 4 x = o

Prob. 1.19 (a) A = Re( As e jωt ) = cos(ωt − 2 z )a x − sin(ωt − 2 z )a y (b) B = −10sin x sin ωta x − 5cos(ωt − 12 z − 45o )a z Re( Bs e jωt ) = jωt (c) = C Re(C= ) 2 cos 2 x sin(ωt − 3 x) + e3 x cos(ωt − 4 x) se

Prob. 1.20 Assuming the time factor e jωt , equation ∂2 E ∂J ∇ E − µε 2 = µ ∂t ∂t becomes ∇ 2 E s + ω 2 µε E s = jωµσ E s or 2

0 ∇ 2 E s − jωµ (σ + jωε ) E s = For conducting medium, σ ? ωε so that ∇ 2 E s − jωµσ E s = 0

Prob. 1.21 From eq. (1.42), ∂2 A ∇ 2 Az − µε 2 z = −µ J z ∂t Prob. 1.22

Lorenz condition is ∂V ∇gA = − µε ∂t By imposing this condition, we obtain the following wave equations:

ρ ∂ 2V ∇ V − µε 2 = − v ∂t ε 2 ∂ A ∇ 2 A − µε 2 = −µ J ∂t 2

10 Prob. 1.23 1

2 2 2 2 r − r' = ( x − x') + ( y − y ') + ( z − z ' )  R=   1  ∂   ∂ ∂ 1 2 2 2 − 2  a a a ∇ = + + − + − + − y ' x x' y ' z z' ( ) ( ) ( ) x y z    ∂y ' ∂z '   R  ∂x ' 3 2 2 2 − 2  1   a = − − − − + − + − + a y and a z terms y 2 x x' x x' y ' z z' ) ( ) ( ) ( ) ) ( ( x      2 − ( x − x') a z + ( y − y ') a y + ( z − z') a z  R = 3 R R3

1 ∇ = R = =

1  ∂ ∂ ∂   2 2 2 − 2 ax + ay + a z  ( x − x') + ( y − y') + ( z − z')    ∂y ∂z    ∂x 3 1 2 2 2 − 2 − 2 ( x − x') a x ( x − x') + ( y − y') + ( z − z')  + a y and az terms   2 − ( x − x') a z + ( y − y ') a y + ( z − z') a z  R − 3 3 = R R

Prob. 1.24 (a) a = 1, b = 2, c = 0, b2 – 4ac = -16 Hence, it is elliptic. (b) a = 0, c = y 2 + 1, b = x 2 + 1, b 2 − 4ac = −4( x 2 + 1)( y 2 + 1) < 0 Hence it is elliptic. (c ) a = 1, b = −2 cos x, c = −(3 + sin 2 x) b 2 − 4ac = 4 cos 2 x + 12 + 4sin 2 x = 16 > 0 Hence it is hyperbolic. (d) a == x 2 , b −2 xy, c = y2 , b 2 − 4ac= 4 x 2 y 2 − 4 x 2 y 2= 0 Hence it is parabolic.

Prob. 1.25 (a) = a α, = b 0, (b) a = 1, b = 0,

= c 0, c= 0,

b 2 − 4ac = 0; i.e. it is parabolic. b 2 − 4ac = −4; i.e. it is elliptic.

2

 ∂ 2Φ ∂ 2Φ  (c)  2 + 2  = 0 ∂y   ∂x a= 1, b = 0, c= 1,

b 2 − 4ac = −4; i.e. it is elliptic.

11 CHAPTER 2 Prob. 2.1

Let Φ ( x, y ) = X ( x)Y ( y ). aX '' Y + bX ' Y '+ cXY ''+ dX ' Y + eXY '+ fXY = 0 Dividing through by XY, 1 1 bX ' Y ' (aX ''+ dX ') + (cY ''+ eY ') + 0 +f = X Y XY The PDE is separable if and only if a and d are functions of x only; c and e are functions of y only; b = 0; and f is a sum of a function of x only and a function of y only, i.e. if a ( x)Φ xx + c( y )Φ yy + d ( x)Φ x + c( y )Φ y + [ f1 ( x) + f 2 ( y )]Φ = 0 Prob. 2.2 (a) Consider the problem as shown below.

y

0

b

-Vo

Vo

a

0

This is similar to Example 2.1 with V1 = V3 = 0,

V2 = Vo ,

x

V4 = −Vo . Hence

sinh (nπ x / b) sin(nπ y / b) 4Vo sinh [nπ (a − x) / b]sin(nπ y / b) − ∑ π n odd n sinh(nπ a / b) π n odd n sinh(nπ a / b) = = A+ B A− B But sinh A − sinh B = 2 cosh sinh 2 2 ∞ 4Vo sin(nπ y / b) nπ a V ( x, y ) 2 cosh sinh [nπ ( x − a / 2) / b] ∑ π n =odd n sinh(nπ a / b) 2b V ( x, y )

4Vo





We now transform coordinates: x= X + a / 2, y= Y +b/2



12

y

Y

X

x

 nπ  sin  (Y + b / 2)  4V nπ a nπ X  b  V ( x, y ) = o ∑ 2 cosh sinh π n =odd 2b b  nπ a   nπ a  2n sinh   cosh    2b   2b  But  nπ Y nπ   nπ Y   nπ Y  = + sin   sin   cos(nπ / 2) + sin(nπ / 2) cos   2   b  b   b   nπ Y  = n= (−1) n cos  odd ,  b   nπ x  (−1) n sinh   cos(nπ y / b) ∞ 4Vo b   V ( x, y ) = ∑ π n =odd  nπ a  n sinh    2b  (b) Let V(x,y) = X(x)Y(y), nπ y Yn ( x) sin X n ( x) A1e − nπ x / b + A2 e nπ x / b = , = b A 2 = 0 since X(x) = 0 as x → ∞ V ( x, y ) = ∑ an sin(nπ y / b)e − nπ x / b ∞

V (0, y= ) V= o

∑a

n

sin(nπ y / b)

n = even  0, b 2  = an = Vo sin(nπ y / b)dx  4Vo b ∫0 n = odd  nπ , 4V ∞ 1 V ( x, y ) = o ∑ sin(nπ y / b)e − nπ x / b π n =odd n

13 Prob. 2.3 (a) This is similar to Example 2.1 with V= V= V= 0, 2 3 4 Hence, = V ( x, y )





n = odd

V= Vo x / a . 1

An sin(nπ x / a ) sinh[nπ (b − y ) / a ]

When y = 0, we obtain Vo x / a =





n = odd

An sin(nπ x / a ) sinh(nπ b / a )

Multiplying both sides by sin (mπx/a) and integrating over 0 < x < a leads to a Vo x 2 An = sin(nπ x / a )dx ∫ a sinh(nπ b / a ) 0 a a

 a2  2Vo ax = 2 cos(nπ x / a )   2 2 sin(nπ x / a ) − a sinh(nπ b / a )  n π nπ 0 2Vo (−1) n +1 = nπ sinh(nπ b / a )

(b) Let V(x,y) = X(x)Y(y). X ( x) A1 cos β x + A2 sin β x = Since X(-a/2) = 0 = X(a/2) A 2 = 0 and , Y ( y ) B1 cosh β y + B2 sinh β y X ( x) A1 cos β x= =

= V ( x, y )

∑ cos β x ( C

n

cosh β y + Dn sinh β y )

Substituting = V ( x, b / 2) Vo cos(π x / a )= and V ( x, −b / 2) Vo cos(π x / a ) gives n = 1, D n = 0, and Vo Cn = cosh(π b / a ) Thus, V ( x, y ) =

Vo cos(π x / a ) cosh(π y / a ) cosh(π b / a )

Prob. 2.4 (a) Let U(x,y) = X(x) Y(y). = X ( x) A1 cos β x + B1 sin β x X '(0) = 0 → B1 = 0 X '(π ) = 0 → 0= − A1β sin βπ i.e. β = n, n = 1,2,3, …

= X ( x) A1 cos = nx, Y A2 cosh β y + B2 sinh β y Y (0) =0 → A2 =0 Hence

14 ∞



U ( x, y ) = C + ∑ C cos nx sinh ny = ∑ Cn cos nx sinh ny

n 0 = n 1= n 0 ∞

∑C

U ( x, π )= x=

n =0

cos nx sinh nπ

n

Multiplying both sides by cos mx and integrating over 0< x < π yields Co = π / 2 and n = even 0,   Cn =  4 n = odd  π n 2 sinh nπ ,

π



4



cos nx sinh ny 2 π n =odd n 2 sinh nπ (b) First, transform the equation by letting U(x,t) = x + u(x,t) where ut = ku xx Subject to u(0,t) = 0 = u(1,t), u(x,0) = -x If u(x,t) = X(x)T(t), = X ( x) A1 cos β x + A2 sin β x X (0) =0 → A1 =0 U ( x, y= )



X (1) = 0

0 = sin β



i.e. β = nπ. 2 2 = X ( x) A= T Be − kn π t 2 sin nπ x, ∞

u ( x, t ) = ∑ Cn sin nπ xe − kn π 2

2

t

n =1



u ( x, 0) =− x =∑ Cn sin nπ x n =1

2(−1) n −2 ∫ x sin nπ xdx = Cn = nπ 0 1

2 2 2 ∞ (−1) n U ( x, t ) = x + u ( x, t ) = x+ ∑ sin nπ xe − kn π t π n =1 n (c ) Let u(x,t) = X(x)T(t). From the boundary conditions, = X ( x) A cos β x + B sin β x X (0) =0 → A =0 X (1) = 0 → 0 = sin β or β = nπ. = X A sin nπ= x, T A3 sin nπ at + A4 cos nπ at T (0) = 0 → A4 = 0

15 ∞

u ( x, t ) = ∑ Cn sin nπ x sin nπ at n =1

ut ( x, 0)= x=



∑ nπ aC n =1

n

sin nπ x

Multiplying both sides by sin mπx and integrating over 0 < x < 1 yields 2 cos nπ a Cn = − (nπ a ) 2 2 ∞ cos nπ a u ( x, t ) = − 2 2 ∑ sin nπ x sin nπ at π a n =1 n 2 Prob. 2.5 (a) Let Φ ( ρ , φ ) = R( ρ ) F (φ ) . From the text, eq. (2.72) onward, = F (φ ) C1 cos(λφ ) + C2 sin(λφ ) C1 = 0 since Φ ( ρ , 0) = Φ ( ρ , π ) . Also, n = 1,2,3,… Fn (φ ) = Cn sin nφ As ρ → 0,

Rn= (φ ) an ρ n + bn ρ − n Φ ( ρ , φ ) must be finite, i.e. a n = 0. ∞ sin nφ Φ( ρ , φ ) = ∑ An n n =1

Φ (1, φ ) = sin φ =

ρ



∑ A sin nφ n =1

n

which implies that = A1 1, = An 0 if n ≠ 1 . Thus sin φ Φ( ρ , φ ) =

ρ

(b) From the boundary conditions, Φ does not depend on φ, i.e. m= 0 in F’’ + m2F = 0. From eq. (2.89) in the text, = Z ( z ) C1 sin µ z + C2 cos µ z If Z(0) = 0 = Z(L), then = C2 0 and = µ L n= π or µ nπ / L . Also,

ρ 2 R ''+ ρ R '− µ 2 ρ 2 R = 0 2 2 2 2 ρ R ''+ ρ R '+ ( j µ ρ − 0) R = 0 The solution to this is = R( ρ ) An I 0 ( ρµ ) + Bn K 0 ( ρµ ) B n = 0 since K 0 is infinite at ρ = 0. Thus ∞

Φ( ρ , z ) = ∑ An I 0 (nπρ / L) sin(nπ z / L) n =1

To obtain A n , we apply the boundary condition Φ(a,z) = 1, multiply both sides by sin mz/L, and integrate over 0 < x < L. We get

16 n = even 0,   An =  4 n = odd  nπ I (nπ a / L) , o 

4 ∞ I 0 (nπρ / L) sin(nπ z / L) Φ( ρ , φ ) = ∑ n π n =1,3,5 I 0 (nπ a / L) (c ) Let Φ ( ρ , φ , t ) = R( ρ ) F (φ )T (t ) . Separation of variables leads to

T '+ µ 2 kT =0 → T (t ) =e − k µ t From eq. (2.88) in the text, = Fn (φ ) an cos nφ + bn sin nφ Since Φ ( ρ , φ= , 0) ρ cos φ , = bn 0 and Fn (φ ) = an cos nφ Finally, ρ 2 R ''+ ρ R '+ ( µ 2 ρ 2 − n 2 ) R = 0 2

with solution = R( ρ ) c1 J n ( ρµ ) + c2Yn ( ρµ ) c 2 = 0 since R must be finite at ρ = 0. Hence 2 Φ( ρ , φ , t ) = ∑∑ Anµ J n ( ρµ ) cos nφ e− k µ t µ

n

But Φ(a,φ,t) = 0 implies that J n ( µ a )= 0= J n ( X m ) where X m are the roots of J n and µ = X m /a . Also, Φ ( ρ , φ , 0)= ρ 2 cos 2φ = ∑∑ Anµ J n ( ρµ ) cos nφ µ

n

It is evident that n = 2 and that ρ 2 = ∑ A2 µ J 2 ( ρµ ) µ

which is the Fourier-Bessel expansion of ρ2. From Table 2.1, property (h), a 2 2 = A2 µ = ρ 2 J 2 ( ρµ )d ρ 2 2 ∫ a [ J 3 (a µ )] 0 X m J3 ( X m ) Thus ∞ J ( ρ X m / a) = Φ ( ρ , φ , t ) 2∑ 2 cos 2φ exp(− X m2 kt / a 2 ) X J ( X ) m =1 m 3 m where J 2 ( X m ) = 0 .

17 Prob. 2.6 Let

U ( x, t ) = X ( x)T (t )

X ''( x)T = X ( x)T '(t )

T '(t ) X ''( x) == −α 2 T (t ) X ( x)



−α 2T (t ), −α 2 X ( x) T '(t ) = X ''( x) = Since U x (0, t )= 0= U x (1, t ) X'(0)=X'(1), we can readily show that −α t T= e= e− n π n (t )

α =nπ , X n= ( x) cos nπ x,

2

2 2

t

= = nπ x)e − n π t , n 1, 2,.3,... U n ( x, t ) cos( 2 2

Thus, = U ( x, t )



∑ A cos(nπ x) exp(−n π 2

n =1

n

2

t)

We now impose the initial condition: ∞

∑ A cos(nπ x)

= = π x) U ( x, 0) cos(2

n =1

n

It is evident that the series only exists for n=2. Thus, U ( x, t) = cos(2π x) e −4π

2

t

Prob. 2.7 Let U ( x, t ) = X ( x)T (t )

X '' T ' = = λ X T X (0) = X (1) 0=



−(nπ ) 2 λ=

= X c= T c2 e − n π 1 sin( nπ x ),

2 2



U ( x, t ) = ∑ An e − n π t sin(nπ x) n =1

2 2

t

18

U ( x, 0) = x(1 − x) =



∑ A sin(nπ x) n =1

n

1 1 − 2 x 1  x2 − x 2  − 3 3  cos nπ x  An = 2 ∫ ( x − x 2 ) sin(nπ x) = 2  2 2 cosn π x −  nπ   nπ 0n nπ 0  8 n = odd 4  3 3, n  = 3 3 1 − (−= 1)   n π nπ  0, n = even

U ( x, t ) =

Thus,

8



1 − n2π 2t e sin(nπ x) π n =odd n3 3



Prob. 2.8 It can readily be shown that = U ( x, y, t ) sin(3π x) sin(4π y ) cos(5t ) +

x

π

Prob. 2.9 Let U(x,y,t) = X(x)Y(y)T(t) X’’YT + XY’’T = XYT’ Divide through by XYT X '' Y '' T ' + = = constant = − λ 2 X Y T 2 T '+ = λ T 0  →= T (t ) Co exp(−λ 2t ) X '' Y '' X '' Y '' + = −λ 2  → + λ 2= − = constant = µ 2 X Y X Y Y ''+= µ 2Y 0

 →

= Y ( y ) C1 cos µ y + C2 sin µ y

= Y (0) 0

 →

C = 0 1 +0

or = C1 0

= Y (1) 0

 →= 0 C2= sin µ sin π n, i.e. µ =nπ

Y ( y ) = C2 sin nπ y

For X, X '' + λ 2= − µ2 0  → X 2 where τ= λ2 − µ2 = X ( x) C3 cos τ x + C4 sin τ x = X (0) 0

 →

X ''+= τ 2X 0

C = 0 3 +0

or = C3 0

 →= 0 = sin τ sin π m, i.e. τ =mπ 2 2 2 2 λ =τ + µ =n π + m 2π 2

= X (1) 0

2

19

= U ( x, y , t )





∑ ∑A

mn

n =0

m

sin(mπ x) sin(nπ y ) exp(−λ 2t )

To get A mn., we impose the initial condition. = U ( x, y= xy , 0) 10





m

n =0

∑ ∑A

mn

sin(mπ x) sin(nπ y )

Multiply both sides by sin(m’πx) sin(n’πy) and integrating, ∞

1 1



1 1

∫ ∫ 10 xy sin(m 'π x) sin(n 'π y)dxdy = ∑∑ Amn ∫ ∫ sin(mπ x) sin(m 'π x) sin(n 'π y) sin(nπ y)dxdy m 1= n 1 =

0 0

0 0

All terms on the right-hand side (RHS) vanish except when n=m’ and m = m’. 1

1

0

0

LHS = 10 ∫ x sin(mπ x)dx ∫ y sin(nπ y )dy 10 1 1 1 = sin mπ x − mπ x cos mπ x ]0 2 2 [sin nπ y − nπ y cos nπ y ]0 2 2 [ mπ nπ 10 = cos mπ cos nπ mnπ 2 1 1

RHS = ∫ ∫ Amn sin 2 (mπ x) sin 2 (nπ y )dxdy 0 0

1

1

2 Amn (1/ 2)(1/ 2) = A= mn ∫ sin ( mnx ) ∫ sin ( nny ) 2

0

0

Equating the left-hand side (LHS) with the right-hand side (RHS) gives 40 cos(mπ ) cos(nπ ) Amn = mnπ 2 Thus, cos(mπ ) cos(nπ ) 2 t) sin(mπ x) sin(nπ y ) exp(−λmn π =m 1 =n 1 mn 2 where = λmn (mπ ) 2 + (nπ ) 2 . U ( x, y , t )

40 2





∑∑

Prob. 2.10 Φ xxxx + a= Φ tt 0, = a 1/ 4, and Φ ( x, y ) = X ( x)T (t ) The general solution for X is X = A sinh λ x + B cosh λ x + C sin λ x + D cos λ x Applying the boundary conditions,

Let

(1) (2

20 (0) 0 X= ''(0) 0 X=



X (1) = 0 X ''(1) = 0



B= +D 0 B= −D 0



A sinh λ + B cosh λ + C sin λ + D cos λ = 0 A sinh λ + B cosh λ − C sin λ − D cos λ = 0



From the conditions in (3), A = 0 = B = = D, λ n= π , n 1, 2,3,L X n ( x) = Cn sin nπ x For T, Tn (t ) En sin(aλ 2t ) + Fn cos(aλ 2t ) =

T '(0) =0



En =0

Hence, ∞

Φ ( x, t ) = ∑ gn sin(nπ x) cos(an2π 2t ) n =1

Imposing the condition ∞

Φ ( x, 0) = x= ∑ gn sin(nπ x) n =1

gn =

lads to

2(−1) n +1 nπ

2(−1) n +1 Φ ( x, t ) = sin(nπ x) cos(an 2π 2t ) ∑ nπ n =1 ∞

Prob. 2.11 V ( x, y ) = X ( x)Y (y) Separating the variables leads to X(x)=c1 sin α x + c2 cos α x X'(x)=α c1 cos α x − α c2 sin α x X '(0) =0



c1 =0

X '(π ) = 0



− α c2 sin απ = 0

X(x)=c2 cosn x Y(y)=c3 sinh α y + c4 cosh α y Y'(y)=α c3 cosh α y + α c4 sinh α y

→ α = n,

n = 1, 2,3,...

Thus,

(3)

21 Y'(π ) =0 V ( x, y ) =

→ α c3 cosh απ + α c4 sinh απ =0 ∞





−c3 c4 = tanh nπ

cosh ny 

∑ A cos nπ sinh ny − tanh nπ  n =1

n

, 0) cosx Vy ( x= =





1



∑ A n cos nπ − tanh nπ  n =1

n

This series exists for n=1 so that cosh ny   ( x, y ) sinh ny − cos x V = tanh nπ  

Prob. 2.12 Using separation of variables

V(x,y)=(a1e −α x + a2 eα x )(a3 sin α y + a4 cos α y ) V ( ∞, y ) = 0



a2 = 0

V (x, 0) =0



a4 =0

V (x,1) =0



α = nπ ,

n =1, 2,3,...

Hence, ∞

V ( x, y ) = ∑ an e − nπ x sin(nπ y ) n =1



V (0, y ) = 10 =∑ an sin(nπ y ) n =1

Thus, V ( x, y ) =

sin(nπ y ) − nπ x e π n =odd n

40





 40 n = odd  , → an = nπ  0, n = even

22 Prob. 2.13 U ( x, t ) = X ( x)T (t ) Let 1 T '' X '' → == XT '' = a 2 X '' T λ a2 T X = X ''− λ X 0, T ''− a 2= λT 0 −α 2 , λ=

X (0) = X (1) 0=

−α = −(nπ ) λ= X ( x) = c1 sin(n π x) 2

T ''+ a 2α 2T = 0 U t ( x, 0) =0

X ( x) = c1 sin(α x)



nπ , α=

T (t ) = c2 sin(aα t ) + c3 cos(aα t )

→ →



2

T '(t ) =0



c2 =0



U ( x, t ) = ∑ An sin(nπ x) cos(nπ at ) n =1

Imposing U ( x, 0)= (1 − x) x, U ( x, 0) = x(1 − x) =



∑ A sin(nπ x) n =1

n

Multiply both sides by sin(mπ x) and integrate over 0 1, i.e. the scheme is unstable for all r. (b)

(

Φ in +1 = Φ in −1 + r Φ in+1 + Φ in−1 − Φ in +1 − Φ in −1

)

73 An +1 =An −1 + rAn [e jkx + e − jkx ] − r[ An +1 + An −1 ] (1 − r ) 2r n = An −1 + A cos kx (1 + r ) 1 + r or

2r (1 − r ) g cos kx − 0 = 1+ r (1 + r ) From this, we obtain g2 −

1  r cos kx ± 1 − 2r 2 sin 2 (kx / 2)    1+ r | g |≤ 1 for all values of r; i.e. the scheme is stable. = g

Prob. 3.18 (a) Let Φ in =An e jki∆ An +1e jki∆ =An e jki∆ − rAn (e jk ∆ − e − jk ∆ )e jki∆ An +1 = 1 − 2r sin k ∆ = 1 − jα An where = α 2r sin k ∆ . | g |2 = gg * = 1 + α 2 = 1 + 4r 2 sin 2 k ∆ ≥ 1 showing that the algorithm is unstable. (b) Similarly, An +1 An [cos k ∆ − 2 jr sin k ∆] = so that = g cos k ∆ − j 2r sin k ∆ g=

* | g= |2 gg = cos 2 k ∆ + 4r 2 sin 2 k ∆

=1 − (1 − 4r 2 ) sin 2 k ∆ | g |2 ≤ 1



r ≤ 1/ 2

or ∆t ≤

∆x . u

Prob. 3.19 Let U in,l = An e jk xix e

jk y ly

(i) An +1e jk xix e

jk y ly

=An e jk xix e

+ An e jk xix e

jk y ( l −1) y

jk y ly

+ r[ An e jk x (i −1) x e

− 2 An e jk xix e jk y ly

jk y ly

jk y ly

− 2 An e jk xix e

+ An e jk xix e

jk y ( l +1) y

Dividing by An e jk xix e gives n +1 A = 1 + r[−4 + 2 cos k x x + 2 cos k y y ] An An +1 Since = g and |g|≤1, An

]

jk y ly

+ An e jk x (i +1) x e

jk y ly

74

12r (2 + cos k x x + cos k y y ) ≤ 1 (ii)

1 − 2r (2 + 1 + 1) ≤ −1 or r ≤ 1/ 4 Similarly, An +1 = (1 + 2r cos k x x − 2r )(1 + 2r cos k y y − 2r ) An

Hence,

g= (1 + λ1r )(1 + λ2 r ) where λ= 2(cos k x x − 1), λ= 2(cos k y y − 1). 1 2 | g |≤ 1 → −1 ≤ g ≤ 1 The maximum value of λ 1 or λ 2 is 0, the upper limit of g ≤ 1 is satisfied. For the lower limit, −1 ≤ g = (1 + λ1r )(1 + λ2 r )

λ1λ2 r 2 + (λ1 + λ2 )r + 2 ≥ 0 The minimum value of λ 1 or λ 2 is -4. When λ 1 = λ 2 = -4, r is complex. Since r must be real, λ 1 and λ 2 cannot be minimum at the same time If either λ 1 or λ 2 is minimum and the other is maximum, say λ 1 = -4 , λ 2 = 0, then −4r + 2 ≥ 0 → r= 1/ 2

Prob. 3.20 (a)

∇gE = 0 ∇gH = 0 ∇ × E = −µ

∂H ∂t

H σ E +ε ∇×=

∂E + Js ; σ E + Js ∂t

∂J ∂ ∂E (∇ × H ) = − µσ −µ s ∂t ∂t ∂t ∂J ∂E ∇(∇gE ) − ∇ 2 E = − µσ −µ s ∂t ∂t 2 2 ∂J ∂ E ∂ E ∂E µ s + 2 − µσ = 2 ∂x ∂y ∂t ∂t

∇ × ∇ × E = −µ

From this,

∂J ∂ 2 Ez ∂ 2 Ez ∂E µ sz + − µσ z = 2 2 ∂x ∂y ∂t ∂t (b) If J s = 0 and ∆x = ∆y = ∆, we obtain the following formulas.

75 (i) Euler:

Ein, +j 1 − Ein, j Ein+1, j − 2 Ein, j + Ein−1, j Ein, j +1 − 2 Ein, j + Ein, j −1 = + ∆t µσ∆ 2 µσ∆ 2 =

Ein+1, j + Ein−1, j + Ein, j +1 + Ein, j −1 − 4 Ein, j

µσ∆ 2

or Ein, +j 1 = (1 − 4r ) Ein, j + r ∑ Ein, j ,

∆ r =2 µσ∆

(ii)

Leapfrog: Ein, +j 1 − Ein, −j 1 Ein+1, j − 2 Ein, j + Ein−1, j Ein, j +1 − 2 Ein, j + Ein, j −1 = + µσ∆ 2 µσ∆ 2 2∆t

∑ = or

Ein, j − 4 Ein, j

µσ∆ 2

Ein+1, j = Ein, −j 1 + 2r  ∑ Ein, j − 4 Ein, j 

(iii)

Dufort-Frankel: Ein, +j 1 − Ein, −j 1 Ein+1, j − Ein, −j 1 − Ein, +j 1 + Ein−1, j Ein, j +1 − Ein, +j 1 − Ein+−1,1 j + Ein−1, j = + 2∆t µσ∆ 2 µσ∆ 2

∑ =

Ein, j − 2 Ein, j − 2 Ein, −j 1

µσ∆ 2 (1 + 4r ) Ein, +j 1 =2rEin, j − 4rEin, −j 1 + Ein, −j 1 or = Ein, +j 1

1 − 4r n −1 2r Ei , j + Ein, j ∑ 1 + 4r 1 + 4r

(c ) Let Ein, j = An cos(k x i∆) cos(k y j ∆) (i) Euler: An +1 =(1 − 4r + r λ ) An Where = λ 2(cos k x ∆ + cos k y ∆) An +1 g E = n =1 − 4r + r λ A | g E |≤ 1 → −1 ≤ g E ≤ 1 Since r > 0 and λ ≤ 4, g E ≤ 1 always. The limit -1 ≤ g E requires that 1 + r (λ − 4)= g ≥ −1 → 2 ≥ r (4 − λ ) or  2  r ≤ min    4−λ 

76 Since the minimum value of λ is -4, r ≤ 4 for stability. (ii) Leapfrog: Let An +1 An = gL = An An −1 g L2 − 2r (λ − 4) g L − 1 =0

g L+ = r (λ − 4) + r 2 (λ − 4) 2 + 1 g L− = r (λ − 4) − r 2 (λ − 4) 2 + 1 Since, g L+ + g L− = −1 , it is impossible to satisfy | g L |≤ 1 for all k x and 2r (λ − 4), g L+ g L− = k y . Hence the Leapfrog scheme is always unstable. (iv) Dufort-Frankel: An +1 An g DF = = An An −1 + = g DF

r λ + r 2 (λ − 4)(λ + 4) + 1 1 + 4r

− = g DF

r λ − r 2 (λ − 4)(λ + 4) + 1 1 + 4r

+ − Both | g DF | and | g DF | are bounded by 1 for all k x and k y so that the scheme is unconditionally stable.

n j β kδ Prob. 3.21 Substituting = Exn (k ) A= e , H ny (k )

H yn +1/ 2 (k + 1/ 2)= H yn −1 (k + 1/ 2) +

An

η

e j β kδ into

δt  Exn (k ) − Exn (k + 1)  µδ

gives An +1/ 2 = An −1/ 2 + η Rb An e − j βδ / 2 − e j βδ / 2 

where Rb =

n +1/ 2 δt An n −1/ 2 , .= Let g A= n A A µδ

1 g 2 = 1 − j 2 gη Rb sin βδ 2

But

µ δ t uδ t = = r ε µδ δ 2 g + 2 pg = − 1 0, = p jr sin βδ / 2 η= Rb

g1 =− p − p 2 + 1, g 2 =− p + p 2 + 1

If | gi |≤ 1, i = 1, 2, p must lie between –j and j, i.e. − j ≤ jr sin βδ / 2 ≤ j or − 1 ≤ r sin βδ / 2 ≤ 1

77 Prob. 3.22 (a) y

1

2

3

V = 100

4

5 x V=0

Applying the difference method, V V V1 = 3 + 2 + 25 4 2 1 V2 = (V1 + V4 ) + 50 4 V1 V4 V= + 3 4 2 1 (V2 + V3 + V5 ) V= 4 4 V4 V= + 50 5 4 Applying these equations iteratively, we obtain the results below.

No. of iterations V1 V2 V3 V4 V5

0

1

2

3

4

5…

100

0 0 0 0 0

25.0 56.25 6.25 15.63 53.91

54.68 67.58 21.48 35.74 58.74

64.16 74.96 33.91 41.96 60.49

70.97 78.23 38.72 44.86 61.09

73.79 79.54 40.63 45.31 61.37

74.68 80.41 41.89 45.95 51.49

78

(b)

100

1

2

3

4

On the interface,

ε1 1 ε2 3 = = , 2(ε1 + ε 2 ) 8 2(ε1 + ε 2 ) 8 V 3V V1 = 2 + 3 + 12.5 4 8 3V V V2 = 12.5 + 4 + 1 8 4 1 = V3 (V1 + V4 ) 4 1 = V4 (V2 + V3 ) 4 Applying this iteratively, we obtain the results shown in the table below.

No. of iterations V1 V2 V3 V4

0

1

2

3

4

5…

100

0 0 0 0

12.5 15.62 3.125 4.688

17.57 18.65 5.566 6.055

19.25 19.58 6.33 6.477

19.77 19.87 6.56 6.608

19.93 19.96 6.6634 6.649

20 20 6.667 6.667

Prob. 3.23 By symmetry, = V1 V= V2 . Hence, 5 , V4 2V= 50 + V2 1 4V2 = 100 + V1 + V3 2= V3 100 + V2 Solving these simultaneously leads to

(1) (2) (3)

79 = V1 54.17 = V5 ,= V2 58.33 = V4 = , V3 79.16 Prob. 3.24 The finite difference solution is obtained by following the same steps as in Example 3.6. We obtain Z= 43Ω . o Prob. 3.25 Due to double symmetry of the solution region, we consider only a quarter section and apply the appropriate finite difference formulas at the lines of symmetry. Let nx = A / ∆, ∆ = ∆x = ∆y (a) nx =30 → Z o =60.36Ω nx =45



Z o =60.46Ω

nx =60 → Z o =60.51Ω (The exact solution gives = Z o 60.61Ω .) (b)

nx =18



Z o =49.87Ω

nx =36



Z o =50.28Ω

nx =72 → Z o =50.44Ω (The exact solution gives Z= 50Ω .) o Prob. 3.27 = (a) Exact: kc 4.4429 for nx n y . The FDM gives =

nx 2 4 6 8 10

kc 4.0 4.3296 4.3928 4.414 4.425

(b) Exact: kc 3.5124 = = for nx 2n y . The FDM gives

nx

kc

2 4 6 8 10

3.2163 3.436 3.478 3.493 3.5

80 Prob. 3.28 The results remain almost exactly the same as in Example 3.9. Prob. 3.29 (a) When Ez= 0 and

∂ = 0, H x= 0= H y and Maxwell’s equations ∂z

become ∂H z ∂E y ∂Ex = − ∂t ∂x ∂y ∂E ∂H z =ε x ∂y ∂t ∂E ∂H z − = ε y ∂x ∂t Hence, Yee algorithm becomes H zn +1/ 2 (i + 1/ 2, j + 1/ 2)= H zn −1/ 2 (i + 1/ 2, j + 1/ 2) + α [ Exn (i + 1/ 2, j + 1) −µ

− Exn (i + 1/ 2, j )] + α [ E yn (i, j + 1/ 2) − E yn (i + 1, j + 1/ 2)] E yn +1 (i + 1/ 2, = j ) γ E yn (i, j + 1/ 2) + β [ H zn +1/ 2 (i − 1/ 2, j + 1/ 2) − H zn +1/ 2 (i + 1/ 2, j + 1/ 2)] Exn +1 (i + 1/ 2, j ) =γ Exn (i + 1/ 2, j ) + β [ H zn +1/ 2 (i + 1/ 2, j + 1/ 2) − H zn +1/ 2 (i + 1/ 2, j − 1/ 2)] ∂ = 0, E x= 0= E y and Maxwell’s equations become ∂z ∂Ez ∂H y ∂H x ε= − ∂t ∂x ∂y ∂H x ∂E µ = − z ∂t ∂y ∂H y ∂Ez µ = ∂t ∂x Hence, Yee algorithm becomes H xn +1/ 2 (i, j + 1/ = 2) H xn −1/ 2 (i, j + 1/ 2) + α [ Ezn (i, j ) − Ezn (i, j + 1)]

(b) When H z= 0 and

H yn +1/ 2 (i + 1/ 2, = j ) H yn +1/ 2 (i + 1/ 2, j ) + α [ Ezn (i + 1, j ) − Ezn (i, j )] Ezn +1 (i, j )= γ Ezn (i, j ) + β [ H yn +1/ 2 (i + 1/ 2, j ) − H yn +1/ 2 (i − 1/ 2, j )] + β [ H xn +1/ 2 (i, j − 1/ 2) − H xn +1/ 2 (i, j + 1/ 2)]

81 Prob. 3.30 Typically, E z of the TEM wave for j=30 and time cycle n=95 is plotted below [80, p. 37].

Prob. 3.31 Typically, E z of the TEM wave for j=30 and time cycle n=95 is plotted below [80, p. 90].

82 Prob. 3.32 For TM ( H z= 0) and

∂ = 0, H ρ= 0= Eφ and Maxwell’s equations ∂φ

become ∂H = − φ − σ Eρ ∂t ∂z ∂Hφ Hφ ∂E 1 ∂ ε z = + − σ Ez ( ρ Hφ ) − σ Ez = ∂t ρ ∂ρ ∂ρ ρ ∂Hφ ∂Ez ∂Eρ µ = − ∂t ∂ρ ∂z Applying eqs. (3.82) to (3.83) to the three equations above gives Hφn +1/ 2 (i, = j ) Hφn (i, j ) + α [ Ezn +1/ 2 (i, j + 1/ 2) − Ezn +1/ 2 (i, j − 1/ 2)]

ε

∂Eρ

− α [ Eρn +1/ 2 (i + 1/ 2, j ) − Eρn +1/ 2 (i − 1/ 2, j )] Ezn +3/ 2 (i, = j + 1/ 2) γ Ezn +1/ 2 (i, j + 1/ 2) + β [ Hφn +1 (i, j + 1) − Hφn +1 (i, j ) +

1 n +1 Hφ (i, j + 1/ 2)] j

+ β [ Hφn +1 (i + 1, j ) − Hφn +1 (i, j )]

γ Ezn +1/ 2 (i + 1/ 2, j ) Eρn +3/ 2 (i + 1/ 2, j ) = where α = δ t / µδ , β = δ t / εδ , γ = 1 − σδ / ε

Prob. 3.33 (b) at x = 0 Ezn +1 (0, j , k + 1/= 2) Ezn (1, j , k + 1/ 2) +

Prob. 3.34

µo

∂H xy

∂t ∂H xz µo ∂t ∂H yz

µo

µo

∂t ∂H yx

∂t ∂H zx µo ∂t ∂H zy

µo

∂t

coδ t − δ  Ezn +1 (1, j , k + 1/ 2) − Ezn +1 (0, j , k + 1/ 2)  coδ t + δ

∂ + σ *y H xy = − ( Ezx + Ezy ) ∂y ∂ + σ z* H xz = ( E yx + E yz ) ∂z ∂ + σ z* H yz = − ( Exy + Exz ) ∂z ∂ + σ x* H yx = − ( Ezx + Ezy ) ∂x ∂ + σ x* H zx = − ( E yx + E yz ) ∂z ∂ + σ *y H zy = − ( Exy + Exz ) ∂y

(1) (2) (3) (4) (5) (6)

83

εo

∂Exy

∂t ∂E ε o xz ∂t ∂E yz

εo

εo

∂t ∂E yx

∂t ∂Ezx εo ∂t ∂Ezy

εo

∂t

∂ ( H zx + H zy ) ∂y ∂ + σ *y Exz = − ( H yx + H yz ) ∂z ∂ + σ z* E yz = − ( H zx + H zy ) ∂x ∂ + σ x* E yx = − ( H zx + H zy ) ∂x ∂ + σ x* Ezx = ( H yx + H yz ) ∂x ∂ + σ *y Ezy = − ( H xy + H xz ) ∂x + σ *y Exy =

(7) (8) (9) (10) (11) (12)

Prob. 3.35

Maxwell’s equations can be written as ∂E ∇ ×= +σ E H ε ∂t ∂H ∇ × E = −µ + σ *H ∂t In Cartesian coordinates, these become

∂H x ∂E y ∂Ez = − − σ *H x ∂t ∂z ∂y ∂H y ∂Ez ∂Ex µ = − − σ *H y ∂t ∂x ∂z ∂H z ∂Ex ∂Ez µ = − − σ *H z ∂t ∂z ∂x ∂ Hy ∂E ∂H z ε x = − − σ Ex ∂t ∂y ∂z ∂E ∂H x ∂H z ε y = − − σ Ey ∂t ∂z ∂x ∂H y ∂H x ∂E ε z = − − σ Ez ∂t ∂z ∂y

µ

For

∂ = 0= H z , Ez= 0= E y . Thus for TM case, Maxwell’s equations become ∂x

84 ∂H y ∂H x ∂Ez + σ Ez = − ∂t ∂z ∂y ∂H x ∂E + σ *H z = − z µo ∂t ∂y ∂H y ∂Ez + σ *H y = µo ∂t ∂x If we let E , we obtain E E = + z zx zy

ε

∂H y ∂Ezx + σ x Ezx = ∂t ∂x ∂Ezy ∂H x + σ y Ezy = − εo ∂t ∂y ∂H x ∂ + σ *y H x = − ( Ezx + Ezy ) µo ∂t ∂y ∂H y ∂ + σ x* H y = µo ( Ezx + Ezy ) ∂t ∂x

εo

Prob. 3.36 Let H z = e jωt e − jk x x e − jk z z E= E= E= e jωt e − jk x x e − jk z z yz yz y

Substituting these into the FDTD equation yields δt n H zn +1/ 2 (i + 1/ 2, k ) e jωδ t / 2 − e jωδ t / 2  = E y (i + 1/ 2, k ) e − jk xδ / 2 − e jk xδ / 2  − µδ j 2 H zn sin(ωδ t / 2) =

δt n E y sin(k xδ / 2) µδ

or = Zz

E y µoδ sin(ωδ t / 2) = δ t sin(koδ / 2) Hz

Prob. 3.37 For ∆ = 0.25, no. of iterations = 500, we apply the ideas in section 3.10 and obtain the following results. ρ

z

Potential

5 5 5 10 15

18 10 2 2 2

65.852 23.325 6.3991 10.226 10.343

85 Prob. 3.38 Applying the finite difference scheme of section 3.10, for ∆ = 0.25 and 100 iterations, we obtain the following results. ρ

z

Potential

5 5 5 10 15

18 10 2 2 2

11.474 27.870 12.128 2.342 0.3965

Prob. 3.39 1/ 2 H ρn += (i, j ) H ρn −1/ 2 (i, j ) −

mδ t

Ezn (i, j ) +

δt  Eφn (i, j + 1) − Eφn (i, j )  µδ 

µρi δt 1/ 2  Eρn (i, j + 1) − Eρn (i, j ) + Ezn (i, j ) − Ezn (i + 1, j )  Hφn += (i, j ) Hφn −1/ 2 (i, j ) − µδ Prob. 3.40 The finite difference equation is: U ( ρ + ∆ρ , t ) − 2U ( ρ , t ) + U ( ρ − ∆ρ , t ) U ( ρ + ∆ρ , t ) − U ( ρ − ∆ρ , t ) + (∆ρ ) 2 2 ρ∆ρ U ( ρ , t + ∆t ) − U ( ρ , t ) = ∆t ∆t Let ∆ρ = h and α = 2 . h hα αU ( ρ + h, t ) − 2αU ( ρ , t ) + αU ( ρ − h, t ) + [U ( ρ + h, t ) − U ( ρ − h, t )] 2ρ = U ( ρ , t + ∆t ) − U ( ρ , t )   h  h  U ( ρ , t + ∆= t ) α 1 +  U ( ρ + h, t ) + α  1 −  U ( ρ − h, t ) + (1 − 2α )U ( ρ , t )  2ρ   2ρ  or 1 1   U (i, n + 1)= α 1 +  U (i + 1, n) + α 1 −  U (i − 1, n) + (1 − 2α )U (i, n)  2i   2i  For ρ = 0, U (∆ρ , t= ) U (−∆ρ , t ) . Applying L’Hopital’s rule,

lim

→0 ρ 

Thus,

1 ∂U ∂ 2U = ρ ∂ρ ∂ρ 2

(1)

86 ∂U 2U ρρ U t  →= ∂t U ( ρ + h, t ) − U ( ρ , t ) + U ( ρ − h, t )  U ( ρ , t + ∆t ) − U ( ρ , t ) 2  = ∆t h2  4α [U ( ρ + h, t ) − U ( ρ= , t ) ] U ( ρ , t + ∆t ) − U ( ρ , t ) = ∇ 2U

U ( ρ , t= + ∆t ) 4αU ( ρ + h, t ) + (1 − 4α )U ( ρ , t ) Thus, for ρ = 0, (2) U (0,= n + 1) 4αU (1, n) + (1 − 4α )U (0, n) 2 Let h = 0.1, α = ¼, then ∆= t α h= 0.0025 Using (1) and (2), we can develop a MATLAB program to calculate U at ρ = 0.5, t = 0.1, 0.2, etc. We obtain the following results.

t

Exact

FD

0.1 6.0191 6.0573 0.2 3.3752 3.3533 0.3 1.8833 1.8770 0.4 1.0619 1.0516 0.5 0.5955 0.5893 1.0 0.0330 0.0325 Prob. 3.41 The finite difference equivalent of ∇ 2U = U t is U ( ρ + ∆ρ , z , t ) − 2U ( ρ , z , t ) + U ( ρ − ∆ρ , z , t ) U ( ρ + ∆ρ , z , t ) − U ( ρ − ∆ρ , z , t ) + (∆ρ ) 2 2 ρ∆ρ U ( ρ , z + ∆, t ) − 2U ( ρ , z , t ) + U ( ρ , z − ∆z , t ) U ( ρ , z , t + ∆t ) − U ( ρ , z , t ) + = (∆z ) 2 ∆t Let ∆ρ = ∆z = h, and α =

∆t h2

  h  h  U ( ρ , z , t + ∆= t ) α 1 +  U ( ρ + h, z , t ) + α 1 −  U ( ρ − h, z , t )  2ρ   2ρ  +αU ( ρ , z + h, t ) + αU ( ρ , z − h, t ) + (1 − 4α )U ( ρ , z , t ) or 1 1   U (i, j , n + 1)= α 1 +  U (i + 1, j , n) + α 1 −  U (i − 1, j , n)  2i   2i  +αU (i, j + 1, n) + αU (i, j − 1, n) + (1 − 4α )U (i, j , n) For ρ = 0, U (∆ρ , z , t= ) U (−∆ρ , z , t ) . Applying L’Hopital’s rule,

lim

ρ  →0

Thus,

1 ∂U ∂ 2U = ρ ∂ρ ∂ρ 2

(1)

87 ∂U Ut  → 2U= ρρ + U zz ∂t U ( ρ + ∆ρ , z , t ) − 2U ( ρ , z , t ) + U ( ρ − ∆ρ , z , t )  2  (∆ρ ) 2   = ∇ 2U

U ( ρ , z + ∆z , t ) − 2U ( ρ , z , t ) + U ( ρ , z − ∆z , t )  U ( ρ , z , t + ∆t ) − U ( ρ , z , t ) + = (∆z ) 2 ∆t   ∆t Let ∆ρ =h =∆z , α = 2 , ρ =0, U (h, z , t ) =U (−h, z , t ) h U (0, z , t + ∆= t ) 4αU (h, z , t ) + αU (0, z + h, t ) + αU (0, z − h, t ) + (1 − 6α )U (0, z , t ) or (2) 1) 4αU (1, j , n) + αU (0, j + 1, n) + αU (0, j − 1, n) + (1 − 6α )U (0, j , n) U (0, j , n += 2 Let h = 0.1, = α 0.1  → = ∆t = α h 0.001 . Using (1) and (2), we develop a computer program to determine U at ρ = 0.5, z = 0.5, t = 0.05, 0.1, 0.15, etc. The finite difference results are compared with the exact analytical results as shown below.

t

Exact

FD

0.05 0.10 0.15 0.20 0.25 0.30

6.2475 2.8564 1.3059 0.5971 0.5971 0.1248

6.3848 2.9123 1.2975 0.5913 0.27 0.1233

Prob. 3.42 (a)= Exact: y ' cos = x 0.9047 x = 0.44 Finite difference: y ( x + ∆x) − y ( x − ∆x) y (0.46) − y (0.42) = y' = 2∆x 2(0.2) 0.44395 − 0.40776 = = 0.90475 0.4 0.52

(b)Exact:



0.4

0.52 ydx = − cos x = 0.05324 0.4

Finite difference: ∆x = 0.02

88

0.52



ydx=

0.4

0.02 [0.38942 + 4(0.40776) + 2(0.42594) + 4(0.44395) + 2(0.46178) 3 + 4(0.47943) + 0.49688] = 0.05324 1

Prob. 3.43 Exact:



f ( x)dx = 4x −

0

x3 1 = 3.667 3 0

(a) Trapezoidal rule, h =0.2,

1  n −1  I= h  ∑ fi + ( f o + f n )  2  i =1  1   = 0.2  f (0.2) + f (0.4) + f (0.6) + f (0.8) + (4 + 3)  2   = 0.2 [3.96 + 3.84 + 3.64 + 3.36 + 3.5] = 3.66 (b) Newton-Cotes with n=3, f(0)=4. Let m = 2n = 6, f(1/6) = 3.96, f(2/6) = 3.89, f(3/6) = 3.75, f(4/6) = 3.56, f(5/6) = 3.3, f(6/6) = 3. 1 3( ) 6 [ 4 + 3(3.87) + 3(3.89) + 3.75]= 1.96 A1= 8 = A2 0.0625[3.75 + 3(3.56) + 3(3.3) = + 3] 1.71 A = A1 + A2 = 3.67

Prob. 3.44 The exact solution is found in any standard text on antenna theory, i.e. π  cos 2  cos θ  2 4 6 2 dθ= 1  (2π ) − (2π ) + (2π ) + L    ∫0 sin θ 2  2(2!) 4(4!) 6(6!)  = 1.218

π

1

Prob. 3.45 Exact:

∫e

−x

dx = −e − x

0

1 1 − 1/ e = 0.6321 = 0

Numerical: For n = 2, m= 2, h =0.5 1 (0.5)[e0 + 4e −0.5 += e −1 ] 0.6323 3 For n= 4, m= 4, h = 0.25, 4 = A (0.25)[7e0 + 32e −0.25 + 12e −0.5 + 32e −0.75 + 7= e −1 ] 0.6321 90 = A

89 For n= 6, m = 6, h = 0.167, 6 = A (0.167)[41e0 + 216e −0.167 + 27e −0.334 + 272e −0.5 + 27e −0.667 840 + 216e −0.834 + 41e −1 ] = 0.6320 Prob. 3.46 Using MATLAB, we obtain the exact solution as J1 (4) = −0.06604

Prob. 3.47 In this case, n = 3, I = ao f ( xo ) + a1 f ( x1 ) + a2 f ( x2 ) + a3 f ( x3 ) Let f(x) = 1, 4

∫ 1dx = 4 = a

0

+ a1 + a2 + a3

(1)

0

Let f(x) = x,. 4

∫ xdx =8 =0 + a + 3a 1

2

+ 4a3

(2)

0

Let f(x) = x2, 4

64

∫ x dx = 3 2

=0 + a1 + 9a2 + 16a3

(3)

0

Let f(x) = x3, 4

∫ x dx =64 =0 + a + 27a 3

1

2

+ 64a3

0

Solving (1) to (4) gives a= a= 0 3

2 , 9

a= a= 1 2

16 9

Prob. 3.48 Exact solution is F (2, π / 2) = 1.68575 Prob. 3.49 Exact solution: - 0.4116 + j0 Prob. 3.50 (a) Exact solution is 2.9525 (b) The exact solution is 16.

(4)

90 CHAPTER 4 Prob. 4.1 (a) 1

2 dx ∫ x (2 − x)=

v> < u ,=

−1

2 x3 x 4 1 − = 1.333 3 4 −1

(b) 1

2

∫ ∫

< u , v >=

(x 2 − 2 y 2 )dxdy =

= x 0= y 1

x3 1 y3 2 = −4.667 (1) − 2(1) 3 0 3 1

(c )

∫∫∫ ( x + y) xzdxdydz

In cylindrical system, = x ρ= cos φ , y ρ sin φ ,= dxdydz ρ dφ dzd ρ 2π

2

= < u, v >

5

∫ ∫ ∫ (zρ ρ φ

2

cos 2 φ + z ρ 2 sin φ cos φ ) ρ dφ dzd ρ

= 0 = 0=z 0



1 1  3 ∫0 zdz ∫0 ρ d ρ ∫0  2 (1 + cos 2φ ) + 2 sin 2φ dφ 25 = = (4)π 157.08 2 5

2

=

Prob. 4.2 (a) < h(− x), f ( x= )>

Let u = - x, dx = - du. )> < h(− x), f ( x=

∫ h(− x) f ( x)dx u ) ∫ h(u )[− f (−u )]du ∫ h(u ) f (−u )d (−=

= < h( x), − f (− x) >

(b) = < h(ax), f ( x) >

1

h(u ) f (u / a )du / a ∫ h(u )[ f (u / a )]du ∫= a

1 = < h( x), f ( x / a ) > a (c ) < f '( x), h( x) >= ∫ f '( x)h( x)dx

Integrating by parts, let u = h(x), du = f’(x)dx dv = h’(x)dx, u = f(x) b < f= '( x), h( x) > h( x) f ( x) − ∫ f ( x)h '( x)dx a If h(x) or f(x) vanishes at x = a and x = b, = -

91

(d) By integrating by parts n times as done once in (c ), it can readily be shown that
= ( − 1) < f ( x ), > dx n dx n

Prob. 4.3 2  x3 x 2  1  3x 2 x3  2 x ( x − 1) dx + (2 − x )( x − 1) = dx − + − 2 x − 3 20  2 ∫0 ∫1 3  1    = 1/ 3 − 1/ 2 + 6 − 4 − 8 / 3 − 3 / 2 + 2 + 1/ 3 = 4 − 2 − 2 = 0 1

< f,= g>

Prob. 4.4 Let

, y ') ( y ') 2 + y 2 F ( x, y=

∂F ∂  ∂F  −   =0 → ∂y ∂x  ∂y '  or 0 y ''− y =

2y −

∂ (2 y ') =0 ∂x



2 y − 2 y '' =0

Prob. 4.5 (a) Let

F ( x, y, y= ') y '2 + 2 ye x

= Fy 2= ex , Fy ' 2 y ' d d2y Fy ' =2e x − 2 y '' =0 → y '' = 2 =e x dx dx x y =e + Ax + B → −1 y(0) = 0 → 0= 1+ B B= → y (1) =1 → 1 =e + A + B A =2 − e

0 =Fy −

y ( x) = e x + (2 − e) x − 1



dy =∫ e x dx =e x + A dx

92 (b) Let F ( x, y, y ') = y 2 + y '2 + 2 y 2 y + 2, 2y ' Fy = Fy ' = d Fy ' = 2 y + 2 − 2 y '' = 0 dx 1 (1) y ''− y =

0 = Fy −

y ''− y= 0

y= Ae x + Be − x



Let y ( x) = Ae x + Be − x + C Substituting this into (1) yields C=-1 y ( x) = Ae x + Be − x − 1 1 0 y (0) A + B −= = 0 → Ae + Be −1 − 1 =0 → Solving these gives e −1 2e A= B = A +1 = , e +1 e +1 Hence, y (1) =0

(e − 1)e x 2e − x +1 + −1 y ( x)= e +1 e +1 (c) Similar to part (b). We obtain y ''− y = ex y ( x) = Ae x + Be − x + Cxe x

Let

y ' = Ae x − Be − x + Ce x + Cxe x y '' = Ae x + Be − x + 2Ce x + Cxe x y ''−= y ex

x ex → 2Ce= −x

y ( x) = Ae + Be + 0.5 xe 0 → 0= y (0) = A+ B x

C 1/ 2 → =

x

→ B= −A

y (1) = 0 → 0 = Ae + Be −1 + e / 2 Solving for A and B leads A=

−e 2 2(e 2 − 1)

= y ( x)

−e 2 (e x − e − x ) 1 x + xe 2(e 2 − 1) 2

Prob. 4.6 (a) or

F ( x, y, y= ') y '2 − y 2

93

(b)

y’’ + y = 0 F ( x, y, y ', y '') =5 y 2 − ( y '') 2 + 10 x d ' d2 d2 0 = Fy − Fy + 2 Fy '' = 10 y − 0 + 2 (−2 y '') dx dx dx

or d4y − 10 y = 0 dx 2 F ( x, u , v, u ', v ') = 3uv − u 2 + u '2 − v '2 2

(c)

∂F d  ∂F  d −   = 3v − 2u − (2u ') ∂u dx  ∂u x  dx 3v – 2u – 2u’’ = 0 ∂F d  ∂F  d 0= −   =3u − (−2v ') ∂v dx  ∂vx  dx 0=

or

or 3u + 2v’’ = 0 Prob. 4.7 (a)

F ( x, y, y ')= 2 y '2 + yy '+ y '+ y , Fy = y '+ 1, Fy ' = 4 y '+ y + 1

d ' Fy = y '+ 1 − 4 y ''− y ' = 0 dx y ''= 1/ 4 → y= x 2 / 8 + Ax + B y(0) = 0 gives B = 0, while y(1) = 1 leads to A = 7/8. Thus 1 2 = y ( x + 7 x) 8 0 = Fy −

(b)

F ( x, y, y ') = y '2 − y 2 , Fy = −2 y, Fy ' = 2y '

d ' Fy = −2 y − 2 y '' = 0 dx y ''+ y= 0 → y= c1 sin x + c2 cos x y(π/2) = 0 and y(0) = 1 gives c 1 = 0 and c 2 = 1. Hence, y = cos x Fy − 0=

Prob. 4.8 Let Φ o be the unique solution, i.e. LΦ o =g . Then I =< LΦ, Φ > −2 < Φ, LΦ o > Adding and subtracting < LΦ o , Φ o > and noting that if L is self-adjoint, < LΦ o , Φ ) = < LΦ, Φ o > , we obtain I =< L(Φ − Φ o ), Φ − Φ o > − < LΦ o , Φ o >

94 Since L is positive definite, the last term on the right-hand side is always positive, while the first term is greater than or equal to zero. Thus, I assumes its least value when Φ = Φo . Prob. 4.9 ∂F dF ∂F ∂F dy ∂F dy ' ∂F ∂F = + + = + y '+ y '' ∂y ' dx ∂x ∂y dx ∂y ' dx ∂x ∂y In addition, d  dF  d  dF  dF = y ''  y'  y'  + dx  dy '  dx  dy '  dy ' Subtracting (2) from (1) gives

(1)

(2)

 ∂F d  dF   d  dF  ∂F −  F − y'  = + y '  dx  dy '  ∂x  ∂y dx  dy '   The last term on the RHS is zero according to Euler's equation. Hence, d  dF  ∂F = 0 F − y' − dx  dy '  ∂x as required.

95

Prob. 4.10 Let εβ (x) be the variation in V. Then 2  ∂   I= + δ I ∫  (V + εβ )  − 2 g (V + εβ ) dx   ∂x  Subtracting the given I from I + δ I yields 2 2  ∂    ∂V  δ I ∫  (V + εβ )  − 2 g (V + εβ ) −  =  + 2 gV dx   ∂x   ∂x 

  ∂V  ∂β  2  ∂β  ∫ 2 ε ∂x  ∂x  − g β  dx + ∫ ε  ∂x  dx The first integral can be simplied by integration by parts 2

=

∂V  ∂β  ∂V ∂ ∂V εβ − ∫ εβ dx  =  dx ∂x  ∂x  ∂x ∂x ∂x Substituting this into δ I

∫ε

∂V  ∂ ∂V   ∂β  + g εβ dx + ∫ ε 2  δI = 2 εβ − 2∫   dx ∂x  ∂x ∂x   ∂x  The last integral vanies as ε → 0. Also, the first integral vanishes due to the fact that β =0 on the boundary. Hence 2

 ∂ ∂V  + g ( x) εβ dx = 0  ∂x ∂x  This implies that the term in brackets is zero.

δ I = 2∫ 

∂ ∂V + g ( x) ∂x ∂x



d 2V = − g ( x) dx 2

Prob. 4.11 F ( x, y , Φ , Φ x , Φ y = )

1 2 1 (Φ x + Φ 2y ) − k 2 Φ 2 + g Φ 2 2

Applying Euler’s equation ∂F ∂  ∂F  ∂  ∂F  −  0 = −  ∂Φ ∂x  ∂Φ x  ∂y  ∂Φ y  ∂ ∂ − k 2 Φ + g − (Φ x ) − (Φ y ) = 0 ∂x ∂y 2 g = Φ xx + Φ yy + k Φ or ∇ 2Φ + k 2Φ = g

96

Prob. 4.12 1 2 ε E − ρvV 2 Euler’s equation for this 3-D case is = F F (V , V= x , Vy )

But

∂F ∂  ∂F  ∂  ∂F  ∂  ∂F  −  0= −  −   ∂V ∂x  ∂Vx  ∂y  ∂Vy  ∂z  ∂Vz  ∂V ∂V ∂V Vx = Vy = Vz = = − Ex , = −Ey , = − Ez ∂x ∂y ∂z ∂F E =Ex2 + E y2 + Ez2 , = − ρv ∂V

(1)

Hence,

∂F ∂F ∂F ∂E = − = − = −ε Ex ∂Vx ∂Ex ∂E ∂Ex Similarly, ∂F ∂F = −ε E y , = −ε Ez ∂Vy ∂Vz Substituting (2) and (3) into (1) gives ∂ ∂ ∂ 0= − ρv + (ε Ex ) + (ε E y ) + (ε Ez ) ∂x ∂y ∂z or ρv = ∇gε E = ∇gD

Prob. 4.13 = F

But

1 = σ E 2 F (V ,Vx ,Vy ,Vz ) 2

∂V ∂V ∂V ax − ay − az ∂x ∂y ∂z E 2 = Vx2 + Vy2 + Vz2 E = −∇V = −

According to Euler’s equation, ∂F ∂  ∂F  ∂  ∂F −  0= −  ∂V ∂x  ∂Vx  ∂y  ∂Vy ∂F 1 = σ (2V = σ= Vx J x x) ∂Vx 2

Hence,

 ∂  ∂F   −    ∂z  ∂Vz 

∂ ∂ ∂ 0= 0 − (J x ) − (J y ) − (J z ) ∂x ∂y ∂z

(2)

(3)

97 or

∇gJ = 0

Prob. 4.14 ∂

∫∫∫  ∂x (ε

I δ=

x

 ∂V ∂ ∂V ∂ ∂V ) + (ε y ) + (ε z ) + ρv δ Vdxdydz = 0 ∂x ∂y ∂y ∂z ∂z 

∂ ∂V (ε x )δ Vdxdydz + L + ∫∫∫ ρvδ Vdxdydz = 0 ∂x ∂x ∂ ∂V ∂ ∂V To integrate by parts, let = u dv = , dv (ε x ), then = du δ Vdx = , v εx ∂x ∂x dx dx = − ∫∫∫

− ∫∫∫

∂ ∂V ∂V ∂V ∂   (ε x )δ Vdxdydz = − ∫∫ δ V ε x − ∫εx δ Vdx dydz ∂x ∂x ∂x ∂x ∂x  

δ= I

∫∫∫ ε

Thus, 

 ∂V ∂ ∂V ∂ ∂V ∂ δV + ε y δV + ε z δ V + δρvV dxdydz ∂x ∂x ∂y ∂y ∂z ∂z 

x

− ∫∫ δ V ε x

∂V ∂V ∂V dydz − ∫∫ δ V ε y dxdz − ∫∫ δ V ε z dxdy ∂x ∂y ∂z

2 2   ∂V 2   ∂V   ∂V  2 V = ε + ε + ε − ρ  x y z  v dxdydz    2 ∫∫∫   ∂x  ∂y  ∂z      ∂V ∂V ∂V dydz − δ ∫∫ V ε y dxdz − δ ∫∫ V ε z dxdy − δ ∫∫ V ε x ∂x ∂y ∂z The last three terms vanish if we assume either homogeneous Dirichlet or Neumann conditions at the boundaries. Thus, 2 2 2   ∂V  1   ∂V   ∂V  = + + − I ε ε ε ρ V 2  dxdydz   x y z v     ∂y  ∂z  2 ∫v   ∂x     

δ

Prob. 4.15 Equation (4.26) can be written, for three dimensional problem, as

δ

(| ∇Φ | 2∫

= δI

2

v

)

−2 f Φ dv − δ ∫ Φ S

∂Φ Substituting = h − g Φ gives ∂n

δ

(| ∇Φ | 2∫

= δI

2

− 2 f Φ )dv +

v

δ

2 ∫S

∂Φ dS ∂n

Φ ( g Φ − h)dS

or = I

∫ (| ∇Φ |

2

v

− 2 f Φ )dv + ∫ ( g Φ 2 − 2hΦ )dS S

98 Prob. 4.16 If - y’’ + y = sin π, Ly − g =− y ''+ y − sin π x =0 1

I =< Ly − y, δ y >= ∫ (− y ''+ y − sin π x)δ ydx 0

1

1

1

0

0

0

δI = ∫ − y ''δ ydx + ∫ yδ ydx − ∫ sin π xδ ydx 1 δ δ = − y ' δ y + ∫ ( y ') 2 dx + ∫ y 2 dx + δ ∫ y sin π xdx 0 20 2 1

=

δ

1

y' + y 2 ∫ 2

2

− 2 y sin π x dx

0

or 1

1 ( y ') 2 + y 2 − 2 y sin π x dx 2 ∫0  I (Φ ) =< LΦ, Φ > −2 < Φ, g > , where

I = Prob. 4.17

d2 L= − 2 + 1, g = 4 xe x dx d 2Φ I (Φ= ) ∫ (−Φ 2 + Φ 2 )dx − 2 ∫ 4 xe x dx dx d 2Φ Integrating the first term by parts, u = Φ, dv =2 dx dx d 2Φ dΦ 1 dΦ dΦ Φ = Φ −∫ dx dx ∫ dx 2 dx 0 dx dx 1

I (Φ )=

∫ (Φ ' + Φ 2

2

)

− 8 xe x Φ dx − Φ (1)Φ '(1) + Φ (0)Φ '(0)

0

But

Φ '(1) = Φ (1) − e and Φ '(0) = Φ (0) + 1, 1

I (Φ )=

∫ (Φ ' + Φ 2

2

)

− 8 xe x Φ dx − Φ 2 (1) + Φ 2 (0) + eΦ (1) + Φ (0)

0

Prob. 4.18 For the exact solution, x3 −Φ = + Ax + B 6 Φ (0)= 0 → B= 0 Φ (1) = 2 → A= −13 / 6 13 1 Φ ( x) = x − x3 = 2.1667 x − 0.1667 x3 6 6

99 If −Φ= '' x, Φ ''+= x 0 1

∫ (Φ ''+ x)δΦdx

δI=

0

1  = I 1∫  (Φ ') 2 − xΦ dx 2  0  F=

1 (Φ ') 2 − xΦ 2

For N = 1,

% Φ = uo + a1u1= 2 x + a1 ( x 2 − x) %' = 2 + a (2 x − 1) Φ 1 1

= I

1

∫  2 [2 + a (2 x − 1)]

2

1

0

 − x[2 x − a1 ( x 2 − x)]  dx 

4 a a2 = + 1+ 1 3 12 6 dI = 0 da1



1 a1 + = 0 12 3



Hence, 1 % Φ = 2 x − ( x 2 − x) 4 = 2.25 x − 0.25 x 2 For N = 2, % Φ = uo + a1u1 + a2u2

= 2 x + a1 ( x 2 − x) + a2 ( x3 − x 2 ) 1

= I

1

∫  2 (Φ%') 0

2

%dx − xΦ 

4 a1 a2 a1a2 a12 a22 = + + + + + 3 12 20 6 6 15 dI a1 a2 1 = 0 → + = da1 3 6 12 1 dI a1 2a2 0 → = + = − da2 6 15 20 Solving these two equations yields 1 a1 = a2 = − 6 Hence,

a1= −

1 4

100 1 1 % Φ = 2 x − ( x 2 − x) − ( x3 − x 2 ) 6 6 = 2.1667 x − 0.1667 x 3 which is the same as the exact solution. N = 3 gives the same result.

Prob. 4.19

Compare your result with the following exact solutions. sin x + 2sin(1 − x) = + x2 − 2 (a) U ( x) sin1 2 cos(1 − x) − sin x = + x2 − 2 (b) U ( x) sin1 The approximate solutions are obtained as follows. (a) Let u = a1 x(1 − x) = a1 ( x − x 2 ), u ' = a1 (1 − 2 x)

The variational principle is 1

I (= u)

∫ (u ')

2

− u + 2 x u dx = 2

2

0

1

∫ a

2 1

(1 − 2 x) 2 − a12 ( x − x 2 ) 2 + 2a1 ( x3 − x 4 ) dx

0

 x3 2 x 4 x5 x 4 x5  1 4x =  a12 ( x + 2 x 2 + + ) + 2a1 ( − )  = a12 (4.3) + 0.1a1 ) − a12 ( − 3 3 4 5 4 5 0  3

∂I = 8.6a1 + 0.1 = 0 → ∂a1

−1/ 86 a1 =

Thus, 1 − x(1 − x) u= 86 (b) Let u= a1 x( x − 2)= a1 ( x 2 − 2 x),

u=' a1 (2 x − 2)

The variational principle is 1

I= (u )

2 2 2 dx ∫ (u ') − u + 2 x u = 0

1

∫ a

2 1

(4 x 2 − 8 x + 4) − a12 ( x 4 − 4 x 3 + 4 x 2 ) + 2a1 ( x 4 − 2 x 3 ) dx

0

5  2 4x 4 x3 x5 2 x 4  1 2 2 x 4 =  a1 ( − 4 x + 4 x) − a1 ( − x + ) + 2a1 ( − = ) a12 (0.8) − 0.6a1 3 5 3 5 4 0  ∂I a1 = 3 / 8 = 1.6a1 − 0.6 = 0 → ∂a1 3

Thus, = u

3 x( x − 2) 8

101

Prob. 4.21

Compare your result with the following exact solutions. 1

(a) = Φ ( x)

π2 1

(b) = Φ ( x)

π2

(cos π x + 2 x − 1) (cos π x − 1)

Prob.4.22 y, y ') (a) Let F ( x,=

1 ( y ') 2 − y 2

Using eq. (4.20). ∂F ∂  ∂F  −   =0 ∂y ∂x  ∂y ' 



−1−

∂ (y') =−1 − y'' =0 ∂x

d2y +1 = 0 dx 2

or

Prob. 4.23 (a) u= x(1 − x m ). For m=1, m % a 1 x(1 − x), Φ % = Φ =' a1 (1 − 2 x) Substitution into eq. (4.36) in Example 4.5 leads to 1

I (a= 1)

∫ a

2 1

(1 − 4 x + 4 x 2 ) + 2a1 ( x3 − x 4 ) − 4a12 ( x 2 − 2 x3 + x 4 ) dx

0

1 2 1 a1 + a1 5 10

=

∂I 2 1 1 =a1 + = a1 = − 0 → ∂a1 5 10 4 %= a u = − 1 x(1 − x) Φ 1 1 4 For m=2, % a1u1 + a2u= = Φ a1 ( x − x 2 ) + a2 ( x − x3 ) 2 % Φ =' a (1 − 2 x) + a (1 − 3 x 2 ) 1

2

1

% I (Φ = )

∫ [a

2 1

(1 − 4 x + 4 x 2 ) + a22 (1 − 6 x 2 + 9 x 4 ) + a1a2 (1 − 2 x − 3 x 2 + 6 x 3 ) − 4a12 ( x 2 − 2 x 3 + x 4 )

0

−4a ( x 2 − 2 x 4 + x 6 ) − 4a1a2 ( x 2 − x 3 − x 4 + x 5 ) + 2a1 ( x 3 − x 4 ) + 2a2 ( x 3 − x 5 )]dx 2 2

102 I (a1 , a2 ) =

1 2 3 1 1 52 2 a1 + a1a2 + a1 + a2 + a2 5 5 10 6 105

∂I 2 3 1 = − a1 + a2 = 0 → ∂a1 5 5 10 ∂I 3 104 1 = 0 → a1 + a2 = − ∂a2 5 105 6 Solving these gives a1 = 1/ 36, a2 = −7 / 36. Hence, % x(0.0184 x 2 − 0.0263 x − 0.1579) = Φ

For m = 3, % a ( x − x 2 ) + a ( x − x3 ) + a ( x − x 4 ) = Φ 1 2 3 2 % Φ =' a (1 − 2 x) + a (1 − 3 x ) + a (1 − 4 x 3 ) 1

2

3

Substituting these in I(Φ) and taking the derivatives, we obtain ∂I =→ 0 0.4a1 + 0.6a2 + 0.724a3 = −0.1 ∂a1 ∂I = −1.67 0 → 0.6a1 + 0.99a2 + 1.267 a3 = ∂a2 ∂I = 0 → 0.724a1 + 1.267 a2 + 1.682a3 = −0.214 ∂a3 Solving these leads to a1 = −0.223, a2 = −0.506, a3 = −0.157 %= −0.223( x − x 2 ) − 0506( x − x3 ) − 0.157( x − x 4 ) Φ (b) um = sin mπ x < um , Lun > Amn =  d2  m x π sin  2 + 4  sin nπ xdx ∫0  dx  1

=

1

=

∫ 4sin mπ x sin nπ x − n π 2

0

m≠n  0,  =  4 − n 2π 2 ,m = n  2 

2

sin nπ x sin mπ x  dx

103 1

Bn =< g , un >= ∫ x 2 sin nπ xdx 0

 n π −4  3 3 , = nπ − 1 ,  nπ 2

2

n = odd n= even

For m = 1, 4 −π 2 3 π2 −4 A11 = , B1 =3 → a1 = B1 / A11 = − 3 2 π π For m = 2, 4 −π 2  π 2 − 4  0    3  2 π    [ A] = , [ B ] = 2 4 − 4π    1   0   − 2π  2  2   − 3    a1  π −1  = A] [ B ]  a  [= 1    2  4π (π 2 − 1)  For m = 3,  π2 −4  4 −π 2    3 0 0    π  2   1   2 [ A] = 2 − 2π 0  0  , [ B] =  − 2π  2  0  2  0 2 − 4.5π    9π − 4     27π 3 

 a1  a  =  2  a3 

 2   −π3    1   −1 [= A] [ B]  π (π 2 − 1)     − 2   27π 3 

Prob. 4.24 (a) For the collocation method, select w1 (= x) δ ( x − xi ), where 10 10 xi = + (i − 1) 6 3 (b) For the subdomain method, select subdomains 0 ≤ x ≤ 10/3, 10/3 ≤ x ≤ 20/3, 20/3 ≤ x ≤ 10.

104 %. (c ) For Galerkin method, w( x) = Φ %. (d) For the least squares method, select w( x)= LΦ The results are as shown below.

Method

a1

a2

a3

(a) Collocation (b) Subdomain (c ) Galerkin (d) Least squares

10.33 10.44 10.21 10.21

-1.40 -1.61 -1.32 -1.32

0.48 0.67 0.35 0.35

Prob. 4.25 (a) R = Φ ''+ Φ + x = (−2 + x − x 2 )a1 + (2 − 6 x + x 2 − x3 )a2 + x R(1/4) = 0 and R(1/2) = 0 gives  29 /16 −35 / 64   a1  1/ 4  =  7/4 7 / 8   a2  1/ 2   i.e.= / 31 0.1935,= 217 0.1843 a1 6= a2 40 /= 1

(b)

(1 − x)dx 0 and ∫ Rx= 0

1

∫ Rx

2

(1 − x)dx give

0

 3 /10 3 / 20   a1  1/12  3 / 29 13 /105  a  = 1/ 20    2   i.e. a1 71/= 369 0.1924,= a2 7= / 41 0.1707 = ∂R ∂R (c ) w1 = =−2 + x − x 2 , w2 = =2 − 6 x + x 2 − x3 , ∂a1 ∂a2 1

1

0

0

= and ∫ w2 Rdx 0 ∫ w1Rdx 0= give = a1 1.8754, = a2 0.1695 Prob. 4.26 = y a= a1 x(1 − x 2 ) 1u1 The residual is

105 d2y −6a1 x + 100 x 2 R =2 + 100 x 2 = dx The Ritz method requires that 1

1

0

0

2 ∫ R( x)dx =∫ (−6a1 x + 100 x )dx =0

 x3  1 2 3 a x 100 0 − +   = 1 3 0  This gives a1 = 100 9 Hence, 100 y ( x) x(1 − x 2 ) = 9 Prob. 4.27 = y a= a1 x(1 − x 3 ) 1u1 d2y R= + 100 x 2 = −12a1 x 2 + 100 x 2 dx 2 1

= ∫ Rdx 0

1

 →

0

∫ (−12a x 1

2

+ 100= x 2 )dx 0

0

 x 1 3 0  −4a1 x + 100 =  3 0  3

 →= a1

100 12

Hence,

100 x(1 − x3 ) 12 which is the exact solution. = y ( x)

Prob. 4.28 (a) For Galerkin method, a1 0.93344, a2 3.6795 = = (b) For the least square method = a1 0.932718, = a2 3.6627 Prob. 4.29 %=1 − x + a x(1 − x) + a x 2 (1 − x) + a x 3 (1 − x) Φ 1 2 3 1

∫ w Rdx = 0 i

gives

0

−70 133a1 + 63a2 + 36a3 = −98 140a1 + 108a2 + 79a3 = −210 264a1 + 252a2 + 211a3 =

106 Solving these equations yields a1 = −0.209, a2 = −0.7894, a3 = −0.209 and %= (1 − x)(1 − 0.209 x − 0.789 x 2 − 0.209 x3 ) Φ Prob. 4.30 = Φ a1u1 + a2u= a1 x(1 − x 2 ) + a2 x(1 − x 4 ) 2

 d2  R = LΦ − g =  2 − 1 Φ + 100 x  dx  2 d Φ= a1 (−6 x) + a2 (−20 x 2 ) dx 2 R ( x) = a1 (−7 x + x 3 ) + a2 (− x + 20 x 2 + x 5 ) + 100 x R(1/= 3) 0

or



 → = 0 a1 (−7 / 3 + 1/ 27) +a2 (−1/ 3 + 20 / 9 + 1/ 243) +

100 = −2.296a1 + 1.893a3 3

/ 3) 0 R(2=

(1)

0 a1 (−14 / 3 + 8 / 27) +a2 (−2 / 3 + 80 / 9 + 32 / 243) +  → =

200 = −4.37 a1 + 8.354a3 3 We can use MATLAB to solve (1) and (2) and obtain a1 = 13.9587, a2 = −0.6784 Thus, = Φ 13.28 x − 13.96 x3 + 0.678 x 5

or

100 3



Prob. 4.31 Let y%( x) =a1u1 + a2u2 + a3u3

and

un =x(1 − x n ).

y%( x) = a1 ( x − x 2 ) + a2 ( x − x3 ) + a3 ( x − x 4 )

 d2  Amn =< wm , Lun >= ∫ wm  − 2 ( x − x n +1 ) dx  dx  0 0.5 0.1875 0.0625 [ A] =  1 0.5 ???  1.5 1.6875 1.6875  1

1

Bmn =< wm , Mun >= ∫ wm ( x − x n +1 )dx 0

 0.0575 0.0615 0.0625 [ B] =  0.2083 0.2344 0.2438 0.4219 0.4834 0.5150 

(2)

200 3

107 [ A= ] λ[ B] → [ A][ B]−1 − λ[ = I] 0 Solving this gives = λ1 9.968, = λ2 44.9, = λ3 95.6 Exact: λn = (nπ ) 2 , i.e. λ1 9.8696, λ2 39.48, λ3 88.83 = = = Prob. 4.32

For the given problem, d2 L= − 2, M= 1 dx Using the Galerkin method, wm = um Amn = < um , Lun > 1  d2  A11 =< u1 , Lu1 >= ∫ x(1 − x)  − 2 x(1 − x) dx = ∫ 2( x − x 2 )dx = 1/ 3  dx  0 0 1

1  d2  A21 =< u2 , Lu1 >= ∫ x (1 − x)  − 2 x(1 − x) dx = ∫ 2( x 2 − x3 )dx = 1/ 6  dx  0 0 1 1 2  d  2 A22 =< u2 , Lu2 >= ∫ x 2 (1 − x)  − 2 x 2 (1 − x) dx = ∫ ( x 2 − x3 )(−2 + 6 x)dx = 15  dx  0 0 1

2

1  d2 2  A12 =< u1 , Lu2 >= ∫ x(1 − x)  − 2 x (1 − x) dx = ∫ ( x − x 2 )(−2 + 6 x)dx = 1/ 6  dx  0 0 Bmn = < um , Mun > 1

1

1

B11 =< u1 , u1 >= ∫ x(1 − x) x(1 − x)dx = ∫ ( x 2 − 2 x 3 + x 4 )dx = 0

0

1

1 30

1

B12 = B21 =< u2 , u1 >= ∫ x (1 − x) x(1 − x)dx = ∫ ( x 3 − 2 x 4 + x5 )dx = 2

0

0

1

1

0

0

B22 =< u2 , u2 >= ∫ x 2 (1 − x) x 2 (1 − x)dx = ∫ ( x 4 − 2 x5 + x 6 )dx =

= [ A][a ] λ[ B][a ]

 →

1 60

1 105

1/ 3 1/ 6   a1  1/ 30 1/ 60   a1  λ   1/ 6 =    2 /15  a2   1/ 60 1/105  a2 

or [ A][ B]−1 = λ[ I ] Solving this using MATLAB gives = λ1 10, = λ2 42 Exact: λn = (nπ ) 2 , i.e. = λ1 9.8696, = λ2 39.48
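The generalized eigenproblem [A]{a} = λ[B]{a} assembled above is solved in MATLAB with a single call to eig:

```matlab
% Generalized eigenvalue problem of Prob. 4.32
A = [1/3 1/6; 1/6 2/15];
B = [1/30 1/60; 1/60 1/105];
sort(eig(A, B))                    % returns 10 and 42
((1:2)*pi).^2                      % exact values 9.8696 and 39.478
```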

108 Prob. 4.33 10

= λo

x( x − 10)[−2 + 0.1x( x − 10)]dx ΦLΦdx ∫ ∫= = ∫ Φ dx ∫ x ( x − 10) dx 0

10

2

2

0.2

2

0

The exact fundamental eigenfunction is Ψ =sin(π x /10) so that λ= 0.1 + (π /10) 2 = 0.1987 Prob. 4.34 The basis functions depend only on ρ. Let = uk cos(2k − 1)πρ / 2 such that u k satisfies the boundary condition Φ(ρ=1) = 0. Φ =a1 cos πρ / 2 2π

2

2 2πρ dΦ 2 π | | 2 sin 2 dS d d a ρ ρ φ π ρd ρ = ∇Φ = 1 ∫S ∫ ∫ ∫ 4 0 2 φ 0= ρ 0 dρ = 2

1

1

π2 1 2 = π +  a1  8 4

2πρ 1 π2  2 ∫S Φ dS= 2π a ∫0 cos 2 ρ d ρ= π  2 − 1 a1 1

2

2 1

2

π2 1 +  | Φ |2 dS 8 4 ∫  5.832 = λ = = 2 2 ∫ Φ dS 1  π − 1 π 2  compared with the exact value of 5.779.

π

109 Prob. 4.35 Imposing the boundary conditions y(0)=0=y(1), we obtain −(a1 + a2 ) a0 = a3 = 0, Thus, y = a1 ( x − x3 ) + a2 ( x 2 − x3 ) Substituting this in λ gives

λ=

168a12 + 126a1a2 − 35a22 16a12 + 11a1a2 + 2a22

Dividing the numerator and denominator by a 22 and letting b = a1 / a2 , we get

λ=

168b 2 + 126b − 35 16b + 11b + 2

dλ = 0 → 168b 2 + 17926b − 637 = 0 db This is a quadratic equation with b=10.53, -0.34435. The second value leads to a negative λ. The first value yields λ =10.53 (The exact value is λ =π 2 = 9.87) Prob. 4.36 = λ1 4.212, = λ2 34.188 Prob. 4.38 E y2 =sin 2 (π x / a ) + c1[cos(2π x / a ) − cos(4π x / a )] + c12 sin 2 (3π x / a ) d 2 Ey

π2

= − 2 sin 2 (π x / a ) + 5c1[cos(2π x / a ) − cos(4π x / a )] + 9c12 sin 2 (3π x / a )  2 dx a Substituting this in eq. (4.90) in Example 4.10 and performing the integration yields   π2 1  3 3  c1 2 2 ω ε o µo (1 + c1 ) + (ε r − 1)  +  + 2c1 (ε r − 1)  −  + (ε r − 1) = 2 (1 + 9c1 )  3 2π   π  3   a Taking ε r = 4, Ey

4a 2

λc2

=

1 + 9c12

9 3 2 (2 + 3 3 / 2π ) − c1   + c1 2 π   Differentiating with respect to c 1 and setting the result equal to zero yields c 1 = 0.05201. Substituting this value gives

110 a

λc

= 0.2948

Prob. 4.39 Applying the conditions give 4 6 A= , B= − 2 , C == 0, D 1 2 a a Hence, 6 x 2 4 x3 Hx = 1− 2 + 3 a a 2 12 x 8 x 3 36 x 4 48 x 5 16 x 6 2 Hx = 1− 2 + 3 + 4 − 5 + 6 a a a a a a

∫H

2 x

dx = 0.4857 a

0

d 2Hx 12 24 x 72 x 2 192 x3 96 x 4 = − + + 4 − 5 + 6 dx 2 a 2 a3 a a a a 2 d H 4.8 ∫0 H x dx 2 x dx = − a Hx

Hence, 0.4857 aω 2 µoε o =

But ω = 2πf. Thus, 4 f c2 =

4.8 a

4.8c 2 0.4857π 2 a 2

or 0.50032c a where c is the speed of light in vacuum. fc =

CHAPTER 5

Prob. 5.1

Amn = <wm(x), d²/dx²(x − x^(n+1))> = <(x − x^(m+1)), −n(n + 1)x^(n−1)>
    = −n(n + 1) ∫₀¹ (x^n − x^(m+n)) dx = −n(n + 1)[x^(n+1)/(n + 1) − x^(m+n+1)/(m + n + 1)]₀¹
    = −n(n + 1)[1/(n + 1) − 1/(m + n + 1)] = −mn/(m + n + 1)
Bm = <wm(x), g(x)> = ∫₀¹ (x − x^(m+1))(−x²) dx = −[x⁴/4 − x^(m+4)/(m + 4)]₀¹
    = −[1/4 − 1/(m + 4)] = −m/[4(m + 4)]
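As a small sketch, the closed-form entries Amn can be checked against direct numerical integration (here for m, n up to 3):

Amn_num = zeros(3);  Amn_cls = zeros(3);
for m = 1:3
    for n = 1:3
        Amn_num(m,n) = integral(@(x) (x - x.^(m+1)).*(-n*(n+1)*x.^(n-1)), 0, 1);
        Amn_cls(m,n) = -m*n/(m + n + 1);      % closed form derived above
    end
end
max(abs(Amn_num(:) - Amn_cls(:)))             % essentially zero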

Prob. 5.2

Let L = d²/dx², g(x) = −(1 + 2x²), Um = x − x^(m+1) = wm(x)
Amn = <wm(x), d²/dx²(x − x^(n+1))> = <(x − x^(m+1)), −n(n + 1)x^(n−1)> = −mn/(m + n + 1)
as in the previous problem.
Bm = <wm(x), g(x)> = ∫₀¹ (x − x^(m+1))(−1 − 2x²) dx = ∫₀¹ (−x + x^(m+1) − 2x³ + 2x^(m+3)) dx
   = −1/2 + 1/(m + 2) − 1/2 + 2/(m + 4) = −m(m + 3)/[(m + 2)(m + 4)]
B1 = −4/15,  B2 = −5/12
[A][a] = [B]:
[−1/3  −1/2][a1]   [−4/15]
[−1/2  −4/5][a2] = [−5/12]
or
[1/3  1/2][a1]   [4/15]
[1/2  4/5][a2] = [5/12]
Δ = (1/3)(4/5) − (1/2)(1/2) = 1/60
Δ1 = (4/15)(4/5) − (1/2)(5/12) = 1/200,   Δ2 = (1/3)(5/12) − (4/15)(1/2) = 1/180
a1 = Δ1/Δ = 3/10,   a2 = Δ2/Δ = 1/3
Thus,
U(x) = (3/10)(x − x²) + (1/3)(x − x³)
The exact solution is U(x) = 2x/3 − x²/2 − x⁴/6.
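A quick check of the 2×2 system and of the fit to the exact solution (a MATLAB sketch):

A = -[1/3 1/2; 1/2 4/5];                      % moment matrix
B = -[4/15; 5/12];                            % excitation vector
a = A\B                                       % a = [0.3000; 0.3333]
x = (0:0.25:1).';
U_mom   = a(1)*(x - x.^2) + a(2)*(x - x.^3);
U_exact = 2*x/3 - x.^2/2 - x.^4/6;
max(abs(U_mom - U_exact))                     % about 2e-3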

Prob. 5.3

(a) Nonsingular; Fredholm integral equation of the second kind.
5x/6 + (1/2)∫₀¹ x t² dt = 5x/6 + (x/2)(t³/3)|₀¹ = 5x/6 + x/6 = x = Φ(x)
(b) Nonsingular; Volterra integral equation of the second kind.
cos x − sin x + 2∫₀^x e^(−t) sin(x − t) dt = e^(−x) = Φ(x)
(c) Fredholm integral equation of the second kind.
−cosh x + λ∫₋₁¹ cosh(x + t) [cosh t / ((λ/2) sinh 2 + λ − 1)] dt = cosh x / ((λ/2) sinh 2 + λ − 1) = Φ(x)
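Part (b), for instance, is easily confirmed numerically (a sketch at an arbitrarily chosen sample point x = 1.2):

x = 1.2;                                       % arbitrary sample point
lhs = cos(x) - sin(x) + 2*integral(@(t) exp(-t).*sin(x - t), 0, x);
lhs - exp(-x)                                  % essentially zero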

Prob. 5.4
(a) Fredholm integral equation of the first kind
(b) Fredholm integral equation of the second kind
(c) Volterra integral equation of the first kind
(d) Volterra integral equation of the second kind

Prob. 5.5

(a) Note that Φ(0) = 5, F(x, Φ) = 2xΦ(x)
dΦ/dx = 2xΦ(x)
∫ dΦ/Φ = ∫ 2x dx  →  ln Φ = x² + ln co
or
Φ(x) = co e^(x²)
Φ(0) = 5  →  co = 5
Φ = 5e^(x²)

(b)
Φ = x − ∫₀^x (x − t)Φ(t) dt
Φ' = 1 − ∫₀^x Φ(t) dt,   Φ'' = −Φ
Φ(0) = 0,   Φ'(0) = 1
Φ'' = −Φ  →  Φ = d1 cos x + d2 sin x
Φ(0) = 0  →  d1 = 0
Φ'(0) = 1  →  d2 = 1
Hence, Φ = sin x
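A one-line numerical check that Φ = sin x satisfies the integral equation in (b), at an arbitrary sample point:

x = 0.8;                                               % arbitrary sample point
sin(x) - (x - integral(@(t) (x - t).*sin(t), 0, x))    % essentially zero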

Prob. 5.6

(a) Given y'' = −y,
y' = −∫₀^x y(t) dt + c1
y'(0) = 1  →  c1 = 1
y' = 1 − ∫₀^x y(t) dt
y = x − ∫₀^x (x − t)y(t) dt + c2
y(0) = 0  →  c2 = 0
y = x − ∫₀^x (x − t)y(t) dt

(b) If y'' = −y + cos x,
y' = −∫₀^x y(t) dt + sin x + c1
y'(0) = 1  →  c1 = 1
y = x − cos x − ∫₀^x (x − t)y(t) dt + c2
y(0) = 0  →  c2 = 1
y = 1 + x − cos x − ∫₀^x (x − t)y(t) dt
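As a check (a sketch): the initial-value problem in (b) has the closed-form solution y = sin x + (x/2) sin x, and this y satisfies the integral equation just derived:

y = @(t) sin(t) + t.*sin(t)/2;     % solution of y'' = -y + cos x, y(0) = 0, y'(0) = 1
x = 1.0;
y(x) - (1 + x - cos(x) - integral(@(t) (x - t).*y(t), 0, x))   % essentially zero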

Prob. 5.7

∂G/∂r = [e^(−jkr)/(4πr)](−jk − 1/r)
∇²G = (1/r²) ∂/∂r (r² ∂G/∂r) = (1/r²) ∂/∂r [(e^(−jkr)/(4π))(−jkr − 1)] = −k² e^(−jkr)/(4πr)
Hence,
∇²G + k²G = 0,   r > 0
To prove that
∇²G + k²G = −δ(x)δ(y)δ(z)
we multiply both sides by dv, integrate over a sphere of radius ε, and allow ε → 0.
∫∫∫ k²G dv = 4πk² ∫₀^ε [e^(−jkr)/(4πr)] r² dr → 0  as ε → 0
∫∫∫ ∇²G dv = ∫∫∫ ∇·(∇G) dv = ∯ ∇G · ar dS
where the divergence theorem has been applied. But
∇G = (∂G/∂r) ar = −(jk + 1/r)[e^(−jkr)/(4πr)] ar
∯ ∇G · dS = −(jk + 1/ε)[e^(−jkε)/(4πε)] 4πε² = −(1 + jkε)e^(−jkε) → −1  as ε → 0
Thus,
∫∫∫ (∇²G + k²G) dv = −1 = ∫∫∫ −δ(x)δ(y)δ(z) dv
i.e.
∇²G + k²G = −δ(r)

Prob. 5.8

d²G/dx² + k²G = 0  →  m² + k² = 0
The general solution is
G = A cos kx + B sin kx
G(0) = 0  →  A = 0,  so G = B sin kx
Integrating the defining equation across x = 0,
∫(0⁻ to 0⁺) [d²G/dx² + k²G] dx = ∫(0⁻ to 0⁺) δ(x) dx
→  dG/dx|x=0⁺ − dG/dx|x=0⁻ + k² ∫(0⁻ to 0⁺) G dx = 1
which gives B = 1/(2k). Hence
G = (1/(2k)) sin kx

Prob. 5.9

Consider the following two cases.
Case 1: x < xo
d²G/dx² + k²G = 0,   −L < x < L
The general solution is
G = A cos k(x − L) + B sin k(x + L)
Applying the boundary condition at x = −L gives A = 0, so
G = B sin k(x + L)     (1)
Case 2: x > xo
For this case, the general solution is
G = C cos k(x − L) + D sin k(x + L)
dG/dx = −kC sin k(x − L) + kD cos k(x + L)
Applying the Neumann boundary condition at x = L gives D = 0.
G = C cos k(x − L)     (2)
We now obtain the constants B and C by integrating the defining equation across x = xo to obtain the jump in dG/dx:
∫(xo⁻ to xo⁺) [d²G/dx² + k²G] dx = ∫(xo⁻ to xo⁺) δ(x − xo) dx
→  dG/dx|xo⁺ − dG/dx|xo⁻ + k² ∫(xo⁻ to xo⁺) G dx = 1
i.e. −kC sin k(xo − L) − kB cos k(xo + L) = 1, or
C sin k(xo − L) + B cos k(xo + L) = −1/k     (3)
Applying the continuity condition G(x|xo⁻) = G(x|xo⁺):
B sin k(xo + L) = C cos k(xo − L)  →  B = C cos k(xo − L)/sin k(xo + L)     (4)
Substituting (4) into (3) gives
C [sin k(xo − L) sin k(xo + L) + cos k(xo − L) cos k(xo + L)]/sin k(xo + L) = −1/k
Applying the trig identity cos(α − β) = sin α sin β + cos α cos β yields
C = −sin k(xo + L)/[k cos(2kL)]     (5)
From (4) and (5),
B = −cos k(xo − L)/[k cos(2kL)]
Thus,
G(x|xo) = −[cos k(xo − L)/(k cos 2kL)] sin k(x + L),   x < xo
G(x|xo) = −[sin k(xo + L)/(k cos 2kL)] cos k(x − L),   x > xo
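The two-piece result can be sanity-checked numerically: G must be continuous at xo, its derivative must jump by 1 there, and the boundary conditions at x = ±L must hold (a sketch with arbitrarily chosen k, L and xo):

k = 1;  L = 2;  xo = 0.7;                                        % arbitrary parameters
G1  = @(x) -cos(k*(xo - L)).*sin(k*(x + L))/(k*cos(2*k*L));      % x < xo
G2  = @(x) -sin(k*(xo + L)).*cos(k*(x - L))/(k*cos(2*k*L));      % x > xo
dG1 = @(x) -cos(k*(xo - L)).*cos(k*(x + L))/cos(2*k*L);
dG2 = @(x)  sin(k*(xo + L)).*sin(k*(x - L))/cos(2*k*L);
[G1(xo) - G2(xo),  dG2(xo) - dG1(xo),  G1(-L),  dG2(L)]
% continuity ~ 0,   jump = 1,           G(-L) = 0,  dG/dx(L) = 0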

Prob. 5.10

We want to show that
(∇² + k²)G = (∂²/∂x² + ∂²/∂z² + k²)G = −δ(x − x')δ(z − z')
For z ≠ z', term-by-term differentiation of the series gives
(∂²/∂x² + ∂²/∂z² + k²)G = [−(nπ/a)² − kn² + k²] × (j/a) Σ(n) sin(nπx/a) sin(nπx'/a) e^(jkn|z − z'|)/kn = 0
since kn² = k² − (nπ/a)². The delta function on the right-hand side is recovered from the jump in ∂G/∂z across z = z', together with the completeness relation (2/a) Σ(n) sin(nπx/a) sin(nπx'/a) = δ(x − x').

Prob. 5.11

This is the same as Example 5.5 when a = b = 1.
∇²G = δ(x − x')δ(y − y')     (1)
We first determine the eigenfunctions of Laplace's equation, i.e. ∇²u = λu, where u satisfies the boundary conditions:
umn = 2 sin mπx sin nπy,   λmn = −(m²π² + n²π²)
Thus,
G(x, y; x', y') = 2 Σ(m=1 to ∞) Σ(n=1 to ∞) Amn(x', y') sin mπx sin nπy
We substitute this in (1). Using the orthogonality property of the eigenfunctions and the sifting property of the delta function,
−(m²π² + n²π²) Amn = 2 sin mπx' sin nπy'
Hence,
G(x, y; x', y') = −(4/π²) Σ(m=1 to ∞) Σ(n=1 to ∞) sin mπx sin nπy sin mπx' sin nπy' / (m² + n²)
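The double series is straightforward to evaluate numerically; the sketch below (with an arbitrarily chosen source point) also illustrates the reciprocity G(x, y; x', y') = G(x', y'; x, y):

N = 80;  m = (1:N).';  n = 1:N;
Gser = @(x, y, xs, ys) -4/pi^2 * sum(sum( ...
    ((sin(m*pi*x).*sin(m*pi*xs)) * (sin(n*pi*y).*sin(n*pi*ys))) ./ (m.^2 + n.^2) ));
Gser(0.5, 0.4, 0.3, 0.7)                              % negative, as expected for this Green's function
Gser(0.5, 0.4, 0.3, 0.7) - Gser(0.3, 0.7, 0.5, 0.4)   % reciprocity: essentially zero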

Prob. 5.12

The eigenfunction expansion is obtained easily if the operator L in LG is self-adjoint. To achieve this, let H = e^x G. Then
LH = Hxx + Hyy − H = e^x δ(x − x')δ(y − y')  on R,    H = 0 on S
Let H = ΣΣ cmn Φmn, where
LΦmn = λmn Φmn in R,   Φmn = 0 on S
By the method of separation of variables,
λmn = −1 − (mπ/a)² − (nπ/b)²
Φmn = [2/√(ab)] sin(mπx/a) sin(nπy/b)
Multiplying by Φpq and integrating over R leads to
cpq λpq = [2/√(ab)] e^(x') sin(pπx'/a) sin(qπy'/b)
Therefore,
G(x, y; x', y') = −[4/(ab)] e^(x'−x) Σ(m=1 to ∞) Σ(n=1 to ∞) sin(λa x) sin(λb y) sin(λa x') sin(λb y') / [1 + λa² + λb²]
where λa = mπ/a, λb = nπ/b.

Prob. 5.13

(a) Sum the series over m.
(b) Sum the series over n.

Prob. 5.15

Let

G(x, y; x', h1 + h2) = Σ(n=1 to ∞) Φn(y) sin(nπx'/a) sin(nπx/a)
where
Φn(y) = An sinh(nπy/a),                               0 ≤ y ≤ h1
Φn(y) = Bn sinh(nπy/a) + Cn cosh(nπy/a),              h1 ≤ y ≤ h1 + h2
Φn(y) = Dn sinh[nπ(b − y)/a],                         h1 + h2 ≤ y ≤ b
G must satisfy
(∂²/∂x² + ∂²/∂y²) G(x, y; x', y') = −(1/ε) δ(x − x')δ(y − y')
and the given boundary and continuity conditions. By imposing those conditions, we obtain
An = 2εr2 sinh(nπh3/a) / (nπεo Δn)
Bn = 2εr2 sinh(nπh3/a)[εr1 cosh²(nπh1/a) − εr2 sinh²(nπh1/a)] / (nπεo Δn)
Cn = 2(εr2 − εr1) sinh(nπh1/a) cosh²(nπh1/a) sinh²(nπh3/a) / (nπεo Δn)
Dn = 2ηn / (nπεo Δn)

where
Δn = εr2 ζn sinh(nπh3/a) + εr2 ηn cosh(nπh3/a)
ζn = εr1 cosh(nπh1/a) cosh(nπh2/a) + εr2 sinh(nπh1/a) sinh(nπh2/a)
ηn = εr1 cosh(nπh1/a) sinh(nπh2/a) + εr2 sinh(nπh1/a) cosh(nπh2/a)

Prob. 5.16

∇²F + k²F = δ(x − x')δ(y − y')
For ρ > 0,
(1/ρ) d/dρ (ρ dF/dρ) + k²F = 0
or
ρ²F'' + ρF' + k²ρ²F = 0
so that
F(ρ) = A H0⁽¹⁾(kρ) + B H0⁽²⁾(kρ)
Assuming an outgoing wave, B = 0, and
F(ρ) = A[J0(kρ) + jY0(kρ)]
Integrating the defining equation over a small disk of radius ε centered on the source,
lim(ε→0) ∮ (∂F/∂ρ) dl = lim(ε→0) ∫₀^{2π} jA [∂Y0(kρ)/∂ρ] ρ dφ = 1
As ρ → 0, ∂J0(kρ)/∂ρ → 0 while Y0(kρ) ≈ (2/π) ln(kρ), so that
lim(ρ→0) ∂Y0(kρ)/∂ρ = 2/(πρ)
Hence,
1 = ∫₀^{2π} jA [2/(πρ)] ρ dφ = j4A   →   A = −j/4
F(ρ) = −(j/4) H0⁽¹⁾(kρ)
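In MATLAB this result is evaluated with besselh; the sketch below also checks the radial Helmholtz equation away from the source by finite differences (k and the ρ range are arbitrary):

k = 2;  rho = linspace(0.5, 3, 4001);  h = rho(2) - rho(1);
F   = -1j/4*besselh(0, 1, k*rho);          % -(j/4)*H0^(1)(k*rho)
dF  = gradient(F, h);  d2F = gradient(dF, h);
res = d2F + dF./rho + k^2*F;               % radial Helmholtz residual, should vanish for rho > 0
max(abs(res(20:end-20)))                   % small, limited only by the finite-difference error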

Prob. 5.17

Let
h0⁽²⁾(|r − r'|) = Σ(n=0 to ∞) cn hn⁽²⁾(r') jn(r) Pn(cos α),   r < r'
h0⁽²⁾(|r − r'|) = Σ(n=0 to ∞) cn jn(r') hn⁽²⁾(r) Pn(cos α),   r > r'     (1)
where the constants cn are to be determined. Using the asymptotic formula
hn⁽²⁾(z) ≈ (j^(n+1)/z) e^(−jz)
the left-hand side of (1) becomes
h0⁽²⁾(|r − r'|) → (j e^(−jr')/r') e^(jr cos θ)   as r' → ∞, θ' = 0     (2)
and the right-hand side of (1) becomes
→ (j e^(−jr')/r') Σ(n=0 to ∞) cn j^n jn(r) Pn(cos θ)     (3)
Comparing (2) and (3) with eq. (2.184), the expansion of a plane wave, shows that cn = 2n + 1. Thus,
h0⁽²⁾(|r − r'|) = Σ(n=0 to ∞) (2n + 1) hn⁽²⁾(r') jn(r) Pn(cos α),   r < r'
h0⁽²⁾(|r − r'|) = Σ(n=0 to ∞) (2n + 1) jn(r') hn⁽²⁾(r) Pn(cos α),   r > r'
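The addition theorem can be verified numerically, building the spherical Bessel and Hankel functions from half-integer-order Bessel functions (a sketch in units where k = 1, with arbitrarily chosen r, r' and α):

sj  = @(n, x) sqrt(pi./(2*x)).*besselj(n + 0.5, x);      % spherical Bessel j_n
sh2 = @(n, x) sqrt(pi./(2*x)).*besselh(n + 0.5, 2, x);   % spherical Hankel h_n^(2)
r = 1;  rp = 3;  alpha = pi/3;                           % r < r'
d = sqrt(r^2 + rp^2 - 2*r*rp*cos(alpha));                % |r - r'|
lhs = sh2(0, d);
rhs = 0;  Pnm2 = 1;  Pnm1 = cos(alpha);                  % P0 and P1 for the recurrence
for n = 0:30
    if n == 0
        P = 1;
    elseif n == 1
        P = cos(alpha);
    else
        P = ((2*n - 1)*cos(alpha)*Pnm1 - (n - 1)*Pnm2)/n;   % Legendre recurrence
        Pnm2 = Pnm1;  Pnm1 = P;
    end
    rhs = rhs + (2*n + 1)*sh2(n, rp)*sj(n, r)*P;
end
abs(lhs - rhs)                                           % negligibly small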

Prob. 5.18

It is evident that K(0, y) = 0 = K(1, y), and this is satisfied by sin αx:
sin α·1 = sin nπ  →  α = nπ
i.e. sin nπx is a possible solution. Similarly, K(x, 0) = 0 = K(x, 1) leads to sin nπy. Hence,
K(x, y) = Σ(n=1 to ∞) An sin nπx sin nπy
But
(1 − x)y = Σ(n=1 to ∞) An sin nπx sin nπy,   0 < y < x