Advanced Engineering Mathematics (Instructor Solution Manual, Solutions) [1 ed.] 9781482207712, 9781439834473, 1439834474



English Pages 1010 Year 2013


Table of contents :
k11552_InstructorManualC001
k11552_InstructorManualC002
k11552_InstructorManualC003
k11552_InstructorManualC004
k11552_InstructorManualC005
k11552_InstructorManualC006
k11552_InstructorManualC007
k11552_InstructorManualC008
k11552_InstructorManualC009
k11552_InstructorManualC010
k11552_InstructorManualC011
k11552_InstructorManualC012
k11552_InstructorManualC013
k11552_InstructorManualC014
k11552_InstructorManualC015

SOLUTIONS MANUAL FOR Advanced Engineering Mathematics

by Lawrence Turyn


Boca Raton   London   New York

CRC Press is an imprint of the Taylor & Francis Group, an informa business

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2014 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed on acid-free paper
Version Date: 20140127
International Standard Book Number-13: 978-1-4822-0771-2 (Ancillary)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

page 1

Chapter One Section 1.1.4

1.1.4.1.

[ 1  -1 | -7 ]      [ 1  -1 | -7 ]
[ 3  -4 | 11 ]  ~   [ 0  -1 | 32 ]
              -3R1 + R2 -> R2

=> -y = 32 => y = -32 => x = -7 + y = -39

=> unique solution: (x, y) = (-39, -32)
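As a quick sanity check, the solution can be substituted back into the system x - y = -7, 3x - 4y = 11 read off the augmented matrix above (a minimal sketch in plain Python):

```python
# Verify 1.1.4.1: (x, y) = (-39, -32) should satisfy the system
# x - y = -7 and 3x - 4y = 11 encoded by the augmented matrix.
x, y = -39, -32
print(x - y)      # -7
print(3*x - 4*y)  # 11
```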



 1 −1 2 | −1 1 1 | −1  1.1.4.2.  2 −1 3 −4 | 5/3



 1 −1 2 | −1  0 3 −3 | 1  0 2 −2 | 2/3

∼ −2R1 + R2 → R2 R 1 + R3 → R 3

 1 −1 2 | −1  0 1 −1 | 1/3  0 0 0 | 0

○ 1  0 0 

 ∼



1 R 3 2

0 1 | ○ 1 −1 | 0 0 |

 −2/3 1/3  . 0

R2 + R1 → R 1

→ R2 −2R2 + R3 → R3



   x1 −2/3 − c1 The solutions are:  x2  =  1/3 + c1  , arbitrary constant c1 . x3 c1  1 0 −2



1.1.4.3.

[  0  -1   4 |  1 ]      [ 1  -3  -2 |  0 ]      [ 1  -3  -2 |  0 ]
[ -1   3   2 |  0 ]  ~   [ 0  -1   4 |  1 ]  ~   [ 0   1  -4 | -1 ]
[  2   0  -1 | -2 ]      [ 0   6   3 | -2 ]      [ 0   0  27 |  4 ]
      R1 <-> R2, -R1 -> R1,          -R2 -> R2,
      -2R1 + R3 -> R3                -6R2 + R3 -> R3

=> unique solution x = ( -25/27, -11/27, 4/27 )^T.



0  1 1.1.4.4.   0 2

0 1 1 −1 1 1 0 0



1 0 ∼  0 −R3 + R2 → R2 0 −4R3 + R4 → R4 2R3 + R1 → R1

 0 0   1  1

0 1 0 0

 ∼ R 1 ↔ R2 −2R1 + R4 → R4

 0 −1 0 1  1 0 0 3

1 1 −1  0 0 1   0 1 1 0 −2 2

○ 1  0 ∼   0 1 R → R 0 4 4 3 

 0 0   1  1

0 0 ○ 0 1 0 ○ 1 0 0

 ∼ R2 ↔ R3 2R2 + R4 → R4 −R2 + R1 → R1

1  0   0 0

  0 0  1 0  = RREF   0 0  ○ 2 1

 0 −2 −1 1 1 1   0 1 0  0 4 3

0 1 1 −1 1 1 0 0

 0  0  1  1

−R4 + R2 → R2 R4 + R 1 → R 1 ©Larry

Turyn, October 13, 2013


1.1.4.5. (a) r = 6 => the last row of the RREF is [ 0 0 ... 0 | * ] with * ≠ 0 => there is no solution.

(b) r = 0 =>

            [ 1  2  3  -1 | 4 ]      [ ①  2  0  -1 | -11 ]
[ A | b ] ~ [ 0  0  1   0 | 5 ]  ~   [ 0  0  ①   0 |   5 ]
            [ 0  0  0   0 | 0 ]      [ 0  0  0   0 |   0 ]
                           -3R2 + R1 -> R1

=> Solutions are: x = ( -11 - 2c1 + c2, c1, 5, c2 )^T, arbitrary scalars c1, c2.

1.1.4.6. (a)

            [ ①  2  0 | -1 ]
[ A | b ] ~ [ 0  0  ① |  3 ]   => Solutions are: x = ( -1 - 2c1, c1, 3 )^T, arbitrary scalar c1.
            [ 0  0  0 |  0 ]

(b)

            [ ①  2  0 | -1 ]
[ A | b ] ~ [ 0  0  ① |  3 ]
            [ 0  0  0 |  ② ]

Pivot in the last row, last column => there is no solution of the original system.

1.1.4.7. Ex: Here are four (the question asked for only three) such possible matrices, each in reduced row echelon form with its pivots circled:

[ ①  0  0 ]   [ ①  5  0 ]   [ ①  0  5 ]       [ 0  ①  0 ]
[ 0  ①  0 ] , [ 0  0  ① ] , [ 0  ①  0 ] , and [ 0  0  ① ]
[ 0  0  ① ]   [ 0  0  0 ]   [ 0  0  0 ]       [ 0  0  0 ]

1.1.4.8. (a) Ex: A = [ 1 0 4; 2 1 0; 0 3 1 ] with b = ( 1, 0, 0 )^T:

[ 1  0  4 | 1 ]      [ 1  0   4 |  1 ]      [ 1  0   4 |  1 ]
[ 2  1  0 | 0 ]  ~   [ 0  1  -8 | -2 ]  ~   [ 0  1  -8 | -2 ]
[ 0  3  1 | 0 ]      [ 0  3   1 |  0 ]      [ 0  0  25 |  6 ]
        -2R1 + R2 -> R2          -3R2 + R3 -> R3

=> unique solution x = ( 1/25, -2/25, 6/25 )^T.

(b) [ A | 0 ] = [ 1 0 4 | 0; 2 1 0 | 0; 0 3 1 | 0 ] row reduces the same way, so the corresponding homogeneous system has only the trivial solution x = 0.

1.1.4.9.

Plant | #Hours Run per day | #Cars Built | #Trucks Built
  I   |        x1          |      4      |       4
 II   |        x2          |      4      |       1
 III  |        x3          |      2      |       3

Currently, the numbers of hours run per day are x1 = 7, x2 = 6, x3 = 9 => each day, 4(7) + 4(6) + 2(9) = 70 cars are produced and 4(7) + 1(6) + 3(9) = 61 trucks are produced. Shutting down Plant I => we want to solve 4x2 + 2x3 = 70 and x2 + 3x3 = 61.




[ 4  2 | 70 ]      [ 1  0.5 | 17.5 ]
[ 1  3 | 61 ]  ~   [ 0  2.5 | 43.5 ]
        (1/4)R1 -> R1, -R1 + R2 -> R2

=> x3 = 43.5/2.5 = 17.4 and x2 = 17.5 - 0.5(17.4) = 8.8
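The back-substituted values can be checked against the two production equations; exact rationals avoid floating-point noise (a minimal sketch):

```python
from fractions import Fraction as F

# Verify 1.1.4.9: x2 = 8.8 and x3 = 17.4 should satisfy
# 4*x2 + 2*x3 = 70 (cars) and x2 + 3*x3 = 61 (trucks).
x2, x3 = F(88, 10), F(174, 10)
print(4*x2 + 2*x3)  # 70
print(x2 + 3*x3)    # 61
```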

In whole hours, run Plant II for 9 hours/day and Plant III for 17 hours/day, or, if trucks are more profitable than cars, run Plant II for 8 hours/day and Plant III for 18 hours/day.

1.1.4.10. Let x1 = # of cups of puffed rice, x2 = # of cups of rolled oats, and x3 = # of cups of bran flakes. To get the same amount of protein as there would be in 106 cups of corn flakes, you need 1·x1 + 5·x2 + 4·x3 = 2·106 = 212. That and similar equations for carbohydrates and calories yield the system of equations corresponding to the augmented matrix

1  13 60

5 23 130

 4 | 212 28 | 2120  105 | 10070

 ∼

1 5 4 |  0 −42 −24 | 0 −170 −135 |

 212 −636  −2650

−13R1 + R2 → R2 −60R1 + R3 → R3

 ∼ 1 R2 → R 2 − 42 −5R2 + R1 → R1 170R2 + R3 → R3

1  0 0

0 8/7 | 1 4/7 | 0 −265/7 |

 954/7 106/7  −530/7

○ 0 1  0 ○ 1 0 0 

∼ −(7/265)R3 → R3 −(4/7)R3 + R2 → R2 −(8/7)R3 + R1 → R1

 0 | 134 0 | 14  ○ 2 1 |

=> 106 cups of corn flakes = 134 cups of puffed rice + 14 cups of rolled oats + 2 cups of bran flakes.

1.1.4.11.

Food | #Grams (in units of 100) | mg Calcium | mg Potassium | mg Magnesium
 #1  |           x1             |     40     |      20      |      40
 #2  |           x2             |     70     |      10      |      30
 #3  |           x3             |     50     |      40      |      60
Meal |                          |    120     |      30      |      70

=> We want to solve, for example, 120 = 40x1 + 70x2 + 50x3 in order to have the correct amount of calcium in the meal:

[ 40  70  50 | 120 ]      [ ①  0.5    2 | 1.5 ]      [ ①  0   3 |  1 ]
[ 20  10  40 |  30 ]  ~   [ 0   50  -30 |  60 ]  ~   [ 0  ①  -2 |  1 ]
[ 40  30  60 |  70 ]      [ 0   10  -20 |  10 ]      [ 0  0  70 | 10 ]
    R1 <-> R2, (1/20)R1 -> R1,     R2 <-> R3, (1/10)R2 -> R2,
    -40R1 + R2 -> R2,              -0.5R2 + R1 -> R1,
    -40R1 + R3 -> R3               -50R2 + R3 -> R3

=> x3 = 1/7 => x2 = 1 + 2·(1/7) = 9/7 and x1 = 1 - 3·(1/7) = 4/7

The meal should consist of 4/7 × 100 g ≈ 57.14 g of food #1, 9/7 × 100 g ≈ 128.57 g of food #2, and 1/7 × 100 g ≈ 14.29 g of food #3.

1.1.4.12. Let x1 = the number of landscaped parking spaces and x2 = the number of garage parking spaces.


=> 8000 = number of square feet available = 50x1 + 10x2, and 250 = total number of parking spaces = x1 + x2:

[  1   1 |  250 ]      [ 1    1 |   250 ]
[ 50  10 | 8000 ]  ~   [ 0  -40 | -4500 ]
          -50R1 + R2 -> R2

=> x2 = (-4500)/(-40) = 112.5 and x1 = 250 - 112.5 = 137.5
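The exact solution and the whole-number rounding discussed next can both be checked directly (a minimal sketch; the 8000 sq-ft budget is the binding constraint):

```python
from fractions import Fraction as F

# Verify 1.1.4.12: the exact solution, then the two candidate roundings.
x1, x2 = F(275, 2), F(225, 2)                 # 137.5 and 112.5
assert 50*x1 + 10*x2 == 8000 and x1 + x2 == 250
print(50*137 + 10*113)  # 7980, within the 8000 sq ft available
print(50*138 + 10*112)  # 8020, which would exceed 8000 sq ft
```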

=> Have 137 landscaped parking spaces and 113 garage spaces (not 138 landscaped spaces and 112 garage spaces, because that would exceed 8,000 sq. feet, and you can't use more land than there is).

1.1.4.13. (a) May be true or may be false: Ex. [ 1 1; 2 2; 3 3 ] or [ 1 1; 2 2; 0 0 ].
(b) May be true or may be false: Ex. [ 1 1 | 1; 2 2 | 2; 0 0 | 0 ] or [ 1 1 | 1; 2 2 | 2; 3 3 | 3 ].
(c) Must be true. (d) Must be true. (e) Must be true.

1.1.4.14. The loop equations are

  { 2I1 + 3I1 - 3I2 + I1 = V1
  { -3I1 + 3I2 + 7I2 + 5I2 - 5I3 = 0
  { -5I2 + 5I3 + 9I3 = V2

that is,

  { 6I1 - 3I2 = V1
  { -3I1 + 15I2 - 5I3 = 0
  { -5I2 + 14I3 = V2

[  6  -3   0 | V1 ]      [ 6  -3    0 |   V1 ]      [ 6  -3       0 |          V1 ]
[ -3  15  -5 |  0 ]  ~   [ 0  27/2 -5 | V1/2 ]  ~   [ 0   1  -10/27 |       V1/27 ]
[  0  -5  14 | V2 ]      [ 0  -5   14 |   V2 ]      [ 0   0  328/27 | V2 + 5V1/27 ]
           (1/2)R1 + R2 -> R2         (2/27)R2 -> R2, 5R2 + R3 -> R3

=> I3 = (27/328)( V2 + 5V1/27 ) = (5V1 + 27V2)/328,

   I2 = (1/27)V1 + (10/27)I3 = (7V1 + 5V2)/164,

   I1 = (1/6)( 3I2 + V1 ) = (1/6)V1 + (1/2)I2 = (185V1 + 15V2)/984.
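The circuit system above can be re-solved by exact Gauss–Jordan elimination for sample values of the source voltages V1, V2 (a sketch; the closed forms asserted are the ones elimination produces for this coefficient matrix):

```python
from fractions import Fraction as F

# Verify 1.1.4.14: solve 6*I1 - 3*I2 = V1, -3*I1 + 15*I2 - 5*I3 = 0,
# -5*I2 + 14*I3 = V2 exactly, for sample voltages V1 = 1, V2 = 2.
def solve(M, b):
    n = len(M)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if A[r][i] != 0)
        A[i], A[p] = A[p], A[i]
        A[i] = [v / A[i][i] for v in A[i]]        # normalize the pivot row
        for r in range(n):
            if r != i and A[r][i] != 0:           # clear the pivot column
                A[r] = [u - A[r][i]*v for u, v in zip(A[r], A[i])]
    return [row[n] for row in A]

V1, V2 = F(1), F(2)
M = [[F(6), F(-3), F(0)],
     [F(-3), F(15), F(-5)],
     [F(0), F(-5), F(14)]]
I1, I2, I3 = solve(M, [V1, F(0), V2])
assert I3 == (5*V1 + 27*V2) / 328
assert I2 == (7*V1 + 5*V2) / 164
assert I1 == (185*V1 + 15*V2) / 984
print(I1, I2, I3)
```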

1.1.4.15. (a)

            [ 1  0    0   2 |  5  ]      [ ①  0  0   2 |   5  ]
[ A | b ] ~ [ 0  1  1.5  -2 | 3.5 ]  ~   [ 0  ①  0  -2 | -5.5 ]  = RREF([ A | b ])
            [ 0  0    1   0 |  6  ]      [ 0  0  ①   0 |   6  ]
            [ 0  0    0   0 |  0  ]      [ 0  0  0   0 |   0  ]
                          -1.5R3 + R2 -> R2

            [ ①  0  0   2 ]
  RREF(A) = [ 0  ①  0  -2 ]
            [ 0  0  ①   0 ]
            [ 0  0  0   0 ]


(b) RREF([ A | b ]) => The solutions are: x = ( 5 - 2c1, -5.5 + 2c1, 6, c1 )^T, arbitrary constant c1.

(c) Both RREF([ A | b ]) and RREF(A) have rank three.


Section 1.2.5 

1.2.5.1. Ex. A = [ 1 0; 1 -1 ], B = [ 0 1; 1 0 ] => AB = [ 0 1; -1 1 ] and BA = [ 1 -1; 1 0 ] ≠ AB.
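The non-commuting example can be checked with a tiny matrix-multiply helper (a minimal sketch):

```python
# Verify 1.2.5.1: AB != BA for the 2x2 example above.
def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 0], [1, -1]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[0, 1], [-1, 1]]
print(matmul(B, A))  # [[1, -1], [1, 0]]
assert matmul(A, B) != matmul(B, A)
```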

1.2.5.2. Taking the transpose of both sides of A^2 = A implies that also (A^T)^2 = A^T. Multiply out to get

  0 = (A - A^T)^2 = A·A - A^T·A - A·A^T + (A^T)^2 = A^2 - A^T A - A A^T + A^T,

so 0 = A - A^T A - A A^T + A^T. This implies A^T A = A - A A^T + A^T. This implies that

  (A A^T)^2 = (A A^T)(A A^T) = A (A^T A) A^T = A (A - A A^T + A^T) A^T = A^2 A^T - A^2 (A^T)^2 + A (A^T)^2
            = A·A^T - A·A^T + A·A^T = A A^T.

1.2.5.3. Here are two examples of non-zero A and B for which AB = O:
Ex. 1: A = [ 0 1; 0 0 ], B = [ 0 2; 0 0 ], and Ex. 2: A = [ 1 1; -1 -1 ], B = [ 1 2; -1 -2 ].

1.2.5.4. (a) Ex. A = [ 1 0; 0 0 ], B = [ 0 0; 0 1 ] has A + B = [ 1 0; 0 1 ] = I2, which has rank 2.
(b) Ex. A = [ 1 0; 0 0 ], B = [ 2 0; 0 0 ] has A + B = [ 3 0; 0 0 ], which has rank 1.
(c) Ex. A = [ 1 0; 0 0 ], B = [ -1 0; 0 0 ] has A + B = [ 0 0; 0 0 ], which has rank 0.

1.2.5.5.

        [ -1   1  0 ] [ -1   1  0 ]   [ 1  -2  1 ]
A^2  =  [  0  -1  1 ] [  0  -1  1 ] = [ 0   1  1 ]
        [  0   0  2 ] [  0   0  2 ]   [ 0   0  4 ]

               [ -1   1  0 ] [ 1  -2  1 ]   [ -1   3  0 ]
A^3 = A·A^2 =  [  0  -1  1 ] [ 0   1  1 ] = [  0  -1  3 ]
               [  0   0  2 ] [ 0   0  4 ]   [  0   0  8 ]

1.2.5.6. Using the hint, that is, AB = A[ B∗1 ⋮ B∗2 ⋮ ... ⋮ B∗p ] = [ AB∗1 ⋮ AB∗2 ⋮ ... ⋮ AB∗p ], we have that the j-th column of AB is AB∗j = A·0 = 0.

1.2.5.7. Assume A = [aij], where aij = 0 for i > j, and B = [bjk], where bjk = 0 for j > k. We will explain why AB = C = [cik] is upper triangular, that is, why cik = 0 for i > k. Suppose i > k: We calculate

  cik = Σ_{j=1}^{n} aij bjk = Σ_{j<i} aij bjk + Σ_{j≥i} aij bjk = Σ_{j<i} 0·bjk + Σ_{j=i}^{n} aij bjk = ai,i bi,k + ai,i+1 bi+1,k + ... + ai,n bn,k.

But i > k implies that all of i > k, i + 1 > k, ..., n > k are true, so the assumption that bjk = 0 for j > k implies 0 = bi,k = bi+1,k = ... = bn,k. So, cik = 0. So, AB = C is upper triangular.
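The index argument can be illustrated numerically: for randomly generated upper triangular factors, every below-diagonal entry of the product comes out zero (a sketch with arbitrary sample entries):

```python
import random

# Illustrate 1.2.5.7: the product of two upper triangular matrices is
# upper triangular (entries of AB below the diagonal stay zero).
n = 4
random.seed(0)
A = [[random.randint(-5, 5) if i <= j else 0 for j in range(n)] for i in range(n)]
B = [[random.randint(-5, 5) if i <= j else 0 for j in range(n)] for i in range(n)]
C = [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
assert all(C[i][j] == 0 for i in range(n) for j in range(n) if i > j)
print(C)
```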


1.2.5.8.

  D^2 = D·D = diag(d11, d22, ..., dnn) · diag(d11, d22, ..., dnn) = diag(d11^2, d22^2, ..., dnn^2).

By induction,

  D^k = D·D^{k-1} = diag(d11, ..., dnn) · diag(d11^{k-1}, ..., dnn^{k-1}) = diag(d11^k, ..., dnn^k).

1.2.5.9. False. Ex. A = [ 0 0; 1 0 ] has rank 1, but A^2 = [ 0 0; 0 0 ] has rank 0. So, it is false that, in general, rank(A^2) = p = rank(A).

1.2.5.10. (AB)^T = ( [ 50 68; 122 167 ] )^T = [ 50 122; 68 167 ] versus

  B^T A^T = [ 7 8 9; 10 11 12 ] [ 1 4; 2 5; 3 6 ] = [ 50 122; 68 167 ],

so (AB)^T = B^T A^T.

1.2.5.11. (a) [ 1 0 0; 0 1 0; -2 0 1 ] [ a11 a12 a13; a21 a22 a23; a31 a32 a33 ] = [ a11 a12 a13; a21 a22 a23; -2a11+a31  -2a12+a32  -2a13+a33 ].

(b) [ 1 0 0; 0 1/2 0; 0 0 1 ] [ a11 a12 a13; a21 a22 a23; a31 a32 a33 ] = [ a11 a12 a13; (1/2)a21 (1/2)a22 (1/2)a23; a31 a32 a33 ].

(c) [ 1 0 0; 0 0 1; 0 1 0 ] [ a11 a12 a13; a21 a22 a23; a31 a32 a33 ] = [ a11 a12 a13; a31 a32 a33; a21 a22 a23 ].

1.2.5.12. (a) Assume the m × n matrix L = [ℓij], 1 ≤ i ≤ m, 1 ≤ j ≤ n, is lower triangular, that is, ℓij = 0 for all (i, j) with i < j. Assume D = diag(d11, ..., dnn) is diagonal. Then Theorem 1.9 in Section 1.2 implies

  (⋆) LD = L[ D∗1 ⋮ D∗2 ⋮ ... ⋮ D∗n ] = [ LD∗1 ⋮ LD∗2 ⋮ ... ⋮ LD∗n ],

where D∗1, D∗2, ..., D∗n are the columns of D. Also, write L in terms of its columns, that is, L = [ L∗1 ⋮ L∗2 ⋮ ... ⋮ L∗n ]. The j-th column of D is the vector whose j-th entry is djj and whose other entries are 0,


so (⋆) and Lemma 1.2 in Section 1.2 together imply that the j-th column of LD is

  LD∗j = 0·L∗1 + ... + 0·L∗(j-1) + djj L∗j + 0·L∗(j+1) + ... + 0·L∗n = djj L∗j.

So,

  (⋆⋆) LD = [ d11 L∗1 ⋮ d22 L∗2 ⋮ ... ⋮ dnn L∗n ].

Because L is lower triangular, ℓij = 0 for all (i, j) with i < j, so (⋆⋆) implies that (LD)ij, the ij-th element of LD, is djj ℓij = 0 for all (i, j) with i < j. So, LD is lower triangular, too.

Similarly, suppose the m × m matrix D̃ = diag(d̃11, ..., d̃mm) is diagonal. Similar to the work above, we find that

  D̃L = D̃[ L∗1 ⋮ L∗2 ⋮ ... ⋮ L∗n ] = [ D̃L∗1 ⋮ D̃L∗2 ⋮ ... ⋮ D̃L∗n ],

hence (D̃L)ij, the ij-th element of D̃L, is the i-th element in the vector D̃L∗j. So, (D̃L)ij = d̃ii ℓij = 0 for all (i, j) with i < j. So, D̃L is lower triangular, too.

(b) Assume the m × n matrix U = [uij], 1 ≤ i ≤ m, 1 ≤ j ≤ n, is upper triangular, that is, uij = 0 for all (i, j) with i > j, and assume D = diag(d11, ..., dnn) is diagonal. Then, exactly as in part (a),

  UD = [ UD∗1 ⋮ UD∗2 ⋮ ... ⋮ UD∗n ] = [ d11 U∗1 ⋮ d22 U∗2 ⋮ ... ⋮ dnn U∗n ],

so (UD)ij = djj uij = 0 for all (i, j) with i > j, and UD is upper triangular, too. Similarly, for a diagonal m × m matrix D̃ = diag(d̃11, ..., d̃mm), (D̃U)ij = d̃ii uij = 0 for all (i, j) with i > j, so D̃U is upper triangular, too.

1.2.5.13. Suppose A = [ A∗1 ⋮ A∗2 ⋮ ... ⋮ A∗n ] and D = diag(d11, d22, ..., dnn). We will explain why

  AD = [ d11 A∗1 ⋮ d22 A∗2 ⋮ ... ⋮ dnn A∗n ]:

First, by Theorem 1.9 in Section 1.2, rewriting D = [ D∗1 ⋮ D∗2 ⋮ ... ⋮ D∗n ] in terms of its columns, we have

  (⋆) AD = A[ D∗1 ⋮ D∗2 ⋮ ... ⋮ D∗n ] = [ AD∗1 ⋮ AD∗2 ⋮ ... ⋮ AD∗n ].

Using Lemma 1.2 in Section 1.2 and writing the j-th column of D as the vector D∗j whose j-th entry is djj and whose other entries are 0, we have that the j-th column of AD is

  AD∗j = 0·A∗1 + ... + 0·A∗(j-1) + djj A∗j + 0·A∗(j+1) + ... + 0·A∗n = djj A∗j.

This and (⋆) imply

  AD = [ AD∗1 ⋮ AD∗2 ⋮ ... ⋮ AD∗n ] = [ d11 A∗1 ⋮ ... ⋮ djj A∗j ⋮ ... ⋮ dnn A∗n ],

as was desired.


Section 1.3.1 

1.3.1.1.

            [ 1   3  -1   1 | 0 ]      [ ①  0   2   5/2 | 0 ]
[ A | 0 ] ~ [ 0  -2   2   1 | 0 ]  ~   [ 0  ①  -1  -1/2 | 0 ]
            -2R1 + R2 -> R2         -(1/2)R2 -> R2, -3R2 + R1 -> R1

=> x3, x4 are free variables: let x3 = c1, x4 = c2

=> General solution is x = c1 ( -2, 1, 1, 0 )^T + c2 ( -5/2, 1/2, 0, 1 )^T, where

c1, c2 = arbitrary constants.

1.3.1.2.

            [ 1   2   0   3 | 0 ]      [ ①  0  -2   5 | 0 ]
[ A | 0 ] ~ [ 0   1   1  -1 | 0 ]  ~   [ 0  ①   1  -1 | 0 ]
            [ 0  -2  -2   2 | 0 ]      [ 0  0   0   0 | 0 ]
            [ 0   1   1  -1 | 0 ]      [ 0  0   0   0 | 0 ]
            -R1 + R3 -> R3          2R2 + R3 -> R3, -R2 + R4 -> R4,
                                    -2R2 + R1 -> R1

=> x3, x4 free: let c1 = x3, c2 = x4 => General solution is x = c1 ( 2, -1, 1, 0 )^T + c2 ( -5, 1, 0, 1 )^T, where c1, c2 = arbitrary constants.

 1 2 3 1| 0  0 0 1 −1 | 0  0 5 5 5| 0 

1.3.1.3. [A | 0]

∼ R 1 ↔ R2 R 1 + R 3 → R3

○ 0 1  0 ○ 1 0 0 



0 0 ○ 1

 ∼ ↔ R3 → R2 −2R2 + R1 → R1 R2 1 R 5 2

1 0 0

0 1 0

 1 −1 | 0 1 1| 0 1 −1 | 0

 0| 0 2 | 0  ⇒ only x4 free: c1 = x4 −1 | 0

−R3 + R2 → R2 −R3 + R1 → R1



 0  −2   ⇒ General Solution is x = c1   1 , where c1 =arbitrary constant 1 1.3.1.4. (a) Must be true, because there is an x 6= 0 with Ax = 0 only if that system has a free variable.   ○ 0 0 1 (b) May be false, Ex. A = has rank = 2 = m 0 ○ 1 1 (c) Must be true, because c · x is a solution for any constant c, if x is a non-trivial solution of Ax = 0     1 0 1 0 0 (d) May be false, Ex. A = (e) May be false, Ex. A =  0 0  0 1 1 0 0 1.3.1.5. For some scalars c1 , c2 , α1 , α2 , and β1 , β2 , w = c1 v1 + c2 v2 , β1 u1 + β2 u2 .

v1 = α1 u1 + α2 u2 ,

©Larry

and v2 =

Turyn, October 13, 2013

page 11

So, w = (c1 α1 + c2 β1 )u1 + (c1 α2 + c2 β2 )u2 is a linear combination of u1 , u2 . 1.3.1.6. (a) Must be true, because there is at least one free variable.   1 0 (b) May be true or may be false: Ex. of may be true: A = , 0 0

Ex. of may be false: A =



1

(c) Must be true, because cx is a solution of Ax = 0 for all constants c    1 0 1 (d) May be true or may be false: Ex. of may be true: A =  0 0 , Ex. of may be false: A = 0 0 0    1 (e) May be true or may be false: Ex. of may be true: A = 1 0 , Ex. of may be false: A = 0 ○ 1 1.3.1.7. (a) Ex: A =  0 0 



0 (b) B = I − A =  0 0

0 ○ 1 0

 0 −1 0 0  0 1

0



0 0



0 0



 1 0  0 0 0 ○ 1 0 0 0 0 0 0  =⇒ [B | 0 ]



 | 0 | 0  ⇒ x1 , x2 free; x3 = 0 | 0

−R1 → R1 −R1 + R3 → R3



   1 0 =⇒ solutions of Bx = 0 are x = c1  0  + c2  1 , where c1 , c2 are arbitrary constants 0 0 ○ 1  0 1.3.1.8. (a) Ex. A =   0 0 

0 ○ 1 0 0

0 0 ○ 1 0

 2 −1 2 0   −1 0  0 0

(b) rank (A) = 3 = the number of pivot positions. The nullity of A is 5 − rank(A) =⇒ The nullity of A is ν(A) = 2

©Larry

Turyn, October 13, 2013

page 12

Section 1.4.1  1 0 2| 4| 1.4.1.1.  1 1 −1 0 −2 |

○ 1  0 0 

 −1 1 1



0 2| ○ 1 2| 0 0|

 −1 2 0

⇒ x1 = −1 − 2x3 , x2 = 2 − 2x3 , x3 =

−R1 + R2 → R2 R1 + R3 → R 3

free 

     −1 −1 −2 ⇒ x =  2  +c1  −2  = xp +xh , c1 = arbitrary constant, where xp =  2  , 0 1 0



 5 −4  6

1 1 2| 1.4.1.2.  0 −1 −3 | −2 2 8|



1 1 2|  0 −1 −3 | 0 4 12 |



 5 −4  16

2R1 + R3 → R3



 −2 xh = c1  −2  1

○ 0 1  0 ○ 1 0 0 

∼ 4R2 + R3 → R3 −R2 → R2 −R2 + R1 → R1

 −1 | 1 3| 4 0| 0

   1 1 =⇒ Solutions are x =  4  + c1  −3 , where c1 =arbitrary constant, 1 0     1 1 that is, solutions are x = xp + xh , where xp =  4  , xh = c1  −3 , where c1 =arbitrary constant 1 0 

 2 0 3| 4 1 1 −1 | −5   0 −2 5 | −6  1 1 −1 | 5



1 0 1.4.1.3.  1 0

 1 2 0 3| 4 0 1 1 −1 | −5     0 −2 −2 2 | −10  0 1 1 −1 | 5



=⇒ last row is

0

0

0

0 |

6= 0





−R2 +R4 → R4

−R1 +R3 → R3



 1 2 0 3| 4 0 1 1 1 | −5     0 −2 −2 2 | −10  0 0 0 0 | 10 



⇒ there is no solution.

1.4.1.4. By linearity, A (xp,1 + xp,2 ) = A (xp,1 ) + A (xp,2 ) = b(1) + b(2) , so x = xp,1 + xp,2 is a solution of Ax = b(1) + b(2) . So, x = xp,1 + xp,2 + xh is the general solution of Ax = b(1) + b(2) . 

  4 c1 − c2  1   −c1 − c2   1.4.1.5. (a) x =   0 + c1 0 c2 c1 , c2 are arbitrary constants





  4 c1 − c2   1   −c1 − c2  = xp + xh , where xp =   and xh =    0   c1 0 c2

  , 



 c1 − c2  −c1 − c2   , where c1 , c2 are arbitrary constants (b) The general solution of Ax = 0 is x = xh =    c1 c2 

       1 −1 1 −1  −1   −1   −1    (1) (2)  −1       (c) xh = c1   1  + c2  0 , so x =  1  and x  0  are the basic solutions of Ax = 0. 0 1 0 1 ©Larry

Turyn, October 13, 2013

page 13

1.4.1.6. (a) If m ≤ n, then it is possible for Ax = b to have a solution if [ A | b ] has a pivot position in each row: Ex. 1: A = In (if m = n); Ex. 2: A = [Im | 0 . . . 0], if m < n. (b) May be true: Ex. 1: A = [Im | 0 . . . 0], if m < n; Ex. 2: A = In , if m = n. (c) May be true, because, in fact, it is definitely true if m ≤ n. (d) Not possible, because if Ax = b has a solution then b is in span of the column of A. If every b in Rn has a solution x for Ax = b, then every b in Rn is in the span of the columns of A, hence the columns of A span Rm . 1.4.1.7. Hint: Ax = b =⇒ bT = xT AT . Using the hint and the assumption that AT z = 0, we find that   bT z = xT AT z = xT AT z = xT 0 = 0. Note: If Ax = b has no solution x then we should not multiply Ax = b on both sides by z because that would implicitly assume that Ax = b is true for some x. ○ 0 1 1.4.1.8. (a) Ex. A =  0 ○ 1 0 0 

 2 −3 2 0  0 0

 ○ 0 2 −3 | 0 1 0 | 0  =⇒ x3 and x4 are free, (b) [ A | 0 ] =  0 ○ 1 2 0 0 0 0 | 0       −2c1 + 3c2 3 −2       −2c −2 1   0   = c1  and x =     1  + c2  0 , arbitrary constants c1 , c2 c1 1 0 c2     3 −2  0   −2     ⇒  1  and  0  are the only basic solutions for this example of matrix A. 1 0 

   1 ○ 1 (c) Ex. Let b =  0 , so [A | b] =  0 0 0  1  0   =⇒ Solution: xp =   0 ; and, using the 0      1 −2  0   −2       x = xp + xh =   0  + c1  1  + c2  0 0

 0 2 −3 | 1 ○ 0 | 0  1 2 0 0 0 | 0



result of part (b), all solutions are  3 0  , where c1 , c2 are arbitrary constants. 0  1

1.4.1.9. No. One can find a b for which Ax = b has no solution. [Hint: Use the elementary matrices that row reduce A to its RREF.] Because A has only 5 columns, RREF has a ○ 1 in at most 5 rows. So, at least the 6th row of RREF(A) is [0 . . . 0].

©Larry

Turyn, October 13, 2013

page 14

Let Ek . . . E2 E1 be a sequence of elementary matrices that row reduce A to RREF. Let   0  ..   −1 −1  b = (Ek . . . E2 E1 ) e(6) = (Ek . . . E2 E1 )  .  .  0  1 Then, multiplication by Ek . . . E2 E1 on both sides of Ax = b gives −1

(Ek . . . E2 E1 ) A = (Ek . . . E2 E1 ) (Ek . . . E2 E1 )

 e(6) = Ek . . . E2 E1 E1−1 E2−1 . . . Ek−1 e(6) = e(6) .

The last row of the system (Ek . . . E1 ) A = e(6) is [0 . . . 0 | 1] hence, there was no solution of Ax = b. So this −1 b , (Ek . . . E2 E1 ) e(6) is a b for which Ax = b has no solution. 

1 1 1.4.1.10.  1 −1 1 0



 1| 0 1| 6 2| 0

∼ −R1 + R2 → R2 −R1 + R3 → R3

1 1  0 −2 0 −1

 1| 0 0| 6 1| 0

○ 1  0 0 

∼ − 12 R2 R2 + R 3 −R2 + R1 −R3 + R1

0 0 ○ 0 1 0 ○ 1

 | 6 | −3  | −3

→ R2 → R3 → R1 → R1



 6 =⇒ The general solution is x =  −3 . The only solution of the corresponding homogeneous system is −3   0 x =  0 . 0

©Larry

Turyn, October 13, 2013

page 15

Section 1.5.3 

1.5.3.1.

1 4 | 1 −2 3 | 0

0 1



 ∼

4 | 1 11 | 2

1 0

0 1



1 4 −2 3





1 1.5.3.2.  2 3

−1 =

1 −1 | 1 0 −1 | 0 0 2| 0

− 12 R2 → R2 −R2 + R1 → R1 3R2 + R3 → R3

1  ⇒ 2 3



3 −4 2 1

0 1 0

 0 0 1







1 11

1 0 0

−1 1 −1 0 −1  = 0 2



1 0 | 3/11 −4/11 0 1 | 2/11 1/11



1 R 11 2

→ R2 −4R2 + R1 → R1

2R1 + R2 → R2







exists.

 ∼ −2R1 + R2 → R2 −3R1 + R3 → R3

1 1 −1 |  0 −2 1| 0 −3 5|

 0 −0.5 | 0 0.5 0 1 −0.5 | 1 −0.5 0  0 3.5 | 0 −1.5 1



0 2 1  7 −5 7 0 −3

 1 −2 2| 1 0 0 0 −1 | 0 1 0  1.5.3.3.  −2 2 −1 0| 0 0 1

 1 0 0 −2 1 0  −3 0 1



∼ 2 R → 7 3 1 R + R2 3 2 1 R + R1 3 2

R3 → R2 → R1

 1 0 0| 0 2/7 1/7  0 1 0 | 1 −5/7 1/7  0 0 1 | 0 −3/7 2/7

 1 1  exists. 2





 1 0 0 2 1 0 −2 0 1

1 −2 2|  0 −4 3| 0 3 −4 |

∼ 2R1 + R2 → R2 −2R1 + R3 → R3

 ∼ − 14 R2 → R2 2R2 + R1 → R1 −3R2 + R3 → R3

1 0 0

−1 1 −2 2 0 −1  = ⇒  −2 2 −1 0 



 0 0 1

0 0.5 | 0 −0.5 1 −0.75 | −0.5 −0.25 0 −1.75 | −0.5 0.75

1 1 −1 | 1 0 2| 0 1.5.3.4.  3 −2 −1 0| 0





1 0 − 47 R3 → R3 0

3 R 4 3 1 − 2 R3

0 1 0

 0 | −1/7 −2/7 2/7 0 | −2/7 −4/7 −3/7  1 | 2/7 −3/7 −4/7

+ R2 → R 2 + R1 → R 1



 −1 −2 2 1  −2 −4 −3  exists. 7 2 −3 −4

0 1 0

 0 1  0

 ∼ −3R1 + R2 → R2 2R1 + R3 → R3

1 1 −1 |  0 −3 5| 0 1 −2 |

 1 0 0 −3 1 0  2 0 1

©Larry

Turyn, October 13, 2013

page 16



1 0 1|  0 1 −2 | 0 0 −1 |

∼ R2 ↔ R3 3R2 + R3 → R3 −R2 + R1 → R1

 0 −1 0 1  1 3

−1 2 3



∼ −R3 → R3 2R3 + R2 → R2 −R3 + R1 → R1

1  0 0

0| 0| 1|

0 1 0

 2 1 2 −4 −2 −5  −3 −1 −3

−1   1 1 −1 2 1 2 0 2  =  −4 −2 −5  exists. ⇒ 3 −2 −1 0 −3 −1 −3 



 1 0 −1 | 1 0 0 1 −1/2 0  1.5.3.5.  0 1 −3/2 | 0 0 5/2 | −5/2 1/2 −1/2

−1 1 0 −1 1  = ⇒  2 −2 −3 −2 1 

1.5.3.6. A

−1

=



 −1 T

A

1 102

A2 = A · A = 1.5.3.7. AT B

−1

1.5.3.8. Ex. A = −1

(A + B)

=



T

1 2

1 0 

=



A



−1

a11

    =   



R3 → R2 R3 + R 1 → R 1

= A−1 

 =

−1

−1

...

a1n a2n

..

.

.. . ann

1 100

A−1

0 1 −1 0

versus A

a12

 1 −3 , 2 4

=

4 3 −2 1

a22

0



 T T −1

and B =

1 −1 1 1 

1.5.3.9. Yes: A−1



2 R → 5 3 3 R + R2 3 2

 0 1/5 −1/5 −1/2 −1/5 −3/10  −1 1/5 −1/5

0 1 0

0| 0| 1|

=

1 4+6



2 3 −2 −1

 0 2 −2 1  −5 −2 −3  exists. 10 −10 2 −2

4 3 −2 1

0 1

1 0 0



= B −1 AT









A= A

10 15 −10 −5

T

−1

−1

 =

1 20



4 3 −2 1

 =

1 10



 4 3 , and −2 1

 .

= AB T

have A + B =

+B

       



 −1 −1

 =

1 0



a−1 11



1 1 −1 1

 , so

     0 0 −1 1 −1 −1 + = 6= (A + B) 1 1 0 1 1 a12 − a11 a22

...

a−1 22

a23 − a22 a33

     =    

..

.

 ...

        ..  . 

a−1 nn

0

Further information: Let A−1 = B = [ bij ]. The diagonal entries of B are easy to determine: bii = a−1 ii . After that, we can determine the first superdiagonal entries, as indicated above: bi,i+1 = −

ai,i+1 . aii ai+1,i+1

In order to have AB = I, consider the (1, 3) entries of AB and I: We need    a23 0 = a11 b13 + a12 b23 + a13 b33 + ... + a1n bn3 = a11 b13 + a12 − + a13 a−1 33 + 0 + ... + 0 a22 a33 ©Larry

Turyn, October 13, 2013

page 17

implies b13 =

a−1 11



a13 a12 a23 − a22 a33 a33

 .

Continuing in this way we can construct all of B = A−1 and see that it is also upper triangular.     1.5.3.10. Ay(1) = e(1) , A −y(4) = e(2) , A −y(2) = e(3) , Ay(3) = e(4) ⇒ A y(1) pp − y(4) pp − y(2) pp y(3) = I   ⇒ A−1 = y(1) pp − y(4) pp − y(2) pp y(3) .     e(2) + e(4) + 21 e(2) − e(4) = A y(3) + y(4) , e(3) = A −y(2) ,      e(2) + e(4) − 12 e(2) − e(4) = A y(3) − y(4) ⇒ A y(1) pp y(3) + y(4) pp − y(2) pp y(3) − y(4) = I

1.5.3.11. e(2) = e(4) =

1 2

⇒ A−1 =



1 2

y(1) pp y(3) + y(4)

p p

 − y(2) pp y(3) − y(4) .

    1.5.3.12. Ay(3) = e(1) , Ay(1) = e(2) , A −y(4) = e(3) , A −y(2) = e(4) ⇒ A y(3) pp y(1) pp − y(4) pp − y(2) = I   ⇒ A−1 = y(3) pp y(1) pp − y(4) pp − y(2)     1.5.3.13. A −y(2) = e(1) ⇒ A y(1) + y(2) = e(1) + e(2) + −e(1) = e(2)    ⇒ A y(3) − y(1) + y(2) = e(2) + e(3) − e(2) = e(3)   ⇒ A −y(2) pp y(1) + y(2) pp − y(1) − y(2) + y(3) = I   ⇒ A−1 = −y(2) pp y(1) + y(2) pp − y(1) − y(2) + y(3) 1.5.3.14. Ex. A =



1 2

0 0

  0 ,B= 3

0 4



has AB = O and BA =



0 3

0 4



1 2

0 0



 =

0 0 11 0

 6= O.

1.5.3.15. (a) If Ax = 0, then (CA)x = C(Ax) = C(0) = 0. So, x solves CAx = 0. (b) If CAx = 0, then C −1 (CAx) = C −1 (0) = 0. Because C −1 (CAx) = Ax, it follows that x solves Ax = 0. 1.5.3.16. Yes, there is a relationship between x and z, because z = Cy = C(Bx) = (CB)x = Ix = x. So, x must equal z. 1.5.3.17. Define D = AC and B = C −1 A−1 . We will explain why DB = I, and that will imply AC is invertible and (AC)−1 = C−1 A−1 DB = (AC) C −1 A−1 = A(CC −1 )A−1 = AIA−1 = AA−1 = I. Because DB = I. Theorem 1.21 in Section 1.5 implies D = AC is invertible and (AC)−1 = B = C −1 A−1 . 1.5.3.18 We assume that AB = BA. (a) Using Theorem 1.23(c) in Section 1.5, we calculate A−1 B −1 = (BA)−1 = (AB)−1 = B −1 A−1 . (b) Multiply the result of part (a), that is, A−1 B −1 = B −1 A−1 , on the right by A and on the left by A to get B −1 A = A A−1 B −1 A = A B −1 A−1 A = AB −1 . So , AB = BA implies B −1 A = AB −1 . Exchanging A for B, we have that BA = AB implies A−1 B = BA−1 . (c) Using BA−1 = A−1 B, one of the two results of part(b), we calculate      RHS , AB A−1 + B −1 = AB A−1 + AB B −1 = A BA−1 + A = A A−1 B + A = B + A = A + B = LHS, as was desired.  1.5.3.19. (a) Must be false, by Theorem 1.25 in Section 1.5 not (d) =⇒ not (a) (b) Must be true, because of part (a) and “not invertible" and “singular" are synonymous. ©Larry

Turyn, October 13, 2013

page 18

(c) Must be false, by Theorem 1.25 in Section 1.5 not (d) =⇒ not (e) (d) Must be false, because 0 is in W no matter what A is.



  1.5.3.20. (a) Let x satisfy Bx = x(1) , that is, x = B −1 x(1) . Then ABx = AB B −1 x(1) = A BB −1 x(1) = A(I)x(1) = Ax(1) = 0. (b) Supposing m = n, then there exists x(1) 6= 0 =⇒ Ax(1) = 0 =⇒ ν(A) ≥ 1 =⇒ rank(A) = n−ν(A) ≤ n−1. Because AB is also n × n and has ν(AB) ≥ 1, again, rank(AB) ≤ n − 1. 1.5.3.21. (a) Yes, because A(BC) = I implies A is invertible and A−1 = BC. (b) Yes, (AB)C = I and Theorem 1.21 in Section 1.5 implies C is invertible and C −1 = AB. (c) Yes, because ABC = I and part (a) implies BC = A−1 (ABC) = A−1 I = A−1 . Similarly part (b) says C invertible, hence B = (BC)C −1 = A−1 C −1 . But, A−1 and C −1 are invertible, so Theorem 1.23(c) in −1 Section 1.5 implies B = A−1 C −1 is invertible, as well as B −1 = A−1 C −1 = CA. 1.5.3.22. Yes. First, we will find what A−1 should be, if it exists, and then we will see that that formula for −1 −1 −1 −1 −1 A−1 works! If A−1 were invertible, wewould have  (AC) = C A , hence A = C(AC) . So, define −1 −1 B , C(AC)−1 . We calculate AB = A C (AC) = (AC) (AC) = I, so by Theorem 1.21 in Section 1.5, A−1 exists. The desired formula is A−1 = B = C(AC)−1 . 1.5.3.23. (a) X = AX + C ⇐⇒ (I − A)X = X − AX = C ⇐⇒ X = (I − A)−1 C = BC. (b) AX = X + C ⇐⇒ − C = −AX + X = (I − A)X ⇐⇒ X = −(I − A)−1 C = − BC. (c) XA = X + C ⇐⇒ − C = X − XA = X(I − A) ⇐⇒ X = − C(I − A)−1 = −CB. 

1.5.3.24. Look for a matrix B in block form B = [B11 B12; B21 B22] that we want to satisfy AB = I. We calculate

AB = [I O; A21 I][B11 B12; B21 B22] = [I·B11 + O·B21, I·B12 + O·B22; A21 B11 + I·B21, A21 B12 + I·B22] = [B11, B12; A21 B11 + B21, A21 B12 + B22] =? I.   (⋆)

The first row implies I = B11 and O = B12. Substitute those into (⋆) to get

[I O; A21 + B21, B22] =? In,

hence we need I = B22 and O = A21 + B21, hence B21 = −A21. So, we conclude that A is invertible and A^(-1) has the partitioned form

A^(-1) = [I O; −A21 I].



©Larry Turyn, October 13, 2013

1.5.3.25. Look for a matrix B in block form B = [B11 B12; B21 B22] that we want to satisfy AB = I. We calculate

AB = [A11 O; A21 A22][B11 B12; B21 B22] = [A11 B11 + O·B21, A11 B12 + O·B22; A21 B11 + A22 B21, A21 B12 + A22 B22] = [A11 B11, A11 B12; A21 B11 + A22 B21, A21 B12 + A22 B22] =? I.   (⋆)

The first row implies I = A11 B11 and O = A11 B12. Because we assumed that A11 and A22 are invertible, we get B11 = A11^(-1) and B12 = O. Substitute those into (⋆) to get

[I O; A21 A11^(-1) + A22 B21, A22 B22] =? In,

hence we need I = A22 B22 and O = A21 A11^(-1) + A22 B21, hence B22 = A22^(-1) and B21 = −A22^(-1) A21 A11^(-1). So, we conclude that A is invertible and A^(-1) has the partitioned form

A^(-1) = [A11^(-1) O; −A22^(-1) A21 A11^(-1)  A22^(-1)].

1.5.3.26. One other example is [−2 −1; 3 2]. Another example is [−2 3; −1 2].
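Both reconstructed matrices square to the identity, i.e., each is its own inverse — a plausible reading of this exercise; treat the specific entries as an assumption recovered from the garbled print:

```python
import numpy as np

# Assumed reconstruction of the two example matrices in 1.5.3.26.
M1 = np.array([[-2.0, -1.0], [3.0, 2.0]])
M2 = np.array([[-2.0, 3.0], [-1.0, 2.0]])

# Each matrix is involutory: M @ M = I, so M is its own inverse.
ok1 = np.allclose(M1 @ M1, np.eye(2))
ok2 = np.allclose(M2 @ M2, np.eye(2))
```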


Section 1.6.3

1.6.3.1. (a) For example, expanding along the first row,
|0 1 4; −1 3 2; 2 0 1| = −(1)|−1 2; 2 1| + 4|−1 3; 2 0| = −(−1 − 4) + 4(0 − 6) = −19.
(b) |0 1 4; −1 3 2; 2 0 1| = −|−1 3 2; 0 1 4; 2 0 1|, after R1 ↔ R2,
= −|−1 3 2; 0 1 4; 0 6 5|, after 2R1 + R3 → R3,
= −|−1 3 2; 0 1 4; 0 0 −19|, after −6R2 + R3 → R3,
= −(−1)(1)(−19) = −19.

1.6.3.2. (a) For example, expanding along the first column,
|0 −1 2; 1 5 3; −2 −1 1| = −(1)|−1 2; −1 1| + (−2)|−1 2; 5 3| = −(−1 + 2) + (−2)(−3 − 10) = 25.
(b) |0 −1 2; 1 5 3; −2 −1 1| = −|1 5 3; 0 −1 2; 0 9 7|, after R1 ↔ R2 and then 2R1 + R3 → R3,
= −|1 5 3; 0 −1 2; 0 0 25|, after 9R2 + R3 → R3,
= −(1)(−1)(25) = 25.
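As a quick numerical cross-check of the two determinants above (NumPy is not part of the manual; this is just a verification sketch using the two 3 × 3 matrices from 1.6.3.1 and 1.6.3.2):

```python
import numpy as np

# Matrices of 1.6.3.1 and 1.6.3.2, rows as in the cofactor expansions above.
A1 = np.array([[0.0, 1.0, 4.0],
               [-1.0, 3.0, 2.0],
               [2.0, 0.0, 1.0]])
A2 = np.array([[0.0, -1.0, 2.0],
               [1.0, 5.0, 3.0],
               [-2.0, -1.0, 1.0]])

det1 = np.linalg.det(A1)   # cofactor expansion gave -19
det2 = np.linalg.det(A2)   # cofactor expansion gave 25
```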

1.6.3.3. Ex. A = [1 0; 0 −1], B = [−1 0; 0 1] has
|A| + |B| = (−1) + (−1) = −2 and |A + B| = |O| = 0 ≠ −2 = |A| + |B|.

1.6.3.4. (a) |AB| = |A| · |B| = (−2)(5) = −10.
(b) |A^T B²| = |A^T| · |B²| = |A| · |B|² = (−2)(5²) = −50.
(c) For 3 × 3 matrices, |−A⁴B| = |(−1)A⁴B| = (−1)³|A|⁴ · |B| = (−1)(−2)⁴ · (5) = (−1)(16)(5) = −80.
(d) |AB^(-1)| = |A| · |B^(-1)| = |A| (|B|)^(-1) = (−2)(5)^(-1) = −2/5.
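The four determinant rules used in 1.6.3.4 can be spot-checked numerically. Any 3 × 3 matrices with |A| = −2 and |B| = 5 reproduce the same four numbers; diagonal choices (an assumption made here for exact arithmetic) keep the check simple:

```python
import numpy as np

A = np.diag([-2.0, 1.0, 1.0])   # |A| = -2
B = np.diag([5.0, 1.0, 1.0])    # |B| = 5

a = np.linalg.det(A @ B)                                 # (a) expect -10
b = np.linalg.det(A.T @ (B @ B))                         # (b) expect -50
c = np.linalg.det(-(np.linalg.matrix_power(A, 4) @ B))   # (c) expect -80
d = np.linalg.det(A @ np.linalg.inv(B))                  # (d) expect -2/5
```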

1.6.3.5. (a) After −R1 + R2 → R2,
|1 5 1 10; 3 12 0 18; 3 9 −2 6; 4 0 −3 4| = |1 5 1 10; 2 7 −1 8; 3 9 −2 6; 4 0 −3 4| = |A| = −132.
(b) Multiplying R2 by 3 produces the determinant in part (a), so, using part (a), this determinant is
|1 5 1 10; 1 4 0 6; 3 9 −2 6; 4 0 −3 4| = (1/3)|1 5 1 10; 3 12 0 18; 3 9 −2 6; 4 0 −3 4| = (1/3)|A| = (1/3)(−132) = −44.

1.6.3.6. Computing each cofactor (each 2 × 2 minor is formed by deleting one row and one column of A) and transposing gives
adj(A) = [ |2 1; 1 1|  −|−1 1; 0 1|  |−1 2; 0 1| ;  −|1 2; 1 1|  |1 2; 0 1|  −|1 1; 0 1| ;  |1 2; 2 1|  −|1 2; −1 1|  |1 1; −1 2| ]^T
= [1 1 −1; 1 1 −1; −3 −3 3]^T = [1 1 −3; 1 1 −3; −1 −1 3].

1.6.3.7. After −aR1 + R2 → R2, −a²R1 + R3 → R3,
|1 1 1; a b c; a² b² c²| = |1 1 1; 0 b−a c−a; 0 b²−a² c²−a²|,
followed by −(b + a)R2 + R3 → R3, gives
|1 1 1; 0 b−a c−a; 0 0 (c−a)(c−b)| = (1)(b − a)(c − b)(c − a) = (c − a)(c − b)(b − a).
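The Vandermonde factorization just derived in 1.6.3.7 can be spot-checked at sample values (the values a = 2, b = 5, c = −3 are arbitrary choices, not from the manual):

```python
import numpy as np

a, b, c = 2.0, 5.0, -3.0
V = np.array([[1.0, 1.0, 1.0],
              [a, b, c],
              [a**2, b**2, c**2]])

det_V = np.linalg.det(V)
factored = (c - a) * (c - b) * (b - a)   # the product form from 1.6.3.7
```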

1.6.3.8. (a) Expanding along R1 gives
|A| = |a c 0; 0 b a; b 0 c| = a|b a; 0 c| − c|0 a; b c| = a(bc − 0) − c(0 − ab) = abc + cab = 2abc.
(b) adj(A) = [ |b a; 0 c|  −|0 a; b c|  |0 b; b 0| ;  −|c 0; 0 c|  |a 0; b c|  −|a c; b 0| ;  |c 0; b a|  −|a 0; 0 a|  |a c; 0 b| ]^T
= [bc ab −b²; −c² ac bc; ca −a² ab]^T = [bc −c² ca; ab ac −a²; −b² bc ab].
(c) A adj(A) = ... = 2abc I = |A| I.

1.6.3.9. (a) Method 1: B(α^(-1) A^(-1)) = (αA)(α^(-1) A^(-1)) = (αα^(-1))(AA^(-1)) = I. So, B^(-1) = α^(-1) A^(-1) exists.
Method 2: B = αA = A(αI) =⇒ B^(-1) = (αI)^(-1) A^(-1) = (α^(-1) I)A^(-1) = α^(-1) A^(-1) exists.
(b) adj(B) · B = |B| I and |B| = |αA| = αⁿ|A| =⇒ adj(B) = (adj(B)B)B^(-1) = |B| B^(-1) = αⁿ|A| (α^(-1) A^(-1)) = α^(n−1)|A| A^(-1). But adj(A)A = |A| I implies |A| A^(-1) = adj(A), so adj(B) = α^(n−1) adj(A).

1.6.3.10. |A| = |A^T| = |2A| = 2⁵|A| = 32|A| ⇒ −31|A| = 0 =⇒ |A| = 0.
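The identity A adj(A) = |A| I used in 1.6.3.8(c) and 1.6.3.9 can be verified numerically with a small cofactor routine. The matrix below is the reconstructed 1.6.3.8 matrix [a c 0; 0 b a; b 0 c] at sample values (the values a = 2, b = 3, c = 5 are an arbitrary choice):

```python
import numpy as np

def adjugate(M):
    """Classical adjoint: transpose of the cofactor matrix of M."""
    n = M.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

a, b, c = 2.0, 3.0, 5.0
A = np.array([[a, c, 0.0],
              [0.0, b, a],
              [b, 0.0, c]])

prod = A @ adjugate(A)   # should equal |A| I = 2abc I
```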


1.6.3.11. By Cramer's rule, with the determinant of the coefficient matrix equal to −4:
x1 = (2s + 2t)/(−4) = −(1/2)s − (1/2)t,  x2 = (−5s − 10 − 3t)/(−4) = (5/4)s + (3/4)t + 5/2,
and x3 = (−4s − 20 − 4t)/(−4) = 5 + s + t.

1.6.3.12. (a) 0 = |s 1; −2 3| = 3s + 2 ⇐⇒ s = −2/3. So the system has exactly one solution ⇐⇒ s ≠ −2/3.
(b) x1 = |5 1; 4 3| / (3s + 2) = (15 − 4)/(3s + 2) = 11/(3s + 2), for s ≠ −2/3; x2 = |s 5; −2 4| / (3s + 2) = (10 + 4s)/(3s + 2), for s ≠ −2/3.
The solution is x = [x1; x2] = (1/(3s + 2)) [11; 10 + 4s], for s ≠ −2/3.

1.6.3.13. (a) |s 1; 4 s| = s² − 4 is non-zero for |s| ≠ 2, that is, 2 ≠ s ≠ −2, in which case the system has exactly one solution.
(b) For |s| ≠ 2, x1 = |3 1; −6 s| / (s² − 4) = (3s + 6)/(s² − 4) = 3(s + 2)/((s − 2)(s + 2)) = 3/(s − 2),
and x2 = |s 3; 4 −6| / (s² − 4) = (−6s − 12)/(s² − 4) = −6(s + 2)/(s² − 4) = −6/(s − 2).
For |s| ≠ 2, the solution is x = [x1; x2] = (1/(s − 2)) [3; −6].
(c) For s = 2, the augmented matrix is [2 1 | 3; 4 2 | −6] ∼ [2 1 | 3; 0 0 | −12]. The bottom row is [0 0 | nonzero] =⇒ there does not exist a solution.
For s = −2, the augmented matrix is [−2 1 | 3; 4 −2 | −6] ∼ [−2 1 | 3; 0 0 | 0] =⇒ there are ∞-ly many solutions:
x = [−3/2 + (1/2)c1; c1], c1 = arbitrary constant.

1.6.3.14. x2 = |1 3 −2; 2 −2 −1; −1 1 1| / |1 0 −2; 2 1 −1; −1 2 1| = (−4)/(−7), hence x2 = 4/7.
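Cramer's rule, as used in 1.6.3.11–1.6.3.14, is a few lines of code; a minimal sketch checked against the 2 × 2 system of 1.6.3.13 (coefficient matrix [s 1; 4 s], right side (3, −6)) at the sample value s = 4:

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (assumes det(A) != 0)."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b        # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

s = 4.0
A = np.array([[s, 1.0], [4.0, s]])
b = np.array([3.0, -6.0])
x = cramer(A, b)            # formula from 1.6.3.13: (3/(s-2), -6/(s-2))
```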


1.6.3.15. x2 = |1 b1 1; 0 b2 −1; −1 b3 −1| / |1 1 1; 0 1 −1; −1 1 −1| = (b1 + b3)/2 =⇒ x2 = (1/2)b1 + (1/2)b3.

1.6.3.16. x3 = |1 1 b1; −1 2 b2; 0 1 b3| / |1 1 1; −1 2 −1; 0 1 −2| = (−b1 − b2 + 3b3)/(−3 − 2 − 1) = (1/6)b1 + (1/6)b2 − (1/2)b3.

1.6.3.17. 0 = |4 0 k; 1 k 0; k 0 9| = 4|k 0; 0 9| + k|1 k; k 0| = 36k − k³ = k(36 − k²) ⇐⇒ k = 0, −6, or 6.
If k is neither 0, −6, nor 6, then the matrix is invertible.

1.6.3.18. Expanding along the 2nd column, we get
0 = |A| = |k 0 k; 0 4 1; k 0 2| = 4|k k; k 2| = 4(2k − k²) = 4k(2 − k).
So, A is invertible ⇐⇒ k ≠ 0 and k ≠ 2, in which case
A^(-1) = (1/|A|) adj(A) = (1/(4k(2 − k))) [8 k −4k; 0 2k−k² 0; −4k −k 4k]^T = (1/(4k(2 − k))) [8 0 −4k; k 2k−k² −k; −4k 0 4k].

1.6.3.19 (a) Suppose Ri = Rj in matrix A. Then |A| = |B|, where B is obtained from A by −Ri + Rj → Rj . But, B has its Rj being all zeroes. Expanding |B| along Rj implies |B| = 0, so |A| = 0. (b) If A has two equal columns, then AT has two equal rows, hence |AT | = 0 by part (a). So, |A| = |AT | = 0. (c) (i) ai1 Aj1 + ai2 Aj2 + ... + ain Ajn is the expansion along the j − th row of B, where B is obtained from A by replacing the j−th row of A by the i − th row of A, although the i-th row of B is the i-th row of A. If i 6= j, then the matrix B has two equal rows, namely the i−th and the j−th. By part (a), |B|=0. So, 0 = |B| = ai1 Aj1 + ai2 Aj2 + ... + ain Ajn . (ii) a1j A1i + a2j A2i + ... + anj Ani is the expansion along the i − th column of |C|, where C is obtained from A by replacing the i−th column of A by the j − th column of A, although the i-th column of C is the i-th column of A. If i 6= j, then the matrix C has two equal columns, namely the i−th and the j−th. By part (b), |C|=0. So, 0 = |C| = a1j A1i + a2j A2i + ... + anj Ani . 

1.6.3.20. (a) Ax = b ⇐⇒ x = A^(-1) b, that is, [x1; x2; x3] = A^(-1) b = [1 −4 7; −2 5 −8; 3 −6 2][b1; b2; b3],
so x2 = (row 2 of A^(-1)) · b = −2b1 + 5b2 − 8b3.
(b) |A^(-1)| = |1 −4 7; −2 5 −8; 3 −6 2| = 10 + 96 + 84 − (105 + 16 + 48) = 21, and |A^(-1)| = 1/|A|, so |A| = 1/21.


1.6.3.21. C := AB = [a b; c d][e f; g h] = [ae+bg af+bh; ce+dg cf+dh]. Because 0 ≠ ad − bc = |A|, A is invertible, and similarly 0 ≠ eh − fg = |B| implies B is invertible. Because A and B are invertible, so is C. So, 0 ≠ |C| = (ae + bg)(cf + dh) − (af + bh)(ce + dg).



1.6.3.22. |C| = |AB| = |A| |B| and |B| = |CA| = |C| |A|. So, |C| = |A| |B| = |A|(|C| |A|) = |A|²|C|. Because C is invertible, |C| ≠ 0. So, we can divide |C| = |A|²|C| by |C| to get 1 = |A|². So, |A| = ±1.

1.6.3.23. (a) Yes, |A| ≠ 0, because (−1)ⁿ = |−In| = |AB| = |A| |B| implies |A| ≠ 0: if |A| = 0, then (−1)ⁿ = 0 · |B| = 0, giving a contradiction.
(b) (−1)ⁿ = |−In| = |A| |B|, so, similarly to part (a), |B| ≠ 0. It follows that B is invertible. So, it is not true that B is not invertible.
(c) By part (a), |A| ≠ 0 and A is invertible. So, −In = AB =⇒ In = (−1)AB = A((−1)B) = A(−B) =⇒ A^(-1) = −B must be true.
(d) |A^T| = 0 is false because |A^T| = |A| ≠ 0.

1.6.3.24. (a) b = Ax* ⇒ adj(A)b = (adj(A)A)x* = |A| x* = 0 · x* = 0.
(b) No, if Ax = b does not have a solution, it does not follow that adj(A)b = 0. (We can't multiply b = Ax* by adj(A) if there is no such x*.)
Ex. For A = [1 1 1; 1 1 1; 1 −1 0]: (1) Ax = b, b = [0; 1; 0], has no solution, because x1 + x2 + x3 can't equal both 0 and 1, and (2) adj(A) = [1 1 −2; −1 −1 2; 0 0 0]^T = [1 −1 0; 1 −1 0; −2 2 0],
so adj(A)b = adj(A)[0; 1; 0] = [−1; −1; 2] ≠ 0.
Aside: Perhaps there is also a 2 × 2 example?

1.6.3.25. (a) A, B invertible =⇒ A^(-1) = (1/|A|) adj(A) and B^(-1) = (1/|B|) adj(B), so
(AB)^(-1) = B^(-1) A^(-1) = (1/|B|) adj(B) · (1/|A|) adj(A) = (1/(|A||B|)) adj(B) adj(A),
but also
(AB)^(-1) = (1/|AB|) adj(AB) = (1/(|A||B|)) adj(AB)

1 1 1 adj(B) adj(A)= adj(B)adj(A) |B| |A| |A||B| 1 1 adj(AB) = adj(AB) |AB| |A||B|

⇒ adj(AB) = adj(B)adj(A). (b) No, not necessarily:         0 0 0 1 0 −1 0 0 Ex. A = ,B= have adj(A) = , adj(B) = 1 0 0 0 0 0 −1 0   0 0 ⇒ adj(B)adj(A) = 0 1     0 0 1 0 versus adj(AB) = adj = 6 adj(B)adj(A). = 0 1 0 0   p p 1.6.3.26. Let B = y(1) + y(2) p 2y(1) − y(2) p y(2) , which is 3 × 3 because each y(i) , must be 3 × 1 in order to   p p be multiplied on the left by A. Then AB = e(1) p e(2) p e(3) = I3 =⇒ There exists B −1 =⇒ |B| = 6 0. But, ©Larry

Turyn, October 13, 2013

page 25

p

p

column operations −C3 + C1 → C1 , C3 + C2 → C2 =⇒ |B| = |y(1) p 2y(1) p y(2) | and then −2C1 + C2 → C2 p

p

=⇒ |B| = |y^(1) | 0 | y^(2)| = 0, by expansion along column 2. This contradicts |B| ≠ 0. So, no, it is not possible for A, y^(1), y^(2) to satisfy
A(y^(1) + y^(2)) = e^(1), A(2y^(1) − y^(2)) = e^(2), and Ay^(2) = e^(3).
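The impossibility in 1.6.3.26 rests on the fact that [y1+y2 | 2y1−y2 | y2] is singular for every choice of y1, y2, since 2·(column 1) − (column 2) − 3·(column 3) = 0. A random spot-check:

```python
import numpy as np

rng = np.random.default_rng(0)
y1 = rng.standard_normal(3)
y2 = rng.standard_normal(3)

# The matrix B of 1.6.3.26; its columns are always linearly dependent.
B = np.column_stack([y1 + y2, 2 * y1 - y2, y2])
det_B = np.linalg.det(B)
```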

1.6.3.27. The last n − r columns of A are the last n − r columns of the n × n identity matrix In. Why? First, write In = [e^(1) | ... | e^(n)], let In−r = [f^(1) | ... | f^(n−r)] be the (n − r) × (n − r) identity matrix, and note that, for the last n − r columns of A, the (n − i)-th column of In is
e^(n−i) = [0^T | (f^((n−i)−r))^T]^T,
where 0 is in R^r: the first r entries are zero and the single entry 1 sits in position (n − i) − r of the last n − r entries. So, expand the determinant of A along the last column to get
|A| = |A11 O_{r,n−r}; O_{n−r,r} I_{n−r}| = 1 · |A11 O_{r,n−r−1}; O_{n−r−1,r} I_{n−r−1}|,
and then expand the determinant along the last of the (n − 1) columns to get
|A| = 1 · 1 · |A11 O_{r,n−r−2}; O_{n−r−2,r} I_{n−r−2}|.
Continue this way until we get the determinant of an r × r matrix, namely
|A| = 1 · ... · 1 · |A11|,
as was desired.

1.6.3.28. That the first r columns of A are the first r columns of the n × n identity matrix In follows from reasoning similar to that in problem 1.6.3.27. The rest of the work is also similar to that in problem 1.6.3.27, except that this matrix A doesn't have as many zeros in it. Note that A12 is r × (n − r) and A22 is (n − r) × (n − r). Expand the determinant of A along the first column to get
|A| = |Ir A12; O_{n−r,r} A22| = 1 · |I_{r−1} Ã12; O_{n−r,r−1} A22|,
where Ã12 is the (r − 1) × (n − r) matrix obtained from A12 by deleting its first row. After that, expand along the first of the (n − 1) columns to get
|A| = 1 · 1 · |I_{r−2} Â12; O_{n−r,r−2} A22|,
where Â12 is the (r − 2) × (n − r) matrix obtained from A12 by deleting its first two rows. Continue this way until we get the determinant of an (n − r) × (n − r) matrix, namely
|A| = 1 · ... · 1 · |A22|,
as was desired.

1.6.3.29. |A| = |A11 O; O I| · |I A11^(-1)A12; O A22| = |A11| · |A22|, by the results of problems 1.6.3.27 and 1.6.3.28.

Turyn, October 13, 2013

page 27

Section 1.7.3 1.7.3.1. b is in the Span of those three given vectors ⇐⇒ b = Ax, for some x. The latter equation gives augmented matrix       1 −1 1 | b1 /2 2 −2 2 | b1 1 −1 1 | b1 /2 0 1 0 3 3 | b2 − (b1 /2)  2 4 | b2  3 3| b2 − (b1 /2)  ∼ ∼ 0 3 3| b3 0 3 3 | b3 0 0 0 | b3 − b2 + (b1 /2) 1 R 2 1

→ R1 −R1 + R2 → R2

So, b is in that Span ⇐⇒ 0 =

−R2 + R3 → R3 1 2

b1 − b2 + b3 .

1 0 0 k −2 1 k −2 = − 0 k k −2 0 k = k 2 + 2. So, the given set of three vectors spans R3 for all real numbers k. 1 t 0 −1 1 2 1 = 2t − 3 + t2 + 2t = t2 + 4t − 3. − t 1.7.3.3. −1 2 1 = 2 t 3 t 2 3 t √ So, the vectors are linearly dependent exactly for ⇐⇒ t = −2 ± 7. 1 −2 0 3 1 + 2 −1 1 = 4 + 2(−2) = 0. So, they are not linearly independent. 3 1 = 1.7.3.4. −1 1 1 −1 1 1 −1 1     ○ 0 2 | 0 1 −2 0 | 0 1 ∼  0 ○ 3 1 | 0 To find a dependency relation, row reduce:  −1 1 1 | 0 0 0 0 | 0 1 −1 1 | 0 1 3 1.7.3.2. The given set of three vectors span R if, and only if, 0 6= 0 −1

R 1 + R2 −R1 + R3 −R2 + R3 2R2 + R1

→ R2 → R3 → R3 → R1

=⇒ x3 = free, x1 = −2x3 , x2 =−x3 .              −2 0 0 0 1 −2 1 Ex: x3 = 1, x1 = −2, x2 = −1 ⇒ −2  −1  −  3  +  1  =  0  ⇒  1  = 2  −1  +  3 . −1 1 0 1 1 −1 1 1 1.7.3.5. (a) 1 1 vectors in R3 is

3 1 1 −1 = 0 because R2 = R3 . So, by Theorem 1.43 in Section 1.7, the set of three given 1 −1 not a basis for R3 , is not linearly independent, and does not span R3 . 1 1 0 1 1 0 −1 1 (b) Using −R1 + R2 → R2 , 1 0 1 = 0 −1 1 = = −2 6= 0 1 1 0 1 1 0 1 1 ⇒ The set of three given vectors is linearly independent, does span R3 , and is a basis for R3 , all by Theorem 1.43 in Section 1.7. (c) The set of two given vectors cannot span R3 and cannot be a basis for R3 because we need at least three vectors to span and need exactly three vectors in order to be a basis. This follows from the Goldilocks Theorem 1.42 in Section 1.7.

©Larry

Turyn, October 13, 2013

page 28



1 1 0

 1 | 0 0 | 0  1 | 0

 1 1 0  0 −1 0  0 1 0

∼ −R1 + R2 → R2

○ 1  0 0 



∼ −R2 → R2 −R2 + R1 → R1 −R2 + R3 → R3

 0 | 0 ○ 1 | 0 0 | 0

⇒ the only solution of −c1 v1 + c2 v2 = 0 is c1 = c2 = 0, so yes, it is linearly independent. To summarize, the set of two given vectors is linearly independent, does not span, and is not a basis. (d) The set of four given vectors in R3 cannot be linearly independent and cannot be a basis for R3 because 4 > 3 and the Goldilocks Theorem 1.42 in Section 1.7. Does the set of four given vectors span R3 ? For any b in R3 ,     1 0 −1 1| b1 1 0 −1 1 | b1 0 1 2 1 ∼ 2 −5 | b2 − 2b1  0 −3 | b2  0 4 8 −3 | b3 − 3b1 3 4 5 0 | b3 −2R1 + R2 → R2 −3R1 + R3 → R3

○ 1  0 0 



0 −1 1| ○ 2 −5 | 1 0 0 ○ 17 |

 b1 b2 − 2b1  b3 − 3b1 − 4b2 + 8b1

−4R2 + R3 → R3

=⇒ There exists a solution for any b in R3 , that is, the set of four given vectors does span R3 . To summarize, the set of four given vectors is not linearly independent, does span, and is not a basis.   p p 1.7.3.6. First, rank(A) ≥ 2 because there exists at least two pivot positions in v(1) p v(2) p v(3) . Why? Because, otherwise, either (i) v(1) 6= 0 but v(2) and v(3) would have to be multiples of v(1) , or (ii) v(1) = 0 but v(2) , v(3) are multiples of each other. In either case of (i) or (ii), the set {v(2) , v(3) } would be linearly dependent, giving a contradiction. Second, the rank of a 6 × 4 matrix must be ≤ 4, because rank(A) =number of pivot positions in A, which is ≤ number of columns of A. To summarize, 2 ≤ rank(A) ≤ 4.   p 1.7.3.7. (a) First, rank(A) ≥ 2 because because there exists at least two pivot positions in v(1) p v(2) . Why? Because, otherwise, either (i) v(1) 6= 0 but v(2) is a multiple of v(1) , or (ii) v(1) = 0. In either case of (i) or (ii), the set {v(1) , v(2) } would be linearly dependent, giving a contradiction. Second, the rank of a 4 × 5 matrix must be ≤ 4, because rank(A) =number of pivot positions in A, which is ≤ number of rows of A. To summarize, 2 ≤ rank(A) ≤ 4. (b) ν(A) = n − rank(A) is always true. Here n = 5. From part (a), 2 ≤ rank(A) so −rank(A) ≤ −2 implies ν(A) = 5 + − rank(A) ≤ 5 + (−2) = 3.  Also, −rank(A) ≥ −4 implies ν(A) = 5 + − rank(A) ≥ 5 + (−4) = 1. To summarize, 1 ≤ nullity(A) ≤ 3. p

p

1.7.3.8. (a) {b1 , b2 , b3 } is a basis for R3 =⇒ |b1 p b2 p b3 | = 6 0 =⇒ There exists B −1 . (b) |AB| = |A||B| =(non-zero)·(non-zero) =non-zero. (c) |AB| 6= 0 ⇒ There exists (AB)−1 =⇒ The columns of AB are a basis for R3 . But,

©Larry

Turyn, October 13, 2013

page 29

    p p p p AB = A b1 p b2 p b3 = Ab1 p Ab2 p Ab3 , so Theorem 1.43 in Section 1.7 =⇒ {Ab1 , Ab2 , Ab3 } is a basis for R3 . 1.7.3.9. (a) A =



1 −1 3 1

2 0



 ∼

○ 1 0

−1 ○ 4

2 −6

 ⇒ rank(A) = 2

−3R1 + R2 → R2

(b) ν(A) = 3 − rank(A) = 3 − 2 = 1 (c)



1 −1 3 1

2| 0 0| 0



 ∼

1 −1 2 | 0 0 4 −6 | 0



 ∼

○ 1 0

0 0.5 | 0 ○ 1 −1.5 | 0

1 R 4 2



→ R2 R2 + R1 → R 1

−3R1 + R2 → R2

     −0.5c1 −0.5  −0.5  ⇒ x3 = free = c1 , and solutions are x =  1.5c1  = c1  1.5  =⇒ W has basis  1.5  .   c1 1 1 

 1.7.3.10. (a) False, by Theorem 1.43 in Section 1.7 not (d) =⇒ not (b) (b) True, because 1 = |I4 | = |AB| = |A| |B| = 0·|B| = 0. Alternatively, true by the definition of invertibility. (c) False, because x = 0 always is a solution of Ax = 0.   0 0 0 (d) False, as a general proposition: Ex. A =  0 1 0  0 0 1  (e) True, by Theorem 1.25 in Section 1.5 not (a) =⇒ not (d) (f) False, because A not invertible implies ν(A) ≥ 0 which implies that the number of pivot positions is < 4.   ○ 0 0 1 has rank(A) = 2 =⇒ n = 3, ν(A) = 3 − rank(A) = 3 − 2 = 1. 1.7.3.11. (a) Ex: A = 0 ○ 1 0  ○ 0 1 AT =  0 ○ 1  has rank(AT ) = 2. So, AT being 3 × 2 implies ν(AT ) = 2 − rank(AT ) = 2 − 2 = 0. So, 0 0  T ν A = 0 6= 1 = ν(A).  (b) Suppose m = n. By Theorem 1.44 in Section 1.7, rank(A) = rank AT , so m = n ⇒ ν(A) = n − rank(A), so ν(AT ) = n − rank(AT ) = n − rank(A) = ν(A). 

¯ j = a1 v ¯ 1 +...+aj−1 v ¯ j−1 +aj+1 v ¯ j+1 + 1.7.3.12. The “if" part: If v¯j is a linear combination of the others, say v ¯ ` , then ... + a` v  p 0 = a1 v1 +...+aj−1 vj−1 +(−1)·vj +aj+1 vj+1 +...+a` v` = v1 p

...

 p T p v` [ a1 . . . aj−1 − 1 aj+1 . . . a` ] ,

hence the set of vectors {v1 , ..., v` } is linearly dependent. The “only if" part: If {v1 , ..., v` } is linearly dependent, then (?) c1 v1 + ... + c` v` = 0, for some c 6= 0. Because c 6= 0, there exists index j such that cj 6= 0. Then (?) implies vj = −

c1 cj−1 cj+1 c1 v1 − ... − vj−1 − vj+1 − ... − v` , cj cj cj c`

that is, vj is a linear combination of the other ` − 1 vectors. ©Larry

Turyn, October 13, 2013

page 1

Chapter Two Section 2.1.6 −2 − λ 7 = (−2 − λ)(4 − λ) − 7 = λ2 − 2λ − 15 = (λ − 5)(λ + 3) 2.1.6.1. 0 = 1 4−λ ⇒ eigenvalues are λ1 = −3, λ2 = 5     1 7| 0

1 7| 0 ∼ , after −R1 + R2 → R2 [ A − λ1 I | 0 ] = 1 7| 0 0 0| 0   −7 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −3. ⇒ v1 = c 1 1     −7 7| 0

1 −1 | 0 [ A − λ2 I | 0 ] = ∼ , after − 17 R1 → R1 , −R1 + R2 → R2 1 −1 | 0 0 0| 0   1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 5. ⇒ v2 = c 1 1 1−λ 2 2.1.6.2. 0 = = (1 − λ)(2 − λ) − 6 = λ2 − 3λ − 4 = (λ − 4)(λ + 1) 3 2−λ ⇒ eigenvalues are λ1 = −1, λ2 = 4     2 2| 0

1 1| 0 [ A − λ1 I | 0 ] = ∼ , after 21 R1 → R1 , -3R1 + R2 → R2 3 3| 0 0 0| 0   −1 ⇒ v1 = c 1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −1. 1     2 −3 2| 0

1 −3 | 0 [ A − λ2 I | 0 ] = ∼ , after R1 + R2 → R2 , − 13 R1 → R1 3 −2 | 0 0 0| 0  2  ⇒ v2 = c1 3 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 4. 1 −1 − λ 2.1.6.3. 0 = 1

4 = (−1 − λ)(1 − λ) − 4 = λ2 − 5 1−λ √ √ ⇒ eigenvalues are λ1 = 5, λ2 = − 5 √ √     √ −1 − 5 4√ | 0

1− 5 | 0 [A − λ1 I | 0 ] = ∼ 1 , after R1 ↔ R2 , (1 + 5)R1 + R2 → R2 0 0 | 0 1 1− 5 | 0 √   √ −1 + 5 ⇒ v1 =c1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = 5. 1 √ √     √ −1 + 5 4√ | 0

1+ 5| 0 , after R1 ↔ R2 , (1 − 5)R1 + R2 → R2 [A − λ2 I | 0 ] = ∼ 1 0 0| 0 1 1+ 5| 0 √   √ −1 − 5 , for any const. c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = − 5. ⇒ v2 =c1 1 −2 − λ −5 = (−2 − λ)(−λ) − (−5) = λ2 + 2λ + 5 2.1.6.4. 0 = 1 0−λ ⇒ eigenvalues are λ1 = −1 + i2, λ2 = −1 − i2. Because the eigenvalues are not real and are a complex conjugate pair, we only need to calculate one eigenvector:     −1 − i2 −5 | 0

1 1 − i2 | 0 [ A − λ1 I | 0 ] = ∼ , after R1 ↔ R2 , (1 + i2)R1 + R2 → R2 1 1 − i2 | 0 0 0 | 0   −1 + i2 ⇒ v1 = c 1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −1 + i2. 1   −1 − i2 ¯ 1 = c1 The eigenvectors corresponding to λ2 = −1 − i2 are v2 = v , for any constant c1 6= 0. 1 c Larry

Turyn, January 2, 2014

page 2

2.1.6.5. We are given three distinct eigenvalues, so the only eigenvalues are λ1 = 0, λ2 = 2, λ3 = 4. All we need to do is to find the eigenvectors. 

1 [ A − λ1 I | 0 ] =  −1 1

2| 2| 2|

1 3 1

 0 0 0

 ∼ −R1 + R2 → R2 R 1 + R 3 → R3

1 0 0

2| 4| 0|

1 4 0

 0 0 0

 ∼ 1 R 4 2 −R2 + R1

→ R2 → R1

1  0 0

 0 0 0

1| 1| 0|

0

1 0

 −1 ⇒ v1 = c1  −1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = 0. 1 



−1 [ A − λ2 I | 0 ] =  −1 1

2| 2| 0|

1 1 1

 0 0 0

 ∼ −R1 → R1 R1 + R2 → R 2 −R1 + R3 → R3

1 0 0

−1 0 2

 0 0 0

−2 | 0| 2|

 ∼ R2 1 R 2 2 R2 + R1

↔ R3 → R2 → R1

1  0 0

−1 | 1| 0|

0

1 0

 0 0 0



 1 ⇒ v2 = c1  −1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 2. 1 

−3 [ A − λ3 I | 0 ] =  −1 1

2| 2| −2 |

1 −1 1

 0 0 0

 ∼

R1 ↔ R3 R1 + R 2 → R 2 3R1 + R3 → R3

1 0 0

1 0 4

−2 | 0| −4 |



 0 0 0



R2 1 R 4 2 −R2 + R1

↔ R3 → R2 → R1

1  0 0

−1 | −1 | 0|

0

1 0

 0 0 0



 1 ⇒ v3 = c1  1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ3 = 4. 1 2.1.6.6. By expanding along the second row, we calculate 1−λ −3 6 1−λ   6 2 −2 − λ 0 = (−2 − λ) · 0 = 0 = (−2 − λ) (1 − λ)(−λ) − 12 = (−2 − λ) λ − λ − 12 2 −λ 2 0 −λ = (−2 − λ)(λ − 4)(λ + 3), so the eigenvalues are λ1 = −3, λ2 = −2, λ3 = 4. 

−3 1 0

4 [A − λ1 I | 0 ] =  0 2

6| 0| 3|

 0 0 0

 ∼ − 12 R1 + R3 → R3 1 R → R1 4 1

−0.75 1 1.5

1 0 0

3 2

| 0| 0|

 0 0 0

 ∼ 3 R + R1 → R 1 4 2 − 32 R2 + R3 → R3

1  0 0

3 2

| 0| 0|

 0 0 0

1| −1 | 0|

 0 0 0

0

1 0

 − 23 0 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −3. ⇒ v1 = c 1  1 



3 [ A−λ2 I | 0 ] =  0 2

−3 0 0

6| 0| 2|

 0 0 0

 ∼

1 R → R1 3 1 −2R1 + R3 → R3 R2 ↔ R3

1 0 0

−1 2 0

2| −2 | 0|

 0 0 0



1  0 0



0

1 0

1 R → R2 2 2 R2 + R 1 → R1

c Larry

Turyn, January 2, 2014

page 3

 −1 ⇒ v2 = c1  1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = −2. 1 



−3 [ A − λ3 I | 0 ] =  0 2

−3 −6 0

6| 0| −4 |

 0 0 0

 ∼

1 0 0

1 −6 −2

−2 | 0| 0|

− 31 R1 → R1 −2R1 + R3 → R3

 0 0 0



1  0 0



0

1 0

−2 | 0| 0|

 0 0 0

− 61 R2 → R2 −R2 + R1 → R1 2R2 + R3 → R3

 2 ⇒ v3 = c1  0 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ3 = 4. 1 

2.1.6.7. By expanding along the first row, we calculate −3 − λ 0 0 −4 − λ −3 4 −4 − λ −3 0= = (−3 − λ) · 1 −λ −1 1 −λ

  = (−3 − λ) (−4 − λ)(−λ) + 3 = (−3 − λ) λ2 + 4λ + 3

= (−3 − λ)(λ + 3)(λ + 1), so the eigenvalues are λ1 = λ2 = −3, λ3 = −1.      1 −1 −3 | 0 0 0 0| 0

0 0| 1 0  0 3 9| 0 ∼ ∼ 3| [ A − λ1 I | 0 ] =  4 −1 −3 | 0  1 0 0 0| 0 −1 1 3| 0 0 0 0| 1 R1 ↔ R 3 R → R2 3 2 −R1 → R1 R2 + R 1 → R 1 −4R1 + R2 → R2   0 ⇒ v1 = c1  −3 , for any constant c1 6= 0, are the only eigenvectors corresponding to eigenvalue λ1 = −3. 1 So, λ = −3 is a defective eigenvalue.

 0 0 0

Because λ2 = λ1 , we get no further eigenvectors corresponding to eigenvalue λ2 . 

−2 [ A − λ3 I | 0 ] =  4 −1

0 −3 1

0| −3 | 1|

 0 0 0

 ∼

− 21 R1 → R1 −4R1 + R2 → R2 R 1 + R 3 → R3

1 0 0

0 −3 1

0| −3 | 1|

 0 0 0

 ∼ − 13 R2 → R2 −R2 + R3 → R3 2R2 + R3 → R3

1  0 0

0

1 0

0| 1| 0|

 0 0 0



 0 ⇒ v3 = c1  −1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ3 = −1. 1 2.1.6.8. Using the criss-cross method, 6−λ 1 4  1 − λ −4 = (6 − λ)(1 − λ)(−λ) + 4 + 16 − − 4(1 − λ) + 4λ + 4(6 − λ) = −λ(6 − λ)(1 − λ) + 2 0 − 0 = −4 2 0 − 4λ −1 −1 −λ     = −λ (6 − λ)(1 − λ) + 4 = −λ λ2 − 7λ + 10 = −λ(λ − 2)(λ − 5), so the eigenvalues are λ1 = 0, λ2 = 2, λ3 = 5.

c Larry

Turyn, January 2, 2014

page 4



6 [ A−λ1 I | 0 ] =  −4 −1

1 1 −1

 0 0 0

4| −4 | 0|



1 0 0



R1 −R1 4R1 + R2 −6R1 + R2

1 5 −5

0| −4 | 4|

 0 0 0



1  0 0



0

1 0

0.8 | −0.8 | 0|

 0 0 0

R 2 + R 3 → R3 1 R → R2 5 2 −R2 + R1 → R1

↔ R3 → R1 → R2 → R2

 −0.8 ⇒ v1 = c1  0.8 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = 0. 1 



4 [ A − λ2 I | 0 ] =  −4 −1

1 −1 −1

4| −4 | −2 |

 0 0 0



1 0 0

∼ R2 + R1 R1 −R1 4R1 + R2

1 3 0

2| 4| 0|

 0 0 0

 ∼ 1 R 3 2 −R2 + R1

→ R1 ↔ R3 → R1 → R2

1  0 0

0

1 0

2/3 | 4/3 | 0|

 0 0 0

→ R2 → R1



 −2/3 ⇒ v2 = c1  −4/3 , for any constant c1 6= 0, are the only eigenvectors corresponding to eigenvalue λ2 = 2. 1 

1 [ A − λ3 I | 0 ] =  −4 −1

1 −4 −1

4| −4 | −5 |

 0 0 0

 ∼

1 0 0

1 0 0

4| 12 | −1 |

4R1 + R2 → R2 R 1 + R 3 → R3

 0 0 0

 ∼ 1 R 12 2 R2 + R3

→ R2 → R3 −4R2 + R1 → R1

1  0 0

1 0 0

0

1 0

 0 0 0

| | |



 −1 ⇒ v3 = c1  1 , for any constant c1 6= 0, are the only eigenvectors corresponding to eigenvalue λ3 = 5. 0 2.1.6.9. By expanding along the first row, we calculate 1−λ 0 0 3−λ 1 3−λ 1 =(1 − λ) 0= 2 2 5−λ −1 2 5−λ so the eigenvalues are λ1 = 1, λ2  0 0 0| [ A − λ1 I | 0 ] =  2 2 1 | −1 2 4 |



1

⇒ v1 = c 1 

− 23

  = (1 − λ) (3 − λ)(5 − λ) − 2 = (1 − λ)(λ2 − 8λ + 13),

√ √ 3, λ3 = 4 − 3.  1 0 ∼ 0 R 1 ↔ R3 −R1 → R1 −2R1 + R2 → R2

=4+  0 0 0

−2 6 0

−4 | 9| 0|

 0 0 0

 ∼

1  0 0

0

1 0

−1 | 3 | 2 0|

 0 0 0

1 R → R2 6 2 2R2 + R1 → R1

 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = 1.

1

c Larry

Turyn, January 2, 2014

page 5

√ −3 − 3 [A − λ2 I | 0 ] =  2 −1 

0√ −1 − 3 2

0| 1√| 1− 3|

 0 0 0





1 0 ↔ R3 0 → R1 → R2 → R3

−2√ 3− √ 3 −2(3 + 3)

√ −1 + √ 3 3 −√2 3 2 3

| | |

R1 −R1 √−2R1 + R2 (3 + 3)R1 + R3 √ √ √ √ 3−2 3 3−2 3 3+ 3 1− 3 √ √ √ Note that = · = ... = . So, 2 3− 3 3− 3 3+ 3  

0 0√ | 0 1 1− 3  0 ∼ [A − λ2 I | 0 ] | 0 1 2 0 0 0 | 0 √ (3 − 3)−1 R2 → R2 √ 2R2 + R1 → R1 2(3 + 3)R2 + R3 → R3   0√ √ −1+ 3 , for any const. c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 4 + 3. ⇒ v2 = c 1  2 1 √ √    −3 + 3 1 −2√ −1 − √ 3 | 0√ 0| 0 ∼ 0 [A − λ3 I | 0 ] =  2 −1 + 3 1√| 0 3+ √ 3 3 + 2√ 3 | 3) −2 3 | −1 2 1+ 3| 0 0 −2(3 − R1 ↔ R3 −R1 → R1 −2R 1 + R2 → R2 √ (3 − 3)R1 + R3 → R3 √ √ √ √ 3+2 3 3+2 3 3− 3 1+ 3 √ = √ · √ = ... = Note that . So, 2 3+ 3 3 + 3 3 − 3 

0 0√ | 0 1 1+ 3  0 ∼ [A − λ3 I | 0 ] | 0 1 2 0 0 0 | 0 √ (3 − 3)−1 R2 → R2 √ 2R2 + R1 → R1 2(3 − 3)R2 + R3 → R3   0√ √ 3 , for any const. c 6= 0, are the eigenvectors corresponding to eigenvalue λ = 4 − ⇒ v3 = c1  −1− 3. 1 3 2 1

 0 0 . 0

 0 0 . 0

2.1.6.10. Using the determinant of an upper triangular matrix, a−λ b 0 a−λ b = (a − λ)(a − λ)(c − λ), 0 = | A − λI | = 0 0 0 c−λ so the distinct eigenvalues of A are µ1 = a and µ2 = c. Using the given information that b 6= 0, 

0 [ A − µ1 I | 0 ] =  0 0

b 0 0

0 b c−a

| | |

 0 0 0

 ∼ b−1 R1 → R1 b−1 R2 → R2 −(c − a)R2 + R3 → R3

0 0 0

1 0 0

0

1 0

| | |

 0 0 0

⇒ x1 = c1 is the only free variable and x2 = x3 = 0.

c Larry

Turyn, January 2, 2014

page 6

 1 ⇒ v1 = c1  0 , for any constant c1 6= 0, are the only eigenvectors corresponding to eigenvalue µ1 = a. 0 So, µ1 = a is a deficient eigenvalue. 

Using the given information that  a−c b a−c [ A − µ2 I | 0 ] =  0 0 0

a 6= c, 0 b 0

| | |

 0 0 0

 ∼

1  0 0

0

1 0

−b2 /(a − c) b/(a − c) 0

| | |

 0 0 0

(a − c)−1 R1 → R1 (a − c)−1 R2 → R2 −bR2 + R1 → R1 ⇒ x3 = c1 is the only free variable and x1 = x2 = 0  2  b /(a − c) ⇒ v2 =c1  −b/(a − c) , for any const. c1 6= 0, are the only eigenvectors corresponding to eigenvalue µ2 = c. 1 2.1.6.11. Ax = λBx, that is, (A − λB)x = 0, has a non-trivial solution for x if, and only if,     −1 0 5 0 0 0 −1 0 5 1 4  − λ  0 1 0  = 2 1−λ 4 0 = | A − λB | =  2 3 −2 3 0 0 1 3 −2 3−λ 1−λ 2 1−λ   4 = − (1 − λ)(3 − λ) + 8 + 5 − 4 − 3(1 − λ) , = (−1) · + 5 · −2 3−λ 3 −2 = −λ2 + 4λ − 11 − 35 + 15λ = −(λ2 − 19λ + 46). √ 19 ± 177 . by expanding along the first row. So, the only such values are λ = 2 

   2 4 −1 2 1 4  leads us to 2.1.6.12. v ,  4  being an eigenvector of A =  2 1 −1 0 5       4 −1 2 2 6 1 4   4  =  12  = 3  Av =  2 −1 0 5 1 3

calculate  2 4 , 1

hence λ = 3 is an eigenvalue of A. This enables us to factor the characteristic polynomial: −λ3 + 10λ2 − 33λ + 36 = P(λ) = (3 − λ)(λ2 − 7λ + 12) = (3 − λ)(3 − λ)(4 − λ). So, 3 and 4 are all of the eigenvalues of A. 

   2 4 0 10 2.1.6.13.  −1  being an eigenvector of A =  −5 −6 −5  leads us to calculate 1 5 0 −1        4 0 10 2 18 2 Av =  −5 −6 −5   −1  =  −9  = 9  −1  ⇒ λ = 9 is an eigenvalue of A 5 0 −1 1 9 1     −1 4 0 10  0  being an eigenvector of A =  −5 −6 −5  leads us to calculate 1 5 0 −1        4 0 10 −1 6 −1 Av =  −5 −6 −5   0  =  0  = −6  0  ⇒ λ = −6 is an eigenvalue of A 5 0 −1 1 −6 1

c Larry

Turyn, January 2, 2014

page 7

 0  1  being an eigenvector of 0  4 0 Av =  −5 −6 5 0 

 4 0 10 A =  −5 −6 −5  leads us to calculate 5 0 −1       10 0 0 0 −5   1  =  −6  = −6  1  ⇒ λ = −6 is an eigenvalue of A −1 0 0 0 

Denote the eigenvalues we found by µ1 = 9 and µ2 = −6. Are there any other eigenvalues of A? The latter two eigenvectors form a linearly independent set, so the nullity of A − (−6)I, that is, the geometric multiplicity of −6, is m2 ≥ 2. This implies that α2, the algebraic multiplicity of −6, is at least two. α1, the algebraic multiplicity of 9, is at least one. By Theorem 2.3(a) in Section 2.1, α1 + α2 ≤ 3. But α1 ≥ 1 and α2 ≥ 2, so α1 + α2 ≥ 3. It follows that α1 + α2 = 3 and the only eigenvalues of A are 9 and −6. Because α2 ≥ 2, α1 ≥ 1, and α1 + α2 = 3, we conclude that α1 = 1 and α2 = 2. Because m2 ≥ 2 and m2 ≤ α2 = 2, we conclude that m2 = 2. Also, α1 = 1 and 1 ≤ m1 ≤ α1 imply m1 = 1.

To summarize, the only eigenvalues of A are µ1 = 9, with algebraic multiplicity α1 = 1 and geometric multiplicity m1 = 1, and µ2 = −6, with algebraic multiplicity α2 = 2 and geometric multiplicity m2 = 2.

2.1.6.14. By expanding along the third column,

0 = | A − λI | = | −3 − 5√2 − λ     −4√2        0    |
                 |     4√2      −3 + 5√2 − λ    0    |
                 |      0            0        3 − λ  |

= (3 − λ)[ (−3 − 5√2 − λ)(−3 + 5√2 − λ) + 32 ] = (3 − λ)(λ² + 6λ − 9),

so the eigenvalues of A are λ1 = 3, λ2 = −3 + 3√2, and λ3 = −3 − 3√2.

[ A − λ1 I | 0 ] = [ −6 − 5√2    −4√2     0 | 0 ]      [ 1  0  0 | 0 ]
                   [   4√2     −6 + 5√2   0 | 0 ]  ∼   [ 0  1  0 | 0 ]
                   [    0          0      0 | 0 ]      [ 0  0  0 | 0 ]

after R1 ↔ R2, (4√2)⁻¹R1 → R1, (6 + 5√2)R1 + R2 → R2, and clearing the remaining entries
⇒ x3 = c1 is the only free variable and x1 = x2 = 0
⇒ v1 = c1 [ 0  0  1 ]ᵀ, for any const. c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = 3.

[ A − λ2 I | 0 ] = [ −8√2   −4√2      0     | 0 ]      [ 1  1/2  0 | 0 ]
                   [  4√2    2√2      0     | 0 ]  ∼   [ 0   0   1 | 0 ]
                   [   0      0    6 − 3√2  | 0 ]      [ 0   0   0 | 0 ]

after R1 ↔ R2, (4√2)⁻¹R1 → R1, 8√2 R1 + R2 → R2, (6 − 3√2)⁻¹R3 → R3
⇒ x2 = c1 is the only free variable and x3 = 0
⇒ v2 = c1 [ −1/2  1  0 ]ᵀ, for any const. c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = −3 + 3√2.

[ A − λ3 I | 0 ] = [ −2√2   −4√2      0     | 0 ]      [ 1  2  0 | 0 ]
                   [  4√2    8√2      0     | 0 ]  ∼   [ 0  0  1 | 0 ]
                   [   0      0    6 + 3√2  | 0 ]      [ 0  0  0 | 0 ]

after R1 ↔ R2, (4√2)⁻¹R1 → R1, 2√2 R1 + R2 → R2, (6 + 3√2)⁻¹R3 → R3

© Larry Turyn, January 2, 2014

⇒ x2 = c1 is the only free variable and x3 = 0
⇒ v3 = c1 [ −2  1  0 ]ᵀ, for any const. c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ3 = −3 − 3√2.

2.1.6.15. (a) Can Ax = λ1x and x ≠ 0, as well as Ax = λ2x, if λ1 ≠ λ2? No, two unequal eigenvalues cannot have the same eigenvector, because then we would have

λ1x = Ax = λ2x  and  x ≠ 0,

hence λ1x = λ2x  and  x ≠ 0,

hence (λ1 − λ2)x = λ1x − λ2x = 0  and  x ≠ 0.
If the vector x ≠ 0 and the vector (λ1 − λ2)x = 0, then we must have that the scalar (λ1 − λ2) = 0, contradicting the assumption that λ1 and λ2 are unequal. [Why "must have"? Because x ≠ 0, at least one of its entries, say xj, must be nonzero. If (λ1 − λ2) ≠ 0, then the j-th entry of (λ1 − λ2)x would be (λ1 − λ2)xj ≠ 0, so (λ1 − λ2)x would be a nonzero vector, contradicting (λ1 − λ2)x = 0.]

(b) Yes, a nonzero vector x can be an eigenvector for two unequal eigenvalues λ1 and λ2 corresponding to two different matrices A and B, respectively.

Ex: A = [ 1  0 ; 0  0 ] and B = [ 5  0 ; 0  0 ] both have x = [ 1 ; 0 ] as an eigenvector, corresponding to A's eigenvalue λ1 = 1 and B's eigenvalue λ2 = 5, respectively.

2.1.6.16. λ is an eigenvalue of A, so there exists a vector x ≠ 0 with Ax = λx.
(a) This implies Bx = (2In − A)x = 2In x − Ax = 2x − λx = (2 − λ)x. Because C = B⁻¹, this implies x = In x = CBx = C(2 − λ)x = (2 − λ)B⁻¹x. Because λ ≠ 2, we can divide both sides by (2 − λ) to get (2 − λ)⁻¹x = B⁻¹x. Because x ≠ 0, by the definitions of eigenvalue and eigenvector this implies x is an eigenvector of B⁻¹ corresponding to eigenvalue γ ≜ (2 − λ)⁻¹.
(b) No, γ, an eigenvalue of C, cannot equal 1/2. Why not? Because if B⁻¹y = Cy = (1/2)y for some y ≠ 0, then

y = B(B⁻¹y) = B((1/2)y) = (1/2)By = (1/2)(2In − A)y = (1/2)(2y − Ay) = y − (1/2)Ay.

Subtracting y from both sides would give 0 = −(1/2)Ay. Because y ≠ 0, this would imply 0 is an eigenvalue of A, contradicting the given information that A is invertible. So, no, 1/2 cannot be an eigenvalue of C.

2.1.6.17. (a) Because C = A + B and x is an eigenvector for both A and B, corresponding to eigenvalues λ and β, respectively, x ≠ 0 and Cx = (A + B)x = Ax + Bx = λx + βx = (λ + β)x, which implies γ ≜ λ + β is an eigenvalue of C with corresponding eigenvector x.
(b) If, in addition, C = A², then λ²x = A²x = Cx = (λ + β)x, hence (λ² − (λ + β))x = 0. Because x ≠ 0, it follows that λ² − λ − β = 0. This quadratic equation for λ only has solutions λ = (1 ± √(1 + 4β))/2. So it follows that either λ = (1 + √(1 + 4β))/2 or λ = (1 − √(1 + 4β))/2.

2.1.6.18. Because x is an eigenvector for both A and B, corresponding to eigenvalues λ and β, respectively, x ≠ 0 and ABx = A(Bx) = A(βx) = βAx = β(λx) = (βλ)x, which implies βλ is an eigenvalue of AB with corresponding eigenvector x.
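The conclusion of 2.1.6.18 can be spot-checked numerically. The matrices below are illustrative choices, not from the text: a pair of upper triangular matrices sharing the eigenvector x = (1, 0)ᵀ, with eigenvalues λ = 3 and β = 5 respectively (this is a sanity check sketch, assuming NumPy is available):

```python
import numpy as np

# Hypothetical example matrices (not from the manual) sharing eigenvector x = (1, 0)^T.
A = np.array([[3.0, 1.0], [0.0, 2.0]])   # A x = 3 x, so lambda = 3
B = np.array([[5.0, 4.0], [0.0, 7.0]])   # B x = 5 x, so beta = 5
x = np.array([1.0, 0.0])

lam, beta = 3.0, 5.0
# 2.1.6.18 predicts (AB) x = (beta * lambda) x = 15 x.
ABx = (A @ B) @ x
assert np.allclose(ABx, beta * lam * x)
```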

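As a quick numeric cross-check of 2.1.6.14 above, one can confirm the three eigenvalues with NumPy. This is an editorial sanity check, assuming the matrix read off the determinant in that solution:

```python
import numpy as np

# Matrix from 2.1.6.14: A = [[-3-5*sqrt(2), -4*sqrt(2), 0],
#                           [ 4*sqrt(2), -3+5*sqrt(2), 0],
#                           [ 0,          0,           3]]
s = np.sqrt(2.0)
A = np.array([[-3 - 5*s, -4*s,     0.0],
              [ 4*s,     -3 + 5*s, 0.0],
              [ 0.0,      0.0,     3.0]])

# The solution found lambda = 3 and lambda = -3 +/- 3*sqrt(2).
eigvals = np.sort(np.linalg.eigvals(A).real)
expected = np.sort([3.0, -3 + 3*s, -3 - 3*s])
assert np.allclose(eigvals, expected)
```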

2.1.6.19. Let x(1) and x(2) be two distinct solutions of Ax = b. Then Ax(1) = b and Ax(2) = b, so A(x(1) − x(2)) = Ax(1) − Ax(2) = b − b = 0.
(a) So, x ≜ x(1) − x(2) is a nonzero solution of Ax = 0. By definition, λ = 0 is an eigenvalue of A with corresponding eigenvector x.
(b) The assumption that b = 0 is actually irrelevant. By part (a), Ax = 0 has a nonzero solution x. Linearity implies that Ax = 0 would have infinitely many solutions, namely c x for all scalars c.

2.1.6.20. By (2.2) in Section 2.1, | A − λI | = P(λ) = (λ1 − λ)(λ2 − λ) · · · (λn − λ). Substitute λ = 0 to get | A − 0 · I | = P(0) = (λ1 − 0)(λ2 − 0) · · · (λn − 0), that is, | A | = λ1 · λ2 · · · λn.

2.1.6.21. The n distinct eigenvalues of A must each have algebraic multiplicity of one, by Theorem 2.3(a) in Section 2.1. It follows that each of the geometric multiplicities must be one, by Theorem 2.3(c) in Section 2.1. Because

0 = | A − λI | = | a11 − λ    a12    ...    a1n    |
                 |    0     a22 − λ  ...    a2n    |
                 |    .        .      .      .     |
                 |    0        0     ...  ann − λ  |
= (a11 − λ)(a22 − λ) · · · (ann − λ),

the eigenvalues of A are a11, a22, ..., ann. To find the eigenvectors, note that the fact that the ajj are distinct implies a22 − a11 ≠ 0, ..., ann − a11 ≠ 0. Row reducing

[ A − a11 I | 0 ] = [ 0     a12      ...     a1n     | 0 ]
                    [ 0  a22 − a11   ...     a2n     | 0 ]
                    [ .      .        .       .      | 0 ]
                    [ 0      0       ...  ann − a11  | 0 ]

by (a22 − a11)⁻¹R2 → R2, ..., (ann − a11)⁻¹Rn → Rn and then clearing the entries above the pivots shows that x1 is the only free variable and x2 = · · · = xn = 0. We see that e(1) = [ 1  0  ...  0 ]ᵀ is an eigenvector corresponding to eigenvalue a11.

Similarly, in [ A − a22 I | 0 ] the rows 3, ..., n force x3 = · · · = xn = 0, and the first row reads (a11 − a22)x1 + a12 x2 + · · · + a1n xn = 0, so x2 is the only free variable and x1 = a12 x2 / (a22 − a11). Thus p(2) = [ a12/(a22 − a11)  1  0  ...  0 ]ᵀ is an eigenvector corresponding to eigenvalue a22; its only possibly nonzero entries are the first two.

We see that for an upper triangular matrix whose diagonal entries are distinct, the eigenvalues are the diagonal entries a11, a22, ..., ann, and the eigenvector corresponding to ajj can be chosen to have zero entries below its j-th component.

2.1.6.22. A²x(1) = A(−3x(2)) = −3Ax(2) = −3 · 2x(1), hence A²x(1) = −6x(1). Similarly, A²x(2) = A(2x(1)) = 2Ax(1) = 2 · (−3)x(2), hence A²x(2) = −6x(2).
(a) So, A² has eigenvalue −6 with corresponding eigenvectors x(1) and x(2).
(b) {x(1), x(2)} is linearly independent, so A²'s eigenvalue −6 has geometric multiplicity at least two. By Theorem 2.3(b) in Section 2.1, α, the algebraic multiplicity of A²'s eigenvalue −6, is at least two. This implies that (−6 − λ)^α is a factor of the characteristic polynomial PA²(λ), hence (−6 − λ)² is a factor of the characteristic polynomial PA²(λ).


(c) Ex: A = [ 0  2 ; −3  0 ], x(1) = [ 1 ; 0 ], and x(2) = [ 0 ; 1 ] satisfy the hypotheses of this problem. Note that A² = −6I2 in this example in R².

2.1.6.23. (a) If (I − A⁻¹) were not invertible, then there would be an x ≠ 0 with (I − A⁻¹)x = 0, hence x − A⁻¹x = 0, hence x = A⁻¹x, hence Ax = AA⁻¹x = Ix = x, hence λ = 1 would be an eigenvalue of A, contradicting the given information. So, (I − A⁻¹) is invertible. Alternatively, we can calculate | A − I | = | A(I − A⁻¹) | = | A | · | I − A⁻¹ |, and use the given information...
(b) Using Theorem 1.23(c) in Section 1.5, A⁻¹(I − A⁻¹)⁻¹ = ((I − A⁻¹)A)⁻¹ = (IA − A⁻¹A)⁻¹ = (A − I)⁻¹.
(c) Using Theorem 1.23(c) in Section 1.5, (I − A⁻¹)⁻¹A⁻¹ = (A(I − A⁻¹))⁻¹ = (AI − AA⁻¹)⁻¹ = (A − I)⁻¹.

2.1.6.24. We are given that x ≠ 0 and Ax = λx, as well as B = CᵀAC. For any vector y we calculate

(?) (B − λCᵀC)y = By − λCᵀCy = CᵀACy − λCᵀCy = Cᵀ(A − λI)Cy.

Because C is invertible, we can choose y so that Cy = x, namely let y ≜ C⁻¹x. So, (?) implies

(??) (B − λCᵀC)y = Cᵀ(A − λI)Cy = Cᵀ(A − λI)x = Cᵀ(Ax − λIx) = Cᵀ(λx − λx) = Cᵀ0 = 0.

Now, y ≠ 0, because Cy = x. (If, instead, y = 0 then 0 = C0 = Cy = x, which would contradict the given information that x ≠ 0.)

Together, y ≠ 0 and (??) (B − λCᵀC)y = 0 imply that zero is an eigenvalue of (B − λCᵀC). By Theorem 1.30 in Section 1.6, | B − λCᵀC | = 0.
(b) In the work of part (a) we found that y ≜ C⁻¹x satisfies (B − λCᵀC)y = 0.

2.1.6.25. (a) We are given that x ≠ 0, which implies y ≜ Bx ≠ 0. Why? Because, if not, then 0 = B⁻¹0 = B⁻¹y = B⁻¹Bx = x, which would contradict x ≠ 0.
(b) We were given that AB = BA and that x is an eigenvector of A corresponding to eigenvalue λ. It follows that

A(Bx) = (AB)x = (BA)x = B(Ax) = B(λx) = λ(Bx).

y ≜ Bx ≠ 0 satisfies Ay = λy, hence by definition y is an eigenvector of A corresponding to eigenvalue λ.
(c) Suppose, in addition to all of the above assumptions, A's eigenvalue λ has geometric multiplicity equal to one.
Then the additional facts that both Bx and x are nonzero eigenvectors of A corresponding to eigenvalue λ imply that {Bx, x} is linearly dependent. It follows from Theorem 1.35 in Section 1.7 that either x can be written as a linear combination of Bx or Bx can be written as a linear combination of x. Because both x and Bx are nonzero, in either case it follows that Bx = µx for some scalar µ. So, x is an eigenvector of B.

2.1.6.26.

0 = | A − λI | = | −1 − λ    1       0    |
                 |    0   −1 − λ     1    |
                 |    0      0    −1 − λ  |
= (−1 − λ)(−1 − λ)(−1 − λ)

⇒ eigenvalues are λ1 = λ2 = λ3 = −1.

So, the only distinct eigenvalue of A is µ1 = −1, which has algebraic multiplicity α1 = 3.

[ A − µ1 I | 0 ] = [ 0  1  0 | 0 ]
                   [ 0  0  1 | 0 ]
                   [ 0  0  0 | 0 ]

is already in RREF ⇒ x2 = x3 = 0 and the only free variable is x1 = c1
⇒ v1 = c1 [ 1  0  0 ]ᵀ, for any const. c1 ≠ 0, are the only eigenvectors corresponding to eigenvalue µ1 = −1.

So, eigenvalue µ1 = −1 has geometric multiplicity m1 = 1.

2.1.6.27. (a) A is invertible if, and only if, |A| ≠ 0. Because |A| = | A − 0 · I | = P(0), we see that A is invertible if, and only if, P(0) ≠ 0, which is true if, and only if, 0 is not an eigenvalue of A.
(b) If every eigenvalue of A is greater than 2, then P(λ) ≠ 0 for all λ ≤ 2. In particular, | A − 1 · I | = P(1) ≠ 0, so (A − I) is invertible. It follows that (I − A) = −(A − I) is invertible.


(c) By the result of part (a), because 0 is not an eigenvalue of A, it follows that A is invertible. Because every eigenvalue of A is greater than 2, there is no nonzero vector x for which Ax = 1 · x. It follows that there is no nonzero vector for which x = Ix = A⁻¹Ax = A⁻¹x. It follows that (A⁻¹ − I) is invertible, hence (I − A⁻¹) = −(A⁻¹ − I) is invertible.

2.1.6.28. Ex: A = [ 0  1 ; 0  0 ] is an upper triangular matrix, so µ1 = 0 is its only eigenvalue.

[ A − µ1 I | 0 ] = [ 0  1 | 0 ]
                   [ 0  0 | 0 ]

is already in RREF ⇒ the only eigenvectors are v1 = c1 [ 1 ; 0 ], for any const. c1 ≠ 0.

Aᵀ = [ 0  0 ; 1  0 ] is a lower triangular matrix, so β1 = 0 is its only eigenvalue.

[ Aᵀ − β1 I | 0 ] = [ 0  0 | 0 ]      [ 1  0 | 0 ]
                    [ 1  0 | 0 ]  ∼   [ 0  0 | 0 ]

after R1 ↔ R2, which is in RREF ⇒ the only eigenvectors are u1 = d1 [ 0 ; 1 ], for any const. d1 ≠ 0.

No eigenvector of A is an eigenvector of Aᵀ, and, vice versa, no eigenvector of Aᵀ is an eigenvector of A. We needed to have A ≠ Aᵀ, because if A = Aᵀ then of course eigenvectors of A will be eigenvectors of Aᵀ and vice versa.

2.1.6.29. (a) (A − λB)x = 0 has a non-trivial solution for x if, and only if,

0 = | A − λB | = | −λ   1 + λ |
                 | −λ    −λ   |
= (−λ)(−λ) + λ(1 + λ) = λ(2λ + 1),

if and only if λ1 = 0 or λ2 = −1/2.

(b) For generalized eigenvalue λ1 = 0, [ A − λ1 B | 0 ] = [ 0  1 | 0 ; 0  0 | 0 ] is already in RREF.
⇒ w1 = c1 [ 1 ; 0 ], for any const. c1 ≠ 0, are the eigenvectors corresponding to generalized eigenvalue λ1 = 0.

[ A − λ2 B | 0 ] = [ 1/2  1/2 | 0 ]      [ 1  1 | 0 ]
                   [ 1/2  1/2 | 0 ]  ∼   [ 0  0 | 0 ]

after −R1 + R2 → R2, 2R1 → R1
⇒ w2 = c1 [ −1 ; 1 ], for any const. c1 ≠ 0, are the eigenvectors corresponding to generalized eigenvalue λ2 = −1/2.

2.1.6.32. Problem 2.1.6.25 suggests that any example we look for should try to have AB ≠ BA. Also, if 0 is an eigenvalue of B then any corresponding eigenvector of B will automatically be an eigenvector of AB.

Ex: A = [ 1  2 ; 0  1 ] and B = [ 0  1 ; 3  2 ] give AB = [ 6  5 ; 3  2 ].

First, find the eigenvalues of B: 0 = | B − λI | = | −λ  1 ; 3  2 − λ | = λ² − 2λ − 3 = (λ − 3)(λ + 1), so the eigenvalues are λ1 = −1 and λ2 = 3.

To find the eigenvectors of B: [ B − λ1 I | 0 ] = [ 1  1 | 0 ; 3  3 | 0 ] ∼ [ 1  1 | 0 ; 0  0 | 0 ], after −3R1 + R2 → R2.
⇒ v1 = c1 [ −1 ; 1 ], for any const. c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = −1.

[ B − λ2 I | 0 ] = [ −3  1 | 0 ; 3  −1 | 0 ] ∼ [ 1  −1/3 | 0 ; 0  0 | 0 ], after R1 + R2 → R2, −(1/3)R1 → R1.
⇒ v2 = c1 [ 1/3 ; 1 ], for any const. c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = 3.


We calculate

(AB) [ −1 ]  =  [ 6  5 ] [ −1 ]  =  [ −1 ]  ≠  γ [ −1 ]
     [  1 ]     [ 3  2 ] [  1 ]     [ −1 ]       [  1 ]

for any scalar γ, because we can't have both −1 = γ · (−1) and −1 = γ · 1. Also,

(AB) [ 1/3 ]  =  [ 6  5 ] [ 1/3 ]  =  [ 7 ]  ≠  γ [ 1/3 ]
     [  1  ]     [ 3  2 ] [  1  ]     [ 3 ]       [  1  ]

for any scalar γ, because we can't have both 7 = γ · (1/3) and 3 = γ · 1. So, no eigenvector of B is also an eigenvector of AB.

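The conclusion of 2.1.6.32 above can be confirmed numerically. This is an editorial sanity check (not part of the manual), using the A and B of that solution and a small parallelism test for "is an eigenvector":

```python
import numpy as np

# Matrices from 2.1.6.32.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [3.0, 2.0]])
AB = A @ B
assert np.allclose(AB, [[6.0, 5.0], [3.0, 2.0]])

def is_eigenvector(M, v, tol=1e-12):
    """True if M v is parallel to the nonzero 2-vector v."""
    w = M @ v
    # v and w are parallel iff the 2x2 determinant det[v w] vanishes.
    return abs(v[0]*w[1] - v[1]*w[0]) < tol

v1 = np.array([-1.0, 1.0])      # eigenvector of B for lambda = -1
v2 = np.array([1.0/3.0, 1.0])   # eigenvector of B for lambda = 3
assert is_eigenvector(B, v1) and is_eigenvector(B, v2)
# Neither eigenvector of B is an eigenvector of AB.
assert not is_eigenvector(AB, v1) and not is_eigenvector(AB, v2)
```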

Section 2.2.3

2.2.3.1. 0 = | 5 − λ  −1 ; 3  1 − λ | = (5 − λ)(1 − λ) + 3 = λ² − 6λ + 8 = (λ − 2)(λ − 4)
⇒ eigenvalues λ1 = 4, λ2 = 2.

[ A − λ1 I | 0 ] = [ 1  −1 | 0 ; 3  −3 | 0 ] ∼ [ 1  −1 | 0 ; 0  0 | 0 ], after −3R1 + R2 → R2.
⇒ p(1) = [ 1 ; 1 ] is an eigenvector corr. to eigenvalue λ1 = 4.

[ A − λ2 I | 0 ] = [ 3  −1 | 0 ; 3  −1 | 0 ] ∼ [ 1  −1/3 | 0 ; 0  0 | 0 ], after −R1 + R2 → R2, (1/3)R1 → R1.
⇒ p(2) = [ 1 ; 3 ] is an eigenvector corr. to eigenvalue λ2 = 2.

The matrix P = [ p(1) | p(2) ] = [ 1  1 ; 1  3 ] should diagonalize A.

2.2.3.2. 0 = | 5 − λ  −2 ; −2  2 − λ | = (5 − λ)(2 − λ) − 4 = λ² − 7λ + 6 = (λ − 1)(λ − 6)
⇒ eigenvalues λ1 = 6, λ2 = 1.

[ A − λ1 I | 0 ] = [ −1  −2 | 0 ; −2  −4 | 0 ] ∼ [ 1  2 | 0 ; 0  0 | 0 ], after −R1 → R1, 2R1 + R2 → R2.
⇒ p(1) = [ −2 ; 1 ] is an eigenvector corr. to eigenvalue λ1 = 6.

[ A − λ2 I | 0 ] = [ 4  −2 | 0 ; −2  1 | 0 ] ∼ [ 1  −1/2 | 0 ; 0  0 | 0 ], after (1/2)R1 + R2 → R2, (1/4)R1 → R1.
⇒ p(2) = [ 1 ; 2 ] is an eigenvector corr. to eigenvalue λ2 = 1.

The matrix P = [ p(1) | p(2) ] = [ −2  1 ; 1  2 ] should diagonalize A.

2.2.3.3. 0 = | 2 − λ  0 ; −1  −1 − λ | = (2 − λ)(−1 − λ) − 0 = (2 − λ)(−1 − λ)
⇒ eigenvalues λ1 = 2, λ2 = −1.

[ A − λ1 I | 0 ] = [ 0  0 | 0 ; −1  −3 | 0 ] ∼ [ 1  3 | 0 ; 0  0 | 0 ], after R1 ↔ R2, −R1 → R1.
⇒ p(1) = [ −3 ; 1 ] is an eigenvector corr. to eigenvalue λ1 = 2.

[ A − λ2 I | 0 ] = [ 3  0 | 0 ; −1  0 | 0 ] ∼ [ 1  0 | 0 ; 0  0 | 0 ], after (1/3)R1 → R1, R1 + R2 → R2.
⇒ p(2) = [ 0 ; 1 ] is an eigenvector corr. to eigenvalue λ2 = −1.

The matrix P = [ p(1) | p(2) ] = [ −3  0 ; 1  1 ] should diagonalize A.

2.2.3.4. 0 = | −3 − λ  √3 ; −√3  1 − λ | = (−3 − λ)(1 − λ) + 3 = λ² + 2λ = λ(λ + 2)
⇒ eigenvalues λ1 = 0, λ2 = −2.

[ A − λ1 I | 0 ] = [ −3  √3 | 0 ; −√3  1 | 0 ] ∼ [ 1  −1/√3 | 0 ; 0  0 | 0 ], after −(1/3)R1 → R1, √3 R1 + R2 → R2.


⇒ p(1) = [ 1 ; √3 ] is an eigenvector corr. to eigenvalue λ1 = 0.

[ A − λ2 I | 0 ] = [ −1  √3 | 0 ; −√3  3 | 0 ] ∼ [ 1  −√3 | 0 ; 0  0 | 0 ], after −R1 → R1, √3 R1 + R2 → R2.
⇒ p(2) = [ √3 ; 1 ] is an eigenvector corr. to eigenvalue λ2 = −2.

The matrix P = [ p(1) | p(2) ] = [ 1  √3 ; √3  1 ] should diagonalize A.

2.2.3.5. 0 = | −2 − λ  √2 ; −√2  2 − λ | = (−2 − λ)(2 − λ) + 2 = λ² − 2
⇒ eigenvalues λ1 = √2, λ2 = −√2.

[ A − λ1 I | 0 ] = [ −2 − √2  √2 | 0 ; −√2  2 − √2 | 0 ] ∼ [ 1  1 − √2 | 0 ; 0  0 | 0 ], after R1 ↔ R2, −(1/√2)R1 → R1, (2 + √2)R1 + R2 → R2.
⇒ p(1) = [ −1 + √2 ; 1 ] is an eigenvector corr. to eigenvalue λ1 = √2.

[ A − λ2 I | 0 ] = [ −2 + √2  √2 | 0 ; −√2  2 + √2 | 0 ] ∼ [ 1  −1 − √2 | 0 ; 0  0 | 0 ], after R1 ↔ R2, −(1/√2)R1 → R1, (2 − √2)R1 + R2 → R2.
⇒ p(2) = [ 1 + √2 ; 1 ] is an eigenvector corr. to eigenvalue λ2 = −√2.

The matrix P = [ p(1) | p(2) ] = [ −1 + √2   1 + √2 ; 1  1 ] should diagonalize A.

2.2.3.6. Expanding along the second row,

0 = | −3 − λ   −1     2   |
    |    0   −2 − λ   0   |
    |   −1     −1    −λ   |
= (−2 − λ) | −3 − λ  2 ; −1  −λ | = (−2 − λ)(λ² + 3λ + 2) = (−2 − λ)(λ + 2)(λ + 1),

so the eigenvalues are λ1 = λ2 = −2 and λ3 = −1.    −1 −1 2 | 0

1 ∼  0 0 0| 0 [ A − λ1 I | 0 ] =  0 −1 −1 2 | 0 0 −R1 + R3 → R3 −R1 → R1  −c1 + 2c2 c1 ⇒ x2 = c1 and x3 = c2 are free variables and v1 =  c2

 0 0 0

−2 | 0| 0|

1 0 0 



   −1 2  = c1  1  + c2  0 , for any 0 1

constants c1 , c2 with |c1 | + |c2 | > 0, are the eigenvectors corresponding to eigenvalue λ1 = λ2 = −2.     −1 2 ⇒ p(1) =  1 , p(2) =  0  are eigenvectors that span the eigenspace Eλ=−2 . 0 1 

−2 [ A − λ3 I | 0 ] =  0 −1

−1 −1 −1

2| 0| 1|

 0 0 0

 ∼

1  0 0

1 −1 1

−1 | 0| 0|

 0 0 0

 ∼

1  0 0

0

1 0

−1 | 0| 0|

 0 0 0

R 1 ↔ R3 −R2 → R2 −R1 → R1 −R2 + R3 → R3 2R1 + R3 → R3 −R2 + R1 → R1     c1 1 ⇒ x3 = c1 is the only free variable and v3 =  0  = c1  0 , for any constant c1 6= 0, are the eigenvectors c1 1 corresponding to eigenvalue λ3 = −1. c Larry

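The eigenvalue computation in 2.2.3.6 above can be spot-checked numerically. This is an editorial sanity check, assuming the matrix read off that determinant, A = [ −3 −1 2 ; 0 −2 0 ; −1 −1 0 ]:

```python
import numpy as np

# Matrix from 2.2.3.6; the solution found eigenvalues -2, -2, -1.
A = np.array([[-3.0, -1.0,  2.0],
              [ 0.0, -2.0,  0.0],
              [-1.0, -1.0,  0.0]])

eigvals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigvals, [-2.0, -2.0, -1.0])
```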



The matrix P =

[ p(1) pp

6−λ 2.2.3.7. 0 = −7 7

p(2) pp

7 −8 − λ 7

p

(3)

−1 ]= 1 0

7 −7 6−λ

2 0 1

 1 0  should diagonalize A. 1 6−λ −7 0

=

7 −8 − λ −1 − λ

7 −7 −1 − λ

R 2 + R 3 → R3 6−λ (−1 − λ) −7 0

=

7 −8 − λ 1

 7 6−λ −7 = (−1 − λ) − −7 1

7 6 − λ + −7 −7

 7 −8 − λ

R3 ← (−1 − λ)R3   4 9 = (−1 − λ)(6 − λ)(7 − 8 − λ) = (−1 − λ)2 (6 − λ) so the eigenvalues 4 9 + (6 − λ)(−8 − λ) +  = (−1 − λ) 7(6 − λ) −  are λ1 = λ2 = −1 and λ3 = 6.     7 7 7| 0 ∼

1 1 1| 0  0 0 0| 0 [ A − λ1 I | 0 ] =  −7 −7 −7 | 0  7 7 7| 0 R 1 + R 2 → R2 0 0 0| 0 −R1 + R3 → R3 7−1 R1 → R1       −1 −1 −c1 − c2  = c1  1  + c2  0 , for any c1 ⇒ x2 = c1 and x3 = c2 are free variables and v1 =  1 0 c2 constants c1 , c2 with |c1 | + |c2 | > 0, are the eigenvectors corresponding to eigenvalue λ1 = λ2 = −1.     −1 −1 ⇒ p(1) =  1 , p(2) =  0  are eigenvectors that span the eigenspace Eλ=−1 . 0 1       0 7 7| 0

1 0| 0

0 −1 | 0 1 1 ∼ ∼  0 −7 −7 | 0   0 1| 0 [A − λ3 I | 0 ]= −7 −14 −7 | 0  1 7 7 0| 0 0 7 7| 0 0 0 0| 0 R 1 ↔ R3 R2 + R3 → R 3 −1 R 1 + R 2 → R2 (−7) R2 → R2 7−1 R1 → R1 −R2 + R1 → R1     1 c1 ⇒ x3 = c1 is the only free variable and v3 =  −c1  = c1  −1 , for any constant c1 6= 0, are the eigenvectors 1 c1 corresponding to eigenvalue λ3 = 6.   −1 −1 1 0 −1  should diagonalize A. The matrix P = [ p(1) pp p(2) pp p(3) ] =  1 0 1 1 2.2.3.8. It turns out that we can do the problem in a straight forward way without the given information that 6 is an eigenvalue. Expanding along the third row, 1−λ 5 −10   1−λ 5 1−λ −10 = (−4 − λ) = (−4 − λ) (1 − λ)(1 − λ) − 25 0 = 5 5 1−λ 0 0 −4 − λ = (−4 − λ)(λ2 − 2λ − 24) = (−4 − λ)(λ − 6)(λ + 4) ⇒ eigenvalues are λ1 = λ2 = −4, λ3 = 6. 

5 [ A − λ1 I | 0 ] =  5 0

5 5 0

−10 | −10 | 0|

 0 0 0

 ∼ −R1 + R2 → R2 5−1 R1 → R1

1  0 0

1 0 0

−2 | 0| 0|

 0 0 0


     2 −1 −c1 + 2c2  = c1  1  + c2  0 , for any c1 ⇒ x2 = c1 and x3 = c2 are free variables and v1 =  1 0 c2 constants c1 , c2 with |c1 | + |c2 | > 0, are the eigenvectors corresponding to eigenvalue λ1 = λ2 = −4.     2 −1 ⇒ p(1) =  1 , p(2) =  0  are eigenvectors that span the eigenspace Eλ=−4 . 1 0     

2| 0 −5 5 −10 | 0 1 −1 1 −1  0  0 0 ∼ 0 1| 0 ∼ [A − λ3 I | 0 ]= 5 −5 −10 | 0  0 0 0 0 −10 | 0 0 0 −10 | 0 10R2 + R3 → R3 R1 + R 2 → R2 −2R2 + R1 → R1 (−5)−1 R1 → R1 (−20)−1 R2 → R2     1 c1 ⇒ x3 = 0 and x2 = c1 is the only free variable and v3 =  c1  = c1  1 , for any constant c1 6= 0 0 eigenvectors corresponding to eigenvalue λ3 = 6.   −1 2 1 (1) p (2) p (3) The matrix P = [ p p p p p ] =  1 0 1  should diagonalize A. 0 1 0   −1 1 2 2  and A calculator gives P −1 = 21  0 0 1 1 −2       −1 1 2 1 5 −10 −1 2 1 −1 1 2 4 −8 1 1 −1 0 0 2   5 1 −10   1 0 1  =  0 0 2   −4 0 P AP = 2 2 1 1 −2 0 0 −4 0 1 0 1 1 −2 0 −4     −8 0 0 −4 0 0 1 0 −8 0  =  0 −4 0  = D, = 2 0 0 12 0 0 6 as we expected. 

0|

1| 0|

 0 0 0

0, are the

 6 6  0

2.2.3.9. It turns out that we can do the problem in a straight forward way without the given information that −1 and 3 are eigenvalues. Expanding along the third row, 3−λ 0 −12 3−λ 0 = (−1 − λ)(3 − λ)(−1 − λ) −1 − λ −12 = (−1 − λ) 0 = 4 4 −1 − λ 0 0 −1 − λ ⇒ eigenvalues are λ1 = λ2 = −1,  4 0 −12 | [ A − λ1 I | 0 ] =  4 0 −12 | 0 0 0|

λ3 = 3.    0

1 0 −3 | 0 ∼  0 0 0 0| 0 0 0 0 0| 0 −R1 + R2 → R2 −1 4 R1 → R 1       3c2 0 3 ⇒ x2 = c1 and x3 = c2 are free variables and v1 =  c1  = c1  1  + c2  0 , for any c2 0 1 constants c1 , c2 with |c1 | + |c2 | > 0, are the eigenvectors corresponding to eigenvalue λ1 = λ2 = −1.     0 3 (1) (2) ⇒ p =  1 , p =  0  are eigenvectors that span the eigenspace Eλ=−1 . 0 1      0 0 −12 | 0

−3 | 0

1 −1 1 −1 ∼  0  0 0 −12 | 0  0 [A − λ3 I | 0 ]= 4 −4 −12 | 0  ∼ 0 0 −4 | 0 0 0 −4 | 0 0 0 (−12)−1 R2 → R3 R1 ↔ R2 3R2 + R1 → R1 4−1 R1 → R1 4R2 + R3 → R3 c Larry

0|

1| 0|

 0 0 0


   1 c1 6 0, are the ⇒ x3 = 0 and x2 = c1 is the only free variable and v3 =  c1  = c1  1 , for any constant c1 = 0 0 eigenvectors corresponding to eigenvalue λ3 = 3.   0 3 1 (1) p (2) p (3) The matrix P = [ p p p p p ] =  1 0 1  should diagonalize A. 0 1 0   −1 1 3 −1 1  and A calculator gives P = 0 0 1 0 −3        0 −3 3 −1 1 3 0 3 1 3 0 −12 −1 1 3 0 3  1   −1 1   4 −1 −12   1 0 1  =  0 0 P −1 AP =  0 0 0 −1 0 1 0 −3 0 1 0 0 0 −1 1 0 −3   −1 0 0 =  0 −1 0  = D, 0 0 3 

as we expected. 2.2.3.10. It turns out that we can do the problem in a straight forward way without the given information that −1 is an eigenvalue. Expanding along the second row, −3 − λ 2 2   −3 − λ 2 0 −1 − λ 0 = (−1 − λ) 0 = = (−1 − λ) (−3 − λ)(−λ) + 2 −1 −λ −1 1 −λ = (−1 − λ)(λ2 + 3λ + 2) = (−1 − λ)(λ + 1)(λ + 2) ⇒ eigenvalues are λ1 = λ2 = −1,  −2 2 2 | [ A − λ1 I | 0 ] =  0 0 0 | −1 1 1 |

λ3 = −2.    0

1 −1 −1 | 0 ∼  0 0 0 0| 0 0 0 0 0| 0 − 12 R1 → R1 R1 + R 2 → R2       1 1 c1 + c 2  = c1  1  + c2  0 , for any c1 ⇒ x2 = c1 and x3 = c2 are free variables and v1 =  1 0 c2 constants c1 , c2 with |c1 | + |c2 | > 0, are the eigenvectors corresponding to eigenvalue λ1 = λ2 = −1.     1 1 ⇒ p(1) =  1 , p(2) =  0  are eigenvectors that span the eigenspace Eλ=−1 . 0 1     −1 2 2 | 0

0 −2 | 0 1 ∼  0 0| 0 [A − λ3 I | 0 ]=  0 1 0 | 0  1 0 0 0| 0 −1 1 2 | 0 −2R2 + R1 → R1 −R2 + R3 → R3 −R1 + R3 → R3 −R1 → R1     2c1 2 ⇒ x2 = 0 and x3 = c1 is the only free variable and v3 =  0  = c1  0 , for any constant c1 6= 0, are the c1 1 eigenvectors corresponding to eigenvalue λ3 = −2.   1 1 2 The matrix P = [ p(1) pp p(2) pp p(3) ] =  1 0 0  should diagonalize A. 0 1 1




A calculator gives P

−1

 1 0 1 2  and −1 −1   1 −3 2 2 0 2   0 −1 0   1 0 −1 1 0 −1  −1 0 =  0 −1 0 0

0 =  −1 1



0 P −1 AP =  −1 1

1 1 −1

1 0 1

  0 2 0  =  −1 1 1 

1 1 −1

 −1 0 2   −1 0 −1

−1 0 −1

 −4 0  −2

0 0  = D, −2

as we expected. 

0 −1 1 2  is upper triangular so it has eigenvalues λ1 = λ2 = 1 and λ3 = 0. 2.2.3.11. (a) Ex: A =  0 1 0 0 0     0 0 −1 | 0 0 0 1 | 0 0 0 0 | 0 2| 0 ∼ [ A − λ1 I | 0 ] =  0 0 0 0 −1 | 0 0 0 0 | 0 2R1 + R2 → R2 −R1 + R3 → R3 −R1 → R1       0 1 c1 ⇒ x1 = c1 and x2 = c2 are free variables and v1 =  c2  = c1  0  + c2  1 , for any constants 0 0 0 

c1 , c2 with |c1 | + |c2 | > 0, are the eigenvectors corresponding to eigenvalue λ1 = λ2 = 1.     1 0 (1) (2) ⇒ p =  0 , p =  1  are eigenvectors that span the eigenspace Eλ=1 . 0 0 

 0 0  is already in RREF. 0     1 c1 ⇒ x3 = c1 is the only free variable and v3 =  −2c1  = c1  −2 , for any constant c1 6= 0, are the eigenvectors 1 c1 corresponding to eigenvalue λ3 = 0.       0 1  n o  1 (b) p(1) , p(2) , p(3) =  0  ,  1  ,  −2  is a basis for R3 consisting of eigenvectors of A.   0 0 1

1 [A − λ3 I | 0 ]= 0 0

0

1 0

−1 | 2| 0|



  1 0 1 is upper triangular so its eigenvalues are its diagonal entries, 1 and −1. A2 = 0 −1 0 is also upper triangular so its eigenvalues are its diagonal entries, 1 and 1 = (−1)2 .

2.2.3.12. Ex: A =

0 1



2.2.3.13. (a) must be true, by the definition of the word “eigenvector” (b) may be true and may be false. E.g., if A = AT then x is an eigenvector of AT . See also problem 2.1.6.28, where the eigenvectors of A are not eigenvectors of AT . (c) must be false, by the definition of the word “eigenvector” (d) must be false. Note that x is a vector in Rn so it couldn’t be a factor of any equation in Rn unless n = 1! But, we were given that n ≥ 2. (e) must be true, because Ax = λx ⇒ A2 x = λ2 x (f) may be true and may be false. If the eigenspace of A in which x lies is one dimensional, then because x 6= 0 it would follow that x is a basis for that eigenspace. But if that eigenspace has dimension two or higher then x can’t be a basis for that eigenspace. c Larry

Turyn, January 2, 2014

page 19

(g) may be true and may be false. Because B is similar to A there is an invertible matrix P with B = P⁻¹AP. If Ax = λx and x ≠ 0, then Bx = P⁻¹APx. If Px = βx and β ≠ 0, then P⁻¹x = β⁻¹x and Bx = P⁻¹APx = P⁻¹Aβx = P⁻¹λβx = λβP⁻¹x = λββ⁻¹x = λx, so x would be an eigenvector of B, too. But, on the other hand, if Px is not an eigenvector of A then x is not an eigenvector of B, by Theorem 2.11 in Section 2.2.

2.2.3.14. A =

It follows that 29a + 18c = −a, hence c = − 35 a, and also 29b + 18d = −b, hence d = − 53 b. So, the invertible matrix P has  5   5  a b a b = 5 0 6= |P | = = a − b − b − a = 0, 5 c d −3 a −3 b 3 3 giving a contradiction. So, no, there is no invertible matrix P that diagonalizes the matrix A of Example 2.16 in Section 2.2. 

−2 2.2.3.15. [ A − (−2I) | 0 ] =  1 0

−2 1 0

2 −1 0

 |0 |0 |0

 ∼

1  0 0

1 0 0

−1 | 0| 0|

 0 0 0

R 1 ↔ R2 2R1 + R2 → R2       1 −1 −c1 + c2  = c1  1  + c2  0 , for any constants c1 ⇒ x2 = c1 and x3 = c2 are free variables and v1 =  1 0 c2 c1 , c2 with |c1 | + |c2 | > 0, are the eigenvectors corresponding to eigenvalue λ1 = λ2 = −2.     −1 1 (1) (2) ⇒ p =  1 , p =  0  are eigenvectors that span the eigenspace Eλ=−2 . 0 1       −1 −2 2 |0 ∼

0| 0 1 2 −1 | 0 1 2 ∼  0 0  0 0 2 −1 | 0  [A − (−3I) | 0] =  1 1 | 0 1 | 0 0 0 1 |0 R 1 ↔ R2 0 0 1 | 0 0 0 0| 0 R 2 + R 1 → R1 R 1 + R 2 → R2 −R2 + R3 → R3 R 2 ↔ R3     −2c1 −2 ⇒ x3 = 0 and x2 = c1 is the only free variable and v3 =  c1  = c1  1 , for any constant c1 6= 0, are the 0 0 eigenvectors corresponding to eigenvalue λ3 = −3.       1 −2   −1  1  ,  0  ,  1  is a set of three eigenvectors of A, and it is a basis for R3 because by   0 1 0 expanding along the third row, −1 1 0

1 0 1

−2 1 0

= −1 · −1 1

−2 = −1 6= 0. 1

c Larry

Turyn, January 2, 2014

page 20

1 2.2.3.16. (a) B = P −1 AP ⇒ |B| = |P −1 AP | = |P −1 | |A| |P | = |A| |P | = |A| |P |     2 0 1 0 both have determinant equal to one. But (b) No, the converse is not true. Ex: A = and B = 0 0.5 0 1 A cannot be similar to B because the eigenvalues of A are 1 and 1 while the eigenvalues of B are 2 and 0.5. (Similar matrices must have the same eigenvalues.) 2.2.3.17. Because n = 3, the geometric multiplicity of A’s eigenvalue µ1 = 2 is m1 = n − rank(A − 2I3 ) = 3 − 2 = 1 and the geometric multiplicity of A’s eigenvalue µ2 = 3 is m2 = n − rank(A − 3I3 ) = 3 − 1 = 2. Because m1 + m2 = n, it follows that the corresponding algebraic multiplicities are α1 = 1 and α2 = 2. To finish the work, we have a choice of two different methods: Method I: The algebraic multiplicities imply both that the eigenvalues of A are 2, 3, 3, counting multiplicity, and that the characteristic polynomial of A is P(λ) = (2 − λ)(3 − λ)2 . Using the result of problem 2.1.6.20, or arguing directly from P(λ), we conclude that |A| = λ1 λ2 λ3 = 2 · 3 · 3 = 18. Method II: The geometric multiplicities of the eigenvalues of A add up to 3, which equals n, so A is diagonalizable with A being similar to D = diag(2, 3, 3). By the result of problem 2.2.3.16(a), |A| = |D| = 2 · 3 · 3 = 18. 

1 0 2.2.3.18. [ A − (−1)I) | 0 ] ∼  0 0

−2 0 0 0

3 1 0 0

5| 4| 0| 0|

 0 0  0 0

 ∼

1  0   0 0

−3R2 + R1 → R1   2c1 + 7c2    c1  = c1  ⇒ x2 = c1 and x4 = c2 are free variables and v1 =    −4c2  c2 

−2 0 0 0

0

1 0 0

  2 7  0 1   + c2   −4 0  0 1

−7 | 4| 0| 0|

 0 0  0 0

  , for any constants 

c1 , c2 with |c1 | + |c2 | > 0, are the eigenvectors corresponding to eigenvalue µ = −1.     2 7       0  1      ⇒ eigenspace Eµ=−1 has basis  , . 0   −4       0 1 √ √ √ √ 2.2.3.19. S 2 = S · S = (P diag( λ1 , ..., λn ) P −1 )(P diag( λ1 , ..., λn ) P −1 ) √ √ √ √ = P diag( λ1 , ..., λn ) (P −1 P ) diag( λ1 , ..., λn ) P −1 √ √ √ √ = P diag( λ1 , ..., λn ) (I) diag( λ1 , ..., λn ) P −1  √  √ √ √ √ √ 2  −1 2 = P (diag( λ1 , ..., λn ) · diag( λ1 , ..., λn ) P −1 = P (diag λ1 , ..., λn P  −1 = P diag λ1 , ..., λn P = A. 2.2.3.20. Yes. B = P −1 AP ⇒ P B = (P P −1 )AP = (I)AP = AP −1 −1 ⇒ P BP = (P B)P = (AP )P −1 = A(P P −1 ) = A(I) = A.   −5 4 2.2.3.21. Use the two given eigenvectors to form the matrix P = [ p(1) pp p(2) ] = that diagonalizes A. For 1 1   9 0 example, if A is similar to the diagonal matrix D = , then the result of problem 2.2.3.20 gives 0 27 A = P DP −1 =  =

−5 1



  −1      1 1 −4 −5 4 9 0 −5 4 −5 4 9 0 = 1 1 0 27 1 1 1 1 0 27 −9 −1 −5         4 −1 0 1 −4 −5 4 −1 4 17 40 = = . 1 0 −3 −1 −5 1 1 3 15 2 19


2.2.3.22. By expanding along the second row, 2−λ 0 1 2−λ −5 − λ 0 = (−5 − λ) 0 = | A − λI | = 0 3 3 0 2−λ

  1 = (−5 − λ) (2 − λ)2 − 3 2−λ

√ √ A are λ1 = −5, λ2 = 2 + 3, λ3 = 2 − 3.     

1/7 | 0

0 1 |0 1 0 1 0 0 | 0  0 0  0 0 46/7 | 0  ∼ ∼ 0 0 |0 1 | 0 0 0 0 | 0 0 0 0 | 0 0 7 |0 7 7−1 R1 → R1 R → R2 46 2 −3R1 + R3 → R3 − 17 R2 + R1 → R1 R2 ↔ R 3     0 0 ⇒ x1 = x3 = 0 and x2 = c1 is the only free variable and v1 =  c1  = c1  1 , for any constant c1 6= 0, are the 0 0  √    eigenvectors corresponding

0 − √13 | 0 − 3 0to√eigenvalue 1 λ|1 0= −5. 1 ∼   0 [A − λ2 I | 0 ]= 0 −7 − 3 0 | 0 1 √0 | 0 √ 0 0 0 | 0 3 0 − 3 |0 (−7 − √3)−1 R2 → R2 −1 (− 3) R1 → R1 −3R1 + R3 → R3  1    1 √ c1 √ 3 3  = c1  0 , for any constant c1 6= 0, are the ⇒ x2 = 0 and x3 = c1 is the only free variable and v2 =  0 c1 1 √ eigenvectors corresponding to eigenvalue λ2 = 2 + 3.

⇒ the eigenvalues of  7 [ A − λ1 I | 0 ] =  0 3

√ [A − λ3 I | 0 ]=

3 0 3

0√ −7 + 3 0

1 √0 3

 |0 |0 |0



1  0 0 

0

1 0

1 √ 3

| | |

 0 0 0

0 √ −1 0 (−7 + √3) R2 → R2 ( 3)−1 R1 → R1 −3R1 + R3 → R3     − √13 c1 − √13  = c1  ⇒ x2 = 0 and x3 = c1 is the only free variable and v3 =  0 0 , for any constant c1 6= 0, are c1 1 √ the eigenvectors corresponding to eigenvalue λ3 = 2 − 3. (b) Because A has three distinct eigenvalues and n = 3, A has a set of eigenvectors that is a basis for R3 , by Theorem 2.7(c) in Section 2.2. √ 2.2.3.23. (a) A is 3 × 3 and three eigenvalues, 2,√−2, 3 are given, so those are the only eigenvalues and the characteristic polynomial is PA (λ) = (2 − λ)(−2 − λ)( 3 − λ). √ (b) We were given that Ax1 = 2x1 , Ax2 = −2x2 , and Ax3 = 3 x3 . Because these are eigenvectors corresponding to distinct eigenvalues of A, {x1 , x2 , x3 } is a linearly independent set. It follows that A2 x1 = 4x1 , A2 x2 = 4x2 , and A2 x3 = 3x3 . So, x1 , x2 , and x3 are also eigenvectors of A2 and we already know that {x1 , x2 , x3 } is a linearly independent set. So, {x1 , x2 , x3 } is a linearly independent set of eigenvectors of A2 . (c) Because {x1 , x2 , x3 } is a linearly independent set of eigenvectors of A2 corresponding to eigenvalues 4, 4, and 3, respectively, it follows that the characteristic polynomial of A2 is PA2 (λ) = (4 − λ)(4 − λ)(3 − λ). 2.2.3.24. (a) Use (A − λ1 I)(A − λ3 I) · · · (A − λn I). How? Suppose 0 = c1 x(1) + c2 x(2) + ... + cn x(n) . If we multiply


on the left by the matrix (A − λ1 I)(A − λ3 I) ··· (A − λn I), we have

0 = c1 (A − λ1 I)(A − λ3 I) ··· (A − λn I)x(1) + c2 (A − λ1 I)(A − λ3 I) ··· (A − λn I)x(2) + ··· + cn (A − λ1 I)(A − λ3 I) ··· (A − λn I)x(n)
= c1 (λ1 − λ1)(λ1 − λ3) ··· (λ1 − λn)x(1) + c2 (λ2 − λ1)(λ2 − λ3) ··· (λ2 − λn)x(2) + ··· + cn (λn − λ1)(λn − λ3) ··· (λn − λn)x(n)
= 0 + c2 (λ2 − λ1)(λ2 − λ3) ··· (λ2 − λn)x(2) + 0 + ··· + 0.

Because the eigenvalues were assumed to be distinct, and x(2) ≠ 0, it follows that c2 = 0.

(b) Use (A − λ1 I)(A − λ2 I) ··· (A − λn−1 I). How? Suppose 0 = c1 x(1) + c2 x(2) + ··· + cn x(n). If we multiply on the left by the matrix (A − λ1 I)(A − λ2 I) ··· (A − λn−1 I), the same computation as in part (a) gives

0 = c1 (λ1 − λ1)(λ1 − λ2) ··· (λ1 − λn−1)x(1) + ··· + cn (λn − λ1)(λn − λ2) ··· (λn − λn−1)x(n) = 0 + ··· + 0 + cn (λn − λ1)(λn − λ2) ··· (λn − λn−1)x(n).

Because the eigenvalues were assumed to be distinct, and x(n) ≠ 0, it follows that cn = 0.

2.2.3.25. Use multiplication on the left by (A − µ2 I)(A − µ3 I) ··· (A − µp I). Why? Suppose

0 = c1,1 x1,1 + ··· + c1,m1 x1,m1 + c2,1 x2,1 + ··· + c2,m2 x2,m2 + ··· + cp,1 xp,1 + ··· + cp,mp xp,mp.

If we multiply on the left by the matrix (A − µ2 I)(A − µ3 I) ··· (A − µp I), each group of terms is scaled by the product of its eigenvalue differences:

0 = (µ1 − µ2) ··· (µ1 − µp)(c1,1 x1,1 + ··· + c1,m1 x1,m1) + (µ2 − µ2) ··· (µ2 − µp)(c2,1 x2,1 + ··· + c2,m2 x2,m2) + ··· + (µp − µ2) ··· (µp − µp)(cp,1 xp,1 + ··· + cp,mp xp,mp)
= (µ1 − µ2) ··· (µ1 − µp)(c1,1 x1,1 + ··· + c1,m1 x1,m1) + 0 + ··· + 0.

Because the µ1, ..., µp were assumed to be distinct, it follows that c1,1 x1,1 + ··· + c1,m1 x1,m1 = 0. But, in Theorem 2.7(a) in Section 2.2, we also assumed that {x1,1, ..., x1,m1} is linearly independent. This implies that c1,1 = ··· = c1,m1 = 0.

2.2.3.26. No, R³ cannot have a basis of eigenvectors all of which have 0 in their first component, because a basis must span R³, but [1 0 0]T would not be in that span.

2.2.3.27. Expanding along the first column,

0 = |A − λI| = det[ 1−λ, −1, −1 ; −1, 1−λ, −1 ; 0, 1, 8−λ ] = (1 − λ) det[ 1−λ, −1 ; 1, 8−λ ] − (−1) det[ −1, −1 ; 1, 8−λ ]
= (1 − λ)( (1 − λ)(8 − λ) + 1 ) + ( −(8 − λ) + 1 ) = (1 − λ)(λ² − 9λ + 9) − 7 + λ



= −λ³ + 10λ² − 18λ + 9 − 7 + λ = −λ³ + 10λ² − 17λ + 2 =: P(λ). Standard advice for factoring polynomials suggests trying λ = ±1, ±2. We find that P(1) = −6, P(−1) = 30, and P(2) = 0. The latter implies that 2 is an eigenvalue and enables factoring P(λ) = (2 − λ)(λ² − 8λ + 1). The quadratic equation λ² − 8λ + 1 = 0 has solutions λ = (8 ± √60)/2 = 4 ± √15. Because there are three distinct eigenvalues, 2 and 4 ± √15, for this 3 × 3 matrix, theory in Section 2.2 guarantees that A has a set of three linearly independent eigenvectors.
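As a numerical sanity check (not part of the original solution), the characteristic polynomial and the three distinct eigenvalues found in 2.2.3.27 can be confirmed with NumPy; the matrix A is reconstructed from the determinant expansion shown above.

```python
import numpy as np

# Matrix from 2.2.3.27, read off the determinant |A - lambda I| above.
A = np.array([[ 1.0, -1.0, -1.0],
              [-1.0,  1.0, -1.0],
              [ 0.0,  1.0,  8.0]])

# Coefficients of P(lambda) = -lambda^3 + 10 lambda^2 - 17 lambda + 2.
P = [-1.0, 10.0, -17.0, 2.0]

# 2 should be a root of P, and the spectrum should be {2, 4 - sqrt 15, 4 + sqrt 15}.
eigs = np.sort(np.linalg.eigvals(A).real)
expected = np.sort([2.0, 4.0 - np.sqrt(15.0), 4.0 + np.sqrt(15.0)])
```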




Section 2.3.4

2.3.4.1. Denote a1 = [1, 1]T and a2 = [2, 0]T. To start the Gram-Schmidt process, let

v1 := a1,  r11 := ||v1|| = √2,  q1 = (1/r11) v1 = (1/√2) [1, 1]T.

Next, let

v2 := a2 − (a2 • q1)q1 = [2, 0]T − ( [2, 0]T • (1/√2)[1, 1]T ) (1/√2)[1, 1]T = [2, 0]T − (2/√2)(1/√2)[1, 1]T = [1, −1]T,

r22 := ||v2|| = √2, and

q2 = (1/r22) v2 = (1/√2) [1, −1]T.

According to Theorem 2.16 in Section 2.3, the o.n. set { (1/√2)[1, 1]T, (1/√2)[1, −1]T } has span equal to the span of the given set of vectors, { [1, 1]T, [2, 0]T }.
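A quick numerical check (not part of the manual) of the Gram-Schmidt run in 2.3.4.1:

```python
import numpy as np

a1 = np.array([1.0, 1.0])
a2 = np.array([2.0, 0.0])

v1 = a1
q1 = v1 / np.linalg.norm(v1)            # should be (1/sqrt 2)[1, 1]^T
v2 = a2 - (a2 @ q1) * q1
q2 = v2 / np.linalg.norm(v2)            # should be (1/sqrt 2)[1, -1]^T
```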

2.3.4.2. Denote a1 = [1, 0, 1]T, a2 = [0, 1, 1]T, and a3 = [1, 1, 1]T. To start the Gram-Schmidt process, let

v1 := a1,  r11 := ||v1|| = √2,  q1 = (1/r11) v1 = (1/√2) [1, 0, 1]T.

Next, let

v2 := a2 − (a2 • q1)q1 = [0, 1, 1]T − (1/√2)(1/√2) [1, 0, 1]T = [0, 1, 1]T − (1/2)[1, 0, 1]T = (1/2) [−1, 2, 1]T,

r22 := ||v2|| = √6 / 2, and

q2 = (1/r22) v2 = (1/√6) [−1, 2, 1]T.

Further, let

v3 := a3 − (a3 • q1)q1 − (a3 • q2)q2 = [1, 1, 1]T − (2/√2)(1/√2)[1, 0, 1]T − (2/√6)(1/√6)[−1, 2, 1]T = [1, 1, 1]T − [1, 0, 1]T − (1/3)[−1, 2, 1]T = (1/3) [1, 1, −1]T,

r33 := ||v3|| = 1/√3, and

q3 = (1/r33) v3 = (1/√3) [1, 1, −1]T.

According to Theorem 2.16 in Section 2.3, the o.n. set { (1/√2)[1, 0, 1]T, (1/√6)[−1, 2, 1]T, (1/√3)[1, 1, −1]T }

      1  0  1 has span equal to the span of the given set of vectors,  0  ,  1  ,  1  .   1 1 1      −1 1 2 2.3.4.3. Denote a1 =  2 , a2 =  1 , and a3 =  0 . To start the Gram-Schmidt process, let 0 2 −1 √ v1 , a1 , r11 , ||v1 || = 5, 

and q1 =

−1 r11 v1

  −1 1  2 . = √ 5 0

Next, let              2 2 2 2 −1 −1 −1 1 1 1 v2 , a2 − (a2 • q1 )q1 = 1  −  1  • √  2  √  2 = 1  − 0 · √  2 =  1  , 5 5 5 −1 −1 −1 −1 0 0 0 √ r22 , ||v2 || = 6, and   2 1  −1 1 . q2 = r22 v2 = √ 6 −1 

Further, let v3 , a3 − (a3 • q1 )q1 − (a3 • q2 )q2 

             −1 −1 1 2 2 1 1 1 1 1 1 =  0  −  0  • √  2  √  2  −  0  • √  1  √  1  5 5 6 6 0 0 2 −1 −1 2 2             −1 2 1 4 2 1 1 1 −1 2 √  2  − 0 · √  1  =  2  =  1 , = 0 − √ 5 5 5 5 6 0 −1 2 10 5 r33 , ||v3 || =

√ 2 30 , 5

and q3 =

−1 r33 v3 =

  2 1  √ 1 . 30 5

According to Theorem 2.16 in Section 2.3, the o.n. set     −1 2  1  1 √ 2 , √  1  5 6 0 −1   −1 has span equal to the span of the given set of vectors,  2  0

 2   , √1  1   30 5      2 1  , 1 , 0  .  −1 2 





2.3.4.4. Denote a1 = [1, −1, 1]T, a2 = [1, 1, −1]T, and a3 = [−1, 1, 1]T. To start the Gram-Schmidt process, let

v1 := a1,  r11 := ||v1|| = √3,  q1 = (1/r11) v1 = (1/√3) [1, −1, 1]T.

Next, let

v2 := a2 − (a2 • q1)q1 = [1, 1, −1]T − (−1/√3)(1/√3)[1, −1, 1]T = [1, 1, −1]T + (1/3)[1, −1, 1]T = (2/3) [2, 1, −1]T,

r22 := ||v2|| = 2√6 / 3, and q2 = (1/r22) v2 = (1/√6) [2, 1, −1]T.

Further, let

v3 := a3 − (a3 • q1)q1 − (a3 • q2)q2 = [−1, 1, 1]T + (1/3)[1, −1, 1]T + (1/3)[2, 1, −1]T = [0, 1, 1]T,

r33 := ||v3|| = √2, and q3 = (1/r33) v3 = (1/√2) [0, 1, 1]T.

According to Theorem 2.16 in Section 2.3, the o.n. set { (1/√3)[1, −1, 1]T, (1/√6)[2, 1, −1]T, (1/√2)[0, 1, 1]T } has span equal to the span of the given set of vectors, { [1, −1, 1]T, [1, 1, −1]T, [−1, 1, 1]T }.

2.3.4.5. Denote a1 = [3, −4]T and a2 = [1, √2]T. To start the Gram-Schmidt process, let

v1 := a1,  r11 := ||v1|| = 5,  q1 = (1/r11) v1 = (1/5) [3, −4]T.

Next, let

v2 := a2 − (a2 • q1)q1 = [1, √2]T − ((3 − 4√2)/5)(1/5)[3, −4]T = (1/25) [16 + 12√2, 12 + 9√2]T = ((4 + 3√2)/25) [4, 3]T,

r22 := ||v2|| = (4 + 3√2)/5, and

q2 = (1/r22) v2 = (1/5) [4, 3]T.

According to Theorem 2.16 in Section 2.3, the o.n. set S = { (1/5)[3, −4]T, (1/5)[4, 3]T } has span equal to the span of the given set of vectors, { [3, −4]T, [1, √2]T }.
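A numerical check (not part of the manual) of 2.3.4.5: even with the irrational entry √2 in a2, Gram-Schmidt yields the clean unit vector q2 = (1/5)[4, 3]T.

```python
import numpy as np

a1 = np.array([3.0, -4.0])
a2 = np.array([1.0, np.sqrt(2.0)])

q1 = a1 / np.linalg.norm(a1)            # (1/5)[3, -4]^T
v2 = a2 - (a2 @ q1) * q1
q2 = v2 / np.linalg.norm(v2)            # should be (1/5)[4, 3]^T
r22 = np.linalg.norm(v2)                # should be (4 + 3 sqrt 2)/5
```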




2.3.4.6. (a) We were given a1 = [2, 1, 1]T, a2 = [1, 1, 0]T, and a3 = [1, 0, 2]T. To start the Gram-Schmidt process, let

v1 := a1,  r11 := ||v1|| = √6,  q1 = (1/r11) v1 = (1/√6) [2, 1, 1]T.

Next, let

v2 := a2 − (a2 • q1)q1 = [1, 1, 0]T − (3/√6)(1/√6)[2, 1, 1]T = [1, 1, 0]T − (1/2)[2, 1, 1]T = (1/2) [0, 1, −1]T,

r22 := ||v2|| = √2 / 2, and q2 = (1/r22) v2 = (1/√2) [0, 1, −1]T.

Further, let

v3 := a3 − (a3 • q1)q1 − (a3 • q2)q2 = [1, 0, 2]T − (4/√6)(1/√6)[2, 1, 1]T − (−2/√2)(1/√2)[0, 1, −1]T = [1, 0, 2]T − (2/3)[2, 1, 1]T + [0, 1, −1]T = (1/3) [−1, 1, 1]T,

r33 := ||v3|| = 1/√3, and q3 = (1/r33) v3 = (1/√3) [−1, 1, 1]T.

According to Theorem 2.16 in Section 2.3, the o.n. set { (1/√6)[2, 1, 1]T, (1/√2)[0, 1, −1]T, (1/√3)[−1, 1, 1]T } has span equal to the span of the vectors a1 = [2, 1, 1]T, a2 = [1, 1, 0]T, a3 = [1, 0, 2]T.

(b) We were given w1 = a2 = [1, 1, 0]T, w2 = a3 = [1, 0, 2]T, and w3 = a1 = [2, 1, 1]T. To start the Gram-Schmidt process, let

V1 := w1,  R11 := ||V1|| = √2,  q1 = (1/R11) V1 = (1/√2) [1, 1, 0]T.

Next, let

V2 := w2 − (w2 • q1)q1 = [1, 0, 2]T − (1/√2)(1/√2)[1, 1, 0]T = [1, 0, 2]T − (1/2)[1, 1, 0]T = (1/2) [1, −1, 4]T,

R22 := ||V2|| = √18 / 2, and q2 = (1/R22) V2 = (1/√18) [1, −1, 4]T.

Further, let

V3 := w3 − (w3 • q1)q1 − (w3 • q2)q2 = [2, 1, 1]T − (3/√2)(1/√2)[1, 1, 0]T − (5/√18)(1/√18)[1, −1, 4]T = [2, 1, 1]T − (3/2)[1, 1, 0]T − (5/18)[1, −1, 4]T = (1/18) [4, −4, −2]T,

R33 := ||V3|| = 1/3, and q3 = (1/R33) V3 = (1/6) [4, −4, −2]T = (1/3) [2, −2, −1]T.

According to Theorem 2.16 in Section 2.3, the o.n. set { (1/√2)[1, 1, 0]T, (1/√18)[1, −1, 4]T, (1/3)[2, −2, −1]T } has span equal to the span of the vectors w1 = [1, 1, 0]T, w2 = [1, 0, 2]T, w3 = [2, 1, 1]T.

The results of parts (a) and (b) show that the o.n. set that we get depends on the order in which we list the three given vectors (whose span we wish to equal using the span of the vectors in the o.n. set).

2.3.4.7. Example: Gram-Schmidt on { [1, 0]T, [1, 1]T } yields { [1, 0]T, [0, 1]T }, while Gram-Schmidt on { [1, 1]T, [1, 0]T } yields { (1/√2)[1, 1]T, (1/√2)[1, −1]T }.
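The order dependence pointed out in 2.3.4.6 and 2.3.4.7 is easy to see numerically; this sketch (not from the text) reruns Gram-Schmidt on the same two vectors in both orders.

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt (helper written for this check, not from the text)."""
    qs = []
    for a in vectors:
        v = a - sum((a @ q) * q for q in qs)
        qs.append(v / np.linalg.norm(v))
    return qs

e1 = np.array([1.0, 0.0])
d = np.array([1.0, 1.0])

qa = gram_schmidt([e1, d])   # order 1: yields e1, e2
qb = gram_schmidt([d, e1])   # order 2: yields (1/sqrt 2)[1,1], (1/sqrt 2)[1,-1]
```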

2.3.4.8. To start the Gram-Schmidt process, let v1 := a1, r11 := ||v1|| = ||a1||, and q1 = (1/r11) v1. Next, let

v2 := a2 − (a2 • q1)q1 = a2 − (a2 • (1/r11)a1)(1/r11)a1 = a2 − r11⁻²(a2 • a1)a1 = a2 − r11⁻² · (1/3) · a1.

So

r22 := ||v2|| = || a2 − (1/3) r11⁻² a1 ||  and  q2 = (1/r22)( a2 − (1/(3||a1||²)) a1 ).

We could go further and calculate that

r22² = ||v2||² = ⟨ a2 − (1/3) r11⁻² a1, a2 − (1/3) r11⁻² a1 ⟩ = ||a2||² − (2/3) r11⁻² ⟨a2, a1⟩ + (1/9) r11⁻⁴ ||a1||² = ||a2||² − (2/9) r11⁻² + (1/9) r11⁻⁴ · r11² = ||a2||² − (1/9) r11⁻²,

that is, r22² = ||a2||² − (1/9) ||a1||⁻².

2.3.4.9. We calculate ⟨x, x + y − z⟩ = ⟨x, x⟩ + ⟨x, y⟩ − ⟨x, z⟩ = ||x||² + ⟨x, y⟩ − ⟨x, z⟩,




⟨y, y + z − x⟩ = ⟨y, y⟩ + ⟨y, z⟩ − ⟨y, x⟩ = ||y||² + ⟨y, z⟩ − ⟨y, x⟩, and ⟨z, z + x − y⟩ = ⟨z, z⟩ + ⟨z, x⟩ − ⟨z, y⟩ = ||z||² + ⟨z, x⟩ − ⟨z, y⟩. So, when the three results are added, the six inner-product terms cancel in pairs:

⟨x, x + y − z⟩ + ⟨y, y + z − x⟩ + ⟨z, z + x − y⟩

= ||x||² + ||y||² + ||z||². Yes, ⟨x, x + y − z⟩ + ⟨y, y + z − x⟩ + ⟨z, z + x − y⟩ = ||x||² + ||y||² + ||z||² is true for all vectors x, y, z.

2.3.4.10. (i)(a) and (b) If n = 2 or n = 3, in fact for any n ≥ 1, the given data x ⊥ y and y ⊥ z imply ⟨(x + z), y⟩ = ⟨x, y⟩ + ⟨z, y⟩ = 0 + 0 = 0, so (x + z) ⊥ y.

(ii)(a) If n = 2, then x ∥ z. Why? First, we will give an intuitive argument why this should be true, but to be honest this intuitive reasoning is not 100% persuasive, as we will see later when discussing the case n = 3. Note that x ⊥ y implies the angle between x and y is ±90°, and y ⊥ z implies the angle between y and z is ±90°. Because the nonzero vectors x, y, and z are all in R², the angle between x and z is −180°, 0°, or 180°, the sum or the difference of the angle between x and y and the angle between y and z. Because the angle between x and z is an integer multiple of 180°, we conclude that x ∥ z.

Here is completely persuasive reasoning: Because x ⊥ y and x, y are nonzero vectors in R², the set of vectors {x, y} is a basis for R². Why? From Theorem 1.43 in Section 1.7, it will suffice to explain why {x, y} is a linearly independent set of vectors in R²: Suppose 0 = αx + βy for some scalars α, β. Then x ⊥ y would imply 0 = 0 • x = α x • x + β y • x = α||x||² + β · 0 = α||x||², hence α = 0 because x ≠ 0. Similarly, 0 = 0 • y = α x • y + β y • y = α · 0 + β||y||² = β||y||², hence β = 0 because y ≠ 0. So, 0 = αx + βy implies α = β = 0. By definition, {x, y} is a linearly independent set of vectors in R². It follows that there exist constants c1, c2 such that z = c1 x + c2 y. It follows from x ⊥ y that z • x = c1 x • x + c2 y • x = c1 x • x + c2 · 0, so z • x = c1 ||x||². Similarly, it follows from y ⊥ z that z • y = c1 x • y + c2 y • y = c1 · 0 + c2 y • y, so z • y = c2 ||y||². It follows that

z =

(z • x / ||x||²) x + (z • y / ||y||²) y.

But, z ⊥ y implies z • y = 0, so z = (z • x / ||x||²) x. It follows that

(⋆)  ||z|| = ( |z • x| / ||x||² ) ||x|| = |z • x| / ||x||.




But the Cauchy-Schwarz inequality implies |z • x| ≤ ||x|| ||z||, with equality only if x ∥ z. This, combined with (⋆), implies

||z|| = |z • x| / ||x|| ≤ ||x|| ||z|| / ||x|| = ||z||,

hence equality holds throughout. This implies that equality holds in the Cauchy-Schwarz inequality, hence x ∥ z.

(ii)(b) If n = 3, then it does not follow that x ∥ z. Here's an example: x = e(1), y = e(2), and z = e(3). So, why does the intuitive argument we gave for n = 2 not work for n = 3? That is, why does the angle between x and y being ±90° and the angle between y and z being ±90° not imply that the angle between x and z is −180°, 0°, or 180°, the sum or the difference of the angle between x and y and the angle between y and z? Perhaps you can explain the error in the reasoning for n = 3. In any case, it points out why we have the responsibility for explaining why something is true; we cannot "turn the tables" and claim that "something is true until proven otherwise."

(iii)(a) and (b) If n = 2 or n = 3, x ⊥ (−y), because x • (−y) = x • (−1 · y) = −1 · x • y = −1 · 0 = 0.
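A numerical restatement (not from the text) of the n = 3 counterexample in 2.3.4.10(ii)(b): x = e(1), y = e(2), z = e(3) satisfy x ⊥ y and y ⊥ z, yet x and z are not parallel.

```python
import numpy as np

x, y, z = np.eye(3)                     # the standard basis vectors e1, e2, e3

perp_xy = (x @ y == 0)                  # x is perpendicular to y
perp_yz = (y @ z == 0)                  # y is perpendicular to z
cross = np.cross(x, z)                  # nonzero, so x and z are NOT parallel
```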

2.3.4.11. (a) To start the Gram-Schmidt process, let v1 := a1, r11 := ||v1|| = √⟨a1, a1⟩ = √2, and q1 = (1/r11) v1 = (1/√2) a1. Next, let

v2 := a2 − (a2 • q1)q1 = a2 − (1/2)⟨a2, a1⟩ a1 = a2 − (3/2) a1.

So

r22² := ||v2||² = ⟨ a2 − (3/2)a1, a2 − (3/2)a1 ⟩ = ⟨a2, a2⟩ − 3⟨a2, a1⟩ + (9/4)⟨a1, a1⟩ = 5 − 3 · 3 + (9/4) · 2 = 1/2.

We have r22 = 1/√2 and q2 = (1/r22) v2 = √2 ( a2 − (3/2) a1 ). Next,

v3 := a3 − (a3 • q1)q1 − (a3 • q2)q2 = a3 − (1/2)⟨a3, a1⟩ a1 − 2( ⟨a3, a2⟩ − (3/2)⟨a3, a1⟩ )( a2 − (3/2)a1 ) = a3 − (1/2) · 4 · a1 − 2( 6 − (3/2) · 4 )( a2 − (3/2)a1 ) = a3 − 2a1.

So, {v1, v2, v3} is an orthogonal basis for R³, where v1 = a1, v2 = a2 − (3/2)a1, and v3 = a3 − 2a1.

(b) Continuing with the Gram-Schmidt process,

r33² := ||v3||² = ||a3 − 2a1||² = ⟨a3, a3⟩ − 4⟨a3, a1⟩ + 4⟨a1, a1⟩ = 9 − 4 · 4 + 4 · 2 = 1.

We have r33 = 1 and q3 = (1/r33) v3 = a3 − 2a1. So, {q1, q2, q3} is an orthonormal basis for R³, where q1 = (1/√2)a1, q2 = √2(a2 − (3/2)a1), and q3 = a3 − 2a1.

2.3.4.12. ||x + y||² + ||x − y||² = ⟨x + y, x + y⟩ + ⟨x − y, x − y⟩ = ⟨x, x⟩ + 2⟨x, y⟩ + ⟨y, y⟩ + ⟨x, x⟩ − 2⟨x, y⟩ + ⟨y, y⟩ = 2( ||x||² + ||y||² ).
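A numerical spot-check (not part of the manual) of the parallelogram law proved in 2.3.4.12, on arbitrary random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)

lhs = np.linalg.norm(x + y) ** 2 + np.linalg.norm(x - y) ** 2
rhs = 2 * (np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2)
```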




2.3.4.13. For all x, y in Rⁿ, we have ||x + y||² = ⟨x + y, x + y⟩ = ⟨x, x⟩ + 2⟨x, y⟩ + ⟨y, y⟩, so ||x + y||² = ||x||² + 2⟨x, y⟩ + ||y||². It follows that ||x + y||² = ||x||² + ||y||² if, and only if, 2⟨x, y⟩ = 0, that is, if and only if x ⊥ y.

2.3.4.14. For all scalars α and vectors x in Rⁿ, ||αx||² = ||[αx1 ... αxn]||² = (αx1)² + ··· + (αxn)² = α²(x1² + ··· + xn²) = |α|² ||x||². We can take the square root of both sides to get ||αx|| = |α| ||x||, because |α| ||x|| ≥ 0.

2.3.4.15. We are given that an is a linear combination of {a1, ..., an−1}, hence an is in Span{a1, ..., an−1} = Span{q1, ..., qn−1}. So, an is a linear combination of {q1, ..., qn−1}, that is, there exist scalars c1, ..., cn−1 such that

(∗)  an = c1 q1 + ··· + cn−1 qn−1.

Because {q1, ..., qn−1} is an o.n. set of vectors, cj = an • qj for j = 1, ..., n − 1. [This follows from taking the dot product of (∗) with qj to get an • qj = c1 q1 • qj + ··· + cn−1 qn−1 • qj = c1 · 0 + ··· + cj−1 · 0 + cj · 1 + cj+1 · 0 + ··· + cn−1 · 0 = cj.] It follows that

vn := an − (an • q1)q1 − ··· − (an • qn−1)qn−1 = an − (c1 q1 + ··· + cn−1 qn−1) = an − an = 0.

It follows that we cannot construct qn from {a1, ..., an}, and the Gram-Schmidt process fails at this step.

2.3.4.16. We are given that λ is real, Au = λu, and ||u|| = 1.
(a) ⟨u, Au⟩ = ⟨u, λu⟩ = λ⟨u, u⟩ = λ||u||² = λ · 1 = λ.
(b) ⟨u, A²u⟩ = ⟨u, λ²u⟩ = λ²⟨u, u⟩ = λ²||u||² = λ² · 1 = λ².
(c) ||Au||² = (||λu||)² = (|λ| ||u||)² = |λ|² ||u||² = λ² · 1 = λ².

2.3.4.17. Assume q is a unit vector and define P := qqT. Then P² = (qqT)(qqT) = q(qTq)qT = q(||q||²)qT = q(1)qT = qqT = P, and PT = (qqT)T = (qT)T qT = qqT = P. So, P satisfies the two properties of an orthogonal projection, that is, P is an orthogonal projection.

2.3.4.18. A = γ1 P1 + γ2 P2 = γ1 q1 q1T + γ2 q2 q2T.
(a) We calculate Aw = γ1 (q1T w)q1 + γ2 (q2T w)q2 = γ1 q1T(α1 q1 + α2 q2)q1 + γ2 q2T(α1 q1 + α2 q2)q2 = γ1 (α1 · 1 + α2 · 0)q1 + γ2 (α1 · 0 + α2 · 1)q2 = α1 γ1 q1 + α2 γ2 q2.
(b) λ is an eigenvalue if, and only if, Aw = λw for some w ≠ 0. We can try to find eigenvalues by using w in the form w = α1 q1 + α2 q2 for unspecified scalars α1, α2. Note that because {q1, q2} is an o.n. set of vectors, the Pythagorean Theorem 2.15 in Section 2.3 implies that ||w||² = ||α1 q1 + α2 q2||² = ||α1 q1||² + ||α2 q2||² = |α1|² ||q1||² + |α2|² ||q2||² = α1² · 1 + α2² · 1. So, ||w||² = α1² + α2².



The vector w = α1 q1 + α2 q2 will be nonzero as long as either α1 ≠ 0 or α2 ≠ 0. Using the result of part (a), we study the solutions of α1 γ1 q1 + α2 γ2 q2 = Aw = λw = λ(α1 q1 + α2 q2), that is, α1 (γ1 − λ)q1 + α2 (γ2 − λ)q2 = 0. Using the Pythagorean Theorem 2.15 in Section 2.3 again implies that

0 = ||0||² = ||α1 (γ1 − λ)q1 + α2 (γ2 − λ)q2||² = ( α1 (γ1 − λ) )² + ( α2 (γ2 − λ) )².

So, we need both α1 (γ1 − λ) = 0 and α2 (γ2 − λ) = 0, as well as either α1 ≠ 0 or α2 ≠ 0. The solutions for λ are λ1 = γ1 and λ2 = γ2. Because we were given that m = 2, that is, A is a 2 × 2 matrix, A has at most two distinct eigenvalues. Because we were given that γ1 ≠ γ2, the only eigenvalues of A are γ1 and γ2. The eigenvectors of A corresponding to eigenvalue γ1 are w = α1 q1, with α1 ≠ 0. The eigenvectors of A corresponding to eigenvalue γ2 are w = α2 q2, with α2 ≠ 0.

2.3.4.19. Define P := P1 P2. We calculate P² = (P1 P2)(P1 P2) = P1 (P2 P1) P2. We were given that P1 and P2 are orthogonal projections and that P1 P2 = P2 P1, so P² = P1 (P1 P2) P2 = P1² P2² = P1 P2 = P. Also, PT = (P1 P2)T = P2T P1T = P2 P1 = P1 P2 = P. So, P satisfies the two properties of an orthogonal projection, that is, P is an orthogonal projection.
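A numerical check (not from the text) of the orthogonal-projection properties proved in 2.3.4.17: for a unit vector q, P = qqT satisfies P² = P and Pᵀ = P. The particular q below is our own hypothetical example.

```python
import numpy as np

q = np.array([1.0, 2.0, 2.0]) / 3.0     # a unit vector (hypothetical example)
P = np.outer(q, q)                      # P = q q^T

idempotent = np.allclose(P @ P, P)      # P^2 = P
symmetric = np.allclose(P.T, P)         # P^T = P
```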




Section 2.4.3

2.4.3.1. Denote a1 = [1, 2, −1]T, a2 = [1, −1, 0]T, and a3 = [3, 0, −1]T. To start the Gram-Schmidt process, let

v1 := a1,  r11 := ||v1|| = √6,  q1 = (1/r11) v1 = (1/√6) [1, 2, −1]T.

Next, let

v2 := a2 − (a2 • q1)q1 = [1, −1, 0]T − (−1/√6)(1/√6)[1, 2, −1]T = [1, −1, 0]T + (1/6)[1, 2, −1]T = (1/6) [7, −4, −1]T,

r22 := ||v2|| = √66 / 6, and q2 = (1/r22) v2 = (1/√66) [7, −4, −1]T.

Further, let

v3 := a3 − (a3 • q1)q1 − (a3 • q2)q2 = [3, 0, −1]T − (4/√6)(1/√6)[1, 2, −1]T − (22/√66)(1/√66)[7, −4, −1]T = [3, 0, −1]T − (2/3)[1, 2, −1]T − (1/3)[7, −4, −1]T = [0, 0, 0]T.

In using the G.-S. process, we arrive at v3 = 0, which cannot be used to create the third orthonormal vector q3. So, no, we cannot use the given set of vectors to construct an o.n. basis for R³ using the G.-S. process. The underlying cause of the G.-S. process's failure is that the given set of three vectors is linearly dependent.

2.4.3.2. Examples:

Q1 = [ 1, 0 ; 0, 1 ],  Q2 = [ 0, 1 ; 1, 0 ],  and  Q3 = (1/√2) [ 1, 1 ; 1, −1 ]

all are solutions for Q of the matrix equation QTQ = I2.
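A numerical check (not part of the manual) that the three examples in 2.4.3.2 all satisfy QTQ = I2:

```python
import numpy as np

Q1 = np.eye(2)
Q2 = np.array([[0.0, 1.0], [1.0, 0.0]])
Q3 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

all_orthogonal = all(np.allclose(Q.T @ Q, np.eye(2)) for Q in (Q1, Q2, Q3))
```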

2.4.3.3. A is a real, orthogonal matrix exactly when

I3 = ATA = [ a, 0, 1/√2 ; 0, 1, 0 ; 1/√2, 0, b ]T [ a, 0, 1/√2 ; 0, 1, 0 ; 1/√2, 0, b ] = [ a² + 1/2, 0, (a + b)/√2 ; 0, 1, 0 ; (a + b)/√2, 0, b² + 1/2 ],

which is true if, and only if, a² + 1/2 = 1, a + b = 0, and b² + 1/2 = 1. The first equation is satisfied if, and only if, a = ±1/√2, in which case b = −a = ∓1/√2 satisfies the third equation. So, there are two solutions for a and b: (1) a = 1/√2 and b = −1/√2; (2) a = −1/√2 and b = 1/√2.

2.4.3.4. I made many attempts to pick a nice first column of Q having no zero entry and then using the method in the Appendix to find the second, and then the third, columns of Q. Unfortunately, it always turned out that the proposed third column of Q had a zero entry! So, I decided to try a completely different and more systematic method:




We can use the method in the Appendix, starting from a vector u1 which is to be the first column of Q and which has no zero entry. After many failed attempts to find all three columns of Q each having no zero entries, I decided to try a first column with three different nonzero entries: Let the first column of Q be u1 = (1/√14) [1, 2, 3]T. Let's try to find Q's second and third columns, each in the form v = [v1 v2 v3]T, that are orthogonal to the first column:

0 = v • u1 = (1/√14)( v1 + 2v2 + 3v3 ), that is, v1 + 2v2 + 3v3 = 0.

The augmented matrix is [ 1 2 3 | 0 ], so the free variables are v2, v3, and the solutions are v = [−2v2 − 3v3, v2, v3]T. One choice, v2 = a ≠ 0 and v3 = b ≠ 0 with 2a + 3b ≠ 0, will give the second column of Q, and another choice, v2 = α ≠ 0 and v3 = β ≠ 0 with 2α + 3β ≠ 0, will give the third column of Q. For the columns of Q to be orthogonal we will need

0 = [−2a − 3b, a, b] • [−2α − 3β, α, β] = (−2a − 3b)(−2α − 3β) + aα + bβ = 4aα + 6bα + 6aβ + 9bβ + aα + bβ = 5aα + 6bα + 6aβ + 10bβ.

Let's try a = 2 and b = −1, so we need 0 = 10α − 6α + 12β − 10β = 4α + 2β and 2α + 3β ≠ 0. For example, α = 1 and β = −2 should work. This gives two vectors, one from a = 2 and b = −1, v = [−1, 2, −1]T, and another from α = 1 and β = −2, w = [4, 1, −2]T. All that is needed is to normalize the second and third columns of Q to get

Q = [ 1/√14, −1/√6, 4/√21 ; 2/√14, 2/√6, 1/√21 ; 3/√14, −1/√6, −2/√21 ].

Turyn, January 2, 2014

page 35

which is to be the first column of Q. We will use the Gram-Schmidt process on the set of vectors {u1 , e(1) , e(2) , e(3) } to get an o.n. set of three vectors; we will stop the process after we have three vectors. Hopefully each of those vectors will have no zero entry; if not, we can try using the Gram-Schmidt process on a different set of vectors. Let               1 1 1 1 1 1 1 1 1 1 √  2  v2 , e(1) − (e(1) • u1 )u1 =  0  −  0  • √  2  √  2  =  0  − √ 14 14 14 14 0 0 0 3 3 3   13 1  −2  , = 14 −3 √

182 , 14

r22 , ||v2 || =

and u2 =

−1 r22 v2 =

  13 1  √ −2  . 182 −3

Further, let v3 , e(2) − (e(2) • u1 )u1 − (e(2) • u2 )u2 

        0 0 1 1 0 1 1 =  1  −  1  • √  2  √  2  −  1 14 14 0 0 3 3 0          0 1 13 2 1 −2 1  √  2 − √ √ −2 = 1 − √ 14 14 182 182 0 3 −3 r33 , ||v2 || =

√ 117 , 13



 13 1 • √  −2 182 −3   0  = 1  126 182 −84



  13 1  √  −2  182 −3    0 1 =  9 , 13 −6

and u3 =

−1 r33 v3 =

  0 1  √ 9 . 117 −6

A desired orthogonal matrix is 

√1 14

√13 182

  Q=  

√2 14

2 − √182

√9 117

√3 14

3 − √182

6 − √117

0

   .  

Method 2 : It is easy to guess a column vector that is orthogonal to the first column of Q:   −2 v2 =  1  0 and then normalize it to get the second column of Q to be   −2 1  1 . u2 = √ 5 0 After that, we can use the method of the Appendix to find a third column: Let v3 , e(1) − (e(1) • u1 )u1 − (e1) • u2 )u2               1 1 1 1 1 −2 −2 1 1 1 1 =  0  −  0  • √  2  √  2  −  0  • √  1  √  1  14 14 5 5 0 0 3 3 0 0 0




             1 1 −2 9 3 1 −2 1 1 1 3  18  =  6 , √ 2− √ √  1= =0− √ 70 70 14 14 3 5 5 0 0 −15 −5 

r33 , ||v2 || =

√ 3 70 , 70

and  3 1 −1 u3 = r33 v3 = √  6  . 70 −5 

A desired orthogonal matrix is 

√1 14

− √25

√3 70



  Q=  

√2 14

1 √ 5

√6 70

  .  

√3 14

0

− √570

We could multiply the last column by (−1) to get another orthogonal matrix, 

√1 14

− √25

  Q=Q=  

√2 14

1 √ 5

√3 14

0

− √370



  − √670  .  √5 70

2.4.3.6. There is no real, 3 × 3, orthogonal matrix that has exactly three zero entries. Because no column of an orthogonal matrix can be the zero vector, there are only two cases to consider: (1) each column contains exactly one zero, or (2) one column contains two zeros. In case (1), if each column contains exactly one zero, then without loss of generality the matrix has the form   0 a b Q =  c 0 d , f g 0 where none of a, b, c, d, f, g are zero. Then we would need   0 a b 0 I = QT Q =  c 0 d   a f g 0 b

c 0 d

 f g , 0

that is, 

1  0 0

0 1 0

  2 0 a + b2   0 bd = 1 ag

bd c 2 + d2 cf

 ag cf  f 2 + g2

We would need b = 0 or d = 0, hence Q would have more than three zeros, giving a contradiction. In case (2), without loss of generality, one column of Q is [0 0 1]T = e(3) , that is, the unit vector on the z−axis. The other two columns of Q are orthogonal to e(3) and thus are in the xy−plane. It follows that Q has at least four zero entries, giving a contradiction. 2.4.3.7. We could start with the first column of Q being e(1) , which has two zeros, but then we would have left exactly one zero that we should have in the remaining two columns. Instead, let’s try the first column of Q to be   1 1  1 . u1 = √ 2 0 We will use the Gram-Schmidt process on the set of vectors {u1 , e(1) , e(2) , e(3) } to get an o.n. set of three vectors; we will stop the process after we have three vectors. Hopefully each of those vectors will have no zero entry; if not, we can try using the Gram-Schmidt process on a different set of vectors.




Let                1 1 1 1 1 1 1 1 1 1 1 1 √  1  =  −1  , • u1 )u1 =  0  −  0  • √  1  √  1  =  0  − √ 2 2 2 2 2 0 0 0 0 0 0 0 

v2 , e

(1)

(1)

− (e

r22 , ||v2 || =

√ 2 , 2

and   1 1 −1 u2 = r22 v2 = √  −1  . 2 0

Further, let v3 , e(2) − (e(2) • u1 )u1 − (e(2) • u2 )u2            0 0 1 1 1 1 0 1 1 1 1 =  1  −  1  • √  1  √  1  −  1  • √  −1  √  −1  2 2 2 2 0 0 0 0 0 0 0             0 1 1 0 1 1 1 −1 1 √  1 − √ √  −1  =  0  , = 1 − √ 2 2 2 2 2 0 0 0 0 





which is a dead end. So, let’s try yet again. Let a3 = e(3) , and v3 , a3 − (a3 • u1 )u1 − (a3 • u2 )u2           0 0 1 1 1 1 1 1 1 1 1 −1 =  0  −  0  • √  1  √  1  −  1  • √  −1  √ 0 2 2 2 2 1 1 0 0 1 0             1 1 0 0 1 0 1 0 √  1 − √ √  −1  =  0  e(3) . = 0 − √ 2 2 2 2 0 0 1 1 







[We probably could have guessed the third column after looking at the first two columns! ] A desired orthogonal matrix is   √1 1 √ 0 2 2     1 1  √ √ − 2 0  Q= 2 ,   0

0

1

which has exactly four zeros. Alternatively, it’s even easier to think geometrically to guess an example of Q, one of whose column vectors is e(3) , along the z − axis, and whose other two column vectors are in the plane z = 0. 2.4.3.8. The columns of A are     sin φ cos θ − sin θ A∗1 =  sin φ sin θ  , A∗2 =  cos θ  , cos φ 0



and

A∗3

 cos φ cos θ =  cos φ sin θ  . − sin φ

We calculate A∗1 • A∗2 = − sin φ cos θ sin θ + sin φ sin θ cos θ + cos φ · 0 = 0, A∗1 • A∗3 = sin φ cos θ cos φ cos θ + sin φ sin θ sin φ sin θ − cos φ sin φ = sin φ cos φ(sin2 θ + cos2 θ) − cos φ sin φ = sin φ cos φ · 1 − cos φ sin φ = 0, and A∗2 • A∗3 = − sin θ cos φ cos θ + cos θ sin φ sin θ − 0 · sin φ = − sin θ cos φ cos θ + cos θ sin φ sin θ − 0 = 0.




So, the columns of A are an orthogonal set of vectors. using the Pythagorean identity, we also to calculate that ||A∗1 ||2 = (sin φ cos θ)2 + (sin φ sin θ)2 + (cos φ)2 = sin2 φ cos2 θ + sin2 φ sin2 θ + cos2 φ = sin2 φ(cos2 θ + sin2 θ) + cos2 φ = sin2 φ · 1 + cos2 φ = 1, ||A∗2 ||2 = (− sin θ)2 + (cos θ)2 + (0)2 = sin2 θ + cos2 θ = 1, and ||A∗3 ||2 = cos2 φ cos2 θ + cos2 φ sin2 θ + sin2 φ = cos2 φ(cos2 θ + sin2 θ) + sin2 φ = cos2 φ · 1 + sin2 φ = 1. Because the columns of the real, square matrix A are an o.n. set, it follows that A is an orthogonal matrix. 2.4.3.9. Q1 and Q2 are both orthogonal matrices and Q , Q1 Q2 . Because Q1 and Q2 are both square, so is Q (otherwise Q doesn’t even exist.) We have  QT Q = (Q1 Q2 )T (Q1 Q2 ) = QT2 QT1 (Q1 Q2 ) = QT2 (QT1 Q1 )Q2 = QT2 (I)Q2 = QT2 Q2 = I, so, Q is an orthogonal matrix. 2.4.3.10. (a) If m = n, hence Q is square, then the set of n columns of Q are an o.n. set of vectors in Rn , hence the set of columns of Q are a basis for Rn and thus Q has rank(Q) = n. (b) Suppose n < m and Q = [q1 ... qn ] is a real, m × n, matrix whose set of columns is an o.n. set. The latter implies that each column of Q is a unit vector and is thus nonzero. For all m × n matrices A, rank(A) ≤ min{m, n}. So, n < m implies rank(Q) ≤ n. To explain why rank(Q) = n, it will suffice to explain why we cannot have rank(Q) < n. Suppose that rank(Q) < n. Then the nullity of Q is positive, so there exists a vector x = [x1 ... xn ] for which 0 = Qx = [x1 q1 + ... + xn qn ], using Lemma 1.3 in Section 1.7, and x 6= 0. The latter implies that, without loss of generality, xn 6= 0, hence qn = −

1 (x1 q1 + ... + xn−1 qn−1 ). xn

Because qn 6= 0, at least one of x1 , ..., xn−1 must be nonzero. Say that xj 6= 0. Then taking the inner product with qj and using the fact that {q1 , ..., qn } is an o.n. set, we have 0 = qj • qn = −

1 (x1 qj • q1 + ... + xn−1 qj • qn−1 ) xn

1 1 (x1 · 0 + ... + xj−1 · 0 + xj · 1 + xj+1 · 0 + ... + xn−1 · 0) = − xj , xn xn giving a contradiction with xj 6= 0. So, rank(Q) = n. (c) If m < n, then it is not possible for the set of columns of the m × n, real matrix Q to be an o.n. set. Why not? =−

Method 1 : If the n columns of Q were an o.n. set in Rm , then by Theorem 2.20 in Section 2.4 the set of columns of Q would be a linearly independent set. Using Lemma 1.3 in Section 1.7, it would follow that the only solution of 0 = Qx = x1 q1 + ... + xn qn , is x = 0. Then ν(Q), the nullity of Q, would be zero, so Theorem 1.17 in Section 1.3 implies that m = min{m, n} ≤ rank(Q) = n − ν(Q) = n − 0 = n > m, giving a contradiction. Method 2 : If the n columns of Q were an o.n. set in Rm , then the set of its first m columns, {q1 , ..., qm }, would be an o.n. basis for Rm . By Corollary 2.6(a) in Section 2.4 and using orthonormality, qm+1 = hqm+1 , q1 iq1 + ... + hqm+1 , qm iqm = 0 · q1 + ... + 0 · qm = 0, contradicting the assumption that qm+1 is a unit vector.




2.4.3.11. For all x, y, ⟨Qx, Qy⟩ = (Qx)T(Qy) = (xTQT)(Qy) = xT(QTQ)y = xT(I)y = ⟨x, y⟩.
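A numerical spot-check (not from the text) of 2.4.3.11: an orthogonal Q preserves inner products, ⟨Qx, Qy⟩ = ⟨x, y⟩. The particular Q, x, y below are our own example values.

```python
import numpy as np

Q = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # an orthogonal matrix
x = np.array([3.0, -4.0])
y = np.array([1.0, 2.0])

preserved = np.isclose((Q @ x) @ (Q @ y), x @ y)
```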

2.4.3.12. (a) For all x, ||Qx||² = ⟨Qx, Qx⟩ = (Qx)TQx = xTQTQx = xT(QTQ)x = xTIx = xTx = ||x||². Then, take the square root of both sides to get ||Qx|| = ±||x||. But, ||Qx|| ≥ 0, so ||Qx|| = ||x||.
(b) If x is a unit eigenvector of Q corresponding to λ, then 1 = ||x||² = ||Qx||² = ||λx||² = (|λ| ||x||)² = |λ|² ||x||² = |λ|² · 1² = |λ|². Because |λ|, the modulus of λ, is non-negative, taking the square root gives |λ| = 1.

1 1 1 √ x1 + √ x2 + √ x3 3 2 6

2

 +

1 √ 3

x1 +

1 √ 2

x2 +

1 √ 6

x3

1 √ 3

x1 −

1 √ 2

x2 +

1 √ 6

x3

√1 3

x1 −

2 √ 6

x3

 2     

1 1 1 √ x1 − √ x2 + √ x3 3 2 6

2

 +

1 2 √ x1 − √ x3 3 6

2

1 2 1 2 1 2 2 2 2 x1 + x2 + x3 + √ x1 x2 + √ x1 x3 + √ x2 x3 3 2 6 6 18 12 1 2 1 2 1 2 2 2 2 1 2 4 2 4 2 + x1 + x2 + x3 − √ x1 x2 + √ x1 x3 − √ x2 x3 + x1 + x3 − √ x1 x3 + √ x2 x3 3 2 6 3 6 6 18 12 18 12 =

= x21 + x22 + x23 = ||x||2 . 2.4.3.14. Method 1 : Yes, because A is a real, orthogonal matrix and we have the result of Problem 2.4.3.12. Method 2 : Alternatively, we could explicitly calculate that for all x = [ x1 x2 x3 ]T in R3 ,    2 ||Ax|| =     = =

1 1 1 x1 − x2 + √ x3 2 2 2

2

 +

1 2

x1 − 1 √ 2

1 2

1 2

x2 +

1 √ 2

1 √ 2

x2

x2 −

1 √ 2

x1 +

x1 −

1 2

1 1 √ x1 + √ x2 2 2

x3

x3

2

 2      

+

1 1 1 x1 − x2 − √ x3 2 2 2

2

1 2 1 2 1 2 2 2 2 1 1 2 x1 + x2 + x3 − x1 x2 + √ x1 x3 − √ x2 x3 + x21 + x22 + x1 x2 4 4 2 4 2 2 2 2 2 2 2 1 2 1 2 1 2 2 2 2 + x1 + x2 + x3 − x1 x2 − √ x1 x3 + √ x2 x3 4 4 2 4 2 2 2 2

= x21 + x22 + x23 = ||x||2 . 2.4.3.15. Suppose x and y are any vectors in Rn and {q1 , ..., qn } is an o.n. set in Rn . Then from (2.19)(a) in Section 2.4, x = hx, q1 iq1 + ... + hx, qn iqn , so

D E hx, yi = hx, q1 iq1 + ... + hx, qn iqn , y = hx, q1 ihq1 , yi + ... + hx, qn ihqn , yi,

as we wanted to derive. 2.4.3.16. Q is real, and 1 = | I | = |QT Q| = |QT | |Q| = |Q| |Q| = |Q|2 , so |Q| = ±1.
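As a quick numerical sanity check of 2.4.3.11–2.4.3.13 (not part of the original solutions), the matrix B of Problem 2.4.3.13 can be verified to preserve inner products and norms; a minimal pure-Python sketch, with arbitrary test vectors:

```python
import math

# Columns of B are the orthonormal vectors from Problem 2.4.3.13.
B = [[1 / math.sqrt(3),  1 / math.sqrt(2),  1 / math.sqrt(6)],
     [1 / math.sqrt(3), -1 / math.sqrt(2),  1 / math.sqrt(6)],
     [1 / math.sqrt(3),  0.0,              -2 / math.sqrt(6)]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [1.0, -2.0, 3.0]
y = [0.5, 4.0, -1.0]

assert abs(dot(matvec(B, x), matvec(B, x)) - dot(x, x)) < 1e-12  # ||Bx|| = ||x||
assert abs(dot(matvec(B, x), matvec(B, y)) - dot(x, y)) < 1e-12  # <Bx, By> = <x, y>
```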

© Larry Turyn, January 2, 2014

2.4.3.17. Q^T = (I − 2qq^T)^T = I^T − 2(qq^T)^T = I − 2qq^T = Q, and so

Q^T Q = Q^2 = (I − 2qq^T)^2 = I − 4qq^T + 4(qq^T)(qq^T) = I − 4qq^T + 4q(q^T q)q^T = I − 4qq^T + 4qq^T = I,

using q^T q = 1.

2.4.3.18. Define P = q1 q1^T + q2 q2^T and note that it is an orthogonal projection by Theorem 2.19 in Section 2.3. So, P^2 = P and P^T = P. It follows that A = I − 2P satisfies A^T = (I − 2P)^T = I^T − 2P^T = I − 2P = A and

A^T A = (I − 2P)(I − 2P) = I − 2P − 2P + 4P^2 = I − 4P + 4P = I.

2.4.3.19. As suggested by Corollary 2.5 in Section 2.4, begin by using Theorem 1.41 in Section 1.7 to construct a basis for Col(A): Row reduce

A = [[1, −1], [2, 1]] ~ [[1, −1], [0, 3]] (−2R1 + R2 → R2),

so the two columns of A are its pivot columns.
Method 1: It follows that the two columns of A are a basis for R^2 and thus that Col(A) = R^2. Geometrically, the projection of R^2 onto all of itself is the identity matrix, I2 = [[1, 0], [0, 1]].
Method 2: Use the Gram-Schmidt process: Let

v1 := a1, r11 := ||v1|| = √5, q1 = r11^{−1} v1 = (1/√5)[1, 2]^T.

Next, let

v2 := a2 − (a2 • q1)q1 = [−1, 1]^T − (1/√5)(1/√5)[1, 2]^T = [−1, 1]^T − (1/5)[1, 2]^T = (3/5)[−2, 1]^T,

r22 := ||v2|| = 3√5/5, and

q2 = r22^{−1} v2 = (1/√5)[−2, 1]^T.

Using Corollary 2.5 in Section 2.4, we have that the orthogonal projection onto Col(A) is given by

PA = q1 q1^T + q2 q2^T = (1/5)[[1, 2], [2, 4]] + (1/5)[[4, −2], [−2, 1]] = [[1, 0], [0, 1]].

2.4.3.20. As suggested by Corollary 2.5 in Section 2.4, begin by using Theorem 1.41 in Section 1.7 to construct a basis for Col(A): Row reduce

A = [[1, −1], [2, −2]] ~ [[1, −1], [0, 0]] (−2R1 + R2 → R2),

so the first column of A is its only pivot column. Use the Gram-Schmidt process: Let

v1 := a1, r11 := ||v1|| = √5, q1 = r11^{−1} v1 = (1/√5)[1, 2]^T.

Using Corollary 2.5 in Section 2.4, we have that the orthogonal projection onto Col(A) is given by

PA = q1 q1^T = (1/√5)[1, 2]^T (1/√5)[1 2] = (1/5)[[1, 2], [2, 4]].

2.4.3.21. As suggested by Corollary 2.5 in Section 2.4, begin by using Theorem 1.41 in Section 1.7 to construct a basis for Col(A): Row reduce

A = [[1, 1, 0], [0, −1, 1], [2, 0, 2]] ~ [[1, 1, 0], [0, −1, 1], [0, −2, 2]] (−2R1 + R3 → R3) ~ [[1, 1, 0], [0, −1, 1], [0, 0, 0]] (−2R2 + R3 → R3),

so the first two columns of A are its only pivot columns. Use the Gram-Schmidt process: Let

v1 := a1, r11 := ||v1|| = √5, q1 = r11^{−1} v1 = (1/√5)[1, 0, 2]^T.

Next, let

v2 := a2 − (a2 • q1)q1 = [1, −1, 0]^T − (1/√5)(1/√5)[1, 0, 2]^T = [1, −1, 0]^T − (1/5)[1, 0, 2]^T = (1/5)[4, −5, −2]^T,

r22 := ||v2|| = √45/5, and

q2 = r22^{−1} v2 = (1/√45)[4, −5, −2]^T.

Using Corollary 2.5 in Section 2.4, we have that the orthogonal projection onto Col(A) is given by

PA = q1 q1^T + q2 q2^T = (1/5)[[1, 0, 2], [0, 0, 0], [2, 0, 4]] + (1/45)[[16, −20, −8], [−20, 25, 10], [−8, 10, 4]]
= (1/45)[[25, −20, 10], [−20, 25, 10], [10, 10, 40]].

2.4.3.22. Method 1: Because there exists Q^{−1} = Q^T, the unique solution of Qx = b is

x = Q^{−1} b = Q^T b = [q1^T; ...; qn^T] b = [q1^T b; ...; qn^T b],

using Theorem 1.11 in Section 1.2, that is,

x = [⟨b, q1⟩; ...; ⟨b, qn⟩] = Σ_{j=1}^{n} ⟨b, qj⟩ e^(j).

Method 2: Because there exists Q^{−1} = Q^T, Qx = b has exactly one solution. Define x = [x1 ... xn]^T. Using Lemma 1.3 in Section 1.7 and Corollary 2.6(a) in Section 2.4, Qx = b can be rewritten as

⟨b, q1⟩q1 + ... + ⟨b, qn⟩qn = b = Qx = x1 q1 + ... + xn qn,

so x1 = ⟨b, q1⟩, ..., xn = ⟨b, qn⟩. So, the only solution is

x = [x1; ...; xn] = [⟨b, q1⟩; ...; ⟨b, qn⟩] = Σ_{j=1}^{n} ⟨b, qj⟩ e^(j).

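The projections in 2.4.3.19–2.4.3.21 can be reproduced mechanically; the following sketch (my own helper routine, not the text's) runs the Gram-Schmidt process on the columns of the matrix from Problem 2.4.3.21 and rebuilds PA = q1 q1^T + q2 q2^T:

```python
import math

def gram_schmidt(cols):
    """Orthonormalize a list of columns, dropping linearly dependent ones."""
    basis = []
    for a in cols:
        v = a[:]
        for u in basis:
            c = sum(ai * ui for ai, ui in zip(a, u))   # <a, u>
            v = [vi - c * ui for vi, ui in zip(v, u)]
        nrm = math.sqrt(sum(vi * vi for vi in v))
        if nrm > 1e-10:                                # keep only independent columns
            basis.append([vi / nrm for vi in v])
    return basis

# Columns of A from Problem 2.4.3.21.
basis = gram_schmidt([[1.0, 0.0, 2.0], [1.0, -1.0, 0.0], [0.0, 1.0, 2.0]])
assert len(basis) == 2  # the third column is dependent, as the row reduction shows

# P = q1 q1^T + q2 q2^T, the orthogonal projection onto Col(A)
P = [[sum(u[i] * u[j] for u in basis) for j in range(3)] for i in range(3)]

expected = [[25, -20, 10], [-20, 25, 10], [10, 10, 40]]  # times 1/45
assert all(abs(P[i][j] - expected[i][j] / 45) < 1e-12
           for i in range(3) for j in range(3))
```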

Section 2.5.2

2.5.2.1. The system is Ax = b, where A = [[1, −1], [1, 2], [0, 1]] and b = [0, 1, 2]^T. The normal equations are A^T A x = A^T b, where

A^T A = [[1, 1, 0], [−1, 2, 1]] [[1, −1], [1, 2], [0, 1]] = [[2, 1], [1, 6]]

is invertible. There is only one l.s.s.:

x = (A^T A)^{−1} A^T b = (1/11)[[6, −1], [−1, 2]] [[1, 1, 0], [−1, 2, 1]] [0, 1, 2]^T = (1/11)[[6, −1], [−1, 2]] [1, 4]^T = (1/11)[2, 7]^T.

2.5.2.2. The system is Ax = b, where A = [[1, 2], [3, 4], [5, −1]] and b = [3, 5, 1]^T. The normal equations are A^T A x = A^T b, where

A^T A = [[1, 3, 5], [2, 4, −1]] [[1, 2], [3, 4], [5, −1]] = [[35, 9], [9, 21]]

is invertible. There is only one l.s.s.:

x = (A^T A)^{−1} A^T b = (1/654)[[21, −9], [−9, 35]] [[1, 3, 5], [2, 4, −1]] [3, 5, 1]^T = (1/654)[[21, −9], [−9, 35]] [23, 25]^T = (1/654)[258, 668]^T = (1/327)[129, 334]^T.
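The pattern shared by 2.5.2.1–2.5.2.5, namely form A^T A and A^T b and then solve the 2 × 2 normal equations, can be checked in exact arithmetic; a sketch using the data of Problem 2.5.2.1 (the helper code is mine, not the text's):

```python
from fractions import Fraction as F

# Data of Problem 2.5.2.1: A is 3 x 2, b is in R^3.
A = [[F(1), F(-1)], [F(1), F(2)], [F(0), F(1)]]
b = [F(0), F(1), F(2)]

# Normal equations: (A^T A) x = A^T b, a 2 x 2 system.
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
# Explicit 2 x 2 inverse applied to A^T b
x = [(AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1]) / det,
     (-AtA[1][0] * Atb[0] + AtA[0][0] * Atb[1]) / det]

assert AtA == [[2, 1], [1, 6]] and Atb == [1, 4]
assert x == [F(2, 11), F(7, 11)]  # matches x = (1/11)[2, 7]^T
```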





   4 −1 2 2.5.2.3. The system is Ax = b, where A =  4 −3  and b =  −1 . The normal equations are AT Ax = AT b, 2 −1 1 where     4 −1   4 4 2  36 −18 4 −3  = AT A = −1 −3 −1 −18 11 2 −1 is invertible. There is only one l.s.s.: T

x = (A A)

−1

1 A b= 72 T



11 18

18 36



4 −1

4 −3

2 −1





  2 11  −1  = 1 72 18 1

18 36



6 0



1 = 12



11 18

 .



   2 −1 2 2.5.2.4. The system is Ax = b, where A =  2 −2  and b =  −1 . The normal equations are AT Ax = AT b, −1 3 1 where       2 −1 2 2 −1  9 −9 2 −2 = AT A = −1 −2 3 −9 14 −1 3 is invertible. There is only one l.s.s.: T

x = (A A)

−1

1 A b= 45 T



14 9

9 9



2 −1

2 −2

−1 3





  2 14  −1  = 1 9 45 1

9 9



c Larry

1 3

 =

1 45



41 36

 .

Turyn, January 2, 2014


   3 1 1 2.5.2.5. The system is Ax = b, where A =  1 −2  and b =  1 . The normal equations are AT Ax = AT b, 4 0 1 where    1    1 1 1 0  2 −1 1 −2  = AT A = 1 −2 1 −1 6 0 1 

is invertible. There is only one l.s.s.: 1 x = (AT A)−1 AT b = 11



6 1

1 2



1 1

1 −2

0 1



  3 6  1 = 1 11 1 4 

  1 0 −1 0  and b =  2.5.2.6. The system is Ax = b, where A =  1 −1 0 1 −1 AT b, where     1 1 0 1 0 −1 1   1 −1 0 = AT A =  0 −1 −1 0 −1 0 1 −1 

is not invertible and



1 AT b =  0 −1

1 −1 0

1 2



4 5

 =

1 11



29 14

 .

 1 2 . The normal equations are AT Ax = 0 2 −1 −1

−1 2 −1

 −1 −1  2

    0 1 3 1   2  =  −2  −1 0 −1

We will use row reduction on the normal equations to find all of the l.s.s.:    1 2 −1 −1 | 3 ∼ T T 0 2 −1 | −2  [ A A | A b ] =  −1 −1 −1 2 | −1 R 1 ↔ R2 0 2R1 + R2 → R2 −R1 + R3 → R3 −R1 → R1  

0 −1 | 4/3 1 ∼  0 1 −1 | −1/3  0 0 0| 0 R 2 + R 3 → R3 1 R → R2 3 2 2R2 + R1 → R1

−2 3 −3

1| −3 | 3|

 2 −1  1

⇒ x3 is the only free variable ⇒ All of the l.s.s. are given by   4+t 1 −1 + t  , −∞ < t < ∞. x= 3 3t 2.5.2.7. Because Q and QT are both n × n, by Theorem 1.21 in Section 1.5, QT Q = In implies Q−1 exists and Q−1 = QT . It follows that QQT = In . 2    2.5.2.8. If the set of columns of m × n matrix Q is an o.n. set, then QQT = QQT QQT = Q QT Q QT =   T T Q(In )QT = QQT and QQT = QT QT = QQT , so QQT is a projection. 1 1 T 1 ... m ] . Because we are assuming that at least two of the 2.5.2.9. Take the hint and begin by defining y , [ m m xi ’s are distinct, we will be able to conclude that the set of vectors {x, y} is linearly independent: Without loss of generality, assume x1 and x2 are distinct (unequal). Then     1/m x1  1/m   x2      0 = α1 x + α2 y = α1  .  + α2  .   ..   ..  xm 1/m


would imply

0 = α1 x1 + α2 (1/m) and 0 = α1 x2 + α2 (1/m),

hence (⋆) α1 x1 = −α2 (1/m) = α1 x2, hence α1 (x1 − x2) = 0.
Because x1 ≠ x2, it follows that α1 = 0. Further, (⋆) implies −α2 (1/m) = α1 x1 = 0 · x1 = 0, hence α2 = 0. This explains why {x, y} is linearly independent.
Taking the further hint to use the Cauchy-Schwarz inequality, because {x, y} is linearly independent, the inequality is strict: writing x̄ := (1/m)(x1 + ... + xm),

|x̄| = | (1/m)(x1 + ... + xm) | = |⟨x, y⟩| < ||x|| ||y|| = √(x1^2 + ... + xm^2) · √( m · (1/m)^2 ) = (1/√m) √(x1^2 + ... + xm^2),

hence

(x̄)^2 = |x̄|^2 < (1/m)(x1^2 + ... + xm^2).
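The rank-deficient case of 2.5.2.6 can also be spot-checked: the normal equations there are singular, and every member of the one-parameter family of l.s.s. (parametrized as x = (1/3)[4 + 3t, −1 + 3t, 3t]^T) satisfies them. A sketch in exact arithmetic, assuming that parametrization:

```python
from fractions import Fraction as F

# Data of Problem 2.5.2.6: A^T A is singular, so there are infinitely many l.s.s.
A = [[1, 0, -1], [1, -1, 0], [0, 1, -1]]
b = [1, 2, 0]

AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(3)]
assert AtA == [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]] and Atb == [3, -2, -1]

# Every member of x = (1/3)[4 + 3t, -1 + 3t, 3t]^T solves A^T A x = A^T b.
for t in [F(0), F(1), F(-2), F(5, 3)]:
    x = [F(4, 3) + t, F(-1, 3) + t, t]
    lhs = [sum(AtA[i][j] * x[j] for j in range(3)) for i in range(3)]
    assert lhs == Atb
```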
n, and we define B = A(A^T A)^{−1} A^T. Then

B^2 = ( A(A^T A)^{−1} A^T ) ( A(A^T A)^{−1} A^T ) = A ( (A^T A)^{−1} (A^T A) ) (A^T A)^{−1} A^T = A (A^T A)^{−1} A^T = B,
hence B^2 = B.

2.6.3.14. The spectral decomposition (2.34) gives A = Σ_{i=1}^{n} λi qi qi^T. It follows that

A^2 = ( Σ_{i=1}^{n} λi qi qi^T ) ( Σ_{j=1}^{n} λj qj qj^T ) = Σ_{i=1}^{n} Σ_{j=1}^{n} λi λj qi (qi^T qj) qj^T = Σ_{i=1}^{n} Σ_{j=1}^{n} λi λj qi · (1 if i = j, 0 if i ≠ j) · qj^T = Σ_{i=1}^{n} λi^2 qi qi^T,

hence (2.38) is correct.

2.6.3.15. The spectral decomposition (2.34) gives A = Σ_{i=1}^{n} λi qi qi^T and we define

√A := Σ_{i=1}^{n} √λi qi qi^T.

We calculate that

(√A)^2 = ( Σ_{i=1}^{n} √λi qi qi^T ) ( Σ_{j=1}^{n} √λj qj qj^T ) = Σ_{i=1}^{n} Σ_{j=1}^{n} √λi √λj qi (qi^T qj) qj^T = Σ_{i=1}^{n} Σ_{j=1}^{n} √λi √λj qi · (1 if i = j, 0 if i ≠ j) · qj^T = Σ_{i=1}^{n} (√λi)^2 qi qi^T = Σ_{i=1}^{n} λi qi qi^T = A,

hence (2.39) is correct.

2.6.3.16. Method 1: For all x = [x1 x2]^T in R^2, we have

x^T A x = [x1 x2] [[1, α], [α, 1]] [x1, x2]^T = [x1 x2] [x1 + αx2, αx1 + x2]^T = x1^2 + 2αx1x2 + x2^2 = (x1 + αx2)^2 + (1 − α^2)x2^2.

It follows that (a) A is positive definite if, and only if, −1 < α < 1, that is, |α| < 1, and (b) A is positive semi-definite if, and only if, −1 ≤ α ≤ 1, that is, |α| ≤ 1.

Method 2: 0 = det( [[1 − λ, α], [α, 1 − λ]] ) = (1 − λ)^2 − α^2 ⇒ the eigenvalues of A are λ = 1 ± α. By Theorem 2.25 in Section 2.10, it follows that (a) A is positive definite if, and only if, |α| < 1, and (b) A is positive semi-definite if, and only if, |α| ≤ 1.

We could also establish positive definiteness using Theorem 2.31 in Section 2.6, after calculating that the principal minors are A1 = det([1]) = 1 and A2 = det( [[1, α], [α, 1]] ) = 1 − α^2.
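The spectral-decomposition identities (2.38) and (2.39) can be illustrated concretely; the 2 × 2 symmetric example below is mine, not from the text:

```python
import math

# A = sum_i lam_i q_i q_i^T with orthonormal q_i; a 2 x 2 example.
lam = [9.0, 4.0]
q = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def from_spectrum(coeffs):
    """Build sum_i c_i q_i q_i^T for given coefficients c_i."""
    return [[sum(c * qi[r] * qi[s] for c, qi in zip(coeffs, q)) for s in range(2)]
            for r in range(2)]

def matmul(X, Y):
    return [[sum(X[r][k] * Y[k][s] for k in range(2)) for s in range(2)] for r in range(2)]

A = from_spectrum(lam)
A2 = from_spectrum([l * l for l in lam])          # (2.38): A^2 = sum lam_i^2 q_i q_i^T
Root = from_spectrum([math.sqrt(l) for l in lam])  # (2.39): sqrt(A)

for r in range(2):
    for s in range(2):
        assert abs(matmul(A, A)[r][s] - A2[r][s]) < 1e-9
        assert abs(matmul(Root, Root)[r][s] - A[r][s]) < 1e-9
```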

p hx, xiW , where W is a real, symmetric, positive

c Larry

Turyn, January 2, 2014

page 60

The conclusions of Theorem 2.29 in Section 2.6 are the conclusions of Theorems 2.12 and 2.13 in Section 2.3 with hx, yiW and || x ||W replacing hx, yi and || x ||, respectively. Basically, the method of establishing these results is to apply Theorems 2.12 and 2.13’s properties concerning hx, yi and || x || to get properties for hx, yiW and || x ||W . Below, x and y are arbitrary vectors in Rn . Regarding Theorem 2.12(a) in Section 2.3, using W = W T we calculate hx, yiW = xT W T y = W x

T

  y = yT W x = yT W T x = yT W T x = hy, xiW .

Regarding Theorem 2.12(b) in Section 2.3, we calculate hαx, yiW = (αx)T W T y = αxT W T y = αhx, yiW . Regarding Theorem 2.12(c) in Section 2.3, we calculate hx1 + x2 , yiW = (x1 + x2 )T W T y = xT1 W T y + xT2 W T y = hx1 , yiW + hx2 , yiW . Regarding Theorem 2.12(d) in Section 2.3, we know that hx, xiW = xT W T x = xT W x is positive as long as x 6= 0, by the definition of W being positive definite; when x = 0, then hx, xiW = 0T W T 0 = 0. So, hx, xi ≥ 0, with equality only if x = 0. Regarding Theorem 2.13(a) in Section 2.3, || x ||2W = hx, xiW is true by our definitions of hx, yiW and || x ||W . Regarding Theorem 2.13(b) in Section 2.3, we calculate ||x + y||2W = hW (x + y), x + yi = hW x, xi + 2hW x, yi + hW y, yi = || x ||2W + || y ||2W + 2hx, yiW . Regarding Theorem 2.13(c) in Section 2.3, we know from Theorem 2.12(d) in Section 2.3 that xT W T x ≥ 0, with equality only if x = 0, so √ √ ||x||W = xT W T x ≥ 0 = 0, with equality only if x = 0. Regarding Theorem 2.13(d) in Section 2.3, we calculate √ √ p p ||αx||W = (αx)T W T (αx) = α2 (xT W T x) = α2 xT W T x = |α| || x ||W . √ Regarding Theorem 2.13(e) in Section 2.3, using the existence of W that satisfies √ W

2

= W = WT =

 √  2 T W ,

we calculate  T √  T √ T 2 T √ T √ T √ |hx, yiW | = |x W y| = x W y = x W W y = W x W y T

T

√ √ ≤ || W x|| || W y||, using the Cauchy-Schwarz inequality, that is, Theorem 2.13(e) in Section 2.3. Continuing, we have q √ √ √ T √  q √ T √  Wx Wx · Wy Wy |hx, yiW | ≤ || W x|| || W y|| = =

q q q q √ T √ √ T √ √ T √ T √ T √ T xT W W x · yT W W y = xT W W x · yT W W y √ =

xT W T x ·

p yT W T y = || x ||W || y ||W .

Regarding Theorem 2.13(f) in Section 2.3, we calculate ||x + y||W =

p

hx + y, x + yiW =

q √ =

W (x + y)

T √

q p √ T √ T (x + y)W T (x + y) = (x + y) W W (x + y)

W (x + y) =

q √ √ √ T √ W x + W y ( W x + W y)

c Larry

Turyn, January 2, 2014

page 61

√ √ √ √ = W x + W y ≤ W x + W y , using Theorem 2.13(f) in Section 2.3. Continuing, we have q √ √ √ q √ T √ T √ ||x + y||W ≤ W x + W y = Wx Wx · Wy Wy =

q q q q √ T √ √ T √ √ T √ T √ T √ T xT W W x + yT W W y = xT W W x + yT W W y √ =

xT W T x +

p yT W T y = || x ||W + || y ||W .

−2 T 2.6.3.19. Let W = diag(b−2 1 , ..., bm ) = W . Then the relative squared error is

 m  X i=1

(Ax)i − bi bi

*

2 =

  

(Ax)1 −b1 b1

.. .

(Ax)m −bm bm

    ,  

(Ax)1 −b1 b1

.. .

(Ax)m −bm bm



 +

  

* =

  

(Ax)1 −b1 b2 1

.. .

(Ax)m −bm b2 m

   + (Ax)1 − b1    .. ,   . (Ax)m − bm

−2 T T 2 = diag(b−2 1 , ..., bm )(Ax − b), (Ax − b) = hW (Ax − b), (Ax − b)i = (Ax − b) W (Ax − b) = ||Ax − b||W . So, the problem of minimizing the relative squared error is a weighted least squares problem. √ √ 2.6.3.20. Define C , M −1 K M −1 , where both M and K are√real, symmetric, and positive definite. Because M is real, symmetric, and positive definite, so is M , as defined in formula (2.39). It follows from √ −1 formula (2.37) that M is also real, symmetric, and positive definite. Because, in addition, K is real, symmetric, and positive definite, it follows that C is real, and √ √ √ √ √ √ √ √ C T = ( M −1 K M −1 )T = ( M −1 )T K T ( M −1 )T = ( M −1 )T K( M −1 )T = ( M −1 )K( M −1 ) = C, so C is symmetric. Also, C is positive definite, because for all x 6= 0, xT Cx = xT

√ T  √  √ −1 √ −1 √ √ M K M x = xT ( M −1 )T K( M −1 )x = ( M −1 )x K ( M −1 )T x > 0,

√ √ because K is positive definite and (√ M −1 )x 6= 0. [The latter follows from the following reasoning: ( M −1 ) is itself invertible, so the only solution of ( M −1 )x = 0 is x = 0, but we assumed that x 6= 0.]  2.6.3.21. Exs.: Q1 =

1 0

0 1

  , Q2 =

0 −1

−1 0



2.6.3.22. If λ is an eigenvalue of Q, then problem 2.4.3.12(b) yields |λ| = 1. But Q being real and symmetric implies λ is real. The only real numbers whose absolute value is 1 are ±1.
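The identity of 2.6.3.19, that the relative squared error equals || Ax − b ||_W^2 with W = diag(b1^{−2}, ..., bm^{−2}), is easy to confirm numerically; a minimal sketch with made-up data:

```python
# Relative squared error as a weighted norm, as in Problem 2.6.3.19.
A = [[1.0, -1.0], [1.0, 2.0], [0.0, 1.0]]
b = [2.0, 1.0, 4.0]
x = [1.0, 0.5]

Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]
r = [Ax[i] - b[i] for i in range(3)]      # residual Ax - b
W = [1 / (bi * bi) for bi in b]           # diagonal of W = diag(b_1^-2, ..., b_m^-2)

relative_sq_error = sum(((Ax[i] - b[i]) / b[i]) ** 2 for i in range(3))
weighted_norm_sq = sum(W[i] * r[i] * r[i] for i in range(3))  # ||Ax - b||_W^2

assert abs(relative_sq_error - weighted_norm_sq) < 1e-12
```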

c Larry

Turyn, January 2, 2014

page 62

Section 2.7.7

2.7.7.1. Use the Gram-Schmidt process on the columns of A: Let a1 = [1, 1, 0]^T and a2 = [0, 1, 1]^T. Then

v1 := a1, r11 := ||v1|| = √2, q1 = r11^{−1} v1 = (1/√2)[1, 1, 0]^T.

Next, let

v2 := a2 − (a2 • q1)q1 = [0, 1, 1]^T − (1/√2)(1/√2)[1, 1, 0]^T = [0, 1, 1]^T − (1/2)[1, 1, 0]^T = (1/2)[−1, 1, 2]^T,

r22 := ||v2|| = √6/2, and

q2 = r22^{−1} v2 = (1/√6)[−1, 1, 2]^T.

We have

Q = [q1 q2] = (1/√6)[[√3, −1], [√3, 1], [0, 2]]

and, using r12 := a2 • q1 = 1/√2,

R = [[r11, r12], [0, r22]] = [[√2, 1/√2], [0, √6/2]] = (1/√2)[[2, 1], [0, √3]].

2.7.7.2. Use the Gram-Schmidt process on the columns of A: Let a1 = [1, −1, 1, 0]^T and a2 = [0, 1, 2, 1]^T. Then

v1 := a1, r11 := ||v1|| = √3, q1 = r11^{−1} v1 = (1/√3)[1, −1, 1, 0]^T.

Next, let

v2 := a2 − (a2 • q1)q1 = [0, 1, 2, 1]^T − (1/√3)(1/√3)[1, −1, 1, 0]^T = [0, 1, 2, 1]^T − (1/3)[1, −1, 1, 0]^T = (1/3)[−1, 4, 5, 3]^T,

r22 := ||v2|| = √51/3, and

q2 = r22^{−1} v2 = (1/√51)[−1, 4, 5, 3]^T.

We have

Q = [q1 q2] = (1/√51)[[√17, −1], [−√17, 4], [√17, 5], [0, 3]]

and, using r12 := a2 • q1 = 1/√3,

R = [[r11, r12], [0, r22]] = [[√3, 1/√3], [0, √51/3]] = (1/√3)[[3, 1], [0, √17]].
page 63

   0 1 2.7.7.3. Use the Gram-Schmidt process on the columns of A: Let a1 =  1  and a2 =  −2 . Then 1 1   1 √ 1 −1 v1 , a1 , r11 , ||v1 ||= 3, q1 = r11 v1 = √  1  . 3 1 

Next, let              0 0 0 1 1 1 1 −1 1 1 √ 1 v2 , a2 − (a2 • q1 )q1 =  −2  −  −2  • √  1  √  1 =  −2  − √ 3 1 3 1 3 3 1 1 1 1   1 1 −5  , = 3 4 



r22 , ||v2 || =

42 , 3

and q2 =

−1 r22 v2 =

We have Q=[

q1 pp

and, using r12 , a2 • q1 ,  R=

r11 0

r12 r22



  1 1  √ −5  . 42 4

 √ 14 1  √ q2 ] = √ √14 42 14

"√ 3 = 0

√ −1 # − √3 42 3

 1 −5  4

1 = √ 3



 √−1 . 14

3 0



  1 0  −1   1    2.7.7.4. Use the Gram-Schmidt process on the columns of A: Let a1 =  , a =  0 2 −1 0 0 Then   1 √ 1  −1  −1  . v1 , a1 , r11 , ||v1 ||= 2, q1 = r11 v1 = √  0 2 0





 0    , and a3 =  0 .   1 −1

Next, let 

  0 0  1   1   v2 , a2 − (a2 • q1 )q1 =   −1  −  −1 0 0





  1 1   −1  1  −1 1 • √   √   2  0  2  0 0 0





  0 1     1   −1 −1 1 =     −1  − √2 √2  0 0 0

   



 1 1 1  , =  2  −2  0 √

r22 , ||v2 || =

6 , 2

and 

 1  1 1  −1 . q2 = r22 v2 = √   −2  6 0 Next, let v3 , a3 − (a3 • q1 )q1 − (a3 • q2 )q2

c Larry

Turyn, January 2, 2014

page 64

            0 1 1 1 1 0 0  −1  1  −1   0   1  1  1  0   0  1 1             =  1  −  1  • √2  0  √2  0  −  1  • √6  −2  √6  −2 −1 0 0 0 0 −1 −1         0 1 1 1      0 0 −2 1 1  1  −1  1 1      ,    √  √  − √ = − √ =  1 0 1  3 2 2 6 6 −2  −1 0 0 −3 

  . 



r22 , ||v2 || =

12 , 3

and  1 1  1  −1 . q2 = r22 v2 = √  12  1  −3 

We have

√ √6 1  − 6 q3 ] = √  0 12  0

√ √2 √2 −2 2 0



Q = [ q1 pp q2

p p

and, using r12 , a2 • q1 , r13 , a3 • q1 , and r23 , a3 • q2 ,  √    √2 − √1 0 2 r11 r12 r13 2        √       6 r22 r23  R= 0 − √26  = 0  0 = 2         √ 12 0 0 r33 0 0 0 3

− √12

0

√ √3 2

− √26

0

√2 3

 1 1   1  −3  √ 2 3    1   0 = √   6   0 

√ − 3 3 0

0



  −2  .  √ 2 2

2.7.7.5. We are given that A = [a1 . . . an ], where {a1 , . . . , an } is an orthogonal set in Rm . It follows that {a1 , . . . , an } is linearly independent, so the Gram-Schmidt process can be used to find the QR factorization of A: −1 v1 , a1 , r11 , ||v1 ||, q1 = r11 v1 =

v2 , a2 − (a2 • q1 )q1 = a2 − (a2 •

1 a1 , ||a1 ||

1 a1 )q1 = a2 − 0 · q1 ||a1 ||

hence −1 v2 = a2 , r22 , ||v2 ||, q2 = r22 v2 =

1 a2 , ||a2 ||

.. . vn , an − (an • q1 )q1 − ... − (an • qn−1 )qn−1 = an − 0 · q1 − ... − −0 · qn−1 = an , 1 −1 an . rnn , ||vn ||, qn = rnn vn = ||an || So,  Q=

1 a1 ||a1 ||

p p

...

p p

1 an ||an ||



and R = diag( ||a1 ||, ..., ||an || ). 2.7.7.6. We are given that A = [a1

...

an ], where {a1 , . . . , an } is an o.n. set in Rm .

Method 1 : Because the columns of A are already an o.n. set, the set {q1 , . . . , qn } = {a1 , . . . , an } will be the result of the Gram-Schmidt process. It follows that Q = A, and therefore R = In . Method 2 : It follows that {a1 , . . . , an } is linearly independent, so the Gram-Schmidt process can be used to find the QR factorization of A −1 v1 , a1 , r11 , ||v1 || = 1, q1 = r11 v1 =a1 , c Larry

Turyn, January 2, 2014

page 65

v2 , a2 − (a2 • q1 )q1 = a2 − (a2 • a1 )a1 = a2 − 0 · a1 hence −1 v2 = a2 , r22 , ||v2 || = 1, q2 = r22 v2 =a2 ,

.. . vn , an − (an • a1 )a1 − ... − (an • an−1 )an−1 = an − 0 · a1 − ... − −0 · an−1 = an , −1 rnn , ||vn || = 1, qn = rnn vn =an .

So,  Q = a1

p p

p p

...

 an = A

and R = diag( 1, ..., 1 ) = In . 2.7.7.7. Because A  is invertible it must be square, say n × n. Because A = a1 pp ... pp an is a real, invertible, upper triangular matrix, it follows that {a1 , . . . , an } is linearly independent. The Gram-Schmidt process can be used to find the QR factorization of A: v1 , a1 = a11 e(1) , r11 , ||v1 || = |a11 |, and −1 q1 = r11 v1 =

1 a1 = sgn(a11 ) e(1) . |a11 |

Next, a2 = a12 e(1) + a22 e(2) , so   2 v2 , a2 − (a2 • q1 )q1 = a2 − sgn(a11 )a12 sgn(a11 ) e(1) = a2 − sgn(a11 ) a12 e(1) = a2 − a12 e(1) hence v2 = a22 e(2) , r22 , ||v2 || = |a22 |, and −1 q2 = r22 v2 =

1 a22 e(2) = sgn(a22 ) e(2) . |a22 | .. .

an = a1n e(1) + ... + ann e(n) , so vn , an − (an • q1 )q1 − ... − (an • qn−1 )qn−1    = an − sgn(a11 ) a1n (sgn(a11 ) e(1) ) − ... − sgn(an−1,n−1 ) a12 sgn(an−1,n−1 ) e(n−1) = ... = ann e(n) , rnn , ||vn || = |ann |, and −1 qn = rnn vn =

1 ann e(n) = sgn(ann ) e(n) . |ann |

So, h Q = sgn(a11 ) e(1)

p p

...

i sgn(ann ) e(n) ,

p p

that is,   Q = diag sgn(a11 ), ..., sgn(ann ) , and



|a11 |  0   . R=  .   . 0

a12 sgn(a11 ) |a22 |

.

.

.

. . .

.

. .

 a1n sgn(a11 ) a2n sgn(a22 )    . .  .   . |ann |

Note that the diagonal elements of R are ||vj || > 0, j = 1, ..., n.

c Larry

Turyn, January 2, 2014

page 66

2.7.7.8. The problem suggests that we start by looking for a matrix Q and a vector b for which QQT b 6= b. This is not possible if m = n. Also, ultimately we will be dealing with a matrix A whose QR factorization is QR, and this is possible only if m ≥ n. So, √ we should  look for an example with m > n. 2 1 √ The matrix Q = √16  √2 −2  has its two columns being an o.n. set of vectors in R3 . We calculate 2 1  √    √ √   √ 2 1 3 0 3 √ 1 1 1 2 2 2 QQT = √  √2 −2  √ =  0 6 0 . 1 −2 1 6 6 6 3 0 3 2 1 We want to find a vector b = [ b1

b2

b3 ] with QQT b 6= b, that is,   −b1 + b3 1 T , 0 0 6= (QQ − I)b = 2 b1 − b3

so any vector b with b1 6= b3 will do. For example, let us choose b = [ 1 0 0 ] Ultimately, we will need x, the solution of Rx = QT b, to have Ax 6= b. Because A = QR and x = R−1 QT b, we would need to have b 6= Ax = (QR)(R−1 QT b) = QQT b. So, let  us choose a 2 × 2, real, invertible, upper triangular matrix R having r11 > 0 and r22 > 0, for example, 2 −1 1 √ R = √2 . Then 0 2 2 √   √   √   2√2 2 1 √2 1 1 1  √ 2 −1 √  2 2 −5 2  , √ √ = A = QR = √ 2 −2 √ √ √ 6 2 0 2 2 12 2 2 2 1 2 that is,  2 1  2 A= √ 6 2

 1 −5  , 1

and x = R−1 QT b =

 √

2 0

√ −1 √ 1 −1/ 2 2 √ 2 1 6



  √  1  1 1 2 2  0 = √ · √ 1 6 2 2 0 0 √   1 5/√2 = √ , 2 4 3 2 −2

√  √  1/√2 2 1 2

that is, 1 x= √ 4 6 We have

 2 1  2 Ax = √ 6 2



5 2

 .

    1      1 12 1 2 1 1 5  0 = 0 = −5  √ = 6  0  = b, 24 1 4 6 2 1 0 12 2

even though Rx = QT b, as x was designed to satisfy. 2.7.7.9. Take the hint, and begin by using Rx = QT b to see that Ax = Q(Rx) = Q(QT b) = (QQT )b. Next, use b = Qc to see that Ax = (QQT )(Qc) = Q(QT Q)c) = Q(I)c = Qc = b. So, yes, x = R−1 QT b is guaranteed to satisfy the original system, Ax = b, as long as b = Qc for some c and A = QR is the QR factorization of A. c Larry

Turyn, January 2, 2014

page 67



2/3

  2.7.7.10. We see that A = QR is the QR factorization with Q =   2/3  1/3 

√  1/ 2   6 √   −1/ 2  and R =   0 0 

−3



 √ . Then the 2

2 unique solution (and thus the unique l.s.s.) of the system Ax = b =  −1  is given by 1  x=R

−1

6

−3

0

√ 2

T

Q b=

−1  

2/3 √ 1/ 2

 √   2 2 1 1/3  −1  = √  0 6 2 1 0

2/3 √ −1/ 2

3



1





√ 3/ 2



−3



6

√     11/ 2 11 1  1 =  . = √ √ 12 6 2 18 18/ 2



√  1/ 5   3   0   and R =  0 √ 2/ 5

2/3

  2.7.7.11. We see that A = QR is the QR factorization with Q =   

2/3 −1/3 

 √ . Then the 5

 2 unique solution (and thus the unique l.s.s.) of the system Ax = b =  −1  is given by 1 

3

−3

−1

2/3

2/3

√ 5

 

0

√ 1/ 5

0

x = R−1 QT b = 

 √ 2 5 1  −1  = √  √ 3 5 1 2/ 5 0 −1/3



3 3



1/3





√ 4/ 5



√     41 41/(3 5) 1  1  = . = √ √ 45 3 5 36 12/ 5 2.7.7.12. We see that A = U ΣV T is the SVD factorization with √  1/ 2   0 U = [ u1 . . . u3 ] =    √ −1/ 2 



1 0 

| | | | |

     0  =0 −   0 0



1

0

  v3T ]T =  0 

√ 1/ 2

σ1

0

  Σ=  0  − 0

σ2 −− 0

0

√  1/ 2   0  ,  √ 1/ 2

0

3

0 2 0

0



  0 ,  0

and V T = [ v1T

...

√ −1/ 2

0

0



 √  1/ 2  .  √ 1/ 2

(a) Then then all l.s.s. of the system 

 2 Ax = b =  −1  1

c Larry

Turyn, January 2, 2014

page 68

are given by (2.65) in Section 2.7, that is, x=

2 X

(σi−1 uTi b)vi +

3 X

ci v i = x ? +

i=3

i=1

3 X

ci vi = (σ1−1 uT1 b)v1 + (σ2−1 uT2 b)v2 + c3 v3 ,

i=3

where c3 is an arbitrary constant. So, in this problem, all of the l.s.s. are given by             2 1 2 h i h √ √ i √0 √0 −1 −1 x = 3 1/ 2 0 − 1/ 2  −1   0  + 2 0 1 0  −1   1/√2  + c3  −1/√2  1 0 1 1/ 2 1/ 2         0 1 2 √0 1   1 √  1 0 − 1/√2 + c3  −1/√2  = √  −3 − 6c3  , = √ 2 3 2 0 6 2 −3 + 6c 1/ 2 1/ 2 3 where c3 is an arbitrary constant. (b) Because {v1 , ..., v3 } is an o.n. set of vectors, the l.s.s of minimum norm is given by   2 1  ? −3  . x=x = √ 6 2 −3 (c) The Moore-Penrose generalized inverse of A is given by (2.66), that is,     2 1 h √ X √ i √0 h + −1 T −1  −1 0  1/ 2 0 − 1/ 2 + 2  1/√2  0 A = σi vi ui = 3 i=1 0 1/ 2  √ 1/ 2 1 = 0 3 0

0 0 0

√   0 −1/ 2 1 0+ 0 2 0 0

√0 1/√2 1/ 2

  0 2 1 0= √ 0 6 2 0 0

0 3 3

1

0

i

 −2 0 . 0

2.7.7.13. We did not mention the case m < n before Theorem 2.36 in Section 2.7 because it is impossible for the columns of an m × n matrix to be linearly independent if there are more columns than rows in the matrix. This is because of Corollary 1.3 in Section 1.7.

2.7.7.14. Method 1: Recall from problem 2.4.3.12(a) that "(multiplication by) an orthogonal matrix preserves lengths of vectors," that is, || Qx || = || x || for all x. So, for all x in R^n, hence for all Rx,

|| Ax ||^2 = || Q(Rx) ||^2 = || Rx ||^2.

Method 2: For all x in R^n,

|| Ax ||^2 = ⟨Ax, Ax⟩ = (Ax)^T (Ax) = x^T A^T A x = x^T (QR)^T QR x = x^T R^T (Q^T Q) R x = x^T R^T R x = (Rx)^T (Rx) = || Rx ||^2.

2.7.7.15. To start the Gram-Schmidt process, let v1 := a1, r11 := ||v1|| = √(a1 • a1) = √2, and

q1 = r11^{−1} v1 = (1/√2) a1.

Next, let

v2 := a2 − (a2 • q1)q1 = a2 − ( (1/√2)(a2 • a1) ) (1/√2) a1 = a2 − (1/2)(−1) a1 = a2 + (1/2) a1.

So

r22^2 := ||v2||^2 = ⟨a2 + (1/2)a1, a2 + (1/2)a1⟩ = ⟨a2, a2⟩ + ⟨a2, a1⟩ + (1/4)⟨a1, a1⟩ = 2 + (−1) + (1/4) · 2 = 3/2.

We have r22 = √3/√2 = √6/2, and

q2 = r22^{−1} v2 = √(2/3)( a2 + (1/2)a1 ) = (1/√6)( a1 + 2a2 ).

The QR factorization is A = QR, where

Q = [ (1/√2) a1  (1/√6)(a1 + 2a2) ] and R = [[√2, −1/√2], [0, √(3/2)]].
2.7.7.16. We are given that the m × n matrix A = QR, where the columns of Q are an o.n. set and R is an invertible, upper triangular matrix. Suppose that b is any vector in R^m and x is any l.s.s. of Ax = b. Then x must satisfy the normal equations A^T A x = A^T b, that is, (QR)^T QR x = (QR)^T b, that is,

R^T R x = R^T (Q^T Q) R x = (QR)^T QR x = R^T Q^T b.

But R being invertible implies so is R^T, so this implies

Rx = (R^T)^{−1} R^T R x = (R^T)^{−1} R^T Q^T b = Q^T b, hence x = R^{−1} R x = R^{−1} Q^T b.

So, there is exactly one l.s.s. and it is x = R^{−1} Q^T b.
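The formula x = R^{−1} Q^T b amounts to one back substitution; a sketch (helper code mine) checking it against the data and answer of Problem 2.7.7.10:

```python
import math

# Data of Problem 2.7.7.10: A = QR with the factors below, b = [2, -1, 1]^T.
Q = [[2 / 3, 1 / math.sqrt(2)], [2 / 3, -1 / math.sqrt(2)], [1 / 3, 0.0]]
R = [[6.0, -3.0], [0.0, math.sqrt(2)]]
b = [2.0, -1.0, 1.0]

Qtb = [sum(Q[k][i] * b[k] for k in range(3)) for i in range(2)]

# Back substitution on R x = Q^T b
x2 = Qtb[1] / R[1][1]
x1 = (Qtb[0] - R[0][1] * x2) / R[0][0]

assert abs(x1 - 11 / 12) < 1e-12  # x = (1/12)[11, 18]^T
assert abs(x2 - 18 / 12) < 1e-12
```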



1 1

−1 1



1 −1

1 1



 =

2 0

0 2

 .

(1) It is easy to see that the eigenvalues of B = 2I2 are λ1 = λ2 = 2 and the corresponding eigenvectors = 1 = e √ √ √ are v√ (2) [1 0] and v2 = e = [0 1], respectively. It follows that σ1 = σ2 = 2, Σ = S = diag( 2, 2) = 2I2 , and V = [ e(1) e(2) ] = I2 . Using this we see that   √ −1 1 1 1 1 U1 =AV1 S −1 =AI2 2 I2 = √ A = √ . 2 2 −1 1 Because A is 2 × 2 and U1 is 2 × 2, U = U1 , that is, we do not need to find additional columns in the orthogonal matrix U .   √  1 1 2 √0 To summarize, the SVD factorization is A = U ΣV T , where U = √12 ,Σ= , and V T = I2 . −1 1 0 2

2.7.7.18. We calculate 

1 B=A A= 0 −5 T

0 −2 0

Next, find the eigenvalues of B: 26 − µ 0 −10 0 4−µ 0 0 = −10 0 26 − µ

 −5 1 0  0 1 −5

0 −2 0

= (4 − µ) 26 − µ −10

  −5 26 0 . =  0 1 −10

0 4 0

 −10 0 . 26

 −10 = (4 − µ) (26 − µ)2 − 100 26 − µ

  = (4 − µ) (26 − µ) − 10 ) (26 − µ) + 10 The eigenvalues of B are µ1 = 36, µ2 = 16, µ3 = 4. Correspondingly, σ1 = 6, σ2 = 4, and σ3 = 2. Next, find the eigenvectors of B corresponding to its eigenvalues: c Larry

Turyn, January 2, 2014

page 70

  

0 1 | 0 −10 0 −10 | 0 1 1 0 −32 0| 0  ∼  0 [ B − µ1 I | 0 ] =  1 0 | 0  , after −R1 + R3 → R3 , − 10 R1 → R1 , 0 0 0 | 0 −10 0 −10 | 0 1 − 32 R 2 → R2 .   −1 (1) ⇒ p =  0  is an eigenvector corr. to B’s eigenvalue µ1 = 36. 1   −1 ⇒ v1 = √12  0  is a normalized eigenvector corr. to B’s eigenvalue µ1 = 36. 1    

0 −1 | 0 10 0 −10 | 0 1 1 0 | 0  , after R1 + R3 → R3 , 10 0 −12 0| 0  ∼  0 [ B − µ2 I | 0 ] =  R1 → R 1 , 1 0 0 0 | 0 −10 0 10 | 0 1 R 2 → R2 . − 12   1 ⇒ p(2) =  0  is an eigenvector corr. to B’s eigenvalue µ2 = 16. 1   1 ⇒ v2 = √12  0  is a normalized eigenvector corr. to eigenvalue µ2 = 16. 1     22 0 −10 | 0

1 0 0 | 0 1 0 0 0| 0 ∼ 0 0 [ B − µ3 I | 0 ] =  1 | 0  , after 22 R1 → R1 , 10R1 + R2 → R2 , R2 ↔ R3 , −10 0 22 | 0 0 0 0 | 0 11 5 R → R2 , 11 R 2 + R1 → R 1 . 192 2   0 ⇒ p(3) =  1  is an eigenvector corr. to B’s eigenvalue µ3 = 4. 0   0 ⇒ v3 =  1  is a normalized eigenvector corr. to B’s eigenvalue µ3 = 4. 0 Because all of the σ’s are positive, we have √ √   −1/ 2 1/ 2 0 0√ 0√ 1 . V = V1 = [v1 v2 v3 ] =  1/ 2 1/ 2 0 

Because all of the σ’s are positive, we have    −1 1 0 −5 1 −1 0 √  0 U = AV Σ = 0 −2 2 −5 0 1 1

Σ = S = diag(6, 4, 2). Using this we see that  1    1 −6 1 √0 0 0 1 0 −5 6 1 1 0  0 2  0 4 0  = √  0 −2 0 2 −5 1 0 0 12 0 1 1 0 6

 −1 1  = √ 0 2 1 T

−1 0 −1

 0 √ − 2 . 0 

To summarize, the SVD factorization is A = U ΣV , where U = 

6 Σ=0 0

0 4 0

 0 0 , and V T = 2

 1 √ 2

−1  1 0

0 √0 2

1 √ 2

−1  0 1

−1 0 −1

1 4

0 1 4

0

√ 2 2

 

0

 √0 − 2 , 0

 1 1 . 0

c Larry

Turyn, January 2, 2014

page 71

2.7.7.19. We calculate T



B =A A =

√ 2 4/√5 1 −8/ 5

2 1





2√  4/ 5 2

  56 1√ 5 −8/ 5  =  − 12 1 5

− 12 5 74 5

 .

Next, find the eigenvalues of B:  56  −µ − 12    5 5 = 56 − µ 74 − µ − 144 = µ2 − 130 µ + 4000 = µ2 − 26µ + 160 0 = 5 5 25 5 25 74 − 12 −µ 5 5 = (µ − 16)(µ − 10). √ The eigenvalues of B are µ1 = 16 and µ2 = 10. Correspondingly, σ1 = 4 and σ2 = 10. Next, find the eigenvectors of B corresponding to its eigenvalues:  24    −5 − 12 | 0 1 5 | 0 1 5 2 ∼ | [ B − µ1 I | 0 ] =  , after − 12 R1 + R2 → R2 , − 24 R1 → R1 . 0 0 | 0 12 6 −5 −5 | 0   −1 (1) ⇒p = is an eigenvector corr. to B’s eigenvalue µ1 = 16. 2   −1 is a normalized eigenvector corr. to B’s eigenvalue µ1 = 16. ⇒ v1 = √15 2   6   − 12 | 0 5 5 1 −2 | 0 ∼ | , after 2R1 + R2 → R2 , 56 R1 → R1 . [ B − µ2 I | 0 ] =  0 0 | 0 24 | 0 − 12 5 5   2 ⇒ p(2) = is an eigenvector corr. to B’s eigenvalue µ2 = 10. 1   2 is a normalized eigenvector corr. to eigenvalue µ2 = 10. ⇒ v2 = √15 1 Because all of the σ’s are positive, we have   1 −1 2 V = V1 = [v1 v2 ] = √ . 2 1 5   S Because all of the σ’s are positive, we have that the 3 × 2 matrix Σ =  − − , where 00 √ S = diag(4, 10).   4 √ 0 So, Σ =  0 10 . Using this we see that 0 0 

U1 = AV1 S −1

2√ = 4/ 5 2

  1√ 1 −1  −8/ 5 √ 2 5 1 

0 1  √ = √ − 5 5 0

2 1



√5 10

1 4

0

0



√1 10



 0   0  = −1 √5 0 10

The 3 × 3 real, orthogonal matrix U = [ U1 pp U2 ] = Appendix to Section 2.4: First, calculate    1 w3 = e(1) − he(1) , u1 iu1 − he(1) , u2 iu2 =  0  −  0

[ u1

u2

p p

 2 1  √ = √ 4/ 5 5 2

 1 −4 1√ −8/ 5  2 1 4

√2 10 √1 10

 

√  1/ 2  √0 . 1/ 2 u3 ], where we can find u3 by the process in the

           1 0 0 1 1 1 1 1 0  •  −1   −1  −  0  • √  0  √  0  2 1 2 1 0 0 0 0

c Larry

Turyn, January 2, 2014

page 72

         0 1 1 1 1 1 1 √  0  =  0 , =  0  − (0)  −1  − √ 2 2 2 1 0 0 −1 

||w3 || =

√ 2 , 2

and finally   1 1  0 . u3 = √ 2 −1

To summarize, the SVD factorization is    1 4 √0 1 T 1 A = U ΣV , where U = √2  − 2 0 0 , Σ =  0 0 1 −1 0

 √ 0 10 , and V T = 0

1 √ 5



−1 2

 2 . 1

2.7.7.20. We calculate

B = A^T A = (1/(3√2))[ 3√5 0 3√5 ; √5 −16 √5 ; 2√10 4√2 2√10 ] · (1/(3√2))[ 3√5 √5 2√10 ; 0 −16 4√2 ; 3√5 √5 2√10 ]
= (1/18)[ 90 30 60√2 ; 30 266 −44√2 ; 60√2 −44√2 112 ] = (1/9)[ 45 15 30√2 ; 15 133 −22√2 ; 30√2 −22√2 56 ] ≜ (1/9)G.

Note that (1) the eigenvalues of B are of the form µ = (1/9)γ, where γ is an eigenvalue of G, and (2) the corresponding eigenvectors of B are the same as the corresponding eigenvectors of G. We will work for a while with G in order to avoid tedious work with fractions.
Next, find the eigenvalues of G. Doing −2√2 R2 + R3 → R3, factoring (144 − γ) out of the new third row, and then expanding along that row:

0 = det(G − γI) = | 45−γ 15 30√2 ; 15 133−γ −22√2 ; 30√2 −22√2 56−γ | = | 45−γ 15 30√2 ; 15 133−γ −22√2 ; 0 −288√2+2√2γ 144−γ |
= (144−γ) | 45−γ 15 30√2 ; 15 133−γ −22√2 ; 0 −2√2 1 | = (144−γ)[ 2√2( (45−γ)(−22√2) − 450√2 ) + ( (45−γ)(133−γ) − 225 ) ]
= (144−γ)( −3960 + 88γ − 1800 + 5985 − 178γ + γ² − 225 ) = (144−γ)(γ² − 90γ) = γ(144−γ)(γ−90).

The eigenvalues of G are γ1 = 144, γ2 = 90, and γ3 = 0, so the eigenvalues of B are µ1 = 16, µ2 = 10, and µ3 = 0. Correspondingly, σ1 = 4, σ2 = √10, and σ3 = 0.
Next, find the eigenvectors of G corresponding to its eigenvalues:

[ G − γ1 I | 0 ] = [ −99 15 30√2 | 0 ; 15 −11 −22√2 | 0 ; 30√2 −22√2 −88 | 0 ] ~ [ 1 −11/15 −22√2/15 | 0 ; 0 −864/15 −1728√2/15 | 0 ; 0 0 0 | 0 ] ~ [ 1 0 0 | 0 ; 0 1 2√2 | 0 ; 0 0 0 | 0 ],

after (1/15)R2 → R2, R2 ↔ R1, 99R1 + R2 → R2, −2√2 R2 + R3 → R3, −(15/864)R2 → R2, (11/15)R2 + R1 → R1.
⇒ p(1) = [ 0, −2√2, 1 ]^T is an eigenvector corr. to G's eigenvalue γ1 = 144.
⇒ v1 = (1/3)[ 0, −2√2, 1 ]^T is a normalized eigenvector corr. to G's eigenvalue γ1 = 144.

[ G − γ2 I | 0 ] = [ −45 15 30√2 | 0 ; 15 43 −22√2 | 0 ; 30√2 −22√2 −34 | 0 ] ~ [ 1 43/15 −22√2/15 | 0 ; 0 144 −36√2 | 0 ; 0 −108√2 54 | 0 ] ~ [ 1 0 −3√2/4 | 0 ; 0 1 −√2/4 | 0 ; 0 0 0 | 0 ],

after −2√2 R2 + R3 → R3, 3R2 + R1 → R1, (1/15)R2 → R2, R2 ↔ R1, then (1/144)R2 → R2, 108√2 R2 + R3 → R3, −(43/15)R2 + R1 → R1.
⇒ p(2) = [ 3√2, √2, 4 ]^T is an eigenvector corr. to G's eigenvalue γ2 = 90.
⇒ v2 = (1/6)[ 3√2, √2, 4 ]^T is a normalized eigenvector corr. to eigenvalue γ2 = 90.

[ G − γ3 I | 0 ] = [ 45 15 30√2 | 0 ; 15 133 −22√2 | 0 ; 30√2 −22√2 56 | 0 ] ~ [ 1 133/15 −22√2/15 | 0 ; 0 −384 96√2 | 0 ; 0 −288√2 144 | 0 ] ~ [ 1 0 3√2/4 | 0 ; 0 1 −√2/4 | 0 ; 0 0 0 | 0 ],

after −2√2 R2 + R3 → R3, −3R2 + R1 → R1, (1/15)R2 → R2, R2 ↔ R1, then −(1/384)R2 → R2, 288√2 R2 + R3 → R3, −(133/15)R2 + R1 → R1.
⇒ p(3) = [ −3√2, √2, 4 ]^T is an eigenvector corr. to G's eigenvalue γ3 = 0.
⇒ v3 = (1/6)[ −3√2, √2, 4 ]^T is a normalized eigenvector corr. to eigenvalue γ3 = 0.

We have

V = [ V1 | v3 ] = (1/6)[ 0 3√2 −3√2 ; −4√2 √2 √2 ; 2 4 4 ]

and the 3 × 3 matrix Σ is

Σ = diag(σ1, σ2, σ3) = [ 4 0 0 ; 0 √10 0 ; 0 0 0 ] = [ S O ; O 0 ].

Using this we see that

[ u1 u2 ] = U1 = A V1 S^{-1} = (1/(3√2))[ 3√5 √5 2√10 ; 0 −16 4√2 ; 3√5 √5 2√10 ] · (1/6)[ 0 3√2 ; −4√2 √2 ; 2 4 ] · [ 1/4 0 ; 0 1/√10 ]
= [ 0 √5 ; 4 0 ; 0 √5 ][ 1/4 0 ; 0 1/√10 ] = [ 0 1/√2 ; 1 0 ; 0 1/√2 ].

The 3 × 3 real, orthogonal matrix U = [ U1 | u3 ] = [ u1 u2 | u3 ], where we can find u3 by the process in the Appendix to Section 2.4. First, calculate

w3 = e(1) − ⟨e(1), u1⟩u1 − ⟨e(1), u2⟩u2 = [ 1, 0, 0 ]^T − 0·[ 0, 1, 0 ]^T − (1/√2)·(1/√2)[ 1, 0, 1 ]^T = [ 1/2, 0, −1/2 ]^T,

||w3|| = √2/2, and finally u3 = (1/√2)[ 1, 0, −1 ]^T.

To summarize, the SVD factorization is A = UΣV^T, where

U = (1/√2)[ 0 1 1 ; √2 0 0 ; 0 1 −1 ], Σ = [ 4 0 0 ; 0 √10 0 ; 0 0 0 ], and V = (1/6)[ 0 3√2 −3√2 ; −4√2 √2 √2 ; 2 4 4 ].

2.7.7.21. Recall (2.60), that is, A = Σ_{i=1}^{r} σi ui vi^T, and also recall the thin SVD factorization A = U1 S V1^T. Essentially, our only use of (2.60) will be to make sure what are the correct notations of U1, S, and V1. We have

x⋆ ≜ V1 S^{-1} U1^T b = V1 diag(σ1^{-1}, ..., σr^{-1})[ u1^T ; ... ; ur^T ] b = V1 [ σ1^{-1} u1^T b ; ... ; σr^{-1} ur^T b ] = [ v1 | ... | vr ][ σ1^{-1} u1^T b ; ... ; σr^{-1} ur^T b ].

Using (1.42) in Section 1.7, we can rewrite this as x⋆ = (σ1^{-1} u1^T b)v1 + ... + (σr^{-1} ur^T b)vr, that is, (2.61) is true.

2.7.7.22. Recall that A+ = V1 S^{-1} U1^T, where A = U1 S V1^T is the thin SVD factorization of A. Recall that the columns of U1 are an o.n. set, as are the columns of V1. We will show that X = A+ satisfies those properties.
(a) Regarding property (2.67)(a) in Section 2.7,

AXA = A(V1 S^{-1} U1^T)A = (U1 S V1^T)(V1 S^{-1} U1^T)(U1 S V1^T) = U1 S(V1^T V1)S^{-1}(U1^T U1)S V1^T = U1 S(I)S^{-1}(I)S V1^T = U1 (S S^{-1})S V1^T = U1 (I)S V1^T = U1 S V1^T = A.

(b) Regarding property (2.67)(b) in Section 2.7,

XAX = (V1 S^{-1} U1^T)A(V1 S^{-1} U1^T) = (V1 S^{-1} U1^T)(U1 S V1^T)(V1 S^{-1} U1^T) = V1 S^{-1}(U1^T U1)S(V1^T V1)S^{-1} U1^T = V1 S^{-1}(I)S(I)S^{-1} U1^T = V1 S^{-1}(S S^{-1})U1^T = V1 S^{-1}(I)U1^T = V1 S^{-1} U1^T = A+ = X.

(c) Regarding property (2.67)(c) in Section 2.7,

AX = A(V1 S^{-1} U1^T) = (U1 S V1^T)(V1 S^{-1} U1^T) = U1 S(V1^T V1)S^{-1} U1^T = U1 S(I)S^{-1} U1^T = U1 (S S^{-1}) U1^T = U1 (I) U1^T = U1 U1^T.

It follows that (AX)^T = (U1 U1^T)^T = (U1^T)^T U1^T = U1 U1^T = AX.

(d) Regarding property (2.67)(d) in Section 2.7,

XA = (V1 S^{-1} U1^T)A = (V1 S^{-1} U1^T)(U1 S V1^T) = V1 S^{-1}(U1^T U1)S V1^T = V1 S^{-1}(I)S V1^T = V1 (S^{-1} S) V1^T = V1 (I) V1^T = V1 V1^T.

It follows that (XA)^T = (V1 V1^T)^T = (V1^T)^T V1^T = V1 V1^T = XA.
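The four Penrose properties verified above can also be checked numerically for a random low-rank matrix (a numpy sketch, not from the manual):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # rank 3
U, s, Vt = np.linalg.svd(A)
r = int((s > 1e-10).sum())
U1, S, V1 = U[:, :r], np.diag(s[:r]), Vt[:r].T
X = V1 @ np.linalg.inv(S) @ U1.T        # A+ = V1 S^{-1} U1^T

assert np.allclose(A @ X @ A, A)        # (2.67)(a)
assert np.allclose(X @ A @ X, X)        # (2.67)(b)
assert np.allclose((A @ X).T, A @ X)    # (2.67)(c)
assert np.allclose((X @ A).T, X @ A)    # (2.67)(d)
assert np.allclose(X, np.linalg.pinv(A))
```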

2.7.7.23. Define X to be the Moore-Penrose inverse A+ we found in Example 2.34 in Section 2.7. We will show that X = A+ satisfies those properties. First, we calculate that

A = [ 1 0 0 ; 0 1/√2 −1/√2 ; 0 1/√2 1/√2 ][ 3 0 0 ; 0 2 0 ; 0 0 0 ][ 1/√2 1/√2 0 ; −1/√2 1/√2 0 ; 0 0 1 ] = [ 3/√2 3/√2 0 ; −1 1 0 ; −1 1 0 ],

and, in Example 2.34 in Section 2.7, we calculated that

A+ = [ 1/(3√2) −1/4 −1/4 ; 1/(3√2) 1/4 1/4 ; 0 0 0 ].
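Before checking the properties by hand, the pair A, A+ above can be verified directly (a numpy sketch, not from the manual; A is as reconstructed here):

```python
import numpy as np

r2 = np.sqrt(2.0)
A = np.array([[3/r2, 3/r2, 0.0], [-1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]])
Aplus = np.array([[1/(3*r2), -0.25, -0.25],
                  [1/(3*r2), 0.25, 0.25],
                  [0.0, 0.0, 0.0]])

assert np.allclose(Aplus, np.linalg.pinv(A))   # A+ is the Moore-Penrose inverse of A
```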



(1) Regarding property (2.67)(a) in Section 2.7,

AXA = [ 3/√2 3/√2 0 ; −1 1 0 ; −1 1 0 ][ 1/(3√2) −1/4 −1/4 ; 1/(3√2) 1/4 1/4 ; 0 0 0 ][ 3/√2 3/√2 0 ; −1 1 0 ; −1 1 0 ]
= [ 1 0 0 ; 0 1/2 1/2 ; 0 1/2 1/2 ][ 3/√2 3/√2 0 ; −1 1 0 ; −1 1 0 ] = [ 3/√2 3/√2 0 ; −1 1 0 ; −1 1 0 ] = A.


(2) Regarding property (2.67)(b) in Section 2.7,

XAX = [ 1/(3√2) −1/4 −1/4 ; 1/(3√2) 1/4 1/4 ; 0 0 0 ][ 3/√2 3/√2 0 ; −1 1 0 ; −1 1 0 ][ 1/(3√2) −1/4 −1/4 ; 1/(3√2) 1/4 1/4 ; 0 0 0 ]
= [ 1 0 0 ; 0 1 0 ; 0 0 0 ][ 1/(3√2) −1/4 −1/4 ; 1/(3√2) 1/4 1/4 ; 0 0 0 ] = [ 1/(3√2) −1/4 −1/4 ; 1/(3√2) 1/4 1/4 ; 0 0 0 ] = X.

(3) Finally, regarding properties (2.67)(c) and (2.67)(d) in Section 2.7,

AX = [ 3/√2 3/√2 0 ; −1 1 0 ; −1 1 0 ][ 1/(3√2) −1/4 −1/4 ; 1/(3√2) 1/4 1/4 ; 0 0 0 ] = [ 1 0 0 ; 0 1/2 1/2 ; 0 1/2 1/2 ]

and

XA = [ 1/(3√2) −1/4 −1/4 ; 1/(3√2) 1/4 1/4 ; 0 0 0 ][ 3/√2 3/√2 0 ; −1 1 0 ; −1 1 0 ] = [ 1 0 0 ; 0 1 0 ; 0 0 0 ],

so (AX)^T = AX and (XA)^T = XA. By the way, the fact that in this example

A+ A = [ 1 0 0 ; 0 1 0 ; 0 0 0 ] ≠ I3

explains a little why A+ is called a “pseudo-inverse.”

2.7.7.24. (1) Because A^T A is invertible, the unique l.s.s. of Ax = b is given by x = (A^T A)^{-1} A^T b. But x⋆ = A+ b, the minimum norm l.s.s., has to be the only l.s.s. in this situation. It follows that for all b, A+ b = (A^T A)^{-1} A^T b. This being true for all b implies that A+ = (A^T A)^{-1} A^T.
(2) Regarding property (2.67)(a) in Section 2.7, using the result of part (1) for X = A+ = (A^T A)^{-1} A^T gives

AXA = A((A^T A)^{-1} A^T)A = A(A^T A)^{-1}(A^T A) = A(I) = A.

(3) Regarding property (2.67)(c) in Section 2.7, using the result of part (1) for X = A+ = (A^T A)^{-1} A^T gives

(AX)^T = (A(A^T A)^{-1} A^T)^T = (A^T)^T ((A^T A)^{-1})^T A^T = A((A^T A)^T)^{-1} A^T = A(A^T (A^T)^T)^{-1} A^T = A(A^T A)^{-1} A^T = AX.

2.7.7.25. Because A is a real, symmetric, positive definite matrix, it follows that A = A^T, all of its eigenvalues are positive, and there is an orthogonal matrix Q and diagonal matrix D such that A = QDQ^T. Without loss of generality suppose that D = diag(λ1, ..., λn), where λ1 ≥ ... ≥ λn > 0.
To construct the SVD using the method following Theorem 2.37 in Section 2.7, we find

B = A^T A = A² = (QDQ^T)(QDQ^T) = QD(Q^T Q)DQ^T = QD(I)DQ^T = QD²Q^T = Q diag(λ1², ..., λn²) Q^T.

It follows that the eigenvalues of B are λ1² ≥ ... ≥ λn² > 0 and the corresponding eigenvectors can be chosen to be the columns of Q. It follows that σj = λj, for j = 1, ..., n.


So, Σ = D and V = Q. It follows that

U = AV Σ^{-1} = (QDQ^T)QD^{-1} = QD(Q^T Q)D^{-1} = QD(I)D^{-1} = Q(DD^{-1}) = Q.

To summarize, if A is a real, symmetric, positive definite matrix, then its SVD factorization is A = QDQ^T, the factorization we found in Section 2.6.

2.7.7.26. Because A is a real, symmetric matrix, it follows that A = A^T, all of its eigenvalues are real, and there is an orthogonal matrix Q and diagonal matrix D such that A = QDQ^T, as in Section 2.6. We assumed that the eigenvalues of A satisfy |λ1| ≥ |λ2| ≥ ... ≥ |λn| ≥ 0; without loss of generality, D = diag(λ1, ..., λn). Assume that |λr| > 0 and that λj = 0 for j = r + 1, ..., n.
To construct the SVD using the method following Theorem 2.37 in Section 2.7, we find

B = A^T A = A² = (QDQ^T)(QDQ^T) = QD(Q^T Q)DQ^T = QD(I)DQ^T = QD²Q^T = Q diag(λ1², ..., λn²) Q^T.

It follows that the n eigenvalues of B are λ1² ≥ ... ≥ λr² > 0 = ... = 0 and the corresponding eigenvectors can be chosen to be the columns of Q. It follows that σj = |λj|, for j = 1, ..., n. As usual for the SVD factorization, we let V1 = [ v1 | ... | vr ]. So, V = Q and

Σ = [ S O ; O O ] = diag(|λ1|, ..., |λr|, 0, ..., 0).

It follows that U1 = AV1 S^{-1} = (QDQ^T)V1 S^{-1} = QD(Q^T V1)S^{-1}. We calculate that

Q^T V1 = [ v1^T ; ... ; vn^T ][ v1 | ... | vr ]

= [ v1^T v1 ... v1^T vr ; ... ; vn^T v1 ... vn^T vr ] = [ I_r ; O_{n−r,r} ],

hence

(Q^T V1)S^{-1} = [ diag(|λ1|^{-1}, ..., |λr|^{-1}) ; O_{n−r,r} ].

So,

U1 = QD(Q^T V1)S^{-1} = Q diag(λ1, ..., λn)[ diag(|λ1|^{-1}, ..., |λr|^{-1}) ; O_{n−r,r} ] = Q[ diag(sgn(λ1), ..., sgn(λr)) ; O_{n−r,r} ]
= [ v1 | ... | vn ][ diag(sgn(λ1), ..., sgn(λr)) ; O_{n−r,r} ] = [ sgn(λ1)v1 | ... | sgn(λr)vr ].

It is easy to find the rest of the columns of the orthogonal matrix U because {v1, ..., vn} being an o.n. set implies that {sgn(λ1)v1, ..., sgn(λr)vr, vr+1, ..., vn} is an o.n. set. To summarize, if A is a real, symmetric matrix whose eigenvalues satisfy |λ1| ≥ ... ≥ |λr| > 0 = λ_{r+1} = ... = λn, then its SVD factorization is A = UΣV^T, where

U = [ sgn(λ1)v1 | ... | sgn(λr)vr | vr+1 | ... | vn ], Σ = diag(|λ1|, ..., |λr|, 0, ..., 0), and V = Q = [ v1 | ... | vn ].
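The conclusion of 2.7.7.26 — for a real, symmetric A, the SVD is the eigendecomposition with the signs of the negative eigenvalues moved into U — is easy to demonstrate numerically (a sketch, not from the manual):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, -2.0]])   # symmetric, eigenvalues +sqrt(5) and -sqrt(5)
lam, Q = np.linalg.eigh(A)
order = np.argsort(-np.abs(lam))          # sort by |lambda|, descending
lam, Q = lam[order], Q[:, order]
U = Q * np.sign(lam)                      # U = [ sgn(l1)v1 | sgn(l2)v2 ]
Sigma = np.diag(np.abs(lam))

assert np.allclose(U @ Sigma @ Q.T, A)                          # A = U Sigma V^T with V = Q
assert np.allclose(np.linalg.svd(A, compute_uv=False), np.abs(lam))
```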




2.7.7.27. Ex. (a) A = [ 1 1 0 ; 1 −1 1 ; 0 0 1 ]

(b) We calculate

B = A^T A = [ 1 1 0 ; 1 −1 0 ; 0 1 1 ][ 1 1 0 ; 1 −1 1 ; 0 0 1 ] = [ 2 0 1 ; 0 2 −1 ; 1 −1 2 ].

Next, find the eigenvalues of B:

0 = det(B − µI) = | 2−µ 0 1 ; 0 2−µ −1 ; 1 −1 2−µ | = (2−µ)[(2−µ)² − 1] − (2−µ) = (2−µ)[(2−µ)² − 2].

The eigenvalues of B are µ1 = 2 + √2, µ2 = 2, and µ3 = 2 − √2. Correspondingly, σ1 = √(2+√2), σ2 = √2, and σ3 = √(2−√2).
Next, find the eigenvectors of B corresponding to its eigenvalues:

[ B − µ1 I | 0 ] = [ −√2 0 1 | 0 ; 0 −√2 −1 | 0 ; 1 −1 −√2 | 0 ] ~ [ 1 0 −1/√2 | 0 ; 0 1 1/√2 | 0 ; 0 0 0 | 0 ],

after R1 ↔ R3, √2 R1 + R3 → R3, −(1/√2)R2 → R2, √2 R2 + R3 → R3, R2 + R1 → R1.
⇒ p(1) = [ 1, −1, √2 ]^T is an eigenvector corr. to B's eigenvalue µ1 = 2 + √2.
⇒ v1 = (1/2)[ 1, −1, √2 ]^T is a normalized eigenvector corr. to eigenvalue µ1 = 2 + √2.

[ B − µ2 I | 0 ] = [ 0 0 1 | 0 ; 0 0 −1 | 0 ; 1 −1 0 | 0 ] ~ [ 1 −1 0 | 0 ; 0 0 1 | 0 ; 0 0 0 | 0 ], after R1 ↔ R3, R2 + R3 → R3, −R2 → R2.
⇒ p(2) = [ 1, 1, 0 ]^T is an eigenvector corr. to B's eigenvalue µ2 = 2.
⇒ v2 = (1/√2)[ 1, 1, 0 ]^T is a normalized eigenvector corr. to B's eigenvalue µ2 = 2.

[ B − µ3 I | 0 ] = [ √2 0 1 | 0 ; 0 √2 −1 | 0 ; 1 −1 √2 | 0 ] ~ [ 1 0 1/√2 | 0 ; 0 1 −1/√2 | 0 ; 0 0 0 | 0 ],

after R1 ↔ R3, −√2 R1 + R3 → R3, (1/√2)R2 → R2, −√2 R2 + R3 → R3, R2 + R1 → R1.
⇒ p(3) = [ −1, 1, √2 ]^T is an eigenvector corr. to B's eigenvalue µ3 = 2 − √2.
⇒ v3 = (1/2)[ −1, 1, √2 ]^T is a normalized eigenvector corr. to eigenvalue µ3 = 2 − √2.

Because all of the σ's are positive, we have

V = V1 = [ v1 v2 v3 ] = (1/2)[ 1 √2 −1 ; −1 √2 1 ; √2 0 √2 ]

and

Σ = S = diag(√(2+√2), √2, √(2−√2)).

Using this we see that

U = U1 = A V Σ^{-1} = [ 1 1 0 ; 1 −1 1 ; 0 0 1 ] · (1/2)[ 1 √2 −1 ; −1 √2 1 ; √2 0 √2 ] · diag(1/√(2+√2), 1/√2, 1/√(2−√2))
= (1/2)[ 0 2√2 0 ; 2+√2 0 −2+√2 ; √2 0 √2 ] · diag(1/√(2+√2), 1/√2, 1/√(2−√2))
= (1/2)[ 0 2 0 ; √(2+√2) 0 −√(2−√2) ; √2/√(2+√2) 0 √2/√(2−√2) ].

To summarize, the SVD factorization is A = UΣV^T, where

U = (1/2)[ 0 2 0 ; √(2+√2) 0 −√(2−√2) ; √2/√(2+√2) 0 √2/√(2−√2) ],
Σ = diag(√(2+√2), √2, √(2−√2)),
and V^T = [ 1/2 −1/2 1/√2 ; 1/√2 1/√2 0 ; −1/2 1/2 1/√2 ].
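A quick numerical cross-check of 2.7.7.27 (numpy sketch, not from the manual):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0], [1.0, -1.0, 1.0], [0.0, 0.0, 1.0]])
mu = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # eigenvalues of B = A^T A, descending

assert np.allclose(mu, [2 + np.sqrt(2), 2.0, 2 - np.sqrt(2)])
assert np.allclose(np.linalg.svd(A, compute_uv=False), np.sqrt(mu))
```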

2.7.7.28. A = QR implies QT A = (QT Q)R = (I)R = R, and this implies QT AQ = RQ, which shows that A is orthogonally similar to RQ.
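The one-line argument in 2.7.7.28 is the basis of the QR eigenvalue algorithm; it can be illustrated numerically (a sketch, not from the manual):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
Q, R = np.linalg.qr(A)

# Q^T A Q = Q^T (Q R) Q = R Q, so A is orthogonally similar to RQ.
assert np.allclose(Q.T @ A @ Q, R @ Q)
```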




2.7.7.29. (a) Ex. A = [ 1 0 2 ; 0 1 −1 ; 0 0 0 ]

(b) We calculate

B = A^T A = [ 1 0 0 ; 0 1 0 ; 2 −1 0 ][ 1 0 2 ; 0 1 −1 ; 0 0 0 ] = [ 1 0 2 ; 0 1 −1 ; 2 −1 5 ].

Next, find the eigenvalues of B:

0 = det(B − µI) = | 1−µ 0 2 ; 0 1−µ −1 ; 2 −1 5−µ | = (1−µ)[(1−µ)(5−µ) − 1] − 4(1−µ) = (1−µ)[(1−µ)(5−µ) − 5] = (1−µ)(µ² − 6µ) = µ(1−µ)(µ−6).

The eigenvalues of B are µ1 = 6, µ2 = 1, and µ3 = 0. Correspondingly, σ1 = √6, σ2 = 1, and σ3 = 0.
Next, find the eigenvectors of B corresponding to its eigenvalues:

[ B − µ1 I | 0 ] = [ −5 0 2 | 0 ; 0 −5 −1 | 0 ; 2 −1 −1 | 0 ] ~ [ 1 0 −2/5 | 0 ; 0 1 1/5 | 0 ; 0 0 0 | 0 ],

after −(1/5)R1 → R1, −2R1 + R3 → R3, −(1/5)R2 → R2, R2 + R3 → R3.
⇒ p(1) = [ 2, −1, 5 ]^T is an eigenvector corr. to B's eigenvalue µ1 = 6.
⇒ v1 = (1/√30)[ 2, −1, 5 ]^T is a normalized eigenvector corr. to eigenvalue µ1 = 6.

[ B − µ2 I | 0 ] = [ 0 0 2 | 0 ; 0 0 −1 | 0 ; 2 −1 4 | 0 ] ~ [ 1 −1/2 0 | 0 ; 0 0 1 | 0 ; 0 0 0 | 0 ],

after R1 ↔ R3, (1/2)R1 → R1, 2R2 + R3 → R3, −R2 → R2, −2R2 + R1 → R1.
⇒ p(2) = [ 1, 2, 0 ]^T is an eigenvector corr. to B's eigenvalue µ2 = 1.
⇒ v2 = (1/√5)[ 1, 2, 0 ]^T is a normalized eigenvector corr. to B's eigenvalue µ2 = 1.

[ B − µ3 I | 0 ] = [ 1 0 2 | 0 ; 0 1 −1 | 0 ; 2 −1 5 | 0 ] ~ [ 1 0 2 | 0 ; 0 1 −1 | 0 ; 0 0 0 | 0 ], after −2R1 + R3 → R3, R2 + R3 → R3.
⇒ p(3) = [ −2, 1, 1 ]^T is an eigenvector corr. to B's eigenvalue µ3 = 0.
⇒ v3 = (1/√6)[ −2, 1, 1 ]^T is a normalized eigenvector corr. to eigenvalue µ3 = 0.

We have

V = [ V1 | v3 ] = (1/√30)[ 2 √6 −2√5 ; −1 2√6 √5 ; 5 0 √5 ]

and the 3 × 3 matrix Σ is

Σ = diag(σ1, σ2, σ3) = [ √6 0 0 ; 0 1 0 ; 0 0 0 ] = [ S O ; O 0 ].

Using this we see that

[ u1 u2 ] = U1 = A V1 S^{-1} = [ 1 0 2 ; 0 1 −1 ; 0 0 0 ] · (1/√30)[ 2 √6 ; −1 2√6 ; 5 0 ] · [ 1/√6 0 ; 0 1 ]
= (1/√30)[ 12 √6 ; −6 2√6 ; 0 0 ][ 1/√6 0 ; 0 1 ] = (1/√5)[ 2 1 ; −1 2 ; 0 0 ].

The 3 × 3 real, orthogonal matrix U = [ U1 | u3 ] = [ u1 u2 | u3 ], where we can find u3 by the process in the Appendix to Section 2.4. Because Span{u1, u2} = Span{e(1), e(2)}, we should use e(3) instead of e(1). First, calculate

w3 = e(3) − ⟨e(3), u1⟩u1 − ⟨e(3), u2⟩u2 = e(3) − 0·u1 − 0·u2 = e(3),

||w3|| = 1, and finally u3 = e(3) = [ 0, 0, 1 ]^T.

To summarize, the SVD factorization is A = UΣV^T, where

U = (1/√5)[ 2 1 0 ; −1 2 0 ; 0 0 √5 ], Σ = diag(√6, 1, 0), and V^T = (1/√30)[ 2 −1 5 ; √6 2√6 0 ; −2√5 √5 √5 ].

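A numerical spot-check of 2.7.7.29 (numpy sketch, not from the manual):

```python
import numpy as np

s5, s6, s30 = np.sqrt(5.0), np.sqrt(6.0), np.sqrt(30.0)
A = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, -1.0], [0.0, 0.0, 0.0]])
U = (1/s5) * np.array([[2.0, 1.0, 0.0], [-1.0, 2.0, 0.0], [0.0, 0.0, s5]])
Sigma = np.diag([s6, 1.0, 0.0])
Vt = (1/s30) * np.array([[2.0, -1.0, 5.0],
                         [s6, 2*s6, 0.0],
                         [-2*s5, s5, s5]])

assert np.allclose(U @ Sigma @ Vt, A)            # A = U Sigma V^T
assert np.allclose(np.linalg.svd(A, compute_uv=False), [s6, 1.0, 0.0])
```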

Section 2.8.3

2.8.3.1.

E1 A3 = [ 1 0 0 ; 1/2 1 0 ; 0 0 1 ][ −2 1 0 ; 1 −2 1 ; 0 1 −2 ] = [ −2 1 0 ; 0 −3/2 1 ; 0 1 −2 ], and

E2(E1 A3) = [ 1 0 0 ; 0 1 0 ; 0 2/3 1 ][ −2 1 0 ; 0 −3/2 1 ; 0 1 −2 ] = [ −2 1 0 ; 0 −3/2 1 ; 0 0 −4/3 ] = U.

So, an LU factorization is A3 = LU, where

L = (E2 E1)^{-1} = E1^{-1} E2^{-1} = [ 1 0 0 ; −1/2 1 0 ; 0 0 1 ][ 1 0 0 ; 0 1 0 ; 0 −2/3 1 ] = [ 1 0 0 ; −1/2 1 0 ; 0 −2/3 1 ].

To summarize, an LU factorization of A3 is given by

L = [ 1 0 0 ; −1/2 1 0 ; 0 −2/3 1 ], U = [ −2 1 0 ; 0 −3/2 1 ; 0 0 −4/3 ].

2.8.3.2.

E1 A4 = [ 1 0 0 0 ; 1/2 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ][ −2 1 0 0 ; 1 −2 1 0 ; 0 1 −2 1 ; 0 0 1 −2 ] = [ −2 1 0 0 ; 0 −3/2 1 0 ; 0 1 −2 1 ; 0 0 1 −2 ],

E2(E1 A4) = [ 1 0 0 0 ; 0 1 0 0 ; 0 2/3 1 0 ; 0 0 0 1 ](E1 A4) = [ −2 1 0 0 ; 0 −3/2 1 0 ; 0 0 −4/3 1 ; 0 0 1 −2 ], and

E3(E2 E1 A4) = [ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ; 0 0 3/4 1 ](E2 E1 A4) = [ −2 1 0 0 ; 0 −3/2 1 0 ; 0 0 −4/3 1 ; 0 0 0 −5/4 ] = U.

So, one LU factorization is A4 = LU, where

L = (E3 E2 E1)^{-1} = E1^{-1} E2^{-1} E3^{-1} = [ 1 0 0 0 ; −1/2 1 0 0 ; 0 −2/3 1 0 ; 0 0 −3/4 1 ].

To summarize, an LU factorization of A4 is given by

L = [ 1 0 0 0 ; −1/2 1 0 0 ; 0 −2/3 1 0 ; 0 0 −3/4 1 ], U = [ −2 1 0 0 ; 0 −3/2 1 0 ; 0 0 −4/3 1 ; 0 0 0 −5/4 ].
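The factorization of A4 can be verified by multiplying back (a numpy sketch, not from the manual):

```python
import numpy as np

A4 = np.array([[-2.0, 1, 0, 0], [1, -2, 1, 0], [0, 1, -2, 1], [0, 0, 1, -2]])
L = np.array([[1.0, 0, 0, 0], [-1/2, 1, 0, 0], [0, -2/3, 1, 0], [0, 0, -3/4, 1]])
U = np.array([[-2.0, 1, 0, 0], [0, -3/2, 1, 0], [0, 0, -4/3, 1], [0, 0, 0, -5/4]])

assert np.allclose(L @ U, A4)
# L is unit lower triangular, U is upper triangular:
assert np.allclose(L, np.tril(L)) and np.allclose(np.diag(L), 1.0)
assert np.allclose(U, np.triu(U))
```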

2.8.3.4. We want to find a lower triangular matrix L = [ ℓ11 0 ; ℓ21 ℓ22 ] satisfying A = LL^T, that is,

[ 3 −1 ; −1 3 ] = LL^T = ... = [ ℓ11² ℓ11ℓ21 ; ℓ11ℓ21 ℓ21² + ℓ22² ].



The (1, 1) entry of A requires ℓ11² = 3, so one Cholesky factorization can use ℓ11 = √3. After that, both the (1, 2) and (2, 1) entries of A require that −1 = ℓ11ℓ21, hence ℓ21 = −1/√3. Finally, the (2, 2) entry of A requires 3 = ℓ21² + ℓ22² = 1/3 + ℓ22², hence ℓ22 = √(8/3) = 2√2/√3. One Cholesky factorization of A is given by

A = [ √3 0 ; −1/√3 2√2/√3 ][ √3 0 ; −1/√3 2√2/√3 ]^T.

2.8.3.5. We want to find a lower triangular matrix L = [ ℓ11 0 ; ℓ21 ℓ22 ] satisfying A = LL^T, that is,

[ 3 −2 ; −2 2 ] = LL^T = ... = [ ℓ11² ℓ11ℓ21 ; ℓ11ℓ21 ℓ21² + ℓ22² ].

2.8.3.6. Take the hint, and partition  L= − `31  specifically, L11 =

`11 `21

0 `22

L11 − `32

| | |

 0 − − , `33

where

L11 is 2 × 2,

 and correspondingly 

A11 A=−− A21

| | |

  A12 −−= − a31 A22

A11 − a32

| |

− |

 a12 − . a33

Then we calculate 

A11  −− A21

| | |

  A12 T − −  = A = LL =  − A22 `31 |

  = − − − − − `11 `31 +`12 `32 It follows that we need A11 =

L11 LT11 − − −− `21 `31 +`22 `32

| | |

L11 − `32

| | |

  0  − −  − `33

|

LT11

|

− 0T

|

  `11 `31 +`12 `32 2 `21 `31 +`22 `32   = A =  −1 − − −−  0 2 `11 +`222 +`233

|

 `31 `32   −− `33

−1 2 1

 0 1 . 1

(?)

L11 LT11 ,

which is a problem we have experience with from problems 2.8.3.4 and 2.8.3.5. We want to find a lower triangular matrix L11 = [ ℓ11 0 ; ℓ21 ℓ22 ] satisfying

[ 2 −1 ; −1 2 ] = A11 = L11 L11^T = [ ℓ11² ℓ11ℓ21 ; ℓ11ℓ21 ℓ21² + ℓ22² ].

The (1, 1) entry of A requires ℓ11² = 2, so one Cholesky factorization can use ℓ11 = √2. After that, both the (1, 2) and (2, 1) entries of A require that −1 = ℓ11ℓ21, hence ℓ21 = −1/√2. Finally, the (2, 2) entry of A requires 2 = ℓ21² + ℓ22² = 1/2 + ℓ22², hence ℓ22 = √(3/2). One Cholesky factorization of A11 is given by

A11 = [ √2 0 ; −1/√2 √(3/2) ][ √2 0 ; −1/√2 √(3/2) ]^T.



(?) is

[ 2 −1 0 ; −1 2 1 ; 0 1 1 ] = A = LL^T = [ √2 0 0 ; −1/√2 √(3/2) 0 ; ℓ31 ℓ32 ℓ33 ][ √2 −1/√2 ℓ31 ; 0 √(3/2) ℓ32 ; 0 0 ℓ33 ].

The (3, 1) entry of A requires

0 = √2 ℓ31,

hence ℓ31 = 0. The (3, 2) entry of A requires

1 = −(1/√2)ℓ31 + √(3/2) ℓ32 = −(1/√2)·0 + √(3/2) ℓ32,

hence ℓ32 = √(2/3). The (3, 3) entry of A requires

1 = ℓ31² + ℓ32² + ℓ33² = 0² + (√(2/3))² + ℓ33² = 2/3 + ℓ33²,

hence ℓ33 = 1/√3.

To summarize, the Cholesky factorization is

A = [ √2 0 0 ; −1/√2 √(3/2) 0 ; 0 √(2/3) 1/√3 ][ √2 0 0 ; −1/√2 √(3/2) 0 ; 0 √(2/3) 1/√3 ]^T.
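A numerical check of 2.8.3.6 (numpy sketch, not from the manual):

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, 1.0], [0.0, 1.0, 1.0]])
L = np.array([[np.sqrt(2), 0.0, 0.0],
              [-1/np.sqrt(2), np.sqrt(3/2), 0.0],
              [0.0, np.sqrt(2/3), 1/np.sqrt(3)]])

assert np.allclose(L @ L.T, A)
assert np.allclose(L, np.linalg.cholesky(A))
```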

2.8.3.7. Take the hint, and partition

L = [ L11 0 ; (ℓ31 ℓ32) ℓ33 ], where L11 is 2 × 2, specifically L11 = [ ℓ11 0 ; ℓ21 ℓ22 ],

and correspondingly

A = [ A11 A12 ; A21 A22 ] = [ A11 a12 ; (a31 a32) a33 ].

Then we calculate

[ A11 A12 ; A21 A22 ] = A = LL^T = [ L11 0 ; (ℓ31 ℓ32) ℓ33 ][ L11^T (ℓ31 ℓ32)^T ; 0^T ℓ33 ]
= [ L11 L11^T | (ℓ11ℓ31 + ℓ12ℓ32, ℓ21ℓ31 + ℓ22ℓ32)^T ; (ℓ11ℓ31 + ℓ12ℓ32, ℓ21ℓ31 + ℓ22ℓ32) | ℓ31² + ℓ32² + ℓ33² ] = A = [ 2 −1 0 ; −1 2 −1 ; 0 −1 2 ].   (?)

It follows that we need A11 = L11 L11^T, which is a problem we have experience with from problems 2.8.3.4 and 2.8.3.5. We want to find a lower triangular matrix L11 = [ ℓ11 0 ; ℓ21 ℓ22 ] satisfying

[ 2 −1 ; −1 2 ] = A11 = L11 L11^T = [ ℓ11² ℓ11ℓ21 ; ℓ11ℓ21 ℓ21² + ℓ22² ].

The (1, 1) entry of A requires ℓ11² = 2, so one Cholesky factorization can use ℓ11 = √2. After that, both the (1, 2) and (2, 1) entries of A require that −1 = ℓ11ℓ21, hence ℓ21 = −1/√2. Finally, the (2, 2) entry of A requires 2 = ℓ21² + ℓ22² = 1/2 + ℓ22², hence ℓ22 = √(3/2). One Cholesky factorization of A11 is given by

A11 = [ √2 0 ; −1/√2 √(3/2) ][ √2 0 ; −1/√2 √(3/2) ]^T.

(?) is

[ 2 −1 0 ; −1 2 −1 ; 0 −1 2 ] = A = LL^T = [ √2 0 0 ; −1/√2 √(3/2) 0 ; ℓ31 ℓ32 ℓ33 ][ √2 −1/√2 ℓ31 ; 0 √(3/2) ℓ32 ; 0 0 ℓ33 ].

The (3, 1) entry of A requires

0 = √2 ℓ31,

hence ℓ31 = 0. The (3, 2) entry of A requires

−1 = −(1/√2)ℓ31 + √(3/2) ℓ32 = −(1/√2)·0 + √(3/2) ℓ32,

hence ℓ32 = −√(2/3). The (3, 3) entry of A requires

2 = ℓ31² + ℓ32² + ℓ33² = 0² + (−√(2/3))² + ℓ33² = 2/3 + ℓ33²,

hence ℓ33 = 2/√3.

To summarize, the Cholesky factorization is

A = [ √2 0 0 ; −1/√2 √(3/2) 0 ; 0 −√(2/3) 2/√3 ][ √2 0 0 ; −1/√2 √(3/2) 0 ; 0 −√(2/3) 2/√3 ]^T.


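The bordered/partitioned computation used in 2.8.3.6–2.8.3.7 is the standard Cholesky recurrence done one row at a time; a minimal sketch (not from the manual, checked against 2.8.3.7's matrix):

```python
import numpy as np

def cholesky_lower(A):
    """Return lower-triangular L with A = L L^T (A symmetric positive definite)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]          # subtract already-known products
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
L = cholesky_lower(A)
assert np.allclose(L @ L.T, A)
assert np.allclose(L, np.linalg.cholesky(A))
```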

Section 2.9.2

2.9.2.1. Let x = [ x1 x2 ]^T be an unspecified unit vector. Let x1 = x, hence x2² = 1 − x². We calculate

R_A(x) = x^T A x = [ x x2 ][ 2 1 ; 1 −1 ][ x ; x2 ] = 2x² + 2x·x2 − x2² = 2x² + 2x(±√(1 − x²)) − (1 − x²) = −1 + 3x² ± 2x√(1 − x²) ≜ f±(x), for −1 ≤ x ≤ 1.

Next, find the minimum and maximum values of f±(x), each of which is a function of a single variable:

f±'(x) = 6x ± 2[ √(1 − x²) + x·(−2x)/(2√(1 − x²)) ] = 6x ± 2[ √(1 − x²) − x²/√(1 − x²) ] = (2/√(1 − x²))[ 3x√(1 − x²) ± (1 − 2x²) ].

So, the critical points of f±(x) are where 3x√(1 − x²) ± (1 − 2x²) = 0, that is, 3x√(1 − x²) = ∓(1 − 2x²).
If x is a critical point, then (3x√(1 − x²))² = (∓(1 − 2x²))², that is, 9x²(1 − x²) = (1 − 2x²)², that is, 0 = (1 − 4x² + 4x⁴) − 9x²(1 − x²) = 13x⁴ − 13x² + 1. The critical points for both f+(x) and f−(x) must satisfy

x² = (13 ± √(13² − 4·13·1))/26 = (1/2)(1 + 3ε1/√13),

that is,

x = (ε3/√2)√(1 + 3ε1/√13),

where independently ε1 = ±1 and ε3 = ±1. Next, we evaluate the functions at any critical points that lie in the interval −1 < x < 1. Writing f_{ε2} for f± with sign ε2 = ±1, and noting that x² = (1/2)(1 + 3ε1/√13) and x√(1 − x²) = ε3·(1/2)√((1 + 3ε1/√13)(1 − 3ε1/√13)) = ε3·(1/2)√(1 − 9/13) = ε3/√13, we have

f_{ε2}(x) = −1 + 3x² + 2ε2·x√(1 − x²) = −1 + 3/2 + 9ε1/(2√13) + 2ε2ε3/√13.

Using all choices of ε1, ε2, and ε3 gives eight possible values. The maximum of these is

M ≜ −1 + 3/2 + 9/(2√13) + 2/√13 = 1/2 + (9 + 4)/(2√13) = (1 + √13)/2,

and the minimum of these eight possible values is

m ≜ −1 + 3/2 − 9/(2√13) − 2/√13 = 1/2 − (9 + 4)/(2√13) = (1 − √13)/2.



We also need to evaluate the functions at the endpoints of the interval −1 ≤ x ≤ 1:

f+(±1) = −1 + 3(±1)² + 2(±1)√(1 − (±1)²) = 2, and similarly f−(±1) = 2.

So, the Rayleigh quotient for this 2 × 2 real, symmetric matrix gives maximum eigenvalue (1 + √13)/2 and minimum eigenvalue (1 − √13)/2.
By the way, Section 2.1's usual way of finding the eigenvalues using the characteristic equation gives the result that the exact eigenvalues of A are (1 ± √13)/2.

2.9.2.2. Let x = [ x1 x2 ]^T be an unspecified unit vector. Let x1 = x, hence x2² = 1 − x². We calculate

R_A(x) = x^T A x = [ x x2 ][ 1 3 ; 3 4 ][ x ; x2 ] = x² + 6x·x2 + 4x2² = x² + 6x(±√(1 − x²)) + 4(1 − x²) = 4 − 3x² ± 6x√(1 − x²) ≜ f±(x), for −1 ≤ x ≤ 1.

Next, find the minimum and maximum values of f± (x), each of which is a function of a single variable:   p  1 − x2 1 1 x2 0 1 − x2 + x · √ · (−2x) = −6x ± 6 √ −√ f± (x) = −6x ± 6 2 1 − x2 1 − x2 1 − x2  p  6 −x 1 − x2 ± (1 − 2x2 ) . = √ 1 − x2 So, the critical points of f± (x) are where p −x 1 − x2 ± (1 − 2x2 ) = 0, that is, x

p 1 − x2 = ±(1 − 2x2 ).

If x is a critical point, then p 2 2 x 1 − x2 = ± (1 − 2x2 ) , that is, x2 (1 − x2 ) = (1 − 2x2 )2 , that is, 0 = (1−4x2 +4x4 )−x2 (1−x2 ) = 5x4 −5x2 +1. The critical points for both f+ (x) and f− (x) must satisfy √ √   5± 5 1 1 5 ± 52 − 4 · 5 · 1 2 = = 1± √ x = , 10 10 2 5 that is, 1 x= √ 2

r

2 1+ √ , 5

where independently 1 = ±1 and 3 = ±1. Next, we evaluate the functions at any critical points that lie in the interval −1 < x < 1: Let 3 = ±1. Note that  2   r 1 2 1 2 √ 1+ √ = 1+ √ . 2 2 5 5 We have

s      r r 2 1 2 1 2 1 2 f3 1+ √ =4−3· 1+ √ + 63 √ 1+ √ 1− 1+ √ 2 2 5 5 2 5 5 s   r 3 32 61 3 2 1 2 =4 − − √ + √ 1+ √ 1− √ 2 2 5 2 2 5 5 s r   3 32 2 2 3 32 1 = 4 − − √ + 31 3 1+ √ 1− √ = 4 − − √ + 31 3 1 − 2 2 5 2 5 5 5 2 5 

1 √ 2

c Larry

Turyn, January 2, 2014

page 88

3 32 2 − √ + 31 3 √ 2 2 5 5 Using all choices of 1 , 2 , and 3 gives eight possible values. The maximum of these is √ 3 6 5 5+3 5 3 3 + 12 M ,4− + √ + √ = + √ = , 2 2 2 2 5 5 2 5 =4−

and the minimum of these eight possible values is √ 3 3 6 5 3 + 12 5−3 5 m,4− − √ − √ = − √ = . 2 2 2 2 5 5 2 5 We also need to evaluate the functions at the endpoints of the interval −1 ≤ x ≤ 1: p f+ (±1) = 4 − 3(±1)2 + 6(±1) 1 − (±1)2 = 1 and similarly f− (±1) = 1. So, the Rayleigh quotient for this 2 × 2 real, symmetric matrix gives maximum eigenvalue being √ minimum eigenvalue being 5−32 5 .

√ 53+ 5 2

and

By the way, Section 2.1’s usual way√ of finding the eigenvalues using the characteristic equation gives the result that the exact eigenvalues of A are 5±32 5 . z ]T be an unspecified nonzero vector. We calculate  √2 xT Ax xT Ax 1 RA (x) = = 2 = 2 · [x y z] 3 2 2 2 2 2 ||x|| x +y +z x +y +z 0 √ 2x2 + 2 3xy − z 2 = , f (x, y, z) x2 + y 2 + z 2

2.9.2.3. Let ||x|| = [ x

y

√ 3 0 0

  0 x 0  y  z −1

MathematicaTM on f (x, y, z) over the unit cube, as in Example 2.37 in Section 2.9, gives maximum value of +3 and minimum value of −1. z ]T be an unspecified nonzero vector. We calculate  0 xT Ax xT Ax 1  1 RA (x) = = = · [ x y z ] ||x||2 x2 + y 2 + z 2 x2 + y 2 + z 2 1

2.9.2.4. Let ||x|| = [ x

y

=

1 0 0

  1 x 0  y  2 z

2xy + 2xz + 2z 2 , f (x, y, z). x2 + y 2 + z 2

MathematicaTM on f (x, y, z) over the unit cube, as in Example 2.37 in Section 2.9, gives approximate maximum value of +2.4811943040920146 and approximate minimum value of −1.1700864866260186. 2.9.2.5. x+ = [1 1 1 1 1]T and x− = [1 − 1 1 − 1 1]T give estimates RA (x+ ) = 4 ≈ λ1 , the maximum eigenvalue of A, and for the minimum eigenvalue of A gives the estimate λ5 ≈ RA (x− ) = − 45 , after calculating      0 1 1 1 1 1 0 1    2  0 1 1 1   1    1 1      1 0 1 1   1  = [1 1 1 1 1]  RA (x+ ) = [1 1 1 1 1]  1  0 =4 ||x+ ||2 5 1  2  1 1 0 1  1  1 1 1 1 0 1 0 and 

0 1  1 RA (x− ) = [1 − 1 1 − 1 1]  1 ||x− ||2 1 1

1 0 1 1 1

1 1 0 1 1

1 1 1 0 1

   1 1 0  −1   2 1    1    1   1  = 5 [1 − 1 1 − 1 1]  0  2 1   −1  0 1 0 c Larry

    = −4.  5 

Turyn, January 2, 2014

page 89

2.9.2.6. x+ = [1 1 1 1 1]T and x− = [1 − 1 1 − 1 1]T give estimates RA (x+ ) = 54 ≈ λ1 , the maximum eigenvalue of A, and for the minimum eigenvalue of A gives the estimate λ5 ≈ RA (x− ) = −4, after calculating      1 0 1 −1 1 −1 0    1  2  0 1 −1 1   1     1   1  = 1 [1 1 1 1 1]  0  = 4  −1 1 0 1 −1 RA (x+ ) = [1 1 1 1 1]       2 ||x+ || 5 5  1 −1  2  1 0 1  1  1 −1 1 −1 1 0 0 and 

0  1  1 [1 − 1 1 − 1 1]  RA (x− ) =  −1 ||x− ||2  1 −1

1 0 1 −1 1

−1 1 0 1 −1

1 −1 1 0 1

   1 −1 −4    4 1   −1  1     −1    1 = 5 [1 − 1 1 − 1 1]  −4     4 −1 1 1 0 −4

   = −4.  

2.9.2.7. Denoting, as usual in this section, by λn (respectively, λ1 ) the minimum (respectively, maximum) eigenvalue of the real, symmetric matrix A, we have that λn ≤ RA (x) ≤ λ1 , for all nonzero vectors x. For each index i = 1, ..., n, using in the Rayleigh quotient the trial unit vector e(i) gives   a1i   RA (e(i) ) = (e(i) )T Ae(i) = (e(i) )T [ aij ] e(i) = (e(i) )T  ...  = aii . ani So, λn ≤ aii ≤ λ1 . 2.9.2.8. Denoting, as usual in this section, by λn (respectively, λ1 ) the minimum (respectively, maximum) eigenvalue of the real, symmetric matrix A, we have that λn ≤ RA (x) ≤ λ1 , for all nonzero vectors x. Using the trial unit vector x , [1 1 ... 1]T , the Rayleigh quotient is 1 1 (e(1) + ... + e(n) ) = (e(1) + ... + e(n) )T [ aij ] (e(1) + ... + e(n) ) n ||e(1) + ... + e(n) ||2   a11 + ... + a1n n n 1 1 1 XX   .. = (e(1) + ... + e(n) )T  = (a + ... + a + ... + a + ... + a ) = aij ,  11 1n n1 nn . n n n i=1 j=1 an1 + ... + ann

RA (e(1) + ... + e(n) ) =

the average of the and also the average of the column sums. PnrowPsums n So, λn ≤ n1 a i=1 j=1 ij ≤ λ1 . 2.9.2.9. (a) Because {tq1 , (1 − t)qn } is also an orthogonal set of vectors, the Pythagorean theorem implies ||x(t)||2 = ||tq1 + (1 − t)qn ||2 = ||tq1 ||2 + ||(1 − t)qn ||2 = (|t| ||q1 ||)2 + (|1 − t| ||qn ||)2 = (|t| · 1)2 + (|1 − t| · 1)2 = |t|2 + |1 − t|2 = t2 + (1 − t)2 , using the assumption that t is real. Further, because t2 + (1 − t)2 = t2 + t2 − 2t + 1 = 2t2 − 2t + 1 = 2(t2 − t) + 1    1 1 1 1 2 1 1 1 + 1 = 2 t2 − t + − +1=2 t− + ≥ > 0, = 2 t2 − t + − 4 4 4 2 2 2 2 we know that x(t) 6= 0. (b) Define a function of a single variable by  x(t)T Ax(t) f (t) , RA x(t) = . ||x(t)||2 x(t) = tq1 + (1 − t)qn is differentiable in t, and has derivative x0 (t) = q1 − qn , hence x(t) is a continuous function of t. It follows that x(t)T Ax(t) is continuous in t. We also know that the denominator, ||x(t)||2 = t2 + (1 − t)2 is

© Larry Turyn, January 2, 2014


differentiable in t, hence a continuous function of t. Finally, we know that the denominator is never zero. Putting all of these results together implies that f(t) is a continuous function.

(c) Because f(t) is continuous on the interval −1 ≤ t ≤ 1, the Intermediate Value Theorem applies. Because the range f([−1, 1]) contains both λn = f(0) and λ1 = f(1), every w between λn and λ1 must be in f([−1, 1]), that is, there is a t_w such that f(t_w) = w.

2.9.2.10. (a) Let µ1 and λ1 be the maximum eigenvalues of B and A, respectively. Let q1 be a unit eigenvector of A corresponding to eigenvalue λ1. Then we calculate that

q1^T B q1 = q1^T (A + C^T C) q1 = q1^T A q1 + q1^T C^T C q1 = q1^T A q1 + (C q1)^T (C q1) = R_A(q1) + ||C q1||² ≥ R_A(q1) + 0 = R_A(q1).

It follows that µ1 ≥ R_B(q1) = q1^T B q1 ≥ R_A(q1) = λ1. So, the maximum eigenvalue of B is greater than or equal to the maximum eigenvalue of A.

(b) Let λn and µn be the minimum eigenvalues of A and B, respectively. In the calculations of part (a), replace q1 by zn, a unit eigenvector of B corresponding to eigenvalue µn, to get

zn^T A zn = zn^T (B − C^T C) zn = ... = R_B(zn) − ||C zn||².

Then λn ≤ R_A(zn) = zn^T A zn = R_B(zn) − ||C zn||² ≤ R_B(zn) = µn. So, the minimum eigenvalue of B is greater than or equal to the minimum eigenvalue of A.
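The eigenvalue bounds in 2.9.2.7 and 2.9.2.8 can be checked numerically. The sketch below uses a hypothetical 2×2 real symmetric matrix (eigenvalues from the quadratic formula, so no linear-algebra library is needed):

```python
import math

# Hypothetical example for 2.9.2.7-2.9.2.8: A = [[2, 1], [1, -1]], real symmetric.
a11, a12, a22 = 2.0, 1.0, -1.0
tr, det = a11 + a22, a11 * a22 - a12 ** 2
disc = math.sqrt(tr ** 2 - 4.0 * det)
lam_max = (tr + disc) / 2.0   # lambda_1, the maximum eigenvalue
lam_min = (tr - disc) / 2.0   # lambda_n, the minimum eigenvalue

# 2.9.2.7: every diagonal entry lies between the extreme eigenvalues.
diag_ok = all(lam_min <= a <= lam_max for a in (a11, a22))

# 2.9.2.8: the average of all entries, i.e. R_A([1, 1]^T), also lies between them.
avg = (a11 + a12 + a12 + a22) / 2.0
avg_ok = lam_min <= avg <= lam_max
```

Both checks should hold for any real symmetric matrix, by the Rayleigh-quotient argument above.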


Section 2.10.8

2.10.8.1. Define the function q(x) ≡ 1 and suppose p is a polynomial. On the space Pn consisting of all polynomials of degree less than or equal to n with real coefficients, define the inner product

(2.78) ⟨p, q⟩ ≜ ∫_{−1}^{1} p(x)q(x) dx.

The Cauchy-Schwarz inequality implies

|∫_{−1}^{1} p(x)·1 dx| = |⟨p, q⟩| ≤ ||p||·||q|| = (∫_{−1}^{1} |p(x)|² dx)^{1/2} (∫_{−1}^{1} |1|² dx)^{1/2} = (∫_{−1}^{1} |p(x)|² dx)^{1/2} · √2,

hence

(1/2)|∫_{−1}^{1} p(x) dx| ≤ (√2/2)(∫_{−1}^{1} |p(x)|² dx)^{1/2},

hence

|p̄| ≜ |(1/2)∫_{−1}^{1} p(x) dx| ≤ ((1/2)∫_{−1}^{1} |p(x)|² dx)^{1/2} ≜ p_rms.
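The mean-versus-rms inequality of 2.10.8.1 can be checked numerically for a sample polynomial. A sketch, using the hypothetical choice p(x) = 3x² − x + 1 and a simple midpoint rule on [−1, 1]:

```python
# Numeric check of 2.10.8.1 for p(x) = 3x^2 - x + 1 (a hypothetical example).
def p(x):
    return 3.0 * x ** 2 - x + 1.0

N = 20000
h = 2.0 / N
xs = [-1.0 + (i + 0.5) * h for i in range(N)]           # midpoint nodes
mean_p = 0.5 * sum(p(x) for x in xs) * h                # (1/2) * integral of p
rms_p = (0.5 * sum(p(x) ** 2 for x in xs) * h) ** 0.5   # p_rms
ok = abs(mean_p) <= rms_p
```

For this p the exact mean is 2 and the inequality |p̄| ≤ p_rms holds, as the Cauchy-Schwarz argument guarantees.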

2.10.8.2. For each x in the i.p. space V, Ax is also in V. The linear operator B is bounded, so for each Ax,

(⋆) ||B(Ax)|| ≤ ||B|| ||Ax||.

But A is also a bounded linear operator, so for each x in V, ||Ax|| ≤ ||A|| ||x||. So, (⋆) implies

||(BA)x|| = ||B(Ax)|| ≤ ||B|| ||Ax|| ≤ ||B|| ||A|| ||x||.

This being true for all x in V implies that BA is a bounded linear operator, and ||BA|| ≤ ||B|| ||A||.

2.10.8.3. We are given that both A and B are bounded linear operators on an i.p. space V, so the result of problem 2.10.8.2 implies ||BA|| ≤ ||B|| ||A||. Because B is the algebraic inverse of A, BAx = x for all x in V. By definition of the norm of a bounded linear operator,

||x|| = ||BAx|| ≤ ||BA|| ||x|| ≤ ||B|| ||A|| ||x||.

Choosing any unit vector x implies that 1 ≤ ||B|| ||A||, hence ||B|| ≥ (||A||)^{−1}.

2.10.8.4. We are given that A is a one-to-one, bounded linear operator on an i.p. space V. Let || || denote the norm on V. Let x and y be any vectors in V and let α be any scalar. We will explain why the properties in (2.79) in Section 2.10 hold also for the normed linear space (V, ||| |||).

(a) Regarding property (2.79)(a), we have, using the corresponding property for the norm || ||, that |||αx||| = ||A(αx)|| = ||α(Ax)|| = |α| ||Ax|| = |α| |||x|||.

(b) Regarding property (2.79)(b), we have, using the triangle inequality for the norm || ||, that |||x + y||| = ||A(x + y)|| = ||Ax + Ay|| ≤ ||Ax|| + ||Ay|| = |||x||| + |||y|||.

(c) Regarding property (2.79)(c), we have, using the corresponding property for the norm || ||, that |||x||| = ||Ax|| ≥ 0, with equality only if Ax = 0. But A is a one-to-one linear operator, so Ax = 0 only if x = 0. So, |||x||| = ||Ax|| ≥ 0,


with equality only if x = 0.

2.10.8.5. For all x, y in Cn,

⟨x, A* y⟩ = ⟨Ax, y⟩ ≜ (Ax)^T y = x^T A^T y = x^T (A^T y) = ⟨x, A^T y⟩.

So, we need A* = A^T.

2.10.8.6. From the definition of xk → x∞ and the result of Example 2.48 in Section 2.10,

| ||xk|| − ||x∞|| | ≤ ||xk − x∞|| → 0, as k → ∞.

By the definition of convergence in R¹, ||xk|| → ||x∞||, as k → ∞.

2.10.8.7. We are given that xk ⇀ x∞ and yk → y∞. We calculate that the sequence of scalars ⟨xk, yk⟩ satisfies

|⟨xk, yk⟩ − ⟨x∞, y∞⟩| = |⟨xk, yk⟩ − ⟨xk, y∞⟩ + ⟨xk, y∞⟩ − ⟨x∞, y∞⟩|
≤ |⟨xk, yk⟩ − ⟨xk, y∞⟩| + |⟨xk, y∞⟩ − ⟨x∞, y∞⟩| = |⟨xk, yk − y∞⟩| + |⟨xk − x∞, y∞⟩|
≤ ||xk|| · ||yk − y∞|| + |⟨xk − x∞, y∞⟩|.

Both of these terms converge to 0 as k → ∞, but for two different reasons. Concerning the first term: because xk ⇀ x∞, we know that the sequence of norms {||xk||} is bounded, that is, there is an M ≥ 0 for which ||xk|| ≤ M for all k. So, ||xk|| · ||yk − y∞|| ≤ M ||yk − y∞|| → 0 as k → ∞. Concerning the second term: we were given that xk ⇀ x∞, and by definition of ⇀ this implies that for the fixed vector y∞ we have ⟨xk − x∞, y∞⟩ → 0 as k → ∞.

2.10.8.8. For all constants c1 and c2,

∫_{−1}^{1} |x² − c1 − c2 x|² dx = ∫_{−1}^{1} (x⁴ − 2c2 x³ − 2c1 x² + c2² x² − 2c1 c2 x + c1²) dx

= [ (1/5)x⁵ − (1/2)c2 x⁴ − (2/3)c1 x³ + (1/3)c2² x³ − c1 c2 x² + c1² x ]_{x=−1}^{x=1} = 2/5 − 0 − (4/3)c1 + (2/3)c2² − 0 + 2c1².

So,

∫_{−1}^{1} |x² − c1 − c2 x|² dx = 2/5 − (4/3)c1 + (2/3)c2² + 2c1² ≜ f(c1, c2),

considered as a function of the two variables c1 and c2, each of which is allowed to be any real number. To minimize f(c1, c2), first note that f(c1, c2) → ∞ as either |c1| → ∞ or |c2| → ∞, because of the terms 2c1² and (2/3)c2². Next, look for all critical points of f(c1, c2):

0 = ∂f/∂c1 = −4/3 + 4c1,
0 = ∂f/∂c2 = (4/3)c2,

so the only critical point is at (c1, c2) = (1/3, 0). We calculate that

f(1/3, 0) = 2/5 − 4/9 + 2·(1/9) = 8/45,


so (c1, c2) = (1/3, 0) gives the minimum value of ∫_{−1}^{1} |x² − c1 − c2 x|² dx.
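As an arithmetic check on 2.10.8.8, a coarse grid search over f(c1, c2) = 2/5 − (4/3)c1 + (2/3)c2² + 2c1² (a sketch, not part of the original solution) locates the minimizer near (1/3, 0):

```python
# Grid search check for 2.10.8.8; f is the integral as a function of (c1, c2).
def f(c1, c2):
    return 2.0 / 5.0 - (4.0 / 3.0) * c1 + (2.0 / 3.0) * c2 ** 2 + 2.0 * c1 ** 2

# Search the square [-1, 1] x [-1, 1] on a 0.01 grid.
best = min((f(i * 0.01, j * 0.01), i * 0.01, j * 0.01)
           for i in range(-100, 101) for j in range(-100, 101))
fmin, c1min, c2min = best   # expect c1min near 1/3, c2min = 0
```

The exact minimum value is f(1/3, 0) = 2/5 − 2/9 = 8/45.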

2.10.8.9. For all x in V, using the triangle inequality we have

||Ax|| ≜ ||λ1⟨x, u^(1)⟩u^(1) + ··· + λn⟨x, u^(n)⟩u^(n)|| ≤ ||λ1⟨x, u^(1)⟩u^(1)|| + ··· + ||λn⟨x, u^(n)⟩u^(n)||,

and then using (2.79)(a),

||Ax|| ≤ |λ1⟨x, u^(1)⟩| · ||u^(1)|| + ··· + |λn⟨x, u^(n)⟩| · ||u^(n)||,

and using the information that the u^(j) are unit vectors,

||Ax|| ≤ |λ1⟨x, u^(1)⟩| + ··· + |λn⟨x, u^(n)⟩| = |λ1| · |⟨x, u^(1)⟩| + ··· + |λn| · |⟨x, u^(n)⟩|,

and then, using the Cauchy-Schwarz inequality,

||Ax|| ≤ |λ1| · ||x|| ||u^(1)|| + ··· + |λn| · ||x|| ||u^(n)||,

and again using the information that ||u^(j)|| = 1,

||Ax|| ≤ |λ1| · ||x|| + ··· + |λn| · ||x||.

This being true for all x in V, it follows that A is bounded and ||A|| ≤ |λ1| + ··· + |λn|.

To go further, we want to find the value of ||A||, not just an upper bound on ||A||. How? Add more unit vectors to the set {u^(1), ..., u^(n)} to get an o.n. basis, {u^(1), ..., u^(n), ...}, for V. [We use the ellipsis ... to indicate that there may be finitely many or infinitely many more vectors in the basis.] Then, no matter how many more vectors there are in the o.n. basis for V, we have

x = Σ_i x_i u^(i),

where the sum may have a finite or infinite number of terms, depending on the dimension of V. We calculate using Parseval’s identity that

||x||² = ||Σ_i x_i u^(i)||² = Σ_i |x_i|²

and

Ax ≜ Σ_{j=1}^{n} λ_j ⟨x, u^(j)⟩ u^(j) = Σ_{j=1}^{n} λ_j ⟨Σ_i x_i u^(i), u^(j)⟩ u^(j).

Using orthogonality,

(⋆) Ax = Σ_{j=1}^{n} λ_j x_j u^(j).

Note that, even though x and ||x||² may involve infinitely many terms, orthogonality made Ax a finite sum. Again using Parseval’s identity,

||Ax||² = ||Σ_{j=1}^{n} λ_j x_j u^(j)||² = Σ_{j=1}^{n} |λ_j x_j|² = Σ_{j=1}^{n} |λ_j|² |x_j|².

But we were given that |λ1| ≥ |λ2| ≥ ... ≥ |λn|, so

||Ax||² ≤ Σ_{j=1}^{n} |λ1|² |x_j|² = |λ1|² Σ_{j=1}^{n} |x_j|² ≤ |λ1|² Σ_i |x_i|² = |λ1|² ||x||².


It follows that ||A|| ≤ |λ1|. But, in fact, choosing x = u^(1) = 1·u^(1) + 0·u^(2) + ..., we have, from (⋆), Ax = λ1 u^(1), so ||Ax||² = |λ1|²·1 = |λ1|²·||u^(1)||² = |λ1|²·||x||². It follows that ||A|| ≥ |λ1|. Combined with the previous result, we get ||A|| = |λ1|.

2.10.8.10. We will explain why every x in W2⊥ must be in W1⊥, and thus explain why W2⊥ ⊆ W1⊥. Choose any x in W2⊥. By definition of the latter,

(⋆) x ⊥ w2 for every w2 in W2.

But we were given that W1 ⊆ W2, that is, every vector w1 in W1 is also in W2. So, (⋆) applies to every w1 in W1, that is,

x ⊥ w1 for every w1 in W1.

By definition, x is in W1⊥, which is what we wanted to get.

2.10.8.11. For all x, y in H,

⟨x, (A^{−1})* y⟩ = ⟨A^{−1} x, y⟩.

Let w = A^{−1}x and v = (A*)^{−1}y, hence Aw = x and A*v = y. It follows that

⟨x, (A*)^{−1} y⟩ = ⟨x, v⟩ = ⟨Aw, v⟩ = ⟨w, A* v⟩ = ⟨w, A*(A*)^{−1} y⟩ = ⟨w, y⟩ = ⟨A^{−1} x, y⟩ = ⟨x, (A^{−1})* y⟩.

To summarize, for all x, y in H,

⟨x, (A*)^{−1} y⟩ = ⟨x, (A^{−1})* y⟩.

It follows that (A*)^{−1} = (A^{−1})*.

2.10.8.12. First, we will explain why A is one-to-one: If Ax1 = Ax2, then 0 = ||0|| = ||A(x1 − x2)|| ≥ α||x1 − x2|| ≥ 0. It follows that 0 = α||x1 − x2||. Because α > 0, it follows that ||x1 − x2|| = 0, which implies x1 − x2 = 0, that is, x1 = x2. So, yes, A is one-to-one. It follows that A has a linear, algebraic inverse A^{−1}.

For all x in H, x = A(A^{−1}x), hence ||x|| = ||A(A^{−1}x)|| ≥ α||A^{−1}x||, hence

||A^{−1}x|| ≤ (1/α)||x||.

This being true for all x in H, it follows that A^{−1} is bounded and

||A^{−1}|| ≤ 1/α.

2.10.8.13. For each x = [x1 ... xn]^T in Cn,

||Ax||² = Σ_{j=1}^{n} |Σ_{k=1}^{n} a_jk x_k|².

For each index j = 1, ..., n, let A_{j∗} denote the vector formed from the j-th row of A. Then

Σ_{k=1}^{n} a_jk x_k = A_{j∗} • x,

so the Cauchy-Schwarz inequality implies

|Σ_{k=1}^{n} a_jk x_k| ≤ ||A_{j∗}|| ||x||,

hence

|Σ_{k=1}^{n} a_jk x_k|² ≤ ||A_{j∗}||² ||x||² = (Σ_{k=1}^{n} |a_jk|²) ||x||².

It follows that

||Ax||² = Σ_{j=1}^{n} |Σ_{k=1}^{n} a_jk x_k|² ≤ Σ_{j=1}^{n} (Σ_{k=1}^{n} |a_jk|²) ||x||²,

hence

||Ax||² ≤ ||x||² · (Σ_{j=1}^{n} Σ_{k=1}^{n} |a_jk|²).

This implies

||Ax|| ≤ ||x|| · (Σ_{j=1}^{n} Σ_{k=1}^{n} |a_jk|²)^{1/2}.

Because this is true for all x in Cn, A is a bounded linear operator and

||A|| ≤ (Σ_{j=1}^{n} Σ_{k=1}^{n} |a_jk|²)^{1/2} ≜ ||A||_F.

2.10.8.14. To start the Gram-Schmidt process, let

g1(x) ≡ f0(x) ≡ 1,  r11 ≜ ||g1|| = (∫_0^1 |g1|² dx)^{1/2} = (∫_0^1 1 dx)^{1/2} = 1,

and

q1(x) = r11^{−1} g1(x) = (1/1)·g1(x) ≡ 1.

Next, let

g2(x) ≜ f1(x) − ⟨f1, q1⟩q1(x) = x − (∫_0^1 x·1 dx)·1 = x − 1/2,

r22 ≜ ||g2|| = (∫_0^1 |g2|² dx)^{1/2} = (∫_0^1 (x − 1/2)² dx)^{1/2} = (∫_0^1 (x² − x + 1/4) dx)^{1/2} = (1/3 − 1/2 + 1/4)^{1/2} = 1/√12,

and

q2(x) = r22^{−1} g2(x) = √12 · (x − 1/2).

Further, let

g3(x) ≜ f2(x) − ⟨f2, q1⟩q1 − ⟨f2, q2⟩q2 = x² − (∫_0^1 x²·1 dx)·1 − (∫_0^1 x²·√12(x − 1/2) dx)·√12(x − 1/2)

= x² − 1/3 − 12(∫_0^1 x²(x − 1/2) dx)(x − 1/2) = x² − 1/3 − 12·(1/4 − 1/6)(x − 1/2) = x² − x + 1/6,

r33 ≜ ||g3|| = (∫_0^1 |g3|² dx)^{1/2} = (∫_0^1 (x² − x + 1/6)² dx)^{1/2} = (∫_0^1 (x⁴ − 2x³ + (4/3)x² − (1/3)x + 1/36) dx)^{1/2}

= (1/5 − 1/2 + 4/9 − 1/6 + 1/36)^{1/2} = (1/180)^{1/2} = 1/(6√5),

and

q3(x) = r33^{−1} g3(x) = 6√5 (x² − x + 1/6).

According to Theorem 2.16 in Section 2.3, the o.n. set

{ 1, √12 (x − 1/2), 6√5 (x² − x + 1/6) }

has the same span as the given set of vectors {1, x, x²}.
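The orthonormality of the set produced in 2.10.8.14 can be confirmed numerically; the sketch below approximates the inner products on [0, 1] with a midpoint rule. (The coefficient 6√5 normalizes g3 = x² − x + 1/6, since ∫_0^1 g3² dx = 1/180.)

```python
import math

# The three functions from the Gram-Schmidt computation in 2.10.8.14.
q1 = lambda x: 1.0
q2 = lambda x: math.sqrt(12.0) * (x - 0.5)
q3 = lambda x: 6.0 * math.sqrt(5.0) * (x * x - x + 1.0 / 6.0)

def ip(f, g, N=20000):
    # Midpoint-rule approximation of the inner product on [0, 1].
    h = 1.0 / N
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(N)) * h

# Gram matrix; should be (numerically) the 3x3 identity.
gram = [[ip(a, b) for b in (q1, q2, q3)] for a in (q1, q2, q3)]
```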


Chapter Three

Section 3.1.4

3.1.4.1. µ(t) = exp(∫1 dt) = e^t ⇒ (d/dt)[e^t y] = e^t ẏ + e^t y = e^t e^{−2t} = e^{−t} ⇒ e^t y = ∫e^{−t} dt = −e^{−t} + c ⇒ y = e^{−t}(−e^{−t} + c) ⇒ General solution of the ODE is y = −e^{−2t} + c e^{−t}, where c = arb. const.

3.1.4.2. ODE in standard form is ẏ + t^{−1}y = t^{−1}e^{−2t}: µ(t) = exp(∫t^{−1} dt) = exp(ln t) = t ⇒ (d/dt)[t y] = tẏ + y = e^{−2t} ⇒ t y = ∫e^{−2t} dt = −(1/2)e^{−2t} + c ⇒ y = t^{−1}(−(1/2)e^{−2t} + c) ⇒ General solution of the ODE is y = −(1/2)t^{−1}e^{−2t} + c t^{−1}, where c = arb. const.

3.1.4.3. ODE in standard form is ẏ − 3t^{−1}y = t³: µ(t) = exp(∫−3t^{−1} dt) = exp(−3 ln t) = exp(ln t^{−3}) = t^{−3} ⇒ (d/dt)[t^{−3} y] = t^{−3}ẏ − 3t^{−4}y = 1 ⇒ t^{−3}y = ∫1 dt = t + c ⇒ y = t³(t + c) ⇒ General solution of the ODE is y = t⁴ + c t³, where c = arb. const.

3.1.4.4. ODE is already in standard form: µ(t) = exp(−∫dt/(t + 1)) = exp(−ln(t + 1)) = exp(ln(t + 1)^{−1}) = (t + 1)^{−1} ⇒ (d/dt)[(t + 1)^{−1} y] = (t + 1)^{−1}ẏ − (t + 1)^{−2}y = (t² − t − 2)/(t + 1) ⇒ (t + 1)^{−1}y = ∫(t² − t − 2)/(t + 1) dt = ∫(t − 2) dt + c = (1/2)t² − 2t + c ⇒ y = (t + 1)((1/2)t² − 2t + c) ⇒ General solution of the ODE is y = (1/2)(t + 1)(t² − 4t) + c(t + 1), where c = arb. const.

3.1.4.5. The ODE has a particular solution yP(t) = t² ln t. The corresponding homogeneous ODE, tẏ − 2y = 0, that is, ẏ − (2/t)y = 0, has general solution y = cµ(t)^{−1} = c exp(∫2t^{−1} dt) = c exp(2 ln t) = c t², so the general solution of the original ODE is y = yP(t) + yh(t) = c t² + t² ln t, where c = arb. const. Satisfying the IC: 3 = y(1) = c·1² + 1²·ln 1 = c + 0 = c. Solution of the IVP is y = 3t² + t² ln t = t²(3 + ln t).

3.1.4.6. µ(t) = exp(∫3 dt) = e^{3t} ⇒ (d/dt)[e^{3t} y] = e^{3t}ẏ + 3e^{3t}y = e^{3t}e^{−t} = e^{2t} ⇒ e^{3t}y = ∫e^{2t} dt = (1/2)e^{2t} + c ⇒ y = e^{−3t}((1/2)e^{2t} + c) ⇒ General solution of the ODE is y = (1/2)e^{−t} + c e^{−3t}, where c = arb. const. Satisfying the IC: −1 = y(0) = 1/2 + c ⇒ c = −3/2. Solution of the IVP is

y = (1/2)e^{−t} − (3/2)e^{−3t}.
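The answer to 3.1.4.6 can be verified by substituting back into the ODE; a residual-check sketch (derivative taken analytically):

```python
import math

# 3.1.4.6: y(t) = (1/2)e^(-t) - (3/2)e^(-3t) should satisfy y' + 3y = e^(-t), y(0) = -1.
def y(t):
    return 0.5 * math.exp(-t) - 1.5 * math.exp(-3.0 * t)

def ydot(t):
    return -0.5 * math.exp(-t) + 4.5 * math.exp(-3.0 * t)

max_residual = max(abs(ydot(t) + 3.0 * y(t) - math.exp(-t))
                   for t in [0.1 * k for k in range(51)])
ic_error = abs(y(0.0) - (-1.0))
```

Both the residual and the initial-condition error should vanish to machine precision.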

© Larry Turyn, October 13, 2013


3.1.4.7. ODE in standard form is ẏ + (3/t)y = e^{−2t}/t³ ⇒ µ(t) = exp(∫3t^{−1} dt) = exp(3 ln t) = exp(ln t³) = t³ ⇒ (d/dt)[t³ y] = t³ẏ + 3t²y = e^{−2t} ⇒ t³y = ∫e^{−2t} dt = −(1/2)e^{−2t} + c ⇒ General solution of the ODE is y = −(1/2)t^{−3}e^{−2t} + c t^{−3}, where c = arb. const. Satisfying the IC: −1 = y(1) = −(1/2)e^{−2} + c ⇒ c = −1 + (1/2)e^{−2}. Solution of the IVP is

y = −(1/2)t^{−3}e^{−2t} + (−1 + (1/2)e^{−2})t^{−3}.

3.1.4.8. ODE in standard form is dy/dx − (3/x)y = x³ ⇒ µ(x) = exp(−∫(3/x) dx) = exp(−3 ln x) = exp(ln x^{−3}) = x^{−3} ⇒ (d/dx)[x^{−3} y] = x^{−3}(dy/dx) − 3x^{−4}y = 1 ⇒ x^{−3}y = ∫1 dx = x + c ⇒ General solution of the ODE is y = x⁴ + c x³, where c = arb. const. Satisfying the IC: −1 = y(1) = 1 + c ⇒ c = −2. Solution of the IVP is y = x⁴ − 2x³.

3.1.4.9. ODE in standard form is ẏ + ((t − 1)/t)y = −t ⇒ µ(t) = exp(∫((t − 1)/t) dt) = exp(∫(1 − t^{−1}) dt) = exp(t − ln t) = t^{−1}e^t ⇒ (d/dt)[t^{−1}e^t y] = t^{−1}e^t ẏ + ((t − 1)/t)·t^{−1}e^t y = −t^{−1}e^t · t = −e^t ⇒ t^{−1}e^t y = −∫e^t dt = −e^t + c ⇒ General solution of the ODE is y = −t + c t e^{−t}, where c = arb. const. Satisfying the IC: −2 = y(1) = −1 + c e^{−1} ⇒ c = −e. Solution of the IVP is y = −t − e·t e^{−t} = −t(1 + e^{−(t−1)}).

3.1.4.10. ODE in standard form is ẏ + (1/t)y = −3: µ(t) = exp(∫(1/t) dt) = exp(ln t) = t ⇒ (d/dt)[t y] = tẏ + y = −3t ⇒ t y = ∫−3t dt = −(3/2)t² + c ⇒ General solution of the ODE is y = −(3/2)t + c t^{−1}, where c = arb. const. Satisfying the IC: 1 = y(1) = −3/2 + c ⇒ c = 5/2. Solution of the IVP is y = −(3/2)t + 5/(2t).
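Residual-check sketches for 3.1.4.9 and 3.1.4.10 (derivatives taken analytically):

```python
import math

# 3.1.4.9: y = -t(1 + e^(1-t)) should satisfy y' + ((t-1)/t) y = -t, with y(1) = -2.
def y9(t):
    return -t * (1.0 + math.exp(1.0 - t))

def y9dot(t):
    return -(1.0 + math.exp(1.0 - t)) + t * math.exp(1.0 - t)

r9 = max(abs(y9dot(t) + (t - 1.0) / t * y9(t) + t)
         for t in [0.5 + 0.1 * k for k in range(30)])

# 3.1.4.10: y = -(3/2)t + 5/(2t) should satisfy y' + y/t = -3, with y(1) = 1.
def y10(t):
    return -1.5 * t + 2.5 / t

def y10dot(t):
    return -1.5 - 2.5 / t ** 2

r10 = max(abs(y10dot(t) + y10(t) / t + 3.0)
          for t in [0.5 + 0.1 * k for k in range(30)])
```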

3.1.4.11. ODE in standard form is Ȧ + (6/(100 − 2t))A = 4 ⇒ µ(t) = exp(∫6/(100 − 2t) dt) = exp(−3 ln(100 − 2t)) = exp(ln((100 − 2t)^{−3})) = (100 − 2t)^{−3} ⇒ (d/dt)[(100 − 2t)^{−3} A] = (100 − 2t)^{−3}Ȧ + 6(100 − 2t)^{−4}A = 4(100 − 2t)^{−3} ⇒ (100 − 2t)^{−3}A = ∫4(100 − 2t)^{−3} dt = (100 − 2t)^{−2} + c ⇒ General solution of the ODE is A = (100 − 2t) + c(100 − 2t)³, where c = arb. const. Satisfying the IC: 10 = A(0) = 100 + c·100³ ⇒ c = −90·100^{−3}. Solution of the IVP is A = (100 − 2t) − 90·100^{−3}(100 − 2t)³, that is,

A = 100(1 − 0.02t) − 90(1 − 0.02t)³.

3.1.4.12. µ(t) = exp(∫3 dt) = e^{3t} ⇒ (d/dt)[e^{3t} y] = e^{3t}ẏ + 3e^{3t}y = e^{3t}·5te^{−t} = 5te^{2t} ⇒ Using integration by parts, e^{3t}y = ∫5te^{2t} dt = 5(t·(1/2)e^{2t} − (1/4)e^{2t}) + c = (5/2)t e^{2t} − (5/4)e^{2t} + c ⇒ General solution of the ODE is y = (5/2)t e^{−t} − (5/4)e^{−t} + c e^{−3t}, where c = arb. const. Satisfying the IC: 1 = y(0) = −5/4 + c ⇒ c = 9/4. Solution of the IVP is

y = (5/2)t e^{−t} − (5/4)e^{−t} + (9/4)e^{−3t}.

d  3t  e y = e3t y˙ + 3e3t y = e3t · 2t = 2te3t dt ´ ⇒ Using integration by parts, e3t y = 2te3t dt = 2t ·

 3 dt

= e3t



⇒ General solution of the ODE is y = Satisfying the IC: 0 = y(1) =

2 3



2 9

2 3

t−

2 9

+ ce

1 3 −3t

2 3

´

e3t dt =

2 3

t e3t − 92 e3t + c

, where c =arb. const.

+ ce−3 ⇒ c = − 49 e3 . y=

e3t −

⇒ Solution of the IVP is

2 2 4 t − − e−3(t−1) . 3 9 9

ˆ  3 y = 5t ⇒ µ(t)=exp 3t−1 dt =exp(3 ln t)=exp(ln t3 )= t3 t ˆ ⇒ t3 y = 5t4 dt = t5 + c

3.1.4.14. ODE in standard form is y˙ +

d 3  t y = t3 y˙ + 3t2 y = 5t4 dt ⇒ General solution of the ODE is y = t2 + c t−3 , where c =arb. const.



Satisfying the IC: −4 = y(2) = 4 + c 2−3 ˆ 3.1.4.15. µ(t) = exp

2t dt 1 + t2



⇒ c = −64. Solution of the IVP is y = t2 − 64 t−3 .

 = exp ln(1 + t2 ) = (1 + t2 )

ˆ  1 1 d (1 + t2 )y = (1 + t2 )y˙ + 2t y = (1 + t2 ) · = (t−1 + t) ⇒ (1 + t2 )y = (t−1 + t) dt = ln t + t2 + c dt t 2   2 −1 1 2 ln t + 2 t + c , where c =arb. const. ⇒ General solution of the ODE is y = (1 + t ) ⇒

Satisfying the IC: −1 = y(2) = 15 (ln 2 + 2 + c) ⇒ c = −7 − ln 2.   ⇒ Solution of the IVP is y = (1 + t2 )−1 − 7 − ln 2 + ln t + 12 t2 . 3.1.4.16. ODE in standard form is y˙ +

1 4 y = : µ(t)=exp t t



 1 dt = exp(ln t)= t t ˆ ⇒ t y = 4 dt = 4t + c

d  t y = ty˙ + y = 4, back to where we started! dt ⇒ General solution of the ODE is y = 4 + c t−1 , where c =arb. const. ⇒

Satisfying the IC: 3 = y(1) = 4 + c

⇒ c = −1. Solution of the IVP is y = 4 − t−1 .

ˆ  ˆ  t+1 dt = exp 3.1.4.17. µ(t) = exp (1 + t−1 ) dt = exp (t + ln t) = et · exp (ln t)) = t et t  t  ´ d ⇒ dt t e y = t et y˙ + (t + 1)et y = t et · 1t e−2t = e−t ⇒ t et y = e−t dt = −e−t + c ⇒ General solution of the ODE is y = −t−1 e−2t + c t−1 e−t , where c =arb. const. Satisfying the IC: 0 = y(1) = −e−2 + c e−1 ⇒ c = e−1 .

⇒ Solution of the IVP is

y = −t−1 e−2t + e−1 t−1 e−t = −t−1 e−2t + t−1 e−(t+1) . ˆ 3.1.4.18. µ(t)=exp Integration by parts

 1 d  dt = exp(ln t)= t ⇒ t y = ty˙ + y = t cos t t dt ˆ ˆ ⇒ t y = t cos t dt = t sin t − sin t dt = t sin t + cos t + c

c Larry

Turyn, October 13, 2013

page 4

⇒ General solution of the ODE is y = sin t + t−1 cos t + c t−1 , where c =arb. const. Satisfying the IC: π 2 =1+0+ c 2 π

0=y Solution of the IVP is

π ⇒c=− . 2

π −1 t . 2  t−1 → 0 as t → ∞, hence are transient. The steady state solution is y = sin t + t−1 cos t −

The terms t−1 cos t −

π 2

yS (t) = sin t.  ˆ d t  1 dt = et ⇒ 3.1.4.19. ODE in standard form is y˙ + y = sin t: ⇒ µ(t) = exp e y = et y˙ + et y = et sin t dt ˆ et (− cos t + sin t) + c ⇒ et y = et sin t dt = 2 ⇒ General solution of the ODE is y = 12 (− cos t + sin t) + c e−t , where c =arb. const. Satisfying the IC: 1 = y(0) =

1 (−1 + 0) + c 2

⇒c=

3 . 2

Solution of the IVP is

 3 1 − cos t + sin t + e−t . 2 2 → 0 as t → ∞, hence is transient. The steady state solution is y=

The term

3 2

e−t

 1 − cos t + sin t . 2 ˆ  1 1 3.1.4.20. ODE in standard form is y˙ + y = −1 + 2 sin t: µ(t) = exp dt = et/2 2 2 d  t/2  ⇒ e y = et/2 y˙ + et/2 y = et/2 (−1 + 2 sin t) dt ˆ  8et/2  1 − cos t + sin t + c ⇒ et/2 y = et/2 (−1 + 2 sin t) dt = −2et/2 + 5 2   8 1 ⇒ General solution of the ODE is y = −2 + 5 − cos t + 2 sin t + c e−t/2 , where c =arb. const. Satisfying the IC: 18 8 . 0 = y(0) = −2 − + c ⇒ c = 5 5 Solution of the IVP is  18 −t/2 8 1 y = −2 + − cos t + sin t + e . 5 2 5 The term 18 e−t/2 → 0 as t → ∞, hence is transient. The steady state solution is 5 yS (t) =

yS (t) = −2 −

8 4 cos t + sin t. 5 5

 ´ d t  3.1.4.21. µ(t) = exp 1 dt = et ⇒ e y = et y˙ + et y = et cos 2t dt ˆ  et  cos 2t + 2 sin 2t + c ⇒ et y = et cos 2t dt = 5  ⇒ General solution of the ODE is y = 15 cos 2t + 2 sin 2t + c e−t , where c =arb. const. Satisfying the IC: 0 = y(0) =

1 +c 5

1 ⇒c=− . 5

Solution of the IVP is

 1 1 cos 2t + 2 sin 2t − e−t . 5 5 → 0 as t → ∞, hence is transient. The steady state solution is y=

The term − 15 e−t

yS (t) =

 1 cos 2t + 2 sin 2t . 5 c Larry

Turyn, October 13, 2013

page 5



ˆ −α dt

3.1.4.22. Case 1: Assume α 6= 0. Then µ(t) = exp

= e−αt

d  −αt  e y = e−αt y˙ − αe−αt y = t e−αt dt ˆ ˆ 1 −αt 1 1 1 e dt = − t e−αt − 2 e−αt + c ⇒ e−αt y = t e−αt dt = − e−αt · t + α α α α ⇒ General solution of the ODE is y = − α1 t − α12 + c eαt , where c =arb. const. Satisfying the IC: ⇒

3 = y(0) = − Solution of the IVP is y=−

1 +c α2

⇒c=3+

1 . α2

 1 1 1  t − 2 + 3 + 2 eαt , α α α

in the case when α 6= 0. Case 2: Assume α = 0. Then the ODE is y˙ = t, which we solve using direct integration: ˆ 1 y = t dt = t2 + c, where c =arb. const. Satisfying the IC: 3 = y(0) = c. Solution of the IVP is 2 y=

1 2 t + 3, 2

in the case when α = 0. ˆ  d  αt  3.1.4.23. (a) µ(t) = exp α dt = eαt ⇒ e y = eαt y˙ + αeαt y = 2 eαt dt ˆ 2 ⇒ eαt y = 2 eαt dt = eαt + c ⇒ General solution of the ODE is y = α2 + c e−αt , where c =arb. const. Satisfying α the IC: 2 2 1 = y(0) = + c ⇒ c = 1 − . α α Solution of the IVP is 2  2  −αt y = + 1− e . α α   (b) Solve 2 = y(1) = α2 + 1 − α2 e−α , that is, after multiplying through by α > 0 and rearranging terms, 0 = f (α) , 2 − 2α + (α − 2)e−α . A graphing calculator and successive zooming in gives estimates 0.64893617 0.64361702 0.64361702 0.64378324 0.64378324 0.64379363 0.64379883 0.64379753 0.64379769 0.64379777 0.64379775 0.64379776 so we guess that α ≈ 0.64379776. E0 R 3.1.4.24. ODE in standard form is I˙ + I = L L ⇒

ˆ ⇒µ(t) = exp

d  Rt/L  R E0 Rt/L e I = eRt/L I˙ + eRt/L I = e dt L L

ˆ ⇒ eRt/L I =

R dt L



= eRt/L

E0 Rt/L E0 Rt/L e dt = e +c L R

c Larry

Turyn, October 13, 2013

page 6

⇒ General solution of the ODE is I =

E0 R

+ c e−Rt/L , where c =arb. const. Satisfying the IC:

0 = I(0) =

E0 +c R

⇒c=−

E0 . R

Solution of the IVP is the current as a function of time:  E0  I= 1 − e−Rt/L . R E0 . The “rise time," τ , that is, the time it takes for In this problem, the steady state current is limt→∞ I(t) = R −1 E0 the current to reach the value (1 − e ) , satisfies R   E0 E0  = I(τ ) = 1 − e−Rτ /L , 1 − e−1 R R which is equivalent to Rτ /L = 1, hence the rise time is τ =

L . R

3.1.4.25. ODE in standard form is T˙ + α T = α M , where α > 0 and M are constants ⇒µ(t)=exp ˆ d  αt  ⇒ e T = eαt T˙ + α eαt T = α M eαt ⇒ eαt T = α M eαt dt = M eαt + c dt ⇒ General solution of the ODE is T = M + c e−αt , where c =arb. const. Satisfying the IC: T0 = T (0) = M + c



 α dt =eαt

⇒ c = T0 − M.

Solution of the IVP is the object’s temperature as a function of time: T = M + (T0 − M )e−αt . We are given that the temperature of the medium is M = 20◦ C. Other given information is found in the table below, assuming t is measured in minutes after 1:00 pm. and T , the temperature of the object, is measured in ◦ C.

(?)

T ime 0 3 4

Object0 s T emperature T0 250 200

Here T0 is the unknown temperature at 1:00 pm. that we are to solve for. Substituting M = 20 and the known data into the solution of the ODE, we get   250 = 20 + (T0 − 20)e−3α , −4α 200 = 20 + (T0 − 20)e which implies 230e3α = (T0 − 20) = 180e4α , hence 230 e4α = 3α = eα . 180 e 230 23 = ln ≈ 0.245122458. Substitute 180 18 this into either of the two equations in (?), for example, the first, to get So, α = ln

(T0 − 20) = 230 eα

3

= 230 ·

 23 3 18

≈ 479.8371056

so the temperature at 1:00 pm. was T0 ≈ 499.8371056 ≈ 500◦ C. 3.1.4.26. ODE in standard form is T˙ + α T = α M , where α > 0 and M are constants ⇒µ(t)=exp ˆ d  αt  ⇒ e T = eαt T˙ + α eαt T = α M eαt ⇒ eαt T = α M eαt dt = M eαt + c dt c Larry



 α dt =eαt

Turyn, October 13, 2013

page 7

⇒ General solution of the ODE is T = M + c e−αt , where c =arb. const. Satisfying the IC: T0 = T (0) = M + c

⇒ c = T0 − M.

Solution of the IVP is the cake’s temperature as a function of time: T = M + (T0 − M )e−αt . We are given that the temperature of the medium is M = 65◦ F. Other given information is found in the table below, assuming t is measured in minutes after the time when the cake was removed from the oven and T , the temperature of the cake, is measured in ◦ F. T ime 0 t? 15

Cake0 s T emperature 360 100 80

Substituting M = 65 and the known data into the solution of the ODE, we get   360 = T0 , 80 = 65 + (T0 − 65)e−15α   15 15 which implies 80 = 65 + (360 − 65)e−15α , ⇒ 295 = ln e−15α = −15α. = e−15α ⇒ ln 295   1 15 So, α = − ln ≈ 0.1985950103. Substitute this into the solution of the IVP to get 15 295 ?

100 = T (t? ) = 65 + 295e−αt



? 35 = e−αt 295

hence

 35  1 ln ≈10.73353903. 0.1985950103 295 So, the temperature was 100◦ F. at about approximately 10 minutes and 44 second after the cake was removed from the oven. It turns out that there is a lot of practical science, especially chemistry, involved in baking. There are often fairly precise rules about when to do steps in creating baked goods. t? =−

3.1.4.27. ODE in standard form is T˙ + α T = α M , where α > 0 and M re constants ⇒µ(t)=exp ˆ d  αt  ⇒ e T = eαt T˙ + α eαt T = α M eαt ⇒ eαt T = α M eαt dt = M eαt + c dt ⇒ General solution of the ODE is T = M + c e−αt , where c =arb. const. Satisfying the IC: T0 = T (0) = M + c



 α dt =eαt

⇒ c = T0 − M.
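The numbers in 3.1.4.25 and 3.1.4.26 can be reproduced with a short script (a sketch using the worked formulas above):

```python
import math

# 3.1.4.25: alpha = ln(23/18), then T0 = 20 + 230*(23/18)^3 (≈ 500 C).
alpha25 = math.log(23.0 / 18.0)
T0 = 20.0 + 230.0 * (23.0 / 18.0) ** 3

# 3.1.4.26: alpha = -(1/15) ln(15/295), then t* from 100 = 65 + 295 e^(-alpha t*).
alpha26 = -math.log(15.0 / 295.0) / 15.0
t_star = -math.log(35.0 / 295.0) / alpha26   # ≈ 10.73 minutes
```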

Solution of the IVP is the person’s temperature as a function of time: T = M + (T0 − M )e−αt . We are given that the temperature of the medium is M = 21.1◦ C. Other given information is found in the table below, assuming t is measured in hours after 11 am. and T , the temperature of the person, is measured in ◦ C. T ime t? 0 0.5

P erson0 s T emperature 36.95 34.8 34.3

Substituting M = 21.1 and the known data into the solution of the ODE, we get   34.8 = T0 , 34.3 = 21.1 + (T0 − 21.1)e−0.5α   13.2 which implies 34.3 = 21.1 + (34.8 − 21.1)e−0.5α , ⇒ 13.7 = e−0.5α ⇒ ln 13.2 = ln e−0.5α = −0.5α. 13.7 c Larry

Turyn, October 13, 2013

page 8

 13.2  1 ln ≈ 0.0743580065. 0.5 13.7 Substitute this into the solution of the IVP to get So, α = −

?

36.95 = T (t? ) = 21.1 + 13.7e−αt

? 15.85 = e−αt 13.7

So, α = −(1/0.5) ln(13.2/13.7) ≈ 0.0743580065. Substitute this into the solution of the IVP to get

36.95 = T(t*) = 21.1 + 13.7e^{−αt*}, so 15.85/13.7 = e^{−αt*},

hence

t* = −(1/0.0743580065) ln(15.85/13.7) ≈ −1.960430011 hours.

The person died about 1.960430011 hours before 11 am, that is, at about 9:02 am.

Solution of the IVP is the person’s temperature as a function of time: T = M + (T0 − M )e−αt . We are given that the temperature of the medium is M = 21.1◦ C. Other given information is found in the table below, assuming t is measured in hours after 11 am. and T , the temperature of the person, is measured in ◦ C. The person’s temperature when alive was T ? , and we will consider two scenarios based on the value of T ? . T ime t? 0 0.5

P erson0 s T emperature T? 34.8 34.3

Substituting M = 21.1 and the known data into the solution of the ODE, we get   34.8 = T0 , −0.5α 34.3 = 21.1 + (T0 − 21.1)e   13.2 which implies 34.3 = 21.1 + (34.8 − 21.1)e−0.5α , ⇒ 13.7 = e−0.5α ⇒ ln 13.2 = ln e−0.5α = −0.5α. 13.7  13.2  1 ln ≈ 0.0743580065. So, α = − 0.5 13.7 Datum #1: Assuming the person’s temperature when alive was T ? = 36.6◦ C. will give us one scenario for when the person died: Substitute the value of α into the solution of the IVP to get ?

36.6 = T (t? ) = 21.1 + 13.7e−αt



? 15.5 = e−αt 13.7

hence

 15.5  1 ln ≈−1.660133144 hours. 0.0743580065 13.7 Assuming T ? = 36.6◦ C., the person died about 1.660133144 hours before 11 am., that is, at about 9:20:24 am. t? =−

Datum #2: Assuming the person’s temperature when alive was T ? = 37.2◦ C. will give us one scenario for when the person died: Substitute the value of α into the solution of the IVP to get ?

37.2 = T (t? ) = 21.1 + 13.7e−αt



? 16.1 = e−αt 13.7

hence

 16.1  1 ln ≈−2.170895197 hours. 0.0743580065 13.7 Assuming T ? = 37.2◦ C., the person died about 2.170895197 hours before 11 am., that is, at about 8:49:45 am. t? =−

For living person’s temperature between 36.6◦ C. and 37.2◦ C. we would get a conclusion between the two extreme scenarios. So, the person is estimated to have died between about 8:49 am. and 9:20 am.

c Larry

Turyn, October 13, 2013

page 9

Note that the conclusion for problem 3.1.4.27, where we assumed a living person’s temperature is 36.95◦ C., gives a conclusion between the two extremes in problem 3.1.4.28. This makes sense. One of the points of this problem is that scientific evidence should give a range of values involving rather than a single definite conclusion. Notice, too, that although 36.95◦ C. is pretty much halfway between 36.6◦ C. and 37.2◦ C., the estimated time of death being about 9:02 am. in problem 3.1.4.27 is not halfway between 8:49 am. and 9:20 am. So, uncertainty in data may lead to a nonlinear effect. 3.1.4.29. Assume that y measures the vertical displacement of the object down from the point from which it was d released. Newton’s second law of motion says that [ mv ] = ΣF orces. Here, the forces are (1) the force of gravity, dt F = mg, and (2) the air resistance force, F = −4v, where v = y˙ is the velocity of the object. The air resistance force opposes the motion downward. So, d [ mv ] = mg − 4v dt We are also given that m = 0.5 kg, g is approximately 9.81 m/s2 , and that the object is released from rest, that is, v(0) = 0. Assuming instead that g is exactly 9.81 m/s2 , v satisfies the IVP v˙ = 9.81 − 8v, v(0) = 0. In standard form this  ˆ 8 dt = e8t . Multiply through by the integrating factor to get is v˙ + 8v = 9.81, so µ(t) = exp d 8t [ e v ] = e8t v˙ + 8e8t v = 9.81e8t , dt

ˆ

9.81 8t e + c. The 8 9.81 + c, so c = − 8 and the solution

and then take the indefinite integral of both sides with respect to t to get e8t v = general solution of the ODE is v = of the IVP is

9.81 8

+ c e−8t . The IC requires 0 = v(0) =

9.81 8

9.81e8t dt =

 9.81 1 − e−8t . 8 is transient, so the steady state velocity is v=

The term − 9.81 e−8t 8

vS =

9.81 ≈ 1.22625 m/s. 8

This is also known as the terminal velocity. 3.1.4.30. The water exerts a resistive force proportional to the speed of the canoe but that there is no wind or water current, so the only force on the horizontal motion of the canoe is that resistive force. Let y be the displacement of the canoe past the finish line, so y(0) = 0 if t = 0 is the time the canoe passed the finish line. Newton’s second law of motion gives mv˙ = −βv, where β is a positive constant of proportionality for the resistive force. The ODE β d . The integrating factor is µ(t) = eγt , so dt [ eγt v ] = eγt v˙ + γeγt v = 0, in standard form is v˙ + γv = 0, where γ = m γt −γt which implies e v = c, so the general solution is v = c e , where c =arbitrary constant. Further, y˙ = v = c e−γt implies y(t) = − γc e−γt + c2 , where c2 =arbitrary constant. We have a table of data for quantities discussed in the problem: T ime 0 T 2T

y(t) 0 yT y2T

and y∞ , limt→∞ y(t).  c c The IC gives 0 = y(0) = − + c2 , hence c2 = and y(t) = γc 1 − e−γt . We have γ γ    −γT c  yT = y(T ) = γ 1 − e   ,  y2T = y(2T ) = γc 1 − e−2γT hence

 −γT =1−  e 

γ c

 

yT

e−γT )2 = e−2γT = 1 −

γ c

y2T

.



c Larry

Turyn, October 13, 2013

page 10

Combining the two equations gives  2 γ γ 1 − yT = e−γT )2 = e−2γT = 1 − y2T , c c hence

γ c

yT

2

−2

This implies

γ c

yT



+1=1−

γ y2T . c

γ 2 yT − 2yT + y2T = 0. c

(?) On the other hand,

y∞ , lim y(t) = lim t→∞

t→∞

 c c 1 − e−γt = γ γ

Substitute this into (?) to get 1 · yT2 − 2yT + y2T = 0, y∞ hence

1 · yT2 = 2yT − y2T y∞

So, y∞ =

yT2 , 2yT − y2T

as we were asked to derive. The strange thing about this derivation is that we didn’t have to bother to solve for the constant c using the IC for v: v0 = v(0) = c. 3.1.4.31. (a) Let A be the number of acres occupied by the plant and t be measured in years. The two given ˙ (i) −10, because goats are consuming the plant, and (ii) kA, the increase of the assumptions give two terms in A: plant at a rate proportional to the current acreage in the absence of goats. So, A˙ = −10 + kA, where k is a positive constant. (b) In standard form the ODE is A˙ − kA = −10, so the integrating factor µ(t) = e−kt gives d −kt [ e A ] = e−kt A˙ − ke−kt A = −10e−kt , dt

ˆ

and then take the indefinite integral of both sides with respect to t to get e−kt A =

−10e−kt dt =

10 −kt e + c. The k

general solution of the ODE is A=

10 + c ekt . k
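The y_∞ formula derived in Problem 3.1.4.30 can be sanity-checked numerically. This sketch is not part of the manual; the values gamma = 0.5, c = 3.0, and T = 2.0 are arbitrary test values, not data from the problem.

```python
import math

# For y(t) = (c/gamma)*(1 - exp(-gamma*t)), the limit y_inf = c/gamma
# should equal y_T**2 / (2*y_T - y_2T), as derived in 3.1.4.30.
gamma, c, T = 0.5, 3.0, 2.0

def y(t):
    """Displacement of the canoe, from the solved IVP."""
    return (c / gamma) * (1.0 - math.exp(-gamma * t))

y_T, y_2T = y(T), y(2 * T)
y_inf_formula = y_T**2 / (2 * y_T - y_2T)
y_inf_direct = c / gamma

print(abs(y_inf_formula - y_inf_direct))  # should be essentially 0
```

Any other positive choices of gamma, c, and T give the same agreement, since the derivation used no special values.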

3.1.4.32. Let G be the amount of glucose in the patient's bloodstream, in grams. The two given assumptions give two terms in Ġ: (i) +b, because glucose is dripping into the bloodstream, and (ii) −kG, the decrease of G because the patient's body uses up its glucose at a rate proportional to the amount of glucose in the bloodstream, with positive constant of proportionality k. So, Ġ = b − kG. We are also given that G(0) = 0. (That seems to be an extreme assumption.) In standard form the ODE is Ġ + kG = b, so the integrating factor µ(t) = e^{kt} gives

d/dt[ e^{kt} G ] = e^{kt} Ġ + k e^{kt} G = b e^{kt},

and then take the indefinite integral of both sides with respect to t to get e^{kt} G = ∫ b e^{kt} dt = (b/k) e^{kt} + c. The general solution of the ODE is

G = b/k + c e^{−kt}.

The IC implies 0 = G(0) = b/k + c, hence c = −b/k. The solution of the IVP is

G = (b/k)( 1 − e^{−kt} ).

The term −(b/k) e^{−kt} is transient, so the steady state amount of glucose in the patient's bloodstream is

G_S = b/k g.
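As a check (not from the manual), the ODE of 3.1.4.32 can be integrated directly with Euler's method and compared against the steady state b/k. The values b = 0.2 and k = 0.05 are made-up illustrative constants.

```python
# Integrate G' = b - k*G with G(0) = 0 using Euler's method and confirm
# that G approaches the steady state b/k found analytically.
b, k = 0.2, 0.05
G, dt = 0.0, 0.01
for _ in range(int(500 / dt)):   # integrate out to t = 500, well past transients
    G += dt * (b - k * G)

print(G, b / k)  # the two values should nearly agree
```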

The amount of glucose in the bloodstream will reach (1 − e^{−5}) times the steady state amount when e^{−5} = e^{−kt}, that is, when t = 5/k.

3.1.4.33. The ODE is −mg = m v̇ + u ṁ, where v(0) = v₀, g = 32 ft/s², m = m₀( 1 − t/200 ), and m₀ is a constant. The ODE can be rewritten as v̇ = −g − u ṁ/m. In standard form this is

v̇ = −32 − u · ( −m₀/200 ) / ( m₀ ( 1 − t/200 ) ) = −32 + (u/200) · 1/( 1 − t/200 ),

so

v = ∫ ( −32 + (u/200) · 1/( 1 − t/200 ) ) dt = −32t − u ln( 1 − t/200 ) + c.

The IC requires v₀ = 0 − u · 0 + c, so the velocity as a function of time is

v = −32t − u ln( 1 − t/200 ) + v₀.

The velocity when the rocket stops burning, that is, when t = 190 s, is v(190) = −32 · 190 − u ln(0.05) + v₀ = −6080 + u ln 20 + v₀.
By the way, to leave the earth's effective gravitational control, for example to go to the Moon, we need to reach the escape velocity of approximately 25,000 mi/h, that is, 36,667 ft/s, so we need this hypothetical single stage rocket's u to satisfy 36,667 = −6080 + u ln 20 + v₀. We can assume v₀ = 0 is the initial velocity, so we need u ≈ 14,270 ft/s. I don't know if this hypothetical rocket's nozzle gas speed is realistic.
The Apollo rocket configuration used a three stage rocket in order to lower the constant (1 − α_f), which is the fraction of the rocket mass after firing divided by its initial mass when fully fueled. When in orbit around the Earth, the vehicle's configuration was re-arranged and then the third stage rocket was re-fired to start the rest of the journey to the Moon.

3.1.4.34. (a) It appears that a graph of the data of T versus t has a horizontal asymptote of about T = 28. Solutions of Ṫ = −αT are T = c e^{−αt}, while solutions of Ṫ = −α(T − M) are T = M + c e^{−αt}. So, not having a horizontal asymptote T = 0 tells us that Model #2 is a better fit for the data.
(b) For Model #2, Ṫ = −α(T − M), the solutions are T = M + c e^{−αt}, which have steady state solution T = M. That would have a horizontal asymptote T = M, so M ≈ 28. To estimate α, we could use two of the data points:

Time:           0     20
Temperature T:  140   80

⇒ 140 = T(0) = M + c = 28 + c ⇒ c = 112, and
⇒ 80 = T(20) = M + c e^{−20α} = 28 + 112 e^{−20α} ⇒ e^{−20α} = 52/112 ⇒ −20α = ln(52/112)
⇒ α = −(1/20) ln(52/112) ≈ 0.03836275764 ⇒ solution for the temperature T(t) = 28 + 112 e^{−0.03836275764 t}.
We could also use any other pair(s) of data points to estimate α, and we could also take the average of multiple such estimates of α.

3.1.4.35. µ(t) = exp( ∫ 2t dt ) = e^{t²} ⇒ d/dt[ e^{t²} y ] = e^{t²} ẏ + 2t e^{t²} y = e^{t²}
⇒ e^{t²} y(t) − e⁰ y(0) = ∫₀ᵗ d/ds[ e^{s²} y(s) ] ds = ∫₀ᵗ e^{s²} ds ⇒ Using y(0) = 3,

y(t) = 3 e^{−t²} + ∫₀ᵗ e^{−(t² − s²)} ds

solves the IVP.
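The nozzle-speed arithmetic in Problem 3.1.4.33 can be double-checked. This is a quick numerical check, not part of the manual.

```python
import math

# With v0 = 0, solve 36667 = -6080 + u*ln(20) + v0 for the nozzle gas speed u,
# as in Problem 3.1.4.33.
v_escape, v0 = 36667.0, 0.0
u = (v_escape + 6080.0 - v0) / math.log(20)
print(u)  # approximately 1.427e4 ft/s, as stated in the solution
```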

3.1.4.36. ODE in standard form is y′ + 3x² y = −2x ⇒ µ(x) = exp( ∫ 3x² dx ) = e^{x³}
⇒ d/dx[ e^{x³} y ] = e^{x³} y′ + 3x² e^{x³} y = −2x e^{x³} ⇒ e^{x³} y(x) − e¹ y(1) = ∫₁ˣ d/ds[ e^{s³} y(s) ] ds = ∫₁ˣ −2s e^{s³} ds
⇒ Using y(1) = 0,

y(x) = − ∫₁ˣ 2s e^{−(x³ − s³)} ds

solves the IVP.
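The integral formula of 3.1.4.36 can be cross-checked numerically against a direct integration of the IVP. This sketch is not part of the manual; the evaluation point x = 1.5 and the grid sizes are arbitrary choices.

```python
import math

def y_quad(x, n=2000):
    """Trapezoid-rule evaluation of y(x) = -int_1^x 2s*exp(-(x^3 - s^3)) ds."""
    h = (x - 1.0) / n
    f = lambda s: 2 * s * math.exp(-(x**3 - s**3))
    total = 0.5 * (f(1.0) + f(x)) + sum(f(1.0 + i * h) for i in range(1, n))
    return -h * total

def y_rk4(x, n=2000):
    """RK4 integration of y' = -2x - 3x^2*y, y(1) = 0, out to x."""
    h = (x - 1.0) / n
    f = lambda t, y: -2 * t - 3 * t**2 * y
    t, y = 1.0, 0.0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

print(y_quad(1.5), y_rk4(1.5))  # the two values should nearly agree
```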

3.1.4.37. ẏ + e^{t²} y = 1 ⇒ µ(t) = exp( ∫₀ᵗ e^{s²} ds ) ⇒ d/dt[ µ(t) y ] = µ(t) ẏ + µ(t) e^{t²} y = µ(t) · 1.
Note that µ(0) = exp( ∫₀⁰ e^{s²} ds ) = 1. We have µ(t) y(t) − y(0) = ∫₀ᵗ d/ds[ µ(s) y(s) ] ds = ∫₀ᵗ µ(s) ds
⇒ y(t) = (1/µ(t)) ( y(0) + ∫₀ᵗ µ(s) ds ), that is,

y(t) = exp( − ∫₀ᵗ e^{s²} ds ) · ( y(0) + ∫₀ᵗ exp( ∫₀ˢ e^{u²} du ) ds )

solves the IVP.
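The nested-integral formula of 3.1.4.37 can be verified numerically by building µ(t) with cumulative quadrature and checking the ODE residual. This check is not from the manual; the manual leaves y(0) general, and y(0) = 1 here is an arbitrary test value.

```python
import math

# Verify y(t) = exp(-I(t)) * (y0 + int_0^t mu(s) ds), mu(t) = exp(I(t)),
# I(t) = int_0^t exp(s^2) ds, against the ODE y' + exp(t^2)*y = 1.
n, t_max, y0 = 4000, 1.0, 1.0
h = t_max / n
ts = [i * h for i in range(n + 1)]

# cumulative trapezoid rule for I(t), then for int_0^t mu(s) ds
I = [0.0] * (n + 1)
for i in range(1, n + 1):
    I[i] = I[i - 1] + 0.5 * h * (math.exp(ts[i - 1] ** 2) + math.exp(ts[i] ** 2))
mu = [math.exp(v) for v in I]
J = [0.0] * (n + 1)
for i in range(1, n + 1):
    J[i] = J[i - 1] + 0.5 * h * (mu[i - 1] + mu[i])
y = [(y0 + J[i]) / mu[i] for i in range(n + 1)]

# residual of y' + exp(t^2)*y - 1 via central differences
res = max(abs((y[i + 1] - y[i - 1]) / (2 * h) + math.exp(ts[i] ** 2) * y[i] - 1.0)
          for i in range(1, n))
print(res)  # should be small
```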


Section 3.2.4

3.2.4.1. Rewrite the ODE in the form M + N ẏ = 0: ( −e^{2t} + y − t ln y ) + ( t − t²/(2y) + e^{−3y} ) ẏ = 0, so M(t, y) = −e^{2t} + y − t ln y and N(t, y) = t − t²/(2y) + e^{−3y}. Next, check the exactness criterion:

0 + 1 − t/y = ∂/∂y[ −e^{2t} + y − t ln y ] = ∂/∂y[ M(t, y) ] =? ∂/∂t[ N(t, y) ] = ∂/∂t[ t − t²/(2y) + e^{−3y} ] = 1 − 2t/(2y) + 0,

so, yes, the ODE is exact.
A "potential function" φ(t, y) would have −e^{2t} + y − t ln y = M(t, y) = ∂/∂t[ φ(t, y) ], hence

φ(t, y) = ∫ ( −e^{2t} + y − t ln y ) ∂t = −(1/2) e^{2t} + ty − (1/2) t² ln y + f(y),

where f(y) is an arbitrary function of only y. Our symbol ∫ ... ∂t is shorthand for the operation of anti-partial-differentiation with respect to t.
The reason we have an arbitrary function f(y) instead of an arbitrary constant is because ∂/∂t[ f(y) ] ≡ 0. Note also that because f(y) is a function of y alone, ∂/∂y[ f(y) ] = df/dy.
φ(t, y) must also satisfy

t − t²/(2y) + e^{−3y} = N(t, y) = ∂/∂y[ φ(t, y) ] = ∂/∂y[ −(1/2) e^{2t} + ty − (1/2) t² ln y + f(y) ] = t − t²/(2y) + df/dy,

so

df/dy = e^{−3y}.

We have f(y) = −(1/3) e^{−3y}; we could add an arbitrary constant but it would turn out to be redundant because our solutions are the curves φ(t, y) = C. Putting everything together, we have that the solutions of the ODE are the curves

C = φ(t, y) = −(1/2) e^{2t} + ty − (1/2) t² ln y − (1/3) e^{−3y},

where C is an arbitrary constant.

3.2.4.2. The ODE is separable. Multiply through by 2y dt to get

2y dy = dt/(1 + t²), and then integrate both sides to get

y² = ∫ 2y dy = ∫ dt/(1 + t²) = arctan t + c.

The IC requires 1² = y(−1)² = arctan(−1) + c = −π/4 + c, hence c = 1 + π/4. The implicit solution of the IVP is

y² = arctan t + 1 + π/4.

To get the explicit solution, solve for y to get

y = ± √( 1 + π/4 + arctan t ).

But the IC has y(−1) = +1, so we must choose the + sign. The explicit solution of the IVP is

y = √( 1 + π/4 + arctan t ).
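The explicit solution of 3.2.4.2 can be plugged back into the IC and the separated ODE. This is a quick numerical check, not part of the manual; the point t = 0.5 is an arbitrary test location.

```python
import math

# y(t) = sqrt(1 + pi/4 + arctan t) should satisfy y(-1) = 1 and the
# separated ODE 2*y*dy/dt = 1/(1 + t^2), as in 3.2.4.2.
y = lambda t: math.sqrt(1 + math.pi / 4 + math.atan(t))

print(y(-1.0))  # 1.0, matching the IC

h, t = 1e-6, 0.5  # central-difference check of the ODE
dydt = (y(t + h) - y(t - h)) / (2 * h)
print(2 * y(t) * dydt - 1 / (1 + t * t))  # ~0
```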

3.2.4.3. The ODE is separable. Multiply through by dt/(y − 1) to get dy/(y − 1) = t dt/(1 + t²), and then integrate both sides to get

ln |y − 1| = ∫ dy/(y − 1) = ∫ t dt/(1 + t²) = (1/2) ln(1 + t²) + c.

Raise e to both sides to get

(±1)(y − 1) = |y − 1| = e^{ln |y−1|} = e^{(1/2) ln(1+t²) + c} = e^{ln(1+t²)^{1/2}} e^c = e^c √(1 + t²).

Multiply through by (±1) and define K = (±1)e^c to get solutions y − 1 = K √(1 + t²), that is,

y = 1 + K √(1 + t²),

where K is an arbitrary non-zero constant. In addition, there is an equilibrium solution y(t) ≡ 1, because f(t, y) = t(y − 1)/(t² + 1) has f(t, 1) ≡ 0.
(a) The IC requires 3 = y(1) = 1 + K √(1 + 1²) = 1 + K √2 ⇒ K = √2, so the solution is

y₁(t) = 1 + √2 √(1 + t²) = 1 + √( 2(1 + t²) ).

(b) The IC requires 1 = y(1), which is satisfied by the equilibrium solution. So, the solution for the second IC is y₂(t) ≡ 1.

Figure 1: Problem 3.2.4.3: Solutions for two different ICs

3.2.4.4. dA/dt = −αA ⇒ ∫ dA/A = − ∫ α dt ⇒ ln |A| = −αt + c ⇒ (±1)A = |A| = e^{ln |A|} = e^{−αt + c} = e^{−αt} e^c
⇒ A = (±1)e^c e^{−αt} = K e^{−αt}, for arbitrary non-zero constant K. IC A₀ = A(0) = K, so the solution of the IVP is A(t) = A₀ e^{−αt}. The table below summarizes data we will use.

Time:                                  0     t*
Amount of the radioactive substance:   A₀    (1/2) A₀

The half-life, t*, satisfies

A₀ e^{−α t*} = A(t*) = (1/2) A₀ ⇒ e^{−α t*} = 1/2 ⇒ −α t* = ln(1/2) = ln(2^{−1}) = −ln 2.

So, the half-life is

t* = (1/α) ln 2.
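The half-life formula of 3.2.4.4 is easy to confirm numerically. This sketch is not from the manual; alpha = 0.3 and A0 = 5.0 are arbitrary test values.

```python
import math

# Check that t* = ln(2)/alpha exactly halves A(t) = A0*exp(-alpha*t).
alpha, A0 = 0.3, 5.0
t_star = math.log(2) / alpha
print(A0 * math.exp(-alpha * t_star))  # equals A0/2 = 2.5
```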


3.2.4.5. dA/dt = −αA ⇒ ∫ dA/A = − ∫ α dt ⇒ ln |A| = −αt + c ⇒ (±1)A = |A| = e^{ln |A|} = e^{−αt + c} = e^{−αt} e^c
⇒ A = (±1)e^c e^{−αt} = K e^{−αt}, for arbitrary non-zero constant K. IC A₀ = A(0) = K, so the solution of the IVP is A(t) = A₀ e^{−αt}. The half-life, t*, satisfies

A₀ e^{−α t*} = A(t*) = (1/2) A₀ ⇒ e^{−α t*} = 1/2 ⇒ −α t* = ln(1/2) = ln(2^{−1}) = −ln 2,

so

α = (1/t*) ln 2 = (1/5730) ln 2,

using the half-life for C¹⁴, assuming time is measured in years. The sarcophagus has 63% of what would be in a present day sample. Assume that this measurement of 63% was done at time t = 0 and the wood was cut at time τ < 0. The table below summarizes data we will use.

Time:              τ        0
Amount of C¹⁴:     A(τ)     0.63 A(τ)

We have A₀ = A(0) = 0.63 A(τ) = 0.63 A₀ e^{−ατ} ⇒ e^{−ατ} = 1/0.63 ⇒ −ατ = ln e^{−ατ} = ln(1/0.63) = ln 0.63^{−1} = −ln 0.63

⇒ τ = (1/α) ln 0.63 = 5730 · (ln 0.63 / ln 2) ≈ −3819.482006.

Measured with the three significant digits implicit in the half-life being t* = 5730, the sarcophagus was buried about 3820 years ago.
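The carbon-dating arithmetic of 3.2.4.5 can be reproduced in one line. This is a quick numerical check, not part of the manual.

```python
import math

# With the C-14 half-life 5730 yr, tau = 5730*ln(0.63)/ln(2)
# should be about -3819.5 yr, as computed in 3.2.4.5.
tau = 5730 * math.log(0.63) / math.log(2)
print(tau)
```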

3.2.4.6. dA/dt = −αA ⇒ ∫ dA/A = − ∫ α dt ⇒ ln |A| = −αt + c ⇒ (±1)A = |A| = e^{ln |A|} = e^{−αt + c} = e^{−αt} e^c
⇒ A = (±1)e^c e^{−αt} = K e^{−αt}, for arbitrary non-zero constant K. IC A₀ = A(0) = K, so the solution of the IVP is A(t) = A₀ e^{−αt}. The half-life, t*, satisfies

A₀ e^{−α t*} = A(t*) = (1/2) A₀ ⇒ e^{−α t*} = 1/2 ⇒ −α t* = ln(1/2) = ln(2^{−1}) = −ln 2.

So, the half-life is

t* = (1/α) ln 2.

We are given that 98% of the initial amount of that isotope of Radium remains after 44.5 years. The table below summarizes data we will use.

Time:                             0      44.5
Amount of the Radium isotope:     A₀     0.98 A₀

We have 0.98 A₀ = A(44.5) = A₀ e^{−α · 44.5} ⇒ −α · 44.5 = ln e^{−α · 44.5} = ln 0.98, hence

α = −(ln 0.98)/44.5.

It follows that the half-life is

t* = (1/α) ln 2 = −(44.5/ln 0.98) · ln 2 ≈ 1526.778023 years.

3.2.4.7. dP/dt = kP ⇒ ∫ dP/P = ∫ k dt ⇒ ln |P| = kt + c ⇒ (±1)P = |P| = e^{ln |P|} = e^{kt + c} = e^{kt} e^c ⇒ P = (±1)e^c e^{kt} = C e^{kt},


for arbitrary non-zero constant C. IC P₀ = P(0) = C, so the solution of the IVP is P(t) = P₀ e^{kt}. The table below summarizes data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.473   5.055   10.000

We have P₀ = 4.473, so P(t) = 4.473 e^{kt} and so 5.055 = P(7) = 4.473 e^{7k}. This implies ln(5.055/4.473) = 7k, hence

k = (1/7) ln(5.055/4.473) ≈ 0.0174740754.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.473 e^{kT} implies

T = (1/k) ln(10.000/4.473) ≈ 46.04110653.

So, this model predicts that the Earth's population of human beings will reach 10 billion on about July 16, 2026. By the way, k ≈ 0.0174740754 says that in this model the Earth's population of human beings has a growth rate of about 1.74740754% per year. Also, by the way, the Earth's population on September 9, 2012 was about billion.

3.2.4.8. dP/dt = kP ⇒ ∫ dP/P = ∫ k dt ⇒ ln |P| = kt + c ⇒ (±1)P = |P| = e^{ln |P|} = e^{kt + c} = e^{kt} e^c

⇒ P = (±1)e^c e^{kt} = C e^{kt}, for arbitrary non-zero constant C. IC P₀ = P(0) = C, so the solution of the IVP is P(t) = P₀ e^{kt}. The uncertainty in the initial datum was that P₀ ranged between 4.423 and 4.523.
Case 1: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.423   5.055   10.000

We have P₀ = 4.423, so P(t) = 4.423 e^{kt} and so 5.055 = P(7) = 4.423 e^{7k}. This implies ln(5.055/4.423) = 7k, hence

k = (1/7) ln(5.055/4.423) ≈ 0.0190799505.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.423 e^{kT} implies

T = (1/k) ln(10.000/4.423) ≈ 42.7551892.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about March 30, 2023.
Case 2: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.523   5.055   10.000

We have P₀ = 4.523, so P(t) = 4.523 e^{kt} and so 5.055 = P(7) = 4.523 e^{7k}. This implies ln(5.055/4.523) = 7k, hence

k = (1/7) ln(5.055/4.523) ≈ 0.0158860517.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.523 e^{kT} implies

T = (1/k) ln(10.000/4.523) ≈ 49.94378823.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about June 10, 2030.
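The growth-rate arithmetic of Problems 3.2.4.7 and 3.2.4.8 can be packaged in a small helper and checked against the stated values. This function is not part of the manual; it simply reproduces the manual's formulas.

```python
import math

def projection(p0, p7, target=10.0):
    """From P(0) = p0 and P(7) = p7 (billions), return the growth rate k
    and the time T (years after July 1, 1980) at which P(T) = target."""
    k = math.log(p7 / p0) / 7.0
    T = math.log(target / p0) / k
    return k, T

k, T = projection(4.473, 5.055)
print(k, T)  # ≈ 0.01747 and ≈ 46.04, matching 3.2.4.7

_, T1 = projection(4.423, 5.055)   # 3.2.4.8, Case 1
_, T2 = projection(4.523, 5.055)   # 3.2.4.8, Case 2
print(T1, T2)  # ≈ 42.76 and ≈ 49.94
```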

3.2.4.9. The uncertainty in the data for both P₀ and P(7) means that eventually we will need to study four cases:

(1) P₀ = 4.423 and P(7) = 5.105,
(2) P₀ = 4.523 and P(7) = 5.005,
(3) P₀ = 4.423 and P(7) = 5.005,
(4) P₀ = 4.523 and P(7) = 5.105.

dP/dt = kP ⇒ ∫ dP/P = ∫ k dt ⇒ ln |P| = kt + c ⇒ (±1)P = |P| = e^{ln |P|} = e^{kt + c} = e^{kt} e^c ⇒ P = (±1)e^c e^{kt} = C e^{kt}, for arbitrary non-zero constant C. IC P₀ = P(0) = C, so the solution of the IVP is P(t) = P₀ e^{kt}. The uncertainty in the initial datum was that P₀ ranged between 4.423 and 4.523.
Case 1: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.423   5.105   10.000

Of the four cases, this case has the greatest increase of population between July 1, 1980 and July 1, 1987, thus producing the greatest k. Intuitively this should give the smallest value of T of the four cases.
We have P₀ = 4.423, so P(t) = 4.423 e^{kt} and so 5.105 = P(7) = 4.423 e^{7k}. This implies ln(5.105/4.423) = 7k, hence

k = (1/7) ln(5.105/4.423) ≈ 0.0204860361.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.423 e^{kT} implies

T = (1/k) ln(10.000/4.423) ≈ 39.82063148.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about April 26, 2020.
Case 2: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.523   5.005   10.000

We have P₀ = 4.523, so P(t) = 4.523 e^{kt} and so 5.005 = P(7) = 4.523 e^{7k}. This implies ln(5.005/4.523) = 7k, hence

k = (1/7) ln(5.005/4.523) ≈ 0.0144659889.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.523 e^{kT} implies

T = (1/k) ln(10.000/4.523) ≈ 54.84655133.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about May 5, 2035.
Case 3: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.423   5.005   10.000

We have P₀ = 4.423, so P(t) = 4.423 e^{kt} and so 5.005 = P(7) = 4.423 e^{7k}. This implies ln(5.005/4.423) = 7k, hence

k = (1/7) ln(5.005/4.423) ≈ 0.0176598877.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.423 e^{kT} implies

T = (1/k) ln(10.000/4.423) ≈ 46.19320961.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about September 10, 2026.
Case 4: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.523   5.105   10.000

We have P₀ = 4.523, so P(t) = 4.523 e^{kt} and so 5.105 = P(7) = 4.523 e^{7k}. This implies ln(5.105/4.523) = 7k, hence

k = (1/7) ln(5.105/4.523) ≈ 0.0172921373.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.523 e^{kT} implies

T = (1/k) ln(10.000/4.523) ≈ 45.88268001.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about May 18, 2026.
The model, including uncertainties, predicts that the Earth's human population will reach 10 billion approximately between April 26, 2020 and May 5, 2035. That is an uncertainty of only about 15 years in the final conclusion, which is a lot of uncertainty! The results for the four cases agree with the intuition about the comparisons among the four values of T: Case 1 did give the smallest value of T and Case 2 did give the largest value of T.

3.2.4.10. The uncertainty in the data for both P₀ and P(7) means that eventually we will need to study four cases:

(1) P₀ = 4.463 and P(7) = 5.065,
(2) P₀ = 4.483 and P(7) = 5.045,
(3) P₀ = 4.463 and P(7) = 5.045,
(4) P₀ = 4.483 and P(7) = 5.065.

dP/dt = kP ⇒ ∫ dP/P = ∫ k dt ⇒ ln |P| = kt + c ⇒ (±1)P = |P| = e^{ln |P|} = e^{kt + c} = e^{kt} e^c ⇒ P = (±1)e^c e^{kt} = C e^{kt}, for arbitrary non-zero constant C. IC P₀ = P(0) = C, so the solution of the IVP is P(t) = P₀ e^{kt}. The uncertainty in the initial datum was that P₀ ranged between 4.463 and 4.483.
Case 1: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.463   5.065   10.000

Of the four cases, this case has the greatest increase of population between July 1, 1980 and July 1, 1987, thus producing the greatest k. Intuitively this should give the smallest value of T of the four cases.
We have P₀ = 4.463, so P(t) = 4.463 e^{kt} and so 5.065 = P(7) = 4.463 e^{7k}. This implies ln(5.065/4.463) = 7k, hence

k = (1/7) ln(5.065/4.463) ≈ 0.018076136.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.463 e^{kT} implies

T = (1/k) ln(10.000/4.463) ≈ 44.631436.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about February 15, 2025.

Case 2: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.483   5.045   10.000

We have P₀ = 4.483, so P(t) = 4.483 e^{kt} and so 5.045 = P(7) = 4.483 e^{7k}. This implies ln(5.045/4.483) = 7k, hence

k = (1/7) ln(5.045/4.483) ≈ 0.0168721698.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.483 e^{kT} implies

T = (1/k) ln(10.000/4.483) ≈ 47.5512419.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about January 17, 2028.
Case 3: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.463   5.045   10.000

We have P₀ = 4.463, so P(t) = 4.463 e^{kt} and so 5.045 = P(7) = 4.463 e^{7k}. This implies ln(5.045/4.463) = 7k, hence

k = (1/7) ln(5.045/4.463) ≈ 0.017510924.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.463 e^{kT} implies

T = (1/k) ln(10.000/4.463) ≈ 46.07203517.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about July 27, 2026.
Case 4: The table below summarizes some of the data we will use. We will measure time in years after July 1, 1980.

Time:         0       7       T
Population:   4.483   5.065   10.000

We have P₀ = 4.483, so P(t) = 4.483 e^{kt} and so 5.065 = P(7) = 4.483 e^{7k}. This implies ln(5.065/4.483) = 7k, hence

k = (1/7) ln(5.065/4.483) ≈ 0.0174373818.

We want to solve for T the equation for the third entry in the data table: 10.000 = P(T) = 4.483 e^{kT} implies

T = (1/k) ln(10.000/4.483) ≈ 46.00992497.

So, this case of the model predicts that the Earth's population of human beings will reach 10 billion on about July 5, 2026.
The model, including uncertainties, predicts that the Earth's human population will reach 10 billion approximately between February 15, 2025 and January 17, 2028. That is an uncertainty of only about 3 years in the final conclusion. This is a lot less uncertainty than the results of problem 3.2.4.9, which had an uncertainty of about 15 years.
The results for the four cases agree with the intuition about the comparisons among the four values of T: Case 1 did give the smallest value of T and Case 2 did give the largest value of T.
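The four cases of 3.2.4.10 can all be recomputed with one small function. This check is not part of the manual; it reproduces the manual's formula for each (P₀, P(7)) pair.

```python
import math

def time_to_ten(p0, p7):
    """Years after July 1, 1980 at which P(t) = p0*exp(k*t) reaches 10 billion,
    with k fitted from P(7) = p7, as in 3.2.4.10."""
    k = math.log(p7 / p0) / 7.0
    return math.log(10.0 / p0) / k

cases = [(4.463, 5.065), (4.483, 5.045), (4.463, 5.045), (4.483, 5.065)]
for p0, p7 in cases:
    print(round(time_to_ten(p0, p7), 2))
# Case 1 should be the smallest T and Case 2 the largest, as concluded above.
```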


3.2.4.11. (a) Let y = y(t) be the position of the particle. We are given that ẏ(t) = k |y(t)|² = k y², for some constant k. The table below summarizes the data we will use.

Time:       0    3
Position:   2    4

Separation of variables gives

−y^{−1} = ∫ dy/y² = ∫ k dt = kt + c,

so

y = −1/(kt + c).

This function exists for t ≠ −c/k.
The IC requires 2 = y(0) = −1/(0 + c), so c = −1/2. The solution of the IVP is unique on some open interval containing t = 0 and is

y(t) = −1/(kt − 1/2) = 2/(1 − 2kt).

The second data point gives

4 = y(3) = 2/(1 − 6k),

hence 4(1 − 6k) = 2, hence k = 1/12. The solution satisfying all of the given data is

y(t) = 2/(1 − 2t/12) = 2/(1 − t/6).

(b) The particle reaches position y = 8 when 8 = 2/(1 − t/6), that is, 8(1 − t/6) = 2. The particle reaches position y = 8 at time t = 4.5.
(c) The particle reaches position y = 1000 when 1000 = 2/(1 − t/6), that is, 1000(1 − t/6) = 2. The particle reaches position y = 1000 at time t = 5.988.
(d) The particle lives during the time interval the solution exists. The solution stops existing when the denominator is zero, that is, when t = 6. So, the particle lives for 0 ≤ t < 6.

3.2.4.12. Rewrite the ODE in the form M + N dy/dx = 0: ( y² − 1 ) + 2xy dy/dx = 0, so M(x, y) = y² − 1 and N(x, y) = 2xy. Next, check the exactness criterion:

2y = ∂/∂y[ y² − 1 ] = ∂/∂y[ M(x, y) ] =? ∂/∂x[ N(x, y) ] = ∂/∂x[ 2xy ] = 2y,

so, yes, the ODE is exact.
A "potential function" φ(x, y) would have y² − 1 = M(x, y) = ∂/∂x[ φ(x, y) ], hence

φ(x, y) = ∫ ( y² − 1 ) ∂x = xy² − x + f(y),

where f(y) is an arbitrary function of only y. Our symbol ∫ ... ∂x is shorthand for the operation of anti-partial-differentiation with respect to x.
The reason we have an arbitrary function f(y) instead of an arbitrary constant is because ∂/∂x[ f(y) ] ≡ 0. Note also that because f(y) is a function of y alone, ∂/∂y[ f(y) ] = df/dy.
φ(x, y) must also satisfy

2xy = N(x, y) = ∂/∂y[ φ(x, y) ] = ∂/∂y[ xy² − x + f(y) ] = 2xy − 0 + df/dy,

so

0 = df/dy.

We have f(y) = 0; we could add an arbitrary constant but it would turn out to be redundant because our solutions are the curves φ(x, y) = C. Putting everything together, we have that the solutions of the ODE are the curves

C = φ(x, y) = xy² − x = x(y² − 1),

where C is an arbitrary constant.
This exact ODE's solutions are so simple we can solve for y in terms of x to get explicit solutions:

y² − 1 = C/x ⟺ y² = 1 + C/x ⟺ y = ± √(1 + C/x).

(a) Passing through the point (x, y) = (1, 0) ⟺ IC y(1) = 0, so 0 = y(1) = ± √(1 + C/1) ⟺ C = −1. We get two solutions!

y₁(x) = √(1 − 1/x)    and    y₂(x) = −√(1 − 1/x).

Aside: We can also do part (a) using separation of variables; in fact it would have been easier that way.
(b) Passing through the point (x, y) = (0, 1) ⟺ IC y(0) = 1. But, we can't put x = 0 into the explicit solutions y = ± √(1 + C/x).
Instead, we can return to the solution curves C = φ(x, y) = x(y² − 1) and substitute in x = 0 and y = 1: C = φ(0, 1) = 0 · (1² − 1) = 0. So, the implicit solution for the second IC is 0 = C = x(y² − 1). In fact, this gives equilibrium solutions y₁(x) ≡ 1 and y₂(x) ≡ −1. But, the original IC has the solution passing through the point (x, y) = (0, 1), so there is only one solution: y(x) ≡ 1.

3.2.4.13. Multiply both sides by (3 + 3y² − x) to rewrite the ODE in the form (3 + 3y² − x) dy/dx = (2x + y), that is,

−(2x + y) + (3 + 3y² − x) dy/dx = 0,

so M(x, y) = −2x − y and N(x, y) = 3 + 3y² − x. Next, check the exactness criterion:

−1 = ∂/∂y[ −2x − y ] = ∂/∂y[ M(x, y) ] =? ∂/∂x[ N(x, y) ] = ∂/∂x[ 3 + 3y² − x ] = −1,

so, yes, the ODE is exact.
A "potential function" φ(x, y) would have −2x − y = M(x, y) = ∂/∂x[ φ(x, y) ], hence

φ(x, y) = ∫ ( −2x − y ) ∂x = −x² − xy + f(y),

where f(y) is an arbitrary function of only y. Our symbol ∫ ... ∂x is shorthand for the operation of anti-partial-differentiation with respect to x.
The reason we have an arbitrary function f(y) instead of an arbitrary constant is because ∂/∂x[ f(y) ] ≡ 0. Note also that because f(y) is a function of y alone, ∂/∂y[ f(y) ] = df/dy.


φ(x, y) must also satisfy

3 + 3y² − x = N(x, y) = ∂/∂y[ φ(x, y) ] = ∂/∂y[ −x² − xy + f(y) ] = 0 − x + df/dy,

so

df/dy = 3 + 3y².

We have f(y) = 3y + y³; we could add an arbitrary constant but it would turn out to be redundant because our solutions are the curves φ(x, y) = C. Putting everything together, we have that the solutions of the ODE are the curves

C = φ(x, y) = −x² − xy + 3y + y³,

where C is an arbitrary constant. The IC requires y(0) = 1, that is, C = φ(0, 1) = −0² − 0 · 1 + 3 · 1 + 1³ = 4, so the implicit solution is the curve

4 = −x² − xy + 3y + y³.

3.2.4.14. Multiply both sides by (2 + x sin y) to rewrite the ODE in the form (2 + x sin y) dy/dx = (−x + cos y), that is,

−(−x + cos y) + (2 + x sin y) dy/dx = 0,

so M(x, y) = x − cos y and N(x, y) = 2 + x sin y. Next, check the exactness criterion:

sin y = ∂/∂y[ x − cos y ] = ∂/∂y[ M(x, y) ] =? ∂/∂x[ N(x, y) ] = ∂/∂x[ 2 + x sin y ] = sin y,

so, yes, the ODE is exact.
A "potential function" φ(x, y) would have x − cos y = M(x, y) = ∂/∂x[ φ(x, y) ], hence

φ(x, y) = ∫ ( x − cos y ) ∂x = (1/2) x² − x cos y + f(y),

where f(y) is an arbitrary function of only y. Our symbol ∫ ... ∂x is shorthand for the operation of anti-partial-differentiation with respect to x.
The reason we have an arbitrary function f(y) instead of an arbitrary constant is because ∂/∂x[ f(y) ] ≡ 0. Note also that because f(y) is a function of y alone, ∂/∂y[ f(y) ] = df/dy.
φ(x, y) must also satisfy

2 + x sin y = N(x, y) = ∂/∂y[ φ(x, y) ] = ∂/∂y[ (1/2) x² − x cos y + f(y) ] = 0 + x sin y + df/dy,

so

df/dy = 2.

We have f(y) = 2y; we could add an arbitrary constant but it would turn out to be redundant because our solutions are the curves φ(x, y) = C. Putting everything together, we have that the solutions of the ODE are the curves

C = φ(x, y) = (1/2) x² − x cos y + 2y,

where C is an arbitrary constant. The IC requires y(π/2) = −π/2, that is,

C = φ(π/2, −π/2) = (1/2)(π/2)² − (π/2) cos(−π/2) + 2(−π/2) = π²/8 − 0 − π = π( −1 + π/8 ),


so the implicit solution is the curve

π( −1 + π/8 ) = (1/2) x² − x cos y + 2y.

3.2.4.15. Multiply both sides by (sin t + t cos y + y) to rewrite the ODE in the form (sin t + t cos y + y) ẏ = −(sin y + y cos t − 4), that is,

(sin y + y cos t − 4) + (sin t + t cos y + y) dy/dt = 0,

so M(t, y) = sin y + y cos t − 4 and N(t, y) = sin t + t cos y + y. Next, check the exactness criterion:

cos y + cos t = ∂/∂y[ sin y + y cos t − 4 ] = ∂/∂y[ M(t, y) ] =? ∂/∂t[ N(t, y) ] = ∂/∂t[ sin t + t cos y + y ] = cos t + cos y,

so, yes, the ODE is exact.
A "potential function" φ(t, y) would have sin y + y cos t − 4 = M(t, y) = ∂/∂t[ φ(t, y) ], hence

φ(t, y) = ∫ ( sin y + y cos t − 4 ) ∂t = t sin y + y sin t − 4t + f(y),

where f(y) is an arbitrary function of only y. Our symbol ∫ ... ∂t is shorthand for the operation of anti-partial-differentiation with respect to t.
The reason we have an arbitrary function f(y) instead of an arbitrary constant is because ∂/∂t[ f(y) ] ≡ 0. Note also that because f(y) is a function of y alone, ∂/∂y[ f(y) ] = df/dy.
φ(t, y) must also satisfy

sin t + t cos y + y = N(t, y) = ∂/∂y[ φ(t, y) ] = ∂/∂y[ t sin y + y sin t − 4t + f(y) ] = t cos y + sin t − 0 + df/dy,

so

df/dy = y.

We have f(y) = (1/2) y²; we could add an arbitrary constant but it would turn out to be redundant because our solutions are the curves φ(t, y) = C. Putting everything together, we have that the solutions of the ODE are the curves

C = φ(t, y) = t sin y + y sin t − 4t + (1/2) y²,

where C is an arbitrary constant.

3.2.4.16. The ODE is y cos(xy) + ( 1 + x cos(xy) ) dy/dx = 0, so M(x, y) = y cos(xy) and N(x, y) = 1 + x cos(xy). Next, check the exactness criterion. Using the chain rule and the product rule,

1 · cos(xy) − xy sin(xy) = ∂/∂y[ y cos(xy) ] = ∂/∂y[ M(x, y) ] =? ∂/∂x[ N(x, y) ] = ∂/∂x[ 1 + x cos(xy) ] = 1 · cos(xy) + x · ( −y sin(xy) ),

so, yes, the ODE is exact.
A "potential function" φ(x, y) would have y cos(xy) = M(x, y) = ∂/∂x[ φ(x, y) ]. The substitution w = xy, with ∂w = y ∂x, gives

φ(x, y) = ∫ y cos(xy) ∂x = ∫ cos(w) ∂w = sin w + f(y) = sin(xy) + f(y),

where f(y) is an arbitrary function of only y. Our symbol ∫ ... ∂x is shorthand for the operation of anti-partial-differentiation with respect to x.
The reason we have an arbitrary function f(y) instead of an arbitrary constant is because ∂/∂x[ f(y) ] ≡ 0. Note also that because f(y) is a function of y alone, ∂/∂y[ f(y) ] = df/dy.
φ(x, y) must also satisfy

1 + x cos(xy) = N(x, y) = ∂/∂y[ φ(x, y) ] = ∂/∂y[ sin(xy) + f(y) ] = x cos(xy) + df/dy,

so

df/dy = 1.

We have f(y) = y; we could add an arbitrary constant but it would turn out to be redundant because our solutions are the curves φ(x, y) = C. Putting everything together, we have that the solutions of the ODE are the curves

C = φ(x, y) = sin(xy) + y,

where C is an arbitrary constant.

3.2.4.17. Separation of variables gives

−kt + c = ∫ −k dt = ∫ (A + x) dx / x = ∫ ( A/x + 1 ) dx = A ln x + x.

[Note that biologically x ≥ 0, so we don't need the absolute value sign in ln |x|.] This gives implicit solutions relating x and t. Note that the goal of the problem is to find a certain time, so it might help to solve for t in terms of x, instead of the usual method of solving for x in terms of t! We get

t = −(1/k)( −c + A ln x + x ).

The table below summarizes the data we will use.

Time:                    0       t*
Alcohol concentration:   0.024   0.008

(a) Assuming A = 0.005 and k = 0.01, the first data point requires

0 = −100( −c + 0.005 ln 0.024 + 0.024 ),

hence c = 0.005 ln 0.024 + 0.024 ≈ 0.0053514928. So, we can solve for

t* ≈ −100( −0.0053514928 + 0.005 ln 0.008 + 0.008 ) ≈ 2.149306144 hours.

So, it would take about 2 hours and 9 minutes for the person's blood alcohol concentration to be within the legal limit.
(b) Ex: 1: Assuming A = 0.005 and k = 0.007, the first data point requires

0 = −(1/0.007)( −c + 0.005 ln 0.024 + 0.024 ),

hence, as in part (a), c = 0.005 ln 0.024 + 0.024 ≈ 0.0053514928. So, we can solve for

t* ≈ −(0.007)^{−1} ( −0.0053514928 + 0.005 ln 0.008 + 0.008 ) ≈ 3.070437349 hours.

So, it would take about 3 hours and 4 minutes for the person's blood alcohol concentration to be within the legal limit. This is much longer than when we assumed A = 0.005 and k = 0.01 in part (a).
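The clearance times in 3.2.4.17 can be reproduced with a small helper. This function is not part of the manual; it just packages the manual's implicit-solution formula.

```python
import math

def clear_time(A, k, x0=0.024, x1=0.008):
    """Time for blood alcohol to fall from x0 to x1 under
    A*ln(x) + x = -k*t + const, as in 3.2.4.17."""
    c = A * math.log(x0) + x0          # constant fixed by x(0) = x0
    return -(1.0 / k) * (-c + A * math.log(x1) + x1)

print(clear_time(0.005, 0.01))   # ≈ 2.149 h, part (a)
print(clear_time(0.005, 0.007))  # ≈ 3.070 h, part (b) Ex: 1
```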


The results in part (b)’s Ex: 1 were for the same A as in part (a) but for a smaller value of k. The positive proportionality constant k partly measures how fast alcohol is “cleared" out of the bloodstream, because f (x, k, A) , −

kx A+x

has

∂f x =− < 0, for x > 0 and constant A > 0 . ∂k A+x So, it makes sense that the smaller the value of k the longer it takes for the person’s blood alcohol concentration to be within the legal limit. Ex: 2: Assuming A = 0.01 and k = 0.01, the first data point requires  1  0=− − c + 0.01 ln 0.024 + 0.024 , .01 hence c = 0.01 ln 0.024 + 0.024 ≈ −0.0132970145. So, we can solve for   t? ≈ −100 0.0132970145 + 0.01 ln 0.008 + 0.008 ≈ 2.698612289 hours. So, it would take about 2 hours and 42 minutes for the person’s blood alcohol concentration to be within the legal limit. When A = 0.01, it takes longer for alcohol concentration to go down compared to the situation in part (a), where A = 0.005. The results in part (b)’s Ex: 2 were for the same k as in part (a) but for a larger value of A. The positive constant kx A partly measures how slowly alcohol is “cleared" out of the bloodstream, because f (x, k, A) = − A+x has ∂f kx = > 0, for x > 0 and constants A, k > 0 . ∂A (A + x)2 So, it makes sense that the larger the value of A the longer it takes for the person’s blood alcohol concentration to be within the legal limit. Here are several physiological factors that could affect the time it takes for alcohol to be metabolized: • As an adult person ages usually their metabolism decreases, so their ability to clear alcohol from their bloodstream slows down. • I believe it is physiologically true that, on average women metabolize alcohol more slowly than men. This would lead to a smaller k or A for women than for men. • I suspect that a person with a higher BMI (body mass index) would tend to metabolize alcohol more slowly than a person with a lower BMI, although this may simply be correlated with having a lower metabolism in general. So, factors that would tend to increase A and/or decrease k would be to have a higher BMI, be female, or be older. kx + b. (c) Add a positive constant, b, to the right hand side of the ODE, to get x˙ = − A+x

3.2.4.18. The ODE is

0 = (y sin(xy) − x − y sin x) + (x sin(xy) − 2y + cos x) dy/dx,

so M(x, y) = y sin(xy) − x − y sin x and N(x, y) = x sin(xy) − 2y + cos x. Next, check the exactness criterion: is ∂M/∂y = ∂N/∂x?

∂/∂y [y sin(xy) − x − y sin x] = 1·sin(xy) + x·y cos(xy) − 0 − sin x

and

∂/∂x [x sin(xy) − 2y + cos x] = 1·sin(xy) + x·y cos(xy) − 0 − sin x,

so, yes, the ODE is exact. A "potential function" φ(x, y) would have

y sin(xy) − x − y sin x = M(x, y) = ∂/∂x [φ(x, y)].

© Larry Turyn, October 13, 2013

The substitution w = xy, with ∂w = y ∂x, gives

φ(x, y) = ∫ (y sin(xy) − x − y sin x) ∂x = ∫ sin(w) ∂w − ∫ (x + y sin x) ∂x = −cos w − (1/2)x² + y cos x + f(y)

= −cos(xy) − (1/2)x² + y cos x + f(y),

where f(y) is an arbitrary function of y alone. Our symbol ∫ ... ∂x is shorthand for the operation of anti-partial-differentiation with respect to x. The reason we have an arbitrary function f(y) instead of an arbitrary constant is because ∂/∂x [f(y)] ≡ 0. Note also that because f(y) is a function of y alone, ∂/∂y [f(y)] = df/dy.

φ(x, y) must also satisfy

x sin(xy) − 2y + cos x = N(x, y) = ∂/∂y [φ(x, y)] = ∂/∂y [−cos(xy) − (1/2)x² + y cos x + f(y)] = x sin(xy) − 0 + cos x + df/dy,

so

df/dy = −2y.

We have f(y) = −y²; we could add an arbitrary constant, but it would turn out to be redundant because our solutions are the curves φ(x, y) = C. Putting everything together, we have that the solutions of the ODE are the curves

C = φ(x, y) = −cos(xy) − (1/2)x² + y cos x − y²,

where C is an arbitrary constant.

3.2.4.19. Separation of variables, when y ≠ 1, gives

∫ dy/(y − 1)^(1/2) = ∫ dt = t + c,   so   2(y − 1)^(1/2) = t + c,   that is,   y = 1 + ((t + c)/2)².

The IC requires 1 = y(2) = 1 + ((2 + c)/2)², so c = −2. Separation of variables gives

y₁(t) = 1 + ((t − 2)/2)²

as an example of a solution of the IVP. The ODE also has an equilibrium solution, y₂(t) ≡ 1, that happens to satisfy the IC.

3.2.4.20. Separation of variables, when y ≠ 0, gives

∫ dy/y^(1/5) = ∫ dt = t + c,   so   (5/4)y^(4/5) = t + c,   that is,   y = (4(t + c)/5)^(5/4).

This function exists only for t satisfying t ≥ −c. The IC requires 0 = y(2) = (4(2 + c)/5)^(5/4), so c = −2. Separation of variables gives

y(t) = (4(t − 2)/5)^(5/4).

It is not yet a solution because a solution has to be defined and satisfy the ODE on an open interval containing the initial time, which is t₀ = 2 in this problem. But, we extend the solution backwards from t = 2 by noting that y(t) ≡ 0 satisfies the ODE. All together, we get

y₁(t) = { 0, for t ≤ 2;   (4(t − 2)/5)^(5/4), for t ≥ 2 }

as an example of a solution. The ODE also has an equilibrium solution, y₂(t) ≡ 0, that happens to satisfy the IC.

3.2.4.21. Separation of variables, when y ≠ 0, gives

∫ dy/y^(2/3) = ∫ dt = t + c,   so   3y^(1/3) = t + c,   that is,   y = ((t + c)/3)³.

This function exists for all t. The IC requires 0 = y(0) = (c/3)³, so c = 0. Separation of variables gives

y₁(t) = (t/3)³

as an example of a solution of the IVP. The ODE also has an equilibrium solution, y₂(t) ≡ 0, that happens to satisfy the IC.

3.2.4.22. If an ODE is separable, then it can be written in the form

dy/dt = f(t)g(y).

This can be rewritten as

(1/g(y)) dy/dt = f(t),

as long as g(y) ≠ 0. The latter gives equilibrium solutions that we would not find using separation of variables. We can rewrite the ODE as

(?)   −f(t) + (1/g(y)) dy/dt = 0,

which is in the form M(t, y) + N(t, y) dy/dt = 0, where M(t, y) = −f(t) and N(t, y) = 1/g(y). Let's check that (?) is exact:

∂/∂y [M(t, y)] = ∂/∂y [−f(t)] ≡ 0   and   ∂/∂t [N(t, y)] = ∂/∂t [1/g(y)] ≡ 0,

so, yes, ODE (?) is exact. So, yes, every separable ODE can be rewritten as an exact equation.

3.2.4.23. (a) Separation of variables, when y ≠ 0, gives

∫ dy/y² = ∫ dt = t + c,   so   −y^(−1) = t + c,   that is,   y = −1/(t + c).

This function exists for t ≠ −c. The IC requires 3 = y(0) = −1/(0 + c), so c = −1/3. The solution of the IVP is unique on some open interval containing t = 0, by Picard's Theorem, and is

y(t) = −1/(t − 1/3) = 3/(1 − 3t).

This exists on any open interval not containing t = 1/3; but, to be a solution of the IVP it has to exist on an open interval containing t = 0. So, the solution y(t) exists only on the open interval (−∞, 1/3). So, δ = 1/3 in the notation of this problem.

(b) In this problem t₀ = 0 and y₀ = 3. The hypotheses of Picard's Theorem 3.6 are satisfied on any rectangle R_{α,β} := {(t, y) : −α ≤ t ≤ α, 3 − β ≤ y ≤ 3 + β} because f(t, y) := y² is continuous and has continuous partial derivative with respect to y everywhere. So, we can take α and β to be any positive numbers. Picard's Theorem 3.6 establishes that there is an open time interval containing t₀ = 0 on which the IVP has exactly one solution. Being an open interval containing t = 0 implies that it contains some interval of the form −ᾱ ≤ t ≤ ᾱ on which the IVP has exactly one solution.

(c) Picard's Theorem 3.7 gives more information: The hypotheses ask for ᾱ and β̄ sufficiently small that

(?)   0 < ᾱ ≤ α,   0 < β̄ ≤ β,   Mᾱ ≤ β̄,   Kᾱ < 1,

where |f(t, y)| ≤ M and |∂f/∂y (t, y)| ≤ K for all (t, y) in R_{α,β}. Whatever positive numbers α and β we take, we will have M = (3 + β)² because |f(t, y)| = |y|² and |3 + β| > |3 − β| for any β > 0. Similarly, |∂f/∂y (t, y)| = 2|y|, so K = 2(3 + β). So, (?) requires

(?)′   0 < ᾱ ≤ α,   0 < β̄ ≤ β,   (3 + β)²ᾱ ≤ β̄,   2(3 + β)ᾱ < 1.

The last of these requirements and the requirement that β > 0 together imply that 2(3 + 0)ᾱ < 1, that is, ᾱ < 1/6.
0 and the equilibrium position to be y = 0. So, the ICs are y(0) = 0 m and ẏ(0) = −5 m/s. We are given that the damping force is F = −20v, where the velocity is v = ẏ. The IVP is

2ÿ + 20ẏ + 50y = 0,   y(0) = 0,   ẏ(0) = −5.

The ODE has characteristic equation 0 = 2s² + 20s + 50 = 2(s² + 10s + 25) = 2(s + 5)², so the roots are s = −5, −5. The solutions of the ODE are y(t) = c₁e^(−5t) + c₂te^(−5t), where c₁, c₂ = arbitrary constants. It follows that

ẏ(t) = −5c₁e^(−5t) + c₂(1 − 5t)e^(−5t).

The ICs require

0 = y(0) = c₁,   −5 = ẏ(0) = −5c₁ + c₂,

which implies c₁ = 0 and c₂ = −5. The solution of the IVP is the position of the mass as a function of time, namely

y(t) = −5te^(−5t)

and ẏ(t) = −5(1 − 5t)e^(−5t). The furthest from the equilibrium position that the object goes happens either at t = 0 or at a time when y(t) has a local maximum or a local minimum, because lim_{t→∞} y(t) = 0. But, because y(0) = 0, that is, the object was released from the equilibrium position, this will not happen at t = 0. We have that 0 = ẏ(t) = −5(1 − 5t)e^(−5t) only at t = 0.2, and y(0.2) = −5(0.2)e^(−5(0.2)) = −e^(−1). The furthest from the equilibrium position the object can go is 1/e meters.

3.3.8.28. In order to find the first time when the wheel passes through the equilibrium position, it really does help to put the solution in the amplitude-phase form y = Ae^(−t) cos(t/2 − δ):

1 = A cos δ,   1 = A sin δ,

hence A = √((1)² + (1)²) = √2 and tan δ = 1/1 = 1. Because (c₁, c₂) is in the first quadrant, δ = arctan 1 = π/4.

The IVP's solution in amplitude-phase form is

y(t) = √2 e^(−t) cos(t/2 − π/4).

A function of the form y = Ae^(−t) cos(ωt − δ) will pass through the equilibrium position if, and only if, ωt − δ = π(n + 1/2) for some integer n, that is, if and only if

t = tₙ := (π(n + 1/2) + δ)/ω = δ/ω + π(n + 1/2)/ω.

Here, δ = π/4 and ω = 1/2, so

δ/ω = π/2   and   π(n + 1/2)/ω = (2n + 1)π.

The first time t ≥ 0 when the wheel passes through the equilibrium position is

t₀ = π/2 + (2·0 + 1)π = 3π/2.

Note that t₋₁ < 0.

(b) The solution corresponds to a second-order LCCHODE whose characteristic equation has roots s = −1 ± i(1/2), hence the characteristic equation is, or is equivalent to after dividing by a constant,

0 = (s + 1)² + (1/2)² = s² + 2s + 5/4.

An ODE of the form ÿ + 2ẏ + (5/4)y = 0 would have this vertical position function as a solution.
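The first-passage time claimed in 3.3.8.28 can be spot-checked numerically (a sketch, not part of the manual's method; the solution form is taken from the amplitude-phase computation above):

```python
import math

def y(t):
    # IVP solution in amplitude-phase form: sqrt(2) e^(-t) cos(t/2 - pi/4)
    return math.sqrt(2) * math.exp(-t) * math.cos(t/2 - math.pi/4)

t0 = 3 * math.pi / 2
assert abs(y(t0)) < 1e-12                              # y vanishes at t0 = 3*pi/2
assert all(y(k * t0 / 1000) > 0 for k in range(1000))  # no earlier sign change on [0, t0)
print("first zero crossing at t0 = 3*pi/2")
```

On [0, 3π/2) the phase t/2 − π/4 stays in [−π/4, π/2), where the cosine is positive, which is what the sampling loop confirms.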

3.3.8.29. The solution corresponds to a second-order LCCHODE whose characteristic equation has roots s = −3 ± i, hence the characteristic equation is, or is equivalent to after dividing by a constant, 0 = (s + 3)² + 1² = s² + 6s + 10. We were given that the ODE has the form ÿ + bẏ + ky = 0, whose characteristic equation is 0 = s² + bs + k. So, we need b = 6 and k = 10.

3.3.8.30. In order to find the first time when the wheel passes through the equilibrium position and the maximum absolute deviation of the wheel from the equilibrium position, it really does help to put the solution in the amplitude-phase form y = Ae^(−t) cos(3t − δ):

−1 = A cos δ,   √3 = A sin δ,

hence A = √((−1)² + (√3)²) = 2 and tan δ = √3/(−1) = −√3. Because (c₁, c₂) is in the second quadrant,

δ = π + arctan(−√3) = π − π/3 = 2π/3.

The IVP's solution in amplitude-phase form is

y(t) = 2e^(−t) cos(3t − 2π/3).

A function of the form y = Ae^(−t) cos(ωt − δ) will pass through the equilibrium position if, and only if, ωt − δ = π(n + 1/2) for some integer n, that is, if and only if

t = τₙ := (π(n + 1/2) + δ)/ω = δ/ω + π(n + 1/2)/ω.

Here, δ = 2π/3 and ω = 3, so

δ/ω = 2π/9   and   π(n + 1/2)/ω = π(2n + 1)/6.

The first time t ≥ 0 when the wheel passes through the equilibrium position is

τ₋₁ = 4π/18 + 3π(2·(−1) + 1)/18 = π/18.

Note that τ₋₂ < 0.

(b) The maximum absolute deviation of the wheel from the equilibrium position will occur at either t = 0 or at a time when y(t) = 2e^(−t) cos(3t − 2π/3) has a local maximum or a local minimum, because lim_{t→∞} y(t) = 0. Local extrema occur when

0 = ẏ(t) = −2e^(−t)[cos(3t − 2π/3) + 3 sin(3t − 2π/3)],

that is, denoting θ = 3t − 2π/3,

0 = cos θ + 3 sin θ,

that is, sin θ = −(1/3) cos θ, that is,

tan θ = −1/3.

The solutions of this trigonometric equation are

3tₙ − 2π/3 = θₙ = nπ − arctan(1/3),

that is,

tₙ = nπ/3 + 2π/9 − (1/3) arctan(1/3) ≈ 0.590881516 + n(1.047197551),

where n is any integer. Because of the exponential decay factor e^(−t) in y(t) and the fact that the local maxima (respectively, minima) are periodic with period equal to the quasi-period 2π/3, we know that the maximum absolute deviation of the wheel will either be at t = 0 or at the time of the first positive local maximum or local minimum of y(t). We have y(0) = −1, y(t₀) ≈ 1.0508353370 and y(t₁) ≈ −0.3687589338. The maximum absolute deviation of the wheel from the equilibrium position is

y(t₀) = y(2π/9 − (1/3) arctan(1/3)) ≈ 1.0508353370.

(c) The solution corresponds to a second-order LCCHODE whose characteristic equation has roots s = −1 ± i3, hence the characteristic equation is, or is equivalent to after dividing by a constant, 0 = (s + 1)² + 3² = s² + 2s + 10. An ODE of the form ÿ + 2ẏ + 10y = 0 would have this vertical position function as a solution.
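The maximum absolute deviation in 3.3.8.30(b) can be confirmed by brute-force sampling (a sketch, not the manual's method):

```python
import math

def y(t):
    # IVP solution in amplitude-phase form: 2 e^(-t) cos(3t - 2*pi/3)
    return 2 * math.exp(-t) * math.cos(3*t - 2*math.pi/3)

# dense sampling over several quasi-periods; decay makes later extrema smaller
ts = [k * 1e-4 for k in range(60000)]           # grid on [0, 6)
max_dev = max(abs(y(t)) for t in ts)
t_best = max(ts, key=lambda t: abs(y(t)))
print(max_dev, t_best)
```

The sampled maximum agrees with y(t₀) ≈ 1.0508353370 at t₀ ≈ 0.5909, and exceeds |y(0)| = 1, as claimed.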


3.3.8.31. The current I does not oscillate infinitely often when the ODE is not in the underdamped case. The characteristic equation is (1/2)s² + 10s + 1/C = 0, so the roots are

s = (−10 ± √(10² − 4·(1/2)·(1/C)))/(2·(1/2)) = −10 ± √(100 − 2/C).

So, I does not oscillate infinitely often ⟺ 100 ≥ 2/C ⟺ C ≥ 1/50.

So, the current I does not oscillate infinitely often ⟺ C ≥ 1/50.
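The 3.3.8.31 criterion can be illustrated by computing the roots on either side of C = 1/50 (a sketch; the test capacitances 0.019 and 0.021 are arbitrary values I chose below and above 1/50):

```python
import cmath

def roots(C):
    # characteristic equation (1/2) s^2 + 10 s + 1/C = 0
    a, b, c = 0.5, 10.0, 1.0 / C
    disc = cmath.sqrt(b*b - 4*a*c)
    return ((-b + disc) / (2*a), (-b - disc) / (2*a))

assert abs(roots(0.019)[0].imag) > 0   # C < 1/50: complex roots, underdamped, oscillates
assert roots(1/50)[0].imag == 0        # C = 1/50: repeated real root
assert roots(0.021)[0].imag == 0       # C > 1/50: real roots, no infinite oscillation
print("I oscillates infinitely often exactly when C < 1/50")
```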

3.3.8.32. From the solution we know that the roots of the characteristic equation are s = −2 ± i√3. The roots of the characteristic equation of 5Ï + Rİ + (1/C)I = 0, that is, of 5s² + Rs + 1/C = 0, are

s = (−R ± √(R² − 4·5·(1/C)))/(2·5) = −R/10 ± i√(1/(5C) − (R/10)²).

This implies

2 = R/10   and   3 = 1/(5C) − (R/10)²,

hence R = 20 and thus 3 = 1/(5C) − 2². The latter implies C = 1/35.
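The values R = 20 and C = 1/35 found in 3.3.8.32 can be verified by computing the characteristic roots directly (a sketch):

```python
import math, cmath

R, C = 20.0, 1.0 / 35.0
# characteristic equation 5 s^2 + R s + 1/C = 0
a, b, c = 5.0, R, 1.0 / C
disc = cmath.sqrt(b*b - 4*a*c)
s1 = (-b + disc) / (2*a)
assert abs(s1 - complex(-2.0, math.sqrt(3))) < 1e-12   # root is -2 + i*sqrt(3)
print(s1)
```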

3.3.8.33. The time between successive local maxima on the graph is the quasi-period

T = 2π/ν = 4.000 − 2.000 = 2.000,

so ν = π. The roots of the characteristic equation for ÿ + pẏ + qy = 0 are

α ± iν = s = −p/2 ± i√(4q − p²)/2.

The logarithmic decrement is

D := ln(y(t₂)/y(t₁)) = ln(3.9876/5.1234) = 2πα/ν,

so

α = νD/(2π) = π ln(3.9876/5.1234)/(2π) ≈ −0.1253143675.

So,

−p/2 = α ≈ −0.1253143675 ⇒ p ≈ 0.2506287350

and

π = ν = √(4q − p²)/2 ⇒ (2π)² = 4q − p² ⇒ q = (1/4)(4π² + p²) ≈ (1/4)(4π² + 0.2506287350²),

so q ≈ 9.885308092. So, the ODE is approximately ÿ + 0.2506287350ẏ + 9.885308092y = 0.

3.3.8.34. On the graph it appears that there are successive local maxima at (t₁, y(t₁)) = (0.82, 0.57) and (t₂, y(t₂)) = (3.7, 0.28). The time between successive local maxima on the graph is the quasi-period

T = 2π/ν = 3.7 − 0.82 = 2.88,

so

ν = 2π/2.88 ≈ 2.181661565.

[We are keeping as many digits as possible in work before final conclusions, but we will round off final conclusions to the two significant digits in the graphical data.] Recall that we are assuming m = 1. The roots of the characteristic equation for ÿ + bẏ + ky = 0 are

α ± iν = s = −b/2 ± i√(4k − b²)/2.

The logarithmic decrement is

D := ln(y(t₂)/y(t₁)) = ln(0.28/0.57) = 2πα/ν,

so

α = νD/(2π) ≈ (2.181661565/(2π)) ln(0.28/0.57) ≈ −0.2468217909.

So,

−b/2 = α ≈ −0.2468217909 ⇒ b ≈ 0.4936435818

and

2.181661565 = ν = √(4k − b²)/2 ⇒ (2·2.181661565)² = 4k − b²
⇒ k = (1/4)((2·2.181661565)² + b²) ≈ (1/4)((2·2.181661565)² + 0.4936435818²) ≈ 4.820568181.

So, the ODE is approximately ÿ + 0.4936435818ẏ + 4.820568181y = 0.

3.3.8.35. (a) a1 is the only correct choice because of the Principle of Linear Superposition for the homogeneous ODE ÿ + p(t)ẏ + q(t)y = 0. Note that neither a2 nor a3 can be true if a1 is true.
(b) b3 is the only correct choice because without knowing if W(y₁, y₂) is non-zero at at least one value of t in I we cannot establish that c₁y₁(t) + c₂y₂(t) is a general solution of (?) on I.

3.3.8.36. The neper "frequency" is defined to be α = R/(2L). Recall that the natural frequency is ω₀ = 1/√(LC). The characteristic polynomial of ODE (3.25) that models the DC series RLC circuit shown in Figure 3.8 is

Ls² + Rs + 1/C = 0,

whose roots are

s = (−R ± √(R² − 4L/C))/(2L) = −R/(2L) ± √((R² − 4L/C)/(2L)²) = −R/(2L) ± √((R/(2L))² − 1/(LC)) = −α ± √(α² − ω₀²),

as we desired to show.

3.3.8.38. ODE ÿ + ẏ − 6y = 0 has characteristic equation 0 = s² + s − 6 = (s + 3)(s − 2), so the general solution is y(t) = c₁e^(−3t) + c₂e^(2t). The ICs require

a = y(0) = c₁ + c₂,   0 = ẏ(0) = −3c₁ + 2c₂,

which implies

(c₁, c₂) = (1/5)(2a, 3a), i.e., c₁ = 2a/5 and c₂ = 3a/5. So, the solution of the IVP is

y(t) = (a/5)(2e^(−3t) + 3e^(2t)).

We have

lim_{t→∞} y(t) = (a/5) lim_{t→∞} e^(2t)(3 + 2e^(−5t)) = ∞,

because a is a positive constant, lim_{t→∞} e^(2t) = ∞, and lim_{t→∞} (3 + 2e^(−5t)) = 3 + 0 = 3 > 0. Similarly, we have

lim_{t→−∞} y(t) = (a/5) lim_{t→−∞} e^(−3t)(2 + 3e^(5t)) = ∞,

because a is a positive constant, lim_{t→−∞} e^(−3t) = ∞, and lim_{t→−∞} (2 + 3e^(5t)) = 2 + 0 = 2 > 0.
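The 3.3.8.38 solution can be verified against the ODE and both ICs (a sketch, using a = 1 as a representative positive constant):

```python
import math

a = 1.0  # any positive constant works the same way
y   = lambda t: (a/5) * (2*math.exp(-3*t) + 3*math.exp(2*t))
dy  = lambda t: (a/5) * (-6*math.exp(-3*t) + 6*math.exp(2*t))
d2y = lambda t: (a/5) * (18*math.exp(-3*t) + 12*math.exp(2*t))

assert abs(y(0) - a) < 1e-12 and abs(dy(0)) < 1e-12   # ICs: y(0) = a, y'(0) = 0
for t in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    assert abs(d2y(t) + dy(t) - 6*y(t)) < 1e-8        # residual of y'' + y' - 6y = 0
print("3.3.8.38 solution verified")
```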

3.3.8.39. A critically damped ODE has general solution y(t) = c1 e−αt + c2 t e−αt for some positive constant α. It follows that y(t) ˙ = −αc1 e−αt + c2 (1 − αt)e−αt .


Because the system is released from rest, y(0) ˙ = 0. The ICs require   y0 = y(0) = c1 , 0 = y(0) ˙ = −αc1 + c2 for some constant y0 , which implies c1 = y0 and c2 = αy0 . the solution of the IVP is  y(t) = y0 e−αt 1 + αt and y(t) ˙ = −αy0 e−αt + αy0 (1 − αt)e−αt = −α2 t y0 e−αt . If y0 = 0 then the system stays at rest for all t ≥ 0. In this case, the maximum deviation from equilibrium occurs at t = 0 and is |y0 | = 0. If y0 6= 0 it follows that y(t) ˙ 6= 0 for all t ≥ 0. Because limt→∞ y(t) = 0 and there is no critical point for t ≥ 0, it follows that the maximum deviation from equilibrium occurs at t = 0 and is |y0 |. 3.3.8.40. An overdamped ODE has general solution y(t) = c1 e−s1 t + c2 e−s2 t for some unequal but positive constants s1 and s2 . It follows that y(t) ˙ = −s1 c1 e−s1 t − s2 c2 e−s2 t . Because the system is released from rest, y(0) ˙ = 0. The ICs require   y0 = y(0) = c1 + c2 , 0 = y(0) ˙ = −s1 c1 − s2 c2 for some constant y0 , which implies    −1   1 c1 1 1 y0 −s2 = = c2 −s1 −s2 0 s1 s1 − s2 So, the solution of the IVP is y(t) =

y0 s1 − s2

−1 1



   1 y0 −s2 y0 = . 0 s1 y0 s1 − s2

 − s2 e−s1 t + s1 e−s2 t .

and

 y0 s1 s2 −s1 t e − e−s2 t . s1 − s2 If y0 = 0 then the system stays at rest for all t ≥ 0. In this case, the maximum deviation from equilibrium occurs at t = 0 and is |y0 | = 0.  If y0 6= 0 it follows that y(t) ˙ 6= 0 for all t > 0, because there is no t > 0 with e−s1 t − e−s2 t = 0, because s1 6= s2 and both s1 and s2 are positive. Because limt→∞ y(t) = 0 and there is no critical point for t ≥ 0, it follows that the maximum deviation from equilibrium occurs at t = 0 and is |y0 |. y(t) ˙ =

2

3.3.8.41. (a) Define y1 (t) = et and y2 (t) = et . To verify that they solve the ODE, we calculate (1 − 2t)¨ y1 + (1 + 4t2 )y˙ 1 + (−2 + 2t − 4t2 )y1 = (1 − 2t)et + (1 + 4t2 )et + (−2 + 2t − 4t2 )et   = (1 − 2t)+(1 + 4t2 )+(−2 + 2t − 4t2 ) et ≡ 0 · et ≡ 0  2 2 and y˙ 2 = 2tet and y¨2 = 2 + 4t2 et imply  2 2 2 (1 − 2t)¨ y2 + (1 + 4t2 )y˙ 2 + (−2 + 2t − 4t2 )y2 = (1 − 2t) 2 + 4t2 et + (1 + 4t2 )2tet + (−2 + 2t − 4t2 )et  2 = (1 − 2t)(2 + 4t2 )+(1 + 4t2 )2t+(−2 + 2t − 4t2 ) et  2 2 = − 8t3 + 4t2 −  4t + 2 +  2t + 8t3 − 2 +  2t − 4t2 et · et ≡ 0. (b) Because 2 t et e t+t2 W y1 (t), y2 (t) = t 6= 0, 2 = (2t − 1)e e 2tet n o 2 except at t = 21 , it follows from Theorem 3.13 that et , et is a complete set of basic solutions on any open interval 

I that does not include t = 21 .


2

2

(c) Let y(t) = c1 et + c2 et . It follows that y(t) ˙ = c1 et + c2 2 t et . The ICs require   1/2 + c2 e1/4   5 = y( 12 ) = c1 e ,   −3 = y( ˙ 12 ) = c1 e1/2 + c2 e1/4 which would require 5 = −3. The system of equations for c1 , c2 is inconsistent, that is, has no solution. So, there is no solution of this IVP. (d) Our difficulty in part (c) of not being able to solve the IVP does not contradict the Existence and Uniqueness conclusion of Theorem 3.8 because the ODE, when written in standard form, 1+ 4t2 −2 + 2t− 4t2 y˙ + y = 0, 1−2t 1−2t

y¨ +

does not satisfy the hypothesis that both of the coefficients p(t) and q(t) must be continuous on some open interval containing the initial time t = 21 . 3.3.8.42. y¨ − ω 2 y = 0 has characteristic equation s2 − ω 2 = 0, which has roots s = ±ω. The general solution is y(t) = c1 eωt + c2 e−ωt , where c1 , c2 =arbitrary constants. The boundary condition requires 0 = y(L) = c1 eωL + c2 e−ωL . This is satisfied if c1 = c2 = 0. If, instead c1 6= 0 or c2 6= 0, then we need −c1 eωL = c2 e−ωL . Multiply both sides by eωL to get −c1 e2ωL = c2 . The solutions that satisfy the BC in this case are  y(t) = c1 eωt − e2ωL e−ωt , where c1 is an arbitrary constant. This is enough to finish the problem, but we can also go on to get a nicer looking formula: This can be rewritten as  1 − e−ωL eωt + eωL e−ωt , y(t) = −2c1 eωL · 2 that is, solutions can be written in the form y(t)=c ·

 eω(L−t) − e−ω(L−t) =c sinh ω(L − t) , 2

where c , −2c1 eωL is an arbitrary constant. 3.3.8.43. y¨ − ω 2 y = 0 has characteristic equation s2 − ω 2 = 0, which has roots s = ±ω. The general solution is y(t) = c1 eωt + c2 e−ωt , where c1 , c2 =arbitrary constants. It follows that  y(t) ˙ = ω c1 eωt − c2 e−ωt , The boundary condition (BC) requires  0 = y(L) ˙ = ω c1 eωL − c2 e−ωL . Recall that the problem assumed that ω is a positive constant. The BC is satisfied if c1 = c2 = 0. If, instead c1 6= 0 or c2 6= 0, then we need c1 eωL = c2 e−ωL . Multiply both sides by eωL to get c1 e2ωL = c2 . The solutions that satisfy the BC in this case are  y(t) = c1 eωt + e2ωL e−ωt , where c1 is an arbitrary constant. This is enough to finish the problem, but we can also go on to get a nicer looking formula: This can be rewritten as  1 −ωL ωt y(t) = 2c1 eωL · e e + eωL e−ωt , 2 c Larry


that is, solutions can be written in the form y(t)=c ·

 eω(L−t) + e−ω(L−t) =c cosh ω(L − t) , 2

where c , 2c1 eωL is an arbitrary constant. 3.3.8.44. We calculate the Wronskian  αy1 + y2 W αy1 (t) + y2 (t), y1 (t) + 2αy2 (t) = αy1 0 + y2 0

y1 + 2αy2 0 0 0 0 0 0 = (αy1 + y2 )(y1 + 2αy2 ) − (y1 + 2αy2 )(αy1 + y2 ) y1 + 2αy2

2 0 0 0 2 0 0 0 2 2 2αy y2 0 = (2α2 − 1)(y1 y2 0 − y2 y1 0 ) 2αy y2 0 −  αy = αy 1 y1 − 2α y2 y1 − y1 y2 −  1 y1 + y2 y1 + 2α y1 y2 +   = (2α2 − 1)W y1 , y2 ,





so {αy1 (t) + y2 (t), y1 (t) + 2αy2 (t)} is also a compete set of basic solutions on I exactly when 2α2 − 1 6= 0, that is, for all α except α = ± √12 . 3.3.8.45. We calculate the Wronskian αy1 − 3y2 W αy1 (t) − 3y2 (t), y1 (t) − αy2 (t) = αy1 0 − 3y2 0 

y1 − αy2 = (αy1 − 3y2 )(y1 0 − αy2 0 ) − (y1 − αy2 )(αy1 0 − 3y2 0 ) y1 0 − αy2 0

0 2 0 2 0 0 0 0 2 2 = αy 3αy y2 0 −  αy 3αy y2 0 = (−α2 + 3)(y1 y2 0 − y2 y1 0 ) 1 y1 − 3y2 y1 − α y1 y2 +  1 y1 + α y2 y1 + 3y1 y2 −   = (−α2 + 3)W y1 , y2 ,





2 so {αy1 (t) − 3y2 (t), y1 (t) √ − αy2 (t)} is also a compete set of basic solutions on I exactly when −α + 3 6= 0, that is, for all α except α = ± 3.

3.3.8.46. Rewrite the ODE as dW + p(t)W (t) = 0, which is a linear first order ODE with integrating factor µ(t) , dt  ˆ t  exp − p(τ )dτ . Multiply both sides of the ODE by µ(t) to get t0

 d ˙ (t) + p(t)µ(t)W (t) = 0, µ(t)W (t) = µ(t)W dt hence

 d µ(s)W (s) = 0, ds and then take the definite integral both sides with respect to s to get ˆ t  t  t µ(t)W (t) − µ(t0 )W (t0 ) = µ(s)W (s) t = 0 ds = 0 t = 0. 0

 ˆ But this, together with µ(t0 ) = exp −

t0

0

t0

 p(τ )dτ

= exp (0) = 1, implies

t0

µ(t)W (t) − W (t0 ) = 0. So, µ(t)W (t) = W (t0 ), which implies  ˆ t  W (t)= µ(t)−1 W (t0 )= exp − p(τ )dτ W (t0 ), t0

as was desired. 3.3.8.47. Calculus I results say that the relative maxima of y(t) = Aeαt cos(νt − δ) occur where y(t) ˙ = 0 and alternate with relative minima, unless y(t) ˙ = 0 does not change sign as t passes through some critical point. We calculate    y(t) ˙ = A αeαt cos(νt − δ) − νeαt sin(νt − δ) = Aeαt α cos(νt − δ) − ν sin(νt − δ) = Aeαt ρ cos (νt − δ) − η where we use polar coordinates, that is, the amplitude phase form, to express  α cos(νt − δ) − ν sin(νt − δ) = ρ cos (νt − δ) − η . c Larry


This requires 

α = ρ cos η ν = −ρ sin η

 ,

√ ao ρ = α2 + ν 2 and tan η = − αν . Note that both α 6= 0 and ν 6= 0 are assumed. Because y(t) ˙ = Aρeαt cos(νt − δ − η), the critical points are at tn satisfying  1 νtn − δ − η = n − π, 2 that is, tn =

1 ν



  1 δ+η+ n− π , 2

where n are integers. From this, it follows that successive critical points are at a distance apart equal to       1 1 π 1 1 tn+1 − tn = δ+η+ (n+1)− π − δ+η + n− π = . ν 2 ν 2 ν Also, the trigonometric function cos θ changes sign as θ passes through its successive zeros, so y 0 (t) changes sign as t passes through each critical point. Because of the changes of sign, it follows that relative maxima and relative minima alternate successively. So, consecutive relative maxima differ by π 2π tn+2 − tn = 2 · = , ν ν as we were asked to show.


Section 3.4.4

3.4.4.1. The characteristic polynomial, P(s) := s³ + s² − 2, has the easy-to-find root s = 1 because P(1) = 0. We factor 0 = P(s) = s³ + s² − 2 = (s − 1)(s² + 2s + 2). Because s² + 2s + 2 = (s + 1)² + 1, the three roots of the characteristic polynomial are s = 1, −1 ± i. The general solution of the ODE is y(t) = c₁e^t + c₂e^(−t) cos t + c₃e^(−t) sin t, where c₁, c₂, c₃ = arbitrary constants.

3.4.4.2. The characteristic polynomial is P(s) := s⁶ − s⁴ − 2s² = s²(s⁴ − s² − 2). This gives the root s = 0 with multiplicity two. Substitute u = s² into s⁴ − s² − 2 to get 0 = u² − u − 2 = (u − 2)(u + 1). So, s² = u = 2 gives roots s = ±√2 and s² = u = −1 gives roots s = ±i. The six roots of the characteristic polynomial are s = 0, 0, ±√2, ±i. The general solution of the ODE is

y(t) = c₁ + c₂t + c₃e^(−√2 t) + c₄e^(√2 t) + c₅ cos t + c₆ sin t,

where c₁, ..., c₆ = arbitrary constants.

3.4.4.3. The characteristic equation is (s + 1)³ = 0, so the roots are s = −1, −1, −1. The general solution of the ODE is y(t) = c₁e^(−t) + c₂te^(−t) + c₃t²e^(−t), where c₁, c₂, c₃ = arbitrary constants.

3.4.4.4. The characteristic equation is 0 = s² − 9 = (s + 3)(s − 3), so the roots are s = −3, 3. The general solution of the ODE is y(t) = c₁e^(−3t) + c₂e^(3t), where c₁, c₂ = arbitrary constants. It follows that ẏ(t) = −3c₁e^(−3t) + 3c₂e^(3t). The ICs require

3 = y(0) = c₁ + c₂,   −6 = ẏ(0) = −3c₁ + 3c₂,

which implies (c₁, c₂) = (2.5, 0.5), so the solution of the IVP is

y(t) = (5/2)e^(−3t) + (1/2)e^(3t).

There is no time constant, because not all solutions have lim_{t→∞} y(t) = 0.

3.4.4.5. The characteristic equation is 0 = s³ − 2s² − 15s = s(s² − 2s − 15) = s(s + 3)(s − 5), so the roots are s = −3, 0, 5. The general solution of the ODE is y(t) = c₁e^(−3t) + c₂ + c₃e^(5t), where c₁, c₂, c₃ = arbitrary constants. The ICs require

0 = y(0) = c₁ + c₂ + c₃,   0 = ẏ(0) = −3c₁ + 5c₃,   1 = ÿ(0) = 9c₁ + 25c₃,

3.4.4.5. The characteristic equation is 0 = s3 − 2s2 − 15s = s(s2 − 2s − 15) = s(s + 3)(s − 5), so the roots are s = −3, 0, 5. The general solution of the ODE is y(t) = c1 e−3t + c2 + c3 e5t , where c1 , c2 , c3 =arbitrary constants. The ICs require    0 = y(0) = c1 + c2 + c3  0 = y(0) ˙ = −3c1 + 5c3 ,   1 = y¨(0) = 9c1 + 25c3 c Larry

Turyn, October 13, 2013

page 47

which, using a calculator, implies that    c1 1  c2 = −3 c3 9

1 0 0

−1    0 1 5 1  −8  , 5   0 = 120 1 25 3

so the solution of the IVP is

1 (5e−3t − 8 + 3e5t ). 120 There is no time constant, because not all solutions have limt→∞ y(t) = 0. y(t) =

3.4.4.6. The characteristic equation is 0 = s4 + 2s2 + 1 = (s2 + 1)2 , so the roots are s = ±i, ±i. The general solution of the ODE is y(t) = c1 cos t + c2 sin t + c3 t cos t + c4 t sin t, where c1 , c2 , c3 , c4 =arbitrary constants. It follows that y(t) ˙ = −c1 sin t + c2 cos t + c3 (cos t − t sin t) + c4 (sin t + t cos t). y¨(t) = −c1 cos t − c2 sin t + c3 (−2 sin t − t cos t) + c4 (2 cos t − t sin t), and

... y (t) = c1 sin t − c2 cos t + c3 (−3 cos t + t sin t) + c4 (−3 sin t − t cos t),

The ICs require  0 = y(0) = c1    0 = y(0) ˙ = c2 + c3 0 = y ¨ (0) = −c + 2c4  1  ...  −2 = y (0) = −c2 − 3c3 which, using a calculator, implies that    c1 1  c2   0  =  c3   −1 c4 0

0 1 0 −1

−1 0 0  0 0   2  0 0 −2

0 1 0 −3

   

,

  



  0   −1  =     1 , 0

so the solution of the IVP is y(t) = − sin t + t cos t. 3.4.4.7. The characteristic equation is 0 = s4 − 2s2 − 3 = (s2 + 1)(s2 − 3) √ √ so the roots are s = − 3, 3, ±i. The general solution of the ODE is √

y(t) = c1 e−

3t



+ c2 e

3t

+ c3 cos t + c4 sin t,

where c1 , c2 , c3 , c4 =arbitrary constants. It follows that √ √ √ √ y(t) ˙ = − 3 c1 e− 3 t + 3 c2 e 3 t − c3 sin t + c4 cos t, √

y¨(t) = 3c1 e− and

3t

+ 3c2 e

√ 3t

− c3 cos t − c4 sin t,

√ √ √ ... y (t) = −33/2 c1 e− 3 t + 33/2 3 c2 e 3 t + c3 sin t − c4 cos t.

The ICs require  0 = y(0) = c1√+ c2 + c√ 3    0 = y(0) ˙ = − 3 c1 + 3 c2 + c4 + 3c2 − c√  0 = y¨(0) = 3c1 √ 3  ...  −2 = y (0) = −3 3 c1 + 3 3 c2 − c4

   

,

  

c Larry

Turyn, October 13, 2013

page 48

which, using a calculator, implies that    c1 √1  c2   − 3  =  c3   √3 c4 −3 3 so the solution of the IVP is

√1 3 √3 3 3

1 0 −1 0

−1 0 0  0 1   0  0 −2 −1

√  −1/(4√ 3)   1/(4 3)  , =     0 1/2 



√ 1 1 √ 1 y(t) = √ e− 3 t − √ e 3 t + sin t. 2 4 3 4 3

3.4.4.8. The characteristic polynomial is s3 + 3s2 + s + 3. We are given that it has roots s = ±i because we were given that y = cos t is one of the solutions of the ODE. It follows that (s − i)(s + i) = (s2 + 1) is a factor of the characteristic polynomial. We calculate that the characteristic equation is 0 = s3 + 3s2 + s + 3 = (s2 + 1)(s + 3), so the roots are s = −3, ±i. The general solution of the ODE is y(t) = c1 e−3t + c2 cos t + c3 sin t, where c1 , c2 , c3 =arbitrary constants. 3.4.4.9. The characteristic equation is 0 = s4 − 8s3 + 17s2 − 8s + 16. We are given that it has roots s = ±i because we were given that y = sin t is one of the solutions of the ODE. It follows that (s − i)(s + i) = (s2 + 1) is a factor of the characteristic polynomial. We calculate that the characteristic equation is 0 = s4 − 8s3 + 17s2 − 8s + 16 = (s2 + 1)(s2 − 8s + 16) = (s2 + 1)(s − 4)2 so the roots are s = 4, 4, ±i. The general solution of the ODE is y(t) = c1 e4t + c2 t e4t + c3 cos t + c4 sin t, where c1 , c2 , c3 , c4 =arbitrary constants. 3.4.4.10. The characteristic polynomial is 0 = s4 + 2s3 + 2s2 + 2s + 1. We are given that it has roots s = ±i because we were given that y = sin t is one of the solutions of the ODE. It follows that (s − i)(s + i) = (s2 + 1) is a factor of the characteristic polynomial. We calculate that the characteristic equation is 0 = s4 + 2s3 + 2s2 + 2s + 1 = (s2 + 1)(s2 + 2s + 1) = (s2 + 1)(s + 1)2 so the roots are s = −1, −1, ±i. The general solution of the ODE is y(t) = c1 e−t + c2 t e−t + c3 cos t + c4 sin t, where c1 , c2 , c3 , c4 =arbitrary constants. 3.4.4.11. (a) The characteristic equation is 0=s3 + 8=(s + 2)(s2 − 2s + 4) = (s + 2) (s − 1)2 + 3 √ so the roots are s = −2, 1 ± i 3. The general solution of the ODE is √ √ y(t) = c1 e−2t + c2 et cos( 3 t) + c3 e−t sin( 3 t),



where c1 , c2 , c3 =arbitrary constants. (b) The characteristic equation is 0 = s3 − 2 = (s −

√   3 3 √ √ √ √ √  2 2 √ 2 2 3 3 3 3 3 2) s2 + 2 s + 4 = (s − 2) s+ + 4− 2 2 √ √   3 3 √ 2 2 √ 4 3 3 s+ + 4− = (s − 2) 2 4 c Larry


so the roots are

√ 3

s=

√ √ √ 3 3 2 2· 3 2, − ±i . 2 2

The general solution of the ODE is y(t) = c1 e

√ 3 2t

+ c 2 e−

√ 3

2 t/2

cos

√  √  √ √ 3 3 √ 3 2 · 3t 2 · 3t + c3 e− 2t/2 sin , 2 2

where c1 , c2 , c3 =arbitrary constants. 3.4.4.12. The loop currents and voltages across the capacitors are shown in Figure 3.20. Kirchoff’s voltage law gives in the first loop the equation 16 ˙ 16 ˙ (1) V0 = 7I1 + I1 − I2 + v1 7 7 and in the second loop the equation 16 ˙ 16 ˙ (2) 0 = v2 + I2 − I1 . 7 7 The voltages across the capacitors satisfy 1 I1 = 5I1 (3) v˙ 1 = 1/5 and 1 (4) v˙ 2 = I1 = 16I2 . 1/16 This is a DC circuit, so V0 is a constant. Differentiate equation (1) with respect to t to get 16 ¨ 16 ¨ 0 = 7I˙1 + I1 − I2 + v˙ 1 7 7 and use this and equation (3) to get 0 = 7I˙1 +

(5)

16 ¨ 16 ¨ I1 − I2 + 5I1 7 7

Differentiate equation (2) with respect to t to get (6)

0 = v˙ 2 +

16 ¨ 16 ¨ I2 − I1 . 7 7

Equations (5) and (6) together imply −v˙ 2 =

16 ¨ 16 ¨ I2 − I1 = 7I˙1 + 5I1 , 7 7

hence, after differentiating with respect to t, (7) From equation (2), it follows that I˙1 = I˙2 +

7 16

v¨2 = −7I¨1 − 5I˙1 .

v2 . Substitute the latter equation into equation (7) to get

  7 v¨2 = −7I¨1 − 5 I˙2 + v2 16 and then substitute into this equation (6), in the form I¨1 = I¨2 +

7 16

v˙ 2 , to get

    7 7 v¨2 = −7 I¨2 + v˙ 2 − 5 I˙2 + v2 . 16 16 Multiply through by 16 and substitute in the implications of equation (4) that I˙2 =

1 16

v¨2 and I¨2 =

1 16

... v 2 , to get

... 16¨ v2 = −7 v 2 − 49v˙ 2 − 5¨ v2 − 35v2 hence

... 0 = 7 v 2 + 21¨ v2 + 49v˙ 2 + 35v2 .

After division by 7, this gives ODE (?)

... 0 = v 2 + 3¨ v2 + 7v˙ 2 + 5v2 . c Larry


The characteristic equation for (⋆) is $0 = P(s) = s^3 + 3s^2 + 7s + 5$. Standard advice suggests trying to find roots in the form

$$s = \pm\,\frac{\text{factors of }5}{\text{factors of }1} = \pm 1,\ \pm 5.$$

We find $P(1) = 16 \ne 0$, $P(-1) = 0$. So, we can factor

$$0 = P(s) = s^3 + 3s^2 + 7s + 5 = (s+1)(s^2 + 2s + 5) = (s+1)\bigl((s+1)^2 + 4\bigr).$$

The roots are $s = -1,\ -1 \pm i2$, so the general solution for $v_2$ is

$$v_2(t) = c_1 e^{-t} + c_2 e^{-t}\cos 2t + c_3 e^{-t}\sin 2t,$$

where $c_1, c_2, c_3$ = arbitrary constants.

3.4.4.13. For $n = 3$, expanding the determinant

$$W\bigl(y_1(t), y_2(t), y_3(t)\bigr) = \begin{vmatrix} y_1 & y_2 & y_3 \\ \dot y_1 & \dot y_2 & \dot y_3 \\ \ddot y_1 & \ddot y_2 & \ddot y_3 \end{vmatrix}$$

along the first row gives

$$W = y_1\begin{vmatrix}\dot y_2 & \dot y_3\\ \ddot y_2 & \ddot y_3\end{vmatrix} - y_2\begin{vmatrix}\dot y_1 & \dot y_3\\ \ddot y_1 & \ddot y_3\end{vmatrix} + y_3\begin{vmatrix}\dot y_1 & \dot y_2\\ \ddot y_1 & \ddot y_2\end{vmatrix} = y_1(\dot y_2\ddot y_3 - \dot y_3\ddot y_2) - y_2(\dot y_1\ddot y_3 - \dot y_3\ddot y_1) + y_3(\dot y_1\ddot y_2 - \dot y_2\ddot y_1).$$

Take the time derivative to get

$$\dot W(t) = \dot y_1(\dot y_2\ddot y_3 - \dot y_3\ddot y_2) - \dot y_2(\dot y_1\ddot y_3 - \dot y_3\ddot y_1) + \dot y_3(\dot y_1\ddot y_2 - \dot y_2\ddot y_1) + y_1(\dot y_2\dddot y_3 - \dot y_3\dddot y_2) - y_2(\dot y_1\dddot y_3 - \dot y_3\dddot y_1) + y_3(\dot y_1\dddot y_2 - \dot y_2\dddot y_1),$$

after the terms of the form $\ddot y_i\,\ddot y_j$ cancel in pairs. The first three terms are the expansion of the determinant

$$\begin{vmatrix} \dot y_1 & \dot y_2 & \dot y_3 \\ \dot y_1 & \dot y_2 & \dot y_3 \\ \ddot y_1 & \ddot y_2 & \ddot y_3 \end{vmatrix} = 0,$$

which vanishes because of the repeated row. Substituting $\dddot y_i = -p_1(t)\ddot y_i - p_2(t)\dot y_i - p_3(t)y_i$ into the remaining terms, the $p_2(t)$ and $p_3(t)$ contributions cancel in pairs, leaving

$$\dot W(t) = -p_1(t)\bigl[\,y_1(\dot y_2\ddot y_3 - \dot y_3\ddot y_2) - y_2(\dot y_1\ddot y_3 - \dot y_3\ddot y_1) + y_3(\dot y_1\ddot y_2 - \dot y_2\ddot y_1)\,\bigr] = -p_1(t)\,W(t).$$
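As a quick numerical cross-check (a sketch, not part of the original solution), the factored characteristic polynomial above can be evaluated at its claimed roots:

```python
# Cross-check: the characteristic polynomial P(s) = s^3 + 3s^2 + 7s + 5
# should vanish at the claimed roots s = -1 and s = -1 +/- 2i.
def P(s):
    return s**3 + 3*s**2 + 7*s + 5

roots = [-1, complex(-1, 2), complex(-1, -2)]
residuals = [abs(P(s)) for s in roots]
print(residuals)  # each entry should be zero
```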

Section 3.5.1

3.5.1.1. The characteristic equation is $0 = n(n-1) + 5n - 2 = n^2 + 4n - 2$, so the roots are

$$n = \frac{-4 \pm \sqrt{4^2 - 4\cdot 1\cdot(-2)}}{2} = -2 \pm \sqrt{6}.$$

The general solution of the ODE is

$$y(r) = c_1 r^{-2+\sqrt{6}} + c_2 r^{-2-\sqrt{6}},$$

where $c_1, c_2$ = arbitrary constants.

3.5.1.2. The characteristic equation is $0 = n(n-1) + \frac14 = n^2 - n + \frac14$, so the roots are

$$n = \frac{1 \pm \sqrt{(-1)^2 - 4\cdot 1\cdot\frac14}}{2} = \frac12 \pm 0.$$

Because of the repeated real root, the general solution of the ODE is $y(r) = c_1 r^{1/2} + c_2 r^{1/2}\ln r$, where $c_1, c_2$ = arbitrary constants.

3.5.1.3. The characteristic equation is $0 = n(n-1) + n + 4 = n^2 + 4$, so the roots are $n = \pm i2$. The general solution of the ODE is $y(r) = c_1\cos(2\ln r) + c_2\sin(2\ln r)$, where $c_1, c_2$ = arbitrary constants.

3.5.1.4. The characteristic equation is $0 = n(n-1) + 3n + 3 = n^2 + 2n + 3 = (n+1)^2 + 2$, so the roots are $n = -1 \pm i\sqrt2$. The general solution of the ODE is

$$y(r) = r^{-1}\bigl(c_1\cos(\sqrt2\,\ln r) + c_2\sin(\sqrt2\,\ln r)\bigr),$$

where $c_1, c_2$ = arbitrary constants.

3.5.1.5. The characteristic equation is $0 = n(n-1) + 5n + 4 = n^2 + 4n + 4 = (n+2)^2$, so the roots are $n = -2, -2$. The general solution of the ODE is $y(r) = c_1 r^{-2} + c_2 r^{-2}\ln r$, where $c_1, c_2$ = arbitrary constants.

3.5.1.6. The characteristic equation is $0 = n(n-1) + 6n + 6 = n^2 + 5n + 6 = (n+3)(n+2)$, so the roots are $n = -3, -2$. The general solution of the ODE is $y(r) = c_1 r^{-3} + c_2 r^{-2}$, where $c_1, c_2$ = arbitrary constants. It follows that $y'(r) = -3c_1 r^{-4} - 2c_2 r^{-3}$. The ICs require

$$\begin{cases} 0 = y(2) = \frac18 c_1 + \frac14 c_2 \\ 1 = y'(2) = -\frac{3}{16}c_1 - \frac14 c_2 \end{cases}$$

which implies

$$\begin{bmatrix} c_1 \\ c_2\end{bmatrix} = \begin{bmatrix} \frac18 & \frac14 \\[2pt] -\frac{3}{16} & -\frac14 \end{bmatrix}^{-1}\begin{bmatrix}0\\1\end{bmatrix} = \frac{1}{1/64}\begin{bmatrix} -\frac14 & -\frac14 \\[2pt] \frac{3}{16} & \frac18 \end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} = 64\begin{bmatrix}-\frac14\\[2pt] \frac18\end{bmatrix} = \begin{bmatrix}-16\\ 8\end{bmatrix},$$

so the solution of the IVP is $y(r) = -16r^{-3} + 8r^{-2}$.

3.5.1.7. The characteristic equation is $0 = n(n-1) - 2 = n^2 - n - 2 = (n+1)(n-2)$, so the roots are $n = -1, 2$. The general solution of the ODE is $y(r) = c_1 r^{-1} + c_2 r^2$,
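As a quick numerical cross-check (a sketch, not part of the original solution), the IVP answer of 3.5.1.6 can be substituted back into its Cauchy–Euler equation and initial conditions:

```python
# Cross-check of 3.5.1.6: y(r) = -16 r^-3 + 8 r^-2 should satisfy the
# Cauchy-Euler ODE r^2 y'' + 6 r y' + 6 y = 0 with y(2) = 0 and y'(2) = 1.
def y(r):   return -16*r**-3 + 8*r**-2
def dy(r):  return 48*r**-4 - 16*r**-3
def d2y(r): return -192*r**-5 + 48*r**-4

residual = max(abs(r**2*d2y(r) + 6*r*dy(r) + 6*y(r)) for r in (0.5, 1.0, 2.0, 3.7))
print(residual, y(2.0), dy(2.0))
```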

where $c_1, c_2$ = arbitrary constants. It follows that $y'(r) = -c_1 r^{-2} + 2c_2 r$. The ICs require

$$\begin{cases} 0 = y(e) = e^{-1}c_1 + e^2 c_2 \\ 11 = y'(e) = -e^{-2}c_1 + 2e\,c_2 \end{cases}$$

which implies

$$\begin{bmatrix}c_1\\c_2\end{bmatrix} = \begin{bmatrix} e^{-1} & e^2 \\ -e^{-2} & 2e \end{bmatrix}^{-1}\begin{bmatrix}0\\11\end{bmatrix} = \frac13\begin{bmatrix} 2e & -e^2 \\ e^{-2} & e^{-1} \end{bmatrix}\begin{bmatrix}0\\11\end{bmatrix} = \frac{11}{3}\begin{bmatrix}-e^2\\ e^{-1}\end{bmatrix},$$

so the solution of the IVP is

$$y(r) = \frac{11}{3}\bigl(-e^2 r^{-1} + e^{-1} r^2\bigr).$$
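As a quick numerical cross-check (a sketch, not part of the original solution), the IVP answer of 3.5.1.7 can be substituted back into its ODE and initial conditions:

```python
import math

# Cross-check of 3.5.1.7: y(r) = (11/3)(-e^2 r^-1 + e^-1 r^2) should satisfy
# r^2 y'' - 2y = 0 with y(e) = 0 and y'(e) = 11.
e = math.e
def y(r):   return (11/3)*(-e**2/r + r**2/e)
def dy(r):  return (11/3)*(e**2/r**2 + 2*r/e)
def d2y(r): return (11/3)*(-2*e**2/r**3 + 2/e)

residual = max(abs(r**2*d2y(r) - 2*y(r)) for r in (1.0, 2.0, e, 5.0))
print(residual, y(e), dy(e))
```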

3.5.1.8. The characteristic equation is $0 = n(n-1) - 3n + 4 = n^2 - 4n + 4 = (n-2)^2$, so the roots are $n = 2, 2$. The general solution of the ODE is $y(r) = c_1 r^2 + c_2 r^2\ln r$, where $c_1, c_2$ = arbitrary constants. It follows that

$$y'(r) = 2c_1 r + 2c_2 r\ln r + c_2 r^2\cdot\frac1r = r(2c_1 + 2c_2\ln r + c_2).$$

Using the fact that $\ln e = 1$, the ICs require

$$\begin{cases} 2 = y(e) = e^2 c_1 + e^2 c_2 \\ -3 = y'(e) = 2e\,c_1 + 3e\,c_2 \end{cases}$$

which implies

$$\begin{bmatrix}c_1\\c_2\end{bmatrix} = \begin{bmatrix} e^2 & e^2 \\ 2e & 3e \end{bmatrix}^{-1}\begin{bmatrix}2\\-3\end{bmatrix} = \frac{1}{e^3}\begin{bmatrix} 3e & -e^2 \\ -2e & e^2 \end{bmatrix}\begin{bmatrix}2\\-3\end{bmatrix} = \frac{1}{e^3}\begin{bmatrix} 6e + 3e^2 \\ -4e - 3e^2 \end{bmatrix},$$

so the solution of the IVP is

$$y(r) = e^{-3} r^2\bigl(6e + 3e^2 + (-4e - 3e^2)\ln r\bigr).$$

3.5.1.9. The characteristic equation is $0 = n(n-1) - n + 5 = n^2 - 2n + 5 = (n-1)^2 + 4$, so the roots are $n = 1 \pm i2$. The general solution of the ODE is $y(r) = c_1 r\cos(2\ln r) + c_2 r\sin(2\ln r)$, where $c_1, c_2$ = arbitrary constants. It follows that

$$y'(r) = c_1\cos(2\ln r) - c_1 r\sin(2\ln r)\cdot\frac2r + c_2\sin(2\ln r) + c_2 r\cos(2\ln r)\cdot\frac2r,$$

that is, $y'(r) = (c_1 + 2c_2)\cos(2\ln r) + (c_2 - 2c_1)\sin(2\ln r)$. Using the fact that $\ln 1 = 0$, the ICs require

$$\begin{cases} -2 = y(1) = c_1 \\ 0 = y'(1) = c_1 + 2c_2 \end{cases}$$

which implies $c_1 = -2$ and $c_2 = 1$. So, the solution of the IVP is $y(r) = -2r\cos(2\ln r) + r\sin(2\ln r)$.
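As a quick finite-difference cross-check (a sketch, not part of the original solution), the IVP answer of 3.5.1.9 can be substituted back into its ODE and initial conditions:

```python
import math

# Cross-check of 3.5.1.9: y(r) = -2 r cos(2 ln r) + r sin(2 ln r) should
# satisfy r^2 y'' - r y' + 5 y = 0 with y(1) = -2 and y'(1) = 0.
def y(r):
    return -2*r*math.cos(2*math.log(r)) + r*math.sin(2*math.log(r))

h = 1e-4  # central-difference step for approximating y' and y''
def dy(r):  return (y(r+h) - y(r-h)) / (2*h)
def d2y(r): return (y(r+h) - 2*y(r) + y(r-h)) / h**2

residual = max(abs(r*r*d2y(r) - r*dy(r) + 5*y(r)) for r in (0.8, 1.0, 2.5))
print(residual, y(1.0), dy(1.0))
```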

3.5.1.10. Here, the Wronskian is

$$W\bigl(r^\alpha\cos(\nu\ln r),\, r^\alpha\sin(\nu\ln r)\bigr) = \begin{vmatrix} r^\alpha\cos(\nu\ln r) & r^\alpha\sin(\nu\ln r) \\[2pt] r^{\alpha-1}\bigl(\alpha\cos(\nu\ln r) - \nu\sin(\nu\ln r)\bigr) & r^{\alpha-1}\bigl(\alpha\sin(\nu\ln r) + \nu\cos(\nu\ln r)\bigr) \end{vmatrix}$$

$$= r^\alpha\cos(\nu\ln r)\cdot r^{\alpha-1}\bigl(\alpha\sin(\nu\ln r) + \nu\cos(\nu\ln r)\bigr) - r^\alpha\sin(\nu\ln r)\cdot r^{\alpha-1}\bigl(\alpha\cos(\nu\ln r) - \nu\sin(\nu\ln r)\bigr)$$

$$= r^{2\alpha-1}\,\nu\bigl(\cos^2(\nu\ln r) + \sin^2(\nu\ln r)\bigr) = \nu\, r^{2\alpha-1} \ne 0$$

on any open interval not containing $r = 0$, assuming $\nu \ne 0$.

3.5.1.11. The characteristic equation is $0 = n(n-1) + n - (2m)^2 = n^2 - (2m)^2 = (n-2m)(n+2m)$, so the roots are $n = \pm 2m$.
(a) Case 1: If $m = 0$, the root $n = 0$ is repeated. Using $r^0 \equiv 1$, the general solution of the ODE is $y(r) = c_1 + c_2\ln r$, where $c_1, c_2$ = arbitrary constants.
(b) Case 2: If the integer $m \ge 1$, the general solution of the ODE is $y(r) = c_1 r^{2m} + c_2 r^{-2m}$, where $c_1, c_2$ = arbitrary constants.

3.5.1.12. (a) The characteristic equation is

$$0 = n(n-1)(n-2)(n-3) + 2n(n-1)(n-2) - (2m^2+1)\bigl(n(n-1) - n\bigr) + m^2(m^2-4)$$
$$= n(n-1)(n-2)(n-3+2) - (2m^2+1)\,n(n-2) + m^2(m^2-4) = n(n-1)^2(n-2) - (2m^2+1)\,n(n-2) + m^2(m^2-4).$$

(b) Case 1: If $m = 0$, the characteristic equation is $0 = n(n-1)^2(n-2) - n(n-2) = n(n-2)\bigl((n-1)^2 - 1\bigr) = n(n-2)(n^2 - 2n) = n^2(n-2)^2$, so the roots are $n = 0, 0, 2, 2$. The general solution of the ODE is $y(r) = c_1 + c_2\ln r + c_3 r^2 + c_4 r^2\ln r$, where $c_1, c_2, c_3, c_4$ = arbitrary constants.

Case 2: If $m = 1$, the characteristic equation is

$$0 = n(n-1)^2(n-2) - 3(n^2 - 2n) - 3 = \cdots = n^4 - 4n^3 + 2n^2 + 4n - 3 = (n-1)(n^3 - 3n^2 - n + 3) = (n-1)(n+1)(n^2 - 4n + 3) = (n-1)(n+1)(n-1)(n-3),$$

so the roots are $n = -1, 1, 1, 3$. The general solution of the ODE is $y(r) = c_1 r^{-1} + c_2 r + c_3 r\ln r + c_4 r^3$, where $c_1, c_2, c_3, c_4$ = arbitrary constants.

Case 3: If $m = 2$, the characteristic equation is

$$0 = n(n-1)^2(n-2) - 9(n^2 - 2n) = n(n-2)\bigl((n-1)^2 - 9\bigr) = n(n-2)(n^2 - 2n - 8) = n(n-2)(n-4)(n+2),$$

so the roots are $n = -2, 0, 2, 4$. The general solution of the ODE is $y(r) = c_1 r^{-2} + c_2 + c_3 r^2 + c_4 r^4$, where $c_1, c_2, c_3, c_4$ = arbitrary constants.
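As a quick numerical cross-check (a sketch, not part of the original solution), the quartic in Case 2 of 3.5.1.12 can be evaluated at its claimed roots:

```python
# Cross-check of 3.5.1.12, Case 2 (m = 1): the quartic
# n^4 - 4n^3 + 2n^2 + 4n - 3 should vanish at the claimed roots n = -1, 1, 1, 3.
def Q(n):
    return n**4 - 4*n**3 + 2*n**2 + 4*n - 3

values = [Q(-1), Q(1), Q(3)]
print(values)  # → [0, 0, 0]
```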


Chapter Four

Section 4.1.5

4.1.5.1. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 5s + 6 = (s+3)(s+2)$ ⇒ $L_1 = -3, -2$; $f(t) = 3e^{-t}$ ⇒ $L_2 = -1$ ⇒ Superlist is $L = -3, -2, -1$ ⇒ $y(t) = c_1 e^{-3t} + c_2 e^{-2t} + c_3 e^{-t}$ ⇒ $y_p(t) = Ae^{-t}$, where $A$ is a constant to be determined:

$$3e^{-t} = \ddot y_p + 5\dot y_p + 6y_p = Ae^{-t} - 5Ae^{-t} + 6Ae^{-t} = 2Ae^{-t} \;\Rightarrow\; A = \frac32 \;\Rightarrow\; y_p(t) = \frac32 e^{-t}.$$

The general solution of the ODE is

$$y(t) = c_1 e^{-3t} + c_2 e^{-2t} + \frac32 e^{-t},$$

where $c_1, c_2$ = arbitrary constants.

4.1.5.2. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 4s + 6 = (s+2)^2 + 2$ ⇒ $L_1 = -2 \pm i\sqrt2$; $f(t) = t + 3e^{-2t}$ ⇒ $L_2 = 0, 0, -2$ ⇒ Superlist is $L = -2 \pm i\sqrt2, 0, 0, -2$ ⇒ $y(t) = c_1 e^{-2t}\cos(\sqrt2\,t) + c_2 e^{-2t}\sin(\sqrt2\,t) + c_3 e^{-2t} + c_4 + c_5 t$ ⇒ $y_p(t) = Ae^{-2t} + B + Ct$, where $A, B, C$ are constants to be determined:

$$3e^{-2t} + t = \ddot y_p + 4\dot y_p + 6y_p = 4Ae^{-2t} - 8Ae^{-2t} + 4C + 6Ae^{-2t} + 6B + 6Ct = 2Ae^{-2t} + (4C + 6B) + 6Ct$$

⇒ $A = \frac32$, $C = \frac16$, $B = -\frac19$ ⇒ $y_p(t) = \frac32 e^{-2t} - \frac19 + \frac16 t$.

The general solution of the ODE is

$$y(t) = c_1 e^{-2t}\cos(\sqrt2\,t) + c_2 e^{-2t}\sin(\sqrt2\,t) + \frac32 e^{-2t} - \frac19 + \frac16 t,$$

where $c_1, c_2$ = arbitrary constants.

4.1.5.3. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 5s + 6 = (s+3)(s+2)$ ⇒ $L_1 = -3, -2$; $f(t) = 2e^{-3t}$ ⇒ $L_2 = -3$ ⇒ Superlist is $L = -3, -2, -3$ ⇒ $y(t) = c_1 e^{-3t} + c_2 e^{-2t} + c_3 te^{-3t}$ ⇒ $y_p(t) = Ate^{-3t}$, where $A$ is a constant to be determined: $\dot y_p(t) = A(1-3t)e^{-3t}$ and $\ddot y_p(t) = A(-6+9t)e^{-3t}$, so

$$2e^{-3t} = \ddot y_p + 5\dot y_p + 6y_p = A(-6+9t)e^{-3t} + 5A(1-3t)e^{-3t} + 6Ate^{-3t} = -Ae^{-3t} \;\Rightarrow\; A = -2 \;\Rightarrow\; y_p(t) = -2te^{-3t}.$$

Alternatively, we could find $A$ using the shift theorem:

$$2e^{-3t} = \ddot y_p + 5\dot y_p + 6y_p = (D+2)(D+3)[\,Ate^{-3t}\,] = (D+2)\bigl((D+3)[\,Ate^{-3t}\,]\bigr) = (D+2)[\,Ae^{-3t}\,] = -Ae^{-3t},$$

hence $A = -2$. The general solution of the ODE is $y(t) = -2te^{-3t} + c_1 e^{-3t} + c_2 e^{-2t}$, where $c_1, c_2$ = arbitrary constants.
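For forcing terms whose exponent is not a root of the characteristic polynomial, the undetermined coefficient in 4.1.5.1 can also be read off directly (a shortcut sketch, not the method used in the solutions above):

```python
# Because s = -1 is not a root of P(s) = s^2 + 5s + 6, substituting
# y_p = A e^{-t} into the ODE of 4.1.5.1 gives P(-1) A e^{-t} = 3 e^{-t},
# so A = 3 / P(-1).
def P(s):
    return s**2 + 5*s + 6

A = 3 / P(-1)
print(A)  # → 1.5, i.e. y_p(t) = (3/2) e^{-t}
```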

4.1.5.4. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 2$ ⇒ $L_1 = \pm i\sqrt2$; $f(x) = xe^{-x}$ ⇒ $L_2 = -1, -1$ ⇒ Superlist is $L = \pm i\sqrt2, -1, -1$ ⇒ $y(x) = c_1\cos(\sqrt2\,x) + c_2\sin(\sqrt2\,x) + c_3 e^{-x} + c_4 xe^{-x}$ ⇒ $y_p(x) = Ae^{-x} + Bxe^{-x}$, where $A, B$ are constants to be determined:

$$y_p'(x) = -Ae^{-x} + B(1-x)e^{-x}, \qquad y_p''(x) = Ae^{-x} + B(-2+x)e^{-x},$$

$$xe^{-x} = y_p'' + 2y_p = Ae^{-x} + B(-2+x)e^{-x} + 2Ae^{-x} + 2Bxe^{-x} = (3A - 2B)e^{-x} + 3Bxe^{-x}$$

⇒ $3B = 1$ ⇒ $B = \frac13$, and so $3A - 2B = 0$ ⇒ $A = \frac29$ ⇒ $y_p(x) = \bigl(\frac29 + \frac13 x\bigr)e^{-x}$.

The general solution of the ODE is

$$y(x) = \Bigl(\frac29 + \frac13 x\Bigr)e^{-x} + c_1\cos(\sqrt2\,x) + c_2\sin(\sqrt2\,x),$$

where $c_1, c_2$ = arbitrary constants.

4.1.5.5. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 - 1$ ⇒ $L_1 = -1, 1$; $f(t) = e^{-t} + 5e^{-2t}$ ⇒ $L_2 = -2, -1$ ⇒ Superlist is $L = -1, 1, -2, -1$ ⇒ $y(t) = c_1 e^{-t} + c_2 e^t + c_3 e^{-2t} + c_4 te^{-t}$ ⇒ $y_p(t) = Ae^{-2t} + Bte^{-t}$, where $A, B$ are constants to be determined:

$$\dot y_p(t) = -2Ae^{-2t} + B(1-t)e^{-t}, \qquad \ddot y_p(t) = 4Ae^{-2t} + B(-2+t)e^{-t},$$

$$e^{-t} + 5e^{-2t} = \ddot y_p - y_p = 4Ae^{-2t} + B(-2+t)e^{-t} - Ae^{-2t} - Bte^{-t} = -2Be^{-t} + 3Ae^{-2t}$$

⇒ $B = -\frac12$ and $A = \frac53$ ⇒ $y_p(t) = \frac53 e^{-2t} - \frac12 te^{-t}$.

The general solution of the ODE is

$$y(t) = \frac53 e^{-2t} - \frac12 te^{-t} + c_1 e^{-t} + c_2 e^t,$$

where $c_1, c_2$ = arbitrary constants.

4.1.5.6. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 4s + 5 = (s+2)^2 + 1$ ⇒ $L_1 = -2 \pm i$; $f(t) = \sin 2t$ ⇒ $L_2 = \pm 2i$ ⇒ Superlist is $L = -2 \pm i, \pm 2i$ ⇒ $y(t) = c_1 e^{-2t}\cos t + c_2 e^{-2t}\sin t + c_3\cos 2t + c_4\sin 2t$ ⇒ $y_p(t) = A\cos 2t + B\sin 2t$, where $A, B$ are constants to be determined:

$$\sin 2t = \ddot y_p + 4\dot y_p + 5y_p = -4A\cos 2t - 4B\sin 2t + 4(-2A\sin 2t + 2B\cos 2t) + 5A\cos 2t + 5B\sin 2t = (A + 8B)\cos 2t + (B - 8A)\sin 2t$$

$$\Rightarrow\; \begin{cases} A + 8B = 0 \\ -8A + B = 1 \end{cases}$$

so

$$\begin{bmatrix}A\\B\end{bmatrix} = \begin{bmatrix}1 & 8\\ -8 & 1\end{bmatrix}^{-1}\begin{bmatrix}0\\1\end{bmatrix} = \frac{1}{65}\begin{bmatrix}1 & -8\\ 8 & 1\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} = \frac{1}{65}\begin{bmatrix}-8\\1\end{bmatrix} \;\Rightarrow\; y_p(t) = \frac{1}{65}(-8\cos 2t + \sin 2t).$$

The general solution of the ODE is

$$y(t) = \frac{1}{65}(-8\cos 2t + \sin 2t) + c_1 e^{-2t}\cos t + c_2 e^{-2t}\sin t,$$
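As a quick numerical cross-check (a sketch, not part of the original solution), the coefficient pair found in 4.1.5.6 can be substituted back into its 2×2 linear system:

```python
# Cross-check of 4.1.5.6: (A, B) = (-8/65, 1/65) should satisfy
# A + 8B = 0 and -8A + B = 1.
A, B = -8/65, 1/65
r1 = A + 8*B    # should be 0
r2 = -8*A + B   # should be 1
print(r1, r2)
```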

where $c_1, c_2$ = arbitrary constants.

4.1.5.7. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + s - 12 = (s+4)(s-3)$ ⇒ $L_1 = -4, 3$; $f(t) = 5e^{-4t}$ ⇒ $L_2 = -4$ ⇒ Superlist is $L = -4, 3, -4$ ⇒ $y(t) = c_1 e^{-4t} + c_2 e^{3t} + c_3 te^{-4t}$ ⇒ $y_p(t) = Ate^{-4t}$, where $A$ is a constant to be determined:

$$\dot y_p(t) = A(1-4t)e^{-4t}, \qquad \ddot y_p(t) = A(-8+16t)e^{-4t},$$

$$5e^{-4t} = \ddot y_p + \dot y_p - 12y_p = A(-8+16t)e^{-4t} + A(1-4t)e^{-4t} - 12Ate^{-4t} = -7Ae^{-4t} \;\Rightarrow\; A = -\frac57 \;\Rightarrow\; y_p(t) = -\frac57 te^{-4t}.$$

The general solution of the ODE is

$$y(t) = -\frac57 te^{-4t} + c_1 e^{-4t} + c_2 e^{3t},$$

where $c_1, c_2$ = arbitrary constants.

4.1.5.8. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 4$ ⇒ $L_1 = \pm 2i$; $f(t) = e^{-t}\cos 2t$ ⇒ $L_2 = -1 \pm 2i$ ⇒ Superlist is $L = \pm 2i, -1 \pm 2i$ ⇒ $y(t) = c_1\cos 2t + c_2\sin 2t + c_3 e^{-t}\cos 2t + c_4 e^{-t}\sin 2t$ ⇒ $y_p(t) = Ae^{-t}\cos 2t + Be^{-t}\sin 2t$, where $A, B$ are constants to be determined:

$$\dot y_p(t) = (-A + 2B)e^{-t}\cos 2t + (-B - 2A)e^{-t}\sin 2t,$$
$$\ddot y_p(t) = \bigl(-(-A+2B) + 2(-B-2A)\bigr)e^{-t}\cos 2t + \bigl(-(-B-2A) - 2(-A+2B)\bigr)e^{-t}\sin 2t = (-3A - 4B)e^{-t}\cos 2t + (4A - 3B)e^{-t}\sin 2t,$$

so

$$e^{-t}\cos 2t = \ddot y_p + 4y_p = (-3A-4B)e^{-t}\cos 2t + (4A-3B)e^{-t}\sin 2t + 4e^{-t}(A\cos 2t + B\sin 2t) = (A - 4B)e^{-t}\cos 2t + (4A + B)e^{-t}\sin 2t$$

$$\Rightarrow\; \begin{cases} A - 4B = 1 \\ 4A + B = 0 \end{cases}$$

so

$$\begin{bmatrix}A\\B\end{bmatrix} = \begin{bmatrix}1 & -4\\ 4 & 1\end{bmatrix}^{-1}\begin{bmatrix}1\\0\end{bmatrix} = \frac{1}{17}\begin{bmatrix}1 & 4\\ -4 & 1\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix} = \frac{1}{17}\begin{bmatrix}1\\-4\end{bmatrix} \;\Rightarrow\; y_p(t) = \frac{1}{17}e^{-t}\bigl(\cos 2t - 4\sin 2t\bigr).$$

The general solution of the ODE is

$$y(t) = \frac{1}{17}e^{-t}\bigl(\cos 2t - 4\sin 2t\bigr) + c_1\cos 2t + c_2\sin 2t,$$

where $c_1, c_2$ = arbitrary constants.

4.1.5.9. We are given a particular solution $y_p(t) = -e^{-2t}\cos(e^t)$. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 3s + 2 = (s+2)(s+1)$ ⇒ $y_h(t) = c_1 e^{-2t} + c_2 e^{-t}$ ⇒ The general solution of the ODE is

$$y(t) = y_p(t) + y_h(t) = -e^{-2t}\cos(e^t) + c_1 e^{-2t} + c_2 e^{-t},$$

where $c_1, c_2$ = arbitrary constants. It follows from the chain rule that

$$\dot y(t) = 2e^{-2t}\cos(e^t) + e^{-2t}\sin(e^t)\cdot e^t - 2c_1 e^{-2t} - c_2 e^{-t}.$$

The ICs require

$$\begin{cases} 0 = y(0) = -\cos(1) + c_1 + c_2 \\ 0 = \dot y(0) = 2\cos(1) + \sin(1) - 2c_1 - c_2 \end{cases}$$

so

$$\begin{bmatrix}c_1\\c_2\end{bmatrix} = \begin{bmatrix}1 & 1\\ -2 & -1\end{bmatrix}^{-1}\begin{bmatrix}\cos(1)\\ -2\cos(1) - \sin(1)\end{bmatrix} = \begin{bmatrix}-1 & -1\\ 2 & 1\end{bmatrix}\begin{bmatrix}\cos(1)\\ -2\cos(1) - \sin(1)\end{bmatrix} = \begin{bmatrix}\cos(1) + \sin(1)\\ -\sin(1)\end{bmatrix}.$$

The solution of the IVP is

$$y(t) = -e^{-2t}\cos(e^t) + \bigl(\cos(1) + \sin(1)\bigr)e^{-2t} - \sin(1)e^{-t}.$$

4.1.5.10. (a) We are given a particular solution $y_p(x) = \sin x$. The corresponding homogeneous ODE, (⋆⋆) $y'(x) + 2xy = 0$, does not have constant coefficients, but it is in standard form of a linear, first order ODE. (It is also separable!) An integrating factor for (⋆⋆) is

$$\mu(x) = \exp\Bigl(\int 2x\,dx\Bigr) = e^{x^2},$$

so we get

$$\frac{d}{dx}\bigl[e^{x^2} y\bigr] = e^{x^2} y'(x) + 2xe^{x^2} y = e^{x^2}\cdot 0 = 0.$$

This gives

$$e^{x^2} y = \int 0\,dx = 0 + c,$$

where $c$ is an arbitrary constant. This gives $y_h(x) = ce^{-x^2}$. The general solution of the original, nonhomogeneous ODE is $y(x) = y_p(x) + y_h(x) = \sin x + ce^{-x^2}$, where $c$ = arbitrary constant.
(b) The IC requires that $5 = y(0) = 0 + c\cdot 1 = c$, so the solution of the IVP is

$$y(x) = \sin x + 5e^{-x^2}.$$

4.1.5.11. We are given a particular solution $y_p(t) = -\frac19 te^t + \frac16 t^2 e^t$. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + s - 2 = (s+2)(s-1)$ ⇒ $y_h(t) = c_1 e^{-2t} + c_2 e^t$ ⇒ The general solution of the ODE is

$$y(t) = y_p(t) + y_h(t) = -\frac19 te^t + \frac16 t^2 e^t + c_1 e^{-2t} + c_2 e^t,$$

where $c_1, c_2$ = arbitrary constants. It follows that

$$\dot y(t) = -\frac19 e^t - \frac19 te^t + \frac13 te^t + \frac16 t^2 e^t - 2c_1 e^{-2t} + c_2 e^t.$$

The ICs require

$$\begin{cases} 0 = y(0) = 0 + c_1 + c_2 \\ -2 = \dot y(0) = -\frac19 - 2c_1 + c_2 \end{cases}$$

so

$$\begin{bmatrix}c_1\\c_2\end{bmatrix} = \begin{bmatrix}1 & 1\\ -2 & 1\end{bmatrix}^{-1}\begin{bmatrix}0\\ -17/9\end{bmatrix} = \frac13\begin{bmatrix}1 & -1\\ 2 & 1\end{bmatrix}\begin{bmatrix}0\\ -17/9\end{bmatrix} = \frac{17}{27}\begin{bmatrix}1\\ -1\end{bmatrix}.$$

The solution of the IVP is

$$y(t) = -\frac19 te^t + \frac16 t^2 e^t + \frac{17}{27}\bigl(e^{-2t} - e^t\bigr).$$
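As a quick finite-difference cross-check (a sketch, not part of the original solution), the IVP answer of 4.1.5.11 can be substituted back into its ODE; applying $(D+2)(D-1)$ to the given $y_p$ shows the implied forcing is $te^t$:

```python
import math

# Cross-check of 4.1.5.11: the IVP answer should satisfy
# y'' + y' - 2y = t e^t with y(0) = 0 and y'(0) = -2.
def y(t):
    return (-t/9 + t*t/6)*math.exp(t) + (17/27)*(math.exp(-2*t) - math.exp(t))

h = 1e-4  # central-difference step
def dy(t):  return (y(t+h) - y(t-h)) / (2*h)
def d2y(t): return (y(t+h) - 2*y(t) + y(t-h)) / h**2

residual = max(abs(d2y(t) + dy(t) - 2*y(t) - t*math.exp(t)) for t in (0.0, 0.5, 1.0))
print(residual, y(0.0), dy(0.0))
```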

4.1.5.12. The corresponding LCCHODE's characteristic polynomial is $P(s) = s + \alpha$ ⇒ $L_1 = -\alpha$; $f(t) = e^{-t}$ ⇒ $L_2 = -1$ ⇒ Superlist is $L = -\alpha, -1$.

Case 1: If $\alpha \ne 1$, then $y(t) = c_1 e^{-\alpha t} + c_2 e^{-t}$ ⇒ $y_p(t) = Ae^{-t}$, where $A$ is a constant to be determined:

$$e^{-t} = \dot y_p(t) + \alpha y_p(t) = -Ae^{-t} + \alpha Ae^{-t} = (\alpha - 1)Ae^{-t} \;\Rightarrow\; A = (\alpha-1)^{-1} \;\Rightarrow\; y_p(t) = \frac{1}{\alpha - 1}e^{-t}.$$

When $\alpha \ne 1$, the general solution of the ODE is

$$y(t) = \frac{1}{\alpha-1}e^{-t} + c_1 e^{-\alpha t},$$

where $c_1$ = arbitrary constant.

Case 2: If $\alpha = 1$, then $y(t) = c_1 e^{-t} + c_2 te^{-t}$ ⇒ $y_p(t) = Ate^{-t}$, where $A$ is a constant to be determined:

$$e^{-t} = \dot y_p(t) + 1\cdot y_p(t) = (1-t)Ae^{-t} + Ate^{-t} = Ae^{-t} \;\Rightarrow\; A = 1 \;\Rightarrow\; y_p(t) = te^{-t}.$$

When $\alpha = 1$, the general solution of the ODE is $y(t) = te^{-t} + c_1 e^{-t}$, where $c_1$ = arbitrary constant. By the way, it makes sense that the homogeneous solution is the same no matter which case we are considering.

4.1.5.13. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 - 1$ ⇒ $L_1 = -1, 1$; $f(t) = e^{-2t}$ ⇒ $L_2 = -2$ ⇒ Superlist is $L = -1, 1, -2$ ⇒ $y(t) = c_1 e^{-t} + c_2 e^t + c_3 e^{-2t}$ ⇒ $y_p(t) = Ae^{-2t}$, where $A$ is a constant to be determined:

$$e^{-2t} = \ddot y_p(t) - y_p(t) = 4Ae^{-2t} - Ae^{-2t} = 3Ae^{-2t} \;\Rightarrow\; A = \frac13 \;\Rightarrow\; y_p(t) = \frac13 e^{-2t}.$$

The general solution of the ODE is

$$y(t) = \frac13 e^{-2t} + c_1 e^{-t} + c_2 e^t,$$

where $c_1, c_2$ = arbitrary constants. It follows that

$$\dot y(t) = -\frac23 e^{-2t} - c_1 e^{-t} + c_2 e^t.$$

The ICs require

$$\begin{cases} 0 = y(0) = \frac13 + c_1 + c_2 \\ 0 = \dot y(0) = -\frac23 - c_1 + c_2 \end{cases}$$

so

$$\begin{bmatrix}c_1\\c_2\end{bmatrix} = \begin{bmatrix}1 & 1\\ -1 & 1\end{bmatrix}^{-1}\begin{bmatrix}-\frac13\\[2pt] \frac23\end{bmatrix} = \frac12\begin{bmatrix}1 & -1\\ 1 & 1\end{bmatrix}\begin{bmatrix}-\frac13\\[2pt] \frac23\end{bmatrix} = \frac16\begin{bmatrix}-3\\ 1\end{bmatrix}.$$

The solution of the IVP is

$$y(t) = \frac13 e^{-2t} - \frac12 e^{-t} + \frac16 e^t.$$

y(t) ˙ =

1 (1 + t)et − c1 e−t + c2 et . 2

The ICs require    0 = y(0) = 0 + c1 + c2   so 

c1 c2



 =

1 1 −1 1

0 = y(0) ˙ =

−1 

0 −1/2

The solution of the IVP is y(t) =



1 2

− c1 + c2

1 = 2



,



1 −1 1 1



0 −1/2



1 = 4



1 −1



1 t 1 −t 1 t te + e − e . 2 4 4
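As a quick numerical cross-check (a sketch, not part of the original solution), the IVP answer of 4.1.5.14 can be substituted back into its ODE and initial conditions, using hand-coded exact derivatives:

```python
import math

# Cross-check of 4.1.5.14: y(t) = (1/2) t e^t + (1/4) e^{-t} - (1/4) e^t
# should satisfy y'' - y = e^t with y(0) = 0 and y'(0) = 0.
def y(t):   return 0.5*t*math.exp(t) + 0.25*math.exp(-t) - 0.25*math.exp(t)
def dy(t):  return 0.5*(1 + t)*math.exp(t) - 0.25*math.exp(-t) - 0.25*math.exp(t)
def d2y(t): return 0.5*(2 + t)*math.exp(t) + 0.25*math.exp(-t) - 0.25*math.exp(t)

residual = max(abs(d2y(t) - y(t) - math.exp(t)) for t in (-1.0, 0.0, 1.3))
print(residual, y(0.0), dy(0.0))
```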

4.1.5.15. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + s + 5 = \bigl(s + \frac12\bigr)^2 + \frac{19}{4}$ ⇒ $L_1 = -\frac12 \pm i\frac{\sqrt{19}}{2}$; $f(t) = 10$ ⇒ $L_2 = 0$ ⇒ Superlist is $L = -\frac12 \pm i\frac{\sqrt{19}}{2}, 0$ ⇒ $y(t) = c_1 e^{-t/2}\cos\bigl(\frac{\sqrt{19}}{2}t\bigr) + c_2 e^{-t/2}\sin\bigl(\frac{\sqrt{19}}{2}t\bigr) + c_3$ ⇒ $y_p(t) = A$, where $A$ is a constant to be determined:

$$10 = \ddot y_p(t) + \dot y_p(t) + 5y_p(t) = 0 + 0 + 5A \;\Rightarrow\; A = 2 \;\Rightarrow\; y_p(t) = 2.$$

The general solution of the ODE is $y(t) = 2 + c_1 e^{-t/2}\cos\bigl(\frac{\sqrt{19}}{2}t\bigr) + c_2 e^{-t/2}\sin\bigl(\frac{\sqrt{19}}{2}t\bigr)$, where $c_1, c_2$ = arbitrary constants. It follows that

$$\dot y(t) = \Bigl(-\frac12 c_1 + \frac{\sqrt{19}}{2}c_2\Bigr)e^{-t/2}\cos\Bigl(\frac{\sqrt{19}}{2}t\Bigr) + \Bigl(-\frac12 c_2 - \frac{\sqrt{19}}{2}c_1\Bigr)e^{-t/2}\sin\Bigl(\frac{\sqrt{19}}{2}t\Bigr).$$

The ICs require

$$\begin{cases} 0 = y(0) = 2 + c_1 \\ 0 = \dot y(0) = -\frac12 c_1 + \frac{\sqrt{19}}{2}c_2 \end{cases}$$

so $c_1 = -2$ and $c_2 = -\frac{2}{\sqrt{19}}$. The solution of the IVP is

$$y(t) = 2 - 2e^{-t/2}\Bigl(\cos\Bigl(\frac{\sqrt{19}}{2}t\Bigr) + \frac{1}{\sqrt{19}}\sin\Bigl(\frac{\sqrt{19}}{2}t\Bigr)\Bigr).$$

4.1.5.16. (a) The non-homogeneous ODE is $R\dot q + \frac1C q = 0.02\sin(120\pi t)$. The corresponding LCCHODE's characteristic polynomial is $P(s) = Rs + \frac1C$ ⇒ $L_1 = -\frac{1}{RC}$; $f(t) = 0.02\sin(120\pi t)$ ⇒ $L_2 = \pm i\,120\pi$ ⇒ Superlist is $L = -\frac{1}{RC}, \pm i\,120\pi$ ⇒ $q(t) = c_1 e^{-t/(RC)} + c_2\cos(120\pi t) + c_3\sin(120\pi t)$ ⇒ $q_p(t) = A\cos(120\pi t) + B\sin(120\pi t)$, where $A, B$ are constants to be determined:

$$0.02\sin(120\pi t) = R\dot q_p(t) + \frac1C q_p(t) = -120\pi RA\sin(120\pi t) + 120\pi RB\cos(120\pi t) + \frac1C A\cos(120\pi t) + \frac1C B\sin(120\pi t)$$

$$\Rightarrow\; \begin{cases} \frac1C A + 120\pi R\,B = 0 \\ -120\pi R\,A + \frac1C B = 0.02 \end{cases}$$

so

$$\begin{bmatrix}A\\B\end{bmatrix} = \begin{bmatrix}\frac1C & 120\pi R\\[2pt] -120\pi R & \frac1C\end{bmatrix}^{-1}\begin{bmatrix}0\\0.02\end{bmatrix} = \frac{1}{C^{-2} + (120\pi R)^2}\begin{bmatrix}\frac1C & -120\pi R\\[2pt] 120\pi R & \frac1C\end{bmatrix}\begin{bmatrix}0\\0.02\end{bmatrix} = \frac{1}{C^{-2} + (120\pi R)^2}\begin{bmatrix}-2.4\pi R\\ 0.02\,C^{-1}\end{bmatrix},$$

so

$$q_p(t) = \frac{1}{C^{-2} + (120\pi R)^2}\bigl(-2.4\pi R\cos(120\pi t) + 0.02\,C^{-1}\sin(120\pi t)\bigr).$$

The general solution of the ODE is

$$q(t) = c_1 e^{-t/(RC)} + \frac{C^2}{1 + (120\pi RC)^2}\bigl(-2.4\pi R\cos(120\pi t) + 0.02\,C^{-1}\sin(120\pi t)\bigr),$$

where $c_1$ = arbitrary constant. The IC requires

$$0.001 = q(0) = c_1 - \frac{2.4\pi RC^2}{1 + (120\pi RC)^2},$$

so $c_1 = 0.001 + \frac{2.4\pi RC^2}{1 + (120\pi RC)^2}$. The solution of the IVP is

$$q(t) = \Bigl(0.001 + \frac{2.4\pi RC^2}{1 + (120\pi RC)^2}\Bigr)e^{-t/(RC)} + \frac{C^2}{1 + (120\pi RC)^2}\bigl(-2.4\pi R\cos(120\pi t) + 0.02\,C^{-1}\sin(120\pi t)\bigr).$$

(b) The ODE in standard form is

$$\dot q + \frac{1}{RC}q = \frac{0.02}{R}\sin(120\pi t),$$

so an integrating factor is given by $\mu(t) = e^{t/(RC)}$. This implies

$$\frac{d}{dt}\bigl[e^{t/(RC)} q\bigr] = e^{t/(RC)}\dot q + \frac{1}{RC}e^{t/(RC)} q = e^{t/(RC)}\cdot\frac{0.02}{R}\sin(120\pi t)$$

$$\Rightarrow\; e^{t/(RC)} q = \frac{0.02}{R}\int e^{t/(RC)}\sin(120\pi t)\,dt = \frac{0.02}{R}\cdot\frac{e^{t/(RC)}}{(RC)^{-2} + (120\pi)^2}\bigl((RC)^{-1}\sin(120\pi t) - 120\pi\cos(120\pi t)\bigr) + c,$$

so

$$q = ce^{-t/(RC)} + \frac{0.02}{R\bigl((RC)^{-2} + (120\pi)^2\bigr)}\bigl((RC)^{-1}\sin(120\pi t) - 120\pi\cos(120\pi t)\bigr)$$
$$= ce^{-t/(RC)} + \frac{0.02(RC)^2}{R\bigl(1 + (120\pi RC)^2\bigr)}\bigl((RC)^{-1}\sin(120\pi t) - 120\pi\cos(120\pi t)\bigr)$$
$$= ce^{-t/(RC)} + \frac{RC^2}{1 + (120\pi RC)^2}\bigl(0.02(RC)^{-1}\sin(120\pi t) - 2.4\pi\cos(120\pi t)\bigr)$$
$$= ce^{-t/(RC)} + \frac{C^2}{1 + (120\pi RC)^2}\bigl(0.02\,C^{-1}\sin(120\pi t) - 2.4\pi R\cos(120\pi t)\bigr),$$

where $c$ = arbitrary constant. This is the same general solution as we found in part (a), except with arbitrary constant $c$ instead of $c_1$. Satisfying the IC using $c$ leads to the same final conclusion that

$$q(t) = \Bigl(0.001 + \frac{2.4\pi RC^2}{1 + (120\pi RC)^2}\Bigr)e^{-t/(RC)} + \frac{C^2}{1 + (120\pi RC)^2}\bigl(-2.4\pi R\cos(120\pi t) + 0.02\,C^{-1}\sin(120\pi t)\bigr).$$

4.1.5.17. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 2s + 5 = (s+1)^2 + 4$ ⇒ $L_1 = -1 \pm 2i$; $f(t) = \sin 2t$ ⇒ $L_2 = \pm 2i$ ⇒ Superlist is $L = -1 \pm 2i, \pm 2i$ ⇒ $y(t) = c_1 e^{-t}\cos 2t + c_2 e^{-t}\sin 2t + c_3\cos 2t + c_4\sin 2t$ ⇒ $y_p(t) = A\cos 2t + B\sin 2t$, where $A, B$ are constants to be determined:

$$\sin 2t = \ddot y_p + 2\dot y_p + 5y_p = -4A\cos 2t - 4B\sin 2t + 2(-2A\sin 2t + 2B\cos 2t) + 5A\cos 2t + 5B\sin 2t = (A + 4B)\cos 2t + (B - 4A)\sin 2t$$

$$\Rightarrow\; \begin{cases} A + 4B = 0 \\ -4A + B = 1 \end{cases}$$

so

$$\begin{bmatrix}A\\B\end{bmatrix} = \begin{bmatrix}1 & 4\\ -4 & 1\end{bmatrix}^{-1}\begin{bmatrix}0\\1\end{bmatrix} = \frac{1}{17}\begin{bmatrix}1 & -4\\ 4 & 1\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} = \frac{1}{17}\begin{bmatrix}-4\\1\end{bmatrix}.$$

⇒ The steady state solution is

$$y_S(t) = y_p(t) = \frac{1}{17}(-4\cos 2t + \sin 2t),$$

because the terms $c_1 e^{-t}\cos 2t + c_2 e^{-t}\sin 2t$ are transient no matter what are the values of the constants $c_1, c_2$. The amplitude of the steady state solution is

$$\text{Amplitude} = \frac{1}{17}\sqrt{(-4)^2 + 1^2} = \frac{1}{\sqrt{17}}.$$
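As a quick numerical cross-check (a sketch, not part of the original solution), the amplitude of $A\cos 2t + B\sin 2t$ is $\sqrt{A^2 + B^2}$, which for 4.1.5.17 reduces to $1/\sqrt{17}$:

```python
import math

# Cross-check of the 4.1.5.17 amplitude: (A, B) = (-4/17, 1/17)
# should give amplitude sqrt(A^2 + B^2) = 1/sqrt(17).
A, B = -4/17, 1/17
amplitude = math.hypot(A, B)
print(amplitude, 1/math.sqrt(17))
```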

4.1.5.18. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 4s + 5 = (s+2)^2 + 1$ ⇒ $L_1 = -2 \pm i$; $f(t) = \sin t$ ⇒ $L_2 = \pm i$ ⇒ Superlist is $L = -2 \pm i, \pm i$ ⇒ $y(t) = c_1 e^{-2t}\cos t + c_2 e^{-2t}\sin t + c_3\cos t + c_4\sin t$ ⇒ $y_p(t) = A\cos t + B\sin t$, where $A, B$ are constants to be determined:

$$\sin t = \ddot y_p + 4\dot y_p + 5y_p = -A\cos t - B\sin t + 4(-A\sin t + B\cos t) + 5A\cos t + 5B\sin t = (4A + 4B)\cos t + (4B - 4A)\sin t$$

$$\Rightarrow\; \begin{cases} 4A + 4B = 0 \\ -4A + 4B = 1 \end{cases}$$

so

$$\begin{bmatrix}A\\B\end{bmatrix} = \begin{bmatrix}4 & 4\\ -4 & 4\end{bmatrix}^{-1}\begin{bmatrix}0\\1\end{bmatrix} = \frac{1}{32}\begin{bmatrix}4 & -4\\ 4 & 4\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} = \frac18\begin{bmatrix}-1\\1\end{bmatrix}.$$

⇒ The steady state solution is

$$y_S(t) = y_p(t) = \frac18(-\cos t + \sin t),$$

because the terms $c_1 e^{-2t}\cos t + c_2 e^{-2t}\sin t$ are transient no matter what are the values of the constants $c_1, c_2$. The amplitude of the steady state solution is

$$\text{Amplitude} = \frac18\sqrt{(-1)^2 + 1^2} = \frac{\sqrt2}{8} = \frac{1}{\sqrt{32}}.$$

4.1.5.19. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 4s + 5 = (s+2)^2 + 1$ ⇒ $L_1 = -2 \pm i$; $f(t) = \cos t$ ⇒ $L_2 = \pm i$ ⇒ Superlist is $L = -2 \pm i, \pm i$ ⇒ $y(t) = c_1 e^{-2t}\cos t + c_2 e^{-2t}\sin t + c_3\cos t + c_4\sin t$ ⇒ $y_p(t) = A\cos t + B\sin t$, where $A, B$ are constants to be determined:

$$\cos t = \ddot y_p + 4\dot y_p + 5y_p = -A\cos t - B\sin t + 4(-A\sin t + B\cos t) + 5A\cos t + 5B\sin t = (4A + 4B)\cos t + (4B - 4A)\sin t$$

$$\Rightarrow\; \begin{cases} 4A + 4B = 1 \\ -4A + 4B = 0 \end{cases}$$

A B



 =

4 4 −4 4

−1 

1 0



1 = 32



4 −4 4 4



1 0



1 = 8



1 1



⇒ The steady state solution is yS (t) = yp (t) =

1 (cos t + sin t), 8

because the terms c1 e−2t cos t + c2 e−2t sin t are transient no matter what are the values of the constants c1 , c2 . The amplitude of the steady state solution is √ 1p 2 2 1 2 Amplitude = 1 +1 = =√ . 8 8 32 4.1.5.20. The corresponding LCCHODE’s characteristic polynomial is P(s) = s2 + 3s + 5 = s + √ ⇒ L1 = − 32 ± 211 i

$f(t) = 6\cos 4t$ ⇒ $L_2 = \pm 4i$ ⇒ Superlist is $L = -\frac32 \pm \frac{\sqrt{11}}{2}i, \pm 4i$ ⇒ $y(t) = c_1 e^{-3t/2}\cos\bigl(\frac{\sqrt{11}}{2}t\bigr) + c_2 e^{-3t/2}\sin\bigl(\frac{\sqrt{11}}{2}t\bigr) + c_3\cos 4t + c_4\sin 4t$ ⇒ $y_p(t) = A\cos 4t + B\sin 4t$, where $A, B$ are constants to be determined:

$$6\cos 4t = \ddot y_p + 3\dot y_p + 5y_p = -16A\cos 4t - 16B\sin 4t + 3(-4A\sin 4t + 4B\cos 4t) + 5A\cos 4t + 5B\sin 4t = (-11A + 12B)\cos 4t + (-11B - 12A)\sin 4t$$

$$\Rightarrow\; \begin{cases} -11A + 12B = 6 \\ -12A - 11B = 0 \end{cases}$$

so

$$\begin{bmatrix}A\\B\end{bmatrix} = \begin{bmatrix}-11 & 12\\ -12 & -11\end{bmatrix}^{-1}\begin{bmatrix}6\\0\end{bmatrix} = \frac{1}{265}\begin{bmatrix}-11 & -12\\ 12 & -11\end{bmatrix}\begin{bmatrix}6\\0\end{bmatrix} = \frac{6}{265}\begin{bmatrix}-11\\12\end{bmatrix}.$$

⇒ The steady state solution is

$$y_S(t) = y_p(t) = \frac{6}{265}(-11\cos 4t + 12\sin 4t),$$

because the terms $c_1 e^{-3t/2}\cos\bigl(\frac{\sqrt{11}}{2}t\bigr) + c_2 e^{-3t/2}\sin\bigl(\frac{\sqrt{11}}{2}t\bigr)$ are transient no matter what are the values of the constants $c_1, c_2$. The amplitude of the steady state solution is

$$\text{Amplitude} = \frac{6}{265}\sqrt{(-11)^2 + 12^2} = \frac{6}{\sqrt{265}}.$$
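As a quick numerical cross-check (a sketch, not part of the original solution), the coefficient pair found in 4.1.5.20 can be substituted back into its 2×2 linear system:

```python
# Cross-check of 4.1.5.20: (A, B) = (6/265)(-11, 12) should satisfy
# -11A + 12B = 6 and -12A - 11B = 0.
A, B = -66/265, 72/265
r1 = -11*A + 12*B   # should be 6
r2 = -12*A - 11*B   # should be 0
print(r1, r2)
```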

4.1.5.21. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 2s + 5 = (s+1)^2 + 4$ ⇒ $L_1 = -1 \pm 2i$; $f(t) = 3$ ⇒ $L_2 = 0$ ⇒ Superlist is $L = -1 \pm 2i, 0$ ⇒ $y(t) = c_1 e^{-t}\cos 2t + c_2 e^{-t}\sin 2t + c_3$ ⇒ $y_p(t) = A$, where $A$ is a constant to be determined: $3 = \ddot y_p + 2\dot y_p + 5y_p = 0 + 0 + 5A$ ⇒ The steady state solution is

$$y_S(t) = y_p(t) = \frac35,$$

because the terms $c_1 e^{-t}\cos 2t + c_2 e^{-t}\sin 2t$ are transient no matter what are the values of the constants $c_1, c_2$. So, even though we were given initial conditions we didn't need to satisfy them in order to answer the question that was asked! The amplitude of the steady state solution is Amplitude $= \frac35$.

4.1.5.22. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 9$ ⇒ $L_1 = \pm 3i$; $f(t) = 3e^{-2t}$ ⇒ $L_2 = -2$ ⇒ Superlist is $L = \pm 3i, -2$ ⇒ $y(t) = c_1\cos 3t + c_2\sin 3t + c_3 e^{-2t}$ ⇒ $y_p(t) = Ae^{-2t}$, where $A$ is a constant to be determined:

$$3e^{-2t} = \ddot y_p + 9y_p = 4Ae^{-2t} + 9Ae^{-2t} = 13Ae^{-2t} \;\Rightarrow\; A = \frac{3}{13} \;\Rightarrow\; y_p(t) = \frac{3}{13}e^{-2t}.$$

Note that in this problem the particular solution is transient, not part of the steady state solution.

The general solution of the ODE is $y(t) = y_p(t) + y_h(t) = \frac{3}{13}e^{-2t} + c_1\cos 3t + c_2\sin 3t$. It follows that

$$\dot y(t) = -\frac{6}{13}e^{-2t} - 3c_1\sin 3t + 3c_2\cos 3t.$$

The ICs require

$$\begin{cases} 0 = y(0) = \frac{3}{13} + c_1 \\ -4 = \dot y(0) = -\frac{6}{13} + 3c_2 \end{cases}$$

so $c_1 = -\frac{3}{13}$ and $c_2 = -\frac{46}{39}$. The solution of the IVP is

$$y(t) = \frac{3}{13}e^{-2t} - \frac{3}{13}\cos 3t - \frac{46}{39}\sin 3t.$$

⇒ The steady state solution is

$$y_S(t) = \frac{1}{39}(-9\cos 3t - 46\sin 3t),$$

because the term $\frac{3}{13}e^{-2t}$ is transient. Note that the "steady state solution" is not a solution of the non-homogeneous ODE. The amplitude of the steady state solution is

$$\text{Amplitude} = \frac{1}{39}\sqrt{(-9)^2 + (-46)^2} = \frac{\sqrt{2197}}{39} = \frac{13\sqrt{13}}{39} = \frac{\sqrt{13}}{3}.$$

4.1.5.23. The corresponding LCCHODE's characteristic polynomial is $P(s) = 2s^2 + 4 = 2(s^2 + 2)$ ⇒ $L_1 = \pm\sqrt2\,i$; $f(t) = f_0 e^{-2t}$ ⇒ $L_2 = -2$ ⇒ Superlist is $L = \pm\sqrt2\,i, -2$ ⇒ $y(t) = c_1\cos(\sqrt2\,t) + c_2\sin(\sqrt2\,t) + c_3 e^{-2t}$ ⇒ $y_p(t) = Ae^{-2t}$, where $A$ is a constant to be determined:

$$f_0 e^{-2t} = 2\ddot y_p + 4y_p = 8Ae^{-2t} + 4Ae^{-2t} = 12Ae^{-2t} \;\Rightarrow\; y_p(t) = \frac{f_0}{12}e^{-2t}.$$

Note that in this problem the particular solution is transient, not part of the steady state solution.

The general solution of the ODE is $y(t) = y_p(t) + y_h(t) = \frac{f_0}{12}e^{-2t} + c_1\cos(\sqrt2\,t) + c_2\sin(\sqrt2\,t)$. It follows that

$$\dot y(t) = -\frac{f_0}{6}e^{-2t} - \sqrt2\,c_1\sin(\sqrt2\,t) + \sqrt2\,c_2\cos(\sqrt2\,t).$$

The ICs require

$$\begin{cases} 0 = y(0) = \frac{f_0}{12} + c_1 \\ 0 = \dot y(0) = -\frac{f_0}{6} + \sqrt2\,c_2 \end{cases}$$

so $c_1 = -\frac{f_0}{12}$ and $c_2 = \frac{f_0}{6\sqrt2}$. The solution of the IVP is

$$y(t) = \frac{f_0}{12}e^{-2t} - \frac{f_0}{12}\cos(\sqrt2\,t) + \frac{f_0}{6\sqrt2}\sin(\sqrt2\,t)$$

⇒ The steady state solution is

$$y_S(t) = \frac{f_0}{12}\bigl(-\cos(\sqrt2\,t) + \sqrt2\sin(\sqrt2\,t)\bigr),$$

because the term $\frac{f_0}{12}e^{-2t}$ is transient. Note that the "steady state solution" is not a solution of the non-homogeneous ODE. The amplitude of the steady state solution is

$$\text{Amplitude} = \frac{|f_0|}{12}\sqrt{(-1)^2 + (\sqrt2)^2} = \frac{|f_0|}{4\sqrt3}.$$

4.1.5.24. The corresponding LCCHODE's characteristic polynomial is $P(s) = ms^2 + k = m\bigl(s^2 + \frac{k}{m}\bigr)$ ⇒ $L_1 = \pm i\omega_0$, where the natural frequency is $\omega_0 = \sqrt{k/m}$, as usually notated. $f(t) = f_0 e^{-2t}$ ⇒ $L_2 = -2$ ⇒ Superlist is $L = \pm\omega_0 i, -2$ ⇒ $y(t) = c_1\cos(\omega_0 t) + c_2\sin(\omega_0 t) + c_3 e^{-2t}$ ⇒ $y_p(t) = Ae^{-2t}$, where $A$ is a constant to be determined:

$$f_0 e^{-2t} = m\ddot y_p + ky_p = 4mAe^{-2t} + kAe^{-2t} \;\Rightarrow\; y_p(t) = \frac{f_0}{4m+k}e^{-2t}.$$

Note that in this problem the particular solution is transient, not part of the steady state solution.

The general solution of the ODE is $y(t) = y_p(t) + y_h(t)$, that is, $y(t) = \frac{f_0}{4m+k}e^{-2t} + c_1\cos(\omega_0 t) + c_2\sin(\omega_0 t)$. It follows that

$$\dot y(t) = -\frac{2f_0}{4m+k}e^{-2t} - \omega_0 c_1\sin(\omega_0 t) + \omega_0 c_2\cos(\omega_0 t).$$

The ICs require

$$\begin{cases} 2 = y(0) = \frac{f_0}{4m+k} + c_1 \\ -1 = \dot y(0) = -\frac{2f_0}{4m+k} + \omega_0 c_2 \end{cases}$$

so $c_1 = 2 - \frac{f_0}{4m+k}$ and $c_2 = -\frac{1}{\omega_0} + \frac{2f_0}{(4m+k)\omega_0}$. The solution of the IVP is

$$y(t) = \frac{f_0}{4m+k}e^{-2t} + \Bigl(2 - \frac{f_0}{4m+k}\Bigr)\cos(\omega_0 t) + \Bigl(-\frac{1}{\omega_0} + \frac{2f_0}{(4m+k)\omega_0}\Bigr)\sin(\omega_0 t)$$

⇒ The steady state solution is

$$y_S(t) = \Bigl(2 - \frac{f_0}{4m+k}\Bigr)\cos(\omega_0 t) + \Bigl(-\frac{1}{\omega_0} + \frac{2f_0}{(4m+k)\omega_0}\Bigr)\sin(\omega_0 t),$$

because the term $\frac{f_0}{4m+k}e^{-2t}$ is transient. Note that the "steady state solution" is not a solution of the non-homogeneous ODE. The amplitude of the steady state solution is

$$\text{Amplitude} = \sqrt{\Bigl(2 - \frac{f_0}{4m+k}\Bigr)^2 + \Bigl(-\frac{1}{\omega_0} + \frac{2f_0}{(4m+k)\omega_0}\Bigr)^2}.$$

Using $\frac{1}{\omega_0^2} = \frac{m}{k}$ and expanding,

$$\text{Amplitude} = \Bigl(4 + \frac{m}{k} - \frac{4f_0}{4m+k}\cdot\Bigl(1 + \frac{m}{k}\Bigr) + \Bigl(\frac{f_0}{4m+k}\Bigr)^2\cdot\Bigl(1 + \frac{4m}{k}\Bigr)\Bigr)^{1/2}.$$

4.1.5.25. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 - 2s + 2 = (s-1)^2 + 1$ ⇒ $L_1 = 1 \pm i$; $f(t) = t$ ⇒ $L_2 = 0, 0$ ⇒ Superlist is $L = 1 \pm i, 0, 0$ ⇒ $y(t) = c_1 e^t\cos t + c_2 e^t\sin t + c_3 + c_4 t$ ⇒ $y_p(t) = A + Bt$, where $A, B$ are constants to be determined:

$$t = \ddot y_p - 2\dot y_p + 2y_p = 0 - 2B + 2A + 2Bt \;\Rightarrow\; 1 = 2B \text{ and } 0 = 2A - 2B \;\Rightarrow\; B = \frac12,\ A = \frac12 \;\Rightarrow\; y_p(t) = \frac12(1+t).$$

The general solution of the ODE is

$$y(t) = y_p(t) + y_h(t) = \frac12(1+t) + c_1 e^t\cos t + c_2 e^t\sin t.$$

It follows that

$$\dot y(t) = \frac12 + (c_1 + c_2)e^t\cos t + (-c_1 + c_2)e^t\sin t.$$

Because $\cos\pi = -1$ and $\sin\pi = 0$, the ICs require

$$\begin{cases} 0 = y(\pi) = \frac12(1+\pi) - c_1 e^\pi \\ 0 = \dot y(\pi) = \frac12 - (c_1 + c_2)e^\pi \end{cases}$$

It follows that $c_1 = \frac12(1+\pi)e^{-\pi}$ and $c_2 = -\frac{\pi}{2}e^{-\pi}$. The solution of the IVP is

$$y(t) = \frac12(1+t) + \frac12(1+\pi)e^{t-\pi}\cos t - \frac{\pi}{2}e^{t-\pi}\sin t.$$
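As a quick finite-difference cross-check (a sketch, not part of the original solution), the IVP answer of 4.1.5.25 can be substituted back into its ODE and initial conditions:

```python
import math

# Cross-check of 4.1.5.25: the IVP answer should satisfy
# y'' - 2y' + 2y = t with y(pi) = 0 and y'(pi) = 0.
pi = math.pi
def y(t):
    return (0.5*(1 + t) + 0.5*(1 + pi)*math.exp(t - pi)*math.cos(t)
            - 0.5*pi*math.exp(t - pi)*math.sin(t))

h = 1e-4  # central-difference step
def dy(t):  return (y(t+h) - y(t-h)) / (2*h)
def d2y(t): return (y(t+h) - 2*y(t) + y(t-h)) / h**2

residual = max(abs(d2y(t) - 2*dy(t) + 2*y(t) - t) for t in (0.0, 1.0, pi))
print(residual, y(pi), dy(pi))
```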

4.1.5.26. The corresponding LCCHODE's characteristic polynomial is $P(s) = s^2 + 2s = s(s+2)$ ⇒ $L_1 = -2, 0$; $f(t) = e^{-2t}$ ⇒ $L_2 = -2$ ⇒ Superlist is $L = -2, 0, -2$ ⇒ $y(t) = c_1 e^{-2t} + c_2 + c_3 te^{-2t}$ ⇒ $y_p(t) = Ate^{-2t}$, where $A$ is a constant to be determined. It follows that $\dot y_p(t) = A(1-2t)e^{-2t}$ and $\ddot y_p(t) = A(-4+4t)e^{-2t}$, so

$$e^{-2t} = \ddot y_p + 2\dot y_p = A(-4+4t)e^{-2t} + 2A(1-2t)e^{-2t} = -2Ae^{-2t} \;\Rightarrow\; A = -\frac12 \;\Rightarrow\; y_p(t) = -\frac12 te^{-2t}.$$

The general solution of the ODE is

$$y(t) = y_p(t) + y_h(t) = -\frac12 te^{-2t} + c_1 e^{-2t} + c_2.$$

It follows that

$$\dot y(t) = -\frac12(1-2t)e^{-2t} - 2c_1 e^{-2t}.$$

The ICs require

$$\begin{cases} 0 = y(0) = c_1 + c_2 \\ -5 = \dot y(0) = -\frac12 - 2c_1 \end{cases}$$

It follows that $c_1 = \frac94$ and $c_2 = -\frac94$. The solution of the IVP is

$$y(t) = -\frac12 te^{-2t} + \frac94 e^{-2t} - \frac94.$$

3 −2t te + c1 e−2t + c2 cos t + c3 sin t, 5

where c1 , c2 , c3 =arbitrary constants. 4.1.5.28. We are given that m¨ y3 +by˙ 3 +ky3 = g(t) and m¨ y0 +by˙ 0 +ky0 = 0. It follows that y(t) = 2y0 (t)−y3 (t) satisfies m¨ y + by˙ + ky = 2m¨ y0 − m¨ y3 + 2by˙ 0 − by˙ 3 + 2ky0 − ky3 = 2(m¨ y0 + by˙ 0 + ky0 ) − (m¨ y3 + by˙ 3 + ky3 ) ©Larry

Turyn, October 13, 2013

p. 14

= 2 · 0 − (g) = −g(t). To summarize, y = 2y0 − y3 satisfies the ODE m¨ y + by˙ + ky = −g(t). 4.1.5.29. We are given that y¨1 + 2y˙ 1 + 5y1 = −10t + 11 and that y¨2 + 2y˙ 2 + 5y2 = −17 cos 2t. It follows that t−

 2   2  11 1 1 +2 cos 2t = − −10t+11 − −17 cos 2t = − y¨1 +2y˙ 1 +5y1 − y¨2 +2y˙ 2 +5y2 = y¨+2y+5y, ˙ 10 10 17 10 17

2 1 y1 (t) − 17 y2 . where y(t) = − 10 1 2 So, a particular solution of y¨ + 2y˙ + 5y = t − 11 10 + 2 cos 2t is given by y(t) = − 10 y1 (t) − 17 y2 . The corresponding LCCHODE is y¨ + 2y˙ + 5y = 0, whose characteristic equation is 0 = s2 + 2s + 5 = (s + 1)2 + 4, whose roots are s = −1 ± i2. The general solution of the ODE y¨ + 2y˙ + 5y = t − 11 10 + 2 cos 2t is

y(t) = −

1 2 y1 (t) − y2 + c1 e−t cos 2t + c2 e−t sin 2t, 10 17

that is, the solution is y(t) = −

2 1 (−2t + 3) − (− cos 2t − 4 sin 2t) + c1 e−t cos 2t + c2 e−t sin 2t, 10 17

where c1 , c2 =arbitrary constants. 4.1.5.30. We are given that y¨1 + p(t)y˙ 1 + q(t)y1 = et and y¨2 + p(t)y˙ 2 + q(t)y2 = et . (a) So, y(t) = 2y1 (t) − y2 (t) satisfies y¨ +p(t)y˙ +q(t)y = 2¨ y1 − y¨2 +2p(t)y˙ 1 −p(t)y˙ 2 +2q(t)y2 −q(t)y1 = 2(¨ y2 +p(t)y˙ 2 +q(t)y2 )−(¨ y1 +p(t)y˙ 1 +q(t)y1 ) = 2et − et = et . So, (a1) is true. (b) So, y(t) = c1 y1 (t) + c2 y2 (t) satisfies y¨ + p(t)y˙ + q(t)y =c1 y¨1 + c2 y¨2 + c1 p(t)y˙ 1 + c2 p(t)y˙ 2 + c1 q(t)y2 + c2 q(t)y1 =c1 (¨ y2 + p(t)y˙ 2 + q(t)y2 ) + c2 (¨ y1 + p(t)y˙ 1 + q(t)y1 ) = c1 et + c2 et = (c1 + c2 )et . So, only if c1 + c2 = 1 does y(t) = c1 y1 (t) + c2 y2 (t) satisfy the ODE (?) y¨ + p(t)y˙ + q(t)y = et . In the context of issues concerning “general solutions" of a second order linear ODE, we might presume that c1 , c2 =arbitrary constants. But if we require c1 + c2 = 1 then there is only one arbitrary constant in y(t) = c1 y1 (t) + c2 y2 (t), and we know that the general solution of a second order ODE must have exactly two arbitrary constants. So, y(t) = c1 y1 (t) + c2 y2 (t) cannot be the general solution of (?). So, (b2) is true. 4.1.5.31. This problem asks us to “reverse engineer" a solution of an ODE to find the ODE it satisfies. The solution, y(t) = e−t + e−2t − e−3t , could come from a superlist L = −3, −2, −1 for the method of undetermined coefficients for a non-homogeneous second order ODE. There are three ways that this can happen: L1 = −3, −2, or L1 = −3, −1, or L1 = −2, −1. Ex. 1: With L1 = −3, −2 and L2 = −1 we would have an ODE of the form f0 e−t = (D + 3)(D + 2)[ y ] = (D2 + 5D + 6)[ y ], for some choice of constant f0 . In this case, we would have yh (t) = e−2t − e−3t . Substitute in yp (t) = e−t to get  f0 e−t = (D2 + 5D + 6)[ yp ] = (D2 + 5D + 6)[ e−t ] = (−1)2 + 5(−1) + 6 e−t = 2e−t , so f0 = 2. So, an example of an ODE that could have y(t) = e−t + e−2t − e−3t as one of its solutions is (?) y¨ + 5y˙ + 6y = 2e−t . ©Larry

Turyn, October 13, 2013

p. 15
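These reverse-engineered ODEs are easy to check by machine. As a sketch (SymPy assumed available; this check is not part of the original manual), substitute the given solution into the candidate ODE ÿ + 5ẏ + 6y = 2e^{−t}:

```python
# Machine check (not part of the manual): verify that
# y(t) = e^{-t} + e^{-2t} - e^{-3t} solves y'' + 5y' + 6y = 2e^{-t}.
import sympy as sp

t = sp.symbols('t')
y = sp.exp(-t) + sp.exp(-2*t) - sp.exp(-3*t)
res = sp.simplify(sp.diff(y, t, 2) + 5*sp.diff(y, t) + 6*y - 2*sp.exp(-t))
print(res)  # 0
```

The same substitution with the operator (D² + 4D + 3) confirms the second candidate ODE below.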

Ex. 2: With L1 = −3, −1 and L2 = −2 we would have an ODE of the form f0 e^{−2t} = (D + 3)(D + 1)[y] = (D² + 4D + 3)[y], for some choice of constant f0. In this case, we would have yh(t) = e^{−t} − e^{−3t}. Substitute in yp(t) = e^{−2t} to get

f0 e^{−2t} = (D² + 4D + 3)[yp] = (D² + 4D + 3)[e^{−2t}] = ((−2)² + 4(−2) + 3) e^{−2t} = −e^{−2t},

so f0 = −1. So, an example of an ODE that could have y(t) = e^{−t} + e^{−2t} − e^{−3t} as one of its solutions is (⋆⋆) ÿ + 4ẏ + 3y = −e^{−2t}.
In fact, there are infinitely many examples based on each of (⋆) and (⋆⋆), for example, 5ÿ + 25ẏ + 30y = 10e^{−t}.

4.1.5.32. This problem asks us to "reverse engineer" a solution of an ODE to find the ODE it satisfies. The solution, y(t) = e^{−t} + t e^{−t} + 2e^{−2t}, could come from a superlist L = −2, −1, −1 for the method of undetermined coefficients for a non-homogeneous second order ODE. There are two ways that this can happen: L1 = −2, −1, or L1 = −1, −1.
Ex. 1: With L1 = −2, −1 and L2 = −1 we would have an ODE of the form f0 e^{−t} = (D + 2)(D + 1)[y] = (D² + 3D + 2)[y], for some choice of constant f0. In this case, we would have yh(t) = e^{−t} + 2e^{−2t}. Substitute in yp(t) = t e^{−t} to get

f0 e^{−t} = (D² + 3D + 2)[yp] = (D² + 3D + 2)[t e^{−t}] = (−2 + t)e^{−t} + 3(1 − t)e^{−t} + 2t e^{−t} = e^{−t},

so f0 = 1. So, an example of an ODE that could have y(t) = e^{−t} + t e^{−t} + 2e^{−2t} as one of its solutions is (⋆) ÿ + 3ẏ + 2y = e^{−t}.
Ex. 2: With L1 = −1, −1 and L2 = −2 we would have an ODE of the form f0 e^{−2t} = (D + 1)(D + 1)[y] = (D² + 2D + 1)[y], for some choice of constant f0. In this case, we would have yh(t) = e^{−t} + t e^{−t}. Substitute in yp(t) = 2e^{−2t} to get

f0 e^{−2t} = (D² + 2D + 1)[yp] = (D² + 2D + 1)[2e^{−2t}] = ((−2)² + 2(−2) + 1)·2e^{−2t} = 2e^{−2t},

so f0 = 2. So, an example of an ODE that could have y(t) = e^{−t} + t e^{−t} + 2e^{−2t} as one of its solutions is (⋆⋆) ÿ + 2ẏ + y = 2e^{−2t}.
In fact, there are infinitely many examples based on each of (⋆) and (⋆⋆), for example, 5ÿ + 10ẏ + 5y = 10e^{−2t}.

Section 4.2.5

4.2.5.1. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 4 ⇒ L1 = ±2i
f(t) = −3 cos 2t ⇒ L2 = ±2i ⇒ Superlist is L = ±2i, ±2i
⇒ y(t) = c1 cos 2t + c2 sin 2t + c3 t cos 2t + c4 t sin 2t ⇒ yp(t) = A t cos 2t + B t sin 2t, where A, B are constants to be determined. It follows that ẏp(t) = A cos 2t − 2A t sin 2t + B sin 2t + 2B t cos 2t and ÿp(t) = −4A sin 2t − 4A t cos 2t + 4B cos 2t − 4B t sin 2t. Substitute into the original, non-homogeneous ODE to get

−3 cos 2t = ÿp + 4yp = −4A sin 2t − 4A t cos 2t + 4B cos 2t − 4B t sin 2t + 4A t cos 2t + 4B t sin 2t = −4A sin 2t + 4B cos 2t

⇒ A = 0 and B = −3/4 ⇒ yp(t) = −(3/4) t sin 2t.
The general solution of the ODE is y(t) = −(3/4) t sin 2t + c1 cos 2t + c2 sin 2t, where c1, c2 = arbitrary constants.

4.2.5.2. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 9 ⇒ L1 = ±3i
f(t) = 5 sin 3t ⇒ L2 = ±3i ⇒ Superlist is L = ±3i, ±3i
⇒ y(t) = c1 cos 3t + c2 sin 3t + c3 t cos 3t + c4 t sin 3t ⇒ yp(t) = A t cos 3t + B t sin 3t, where A, B are constants to be determined. It follows that ẏp(t) = A cos 3t − 3A t sin 3t + B sin 3t + 3B t cos 3t and ÿp(t) = −6A sin 3t − 9A t cos 3t + 6B cos 3t − 9B t sin 3t. Substitute into the original, non-homogeneous ODE to get

5 sin 3t = ÿp + 9yp = −6A sin 3t − 9A t cos 3t + 6B cos 3t − 9B t sin 3t + 9A t cos 3t + 9B t sin 3t = −6A sin 3t + 6B cos 3t

⇒ A = −5/6 and B = 0 ⇒ yp(t) = −(5/6) t cos 3t.
The general solution of the ODE is y(t) = −(5/6) t cos 3t + c1 cos 3t + c2 sin 3t, where c1, c2 = arbitrary constants. It follows that

ẏ(t) = −(5/6) cos 3t + (15/6) t sin 3t − 3c1 sin 3t + 3c2 cos 3t.

The ICs require
{ 0 = y(0) = c1, 0 = ẏ(0) = −5/6 + 3c2 },
so c1 = 0 and c2 = 5/18. The solution of the IVP is

y(t) = −(5/6) t cos 3t + (5/18) sin 3t.

4.2.5.3. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 16 ⇒ L1 = ±4i
f(t) = 4 cos 4t ⇒ L2 = ±4i ⇒ Superlist is L = ±4i, ±4i
⇒ y(t) = c1 cos 4t + c2 sin 4t + c3 t cos 4t + c4 t sin 4t ⇒ yp(t) = A t cos 4t + B t sin 4t, where A, B are constants to be determined. It follows that ẏp(t) = A cos 4t − 4A t sin 4t + B sin 4t + 4B t cos 4t and ÿp(t) = −8A sin 4t − 16A t cos 4t + 8B cos 4t − 16B t sin 4t. Substitute into the original, non-homogeneous ODE to get

4 cos 4t = ÿp + 16yp = −8A sin 4t − 16A t cos 4t + 8B cos 4t − 16B t sin 4t + 16A t cos 4t + 16B t sin 4t = −8A sin 4t + 8B cos 4t

⇒ A = 0 and B = 1/2 ⇒ yp(t) = (1/2) t sin 4t.
The general solution of the ODE is y(t) = (1/2) t sin 4t + c1 cos 4t + c2 sin 4t, where c1, c2 = arbitrary constants. It follows that

ẏ(t) = (1/2) sin 4t + 2t cos 4t − 4c1 sin 4t + 4c2 cos 4t.

The ICs require
{ −1 = y(0) = c1, 3 = ẏ(0) = 4c2 },
so c1 = −1 and c2 = 3/4. The solution of the IVP is

y(t) = (1/2) t sin 4t − cos 4t + (3/4) sin 4t.

4.2.5.4. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 8 ⇒ L1 = ±2√2 i
f(t) = 5 sin 3t ⇒ L2 = ±3i ⇒ Superlist is L = ±2√2 i, ±3i implies

y(t) = c1 cos(2√2 t) + c2 sin(2√2 t) + c3 cos 3t + c4 sin 3t

⇒ yp(t) = A cos 3t + B sin 3t, where A, B are constants to be determined. Substitute into the original, non-homogeneous ODE to get

5 sin 3t = ÿp + 8yp = −9A cos 3t − 9B sin 3t + 8A cos 3t + 8B sin 3t = −A cos 3t − B sin 3t

⇒ A = 0 and B = −5 ⇒ yp(t) = −5 sin 3t.
The general solution of the ODE is y(t) = −5 sin 3t + c1 cos(2√2 t) + c2 sin(2√2 t), where c1, c2 = arbitrary constants. It follows that

ẏ(t) = −15 cos 3t − 2√2 c1 sin(2√2 t) + 2√2 c2 cos(2√2 t).

The ICs require
{ 0 = y(0) = c1, 0 = ẏ(0) = −15 + 2√2 c2 },
so c1 = 0 and c2 = 15/(2√2). The solution of the IVP is

y(t) = −5 sin 3t + (15/(2√2)) sin(2√2 t).
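As an optional machine check (SymPy assumed available; not part of the original solution), the IVP solution just found can be substituted back into ÿ + 8y = 5 sin 3t together with the initial conditions:

```python
# Check that y(t) = -5 sin 3t + (15/(2*sqrt(2))) sin(2*sqrt(2) t)
# solves y'' + 8y = 5 sin 3t with y(0) = y'(0) = 0.  (SymPy assumed.)
import sympy as sp

t = sp.symbols('t')
y = -5*sp.sin(3*t) + 15/(2*sp.sqrt(2))*sp.sin(2*sp.sqrt(2)*t)
res = sp.simplify(sp.diff(y, t, 2) + 8*y - 5*sp.sin(3*t))
print(res, y.subs(t, 0), sp.simplify(sp.diff(y, t)).subs(t, 0))  # 0 0 0
```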

4.2.5.5. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 2s + 2 = (s + 1)² + 1 ⇒ L1 = −1 ± i
f(t) = sin t ⇒ L2 = ±i ⇒ Superlist is L = −1 ± i, ±i ⇒ y(t) = c1 e^{−t} cos t + c2 e^{−t} sin t + c3 cos t + c4 sin t ⇒ yp(t) = A cos t + B sin t, where A, B are constants to be determined:

sin t = ÿp + 2ẏp + 2yp = −A cos t − B sin t − 2A sin t + 2B cos t + 2A cos t + 2B sin t = (A + 2B) cos t + (B − 2A) sin t

⇒ { A + 2B = 0, −2A + B = 1 },
so
[A; B] = [[1, 2], [−2, 1]]^{−1} [0; 1] = (1/5) [[1, −2], [2, 1]] [0; 1] = (1/5) [−2; 1].
⇒ The steady state solution is

yS(t) = yp(t) = (1/5)(−2 cos t + sin t),

because the terms c1 e^{−t} cos t + c2 e^{−t} sin t are transient no matter what are the values of the constants c1, c2.
The amplitude phase form (3.39) for the steady state solution is y(t) = α cos(t − δ), where
{ −2/5 = A = α cos δ, 1/5 = B = α sin δ },
hence α = (1/5)√((−2)² + 1²) = 1/√5 and tan δ = (1/5)/(−2/5) = −1/2. Because (A, B) = (−2/5, 1/5) is in the second quadrant, δ = π + arctan(−1/2) = π − arctan(1/2).
The steady state solution in amplitude phase form is

yS(t) = (1/√5) cos(t − π + arctan(1/2)),

whose amplitude is Amplitude = 1/√5.

4.2.5.6. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + s + 5 = (s + 1/2)² + 19/4 ⇒ L1 = −1/2 ± (√19/2)i
f(t) = cos 2t ⇒ L2 = ±2i ⇒ Superlist is L = −1/2 ± (√19/2)i, ±2i implies

y(t) = c1 e^{−t/2} cos((√19/2)t) + c2 e^{−t/2} sin((√19/2)t) + c3 cos 2t + c4 sin 2t

⇒ yp(t) = A cos 2t + B sin 2t, where A, B are constants to be determined:

cos 2t = ÿp + ẏp + 5yp = −4A cos 2t − 4B sin 2t − 2A sin 2t + 2B cos 2t + 5A cos 2t + 5B sin 2t = (A + 2B) cos 2t + (B − 2A) sin 2t

⇒ { A + 2B = 1, −2A + B = 0 },
so
[A; B] = [[1, 2], [−2, 1]]^{−1} [1; 0] = (1/5) [[1, −2], [2, 1]] [1; 0] = (1/5) [1; 2].
⇒ The steady state solution is

yS(t) = yp(t) = (1/5)(cos 2t + 2 sin 2t),

because the terms c1 e^{−t/2} cos((√19/2)t) + c2 e^{−t/2} sin((√19/2)t) are transient no matter what are the values of the constants c1, c2.
The amplitude phase form (3.39) for the steady state solution is y(t) = α cos(2t − δ), where
{ 1/5 = A = α cos δ, 2/5 = B = α sin δ },
hence α = (1/5)√(1² + 2²) = 1/√5 and tan δ = (2/5)/(1/5) = 2. Because (A, B) = (1/5, 2/5) is in the first quadrant, δ = arctan 2.
The steady state solution in amplitude phase form is

yS(t) = (1/√5) cos(2t − arctan 2),

whose amplitude is Amplitude = 1/√5.
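The 2 × 2 systems for A and B in problems like 4.2.5.5 and 4.2.5.6 can also be solved numerically. A sketch (NumPy assumed available; not part of the manual) for the system of 4.2.5.5:

```python
# Solve the undetermined-coefficient system for y'' + 2y' + 2y = sin t:
#   A + 2B = 0,  -2A + B = 1.   (NumPy assumed available.)
import numpy as np

A, B = np.linalg.solve([[1.0, 2.0], [-2.0, 1.0]], [0.0, 1.0])
print(A, B)            # approximately -0.4 and 0.2, i.e. (A, B) = (-2/5, 1/5)
print(np.hypot(A, B))  # approximately 0.4472 = 1/sqrt(5), the steady-state amplitude
```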

4.2.5.7. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 2s + 3 = (s + 1)² + 2 ⇒ L1 = −1 ± √2 i
f(t) = sin 2t ⇒ L2 = ±2i ⇒ Superlist is L = −1 ± √2 i, ±2i implies

y(t) = c1 e^{−t} cos(√2 t) + c2 e^{−t} sin(√2 t) + c3 cos 2t + c4 sin 2t

⇒ yp(t) = A cos 2t + B sin 2t, where A, B are constants to be determined:

sin 2t = ÿp + 2ẏp + 3yp = −4A cos 2t − 4B sin 2t − 4A sin 2t + 4B cos 2t + 3A cos 2t + 3B sin 2t = (−A + 4B) cos 2t + (−B − 4A) sin 2t

⇒ { −A + 4B = 0, −4A − B = 1 },
so
[A; B] = [[−1, 4], [−4, −1]]^{−1} [0; 1] = (1/17) [[−1, −4], [4, −1]] [0; 1] = (1/17) [−4; −1].
⇒ The steady state solution is

yS(t) = yp(t) = (1/17)(−4 cos 2t − sin 2t),

because the terms c1 e^{−t} cos(√2 t) + c2 e^{−t} sin(√2 t) are transient no matter what are the values of the constants c1, c2.
The amplitude phase form (3.39) for the steady state solution is y(t) = α cos(2t − δ), where
{ −4/17 = A = α cos δ, −1/17 = B = α sin δ },
hence α = (1/17)√((−4)² + (−1)²) = 1/√17 and tan δ = (−1/17)/(−4/17) = 1/4. Because (A, B) = (−4/17, −1/17) is in the third quadrant, δ = π + arctan(1/4).
The steady state solution in amplitude phase form is

yS(t) = (1/√17) cos(2t − π − arctan(1/4)),

whose amplitude is Amplitude = 1/√17.

4.2.5.8. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 2s + 1 = (s + 1)² ⇒ L1 = −1, −1
f(t) = 4 sin 3t ⇒ L2 = ±3i ⇒ Superlist is L = −1, −1, ±3i implies

y(t) = c1 e^{−t} + c2 t e^{−t} + c3 cos 3t + c4 sin 3t

⇒ yp(t) = A cos 3t + B sin 3t, where A, B are constants to be determined:

4 sin 3t = ÿp + 2ẏp + yp = −9A cos 3t − 9B sin 3t − 6A sin 3t + 6B cos 3t + A cos 3t + B sin 3t = (−8A + 6B) cos 3t + (−8B − 6A) sin 3t

⇒ { −8A + 6B = 0, −6A − 8B = 4 },
so
[A; B] = [[−8, 6], [−6, −8]]^{−1} [0; 4] = (1/100) [[−8, −6], [6, −8]] [0; 4] = (1/100) [−24; −32] = [−6/25; −8/25].
⇒ The steady state solution is

yS(t) = yp(t) = (2/25)(−3 cos 3t − 4 sin 3t),

because the terms c1 e^{−t} + c2 t e^{−t} are transient no matter what are the values of the constants c1, c2.
The amplitude phase form (3.39) for the steady state solution is y(t) = α cos(3t − δ), where
{ −6/25 = A = α cos δ, −8/25 = B = α sin δ },
hence α = (2/25)√((−3)² + (−4)²) = 2/5 and tan δ = (−8/25)/(−6/25) = 4/3. Because (A, B) = (−6/25, −8/25) is in the third quadrant, δ = π + arctan(4/3).
The steady state solution in amplitude phase form is

yS(t) = (2/5) cos(3t − π − arctan(4/3)),

whose amplitude is Amplitude = 2/5.
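The quadrant bookkeeping for δ can be cross-checked with atan2, which handles quadrants automatically. A sketch (Python standard library only; not part of the manual) using the coefficients from 4.2.5.8:

```python
# Amplitude-phase form alpha*cos(3t - delta) from A = alpha*cos(delta), B = alpha*sin(delta).
import math

A, B = -6/25, -8/25
alpha = math.hypot(A, B)                # amplitude, here 2/5
delta = math.atan2(B, A) % (2*math.pi)  # phase in [0, 2*pi); atan2 picks the right quadrant
print(alpha)                            # approximately 0.4
print(abs(delta - (math.pi + math.atan(4/3))))  # approximately 0, agreeing with the hand work
```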

4.2.5.9. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 2s + 10 = (s + 1)² + 9 ⇒ L1 = −1 ± 3i
f(t) = 74 cos 3t ⇒ L2 = ±3i ⇒ Superlist is L = −1 ± 3i, ±3i implies

y(t) = c1 e^{−t} cos 3t + c2 e^{−t} sin 3t + c3 cos 3t + c4 sin 3t

⇒ yp(t) = A cos 3t + B sin 3t, where A, B are constants to be determined:

74 cos 3t = ÿp + 2ẏp + 10yp = −9A cos 3t − 9B sin 3t − 6A sin 3t + 6B cos 3t + 10A cos 3t + 10B sin 3t = (A + 6B) cos 3t + (B − 6A) sin 3t

⇒ { A + 6B = 74, −6A + B = 0 },
so
[A; B] = [[1, 6], [−6, 1]]^{−1} [74; 0] = (1/37) [[1, −6], [6, 1]] [74; 0] = [2; 12].
The general solution of the ODE is y(t) = 2 cos 3t + 12 sin 3t + c1 e^{−t} cos 3t + c2 e^{−t} sin 3t, where c1, c2 = arbitrary constants. It follows that

ẏ(t) = −6 sin 3t + 36 cos 3t + (−c1 + 3c2) e^{−t} cos 3t + (−c2 − 3c1) e^{−t} sin 3t.

The ICs require
{ −1 = y(0) = 2 + c1, 2 = ẏ(0) = 36 − c1 + 3c2 },
so c1 = −3 and c2 = −37/3. The solution of the IVP is

y(t) = 2 cos 3t + 12 sin 3t − 3e^{−t} cos 3t − (37/3) e^{−t} sin 3t.

⇒ The steady state solution is yS(t) = yp(t) = 2 cos 3t + 12 sin 3t, because the terms −3e^{−t} cos 3t − (37/3) e^{−t} sin 3t go to zero as t → ∞.
The amplitude phase form (3.39) for the steady state solution is y(t) = α cos(3t − δ), where
{ 2 = A = α cos δ, 12 = B = α sin δ },
hence α = √(2² + 12²) = √148 = 2√37 and tan δ = 12/2 = 6. Because (A, B) = (2, 12) is in the first quadrant, δ = arctan 6.
The steady state solution in amplitude phase form is

yS(t) = 2√37 cos(3t − arctan 6),

whose amplitude is Amplitude = 2√37.
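As an optional check (SymPy assumed available; not part of the manual), the 4.2.5.9 IVP solution can be verified directly against the ODE and both initial conditions:

```python
# Check the 4.2.5.9 IVP solution against y'' + 2y' + 10y = 74 cos 3t,
# y(0) = -1, y'(0) = 2.  (SymPy assumed available.)
import sympy as sp

t = sp.symbols('t')
y = (2*sp.cos(3*t) + 12*sp.sin(3*t)
     - 3*sp.exp(-t)*sp.cos(3*t) - sp.Rational(37, 3)*sp.exp(-t)*sp.sin(3*t))
res = sp.simplify(sp.diff(y, t, 2) + 2*sp.diff(y, t) + 10*y - 74*sp.cos(3*t))
print(res, y.subs(t, 0), sp.diff(y, t).subs(t, 0))  # 0 -1 2
```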

4.2.5.10. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 4s + 5 = (s + 2)² + 1 ⇒ L1 = −2 ± i
f(t) = sin(πt) ⇒ L2 = ±πi ⇒ Superlist is L = −2 ± i, ±πi implies

y(t) = c1 e^{−2t} cos t + c2 e^{−2t} sin t + c3 cos(πt) + c4 sin(πt)

⇒ yp(t) = A cos(πt) + B sin(πt), where A, B are constants to be determined:

sin(πt) = ÿp + 4ẏp + 5yp = −π²A cos(πt) − π²B sin(πt) − 4πA sin(πt) + 4πB cos(πt) + 5A cos(πt) + 5B sin(πt) = ((5 − π²)A + 4πB) cos(πt) + ((5 − π²)B − 4πA) sin(πt)

⇒ { (5 − π²)A + 4πB = 0, −4πA + (5 − π²)B = 1 },
so
[A; B] = [[5 − π², 4π], [−4π, 5 − π²]]^{−1} [0; 1] = (1/((5 − π²)² + 16π²)) [[5 − π², −4π], [4π, 5 − π²]] [0; 1] = (1/((5 − π²)² + 16π²)) [−4π; 5 − π²].
The general solution of the ODE is

y(t) = (1/((5 − π²)² + 16π²)) (−4π cos(πt) + (5 − π²) sin(πt)) + c1 e^{−2t} cos t + c2 e^{−2t} sin t,

where c1, c2 = arbitrary constants. It follows that

ẏ(t) = (1/((5 − π²)² + 16π²)) (4π² sin(πt) + (5 − π²)π cos(πt)) + (−2c1 + c2) e^{−2t} cos t + (−2c2 − c1) e^{−2t} sin t.

The ICs require
{ 0 = y(0) = −4π/((5 − π²)² + 16π²) + c1, 0 = ẏ(0) = (5 − π²)π/((5 − π²)² + 16π²) − 2c1 + c2 },
so
c1 = 4π/((5 − π²)² + 16π²) and c2 = (−(5 − π²)π + 8π)/((5 − π²)² + 16π²) = (3 + π²)π/((5 − π²)² + 16π²).
The solution of the IVP is

y(t) = (1/((5 − π²)² + 16π²)) (−4π cos(πt) + (5 − π²) sin(πt) + 4π e^{−2t} cos t + (3 + π²)π e^{−2t} sin t).

⇒ The steady state solution is

yS(t) = (1/((5 − π²)² + 16π²)) (−4π cos(πt) + (5 − π²) sin(πt)),

because the other terms in y(t) go to zero as t → ∞.
The amplitude phase form (3.39) for the steady state solution is y(t) = α cos(πt − δ), where
{ −4π · 1/((5 − π²)² + 16π²) = A = α cos δ, (5 − π²) · 1/((5 − π²)² + 16π²) = B = α sin δ },
hence

α = (1/((5 − π²)² + 16π²)) √((−4π)² + (5 − π²)²) = 1/√((5 − π²)² + 16π²)

and tan δ = (5 − π²)/(−4π) = (π² − 5)/(4π). Because

(A, B) = (−4π/((5 − π²)² + 16π²), −(π² − 5)/((5 − π²)² + 16π²))

is in the third quadrant, δ = π + arctan((π² − 5)/(4π)).
The steady state solution in amplitude phase form is

yS(t) = (1/√((5 − π²)² + 16π²)) cos(πt − π − arctan((π² − 5)/(4π))),

whose amplitude is Amplitude = 1/√((5 − π²)² + 16π²).
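Because the coefficients involve π, a quick numerical check of the 4.2.5.10 linear solve is reassuring (NumPy assumed available; not part of the manual):

```python
# Solve (5 - pi^2)A + 4*pi*B = 0, -4*pi*A + (5 - pi^2)B = 1 numerically and
# compare with the closed form A = -4*pi/Delta, B = (5 - pi^2)/Delta.
import numpy as np

w = np.pi
M = np.array([[5 - w**2, 4*w], [-4*w, 5 - w**2]])
A, B = np.linalg.solve(M, [0.0, 1.0])
Delta = (5 - w**2)**2 + 16*w**2
ok = np.allclose([A, B], [-4*w/Delta, (5 - w**2)/Delta])
print(ok)  # True
```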

4.2.5.11. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 2s + 6 = (s + 1)² + 5 ⇒ L1 = −1 ± √5 i
f(t) = sin 2t ⇒ L2 = ±2i ⇒ Superlist is L = −1 ± √5 i, ±2i implies

y(t) = c1 e^{−t} cos(√5 t) + c2 e^{−t} sin(√5 t) + c3 cos 2t + c4 sin 2t

⇒ yp(t) = A cos 2t + B sin 2t, where A, B are constants to be determined:

sin 2t = ÿp + 2ẏp + 6yp = −4A cos 2t − 4B sin 2t − 4A sin 2t + 4B cos 2t + 6A cos 2t + 6B sin 2t = (2A + 4B) cos 2t + (2B − 4A) sin 2t

⇒ { 2A + 4B = 0, −4A + 2B = 1 },
so
[A; B] = [[2, 4], [−4, 2]]^{−1} [0; 1] = (1/20) [[2, −4], [4, 2]] [0; 1] = (1/10) [−2; 1].
The general solution of the ODE is y(t) = (1/10)(−2 cos 2t + sin 2t) + c1 e^{−t} cos(√5 t) + c2 e^{−t} sin(√5 t), where c1, c2 = arbitrary constants. It follows that

ẏ(t) = (1/10)(4 sin 2t + 2 cos 2t) + (−c1 + √5 c2) e^{−t} cos(√5 t) + (−c2 − √5 c1) e^{−t} sin(√5 t).

The ICs require
{ −3 = y(0) = −2/10 + c1, 3 − 3√5 = ẏ(0) = 2/10 − c1 + √5 c2 },
so c1 = −14/5 and c2 = −3. The solution of the IVP is

y(t) = (1/10)(−2 cos 2t + sin 2t) − (14/5) e^{−t} cos(√5 t) − 3 e^{−t} sin(√5 t).

⇒ The steady state solution is yS(t) = yp(t) = (1/10)(−2 cos 2t + sin 2t), because the other terms in y(t) go to zero as t → ∞.
The amplitude phase form (3.39) for the steady state solution is y(t) = α cos(2t − δ), where
{ −2/10 = A = α cos δ, 1/10 = B = α sin δ },
hence α = (1/10)√((−2)² + 1²) = 1/(2√5) and tan δ = (1/10)/(−2/10) = −1/2. Because (A, B) = (−1/5, 1/10) is in the second quadrant, δ = π + arctan(−1/2) = π − arctan(1/2).
The steady state solution in amplitude phase form is

yS(t) = (1/(2√5)) cos(2t − π + arctan(1/2)),

whose amplitude is Amplitude = 1/(2√5).

4.2.5.12. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + s + 3 = (s + 1/2)² + 11/4 ⇒ L1 = −1/2 ± (√11/2)i
f(t) = sin 2t ⇒ L2 = ±2i ⇒ Superlist is L = −1/2 ± (√11/2)i, ±2i implies

y(t) = c1 e^{−t/2} cos((√11/2)t) + c2 e^{−t/2} sin((√11/2)t) + c3 cos 2t + c4 sin 2t

⇒ yp(t) = A cos 2t + B sin 2t, where A, B are constants to be determined:

sin 2t = ÿp + ẏp + 3yp = −4A cos 2t − 4B sin 2t − 2A sin 2t + 2B cos 2t + 3A cos 2t + 3B sin 2t = (−A + 2B) cos 2t + (−B − 2A) sin 2t

⇒ { −A + 2B = 0, −2A − B = 1 },
so
[A; B] = [[−1, 2], [−2, −1]]^{−1} [0; 1] = (1/5) [[−1, −2], [2, −1]] [0; 1] = (1/5) [−2; −1].
The general solution of the ODE is

y(t) = (1/5)(−2 cos 2t − sin 2t) + c1 e^{−t/2} cos((√11/2)t) + c2 e^{−t/2} sin((√11/2)t),

where c1, c2 = arbitrary constants. It follows that

ẏ(t) = (1/5)(4 sin 2t − 2 cos 2t) + (−(1/2)c1 + (√11/2)c2) e^{−t/2} cos((√11/2)t) + (−(1/2)c2 − (√11/2)c1) e^{−t/2} sin((√11/2)t).

The ICs require
{ 1 = y(0) = −2/5 + c1, 0 = ẏ(0) = −2/5 − (1/2)c1 + (√11/2)c2 },
so c1 = 7/5 and c2 = √11/5. The solution of the IVP is

y(t) = (1/5)(−2 cos 2t − sin 2t) + (7/5) e^{−t/2} cos((√11/2)t) + (√11/5) e^{−t/2} sin((√11/2)t).

⇒ The steady state solution is yS(t) = yp(t) = (1/5)(−2 cos 2t − sin 2t), because the other terms in y(t) go to zero as t → ∞.
The amplitude phase form (3.39) for the steady state solution is y(t) = α cos(2t − δ), where
{ −2/5 = A = α cos δ, −1/5 = B = α sin δ },
hence α = (1/5)√((−2)² + (−1)²) = 1/√5 and tan δ = (−1/5)/(−2/5) = 1/2. Because (A, B) = (−2/5, −1/5) is in the third quadrant, δ = π + arctan(1/2).
The steady state solution in amplitude phase form is

yS(t) = (1/√5) cos(2t − π − arctan(1/2)),

whose amplitude is Amplitude = 1/√5.

4.2.5.13. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 9 ⇒ L1 = ±3i
f(t) = 5 cos(√8 t) = 5 cos(2√2 t) ⇒ L2 = ±2√2 i ⇒ Superlist is L = ±3i, ±2√2 i implies

y(t) = c1 cos 3t + c2 sin 3t + c3 cos(2√2 t) + c4 sin(2√2 t)

⇒ yp(t) = A cos(2√2 t) + B sin(2√2 t), where A, B are constants to be determined:

5 cos(2√2 t) = ÿp + 9yp = −8A cos(2√2 t) − 8B sin(2√2 t) + 9A cos(2√2 t) + 9B sin(2√2 t) = A cos(2√2 t) + B sin(2√2 t)

⇒ A = 5 and B = 0.
The general solution of the ODE is y(t) = 5 cos(2√2 t) + c1 cos 3t + c2 sin 3t, where c1, c2 = arbitrary constants. It follows that

ẏ(t) = −10√2 sin(2√2 t) − 3c1 sin 3t + 3c2 cos 3t.

The ICs require
{ 0 = y(0) = 5 + c1, 0 = ẏ(0) = 3c2 },
so c1 = −5 and c2 = 0. The solution of the IVP is

y(t) = 5(cos(2√2 t) − cos 3t).

(a) The natural frequency is ω0 = 3 and the forcing frequency of this undamped system is ω = 2√2. The frequency of the beats is

ζ = |2√2 − 3|/2 = (3 − 2√2)/2.

(b) The maximum amplitude of the motion given by y(t) = 5(cos(2√2 t) − cos 3t) is 10. Intuitively, there are times when both cos(2√2 t) ≈ 1 and cos 3t ≈ −1, because the motion is quasi-periodic, not periodic.

4.2.5.14. The corresponding LCCHODE's characteristic polynomial is P(s) = s² + 5 ⇒ L1 = ±√5 i
f(t) = 3 cos 2t ⇒ L2 = ±2i ⇒ Superlist is L = ±√5 i, ±2i implies

y(t) = c1 cos(√5 t) + c2 sin(√5 t) + c3 cos 2t + c4 sin 2t

⇒ yp(t) = A cos 2t + B sin 2t, where A, B are constants to be determined:

3 cos 2t = ÿp + 5yp = −4A cos 2t − 4B sin 2t + 5A cos 2t + 5B sin 2t = A cos 2t + B sin 2t

⇒ A = 3 and B = 0.
The general solution of the ODE is y(t) = 3 cos 2t + c1 cos(√5 t) + c2 sin(√5 t), where c1, c2 = arbitrary constants. It follows that

ẏ(t) = −6 sin 2t − √5 c1 sin(√5 t) + √5 c2 cos(√5 t).

The ICs require
{ 0 = y(0) = 3 + c1, 0 = ẏ(0) = √5 c2 },
so c1 = −3 and c2 = 0. The solution of the IVP is

y(t) = 3(cos 2t − cos(√5 t)).

(a) The natural frequency is ω0 = √5 and the undamped system's forcing frequency is ω = 2. The frequency of the beats is

ζ = |√5 − 2|/2 = (√5 − 2)/2.

(b) The maximum amplitude of the motion given by y(t) = 3(cos 2t − cos(√5 t)) is 6. Intuitively, there are times when both cos 2t ≈ 1 and cos(√5 t) ≈ −1, because the motion is quasi-periodic, not periodic.
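The part (b) claims, that these quasi-periodic motions come arbitrarily close to their bounds, can be probed numerically (NumPy assumed available; not part of the manual). Sampling the 4.2.5.13 solution over a long window:

```python
# Sample y(t) = 5(cos(2*sqrt(2) t) - cos 3t) on a fine grid; the observed peak
# approaches the bound 10 but never exceeds it (the motion is quasi-periodic).
import numpy as np

t = np.linspace(0.0, 200.0, 2_000_001)
y = 5*(np.cos(2*np.sqrt(2)*t) - np.cos(3*t))
peak = np.abs(y).max()
print(peak)  # close to, but not above, 10
```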

4.2.5.15. The corresponding LCCHODE's characteristic polynomial is P(s) = ms² + bs + k ⇒ L1 = −b/(2m) ± √(b² − 4mk)/(2m)
f(t) = f0 sin ωt ⇒ L2 = ±iω ⇒ Superlist is L = −b/(2m) ± √(b² − 4mk)/(2m), ±iω
y(t) = yh(t) + c3 cos ωt + c4 sin ωt ⇒ yp(t) = A cos ωt + B sin ωt, where A, B are constants to be determined:

f0 sin ωt = mÿp + bẏp + kyp = −mω²A cos ωt − mω²B sin ωt + b(−ωA sin ωt + ωB cos ωt) + k(A cos ωt + B sin ωt) = ((k − mω²)A + bωB) cos ωt + ((k − mω²)B − bωA) sin ωt,

so 0 = (k − mω²)A + bωB and f0 = (k − mω²)B − bωA. This can be written as a 2 × 2 system
{ (k − mω²)A + bωB = 0, −bωA + (k − mω²)B = f0 },
whose solution is

[A; B] = [[k − mω², bω], [−bω, k − mω²]]^{−1} [0; f0] = (1/((k − mω²)² + (bω)²)) [[k − mω², −bω], [bω, k − mω²]] [0; f0].

So, a particular solution of the ODE is given by

yp(t) = (f0/((k − mω²)² + (bω)²)) (−bω cos ωt + (k − mω²) sin ωt).

b > 0 implies that yh(t) → 0 as t → ∞, that is, yh(t) is a transient solution, and the initial data y0, ẏ0 don't affect this fact! The steady state solution is

yS(t) = yp(t) = (f0/((k − mω²)² + (bω)²)) (−bω cos ωt + (k − mω²) sin ωt).
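All of these steady-state computations use the same gain formula. A small helper (standard library only; the function name is mine, not the textbook's) that also covers the graph-reading problems that follow:

```python
import math

def steady_state_amplitude(m, b, k, f0, w):
    """Amplitude of the steady-state response of m*y'' + b*y' + k*y = f0*sin(w*t)."""
    return abs(f0) / math.hypot(k - m*w**2, b*w)

# Sanity check against 4.2.5.5 (m=1, b=2, k=2, f0=1, w=1), whose amplitude was 1/sqrt(5):
print(steady_state_amplitude(1, 2, 2, 1, 1))  # approximately 0.4472
```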

4.2.5.16. (a) From the graph in the textbook's Figure 4.10, the steady state solution appears to have period 4. But the period of the steady state solution is 2π/ω, so ω = 2π/4 = π/2.
(b) From the graph in the textbook's Figure 4.10, the steady state solution appears to have Amplitude ≈ 2.3.
(c) According to formula (4.11), the amplitude of the steady state solution of an ODE mÿ + bẏ + ky = f0 cos ωt is

(⋆) Amplitude = |f0| / √((k − mω²)² + (bω)²).

In this problem, we were given f0 = 22, m = 1, and b = 6; in part (a) we derived that ω = π/2, and in part (b) we found Amplitude ≈ 2.3. Using these values and equation (⋆) we can find an approximate value of k: (⋆) implies

(2.3)² ≈ Amplitude² = f0²/((k − mω²)² + (bω)²) = (22)²/((k − 1·(π/2)²)² + (6·(π/2))²),

which implies

(k − 1·(π/2)²)² + (3π)² ≈ (22)²/(2.3)².

This implies

(k − (π/2)²)² ≈ (22)²/(2.3)² − (3π)²,

hence

k ≈ (π/2)² ± √((22)²/(2.3)² − (3π)²) = { k+, if +; k−, if − }.

So, there are two possibilities:

k+ ≈ 4.100479216 and k− ≈ 0.8343229842.

Because m = 1, the natural frequency of the system is, rounded off to the two significant digits of accuracy of data we read from the graph:

ω0+ = √k+ ≈ √4.100479216 ≈ 2.0 and ω0− = √k− ≈ √0.8343229842 ≈ 0.91.
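The arithmetic in part (c) is quick to reproduce (standard library only; not part of the manual):

```python
import math

w = math.pi/2                                 # forcing frequency from part (a)
s = math.sqrt((22/2.3)**2 - (3*math.pi)**2)   # sqrt((f0/Amplitude)^2 - (b*w)^2), b*w = 3*pi
kplus, kminus = w**2 + s, w**2 - s
print(kplus, kminus)                          # approximately 4.1005 and 0.8343
print(math.sqrt(kplus), math.sqrt(kminus))    # approximately 2.0 and 0.91
```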

4.2.5.17. (a) From the graph in the textbook's Figure 4.11, the steady state solution appears to have period 4. But the period of the steady state solution is 2π/ω, so ω = 2π/4 = π/2.
(b) From the graph in the textbook's Figure 4.11, the steady state solution appears to have Amplitude ≈ 1.7.
(c) According to formula (4.11), the amplitude of the steady state solution of an ODE mÿ + bẏ + ky = f0 cos ωt is

(⋆) Amplitude = |f0| / √((k − mω²)² + (bω)²).

In this problem, we were given f0 = 11, m = 1, and b = 4; in part (a) we derived that ω = π/2, and in part (b) we found Amplitude ≈ 1.7. Using these values and equation (⋆) we can find an approximate value of k: (⋆) implies

(1.7)² ≈ Amplitude² = f0²/((k − mω²)² + (bω)²) = (11)²/((k − 1·(π/2)²)² + (4·(π/2))²),

which implies

(k − 1·(π/2)²)² + (2π)² ≈ (11)²/(1.7)².

This implies

(k − (π/2)²)² ≈ (11)²/(1.7)² − (2π)²,

hence

k ≈ (π/2)² ± √((11)²/(1.7)² − (2π)²).

So, there are two possibilities:

k+ ≈ 4.013394149 and k− ≈ 0.9214080517.

Because m = 1, the natural frequency of the system is, rounded off to the two significant digits of accuracy of data we read from the graph:

ω0+ = √k+ ≈ √4.013394149 ≈ 2.0 and ω0− = √k− ≈ √0.9214080517 ≈ 0.96.

4.2.5.18. In the textbook's Figure 4.12, the first graph seems to show a steady state oscillation, for parameters b1 and ω1 in the ODE. So, without discussing the numerical values of y(t) in the first graph, we would need to have b1 > 0, and ω1 could be any value.
In the textbook's Figure 4.12, the second graph seems to show the beats phenomenon, for parameters b2 and ω2 in the ODE. So, without discussing the numerical values of y(t) in the second graph, we would need to have b2 = 0, and ω2 could be any value other than ω0 = 1, the natural frequency of the undamped oscillator ODE ÿ + y = cos(ω2 t).
(a) b1 = 0.5 & ω1 = √2 agrees with the first graph in the textbook's Figure 4.12, because b1 = 0.5 > 0.
(b) b2 = 0.5 & ω2 = √2 does not agree with the second graph in the textbook's Figure 4.12, because b2 = 0.5 > 0. [In fact, the parameter values in (b) give a steady state oscillation.]
(c) b1 = 0.5 & ω1 = 1 agrees with the first graph in the textbook's Figure 4.12, because b1 = 0.5 > 0.
(d) b2 = 0 & ω2 = 1 does not agree with the second graph in the textbook's Figure 4.12, because ω2 = ω0 = 1. [In fact, the parameter values in (d) give the pure resonance phenomenon.]
(e) b1 = 1 & ω1 = √2 agrees with the first graph in the textbook's Figure 4.12, because b1 = 1 > 0.
(f) b2 = 0 & ω2 = √2 agrees with the second graph in the textbook's Figure 4.12, because √2 = ω2 ≠ ω0 = 1 and b2 = 0.
To summarize, choices (a), (c), (e), and (f) could conceivably give the graphs.

4.2.5.19. As d → 0+, eventually d will be a very small but still positive number, hence the steady state solution is a steady state oscillation of ÿ + 2dẏ + y = cos t. From formula (4.11), and noting that b = 2d, m = 1, k = 1, and ω = 1, the amplitude of the steady state oscillation is

(⋆) Amplitude = |f0| / √((k − mω²)² + (bω)²) = 1/√((1 − 1)² + (2d)²) = 1/(2d) > 0.

As d → 0+, the Amplitude → +∞.
We can think of pure resonance as an "oscillation" with amplitude that becomes unbounded as t → ∞. This is not quite the same thing as a steady state oscillation depending on a parameter d whose Amplitude → +∞ as d → 0+. From the practical point of view, having a sufficiently small damping coefficient can be almost as powerful as having no damping and pure resonance.

4.2.5.20. The corresponding homogeneous ODE has characteristic polynomial Ls² + 1/C = 0, hence roots s = ±i/√(LC). Using the given information that L = 10⁻³ henrys and the constant C is measured in farads, the natural frequency of this undamped but forced circuit is

ω0 = √(1/(10⁻³ C)) = √(1000/C).

Note that ω = 120π is the forcing frequency.
(a) Because the damping is zero, the circuit exhibits pure resonance if

120π = ω = ω0 = √(1000/C),

that is,

(120π)² = 1/(10⁻³ C) = 10³/C.

So, the circuit exhibits pure resonance if, and only if,

C = 10³/(120π)² = 5/(72π²).

(b) Because the damping is zero, the circuit exhibits the beats phenomenon when C ≠ 5/(72π²), in which case the slow, beats frequency is

ζ = |120π − √(10³ C⁻¹)|/2 = 5 |12π − √(10 C⁻¹)|.

4.2.5.21. By Hooke's Law, mg = kℓ, where ℓ is the stretch of the spring in equilibrium and m is the mass of the object compressing or stretching the spring. We are given that ℓ = 0.1 in = 1/120 ft, so the mass of the machine, m, and the spring constant, k, satisfy

ω0² = k/m = g/ℓ = (32 ft/s²)/((1/120) ft) = 32·120 s⁻² = 2⁸·15 s⁻²,

where ω0 is the natural frequency of the undamped system. Pure resonance vibrations will occur if

ω = ω0 = √(2⁸·15 s⁻²) = 16√15 s⁻¹ ≈ 61.96773354 s⁻¹.

I.e., pure resonance vibrations occur if the machine spins at ω/(2π) revolutions per second ≈ 9.862471105 Hz.

4.2.5.22. This problem asks us to "reverse engineer" a solution of an ODE to find the ODE it satisfies. The solution, y(t) = e^{−t} cos 3t + 3 cos 2t + 2 sin 2t, could come from a superlist L = −1 ± 3i, ±2i. Some of the roots in L come from L1 and others from L2. Because the right hand side of the ODE is f0 cos ωt, L2 = ±iω. To be part of the superlist L, we must have ω = 2.
It follows that the list L1 is the rest of L, hence L1 = −1 ± 3i. So, the corresponding homogeneous ODE has characteristic polynomial

0 = s² + bs + k = (s − (−1 + 3i))(s − (−1 − 3i)) = ((s + 1) − 3i)((s + 1) + 3i) = (s + 1)² + 9 = s² + 2s + 10,

hence b = 2 and k = 10 and the ODE is ÿ + 2ẏ + 10y = f0 cos 2t.
In this problem, we were not asked to find f0, but here is that work as a bonus solution: We can substitute into the ODE either the whole given solution, y(t) = e^{−t} cos 3t + 3 cos 2t + 2 sin 2t, or just the particular solution, which is yp(t) = 3 cos 2t + 2 sin 2t because L2 = ±2i:

f0 cos 2t = ÿp + 2ẏp + 10yp = −12 cos 2t − 8 sin 2t + 2(−6 sin 2t + 4 cos 2t) + 10(3 cos 2t + 2 sin 2t) = 26 cos 2t,

so f0 = 26.

4.2.5.23. This problem asks us to "reverse engineer" a solution of an ODE to find the ODE it satisfies. The solution, y(t) = 2t sin 2t + 5 cos(2t − δ), could come from a superlist L = ±2i, ±2i. Some of the roots in L come from L1 and others from L2. Because the right hand side of the ODE is f0 cos ωt, L2 = ±iω. To be part of the superlist L, we must have ω = 2. It follows that the list L1 is the rest of L, hence L1 = ±2i.
We were given that k = 6, in N/m. So, the corresponding homogeneous ODE has characteristic polynomial

0 = ms² + bs + 6 = m(s − 2i)(s − (−2i)) = m(s − 2i)(s + 2i) = m(s² + 4),

hence b = 0 and 4m = 6. The latter implies m = 3/2 and the ODE is

(3/2)ÿ + 6y = f0 cos 2t.

To find f0 we can substitute into the ODE either the whole given solution, y(t) = 2t sin 2t + 5 cos(2t − δ), or the particular solution, which is yp(t) = 2t sin 2t because L2 = ±2i. We calculate ẏp(t) = 2 sin 2t + 4t cos 2t and ÿp(t) = 8 cos 2t − 8t sin 2t. We substitute all of this into the ODE to get

f0 cos 2t = (3/2)ÿp + 6yp = (3/2)(8 cos 2t − 8t sin 2t) + 12t sin 2t = 12 cos 2t,

so f0 = 12.

4.2.5.24. This problem asks us to "reverse engineer" a solution of an ODE to find the ODE it satisfies. The solution, y(t) = 2 sin 3t − cos 3t − (1/4)t sin 3t, could come from a superlist L = ±3i, ±3i. Some of the roots in L come from L1 and others from L2. Because the right hand side of the ODE is f0 cos ωt, L2 = ±iω. To be part of the superlist L, we must have ω = 3.

It follows that the list L1 is the rest of L, hence L1 = ± 3i. So, the corresponding homogeneous ODE has characteristic polynomial   0 = s2 + bs + k = s − (−i3) s − (−i3) = (s − i3)(s + i3) = s2 + 9, hence b = 0 and k = 9 and the ODE is y¨ + 9y = f0 cos 3t. To find f0 we can substitute into the ODE either the whole given solution, y(t) = 2 sin 3t−cos 3t− 14 t sin 3t, 1 1 or the particular solution, which is yp (t) = − t sin 3t because L2 = ± 3i. We calculate y˙ p (t) = − (sin 3t + 4 4 1 3t cos 3t) and y¨p (t) = − (6 cos 3t − 9t sin 3t). 4 We substitute all of this into the ODE to get 9 6 1 f0 cos 3t = y¨p + 9yp = − (6 cos 3t − 9t sin 3t) − t sin 3t = − cos 3t, 4 4 4 so f0 = − 32 . 4.2.5.25. The ODE is y¨ + by˙ + 9y = f0 cos ωt. (a) In order to have a solution of the form y(t) = (steady state solution)+(transient solution), in particular to have a transient solution, we must have b > 0. Other than that, there are no restrictions on the parameters. Here’s an Ex: b = 1, f0 = 1, and ω = 1. (b) In order to have a solution y(t) that exhibits pure resonance, there must be no damping, that is, b = 0, and matching frequencies ω = ω0 . Then, the ODE becomes y¨ + 9y = f0 cos ωt. The natural frequency is ω0 = 3, so ω = 3. Here’s an Ex: b = 0, f0 = 1, and ω = 3. (c) In order to have a solution y(t) that exhibits the beats phenomenon, we must have no damping, that is, b = 0, and different frequencies ω 6= ω0 . Then, the ODE becomes y¨ + 9y = f0 cos ωt. The natural frequency is ω0 = 3, so ω 6= 3. Here’s an Ex: b = 0, f0 = 1, and ω = 2. 4.2.5.26. Let RHS be the right hand side of the desired identity and let LHS be the left hand side. Substitute into the LHS the particular solution, yp = y sin(2πnν t), and use a trigonometric identity to get RHS =

(1/2) Cy ρU²D sin φ · cos(2πnν t) + (1/2) Cy ρU²D cos φ · sin(2πnν t) = (1/2) Cy ρU²D sin(2πnν t + φ)
= m ÿp + 4πm n0 δs ẏp + 4π²m n0² yp
= −m(2πnν)² ȳ sin(2πnν t) + 4πm n0 δs · 2πnν ȳ cos(2πnν t) + 4π²m n0² ȳ sin(2πnν t).

So,

(?)  (1/2) Cy ρU²D sin φ · cos(2πnν t) + (1/2) Cy ρU²D cos φ · sin(2πnν t)
   = 8π²m n0 nν δs ȳ cos(2πnν t) + (−4π²m nν² + 4π²m n0²) ȳ sin(2πnν t).

Equation (?) implies two results. The first comes from the cos(2πnν t) terms:

(1/2) Cy ρU²D sin φ = 8π²m n0 nν δs ȳ,

which implies

ȳ/D = Cy ρU² sin φ / (16π²m n0 nν δs) = (Cy/8π²) · sin φ · (ρD²/(2mδs)) · (U/(n0 D))² · (n0/nν).


The second comes from the sin(2πnν t) terms:

(1/2) Cy ρU²D cos φ = (−4π²m nν² + 4π²m n0²) ȳ,

which implies, after dividing through by 4π²m n0² ȳ, that

(nν/n0)² = 1 − (Cy/4π²) · cos φ · (ρD²/(2m)) · (U/(n0 D))² · (ȳ/D)⁻¹.

So,

n0/nν = [ 1 − (Cy/4π²) · cos φ · (ρD²/(2m)) · (U/(n0 D))² · (ȳ/D)⁻¹ ]^(−1/2).

4.2.5.27. Substitute ω* = √(ω0² − b²/(2m²)) into Gmax to get

Gmax = G(ω*) = 1/√((k − m(ω*)²)² + (bω*)²) = 1/√((k − m(ω0² − b²/(2m²)))² + b²(ω0² − b²/(2m²))).

But ω0² = k/m, so k − mω0² = 0, and thus

Gmax = 1/√((m · b²/(2m²))² + b²ω0² − b⁴/(2m²)) = 1/√(b⁴/(4m²) + kb²/m − b⁴/(2m²)) = 1/√(kb²/m − b⁴/(4m²)) = 1/√(b²(4km − b²)/(4m²)).

Because b > 0 and m > 0, √(b²/(4m²)) = b/(2m), so

Gmax = G(ω*) = (1/b) · 1/(√(4km − b²)/(2m)) = 1/(νb),

as we were asked to show.
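Though not part of the manual, the closed form just derived is easy to spot-check numerically. The parameter values below (m = 2, b = 1, k = 5) are arbitrary choices satisfying b² < 2mk, so that ω* is real:

```python
import math

def G(omega, m, b, k):
    # frequency response magnitude G(ω) = 1 / sqrt((k − mω²)² + (bω)²)
    return 1.0 / math.sqrt((k - m*omega**2)**2 + (b*omega)**2)

m, b, k = 2.0, 1.0, 5.0                      # arbitrary, with b² < 2mk
w0_sq = k / m                                # ω0² = k/m
w_star = math.sqrt(w0_sq - b**2/(2*m**2))    # ω* = sqrt(ω0² − b²/(2m²))
nu = math.sqrt(4*k*m - b**2) / (2*m)         # ν = sqrt(4km − b²)/(2m)

G_max = G(w_star, m, b, k)
assert abs(G_max - 1.0/(nu*b)) < 1e-12       # matches the closed form 1/(νb)
# ω* is indeed a maximizer: nearby frequencies give a smaller response
for w in (0.5*w_star, 0.9*w_star, 1.1*w_star, 2.0*w_star):
    assert G(w, m, b, k) < G_max
```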


4.2.5.28. (a) G = G(ω) = 1/√((k − mω²)² + (bω)²), so

dG/dω = −(1/2)[(k − mω²)² + (bω)²]^(−3/2) · d/dω[(k − mω²)² + (bω)²]
      = −(1/2)[(k − mω²)² + (bω)²]^(−3/2) · (−4mω(k − mω²) + 2b²ω)
      = −[(k − mω²)² + (bω)²]^(−3/2) · ω(b² − 2mk + 2m²ω²).

We are given that b² ≥ 2mk, hence b² − 2mk ≥ 0. It follows that for all ω > 0, b² − 2mk + 2m²ω² > 0. So,

dG/dω = −[(k − mω²)² + (bω)²]^(−3/2) · ω(b² − 2mk + 2m²ω²) = −(positive)(positive)(positive) < 0

for all ω > 0, as we were asked to show.
(b) lim_{ω→0+} G(ω) = lim_{ω→0+} 1/√((k − mω²)² + (bω)²) = 1/√((k − m·0)² + (b·0)²) = 1/√(k²) = 1/k, because k > 0.
(c) Because dG/dω < 0 for all ω > 0, it follows that for all ω > 0,

G(ω) < lim_{ω→0+} G(ω) = 1/k.

So, the maximum frequency response is 1/k, but it does not equal G(ω) for any ω > 0; that is, the maximum frequency response is not achieved for any ω > 0, in the case when b² ≥ 2mk.

4.2.5.29. This problem asks us to “reverse engineer" a solution of an ODE to find an ODE it satisfies. The solution, y(t) = 1.5 + 0.75 cos(t − π/3) − 0.5e^(−t/5), could come from a superlist L = 0, −1/5, ±i. Some of the roots in L come from L1 and others from L2.
(a) Here, we assume that the ODE has the form ÿ + ω0²y = δ + γe^(−αt). The right-hand side of the ODE, δ + γe^(−αt), where δ, γ, and α are constants, implies that L2 = 0, −1/5. It follows from L and L2 that L1 = ±i, hence ω0 = 1. So, the ODE is of the form

(?)  ÿ + y = δ + γe^(−t/5).

The solutions of this ODE come from the superlist and are y(t) = c1 cos t + c2 sin t + c3 + c4 e^(−t/5) ⇒ the form of the particular solution is yp(t) = A + Be^(−t/5), where A, B are constants.
To find γ and δ, we can substitute into the ODE (?) either the whole given solution, y(t) = 1.5 + 0.75 cos(t − π/3) − 0.5e^(−t/5), or the part of that solution that could be the particular solution, namely yp(t) = 1.5 − 0.5e^(−t/5). We calculate

δ + γe^(−t/5) = ÿp + yp = (0 − 0.02e^(−t/5)) + 1.5 − 0.5e^(−t/5) = 1.5 − 0.52e^(−t/5)

⇒ δ = 1.5 and γ = −0.52. The final conclusion is that the ODE

ÿ + y = 1.5 − 0.52e^(−t/5)


fits the desired form and has y(t) = 1.5 + 0.75 cos(t − π/3) − 0.5e^(−t/5) as one of its solutions.
(b) Again, the superlist is L = 0, −1/5, ±i. Here, we assume that the ODE has the form (D² + ω0²)(D + ε)[y] = η. The right-hand side of the ODE, η, where η is a constant, implies that L2 = 0. It follows from L and L2 that L1 = −ε, ±i, hence ω0 = 1 and ε = 1/5. So, the ODE is

(??)  (D² + 1)(D + 1/5)[y] = η.

The solutions of this ODE come from the superlist and are y(t) = c1 cos t + c2 sin t + c3 e^(−t/5) + c4 ⇒ the form of the particular solution is yp(t) = A, where A is a constant.
To find η, we can substitute into the ODE (??) either the whole given solution, y(t) = 1.5 + 0.75 cos(t − π/3) − 0.5e^(−t/5), or the part of that solution that could be the particular solution, namely yp(t) = 1.5. We calculate

η = (D² + 1)(D + 1/5)[yp] = 1.5/5 = 0.3

⇒ η = 0.3. The final conclusion is that the ODE

0.3 = (D² + 1)(D + 1/5)[y] = y⁽³⁾ + (1/5)ÿ + ẏ + (1/5)y

fits the desired form and has y(t) = 1.5 + 0.75 cos(t − π/3) − 0.5e^(−t/5) as one of its solutions.
(c) Here, we assume that the ODE is homogeneous. This implies that the corresponding homogeneous ODE has characteristic equation

0 = (s² + 1)(s + 1/5)s.

So, the ODE is

0 = (D² + 1)(D + 1/5)D[y] = y⁽⁴⁾ + (1/5)y⁽³⁾ + ÿ + (1/5)ẏ.

4.2.5.30. The graph in the textbook’s Figure 4.13 appears to have a steady-state solution of the form yS(t) = 2 + cos(βt − δ) = 2 + c1 cos(ω0 t) + c2 sin(ω0 t). The period of the steady-state solution appears to be approximately 3.0 ≈ 2π/β, hence

(?)  ω0 = β ≈ 2π/3.0 ≈ 2.094.

Keeping four significant digits is optimistic as to the accuracy of reading data from the graph, but we can round off even more later.
(a) An ODE of the form ÿ + ω0²y = δ + γe^(−t/2) has lists L1 = ±iω0 and L2 = 0, −1/2, so the superlist is L = ±iω0, 0, −1/2. This gives y(t) = c1 cos(ω0 t) + c2 sin(ω0 t) + c3 + c4 e^(−t/2) ⇒ yp(t) = A + Be^(−t/2), where A, B are constants. (Note that part of the steady-state solution is a solution of the corresponding homogeneous ODE.) The A term in the particular solution accounts for the 2 in the steady-state solution, so it follows that A = 2. Substitute yp(t) = 2 + Be^(−t/2) into the ODE:

δ + γe^(−t/2) = ÿp + ω0²yp = 0 + (1/4)Be^(−t/2) + ω0²(2 + Be^(−t/2)) = 2ω0² + ((1/4) + ω0²)Be^(−t/2).

It follows from (?) that δ = 2ω0² ≈ 8.772 and γ = ((1/4) + ω0²)B ≈ 4.636B.
So, the ODE is approximately ÿ + 4.386y = 8.772 + 4.636Be^(−t/2). This is enough to answer the question in part (a). But, in order to do part (c), we should find the approximate values of B, c1, and c2.
(b) An ODE of the form (D² + ω0²)(D + ε)[y] = η has lists L1 = −ε, ±iω0 and L2 = 0, so the superlist is L = −ε, ±iω0, 0. This gives y(t) = c1 cos(ω0 t) + c2 sin(ω0 t) + c3 e^(−εt) + c4 ⇒ yp(t) = A, where A is a constant. The A term in the particular solution accounts for the 2 in the steady-state solution, so it follows that A = 2. Substitute yp(t) = 2 into the ODE and use (?) to get

η = (D² + ω0²)(D + ε)[yp] = 2ω0²ε ≈ 8.772ε.

We also know that the general solution of the ODE is y(t) = yh(t) + yp(t) = c1 cos(ω0 t) + c2 sin(ω0 t) + c3 e^(−εt) + 2, so it follows that

ẏ(t) = −ω0 c1 sin(ω0 t) + ω0 c2 cos(ω0 t) − εc3 e^(−εt).

Besides the graph, we were also given the ICs

1 = y(0) = c1 + c3 + 2,  1 = ẏ(0) = ω0 c2 − εc3,

hence c1 + c3 = −1 and ω0 c2 − εc3 = 1. It follows that c1 = −c3 − 1 and c2 = ω0⁻¹(1 + εc3). The sinusoidal part of the steady-state solution has amplitude ≈ 1, so

(≈ 1)² = c1² + c2² = (−c3 − 1)² + ω0⁻²(1 + εc3)².

Using the approximate value of ω0 we got in (?) near the beginning of our work, ω0⁻² ≈ 0.2280, so

1 ≈ c3² + 2c3 + 1 + 0.2280(1 + 2εc3 + ε²c3²),

that is,

(1 + 0.2280ε²)c3² + (2 + 0.4560ε)c3 + 0.2280 ≈ 0.

The quadratic formula gives us

c3 ≈ [ −(2 + 0.4560ε) ± √((2 + 0.4560ε)² − 4(1 + 0.2280ε²)(0.2280)) ] / (2(1 + 0.2280ε²)).

For example, if we let ε = 1.5, then we get
(c) a homogeneous ODE that has y(t) as a solution would have the form (D² + ω0²)(D + ε)D[y] = 0. The only thing we can say further is that, again, we need ω0 = β ≈ 2π/3.0 ≈ 2.094, by the same reasoning we had in the work preceding part (a). So, the ODE should have the form (D² + 4.386)(D + ε)D[y] = 0, that is,

y⁽⁴⁾ + εy⁽³⁾ + 4.386ÿ + 4.386εẏ = 0,

for some constant parameter ε.
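As a quick check of the part (a) substitution step (this check is not in the manual), the identity ÿp + ω0²yp = 2ω0² + ((1/4) + ω0²)Be^(−t/2) for yp(t) = 2 + Be^(−t/2) can be tested numerically; ω0 is the value read off Figure 4.13, while B and the sample times are arbitrary test choices:

```python
import math

w0 = 2*math.pi/3.0       # ω0 ≈ 2.094, read from the period of the graph
B = -1.3                 # arbitrary test value; B is not determined in part (a)

for t in (0.0, 0.7, 2.5):
    e = math.exp(-t/2)
    yp = 2 + B*e
    ypdd = 0.25*B*e                          # d²/dt² (B e^{−t/2}) = (1/4) B e^{−t/2}
    lhs = ypdd + w0**2 * yp                  # ÿp + ω0² yp
    rhs = 2*w0**2 + (0.25 + w0**2)*B*e       # 2ω0² + (1/4 + ω0²) B e^{−t/2}
    assert abs(lhs - rhs) < 1e-12
```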


4.2.5.31. (a) The ODE ẏ + δy = f0 cos ωt has lists L1 = −δ and L2 = ±iω, so the superlist is L = −δ, ±iω. This gives y(t) = c1 cos(ωt) + c2 sin(ωt) + c3 e^(−δt) ⇒ yp(t) = A cos(ωt) + B sin(ωt), where A, B are constants to be determined. Substitute yp(t) into the ODE:

f0 cos ωt = ẏp + δyp = −Aω sin(ωt) + ωB cos(ωt) + δA cos(ωt) + δB sin(ωt) = (ωB + δA) cos(ωt) + (−ωA + δB) sin(ωt),

so 0 = −Aω + δB and f0 = ωB + δA. This can be written as a 2 × 2 system

  −ωA + δB = 0
   δA + ωB = f0,

whose solution is

  [A; B] = [[−ω, δ], [δ, ω]]⁻¹ [0; f0] = −(1/(ω² + δ²)) [[ω, −δ], [−δ, −ω]] [0; f0] = (1/(ω² + δ²)) [δf0; ωf0].

So, a particular solution of the IVP is given by

  yp(t) = f0(δ cos ωt + ω sin ωt)/(ω² + δ²).

δ > 0 implies that yh(t) = c3 e^(−δt) → 0 as t → ∞, that is, yh(t) is a transient solution, and the initial datum y0 doesn’t affect this fact! The steady-state solution is

  yS(t) = yp(t) = f0(δ cos ωt + ω sin ωt)/(ω² + δ²).

(b) The ODE ẏ + δy = f0 sin ωt has the same lists L1 = −δ and L2 = ±iω, so again yp(t) = A cos(ωt) + B sin(ωt), where A, B are constants to be determined. Substitute yp(t) into the ODE:

f0 sin ωt = ẏp + δyp = (ωB + δA) cos(ωt) + (−ωA + δB) sin(ωt),

so ωB + δA = 0 and −ωA + δB = f0. This can be written as a 2 × 2 system

  −ωA + δB = f0
   δA + ωB = 0,

whose solution is

  [A; B] = [[−ω, δ], [δ, ω]]⁻¹ [f0; 0] = −(1/(ω² + δ²)) [[ω, −δ], [−δ, −ω]] [f0; 0] = (1/(ω² + δ²)) [−ωf0; δf0].

So, a particular solution of the IVP is given by

  yp(t) = f0(−ω cos ωt + δ sin ωt)/(ω² + δ²).

δ > 0 implies that yh(t) = c3 e^(−δt) → 0 as t → ∞, that is, yh(t) is a transient solution, and the initial datum y0 doesn’t affect this fact! The steady-state solution is

  yS(t) = yp(t) = f0(−ω cos ωt + δ sin ωt)/(ω² + δ²).

(c) Using the results of parts (a) and (b), along with the non-homogeneous superposition principle in Theorem 4.2, we get that the steady-state solution of ẏ + δy = f0 · (a cos ωt + b sin ωt) is

  yS(t) = (f0/(ω² + δ²)) · ( a(δ cos ωt + ω sin ωt) + b(−ω cos ωt + δ sin ωt) ),

that is,

  yS(t) = (f0/(ω² + δ²)) · ( (aδ − bω) cos ωt + (aω + bδ) sin ωt ).
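Though not in the manual, the part (c) steady-state formula can be verified numerically by substituting it back into the ODE; all parameter values below are arbitrary test choices:

```python
import math

f0, delta, omega, a, b = 2.0, 0.8, 3.0, 1.5, -0.4   # arbitrary test parameters
c = f0 / (omega**2 + delta**2)

def yS(t):
    # yS(t) = f0/(ω²+δ²) · ((aδ − bω) cos ωt + (aω + bδ) sin ωt)
    return c*((a*delta - b*omega)*math.cos(omega*t) + (a*omega + b*delta)*math.sin(omega*t))

def yS_dot(t):
    # exact derivative of yS
    return c*(-omega*(a*delta - b*omega)*math.sin(omega*t) + omega*(a*omega + b*delta)*math.cos(omega*t))

for t in (0.0, 0.3, 1.7, 4.0):
    lhs = yS_dot(t) + delta*yS(t)
    rhs = f0*(a*math.cos(omega*t) + b*math.sin(omega*t))
    assert abs(lhs - rhs) < 1e-9      # yS satisfies ẏ + δy = f0(a cos ωt + b sin ωt)
```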


Section 4.3.2

4.3.2.1. The corresponding homogeneous ODE, y″ + 4y = 0, where ′ = d/dx, has characteristic polynomial s² + 4, so the homogeneous solution is

yh(x) = c1 cos 2x + c2 sin 2x.

Let’s try to find a solution of the non-homogeneous ODE in the form

y(x) = v1(x) cos 2x + v2(x) sin 2x,

where v1(x), v2(x) are functions to be determined later. We assume that v1′(x) cos 2x + v2′(x) sin 2x ≡ 0, so

y′(x) = (−2 sin 2x)v1(x) + (2 cos 2x)v2(x)

and thus

y″(x) = (−2 sin 2x)v1′(x) + (2 cos 2x)v2′(x) − (4 cos 2x)v1(x) − (4 sin 2x)v2(x).

Substitute all of that into the original, non-homogeneous ODE y″ + 4y = 1/sin 2x, which is in standard form. We get

(−2 sin 2x)v1′(x) + (2 cos 2x)v2′(x) = 1/sin 2x.

So, v1′(x), v2′(x) should satisfy the system of linear equations

(cos 2x)v1′(x) + (sin 2x)v2′(x) = 0
(−2 sin 2x)v1′(x) + (2 cos 2x)v2′(x) = 1/sin 2x.

Using the inverse of a 2 × 2 matrix, we get

[v1′; v2′] = [[cos 2x, sin 2x], [−2 sin 2x, 2 cos 2x]]⁻¹ [0; 1/sin 2x] = (1/2)[[2 cos 2x, −sin 2x], [2 sin 2x, cos 2x]] [0; 1/sin 2x] = [−1/2; cos 2x/(2 sin 2x)].

We obtain

v1(x) = ∫ v1′(x) dx = ∫ −(1/2) dx = −(1/2)x + c1,

where c1 = arbitrary constant. Substituting w = sin 2x, we also get

v2(x) = (1/2) ∫ (cos 2x/sin 2x) dx = (1/4) ∫ dw/w = (1/4) ln |sin 2x| + c2,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is

y(x) = −(1/2)x cos(2x) + (1/4) sin(2x) ln |sin 2x| + c1 cos 2x + c2 sin 2x,

where c1, c2 = arbitrary constants.
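A numerical spot-check (not part of the manual) confirms that the particular solution found by variation of parameters satisfies y″ + 4y = 1/sin 2x on 0 < x < π/2; the step size h and the sample points are arbitrary choices:

```python
import math

def yp(x):
    # particular solution yp(x) = −(1/2)x cos 2x + (1/4) sin(2x) ln(sin 2x)
    return -0.5*x*math.cos(2*x) + 0.25*math.sin(2*x)*math.log(math.sin(2*x))

h = 1e-5
for x in (0.3, 0.7, 1.2):                    # points with sin 2x > 0
    ypp = (yp(x - h) - 2*yp(x) + yp(x + h)) / h**2   # central-difference y″
    assert abs(ypp + 4*yp(x) - 1/math.sin(2*x)) < 1e-4
```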


4.3.2.2. The corresponding homogeneous ODE, ÿ + y = 0, has characteristic polynomial s² + 1, so the homogeneous solution is

yh(t) = c1 cos t + c2 sin t.

Let’s try to find a solution of the non-homogeneous ODE in the form y(t) = v1(t) cos t + v2(t) sin t, where v1(t), v2(t) are functions to be determined later. We assume that v̇1(t) cos t + v̇2(t) sin t ≡ 0, so

ẏ(t) = (−sin t)v1(t) + (cos t)v2(t)

and thus

ÿ(t) = (−sin t)v̇1(t) + (cos t)v̇2(t) − (cos t)v1(t) − (sin t)v2(t).

Substitute all of that into the original, non-homogeneous ODE ÿ + y = sec(t) csc(t), which is in standard form. We get

(−sin t)v̇1(t) + (cos t)v̇2(t) = sec(t) csc(t).

So, v̇1(t), v̇2(t) should satisfy the system of linear equations

(cos t)v̇1(t) + (sin t)v̇2(t) = 0
(−sin t)v̇1(t) + (cos t)v̇2(t) = sec(t) csc(t).

Using the inverse of a 2 × 2 matrix, we get

[v̇1; v̇2] = [[cos t, sin t], [−sin t, cos t]]⁻¹ [0; sec t csc t] = [[cos t, −sin t], [sin t, cos t]] [0; sec t csc t] = [−sec t; csc t].

We obtain

v1(t) = ∫ v̇1(t) dt = −∫ sec(t) dt = −ln |tan t + sec t| + c1,

where c1 = arbitrary constant. We also get

v2(t) = ∫ v̇2(t) dt = ∫ csc(t) dt = −ln |cot t + csc t| + c2,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is

y(t) = −cos(t) ln |tan t + sec t| − sin(t) ln |cot t + csc t| + c1 cos t + c2 sin t,

where c1, c2 = arbitrary constants.

4.3.2.3. The corresponding homogeneous ODE, ÿ + y = 0, has characteristic polynomial s² + 1, so the homogeneous solution is

yh(t) = c1 cos t + c2 sin t.

Let’s try to find a solution of the non-homogeneous ODE in the form y(t) = v1(t) cos t + v2(t) sin t, where v1(t), v2(t) are functions to be determined later.



We assume that v̇1(t) cos t + v̇2(t) sin t ≡ 0, so

ẏ(t) = (−sin t)v1(t) + (cos t)v2(t)

and thus

ÿ(t) = (−sin t)v̇1(t) + (cos t)v̇2(t) − (cos t)v1(t) − (sin t)v2(t).

Substitute all of that into the original, non-homogeneous ODE ÿ + y = sec²(t), which is in standard form. We get

(−sin t)v̇1(t) + (cos t)v̇2(t) = sec²(t).

So, v̇1(t), v̇2(t) should satisfy the system of linear equations

(cos t)v̇1(t) + (sin t)v̇2(t) = 0
(−sin t)v̇1(t) + (cos t)v̇2(t) = sec²(t).

Using the inverse of a 2 × 2 matrix, we get

[v̇1; v̇2] = [[cos t, sin t], [−sin t, cos t]]⁻¹ [0; sec²(t)] = [[cos t, −sin t], [sin t, cos t]] [0; sec²(t)] = [−sin t/cos²t; sec t].

We obtain, using the substitution w = cos t,

v1(t) = ∫ v̇1(t) dt = ∫ (−sin t/cos²t) dt = ∫ dw/w² = −1/w + c1 = −1/cos t + c1,

where c1 = arbitrary constant. We also get

v2(t) = ∫ v̇2(t) dt = ∫ sec t dt = ln |tan t + sec t| + c2,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is

y(t) = −cos(t) · (1/cos t) + sin(t) ln |tan t + sec t| + c1 cos t + c2 sin t,

hence

y(t) = −1 + sin(t) ln |tan t + sec t| + c1 cos t + c2 sin t,

where c1, c2 = arbitrary constants.

4.3.2.4. The corresponding homogeneous ODE, ÿ + 4ẏ + 4y = 0, has characteristic polynomial s² + 4s + 4 = (s + 2)², so the homogeneous solution is

yh(t) = c1 e^(−2t) + c2 t e^(−2t).

Let’s try to find a solution of the non-homogeneous ODE in the form y(t) = v1(t)e^(−2t) + v2(t) t e^(−2t), where v1(t), v2(t) are functions to be determined later. We assume that v̇1(t)e^(−2t) + v̇2(t) t e^(−2t) ≡ 0, so

ẏ(t) = (−2e^(−2t))v1(t) + (1 − 2t)e^(−2t)v2(t)



and thus

ÿ(t) = (−2e^(−2t))v̇1(t) + (1 − 2t)e^(−2t)v̇2(t) + 4e^(−2t)v1(t) + (−4 + 4t)e^(−2t)v2(t).

Substitute all of that into the original, non-homogeneous ODE ÿ + 4ẏ + 4y = e^(−2t)/(t − 1), which is in standard form. We get

(−2e^(−2t))v̇1(t) + (1 − 2t)e^(−2t)v̇2(t) = e^(−2t)/(t − 1).

So, v̇1(t), v̇2(t) should satisfy the system of linear equations

e^(−2t)v̇1(t) + te^(−2t)v̇2(t) = 0
−2e^(−2t)v̇1(t) + (1 − 2t)e^(−2t)v̇2(t) = e^(−2t)/(t − 1).

Dividing each equation by e^(−2t) gives an equivalent system,

v̇1(t) + t v̇2(t) = 0
−2v̇1(t) + (1 − 2t)v̇2(t) = 1/(t − 1).

Using the inverse of a 2 × 2 matrix, we get

[v̇1; v̇2] = [[1, t], [−2, 1 − 2t]]⁻¹ [0; 1/(t − 1)] = [[1 − 2t, −t], [2, 1]] [0; 1/(t − 1)] = [−t/(t − 1); 1/(t − 1)].

We obtain

v1(t) = ∫ v̇1(t) dt = −∫ t/(t − 1) dt = −∫ (1 + 1/(t − 1)) dt = −t − ln |t − 1| + c1,

where c1 = arbitrary constant. We also get

v2(t) = ∫ v̇2(t) dt = ∫ 1/(t − 1) dt = ln |t − 1| + c2,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is

y(t) = (−t − ln |t − 1| + c1)e^(−2t) + (ln |t − 1| + c2)te^(−2t),

hence

y(t) = (−1 + t)e^(−2t) ln |t − 1| + c̃1 e^(−2t) + c̃2 t e^(−2t),

where c̃1, c̃2 = arbitrary constants. [Note that the −t e^(−2t) term that came from v1(t) was added to c2 to get the arbitrary constant c̃2.]

4.3.2.5. The corresponding homogeneous ODE is the Cauchy-Euler ODE x²y″ − 2xy′ + 2y = 0, where ′ = d/dx. Substituting y(x) = x^m into that ODE gives characteristic equation 0 = m(m − 1) − 2m + 2 = m² − 3m + 2 = (m − 1)(m − 2). The homogeneous solution is

yh(x) = c1 x + c2 x².

Let’s try to find a solution of the non-homogeneous ODE in the form y(x) = v1(x)·x + v2(x)·x²,



where v1(x), v2(x) are functions to be determined later. We assume that v1′(x)·x + v2′(x)·x² ≡ 0, so

y′(x) = 1·v1(x) + 2x·v2(x)

and thus

y″(x) = v1′(x) + 2x·v2′(x) + 2v2(x).

Put the non-homogeneous ODE into standard form by dividing through by x² to get y″ − 2x⁻¹y′ + 2x⁻²y = x. Substitute into that ODE to get

1·v1′(x) + 2x·v2′(x) = x.

So, v1′(x), v2′(x) should satisfy the system of linear equations

x v1′(x) + x² v2′(x) = 0
v1′(x) + 2x v2′(x) = x.

Using the inverse of a 2 × 2 matrix, we get

[v1′; v2′] = [[x, x²], [1, 2x]]⁻¹ [0; x] = (1/x²)[[2x, −x²], [−1, x]] [0; x] = [−x; 1].

We obtain

v1(x) = ∫ v1′(x) dx = ∫ −x dx = −(1/2)x² + c1

and

v2(x) = ∫ v2′(x) dx = ∫ 1 dx = x + c2,

where c1, c2 are arbitrary constants. Putting everything together, the general solution of the ODE is

y(x) = (−(1/2)x² + c1)x + (x + c2)x²,

hence

y(x) = (1/2)x³ + c1x + c2x²,

where c1, c2 = arbitrary constants.

4.3.2.6. The corresponding homogeneous ODE is the Cauchy-Euler ODE x²y″ − 2xy′ + 2y = 0, where ′ = d/dx. Substituting y(x) = x^m into that ODE gives characteristic equation 0 = m(m − 1) − 2m + 2 = m² − 3m + 2 = (m − 1)(m − 2). The homogeneous solution is

yh(x) = c1 x + c2 x².

Let’s try to find a solution of the non-homogeneous ODE in the form y(x) = v1(x)·x + v2(x)·x², where v1(x), v2(x) are functions to be determined later. We assume that v1′(x)·x + v2′(x)·x² ≡ 0, so

y′(x) = 1·v1(x) + 2x·v2(x)

yh (x) = c1 x1 + c2 x2 . Let’s try to find a solution of the non-homogeneous ODE in the form y(x) = v1 (x) · x + v2 (x) · x2 , where v1 (x), v2 (x) are functions to be determined later. We assume that v10 (x) · x + v20 (x) · x2 ≡ 0, so y 0 (x) = 1 · v1 (x) + 2x · v2 (x) ©Larry

Turyn, October 13, 2013

p. 41

and thus y 00 (x) = v10 (x) + 2x · v20 (x) + 2v2 (x). Put the non-homogeneous ODE into standard form by dividing through by x2 to get y 00 −2x−1 y 0 +2x−2 y = xe . Substitute into that ODE to get −3x

1 · v10 (x) + 2x · v20 (x) = xe−3x . So, v10 (x), v20 (x) should satisfy the system of linear equations   x v10 (x) + x2 v20 (x) = 0 

v10 (x) + 2x v20 (x)

Using the inverse of a 2 × 2 matrix, we get   0   −1   2x 0 v1 x x2  =   = 1  x2 −1 x e−3x v20 1 2x

 

= x e−3x

−x2

.



 

x

0 x e−3x





=

−x e−3x e−3x

 .

We obtain, using integration by parts,    ˆ ˆ ˆ 1 −3x  1 1 1 v1 (x) = v10 (x)dx = x (−e−3x )dx = x · e − 1 · e−3x dx = x e−3x + e−3x + c1 , 3 3 3 9 where c1 =arbitrary constant. We also get ˆ ˆ 1 v2 (x) = v20 (x)dx = e−3x dx = − e−3x + c2 , 3 where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is      1 −3x 1  1 −3x  −3x 2 y(x) = xe + e + c e + c x + −   1 2 x ,  3 9 3  hence y(x) =

1 −3x xe + c1 x + c2 x2 , 9

where c1 , c2 =arbitrary constants. d 4.3.2.7. The corresponding homogeneous ODE is the Cauchy-Euler ODE x2 y 00 − 5xy 0 + 8y = 0, where 0 = dx . m 2 Substituting y(x) = x into that ODE gives characteristic equation 0 = m(m − 1) − 5m + 8 = m − 6m + 8 = (m − 2)(m − 4). The homogeneous solution is

yh (x) = c1 x2 + c2 x4 . Let’s try to find a solution of the non-homogeneous ODE in the form y(x) = v1 (x) · x2 + v2 (x) · x4 , where v1 (x), v2 (x) are functions to be determined later. We assume that v10 (x) · x + v20 (x) · x4 ≡ 0, so y 0 (x) = 2x · v1 (x) + 4x3 · v2 (x) and thus y 00 (x) = 2x v10 (x) + 4x3 v20 (x) + 2v1 (x) + 12x2 v2 (x). ©Larry



Put the non-homogeneous ODE into standard form by dividing through by x² to get y″ − 5x⁻¹y′ + 8x⁻²y = xe^(−x). Substitute into that ODE to get

2x·v1′(x) + 4x³·v2′(x) = xe^(−x).

So, v1′(x), v2′(x) should satisfy the system of linear equations

x² v1′(x) + x⁴ v2′(x) = 0
2x v1′(x) + 4x³ v2′(x) = xe^(−x).

Using the inverse of a 2 × 2 matrix, we get

[v1′; v2′] = [[x², x⁴], [2x, 4x³]]⁻¹ [0; xe^(−x)] = (1/(2x⁵))[[4x³, −x⁴], [−2x, x²]] [0; xe^(−x)] = [−(1/2)e^(−x); (1/2)x⁻²e^(−x)].

We obtain

v1(x) = ∫ v1′(x) dx = ∫ −(1/2)e^(−x) dx = (1/2)e^(−x) + c1.

We also get, because we cannot find the indefinite integral in terms of elementary functions, that

v2(x) = c2 + (1/2) ∫_1^x s⁻²e^(−s) ds,

where c1, c2 are arbitrary constants. Putting everything together, the general solution of the ODE is

y(x) = ((1/2)e^(−x) + c1)x² + (c2 + (1/2) ∫_1^x s⁻²e^(−s) ds)x⁴,

hence

y(x) = c1x² + c2x⁴ + (1/2)x²e^(−x) + (1/2)x⁴ ∫_1^x s⁻²e^(−s) ds,

where c1, c2 = arbitrary constants.

4.3.2.8. The corresponding homogeneous ODE is the Cauchy-Euler ODE r²y″ − 4ry′ + 6y = 0, where ′ = d/dr. Substituting y(r) = r^m into that ODE gives characteristic equation 0 = m(m − 1) − 4m + 6 = m² − 5m + 6 = (m − 2)(m − 3). The homogeneous solution is

yh(r) = c1 r² + c2 r³.

Let’s try to find a solution of the non-homogeneous ODE in the form y(r) = v1(r)·r² + v2(r)·r³, where v1(r), v2(r) are functions to be determined later. We assume that v1′(r)·r² + v2′(r)·r³ ≡ 0, so

y′(r) = 2r·v1(r) + 3r²·v2(r)

and thus

y″(r) = 2r·v1′(r) + 3r²·v2′(r) + 2v1(r) + 6r·v2(r).

Put the non-homogeneous ODE into standard form by dividing through by r² to get y″ − 4r⁻¹y′ + 6r⁻²y = r² cos r. Substitute into that ODE to get

2r·v1′(r) + 3r²·v2′(r) = r² cos r.



So, v10 (r), v20 (r) should satisfy the system of linear equations   r2 v10 (r) + r3 v20 (r) = 0 

2r v10 (r) + 3r2 v20 (r)

 

= r2 cos r

Using the inverse of a 2 × 2 matrix, we get   0   2 −1   3r2 v1 r r3 0 1   =   = r4 −2r v20 2r 3r2 r2 cos r

−r3 r

2

.



 



0 2



−r cos r

=

r cos r

 .

cos r

We obtain, using integration by parts twice, that ˆ ˆ ˆ v1 (r) = v10 (r)dr = −r cos r dr = −r sin r + 1 · sin r dr = −r sin r − cos r + c1 , where c1 =arbitrary constant, and

ˆ

v2 (r) =

ˆ v20 (r)dr

=

cos r dr = sin r + c2 ,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is y(r) = (−r sin r − cos r + c1 )r2 + (sin r + c2 )r3 , hence y(r) = c1 r2 + c2 r3 − r2 cos r, where c1 , c2 =arbitrary constants. 4.3.2.9. The corresponding homogeneous ODE is the Cauchy-Euler ODE r2 y 00 − 6ry 0 + 12y = 0, where d 0 = dr . Substituting y(r) = rm into that ODE gives characteristic equation 0 = m(m − 1) − 6m + 8 = 2 m − 7m + 12 = (m − 3)(m − 4). The homogeneous solution is yh (r) = c1 r3 + c2 r4 . Let’s try to find a solution of the non-homogeneous ODE in the form y(r) = v1 (r) · r3 + v2 (r) · r4 , where v1 (r), v2 (r) are functions to be determined later. We assume that v10 (r) · r3 + v20 (r) · r3 ≡ 0, so y 0 (r) = 3r2 · v1 (r) + 4r4 · v2 (r) and thus y 00 (r) = 3r2 v10 (r) + 4r3 · v20 (r) + 6rv1 (r) + 12r2 v2 (r). Put the non-homogeneous ODE into standard form by dividing through by r2 to get y 00 − 6r−1 y 0 + 12r y = r3 sin 2r. Substitute into that ODE to get −2

3r2 · v10 (r) + 4r3 · v20 (r) = r3 sin 2r. So, v10 (r), v20 (r) should satisfy the system of linear equations   r3 v10 (r) + r4 v20 (r) = 0 

3r2 v10 (r) + 4r3 v20 (r)

= r3 sin 2r

 

.




Using the inverse of a 2 × 2 matrix, we get  

v10 v20





=

r3

r4

3r2

4r3

−1  





0 r3 sin 2r

 1 =  r6

4r3

−r4

−3r2

r3

 



0 r3 sin 2r



−r sin 2r

=

 .

sin 2r

We obtain, using integration by parts, that   ˆ ˆ  ˆ 1   1 1 1 cos 2r − 1 · cos 2r dr = r cos 2r − sin 2r + c1 , v1 (r) = v10 (r)dr = r(− sin 2r)dr = r 2 2 2 4 where c1 is an arbitrary constant. We also get ˆ 1 v2 (r) = sin 2r dr = − cos 2r + c2 , 2 where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is      1  1  1  rcos 2r − sin 2r + c1 r3 + − cos 2r + c2 r4 , y(r) = 2 4 2  hence

1 y(r) = c1 r3 + c2 r4 − r3 sin 2r, 4

where c1 , c2 =arbitrary constants. d 4.3.2.10. The corresponding homogeneous ODE is the Cauchy-Euler ODE r2 y 00 +4ry 0 +2y = 0, where 0 = dr . m 2 Substituting y(r) = r into that ODE gives characteristic equation 0 = m(m − 1) + 4m + 2 = m + 3m + 2 = (m + 2)(m + 1). The homogeneous solution is

yh (r) = c1 r−2 + c2 r−1 . Let’s try to find a solution of the non-homogeneous ODE in the form y(r) = v1 (r) · r−2 + v2 (r) · r−1 , where v1 (r), v2 (r) are functions to be determined later. We assume that v10 (r) · r−2 + v20 (r) · r−1 ≡ 0, so y 0 (r) = −2r−3 · v1 (r) − r−2 · v2 (r) and thus y 00 (r) = −2r−3 v10 (r) − r−2 v20 (r) + 6r−4 v1 (r) + 2r−3 v2 (r). r

Put the non-homogeneous ODE into standard form by dividing through by r2 to get y 00 +4r−1 y 0 +2r−2 y = e . Substitute into that ODE to get

−2 −r

−2r−3 · v10 (r) − r−1 · v20 (r) = r−2 e−r . So, v10 (r), v20 (r) should satisfy the system of linear equations  r−2 v10 (r) + r−1 v20 (r) = 0  

−2r−3 v10 (r) − r−2 v20 (r)

= r−2 e−r

 

.






Using the inverse of a 2 × 2 matrix, we get  0   −1    0 v1 r−2 r−1 −r−2 1  =   =  r−4 v20 −2r−3 −r−2 r−2 e−r 2r−3

−r−1 r

−2





0

 r

−2 −r



−r e−r

=

e

e

−r

 .

We obtain, using integration by parts, that   ˆ ˆ ˆ  v1 (r) = v10 (r)dr = r (−e−r ) dr = r e−r − 1 · e−r dr = r e−r + e−r + c1 , where c1 is an arbitrary constant. We also get v2 (r) =

ˆ e−r dr = −e−r + c2 ,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is     + e−r + c  r−2 + −e −r −r + c2 r−1 , y(r) =  r e 1  hence y(r) = c1 r−2 + c2 r−1 + r−2 e−r , where c1 , c2 =arbitrary constants. d 4.3.2.11. The corresponding homogeneous ODE is the Cauchy-Euler ODE r2 y 00 −4ry 0 +6y = 0, where 0 = dr . m 2 Substituting y(r) = r into that ODE gives characteristic equation 0 = m(m − 1) − 4m + 6 = m − 5m + 6 = (m − 2)(m − 3). The homogeneous solution is

yh (r) = c1 r2 + c2 r3 . Let’s try to find a solution of the non-homogeneous ODE in the form y(r) = v1 (r) · r2 + v2 (r) · r3 , where v1 (r), v2 (r) are functions to be determined later. We assume that v10 (r) · r2 + v20 (r) · r3 ≡ 0, so y 0 (r) = 2r · v1 (r) + 3r2 · v2 (r) and thus y 00 (r) = 2r · v10 (r) + 3r2 · v20 (r) + 2 v1 (r) + 6r v2 (r). Put the non-homogeneous ODE into standard form by dividing through by r2 to get y 00 −4r−1 y 0 +6r−2 y = 1. Substitute into that ODE to get 2r · v10 (r) + 3r2 · v20 (r) = 1. So, v10 (r), v20 (r) should satisfy the system of linear equations    r2 v10 (r) + r3 v20 (r) = 0  

2r v10 (r) + 3r2 v20 (r)

=1

Using the inverse of a 2 × 2 matrix, we get  0   2 −1    v1 r r3 0 3r2 1  =   =  r4 −2r v20 2r 3r2 1

.



−r3 r

2



0





=

 1


− 1r 1 r2

 .



ˆ

We obtain that v1 (r) =

ˆ v10 (r)dr =

where c1 =arbitrary constant. We also get

ˆ v2 (r) =

1 − dr = − ln |r| + c1 , r

1 1 dr = − + c2 , r2 r

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is   1 2 y(r) = (− ln |r| + c1 ) r + − + c2 r3 , r hence y(r) = e c1 r2 + e c2 r3 − r2 ln |r|, where e c1 , e c2 =arbitrary constants. 4.3.2.12. The corresponding homogeneous ODE, y¨ + 4y˙ + 5y = 0 has characteristic polynomial s2 + 4s + 5 = (s + 2)2 + 1, so the homogeneous solution is yh (t) = c1 e−2t cos t + c2 e−2t sin t. Let’s try to find a solution of the non-homogeneous ODE in the form y(t) = v1 (t)e−2t cos t + v2 (t)e−2t sin t, where v1 (t), v2 (t) are functions to be determined later. We assume that v˙ 1 (t)e−2t cos t + v˙ 2 (t)e−2t sin t ≡ 0, so y(t) ˙ = e−2t (−2 cos t − sin t)v1 (t) + e−2t (−2 sin t + cos t)v2 (t) and thus y¨(t) = e−2t (−2 cos t−sin t)v˙ 1 (t)+e−2t (−2 sin t+cos t)v˙ 2 (t)+e−2t (4 sin t+3 cos t)v1 (t)+e−2t (−4 cos t+3 sin t)v2 (t). Substitute all of that into the original, non-homogeneous ODE y¨ + y = e−2t sec t, which is in standard form. We get e−2t (−2 cos t − sin t)v˙ 1 (t) + e−2t (−2 sin t + cos t)v˙ 2 (t) = e−2t sec t. So, v˙ 1 (t), v˙ 2 (t) should satisfy the system of linear equations  (e−2t cos t)v˙ 1 (t) + (e−2t sin t)v˙ 2 (t)  

e−2t (−2 cos t − sin t)v˙ 1 (t) + e−2t (−2 sin t + cos t)v˙ 2 (t)

Dividing each equation by e−2t gives an equivalent system,  (cos t)v˙ 1 (t) + (sin t)v˙ 2 (t)  

(−2 cos t − sin t)v˙ 1 (t) + (−2 sin t + cos t)v˙ 2 (t)

=0

 

= e−2t sec t



=0

 

= sec t



Using the inverse of a 2 × 2 matrix, we get    −1   v˙ 1 cos t sin t 0 −2 sin t + cos t  =   = v˙ 2 −2 cos t − sin t −2 sin t + cos t sec t 2 cos t + sin t

.

.

− sin t



0




sin t − cos t

sec t

 .

= 

 cos t



1



We obtain, using the substitution w = cos t, ˆ ˆ ˆ − sin t 1 v1 (t) = v˙ 1 (t)dt = dt = dw = ln |w| + c1 = ln | cos t| + c1 cos t w ˆ

and v2 (t) =

ˆ v˙ 2 (t)dt =

1 dt = t + c2 ,

where c1 , c2 are arbitrary constants. Putting everything together, the general solution of the ODE is y(t) = (ln | cos t| + c1 )e−2t cos t + (t + c2 )e−2t sin t hence y(t) = e−2t (cos t) ln | cos t| + t e−2t sin t + c1 e−2t cos t + c2 e−2t sin t, where c1 , c2 =arbitrary constants. 4.3.2.13. The corresponding homogeneous ODE, y¨ + y˙ = 0 has characteristic polynomial s2 + s = s(s + 1), so the homogeneous solution is yh (t) = c1 + c2 e−t . Let’s try to find a solution of the non-homogeneous ODE in the form y(t) = v1 (t) + v2 (t)e−t , where v1 (t), v2 (t) are functions to be determined later. We assume that (?) v˙ 1 (t) + v˙ 2 (t)e−t ≡ 0, so y(t) ˙ = −e−t v2 (t) and thus y¨(t) = −e−t v˙ 2 (t) + e−t v2 (t). Substitute all of that into the original, non-homogeneous ODE y¨ + y˙ = e−t , which is in standard form. We get −e−t v˙ 2 (t) = e−t . So, v˙ 2 (t) ≡ −1, whose solution is v2 (t) = −t + c2 , where c2 =arbitrary constant. From (?) we get v˙ 1 (t) = −v˙ 2 (t)e−t = e−t , whose solution is v2 (t) = −e−t + c1 . Putting everything together, the general solution of the ODE is y(t) = (−e−t + c1 ) + (−t + c2 )e−t hence y(t) = −t e−t + e c1 + e c2 e−t , where e c1 , e c2 =arbitrary constants. [Note that the e−t term that came from v1 (t) was added to c2 to get the arbitrary constant e c2 .] 4.3.2.14. The corresponding homogeneous ODE, y¨ +8y˙ +16y = 0 has characteristic polynomial s2 +8s+16 = (s + 4)2 , so the homogeneous solution is yh (t) = c1 e−4t + c2 t e−4t . Let’s try to find a solution of the non-homogeneous ODE in the form y(t) = e−4t v1 (t) + t e−4t v2 (t), ©Larry



where v1(t), v2(t) are functions to be determined later. We assume that e^{−4t} v1'(t) + t e^{−4t} v2'(t) ≡ 0, so y'(t) = (−4e^{−4t}) v1(t) + (1 − 4t) e^{−4t} v2(t) and thus

y''(t) = (−4e^{−4t}) v1'(t) + (1 − 4t) e^{−4t} v2'(t) + 16 e^{−4t} v1(t) + (−8 + 16t) e^{−4t} v2(t).

Substitute all of that into the original, non-homogeneous ODE y'' + 8y' + 16y = e^{−4t}, which is in standard form. We get

(−4e^{−4t}) v1'(t) + (1 − 4t) e^{−4t} v2'(t) = e^{−4t}.

So, v1'(t), v2'(t) should satisfy the system of linear equations

{ e^{−4t} v1'(t) + t e^{−4t} v2'(t) = 0,
  −4e^{−4t} v1'(t) + (1 − 4t) e^{−4t} v2'(t) = e^{−4t}. }

Dividing each equation by e^{−4t} gives an equivalent system,

{ v1'(t) + t v2'(t) = 0,
  −4 v1'(t) + (1 − 4t) v2'(t) = 1. }

Using the inverse of a 2 × 2 matrix, we get

[ v1' ; v2' ] = [ 1  t ; −4  1−4t ]^{−1} [ 0 ; 1 ] = [ 1−4t  −t ; 4  1 ] [ 0 ; 1 ] = [ −t ; 1 ].

We obtain

v1(t) = ∫ v1'(t) dt = −∫ t dt = −(1/2) t^2 + c1

and

v2(t) = ∫ v2'(t) dt = ∫ 1 dt = t + c2,

where c1, c2 are arbitrary constants. Putting everything together, the general solution of the ODE is

y(t) = (−(1/2) t^2 + c1) e^{−4t} + (t + c2) t e^{−4t},

hence

y(t) = (1/2) t^2 e^{−4t} + c1 e^{−4t} + c2 t e^{−4t},
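As a quick sanity check (a hypothetical helper script, not part of the manual's solution), the particular solution found above can be verified numerically with central finite differences:

```python
import math

# Numerically verify that y(t) = (1/2) t^2 e^{-4t} satisfies
# y'' + 8 y' + 16 y = e^{-4t}, using central-difference derivatives.
def y(t):
    return 0.5 * t * t * math.exp(-4.0 * t)

h = 1e-5
for t in (0.3, 1.0, 2.0):
    y1 = (y(t + h) - y(t - h)) / (2.0 * h)          # approximates y'(t)
    y2 = (y(t + h) - 2.0 * y(t) + y(t - h)) / h**2  # approximates y''(t)
    residual = y2 + 8.0 * y1 + 16.0 * y(t) - math.exp(-4.0 * t)
    assert abs(residual) < 1e-4, (t, residual)
print("particular solution of 4.3.2.14 verified")
```

The c1 and c2 terms need not be checked: they are homogeneous solutions, so they contribute zero to the left-hand side.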
where c1, c2 are arbitrary constants.

4.3.2.15. The corresponding homogeneous ODE, y'' + 4y' + 4y = 0, has characteristic polynomial s^2 + 4s + 4 = (s + 2)^2, so the homogeneous solution is y_h(t) = c1 e^{−2t} + c2 t e^{−2t}. Let's try to find a solution of the non-homogeneous ODE in the form y(t) = v1(t) e^{−2t} + v2(t) t e^{−2t}, where v1(t), v2(t) are functions to be determined later.


We assume that v1'(t) e^{−2t} + v2'(t) t e^{−2t} ≡ 0, so y'(t) = (−2e^{−2t}) v1(t) + (1 − 2t) e^{−2t} v2(t) and thus

y''(t) = (−2e^{−2t}) v1'(t) + (1 − 2t) e^{−2t} v2'(t) + 4e^{−2t} v1(t) + (−4 + 4t) e^{−2t} v2(t).

Substitute all of that into the original, non-homogeneous ODE y'' + 4y' + 4y = √t e^{−2t}, which is in standard form. We get

(−2e^{−2t}) v1'(t) + (1 − 2t) e^{−2t} v2'(t) = √t e^{−2t}.

So, v1'(t), v2'(t) should satisfy the system of linear equations

{ e^{−2t} v1'(t) + t e^{−2t} v2'(t) = 0,
  −2e^{−2t} v1'(t) + (1 − 2t) e^{−2t} v2'(t) = √t e^{−2t}. }

Dividing each equation by e^{−2t} gives an equivalent system,

{ v1'(t) + t v2'(t) = 0,
  −2 v1'(t) + (1 − 2t) v2'(t) = √t. }

Using the inverse of a 2 × 2 matrix, we get

[ v1' ; v2' ] = [ 1  t ; −2  1−2t ]^{−1} [ 0 ; √t ] = [ 1−2t  −t ; 2  1 ] [ 0 ; √t ] = [ −t^{3/2} ; t^{1/2} ].

We obtain

v1(t) = ∫ v1'(t) dt = ∫ −t^{3/2} dt = −(2/5) t^{5/2} + c1,

where c1 is an arbitrary constant. We also get

v2(t) = ∫ v2'(t) dt = ∫ t^{1/2} dt = (2/3) t^{3/2} + c2,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is

y(t) = (−(2/5) t^{5/2} + c1) e^{−2t} + ((2/3) t^{3/2} + c2) t e^{−2t},

hence

y(t) = (4/15) t^{5/2} e^{−2t} + c1 e^{−2t} + c2 t e^{−2t},

where c1, c2 are arbitrary constants. It follows that

y'(t) = −(8/15) t^{5/2} e^{−2t} + (2/3) t^{3/2} e^{−2t} − 2 c1 e^{−2t} + c2 (1 − 2t) e^{−2t}.

The ICs require

{ −1 = y(1) = (4/15 + c1 + c2) e^{−2},
  0 = y'(1) = (2/15 − 2c1 − c2) e^{−2}, }

so, after some calculations, c1 = e^2 + 2/5, hence c2 = −2(e^2 + 1/3).


The solution of the IVP is

y(t) = (4/15) t^{5/2} e^{−2t} + (e^2 + 2/5) e^{−2t} − 2 (e^2 + 1/3) t e^{−2t},

that is,

y(t) = ( (4/15) t^{5/2} + e^2 + 2/5 − 2 (e^2 + 1/3) t ) e^{−2t}.

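A numerical spot-check of this IVP solution (a hypothetical helper script, not from the manual): it should satisfy both initial conditions at t = 1 and the ODE itself.

```python
import math

# Check the IVP solution of 4.3.2.15: y(1) = -1, y'(1) = 0, and
# y'' + 4 y' + 4 y = sqrt(t) e^{-2t} at a few sample points.
E2 = math.exp(2.0)

def y(t):
    return ((4.0 / 15.0) * t**2.5 + (E2 + 2.0 / 5.0)
            - 2.0 * (E2 + 1.0 / 3.0) * t) * math.exp(-2.0 * t)

h = 1e-5
assert abs(y(1.0) + 1.0) < 1e-9                          # y(1) = -1
assert abs((y(1.0 + h) - y(1.0 - h)) / (2 * h)) < 1e-4   # y'(1) = 0
for t in (0.5, 1.0, 2.0):
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    assert abs(y2 + 4 * y1 + 4 * y(t) - math.sqrt(t) * math.exp(-2 * t)) < 1e-3
print("4.3.2.15 IVP solution verified")
```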
4.3.2.16. The corresponding homogeneous ODE, y'' + y = 0, has characteristic polynomial s^2 + 1, so the homogeneous solution is y_h(x) = c1 cos x + c2 sin x. Let's try to find a solution of the non-homogeneous ODE in the form y(x) = v1(x) cos x + v2(x) sin x, where v1(x), v2(x) are functions to be determined later. We assume that v1'(x) cos x + v2'(x) sin x ≡ 0, so y'(x) = (−sin x) v1(x) + (cos x) v2(x) and thus

y''(x) = (−sin x) v1'(x) + (cos x) v2'(x) − (cos x) v1(x) − (sin x) v2(x).

Substitute all of that into the original, non-homogeneous ODE y'' + y = sec x, which is in standard form. We get

(−sin x) v1'(x) + (cos x) v2'(x) = sec x.

So, v1'(x), v2'(x) should satisfy the system of linear equations

{ (cos x) v1'(x) + (sin x) v2'(x) = 0,
  (−sin x) v1'(x) + (cos x) v2'(x) = sec x. }

Using the inverse of a 2 × 2 matrix, we get

[ v1' ; v2' ] = [ cos x  sin x ; −sin x  cos x ]^{−1} [ 0 ; sec x ] = [ cos x  −sin x ; sin x  cos x ] [ 0 ; sec x ] = [ −sin x / cos x ; 1 ].

We obtain, using the substitution w = cos x,

v1(x) = ∫ v1'(x) dx = ∫ (−sin x / cos x) dx = ∫ (1/w) dw = ln |w| + c1 = ln |cos x| + c1,

where c1 is an arbitrary constant. We also get

v2(x) = ∫ v2'(x) dx = ∫ 1 dx = x + c2,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is

y(x) = cos(x) ln |cos x| + x sin x + c1 cos x + c2 sin x,

where c1, c2 are arbitrary constants. Because y' = −v1 sin x + v2 cos x, it follows that

y'(x) = −(ln |cos x| + c1) sin x + (x + c2) cos x = −sin(x) ln |cos x| + x cos x − c1 sin x + c2 cos x.


The ICs require

{ −1 = y(0) = 0 + c1,
  0 = y'(0) = 0 + c2, }

so c1 = −1 and c2 = 0. The solution of the IVP is

y(x) = cos(x) ln |cos x| + x sin x − cos x.

4.3.2.17. The corresponding homogeneous ODE is the Cauchy-Euler ODE r^2 y'' + r y' + y = 0, where ' = d/dr. Substituting y(r) = r^m into that ODE gives characteristic equation 0 = m(m − 1) + m + 1 = m^2 + 1. The roots are m = ±i, so the homogeneous solution is

y_h(r) = c1 cos(ln r) + c2 sin(ln r).

Let's try to find a solution of the non-homogeneous ODE in the form y(r) = v1(r) cos(ln r) + v2(r) sin(ln r), where v1(r), v2(r) are functions to be determined later. We assume that v1'(r) cos(ln r) + v2'(r) sin(ln r) ≡ 0, so y'(r) = −r^{−1} sin(ln r) v1(r) + r^{−1} cos(ln r) v2(r) and thus

y''(r) = −r^{−1} sin(ln r) v1'(r) + r^{−1} cos(ln r) v2'(r) + ( r^{−2} sin(ln r) − r^{−2} cos(ln r) ) v1(r) + ( −r^{−2} cos(ln r) − r^{−2} sin(ln r) ) v2(r).

Put the non-homogeneous ODE into standard form by dividing through by r^2 to get y'' + r^{−1} y' + r^{−2} y = 1. Substitute into that ODE to get

−r^{−1} sin(ln r) v1'(r) + r^{−1} cos(ln r) v2'(r) = 1.

So, v1'(r), v2'(r) should satisfy the system of linear equations

{ cos(ln r) v1'(r) + sin(ln r) v2'(r) = 0,
  −r^{−1} sin(ln r) v1'(r) + r^{−1} cos(ln r) v2'(r) = 1. }

Using the inverse of a 2 × 2 matrix, we get

[ v1' ; v2' ] = [ cos(ln r)  sin(ln r) ; −r^{−1} sin(ln r)  r^{−1} cos(ln r) ]^{−1} [ 0 ; 1 ]
= (1/r^{−1}) [ r^{−1} cos(ln r)  −sin(ln r) ; r^{−1} sin(ln r)  cos(ln r) ] [ 0 ; 1 ] = [ −r sin(ln r) ; r cos(ln r) ].

The substitution w = ln r, hence dr = r dw = e^w dw, along with the integration formula (3.10), gives

v1(r) = ∫ v1'(r) dr = ∫ −r sin(ln r) dr = −∫ e^w sin(w) e^w dw = −∫ e^{2w} sin(w) dw
= −( e^{2w} / (2^2 + 1^2) ) (2 sin w − cos w) + c1 = (1/5) r^2 ( −2 sin(ln r) + cos(ln r) ) + c1,

where c1 is an arbitrary constant.


Similarly, using the integration formula (3.9) we get

v2(r) = ∫ v2'(r) dr = ∫ r cos(ln r) dr = ∫ e^w cos(w) e^w dw = ∫ e^{2w} cos(w) dw
= ( e^{2w} / (2^2 + 1^2) ) (sin w + 2 cos w) + c2 = (1/5) r^2 ( sin(ln r) + 2 cos(ln r) ) + c2,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is

y(r) = (1/5) r^2 [ ( −2 sin(ln r) + cos(ln r) ) cos(ln r) + ( sin(ln r) + 2 cos(ln r) ) sin(ln r) ] + c1 cos(ln r) + c2 sin(ln r),

hence

y(r) = (1/5) r^2 ( cos^2(ln r) + sin^2(ln r) ) + c1 cos(ln r) + c2 sin(ln r).

So, the general solution of the ODE is

y(r) = (1/5) r^2 + c1 cos(ln r) + c2 sin(ln r),

where c1, c2 are arbitrary constants. It follows that

y'(r) = (2/5) r − c1 r^{−1} sin(ln r) + c2 r^{−1} cos(ln r).

The ICs require

{ 3 = y(1) = 1/5 + c1,
  −1 = y'(1) = 2/5 + c2, }

so c1 = 14/5 and c2 = −7/5. The solution of the IVP is

y(r) = (1/5) r^2 + (14/5) cos(ln r) − (7/5) sin(ln r).

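A numerical spot-check of this Cauchy-Euler IVP solution (a hypothetical helper script, not from the manual):

```python
import math

# Check the IVP solution of 4.3.2.17:
# r^2 y'' + r y' + y = r^2 with y(1) = 3 and y'(1) = -1.
def y(r):
    return 0.2 * r * r + 2.8 * math.cos(math.log(r)) - 1.4 * math.sin(math.log(r))

h = 1e-5
def d1(r):  # central-difference first derivative
    return (y(r + h) - y(r - h)) / (2 * h)
def d2(r):  # central-difference second derivative
    return (y(r + h) - 2 * y(r) + y(r - h)) / h**2

assert abs(y(1.0) - 3.0) < 1e-9
assert abs(d1(1.0) + 1.0) < 1e-4
for r in (0.5, 1.0, 3.0):
    assert abs(r * r * d2(r) + r * d1(r) + y(r) - r * r) < 1e-3
print("4.3.2.17 IVP solution verified")
```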
4.3.2.18. Into the Cauchy-Euler ODE x^2 y'' − 4x y' + 6y = 0, where ' = d/dx, substitute y(x) = x^m. This gives the characteristic equation 0 = m(m − 1) − 4m + 6 = m^2 − 5m + 6 = (m − 2)(m − 3). The general solution is

y(x) = c1 x^2 + c2 x^3,

where c1, c2 are arbitrary constants. It follows that y'(x) = 2c1 x + 3c2 x^2.

(a) The first choice of ICs requires

{ 0 = y(1) = c1 + c2,
  −2 = y'(1) = 2c1 + 3c2. }

Using the inverse of a 2 × 2 matrix, we get

[ c1 ; c2 ] = [ 1  1 ; 2  3 ]^{−1} [ 0 ; −2 ] = [ 3  −1 ; −2  1 ] [ 0 ; −2 ] = [ 2 ; −2 ].

The solution of the IVP is

y(x) = 2(x^2 − x^3).

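The constants can be confirmed directly; the following check (a hypothetical helper script, not from the manual) verifies that y(x) = 2x^2 − 2x^3 satisfies both the ODE and the stated initial conditions y(1) = 0, y'(1) = −2. Note that the pair c1 = 2/3, c2 = −2/3 would give y'(1) = −2/3, not −2, so it cannot be the solution of this IVP.

```python
# Verify 4.3.2.18(a): y(x) = 2x^2 - 2x^3 solves x^2 y'' - 4x y' + 6y = 0
# with y(1) = 0 and y'(1) = -2.  Derivatives are computed exactly.
def y(x):   return 2 * x**2 - 2 * x**3
def yp(x):  return 4 * x - 6 * x**2
def ypp(x): return 4 - 12 * x

assert y(1.0) == 0.0
assert yp(1.0) == -2.0
for x in (-1.0, 0.5, 2.0, 7.0):
    assert x**2 * ypp(x) - 4 * x * yp(x) + 6 * y(x) == 0.0
print("4.3.2.18(a) solution verified")
```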

(b) The second choice of ICs requires

{ 0 = y(0) = c1 · 0 + c2 · 0 = 0,
  −2 = y'(0) = 2c1 · 0 + 3c2 · 0 = 0, }

so the IVP has no solution!

(c) The Existence and Uniqueness conclusion of Theorem 3.8 in Section 3.3 is not contradicted because the ODE, here put into standard form

y'' − (4/x) y' + (6/x^2) y = 0,

does not satisfy the hypotheses of Theorem 3.8: p(x) = −4/x is not continuous at the initial point x0 = 0 [ or, because q(x) = 6/x^2 is not continuous at the initial point x0 = 0 ].

4.3.2.19. We are given that the corresponding homogeneous ODE has solutions y1(x) = e^x and y2(x) = 1 + x + (1/2)x^2. We calculate that their Wronskian is

W(y1, y2)(x) = det [ e^x  1 + x + (1/2)x^2 ; e^x  1 + x ] = −(1/2) x^2 e^x ≠ 0,

so this set of functions {y1(x), y2(x)} is linearly independent on any interval that does not contain x = 0. So, the general solution of the corresponding homogeneous ODE is

y_h(x) = c1 e^x + c2 (1 + x + (1/2)x^2).

Let's try to find a solution of the non-homogeneous ODE in the form

y(x) = v1(x) e^x + v2(x) (1 + x + (1/2)x^2),

where v1(x), v2(x) are functions to be determined later. We assume that

v1'(x) e^x + v2'(x) (1 + x + (1/2)x^2) ≡ 0,

so y'(x) = e^x v1(x) + (1 + x) v2(x) and thus y''(x) = e^x v1'(x) + (1 + x) v2'(x) + e^x v1(x) + v2(x). Put the non-homogeneous ODE into standard form by dividing through by x to get y'' − ((x + 2)/x) y' + (2/x) y = x^2. Substitute into that ODE to get

e^x v1'(x) + (1 + x) v2'(x) = x^2.

So, v1'(x), v2'(x) should satisfy the system of linear equations

{ e^x v1'(x) + (1 + x + (1/2)x^2) v2'(x) = 0,
  e^x v1'(x) + (1 + x) v2'(x) = x^2. }

Using the inverse of a 2 × 2 matrix, we get

[ v1' ; v2' ] = [ e^x  1+x+(1/2)x^2 ; e^x  1+x ]^{−1} [ 0 ; x^2 ]
= ( 1 / (−(1/2)x^2 e^x) ) [ 1+x  −(1+x+(1/2)x^2) ; −e^x  e^x ] [ 0 ; x^2 ] = [ 2(1 + x + (1/2)x^2) e^{−x} ; −2 ].


We obtain, using integration by parts twice, that

v1(x) = ∫ v1'(x) dx = 2 ∫ (1 + x + (1/2)x^2) e^{−x} dx = 2 (1 + x + (1/2)x^2)(−e^{−x}) − 2 ∫ (1 + x)(−e^{−x}) dx
= −2 (1 + x + (1/2)x^2) e^{−x} + ∫ 2(1 + x) e^{−x} dx = −2 (1 + x + (1/2)x^2) e^{−x} + 2(1 + x)(−e^{−x}) − ∫ 2(−e^{−x}) dx
= e^{−x} ( −2(1 + x + (1/2)x^2) − 2(1 + x) − 2 ) + c1 = e^{−x} ( −6 − 4x − x^2 ) + c1,

where c1 is an arbitrary constant, and

v2(x) = ∫ v2'(x) dx = ∫ −2 dx = −2x + c2,

where c2 is an arbitrary constant. Putting everything together, the general solution of the ODE is

y(x) = (−6 − 4x − x^2) − 2x (1 + x + (1/2)x^2) + c1 e^x + c2 (1 + x + (1/2)x^2),

hence

y(x) = −6 − 4x − x^2 − 2x − 2x^2 − x^3 + c1 e^x + c2 (1 + x + (1/2)x^2),

that is,

y(x) = −6 − 6x − 3x^2 − x^3 + c1 e^x + c2 (1 + x + (1/2)x^2),

where c1, c2 are arbitrary constants.

4.3.2.20. Re-write the ODE y'' + 2y' + y = √t e^{−t} as (D + 1)^2 [ y ] = √t e^{−t}, and then look for a solution in the form y(t) = e^{−t} v(t):

√t e^{−t} = (D + 1)^2 [ y ] = (D + 1)^2 [ e^{−t} v(t) ] = e^{−t} D^2 [ v(t) ] = e^{−t} v''(t).

Multiplying through by e^t gives

v''(t) = √t,

and then integrate with respect to t to get

v'(t) = ∫ √t dt = (2/3) t^{3/2} + b1,

where b1 is an arbitrary constant. Integrate again to get

v(t) = ∫ ( (2/3) t^{3/2} + b1 ) dt = (4/15) t^{5/2} + b1 t + b2,

where b1, b2 are arbitrary constants. It follows that the solution of the original problem is

y(t) = e^{−t} v(t) = ( (4/15) t^{5/2} + b1 t + b2 ) e^{−t},

where b1, b2 are arbitrary constants. It follows that

y'(t) = −e^{−t} v(t) + e^{−t} v'(t) = e^{−t} ( −v(t) + v'(t) ) = e^{−t} ( −(4/15) t^{5/2} − b1 t − b2 + (2/3) t^{3/2} + b1 ).


The ICs require

{ −4 = y(0) = b2,
  5 = y'(0) = b1 − b2. }

This implies b2 = −4, which implies b1 = 1. The solution of the IVP is

y(t) = ( (4/15) t^{5/2} + t − 4 ) e^{−t}.

This agrees with the final conclusion of Example 4.18. [Note that the constants b1, b2 we used here are not necessarily the same as the constants c1, c2 used in Example 4.18.]

4.3.2.21. The corresponding LCCHODE's characteristic polynomial is P(s) = s^2 + 5s + 6 = (s + 3)(s + 2) ⇒ L1 = −3, −2; f(t) = t^2 e^{−t} ⇒ L2 = −1, −1, −1 ⇒ the superlist is L = −3, −2, −1, −1, −1 ⇒ y(t) = c1 e^{−3t} + c2 e^{−2t} + c3 e^{−t} + c4 t e^{−t} + c5 t^2 e^{−t} ⇒

yp(t) = A e^{−t} + B t e^{−t} + C t^2 e^{−t},

where A, B, C are constants to be determined. It follows that yp'(t) = −A e^{−t} + B(1 − t) e^{−t} + C(2t − t^2) e^{−t} and yp''(t) = A e^{−t} + B(−2 + t) e^{−t} + C(2 − 4t + t^2) e^{−t}. Substitute all of this into the original, non-homogeneous ODE to get

t^2 e^{−t} = yp'' + 5yp' + 6yp
= A e^{−t} + B(−2 + t) e^{−t} + C(2 − 4t + t^2) e^{−t} − 5A e^{−t} + 5B(1 − t) e^{−t} + 5C(2t − t^2) e^{−t} + 6A e^{−t} + 6B t e^{−t} + 6C t^2 e^{−t}
= (2A + 3B + 2C) e^{−t} + (2B + 6C) t e^{−t} + (2C) t^2 e^{−t}.

The t^2 e^{−t} terms ⇒ C = 1/2. The t e^{−t} terms ⇒ 2B + 6C = 0 ⇒ B = −3/2. The e^{−t} terms ⇒ 2A + 3B + 2C = 0 ⇒ A = 7/4. So

yp(t) = (7/4) e^{−t} − (3/2) t e^{−t} + (1/2) t^2 e^{−t}.

The general solution of the ODE is

y(t) = c1 e^{−3t} + c2 e^{−2t} + (7/4) e^{−t} − (3/2) t e^{−t} + (1/2) t^2 e^{−t},

where c1, c2 are arbitrary constants. This agrees with the conclusion of Example 4.16. I think variation of parameters, as we did in Example 4.16, was easier for this problem.

4.3.2.22. The best that we can hope for is a formula in terms of {y1(t), y2(t), y3(t)}, a complete set of basic solutions of the corresponding linear homogeneous ODE, y''' + p1(t) y'' + p2(t) y' + p3(t) y = 0. Assume pi(t), for i = 1, 2, 3, are continuous on an open interval containing t = 0. Assume that y(t) = y1(t) v1(t) + y2(t) v2(t) + y3(t) v3(t), so y' = y1' v1 + y2' v2 + y3' v3 + y1 v1' + y2 v2' + y3 v3'. Assume that

(?) y1(t) v1'(t) + y2(t) v2'(t) + y3(t) v3'(t) ≡ 0,

hence y' = y1' v1 + y2' v2 + y3' v3 and thus y'' = y1'' v1 + y2'' v2 + y3'' v3 + y1' v1' + y2' v2' + y3' v3'. Assume, in addition, that

(??) y1'(t) v1'(t) + y2'(t) v2'(t) + y3'(t) v3'(t) ≡ 0,


hence

y''' = y1''' v1 + y2''' v2 + y3''' v3 + y1'' v1' + y2'' v2' + y3'' v3'.

Substitute y, y', y'', and y''' into the original non-homogeneous ODE to get

f(t) = y''' + p1(t) y'' + p2(t) y' + p3(t) y
= y1''' v1 + y2''' v2 + y3''' v3 + y1'' v1' + y2'' v2' + y3'' v3' + p1(t)( y1'' v1 + y2'' v2 + y3'' v3 ) + p2(t)( y1' v1 + y2' v2 + y3' v3 ) + p3(t)( y1 v1 + y2 v2 + y3 v3 )
= ( y1''' + p1(t) y1'' + p2(t) y1' + p3(t) y1 ) v1 + ( y2''' + p1(t) y2'' + p2(t) y2' + p3(t) y2 ) v2 + ( y3''' + p1(t) y3'' + p2(t) y3' + p3(t) y3 ) v3 + y1'' v1' + y2'' v2' + y3'' v3'
= 0 · v1 + 0 · v2 + 0 · v3 + y1'' v1' + y2'' v2' + y3'' v3',

that is,

(? ? ?) f(t) = y1'' v1' + y2'' v2' + y3'' v3'.

Together, (?), (??), and (? ? ?) form the system

[ 0 ; 0 ; f(t) ] = [ y1(t)  y2(t)  y3(t) ; y1'(t)  y2'(t)  y3'(t) ; y1''(t)  y2''(t)  y3''(t) ] [ v1' ; v2' ; v3' ].

Using the adjugate matrix, we have

[ v1' ; v2' ; v3' ] = ( 1 / W(y1,y2,y3)(t) ) [ y2' y3'' − y3' y2''   y3 y2'' − y2 y3''   y2 y3' − y3 y2' ;
                                               y3' y1'' − y1' y3''   y1 y3'' − y3 y1''   y3 y1' − y1 y3' ;
                                               y1' y2'' − y2' y1''   y2 y1'' − y1 y2''   y1 y2' − y2 y1' ] [ 0 ; 0 ; f(t) ]

= ( 1 / W(y1,y2,y3)(t) ) [ (y2 y3' − y3 y2') f(t) ; (y3 y1' − y1 y3') f(t) ; (y1 y2' − y2 y1') f(t) ].

Because, for example, v1(t) = c1 + ∫_0^t v1'(s) ds = c1 + ∫_0^t ( (y2(s) y3'(s) − y3(s) y2'(s)) f(s) / W(y1,y2,y3)(s) ) ds, where c1 is an arbitrary constant,

(? ? ? ?) y(t) = ( c1 + ∫_0^t ( (y2(s) y3'(s) − y3(s) y2'(s)) f(s) / W(y1,y2,y3)(s) ) ds ) y1(t)
+ ( c2 + ∫_0^t ( (y3(s) y1'(s) − y1(s) y3'(s)) f(s) / W(y1,y2,y3)(s) ) ds ) y2(t)
+ ( c3 + ∫_0^t ( (y1(s) y2'(s) − y2(s) y1'(s)) f(s) / W(y1,y2,y3)(s) ) ds ) y3(t),

where c1, c2, c3 are arbitrary constants. Equation (? ? ? ?) gives a formula for all solutions of the non-homogeneous ODE.


Section 4.4.1

4.4.1.1. L[ −5e^{3t} + sin 2t ] = −5 L[ e^{3t} ] + L[ sin 2t ] = −5/(s − 3) + 2/(s^2 + 4).

4.4.1.2. L[ cos 3t + sin(t/2) ] = L[ cos 3t ] + L[ sin(t/2) ] = s/(s^2 + 9) + (1/2)/(s^2 + (1/2)^2) = s/(s^2 + 9) + 2/(4s^2 + 1).

4.4.1.3. L[ 1 + at + (1/2!)(at)^2 + (1/3!)(at)^3 ] = L[ 1 ] + a L[ t ] + (a^2/2!) L[ t^2 ] + (a^3/3!) L[ t^3 ] = 1/s + a/s^2 + a^2/s^3 + a^3/s^4 = (1/s) ( 1 + (a/s) + (a/s)^2 + (a/s)^3 ).

4.4.1.4. L[ t^3 e^{−2t} ] = L[ e^{−2t} f(t) ], where f(t) = t^3. Using Table entry L1.8, we have

L[ t^3 e^{−2t} ] = L[ t^3 ] |_{s→(s+2)} = (3!/s^4) |_{s→(s+2)} = 3!/(s + 2)^4 = 6/(s + 2)^4.

4.4.1.5. e^{t/2} cos( (√3/2) t ) = e^{t/2} f(t), where f(t) = cos( (√3/2) t ). Using Table entry L1.8, we have

L[ e^{t/2} cos( (√3/2) t ) ] = L[ cos( (√3/2) t ) ] |_{s→(s−1/2)} = ( s/(s^2 + 3/4) ) |_{s→(s−1/2)} = (s − 1/2) / ( (s − 1/2)^2 + 3/4 ) = (s − 1/2)/(s^2 − s + 1).

4.4.1.6. L^{−1}[ 5/s^3 − (3s − 1)/(s^2 + 4) ] = 5 L^{−1}[ 1/s^3 ] − 3 L^{−1}[ s/(s^2 + 4) ] + L^{−1}[ 1/(s^2 + 4) ]
= (5/2!) L^{−1}[ 2!/s^3 ] − 3 L^{−1}[ s/(s^2 + 2^2) ] + (1/2) L^{−1}[ 2/(s^2 + 2^2) ] = (5/2) t^2 − 3 cos 2t + (1/2) sin 2t.

4.4.1.7. Partial fractions:

(s − 2) / ((s + 2)(s^2 + 1)) = A/(s + 2) + (Bs + C)/(s^2 + 1) ⇒ (?) s − 2 = A(s^2 + 1) + (Bs + C)(s + 2).

Substitute s = −2 into (?) to get

−2 − 2 = A((−2)^2 + 1) + (Bs + C)(−2 + 2) = 5A,

so A = −4/5. Substitute this into (?) to get

s − 2 = −(4/5)(s^2 + 1) + (Bs + C)(s + 2),

hence

Bs + C = ( s − 2 + (4/5)s^2 + 4/5 ) / (s + 2) = ( (4/5)s^2 + s − 6/5 ) / (s + 2) = (1/5) (4s^2 + 5s − 6)/(s + 2) = (1/5)(4s − 3).

So,

L^{−1}[ (s − 2)/((s + 2)(s^2 + 1)) ] = −(4/5) L^{−1}[ 1/(s + 2) ] + (1/5) L^{−1}[ (4s − 3)/(s^2 + 1) ] = −(4/5) e^{−2t} + (1/5)(4 cos t − 3 sin t).

4.4.1.8. L^{−1}[ (s + 1)/(s^2 − 4s + 5) ] = L^{−1}[ (s + 1)/((s − 2)^2 + 1) ] = L^{−1}[ ((s − 2 + 2) + 1)/((s − 2)^2 + 1) ] = L^{−1}[ ((s − 2) + 3)/((s − 2)^2 + 1) ]

= L^{−1}[ F(s − 2) ] = e^{2t} (cos t + 3 sin t).

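A quick numerical spot-check (a hypothetical helper script, not from the manual) of the partial-fraction decomposition found in 4.4.1.7 above:

```python
# Spot-check the decomposition from 4.4.1.7:
# (s - 2)/((s + 2)(s^2 + 1)) = -(4/5)/(s + 2) + (1/5)(4s - 3)/(s^2 + 1).
for s in (0.5, 1.0, 3.0, -1.0, 10.0):
    lhs = (s - 2) / ((s + 2) * (s * s + 1))
    rhs = -0.8 / (s + 2) + 0.2 * (4 * s - 3) / (s * s + 1)
    assert abs(lhs - rhs) < 1e-12, s
print("4.4.1.7 partial fractions verified")
```

Checking the identity at a handful of sample s values is a cheap way to catch sign or coefficient errors before inverting the transform.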

4.4.1.9. L^{−1}[ (−s + 4)/(s^2 + 4s + 7) ] = L^{−1}[ (−s + 4)/((s + 2)^2 + 3) ] = L^{−1}[ (−(s + 2 − 2) + 4)/((s + 2)^2 + 3) ] = L^{−1}[ (−(s + 2) + 6)/((s + 2)^2 + 3) ]

= L^{−1}[ F(s + 2) ] = e^{−2t} ( −cos(√3 t) + (6/√3) sin(√3 t) ) = e^{−2t} ( −cos(√3 t) + 2√3 sin(√3 t) ).

4.4.1.10. Partial fractions:

(2s^2 + 4s + 8)/(s^3 − 4s) = (2s^2 + 4s + 8)/( s(s + 2)(s − 2) ) = A/s + B/(s + 2) + C/(s − 2)

⇒ (?) 2s^2 + 4s + 8 = A(s^2 − 4) + Bs(s − 2) + Cs(s + 2). We get

{ @ s = 0: 8 = −4A,
  @ s = −2: 8 = 8B,
  @ s = 2: 24 = 8C, }

so A = −2, B = 1, C = 3. Substitute this into (?) to get

L^{−1}[ (2s^2 + 4s + 8)/(s^3 − 4s) ] = L^{−1}[ −2/s + 1/(s + 2) + 3/(s − 2) ] = −2 + e^{−2t} + 3e^{2t}.

4.4.1.11. Partial fractions:

(s + 6)/( (s + 3)^2 (s^2 + 2s + 2) ) = A/(s + 3) + B/(s + 3)^2 + (Cs + E)/(s^2 + 2s + 2)

⇒ (?) s + 6 = A(s + 3)(s^2 + 2s + 2) + B(s^2 + 2s + 2) + (Cs + E)(s + 3)^2.

Substitute s = −3 into (?) to get

3 = A · 0 + 5B + C · 0,

so B = 3/5. Substitute this into (?) to get

(?) s + 6 = A(s + 3)(s^2 + 2s + 2) + (3/5)(s^2 + 2s + 2) + (Cs + E)(s + 3)^2,

hence

(??) A(s^2 + 2s + 2) + (Cs + E)(s + 3) = ( s + 6 − (3/5)(s^2 + 2s + 2) ) / (s + 3) = (1/5) · (−3s^2 − s + 24)/(s + 3) = (1/5)(−3s + 8).

Substitute s = −3 into (??) to get 5A = 17/5, so A = 17/25. Substitute this into (??) to get

(? ? ?) (1/5)(−3s + 8) = (17/25)(s^2 + 2s + 2) + (Cs + E)(s + 3),

hence

Cs + E = ( (1/5)(−3s + 8) − (17/25)(s^2 + 2s + 2) ) / (s + 3) = (1/25) · (−17s^2 − 49s + 6)/(s + 3) = (1/25)(−17s + 2).

So,

L^{−1}[ (s + 6)/( (s + 3)^2 (s^2 + 2s + 2) ) ] = (17/25) L^{−1}[ 1/(s + 3) ] + (3/5) L^{−1}[ 1/(s + 3)^2 ] + (1/25) L^{−1}[ (−17s + 2)/(s^2 + 2s + 2) ]
= (17/25) e^{−3t} + (3/5) t e^{−3t} + (1/25) L^{−1}[ (−17(s + 1 − 1) + 2)/((s + 1)^2 + 1) ]
= (17/25) e^{−3t} + (3/5) t e^{−3t} + e^{−t} ( −(17/25) cos t + (19/25) sin t ).

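The three-stage elimination in 4.4.1.11 above invites arithmetic slips, so a numerical spot-check of the final decomposition is worthwhile (a hypothetical helper script, not from the manual):

```python
# Spot-check the decomposition from 4.4.1.11:
# (s + 6)/((s + 3)^2 (s^2 + 2s + 2))
#   = (17/25)/(s + 3) + (3/5)/(s + 3)^2 + (1/25)(-17s + 2)/(s^2 + 2s + 2).
for s in (0.0, 1.0, -1.0, 2.5, 10.0):
    lhs = (s + 6) / ((s + 3) ** 2 * (s * s + 2 * s + 2))
    rhs = (17.0 / 25.0) / (s + 3) + (3.0 / 5.0) / (s + 3) ** 2 \
          + (1.0 / 25.0) * (-17 * s + 2) / (s * s + 2 * s + 2)
    assert abs(lhs - rhs) < 1e-12, s
print("4.4.1.11 partial fractions verified")
```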

4.4.1.12. y(t) = L^{−1}[ s/(s^2 + 4s + 13) ] = L^{−1}[ s/((s + 2)^2 + 9) ] = L^{−1}[ (s + 2 − 2)/((s + 2)^2 + 9) ] = e^{−2t} ( cos 3t − (2/3) sin 3t ).

The amplitude-phase form for y(t) is y(t) = α e^{−2t} cos(3t − δ), where

{ 1 = A = α cos δ,
  −2/3 = B = α sin δ, }

hence α = √( 1^2 + (−2/3)^2 ) = √13/3 and tan δ = (−2/3)/1 = −2/3. Because (A, B) = (1, −2/3) is in the fourth quadrant, δ = arctan(−2/3) = −arctan(2/3). The amplitude-phase form is

y(t) = (√13/3) e^{−2t} cos( 3t + arctan(2/3) ).

4.4.1.13. Define Y(s) ≜ L[ y(t) ]. Take the Laplace transform of both sides of the ODE y' − 2y = 3e^{4t}, and use the IC y(0) = −1 to get sY − (−1) − 2Y = 3/(s − 4). This gives

Y(s) = −1/(s − 2) + 3/((s − 2)(s − 4))

and then we use a partial-fractions expansion:

3/((s − 2)(s − 4)) = A/(s − 2) + B/(s − 4),

where A, B are constants to be determined. After multiplying both sides by (s − 2)(s − 4) we get

(?) 3 = A(s − 4) + B(s − 2).

Substituting s = 2 and s = 4 into (?) we get

{ @ s = 2: 3 = −2A,
  @ s = 4: 3 = 2B, }

so A = −3/2, B = 3/2. So,

y(t) = L^{−1}[ Y(s) ] = L^{−1}[ −1/(s − 2) − (3/2)/(s − 2) + (3/2)/(s − 4) ] = L^{−1}[ −(5/2)/(s − 2) + (3/2)/(s − 4) ] = −(5/2) e^{2t} + (3/2) e^{4t}.

There is no steady state solution.

4.4.1.14. Define Y(s) ≜ L[ y(t) ]. Take the Laplace transform of both sides of the ODE y' + 2y = cos 4t, and use the IC y(0) = −1 to get sY − (−1) + 2Y = s/(s^2 + 16). This gives

Y(s) = −1/(s + 2) + s/((s + 2)(s^2 + 16))

1 s + s + 2 (s + 2)(s2 + 16)

and then we use a partial fractions expansion: s A Bs + C = + 2 , 2 (s + 2)(s + 16) s+2 s + 16 where A, B, C are constants to be determined. After multiplying both sides by (s + 2)(s2 + 16) we get (?) s = A(s2 + 16) + (Bs + C)(s + 2). Substituting s = −2 into (?) gives −2 = 20A, ©Larry

Turyn, October 13, 2013

p. 60

1 so A = − 10 . Substitute this into (?) to get

s=− hence (??) V s + C =

s+

1 2 (s + 16) + (Bs + C)(s + 2), 10 1 2 10 (s

+ 16) 1 s2 + 10s + 16 1 = = (s + 8). s+2 10 s+2 10

The solution of the IVP is

−1

y(t) = L

[ Y (s) ] = L

−1



1 1 1 s+8 1 − · + · − s + 2 10 s + 2 10 s2 + 16 =−

The steady state solution is yS (t) =



−1

=L



11 1 1 s+8 − · + · 10 s + 2 10 s2 + 16



11 −2t 1 e + (cos 4t + 2 sin 4t). 10 10

1 10 (cos 4t

+ 2 sin 4t).

4.4.1.15. Define Y (s) , L [ y(t) ]. Take the Laplace transform of both sides of the ODE y¨ + 3y˙ − 10y = 0, and use the ICs y(0) = 1, y(0) ˙ = −3 to get s2 Y − s + 3 + 3(sY − 1) − 10Y = 0. This gives Y (s) = Partial fractions:

s2

s s = . s2 + 3s − 10 (s + 5)(s − 2)

s A B = + ⇒ (?) s = A(s − 2) + B(s + 5). We get + 3s − 10 s+5 s−2   @s = −5 : −5 = −7A , @s = 2 : 2 = 7B

so A = 75 , B = 27 . Substitute this into (?) to get that the solution of the IVP is   5 1 2 B 2 5 y(t) = L−1 · + · = e−5t + e2t . 7 s+5 7 s−2 7 7 There is no steady state solution. 4.4.1.16. Define Y (s) , L [ y(t) ]. Take the Laplace transform of both sides of the ODE y¨ + 9y = 5 sin 2t, and use the ICs y(0) = 1, y(0) ˙ = −3 to get s2 Y − s + 3 + 9Y = s210 +4 . This gives Y (s) = Partial fractions:

10 s−3 + 2 . 2 s + 9 (s + 4)(s2 + 9)

10 As + B Cs + E = 2 + 2 (s2 + 4)(s2 + 9) s +4 s +9

⇒ (?) 10 = (As + B)(s2 + 9) + (Cs + E)(s2 + 4) = (A + C)s3 + (B + E)s2 + (9A + 4C)s + (9B + 4E). Sorting by powers of s we get  3  s : 0=A +C      2  0= B +E s : , 0 = 9A + 4C  s1 :     0  s : 10 = 9B + 4E so    −1        A 1 0 1 0 0 −4 0 1 0 0 0  B   0 1 0 1   0  1  0 −4     0 1          0  =  2 .  C = 9 0 4 0   0 = 5 9     0 −1 0 0 0  0 9 0 −1 10 E 0 9 0 4 10 −2 ©Larry



Substitute these results into (?) to get that the solution of the IVP is

y(t) = L^{−1}[ (s − 3)/(s^2 + 9) + 2/(s^2 + 4) − 2/(s^2 + 9) ] = cos 3t − sin 3t + sin 2t − (2/3) sin 3t = cos 3t − (5/3) sin 3t + sin 2t.

Because there is no damping, no part of this solution decays; the part of the response at the forcing frequency is sin 2t, and the remaining terms oscillate at the natural frequency 3.

4.4.1.17. (a) The corresponding LCCHODE’s characteristic polynomial is P(s) = s2 + 9 f (t) = 10te−t ⇒ L2 = −1, −1 ⇒ Superlist is L = ± i 3, −1, −1 implies

⇒ L1 = ± i 3

y(t) = c1 cos(3t) + c2 sin(3t) + c3 e−t + c4 te−t ⇒ yp (t) = Ae−t + Bte−t , where A, B are constants to be determined. First calculate y˙ p (t) = −Ae−t + B(−t + 1)e−t and then y¨p (t) = Ae−t + B(t − 2)e−t . We have 10te−t = y¨p + 9yp = Ae−t + B(t − 2)e−t + 9(Ae−t + Bte−t ) = (10A − 2B)e−t + 10Bte−t hence B = 1 and thus A = 51 . 1 The general solution of the ODE is y(t) = c1 cos(3t) + c2 sin(3t) + e−t + te−t , where c1 , c2 =arbitrary 5 constants. It follows that y(t) ˙ = −3c1 sin(3t) + 3c2 cos(3t) −

1 −t e + (−t + 1)e−t . 5

The ICs require   0 = y(0) = c1 + 

 

1 5

0 = y(0) ˙ = 3c2 −

1 5

+1

,



4 so c1 = − 15 and c2 = − 15 . The solution of the IVP is

1 4 1 y(t) = − cos(3t) − sin(3t) + e−t + te−t . 5 15 5 1 4 [By the way, the steady state solution is yS (t) = − cos(3t) − sin(3t).] 5 15 (b) Define Y (s) , L [ y(t) ]. Take the Laplace transform of both sides of the ODE y¨ + 9y = 10te−t , and use 10 the ICs y(0) = 0, y(0) ˙ = 0 to get s2 Y + 9Y = (s+1) 2 . This gives Y (s) =

Partial fractions: ⇒

(s2

10 . + 9)(s + 1)2

10 A B Cs + E = + + 2 (s2 + 9)(s + 1)2 s + 1 (s + 1)2 (s + 9)

(?) 10 = A(s + 1)(s2 + 9) + B(s2 + 9) + (Cs + E)(s + 1)2 . Substitute s = −1 into (?) to get 10 = A · 0 + 10B + C · 0

so B = 1. Substitute this into (?) to get (?) 10 = A(s + 1)(s2 + 9) + 1 · (s2 + 9) + (Cs + E)(s + 1)2 , hence (??) A(s2 + 9) + (Cs + E)(s + 1) =

10 − s2 − 9 1 − s2 = = −s + 1 s+1 s+1 ©Larry

Turyn, October 13, 2013

p. 62

Substitute s = −1 into (??) to get 10A = 2 so A = − 15 . Substitute this into (??) to get 1 2 (s + 9) + (Cs + E)(s + 1), 5

(? ? ?) − s + 1 = hence Cs + E =

−s + 1 − 51 (s2 + 9) 1 −s2 − 5s − 4 1 = · = (−s − 4). (s + 1) 5 (s + 1) 5

So, y(t) = L−1



1 −s − 4 1 1 1 + · 2 · + 2 5 s + 1 (s + 1) 5 (s + 9)



1 −1 L 5

=



1 s+1



+ L−1



1 (s + 1)2

 +

1 −1 L 5



−s − 4 s2 + 9



 1 −t 1 4 e + t e−t − cos 3t + sin 3t . 5 5 3 1 4 [By the way, the steady state solution is yS (t) = − cos(3t) − sin(3t).] 5 15 =

4.4.1.18. (a) The corresponding LCCHODE’s characteristic polynomial is P(s) = s2 + 4 f (t) = e−t cos 2t ⇒ L2 = −1 ± i2 ⇒ Superlist is L = ± i 2, −1 ± i2 implies

⇒ L1 = ± i 2

y(t) = c1 cos(2t) + c2 sin(2t) + c3 e−t cos 2t + c4 e−t sin 2t ⇒ yp (t) = Ae−t cos 2t + Be−t sin 2t, where A, B are constants to be determined. First calculate y˙ p (t) = −Ae−t cos 2t − 2Ae−t sin 2t − Be−t sin 2t + 2Be−t cos 2t and then y¨p (t) = −3Ae−t cos 2t + 4Ae−t sin 2t − 3Be−t sin 2t − 4Be−t cos 2t. We have e−t cos 2t = y¨p +9yp = −3Ae−t cos 2t+4Ae−t sin 2t−3Be−t sin 2t−4Be−t cos 2t+4(Ae−t cos 2t+Be−t sin 2t) = (A − 4B)e−t cos 2t + (4A + B)e−t sin 2t hence



so 

A B



 =

1 −4 4 1

−1 

1 0

1 = A − 4B 0 = 4A + B  =

1 52



 ,

1 4 −4 1



1 0

 =

1 17



1 −4

 .

The general solution of the ODE is y(t) = c1 cos(2t) + c2 sin(2t) +

 1 −t e cos 2t − 4e−t sin 2t , 17

where c1 , c2 =arbitrary constants. It follows that y(t) ˙ = −2c1 sin(2t) + 2c2 cos(2t) +

1 −t e (−9 cos 2t + 2 sin 2t). 17

The ICs require   0 = y(0) = c1 +  1 so c1 = − 17 and c2 =

9 34 .

 

1 17

0 = y(0) ˙ = 2c2 −

9 17

,



The solution of the IVP is

y(t) =

 1 (−2 cos(2t) + 9 sin(2t) + 2e−t cos(2t) − 8e−t sin(2t) , 34


⇒ The steady state solution is yS (t) =

 1 − 2 cos 2t + 9 sin 2t . 34

(b) Define Y (s) , L [ y(t) ]. Take the Laplace transform of both sides of the ODE y¨ + 4y = e−t cos 2t, and (s+1) use the ICs y(0) = 0, y(0) ˙ = 0 to get s2 Y + 4Y = (s+1) 2 +4 . This gives Y (s) =

Partial fractions:

s+1 . (s2 + 4) (s + 1)2 + 4

As + B Cs + E s+1 = 2 + 2 s +4 s + 2s + 5 (s2 + 4) (s + 1)2 + 4

⇒ (?) s + 1 = (As + B)(s2 + 2s + 5) + (Cs + E)(s2 + 4) = (A + C)s3 + (2A + B + E)s2 + (5A + 2B + 4C)s + (5B + 4E). Sorting by powers of s we get   3 s : 0=A +C       2 s : 0 = 2A + B +E , 1     s0 : 1 = 5A + 2B + 4C   s : 1= 5B + 4E so   1 A  B   2     C = 5 0 E 

0 1 2 5

1 0 4 0

−1  0  1    0   4

   0 0 −4 8 1 −2  0  −32 −4 1 0  8 1 =   1  17  21 −8 −1 2  1 1 40 5 −10 3 1



 −1    = 1  9   17  1  −7 

Substitute these results into (?) to get that the solution of the IVP is     1 −1 −s + 9 s−7 9 1 −1 (s + 1 − 1) − 7 1 y(t) = L + (− cos 2t + sin 2t) + L = 17 s2 + 4 (s + 1)2 + 4 17 2 17 (s + 1)2 + 4 9 1 (− cos 2t + sin 2t + e−t cos 2t − 4e−t sin 2t). 17 2  1 The steady state solution is yS (t) = 34 − 2 cos 2t + 9 sin 2t . =

4.4.1.19. Define Y (s) , L [ y(t) ]. Take the Laplace transform of both sides of the ODE y¨ + 2y˙ + 2y = sin t, and use the ICs y(0) = a, y(0) ˙ = b to get s2 Y − as − b + 2(sY − a) + 2Y = s21+1 . This gives 1 as + b + 2a . + s2 + 2s + 2 (s2 + 1) s2 + 2s + 1     as + b + 2a as + b + 2a −1 = L = e−t (...) will be transient and thus will not be part Note that L−1 s2 + 2s + 2 (s + 1)2 + 1 of the desired steady state solution. 1 As + B Cs + E = 2 Partial fractions: 2 + 2 s +1 s + 2s + 2 (s + 1) s2 + 2s + 1 Y (s) =

⇒ (?) s = (As+B)(s2 +2s+2)+(Cs+E)(s2 +1) = (A+C)s3 +(2A+B +E)s2 +(2A+2B +C)s+(2B +E). Sorting by powers of s we get  3  s : 0=A +C      2  s : 0 = 2A + B +E , 1 s : 0 = 2A + 2B + C      0  s : 1= 2B +E ©Larry

Turyn, October 13, 2013

p. 64

so   1 A  B   2     C = 2 0 E 

0 1 2 2

1 0 1 0

−1  0  1    0   1

   0 0 −1 2 1 −2  −2 −1  0 1 0  2 1 =   0  5  6 −2 −1 2  0 1 1 4 2 −4 3





 −2  1 1  =    5  2 . 3

Substitute these results into (?) to get that Y (s)) =

as + b + 2a 1 + s2 + 2s + 2 5



−2s + 1 2s + 3 + 2 s2 + 1 s + 2s + 2

 .

As noted earlier, all terms whose denominator is (s + 1)2 + 1 are transient. The steady state solution is   1 −2s + 1 1 yS (t) = L−1 = (−2 cos t + sin t). 2 5 s +1 5 4.4.1.20. (a) Y (s) =

s−2 (s2 +1)(s2 +2s+5)

is the Laplace transform of a solution y(t) of an IVP. Partial fractions:

As + B Cs + E s−2 = 2 + 2 (s2 + 1)(s2 + 2s + 5) s +1 s + 2s + 5 ⇒ (?) s−2 = (As+B)(s2 +2s+5)+(Cs+E)(s2 +1) = (A+C)s3 +(2A+B+E)s2 +(5A+2B+C)s+(5B+E). Sorting by powers of s we get   3 s : 0=A +C       2 s : 0 = 2A + B +E , 1 s : 1 = 5A + 2B + C      0  s : −2 = 5B +E so  −1          0 1 0 1 0 A 0 −2 1 2 −1 4  B   2 1 0 1   0      1  1 2          0  = 1  −3  .  −1 −2  C  =  5 2 1 0   1  = 10  12 −1 −2 1   1  10  −4  −2 0 5 0 1 E −2 5 10 −5 0 −5 Substitute these results into (?) to get that   −4s − 5 1 4s − 3 Y (s) = + . 10 s2 + 1 s2 + 2s + 5     −4s − 5 −1 −4(s + 1 − 1) − 5 Note that L−1 = L = e−t (...) will be transient and thus will not be (s2 + 2s + 5) (s + 1)2 + 4 part of the desired steady state solution. The steady state solution is   1 −1 4s − 3 1 yS (t) = L = (4 cos t − 3 sin t). 10 s2 + 1 10 (b) Ex. : For convenience, we assume that the desired IVP is of the form y¨ + py˙ + qy = F0 cos ωt + F1 sin ωt. In this case, to have steady state frequency ω, the solution   s−2 −1 −1 L [ Y (s) ] = L (s2 + 1)(s2 + 2s + 5) should have ω = 1 and the left hand side of the ODE should be y¨ + 2y˙ + 5y, hence p = 2 and q = 5. Method 1: Also, in this case, the steady state solution, yS (t) = (a), should be a particular solution the ODE, hence F0 cos ωt + F1 sin ωt = y¨S + 2y˙ S + 5yS =

1 10 (4 cos t

− 3 sin t), which we found in part

1 1 1 (−4 cos t + 3 sin t) + 2 · (−4 sin t − 3 cos t) + 5 · (4 cos t − 3 sin t) 10 10 10 ©Larry

Turyn, October 13, 2013

p. 65

= cos t − 2 sin t, hence F0 = 1 and F1 = −2. To find the ICs, we may continue the work of part (a) to find     −4s − 5 1 1 −1 −4(s + 1 − 1) − 5 1 −1 4s − 3 −1 y(t) = L [ Y (s) ] = L + = (4 cos t − 3 sin t) + L 10 s2 + 1 s2 + 2s + 5 10 10 (s + 1)2 + 4      1 1 1 −1 −4(s + 1) − 1 1 −t − 4 cos 2t − = (4 cos t − 3 sin t) + L = 4 cos t − 3 sin t + e sin 2t , 10 10 (s + 1)2 + 4 10 2 so y(0) = 0. Also, it follows that    1 1 y(t) ˙ = −4 sin t − 3 cos t + e−t 4 cos 2t + sin 2t + 8 sin 2t − cos 2t , 10 2 so y(0) ˙ = 0. To summarize, Y (s) is the Laplace transform of the solution of the IVP    y¨ + 2y˙ + 5y = cos t − 2 sin t  .   y(0) = 0, y(0) ˙ =0 Method 2: Take the Laplace transform of both sides of the ODE to get  s2 Y − sy(0) − y(0) ˙ + 2 sY − y(0) + 5Y = F0 hence Y (s) =

s2

s 1 + F1 2 , +1 s +1

sy(0) + y(0) ˙ + 2y(0) s2 + 1 F0 s + F1 · 2 + 2 s + 2s + 5 s + 1 (s2 + 1)(s2 + 2s + 5)

that is,  sy(0) + y(0) ˙ + 2y(0) (s2 + 1) + F0 s + F1 , Y (s) = (s2 + 2s + 5)(s2 + 1) hence  y(0)s3 + y(0) ˙ + 2y(0) s2 + (y(0) + F0 )s + y(0) ˙ + 2y(0) + F1 s−2 = Y (s) = . 2 2 2 2 (s + 1)(s + 2s + 5) (s + 2s + 5)(s + 1) Sorting by powers of s gives four equations in the four unknown constants y(0), y(0), ˙ F0 , F1 :  3  s : 0 = y(0)      2  s : 0 = 2y(0) + y(0) ˙ . 1 = y(0) + F0  s1 :     0  s : −2 = 2y(0) + y(0) ˙ + F1 It follows that, in succession, we have y(0) = 0, hence F y(0) ˙ = 0, as well as F0 = 1, and finally F0 = −2. To summarize, Y (s) is the Laplace transform of the solution of the IVP    y¨ + 2y˙ + 5y = cos t − 2 sin t  .   y(0) = 0, y(0) ˙ =0 Aside: Note that we can get completely different solutions of this problem if, instead, we assume the ODE is of the form y¨ + ω02 y = G0 e−t cos ωt + G1 e−t sin ωt. √ √ 4.4.1.21. L−1 [ y(t) ] has terms that come from the Laplace transform of cos( 3 t) and/or sin( 3 t), as well as from cos(t) and/or sin(t). For a second order ODE that models a sinusoidally forced oscillator problem, ©Larry


this can only happen if there is no damping and two unequal frequencies, 1 and √3. This can happen only in the beats phenomenon case.
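The partial-fraction decomposition used at the start of 4.4.1.20's Method 1 can be spot-checked numerically. The following sketch is not part of the manual; it simply compares the two forms of Y(s) at a few sample points.

```python
# Numerical check (not from the manual): verify the partial fractions
#   Y(s) = (s - 2)/((s^2 + 1)(s^2 + 2s + 5))
#        = (1/10)(4s - 3)/(s^2 + 1) + (1/10)(-4s - 5)/(s^2 + 2s + 5).

def Y(s):
    return (s - 2) / ((s**2 + 1) * (s**2 + 2*s + 5))

def Y_partial(s):
    return 0.1 * (4*s - 3) / (s**2 + 1) + 0.1 * (-4*s - 5) / (s**2 + 2*s + 5)

for s in [0.5, 1.0, 2.0, 3.7, 10.0]:
    assert abs(Y(s) - Y_partial(s)) < 1e-12
```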

4.4.1.22. Because s² + 4s + 10 = (s + 2)² + 6, L⁻¹[ Y(s) ] has terms that come from the Laplace transform of cos(3t) and/or sin(3t), as well as from e^{−2t} cos(√6 t) and/or e^{−2t} sin(√6 t). For a second order ODE that models a sinusoidally forced oscillator problem, this can happen if there is a positive damping coefficient. In this case this is in the steady state solution case, and the steady state solution is

y_S(t) = L⁻¹[ −(s + 2)/(s² + 9) ] = −cos(3t) − (2/3) sin(3t).

Alternatively, there is a completely different solution of this problem: The instructions did not say that the oscillator has to be forced by a sinusoidal input. Instead, the ODE could be of the form ÿ + 9y = e^{−2t}( F0 cos(√6 t) + F1 sin(√6 t) ). This case can still be described as being in the steady state solution case, with steady state solution y_S(t) = L⁻¹[ −(s + 2)/(s² + 9) ] = −cos(3t) − (2/3) sin(3t).

4.4.1.23. L[ p_n(t) ] = L[ 1 + t + (1/2!)t² + ... + (1/n!)tⁿ ] = 1/s + 1/s² + (1/2!)·(2!/s³) + ... + (1/n!)·(n!/s^{n+1}), so

L[ p_n(t) ] = 1/s + 1/s² + 1/s³ + ... + 1/s^{n+1} = (1/s)( 1 + 1/s + 1/s² + ... + 1/sⁿ ).

Using a finite geometric series we get

L[ p_n(t) ] = (1/s) · ( 1 − (1/s)^{n+1} )/( 1 − 1/s ) = ( 1 − (1/s)^{n+1} )/( s − 1 ) → 1/(s − 1) = L[ e^t ],

as n → ∞, for s > 1.
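The finite-sum formula in 4.4.1.23 can be checked numerically by approximating the Laplace integral directly. The sketch below is not from the manual; the helper name laplace_pn_numeric and the truncation T = 60 are choices of ours.

```python
# Numerical check (not from the manual): for p_n(t) = sum_{k=0}^n t^k/k!,
# L[p_n] should equal (1 - s^{-(n+1)})/(s - 1).  We approximate the Laplace
# integral over [0, T] with the midpoint rule; the tail beyond T is negligible.
from math import exp, factorial

def laplace_pn_numeric(s, n, T=60.0, steps=100_000):
    dt = T / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                       # midpoint of the i-th subinterval
        pn = sum(t**k / factorial(k) for k in range(n + 1))
        total += exp(-s * t) * pn * dt
    return total

s, n = 2.0, 5
closed_form = (1.0 - (1.0 / s)**(n + 1)) / (s - 1.0)
assert abs(laplace_pn_numeric(s, n) - closed_form) < 1e-4
```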


Section 4.5.8

4.5.8.1. L[ 7 + 3e^{−2t} − t step(t − 4) ] = 7 L[1] + 3 L[e^{−2t}] − L[ t step(t − 4) ] = 7·(1/s) + 3·(1/(s + 2)) − e^{−4s} L[ t + 4 ]
= 7/s + 3/(s + 2) − ( 4/s + 1/s² ) e^{−4s}.

4.5.8.2. L[ (t − 2) step(t − 2) − 5t step(t − 3) ] = L[ (t − 2) step(t − 2) ] − 5 L[ t step(t − 3) ]
= e^{−2s} L[ (t + 2) − 2 ] − 5 e^{−3s} L[ t + 3 ] = (1/s²) e^{−2s} − 5( 3/s + 1/s² ) e^{−3s}.

4.5.8.3. f(t) = −t + 4 for 0 ≤ t < …, and f(t) = t for …, so …

5.1.3.1. … This is illustrated in the figure below. Note that a21 < 0. With this new assumption, the system of ODEs becomes

ẋ = [ [a11, a12, a13, 0, 0],
      [a21 − a12, a22, 0, 0, 0],
      [0, a32, a33, 0, 0],
      [0, 0, a43, 0, 0],
      [a51, 0, 0, 0, 0] ] x + [ f1, 0, 0, 0, 0 ]^T.

Figure 1: Problem 5.1.3.1: Modification of Example 5.6

5.1.3.2. With k1 = 2, k2 = 4, k3 = 5, m1 = 2, and m2 = 1, the ODE system (5.12) in the textbook becomes

ẍ = [ [−3, 2], [4, −9] ] x ≜ Ax.

5.1.3.3. As in Example 5.5, assume x1 > 0 when the first object is to the right of its equilibrium position, and similarly for x2 > 0 and x3. The easiest forces to understand are (a) the force the first spring exerts only on the first object, and (b) the force the fourth spring exerts only on the third object.

The first spring is stretched a distance of x1 if x1 > 0 and, conversely, the first spring is compressed a distance of −x1 if x1 < 0. The first spring exerts a force of −k1 x1 on the first object, so the first spring acts to bring the first object back to equilibrium. The fourth spring is compressed by a distance of x3; equivalently, the fourth spring is stretched by a distance of −x3. In the picture, x3 > 0, so the position of the third object contributes a positive compression to the length of the fourth spring. The fourth spring exerts on the third object a force of −k4 x3. [If x3 > 0, then the fourth spring's force acts to bring the third object back to equilibrium.]

The second spring is compressed by a distance of x1 if x1 > 0; equivalently, the second spring is stretched by a distance of −x1. The second spring is also stretched by a distance of x2 if x2 > 0; equivalently, the second spring is compressed by a distance of −x2. So, the second spring has (net compression) = x1 + (−x2) = (x1 − x2), that is, the second spring has (net stretch) = −(net compression) = (x2 − x1). The second spring exerts on the second object a force of −k2(x2 − x1). [In the picture, x1 > x2, so the second spring pushes the second object to the right.] The second spring exerts on the first object the opposite force of k2(net stretch), that is, k2(x2 − x1). [For example, the picture has x1 > x2, so the second spring pulls the first object to the right as the second spring tries to shrink to its unstretched length, ℓ.]

The third spring is compressed by a distance of x2 if x2 > 0; equivalently, the third spring is stretched by a distance of −x2. The third spring is also stretched by a distance of x3 if x3 > 0; equivalently, the third spring is


Figure 2: Problem 5.1.3.3: Three masses and four springs

compressed by a distance of −x3. So, the third spring has (net compression) = x2 + (−x3) = (x2 − x3), that is, the third spring has (net stretch) = −(net compression) = (x3 − x2). The third spring exerts on the third object a force of −k3(x3 − x2). The third spring exerts on the second object the opposite force of k3(net stretch), that is, k3(x3 − x2).

Newton's second law of motion gives us the ODEs

m1 ẍ1 = Σ(forces on first object) = −k1 x1 + k2(x2 − x1),
m2 ẍ2 = Σ(forces on second object) = −k2(x2 − x1) + k3(x3 − x2),

and

m3 ẍ3 = Σ(forces on third object) = −k3(x3 − x2) − k4 x3.

Later we will divide through these three equations by m1, m2, or m3, respectively. Recall that we assumed this system has no damping forces.

We can write this system of second order ODEs in terms of the vector x = [x1, x2, x3]^T:

ẍ = [ [−(k1 + k2)/m1, k2/m1, 0],
      [k2/m2, −(k2 + k3)/m2, k3/m2],
      [0, k3/m3, −(k3 + k4)/m3] ] x ≜ Ax.

5.1.3.4. Define y1(t) = x1, v1(t) = ẋ1(t) = ẏ1(t), y2(t) = x2, and v2(t) = ẋ2(t) = ẏ2(t). Then ẏj = vj and v̇j = ÿj = ẍj, for j = 1, 2. So, the ODE system (5.12) in the textbook is equivalent to

d/dt [ y1(t), v1(t), y2(t), v2(t) ]^T = [ [0, 1, 0, 0],
                                          [−(k1 + k2)/m1, 0, k2/m1, 0],
                                          [0, 0, 0, 1],
                                          [k2/m2, 0, −(k2 + k3)/m2, 0] ] [ y1(t), v1(t), y2(t), v2(t) ]^T.

5.1.3.5. Let V1, V2 be the volumes, in gallons, of the mixtures in the two tanks. Figure 5.5 in the textbook has flow rates of mixtures into or out of the tanks. We have

V̇1 = total rate of change of mixture volume in tank #1 = +4 − 1 − 5 + 2 = 0

and

V̇2 = total rate of change of mixture volume in tank #2 = +5 − 2 − 3 = 0.

It follows that V1(t) ≡ V1(0) and V2(t) ≡ V2(0).

Also let A1, A2 be the amounts of dye, in pounds, in the mixtures in the two tanks. Figure 5.5 in the textbook has flow rates, and possibly dye concentrations of mixtures, into or out of the tanks. If a dye concentration is not given, then the assumption that the mixture in each tank is well-mixed implies that the dye concentration in any flow out of a certain tank equals the concentration of dye in the mixture in that tank. The amount of dye that flows in or out equals the flow rate times the dye concentration in the flow, so

Ȧ1 = total rate of change of dye amount in tank #1 = 4·2 − 1·(A1/V1) − 5·(A1/V1) + 2·(A2/V2) = 8 − (6/V1(0)) A1 + (2/V2(0)) A2

and

Ȧ2 = total rate of change of dye amount in tank #2 = +5·(A1/V1) − 2·(A2/V2) − 3·(A2/V2) = (5/V1(0)) A1 − (5/V2(0)) A2.

The system of ODEs for the amounts of dye in the mixtures in the tanks is

Ȧ1 = 8 − (6/V1(0)) A1 + (2/V2(0)) A2,
Ȧ2 = (5/V1(0)) A1 − (5/V2(0)) A2.

5.1.3.6. In the first loop, Kirchhoff's voltage law gives

L İ1(t) + R1 ( I1(t) − I2(t) ) = V(t),

hence

(1) İ1(t) = −(R1/L) I1(t) + (R1/L) I2(t) + (1/L) V(t).

The input V(t) is a given function. In the second loop, Kirchhoff's voltage law gives the algebraic equation

R2 I2(t) + v2(t) + R1 ( I2(t) − I1(t) ) = 0,

hence

I2(t) = ( R1/(R1 + R2) ) I1(t) − ( 1/(R1 + R2) ) v2(t).

Take the time derivative of both sides of the latter equation to get

(2) İ2(t) = ( R1/(R1 + R2) ) İ1(t) − ( 1/(R1 + R2) ) v̇2(t).

In terms of the loop current I2(t), the voltage across the capacitor satisfies

(3) v̇2(t) = (1/C2) I2.

Substitute (1) and (3) into (2) to get

İ2(t) = ( R1/(R1 + R2) ) ( −(R1/L) I1(t) + (R1/L) I2(t) + (1/L) V(t) ) − ( 1/((R1 + R2) C2) ) I2,

that is,

(4) İ2(t) = −( R1²/((R1 + R2)L) ) I1(t) + ( R1²/((R1 + R2)L) − 1/((R1 + R2)C2) ) I2(t) + ( R1/((R1 + R2)L) ) V(t).

Together, (1) and (4) give a linear system in ℝ², that is, a linear system of two ODEs in two unknowns, I1(t) and I2(t):

d/dt [ I1(t), I2(t) ]^T = [ [−R1/L, R1/L],
                            [−R1²/((R1 + R2)L), R1²/((R1 + R2)L) − 1/((R1 + R2)C2)] ] [ I1(t), I2(t) ]^T + [ (1/L) V(t), ( R1/((R1 + R2)L) ) V(t) ]^T.
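As a sanity check on (1) and (4) — not part of the manual, and using hypothetical component values R1 = R2 = 1 Ω, L = 1 H, C2 = 1 F — the coefficient matrix can be evaluated and the unforced system (V ≡ 0) integrated by forward Euler:

```python
# Hypothetical component values (not from the manual): R1 = R2 = 1, L = 1, C2 = 1.
R1, R2, L, C2 = 1.0, 1.0, 1.0, 1.0

A = [
    [-R1 / L, R1 / L],                                   # from equation (1)
    [-R1**2 / ((R1 + R2) * L),
     R1**2 / ((R1 + R2) * L) - 1.0 / ((R1 + R2) * C2)],  # from equation (4)
]
assert A == [[-1.0, 1.0], [-0.5, 0.0]]

# With V = 0 the currents should decay (the eigenvalues are (-1 ± i)/2):
I = [1.0, 1.0]
dt = 0.01
for _ in range(3000):                     # integrate to t = 30
    dI = [A[0][0]*I[0] + A[0][1]*I[1], A[1][0]*I[0] + A[1][1]*I[1]]
    I = [I[0] + dt*dI[0], I[1] + dt*dI[1]]
assert abs(I[0]) < 0.01 and abs(I[1]) < 0.01
```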


5.1.3.7. Let T1, T2, M be the temperatures, respectively, of the two objects and the medium. Apply Newton's Law of Cooling to each of the two objects to get

Ṫ1(t) = −k_{T,1}(T1 − M),
Ṫ2(t) = −k_{T,2}(T2 − M),

where k_{T,1} and k_{T,2} are constants dependent on the material natures of the two objects, respectively. Apply Newton's Law of Cooling to the medium to get

Ṁ(t) = −k_M(M − T1) − k_M(M − T2),

where k_M is a constant dependent on the medium's material nature. So, the temperature of the medium affects the temperature of the two objects, which in turn affects the temperature of the medium: The temperatures of the objects and the medium are intertwined. Note that because the first object affects the medium and the medium affects the second object, indirectly the first object affects the second object. To summarize, the three temperatures satisfy the system of ODEs

d/dt [ T1, T2, M ]^T = [ [−k_{T,1}, 0, k_{T,1}],
                         [0, −k_{T,2}, k_{T,2}],
                         [k_M, k_M, −2k_M] ] [ T1, T2, M ]^T.

We'll assume that k_{T,1}, k_{T,2}, k_M are constants.
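A quick numerical illustration — not in the manual, with hypothetical rate constants and initial temperatures — that all three temperatures approach a common value, as the coupling suggests:

```python
# Forward-Euler integration of the three-temperature system (sketch, not from
# the manual).  Rate constants and initial temperatures are hypothetical.
kT1, kT2, kM = 0.5, 0.5, 0.5
T1, T2, M = 100.0, 0.0, 50.0
dt = 0.01
for _ in range(20_000):                 # integrate to t = 200
    dT1 = -kT1 * (T1 - M)
    dT2 = -kT2 * (T2 - M)
    dM  = -kM * (M - T1) - kM * (M - T2)
    T1, T2, M = T1 + dt*dT1, T2 + dt*dT2, M + dt*dM

# All three temperatures should be essentially equal by now.
assert max(T1, T2, M) - min(T1, T2, M) < 1e-3
```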


Section 5.2.5

5.2.5.1. 0 = det[ [5 − λ, 4], [4, −1 − λ] ] = (5 − λ)(−1 − λ) − 16 = λ² − 4λ − 21 = (λ + 3)(λ − 7)
⇒ eigenvalues are λ1 = −3, λ2 = 7.

[ A − λ1 I | 0 ] = [ [8, 4 | 0], [4, 2 | 0] ] ∼ [ [1, 1/2 | 0], [0, 0 | 0] ], after −(1/2)R1 + R2 → R2, (1/8)R1 → R1
⇒ v1 = c1 [1, −2]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = −3.

[ A − λ2 I | 0 ] = [ [−2, 4 | 0], [4, −8 | 0] ] ∼ [ [1, −2 | 0], [0, 0 | 0] ], after 2R1 + R2 → R2, −(1/2)R1 → R1
⇒ v2 = c1 [2, 1]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = 7.

The general solution of the system is

x(t) = c1 e^{−3t} [1, −2]^T + c2 e^{7t} [2, 1]^T, where c1, c2 = arbitrary constants.

5.2.5.2. 0 = det[ [−3 − λ, √5], [√5, 1 − λ] ] = (−3 − λ)(1 − λ) − 5 = λ² + 2λ − 8 = (λ + 4)(λ − 2)
⇒ eigenvalues are λ1 = −4, λ2 = 2.

[ A − λ1 I | 0 ] = [ [1, √5 | 0], [√5, 5 | 0] ] ∼ [ [1, √5 | 0], [0, 0 | 0] ], after −√5 R1 + R2 → R2
⇒ v1 = c1 [−√5, 1]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = −4.

[ A − λ2 I | 0 ] = [ [−5, √5 | 0], [√5, −1 | 0] ] ∼ [ [1, −1/√5 | 0], [0, 0 | 0] ], after (1/√5)R1 + R2 → R2, −(1/5)R1 → R1
⇒ v2 = c1 [1, √5]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = 2.

The general solution of the system is

x(t) = c1 e^{−4t} [−√5, 1]^T + c2 e^{2t} [1, √5]^T, where c1, c2 = arbitrary constants.

5.2.5.3. Using the fact that the determinant of an upper triangular matrix is the product of the diagonal entries, the characteristic equation is

0 = det[ [2 − λ, 1, 0], [0, 3 − λ, 1], [0, 0, −1 − λ] ] = (2 − λ)(3 − λ)(−1 − λ)

⇒ eigenvalues are λ1 = −1, λ2 = 2, λ3 = 3.

[ A − λ1 I | 0 ] = [ [3, 1, 0 | 0], [0, 4, 1 | 0], [0, 0, 0 | 0] ] ∼ [ [1, 0, −1/12 | 0], [0, 1, 1/4 | 0], [0, 0, 0 | 0] ], after (1/4)R2 → R2, −R2 + R1 → R1, (1/3)R1 → R1
⇒ v1 = c1 [1, −3, 12]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = −1.

[ A − λ2 I | 0 ] = [ [0, 1, 0 | 0], [0, 1, 1 | 0], [0, 0, −3 | 0] ] ∼ [ [0, 1, 0 | 0], [0, 0, 1 | 0], [0, 0, 0 | 0] ], after −R1 + R2 → R2, 3R2 + R3 → R3
⇒ v2 = c1 [1, 0, 0]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = 2.


  −1 1 0| 0 1 | 0 ∼  [A − λ3 I | 0 ] = 0 0 0 0 −4 | 0   1 ⇒ v3 = c1  1 , for any constant c1 6= 0, 0 

1 0 0

−1 0 0

0 |

1 | 0 |

 0 0 , after −R1 → R1 , 4R2 + R3 → R3 0

are the eigenvectors corresponding to eigenvalue λ3 = 3

The general solution of the system is

x(t) = c1 e^{−t} [1, −3, 12]^T + c2 e^{2t} [1, 0, 0]^T + c3 e^{3t} [1, 1, 0]^T, where c1, c2, c3 = arbitrary constants.

5.2.5.4. Expanding the determinant of the matrix along the first column, the characteristic equation is

0 = det[ [−6 − λ, 5, −5], [0, −1 − λ, 2], [0, 7, 4 − λ] ] = (−6 − λ)[ (−1 − λ)(4 − λ) − 14 ] = (−6 − λ)(λ² − 3λ − 18) = (−6 − λ)(λ − 6)(λ + 3)

⇒ eigenvalues are λ1 = −6, λ2 = −3, λ3 = 6.

[ A − λ1 I | 0 ] = [ [0, 5, −5 | 0], [0, 5, 2 | 0], [0, 7, 10 | 0] ] ∼ [ [0, 1, −1 | 0], [0, 0, 7 | 0], [0, 0, 17 | 0] ], after −R1 + R2 → R2, −(7/5)R1 + R3 → R3, (1/5)R1 → R1
∼ [ [0, 1, 0 | 0], [0, 0, 1 | 0], [0, 0, 0 | 0] ], after −(17/7)R2 + R3 → R3, (1/7)R2 → R2, R2 + R1 → R1
⇒ v1 = c1 [1, 0, 0]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = −6.

[ A − λ2 I | 0 ] = [ [−3, 5, −5 | 0], [0, 2, 2 | 0], [0, 7, 7 | 0] ] ∼ [ [−3, 0, −10 | 0], [0, 1, 1 | 0], [0, 0, 0 | 0] ], after (1/2)R2 → R2, −7R2 + R3 → R3, −5R2 + R1 → R1
∼ [ [1, 0, 10/3 | 0], [0, 1, 1 | 0], [0, 0, 0 | 0] ], after −(1/3)R1 → R1
⇒ v2 = c1 [−10, −3, 3]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = −3.

[ A − λ3 I | 0 ] = [ [−12, 5, −5 | 0], [0, −7, 2 | 0], [0, 7, −2 | 0] ] ∼ [ [−12, 0, −25/7 | 0], [0, 1, −2/7 | 0], [0, 0, 0 | 0] ], after R2 + R3 → R3, −(1/7)R2 → R2, −5R2 + R1 → R1
∼ [ [1, 0, 25/84 | 0], [0, 1, −2/7 | 0], [0, 0, 0 | 0] ], after −(1/12)R1 → R1
⇒ v3 = c1 [−25/84, 2/7, 1]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ3 = 6.

The general solution of the system is

x(t) = c1 e^{−6t} [1, 0, 0]^T + c2 e^{−3t} [−10, −3, 3]^T + c3 e^{6t} [−25, 24, 84]^T, where c1, c2, c3 = arbitrary constants.
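A direct check — not part of the manual — that each eigenpair found in 5.2.5.4 satisfies A v = λ v:

```python
# Verify A v = lambda v for the eigenpairs of 5.2.5.4 (sketch, not from the manual).
A = [[-6, 5, -5],
     [ 0, -1, 2],
     [ 0, 7, 4]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

for lam, v in [(-6, [1, 0, 0]), (-3, [-10, -3, 3]), (6, [-25, 24, 84])]:
    assert matvec(A, v) == [lam * x for x in v]
```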


5.2.5.5. 0 = det[ [−3 − λ, √2], [√2, −2 − λ] ] = (−3 − λ)(−2 − λ) − 2 = λ² + 5λ + 4 = (λ + 4)(λ + 1)
⇒ eigenvalues are λ1 = −4, λ2 = −1.

[ A − λ1 I | 0 ] = [ [1, √2 | 0], [√2, 2 | 0] ] ∼ [ [1, √2 | 0], [0, 0 | 0] ], after −√2 R1 + R2 → R2
⇒ v1 = c1 [−√2, 1]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = −4.

[ A − λ2 I | 0 ] = [ [−2, √2 | 0], [√2, −1 | 0] ] ∼ [ [1, −1/√2 | 0], [0, 0 | 0] ], after (1/√2)R1 + R2 → R2, −(1/2)R1 → R1
⇒ v2 = c1 [1, √2]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = −1.

The general solution of the system is

x(t) = c1 e^{−4t} [−√2, 1]^T + c2 e^{−t} [1, √2]^T, where c1, c2 = arbitrary constants.

Because all eigenvalues are negative, all solutions → 0 as t → ∞. So, there is a time constant. Its value is τ = 1/min{4, 1} = 1.

5.2.5.6. Expanding the determinant of the matrix along the second column, the characteristic equation is

0 = det[ [−3 − λ, 0, −1], [−1, −4 − λ, 1], [−1, 0, −3 − λ] ] = (−4 − λ) det[ [−3 − λ, −1], [−1, −3 − λ] ] = (−4 − λ)[ (−3 − λ)² − 1 ]
= (−4 − λ)[ (−3 − λ) − 1 ][ (−3 − λ) + 1 ] = (−4 − λ)(−4 − λ)(−2 − λ)

⇒ eigenvalues are λ1 = λ2 = −4, λ3 = −2.

[ A − λ1 I | 0 ] = [ [1, 0, −1 | 0], [−1, 0, 1 | 0], [−1, 0, 1 | 0] ] ∼ [ [1, 0, −1 | 0], [0, 0, 0 | 0], [0, 0, 0 | 0] ], after R1 + R2 → R2, R1 + R3 → R3
⇒ v2 = c1, v3 = c2 are free variables ⇒ v = [c2, c1, c2]^T = c1 [0, 1, 0]^T + c2 [1, 0, 1]^T ≜ c1 v1 + c2 v2, for any constants c1, c2 with |c1| + |c2| > 0, are the eigenvectors corresponding to eigenvalues λ1 = λ2 = −4.

[ A − λ3 I | 0 ] = [ [−1, 0, −1 | 0], [−1, −2, 1 | 0], [−1, 0, −1 | 0] ] ∼ [ [1, 0, 1 | 0], [0, −2, 2 | 0], [0, 0, 0 | 0] ], after −R1 → R1, R1 + R2 → R2, R1 + R3 → R3
∼ [ [1, 0, 1 | 0], [0, 1, −1 | 0], [0, 0, 0 | 0] ], after −(1/2)R2 → R2
⇒ v3 = c1 [−1, 1, 1]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ3 = −2.

The general solution of the system is

x(t) = c1 e^{−4t} [0, 1, 0]^T + c2 e^{−4t} [1, 0, 1]^T + c3 e^{−2t} [−1, 1, 1]^T, where c1, c2, c3 = arbitrary constants.

Because all eigenvalues are negative, all solutions → 0 as t → ∞. So, there is a time constant. Its value is τ = 1/min{4, 2} = 1/2.


5.2.5.7. 0 = det[ [a − λ, 0], [b, c − λ] ] = (a − λ)(c − λ) ⇒ eigenvalues are λ1 = a, λ2 = c.

Case 1: If b ≠ 0,

[ A − λ1 I | 0 ] = [ [0, 0 | 0], [b, c − a | 0] ] ∼ [ [1, b⁻¹(c − a) | 0], [0, 0 | 0] ], after R1 ↔ R2, b⁻¹R1 → R1
⇒ v1 = c1 [a − c, b]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = a.

Case 2: If b = 0,

[ A − λ1 I | 0 ] = [ [0, 0 | 0], [0, c − a | 0] ] ∼ [ [0, 1 | 0], [0, 0 | 0] ], after R1 ↔ R2, (c − a)⁻¹R1 → R1
⇒ v1 = c̃1 [1, 0]^T, for any constant c̃1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = a, if b = 0.

But, if b = 0, then the eigenvectors corresponding to eigenvalue λ1 = a can, instead, be written as v1 = c1 [a − c, 0]^T, because a − c ≠ 0.

We also need an eigenvector(s) corresponding to eigenvalue λ2:

[ A − λ2 I | 0 ] = [ [a − c, 0 | 0], [b, 0 | 0] ] ∼ [ [1, 0 | 0], [0, 0 | 0] ], after (a − c)⁻¹R1 → R1, −bR1 + R2 → R2
⇒ v2 = c1 [0, 1]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = c.

Whether or not b ≠ 0, the general solution of the system can be written as

x(t) = c1 e^{at} [a − c, b]^T + c2 e^{ct} [0, 1]^T, where c1, c2 = arbitrary constants.

There is a time constant if, and only if, both a and c are negative. If they are, the time constant is τ = 1/min{|a|, |c|}.

5.2.5.8. 0 = det[ [1 − λ, 1], [4, 1 − λ] ]

= (1 − λ)(1 − λ) − 4 = λ² − 2λ − 3 = (λ + 1)(λ − 3) ⇒ eigenvalues are λ1 = −1, λ2 = 3.

[ A − λ1 I | 0 ] = [ [2, 1 | 0], [4, 2 | 0] ] ∼ [ [1, 1/2 | 0], [0, 0 | 0] ], after −2R1 + R2 → R2, (1/2)R1 → R1
⇒ v1 = c1 [1, −2]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = −1.

[ A − λ2 I | 0 ] = [ [−2, 1 | 0], [4, −2 | 0] ] ∼ [ [1, −1/2 | 0], [0, 0 | 0] ], after 2R1 + R2 → R2, −(1/2)R1 → R1
⇒ v2 = c1 [1, 2]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = 3.

The general solution of the system is

x(t) = c1 e^{−t} [1, −2]^T + c2 e^{3t} [1, 2]^T, where c1, c2 = arbitrary constants.

The IC requires that

[0, −2]^T = [ [1, 1], [−2, 2] ] [c1, c2]^T,

hence

[c1, c2]^T = [ [1, 1], [−2, 2] ]⁻¹ [0, −2]^T = (1/4) [ [2, −1], [2, 1] ] [0, −2]^T = (1/4) [2, −2]^T.

The solution of the IVP is

x(t) = (1/2) e^{−t} [1, −2]^T − (1/2) e^{3t} [1, 2]^T.

5.2.5.9. This is the same system of ODEs as    1 x(t) = c1 e−t  −3  + c2 e2t  12

in problem 5.2.5.3, where we found the general solution to be    1 1 0  + c3 e3t  1  , where c1 , c2 , c3 = arbitrary constants. 0 0   e−t e2t e3t −t 3t 0 e . So, a fundamental matrix is given by Z(t) =  −3e 12e−t 0 0 −1 − λ 5.2.5.10. 0 = 0

1 = (−1 − λ)(−2 − λ) − 0 = (−1 − λ)(−2 − λ) −2 − λ

⇒ eigenvalues are λ1 = −2, λ2 = −1    

1 1| 0 , ⇒ v1 = c1 −1 , for any constant c1 6= 0, are the eigenvectors | [ A − λ1 I | 0 ] =  1 0 0| 0 corresponding to eigenvalue λ1 = −2 

     1| 0 0 1 | 0 ∼ , after R1 + R2 ↔ R2 ⇒ v2 = c1 1 , for any constant c1 6= 0, | | [ A − λ2 I | 0 ] =  0 0 −1 | 0 0 0| 0 are the eigenvectors corresponding to eigenvalue λ2 = 3 The general solution of the system is     −1 1 x(t) = c1 e−3t + c2 e3t , where c1 , c2 = arbitrary constants. 1 0   −e−2t e−t  . We find etA by calculating So, a fundamental matrix is given by Z(t) =  −2t e 0

e

tA

0

 −e−2t −1 = Z(t) Z(0) = e−2t

e−t



−1

1

1

0





0 

√ − 3 √ − 3−λ

1

∼ 0  ⇒ v1 = c 1

−1 + 2



3

−e−2t

=

−e−2t + e−t

0

e−2t

e−2t

e−t 0



 1   −1

0

−1

−1

−1

 

 .

√ √ = ( 3 − λ)(− 3 − λ) − 6 = λ2 − 9 = (λ + 3)(λ − 3)

⇒ eigenvalues are λ1 = −3, λ2 = 3 √  √ 3+3 − 3| | [ A − λ1 I | 0 ] =  √ √ −2 3 − 3 + 3 | 



e−t

= √ 3−λ √ 5.2.5.11. 0 = −2 3

−1

0





∼ 0 √ 1− 3 2

| | 0|

0

1 √ 3+3

√ 1− 3 2

| √ | − 3|

 ,

after − (3 +

0

 , after R1 ↔ R2 , −

1 √ 2 3

R1 → R 1

0

√ 3)R1 + R2 → R2 .

0

 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −3


3−3

√ −2 3

√ − 3| | √ − 3−3| 

0





∼

| | 0|

∼ 0

| √ | − 3|

√ 3−3

0 √ 1+ 3 2

1

√ 1+ 3 2

1

0

 after (3 −

,





0

, after R1 ↔ R2 , −

1 √ 2 3

R1 → R 1

0

3)R1 + R2 → R2

0

√  −1 − 3 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 3 2 The general solution of the system is √  √    −1 − 3 −1 + 3 + c2 e3t , where c1 , c2 = arbitrary constants. x(t) = c1 e−3t 2 2 

⇒ v2 = c 1



(−1 +

So, a fundamental matrix is given by Z(t) =   e

tA

= Z(t) Z(0)

−1

 =

 1 = √  4 3

(−1 +

=

(−1 +

(−1 −

√ 3t   2 3)e 1  √  4 3 2e3t −2

√ −3t 3)e

(−1 −

3)e−3t − 2(−1 −



4e−3t − 4e3t

√ 5 −3 − λ

(1 +

(−1 −

2(1 +





(−1 +

3)



√ 3)





3)e−3t + 2(−1 +



3)e3t

e−3t − e3t (1 +



3)e−3t + (−1 +

√ −1 3) 

1

2e−3t − 2e3t

3)e3t

√ √  (−1 + 3)e−3t + (1 + 3)e3t 1  = √ 2 3 2e−3t − 2e3t 1−λ 5.2.5.12. 0 = √ 5

(−1 −

√ √ 3t   (−1 + 3) 3)e  2e3t 1

√ −3t 3)e

2e−3t



√ 3t  3)e  . We find etA by calculating 2e3t

3)e−3t

2e−3t

2e−3t

2(−1 +





 √

3)e3t

.

= (1 − λ)(−3 − λ) − 5 = λ2 + 2λ − 8 = (λ + 4)(λ − 2)

⇒ eigenvalues are λ1 = −4, λ2 = 2 √     √1 |

0 5 5| 0 1 5 √ ∼ , after 1 R1 → R1 , − 5 R1 + R2 → R2 | [ A − λ1 I | 0 ] =  √ | 5 5 1| 0 0 0| 0   1 √ , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −4 ⇒ v1 = c 1 − 5 

−1

[ A − λ2 I | 0 ] =  √

√ 5| | −5 |

0





∼

1

√ − 5| | 0|

0



√ , after −R1 → R1 , − 5 R1 + R2 → R2

0 0 5 0  √  5 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 2 ⇒ v2 = c 1 1 The general solution of the system is    √  1 5 −4t 2t √ x(t) = c1 e + c2 e , where c1 , c2 = arbitrary constants. − 5 1




e−4t

So, a fundamental matrix is given by Z(t) = 

√ − 5e−4t

e

tA

 −1 = = Z(t) Z(0)

e−4t √ − 5e−4t  1 = 6



5e2t e2t



1



√ − 5

e−4t + 5e2t √ √ − 5e−4t + 5e2t

5.2.5.13. The characteristic equation is 1−λ −1 −1 0 1 − λ −1 − λ −1 − λ 3 = 0 0 = 0 −1 0 1 −λ −λ 1−λ −1 0 −1 − λ 3 = −λ 0 1 0 1

0 3 −λ

√ 2t  5e  . We find etA by calculating 2t e √ −1  e−4t 5  = √ 1 − 5e−4t √ √  − 5e−4t + 5e2t . −4t 2t 5e +e



5e2t e2t



 1 1   √ 6 5

√  − 5  1

, after the row operation R1 + R3 → R3 ,

, after the row operation R3 ← −λR3 .

Expanding along the third row, we get that the characteristic  −1 0 1 − λ −1 0 = | A − λ I | = −λ + −1 − λ 3 0 −1 − λ

equation is    = −λ − 3 + (1 − λ)(−1 − λ) = −λ − 4 + λ2

 = −λ − 4 + λ2 = −λ(−2 + λ)(2 + λ) ⇒ eigenvalues are λ1 = −2, λ2 = 0, λ3 = 2     3 −1 0 | 0

1 −1 −2 | 0 1 3 | 0 ∼  0 1 3 | 0 , after R1 ↔ R3 , 3R1 + R3 → R3 , −R1 → R1 [A − λ1 I | 0 ] = 0 −1 1 2| 0 0 2 6| 0  

0 1| 0 1 ∼ 0 1 3 | 0 , after −2R2 + R3 → R3 , R2 + R1 → R1 0 0 0 | 0   −1 ⇒ v1 = c1  −3 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −2 1     1 −1 0 | 0

0| 0 1 −1 1 −3 | 0 , after R1 + R3 → R3 , −R2 → R2 [A − λ2 I | 0 ] = 0 −1 3 | 0 ∼ 0 −1 1 0| 0 0 0 0| 0  

0 −3 | 0 1 ∼ 0 1 −3 | 0 , after R2 + R1 → R1 0 0 0| 0   3 ⇒ v2 = c1  3 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 0 1     −1 −1 0| 0

0| 0 1 1 3 | 0 ∼  0 1 −1 | 0 , after −R1 → R1 , R1 + R3 → R3 , − 13 R2 → R2 [A − λ3 I | 0 ] = 0 −3 −1 1 −2 | 0 0 2 −2 | 0  

0 1| 0 1 ∼ 0 1 −1 | 0 , after −R2 + R1 → R1 , −2R2 + R3 → R3 0 0 0| 0   −1 ⇒ v3 = c1  1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ3 = 2 1 c Larry

Turyn, October 10, 2013

page 12

The general solution of the system is      3 −1 2t −2t  −3  + c2  3  + c3 e  x(t) = c1 e 1 1  −e−2t  −3e−2t So, a fundamental matrix is given by Z(t) = e−2t

 −1 1  , where c1 , c2 , c3 = arbitrary constants. 1  3 −e2t 3 e2t  . We find etA by calculating 1 e2t

−e−2t =  −3e−2t e−2t 

e

−e−2t  = −3e−2t e−2t 

3 3 1

tA

= Z(t) Z(0)

  −e2t 1 1 e2t   2 8 e2t −3

−1

−2 0 2

3 3 1

 −e2t −1 2t   e −3 e2t 1

  3 −e−2t + 6 + 3e2t 1   2 = −3e−2t + 6 − 3e2t 8 3 e−2t + 2 − 3e2t

3 3 1

−1 −1 1  1

2e−2t − 2e2t 6e−2t + 2e2t −2e−2t + 2e2t

 −3e−2t + 6 − 3e2t −9e−2t + 6 + 3e2t  . 3e−2t + 2 + 3e2t

5.2.5.14. If the given information is enough to find a set of three eigenvectors of A that are a basis for R3 then the only eigenvalues of A must be −3 and −2 and we will be able to solve the IVP.     −2 −2 2 |0 1 1 −1 | 0 1 −1 | 0 ∼  0 0 0 | 0 , after − 12 R1 → R1 , −R1 + R2 → R2 [A − λI | 0] = [A + 2I | 0]=  1 0 0 0 |0 0 0 0 |0       1 −1 −c1 + c2 c1  = c 1  1  + c 2  0  , c1 v 1 + c 2 v 2 , ⇒ v2 = c1 , v3 = c2 are free variables ⇒ v =  1 0 c2 for any constants c1 , c2 with |c1 | + |c2 | > 0, are  −1 −2 2 2 −1 [A − λI | 0] = [A + 3I | 0] =  1 0 0 1

the eigenvectors corresponding to eigenvalues λ1 = λ2 = −2    |0

1 2 0| 0 | 0 ∼  0 0 1 | 0 , after R1 + R2 → R2 , |0 0 0 0| 0

−R1 → R1 , −R2 + R3 → R3 , 2R2 + R1 → R1   −2 ⇒ v3 = c1  1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ3 = −3 0 The general solution of the system is       −1 1 −2 −2t −3t −2t   0  + c3 e  1  , where c1 , c2 , c3 = arbitrary constants. 1  + c2 e x(t) = c1 e 0 1 0 The IC requires that 

  0 −1  0 = 1 1 0 hence 

  c1 −1  c2  =  1 c3 0

1 0 1

1 0 1

  −2 c1 1   c2  0 c3

−1    −2 0 1 1   0 = 0 0 1 −1

2 0 −1

    −1 0 −1 1  0  =  1 . 1 1 1

The solution of the IVP is  −2t

x(t) = −e

−a − λ 5.2.5.15. 0 = b

         −1 1 −2 2 −2 −2t −3t −2t −3t  1 +e  0 +e  1 =e  −1  + e  1 . 0 1 0 1 0

  b = (−a − λ)2 − b2 = (−a − λ) − b (−a − λ) + b = (−a − b − λ)(−a + b − λ) −a − λ c Larry

Turyn, October 10, 2013

page 13

⇒ eigenvalues are λ1 = −a − b, λ2 = −a + b. Because b > 0, λ1 6= λ2 , and     b b| 0

1 1| 0 ∼ , after − R1 + R2 → R2 , b−1 R1 → R1 . [ A − λ1 I | 0 ] = b b| 0 0 0| 0   −1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −a − b. ⇒ v1 = c 1 1     −b b| 0

1 −1 | 0 [ A − λ2 I | 0 ] = ∼ , after R1 + R2 → R2 , −b−1 R1 → R1 . b −b | 0 0 0| 0   1 ⇒ v2 = c 1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = −a + b. 1 The general solution of the system is     −1 1 x(t) = c1 e−(a+b)t + c2 e−(a−b)t , where c1 , c2 = arbitrary constants. 1 1 So, a fundamental matrix is given by  −e−(a+b)t  Z(t) = e−(a+b)t

e−(a−b)t e





−e−bt

 = e−at 

−(a−b)t

e

ebt

−bt

e

 .

bt

We find etA by calculating etA

 =e

 −e−bt −1 −at  = Z(t) Z(0) =e e−bt

−e−bt

ebt

e−bt

ebt

−at





 1   −2

  5.2.5.16. First, take the hint: eλt v = eλt  



v1 eλt

  d h λt i d   v2 eλt e v =  dt dt   ..  . vn eλt 

1 5.2.5.17. (a) Ex: A =  −1 −2

0 0 −3

v1 v2 .. . vn



−1

     





    =  



      =    

e

bt



−1

1



−1 

1

1

 −e−bt − ebt 1 −at  = e −2 −1 e−bt − ebt −1

1

v1 eλt v2 eλt .. . vn eλt v1 eλt

d dt



d dt

 v2 eλt ..  . λt  vn e

d dt



e−bt − ebt



 −bt e + ebt 1 −at  = e 2 −e−bt + ebt 

ebt

−e−bt + ebt e

−bt

+e

bt

−e−bt − ebt

 

 .

   . So, because v1 , ..., vn are constants, 

 



λv1 eλt

      λv2 eλt =   ..   . λvn eλt





     λt   = λe     

v1 v2 .. . vn

    λt  = λe v.  

 0 0  2

(b) Because the matrix (A − λI) is lower triangular, the characteristic equation is 1−λ 0 0 −λ 0 = (1 − λ)(−λ)(2 − λ) ⇒ eigenvalues are λ1 = 1, λ2 = 2, λ3 = 0 0 = −1 −2 −3 2−λ

c Larry

Turyn, October 10, 2013

page 14

  

1 0| 0 0 0 0| 0 1 0 0 | 0 , after R1 ↔ R2 , −R1 → R1 , 2R1 + R3 → R3 [A − λ1 I | 0 ] = −1 −1 0 | 0 ∼  0 0 −1 1 | 0 −2 −3 1 | 0  

0 1| 0 1 ∼ 0 1 −1 | 0 , after R2 ↔ R3 , R2 + R1 → R1 , −R2 → R2 0 0 0| 0   −1 ⇒ v1 = c1  1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = 1 1    

0 0| 0 −1 0 0| 0 1 [A − λ2 I | 0 ] = −1 −2 0 | 0 ∼  0 −2 0 | 0 , 0 −3 0 | 0 −2 −3 0 | 0 after −R1 + R2 → R2 , −2R1 + R3 → R3 , −R1 → R1  

0 0| 0 1 0 | 0 , after − 21 R2 → R2 , 3R2 + R3 → R3 ∼ 0 1 0 0 0| 0   0 ⇒ v2 = c1  0 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 2 1     1 0 0| 0

0 0| 0 1 0 0 | 0 ∼ 0 0 0 | 0 , after R1 + R2 → R2 , 2R1 + R3 → R3 [A − λ3 I | 0 ] = −1 −2 −3 2 | 0 0 −3 2 | 0  

0 0| 0 1 1 ∼ 0 1 −2/3 | 0 , after R2 ↔ R3 , − 3 R2 → R2 0 0 0| 0   0 ⇒ v3 = c1  2 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ3 = 0 3 

The general solution is      0 0 −1 2t x(t) = c1 e  1  + c2 e  0  + c3  2  where c1 , c2 , c3 = arbitrary constants 3 1 1 

t



 2 2 4 2  has eigenvalues λ1 = λ2 = −2, λ3 = 7, 5.2.5.18. (a) In Example 5.10 we used the fact that A ,  2 −1 4 2 2 with corresponding eigenvectors       1 −1 2 v(1) =  −2  , v(2) =  0  , v(3) =  1  , 0 1 2 and that set of three vectors is a basis for R3 . So, a general solution of LCCHS x˙ = Ax is given by       1 −1 2 −2t  −2t 7t  0  + c3 e  1  , −2  + c2 e x(t) = c1 e 0 1 2   e−2t −e−2t 2e7t −2t 7t 0 e  . We find where c1 , c2 , c3 are arbitrary constants. This gives fundamental matrix Z(t) = −2e 0 e−2t 2e7t etA by calculating   −1 e−2t −e−2t 2e7t 1 −1 2 −1 tA −2t 7t   −2 0 e 0 1  e = Z(t) Z(0) =  −2e 0 e−2t 2e7t 0 1 2 c Larry

Turyn, October 10, 2013

page 15

   e−2t −e−2t 2e7t 1 −4 1 1 0 e7t   −4 −2 5  =  −2e−2t 9 −2t 2 1 2 0 e 2e7t   5e−2t + 4e7t −2e−2t + 2e7t −4e−2t + 4e7t 1 −2t 7t −2t 7t −2t 7t  −2e + 2e 8e + e −2e + 2e = . 9 −4e−2t + 4e7t −2e−2t + 2e7t 5e−2t + 4e7t 

(b) The given eigenvectors,  w(1)

 1 =  −2  = v(1) , 0

 1 =  0  = −v(2) , −1

w(2)

 1 1 =  1/2  = v(3) , 2 1 



w(3)

are also a basis for R3 and gives as general solution of LCCHS x˙ = Ax x(t) = c1 e−2t v(1) − c2 e−2t v(2) +

1 c3 e7t v(3) , 2

where c1 , c2 , c3 are arbitrary constants. This gives fundamental matrix   1 7t (3) e v . Z(t) , e−2t v(1) pp − e−2t v(2) pp 2 We find etA by calculating etA = Z(t) Z(0)

−1

 = e

 = e−2t v(1)

−2t

v(1) pp

−e

p p

− e−2t v(2)

−2t

 5e−2t + 4e7t 1 −2e−2t + 2e7t = 9 −4e−2t + 4e7t

v(2) pp

p p

1 7t (3) e v 2

1 7t (3) e v 2



−2e−2t + 2e7t 8e−2t + e7t −2e−2t + 2e7t



v(1)

p p

− v(2)

p p

1 (3) v 2

−1

 1 −5  4  −4e−2t + 4e7t −2e−2t + 2e7t  . 5e−2t + 4e7t

 1 1 4 9 4

−4 2 2

This makes sense because for a given constant, square matrix A there is only one etA . 5.2.5.19. (a) Using the new given eigenvectors, Theorem 5.4 says that        1 −2t − 31 −2e − 21 p −2t −3t    = X(t) ,  e e p e−2t 1 1 is a fundamental matrix for x˙ = Ax. Then  1 −2t  1 −2e − 31 e−3t −2 −1 tA  e =X(t) X(0) = e−2t e−3t 1  −2e−3t + 3e−2t  = 6e−3t − 6e−2t

− 13

−1  

=

1 −e−3t + e 3e

−3t

− 2e

− 13 e−3t e−3t

− 12 e−2t

− 13 e−3t

e−2t  −2t

e−3t

−2t

 

 

−6

−2

6

3

 

 

,

which is the same conclusion as in Example 5.13. 5.2.5.20. Define y(t) = x1 (t), so y(t) ˙ = x˙ 1 (t) = x2 (t) and y¨(t) = x˙ 2 (t) = −4t−2 x1 − t−1 x2 = −4t−2 y − t−1 y, ˙ that is, t2 y¨ + ty˙ + 4y = 0. Substituting y(t) = tm gives characteristic equation 0 = m(m − 1) + m + 4 = m2 + 4. So, the general solution of the equivalent second order ODE is x1 (t) = y(t) = c1 cos(2 ln t)t + c2 sin(2 ln t), where c1 , c2 =arbitrary constants. The solution of the original system is       c1 cos(2 ln t) c2 sin(2 ln t) c1 cos(2 ln t) c2 sin(2 ln t) x(t) = = + −2t−1 c1 sin(2 ln t) 2t−1 c2 cos(2 ln t) −2t−1 c1 sin(2 ln t) 2t−1 c2 cos(2 ln t) c Larry

Turyn, October 10, 2013

page 16

 = c1 so a fundamental matrix is given by   cos(2 ln t)  X(t) ,   −2t−1 sin(2 ln t)

cos(2 ln t) −2t−1 sin(2 ln t)

 p p





 + c2

sin(2 ln t) 2t−1 cos(2 ln t)

sin(2 ln t) 2t cos(2 ln t)

 ,

−1





=

cos(2 ln t)

sin(2 ln t)

−2t−1 sin(2 ln t)

2t−1 cos(2 ln t)

 .

5.2.5.21. Define y(t) = x1(t), so ẏ(t) = ẋ1(t) = x2(t) and ÿ(t) = ẋ2(t) = −2t^{−2}x1 + 2t^{−1}x2 = −2t^{−2}y + 2t^{−1}ẏ, that is, t²ÿ − 2tẏ + 2y = 0. Substituting y(t) = t^m gives characteristic equation 0 = m(m − 1) − 2m + 2 = m² − 3m + 2 = (m − 1)(m − 2). So, the general solution of the equivalent second order ODE is x1(t) = y(t) = c1 t + c2 t², where c1, c2 = arbitrary constants. The solution of the original system is

x(t) = [ c1 t + c2 t² ; c1 + 2c2 t ] = c1 [ t ; 1 ] + c2 [ t² ; 2t ],

so a fundamental matrix is given by

X(t) ≜ [ t   t² ; 1   2t ].

5.2.5.22. If x(t) = [ x1(t)   x2(t) ]^T satisfies LCCHS (5.28), that is, ẋ = [ 0   1 ; −q   −p ] x, then y(t) ≜ x1(t) satisfies

ÿ = ẍ1 = d/dt[ ẋ1 ] = d/dt[ x2 ] = −qx1 − px2 = −qy − pẏ,
hence y(t) ≜ x1(t) satisfies LCCHODE (5.27), that is, ÿ + pẏ + qy = 0.

5.2.5.23. Write the matrix in terms of its columns: Z(t) = [ z^(1)(t) | ··· | z^(n)(t) ]. We are given that Ż(t) = A(t)Z(t), hence, using Theorem 1.9 in Section 1.2,

[ ż^(1)(t) | ··· | ż^(n)(t) ] = Ż(t) = A(t)Z(t) = A(t)[ z^(1)(t) | ··· | z^(n)(t) ] = [ A(t)z^(1)(t) | ··· | A(t)z^(n)(t) ],
hence ż^(j)(t) = A(t)z^(j)(t) for j = 1, ..., n. This says that every column of Z(t) is a solution of the same linear homogeneous system ẋ(t) = A(t)x(t), as we were asked to show.

5.2.5.24. We are given that (⋆) ẋ = A(t)x has two fundamental matrices X(t) and Y(t). Implicit in the explanation of Theorem 5.6(b) is the derivation that if B is an invertible, constant matrix, then the fact that X(t) is a fundamental matrix of (⋆) implies that X(t)B is also a fundamental matrix of (⋆). [Why? X(t) being a fundamental matrix of (⋆) implies that X(t) is invertible and Ẋ = AX, hence B being an invertible, constant matrix implies that X(t)B is invertible and

d/dt[ X(t)B ] = Ẋ(t)B = (AX(t))B = A(X(t)B).

Theorem 5.5 implies that X(t)B is also a fundamental matrix of (⋆).]
To find an invertible, constant matrix B such that (⋆⋆) Y(t) = X(t)B, take the hint: We want (⋆⋆), hence at t = 0 we will need to have Y(0) = X(0)B, hence B = [X(0)]^{−1}Y(0). [Note that [X(0)]^{−1} exists by definition of X(t) being a fundamental matrix.]
So, define Z(t) ≜ X(t)[X(0)]^{−1}Y(0). By the remark in the preceding paragraph, Z(t) is a fundamental matrix of (⋆). Also, by substituting t = 0 into the definition of Z(t), we get that Z(0) = X(0)[X(0)]^{−1}Y(0) = IY(0) = Y(0).
So, both Y(t) and Z(t) ≜ X(t)[X(0)]^{−1}Y(0) are fundamental matrices of (⋆) and Y(0) = Z(0). By uniqueness of solutions of (⋆), every column of Y(t) is identically equal to the corresponding column of Z(t), hence Y(t) ≡ Z(t), that is, Y(t) ≡ X(t)[X(0)]^{−1}Y(0) = X(t)B. So, B = [X(0)]^{−1}Y(0) accomplishes what the problem asked for.
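The relation B = [X(0)]^{−1}Y(0) between two fundamental matrices in 5.2.5.24 is easy to sanity-check numerically. In the sketch below (an addition to the text; the matrices A and C are arbitrary choices, not taken from the book), X(t) = e^{tA} and Y(t) = X(t)C are two fundamental matrices of ẋ = Ax, computed with SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-6.0, -5.0]])     # any constant matrix works for this check
X = lambda t: expm(t * A)                    # fundamental matrix with X(0) = I
C = np.array([[1.0, 2.0], [0.0, 1.0]])       # any invertible constant matrix
Y = lambda t: X(t) @ C                       # a second fundamental matrix of the same system

# 5.2.5.24 says Y(t) = X(t) B with the constant matrix B = X(0)^{-1} Y(0)
B = np.linalg.solve(X(0.0), Y(0.0))
print(np.allclose(Y(1.3), X(1.3) @ B))
```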
5.2.5.25. Yes: because X(t) is a fundamental matrix for (⋆) ẋ = A(t)x, we have Ẋ(t) = A(t)X(t), so

Ẏ(t) = d/dt[ X(t)B ] = Ẋ(t)B = (A(t)X(t))B = A(t)(X(t)B) = A(t)Y(t).

Because X(t) is a fundamental matrix, it is invertible on some open interval of time. We were given that B is invertible, so X(t)B is also invertible. This and the differential equation that Y(t) satisfies imply that Y(t) is also a fundamental matrix for (⋆).

5.2.5.26. Yes, because (1) e^{γt}e^{tA} is invertible for all t: e^{γt} is a positive scalar and det(e^{tA}) ≠ 0 for all t, so for all t,

det(e^{γt}e^{tA}) = (e^{γt})^n det(e^{tA}) ≠ 0,

and (2) from the product rule we have

d/dt[ e^{γt}e^{tA} ] = d/dt[ e^{γt} ] e^{tA} + e^{γt} d/dt[ e^{tA} ] = γe^{γt}e^{tA} + e^{γt}(Ae^{tA}) = (γI)e^{γt}e^{tA} + Ae^{γt}e^{tA} = (γI + A)e^{γt}e^{tA}.

Together, (1) and (2) say that e^{γt}e^{tA} is a fundamental matrix for ẋ = (γI + A)x.

5.2.5.27. Take the hint and define Y(t) ≜ X(−t), where X(t) is a fundamental matrix for a system (⋆) ẋ = A(t)x. The latter is equivalent to Ẋ(t) = A(t)X(t). By replacing t by −t throughout, it follows that (⋆⋆) Ẋ(−t) = A(−t)X(−t). The chain rule, and after that, (⋆⋆), imply

Ẏ(t) ≜ d/dt[ Y(t) ] = d/dt[ X(−t) ] = Ẋ(−t) · d/dt[ (−t) ] = −Ẋ(−t) = −A(−t)X(−t).

But A(−t) ≡ −A(t), so Ẏ(t) = A(t)X(−t) = A(t)Y(t). So, both

X(t) = [ x^(1)(t) | ··· | x^(n)(t) ]   and   Y(t) = [ y^(1)(t) | ··· | y^(n)(t) ]

are fundamental matrices for (⋆) ẋ = A(t)x, and X(0) = I = [ e^(1) | ··· | e^(n) ] = Y(0). For j = 1, ..., n, uniqueness of solutions for each of the IVPs (⋆) ẋ = A(t)x, x(0) = e^(j), implies that x^(j)(t) ≡ y^(j)(t). So, X(t) ≡ Y(t) ≜ X(−t), that is, X(t) is an even function of t.
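For a constant matrix A, the conclusion of 5.2.5.26 can be stated as e^{t(γI + A)} = e^{γt}e^{tA}, since γI commutes with A. The following sketch (an addition to the text; A, γ, and t are arbitrary sample values) checks this identity with SciPy.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # arbitrary example matrix
gamma, t = 0.7, 1.5
I = np.eye(2)

lhs = expm(t * (gamma * I + A))            # fundamental matrix of x' = (gamma*I + A) x, at time t
rhs = np.exp(gamma * t) * expm(t * A)      # e^{gamma t} e^{tA}
print(np.allclose(lhs, rhs))
```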
5.2.5.28. We can express a square matrix in terms of its entries, for example, X(t) = [x_{ij}(t)], 1 ≤ i ≤ n, 1 ≤ j ≤ n.
(a) It follows that

d/dt[ X(t)^T ] = d/dt[ x_{ji}(t) ] = [ ẋ_{ji}(t) ] = ( Ẋ(t) )^T.

(b) Take the time derivatives of both sides of I = X(t)^T (X(t)^T)^{−1} to get

O = d/dt[ I ] = d/dt[ X(t)^T (X(t)^T)^{−1} ] = d/dt[ X(t)^T ] (X(t)^T)^{−1} + X(t)^T d/dt[ (X(t)^T)^{−1} ].

It follows that

X(t)^T d/dt[ (X(t)^T)^{−1} ] = − d/dt[ X(t)^T ] (X(t)^T)^{−1},

hence, using the result of part (a),

d/dt[ (X(t)^T)^{−1} ] = −(X(t)^T)^{−1} d/dt[ X(t)^T ] (X(t)^T)^{−1} = −(X(t)^T)^{−1} ( Ẋ(t) )^T (X(t)^T)^{−1}
  = −(X(t)^T)^{−1} ( A(t)X(t) )^T (X(t)^T)^{−1} = −(X(t)^T)^{−1} X(t)^T A(t)^T (X(t)^T)^{−1}
  = −A(t)^T (X(t)^T)^{−1}.
(c) Define Y(t) ≜ (X(t)^T)^{−1}. By the result of part (b), Ẏ = −A(t)^T Y. By Theorem 5.5, Y(t) is a fundamental matrix for the system ẋ = −A(t)^T x.

5.2.5.29. 0 = | A − λI | = | −3−λ   1 ; 1   −3−λ | = (−3 − λ)(−3 − λ) − 1 = (−3 − λ)² − 1

⇒ eigenvalues are λ1 = −4, λ2 = −2.

[ A − λ1 I | 0 ] = [ 1   1 | 0 ; 1   1 | 0 ] ∼ [ 1   1 | 0 ; 0   0 | 0 ], after −R1 + R2 → R2

⇒ v1 = c1 [ −1 ; 1 ], for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ1 = −4.

[ A − λ2 I | 0 ] = [ −1   1 | 0 ; 1   −1 | 0 ] ∼ [ −1   1 | 0 ; 0   0 | 0 ], after R1 + R2 → R2

⇒ v2 = c1 [ 1 ; 1 ], for any constant c1 ≠ 0, are the eigenvectors corresponding to eigenvalue λ2 = −2.
The general solution of the system is

x(t) = c1 e^{−4t} [ −1 ; 1 ] + c2 e^{−2t} [ 1 ; 1 ], where c1, c2 = arbitrary constants.

So, a fundamental matrix is given by Z(t) = [ −e^{−4t}   e^{−2t} ; e^{−4t}   e^{−2t} ]. We find e^{tA} by calculating

e^{tA} = Z(t)[Z(0)]^{−1} = [ −e^{−4t}   e^{−2t} ; e^{−4t}   e^{−2t} ] [ −1   1 ; 1   1 ]^{−1}
  = [ −e^{−4t}   e^{−2t} ; e^{−4t}   e^{−2t} ] · (1/2)[ −1   1 ; 1   1 ]
  = (1/2) [ e^{−4t} + e^{−2t}   −e^{−4t} + e^{−2t} ; −e^{−4t} + e^{−2t}   e^{−4t} + e^{−2t} ].

Note that A is symmetric, so (e^{tA})^T = e^{tA}, and

(e^{tA})^T e^{tA} = (1/4) [ 2e^{−8t} + 2e^{−4t}   −2e^{−8t} + 2e^{−4t} ; −2e^{−8t} + 2e^{−4t}   2e^{−8t} + 2e^{−4t} ].
We calculate that the improper integral is

∫₀^∞ (e^{tA})^T e^{tA} dt ≜ lim_{b→∞} ∫₀^b (e^{tA})^T e^{tA} dt
  = lim_{b→∞} ∫₀^b (1/4)[ 2e^{−8t} + 2e^{−4t}   −2e^{−8t} + 2e^{−4t} ; −2e^{−8t} + 2e^{−4t}   2e^{−8t} + 2e^{−4t} ] dt
  = lim_{b→∞} (1/4) [ −(1/4)e^{−8t} − (1/2)e^{−4t}   (1/4)e^{−8t} − (1/2)e^{−4t} ; (1/4)e^{−8t} − (1/2)e^{−4t}   −(1/4)e^{−8t} − (1/2)e^{−4t} ] evaluated from 0 to b
  = lim_{b→∞} (1/16) [ −e^{−8b} + 1 − 2e^{−4b} + 2   e^{−8b} − 1 − 2e^{−4b} + 2 ; e^{−8b} − 1 − 2e^{−4b} + 2   −e^{−8b} + 1 − 2e^{−4b} + 2 ]
  = (1/16) [ 3   1 ; 1   3 ].
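Both results of 5.2.5.29 can be confirmed numerically. The sketch below (an addition to the text) compares SciPy's matrix exponential against the closed form for e^{tA}, and approximates the improper integral by a Riemann sum; the integrand decays like e^{−4t}, so truncating at t = 10 loses essentially nothing.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-3.0, 1.0], [1.0, -3.0]])

def etA(t):
    # closed form of e^{tA} found above
    return 0.5 * np.array([
        [ np.exp(-4*t) + np.exp(-2*t), -np.exp(-4*t) + np.exp(-2*t)],
        [-np.exp(-4*t) + np.exp(-2*t),  np.exp(-4*t) + np.exp(-2*t)],
    ])

ok_expm = np.allclose(expm(0.8 * A), etA(0.8))

# Riemann-sum approximation of the improper integral of (e^{tA})^T e^{tA}
ts = np.linspace(0.0, 10.0, 20001)
dt = ts[1] - ts[0]
total = sum(etA(t).T @ etA(t) for t in ts) * dt
ok_int = np.allclose(total, np.array([[3.0, 1.0], [1.0, 3.0]]) / 16.0, atol=1e-3)
print(ok_expm, ok_int)
```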
5.2.5.30. A(t) is given and we define B(t) ≜ ∫₀ᵗ A(s) ds. Using Part I of the fundamental theorem of Calculus, we
calculate that

d/dt[ e^{B(t)} ] = d/dt[ I + B(t) + (1/2!)B(t)² + (1/3!)B(t)³ + ... ] = O + Ḃ(t) + (1/2!)( Ḃ(t)B(t) + B(t)Ḃ(t) ) + ... .

Using the assumption that A(t)B(t) ≡ B(t)A(t), that is, that Ḃ(t) = A(t) commutes with B(t), we see that

d/dt[ e^{B(t)} ] = ( I + B(t) + (1/2!)B(t)² + (1/3!)B(t)³ + ... ) Ḃ(t) = ( I + B(t) + (1/2!)B(t)² + (1/3!)B(t)³ + ... ) A(t)
  = A(t) ( I + B(t) + (1/2!)B(t)² + (1/3!)B(t)³ + ... ) = A(t)e^{B(t)}.

Because det(e^{B(t)}) ≠ 0 for all t [Why?], Theorem 5.5 implies that e^{B(t)} is a fundamental matrix for the system ẋ = A(t)x.

5.2.5.31. Take the hint and define Y(t) = (X(t))^{−1}, where X(t) is a fundamental matrix for a system (⋆) ẋ = A(t)x. The latter is equivalent to Ẋ(t) = A(t)X(t). To find the ODE that Y(t) satisfies, note that I = X(t)(X(t))^{−1} = X(t)Y(t). So, using the product rule to differentiate both sides with respect to t, we get

O = d/dt[ I ] = d/dt[ X(t)Y(t) ] = Ẋ(t)Y(t) + X(t)Ẏ(t),

hence X(t)Ẏ(t) = −Ẋ(t)Y(t). It follows that

Ẏ(t) = −(X(t))^{−1}Ẋ(t)Y(t) = −(X(t))^{−1}A(t)X(t)Y(t) = −(X(t))^{−1}A(t)( X(t)Y(t) ) = −(X(t))^{−1}A(t) I,

hence

(⋆⋆) Ẏ(t) = −Y(t)A(t).

On the other hand, define Z(t) ≜ X(t)^T and note that Z(t) satisfies the ODE system

Ż(t) = ( Ẋ(t) )^T = ( A(t)X(t) )^T = X(t)^T A(t)^T = Z(t)A(t)^T.

But, we were given that A(t)^T ≡ −A(t), so

Ż(t) = Z(t)A(t)^T = Z(t)( −A(t) ) = −Z(t)A(t).

So, both

Y(t) = [ (y^(1)(t))^T ; ⋮ ; (y^(n)(t))^T ]   and   Z(t) = [ (z^(1)(t))^T ; ⋮ ; (z^(n)(t))^T ]

are fundamental matrices, row by row, for (⋆⋆⋆) (ẋ)^T = −x^T A(t), and Y(0) = I = [ (e^(1))^T ; ⋮ ; (e^(n))^T ] = Z(0). For j = 1, ..., n, uniqueness of solutions for each of the IVPs (⋆⋆⋆) (ẋ)^T = −x^T A(t), x^T(0) = (e^(j))^T, implies that (y^(j)(t))^T ≡ (z^(j)(t))^T. So, Y(t) ≡ Z(t), that is, (X(t))^{−1} ≡ X(t)^T, as we wished to show.
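For a constant skew-symmetric A, the conclusion of 5.2.5.31 says X(t) = e^{tA} is an orthogonal matrix. The sketch below (an addition to the text; the particular A and t are arbitrary choices) checks this with SciPy.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 2.0], [-2.0, 0.0]])    # skew-symmetric: A.T == -A
X = expm(1.234 * A)                        # fundamental matrix with X(0) = I

# 5.2.5.31: X(t)^{-1} = X(t)^T, i.e., X(t) is orthogonal
print(np.allclose(X.T @ X, np.eye(2)))
```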
5.2.5.32. The system that models iodine metabolism is found in Example 5.6 in Section 5.1. Let f1 be the recommended daily allowance of iodine for adults and children of age four or greater. According to the United States NIH¹ (National Institutes of Health), the recommended daily allowance of iodine is 90 mcg for children ages 4–8, 120 mcg for children ages 9–13, and 150 mcg for children and adults ages 14 and above. Because of this ambiguity, let's take f1 = 120 mcg. (I'm assuming that the older the person, the more varied their diet and thus the greater the chance that they will consume at least some foods richer in iodine than the average food.) By the way, the webpage cited has a good discussion of the roles of iodine in human health.

¹ http://ods.od.nih.gov/factsheets/Iodine-HealthProfessional/

Define x = [ x1   x2   x3 ]^T. Using the parameter values we were given, Example 5.6's system that models iodine metabolism is

(⋆) ẋ = [ ẋ1 ; ẋ2 ; ẋ3 ] = [ −2.773   0   0.05199 ; 0.832   −0.008664   0 ; 0   0.008664   −0.05776 ] [ x1 ; x2 ; x3 ] + [ f1 ; 0 ; 0 ] ≜ Ax + f.

Using Mathematica, we found that the eigenvalues and corresponding eigenvectors of A are, respectively, to the three significant digits of precision found in the entries of matrix A,

λ1 = −2.77,   v1 = [ −0.958 ; 0.288 ; −0.000920 ],
λ2 = −0.0604,   v2 = [ 0.0183 ; −0.294 ; 0.956 ],
and
λ3 = −0.00604,   v3 = [ −0.00310 ; −0.986 ; −0.165 ].
This gives the general solution of the corresponding homogeneous system, ẋ = Ax,

x = c1 e^{−2.77t} [ −0.958 ; 0.288 ; −0.000920 ] + c2 e^{−0.0604t} [ 0.0183 ; −0.294 ; 0.956 ] + c3 e^{−0.00604t} [ −0.00310 ; −0.986 ; −0.165 ],

where c1, c2, c3 are arbitrary constants. So, a fundamental matrix is given by

X(t) = [ −0.958e^{−2.77t}   0.0183e^{−0.0604t}   −0.00310e^{−0.00604t} ; 0.288e^{−2.77t}   −0.294e^{−0.0604t}   −0.986e^{−0.00604t} ; −0.000920e^{−2.77t}   0.956e^{−0.0604t}   −0.165e^{−0.00604t} ].

For the original nonhomogeneous system, ẋ = Ax + f, we can use formula ... After solving (⋆), we integrate x1(t) and x3(t) to find x4(t) and x5(t), which modelers can use as measurable outputs from the body in order to estimate the other parameters in the system. In fact, using the parameter values we were given in Example 5.6, ẋ4 = 0.005770x3 and ẋ5 = 1.941x1, so ...

5.2.5.33. (a) Given a third order scalar ODE d³y/dt³ + pÿ + qẏ + ry = 0, define x1 = y, x2 = ẏ, and x3 = ÿ. Equivalent to the third order scalar ODE is the system

ẋ1 = ẏ = x2,
ẋ2 = ÿ = x3,
ẋ3 = d³y/dt³ = −pÿ − qẏ − ry = −px3 − qx2 − rx1,

that is,

d/dt [ x1 ; x2 ; x3 ] = [ 0   1   0 ; 0   0   1 ; −r   −q   −p ] [ x1 ; x2 ; x3 ].

So, in ℝ³ the generalization of companion form is

A = [ 0   1   0 ; 0   0   1 ; ?   ?   ? ],

here with last row [ −r   −q   −p ].

(b) Given an n-th order scalar ODE y^(n) + a1 y^(n−1) + ... + a_{n−1} ẏ + a_n y = 0, define x1 = y, ..., xn = y^(n−1). Equivalent to the n-th order scalar ODE is the system

ẋ1 = ẏ = x2,
ẋ2 = ÿ = x3,
⋮
ẋn = y^(n) = −( a1 y^(n−1) + ... + a_{n−1} ẏ + a_n y ),

that is,

d/dt [ x1 ; x2 ; ⋮ ; xn ] = [ 0   1   0   ⋯   0 ; 0   0   1   ⋯   0 ; ⋮         ⋱   ⋮ ; 0   0   0   ⋯   1 ; −a_n   −a_{n−1}   ⋯   −a_1 ] [ x1 ; x2 ; ⋮ ; xn ].

So, in ℝⁿ the generalization of companion form is

A = [ 0   1   0   ⋯   0 ; 0   0   1   ⋯   0 ; ⋮         ⋱   ⋮ ; 0   0   0   ⋯   1 ; ?   ?   ⋯   ? ],

here with last row [ −a_n   −a_{n−1}   ⋯   −a_1 ].
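The companion-form construction of 5.2.5.33 can be checked numerically: the eigenvalues of the companion matrix must be the roots of the characteristic polynomial of the scalar ODE. A sketch (an addition to the text; p, q, r are arbitrary sample coefficients):

```python
import numpy as np

# companion matrix of y''' + p y'' + q y' + r y = 0, as derived in 5.2.5.33(a)
p, q, r = 2.0, 3.0, 5.0
A = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [-r,  -q,  -p ],
])

eigs = np.sort_complex(np.linalg.eigvals(A))
roots = np.sort_complex(np.roots([1.0, p, q, r]))   # roots of s^3 + p s^2 + q s + r
print(np.allclose(eigs, roots))
```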
=

T   1 1 1 1 2 3 T T 2 T 3 I + (itA) + (itA) + (itA) + ... = I + (it A ) + (it A ) + (it A ) + ... 2! 3! 2! 3!   1 1 = I + (−it AT ) + (−it AT )2 + (−it AT )3 + ... . 2! 3!

Using the given information that AT = −A and that A is real, we have that AT = −A, hence    2 3 1 1 (eitA )T = I + − it(−A) + − it(−A) + − it(−A) + ... . 2! 3!   1 1 2 3 = I + (itA) + (itA) + (itA) + ... = eitA . 2! 3! So, eitA is Hermitian. 5.2.5.35. Yes, if A is a real, symmetric n × n matrix, then etA must be real and symmetric because of the Maclaurin series formula t2 2 t3 3 etA , I + tA + A + A + ... 2! 3! just before equation (5.29). Why? From the Maclaurin series and the assumption that A is symmetric, that is, AT = A, it follows that etA

T

 =

I + tA +

= I + tAT +

T  2 T  3 T T t2 2 t3 3 t t A + A + ... = I T + tA + A2 + A3 + ... 2! 3! 2! 3!

T T 2 t3 3 t2 t3 t2 A2 + A3 + ... = I + tAT + AT + AT + ... 2! 3! 2! 3! t2 2 t3 3 = I + tA + A + A + ... = etA , 2! 3!

that is, etA is symmetric. Also, from the Maclaurin series we see that if A is real then all of the terms in the series for etA are real, hence tA e is real. 5.2.5.36. Let T1 , T2 , M be the temperatures, respectively, of the two objects and the medium. The conclusion of the solution of problem 5.1.3.7 is that      T1 −kT,1 0 kT,1 T1 d  T2  =  0 −kT,2 kT,2   T2  . dt M kM kM −2kM M We’ll assume that kT,1 , kT,2 , kM are constants.

c Larry

Turyn, October 10, 2013

page 22



−kT,1 0 Let A =  kM

 kT,1 kT,2  . First, find the eigenvalues: −2kM −kT,1 − λ 0 kT,1 0 −kT,2 − λ kT,2 0 = | A − λI | = kM kM −2kM − λ 0 −kT,2 − λ −kT,2 − λ kT,2 + kT,1 = (−kT,1 − λ) kM kM kM −2kM − λ  = (−kT,1 − λ) (−kT,2 − λ)(−2kM − λ) − kT,2 kM + kT,1 kM (kT,2 + λ)  = −(kT,1 + λ) λ2 + (kT,2 + 2kM )λ + kT,2 kM + kT,1 kM (kT,2 + λ)  = − λ3 + (kT,1 + kT,2 + 2kM )λ2 + (kT,1 kT,2 + kT,1 kM + kT,2 kM )λ 0 −kT,2 kM

= −λ λ2 + (kT,1 + kT,2 + 2kM )λ + (kT,1 kT,2 + kT,1 kM + kT,2 kM )





The eigenvalues are λ1 = 0 and p (kT,1 + kT,2 + 2kM )2 − 4(kT,1 kT,2 + kT,1 kM + kT,2 kM ) 2 q 2 2 2 − 2kT,1 kT,2 + 4kM −(kT,1 + kT,2 + 2kM ) ± kT,1 + kT,2 . λ2,3 = 2 We note the physically meaningful result that λ2 < 0 and λ3 < 0. This result follows from two facts. First, we note that (1) (kT,1 + kT,2 + 2kM )2 − 4(kT,1 kT,2 + kT,1 kM + kT,2 kM ) < (kT,1 + kT,2 + 2kM )2 λ2,3 =

−(kT,1 + kT,2 + 2kM ) ±

and (2)

2 2 2 (kT,1 + kT,2 + 2kM )2 − 4(kT,1 kT,2 + kT,1 kM + kT,2 kM ) = kT,1 + kT,2 + 4kM − 2kT,1 kT,2

2 = (kT,1 − kT,2 )2 + 4kM > 0. p 2 2 If we define a , (kT,1 + kT,2 + 2kM ) > 0 and b , (kT,1 − kT,2 ) + 4kM > 0, then we have p √ −(kT,1 + kT,2 + 2kM ) ± (kT,1 + kT,2 + 2kM )2 − 4(kT,1 kT,2 + kT,1 kM + kT,2 kM ) −a ± b2 = λ2,3 = 2 2

≤ ( −a + b ) / 2 < 0,

because of (1); also, λ2,3 are real because of (2). This completes the explanation of why λ2 < 0 and λ3 < 0.
Because there are three distinct eigenvalues, it follows that the general solution of the system is of the form

[ T1 ; T2 ; M ] = c1 v1 + c2 e^{λ2 t} v2 + c3 e^{λ3 t} v3,

where c1, c2, c3 are arbitrary constants. Because λ2 < 0 and λ3 < 0, the long term behavior is that there exists

(⋆) [ T1(∞) ; T2(∞) ; M(∞) ] ≜ lim_{t→∞} [ T1(t) ; T2(t) ; M(t) ] = c1 v1.

Note that the constant c1 depends on the initial values of the temperatures, but (⋆) does give the ratios of the three limiting values T1(∞), T2(∞), M(∞).
To find the corresponding eigenvectors, we do three different row reductions. First, for λ1 = 0, we have


−kT,1 0 A − (0 · I) | 0 =  kM 

0 −kT,2 kM

kT,1 kT,2 −2kM

 |0 |0  |0

 ∼

1  0 0

0 −kT,2 kM

−1 kT,2 −kM

 |0 |0  |0

−1 −kT,1 R 1 → R1 −kM R1 + R3 → R3

c Larry

Turyn, October 10, 2013

page 23



1  0 0



0

1 0

−1 −1 0

 |0 |0  |0

−1 −kT,2 R 2 → R2 −kM R2 + R3 → R3

so M is the only free variable and the first eigenvalue's eigenvectors are c1 v^(1), where c1 ≠ 0 and

v^(1) = [ 1 ; 1 ; 1 ].

This makes sense physically, because it implies that T1(∞) = T2(∞) = M(∞). Next, we used Mathematica to find the other two eigenvectors.
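The eigenvalue structure derived in 5.2.5.36 (one zero eigenvalue with eigenvector (1, 1, 1)^T, two negative real eigenvalues) is easy to confirm numerically. The sketch below (an addition to the text; the rate constants are hypothetical sample values) uses NumPy.

```python
import numpy as np

# hypothetical sample rate constants, just to illustrate 5.2.5.36
k1, k2, kM = 0.3, 0.5, 0.2
A = np.array([
    [-k1, 0.0,  k1],
    [0.0, -k2,  k2],
    [kM,  kM, -2*kM],
])

eigs = np.sort(np.linalg.eigvals(A).real)   # eigenvalues are real here, by fact (2)
print(abs(eigs[2]) < 1e-12)                 # lambda_1 = 0
print(eigs[0] < 0 and eigs[1] < 0)          # lambda_2, lambda_3 < 0
print(np.allclose(A @ np.ones(3), np.zeros(3)))   # v^(1) = (1,1,1)^T is in the null space
```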
Section 5.3.6

5.3.6.1. 0 = | A − λI | = | −2−λ   −5 ; 1   0−λ | = (−2 − λ)(−λ) + 5 = λ² + 2λ + 5 = (λ + 1)² + 4

⇒ the eigenvalues of A are the complex conjugate pair λ = −1 ± 2i. Corresponding to eigenvalue λ1 = −1 + i2, eigenvectors are found by

[ A − (−1 + i2)I | 0 ] = [ −1 − i2   −5 | 0 ; 1   1 − i2 | 0 ] ∼ [ 1   1 − i2 | 0 ; 0   0 | 0 ],

after row operations R1 ↔ R2, (1 + i2)R1 + R2 → R2. Corresponding to eigenvalue λ1 = −1 + i2 we have an eigenvector v^(1) = [ −1 + i2 ; 1 ]. This gives two solutions of the LCCHS: The first is

x^(1)(t) = Re( e^{(−1+i2)t} [ −1 + i2 ; 1 ] ) = Re( e^{−t}(cos 2t + i sin 2t) [ −1 + i2 ; 1 ] )
  = Re( e^{−t} [ −cos 2t − 2 sin 2t − i sin 2t + i2 cos 2t ; cos 2t + i sin 2t ] ) = e^{−t} [ −cos 2t − 2 sin 2t ; cos 2t ].

For the second, we don't have to do all of the algebra steps again:

x^(2)(t) = Im( e^{(−1+i2)t} [ −1 + i2 ; 1 ] ) = Im( e^{−t} [ −cos 2t − 2 sin 2t − i sin 2t + i2 cos 2t ; cos 2t + i sin 2t ] ) = e^{−t} [ 2 cos 2t − sin 2t ; sin 2t ].

The general solution of the LCCHS is

x(t) = c1 e^{−t} [ −cos 2t − 2 sin 2t ; cos 2t ] + c2 e^{−t} [ 2 cos 2t − sin 2t ; sin 2t ],

where c1, c2 = arbitrary constants.

5.3.6.2. 0 = | A − λI | = | −λ   2 ; −3   −2−λ | = (−λ)(−2 − λ) + 6 = λ² + 2λ + 6 = (λ + 1)² + 5

⇒ the eigenvalues of A are the complex conjugate pair λ = −1 ± i√5. Corresponding to eigenvalue λ1 = −1 + i√5, eigenvectors are found by

[ A − (−1 + i√5)I | 0 ] = [ 1 − i√5   2 | 0 ; −3   −1 − i√5 | 0 ] ∼ [ 1   (1 + i√5)/3 | 0 ; 0   0 | 0 ],

after row operations R1 ↔ R2, −(1/3)R1 → R1, −(1 − i√5)R1 + R2 → R2. Corresponding to eigenvalue λ1 = −1 + i√5 we have an eigenvector v^(1) = [ −1 − i√5 ; 3 ]. This gives two solutions of the LCCHS: The first is

x^(1)(t) = Re( e^{(−1+i√5)t} [ −1 − i√5 ; 3 ] ) = Re( e^{−t}(cos(√5 t) + i sin(√5 t)) [ −1 − i√5 ; 3 ] )
  = Re( e^{−t} [ −cos(√5 t) + √5 sin(√5 t) − i sin(√5 t) − i√5 cos(√5 t) ; 3 cos(√5 t) + i3 sin(√5 t) ] )
  = e^{−t} [ −cos(√5 t) + √5 sin(√5 t) ; 3 cos(√5 t) ].

For the second, we don't have to do all of the algebra steps again:

x^(2)(t) = Im( e^{(−1+i√5)t} [ −1 − i√5 ; 3 ] ) = e^{−t} [ −√5 cos(√5 t) − sin(√5 t) ; 3 sin(√5 t) ].

The general solution of the LCCHS is

x(t) = c1 e^{−t} [ −cos(√5 t) + √5 sin(√5 t) ; 3 cos(√5 t) ] + c2 e^{−t} [ −√5 cos(√5 t) − sin(√5 t) ; 3 sin(√5 t) ],
where c1, c2 = arbitrary constants.

5.3.6.3. 0 = | A − λI | = | 3−λ   2 ; −4   −1−λ | = (3 − λ)(−1 − λ) + 8 = λ² − 2λ + 5 = (λ − 1)² + 4

⇒ the eigenvalues of A are the complex conjugate pair λ = 1 ± 2i. Corresponding to eigenvalue λ1 = 1 + i2, eigenvectors are found by

[ A − (1 + i2)I | 0 ] = [ 2 − i2   2 | 0 ; −4   −2 − i2 | 0 ] ∼ [ 1   (1 + i)/2 | 0 ; 0   0 | 0 ],

after row operations R1 ↔ R2, −(1/4)R1 → R1, (−2 + i2)R1 + R2 → R2. Corresponding to eigenvalue λ1 = 1 + i2 we have an eigenvector v^(1) = [ −1 − i ; 2 ]. This gives two solutions of the LCCHS: The first is

x^(1)(t) = Re( e^{(1+i2)t} [ −1 − i ; 2 ] ) = Re( e^{t}(cos 2t + i sin 2t) [ −1 − i ; 2 ] )
  = Re( e^{t} [ −cos 2t + sin 2t − i sin 2t − i cos 2t ; 2 cos 2t + i2 sin 2t ] ) = e^{t} [ −cos 2t + sin 2t ; 2 cos 2t ].

For the second, we don't have to do all of the algebra steps again:

x^(2)(t) = Im( e^{(1+i2)t} [ −1 − i ; 2 ] ) = e^{t} [ −cos 2t − sin 2t ; 2 sin 2t ].

The general solution of the LCCHS is

x(t) = c1 e^{t} [ −cos 2t + sin 2t ; 2 cos 2t ] + c2 e^{t} [ −cos 2t − sin 2t ; 2 sin 2t ],

where c1, c2 = arbitrary constants.

5.3.6.4. (a) Method 1: The easiest way to do this problem takes advantage of the matrix A being in companion form, so that the first order system of ODEs is equivalent to the scalar, second order ODE

ÿ + ẏ + (9/4)y = 0,

where y = x1 and x2 = ẏ.
The characteristic equation 0 = s² + s + 9/4 = (s + 1/2)² + 2 gives roots s = −1/2 ± i√2, so the scalar, second order ODE has general solution

y(t) = c1 e^{−t/2} cos(√2 t) + c2 e^{−t/2} sin(√2 t),

where c1, c2 = arbitrary constants. Correspondingly, a general solution of the original first order system of ODEs can be written as

x(t) = [ x1(t) ; x2(t) ] = [ y(t) ; ẏ(t) ]
  = [ c1 e^{−t/2} cos(√2 t) + c2 e^{−t/2} sin(√2 t) ; −(1/2)c1 e^{−t/2} cos(√2 t) − √2 c1 e^{−t/2} sin(√2 t) − (1/2)c2 e^{−t/2} sin(√2 t) + √2 c2 e^{−t/2} cos(√2 t) ]
  = c1 e^{−t/2} [ cos(√2 t) ; −(1/2) cos(√2 t) − √2 sin(√2 t) ] + c2 e^{−t/2} [ sin(√2 t) ; −(1/2) sin(√2 t) + √2 cos(√2 t) ],
where c1, c2 = arbitrary constants.

(b) Method 2: 0 = | A − λI | = | 0−λ   1 ; −9/4   −1−λ | = (−λ)(−1 − λ) + 9/4 = λ² + λ + 9/4 = (λ + 1/2)² + 2

⇒ the eigenvalues of A are the complex conjugate pair λ = −1/2 ± i√2. Corresponding to eigenvalue λ1 = −1/2 + i√2, eigenvectors are found by

[ A − (−1/2 + i√2)I | 0 ] = [ 1/2 − i√2   1 | 0 ; −9/4   −1/2 − i√2 | 0 ] ∼ [ 1   (2 + i4√2)/9 | 0 ; 0   0 | 0 ],

after row operations R1 ↔ R2, −(4/9)R1 → R1, (−1/2 + i√2)R1 + R2 → R2. Corresponding to eigenvalue λ1 = −1/2 + i√2 we have an eigenvector v^(1) = [ −2 − i4√2 ; 9 ]. This gives two solutions of the LCCHS: The first is

x^(1)(t) = Re( e^{(−1/2+i√2)t} [ −2 − i4√2 ; 9 ] ) = Re( e^{−t/2}(cos(√2 t) + i sin(√2 t)) [ −2 − i4√2 ; 9 ] )
  = Re( e^{−t/2} [ −2 cos(√2 t) + 4√2 sin(√2 t) − i2 sin(√2 t) − i4√2 cos(√2 t) ; 9 cos(√2 t) + i9 sin(√2 t) ] )
  = e^{−t/2} [ −2 cos(√2 t) + 4√2 sin(√2 t) ; 9 cos(√2 t) ].

For the second, we don't have to do all of the algebra steps again:

x^(2)(t) = Im( e^{−t/2}(cos(√2 t) + i sin(√2 t)) [ −2 − i4√2 ; 9 ] ) = e^{−t/2} [ −4√2 cos(√2 t) − 2 sin(√2 t) ; 9 sin(√2 t) ].
The general solution of the LCCHS is

x(t) = c̃1 e^{−t/2} [ −2 cos(√2 t) + 4√2 sin(√2 t) ; 9 cos(√2 t) ] + c̃2 e^{−t/2} [ −4√2 cos(√2 t) − 2 sin(√2 t) ; 9 sin(√2 t) ],

where c̃1, c̃2 = arbitrary constants. It is not obvious, but the solutions given by the two methods are almost the same. To see this, ...

5.3.6.5. 0 = | A − λI | = | 5−λ   0   −10 ; 0   −2−λ   0 ; 4   0   −7−λ | = (−2 − λ) | 5−λ   −10 ; 4   −7−λ |
= (−2 − λ)( (5 − λ)(−7 − λ) + 40 ) = (−2 − λ)(λ² + 2λ + 5)

⇒ the eigenvalues of A are the complex conjugate pair λ = −1 ± 2i and the real eigenvalue λ3 = −2. Corresponding to eigenvalue λ1 = −1 + i2, eigenvectors are found by

[ A − (−1 + i2)I | 0 ] = [ 6 − i2   0   −10 | 0 ; 0   −1 − i2   0 | 0 ; 4   0   −6 − i2 | 0 ] ∼ [ 1   0   −(3 + i)/2 | 0 ; 0   1   0 | 0 ; 0   0   0 | 0 ],

after row operations R1 ↔ R3, (1/4)R1 → R1, (−6 + i2)R1 + R3 → R3, (1/(−1 − i2))R2 → R2. Corresponding to eigenvalue λ1 = −1 + i2 we have an eigenvector v^(1) = [ 3 + i ; 0 ; 2 ]. This gives two solutions of the LCCHS: The first is

x^(1)(t) = Re( e^{(−1+i2)t} [ 3 + i ; 0 ; 2 ] ) = Re( e^{−t}(cos 2t + i sin 2t) [ 3 + i ; 0 ; 2 ] )
  = Re( e^{−t} [ 3 cos 2t − sin 2t + i3 sin 2t + i cos 2t ; 0 ; 2 cos 2t + i2 sin 2t ] ) = e^{−t} [ 3 cos 2t − sin 2t ; 0 ; 2 cos 2t ].

For the second, we don't have to do all of the algebra steps again:

x^(2)(t) = Im( e^{(−1+i2)t} [ 3 + i ; 0 ; 2 ] ) = Im( e^{−t} [ 3 cos 2t − sin 2t + i3 sin 2t + i cos 2t ; 0 ; 2 cos 2t + i2 sin 2t ] ) = e^{−t} [ 3 sin 2t + cos 2t ; 0 ; 2 sin 2t ].
Corresponding to eigenvalue λ3 = −2, eigenvectors are found by

[ A − (−2)I | 0 ] = [ 7   0   −10 | 0 ; 0   0   0 | 0 ; 4   0   −5 | 0 ] ∼ [ 1   0   0 | 0 ; 0   0   1 | 0 ; 0   0   0 | 0 ],

after row operations (1/7)R1 → R1, −4R1 + R3 → R3, (7/5)R3 → R3, (10/7)R3 + R1 → R1, R2 ↔ R3. Corresponding to eigenvalue λ3 = −2 we have an eigenvector v^(3) = [ 0 ; 1 ; 0 ].
−25 = (−12 − λ)(8 − λ) + 100 = λ2 + 4λ + 4 = (λ + 2)2 8−λ

⇒ the eigenvalues of A are λ1 = λ2 = −2. Corresponding to eigenvalue λ1 = −2, eigenvectors are found by     5   −10 −25 | 0

|0 1 2 A − (−2)I | 0 = ∼ , 4 10 | 0 0 0 |0 1 after row operations − 10 R1 → R1 , −4R1 + R2 → R2 . Corresponding to eigenvalue λ1 = −2 we have     − 52 − 25 (1) , c1 6= 0. Define v = . only eigenvectors v = c1 1 1 Because λ1 = λ2 = −2 is a deficient eigenvalue, we need to also find a generalized eigenvector w that should satisfy the system (A − (−2)I)w = v(1) :     5  

−10 −25 | − 52 | 41 1 2 A − (−2)I | v(1) = ∼ , 4 10 | 1 0 0 |0 1 after row operations − 10 R1 → R1 , −4R1 + R2 → R2 . So, corresponding to eigenvalue λ1 = −2 we have a generalized  1  4 eigenvector w = . 0 The general solution of the LCCHS is     − 25 t + 14 − 25 + c2 e−2t , x(t) = c1 e−2t v(1) + c2 e−2t (tv(1) + w) = c1 e−2t 1 t

where c1 , c2 =arbitrary constants. 1−λ 5.3.6.7. 0 = | A − λI | = 1

−5 = (1 − λ)(−3 − λ) + 5 = λ2 + 2λ + 2 = (λ + 1)2 + 1 −3 − λ

⇒ the eigenvalues of A are the complex conjugate pair λ = −1 ± i. Corresponding to eigenvalue λ1 = −1 + i, eigenvectors are found by       2−i −5 | 0

−2 − i | 0 1 A − (−1 + i)I | 0 = ∼ , 1 −2 − i | 0 0 0 |0 after row R1 ↔ R2 , (−2+i)R1 +R2 → R2 . Corresponding to eigenvalue λ1 = −1+i we have an eigenvector  operations  2+i (1) v = . This gives two solutions of the LCCHS: The first is 1       2+i 2+i (1) (−1+i)t −t x (t) = Re e = Re e (cos t + i sin t) 1 1 c Larry

Turyn, October 10, 2013

page 28

     2 cos t − sin t 2 cos t − sin t + i2 sin t + i cos t . = e−t = Re e−t cos t cos t + i sin t For the second, we don’t have to do all of the algebra steps again:         2+i 2 cos t − sin t + i2 sin t + i cos t 2 sin t + cos t x(2) (t) = Im e(−1+i)t = Im e−t = e−t . 1 cos t + i sin t sin t The general solution of the LCCHS is x(t) = c1 e−t



2 cos t − sin t cos t



+ c2 e−t



2 sin t + cos t sin t

 ,

where c1 , c2 =arbitrary constants. A fundamental matrix is given by X(t) = e−t −3 − λ 5.3.6.8. 0 = | A − λI | = −5



 2 sin t + cos t . sin t

2 cos t − sin t cos t

2 = (−3 − λ)(3 − λ) + 10 = λ2 + 1 3−λ

⇒ the eigenvalues of A are the complex conjugate pair λ = ± i. Corresponding to eigenvalue λ1 = i, eigenvectors are found by       −3 − i 2 |0

− 3−i |0 1 5 A − iI | 0 = ∼ , −5 3−i | 0 0 0 |0 after row operations R1 ↔ R2 , − 15 R1 → R1 , −(−3 − i)R1 + R2 → R2 . Corresponding to eigenvalue λ1 = i we have   3−i an eigenvector v(1) = . This gives two solutions of the LCCHS: The first is 5       3−i 3−i x(1) (t) = Re eit = Re (cos t + i sin t) 5 5     3 cos t + sin t + i3 sin t − i cos t 3 cos t + sin t = Re = . 5 cos t + i5 sin t 5 cos t For the second, we don’t have to do all of the algebra steps again:        3−i 3 cos t + sin t + i3 sin t − i cos t 3 sin t − cos t x(2) (t) = Im eit = Im = . 5 5 cos t + i5 sin t 5 sin t The general solution of the LCCHS is  x(t) = c1

3 cos t + sin t 5 cos t



 + c2



3 sin t − cos t 5 sin t

,

where c1 , c2 =arbitrary constants.  A fundamental matrix is given by X(t) = −12 − λ 5.3.6.9. 0 = | A − λI | = 4

3 cos t + sin t 5 cos t

 3 sin t − cos t . 5 sin t

−25 = (−12 − λ)(8 − λ) + 100 = λ2 + 4λ + 4 = (λ + 2)2 8−λ

⇒ the eigenvalues of A are λ1 = λ2 = −2. Corresponding to eigenvalue λ1 = −2, eigenvectors are found by     5   |0 −10 −25 | 0

1 2 A − (−2)I | 0 = ∼ , 0 0 |0 4 10 | 0 1 after row operations − 10 R1 → R1 , −4R1 + R2 → R2 . Corresponding to eigenvalue λ1 = −2 we have     − 25 − 52 only eigenvectors v = c1 , c1 6= 0. Define v(1) = . 1 1 Because λ1 = λ2 = −2 is a deficient eigenvalue, we need to also find a generalized eigenvector w that should satisfy the system (A − (−2)I)w = v(1) :     5  −10 −25 | − 52

| 41 1 (1)  2 A − (−2)I | v = ∼ , 4 10 | 1 0 0 |0

c Larry

Turyn, October 10, 2013

page 29

1 after row operations − 10 R1 → R1 , −4R1 + R2 → R2 . So, corresponding to eigenvalue λ1 = −2 we have a generalized  1  4 eigenvector w = . 0 The general solution of the LCCHS is     − 25 − 25 t + 14 x(t) = c1 e−2t v(1) + c2 e−2t (tv(1) + w) = c1 e−2t + c2 e−2t , 1 t

where c1 , c2 =arbitrary constants.  A fundamental matrix is given by X(t) = e−2t 

5 2

5 2

t−

√ √ 3 − 3−λ

 .

−1 √ 3−λ √ 5.3.6.10. 0 = | A − λI | = −2 3

1 4

−t

√ √ = ( 3 − λ)(− 3 − λ) + 6 = λ2 + 3

√ √ ⇒ the eigenvalues of A are the complex conjugate pair λ = ± i 3. Corresponding to eigenvalue λ1 = i 3, eigenvectors are found by √ √  √    (1+i)   3 − i√3 |0 1 2 √ √3 | 0 ∼ A − iI | 0 = , − 3−i 3 | 0 −2 3 0 0 |0 √ √ 1 after row operations R1 ↔ R2 , − 2√ R1 → R1 , − 3(1 − i)R1 + R2 → R2 . Corresponding to eigenvalue λ1 = i 3 we 3   −1 − i have an eigenvector v(1) = . This gives two solutions of the LCCHS: The first is 2      √  √ √  −1 − i −1 − i (1) i 3t = Re cos( 3 t) + i sin( 3 t) x (t) = Re e 2 2 √ √ √ √ √ √     − cos( 3 t) + sin(√ 3 t) − i sin( √3 t) − i cos( 3 t) − cos( 3 t) √ + sin( 3 t) = Re = . 2 cos( 3 t) + i2 sin( 3 t) 2 cos( 3 t) For the second, we don’t have to do all of the algebra steps again: √ √ √ √   √    −1 − i − cos( 3 t) + sin(√ 3 t) − i sin( √3 t) − i cos( 3 t) x(2) (t) = Im ei 3 t = Im 2 2 cos( 3 t) + i2 sin( 3 t) √ √  − sin( 3 t) √ − cos( 3 t) . 2 sin( 3 t)

 =

The general solution of the LCCHS is √ √ √ √     − cos( 3 t) √ + sin( 3 t) − sin( 3 t) √ − cos( 3 t) x(t) = c1 + c2 , 2 cos( 3 t) 2 sin( 3 t) where c1 , c2 =arbitrary constants.  A fundamental matrix is given by X(t) =

√ √ − cos( 3 t) √ + sin( 3 t) 2 cos( 3 t)

√ √  − sin( 3 t) √ − cos( 3 t) . 2 sin( 3 t)

5.3.6.11. Because A is in companion form, the easiest way to do this problem is to first solve the equivalent scalar second order ODE, y¨ + 2y˙ + y = 0, where y(t) = x1 (t) and y(t) ˙ = x2 (t): The characteristic equation is 0 = s2 + 2s + 1 = (s + 1)2 , hence s = −1, −1 is a repeated root. The general solution of the scalar second order ODE is y(t) = c1 e−t + c2 t e−t , where c1 , c2 =arbitrary constants. So, the general solution of the original LCCHS in companion form is         y(t) c1 e−t + c2 t e−t 1 t −t −t =  = c1 e   + c2 e  , x(t) =  y(t) ˙ −c1 e−t + c2 (1 − t)e−t −1 1−t where c1 , c2 =arbitrary constants.  A fundamental matrix is given by X(t) = e

1

t

−t

 .

 −1

1−t c Larry

Turyn, October 10, 2013

page 30

−1 = (−4 − λ)(2 − λ) + 9 = λ2 + 2λ + 1 2−λ

−4 − λ 5.3.6.12. 0 = | A − λI | = 9

⇒ the eigenvalues of A are λ1 = λ2 = −1. Corresponding to eigenvalue λ1 = −1, eigenvectors are found by     1   −3 −1 | 0

|0 1 3 A − (−1)I | 0 = ∼ , 9 3 |0 0 0 |0 after row operations − 13 R1 → R1 , −9R1 + R2 → R2 . Corresponding to eigenvalue λ1 = −1 we have     − 31 −1 (1) . only eigenvectors v = c1 , c1 6= 0. Define v = 3 1 Because λ1 = λ2 = −1 is a deficient eigenvalue, we need to also find a generalized eigenvector w that should satisfy the system (A − (−1)I)w = v(1) :     1   −3 −1 | − 1

| 13 1 3 A − (−1)I | v(1) = ∼ , 9 3 | 3 0 0 |0 after row operations − 13 R1 → R1 , −9R1 + R2 → R2 . So, corresponding to eigenvalue λ1 = −1 we have a generalized  1  3 eigenvector w = . 0 The general solution of the LCCHS is     −1 −t + 31 −t (1) −t (1) −t −t , x(t) = c1 e v + c2 e (tv + w) = c1 e + c2 e 3t 3 where c1 , c2 =arbitrary constants. 

−1

−t +

A fundamental matrix is given by X(t) = e−t 

 .

3 a−λ 5.3.6.13. 0 = −b

1 3

3t

b = (a − λ)2 + b2 a−λ

⇒ the eigenvalues of A are the complex conjugate pair λ = a ± i b. Corresponding to eigenvalue λ1 = a + i b, eigenvectors are found by

[ A − (a + i b)I | 0 ] = [ −i b, b | 0 ; −b, −i b | 0 ] ∼ [ 1, i | 0 ; 0, 0 | 0 ],

after row operations (i/b)R1 → R1, bR1 + R2 → R2. Corresponding to eigenvalue λ1 = a + i b we have an eigenvector v(1) = [ −i ; 1 ]. This gives two solutions of the LCCHS: The first is

x(1)(t) = Re( e^{(a+i b)t} [ −i ; 1 ] ) = Re( e^{at}(cos bt + i sin bt) [ −i ; 1 ] ) = Re( e^{at} [ sin bt − i cos bt ; cos bt + i sin bt ] ) = e^{at} [ sin bt ; cos bt ].

For the second, we don't have to do all of the algebra steps again:

x(2)(t) = Im( e^{(a+i b)t} [ −i ; 1 ] ) = Im( e^{at} [ sin bt − i cos bt ; cos bt + i sin bt ] ) = e^{at} [ −cos bt ; sin bt ].

The general solution of the LCCHS is

x(t) = c1 e^{at} [ sin bt ; cos bt ] + c2 e^{at} [ −cos bt ; sin bt ],

where c1, c2 = arbitrary constants. A fundamental matrix is given by Z(t) = e^{at} [ sin bt, −cos bt ; cos bt, sin bt ].

Alternatively, if we switch the columns and multiply a column by −1 we get another possible fundamental matrix given by X(t) = e^{at} [ cos bt, sin bt ; −sin bt, cos bt ].

5.3.6.14. 0 = |A − λI| = | −a − λ, b ; 0, −a − λ | = (−a − λ)²

⇒ the eigenvalues of A are λ1 = λ2 = −a. Corresponding to eigenvalue λ1 = −a, eigenvectors are found by

[ A − (−a)I | 0 ] = [ 0, b | 0 ; 0, 0 | 0 ] ∼ [ 0, 1 | 0 ; 0, 0 | 0 ],

after row operation (1/b)R1 → R1. Corresponding to eigenvalue λ1 = −a we have only eigenvectors v = c1 [ 1 ; 0 ], c1 ≠ 0. Define v(1) = [ 1 ; 0 ].

Because λ1 = λ2 = −a is a deficient eigenvalue, we need to also find a generalized eigenvector w that should satisfy the system (A − (−a)I)w = v(1):

[ A − (−a)I | v(1) ] = [ 0, b | 1 ; 0, 0 | 0 ] ∼ [ 0, 1 | 1/b ; 0, 0 | 0 ],

after row operation (1/b)R1 → R1. So, corresponding to eigenvalue λ1 = −a we have a generalized eigenvector w = [ 0 ; 1/b ].

The general solution of the LCCHS is

x(t) = c1 e^{−at} v(1) + c2 e^{−at} (t v(1) + w) = c1 e^{−at} [ 1 ; 0 ] + c2 e^{−at} [ t ; 1/b ],

where c1, c2 = arbitrary constants. A fundamental matrix is given by X(t) = e^{−at} [ 1, t ; 0, 1/b ].

5.3.6.15. 0 = |A − λI| = | −2 − λ, −3 ; 2, −4 − λ | = (−2 − λ)(−4 − λ) + 6 = λ² + 6λ + 14 = (λ + 3)² + 5

⇒ the eigenvalues of A are the complex conjugate pair λ = −3 ± i√5. Corresponding to eigenvalue λ1 = −3 + i√5, eigenvectors are found by

[ A − (−3 + i√5)I | 0 ] = [ 1 − i√5, −3 | 0 ; 2, −1 − i√5 | 0 ] ∼ [ 1, −(1 + i√5)/2 | 0 ; 0, 0 | 0 ],

after row operations R1 ↔ R2, (1/2)R1 → R1, (−1 + i√5)R1 + R2 → R2. Corresponding to eigenvalue λ1 = −3 + i√5 we have an eigenvector v(1) = [ 1 + i√5 ; 2 ]. This gives two solutions of the LCCHS: The first is

x(1)(t) = Re( e^{(−3+i√5)t} [ 1 + i√5 ; 2 ] ) = Re( e^{−3t}( cos(√5 t) + i sin(√5 t) ) [ 1 + i√5 ; 2 ] )

= Re( e^{−3t} [ cos(√5 t) − √5 sin(√5 t) + i( sin(√5 t) + √5 cos(√5 t) ) ; 2 cos(√5 t) + i 2 sin(√5 t) ] ) = e^{−3t} [ cos(√5 t) − √5 sin(√5 t) ; 2 cos(√5 t) ].

For the second, we don't have to do all of the algebra steps again:

x(2)(t) = Im( e^{(−3+i√5)t} [ 1 + i√5 ; 2 ] ) = e^{−3t} [ sin(√5 t) + √5 cos(√5 t) ; 2 sin(√5 t) ].

The general solution of the LCCHS is

x(t) = c1 e^{−3t} [ cos(√5 t) − √5 sin(√5 t) ; 2 cos(√5 t) ] + c2 e^{−3t} [ sin(√5 t) + √5 cos(√5 t) ; 2 sin(√5 t) ],

where c1, c2 = arbitrary constants. A fundamental matrix is given by

Z(t) = e^{−3t} [ cos(√5 t) − √5 sin(√5 t), sin(√5 t) + √5 cos(√5 t) ; 2 cos(√5 t), 2 sin(√5 t) ].

Further,

e^{tA} = Z(t) ( Z(0) )^{−1} = Z(t) [ 1, √5 ; 2, 0 ]^{−1} = −(1/(2√5)) e^{−3t} [ cos(√5 t) − √5 sin(√5 t), sin(√5 t) + √5 cos(√5 t) ; 2 cos(√5 t), 2 sin(√5 t) ] [ 0, −√5 ; −2, 1 ]

= (1/(2√5)) e^{−3t} [ 2√5 cos(√5 t) + 2 sin(√5 t), −6 sin(√5 t) ; 4 sin(√5 t), 2√5 cos(√5 t) − 2 sin(√5 t) ]

= (1/√5) e^{−3t} [ √5 cos(√5 t) + sin(√5 t), −3 sin(√5 t) ; 2 sin(√5 t), √5 cos(√5 t) − sin(√5 t) ].

Aside: We get the same e^{tA} even if the middle of our work has a different fundamental matrix X(t), for example, from using an eigenvector

v̆(1) = ( 3/(1 + i√5) ) [ 1 + i√5 ; 2 ] = [ 3 ; 6/(1 + i√5) ] = [ 3 ; 1 − i√5 ],

corresponding to eigenvalue λ1 = −3 + i√5.

5.3.6.16. 0 = |A − λI| = | 1 − λ, −4 ; 2, −3 − λ |

= (1 − λ)(−3 − λ) + 8 = λ² + 2λ + 5 = (λ + 1)² + 4

⇒ the eigenvalues of A are the complex conjugate pair λ = −1 ± i2. Corresponding to eigenvalue λ1 = −1 + i2, eigenvectors are found by

[ A − (−1 + i2)I | 0 ] = [ 2 − i2, −4 | 0 ; 2, −2 − i2 | 0 ] ∼ [ 1, −1 − i | 0 ; 0, 0 | 0 ],

after row operations R1 ↔ R2, (1/2)R1 → R1, (−2 + i2)R1 + R2 → R2. Corresponding to eigenvalue λ1 = −1 + i2 we have an eigenvector v(1) = [ 1 + i ; 1 ]. This gives two solutions of the LCCHS: The first is

x(1)(t) = Re( e^{(−1+i2)t} [ 1 + i ; 1 ] ) = Re( e^{−t}(cos 2t + i sin 2t) [ 1 + i ; 1 ] ) = Re( e^{−t} [ cos 2t − sin 2t + i(sin 2t + cos 2t) ; cos 2t + i sin 2t ] ) = e^{−t} [ cos 2t − sin 2t ; cos 2t ].

For the second, we don't have to do all of the algebra steps again:

x(2)(t) = Im( e^{(−1+i2)t} [ 1 + i ; 1 ] ) = e^{−t} [ sin 2t + cos 2t ; sin 2t ].

The general solution of the LCCHS is

x(t) = c1 e^{−t} [ cos 2t − sin 2t ; cos 2t ] + c2 e^{−t} [ sin 2t + cos 2t ; sin 2t ],

where c1, c2 = arbitrary constants. A fundamental matrix is given by X(t) = e^{−t} [ cos 2t − sin 2t, sin 2t + cos 2t ; cos 2t, sin 2t ].

Further,

e^{tA} = X(t) ( X(0) )^{−1} = e^{−t} [ cos 2t − sin 2t, sin 2t + cos 2t ; cos 2t, sin 2t ] [ 1, 1 ; 1, 0 ]^{−1}

= e^{−t} [ cos 2t − sin 2t, sin 2t + cos 2t ; cos 2t, sin 2t ] [ 0, 1 ; 1, −1 ] = e^{−t} [ cos 2t + sin 2t, −2 sin 2t ; sin 2t, cos 2t − sin 2t ].

5.3.6.17. Expanding the determinant along the second row gives characteristic equation

0 = |A − λI| = | 3 − λ, 0, −2 ; 0, −1 − λ, 0 ; 4, 0, 3 − λ | = (−1 − λ) | 3 − λ, −2 ; 4, 3 − λ | = (−1 − λ)( (3 − λ)² + 8 )

⇒ the eigenvalues of A are the complex conjugate pair λ = 3 ± i√8 and the real eigenvalue λ3 = −1. Corresponding to eigenvalue λ1 = 3 + i√8, eigenvectors are found by

[ A − (3 + i√8)I | 0 ] = [ −i√8, 0, −2 | 0 ; 0, −4 − i√8, 0 | 0 ; 4, 0, −i√8 | 0 ] ∼ [ 1, 0, −i/√2 | 0 ; 0, 1, 0 | 0 ; 0, 0, 0 | 0 ],

after row operations R1 ↔ R3, (1/4)R1 → R1, i√8 R1 + R3 → R3, (1/(−4 − i√8))R2 → R2. Corresponding to eigenvalue λ1 = 3 + i√8 we have an eigenvector v(1) = [ i ; 0 ; √2 ]. This gives two solutions of the LCCHS: The first is

x(1)(t) = Re( e^{(3+i√8)t} [ i ; 0 ; √2 ] ) = Re( e^{3t}( cos(√8 t) + i sin(√8 t) ) [ i ; 0 ; √2 ] )

= Re( e^{3t} [ −sin(√8 t) + i cos(√8 t) ; 0 ; √2 cos(√8 t) + i √2 sin(√8 t) ] ) = e^{3t} [ −sin(√8 t) ; 0 ; √2 cos(√8 t) ].

For the second, we don't have to do all of the algebra steps again:

x(2)(t) = Im( e^{(3+i√8)t} [ i ; 0 ; √2 ] ) = e^{3t} [ cos(√8 t) ; 0 ; √2 sin(√8 t) ].

Corresponding to eigenvalue λ3 = −1, eigenvectors are found by

[ A − (−1)I | 0 ] = [ 4, 0, −2 | 0 ; 0, 0, 0 | 0 ; 4, 0, 4 | 0 ] ∼ [ 1, 0, 0 | 0 ; 0, 0, 1 | 0 ; 0, 0, 0 | 0 ],

after row operations −R1 + R3 → R3, (1/4)R1 → R1, (1/6)R3 → R3, (1/2)R3 + R1 → R1. Corresponding to eigenvalue λ3 = −1 we have an eigenvector v(3) = [ 0 ; 1 ; 0 ].

The general solution of the LCCHS is

x(t) = c1 e^{3t} [ −sin(√8 t) ; 0 ; √2 cos(√8 t) ] + c2 e^{3t} [ cos(√8 t) ; 0 ; √2 sin(√8 t) ] + c3 e^{−t} [ 0 ; 1 ; 0 ],

where c1, c2, c3 = arbitrary constants. A fundamental matrix is given by

X(t) = [ −e^{3t} sin(√8 t), e^{3t} cos(√8 t), 0 ; 0, 0, e^{−t} ; √2 e^{3t} cos(√8 t), √2 e^{3t} sin(√8 t), 0 ].

Further,

e^{tA} = X(t) ( X(0) )^{−1} = X(t) · (1/√2) [ 0, 0, 1 ; √2, 0, 0 ; 0, √2, 0 ]

= [ e^{3t} cos(√8 t), 0, −(1/√2) e^{3t} sin(√8 t) ; 0, e^{−t}, 0 ; √2 e^{3t} sin(√8 t), 0, e^{3t} cos(√8 t) ].

5.3.6.18. Expanding the determinant along the first row gives characteristic equation

0 = |A − λI| = | 1 − λ, 0, 0 ; 3, 1 − λ, −2 ; 2, 2, 1 − λ | = (1 − λ) | 1 − λ, −2 ; 2, 1 − λ | = (1 − λ)( (1 − λ)² + 4 )

⇒ the eigenvalues of A are the complex conjugate pair λ = 1 ± i2 and the real eigenvalue λ3 = 1. Corresponding to eigenvalue λ1 = 1 + i2, eigenvectors are found by

[ A − (1 + i2)I | 0 ] = [ −i2, 0, 0 | 0 ; 3, −i2, −2 | 0 ; 2, 2, −i2 | 0 ] ∼ [ 1, 0, 0 | 0 ; 0, −i2, −2 | 0 ; 0, 2, −i2 | 0 ] ∼ [ 1, 0, 0 | 0 ; 0, 1, −i | 0 ; 0, 0, 0 | 0 ],

after row operations (i/2)R1 → R1, −3R1 + R2 → R2, −2R1 + R3 → R3, followed by row operations (i/2)R2 → R2, −2R2 + R3 → R3. Corresponding to eigenvalue λ1 = 1 + i2 we have an eigenvector v(1) = [ 0 ; i ; 1 ]. This gives two solutions of the LCCHS: The first is

x(1)(t) = Re( e^{(1+i2)t} [ 0 ; i ; 1 ] ) = Re( e^{t}(cos 2t + i sin 2t) [ 0 ; i ; 1 ] ) = Re( e^{t} [ 0 ; −sin 2t + i cos 2t ; cos 2t + i sin 2t ] ) = e^{t} [ 0 ; −sin 2t ; cos 2t ].

For the second, we don't have to do all of the algebra steps again:

x(2)(t) = Im( e^{(1+i2)t} [ 0 ; i ; 1 ] ) = e^{t} [ 0 ; cos 2t ; sin 2t ].

Corresponding to eigenvalue λ3 = 1, eigenvectors are found by

[ A − (1)I | 0 ] = [ 0, 0, 0 | 0 ; 3, 0, −2 | 0 ; 2, 2, 0 | 0 ] ∼ [ 1, 1, 0 | 0 ; 0, −3, −2 | 0 ; 0, 0, 0 | 0 ] ∼ [ 1, 0, −2/3 | 0 ; 0, 1, 2/3 | 0 ; 0, 0, 0 | 0 ],

after row operations R1 ↔ R3, (1/2)R1 → R1, −3R1 + R2 → R2, followed by row operations −(1/3)R2 → R2, −R2 + R1 → R1. Corresponding to eigenvalue λ3 = 1 we have an eigenvector v(3) = [ 2 ; −2 ; 3 ].

The general solution of the LCCHS is

x(t) = c1 e^{t} [ 0 ; −sin 2t ; cos 2t ] + c2 e^{t} [ 0 ; cos 2t ; sin 2t ] + c3 e^{t} [ 2 ; −2 ; 3 ],

where c1, c2, c3 = arbitrary constants. A fundamental matrix is given by X(t) = e^{t} [ 0, 0, 2 ; −sin 2t, cos 2t, −2 ; cos 2t, sin 2t, 3 ].

Further,

e^{tA} = X(t) ( X(0) )^{−1} = e^{t} [ 0, 0, 2 ; −sin 2t, cos 2t, −2 ; cos 2t, sin 2t, 3 ] [ 0, 0, 2 ; 0, 1, −2 ; 1, 0, 3 ]^{−1}

= e^{t} [ 0, 0, 2 ; −sin 2t, cos 2t, −2 ; cos 2t, sin 2t, 3 ] · (1/2) [ −3, 0, 2 ; 2, 2, 0 ; 1, 0, 0 ]

= (1/2) e^{t} [ 2, 0, 0 ; 2 cos 2t + 3 sin 2t − 2, 2 cos 2t, −2 sin 2t ; −3 cos 2t + 2 sin 2t + 3, 2 sin 2t, 2 cos 2t ].

5.3.6.19. (a) 0 = |A − λI| = | 10 − λ, 11 ; −11, −12 − λ | = (10 − λ)(−12 − λ) + 121 = λ² + 2λ + 1 = (λ + 1)²

⇒ the eigenvalues of A are λ1 = λ2 = −1. Corresponding to eigenvalue λ1 = −1, eigenvectors are found by

[ A − (−1)I | 0 ] = [ 11, 11 | 0 ; −11, −11 | 0 ] ∼ [ 1, 1 | 0 ; 0, 0 | 0 ],

after row operations R1 + R2 → R2, (1/11)R1 → R1. Corresponding to eigenvalue λ1 = −1 we have only eigenvectors v = c1 [ −1 ; 1 ], where c1 ≠ 0. Define v(1) = [ −1 ; 1 ].

Because λ1 = λ2 = −1 is a deficient eigenvalue, we need to also find a generalized eigenvector w that should satisfy the system (A − (−1)I)w = v(1):

[ A − (−1)I | v(1) ] = [ 11, 11 | −1 ; −11, −11 | 1 ] ∼ [ 1, 1 | −1/11 ; 0, 0 | 0 ],

after row operations R1 + R2 → R2, (1/11)R1 → R1. So, corresponding to eigenvalue λ1 = −1 we have a generalized eigenvector w = [ −1/11 ; 0 ].

The general solution of the LCCHS is

x(t) = c1 e^{−t} v(1) + c2 e^{−t} (t v(1) + w) = c1 e^{−t} [ −1 ; 1 ] + c2 e^{−t} [ −t − 1/11 ; t ],

where c1, c2 = arbitrary constants. A fundamental matrix is given by

X(t) = e^{−t} [ −1, −t − 1/11 ; 1, t ].

Further,

e^{tA} = X(t) ( X(0) )^{−1} = e^{−t} [ −1, −t − 1/11 ; 1, t ] [ 0, 1 ; −11, −11 ] = e^{−t} [ 11t + 1, 11t ; −11t, −11t + 1 ].

(b) Define X(s) = L[ x(t) ](s). Take the Laplace transforms of both sides of the system of ODEs to get sX − x(0) = AX, hence

X = (sI − A)^{−1} x(0) = [ s − 10, −11 ; 11, s + 12 ]^{−1} x(0) = ( 1/((s − 10)(s + 12) + 121) ) [ s + 12, 11 ; −11, s − 10 ] x(0)

= ( 1/(s² + 2s + 1) ) [ s + 12, 11 ; −11, s − 10 ] x(0) = [ ((s + 1) + 11)/(s + 1)², 11/(s + 1)² ; −11/(s + 1)², ((s + 1) − 11)/(s + 1)² ] x(0).

So,

x(t) = L^{−1}[ X(s) ](t) = L^{−1}[ [ 1/(s + 1) + 11/(s + 1)², 11/(s + 1)² ; −11/(s + 1)², 1/(s + 1) − 11/(s + 1)² ] ](t) x(0)

= [ e^{−t} + 11 t e^{−t}, 11 t e^{−t} ; −11 t e^{−t}, e^{−t} − 11 t e^{−t} ] x(0) = e^{−t} [ 1 + 11t, 11t ; −11t, 1 − 11t ] x(0) = e^{tA} x(0).

So,

e^{tA} = e^{−t} [ 1 + 11t, 11t ; −11t, 1 − 11t ].

This conclusion agrees with the conclusion in part (a).

5.3.6.20. Expanding the determinant along the second row gives characteristic equation

0 = |A − λI| = | 3 − λ, 3, −1 ; 0, −1 − λ, 0 ; 4, 4, −1 − λ | = (−1 − λ) | 3 − λ, −1 ; 4, −1 − λ | = (−1 − λ)( (3 − λ)(−1 − λ) + 4 )

= (−1 − λ)( λ² − 2λ + 1 ) = (−1 − λ)(λ − 1)²

⇒ the eigenvalues of A are λ1 = −1 and λ2 = λ3 = 1. Corresponding to eigenvalue λ1 = −1, eigenvectors are found by

1 0 1   0 | 0  ∼  0 −1 −1 A − (−1)I | 0 =  0 0 4 4 0 |0 0 0 0

  |0

1 |0 ∼ 0 |0 0

0 1 0

−1 1 0

 |0 | 0 , |0

after row operations R2 ↔ R3 , R1 ↔ R2 41 R1 → R1 , −4R1 + R2 → R2 , followed by row operation R2 + R1 → R1 , −R2 → R2 . Corresponding to eigenvalue λ1 = −1 we have an eigenvector

c Larry

Turyn, October 10, 2013

page 37

 1 =  −1 . 1 Corresponding to eigenvalues λ2 = λ3 = 1,  2 3 −1 |   0 | A − (1)I | 0 =  0 −2 4 4 −2 | 

v

(1)

eigenvectors are found by     3

0

− 12 | 0 1 1 2 0 ∼ 0

0 |0 ∼ 0 1 0 0 −2 0 |0 0

− 12 0 0

0

1 0

 |0 | 0 , |0

after row operations 12 R1 → R1 , −4R1 + R2 → R2 , − 21 R2 → R2 followed by row operations − 32 R2 + R1 → R1 , 2R2 + R3 → R3 . Corresponding to eigenvalues λ2 = λ3 = 1 we have  1   1  2

2

only eigenvectors v = c1  0 , where c1 6= 0. Define v(2) =  0 . 1 1 Because λ2 = λ3 = 1 is a deficient eigenvalue, we need to also find a generalized eigenvector w that should satisfy the system (A − (1)I)w = v(2) :       3 2 3 −1 | 21

− 21 | 14

0 − 12 | 41 1 1 2  (2)  0 | 0 ∼ 0

0 |0  ∼  0 0 | 0 , A − (1)I | v =  0 −2 1 1 4 4 −2 | 1 0 −2 0 |0 0 0 0 |0 after row operations 21 R1 → R1 , −4R1 + R2 → R2 , − 21 R2 → R2 followed by row operations − 32 R2 + R1 → R1 , 2R2 + R3 → R3 . So, corresponding to eigenvalue λ1 = −1 we have  1  4

a generalized eigenvector w =  0 . 0 The general solution of the LCCHS is 

 1 x(t) = c1 e−t  −1  + c2 et v(2) + c3 et (tv(2) + w) 1 where c1 , c2 =arbitrary constants.

e−t



1 2

  −t A fundamental matrix is given by X(t) =   −e  e−t 



e−t

  −t =  −e  e−t

e−t

1 2

et

1 2

t+

1 4

0 et et

1 2

t+

1 4



et

  −1 −t 0 0 et A = X(t) X(0) = e  −e  e−t et tet     1 t 1 e t + 14 et 0 −1 0 (2t + 1)et 2 2         0 0 1 1 0  0 =    et tet 4 2 −2 4tet

 t  e   0   . Further,  tet 

1

    −1   1

1 2

0 1

1 4

−1

  0    0

−e−t + (t + 1)et

−tet



e−t

0

−e−t + (2t + 1)et

(−2t + 1)et

  .  

(b) Define X(s) = L[ x(t) ](s). Take the Laplace transforms of both sides of the system of ODEs to get sX − x(0) = AX, hence

X = (sI − A)^{−1} x(0) = [ s − 3, −3, 1 ; 0, s + 1, 0 ; −4, −4, s + 1 ]^{−1} x(0)

= ( 1/(s³ − s² − s + 1) ) [ s² + 2s + 1, 3s − 1, −s − 1 ; 0, s² − 2s + 1, 0 ; 4(s + 1), 4s, s² − 2s − 3 ] x(0)

= ( 1/((s + 1)(s − 1)²) ) [ (s + 1)², 3s − 1, −(s + 1) ; 0, (s − 1)², 0 ; 4(s + 1), 4s, (s − 3)(s + 1) ] x(0)

= [ (s + 1)/(s − 1)², (3s − 1)/((s + 1)(s − 1)²), −1/(s − 1)² ; 0, 1/(s + 1), 0 ; 4/(s − 1)², 4s/((s + 1)(s − 1)²), (s − 3)/(s − 1)² ] x(0).

Here are some useful partial fractions expansions:

(1) (s + 1)/(s − 1)² = ( (s − 1) + 2 )/(s − 1)² = 1/(s − 1) + 2/(s − 1)²

(2) (s − 3)/(s − 1)² = ( (s − 1) − 2 )/(s − 1)² = 1/(s − 1) − 2/(s − 1)²

s/((s + 1)(s − 1)²) = A/(s + 1) + B/(s − 1) + C/(s − 1)² ⇒ (⋆) s = A(s − 1)² + B(s + 1)(s − 1) + C(s + 1), where A, B, C are constants to be determined. Substitute s = 1 and s = −1 into (⋆) to get, respectively, 1 = 2C and −1 = 4A. Substitute the values of A and C into (⋆) to get

s = −(1/4)(s − 1)² + B(s + 1)(s − 1) + (1/2)(s + 1),

hence

B = ( s + (1/4)(s − 1)² − (1/2)(s + 1) )/((s + 1)(s − 1)) = ( s + (1/4)(s² − 2s + 1) − (1/2)(s + 1) )/((s + 1)(s − 1)) = 1/4.

So,

(3) s/((s + 1)(s − 1)²) = (−1/4)/(s + 1) + (1/4)/(s − 1) + (1/2)/(s − 1)².

We also need 1/((s + 1)(s − 1)²) = A/(s + 1) + B/(s − 1) + C/(s − 1)² ⇒ (⋆) 1 = A(s − 1)² + B(s + 1)(s − 1) + C(s + 1), where A, B, C are constants to be determined. Substitute s = 1 and s = −1 into (⋆) to get, respectively, 1 = 2C and 1 = 4A. Substitute the values of A and C into (⋆) to get

1 = (1/4)(s − 1)² + B(s + 1)(s − 1) + (1/2)(s + 1),

hence

B = ( 1 − (1/4)(s − 1)² − (1/2)(s + 1) )/((s + 1)(s − 1)) = ( 1 − (1/4)(s² − 2s + 1) − (1/2)(s + 1) )/((s + 1)(s − 1)) = −1/4.

So,

(4) 1/((s + 1)(s − 1)²) = (1/4)/(s + 1) + (−1/4)/(s − 1) + (1/2)/(s − 1)².

Using (1)–(4), noting (3s − 1)/((s + 1)(s − 1)²) = 3·(3) − (4) and 4s/((s + 1)(s − 1)²) = 4·(3), we have

X = [ 1/(s − 1) + 2/(s − 1)², −1/(s + 1) + 1/(s − 1) + 1/(s − 1)², −1/(s − 1)² ; 0, 1/(s + 1), 0 ; 4/(s − 1)², −1/(s + 1) + 1/(s − 1) + 2/(s − 1)², 1/(s − 1) − 2/(s − 1)² ] x(0).

So,

x = L^{−1}[ X(s) ](t) = [ e^{t} + 2t e^{t}, −e^{−t} + e^{t} + t e^{t}, −t e^{t} ; 0, e^{−t}, 0 ; 4t e^{t}, −e^{−t} + e^{t} + 2t e^{t}, e^{t} − 2t e^{t} ] x(0) = e^{tA} x(0).

So,

e^{tA} = [ (2t + 1)e^{t}, −e^{−t} + (t + 1)e^{t}, −t e^{t} ; 0, e^{−t}, 0 ; 4t e^{t}, −e^{−t} + (2t + 1)e^{t}, (−2t + 1)e^{t} ].

This conclusion agrees with the conclusion in part (a).

5.3.6.21. We are given that the eigenvalues of A are the complex conjugate pair λ = −1 ± i and the real eigenvalue λ3 = −1. Corresponding to eigenvalue λ1 = −1 + i, eigenvectors are found by

[ A − (−1 + i)I | 0 ] = [ −i, −1, 0 | 0 ; 2, −i, 1 | 0 ; 0, 1, −i | 0 ] ∼ [ 1, −i, 0 | 0 ; 0, i, 1 | 0 ; 0, 1, −i | 0 ],

after row operations iR1 → R1, −2R1 + R2 → R2. Continuing,

[ A − (−1 + i)I | 0 ] ∼ [ 1, 0, 1 | 0 ; 0, 1, −i | 0 ; 0, 0, 0 | 0 ],

after row operations R2 + R1 → R1, −iR2 → R2, −R2 + R3 → R3. Corresponding to eigenvalue λ1 = −1 + i we have an eigenvector v(1) = [ −1 ; i ; 1 ]. This gives two solutions of the LCCHS: The first is

x(1)(t) = Re( e^{(−1+i)t} [ −1 ; i ; 1 ] ) = Re( e^{−t}(cos t + i sin t) [ −1 ; i ; 1 ] ) = Re( e^{−t} [ −cos t − i sin t ; −sin t + i cos t ; cos t + i sin t ] ) = e^{−t} [ −cos t ; −sin t ; cos t ].

For the second, we don't have to do all of the algebra steps again:

x(2)(t) = Im( e^{(−1+i)t} [ −1 ; i ; 1 ] ) = e^{−t} [ −sin t ; cos t ; sin t ].

Corresponding to eigenvalue λ3 = −1, eigenvectors are found by

[ A − (−1)I | 0 ] = [ 0, −1, 0 | 0 ; 2, 0, 1 | 0 ; 0, 1, 0 | 0 ] ∼ [ 1, 0, 1/2 | 0 ; 0, 1, 0 | 0 ; 0, 0, 0 | 0 ],

after row operations R1 + R3 → R3, R1 ↔ R2, (1/2)R1 → R1, −R2 → R2. Corresponding to eigenvalue λ3 = −1 we have an eigenvector v(3) = [ −1 ; 0 ; 2 ].

The general solution of the LCCHS is

x(t) = c1 e^{−t} [ −cos t ; −sin t ; cos t ] + c2 e^{−t} [ −sin t ; cos t ; sin t ] + c3 e^{−t} [ −1 ; 0 ; 2 ],

where c1, c2, c3 = arbitrary constants. A fundamental matrix is given by

X(t) = e^{−t} [ −cos t, −sin t, −1 ; −sin t, cos t, 0 ; cos t, sin t, 2 ].

Further,

e^{tA} = X(t) ( X(0) )^{−1} = e^{−t} [ −cos t, −sin t, −1 ; −sin t, cos t, 0 ; cos t, sin t, 2 ] [ −1, 0, −1 ; 0, 1, 0 ; 1, 0, 2 ]^{−1}

= e^{−t} [ −cos t, −sin t, −1 ; −sin t, cos t, 0 ; cos t, sin t, 2 ] [ −2, 0, −1 ; 0, 1, 0 ; 1, 0, 1 ]

= e^{−t} [ −1 + 2 cos t, −sin t, −1 + cos t ; 2 sin t, cos t, sin t ; 2 − 2 cos t, sin t, 2 − cos t ].
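The three-by-three matrix exponential obtained in 5.3.6.21 can be confirmed numerically (my check; the matrix A below is a reconstruction consistent with the row reductions above, so treat it as an assumption):

```python
import numpy as np
from scipy.linalg import expm

# A consistent with the reductions in 5.3.6.21 (eigenvalues -1 +/- i and -1)
A = np.array([[-1.0, -1.0, 0.0], [2.0, -1.0, 1.0], [0.0, 1.0, -1.0]])
t = 0.8
c, s, emt = np.cos(t), np.sin(t), np.exp(-t)
# Closed form e^{tA} = X(t) X(0)^{-1} from the solution
E = emt * np.array([[-1 + 2*c, -s, -1 + c],
                    [2*s, c, s],
                    [2 - 2*c, s, 2 - c]])
max_err = np.abs(E - expm(t * A)).max()
```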

5.3.6.22. (a) 0 = |A − λI| = | −1 − λ, 4 ; −1, 1 − λ | = (−1 − λ)(1 − λ) + 4 = λ² + 3

⇒ the eigenvalues of A are the complex conjugate pair λ = ± i√3. Corresponding to eigenvalue λ1 = i√3, eigenvectors are found by

[ A − (i√3)I | 0 ] = [ −1 − i√3, 4 | 0 ; −1, 1 − i√3 | 0 ] ∼ [ 1, −1 + i√3 | 0 ; 0, 0 | 0 ],

after row operations R1 ↔ R2, −R1 → R1, (1 + i√3)R1 + R2 → R2. Corresponding to eigenvalue λ1 = i√3 we have an eigenvector v(1) = [ 1 − i√3 ; 1 ]. This gives two solutions of the LCCHS: The first is

x(1)(t) = Re( e^{i√3 t} [ 1 − i√3 ; 1 ] ) = Re( ( cos(√3 t) + i sin(√3 t) ) [ 1 − i√3 ; 1 ] ) = [ cos(√3 t) + √3 sin(√3 t) ; cos(√3 t) ].

For the second, we don't have to do all of the algebra steps again:

x(2)(t) = Im( e^{i√3 t} [ 1 − i√3 ; 1 ] ) = [ −√3 cos(√3 t) + sin(√3 t) ; sin(√3 t) ].

The general solution of the LCCHS is

x(t) = c1 [ cos(√3 t) + √3 sin(√3 t) ; cos(√3 t) ] + c2 [ −√3 cos(√3 t) + sin(√3 t) ; sin(√3 t) ],

where c1, c2 = arbitrary constants. A fundamental matrix is given by

X(t) = [ cos(√3 t) + √3 sin(√3 t), −√3 cos(√3 t) + sin(√3 t) ; cos(√3 t), sin(√3 t) ].

(b) The general solution of the ODE system can be written as x(t) = X(t)c, where c is a 2-vector of arbitrary constants. To satisfy the IC, we want [ 2 ; −3 ] = x(0) = X(0)c, hence the solution is

x(t) = X(t) ( X(0) )^{−1} [ 2 ; −3 ] = X(t) [ 1, −√3 ; 1, 0 ]^{−1} [ 2 ; −3 ] = X(t) · (1/√3) [ 0, √3 ; −1, 1 ] [ 2 ; −3 ] = X(t) [ −3 ; −5√3/3 ]

= [ 2 cos(√3 t) − (14√3/3) sin(√3 t) ; −3 cos(√3 t) − (5√3/3) sin(√3 t) ],

that is, the parametric curve ( x1(t), x2(t) ) = ( 2 cos(√3 t) − (14√3/3) sin(√3 t), −3 cos(√3 t) − (5√3/3) sin(√3 t) ) for −∞ < t < ∞.

5.3.6.23. 0 = |A − λI| = | −1 − λ, 2 ; −2, −1 − λ | = (−1 − λ)² + 2² ⇒ the eigenvalues of A are the complex conjugate pair λ = −1 ± i2.

Corresponding to eigenvalue λ1 = −1 + i2, eigenvectors are found by

[ A − (−1 + i2)I | 0 ] = [ −i2, 2 | 0 ; −2, −i2 | 0 ] ∼ [ 1, i | 0 ; 0, 0 | 0 ],

after row operations (i/2)R1 → R1, 2R1 + R2 → R2. Corresponding to eigenvalue λ1 = −1 + i2 we have an eigenvector v(1) = [ −i ; 1 ]. This gives two solutions of the LCCHS: The first is

x(1)(t) = Re( e^{(−1+i2)t} [ −i ; 1 ] ) = Re( e^{−t}(cos 2t + i sin 2t) [ −i ; 1 ] ) = Re( e^{−t} [ sin 2t − i cos 2t ; cos 2t + i sin 2t ] ) = e^{−t} [ sin 2t ; cos 2t ].

For the second, we don't have to do all of the algebra steps again:

x(2)(t) = Im( e^{(−1+i2)t} [ −i ; 1 ] ) = e^{−t} [ −cos 2t ; sin 2t ].

The general solution of the LCCHS is

x(t) = c1 e^{−t} [ sin 2t ; cos 2t ] + c2 e^{−t} [ −cos 2t ; sin 2t ],

where c1, c2 = arbitrary constants. The ICs require

[ π ; 2 ] = x(0) = c1 [ 0 ; 1 ] + c2 [ −1 ; 0 ],

which implies c1 = 2 and c2 = −π. The solution of the IVP is

x(t) = 2e^{−t} [ sin 2t ; cos 2t ] − π e^{−t} [ −cos 2t ; sin 2t ] = e^{−t} [ 2 sin 2t + π cos 2t ; 2 cos 2t − π sin 2t ].

5.3.6.24. 0 = |A − λI| = | −2 − λ, 1 ; −2, −4 − λ | = (−2 − λ)(−4 − λ) + 2 = λ² + 6λ + 10 = (λ + 3)² + 1 ⇒ the eigenvalues of A are the complex conjugate pair λ = −3 ± i.

Corresponding to eigenvalue λ1 = −3 + i, eigenvectors are found by

[ A − (−3 + i)I | 0 ] = [ 1 − i, 1 | 0 ; −2, −1 − i | 0 ] ∼ [ 1, (1 + i)/2 | 0 ; 0, 0 | 0 ],

after row operations ( 1/(1 − i) )R1 → R1, 2R1 + R2 → R2. Corresponding to eigenvalue λ1 = −3 + i we have an eigenvector v(1) = [ −1 − i ; 2 ]. This gives two solutions of the LCCHS: The first is

x(1)(t) = Re( e^{(−3+i)t} [ −1 − i ; 2 ] ) = Re( e^{−3t}(cos t + i sin t) [ −1 − i ; 2 ] ) = Re( e^{−3t} [ −cos t + sin t − i(cos t + sin t) ; 2 cos t + i2 sin t ] ) = e^{−3t} [ −cos t + sin t ; 2 cos t ].

For the second, we don't have to do all of the algebra steps again:

x(2)(t) = Im( e^{(−3+i)t} [ −1 − i ; 2 ] ) = e^{−3t} [ −cos t − sin t ; 2 sin t ].

The general solution of the LCCHS is

x(t) = c1 e^{−3t} [ −cos t + sin t ; 2 cos t ] + c2 e^{−3t} [ −cos t − sin t ; 2 sin t ],

where c1, c2 = arbitrary constants. The ICs require

[ 1 ; −2 ] = x(0) = c1 [ −1 ; 2 ] + c2 [ −1 ; 0 ],

which implies c1 = −1, and this implies c2 = 0. The solution of the IVP is

x(t) = −e^{−3t} [ −cos t + sin t ; 2 cos t ] = e^{−3t} [ cos t − sin t ; −2 cos t ].

5.3.6.25. Because A is in companion form, the easiest way to do this problem is to first solve the equivalent scalar second order ODE, ÿ + 2ẏ + 5y = 0, where ẏ(t) = v(t): The characteristic equation is 0 = s² + 2s + 5 = (s + 1)² + 4, hence the roots are the complex conjugate pair s = −1 ± i2. The general solution of the scalar second order ODE is y(t) = c1 e^{−t} cos 2t + c2 e^{−t} sin 2t, where c1, c2 = arbitrary constants. So, the general solution of the original LCCHS in companion form is

x(t) = [ y(t) ; ẏ(t) ] = [ c1 e^{−t} cos 2t + c2 e^{−t} sin 2t ; −c1 e^{−t} cos 2t − 2c1 e^{−t} sin 2t − c2 e^{−t} sin 2t + 2c2 e^{−t} cos 2t ]

= c1 e^{−t} [ cos 2t ; −cos 2t − 2 sin 2t ] + c2 e^{−t} [ sin 2t ; −sin 2t + 2 cos 2t ],

where c1, c2 = arbitrary constants. The ICs require

[ 1 ; 0 ] = x(0) = c1 [ 1 ; −1 ] + c2 [ 0 ; 2 ],

which implies c1 = 1, which implies 0 = −c1 + 2c2 = −1 + 2c2, hence c2 = 1/2. The solution of the IVP is

x(t) = [ y(t) ; v(t) ] = e^{−t} [ cos 2t ; −cos 2t − 2 sin 2t ] + (1/2) e^{−t} [ sin 2t ; −sin 2t + 2 cos 2t ] = e^{−t} [ cos 2t + (1/2) sin 2t ; −(5/2) sin 2t ].

5.3.6.26. Expanding the determinant along the first row gives characteristic equation

0 = |A − λI| = | −4 − λ, 0, 0 ; 2, −1 − λ, 1 ; 0, 2, −3 − λ | = (−4 − λ) | −1 − λ, 1 ; 2, −3 − λ | = (−4 − λ)( (−1 − λ)(−3 − λ) − 2 ) = (−4 − λ)( λ² + 4λ + 1 )

⇒ the eigenvalues of A are λ1 = −4 and λ2,3 = ( −4 ± √(16 − 4) )/2 = −2 ± √3.

The exact frequencies of vibration are ω1 = √(−λ1) = √4 = 2, ω2 = √(−λ2) = √(2 − √3), and ω3 = √(−λ3) = √(2 + √3).
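The eigenvalues and frequencies in 5.3.6.26 can be verified numerically (my check; the matrix A below is a reconstruction from the determinant expansion above, so treat its exact entries as an assumption):

```python
import numpy as np

# A reconstructed from the determinant expansion in 5.3.6.26
A = np.array([[-4.0, 0.0, 0.0], [2.0, -1.0, 1.0], [0.0, 2.0, -3.0]])
lam = np.sort(np.linalg.eigvals(A).real)  # all eigenvalues are real here
expected = np.sort([-4.0, -2.0 - np.sqrt(3.0), -2.0 + np.sqrt(3.0)])
max_err = np.abs(lam - expected).max()
# Frequencies of vibration: omega_i = sqrt(-lambda_i)
freqs = np.sqrt(-lam)
```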

5.3.6.27. Solutions x(t) = e^{σt} v of the second order system ẍ(t) = Ax imply, where λ = σ², that

0 = | −3 − λ, 1 ; 2, −2 − λ | = (−3 − λ)(−2 − λ) − 2 = λ² + 5λ + 4 = (λ + 4)(λ + 1)

⇒ the eigenvalues of A are λ1 = −4 and λ2 = −1. Corresponding to eigenvalue λ1 = −4, eigenvectors are found by

[ A − (−4)I | 0 ] = [ 1, 1 | 0 ; 2, 2 | 0 ] ∼ [ 1, 1 | 0 ; 0, 0 | 0 ],

after row operation −2R1 + R2 → R2. Corresponding to eigenvalue λ1 = −4 we have an eigenvector v(1) = [ −1 ; 1 ].

Corresponding to eigenvalue λ2 = −1, eigenvectors are found by

[ A − (−1)I | 0 ] = [ −2, 1 | 0 ; 2, −1 | 0 ] ∼ [ 1, −1/2 | 0 ; 0, 0 | 0 ],

after row operations R1 + R2 → R2, −(1/2)R1 → R1. Corresponding to eigenvalue λ2 = −1 we have an eigenvector v(2) = [ 1 ; 2 ].

Corresponding to −ν1² = σ² = λ1 = −4, hence ν1 = 2, we get solutions of the system in the forms

x(1)(t) = Re( e^{iν1 t} v1 ) = cos 2t v1 and x(2)(t) = Im( e^{iν1 t} v1 ) = sin 2t v1.

Corresponding to −ν2² = σ² = λ2 = −1, hence ν2 = 1, we get solutions of the system in the forms

x(3)(t) = Re( e^{iν2 t} v2 ) = cos t v2 and x(4)(t) = Im( e^{iν2 t} v2 ) = sin t v2.

The general solution of the second order ODE system is

x(t) = (c1 cos 2t + d1 sin 2t) [ −1 ; 1 ] + (c2 cos t + d2 sin t) [ 1 ; 2 ],

where c1, c2, d1, d2 are arbitrary constants.

5.3.6.28. (a) Corresponding to eigenvalue λ = −2, eigenvectors are found by

[ A − (−2)I | 0 ] = [ 2, 2, −1 | 0 ; −3, −3, 2 | 0 ; −2, −2, 2 | 0 ] ∼ [ 1, 1, −0.5 | 0 ; 0, 0, 0.5 | 0 ; 0, 0, 1 | 0 ] ∼ [ 1, 1, 0 | 0 ; 0, 0, 1 | 0 ; 0, 0, 0 | 0 ],

after row operations (1/2)R1 → R1, 3R1 + R2 → R2, 2R1 + R3 → R3, followed by row operations R2 + R1 → R1, −2R2 + R3 → R3, 2R2 → R2. Corresponding to eigenvalue λ = −2 we have only eigenvectors v = c1 [ −1 ; 1 ; 0 ], where c1 ≠ 0. Define v(1) = [ −1 ; 1 ; 0 ].

To find a generalized eigenvector w that should satisfy the system (A − (−2)I)w = v(1):

[ A − (−2)I | v(1) ] = [ 2, 2, −1 | −1 ; −3, −3, 2 | 1 ; −2, −2, 2 | 0 ] ∼ [ 1, 1, −0.5 | −0.5 ; 0, 0, 0.5 | −0.5 ; 0, 0, 1 | −1 ] ∼ [ 1, 1, 0 | −1 ; 0, 0, 1 | −1 ; 0, 0, 0 | 0 ],

after the same row operations. So, corresponding to eigenvalue λ1 = −2 we have a generalized eigenvector w = [ −1 ; 0 ; −1 ].

(b) The characteristic equation is

0 = | −λ, 2, −1 ; −3, −5 − λ, 2 ; −2, −2, −λ | = | −(λ + 2), 2, −1 ; 2 + λ, −5 − λ, 2 ; 0, −2, −λ |   (−C2 + C1 → C1)

= −(λ + 2) | 1, 2, −1 ; −1, −5 − λ, 2 ; 0, −2, −λ | = −(λ + 2) | 1, 2, −1 ; 0, −3 − λ, 1 ; 0, −2, −λ |   (R1 + R2 → R2)

= −(λ + 2) | −3 − λ, 1 ; −2, −λ | = −(λ + 2)( λ² + 3λ + 2 ) = −(λ + 2)(λ + 2)(λ + 1)

⇒ the eigenvalues of A are λ1 = λ2 = −2 and λ3 = −1. Corresponding to eigenvalue λ3 = −1, eigenvectors are found by

[ A − (−1)I | 0 ] = [ 1, 2, −1 | 0 ; −3, −4, 2 | 0 ; −2, −2, 1 | 0 ] ∼ [ 1, 2, −1 | 0 ; 0, 2, −1 | 0 ; 0, 2, −1 | 0 ] ∼ [ 1, 0, 0 | 0 ; 0, 1, −1/2 | 0 ; 0, 0, 0 | 0 ],

after row operations 3R1 + R2 → R2, 2R1 + R3 → R3, followed by row operations −R2 + R3 → R3, −R2 + R1 → R1, (1/2)R2 → R2. Corresponding to eigenvalue λ3 = −1 we have an eigenvector v(3) = [ 0 ; 1 ; 2 ].

The general solution of the LCCHS is

x(t) = c1 e^{−2t} v(1) + c2 e^{−2t} (t v(1) + w) + c3 e^{−t} v(3) = c1 e^{−2t} [ −1 ; 1 ; 0 ] + c2 e^{−2t} [ −t − 1 ; t ; −1 ] + c3 e^{−t} [ 0 ; 1 ; 2 ],

where c1, c2, c3 = arbitrary constants. A fundamental matrix is given by

X(t) = [ −e^{−2t}, −(t + 1)e^{−2t}, 0 ; e^{−2t}, t e^{−2t}, e^{−t} ; 0, −e^{−2t}, 2e^{−t} ].

Further,

e^{tA} = X(t) ( X(0) )^{−1} = X(t) [ −1, −1, 0 ; 1, 0, 1 ; 0, −1, 2 ]^{−1} = X(t) [ 1, 2, −1 ; −2, −2, 1 ; −1, −1, 1 ]

= [ (2t + 1)e^{−2t}, 2t e^{−2t}, −t e^{−2t} ; (−2t + 1)e^{−2t} − e^{−t}, (−2t + 2)e^{−2t} − e^{−t}, (t − 1)e^{−2t} + e^{−t} ; 2e^{−2t} − 2e^{−t}, 2e^{−2t} − 2e^{−t}, −e^{−2t} + 2e^{−t} ].

(−2t + 2)e−2t − e−t 2e−2t − 2e−t 

5.3.6.29. The eigenvalues of the upper triangular matrix A = [ −1, 1 ; 0, −2 ] are λ1 = −1 and λ2 = −2. Because both have negative real part, the system ẋ = Ax is asymptotically stable.

5.3.6.30. The characteristic equation is 0 = | √2 − λ, √2 ; −3√2, −√2 − λ | = (√2 − λ)(−√2 − λ) + 6 = λ² + 4

⇒ eigenvalues are λ1,2 = ± i2. Because the two eigenvalues of the 2 × 2 matrix are distinct, neither eigenvalue is deficient. Because they have real part equal to zero and are not deficient, the system ẋ = Ax is neutrally stable.

5.3.6.31. 0 = |A − λI| = | −12 − λ, −25 ; 4, 8 − λ | = (−12 − λ)(8 − λ) + 100 = λ² + 4λ + 4 = (λ + 2)²

⇒ the eigenvalues of A are λ1 = λ2 = −2. Because both eigenvalues have negative real part, the system x˙ = Ax is asymptotically stable. [Aside: Even though there is a repeated eigenvalue, because it has negative real part it does not matter whether it is deficient, according to Theorem 5.11.] 

5.3.6.32. The eigenvalues of the upper triangular matrix A = [ −1, 0, 1 ; 0, −1, −2 ; 0, 0, 0 ] are λ1 = λ2 = −1 and λ3 = 0. Because an eigenvalue has zero real part, the system cannot be asymptotically stable; but, all eigenvalues have nonpositive real part. So, is the system neutrally stable or unstable? The only eigenvalue with zero real part is not repeated, hence it cannot be deficient. The system ẋ = Ax is neutrally stable. [Aside: Even though there is a repeated eigenvalue, because it has negative real part it does not matter whether it is deficient, according to Theorem 5.11.]

4 2 −5 − λ

−5 − λ 2 9+λ

=

2 −8 − λ 0

4 2 −9 − λ

−R1 + R3 → R3

=

−5 − λ 2 (λ + 9) 1

R3 ← (9 + λ)R3 −1 − λ = (−1)(λ + 9) 4

2 −8 − λ 0

4 2 −1

=

−1 − λ 4 (λ + 9) 0

2 −8 − λ 0

4 2 −1

C3 + C1 → C1  2 = −(λ + 9) (−1 − λ)(−8 − λ) − 8 = −(λ + 9)(λ2 + 9λ) −8 − λ

⇒ the eigenvalues of A are λ1 = λ2 = −9 and λ3 = 0.

c Larry

Turyn, October 10, 2013

Because an eigenvalue has zero real part, the system cannot be asymptotically stable. So, is the system neutrally stable or unstable? The only eigenvalue with zero real part is not repeated, hence it cannot be deficient. The system ẋ = Ax is neutrally stable. [Aside: Even though there is a repeated eigenvalue, because it has negative real part it does not matter whether it is deficient, according to Theorem 5.11.]

5.3.6.35. (a) must be false, because there are eigenvalues whose real part is zero.
(b) may be true and may be false, depending upon whether the repeated eigenvalues ± i, ± i are not deficient or deficient, respectively.
(c) must be true, by the same reasoning as for part (b).
(d) must be true, because by themselves the first pair of eigenvalues ± i · 1 = ± i ω give some periodic solutions whose period is 2π/1 = 2π. [The problem did not ask whether all solutions are periodic.]
(e) may be true, because the repeated eigenvalues ± i, ± i may be deficient.

5.3.6.36. Because the matrix A has an eigenvalue λ with Re(λ) = 0 that is deficient, we know that there is a corresponding eigenvector v and corresponding generalized eigenvector w so that one of the solutions (possibly complex, because λ may be complex) is of the form x(t) = e^{λt}(tv + w). Because |e^{λt}| = 1 when Re(λ) = 0, ||x(t)|| → ∞ as t → ∞. So, the LCCHS ẋ = Ax is not neutrally stable.
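The case analysis used across 5.3.6.29–34 (sign of real parts, plus a deficiency check for eigenvalues on the imaginary axis) can be packaged as a small numerical classifier; this is my sketch, not the text's, and it tests deficiency via geometric multiplicity:

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify x' = Ax as in 5.3.6.29-34.
    An eigenvalue with zero real part is deficient exactly when its
    geometric multiplicity is less than its algebraic multiplicity."""
    A = np.asarray(A, dtype=float)
    lam = np.linalg.eigvals(A)
    re = lam.real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    for mu in lam[np.abs(re) <= tol]:
        alg = int(np.sum(np.abs(lam - mu) < 1e-6))
        geo = A.shape[0] - np.linalg.matrix_rank(A - mu * np.eye(A.shape[0]))
        if geo < alg:
            return "unstable"
    return "neutrally stable"
```

Applying it to the matrices of 5.3.6.29, 5.3.6.30, and 5.3.6.32 reproduces the conclusions above.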


Section 5.4.1

5.4.1.1. The solution of x˙ = Ax + [ 1; 0 ] can be written in the form (5.40):
x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt )
= [ e^{−2t}  e^{−3t}; −2e^{−2t}  −3e^{−3t} ]( c + ∫ [ e^{−2t}  e^{−3t}; −2e^{−2t}  −3e^{−3t} ]^{−1} [ 1; 0 ] dt )
= [ e^{−2t}  e^{−3t}; −2e^{−2t}  −3e^{−3t} ]( c + ∫ (1/(−e^{−5t})) [ −3e^{−3t}  −e^{−3t}; 2e^{−2t}  e^{−2t} ] [ 1; 0 ] dt )
= [ e^{−2t}  e^{−3t}; −2e^{−2t}  −3e^{−3t} ]( c + ∫ [ 3e^{2t}; −2e^{3t} ] dt )
= [ e^{−2t}  e^{−3t}; −2e^{−2t}  −3e^{−3t} ]( c + [ (3/2)e^{2t}; −(2/3)e^{3t} ] )
= X(t) c + [ 5/6; −1 ],
where c is a vector of arbitrary constants.
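Since the coefficient matrix A is not reprinted in this solution, the sketch below reconstructs it from the fundamental matrix above (eigenpairs (−2, (1, −2)ᵀ) and (−3, (1, −3)ᵀ), so A = P D P^{−1}), and then confirms in exact rational arithmetic that the constant particular solution (5/6, −1)ᵀ satisfies Ax + (1, 0)ᵀ = 0. The reconstructed A is an assumption inferred from X(t), not stated in the text.

```python
from fractions import Fraction as F

# Hypothetical reconstruction: a fundamental matrix with columns e^{-2t}(1,-2)
# and e^{-3t}(1,-3) comes from A = P D P^{-1} with eigenvalues -2, -3.
P = [[F(1), F(1)], [F(-2), F(-3)]]          # eigenvectors as columns
D = [[F(-2), F(0)], [F(0), F(-3)]]          # eigenvalues on the diagonal

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

detP = P[0][0]*P[1][1] - P[0][1]*P[1][0]
Pinv = [[ P[1][1]/detP, -P[0][1]/detP],
        [-P[1][0]/detP,  P[0][0]/detP]]
A = matmul(matmul(P, D), Pinv)               # reconstructs A = [[0,1],[-6,-5]]

# The constant particular solution x_p = (5/6, -1) must satisfy A x_p + (1,0) = 0.
xp = [F(5, 6), F(-1)]
f = [F(1), F(0)]
residual = [A[i][0]*xp[0] + A[i][1]*xp[1] + f[i] for i in range(2)]
print(A, residual)
```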

5.4.1.2. The solution of x˙ = Ax + [ cos t; 0 ] can be written in the form (5.40):
x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt )
= [ cos 3t  sin 3t; −3 sin 3t  3 cos 3t ]( c + ∫ [ cos 3t  sin 3t; −3 sin 3t  3 cos 3t ]^{−1} [ cos t; 0 ] dt )
= [ cos 3t  sin 3t; −3 sin 3t  3 cos 3t ]( c + ∫ (1/3)[ 3 cos 3t  −sin 3t; 3 sin 3t  cos 3t ][ cos t; 0 ] dt )
= [ cos 3t  sin 3t; −3 sin 3t  3 cos 3t ]( c + ∫ [ cos 3t cos t; sin 3t cos t ] dt )
= [ cos 3t  sin 3t; −3 sin 3t  3 cos 3t ]( c + (1/2) ∫ [ cos 4t + cos 2t; sin 4t + sin 2t ] dt )
= [ cos 3t  sin 3t; −3 sin 3t  3 cos 3t ]( c + (1/2) [ (1/4) sin 4t + (1/2) sin 2t; −(1/4) cos 4t − (1/2) cos 2t ] )
= X(t) c + (1/8) [ cos 3t (sin 4t + 2 sin 2t) − sin 3t (cos 4t + 2 cos 2t); −3 sin 3t (sin 4t + 2 sin 2t) − 3 cos 3t (cos 4t + 2 cos 2t) ]
= X(t) c + (1/8) [ (sin 4t cos 3t − cos 4t sin 3t) + 2(sin 2t cos 3t − cos 2t sin 3t); −3(cos 4t cos 3t + sin 4t sin 3t) − 6(cos 2t cos 3t + sin 2t sin 3t) ]
= X(t) c + (1/8) [ sin t − 2 sin t; −3 cos t − 6 cos t ] = X(t) c + (1/8) [ −sin t; −9 cos t ],
where c is a 2-vector of arbitrary constants.

5.4.1.3. The solution of x˙ = Ax +

[ 0; e^{−3t} ] can be written in the form (5.40):
x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt )
= e^{−3t} [ cos t − sin t  cos t + sin t; 2 cos t  2 sin t ]( c + ∫ ( e^{−3t} [ cos t − sin t  cos t + sin t; 2 cos t  2 sin t ] )^{−1} [ 0; e^{−3t} ] dt )
= e^{−3t} [ cos t − sin t  cos t + sin t; 2 cos t  2 sin t ]( c + ∫ (1/(−2e^{−3t})) [ 2 sin t  −(cos t + sin t); −2 cos t  cos t − sin t ][ 0; e^{−3t} ] dt )
= e^{−3t} [ cos t − sin t  cos t + sin t; 2 cos t  2 sin t ]( c + (1/2) ∫ [ cos t + sin t; −cos t + sin t ] dt )
= e^{−3t} [ cos t − sin t  cos t + sin t; 2 cos t  2 sin t ]( c + (1/2) [ sin t − cos t; −sin t − cos t ] )
= e^{−3t} [ cos t − sin t  cos t + sin t; 2 cos t  2 sin t ] c + e^{−3t} [ −1; −1 ],
where c is a vector of arbitrary constants.

5.4.1.4. The solution of x˙ = Ax + [ 0; t^3 e^{−t} ] can be written in the form (5.40):

x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt )
= [ 2t + t^2  3t^2 + t^3; t^2  t^3 ]( c + ∫ [ 2t + t^2  3t^2 + t^3; t^2  t^3 ]^{−1} [ 0; t^3 e^{−t} ] dt )
= [ 2t + t^2  3t^2 + t^3; t^2  t^3 ]( c + ∫ (1/(−t^4)) [ t^3  −3t^2 − t^3; −t^2  2t + t^2 ][ 0; t^3 e^{−t} ] dt )
= [ 2t + t^2  3t^2 + t^3; t^2  t^3 ]( c + ∫ [ 3te^{−t} + t^2 e^{−t}; −2e^{−t} − te^{−t} ] dt )
= [ 2t + t^2  3t^2 + t^3; t^2  t^3 ]( c + [ (−3t − 3)e^{−t} + (−t^2 − 2t − 2)e^{−t}; 2e^{−t} − (−t − 1)e^{−t} ] )
= [ 2t + t^2  3t^2 + t^3; t^2  t^3 ]( c + e^{−t} [ −t^2 − 5t − 5; t + 3 ] )
= X(t) c + e^{−t} [ −10t − 6t^2 − t^3; −5t^2 − 2t^3 ],

where c is a vector of arbitrary constants.

5.4.1.5. 0 = | A − λI | = | 3−λ  0  −2; 0  −1−λ  0; 4  0  3−λ | = (−1 − λ)[ (3 − λ)^2 + 8 ] ⇒ the eigenvalues of A are the complex conjugate pair λ = 3 ± i√8 and the real eigenvalue λ3 = −1.
Corresponding to eigenvalue λ1 = 3 + i√8, eigenvectors are found by
[ A − (3 + i√8)I | 0 ] = [ −i√8  0  −2 | 0; 0  −4−i√8  0 | 0; 4  0  −i√8 | 0 ] ∼ [ 1  0  −i/√2 | 0; 0  1  0 | 0; 0  0  0 | 0 ],
after row operations R1 ↔ R3, (1/4)R1 → R1, i√8 R1 + R3 → R3, (1/(−4 − i√8))R2 → R2. Corresponding to eigenvalue λ1 = 3 + i√8 we have an eigenvector v(1) = [ i; 0; √2 ]. This gives two solutions of the corresponding LCCHS. The first is
x(1)(t) = Re( e^{(3+i√8)t} [ i; 0; √2 ] ) = Re( e^{3t}( cos(√8 t) + i sin(√8 t) ) [ i; 0; √2 ] ) = e^{3t} [ −sin(√8 t); 0; √2 cos(√8 t) ].
For the second, we don't have to do all of the algebra steps again:
x(2)(t) = Im( e^{(3+i√8)t} [ i; 0; √2 ] ) = e^{3t} [ cos(√8 t); 0; √2 sin(√8 t) ].
Corresponding to eigenvalue λ3 = −1, eigenvectors are found by
[ A − (−1)I | 0 ] = [ 4  0  −2 | 0; 0  0  0 | 0; 4  0  4 | 0 ] ∼ [ 1  0  0 | 0; 0  0  1 | 0; 0  0  0 | 0 ],
after row operations −R1 + R3 → R3, (1/4)R1 → R1, (1/6)R3 → R3, (1/2)R3 + R1 → R1, R2 ↔ R3. Corresponding to eigenvalue λ3 = −1 we have an eigenvector v(3) = [ 0; 1; 0 ].
A fundamental matrix is given by
X(t) = [ x(1)(t)  x(2)(t)  x(3)(t) ] = [ −e^{3t} sin(√8 t)  e^{3t} cos(√8 t)  0; 0  0  e^{−t}; √2 e^{3t} cos(√8 t)  √2 e^{3t} sin(√8 t)  0 ].
The solution of x˙ = Ax + [ 0; e^{−t}; 7 ] can be written in the form (5.40). We can calculate (X(t))^{−1} by using the adjugate matrix, to get
x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt ) = X(t)( c + ∫ [ (7/√2) e^{−3t} cos(√8 t); (7/√2) e^{−3t} sin(√8 t); 1 ] dt )
= X(t)( c + [ (7/(17√2)) e^{−3t}( −3 cos(√8 t) + √8 sin(√8 t) ); (7/(17√2)) e^{−3t}( −3 sin(√8 t) − √8 cos(√8 t) ); t ] )
= [ −14/17; t e^{−t}; −21/17 ] + X(t) c,
where c is a vector of arbitrary constants.

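The answer to 5.4.1.5 can be checked by direct substitution. In the sketch below, A is read off from the characteristic-equation work in that solution, while the forcing f(t) = (0, e^{−t}, 7)ᵀ is a reconstruction from the garbled problem statement; under those assumptions the particular solution (−14/17, t e^{−t}, −21/17)ᵀ satisfies x˙ = Ax + f.

```python
import math

# A from the characteristic-equation work; f as reconstructed (an assumption).
A = [[3.0, 0.0, -2.0],
     [0.0, -1.0, 0.0],
     [4.0, 0.0, 3.0]]

def xp(t):
    return [-14/17, t*math.exp(-t), -21/17]

def xp_dot(t):
    return [0.0, (1 - t)*math.exp(-t), 0.0]

def f(t):
    return [0.0, math.exp(-t), 7.0]

max_res = 0.0
for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    x = xp(t)
    rhs = [sum(A[i][j]*x[j] for j in range(3)) + f(t)[i] for i in range(3)]
    res = max(abs(xp_dot(t)[i] - rhs[i]) for i in range(3))
    max_res = max(max_res, res)
print(max_res)  # ~0 up to rounding
```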

5.4.1.6. 0 = | −1−λ  −1; 1  1−λ | = (−1 − λ)(1 − λ) + 1 = λ^2 ⇒ eigenvalues are λ1 = λ2 = 0.
[ A − λ1 I | 0 ] = [ −1  −1 | 0; 1  1 | 0 ] ∼ [ 1  1 | 0; 0  0 | 0 ], after row operations R1 + R2 → R2, −R1 → R1
⇒ v(1) = c1 [ −1; 1 ], for any constant c1 ≠ 0, are the only eigenvectors corresponding to the eigenvalues λ1 = λ2 = 0. Because λ1 = λ2 = 0 is a deficient eigenvalue, we need to also find a generalized eigenvector w that should satisfy the system (A − (0)I)w = v(1):
[ A − (0)I | v(1) ] = [ −1  −1 | −1; 1  1 | 1 ] ∼ [ 1  1 | 1; 0  0 | 0 ],
after row operations R1 + R2 → R2, −R1 → R1. So, corresponding to eigenvalue λ1 = 0 we have a generalized eigenvector w = [ 1; 0 ].
The general solution of the corresponding LCCHS is
x(t) = c1 v(1) + c2 (t v(1) + w) = c1 [ −1; 1 ] + c2 [ −t + 1; t ],
where c1, c2 = arbitrary constants. A fundamental matrix is given by X(t) = [ −1  −t+1; 1  t ].
The general solution of x˙ = Ax + [ t; 0 ] can be written in the form (5.40):
x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt ) = [ −1  −t+1; 1  t ]( c + ∫ [ −t  1−t; 1  1 ][ t; 0 ] dt )
= [ −1  −t+1; 1  t ]( c + ∫ [ −t^2; t ] dt ) = [ −1  −t+1; 1  t ]( c + [ −t^3/3; t^2/2 ] ) = X(t) c + [ −(1/6)t^3 + (1/2)t^2; (1/6)t^3 ],
where c is a vector of arbitrary constants. The IC requires
[ 0; 1 ] = x(0) = [ −1  1; 1  0 ] c + [ 0; 0 ],
so c = [ −1  1; 1  0 ]^{−1} [ 0; 1 ] = [ 0  1; 1  1 ][ 0; 1 ] = [ 1; 1 ].
The solution to the IVP is
x(t) = [ −1  −t+1; 1  t ][ 1; 1 ] + [ −(1/6)t^3 + (1/2)t^2; (1/6)t^3 ] = [ −t − (1/6)t^3 + (1/2)t^2; 1 + t + (1/6)t^3 ].

5.4.1.7. Method 1: The system of ODEs, A˙1 = 5 − A1/10, A˙2 = A1/10 − A2/6, can most easily be solved by first solving the first ODE, that is, the first order linear ODE A˙1 + A1/10 = 5, using the integrating factor μ(t) = e^{t/10}:
d/dt[ e^{t/10} A1 ] = 5 e^{t/10} ⇔ e^{t/10} A1 = ∫ 5 e^{t/10} dt = 50 e^{t/10} + c1 ⇔ A1 = 50 + c1 e^{−t/10},
where c1 is an arbitrary constant. After that, substitute A1 into the second ODE, A˙2 = A1/10 − A2/6, to get
A˙2 = 5 + (c1/10) e^{−t/10} − A2/6, that is, A˙2 + A2/6 = 5 + (c1/10) e^{−t/10}.
This first order linear ODE can be solved using the integrating factor μ(t) = e^{t/6}:
d/dt[ e^{t/6} A2 ] = e^{t/6}( 5 + (c1/10) e^{−t/10} ) ⇔ e^{t/6} A2 = ∫ ( 5 e^{t/6} + (c1/10) e^{t/15} ) dt = 30 e^{t/6} + 1.5 c1 e^{t/15} + c2
⇔ A2 = 30 + 1.5 c1 e^{−t/10} + c2 e^{−t/6}.
So, the general solution of the system is
A(t) = [ 50 + c1 e^{−t/10}; 30 + 1.5 c1 e^{−t/10} + c2 e^{−t/6} ],
where c1, c2 = arbitrary constants.
Method 2: The corresponding homogeneous system of ODEs, A˙1 = −A1/10, A˙2 = A1/10 − A2/6, can most easily be solved by first solving the first ODE to get A1 = c1 e^{−t/10}, where c1 is an arbitrary constant, and then substituting that into the second ODE to get
A˙2 = (c1/10) e^{−t/10} − A2/6, that is, A˙2 + A2/6 = (c1/10) e^{−t/10}.
This first order linear ODE can be solved using the integrating factor μ(t) = e^{t/6}:
d/dt[ e^{t/6} A2 ] = (c1/10) e^{t/15} ⇔ e^{t/6} A2 = ∫ (c1/10) e^{t/15} dt = 1.5 c1 e^{t/15} + c2 ⇔ A2 = 1.5 c1 e^{−t/10} + c2 e^{−t/6}.
So, the general solution of the corresponding homogeneous system is
A(t) = [ c1 e^{−t/10}; 1.5 c1 e^{−t/10} + c2 e^{−t/6} ] = c1 [ e^{−t/10}; 1.5 e^{−t/10} ] + c2 [ 0; e^{−t/6} ],
where c1, c2 = arbitrary constants. It follows that a fundamental matrix is given by
X(t) = [ e^{−t/10}  0; 1.5 e^{−t/10}  e^{−t/6} ].
The solution of the original, non-homogeneous system can be written in the form (5.40):
A(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt ) = [ e^{−t/10}  0; 1.5 e^{−t/10}  e^{−t/6} ]( c + ∫ [ e^{t/10}  0; −1.5 e^{t/6}  e^{t/6} ][ 5; 0 ] dt )
= [ e^{−t/10}  0; 1.5 e^{−t/10}  e^{−t/6} ]( c + ∫ [ 5 e^{t/10}; −7.5 e^{t/6} ] dt ) = [ e^{−t/10}  0; 1.5 e^{−t/10}  e^{−t/6} ]( c + [ 50 e^{t/10}; −45 e^{t/6} ] ) = [ c1 e^{−t/10} + 50; 1.5 c1 e^{−t/10} + c2 e^{−t/6} + 30 ],

which agrees with the conclusion using Method 1.

5.4.1.8. Define y(t) = x1(t), so y˙(t) = x˙1(t) = x2(t) and y¨(t) = x˙2(t) = −3t^{−2} x1 + 3t^{−1} x2 = −3t^{−2} y + 3t^{−1} y˙, that is, t^2 y¨ − 3t y˙ + 3y = 0. Substituting y(t) = t^m gives characteristic equation 0 = m(m − 1) − 3m + 3 = m^2 − 4m + 3 = (m − 1)(m − 3). So, the general solution of the equivalent second order ODE is x1(t) = y(t) = c1 t + c2 t^3, where c1, c2 = arbitrary constants. The solution of the original system is
x(t) = [ c1 t + c2 t^3; c1 + 3c2 t^2 ] = c1 [ t; 1 ] + c2 [ t^3; 3t^2 ],
so a fundamental matrix is given by X(t) = [ t  t^3; 1  3t^2 ].
The general solution of x˙ = Ax + [ 0; 1 ] can be written in the form (5.40):
x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt ) = [ t  t^3; 1  3t^2 ]( c + ∫ (1/(2t^3)) [ 3t^2  −t^3; −1  t ][ 0; 1 ] dt )
= [ t  t^3; 1  3t^2 ]( c + ∫ [ −1/2; (1/2)t^{−2} ] dt ) = [ t  t^3; 1  3t^2 ]( c + [ −t/2; −t^{−1}/2 ] ) = X(t) c + [ −t^2; −2t ],
where c is a vector of arbitrary constants. The IC requires
[ −5; 1 ] = x(1) = [ 1  1; 1  3 ] c + [ −1; −2 ],
so
c = [ 1  1; 1  3 ]^{−1} [ −4; 3 ] = (1/2) [ 3  −1; −1  1 ][ −4; 3 ] = (1/2) [ −15; 7 ].
The solution to the IVP is
x(t) = [ t  t^3; 1  3t^2 ] (1/2)[ −15; 7 ] + [ −t^2; −2t ] = [ −(15/2)t − t^2 + (7/2)t^3; −15/2 − 2t + (21/2)t^2 ].

5.4.1.9. (a) This homogeneous system is in companion form, so it is equivalent to the scalar second order Cauchy-Euler ODE, y¨ − 5t^{−1} y˙ + 8t^{−2} y = 0, where y(t) = x1(t) and y˙(t) = x2(t). Substitute y = t^m to get the characteristic equation 0 = m(m − 1) − 5m + 8 = m^2 − 6m + 8 = (m − 2)(m − 4), hence the roots are m = 2, 4. The general solution of the scalar second order ODE is y(t) = c1 t^2 + c2 t^4, where c1, c2 = arbitrary constants. The general solution of this homogeneous system is
x(t) = [ y(t); y˙(t) ] = [ c1 t^2 + c2 t^4; 2c1 t + 4c2 t^3 ] = c1 [ t^2; 2t ] + c2 [ t^4; 4t^3 ],
so a fundamental matrix is given by X(t) = [ t^2  t^4; 2t  4t^3 ].
(b) The solution of the non-homogeneous ODE system can be written in the form (5.40):
x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt ) = [ t^2  t^4; 2t  4t^3 ]( c + ∫ (1/(2t^5)) [ 4t^3  −t^4; −2t  t^2 ][ 3; 0 ] dt )
= [ t^2  t^4; 2t  4t^3 ]( c + ∫ [ 6t^{−2}; −3t^{−4} ] dt ) = [ t^2  t^4; 2t  4t^3 ]( c + [ −6t^{−1}; t^{−3} ] ) = [ c1 t^2 + c2 t^4 − 5t; 2c1 t + 4c2 t^3 − 8 ],
where c1, c2 = arbitrary constants. The ICs require
[ 0; −1 ] = x(1) = [ c1 + c2 − 5; 2c1 + 4c2 − 8 ],
hence
[ c1; c2 ] = [ 1  1; 2  4 ]^{−1} [ 5; 7 ] = (1/2) [ 4  −1; −2  1 ][ 5; 7 ] = (1/2) [ 13; −3 ].
The solution of the IVP system is
x(t) = [ (13/2)t^2 − (3/2)t^4 − 5t; 13t − 6t^3 − 8 ].

5.4.1.10. 0 = | 2−λ  1; −3  6−λ | = (2 − λ)(6 − λ) + 3 = λ^2 − 8λ + 15 = (λ − 3)(λ − 5)

⇒ eigenvalues are λ1 = 3 and λ2 = 5.
[ A − λ1 I | 0 ] = [ −1  1 | 0; −3  3 | 0 ] ∼ [ 1  −1 | 0; 0  0 | 0 ], after row operations −R1 → R1, 3R1 + R2 → R2
⇒ v1 = c1 [ 1; 1 ], for any constant c1 ≠ 0, are the eigenvectors corresponding to the eigenvalue λ1 = 3.
[ A − λ2 I | 0 ] = [ −3  1 | 0; −3  1 | 0 ] ∼ [ 1  −1/3 | 0; 0  0 | 0 ], after row operations −R1 + R2 → R2, −(1/3)R1 → R1
⇒ v2 = c1 [ 1; 3 ], for any constant c1 ≠ 0, are the eigenvectors corresponding to the eigenvalue λ2 = 5.
The general solution of the corresponding LCCHS is
x(t) = c1 e^{3t} [ 1; 1 ] + c2 e^{5t} [ 1; 3 ],
where c1, c2 = arbitrary constants. A fundamental matrix is given by X(t) = [ e^{3t}  e^{5t}; e^{3t}  3e^{5t} ].
The general solution of x˙ = Ax + [ e^{−5t}; 4e^{−t} ] can be written in the form (5.40):
x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt ) = [ e^{3t}  e^{5t}; e^{3t}  3e^{5t} ]( c + ∫ (1/(2e^{8t})) [ 3e^{5t}  −e^{5t}; −e^{3t}  e^{3t} ][ e^{−5t}; 4e^{−t} ] dt )
= [ e^{3t}  e^{5t}; e^{3t}  3e^{5t} ]( c + ∫ [ (3/2)e^{−8t} − 2e^{−4t}; −(1/2)e^{−10t} + 2e^{−6t} ] dt )
= [ e^{3t}  e^{5t}; e^{3t}  3e^{5t} ]( c + [ −(3/16)e^{−8t} + (1/2)e^{−4t}; (1/20)e^{−10t} − (1/3)e^{−6t} ] )
= X(t) c − (1/80) e^{−5t} [ 11; 3 ] + (1/6) e^{−t} [ 1; −3 ],
where c is a vector of arbitrary constants.

5.4.1.11. The non-homogeneous, scalar, second order ODE y¨ + y = f(t) is equivalent to the ODE system x˙ = Ax + g(t), where A = [ 0  1; −1  0 ] and g(t) = [ 0; f(t) ]. The corresponding homogeneous ODE system, x˙ = Ax = [ 0  1; −1  0 ] x, is equivalent to the homogeneous, scalar, second order ODE y¨ + y = 0, whose general solution is
x(t) = [ y(t); y˙(t) ] = [ c1 cos t + c2 sin t; −c1 sin t + c2 cos t ] = c1 [ cos t; −sin t ] + c2 [ sin t; cos t ],
where c1, c2 = arbitrary constants. So, the corresponding homogeneous ODE system has a fundamental matrix
X(t) = [ cos t  sin t; −sin t  cos t ].
As it happens, this matrix satisfies X(0) = I, so e^{tA} = X(t). The solution of the non-homogeneous ODE system can be written in the form (5.43):
x(t) = e^{tA} c + ∫_{t0}^{t} e^{(t−τ)A} g(τ) dτ = [ cos t  sin t; −sin t  cos t ] c + ∫_{t0}^{t} [ cos(t−τ)  sin(t−τ); −sin(t−τ)  cos(t−τ) ][ 0; f(τ) ] dτ.
So,
[ y(t); y˙(t) ] = x(t) = [ c1 cos t + c2 sin t + ∫_{t0}^{t} sin(t−τ) f(τ) dτ; −c1 sin t + c2 cos t + ∫_{t0}^{t} cos(t−τ) f(τ) dτ ],
where c1, c2 = arbitrary constants.
Changing the integration variable name from τ to u, specifying t0 = 0, and leaving out the homogeneous solution, we find that there is a particular solution of ODE y¨ + y = f(t) given by y(t) = ∫_0^t sin(t−u) f(u) du. This agrees with the result of Example 4.33 in Section 4.5 when ω = 1.
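The convolution formula just derived can be sanity-checked numerically; the test choice f(u) = u below is our own (for it, the integral has the closed form t − sin t, which indeed satisfies y¨ + y = t).

```python
import math

# Numerical check of y(t) = ∫_0^t sin(t-u) f(u) du for y'' + y = f(t),
# using f(u) = u, whose convolution evaluates to t - sin t.
def conv(t, f, n=2000):
    # composite trapezoid rule for the convolution integral on [0, t]
    h = t / n
    total = 0.5 * (math.sin(t) * f(0.0) + math.sin(0.0) * f(t))
    for k in range(1, n):
        u = k * h
        total += math.sin(t - u) * f(u)
    return total * h

err = max(abs(conv(t, lambda u: u) - (t - math.sin(t)))
          for t in [0.5, 1.0, 2.0, 3.0])
print(err)  # small quadrature error
```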

5.4.1.12. The non-homogeneous, scalar, second order ODE y¨ + ω^2 y = f(t) is equivalent to the ODE system x˙ = Ax + g(t), where A = [ 0  1; −ω^2  0 ] and g(t) = [ 0; f(t) ]. The corresponding homogeneous ODE system, x˙ = Ax = [ 0  1; −ω^2  0 ] x, is equivalent to the homogeneous, scalar, second order ODE y¨ + ω^2 y = 0, whose general solution is
x(t) = [ y(t); y˙(t) ] = [ c1 cos ωt + c2 sin ωt; −c1 ω sin ωt + c2 ω cos ωt ] = c1 [ cos ωt; −ω sin ωt ] + c2 [ sin ωt; ω cos ωt ],
where c1, c2 = arbitrary constants. So, the corresponding homogeneous ODE system has a fundamental matrix
X(t) = [ cos ωt  sin ωt; −ω sin ωt  ω cos ωt ].
We find that
e^{tA} = X(t)( X(0) )^{−1} = [ cos ωt  sin ωt; −ω sin ωt  ω cos ωt ] [ 1  0; 0  ω ]^{−1} = [ cos ωt  ω^{−1} sin ωt; −ω sin ωt  cos ωt ].
The solution of the non-homogeneous ODE system can be written in the form (5.43):
x(t) = e^{tA} c + ∫_{t0}^{t} e^{(t−τ)A} g(τ) dτ = [ cos ωt  ω^{−1} sin ωt; −ω sin ωt  cos ωt ] c + ∫_{t0}^{t} [ cos ω(t−τ)  ω^{−1} sin ω(t−τ); −ω sin ω(t−τ)  cos ω(t−τ) ][ 0; f(τ) ] dτ.
So,
[ y(t); y˙(t) ] = x(t) = [ c1 cos ωt + c2 ω^{−1} sin ωt + ∫_{t0}^{t} ω^{−1} sin ω(t−τ) f(τ) dτ; −c1 ω sin ωt + c2 cos ωt + ∫_{t0}^{t} cos ω(t−τ) f(τ) dτ ],
where c1, c2 = arbitrary constants. Changing the integration variable name from τ to u, specifying t0 = 0, and leaving out the homogeneous solution, we find that there is a particular solution of ODE y¨ + ω^2 y = f(t) given by y(t) = ∫_0^t ω^{−1} sin( ω(t−u) ) f(u) du. This agrees with the result of Example 4.33 in Section 4.5.

agrees with the result of Example 4.33 in Section 4.5.

5.4.1.13. Suppose {y1(t), y2(t)} is a complete set of basic solutions of the corresponding linear homogeneous ODE, y¨ + p(t)y˙ + q(t)y = 0, on some open interval containing t = 0. The non-homogeneous, scalar, second order ODE y¨ + p(t)y˙ + q(t)y = f(t) is equivalent to the ODE system x˙ = Ax + g(t), where A = [ 0  1; −q(t)  −p(t) ] and g(t) = [ 0; f(t) ]. The corresponding homogeneous ODE system, x˙ = Ax = [ 0  1; −q(t)  −p(t) ] x, is equivalent to the homogeneous, scalar, second order ODE y¨ + p(t)y˙ + q(t)y = 0, whose general solution is
x(t) = [ y(t); y˙(t) ] = [ c1 y1(t) + c2 y2(t); c1 y˙1(t) + c2 y˙2(t) ] = c1 [ y1(t); y˙1(t) ] + c2 [ y2(t); y˙2(t) ],
where c1, c2 = arbitrary constants. So, the corresponding homogeneous ODE system has a fundamental matrix
X(t) = [ y1(t)  y2(t); y˙1(t)  y˙2(t) ].
Note that {y1(t), y2(t)} being a complete set of basic solutions of y¨ + p(t)y˙ + q(t)y = 0 on some open interval containing t = 0 implies that W(y1, y2)(t) = |X(t)| ≠ 0 on that interval. The solution of the non-homogeneous ODE system can be written in the form (5.41), with t0 = 0:
x(t) = X(t)( c + ∫_0^t (X(τ))^{−1} g(τ) dτ ) = [ y1(t)  y2(t); y˙1(t)  y˙2(t) ]( c + ∫_0^t (1/|X(τ)|) [ y˙2(τ)  −y2(τ); −y˙1(τ)  y1(τ) ] g(τ) dτ ).
Replacing the integration variable τ by s and multiplying, we have
[ y(t); y˙(t) ] = x(t) = [ c1 y1(t) + c2 y2(t); c1 y˙1(t) + c2 y˙2(t) ] + [ y1(t)  y2(t); y˙1(t)  y˙2(t) ] ∫_0^t (1/W(y1, y2)(s)) [ −y2(s) f(s); y1(s) f(s) ] ds,

where c1, c2 = arbitrary constants. The first component, y(t), of the solution vector can be rewritten as
y(t) = ( c1 − ∫_0^t y2(s) f(s) / W(y1, y2)(s) ds ) y1(t) + ( c2 + ∫_0^t y1(s) f(s) / W(y1, y2)(s) ds ) y2(t),
which is equation (4.39), a formula for all solutions of y¨ + p(t)y˙ + q(t)y = f(t), as we wanted to show.

5.4.1.14. Define X(s) = L[ x(t) ](s). Take the Laplace transforms of both sides of the system of ODEs to get sX − x(0) = AX + e^{−αs} w, hence X = (sI − A)^{−1} x(0) + (sI − A)^{−1} e^{−αs} w. Generalize Table 4.1's formula L1.11, in the form L^{−1}[ e^{−cs} G(s) ] = ( L^{−1}[ G(s) ] )|_{t ↦ t−c} step(t − c), to vectors G(s),
to conclude that the solution of the IVP is given by
x(t) = L^{−1}[ X ] = L^{−1}[ (sI − A)^{−1} x(0) ] + L^{−1}[ (sI − A)^{−1} e^{−αs} w ] = e^{tA} x(0) + ( e^{tA} w )|_{t ↦ t−α} step(t − α) = e^{tA} x(0) + e^{(t−α)A} w step(t − α).
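The step-response formula of 5.4.1.14 can be illustrated concretely. The sketch below uses A = [ 0  1; −1  0 ] from 5.4.1.11 (for which e^{tA} is a rotation matrix); the values of w, α, and x(0) are arbitrary test data of our own, and the check is that x(t) jumps by exactly w at t = α, as the delta forcing w δ(t − α) requires.

```python
import math

# Illustration of x(t) = e^{tA} x(0) + e^{(t-α)A} w · step(t-α) for
# x' = A x + w δ(t-α), with A = [[0,1],[-1,0]] so that e^{tA} is a rotation.
def expA(t):
    return [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]

def mv(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

alpha, x0, w = 1.0, [1.0, 2.0], [0.5, -1.5]   # hypothetical test data

def x(t):
    out = mv(expA(t), x0)
    if t >= alpha:                   # step(t - alpha) switches on the w term
        out = [a + b for a, b in zip(out, mv(expA(t - alpha), w))]
    return out

# The jump at t = α equals w.
eps = 1e-9
jump = [a - b for a, b in zip(x(alpha + eps), x(alpha - eps))]
err = max(abs(jump[i] - w[i]) for i in range(2))
print(err)  # ~0
```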

5.4.1.15. Take Laplace transforms of both sides of the system of ODEs and use the ICs to get
s L[ x(t) ] − x(0) = L[ x˙(t) ] = [ 2  −5; 1  −2 ] L[ x ] + L[ [ −cos 2t; sin 3t ] ],
hence
s L[ x(t) ] − [ 1; 0 ] = [ 2  −5; 1  −2 ] L[ x ] + [ −s/(s^2+4); 3/(s^2+9) ],
that is,
( s [ 1  0; 0  1 ] − [ 2  −5; 1  −2 ] ) L[ x(t) ] = [ 1; 0 ] + [ −s/(s^2+4); 3/(s^2+9) ].
So,
L[ x(t) ] = [ s−2  5; −1  s+2 ]^{−1} ( [ 1; 0 ] + [ −s/(s^2+4); 3/(s^2+9) ] ) = (1/((s−2)(s+2)+5)) [ s+2  −5; 1  s−2 ] ( [ 1; 0 ] + [ −s/(s^2+4); 3/(s^2+9) ] )
= [ (s+2)/(s^2+1) − s(s+2)/((s^2+1)(s^2+4)) − 15/((s^2+1)(s^2+9)); 1/(s^2+1) − s/((s^2+1)(s^2+4)) + 3(s−2)/((s^2+1)(s^2+9)) ].
This leads to four partial fractions expansion problems, which we may do in any order.
First,
s(s+2)/((s^2+1)(s^2+4)) = (As+B)/(s^2+1) + (Cs+E)/(s^2+4)
⇒ s^2 + 2s = s(s+2) = (As+B)(s^2+4) + (Cs+E)(s^2+1) = (A+C)s^3 + (B+E)s^2 + (4A+C)s + (4B+E).
Sorting by powers of s gives a system of four equations in the four unknown constants A, B, C, E:
s^3: 0 = A + C,  s^2: 1 = B + E,  s^1: 2 = 4A + C,  s^0: 0 = 4B + E,
whose solutions are A = 2/3, C = −2/3, B = −1/3, E = 4/3. So,
s(s+2)/((s^2+1)(s^2+4)) = (1/3)[ (2s−1)/(s^2+1) + (−2s+4)/(s^2+4) ].
The second expansion is
15/((s^2+1)(s^2+9)) = (αs+β)/(s^2+1) + (γs+δ)/(s^2+9)
⇒ 15 = (αs+β)(s^2+9) + (γs+δ)(s^2+1) = (α+γ)s^3 + (β+δ)s^2 + (9α+γ)s + (9β+δ),
that is, 0 = α + γ, 0 = β + δ, 0 = 9α + γ, 15 = 9β + δ, whose solutions are α = γ = 0, β = 15/8, δ = −15/8. So,
15/((s^2+1)(s^2+9)) = (1/8)[ 15/(s^2+1) − 15/(s^2+9) ].
The third expansion is
s/((s^2+1)(s^2+4)) = (ps+q)/(s^2+1) + (rs+u)/(s^2+4)
⇒ s = (ps+q)(s^2+4) + (rs+u)(s^2+1) = (p+r)s^3 + (q+u)s^2 + (4p+r)s + (4q+u),
that is, 0 = p + r, 0 = q + u, 1 = 4p + r, 0 = 4q + u, whose solutions are p = 1/3, r = −1/3, q = u = 0. So,
s/((s^2+1)(s^2+4)) = (1/3)[ s/(s^2+1) − s/(s^2+4) ].
The fourth expansion is
3(s−2)/((s^2+1)(s^2+9)) = (as+b)/(s^2+1) + (cs+e)/(s^2+9)
⇒ 3s − 6 = 3(s−2) = (as+b)(s^2+9) + (cs+e)(s^2+1) = (a+c)s^3 + (b+e)s^2 + (9a+c)s + (9b+e),
that is, 0 = a + c, 0 = b + e, 3 = 9a + c, −6 = 9b + e, whose solutions are a = 3/8, c = −3/8, b = −6/8, e = 6/8. So,
3(s−2)/((s^2+1)(s^2+9)) = (1/8)[ (3s−6)/(s^2+1) + (−3s+6)/(s^2+9) ].
Altogether,
L[ x(t) ] = [ (s+2)/(s^2+1) − (1/3)( (2s−1)/(s^2+1) + (−2s+4)/(s^2+4) ) − (1/8)( 15/(s^2+1) − 15/(s^2+9) ); 1/(s^2+1) − (1/3)( s/(s^2+1) − s/(s^2+4) ) + (1/8)( (3s−6)/(s^2+1) + (−3s+6)/(s^2+9) ) ]
= (1/24) [ (8s+11)/(s^2+1) + (16s−32)/(s^2+4) + 45/(s^2+9); (s+6)/(s^2+1) + 8s/(s^2+4) + (−9s+18)/(s^2+9) ].
So, inverting the transforms,
x(t) = (1/24) [ 8 cos t + 11 sin t + 16 cos 2t − 16 sin 2t + 15 sin 3t; cos t + 6 sin t + 8 cos 2t − 9 cos 3t + 6 sin 3t ].
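The final answer of 5.4.1.15 can be verified by substitution into the original system and initial condition:

```python
import math

# Substitution check: x(t) found above should satisfy
# x' = [[2,-5],[1,-2]] x + (-cos 2t, sin 3t) with x(0) = (1, 0).
def x(t):
    c, s = math.cos, math.sin
    return [(8*c(t) + 11*s(t) + 16*c(2*t) - 16*s(2*t) + 15*s(3*t)) / 24,
            (c(t) + 6*s(t) + 8*c(2*t) - 9*c(3*t) + 6*s(3*t)) / 24]

def x_dot(t):
    c, s = math.cos, math.sin
    return [(-8*s(t) + 11*c(t) - 32*s(2*t) - 32*c(2*t) + 45*c(3*t)) / 24,
            (-s(t) + 6*c(t) - 16*s(2*t) + 27*s(3*t) + 18*c(3*t)) / 24]

assert max(abs(v - w) for v, w in zip(x(0.0), [1.0, 0.0])) < 1e-12
err = 0.0
for t in [0.3, 1.0, 2.5]:
    x1, x2 = x(t)
    rhs = [2*x1 - 5*x2 - math.cos(2*t), x1 - 2*x2 + math.sin(3*t)]
    err = max(err, max(abs(a - b) for a, b in zip(x_dot(t), rhs)))
print(err)  # ~0 up to float rounding
```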


Section 5.5.2  5.5.2.1. Define A =

2 7

1 −4



 and f (t) =

0 −e−t

 =e

−t

 w, where w =

0 −1

 .

Let’s try for a particular solution in the form xp (t) = e−t a. We substitute xp (t) into the non-homogeneous system x˙ = Ax + f (t) to get  −e−t a = x˙ p = Axp + f (t) = A e−t a + e−t w = e−t (Aa + w) . As in Example 5.26, we get a = − (A − (−1)I)−1 w, as long as (A − (−1)I) is invertible. Here, this gives a = − (A − (−1)I)−1 w = −



3 7

1 −3

−1 

0 −1

 =

1 16



−3 −7

−1 3





0 −1

=

1 16



1 −3

 .

So, a particular solution is given by xp (t) = e

−t

1 −t e a= 16



1 −3

 .

We can see already that the method of Section 5.5 gave the same particular solution as the variation of constants method in Section 5.4. For the rest of the problem, we can use the result of Example 5.23 in Section 5.4 that   1 e−5t + 7e3t −e−5t + e3t , etA = 7e−5t + e3t 8 −7e−5t + 7e3t hence the general solution of the system of ODEs is      −5t 1 −t 1 −t 1 e + 7e3t 1 1 x(t) = xp (t) + xh (t) = e + etA c = e + −5t + 7e3t −3 −3 16 16 8 −7e where c is a vector of arbitrary constants. The ICs require     1 3 1 = x(0) = +c −2 16 −3



1 16

c=



−e−5t + e3t 7e−5t + e3t

 c,

 47 . −29

So, the solution of the IVP is given by   −5t     1 1 −t 1 e + 7e3t −e−5t + e3t 47 1 · + e x(t) = 7e−5t + e3t −3 8 −7e−5t + 7e3t 16 −29 16     1 1 19e−5t + 75e3t e−t + , = −5t 3t −t −133e + 75e −3e 32 16 which agrees with the final conclusion of Example 5.23 in Section 5.4.  5.5.2.2. Define A =

29 −50

18 −31



 and f (t) =

t 0





1 0

= t w, where w =

 .

Let’s try for a particular solution in the form xp (t) = ta + b. We substitute xp (t) into the non-homogeneous system x˙ = Ax + f (t) to get a = x˙ p = Axp + f (t) = A (ta + b) + tw = t (Aa + w) + Ab. Similarly to work in Example 5.26, we get a = −A−1 w and b = A−1 a, as long as A is invertible. Here, this gives  −1        1 −31 −18 29 18 1 1 31 a = −A−1 w = − =− = . −50 −31 0 50 29 0 −50 1 and b = A−1 a =



29 −50

18 −31

−1 

31 −50

 =

1 1



−31 50

−18 29



31 −50



 =

−61 100

 .

So, a particular solution is given by  xp (t) = ta + b =

31t − 61 −50t + 100

 .

c Larry

Turyn, October 10, 2013

page 60

We can see already that the method of Section 5.5 gave the same particular solution as the variation of constants method in Section 5.4, but that the method of Section 5.5 involved much less effort! For the rest of the problem, we can use the result of Example 5.24 in Section 5.4 that   (30t + 1)e−t 18te−t tA e = , −50te−t (1 − 30t)e−t hence the general solution of the system of ODEs is      (30t + 1)e−t 31t − 61 31t − 61 tA + +e c= x(t) = xp (t) + xh (t) = −50t + 100 −50te−t −50t + 100 where c is a vector of arbitrary constants. The IC requires     3 −61 = x(0) = +c −2 100

 ⇒

c=

18te−t (1 − 30t)e−t

 c,

 64 . −102

So, the solution of the IVP is given by      64 31t − 61 (30t + 1)e−t 18te−t x(t) = + −102 −50t + 100 −50te−t (1 − 30t)e−t     31t − 61 84t + 64 = + e−t , −50t + 100 −140t − 102 which agrees with the final conclusion of Example 5.24 in Section 5.4.  5.5.2.3. Define A =

−4 −2

3 1



 and f (t) =

e−3t 0



= e−3t w, where w =



1 0

 .

Let’s try for a particular solution in the form xp (t) = e−3t a. We substitute xp (t) into the non-homogeneous system x˙ = Ax + f (t) to get  −3e−3t a = x˙ p = Axp + f (t) = A e−3t a + e−3t w = e−3t (Aa + w) . As in Example 5.26, we get a = − (A − (−3)I)−1 w, as long as (A − (−3)I) is invertible. Here, this gives  −1        1 4 −3 −1 3 1 1 −2 −1 a = − (A − (−3)I) w = − =− = . −2 4 0 0 −1 2 2 −1 So, a particular solution is given by xp (t) = e

−3t

a=e

−3t



−2 −1

 .

For the rest of the problem, we first find et A : −4 − λ 3 0 = = (−4 − λ)(1 − λ) + 6 = λ2 + 3λ + 2 = (λ + 2)(λ + 1) −2 1−λ ⇒ eigenvalues are λ1 = −2, λ2 = −1    −2 3 | 0

1 [ A − λ1 I | 0 ] = ∼ −2 3 | 0 0   3 ⇒ v1 = c 1 , for any constant c1 6= 0, 2    −3 3 | 0

1 [ A − λ2 I | 0 ] = ∼ −2 2 | 0 0   1 ⇒ v2 = c 1 , for any constant c1 6= 0, 1



3 2

| 0|

0 0

 , after −R1 + R2 → R2 , − 21 R1 → R1

are the eigenvectors corresponding to eigenvalue λ1 = −2 −1 | 0|

0 0

 , after − 31 R1 → R1 , 2R1 + R2 → R2

are the eigenvectors corresponding to eigenvalue λ2 = −1

The general solution of the corresponding LCCHS is     3 1 −2t −t x(t) = c1 e + c2 e , where c1 , c2 = arbitrary constants, 2 1 c Larry

Turyn, October 10, 2013

page 61

so a fundamental matrix is given by  Z(t) =

3e−2t 2e−2t

e−t e−t

 .

This implies that −1 = etA = Z(t) Z(0)

−1   3e−2t 3e−2t e−t 3 1 = −2t −t 2 1 2e−2t 2e e   3e−2t − 2e−t −3e−2t + 3e−t . = 2e−2t − 2e−t −2e−2t + 3e−t

e−t e−t





Hence the general solution of the system of ODEs is      −2 −2 3e−2t − 2e−t −3t  tA −3t +e c=e  + x(t) = xp (t) + xh (t) = e −1 −1 2e−2t − 2e−t

1 −2

−1 3



−3e−2t + 3e−t −2e−2t + 3e−t

  c,

where c is a vector of arbitrary constants. [Note: Alternatively, we could have used Z(t) instead of etA to help write the general solution.]      3 0 −2 −5 0 0 , w1 ,  0 , and w2 ,  0 , so that the 5.5.2.4. Define constant matrix or vectors by A ,  0 1 4 0 −3 0 1 ODE system can be rewritten as x˙ = Ax + w1 + e−2t w2 . Using the method of Section 5.5, we can express the general solution of the ODE system in the form 

x = Z(t)c + xp,1 + xp,2 (t), where Z(t) is a fundamental matrix for x˙ = Ax, xp,1 = a1 , xp,2 (t) = e−2t a2 , and a1 , a2 are constant vectors satisfying, respectively, −1 a1 = −A−1 w1 and a2 = − A − (−2)I w2 . We calculate 

3 a1 = −A−1 w1 = −  0 4

0 1 0

−1    −2 −5 3 0   0  = − 0 −3 0 4

0 1 0

    −2 −5 15 0  0  =  0  −3 0 20

and 

a2 = − A − (−2)I

−1

5 w2 = −  0 4

0 3 0

−1    −2 0 −1 1 0   0 =−  0 3 −1 1 −4

0 1 0

    2 0 2 1 0  0  = −  0 . 3 5 1 5

We also will use a fundamental matrix for x˙ = Ax: To find one we will first find the characteristic equation: 3−λ 0 −2 −2 = (1 − λ) 3 − λ 1−λ 0 0 = | A − λI | = 0 4 −3 − λ 4 0 −3 − λ = ... = (1 − λ) λ2 − 1 ⇒ the eigenvalues of A are λ1 = λ2 = 1 and eigenvalue   2 0 −2 | 0 ∼ 0| 0 [ A − λ1 I | 0 ] =  0 0 4 0 −4 | 0 −2R1 + R3 → R3 1 R → R1 2 1  c2 ⇒ x2 = c1 and x3 = c2 are free variables and v =  c1 c2

λ3 = −1. 

0 1  0 0 0 0 



−1 | 0| 0|

 0 0 0



   0 1  = c1  1  + c2  0 , for any 0 1

constants c1 , c2 with |c1 | + |c2 | > 0, are the eigenvectors corresponding to eigenvalue λ1 = λ2 = −2. c Larry

Turyn, October 10, 2013

page 62

   1 0 (2) =  1 , v =  0  are eigenvectors that span the eigenspace Eλ=1 . 1 0 

⇒v

(1)



4 [ A − λ3 I | 0 ] =  0 4

−2 | 0| −2 |

0 2 0

 0 0 0



1  0 0



− 21 | 0| 0|

0

1 0

 0 0 0

−R1 + R3 → R3 1 R → R2 2 2 1 R1 → R1 4  1   1  c 2 1 2 ⇒ x3 = c1 is the only free variable and v3 =  0  = c1  0 , for any constant c1 6= 0, are the eigenvectors c1 1 corresponding to eigenvalue λ3 = −1. The general solution of the corresponding LCCHS can be written as       1 1 0 x(t) = c1 et  1  + c2 et  0  + c3 e−t  0  , 2 1 0 where c1 , c2 , c3 =arbitrary constants. A fundamental matrix is given by  e−t 0 . 2e−t

et 0 et



0 Z(t) =  et 0 A general solution of the ODE system is given by  0 et t 0 x = Z(t)c + xp,1 + xp,2 (t) =  e 0 et

     15 e−t 2 1 0  c +  0  − e−2t  0  , 3 −t 20 2e 5

where c is a 3-vector of arbitrary constants.

5.5.2.5. As in Example 5.28, rather than solve the system in the form (⋆) x˙ = Ax + (cos t)w, we will first solve its complexification (⋆⋆) x˙ = Ax + e^{it}w. The relationship between x̃_p(t), the solution of (⋆⋆), and x_p(t), the solution of (⋆), is x_p(t) = Re(x̃_p(t)), because cos t = Re(e^{it}).

Define A = [ −1 2 ; −2 −1 ] and w = [ 1, 0 ]^T. We look for a particular solution of (⋆⋆) in the form x̃_p(t) = e^{it} ã, where ã is a constant vector, possibly complex. Substituting this into (⋆⋆) we get

i e^{it} ã = x̃˙_p(t) = A x̃_p(t) + e^{it} w = A(e^{it} ã) + e^{it} w = e^{it}(Aã + w), that is, −w = (A − iI)ã.

The solution for ã is given by

ã = −(A − iI)^{−1} w = −[ −1−i 2 ; −2 −1−i ]^{−1} [ 1, 0 ]^T = −(1/(4+2i)) [ −1−i −2 ; 2 −1−i ] [ 1, 0 ]^T = (1/(4+2i)) [ 1+i, −2 ]^T = ((4−2i)/20) [ 1+i, −2 ]^T = (1/10) [ (2−i)(1+i), −2(2−i) ]^T = (1/10) [ 3+i, −4+2i ]^T,

so

x̃_p(t) = e^{it} ã = (1/10)(cos t + i sin t) [ 3+i, −4+2i ]^T = (1/10) [ 3 cos t − sin t + i(cos t + 3 sin t), −4 cos t − 2 sin t + i(2 cos t − 4 sin t) ]^T.
A particular solution is given by

x_p(t) = Re(x̃_p(t)) = (1/10) [ 3 cos t − sin t, −4 cos t − 2 sin t ]^T.

For the rest of the problem, we first find e^{tA}:

0 = | −1−λ 2 ; −2 −1−λ | = (−1−λ)² + 4

⇒ the eigenvalues of A are the complex conjugate pair λ = −1 ± 2i. Corresponding to the eigenvalue λ1 = −1+2i, eigenvectors are found by

[ A − (−1+2i)I | 0 ] = [ −2i 2 | 0 ; −2 −2i | 0 ] ∼ [ 1 i | 0 ; 0 0 | 0 ],
after the row operations (i/2)R1 → R1 and 2R1 + R2 → R2. Corresponding to the eigenvalue λ1 = −1+2i we have an eigenvector v^(1) = [ −i, 1 ]^T. This gives two solutions of the corresponding LCCHS. The first is

x^(1)(t) = Re( e^{(−1+2i)t} [ −i, 1 ]^T ) = Re( e^{−t}(cos 2t + i sin 2t) [ −i, 1 ]^T ) = Re( e^{−t} [ sin 2t − i cos 2t, cos 2t + i sin 2t ]^T ) = e^{−t} [ sin 2t, cos 2t ]^T.

For the second, we don’t have to do all of the algebra steps again:

x^(2)(t) = Im( e^{(−1+2i)t} [ −i, 1 ]^T ) = Im( e^{−t} [ sin 2t − i cos 2t, cos 2t + i sin 2t ]^T ) = e^{−t} [ −cos 2t, sin 2t ]^T.

A fundamental matrix is given by

Z(t) = e^{−t} [ sin 2t −cos 2t ; cos 2t sin 2t ].

This implies that

e^{tA} = Z(t)(Z(0))^{−1} = e^{−t} [ sin 2t −cos 2t ; cos 2t sin 2t ] [ 0 −1 ; 1 0 ]^{−1} = e^{−t} [ sin 2t −cos 2t ; cos 2t sin 2t ] [ 0 1 ; −1 0 ] = e^{−t} [ cos 2t sin 2t ; −sin 2t cos 2t ].

Hence the general solution of the system of ODEs is

x(t) = x_p(t) + x_h(t) = (1/10) [ 3 cos t − sin t, −4 cos t − 2 sin t ]^T + e^{−t} [ cos 2t sin 2t ; −sin 2t cos 2t ] c,
where c is a vector of arbitrary constants.

5.5.2.6. (a) This homogeneous system is in companion form, so it is equivalent to the scalar second-order non-homogeneous ODE y¨ + 2y˙ + y = cos t, where y(t) = x1(t) and y˙(t) = x2(t). We could solve the scalar non-homogeneous ODE by the method of undetermined coefficients, by variation of parameters, or by Laplace transforms. For this ODE, the first of these methods is probably the easiest to implement, followed by the third and then the second. Substitute y = e^{st} into the corresponding second-order LCCHODE, y¨ + 2y˙ + y = 0, to get the characteristic equation 0 = s² + 2s + 1 = (s+1)², hence we have the repeated-root case s = −1, −1. The general solution of the scalar second-order LCCHODE is y(t) = c1 e^{−t} + c2 t e^{−t}, where c1, c2 = arbitrary constants. Following the method of undetermined coefficients, we have L1 = −1, −1. Because f(t) = cos t, we have L2 = ±i, and thus the superlist is L = −1, −1, ±i. This gives y(t) = c1 e^{−t} + c2 t e^{−t} + c3 cos t + c4 sin t.
⇒ y_p(t) = A cos t + B sin t, where A, B are constants to be determined:

cos t = y¨_p + 2y˙_p + y_p = −A cos t − B sin t − 2A sin t + 2B cos t + A cos t + B sin t = 2B cos t − 2A sin t

⇒ B = 1/2 and A = 0 ⇒ y_p(t) = (1/2) sin t ⇒ y(t) = c1 e^{−t} + c2 t e^{−t} + (1/2) sin t is the general solution of the scalar non-homogeneous ODE. The general solution of the original non-homogeneous ODE system is

x(t) = [ y(t), y˙(t) ]^T = [ c1 e^{−t} + c2 t e^{−t} + (1/2) sin t, −c1 e^{−t} + c2 (−t+1) e^{−t} + (1/2) cos t ]^T = c1 e^{−t} [ 1, −1 ]^T + c2 e^{−t} [ t, −t+1 ]^T + (1/2) [ sin t, cos t ]^T.

(b) To use the nonresonance method of Section 5.5, define A ≜ [ 0 1 ; −1 −2 ] and w ≜ [ 0, 1 ]^T, and rewrite the ODE system in the form x˙ = Ax + (cos t)w. The complexification of the ODE system is

(⋆) x˙ = Ax + e^{it}w.
The relationship between x̃_p(t), the solution of the complexification, and x_p(t), the solution of the original non-homogeneous system, is x_p(t) = Re(x̃_p(t)), because cos t = Re(e^{it}). We try a solution of (⋆) in the form x̃_p(t) = e^{it} ã, where ã is a constant vector, possibly complex. We substitute this into (⋆):

i e^{it} ã = x̃˙_p(t) = A x̃_p(t) + e^{it} w = A(e^{it} ã) + e^{it} w = e^{it}(Aã + w), that is, −w = (A − iI)ã.

Here we see where the nonresonance condition comes in: if ±i is not an eigenvalue of A, then (A − iI) is invertible and we can solve for ã. Here, that is

ã = −(A − iI)^{−1} w = −[ −i 1 ; −1 −2−i ]^{−1} [ 0, 1 ]^T = −(1/(−i(−2−i)+1)) [ −2−i −1 ; 1 −i ] [ 0, 1 ]^T = −(1/(2i)) [ −1, −i ]^T = (1/2) [ −i, 1 ]^T,

so

x̃_p(t) = e^{it} ã = (1/2)(cos t + i sin t) [ −i, 1 ]^T = (1/2) [ sin t − i cos t, cos t + i sin t ]^T.

A particular solution is given by

x_p(t) = Re(x̃_p(t)) = (1/2) [ sin t, cos t ]^T.
The general solution of the non-homogeneous ODE system can be written in the form x(t) = X(t)c + x_p(t), where X(t) is a fundamental matrix of the corresponding homogeneous ODE system

x˙ = Ax = [ 0 1 ; −1 −2 ] x.

To find X(t) we could use the eigenvalues and eigenvectors of A, but it is easier to use the fact that A is in companion form to get the scalar LCCHODE y¨ + 2y˙ + y = 0, whose general solution is y(t) = c1 e^{−t} + c2 t e^{−t}, where c1, c2 = arbitrary constants. [This follows from work we did for part (a) of problem 5.5.2.6.] The general solution of the corresponding linear, constant-coefficients, homogeneous ODE system (LCCHS) is

x(t) = [ y(t), y˙(t) ]^T = [ c1 e^{−t} + c2 t e^{−t}, −c1 e^{−t} + c2 (−t+1) e^{−t} ]^T = c1 e^{−t} [ 1, −1 ]^T + c2 e^{−t} [ t, −t+1 ]^T,

so a fundamental matrix is given by

X(t) = e^{−t} [ 1 t ; −1 −t+1 ].
The general solution of the original non-homogeneous ODE system is

x(t) = X(t)c + x_p(t) = e^{−t} [ 1 t ; −1 −t+1 ] c + (1/2) [ sin t, cos t ]^T,

which agrees with the final conclusion of part (a) of problem 5.5.2.6.

(c) Using the fundamental matrix found in part (b) of problem 5.5.2.6, along with formula (5.40) of the textbook, the solution of the non-homogeneous ODE system can be written in the form

x(t) = X(t)( c + ∫ (X(t))^{−1} f(t) dt ) = e^{−t} [ 1 t ; −1 −t+1 ] ( c + ∫ ( e^{−t} [ 1 t ; −1 −t+1 ] )^{−1} [ 0, cos t ]^T dt )

= e^{−t} [ 1 t ; −1 −t+1 ] ( c + ∫ e^t [ −t+1 −t ; 1 1 ] [ 0, cos t ]^T dt )

= e^{−t} [ 1 t ; −1 −t+1 ] ( c + ∫ [ −t e^t cos t, e^t cos t ]^T dt )

= e^{−t} [ 1 t ; −1 −t+1 ] ( c + [ −(1/2) t e^t cos t − (1/2)(−1+t) e^t sin t, (1/2) e^t (cos t + sin t) ]^T )

= e^{−t} [ 1 t ; −1 −t+1 ] c + (1/2) [ sin t, cos t ]^T = [ c1 e^{−t} + c2 t e^{−t} + (1/2) sin t, −c1 e^{−t} + c2 (−t+1) e^{−t} + (1/2) cos t ]^T,
where c1, c2 = arbitrary constants. Again, this agrees with the final conclusions of parts (a) and (b) of problem 5.5.2.6.

5.5.2.7. Define A = [ 0 3 ; 1 −2 ] and f(t) = [ −e^{−t}, 0 ]^T = e^{−t} w, where w = [ −1, 0 ]^T.
Let’s try for a particular solution in the form x_p(t) = e^{−t} a. We substitute x_p(t) into the non-homogeneous system x˙ = Ax + f(t) to get

−e^{−t} a = x˙_p = A x_p + f(t) = A(e^{−t} a) + e^{−t} w = e^{−t}(Aa + w).

As in Example 5.26, we get a = −(A − (−1)I)^{−1} w, as long as (A − (−1)I) is invertible. Here, this gives

a = −(A − (−1)I)^{−1} w = −[ 1 3 ; 1 −1 ]^{−1} [ −1, 0 ]^T = −(1/(−4)) [ −1 −3 ; −1 1 ] [ −1, 0 ]^T = (1/4) [ 1, 1 ]^T.

So, a particular solution is given by

x_p(t) = e^{−t} a = (1/4) e^{−t} [ 1, 1 ]^T.
For the rest of the problem, we first find e^{tA}:

0 = | −λ 3 ; 1 −2−λ | = (−λ)(−2−λ) − 3 = λ² + 2λ − 3 = (λ+3)(λ−1)

⇒ the eigenvalues are λ1 = −3, λ2 = 1.

[ A − λ1 I | 0 ] = [ 3 3 | 0 ; 1 1 | 0 ] ∼ [ 1 1 | 0 ; 0 0 | 0 ], after (1/3)R1 → R1 and −R1 + R2 → R2,

⇒ v1 = c1 [ −1, 1 ]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to the eigenvalue λ1 = −3.

[ A − λ2 I | 0 ] = [ −1 3 | 0 ; 1 −3 | 0 ] ∼ [ 1 −3 | 0 ; 0 0 | 0 ], after R1 + R2 → R2 and −R1 → R1,

⇒ v2 = c1 [ 3, 1 ]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to the eigenvalue λ2 = 1.

The general solution of the corresponding LCCHS is

x(t) = c1 e^{−3t} [ −1, 1 ]^T + c2 e^t [ 3, 1 ]^T, where c1, c2 = arbitrary constants,

so a fundamental matrix is given by

Z(t) = [ −e^{−3t} 3e^t ; e^{−3t} e^t ].

This implies that

e^{tA} = Z(t)(Z(0))^{−1} = [ −e^{−3t} 3e^t ; e^{−3t} e^t ] [ −1 3 ; 1 1 ]^{−1} = [ −e^{−3t} 3e^t ; e^{−3t} e^t ] · (1/4) [ −1 3 ; 1 1 ] = (1/4) [ e^{−3t} + 3e^t −3e^{−3t} + 3e^t ; −e^{−3t} + e^t 3e^{−3t} + e^t ].

Hence the general solution of the system of ODEs is

x(t) = x_p(t) + x_h(t) = (1/4) e^{−t} [ 1, 1 ]^T + e^{tA} c,

where c is a vector of arbitrary constants. The ICs require

[ 4, 5 ]^T = x(0) = (1/4) [ 1, 1 ]^T + c, so c = (1/4) [ 15, 19 ]^T.

So, the solution of the IVP is given by

x(t) = (1/4) e^{−t} [ 1, 1 ]^T + (1/4) [ e^{−3t} + 3e^t −3e^{−3t} + 3e^t ; −e^{−3t} + e^t 3e^{−3t} + e^t ] · (1/4) [ 15, 19 ]^T,
that is,

x(t) = (1/16) [ 4e^{−t} − 42e^{−3t} + 102e^t, 4e^{−t} + 42e^{−3t} + 34e^t ]^T.

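The boxed solution of this IVP can be verified numerically. The following Python/NumPy sketch (an addition for checking, not part of the original solution) confirms that the formula satisfies both the ODE x˙ = Ax + e^{−t}w and the initial condition x(0) = [ 4, 5 ]^T:

```python
import numpy as np

# Data of problem 5.5.2.7: x' = A x + e^{-t} w, x(0) = [4, 5]^T.
A = np.array([[0.0, 3.0], [1.0, -2.0]])
w = np.array([-1.0, 0.0])

def x(t):
    # solution formula from the IVP above
    return np.array([4*np.exp(-t) - 42*np.exp(-3*t) + 102*np.exp(t),
                     4*np.exp(-t) + 42*np.exp(-3*t) + 34*np.exp(t)]) / 16.0

def x_dot(t):
    # term-by-term derivative of the formula
    return np.array([-4*np.exp(-t) + 126*np.exp(-3*t) + 102*np.exp(t),
                     -4*np.exp(-t) - 126*np.exp(-3*t) + 34*np.exp(t)]) / 16.0

assert np.allclose(x(0.0), [4.0, 5.0])   # initial condition
max_residual = max(
    np.linalg.norm(x_dot(t) - (A @ x(t) + np.exp(-t) * w))
    for t in np.linspace(0.0, 2.0, 21)
)
print(max_residual)
```

The residual of the ODE is negligible (pure rounding error), so the closed-form answer checks out.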
5.5.2.8. Define A = [ 2 1 ; −3 −2 ], w1 ≜ [ −1, 0 ]^T, and w2 ≜ [ 0, 1 ]^T, so that the ODE system can be rewritten as x˙ = Ax + w1 + (cos t)w2. Using the method of Section 5.5, we can express the general solution of the ODE system in the form

x = Z(t)c + x_{p,1} + x_{p,2}(t),

where Z(t) is a fundamental matrix for x˙ = Ax, x_{p,1} = a1, x_{p,2}(t) is a particular solution of x˙ = Ax + (cos t)w2, and a1 is a constant vector satisfying a1 = −A^{−1} w1. First, we calculate

a1 = −A^{−1} w1 = −[ 2 1 ; −3 −2 ]^{−1} [ −1, 0 ]^T = −(1/(−1)) [ −2 −1 ; 3 2 ] [ −1, 0 ]^T = [ 2, −3 ]^T.
As in Example 5.28, rather than solve a system in the form (⋆) x˙ = Ax + (cos t)w2 directly, we will first solve its complexification (⋆⋆) x˙ = Ax + e^{it}w2. The relationship between x̃_{p,2}(t), the solution of (⋆⋆), and x_{p,2}(t), the solution of (⋆), is x_{p,2}(t) = Re(x̃_{p,2}(t)), because cos t = Re(e^{it}). Here, A = [ 2 1 ; −3 −2 ] and w2 ≜ [ 0, 1 ]^T. We look for a particular solution of (⋆⋆) in the form x̃_{p,2}(t) = e^{it} ã2, where ã2 is a constant vector, possibly complex. Substituting this into (⋆⋆) we get

i e^{it} ã2 = x̃˙_{p,2}(t) = A x̃_{p,2}(t) + e^{it} w2 = e^{it}(Aã2 + w2), that is, −w2 = (A − iI)ã2.
The solution for ã2 is given by

ã2 = −(A − iI)^{−1} w2 = −[ 2−i 1 ; −3 −2−i ]^{−1} [ 0, 1 ]^T = −(1/(−2)) [ −2−i −1 ; 3 2−i ] [ 0, 1 ]^T = (1/2) [ −1, 2−i ]^T,

so

x̃_{p,2}(t) = e^{it} ã2 = (1/2)(cos t + i sin t) [ −1, 2−i ]^T = (1/2) [ −cos t − i sin t, 2 cos t + sin t + i(−cos t + 2 sin t) ]^T.

A particular solution is given by

x_{p,2}(t) = Re(x̃_{p,2}(t)) = (1/2) [ −cos t, 2 cos t + sin t ]^T.
For the rest of the problem, we first find e^{tA}:

0 = | 2−λ 1 ; −3 −2−λ | = (2−λ)(−2−λ) + 3 = λ² − 1 = (λ+1)(λ−1) ⇒ the eigenvalues are λ1 = −1, λ2 = 1.

[ A − λ1 I | 0 ] = [ 3 1 | 0 ; −3 −1 | 0 ] ∼ [ 1 1/3 | 0 ; 0 0 | 0 ], after (1/3)R1 → R1 and 3R1 + R2 → R2,

⇒ v1 = c1 [ −1, 3 ]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to the eigenvalue λ1 = −1.

[ A − λ2 I | 0 ] = [ 1 1 | 0 ; −3 −3 | 0 ] ∼ [ 1 1 | 0 ; 0 0 | 0 ], after 3R1 + R2 → R2,

⇒ v2 = c1 [ −1, 1 ]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to the eigenvalue λ2 = 1.

The general solution of the corresponding LCCHS is

x(t) = c1 e^{−t} [ −1, 3 ]^T + c2 e^t [ −1, 1 ]^T, where c1, c2 = arbitrary constants,

so a fundamental matrix is given by

Z(t) = [ −e^{−t} −e^t ; 3e^{−t} e^t ].

This implies that

e^{tA} = Z(t)(Z(0))^{−1} = [ −e^{−t} −e^t ; 3e^{−t} e^t ] · (1/2) [ 1 1 ; −3 −1 ] = (1/2) [ −e^{−t} + 3e^t −e^{−t} + e^t ; 3e^{−t} − 3e^t 3e^{−t} − e^t ].
Hence the general solution of the original, non-homogeneous system of ODEs is

x(t) = x_{p,1} + x_{p,2}(t) + x_h(t) = [ 2, −3 ]^T + (1/2) [ −cos t, 2 cos t + sin t ]^T + (1/2) [ −e^{−t} + 3e^t −e^{−t} + e^t ; 3e^{−t} − 3e^t 3e^{−t} − e^t ] c,

where c is a vector of arbitrary constants. The ICs require

[ −1, 1 ]^T = x(0) = [ 2, −3 ]^T + (1/2) [ −1, 2 ]^T + c, so c = (1/2) [ −5, 6 ]^T.
So, the solution of the IVP is given by

x(t) = [ 2, −3 ]^T + (1/2) [ −cos t, 2 cos t + sin t ]^T + (1/2) [ −e^{−t} + 3e^t −e^{−t} + e^t ; 3e^{−t} − 3e^t 3e^{−t} − e^t ] · (1/2) [ −5, 6 ]^T,

that is,

x(t) = (1/4) [ 8 − 2 cos t − e^{−t} − 9e^t, −12 + 4 cos t + 2 sin t + 3e^{−t} + 9e^t ]^T.

5.5.2.9. Define A = [ −4 3 ; −2 1 ] and f(t) = [ e^{−t}, 0 ]^T = e^{−t} w, where w = [ 1, 0 ]^T.
Let’s try for a particular solution in the form x_p(t) = e^{−t} a. We substitute x_p(t) into the non-homogeneous system x˙ = Ax + f(t) to get

−e^{−t} a = x˙_p = A x_p + f(t) = A(e^{−t} a) + e^{−t} w = e^{−t}(Aa + w).

As in Example 5.26, we get a = −(A − (−1)I)^{−1} w, as long as (A − (−1)I) is invertible. Unfortunately,

A − (−1)I = [ −3 3 ; −2 2 ]

is not invertible. By using row reduction on an augmented matrix, it also turns out that the system of equations (A − (−1)I)a = −w has no solution. At this point we could revert to using the method of Section 5.4. Instead, similarly to the remarks after Example 5.26, we can try to find a particular solution of the form

x_p(t) = e^{−t}(tv + u).

[This is like the derivation of the solution of a homogeneous ODE system in the deficient eigenvalue case.] Substitute this into the non-homogeneous system x˙ = Ax + f(t) to get

−t e^{−t} v + e^{−t} v − e^{−t} u = (d/dt)[ e^{−t}(tv + u) ] = x˙_p = A x_p + f(t) = A e^{−t}(tv + u) + e^{−t} w = e^{−t}(tAv + Au + w).

Multiplying through by e^t gives −tv + v − u = tAv + Au + w. Sorting by powers of t yields

−v = Av and v − u = Au + w.

So, we want v to satisfy (A − (−1)I)v = 0, that is, v is some eigenvector of A corresponding to eigenvalue −1, and (A − (−1)I)u = v − w. [If we used v = 0 then the attempted solution would be x_p(t) = e^{−t}u, which we already saw wouldn’t succeed because A − (−1)I is not invertible.]

[ A − (−1)I | 0 ] = [ −3 3 | 0 ; −2 2 | 0 ] ∼ [ 1 −1 | 0 ; 0 0 | 0 ], after −(1/3)R1 → R1 and 2R1 + R2 → R2,

⇒ v = c1 [ 1, 1 ]^T is an eigenvector corresponding to A’s eigenvalue −1, as long as c1 ≠ 0.

We solve for u satisfying (A − (−1)I)u = v − w = [ c1 − 1, c1 ]^T using the augmented matrix

[ A − (−1)I | v − w ] = [ −3 3 | c1 − 1 ; −2 2 | c1 ] ∼ [ 1 −1 | −(1/3)c1 + 1/3 ; 0 0 | (1/3)c1 + 2/3 ],

after −(1/3)R1 → R1 and 2R1 + R2 → R2.

In order for there to be a solution for u, we must choose c1 = −2. In this case, v = [ −2, −2 ]^T and u is found using the augmented matrix, in RREF, [ 1 −1 | 1 ; 0 0 | 0 ]. One solution is u = [ 1, 0 ]^T. So, a particular solution is given by

x_p(t) = e^{−t}(tv + u) = e^{−t}( t [ −2, −2 ]^T + [ 1, 0 ]^T ) = e^{−t} [ −2t + 1, −2t ]^T.

For the rest of the problem, we first find e^{tA}:

0 = | −4−λ 3 ; −2 1−λ | = (−4−λ)(1−λ) + 6 = λ² + 3λ + 2 = (λ+2)(λ+1) ⇒ the eigenvalues are λ1 = −2, λ2 = −1.

[ A − λ1 I | 0 ] = [ −2 3 | 0 ; −2 3 | 0 ] ∼ [ 1 −3/2 | 0 ; 0 0 | 0 ], after −R1 + R2 → R2 and −(1/2)R1 → R1,

⇒ v1 = c1 [ 3, 2 ]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to the eigenvalue λ1 = −2.

As for the eigenvalue −1, note that we already “incidentally" found its eigenvectors above when we solved for a particular solution: v2 = c1 [ 1, 1 ]^T, for any constant c1 ≠ 0, are the eigenvectors corresponding to the eigenvalue λ2 = −1.

The general solution of the corresponding homogeneous system can be written as

x_h(t) = c1 e^{−2t} [ 3, 2 ]^T + c2 e^{−t} [ 1, 1 ]^T, where c1, c2 = arbitrary constants.

So, the general solution of the original, non-homogeneous problem can be written as

x(t) = x_p(t) + x_h(t) = e^{−t} [ −2t + 1, −2t ]^T + c1 e^{−2t} [ 3, 2 ]^T + c2 e^{−t} [ 1, 1 ]^T,

where c1, c2 = arbitrary constants.
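Because this is a resonant problem, it is worth double-checking the particular solution. A short Python/NumPy sketch (ours, added for verification) confirms that x_p(t) = e^{−t}[ −2t + 1, −2t ]^T satisfies x˙ = Ax + e^{−t}w:

```python
import numpy as np

# Data of problem 5.5.2.9: resonant forcing e^{-t} w with -1 an eigenvalue of A.
A = np.array([[-4.0, 3.0], [-2.0, 1.0]])
w = np.array([1.0, 0.0])

def xp(t):
    # candidate particular solution e^{-t}(t v + u), v = [-2,-2], u = [1,0]
    return np.exp(-t) * np.array([-2*t + 1, -2*t])

def xp_dot(t):
    # product rule applied by hand
    return np.exp(-t) * np.array([2*t - 3, 2*t - 2])

max_residual = max(
    np.linalg.norm(xp_dot(t) - (A @ xp(t) + np.exp(-t) * w))
    for t in np.linspace(0.0, 5.0, 51)
)
print(max_residual)
```

The residual vanishes to machine precision, confirming the resonant form e^{−t}(tv + u) was needed and computed correctly.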
  −6 5 −5 2  and w ,  5.5.2.10. Define constant matrix or vectors by A ,  0 −1 0 7 4 can be rewritten as x˙ = Ax + e−3t w. Using the method of Section 5.5, we can express the general solution of the ODE

 0 0 , so that the ODE system 1 system in the form

x = Z(t)c + xp , where Z(t) is a fundamental matrix for x˙ = Ax, xp = e−3t a, and a is a constant vector satisfying  A − (−3)I a = −w, −1 if a solution exists. Note that A − (−3)I does not exist, that is, the nonresonance condition is violated. To find a (if it exists!) we will use Gaussian elimination. But, we see that the augmented matrix   −3 5 −5 | 0 2| 0 [ A − (−3)I | w ] =  0 2 0 7 7| 1 c Larry

Turyn, October 10, 2013

page 70

shows that the system is inconsistent. At this point, we could try to find a particular solution in the form xp = e−3t (ta1 + a2 ), or we could use the method of Section 5.4. Let’s try the latter. No matter what method we use, we will need to find a basis of solutions for x˙ = Ax: First, the characteristic equation is −6 − λ 5 −5 −1 − λ 2 0 −1 − λ 2 = (−6 − λ) 0 = | A − λI | = 7 4−λ 0 7 4−λ   = (−6 − λ) (−1 − λ)(4 − λ) − 14 = (−6 − λ) λ2 − 3λ − 18 = (−6 − λ)(λ + 3)(λ − 6) ⇒ the eigenvalues of A are λ1 = −6, λ2 = −3, and λ3 = 6. Note that we have “resonance" in this problem, that is, the nonresonance condition is violated, because −3 is both an eigenvalue of A and corresponds to the exponential function e−3t in the inhomogeneity. We find the eigenvectors:       0 5 −5 | 0 0 0 0 | 0 1 −1 | 0 1 ∼ ∼ 0 0 0 0 2| 0 7| 0 [ A − λ1 I | 0 ] =  0 5 1 | 0 0 7 10 | 0 0 0 17 | 0 0 0 0 | 0 1 1 R → R1 R → R2 5 1 7 2 −5R1 + R2 → R2 −17R2 + R3 → R3 −7R1 + R3 → R3 R 2 + R 1 → R1     1 c1 ⇒ x1 = c1 is the only free variable and v =  0  = c1  0 , for any constant c1 6= 0, are the eigenvectors 0 0 corresponding to eigenvalue λ1 = −6.     10 −3 5 −5 | 0

0 | 0 1 3 ∼  0 2| 0 [ A − λ2 I | 0 ] =  0 2 1 1 | 0 0 7 7| 0 0 0 0 | 0 − 13 R1 → R1 1 R → R2 2 2 5 R + R 1 → R1 3 2 −7R2 + R3 → R3    10  −10 − 3 c1 −c1  = c˜1  −3 , for any constant c˜1 6= 0, are the eigenvectors ⇒ x3 = c1 is the only free variable and v =  c1 3   25 corresponding to eigenvalue λ2 = −3.

0 | 0 1 84     −12 5 −5 | 0   ∼ 2     0 −7 2 | 0 0

− | 0 [ A − λ3 I | 0 ] = 1 7     0 7 −2 | 0 1 − 12 R1 → R1 0 0 0| 0 R 2 + R3 → R3 − 17 R2 → R2 5 R + R1 → R1 12 2  25  − 84 c1     −25   2 c  = c˜1  24 , for any constant c˜1 6= 0, are the ⇒ x3 = c1 is the only free variable and v3 =  7 1     84 c1 eigenvectors corresponding to eigenvalue λ3 = 6. The general solution of the corresponding LCCHS is       1 −10 −25 −6t  −3t 6t  −3  + c3 e  24  , 0  + c2 e x(t) = c1 e 0 3 84 where c1 , c2 , c3 =arbitrary constants. A fundamental matrix is given by e−6t 0 Z(t) =  0 

−10e−3t −3e−3t 3e−3t

 −25e6t 6t  24e . 84e6t c Larry

Turyn, October 10, 2013

page 71

A general solution of the ODE system is given by   ˆ −1 x = Z(t) c + Z(t) f (t)dt e−6t  0 = 0 

  −6t ˆ −25e6t e 24e6t  c +  0 84e6t 0

−10e−3t −3e−3t 3e−3t

Using Mathematica to find Z(t)  −6t e −10e−3t 0 −3e−3t = 0 3e−3t

−1    −25e6t 0 24e6t   0  dt . 84e6t e−3t

−10e−3t −3e−3t 3e−3t

−1

e−6t 0 = 0  −6t e 0 = 0  −6t e 0 = 0

  ˆ 108e6t −25e6t 1 6t    0 24e c+ 108 0 84e6t −10e−3t −3e−3t 3e−3t



−10e−3t −3e−3t 3e−3t −10e−3t −3e−3t 3e−3t

−255e6t −28e3t e−6t

   105e6t 0 3t   0  dt , 8e e−6t e−3t

    ˆ 105e3t −25e6t 1 6t    8  dt , 24e c+ 108 e−9t 84e6t    315e3t −25e6t 1 1 24e6t  c + ·  72 t  , 108 9 −e−9t 84e6t    −25e6t −720t + 340 1 24e6t  c + e−3t  −216t − 24  , 972 6t 84e 216t − 84

where c is a 3-vector of arbitrary constants.

5.5.2.11. We are given that (A − λI)v = 0 and v ≠ 0. For the non-homogeneous problem x˙ = Ax + e^{λt}v, assume a solution of the form x_p(t) = e^{λt}(tv + w). Substitute that into the ODE to get

λt e^{λt} v + e^{λt} v + λ e^{λt} w = (d/dt)[ e^{λt}(tv + w) ] = x˙_p = A x_p + e^{λt} v = A e^{λt}(tv + w) + e^{λt} v = e^{λt}(tAv + Aw + v) = e^{λt}(tλv + Aw + v).

Multiplying through by e^{−λt} and cancelling the terms λtv and v from both sides gives λw = Aw. This is equivalent to (A − λI)w = 0, that is, w is in the eigenspace of A corresponding to the eigenvalue λ. So, for any w in that eigenspace, x_p(t) = e^{λt}(tv + w) solves the non-homogeneous problem. As a special choice, w = 0 works, so x_p(t) = t e^{λt} v is one such particular solution.

5.5.2.12. First, rewrite the ODE system as { 2I˙1 − 2I˙2 = −3I1 + 2 ; −2I˙1 + 2I˙2 = −3I2 − 3 }. Next, take the hint and first write the system as B x˙ = Ax + f, where B = [ 2 −2 ; −2 2 ], A = [ −3 0 ; 0 −3 ], and f = [ 2, −3 ]^T. It would be nice if B were invertible, so that we could rewrite the ODE system as x˙ = B^{−1}Ax + B^{−1}f. But, our B is not invertible! We can still look for (1) a particular solution of the ODE system, as well as (2) solutions of the corresponding homogeneous system. Let x_p(t) = a be a constant vector; we chose this form for a particular solution because f is a constant vector. We get

0 = B0 = B x˙_p = A x_p + f = Aa + f,

so we take

x_p(t) = a = −A^{−1} f = −[ −1/3 0 ; 0 −1/3 ] [ 2, −3 ]^T = [ 2/3, −1 ]^T.
For the corresponding homogeneous system B x˙ = Ax, we can try solutions of the form x(t) = e^{λt}v, where v is a constant vector. Similar to work in Section 5.2, this gives

λ e^{λt} Bv = B(λ e^{λt} v) = B x˙ = Ax = A(e^{λt} v) = e^{λt} Av

and thus leads to the generalized eigenvalue problem

(⋆) (λB − A)v = 0.

The generalized eigenvalues λ should satisfy the characteristic equation

0 = | λB − A | = | 2λ+3 −2λ ; −2λ 2λ+3 | = (2λ+3)(2λ+3) − 4λ² = 12λ + 9.

So, the only generalized eigenvalue is λ1 = −3/4. To find the corresponding vector(s) v, we substitute λ1 = −3/4 into (⋆) to get

[ λ1 B − A | 0 ] = [ 3/2 3/2 | 0 ; 3/2 3/2 | 0 ] ∼ [ 1 1 | 0 ; 0 0 | 0 ], after (2/3)R1 → R1 and −(3/2)R1 + R2 → R2,

⇒ v = c1 [ −1, 1 ]^T, for any constant c1 ≠ 0, are the only non-trivial vectors corresponding to the generalized eigenvalue λ1 = −3/4. Note that we don’t refer to those vectors v as eigenvectors or as generalized eigenvectors, because those terms have already been defined for an eigenvalue problem (C − λI)y = 0. But, we can refer to those vectors v as “root vectors."

So far, we have solutions of the corresponding homogeneous ODE system in the form

x^(1) = c1 e^{−3t/4} [ −1, 1 ]^T

for arbitrary constants c1. But, our experience tells us to expect that a system of two ODEs should have two arbitrary constants. We will see, eventually, that our experience does not apply to the system in this problem, perhaps because B is not invertible. We can look for further solutions in the form x = e^{λt}(tv + w). This gives

e^{λt}( (λt+1)Bv + λBw ) = B x˙ = Ax = e^{λt}( tAv + Aw ),

hence t(λBv) + Bv + λBw = tAv + Aw. Sorting by powers of t, it follows that we need

(λB − A)v = 0 and (λB − A)w = −Bv.

We see that v should be a “root vector;" we may refer to w as a “generalized root vector." With v = [ −1, 1 ]^T, we get the augmented matrix

[ λ1 B − A | −Bv ] = [ 3/2 3/2 | 4 ; 3/2 3/2 | −4 ] ∼ [ 1 1 | 8/3 ; 0 0 | −8 ], after (2/3)R1 → R1 and −(3/2)R1 + R2 → R2,

which has no solution. So far, the most general solution we can find is

(⋆⋆) [ I1, I2 ]^T = [ 2/3, −1 ]^T + c1 e^{−3t/4} [ −1, 1 ]^T,

where c1 is an arbitrary constant. Note that if [ I1, I2 ]^T solves the original system, then necessarily 3I2 + 3 = 2I˙1 − 2I˙2 = −3I1 + 2, hence I1 = −I2 − 1/3 and I˙1 = −I˙2. It follows that the second ODE in the system reduces to

0 = 3 + 3I2 − 2I˙1 + 2I˙2 = 4I˙2 + 3I2 + 3, that is, I˙2 + (3/4) I2 = −3/4.

Using either the method of undetermined coefficients or the method of integrating factors, we can find that the general solution is I2 = −1 + c e^{−3t/4}, where c is an arbitrary constant. It follows that I1 = −I2 − 1/3 = 2/3 − c e^{−3t/4}. So, in fact, (⋆⋆) gives the most general solution of the original system!
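Even though B is singular, the two-parameter-free family just found, I1 = 2/3 − c e^{−3t/4} and I2 = −1 + c e^{−3t/4}, can be checked directly against B x˙ = Ax + f. A Python/NumPy sketch (ours, added for verification):

```python
import numpy as np

# Circuit system of problem 5.5.2.12: B x' = A x + f with singular B.
B = np.array([[2.0, -2.0], [-2.0, 2.0]])
A = np.array([[-3.0, 0.0], [0.0, -3.0]])
f = np.array([2.0, -3.0])

def x(t, c):
    e = c * np.exp(-0.75 * t)
    return np.array([2.0/3.0 - e, -1.0 + e])

def x_dot(t, c):
    e = c * np.exp(-0.75 * t)
    return np.array([0.75 * e, -0.75 * e])

worst = max(
    np.linalg.norm(B @ x_dot(t, c) - (A @ x(t, c) + f))
    for c in (-2.0, 0.0, 1.5)
    for t in np.linspace(0.0, 8.0, 41)
)
print(worst)
```

The residual is zero up to rounding for every tested value of c, consistent with (⁎⁎) being the full solution family despite carrying only one arbitrary constant.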
5.5.2.14. The system that models iodine metabolism is found in Example 5.6 in Section 5.1. Let f1 be the recommended daily allowance of iodine for adults and children of age four or greater. According to the United States NIH² (National Institutes of Health), the recommended daily allowance of iodine is 90 mcg for children ages 4–8, 120 mcg for children ages 9–13, and 150 mcg for children and adults ages 14 and above. Because of this ambiguity, let’s take f1 = 120 mcg. (I’m assuming that the older the person, the more varied their diet and thus the greater the chance that they will consume at least some foods richer in iodine than the average food.) By the way, the webpage cited has a good discussion of the roles of iodine in human health.

Define x = [ x1, x2, x3 ]^T. Using the parameter values we were given, Example 5.6’s system that models iodine metabolism is

(⋆) [ x˙1, x˙2, x˙3 ]^T = [ −2.773 0 0.05199 ; 0.832 −0.008664 0 ; 0 0.008664 −0.05776 ] [ x1, x2, x3 ]^T + [ f1, 0, 0 ]^T ≜ Ax + f.

Using Mathematica, we found that the eigenvalues and corresponding eigenvectors of A are, respectively, to the three significant digits of precision found in the entries of matrix A,

λ1 = −2.77, v1 = [ −0.958, 0.288, −0.000920 ]^T,
λ2 = −0.0604, v2 = [ 0.0183, −0.294, 0.956 ]^T,
and
λ3 = −0.00604, v3 = [ −0.00310, −0.986, −0.165 ]^T.

This gives the general solution of the corresponding LCCHS, x˙ = Ax,

x = c1 e^{−2.77t} [ −0.958, 0.288, −0.000920 ]^T + c2 e^{−0.0604t} [ 0.0183, −0.294, 0.956 ]^T + c3 e^{−0.00604t} [ −0.00310, −0.986, −0.165 ]^T,

where c1, c2, c3 are arbitrary constants.

For the original nonhomogeneous system, x˙ = Ax + f, where f = [ f1, 0, 0 ]^T = [ 1.20 × 10⁻⁶ g, 0, 0 ]^T is a constant vector, we can use formula (5.49) with α = 0 to find a particular solution:

x_p = −A^{−1}f = −[ −2.773 0 0.05199 ; 0.832 −0.008664 0 ; 0 0.008664 −0.05776 ]^{−1} [ 1.20 × 10⁻⁶ g, 0, 0 ]^T = [ 5.93 × 10⁻⁷ g, 5.69 × 10⁻⁵ g, 8.54 × 10⁻⁶ g ]^T.

² http://ods.od.nih.gov/factsheets/Iodine-HealthProfessional/

So, the general solution is x_h(t) + x_p, that is,

[ x1, x2, x3 ]^T = [ 5.93 × 10⁻⁷ g, 5.69 × 10⁻⁵ g, 8.54 × 10⁻⁶ g ]^T + c1 e^{−2.77t} [ −0.958, 0.288, −0.000920 ]^T + c2 e^{−0.0604t} [ 0.0183, −0.294, 0.956 ]^T + c3 e^{−0.00604t} [ −0.00310, −0.986, −0.165 ]^T,

where c1, c2, c3 are arbitrary constants. After solving (⋆), we integrate x1(t) and x3(t) to find x4(t) and x5(t), which modelers can use as measurable outputs from the body in order to estimate the other parameters in the system. In fact, using the parameter values we were given in Example 5.6,

x4 = ∫ 0.005770 x3(t) dt = 0.005770( 8.54 × 10⁻⁶ t + (0.958/2.77) c1 e^{−2.77t} − (0.0183/0.0604) c2 e^{−0.0604t} + (0.00310/0.00604) c3 e^{−0.00604t} + c4 ) = 4.93 × 10⁻⁸ t + 2.00 × 10⁻³ c1 e^{−2.77t} − 1.75 × 10⁻³ c2 e^{−0.0604t} + 2.96 × 10⁻³ c3 e^{−0.00604t} + c4

and

x5 = ∫ 1.941 x1(t) dt = 1.941( 5.93 × 10⁻⁷ t + (0.000920/2.77) c1 e^{−2.77t} − (0.956/0.0604) c2 e^{−0.0604t} + (0.165/0.00604) c3 e^{−0.00604t} ) + c5 = 1.15 × 10⁻⁶ t + 6.45 × 10⁻³ c1 e^{−2.77t} − 30.7 c2 e^{−0.0604t} + 53.0 c3 e^{−0.00604t} + c5.
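The eigenvalues and the constant particular solution reported above can be reproduced with a few lines of NumPy (a cross-check sketch, ours; the manual used Mathematica):

```python
import numpy as np

# Iodine-metabolism matrix and forcing from problem 5.5.2.14.
A = np.array([[-2.773, 0.0, 0.05199],
              [0.832, -0.008664, 0.0],
              [0.0, 0.008664, -0.05776]])
f = np.array([1.20e-6, 0.0, 0.0])

# All three eigenvalues are real and negative (a stable compartment model).
eigenvalues = np.sort(np.linalg.eigvals(A).real)
# Constant particular solution x_p = -A^{-1} f, computed via a linear solve.
xp = -np.linalg.solve(A, f)

print(eigenvalues)  # close to [-2.77, -0.0604, -0.00604]
print(xp)           # close to [5.93e-7, 5.69e-5, 8.54e-6]
```

The computed values agree with the rounded ones in the solution to within the three significant digits used for the entries of A.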
5.5.2.15. Suppose that α is not an eigenvalue of the constant matrix A and w is a constant vector. Substitute into x˙ = Ax + e^{αt}w, that is, (5.50), a solution in the form (5.51), that is, x_p(t) = e^{αt}a. This gives

α e^{αt} a = x˙_p = A x_p + f(t) = A(e^{αt} a) + e^{αt} w = e^{αt}(Aa + w).

Multiplying through by e^{−αt} gives αa = Aa + w, that is, (A − αI)a = −w. Because we assumed that α is not an eigenvalue of A, (A − αI)^{−1} exists. So, the solution for a exists and is given by a = −(A − αI)^{−1}w, hence

x_p(t) = −e^{αt}(A − αI)^{−1}w

solves (5.50), as we wanted to show.
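The formula just proved is easy to exercise on concrete data. In the sketch below (our illustration; the matrix A, the value α, and the vector w are made-up example values, not from the textbook), the relation (A − αI)a = −w is confirmed to be equivalent to αa = Aa + w, which is exactly what substituting x_p(t) = e^{αt}a into the ODE requires:

```python
import numpy as np

# Example data (ours): A has eigenvalues -1 and -2, so alpha = 1 is
# not an eigenvalue and the nonresonance formula applies.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
alpha = 1.0
w = np.array([1.0, 0.0])

# a = -(A - alpha I)^{-1} w, computed with a linear solve.
a = -np.linalg.solve(A - alpha * np.eye(2), w)

# x_p(t) = e^{alpha t} a, so x_p' = alpha e^{alpha t} a; the ODE
# x' = A x + e^{alpha t} w therefore requires alpha a = A a + w.
print(np.allclose(alpha * a, A @ a + w))  # True
```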
Section 5.6.2

5.6.2.1. V(A, b) = Span{ b, Ab } = Span{ [ 2, 1 ]^T, [ −2 4 ; 1 −2 ] [ 2, 1 ]^T } = Span{ [ 2, 1 ]^T, [ 0, 0 ]^T } = Span{ [ 2, 1 ]^T }. Because x0 = [ 6, 3 ]^T = 3 [ 2, 1 ]^T is in V(A, b), yes, the system can be driven from x0 = [ 6, 3 ]^T to 0.

5.6.2.2. V(A, b) = Span{ b, Ab } = Span{ [ 1, 0 ]^T, [ −2, 1 ]^T } = R², because | 1 −2 ; 0 1 | = 1 ≠ 0. Because x0 = [ 6, 3 ]^T is in V(A, b) = R², yes, the system can be driven from x0 = [ 6, 3 ]^T to 0.

5.6.2.3. Because det[ b Ab ] = | 1 −2 ; 0 1 | = 1 ≠ 0, the system is completely controllable.

5.6.2.4. Because det[ b Ab ] = | 1 0 ; 1 3 | = 3 ≠ 0, the system is completely controllable.

5.6.2.5. Because det[ b Ab ] = | 2 0 ; 1 0 | = 0, the system is not completely controllable.

5.6.2.6. A = [ a11 0 ; 0 a22 ], where a11, a22 are non-zero, b = [ b1, b2 ]^T, and c = [ c1, c2 ]^T.

(a) We calculate

det[ b Ab ] = | b1 a11 b1 ; b2 a22 b2 | = b1 b2 (a22 − a11).

So, system (5.64) is completely controllable if, and only if, b1 ≠ 0, b2 ≠ 0, and a22 ≠ a11.

(b) We calculate

det[ c^T ; c^T A ] = | c1 c2 ; a11 c1 a22 c2 | = c1 c2 (a22 − a11).

So, system (5.64) is completely observable if, and only if, all three of the following are true: c1 ≠ 0, c2 ≠ 0, and a22 ≠ a11.

5.6.2.7. Because [ B | AB ] has RREF with two pivot columns, hence rank = 2, the system is completely controllable.

5.6.2.8. Because [ B | AB ] is in RREF and has rank = 1 < 2, the system is not completely controllable.
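The determinant and rank tests used throughout this section can be automated. A Python/NumPy sketch (ours, added as an illustration) of the Kalman rank test, applied to the diagonal system of problem 5.6.2.6 to confirm the criterion b1 ≠ 0, b2 ≠ 0, a22 ≠ a11:

```python
import numpy as np

def controllable(A, b):
    # Kalman rank test for a single-input 2x2 system:
    # completely controllable iff [b | Ab] has full rank.
    C = np.column_stack([b, A @ b])
    return np.linalg.matrix_rank(C) == A.shape[0]

# Distinct nonzero diagonal entries and no zero component of b: controllable.
assert controllable(np.diag([1.0, 2.0]), np.array([1.0, 1.0]))
# A repeated diagonal entry makes det[b | Ab] = b1 b2 (a22 - a11) = 0.
assert not controllable(np.diag([1.0, 1.0]), np.array([1.0, 1.0]))
# A zero component of b also destroys controllability.
assert not controllable(np.diag([1.0, 2.0]), np.array([0.0, 1.0]))
print("rank tests agree with the b1 b2 (a22 - a11) criterion")
```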
Section 5.7.7  3 2 . We can use eigenvalues and eigenvectors to find the general solution of the 2 −3 homogeneous system of difference equations (LCCHS∆) xk+1 = Axk : 3−λ √ √ 2 = (3 − λ)(−3 − λ) − 4 = λ2 − 13 = (λ + 13)(λ − 13) 0 = 2 −3 − λ √ √ ⇒ eigenvalues are λ1 = − 13, λ2 = 13 √ √     −3+ 13 √ 3 + 13 2√ |0 |0

1 2 [ A−λ1 I | 0 ] = , after R1 ↔ R2 , 12 R1 → R1 , −(3+ 13)R1 + ∼ 2 −3 + 13 | 0 0 0 |0 R2 → R2 √   √ 3 − 13 ⇒ v1 = c 1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = − 13 2 √ √     −3− 13 √ 2√ |0 3 − 13

|0 1 2 , after R1 ↔ R2 , 21 R1 → R1 , −(3− 13)R1 + [ A−λ2 I | 0 ] = ∼ 2 −3 − 13 | 0 0 0 |0 R 2 → R2 √   √ 3 + 13 ⇒ v2 = c 1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 13 2 The general solution of the system is √ √     √ √ 3 − 13 3 + 13 xk = c1 (− 13)k + c2 ( 13)k , where c1 , c2 = arbitrary constants. 2 2 

5.7.7.1. Define A =



 −2 7 . We can use eigenvalues and eigenvectors to find the general solution of the 1 4 homogeneous system of difference equations (LCCHS∆) xk+1 = Axk : −2 − λ 7 0= = (−2 − λ)(4 − λ) − 7 = λ2 − 2λ − 15 = (λ + 3)(λ − 5) 1 4−λ ⇒ eigenvalues are λ1 = −3, λ2 = 5     1 7 |0

1 7 |0 [ A − λ1 I | 0 ] = ∼ , after −R1 + R2 → R2 1 7 |0 0 0 |0   −7 ⇒ v1 = c 1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −3 1     −7 7 |0

1 −1 | 0 [ A − λ2 I | 0 ] = ∼ , after − 71 R1 → R1 , −R1 + R2 → R2 1 −1 | 0 0 0 |0   1 ⇒ v2 = c 1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 5 1 The general solution of the system is     −7 1 xk = c1 (−3)k + c2 5k , where c1 , c2 = arbitrary constants. 1 1 5.7.7.2. Define A =



 1 2 . We can use eigenvalues and eigenvectors to find the general solution of the homoge3 2 neous system of difference equations (LCCHS∆) xk+1 = Axk : 1−λ 2 0 = = (1 − λ)(2 − λ) − 6 = λ2 − 3λ − 4 = (λ + 1)(λ − 4) 3 2−λ ⇒ eigenvalues are λ1 = −1, λ2 = 4     2 2 |0

1 1 |0 ∼ [ A − λ1 I | 0 ] = , after 12 R1 → R1 , −3R1 + R2 → R2 3 3 |0 0 0 |0 5.7.7.3. Define A =

c Larry

Turyn, October 10, 2013

page 77

 −1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −1 1     2 −3 2 |0

1 −3 | 0 [ A − λ2 I | 0 ] = ∼ , after − 13 R1 → R1 , −3R1 + R2 → R2 3 −2 | 0 0 0 |0  2  ⇒ v2 = c1 3 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = 4 1 The general solution of the system is     2 −1 , where c1 , c2 = arbitrary constants. + c2 4k xk = c1 (−1)k 3 1 

⇒ v1 = c 1



5.7.7.4. Define A = [ 1, −2 ; 4, −3 ]. As it happens, problem 5.7.7.4 is exactly the same as Example 5.33 in the textbook. We can use eigenvalues and eigenvectors to find the general solution of the homogeneous system of difference equations (LCCHSΔ) x_{k+1} = Ax_k:

0 = | 1 − λ, −2 ; 4, −3 − λ | = (1 − λ)(−3 − λ) + 8 = λ² + 2λ + 5 = (λ + 1)² + 4

⇒ eigenvalues are λ₁ = −1 + i2, λ₂ = −1 − i2.

[ A − λ₁I | 0 ] = [ 2 − i2, −2 | 0 ; 4, −2 − i2 | 0 ] ∼ [ 1, −(1 + i)/2 | 0 ; 0, 0 | 0 ], after R₁ ↔ R₂, (1/4)R₁ → R₁, −(2 − i2)R₁ + R₂ → R₂

⇒ v₁ = c₁ [ 1 + i ; 2 ], for any constant c₁ ≠ 0, are the eigenvectors corresponding to eigenvalue λ₁ = −1 + i2. Because the eigenvalues are a complex conjugate pair, we don't need to find eigenvectors corresponding to λ₂ = λ̄₁.

As in Example 5.33, we calculate that α + iν = −1 + i2 = ρ(cos ω + i sin ω), where

ρ = √((−1)² + 2²) = √5 and tan ω = 2/(−1).

Because α + iν = −1 + i2 is in the second quadrant, we have ω = π − arctan 2. Taking v = [ (1 + i)/2 ; 1 ], we calculate

(cos ωk + i sin ωk)v = [ (1/2 + i/2)(cos ωk + i sin ωk) ; cos ωk + i sin ωk ] = (1/2) [ cos ωk − sin ωk + i(cos ωk + sin ωk) ; 2 cos ωk + i2 sin ωk ].

As in the discussion before Example 5.33, there are solutions

(1) x_k = ρ^k Re((cos ωk + i sin ωk)v) = (1/2) 5^{k/2} [ cos ωk − sin ωk ; 2 cos ωk ]

and

(2) x_k = ρ^k Im((cos ωk + i sin ωk)v) = (1/2) 5^{k/2} [ cos ωk + sin ωk ; 2 sin ωk ].

The general solution is

x_k = c₁ 5^{k/2} [ cos ωk − sin ωk ; 2 cos ωk ] + c₂ 5^{k/2} [ cos ωk + sin ωk ; 2 sin ωk ], where c₁, c₂ = arbitrary constants.
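As a numerical sanity check (an illustration added here, not part of the manual's method), the real closed form built from the complex eigenvector can be compared against direct iteration of x_{k+1} = A x_k; the sketch below assumes NumPy is available.

```python
import numpy as np

# Check of problem 5.7.7.4: A has eigenvalues -1 ± 2i, so rho = sqrt(5)
# and omega = pi - arctan(2); the real solutions built from the complex
# eigenvector (1+i, 2) should satisfy x_{k+1} = A x_k exactly.
A = np.array([[1.0, -2.0], [4.0, -3.0]])
rho = np.sqrt(5.0)
omega = np.pi - np.arctan(2.0)

def x(k, c1=1.0, c2=0.0):
    c, s = np.cos(omega * k), np.sin(omega * k)
    sol1 = np.array([c - s, 2.0 * c])   # Re-part solution (up to rho^k)
    sol2 = np.array([c + s, 2.0 * s])   # Im-part solution (up to rho^k)
    return rho**k * (c1 * sol1 + c2 * sol2)

for k in range(8):
    assert np.allclose(x(k + 1, 0.3, -1.2), A @ x(k, 0.3, -1.2))
```

Any linear combination of the two real solutions passes the same check, which is exactly the statement that they span the general solution.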



5.7.7.5. Define A = [ 0, −1 ; 1, 0 ]. We can use eigenvalues and eigenvectors to find the general solution of the homogeneous system of difference equations (LCCHSΔ) x_{k+1} = Ax_k:

0 = | −λ, −1 ; 1, −λ | = λ² + 1 ⇒ eigenvalues are λ₁ = i, λ₂ = −i.

[ A − λ₁I | 0 ] = [ −i, −1 | 0 ; 1, −i | 0 ] ∼ [ 1, −i | 0 ; 0, 0 | 0 ], after iR₁ → R₁, −R₁ + R₂ → R₂

⇒ v₁ = c₁ [ i ; 1 ], for any constant c₁ ≠ 0, are the eigenvectors corresponding to eigenvalue λ₁ = i. Because the eigenvalues are a complex conjugate pair, we don't need to find eigenvectors corresponding to λ₂ = λ̄₁.

As in Example 5.33, we calculate that α + iν = i = ρ(cos ω + i sin ω) = 1·(cos(π/2) + i sin(π/2)), where ρ = 1 and ω = π/2.

We have

(cos ωk + i sin ωk)v = (cos(πk/2) + i sin(πk/2)) [ i ; 1 ] = [ −sin(πk/2) + i cos(πk/2) ; cos(πk/2) + i sin(πk/2) ].

As in the discussion before Example 5.33, there are solutions

(1) x_k = ρ^k Re((cos ωk + i sin ωk)v) = [ −sin(πk/2) ; cos(πk/2) ]

and

(2) x_k = ρ^k Im((cos ωk + i sin ωk)v) = [ cos(πk/2) ; sin(πk/2) ].

The general solution is

x_k = c₁ [ −sin(πk/2) ; cos(πk/2) ] + c₂ [ cos(πk/2) ; sin(πk/2) ], where c₁, c₂ = arbitrary constants.
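Here A is the rotation of the plane by π/2, so the solution simply cycles with period 4. The following sketch (illustrative only; it assumes NumPy) checks the closed form against direct iteration.

```python
import numpy as np

# Problem 5.7.7.5: A = [[0, -1], [1, 0]] rotates the plane by pi/2,
# so x_k = A^k x_0 should match the cos/sin closed form and repeat every 4 steps.
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def closed_form(k, c1, c2):
    c, s = np.cos(np.pi * k / 2), np.sin(np.pi * k / 2)
    return c1 * np.array([-s, c]) + c2 * np.array([c, s])

c1, c2 = 1.5, -0.5
xk = closed_form(0, c1, c2)            # x_0 = (c2, c1)
for k in range(9):
    assert np.allclose(xk, closed_form(k, c1, c2))
    xk = A @ xk                         # iterate x_{k+1} = A x_k

assert np.allclose(closed_form(4, c1, c2), closed_form(0, c1, c2))  # period 4
```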



 1 −3 6 5.7.7.6. Define A =  0 −2 0 . We can use eigenvalues and eigenvectors to find the general solution of the 2 0 0 homogeneous system of difference equations (LCCHS∆) xk+1 = Axk : Expanding the determinant along the second row gives 1−λ −3 6 1−λ  6 −2 − λ 0 = (−2 − λ) 0 = 0 = (−2 − λ) (1 − λ)(−λ) − 12 2 −λ 2 0 −λ = (−2 − λ)(λ2 − λ − 12) = (−2 − λ)(λ + 3)(λ − 4) ⇒ eigenvalues are λ1 = −3, λ2 = −2, λ3 = 4    4 −3 6 | 0

1 1 0 |0 ∼ 0 [ A − λ1 I | 0 ] =  0 0 2 0 3 |0 after R1 ↔ R3 , −2R1 + R3  3  −2 0 , for ⇒ v1 = c 1  1  3 −3 0 [ A − λ2 I | 0 ] =  0 2 0

→ R3 ,

1 R 2 1

0

1 −3

3 2

0 0

  |0

1 |0 ∼ 0 |0 0

0

1 0

3 2

0 0

 |0 | 0 , |0

→ R1 , followed by 3R2 + R3 → R3

any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ1 = −3 6 0 2

  |0 |0 ∼ |0

1 0 0

0 0 −3

1 0 3

  |0 |0 ∼ |0

1 0 0

0

1 0

1 −1 0

 |0 | 0 , |0

after R1 ↔ R3 , 21 R1 → R1 , −3R1 + R3 → R3 , followed by R2 ↔ R3 , − 31 R2 → R2   −1 ⇒ v1 = c1  1 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ2 = −2 1 c Larry

Turyn, October 10, 2013

page 79



−3 [ A − λ3 I | 0 ] =  0 2

−3 −6 0

  |0 |0 ∼ |0

6 0 −4

1 0 0

 |0 | 0 , |0

−2 0 0

0

1 0

after − 61 R2 → R2 , 3R2 + R1 → R1 , − 31 R1 → R1 , −2R1 + R3 → R3   2 ⇒ v1 = c1  0 , for any constant c1 6= 0, are the eigenvectors corresponding to eigenvalue λ3 = 4 1 The general solution of the system is  3      −2 2 −1 k k k 0  + c2 (−2)  1  + c3 4  0  , where c1 , c2 = arbitrary constants. xk = c1 (−3) 1 1 1  5.7.7.7. Using the results of Example 5.35, with n = 5, xk ,

x_k = [ V_k ; I_k ], for k = 0, ..., 5, Y₀ = Y₁ = ... = Y₄ = 1/2, and Z₀ = Z₁ = ... = Z₄ = 1, we get that x_{k+1} = A_k x_k, where for k = 0, ..., 3,

A_k = [ 1, −Z_k ; −Y_k, 1 + Y_kZ_k ] = [ 1, −1 ; −1/2, 1 + (1/2)·1 ] = [ 1, −1 ; −1/2, 3/2 ]

and

A₄ = [ 1, −Z₄ ; 0, 1 ] = [ 1, −1 ; 0, 1 ].

(a) Method 1: To find the equivalent impedance, using the above, we calculate

A₀⁻¹A₁⁻¹···A₄⁻¹ [ 1 ; Y₄ ] = [ 1, −1 ; −1/2, 3/2 ]⁻⁴ [ 1, −1 ; 0, 1 ]⁻¹ [ 1 ; 1/2 ]
= [ 3/2, 1 ; 1/2, 1 ]⁴ [ 1, 1 ; 0, 1 ] [ 1 ; 1/2 ] = (1/16)[ 171, 170 ; 85, 86 ] [ 3/2 ; 1/2 ] = (1/32)[ 683 ; 341 ],

whose (1,1) entry is η = 683/32, and the equivalent impedance is Z = η/Y₄ = (683/32)/(1/2) = 683/16.

Method 2: To find the equivalent impedance, it helps to use eigenvalues and eigenvectors of A_k, for k = 0, ..., 3. In part (b), below, we will diagonalize A₀:

A₀ = [ 1, −1 ; −1/2, 3/2 ] = PDP⁻¹, where P = [ v₁ ⋮ v₂ ] = [ −1, 2 ; 1, 1 ] and D = diag(λ₁, λ₂) = diag(2, 1/2).

It follows that

A₀⁻¹A₁⁻¹···A₄⁻¹ [ 1 ; Y₄ ] = (PDP⁻¹)⁻⁴ A₄⁻¹ [ 1 ; Y₄ ] = PD⁻⁴P⁻¹ A₄⁻¹ [ 1 ; Y₄ ]
= −(1/3) [ −λ₁⁻⁴ − 2λ₂⁻⁴, 2λ₁⁻⁴ − 2λ₂⁻⁴ ; λ₁⁻⁴ − λ₂⁻⁴, −2λ₁⁻⁴ − λ₂⁻⁴ ] [ 3/2 ; 1/2 ]
= −(1/48) [ −513, −510 ; −255, −258 ] [ 3/2 ; 1/2 ] = −(1/96) [ −2049 ; −1023 ] = (1/32) [ 683 ; 341 ],

whose (1,1) entry is η = 683/32, and the equivalent impedance is Z = η/Y₄ = (683/32)/(1/2) = 683/16.

(b) To find a formula for V_k it helps to use eigenvalues and eigenvectors of A_k, for k = 0, ..., 3. We calculate

0 = | 1 − λ, −1 ; −1/2, 3/2 − λ | = (1 − λ)(3/2 − λ) − 1/2 = λ² − (5/2)λ + 1 = (λ − 5/4)² − 9/16

⇒ eigenvalues are λ₁ = 5/4 + 3/4 = 2, λ₂ = 1/2.

[ A₀ − λ₁I | 0 ] = [ −1, −1 | 0 ; −1/2, −1/2 | 0 ] ∼ [ 1, 1 | 0 ; 0, 0 | 0 ], after −R₁ → R₁, (1/2)R₁ + R₂ → R₂

⇒ v₁ = c₁ [ −1 ; 1 ], for any constant c₁ ≠ 0, are the eigenvectors corresponding to eigenvalue λ₁ = 2.

[ A₀ − λ₂I | 0 ] = [ 1/2, −1 | 0 ; −1/2, 1 | 0 ] ∼ [ 1, −2 | 0 ; 0, 0 | 0 ], after R₁ + R₂ → R₂, 2R₁ → R₁

⇒ v₂ = c₁ [ 2 ; 1 ], for any constant c₁ ≠ 0, are the eigenvectors corresponding to eigenvalue λ₂ = 1/2.

So, we can diagonalize A₀: A₀ = PDP⁻¹, where P = [ −1, 2 ; 1, 1 ] and D = diag(2, 1/2).

As in Example 5.33, for k = 1, ..., 4, using the fact that A_k = A₀ for k = 0, ..., 3,

x_k = [ V_k ; I_k ] = A_{k−1}A_{k−2}···A₁A₀ [ V₀ ; I₀ ] = PD^kP⁻¹ [ V₀ ; I₀ ]
= −(1/3) [ −λ₁^k − 2λ₂^k, 2λ₁^k − 2λ₂^k ; λ₁^k − λ₂^k, −2λ₁^k − λ₂^k ] [ V₀ ; I₀ ].

Further, using the fact that A₄ = [ 1, −1 ; 0, 1 ], we have

x₅ = [ V₅ ; I₅ ] = A₄(A₃A₂A₁A₀) [ V₀ ; I₀ ] = −(1/3) [ −2λ₁⁴ − λ₂⁴, 4λ₁⁴ − λ₂⁴ ; λ₁⁴ − λ₂⁴, −2λ₁⁴ − λ₂⁴ ] [ V₀ ; I₀ ]
= −(1/48) [ −513, 1023 ; 255, −513 ] [ V₀ ; I₀ ] = (1/16) [ 171V₀ − 341I₀ ; −85V₀ + 171I₀ ].

From part (a) and (5.101) in the textbook, we have

[ V₀ ; I₀ ] = x₀ = V₅ A₀⁻¹A₁⁻¹···A₄⁻¹ [ 1 ; Y₄ ] = ... = V₅ (1/32) [ 683 ; 341 ],

hence I₀ = (341/32)V₅. By definition, η = V₀/V₅, so I₀ = (341/32)V₅ = (341/32)·(V₀/(683/32)) = (341/683)V₀.

From earlier work in part (b), we have, for k = 0, 1, ..., 4,

V_k = −(1/3)[ (−λ₁^k − 2λ₂^k) + (341/683)(2λ₁^k − 2λ₂^k) ]V₀ = −(V₀/2049)[ (−683 + 682)λ₁^k − (1366 + 682)λ₂^k ] = (V₀/2049)( 2^k + 2048·2^{−k} ).

Further, using the fact that in this problem Y₄ = 1/2,

V₅ = Y₄⁻¹I₄ = −(2V₀/2049)[ (683 − 682)λ₁⁴ − (683 + 341)λ₂⁴ ] = −(2V₀/2049)( 2⁴ − 1024·2⁻⁴ ) = (96/2049)V₀ = (32/683)V₀.

Note that, by definition of η, we also have V₅ = η⁻¹V₀ = (32/683)V₀, which agrees with our final conclusion.

5.7.7.8. Using the results of Example 5.35, with n = 6, x_k = [ V_k ; I_k ], for k = 0, ..., 6, Y₀ = Y₁ = ... = Y₅ = 1/2, and Z₀ = Z₁ = ... = Z₅ = 1, we get that x_{k+1} = A_k x_k, where for k = 0, ..., 4,

A_k = [ 1, −1 ; −1/2, 3/2 ], and A₅ = [ 1, −Z₅ ; 0, 1 ] = [ 1, −1 ; 0, 1 ].

(a) Method 1: To find the equivalent impedance, using the above, we calculate

A₀⁻¹A₁⁻¹···A₅⁻¹ [ 1 ; Y₅ ] = [ 3/2, 1 ; 1/2, 1 ]⁵ [ 1, 1 ; 0, 1 ] [ 1 ; 1/2 ] = (1/32)[ 683, 682 ; 341, 342 ] [ 3/2 ; 1/2 ] = (1/64)[ 2731 ; 1365 ],

whose (1,1) entry is η = 2731/64, and the equivalent impedance is Z = η/Y₅ = (2731/64)/(1/2) = 2731/32.

Method 2: With A₀ = PDP⁻¹ exactly as in problem 5.7.7.7 (same A₀, so the same P = [ −1, 2 ; 1, 1 ] and D = diag(2, 1/2)), it follows that

A₀⁻¹A₁⁻¹···A₅⁻¹ [ 1 ; Y₅ ] = PD⁻⁵P⁻¹ A₅⁻¹ [ 1 ; Y₅ ]
= −(1/3) [ −λ₁⁻⁵ − 2λ₂⁻⁵, 2λ₁⁻⁵ − 2λ₂⁻⁵ ; λ₁⁻⁵ − λ₂⁻⁵, −2λ₁⁻⁵ − λ₂⁻⁵ ] [ 3/2 ; 1/2 ]
= −(1/96) [ −2049, −2046 ; −1023, −1026 ] [ 3/2 ; 1/2 ] = −(1/192) [ −8193 ; −4095 ] = (1/64) [ 2731 ; 1365 ],

whose (1,1) entry is η = 2731/64, and the equivalent impedance is Z = η/Y₅ = 2731/32.

(b) The diagonalization of A₀ is exactly as in problem 5.7.7.7: λ₁ = 2, λ₂ = 1/2, with eigenvectors v₁ = c₁[ −1 ; 1 ] and v₂ = c₁[ 2 ; 1 ]. As in Example 5.33, for k = 1, ..., 5, using the fact that A_k = A₀ for k = 0, ..., 4,

x_k = PD^kP⁻¹ [ V₀ ; I₀ ] = −(1/3) [ −λ₁^k − 2λ₂^k, 2λ₁^k − 2λ₂^k ; λ₁^k − λ₂^k, −2λ₁^k − λ₂^k ] [ V₀ ; I₀ ].

Further, using the fact that A₅ = [ 1, −1 ; 0, 1 ], we have

x₆ = [ V₆ ; I₆ ] = A₅(A₄A₃A₂A₁A₀) [ V₀ ; I₀ ] = −(1/3) [ −2λ₁⁵ − λ₂⁵, 4λ₁⁵ − λ₂⁵ ; λ₁⁵ − λ₂⁵, −2λ₁⁵ − λ₂⁵ ] [ V₀ ; I₀ ]
= −(1/96) [ −2049, 4095 ; 1023, −2049 ] [ V₀ ; I₀ ] = (1/32) [ 683V₀ − 1365I₀ ; −341V₀ + 683I₀ ].

From part (a) and (5.101) in the textbook, we have

[ V₀ ; I₀ ] = x₀ = V₆ A₀⁻¹A₁⁻¹···A₅⁻¹ [ 1 ; Y₅ ] = ... = V₆ (1/64) [ 2731 ; 1365 ],

hence I₀ = (1365/64)V₆. By definition, η = V₀/V₆, so I₀ = (1365/64)V₆ = (1365/64)·(V₀/(2731/64)) = (1365/2731)V₀.

From earlier work in part (b), we have, for k = 0, 1, ..., 5,

V_k = −(V₀/8193)[ (−2731 + 2730)λ₁^k − (5462 + 2730)λ₂^k ] = (V₀/8193)( 2^k + 8192·2^{−k} ).

Further, using the fact that in this problem Y₅ = 1/2,

V₆ = Y₅⁻¹I₅ = −(2V₀/8193)[ (2731 − 2730)λ₁⁵ − (2731 + 1365)λ₂⁵ ] = −(2V₀/8193)( 2⁵ − 4096·2⁻⁵ ) = (192/8193)V₀ = (64/2731)V₀.

Note that, by definition of η, we also have V₆ = η⁻¹V₀ = (64/2731)V₀, which agrees with our final conclusion.
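The ladder-network computations in problems 5.7.7.7 and 5.7.7.8 reduce to a product of 2×2 section matrices, which is easy to mechanize. The sketch below (an illustration added here, with a helper name `eta` of our own choosing, not the textbook's notation; it assumes NumPy) recomputes η for the n = 5 and n = 6 resistive ladders.

```python
import numpy as np

def eta(n):
    # Sections 0..n-2 each have A_k = [[1, -1], [-1/2, 3/2]];
    # the last section is [[1, -1], [0, 1]], and the load vector is (1, Y) with Y = 1/2.
    A = np.array([[1.0, -1.0], [-0.5, 1.5]])
    A_last = np.array([[1.0, -1.0], [0.0, 1.0]])
    v = np.array([1.0, 0.5])
    result = np.linalg.matrix_power(np.linalg.inv(A), n - 1) @ np.linalg.inv(A_last) @ v
    return result[0]        # the (1,1) entry, i.e. eta = V_0 / V_n

assert np.isclose(eta(5), 683 / 32)     # problem 5.7.7.7
assert np.isclose(eta(6), 2731 / 64)    # problem 5.7.7.8
```

The same product structure works for the sinusoidal-source problems below by using complex section matrices.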

5.7.7.9. Using the results of Example 5.35, with n = 5, and x_k = [ V_k ; I_k ], for k = 0, ..., 5. Because the input voltage source is sinusoidal, each resistor has impedance Z = R. In this problem, Y₀ = Y₁ = ... = Y₄ = 1, and Z₀ = Z₁ = ... = Z₄ = 1. We get that x_{k+1} = A_k x_k, where for k = 0, ..., 3,

A_k = [ 1, −Z_k ; −Y_k, 1 + Y_kZ_k ] = [ 1, −1 ; −1, 2 ], and A₄ = [ 1, −Z₄ ; 0, 1 ] = [ 1, −1 ; 0, 1 ].

(a) Method 1: To find the equivalent impedance, using the above, we calculate

A₀⁻¹A₁⁻¹···A₄⁻¹ [ 1 ; Y₄ ] = [ 2, 1 ; 1, 1 ]⁴ [ 1, 1 ; 0, 1 ] [ 1 ; 1 ] = [ 34, 21 ; 21, 13 ] [ 2 ; 1 ] = [ 89 ; 55 ],

whose (1,1) entry is η = 89, and the equivalent impedance is Z = η/Y₄ = 89/1 = 89.

Method 2: To find the equivalent impedance, it helps to use eigenvalues and eigenvectors of A_k, for k = 0, ..., 3. In part (b), below, we will diagonalize A₀:

A₀ = [ 1, −1 ; −1, 2 ] = PDP⁻¹, where P = [ v₁ ⋮ v₂ ] = [ 1 − √5, 1 + √5 ; 2, 2 ] and D = diag(λ₁, λ₂) = diag( (3 + √5)/2, (3 − √5)/2 ).

Then, using P⁻¹ = −(1/(4√5)) [ 2, −1 − √5 ; −2, 1 − √5 ], it follows that

A₀⁻¹A₁⁻¹···A₄⁻¹ [ 1 ; Y₄ ] = PD⁻⁴P⁻¹ A₄⁻¹ [ 1 ; Y₄ ]
= −(1/(4√5)) [ 2(1 − √5)λ₁⁻⁴ − 2(1 + √5)λ₂⁻⁴, 4λ₁⁻⁴ − 4λ₂⁻⁴ ; 4λ₁⁻⁴ − 4λ₂⁻⁴, −2(1 + √5)λ₁⁻⁴ + 2(1 − √5)λ₂⁻⁴ ] [ 2 ; 1 ]
= −(1/(2√5)) [ 2(2 − √5)λ₁⁻⁴ − 2(2 + √5)λ₂⁻⁴ ; (3 − √5)λ₁⁻⁴ − (3 + √5)λ₂⁻⁴ ].

Since λ₁λ₂ = 1, we have λ₁⁻⁴ = λ₂⁴ and λ₂⁻⁴ = λ₁⁴, so this equals

−(1/(2√5)) [ 2(2 − √5)λ₂⁴ − 2(2 + √5)λ₁⁴ ; (3 − √5)λ₂⁴ − (3 + √5)λ₁⁴ ] = −(1/(2√5)) [ −178√5 ; −110√5 ] = [ 89 ; 55 ],

whose (1,1) entry is η = 89, and the equivalent impedance is Z = η/Y₄ = 89.

(b) To find a formula for V_k it helps to use eigenvalues and eigenvectors of A_k, for k = 0, ..., 3. We calculate

0 = | 1 − λ, −1 ; −1, 2 − λ | = (1 − λ)(2 − λ) − 1 = λ² − 3λ + 1 = (λ − 3/2)² − 5/4

⇒ eigenvalues are λ₁ = (3 + √5)/2, λ₂ = (3 − √5)/2.

[ A₀ − λ₁I | 0 ] = [ −(1 + √5)/2, −1 | 0 ; −1, (1 − √5)/2 | 0 ] ∼ [ 1, (−1 + √5)/2 | 0 ; 0, 0 | 0 ], after R₁ ↔ R₂, −R₁ → R₁, ((1 + √5)/2)R₁ + R₂ → R₂

⇒ v₁ = c₁ [ 1 − √5 ; 2 ], for any constant c₁ ≠ 0, are the eigenvectors corresponding to eigenvalue λ₁ = (3 + √5)/2.

[ A₀ − λ₂I | 0 ] = [ (−1 + √5)/2, −1 | 0 ; −1, (1 + √5)/2 | 0 ] ∼ [ 1, −(1 + √5)/2 | 0 ; 0, 0 | 0 ], after R₁ ↔ R₂, −R₁ → R₁, ((1 − √5)/2)R₁ + R₂ → R₂

⇒ v₂ = c₁ [ 1 + √5 ; 2 ], for any constant c₁ ≠ 0, are the eigenvectors corresponding to eigenvalue λ₂ = (3 − √5)/2.

So, we can diagonalize A₀: A₀ = PDP⁻¹, where P = [ 1 − √5, 1 + √5 ; 2, 2 ] and D = diag( (3 + √5)/2, (3 − √5)/2 ).

As in Example 5.33, for k = 1, ..., 4, using the fact that A_k = A₀ for k = 0, ..., 3,

x_k = [ V_k ; I_k ] = PD^kP⁻¹ [ V₀ ; I₀ ]
= −(1/(2√5)) [ (1 − √5)λ₁^k − (1 + √5)λ₂^k, 2λ₁^k − 2λ₂^k ; 2λ₁^k − 2λ₂^k, −(1 + √5)λ₁^k + (1 − √5)λ₂^k ] [ V₀ ; I₀ ].

Further, using the fact that A₄ = [ 1, −1 ; 0, 1 ], we have

x₅ = A₄(A₃A₂A₁A₀) [ V₀ ; I₀ ]
= −(1/(2√5)) [ −(1 + √5)λ₁⁴ + (1 − √5)λ₂⁴, (3 + √5)λ₁⁴ − (3 − √5)λ₂⁴ ; 2λ₁⁴ − 2λ₂⁴, −(1 + √5)λ₁⁴ + (1 − √5)λ₂⁴ ] [ V₀ ; I₀ ]
= [ 34, −55 ; −21, 34 ] [ V₀ ; I₀ ] = [ 34V₀ − 55I₀ ; −21V₀ + 34I₀ ].

From part (a) and (5.101) in the textbook, we have

[ V₀ ; I₀ ] = x₀ = V₅ A₀⁻¹A₁⁻¹···A₄⁻¹ [ 1 ; Y₄ ] = ... = V₅ [ 89 ; 55 ],

hence I₀ = 55V₅. By definition, η = V₀/V₅, so I₀ = 55V₅ = (55/89)V₀.

From earlier work in part (b), we have, for k = 0, 1, ..., 4,

V_k = −(V₀/(178√5))[ (89(1 − √5) + 110)λ₁^k − (89(1 + √5) + 110)λ₂^k ].

Further, using the fact that in this problem Y₄ = 1,

V₅ = Y₄⁻¹I₄ = −(V₀/(178√5))[ (178 − 55(1 + √5))λ₁⁴ + (−178 + 55(1 − √5))λ₂⁴ ]
= −(V₀/(178√5))[ (123 − 55√5)((3 + √5)/2)⁴ − (123 + 55√5)((3 − √5)/2)⁴ ] = −(V₀/(178√5))·(−2√5) = V₀/89.

Note that, by definition of η, we also have V₅ = η⁻¹V₀ = V₀/89, which agrees with our final conclusion.

5.7.7.10. Use the methods of Example 5.35, with n = 4, and x_k = [ V_k ; I_k ], for k = 0, ..., 4. Because the input voltage source is sinusoidal, each resistor has impedance Z = R and each capacitor has admittance Y = jωC. In this problem, Y₀ = Y₁ = Y₂ = Y₃ = jω, and Z₀ = Z₁ = Z₂ = Z₃ = 1. We get that x_{k+1} = A_k x_k, where for k = 0, 1, 2,

A_k = [ 1, −Z_k ; −Y_k, 1 + Y_kZ_k ] = [ 1, −1 ; −jω, 1 + jω ], and A₃ = [ 1, −Z₃ ; 0, 1 ] = [ 1, −1 ; 0, 1 ].

(a) Method 1: To find the equivalent impedance, using the above, we calculate

A₀⁻¹A₁⁻¹A₂⁻¹A₃⁻¹ [ 1 ; Y₃ ] = [ 1 + jω, 1 ; jω, 1 ]³ [ 1, 1 ; 0, 1 ] [ 1 ; jω ]
= [ 1 + j6ω − 5ω² − jω³, 4 + j10ω − 6ω² − jω³ ; j3ω − 4ω² − jω³, 1 + j6ω − 5ω² − jω³ ] [ 1 ; jω ]
= [ 1 + j10ω − 15ω² − j7ω³ + ω⁴ ; j4ω − 10ω² − j6ω³ + ω⁴ ],

whose (1,1) entry is η = (1 − 15ω² + ω⁴) + j(10ω − 7ω³), and the equivalent impedance is

Z = η/Y₃ = η/(jω) = −jω⁻¹η = (1/ω)( (10ω − 7ω³) − j(1 − 15ω² + ω⁴) ).

(b) To find a formula for V_k it might help to use eigenvalues and eigenvectors of A_k, for k = 0, 1, 2. We calculate

0 = | A₀ − λI | = (1 − λ)(1 + jω − λ) − (−1)(−jω) = λ² − (2 + jω)λ + 1

⇒ λ_{1,2} = ( (2 + jω) ± √( (2 + jω)² − 4 ) )/2 = ( (2 + jω) ± √( ω(4j − ω) ) )/2.

These eigenvalues are complicated, and the eigenvectors are likely to be more complicated, so in this problem it would probably not be a good idea to use eigenvalues and eigenvectors to find V_k for k = 1, ..., 4. Because we only need to find four values of V_k, it is practical to find them one at a time using matrix multiplications:

x₁ = [ V₁ ; I₁ ] = A₀ [ V₀ ; I₀ ] = [ V₀ − I₀ ; −jωV₀ + (1 + jω)I₀ ],

x₂ = A₁x₁ = [ (1 + jω)V₀ − (2 + jω)I₀ ; (ω² − 2jω)V₀ + (1 − ω² + 3jω)I₀ ],

x₃ = A₂x₂ = [ (1 − ω² + 3jω)V₀ + (−3 + ω² − 4jω)I₀ ; (4ω² + jω(−3 + ω²))V₀ + (1 − 5ω² + jω(6 − ω²))I₀ ],

x₄ = A₃x₃ = [ V₃ − I₃ ; I₃ ] = [ (1 − 5ω² + jω(6 − ω²))V₀ + (−4 + 6ω² + jω(−10 + ω²))I₀ ; (4ω² + jω(−3 + ω²))V₀ + (1 − 5ω² + jω(6 − ω²))I₀ ].

From part (a) and (5.101) in the textbook, we have

[ V₀ ; I₀ ] = x₀ = V₄ A₀⁻¹A₁⁻¹A₂⁻¹A₃⁻¹ [ 1 ; Y₃ ] = V₄ [ 1 + j10ω − 15ω² − j7ω³ + ω⁴ ; j4ω − 10ω² − j6ω³ + ω⁴ ],

hence I₀ = ( j4ω − 10ω² − j6ω³ + ω⁴ )V₄. Furthermore, by definition, η = V₀/V₄, so

I₀ = ( ( j4ω − 10ω² − j6ω³ + ω⁴ )/( (1 − 15ω² + ω⁴) + j(10ω − 7ω³) ) )V₀.

So,

V₁ = V₀ − I₀ = ( ( 1 − 5ω² + jω(6 − ω²) )/( (1 − 15ω² + ω⁴) + j(10ω − 7ω³) ) )V₀
= ( ( 1 − 5ω² + jω(6 − ω²) )( (1 − 15ω² + ω⁴) − j(10ω − 7ω³) )/( (1 − 15ω² + ω⁴)² + (10ω − 7ω³)² ) )V₀.

Using Mathematica to do the algebra, we have

V₁ = ( 1/( 1 + 70ω² + 87ω⁴ + 19ω⁶ + ω⁸ ) )( (1 + 40ω² + 24ω⁴ + 2ω⁶) + j(−4ω − 34ω³ − 14ω⁵ − ω⁷) )V₀.

Next,

V₂ = (1 + jω)V₀ − (2 + jω)I₀ = ( (1 − ω² + j3ω)/( (1 − 15ω² + ω⁴) + j(10ω − 7ω³) ) )V₀
= ( 1/( 1 + 70ω² + 87ω⁴ + 19ω⁶ + ω⁸ ) )( (1 + 14ω² − 5ω⁴ − ω⁶) + j(−7ω − 28ω³ − 4ω⁵) )V₀.

Next,

V₃ = (1 − ω² + 3jω)V₀ + (−3 + ω² − 4jω)I₀ = ( (1 + jω)/( (1 − 15ω² + ω⁴) + j(10ω − 7ω³) ) )V₀
= ( 1/( 1 + 70ω² + 87ω⁴ + 19ω⁶ + ω⁸ ) )( (1 − 5ω² − 6ω⁴) + jω(−9 − 8ω² + ω⁴) )V₀.

Finally,

V₄ = (1/η)V₀ = ( 1/( 1 + 70ω² + 87ω⁴ + 19ω⁶ + ω⁸ ) )( (1 − 15ω² + ω⁴) − j(10ω − 7ω³) )V₀.

5.7.7.11. Use the methods of Example 5.35, with n = 4, and x_k = [ V_k ; I_k ], for k = 0, ..., 4. Because the input voltage source is sinusoidal, each resistor has impedance Z = R. In this problem, Y₀ = Y₁ = Y₂ = Y₃ = 1, and Z₀ = Z₁ = Z₂ = Z₃ = 2. We get that x_{k+1} = A_k x_k, where for k = 0, 1, 2,

A_k = [ 1, −Z_k ; −Y_k, 1 + Y_kZ_k ] = [ 1, −2 ; −1, 3 ], and A₃ = [ 1, −Z₃ ; 0, 1 ] = [ 1, −2 ; 0, 1 ].

(a) Method 1: To find the equivalent impedance, using the above, we calculate

A₀⁻¹A₁⁻¹A₂⁻¹A₃⁻¹ [ 1 ; Y₃ ] = [ 3, 2 ; 1, 1 ]³ [ 1, 2 ; 0, 1 ] [ 1 ; 1 ] = [ 41, 112 ; 15, 41 ] [ 1 ; 1 ] = [ 153 ; 56 ],

whose (1,1) entry is η = 153, and the equivalent impedance is Z = η/Y₃ = 153/1 = 153.

Method 2: To find the equivalent impedance, it helps to use eigenvalues and eigenvectors of A_k, for k = 0, 1, 2. In part (b), below, we will diagonalize A₀:

A₀ = [ 1, −2 ; −1, 3 ] = PDP⁻¹, where P = [ v₁ ⋮ v₂ ] = [ 1 − √3, 1 + √3 ; 1, 1 ] and D = diag(λ₁, λ₂) = diag( 2 + √3, 2 − √3 ).

It follows, using P⁻¹ = −(1/(2√3)) [ 1, −1 − √3 ; −1, 1 − √3 ], that

A₀⁻¹A₁⁻¹A₂⁻¹A₃⁻¹ [ 1 ; Y₃ ] = PD⁻³P⁻¹ A₃⁻¹ [ 1 ; Y₃ ]
= −(1/(2√3)) [ (1 − √3)λ₁⁻³ − (1 + √3)λ₂⁻³, 2λ₁⁻³ − 2λ₂⁻³ ; λ₁⁻³ − λ₂⁻³, −(1 + √3)λ₁⁻³ + (1 − √3)λ₂⁻³ ] [ 3 ; 1 ]
= −(1/(2√3)) [ (5 − 3√3)λ₁⁻³ − (5 + 3√3)λ₂⁻³ ; (2 − √3)λ₁⁻³ − (2 + √3)λ₂⁻³ ].

Since λ₁λ₂ = (2 + √3)(2 − √3) = 1, we have λ₁⁻³ = λ₂³ and λ₂⁻³ = λ₁³, so this equals

−(1/(2√3)) [ (5 − 3√3)(2 − √3)³ − (5 + 3√3)(2 + √3)³ ; (2 − √3)(2 − √3)³ − (2 + √3)(2 + √3)³ ] = −(1/(2√3)) [ −306√3 ; −112√3 ] = [ 153 ; 56 ],

whose (1,1) entry is η = 153, and the equivalent impedance is Z = η/Y₃ = 153.

(b) To find a formula for V_k it helps to use eigenvalues and eigenvectors of A_k, for k = 0, 1, 2. We calculate

0 = | 1 − λ, −2 ; −1, 3 − λ | = (1 − λ)(3 − λ) − 2 = λ² − 4λ + 1 = (λ − 2)² − 3

⇒ eigenvalues are λ₁ = 2 + √3, λ₂ = 2 − √3.

[ A₀ − λ₁I | 0 ] = [ −1 − √3, −2 | 0 ; −1, 1 − √3 | 0 ] ∼ [ 1, −1 + √3 | 0 ; 0, 0 | 0 ], after R₁ ↔ R₂, −R₁ → R₁, (1 + √3)R₁ + R₂ → R₂

⇒ v₁ = c₁ [ 1 − √3 ; 1 ], for any constant c₁ ≠ 0, are the eigenvectors corresponding to eigenvalue λ₁ = 2 + √3.

[ A₀ − λ₂I | 0 ] = [ −1 + √3, −2 | 0 ; −1, 1 + √3 | 0 ] ∼ [ 1, −1 − √3 | 0 ; 0, 0 | 0 ], after R₁ ↔ R₂, −R₁ → R₁, (1 − √3)R₁ + R₂ → R₂

⇒ v₂ = c₁ [ 1 + √3 ; 1 ], for any constant c₁ ≠ 0, are the eigenvectors corresponding to eigenvalue λ₂ = 2 − √3.

So, we can diagonalize A₀: A₀ = PDP⁻¹, where P = [ 1 − √3, 1 + √3 ; 1, 1 ] and D = diag(2 + √3, 2 − √3).

As in Example 5.33, for k = 1, 2, 3, using the fact that A_k = A₀ for k = 0, 1, 2,

x_k = [ V_k ; I_k ] = PD^kP⁻¹ [ V₀ ; I₀ ]
= −(1/(2√3)) [ (1 − √3)λ₁^k − (1 + √3)λ₂^k, 2λ₁^k − 2λ₂^k ; λ₁^k − λ₂^k, −(1 + √3)λ₁^k + (1 − √3)λ₂^k ] [ V₀ ; I₀ ].

Further, using the fact that A₃ = [ 1, −2 ; 0, 1 ], we have

x₄ = A₃A₂A₁A₀ [ V₀ ; I₀ ]
= −(1/(2√3)) [ −(1 + √3)λ₁³ + (1 − √3)λ₂³, (4 + 2√3)λ₁³ + (−4 + 2√3)λ₂³ ; λ₁³ − λ₂³, −(1 + √3)λ₁³ + (1 − √3)λ₂³ ] [ V₀ ; I₀ ]
= −(1/(2√3)) [ −82√3, 224√3 ; 30√3, −82√3 ] [ V₀ ; I₀ ] = [ 41, −112 ; −15, 41 ] [ V₀ ; I₀ ] = [ 41V₀ − 112I₀ ; −15V₀ + 41I₀ ].

From part (a) and (5.101) in the textbook, we have

[ V₀ ; I₀ ] = x₀ = V₄ A₀⁻¹A₁⁻¹A₂⁻¹A₃⁻¹ [ 1 ; Y₃ ] = ... = V₄ [ 153 ; 56 ],

hence I₀ = 56V₄. By definition, η = V₀/V₄, so I₀ = 56V₄ = (56/153)V₀.

From earlier work in part (b), we have, for k = 0, 1, 2, 3,

V_k = −(V₀/(306√3))[ (153(1 − √3) + 112)λ₁^k − (153(1 + √3) + 112)λ₂^k ].

Further, using the fact that in this problem Y₃ = 1,

V₄ = Y₃⁻¹I₃ = −(V₀/(306√3))[ (153 − 56(1 + √3))λ₁³ + (−153 + 56(1 − √3))λ₂³ ]
= −(V₀/(306√3))[ (97 − 56√3)(2 + √3)³ − (97 + 56√3)(2 − √3)³ ] = −(V₀/(306√3))·(−2√3) = V₀/153.

Note that, by definition of η, we also have V₄ = η⁻¹V₀ = V₀/153, which agrees with our final conclusion.

 Vk , for k = 0, ..., 4. Because the input Ik voltage source is sinusoidal, each resistor has impedance Z = R and each capacitor has admittance y = jωC . In this problem, Y0 = Y1 = ... = Y4 = j2ω, and Z0 = Z1 = ... = Z4 = 1. We get that 5.7.7.12. Using the results of Example 5.35, with n = 4, and xk ,

xk+1 = Ak xk , c Larry

Turyn, October 10, 2013

page 91

where for k = 0, 1, 2 

−Zk 1 + Yk Zk

1 −Yk

Ak =



 =

and

 A3 =

−Z4 1

1 0



−1 1 + (j2ω) · 1

1 −(j2ω) 

 =

1 0

−1 1

 =

−1 1 + j2ω

1 −j2ω

 ,

 .

(a) Method 1 : To find the equivalent impedance, using the above, we calculate −1 −1 A−1 0 A1 · · · A3

=



  1 1 + j2ω j2ω 1

1 Y4

1 1

  =

3 

1 −j2ω

1 0

1 1

−1 1 + j2ω



1 j2ω 

=

−1



 =

1 −j2ω

−1 1 + j2ω

−1

−1 1 + j2ω

1 −j2ω

1 + 12jω − 20ω 2 − j8ω 3 6jω − 16ω 2 − j8ω 3

1 + j20ω − 60ω 2 − j56ω 3 + 16ω 4 j8ω − 40ω 2 − j48ω 3 + 16ω 4

−1

1 0

−1 1

−1 

4 + j20ω − 24ω 2 − j8ω 3 1 + j12ω − 20ω 2 − j8ω 3



1 j2ω



1 j2ω



 ,

whose (1, 1) entry is η = (1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3 ), and the equivalent impedance is Z=

 1 η η = −j(2ω)−1 η = (20ω − 56ω 3 ) + j(−1 + 60ω 2 − 16ω 4 ) . = Y4 (j2ω) 2ω

(b) To find a formula for Vk it might help to use eigenvalues and eigenvectors of Ak , for k = 0, 1, 2. We calculate 1−λ −1 = (1 − λ)(1 + j2ω − λ) − (−1) · (−j2ω) = λ2 − 2(1 + jω)λ + 1 0 = | A0 − λI | = −j2ω 1 + j2ω − λ ⇒ eigenvalues are  p p p λ1,2 = (1 + jω) ± (1 + jω)2 − 1 = (1 + jω) ± −1 + j2ω = (1 + jω) ± 1 + 4ω 2 ej π−arctan(2ω) /2 ! p√ p√ p p 1 + 4ω 2 − 1 1 + 4ω 2 − 1 −j arctan(2ω)/2 2 2 = (1 + jω) ± j 1 + 4ω e + j 1/2 = (1 + jω) ± 1 + 4ω 21/2 (1 + 4ω 2 )1/4 2 (1 + 4ω 2 )1/4     q q (1 + 4ω 2 )1/4 p (1 + 4ω 2 )1/4 p 2 2 = 1± 1 + 4ω − 1 + j 1 ± 1 + 4ω + 1 , 21/2 21/2 r q√ 1+ √ 1 q 1+4ω 2 +1 1+4ω 2 arctan(2ω)  1+cos(arctan(2ω) jπ/2 −jθ = = = 21/2 (1+4ω2 )1/4 , and after using e = j, e = cos θ − j sin θ, cos 2 2 2 q√ 1− √ 1 1+4ω 2 −1 1+4ω 2 = = = . 2 21/2 (1+4ω 2 )1/4 The eigenvalues are complicated, and the eigenvectors are likely to be more complicated. In this problem it would probably not a good idea to use eigenvalues and eigenvectors to find Vk for k = 1, ..., 4. Because we only need to find four values of Vk , it is practical to find them one at a time using matrix multiplication(s):          V1 V0 1 −1 V0 V0 − I0 x1 = = A0 = = , I1 I0 −j2ω 1 + j2ω I0 −j2ωV0 + (1 + j2ω)I0          V2 V1 1 −1 V1 V1 − I1 x2 = = A1 = = I2 I1 −j2ω 1 + j2ω I1 −j2ωV1 + (1 + j2ω)I1     V0 − I0 − (−j2ωV0 + (1 + j2ω)I0 ) (1 + j2ω)V0 − (2 + j2ω)I0 = = , 2 2 −j2ω(V0 − I0 ) + (1 + j2ω)(−j2ωV0 + (1 + j2ω)I0 ) (4ω − 4jω)V0 + (1 − 4ω + 6jω)I0        V3 V2 1 −1 V2 x3 = = A2 = I3 I2 −j2ω 1 + j2ω I2

r

sin

arctan(2ω)  2

q

1−cos(arctan)2ω) 2

c Larry

Turyn, October 10, 2013

page 92

 2 (1 − 4ω 2 + 6jω)V  0 + (−3 + 4ω2 − 8jω)I0  , 2 2 16ω + j2ω(−3 + 4ω ) V0 + 1 − 20ω + j2ω(6 − 4ω ) I0          V4 V3 1 −1 V3 V3 − I3 x4 = = A3 = = I4 I3 0 1 I3 I3     2 2 2 2 1 − 20ω + j2ω(6 − 4ω ) V0 + − 4 + 24ω + j2ω(−10 + 4ω ) I0  = . 16ω 2 + j2ω(−3 + 4ω 2 ) V0 + 1 − 20ω 2 + j2ω(6 − 4ω 2 ) I0 

= ... =

2

From part (a) and (5.101) in the textbook, we have       V0 1 1 + j20ω − 60ω 2 − j56ω 3 + 16ω 4 −1 −1 = x0 = V4 A−1 A · · · A = ... = V , 4 0 1 3 I0 Y4 j8ω − 40ω 2 − j48ω 3 + 16ω 4  hence I0 = j8ω − 40ω 2 − j48ω 3 + 16ω 4 V4 . Furthermore, by definition, η = VV40 , so  j8ω − 40ω 2 − j48ω 3 + 16ω 4 I0 = j8ω − 40ω − j48ω + 16ω V4 = V0 . (1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3 ) 2

3

4

Have to check from here on (8/29/13) So,

V1 = V0 − I0 =

1−

 ! j8ω − 40ω 2 − j48ω 3 + 16ω 4 1 − 20ω 2 + j2ω(6 − 4ω 2 ) V0 = V0 2 4 3 (1 − 60ω + 16ω ) + j(20ω − 56ω ) (1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3 )

  1 − 20ω 2 − j2ω(6 − 4ω 2 ) 1 − 60ω 2 + 16ω 4 − j(20ω − 56ω 3 ) V0 (1 − 60ω 2 + 16ω 4 )2 + (20ω − 56ω 3 )2 Using Mathematica to do algebra, we have =

1 · 1 + 280ω 2 + 1392ω 4 + 1216ω 6 + 256ω 8    · 1 + 160ω 2 + 384ω 4 + 128ω 6 + j − 8 − 272ω 3 − 448ω 5 − 128ω 7 .

V1 =

Next, V2 = (1 + j2ω)V0 − (2 + j2ω)I0 =

(1 + j2ω)



 ! j8ω − 40ω 2 − j48ω 3 + 16ω 4 1 + j2ω − (2 + j2ω) V0 (1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3 )

1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3

=



− (2 + j2ω) j8ω − 40ω 2 − j48ω 3 + 16ω 4

 V0

(1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3 )

  1 − 4ω 2 + j6ω 1 − 60ω 2 + 16ω 4 − j(20ω − 56ω 3 ) 1 − 4ω 2 + j6ω = V0 = V0 (1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3 ) (1 − 60ω 2 + 16ω 4 )2 + (20ω − 56ω 3 )2    1 = 1 + 56ω 2 − 80ω 4 − 64ω 6 + j − 14ω − 224ω 3 − 128ω 5 V0 . 2 4 6 8 1 + 280ω + 1392ω + 1216ω + 256ω Next, V3 = (1 − 4ω 2 + 6jω)V0 + (−3 + 4ω 2 − 8jω)I0 =

 ! j8ω − 40ω 2 − j48ω 3 + 16ω 4 1 − 4ω + 6jω + (−3 + 4ω − 8jω) V0 (1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3 )

(1 − 4ω 2 + 6jω) =

2



2

1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3



+ (−3 + 4ω 2 − 8jω) j8ω − 40ω 2 − j48ω 3 + 16ω 4

V0

(1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3 )

=



\[
\frac{1 - 368\omega^2 + 1184\omega^4 - 128\omega^6 + j\,(50\omega - 992\omega^3 + 640\omega^5)}{(1 - 60\omega^2 + 16\omega^4) + j\,(20\omega - 56\omega^3)}\, V_0
\]

  1 − 368ω 2 + 1184ω 4 − 128ω 6 + j(50ω − 992ω 3 + 640ω 5 ) (1 − 60ω 2 + 16ω 4 ) − j(20ω − 56ω 3 ) = V0 (1 − 60ω 2 + 16ω 4 )2 + (20ω − 56ω 3 )2

=

1 · 1 + 280ω 2 + 1392ω 4 + 1216ω 6 + 256ω 8   · 1 + 117ω 2 − 3565ω 4 − 4912ω 6 − 1376ω 8 − 128ω 10 + jω 40 + 1945ω 2 + 1154ω 4 − 1024ω 6 − 256ω 8 V0 .

Finally, V4 =

1 1 V0 = V0 η (1 − 60ω 2 + 16ω 4 ) + j(20ω − 56ω 3 ) =

1+

280ω 2

  1 1 − 60ω 2 + 16ω 4 − jω(20 − 56ω 2 V0 . 4 6 8 + 1392ω + 1216ω + 256ω



5.7.7.13. Define
\[
A = \begin{pmatrix} 1 & 4 \\ 1 & -1 \end{pmatrix}
\quad\text{and}\quad
f_k = \left(\tfrac12\right)^k\begin{pmatrix}3\\1\end{pmatrix} = \left(\tfrac12\right)^k w, \ \text{where } w = \begin{pmatrix}3\\1\end{pmatrix}.
\]
First, we will use a method of undetermined coefficients analogous to a method for non-resonant non-homogeneous linear systems of ODEs: Let's try for a particular solution of \(x_{p,k+1} = Ax_{p,k} + f_k\) in the form \(x_{p,k} = \left(\tfrac12\right)^k a\), where \(a\) is a constant vector. We substitute \(x_{p,k}\) into the non-homogeneous system to get
\[
\left(\tfrac12\right)^{k+1}a = x_{p,k+1} = Ax_{p,k} + f_k = \left(\tfrac12\right)^k Aa + \left(\tfrac12\right)^k w.
\]
Similar to work in Example 5.26 in Section 5.5, we get \(\tfrac12 a = Aa + w\), hence \(a = -\left(A - \tfrac12 I\right)^{-1}w\), as long as \(A - \tfrac12 I\) is invertible. Here, this gives
\[
a = -\begin{pmatrix} \tfrac12 & 4 \\ 1 & -\tfrac32 \end{pmatrix}^{-1}\begin{pmatrix}3\\1\end{pmatrix}
= \frac{4}{19}\begin{pmatrix} -\tfrac32 & -4 \\ -1 & \tfrac12 \end{pmatrix}\begin{pmatrix}3\\1\end{pmatrix}
= -\frac{2}{19}\begin{pmatrix}17\\5\end{pmatrix}.
\]
So, a particular solution is given by
\[
x_{p,k} = -\frac{2}{19}\left(\tfrac12\right)^k\begin{pmatrix}17\\5\end{pmatrix}.
\]
We can use eigenvalues and eigenvectors to find the general solution of the corresponding homogeneous system of difference equations (LCCHS∆) \(x_{k+1} = Ax_k\):
\[
0 = \begin{vmatrix} 1-\lambda & 4 \\ 1 & -1-\lambda \end{vmatrix} = (1-\lambda)(-1-\lambda) - 4 = \lambda^2 - 5
\]
⇒ eigenvalues are \(\lambda_1 = -\sqrt5\), \(\lambda_2 = \sqrt5\).
\[
[\,A-\lambda_1 I \mid 0\,] = \begin{pmatrix} 1+\sqrt5 & 4 &|& 0 \\ 1 & -1+\sqrt5 &|& 0 \end{pmatrix}
\sim \begin{pmatrix} 1 & -1+\sqrt5 &|& 0 \\ 0 & 0 &|& 0 \end{pmatrix},
\ \text{after } R_1 \leftrightarrow R_2,\ -(1+\sqrt5)R_1 + R_2 \to R_2,
\]
⇒ \(v_1 = c_1\begin{pmatrix}1-\sqrt5\\1\end{pmatrix}\), for any constant \(c_1 \ne 0\), are the eigenvectors corresponding to eigenvalue \(\lambda_1 = -\sqrt5\). Similarly,
\[
[\,A-\lambda_2 I \mid 0\,] \sim \begin{pmatrix} 1 & -1-\sqrt5 &|& 0 \\ 0 & 0 &|& 0 \end{pmatrix},
\ \text{after } R_1 \leftrightarrow R_2,\ -(1-\sqrt5)R_1 + R_2 \to R_2,
\]
⇒ \(v_2 = c_1\begin{pmatrix}1+\sqrt5\\1\end{pmatrix}\), for any constant \(c_1 \ne 0\), are the eigenvectors corresponding to eigenvalue \(\lambda_2 = \sqrt5\).
The general solution of the corresponding homogeneous system is
\[
x_k = c_1(-\sqrt5)^k\begin{pmatrix}1-\sqrt5\\1\end{pmatrix} + c_2(\sqrt5)^k\begin{pmatrix}1+\sqrt5\\1\end{pmatrix}, \ \text{where } c_1, c_2 = \text{arbitrary constants.}
\]
The general solution of the original problem is
\[
x_k = -\frac{2}{19}\left(\tfrac12\right)^k\begin{pmatrix}17\\5\end{pmatrix} + c_1(-\sqrt5)^k\begin{pmatrix}1-\sqrt5\\1\end{pmatrix} + c_2(\sqrt5)^k\begin{pmatrix}1+\sqrt5\\1\end{pmatrix}, \ \text{where } c_1, c_2 = \text{arbitrary constants.}
\]
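The value of \(a\) and the eigenvalue computation in 5.7.7.13 can be cross-checked numerically. The following is an illustrative check in Python with NumPy; the check itself is not part of the manual.

```python
import numpy as np

# Problem 5.7.7.13: x_{k+1} = A x_k + (1/2)^k w has a particular solution
# x_{p,k} = (1/2)^k a with a = -(A - (1/2)I)^{-1} w, since A - (1/2)I is invertible.
A = np.array([[1.0, 4.0], [1.0, -1.0]])
w = np.array([3.0, 1.0])

a = -np.linalg.solve(A - 0.5 * np.eye(2), w)
print(a)  # approximately [-1.7895, -0.5263], i.e. -(2/19)*(17, 5)

# a satisfies (1/2) a = A a + w, so x_{p,k} = (1/2)^k a solves the recurrence:
assert np.allclose(0.5 * a, A @ a + w)
for k in range(6):
    assert np.allclose(0.5 ** (k + 1) * a, A @ (0.5 ** k * a) + 0.5 ** k * w)

# Homogeneous part: the eigenvalues of A are the roots of lambda^2 - 5 = 0.
assert np.allclose(np.sort(np.linalg.eigvals(A)), [-np.sqrt(5.0), np.sqrt(5.0)])
```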

5.7.7.14. Define
\[
A = \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}
\quad\text{and}\quad
f_k = \cos(2k)\begin{pmatrix}3\\1\end{pmatrix} = \cos(2k)\,w, \ \text{where } w = \begin{pmatrix}3\\1\end{pmatrix}.
\]
First, we will use a method of undetermined coefficients analogous to a method for non-resonant non-homogeneous linear systems of ODEs: Let's try for a particular solution of the complexification \(\tilde x_{p,k+1} = A\tilde x_{p,k} + e^{i2k}w\) in the form \(\tilde x_{p,k} = (e^{i2})^k a\), where \(a\) is a constant vector. We substitute \(\tilde x_{p,k}\) into the non-homogeneous system to get
\[
(e^{i2})^{k+1}a = (e^{i2})^k Aa + (e^{i2})^k w.
\]
Similar to work in Example 5.26 in Section 5.5, we get \(e^{i2}a = Aa + w\), hence \(a = -\left(A - e^{i2}I\right)^{-1}w\), as long as \(A - e^{i2}I\) is invertible. Here, this gives
\[
a = -\begin{pmatrix} 1-e^{i2} & 2 \\ 2 & -1-e^{i2} \end{pmatrix}^{-1}\begin{pmatrix}3\\1\end{pmatrix}
= -\frac{1}{-1+e^{i4}-4}\begin{pmatrix} -1-e^{i2} & -2 \\ -2 & 1-e^{i2} \end{pmatrix}\begin{pmatrix}3\\1\end{pmatrix}
= \frac{1}{5-e^{i4}}\begin{pmatrix} -5-3e^{i2} \\ -5-e^{i2} \end{pmatrix}.
\]
Because \(\cos(2k) = \operatorname{Re}(e^{i2k})\), a particular solution is given by \(x_{p,k} = \operatorname{Re}(\tilde x_{p,k})\). First, let's simplify
\[
\tilde x_{p,k} = \frac{e^{i2k}}{5-e^{i4}}\begin{pmatrix} -5-3e^{i2} \\ -5-e^{i2} \end{pmatrix}
= \frac{e^{i2k}(5-e^{-i4})}{(5-e^{-i4})(5-e^{i4})}\begin{pmatrix} -5-3e^{i2} \\ -5-e^{i2} \end{pmatrix}
= \frac{e^{i2k}}{26-10\cos(4)}\begin{pmatrix} -25+5e^{-i4}-15e^{i2}+3e^{-i2} \\ -25+5e^{-i4}-5e^{i2}+e^{-i2} \end{pmatrix}
\]
\[
= \frac{\cos(2k)+i\sin(2k)}{26-10\cos(4)}\begin{pmatrix}
\bigl(-25+5\cos(4)-12\cos(2)\bigr) + i\bigl(-5\sin(4)-18\sin(2)\bigr) \\
\bigl(-25+5\cos(4)-4\cos(2)\bigr) + i\bigl(-5\sin(4)-6\sin(2)\bigr)
\end{pmatrix},
\]
using the fact that \(e^{i2} = \cos(2)+i\sin(2)\), etc. Taking real parts,
\[
x_{p,k} = \frac{1}{26-10\cos(4)}\begin{pmatrix}
\bigl(-25+5\cos(4)-12\cos(2)\bigr)\cos(2k) - \bigl(-5\sin(4)-18\sin(2)\bigr)\sin(2k) \\
\bigl(-25+5\cos(4)-4\cos(2)\bigr)\cos(2k) - \bigl(-5\sin(4)-6\sin(2)\bigr)\sin(2k)
\end{pmatrix}.
\]
We can use eigenvalues and eigenvectors to find the general solution of the corresponding homogeneous system of difference equations (LCCHS∆) \(x_{k+1} = Ax_k\):
\[
0 = \begin{vmatrix} 1-\lambda & 2 \\ 2 & -1-\lambda \end{vmatrix} = (1-\lambda)(-1-\lambda) - 4 = \lambda^2 - 5
\]
⇒ eigenvalues are \(\lambda_1 = -\sqrt5\), \(\lambda_2 = \sqrt5\). Row reduction of \([\,A-\lambda_{1,2}I \mid 0\,]\) (after \(R_1 \leftrightarrow R_2\), \(\tfrac12 R_1 \to R_1\), and \(-(1+\sqrt5)R_1 + R_2 \to R_2\), respectively \(-(1-\sqrt5)R_1 + R_2 \to R_2\)) gives the eigenvectors
\[
v_1 = c_1\begin{pmatrix}1-\sqrt5\\2\end{pmatrix} \ \text{for } \lambda_1 = -\sqrt5,
\qquad
v_2 = c_1\begin{pmatrix}1+\sqrt5\\2\end{pmatrix} \ \text{for } \lambda_2 = \sqrt5,
\]
for any constant \(c_1 \ne 0\). The general solution of the corresponding homogeneous system is
\[
x_k = c_1(-\sqrt5)^k\begin{pmatrix}1-\sqrt5\\2\end{pmatrix} + c_2(\sqrt5)^k\begin{pmatrix}1+\sqrt5\\2\end{pmatrix}, \ \text{where } c_1, c_2 = \text{arbitrary constants.}
\]

The general solution of the original problem is
\[
x_k = \frac{1}{26-10\cos(4)}\begin{pmatrix}
\bigl(-25+5\cos(4)-12\cos(2)\bigr)\cos(2k) - \bigl(-5\sin(4)-18\sin(2)\bigr)\sin(2k) \\
\bigl(-25+5\cos(4)-4\cos(2)\bigr)\cos(2k) - \bigl(-5\sin(4)-6\sin(2)\bigr)\sin(2k)
\end{pmatrix}
+ c_1(-\sqrt5)^k\begin{pmatrix}1-\sqrt5\\2\end{pmatrix} + c_2(\sqrt5)^k\begin{pmatrix}1+\sqrt5\\2\end{pmatrix},
\]
where \(c_1, c_2\) = arbitrary constants.

5.7.7.15. Define
\[
A = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 3 & 1 \\ -1 & -2 & 5 \end{pmatrix}
\quad\text{and}\quad
f_k = \left(\tfrac12\right)^k\begin{pmatrix}1\\2\\3\end{pmatrix} = \left(\tfrac12\right)^k w, \ \text{where } w = \begin{pmatrix}1\\2\\3\end{pmatrix}.
\]
First, we will use a method of undetermined coefficients analogous to a method for non-resonant non-homogeneous linear systems of ODEs: Let's try for a particular solution of \(x_{p,k+1} = Ax_{p,k} + f_k\) in the form \(x_{p,k} = \left(\tfrac12\right)^k a\), where \(a\) is a constant vector. Substituting \(x_{p,k}\) into the non-homogeneous system gives, as before, \(\tfrac12 a = Aa + w\), hence \(a = -\left(A-\tfrac12 I\right)^{-1}w\), as long as \(A-\tfrac12 I\) is invertible. Here, this gives
\[
a = -\begin{pmatrix} \tfrac12 & 0 & 0 \\ 2 & \tfrac52 & 1 \\ -1 & -2 & \tfrac92 \end{pmatrix}^{-1}\begin{pmatrix}1\\2\\3\end{pmatrix}
= \frac{1}{53}\begin{pmatrix}-106\\56\\-34\end{pmatrix}.
\]
So, a particular solution is given by
\[
x_{p,k} = \frac{1}{53}\left(\tfrac12\right)^k\begin{pmatrix}-106\\56\\-34\end{pmatrix}.
\]
We can use eigenvalues and eigenvectors to find the general solution of the corresponding homogeneous system of difference equations (LCCHS∆) \(x_{k+1} = Ax_k\):
\[
0 = |A-\lambda I| = (1-\lambda)\begin{vmatrix} 3-\lambda & 1 \\ -2 & 5-\lambda \end{vmatrix}
= (1-\lambda)\bigl((3-\lambda)(5-\lambda)+2\bigr) = (1-\lambda)\bigl(\lambda^2-8\lambda+17\bigr)
= (1-\lambda)\bigl((\lambda-4)^2+1\bigr)
\]
⇒ eigenvalues are \(\lambda_1 = 1\), \(\lambda_2 = 4+i\), \(\lambda_3 = 4-i\).

For \(\lambda_1 = 1\),
\[
[\,A-\lambda_1 I \mid 0\,] = \begin{pmatrix} 0&0&0&|&0 \\ 2&2&1&|&0 \\ -1&-2&4&|&0 \end{pmatrix}
\sim \begin{pmatrix} 1&0&5&|&0 \\ 0&1&-\tfrac92&|&0 \\ 0&0&0&|&0 \end{pmatrix},
\]
after \(R_1 \leftrightarrow R_3\), \(-R_1 \to R_1\), \(-2R_1+R_2 \to R_2\), followed by \(-\tfrac12 R_2 \to R_2\), \(-2R_2+R_1 \to R_1\),
⇒ \(v_1 = c_1\begin{pmatrix}-10\\9\\2\end{pmatrix}\), for any constant \(c_1 \ne 0\), are the eigenvectors corresponding to eigenvalue \(\lambda_1 = 1\). A solution of the corresponding homogeneous system of difference equations is given by \(x_k^{(1)} = 1^k\,(-10,9,2)^T = (-10,9,2)^T\).

For \(\lambda_2 = 4+i\),
\[
[\,A-\lambda_2 I \mid 0\,] = \begin{pmatrix} -3-i&0&0&|&0 \\ 2&-1-i&1&|&0 \\ -1&-2&1-i&|&0 \end{pmatrix}
\sim \begin{pmatrix} 1&0&0&|&0 \\ 0&1&\tfrac{-1+i}{2}&|&0 \\ 0&0&0&|&0 \end{pmatrix}
\]
⇒ \(v_2 = c_1\begin{pmatrix}0\\1-i\\2\end{pmatrix}\), for any constant \(c_1 \ne 0\), are the eigenvectors corresponding to eigenvalue \(\lambda_2 = 4+i\). Because the eigenvalues \(\lambda_2, \lambda_3\) are a complex conjugate pair, we don't need to find eigenvectors corresponding to \(\lambda_3 = \overline{\lambda_2}\).

As in Example 5.33, we calculate that \(\lambda_2 = 4+i = \alpha + i\nu = \rho(\cos\omega + i\sin\omega)\), where
\[
\rho = \sqrt{4^2+1^2} = \sqrt{17} \quad\text{and}\quad \tan\omega = \tfrac14.
\]
Because \((\alpha,\nu) = (4,1)\) is in the first quadrant, \(\omega = \arctan\tfrac14\). We have
\[
\rho^k(\cos\omega k + i\sin\omega k)v_2 = 17^{k/2}\begin{pmatrix} 0 \\ \bigl(\cos(k\arctan\tfrac14)+\sin(k\arctan\tfrac14)\bigr) + i\bigl(\sin(k\arctan\tfrac14)-\cos(k\arctan\tfrac14)\bigr) \\ 2\cos(k\arctan\tfrac14) + 2i\sin(k\arctan\tfrac14) \end{pmatrix}.
\]
As in the discussion before Example 5.33, there are solutions
\[
x_k^{(2)} = 17^{k/2}\begin{pmatrix} 0 \\ \cos(k\arctan\tfrac14)+\sin(k\arctan\tfrac14) \\ 2\cos(k\arctan\tfrac14) \end{pmatrix}
\quad\text{and}\quad
x_k^{(3)} = 17^{k/2}\begin{pmatrix} 0 \\ -\cos(k\arctan\tfrac14)+\sin(k\arctan\tfrac14) \\ 2\sin(k\arctan\tfrac14) \end{pmatrix}.
\]
The general solution of the corresponding homogeneous system is \(x_k = c_1 x_k^{(1)} + c_2 x_k^{(2)} + c_3 x_k^{(3)}\), where \(c_1, c_2, c_3\) = arbitrary constants. The general solution of the original problem is
\[
x_k = \frac{1}{53}\left(\tfrac12\right)^k\begin{pmatrix}-106\\56\\-34\end{pmatrix} + c_1\begin{pmatrix}-10\\9\\2\end{pmatrix} + c_2\,x_k^{(2)} + c_3\,x_k^{(3)},
\]
where \(c_1, c_2, c_3\) = arbitrary constants.
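The computation in 5.7.7.15 can be cross-checked numerically. The following is an illustrative sketch in Python with NumPy, not part of the manual:

```python
import numpy as np

# Problem 5.7.7.15: check a = -(A - (1/2)I)^{-1} w and the eigenvalue computation.
A = np.array([[1.0, 0.0, 0.0],
              [2.0, 3.0, 1.0],
              [-1.0, -2.0, 5.0]])
w = np.array([1.0, 2.0, 3.0])

a = -np.linalg.solve(A - 0.5 * np.eye(3), w)
assert np.allclose(a, np.array([-106.0, 56.0, -34.0]) / 53.0)

# x_{p,k} = (1/2)^k a satisfies the recurrence because (1/2) a = A a + w:
assert np.allclose(0.5 * a, A @ a + w)

# Eigenvalues are 1 and 4 +/- i, from (1 - l)((l - 4)^2 + 1) = 0:
evals = sorted(np.linalg.eigvals(A).astype(complex), key=lambda z: (z.real, z.imag))
assert np.allclose(evals, [1.0, 4.0 - 1.0j, 4.0 + 1.0j])

# The complex pair has modulus sqrt(17) and argument arctan(1/4), which is the
# source of the 17^(k/2) and k*arctan(1/4) factors in the real solutions:
lam = 4.0 + 1.0j
assert np.isclose(abs(lam), np.sqrt(17.0))
assert np.isclose(np.angle(lam), np.arctan(0.25))
```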

5.7.7.17. A's eigenvalues \(\tfrac{\sqrt3}{2} \pm \tfrac{i}{2} = e^{\pm i\pi/6}\) have modulus one, so the LCCHS∆ (?) \(x_{k+1} = Ax_k\) cannot be asymptotically stable.

(a) The system may be neutrally stable: if the repeated eigenvalue \(\tfrac{\sqrt3}{2} + \tfrac{i}{2}\) is not deficient, then the system is neutrally stable.

(b) The system may be unstable: if the repeated eigenvalue \(\tfrac{\sqrt3}{2} + \tfrac{i}{2}\) is deficient, then the system is unstable.

(c) By the reasoning in parts (a) and (b) combined, the system may be neutrally stable, depending upon more information concerning A.

(d) A's eigenvalue \(\lambda = \tfrac{\sqrt3}{2} + \tfrac{i}{2} = e^{i\pi/6}\) satisfies \(\lambda^6 = (e^{i\pi/6})^6 = e^{i\pi} = -1\) and \(\lambda^{12} = (\lambda^6)^2 = (-1)^2 = 1\). It follows that (?) has solutions \(x_k\) that are periodic in k with period 12. But there is no eigenvalue \(\mu\) satisfying \(\mu^6 = 1\), so (?) does not have solutions \(x_k\) that are periodic in k with period 6. So it must be false to claim that "(?) has solutions \(x_k\) that are periodic in k with period 6, that is, \(x_{k+6} \equiv x_k\)."

5.7.7.18. We are given that A has an eigenvalue \(\lambda\) with \(|\lambda| = 1\) that is deficient, hence there is an eigenvector v and a generalized eigenvector w with \((A-\lambda I)v = 0\) and \((A-\lambda I)w = v\). Define, analogously to the ODE case, \(x_k = \lambda^k(kv + u)\), where u will be determined later. It follows that
\[
x_{k+1} - Ax_k = \lambda^{k+1}\bigl((k+1)v + u\bigr) - A\bigl(\lambda^k(kv + u)\bigr) = \lambda^k\bigl(\lambda(k+1)v + \lambda u - kAv - Au\bigr),
\]
so
\[
\lambda^{-k}\bigl(x_{k+1} - Ax_k\bigr) = -k(A-\lambda I)v + \lambda v - (A-\lambda I)u = k\cdot 0 + \lambda v - (A-\lambda I)u = \lambda v - (A-\lambda I)u.
\]
If we choose \(u = \lambda w\), then \((A-\lambda I)u = \lambda(A-\lambda I)w = \lambda v\). It follows that \(x_k = \lambda^k(kv + \lambda w)\) satisfies the LCCHS∆ (?) \(x_{k+1} = Ax_k\). Because v is an eigenvector of A, \(v \ne 0\). From this and the fact that \(|\lambda| = 1\), it follows that \(||x_k|| = ||kv + \lambda w|| \ge k\,||v|| - ||w|| \to \infty\) as \(k \to \infty\), hence the LCCHS∆ \(x_{k+1} = Ax_k\) is unstable and thus not neutrally stable.

5.7.7.19. Equivalent to the scalar difference equation \(y_{k+2} = a_{1,k}y_{k+1} + a_{2,k}y_k\) is the linear homogeneous system of difference equations (?) \(x_{k+1} = A_k x_k\), where \(x_k = [\,y_k \ \ y_{k+1}\,]^T\) and, for all \(\ell\),
\[
A_\ell = \begin{pmatrix} 0 & 1 \\ a_{2,\ell} & a_{1,\ell} \end{pmatrix}
\]
is in companion form. Abel's Theorem 5.19 says that if \(x_k^{(1)}\) and \(x_k^{(2)}\) solve (?) and \(C(x_k^{(1)}, x_k^{(2)}) := \det\bigl[\,x_k^{(1)} \mid x_k^{(2)}\,\bigr]\), then
\[
C(x_k^{(1)}, x_k^{(2)}) = \left(\prod_{\ell=0}^{k-1} |A_\ell|\right) C(x_0^{(1)}, x_0^{(2)})
\]

for any k ≥ 1.

Let's restate this in terms of \(x_k^{(j)} = [\,y_k^{(j)} \ \ y_{k+1}^{(j)}\,]^T\), for j = 1, 2: We have that
\[
C(x_k^{(1)}, x_k^{(2)}) = \begin{vmatrix} y_k^{(1)} & y_k^{(2)} \\ y_{k+1}^{(1)} & y_{k+1}^{(2)} \end{vmatrix},
\]
so
\[
\begin{vmatrix} y_k^{(1)} & y_k^{(2)} \\ y_{k+1}^{(1)} & y_{k+1}^{(2)} \end{vmatrix}
= \left(\prod_{\ell=0}^{k-1} |A_\ell|\right)\begin{vmatrix} y_0^{(1)} & y_0^{(2)} \\ y_1^{(1)} & y_1^{(2)} \end{vmatrix}
= \left(\prod_{\ell=0}^{k-1} (-a_{2,\ell})\right)\begin{vmatrix} y_0^{(1)} & y_0^{(2)} \\ y_1^{(1)} & y_1^{(2)} \end{vmatrix},
\]
which is (4.68) in the textbook if n = 2. This explains why Abel's Theorem 4.15 (in Section 4.6) for the scalar difference equation \(y_{k+2} = a_{1,k}y_{k+1} + a_{2,k}y_k\) follows from Abel's Theorem 5.19 for the system (?) \(x_{k+1} = A_k x_k\).

5.7.7.22. The system of problem 5.7.7.1 has eigenvalues \(\lambda_1 = -\sqrt{13}\), \(\lambda_2 = \sqrt{13}\). Because \(|\lambda_1| = \sqrt{13} > 1\), or because \(|\lambda_2| = \sqrt{13} > 1\), the system is unstable.

5.7.7.23. The system of problem 5.7.7.3 has eigenvalues \(\lambda_1 = -1\), \(\lambda_2 = 4\). Because \(|\lambda_2| = 4 > 1\), the system is unstable.

5.7.7.24. The system of problem 5.7.7.4 has eigenvalues \(\lambda_1 = -1+i2\), \(\lambda_2 = -1-i2\). Because \(|\lambda_1| = \sqrt{(-1)^2 + 2^2} = \sqrt5 > 1\), or because \(|\lambda_2| = \sqrt{(-1)^2 + (-2)^2} = \sqrt5 > 1\), the system is unstable.

5.7.7.25. The system of problem 5.7.7.5 has eigenvalues \(\lambda_1 = i\), \(\lambda_2 = -i\). Because \(|\lambda_1| = |\lambda_2| = 1\) and the eigenvalues of A are not deficient (because the 2 × 2 matrix has two distinct eigenvalues), the system is neutrally stable.




5.7.7.26.
\[
0 = \begin{vmatrix} 1-\lambda & 1 \\ 1 & -1-\lambda \end{vmatrix} = (1-\lambda)(-1-\lambda) - 1 = \lambda^2 - 2 = (\lambda+\sqrt2)(\lambda-\sqrt2),
\]
so the system of problem 5.7.7.26 has eigenvalues \(\lambda_1 = -\sqrt2\), \(\lambda_2 = \sqrt2\). Because \(|\lambda_1| = \sqrt2 > 1\), or because \(|\lambda_2| = \sqrt2 > 1\), the system is unstable.

5.7.7.27. Expanding the determinant along the third row,
\[
0 = |A - \lambda I| = \left(\tfrac12 - \lambda\right)\left(\left(\tfrac12 - \lambda\right)^2 - \left(\tfrac{1}{\sqrt6}\right)^2\right),
\]
so the system of problem 5.7.7.27 has eigenvalues \(\lambda_1 = \tfrac12\), \(\lambda_2 = \tfrac12 - \tfrac{1}{\sqrt6}\), and \(\lambda_3 = \tfrac12 + \tfrac{1}{\sqrt6}\). Because \(|\lambda_1| = \tfrac12 < 1\), \(|\lambda_2| = \tfrac12 - \tfrac{1}{\sqrt6} < \tfrac12 < 1\), and \(|\lambda_3| = \tfrac12 + \tfrac{1}{\sqrt6} < \tfrac12 + \tfrac12 < 1\), the system is asymptotically stable.
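The stability verdicts in problems 5.7.7.22 through 5.7.7.27 all come from one test on the moduli of the eigenvalues. The helper below (an illustration in Python with NumPy, not from the manual; the matrix standing in for 5.7.7.25 is any matrix with the eigenvalues reported there) packages that test, and also exhibits the phenomenon of problem 5.7.7.18, where a deficient modulus-one eigenvalue gives unbounded solutions.

```python
import numpy as np

# Eigenvalue-modulus stability test for x_{k+1} = A x_k with constant A:
# asymptotically stable iff all |lambda| < 1; unstable if some |lambda| > 1;
# a modulus-one eigenvalue needs a separate deficiency check, which is only flagged.
def classify(A, tol=1e-9):
    moduli = np.abs(np.linalg.eigvals(A))
    if np.all(moduli < 1.0 - tol):
        return "asymptotically stable"
    if np.any(moduli > 1.0 + tol):
        return "unstable"
    return "check deficiency of modulus-one eigenvalues"

# 5.7.7.26's matrix has eigenvalues +/- sqrt(2), so the system is unstable:
assert classify(np.array([[1.0, 1.0], [1.0, -1.0]])) == "unstable"

# A matrix with eigenvalues +/- i (the values reported in 5.7.7.25) is flagged:
assert classify(np.array([[0.0, 1.0], [-1.0, 0.0]])) == "check deficiency of modulus-one eigenvalues"

# Problem 5.7.7.18's phenomenon: a deficient modulus-one eigenvalue (a Jordan
# block with lambda = 1) produces solutions whose norms grow like k:
J = np.array([[1.0, 1.0], [0.0, 1.0]])
norms = [np.linalg.norm(np.linalg.matrix_power(J, k) @ np.array([0.0, 1.0]))
         for k in (1, 10, 100)]
assert norms[0] < norms[1] < norms[2]
```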

Section 5.8.6

5.8.6.1. (a) \(Z(t) = \begin{pmatrix} e^{t-\cos t} & 0 \\ e^t & 1 \end{pmatrix}\), so
\[
\dot Z(t) = \begin{pmatrix} (1+\sin t)\,e^{t-\cos t} & 0 \\ e^t & 0 \end{pmatrix},
\]
and
\[
A(t)Z(t) = \begin{pmatrix} 1+\sin t & 0 \\ e^{\cos t} & 0 \end{pmatrix}\begin{pmatrix} e^{t-\cos t} & 0 \\ e^t & 1 \end{pmatrix}
= \begin{pmatrix} (1+\sin t)\,e^{t-\cos t} & 0 \\ e^{\cos t}\,e^{t-\cos t} & 0 \end{pmatrix} = \dot Z(t).
\]
Also, \(\det Z(t) = e^{t-\cos t}\) is never zero. By Theorem 5.5 in Section 5.2, Z(t) is a fundamental matrix for (?) \(\dot x = A(t)x\).

(b) The principal fundamental matrix at t = 0 is
\[
X(t) = Z(t)\bigl(Z(0)\bigr)^{-1}
= \begin{pmatrix} e^{t-\cos t} & 0 \\ e^t & 1 \end{pmatrix}\begin{pmatrix} e^{-1} & 0 \\ 1 & 1 \end{pmatrix}^{-1}
= \begin{pmatrix} e^{t-\cos t} & 0 \\ e^t & 1 \end{pmatrix}\begin{pmatrix} e & 0 \\ -e & 1 \end{pmatrix}
= \begin{pmatrix} e^{1+t-\cos t} & 0 \\ e\,(e^t - 1) & 1 \end{pmatrix}.
\]
In this problem, the period of the coefficient matrix \(A(t) = \begin{pmatrix} 1+\sin t & 0 \\ e^{\cos t} & 0 \end{pmatrix}\) is T = 2π. The monodromy matrix is
\[
X(2\pi) = \begin{pmatrix} e^{1+2\pi-\cos 2\pi} & 0 \\ e\,(e^{2\pi}-1) & 1 \end{pmatrix}
= \begin{pmatrix} e^{2\pi} & 0 \\ e\,(e^{2\pi}-1) & 1 \end{pmatrix}.
\]
Its eigenvalues are the characteristic multipliers \(\mu\), which satisfy
\[
0 = \begin{vmatrix} e^{2\pi}-\mu & 0 \\ e\,(e^{2\pi}-1) & 1-\mu \end{vmatrix} = (e^{2\pi}-\mu)(1-\mu),
\]
hence \(\mu_1 = 1\) and \(\mu_2 = e^{2\pi}\). We also need to find the corresponding eigenvectors.
\[
[\,X(2\pi)-\mu_1 I \mid 0\,] = \begin{pmatrix} e^{2\pi}-1 & 0 &|& 0 \\ e\,(e^{2\pi}-1) & 0 &|& 0 \end{pmatrix}
\sim \begin{pmatrix} 1 & 0 &|& 0 \\ 0 & 0 &|& 0 \end{pmatrix},
\ \text{after } -e\cdot R_1 + R_2 \to R_2,\ \tfrac{1}{e^{2\pi}-1}R_1 \to R_1,
\]
⇒ \(v_1 = c_1\begin{pmatrix}0\\1\end{pmatrix}\), for any constant \(c_1 \ne 0\), are the eigenvectors corresponding to eigenvalue \(\mu_1 = 1\).
\[
[\,X(2\pi)-\mu_2 I \mid 0\,] = \begin{pmatrix} 0 & 0 &|& 0 \\ e\,(e^{2\pi}-1) & 1-e^{2\pi} &|& 0 \end{pmatrix}
\sim \begin{pmatrix} 1 & -e^{-1} &|& 0 \\ 0 & 0 &|& 0 \end{pmatrix},
\ \text{after } R_1 \leftrightarrow R_2,\ \tfrac{1}{e\,(e^{2\pi}-1)}R_1 \to R_1,
\]
⇒ \(v_2 = c_1\begin{pmatrix}1\\e\end{pmatrix}\), for any constant \(c_1 \ne 0\), are the eigenvectors corresponding to eigenvalue \(\mu_2 = e^{2\pi}\).

Let \(D = \operatorname{diag}(\mu_1, \mu_2) = \operatorname{diag}(1, e^{2\pi})\) and \(Q = [\,v_1 \mid v_2\,] = \begin{pmatrix} 0 & 1 \\ 1 & e \end{pmatrix}\). As in the discussion on pages 441-442, we can use these results to calculate
\[
E := \operatorname{diag}\left(\tfrac{1}{2\pi}\ln\mu_1,\ \tfrac{1}{2\pi}\ln\mu_2\right) = \operatorname{diag}(0, 1),
\]
and
\[
C = QEQ^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & e \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} -e & 1 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ e & 0 \end{pmatrix}.
\]
Using eigenvalues and eigenvectors of C, or using the MacLaurin series on page 373 in Section 5.2, we can calculate \(e^{tC}\). Note that \(C^2 = C\), so
\[
e^{tC} = I + tC + \tfrac{t^2}{2!}C^2 + \tfrac{t^3}{3!}C^3 + \cdots = I + \bigl(t + \tfrac{t^2}{2!} + \tfrac{t^3}{3!} + \cdots\bigr)C = I + (e^t - 1)C
= \begin{pmatrix} e^t & 0 \\ e\,(e^t-1) & 1 \end{pmatrix}.
\]
We have
\[
P(t) = X(t)e^{(-t)C} = \begin{pmatrix} e^{1+t-\cos t} & 0 \\ e\,(e^t-1) & 1 \end{pmatrix}\begin{pmatrix} e^{-t} & 0 \\ e\,(e^{-t}-1) & 1 \end{pmatrix}
= \begin{pmatrix} e^{1-\cos t} & 0 \\ 0 & 1 \end{pmatrix}.
\]

The Floquet representation is
\[
X(t) = P(t)e^{tC} = \begin{pmatrix} e^{1-\cos t} & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} e^t & 0 \\ e\,(e^t-1) & 1 \end{pmatrix}.
\]
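The monodromy matrix and multipliers of 5.8.6.1(b) can be cross-checked by integrating \(\dot X = A(t)X\), \(X(0) = I\), over one period. The sketch below (Python with NumPy and a hand-rolled classical RK4 step; the step count is a choice made here, and the whole check is illustrative rather than part of the manual) reproduces \(X(2\pi)\) and the multipliers \(\mu_1 = 1\), \(\mu_2 = e^{2\pi}\).

```python
import numpy as np

# Coefficient matrix of 5.8.6.1, periodic with period T = 2*pi:
def A(t):
    return np.array([[1.0 + np.sin(t), 0.0],
                     [np.exp(np.cos(t)), 0.0]])

# Classical RK4 integration of X' = A(t) X, X(0) = I, over one period:
def monodromy(n=20000):
    h = 2.0 * np.pi / n
    X, t = np.eye(2), 0.0
    for _ in range(n):
        k1 = A(t) @ X
        k2 = A(t + h / 2) @ (X + h / 2 * k1)
        k3 = A(t + h / 2) @ (X + h / 2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return X

M = monodromy()
e2pi = np.exp(2.0 * np.pi)

# Compare with the closed form X(2*pi) = [[e^{2pi}, 0], [e(e^{2pi}-1), 1]]:
assert np.allclose(M, [[e2pi, 0.0], [np.e * (e2pi - 1.0), 1.0]], rtol=1e-6)

# The characteristic multipliers are the eigenvalues mu1 = 1 and mu2 = e^{2pi}:
assert np.allclose(np.sort(np.linalg.eigvals(M)), [1.0, e2pi], rtol=1e-6)
```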

 .

5.8.6.2. (a) For a system in \(\mathbb{R}^1\), all of the matrices are 1 × 1. The principal fundamental matrix at t = 0 is X(t) = [ y(t) ], where y(t) solves \(\dot y = (\alpha + \sin t)y\), \(y(0) = 1\). We can solve the ODE using separation of variables: \(\ln|y| = \int \tfrac{dy}{y} = \int(\alpha + \sin t)\,dt = \alpha t - \cos t + c\); substituting in t = 0 and y = 1 gives \(0 = \ln|1| = \alpha\cdot 0 - \cos 0 + c = -1 + c\), hence c = 1 and \(y(t) = e^{\alpha t - \cos t + 1}\). We have \(X(t) = [\,y(t)\,] = [\,e^{\alpha t - \cos t + 1}\,]\). In this problem, the period of the 1 × 1 coefficient matrix \(A(t) = [\,\alpha + \sin t\,]\) is T = 2π. The monodromy matrix is \(X(2\pi) = [\,e^{2\pi\alpha - \cos 2\pi + 1}\,] = [\,e^{2\alpha\pi}\,]\). The only characteristic multiplier is \(\mu_1 = e^{2\alpha\pi}\) and the corresponding eigenvector is [ 1 ]. The 1 × 1 matrix C satisfies \(e^{2\pi C} = X(2\pi) = [\,e^{2\alpha\pi}\,]\), so \(C = [\,\alpha\,]\), and \(P(t) = X(t)e^{(-t)C} = [\,e^{\alpha t - \cos t + 1}e^{-\alpha t}\,] = [\,e^{1-\cos t}\,]\). The Floquet representation is \(X(t) = P(t)e^{tC} = [\,e^{1-\cos t}\,][\,e^{\alpha t}\,]\).

(b) The only characteristic multiplier is \(\mu_1 = e^{2\alpha\pi}\). The system is asymptotically stable if \(\alpha < 0\), neutrally stable if \(\alpha = 0\), and unstable if \(\alpha > 0\).

5.8.6.3. (a) For a system in \(\mathbb{R}^1\), all of the matrices are 1 × 1. The principal fundamental matrix at t = 0 for the scalar, linear, homogeneous ODE \(\dot y - p(t)y = 0\) is \(X(t) = [\,y_1(t)\,]\), where \(y_1(t)\) solves \(\dot y - p(t)y = 0\), \(y(0) = 1\). We can solve the ODE using the method of integrating factor \(\nu(t) := \exp\bigl(\int_0^t -p(s)\,ds\bigr)\). [We call the integrating factor \(\nu(t)\) instead of \(\mu(t)\) so as not to confuse notations with characteristic multipliers.] Using the Fundamental Theorem of Calculus to find \(\dot\nu(t)\), we have
\[
\frac{d}{dt}\bigl[\,\nu(t)y\,\bigr] = \dot y\,\nu(t) + \dot\nu(t)y = \dot y\,\nu(t) - p(t)\nu(t)y = \nu(t)\bigl(\dot y - p(t)y\bigr) \equiv 0,
\]
so \(\nu(t)y(t) = c\), where c is an arbitrary constant. The solutions of the ODE are \(y(t) = c\bigl(\nu(t)\bigr)^{-1} = c\exp\bigl(\int_0^t p(s)\,ds\bigr) =: c\,y_1(t)\),



t





. In this problem, the period of the 1 × 1 coefficient matrix  ˆ T  A(t) = [ p(t) ] is T. The monodromy matrix is X(2π) = exp p(s)ds . The only characteristic multiplier is 0 ˆ T  µ1 = exp p(s)ds and the corresponding eigenvector is [ 1 ]. 0  ˆ T  ˆ T  The 1 × 1 matrix C satisfies e2πC = X(2π) = exp p(s)ds , so C = p(s)ds , and We have X(t) = [ y1 (t) ] =

exp

p(s)ds

0

0

 P (t) = X(t) exp(−tC) = exp



t

p(s)ds



exp

0

 = exp



0

ˆ

T



p(s)ds



 = exp



0



ˆ

t

0

p(s)ds + 0

p(s)ds T



ˆ

t 0



ˆ

t

= exp

T

p(s)ds −

p(s)ds



0

 p(s)ds

T




The Floquet representation is   ˆ t ˆ X(t) = P (t)etC = exp p(s)ds exp T T

ˆ The only characteristic multiplier is µ1 = exp

T

p(s)ds



.

0

 p(t)dt . ˆ

0

T

p(t)dt < 0, what can you say about

The problem is a little vague as to whether the question, "Also, if 0

the solutions y(t)?" refers to the homogeneous or to the non-homogeneous ODE. [The placement of the question, coming after a question about the non-homogeneous ODE, suggests that the "Also,..." question refers to the non-homogeneous ODE.] So, let's answer the question for both. [If someone is grading the solution of this problem, they should probably insist on having an answer for at least one of the two ODEs, either the homogeneous ODE or the non-homogeneous ODE.]

asymptotically stable. (b) For the scalar, linear, nonhomogeneous ´  ODE y˙ − p(t)y = f (t), we can solve the ODE using the method of t integrating factor ν(t) , exp 0 −p(s)ds . Using the Fundamental Theorem of Calculus to find ν(t), ˙ we have  d [ ν(t)y ] = yν(t) ˙ + ν(t)y ˙ = yν(t) ˙ − p(t)ν(t)y = ν(t) y˙ − p(t)y = ν(t)f (t), dt ˆ

t

so ν(t)y(t) = c +

ν(s)f (s)ds, where c is an arbitrary constant. The solutions of the ODE are 0

ˆ



1 y(t) = ν(t)

t

c+

 ν(s)f (s)ds

t

ˆ = exp

p(s)ds

0

ˆ

 c+



t



 ν(s)f (s)ds

0

= cy1 (t) + exp

t

0

t

p(s)ds

ν(s)f (s)ds , cy1 (t) + yp (t),

0

0

where y1 (t) was found in part (a) and c is an arbitrary constant. Further, a natural question is whether the non-homogeneous ODE has a T-periodic solution. Using the fundamental matrix X(t) = [ y1 (t) ], existence of a T-periodic solution y(t) is equivalent to the existence of a solution of the scalar version of (5.114) in the textbook, ˆ T (1 − y1 (T)) y0 = y1 (T) (y1 (s))−1 f (s)ds, 0

that is,  1 − exp



T

p(s)ds



y0 = exp

0



T

p(s)ds



0

T

exp 0



ˆ −

s

 p(u)du f (s)ds.

0

So, the necessary and sufficient condition for the non-homogeneous ODE to have a T-periodic solution is \(1 - \exp\bigl(\int_0^T p(s)\,ds\bigr) \ne 0\), that is, \(\int_0^T p(s)\,ds \ne 0\).

0 T

p(t)dt < 0, then the non-homogeneous ODE has a T-periodic solution given by the scalar

In particular, if 0

version of (5.115) in the textbook, that is,

y(t)=exp

ˆ 0

 t

 p(s)ds · 

ˆ 0

exp  ˆ s  t exp − p(u)du f (s)ds + 0

ˆ 0

ˆ p(s)ds ·

T

  ˆ s  exp − p(u)du f (s)ds  0 0 . ˆ T   1 − exp p(s)ds T

0



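The periodicity criterion of 5.8.6.3(b) can be exercised numerically. In the sketch below (Python with NumPy; the specific p, f, step counts, and tolerance are choices made here, not data from the manual), the initial value \(y_0\) from the scalar version of (5.114) does return to itself after one period.

```python
import numpy as np

# When int_0^T p(s) ds != 0, the scalar version of (5.114) gives the initial
# value of the T-periodic solution of y' = p(t) y + f(t):
#   y0 = exp(int_0^T p) * int_0^T exp(-int_0^s p) f(s) ds / (1 - exp(int_0^T p)).
T = 2.0 * np.pi
p = lambda t: -0.5 + np.cos(t)        # int_0^T p = -pi != 0, so y0 exists
f = lambda t: np.sin(t)
P = lambda t: -0.5 * t + np.sin(t)    # antiderivative of p with P(0) = 0

# Composite-trapezoid value of int_0^T exp(-P(s)) f(s) ds:
s = np.linspace(0.0, T, 100001)
g = np.exp(-P(s)) * f(s)
I = float(np.sum((g[:-1] + g[1:]) * np.diff(s)) / 2.0)

mu = np.exp(P(T))                      # the characteristic multiplier exp(int_0^T p)
y0 = mu * I / (1.0 - mu)

# Integrate y' = p(t) y + f(t) from y(0) = y0 over one period with classical RK4:
n = 20000
h = T / n
y, t = y0, 0.0
for _ in range(n):
    k1 = p(t) * y + f(t)
    k2 = p(t + h / 2) * (y + h / 2 * k1) + f(t + h / 2)
    k3 = p(t + h / 2) * (y + h / 2 * k2) + f(t + h / 2)
    k4 = p(t + h) * (y + h * k3) + f(t + h)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

assert abs(y - y0) < 1e-5              # the solution returns to y0: it is T-periodic
```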

5.8.6.4. Slightly modifying some information given in Example 5.39, we have that x(1) (t) , et/2



cos t − sin t

 is one

solution of the system  (?)



(1)

= A(t)x,

where

−1 +

A(t) =  −1 −

Note that A(t) is periodic with period T = 2π. h (a) Define a fundamental matrix by Z(t) = x(1) (t)

p p

3 2

3 2

cos2 (t) cos t sin t

1−

3 2

−1 +

cos t sin t sin2 (t)

3 2

 .

i x(2) (t) , where x(2) (t) is some solution of (?) chosen with

initial conditions so that | Z(0) | = 1.    π  h i e ∗ 1 , where the asterWe have Z(2π) = x(1) (2π) pp x(2) (2π) and that x(1) (2π) = eπ . So, Z(2π) = 0 ∗ 0 (1) T isks are some unspecified numbers. We can see directly that e , [ 1 0 ] is an eigenvector of Z(2π) corresponding to eigenvalue µ1 = eπ . It follows from Theorem 5.23 that µ1 = eπ is a characteristic multiplier for system (?).   b11 b12 (b) If B = , then its eigenvalues γ1 and γ2 , possibly including repetitions, are the solutions of the b21 b22 characteristic equation b11 − γ b12 = (b11 − γ)(b22 − γ) − b12 b21 (1) 0 = | B − γI | = b21 b22 − γ = γ 2 − γ(b11 + b22 ) + b11 b22 − b12 b21 = γ 2 − γ(b11 + b22 ) + | B |. But, because γ1 and γ2 are the solutions of the characteristic equation, the latter must be (2) 0 = (γ − γ1 )(γ − γ2 ) = γ 2 − γ(γ1 + γ2 ) + γ1 γ2 . Comparing (1) and (2) we see that γ1 γ2 = | B |.  (c) Continuing from part (a), Abel’s Theorem 5.7 in Section 5.2, along with the fact that tr A(t) = a11 (t) + a22 (t) = −1 + 32 cos2 (t) − 1 + 32 sin2 (t) ≡ − 12 , implies that | Z(t)| ≡ e−t/2 | Z(0) | = e−t/2 · 1 = e−t/2 . It follows that | Z(2π) | = e−π . By the matrix algebra result of part (b) with B = Z(2π), we have that the two characteristic multipliers µ1 and µ2 satisfy | Z(2π) | = µ1 µ2 . Using the result earlier in part (c), along with the result of part (a), we have e−π = | Z(2π) | = µ1 µ2 = eπ · µ2 , hence µ2 = e−2π is a second multiplier for system (?), as was desired. 2 5.8.6.5. For the T-periodic Hill’s equation (5.111), that is, (?) y¨ + (λ + q(t))y = 0, the  corresponding system in R y1 (t; λ) y2 (t; λ) has principal fundamental matrix at t = 0 given by X(t; λ) , . 
It satisfies |X(t; λ)| ≡ 1, so y˙ 1 (t; λ) y˙ 2 (t; λ) the characteristic multipliers µ satisfy y1 (T; λ) − µ y2 (T; λ) 0 = | X(T; λ) − µI | = = (y1 (T; λ) − µ)(y˙ 2 (T; λ) − µ) − y2 (T; λ)y˙ 1 (T; λ) y˙ 1 (T; λ) y˙ 2 (T; λ) − µ   = y1 (T; λ)y˙ 2 (T; λ) − y2 (T; λ)y˙ 1 (T; λ) − y1 (T; λ) + y˙ 2 (T; λ) µ + µ2 = |X(T; λ)| − y1 (T; λ) + y˙ 2 (T; λ) µ + µ2  = 1 − y1 (T; λ) + y˙ 2 (T; λ) µ + µ2 ,

so µ1,2 =

 q 2 y1 (T; λ) + y˙ 2 (T; λ) ± y1 (T; λ) + y˙ 2 (T; λ) − 4 2

.

Because we are assuming that \(|\dot y_2(T;\lambda) + y_1(T;\lambda)| < 2\), we have that \(\mu_{1,2}\) are not real (because \(\sqrt{4 - \bigl(y_1(T;\lambda)+\dot y_2(T;\lambda)\bigr)^2} \ne 0\)) and are the complex conjugate pair
\[
\mu_{1,2} = \frac{\bigl(y_1(T;\lambda)+\dot y_2(T;\lambda)\bigr) \pm i\,\sqrt{4 - \bigl(y_1(T;\lambda)+\dot y_2(T;\lambda)\bigr)^2}}{2},
\]

q 2 4 − y1 (T; λ) + y˙ 2 (T; λ) 6=

whose moduli are | µ1,2 | =

1 2

q

y1 (T; λ) + y˙ 2 (T; λ)

2

+ 4 − y1 (T; λ) + y˙ 2 (T; λ)

2

=

1√ 4 = 1. 2

c Larry

Turyn, October 10, 2013

page 103

Since there are only two characteristic multipliers and they are distinct, by Theorem (5.25)(b), the system is neutrally stable. 2 5.8.6.6. For the T-periodic Hill’s equation (5.111), that is, (?) y¨ + (λ + q(t))y = 0, the  corresponding system in R y1 (t; λ) y2 (t; λ) . It satisfies |X(t; λ)| ≡ 1, so has principal fundamental matrix at t = 0 given by X(t; λ) , y˙ 1 (t; λ) y˙ 2 (t; λ) the characteristic multipliers µ satisfy y1 (T; λ) − µ y2 (T; λ) = (y1 (T; λ) − µ)(y˙ 2 (T; λ) − µ) − y2 (T; λ)y˙ 1 (T; λ) 0 = | X(T; λ) − µI | = y˙ 1 (T; λ) y˙ 2 (T; λ) − µ   = y1 (T; λ)y˙ 2 (T; λ) − y2 (T; λ)y˙ 1 (T; λ) − y1 (T; λ) + y˙ 2 (T; λ) µ + µ2 = |X(T; λ)| − y1 (T; λ) + y˙ 2 (T; λ) µ + µ2  = 1 − y1 (T; λ) + y˙ 2 (T; λ) µ + µ2 ,

so µ1,2 =

 q 2 y1 (T; λ) + y˙ 2 (T; λ) ± y1 (T; λ) + y˙ 2 (T; λ) − 4

. 2 that |y˙ 2 (T; λ) + y1 (T; λ)| > 2, we have that µ1,2 are real and distinct, because q Because we are assuming 2 4 − y1 (T; λ) + y˙ 2 (T; λ) 6= 0. Further,

µ1 =

 q 2 y1 (T; λ) + y˙ 2 (T; λ) + y1 (T; λ) + y˙ 2 (T; λ) − 4 2

>

 y1 (T; λ) + y˙ 2 (T; λ) > 1. 2

It follows that (?) has a solution y(t; λ) with max0≤t≤kT |y(t; λ)| → ∞ as k → ∞, that is, that (?) is unstable, as in Theorem 5.25 (c). 5.8.6.7. (a) If X(T) = I then all of the characteristic multipliers are equal to one and are not deficient, because the characteristic equation is 0 = | X(T) − µ | = | I − µI | = (1 − µ)n . We can construct the Floquet representation as on pages 441-442 of the textbook: X(t) = P (t)etC = P (t)QetE Q−1 = P (t)QetO Q−1 = P (t)QeO Q−1 = P (t)Q I Q−1 = P (t),   because E = diag T1 ln(µ1 ), ..., T1 ln(µn ) = diag T1 ln(1), ..., T1 ln(1) = diag(0, ..., 0). Because X(t) = P (t) is periodic with period T, all solutions of (?) x˙ = A(t)x are periodic with period T. (b) The result of problem 5.2.5.27 says that, because A(−t) that is,  ≡ −A(t),it follows  that X(t) is an even function,   T T T T T satisfies X(−t) ≡ X(t). Replace t by 2 − T to see that X − 2 − T ≡ X 2 − T , that is, X T − 2 ≡ X 2 − T . (c) Lemma 5.6 states that X(t + T) ≡ X(t)X(T). With t = − T2 , this gives X(− T2 + T) ≡ X(− T2 )X(T). (d) Proceeding from the right hand side of the result of part (c), and then using the result of part (b), we have  T  T  T   T X − X(T) ≡ X − + T ≡ X −T ≡X − . 2 2 2 2  (e) Because X(t) is a fundamental matrix, it is invertible for all t. In particular, X − T2 is invertible. Using the result of part (d), we have   −1    −1  T T T T X(T) ≡ X − X − X(T) ≡ X − X − = I. 2 2 2 2 Using the result of part (a), this implies that all of the solutions of (?) x˙ = A(t)x are periodic with period T. 5.8.6.8. Note that the instructions for this problem have been changed on the Errata page. Assuming b is a non-negative real number, the unforced linear oscillator ODE y¨ + by˙ + y = 0 is in the undamped, overdamped, critically damped, or underdamped case when b = 0, 0 < b < 2, b = 2, or b > 2, respectively. In addition, b < 0 also leads to cases. In order to cut down on the number of cases, it makes sense to work with the complex form of solutions. 
Then the three cases for the real number b become (1) b = 0, (2) |b| ≠ 2 and b ≠ 0, and (3) b = ±2.



In principle there are nine cases to consider, because we could have h1 in any of three cases and h2 in any of three cases. In all cases we will construct the principal fundamental matrix of solutions at t = 0. Let y1 (t; h1 , h2 ), or y1 (t) for short, solve y¨+h1 y+y ˙ = 0, 0 < t < π with initial data y1 (0; h1 , h2 ) = 1, y˙ 1 (0; h1 , h2 ) = 0, and let y2 (t; h1 , h2 ), or y2 (t) for short, solve y¨ + h1 y˙ + y = 0, 0 < t < π with initial data y2 (0; h1 , h2 ) = 0, y˙ 2 (0; h1 , h2 ) = 1. As in the discussion of Hill’s equationin Section 5.8.4, the scalar ODE y¨ + b(t)y˙ + y = 0 has principal fundamental y1 (t; h1 , h2 ) y2 (t; h1 , h2 ) matrix at t = 0 given by X(t; h1 , h2 ) , , so the characteristic multipliers µ satisfy y˙ 1 (t; h1 , h2 ) y˙ 2 (t; h1 , h2 ) the characteristic equation y1 (2π; h1 , h2 ) − µ y2 (2π; h1 , h2 ) 0 = | X(2π; h1 , h2 ) − µI | = y˙ 1 (2π; h1 , h2 ) y˙ 2 (2π; h1 , h2 ) − µ = (y1 (2π; h1 , h2 ) − µ) (y˙ 2 (2π; h1 , h2 ) − µ) − y2 (2π; h1 , h2 )y˙ 1 (2π; h1 , h2 )  = µ2 − y1 (2π; h1 , h2 ) + y˙ 2 (2π; h1 , h2 ) µ + (y1 (2π; h1 , h2 )y˙ 2 (2π; h1 , h2 ) − y2 (2π; h1 , h2 )y˙ 1 (2π; h1 , h2 )) , that is, (?)

 0 = µ2 − y1 (2π; h1 , h2 ) + y˙ 2 (2π; h1 , h2 ) µ + e−(h1 +h2 )π .

Note that we have used Abel’s Theorem 3.12 in Section 3.3 and the fact that ˆ 2π ˆ π ˆ 2π ˆ π ˆ 2π −b(t)dt = −b(t)dt + −b(t)dt = −h1 dt + −h2 dt = −(h1 + h2 )π 0

0

π

to conclude that | X(2π; h1 , h2 ) | = e

−(h1 +h2 )π

0

| X(0; h1 , h2 ) | = e

π −(h1 +h2 )π

· 1.

Case 1 : If h1 = h2 = 0, then both y1 (t; 0, 0) and y2 (t; 0, 0) satisfy y¨ + y = 0. In this case the principal fundamental  cos t sin t matrix at t = 0 is easily found to be X(t) = , 0 ≤ t ≤ 2π. − sin t cos t Because X(2π) = I, the characteristic multipliers are µ1 = µ2 = 1. Case 2 : If h1 = 0 and |h2 | = 2, then both y1 (t; 0, h2 ) and y2 (t; 0, h2 ) satisfy y¨ + y = 0, for 0 < t < π, and satisfy y¨ + h2 y˙ + y = 0, for π < t < 2π. To find y1 (t) for 0 < t < π, note that the general solution of y¨ + y = 0 is y(t) = c1 cos t + c2 sin t. The ICs require 1 = y(0) = c1 and 0 = y(0) ˙ = c2 . So, y1 (t) = cos t, 0 < t < π. On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, so |h2 | = 2 and h2 being real imply y1 (t; 0, h2 ) = Ae−h2 (t−π)/2 + B(t − π)e−h2 (t−π)/2 and

  h2 h2 Ae−h2 (t−π)/2 + B 1 − (t − π) e−h2 (t−π)/2 . 2 2 Continuity of the solution y1 (t; 0, h2 ) and its first derivative at t = π requires −1 = cos π = y1 (π; 0, h2 ) = A and 0 = − sin π = y˙ 1 (π; 0, h2 ) = − h22 A + B. So, for π < t < 2π, y˙ 1 (t; 0, h2 ) = −

y1 (t; 0, h2 ) = −e−h2 (t−π)/2 −

h2 (t − π)e−h2 (t−π)/2 . 2

It follows that

 h2  −h2 π/2 y1 (2π; 0, h2 ) = − 1 + π e 2 To find y2 (t) for 0 < t < π, note that the general solution of y¨ + y = 0 is y(t) = c1 cos t + c2 sin t. The ICs require 0 = y(0) = c1 and 1 = y(0) ˙ = c2 . So, y2 (t) = sin t, 0 < t < π. On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, so |h2 | = 2 and h2 being real imply y2 (t; 0, h2 ) = Ae−h2 (t−π)/2 + B(t − π)e−h2 (t−π)/2

and y˙ 2 (t; 0, h2 ) = −

  h2 h2 Ae−h2 (t−π)/2 + B 1 − (t − π) e−h2 (t−π)/2 . 2 2 c Larry



Continuity of the solution y2 (t; 0, h2 ) and its first derivative at t = π requires 0 = sin π = y2 (π; 0, h2 ) = A and −1 = cos π = y˙ 2 (π; 0, h2 ) = − h22 A + B. So, for π < t < 2π,   h2 y˙ 2 (t; 0, h2 ) = − 1 − (t − π) e−h2 (t−π)/2 2 It follows that

 h2  −h2 π/2 y˙ 2 (2π; 0, h2 ) = − 1 − π e 2 So, in the case that h1 = 0 and |h2 | = 2, the characteristic equation (?) is    h2  −h2 π/2 h2   π − 1− π e µ + e−(0+h2 )π , 0 = µ2 − − 1 + 2 2

that is,  2 0 = µ2 + 2e−h2 π/2 µ + e−h2 π = µ + e−h2 π/2 . The characteristic multipliers are µ1 = µ2 = −e−h2 π/2 . Case 3 : If h1 = 0 and |h2 | is neither 0 nor 2, then both y1 (t; 0, h2 ) and y2 (t; 0, h2 ) satisfy y¨ + y = 0, for 0 < t < π, and satisfy y¨ + h2 y˙ + y = 0, for π < t < 2π. To find y1 (t) for 0 < t < π, note that the general solution of y¨ + y = 0 is y(t) = c1 cos t + c2 sin t. The ICs require 1 = y(0) = c1 and 0 = y(0) ˙ = c2 . So, y1 (t) = cos t, 0 < t < π. On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, so |h2 | being neither 0 nor 2 implies y1 (t; 0, h2 ) = Aes1 (t−π) + Bes2 (t−π) and −h ±

y˙ 1 (t; 0, h2 ) = As1 es1 (t−π) + Bs2 es2 (t−π) ,



h2 −4

2 2 are two distinct, but possibly complex, numbers. where s1,2 = 2 Continuity of the solution y1 (t; 0, h2 ) and its first derivative at t = π requires −1 = cos π = y1 (π; 0, h2 ) = A + B and 0 = − sin π = y˙ 1 (π; 0, h2 ) = s1 A + s2 B. This gives





A



1

1

s1

s2

=

 B

−1  

−1

 0



 1 =  s2 − s1

s2

−1

−s1

1

 

  −s2 1 . =  s2 − s1 0 s1

−1



So, for π < t < 2π, y1 (t; 0, h2 ) =

  1 −s2 es1 (t−π) + s1 es2 (t−π) s2 − s1

It follows that

y1(2π; 0, h2) = (1/(s2 − s1)) ( −s2 e^{s1π} + s1 e^{s2π} ).

To find y2(t) for 0 < t < π, note that the general solution of y¨ + y = 0 is y(t) = c1 cos t + c2 sin t. The ICs require 0 = y(0) = c1 and 1 = y˙(0) = c2. So, y2(t) = sin t, 0 < t < π.
On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, so |h2| being neither 0 nor 2 implies

y2(t; 0, h2) = A e^{s1(t−π)} + B e^{s2(t−π)} and y˙2(t; 0, h2) = A s1 e^{s1(t−π)} + B s2 e^{s2(t−π)},

where s_{1,2} = (−h2 ± √(h2² − 4))/2 are two distinct, but possibly complex, numbers. Continuity of the solution y2(t; 0, h2) and its first derivative at t = π requires 0 = sin π = y2(π; 0, h2) = A + B and −1 = cos π = y˙2(π; 0, h2) = s1 A + s2 B. This gives



[A; B] = [1 1; s1 s2]^{−1} [0; −1] = (1/(s2 − s1)) [s2 −1; −s1 1] [0; −1] = (1/(s2 − s1)) [1; −1].

So, for π < t < 2π,

y2(t; 0, h2) = (1/(s2 − s1)) ( e^{s1(t−π)} − e^{s2(t−π)} ).

© Larry Turyn, October 10, 2013

It follows that

y˙2(2π; 0, h2) = (1/(s2 − s1)) ( s1 e^{s1π} − s2 e^{s2π} ).

So, in the case that h1 = 0 and |h2| is neither 0 nor 2, the characteristic equation (?) is

0 = µ² − (1/(s2 − s1)) ( −s2 e^{s1π} + s1 e^{s2π} + s1 e^{s1π} − s2 e^{s2π} ) µ + e^{−(0+h2)π},

that is,

0 = µ² + (e^{s1π} + e^{s2π}) µ + e^{−h2π}.

Note that the equation satisfied by s_{1,2} being 0 = s² + h2 s + 1 and also being 0 = (s − s1)(s − s2) = s² − (s1 + s2)s + s1 s2 implies that s1 s2 = 1 and −h2 = s1 + s2. So, the characteristic equation for µ can be rewritten as

0 = µ² + (e^{s1π} + e^{s2π}) µ + e^{(s1+s2)π} = (µ + e^{s1π})(µ + e^{s2π}).

The characteristic multipliers are µ1 = −e^{s1π} and µ2 = −e^{s2π}.

Case 4: If |h1| = 2 and h2 = 0, then both y1(t; h1, 0) and y2(t; h1, 0) satisfy y¨ + h1 y˙ + y = 0, for 0 < t < π, and satisfy y¨ + y = 0, for π < t < 2π.
To find y1(t) for 0 < t < π, note that when |h1| = 2, the general solution of y¨ + h1 y˙ + y = 0 is y(t) = c1 e^{−h1 t/2} + c2 t e^{−h1 t/2}, from which it follows that y˙(t) = −(h1/2) c1 e^{−h1 t/2} + c2 (1 − (h1/2) t) e^{−h1 t/2}. The ICs require 1 = y(0) = c1 and 0 = y˙(0) = −(h1/2) c1 + c2. So,

y1(t) = e^{−h1 t/2} + (h1/2) t e^{−h1 t/2}, 0 < t < π,

and

y˙1(t) = −(h1/2) e^{−h1 t/2} + (h1/2)(1 − (h1/2) t) e^{−h1 t/2} = −(h1²/4) t e^{−h1 t/2}, 0 < t < π.

On the interval π < t < 2π, y¨ + y = 0; the general solution is y1(t; h1, 0) = A cos(t − π) + B sin(t − π), and y˙1(t; h1, 0) = −A sin(t − π) + B cos(t − π). Continuity of the solution y1(t; h1, 0) and its first derivative at t = π requires (1 + (h1/2)π) e^{−h1π/2} = y1(π; h1, 0) = A and −(h1²/4)π e^{−h1π/2} = y˙1(π; h1, 0) = B (here h2 = 0). So, for π < t < 2π,

y1(t; h1, 0) = (1 + (h1/2)π) e^{−h1π/2} cos(t − π) − (h1²/4)π e^{−h1π/2} sin(t − π).

It follows that

y1(2π; h1, 0) = −(1 + (h1/2)π) e^{−h1π/2}.

To find y2(t) for 0 < t < π, note that the general solution of y¨ + h1 y˙ + y = 0 is y(t) = c1 e^{−h1 t/2} + c2 t e^{−h1 t/2},

from which it follows that y˙(t) = −(h1/2) c1 e^{−h1 t/2} + c2 (1 − (h1/2) t) e^{−h1 t/2}. The ICs require 0 = y(0) = c1 and 1 = y˙(0) = −(h1/2) c1 + c2. So,

y2(t) = t e^{−h1 t/2}, 0 < t < π,

and

y˙2(t) = (1 − (h1/2) t) e^{−h1 t/2}, 0 < t < π.

On the interval π < t < 2π, y¨ + y = 0; the general solution is y2(t; h1, 0) = A cos(t − π) + B sin(t − π), and y˙2(t; h1, 0) = −A sin(t − π) + B cos(t − π). Continuity of the solution y2(t; h1, 0) and its first derivative at t = π requires πe^{−h1π/2} = y2(π; h1, 0) = A and (1 − (h1/2)π) e^{−h1π/2} = y˙2(π; h1, 0) = B. So, for π < t < 2π,

y2(t; h1, 0) = πe^{−h1π/2} cos(t − π) + (1 − (h1/2)π) e^{−h1π/2} sin(t − π),

hence

y˙2(t; h1, 0) = −πe^{−h1π/2} sin(t − π) + (1 − (h1/2)π) e^{−h1π/2} cos(t − π).

It follows that

y˙2(2π; h1, 0) = −(1 − (h1/2)π) e^{−h1π/2}.

So, in the case that |h1| = 2 and h2 = 0, the characteristic equation (?) is

0 = µ² − ( −(1 + (h1/2)π) − (1 − (h1/2)π) ) e^{−h1π/2} µ + e^{−(h1+0)π},

that is,

0 = µ² + 2e^{−h1π/2} µ + e^{−h1π} = (µ + e^{−h1π/2})².

The characteristic multipliers are µ1 = µ2 = −e−h1 π/2 . Case 5 : If |h1 | = 2 and |h2 | = 2, then both y1 (t; h1 , h2 ) and y2 (t; h1 , h2 ) satisfy y¨ + h1 y˙ + y = 0, for 0 < t < π, and satisfy y¨ + h2 y˙ + y = 0, for π < t < 2π. −h1 t/2 To find y1 (t) for 0 < t < π, note that the general + c2 te−h1 t/2 ,  solution  of y¨ + h1 y˙ + y = 0 is y(t) = c1 e from which it follows that y(t) ˙ = − h21 c1 e−h1 t/2 + c2 1 −

− h21 c1 + c2 . So, y1 (t) = e−h1 t/2 + and

h1 2

t e−h1 t/2 The ICs require 1 = y(0) = c1 and 0 = y(0) ˙ =

h1 −h1 t/2 te , 0 < t < π, 2

h1 −h1 t/2 h1  h1  −h1 t/2 e + 1− t e , 0 < t < π. 2 2 2 On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, so |h2 | = 2 and h2 being real imply y˙ 1 (t) = −

y1 (t; h1 , h2 ) = Ae−h2 (t−π)/2 + B(t − π)e−h2 (t−π)/2 and

  h2 h2 Ae−h2 (t−π)/2 + B 1 − (t − π) e−h2 (t−π)/2 . 2 2   Continuity of the solution y1 (t; h1 , h2 ) and its first derivative at t = π requires 1 + h21 π e−h1 π/2 = y1 (π; h1 , h2 ) = A   h2 and − 41 π e−h1 π/2 = y˙ 1 (π; h1 , h2 ) = − h22 A + B. Substituting in A = 1 + h21 π e−h1 π/2 implies y˙ 1 (t; h1 , h2 ) = −

B=−

  h21 −h1 π/2 h2  h1  −h1 π/2 h2 h2 h1 h2 πe + 1+ π e = − 1 π+ + π e−h1 π/2 . 4 2 2 4 2 4

So, for π < t < 2π,    h1  −h1 π/2 −h2 (t−π)/2 h2 h2 h1 h2 y1 (t; h1 , h2 ) = 1 + π e e + − 1 π+ + π e−h1 π/2 (t − π)e−h2 (t−π)/2 . 2 4 2 4 It follows that

  h2 h1 h2 h1 h2  π+ − 1 π+ + π π e−(h1 +h2 )π/2 2 4 2 4   2 h1 + h2 h h1 h2 = 1+ π− 1 π+ π e−(h1 +h2 )π/2 2 4 4 

y1 (2π; h1 , h2 ) =

1+

−h1 t/2 To find y2 (t) for 0 < t < π, note that the general + c2 te−h1 t/2 ,  solution  of y¨ + h1 y˙ + y = 0 is y(t) = c1 e h1 h1 −h1 t/2 −h1 t/2 from which it follows that y(t) ˙ = − 2 c1 e + c2 1 − 2 t e The ICs require 0 = y(0) = c1 and 1 = y(0) ˙ =

− h21 c1 + c2 . So,

y2 (t) = te−h1 t/2 , 0 < t < π,

and

y˙2(t) = (1 − (h1/2) t) e^{−h1 t/2}, 0 < t < π.

On the interval π < t < 2π, y¨ + h2 y˙ + y = 0; the general solution is y2(t; h1, h2) = A e^{−h2(t−π)/2} + B (t − π) e^{−h2(t−π)/2}, and

y˙2(t; h1, h2) = −(h2/2) A e^{−h2(t−π)/2} + (1 − (h2/2)(t − π)) B e^{−h2(t−π)/2}.

Continuity of the solution y2(t; h1, h2) and its first derivative at t = π requires πe^{−h1π/2} = y2(π; h1, h2) = A and

(1 − (h1/2)π) e^{−h1π/2} = y˙2(π; h1, h2) = −(h2/2) A + B.

Substituting A = πe^{−h1π/2} into the latter gives

B = (1 − (h1/2)π + (h2/2)π) e^{−h1π/2} = (1 + ((h2 − h1)/2)π) e^{−h1π/2}.

So, for π < t < 2π,

y2(t; h1, h2) = πe^{−h1π/2} e^{−h2(t−π)/2} + (1 + ((h2 − h1)/2)π) e^{−h1π/2} (t − π) e^{−h2(t−π)/2},

hence

y˙2(t; h1, h2) = −π(h2/2) e^{−h1π/2} e^{−h2(t−π)/2} + (1 + ((h2 − h1)/2)π) e^{−h1π/2} (1 − (h2/2)(t − π)) e^{−h2(t−π)/2}.

It follows that

y˙2(2π; h1, h2) = ( −π(h2/2) + (1 + ((h2 − h1)/2)π)(1 − (h2/2)π) ) e^{−(h1+h2)π/2}
= ( 1 − ((h1 + h2)/2)π − ((h2 − h1)h2/4)π² ) e^{−(h1+h2)π/2}.

So, in the case that |h1| = 2 and |h2| = 2, the characteristic equation (?) is

0 = µ² − ( 1 + ((h1+h2)/2)π − (h1²/4)π² + (h1h2/4)π² + 1 − ((h1+h2)/2)π − ((h2−h1)h2/4)π² ) e^{−(h1+h2)π/2} µ + e^{−(h1+h2)π},

that is,

0 = µ² − ( 2 − ((h1 − h2)²π²)/4 ) e^{−(h1+h2)π/2} µ + e^{−(h1+h2)π}.

In the case |h1| = 2 = |h2|, the characteristic multipliers are

µ1,2 = ( (2 − ((h1 − h2)²π²)/4) ± √( (2 − ((h1 − h2)²π²)/4)² − 4 ) ) / 2 · e^{−(h1+h2)π/2}.
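The Case 5 trace formula can be spot-checked numerically by assembling the monodromy matrix X(2π) from the two basic solutions; for |h1| = |h2| = 2 the computed trace matches (2 − (h1 − h2)²π²/4) e^{−(h1+h2)π/2} (note the minus sign in front of the π² term). This is a sketch using scipy; the helper name `monodromy` is ours, not the text's:

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(h1, h2):
    """Monodromy matrix X(2*pi) for y'' + b(t) y' + y = 0 with
    b(t) = h1 on (0, pi) and b(t) = h2 on (pi, 2*pi)."""
    rhs = lambda b: (lambda t, y: [y[1], -b * y[1] - y[0]])
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):       # initial data of y1 and y2
        mid = solve_ivp(rhs(h1), (0.0, np.pi), y0,
                        rtol=1e-12, atol=1e-14).y[:, -1]
        end = solve_ivp(rhs(h2), (np.pi, 2.0 * np.pi), mid,
                        rtol=1e-12, atol=1e-14).y[:, -1]
        cols.append(end)
    return np.array(cols).T

h1, h2 = 2.0, -2.0                            # Case 5: |h1| = |h2| = 2
X = monodromy(h1, h2)
trace_formula = (2.0 - (h1 - h2) ** 2 * np.pi ** 2 / 4.0) \
    * np.exp(-(h1 + h2) * np.pi / 2.0)
det_formula = np.exp(-(h1 + h2) * np.pi)      # product of the multipliers
print(np.trace(X), trace_formula)
```

The determinant check reflects the Abel/Liouville identity µ1µ2 = e^{−(h1+h2)π} used later in part (b).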

Case 6 : If |h1 | = 2 and |h2 | 6= 2 and h2 6= 0, then both y1 (t; h1 , h2 ) and y2 (t; h1 , h2 ) satisfy y¨ + h1 y˙ + y = 0, for 0 < t < π, and satisfy y¨ + h2 y˙ + y = 0, for π < t < 2π. −h1 t/2 To find y1 (t) for 0 < t < π, note that the general + c2 te−h1 t/2 ,  solution  of y¨ + h1 y˙ + y = 0 is y(t) = c1 e from which it follows that y(t) ˙ = − h21 c1 e−h1 t/2 + c2 1 −

− h21 c1 + c2 . So, y1 (t) = e−h1 t/2 + and

h1 2

t e−h1 t/2 The ICs require 1 = y(0) = c1 and 0 = y(0) ˙ =

h1 −h1 t/2 te , 0 < t < π, 2

h1 −h1 t/2 h1  h1  −h1 t/2 e + 1− t e , 0 < t < π. 2 2 2 On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, so |h2 | being neither 0 nor 2 implies y˙ 1 (t) = −

y1(t; h1, h2) = A e^{s1(t−π)} + B e^{s2(t−π)} and y˙1(t; h1, h2) = A s1 e^{s1(t−π)} + B s2 e^{s2(t−π)},

where s_{1,2} = (−h2 ± √(h2² − 4))/2 are two distinct, but possibly complex, numbers.

Continuity of the solution y1(t; h1, h2) and its first derivative at t = π requires (1 + (h1/2)π) e^{−h1π/2} = y1(π; h1, h2) = A + B and −(h1²/4)π e^{−h1π/2} = y˙1(π; h1, h2) = s1 A + s2 B. This gives

[A; B] = [1 1; s1 s2]^{−1} [1 + (h1/2)π; −(h1²/4)π] e^{−h1π/2} = (e^{−h1π/2}/(s2 − s1)) [ s2(1 + (h1/2)π) + (h1²/4)π ; −s1(1 + (h1/2)π) − (h1²/4)π ].

So, for π < t < 2π,

y1(t; h1, h2) = (e^{−h1π/2}/(s2 − s1)) ( (s2(1 + (h1/2)π) + (h1²/4)π) e^{s1(t−π)} + (−s1(1 + (h1/2)π) − (h1²/4)π) e^{s2(t−π)} ).

It follows that

y1(2π; h1, h2) = (e^{−h1π/2}/(s2 − s1)) ( (s2(1 + (h1/2)π) + (h1²/4)π) e^{s1π} + (−s1(1 + (h1/2)π) − (h1²/4)π) e^{s2π} ).

−h1 t/2 To find y2 (t) for 0 < t < π, note that the general + c2 te−h1 t/2 ,  solution  of y¨ + h1 y˙ + y = 0 is y(t) = c1 e from which it follows that y(t) ˙ = − h21 c1 e−h1 t/2 + c2 1 − h21 t e−h1 t/2 The ICs require 0 = y(0) = c1 and 1 = y(0) ˙ =

− h21 c1 + c2 . So, and

y2 (t) = te−h1 t/2 , 0 < t < π,

 h1  −h1 t/2 y˙ 2 (t) = 1 − t e , 0 < t < π. 2 On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, so |h2 | being neither 0 nor 2 implies y2 (t; h1 , h2 ) = Aes1 (t−π) + Bes2 (t−π)

and y˙2(t; h1, h2) = A s1 e^{s1(t−π)} + B s2 e^{s2(t−π)}. Continuity of the solution y2(t; h1, h2) and its first derivative at t = π requires πe^{−h1π/2} = y2(π; h1, h2) = A + B and

(1 − (h1/2)π) e^{−h1π/2} = y˙2(π; h1, h2) = s1 A + s2 B.

This gives

[A; B] = [1 1; s1 s2]^{−1} [π; 1 − (h1/2)π] e^{−h1π/2} = (e^{−h1π/2}/(s2 − s1)) [ s2π − (1 − (h1/2)π) ; −s1π + (1 − (h1/2)π) ].

So, for π < t < 2π,

y2(t; h1, h2) = (e^{−h1π/2}/(s2 − s1)) ( (s2π − (1 − (h1/2)π)) e^{s1(t−π)} + (−s1π + (1 − (h1/2)π)) e^{s2(t−π)} ),

hence

y˙2(t; h1, h2) = (e^{−h1π/2}/(s2 − s1)) ( (s2π − (1 − (h1/2)π)) s1 e^{s1(t−π)} + (−s1π + (1 − (h1/2)π)) s2 e^{s2(t−π)} ).

It follows that

y˙2(2π; h1, h2) = (e^{−h1π/2}/(s2 − s1)) ( (s2π − (1 − (h1/2)π)) s1 e^{s1π} + (−s1π + (1 − (h1/2)π)) s2 e^{s2π} ).

So, in the case that |h1 | = 2 and |h2 | 6= 2 and |h2 | 6= 0, the characteristic equation (?) is 0 = µ2 −

 1   h1  h21  s1 π  h1  h21  s2 π s2 1 + π + π e + − s1 1 + π − π e s2 − s1 2 4 2 4


   h1  s1 π  h1  s2 π  −h1 π/2 + s1 s2 π − s1 1 − π e + − s1 s2 π + s2 1 − π e e µ + e−(h1 +h2 )π , 2 2 that is,  h1 h2 π 1  s2 − s1 + (s2 + s1 ) π + 1 + s1 s2 π es1 π s2 − s1 2 4    h1 h2 π + s2 − s1 − (s2 + s1 ) π − 1 − s1 s2 π es2 π e−h1 π/2 µ + e−(h1 +h2 )π . 2 4 The equation satisfied by s1,2 is 0 = s2 + h2 s + 1 and also is 0 = (s − s1 )(s − s2 ) = s2 − (s1 + s2 )s + s1 s2 . This implies that s1 s2 = 1 and s1 + s2 = −h2 . Furthermore, |h1 | = 2 and h1 is real, so h21 = 4. So, the characteristic equation for µ can be rewritten as  h h   h h  π 1 2 1 2 0 = µ2 − e−h1 π/2 µ + e−(h1 +h2 )π , s2 − s1 − π − 2 es1 π + s2 − s1 + π −2 s2 − s1 2 2 0 = µ2 −

that is, 0 = µ2 −



 es1 π + es2 π −

   π  h1 h2 − 2 es1 π − es2 π e−h1 π/2 µ + e−(h1 +h2 )π , s2 − s1 2

In the case |h1| = 2 and |h2| ≠ 2 and h2 ≠ 0, the characteristic multipliers are

µ1,2 = (e^{−h1π/2}/2) ( α ± √(α² − 4e^{−h2π}) ),

where

α ≜ e^{s1π} + e^{s2π} − (π/(2(s2 − s1))) (h1h2 − 4) (e^{s1π} − e^{s2π}).

Case 7: If |h1| ≠ 2 and h1 ≠ 0 and h2 = 0, then both y1(t; h1, 0) and y2(t; h1, 0) satisfy y¨ + h1 y˙ + y = 0, for 0 < t < π, and satisfy y¨ + y = 0, for π < t < 2π.
To find y1(t) for 0 < t < π, when |h1| ≠ 2 and h1 ≠ 0 the general solution of y¨ + h1 y˙ + y = 0 is

y1(t; h1, 0) = c1 e^{s1 t} + c2 e^{s2 t} and y˙1(t; h1, 0) = c1 s1 e^{s1 t} + c2 s2 e^{s2 t},

where s_{1,2} = (−h1 ± √(h1² − 4))/2 are two distinct, but possibly complex, numbers. The ICs require 1 = y(0) = c1 + c2 and 0 = y˙(0) = s1 c1 + s2 c2. So,



c1





1

1

s1

s2

=

 c2

−1  

1

 0

 1  = s2 − s1 

s2

−1

−s1

1



1

 0

 1  = s2 − s1 

s2

 

−s1

implies that y1 (t) =

 1 s2 es1 t − s1 es2 t , 0 < t < π, s2 − s1

and y˙ 1 (t) =

 s1 s2 es1 t − es2 t , 0 < t < π. s2 − s1

On the interval π < t < 2π, y¨ + y = 0, so y1 (t; h1 , 0) = A cos(t − π) + B sin(t − π), π < t < 2π and y˙ 1 (t; h1 , 0) = −A sin(t − π) + B cos(t − π), π < t < 2π. 1 Continuity of the solution y1 (t; h1 , 0) and its first derivative at t = π requires s2 −s (s2 es1 π − s1 es2 π ) = y1 (π; h1 , 0) = 1 A and s1 s2 (es1 π − es2 π ) = y˙ 1 (π; h1 , 0) = B. s2 − s1 This gives

y1 (t; h1 , 0) =

1 s2 − s1

   s2 es1 π − s1 es2 π cos(t − π) + s1 s2 es1 π − es2 π sin(t − π) , π < t < 2π,

c Larry

Turyn, October 10, 2013

page 111

hence

 1 s2 es1 π − s1 es2 π . s2 − s1 To find y2 (t) for 0 < t < π, when |h1 | = 6 2 and h1 6= 0 the general solution of y¨ + h1 y˙ + y = 0 is y1 (2π; h1 , 0) = −

y2 (t; h1 , 0) = c1 es1 t + c2 es2 t and −h ±

y˙ 2 (t; h1 , h2 ) = c1 s1 es1 t + c2 s2 es2 t ,



h2 −4

2 2 where s1,2 = are two distinct, but possibly complex, numbers. The ICs require 0 = y(0) = c1 + c2 and 2 1 = y(0) ˙ = s1 c1 + s2 c2 . So,





c1



1

1

=

 c2

−1  

s1

0



s2

1

 1  = s2 − s1 

s2 −s1

  −1 1  =   s2 − s1 1 1 1

−1



0



implies that y2 (t) = and

 1 −es1 t + es2 t , 0 < t < π, s2 − s1

 1 −s1 es1 t + s2 es2 t , 0 < t < π. s2 − s1 On the interval π < t < 2π, y¨ + y = 0, so y˙ 2 (t) =

y2 (t; h1 , 0) = A cos(t − π) + B sin(t − π), π < t < 2π and y˙ 2 (t; h1 , 0) = −A sin(t − π) + B cos(t − π), π < t < 2π. 1 Continuity of the solution y2 (t; h1 , 0) and its first derivative at t = π requires s2 −s (−es1 π + es2 π ) = y2 (π; h1 , 0) = A 1 and 1 (−s1 es1 π + s2 es2 π ) = y˙ 2 (π; h1 , 0) = B. s2 − s1 This gives

y2 (t; h1 , 0) =

1 s2 − s1

   − es1 π + es2 π cos(t − π) + − s1 es1 π + s2 es2 π sin(t − π) , π < t < 2π,

hence y˙ 2 (t; h1 , 0) =

   1 − s1 es1 π − s2 es2 π sin(t − π) + − s1 es1 π + s2 es2 π cos(t − π) , π < t < 2π. s2 − s1

So,  1 s1 es1 π − s2 es2 π . s2 − s1 So, in the case that |h1 | 6= 2 and |h1 | 6= 0 and h2 = 0, the characteristic equation (?) is y˙ 2 (2π; h1 , 0) =

0 = µ2 −

  1 − s2 es1 π − s1 es2 π + s1 es1 π − s2 es2 π µ + e−(h1 +0)π , s2 − s1

that is,

0 = µ² + (e^{s1π} + e^{s2π}) µ + e^{−h1π}.

The equation satisfied by s_{1,2} is 0 = s² + h1 s + 1 and also is 0 = (s − s1)(s − s2) = s² − (s1 + s2)s + s1 s2. This implies that s1 s2 = 1 and s1 + s2 = −h1. So, the characteristic equation for µ can be rewritten as

0 = µ² + (e^{s1π} + e^{s2π}) µ + e^{(s1+s2)π} = (µ + e^{s1π})(µ + e^{s2π}),

so the characteristic multipliers are µ1 = −e^{s1π} and µ2 = −e^{s2π}, as in Case 3.

Case 8: If |h1| ≠ 2 and h1 ≠ 0 and |h2| = 2, then both y1(t; h1, h2) and y2(t; h1, h2) satisfy y¨ + h1 y˙ + y = 0, for 0 < t < π, and satisfy y¨ + h2 y˙ + y = 0, for π < t < 2π.


To find y1 (t) for 0 < t < π, when |h1 | = 6 2 and h1 6= 0 the general solution of y¨ + h1 y˙ + y = 0 is y1 (t; h1 , h2 ) = c1 es1 t + c2 es2 t and y˙ 1 (t; h1 , h2 ) = c1 s1 es1 t + c2 s2 es2 t ,



−h ±

h2 −4

2 1 where s1,2 = are two distinct, but possibly complex, numbers. The ICs require 1 = y(0) = c1 + c2 and 2 0 = y(0) ˙ = s1 c1 + s2 c2 . So,    −1        c1 1 1 1 s2 −1 1 s2 1 1  =   =   =   s2 − s1 s2 − s1 c2 s1 s2 0 −s1 1 0 −s1

implies that y1 (t) = and

 1 s2 es1 t − s1 es2 t , 0 < t < π, s2 − s1

 s1 s2 es1 t − es2 t , 0 < t < π. s2 − s1 On the interval π < t < 2π, y¨ + h2 y˙ + y = 0 and |h2 | = 2, so y˙ 1 (t) =

y1 (t; h1 , h2 ) = Ae−h2 (t−π)/2 + B(t − π)e−h2 (t−π)/2 , π < t < 2π and

 h2 h2 Ae−h2 (t−π)/2 + B 1 − (t − π) e−h2 (t−π)/2 , π < t < 2π. 2 2 1 (s2 es1 π − s1 es2 π ) = y1 (π; h1 , h2 ) = Continuity of the solution y1 (t; h1 , h2 ) and its first derivative at t = π requires s2 −s 1 A and s1 s2 h2 (es1 π − es2 π ) = y˙ 1 (π; h1 , h2 ) = − A + B. s2 − s1 2  s1 π s2 π 1 and thus This gives A = s2 −s s − s 2e 1e 1 y˙ 1 (t; h1 , h2 ) = −

B=

  1  h2 1  h2  s1 π h2  s2 π  s2 es1 π − s1 es2 π + s1 s2 es1 π − es2 π = s2 s1 + e − s1 s2 + e , s2 − s1 2 s2 − s1 2 2

so y1 (t; h1 , h2 )=

1 s2 − s1



   h2  s1 π h2  s2 π  s2 es1 π −s1 es2 π e−h2 (t−π)/2 + s2 s1 + e −s1 s2 + e (t − π)e−h2 (t−π)/2 , 2 2

for π < t < 2π. So, y1 (2π; h1 , h2 ) =

   h2  s1 π h2  s2 π  −h2 π/2 s2 es1 π − s1 es2 π e−h2 π/2 + s2 s1 + e −s1 s2 + e πe . 2 2     1 h2 π h2 π = s2 s1 π + + 1 es1 π −s1 s2 π + + 1 es2 π e−h2 π/2 s2 − s1 2 2 1 s2 − s1



To find y2 (t) for 0 < t < π, when |h1 | = 6 2 and h1 6= 0 the general solution of y¨ + h1 y˙ + y = 0 is y2 (t; h1 , h2 ) = c1 es1 t + c2 es2 t and √

−h ±

y˙ 2 (t; h1 , h2 ) = c1 s1 es1 t + c2 s2 es2 t ,

h2 −4

2 1 are two distinct, but possibly complex, numbers. The ICs require 0 = y(0) = c1 + c2 and where s1,2 = 2 1 = y(0) ˙ = s1 c1 + s2 c2 . So,    −1        c1 1 1 0 s2 −1 0 −1 1 1  =   =   =   s2 − s1 s2 − s1 c2 −s1 1 1 1 s1 s2 1

implies that y2 (t) =

(1/(s2 − s1)) ( −e^{s1 t} + e^{s2 t} ), 0 < t < π,

and

 1 −s1 es1 t + s2 es2 t , 0 < t < π. s2 − s1 On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, so y˙ 2 (t) =

y2 (t; h1 , h2 ) = Ae−h2 (t−π)/2 + B(t − π)e−h2 (t−π)/2 , π < t < 2π and

 h2 h2 Ae−h2 (t−π)/2 + B 1 − (t − π) e−h2 (t−π)/2 , π < t < 2π, 2 2 1 Continuity of the solution y2 (t; h1 , h2 ) and its first derivative at t = π requires s2 −s (−es1 π + es2 π ) = y2 (π; h1 , h2 ) = 1 A and 1 h2 (−s1 es1 π + s2 es2 π ) = y˙ 2 (π; h1 , h2 ) = − A + B. s2 − s1 2  s1 π s2 π 1 This gives A = s2 −s − e + e and thus 1 y˙ 2 (t; h1 , h2 ) = −

B=

1  h2 s2 − s1 2

  − es1 π + es2 π + − s1 es1 π + s2 es2 π =

h2  s1 π 1  h2  s2 π  − s1 + e + s2 + e . s2 − s1 2 2

This gives y2 (t; h1 , h2 )=

1 s2 − s1



   h2  s1 π h2  s2 π  − es1 π + es2 π e−h2 (t−π)/2 + − s1 + e + s2 + e (t − π)e−h2 (t−π)/2 , 2 2

for π < t < 2π. So, y˙ 2 (t; h1 , h2 ) =

 1  h2 − − es1 π + es2 π e−h2 (t−π)/2 s2 − s1 2    h2  s2 π  h2 h2  s1 π e + s2 + e 1− (t − π) e−h2 (t−π)/2 , + − s1 + 2 2 2

hence    h2 h2  s1 π h2  s2 π  h2  −h2 π/2 − es1 π + es2 π + − s1 + e + s2 + e 1− π e 2 2 2 2   1 h2 h2 π  s1 π  h2 h2 π  s2 π −h2 π/2 = − s1 + s1 + e + s2 − s2 + e e s2 − s1 2 2 2 2

y˙ 2 (2π; h1 , h2 ) =

1 s2 − s1





So, in the case that |h1 | 6= 2 and |h1 6= 0 and |h2 | = 2, the characteristic equation (?) is 0 = µ2 −

  1  h2 π h2 π s2 s1 π + + 1 es1 π −s1 s2 π + + 1 es2 π s2 − s1 2 2     h2 π h2 h2 π  s2 π  −h2 π/2 h2 s1 + es1 π + s2 − s2 + e e µ + e−(h1 +h2 )π , + − s1 + 2 2 2 2

that is, 1  h2 π h2 π  s2 − s1 + s1 s2 π + (s1 + s2 ) + 2 es1 π s2 − s1 2 4   h2 π h2 π  + s2 − s1 − s1 s2 π − (s1 + s2 ) − 2 es2 π µ + e−(h1 +h2 )π . 2 4 2 The equation satisfied by s1,2 is 0 = s + h1 s + 1 and also is 0 = (s − s1 )(s − s2 ) = s2 − (s1 + s2 )s + s1 s2 . This implies that s1 s2 = 1 and s1 + s2 = −h1 . Also, |h2 | = 2 implies that h22 = 4.  h h   h h  π 1 2 1 2 2 s1 π 0=µ − s2 − s1 − π −2 e + s2 − s1 + π −2 e−h1 π/2 µ + e−(h1 +h2 )π , s2 − s1 2 2 0 = µ2 −

that is, 2

0=µ −

 e

s1 π

+e

s2 π 

  π  h1 h2 s1 π s2 π  − −2 e −e e−h1 π/2 µ + e−(h1 +h2 )π , s2 − s1 2

As in Case 6, the characteristic multipliers are µ1,2 =

(e^{−h1π/2}/2) ( α ± √(α² − 4e^{−h2π}) )

where  α , es1 π + es2 π −

   π h1 h2 − 4 es1 π − es2 π . 2(s2 − s1 )

Case 9 : If |h1 | 6= 2 and h1 6= 0, and |h2 | 6= 2 and h2 6= 0, then both y1 (t; h1 , h2 ) and y2 (t; h1 , h2 ) satisfy y¨+h1 y+y ˙ = 0, for 0 < t < π, and satisfy y¨ + h2 y˙ + y = 0, for π < t < 2π. To find y1 (t) for 0 < t < π, when |h1 | = 6 2 and h1 6= 0 the general solution of y¨ + h1 y˙ + y = 0 is y1 (t; h1 , h2 ) = c1 es1 t + c2 es2 t and −h ±

y˙ 1 (t; h1 , h2 ) = c1 s1 es1 t + c2 s2 es2 t ,



h2 −4

2 1 are two distinct, but possibly complex, numbers. The ICs require 1 = y(0) = c1 + c2 and where s1,2 = 2 0 = y(0) ˙ = s1 c1 + s2 c2 . So,         −1   s2 −1 1 s2 c1 1 1 1 1 1   =    =   = s2 − s1 s2 − s1 −s1 1 0 −s1 c2 s1 s2 0

implies that y1 (t) = and

 1 s2 es1 t − s1 es2 t , 0 < t < π, s2 − s1

 s1 s2 es1 t − es2 t , 0 < t < π. s2 − s1 On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, and |h2 | = 6 2 and h2 6= 0, so y˙ 1 (t) =

y1 (t; h1 , h2 ) = Aer1 (t−π) + Ber2 (t−π) and −h ±



y˙ 1 (t; h1 , h2 ) = Ar1 er1 (t−π) + Br2 er2 (t−π) ,

h2 −4

2 2 where r1,2 = are two distinct, but possibly complex, numbers. 2 1 Continuity of the solution y1 (t; h1 , h2 ) and its first derivative at t = π requires s2 −s (s2 es1 π − s1 es2 π ) = 1 y1 (π; h1 , h2 ) = A + B and s1 s2 (es1 π − es2 π ) = y˙ 1 (π; h1 , h2 ) = r1 A + r2 B. s2 − s1 So, −1         1 (s2 es1 π − s1 es2 π ) A 1 1 r2 −1 (s2 es1 π − s1 es2 π ) s2 −s1 1   =  =    (s2 − s1 )(r2 − r1 ) s1 s2 s1 π s2 π s1 π s2 π B r1 r2 (e − e ) −r 1 s s (e − e ) 1 1 2 s2 −s1

 1  = (s2 − s1 )(r2 − r1 )

r2 (s2 es1 π − s1 es2 π ) − s1 s2 (es1 π − es2 π ) −r1 (s2 es1 π − s1 es2 π ) + s1 s2 (es1 π − es2 π )

 .

It follows that y1 (t; h1 , h2 ) =

  1 r2 (s2 es1 π − s1 es2 π ) − s1 s2 (es1 π − es2 π ) er1 (t−π) (s2 − s1 )(r2 − r1 )   + − r1 (s2 es1 π − s1 es2 π ) + s1 s2 (es1 π − es2 π ) er2 (t−π) , π < t < 2π.

So, y1 (2π; h1 , h2 ) =

  1 r2 (s2 es1 π − s1 es2 π ) − s1 s2 (es1 π − es2 π ) er1 π (s2 − s1 )(r2 − r1 )   + − r1 (s2 es1 π − s1 es2 π ) + s1 s2 (es1 π − es2 π ) er2 π .

To find y2(t) for 0 < t < π, when |h1| ≠ 2 and h1 ≠ 0 the general solution of y¨ + h1 y˙ + y = 0 is y2(t; h1, h2) = c1 e^{s1 t} + c2 e^{s2 t}

and y˙ 2 (t; h1 , h2 ) = c1 s1 es1 t + c2 s2 es2 t ,



−h ±

h2 −4





2 2 where s1,2 = are two distinct, but possibly complex, numbers. The ICs require 0 = y(0) = c1 + c2 and 2 1 = y(0) ˙ = s1 c1 + s2 c2 . So,

c1



1

1

s1

s2

=

 c2

−1  

0

 1



 1 =  s2 − s1

s2 −s1

  −1 1  =   s2 − s1 1 1 1

−1



0



implies that y2 (t) = and

 1 −es1 t + es2 t , 0 < t < π, s2 − s1

 1 −s1 es1 t + s2 es2 t , 0 < t < π. s2 − s1 On the interval π < t < 2π, y¨ + h2 y˙ + y = 0, and |h2 | = 6 2 and h2 6= 0, so y˙ 2 (t) =

y2 (t; h1 , h2 ) = Aer1 (t−π) + Ber2 (t−π) and √

−h ±

y˙ 2 (t; h1 , h2 ) = Ar1 er1 (t−π) + Br2 er2 (t−π) , h2 −4

2 2 where r1,2 = are two distinct, but possibly complex, numbers. 2 1 (−es1 π +es2 π ) = y2 (π; h1 , h2 ) = Continuity of the solution y2 (t; h1 , h2 ) and its first derivative at t = π requires s2 −s 1 A + B and 1 (−s1 es1 π + s2 es2 π ) = y˙ 1 (π; h1 , h2 ) = r1 A + r2 B. s2 − s1 So,     −1     1 (−es1 π + es2 π ) r2 −1 −es1 π + es2 π A 1 1 s2 −s1 1  =    =   (s2 − s1 )(r2 − r1 ) s1 π s2 π s1 π s2 π 1 −s e + s e (−s e + s e ) −r 1 B r1 r2 1 2 1 2 1 s2 −s1

 1  = (s2 − s1 )(r2 − r1 )

r2 (−es1 π + es2 π ) − (−s1 es1 π + s2 es2 π ) −r1 (−es1 π + es2 π ) + (−s1 es1 π + s2 es2 π )

 .

It follows that y2 (t; h1 , h2 ) =

  1 r2 (−es1 π + es2 π ) − (−s1 es1 π + s2 es2 π ) er1 (t−π) (s2 − s1 )(r2 − r1 )   + − r1 (−es1 π + es2 π ) + (−s1 es1 π + s2 es2 π ) er2 (t−π) , π < t < 2π,

and y˙ 2 (t; h1 , h2 ) =

  1 r2 (−es1 π + es2 π ) − (−s1 es1 π + s2 es2 π ) r1 er1 (t−π) (s2 − s1 )(r2 − r1 )   + − r1 (−es1 π + es2 π ) + (−s1 es1 π + s2 es2 π ) r2 er2 (t−π) , π < t < 2π.

hence y˙ 2 (2π; h1 , h2 ) =

  1 r2 (−es1 π + es2 π ) − (−s1 es1 π + s2 es2 π ) r1 er1 π (s2 − s1 )(r2 − r1 )  + − r1 (−es1 π + es2 π ) + (−s1 es1 π + s2 es2 π ) r2 er2 π .

So, in the case that |h1| ≠ 2 and h1 ≠ 0 and |h2| ≠ 2 and h2 ≠ 0, the characteristic equation (?) is

0 = µ² − (1/((s2 − s1)(r2 − r1))) [ ( r2 (s2 e^{s1π} − s1 e^{s2π}) − s1 s2 (e^{s1π} − e^{s2π}) ) e^{r1π} + ( −r1 (s2 e^{s1π} − s1 e^{s2π}) + s1 s2 (e^{s1π} − e^{s2π}) ) e^{r2π}

 + r2 (−es1 π + es2 π ) − (−s1 es1 π + s2 es2 π ) r1 er1 π   + − r1 (−es1 π + es2 π ) + (−s1 es1 π + s2 es2 π ) r2 er2 π µ + e−(h1 +h2 )π , that is, 0 = µ2 −

  1 r2 (s2 es1 π − s1 es2 π ) − s1 s2 (es1 π − es2 π ) + r1 r2 (−es1 π + es2 π ) − (−s1 es1 π + s2 es2 π ) er1 π (s2 − s1 )(r2 − r1 )    + − r1 (s2 es1 π − s1 es2 π ) + s1 s2 (es1 π − es2 π ) + r2 − r1 (−es1 π + es2 π ) + (−s1 es1 π + s2 es2 π ) er2 π µ +e−(h1 +h2 )π ,

that is 0 = µ2 −

  1 r2 (s2 es1 π − s1 es2 π ) − (s1 s2 + r1 r2 )(es1 π − es2 π ) − r1 (−s1 es1 π + s2 es2 π ) er1 π (s2 − s1 )(r2 − r1 )    + − r1 (s2 es1 π − s1 es2 π ) + (s1 s2 + r1 r2 )(es1 π − es2 π ) + r2 (−s1 es1 π + s2 es2 π ) er2 π µ +e−(h1 +h2 )π ,

that is 0 = µ2 −

    1 r2 s2 − s1 s2 − r1 r2 + r1 s1 es1 π + r2 s2 + s1 s2 + r1 r2 + r1 s1 es2 π er1 π (s2 − s1 )(r2 − r1 )      + − r1 s2 + s1 s2 + r1 r2 − r2 s1 es1 π + r1 s2 − s1 s2 − r1 r2 + r2 s2 es2 π er1 π µ +e−(h1 +h2 )π ,

The equation satisfied by s1,2 is 0 = s2 + h1 s + 1 and also is 0 = (s − s1 )(s − s2 ) = s2 − (s1 + s2 )s + s1 s2 . This implies that s1 s2 = 1 and s1 + s2 = −h1 . Similarly, the equation satisfied by r1,2 being 0 = r2 + h2 r + 1 and also being 0 = (r − r1 )(r − r2 ) = r2 − (r1 + r2 )s + r1 r2 implies that r1 r2 = 1 and r1 + r2 = −h2 . So, the characteristic equation for µ can be rewritten as     1 r2 s2 − 2 + r1 s1 es1 π + r2 s2 + 2 + r1 s1 es2 π er1 π 0 = µ2 − (s2 − s1 )(r2 − r1 )      + − r1 s2 + 2 − r2 s1 es1 π + r1 s2 − 2 + r2 s2 es2 π er1 π µ + e−(h1 +h2 )π .

(b) Unfortunately, what the problem asks us to do is impossible! It is not true that "the system is asymptotically stable if, and only if, h1 + h2 > 0." It is true that "if h1 + h2 ≤ 0 then the system cannot be asymptotically stable." Another way of saying this is that "if the system is asymptotically stable, then it follows that h1 + h2 > 0." We will explain why this is true below.
The system can be unstable even if h1 + h2 > 0. For example, for h1 = 2 and h2 = −1.9, Mathematica calculations using NDSolve[{y''[t] + b[t] y'[t] + y[t] == 0, ...}] give characteristic multipliers µ1 ≈ −0.027162107819620473 and µ2 ≈ −26.890469516232464, hence |µ2| > 1, which implies instability of the system.
So, why is it true that "if h1 + h2 ≤ 0 then the system cannot be asymptotically stable"? Recall that the characteristic equation satisfied by the characteristic multipliers is

(?) 0 = µ² − ( y1(2π; h1, h2) + y˙2(2π; h1, h2) ) µ + e^{−(h1+h2)π}.

The two characteristic multipliers are µ1 and µ2, possibly including repetition. They satisfy (µ − µ1)(µ − µ2) = µ² − (µ1 + µ2)µ + µ1µ2. Comparing this with (?) we conclude that µ1µ2 = e^{−(h1+h2)π}. This implies that the system is not asymptotically stable if h1 + h2 ≤ 0. Why? If h1 + h2 ≤ 0 then µ1µ2 ≥ 1, hence |µ1| |µ2| = |µ1µ2| ≥ 1, hence |µ1| ≥ 1 and/or |µ2| ≥ 1, hence the system cannot be asymptotically stable, by Theorem 5.25.
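The quoted Mathematica computation is easy to reproduce. The following sketch (Python/scipy; the helper name `monodromy` is ours) computes the multipliers for h1 = 2, h2 = −1.9 and checks the product identity µ1µ2 = e^{−(h1+h2)π}:

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(h1, h2):
    # X(2*pi) for y'' + b(t) y' + y = 0 with b = h1 on (0, pi), b = h2 on (pi, 2*pi)
    rhs = lambda b: (lambda t, y: [y[1], -b * y[1] - y[0]])
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):       # initial data of y1 and y2
        mid = solve_ivp(rhs(h1), (0.0, np.pi), y0,
                        rtol=1e-12, atol=1e-14).y[:, -1]
        end = solve_ivp(rhs(h2), (np.pi, 2.0 * np.pi), mid,
                        rtol=1e-12, atol=1e-14).y[:, -1]
        cols.append(end)
    return np.array(cols).T

h1, h2 = 2.0, -1.9
mu = np.linalg.eigvals(monodromy(h1, h2))
# Even though h1 + h2 = 0.1 > 0, one multiplier has |mu| > 1 (instability),
# while the product still equals e^{-(h1+h2)*pi} < 1.
print(mu)
print(np.prod(mu), np.exp(-(h1 + h2) * np.pi))
```

The product identity is exactly the determinant relation used in the argument above, so the script confirms both the instability example and the reason µ1µ2 < 1 cannot by itself force stability.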


5.8.6.9. The system is periodic with period T = 2π. The first equation in the system, x˙ 1 = solvable using the method of separation of variables:  ˆ  cos t ln |x1 | = c + −1 − dt = −t − ln |2 + sin t|, 2 + sin t



where c is an arbitrary constant, yields x1 (t) = c1 e−t (2 + sin t)−1 . The initial value is x1 (0) = solution of the first ODE in the system can be written as x1 (t) = e−t ·

−1 −

1 2

cos t 2+sin t



x1 , is

c1 , so the general

2 · x1 (0). 2 + sin t

Substitute that into the second ODE in the system, rearrange terms to put it into the standard form of a first order 2 linear ODE, x˙ 2 + x2 = 2(cos t) · e−t · 2+sin · x1 (0). After that, multiply through by the integrating factor of et to get t d  t  4 cos t 4 cos t · x1 (0) = · x1 (0). e x2 = et (x˙ 2 + x2 )= et · e−t · dt 2 + sin t 2 + sin t Indefinite integration with respect to t of both sides yields et x2 (t) = c2 + 4 ln(2 + sin t)x1 (0), so

  x2 (t) = e−t c2 + 4 ln(2 + sin t)x1 (0) .

The initial value is x2(0) = c2 + (4 ln 2) x1(0), so the solutions of the second ODE can be written in the form

x2(t) = e^{−t} ( x2(0) + 4(−ln 2 + ln(2 + sin t)) x1(0) ) = e^{−t} ( x2(0) + 4 ln((2 + sin t)/2) x1(0) ).

To find a fundamental matrix, first summarize the general solution by writing it in matrix-times-vector form:

x(t) = e^{−t} [ 2/(2 + sin t)  0 ; 4 ln((2 + sin t)/2)  1 ] x(0).

So, a fundamental matrix is given by

X(t) = e^{−t} [ 2/(2 + sin t)  0 ; 4 ln((2 + sin t)/2)  1 ].

In particular, substitute in t = 2π to see that in this problem X(T) = X(2π) = e^{−2π} I, so the characteristic multipliers, being the eigenvalues of X(T), are µ = e^{−2π}, e^{−2π}. We can take Q = I, D = e^{−2π} I in the calculation of the Floquet representation, hence S = Q = I,

E = diag( (1/(2π)) ln(e^{−2π}), (1/(2π)) ln(e^{−2π}) ) = −I,

and C = −I. The Floquet representation here is X(t) = P(t)e^{tC} = P(t)(e^{−t} I), so in this example

P(t) = e^{t} X(t) = [ 2/(2 + sin t)  0 ; 4 ln((2 + sin t)/2)  1 ].

To summarize, a Floquet representation is given by

X(t) = P(t)e^{tC} = [ 2/(2 + sin t)  0 ; 4 ln((2 + sin t)/2)  1 ] (e^{−t} I).

Turyn, October 10, 2013

page 118

( 5.8.6.10.

x˙ 1 =



−1 +

sin t 2+cos t



−(sin t)x1

x˙ 2 =

)

x1 −x2

 The system is periodic with period T = 2π. The first equation in the system, x˙ 1 = −1 + using the method of separation of variables:  ˆ  sin t ln |x1 | = c + −1 + dt = −t − ln |2 + cos t|, 2 + cos t

sin t 2+cos t

where c is an arbitrary constant, yields x1 (t) = c1 e−t (2 + cos t)−1 . The initial value is x1 (0) = solution of the first ODE in the system can be written as x1 (t) = e−t ·

1 3



x1 , is solvable

c1 , so the general

3 · x1 (0). 2 + cos t

Substitute that into the second ODE in the system, rearrange terms to put it into the standard form of a first order 3 linear ODE, x˙ 2 + x2 = (sin t) · e−t · 2+cos · x1 (0). After that, multiply through by the integrating factor of et to get t d  t  3 sin t 3 sin t · x1 (0) = · x1 (0). e x2 = et (x˙ 2 + x2 )= et · e−t · dt 2 + cos t 2 + cos t Indefinite integration with respect to t of both sides yields et x2 (t) = c2 + 3 ln(2 + cos t)x1 (0), so

  x2 (t) = e−t c2 + 3 ln(2 + cos t)x1 (0) .

The initial value is x2 (0) = c2 + (3 ln 3)x1 (0), so the solutions of the second ODE can be written in the form      2 + cos t   −t −t x1 (0) . x2 (t) = e x2 (0) + 3 ln 3 + ln(2 + cos t) x1 (0) = e x2 (0) + 3 ln 3 To find a fundamental matrix, first summarize the general solution by writing it in matrix times vector form:   x(t) =  

e−t ·

3 2+cos t



· x1 (0)



e−t x2 (0) + 3 ln

 2 + cos t  3



    = e−t    x1 (0) 3 ln

3 2 + cos t  2 + cos t  3

0

   x(0). 

1

So, a fundamental matrix is given by   X(t) = e−t   3 ln

3 2 + cos t  2 + cos t  3

0

  . 

1

In particular, substitute in t = 2π to see that in this problem X(T) = X(2π) = e−2π I, so the characteristic multipliers, being the eigenvalues of X(T), are µ = e−2π , e−2π . We can take Q = I, D = e−2π I in the calculation of the Floquet representation, hence S = Q = I,   1 1 ln(e−2π ), ln(e−2π ) = −I, E = diag 2π 2π  and C = −I. The Floquet representation here is X(t) = P (t)etC = P (t) e−t I , so in this example   P (t) = et X(t) =   3 ln

3 2 + cos t  2 + cos t  3

0

   

1

c Larry

Turyn, October 10, 2013

page 119

To summarize, a Floquet representation is given by    X(t) = P (t)etC =   3 ln

3 2 + cos t  2 + cos t  3

0

  −t   e I . 

1

5.8.6.11. Using the product rule and the fact that (d/dt)[ e^{tC} ] = Ce^{tC}, we calculate Ẋ(t) = Ṗ(t)e^{tC} + P(t)Ce^{tC}. Using the information given about P(t), C, A0, and Ω, along with (5.29) in Section 5.2 in the textbook, we have

Ẋ(t) = (d/dt)[ e^{tΩ} ] e^{tC} + e^{tΩ}Ce^{tC} = Ωe^{tΩ}e^{tC} + e^{tΩ}Ce^{tC} = e^{tΩ}Ωe^{tC} + e^{tΩ}Ce^{tC} = e^{tΩ}(Ω + C)e^{tC} = e^{tΩ}(Ω + A0 − Ω)e^{tC} = e^{tΩ}A0 e^{tC},

while, on the other hand, we have

A(t)X(t) = ( e^{tΩ}A0 e^{−tΩ} ) e^{tΩ}e^{tC} = e^{tΩ}A0 ( e^{−tΩ}e^{tΩ} ) e^{tC} = e^{tΩ}A0 (I) e^{tC} = e^{tΩ}A0 e^{tC}.

So, Ẋ(t) = A(t)X(t). Moreover, |X(t)| = | e^{tΩ}e^{tC} | = | e^{tΩ} | | e^{tC} | ≠ 0. So, X(t) := e^{tΩ}e^{tC} is a fundamental matrix of solutions for ẋ = A(t)x. In fact, X(t) is the principal fundamental matrix of solutions at t = 0, because X(0) = e^{0·Ω}e^{0·C} = I · I = I. Because P(t) := e^{tΩ} is T-periodic and C is a constant matrix, X(t) = P(t)e^{tC} is a Floquet representation for ẋ = A(t)x.

Note: At no time in the above work did we assume that the matrices A0 and Ω commute, that is, we did not assume that A0Ω = ΩA0. So, at no time did we claim, without justification, that e^{tC} = e^{t(A0 − Ω)} could be rewritten as e^{tA0}e^{−tΩ} or as e^{−tΩ}e^{tA0}.

5.8.6.12. X(t) is the principal fundamental matrix at t = 0 for ẋ = A(t)x, hence X(t) is invertible at all t. So, X(T) is invertible, hence cannot have zero as an eigenvalue. Because the characteristic multipliers are the eigenvalues of X(T), zero cannot be a characteristic multiplier.

5.8.6.13. We will study this problem in nine cases:

(1) δ + ε > 0 and δ − ε > 0,  (2) δ + ε > 0 and δ − ε = 0,  (3) δ + ε > 0 and δ − ε < 0,
(4) δ + ε = 0 and δ − ε > 0,  (5) δ + ε = 0 and δ − ε = 0,  (6) δ + ε = 0 and δ − ε < 0,
(7) δ + ε < 0 and δ − ε > 0,  (8) δ + ε < 0 and δ − ε = 0,  (9) δ + ε < 0 and δ − ε < 0.
In Cases (1), (2), and (3), we assume δ + ε > 0 and denote ω := √(δ + ε). Let y1(t; δ, ε) solve (a) ÿ + (δ + ε)y = 0, 0 < t < π, with initial data y1(0; δ, ε) = 1, ẏ1(0; δ, ε) = 0. The general solution of the ODE is y(t) = c1 cos(ωt) + c2 sin(ωt), so the ICs require 1 = y(0) = c1 and 0 = ẏ(0) = ωc2. These conditions imply c1 = 1 and c2 = 0, so y1(t; δ, ε) = cos(ωt). Let y2(t; δ, ε) solve (a) ÿ + (δ + ε)y = 0, 0 < t < π, with initial data y2(0; δ, ε) = 0, ẏ2(0; δ, ε) = 1. The general solution of the ODE is y(t) = c1 cos(ωt) + c2 sin(ωt), so the ICs require 0 = y(0) = c1 and 1 = ẏ(0) = ωc2. These conditions imply c1 = 0 and c2 = ω⁻¹, so y2(t; δ, ε) = ω⁻¹ sin(ωt).

Case 1: We assume δ + ε > 0. We also assume δ − ε > 0 and denote ν = √(δ − ε). The general solution of the ODE (b) ÿ + (δ − ε)y = 0 on the interval π < t < 2π can be written as y(t; δ, ε) = constant · cos( ν(t − π) ) + constant · sin( ν(t − π) ). The ODE implies y1(t; δ, ε) = A cos( ν(t − π) ) + B sin( ν(t − π) ) on π < t < 2π, so continuity of the function y1(t) = cos(ωt) and its first derivative, ẏ1(t) = −ω sin(ωt), for 0 < t < π, requires

cos(ωπ) = y1(π; δ, ε) = A  and  −ω sin(ωπ) = ẏ1(π; δ, ε) = Bν.

Putting our results, so far, together we have

y1(t; δ, ε) = { cos(ωt), 0 < t < π ;  cos(ωπ) cos( ν(t − π) ) − (ω/ν) sin(ωπ) sin( ν(t − π) ), π < t < 2π }.
(b) Almost all of the calculations we did in part (a) apply to this situation, so we get that the spherical shell of matter exerts a gravitational force of

F = 0 · î + 0 · ĵ − 2πmGϱ0 · (1/(3ρ1²)) [ ( |b + ρ1|(b² − bρ1 + ρ1²) − |b − ρ1|(b² + bρ1 + ρ1²) ) − ( |a + ρ1|(a² − aρ1 + ρ1²) − |a − ρ1|(a² + aρ1 + ρ1²) ) ] k̂.

Because ρ1 > b > a,

F = −2πmGϱ0 (1/(3ρ1²)) [ ( (b + ρ1)(b² − bρ1 + ρ1²) − (ρ1 − b)(b² + bρ1 + ρ1²) ) − ( (a + ρ1)(a² − aρ1 + ρ1²) − (ρ1 − a)(a² + aρ1 + ρ1²) ) ] k̂

= −2πmGϱ0 (1/(3ρ1²)) ( 2b³ − 2a³ ) k̂ = −(4πϱ0/3)(b³ − a³) · (mG/ρ1²) k̂ = −(mMG/ρ1²) k̂,

where M is the mass of the spherical shell V. It follows that if the point P1 is outside the spherical shell of matter then it experiences a gravitational force of

F = −(mMG/ρ1²) k̂ = −( GMm/‖r1‖³ ) r1,
where r1 = OP₁⃗.

7.6.4.16. Using the results of problem 7.6.4.15, we have that the gravitational force is

(a) F = 0,  (b) F = −( GM1 m/‖r1‖³ ) r1,  (c) F = −( G(M1 + M2)m/‖r2‖³ ) r2,  respectively.
7.6.4.17. Traveling along a curve C: r = r(t), a ≤ t ≤ b, the non-zero tangent vectors are ṙ(t) = ẋ(t) î + ẏ(t) ĵ. It follows that n := ẏ(t) î − ẋ(t) ĵ is normal to C. Let us leave aside for the moment the question of why n points out of D. Note that n̂ = n/‖n‖ and

‖n‖ = √( (ẏ(t))² + (−ẋ(t))² ) = √( (ẋ(t))² + (ẏ(t))² ) = ‖ṙ‖,

hence

n̂ ds = (1/‖n‖)( ẏ(t) î − ẋ(t) ĵ ) ‖ṙ‖ dt = ( ẏ(t) î − ẋ(t) ĵ ) dt.

Next, rewrite (P î + Q ĵ) • ( ẏ(t) î − ẋ(t) ĵ ) dt = (−Q î + P ĵ) • ( dx î + dy ĵ ) = −Q dx + P dy. It follows that

∮_C (P î + Q ĵ) • n̂ ds = ∮_C ( −Q dx + P dy ).

Green's Theorem implies that

∮_C (P î + Q ĵ) • n̂ ds = ∮_C ( −Q dx + P dy ) = ∬_D ( ∂P/∂x + ∂Q/∂y ) dA,

that is,

(∗)  ∮_C F • n̂ ds = ∬_D ∇ • F dA,

as was desired.

So, why does n := ẏ(t) î − ẋ(t) ĵ point out of D? It will suffice to show that ṙ(t) = ẋ(t) î + ẏ(t) ĵ is always to the left of n:
Case 1: If ṙ(t) is in the first quadrant, QI, that is, ẋ(t) > 0 and ẏ(t) > 0, then ẏ(t) > 0 and −ẋ(t) < 0, so n is in QIV, hence ṙ(t) is to the left of n.
Case 2: If ṙ(t) is in the second quadrant, QII, that is, ẋ(t) < 0 and ẏ(t) > 0, then ẏ(t) > 0 and −ẋ(t) > 0, so n is in QI, hence ṙ(t) is to the left of n.
Case 3: If ṙ(t) is in the third quadrant, QIII, that is, ẋ(t) < 0 and ẏ(t) < 0, then ẏ(t) < 0 and −ẋ(t) > 0, so n is in QII, hence ṙ(t) is to the left of n.
Case 4: If ṙ(t) is in the fourth quadrant, QIV, that is, ẋ(t) > 0 and ẏ(t) < 0, then ẏ(t) < 0 and −ẋ(t) < 0, so n is in QIII, hence ṙ(t) is to the left of n.

7.6.4.18. By Stokes's Theorem,

∮_C ( f∇g ) • dr = ∬_S ∇ × ( f∇g ) • dS.

We calculate

∇ × ( f∇g ) = ∇ × ( f ∂g/∂x î + f ∂g/∂y ĵ + f ∂g/∂z k̂ )

= [ ∂/∂y( f ∂g/∂z ) − ∂/∂z( f ∂g/∂y ) ] î + [ ∂/∂z( f ∂g/∂x ) − ∂/∂x( f ∂g/∂z ) ] ĵ + [ ∂/∂x( f ∂g/∂y ) − ∂/∂y( f ∂g/∂x ) ] k̂

= ( ∂f/∂y ∂g/∂z − ∂f/∂z ∂g/∂y ) î + ( ∂f/∂z ∂g/∂x − ∂f/∂x ∂g/∂z ) ĵ + ( ∂f/∂x ∂g/∂y − ∂f/∂y ∂g/∂x ) k̂ = (∇f) × (∇g),

because the terms involving the mixed second partial derivatives of g cancel in pairs. So, Stokes's Theorem yields

∮_C ( f∇g ) • dr = ∬_S ∇ × ( f∇g ) • dS = ∬_S ( ∇f × ∇g ) • dS,

as desired.

Section 7.7

7.7.2.1. Using the results of Example 7.50, the standard deviation of the fair die of Example 7.45 is

σ = √( E[X²] − (E[X])² ) = √( 91/6 − (7/2)² ) = √(35/12).

7.7.2.2. (a) In order for f(x) = αe^{−λ|x|} to be a density function, we must have

1 = ∫_{−∞}^{∞} f(x) dx = α ∫_{−∞}^{∞} e^{−λ|x|} dx.

Using the fact that f(x) is an even function and the fact that λ is a positive constant, this criterion can be re-written as

1 = 2α ∫_0^∞ e^{−λx} dx = lim_{b→∞} 2α ∫_0^b e^{−λx} dx = lim_{b→∞} 2α [ −e^{−λx}/λ ]_0^b = lim_{b→∞} ( −2α( e^{−λb} − 1 )/λ ) = 2α/λ.

So, we must choose α = λ/2.
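Both computations above are easy to confirm numerically. A quick sketch, stdlib only; the value λ = 1.7 below is an arbitrary choice made for illustration:

```python
from fractions import Fraction
import math

# 7.7.2.1: fair die, X uniform on {1, ..., 6}
outcomes = range(1, 7)
EX = Fraction(sum(outcomes), 6)                 # E[X] = 7/2
EX2 = Fraction(sum(k * k for k in outcomes), 6) # E[X^2] = 91/6
var = EX2 - EX**2
assert var == Fraction(35, 12)                  # so sigma = sqrt(35/12)

# 7.7.2.2(a): with alpha = lambda/2, the density integrates to 1
lam = 1.7                                       # arbitrary positive lambda
f = lambda x: (lam / 2) * math.exp(-lam * abs(x))
# composite midpoint rule on [-40, 40]; the tails beyond are negligible
n, a, b = 80000, -40.0, 40.0
h = (b - a) / n
total = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
assert abs(total - 1.0) < 1e-6
```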

(b) All of the moments exist: First, (i) for any positive, odd integer n, symmetry implies

P.v. E[Xⁿ] = P.v. ∫_{−∞}^{∞} xⁿ f(x) dx = α · P.v. ∫_{−∞}^{∞} xⁿ e^{−λ|x|} dx = 0,

as long as we can show that the improper integral converges.

7.7.2.3. Continuing from the hint,

E[(X − µ)²] = E[X²] − 2E[µX] + E[µ²] = E[X²] − 2µE[X] + µ²E[1] = E[X²] − 2µ · µ + µ² · 1 = E[X²] − µ²,

as we wanted to show.

7.7.2.4. By Theorem 7.21, the random variable Z := X + Y has PDF

k(x) = ∫_{−∞}^{∞} f(u)g(x − u) du,

where both X and Y have the same probability density function (PDF),

f(x) = { 0, x < 0 ;  αe^{−λx}, x ≥ 0 }.
… > 1, so this solution of the difference equation has |y_i| → ∞ as i → ∞. This is different from all of the solutions of the original ODE, which have y(t) → 0 as t → ∞.

8.7.7.14. The difference equation has characteristic polynomial r² − (2 − 4h²)r + 1, which has roots

r = ( 2 − 4h² ± √( (2 − 4h²)² − 4 ) ) / 2 = 1 − 2h² ± 2h√(h² − 1).

For 0 < h < 1, …

8.7.7.15. Euler's Method for the system is x_{k+1} = (I + hA)x_k. The matrix I + hA has eigenvalue 1 + ih with eigenvector v1 = (−i, 1)^T; its modulus is ρ := √(1 + h²) and, for h > 0 or h < 0, 1 + ih lies in the first or fourth quadrant, respectively, so ω = arctan(h). We have

ρᵏ( cos ωk + i sin ωk ) v1 = (1 + h²)^{k/2} ( cos( k arctan(h) ) + i sin( k arctan(h) ) ) (−i, 1)^T

= (1 + h²)^{k/2} [ ( sin( k arctan(h) ), cos( k arctan(h) ) )^T + i ( −cos( k arctan(h) ), sin( k arctan(h) ) )^T ].

The system of difference equations that is Euler's Method for the system has general solution a linear combination of the real and imaginary parts of ρᵏ( cos ωk + i sin ωk )v1, that is,

x_k = (1 + h²)^{k/2} [ c1 ( sin( k arctan(h) ), cos( k arctan(h) ) )^T + c2 ( −cos( k arctan(h) ), sin( k arctan(h) ) )^T ],  k = 0, 1, 2, …,

where c1, c2 are arbitrary constants. We can rewrite the general solution using the amplitude-phase form. Equate the first component of x_k:

x_{1,k} = (1 + h²)^{k/2} ( −c2 cos( k arctan(h) ) + c1 sin( k arctan(h) ) ) = (1 + h²)^{k/2} A1 cos( k arctan(h) − δ1 ),

where A1 = √( (−c2)² + c1² ) and tan δ1 = c1/(−c2). We can also equate

x_{2,k} = (1 + h²)^{k/2} ( c1 cos( k arctan(h) ) + c2 sin( k arctan(h) ) ) = (1 + h²)^{k/2} A2 sin( k arctan(h) − δ2 )

= (1 + h²)^{k/2} ( −A2 sin δ2 cos( k arctan(h) ) + A2 cos δ2 sin( k arctan(h) ) ),

where A2 = √( c1² + c2² ) = A1 and tan δ2 = ( A2 sin δ2 )/( A2 cos δ2 ) = −c1/c2 = tan δ1. So, either δ2 = δ1 or δ2 = δ1 + π or δ2 = δ1 − π. That, along with the fact that A2 = A1 and sin(θ ± π) = −sin(θ), implies that

x_{2,k} = (1 + h²)^{k/2} A2 sin( k arctan(h) − δ2 ) = ±(1 + h²)^{k/2} A1 sin( k arctan(h) − δ1 ).

So,

( x_{1,k}/A1 )² + ( x_{2,k}/A1 )² = (1 + h²)ᵏ cos²( k arctan(h) − δ1 ) + (1 + h²)ᵏ sin²( k arctan(h) − δ1 ) ≡ (1 + h²)ᵏ.

So, in R², the solutions x_k of the system of difference equations spiral away from (0, 0) as k → ∞. As to the original system, which is in companion form, it's not too hard to derive that the general solution is

x(t) = ( A cos(t − δ), −A sin(t − δ) )^T.

This implies that ‖x(t)‖² ≡ A², so the solutions of the ODE system are circles in R².

8.7.7.16. Continuing, y3 = y0 + h( f(t0) + f(t1) + f(t2) ), …,

y_n = y0 + h( f(t0) + f(t1) + ⋯ + f(t_{n−1}) ) ≈ y0 + ∫_0^1 f(t) dt,

using the definition of the definite integral as a limit of Riemann sums, in this case using left endpoints.

8.7.7.17. Denote f_i = f(t_i), as in Section 8.3. Continuing,

y2 = y0 + (h/2)f0 + (h/2)f1 + (h/2)f1 + (h/2)f2 = y0 + (h/2)( f0 + 2f1 + f2 ), …,

y_n = y0 + (h/2)( f0 + 2f1 + 2f2 + ⋯ + 2f_{n−1} + f_n ) ≈ y0 + ∫_0^1 f(t) dt,

using the Trapezoidal Rule approximation of the definite integral. So, in a sense, the Modified Euler's Method is a generalization of the Trapezoidal Rule.
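The two identities above, Euler's Method as a left-endpoint Riemann sum and the Modified Euler's Method as the Trapezoidal Rule, can be checked directly; the integrand f(t) = t² below is a sample choice for illustration:

```python
# For y' = f(t), y(0) = y0, Euler's Method reproduces the left-endpoint
# Riemann sum (8.7.7.16) and the Modified Euler's Method reproduces the
# Trapezoidal Rule (8.7.7.17).
f = lambda t: t * t
n = 100
h = 1.0 / n
y0 = 0.0

# Euler: y_{i+1} = y_i + h f(t_i)
y_euler = y0
for i in range(n):
    y_euler += h * f(i * h)
left_sum = h * sum(f(i * h) for i in range(n))
assert abs(y_euler - (y0 + left_sum)) < 1e-12

# Modified Euler: k1 = h f(t_i), k2 = h f(t_{i+1}), y_{i+1} = y_i + (k1+k2)/2
y_mod = y0
for i in range(n):
    k1 = h * f(i * h)
    k2 = h * f((i + 1) * h)
    y_mod += 0.5 * (k1 + k2)
trap = (h / 2) * (f(0.0) + 2 * sum(f(i * h) for i in range(1, n)) + f(1.0))
assert abs(y_mod - (y0 + trap)) < 1e-12
```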

8.7.7.18. The Modified Euler's Method, also known as another Runge-Kutta Method of order two, is given in problem 8.7.7.17:

(∗)  y0 = y(t0);  y_{i+1} := y_i + ½(k1 + k2),  where  k1 := hf(t_i, y_i),  k2 := hf(t_{i+1}, y_i + k1).

Recall that when evaluating local error we assume y_i = y(t_i). Equation (∗) gives

(∗∗)  y_{i+1} = y_i + (h/2)( f(t_i, y_i) + f(t_i + h, y_i + k1) ).

Taylor Series for a function of two variables, found below in Theorem 13.7 in Section 13.2, implies that

f(t_i + h, y_i + k1) = f(t_i, y_i) + h ∂f/∂t(t_i, y_i) + k1 ∂f/∂y(t_i, y_i) + O( h² + k1² ).

But, k1 = hf(t_i, y_i) = hf_i, so

f(t_i + h, y_i + k1) = f(t_i, y_i) + h ∂f/∂t(t_i, y_i) + hf_i ∂f/∂y(t_i, y_i) + O(h²).

Combining this with (∗∗) gives

y_{i+1} = y_i + ½(k1 + k2) = y_i + hf_i + (h²/2)( ∂f/∂t(t_i, y_i) + f_i ∂f/∂y(t_i, y_i) ) + O(h³).

On the other hand, Taylor's Theorem in the form of (8.65), along with (8.67), implies

y(t_{i+1}) = y_i + hf_i + (h²/2) ḟ_i + O(h³) = y_i + hf_i + (h²/2)( ∂f/∂t(t_i, y_i) + ∂f/∂y(t_i, y_i) · f_i ) + O(h³).

So, all of the terms through order h² cancel in |y(t_{i+1}) − y_{i+1}|; that is, the local error is O(h³).

8.7.7.19. Note that we are asked to show that the local error is O(h⁴), so we must include enough terms in the approximations to get such an error. The Runge-Kutta Method of order three is given in (8.74):

(∗)  y0 = y(t0);  y_{i+1} := y_i + (1/6)( k1 + 4k2 + k3 ),  where

k1 := hf(t_i, y_i),  k2 := hf(t_{i+0.5}, y_i + ½k1),  and  k3 := hf(t_{i+1}, y_i − k1 + 2k2).

Recall that when evaluating local error we assume y_i = y(t_i). Equation (∗) gives

(∗∗)  y_{i+1} = y_i + (h/6)( f(t_i, y_i) + 4f( t_i + h/2, y_i + ½k1 ) + f( t_i + h, y_i − k1 + 2k2 ) ).

Below, write f_t, f_y, f_tt, f_ty, f_yy for the partial derivatives of f evaluated at (t_i, y_i), and f_i = f(t_i, y_i). Taylor Series for a function of two variables, found below in Theorem 13.7 in Section 13.2, along with k1 = hf(t_i, y_i) = hf_i, implies that

(∗∗∗)  k2 = hf_i + (h²/2) f_t + (h²/2) f_i f_y + (h³/8) f_tt + (h³/4) f_i f_ty + (h³/8) f_i² f_yy + O(h⁴).

Also, expanding f(t_i + h, y_i − k1 + 2k2) in a Taylor series and substituting k1 = hf_i and the expansion (∗∗∗) for k2, we get, after collecting powers of h,

k3 = hf(t_i + h, y_i − k1 + 2k2) = hf_i + h² f_t + h² f_i f_y + h³( f_t f_y + f_i f_y² ) + (h³/2) f_tt + h³ f_i f_ty + (h³/2) f_i² f_yy + O(h⁴).

Combining this with (∗∗) gives

y_{i+1} = y_i + (1/6)( k1 + 4k2 + k3 ) = y_i + hf_i + (h²/2)( f_t + f_i f_y ) + (h³/6)( f_tt + 2f_i f_ty + f_i² f_yy + f_t f_y + f_i f_y² ) + O(h⁴).

On the other hand, Taylor's Theorem in the form of (8.65), along with (8.67) and (8.70), implies

y(t_{i+1}) = y_i + hf_i + (h²/2) ḟ_i + (h³/6) f̈_i + O(h⁴) = y_i + hf_i + (h²/2)( f_t + f_y f_i ) + (h³/6)( f_tt + 2f_ty f_i + f_yy f_i² + f_y( f_t + f_y f_i ) ) + O(h⁴).

All of the terms through order h³ agree, so they cancel in |y(t_{i+1}) − y_{i+1}|,
that is, the local error is O(h⁴).

8.7.7.20. Note that we are asked to show that the local error is O(h⁴), so we must include enough terms in the approximations to get such an error. This Runge-Kutta Method of order three is given by

(∗)  y0 = y(t0);  y_{i+1} := y_i + ¼( k1 + 3k3 ),  where

k1 := hf(t_i, y_i),  k2 := hf( t_i + h/3, y_i + ⅓k1 ),  and  k3 := hf( t_i + 2h/3, y_i + ⅔k2 ).

Recall that when evaluating local error we assume y_i = y(t_i). Equation (∗) gives

(∗∗)  y_{i+1} = y_i + (h/4)( f(t_i, y_i) + 3f( t_i + 2h/3, y_i + ⅔k2 ) ).

As in problem 8.7.7.19, write f_t, f_y, f_tt, f_ty, f_yy for the partial derivatives of f evaluated at (t_i, y_i), and f_i = f(t_i, y_i). Taylor Series for a function of two variables (Theorem 13.7 in Section 13.2), along with k1 = hf(t_i, y_i) = hf_i, implies that

(∗∗∗)  k2 = hf_i + (h²/3) f_t + (h²/3) f_i f_y + (h³/18) f_tt + (h³/9) f_i f_ty + (h³/18) f_i² f_yy + O(h⁴).

Expanding f( t_i + 2h/3, y_i + ⅔k2 ) in a Taylor series and substituting the expansion (∗∗∗) for k2, we get, after collecting powers of h,

k3 = hf_i + (2h²/3) f_t + (2h²/3) f_i f_y + (2h³/9)( f_t f_y + f_i f_y² ) + (2h³/9) f_tt + (4h³/9) f_i f_ty + (2h³/9) f_i² f_yy + O(h⁴).

Combining this with (∗∗) gives

y_{i+1} = y_i + ¼( k1 + 3k3 ) = y_i + hf_i + (h²/2)( f_t + f_i f_y ) + (h³/6)( f_tt + 2f_i f_ty + f_i² f_yy + f_t f_y + f_i f_y² ) + O(h⁴).

On the other hand, Taylor's Theorem in the form of (8.65), along with (8.67) and (8.70), implies

y(t_{i+1}) = y_i + hf_i + (h²/2)( f_t + f_y f_i ) + (h³/6)( f_tt + 2f_ty f_i + f_yy f_i² + f_y( f_t + f_y f_i ) ) + O(h⁴).

All of the terms through order h³ cancel in |y(t_{i+1}) − y_{i+1}|; that is, the local error is O(h⁴).
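The local-error orders derived in problems 8.7.7.18 and 8.7.7.20 can also be checked empirically: halving h should divide the one-step error by about 2³ = 8 for the Modified Euler's Method and about 2⁴ = 16 for the third-order scheme. A sketch, using the test equation ẏ = y, y(0) = 1 (my choice of test problem, not from the text), whose exact one-step value is e^h:

```python
import math

f = lambda t, y: y  # test ODE: y' = y

def step_mod_euler(h, t, y):
    # one step of the Modified Euler's Method (8.7.7.18)
    k1 = h * f(t, y)
    k2 = h * f(t + h, y + k1)
    return y + 0.5 * (k1 + k2)

def step_rk3(h, t, y):
    # one step of the third-order scheme of 8.7.7.20
    k1 = h * f(t, y)
    k2 = h * f(t + h / 3, y + k1 / 3)
    k3 = h * f(t + 2 * h / 3, y + 2 * k2 / 3)
    return y + 0.25 * (k1 + 3 * k3)

ratios = {}
for name, step in (("mod_euler", step_mod_euler), ("rk3", step_rk3)):
    e1 = abs(math.exp(0.2) - step(0.2, 0.0, 1.0))
    e2 = abs(math.exp(0.1) - step(0.1, 0.0, 1.0))
    ratios[name] = e1 / e2

assert 6.0 < ratios["mod_euler"] < 10.0   # near 2**3 = 8
assert 12.0 < ratios["rk3"] < 20.0        # near 2**4 = 16
```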

Section 8.8

8.8.4.1. (a) Define t_i = ih, y_i = y(t_i), i = 0, ..., 4, where h = 0.25. Using the central difference approximation for the second derivative term, the replacement equations are, for i = 1, 2, 3,

h⁻²( y_{i+1} − 2y_i + y_{i−1} ) + 2√2 cos( 2πih/3 ) y_i = 0,

that is,

y_{i−1} + ( −2 + h² · 2√2 cos( 2πih/3 ) ) y_i + y_{i+1} = 0.

The BCs are 0 = y(0) = y0 and −1 = y(1) = y4. Since h² · 2√2 = √2/8 and cos(π/6) = √3/2, cos(π/3) = 1/2, cos(π/2) = 0, the system is

[ −2 + (√2/8)(√3/2), 1, 0 ; 1, −2 + (√2/8)(1/2), 1 ; 0, 1, −2 + (√2/8)·0 ] ( y1, y2, y3 )^T = ( −y0, 0, −y4 )^T = ( 0, 0, 1 )^T.

(b) The approximate solution is

( y1, y2, y3 )^T = [ −2 + √6/16, 1, 0 ; 1, −2 + √2/16, 1 ; 0, 1, −2 ]⁻¹ ( 0, 0, 1 )^T ≈ ( −.3111164480, −.5746031117, −.7873015559 )^T.

The piecewise linear approximate solution is the red, dashed graph in the figure.

(c) There is no exact solution of this non-constant-coefficients linear second order ODE. But we did use the Mathematica command NDSolve for the ODE-BVP to find a more accurate approximate solution, and it agrees well with the coarse approximation we found.
The piecewise linear approximate solution is the red, dashed graph in the figure. (c) There is no exact solution of this non-constant coefficients linear second order ODE. But we did use the Mathematica command N DSolve for the ODE-BVP to find a more accurate approximate solution and it agrees well with the coarse approximation we found.

Figure 3: Answer for problem 8.8.4.1 8.8.4.2. (a) Define xi = ih, yi = y(xi ), i = 0, ..., 4, where h = 0.25. Using the central difference approximation for the first derivative term, the replacement equations are, for i = 1, 2, 3, h−2 (yi+1 − 2yi + yi−1 ) + xi · (2h)−1 (yi+1 − yi−1 ) − 2yi = sin(πxi ) that is, after noting xi = ih, multiplying through by h2 , and re-arranging terms, 

1−

 ih2  ih2  yi−1 − 2(1 + h2 )yi + 1 + yi+1 = h2 sin(πih). 2 2

The system is 

− 17 8

33 32

0

    

30 32

− 17 8

34 32

0

29 32

− 17 8



y1

    y2   y3





    =    

1 √ 16 2



31 32

y0

    

1 16 1 √ 16 2



35 32



y4 c Larry

Turyn, October 24, 2013

page 70

The BCs are 0 = y(0) = y0 and 3 = y(1) = y4 . (b) The approximate solution is 

y1

   y2   y3



− 17 8

33 32

0

    =    

30 32

− 17 8

34 32

0

29 32

− 17 8



−1 

1 √ 16 2

    

1 16

    

1 √ 16 2





105 32



0.591958101

     ≈  1.26264741     2.06180237

   .  

The piecewise linear approximate solution is the red, dashed graph in the figure. (c) There is no exact solution of this non-constant coefficients linear second order ODE. But we did use the Mathematica command N DSolve for the ODE-BVP to find a more accurate approximate solution and it agrees well with the coarse approximation we found.
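As a sanity check on the linear solve in part (b), the tridiagonal system can be solved with the Thomas algorithm; a sketch, with the diagonals and right-hand side displayed above:

```python
import math

# Tridiagonal system from problem 8.8.4.2(b)
a = [0.0, 30/32, 29/32]      # sub-diagonal (a[0] unused)
b = [-17/8, -17/8, -17/8]    # main diagonal
c = [33/32, 34/32, 0.0]      # super-diagonal (c[-1] unused)
d = [1/(16*math.sqrt(2)), 1/16, 1/(16*math.sqrt(2)) - 105/32]

# Thomas algorithm: forward elimination, then back substitution
n = len(b)
for i in range(1, n):
    m = a[i] / b[i-1]
    b[i] -= m * c[i-1]
    d[i] -= m * d[i-1]
y = [0.0] * n
y[-1] = d[-1] / b[-1]
for i in range(n - 2, -1, -1):
    y[i] = (d[i] - c[i] * y[i+1]) / b[i]

expected = [0.591958101, 1.26264741, 2.06180237]
assert all(abs(yi - ei) < 1e-6 for yi, ei in zip(y, expected))
```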

Figure 4: Answer for problem 8.8.4.2

8.8.4.3. (a) Define x_i = ih, y_i = y(x_i), i = 0, ..., 4, where h = 0.5. The replacement equations are, for i = 1, 2, 3,

h⁻²( y_{i+1} − 2y_i + y_{i−1} ) + y_i = x_i,  that is,  y_{i−1} + ( −2 + h² ) y_i + y_{i+1} = ih³.

The BCs are −1 = y(0) = y0 and 5 = y(2) = y4. The system is

[ −2 + 1/4, 1, 0 ; 1, −2 + 1/4, 1 ; 0, 1, −2 + 1/4 ] ( y1, y2, y3 )^T = ( 1/8 − y0, 2/8, 3/8 − y4 )^T.

(b) The approximate solution is

( y1, y2, y3 )^T = [ −7/4, 1, 0 ; 1, −7/4, 1 ; 0, 1, −7/4 ]⁻¹ ( 9/8, 2/8, −37/8 )^T = (1/238)( 239, 686, 1021 )^T ≈ ( 1.00420168, 2.88235294, 4.28991597 )^T.

The piecewise linear approximate solution is the red, dashed graph in the figure.

(c) The method of undetermined coefficients for the ODE gives y_h = c1 cos x + c2 sin x and, after some calculations, y_P = x. Plug y = y_h + y_P = x + c1 cos x + c2 sin x into the BCs to get

−1 = y(0) = c1  and  5 = y(2) = c1 cos 2 + c2 sin 2 + 2.

The exact solution of (∗) is

y(x) = x − cos x + ( (3 + cos 2)/sin 2 ) · sin x,

whose graph is given in the figure. The results from part (a) look very good; for example, y1 = 1.00420168 vs. the exact value y(0.5) ≈ 0.984749672, y2 = 2.88235294 vs. the exact value y(1) ≈ 2.85081572, and y3 = 4.28991597 vs. the exact value y(1.5) ≈ 4.26373753.

Figure 5: Answer for problem 8.8.4.3(b) and (c)

8.8.4.4. (a) Define x_i = ih, y_i = y(x_i), i = 0, ..., 4, where h = 0.25. One BC is 5 = y(1) = y4. The other BC is 0 = y′(0) ≈ (2h)⁻¹( y1 − y_{−1} ), that is, 0 = y1 − y_{−1}, using the central difference approximation of the derivative. To solve for the fictitious value y(−h) ≈ y_{−1} we include it in the difference approximation of the ODE at x = 0. The replacement equations are, for i = 0, 1, 2, 3,

h⁻²( y_{i+1} − 2y_i + y_{i−1} ) + 2y_i = 0,  that is,  y_{i−1} + ( −2 + 2h² ) y_i + y_{i+1} = 0,

as well as the BC −y_{−1} + y1 = 0. In terms of the unknowns ( y_{−1}, y0, y1, y2, y3 ), and with −2 + 2h² = −2 + 1/8, the system is

[ −1, 0, 1, 0, 0 ; 1, −2+1/8, 1, 0, 0 ; 0, 1, −2+1/8, 1, 0 ; 0, 0, 1, −2+1/8, 1 ; 0, 0, 0, 1, −2+1/8 ] ( y_{−1}, y0, y1, y2, y3 )^T = ( 0, 0, 0, 0, −y4 )^T.

(b) The approximate solution, with −y4 = −5, is

( y_{−1}, y0, y1, y2, y3 )^T ≈ ( 31.5529992, 33.6565325, 31.5529992, 25.5053410, 16.2695152 )^T.

The fictitious value y_{−1} does not appear in the final conclusion: the piecewise linear approximate solution is the red, dashed graph in the figure.

(c) The linear homogeneous ODE has general solution y(x) = c1 cos(√2 x) + c2 sin(√2 x). Plug this into the BCs to get

0 = y′(0) = √2 c2  and  5 = y(1) = c1 cos √2 + c2 sin √2.

The exact solution of (∗) is

y(x) = ( 5/cos √2 ) · cos(√2 x),

whose graph is given in the figure. The results from part (a) look pretty good; for example, y0 = 33.6565325 vs. the exact value y(0) ≈ 32.0628546, y1 = 31.5529992 vs. the exact value y(.25) ≈ 30.0797136, y2 = 25.5053410 vs. the exact value y(.5) ≈ 24.3756119, and y3 = 16.2695152 vs. the exact value y(.75) ≈ 15.6561659.

Figure 6: Answer for problem 8.8.4.4(b) and (c)
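The fictitious-node system of problem 8.8.4.4 can be verified numerically; this sketch assembles the 5×5 matrix above (h = 0.25, y4 = 5) and solves it by Gaussian elimination with partial pivoting:

```python
# 5x5 system from 8.8.4.4: unknowns (y_-1, y_0, y_1, y_2, y_3)
dg = -2 + 1/8  # = -2 + 2*h*h with h = 0.25
A = [
    [-1.0, 0.0, 1.0, 0.0, 0.0],  # central-difference BC: -y_-1 + y_1 = 0
    [1.0,  dg,  1.0, 0.0, 0.0],  # ODE at i = 0 (uses the fictitious node)
    [0.0,  1.0, dg,  1.0, 0.0],  # ODE at i = 1
    [0.0,  0.0, 1.0, dg,  1.0],  # ODE at i = 2
    [0.0,  0.0, 0.0, 1.0, dg],   # ODE at i = 3
]
b = [0.0, 0.0, 0.0, 0.0, -5.0]   # right-hand side; -y_4 = -5

# Gaussian elimination with partial pivoting, then back substitution
n = len(b)
for k in range(n):
    p = max(range(k, n), key=lambda r: abs(A[r][k]))
    A[k], A[p] = A[p], A[k]
    b[k], b[p] = b[p], b[k]
    for r in range(k + 1, n):
        m = A[r][k] / A[k][k]
        for c in range(k, n):
            A[r][c] -= m * A[k][c]
        b[r] -= m * b[k]
y = [0.0] * n
for k in range(n - 1, -1, -1):
    y[k] = (b[k] - sum(A[k][c] * y[c] for c in range(k + 1, n))) / A[k][k]

expected = [31.5529992, 33.6565325, 31.5529992, 25.5053410, 16.2695152]
assert max(abs(v - e) for v, e in zip(y, expected)) < 1e-6
```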

8.8.4.5. (a) Define x_i = ih, y_i = y(x_i), i = 0, ..., 4, where h = 0.25. One BC is 5 = y(1) = y4. The other BC is 3 = y′(0) ≈ (2h)⁻¹( y1 − y_{−1} ), that is, 3 · 2h = y1 − y_{−1}, using the central difference approximation of the derivative. To solve for the fictitious value y(−h) ≈ y_{−1} we include it in the difference approximation of the ODE at x = 0. The replacement equations are, for i = 0, 1, 2, 3,

h⁻²( y_{i+1} − 2y_i + y_{i−1} ) + 2y_i = 0,  that is,  y_{i−1} + ( −2 + 2h² ) y_i + y_{i+1} = 0,

as well as the BC −y_{−1} + y1 = 6h. The system is

[ −1, 0, 1, 0, 0 ; 1, −2+1/8, 1, 0, 0 ; 0, 1, −2+1/8, 1, 0 ; 0, 0, 1, −2+1/8, 1 ; 0, 0, 0, 1, −2+1/8 ] ( y_{−1}, y0, y1, y2, y3 )^T = ( 6h, 0, 0, 0, −y4 )^T.

(b) The approximate solution, with 6h = 6/4 and −y4 = −5, is

( y_{−1}, y0, y1, y2, y3 )^T ≈ ( 17.3529170, 19.3097781, 18.8529170, 16.0394412, 11.2210353 )^T.

The fictitious value y_{−1} does not appear in the final conclusion: the piecewise linear approximate solution is the red, dashed graph in the figure.

(c) The linear homogeneous ODE has general solution y(x) = c1 cos(√2 x) + c2 sin(√2 x). Plug this into the BCs to get

3 = y′(0) = √2 c2  and  5 = y(1) = c1 cos √2 + c2 sin √2.

The exact solution of (∗) is

y(x) = ( 5 − (3/√2) sin √2 ) · cos(√2 x)/cos √2 + (3/√2) · sin(√2 x),

whose graph is given in the figure. The results from part (a) look good; for example, y0 = 19.3097781 vs. the exact value y(0) ≈ 18.6261587, y1 = 18.8529170 vs. the exact value y(.25) ≈ 18.2085721, y2 = 16.0394413 vs. the exact value y(.5) ≈ 15.5385246, and y3 = 11.2210353 vs. the exact value y(.75) ≈ 10.94630976.

Figure 7: Answer for problem 8.8.4.5(b) and (c) 8.8.4.6. (a) Define ti = ih, yi = y(ti ), i = 0, ..., 4, where h = 0.25. Using the central difference approximation for the first derivative term, the replacement equations are, for i = 1, 2, 3, h−2 (yi+1 − 2yi + yi−1 ) + (2h)−1 (yi+1 − yi−1 ) = ti = ih, that is,   h h 1− yi−1 − 2yi + 1 + yi+1 = ih3 . 2 2 The BCs are −1 = y(0) = y0 and 1 = y(1) = y4 . The system is    1  −2 1.125 0 − .875y0 y1 64        2   .875    −2 1.125     y2  =  64     3 − 1.125y4 y3 0 .875 −2 64 The approximate solution is    y1 −2        y2  =  .875       y3 0

1.125 −2 .875

0

−1 

  1.125    −2

    

57 64



2 64

    =    

− 69 64



     

−.342548077



  .182692308  .  .618990385

The piecewise linear approximate solution is the red, dashed graph in the figure. (b) The method of undetermined coefficients for the ODE in (?): yh = c1 + c2 e−t , so ... yP = At + Bt2 . Substitute the latter into the ODE to get t = y¨P + y˙ P = 2B + A + 2Bt, so 1 = 2B and 0 = A + 2B, so B = 12 and A = −1. Plug y = yh + yP = c1 + c2 e−t − t + 21 t2 into the BCs to get   −1 = y(0) = c1 + c2 − 0 + 0 −1 1 1 = y(1) = c1 + c2 e − 1 + 2 , The exact solution of (?) is 3 e + 1 − 52 e1−t 1 2 t + 2 , 2 e−1 whose graph is given in the figure. The results from part (a) look very good, for example, y1 = −.342548077 vs. the exact value y(0.25) ≈ −0.343919978, y2 = .182692308 vs. the exact value y(0.5) ≈ 0.181148328, and y3 = .618990385 vs. the exact value y(0.75) ≈ 0.618009558.

y(t) = −t +

c Larry

Turyn, October 24, 2013

page 74

(c) Define ti = ih, yi = y(ti ), i = 0, ..., 4, where h = 0.25. Using the forward difference approximation for the first derivative term, the replacement equations are, for i = 1, 2, 3, h−2 (yi+1 − 2yi + yi−1 ) + h−1 (yi+1 − yi ) = ti = ih, that is, yi−1 + (−2 − h)yi + (1 + h)yi+1 = ih3 . The BCs are −1 = y(0) = y0 and 1 = y(1) = y4 . The system is     1 −2.25 1.25 0 y1 − y0 64         2     1 −2.25 1.25     y2  =  64     3 0 1 −2.25 y3 − 1.25y4 64

     

whose solution is 

y1

   y2   y3





−2.25

1.25

    =    

1

−2.25

0

1

0

−1 

  1.25    −2.25

    

65 64



2 64

    ≈    

− 77 64



−.3607723577



  .1631097561  .  .6072154472

The piecewise linear approximate solution is the red, dashed graph in the figure. The results from part (a) look good, for example, y1 = −.3607723577 vs. the exact value y(0.25) ≈ −0.343919978, y2 = .1631097561 vs. the exact value vs. the exact value y(0.5) ≈ 0.181148328, and y3 = .6072154472 vs. the exact value y(0.75) ≈ 0.618009558. The results for part (a), using the central difference approximation for y, ˙ were better than the results for part (c), using the forward difference approximation, for y. ˙ But the results in part (c) were still good.

Figure 8: Answer for problem 8.8.4.6(a) and (b) 8.8.4.7. (a) Define xi = ih, yi = y(xi ), i = 0, ..., 4, where h = 0.25. One BC is 0 = y(1) = y4 . The other BC is 1 = y(0) − y 0 (0) ≈ y0 − (h)−1 (y0 − y−1 ), that is, h = y−1 + (h − 1)y0 , using the backwards difference approximation of the derivative. To solve for the fictitious value y(−h) ≈ y−1 we include it in the difference approximation of the ODE at x = 0. The replacement equations are, for i = 0, 1, 2, 3, h−2 (yi+1 − 2yi + yi−1 ) = −xi that is, yi−1 − 2yi + yi+1 = −h2 xi = −ih3 , as well as the BC 

1

   1     0     0   0

− 34

0

0

−2

1

0

1

−2

1

0

1

−2

0

0

1

y−1 + (h − 1)y0 = h.    y−1 0 h          0    y0   0        y1  =  −h3 0            y2   −2h3 1        −2 −3h3 − y4 y3

1 4



      0       = −1   64       −2   64   3 − 64

            





c Larry

Turyn, October 24, 2013

page 75

(b) The approximate solution is

$$\begin{bmatrix} y_{-1} \\ y_0 \\ y_1 \\ y_2 \\ y_3 \end{bmatrix}
= \begin{bmatrix} 1 & -\tfrac34 & 0 & 0 & 0 \\ 1 & -2 & 1 & 0 & 0 \\ 0 & 1 & -2 & 1 & 0 \\ 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 1 & -2 \end{bmatrix}^{-1}
\begin{bmatrix} \tfrac14 \\ 0 \\ -\tfrac{1}{64} \\ -\tfrac{2}{64} \\ -\tfrac{3}{64} \end{bmatrix}
\approx \begin{bmatrix} .68359375 \\ .57812500 \\ .47265625 \\ .35156250 \\ .19921875 \end{bmatrix}$$

The piecewise linear approximate solution is the red, dashed graph in the figure.
(c) Integrate the ODE twice to get general solution y(x) = c₁x + c₂ − (1/6)x³. Plug this and y′(x) = c₁ − (1/2)x² into the BCs to get

(⋆) 1 = y(0) − y′(0) = c₂ − c₁,  0 = y(1) = c₁ + c₂ − 1/6.

The exact solution of (⋆) is

y(x) = −(5/12)x + 7/12 − (1/6)x³.

The solid curve in the figure is the exact solution. The thick dashed curve is the solution using finite differences and agrees well with the exact solution.
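A small NumPy sketch (not from the manual) that solves the 5×5 fictitious-node system for 8.8.4.7 and compares the finite-difference values with the exact cubic solution derived in part (c):

```python
import numpy as np

h = 0.25
A = np.array([[1, -0.75, 0, 0, 0],
              [1, -2, 1, 0, 0],
              [0, 1, -2, 1, 0],
              [0, 0, 1, -2, 1],
              [0, 0, 0, 1, -2]], dtype=float)
b = np.array([h, 0, -h**3, -2*h**3, -3*h**3])

y = np.linalg.solve(A, b)            # [y_-1, y0, y1, y2, y3]
exact = lambda x: -5*x/12 + 7/12 - x**3/6
grid = np.array([0, 0.25, 0.5, 0.75])
print(y[1:])        # finite-difference values at x = 0, .25, .5, .75
print(exact(grid))  # exact values, for comparison
```

The finite-difference values agree with the exact solution to about two decimal places, as the figure suggests.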

Figure 9: Answer for problem 8.8.4.7

8.8.4.8. Define ri = 1 + ih, Ri = R(ri), i = 0, ..., 4, where h = 0.5. Using the central difference approximation for the first derivative term, the replacement equations are, for i = 1, 2, 3,
h^{−2} ri²(Ri+1 − 2Ri + Ri−1) + (2h)^{−1} ri(Ri+1 − Ri−1) + λRi = 0,
that is,
(ri² − (h/2)ri) Ri−1 − 2ri² Ri + (ri² + (h/2)ri) Ri+1 + λh² Ri = 0.
The BCs are 0 = y(1) = R0 and 0 = y(3) = R4. The system is

$$\begin{bmatrix} -\tfrac92 & \tfrac{21}{8} & 0 \\ \tfrac72 & -8 & \tfrac92 \\ 0 & \tfrac{45}{8} & -\tfrac{25}{2} \end{bmatrix}
\begin{bmatrix} R_1 \\ R_2 \\ R_3 \end{bmatrix}
= -\frac{\lambda}{4}\begin{bmatrix} R_1 \\ R_2 \\ R_3 \end{bmatrix}.$$

Using Mathematica we found the approximate eigenvalues λn and eigenvectors R ≜ [R0 R1 R2 R3 R4]^T:

λ1 ≈ 7.872083911279186,  R ≈ [0, 1.94113913, 1.87235183, 1, 0]^T
λ2 ≈ 28.08822671753273,  R ≈ [0, −1.01360670, 0.973856590, 1, 0]^T
λ3 ≈ 64.03968937118809,  R ≈ [0, 0.142308845, −0.623986194, 1, 0]^T

and the corresponding numerical eigenfunctions (dashed curves) are compared pictorially to the exact eigenfunctions (solid curves) in the figure. While the numerical eigenfunctions seem to be shrunken versions of the exact eigenfunctions, this is an artifact of the coefficients ci.
(b) The Cauchy–Euler ODE r²R″ + rR′ + λR = 0 has, for λ > 0, general solution R(r) = c₁ cos(√λ ln r) + c₂ sin(√λ ln r). Plug this into the BCs to get

0 = R(1) = c₁ cos(√λ · 0) + c₂ sin(√λ · 0) = c₁,
0 = R(3) = c₂ sin(√λ ln 3).

The exact eigenvalues are λn = (nπ/ln 3)² ≈ 8.17731, 32.70925, 73.59581, and corresponding eigenfunctions are Rn(r) = sin(nπ ln r / ln 3), n = 1, 2, 3, ... Exact eigenfunctions R₁(r), −R₂(r), R₃(r) are solid curves in the figure.
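The eigenvalue claim for 8.8.4.8 can be checked without Mathematica. Since the system reads AR = −(λ/4)R, the λn are just −4 times the eigenvalues of the 3×3 matrix; a NumPy sketch (not part of the manual):

```python
import numpy as np

A = np.array([[-4.5, 21/8, 0.0],
              [3.5, -8.0, 4.5],
              [0.0, 45/8, -12.5]])

# A R = -(lambda/4) R, so lambda = -4 * (eigenvalues of A)
lam = np.sort(-4 * np.linalg.eigvals(A).real)
exact = (np.arange(1, 4) * np.pi / np.log(3))**2
print(lam)    # approximately [7.872, 28.088, 64.040]
print(exact)  # approximately [8.177, 32.709, 73.596]
```

As expected for a coarse grid, the finite-difference eigenvalues underestimate the exact Cauchy–Euler eigenvalues (nπ/ln 3)².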

Figure 10: Answer for problem 8.8.4.8

8.8.4.9. Define ri = ih, Ri = R(ri), i = 0, ..., 4, where h = 0.25. Using the central difference approximation for the first derivative term, the replacement equations are, for i = 1, 2, 3,
h^{−2}(Ri+1 − 2Ri + Ri−1) + (2h)^{−1} · 2/(ri + 1) · (Ri+1 − Ri−1) + λ/(ri + 1)⁴ · Ri = 0,
that is,
(1 − h/(ri + 1)) Ri−1 − 2Ri + (1 + h/(ri + 1)) Ri+1 = −λh²/(ri + 1)⁴ · Ri.
The BCs are 0 = y(0) = R0 and 0 = y(1) = R4. The system is a generalized eigenvalue problem, specifically AR = λBR, specifically

$$\begin{bmatrix} -2 & \tfrac65 & 0 \\ \tfrac56 & -2 & \tfrac76 \\ 0 & \tfrac67 & -2 \end{bmatrix}
\begin{bmatrix} R_1 \\ R_2 \\ R_3 \end{bmatrix}
= -\frac{\lambda}{16}
\begin{bmatrix} \tfrac{256}{625} & 0 & 0 \\ 0 & \tfrac{16}{81} & 0 \\ 0 & 0 & \tfrac{256}{2401} \end{bmatrix}
\begin{bmatrix} R_1 \\ R_2 \\ R_3 \end{bmatrix}.$$

Using the Mathematica command Eigensystem[{A, B}], we found the approximate eigenvalues and eigenvectors R ≜ [R0 R1 R2 R3 R4]^T:

λ1 ≈ 37.607284106899115,  R ≈ [0, 0.690872837, 0.643898300, 0.328769984, 0]^T
λ2 ≈ 138.80108530222495,  R ≈ [0, −0.621341452, 0.560288352, 0.547733296, 0]^T
λ3 ≈ 363.84163059087587,  R ≈ [0, 0.128025550, −0.531340229, 0.837428815, 0]^T

and the corresponding numerical eigenfunctions (dashed curves) in the figure.
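Since B is diagonal, the generalized problem AR = −(λ/16)BR reduces to an ordinary eigenvalue problem for −16 B⁻¹A; a NumPy cross-check of the quoted eigenvalues (not part of the manual):

```python
import numpy as np

A = np.array([[-2.0, 6/5, 0.0],
              [5/6, -2.0, 7/6],
              [0.0, 6/7, -2.0]])
B = np.diag([256/625, 16/81, 256/2401])

# A R = -(lambda/16) B R  =>  lambda = eigenvalues of -16 * B^{-1} A
lam = np.sort(np.linalg.eigvals(-16 * np.linalg.solve(B, A)).real)
print(lam)  # approximately [37.607, 138.801, 363.842]
```

Inverting the diagonal B and folding the factor −16 into the matrix reproduces the three eigenvalues found with Mathematica.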

Figure 11: Answer for problem 8.8.4.9 b−a . Using the central 8.8.4.10. Define xi = a + ih, yi = y(xi ), pi = p(xi ), qi = q(xi ), i = 0, ..., N , where h = N difference approximation for the first derivative term, the replacement equations are, for i = 1, 2, ..., N − 1, h−2 (yi+1 − 2yi + yi−1 ) + (2h)−1 pi · (yi+1 − yi−1 ) + (qi + λ)yi = 0 to study an eigenvalue problem y 00 + p(x)y 0 + (q(x) + λ)y = 0, y(a) = y(b) = 0. Note that y0 = yN = 0 in order to satisfy the boundary conditions. Multiply through by h2 to get      h h 1 − pi yi−1 + −2 + h2 qi yi + 1 + pi yi+1 = − λ h2 yi 2 2 and rewrite this as 

−2 + h2 q1

   1 − h p2 2     0     .   0

1+

h 2

p1 2

−2 + h q2 .

...

0 1+

h 2

... p2

.

.

.

.

.

0

0

1−

h 2

. pN −1

−2 + h2 qN −1

0 1−

h 2

pN +1

 y1       ..   .       yN −1





             2 = − λ h             

y1



.. .

       .      

yN −1

We expect to find N − 1 approximate eigenvalues for this problem, which reduces to finding eigenvalues of an (N − 1) × (N − 1) matrix.
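The assembly just described can be sketched in code. The helper name below is hypothetical (not from the manual); it is checked on y″ + λy = 0, y(0) = y(π) = 0, whose exact eigenvalues are λn = n²:

```python
import numpy as np

def fd_eigenvalues(p, q, a, b, N):
    """Finite-difference eigenvalues for y'' + p(x) y' + (q(x) + lambda) y = 0,
    y(a) = y(b) = 0, using the tridiagonal matrix assembled above."""
    h = (b - a) / N
    x = a + h * np.arange(1, N)          # interior nodes x_1, ..., x_{N-1}
    M = np.zeros((N - 1, N - 1))
    for i in range(N - 1):
        M[i, i] = -2 + h**2 * q(x[i])
        if i > 0:
            M[i, i - 1] = 1 - (h / 2) * p(x[i])
        if i < N - 2:
            M[i, i + 1] = 1 + (h / 2) * p(x[i])
    # M y = -lambda h^2 y, so lambda = eigenvalues of -M / h^2
    return np.sort(np.linalg.eigvals(-M / h**2).real)

lam = fd_eigenvalues(lambda x: 0.0, lambda x: 0.0, 0.0, np.pi, 100)
print(lam[:3])  # approximately [1, 4, 9]
```

With N = 100 the first three eigenvalues agree with n² to about three decimal places.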


Section 8.9

8.9.6.1. (a) For N = 2, h = 1/2, and our approximate solution is

y2(x) = Σ_{j=−1}^{3} zj Cj(x),

where we will solve for the constants z−1, z0, ..., z3. Using (8.94), the boundary conditions require two equations,
(1) 0 = y(0) = y2(x0) = (1/6)(z−1 + 4z0 + z1)
and
(2) −1 = y(1) = y2(x2) = (1/6)(z1 + 4z2 + z3).
Note that xj = j · (1/2), for j = 0, 1, 2. The replacement equations (3)–(5) for the ODE are, respectively, for j = 0, 1, 2,
(3)–(5) 4(zj−1 − 2zj + zj+1) + 2√2 cos(2πxj/3) · (1/6)(zj−1 + 4zj + zj+1) = 0.
We include replacement equations for the ODE at the endpoints x = x0 and x = x2 because the spline functions are twice continuously differentiable at the endpoints. The system is

$$\begin{bmatrix}
\tfrac16 & \tfrac46 & \tfrac16 & 0 & 0 \\
0 & 0 & \tfrac16 & \tfrac46 & \tfrac16 \\
4 + \tfrac{\sqrt2}{3} & -8 + \tfrac{4\sqrt2}{3} & 4 + \tfrac{\sqrt2}{3} & 0 & 0 \\
0 & 4 + \tfrac{\sqrt2}{6} & -8 + \tfrac{4\sqrt2}{6} & 4 + \tfrac{\sqrt2}{6} & 0 \\
0 & 0 & 4 - \tfrac{\sqrt2}{6} & -8 - \tfrac{4\sqrt2}{6} & 4 - \tfrac{\sqrt2}{6}
\end{bmatrix}
\begin{bmatrix} z_{-1} \\ z_0 \\ z_1 \\ z_2 \\ z_3 \end{bmatrix}
= \begin{bmatrix} 0 \\ -1 \\ 0 \\ 0 \\ 0 \end{bmatrix}.$$

(b) The approximate solution of the system is

[z−1, z0, z1, z2, z3]^T ≈ [0.564829708, 0, −0.564829708, −0.941074435, −1.670872552]^T

The approximate solution of the ODE-BVP is
y2(x) ≈ 0.564829708 C−1(x) − 0.564829708 C1(x) − 0.941074435 C2(x) − 1.670872552 C3(x).
The solid curve in the figure is an approximate solution produced by Mathematica. The thick dashed curve is the solution using cubic B-splines. The fact that the latter solution is twice continuously differentiable is apparent, and it agrees pretty well with Mathematica's approximate solution.
(c) There is no exact solution of this non-constant-coefficients linear second-order ODE. But we did use the Mathematica command NDSolve for the ODE-BVP to find a more accurate approximate solution, shown as a solid, blue curve in the figure. Mathematica's more accurate approximate solution agrees well with the coarse approximation we found using splines.

8.9.6.2. (a) For N = 2, h = 1/2, and our approximate solution is

y2(x) = Σ_{j=−1}^{3} zj Cj(x),

Figure 12: Answer for problem 8.9.6.1

where we will solve for the constants z−1, z0, ..., z3. Using (8.94), the boundary conditions require two equations,
(1) 0 = y(0) = y2(x0) = (1/6)(z−1 + 4z0 + z1)
and
(2) 3 = y(1) = y2(x2) = (1/6)(z1 + 4z2 + z3).
The replacement equations (3)–(5) for the ODE are, respectively, for j = 0, 1, 2,
(3)–(5) 4(zj−1 − 2zj + zj+1) + xj · (2h)^{−1}(−zj−1 + zj+1) − 2 · (1/6)(zj−1 + 4zj + zj+1) = 0.
We include replacement equations for the ODE at the endpoints x = x0 and x = x2 because the spline functions are twice continuously differentiable at the endpoints. Note that xj · (2h)^{−1} = j/2, for j = 0, 1, 2. The system is

$$\begin{bmatrix}
\tfrac16 & \tfrac46 & \tfrac16 & 0 & 0 \\
0 & 0 & \tfrac16 & \tfrac46 & \tfrac16 \\
4 - \tfrac13 & -8 - \tfrac43 & 4 - \tfrac13 & 0 & 0 \\
0 & 4 - \tfrac13 - \tfrac12 & -8 - \tfrac43 & 4 - \tfrac13 + \tfrac12 & 0 \\
0 & 0 & 4 - \tfrac13 - 1 & -8 - \tfrac43 & 4 - \tfrac13 + 1
\end{bmatrix}
\begin{bmatrix} z_{-1} \\ z_0 \\ z_1 \\ z_2 \\ z_3 \end{bmatrix}
= \begin{bmatrix} 0 \\ 3 \\ 0 \\ 0 \\ 0 \end{bmatrix}.$$

(b) The approximate solution of the system is

[z−1, z0, z1, z2, z3]^T ≈ [−1.297898640, 0, 1.297898640, 2.907292954, 5.072929543]^T

The approximate solution of the ODE-BVP is
y2(x) ≈ −1.297898640 C−1(x) + 1.297898640 C1(x) + 2.907292954 C2(x) + 5.072929543 C3(x).
The solid, blue curve in the figure is an approximate solution, as plotted by Mathematica. The thick dashed curve is the solution using cubic B-splines. The fact that the latter solution is twice continuously differentiable is apparent, and it agrees pretty well with Mathematica's approximate solution.
(c) There is no exact solution of this non-constant-coefficients linear second-order ODE.

Figure 13: Answer for problem 8.9.6.2

8.9.6.3. (a) For N = 4, h = 1/2, and our approximate solution is

y4(x) = Σ_{j=−1}^{5} zj Cj(x),

where we will solve for the constants z−1, z0, ..., z5. Using (8.94), the boundary conditions require two equations,
(1) −1 = y(0) = y4(x0) = (1/6)(z−1 + 4z0 + z1)
and
(2) 5 = y(2) = y4(x4) = (1/6)(z3 + 4z4 + z5).
The replacement equations (3)–(7) for the ODE are, respectively, for j = 0, 1, ..., 4,
(3)–(7) 4(zj−1 − 2zj + zj+1) + (1/6)(zj−1 + 4zj + zj+1) = xj.

We include replacement equations for the ODE at the endpoints x = x0 and x = x4 because the spline functions are twice continuously differentiable at the endpoints. Note that xj = j/2, for j = 0, 1, ..., 4. The system is

$$\begin{bmatrix}
\tfrac16 & \tfrac46 & \tfrac16 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \tfrac16 & \tfrac46 & \tfrac16 \\
4+\tfrac16 & -8+\tfrac46 & 4+\tfrac16 & 0 & 0 & 0 & 0 \\
0 & 4+\tfrac16 & -8+\tfrac46 & 4+\tfrac16 & 0 & 0 & 0 \\
0 & 0 & 4+\tfrac16 & -8+\tfrac46 & 4+\tfrac16 & 0 & 0 \\
0 & 0 & 0 & 4+\tfrac16 & -8+\tfrac46 & 4+\tfrac16 & 0 \\
0 & 0 & 0 & 0 & 4+\tfrac16 & -8+\tfrac46 & 4+\tfrac16
\end{bmatrix}
\begin{bmatrix} z_{-1} \\ z_0 \\ z_1 \\ z_2 \\ z_3 \\ z_4 \\ z_5 \end{bmatrix}
= \begin{bmatrix} -1 \\ 5 \\ 0 \\ \tfrac12 \\ 1 \\ \tfrac32 \\ 2 \end{bmatrix}.$$

(b) The approximate solution of the system is

[z−1, z0, z1, z2, z3, z4, z5]^T ≈ [−1.719908174, −1.041666667, 0.113425159, 0.962038387, 2.046612720, 3, 3.953387280]^T

The approximate solution of the ODE-BVP is
y4(x) ≈ −1.719908174 C−1(x) − 1.041666667 C0(x) + 0.113425159 C1(x) + 0.962038387 C2(x) + 2.046612720 C3(x) + 3 C4(x) + 3.953387280 C5(x).
(c) The method of undetermined coefficients for the ODE gives yh = c₁ cos x + c₂ sin x and, after some calculations, yP = x. Plug y = yh + yP = x + c₁ cos x + c₂ sin x into the BCs to get

(⋆) −1 = y(0) = c₁,  5 = y(2) = c₁ cos 2 + c₂ sin 2 + 2.

The exact solution of (⋆) is

y(x) = x − cos x + ((3 + cos 2)/sin 2) · sin x.

The solid, blue curve in the figure is the exact solution, as plotted by Mathematica. The thick dashed curve is the solution using cubic B-splines. The fact that the latter solution is twice continuously differentiable is apparent, and it agrees well with the exact solution.
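A quick check (not from the manual) that the closed-form solution of part (c) satisfies the BCs y(0) = −1, y(2) = 5 and, via a centered second difference, the ODE y″ + y = x:

```python
import numpy as np

y = lambda x: x - np.cos(x) + (3 + np.cos(2)) / np.sin(2) * np.sin(x)

# Boundary conditions
print(y(0.0), y(2.0))  # -1.0 and 5.0

# ODE residual y'' + y - x at a few interior points
h = 1e-4
for x in (0.3, 1.0, 1.7):
    ypp = (y(x + h) - 2*y(x) + y(x - h)) / h**2
    print(ypp + y(x) - x)  # approximately 0
```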

Figure 14: Answer for problem 8.9.6.3

8.9.6.4. (a) For N = 4, h = 1/4, and our approximate solution is

y4(x) = Σ_{j=−1}^{5} zj Cj(x),

where we will solve for the constants z−1, z0, ..., z5. Using (8.94) and (8.95), the boundary conditions require two equations,
(1) 0 = y′(0) = y4′(x0) = 2(−z−1 + z1)
and
(2) 5 = y(1) = y4(x4) = (1/6)(z3 + 4z4 + z5).
The replacement equations (3)–(7) for the ODE are, respectively, for j = 0, 1, ..., 4,
(3)–(7) 16(zj−1 − 2zj + zj+1) + 2 · (1/6)(zj−1 + 4zj + zj+1) = 0.
We include replacement equations for the ODE at the endpoints x = x0 and x = x4 because the spline functions are twice continuously differentiable at the endpoints. The system is

$$\begin{bmatrix}
-2 & 0 & 2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \tfrac16 & \tfrac46 & \tfrac16 \\
16+\tfrac13 & -32+\tfrac43 & 16+\tfrac13 & 0 & 0 & 0 & 0 \\
0 & 16+\tfrac13 & -32+\tfrac43 & 16+\tfrac13 & 0 & 0 & 0 \\
0 & 0 & 16+\tfrac13 & -32+\tfrac43 & 16+\tfrac13 & 0 & 0 \\
0 & 0 & 0 & 16+\tfrac13 & -32+\tfrac43 & 16+\tfrac13 & 0 \\
0 & 0 & 0 & 0 & 16+\tfrac13 & -32+\tfrac43 & 16+\tfrac13
\end{bmatrix}
\begin{bmatrix} z_{-1} \\ z_0 \\ z_1 \\ z_2 \\ z_3 \\ z_4 \\ z_5 \end{bmatrix}
= \begin{bmatrix} 0 \\ 5 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}.$$

(b) The approximate solution of the system is

[z−1, z0, z1, z2, z3, z4, z5]^T ≈ [29.37614367, 31.29197913, 29.37614367, 23.86322940, 15.42828703, 5.104166667, −5.844953700]^T

The approximate solution of the ODE-BVP is
y4(x) ≈ 29.37614367 C−1(x) + 31.29197913 C0(x) + 29.37614367 C1(x) + 23.86322940 C2(x) + 15.42828703 C3(x) + 5.104166667 C4(x) − 5.844953700 C5(x).
(c) The linear homogeneous ODE has general solution y(x) = c₁ cos(√2 x) + c₂ sin(√2 x). Plug this into the BCs to get

(⋆) 0 = y′(0) = √2 c₂,  5 = y(1) = c₁ cos √2 + c₂ sin √2.

The exact solution of (⋆) is

y(x) = (5/cos √2) · cos(√2 x).

The solid, blue curve in the figure is the exact solution, as plotted by Mathematica. The thick dashed curve is the solution using cubic B-splines. The fact that the latter solution is twice continuously differentiable is apparent, and it agrees pretty well with the exact solution.
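A NumPy cross-check (not from the manual) of the 7×7 spline system for 8.9.6.4, assembled from the reconstructed rows above (BC rows first, then the five ODE replacement equations):

```python
import numpy as np

c = 16 + 1/3
d = -32 + 4/3
A = np.zeros((7, 7))
A[0, [0, 2]] = [-2.0, 2.0]       # BC (1): y'(0) = 0
A[1, 4:7] = [1/6, 4/6, 1/6]      # BC (2): y(1) = 5
for j in range(5):               # replacement equations (3)-(7)
    A[2 + j, j:j + 3] = [c, d, c]
b = np.zeros(7)
b[1] = 5.0

z = np.linalg.solve(A, b)        # [z_-1, z0, ..., z5]
print(z)
```

The solve reproduces all seven coefficients quoted in part (b), including the symmetry z−1 = z1 forced by BC (1).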

8.9.6.5. (a) For N = 4, h = 1/4, and our approximate solution is

y4(x) = Σ_{j=−1}^{5} zj Cj(x),

Figure 15: Answer for problem 8.9.6.4

where we will solve for the constants z−1, z0, ..., z5. Using (8.94), the boundary conditions require two equations,
(1) 1 = y(0) − y′(0) = (1/6)(z−1 + 4z0 + z1) − (2h)^{−1}(−z−1 + z1)
and
(2) 0 = y(1) = y4(x4) = (1/6)(z3 + 4z4 + z5).
The replacement equations (3)–(7) for the ODE are, respectively, for j = 0, 1, ..., 4,
(3)–(7) 16(zj−1 − 2zj + zj+1) = −xj.
We include replacement equations for the ODE at the endpoints x = x0 and x = x4 because the spline functions are twice continuously differentiable at the endpoints. Note that xj = j/4, for j = 0, 1, ..., 4. The system is

$$\begin{bmatrix}
\tfrac16+2 & \tfrac46 & \tfrac16-2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \tfrac16 & \tfrac46 & \tfrac16 \\
16 & -32 & 16 & 0 & 0 & 0 & 0 \\
0 & 16 & -32 & 16 & 0 & 0 & 0 \\
0 & 0 & 16 & -32 & 16 & 0 & 0 \\
0 & 0 & 0 & 16 & -32 & 16 & 0 \\
0 & 0 & 0 & 0 & 16 & -32 & 16
\end{bmatrix}
\begin{bmatrix} z_{-1} \\ z_0 \\ z_1 \\ z_2 \\ z_3 \\ z_4 \\ z_5 \end{bmatrix}
= \begin{bmatrix} 1 \\ 0 \\ 0 \\ -\tfrac14 \\ -\tfrac24 \\ -\tfrac34 \\ -\tfrac44 \end{bmatrix}.$$

(b) The approximate solution is

[z−1, z0, z1, z2, z3, z4, z5]^T ≈ [0.6875, 0.583333333, 0.479166667, 0.359375, 0.208333333, 0.0104166667, −0.25]^T

The approximate solution of the ODE-BVP is
y4(x) ≈ 0.6875 C−1(x) + 0.583333333 C0(x) + 0.479166667 C1(x) + 0.359375 C2(x) + 0.208333333 C3(x) + 0.0104166667 C4(x) − 0.25 C5(x).
(c) Integrate the ODE twice to get general solution y(x) = c₁x + c₂ − (1/6)x³. Plug this and y′(x) = c₁ − (1/2)x² into the BCs to get

(⋆) 1 = y(0) − y′(0) = c₂ − c₁,  0 = y(1) = c₁ + c₂ − 1/6.

The exact solution of (⋆) is

y(x) = −(5/12)x + 7/12 − (1/6)x³.

The solid, blue curve in the figure is the exact solution, as plotted by Mathematica. The thick dashed curve is the solution using cubic B-splines. The fact that the latter solution is twice continuously differentiable is apparent, and it agrees pretty well with the exact solution.
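A NumPy cross-check (not from the manual) of the 7×7 system for 8.9.6.5, also comparing the spline's nodal values (zj−1 + 4zj + zj+1)/6 with the exact cubic from part (c):

```python
import numpy as np

A = np.zeros((7, 7))
A[0, :3] = [1/6 + 2, 4/6, 1/6 - 2]   # BC (1): y(0) - y'(0) = 1
A[1, 4:7] = [1/6, 4/6, 1/6]          # BC (2): y(1) = 0
for j in range(5):                   # replacement equations (3)-(7)
    A[2 + j, j:j + 3] = [16.0, -32.0, 16.0]
b = np.array([1.0, 0.0, 0.0, -1/4, -2/4, -3/4, -1.0])

z = np.linalg.solve(A, b)
print(z)

# Nodal values of the spline, compared with y(x) = -5x/12 + 7/12 - x^3/6
exact = lambda x: -5*x/12 + 7/12 - x**3/6
nodal = np.array([(z[j] + 4*z[j+1] + z[j+2]) / 6 for j in range(5)])
print(nodal)
print(exact(np.arange(5) / 4))
```

Since the centered second difference is exact for cubics, the nodal values essentially coincide with the exact solution here.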

Figure 16: Answer for problem 8.9.6.5

8.9.6.6. Equation (8.87) is f(x) ≈ fN(x) ≜ y0T0(x) + ··· + yNTN(x). Using (8.86), where yj ≜ y(xj), each Tj is the piecewise linear "hat" function

Tj(x) = { (x − xj)/h + 1 on xj−1 ≤ x ≤ xj;  1 − (x − xj)/h on xj ≤ x ≤ xj+1;  0 for |x − xj| ≥ h }.

Restricting our attention to x0 ≤ x ≤ xN and rewriting terms using x1 = x0 + h, ..., xN = xN−1 + h: on each subinterval xj−1 ≤ x ≤ xj only Tj−1 and Tj are nonzero, so

fN(x) = yj−1 · (1 − (x − xj−1)/h) + yj · (x − xj−1)/h = yj−1 + (yj − yj−1) · (x − xj−1)/h = yj−1 + mj(x − xj−1),  xj−1 ≤ x ≤ xj,

that is, fN(x) = LN(x), where mj = (yj − yj−1)/h, for j = 1, ..., N, as we wanted to show.

8.9.6.7. For k = 1, we defined Nj,1(x) ≜ { 1 on xj ≤ x ≤ xj+1;  0 for all other x }. The recursive definition (8.103) is, for k = 2,

where mj =

 Nj,2 (x) = ωj,2 (x)Nj,1 (x) + 1 − ωj+1,2 (x) Nj+1,1 (x) where ωj,2 (x) =

x−xj xj+1 −xj

. For j = 0, ..., N − 2, using the definition of Nj,1 (x), Nj,2 (x) can be rewritten as

Nj,2 (x) = ωj,2 (x) ·

  1, 

0,

 xj ≤ x ≤ xj+1 

   1, + 1 − ωj+1,2 (x) ·   0,

 xj+1 ≤ x ≤ xj+2 

all other x

all other x

Using the definition of ωj,2 (x), we have      1,  1, xj ≤ x ≤ xj+1   x − xj+1 x − xj + 1− · · Nj,2 (x) =   xj+1 − xj  xj+1+1 − xj+1 0, all other x 0,

=

  

x − xj , xj+1 − xj

 xj ≤ x ≤ xj+1  

 

0,

all other x

 x − xj  ,   xj+1 − xj       x − xj+1 = 1 − ,   xj+2 − xj+1       0,

+

    1−

 

  

xj ≤ x ≤ xj+1

        

xj+1 ≤ x ≤ xj+2 all other x

x − xj+1 xj+2 − xj+1

0,

 x − xj  ,    xj+1 − xj     xj+2 − x = ,     x − xj+1 j+2             0,

 ,

,



 xj+1 ≤ x ≤ xj+2  all other x



  xj+1 ≤ x ≤ xj+2     

all other x xj ≤ x ≤ xj+1

        

xj+1 ≤ x ≤ xj+2  = Nj,2 (x),        all other x

as we wanted to show.


8.9.6.8. Continuity of ψ(x) on the real line follows from the calculations that

lim_{x→−1−} ψ(x) = (1/6) lim_{x→−1−} (x + 2)³ = 1/6 and
lim_{x→−1+} ψ(x) = (1/6) lim_{x→−1+} (−3(x + 1)³ + 3(x + 1)² + 3(x + 1) + 1) = (1/6)(−0 + 0 + 0 + 1) = 1/6, ✓
lim_{x→0−} ψ(x) = (1/6) lim_{x→0−} (−3(x + 1)³ + 3(x + 1)² + 3(x + 1) + 1) = (1/6)(−3 + 3 + 3 + 1) = 2/3 and
lim_{x→0+} ψ(x) = (1/6) lim_{x→0+} (−3(1 − x)³ + 3(1 − x)² + 3(1 − x) + 1) = (1/6)(−3 + 3 + 3 + 1) = 2/3, ✓
lim_{x→1−} ψ(x) = (1/6) lim_{x→1−} (−3(1 − x)³ + 3(1 − x)² + 3(1 − x) + 1) = (1/6)(−0 + 0 + 0 + 1) = 1/6 and
lim_{x→1+} ψ(x) = (1/6) lim_{x→1+} (2 − x)³ = 1/6. ✓

Continuity of ψ′(x) on the real line follows from the calculations that

lim_{x→−1−} ψ′(x) = (1/6) lim_{x→−1−} 3(x + 2)² = 1/2 and
lim_{x→−1+} ψ′(x) = (1/6) lim_{x→−1+} (−9(x + 1)² + 6(x + 1) + 3) = (1/6)(−0 + 0 + 3) = 1/2, ✓
lim_{x→0−} ψ′(x) = (1/6) lim_{x→0−} (−9(x + 1)² + 6(x + 1) + 3) = (1/6)(−9 + 6 + 3) = 0 and
lim_{x→0+} ψ′(x) = (1/6) lim_{x→0+} (9(1 − x)² − 6(1 − x) − 3) = (1/6)(9 − 6 − 3) = 0, ✓
lim_{x→1−} ψ′(x) = (1/6) lim_{x→1−} (9(1 − x)² − 6(1 − x) − 3) = (1/6)(0 − 0 − 3) = −1/2 and
lim_{x→1+} ψ′(x) = (1/6) lim_{x→1+} (−3(2 − x)²) = −1/2. ✓

Continuity of ψ″(x) on the real line follows from the calculations that

lim_{x→−1−} ψ″(x) = (1/6) lim_{x→−1−} 6(x + 2) = 1 and
lim_{x→−1+} ψ″(x) = (1/6) lim_{x→−1+} (−18(x + 1) + 6) = −0 + 1 = 1, ✓
lim_{x→0−} ψ″(x) = (1/6) lim_{x→0−} (−18(x + 1) + 6) = −3 + 1 = −2 and
lim_{x→0+} ψ″(x) = (1/6) lim_{x→0+} (−18(1 − x) + 6) = −3 + 1 = −2, ✓
lim_{x→1−} ψ″(x) = (1/6) lim_{x→1−} (−18(1 − x) + 6) = 0 + 1 = 1 and
lim_{x→1+} ψ″(x) = (1/6) lim_{x→1+} 6(2 − x) = 1, ✓

so ψ″(x) is continuous on the real line. These three calculations allow us to conclude that ψ(x) is twice continuously differentiable on the real line.

8.9.6.9. Choose any x in the interval [x0, xN]. Then there is an integer j with xj−1 ≤ x ≤ xj and 1 ≤ j ≤ N. Note that for this value of j, (8.84) gives
LN(x) = yj−1 + ((yj − yj−1)/(xj − xj−1)) · (x − xj−1).
Because Ti(x) = 0 for i < j − 1 and for i > j, (8.101) gives
fN(x) = yj−1 Tj−1(x) + yj Tj(x) = yj−1 · (1 − (x − xj−1)/(xj − xj−1)) + yj · (x − xj−1)/(xj − xj−1)
= yj−1 + (yj − yj−1) · (x − xj−1)/(xj − xj−1) = LN(x).

So, yes, (8.102) and (8.84) give the same piecewise linear approximation of f(x) for x0 ≤ x ≤ xN.
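The identity in 8.9.6.9 can also be spot-checked numerically; a small sketch (not from the manual) with arbitrary sample data on an equally spaced grid:

```python
import numpy as np

h = 0.5
xs = np.arange(5) * h                         # x_0, ..., x_4 on [0, 2]
ys = np.array([1.0, -0.5, 2.0, 0.25, -1.0])   # sample data y_j (arbitrary)

def f_N(x):
    """Sum of hat functions, as in (8.101)/(8.102)."""
    return sum(ys[j] * max(0.0, 1 - abs(x - xs[j]) / h) for j in range(5))

def L_N(x):
    """Piecewise linear interpolant, as in (8.84)."""
    j = min(int(x / h) + 1, 4)                # index with x_{j-1} <= x <= x_j
    m = (ys[j] - ys[j - 1]) / h
    return ys[j - 1] + m * (x - xs[j - 1])

diffs = [abs(f_N(x) - L_N(x)) for x in np.linspace(0, 2, 81)]
print(max(diffs))  # essentially zero
```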

8.9.6.10. If the xj's are equally spaced, that is, there is an h such that h = xj − xj−1 for 1 ≤ j ≤ N, then (8.101) reduces to

Tj(x) = { (x − xj−1)/h on xj−1 ≤ x ≤ xj;  1 − (x − xj)/h on xj ≤ x ≤ xj+1;  0 for |x − xj| ≥ h } = φ((x − xj)/h),

as in (8.86).

8.9.6.11. In the figure, (a) shows the curve and the control points; (b) shows that and the control polygon, too.

Figure 17: Answer for problem 8.9.6.11

8.9.6.12. In the figure, (a) shows the curve and the control points; (b) shows that and the control polygon, too.

Figure 18: Answer for problem 8.9.6.12


Chapter Nine

Section 9.1

9.1.8.1. L = 3, f(x) is an odd function, and
f(x) = { x, 0 < x < 1;  (3 − x)/2, 1 < x < 3 }.

λ > 0 is an eigenvalue if, and only if, sin(ωL) = 0. So, we have the trivial solution for the function X(x) unless ω satisfies the "characteristic equation" sin(ωL) = 0. Trigonometry implies that there are infinitely many values of ω that make this true: ω = nπ/L, any integer n. Now, n = 0 is an integer but gives λ = ω² = 0, which is not allowed in Case 2. Additionally, while any integer n < 0, say n = −m, does give ω = −mπ/L that satisfies sin(ωL) = 0, it turns out that n < 0 gives no eigenfunction for X(x) beyond the ones we get for n > 0. Why? Because, if n = −m then X(x) = cos ωx = cos(−mπx/L) = cos(mπx/L), which duplicates the eigenfunction X = cos(mπx/L).
The case λ > 0 gives eigenvalues λn = (nπ/L)², n = 1, 2, ..., and corresponding eigenfunctions Xn(x) = cos(nπx/L), n = 1, 2, ... .
Case 3: If λ < 0, rewrite λ = −ω², where ω ≜ √(−λ). The differential equation X″(x) + λX(x) = 0 is X″(x) − ω²X(x) = 0, whose solutions are X = c₁ cosh(ωx) + c₂ sinh(ωx), for arbitrary constants c₁, c₂. In this case, X′(x) = ωc₁ sinh ωx + ωc₂ cosh ωx. Applying the first BC gives 0 = X′(0) = ωc₁ · 0 + ωc₂ · 1 = ωc₂. Because ω > 0, this implies c₂ = 0. So, X = c₁ cosh ωx. Applying the second BC gives 0 = X′(L) = ωc₁ sinh ωL. Note that ω > 0 implies sinh ωL > 0. This implies c₁ = 0, so there is no eigenfunction if λ < 0.

© Larry Turyn, November 19, 2013

9.3.3.2. Case 1: If λ = 0, then the differential equation X″(x) + λX(x) = 0 is X″(x) = 0, whose solutions are X = c₁ + c₂x, for arbitrary constants c₁, c₂. In this case, X′(x) = c₂. Applying the first BC gives 0 = X(0) = c₁. Applying the second BC gives 0 = X′(L) = c₂. So, both BCs are satisfied if, and only if, c₁ = c₂ = 0. When λ = 0, the ODE-BVP has only the trivial solution. So, λ = 0 is not an eigenvalue for this problem.
Case 2: If λ > 0, rewrite λ = ω², where ω ≜ √λ > 0. The differential equation X″(x) + λX(x) = 0 is X″(x) + ω²X(x) = 0, whose solutions are X = c₁ cos ωx + c₂ sin ωx, for arbitrary constants c₁, c₂. In this case, X′(x) = −ωc₁ sin ωx + ωc₂ cos ωx. Applying the first BC gives 0 = X(0) = c₁. So, X = c₂ sin ωx. Applying the second BC gives 0 = X′(L) = ωc₂ cos ωL. Because ω > 0 and we need c₂ ≠ 0 in order to have an eigenfunction of the form X = c₂ sin ωx, λ > 0 is an eigenvalue if, and only if, cos(ωL) = 0. So, we have the trivial solution for the function X(x) unless ω satisfies the "characteristic equation" cos(ωL) = 0. Trigonometry implies that there are infinitely many values of ω that make this true: ω = (n − 1/2)π/L, any integer n. While any integer n ≤ 0, say n = −m, does give ω = −(m + 1/2)π/L that satisfies cos(ωL) = 0, it turns out that n ≤ 0 gives no eigenfunction for X(x) beyond the ones we get for n > 0. Why? Because, if n = −m then X(x) = sin ωx = sin(−(m + 1/2)πx/L) = −sin((m + 1/2)πx/L), which essentially duplicates the eigenfunction X = sin((m + 1/2)πx/L).
The case λ > 0 gives eigenvalues λn = ((n − 1/2)π/L)², n = 1, 2, ..., and corresponding eigenfunctions Xn(x) = sin((n − 1/2)πx/L), n = 1, 2, ... .
Case 3: If λ < 0, rewrite λ = −ω², where ω ≜ √(−λ). The differential equation X″(x) + λX(x) = 0 is X″(x) − ω²X(x) = 0, whose solutions are X = c₁ cosh(ωx) + c₂ sinh(ωx), for arbitrary constants c₁, c₂. In this case, X′(x) = ωc₁ sinh ωx + ωc₂ cosh ωx.
Applying the first BC gives 0 = X(0) = c₁. So, X = c₂ sinh ωx. Applying the second BC gives 0 = X′(L) = ωc₂ cosh ωL. Note that ω > 0 implies cosh ωL > 0. This implies c₂ = 0, so there is no eigenfunction if λ < 0.

9.3.3.3. Case 1: If λ = 0, then the differential equation X″(x) + λX(x) = 0 is X″(x) = 0, whose solutions are X = c₁ + c₂x, for arbitrary constants c₁, c₂. In this case, X′(x) = c₂. Applying the first BC gives 0 = X′(0) = c₂, hence c₂ = 0 and X(x) ≡ c₁. Applying the second BC gives 0 = X(L) = c₁. So, both BCs are satisfied if, and only if, c₁ = c₂ = 0. When λ = 0, the ODE-BVP has only the trivial solution. So, λ = 0 is not an eigenvalue for this problem.
Case 2: If λ > 0, rewrite λ = ω², where ω ≜ √λ > 0. The differential equation X″(x) + λX(x) = 0 is X″(x) + ω²X(x) = 0, whose solutions are X = c₁ cos ωx + c₂ sin ωx, for arbitrary constants c₁, c₂. In this case, X′(x) = −ωc₁ sin ωx + ωc₂ cos ωx. Applying the first BC gives 0 = X′(0) = −ωc₁ · 0 + ωc₂ · 1 = ωc₂. Because ω > 0, this implies c₂ = 0. So, X = c₁ cos ωx. Applying the second BC gives 0 = X(L) = c₁ cos ωL. Because ω > 0 and we need c₁ ≠ 0 in order to have an eigenfunction, λ > 0 is an eigenvalue if, and only if, cos(ωL) = 0. So, we have the trivial solution for the function X(x) unless ω satisfies the "characteristic equation" cos(ωL) = 0.


Trigonometry implies that there are infinitely many values of ω that make this true: ω = (n − 1/2)π/L, any integer n. While any integer n ≤ 0, say n = −m, does give ω = −(m + 1/2)π/L that satisfies cos(ωL) = 0, it turns out that n ≤ 0 gives no eigenfunction for X(x) beyond the ones we get for n > 0. Why? Because, if n = −m then X(x) = cos ωx = cos(−(m + 1/2)πx/L) = cos((m + 1/2)πx/L), which duplicates the eigenfunction X = cos((m + 1/2)πx/L).
The case λ > 0 gives eigenvalues λn = ((n − 1/2)π/L)², n = 1, 2, ..., and corresponding eigenfunctions Xn(x) = cos((n − 1/2)πx/L), n = 1, 2, ...
Case 3: If λ < 0, rewrite λ = −ω², where ω ≜ √(−λ). The differential equation X″(x) + λX(x) = 0 is X″(x) − ω²X(x) = 0, whose solutions are X = c₁ cosh(ωx) + c₂ sinh(ωx), for arbitrary constants c₁, c₂. In this case, X′(x) = ωc₁ sinh ωx + ωc₂ cosh ωx. Applying the first BC gives 0 = X′(0) = ωc₁ · 0 + ωc₂ · 1 = ωc₂. Because ω > 0, this implies c₂ = 0. So, X = c₁ cosh ωx. Applying the second BC gives 0 = X(L) = c₁ cosh ωL. Note that ω > 0 implies cosh ωL > 0. This implies c₁ = 0, so there is no eigenfunction if λ < 0.

9.3.3.4. Plug the solutions of the ODE, in the form X = c₁ cosh(ωx) + c₂ sinh(ωx), where ω ≜ √(−λ), along with X′ = ωc₁ sinh(ωx) + ωc₂ cosh(ωx), into the two BCs to get

0 = X(−π) − X(π) = c₁ cosh(−ωπ) + c₂ sinh(−ωπ) − c₁ cosh(ωπ) − c₂ sinh(ωπ)
and
0 = X′(−π) − X′(π) = ωc₁ sinh(−ωπ) + ωc₂ cosh(−ωπ) − ωc₁ sinh(ωπ) − ωc₂ cosh(ωπ).
But cosh( ) and sinh( ) are even and odd functions, respectively, so the above two equations are
0 = c₁ cosh ωπ − c₂ sinh ωπ − c₁ cosh ωπ − c₂ sinh ωπ,
0 = −ωc₁ sinh ωπ + ωc₂ cosh ωπ − ωc₁ sinh ωπ − ωc₂ cosh ωπ,
that is,
0 = −2c₂ sinh ωπ,
0 = −2ωc₁ sinh ωπ.



Because ω > 0 implies sinh ωπ > 0 and cosh ωπ > 0, these two equations imply c₂ = 0 and c₁ = 0. So, λ < 0 cannot be an eigenvalue for ODE-BVP (9.26).

9.3.3.5. Suppose λ > 0 is an eigenvalue for (9.20)–(9.21). Let ω = √λ > 0, so ODE (9.20) becomes X″ + ω²X = 0, whose solutions are X = c₁ cos ωx + c₂ sin ωx, where c₁, c₂ are arbitrary constants. In this case, X′ = −ωc₁ sin ωx + ωc₂ cos ωx. Plug the solutions into the BCs (9.21) to get
0 = α₀X(0) − α₁X′(0) = α₀c₁ − α₁ωc₂,
0 = γ₀X(L) + γ₁X′(L) = γ₀(c₁ cos ωL + c₂ sin ωL) + γ₁ω(−c₁ sin ωL + c₂ cos ωL),
that is,
0 = α₀c₁ − α₁ωc₂,
0 = (γ₀ cos ωL − γ₁ω sin ωL)c₁ + (γ₀ sin ωL + γ₁ω cos ωL)c₂.
This can be written using a matrix and vector:

$$\begin{bmatrix} \alpha_0 & -\omega\alpha_1 \\ \gamma_0\cos\omega L - \gamma_1\omega\sin\omega L & \gamma_0\sin\omega L + \gamma_1\omega\cos\omega L \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

 c1 There is an eigenvalue if, and only if, the system has a non-trivial solution for , which is true if and c2 only if   0 −ω 1  0 = det  γ0 cos ωL−γ1 ω sin ωL γ0 sin ωL+γ1 ω cos ωL √ √ √ =0 (γ0 sin ωL+γ1 ω cos ωL)+ω 1 (γ0 cos ωL−γ1 ω sin ωL)= (0 γ0 −λ 1 γ1 ) sin( λ L)+(0 γ1 +1 γ0 ) λ cos( λ L). that is if and only if λ satisfies characteristic equation (9.24). 9.3.3.6. The ODE is not (9.20) in Section 9.3, so we have to work from scratch, as in Example 9.14 in Section 9.3. Plug X = esx into the ODE to get 0 = s2 + 2s + (λ + 1) = (s + 1)2 + λ. Case 1: If λ = 0, then the roots are s = −1, −1, so the solutions of the differential equation are X = c1 e−x + c2 xe−x , for arbitrary constants c1 , c2 . Applying the first BC gives 0 = X(0) = c1 , so X = c2 xe−x . Applying the second BC gives 0 = X(L) = c2 Le−L , which implies c2 = 0. So, both BCs are satisfied if, and only if, c1 = c2 = 0. When λ = 0, the ODE-BVP has only the trivial solution. So, λ = 0 is not an eigenvalue for this problem. √ Case 2: If λ > 0, rewrite λ = ω 2 , where ω , λ > 0. The roots are s = −1 ± iω. The differential equation has solutions X = e−x (c1 cos ωx + c2 sin ωx), for arbitrary constants c1 , c2 . Applying the first BC gives 0 = X(0) = c1 · 1 + c2 · 0 = c1 . So, X = c2 e−x sin ωx. Applying the second BC gives 0 = X(L) = c2 e−L sin ωL. Because ω > 0 and we need c2 6= 0 in order to have an eigenfunction, λ > 0 is an eigenvalue if, and only if, sin(ωL) = 0. nπ , any integer Trigonometry implies that there are infinitely many values of ω that make this true: ω = L n. −mπ While any integer n ≤ 0, say n = −m, does give ω = that satisfies sin(ωL) = 0, it turns out L that n ≤ 0 gives no eigenfunction for X(x) the ones  beyond  we get for n > 0. Why? Because, if n = −m −mπx mπx −x −x then X(x) = e−x sin ωx = e sin = −e sin , which essentially duplicates the eigenfunction L L  X = e−x sin mπx . 
L  nπ 2 The case λ > 0 gives eigenvalues λn = , n = 1, 2, ..., and corresponding eigenfunctions Xn (x) = L  nπx −x e sin L , n = 1, 2, ... √ Case 3: If λ < 0, rewrite λ = −ω 2 , where ω , −λ  > 0. The roots are s = −1 ± ω. The differential equation has solutions X = e−x c1 cosh(ωx) + c2 sinh(ωx) , for arbitrary constants c1 , c2 . Applying the first BC gives 0 = X(0) = c1 · 1 + c2 · 0 = c1 . So, X = c2 e−x sinh ωx. Applying the second BC gives 0 = X(L) = c2 e−L sinh ωL. Note that ω > 0 implies sinh ωL > 0. This implies c2 = 0, so there is no eigenfunction if λ < 0. c Larry

Turyn, November 19, 2013

page 31

9.3.3.7. The ODE is not (9.20) in Section 9.3, so we have to work from scratch, as in Example 9.14 in Section 9.3. Plug $X = e^{sx}$ into the ODE to get $0 = s^2 + 2s + (\lambda + 1) = (s+1)^2 + \lambda$.

Case 1: If $\lambda = 0$, then the roots are $s = -1, -1$, so the solutions of the differential equation are $X = c_1 e^{-x} + c_2 x e^{-x}$, for arbitrary constants $c_1, c_2$. In this case, $X' = -c_1 e^{-x} + c_2(1-x)e^{-x}$. Applying the first BC gives $0 = X'(0) = -c_1 + c_2$, which implies $c_2 = c_1$, hence $X = c_1\big(e^{-x} + x e^{-x}\big)$ and $X' = c_1\big({-e^{-x}} + (1-x)e^{-x}\big) = -c_1 x e^{-x}$. Applying the second BC gives $0 = X'(L) = -c_1 L e^{-L}$, which implies $c_1 = 0$. So, both BCs are satisfied if, and only if, $c_1 = c_2 = 0$. When $\lambda = 0$, the ODE-BVP has only the trivial solution. So, $\lambda = 0$ is not an eigenvalue for this problem.

Case 2: If $\lambda > 0$, rewrite $\lambda = \omega^2$, where $\omega \triangleq \sqrt{\lambda} > 0$. The roots are $s = -1 \pm i\omega$. The differential equation has solutions $X = e^{-x}(c_1\cos\omega x + c_2\sin\omega x)$, for arbitrary constants $c_1, c_2$. In this case,
$$X' = e^{-x}\big({-c_1}(\cos\omega x + \omega\sin\omega x) + c_2(\omega\cos\omega x - \sin\omega x)\big).$$
Applying the first BC gives $0 = X'(0) = -c_1\cdot 1 + \omega c_2\cdot 1 = -c_1 + \omega c_2$, which implies $c_1 = \omega c_2$. So, $X = c_2 e^{-x}(\omega\cos\omega x + \sin\omega x)$ and $X' = -c_2(\omega^2+1)e^{-x}\sin\omega x$. Applying the second BC gives $0 = X'(L) = -c_2(\omega^2+1)e^{-L}\sin\omega L$. Because $\omega > 0$ and we need $c_2 \ne 0$ in order to have an eigenfunction, $\lambda > 0$ is an eigenvalue if, and only if, $\sin(\omega L) = 0$. Trigonometry implies that there are infinitely many values of $\omega$ that make this true: $\omega = \frac{n\pi}{L}$, any integer $n$.

While any integer $n \le 0$, say $n = -m$, does give $\omega = \frac{-m\pi}{L}$ that satisfies $\sin(\omega L) = 0$, it turns out that $n \le 0$ gives no eigenfunction for $X(x)$ beyond the ones we get for $n > 0$. Why? Because, if $n = -m$ then
$$X = e^{-x}(\omega\cos\omega x + \sin\omega x) = e^{-x}\Big(\frac{-m\pi}{L}\cos\Big(\frac{-m\pi x}{L}\Big) + \sin\Big(\frac{-m\pi x}{L}\Big)\Big) = -e^{-x}\Big(\frac{m\pi}{L}\cos\Big(\frac{m\pi x}{L}\Big) + \sin\Big(\frac{m\pi x}{L}\Big)\Big),$$
which essentially duplicates the eigenfunction $X = e^{-x}\big(\frac{m\pi}{L}\cos\big(\frac{m\pi x}{L}\big) + \sin\big(\frac{m\pi x}{L}\big)\big)$.

The case $\lambda > 0$ gives eigenvalues $\lambda_n = \left(\frac{n\pi}{L}\right)^2$, $n = 1, 2, \dots$, and corresponding eigenfunctions
$$X_n(x) = e^{-x}\Big(\frac{n\pi}{L}\cos\Big(\frac{n\pi x}{L}\Big) + \sin\Big(\frac{n\pi x}{L}\Big)\Big), \quad n = 1, 2, \dots$$

Case 3: If $\lambda < 0$, rewrite $\lambda = -\omega^2$, where $\omega \triangleq \sqrt{-\lambda} > 0$. The roots are $s = -1 \pm \omega$. The differential equation has solutions $X = e^{-x}\big(c_1\cosh(\omega x) + c_2\sinh(\omega x)\big)$, for arbitrary constants $c_1, c_2$. In this case,
$$X' = e^{-x}\big(c_1(-\cosh\omega x + \omega\sinh\omega x) + c_2(\omega\cosh\omega x - \sinh\omega x)\big).$$
Applying the first BC gives $0 = X'(0) = -c_1\cdot 1 + \omega c_2\cdot 1 = -c_1 + \omega c_2$, which implies $c_1 = \omega c_2$. So, $X = c_2 e^{-x}(\omega\cosh\omega x + \sinh\omega x)$ and $X' = c_2(\omega^2-1)e^{-x}\sinh\omega x$. Applying the second BC gives $0 = X'(L) = c_2(\omega^2-1)e^{-L}\sinh\omega L$. Because $\omega > 0$ implies $\sinh\omega L > 0$, and we need $c_2 \ne 0$ in order to have an eigenfunction, $\lambda < 0$ gives an eigenvalue if, and only if, $\omega = 1$, hence $\lambda_0 = -\omega^2 = -1$, for which the corresponding eigenfunction is
$$X_0(x) = e^{-x}\cdot(\cosh x + \sinh x) = e^{-x}\cdot\Big(\frac{e^x+e^{-x}}{2} + \frac{e^x-e^{-x}}{2}\Big) = e^{-x}\cdot e^x \equiv 1.$$


In retrospect, it is at least “familiar” that we should have the constant eigenfunction $X_0(x) \equiv 1$ for the homogeneous Neumann BCs.

9.3.3.8. The functions $\sin\big((2n-1)x\big)$, $n = 1, 2, \dots$ are orthogonal on the interval $0 < x < \frac{\pi}{2}$. As on page 770 of the textbook, we have a generalized Fourier expansion
$$f(x) \doteq \sum_{n=1}^{\infty} c_n\sin\big((2n-1)x\big), \quad 0 < x < \frac{\pi}{2},$$
where
$$c_n = \frac{\int_0^{\pi/2} f(x)\sin\big((2n-1)x\big)\,dx}{\int_0^{\pi/2}\big|\sin\big((2n-1)x\big)\big|^2\,dx} = \frac{2}{\pi/2}\int_0^{\pi/2} f(x)\sin\big((2n-1)x\big)\,dx.$$
Here, $f(x) = \sin x$. We do not need to calculate the integrals, because the generalized Fourier expansion is
$$\sin x \doteq c_1\sin x + c_2\sin 3x + c_3\sin 5x + \dots,$$
that is, $c_1 = 1$, $c_2 = c_3 = \dots = 0$. The generalized Fourier expansion is $f(x) \doteq \sin x$.

9.3.3.9. The functions $\cos\big(\big(n-\frac12\big)x\big)$, $n = 1, 2, \dots$ are orthogonal on the interval $0 < x < \pi$. As on page 687, we have a generalized Fourier expansion
$$f(x) \doteq \sum_{n=1}^{\infty} c_n\cos\Big(\big(n-\tfrac12\big)x\Big), \quad 0 < x < \pi,$$
where
$$c_n = \frac{\int_0^{\pi} f(x)\cos\big(\big(n-\frac12\big)x\big)\,dx}{\int_0^{\pi}\big|\cos\big(\big(n-\frac12\big)x\big)\big|^2\,dx} = \frac{2}{\pi}\int_0^{\pi} f(x)\cos\Big(\big(n-\tfrac12\big)x\Big)\,dx.$$
Here, $f(x) = \sin x$. We calculate
$$c_n = \frac{2}{\pi}\int_0^{\pi}\sin x\cos\Big(\big(n-\tfrac12\big)x\Big)dx = \frac{1}{\pi}\int_0^{\pi}\Big(\sin\Big(\big(1-(n-\tfrac12)\big)x\Big) + \sin\Big(\big(1+(n-\tfrac12)\big)x\Big)\Big)dx$$
$$= \frac{1}{\pi}\int_0^{\pi}\Big(\sin\Big(\big(\tfrac32-n\big)x\Big) + \sin\Big(\big(\tfrac12+n\big)x\Big)\Big)dx = \frac{1}{\pi}\Bigg[\frac{\cos\big((\frac32-n)x\big)}{-(\frac32-n)} + \frac{\cos\big((\frac12+n)x\big)}{-(\frac12+n)}\Bigg]_0^{\pi}$$
$$= \frac{1}{\pi}\Bigg(\frac{\cos\big((\frac32-n)\pi\big)-1}{-(\frac32-n)} + \frac{\cos\big((\frac12+n)\pi\big)-1}{-(\frac12+n)}\Bigg) = \frac{1}{\pi}\Bigg(\frac{0-1}{-(\frac32-n)} + \frac{0-1}{-(\frac12+n)}\Bigg) = \frac{1}{\pi}\Bigg({-\frac{1}{n-\frac32}} + \frac{1}{n+\frac12}\Bigg)$$
$$= \frac{1}{\pi}\cdot\frac{-2}{(n-\frac32)(n+\frac12)} = \frac{1}{\pi}\cdot\frac{-2}{n^2-n-\frac34} = -\frac{8}{\pi}\cdot\frac{1}{4n^2-4n-3}.$$
So, the generalized Fourier expansion is
$$f(x) \doteq -\frac{8}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2-4n-3}\cos\Big(\big(n-\tfrac12\big)x\Big).$$
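The closed-form coefficients above can be spot-checked numerically. The following is a quick sanity check (not part of the manual), assuming only the Python standard library; the function names are ours. It compares a midpoint-rule quadrature of $\frac{2}{\pi}\int_0^\pi \sin x\cos\big((n-\frac12)x\big)dx$ against the derived formula.

```python
import math

def coeff_quadrature(n, m=20000):
    """Midpoint-rule estimate of c_n = (2/pi) * int_0^pi sin(x) cos((n - 1/2) x) dx."""
    h = math.pi / m
    total = 0.0
    for k in range(m):
        x = (k + 0.5) * h
        total += math.sin(x) * math.cos((n - 0.5) * x)
    return (2.0 / math.pi) * total * h

def coeff_formula(n):
    """Closed form derived in the solution: c_n = -(8/pi) / (4 n^2 - 4 n - 3)."""
    return -(8.0 / math.pi) / (4 * n * n - 4 * n - 3)
```

For each small $n$ the two agree to quadrature accuracy, e.g. $c_1 = 8/(3\pi) \approx 0.8488$.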

9.3.3.10. The last sentence of Theorem 9.6 says that there are only positive eigenvalues, because (in the notation of Theorem 9.6) the BC coefficients satisfy $\alpha_0 = 0 \ge 0$, $\alpha_1 = 1 \ge 0$, $\gamma_0 = 1 \ge 0$, and $\gamma_1 = h \ge 0$, and not both of $\alpha_0 = 0$ and $\gamma_0 = 1$ are zero.

So, all of the eigenvalues satisfy (9.24), the characteristic equation found in Theorem 9.6:
$$0 = (0\cdot 1 - \lambda\cdot 1\cdot h)\sin(\sqrt{\lambda}\,\pi) + (0\cdot h + 1\cdot 1)\sqrt{\lambda}\cos(\sqrt{\lambda}\,\pi),$$
that is,
$$0 = -h\lambda\sin(\sqrt{\lambda}\,\pi) + \sqrt{\lambda}\cos(\sqrt{\lambda}\,\pi).$$
Define $\omega \triangleq \sqrt{\lambda} > 0$. The characteristic equation is
$$0 = -h\,\omega^2\sin\pi\omega + \omega\cos\pi\omega.$$
Because $\omega > 0$, we can divide through by $\omega$ to get characteristic equation $0 = -h\,\omega\sin\pi\omega + \cos\pi\omega$. If $\cos\pi\omega \ne 0$ and we define $\theta = \pi\omega$, the characteristic equation is equivalent to
$$\tan\theta = \frac{\pi}{h\theta}.$$
Graphical analysis is shown in the figure. If $\cos\pi\omega = 0$ then the characteristic equation reduces to $0 = -h\,\omega\sin\pi\omega$, which implies $\sin\pi\omega = 0$. But it is impossible to have $\cos\pi\omega = \sin\pi\omega = 0$, so we get no eigenvalue from the case where $\cos\pi\omega = 0$.

Figure 30: Problem 9.3.3.10
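The graphical analysis locates one intersection of $\tan\theta$ with $\pi/(h\theta)$ in each interval $(n\pi,\, n\pi + \frac{\pi}{2})$. A minimal numerical sketch of that idea (not from the manual; plain-Python bisection with our own function names, taking $h = 1$ as an illustrative value):

```python
import math

def char_root(n, h=1.0):
    """Bisection for tan(theta) = pi/(h*theta) on (n*pi, n*pi + pi/2), n = 0, 1, 2, ...

    Each such interval contains exactly one root, since tan(theta) increases from 0
    toward +infinity there, while pi/(h*theta) is positive and decreasing.
    """
    f = lambda th: math.tan(th) - math.pi / (h * th)
    lo = n * math.pi + 1e-9
    hi = n * math.pi + math.pi / 2 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Each returned $\theta_n$ gives an eigenvalue $\lambda_n = \omega_n^2 = (\theta_n/\pi)^2$, and the original form $0 = -h\omega^2\sin\pi\omega + \omega\cos\pi\omega$ can be used as a residual check.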

9.3.3.11. The ODE is not (9.20), so we have to work from scratch, as in Example 9.14 in Section 9.3. The problem assumes that $\mu > 0$. Plug $X = e^{sx}$ into the ODE to get $0 = s^2 + 2\mu s + 2\mu^2$, so
$$s = \frac{-2\mu \pm \sqrt{(2\mu)^2 - 8\mu^2}}{2} = -\mu \pm i\mu.$$
The differential equation has solutions $X = e^{-\mu x}(c_1\cos\mu x + c_2\sin\mu x)$, for arbitrary constants $c_1, c_2$. In this case,
$$X' = \mu e^{-\mu x}\big({-c_1}(\cos\mu x + \sin\mu x) + c_2(\cos\mu x - \sin\mu x)\big).$$
Applying the first BC gives $0 = X(0) = c_1\cdot 1 + c_2\cdot 0 = c_1$, so $X = c_2 e^{-\mu x}\sin\mu x$ and $X' = c_2\,\mu e^{-\mu x}(\cos\mu x - \sin\mu x)$. Applying the second BC gives $0 = X'(L) = c_2\,\mu e^{-\mu L}(\cos\mu L - \sin\mu L)$.

Because $\mu > 0$ and we need $c_2 \ne 0$ in order to have an eigenfunction, $\mu$ gives an eigenvalue if, and only if, $\cos\mu L - \sin\mu L = 0$, which is the characteristic equation. Because $\cos\mu L = \sin\mu L = 0$ is impossible, the characteristic equation can be rewritten as $1 = \tan\mu L$. The eigenvalues are given by
$$\mu_n = \frac{1}{L}\Big(\frac{\pi}{4} + n\pi\Big), \quad n = 0, 1, 2, \dots$$


Section 9.4

9.4.3.1. $L = \pi$, so $\omega_n = n$ and
$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-i\omega_n x}dx = \frac{1}{2\pi}\int_{-\pi}^{0}\cos 3x\,e^{-inx}dx + \frac{1}{2\pi}\int_{0}^{\pi} 0\cdot e^{-inx}dx = \frac{1}{2\pi}\int_{-\pi}^{0}\frac{e^{i3x}+e^{-i3x}}{2}e^{-inx}dx$$
$$= \frac{1}{4\pi}\int_{-\pi}^{0}\big(e^{i(3-n)x} + e^{-i(3+n)x}\big)dx. \quad (\star)$$
If $n = 3$,
$$c_3 = \frac{1}{4\pi}\int_{-\pi}^{0}\big(1 + e^{-i6x}\big)dx = \frac{1}{4\pi}\Big[x + \frac{e^{-i6x}}{-i6}\Big]_{-\pi}^{0} = \frac{1}{4\pi}\Big(0-(-\pi) + \frac{1-1}{-i6}\Big) = \frac14.$$
If $n = -3$,
$$c_{-3} = \frac{1}{4\pi}\int_{-\pi}^{0}\big(e^{i6x} + 1\big)dx = \frac{1}{4\pi}\Big[\frac{e^{i6x}}{i6} + x\Big]_{-\pi}^{0} = \frac{1}{4\pi}\Big(\frac{1-1}{i6} + 0-(-\pi)\Big) = \frac14.$$
Using $(\star)$ above, if $|n| \ne 3$,
$$c_n = \frac{1}{4\pi}\Big[\frac{e^{i(3-n)x}}{i(3-n)} + \frac{e^{-i(3+n)x}}{-i(3+n)}\Big]_{-\pi}^{0} = \frac{1}{4\pi}\Big(\frac{1-e^{-i(3-n)\pi}}{i(3-n)} + \frac{1-e^{i(3+n)\pi}}{-i(3+n)}\Big) = \frac{1}{4\pi}\Big(\frac{1-(-1)^{3-n}}{i(3-n)} + \frac{1-(-1)^{3+n}}{-i(3+n)}\Big)$$
$$= \frac{1+(-1)^n}{4\pi i}\Big(\frac{1}{3-n} - \frac{1}{3+n}\Big) = \frac{1+(-1)^n}{4\pi i}\cdot\frac{2n}{9-n^2}.$$
Using $\frac1i = -i$ and the fact that
$$1+(-1)^n = \begin{cases} 2, & n = 2k \\ 0, & n = 2k-1 \end{cases},$$
if $|n| \ne 3$ then $c_n = 0$ for odd $n$ and
$$c_{2k} = \frac{2\cdot(-i)}{4\pi}\cdot\frac{2\cdot 2k}{9-(2k)^2} = \frac{i2}{\pi}\cdot\frac{k}{(2k)^2-9}.$$
The complex Fourier series representation is
$$f(x) \doteq \frac14\big(e^{i3x} + e^{-i3x}\big) + \frac{i2}{\pi}\sum_{k=-\infty}^{\infty}\frac{k}{4k^2-9}\,e^{i2kx}.$$
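These coefficients are easy to confirm by quadrature. The following is a quick numerical check (not from the manual), assuming only the Python standard library; the function name is ours. It evaluates $c_n = \frac{1}{2\pi}\int_{-\pi}^{0}\cos 3x\,e^{-inx}dx$ directly by the midpoint rule.

```python
import cmath
import math

def c_n(n, m=20000):
    """Midpoint-rule estimate of c_n = (1/2pi) * int_{-pi}^{0} cos(3x) e^{-i n x} dx
    (the integrand vanishes on (0, pi), where f is zero)."""
    h = math.pi / m
    total = 0.0 + 0.0j
    for k in range(m):
        x = -math.pi + (k + 0.5) * h
        total += math.cos(3 * x) * cmath.exp(-1j * n * x)
    return total * h / (2 * math.pi)
```

For instance, $c_{\pm3} = \frac14$, $c_1 = 0$, and $c_2 = \frac{i2}{\pi}\cdot\frac{1}{4-9} = -\frac{2i}{5\pi}$.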

9.4.3.2. Using the second part of formula (9.31) with $2L = 6$, so $\omega_n = \frac{n\pi}{3}$, we calculate
$$c_n = \frac16\int_0^6 f(t)e^{-i\omega_n t}dt = \frac16\Big(\int_0^4(-1)e^{-in\pi t/3}dt + \int_4^6 2\,e^{-in\pi t/3}dt\Big). \quad (\star)$$
If $n = 0$,
$$c_0 = \frac16\Big(\int_0^4(-1)\cdot 1\,dt + \int_4^6 2\cdot 1\,dt\Big) = 0.$$
Using $(\star)$, if $n \ne 0$,
$$c_n = \frac16\Bigg(\Big[\frac{e^{-in\pi t/3}}{in\pi/3}\Big]_0^4 - \Big[\frac{2e^{-in\pi t/3}}{in\pi/3}\Big]_4^6\Bigg) = \frac16\Big(\frac{e^{-i4n\pi/3}-1}{in\pi/3} - \frac{2e^{-i2n\pi}-2e^{-i4n\pi/3}}{in\pi/3}\Big)$$
$$= \frac16\Big(\frac{e^{-i4n\pi/3}-1}{in\pi/3} - \frac{2-2e^{-i4n\pi/3}}{in\pi/3}\Big) = \frac16\cdot\frac{3\big(e^{-i4n\pi/3}-1\big)}{in\pi/3} = \frac{3}{i2n\pi}\big(e^{-i4n\pi/3}-1\big) = \frac{i3}{2n\pi}\big(1-e^{-i4n\pi/3}\big).$$
The complex Fourier series representation is
$$f(t) \doteq \frac{i3}{2\pi}\sum_{n\ne 0}\frac1n\big(1-e^{-i4n\pi/3}\big)e^{in\pi t/3}.$$

9.4.3.3. $L = \pi$, so $\omega_n = n$ and the complex Fourier series is the function itself, because
$$f(x) = 1 - i3\sin 2x + \frac12\cos 5x = 1 - i3\cdot\frac{e^{i2x}-e^{-i2x}}{i2} + \frac12\cdot\frac{e^{i5x}+e^{-i5x}}{2}$$
$$= \frac14 e^{-i5x} + \frac32 e^{-i2x} + 1 - \frac32 e^{i2x} + \frac14 e^{i5x}.$$

9.4.3.4. $L = 1$, so $\omega_n = n\pi$ and we calculate
$$c_n = \frac12\int_{-1}^{1} f(t)e^{-i\omega_n t}dt = \frac12\int_{-1}^{1} e^{i\pi t/2}e^{-in\pi t}dt = \frac12\int_{-1}^{1} e^{-i(2n-1)\pi t/2}dt = \frac12\Big[\frac{e^{-i(2n-1)\pi t/2}}{-i(2n-1)\pi/2}\Big]_{-1}^{1}$$
$$= \frac12\cdot\frac{e^{-i(2n-1)\pi/2}-e^{i(2n-1)\pi/2}}{-i(2n-1)\pi/2} = \frac{\sin\big((2n-1)\pi/2\big)}{(2n-1)\pi/2} = \frac{2}{(2n-1)\pi}\sin\Big(\big(n-\tfrac12\big)\pi\Big) = \frac{2}{(2n-1)\pi}\cdot(-1)^{n+1}.$$
The complex Fourier series representation is
$$f(t) \doteq \frac{2}{\pi}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n+1}}{2n-1}e^{in\pi t}.$$
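A quick numerical check of these coefficients (not from the manual; standard-library Python only, function name ours), comparing a midpoint-rule quadrature of $\frac12\int_{-1}^{1}e^{i\pi t/2}e^{-in\pi t}dt$ against $\frac{2(-1)^{n+1}}{(2n-1)\pi}$:

```python
import cmath
import math

def c_n(n, m=20000):
    """Midpoint-rule estimate of c_n = (1/2) * int_{-1}^{1} e^{i pi t/2} e^{-i n pi t} dt."""
    h = 2.0 / m
    total = 0.0 + 0.0j
    for k in range(m):
        t = -1.0 + (k + 0.5) * h
        total += cmath.exp(1j * math.pi * t / 2 - 1j * n * math.pi * t)
    return 0.5 * total * h
```

For example, $c_0 = c_1 = \frac{2}{\pi}$ and $c_{-1} = c_2 = -\frac{2}{3\pi}$.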

9.4.3.5. $f(t) = \operatorname{Step}(t-1) - \operatorname{Step}(t-3) = \begin{cases} 0, & t < 1 \\ 1, & t > 1\end{cases} - \begin{cases} 0, & t < 3 \\ 1, & t > 3\end{cases} = \begin{cases} 0, & t \le 1 \\ 1, & 1 < t < 3 \\ 0, & t \ge 3\end{cases}$, so the Fourier transform of $f(t)$ is
$$\mathcal{F}[\,f(t)\,] \triangleq \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)e^{-i\omega t}dt = \frac{1}{\sqrt{2\pi}}\int_1^3 e^{-i\omega t}dt = \frac{1}{\sqrt{2\pi}}\Big[\frac{e^{-i\omega t}}{-i\omega}\Big]_1^3 = \frac{i}{\sqrt{2\pi}\,\omega}\big(e^{-i3\omega}-e^{-i\omega}\big).$$

9.4.3.6. Using the second part of formula (9.31) with $2L = 2$, so $\omega_n = n\pi$, the Fourier transform of $f(t)$ is
$$\mathcal{F}[\,f(t)\,] \triangleq \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)e^{-i\omega t}dt = \frac{1}{\sqrt{2\pi}}\Big(\int_0^1 t\,e^{-i\omega t}dt + \int_1^2(2-t)e^{-i\omega t}dt\Big)$$
$$= \frac{1}{\sqrt{2\pi}}\Bigg(\Big[\frac{t\,e^{-i\omega t}}{-i\omega} - \frac{e^{-i\omega t}}{(-i\omega)^2}\Big]_0^1 + \Big[\frac{(2-t)e^{-i\omega t}}{-i\omega} + \frac{e^{-i\omega t}}{(-i\omega)^2}\Big]_1^2\Bigg)$$
$$= \frac{1}{\sqrt{2\pi}}\Bigg(\frac{e^{-i\omega}}{-i\omega} - \frac{e^{-i\omega}-1}{(-i\omega)^2} - \frac{e^{-i\omega}}{-i\omega} + \frac{e^{-i2\omega}-e^{-i\omega}}{(-i\omega)^2}\Bigg) = \frac{1}{\sqrt{2\pi}}\cdot\frac{1-2e^{-i\omega}+e^{-i2\omega}}{(-i\omega)^2} = -\frac{1}{\sqrt{2\pi}\,\omega^2}\big(1-e^{-i\omega}\big)^2.$$

9.4.3.7.
$$\mathcal{F}[\,f(t)\,] \triangleq \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)e^{-i\omega t}dt = \frac{1}{\sqrt{2\pi}}\Big(\int_{-\infty}^0 0\cdot e^{-i\omega t}dt + \int_0^{\infty} e^{-\alpha t}\sin(\omega_0 t)e^{-i\omega t}dt\Big) = \frac{1}{\sqrt{2\pi}}\int_0^{\infty} e^{-\alpha t}\,\frac{e^{i\omega_0 t}-e^{-i\omega_0 t}}{i2}\,e^{-i\omega t}dt,$$
which we can break up into two integrals. The first improper integral is
$$\int_0^{\infty} e^{-\alpha t}e^{i\omega_0 t}e^{-i\omega t}dt = \int_0^{\infty} e^{-(\alpha+i(\omega-\omega_0))t}dt = \lim_{b\to\infty}\Big[\frac{e^{-(\alpha+i(\omega-\omega_0))t}}{-\big(\alpha+i(\omega-\omega_0)\big)}\Big]_0^b = \lim_{b\to\infty}\frac{e^{-(\alpha+i(\omega-\omega_0))b}-1}{-\big(\alpha+i(\omega-\omega_0)\big)} = \frac{1}{\alpha+i(\omega-\omega_0)},$$
because the constant $\alpha > 0$ was assumed to be positive. Similarly,
$$\int_0^{\infty} e^{-\alpha t}e^{-i\omega_0 t}e^{-i\omega t}dt = \int_0^{\infty} e^{-(\alpha+i(\omega+\omega_0))t}dt = \dots = \frac{1}{\alpha+i(\omega+\omega_0)},$$
because the constant $\alpha > 0$ was assumed to be positive. Putting the two integrals back together gives
$$\mathcal{F}[\,f(t)\,] = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{i2}\Big(\frac{1}{\alpha+i(\omega-\omega_0)} - \frac{1}{\alpha+i(\omega+\omega_0)}\Big) = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{i2}\cdot\frac{i2\omega_0}{(\alpha+i\omega)^2+\omega_0^2} = \frac{\omega_0}{\sqrt{2\pi}}\cdot\frac{1}{\alpha^2+\omega_0^2-\omega^2+i2\alpha\omega}.$$

9.4.3.8.
$$\mathcal{F}[\,f(t)\,] \triangleq \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)e^{-i\omega t}dt = \frac{\nu}{\sqrt{2\pi}}\int_{-1/(2\nu)}^{1/(2\nu)}\big(1+\cos(2\pi\nu t)\big)e^{-i\omega t}dt = \frac{\nu}{\sqrt{2\pi}}\int_{-1/(2\nu)}^{1/(2\nu)}\Big(e^{-i\omega t} + \frac{e^{i(2\pi\nu-\omega)t}+e^{-i(2\pi\nu+\omega)t}}{2}\Big)dt. \quad (\star)$$
If $|\omega| \ne 2\pi\nu$ and $\omega \ne 0$, then
$$\mathcal{F}[\,f(t)\,] = \frac{\nu}{\sqrt{2\pi}}\Big[\frac{e^{-i\omega t}}{-i\omega} + \frac{e^{i(2\pi\nu-\omega)t}}{i2(2\pi\nu-\omega)} + \frac{e^{-i(2\pi\nu+\omega)t}}{-i2(2\pi\nu+\omega)}\Big]_{-1/(2\nu)}^{1/(2\nu)}$$
$$= \dots = \frac{\nu}{\sqrt{2\pi}}\,\sin\frac{\omega}{2\nu}\Big(\frac{2}{\omega} + \frac{1}{2\pi\nu-\omega} - \frac{1}{2\pi\nu+\omega}\Big) = \frac{2\nu}{\sqrt{2\pi}}\,\sin\frac{\omega}{2\nu}\cdot\frac{4\pi^2\nu^2}{\omega(4\pi^2\nu^2-\omega^2)} = \frac{2\nu}{\sqrt{2\pi}}\,\sin\frac{\omega}{2\nu}\cdot\frac{1}{\omega\big(1-\frac{\omega^2}{4\pi^2\nu^2}\big)}.$$
Define $r \triangleq \frac{\omega}{2\pi\nu}$. If $|\omega| \ne 2\pi\nu$ and $\omega \ne 0$, then, since $\frac{\omega}{2\nu} = \pi r$ and $\frac{2\nu}{\omega} = \frac{1}{\pi r}$,
$$\mathcal{F}[\,f(t)\,] = \frac{2\nu}{\sqrt{2\pi}}\cdot\sin(\pi r)\cdot\frac{1}{\omega(1-r^2)} = \frac{1}{\sqrt{2\pi}}\cdot\frac{\sin(\pi r)}{\pi r(1-r^2)}.$$
If $\omega = 0$ then from $(\star)$
$$\mathcal{F}[\,f(t)\,] = \frac{\nu}{\sqrt{2\pi}}\Big[t + \frac{e^{i2\pi\nu t}}{i4\pi\nu} + \frac{e^{-i2\pi\nu t}}{-i4\pi\nu}\Big]_{-1/(2\nu)}^{1/(2\nu)} = \frac{\nu}{\sqrt{2\pi}}\Big(\frac{1}{\nu} + \frac{e^{i\pi}-e^{-i\pi}}{i4\pi\nu} + \frac{e^{-i\pi}-e^{i\pi}}{-i4\pi\nu}\Big) = \frac{\nu}{\sqrt{2\pi}}\cdot\frac{1}{\nu} = \frac{1}{\sqrt{2\pi}}.$$
If $\omega = 2\pi\nu$ then from $(\star)$
$$\mathcal{F}[\,f(t)\,] = \frac{\nu}{\sqrt{2\pi}}\int_{-1/(2\nu)}^{1/(2\nu)}\frac{1+2e^{-i2\pi\nu t}+e^{-i4\pi\nu t}}{2}\,dt = \frac{\nu}{\sqrt{2\pi}}\cdot\frac12\Big[t - \frac{e^{-i2\pi\nu t}}{i\pi\nu} - \frac{e^{-i4\pi\nu t}}{i4\pi\nu}\Big]_{-1/(2\nu)}^{1/(2\nu)}$$
$$= \frac{\nu}{\sqrt{2\pi}}\cdot\frac12\Big(\frac{1}{\nu} - \frac{e^{-i\pi}-e^{i\pi}}{i\pi\nu} - \frac{e^{-i2\pi}-e^{i2\pi}}{i4\pi\nu}\Big) = \frac{\nu}{\sqrt{2\pi}}\cdot\frac{1}{2\nu} = \frac{1}{2\sqrt{2\pi}}.$$
If $\omega = -2\pi\nu$ then the analogous computation from $(\star)$ gives $\mathcal{F}[\,f(t)\,] = \dots = \frac{1}{2\sqrt{2\pi}}$.
To summarize,
$$\mathcal{F}[\,f(t)\,] = \begin{cases} \dfrac{1}{\sqrt{2\pi}}\cdot\dfrac{\sin(\pi r)}{\pi r(1-r^2)}, & |\omega| \ne 2\pi\nu \text{ and } \omega \ne 0 \\[2mm] \dfrac{1}{\sqrt{2\pi}}, & \omega = 0 \\[2mm] \dfrac{1}{2\sqrt{2\pi}}, & |\omega| = 2\pi\nu \end{cases}.$$
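The transform pairs in this group of problems are easy to confirm numerically. The following quick check (not from the manual; standard-library Python only, function names ours) tests the triangular-pulse result of 9.4.3.6, $\mathcal{F}[f](\omega) = -\frac{(1-e^{-i\omega})^2}{\sqrt{2\pi}\,\omega^2}$, against direct midpoint-rule quadrature:

```python
import cmath
import math

def ft_numeric(w, m=20000):
    """Midpoint-rule estimate of (1/sqrt(2 pi)) * int_0^2 f(t) e^{-i w t} dt
    for the triangular pulse f(t) = t on (0,1) and f(t) = 2 - t on (1,2)."""
    h = 2.0 / m
    total = 0.0 + 0.0j
    for k in range(m):
        t = (k + 0.5) * h
        f = t if t < 1.0 else 2.0 - t
        total += f * cmath.exp(-1j * w * t)
    return total * h / math.sqrt(2 * math.pi)

def ft_formula(w):
    """Closed form derived above: -(1 - e^{-i w})^2 / (sqrt(2 pi) w^2), for w != 0."""
    return -(1 - cmath.exp(-1j * w)) ** 2 / (math.sqrt(2 * math.pi) * w * w)
```

The same pattern (truncate, sample, sum) works for the other pulses in this section.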

9.4.3.9.
$$\mathcal{F}[\,e^{-at}\operatorname{Step}(t)\,] \triangleq \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-at}\operatorname{Step}(t)e^{-i\omega t}dt = \frac{1}{\sqrt{2\pi}}\int_0^{\infty} e^{-at}e^{-i\omega t}dt \triangleq \lim_{R\to\infty}\frac{1}{\sqrt{2\pi}}\int_0^R e^{-(a+i\omega)t}dt$$
$$= \lim_{R\to\infty}\frac{1}{\sqrt{2\pi}}\Big[\frac{e^{-(a+i\omega)t}}{-(a+i\omega)}\Big]_0^R = \lim_{R\to\infty}\frac{1}{\sqrt{2\pi}}\Big(\frac{e^{-(a+i\omega)R}}{-(a+i\omega)} - \frac{1}{-(a+i\omega)}\Big) = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{a+i\omega}.$$

9.4.3.10. Method 1: From Theorem 9.9(b), $\mathcal{F}[\,tf(t)\,](-\omega) = \mathcal{F}^{-1}[\,tf(t)\,](\omega)$. But, reversing the roles of $t$ and $\omega$ in Theorem 17.1 in Section 17.2, we have $\mathcal{F}[\,f'(\omega)\,](t) = it\,\mathcal{F}[\,f(\omega)\,](t)$, hence
$$f'(t) = \mathcal{F}^{-1}\big[\,\mathcal{F}[\,f'(\omega)\,](t)\,\big] = \mathcal{F}^{-1}\big[\,it\,\mathcal{F}[\,f(\omega)\,](t)\,\big],$$
hence
$$\mathcal{F}[\,tf(t)\,](-\omega) = \mathcal{F}^{-1}[\,tf(t)\,](\omega) = \frac1i\frac{d}{d\omega}\big(\mathcal{F}[\,f(t)\,](\omega)\big).$$
This implies that
$$\mathcal{F}[\,tf(t)\,](\omega) = -i\,\frac{d}{d\omega}\big(\mathcal{F}[\,f(t)\,]\big)(-\omega).$$
For example,
$$\mathcal{F}[\,te^{-t^2/2}\,](\omega) = -i\,\frac{d}{d\omega}\big(e^{-\omega^2/2}\big)\Big|_{\omega\mapsto-\omega} = -i\cdot\big({-\omega e^{-\omega^2/2}}\big)\Big|_{\omega\mapsto-\omega} = -i\cdot\big({-}({-\omega})e^{-(-\omega)^2/2}\big) = -i\omega e^{-\omega^2/2}.$$
Method 2: We recognize that $te^{-t^2/2}$ is the derivative with respect to $t$ of $-e^{-t^2/2}$, so using Theorem 17.1 in Section 17.2, we have
$$\mathcal{F}\big[\,te^{-t^2/2}\,\big](\omega) = \mathcal{F}\Big[\,\frac{d}{dt}\big({-e^{-t^2/2}}\big)\Big](\omega) = i\omega\cdot\mathcal{F}\big[\,{-e^{-t^2/2}}\,\big](\omega) = i\omega\cdot\big({-e^{-\omega^2/2}}\big) = -i\omega e^{-\omega^2/2}.$$
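Both methods give $\mathcal{F}[te^{-t^2/2}](\omega) = -i\omega e^{-\omega^2/2}$, which can be confirmed by direct quadrature since the integrand decays rapidly. A quick numerical check (not from the manual; standard-library Python, function name ours):

```python
import cmath
import math

def ft_numeric(w, a=12.0, m=40000):
    """Midpoint-rule estimate of (1/sqrt(2 pi)) * int_{-a}^{a} t e^{-t^2/2} e^{-i w t} dt;
    the integrand is negligible outside [-a, a] for a = 12."""
    h = 2 * a / m
    total = 0.0 + 0.0j
    for k in range(m):
        t = -a + (k + 0.5) * h
        total += t * math.exp(-t * t / 2) * cmath.exp(-1j * w * t)
    return total * h / math.sqrt(2 * math.pi)
```
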

9.4.3.11. Example 9.21’s conclusion in Section 9.4 of
$$\mathcal{F}[\,f(x)\,] = \sqrt{\frac{2}{\pi}}\cdot\frac{1-e^{-i3\pi\omega}}{4-\omega^2},$$
for $|\omega| \ne 2$, looks like the $F(\omega)$ we’re given, except for two aspects of problem 9.4.3.11: (a) there is no factor of $\sqrt{\frac{2}{\pi}}$, and (b) the numerator of $F(\omega)$ is $1-e^{-i5\pi\omega}$ instead of $1-e^{-i3\pi\omega}$.

Method 1: Table 9.2’s entry F.15 [with the correction in the errata page] is a generalization of the result of Example 9.21:
$$\mathcal{F}\left[\begin{cases} \sin kx, & 0 \le x \le \tau \\ 0, & \text{all other } x\end{cases}\right] = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{k^2-\omega^2}\Big(e^{-i\omega\tau}\big({-i\omega}\sin k\tau - k\cos k\tau\big)+k\Big).$$
We should choose $k = 2$ so that the denominator $4-\omega^2$ fits the form $k^2-\omega^2$. We should choose $\tau = 5\pi$ so that the $e^{-i\omega\tau}$ would fit the form $e^{-i5\pi\omega}$. So, Table 9.2’s entry F.15 [with the correction in the errata page] gives, as a particular example,
$$\mathcal{F}\left[\begin{cases} \sin 2x, & 0 \le x \le 5\pi \\ 0, & \text{all other } x\end{cases}\right] = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{4-\omega^2}\Big(e^{-i5\pi\omega}\big({-i\omega}\sin 10\pi - 2\cos 10\pi\big)+2\Big).$$
Using the facts that $\sin 10\pi = 0$ and $\cos 10\pi = 1$, this implies that
$$\mathcal{F}\left[\begin{cases} \sin 2x, & 0 \le x \le 5\pi \\ 0, & \text{all other } x\end{cases}\right] = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{4-\omega^2}\big({-2e^{-i5\pi\omega}}+2\big) = \sqrt{\frac{2}{\pi}}\cdot\frac{1-e^{-i5\pi\omega}}{4-\omega^2}.$$
This implies that
$$\mathcal{F}^{-1}\left[\frac{1-e^{-i5\pi\omega}}{4-\omega^2}\right] = \sqrt{\frac{\pi}{2}}\cdot\begin{cases} \sin 2x, & 0 \le x \le 5\pi \\ 0, & \text{all other } x\end{cases}.$$
Method 2: The $3\pi$ that appears in the formula for the function $f(x)$ in Example 9.21 in Section 9.4 appears in the Fourier transform in the $e^{-i3\pi\omega}$. This suggests modifying Example 9.21 to find the Fourier transform of
$$g(x) \triangleq \begin{cases} \sin 2x, & 0 \le x \le 5\pi \\ 0, & \text{all other } x\end{cases}.$$
We can calculate, from the definition of Fourier transform, that
$$\mathcal{F}[\,g(x)\,] = \dots = \sqrt{\frac{2}{\pi}}\cdot\frac{1-e^{-i5\pi\omega}}{4-\omega^2}.$$
[Please check for yourself the work needed in the $= \dots =$.] From the Fourier transform of $g(x)$ we can find the inverse transform
$$\mathcal{F}^{-1}\left[\frac{1-e^{-i5\pi\omega}}{4-\omega^2}\right] = \sqrt{\frac{\pi}{2}}\cdot\begin{cases} \sin 2x, & 0 \le x \le 5\pi \\ 0, & \text{all other } x\end{cases}.$$

9.4.3.12. With $k = 1$ and $\tau = 6\pi$, we can use Table 9.2’s entry F.15 [with the correction in the errata page] to get
$$\mathcal{F}[\,f(t)\,] = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{k^2-\omega^2}\Big(e^{-i\omega\tau}\big({-i\omega}\sin k\tau - k\cos k\tau\big)+k\Big) = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{1-\omega^2}\Big(e^{-i6\pi\omega}\big({-i\omega}\sin 6\pi - \cos 6\pi\big)+1\Big) = \frac{1}{\sqrt{2\pi}}\cdot\frac{1-e^{-i6\pi\omega}}{1-\omega^2}.$$

9.4.3.13. Example 9.19’s conclusion in Section 9.4 of
$$\mathcal{F}[\,f(t)\,] = a\cdot\sqrt{\frac{2}{\pi}}\cdot\begin{cases} \dfrac{\sin(a\omega)}{a\omega}, & \omega \ne 0 \\ 1, & \omega = 0\end{cases}$$
looks a lot like the $F(\omega)$ whose inverse transform we’re given, except for two aspects of problem 9.4.3.13: (a) there is no factor of $\sqrt{\frac{2}{\pi}}$, and (b) the numerator of $F(\omega)$ is $\sin b\omega$ instead of $\sin a\omega$. We can take care of (b) easily by changing $a$ to $b$, and we can take care of (a) by multiplying by an appropriate factor: Example 9.19 in Section 9.4 implies
$$\mathcal{F}\left[\sqrt{\frac{\pi}{2}}\cdot\begin{cases} 1, & -b < t < b \\ 0, & |t| > b\end{cases}\right] = \sqrt{\frac{\pi}{2}}\cdot b\cdot\sqrt{\frac{2}{\pi}}\cdot\begin{cases} \dfrac{\sin(b\omega)}{b\omega}, & \omega \ne 0 \\ 1, & \omega = 0\end{cases},$$
so
$$\mathcal{F}^{-1}\left[\frac{\sin b\omega}{\omega}\right] = \sqrt{\frac{\pi}{2}}\cdot\begin{cases} 1, & -b < t < b \\ 0, & |t| > b\end{cases}.$$

9.4.3.14. Table 9.2’s entry F.9 with $\beta = 1$ implies that
$$\mathcal{F}[\,e^{-x^2}\,] = \frac{1}{\sqrt{2\cdot1}}\,e^{-\omega^2/(4\cdot1)} = \frac{1}{\sqrt{2}}\,e^{-\omega^2/4}.$$

9.4.3.15. Hints: Use the time delay Theorem 9.11 to find $\mathcal{F}[\,p(t-2n)\,]$, use the fact that $\sum_{n=1}^{\infty} r^n = \dfrac{r}{1-r}$ if $|r| < 1$, and use the fact that $e^{-n} = (e^{-1})^n$.
$$\mathcal{F}[\,g(t)\,] = \sqrt{\frac{2}{\pi}}\cdot\frac{\sin\omega}{\omega}\cdot\frac{1}{1-e^{-(1+i2\omega)}}.$$

9.4.3.16. Define $F(\omega) = \mathcal{F}[\,f(t)\,](\omega)$, hence $f(t) = \mathcal{F}^{-1}[\,F(\omega)\,](t)$. By the result of Theorem 9.9(a) in Section 9.4,
$$(1)\quad \mathcal{F}^{-1}[\,e^{-i\omega t_0}F(\omega)\,](t) = \mathcal{F}[\,e^{-i\omega t_0}F(\omega)\,](-t).$$
The result of Example 9.20(a) in Section 9.4, with the roles of $t$ and $\omega$ reversed and $a = -t_0$, implies that we can rewrite the latter:
$$\mathcal{F}[\,e^{-i\omega t_0}F(\omega)\,](-t) = \mathcal{F}[\,e^{i\omega a}F(\omega)\,](-t) = \mathcal{F}[\,F(\omega)\,](-t-a) = \mathcal{F}[\,F(\omega)\,]\big({-t-(-t_0)}\big),$$
hence
$$(2)\quad \mathcal{F}[\,e^{-i\omega t_0}F(\omega)\,](-t) = \mathcal{F}[\,F(\omega)\,]\big({-(t-t_0)}\big).$$
By the result of Theorem 9.9(a), we can rewrite the latter:
$$(3)\quad \mathcal{F}[\,F(\omega)\,]\big({-(t-t_0)}\big) = \mathcal{F}^{-1}[\,F(\omega)\,](t-t_0) = f(t-t_0).$$
Stringing together the results of (1), (2), and (3) gives
$$\mathcal{F}^{-1}[\,e^{-i\omega t_0}F(\omega)\,](t) = \mathcal{F}[\,e^{-i\omega t_0}F(\omega)\,](-t) = \mathcal{F}[\,F(\omega)\,]\big({-t-(-t_0)}\big) = f(t-t_0),$$
which implies that
$$\mathcal{F}[\,f(t-t_0)\,](\omega) = \mathcal{F}\big[\,\mathcal{F}^{-1}[\,e^{-i\omega t_0}F(\omega)\,](t)\,\big](\omega) = e^{-i\omega t_0}F(\omega),$$
as we were asked to explain. [Aside: It would be easier to derive the formula by directly using the definition of the Fourier transform given in (9.40) in Section 9.4, but the instructions for the problem don’t allow that. The instructions for this problem make it quite difficult.]

9.4.3.17. (a) Using Table 9.2’s entry F.2, we calculate
$$\mathcal{F}[\,f(x)\cos kx\,](\omega) = \mathcal{F}\Big[\,f(x)\cdot\frac12\big(e^{ikx}+e^{-ikx}\big)\Big](\omega) = \frac12\Big(\mathcal{F}[\,f(x)e^{ikx}\,](\omega) + \mathcal{F}[\,f(x)e^{-ikx}\,](\omega)\Big) = \frac12\big(F(\omega-k)+F(\omega+k)\big).$$
(b) Using Table 9.2’s entry F.2, we calculate
$$\mathcal{F}[\,f(x)\sin kx\,](\omega) = \mathcal{F}\Big[\,f(x)\cdot\frac{1}{i2}\big(e^{ikx}-e^{-ikx}\big)\Big](\omega) = \frac{1}{i2}\Big(\mathcal{F}[\,f(x)e^{ikx}\,](\omega) - \mathcal{F}[\,f(x)e^{-ikx}\,](\omega)\Big) = \frac{1}{i2}\big(F(\omega-k)-F(\omega+k)\big).$$

9.4.3.18.
$$\mathcal{F}[\,f(t)\,](\omega) = \frac{1}{\sqrt{2\pi}}\Big(\int_{-b}^{0} 1\cdot e^{-i\omega t}dt + \int_{0}^{b}(-1)e^{-i\omega t}dt\Big),$$
so for $\omega \ne 0$,
$$\mathcal{F}[\,f(t)\,](\omega) = \frac{1}{\sqrt{2\pi}}\Bigg(\Big[\frac{e^{-i\omega t}}{-i\omega}\Big]_{-b}^{0} - \Big[\frac{e^{-i\omega t}}{-i\omega}\Big]_{0}^{b}\Bigg) = \frac{1}{-i\sqrt{2\pi}\,\omega}\big(2-e^{i\omega b}-e^{-i\omega b}\big) = \frac{2}{i\sqrt{2\pi}\,\omega}\big(\cos(\omega b)-1\big) = \sqrt{\frac{2}{\pi}}\cdot\frac{\cos(\omega b)-1}{i\omega}.$$
For $\omega = 0$,
$$\mathcal{F}[\,f(t)\,](0) = \frac{1}{\sqrt{2\pi}}\Big(\int_{-b}^{0}1\,dt + \int_{0}^{b}(-1)\,dt\Big) = 0.$$
So,
$$\mathcal{F}[\,f(t)\,](\omega) = \sqrt{\frac{2}{\pi}}\cdot\begin{cases} \dfrac{\cos(\omega b)-1}{i\omega}, & \omega \ne 0 \\ 0, & \omega = 0\end{cases}.$$
[Aside: Using L’Hôpital’s rule, we can see that $\mathcal{F}[\,f(t)\,](\omega)$ is continuous at $\omega = 0$.]

9.4.3.19. The result of problem 9.4.3.18 implies that
$$\mathcal{F}^{-1}\left[\begin{cases} \dfrac{\cos(\omega b)-1}{\omega}, & \omega \ne 0 \\ 0, & \omega = 0\end{cases}\right] = i\sqrt{\frac{\pi}{2}}\cdot\begin{cases} 1, & -b < t < 0 \\ -1, & 0 < t < b \\ 0, & |t| > b\end{cases}.$$

Also, for the last result of Theorem 9.15 in Section 9.6, we are assuming also that $\gamma_0, \gamma_1 \ge 0$. It follows that either $\gamma_1 > 0$ or $\gamma_0 > 0$. For the moment, let us assume that $\gamma_1 > 0$. [At the end of the work we will mention how the rest of the work would proceed if, instead, we assume that $\gamma_0 > 0$.] The assumption that $\gamma_1 > 0$, along with the second BC satisfied by the eigenfunction $X(x)$, implies that
$$X'(b) = -\frac{\gamma_0}{\gamma_1}X(b).$$
This and $(\star)$ imply that
$$(\star\star)\quad \lambda\int_a^b s(x)\big(X(x)\big)^2dx = \frac{\gamma_0\,p(b)}{\gamma_1}\big(X(b)\big)^2 + \frac{\alpha_0\,p(a)}{\alpha_1}\big(X(a)\big)^2 + \int_a^b p(x)\big(X'(x)\big)^2dx - \int_a^b q(x)\big(X(x)\big)^2dx.$$
In the last result of Theorem 9.15 in Section 9.6, we are assuming that (in the notation used there) $\alpha_0, \alpha_1 > 0$, $\gamma_0, \gamma_1 \ge 0$ and $q(x) \le 0$ on the interval $[a,b]$. It follows from $(\star\star)$, along with the assumption that the Sturm–Liouville problem is regular, that
$$(\star\star\star)\quad \lambda\int_a^b s(x)\big(X(x)\big)^2dx \ge 0.$$
Recall that in Definition 9.5 we are assuming that $s(x) \ge 0$ and is not identically zero on $[a,b]$. It follows that $\int_a^b s(x)\big(X(x)\big)^2dx \ge 0$. That, and $(\star\star\star)$, together imply that $\lambda \ge 0$. So, there can be no negative eigenvalue.

The next to last thing we need to explain is why $\lambda = 0$ cannot be an eigenvalue. If $\lambda = 0$ were an eigenvalue, then $(\star\star)$ would imply that
$$(\star\star\star\star)\quad 0 = \frac{\gamma_0\,p(b)}{\gamma_1}\big(X(b)\big)^2 + \frac{\alpha_0\,p(a)}{\alpha_1}\big(X(a)\big)^2 + \int_a^b p(x)\big(X'(x)\big)^2dx - \int_a^b q(x)\big(X(x)\big)^2dx.$$
Now, the previous assumptions mentioned, that $\alpha_0, \alpha_1 > 0$, $\gamma_0, \gamma_1 \ge 0$ and $q(x) \le 0$ on the interval $[a,b]$, along with another part of Definition 9.5, namely that $p(x) > 0$ on $[a,b]$, imply that every one of the terms $\frac{\gamma_0 p(b)}{\gamma_1}\big(X(b)\big)^2$, $\frac{\alpha_0 p(a)}{\alpha_1}\big(X(a)\big)^2$, $\int_a^b p(x)\big(X'(x)\big)^2dx$, and $-\int_a^b q(x)\big(X(x)\big)^2dx$ is non-negative. It follows from that and $(\star\star\star\star)$ that $0 = \frac{\gamma_0 p(b)}{\gamma_1}\big(X(b)\big)^2$, $0 = \frac{\alpha_0 p(a)}{\alpha_1}\big(X(a)\big)^2$, $0 = \int_a^b p(x)\big(X'(x)\big)^2dx$, and $0 = -\int_a^b q(x)\big(X(x)\big)^2dx$. The third of these equalities, along with the facts that both $\big(X'(x)\big)^2 \ge 0$ and $p(x) > 0$ on $[a,b]$, implies that $X'(x) \equiv 0$ on $[a,b]$. This implies that $X(x)$ is constant on $[a,b]$. But $0 = \frac{\alpha_0 p(a)}{\alpha_1}\big(X(a)\big)^2$, along with the assumptions that $\alpha_0, \alpha_1 > 0$, implies that $X(a) = 0$. So, the constant $X(x)$ is identically zero on $[a,b]$. So, $\lambda = 0$ cannot be an eigenvalue.

The last thing to mention is how we would proceed if, instead of assuming that $\gamma_1 > 0$, we assume that $\gamma_0 > 0$. In that case, we would replace the term $-p(b)X(b)X'(b)$ by $\frac{\gamma_1\,p(b)}{\gamma_0}\big(X'(b)\big)^2$, which is also non-negative because we are assuming that $\gamma_0, \gamma_1 \ge 0$, $p(x) > 0$ on $[a,b]$, and that $\gamma_0 > 0$. The term $\frac{\gamma_1 p(b)}{\gamma_0}\big(X'(b)\big)^2$ has the same sign properties and implications as the term $\frac{\gamma_0 p(b)}{\gamma_1}\big(X(b)\big)^2$ would have had in the subsequent reasoning.

9.6.4.2. (a) The ODE can be rewritten to be the Cauchy–Euler ODE $r^2R'' + 2rR' + \lambda R = 0$, whose characteristic equation is $0 = n(n-1) + 2n + \lambda = n^2 + n + \lambda$. This gives roots
$$n = \frac{-1 \pm \sqrt{1-4\lambda}}{2} = -\frac12 \pm \sqrt{\frac14-\lambda}.$$
Case 1: If $\frac14-\lambda = \omega^2$ and $\omega > 0$, then $R = c_1 r^{-\frac12+\omega} + c_2 r^{-\frac12-\omega}$. The BCs require
$$\begin{cases} 0 = R'(a) = \big({-\frac12}+\omega\big)c_1 a^{-\frac32+\omega} + \big({-\frac12}-\omega\big)c_2 a^{-\frac32-\omega} \\ 0 = R'(b) = \big({-\frac12}+\omega\big)c_1 b^{-\frac32+\omega} + \big({-\frac12}-\omega\big)c_2 b^{-\frac32-\omega}\end{cases},$$
hence
$$\begin{bmatrix} \big({-\frac12}+\omega\big)a^{-\frac32+\omega} & \big({-\frac12}-\omega\big)a^{-\frac32-\omega} \\ \big({-\frac12}+\omega\big)b^{-\frac32+\omega} & \big({-\frac12}-\omega\big)b^{-\frac32-\omega}\end{bmatrix}\begin{bmatrix} c_1 \\ c_2\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.$$
Because the determinant is
$$\Big({-\frac12}+\omega\Big)\Big({-\frac12}-\omega\Big)\Big(a^{-\frac32+\omega}b^{-\frac32-\omega} - a^{-\frac32-\omega}b^{-\frac32+\omega}\Big) = \Big(\frac14-\omega^2\Big)a^{-\frac32-\omega}b^{-\frac32-\omega}\big(a^{2\omega}-b^{2\omega}\big)$$
and $a < b$, the determinant is zero if, and only if, $\frac14-\omega^2 = \lambda = 0$. Because $\omega > 0$, this occurs when $\omega = \frac12$, that is, when $\lambda = \lambda_0 \triangleq 0$. Denote the corresponding eigenfunction by $R_0(r)$. [We were not asked for the eigenfunctions, but we mention that for $\lambda = 0$ the eigenfunctions are given by $R(r) = c_1R_0(r)$, where $R_0(r) \equiv 1$ and $c_1$ is an arbitrary nonzero constant. Here’s why: with $\omega = \frac12$, $R = c_1 + c_2 r^{-1}$, so $0 = R'(a) = 0\cdot c_1 + (-1)c_2 a^{-2}$, hence $c_2 = 0$.]

Case 2: If $\lambda = \frac14$, then the roots are $n = -\frac12, -\frac12$, so $R(r) = c_1 r^{-1/2} + c_2 r^{-1/2}\ln r$ and $R'(r) = -\frac12 c_1 r^{-3/2} + c_2 r^{-3/2}\big(1-\frac12\ln r\big)$. The BCs require
$$\begin{cases} 0 = R'(a) = -\frac12 c_1 a^{-3/2} + c_2 a^{-3/2}\big(1-\frac12\ln a\big) \\ 0 = R'(b) = -\frac12 c_1 b^{-3/2} + c_2 b^{-3/2}\big(1-\frac12\ln b\big)\end{cases}.$$
Multiplying the first equation by $a^{3/2}$ and the second equation by $b^{3/2}$ gives an equivalent system of equations,
$$\begin{cases} 0 = -\frac12 c_1 + \big(1-\frac12\ln a\big)c_2 \\ 0 = -\frac12 c_1 + \big(1-\frac12\ln b\big)c_2\end{cases}.$$
There is a non-trivial solution for $c_1$ and $c_2$ if, and only if,
$$0 = \begin{vmatrix} -\frac12 & 1-\frac12\ln a \\ -\frac12 & 1-\frac12\ln b\end{vmatrix} = \frac14\big(\ln b - \ln a\big) = \frac14\ln\frac{b}{a}.$$
Because $a < b$, this is impossible, so $c_1 = c_2 = 0$ and there is no eigenfunction for $\lambda = \frac14$.

Case 3: If $\frac14-\lambda = -\omega^2$ and $\omega > 0$, then the roots are $n = -\frac12 \pm i\omega$. In this case, $R(r) = c_1 r^{-1/2}\cos(\omega\ln r) + c_2 r^{-1/2}\sin(\omega\ln r)$, so
$$R'(r) = r^{-3/2}c_1\Big({-\frac12}\cos(\omega\ln r) - \omega\sin(\omega\ln r)\Big) + r^{-3/2}c_2\Big({-\frac12}\sin(\omega\ln r) + \omega\cos(\omega\ln r)\Big).$$
The BCs $R'(a) = 0$ and $R'(b) = 0$ require
$$(\star)\quad \begin{bmatrix} -\frac12\cos(\omega\ln a)-\omega\sin(\omega\ln a) & -\frac12\sin(\omega\ln a)+\omega\cos(\omega\ln a) \\ -\frac12\cos(\omega\ln b)-\omega\sin(\omega\ln b) & -\frac12\sin(\omega\ln b)+\omega\cos(\omega\ln b)\end{bmatrix}\begin{bmatrix} c_1 \\ c_2\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.$$
There is a non-trivial solution for $c_1$ and $c_2$ if, and only if,
$$0 = \Big({-\frac12}\cos(\omega\ln a)-\omega\sin(\omega\ln a)\Big)\Big({-\frac12}\sin(\omega\ln b)+\omega\cos(\omega\ln b)\Big) - \Big({-\frac12}\sin(\omega\ln a)+\omega\cos(\omega\ln a)\Big)\Big({-\frac12}\cos(\omega\ln b)-\omega\sin(\omega\ln b)\Big)$$
$$= \dots = \Big(\frac14+\omega^2\Big)\big(\sin(\omega\ln b)\cos(\omega\ln a)-\sin(\omega\ln a)\cos(\omega\ln b)\big) = \Big(\frac14+\omega^2\Big)\sin\big(\omega\ln b-\omega\ln a\big) = \Big(\frac14+\omega^2\Big)\sin\Big(\omega\ln\frac{b}{a}\Big).$$
Because $a < b$, the eigenvalues are
$$\lambda_n = \frac14+\omega_n^2 = \frac14+\Big(\frac{n\pi}{\ln(b/a)}\Big)^2.$$
We were not asked for the eigenfunctions, but for $\lambda = \lambda_n$ the eigenfunctions are given by
$$R_n(r) = r^{-1/2}\Big(\Big({-\frac12}\sin(\omega_n\ln b)+\omega_n\cos(\omega_n\ln b)\Big)\cos(\omega_n\ln r) + \Big(\frac12\cos(\omega_n\ln b)+\omega_n\sin(\omega_n\ln b)\Big)\sin(\omega_n\ln r)\Big)$$
$$= r^{-1/2}\Big({-\frac12}\big(\sin(\omega_n\ln b)\cos(\omega_n\ln r)-\cos(\omega_n\ln b)\sin(\omega_n\ln r)\big) + \omega_n\big(\cos(\omega_n\ln b)\cos(\omega_n\ln r)+\sin(\omega_n\ln b)\sin(\omega_n\ln r)\big)\Big)$$
$$= r^{-1/2}\Big({-\frac12}\sin\Big(\omega_n\ln\frac{b}{r}\Big) + \omega_n\cos\Big(\omega_n\ln\frac{b}{r}\Big)\Big).$$
Here’s why: Use the adjugate matrix method for the system $(\star)$; the first column of the adjugate of the coefficient matrix is
$$\begin{bmatrix} -\frac12\sin(\omega_n\ln b)+\omega_n\cos(\omega_n\ln b) \\ -\big({-\frac12}\cos(\omega_n\ln b)-\omega_n\sin(\omega_n\ln b)\big)\end{bmatrix} = \begin{bmatrix} -\frac12\sin(\omega_n\ln b)+\omega_n\cos(\omega_n\ln b) \\ \frac12\cos(\omega_n\ln b)+\omega_n\sin(\omega_n\ln b)\end{bmatrix}.$$

(b) According to Theorem 9.15, the orthogonality relation for the eigenfunctions $\{R_0(r); R_n(r)\}$ for the regular Sturm–Liouville problem
$$\begin{cases} (r^2R')' + \lambda R(r) = 0 \\ R'(a) = R'(b) = 0\end{cases}$$
is that, for $n \ne m$,
$$\int_a^b R_n(r)R_m(r)\,dr = 0.$$

9.6.4.3. (a) The ODE can be rewritten to be the Cauchy–Euler ODE $r^2R'' + rR' + \lambda R = 0$, whose characteristic equation is $0 = n(n-1) + n + \lambda = n^2 + \lambda$.
Case 1: If $\lambda = -\omega^2$ and $\omega > 0$, then $R = c_1 r^{\omega} + c_2 r^{-\omega}$. The BCs require
$$\begin{cases} 0 = R'(a) = \omega\big(c_1 a^{\omega-1} - c_2 a^{-\omega-1}\big) \\ 0 = R'(b) = \omega\big(c_1 b^{\omega-1} - c_2 b^{-\omega-1}\big)\end{cases},$$
hence
$$\begin{bmatrix} a^{\omega} & -a^{-\omega} \\ b^{\omega} & -b^{-\omega}\end{bmatrix}\begin{bmatrix} c_1 \\ c_2\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.$$
Because
$$\begin{vmatrix} a^{\omega} & -a^{-\omega} \\ b^{\omega} & -b^{-\omega}\end{vmatrix} = \Big(\frac{b}{a}\Big)^{\omega} - \Big(\frac{a}{b}\Big)^{\omega}$$
and $a < b$, the determinant is never zero and so there is no eigenvalue $\lambda < 0$.
Case 2: If $\lambda = 0$, then $R = c_1 + c_2\ln r$. The BCs require $0 = R'(a) = c_2\frac1a$ and $0 = R'(b) = c_2\frac1b$, so all that is required is $c_2 = 0$. So, $\lambda = 0$ is an eigenvalue. [We were not asked for the eigenfunctions, but we mention that for $\lambda = 0$ the eigenfunctions are given by $R(r) = c_1R_0(r)$, where $R_0(r) \equiv 1$ and $c_1$ is an arbitrary nonzero constant.]
Case 3: If $\lambda = \omega^2$ and $\omega > 0$, then $R(r) = c_1\cos(\omega\ln r) + c_2\sin(\omega\ln r)$, hence $R'(r) = \frac{\omega}{r}\big({-c_1}\sin(\omega\ln r) + c_2\cos(\omega\ln r)\big)$. The BCs require
$$\begin{cases} 0 = R'(a) = \frac{\omega}{a}\big({-c_1}\sin(\omega\ln a) + c_2\cos(\omega\ln a)\big) \\ 0 = R'(b) = \frac{\omega}{b}\big({-c_1}\sin(\omega\ln b) + c_2\cos(\omega\ln b)\big)\end{cases},$$
hence
$$\begin{bmatrix} -\sin(\omega\ln a) & \cos(\omega\ln a) \\ -\sin(\omega\ln b) & \cos(\omega\ln b)\end{bmatrix}\begin{bmatrix} c_1 \\ c_2\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.$$
Because
$$\begin{vmatrix} -\sin(\omega\ln a) & \cos(\omega\ln a) \\ -\sin(\omega\ln b) & \cos(\omega\ln b)\end{vmatrix} = \sin(\omega\ln b)\cos(\omega\ln a) - \cos(\omega\ln b)\sin(\omega\ln a) = \sin\big(\omega(\ln b-\ln a)\big) = \sin\Big(\omega\ln\frac{b}{a}\Big)$$
and $a < b$, the eigenvalues are
$$\lambda_n = \omega_n^2 = \Big(\frac{n\pi}{\ln(b/a)}\Big)^2.$$
[We were not asked for the eigenfunctions, but we mention that for $\lambda = \lambda_n$, we can use the adjugate matrix method to find that the corresponding eigenfunctions are
$$R_n(r) = \cos(\omega_n\ln b)\cos(\omega_n\ln r) + \sin(\omega_n\ln b)\sin(\omega_n\ln r) = \cos\Big(\frac{n\pi}{\ln(b/a)}\big(\ln b-\ln r\big)\Big) = \cos\Big(\frac{n\pi\ln(b/r)}{\ln(b/a)}\Big).]$$

(b) According to Theorem 9.15, the orthogonality relation for the eigenfunctions $\{R_0(r); R_n(r)\}$ for the regular Sturm–Liouville problem
$$\begin{cases} \dfrac{d}{dr}\Big(r\dfrac{dR}{dr}\Big) + \lambda\dfrac1r R(r) = 0 \\ R'(a) = R'(b) = 0\end{cases}$$
is that, for $n \ne m$,
$$\int_a^b R_n(r)R_m(r)\frac1r\,dr = 0.$$

9.6.4.4. (a) The ODE is the Cauchy–Euler ODE $r^2R'' + rR' + \lambda R = 0$, whose characteristic equation is $0 = n(n-1) + n + \lambda = n^2 + \lambda$.
Case 1: If $\lambda = -\omega^2$ and $\omega > 0$, then $R = c_1 r^{\omega} + c_2 r^{-\omega}$. The BCs require
$$\begin{cases} 0 = R(1) = c_1 1^{\omega} + c_2 1^{-\omega} = c_1 + c_2 \\ 0 = R(3) = c_1 3^{\omega} + c_2 3^{-\omega}\end{cases},$$
hence
$$\begin{bmatrix} 1 & 1 \\ 3^{\omega} & 3^{-\omega}\end{bmatrix}\begin{bmatrix} c_1 \\ c_2\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.$$
Because $\begin{vmatrix} 1 & 1 \\ 3^{\omega} & 3^{-\omega}\end{vmatrix} = 3^{-\omega} - 3^{\omega} < 0$ for all $\omega > 0$, the determinant is never zero and so there is no eigenvalue $\lambda < 0$.
Case 2: If $\lambda = 0$, then $R = c_1 + c_2\ln r$. The BCs require $0 = R(1) = c_1 + c_2\ln 1 = c_1 + 0$, hence $c_1 = 0$, and $0 = R(3) = c_2\ln 3$, hence $c_2 = 0$, hence $\lambda = 0$ is not an eigenvalue.
Case 3: If $\lambda = \omega^2$ and $\omega > 0$, then $R = c_1\cos(\omega\ln r) + c_2\sin(\omega\ln r)$. The BCs require
$$\begin{cases} 0 = R(1) = c_1\cos(\omega\ln 1) + c_2\sin(\omega\ln 1) = c_1\cos 0 + c_2\sin 0 = c_1 \\ 0 = R(3) = c_1\cos(\omega\ln 3) + c_2\sin(\omega\ln 3) = 0\cdot\cos(\omega\ln 3) + c_2\sin(\omega\ln 3)\end{cases},$$
hence there is an eigenvalue if, and only if, $\sin(\omega\ln 3) = 0$. The eigenvalues are
$$\lambda_n = \omega_n^2 = \Big(\frac{n\pi}{\ln 3}\Big)^2.$$
[We were not asked for the eigenfunctions, but we mention that for $\lambda = \lambda_n$, $c_1 = 0$ implies that the corresponding eigenfunctions are $R_n(r) = \sin(\omega_n\ln r) = \sin\big(\frac{n\pi\ln r}{\ln 3}\big)$.]

(b) In order to use Theorem 9.15 in Section 9.6, first rewrite the ODE in the form of (9.63) in Section 9.6:
$$r^2R'' + rR' + \lambda R = 0 \iff (rR')' + \lambda\frac1r R = 0.$$
According to Theorem 9.15, the orthogonality relation for the eigenfunctions $\{R_n(r)\}$ for the regular Sturm–Liouville problem
$$\begin{cases} (rR')' + \lambda\frac1r R = 0 \\ R(1) = R(3) = 0\end{cases}$$
is that, for $n \ne m$,
$$\int_1^3 R_n(r)R_m(r)\frac1r\,dr = 0.$$
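The weighted orthogonality relation can be verified directly: with the substitution $u = \ln r/\ln 3$, $\frac{dr}{r} = \ln 3\,du$, the integral reduces to the familiar sine orthogonality on $(0,1)$. A quick numerical check (not from the manual; standard-library Python, function name ours):

```python
import math

def weighted_inner(n, m, q=20000):
    """Midpoint-rule estimate of int_1^3 R_n(r) R_m(r) (1/r) dr,
    where R_k(r) = sin(k pi ln r / ln 3)."""
    h = 2.0 / q
    total = 0.0
    for j in range(q):
        r = 1.0 + (j + 0.5) * h
        u = math.log(r) / math.log(3.0)
        total += math.sin(n * math.pi * u) * math.sin(m * math.pi * u) / r
    return total * h
```

For $n \ne m$ the integral vanishes, while for $n = m$ it equals $\frac{\ln 3}{2}$.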

9.6.4.5. (a) To find the eigenvalues, we consider three separate cases: $\lambda = 0$, $\lambda > 0$, and $\lambda < 0$.
Case 1: If $\lambda = 0$, then the differential equation $X''(x) + \lambda X(x) = 0$ is $X''(x) = 0$, whose solutions are $X = c_1 + c_2x$, for arbitrary constants $c_1, c_2$. It follows that $X'(x) = c_2$. The BCs require
$$\begin{cases} 0 = X\big(\frac{\pi}{4}\big) - 3X'\big(\frac{\pi}{4}\big) = c_1 + \big(\frac{\pi}{4}-3\big)c_2 \\ 0 = X\big(\frac{3\pi}{4}\big) + X'\big(\frac{3\pi}{4}\big) = c_1 + \big(\frac{3\pi}{4}+1\big)c_2\end{cases},$$
hence
$$\begin{bmatrix} 1 & \frac{\pi}{4}-3 \\ 1 & \frac{3\pi}{4}+1\end{bmatrix}\begin{bmatrix} c_1 \\ c_2\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.$$
Because $\begin{vmatrix} 1 & \frac{\pi}{4}-3 \\ 1 & \frac{3\pi}{4}+1\end{vmatrix} = \big(\frac{3\pi}{4}+1\big) - \big(\frac{\pi}{4}-3\big) = \frac{\pi}{2}+4 \ne 0$, so $\lambda = 0$ is not an eigenvalue.

Case 2: If $\lambda > 0$, it will turn out to be convenient to rewrite $\lambda = \omega^2$, where $\omega \triangleq \sqrt{\lambda}$. Then the differential equation $X''(x) + \lambda X(x) = 0$ is $X''(x) + \omega^2X(x) = 0$, the undamped harmonic oscillator differential equation of Section 3.3, whose solutions are $X = c_1\cos\omega x + c_2\sin\omega x$, for constants $c_1, c_2$. The BCs require
$$\begin{cases} 0 = X\big(\frac{\pi}{4}\big) - 3X'\big(\frac{\pi}{4}\big) = c_1\cos\frac{\omega\pi}{4} + c_2\sin\frac{\omega\pi}{4} - 3\omega\big({-c_1}\sin\frac{\omega\pi}{4} + c_2\cos\frac{\omega\pi}{4}\big) \\ 0 = X\big(\frac{3\pi}{4}\big) + X'\big(\frac{3\pi}{4}\big) = c_1\cos\frac{3\omega\pi}{4} + c_2\sin\frac{3\omega\pi}{4} + \omega\big({-c_1}\sin\frac{3\omega\pi}{4} + c_2\cos\frac{3\omega\pi}{4}\big)\end{cases},$$
hence
$$\begin{bmatrix} \cos\frac{\omega\pi}{4}+3\omega\sin\frac{\omega\pi}{4} & \sin\frac{\omega\pi}{4}-3\omega\cos\frac{\omega\pi}{4} \\ \cos\frac{3\omega\pi}{4}-\omega\sin\frac{3\omega\pi}{4} & \sin\frac{3\omega\pi}{4}+\omega\cos\frac{3\omega\pi}{4}\end{bmatrix}\begin{bmatrix} c_1 \\ c_2\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.$$
Because
$$\begin{vmatrix} \cos\frac{\omega\pi}{4}+3\omega\sin\frac{\omega\pi}{4} & \sin\frac{\omega\pi}{4}-3\omega\cos\frac{\omega\pi}{4} \\ \cos\frac{3\omega\pi}{4}-\omega\sin\frac{3\omega\pi}{4} & \sin\frac{3\omega\pi}{4}+\omega\cos\frac{3\omega\pi}{4}\end{vmatrix} = \Big(\cos\frac{\omega\pi}{4}+3\omega\sin\frac{\omega\pi}{4}\Big)\Big(\sin\frac{3\omega\pi}{4}+\omega\cos\frac{3\omega\pi}{4}\Big) - \Big(\sin\frac{\omega\pi}{4}-3\omega\cos\frac{\omega\pi}{4}\Big)\Big(\cos\frac{3\omega\pi}{4}-\omega\sin\frac{3\omega\pi}{4}\Big)$$
$$= \dots = 4\omega\Big(\cos\frac{3\omega\pi}{4}\cos\frac{\omega\pi}{4}+\sin\frac{3\omega\pi}{4}\sin\frac{\omega\pi}{4}\Big) + (1-3\omega^2)\Big(\sin\frac{3\omega\pi}{4}\cos\frac{\omega\pi}{4}-\cos\frac{3\omega\pi}{4}\sin\frac{\omega\pi}{4}\Big) = 4\omega\cos\frac{2\omega\pi}{4} + (1-3\omega^2)\sin\frac{2\omega\pi}{4},$$
the characteristic equation is
$$0 = (1-3\lambda)\sin\Big(\frac{\pi\sqrt{\lambda}}{2}\Big) + 4\sqrt{\lambda}\cos\Big(\frac{\pi\sqrt{\lambda}}{2}\Big).$$
It is not possible to find the exact eigenvalues, but we could approximate them using a root finding method from Section 8.1.

Case 3: If $\lambda < 0$, it will turn out to be convenient to rewrite $\lambda = -\omega^2$, where $\omega \triangleq \sqrt{-\lambda}$. Then the differential equation $X''(x) + \lambda X(x) = 0$ is $X''(x) - \omega^2X(x) = 0$, whose solutions are $X = c_1\cosh(\omega x) + c_2\sinh(\omega x)$, for arbitrary constants $c_1, c_2$. The BCs require
$$\begin{cases} 0 = X\big(\frac{\pi}{4}\big) - 3X'\big(\frac{\pi}{4}\big) = c_1\cosh\frac{\omega\pi}{4} + c_2\sinh\frac{\omega\pi}{4} - 3\omega\big(c_1\sinh\frac{\omega\pi}{4} + c_2\cosh\frac{\omega\pi}{4}\big) \\ 0 = X\big(\frac{3\pi}{4}\big) + X'\big(\frac{3\pi}{4}\big) = c_1\cosh\frac{3\omega\pi}{4} + c_2\sinh\frac{3\omega\pi}{4} + \omega\big(c_1\sinh\frac{3\omega\pi}{4} + c_2\cosh\frac{3\omega\pi}{4}\big)\end{cases},$$
hence
$$\begin{bmatrix} \cosh\frac{\omega\pi}{4}-3\omega\sinh\frac{\omega\pi}{4} & \sinh\frac{\omega\pi}{4}-3\omega\cosh\frac{\omega\pi}{4} \\ \cosh\frac{3\omega\pi}{4}+\omega\sinh\frac{3\omega\pi}{4} & \sinh\frac{3\omega\pi}{4}+\omega\cosh\frac{3\omega\pi}{4}\end{bmatrix}\begin{bmatrix} c_1 \\ c_2\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.$$
Because
$$\begin{vmatrix} \cosh\frac{\omega\pi}{4}-3\omega\sinh\frac{\omega\pi}{4} & \sinh\frac{\omega\pi}{4}-3\omega\cosh\frac{\omega\pi}{4} \\ \cosh\frac{3\omega\pi}{4}+\omega\sinh\frac{3\omega\pi}{4} & \sinh\frac{3\omega\pi}{4}+\omega\cosh\frac{3\omega\pi}{4}\end{vmatrix} = \dots = 4\omega\Big(\cosh\frac{3\omega\pi}{4}\cosh\frac{\omega\pi}{4}-\sinh\frac{3\omega\pi}{4}\sinh\frac{\omega\pi}{4}\Big) + (1+3\omega^2)\Big(\sinh\frac{3\omega\pi}{4}\cosh\frac{\omega\pi}{4}-\cosh\frac{3\omega\pi}{4}\sinh\frac{\omega\pi}{4}\Big)$$
$$= 4\omega\cosh\frac{2\omega\pi}{4} + (1+3\omega^2)\sinh\frac{2\omega\pi}{4},$$
and because $\sinh\frac{\omega\pi}{2} > 0$ and $\cosh\frac{\omega\pi}{2} > 0$ for all $\omega > 0$, there is no eigenvalue for $\lambda < 0$.

(b) In the context of Theorem 9.15 in Section 9.6, the ODE $X'' + \lambda X = 0$ has $s(x) \equiv 1$. So, the orthogonality relation is that
$$0 = \int_{\pi/4}^{3\pi/4} X_n(x)X_m(x)\,dx \quad\text{for } n \ne m.$$
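As noted in part (a), the eigenvalues of 9.6.4.5 can only be approximated numerically. A minimal sketch of that idea (not from the manual; plain-Python bisection, function names ours), which also double-checks the determinant simplification $4\omega\cos\frac{\omega\pi}{2} + (1-3\omega^2)\sin\frac{\omega\pi}{2}$ against the raw $2\times2$ determinant:

```python
import math

def det_bc(w):
    """Determinant of the 2x2 system from the BCs X(pi/4) - 3 X'(pi/4) = 0
    and X(3pi/4) + X'(3pi/4) = 0, with X = c1 cos(w x) + c2 sin(w x)."""
    a, b = w * math.pi / 4, 3 * w * math.pi / 4
    return ((math.cos(a) + 3 * w * math.sin(a)) * (math.sin(b) + w * math.cos(b))
            - (math.sin(a) - 3 * w * math.cos(a)) * (math.cos(b) - w * math.sin(b)))

def char(w):
    """Simplified characteristic function 4 w cos(w pi/2) + (1 - 3 w^2) sin(w pi/2)."""
    return 4 * w * math.cos(w * math.pi / 2) + (1 - 3 * w * w) * math.sin(w * math.pi / 2)

def first_root(lo=0.1, hi=2.0):
    """Bisection; char(0.1) > 0 > char(2.0), so a sign change lies inside (lo, hi)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if char(lo) * char(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The first root $\omega_1$ of `char` gives the smallest eigenvalue $\lambda_1 = \omega_1^2$.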

9.6.4.6. (a) The ODE X′′ + 2βX′ + λX(x) = 0 has characteristic roots

s = −β ± √(β² − λ) = −β ± i√(λ − β²),

after noting that we are assuming that λ > β². The solutions of the ODE are

X = c₁ e^(−βx) cos(ωx) + c₂ e^(−βx) sin(ωx),

where ω ≜ √(λ − β²) and c₁, c₂ are arbitrary constants. The first BC requires 0 = X(0) = c₁, hence X = c₂ e^(−βx) sin(ωx), so the second BC requires 0 = X(L) = c₂ e^(−βL) sin(ωL). So ωn = nπ/L, and the eigenvalues are λn = β² + ωn² = β² + (nπ/L)².

(b) In order to use Theorem 9.15 in Section 9.6, first rewrite the ODE in the form of (9.63) in Section 9.6: To do this, use the idea of an integrating factor from Section 3.1 to see that X′′ + 2βX′ + λX = 0 ⇔ e^(2βx)( X′′ + 2βX′ ) + λ e^(2βx) X = 0 ⇔ ( e^(2βx) X′ )′ + λ e^(2βx) X = 0. So, s(x) = e^(2βx), and the orthogonality relation is

0 = ∫₀^L e^(2βx) Xn(x) Xm(x) dx for n ≠ m.
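The weighted orthogonality relation above can be checked numerically. In the sketch below (my own check, not the manual's), β and L are illustrative values; the eigenfunctions Xn(x) = e^(−βx) sin(nπx/L) make the weight e^(2βx) cancel the exponential factors, so the integral reduces to the ordinary orthogonality of sines:

```python
import math

beta, L = 0.7, 2.0   # illustrative values (not from the manual)

def X(n, x):
    """Eigenfunction X_n(x) = e^(-beta*x) * sin(n*pi*x/L)."""
    return math.exp(-beta * x) * math.sin(n * math.pi * x / L)

def weighted_inner(n, m, N=4000):
    """Trapezoid rule for integral_0^L e^(2*beta*x) X_n(x) X_m(x) dx."""
    h = L / N
    total = 0.0
    for i in range(N + 1):
        x = i * h
        w = 0.5 if i in (0, N) else 1.0
        total += w * math.exp(2 * beta * x) * X(n, x) * X(m, x)
    return total * h

i12 = weighted_inner(1, 2)   # should vanish (orthogonality)
i11 = weighted_inner(1, 1)   # should be positive (a squared norm)
```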

9.6.4.7. (a) (i) For λ = 0 the ODE becomes X′′′′(x) = 0, so X(x) = c₀ + c₁x + (1/2)c₂x² + (1/6)c₃x³, where the cᵢ's are arbitrary constants. Substitute X(x) and X′(x) = c₁ + c₂x + (1/2)c₃x² into the first two BCs to get 0 = X(0) = c₀ and 0 = X′(0) = c₁. Substitute these into X(x) to reduce it to X(x) = (1/2)c₂x² + (1/6)c₃x³. Substitute X(x) and X′(x) = c₂x + (1/2)c₃x² into the last two BCs to get the system of equations 0 = X(L) = (1/2)c₂L² + (1/6)c₃L³ and 0 = X′(L) = c₂L + (1/2)c₃L², that is,

[ (1/2)L²   (1/6)L³ ] [ c₂ ]   [ 0 ]
[ L         (1/2)L² ] [ c₃ ] = [ 0 ].

Because the determinant (1/4)L⁴ − (1/6)L⁴ = (1/12)L⁴ ≠ 0, it follows that c₂ = c₃ = 0. This yields X(x) ≡ 0, so λ = 0 is not an eigenvalue of this fourth-order ODE-BVP.

(ii) Suppose λ < 0. Define ω = (−λ)^(1/4) for convenience. The ODE X′′′′ − ω⁴X = 0 has characteristic polynomial s⁴ − ω⁴ = (s² − ω²)(s² + ω²), which has roots s = ±ω, ±iω. The solutions of the ODE are

X(x) = c₁ cosh(ωx) + c₂ sinh(ωx) + c₃ cos(ωx) + c₄ sin(ωx).

Substitute that and X′(x) = ω( c₁ sinh(ωx) + c₂ cosh(ωx) − c₃ sin(ωx) + c₄ cos(ωx) ) into the first two BCs to get 0 = X(0) = c₁ + c₃ and 0 = X′(0) = c₂ + c₄, which implies c₁ = −c₃ and c₂ = −c₄. Substitute these into X(x) to reduce it to

X(x) = c₃( cos(ωx) − cosh(ωx) ) + c₄( sin(ωx) − sinh(ωx) ).

Substitute X(x) and X′(x) = ω( c₃( −sin(ωx) − sinh(ωx) ) + c₄( cos(ωx) − cosh(ωx) ) ) into the last two BCs to get the system of equations 0 = X(L) and 0 = X′(L), or, equivalently,

(⋆⋆) [ cos(ωL) − cosh(ωL)     sin(ωL) − sinh(ωL) ] [ c₃ ]   [ 0 ]
     [ −sin(ωL) − sinh(ωL)    cos(ωL) − cosh(ωL) ] [ c₄ ] = [ 0 ].

There is a non-trivial solution for X(x) if, and only if, the determinant vanishes:

0 = ( cos(ωL) − cosh(ωL) )² + ( sin(ωL) − sinh(ωL) )( sin(ωL) + sinh(ωL) )
  = ( cos²(ωL) + sin²(ωL) ) + ( cosh²(ωL) − sinh²(ωL) ) − 2 cosh(ωL) cos(ωL)
  = 2 − 2 cosh(ωL) cos(ωL),

that is, if, and only if, 0 = 1 − cosh(ωL) cos(ωL). This could also be rewritten as

cos(θ) = 1/cosh(θ),

where θ = ωL. Graphed below are the functions cos(θ), in a dashed curve, and 1/cosh(θ), in a solid curve. Their points of intersection give eigenvalues. There are infinitely many eigenvalues λn = −ωn⁴, where ωn = θn/L → ∞ as n → ∞. Because cosh(θ) → ∞ as θ → ∞, the roots θn ∼ (n − 1/2)π as n → ∞.
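The intersections just described can be located numerically. Below is a small bisection sketch (my own check, not part of the manual) for the first three positive roots of cosh(θ) cos(θ) = 1; these are the classical clamped-clamped beam values θ ≈ 4.7300, 7.8532, 10.9956, each near an odd multiple of π/2:

```python
import math

def f(theta):
    """Characteristic function: roots satisfy cosh(theta)*cos(theta) = 1."""
    return math.cosh(theta) * math.cos(theta) - 1.0

def bisect(a, b, n=200):
    """Simple bisection, assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(n):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# First three positive roots; the brackets were chosen by inspecting signs of f.
roots = [bisect(4.0, 5.0), bisect(7.0, 8.0), bisect(10.5, 11.5)]
```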

Figure 31: Problem 9.6.4.7

[We were not asked for the corresponding eigenfunctions, but again we can use the "adjugate matrix method" of Theorem 2.2, as in Example 9.33, to find a solution of the system above:

[ c₃ ]   [ cos(θn) − cosh(θn) ]
[ c₄ ] = [ sin(θn) + sinh(θn) ].

The corresponding eigenfunctions are

Xn(x) = ( cos(θn) − cosh(θn) )( cos(ωn x) − cosh(ωn x) ) + ( sin(θn) + sinh(θn) )( sin(ωn x) − sinh(ωn x) ), n = 1, 2, … .]

(iii) Suppose λ > 0. Define ω = λ^(1/4) for convenience. The ODE X′′′′ + ω⁴X = 0 has characteristic polynomial s⁴ + ω⁴, which implies s² = ±iω², and thus the roots are s = ω e^(±iπ/4), ω e^(±3iπ/4). These make for quite a complicated general solution of the ODE. It might be easier to just use the method of Example 9.30 to explain why λ > 0 cannot be an eigenvalue: Suppose that λ > 0 is an eigenvalue. Multiply the ODE through by X(x) and integrate on the interval 0 < x < L to get

0 = ∫₀^L X(x) X′′′′(x) dx + λ ∫₀^L ( X(x) )² dx.

Integrate by parts to get

0 = [ X(x) X′′′(x) ]₀^L − ∫₀^L X′(x) X′′′(x) dx + λ ∫₀^L ( X(x) )² dx.

The BCs X(0) = X(L) = 0 explain why this implies that

λ ∫₀^L ( X(x) )² dx = ∫₀^L X′(x) X′′′(x) dx,

and another use of integration by parts gives

λ ∫₀^L ( X(x) )² dx = [ X′(x) X′′(x) ]₀^L − ∫₀^L X′′(x) X′′(x) dx.

The BCs X′(0) = X′(L) = 0 explain why this implies that

(⋆⋆⋆) λ ∫₀^L ( X(x) )² dx = − ∫₀^L ( X′′(x) )² dx.

Because λ is an eigenvalue, we cannot have the continuous function X(x) ≡ 0 on the interval 0 < x < L. It follows that ∫₀^L ( X(x) )² dx > 0. This and (⋆⋆⋆) imply that λ ≤ 0. This contradicts our assumption that λ > 0, so there cannot be an eigenvalue λ > 0.

(b) Regarding orthogonality, we can use the method of Example 9.29: Suppose λn and λm are two distinct eigenvalues, with corresponding eigenfunctions Xn(x) and Xm(x). From the ODE, with λ replaced by λn and X(x) replaced by Xn(x), we get

(1) Xn′′′′(x) + λn Xn(x) = 0, a < x < b,

and similarly we get

(2) Xm′′′′(x) + λm Xm(x) = 0, a < x < b.

Multiply ODE (1) by Xm(x), and subtract from that Xn(x) times ODE (2), to get

0 = Xm(x) Xn′′′′(x) − Xn(x) Xm′′′′(x) + ( λn − λm ) Xn(x) Xm(x).

Integrate from 0 to L to get

(⋆) 0 = ∫₀^L ( Xm(x) Xn′′′′(x) − Xn(x) Xm′′′′(x) ) dx + ( λn − λm ) ∫₀^L Xn(x) Xm(x) dx.

For the first term, integration by parts gives

∫₀^L ( Xm Xn′′′′ − Xn Xm′′′′ ) dx = [ Xm Xn′′′ − Xn Xm′′′ ]₀^L − ∫₀^L ( Xm′ Xn′′′ − Xn′ Xm′′′ ) dx
= Xm(L)Xn′′′(L) − Xn(L)Xm′′′(L) − Xm(0)Xn′′′(0) + Xn(0)Xm′′′(0) − ∫₀^L ( Xm′ Xn′′′ − Xn′ Xm′′′ ) dx.

Using the BCs Xn(L) = Xm(L) = Xn(0) = Xm(0) = 0, followed by another use of integration by parts, we have

∫₀^L ( Xm Xn′′′′ − Xn Xm′′′′ ) dx = 0 − ∫₀^L ( Xm′ Xn′′′ − Xn′ Xm′′′ ) dx
= −[ Xm′ Xn′′ − Xn′ Xm′′ ]₀^L + ∫₀^L ( Xm′′ Xn′′ − Xn′′ Xm′′ ) dx = 0,

using the BCs Xn′(L) = Xm′(L) = Xn′(0) = Xm′(0) = 0, and canceling Xm′′(x)Xn′′(x) − Xn′′(x)Xm′′(x) ≡ 0. So, (⋆) reduces to

0 = ( λn − λm ) ∫₀^L Xn(x) Xm(x) dx.

Because λn ≠ λm, we can divide through by ( λn − λm ) to get the orthogonality relation

0 = ∫₀^L Xn(x) Xm(x) dx, for n ≠ m.
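This orthogonality relation can be confirmed numerically with the eigenfunctions built from the adjugate-matrix coefficients above. The sketch below (my own check, not part of the manual) uses L = 1, finds θ₁ and θ₂ from cosh(θ)cos(θ) = 1 by bisection, and verifies that the inner product of the first two eigenfunctions is negligible relative to their norms:

```python
import math

def croot(a, b):
    """Bisection for cosh(t)*cos(t) = 1 on a sign-changing bracket."""
    f = lambda t: math.cosh(t) * math.cos(t) - 1.0
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

L = 1.0
thetas = [croot(4.0, 5.0), croot(7.0, 8.0)]   # theta_1, theta_2

def Xn(theta, x):
    """Eigenfunction built from the adjugate-matrix coefficients."""
    w = theta / L
    return ((math.cos(theta) - math.cosh(theta)) * (math.cos(w * x) - math.cosh(w * x))
            + (math.sin(theta) + math.sinh(theta)) * (math.sin(w * x) - math.sinh(w * x)))

def inner(t1, t2, N=4000):
    """Trapezoid rule for integral_0^L X_{t1}(x) X_{t2}(x) dx."""
    h = L / N
    s = 0.0
    for i in range(N + 1):
        x = i * h
        w = 0.5 if i in (0, N) else 1.0
        s += w * Xn(t1, x) * Xn(t2, x)
    return s * h

i12 = inner(thetas[0], thetas[1])
i11 = inner(thetas[0], thetas[0])
i22 = inner(thetas[1], thetas[1])
```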

9.6.4.8. Multiply ψ′′ + q(x)ψ(x) = 0 by y(x) to get (1) y(x)ψ′′ + q(x)y(x)ψ(x) = 0, and multiply y′′ + q(x)y(x) = f(x) by ψ(x) to get (2) ψ(x)y′′ + q(x)y(x)ψ(x) = f(x)ψ(x). Subtract (1) from (2) to get

ψ(x)y′′ − y(x)ψ′′(x) + q(x)y(x)ψ(x) − q(x)y(x)ψ(x) = f(x)ψ(x) − 0,

or equivalently, ψ(x)y′′ − y(x)ψ′′(x) = f(x)ψ(x). Integrate both sides from a to b and then integrate by parts to get

∫_a^b ψ(x)f(x) dx = ∫_a^b ( ψ(x)y′′(x) − y(x)ψ′′(x) ) dx
= [ ψ(x)y′(x) − y(x)ψ′(x) ]_a^b − ∫_a^b ( ψ′(x)y′(x) − y′(x)ψ′(x) ) dx
= ψ(b)y′(b) − y(b)ψ′(b) − ψ(a)y′(a) + y(a)ψ′(a) − ∫_a^b ( ψ′(x)y′(x) − y′(x)ψ′(x) ) dx.

Use the BCs ψ(a) = ψ(b) = 0 and the BCs y(a) = y(b) = 0, and cancel ψ′(x)y′(x) − y′(x)ψ′(x) ≡ 0, to conclude that

∫_a^b ψ(x)f(x) dx = 0.

9.6.4.9. The graph of g(ω) ≜ √2 tan(√2 ω) + √3 tan(√3 ω), where ω ≜ √λ, shown below, appears to give infinitely many eigenvalues λn = ωn², where ωn ∼ n as n → ∞.

Figure 32: Problem 9.6.4.9

9.6.4.10. (a) Assume that λ > 0 and define ω = √λ. The general solution of the ODE is X(x) = c₁ cos(ωx) + c₂ sin(ωx). Substitute this and X′(x) = −c₁ω sin(ωx) + c₂ω cos(ωx) into the two BCs to get

0 = X(0) − hX′(0) = c₁ cos 0 + c₂ sin 0 − h( −c₁ω sin 0 + c₂ω cos 0 ) = c₁ − hω c₂,
0 = X(L) + hX′(L) = c₁ cos(ωL) + c₂ sin(ωL) + h( −c₁ω sin(ωL) + c₂ω cos(ωL) ),

or, equivalently,

(⋆) [ 1                          −hω                     ] [ c₁ ]   [ 0 ]
    [ cos(ωL) − hω sin(ωL)       sin(ωL) + hω cos(ωL)   ] [ c₂ ] = [ 0 ].

There is a non-trivial solution for X(x) if, and only if,

0 = ( sin(ωL) + hω cos(ωL) ) + hω( cos(ωL) − hω sin(ωL) ).

So,

0 = (1 − h²ω²) sin(ωL) + 2hω cos(ωL)

is the characteristic equation to be satisfied by all of the eigenvalues λ = ω².

(b) The adjugate of the 2 × 2 matrix in (⋆) is

[ sin(ωL) + hω cos(ωL)     hω ]
[ −cos(ωL) + hω sin(ωL)    1  ],

so the eigenfunctions are of the form

Xn(x) = hωn cos(ωn x) + sin(ωn x),

where ωn = √λn and λn is an eigenvalue.
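The characteristic equation and the eigenfunction formula above fit together: Xn automatically satisfies the left BC, and its residual in the right BC is exactly the characteristic function. The sketch below (my own check with illustrative h and L, not values from the manual) finds one root by bisection and verifies both BCs:

```python
import math

h, L = 1.0, 1.0   # illustrative parameter values (not from the manual)

def char_eq(w):
    """Characteristic equation (1 - h^2 w^2) sin(wL) + 2 h w cos(wL)."""
    return (1 - (h * w)**2) * math.sin(w * L) + 2 * h * w * math.cos(w * L)

# Bisection on a bracket where char_eq changes sign.
a, b = 1.0, 2.0
fa = char_eq(a)
for _ in range(200):
    m = 0.5 * (a + b)
    if fa * char_eq(m) <= 0:
        b = m
    else:
        a, fa = m, char_eq(m)
w1 = 0.5 * (a + b)

def X(x):
    """Candidate eigenfunction from the adjugate matrix: h*w*cos(wx) + sin(wx)."""
    return h * w1 * math.cos(w1 * x) + math.sin(w1 * x)

def Xp(x):
    return -h * w1**2 * math.sin(w1 * x) + w1 * math.cos(w1 * x)

left_residual = X(0) - h * Xp(0)    # BC at x = 0 (holds identically)
right_residual = X(L) + h * Xp(L)   # BC at x = L (holds at eigenvalues)
```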

9.6.4.11. We are assuming that 0 < λn = ωn². The general solution of the ODE is X(x) = c₁ cos(ωn x) + c₂ sin(ωn x). Substitute this and X′(x) = −c₁ωn sin(ωn x) + c₂ωn cos(ωn x) into the two BCs to get

0 = α₀X(a) − α₁X′(a) = α₀( c₁ cos(ωn a) + c₂ sin(ωn a) ) − α₁( −c₁ωn sin(ωn a) + c₂ωn cos(ωn a) ),
0 = γ₀X(b) + γ₁X′(b) = γ₀( c₁ cos(ωn b) + c₂ sin(ωn b) ) + γ₁( −c₁ωn sin(ωn b) + c₂ωn cos(ωn b) ),

or, equivalently,

(⋆) [ α₀ cos(ωn a) + α₁ωn sin(ωn a)     α₀ sin(ωn a) − α₁ωn cos(ωn a) ] [ c₁ ]   [ 0 ]
    [ γ₀ cos(ωn b) − γ₁ωn sin(ωn b)     γ₀ sin(ωn b) + γ₁ωn cos(ωn b) ] [ c₂ ] = [ 0 ].

There is a non-trivial solution for X(x) if, and only if, the determinant of the 2 × 2 matrix in (⋆) is zero. The adjugate of the 2 × 2 matrix in (⋆) is

[ γ₀ sin(ωn b) + γ₁ωn cos(ωn b)      −α₀ sin(ωn a) + α₁ωn cos(ωn a) ]
[ −γ₀ cos(ωn b) + γ₁ωn sin(ωn b)     α₀ cos(ωn a) + α₁ωn sin(ωn a)  ].

If the first column of the adjugate matrix is nonzero it gives eigenfunctions

Xn(x) = ( γ₀ sin(ωn b) + γ₁ωn cos(ωn b) ) cos(ωn x) + ( −γ₀ cos(ωn b) + γ₁ωn sin(ωn b) ) sin(ωn x).

If the second column of the adjugate matrix is nonzero it gives eigenfunctions

Xn(x) = ( −α₀ sin(ωn a) + α₁ωn cos(ωn a) ) cos(ωn x) + ( α₀ cos(ωn a) + α₁ωn sin(ωn a) ) sin(ωn x).

Note that at least one of the columns of the adjugate matrix must be a nonzero vector in ℝ², because of Theorem 1.33 in Section 1.7 and the fact that Theorem 9.15 says that "…the only eigenfunctions are the nonzero multiples of a single eigenfunction…," hence the 2 × 2 matrix in (⋆) cannot have rank equal to zero.

First consider the case of λ = 0: On sub-interval [0, 1], κ(x) = 1, so the ODE there is X′′(x) + 0·X(x) = 0, that is, X′′(x) = 0, so the solutions are X⁽¹⁾(x) = c₁ + c₂x. The BC X′(0) = 0 yields c₂ = 0, so X⁽¹⁾(x) = c₁, 0 < x < 1. On [1, 2], κ(x) = 1/4, so the ODE there is (1/4)X′′(x) + 0·X(x) = 0. The solutions are X⁽²⁾(x) = d₁ + d₂x. The BC X′(2) = 0 yields d₂ = 0, so X⁽²⁾(x) = d₁, 1 < x < 2. Matching the solutions X⁽¹⁾(x) and X⁽²⁾(x) requires c₁ = X⁽¹⁾(1⁻) = X⁽²⁾(1⁺) = d₁, hence c₁ = d₁, as well as 0 = lim_{x→1⁻} κ(x)(X⁽¹⁾)′(x) = lim_{x→1⁺} κ(x)(X⁽²⁾)′(x) = 0, which effectively imposes no further requirement on c₁ or d₁. So, λ = 0 is an eigenvalue, and a corresponding eigenfunction is X₁(x) ≡ 1.

For λ > 0, on [0, 1], κ(x) = 1, so the ODE there is X′′(x) + λX(x) = 0, so the solutions are X⁽¹⁾(x) = c₁ cos(√λ x) + c₂ sin(√λ x). The BC X′(0) = 0 yields c₂ = 0, so

X⁽¹⁾(x) = c₁ cos(√λ x),  (X⁽¹⁾)′(x) = −c₁√λ sin(√λ x),  0 < x < 1.

On [1, 2], κ(x) = 1/4, so the ODE there is (1/4)X′′(x) + λX(x) = 0. We could take the solutions there, X⁽²⁾(x), to be a linear combination of cos(2√λ x) and sin(2√λ x), but instead we can use another set of basic solutions that will help us solve the BC X′(2) = 0. Let

X⁽²⁾(x) = d₁ cos( 2√λ (2 − x) ) + d₂ sin( 2√λ (2 − x) ).

The chain rule yields

(X⁽²⁾)′(x) = d₁ · 2√λ sin( 2√λ (2 − x) ) − d₂ · 2√λ cos( 2√λ (2 − x) ).

The BC X′(2) = 0 yields 0 = 2√λ( d₁·0 − d₂·1 ), hence d₂ = 0. So

X⁽²⁾(x) = d₁ cos( 2√λ (2 − x) ),  (X⁽²⁾)′(x) = d₁ · 2√λ sin( 2√λ (2 − x) ),  1 < x < 2.

Matching the solutions X⁽¹⁾(x) and X⁽²⁾(x) at x = 1 requires, by the interface conditions, continuity of X,

c₁ cos(√λ) = X⁽¹⁾(1⁻) = X⁽²⁾(1⁺) = d₁ cos(2√λ), that is,

(⋆) cos(√λ) c₁ − cos(2√λ) d₁ = 0,

as well as continuity of the flux,

−√λ c₁ sin(√λ) = lim_{x→1⁻} κ(x)(X⁽¹⁾)′(x) = lim_{x→1⁺} κ(x)(X⁽²⁾)′(x) = (1/4) · 2√λ d₁ sin(2√λ), that is,

(⋆⋆) −sin(√λ) c₁ − (1/2) sin(2√λ) d₁ = 0,

after dividing through by √λ (recall that we assumed λ > 0 in this case). Equations (⋆) and (⋆⋆) form a homogeneous system in the two unknowns c₁, d₁, which we can write as

A [ c₁ ; d₁ ] ≜ [ cos(√λ)    −cos(2√λ)       ] [ c₁ ]   [ 0 ]
               [ −sin(√λ)   −(1/2) sin(2√λ) ] [ d₁ ] = [ 0 ].

There is a non-trivial solution for the eigenfunction X(x) = X⁽¹⁾(x) on (0, 1), X(x) = X⁽²⁾(x) on (1, 2), if and only if the system of two equations has a non-trivial solution for c₁, d₁, that is, if and only if |A| = 0. This reduces to the characteristic equation

(⋆⋆⋆) 0 = (1/2) cos(√λ) sin(2√λ) + sin(√λ) cos(2√λ) ≜ f(√λ).

We can use a graphical method to see why there are infinitely many eigenvalues. In fact, the function f(θ) is periodic with period 2π, so the existence of one root of f(θ) implies the existence of infinitely many roots. Also, we could use a root-finding method in Section 8.1 to get good approximations to a finite number of them.
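A root of f and the corresponding interface conditions can be checked together. The sketch below (my own verification, not part of the manual) locates the smallest positive root of f by bisection and confirms that the piecewise eigenfunction with c₁ = cos(2θ), d₁ = cos(θ) is continuous at x = 1 and has continuous flux κ·X′ there:

```python
import math

def f(theta):
    """Characteristic function from (***): zero at eigenvalues, theta = sqrt(lambda)."""
    return 0.5 * math.cos(theta) * math.sin(2 * theta) + math.sin(theta) * math.cos(2 * theta)

a, b = 0.5, 1.0           # f changes sign on this bracket
fa = f(a)
for _ in range(200):
    m = 0.5 * (a + b)
    if fa * f(m) <= 0:
        b = m
    else:
        a, fa = m, f(m)
theta1 = 0.5 * (a + b)    # square root of the smallest positive eigenvalue

# Piecewise eigenfunction with c1 = cos(2*theta1), d1 = cos(theta1):
c1, d1 = math.cos(2 * theta1), math.cos(theta1)
X1 = lambda x: c1 * math.cos(theta1 * x)              # 0 < x < 1, kappa = 1
X2 = lambda x: d1 * math.cos(2 * theta1 * (2 - x))    # 1 < x < 2, kappa = 1/4

value_jump = X1(1.0) - X2(1.0)                        # continuity of X at x = 1
flux_jump = (1.0 * (-c1 * theta1 * math.sin(theta1))
             - 0.25 * (d1 * 2 * theta1 * math.sin(2 * theta1)))  # continuity of kappa*X'
```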

Figure 33: Problem 9.6.4.12

As to the corresponding eigenfunctions, we can use the "adjugate matrix method" of Theorem 2.2 in Section 2.1, as in Example 2.5 in Section 2.1, to find a non-trivial solution for the vector [ c₁ ; d₁ ]: We calculate that

adj(A) = [ −(1/2) sin(2√λ)    cos(2√λ) ]
         [ sin(√λ)            cos(√λ)  ].

Any nonzero column of adj(A) is a useful solution for [ c₁ ; d₁ ]. Denote adj(A) = [ v₁  v₂ ]. The second column is nonzero, because

‖v₂‖² = cos²(2√λ) + cos²(√λ) ≠ 0

follows from the fact that it is impossible to have both √λ and 2√λ be odd multiples of π/2. By the way, using the fact that λ being an eigenvalue yields cos(√λ) sin(2√λ) = −2 sin(√λ) cos(2√λ), we calculate that

cos(√λ) v₁ = sin(√λ) v₂,

confirming that rank(adj(A)) = 1, as predicted by Theorem 1.33 in Section 1.6. So, we can take [ c₁ ; d₁ ] = v₂ = [ cos(2√λ) ; cos(√λ) ]. Correspondingly, our eigenfunctions are

X(x) = cos(2√λ) cos(√λ x) for 0 < x < 1, and X(x) = cos(√λ) cos( 2√λ (2 − x) ) for 1 < x < 2.

‖f‖₂² = πa₀²/2 + π Σ_{n=1}^{∞} ( |aₙ|² + |bₙ|² ).

By the "divergence test" for infinite series of real numbers, the fact that the sum converges implies that aₙ → 0 as n → ∞. But,

aₙ = (1/π) ∫_{−π}^{π} f(x) sin(nx) dx = (1/π) ⟨f, sin(nx)⟩.

So, for all square-integrable functions f,

⟨f, (1/√π) sin(nx)⟩ → 0 = ⟨f, 0⟩ as n → ∞,

which says that the sequence { (1/√π) sin(nx) }_{n=1}^{∞} converges weakly to the zero function in L²(−π, π).
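The decay of these inner products can be seen numerically. The sketch below (my own illustration, not the manual's) takes f(x) = x on (−π, π), for which the exact coefficients are 2(−1)^(n+1)/n, and shows them shrinking toward 0 as n grows:

```python
import math

def inner_sin(n, N=20000):
    """Trapezoid approximation of (1/pi) * integral_{-pi}^{pi} f(x) sin(nx) dx, f(x) = x."""
    h = 2 * math.pi / N
    s = 0.0
    for i in range(N + 1):
        x = -math.pi + i * h
        w = 0.5 if i in (0, N) else 1.0
        s += w * x * math.sin(n * x)
    return s * h / math.pi

# For f(x) = x the exact coefficient is 2*(-1)**(n+1)/n, which tends to 0:
coeffs = [inner_sin(n) for n in (1, 5, 25, 125)]
```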


Chapter Ten

Section 10.1.3

10.1.3.1. For an arbitrary control volume V we have (10.3), that is,

∭_V (∂e/∂t) dV = −∯_S q(r, t) • n̂ dS + ∭_V Q(r, t) dV.

For an arbitrary rectangular slice V = {(x, y, z) : 0 < x < a, 0 < y < b, α < z < β}, where β > α, the surface S that bounds V consists of six parts:
(1) S₁, the rectangle z = α, 0 < x < a, 0 < y < b, on which n̂ = −k̂,
(2) S₂, the rectangle z = β, 0 < x < a, 0 < y < b, on which n̂ = k̂,
(3) S₃, the rectangle x = 0, 0 < y < b, α < z < β, on which n̂ = −ı̂,
(4) S₄, the rectangle x = a, 0 < y < b, α < z < β, on which n̂ = ı̂,
(5) S₅, the rectangle y = 0, 0 < x < a, α < z < β, on which n̂ = −ȷ̂, and
(6) S₆, the rectangle y = b, 0 < x < a, α < z < β, on which n̂ = ȷ̂.

So,

−∯_S q • n̂ dS = ∫₀^b∫₀^a qz(x, y, α, t) dx dy − ∫₀^b∫₀^a qz(x, y, β, t) dx dy
+ ∫_α^β∫₀^b qx(0, y, z, t) dy dz − ∫_α^β∫₀^b qx(a, y, z, t) dy dz
+ ∫_α^β∫₀^a qy(x, 0, z, t) dx dz − ∫_α^β∫₀^a qy(x, b, z, t) dx dz,

so (10.3) says that ∫_α^β∫₀^b∫₀^a (∂e/∂t)(x, y, z, t) dx dy dz equals the sum of the six flux integrals above plus ∫_α^β∫₀^b∫₀^a Q(x, y, z, t) dx dy dz.

If we assume that e, q, and Q do not depend on z, and we assume that qz ≡ 0, that is, there is no heat flux out of the slab at any point on the top surface S₂ or the bottom surface S₁, then we can integrate with respect to z to get

(⋆) (β − α) ∫₀^b∫₀^a (∂e/∂t)(x, y, t) dx dy = (β − α) ∫₀^b ( qx(0, y, t) − qx(a, y, t) ) dy + (β − α) ∫₀^a ( qy(x, 0, t) − qy(x, b, t) ) dx + (β − α) ∫₀^b∫₀^a Q(x, y, t) dx dy.

Turyn, January 8, 2014


The first two terms on the right-hand side of (⋆) can be rewritten using

qx(0, y, t) − qx(a, y, t) = −∫₀^a (∂qx/∂x)(x, y, t) dx,

and the next two terms on the right-hand side can be rewritten using

qy(x, 0, t) − qy(x, b, t) = −∫₀^b (∂qy/∂y)(x, y, t) dy.

After dividing through by the positive constant (β − α) and moving terms to the left-hand side, (⋆) implies that

∫₀^b∫₀^a ( ∂e/∂t + ∂qx/∂x + ∂qy/∂y − Q )(x, y, t) dx dy = 0.

This being true for all such rectangular slices implies the PDE

∂e/∂t (x, y, t) = −∂qx/∂x (x, y, t) − ∂qy/∂y (x, y, t) + Q(x, y, t),

the two-space-dimensional version of the heat balance equation.

10.1.3.2. For an arbitrary control volume V we have (10.3), that is,

∭_V (∂e/∂t) dV = −∯_S q(r, t) • n̂ dS + ∭_V Q(r, t) dV.

It is natural to rewrite an arbitrary cylindrical shell V = {(x, y, z) : 0 < z < H, α² ≤ x² + y² ≤ β²}, where R > β > α > 0, in cylindrical coordinates, as V = {(r, θ, z) : α < r < β, −π < θ ≤ π, 0 < z < H}. In cylindrical coordinates, q = qr êr + qθ êθ + qz k̂. The surface S that bounds V consists of four parts, similarly to Example 7.36 in Section 7.5:
(1) S₁, the annulus z = 0, α² ≤ x² + y² ≤ β², on which n̂ = −k̂,
(2) S₂, the annulus z = H, α² ≤ x² + y² ≤ β², on which n̂ = k̂,
(3) S₃, the lateral surface r = β, −π < θ ≤ π, 0 < z < H, on which n̂ = êr = cos θ ı̂ + sin θ ȷ̂, and
(4) S₄, the lateral surface r = α, −π < θ ≤ π, 0 < z < H, on which n̂ = −êr.

So,

−∯_S q • n̂ dS = ∫_{−π}^{π}∫_α^β qz(r, θ, 0, t) r dr dθ − ∫_{−π}^{π}∫_α^β qz(r, θ, H, t) r dr dθ − ∫₀^H∫_{−π}^{π} qr(β, θ, z, t) β dθ dz + ∫₀^H∫_{−π}^{π} qr(α, θ, z, t) α dθ dz,

so (10.3) says that ∫₀^H∫_{−π}^{π}∫_α^β (∂e/∂t) r dr dθ dz equals the sum of the four flux integrals above plus ∫₀^H∫_{−π}^{π}∫_α^β Q r dr dθ dz.

If we assume that e, q, Q, and the BCs do not depend on z or θ, and we assume that qz ≡ 0, that is, there is no heat flux out of the cylinder at any point on the top surface S₂ or the bottom surface S₁, then we can integrate with respect to z and with respect to θ to get

(⋆) 2πH ∫_α^β (∂e/∂t)(r, t) r dr = −2πH β · qr(β, t) + 2πH α · qr(α, t) + 2πH ∫_α^β Q(r, t) r dr.

The first two terms on the right-hand side can be rewritten using

β · qr(β, t) − α · qr(α, t) = ∫_α^β (∂/∂r)( r · qr(r, t) ) dr,

hence (⋆) can be rewritten as

2πH ∫_α^β (∂e/∂t)(r, t) r dr = −2πH ∫_α^β (∂/∂r)( r · qr(r, t) ) dr + 2πH ∫_α^β Q(r, t) r dr.

After dividing through by the positive constant 2πH and moving terms to the left-hand side, (⋆) implies that

∫_α^β ( ∂e/∂t (r, t) + (1/r)(∂/∂r)( r · qr(r, t) ) − Q(r, t) ) r dr = 0.

This being true for all α, β satisfying 0 < α < β < R implies the PDE

∂e/∂t (r, t) = −(1/r)(∂/∂r)( r · qr(r, t) ) + Q(r, t),

the two-space-dimensional version of the heat balance equation in polar coordinates.

10.1.3.3. For an arbitrary control volume V we derived (10.3), that is,

∭_V (∂e/∂t) dV = −∯_S q(r, t) • n̂ dS + ∭_V Q(r, t) dV.

It is natural to rewrite an arbitrary cylindrical slice V = {(x, y, z) : a < x < b, 0 ≤ y² + z² ≤ R²} in alternative cylindrical coordinates as V = {(x, r, ϑ) : a < x < b, 0 ≤ r < R, −π < ϑ ≤ π}. The surface S that bounds V consists of three parts, similarly to Example 7.36 in Section 7.5: (1) S₋, the disk x = a, 0 ≤ y² + z² ≤ R², on which n̂ = −ı̂; (2) S₊, the disk x = b, 0 ≤ y² + z² ≤ R², on which n̂ = ı̂; and (3) S̃, the lateral surface a < x < b, r = R, −π < ϑ ≤ π, on which n̂ = êr = cos ϑ ȷ̂ + sin ϑ k̂. In these cylindrical coordinates, q = qx ı̂ + qr êr + qϑ êϑ. So,

−∯_S q • n̂ dS = −∬_{S₋} q • n̂ dS − ∬_{S₊} q • n̂ dS − ∬_{S̃} q • n̂ dS

= −∫_{−π}^{π}∫₀^R q(a, r, ϑ, t) • (−ı̂) r dr dϑ − ∫_{−π}^{π}∫₀^R q(b, r, ϑ, t) • ı̂ r dr dϑ − ∫_a^b∫_{−π}^{π} q(x, R, ϑ, t) • êr R dϑ dx.

So (10.3) is

∫_a^b∫_{−π}^{π}∫₀^R (∂e/∂t)(x, r, ϑ, t) r dr dϑ dx
= ∫_{−π}^{π}∫₀^R qx(a, r, ϑ, t) r dr dϑ − ∫_{−π}^{π}∫₀^R qx(b, r, ϑ, t) r dr dϑ − R ∫_a^b∫_{−π}^{π} qr(x, R, ϑ, t) dϑ dx + ∫_a^b∫_{−π}^{π}∫₀^R Q(x, r, ϑ, t) r dr dϑ dx.

But, unlike Example 10.2 in Section 10.1, here we are assuming that the lateral side of the rod is not insulated but instead is losing heat according to Newton's Law of Cooling. This implies that on the lateral surface of the rod,

qr(x, R, ϑ, t) = k(x)( u(x, R, ϑ, t) − u₀(x, ϑ, t) ),

where u(x, R, ϑ, t) is the temperature on the surface of the rod, u₀(x, ϑ, t) is the temperature of the medium surrounding the rod, and k(x) > 0. So, (10.3) becomes

∫_a^b∫_{−π}^{π}∫₀^R (∂e/∂t) r dr dϑ dx
= ∫_{−π}^{π}∫₀^R qx(a, r, ϑ, t) r dr dϑ − ∫_{−π}^{π}∫₀^R qx(b, r, ϑ, t) r dr dϑ − R ∫_a^b∫_{−π}^{π} k(x)( u(x, R, ϑ, t) − u₀(x, ϑ, t) ) dϑ dx + ∫_a^b∫_{−π}^{π}∫₀^R Q r dr dϑ dx.

Assume, as in Example 10.2 in Section 10.1, that e, q, and Q do not depend on r or ϑ. In addition, assume that the medium's temperature u₀ does not depend on ϑ. Then we can integrate to get

πR² ∫_a^b (∂e/∂t)(x, t) dx = −πR²( qx(b, t) − qx(a, t) ) − 2πR ∫_a^b k(x)( u(x, t) − u₀(x, t) ) dx + πR² ∫_a^b Q(x, t) dx.

The first two terms on the right-hand side can be rewritten using

qx(b, t) − qx(a, t) = ∫_a^b (∂qx/∂x)(x, t) dx.

After dividing through by the constant πR² and moving terms to the left-hand side, we get

∫_a^b ( ∂e/∂t (x, t) + ∂qx/∂x (x, t) + (2/R) k(x)( u(x, t) − u₀(x, t) ) − Q(x, t) ) dx = 0.

This being true for all a, b satisfying 0 < a < b < L implies the PDE

∂e/∂t (x, t) = −∂qx/∂x (x, t) − (2/R) k(x)( u(x, t) − u₀(x, t) ) + Q(x, t),

the one-space-dimensional version of a heat balance equation modified to include heat loss through the lateral side of the rod by Newton's Law of Cooling.

10.1.3.4. By Stokes's Theorem,

(⋆) ∮_C H • dr = ∬_S (∇ × H) • dS.

Because S is fixed,

(d/dt) ∬_S D • dS = ∬_S (∂D/∂t) • dS.

Putting the latter together with (10.14) and (⋆) gives one of Maxwell's integral equations of electromagnetism:

(⋆⋆) ∬_S J • dS + ∬_S (∂D/∂t) • dS = ∬_S (∇ × H) • dS.

To express the PDE version of the physical law in (⋆⋆), rewrite it as

∬_S ( J + ∂D/∂t − ∇ × H ) • dS = 0


and then reason that S is an arbitrary oriented, piecewise smooth, parametrized surface, hence

J + ∂D/∂t − ∇ × H = 0,

that is,

∂D/∂t = −J + ∇ × H,

which is (10.15), as was desired.


Section 10.2.4

10.2.4.1. Multiplication of the ODE through by r gives

(d/dr)( r du/dr ) = −r,

and then indefinite integration gives

r du/dr = −r²/2 + c,

where c is an arbitrary constant. Division through by r gives du/dr = −r/2 + c/r. Indefinite integration gives

u = −r²/4 + c ln r + c₂,

and then use the boundary conditions to get T₀ = u(a) = −a²/4 + c ln a + c₂ and 0 = u′(b) = −b/2 + c/b. The second equation implies c = b²/2. Substitute that into the first equation to get (b²/2) ln a + c₂ = T₀ + a²/4, so c₂ = T₀ + a²/4 − (b²/2) ln a. The solution of the ODE-BVP is u(r) = −r²/4 + (b²/2) ln r + T₀ + a²/4 − (b²/2) ln a, that is,

u(r) = −(1/4)( r² − a² ) + (b²/2) ln(r/a) + T₀,  a < r < b,

after noting that r > 0.

10.2.4.2. ∂u/∂t ≡ 0 implies that the equilibrium temperature u = u(x) satisfies the ODE 0 = u′′ + x and the BCs u′(0) − 2u(0) = 0 and u(L) = T₁. We can solve the ODE u′′ = −x by indefinite integration to get u′ = −x²/2 + c₁ and then u = −x³/6 + c₁x + c₂, where c₁, c₂ are arbitrary constants. The two BCs require 0 = u′(0) − 2u(0) = c₁ − 2c₂ and T₁ = u(L) = −L³/6 + c₁L + c₂.


Substitute c₁ = 2c₂ into the second BC to get T₁ = u(L) = −L³/6 + 2c₂L + c₂, so

c₂ = ( T₁ + L³/6 ) / ( 1 + 2L ).

The equilibrium temperature is

u(x) = −x³/6 + ( T₁ + L³/6 )(1 + 2x)/(1 + 2L).

10.2.4.3. ∂u/∂t ≡ 0 implies that the equilibrium temperature u = u(x) satisfies the ODE-BVP

0 = u′′(x) − η( u − T̄ ), 0 < x < L;  u(0) = T₀, u(L) = T₁.

Note that η was assumed to be a positive constant. The ODE can be rewritten as (⋆) u′′(x) − ηu = −ηT̄. The method of undetermined coefficients easily finds a particular solution u_p = T̄. The corresponding homogeneous ODE is u′′(x) − ηu = 0, whose solutions are u_h = c₁ cosh(√η x) + c₂ sinh(√η x), so the general solution of ODE (⋆) is

u = u_p + u_h = T̄ + c₁ cosh(√η x) + c₂ sinh(√η x),

where c₁ and c₂ are arbitrary constants. Plug the general solution into the two BCs to get T₀ = u(0) = T̄ + c₁ and T₁ = u(L) = T̄ + c₁ cosh(√η L) + c₂ sinh(√η L). The first equation implies c₁ = T₀ − T̄. Substitute that into the second equation to get T₁ = T̄ + (T₀ − T̄) cosh(√η L) + c₂ sinh(√η L). Because η > 0 implies sinh(√η L) > 0, we can solve for c₂:

c₂ = ( T₁ − T̄ − (T₀ − T̄) cosh(√η L) ) / sinh(√η L).

The equilibrium temperature distribution, that is, the solution of the ODE-BVP, is

u(x) = T̄ + (T₀ − T̄) cosh(√η x) + ( ( T₁ − T̄ − (T₀ − T̄) cosh(√η L) ) / sinh(√η L) ) sinh(√η x),

for 0 < x < L.
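This closed-form solution can be verified numerically. The sketch below (my own check; all parameter values are illustrative, not from the manual) confirms both BCs directly and checks the ODE residual u′′ − η(u − T̄) by centered differences at a few interior points:

```python
import math

eta, L, T0, T1, Tbar = 2.0, 1.5, 10.0, 30.0, 20.0   # illustrative values (not from the manual)
s = math.sqrt(eta)
c2 = (T1 - Tbar - (T0 - Tbar) * math.cosh(s * L)) / math.sinh(s * L)

def u(x):
    """Equilibrium temperature from the formula above."""
    return Tbar + (T0 - Tbar) * math.cosh(s * x) + c2 * math.sinh(s * x)

def residual(x, h=1e-4):
    """Centered-difference check of u'' - eta*(u - Tbar) = 0."""
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
    return upp - eta * (u(x) - Tbar)

res = max(abs(residual(x)) for x in (0.2, 0.7, 1.2))
bc0, bcL = u(0.0) - T0, u(L) - T1
```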

10.2.4.4. The equilibrium temperature u satisfies



2T0 = u(L) = c1 + c2 (0.3)

 c Larry

Turyn, January 8, 2014

page 8

so c1 = T0 , hence c2 = (0.3)−1 (2T0 − c1 ) = (0.3)−1 (2T0 − T0 ) = (0.3)−1 T0 . The equilibrium temperature distribution is  x  . u(x) = T0 · 1 + 0.3 10.2.4.5.

∂u ≡ 0 implies that the equilibrium temperature u = u(x) satisfies the ODE-BVP ∂t  00   u (x) + 4u = 0, 0 < x < L,  .   u(0) = T0 6= 0, u0 (L) = 0

(Insulated at the right end implies u0 (L) = 0.) The general solution of the ODE is u = c1 cos 2x + c2 sin 2x, where c1 and c2 are arbitrary constants. Plug the general solution into the two BCs to get    T0 = u(0) = c1  .   0 = u0 (L) = 2c2 cos 2L So, c1 = T0 , but there are two cases concerning c2 : (a) If cos 2L = 0 then c2 is arbitrary; if cos 2L 6= 0 then c2 = 0.  (a) There is exactly one equilibrium solution when L 6= 21 · n − 21 π for all integers n.  (b) There are infinitely many equilibrium solutions when L = 21 · n − 21 π for any integer n. (c) There is no no value of L for which there is no equilibrium solution. 10.2.4.6. The general solution of the ODE u00 + π 2 u = 0, for 0 < x < 1, is u = c1 cos(πx) + c2 sin(πx), where c1 , c2 are arbitrary constants. It follows that u0 (x) = −πc1 sin(πx) + πc2 cos(πx), Plug this into the BCs to get    A = u0 (0) = πc2  .   B = u0 (1) = −πc1 sin π + πc2 cos π = −πc2 This requires that c2 = A/π and c2 = −B/π. So, there is no equilibrium solution unless B = −A, in which case the equilibrium temperature distribution is u(x) = c1 cos(πx) +

A sin(πx), π

where c1 is an arbitrary constant.  πx  ∂u 10.2.4.7. The source is Q(x) = cos and ≡ 0, implying that the equilibrium temperature u = u(x) L ∂t satisfies the ODE-BVP  00   = 0, 0 < x < L,   κ u (x) + cos πx L .   u0 (0) = 0, u(L) = T1 (Insulated at the left end implies u0 (0) = 0.) Note that the thermal conductivity, κ, is assumed to be constant. Integrate the ODE  πx  1 u00 = − cos κ L c Larry


once to get

u′ = −( L/(πκ) ) sin(πx/L) + c₁,

where c₁ is an arbitrary constant. Plug this into the first BC to get 0 = u′(0) = −( L/(πκ) )·0 + c₁, so c₁ = 0 and u′ = −( L/(πκ) ) sin(πx/L). Integrate a second time to get

u = ( L²/(π²κ) ) cos(πx/L) + c₂,

where c₂ is an arbitrary constant. Plug this into the second BC to get

T₁ = u(L) = ( L²/(π²κ) ) cos π + c₂ = −L²/(π²κ) + c₂,

which we can solve for c₂. The equilibrium temperature distribution is

u(x) = T₁ + ( L²/(π²κ) )( 1 + cos(πx/L) ).
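The formula just derived can be verified numerically. The sketch below (my own check; κ, L, T₁ are illustrative values, not from the manual) confirms the Dirichlet BC exactly, the insulated-end BC, and the ODE residual κu′′ + cos(πx/L) by centered differences:

```python
import math

kappa, L, T1 = 3.0, 2.0, 5.0   # illustrative values (not from the manual)

def u(x):
    """Equilibrium temperature u(x) = T1 + (L^2/(pi^2 kappa)) * (1 + cos(pi x / L))."""
    return T1 + (L**2 / (math.pi**2 * kappa)) * (1 + math.cos(math.pi * x / L))

def ode_residual(x, h=1e-4):
    """Centered-difference check of kappa*u'' + cos(pi x / L) = 0."""
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
    return kappa * upp + math.cos(math.pi * x / L)

def up0(h=1e-6):
    """Centered-difference u'(0); should vanish (insulated left end)."""
    return (u(h) - u(-h)) / (2 * h)

res = max(abs(ode_residual(x)) for x in (0.3, 1.0, 1.7))
```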

10.2.4.8. The source is Q(x) = g′′(x) and ∂u/∂t ≡ 0, implying that the equilibrium temperature u = u(x) satisfies the ODE-BVP

κ u′′(x) + g′′(x) = 0, 0 < x < L;  u(0) = T₀, u(L) = T₁.

Note that the thermal conductivity, κ, is assumed to be constant. Integrate the ODE u′′ = −(1/κ)g′′(x) once to get u′ = −(1/κ)g′(x) + c₁ and then u = −(1/κ)g(x) + c₁x + c₂, where c₁, c₂ are arbitrary constants. Plug this into the two BCs to get T₀ = u(0) = −(1/κ)g(0) + c₂ and T₁ = u(L) = −(1/κ)g(L) + c₁L + c₂, hence c₂ = T₀ + (1/κ)g(0), hence

c₁ = L⁻¹( T₁ − T₀ + (1/κ)g(L) − (1/κ)g(0) ).

The equilibrium temperature distribution is

u(x) = −(1/κ)g(x) + ( T₁ − T₀ + ( g(L) − g(0) )/κ ) x/L + T₀ + (1/κ)g(0).

10.2.4.9. Because ∇²u is on the right-hand side of the PDE, the material is homogeneous. Because the PDE has the form

∂u/∂t = ∇²u − Q₀

(in spherical coordinates), the homogeneous material has thermal conductivity, specific heat, and mass density all equal to one, in appropriate units.


In spherical coordinates, V = {(ρ, φ, θ) : 0 ≤ θ ≤ 2π, π/2 ≤ φ ≤ π, 0 ≤ ρ < a} is the lower half of the ball of radius a whose center is at the origin. The absence of θ from the PDE says that the problem is circularly symmetric around the z-axis. The term −Q₀ models that the material is absorbing heat at a rate Q₀. The boundary condition (∂u/∂ρ)(a, φ, t) = 0 implies that the material is insulated on the spherical surface ρ = a for z ≤ 0. The BC (1/ρ)(∂u/∂φ)(ρ, π/2, t) = −( u(ρ, π/2, t) − 20 ) models that on the z = 0 disk surface the solid is losing heat at a rate equal to the difference between its temperature and the medium's constant temperature of 20°, all in appropriate units. The IC u(ρ, φ, 0) = 40 implies that the initial temperature of the material is uniformly 40°.

10.2.4.10. In cylindrical coordinates, we may assume that the solid inside the inner cylinder is V₁ = {(r, θ, z) : 0 < z < L, 0 ≤ θ ≤ 2π, 0 ≤ r < a}, whose boundary is the surface r = a, and the solid between the two cylinders is V₂ = {(r, θ, z) : 0 < z < L, 0 ≤ θ ≤ 2π, a < r < b}, whose boundary consists of the two surfaces r = a and r = b. We will assume that the temperature satisfies the heat equation for a homogeneous material inside the inner cylinder and inside the solid between the two cylinders. Because no heat source or sink is mentioned, we will assume there is none. Let the temperature inside V₁ be u₁ = u₁(r, θ, z, t) and let the temperature inside V₂ be u₂ = u₂(r, θ, z, t). So, we have

(1) ∂u₁/∂t = α₁∇²u₁ = α₁( (1/r)(∂/∂r)( r ∂u₁/∂r ) + (1/r²) ∂²u₁/∂θ² + ∂²u₁/∂z² ), for 0 < z < L, 0 ≤ θ ≤ 2π, 0 ≤ r < a, 0 < t,

and

(2) ∂u₂/∂t = α₂( (1/r)(∂/∂r)( r ∂u₂/∂r ) + (1/r²) ∂²u₂/∂θ² + ∂²u₂/∂z² ), for 0 < z < L, 0 ≤ θ ≤ 2π, a < r < b, 0 < t,

where α1 and α2 are positive constants. The easiest BCs to model are at the circular ends. The z = 0 end of the rod is kept at a constant temperature T0, so

(3)  u1(r, θ, 0, t) = T0,  for 0 ≤ θ ≤ 2π, 0 ≤ r < a, 0 < t,

and

(4)  u2(r, θ, 0, t) = T0,  for 0 ≤ θ ≤ 2π, a < r < b, 0 < t.

The end z = L is insulated, so

(5)  ∂u1/∂z (r, θ, L, t) = 0,  for 0 ≤ θ ≤ 2π, 0 ≤ r < a, 0 < t,

and

(6)  ∂u2/∂z (r, θ, L, t) = 0,  for 0 ≤ θ ≤ 2π, a < r < b, 0 < t.

The BC at the inner wall, r = a, that the outer, shaded region dissipates heat from V1 to V2 according to Newton's Law of Cooling, is modeled by (10.34) in Section 10.2:

(7)  u1(a, θ, z, t) + h(θ, z, t) ∂u1/∂r (a, θ, z, t) = u2(a, θ, z, t),  for 0 ≤ θ ≤ 2π, 0 < z < L, 0 < t.

The BC at the outer wall, r = b, that the outer, shaded region dissipates heat from V2 to the surrounding medium according to Newton's Law of Cooling, is modeled by (10.34) in Section 10.2:

(8)  u2(b, θ, z, t) + h(θ, z, t) ∂u2/∂r (b, θ, z, t) = g(θ, z, t),  for 0 ≤ θ ≤ 2π, 0 < z < L, 0 < t.

Finally, we have the usual finiteness BC,

(9)  | u1(0⁺, θ, z, t) | < ∞,  for 0 ≤ θ ≤ 2π, 0 < z < L, 0 < t.

The mathematical problem consists of equations (1) through (9).

10.2.4.11. (a) At x = 0, n̂ = −î. So, the heat flux out of the left end at x = 0 is

q(0, t) • n̂ = −î • ( −κ(0) ∇u(0, t) ) = κ(0) ∂u/∂x (0, t).

Physically, the heat conductance κ(0) > 0, so the heat flux out of the left end is positive if, and only if, ∂u/∂x (0, t) > 0.

(b) At x = L, n̂ = î. So, the heat flux out of the right end at x = L is

q(L, t) • n̂ = î • ( −κ(L) ∇u(L, t) ) = −κ(L) ∂u/∂x (L, t).

Physically, the heat conductance κ(L) > 0, so the heat flux out of the right end is positive if, and only if, ∂u/∂x (L, t) < 0.

10.2.4.12. ∂u/∂t ≡ 0 because we are solving for the equilibrium temperature distribution. There is no source term and the problem is circularly symmetric, so u = u(r) satisfies the ODE-BVP

κ (1/r) d/dr ( r du/dr ) = 0,  a < r < b,    u(a) = T0,  u(b) = 0.

The ODE can be rewritten as

d/dr ( r du/dr ) = 0,

whose solutions satisfy

r du/dr = c1,  that is,  u′(r) = c1/r,

where c1 is an arbitrary constant. Integrate a second time to get

u = ∫ (c1/r) dr = c1 ln r + c2,

where c2 is an arbitrary constant. The BCs require

T0 = u(a) = c1 ln a + c2  and  0 = u(b) = c1 ln b + c2,

so

[ c1 ; c2 ] = [ ln a  1 ; ln b  1 ]⁻¹ [ T0 ; 0 ] = ( −1/(ln b − ln a) ) [ 1  −1 ; −ln b  ln a ] [ T0 ; 0 ] = ( −1/(ln b − ln a) ) [ T0 ; −T0 ln b ].

The solution of the ODE-BVP is

u = −( T0/ln(b/a) ) ( ln r − ln b ) = ( ln(b/r)/ln(b/a) ) · T0.

The total rate of heat flow out of the annulus through the inner circle, r = a, is

Net Flux = ∮_C q • n̂ ds = ∫₀^{2π} ( −κ ∂u/∂n )|_{r=a} · (a dθ).

On r = a, the outward unit normal vector is n̂ = −êr, so

∂u/∂n |_{r=a} = −êr • ∇u |_{r=a} = −êr • ( ∂u/∂r êr + (1/r) ∂u/∂θ êθ )|_{r=a} = −∂u/∂r (a),

so the

Net Flux = ∫₀^{2π} ( −κ ∂u/∂n )|_{r=a} · a dθ = ∫₀^{2π} κ ∂u/∂r (a) · a dθ = 2πa κ u′(a),

because ∂u/∂r |_{r=a} does not depend on θ. But

u′(a) = c1/a = −T0 / ( a ln(b/a) ),

so the

Net Flux = 2π a κ u′(a) = 2π a κ · ( −T0 / ( a ln(b/a) ) ) = −2π κ T0 / ln(b/a).
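As a quick numerical sanity check (my own sketch, not part of the manual; the values a = 1, b = 3, T0 = 50 are made up), the closed form u(r) = T0 ln(b/r)/ln(b/a) can be tested against the BCs and the first integral r u′(r) = c1 of the ODE:

```python
import math

def u(r, a, b, T0):
    # Equilibrium temperature in the annulus a < r < b with u(a) = T0, u(b) = 0.
    return T0 * math.log(b / r) / math.log(b / a)

a, b, T0 = 1.0, 3.0, 50.0
assert abs(u(a, a, b, T0) - T0) < 1e-9      # u(a) = T0
assert abs(u(b, a, b, T0)) < 1e-9           # u(b) = 0

# r*u'(r) should be the constant c1 = -T0/ln(b/a); check by central differences.
h = 1e-6
c1 = -T0 / math.log(b / a)
for r in (1.2, 1.7, 2.4):
    du = (u(r + h, a, b, T0) - u(r - h, a, b, T0)) / (2 * h)
    assert abs(r * du - c1) < 1e-4
```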

10.2.4.13. ∂u/∂t ≡ 0 because we are solving for the equilibrium temperature distribution. There is no source term and the problem is circularly symmetric, so u = u(r) satisfies the ODE-BVP

κ (1/r) d/dr ( r du/dr ) = 0,  a < r < b,    u(a) = 0,  u′(b) = u′_b,

for some constant u′_b. We are given that the total rate of heat flow out of the annulus through the circle r = b is

β = ∮_C q • n̂ ds = ∫₀^{2π} ( −κ ∂u/∂n )|_{r=b} · (b dθ).

On r = b, the outward unit normal vector is êr, so

∂u/∂n |_{r=b} = êr • ∇u |_{r=b} = êr • ( ∂u/∂r êr + (1/r) ∂u/∂θ êθ )|_{r=b} = ∂u/∂r (b),

so

β = ∫₀^{2π} ( −κ ∂u/∂n )|_{r=b} b dθ = ∫₀^{2π} −κ ∂u/∂r (b) · b dθ = −2π b κ u′_b,

because ∂u/∂r |_{r=b} does not depend on θ. Solving for u′_b gives

u′_b = −β / (2π b κ).

The ODE can be rewritten as

d/dr ( r du/dr ) = 0,

whose solutions satisfy

r du/dr = c1,  that is,  u′(r) = c1/r,

where c1 is an arbitrary constant. Integrate a second time to get

u = ∫ (c1/r) dr = c1 ln r + c2,

where c2 is an arbitrary constant. Plug u′(r) into the condition on u′_b to get

−β/(2π b κ) = u′_b = c1/b,

which implies c1 = −β/(2π κ), so

u = −( β/(2π κ) ) ln r + c2.

Plug this into the first BC to get

0 = u(a) = −( β/(2π κ) ) ln a + c2.

So, the solution of the ODE-BVP is

u = −( β/(2π κ) ) ln r + ( β/(2π κ) ) ln a = ( β/(2π κ) ) ln(a/r).

So,

T = u(b) = ( β/(2πκ) ) ln(a/b).
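A numerical spot-check (my own illustration; the values a = 1, b = 2, β = 10, κ = 3 are arbitrary) of the solution u(r) = (β/2πκ) ln(a/r) against its two defining conditions:

```python
import math

a, b, beta, kappa = 1.0, 2.0, 10.0, 3.0

def u(r):
    # Equilibrium temperature with u(a) = 0 and prescribed total outflow beta at r = b.
    return beta / (2 * math.pi * kappa) * math.log(a / r)

assert abs(u(a)) < 1e-12                       # u(a) = 0

# u'(b) should equal -beta/(2*pi*b*kappa), so that -2*pi*b*kappa*u'(b) = beta.
h = 1e-6
dub = (u(b + h) - u(b - h)) / (2 * h)
assert abs(dub - (-beta / (2 * math.pi * b * kappa))) < 1e-6

T = u(b)
assert abs(T - beta / (2 * math.pi * kappa) * math.log(a / b)) < 1e-12
```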

10.2.4.14. Similar to work in Example 10.5 in Section 10.2, the equilibrium temperature is u = u(ρ). It does not depend on φ, θ because the material is homogeneous and both the heat source and the boundary conditions |u(0⁺)| < ∞ and u(a) = 20 do not depend on φ, θ, or t. The heat source is Q(ρ) ≡ 1000. The steady-state temperature satisfies the PDE 0 = κ∇²u + Q. Using the Laplacian in spherical coordinates given at the end of Section 6.7, the ODE-BVP we need to solve is

(∗)  0 = κ (1/ρ²) d/dρ ( ρ² du/dρ ) + 1000,    |u(0⁺)| < ∞,  u(a) = 20.

Rewrite the ODE in (∗) as

d/dρ ( ρ² du/dρ ) = −(1000/κ) ρ²,

and then integrate once with respect to ρ to get

ρ² du/dρ = −( 1000/(3κ) ) ρ³ + c1,

or, equivalently,

u′(ρ) = −( 1000/(3κ) ) ρ + c1 ρ⁻²,

where c1 is an arbitrary constant. Integrating a second time gives

u(ρ) = −( 500/(3κ) ) ρ² − c1 ρ⁻¹ + c2.

The first BC, |u(0⁺)| < ∞, requires that c1 = 0, hence

u(ρ) = −( 500/(3κ) ) ρ² + c2.

The second BC requires

20 = u(a) = −( 500/(3κ) ) a² + c2,

hence c2 = 20 + ( 500/(3κ) ) a². The equilibrium temperature distribution is

u(ρ) = −( 500/(3κ) ) ρ² + 20 + ( 500/(3κ) ) a².
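The closed form can be checked numerically (my own sketch; κ = 1.5 and a = 2 are made-up test values) by confirming the BC and the ODE residual κ(1/ρ²)(ρ²u′)′ + 1000 ≈ 0 via finite differences:

```python
kappa, a = 1.5, 2.0

def u(rho):
    # Equilibrium temperature in the ball with source Q = 1000 and u(a) = 20.
    return -500.0 / (3 * kappa) * rho**2 + 20.0 + 500.0 / (3 * kappa) * a**2

assert abs(u(a) - 20.0) < 1e-9

# Residual of  kappa*(1/rho^2)*d/drho(rho^2 * u') + 1000  by central differences.
h = 1e-4
for rho in (0.5, 1.0, 1.7):
    flux = lambda r: r**2 * (u(r + h) - u(r - h)) / (2 * h)   # rho^2 * u'(rho)
    lap = (flux(rho + h) - flux(rho - h)) / (2 * h) / rho**2
    assert abs(kappa * lap + 1000.0) < 1e-2
```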

10.2.4.15. Perfect thermal contact requires continuity of temperature, hence

lim_{x→0⁻} u(x, t) = lim_{y→0⁺} v(y, t) = lim_{x→0⁺} w(x, t),

and zero net heat flux at the origin. The latter is, using Fourier's Law of heat conduction,

lim_{x→0⁻} ( −κ1 ∂u/∂x (x, t) ) + lim_{y→0⁺} ( κ2 ∂v/∂y (y, t) ) + lim_{x→0⁺} ( κ3 ∂w/∂x (x, t) ) = 0.

So, the BCs are

u(0⁻, t) = v(0⁺, t) = w(0⁺, t)

and

−κ1 ∂u/∂x (0⁻, t) + κ2 ∂v/∂y (0⁺, t) + κ3 ∂w/∂x (0⁺, t) = 0.

The latter can be written as

κ1 ∂u/∂x (0⁻, t) = κ2 ∂v/∂y (0⁺, t) + κ3 ∂w/∂x (0⁺, t),

which is analogous to Kirchhoff's law on continuity of electrical current at a node in a circuit.

10.2.4.16. Continuity of temperature at the interface point requires

u(−L⁻, t) = v(−π⁺, t) = v(π⁻, t).

At the point of intersection, there are three fluxes. The simplest is the flux leaving the straight line segment of wire, −κ1 ∂u/∂x (−L⁻, t). The other two fluxes come from the circular piece of wire: −κ2 (1/L) ∂v/∂θ (−π⁺, t) is a heat flux entering the circular piece of wire at (x, y) = (−L, 0), and −κ2 (1/L) ∂v/∂θ (π⁻, t) is a heat flux leaving the circular piece of wire at (x, y) = (−L, 0). Balance of heat flux requires

−κ1 ∂u/∂x (−L⁻, t) − κ2 (1/L) ∂v/∂θ (−π⁺, t) = −κ2 (1/L) ∂v/∂θ (π⁻, t).

10.2.4.17. The interface is at θ = π/4. Continuity of temperature at the interface implies

u1( r, (π/4)⁺, t ) = u2( r, (π/4)⁻, t ).

Along a radial line segment, the normal derivative is

∂u/∂n = n̂ • ∇u = êθ • ( ∂u/∂r êr + (1/r) ∂u/∂θ êθ ) = (1/r) ∂u/∂θ.

Continuity of the total flux out of sector D1 and into sector D2 implies that, by integrating on the radial segment θ = π/4 from r = 0 to r = a,

∫₀^a (κ2/r) · ∂u2/∂θ ( r, (π/4)⁻ ) dr = ∫₀^a (κ1/r) · ∂u1/∂θ ( r, (π/4)⁺ ) dr.

10.2.4.18. E = cϱA ∫₀^L u(x, t) dx implies

Ė(t) = d/dt [ cϱA ∫₀^L u(x, t) dx ],

where c, ϱ, A are constants because the material is assumed to be homogeneous and, as usual, we assume L is a constant. So,

Ė(t) = cϱA ∫₀^L ∂u/∂t (x, t) dx = cϱA ∫₀^L α ∂²u/∂x² (x, t) dx = αcϱA ∫₀^L ∂²u/∂x² (x, t) dx

= κA [ ∂u/∂x (x, t) ]₀^L = κA ( ∂u/∂x (L, t) − ∂u/∂x (0, t) ) = κA (0 − 0) = 0.

It follows that E(t) ≡ c, where c is a constant.
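The conclusion Ė(t) = κA( ∂u/∂x(L,t) − ∂u/∂x(0,t) ) = 0 can be illustrated numerically (my own sketch, not the manual's; all grid parameters are made up) with a conservative finite-volume discretization of the heat equation whose boundary fluxes are set to zero, so the discrete total energy is exactly conserved:

```python
import math

# u_t = alpha*u_xx with zero-flux (insulated) ends; conserve integral of u.
N, L, alpha = 50, 1.0, 1.0
dx = L / N
dt = 0.4 * dx**2 / alpha                      # stable explicit time step
u = [math.sin(3 * (i + 0.5) * dx) + 2.0 for i in range(N)]   # arbitrary initial data
E0 = sum(u) * dx                              # total energy up to the factor c*rho*A

for _ in range(2000):
    flux = [0.0] * (N + 1)                    # flux[0] = flux[N] = 0: insulated ends
    for i in range(1, N):
        flux[i] = -alpha * (u[i] - u[i - 1]) / dx
    u = [u[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(N)]

assert abs(sum(u) * dx - E0) < 1e-10          # E(t) is constant
```

The flux form makes the conservation statement exact at the discrete level: interior fluxes telescope, and only the (zero) boundary fluxes can change the total.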

10.2.4.19. E = cϱA ∫₀^L u(x, t) dx implies

Ė(t) = d/dt [ cϱA ∫₀^L u(x, t) dx ],

where c, ϱ, A are constants because the material is assumed to be homogeneous and, as usual, we assume L is a constant. So,

Ė(t) = cϱA ∫₀^L ∂u/∂t (x, t) dx = cϱA ∫₀^L ( α ∂²u/∂x² (x, t) + (1/(cϱ)) Q(x) ) dx = αcϱA ∫₀^L ∂²u/∂x² (x, t) dx + A ∫₀^L Q0 dx

= κA [ ∂u/∂x (x, t) ]₀^L + A Q0 [ x ]₀^L = κA ( ∂u/∂x (L, t) − ∂u/∂x (0, t) ) + ALQ0 = κA (0 − 0) + ALQ0 = ALQ0.

Indefinite integration of the ODE, Ė(t) = ALQ0, gives

E(t) = ALQ0 t + c,

where c is an arbitrary constant. Using the IC E(0) = E0 we see that

E(t) = ALQ0 t + E0,

which fits the form E(t) = E(0) + βt if and only if

Q0 = β/(AL).

10.2.4.20. E = cϱA ∫₀^L u(x, t) dx implies

Ė(t) = d/dt [ cϱA ∫₀^L u(x, t) dx ],

where c, ϱ, A are constants because the material is assumed to be homogeneous and, as usual, we assume L is a constant. So,

Ė(t) = cϱA ∫₀^L ∂u/∂t (x, t) dx = cϱA ∫₀^L ( α ∂²u/∂x² (x, t) + (1/(cϱ)) Q(x) ) dx = αcϱA ∫₀^L ∂²u/∂x² (x, t) dx + A ∫₀^L Q(x) dx

= κA [ ∂u/∂x (x, t) ]₀^L + ALQ̄ = κA ( ∂u/∂x (L, t) − ∂u/∂x (0, t) ) + ALQ̄ = κA (0 − 0) + ALQ̄ = ALQ̄,

after recalling the definition that Q̄ ≜ (1/L) ∫₀^L Q(x) dx.

Indefinite integration of the ODE, Ė(t) = ALQ̄, gives

E(t) = ALQ̄ t + c,

where c is an arbitrary constant. Using the IC E(0) = E0 we see that

E(t) = ALQ̄ t + E0,

which fits the form E(t) = E(0) + βt if and only if

Q̄ = β/(AL).

10.2.4.21. Because U satisfies

∂U/∂t = ∂²U/∂x² − U  and  T(x, t) = e⁻ˣ U(x, t),

∂T/∂t = e⁻ˣ · ∂U/∂t = −e⁻ˣ U + e⁻ˣ ∂²U/∂x².

Also, the product rule gives

∂T/∂x = −e⁻ˣ U(x, t) + e⁻ˣ ∂U/∂x

and

∂²T/∂x² = ∂/∂x [ −e⁻ˣ U(x, t) + e⁻ˣ ∂U/∂x ] = e⁻ˣ U(x, t) − e⁻ˣ ∂U/∂x − e⁻ˣ ∂U/∂x + e⁻ˣ ∂²U/∂x²

= e⁻ˣ U(x, t) − 2e⁻ˣ ∂U/∂x + e⁻ˣ ∂²U/∂x² = ( −e⁻ˣ U + e⁻ˣ ∂²U/∂x² ) − 2 ( −e⁻ˣ U + e⁻ˣ ∂U/∂x )

= ∂T/∂t − 2 ∂T/∂x.

Also, T(0, t) = e⁻⁰ U(0, t) = 1 · 0 = 0, T(L, t) = e⁻ᴸ U(L, t) = e⁻ᴸ · 0 = 0, and

T(x, 0) = e⁻ˣ U(x, 0) = e⁻ˣ ( eˣ f(x) ) = f(x).

(b) We will not be able to do this part until Section 11.1.
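The change of variables in part (a) can be spot-checked numerically. As a sketch (my own check, not part of the manual), take the hypothetical particular solution U(x, t) = e^{−(1+π²)t} sin(πx) of U_t = U_xx − U on the assumed interval [0, 1], and verify by finite differences that T = e⁻ˣU satisfies T_xx = T_t − 2T_x:

```python
import math

def U(x, t):
    # U_t = U_xx - U:  U_xx = -pi^2*U, so U_xx - U = -(1+pi^2)*U = U_t.  (example solution)
    return math.exp(-(1 + math.pi**2) * t) * math.sin(math.pi * x)

def T(x, t):
    return math.exp(-x) * U(x, t)

h = 1e-4
x, t = 0.3, 0.2
Txx = (T(x + h, t) - 2 * T(x, t) + T(x - h, t)) / h**2
Tt = (T(x, t + h) - T(x, t - h)) / (2 * h)
Tx = (T(x + h, t) - T(x - h, t)) / (2 * h)
assert abs(Txx - (Tt - 2 * Tx)) < 1e-5
```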

10.2.4.22. Define G(t) ≜ ∫₀^L ½ ( u(x, t) )² dx. We calculate that

Ġ(t) = d/dt [ ∫₀^L ½ ( u(x, t) )² dx ] = ∫₀^L u(x, t) ∂u/∂t (x, t) dx = ∫₀^L u(x, t) · α ∂²u/∂x² dx,

where α is assumed to be a constant. Integration by parts gives that

Ġ(t) = α [ u(x, t) ∂u/∂x (x, t) ]₀^L − α ∫₀^L ( ∂u/∂x )² dx = αu(L, t) ∂u/∂x (L, t) − αu(0, t) ∂u/∂x (0, t) − α ∫₀^L ( ∂u/∂x )² dx,

so use of the BCs ∂u/∂x (0, t) = u(L, t) = 0 implies that

Ġ(t) = α · 0 · ∂u/∂x (L, t) − αu(0, t) · 0 − α ∫₀^L ( ∂u/∂x )² dx = −α ∫₀^L ( ∂u/∂x )² dx ≤ 0.
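The monotonicity Ġ(t) ≤ 0 can be illustrated numerically (my own sketch; the grid, time step, and initial data are all made up) with an explicit finite-difference scheme for u_t = αu_xx under the same BCs u_x(0, t) = 0 and u(L, t) = 0:

```python
N, L, alpha = 40, 1.0, 1.0
dx = L / N
dt = 0.4 * dx**2 / alpha
u = [1.0 + (i * dx) ** 2 for i in range(N + 1)]
u[N] = 0.0                                    # enforce u(L) = 0

def G(u):
    # trapezoid rule for (1/2) * integral of u^2 over [0, L]
    return 0.5 * dx * (u[0]**2 / 2 + sum(v * v for v in u[1:-1]) + u[-1]**2 / 2)

values = [G(u)]
for _ in range(500):
    new = u[:]
    for i in range(1, N):
        new[i] = u[i] + alpha * dt / dx**2 * (u[i - 1] - 2 * u[i] + u[i + 1])
    new[0] = new[1]                           # u_x(0, t) = 0
    new[N] = 0.0                              # u(L, t) = 0
    u = new
    values.append(G(u))

assert values[-1] < values[0]                 # G has decreased overall
assert all(values[k + 50] <= values[k] for k in range(0, len(values) - 50, 50))
```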

10.2.4.23. ∂u/∂t ≡ 0 because we are solving for the equilibrium temperature distribution. There is no source term and the problem is spherically symmetric, so u = u(ρ) satisfies the ODE-BVP

κ (1/ρ²) d/dρ ( ρ² du/dρ ) = 0,  a < ρ < b,    u(a) = 20,  u′(b) = u′_b,

for some constant u′_b. We are given that the total rate of heat flow out through the sphere ρ = b is

β = ∯ q • n̂ dS = ∫₀^{2π} ∫₀^π ( −κ ∂u/∂n )|_{ρ=b} · ( b² sin φ dφ dθ ).

The outward unit normal vector is êρ, so

∂u/∂n |_{ρ=b} = êρ • ∇u |_{ρ=b} = êρ • ( ∂u/∂ρ êρ + (1/ρ) ∂u/∂φ êφ + (1/(ρ sin φ)) ∂u/∂θ êθ )|_{ρ=b} = ∂u/∂ρ (b),

so

β = ∫₀^{2π} ∫₀^π ( −κ ∂u/∂n )|_{ρ=b} · b² sin φ dφ dθ = ∫₀^{2π} ∫₀^π −κ du/dρ (b) · b² sin φ dφ dθ = −4π b² κ u′_b,

because ∂u/∂ρ |_{ρ=b} does not depend on φ or θ. Solving for u′_b gives

u′_b = −β / (4π b² κ).

The ODE can be rewritten as

d/dρ ( ρ² du/dρ ) = 0,

whose solutions satisfy

ρ² du/dρ = c1,  that is,  u′(ρ) = c1/ρ²,

where c1 is an arbitrary constant. Integrate a second time to get

u = ∫ (c1/ρ²) dρ = −c1 · (1/ρ) + c2,

where c2 is an arbitrary constant. Plug u′(ρ) into the condition on u′_b to get

−β/(4π b² κ) = u′_b = c1/b²,

which implies c1 = −β/(4π κ), so

u = ( β/(4π κ) ) · (1/ρ) + c2.

Plug this into the first BC to get

20 = u(a) = ( β/(4π κ) ) · (1/a) + c2.

So, the solution of the ODE-BVP is

u = 20 + ( β/(4π κ) ) (1/ρ) − ( β/(4π κ) ) (1/a) = 20 − ( β/(4π κ) ) ( 1/a − 1/ρ ).

So,

T = u(b) = 20 − ( β/(4πκ) ) ( 1/a − 1/b ).
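The spherical-shell answer can be spot-checked numerically (my own sketch; a = 1, b = 2, κ = 1.5, β = 10 are made-up test values) against the BC u(a) = 20 and the derivative condition u′(b) = −β/(4πb²κ):

```python
import math

a, b, kappa, beta = 1.0, 2.0, 1.5, 10.0

def u(rho):
    # u(rho) = 20 - beta/(4*pi*kappa) * (1/a - 1/rho)
    return 20.0 - beta / (4 * math.pi * kappa) * (1.0 / a - 1.0 / rho)

assert abs(u(a) - 20.0) < 1e-12

h = 1e-6
dub = (u(b + h) - u(b - h)) / (2 * h)
assert abs(dub - (-beta / (4 * math.pi * b**2 * kappa))) < 1e-6

T = u(b)
assert abs(T - (20.0 - beta / (4 * math.pi * kappa) * (1.0 / a - 1.0 / b))) < 1e-12
```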

Section 10.3.4

10.3.4.1. Integrate Laplace's equation over the interval [0, L] to get

0 = ∫₀^L 0 dx = ∫₀^L d²u/dx² (x) dx = [ du/dx ]₀^L = u′(L) − u′(0).

10.3.4.2. Integrate Poisson's equation over the interval [0, L] to get

−∫₀^L Q(x) dx = ∫₀^L d²u/dx² (x) dx = [ du/dx ]₀^L = u′(L) − u′(0).

So, the equation to be satisfied is

u′(L) − u′(0) + ∫₀^L Q(x) dx = 0.

10.3.4.3. The circularly symmetric electric potential V = V(r) satisfies the ODE-BVP

κ (1/r) d/dr ( r dV/dr ) = 0,  1.3 < r < 2.5,    V(1.3) = V0,  V(2.5) = V1,

assuming distances are measured in cm. The ODE can be rewritten as

d/dr ( r dV/dr ) = 0,

whose solutions satisfy

r dV/dr = c1,  that is,  V′(r) = c1/r,

where c1 is an arbitrary constant. Integrate a second time to get

V = ∫ (c1/r) dr = c1 ln r + c2,

where c2 is an arbitrary constant. Plug V into the BCs to get

V0 = V(1.3) = c1 ln 1.3 + c2  and  V1 = V(2.5) = c1 ln 2.5 + c2,

which implies

[ c1 ; c2 ] = [ ln 1.3  1 ; ln 2.5  1 ]⁻¹ [ V0 ; V1 ] = ( 1/(ln 1.3 − ln 2.5) ) [ 1  −1 ; −ln 2.5  ln 1.3 ] [ V0 ; V1 ] = ( −1/(ln 2.5 − ln 1.3) ) [ V0 − V1 ; V1 ln 1.3 − V0 ln 2.5 ].

The solution is

V(r) = ( 1/(ln 2.5 − ln 1.3) ) ( (−V0 + V1) ln r − V1 ln 1.3 + V0 ln 2.5 )

= ( 1/(ln 2.5 − ln 1.3) ) ( V1 (ln r − ln 1.3) + V0 (ln 2.5 − ln r) ) = ( 1/ln(2.5/1.3) ) ( V1 ln(r/1.3) + V0 ln(2.5/r) ).
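A numerical check of the final formula (my own sketch; V0 = 5, V1 = 12 are arbitrary boundary values): the BCs hold, and r·V′(r) is constant, which is the first integral of the radial Laplace equation:

```python
import math

V0, V1 = 5.0, 12.0

def V(r):
    # V(r) = [V1*ln(r/1.3) + V0*ln(2.5/r)] / ln(2.5/1.3)
    return (V1 * math.log(r / 1.3) + V0 * math.log(2.5 / r)) / math.log(2.5 / 1.3)

assert abs(V(1.3) - V0) < 1e-12
assert abs(V(2.5) - V1) < 1e-12

# r*V'(r) should be the same constant at every radius.
h = 1e-6
vals = [r * (V(r + h) - V(r - h)) / (2 * h) for r in (1.5, 1.9, 2.3)]
assert max(vals) - min(vals) < 1e-6
```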

10.3.4.4. The solvability condition for Laplace's equation in the domain enclosed by curve C is 0 = ∮_C ∂u/∂n ds. Here, the positively oriented curve C consists of four line segments: (1) y = 0, 0 ≤ x ≤ L; (2) x = L, 0 ≤ y ≤ H; (3) y = H, 0 ≤ x ≤ L; and (4) x = 0, 0 ≤ y ≤ H. Only on the first two is ∂u/∂n not identically zero. We parametrize those two line segments and give the corresponding outward unit normal vectors as

C1 : r = t î, 0 ≤ t ≤ L :  n̂ = −ĵ,

and

C2 : r = L î + t ĵ, 0 ≤ t ≤ H :  n̂ = î.

The solvability condition is

0 = ∮_C ∂u/∂n ds = ∫_{C1} ∇u • (−ĵ) ds + ∫_{C2} ∇u • î ds = −∫_{C1} ∂u/∂y ds + ∫_{C2} ∂u/∂x ds

= −∫₀^L ∂u/∂y (t, 0) dt + ∫₀^H ∂u/∂x (L, t) dt = ∫₀^L f(t) dt − ∫₀^H g(t) dt.

So, the solvability condition is ∫₀^L f(t) dt = ∫₀^H g(t) dt.

10.3.4.5. The solvability condition for Laplace's equation in the domain enclosed by curve C is 0 = ∮_C ∂u/∂n ds. Here, the positively oriented curve C consists of four line segments: y = 0, 0 ≤ x ≤ L; x = L, 0 ≤ y ≤ H; y = H, 0 ≤ x ≤ L; and x = 0, 0 ≤ y ≤ H, as shown in the figure. Only on the last two is ∂u/∂n not identically zero. We parametrize those two line segments and give the corresponding outward unit normal vectors as

C3 : r = (L − t) î + H ĵ, 0 ≤ t ≤ L :  n̂ = ĵ,

and

C4 : r = 0 · î + (H − t) ĵ, 0 ≤ t ≤ H :  n̂ = −î.

The solvability condition is

0 = ∮_C ∂u/∂n ds = ∫_{C3} ∇u • ĵ ds + ∫_{C4} ∇u • (−î) ds = ∫_{C3} ∂u/∂y ds + ∫_{C4} ( −∂u/∂x ) ds

= ∫₀^L ∂u/∂y (L − t, H) dt − ∫₀^H ∂u/∂x (0, H − t) dt = −∫₀^L k dt + ∫₀^H (H − t) dt = −kL + [ Ht − ½t² ]₀^H = −kL + ½H².

So, the only value of k for which there is a solution is k = H²/(2L).
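An arithmetic check (my own, with made-up dimensions L = 2, H = 3): the side-edge data integrates to H²/2, and balancing that against the top-edge contribution kL forces k = H²/(2L):

```python
# Midpoint rule for the integral of (H - y) over [0, H]; exact for linear integrands.
L, H = 2.0, 3.0
n = 1000
dy = H / n
left = sum((H - (j + 0.5) * dy) * dy for j in range(n))
assert abs(left - H * H / 2) < 1e-9

k = H * H / (2 * L)
assert abs(k * L - left) < 1e-9      # the two boundary contributions balance
```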

Figure 1: Problem 10.3.4.5

10.3.4.6. The boundary of the half ball consists of two parts: (1) the flat disk at φ = π/2, and (2) the half sphere ρ = a in the half space z > 0. In spherical coordinates, the gradient operator is given by

∇u = ∂u/∂ρ êρ + (1/ρ) ∂u/∂φ êφ + (1/(ρ sin φ)) ∂u/∂θ êθ.

On the flat disk at φ = π/2, the unit vector pointing out of the half ball is given by n̂ = −k̂. From Table 6.2 in Section 6.2,

k̂ = cos φ êρ − sin φ êφ.

On the flat disk φ = π/2,

∂u/∂n = n̂ • ∇u = ( −cos φ êρ + sin φ êφ ) • ( ∂u/∂ρ êρ + (1/ρ) ∂u/∂φ êφ + (1/(ρ sin φ)) ∂u/∂θ êθ ) = −cos φ ∂u/∂ρ + (sin φ/ρ) ∂u/∂φ.

But, φ = π/2 implies that on the flat disk,

∂u/∂n = (1/ρ) ∂u/∂φ (ρ, π/2, θ).

On the half sphere ρ = a in the half space z > 0, the unit vector pointing out of the half ball is n̂ = êρ, and

∂u/∂n = êρ • ∇u = ∂u/∂ρ (a, φ, θ).

The solvability condition is

(∗)  0 = ∯_{∂V} ∂u/∂n dS = ∬_{φ=π/2} ∂u/∂n dS + ∬_{ρ=a} ∂u/∂n dS.

We recall that spherical and cylindrical coordinates are related by r = ρ sin φ, so that on the flat disk φ = π/2 we have r = ρ sin(π/2) = ρ. Also, on the flat disk φ = π/2, the element of surface area is given in polar coordinates by dS = r dr dθ. So,

∬_{φ=π/2} ∂u/∂n dS = ∫₀^{2π} ∫₀^a (1/ρ) ∂u/∂φ (ρ, π/2, θ) · ρ dρ dθ = ∫₀^{2π} ∫₀^a ∂u/∂φ (ρ, π/2, θ) dρ dθ.

We recall that on the half sphere ρ = a in the half space z > 0, the element of surface area is given in spherical coordinates by dS = a² sin φ dφ dθ, as was discussed in Example 7.29 in Section 7.5. So,

∬_{ρ=a} ∂u/∂n dS = ∫₀^{2π} ∫₀^{π/2} ∂u/∂ρ (a, φ, θ) · a² sin φ dφ dθ.

From (∗) and the above calculations, the solvability condition is

0 = ∯_{∂V} ∂u/∂n dS = ∫₀^{2π} ∫₀^a ∂u/∂φ (ρ, π/2, θ) dρ dθ + ∫₀^{2π} ∫₀^{π/2} ∂u/∂ρ (a, φ, θ) · a² sin φ dφ dθ.

10.3.4.7. The boundary of the domain consists of two parts: C+, the positively oriented circle r = a, and C−, the negatively oriented rectangle inside the circle. The solvability condition is that

0 = ∮_{C+} ∂u/∂n ds + ∮_{C−} ∂u/∂n ds.

Parametrize

C+ : r = a êr, −π ≤ θ ≤ π,  on which n̂ = êr,

and

C0 : r = t î + H ĵ, −L ≤ t ≤ L,  on which n̂ = −ĵ.

Only on C+ and C0 are the normal derivatives of the temperature not identically zero. The solvability condition is

0 = ∮_{C+} ∂u/∂n ds + ∫_{C0} ∂u/∂n ds = ∫_{−π}^{π} ∂u/∂r (a, θ) · (a dθ) + ∫_{−L}^{L} ( −∂u/∂y (t, H) ) dt,

that is,

0 = a ∫_{−π}^{π} f(θ) dθ − ∫_{−L}^{L} g(t) dt,

or, equivalently,

0 = a ∫_{−π}^{π} f(θ) dθ − ∫_{−L}^{L} g(x) dx.

Figure 2: Problem 10.3.4.7

10.3.4.8. Method 1: Integrate Laplace's equation over the annulus to get

0 = ∫₀^{2π} ∫_a^b ∇²u · r dr dθ = ∫₀^{2π} ∫_a^b [ (1/r) ∂/∂r ( r ∂u/∂r ) + (1/r²) ∂²u/∂θ² ] r dr dθ.

But, we assumed the material is homogeneous, and the annulus is circularly symmetric, so we can assume u = u(r) does not depend on θ. So,

0 = ∫₀^{2π} ∫_a^b [ (1/r) d/dr ( r du/dr ) + (1/r²) · 0 ] r dr dθ = ∫₀^{2π} ∫_a^b d/dr ( r du/dr ) dr dθ = ∫₀^{2π} [ r du/dr ]_a^b dθ.

So, the solvability condition for this problem is

(∗)  0 = ∫₀^{2π} ( b du/dr (b) − a du/dr (a) ) dθ.

So, if u = u(r) satisfies Laplace's equation in the annulus and the condition b ∂u/∂r (b, θ) ≡ a ∂u/∂r (a, θ), then "Yes," u satisfies the solvability condition, (∗).

Method 2: As in problem 10.2.4.13, the total rate of heat flow out of the annulus through the circle r = b is

β ≜ ∮_C q • n̂ ds = ∫₀^{2π} ( −κ ∂u/∂n )|_{r=b} · (b dθ).

On r = b, the outward unit normal vector is êr, so

∂u/∂n |_{r=b} = êr • ∇u |_{r=b} = êr • ( ∂u/∂r êr + (1/r) ∂u/∂θ êθ )|_{r=b} = ∂u/∂r (b),

so

β = ∫₀^{2π} ( −κ ∂u/∂n )|_{r=b} b dθ = ∫₀^{2π} −κ ∂u/∂r (b) · b dθ = −2π b κ u′(b),

because ∂u/∂r |_{r=b} does not depend on θ.

As in problem 10.2.4.12, the total rate of heat flow out of the annulus through the inner circle, r = a, is

α ≜ ∮_C q • n̂ ds = ∫₀^{2π} ( −κ ∂u/∂n )|_{r=a} · (a dθ).

On r = a, the outward unit normal vector is n̂ = −êr, so

∂u/∂n |_{r=a} = −êr • ∇u |_{r=a} = −∂u/∂r (a),

so

α = ∫₀^{2π} ( −κ ∂u/∂n )|_{r=a} a dθ = ∫₀^{2π} κ ∂u/∂r (a) · a dθ = 2πa κ u′(a),

because ∂u/∂r |_{r=a} does not depend on θ.

Because there is no internal heat source or sink, the solvability condition for Laplace's equation is that

0 = α + β = 2πa κ u′(a) − 2πb κ u′(b) = −2π κ ( b u′(b) − a u′(a) ).

So, yes, the solvability condition is satisfied.

10.3.4.9. ∂u/∂t ≡ 0 because we are solving for the equilibrium temperature distribution, so u = u(r) should solve the ODE

0 = (1/r) d/dr ( r du/dr ) − 4.

That is,

d/dr ( r du/dr ) = 4r.

Indefinite integration implies

r du/dr = 2r² + c1,  that is,  du/dr = 2r + c1/r,

where c1 is an arbitrary constant. Indefinite integration a second time implies

u = r² + c1 ln r + c2,

where c2 is an arbitrary constant. The finiteness BC, |u(0⁺, t)| < ∞, implies c1 = 0, so

u = r² + c2,  hence  u′(r) = 2r.

Plug this into the second BC to get 5 = u′(a) = 2a.

(a) Only for a = 5/2 is there an equilibrium temperature distribution.

(b) If a = 5/2, the equilibrium temperature distribution is u(r) = r² + c2, where c2 is an arbitrary constant.

10.3.4.10. ∂u/∂t ≡ 0 because we are solving for the equilibrium temperature distribution, so u = u(r) should solve the ODE

0 = (1/r) d/dr ( r du/dr ) − (4/r²) u,

that is, the Cauchy–Euler equation r²u″ + ru′ − 4u = 0. Substituting in u = rⁿ, we see that its characteristic equation is 0 = n(n − 1) + n − 4 = n² − 4 = (n − 2)(n + 2), so the ODE's general solution is u = c1 r² + c2 r⁻², where c1, c2 are arbitrary constants. The BC |u(0⁺)| < ∞ requires that c2 = 0. The second BC requires 5 = u(a) = c1 a², so c1 = 5/a². The equilibrium solution is

u = (5/a²) r² = 5 (r/a)²,

and it exists for all a.

10.3.4.11. The ODE-BVP for the equilibrium temperature distribution in problem 10.2.4.7 is

κ u″(x) + cos(πx/L) = 0,  0 < x < L,    u′(0) = 0,  u(L) = T1.

Note that the thermal conductivity, κ, is assumed to be constant. Integrate the ODE once to get

0 = ∫₀^L ( κu″(x) + cos(πx/L) ) dx = [ κu′(x) + (L/π) sin(πx/L) ]₀^L = κ( u′(L) − u′(0) ) + (0 − 0) = κ( u′(L) − 0 ) = κu′(L),

after using the boundary condition u′(0) = 0.

(a) The solvability condition is u′(L) = 0.

(b) The rate of heat flow out of the right end is

q(L) • n̂ = (−κ∇u) • (+î) = −κ du/dx (L) = 0,

that is, it is zero when the temperature is at equilibrium.

10.3.4.12. Integrate both sides of Poisson's equation over V to get

(∗)  ∭_V ∇²u dV + ∭_V (Q/κ) dV = 0.

The divergence theorem gives

∭_V ∇²u dV = ∭_V ∇ • ∇u dV = ∯_S (∇u) • n̂ dS.

Substituting that into (∗) gives

∯_S (∇u) • n̂ dS + ∭_V (Q/κ) dV = 0.

10.3.4.13. Integrate both sides of the anisotropic Laplace's equation over V and then use the divergence theorem to get

0 = ∭_V ∇ • ( κ(r)∇u ) dV = ∯_S ( κ(r)∇u ) • n̂ dS = ∯_S g(r) dS.

So, the solvability condition is that 0 = ∯_S g(r) dS.

10.3.4.14. Yes. Why? Corollary 6.1 implies ∇u • ∇u = ∇ • (u∇u) − u∇²u. Using ∇²u = −Q/κ, we have

∇ • (u∇u) = ∇u • ∇u − uQ/κ.

The divergence theorem gives

LHS = ∭_V ∇u • ∇u dV = ∭_V ( ∇ • (u∇u) + uQ/κ ) dV = ∯_S u∇u • n̂ dS + ∭_V (uQ/κ) dV = ∯_S u ∂u/∂n dS + ∭_V (uQ/κ) dV = RHS.

10.3.4.15. Begin by using the Calculus I chain rule for the substitution r = 1/p to get

∂U/∂p (p, θ) = ∂/∂p [ u(1/p, θ) ] = ∂u/∂r (1/p, θ) · ∂r/∂p = ∂u/∂r (1/p, θ) · ∂/∂p [ 1/p ] = −(1/p²) · ∂u/∂r (1/p, θ).

Using the product rule,

∂²U/∂p² (p, θ) = ∂/∂p [ −(1/p²) · ∂u/∂r (1/p, θ) ] = (2/p³) · ∂u/∂r (1/p, θ) − (1/p²) · ∂/∂p [ ∂u/∂r (1/p, θ) ].

Using the Calculus I chain rule for the substitution r = 1/p gives

∂²U/∂p² (p, θ) = (2/p³) · ∂u/∂r (1/p, θ) − (1/p²) · ∂²u/∂r² (1/p, θ) · ( −1/p² ) = (1/p⁴) · ∂²u/∂r² (1/p, θ) + (2/p³) · ∂u/∂r (1/p, θ).

Because u = u(r, θ) satisfies Laplace's equation, that is,

0 = (1/r) ∂/∂r ( r ∂u/∂r ) + (1/r²) ∂²u/∂θ² = ∂²u/∂r² + (1/r) · ∂u/∂r + (1/r²) · ∂²u/∂θ²,

we have

∂²u/∂r² = −(1/r) · ∂u/∂r − (1/r²) · ∂²u/∂θ².

So, with r = 1/p (so that 1/r = p),

∂²U/∂p² (p, θ) = (1/p⁴) · ( −p · ∂u/∂r − p² · ∂²u/∂θ² ) + (2/p³) · ∂u/∂r (1/p, θ) = (1/p³) · ∂u/∂r (1/p, θ) − (1/p²) · ∂²u/∂θ² (1/p, θ),

that is,

(∗)  ∂²U/∂p² (p, θ) = (1/p³) · ∂u/∂r − (1/p²) · ∂²u/∂θ².

But, our first result was that

∂U/∂p (p, θ) = −(1/p²) · ∂u/∂r (1/p, θ),

so

(1/p³) · ∂u/∂r = −(1/p) · ∂U/∂p (p, θ).

Substitute this into (∗) to get

∂²U/∂p² (p, θ) = −(1/p) · ∂U/∂p (p, θ) − (1/p²) · ∂²u/∂θ²,

that is,

∂²U/∂p² (p, θ) + (1/p) · ∂U/∂p (p, θ) + (1/p²) · ∂²U/∂θ² (p, θ) = 0,

that is, U = U(p, θ) satisfies Laplace's equation, after noting that

∂²U/∂θ² (p, θ) = ∂²u/∂θ² (1/p, θ).
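The inversion result of 10.3.4.15 can be spot-checked numerically. As a sketch (my own check; the test function r²cos 2θ = Re(z²) is a harmonic function I chose, not one from the text), verify that U(p, θ) = u(1/p, θ) has vanishing polar Laplacian:

```python
import math

def u(r, th):
    # harmonic test function: Re(z^2) = r^2 * cos(2*theta)
    return r * r * math.cos(2 * th)

def U(p, th):
    return u(1.0 / p, th)

def polar_laplacian(f, p, th, h=1e-4):
    fpp = (f(p + h, th) - 2 * f(p, th) + f(p - h, th)) / h**2
    fp = (f(p + h, th) - f(p - h, th)) / (2 * h)
    ftt = (f(p, th + h) - 2 * f(p, th) + f(p, th - h)) / h**2
    return fpp + fp / p + ftt / p**2

for p, th in ((0.7, 0.3), (1.4, 2.0), (2.2, -1.1)):
    assert abs(polar_laplacian(U, p, th)) < 1e-3
```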

Section 10.4.6

10.4.6.1. Starting from Section 10.4's equation (10.58),

ϱ(X) ∂²r/∂t² = ∂/∂X [ β( ||∂r/∂X (X, t)|| ) ∂r/∂X ] + f,

we assume that the mass density is ϱ = ϱ(X), the tension is T = T(X) = β( ||∂r/∂X (X, t)|| ), the body force is f = 0, and we approximate r(X, t) ≈ X î + y(X, t) ĵ ≈ x î + y(x, t) ĵ. Then the wave equation becomes

ϱ(x) ∂²y(x, t)/∂t² = ∂/∂x [ T(x) ∂y(x, t)/∂x ].

10.4.6.2. Define i(x, t) ≜ e⁻ᵖᵗ u(x, t) and assume it satisfies the transmission line equation

∂²i/∂x² = LC ∂²i/∂t² + RC ∂i/∂t,

where R, L, C are constants. [We used the assumption that G = 0.] Calculate

∂²i/∂x² = ∂²/∂x² [ e⁻ᵖᵗ u(x, t) ] = e⁻ᵖᵗ ∂²u/∂x²,

∂i/∂t = ∂/∂t [ e⁻ᵖᵗ u(x, t) ] = e⁻ᵖᵗ ∂u/∂t (x, t) − p e⁻ᵖᵗ u(x, t),

and

∂²i/∂t² = ∂/∂t [ e⁻ᵖᵗ ∂u/∂t − p e⁻ᵖᵗ u ] = e⁻ᵖᵗ ∂²u/∂t² − 2p e⁻ᵖᵗ ∂u/∂t + p² e⁻ᵖᵗ u.

Substituting this into the transmission line equation gives

e⁻ᵖᵗ ∂²u/∂x² = LC ( e⁻ᵖᵗ ∂²u/∂t² − 2p e⁻ᵖᵗ ∂u/∂t + p² e⁻ᵖᵗ u ) + RC ( e⁻ᵖᵗ ∂u/∂t − p e⁻ᵖᵗ u )

= e⁻ᵖᵗ [ LC ∂²u/∂t² + (−2pLC + RC) ∂u/∂t + (p²LC − pRC) u ].

To get the PDE in the desired form, we choose p so that −2pLC + RC = 0: let p = R/(2L), so that the PDE satisfied by u = u(x, t) is

e⁻ᵖᵗ ∂²u/∂x² = e⁻ᵖᵗ [ LC ∂²u/∂t² + ( (R/(2L))² LC − (R/(2L)) RC ) u ],

or, equivalently,

∂²u/∂x² = LC ∂²u/∂t² − ( R²C/(4L) ) u.

The latter is a PDE of the form ∂²u/∂x² = c² ∂²u/∂t² − r u, where c = √(LC) and r = R²C/(4L).

10.4.6.3. Tension T = 21.3 lbs = (21.3 lbs) · (9.80665 N / 2.205 lbs) ≈ 94.73090476 N, and the string length is L ≈ 0.352 m. The wave speed is c = √(T/ϱ), and the fundamental mode has vibration frequency 440 Hz, so

2π · 440 = ω = πc/L = π √(T/ϱ)/L  ⇒  √(T/ϱ) = 880 L  ⇒  ϱ = T/(880 L)².

ϱ ≈ 94.73090476/(880 · 0.352)² ≈ 9.872815337 × 10⁻⁴,

so ϱ ≈ 9.87 × 10⁻⁴ kg/m, to three significant digits.
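The arithmetic of 10.4.6.3 can be recomputed directly (same numbers as above, my own check):

```python
T = 21.3 * 9.80665 / 2.205       # tension in newtons
L = 0.352                        # string length in meters
c = 880 * L                      # wave speed, from f = c/(2L) = 440 Hz
rho = T / c**2                   # linear mass density, kg/m

assert abs(T - 94.7309) < 1e-3
assert abs(rho - 9.8728e-4) < 1e-7
```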



10.4.6.4. Following the hint, we first note that the matrix multiplication that has the effect of swapping the variables x and z is given by the elementary matrix

E ≜
[ 0 0 1 ]
[ 0 1 0 ]
[ 1 0 0 ],

related to work in Sections 1.2 and 6.6. We calculate that

Eᵀ
[ a b 0 ]
[ c d 0 ]
[ 0 0 0 ]
E =
[ 0 0 0 ]
[ 0 d c ]
[ 0 b a ],

as was desired. So, let O = E.

10.4.6.5. Following the hint, we first note that the matrix multiplication that has the effect of swapping the variables y and z is given by the elementary matrix

E ≜
[ 1 0 0 ]
[ 0 0 1 ]
[ 0 1 0 ],

related to work in Sections 1.2 and 6.6. We calculate that

Eᵀ
[ a b 0 ]
[ c d 0 ]
[ 0 0 0 ]
E =
[ a 0 b ]
[ 0 0 0 ]
[ c 0 d ],

as was desired. So, let O = E.

Turyn, January 8, 2014

page 29

10.4.6.6. In the example of istrotopic, uniaxial stretching, displacements are given by   X u = α  −ν Y  , −ν Z for some constant ν. Then the strain tensor is    h 1 0 i h i 1 ∂u T 1  ∂u + = α  0 −ν [ε] = 2 ∂X ∂X 2 0 0

  0 1 0 0  + α  0 −ν −ν 0 0

 T  0 1 0  0   = α  0 −ν −ν 0 0

 0 0 , −ν

the diagonal matrix given in (10.75) in Section 10.4. 10.4.6.7. With the strain tensor given in (10.75) in Section 10.4, that is,   α 0 0 0 , (?) [ε] = [εij ] =  0 −αν 0 0 −αν Hooke’s law says that the stress tensor is [τ ] = C[ε], and (10.74) in Section 10.4 says that [τ ] = λ(ε11 + ε22 + ε33 )I + 2µ[ε]. Substituting in the [ε] given in (?), we have 

  1 0 0 α 0 [τ ] = λ(α − αν − αν)I + 2µ[ε] = λα(1 − 2ν)  0 1 0  + 2µ  0 −αν 0 0 1 0 0   λ(1 − 2ν) + 2µ 0 0 , 0 λ(1 − 2ν) − 2µν 0 = α 0 0 λ(1 − 2ν) − 2µν

 0 0  −αν

which is (10.76), as we were asked to explain. 10.4.6.8. For istrotopic, uniaxial stretching, we have (10.76) in Section 10.4, that is,   λ(1 − 2ν) + 2µ 0 0 . 0 λ(1 − 2ν) − 2µν 0 [τ ] = α  0 0 λ(1 − 2ν) − 2µν In the present problem we are also assuming that ν is chosen to equal Poisson’s ratio, that is, ν=

λ . 2(λ + µ)

Combining the information, we have    λ λ 1 − 2 + 2µ 0  2(λ + µ)       λ λ [τ ] = α  0 λ 1 − 2 − 2µ  2(λ + µ) 2(λ + µ)     0 0

 0

0  λ 1−2

λ 2(λ + µ)

c Larry

 − 2µ

λ 2(λ + µ)

          

Turyn, January 8, 2014

page 30

  λ λ 1 − + 2µ 0  (λ + µ)       λ λ = α 0 λ 1 − −µ  (λ + µ) (λ + µ)     0 0 



  λ     = α        µ   = α   



λ (λ + µ)

µ (λ + µ)

 0

0  λ 1−

 −µ

 + 2µ

0 

0

λ

µ (λ + µ)

0

 −µ

λ (λ + µ)

0

 0

0

0

0

0

λ (λ + µ) 

0

0 

+ 2µ

λ (λ + µ)

          

  αµ (2ν) + 2αµ 0      = 0  0     0 0

λ

0 0 0

0

µ (λ + µ) 



 −µ

λ (λ + µ)

          

α · 2µ (1 + ν)

     0  =   0

0

0

0

0

0

0



  0  ,  0

which is (10.77), as we wanted to explain. 10.4.6.9. Regarding (10.78), we have that 1 1 µ · (3λ + 2µ) 1 E µ · (3λ + 2µ) = ·E = · = · λ 2(1 + ν) 2(1 + ν) 2(1 + ν) λ+µ (λ + µ) 2(1 + 2(λ+µ) ) µ · (3λ + 2µ) 2(λ + µ) (λ + µ) µ · (3λ + 2µ)  · = · = µ, (λ + µ) (3λ + 2µ) (λ + µ) 2(2(λ + µ) + λ)  as were asked to explain, and hence =

λ 2 λ νE E 2ν 2ν (λ+µ) 2(λ+µ)  = · =µ· =µ· = µ · λ (1 + ν)(1 − 2ν) 2(1 + ν) (1 − 2ν) (1 − 2ν) (1 − 2 λ ) (1 − (λ+µ) ) 2(λ+µ) 

=µ·

(1

λ (λ+µ) λ − (λ+µ) )

·

(λ + µ) λ λ =µ· = µ · = λ, (λ + µ) µ (λ + µ) − λ  

as we were asked to explain.  2  2 ! 1 ∂u ∂u ∂2u ∂2u 10.4.6.10. Define e(x, t) = + , where u(x, t) satisfies = . Using the chain rule 2 2 ∂x ∂t ∂t ∂x2 and then the product rule, we calculate that "  2  2 ! # ∂u ∂u ∂ 2 u ∂e ∂ 1 ∂u ∂u ∂ 2 u = + = · + · ∂t ∂t 2 ∂x ∂t ∂x ∂x∂t ∂t ∂t2 and

∂2e ∂2u ∂2u ∂u ∂ 3 u ∂ 2 u ∂ 2 u ∂u ∂ 3 u = · + · + · + · . ∂2t ∂x∂t ∂x∂t ∂x ∂x∂t2 ∂t2 ∂t2 ∂t ∂t3 c Larry

Turyn, January 8, 2014

page 31

Using the given fact that

∂2u ∂2u = , we have ∂t2 ∂x2

∂2e ∂2u ∂2u ∂u ∂ 3 u ∂ 2 u ∂ 2 u ∂u ∂ 3 u + · + = · + · · . ∂2t ∂x∂t ∂x∂t ∂x ∂x3 ∂x2 ∂x2 ∂t ∂x2 ∂t On the other hand, similarly we have that "  2  2 ! # ∂e ∂ 1 ∂u ∂u ∂u ∂ 2 u ∂u ∂ 2 u + = + · · = ∂x ∂x 2 ∂x ∂t ∂x ∂x2 ∂t ∂x∂t and

∂ 2 u ∂ 2 u ∂u ∂ 3 u ∂2u ∂2u ∂u ∂ 3 u ∂2e = · 2 + + · · + · 2 2 3 ∂x ∂x ∂ x ∂x ∂x ∂x∂t ∂x∂t ∂t ∂x2 ∂t

∂2u ∂2u ∂u ∂ 3 u ∂ 2 u ∂ 2 u ∂u ∂ 3 u ∂2e + · + = 2 , · + · · 3 2 2 2 ∂x∂t ∂x∂t ∂x ∂x ∂x ∂x ∂t ∂x ∂t ∂ t as we were asked to explain. =

Define p(x, t) =

∂u ∂u ∂2u ∂2u · , where u(x, t) satisfies = . Using the product rule, we calculate that 2 ∂x ∂t ∂t ∂x2   ∂p ∂ ∂u ∂u ∂ 2 u ∂u ∂u ∂ 2 u = · = · + · ∂t ∂t ∂x ∂t ∂x∂t ∂t ∂x ∂t2

and

∂ 3 u ∂u ∂2u ∂2u ∂ 2 u ∂ 2 u ∂u ∂ 3 u ∂2p = + · 2 + · · · + . 2 2 ∂ t ∂x∂t ∂t ∂x∂t ∂t ∂x∂t ∂t2 ∂x ∂t3

Using the given fact that

∂2u ∂2u = , we have ∂t2 ∂x2

∂ 3 u ∂u ∂2u ∂2u ∂ 2 u ∂ 2 u ∂u ∂ 3 u ∂2p = · + · + · + · . ∂2t ∂x3 ∂t ∂x∂t ∂x2 ∂x∂t ∂x2 ∂x ∂x2 ∂t On the other hand, similarly we have that   ∂p ∂ ∂u ∂u ∂ 2 u ∂u ∂u ∂ 2 u = · = · + · ∂x ∂x ∂x ∂t ∂x2 ∂t ∂x ∂x∂t and
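Both identities (e and p each satisfy the wave equation) can be spot-checked numerically. As a sketch (my own check; the d'Alembert-type solution below is one I chose, not from the text), using nested finite differences:

```python
import math

def uw(x, t):
    # a solution of u_tt = u_xx built from traveling waves
    return math.sin(x - t) + 0.5 * math.cos(2 * (x + t))

def e(x, t, h=1e-3):
    ux = (uw(x + h, t) - uw(x - h, t)) / (2 * h)
    ut = (uw(x, t + h) - uw(x, t - h)) / (2 * h)
    return 0.5 * (ux * ux + ut * ut)

def p(x, t, h=1e-3):
    ux = (uw(x + h, t) - uw(x - h, t)) / (2 * h)
    ut = (uw(x, t + h) - uw(x, t - h)) / (2 * h)
    return ux * ut

def wave_residual(f, x, t, h=1e-3):
    ftt = (f(x, t + h) - 2 * f(x, t) + f(x, t - h)) / h**2
    fxx = (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2
    return ftt - fxx

for x, t in ((0.4, 0.1), (1.3, 0.7)):
    assert abs(wave_residual(e, x, t)) < 1e-3
    assert abs(wave_residual(p, x, t)) < 1e-3
```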

∂ 3 u ∂u ∂ 2 u ∂ 2 u ∂2u ∂2u ∂u ∂ 3 u ∂2p = + + + · · · · ∂2x ∂x3 ∂t ∂x2 ∂x∂t ∂x2 ∂x∂t ∂x ∂x2 ∂t

∂ 3 u ∂u ∂2u ∂2u ∂ 2 u ∂ 2 u ∂u ∂ 3 u ∂2p · + · + · + · = , ∂x3 ∂t ∂x∂t ∂x2 ∂x∂t ∂x2 ∂x ∂x2 ∂t ∂2t as we were asked to explain. =

∂2r ∂T 10.4.6.11. We start with (10.57) in Section 10.4, %(X) 2 (X, t) = (X, t) + f (X, t), and follow the ∂t ∂X instructions in the problem: First, take the dot product of both sides of that PDE with an unspecified “test function" η = η(X, t), and then integrate both sides with respect to X over the interval [ 0, L ] to get ˆ (?)

L

%(X) 0

∂2r (X, t) • η(X, t) dX = ∂t2

ˆ 0

L

∂T (X, t) • η(X, t) dX + ∂X

ˆ

L

f (X, t) • η(X, t) dX . 0

c Larry

Turyn, January 8, 2014

page 32

Next, use integration by parts on the second integral to get ˆ L h iL ˆ L ∂T ∂η T(X, t) • (X, t) • η(X, t) dX = T(X, t) • η(X, t) − (X, t) dX . ∂X ∂X 0 0 0 Let us assume that the test function η satisfies the BCs η(0, t) = η(L, t) = 0, hence

ˆ

L

0

∂T (X, t) • η(X, t) dX = − ∂X

ˆ

L

T(X, t) • 0

∂η (X, t) dX . ∂X

So, (?) can be rewritten as ˆ L ˆ L ˆ L ∂2r ∂η %(X) 2 (X, t) • η(X, t) dX = − T(X, t) • (X, t) dX + f (X, t) • η(X, t) dX , ∂t ∂X 0 0 0 or, equivalently, ˆ L %(X) 0

 ∂η ∂2r (X, t) • η(X, t) + T(X, t) • (X, t) − f (X, t) • η(X, t) dX = 0. ∂t2 ∂X

Next, integrate both sides of the equation with respect to t over the interval [0, ∞) to get  ˆ ∞ˆ L ∂η ∂2r (??) (X, t) − f (X, t) • η(X, t) dX dt = 0. %(X) 2 (X, t) • η(X, t) + T(X, t) • ∂t ∂X 0 0 The penultimate step is to use integration by parts (with respect to t) on the first term in (??) to get ! ˆ ∞ˆ L ˆ bˆ L ∂2r ∂2r (? ? ?) %(X) 2 (X, t) • η(X, t) dX %(X) 2 (X, t) • η(X, t) dX dt = lim b→∞ ∂t ∂t 0 0 0 0 = lim

b→∞

hˆ 0

L

ib ∂r %(X) (X, t) • η(X, t) dX − ∂t 0

ˆ 0

b

ˆ 0

L

! ∂r ∂η %(X) (X, t) • (X, t) dX dt . ∂t ∂t

Let us assume that all improper integrals appearing in the work exist. [This assumption can be replaced by assuming that the test function η(X, t) and the solution r(X, t) are in spaces of functions with suitable integrability, but this can be thought of as mathematical technicalities.] Also, since r(X, t) satisfies the boundary conditions (10.54) in Section 10.4, r(0, t) = 0 and r(L ˆ ı, t) = L ˆ ı. It follows that, interchanging1 the operation of taking a partial derivative with evaluation at X = 0 or at X=L ∂r ∂r (0, t) ≡ 0 and (L, t) ≡ 0. ∂t ∂t Then (? ? ?) becomes ˆ ∞ˆ L ˆ ∞ˆ L ∂2r ∂r ∂η (? ? ? ?) %(X) 2 (X, t) • η(X, t) dX dt = 0 − %(X) (X, t) • (X, t) dX dt. ∂t ∂t ∂t 0 0 0 0 Substitute this into (??) to get  ˆ ∞ˆ L ∂r ∂η ∂η (X, t) + T(X, t) • (X, t) − f (X, t) • η(X, t) dX dt = 0, (? ? ? ? ?) −%(X) (X, t) • ∂t ∂t ∂X 0 0 which is true for all suitably “nice enough " functions η(X, t). Equation (? ? ? ? ?) is the principle of virtual work, as we wished to explain. 1 this

is another mathematical technicality that is true as long as the function r(X, t) is continuously differentiable in (X, t).

c Larry

Turyn, January 8, 2014

page 33

Section 10.5.5

10.5.5.1. In Figure 10.23 in Section 10.5, we were given

Φ(x) = u(x, 0) = { 0 for x < −2; x + 2 for −2 ≤ x < 0; 0 for 0 ≤ x }.

Because (∂u/∂t)(x, 0) ≡ 0, the solution of the PDE is given by u(x, t) = (1/2)Φ(x − ct) + (1/2)Φ(x + ct), where the wave speed is c = √(T₀/%₀) = √(4/0.01) = 20. So, the solution of the problem is

u(x, t) = (1/2){ 0 for x − 20t < −2; x − 20t + 2 for −2 ≤ x − 20t < 0; 0 for 0 ≤ x − 20t }
        + (1/2){ 0 for x + 20t < −2; x + 20t + 2 for −2 ≤ x + 20t < 0; 0 for 0 ≤ x + 20t },

that is,

u(x, t) = (1/2){ 0 for x < −2 + 20t; x − 20t + 2 for −2 + 20t ≤ x < 20t; 0 for 20t ≤ x }
        + (1/2){ 0 for x < −2 − 20t; x + 20t + 2 for −2 − 20t ≤ x < −20t; 0 for −20t ≤ x }.

At t = 0.025, the solution of the PDE is

u(x, 0.025) = (1/2){ 0 for x < −1.5; x + 1.5 for −1.5 ≤ x < 0.5; 0 for 0.5 ≤ x }
            + (1/2){ 0 for x < −2.5; x + 2.5 for −2.5 ≤ x < −0.5; 0 for −0.5 ≤ x }

= { 0 for x < −2.5; (1/2)(x + 2.5) for −2.5 ≤ x < −1.5; (1/2)(x + 2.5) + (1/2)(x + 1.5) for −1.5 ≤ x < −0.5; (1/2)(x + 1.5) for −0.5 ≤ x < 0.5; 0 for 0.5 ≤ x },

whose graph is shown in the figure.

[Figure 3: Problem 10.5.1: u(x, 0.025)]

At t = 0.05, the solution of the PDE is

u(x, 0.05) = (1/2){ 0 for x < −1; x + 1 for −1 ≤ x < 1; 0 for 1 ≤ x } + (1/2){ 0 for x < −3; x + 3 for −3 ≤ x < −1; 0 for −1 ≤ x }

= { 0 for x < −3; (1/2)(x + 3) for −3 ≤ x < −1; (1/2)(x + 1) for −1 ≤ x < 1; 0 for 1 ≤ x }.

[Figure 4: Problem 10.5.1: u(x, 0.05)]
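The piecewise formulas above can be cross-checked by evaluating the d'Alembert formula directly. This short sketch is illustrative scaffolding, not part of the manual:

```python
def Phi(x):
    # initial displacement from Figure 10.23: x + 2 on [-2, 0), zero elsewhere
    return x + 2.0 if -2.0 <= x < 0.0 else 0.0

C = 20.0  # wave speed c = sqrt(T0 / rho0) = sqrt(4 / 0.01)

def u(x, t):
    # d'Alembert solution with zero initial velocity
    return 0.5 * Phi(x - C * t) + 0.5 * Phi(x + C * t)
```

For example, at t = 0.025 and x = 0 the middle-region formula (1/2)(x + 1.5) predicts 0.75, which matches the direct evaluation.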

10.5.5.2. In Figure 10.24 in Section 10.5, we were given

Φ(x) = u(x, 0) = { 0 for x < −2; 2 for −2 ≤ x < −1; 0 for −1 ≤ x < 1; 2 for 1 ≤ x … }

(?) { ∂w/∂t = α ∂²w/∂x², 0 < x < π, t > 0; (∂w/∂x)(0, t) = (∂w/∂x)(π, t) = 0, t > 0 }

The boundary conditions correspond to the second group in Table 11.1 with L = π, so, similar to the work in Example 11.1 in Section 11.1, we look for solutions of the PDE in the form w(x, t) = cos(nx) G(t), for n = 0, 1, 2, ... . [Recall that X₀(x) = cos(0·x) ≡ 1 is an "honorary cosine function."] Substitute w into the PDE to get

cos(nx) dG/dt = −α n² cos(nx) G(t),

hence dG/dt = −α n² G(t). The general solution of (?) is

w(x, t) = a₀/2 + Σ_{n=1}^∞ aₙ cos(nx) e^{−α n² t}.

The solution of the original problem's PDE and BCs is

T(x, t) = v(x) + w(x, t) = c₁ + x + a₀/2 + Σ_{n=1}^∞ aₙ cos(nx) e^{−α n² t},

where c₁ is an arbitrary constant. But c₁ + a₀/2 is no more or less of an arbitrary constant than is a₀/2, so it is simpler to write the general solution of the PDE and the BCs as

T(x, t) = x + a₀/2 + Σ_{n=1}^∞ aₙ cos(nx) e^{−α n² t}.

The initial condition is satisfied by solving 2x = T(x, 0) = x + a₀/2 + Σ aₙ cos(nx), 0 < x < π, that is,

x = a₀/2 + Σ_{n=1}^∞ aₙ cos(nx), 0 < x < π.

Fourier analysis implies that this is done by finding the coefficients aₙ, for n = 0, 1, 2, ... . In fact, this was done in Example 9.9 in Section 9.2:

a₀ = (2/π) ∫₀^π x dx = (2/π)[ x²/2 ]₀^π = π

and

aₙ = (2/π) ∫₀^π x cos(nx) dx = (2/π)[ x sin(nx)/n + cos(nx)/n² ]₀^π = … = 2((−1)ⁿ − 1)/(n²π).

But for n = even, (−1)ⁿ − 1 = 0; for n = odd = 2k − 1, (−1)ⁿ − 1 = −2. The solution of the problem is

T(x, t) = x + π/2 − (4/π) Σ_{k=1}^∞ [1/(2k − 1)²] cos((2k − 1)x) e^{−α(2k−1)² t}.
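The cosine coefficients quoted from Example 9.9 can be confirmed by numerical quadrature. The Simpson-rule helper below is illustrative scaffolding, not from the text:

```python
import math

def simpson(f, a, b, m=2000):
    # composite Simpson's rule; m must be even
    h = (b - a) / m
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

# Fourier cosine coefficients of f(x) = x on (0, pi)
a0 = (2 / math.pi) * simpson(lambda x: x, 0.0, math.pi)

def a(n):
    return (2 / math.pi) * simpson(lambda x: x * math.cos(n * x), 0.0, math.pi)
```

The numerical values agree with a₀ = π and aₙ = 2((−1)ⁿ − 1)/(n²π): the even-indexed coefficients vanish and the odd ones are −4/(n²π).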

11.1.4. The boundary conditions fit the fourth group of entries of Table 11.1, with L = π. As in Example 11.1 in Section 11.1, define Xₙ(x) ≜ cos((n − 1/2)x). Substitute product solutions of the form T(x, t) = Xₙ(x)G(t) into the PDE to get

Xₙ(x)Ġ(t) = ∂w/∂t = α ∂²w/∂x² = −α(n − 1/2)² Xₙ(x)G(t),

hence Ġ(t) = −α(n − 1/2)² G(t). So, the product solutions of the homogeneous problem are of the form

Tₙ(x, t) = cos((n − 1/2)x) e^{−α(n − 1/2)² t}.

The general solution of the PDE and the BCs is

T(x, t) = Σ_{n=1}^∞ dₙ cos((n − 1/2)x) e^{−α(n − 1/2)² t}.

The initial condition is satisfied by solving

sin x = T(x, 0) = Σ_{n=1}^∞ dₙ cos((n − 1/2)x), 0 < x < π.

We calculate the generalized Fourier coefficients

dₙ = (2/π) ∫₀^π sin x cos((n − 1/2)x) dx = (1/π) ∫₀^π [ sin((3/2 − n)x) + sin((1/2 + n)x) ] dx

= (1/π)( [1 − cos((3/2 − n)π)]/(3/2 − n) + [1 − cos((1/2 + n)π)]/(1/2 + n) ) = (1/π)( 1/(3/2 − n) + 1/(1/2 + n) ),

since cos((3/2 − n)π) = cos((1/2 + n)π) = 0. Hence

dₙ = (2/π) · 1/((3/2 − n)(1/2 + n)) = (2/π) · 1/(3/4 + n − n²).

The solution of the problem is

T(x, t) = −(2/π) Σ_{n=1}^∞ [1/(n² − n − 3/4)] cos((n − 1/2)x) e^{−α(n − 1/2)² t}.

© Larry Turyn, October 13, 2013
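The closed form dₙ = (2/π)/(3/4 + n − n²) can be sanity-checked by numerical integration; the Simpson helper here is illustrative scaffolding, not from the text:

```python
import math

def simpson(f, a, b, m=2000):
    # composite Simpson's rule; m must be even
    h = (b - a) / m
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

def d(n):
    # d_n = (2/pi) * integral over (0, pi) of sin(x) cos((n - 1/2) x)
    return (2 / math.pi) * simpson(
        lambda x: math.sin(x) * math.cos((n - 0.5) * x), 0.0, math.pi)

def d_closed(n):
    # closed form derived above
    return (2 / math.pi) / (0.75 + n - n * n)
```

For n = 1 both give 8/(3π); for n ≥ 2 the coefficients are negative, consistent with the sign of 3/4 + n − n².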

11.1.5. The physical situation is modeled by

{ ∂T/∂t = α ∂²T/∂x², 0 < x < 5, t > 0; (∂T/∂x)(0, t) = (∂T/∂x)(5, t) = 0, t > 0; T(x, 0) = x, 0 < x < 5 },

where the thermal diffusivity of copper is α = 1.15 × 10⁻⁴ m²/s and x is measured in m. The boundary conditions correspond to the second group in Table 11.1 with L = 5, so, similar to the work in Example 11.1 in Section 11.1, we look for solutions of the PDE in the form T(x, t) = cos(nπx/5) G(t), for n = 0, 1, 2, ... . [Recall that X₀(x) = cos(0·x) ≡ 1 is an "honorary cosine function."] Substitute T into the PDE to get

cos(nπx/5) dG/dt = −α(nπ/5)² cos(nπx/5) G(t),

hence dG/dt = −α(nπ/5)² G(t). The general solution of the PDE and the BCs is

T(x, t) = a₀/2 + Σ_{n=1}^∞ aₙ cos(nπx/5) e^{−α(nπ/5)² t}.

The initial condition is satisfied by solving x = T(x, 0) = a₀/2 + Σ aₙ cos(nπx/5), 0 < x < 5. Fourier analysis implies that this is done by finding the coefficients aₙ, for n = 0, 1, 2, ... . In fact, this is similar to what was done in Example 9.9 in Section 9.2:

a₀ = (2/5) ∫₀⁵ x dx = (2/5)[ x²/2 ]₀⁵ = 5

and

aₙ = (2/5) ∫₀⁵ x cos(nπx/5) dx = (2/5)[ x sin(nπx/5)/(nπ/5) + cos(nπx/5)/(nπ/5)² ]₀⁵ = … = 10((−1)ⁿ − 1)/(π²n²).

But for n = even, (−1)ⁿ − 1 = 0; for n = odd = 2k − 1, (−1)ⁿ − 1 = −2. The solution of the problem is

T(x, t) = 5/2 − (20/π²) Σ_{k=1}^∞ [1/(2k − 1)²] cos((2k − 1)πx/5) e^{−1.15×10⁻⁴ ((2k−1)π/5)² t}.

11.1.6. The equilibrium solution, v = v(x), satisfies ∂v/∂t ≡ 0 and

{ 0 = αv″(x), 0 < x < 5; v(0) = 20, v′(5) = 0 }.

The general solution of the ODE v″ = 0 is v(x) = c₁ + c₂x, where c₁, c₂ are arbitrary constants. Substitute that into the BCs to get 20 = v(0) = c₁ and 0 = v′(5) = c₂. The equilibrium solution is v(x) ≡ 20.

Define w = w(x, t) = T(x, t) − v(x). Similar to work in Example 11.2 in Section 11.1, w(x, t) should satisfy the homogeneous problem

{ ∂w/∂t = α ∂²w/∂x², 0 < x < 5, t > 0; w(0, t) = (∂w/∂x)(5, t) = 0, t > 0 }.

The boundary conditions fit the third group of entries of Table 11.1, with L = 5. Define Xₙ(x) ≜ sin((n − 1/2)πx/5). Substitute product solutions of the form w(x, t) = Xₙ(x)G(t) into the PDE to get

Xₙ(x)Ġ(t) = ∂w/∂t = α ∂²w/∂x² = −α((n − 1/2)π/5)² Xₙ(x)G(t),

hence Ġ(t) = −α((n − 1/2)π/5)² G(t). So, the product solutions of the homogeneous problem are of the form

wₙ(x, t) = sin((n − 1/2)πx/5) e^{−α((n − 1/2)π/5)² t}.

The general solution of the PDE and the BCs is T = v + w, that is,

T(x, t) = 20 + Σ_{n=1}^∞ cₙ sin((n − 1/2)πx/5) e^{−α((n − 1/2)π/5)² t}.

The initial condition is satisfied by solving 0 = T(x, 0) = 20 + Σ cₙ sin((n − 1/2)πx/5), 0 < x < 5, that is,

−20 = Σ_{n=1}^∞ cₙ sin((n − 1/2)πx/5), 0 < x < 5.

We calculate the generalized Fourier coefficients

cₙ = (2/5) ∫₀⁵ (−20) sin((n − 1/2)πx/5) dx = −8 · [1 − cos((n − 1/2)π)]/((n − 1/2)π/5) = −8 · 5/((n − 1/2)π) = −40/((n − 1/2)π),

since cos((n − 1/2)π) = 0. The solution of the problem is

T(x, t) = 20 − (40/π) Σ_{n=1}^∞ [1/(n − 1/2)] sin((n − 1/2)πx/5) e^{−α((n − 1/2)π/5)² t}.

11.1.7. The equilibrium solution, v = v(x), satisfies ∂v/∂t ≡ 0 and

{ 0 = αv″(x) − 1, 0 < x < 2; v(0) = v′(2) = 0 }.

The general solution of the ODE v″ = 1/α is v(x) = x²/(2α) + c₁ + c₂x, where c₁, c₂ are arbitrary constants. Substitute that into the BCs to get 0 = v(0) = c₁ and 0 = v′(2) = 2/α + c₂. The equilibrium solution is

v(x) = x²/(2α) − 2x/α = (1/(2α))(x² − 4x).

Define w = w(x, t) = T(x, t) − v(x). Similar to work in Example 11.2 in Section 11.1, w(x, t) should satisfy the homogeneous problem

{ ∂w/∂t = α ∂²w/∂x², 0 < x < 2, t > 0; w(0, t) = (∂w/∂x)(2, t) = 0, t > 0 }.

The boundary conditions fit the third group of entries of Table 11.1, with L = 2. Define Xₙ(x) ≜ sin((n − 1/2)πx/2). Substitute product solutions of the form w(x, t) = Xₙ(x)G(t) into the PDE to get Ġ(t) = −α((n − 1/2)π/2)² G(t), so the product solutions of the homogeneous problem are of the form

wₙ(x, t) = sin((n − 1/2)πx/2) e^{−α((n − 1/2)π/2)² t}.

The general solution of the PDE + BCs is T = v + w, that is,

T(x, t) = (1/(2α))(x² − 4x) + Σ_{n=1}^∞ cₙ sin((n − 1/2)πx/2) e^{−α((n − 1/2)π/2)² t}.

The initial condition is satisfied by solving 0 = T(x, 0), that is,

−(1/(2α))(x² − 4x) = Σ_{n=1}^∞ cₙ sin((n − 1/2)πx/2), 0 < x < 2.

We calculate the generalized Fourier coefficients

cₙ = (2/2) ∫₀² [ −(1/(2α))(x² − 4x) ] sin((n − 1/2)πx/2) dx = … = (8/(π³α)) · 1/(n − 1/2)³.

The solution of the problem is

T(x, t) = (1/(2α))(x² − 4x) + (8/(π³α)) Σ_{n=1}^∞ [1/(n − 1/2)³] sin((n − 1/2)πx/2) e^{−α((n − 1/2)π/2)² t}.
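The elided integration in 11.1.7 can be spot-checked numerically. The sketch below (not from the manual) uses the arbitrary illustrative value α = 1:

```python
import math

ALPHA = 1.0  # illustrative value of the diffusivity

def simpson(f, a, b, m=2000):
    # composite Simpson's rule; m must be even
    h = (b - a) / m
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

def c(n):
    # c_n = (2/2) * integral over (0, 2) of -(1/(2 alpha))(x^2 - 4x) sin((n - 1/2) pi x / 2)
    return simpson(
        lambda x: -(1.0 / (2 * ALPHA)) * (x * x - 4 * x)
        * math.sin((n - 0.5) * math.pi * x / 2.0), 0.0, 2.0)

def c_closed(n):
    return 8.0 / (math.pi ** 3 * ALPHA * (n - 0.5) ** 3)
```

The quadrature reproduces cₙ = 8/(π³α(n − 1/2)³) for the first several n.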

11.1.8. The equilibrium solution, v = v(x), satisfies 0 = v″(x), v′(0) = 0, v(1/2) = 10. The ODE has solutions v = c₁ + c₂x, where c₁, c₂ are arbitrary constants. The first BC implies 0 = v′(0) = c₂, so v(x) ≡ c₁. The second BC requires 10 = v(1/2) = c₁. The equilibrium solution is v(x) ≡ 10.

Similar to work in Example 11.1 in Section 11.1, the boundary conditions fit the fourth group of entries of Table 11.1, with L = 1/2. Define

Xₙ(x) ≜ cos((n − 1/2)πx/(1/2)) = cos((2n − 1)πx).

Substitute product solutions of the form w(x, t) = Xₙ(x)G(t) into the PDE to get

Xₙ(x)Ġ(t) = ∂w/∂t = α ∂²w/∂x² = −α((2n − 1)π)² Xₙ(x)G(t),

hence Ġ(t) = −α((2n − 1)π)² G(t). So, the product solutions of the homogeneous problem are of the form

Tₙ(x, t) = cos((2n − 1)πx) e^{−α((2n−1)π)² t}.

The general solution of the PDE + BCs is T = v + w, that is,

T(x, t) = 10 + Σ_{n=1}^∞ dₙ cos((2n − 1)πx) e^{−α((2n−1)π)² t}.

The initial condition is satisfied by solving

{ 1 for 0 < x < 1/4; 2 − 4x for 1/4 ≤ x < 1/2 } = T(x, 0) = 10 + Σ_{n=1}^∞ dₙ cos((2n − 1)πx), 0 < x < 1/2.

(?) { ∂w/∂t = ∂²w/∂x², 0 < x < π, t > 0; (∂w/∂x)(0, t) = (∂w/∂x)(π, t) = 0, t > 0 }

The boundary conditions correspond to the second group in Table 11.1 with L = π, so, similar to the work in Example 11.1 in Section 11.1, we look for solutions of the PDE in the form w(x, t) = cos(nx) G(t), for n = 0, 1, 2, ... . [Recall that X₀(x) = cos(0·x) ≡ 1 is an "honorary cosine function."] Substitute w into the PDE to get cos(nx) dG/dt = −n² cos(nx) G(t), hence dG/dt = −n² G(t). The general solution of (?) is

w(x, t) = a₀/2 + Σ_{n=1}^∞ aₙ cos(nx) e^{−n² t}.

The solution of the original problem's PDE + BCs is

T(x, t) = v(x) + w(x, t) = cos x + c₁ + a₀/2 + Σ_{n=1}^∞ aₙ cos(nx) e^{−n² t},

where c₁ is an arbitrary constant. But c₁ + a₀/2 is no more or less of an arbitrary constant than is a₀/2, so it is simpler to write the general solution of the PDE and BCs as

T(x, t) = cos x + a₀/2 + Σ_{n=1}^∞ aₙ cos(nx) e^{−n² t}.

The initial condition is satisfied by solving −x + cos x = T(x, 0) = cos x + a₀/2 + Σ aₙ cos(nx), 0 < x < π, that is,

−x = a₀/2 + Σ_{n=1}^∞ aₙ cos(nx), 0 < x < π.

Fourier analysis implies that this is done by finding the coefficients aₙ, for n = 0, 1, 2, ... . In fact, except for a minus sign factor, this was done in Example 9.9 in Section 9.2:

a₀ = (2/π) ∫₀^π (−x) dx = −(2/π)[ x²/2 ]₀^π = −π

and

aₙ = (2/π) ∫₀^π (−x) cos(nx) dx = −(2/π)[ x sin(nx)/n + cos(nx)/n² ]₀^π = … = −2((−1)ⁿ − 1)/(n²π).

But for n = even, (−1)ⁿ − 1 = 0; for n = odd = 2k − 1, (−1)ⁿ − 1 = −2. The solution of the problem is

T(x, t) = cos x − π/2 + Σ_{k=1}^∞ [4/(π(2k − 1)²)] cos((2k − 1)x) e^{−(2k−1)² t}.

11.1.12. The equilibrium solution, v = v(x), satisfies ∂v/∂t ≡ 0 and

{ 0 = αv″(x), 0 < x < π; v(0) = 20, v(π) = 70 }.

The general solution of the ODE v″ = 0 is v(x) = c₁ + c₂x, where c₁, c₂ are arbitrary constants. Substitute that into the BCs to get 20 = v(0) = c₁ and 70 = v(π) = c₁ + πc₂. The equilibrium solution is

v(x) = 20 + 50x/π.

Define w = w(x, t) = T(x, t) − v(x). Similar to work in Example 11.2 in Section 11.1, w(x, t) should satisfy the homogeneous problem

{ ∂w/∂t = α ∂²w/∂x², 0 < x < π, t > 0; w(0, t) = w(π, t) = 0, t > 0 }.

The boundary conditions fit the first group of entries of Table 11.1, with L = π. From Example 11.1 in Section 11.1, we see that the solution for w(x, t) is

w(x, t) = Σ_{n=1}^∞ bₙ sin(nx) e^{−α n² t}.

The general solution of the PDE + BCs is T = v + w, that is,

T(x, t) = 20 + 50x/π + Σ_{n=1}^∞ bₙ sin(nx) e^{−α n² t}.

The initial condition is satisfied by solving 20 + 40x = T(x, 0) = 20 + 50x/π + Σ bₙ sin(nx), 0 < x < π, that is,

(40 − 50/π)x = Σ_{n=1}^∞ bₙ sin(nx), 0 < x < π.

We calculate the generalized Fourier coefficients

bₙ = (2/π)(40 − 50/π) ∫₀^π x sin(nx) dx = (2/π)(40 − 50/π)[ −x cos(nx)/n + sin(nx)/n² ]₀^π = (2/π)(40 − 50/π) · π(−1)ⁿ⁺¹/n = ((80π − 100)/π) · (−1)ⁿ⁺¹/n.

The solution of the problem is

T(x, t) = 20 + 50x/π + ((80π − 100)/π) Σ_{n=1}^∞ [(−1)ⁿ⁺¹/n] sin(nx) e^{−α n² t}.
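The sine coefficients of (40 − 50/π)x, including the alternating sign (−1)ⁿ⁺¹, can be confirmed numerically; this check is illustrative scaffolding, not from the text:

```python
import math

def simpson(f, a, b, m=2000):
    # composite Simpson's rule; m must be even
    h = (b - a) / m
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

K = 40.0 - 50.0 / math.pi  # slope of the residual initial condition

def b(n):
    # b_n = (2/pi) * integral over (0, pi) of K x sin(n x)
    return (2 / math.pi) * simpson(lambda x: K * x * math.sin(n * x), 0.0, math.pi)
```

Since ∫₀^π x sin(nx) dx = π(−1)ⁿ⁺¹/n, the quadrature yields bₙ = 2K(−1)ⁿ⁺¹/n.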

11.1.13. The equilibrium solution is v = v(x) ≡ 0. The boundary conditions correspond to the first group in Table 11.1 with L = π, so, similar to the work in Example 11.1 in Section 11.1, we look for solutions of the PDE in the form T(x, t) = sin(nx) G(t), for n = 1, 2, 3, ... . Substitute T into the PDE to get

sin(nx) dG/dt = −(α n²/(t + 1)) sin(nx) G(t),

hence dG/dt = −(α n²/(t + 1)) G(t). This is a separable ODE, whose solution is found using

ln|G| = ∫ dG/G = −∫ α n²/(t + 1) dt = −α n² ln|t + 1| + c = ln|t + 1|^{−αn²} + c,

where c is an arbitrary constant, hence

G = C|t + 1|^{−αn²} = C(t + 1)^{−αn²},

because t + 1 > 0 for 0 < t < ∞. The general solution of the PDE + BCs is

T(x, t) = Σ_{n=1}^∞ bₙ sin(nx) (t + 1)^{−αn²}.

The initial condition is satisfied by solving f(x) = T(x, 0) = Σ bₙ sin(nx), 0 < x < π. We calculate the Fourier sine series coefficients

bₙ = (2/π) ∫₀^π f(x) sin(nx) dx.

The solution of the problem is

T(x, t) = Σ_{n=1}^∞ [ (2/π) ∫₀^π f(x) sin(nx) dx ] (t + 1)^{−αn²} sin(nx).

11.1.14. The equilibrium solution, v = v(x), satisfies ∂v/∂t ≡ 0 and

{ 0 = αv″(x), 0 < x < π; v(0) = 20, v′(π) = 0 }.

The general solution of the ODE v″ = 0 is v(x) = c₁ + c₂x, where c₁, c₂ are arbitrary constants. Substitute that into the BCs to get 20 = v(0) = c₁ and 0 = v′(π) = c₂. The equilibrium solution is v(x) ≡ 20.

Define w = w(x, t) = T(x, t) − v(x). Similar to work in Example 11.2 in Section 11.1, w(x, t) should satisfy the homogeneous problem

{ ∂w/∂t = α ∂²w/∂x², 0 < x < π, t > 0; w(0, t) = (∂w/∂x)(π, t) = 0, t > 0 }.

The boundary conditions correspond to the third group in Table 11.1, so product solutions are of the form w(x, t) = G(t) sin((n − 1/2)x). Substituting this into the PDE for w gives, as usual, G(t) = e^{−α(n − 1/2)² t}. So, the general solution of the original problem is

T(x, t) = v(x) + w(x, t) = 20 + Σ_{n=1}^∞ cₙ sin((n − 1/2)x) e^{−α(n − 1/2)² t}.

Substitute this into the IC to get 0 = T(x, 0) = 20 + Σ cₙ sin((n − 1/2)x), that is,

−20 = Σ_{n=1}^∞ cₙ sin((n − 1/2)x), 0 < x < π.

The generalized Fourier coefficients are given by

cₙ = (2/π) ∫₀^π (−20) sin((n − 1/2)x) dx = −(40/π) · [1 − cos((n − 1/2)π)]/(n − 1/2) = −40/(π(n − 1/2)),

since cos((n − 1/2)π) = 0. The solution to the problem is

T(x, t) = 20 − (40/π) Σ_{n=1}^∞ [1/(n − 1/2)] sin((n − 1/2)x) e^{−α(n − 1/2)² t}.

11.1.15. The boundary conditions correspond to the first group in Table 11.1, so, similar to the work in Example 11.6 in Section 11.1, we look for the solution of the PDE in the form

T(x, t) = Σ_{n=1}^∞ bₙ(t) sin(nπx/L).

Substitute that and the Fourier sine series for 1, in the form 1 = Σ fₙ sin(nπx/L) [we'll calculate the Fourier coefficients fₙ later], into the PDE to get

Σ ḃₙ(t) sin(nπx/L) = ∂T/∂t = ∂²T/∂x² + e^{−t} = −Σ (nπ/L)² bₙ(t) sin(nπx/L) + e^{−t} Σ fₙ sin(nπx/L),

that is,

0 = Σ [ ḃₙ(t) + (nπ/L)² bₙ(t) − fₙ e^{−t} ] sin(nπx/L),

hence, for n = 1, 2, 3, ...,

(?) ḃₙ(t) + (nπ/L)² bₙ(t) = fₙ e^{−t}.

For each n, this is a first order linear ODE. We can solve it using the method of integrating factor or the method of undetermined coefficients. Using the latter, the assumption π/L > 1 implies (nπ/L)² > 1 for every n, so the exponent −1 on the right-hand side is never a root of the characteristic equation of the corresponding homogeneous ODE, and we can assume a particular solution in the form bₙ,ₚ = Aₙ e^{−t}. Substitute this into (?) to get

−Aₙ e^{−t} + (nπ/L)² Aₙ e^{−t} = fₙ e^{−t}, so Aₙ = fₙ/((nπ/L)² − 1).

The solution of the homogeneous ODE corresponding to (?) is bₙ,ₕ = Cₙ e^{−(nπ/L)² t}. So,

T(x, t) = Σ [ bₙ,ₕ(t) + bₙ,ₚ(t) ] sin(nπx/L) = Σ [ Cₙ e^{−(nπ/L)² t} + (fₙ/((nπ/L)² − 1)) e^{−t} ] sin(nπx/L).

The next to next to last thing to do is to satisfy the initial condition

0 = T(x, 0) = Σ [ Cₙ + fₙ/((nπ/L)² − 1) ] sin(nπx/L), 0 < x < L,

hence Cₙ = −fₙ/((nπ/L)² − 1). The next to last thing to do is a Fourier analysis: 1 = Σ fₙ sin(nπx/L), where

fₙ = (2/L) ∫₀^L 1 · sin(nπx/L) dx = (2/L)[ −cos(nπx/L)/(nπ/L) ]₀^L = 2(1 − (−1)ⁿ)/(nπ).

So, fₙ = 0 for n = even. The solution of the original problem is

T(x, t) = (4/π) Σ_{k=1}^∞ [1/(2k − 1)] · 1/(((2k − 1)π/L)² − 1) · ( e^{−t} − e^{−((2k−1)π/L)² t} ) sin((2k − 1)πx/L).
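The split bₙ = Aₙe^{−t} + Cₙe^{−(nπ/L)²t} with Cₙ = −Aₙ can be verified to satisfy (?) with bₙ(0) = 0. The values L = 2 and n = 1 below are illustrative choices (note that π/L > 1 holds, as assumed above):

```python
import math

L, n = 2.0, 1                                # illustrative parameters
lam = (n * math.pi / L) ** 2                 # decay rate (n pi / L)^2
fn = 2 * (1 - (-1) ** n) / (n * math.pi)     # Fourier sine coefficient of 1
An = fn / (lam - 1)                          # particular-solution amplitude

def b(t):
    # b_n(t) = A_n e^{-t} + C_n e^{-lam t} with C_n = -A_n, so b_n(0) = 0
    return An * (math.exp(-t) - math.exp(-lam * t))

def residual(t, h=1e-6):
    # should vanish if b solves b' + lam b = f_n e^{-t}
    bdot = (b(t + h) - b(t - h)) / (2 * h)
    return bdot + lam * b(t) - fn * math.exp(-t)
```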

11.1.16. The boundary conditions correspond to the second group in Table 11.1, so, similar to the work in Example 11.6 in Section 11.1, we look for the solution of the PDE in the form

T(x, t) = b₀(t)/2 + Σ_{n=1}^∞ bₙ(t) cos(nπx/L).

Substitute that into the PDE to get

ḃ₀(t)/2 + Σ ḃₙ(t) cos(nπx/L) = ∂T/∂t = ∂²T/∂x² + g(t) = −Σ (nπ/L)² bₙ(t) cos(nπx/L) + g(t),

that is,

0 = [ ḃ₀(t)/2 − g(t) ] + Σ [ ḃₙ(t) + (nπ/L)² bₙ(t) ] cos(nπx/L).

Orthogonality of the functions 1, cos(πx/L), cos(2πx/L), ... implies

(?) 0 = ḃ₀(t)/2 − g(t)

and, for n = 1, 2, 3, ...,

(??) ḃₙ(t) + (nπ/L)² bₙ(t) = 0.

The solution of (?) by direct integration gives

b₀(t)/2 = c₀/2 + ∫₀ᵗ g(s) ds,

where c₀ is an arbitrary constant. The solution of (??) is, as usual, bₙ(t) = bₙ(0) e^{−(nπ/L)² t}. So,

T(x, t) = ( c₀/2 + ∫₀ᵗ g(s) ds ) + Σ_{n=1}^∞ bₙ(0) e^{−(nπ/L)² t} cos(nπx/L).

Substitute this into the initial condition to get

f(x) = T(x, 0) = ( c₀/2 + ∫₀⁰ g(s) ds ) + Σ_{n=1}^∞ bₙ(0) cos(nπx/L), 0 < x < L.

We can find the coefficients c₀ and bₙ(0) using orthogonality of the Fourier cosine series components: For n = 0 we get

(2/L) ∫₀^L f(x) dx = c₀ + 2 ∫₀⁰ g(s) ds = c₀ + 2·0 = c₀,

and, for n = 1, 2, 3, ...,

(2/L) ∫₀^L f(x) cos(nπx/L) dx = bₙ(0).

The solution of the problem is

T(x, t) = (1/L) ∫₀^L f(x) dx + ∫₀ᵗ g(s) ds + Σ_{n=1}^∞ [ (2/L) ∫₀^L f(x) cos(nπx/L) dx ] e^{−(nπ/L)² t} cos(nπx/L).

11.1.17. The boundary conditions correspond to the first group in Table 11.1, so, similar to the work in Example 11.6 in Section 11.1, we look for the solution of the PDE in the form T(x, t) = Σ bₙ(t) sin(nπx/L). Substitute that and the Fourier sine series for 1, in the form 1 = Σ fₙ sin(nπx/L) [we'll calculate the Fourier coefficients fₙ later], into the PDE to get

Σ ḃₙ(t) sin(nπx/L) = ∂T/∂t = ∂²T/∂x² + g(t) = −Σ (nπ/L)² bₙ(t) sin(nπx/L) + g(t) Σ fₙ sin(nπx/L),

that is,

0 = Σ [ ḃₙ(t) + (nπ/L)² bₙ(t) − fₙ g(t) ] sin(nπx/L),

hence, for n = 1, 2, 3, ...,

(?) ḃₙ(t) + (nπ/L)² bₙ(t) = fₙ g(t).

For each n, this is a first order linear ODE. We can solve it using the method of integrating factor, μ(t): Multiply the ODE (?) through by

μ(t) = exp( ∫ (nπ/L)² dt ) = e^{(nπ/L)² t}

to get

ḃₙ(t) e^{(nπ/L)² t} + (nπ/L)² e^{(nπ/L)² t} bₙ(t) = fₙ e^{(nπ/L)² t} g(t),

that is,

d/dt [ e^{(nπ/L)² t} bₙ(t) ] = fₙ e^{(nπ/L)² t} g(t).

Take the definite integral of both sides of

d/ds [ e^{(nπ/L)² s} bₙ(s) ] = fₙ e^{(nπ/L)² s} g(s)

to get

e^{(nπ/L)² t} bₙ(t) − bₙ(0) = ∫₀ᵗ d/ds [ e^{(nπ/L)² s} bₙ(s) ] ds = fₙ ∫₀ᵗ e^{(nπ/L)² s} g(s) ds.

This can be rewritten as

bₙ(t) = bₙ(0) e^{−(nπ/L)² t} + fₙ ∫₀ᵗ e^{−(nπ/L)² (t−s)} g(s) ds,

where bₙ(0) is a constant. So,

T(x, t) = Σ_{n=1}^∞ [ bₙ(0) e^{−(nπ/L)² t} + fₙ ∫₀ᵗ e^{−(nπ/L)² (t−s)} g(s) ds ] sin(nπx/L).

The next to next to last thing to do is to satisfy the initial condition

0 = T(x, 0) = Σ [ bₙ(0) + fₙ ∫₀⁰ e^{−(nπ/L)²(0−s)} g(s) ds ] sin(nπx/L) = Σ bₙ(0) sin(nπx/L)

for 0 < x < L, hence bₙ(0) = 0, n = 1, 2, 3, ... . So,

T(x, t) = Σ_{n=1}^∞ fₙ ( ∫₀ᵗ e^{−(nπ/L)²(t−s)} g(s) ds ) sin(nπx/L).

The next to last thing to do is a Fourier analysis: 1 = Σ fₙ sin(nπx/L), where

fₙ = (2/L) ∫₀^L 1 · sin(nπx/L) dx = (2/L)[ −cos(nπx/L)/(nπ/L) ]₀^L = 2(1 − (−1)ⁿ)/(nπ).

So, fₙ = 0 for n = even. The solution of the problem is

T(x, t) = (4/π) Σ_{k=1}^∞ [1/(2k − 1)] sin((2k − 1)πx/L) ∫₀ᵗ e^{−((2k−1)π/L)² (t−s)} g(s) ds.
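The Duhamel-type formula bₙ(t) = fₙ ∫₀ᵗ e^{−(nπ/L)²(t−s)} g(s) ds can be checked to satisfy (?) with bₙ(0) = 0. The choices L = 1, n = 1, and g(t) = cos t below are illustrative assumptions:

```python
import math

L, n = 1.0, 1
lam = (n * math.pi / L) ** 2                # (n pi / L)^2
fn = 2 * (1 - (-1) ** n) / (n * math.pi)    # Fourier sine coefficient of 1
g = math.cos                                # illustrative forcing term

def simpson(f, a, b, m=400):
    # composite Simpson's rule; m must be even
    if b <= a:
        return 0.0
    h = (b - a) / m
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

def b(t):
    # Duhamel integral from the solution above
    return fn * simpson(lambda s: math.exp(-lam * (t - s)) * g(s), 0.0, t)

def residual(t, h=1e-5):
    # should vanish if b solves b' + lam b = f_n g(t)
    bdot = (b(t + h) - b(t - h)) / (2 * h)
    return bdot + lam * b(t) - fn * g(t)
```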

11.1.18. This problem does not have an equilibrium solution because of the α/(t + 1) factor in the PDE. Nevertheless, we know methods for solving problems with homogeneous boundary conditions, so, like Example 11.6 in Section 11.1, we can try to "take care of" the inhomogeneity in the BCs using a function v(x) = c₁ + c₂x. Assume v(x) is a first degree polynomial. Then v(x) is both as simple as possible and, because both

∂v/∂t = ∂/∂t (c₁ + c₂x) ≡ 0 and ∂²v/∂x² = ∂²/∂x² (c₁ + c₂x) ≡ 0,

it would follow that w(x, t) ≜ T(x, t) − v(x) satisfies a problem that is relatively simple, as we will see below.

So, we want a function of the form v(x) = c₁ + c₂x to satisfy the two BCs v(0) = 0 and v(π) = B. Recall that A and B are unspecified constants. We have 0 = v(0) = c₁ and B = v(π) = c₁ + πc₂, hence c₁ = 0 and c₂ = B/π. So, v(x) = Bx/π.

Defining w(x, t) ≜ T(x, t) − v(x), we see that w(x, t) should satisfy the problem

{ ∂w/∂t = (α/(t + 1)) ∂²w/∂x² − A, 0 < x < π, t > 0; w(0, t) = w(π, t) = 0, t > 0 }.

Because of the A term, as in Example 11.6 in Section 11.1 we assume a solution in the form w(x, t) = Σ bₙ(t) sin(nx). Substitute that and the Fourier sine series for 1, in the form 1 = Σ fₙ sin(nx) [we'll calculate the Fourier coefficients fₙ later], into the PDE to get

Σ ḃₙ(t) sin(nx) = ∂w/∂t = (α/(t + 1)) ∂²w/∂x² − A = −Σ (αn²/(t + 1)) bₙ(t) sin(nx) − A Σ fₙ sin(nx),

that is,

0 = Σ [ ḃₙ(t) + (αn²/(t + 1)) bₙ(t) + A fₙ ] sin(nx),

hence, for n = 1, 2, 3, ...,

(?) ḃₙ(t) + (αn²/(t + 1)) bₙ(t) = −A fₙ.

For each n, this is a first order linear ODE. We can solve it using the method of integrating factor: The integrating factor is

μ(t) = exp( ∫ αn²/(t + 1) dt ) = e^{αn² ln(t+1)} = (t + 1)^{αn²}.

The solution of the ODE (?) follows from integrating

d/dt [ (t + 1)^{αn²} bₙ ] = (t + 1)^{αn²} ḃₙ + αn²(t + 1)^{αn²−1} bₙ = −A fₙ (t + 1)^{αn²},

hence

(t + 1)^{αn²} bₙ = −∫ A fₙ (t + 1)^{αn²} dt = −A fₙ (t + 1)^{αn²+1}/(αn² + 1) + Cₙ,

hence

bₙ = −A fₙ (t + 1)/(αn² + 1) + Cₙ (t + 1)^{−αn²},

where Cₙ is an arbitrary constant. (Note that αn² + 1 > 0, so we never divide by zero.) So,

T(x, t) = v(x) + w(x, t) = Bx/π + Σ bₙ(t) sin(nx) = Bx/π + Σ_{n=1}^∞ [ −A fₙ (t + 1)/(αn² + 1) + Cₙ(t + 1)^{−αn²} ] sin(nx).

The next to next to last thing to do is to satisfy the initial condition

f(x) = T(x, 0) = Bx/π + Σ [ −A fₙ/(αn² + 1) + Cₙ ] sin(nx),

for 0 < x < π, that is,

f(x) − Bx/π = Σ [ −A fₙ/(αn² + 1) + Cₙ ] sin(nx), 0 < x < π,

hence

Cₙ = A fₙ/(αn² + 1) + (2/π) ∫₀^π ( f(x) − Bx/π ) sin(nx) dx.

The next to last thing to do is a Fourier analysis: 1 = Σ fₙ sin(nx), where

fₙ = (2/π) ∫₀^π 1 · sin(nx) dx = (2/π)[ −cos(nx)/n ]₀^π = 2(1 − (−1)ⁿ)/(nπ).

The solution of the original problem is

T(x, t) = v(x) + w(x, t) = Bx/π − Σ_{n=1}^∞ [ A fₙ (t + 1)/(αn² + 1) ] sin(nx)
+ Σ_{n=1}^∞ [ A fₙ/(αn² + 1) + (2/π) ∫₀^π ( f(x) − Bx/π ) sin(nx) dx ] (t + 1)^{−αn²} sin(nx).
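That bₙ(t) = −Afₙ(t + 1)/(αn² + 1) + Cₙ(t + 1)^{−αn²} solves (?) for any Cₙ can be confirmed numerically; the parameter values below are arbitrary illustrations:

```python
import math

ALPHA, A, n, Cn = 1.3, 0.7, 1, 0.5           # arbitrary illustrative parameters
fn = 2 * (1 - (-1) ** n) / (n * math.pi)     # Fourier sine coefficient of 1

def b(t):
    # candidate solution from the integrating-factor computation above
    return -A * fn * (t + 1) / (ALPHA * n * n + 1) + Cn * (t + 1) ** (-ALPHA * n * n)

def residual(t, h=1e-6):
    # should vanish if b solves b' + (alpha n^2 / (t+1)) b = -A f_n
    bdot = (b(t + h) - b(t - h)) / (2 * h)
    return bdot + (ALPHA * n * n / (t + 1)) * b(t) + A * fn
```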

11.1.19. (a) Work with the three cases of λ < 0, λ = 0, λ > 0 as in Example 9.14 in Section 9.3:

Case 1: If λ = 0, then the differential equation X″(x) + λX(x) = 0 is X″(x) = 0, whose solutions are X = c₁ + c₂x, for arbitrary constants c₁, c₂. In this case, X′(x) = c₂. Applying the first BC gives 0 = X(−L) = c₁ − c₂L. Applying the second BC gives 0 = X(L) = c₁ + c₂L. Adding the two equations gives 0 = 2c₁, which implies c₁ = 0. Substituting that into the first BC gives 0 = −c₂L, hence c₂ = 0. So, both BCs are satisfied if, and only if, c₁ = c₂ = 0. When λ = 0, the ODE-BVP has only the trivial solution. So, λ = 0 is not an eigenvalue for this problem.

Case 2: If λ > 0, rewrite λ = ω², where ω ≜ √λ > 0. The differential equation X″(x) + λX(x) = 0 is X″(x) + ω²X(x) = 0, whose solutions are X = c₁ cos ωx + c₂ sin ωx, for arbitrary constants c₁, c₂. Applying the first BC gives 0 = X(−L) = c₁ cos(−ωL) + c₂ sin(−ωL) = c₁ cos ωL − c₂ sin ωL. Applying the second BC gives 0 = X(L) = c₁ cos ωL + c₂ sin ωL. Adding the two equations gives 0 = 2c₁ cos ωL, which implies c₁ cos ωL = 0. Instead, subtracting the two equations gives 0 = 2c₂ sin ωL, which implies c₂ sin ωL = 0. So, there is a non-trivial solution if, and only if, (1) c₁ cos ωL = 0 and (2) c₂ sin ωL = 0.

As a logical matter, in principle there are four possibilities: (a) c₁ = 0 and c₂ = 0, (b) c₁ = 0 and sin ωL = 0, (c) cos ωL = 0 and c₂ = 0, (d) cos ωL = 0 and sin ωL = 0.

Case (a) gives no eigenfunction, because eigenfunctions must not be identically zero. Case (d) is impossible, because cos²ωL + sin²ωL = 1. Case (b) gives characteristic equation sin ωL = 0. As in Example 9.14 in Section 9.3, there are infinitely many eigenvalues: ω = nπ/L, any positive integer n, with corresponding eigenfunctions Xₙ(x) = sin(nπx/L). Case (c) gives characteristic equation cos ωL = 0. As in problem 9.3.1, there are infinitely many eigenvalues: ω = (n − 1/2)π/L, any positive integer n, with corresponding eigenfunctions Xₙ(x) = cos((n − 1/2)πx/L).

Case 3: If λ < 0, rewrite λ = −ω², where ω ≜ √(−λ). The differential equation X″(x) + λX(x) = 0 is X″(x) − ω²X(x) = 0, whose solutions are X = c₁ cosh(ωx) + c₂ sinh(ωx), for arbitrary constants c₁, c₂. Applying the first BC gives 0 = X(−L) = c₁ cosh ωL − c₂ sinh ωL. Applying the second BC gives 0 = X(L) = c₁ cosh ωL + c₂ sinh ωL. Adding the two equations gives 0 = 2c₁ cosh ωL, which implies c₁ = 0 because cosh ωL > 0. Instead, subtracting the two equations gives 0 = 2c₂ sinh ωL, which implies c₂ = 0 because ω > 0 implies sinh ωL > 0. So there is no eigenfunction if λ < 0.

In summary, the eigenvalues/eigenfunctions are

(i) λ = (2kπ/(2L))², X₂ₖ(x) = sin(2kπx/(2L)), and

(ii) λ = ((2k − 1)π/(2L))², X₂ₖ₋₁(x) = cos((2k − 1)πx/(2L)).

(b) Using the eigenfunctions found in part (a), the general solution of the PDE and BCs is
\[
T(x,t)=\sum_{k=1}^{\infty}A_{2k-1}\cos\Big(\frac{(2k-1)\pi x}{2L}\Big)e^{-\alpha\left(\frac{(2k-1)\pi}{2L}\right)^2t}
+\sum_{k=1}^{\infty}A_{2k}\sin\Big(\frac{2k\pi x}{2L}\Big)e^{-\alpha\left(\frac{2k\pi}{2L}\right)^2t}.
\]
The initial condition to be satisfied is
\[
100=T(x,0)=\sum_{k=1}^{\infty}\Big[A_{2k-1}\cos\Big(\frac{(2k-1)\pi x}{2L}\Big)+A_{2k}\sin\Big(\frac{2k\pi x}{2L}\Big)\Big],
\]
for $0<x<L$. While this involves both sine and cosine functions, this is not a Fourier series because the frequencies in the cosine functions, $\frac{(2k-1)\pi}{2L}$, are not the same as the frequencies in the sine functions, $\frac{2k\pi}{2L}$. Using the result of Theorem 9.6 in Section 9.3, the Fourier coefficients are found by, first,
\[
A_{2k-1}=\frac{\int_0^L 100\cos\big(\frac{(2k-1)\pi x}{2L}\big)\,dx}{\int_0^L \cos^2\big(\frac{(2k-1)\pi x}{2L}\big)\,dx}
=\frac{200L}{(2k-1)\pi}\Big(\sin\Big(\frac{(2k-1)\pi}{2}\Big)-0\Big)\div\frac{L}{2}
=\frac{200L}{(2k-1)\pi}(-1)^{k+1}\div\frac{L}{2}=(-1)^{k+1}\frac{400}{(2k-1)\pi},
\]
and, second, by
\[
A_{2k}=\frac{\int_0^L 100\sin\big(\frac{k\pi x}{L}\big)\,dx}{\int_0^L \sin^2\big(\frac{k\pi x}{L}\big)\,dx}
=\frac{100L}{k\pi}\big(1-(-1)^k\big)\div\frac{L}{2}=\frac{200}{k\pi}\big(1-(-1)^k\big).
\]
But for $k$ even, $1-(-1)^k=0$; for $k$ odd, $k=2\ell-1$, $1-(-1)^k=2$. So, $A_{2\cdot\text{even}}=0$ and
\[
A_{2(2\ell-1)}=\frac{400}{(2\ell-1)\pi}.
\]
The solution of the problem is
\[
T(x,t)=\frac{400}{\pi}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{2k-1}\cos\Big(\frac{(2k-1)\pi x}{2L}\Big)e^{-\alpha\left(\frac{(2k-1)\pi}{2L}\right)^2t}
+\frac{400}{\pi}\sum_{\ell=1}^{\infty}\frac{1}{2\ell-1}\sin\Big(\frac{(2\ell-1)\pi x}{L}\Big)e^{-\alpha\left(\frac{(2\ell-1)\pi}{L}\right)^2t}.
\]
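As a numerical spot-check of the cosine coefficients $A_{2k-1}=400(-1)^{k+1}/((2k-1)\pi)$ above, the generalized-Fourier quotient of integrals can be evaluated by quadrature. This sketch is not part of the original solution; the `simpson` helper and the sample value `L = 1` are assumptions for illustration.

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

L = 1.0  # assumed sample length
ratios, closed_forms = [], []
for k in range(1, 5):
    w = (2 * k - 1) * math.pi / (2 * L)  # cosine frequency (2k-1)*pi/(2L)
    num = simpson(lambda x: 100 * math.cos(w * x), 0.0, L)
    den = simpson(lambda x: math.cos(w * x) ** 2, 0.0, L)
    ratios.append(num / den)
    closed_forms.append(400 * (-1) ** (k + 1) / ((2 * k - 1) * math.pi))

assert all(abs(r - c) < 1e-8 for r, c in zip(ratios, closed_forms))
```

The quotient form mirrors Theorem 9.6: numerator is the projection of the data onto the eigenfunction, denominator its squared norm.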

11.1.20. First find the equilibrium solution, $v=v(x)$, satisfying $\frac{\partial v}{\partial t}\equiv 0$ and
\[
\begin{cases} 0=v''(x), & 0<x<L,\\ v'(0)=10,\quad v(L)=0.\end{cases}
\]
The solutions of the ODE $0=v''$ are $v(x)=c_1+c_2x$, where $c_1,c_2$ are arbitrary constants. Substitute that into the BCs to get $10=v'(0)=c_2$ and $0=v(L)=c_1+c_2L$, whose solution is
\[
v(x)=10(x-L).
\]
Define $w=w(x,t)=T(x,t)-v(x)$. Similar to work in Example 11.2 in Section 11.1, $w(x,t)$ should satisfy the homogeneous problem
\[
(\star)\quad\begin{cases}\dfrac{\partial w}{\partial t}=\alpha\dfrac{\partial^2 w}{\partial x^2}, & 0<x<L,\ t>0,\\[1mm]
\dfrac{\partial w}{\partial x}(0,t)=w(L,t)=0, & t>0.\end{cases}
\]
The boundary conditions correspond to the fourth group in Table 11.1, so, similar to the work in Example 11.1 in Section 11.1, we look for solutions of the PDE in the form
\[
w(x,t)=\cos\Big(\frac{(n-\frac12)\pi x}{L}\Big)G(t),
\]
for $n=1,2,3,\dots$. Substitute $w$ into the PDE to get
\[
\cos\Big(\frac{(n-\frac12)\pi x}{L}\Big)\frac{dG}{dt}=-\alpha\Big(\frac{(n-\frac12)\pi}{L}\Big)^2\cos\Big(\frac{(n-\frac12)\pi x}{L}\Big)G(t),
\quad\text{hence}\quad
\frac{dG}{dt}=-\alpha\Big(\frac{(n-\frac12)\pi}{L}\Big)^2G(t).
\]
The general solution of $(\star)$ is
\[
w(x,t)=\sum_{n=1}^{\infty}d_n\cos\Big(\frac{(n-\frac12)\pi x}{L}\Big)e^{-\alpha\left(\frac{(n-1/2)\pi}{L}\right)^2t}.
\]
The solution of the original problem's PDE and the BCs is
\[
T(x,t)=v(x)+w(x,t)=10(x-L)+\sum_{n=1}^{\infty}d_n\cos\Big(\frac{(n-\frac12)\pi x}{L}\Big)e^{-\alpha\left(\frac{(n-1/2)\pi}{L}\right)^2t}.
\]
Substitute this into the IC to get $0=T(x,0)=10(x-L)+\sum_{n=1}^{\infty}d_n\cos\big(\frac{(n-\frac12)\pi x}{L}\big)$, that is,
\[
-10(x-L)=\sum_{n=1}^{\infty}d_n\cos\Big(\frac{(n-\frac12)\pi x}{L}\Big).
\]
Orthogonality implies
\[
d_n=-10\cdot\frac{2}{L}\int_0^L(x-L)\cos\Big(\frac{(n-\frac12)\pi x}{L}\Big)dx
=-\frac{20}{L}\left[\frac{(x-L)\sin\big(\frac{(n-\frac12)\pi x}{L}\big)}{(n-\frac12)\pi/L}+\frac{\cos\big(\frac{(n-\frac12)\pi x}{L}\big)}{\big((n-\frac12)\pi/L\big)^2}\right]_0^L
=-\frac{20}{L}\cdot\frac{0-1}{\big((n-\frac12)\pi/L\big)^2}=\frac{20L}{\pi^2}\cdot\frac{1}{(n-\frac12)^2}.
\]
The solution of the problem is
\[
T(x,t)=10(x-L)+\frac{20L}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{(n-\frac12)^2}\cos\Big(\frac{(n-\frac12)\pi x}{L}\Big)e^{-\alpha\left(\frac{(n-1/2)\pi}{L}\right)^2t}.
\]
So
\[
g(t)\triangleq T\Big(\frac{L}{2},t\Big)=-5L+\frac{20L}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{(n-\frac12)^2}\cos\Big(\frac{(n-\frac12)\pi}{2}\Big)e^{-\alpha\left(\frac{(n-1/2)\pi}{L}\right)^2t}.
\]
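The coefficient formula $d_n=\frac{20L}{\pi^2(n-\frac12)^2}$ can be spot-checked against the defining integral. This is an illustrative sketch only; the `simpson` helper and the sample value `L = 1` are assumptions.

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

L = 1.0  # assumed sample length
errs = []
for m in range(1, 6):
    mu = (m - 0.5) * math.pi / L                  # eigenfrequency (m - 1/2)*pi/L
    quad = -10.0 * (2.0 / L) * simpson(lambda x: (x - L) * math.cos(mu * x), 0.0, L)
    closed = 20.0 * L / (math.pi ** 2 * (m - 0.5) ** 2)
    errs.append(abs(quad - closed))

assert max(errs) < 1e-8
```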

11.1.21. Define Condition (?) to mean that all solutions of the PDE-BVP-IVP satisfy "$\lim_{t\to\infty}T(x,t)=0$, for all $x$ in $[0,L]$."

The equilibrium solution, $v=v(x)$, satisfies $\frac{\partial v}{\partial t}\equiv 0$ and
\[
\begin{cases}0=\alpha v''(x)+\beta v, & 0<x<L,\\ v(0)=v(L)=0.\end{cases}
\]
Note that the constant $\alpha$ is positive and $\beta$ is assumed to be constant. If we divide by $\alpha$ we get the ODE-BVP
\[
\begin{cases}0=v''(x)+\lambda v, & 0<x<L,\\ v(0)=v(L)=0,\end{cases}
\]
where $\lambda=\frac{\beta}{\alpha}$. But we know this as the eigenvalue problem of Example 9.14 in Section 9.3. Condition (?) is true only if the equilibrium solution $v(x)$ is identically zero, that is, only if $\lambda=\frac{\beta}{\alpha}$ is not an eigenvalue, that is, only if
\[
\beta\ne\alpha\cdot\Big(\frac{m\pi}{L}\Big)^2,\quad\text{for }m=1,2,3,\dots.
\]
Define $w=w(x,t)=T(x,t)-v(x)$. Similar to work in Example 11.2 in Section 11.1, $w(x,t)$ should satisfy the homogeneous problem
\[
(\star)\quad\begin{cases}\dfrac{\partial w}{\partial t}=\alpha\dfrac{\partial^2 w}{\partial x^2}+\beta w, & 0<x<L,\ t>0,\\ w(0,t)=w(L,t)=0, & t>0.\end{cases}
\]
The boundary conditions correspond to the first group in Table 11.1, so, similar to the work in Example 11.1 in Section 11.1, we look for solutions of the PDE in the form $w(x,t)=\sin\big(\frac{n\pi x}{L}\big)G(t)$, for $n=1,2,3,\dots$. Substitute $w$ into the PDE to get
\[
\sin\Big(\frac{n\pi x}{L}\Big)\frac{dG}{dt}=-\alpha\Big(\frac{n\pi}{L}\Big)^2\sin\Big(\frac{n\pi x}{L}\Big)G(t)+\beta\sin\Big(\frac{n\pi x}{L}\Big)G(t),
\quad\text{hence}\quad
\frac{dG}{dt}=\Big(\beta-\alpha\Big(\frac{n\pi}{L}\Big)^2\Big)G(t).
\]
The general solution of $(\star)$ is
\[
w(x,t)=\sum_{n=1}^{\infty}a_n\sin\Big(\frac{n\pi x}{L}\Big)e^{\left(\beta-\alpha\left(\frac{n\pi}{L}\right)^2\right)t}.
\]
The solution of the original problem's PDE + BCs is
\[
T(x,t)=v(x)+w(x,t)=v(x)+\sum_{n=1}^{\infty}b_n\sin\Big(\frac{n\pi x}{L}\Big)e^{\left(\beta-\alpha\left(\frac{n\pi}{L}\right)^2\right)t},
\]
where the $b_n$'s are arbitrary constants that depend on the initial condition. In order for all solutions to go to zero as $t\to\infty$, it is necessary and sufficient that $v(x)\equiv 0$ and $\beta-\alpha\big(\frac{n\pi}{L}\big)^2<0$ for all integers $n\ge 1$, that is,
\[
\beta<\alpha\Big(\frac{\pi}{L}\Big)^2.
\]
The slowest-decaying mode is $n=1$, so the time constant is $\tau=\dfrac{1}{\alpha(\pi/L)^2-\beta}$.
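The stability threshold $\beta<\alpha(\pi/L)^2$ can be checked directly on the modal growth rates $\beta-\alpha(n\pi/L)^2$. This sketch is not from the original solution; the sample values of `alpha` and `L` are assumptions.

```python
import math

alpha, L = 0.5, 2.0                       # assumed sample diffusivity and length
beta_crit = alpha * (math.pi / L) ** 2    # threshold beta = alpha*(pi/L)^2

def exponents(beta, nmax=50):
    # growth/decay rates of the modes sin(n*pi*x/L)*exp((beta - alpha*(n*pi/L)^2)*t)
    return [beta - alpha * (n * math.pi / L) ** 2 for n in range(1, nmax + 1)]

stable = exponents(0.99 * beta_crit)      # just below threshold
unstable = exponents(1.01 * beta_crit)    # just above threshold
tau = 1.0 / (beta_crit - 0.99 * beta_crit)  # time constant of the slowest (n = 1) mode

assert all(e < 0 for e in stable)         # every mode decays below threshold
assert unstable[0] > 0                    # the n = 1 mode grows above threshold
assert abs(tau + 1.0 / stable[0]) < 1e-9  # tau = -1/(slowest decay exponent)
```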

Here the PDE is $u_{tt}=4u_{xx}$, with time-varying BCs $u(0,t)=\delta(t)$, $u(L,t)=\epsilon(t)$ and ICs $u(x,0)=f(x)$, $\frac{\partial u}{\partial t}(x,0)=g(x)$. Define $v(x,t)\triangleq\delta(t)+\big(\epsilon(t)-\delta(t)\big)\frac{x}{L}$, which satisfies the BCs, and let $w\triangleq u-v$. Then $w(x,t)$ should satisfy
\[
\begin{cases}
\dfrac{\partial^2w}{\partial t^2}=4\dfrac{\partial^2w}{\partial x^2}-\ddot\delta(t)-\big(\ddot\epsilon(t)-\ddot\delta(t)\big)\dfrac{x}{L}, & 0<x<L,\ t>0,\\[1mm]
w(0,t)=w(L,t)=0, & t>0,\\[1mm]
w(x,0)=f(x)-\delta(0)-\big(\epsilon(0)-\delta(0)\big)\dfrac{x}{L}, & 0<x<L,\\[1mm]
\dfrac{\partial w}{\partial t}(x,0)=g(x)-\dot\delta(0)-\big(\dot\epsilon(0)-\dot\delta(0)\big)\dfrac{x}{L}, & 0<x<L.
\end{cases}
\]
Because of the homogeneous BCs, let's look for a solution in the form
\[
w(x,t)=\sum_{n=1}^{\infty}b_n(t)\sin\Big(\frac{n\pi x}{L}\Big).
\]
Substitute that into the PDE for $w$ to get
\[
\sum_{n=1}^{\infty}\ddot b_n(t)\sin\Big(\frac{n\pi x}{L}\Big)=\sum_{n=1}^{\infty}\Big({-}4\Big(\frac{n\pi}{L}\Big)^2b_n(t)\Big)\sin\Big(\frac{n\pi x}{L}\Big)-\ddot\delta(t)-\big(\ddot\epsilon(t)-\ddot\delta(t)\big)\frac{x}{L}.
\]
We need to get everything in terms of Fourier sine series components $\sin\big(\frac{n\pi x}{L}\big)$, so we do Fourier sine series analyses:
\[
1=\sum_{n=1}^{\infty}h_n\sin\Big(\frac{n\pi x}{L}\Big),\quad
h_n=\frac{2}{L}\int_0^L 1\cdot\sin\Big(\frac{n\pi x}{L}\Big)dx=\frac{2}{L}\Big[\frac{\cos(n\pi x/L)}{-n\pi/L}\Big]_0^L=\frac{2\big(1-(-1)^n\big)}{n\pi},
\]
and
\[
\frac{x}{L}=\sum_{n=1}^{\infty}k_n\sin\Big(\frac{n\pi x}{L}\Big),\quad
k_n=\frac{2}{L}\int_0^L\frac{x}{L}\sin\Big(\frac{n\pi x}{L}\Big)dx=\frac{2}{L^2}\Big[\frac{x\cos(n\pi x/L)}{-n\pi/L}+\frac{\sin(n\pi x/L)}{(n\pi/L)^2}\Big]_0^L=\frac{2(-1)^{n+1}}{n\pi}.
\]
So, the PDE can be rewritten as
\[
0=\sum_{n=1}^{\infty}\Big(\ddot b_n(t)+4\Big(\frac{n\pi}{L}\Big)^2b_n(t)+\ddot\delta(t)\frac{2\big(1-(-1)^n\big)}{n\pi}+\big(\ddot\epsilon(t)-\ddot\delta(t)\big)\frac{2(-1)^{n+1}}{n\pi}\Big)\sin\Big(\frac{n\pi x}{L}\Big).
\]
Orthogonality implies that for $n=1,2,3,\dots$,
\[
\ddot b_n+\Big(\frac{2n\pi}{L}\Big)^2b_n=\ddot\varphi_n(t),\quad\text{where}\quad
\varphi_n(t)\triangleq\frac{2}{n\pi}\Big({-}\big(1-(-1)^n\big)\delta(t)-(-1)^{n+1}\big(\epsilon(t)-\delta(t)\big)\Big)=\frac{2}{n\pi}\big({-}\delta(t)+(-1)^n\epsilon(t)\big).
\]
This ODE can be solved using the method of variation of parameters or Laplace transforms. [If the functions $\delta(t)$ and $\epsilon(t)$ were explicitly given then it might be possible to use the method of undetermined coefficients.] Using the result of Example 4.33 in Section 4.5, with $\omega=\frac{2n\pi}{L}$, a particular solution is
\[
b_{n,p}(t)=\frac{L}{2n\pi}\int_0^t\sin\Big(\frac{2n\pi}{L}(t-s)\Big)\ddot\varphi_n(s)\,ds.
\]
Two integrations by parts yield
\[
b_{n,p}(t)=-\dot\varphi_n(0)\frac{L}{2n\pi}\sin\Big(\frac{2n\pi t}{L}\Big)-\varphi_n(0)\cos\Big(\frac{2n\pi t}{L}\Big)+\varphi_n(t)-\frac{2n\pi}{L}\int_0^t\sin\Big(\frac{2n\pi}{L}(t-s)\Big)\varphi_n(s)\,ds.
\]
The general solution for $b_n(t)$ is $b_n(t)=b_{n,h}(t)+b_{n,p}(t)$; by combining terms, this can be rewritten as
\[
b_n(t)=A_n\cos\Big(\frac{2n\pi t}{L}\Big)+B_n\sin\Big(\frac{2n\pi t}{L}\Big)+\varphi_n(t)-\frac{2n\pi}{L}\int_0^t\sin\Big(\frac{2n\pi}{L}(t-s)\Big)\varphi_n(s)\,ds,
\]
where $A_n,B_n$ are arbitrary constants. The general solution of the PDE and time-varying BCs is
\[
u(x,t)=v(x,t)+w(x,t)=\delta(t)+\big(\epsilon(t)-\delta(t)\big)\frac{x}{L}+\sum_{n=1}^{\infty}b_n(t)\sin\Big(\frac{n\pi x}{L}\Big).
\]
The first IC this needs to satisfy is
\[
f(x)=u(x,0)=\delta(0)+\big(\epsilon(0)-\delta(0)\big)\frac{x}{L}+\sum_{n=1}^{\infty}\big(A_n+\varphi_n(0)\big)\sin\Big(\frac{n\pi x}{L}\Big).
\]
Recalling that $\varphi_n(0)=\frac{2}{n\pi}\big({-}\delta(0)+(-1)^n\epsilon(0)\big)$, we see that
\[
A_n+\varphi_n(0)=\frac{2}{L}\int_0^L\Big(f(x)-\delta(0)-\big(\epsilon(0)-\delta(0)\big)\frac{x}{L}\Big)\sin\Big(\frac{n\pi x}{L}\Big)dx
=\frac{2}{L}\int_0^Lf(x)\sin\Big(\frac{n\pi x}{L}\Big)dx-\delta(0)h_n-\big(\epsilon(0)-\delta(0)\big)k_n
=\frac{2}{L}\int_0^Lf(x)\sin\Big(\frac{n\pi x}{L}\Big)dx+\varphi_n(0),
\]
so
\[
A_n=\frac{2}{L}\int_0^Lf(x)\sin\Big(\frac{n\pi x}{L}\Big)dx.
\]
Using Leibniz's rule, the convolution term contributes nothing at $t=0$, so the second IC requires
\[
g(x)=\frac{\partial u}{\partial t}(x,0)=\dot\delta(0)+\big(\dot\epsilon(0)-\dot\delta(0)\big)\frac{x}{L}+\sum_{n=1}^{\infty}\Big(\frac{2n\pi}{L}B_n+\dot\varphi_n(0)\Big)\sin\Big(\frac{n\pi x}{L}\Big).
\]
Recalling that $\dot\varphi_n(0)=\frac{2}{n\pi}\big({-}\dot\delta(0)+(-1)^n\dot\epsilon(0)\big)$, the same computation as for $A_n$ gives
\[
\frac{2n\pi}{L}B_n=\frac{2}{L}\int_0^Lg(x)\sin\Big(\frac{n\pi x}{L}\Big)dx,\quad\text{that is,}\quad
B_n=\frac{1}{n\pi}\int_0^Lg(x)\sin\Big(\frac{n\pi x}{L}\Big)dx.
\]
The solution of the problem is
\[
u(x,t)=\delta(t)+\big(\epsilon(t)-\delta(t)\big)\frac{x}{L}
+\sum_{n=1}^{\infty}\Big(A_n\cos\Big(\frac{2n\pi t}{L}\Big)+B_n\sin\Big(\frac{2n\pi t}{L}\Big)+\varphi_n(t)
-\frac{4}{L}\int_0^t\sin\Big(\frac{2n\pi}{L}(t-s)\Big)\big({-}\delta(s)+(-1)^n\epsilon(s)\big)ds\Big)\sin\Big(\frac{n\pi x}{L}\Big),
\]
with $A_n$ and $B_n$ as above.
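The two sine-series expansions used above, $1=\sum h_n\sin(n\pi x/L)$ and $x/L=\sum k_n\sin(n\pi x/L)$, are easy to confirm numerically. This check is an illustrative sketch; the `simpson` helper and the sample value `L = 3` are assumptions.

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

L = 3.0  # assumed sample length
ok = True
for n in range(1, 7):
    h_quad = (2 / L) * simpson(lambda x: math.sin(n * math.pi * x / L), 0.0, L)
    k_quad = (2 / L) * simpson(lambda x: (x / L) * math.sin(n * math.pi * x / L), 0.0, L)
    h_closed = 2 * (1 - (-1) ** n) / (n * math.pi)   # coefficient of 1
    k_closed = 2 * (-1) ** (n + 1) / (n * math.pi)   # coefficient of x/L
    ok = ok and abs(h_quad - h_closed) < 1e-8 and abs(k_quad - k_closed) < 1e-8

assert ok
```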

11.2.10. The equilibrium solution $v=v(x)$ satisfies $\frac{\partial v}{\partial t}\equiv 0$, so we want
\[
\begin{cases}0=c^2v''(x), & 0<x<L,\\ v(0)=1,\quad v(L)=1+h.\end{cases}
\]
The solution of the ODE $0=v''(x)$ is $v(x)=c_1+c_2x$, so we need $1=v(0)=c_1$ and $1+h=v(L)=c_1+c_2L$, so $c_1=1$ and $c_2=h/L$. The equilibrium solution is
\[
v(x)=1+\frac{hx}{L}.
\]
The problem that $w(x,t)\triangleq u(x,t)-v(x)$ should satisfy is
\[
\begin{cases}\dfrac{\partial^2w}{\partial t^2}=c^2\dfrac{\partial^2w}{\partial x^2}, & 0<x<L,\ t>0,\\ w(0,t)=w(L,t)=0, & t>0.\end{cases}
\]
This is the same as (11.21–11.22) in Section 11.2, so we know the general solution is
\[
w(x,t)=\sum_{n=1}^{\infty}\sin\Big(\frac{n\pi x}{L}\Big)\Big(a_n\cos\Big(\frac{n\pi ct}{L}\Big)+b_n\sin\Big(\frac{n\pi ct}{L}\Big)\Big).
\]
The general solution of the original problem's PDE and BCs is
\[
u(x,t)=v(x)+w(x,t)=1+\frac{h}{L}x+\sum_{n=1}^{\infty}\sin\Big(\frac{n\pi x}{L}\Big)\Big(a_n\cos\Big(\frac{n\pi ct}{L}\Big)+b_n\sin\Big(\frac{n\pi ct}{L}\Big)\Big).
\]
Substitute this, along with
\[
\frac{\partial u}{\partial t}(x,t)=0+0+\sum_{n=1}^{\infty}\frac{n\pi c}{L}\sin\Big(\frac{n\pi x}{L}\Big)\Big({-}a_n\sin\Big(\frac{n\pi ct}{L}\Big)+b_n\cos\Big(\frac{n\pi ct}{L}\Big)\Big),
\]
into the ICs to get $0=u(x,0)=1+\frac{h}{L}x+\sum_{n=1}^{\infty}a_n\sin\big(\frac{n\pi x}{L}\big)$, that is,
\[
-1-\frac{h}{L}x=\sum_{n=1}^{\infty}a_n\sin\Big(\frac{n\pi x}{L}\Big),
\]
hence
\[
a_n=-\frac{2}{L}\int_0^L\Big(1+\frac{h}{L}x\Big)\sin\Big(\frac{n\pi x}{L}\Big)dx=\dots=-\frac{2}{n\pi}\big(1-(1+h)(-1)^n\big),
\]
and $0=\frac{\partial u}{\partial t}(x,0)=\sum_{n=1}^{\infty}\frac{n\pi c}{L}b_n\sin\big(\frac{n\pi x}{L}\big)$, hence $b_n=0$ for all $n$. The solution of the problem is
\[
u(x,t)=1+\frac{h}{L}x-\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}\big(1-(1+h)(-1)^n\big)\cos\Big(\frac{n\pi ct}{L}\Big)\sin\Big(\frac{n\pi x}{L}\Big).
\]
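The coefficient $a_n=-\frac{2}{n\pi}\big(1-(1+h)(-1)^n\big)$ elided by the "..." above can be confirmed by quadrature. This is an illustrative sketch; the `simpson` helper and the sample values `L = 2`, `h = 0.7` are assumptions.

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    step = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * step)
    return s * step / 3

L, h = 2.0, 0.7  # assumed sample length and end-displacement parameter
errs = []
for n in range(1, 7):
    quad = -(2 / L) * simpson(lambda x: (1 + h * x / L) * math.sin(n * math.pi * x / L), 0.0, L)
    closed = -(2 / (n * math.pi)) * (1 - (1 + h) * (-1) ** n)
    errs.append(abs(quad - closed))

assert max(errs) < 1e-8
```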

11.2.11. The boundary conditions correspond to the first group in Table 11.1, so, similar to the work in Example 11.6 in Section 11.1, we look for product solutions of the PDE in the form
\[
u(x,t)=\sin\Big(\frac{n\pi x}{L}\Big)G(t).
\]
Substitute this into the PDE to get
\[
\sin\Big(\frac{n\pi x}{L}\Big)\ddot G+2\sin\Big(\frac{n\pi x}{L}\Big)\dot G=\frac{\partial^2u}{\partial t^2}+2\frac{\partial u}{\partial t}=c^2\frac{\partial^2u}{\partial x^2}-u=-c^2\Big(\frac{n\pi}{L}\Big)^2\sin\Big(\frac{n\pi x}{L}\Big)G-\sin\Big(\frac{n\pi x}{L}\Big)G,
\]
hence $\ddot G+2\dot G=-c^2\big(\frac{n\pi}{L}\big)^2G-G$, that is,
\[
\ddot G+2\dot G+\Big(1+\Big(\frac{n\pi c}{L}\Big)^2\Big)G=0.
\]
Substitute in solutions in the form $G(t)=e^{st}$ to get the requirement that
\[
s^2+2s+1+\Big(\frac{n\pi c}{L}\Big)^2=0,\quad\text{that is,}\quad (s+1)^2+\Big(\frac{n\pi c}{L}\Big)^2=0,
\]
whose solutions are $s=-1\pm i\,\frac{n\pi c}{L}$. So,
\[
G(t)=c_1e^{-t}\cos\Big(\frac{n\pi ct}{L}\Big)+c_2e^{-t}\sin\Big(\frac{n\pi ct}{L}\Big),
\]
where $c_1,c_2$ are arbitrary constants. The solution of the problem is
\[
u(x,t)=e^{-t}\sum_{n=1}^{\infty}\Big(a_n\cos\Big(\frac{n\pi ct}{L}\Big)+b_n\sin\Big(\frac{n\pi ct}{L}\Big)\Big)\sin\Big(\frac{n\pi x}{L}\Big),
\]
where $a_n,b_n$ are arbitrary constants. [Note: instead of substituting product solutions into the PDE, similar to Example 11.6 in Section 11.1 we could have plugged into the PDE a solution in the form
\[
u(x,t)=\sum_{n=1}^{\infty}\Big(a_n(t)\cos\Big(\frac{n\pi ct}{L}\Big)+b_n(t)\sin\Big(\frac{n\pi ct}{L}\Big)\Big)\sin\Big(\frac{n\pi x}{L}\Big).
\]
The work would have been more complicated and would require a little more thought, but still we could have arrived at the same final conclusion.]

11.2.12. $u\big(\frac{L}{2},t\big)\equiv 0$ and
\[
u(x,t)=\sum_{n=1}^{\infty}a_n\cos\Big(\frac{n\pi ct}{L}\Big)\sin\Big(\frac{n\pi x}{L}\Big)
\]
imply
\[
(\star)\quad 0=u\Big(\frac{L}{2},t\Big)=\sum_{n=1}^{\infty}a_n\sin\Big(\frac{n\pi}{2}\Big)\cos\Big(\frac{n\pi ct}{L}\Big).
\]
But $(\star)$ expresses a conclusion about a Fourier cosine series! Define $A_n=a_n\sin\big(\frac{n\pi}{2}\big)$, so $(\star)$ is
\[
(\star\star)\quad 0=u\Big(\frac{L}{2},t\Big)=\sum_{n=1}^{\infty}A_n\cos\Big(\frac{n\pi ct}{L}\Big).
\]
This is true for all $t\ge 0$, so, in particular it is true on the interval $0\le t\le\frac{2L}{c}$. The function $g(t)\equiv 0$, defined on the interval $\big[0,\frac{2L}{c}\big]$, is its own Fourier cosine series. So, all of its Fourier coefficients must be zero, that is,
\[
0=A_n=a_n\sin\Big(\frac{n\pi}{2}\Big),\quad n=1,2,3,\dots.
\]
But, for all even $n$, $\sin\big(\frac{n\pi}{2}\big)=0$, so $0=a_{\text{even}}\cdot 0$ gives no conclusion about $a_{\text{even}}$. For $n$ odd, $n=2k-1$,
\[
0=A_{2k-1}=a_{2k-1}\sin\Big(\frac{(2k-1)\pi}{2}\Big)=(-1)^{k+1}a_{2k-1},
\]
for $k=1,2,3,\dots$. This implies that $a_n=0$ for all odd $n$, that is, for all $n$ of the form $n=2k-1$ for some integer $k\ge 1$. So,
\[
u(x,t)=\sum_{k=1}^{\infty}a_{2k}\cos\Big(\frac{2k\pi ct}{L}\Big)\sin\Big(\frac{2k\pi x}{L}\Big).
\]
Further, $u(x,t)$ is odd about $x=\frac{L}{2}$, that is, $u\big(\frac{L}{2}-z,t\big)=-u\big(\frac{L}{2}+z,t\big)$ for $0\le z\le\frac{L}{2}$. Why? Because for all integers $k$, the function $\sin\big(\frac{2k\pi x}{L}\big)$ is odd about $x=\frac{L}{2}$:
\[
\sin\Big(\frac{2k\pi}{L}\Big(\frac{L}{2}-z\Big)\Big)=\sin\Big(k\pi-\frac{2k\pi z}{L}\Big)=\sin(k\pi)\cos\frac{2k\pi z}{L}-\cos(k\pi)\sin\frac{2k\pi z}{L}=-(-1)^k\sin\frac{2k\pi z}{L},
\]
versus
\[
\sin\Big(\frac{2k\pi}{L}\Big(\frac{L}{2}+z\Big)\Big)=\sin\Big(k\pi+\frac{2k\pi z}{L}\Big)=\sin(k\pi)\cos\frac{2k\pi z}{L}+\cos(k\pi)\sin\frac{2k\pi z}{L}=(-1)^k\sin\frac{2k\pi z}{L}.
\]
So, for each integer $k$, $\sin\big(\frac{2k\pi}{L}\big(\frac{L}{2}-z\big)\big)=-\sin\big(\frac{2k\pi}{L}\big(\frac{L}{2}+z\big)\big)$. So, $u(x,t)$ is odd about $\frac{L}{2}$.

11.2.13. $u\big(\frac{L}{3},t\big)\equiv 0$ and
\[
u(x,t)=\sum_{n=1}^{\infty}a_n\cos\Big(\frac{n\pi ct}{L}\Big)\sin\Big(\frac{n\pi x}{L}\Big)
\]
imply
\[
(\star)\quad 0=u\Big(\frac{L}{3},t\Big)=\sum_{n=1}^{\infty}a_n\sin\Big(\frac{n\pi}{3}\Big)\cos\Big(\frac{n\pi ct}{L}\Big).
\]
But $(\star)$ expresses a conclusion about a Fourier cosine series! Define $A_n=a_n\sin\big(\frac{n\pi}{3}\big)$, so $(\star)$ is
\[
(\star\star)\quad 0=u\Big(\frac{L}{3},t\Big)=\sum_{n=1}^{\infty}A_n\cos\Big(\frac{n\pi ct}{L}\Big).
\]
This is true for all $t\ge 0$, so, in particular it is true on the interval $0\le t\le\frac{2L}{c}$. The function $g(t)\equiv 0$, defined on the interval $\big[0,\frac{2L}{c}\big]$, is its own Fourier cosine series. So, all of its Fourier coefficients must be zero, that is,
\[
0=A_n=a_n\sin\Big(\frac{n\pi}{3}\Big),\quad n=1,2,3,\dots.
\]
But, for all $n$ of the form $n=3k$ for some integer $k$, $\sin\big(\frac{n\pi}{3}\big)=0$, so $0=a_{3k}\cdot 0$ gives no conclusion about $a_{3k}$. For $n\ne 3k$, $\sin\big(\frac{n\pi}{3}\big)\ne 0$, so $0=A_n=a_n\sin\big(\frac{n\pi}{3}\big)$ implies $a_n=0$. That is, $a_n=0$ for all $n$ of the form $n=3k-1$ or $n=3k-2$ for some integer $k\ge 1$.
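Two small facts used in problems 11.2.11 and 11.2.12 can be confirmed numerically: the characteristic roots $s=-1\pm i\,n\pi c/L$, and the odd symmetry of $\sin(2k\pi x/L)$ about $x=L/2$. This is an illustrative sketch; the sample values `L = 1.5`, `c = 2.0` are assumptions.

```python
import math

L, c = 1.5, 2.0  # assumed sample length and wave speed

# 11.2.11: s = -1 + i*n*pi*c/L satisfies s^2 + 2s + 1 + (n*pi*c/L)^2 = 0
roots_ok = True
for n in range(1, 6):
    s = complex(-1.0, n * math.pi * c / L)
    roots_ok = roots_ok and abs(s * s + 2 * s + 1 + (n * math.pi * c / L) ** 2) < 1e-9

# 11.2.12: sin(2k*pi*(L/2 - z)/L) = -sin(2k*pi*(L/2 + z)/L)
sym_ok = True
for k in range(1, 6):
    for z in (0.1, 0.25, 0.4):
        lhs = math.sin(2 * k * math.pi * (L / 2 - z) / L)
        rhs = -math.sin(2 * k * math.pi * (L / 2 + z) / L)
        sym_ok = sym_ok and abs(lhs - rhs) < 1e-12

assert roots_ok and sym_ok
```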

11.2.14. Similar to work in problem 11.1.15, the boundary conditions correspond to the first group in Table 11.1, so we look for the solution of the PDE in the form
\[
u(x,t)=\sum_{n=1}^{\infty}A_n(t)\sin\Big(\frac{n\pi x}{L}\Big).
\]
Substitute that and the Fourier sine series for $1$, in the form $1=\sum_{n=1}^{\infty}f_n\sin\big(\frac{n\pi x}{L}\big)$, into the PDE to get
\[
\sum_{n=1}^{\infty}\ddot A_n(t)\sin\Big(\frac{n\pi x}{L}\Big)=\frac{\partial^2u}{\partial t^2}=c^2\frac{\partial^2u}{\partial x^2}+\cos\omega t
=-\sum_{n=1}^{\infty}\Big(\frac{n\pi c}{L}\Big)^2A_n(t)\sin\Big(\frac{n\pi x}{L}\Big)+\cos(\omega t)\sum_{n=1}^{\infty}f_n\sin\Big(\frac{n\pi x}{L}\Big),
\]
that is,
\[
0=\sum_{n=1}^{\infty}\Big(\ddot A_n(t)+\Big(\frac{n\pi c}{L}\Big)^2A_n(t)-f_n\cos(\omega t)\Big)\sin\Big(\frac{n\pi x}{L}\Big),
\]
hence, for $n=1,2,3,\dots$,
\[
(\star)\quad \ddot A_n(t)+\Big(\frac{n\pi c}{L}\Big)^2A_n(t)=f_n\cos\omega t.
\]
For each $n$, this is a second order linear ODE. We can solve it using the method of variation of parameters or the method of undetermined coefficients. There are two cases:
(1) If $\omega\ne\frac{n\pi c}{L}$ for all positive integers $n$, then $A_{n,p}(t)=c_1\cos\omega t+c_2\sin\omega t$. This is the "beats phenomenon" case: the particular solution will remain bounded on the interval $0\le t<\infty$;
(2) If $\omega=\frac{n\pi c}{L}$ for some positive integer $n$, then $A_{n,p}(t)=c_1t\cos\omega t+c_2t\sin\omega t$. This is the "pure resonance" case: the particular solution will become unbounded on the interval $0\le t<\infty$.
No matter what is the value of $\omega$, the corresponding homogeneous solution has the form
\[
A_{n,h}(t)=a_n\cos\Big(\frac{n\pi ct}{L}\Big)+b_n\sin\Big(\frac{n\pi ct}{L}\Big),
\]
which will remain bounded on the interval $0\le t<\infty$. The final conclusion is that a solution of the original PDE and BCs can have $\max_{0\le t<\infty}|u(x,t)|=\infty$ only in case (2), that is, only if $\omega=\frac{n\pi c}{L}$ for some positive integer $n$.

11.2.15. (b) Suppose, to the contrary, that $\lambda>0$ is an eigenvalue of the problem, with eigenfunction $X(x)$. Multiply the ODE $X''''(x)+\lambda X(x)=0$ by $X(x)$, integrate over $[0,L]$, and integrate by parts twice:
\[
0=\int_0^LX''''(x)X(x)\,dx+\lambda\int_0^L\big(X(x)\big)^2dx
=\big[X'''X\big]_0^L-\big[X''X'\big]_0^L+\int_0^L\big(X''(x)\big)^2dx+\lambda\int_0^L\big(X(x)\big)^2dx.
\]
The BCs $X(0)=X(L)=0$ and $X''(0)=X''(L)=0$ make the boundary terms vanish, so
\[
0=\int_0^L\big(X''(x)\big)^2dx+\lambda\int_0^L\big(X(x)\big)^2dx.
\]
Both terms are nonnegative when $\lambda>0$, so $\lambda>0$ implies $\int_0^L\big(X(x)\big)^2dx=0$, which implies $X(x)\equiv 0$ on the interval $[0,L]$, contradicting $X(x)$ being an eigenfunction. So, $\lambda>0$ cannot be an eigenvalue.

(c) Define $\omega=(-\lambda)^{1/4}$. The ODE $X''''+\lambda X=0$'s characteristic polynomial $s^4+\lambda=s^4-\omega^4=(s^2-\omega^2)(s^2+\omega^2)$ has roots $s=\pm\omega,\pm i\omega$, so the ODE's general solution is
\[
X(x)=c_1\cosh\omega x+c_2\sinh\omega x+c_3\cos\omega x+c_4\sin\omega x.
\]
Noting that $X''(x)=\omega^2(c_1\cosh\omega x+c_2\sinh\omega x-c_3\cos\omega x-c_4\sin\omega x)$, the BCs imply (1) $0=X(0)=c_1+c_3$ and $0=X''(0)=\omega^2(c_1-c_3)$; the latter $\Rightarrow$ (2) $0=c_1-c_3$. Add equations (1) and (2) to get $0=2c_1$; subtract (2) from (1) to get $0=2c_3$. So, $X(x)=c_2\sinh\omega x+c_4\sin\omega x$. Substitute this into the two remaining boundary conditions to get (3) $0=X(L)=c_2\sinh\omega L+c_4\sin\omega L$, and (4) $0=\omega^{-2}X''(L)=c_2\sinh\omega L-c_4\sin\omega L$. Add equations (3) and (4) to get (5) $0=2c_2\sinh\omega L$. Because $\omega>0$ and $L>0$ imply $\sinh\omega L>0$, (5) implies $c_2=0$. Subtract (4) from (3) to get $0=2c_4\sin\omega L$. So, there is an eigenfunction $X_n(x)=\sin\big(\frac{n\pi x}{L}\big)$ only corresponding to eigenvalues $\lambda_n=-\big(\frac{n\pi}{L}\big)^4$.

11.2.16. The results of problem 11.2.15 suggest trying a solution of problem 11.2.16 in the form
\[
y(x,t)=\sum_{n=1}^{\infty}G_n(t)\sin\Big(\frac{n\pi x}{L}\Big).
\]
Substitute that into the PDE to get
\[
\sum_{n=1}^{\infty}\ddot G_n(t)\sin\Big(\frac{n\pi x}{L}\Big)=\frac{\partial^2y}{\partial t^2}=-c^2EI\frac{\partial^4y}{\partial x^4}=-\sum_{n=1}^{\infty}c^2EI\Big(\frac{n\pi}{L}\Big)^4G_n(t)\sin\Big(\frac{n\pi x}{L}\Big),
\]
that is,
\[
0=\sum_{n=1}^{\infty}\Big(\ddot G_n(t)+c^2EI\Big(\frac{n\pi}{L}\Big)^4G_n(t)\Big)\sin\Big(\frac{n\pi x}{L}\Big).
\]
This implies that for $n=1,2,3,\dots$,
\[
\ddot G_n+c^2EI\Big(\frac{n\pi}{L}\Big)^4G_n=0,
\]
so the general solution of the PDE and the four homogeneous BCs is
\[
y(x,t)=\sum_{n=1}^{\infty}\Big(a_n\cos\Big(\frac{\sqrt{EI}\,n^2\pi^2ct}{L^2}\Big)+b_n\sin\Big(\frac{\sqrt{EI}\,n^2\pi^2ct}{L^2}\Big)\Big)\sin\Big(\frac{n\pi x}{L}\Big),
\]
where $a_n,b_n$ are constants to be chosen to satisfy the ICs:
\[
f(x)=y(x,0)=\sum_{n=1}^{\infty}a_n\sin\Big(\frac{n\pi x}{L}\Big),\quad\text{hence}\quad
a_n=\frac{2}{L}\int_0^Lf(x)\sin\Big(\frac{n\pi x}{L}\Big)dx,
\]
and
\[
g(x)=\frac{\partial y}{\partial t}(x,0)=\sum_{n=1}^{\infty}\frac{\sqrt{EI}\,n^2\pi^2c}{L^2}\,b_n\sin\Big(\frac{n\pi x}{L}\Big),\quad\text{hence}\quad
\frac{\sqrt{EI}\,n^2\pi^2c}{L^2}\,b_n=\frac{2}{L}\int_0^Lg(x)\sin\Big(\frac{n\pi x}{L}\Big)dx.
\]
The solution of the whole problem is
\[
y(x,t)=\frac{2}{L}\sum_{n=1}^{\infty}\sin\Big(\frac{n\pi x}{L}\Big)\left(\Big(\int_0^Lf(x)\sin\Big(\frac{n\pi x}{L}\Big)dx\Big)\cos\Big(\frac{\sqrt{EI}\,n^2\pi^2ct}{L^2}\Big)
+\frac{L^2}{\sqrt{EI}\,n^2\pi^2c}\Big(\int_0^Lg(x)\sin\Big(\frac{n\pi x}{L}\Big)dx\Big)\sin\Big(\frac{\sqrt{EI}\,n^2\pi^2ct}{L^2}\Big)\right).
\]
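The two cases in problem 11.2.14 can be verified by plugging the candidate particular solutions back into $\ddot A+\omega_n^2A=f_0\cos\omega t$. This sketch is not part of the original solution; the sample values `wn`, `w`, `f0` are assumptions.

```python
import math

wn, w, f0 = 3.0, 2.0, 1.5  # assumed natural frequency, forcing frequency, amplitude

# Case (1), off resonance (w != wn): bounded particular solution
A = lambda t: f0 * math.cos(w * t) / (wn ** 2 - w ** 2)
Add = lambda t: -w ** 2 * f0 * math.cos(w * t) / (wn ** 2 - w ** 2)

# Case (2), resonance (forcing at wn): particular solution grows linearly in t
B = lambda t: (f0 / (2 * wn)) * t * math.sin(wn * t)
Bdd = lambda t: f0 * math.cos(wn * t) - (f0 * wn / 2) * t * math.sin(wn * t)

ok = True
for t in (0.0, 0.37, 1.1, 2.9):
    ok = ok and abs(Add(t) + wn ** 2 * A(t) - f0 * math.cos(w * t)) < 1e-12
    ok = ok and abs(Bdd(t) + wn ** 2 * B(t) - f0 * math.cos(wn * t)) < 1e-12

assert ok
```

The factor $t$ in case (2) is exactly what makes $\max_{0\le t<\infty}|u|=\infty$ possible only at resonance.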

11.2.17. Substitute $y(x,t)=\sum_{n=1}^{\infty}G_n(t)\sin\big(\frac{n\pi x}{L}\big)$ into the PDE to get
\[
(\star)\quad \sum_{n=1}^{\infty}\Big(\ddot G_n(t)+2\beta\dot G_n(t)\Big)\sin\Big(\frac{n\pi x}{L}\Big)=\frac{\partial^2y}{\partial t^2}+2\beta\frac{\partial y}{\partial t}=c^2\frac{\partial^2y}{\partial x^2}+\cos\omega t
=-\sum_{n=1}^{\infty}c^2\Big(\frac{n\pi}{L}\Big)^2G_n(t)\sin\Big(\frac{n\pi x}{L}\Big)+\cos\omega t\cdot\sum_{n=1}^{\infty}\frac{2\big(1-(-1)^n\big)}{n\pi}\sin\Big(\frac{n\pi x}{L}\Big).
\]
Here, as in problems 11.1.15 through 11.1.17, we have used the Fourier sine series expansion $1=\sum_{n=1}^{\infty}f_n\sin\big(\frac{n\pi x}{L}\big)$ with $f_n=\frac{2(1-(-1)^n)}{n\pi}$. From $(\star)$ it follows that for $n=1,2,3,\dots$,
\[
\ddot G_n+2\beta\dot G_n+\Big(\frac{n\pi c}{L}\Big)^2G_n=\frac{2\big(1-(-1)^n\big)}{n\pi}\cos\omega t.
\]
Because the constant $\beta>0$, $y(x,t)$ has steady-state oscillations in time, $t$.
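The steady-state claim for $\beta>0$ can be verified with the complex-exponential trial solution $G_p(t)=\mathrm{Re}\,z(t)$, $z(t)=F e^{i\omega t}/(\omega_n^2-\omega^2+2i\beta\omega)$, which satisfies the damped ODE exactly and stays bounded. This sketch is an assumption-laden illustration; `beta`, `wn`, `w`, `f0` are sample values.

```python
import cmath

beta, wn, w, f0 = 0.4, 3.0, 2.0, 1.5  # assumed damping, natural freq, forcing freq, amplitude
denom = wn ** 2 - w ** 2 + 2j * beta * w

z = lambda t: f0 * cmath.exp(1j * w * t) / denom  # complex steady-state response
zd = lambda t: 1j * w * z(t)                      # first derivative
zdd = lambda t: -(w ** 2) * z(t)                  # second derivative

ok = True
for t in (0.0, 0.5, 1.7):
    resid = zdd(t) + 2 * beta * zd(t) + wn ** 2 * z(t) - f0 * cmath.exp(1j * w * t)
    ok = ok and abs(resid) < 1e-12

amplitude = f0 / abs(denom)  # finite steady-state amplitude, since denom != 0 when beta > 0
assert ok and amplitude > 0
```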

Section 11.3

11.3.1. We need to solve
\[
\begin{cases}\dfrac{\partial^2T}{\partial x^2}+\dfrac{\partial^2T}{\partial y^2}=0, & 0<x<2\pi,\ 0<y<\pi,\\[1mm]
T(x,0)=T(x,\pi)=0, & 0<x<2\pi,\\[1mm]
\dfrac{\partial T}{\partial x}(0,y)=0,\quad \dfrac{\partial T}{\partial x}(2\pi,y)=10, & 0<y<\pi.\end{cases}
\]
The first pair of boundary conditions are homogeneous and belong to the first group of entries of Table 11.1 with $Y(0)=Y(\pi)=0$, for a function $Y(y)$. For $n=1,2,3,\dots$, when we substitute into the PDE a product solution in the form $T(x,y)=X(x)\sin(ny)$ and then divide through by $\sin(ny)$ we get the ODE
\[
X''-n^2X=0.
\]
Using clairvoyance, the remaining two BCs, $\frac{\partial T}{\partial x}(0,y)=0$ and $\frac{\partial T}{\partial x}(2\pi,y)=10$, suggest writing $X(x)$ in the form
\[
X(x)=c_1\cosh nx+c_2\cosh\big(n(2\pi-x)\big).
\]
So, the general solution of the PDE and the two homogeneous BCs can be written in the form
\[
T(x,y)=\sum_{n=1}^{\infty}\Big(a_n\cosh nx+b_n\cosh\big(n(2\pi-x)\big)\Big)\sin ny,
\]
hence
\[
\frac{\partial T}{\partial x}(x,y)=\sum_{n=1}^{\infty}n\Big(a_n\sinh nx-b_n\sinh\big(n(2\pi-x)\big)\Big)\sin ny.
\]
[The minus sign comes from the chain rule and the derivative of $n(2\pi-x)$ with respect to $x$.] Substitute this into the two remaining BCs to get
\[
0=\frac{\partial T}{\partial x}(0,y)=\sum_{n=1}^{\infty}n\big(a_n\cdot 0-b_n\sinh 2n\pi\big)\sin ny,
\]
for $0<y<\pi$, hence $b_n=0$ for all $n$, and
\[
10=\frac{\partial T}{\partial x}(2\pi,y)=\sum_{n=1}^{\infty}n\big(a_n\sinh 2n\pi-b_n\cdot 0\big)\sin ny,
\]
for $0<y<\pi$, hence
\[
n\sinh(2n\pi)\,a_n=\frac{2}{\pi}\int_0^{\pi}10\sin ny\,dy=\frac{20}{\pi}\Big[\frac{\cos ny}{-n}\Big]_0^{\pi}=\frac{20}{n\pi}\big(1-(-1)^n\big).
\]
So, $a_{\text{even}}=0$. The solution of the problem is
\[
T(x,y)=\frac{40}{\pi}\sum_{k=1}^{\infty}\frac{\cosh\big((2k-1)x\big)}{(2k-1)^2\sinh\big(2(2k-1)\pi\big)}\sin\big((2k-1)y\big).
\]
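The coefficient relation $n\sinh(2n\pi)a_n=\frac{20}{n\pi}(1-(-1)^n)$ above can be spot-checked by quadrature of the boundary data. This sketch is not part of the original solution; the `simpson` helper is an assumed utility.

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

errs = []
for n in range(1, 7):
    quad = (2 / math.pi) * simpson(lambda y: 10 * math.sin(n * y), 0.0, math.pi)
    closed = (20 / (n * math.pi)) * (1 - (-1) ** n)  # value of n*sinh(2*n*pi)*a_n
    errs.append(abs(quad - closed))

assert max(errs) < 1e-8
```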

11.3.2. We need to solve
\[
\begin{cases}\dfrac{\partial^2T}{\partial x^2}+\dfrac{\partial^2T}{\partial y^2}=0, & 0<x<2\pi,\ 0<y<\pi,\\[1mm]
\dfrac{\partial T}{\partial x}(0,y)=\dfrac{\partial T}{\partial x}(2\pi,y)=0, & 0<y<\pi,\\[1mm]
T(x,0)=100,\quad T(x,\pi)=0, & 0<x<2\pi.\end{cases}
\]
The first pair of boundary conditions are homogeneous and belong to the second group of entries of Table 11.1, that is, with $X'(0)=X'(2\pi)=0$ for a function $X(x)$. When we substitute into the PDE a product solution in the form $T(x,y)=\cos\big(\frac{nx}{2}\big)Y(y)$ and then divide through by $\cos\big(\frac{nx}{2}\big)$ we get the ODE
\[
Y''-\Big(\frac{n}{2}\Big)^2Y=0.
\]
Recall that $X_0(x)\equiv 1$ is an "honorary cosine function." For $n=1,2,3,\dots$, using clairvoyance, the remaining two BCs, $T(x,0)=100$ and $T(x,\pi)=0$, suggest writing $Y(y)$ in the form
\[
Y_n(y)=c_1\sinh\Big(\frac{n}{2}(\pi-y)\Big)+c_2\sinh\Big(\frac{ny}{2}\Big).
\]
For $n=0$, the same two BCs suggest writing $Y_0(y)=c_1(\pi-y)+c_2y$. So, the general solution of the PDE and the two homogeneous BCs can be written in the form
\[
T(x,y)=\frac{a_0}{2}(\pi-y)+\frac{b_0}{2}y+\sum_{n=1}^{\infty}\Big(a_n\sinh\Big(\frac{n}{2}(\pi-y)\Big)+b_n\sinh\Big(\frac{ny}{2}\Big)\Big)\cos\Big(\frac{nx}{2}\Big).
\]
Substitute this into the first of the two remaining BCs to get
\[
100=T(x,0)=\frac{a_0}{2}\pi+\sum_{n=1}^{\infty}a_n\sinh\Big(\frac{n\pi}{2}\Big)\cos\Big(\frac{nx}{2}\Big),\quad 0<x<2\pi.
\]
But $T(x,0)=100$ is its own Fourier cosine series, so $a_0=200/\pi$ and $a_n=0$ for $n=1,2,3,\dots$. The last BC is
\[
0=T(x,\pi)=\frac{b_0}{2}\pi+\sum_{n=1}^{\infty}b_n\sinh\Big(\frac{n\pi}{2}\Big)\cos\Big(\frac{nx}{2}\Big),\quad 0<x<2\pi,
\]
so $b_0=0$ and $b_n=0$ for all $n$. The solution of the problem is
\[
T(x,y)=\frac{100}{\pi}(\pi-y)=100\Big(1-\frac{y}{\pi}\Big),
\]
the linear profile between the two held temperatures.

11.3.3. We need to solve
\[
\begin{cases}\dfrac{\partial^2T}{\partial x^2}+\dfrac{\partial^2T}{\partial y^2}=0, & 0<x<2\pi,\ 0<y<\pi,\\[1mm]
T(0,y)=T(2\pi,y)=0, & 0<y<\pi,\\[1mm]
T(x,0)=100,\quad \dfrac{\partial T}{\partial y}(x,0)=-10, & 0<x<2\pi.\end{cases}
\]
The first pair of boundary conditions are homogeneous and belong to the first group of entries of Table 11.1, that is, with $X(0)=X(2\pi)=0$ for a function $X(x)$. When we substitute into the PDE a product solution in the form $T(x,y)=\sin\big(\frac{nx}{2}\big)Y(y)$ and then divide through by $\sin\big(\frac{nx}{2}\big)$, we get the ODE
\[
Y''-\Big(\frac{n}{2}\Big)^2Y=0.
\]
The remaining two BCs, $T(x,0)=100$ and $\frac{\partial T}{\partial y}(x,0)=-10$, do not suggest much about how to write $Y(y)$;
\[
Y(y)=c_1\cosh\Big(\frac{ny}{2}\Big)+c_2\sinh\Big(\frac{ny}{2}\Big)
\]
is good enough. [In fact, these two BCs are more like initial conditions! That's what makes this problem strange.] The general solution of the PDE and the two homogeneous BCs can be written in the form
\[
T(x,y)=\sum_{n=1}^{\infty}\Big(a_n\cosh\Big(\frac{ny}{2}\Big)+b_n\sinh\Big(\frac{ny}{2}\Big)\Big)\sin\Big(\frac{nx}{2}\Big).
\]
Substitute this into the first of the remaining BCs to get $100=T(x,0)=\sum_{n=1}^{\infty}a_n\sin\big(\frac{nx}{2}\big)$, so
\[
a_n=\frac{2}{2\pi}\int_0^{2\pi}100\sin\Big(\frac{nx}{2}\Big)dx=\frac{100}{\pi}\Big[\frac{\cos\frac{nx}{2}}{-n/2}\Big]_0^{2\pi}=\frac{200}{n\pi}\big(1-(-1)^n\big).
\]
We calculate
\[
\frac{\partial T}{\partial y}(x,y)=\sum_{n=1}^{\infty}\frac{n}{2}\Big(a_n\sinh\Big(\frac{ny}{2}\Big)+b_n\cosh\Big(\frac{ny}{2}\Big)\Big)\sin\Big(\frac{nx}{2}\Big).
\]
The last BC requires $-10=\frac{\partial T}{\partial y}(x,0)=\sum_{n=1}^{\infty}\frac{n}{2}b_n\sin\big(\frac{nx}{2}\big)$, hence
\[
\frac{n}{2}b_n=\frac{2}{2\pi}\int_0^{2\pi}(-10)\sin\Big(\frac{nx}{2}\Big)dx=\dots=-\frac{20}{n\pi}\big(1-(-1)^n\big).
\]
The solution of the problem is
\[
T(x,y)=\sum_{n=1}^{\infty}\Big(\frac{200}{n\pi}\big(1-(-1)^n\big)\cosh\Big(\frac{ny}{2}\Big)-\frac{40}{n^2\pi}\big(1-(-1)^n\big)\sinh\Big(\frac{ny}{2}\Big)\Big)\sin\Big(\frac{nx}{2}\Big).
\]
But $1-(-1)^n=0$ for $n$ even and $1-(-1)^n=2$ for $n$ odd, $n=2k-1$, so the solution is
\[
T(x,y)=\frac{80}{\pi}\sum_{k=1}^{\infty}\Big(\frac{5}{2k-1}\cosh\Big(\frac{(2k-1)y}{2}\Big)-\frac{1}{(2k-1)^2}\sinh\Big(\frac{(2k-1)y}{2}\Big)\Big)\sin\Big(\frac{(2k-1)x}{2}\Big).
\]

11.3.4. Let the square be $\{(x,y):0\le x\le a,\ 0\le y\le a\}$. Without loss of generality, the side that is kept at a non-zero temperature is the side $y=a$. So, the problem we need to solve is
\[
\begin{cases}\dfrac{\partial^2T}{\partial x^2}+\dfrac{\partial^2T}{\partial y^2}=0, & 0<x<a,\ 0<y<a,\\[1mm]
T(0,y)=0,\quad T(a,y)=0, & 0<y<a,\\[1mm]
T(x,0)=0,\quad T(x,a)=100, & 0<x<a.\end{cases}
\]
The pair of homogeneous BCs on the sides $x=0$ and $x=a$ suggest taking the solution to be in the form
\[
T(x,y)=\sum_{n=1}^{\infty}\sin\Big(\frac{n\pi x}{a}\Big)Y_n(y).
\]
As in Example 11.9, $Y_n(y)$ should satisfy the ODE $Y_n''-\big(\frac{n\pi}{a}\big)^2Y_n=0$. As in Section 11.3.1, clairvoyance for the remaining BCs $T(x,0)=0$, $T(x,a)=100$, $0<x<a$, suggests the form of the solution for $Y_n$ should be
\[
Y_n(y)=c_1\sinh\Big(\frac{n\pi}{a}(a-y)\Big)+c_2\sinh\Big(\frac{n\pi}{a}y\Big).
\]
The general solution of the PDE and the BCs $T(0,y)=0$, $T(a,y)=0$, $0<y<a$, can be written in the form
\[
T(x,y)=\sum_{n=1}^{\infty}\Big(a_n\sinh\Big(\frac{n\pi}{a}(a-y)\Big)+b_n\sinh\Big(\frac{n\pi}{a}y\Big)\Big)\sin\Big(\frac{n\pi x}{a}\Big),
\]
where $a_n$ and $b_n$ are to be chosen to satisfy the BCs $T(x,0)=0$, $T(x,a)=100$, $0<x<a$:
\[
0=T(x,0)=\sum_{n=1}^{\infty}a_n\sinh(n\pi)\sin\Big(\frac{n\pi x}{a}\Big),\quad 0<x<a,
\]
and orthogonality implies $a_n=0$ for all $n$, and
\[
100=T(x,a)=\sum_{n=1}^{\infty}b_n\sinh(n\pi)\sin\Big(\frac{n\pi x}{a}\Big),\quad 0<x<a,
\]
so orthogonality implies
\[
b_n\sinh(n\pi)=\frac{2}{a}\int_0^a100\sin\Big(\frac{n\pi x}{a}\Big)dx=\frac{200}{a}\Big[\frac{\cos\frac{n\pi x}{a}}{-n\pi/a}\Big]_0^a=\frac{200}{n\pi}\big(1-(-1)^n\big).
\]
Using the fact that $1-(-1)^n=0$ for $n$ even, we see that the solution of the whole, original problem is
\[
T(x,y)=\frac{400}{\pi}\sum_{k=1}^{\infty}\frac{1}{(2k-1)\sinh\big((2k-1)\pi\big)}\sinh\Big(\frac{(2k-1)\pi y}{a}\Big)\sin\Big(\frac{(2k-1)\pi x}{a}\Big).
\]

11.3.5. The third and fourth BCs are homogeneous and correspond to the third group of entries in Table 11.1 with $X(0)=X'(\pi)=0$, so look for product solutions of the PDE in the form
\[
T(x,y)=\sin\Big(\frac{(2n-1)x}{2}\Big)Y(y).
\]
Substitute that into the PDE and divide through by $\sin\big(\frac{(2n-1)x}{2}\big)$ to get
\[
Y''-\Big(\frac{2n-1}{2}\Big)^2Y=0.
\]
The remaining two BCs are $T(x,0)=20$ and $T(x,1)=f(x)$, so clairvoyance suggests writing the solutions for $Y(y)$ in the form
\[
Y(y)=c_1\sinh\Big(\frac{2n-1}{2}(1-y)\Big)+c_2\sinh\Big(\frac{2n-1}{2}y\Big).
\]
The general solution of the PDE and the two homogeneous BCs can be written in the form
\[
T(x,y)=\sum_{n=1}^{\infty}\Big(a_n\sinh\Big(\frac{2n-1}{2}(1-y)\Big)+b_n\sinh\Big(\frac{2n-1}{2}y\Big)\Big)\sin\Big(\frac{(2n-1)x}{2}\Big).
\]
The first of the two remaining BCs requires
\[
20=T(x,0)=\sum_{n=1}^{\infty}a_n\sinh\Big(\frac{2n-1}{2}\Big)\sin\Big(\frac{(2n-1)x}{2}\Big),
\]
so
\[
a_n\sinh\Big(\frac{2n-1}{2}\Big)=\frac{2}{\pi}\int_0^{\pi}20\sin\Big(\frac{(2n-1)x}{2}\Big)dx=\frac{40}{\pi}\Big[\frac{\cos\frac{(2n-1)x}{2}}{-(2n-1)/2}\Big]_0^{\pi}=\frac{80}{(2n-1)\pi}.
\]
The last BC requires
\[
\sum_{n=3}^{10}\alpha_n\sin\Big(\Big(n-\frac12\Big)x\Big)=f(x)=T(x,1)=\sum_{n=1}^{\infty}b_n\sinh\Big(\frac{2n-1}{2}\Big)\sin\Big(\frac{(2n-1)x}{2}\Big),
\]
so
\[
b_n\sinh\Big(\frac{2n-1}{2}\Big)=\begin{cases}\alpha_n, & 3\le n\le 10,\\ 0, & n<3\text{ or }n>10.\end{cases}
\]
The solution of the problem is
\[
T(x,y)=\sum_{n=3}^{10}\alpha_n\frac{\sinh\big(\frac{(2n-1)y}{2}\big)}{\sinh\big(\frac{2n-1}{2}\big)}\sin\Big(\frac{(2n-1)x}{2}\Big)
+\frac{80}{\pi}\sum_{n=1}^{\infty}\frac{1}{(2n-1)\sinh\big(\frac{2n-1}{2}\big)}\sinh\Big(\frac{(2n-1)(1-y)}{2}\Big)\sin\Big(\frac{(2n-1)x}{2}\Big).
\]
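The coefficient relation $a_n\sinh\big(\frac{2n-1}{2}\big)=\frac{80}{(2n-1)\pi}$ above relies on $\cos\big(\frac{(2n-1)\pi}{2}\big)=0$; a quadrature check confirms it. This is an illustrative sketch with an assumed `simpson` helper.

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

errs = []
for n in range(1, 7):
    quad = (2 / math.pi) * simpson(lambda x: 20 * math.sin((2 * n - 1) * x / 2), 0.0, math.pi)
    closed = 80 / ((2 * n - 1) * math.pi)  # value of a_n * sinh((2n-1)/2)
    errs.append(abs(quad - closed))

assert max(errs) < 1e-8
```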

11.3.6. The pair of homogeneous BCs on the sides $x=0$ and $x=a$ suggest taking the solution to be in the form
\[
T(x,y)=\sum_{n=1}^{\infty}\sin\Big(\frac{n\pi x}{a}\Big)Y_n(y).
\]
As in Example 11.9, $Y_n(y)$ should satisfy the ODE $Y_n''-\big(\frac{n\pi}{a}\big)^2Y_n=0$. As in Section 11.3.1, clairvoyance for the remaining BCs, which are of the form $T(x,0)=0$, $T(x,b)=f(x)$, $0<x<a$, suggests the form of the solution for $Y_n$ should be
\[
Y_n(y)=c_1\sinh\Big(\frac{n\pi}{a}(b-y)\Big)+c_2\sinh\Big(\frac{n\pi}{a}y\Big).
\]
The general solution of the PDE and the BCs $T(0,y)=0$, $T(a,y)=0$ can be written in the form
\[
T(x,y)=\sum_{n=1}^{\infty}\Big(a_n\sinh\Big(\frac{n\pi}{a}(b-y)\Big)+b_n\sinh\Big(\frac{n\pi}{a}y\Big)\Big)\sin\Big(\frac{n\pi x}{a}\Big),
\]
where $a_n$ and $b_n$ are to be chosen to satisfy the remaining two BCs:
\[
0=T(x,0)=\sum_{n=1}^{\infty}a_n\sinh\Big(\frac{n\pi b}{a}\Big)\sin\Big(\frac{n\pi x}{a}\Big),\quad 0<x<a,
\]
and orthogonality implies $a_n=0$ for all $n$, and
\[
\begin{cases}\dfrac{2x}{a}, & 0<x<\dfrac a2,\\[1mm] 2\Big(1-\dfrac xa\Big), & \dfrac a2<x<a\end{cases}
\Bigg\}=T(x,b)=\sum_{n=1}^{\infty}b_n\sinh\Big(\frac{n\pi b}{a}\Big)\sin\Big(\frac{n\pi x}{a}\Big),\quad 0<x<a,
\]
so orthogonality implies
\[
b_n\sinh\Big(\frac{n\pi b}{a}\Big)=\frac{2}{a}\int_0^{a/2}\frac{2x}{a}\sin\Big(\frac{n\pi x}{a}\Big)dx+\frac{2}{a}\int_{a/2}^{a}2\Big(1-\frac{x}{a}\Big)\sin\Big(\frac{n\pi x}{a}\Big)dx=\frac{8}{n^2\pi^2}\sin\Big(\frac{n\pi}{2}\Big).
\]
Using the facts that $\sin\big(\frac{n\pi}{2}\big)=0$ for $n$ even and $\sin\big(\frac{(2k-1)\pi}{2}\big)=(-1)^{k+1}$ for $n$ odd, $n=2k-1$, we see that the solution of the whole, original problem is
\[
T(x,y)=\frac{8}{\pi^2}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{(2k-1)^2\sinh\big(\frac{(2k-1)\pi b}{a}\big)}\sinh\Big(\frac{(2k-1)\pi y}{a}\Big)\sin\Big(\frac{(2k-1)\pi x}{a}\Big).
\]
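The tent-function coefficient $b_n\sinh\big(\frac{n\pi b}{a}\big)=\frac{8}{n^2\pi^2}\sin\big(\frac{n\pi}{2}\big)$ can be spot-checked by piecewise quadrature. This is an illustrative sketch; the `simpson` helper and the sample value `a = 2` are assumptions.

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a = 2.0  # assumed sample side length

def tent(x):
    # boundary data: 2x/a on [0, a/2], then 2*(1 - x/a) on [a/2, a]
    return 2 * x / a if x <= a / 2 else 2 * (1 - x / a)

errs = []
for n in range(1, 8):
    quad = (2 / a) * simpson(lambda x: tent(x) * math.sin(n * math.pi * x / a), 0.0, a)
    closed = (8 / (n ** 2 * math.pi ** 2)) * math.sin(n * math.pi / 2)
    errs.append(abs(quad - closed))

assert max(errs) < 1e-6
```

Note the alternating sign $(-1)^{k+1}$ for odd $n=2k-1$, which survives into the final series.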

11.3.7. Method I: Because three of the four sides are kept at 20◦ C., it makes sense to write the solution in the special form T (x, y) = 20 + u(x, y). The problem that u needs to solve is  2  ∂2u ∂ u     + = 0, 0 < x < a, 0 < y < b,    ∂x2    ∂y 2     . u(0, y) = −20, u(a, y) = 0, 0 < y < b,               u(x, 0) = 0, u(x, b) = 0, 0 < x < a The last two BCs fit the first group of entries in Table 11.1 with Y (0) = Y (b) = 0. Substitute into the PDE for u product solutions in the form  nπy  u(x, y) = X(x) sin b  and then divide through by sin nπy to get b X 00 −

 nπ 2 b

X = 0.

Clairvoyance for the remaining BCs, u(0, y) = −20 and u(a, y) = 0 suggest writing the general solution of u’s PDE and the two homogeneous BCs in the form ∞  P

u=

an sinh

 nπ

n=1

  nπ   nπy  (a − x) + bn sinh x sin . b b b

The first of the remaining two BCs requires −20 = u(0, y) =

∞ P

an sinh

 nπa  b

n=1

so an sinh

 nπa 

=

b The last of the BCs requires

2 b

ˆ

b

(−20) sin 0

0 = u(a, y) =

sin

 nπy  b

,

nπy  nπy   40 h cos b ib 40 dy = − =− 1 − (−1)n . b b −nπ/b 0 nπ

∞ P n=1

bn sinh

 nπa  b

sin

 nπy  b

,

so bn = 0 for all n. The solution of the problem for u is     ∞ sinh (2k−1)π(a−x) b (2k − 1)πy 80 P   sin u=− . π k=1 (2k − 1) sinh (2k−1)πa b b The solution of the whole problem is     ∞ sinh (2k−1)π(a−x) b (2k − 1)πy 80 P   sin T (x, y) = 20 + u(x, y) = 20 − . π k=1 (2k − 1) sinh (2k−1)πa b b Method II: The whole, original problem is to find T (x, y) satisfying  2 ∂2T ∂ T   + = 0, 0 < x < a, 0 < y < b,   2  ∂y 2  ∂x        

T (x, 0) = 20, T (x, b) = 20, 0 < x < a T (0, y) = 0, T (a, y) = 20, 0 < y < b

       

.

      

c Larry

Turyn, October 13, 2013

page 43

As in Example 11.12 in Section 11.3, write the solution of the whole problem as T(x, y) = T1(x, y) + T2(x, y), where

    ∂²T1/∂x² + ∂²T1/∂y² = 0,   0 < x < a, 0 < y < b,
    T1(x, 0) = T1(x, b) = 0,   0 < x < a,
    T1(0, y) = 0,  T1(a, y) = 20,   0 < y < b,

and

    ∂²T2/∂x² + ∂²T2/∂y² = 0,   0 < x < a, 0 < y < b,
    T2(x, 0) = T2(x, b) = 20,   0 < x < a,
    T2(0, y) = T2(a, y) = 0,   0 < y < b.

The problem for T1(x, y) has its first two BCs fitting the first group of entries of Table 11.1 with Y(0) = Y(b) = 0. Substitute into the PDE for T1 product solutions in the form

    T1(x, y) = X(x) sin(nπy/b)

and then divide through by sin(nπy/b) to get

    X″ − (nπ/b)² X = 0.

Clairvoyance for the remaining BCs, T1(0, y) = 0 and T1(a, y) = 20, suggests writing the general solution of the PDE and the two homogeneous BCs for T1 as

    T1 = Σ_{n=1}^∞ [ a_n sinh(nπ(a − x)/b) + b_n sinh(nπx/b) ] sin(nπy/b).

The first of the remaining two BCs requires

    0 = T1(0, y) = Σ_{n=1}^∞ a_n sinh(nπa/b) sin(nπy/b),

so a_n = 0 for all n. The last BC requires

    20 = T1(a, y) = Σ_{n=1}^∞ b_n sinh(nπa/b) sin(nπy/b),

so

    b_n sinh(nπa/b) = (2/b) ∫₀ᵇ 20 sin(nπy/b) dy = (40/b) [ −cos(nπy/b) · b/(nπ) ]₀ᵇ = (40/(nπ)) (1 − (−1)ⁿ).

The solution for T1(x, y) is

    T1(x, y) = (80/π) Σ_{k=1}^∞ [ sinh((2k−1)πx/b) / ( (2k−1) sinh((2k−1)πa/b) ) ] sin((2k−1)πy/b).

For T2(x, y), the separated homogeneous BCs T2(0, y) = T2(a, y) = 0, 0 < y < b, suggest looking for product solutions in the form

    T2(x, y) = sin(nπx/a) Y(y).

Substitute that into the PDE and divide through by sin(nπx/a) to get

    Y″ − (nπ/a)² Y = 0.


The remaining BCs being T2(x, 0) = T2(x, b) = 20, 0 < x < a, clairvoyance suggests writing the general solution of the PDE and the two homogeneous BCs in the form

    T2(x, y) = Σ_{n=1}^∞ [ a_n sinh(nπ(b − y)/a) + b_n sinh(nπy/a) ] sin(nπx/a).

The first of the remaining two BCs requires

    20 = T2(x, 0) = Σ_{n=1}^∞ a_n sinh(nπb/a) sin(nπx/a),

so

    a_n sinh(nπb/a) = (2/a) ∫₀ᵃ 20 sin(nπx/a) dx = (40/a) [ −cos(nπx/a) · a/(nπ) ]₀ᵃ = (40/(nπ)) (1 − (−1)ⁿ).

The last BC requires

    20 = T2(x, b) = Σ_{n=1}^∞ b_n sinh(nπb/a) sin(nπx/a),

so, similarly,

    b_n sinh(nπb/a) = (2/a) ∫₀ᵃ 20 sin(nπx/a) dx = (40/(nπ)) (1 − (−1)ⁿ).

The solution for T2(x, y) is

    T2(x, y) = (80/π) Σ_{k=1}^∞ [ ( sinh((2k−1)π(b−y)/a) + sinh((2k−1)πy/a) ) / ( (2k−1) sinh((2k−1)πb/a) ) ] sin((2k−1)πx/a).

Altogether, the solution of the original problem is

    T(x, y) = T1(x, y) + T2(x, y)
            = (80/π) Σ_{k=1}^∞ [ sinh((2k−1)πx/b) / ( (2k−1) sinh((2k−1)πa/b) ) ] sin((2k−1)πy/b)
              + (80/π) Σ_{k=1}^∞ [ ( sinh((2k−1)π(b−y)/a) + sinh((2k−1)πy/a) ) / ( (2k−1) sinh((2k−1)πb/a) ) ] sin((2k−1)πx/a).
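As a numerical sanity check (not part of the original text), the Method I form of the solution can be evaluated as a partial sum. The sketch below uses the illustrative values a = b = 1 and rewrites the sinh ratio with decaying exponentials so that large n cannot overflow.

```python
import math

def sinh_ratio(n, x, a, b):
    # sinh(n*pi*(a - x)/b) / sinh(n*pi*a/b), rewritten with decaying
    # exponentials so that large n cannot overflow math.sinh
    t = n * math.pi / b
    return math.exp(-t * x) * (1.0 - math.exp(-2.0 * t * (a - x))) / (1.0 - math.exp(-2.0 * t * a))

def T(x, y, a=1.0, b=1.0, terms=2000):
    # Partial sum of the Method I series; a = b = 1 are illustrative values
    s = 0.0
    for k in range(1, terms + 1):
        n = 2 * k - 1
        s += sinh_ratio(n, x, a, b) / n * math.sin(n * math.pi * y / b)
    return 20.0 - (80.0 / math.pi) * s

print(round(T(0.0, 0.5), 1))   # cold side x = 0: 0.0
print(round(T(1.0, 0.5), 1))   # heated side x = a: 20.0 (every term vanishes there)
```

The partial sum also satisfies Laplace's equation: a five-point finite-difference Laplacian at an interior point is numerically zero.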

11.3.8. Write the solution of the whole problem in the form T(x, y) = T1(x, y) + T2(x, y), where

    ∂²T1/∂x² + ∂²T1/∂y² = 0,   0 < x < 2, 0 < y < 1,
    ∂T1/∂y (x, 0) = 0,  ∂T1/∂y (x, 1) = 0,   0 < x < 2,
    ∂T1/∂x (0, y) = 2y,  ∂T1/∂x (2, y) = 2y,   0 < y < 1,

and

    ∂²T2/∂x² + ∂²T2/∂y² = 0,   0 < x < 2, 0 < y < 1,
    ∂T2/∂x (0, y) = 0,  ∂T2/∂x (2, y) = 0,   0 < y < 1,
    ∂T2/∂y (x, 0) = x,  ∂T2/∂y (x, 1) = x,   0 < x < 2.

For the problem whose solution is T1, the second group of entries of Table 11.1 is relevant, with Y′(0) = Y′(1) = 0. This gives product solutions of the form T1,n(x, y) = X_n(x) cos(nπy), where

    X_n″ − (nπ)² X_n = 0,

for n = 0, 1, 2, ... . Clairvoyance gives the general solution of the PDE and the two homogeneous BCs written in the form

    T1(x, y) = a₀/2 + (b₀/2) x + Σ_{n=1}^∞ [ a_n cosh(nπx) + b_n cosh(nπ(2 − x)) ] cos(nπy).

It follows that

    ∂T1/∂x (x, y) = b₀/2 + Σ_{n=1}^∞ nπ [ a_n sinh(nπx) − b_n sinh(nπ(2 − x)) ] cos(nπy).

The first of the remaining BCs for T1(x, y) requires

    2y = ∂T1/∂x (0, y) = b₀/2 − Σ_{n=1}^∞ nπ b_n sinh(2nπ) cos(nπy),

which implies

    b₀ = (2/1) ∫₀¹ 2y dy = 2 [ y² ]₀¹ = 2,

and

    −nπ sinh(2nπ) b_n = (2/1) ∫₀¹ 2y cos(nπy) dy = 2 [ 2y sin(nπy)/(nπ) + 2 cos(nπy)/(nπ)² ]₀¹ = −4(1 − (−1)ⁿ)/(n²π²),

so for n odd, n = 2k − 1,

    b_{2k−1} = 8 / ( π³ (2k − 1)³ sinh(2(2k − 1)π) ),

and b_n = 0 for n even. The second of the remaining BCs for T1(x, y) requires

    2y = ∂T1/∂x (2, y) = b₀/2 + Σ_{n=1}^∞ nπ a_n sinh(2nπ) cos(nπy),

which implies (again!) b₀ = (2/1) ∫₀¹ 2y dy = 2, and also

    nπ sinh(2nπ) a_n = (2/1) ∫₀¹ 2y cos(nπy) dy = −4(1 − (−1)ⁿ)/(n²π²),

so for n odd, n = 2k − 1,

    a_{2k−1} = −8 / ( π³ (2k − 1)³ sinh(2(2k − 1)π) ),

and a_n = 0 for n even. The solution for T1(x, y) is

    T1(x, y) = a₀/2 + x − (8/π³) Σ_{k=1}^∞ [ ( cosh((2k−1)πx) − cosh((2k−1)π(2−x)) ) / ( (2k−1)³ sinh(2(2k−1)π) ) ] cos((2k−1)πy).

For the problem whose solution is T2, the second group of entries of Table 11.1 is relevant, with X′(0) = X′(2) = 0. This gives product solutions of the form T2,n(x, y) = cos(nπx/2) Y_n(y), where

    Y_n″ − (nπ/2)² Y_n = 0,

for n = 0, 1, 2, ... . Clairvoyance gives the general solution of the PDE and the two homogeneous BCs written in the form

    T2(x, y) = A₀/2 + (B₀/2) y + Σ_{n=1}^∞ [ A_n cosh(nπy/2) + B_n cosh(nπ(1 − y)/2) ] cos(nπx/2).

It follows that

    ∂T2/∂y (x, y) = B₀/2 + Σ_{n=1}^∞ (nπ/2) [ A_n sinh(nπy/2) − B_n sinh(nπ(1 − y)/2) ] cos(nπx/2).

The first of the remaining BCs for T2(x, y) requires

    x = ∂T2/∂y (x, 0) = B₀/2 − Σ_{n=1}^∞ (nπ/2) B_n sinh(nπ/2) cos(nπx/2),

which implies

    B₀ = (2/2) ∫₀² x dx = [ x²/2 ]₀² = 2,

and

    −(nπ/2) sinh(nπ/2) B_n = (2/2) ∫₀² x cos(nπx/2) dx = [ x sin(nπx/2)/(nπ/2) + cos(nπx/2)/(nπ/2)² ]₀² = −4(1 − (−1)ⁿ)/(n²π²),

so for n odd, n = 2k − 1,

    B_{2k−1} = 16 / ( π³ (2k − 1)³ sinh((2k − 1)π/2) ),

and B_n = 0 for n even. The second of the remaining BCs for T2(x, y) requires

    x = ∂T2/∂y (x, 1) = B₀/2 + Σ_{n=1}^∞ (nπ/2) A_n sinh(nπ/2) cos(nπx/2),

which implies (again!) B₀ = (2/2) ∫₀² x dx = 2, and also

    (nπ/2) sinh(nπ/2) A_n = (2/2) ∫₀² x cos(nπx/2) dx = −4(1 − (−1)ⁿ)/(n²π²),

so for n odd, n = 2k − 1,

    A_{2k−1} = −16 / ( π³ (2k − 1)³ sinh((2k − 1)π/2) ),

and A_n = 0 for n even. The solution for T2(x, y) is

    T2(x, y) = A₀/2 + y − (16/π³) Σ_{k=1}^∞ [ ( cosh((2k−1)πy/2) − cosh((2k−1)π(1−y)/2) ) / ( (2k−1)³ sinh((2k−1)π/2) ) ] cos((2k−1)πx/2).

The solution of the whole, original problem is

    T(x, y) = T1(x, y) + T2(x, y)
            = a₀/2 + x + y − (8/π³) Σ_{k=1}^∞ [ ( cosh((2k−1)πx) − cosh((2k−1)π(2−x)) ) / ( (2k−1)³ sinh(2(2k−1)π) ) ] cos((2k−1)πy)
              − (16/π³) Σ_{k=1}^∞ [ ( cosh((2k−1)πy/2) − cosh((2k−1)π(1−y)/2) ) / ( (2k−1)³ sinh((2k−1)π/2) ) ] cos((2k−1)πx/2),

where a₀ is an arbitrary constant (the arbitrary constants a₀/2 from T1 and A₀/2 from T2 have been combined).
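The Neumann coefficients for T2 can be cross-checked numerically: the boundary data x on [0, 2] has cosine-series coefficient c_n = (2/2) ∫₀² x cos(nπx/2) dx, and B_n solves −(nπ/2) sinh(nπ/2) B_n = c_n. A short midpoint-rule computation (illustrative, not part of the text):

```python
import math

def cosine_coeff(n, m=20000):
    # c_n = (2/2) * integral_0^2 x * cos(n*pi*x/2) dx, composite midpoint rule
    step = 2.0 / m
    return sum((j + 0.5) * step * math.cos(n * math.pi * (j + 0.5) * step / 2.0)
               for j in range(m)) * step

n = 3  # any odd mode number behaves the same way
c_n = cosine_coeff(n)
print(round(c_n, 6))  # ≈ -8/(n*pi)^2 ≈ -0.090063
B_n = -c_n / ((n * math.pi / 2.0) * math.sinh(n * math.pi / 2.0))
print(round(B_n * math.pi**3 * n**3 * math.sinh(n * math.pi / 2.0), 5))  # ≈ 16.0
```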

11.3.9. For n = 0, 1, 2, ..., substitute product solutions T(x, y) = cos(nπx/a) Y(y) into the PDE 0 = ∂²T/∂x² + 3 ∂²T/∂y² to get

    0 = ∂²T/∂x² + 3 ∂²T/∂y² = −(nπ/a)² cos(nπx/a) Y(y) + 3 cos(nπx/a) Y″(y).

Divide through by 3 cos(nπx/a) to get

    Y″ − ( nπ/(a√3) )² Y = 0.

The remaining two BCs are T(x, 0) = 0 and T(x, b) = a/2 − |x − a/2|, so clairvoyance suggests writing the solutions for Y(y) in the form, for n = 1, 2, 3, ...,

    Y_n(y) = c₁ sinh( nπ(b − y)/(a√3) ) + c₂ sinh( nπy/(a√3) ).

For n = 0, clairvoyance suggests writing the solutions of Y″ = 0 in the form Y₀(y) = c₁(b − y) + c₂ y. The general solution of the PDE and the two homogeneous BCs can be written in the form

    T(x, y) = (a₀/2)(b − y) + (b₀/2) y + Σ_{n=1}^∞ [ a_n sinh( nπ(b − y)/(a√3) ) + b_n sinh( nπy/(a√3) ) ] cos(nπx/a).

The first of the two remaining BCs requires

    0 = T(x, 0) = (a₀/2) b + Σ_{n=1}^∞ a_n sinh( nπb/(a√3) ) cos(nπx/a),

so a_n = 0 for all n ≥ 0. The last BC requires

    a/2 − |x − a/2| = T(x, b) = (b₀/2) b + Σ_{n=1}^∞ b_n sinh( nπb/(a√3) ) cos(nπx/a).

Note that

    a/2 − |x − a/2| = { x,  0 ≤ x ≤ a/2;   a − x,  a/2 ≤ x ≤ a }.
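The piecewise rewriting of the boundary temperature can be sanity-checked numerically (a = 2 is an illustrative value; any a > 0 behaves the same):

```python
def bdry(x, a=2.0):
    # a/2 - |x - a/2| on [0, a]; a = 2 is an illustrative value
    return a / 2.0 - abs(x - a / 2.0)

def bdry_piecewise(x, a=2.0):
    # the piecewise form: x on [0, a/2], a - x on [a/2, a]
    return x if x <= a / 2.0 else a - x

print(all(abs(bdry(j * 0.01) - bdry_piecewise(j * 0.01)) < 1e-12 for j in range(201)))  # True
```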


s (x? )2

√ 2000 2000 = √ = 20 5, ? x 20 5

4 · 106 + = (x? )2

r 2000 +

√ √ 4 · 106 = 4000 = 20 10, 2000 c Larry

Turyn, January 7, 2014

page 5

Figure 2: Example 13.1.3.8.

13.1.3.8. Using the quantity x defined implicitly in the figure, the length of the rectangle is 2x units and the height of the rectangle is 1/(x² + 1) units. So, the area of the rectangle is, in square units,

    f(x) = 2x · 1/(x² + 1),

where x is in the interval I = [0, ∞). We calculate

    f′(x) = 2 · 1/(x² + 1) + 2x · (−2x)/(x² + 1)² = ( 2(x² + 1) − 4x² )/(x² + 1)² = 2(1 − x²)/(x² + 1)²,

so the only critical point in the interval [0, ∞) is at x = 1. We have that f′(x) > 0 for 0 ≤ x < 1 and f′(x) < 0 for 1 < x < ∞. By Theorem 13.3 in Section 13.1, the global maximum of f(x) on the interval I is at x⋆ = 1. So, the maximum area is f(1) = 1 square unit.

13.1.3.9. We will explain the "if, and only if," in two parts. In part (a) we will explain why f″(x) ≥ 0 for all x in [a, b] implies that f is convex on [a, b]. In part (b) we will explain why f being convex on [a, b] implies that f″(x) ≥ 0 for all x in [a, b].
(a) Suppose f″(x) ≥ 0 for all x in [a, b]. Choose any x < y in [a, b] and any λ with 0 < λ < 1. Define z = λx + (1 − λ)y. Note that x < z < y.
By the Mean Value Theorem of Calculus I, there exists ξ with x ≤ ξ ≤ z and (f(z) − f(x))/(z − x) = f′(ξ); likewise, there exists η with z ≤ η ≤ y and (f(y) − f(z))/(y − z) = f′(η). Because f″ ≥ 0 on [a, b], ξ ≤ z implies f′(ξ) ≤ f′(z); likewise, z ≤ η implies f′(z) ≤ f′(η). Putting this all together, we get

    (⋆)  (f(z) − f(x))/(z − x) = f′(ξ) ≤ f′(z) ≤ f′(η) = (f(y) − f(z))/(y − z).

Because z − x > 0 and y − z > 0, we can multiply both extreme sides of (⋆) by (z − x)(y − z) without disturbing the direction of the inequality. This gives

    (y − z)( f(z) − f(x) ) ≤ (z − x)( f(y) − f(z) ),

hence

    (⋆⋆)  (y − z)f(z) + (z − x)f(z) ≤ (y − z)f(x) + (z − x)f(y).


But, y − z = y − (λx + (1 − λ)y) = λ(y − x) and z − x = (λx + (1 − λ)y) − x = (1 − λ)(y − x) substituted into (⋆⋆) imply

    (y − x)f(z) ≤ λ(y − x)f(x) + (1 − λ)(y − x)f(y).

Because y − x > 0, we can divide through by (y − x) without disturbing the direction of the inequality to get

    f( λx + (1 − λ)y ) = f(z) ≤ λf(x) + (1 − λ)f(y).

Since this was true for all such x, y, λ, we conclude that f is convex on [a, b].
(b) Choose any x in the interval (a, b). By Theorem 8.12(b) in Section 8.6,

    f″(x) = lim_{h→0} ( f(x + 2h) − 2f(x + h) + f(x) ) / h².

[See a note later about how to get this result directly without having to use Theorem 8.12(b) in Section 8.6.] Now, f(x) being convex implies that, using λ = 1/2,

    f(x + h) = f( (1/2)x + (1/2)(x + 2h) ) ≤ (1/2)f(x) + (1/2)f(x + 2h),

hence 2f(x + h) ≤ f(x + 2h) + f(x), hence f(x + 2h) − 2f(x + h) + f(x) ≥ 0. It follows that

    f″(x) = lim_{h→0} ( f(x + 2h) − 2f(x + h) + f(x) ) / h² ≥ 0.

If x₀ = a or x₀ = b, then f(x) being a twice continuously differentiable function on the interval [a, b] implies that f″(x₀) = lim_{x→x₀} f″(x) ≥ 0.
Note: If you don't want to use Theorem 8.12(b) in Section 8.6, argue this way: By definition,

    f″(x) = lim_{h→0} ( f′(x + h) − f′(x) ) / h
          = lim_{h→0} [ lim_{δ→0} ( f(x+h+δ) − f(x+h) )/δ − lim_{δ→0} ( f(x+δ) − f(x) )/δ ] / h
          = lim_{h→0, δ→0} ( f(x+h+δ) − f(x+h) − f(x+δ) + f(x) ) / (h·δ).

Since we are given that f(x) is twice continuously differentiable, the double limiting process should exist and be the same no matter how h → 0 and δ → 0, so the limit should agree with the single limit in which δ = h → 0. So,

    f″(x) = lim_{h→0} ( f(x+h+h) − f(x+h) − f(x+h) + f(x) ) / (h·h) = lim_{h→0} ( f(x + 2h) − 2f(x + h) + f(x) ) / h².
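The difference-quotient formula for f″ used in part (b) can be observed numerically (illustrative choices of f, x, and h; the one-sided quotient converges at rate O(h)):

```python
import math

# The one-sided second difference from Theorem 8.12(b),
#   f''(x) = lim_{h -> 0} (f(x + 2h) - 2 f(x + h) + f(x)) / h^2,
# tried on f = exp at x = 0.7 (illustrative values, not from the text).
f = math.exp
x, h = 0.7, 1e-4
second_diff = (f(x + 2 * h) - 2 * f(x + h) + f(x)) / h**2
print(abs(second_diff - f(x)) < 1e-3)  # True: matches f''(0.7) = e^0.7
```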

13.1.3.10. (a) f(x) = −ln x ⇒ f″(x) = 1/x² > 0 on the interval (0, ∞), so the result of problem 13.1.3.9 implies that f(x) is convex on the interval (0, ∞).
(b) f(x) = −ln x is a convex function on the interval (0, ∞). By definition, for all x, y > 0 and all t in the interval (0, 1),

    −ln( (1 − t)x + ty ) = f( (1 − t)x + ty ) ≤ (1 − t)f(x) + tf(y) = −(1 − t) ln(x) − t ln(y).

Multiply through by (−1), taking note of the reversal of the direction of the inequality, to get

    (⋆)  ln( (1 − t)x + ty ) ≥ (1 − t) ln(x) + t ln(y)

for all x, y > 0 and all t in the interval (0, 1).
(c) Use both sides of (⋆) as the exponent of e to get

    (1 − t)x + ty = e^{ln((1−t)x+ty)} ≥ e^{(1−t)ln(x) + t ln(y)} = e^{(1−t)ln(x)} e^{t ln(y)} = x^{1−t} y^t.

That is, x^{1−t} y^t ≤ (1 − t)x + ty, for all 0 ≤ t ≤ 1 and positive x, y.

13.1.3.11. Suppose that there are points x and y, possibly equal, in S = {x in I : f(x) ≤ M}, a subset of the convex set I. Then f(x) ≤ M and f(y) ≤ M. Also, for 0 < t < 1, (1 − t)x + ty is in I and, because the function f is convex on the convex set I,

    f( (1 − t)x + ty ) ≤ (1 − t)f(x) + tf(y) ≤ (1 − t)M + tM = M,

hence (1 − t)x + ty is in S. This being true for all x and y in S and 0 < t < 1, we have that S is convex, unless it is empty.

13.1.3.12. (a) For all x, y in I and t in the interval [0, 1],

    (f + g)( (1 − t)x + ty ) = f( (1 − t)x + ty ) + g( (1 − t)x + ty )
                             ≤ (1 − t)f(x) + tf(y) + (1 − t)g(x) + tg(y) = (1 − t)(f + g)(x) + t(f + g)(y),

hence f + g is convex on I.
(b) Ex. 1: f(x) = x² is convex on the interval (−∞, ∞) and g(x) = −ln x is convex on the interval (e, ∞), but [f(g(x))]″ = [(ln x)²]″ = 2(1 − ln x)/x² < 0 for all x in (e, ∞), hence f(g(x)) is not convex on (e, ∞).
Ex. 2: f(x) = e^{−x} is convex on the interval (−∞, ∞) and g(x) = x² is convex on the interval (−∞, ∞), but [f(g(x))]″ = (−2 + 4x²)e^{−x²} < 0 for all x in (−1/√2, 1/√2), hence f(g(x)) is not convex on (−∞, ∞).
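Both of the claims above lend themselves to quick numeric spot checks (random sampling is illustration, not proof):

```python
import math, random

# 1) The weighted AM-GM inequality from 13.1.3.10(c):
#    x^(1-t) * y^t <= (1-t)*x + t*y for x, y > 0 and 0 <= t <= 1.
random.seed(0)
amgm_ok = True
for _ in range(1000):
    x, y, t = random.uniform(0.01, 10.0), random.uniform(0.01, 10.0), random.random()
    amgm_ok = amgm_ok and (x ** (1 - t) * y ** t <= (1 - t) * x + t * y + 1e-12)
print(amgm_ok)  # True

# 2) 13.1.3.12(b), Ex. 1: h(x) = (ln x)^2 violates midpoint convexity on (e, oo)
h = lambda u: math.log(u) ** 2
print(h((3.0 + 20.0) / 2.0) > (h(3.0) + h(20.0)) / 2.0)  # True, so h is not convex there
```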

13.1.3.13. We will explain the "if, and only if," in two parts. In (a) we will explain why the existence of a positive γ for which f″(x) ≥ γ for all x in [a, b] implies that f is strictly convex on [a, b]. In (b) we will explain why f being strictly convex on [a, b] implies that there exists a positive γ for which f″(x) ≥ γ for all x in [a, b].
(a) Suppose there exists a positive γ for which f″(x) ≥ γ for all x in [a, b]. Choose any x < y in [a, b] and any λ with 0 < λ < 1. Define z = λx + (1 − λ)y. Note that x < z < y.
By the Mean Value Theorem of Calculus I, there exists ξ with x < ξ < z and (f(z) − f(x))/(z − x) = f′(ξ); likewise, there exists η with z < η < y and (f(y) − f(z))/(y − z) = f′(η). Because f″ ≥ γ on [a, b], ξ < z implies f′(ξ) < f′(z); likewise, z < η implies f′(z) < f′(η). Putting this all together, we get

    (⋆)  (f(z) − f(x))/(z − x) = f′(ξ) < f′(z) < f′(η) = (f(y) − f(z))/(y − z).

Because z − x > 0 and y − z > 0, we can multiply both extreme sides of (⋆) by (z − x)(y − z) without disturbing the direction of the inequality. This gives

    (y − z)( f(z) − f(x) ) < (z − x)( f(y) − f(z) ),

hence

    (⋆⋆)  (y − z)f(z) + (z − x)f(z) < (y − z)f(x) + (z − x)f(y).


But, y − z = y − (λx + (1 − λ)y) = λ(y − x) and z − x = (λx + (1 − λ)y) − x = (1 − λ)(y − x) substituted into (⋆⋆) imply

    (y − x)f(z) < λ(y − x)f(x) + (1 − λ)(y − x)f(y).

Because y − x > 0, we can divide through by (y − x) without disturbing the direction of the inequality to get

    f( λx + (1 − λ)y ) = f(z) < λf(x) + (1 − λ)f(y).

Since this was true for all such x, y, λ, we conclude that f is strictly convex on [a, b].
(b) Choose any x in the interval (a, b). By Theorem 8.12(b) in Section 8.6,

    f″(x) = lim_{h→0} ( f(x + 2h) − 2f(x + h) + f(x) ) / h².

[See a note later about how to get this result directly without having to use Theorem 8.12(b) in Section 8.6.] Now, f(x) being strictly convex implies that, using t = 1/2,

    f(x + h) = f( (1/2)x + (1/2)(x + 2h) ) < (1/2)f(x) + (1/2)f(x + 2h),

hence 2f(x + h) < f(x + 2h) + f(x), hence f(x + 2h) − 2f(x + h) + f(x) > 0.
Consider the function g(h) = f(x + 2h) − 2f(x + h) + f(x). Choose η to be small enough that x ± 2η are in the interval [a, b]. Note that g(h) is continuous and positive on the interval [−η, η]. It follows that on [−η, η], the minimum value of g(h) exists and equals a number α > 0. Define γ = η⁻² α. Then for all h in the interval [−η, η],

    ( f(x + 2h) − 2f(x + h) + f(x) ) / h² = g(h)/h² ≥ α/h² ≥ α/η² = γ.

It follows that

    f″(x) = lim_{h→0} ( f(x + 2h) − 2f(x + h) + f(x) ) / h² ≥ γ.

If x₀ = a or x₀ = b, then f(x) being a twice continuously differentiable function on the interval [a, b] implies that f″(x₀) = lim_{x→x₀} f″(x) ≥ γ.
Note: If you don't want to use Theorem 8.12(b) in Section 8.6, argue this way: By definition,

    f″(x) = lim_{h→0} ( f′(x + h) − f′(x) ) / h
          = lim_{h→0} [ lim_{δ→0} ( f(x+h+δ) − f(x+h) )/δ − lim_{δ→0} ( f(x+δ) − f(x) )/δ ] / h
          = lim_{h→0, δ→0} ( f(x+h+δ) − f(x+h) − f(x+δ) + f(x) ) / (h·δ).

Since we are given that f(x) is twice continuously differentiable, the double limiting process should exist and be the same no matter how h → 0 and δ → 0, so the limit should agree with the single limit in which δ = h → 0. So,

    f″(x) = lim_{h→0} ( f(x+h+h) − f(x+h) − f(x+h) + f(x) ) / (h·h) = lim_{h→0} ( f(x + 2h) − 2f(x + h) + f(x) ) / h².

13.1.3.14. Suppose f has two distinct global minimizers, x1 and x2, in I. So, f(x1) = f(x2) = m is the global minimum value of f on I. Because I is an interval, it is convex. So, with t = 0.5 we have that x = 0.5(x1 + x2) is in I. By strict convexity,

    f(x) = f( (1 − 0.5)x1 + 0.5x2 ) < (1 − 0.5)f(x1) + 0.5f(x2) = 0.5m + 0.5m = m.

But f(x) < m contradicts that m is the global minimum of f on I. So, no, f cannot have two distinct global minimizers in I if f is strictly convex on the interval I.


Section 13.2.3

13.2.3.1. (a) ∇f(x) exists at all x = (x, y), so we need only to find all (x, y) for which both ∂f/∂x = 0 and ∂f/∂y = 0, that is,

    ∂f/∂x = 2x + y² = 0,  ∂f/∂y = 2xy + 6y = 0,

that is,

    (⋆)  2x + y² = 0,  2y(x + 3) = 0.

The second equation is true when (1) y = 0 or (2) x = −3. Substitute (1) into the first equation to get 2x + 0 = 0, hence x = 0. Alternatively, substitute (2) into the first equation to get 2(−3) + y² = 0, hence y = ±√6.
So, there are exactly three critical points: (x, y) = (0, 0), (−3, √6), and (−3, −√6).
(b) The Hessian matrix is

    D²f(x, y) = [ 2, 2y ; 2y, 2(x + 3) ],

so

    det D²f(0, 0) = det [ 2, 0 ; 0, 6 ] = 12 > 0,

and a11 > 0. It follows that the matrix A is positive definite, so (0, 0) is a local minimum point. Also,

    det D²f(−3, ±√6) = det [ 2, ±2√6 ; ±2√6, 0 ] = −24 < 0,

so both (−3, √6) and (−3, −√6) are saddle points.

13.2.3.2. ∇f(x) exists at all x = (x, y), so we need only to find all (x, y) for which both ∂f/∂x = 0 and ∂f/∂y = 0, that is,

    (⋆)  ∂f/∂x = −2x(1 − y²) = 0,  ∂f/∂y = −2y(2 − x²) = 0.

The first equation is true when (1) x = 0 or (2) y = −1 or (3) y = 1. The second equation is true when (4) x = −√2 or (5) x = √2 or (6) y = 0. For both equations in (⋆) to be simultaneously true, we must have both one of (1), (2), or (3) and one of (4), (5), or (6). In principle, there are nine possibilities:

    (1) and (4):  x = 0 and x = −√2 : impossible
    (1) and (5):  x = 0 and x = √2 : impossible
    (1) and (6):  x = 0 and y = 0 : (x, y) = (0, 0)
    (2) and (4):  y = −1 and x = −√2 : (x, y) = (−√2, −1)
    (2) and (5):  y = −1 and x = √2 : (x, y) = (√2, −1)
    (2) and (6):  y = −1 and y = 0 : impossible


    (3) and (4):  y = 1 and x = −√2 : (x, y) = (−√2, 1)
    (3) and (5):  y = 1 and x = √2 : (x, y) = (√2, 1)
    (3) and (6):  y = 1 and y = 0 : impossible.

So, there are exactly five critical points: (x, y) = (0, 0), (±√2, −1), and (±√2, 1).
(b) The Hessian matrix is

    D²f(x, y) = [ −2(1 − y²), 4xy ; 4xy, −2(2 − x²) ],

so

    det D²f(0, 0) = det [ −2, 0 ; 0, −4 ] = 8 > 0,

and a11 < 0. It follows that the matrix −A is positive definite, so the matrix A is negative definite, hence (0, 0) is a local maximum point. Also,

    det D²f(±√2, −1) = det [ 0, ∓4√2 ; ∓4√2, 0 ] = −32 < 0,

so both (√2, −1) and (−√2, −1) are saddle points. Also,

    det D²f(±√2, 1) = det [ 0, ±4√2 ; ±4√2, 0 ] = −32 < 0,

so both (√2, 1) and (−√2, 1) are saddle points.

13.2.3.3. (a) ∇f(x) exists at all x = (x, y), so we need only to find all (x, y) for which both ∂f/∂x = 0 and ∂f/∂y = 0, that is,

    ∂f/∂x = 6xy − 6x = 0,  ∂f/∂y = 3y² + 3x² − 6y = 0,

that is,

    (⋆)  6x(y − 1) = 0,  3(x² + y² − 2y) = 0.

The first equation is true when (1) x = 0 or (2) y = 1. Substitute (1) into the second equation to get 3(0 + y² − 2y) = 0, hence y = 0 or y = 2. Alternatively, substitute (2) into the second equation to get 3(x² + 1 − 2) = 0, hence x = −1 or x = 1.
So, there are exactly four critical points: (x, y) = (0, 0), (0, 2), (−1, 1), and (1, 1).
(b) The Hessian matrix is

    D²f(x, y) = [ 6(y − 1), 6x ; 6x, 6(y − 1) ],

so

    det D²f(0, 0) = det [ −6, 0 ; 0, −6 ] = 36 > 0,

and a11 < 0. It follows that the matrix −A is positive definite, so the matrix A is negative definite, so (0, 0) is a local maximum point. Also,

    det D²f(0, 2) = det [ 6, 0 ; 0, 6 ] = 36 > 0,


and a11 > 0. It follows that the matrix A is positive definite, hence (0, 2) is a local minimum point. Also,

    det D²f(±1, 1) = det [ 0, ±6 ; ±6, 0 ] = −36 < 0,

so both (−1, 1) and (1, 1) are saddle points.

13.2.3.4. ∇f(x) exists at all x = (x, y), so we need only to find all (x, y) for which both ∂f/∂x = 0 and ∂f/∂y = 0, that is,

    ∂f/∂x = (1 − 2x²) e^{−x²} sin²y = 0,  ∂f/∂y = x e^{−x²} · 2 sin y cos y = 0,

that is,

    (⋆)  (1 − 2x²) sin²y = 0,  x sin(2y) = 0,

because e^{−x²} is never zero. The first equation is true when (1) x = −1/√2 or (2) x = 1/√2 or (3) y = nπ for some integer n. The second equation is true when (4) x = 0 or (5) y = mπ/2 for some integer m. For both equations in (⋆) to be simultaneously true, we must have both one of (1), (2), or (3) and one of (4) or (5). In principle, there are six possibilities:

    (1) and (4):  x = −1/√2 and x = 0 : impossible
    (1) and (5):  x = −1/√2 and y = mπ/2 : (x, y) = (−1/√2, mπ/2)
    (2) and (4):  x = 1/√2 and x = 0 : impossible
    (2) and (5):  x = 1/√2 and y = mπ/2 : (x, y) = (1/√2, mπ/2)
    (3) and (4):  y = nπ and x = 0 : (x, y) = (0, nπ)
    (3) and (5):  y = nπ and y = mπ/2 : (x, y) = (x, nπ), for any x in the interval −1 ≤ x ≤ 1,

where n and m are any integers. In the domain {(x, y) : −1 ≤ x ≤ 1, −π/4 ≤ y ≤ 5π/4}, the only critical points are at (x, y) = (−1/√2, π/2), (1/√2, π/2), and (x, 0) and (x, π) for any x in the interval −1 ≤ x ≤ 1.

1 π − √ , , 2 2

13.2.3.5. We will use the method of Lagrange multipliers, with f (x, y) , 4 + x − x2 − 2y 2 and g(x, y) = x2 +y 2 −2. At a maximizer x? = (x, y) that satisfies the constraint g(x, y) = 0, there is a Lagrange multiplier λ such that  (1 − 2x)ˆ ı − 4y ˆ = ∇f (x? ) = λ∇g(x? ) = λ 2x ˆ ı + 2y ˆ , as long as the technical requirement, 0 6= ∇g(x? ) = 2x ˆ ı + 2y ˆ, that is, x? 6= 0, is true. So,    (1 − 2x) = 2λx  ,   −4y = 2λy c Larry

Turyn, January 7, 2014

page 12

that is, (?)

  2(1 + λ)x 

2(2 + λ)y

 =1  =0

,



The second equation is true when (1) y = 0 or (2) λ = −2. 1 Substitute (1) into the first equation to get x = , as long as λ 6= −1. Alternatively, substitute 2(1 + λ) 1 (2) into the first equation to get 2(−1)x = 1, hence x = − . 2   1 So far, the candidates for finding a maximizer are (x, y) = , 0 , for λ 6= −1, and (x, y) = 2(1 + λ)   1 − , y , for any y. Which, if any, of these candidates satisfies the constraint 0 = g(x, y) = x2 + y 2 − 2 ? 2   √ 1 The first choice, (x, y) = , 0 , satisfies 0 = x2 + y 2 − 2 only at the points (x, y) = (± 2, 0). 2(1 + λ)  1  The second choice, (x, y) = − , y , satisfies 0 = x2 + y 2 − 2 only if 0 = 14 + y 2 − 2, that is, only at √  2  1 7 the points (x, y) = − , ± . 2 2 √   √ 1 7 . We calculate To summarize, the only possible maximizers are (x, y) = (± 2, 0) and − , ± 2 2 √ √ √ f (± 2, 0) = 4 ± 2 − 2 − 2 · 0 = 2 ± 2 and

√  1 7 1 1 7 1 f − , ± =4− − −2· =− . 2 2 2 4 4 4 √ √ So, the absolute maximum value 2 + 2 is achieved at (x, y) = ( 2, 0). 

13.2.3.6. A maximizer or minimizer can occur either strictly inside the ellipse 3x2 + y 2 ≤ 6, where we will use Fermat’s theorem, or on the boundary, 3x2 + y 2 = 6, where we will use the method of Lagrange multipliers. Define f (x, y) , x2 + 2y 2 and g(x, y) = 3x2 + y 2 − 6. Inside the ellipse, 0 = ∇f (x? ) = 2x ˆ ı + 4y ˆ only at (x, y) = (0, 0). The point (x, y) = (0, 0) will be among our candidates for a maximizer and a minimizer. On the other hand, at a maximizer x? = (x, y) on the boundary of the ellipse, there is a Lagrange multiplier λ such that  2x ˆ ı + 4y ˆ = ∇f (x? ) = λ∇g(x? ) = λ 6x ˆ ı + 2y ˆ , as long as the technical requirement, 0 6= ∇g(x? ) = 6x ˆ ı + 2y ˆ, that is, x? 6= 0, is true. So,    2x = 6λx  ,   4y = 2λy that is, (?)

   2(1 − 3λ)x = 0  

2(2 − λ)y

=0

,



1 The first equation is true when (1) x = 0 or (2) λ = . The second equation is true when (3) y = 0 or (4) 3 λ = 2.

c Larry

Turyn, January 7, 2014

page 13

In principle, there are four possibilities: (1)

and (3) :

x = 0 and y = 0 : (x, y) = (0, 0)

(1)

and (4) :

x = 0 and λ = 2 : (x, y) = (0, y)

(2)

and (3) :

λ=

1 3

and y = 0 : (x, y) = (x, 0)

1 and λ = 2 : impossible 3 So, our candidates for finding a maximizer and a minimizer are (x, y) = (0, y), for any y, and (x, y) = (x, 0), for any x. Note that the case of (x, y) = (0, 0) is subsumed by the candidates already listed. Which, if any, of these candidates satisfies the constraint 0 = g(x, y) = 3x2 + y 2 − 6 ? √ 6). The first choice, (x, y) = (0, y), satisfies 0 = 3x2 + y 2 − 6 only at the points (x, y) = (0, ± √ The second choice, (x, y) = (x, 0), satisfies 0 = 3x2 + y 2 − 6 only at the points (x, y) = ( 2, 0). To summarize, the only possible extrema √ are at (x,√y) = (0, 0), from Fermat’s Theorem, that is, Theorem 13.6 in Section 13.2, and at (x, y) = (0, ± 6) and (± 2, 0), from the method of Lagrange multipliers. We calculate f (0, 0) = 0, √ f (0, ± 6) = 0 + 2 · 6 = 12, (2)

and (4) :

λ=

and

√ f (± 2, 0) = 2 + 0 = 2, √ So, the absolute maximum value 12 is achieved at (x, y) = (0, ± 6), and the absolute minimum value 0 is achieved at (x, y) = (0, 0). 13.2.3.7. A maximizer or minimizer can occur either strictly inside the disk x2 + y 2 ≤ 4, where we will use Fermat’s Theorem, that is, Theorem 13.6 in Section 13.2, or on the boundary x2 + y 2 = 4, where we will use the method of Lagrange multipliers. Define f (x, y) , 2x2 + y 2 and g(x, y) = x2 + y 2 − 4. Inside the disk, 0 = ∇f (x? ) = 4x ˆ ı + 2y ˆ only at (x, y) = (0, 0). On the other hand, at a maximizer x? = (x, y) on the boundary of the disk, there is a Lagrange multiplier λ such that  4x ˆ ı + 2y ˆ = ∇f (x? ) = λ∇g(x? ) = λ 2x ˆ ı + 2y ˆ , as long as the technical requirement, 0 6= ∇g(x? ) = 2x ˆ ı + 2y ˆ, that is, x? 6= 0, is true. So,    4x = 2λx  ,   2y = 2λy that is, (?)

  2(2 − λ)x 

2(1 − λ)y

 =0  =0

,



The first equation is true when (1) x = 0 or (2) λ = 2. The second equation is true when (3) y = 0 or (4) λ = 1. In principle, there are four possibilities: (1)

and (3) :

x=0

and

y = 0 : (x, y) = (0, 0)

(1)

and (4) :

x=0

and

λ = 1 : (x, y) = (0, y)

(2)

and (3) :

λ=2

and

y = 0 : (x, y) = (x, 0)

(2)

and (4) :

λ=2

and

λ = 1 : impossible c Larry

Turyn, January 7, 2014

page 14

So, there are candidates for finding a maximizer are (x, y) = (0, y), for any y, and (x, y) = (x, 0), for any x. Note that the case of (x, y) = (0, 0) is subsumed by the candidates already listed. Which, if any, of these candidates satisfies the constraint 0 = g(x, y) = x2 + y 2 − 4 ? The first choice, (x, y) = (0, y), satisfies 0 = x2 + y 2 − 4 only at the points (x, y) = (0, ±2). The second choice, (x, y) = (x, 0), satisfies 0 = x2 + y 2 − 4 only at the points (x, y) = (2, 0). To summarize, the only possible extrema are at (x, y) = (0, 0), from Fermat’s Theorem, and at (x, y) = (0, ±2) and (±2, 0), from the method of Lagrange multipliers. We calculate f (0, 0) = 0, f (0, ±2) = 2 · 0 + 4 = 4, and f (±2, 0) = 2 · 4 + 0 = 8, So, the absolute maximum value 8 is achieved at (x, y) = (±2, 0), and the absolute minimum value 0 is achieved at (x, y) = (0, 0). 13.2.3.8. Define f (x, y) , e−x sin(x + y). We have  0 = ∇f (x? ) = − e−x sin(x + y) + e−x cos(x + y) ˆ ı − e−x sin(x + y) ˆ, which implies   −e−x sin(x + y) + e−x cos(x + y) −e−x sin(x + y)



 =0  =0

,



that is, (?)

  − sin(x + y) + cos(x + y) − sin(x + y)



 =0  =0

,



because e−x is never zero. Substitute the second equation into the first equation to conclude that 0 + cos(x + y) = 0. So, any critical point (x, y) would have 1 = cos2 (x + y) + sin2 (x + y) = 02 + 02 , which is impossible. So, f (x, y) has no critical point. Define xn = π2 − nπ for all positive integers n. We have  π − 2kπ = lim e−π/2 e2kπ · 1 = ∞ lim f (x2k , 0) = lim e−π/2 e2kπ sin k→∞ k→∞ k→∞ 2 and lim f (x2`−1 , 0) = lim e−π/2 e2(2`−1)π sin

`→∞

`→∞

π 2

 − (2` − 1)π = lim e−π/2 e2(2`−1)π · (−1) = −∞. `→∞

The function f (x, y) , e−x sin(x + y) is continuous in (x, y) in R2 , because e−x is continuous in x and sin(x + y) is continuous in (x + y), which is continuous in (x, y). Because f (x, y) is continuous on R2 , the values that f (x, y) produces is a closed set. Because f (x, y) takes on arbitrarily large positive values and arbitrarily large negative values, f (x, y) takes on all values between −∞ and ∞. 13.2.3.9. [Note that the problem has been corrected to include the assumption that A is real.] If the real matrix is A = [ aij ], the characteristic equation is a11 − λ a12 | A − λI | = = (λ1 − λ)(λ2 − λ) = λ2 − (a11 + a22 )λ + |A|, a12 a22 − λ so the quadratic equation gives us that the eigenvalues are p a11 + a22 ± (a11 + a22 )2 − 4 |A| λ= . 2 c Larry

Turyn, January 7, 2014

page 15

Because we are assuming that |A| < 0, (a11 + a22 )2 − 4 |A| ≥ −4 |A| > 0, hence the eigenvalues are real and distinct. More is true: Because |A| < 0, (a11 + a22 )2 − 4 |A| > (a11 + a22 )2 ≥ 0, hence p p (a11 + a22 )2 − 4 |A| > (a11 + a22 )2 = | a11 + a22 |. It follows that the larger eigenvalue is p a11 + a22 + (a11 + a22 )2 − 4 |A| a11 + a22 + | a11 + a22 | λ1 , > > 0, 2 2 and the smaller eigenvalue is λ2 ,

a11 + a22 −

p

(a11 + a22 )2 − 4 |A| a11 + a22 − | a11 + a22 | < < 0. 2 2

So, yes, |A| < 0 implies that the matrix |A| has one negative and one positive eigenvalue.  T 13.2.3.10. Method 1 : Suppose the vector x = x1 ... xn minimizes ||Ax − b||2 over the set Rn . Note that for all vectors x and y in Rn , hx, Ayi = hAT x, yi. Then, using the assumption that A is real and using Fermat’s Theorem, that is, Theorem 13.6 in Section 13.2, implies that for k = 1, ...n, h i h i h i 0 = ∇ ||Ax − b||2 = ∇ hAx − b, Ax − bi = ∇ hAx, Axi − hAx, bi − hb, Axi + hb, bi h i h i = ∇ hAT Ax, xi − hb, Axi − hb, Axi + || b ||2 = ∇ hAT Ax, xi − 2hAT b, xi + || b ||2 . So, Fermat’s Theorem says that all critical points satisfy h i h i (?) 0 = ∇ ||Ax − b||2 = ∇ hAT Ax, xi − 2AT b + 0, by using the result of problem 13.2.3.17 with c replaced by AT b. In addition, from results for problem 6.7.6.33, we know that if g(x) , xT Cx and C is symmetric, then ∇g(x) = 2Cx. It follows that for problem 13.2.3.10, we have the fact that C , (AT A) being symmetric implies that h i ∇ hAT Ax, xi = 2(AT A)x. Substituting this into (?) gives h i 0 = ∇ ||Ax − b||2 = 2(AT A)x − 2AT b, hence x solves the system of equations AT Ax = AT b, the normal equations, as were were asked to explain. In fact, the minimizer must be a critical point, because we are minimizing over all of the set Rn and lim||x||→∞ ||Ax − b||2 = ∞. So, the minimizer must satisfy the normal equations.

c Larry

Turyn, January 7, 2014

page 16

Method 2: Suppose the vector x = [x1 ... xn]ᵀ minimizes ||Ax − b||² over the set Rⁿ. Then, using the assumption that A is real and using Fermat's Theorem, that is, Theorem 13.6 in Section 13.2, for k = 1, ..., n,

0 = ∂/∂x_k [ ||Ax − b||² ] = ∂/∂x_k Σ_{i=1}^n ( (Ax)_i − b_i )² = ∂/∂x_k Σ_{i=1}^n ( Σ_{j=1}^n a_ij x_j − b_i )²

= ∂/∂x_k Σ_i [ ( Σ_j a_ij x_j )( Σ_ℓ a_iℓ x_ℓ ) − 2 b_i Σ_j a_ij x_j + b_i² ]

= ∂/∂x_k Σ_i Σ_j a_ij² x_j²  +  ∂/∂x_k Σ_i Σ_j Σ_{ℓ≠j} a_ij a_iℓ x_ℓ x_j  −  ∂/∂x_k Σ_i Σ_j 2 b_i a_ij x_j  +  0.

So,

0 = Σ_i a_ik² · 2x_k + 2 Σ_i Σ_{j≠k} a_ij a_ik x_j − Σ_i 2 b_i a_ik = 2 Σ_i Σ_j a_ij a_ik x_j − 2 Σ_i b_i a_ik.

So, Fermat's Theorem says that all critical points satisfy

0 = ∇[ ||Ax − b||² ] = [ ∂/∂x_1 ||Ax − b||² ; … ; ∂/∂x_n ||Ax − b||² ]
  = [ 2 Σ_i Σ_j a_ij a_i1 x_j − Σ_i 2 b_i a_i1 ; … ; 2 Σ_i Σ_j a_ij a_in x_j − Σ_i 2 b_i a_in ].

We also calculate the terms that appear in the normal equations:

Aᵀb = [ (Aᵀb)_1 ; … ; (Aᵀb)_n ] = [ Σ_i a_i1 b_i ; … ; Σ_i a_in b_i ]

and

Aᵀ(Ax) = [ Σ_i a_i1 (Ax)_i ; … ; Σ_i a_in (Ax)_i ] = [ Σ_i a_i1 Σ_j a_ij x_j ; … ; Σ_i a_in Σ_j a_ij x_j ].

So, we have that Fermat's Theorem says that critical points satisfy

0 = ∇[ ||Ax − b||² ] = 2Aᵀ(Ax) − 2Aᵀb,

hence that all critical points satisfy the normal equations, AᵀAx = Aᵀb. In fact, the minimizer must be a critical point, because we are minimizing over all of the set Rⁿ and lim_{||x||→∞} ||Ax − b||² = ∞. So, the minimizer must satisfy the normal equations.
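As a quick numerical sanity check of the normal-equation characterization (the 3×2 matrix A and vector b below are made-up data, not from the exercises), one can solve AᵀAx = Aᵀb for a small least-squares fit and confirm that the gradient 2Aᵀ(Ax − b) vanishes at that point:

```python
# Check that the solution of the normal equations A^T A x = A^T b
# makes the gradient 2 A^T (Ax - b) vanish.  Made-up data: fit a line
# y = x0 + x1*t through the points (0,1), (1,2), (2,4).

A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 4.0]

def mat_t_mat(A):                       # A^T A for a tall m x 2 matrix
    return [[sum(row[r] * row[c] for row in A) for c in range(2)]
            for r in range(2)]

def mat_t_vec(A, v):                    # A^T v
    return [sum(A[i][r] * v[i] for i in range(len(A))) for r in range(2)]

AtA = mat_t_mat(A)                      # [[3, 3], [3, 5]]
Atb = mat_t_vec(A, b)                   # [7, 10]

# Solve the 2x2 normal equations by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = [(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det]

# Residual r = Ax - b; at the minimizer, A^T r should be the zero vector.
r = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(3)]
grad = mat_t_vec(A, r)
print(x, grad)
```

Here x comes out as [5/6, 3/2] and both components of the gradient are zero to rounding error.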


13.2.3.11. f(x) = (Ax − b)ᵀWᵀ(Ax − b) = ... = xᵀAᵀWᵀAx − 2bᵀWᵀAx + bᵀWᵀb.

Using the result of problem 6.7.6.33, that is, ∇(xᵀAx) = (A + Aᵀ)x, we have

∇( xᵀAᵀWᵀAx ) = ( AᵀWᵀA + (AᵀWᵀA)ᵀ )x = ( AᵀWᵀA + AᵀWA )x = 2AᵀWᵀAx,

because W is symmetric. Using the result of problem 13.2.3.17, ∇( bᵀWᵀAx ) = ( bᵀWᵀA )ᵀ = AᵀWb = AᵀWᵀb. So, a minimizer of f(x) must satisfy 0 = 2AᵀWᵀAx − 2AᵀWᵀb + 0, hence must satisfy the generalized normal equations (AᵀWᵀA)x = AᵀWᵀb.

13.2.3.12. Define f(x, y) := e^{−x} sin(x + y). Any critical point would satisfy

0 = ∇f(x⋆) = ( −e^{−x} sin(x + y) + e^{−x} cos(x + y) ) î + e^{−x} cos(x + y) ĵ,

which implies the system

{ −e^{−x} sin(x + y) + e^{−x} cos(x + y) = 0,  e^{−x} cos(x + y) = 0 },

that is,

(⋆)  { −sin(x + y) + cos(x + y) = 0,  cos(x + y) = 0 },

because e^{−x} is never zero. Substitute the second equation into the first equation to conclude that sin(x + y) = 0 as well. So, any critical point (x, y) would have 1 = cos²(x + y) + sin²(x + y) = 0² + 0², which is impossible. So, f(x, y) has no critical point.

(a) Note that e^{−x} > 0 at all x, so ∂f/∂x has the sign of cos(x + y) − sin(x + y) and ∂f/∂y has the sign of cos(x + y). To have ∂f/∂x > 0 and ∂f/∂y > 0, we want to find (x, y) for which cos(x + y) − sin(x + y) > 0 and cos(x + y) > 0. We could either do some guessing or be more analytical by using the amplitude-phase form to note that

cos(x + y) − sin(x + y) = √2 cos( x + y + π/4 ).

So, it will suffice to find (x, y) for which −π/2 < x + y + π/4 < π/2 and −π/2 < x + y < π/2. One such example is (x, y) = (−π/6, 0).

(b) Using the analytical work in part (a), to have ∂f/∂x > 0 and ∂f/∂y < 0, it will suffice to find (x, y) for which 3π/2 < x + y + π/4 < 5π/2 and π/2 < x + y < 3π/2. One such example is (x, y) = (4π/3, 0).

(c) Using the analytical work in part (a), to have ∂f/∂x < 0 and ∂f/∂y > 0, it will suffice to find (x, y) for which π/2 < x + y + π/4 < 3π/2 and −π/2 < x + y < π/2. One such example is (x, y) = (π/3, 0).

(d) Using the analytical work in part (a), to have ∂f/∂x < 0 and ∂f/∂y < 0, it will suffice to find (x, y) for which π/2 < x + y + π/4 < 3π/2 and π/2 < x + y < 3π/2. One such example is (x, y) = (5π/6, 0).

13.2.3.13. Example 13.5 in Section 13.2 concluded that (x, y) = (3/4, −3/16) is the point on the curve y = x² − x closest to the line Y = ½X − 6. Example 13.5 in Section 13.2 gave 2(Y − y) = 2µ = −4(X − x), hence 2(Y + 3/16) = −4(X − 3/4), that is, Y = −2X + 21/16. Substitute into the latter equation that Y = ½X − 6, hence −6 + ½X = Y = −2X + 21/16. It follows that X = 117/40, hence Y = −363/80, as we desired.
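The closest-point data of 13.2.3.13 can be confirmed with exact rational arithmetic; the coordinates below are the values X = 117/40, Y = −363/80 computed above, together with the curve point (3/4, −3/16):

```python
from fractions import Fraction as F

x, y = F(3, 4), F(-3, 16)          # point on the curve y = x^2 - x
X, Y = F(117, 40), F(-363, 80)     # point on the line Y = X/2 - 6

assert y == x * x - x              # the curve point lies on the curve
assert Y == X / 2 - 6              # the line point lies on the line
# The connecting segment has slope -2, perpendicular to the line of slope 1/2:
assert (Y - y) / (X - x) == -2
print("closest-point checks pass")
```

Exact fractions avoid any floating-point tolerance question in the perpendicularity check.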

13.2.3.14. Define k(x, y) := 5xe^y − x⁵ − e^{5y}. We have that any critical point (x, y) would satisfy

0 = ∇k(x⋆) = ( 5e^y − 5x⁴ ) î + ( 5xe^y − 5e^{5y} ) ĵ,

which implies

{ e^y − x⁴ = 0,  xe^y − e^{5y} = 0 },

that is,

(⋆)  { e^y = x⁴,  xe^y = e^{5y} > 0 },

hence e^y > 0 and e^{5y} > 0 imply that x > 0. Further, the second equation implies that 0 = e^y (x − e^{4y}), hence x = e^{4y} because e^y cannot be zero. Substitute this into the first equation to conclude that

0 = e^y − x⁴ = e^y − (e^{4y})⁴ = e^y − e^{16y} = e^y (1 − e^{15y}),

hence e^{15y} = 1, hence y = 0 and x = e^{4·0} = 1. So, the only critical point is at (x, y) = (1, 0). In order to use the second derivative test, we calculate that the Hessian matrix is

D²k(x, y) = [ −20x³   5e^y ; 5e^y   5xe^y − 25e^{5y} ],

so

det D²k(1, 0) = | −20  5 ; 5  −20 | = 375 > 0,

and a11 < 0. It follows that the matrix −D²k(1, 0) is positive definite, hence the matrix D²k(1, 0) is negative definite, so (1, 0) is a local maximum point of k(x, y). We have k(1, 0) = 3, but

lim_{x→−∞} k(x, 0) = lim_{x→−∞} ( 5x − x⁵ − 1 ) = lim_{x→−∞} ( −x⁵ )( 1 − 5/x⁴ + 1/x⁵ ) = ∞,
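The second-derivative-test data at the critical point can be spot-checked by central finite differences; the step size h below is an arbitrary choice:

```python
import math

def k(x, y):
    return 5 * x * math.exp(y) - x**5 - math.exp(5 * y)

h = 1e-3  # arbitrary finite-difference step

# Central-difference approximations to the Hessian entries of k at (1, 0).
kxx = (k(1 + h, 0) - 2 * k(1, 0) + k(1 - h, 0)) / h**2
kyy = (k(1, h) - 2 * k(1, 0) + k(1, -h)) / h**2
kxy = (k(1 + h, h) - k(1 + h, -h) - k(1 - h, h) + k(1 - h, -h)) / (4 * h**2)

# Analytic values: k_xx = -20, k_xy = 5, k_yy = -20, det = 375.
print(kxx, kxy, kyy, kxx * kyy - kxy**2)
```

The approximations land within about 10⁻⁴ of −20, 5, −20, and 375, confirming a negative-definite Hessian (local maximum) at (1, 0).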

so k(x, y) has a local maximum at (x, y) = (1, 0), but the function k(x, y) does not have a global maximum there, despite the fact that the function has no other critical point.

13.2.3.15. Any critical point (w, h, ℓ) of f(w, h, ℓ) := hwℓ satisfies

0 = ∇f = hℓ ŵ + wℓ ĥ + hw ℓ̂,

that is,

(⋆)  { hℓ = 0,  wℓ = 0,  hw = 0 }.

The first equation implies either (1) h = 0 or (2) ℓ = 0, the second equation implies either (3) w = 0 or (4) ℓ = 0, and the third equation implies that either (5) h = 0 or (6) w = 0. In order to satisfy the three equations simultaneously, we need to have one of what are, in principle, eight possibilities:

(1) and (3) and (5):  h = 0 and w = 0 and h = 0 :  (w, h, ℓ) = (0, 0, ℓ)
(1) and (3) and (6):  h = 0 and w = 0 and w = 0 :  (w, h, ℓ) = (0, 0, ℓ)
(1) and (4) and (5):  h = 0 and ℓ = 0 and h = 0 :  (w, h, ℓ) = (w, 0, 0)
(1) and (4) and (6):  h = 0 and ℓ = 0 and w = 0 :  (w, h, ℓ) = (0, 0, 0)
(2) and (3) and (5):  ℓ = 0 and w = 0 and h = 0 :  (w, h, ℓ) = (0, 0, 0)
(2) and (3) and (6):  ℓ = 0 and w = 0 and w = 0 :  (w, h, ℓ) = (0, h, 0)
(2) and (4) and (5):  ℓ = 0 and ℓ = 0 and h = 0 :  (w, h, ℓ) = (w, 0, 0)
(2) and (4) and (6):  ℓ = 0 and ℓ = 0 and w = 0 :  (w, h, ℓ) = (0, h, 0)

So, the critical points of the function f(h, w, ℓ) lie only on the lines h = w = 0, w = ℓ = 0, and ℓ = h = 0.

13.2.3.17. ∇( cᵀx ) = ∇( c1 x1 + ... + cn xn ) = [ c1  c2  ...  cn ]ᵀ = c.
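The identity ∇(cᵀx) = c of 13.2.3.17 can be spot-checked by central finite differences; the coefficient vector c and evaluation point below are arbitrary test values:

```python
c = [2.0, -1.0, 0.5]                 # arbitrary coefficient vector
p = [0.3, 1.7, -4.0]                 # arbitrary evaluation point
h = 1e-6

def f(v):                            # f(x) = c^T x
    return sum(ci * vi for ci, vi in zip(c, v))

grad = []
for j in range(3):                   # central difference in each coordinate
    vp, vm = p[:], p[:]
    vp[j] += h
    vm[j] -= h
    grad.append((f(vp) - f(vm)) / (2 * h))

print(grad)   # approximately equal to c
```

Since f is linear, the central difference is exact up to rounding, so the computed gradient matches c essentially to machine precision.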


Section 13.3.3

13.3.3.1.

[ 1  −1  1  0 | −7 ]
[ 3  −4  0  1 | 11 ]

∼ (−3R1 + R2 → R2)

[ 1  −1   1  0 | −7 ]
[ 0  −1  −3  1 | 32 ]

∼ (−R2 → R2, then R2 + R1 → R1)

[ 1  0  4  −1 | −39 ]
[ 0  1  3  −1 | −32 ]

The solutions of the linear system of algebraic equations are given by

(⋆)  (x1, x2, x3, x4) = ( −39 − 4c1 + c2,  −32 − 3c1 + c2,  c1,  c2 ),

where x3 = c1 and x4 = c2 are arbitrary constants.

Basic feasible solutions are x = [x1, ..., x4]ᵀ that fit the form (⋆), have all of x1, ..., x4 ≥ 0, and have at least two of the xi = 0. In principle, there are six possible ways to choose two of the four xi = 0:

Case 1: x1 = x2 = 0 imply

[ c1 ; c2 ] = [ −4  1 ; −3  1 ]⁻¹ [ 39 ; 32 ] = (1/(−1)) [ 1  −1 ; 3  −4 ] [ 39 ; 32 ] = [ −7 ; 11 ],

hence x3 = c1 = −7 < 0, hence there is no basic feasible solution in this case.

Case 2: x1 = x3 = 0 implies c1 = 0, hence 0 = x1 = −39 + c2 would imply c2 = 39, hence x2 = −32 + 39 = 7, hence the unique basic solution in this case is given by x = [x1, ..., x4]ᵀ = [0, 7, 0, 39]ᵀ.

Case 3: x1 = x4 = 0 implies c2 = 0, hence 0 = x1 = −39 − 4c1 would imply x3 = c1 = −39/4 < 0, hence there is no basic feasible solution in this case.

Case 4: x2 = x3 = 0 implies c1 = 0, hence 0 = x2 = −32 + c2 would imply c2 = 32, hence x1 = −7 < 0, hence there is no basic feasible solution in this case.

Case 5: x2 = x4 = 0 implies c2 = 0, hence 0 = x2 = −32 − 3c1 would imply x3 = c1 = −32/3 < 0, hence there is no basic feasible solution in this case.

Case 6: x3 = x4 = 0 implies c1 = c2 = 0, hence x1 = −39 < 0, hence there is no basic feasible solution in this case.

[Alternatively, we could have broken the problem into the six cases from the beginning of the work.]

To summarize, there is exactly one basic feasible solution: x = [x1, ..., x4]ᵀ = [0, 7, 0, 39]ᵀ.
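The case-by-case search above amounts to enumerating all candidate bases. A generic sketch of that enumeration, using exact rational arithmetic, confirms the conclusion for this system:

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints of 13.3.3.1 in standard form:
#   x1 - x2 + x3 = -7,   3x1 - 4x2 + x4 = 11.
A = [[F(1), F(-1), F(1), F(0)], [F(3), F(-4), F(0), F(1)]]
b = [F(-7), F(11)]

def solve2(M, rhs):
    """Solve a 2x2 system by Cramer's rule; return None if singular."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if d == 0:
        return None
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / d,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / d]

bfs = set()
for cols in combinations(range(4), 2):          # choose two basic variables
    sol = solve2([[A[r][c] for c in cols] for r in range(2)], b)
    if sol is not None and all(v >= 0 for v in sol):
        x = [F(0)] * 4
        x[cols[0]], x[cols[1]] = sol
        bfs.add(tuple(x))

print(sorted(bfs))   # the single basic feasible solution (0, 7, 0, 39)
```

Only the basis {x2, x4} yields a nonnegative solution, matching the case analysis.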

13.3.3.2.

[ 1   −1   1  1 | 0 ]
[ −2   2  −1  0 | 0 ]
[ 0    3   2  0 | 4 ]

∼ (2R1 + R2 → R2)

[ 1  −1  1  1 | 0 ]
[ 0   0  1  2 | 0 ]
[ 0   3  2  0 | 4 ]

∼ (R2 ↔ R3, then (1/3)R2 → R2, then R2 + R1 → R1)

[ 1  0  5/3  1 | 4/3 ]
[ 0  1  2/3  0 | 4/3 ]
[ 0  0   1   2 |  0  ]

∼ (−(2/3)R3 + R2 → R2, −(5/3)R3 + R1 → R1)

[ 1  0  0  −7/3 | 4/3 ]
[ 0  1  0  −4/3 | 4/3 ]
[ 0  0  1    2  |  0  ]

The solutions of the linear system of algebraic equations are given by

(⋆)  (x1, x2, x3, x4) = ( 4/3 + (7/3)c1,  4/3 + (4/3)c1,  −2c1,  c1 ),

where x4 = c1 is an arbitrary constant.

Basic feasible solutions are x = [x1, ..., x4]ᵀ that fit the form (⋆), have all of x1, ..., x4 ≥ 0, and have at least two of the xi = 0. In principle, there are six possible ways to choose two of the four xi = 0. But, in order to have both x3 = −2c1 ≥ 0 and x4 = c1 ≥ 0, it is necessary that c1 = 0. From (⋆) then we see that there is exactly one basic feasible solution: x = [x1, ..., x4]ᵀ = [4/3, 4/3, 0, 0]ᵀ.

13.3.3.3.

x1 + 2x2 − 2x3 + x4 = −6
−2x1 − 4x2 + 5x3 = 17
x1 + 2x2 − x3 = 0

[ 1    2  −2  1 | −6 ]
[ −2  −4   5  0 | 17 ]
[ 1    2  −1  0 |  0 ]

∼ (2R1 + R2 → R2, −R1 + R3 → R3)

[ 1  2  −2   1 | −6 ]
[ 0  0   1   2 |  5 ]
[ 0  0   1  −1 |  6 ]

∼ (−R2 + R3 → R3, 2R2 + R1 → R1)

[ 1  2  0   5 | 4 ]
[ 0  0  1   2 | 5 ]
[ 0  0  0  −3 | 1 ]

∼ (−(1/3)R3 → R3, −2R3 + R2 → R2, −5R3 + R1 → R1)

[ 1  2  0  0 | 17/3 ]
[ 0  0  1  0 | 17/3 ]
[ 0  0  0  1 | −1/3 ]

The solutions of the linear system of algebraic equations are given by

(⋆)  (x1, x2, x3, x4) = ( 17/3 − 2c1,  c1,  17/3,  −1/3 ),

where x2 = c1 is an arbitrary constant. Because x4 = −1/3 < 0, there is no basic feasible solution.

13.3.3.4. (a)

{ x1 − x2 + x3 = 4,  −3x1 + 4x2 + x4 = −5,  x1, ..., x4 ≥ 0 }

is the LP in standard form.

(b)

[ 1   −1  1  0 |  4 ]
[ −3   4  0  1 | −5 ]

∼ (3R1 + R2 → R2)

[ 1  −1  1  0 | 4 ]
[ 0   1  3  1 | 7 ]

∼ (R2 + R1 → R1)

[ 1  0  4  1 | 11 ]
[ 0  1  3  1 |  7 ]

The solutions of the linear system of algebraic equations are given by

(⋆)  (x1, x2, x3, x4) = ( 11 − 4c1 − c2,  7 − 3c1 − c2,  c1,  c2 ),

where x3 = c1 and x4 = c2 are arbitrary constants.

Basic feasible solutions are x = [x1, ..., x4]ᵀ that fit the form (⋆), have all of x1, ..., x4 ≥ 0, and have at least two of the xi = 0. In principle, there are six possible ways to choose two of the four xi = 0:

Case 1: x1 = x2 = 0 imply

[ c1 ; c2 ] = [ −4  −1 ; −3  −1 ]⁻¹ [ −11 ; −7 ] = (1/1) [ −1  1 ; 3  −4 ] [ −11 ; −7 ] = [ 4 ; −5 ],

hence x4 = c2 = −5 < 0, hence there is no basic feasible solution in this case.

Case 2: x1 = x3 = 0 implies c1 = 0, hence 0 = x1 = 11 − c2 would imply c2 = 11, hence x2 = 7 − 3·0 − 11 = −4 < 0, hence there is no basic feasible solution in this case.

Case 3: x1 = x4 = 0 implies c2 = 0, hence 0 = x1 = 11 − 4c1 would imply c1 = 11/4, hence x2 = 7 − 3·(11/4) − 0 = −5/4 < 0, hence there is no basic feasible solution in this case.

Case 4: x2 = x3 = 0 implies c1 = 0, hence 0 = x2 = 7 − 3·0 − c2 would imply x4 = c2 = 7, hence x1 = 11 − 4·0 − 7 = 4, hence the unique basic solution in this case is given by x = [x1, ..., x4]ᵀ = [4, 0, 0, 7]ᵀ.

Case 5: x2 = x4 = 0 implies c2 = 0, hence 0 = x2 = 7 − 3c1 would imply x3 = c1 = 7/3, hence x1 = 11 − 4·(7/3) = 5/3, hence the unique basic solution in this case is given by x = [x1, ..., x4]ᵀ = [5/3, 0, 7/3, 0]ᵀ.

Case 6: x3 = x4 = 0 implies c1 = c2 = 0, hence x1 = 11 and x2 = 7, hence the unique basic solution in this case is given by x = [x1, ..., x4]ᵀ = [11, 7, 0, 0]ᵀ.

[Alternatively, we could have broken the problem into the six cases from the beginning of the work.]

To summarize, there are exactly three basic feasible solutions:

x = [x1, ..., x4]ᵀ = [11, 7, 0, 0]ᵀ,  [5/3, 0, 7/3, 0]ᵀ,  and  [4, 0, 0, 7]ᵀ.

13.3.3.5. (a)
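The same basis-enumeration sketch used for 13.3.3.1 confirms the three basic feasible solutions of this system (a generic check, not the book's method):

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints of 13.3.3.4 in standard form:
#   x1 - x2 + x3 = 4,   -3x1 + 4x2 + x4 = -5.
A = [[F(1), F(-1), F(1), F(0)], [F(-3), F(4), F(0), F(1)]]
b = [F(4), F(-5)]

def solve2(M, rhs):
    """2x2 Cramer's rule; None if the candidate basis is singular."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if d == 0:
        return None
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / d,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / d]

bfs = []
for cols in combinations(range(4), 2):
    sol = solve2([[A[r][c] for c in cols] for r in range(2)], b)
    if sol is not None and all(v >= 0 for v in sol):
        x = [F(0)] * 4
        x[cols[0]], x[cols[1]] = sol
        bfs.append(tuple(x))

# Finds exactly the three BFS: (11,7,0,0), (5/3,0,7/3,0), (4,0,0,7).
print(bfs)
```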

{ x1 + x2 + x3 = 2,  −2x1 − x2 + x4 = −3,  x1, ..., x4 ≥ 0 }

is the LP in standard form.

(b)

[ 1    1  1  0 |  2 ]
[ −2  −1  0  1 | −3 ]

∼ (2R1 + R2 → R2)

[ 1  1  1  0 | 2 ]
[ 0  1  2  1 | 1 ]

∼ (−R2 + R1 → R1)

[ 1  0  −1  −1 | 1 ]
[ 0  1   2   1 | 1 ]

The solutions of the linear system of algebraic equations are given by

(⋆)  (x1, x2, x3, x4) = ( 1 + c1 + c2,  1 − 2c1 − c2,  c1,  c2 ),

where x3 = c1 and x4 = c2 are arbitrary constants.




Basic feasible solutions are x = [x1, ..., x4]ᵀ that fit the form (⋆), have all of x1, ..., x4 ≥ 0, and have at least two of the xi = 0. In principle, there are six possible ways to choose two of the four xi = 0:

Case 1: x1 = x2 = 0 imply

[ c1 ; c2 ] = [ 1  1 ; −2  −1 ]⁻¹ [ −1 ; −1 ] = (1/1) [ −1  −1 ; 2  1 ] [ −1 ; −1 ] = [ 2 ; −3 ],

hence x4 = c2 = −3 < 0, hence there is no basic feasible solution in this case.

Case 2: x1 = x3 = 0 implies c1 = 0, hence 0 = x1 = 1 + c2 would imply x4 = c2 = −1, hence there is no basic feasible solution in this case.

Case 3: x1 = x4 = 0 implies c2 = 0, hence 0 = x1 = 1 + c1 would imply x3 = c1 = −1, hence there is no basic feasible solution in this case.

Case 4: x2 = x3 = 0 implies c1 = 0, hence 0 = x2 = 1 − 2·0 − c2 would imply x4 = c2 = 1, hence x1 = 1 + 0 + 1 = 2, hence the unique basic solution in this case is given by x = [x1, ..., x4]ᵀ = [2, 0, 0, 1]ᵀ.

Case 5: x2 = x4 = 0 implies c2 = 0, hence 0 = x2 = 1 − 2c1 would imply x3 = c1 = 1/2, hence x1 = 1 + (1/2) = 3/2, hence the unique basic solution in this case is given by x = [x1, ..., x4]ᵀ = [3/2, 0, 1/2, 0]ᵀ.

Case 6: x3 = x4 = 0 implies c1 = c2 = 0, hence x1 = 1 and x2 = 1, hence the unique basic solution in this case is given by x = [x1, ..., x4]ᵀ = [1, 1, 0, 0]ᵀ.

[Alternatively, we could have broken the problem into the six cases from the beginning of the work.]

To summarize, there are exactly three basic feasible solutions:

x = [x1, ..., x4]ᵀ = [1, 1, 0, 0]ᵀ,  [3/2, 0, 1/2, 0]ᵀ,  and  [2, 0, 0, 1]ᵀ.

13.3.3.6. Let x1, x2, x3 be the proportions of wheat bran, oat flour, and rice flour used in the mixture.
(a) Directly, the LP problem is: Minimize −216x1 − 404x2 − 363x3, subject to 15.55x1 + 14.66x2 + 7.23x3 ≥ 10 and 64.51x1 + 65.70x2 + 76.48x3 ≤ 70 and
    2.212x + 3.329x + 0.996x ≥ 2   1 2 3       x1 + x2 + x3 =1       x1 , ..., x3 ≥ 0 In standard form, this is  Minimize −216x1 − 404x2 − 363x3          Subject to −15.55x1 − 14.66x2 − 7.23x3 + x4 64.51x1 + 65.70x2 + 76.48x3 +x5   −2.212x1 − 3.329x2 − 0.996x3 +x6     x1 + x2 + x3    x1 , ..., x6 ≥ 0   −15.55 −14.66 −7.23 1 0 0 | −10  64.51 65.70 76.48 0 1 0 | 70   (b)   −2.212 −3.329 −0.996 0 0 1 | − 2  does not directly give 1 1 1 0 0 0 | 1 for example, x6 = −2 < 0. The second row is fine the way it is because x5 = 70

        = −10   = 70  = −2     = 1    

a feasible solution because, would be feasible.

c Larry

Turyn, January 7, 2014

page 24

We could start by using x1 as a basic variable in the fourth row, and that might clear up the infeasiblity issues. x2 x3 x4 x5 x6  x1  0 0.8900 8.320 1 0 0 | 5.550  0 1.190 11.97 0 1 0 | 5.490     0 −1.117 ∼ 1.216 0 0 1 | 0.2120  1 1 1 0 0 0 | 1 2.212R4 + R3 → R3 −64.51R4 + R2 → R2 15.15R4 + R1 → R1

This implies that [ x1 ... x6 ]T =[ 1 0 the LP problem in standard form.

0 5.55

5.49

0.212 ]T is an example of a basic feasible solution of

13.3.3.7. Let x1 , x2 , x3 be the proportions of wheat bran, oat flour, and rice (a) Directly the LP problem is  Minimize −15.55x1 − 14.66x2 − 7.23x3          Subject to 216x1 + 404x2 + 363x3 ≥ 300 42.8x1 + 6.5x2 + 4.6x3 ≥ 10   2.212x  1 + 3.329x2+ 0.996x3 ≥ 2.5    x + x2 + x3 = 1  1   x1 , ..., x3 ≥ 0

flour used in the mixture.

In standard form, this is  Minimize −15.55x1 − 14.66x2 − 7.23x3          Subject to −216x1 − 404x2 − 363x3 + x4 −42.8x1 − 6.5x2 − 4.6x3 + x5   −2.212x1 −3.329x2 −0.996x3 + x6     x1 + x2 + x3    x1 , ..., x6 ≥ 0

         

.

        

        = −300   = −10  = −2.5      = 1   

 −216 −404 −363 1 0 0 | −300  −42.8 −6.5 −4.6 0 1 0 | −10   does not directly give a feasible solution (b)   −2.212 −3.329 −0.996 0 0 1 | − 2.5  1 1 1 0 0 0 | 1 because, for example, x6 = −2 < 0. The fourth row is fine the way it is because x5 = 70 would be feasible. We could start by using x2 as a basic variable in the fourth row, and that will clear up some, but not all, of the infeasiblity issues. 

x1 x2 188 0  −36.3 0   1.117 0 1 1 



x3 41 1.9 2.333 1

x4 1 0 0 0

x5 0 1 0 0

x6 0 0 1 0

 | 104  | −3.5  | 0.829  | 1

3.329R4 + R3 → R3 6.5R4 + R2 → R2 404R4 + R1 → R1

c Larry

Turyn, January 7, 2014

page 25

   

∼ 1 R2 − 36.3 −188R2 + R1 −1.117R2 + R3 −R2 + R4

→ R2 → R1 → R3 → R4

x1 0 1 0 0

x2 0 0 0 1

This implies that [ x1 ... x6 ]T =[ 0.09642 0.9036 solution of the LP problem in standard form.

Ex. [ x1 ... x6 ]T = [ 0.5 in standard form

0.5

0

10 14.65

x3 x4 x5 x6 50.84 1 5.179 0 −0.05234 0 −0.02755 0 2.391 0 0.03077 1 1.052 0 0.02755 0

0

85.87

0

 | 85.87 | 0.09642   | 0.7213  | 0.9036

0.7213 ]T is an example of a basic feasible

0.2705 ]T is another basic feasible solution of the LP problem

c Larry

Turyn, January 7, 2014

page 26

Section 13.4 13.4.2.1. Use slack variable x3 to put this problem in standard form (13.22) in Section 13.3, that is,   Minimize x1 + x2       . Subject to −x1 − 2x2 + x3 = −3       x1 , x2 , x3 ≥ 0 Next, we will find a basic feasible solution. In matrix-tableau form, the problem is 1 x 1  −1

1 x2 −2

0 x3 1

f |−3

1 x  1 1





1 x2 2

0 x3 −1

f |3



−R1 → R1

The underlined 1 in the first row correspond to the only basic variable, x1 . After that, the tableau is in Table 13.1, whose bottom row contains the unit costs reduction information zj − cj , which we are about to calculate. Also, later we will explain why the “2" is circled. Table 1: After finding the first basic, feasible solution 1 x1 1

1 x2

2 z2

0 x3 −1 z3

y 3

f 3=1·3

So far, we have a basic feasible solution (x1 , x2 , x3 ) = (3, 0, 0). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables, and after that, by choosing a variable to move out. We use the “maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced cost of x2 is         z2 − c2 = c1 • α1 − c2 = 1 • 2 − 1 = 2 − 1 = 1. Note that the α column vector used to calculate z2 sits under the x2 variable in Table 13.1. Similarly, the unit reduced cost of x3 is     z3 − c3 = 1 • −1 − 0 = −1 − 0 = −1. The maximum reduced unit cost is 1, and so we choose x2 to move into the list of basic variables. The choice to move in x2 will, in principle, affect the choice of which variable to move out. But, in this problem, there is only one basic variable, so we move x1 out of the list of basic variables. Circle the “pivot position" 2 in Table 13.1 and now do row operation 0.5R1 → R1 to get the tableau in Table 13.2. After Table 2: After doing a row operation on Table 13.1 1 x1 0.5

1 x2 1

0 x3 −0.5

that, we permute the columns, specifically by exchanging the columns corresponding to variables x1 and x2 , c Larry

Turyn, January 7, 2014

page 27

Table 3: After finding the second basic solution 1 x2 1

1 x1 0.5 z 1 − c1

0 x3 −0.5 −0.5

f 1.5 = 1 · 1.5

y 1.5

to put the tableau into standard form for using unit costs reduction to discuss the next round of possibly swapping variables:     z3 − c3 = 1 • −0.5 − 0 = −0.5 − 0 = −0.5 [We did not bother to calculate z1 − c1 because we just moved the variable x1 out of the set of basic variables in favor of moving x2 in.] The only unit cost reduction is negative, so we have arrived at a minimizer! The solution is (x1 , x2 ) = (0, 1.5). The slack variable value of x3 = 0 is not part of the solution to the original problem but does indicate how much “wiggle room" is left in the inequalities at the optimum solution. 13.4.2.2. No slack variable is needed and the problem is already in standard form (13.22) in Section 13.3, that is,   Minimize 2x1 + x2 + x3               Subject to x1 + 2x2 − x3 = 20 . −x1 − x2 + 2x3 + x4 = 16        2x1 + x2 + 2x3 = 12        x1 , ..., x4 ≥ 0. First, we will find a basic feasible solution. In matrix-tableau form, the problem is 2 1 1 x x x 2 3  1 1 2 −1  −1 −1 2 2 1 2

∼ −2R2 + R1 → R1 3R2 + R3 → R3

0 f x4  0 | 20 1 | 16  0 | 12

∼ R 1 + R2 → R 2 −2R1 + R3 → R3

2 1 1 0 f x x x x 2 3 4  1  1 0 −3 −2 | − 52  0 1 1 1 | 36  0 0 7 3 | 80

2 1 1 x x 2 x3  1 1 2 −1  0 1 1 0 −3 4

∼ −

1 7 3 7

R 3 + R 2 → R2 R 3 + R 1 → R1

0 f x4  0 | 20 1 | 36  0 | − 28

2 1 1 0 f x x x x 2 3 4  1  1 0 0 −5/7 | −124/7  0 1 0 4/7 | 172/7  , 0 0 7 3 | 80

but this does not directly give a feasible solution because x1 = −124/7 < 0. Next, do another row operation in order to use x4 as a basic variable. [Why choose x4 ? Because in the third equation, ... − 75 x4 ... = − 124 7 looks promising for finding a positive value of x4 .]

∼ − 75 R1 − R1 + R2 −3R1 + R3 1 R 7 3 4 7

→ R1 → R2 → R3 → R3

2  x1 −1.4  0.8 0

1 1 0 f x2 x3 x4  0 0 1 | 24.8 1 0 0 | 10.4  , 0 1 0 | 0.8

c Larry

Turyn, January 7, 2014

page 28

The underlined 1’s in the three rows correspond to the basic variables, x4 , x2 , x3 . After permuting the variables we get the tableau form shown in Table 13.4, whose bottom row contains the unit costs reduction information zj − cj , which we are about to calculate. Table 4: After finding the first basic, feasible solution 0 x4 1 0 0

1 x2 0 1 0

1 x3 0 0 1

2 x1 −1.4 0.8 0 −1.2

y 24.8 10.4 0.8

f 11.2 = 0 · 24.8 + 1 · 10.4 + 1 · 0.8

So far, we have a basic feasible solution (x1 , x2 , x3 , x4 ) = (0, 10.4, 0.8, 24.8). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables, and after that, by choosing a variable to move out. We use the “maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced cost of x1 is         c4 −1.4 α4 0 z1 − c1 =  c2  •  α2  − c1 =  1  •  0.8  − 2 = 0.8 − 2 = −1.2. 0 1 c3 α3 The only unit cost reduction is negative, so we have arrived at a minimizer! The solution is (x1 , x2 , x3 , x4 ) = (0, 10.4, 0.8, 24.8). The minimum value is 11.2. 13.4.2.3. No slack variable is needed and the problem is already in standard form (13.22) in Section 13.3, that is,   Minimize 3x1 + x2 + 2x3               Subject to −x1 − x2 − x3 + x4 = −10 . 2x1 + x2 + x3 + x5 = 40        3x1 + x2 + x3 = 50        x1 , ..., x5 ≥ 0.   0 First, we will find a basic feasible solution. First, we will do row operations to get a column  0 : In 1 matrix-tableau form, the problem is 3 1 2 0 0 f 3 1 2 0 0 f x x x x x x x x x x 1 2 3 4 5 1 2 3 4 5     −1 −1 −1 1 0 | −10 2 0 0 1 0 | 40  2  −1 0 0 0 1 | −10  1 1 0 1 | 40  ∼ 3 1 1 0 0 | 50 3 1 1 0 0 | 50 −R3 + R2 → R2 R3 + R 1 → R 1

but this does not directly give a feasible solution because x5 = −10 < 0. Next, do another row operation in order to use x1 as a basic variable. [Why choose x1 ? Because in the second equation, ... − x1 ... = −10 looks promising for finding a positive value of x1 .]

c Larry

Turyn, January 7, 2014

page 29

3 1 x x 1 2  0 0  1 0 0 1

∼ −R2 → R2 −2R2 + R1 → R1 −3R2 + R3 → R3

2 x3 0 0 1

0 x4

1 0 0

0 f x5  2 | 20 −1 | 10  3 | 20

The underlined 1’s in the three rows correspond to the basic variables, x4 , x1 , x2 . After permuting the variables we get the tableau form shown in Table 5, whose bottom row contains the unit costs reduction information zj − cj , which we are about to calculate. Table 5: After finding the first basic, feasible solution 0 x4 1 0 0

3 x1 0 1 0

1 x2 0 0 1

2 x3 0 0 1 0

0 x5 2 −1 3 0

y 20 10 20

f 50 = 0 · 20 + 3 · 10 + 1 · 20

So far, we have a basic feasible solution (x1 , x2 , x3 , x4 , x5 ) = (10, 20, 0, 20, 0). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables, and after that, by choosing a variable to move out. We use the “maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x1 and x2 are         0 c4 α1 0 z3 − c3 =  c1  •  α2  − c3 =  3  •  0  − 2 = 1 − 2 = −1. 1 1 c2 α3 and

       2 c4 α4 0 z5 − c5 =  c1  •  α1  − c5 =  3  •  −1  − 0 = 0 − 0 = 0, 3 c2 α2 1 

respectively. The only unit cost reductions are nonpositive, so we have arrived at a minimizer! The solution is (x1 , x2 , x3 , x4 , x5 ) = (0, 0, 25, 15, 15). The minimum value is 50. By the way, the maximum of the unit cost reductions being 0 implies that there is a cost free way to find another minimizer or minimizers that produce the same minimum value of 50. In fact, another minimizer is 20 20 given by (x1 , x2 , x3 , x4 , x5 ) = ( 50 3 , 0, 0, 3 , 3 ). 13.4.2.4. Use slack variables x4 and x5 to put this problem in standard form (13.22) in Section 13.3, that is,   Minimize 5x1 + 2x2 + x3               Subject to 2x1 +x2 +x3 = 60 3x1 +x2 +x3 +x4 = 80        −2x −x +x +x = −40    1 2 3 5     x1 , ..., x5 ≥ 0.   1 Next, we will find a basic feasible solution. First, we will do row operations to get a column  0 : In 0 matrix-tableau form, the problem is c Larry

Turyn, January 7, 2014

page 30

5 2 x x 1 2  2 1  3 1 −2 −1

1 x3 1 1 1

0 x4 0 1 0

0 x5 0 | 0 | 1 |

f  60 80  −40

5 2 x x2 1  2 1  1 0 −4 −2



1 x3 1 0 0

0 0 x4 x5 0 0 | 1 0 | 0 1 |

f  60 20  , −100

−R1 + R2 → R2 −R1 + R3 → R3

but this does not directly give a feasible solution because x5 = −100 < 0. Next, do another row operation in order to use x2 as a basic variable. [Why choose x2 ? Because in the third equation, ... − 2x2 ... = −100 looks promising for finding a positive value of x2 .] 5 2 1 x x x3 1 2  0 0 1  1 0 0 2 1 0



0 0 x4 x5 0 0.5 | 1 0 | 0 −0.5 |

f  10 20  , 50

− 21 R3 → R3 − R3 + R 1 → R 1

The underlined 1’s in the three rows correspond to the basic variables, x3 , x4 , x2 . After permuting the variables we get the tableau form shown in Table 5, whose bottom row contains the unit costs reduction information zj − cj , which we are about to calculate. Table 6: After finding the first basic, feasible solution 1 x3 1 0 0

0 x4 0 1 0

2 x2 0 0 1

5 x1 0 1 2 −1

0 x5 0.5 0 −0.5 −0.5

y 10 20 50

f 110 = 1 · 10 + 0 · 20 + 2 · 50

So far, we have a basic feasible solution (x1 , x2 , x3 , x4 , x5 ) = (0, 50, 10, 20, 0). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables, and after that, by choosing a variable to move out. We use the “maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x1 and x5 are         c3 α3 1 0 z1 − c1 =  c4  •  α4  − c1 =  0  •  1  − 5 = 4 − 5 = −1 c2 α2 2 2 and



       c3 α3 1 0.5 0  − 0 = −0.5 − 0 = −0.5, z5 − c5 =  c4  •  α4  − c5 =  0  •  c2 α2 2 −0.5

respectively. The unit cost reductions are all negative, so we have arrived at a minimizer! The solution is (x1 , x2 , x3 ) = (0, 50, 10). The minimum value is 110. The slack variables values of x4 = 20 and x5 = 0 are not part of the solution to the original problem but do indicate how much “wiggle room" is left in the inequalities at the optimum solution.

c Larry

Turyn, January 7, 2014

page 31

13.4.2.5. Use slack variables x4 and x5 to put this problem in standard form (13.22) in Section 13.3, that is,   Minimize 2x1 + 3x2 + x3               Subject to 2x1 +x2 +x3 = 60 3x1 +x2 +x3 +x4 = 80        −2x −x +x +x = −40    1 2 3 5     x1 , ..., x5 ≥ 0. 

 1 Next, we will find a basic feasible solution. First, we will do row operations to get a column  0 : In 0 matrix-tableau form, the problem is 2 3 x x 1 2  2 1  3 1 −2 −1

1 0 x3 x4 1 0 1 1 1 0

0 x5 0 | 0 | 1 |

f  60 80  −40

2 3 x x 1 2  2 1  1 0 −4 −2



1 x3 1 0 0

0 0 x4 x5 0 0 | 1 0 | 0 1 |

f  60 20  , −100

−R1 + R2 → R2 −R1 + R3 → R3

but this does not directly give a feasible solution because x5 = −100 < 0. Next, do another row operation in order to use x2 as a basic variable. [Why choose x2 ? Because in the third equation, ... − 2x2 ... = −100 looks promising for finding a positive value of x2 .] 2 3 1  x1 x2 x3 0 0 1  1 0 0 2 1 0



0 0 x4 x5 0 0.5 | 1 0 | 0 −0.5 |

f  10 20  , 50

− 21 R3 → R3 − R3 + R 1 → R 1

The underlined 1’s in the three rows correspond to the basic variables, x3 , x4 , x2 . After permuting the variables we get the tableau form shown in Table 6, whose bottom row contains the unit costs reduction information zj − cj , which we are about to calculate. Also, later we will explain why the “1" is circled. Table 7: After finding the first basic, feasible solution 1 x3 1 0 0

0 x4 0 1 0

3 x2 0 0 1

2 x1 0

1 2 1

0 x5 0.5 0 −0.5 −1

y 10 20 50

f 160 = 1 · 10 + 0 · 20 + 3 · 50

So far, we have a basic feasible solution (x1 , x2 , x3 , x4 , x5 ) = (0, 50, 10, 20, 0). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables, and after that, by choosing a variable to move out. We use the “maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x1 and x5 are         c3 α3 1 0 z1 − c1 =  c4  •  α4  − c1 =  0  •  1  − 5 = 6 − 5 = 1. c2 α2 3 2 c Larry

Turyn, January 7, 2014

page 32

and



       c3 α3 1 0.5 0  − 0 = −1 − 0 = −1, z5 − c5 =  c4  •  α4  − c5 =  0  •  c2 α2 3 −0.5

respectively. Only one unit cost reduction is positive, so the only possible variable to move in is x1 . To decide which variable to move out, we calculate the minimum positive reduction, using ∗ to denote quantities not calculated because of α ≤ 0, where the α’s were those used to calculate z1 when we decided we might move in the variable x1 :     y3 y4 y2 20 50 yi` = min , , = min ∗, , = 20, θ = min αi` >0 αi` α3 α4 α2 1 2 which is achieved at index L = 4. So, to improve on our basic feasible solution we increase x1 from 0 to θ = 20 and reduce x4 to 0; at the same time, the other basic variables xik change from yik to yik − θαik . Because we are moving x4 out of the set of basic variables, circle the pivot position 1 in the tableau in Table 13.7, and do row operation −2R2 + R3 → R3 ; after that, permute the columns to get the tableau in Table 13.8. Table 8: After finding the second basic solution 1 x3 1 0 0

2 x1 0 1 0

3 x2 0 0 1

2 x4 0 1 −2 4

0 x5 0.5 0 −0.5 −1

y 10 20 10

f 80 = 1 · 10 + 2 · 20 + 3 · 10

The unit reduced costs of x4 and x5 are,         0 c3 α3 1 z4 − c4 =  c1  •  α1  − c4 =  2  •  1  − 0 = −4 − 0 = −4 −2 3 c2 α2 and



       c3 α3 0.5 1 0  − 0 = −1 − 0 = −1, z5 − c5 =  c1  •  α1  − c5 =  2  •  −0.5 c2 3 α2

respectively. All of the unit cost reductions are negative, so we have arrived at a minimizer! The solution is (x1 , x2 , x3 ) = (20, 10, 10). The minimum value is 80. The slack variables values of x4 = 0 and x5 = 0 are not part of the solution to the original problem but do indicate that there is no “wiggle room" left in the inequalities at the optimum solution. 13.4.2.6. Use slack variables x4 and x5  Minimize        Subject to       

to put this problem in standard form (13.22) in Section 13.3, that is,  3x1 − 2x3        x1 +3x2 +4x3 = 80 3x1 −x2 +x3 +x4 = 160    −x1 −x2 −x3 +x5 = −40     x1 , ..., x5 ≥ 0.   1 Next, we will find a basic feasible solution. First, we will do row operations to get a column  0 : In 0 matrix-tableau form, the problem is c Larry

Turyn, January 7, 2014

page 33

     3    0    -2    0    0   |  f
     x1   x2   x3    x4   x5  |
     1    3    4     0    0   |  80
     3   -1    1     1    0   |  160
    -1   -1   -1     0    1   |  -40

~  (1/4)R1 -> R1,  -R1 + R2 -> R2,  R1 + R3 -> R3:

     3      0      -2   0    0   |  f
     x1     x2     x3   x4   x5  |
     0.25   0.75   1    0    0   |  20
     2.75  -1.75   0    1    0   |  140
    -0.75  -0.25   0    0    1   |  -20

but this does not directly give a feasible solution because x5 = -20 < 0. Next, do another row operation in order to use x1 as a basic variable. [Why choose x1? Because in the third equation, ... - 0.75x1 ... = -20 looks promising for finding a positive value of x1, and dividing -20 by -0.75 would not create too large a number that might be subtracted from another "RHS" to get a basic variable becoming negative.]

~  -(4/3)R3 -> R3,  -2.75R3 + R2 -> R2,  -0.25R3 + R1 -> R1:

     3    0     -2   0    0     |  f
     x1   x2    x3   x4   x5    |
     0    2/3   1    0    1/3   |  40/3
     0   -8/3   0    1    11/3  |  200/3
     1    1/3   0    0   -4/3   |  80/3

The underlined 1's in the three rows correspond to the basic variables, x3, x4, x1. After permuting the variables we get the tableau form shown in Table 9, whose bottom row contains the unit cost reduction information zj - cj, which we are about to calculate.

Table 9: After finding the first basic, feasible solution

    -2    0    3   |   0     0     |
     x3   x4   x1  |   x2    x5    |  y
     1    0    0   |   2/3   1/3   |  40/3
     0    1    0   |  -8/3   11/3  |  200/3
     0    0    1   |   1/3  -4/3   |  80/3
                   |  -1/3  -14/3  |  f = 160/3 = -2*(40/3) + 0*(200/3) + 3*(80/3)

So far, we have a basic feasible solution (x1, x2, x3, x4, x5) = (80/3, 0, 40/3, 200/3, 0). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables and, after that, by choosing a variable to move out. We use the "maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x2 and x5 are

    z2 - c2 = (c3, c4, c1) . (alpha3, alpha4, alpha1) - c2 = (-2, 0, 3) . (2/3, -8/3, 1/3) - 0 = -1/3 - 0 = -1/3

and

    z5 - c5 = (c3, c4, c1) . (alpha3, alpha4, alpha1) - c5 = (-2, 0, 3) . (1/3, 11/3, -4/3) - 0 = -14/3 - 0 = -14/3,

respectively. All of the unit cost reductions are negative, so we have arrived at a minimizer! The solution is (x1, x2, x3) = (80/3, 0, 40/3). The minimum value is 160/3.
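As a quick sanity check (ours, not part of the manual), exact rational arithmetic confirms that the minimizer just found is feasible for problem 13.4.2.6 and attains the stated objective value 160/3.

```python
from fractions import Fraction as F

# Sketch (ours, not from the manual): check (x1, x2, x3) = (80/3, 0, 40/3)
# against the original constraints and the objective 3*x1 - 2*x3 of 13.4.2.6.
x1, x2, x3 = F(80, 3), F(0), F(40, 3)
assert x1 + 3*x2 + 4*x3 == 80        # equality constraint
assert 3*x1 - x2 + x3 <= 160         # slack x4 = 200/3
assert x1 + x2 + x3 >= 40            # slack x5 = 0 (binding)
print(3*x1 - 2*x3)                   # 160/3
```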


The slack variable values x4 = 200/3 and x5 = 0 are not part of the solution to the original problem but do indicate how much "wiggle room" is left in the inequalities at the optimum solution.

13.4.2.7. Use slack variables x4, x5, and x6 to put this problem in standard form (13.22) in Section 13.3, that is,

    Minimize  x1 + x2 + 2x3
    Subject to   x1 + 3x2 + 4x3 + x4 = 70
                3x1 -  x2 +  x3 + x5 = 110
                -x1 -  x2 -  x3 + x6 = -40
                x1, ..., x6 >= 0.

Next, we will find a basic feasible solution. In matrix-tableau form, the problem is

     1    1    2    0    0    0   |  f
     x1   x2   x3   x4   x5   x6  |
     1    3    4    1    0    0   |  70
     3   -1    1    0    1    0   |  110
    -1   -1   -1    0    0    1   |  -40

but this does not directly give a feasible solution because x6 = −40 < 0. Unfortunately, it seems that any attempt to put a pivot position in the third row leads to another basic variable becoming negative, that is, not feasible. So, we seem to need to not take the direct choice of x4 , x5 , x6 being the basic variables but, instead, “start from scratch."



~  (1/3)R1 -> R1,  R1 + R2 -> R2,  R1 + R3 -> R3:

     1     1    2     0     0    0   |  f
     x1    x2   x3    x4    x5   x6  |
     1/3   1    4/3   1/3   0    0   |  70/3
     10/3  0    7/3   1/3   1    0   |  400/3
    -2/3   0    1/3   1/3   0    1   |  -50/3

~  -1.5R3 -> R3,  -(10/3)R3 + R2 -> R2,  -(1/3)R3 + R1 -> R1:

     1    1    2     0     0    0    |  f
     x1   x2   x3    x4    x5   x6   |
     0    1    3/2   1/2   0    1/2  |  15
     0    0    4     2     1    5    |  50
     1    0   -1/2  -1/2   0   -3/2  |  25

The underlined 1's in the three rows correspond to the basic variables, x2, x5, x1. After permuting the variables we get the tableau form shown in Table 10, whose bottom row contains the unit cost reduction information zj - cj, which we are about to calculate. So far, we have a basic feasible solution (x1, x2, x3, x4, x5, x6) = (25, 15, 0, 0, 50, 0). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables and, after that, by choosing a variable to move out. We use the "maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x3, x4, and x6 are

    z3 - c3 = (c2, c5, c1) . (alpha2, alpha5, alpha1) - c3 = (1, 0, 1) . (3/2, 4, -1/2) - 2 = 1 - 2 = -1,


Table 10: After finding the first basic, feasible solution

     1    0    1   |  2     0     0     |
     x2   x5   x1  |  x3    x4    x6    |  y
     1    0    0   |  3/2   1/2   1/2   |  15
     0    1    0   |  4     2     5     |  50
     0    0    1   | -1/2  -1/2  -3/2   |  25
                   | -1     0    -1     |  f = 40 = 1*15 + 0*50 + 1*25



    z4 - c4 = (c2, c5, c1) . (alpha2, alpha5, alpha1) - c4 = (1, 0, 1) . (1/2, 2, -1/2) - 0 = 0 - 0 = 0,

and

    z6 - c6 = (c2, c5, c1) . (alpha2, alpha5, alpha1) - c6 = (1, 0, 1) . (1/2, 5, -3/2) - 0 = -1 - 0 = -1,

respectively. All of the unit cost reductions are nonpositive, so we have arrived at a minimizer! The solution is (x1, x2, x3) = (25, 15, 0). The minimum value is 40. By the way, the maximum of the unit cost reductions being 0 implies that there is a cost-free way to find another minimizer or minimizers that produce the same minimum value of 40. In fact, another minimizer is given by (x1, x2, x3) = (75/2, 5/2, 0). The slack variable values x4 = 0, x5 = 50, and x6 = 0 are not part of the solution to the original problem but do indicate how much "wiggle room" is left in the inequalities at the optimum solution.

13.4.2.8. No slack variable is needed and the problem is already in standard form (13.22) in Section 13.3, that is,

    Minimize  3x1 + x2 - 2x3
    Subject to  -x1 + 3x2 -  x3 = 7
                    - 2x2 +  x3 + x4 = 12
                    - 4x2 + 3x3 + x5 = 10
                x1, ..., x5 >= 0.

First, we will find a basic feasible solution by doing row operations to get a column (1, 0, 0)^T: In matrix-tableau form, the problem is

     3    1    -2    0    0   |  f
     x1   x2   x3    x4   x5  |
    -1    3   -1     0    0   |  7
     0   -2    1     1    0   |  12
     0   -4    3     0    1   |  10

~  (1/3)R1 -> R1,  2R1 + R2 -> R2,  4R1 + R3 -> R3:

     3     1    -2    0    0   |  f
     x1    x2   x3    x4   x5  |
    -1/3   1   -1/3   0    0   |  7/3
    -2/3   0    1/3   1    0   |  50/3
    -4/3   0    5/3   0    1   |  58/3

The underlined 1's in the three rows correspond to the basic variables, x2, x4, x5. After permuting the variables we get the tableau form shown in Table 11, whose bottom row contains the unit cost reduction information zj - cj, which we are about to calculate. So far, we have a basic feasible solution (x1, x2, x3, x4, x5) = (0, 7/3, 0, 50/3, 58/3). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables and, after that, by choosing a variable to move out.


Table 11: After finding the first basic, feasible solution

     1    0    0   |   3      -2    |
     x2   x4   x5  |   x1      x3   |  y
     1    0    0   |  -1/3   -1/3   |  7/3
     0    1    0   |  -2/3    1/3   |  50/3
     0    0    1   |  -4/3    5/3   |  58/3
                   |  -10/3   5/3   |  f = 7/3 = 1*(7/3) + 0*(50/3) + 0*(58/3)

We use the "maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x1 and x3 are

    z1 - c1 = (c2, c4, c5) . (alpha2, alpha4, alpha5) - c1 = (1, 0, 0) . (-1/3, -2/3, -4/3) - 3 = -1/3 - 3 = -10/3

and

    z3 - c3 = (c2, c4, c5) . (alpha2, alpha4, alpha5) - c3 = (1, 0, 0) . (-1/3, 1/3, 5/3) - (-2) = -1/3 + 2 = 5/3,

respectively. Only one unit cost reduction is positive, so the only possible variable to move in is x3. To decide which variable to move out, we calculate the minimum positive reduction, using * to denote quantities not calculated because of alpha <= 0, where the alpha's were those used to calculate z3 when we decided we might move in the variable x3:

    theta = min over alpha_il > 0 of y_il/alpha_il = min{ y2/alpha2, y4/alpha4, y5/alpha5 } = min{ *, (50/3)/(1/3), (58/3)/(5/3) } = 58/5 = 11.6,

which is achieved at index L = 5. So, to improve on our basic feasible solution we increase x3 from 0 to theta = 11.6 and reduce x5 to 0; at the same time, the other basic variables x_ik change from y_ik to y_ik - theta*alpha_ik. Because we are moving x5 out of the set of basic variables, circle the pivot position 5/3 in the tableau in Table 13.11, and do row operations (3/5)R3 -> R3, -(1/3)R3 + R2 -> R2, (1/3)R3 + R1 -> R1; after that, permute the columns to get the tableau in Table 13.12.

Table 12: After finding the second basic solution

     1    -2    0   |   3      0    |
     x2   x3    x4  |   x1     x5   |  y
     1    0     0   |  -3/5    1/5  |  31/5
     0    0     1   |  -2/5   -1/5  |  64/5
     0    1     0   |  -4/5    3/5  |  58/5
                    |  -2     -1    |  f = -17 = 1*(31/5) + 0*(64/5) + (-2)*(58/5)

The unit reduced costs of x1 and x5 are

    z1 - c1 = (c2, c4, c3) . (alpha2, alpha4, alpha3) - c1 = (1, 0, -2) . (-3/5, -2/5, -4/5) - 3 = 1 - 3 = -2

and

    z5 - c5 = (c2, c4, c3) . (alpha2, alpha4, alpha3) - c5 = (1, 0, -2) . (1/5, -1/5, 3/5) - 0 = -1 - 0 = -1,

respectively. All of the unit cost reductions are negative, so we have arrived at a minimizer! The solution is (x1, x2, x3, x4, x5) = (0, 6.2, 11.6, 12.8, 0). The minimum value is -17.
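As a side check (ours, not part of the manual), the final basic feasible solution of 13.4.2.8 can be verified exactly against the equality constraints and the cost.

```python
from fractions import Fraction as F

# Sketch (ours, not from the manual): verify (x1,...,x5) = (0, 31/5, 58/5, 64/5, 0)
# for problem 13.4.2.8 and evaluate the objective 3*x1 + x2 - 2*x3.
x = [F(0), F(31, 5), F(58, 5), F(64, 5), F(0)]
x1, x2, x3, x4, x5 = x
assert -x1 + 3*x2 - x3 == 7
assert -2*x2 + x3 + x4 == 12
assert -4*x2 + 3*x3 + x5 == 10
assert all(xi >= 0 for xi in x)
print(3*x1 + x2 - 2*x3)  # -17
```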


13.4.2.9. Method 1: The problem is

    Maximize  lambda
    Subject to  4M1 + 2M2 = lambda*Pl
                2M2 + 4M3 = 2*lambda*Pl
                -MPl <= Mj <= MPl,  j = 1, 2, 3.

Substitute lambda*Pl = mu and xj = Mj/MPl, j = 1, 2, 3, so that the problem can be rewritten as

    Maximize  (1/Pl) mu
    Subject to  4x1 + 2x2 - (1/MPl) mu = 0
                2x2 + 4x3 - (2/MPl) mu = 0
                -1 <= xj <= 1,  j = 1, 2, 3.

Instead of using the Simplex Procedure, we will first solve the system of two constraint equalities for (mu, x1) = (f(x2, x3), g(x2, x3)) in terms of (x2, x3) and, after that, maximize mu = f(x2, x3) over the domain {(x2, x3) : -1 <= x2 <= 1, -1 <= x3 <= 1, -1 <= g(x2, x3) <= 1}. This work will have a somewhat ad hoc nature. [Later, we will begin, but not complete, the work to implement the Simplex Procedure: after making a change of variables in order to replace constraints such as -1 <= x1 <= 1 by 0 <= y1 <= 2, so that the feasible region has the usual non-negativity constraints of our standard linear programming format.] The system

    4x1 + 2x2 - (1/MPl) mu = 0
    2x2 + 4x3 - (2/MPl) mu = 0

can be rewritten as

     mu - 4MPl x1 - 2MPl x2           = 0
    2mu           - 2MPl x2 - 4MPl x3 = 0.

Row reduction gives

    [ 1  -4MPl  -2MPl    0    | 0 ]   ~   [ 1  0  -MPl   -2MPl  | 0 ]
    [ 2    0    -2MPl  -4MPl  | 0 ]       [ 0  1   0.25  -0.5   | 0 ]

    (-2R1 + R2 -> R2,  (8MPl)^(-1) R2 -> R2,  4MPl R2 + R1 -> R1),

so mu = f(x2, x3) := MPl x2 + 2MPl x3 is the objective function we want to maximize, subject to the constraints that -1 <= x2 <= 1, -1 <= x3 <= 1. Defining x1 = g(x2, x3) := -0.25x2 + 0.5x3, we must also satisfy the constraint that -1 <= x1 = g(x2, x3) <= 1. We can do this in an elementary way because we are in luck in this problem: choosing x2 = x3 = 1 certainly gives the largest value that mu = MPl x2 + 2MPl x3 could possibly be, namely mu_max = 3MPl. It happens that x2 = x3 = 1 gives x1 = g(x2, x3) = -0.25x2 + 0.5x3 = 0.25, and that is in the interval -1 <= x1 <= 1. So, the maximum compressive force is lambda*P = (1/l) mu_max = 3MPl/l, and it is achieved when the bending moments are (M1, M2, M3) = MPl*(x1, x2, x3) = MPl*(0.25, 1, 1).
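The "we are in luck" argument above can be double-checked by brute force (a sketch of our own, not part of the manual): search a grid of (x2, x3) in [-1, 1]^2, keep only points satisfying -1 <= -0.25*x2 + 0.5*x3 <= 1, and maximize mu in units of MPl.

```python
# Sketch (ours, not from the manual): brute-force check that mu = x2 + 2*x3
# (in units of MPl) is maximized at x2 = x3 = 1 subject to the constraint
# -1 <= x1 = -0.25*x2 + 0.5*x3 <= 1 from problem 13.4.2.9.
def best():
    grid = [i / 10 for i in range(-10, 11)]
    feas = [(x2, x3) for x2 in grid for x3 in grid
            if -1 <= -0.25*x2 + 0.5*x3 <= 1]
    return max(feas, key=lambda p: p[0] + 2*p[1])

print(best())  # (1.0, 1.0), where mu = 3 (times MPl)
```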


Method 2: The problem is

    Minimize  -lambda
    Subject to  4M1 + 2M2 = lambda*Pl
                2M2 + 4M3 = 2*lambda*Pl
                -MPl <= Mj <= MPl,  j = 1, 2, 3.

Substitute lambda*Pl = mu and xj = Mj/MPl, j = 1, 2, 3, so that the problem can be rewritten as

    Minimize  -(1/Pl) mu
    Subject to  4x1 + 2x2 - (1/MPl) mu = 0
                2x2 + 4x3 - (2/MPl) mu = 0
                -1 <= xj <= 1,  j = 1, 2, 3.

Next, to keep the usual definition of feasibility, which includes non-negativity of the variables, define yj = xj + 1, so that the inequalities -1 <= xj <= 1 can be rewritten as 0 <= yj <= 2, j = 1, 2, 3. In terms of the new variables, the problem can be rewritten as

    Minimize  -(1/Pl) mu
    Subject to  4y1 + 2y2 - (1/MPl) mu = 6
                2y2 + 4y3 - (2/MPl) mu = 6
                y1 <= 2,  y2 <= 2,  y3 <= 2
                mu, yj >= 0,  j = 1, 2, 3.

Introduce slack variables to put the problem into standard form:

    Minimize  -(1/Pl) mu
    Subject to  4y1 + 2y2 - (1/MPl) mu = 6
                2y2 + 4y3 - (2/MPl) mu = 6
                y1 + y4 = 2
                y2 + y5 = 2
                y3 + y6 = 2
                mu, yj >= 0,  j = 1, 2, ..., 6.

At this point, we could use the rest of the Simplex Procedure: First, find a basic feasible solution, and then use unit cost reductions to decide whether to move in a new basic variable and move out an old basic variable. In this way, we would travel among the extreme points of a polytope in R7 . Clearly, this would be more difficult than the ad hoc work we did in Method 1.


13.4.2.10. The problem is

    Maximize  lambda
    Subject to  4M1 + 2M2 = lambda*Pl
                2M2 + 4M3 = lambda*Pl
                -MPl <= Mj <= MPl,  j = 1, 2, 3.

Substitute lambda*Pl = mu and xj = Mj/MPl, j = 1, 2, 3, so that the problem can be rewritten as

    Maximize  (1/Pl) mu
    Subject to  4x1 + 2x2 - (1/MPl) mu = 0
                2x2 + 4x3 - (1/MPl) mu = 0
                -1 <= xj <= 1,  j = 1, 2, 3.

Instead of using the Simplex Procedure, we will first solve the system of two constraint equalities for (mu, x1) = (f(x2, x3), g(x2, x3)) in terms of (x2, x3) and, after that, maximize mu = f(x2, x3) over the domain {(x2, x3) : -1 <= x2 <= 1, -1 <= x3 <= 1, -1 <= g(x2, x3) <= 1}. This work will have a somewhat ad hoc nature. The system

    4x1 + 2x2 - (1/MPl) mu = 0
    2x2 + 4x3 - (1/MPl) mu = 0

can be rewritten as

    mu - 4MPl x1 - 2MPl x2           = 0
    mu           - 2MPl x2 - 4MPl x3 = 0.

Row reduction gives

    [ 1  -4MPl  -2MPl    0    | 0 ]   ~   [ 1  0  -2MPl  -4MPl  | 0 ]
    [ 1    0    -2MPl  -4MPl  | 0 ]       [ 0  1    0      -1   | 0 ]

    (-R1 + R2 -> R2,  (4MPl)^(-1) R2 -> R2,  4MPl R2 + R1 -> R1),

so mu = f(x2, x3) := 2MPl x2 + 4MPl x3 is the objective function we want to maximize, subject to the constraints that -1 <= x2 <= 1, -1 <= x3 <= 1. Defining x1 = g(x2, x3) := x3, we must also satisfy the constraint that -1 <= x1 = g(x2, x3) <= 1. We can do this in an elementary way because we are in luck in this problem: choosing x2 = x3 = 1 certainly gives the largest value that mu = 2MPl x2 + 4MPl x3 could possibly be, namely mu_max = 6MPl. It happens that x2 = x3 = 1 gives x1 = g(x2, x3) = x3 = 1, and that is in the interval -1 <= x1 <= 1. So, the maximum compressive force is lambda*P = (1/l) mu_max = 6MPl/l, and it is achieved when the bending moments are (M1, M2, M3) = MPl*(x1, x2, x3) = MPl*(1, 1, 1).
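The row-reduction conclusion just above can be cross-checked with exact arithmetic (a sketch of our own, not part of the manual): taking MPl = 1, solve the two constraint equalities for (mu, x1) and confirm they agree with f(x2, x3) = 2x2 + 4x3 and g(x2, x3) = x3.

```python
from fractions import Fraction as F

# Sketch (ours, not from the manual): with MPl = 1, solve
#   mu - 4*x1 - 2*x2 = 0  and  mu - 2*x2 - 4*x3 = 0
# for (mu, x1), then verify both equations and the identity x1 = x3.
def solve(x2, x3):
    mu = 2*x2 + 4*x3          # from the second equation
    x1 = (mu - 2*x2) / 4      # back-substitute into the first
    return mu, x1

for x2, x3 in [(F(1), F(1)), (F(1, 2), F(-1, 3)), (F(0), F(1))]:
    mu, x1 = solve(x2, x3)
    assert mu - 4*x1 - 2*x2 == 0 and mu - 2*x2 - 4*x3 == 0
    assert x1 == x3           # matches g(x2, x3) = x3
print(solve(F(1), F(1)))      # mu = 6, x1 = 1 (in units of MPl)
```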


13.4.2.11. The problem is

    Maximize  lambda
    Subject to  4M1 + 2M2 = 5*lambda*Pl
                2M2 + 4M3 = lambda*Pl
                -MPl <= Mj <= MPl,  j = 1, 2, 3.

Substitute lambda*Pl = mu and xj = Mj/MPl, j = 1, 2, 3, so that the problem can be rewritten as

    Maximize  (1/Pl) mu
    Subject to  4x1 + 2x2 - (5/MPl) mu = 0
                2x2 + 4x3 - (1/MPl) mu = 0
                -1 <= xj <= 1,  j = 1, 2, 3.

Instead of using the Simplex Procedure, we will first solve the system of two constraint equalities for (mu, x1) = (f(x2, x3), g(x2, x3)) in terms of (x2, x3) and, after that, maximize mu = f(x2, x3) over the domain {(x2, x3) : -1 <= x2 <= 1, -1 <= x3 <= 1, -1 <= g(x2, x3) <= 1}. This work will have a somewhat ad hoc nature. [Later, we will begin, but not complete, the work to implement the Simplex Procedure after making a change of variables in order to replace constraints such as -1 <= x1 <= 1 by 0 <= y1 <= 2, so that the feasible region has the usual non-negativity constraints of our standard linear programming format.] The system

    4x1 + 2x2 - (5/MPl) mu = 0
    2x2 + 4x3 - (1/MPl) mu = 0

can be rewritten as

    5mu - 4MPl x1 - 2MPl x2           = 0
     mu           - 2MPl x2 - 4MPl x3 = 0.

Row reduction gives

    [ 5  -4MPl  -2MPl    0    | 0 ]   ~   [ 1  0  -2MPl  -4MPl  | 0 ]
    [ 1    0    -2MPl  -4MPl  | 0 ]       [ 0  1   -2     -5    | 0 ]

    (R1 <-> R2,  -5R1 + R2 -> R2,  -(4MPl)^(-1) R2 -> R2),

so mu = f(x2, x3) := 2MPl x2 + 4MPl x3 is the objective function we want to maximize, subject to the constraints that -1 <= x2 <= 1, -1 <= x3 <= 1. Defining x1 = g(x2, x3) := 2x2 + 5x3, we must also satisfy the constraint that -1 <= x1 = g(x2, x3) <= 1. [In problems 13.4.2.9 and 13.4.2.10 the last constraint was satisfied more easily than in problem 13.4.2.11.] So, we have reduced the problem to

    (?)  Maximize  2MPl x2 + 4MPl x3
         Subject to  -1 <= 2x2 + 5x3 <= 1
                     -1 <= x2 <= 1
                     -1 <= x3 <= 1.


We could solve this using the method of Section 13.2. Instead, we will note that our new problem is to maximize f(x2, x3) := 2MPl x2 + 4MPl x3 over the domain D := {(x2, x3) : -1 <= 2x2 + 5x3 <= 1, -1 <= x2 <= 1, -1 <= x3 <= 1}. Problem (13.19), which introduced LP problems in Section 13.3, is similar to our new problem (?). Thinking geometrically, the line k = 2MPl x2 + 4MPl x3 has k decreased until the line just touches the domain D, which is shown in the figure. The maximum is achieved at x2 = 1; hence 2*1 + 5x3 = 1 implies x3 = -0.2, where k = 2MPl*1 + 4MPl*(-0.2) = 1.2MPl and x1 = 1. So, the maximum compressive force is lambda*P = (1/l) mu_max = 1.2MPl/l, and it is achieved when the bending moments are (M1, M2, M3) = MPl*(x1, x2, x3) = MPl*(1, 1, -0.2).

Figure 3: Answer key for problem 13.4.2.11
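The geometric argument for problem 13.4.2.11 can also be checked by brute force (a sketch of our own, not part of the manual): search a fine grid of the reduced domain D and maximize the objective in units of MPl.

```python
# Sketch (ours, not from the manual): brute-force check that the reduced
# problem (?) of 13.4.2.11 attains its maximum at (x2, x3) = (1, -0.2),
# where the objective 2*x2 + 4*x3 equals 1.2 (times MPl).
def best():
    grid = [i / 20 for i in range(-20, 21)]
    feas = [(x2, x3) for x2 in grid for x3 in grid
            if -1 <= 2*x2 + 5*x3 <= 1]
    return max(feas, key=lambda p: 2*p[0] + 4*p[1])

x2, x3 = best()
print(x2, x3)  # 1.0 -0.2, with objective value ~1.2
```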

13.4.2.12. In standard form, the LP problem of problem 13.3.3.6 is

    Minimize  -216x1 - 404x2 - 363x3
    Subject to  -15.55x1 - 14.66x2 - 7.23x3 + x4 = -10
                 64.51x1 + 65.70x2 + 76.48x3 + x5 = 70
                -2.212x1 - 3.329x2 - 0.996x3 + x6 = -2
                 x1 + x2 + x3 = 1
                 x1, ..., x6 >= 0.

Next, we will find a basic feasible solution. In matrix-tableau form, the problem is

    -216    -404    -363    0   0   0  |  f
     x1      x2      x3     x4  x5  x6 |
    -15.55  -14.66  -7.23   1   0   0  |  -10
     64.51   65.70   76.48  0   1   0  |  70
    -2.212  -3.329  -0.996  0   0   1  |  -2
     1       1       1      0   0   0  |  1

but this does not directly give a feasible solution because, for example, x6 = -2 < 0. The second row is fine the way it is because x5 = 70 would be feasible. We could start by using x1 as a basic variable in the fourth row, and that might clear up the infeasibility issues.


~  15.55R4 + R1 -> R1,  -64.51R4 + R2 -> R2,  2.212R4 + R3 -> R3:

    -216  -404     -363   0   0   0  |  f
     x1    x2       x3    x4  x5  x6 |
     0     0.8900   8.320 1   0   0  |  5.550
     0     1.190    11.97 0   1   0  |  5.490
     0    -1.117    1.216 0   0   1  |  0.2120
     1     1        1     0   0   0  |  1

The underlined 1's in the four rows correspond to the basic variables, x4, x5, x6, x1. After permuting the variables we get the tableau form shown in Table 13, whose bottom row contains the unit cost reduction information zj - cj, which we are about to calculate. Also, later we will explain why the "1" is circled.

Table 13: After finding the first basic, feasible solution

     0    0    0   -216  |  -404     -363   |
     x4   x5   x6   x1   |   x2       x3    |  y
     1    0    0    0    |   0.8900   8.320 |  5.550
     0    1    0    0    |   1.190    11.97 |  5.490
     0    0    1    0    |  -1.117    1.216 |  0.2120
     0    0    0    1    |   1        1     |  1
                         |   188      147   |  f = -216 = 0*(5.550) + 0*(5.490) + 0*(0.2120) - 216*(1)

So far, we have a basic feasible solution (x1, x2, x3, x4, x5, x6) = (1, 0, 0, 5.550, 5.490, 0.2120). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables and, after that, by choosing a variable to move out. We use the "maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x2 and x3 are

    z2 - c2 = (c4, c5, c6, c1) . (alpha4, alpha5, alpha6, alpha1) - c2 = (0, 0, 0, -216) . (0.8900, 1.190, -1.117, 1) - (-404) = -216 + 404 = 188

and

    z3 - c3 = (c4, c5, c6, c1) . (alpha4, alpha5, alpha6, alpha1) - c3 = (0, 0, 0, -216) . (8.320, 11.97, 1.216, 1) - (-363) = -216 + 363 = 147,

respectively. Both unit cost reductions are positive, and we choose to move into the set of basic variables x2, the variable corresponding to the larger unit cost reduction. To decide which variable to move out, we calculate the minimum positive reduction, using * to denote quantities not calculated because of alpha <= 0, where the alpha's were those used to calculate z2 when we decided we might move in the variable x2:

    theta = min over alpha_il > 0 of y_il/alpha_il = min{ y4/alpha4, y5/alpha5, y6/alpha6, y1/alpha1 } = min{ 5.550/0.8900, 5.490/1.190, *, 1/1 } = min{ 6.236, 4.613, *, 1 } = 1,

which is achieved at index L = 1. So, to improve on our basic feasible solution we increase x2 from 0 to theta = 1 and reduce x1 to 0; at the same time, the other basic variables x_ik change from y_ik to y_ik - theta*alpha_ik.


Circle the pivot position 1 in the tableau in Table 13.13, and do row operations -0.8900R4 + R1 -> R1, -1.190R4 + R2 -> R2, and 1.117R4 + R3 -> R3; after that, permute the columns to get the tableau in Table 13.14.

Table 14: After finding the second basic solution

     0    0    0   -404  |  -216    -363   |
     x4   x5   x6   x2   |   x1      x3    |  y
     1    0    0    0    |  -0.89    7.43  |  4.660
     0    1    0    0    |  -1.19    10.78 |  4.300
     0    0    1    0    |   1.117   2.333 |  1.329
     0    0    0    1    |   1       1     |  1
                         |  -188    -41    |  f = -404 = 0*(4.660) + 0*(4.300) + 0*(1.329) - 404*(1)

So far, we have a basic feasible solution (x1, x2, x3, x4, x5, x6) = (0, 1, 0, 4.660, 4.300, 1.329). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables and, after that, by choosing a variable to move out. We use the "maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x1 and x3 are

    z1 - c1 = (c4, c5, c6, c2) . (alpha4, alpha5, alpha6, alpha2) - c1 = (0, 0, 0, -404) . (-0.89, -1.19, 1.117, 1) - (-216) = -404 + 216 = -188

and

    z3 - c3 = (c4, c5, c6, c2) . (alpha4, alpha5, alpha6, alpha2) - c3 = (0, 0, 0, -404) . (7.43, 10.78, 2.333, 1) - (-363) = -404 + 363 = -41,



respectively. Both unit cost reductions are negative, so we have arrived at a minimizer! The solution is (x1, x2, x3) = (0, 1, 0). The minimum value is -404. In terms of the original nutrition problem, we have that the maximum kcal content of 100 g of mixture, 404 kcal, is attained when we use 0 g of wheat bran, 100 g of oat flour, and 0 g of rice flour. In retrospect, this result is obvious! The slack variable values x4 = 4.660, x5 = 4.300, and x6 = 1.329 are not part of the solution to the original problem but do indicate how much "wiggle room" is left in the inequalities at the optimum solution.

13.4.2.13. In standard form, the LP problem of problem 13.3.3.7 is

    Minimize  -15.55x1 - 14.66x2 - 7.23x3
    Subject to  -216x1 - 404x2 - 363x3 + x4 = -300
                -42.8x1 - 6.5x2 - 4.6x3 + x5 = -10
                -2.212x1 - 3.329x2 - 0.996x3 + x6 = -2.5
                 x1 + x2 + x3 = 1
                 x1, ..., x6 >= 0.

Next, we will find a basic feasible solution. In matrix-tableau form, the problem is


    -15.55  -14.66  -7.23   0   0   0  |  f
     x1      x2      x3     x4  x5  x6 |
    -216    -404    -363    1   0   0  |  -300
    -42.8   -6.5    -4.6    0   1   0  |  -10
    -2.212  -3.329  -0.996  0   0   1  |  -2.5
     1       1       1      0   0   0  |  1

but this does not directly give a feasible solution because, for example, x6 = -2.5 < 0. The fourth row is fine the way it is because it would give a basic variable the feasible value 1. We could start by using x2 as a basic variable in the fourth row, and that will clear up some, but not all, of the infeasibility issues.

~  404R4 + R1 -> R1,  6.5R4 + R2 -> R2,  3.329R4 + R3 -> R3:

    -15.55  -14.66  -7.23   0   0   0  |  f
     x1      x2      x3     x4  x5  x6 |
     188     0       41     1   0   0  |  104
    -36.3    0       1.9    0   1   0  |  -3.5
     1.117   0       2.333  0   0   1  |  0.829
     1       1       1      0   0   0  |  1

~  -(1/36.3)R2 -> R2,  -188R2 + R1 -> R1,  -1.117R2 + R3 -> R3,  -R2 + R4 -> R4:

    -15.55  -14.66  -7.23     0   0         0  |  f
     x1      x2      x3       x4  x5        x6 |
     0       0       50.84    1   5.179     0  |  85.87
     1       0      -0.05234  0  -0.02755   0  |  0.09642
     0       0       2.391    0   0.03077   1  |  0.7213
     0       1       1.052    0   0.02755   0  |  0.9036

The underlined 1's in the four rows correspond to the basic variables, x4, x1, x6, x2. After permuting the variables we get the tableau form shown in Table 15, whose bottom row contains the unit cost reduction information zj - cj, which we are about to calculate. Also, later we will explain why the pivot entry is circled.

Table 15: After finding the first basic, feasible solution

     0    -15.55   0   -14.66  |  -7.23      0        |
     x4    x1      x6   x2     |   x3        x5       |  y
     1     0       0    0      |   50.84     5.179    |  85.87
     0     1       0    0      |  -0.05234  -0.02755  |  0.09642
     0     0       1    0      |   2.391     0.03077  |  0.7213
     0     0       0    1      |   1.052     0.02755  |  0.9036
                               |  -7.378     0.0245   |  f = -14.75 = 0*(85.87) + (-15.55)*(0.09642) + 0*(0.7213) - 14.66*(0.9036)

So far, we have a basic feasible solution (x1, x2, x3, x4, x5, x6) = (0.09642, 0.9036, 0, 85.87, 0, 0.7213). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables and, after that, by choosing a variable to move out.


We use the "maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x3 and x5 are

    z3 - c3 = (c4, c1, c6, c2) . (alpha4, alpha1, alpha6, alpha2) - c3 = (0, -15.55, 0, -14.66) . (50.84, -0.05234, 2.391, 1.052) - (-7.23) = -14.61 + 7.23 = -7.378

and

    z5 - c5 = (c4, c1, c6, c2) . (alpha4, alpha1, alpha6, alpha2) - c5 = (0, -15.55, 0, -14.66) . (5.179, -0.02755, 0.03077, 0.02755) - 0 = 0.0245 - 0 = 0.0245,

respectively. Only one unit cost reduction is positive, so we choose to move into the set of basic variables x5. To decide which variable to move out, we calculate the minimum positive reduction, using * to denote quantities not calculated because of alpha <= 0, where the alpha's were those used to calculate z5 when we decided we might move in the variable x5:

    theta = min over alpha_il > 0 of y_il/alpha_il = min{ y4/alpha4, y1/alpha1, y6/alpha6, y2/alpha2 } = min{ 85.87/5.179, *, 0.7213/0.03077, 0.9036/0.02755 } = min{ 16.58, *, 23.44, 32.80 } = 16.58,

which is achieved at index L = 4. So, to improve on our basic feasible solution we increase x5 from 0 to theta = 16.58 and reduce x4 to 0; at the same time, the other basic variables x_ik change from y_ik to y_ik - theta*alpha_ik. Circle the pivot position 5.179 in the tableau in Table 13.15, and do row operations (1/5.179)R1 -> R1, 0.02755R1 + R2 -> R2, -0.03077R1 + R3 -> R3, and -0.02755R1 + R4 -> R4; after that, permute the columns to get the tableau in Table 13.16.

Table 16: After finding the second basic, feasible solution

     0    -15.55   0   -14.66  |  -7.23     0          |
     x5    x1      x6   x2     |   x3        x4        |  y
     1     0       0    0      |   9.816     0.1931    |  16.58
     0     1       0    0      |   0.2181    0.005319  |  0.5532
     0     0       1    0      |   2.089    -0.005941  |  0.2111
     0     0       0    1      |   0.7819   -0.005319  |  0.4468
                               |  -7.624    -0.004734  |  f = -15.15 = 0*(16.58) + (-15.55)*(0.5532) + 0*(0.2111) - 14.66*(0.4468)

So far, we have a basic feasible solution (x1, x2, x3, x4, x5, x6) = (0.5532, 0.4468, 0, 0, 16.58, 0.2111). The next thing to do is to decide whether we should pivot by choosing a variable to enter the list of basic variables and, after that, by choosing a variable to move out. We use the "maximum unit reduced cost" criterion for choosing which variable, if any, to move in. The unit reduced costs of x3 and x4 are

    z3 - c3 = (c5, c1, c6, c2) . (alpha5, alpha1, alpha6, alpha2) - c3 = (0, -15.55, 0, -14.66) . (9.816, 0.2181, 2.089, 0.7819) - (-7.23) = -14.85 + 7.23 = -7.624

and

    z4 - c4 = (c5, c1, c6, c2) . (alpha5, alpha1, alpha6, alpha2) - c4 = (0, -15.55, 0, -14.66) . (0.1931, 0.005319, -0.005941, -0.005319) - 0 = -0.004734 - 0 = -0.004734,

respectively. Both unit cost reductions are negative, so we have arrived at a minimizer! The solution is (x1, x2, x3) = (0.5532, 0.4468, 0). The minimum value is -15.15. In terms of the original nutrition problem, we have that the maximum protein content of 100 g of mixture, 15.15 g, is attained when we use 55.32 g of wheat bran, 44.68 g of oat flour, and 0 g of rice flour. This result is not at all obvious! The slack variable values x4 = 0, x5 = 16.58, and x6 = 0.2111 are not part of the solution to the original problem but do indicate how much "wiggle room" is left in the inequalities at the optimum solution.
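The protein total can be re-derived directly from the mass fractions (a sketch of our own, not part of the manual), using the per-100-g protein coefficients 15.55, 14.66, and 7.23 from the problem data.

```python
# Sketch (ours, not from the manual): check the reported optimum of 13.4.2.13.
# Mixture: 55.32 g wheat bran + 44.68 g oat flour + 0 g rice flour per 100 g.
x1, x2, x3 = 0.5532, 0.4468, 0.0
protein = 15.55*x1 + 14.66*x2 + 7.23*x3   # grams of protein per 100 g mixture
assert abs((x1 + x2 + x3) - 1) < 1e-12    # mass fractions sum to 1
print(round(protein, 2))  # 15.15
```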


Section 13.5

13.5.3.1. (X - x)^2 + (Y - y)^2 is the square of the distance between a point (x, y) in the region x^2 - 2x - y <= 0 and a point (X, Y) in the region 8 - X + Y <= 0. We consider X, Y, x, y to be the variables of the problem, which we state as

    Minimize  (X - x)^2 + (Y - y)^2
    Subject to  x^2 - 2x - y <= 0
                8 - X + Y <= 0.

Define f(X, Y, x, y) := (X - x)^2 + (Y - y)^2, f1(X, Y, x, y) := x^2 - 2x - y, f2(X, Y, x, y) := 8 - X + Y, and let lambda1, lambda2 >= 0 be the Lagrange multipliers. Note that f, f1, and f2 are convex functions everywhere, as we can see from using either Theorem 13.13 in Section 13.5 or Corollary 13.3 in Section 13.5. As in Examples 13.5 and 13.6 in Section 13.2, as well as Example 13.12 in Section 13.5, the gradient is

    grad = I^ d/dX + J^ d/dY + i^ d/dx + j^ d/dy.

The stationarity condition, (13.32) in Section 13.5, is

    (?)   ( 2(X - x), 2(Y - y), -2(X - x), -2(Y - y) )^T = grad f(X, Y, x, y) = -lambda1 grad f1 - lambda2 grad f2 = -lambda1 ( 0, 0, 2x - 2, -1 )^T - lambda2 ( -1, 1, 0, 0 )^T.



Equation (?) implies

    (1) lambda1 (2x - 2) = 2(X - x) = lambda2    and    (2) lambda1 = -2(Y - y) = lambda2,

hence 0 = lambda2 - lambda2 = lambda1 (2x - 2) - lambda1 = lambda1 ((2x - 2) - 1). It follows that

    (??)   either lambda1 = 0 or x = 3/2.

The complementarity conditions are lambda1 (x^2 - 2x - y) = lambda2 (8 - X + Y) = 0. In effect, the complementarity conditions say that either the multiplier lambda1 is zero or the point (x, y) lies on the boundary of the region x^2 - 2x - y <= 0, and either the multiplier lambda2 is zero or the point (X, Y) lies on the boundary of the region 8 - X + Y <= 0. Combine the first complementarity condition and (??) to imply that

    either lambda1 = 0 or y = x^2 - 2x = (3/2)^2 - 2*(3/2) = -3/4.

So, either lambda1 = 0 or (x, y) = (3/2, -3/4). Suppose lambda1 = 0. Then (1) and (2) imply X = x, Y = y, and lambda2 = 0. The second complementarity condition is then satisfied because lambda2 = 0. The only facts we have left to work with are the feasibility requirements in (13.33) in Section 13.5, specifically f(x*) <= 0, that is, that x^2 - 2x - y <= 0 and 8 - X + Y <= 0. It follows that -8 + X >= Y and y >= x^2 - 2x. But X = x and Y = y, so -8 + x >= y >= x^2 - 2x, hence 0 >= x^2 - 3x + 8 = (x - 3/2)^2 + 23/4 >= 23/4, which is impossible. So, we conclude that lambda1 != 0.


So far, we have concluded that (x, y) = (3/2, -3/4) and lambda1 > 0. To find (X, Y), note that (1) and (2), along with lambda1 > 0 and the fact that 2x - 2 = 2*(3/2) - 2 = 1 != 0, together imply

    2Y + 3/2 = 2(Y + 3/4) = 2(Y - y) = -lambda1 = -2(X - x)/(2x - 2) = -2(X - 3/2)/1 = -2X + 3,

hence

    Y = -X + 3/4.

The second complementarity condition is that either lambda2 = 0 or 8 - X + Y = 0. If lambda2 = 0 then, again, (1) and (2) imply X = x and Y = y, leading eventually to a contradiction as in the above argument. If 8 - X + Y = 0 then we use the fact that Y = -X + 3/4, hence

    0 = 8 - X + Y = 8 - X + (-X + 3/4) = 8 - 2X + 3/4,

hence X = 35/8 and thus Y = -35/8 + 3/4 = -29/8. The closest approach of the two regions is where (x, y) = (3/2, -3/4), (X, Y) = (35/8, -29/8), and the minimum distance is

    sqrt( (35/8 - 3/2)^2 + (-29/8 + 3/4)^2 ) = sqrt( (23/8)^2 + (23/8)^2 ) = 23*sqrt(2)/8.

The results are depicted in the figure.

Figure 4: Answer key for problem 13.5.3.1
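This cross-check is not part of the original solution; it verifies the KKT point found above in exact rational arithmetic, assuming only the stationarity conditions (1) and (2) derived in the solution.

```python
from fractions import Fraction as F

# Cross-check of the KKT point for problem 13.5.3.1 (added check; the point
# and the multiplier formulas come from conditions (1) and (2) above).
x, y = F(3, 2), F(-3, 4)       # on the parabola boundary y = x^2 - 2x
X, Y = F(35, 8), F(-29, 8)     # on the line boundary X - Y = 8
lam1 = -2 * (Y - y)            # condition (2): lambda_1 = -2(Y - y)
lam2 = 2 * (X - x)             # condition (1): lambda_2 = 2(X - x)

# Stationarity: lambda_1 (2x - 2) = 2(X - x) = lambda_2 = lambda_1 > 0.
assert lam1 * (2 * x - 2) == lam2 == lam1 > 0

# Complementarity: both constraints are active at the closest points.
assert x**2 - 2 * x - y == 0 and 8 - X + Y == 0

# Squared distance equals (23 sqrt(2) / 8)^2 = 529/32.
assert (X - x)**2 + (Y - y)**2 == F(529, 32)
print("13.5.3.1 KKT point verified")
```

Using `Fraction` instead of floats makes every equality above exact rather than approximate.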

13.5.3.2. (X − x)² + (Y − y)² is the square of the distance between a point (x, y) in the region x² − 2x + 1 − y ≤ 0 and a point (X, Y) in the region 6 − X + Y ≤ 0. We consider X, Y, x, y to be the variables of the problem, which we state as

  Minimize (X − x)² + (Y − y)²
  Subject to x² − 2x + 1 − y ≤ 0
             6 − X + Y ≤ 0.

Define f(X, Y, x, y) := (X − x)² + (Y − y)², f₁(X, Y, x, y) := x² − 2x + 1 − y, f₂(X, Y, x, y) := 6 − X + Y, and let λ₁, λ₂ ≥ 0 be the Lagrange multipliers.


Note that f, f₁, and f₂ are convex functions everywhere, as we can see from using either Theorem 13.13 in Section 13.5 or Corollary 13.3 in Section 13.5. As in Examples 13.5 and 13.6 in Section 13.2, as well as Example 13.12 in Section 13.5, the gradient is

∇ = Î ∂/∂X + Ĵ ∂/∂Y + î ∂/∂x + ĵ ∂/∂y.

The stationarity condition, (13.32) in Section 13.5, is

(⋆) [2(X − x), 2(Y − y), −2(X − x), −2(Y − y)]ᵀ = ∇f(X, Y, x, y) = −λ₁ [∂f₁/∂X, ∂f₁/∂Y, ∂f₁/∂x, ∂f₁/∂y]ᵀ − λ₂ [∂f₂/∂X, ∂f₂/∂Y, ∂f₂/∂x, ∂f₂/∂y]ᵀ = −λ₁ [0, 0, 2x − 2, −1]ᵀ − λ₂ [−1, 1, 0, 0]ᵀ.

Equation (⋆) implies

(1) λ₁(2x − 2) = 2(X − x) = λ₂

and

(2) λ₁ = −2(Y − y) = λ₂,

hence 0 = λ₂ − λ₂ = λ₁(2x − 2) − λ₁ = λ₁((2x − 2) − 1). It follows that

(⋆⋆) either λ₁ = 0 or x = 3/2.

The complementarity conditions are λ₁(x² − 2x + 1 − y) = λ₂(6 − X + Y) = 0. In effect, the complementarity conditions say that either the multiplier λ₁ is zero or the point (x, y) lies on the boundary of the region x² − 2x + 1 − y ≤ 0, and either the multiplier λ₂ is zero or the point (X, Y) lies on the boundary of the region 6 − X + Y ≤ 0. Combine the first complementarity condition and (⋆⋆) to imply that

λ₁ = 0 or y = x² − 2x + 1 = (3/2)² − 2·(3/2) + 1 = 1/4.

So, either λ₁ = 0 or (x, y) = (3/2, 1/4).

Suppose λ₁ = 0. Then (1) and (2) imply X = x, Y = y, and λ₂ = 0. The second complementarity condition is then satisfied because λ₂ = 0. The only facts we have left to work with are the feasibility requirements in (13.33) in Section 13.5, specifically f(x*) ≤ 0, that is, that x² − 2x + 1 − y ≤ 0 and 6 − X + Y ≤ 0. It follows that −6 + X ≥ Y and y ≥ x² − 2x + 1. But X = x and Y = y, so −6 + x ≥ y ≥ x² − 2x + 1, hence 0 ≥ x² − 3x + 7 = (x − 3/2)² + 19/4 ≥ 19/4, which is impossible. So, we conclude that λ₁ ≠ 0.

So far, we have concluded that (x, y) = (3/2, 1/4) and λ₁ > 0. To find (X, Y), note that (1) and (2), along with λ₁ > 0 and the fact that 2x − 2 = 2·(3/2) − 2 = 1 ≠ 0, together imply

2(Y − 1/4) = 2(Y − y) = −λ₁ = −2(X − x)/(2x − 2) = −(2X − 3)/1 = −2X + 3,

hence

Y = −X + 7/4.


The second complementarity condition is that either λ₂ = 0 or 6 − X + Y = 0. If λ₂ = 0 then, again, (1) and (2) imply X = x and Y = y, leading eventually to a contradiction as in the above argument. If 6 − X + Y = 0 then we use the fact that Y = −X + 7/4, hence

0 = 6 − X + Y = 6 − X + (−X + 7/4) = 6 − 2X + 7/4,

hence X = 31/8 and thus Y = −31/8 + 7/4 = −17/8. The closest approach of the two regions is where (x, y) = (3/2, 1/4), (X, Y) = (31/8, −17/8), and the minimum distance is

√((31/8 − 3/2)² + (−17/8 − 1/4)²) = √(19²/8² + 19²/8²) = 19√2/8.

The results are depicted in the figure.

Figure 5: Answer key for problem 13.5.3.2
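As an added cross-check (not in the original manual), a brute-force search over the two boundary curves should find no pair of points closer than the reported minimum distance 19√2/8. The grid bounds below are assumptions chosen to bracket the minimizers.

```python
import math

# Brute-force cross-check for problem 13.5.3.2: the closest points lie on the
# boundary curves y = (x - 1)^2 and Y = X - 6.
reported = math.dist((1.5, 0.25), (31 / 8, -17 / 8))
assert abs(reported - 19 * math.sqrt(2) / 8) < 1e-12

best = min(
    math.dist((x, (x - 1) ** 2), (X, X - 6))
    for x in (i / 200 for i in range(0, 601))     # x in [0, 3]
    for X in (j / 200 for j in range(600, 1001))  # X in [3, 5]
)
# No grid pair beats the reported distance, and the reported pair attains it.
assert best >= reported - 1e-9
assert best - reported < 1e-3
print("13.5.3.2 minimum distance cross-checked:", reported)
```

Since both regions are convex, the minimum over the regions is attained on these boundaries, so the coarse search genuinely brackets the answer.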

13.5.3.3. The objective function, f (x, y) = −2 ln x − 3 ln y, is convex on the convex set C , {(x, y) : x > 0, y > 0} by problems 13.1.3.10 and 13.1.3.12(a). The constraint function f1 (x, y) = 2x + y − 4 is also convex on C, by a result in Corollary 13.3 in Section 13.5, so we can use the Kuhn-Tucker conditions to solve the CP problem in problem 13.5.3.3. The feasible region is shown in Figure 13.11. Let λ ≥ 0 be the Lagrange multiplier.

Figure 6: Answer Key for problem 13.5.3.3


The feasibility conditions are 2x + y − 4 ≤ 0, x > 0, y > 0. The complementarity condition is λ·(2x + y − 4) = 0. The stationarity condition is

(⋆) [−2/x, −3/y]ᵀ + λ[2, 1]ᵀ = [0, 0]ᵀ.

But (⋆) implies λ ≠ 0, because −2/x cannot be zero for a feasible (x, y). So, the complementarity condition implies

(⋆⋆) 2x + y − 4 = 0.

(⋆) also implies

y = 3/λ and x = 1/λ.

Substitute those into (⋆⋆) to get

4 = 2(1/λ) + 3/λ = 5/λ,

hence λ = 5/4, hence x = 4/5 and y = 12/5. The minimum value of the objective function with these constraints is

f(4/5, 12/5) = −2 ln(4/5) − 3 ln(12/5) = ln(5⁵/(4²·12³)).

13.5.3.4. The objective function, f(x, y) = −ln x − 2 ln y, is convex on the convex set C := {(x, y) : x > 0, y > 0} by problems 13.1.3.10 and 13.1.3.12(a). The constraint functions f₁(x, y) = −x + y − 1 and f₂(x, y) = 2x + y − 3 are also convex on C, by a result in Corollary 13.3 in Section 13.5, so we can use the Kuhn-Tucker conditions to solve the CP problem in problem 13.5.3.4. The feasible region is shown in Figure 13.11. Let λ₁ ≥ 0 and λ₂ ≥ 0 be the Lagrange multipliers.

Figure 7: Answer Key for problem 13.5.3.4

The feasibility conditions are −x + y − 1 ≤ 0, 2x + y − 3 ≤ 0, and x > 0, y > 0. The complementarity conditions are

(A) λ₁·(−x + y − 1) = 0
(B) λ₂·(2x + y − 3) = 0.


The stationarity condition is

[−1/x, −2/y]ᵀ + λ₁[−1, 1]ᵀ + λ₂[2, 1]ᵀ = [0, 0]ᵀ.

By multiplying through by x > 0 and y > 0, the stationarity condition can be rewritten as the equations

(C) (−λ₁ + 2λ₂)x − 1 = 0
(D) (λ₁ + λ₂)y − 2 = 0.

In principle, there are four ways to satisfy both equations (A) and (B):

(1) λ₁ = 0 and λ₂ = 0
(2) λ₁ = 0 and 2x + y − 3 = 0
(3) −x + y − 1 = 0 and λ₂ = 0
(4) −x + y − 1 = 0 and 2x + y − 3 = 0

Let’s substitute these possibilities into the system of equations (C) and (D): Case 1 : If λ1 = 0 and λ2 = 0, then (C) implies −1 = 0, which gives no solution. Case 2 : If λ1 = 0 and 2x + y − 3 = 0, then (C) implies 2λ2 x = 1 and (D) implies λ2 y = 2, so 0 = 0 · λ2 = (2x + y − 3)λ2 = 2λ2 x + λ2 y − 3 = 1 + 2 − 3 is automatically satisfied. So, x =

1 1 2 (3 − y) = 3− . So, the system of equations consisting of (A), 2 2 λ2

(B), (C), and (D) is satisfied by (i) as long as

 1 3 2 (λ1 , λ2 , x, y) = 0, λ2 , − , , 2 λ2 λ2

3 1 − > 0, that is, as long as λ2 > 32 . 2 λ2

Case 3 : If λ2 = 0 and −x + y − 1 = 0, then (C) implies −λ1 x = 1, which would contradict the assumptions that λ1 ≥ 0 and x > 0. So, this case gives no solution. Case 4 : If −x + y = 1 and 2x + y = 3, then 

[x; y] = [[−1, 1], [2, 1]]⁻¹ [1; 3] = (−1/3) [[1, −1], [−2, −1]] [1; 3] = [2/3; 5/3].

Then (C) implies

0 = (−λ₁ + 2λ₂)x − 1 = (2/3)(−λ₁ + 2λ₂) − 1, hence −λ₁ + 2λ₂ = 3/2,

and (D) implies

0 = (λ₁ + λ₂)y − 2 = (5/3)(λ₁ + λ₂) − 2, hence λ₁ + λ₂ = 6/5.


The solution of the system consisting of −λ₁ + 2λ₂ = 3/2 and λ₁ + λ₂ = 6/5 is

[λ₁; λ₂] = [[−1, 2], [1, 1]]⁻¹ [3/2; 6/5] = (−1/3) [[1, −2], [−1, −1]] [3/2; 6/5] = [0.3; 0.9],

which does satisfy λ₁ ≥ 0 and λ₂ ≥ 0. So,

(λ₁, λ₂, x, y) = (0.3, 0.9, 2/3, 5/3)

satisfies (A), (B), (C), and (D). Since Case 4 produced the only feasible solution of the Kuhn-Tucker conditions, the minimizer is (x, y) = (2/3, 5/3), and the minimum value of the objective function with these constraints is

f(2/3, 5/3) = −ln(2/3) − 2 ln(5/3) = ln(27/50).
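The Kuhn-Tucker solutions of the two logarithmic problems above can be checked numerically; this sketch (not part of the original manual) verifies stationarity, active constraints, and the reported minimum values.

```python
import math

# 13.5.3.3: minimize -2 ln x - 3 ln y subject to 2x + y - 4 <= 0.
lam, x, y = 5 / 4, 4 / 5, 12 / 5
assert abs(-2 / x + 2 * lam) < 1e-9 and abs(-3 / y + lam) < 1e-9  # stationarity
assert abs(2 * x + y - 4) < 1e-9                                  # active constraint
assert abs((-2 * math.log(x) - 3 * math.log(y))
           - math.log(5**5 / (4**2 * 12**3))) < 1e-9              # min value

# 13.5.3.4: minimize -ln x - 2 ln y subject to -x + y - 1 <= 0, 2x + y - 3 <= 0.
l1, l2, x4, y4 = 0.3, 0.9, 2 / 3, 5 / 3
assert abs((-l1 + 2 * l2) * x4 - 1) < 1e-9 and abs((l1 + l2) * y4 - 2) < 1e-9  # (C), (D)
assert abs(-x4 + y4 - 1) < 1e-9 and abs(2 * x4 + y4 - 3) < 1e-9                # both active
assert abs((-math.log(x4) - 2 * math.log(y4)) - math.log(27 / 50)) < 1e-9      # min value
print("KKT points for 13.5.3.3 and 13.5.3.4 verified")
```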

13.5.3.5. We are given that C is convex, which by definition means that (1 − t)x + ty must be in C whenever x and y are in C and 0 ≤ t ≤ 1. For p = 3, define µ₂ := λ₁ + λ₂, α₁ := λ₁/µ₂, and α₂ := λ₂/µ₂. Then z := µ₂⁻¹(λ₁x₁ + λ₂x₂) = α₁x₁ + α₂x₂ is in C because C is convex and α₁ + α₂ = 1. Then we see that λ₁x₁ + λ₂x₂ + λ₃x₃ is in C because (1) λ₁x₁ + λ₂x₂ + λ₃x₃ = (λ₁x₁ + λ₂x₂) + λ₃x₃ = µ₂z + λ₃x₃, (2) µ₂ + λ₃ = (λ₁ + λ₂) + λ₃ = 1, and (3) z and x₃ are in C. This shows us how to use an inductive process to explain why λ₁x₁ + ... + λₚxₚ must be in C if xⱼ, j = 1, ..., p, are in C and the non-negative real numbers λⱼ satisfy λ₁ + ... + λₚ = 1. Continuing in this way, for any p > 3, we have that

(⋆) z := µₚ₋₁⁻¹(λ₁x₁ + ... + λₚ₋₁xₚ₋₁) is in C, where µₚ₋₁ := λ₁ + ... + λₚ₋₁.

Because C is convex and µₚ₋₁ + λₚ = (λ₁ + ... + λₚ₋₁) + λₚ = 1, we have that

λ₁x₁ + ... + λₚxₚ = µₚ₋₁(µₚ₋₁⁻¹(λ₁x₁ + ... + λₚ₋₁xₚ₋₁)) + λₚxₚ = µₚ₋₁z + λₚxₚ

is in C.
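The induction in 13.5.3.5 can be illustrated concretely: a p-point convex combination is built by folding in one point at a time with two-point combinations, so it never leaves a convex set. The unit disk and the particular points and weights below are example data, not from the manual.

```python
# C is the closed unit disk in R^2, a convex set.
def in_disk(p):
    return p[0] ** 2 + p[1] ** 2 <= 1 + 1e-12

def combine(p, q, t):
    """Two-point convex combination (1 - t) p + t q."""
    return ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])

pts = [(1.0, 0.0), (0.0, 1.0), (-0.6, 0.8), (0.0, -1.0)]   # all in C
lams = [0.1, 0.2, 0.3, 0.4]                                # >= 0, sum to 1

# Fold in one point at a time, as in the proof: z is always the normalized
# partial combination, and mu = lam_1 + ... + lam_k is the accumulated weight.
z, mu = pts[0], lams[0]
for xk, lamk in zip(pts[1:], lams[1:]):
    z = combine(z, xk, lamk / (mu + lamk))   # z_new = (mu z + lam x)/(mu + lam)
    mu += lamk
    assert in_disk(z)                        # every intermediate stays in C

assert abs(mu - 1.0) < 1e-12
print("final combination:", z)   # equals lam_1 x_1 + ... + lam_4 x_4
```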


13.5.3.6. (a) Suppose x and y are any vectors in C := {x : ||x − x₀|| ≤ r} and 0 ≤ t ≤ 1. Then z := (1 − t)x + ty satisfies

||z − x₀|| = ||(1 − t)x + ty − x₀|| = ||(1 − t)x + ty − (1 − t)x₀ − tx₀|| = ||(1 − t)(x − x₀) + t(y − x₀)||.

So, using the triangle inequality and homogeneity properties of norms,

||z − x₀|| ≤ ||(1 − t)(x − x₀)|| + ||t(y − x₀)|| = |1 − t| ||x − x₀|| + |t| ||y − x₀|| ≤ |1 − t| r + |t| r = r,

hence z := (1 − t)x + ty is in C. This being true for all x and y in C and 0 ≤ t ≤ 1 implies that C is convex.

(b) Suppose x and y are any vectors in C := {x : ||x − x₀|| < r} and 0 ≤ t ≤ 1. Exactly as in part (a), using the triangle inequality and homogeneity properties of norms, z := (1 − t)x + ty satisfies

||z − x₀|| ≤ |1 − t| ||x − x₀|| + |t| ||y − x₀|| < |1 − t| r + |t| r = r,

hence z is in C. This being true for all x and y in C and 0 ≤ t ≤ 1 implies that C is convex.

(c) Suppose V is a vector subspace of Rⁿ. If x and y are any vectors in V and 0 ≤ t ≤ 1, then z := (1 − t)x + ty is in V directly by the (CloseLin) property in Theorem 1.38 in Section 1.7.

13.5.3.7. Suppose that there are points x and y, possibly equal, in S := {x in C : f(x) ≤ M}, a subset of the convex set C. Then f(x) ≤ M and f(y) ≤ M. Also, for 0 < t < 1, because the function f is convex on the convex set C, (1 − t)x + ty is in C and

f((1 − t)x + ty) ≤ (1 − t)f(x) + tf(y) ≤ (1 − t)M + tM = M,

hence (1 − t)x + ty is in S. This being true for all x and y in S and 0 < t < 1, we have that S is convex, unless it is empty.

13.5.3.8. For all x, y in C and t in the interval [0, 1],

(f + g)((1 − t)x + ty) = f((1 − t)x + ty) + g((1 − t)x + ty) ≤ (1 − t)f(x) + tf(y) + (1 − t)g(x) + tg(y) = (1 − t)(f + g)(x) + t(f + g)(y),

hence f + g is convex on C.

13.5.3.9.
Choose any x, y in Rⁿ and any λ with 0 < λ < 1. Then

f(g(λx + (1 − λ)y)) = f(A(λx + (1 − λ)y) + b) = f(λAx + (1 − λ)Ay + λb + (1 − λ)b) = f(λ(Ax + b) + (1 − λ)(Ay + b)) ≤ λf(Ax + b) + (1 − λ)f(Ay + b) = λf(g(x)) + (1 − λ)f(g(y)),

hence f(g(x)) is convex on C.

13.5.3.10. We are given that C is a convex set in Rⁿ and, for i = 1, ..., m, fᵢ(x) is a convex function on common domain C. Define S := {x in C : f(x) ≤ 0 and x ≥ 0}. To explain why S is a convex set in Rⁿ, let x and y be any two points in S, hence for i = 1, ..., m, fᵢ(x) ≤ 0 and fᵢ(y) ≤ 0, and for j = 1, ..., n, xⱼ ≥ 0 and yⱼ ≥ 0. Further, let t be any real number with 0 < t < 1. Then, because C is convex, (1 − t)x + ty is in C. Also, for i = 1, ..., m, because fᵢ(x) is a convex function on C,

fᵢ((1 − t)x + ty) ≤ (1 − t)fᵢ(x) + tfᵢ(y) ≤ (1 − t)·0 + t·0 = 0,

and, for j = 1, ..., n, (1 − t)xⱼ + tyⱼ ≥ (1 − t)·0 + t·0 = 0. So, (1 − t)x + ty is in S.


This being true for all x and y in S and any real number t with 0 < t < 1, we conclude that the set of all feasible points, S, is convex.

13.5.3.11. Suppose C₁ and C₂ are convex subsets of Rᵐ. Suppose x and y are any two points in C₁ ∩ C₂ := {x : x is in both C₁ and C₂}, and suppose t is any real number with 0 < t < 1. Because x is in C₁ ∩ C₂, x is in both C₁ and C₂, and similarly y is in both C₁ and C₂. Because C₁ is convex, (1 − t)x + ty is in C₁, and because C₂ is convex, (1 − t)x + ty is in C₂. So, (1 − t)x + ty is in both C₁ and C₂, hence (1 − t)x + ty is in C₁ ∩ C₂. This being true for all x and y in C₁ ∩ C₂ and any real number t with 0 < t < 1, we conclude that C₁ ∩ C₂ is convex.

13.5.3.12. Define f(x) := xᵀAx, where A is an n × n real, constant, positive semi-definite matrix, and define g(x) := cᵀx, where c is a real, constant vector in Rⁿ. Concerning f(x), by the result of problem 6.7.6.33, we have that ∇f(x) = (A + Aᵀ)x. It follows that

B := D²f(x) = D[(A + Aᵀ)x] = A + Aᵀ

is also positive semi-definite, because for all x in Rⁿ, xᵀAᵀx being a real number implies that xᵀAᵀx = (xᵀAᵀx)ᵀ = xᵀAx, hence

xᵀBx = xᵀAx + xᵀAᵀx = xᵀAx + xᵀAx = 2xᵀAx ≥ 0.

By Theorem 13.13, f(x) is convex on Rⁿ. Concerning g(x), by the result of problem 12.3.2.17, we have that ∇g(x) = c. It follows that D²g(x) = D[c] = O. By Theorem 13.13, g(x) is convex on Rⁿ.

13.5.3.13. Suppose, to the contrary, that there is an x* that is a local minimizer for f but is not a global minimizer for f on C. x* being a local minimizer implies there is a δ > 0 such that f(x*) ≤ f(x) for all x in C with ||x − x*|| < δ. x* not being a global minimizer implies there is an x̃ in C such that f(x̃) < f(x*), which implies that ||x* − x̃|| ≥ δ. Define t := 0.9δ/||x* − x̃||. We see that 0 < t < 1. Note for future reference that ||x* − x̃|| = 0.9δ/t.

Define x := tx̃ + (1 − t)x*. By the convexity of f,

(⋆) f(x) = f(tx̃ + (1 − t)x*) ≤ tf(x̃) + (1 − t)f(x*) < tf(x*) + (1 − t)f(x*) = f(x*).

But, x = tx̃ + (1 − t)x* satisfies

||x* − x|| = ||x* − tx̃ − (1 − t)x*|| = t||x* − x̃|| = t·(0.9δ/t) = 0.9δ;

by the local minimizer assumption this implies f(x) ≥ f(x*), contradicting (⋆). So, the local minimizer must be the global minimizer on the set C.

13.5.3.14. Define f(x) := xᵀAx, where A is an n × n real, constant, positive semi-definite matrix, and define g(x) := cᵀx, where c is a real, constant vector in Rⁿ. Suppose x and y are any two vectors in Rⁿ and 0 < t < 1. Concerning f(x), we have

f((1 − t)x + ty) − (1 − t)f(x) − tf(y) = ((1 − t)x + ty)ᵀA((1 − t)x + ty) − (1 − t)xᵀAx − tyᵀAy = (1 − t)²xᵀAx + t(1 − t)xᵀ(A + Aᵀ)y + t²yᵀAy − (1 − t)xᵀAx − tyᵀAy


= ((1 − t)² − (1 − t))xᵀAx + t(1 − t)xᵀ(A + Aᵀ)y + (t² − t)yᵀAy
= −t(1 − t)xᵀAx + t(1 − t)xᵀ(A + Aᵀ)y − t(1 − t)yᵀAy
= −t(1 − t)(xᵀAx − xᵀ(A + Aᵀ)y + yᵀAy)
= −t(1 − t)(x − y)ᵀA(x − y) =: α.

Because A is real and positive semi-definite, for z := x − y we have zᵀAz ≥ 0, so t > 0 and 1 − t > 0 imply α = −t(1 − t)zᵀAz ≤ 0. So,

f((1 − t)x + ty) − (1 − t)f(x) − tf(y) ≤ 0, hence f((1 − t)x + ty) ≤ (1 − t)f(x) + tf(y).

This being true for all x and y in Rⁿ and any real number t with 0 < t < 1, we conclude that f(x) is convex on Rⁿ. Concerning g(x), we have

g((1 − t)x + ty) − (1 − t)g(x) − tg(y) = cᵀ((1 − t)x + ty) − (1 − t)cᵀx − tcᵀy = (1 − t)cᵀx + tcᵀy − (1 − t)cᵀx − tcᵀy = 0,

hence g((1 − t)x + ty) = (1 − t)g(x) + tg(y). This being true for all x and y in Rⁿ and any real number t with 0 < t < 1, we conclude that g(x) is convex on Rⁿ.

13.5.3.15. Define f(x) := xᵀAx, where A is an n × n real, constant, positive definite matrix. Suppose x and y are any two vectors in Rⁿ and 0 < t < 1. Concerning f(x), we have

f((1 − t)x + ty) − (1 − t)f(x) − tf(y) = ((1 − t)x + ty)ᵀA((1 − t)x + ty) − (1 − t)xᵀAx − tyᵀAy

= (1 − t)2 xT Ax + t(1 − t)xT (A + AT )y + t2 yT Ay − (1 − t)xT Ax − tyT Ay  = (1 − t)2 − (1 − t) xT Ax + t(1 − t)xT (A + AT )y + (t2 − t)yT Ay = −t(1 − t)xT Ax + t(1 − t)xT (A + AT )y − t(1 − t)yT Ay = −t(1 − t)(xT Ax − xT (A + AT )y + yT Ay)

c Larry

Turyn, January 7, 2014

page 57

= −t(1 − t)(x − y)ᵀA(x − y) =: α.

Because A is real and positive definite, for z := x − y with z ≠ 0 we have zᵀAz > 0, so t > 0 and 1 − t > 0 imply α = −t(1 − t)zᵀAz < 0. So,

f((1 − t)x + ty) − (1 − t)f(x) − tf(y) < 0, hence f((1 − t)x + ty) < (1 − t)f(x) + tf(y).

This being true for all distinct x and y in Rⁿ and any real number t with 0 < t < 1, we conclude that f(x) is strictly convex on Rⁿ.

13.5.3.16. Suppose f has two distinct global minimizers, x₁ and x₂, in C. So, f(x₁) = f(x₂) = m is the global minimum value of f on C. Because C is convex, with t = 0.5 we have that x := 0.5(x₁ + x₂) is in C. By strict convexity,

f(x) = f((1 − 0.5)x₁ + 0.5x₂) < (1 − 0.5)f(x₁) + 0.5f(x₂) = 0.5m + 0.5m = m.

f(x) < m contradicts that m is the global minimum of f on C. So, no, f cannot have two distinct global minimizers in C if f is strictly convex on the convex set C.
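The algebraic identity driving problems 13.5.3.14–13.5.3.16 can be checked numerically; the sketch below is an added verification, with a sample 2 × 2 positive definite matrix chosen for illustration.

```python
import random

# Check the identity (for f(x) = x^T A x and any square A):
#   f((1-t)x + ty) - (1-t)f(x) - tf(y) = -t(1-t) (x-y)^T A (x-y).
random.seed(1)

def quad(A, u, v):
    """u^T A v for a 2x2 matrix A."""
    return sum(u[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

A = [[2.0, 1.0], [1.0, 1.0]]   # symmetric, det = 1 > 0, trace > 0: positive definite

for _ in range(100):
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    y = [random.uniform(-5, 5), random.uniform(-5, 5)]
    t = random.random()
    z = [(1 - t) * x[i] + t * y[i] for i in range(2)]
    d = [x[i] - y[i] for i in range(2)]
    lhs = quad(A, z, z) - (1 - t) * quad(A, x, x) - t * quad(A, y, y)
    rhs = -t * (1 - t) * quad(A, d, d)
    assert abs(lhs - rhs) < 1e-9   # the identity
    assert quad(A, d, d) >= 0      # definiteness of A on the sampled z = x - y
print("identity verified on 100 random samples")
```

Since zᵀAz > 0 for z ≠ 0, the right-hand side is strictly negative for 0 < t < 1, which is exactly the strict convexity used in 13.5.3.15 and 13.5.3.16.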


Section 13.6

13.6.3.1. Similarly to the work for the maximization problem system (13.52), define f(x) := xᵀAx, f₁(x) := xᵀx − 1, f₂(x) := xᵀx⁽¹⁾, ..., f_{k+1}(x) := xᵀx⁽ᵏ⁾. Symmetry of A implies that we have ∇f = 2Ax, ∇f₁ = 2x, ∇f₂ = x⁽¹⁾, ..., ∇f_{k+1} = x⁽ᵏ⁾. There exist multipliers µ₁, ..., µ_{k+1} and a global maximizer x* for which stationarity holds, that is,

(⋆) 2Ax* = 2µ₁x* + µ₂x⁽¹⁾ + ... + µ_{k+1}x⁽ᵏ⁾

and

(⋆⋆) ||x*||² − 1 = 0 = (x*)ᵀx⁽¹⁾ = ... = (x*)ᵀx⁽ᵏ⁾.

Take the dot product of (⋆) with x* and use (⋆⋆) to get

2(x*)ᵀAx* = 2µ₁||x*||² + µ₂(x*)ᵀx⁽¹⁾ + ... + µ_{k+1}(x*)ᵀx⁽ᵏ⁾ = 2µ₁·1 + µ₂·0 + ... + µ_{k+1}·0 = 2µ₁.

This shows that µ₁ is the global maximum value of xᵀAx subject to the k + 1 constraints. On the other hand, the set of eigenvectors x⁽¹⁾, ..., x⁽ᵏ⁾ is orthonormal, so when we take the dot product of (⋆) with each of the unit vectors x⁽¹⁾, ..., x⁽ᵏ⁾ and use (⋆⋆) we get, for example,

2(x⁽¹⁾)ᵀAx* = 2µ₁(x⁽¹⁾)ᵀx* + µ₂||x⁽¹⁾||² + µ₃(x⁽¹⁾)ᵀx⁽²⁾ + ... + µ_{k+1}(x⁽¹⁾)ᵀx⁽ᵏ⁾,

so

(⋆⋆⋆) 2(x⁽¹⁾)ᵀAx* = 2µ₁·0 + µ₂·1 = µ₂, ..., 2(x⁽ᵏ⁾)ᵀAx* = µ_{k+1}.

But, symmetry of A implies

(x⁽¹⁾)ᵀAx* = (x⁽¹⁾)ᵀAᵀx* = (Ax⁽¹⁾)ᵀx* = (λ₁x⁽¹⁾)ᵀx* = λ₁(x⁽¹⁾)ᵀx* = λ₁·0,

so (⋆⋆⋆) implies µ₂ = 0, and similarly, ..., µ_{k+1} = 0. It follows that stationarity, (⋆), actually implies

2Ax* = 2µ₁x* + 0·x⁽¹⁾ + ... + 0·x⁽ᵏ⁾ = 2µ₁x*,

hence λ_{k+1} := µ₁ is an eigenvalue of A with corresponding eigenvector x*. The last thing to notice is that λ_{k+1} ≤ λ_k follows from

λ_k = max{R_A(x) : x satisfying ||x|| = 1, 0 = xᵀx⁽¹⁾ = ... = xᵀx⁽ᵏ⁻¹⁾}
   ≥ max{R_A(x) : x satisfying ||x|| = 1, 0 = xᵀx⁽¹⁾ = ... = xᵀx⁽ᵏ⁾} = λ_{k+1}.

13.6.3.2. Let x = [x y z]ᵀ be an unspecified nonzero vector. We calculate

R_A(x) = xᵀAx/||x||² = (1/(x² + y² + z²)) · [x y z] [[0, 1, 1], [1, 0, 0], [1, 0, 2]] [x; y; z] = (2xy + 2xz + 2z²)/(x² + y² + z²) =: f(x, y, z).

Mathematica™ on f(x, y, z) over the unit cube, as in Example 2.37 in Section 2.9, gives approximate maximum value λ₁ ≈ 2.4811943040920146 and approximate minimum value λ₃ ≈ −1.1700864866260186. Corresponding to eigenvalue λ₁ ≈ 2.4811943040920146, Mathematica's FindMaximum command also gives us an eigenvector

x⁽¹⁾ := [0.2802804773301895  0.11296192221337528  0.5824683730551866]ᵀ.


Using Mathematica again, we found an approximate solution of the problem

  Maximize xᵀAx
  Subject to xᵀx − 1 = 0
             xᵀx⁽¹⁾ = 0,

which we hoped would produce the second-greatest, hence middle, eigenvalue λ₂. Here are the Mathematica commands, but it didn't successfully give a result; the constrained FindMaximum simply returned the unconstrained maximizer again:

f[x_, y_, z_] := (2 x y + 2 x z + 2 z^2)/(x^2 + y^2 + z^2)

FindMaximum[{f[x, y, z], 1 >= x >= -1 && 1 >= y >= -1 && 1 >= z >= -1 && 0.2802804773301895 x + 0.11296192221337528 y + 0.5824683730551866 z == 0}, {x, y, z}]

{2.4811943040920146, {x -> 0.2802804773301895, y -> 0.11296192221337528, z -> 0.5824683730551866}}

2xy + 2xz + 2z 2 x2 + y 2 + z 2

0.2802804773301895 x− FindMaximum[{f2[x, − 0.11296192221337528

0.582468373055186 0.11296192221337528

z, z], 1 ≥ x ≥ −1&&1 ≥ z ≥ −1}, {x, z}]

{0.688892, {x → −0.22102, z → 0.168575}}, hence the middle eigenvalue is λ2 ≈ 0.6888921825340185. z ]T be an unspecified nonzero vector. We calculate √    x 3 0 √2 1 xT Ax xT Ax RA (x) = = 2 = 2 · [x y z] 3 0 0  y  ||x||2 x + y2 + z2 x + y2 + z2 z 0 0 −1

13.6.3.3. Let ||x|| = [ x

y

√ 2x2 + 2 3xy − z 2 = , f (x, y, z) x2 + y 2 + z 2 MathematicaTM on f (x, y, z) over the unit cube, as in Example 2.37 in Section 2.9, gives maximum value of λ1 = 3 and minimum value of λ3 = −1. Corresponding to eigenvalue λ1 = 3, Mathematica’s FindMaximum command also gives us an eigenvector T √ x(1) , 3 1 0 . Using Mathematica again, we found an approximate solution of the problem   Maximize xT Ax           T Subject to x x − 1 = 0 .           xT x(1) = 0

c Larry

Turyn, January 7, 2014

page 60

which produces a second greatest eigenvalue, hence middle eigenvalue, λ2 ≈ −1. Here are the Mathematica commands and their output. √ 2x2 + 2 3 xy − z 2 f [x , y , z ] := x2 + y 2 + z 2 √ FindMaximum[{f[x, y, z], 1 ≥ x ≥ −1&&1 ≥ y ≥ −1&&1 ≥ z ≥ −1&& 3x + y == 0}, {x, y, z}] {−1., {x → 0.29263, y → −0.506849, z → −0.0886408}

c Larry

Turyn, January 7, 2014

page 1

Chapter Fourteen Section 14.1.2 14.1.2.1. It makes sense to look for an approximate solution that also satisfies the boundary conditions, for example, in the form x y(x) = c1 x2 (π − x) + c2 cos . 2 MathematicaTM gave  ˆ π   1 x 2 x 2 2 0.1x dx f (c1 , c2 ) , 3 + cos x c1 (2πx − 3x ) − c2 sin − 2 c1 x (π − x) + c2 cos 2 2 2 0  1 5 2 2 32 5π 2 π c1 + (2 − π)c2 + 180 − 20π 2 + π 4 c21 + (−34 + 11π)c1 c2 + c . 100 5 5 9 16 2 The approximate minc1 ,c2 f (c1 , c2 )is achieved at c1 ≈ 0.0130544671, c2 ≈ 0.219383918. So, using only two terms, an approximate solution of the ODE-BVP is given by =−

x y(x) ≈ 0.0130544671x2 (π − x) + 0.219383918 cos . 2 [By the way, the problem did not ask for a graph of the approximate solution, but here it is if you’re curious. As in Example 14.5 in Section 14.1, the Rayleigh-Ritz method’s approximate solution is given as the dashed, red graph and Mathematica’s approximate solution y(x) is the solid, blue graph.]

Figure 1: Answer key for problem 14.1.2.1

14.1.2.2. It makes sense to look for an approximate solution that also satisfies the boundary conditions. Ex. 1: For an approximate solution in the form y(x) = c1 (1 − r) + c2 r(1 − r), MathematicaTM gave ˆ f (c1 , c2 ) ,

1



2 2   r − c1 + c2 (1 − 2r) − c1 (1 − r) + c2 r(1 − r) − 2 c1 (1 − r) + c2 r(1 − r) r3 dr

0

1 1 1 1 2 2 c1 + c21 − c2 + c1 c2 + c 10 6 15 6 15 2 The approximate minc1 ,c2 f (c1 , c2 )is achieved at c1 ≈ 0.2545454545516736, c2 ≈ 0.09090909090105735. Using only two terms, an approximate solution of the ODE-BVP is given by =−

u(r) ≈ 0.2545454545516736(1 − r) + 0.09090909090105735r(1 − r) [By the way, the problem did not ask for a graph of the approximate solution, but here it is if you’re curious. As in Example 14.5 in Section 14.1, the Rayleigh-Ritz method’s approximate solution is given as the dashed, red graph.] c Larry

Turyn, January 7, 2014

page 2

Figure 2: Answer key for first example of a solution to problem 14.1.2.2 Ex. 2: For an approximate solution in the form y(x) = d1 (1 − r) + d2 (1 − r)2 , MathematicaTM gave ˆ 1 2 2   f (d1 , d2 ) , r − d1 + 2d2 (1 − r) − d1 (1 − r) + d2 (1 − r)2 − 2 d1 (1 − r) + d2 (1 − r)2 r3 dr 0

1 1 1 1 2 2 d1 + d21 − d2 + d1 d2 + d 10 6 30 6 15 2 The approximate mind1 ,d2 f (d1 , d2 )is achieved at d1 ≈ 0.34545454807517356, d2 ≈ −0.09090909444857707. Using only two terms, an approximate solution of the ODE-BVP is given by =−

u(r) ≈ 0.34545454807517356(1 − r) − 0.09090909444857707(1 − r)2 [By the way, the problem did not ask for a graph of the approximate solution, but here it is if you’re curious. As in Example 14.5 in Section 14.1, the Rayleigh-Ritz method’s approximate solution is given as the dashed, red graph.]

Figure 3: Answer key for second example of a solution to problem 14.1.2.2  πr  , MathematicaTM gave 2 ˆ 1   πr    πr 2   πr   π f (e1 , e2 ) , r − e1 − e2 sin 2− e1 (1 − r) + e2 cos − 2 e1 (1 − r) + e2 cos r3 dr 2 2 2 2 0

Ex. 3: For an approximate solution in the form y(x) = e1 (1 − r) + e2 cos

1 1 4 4 1 e1 + e21 + 4 (−48 + 24π − π 3 )e2 + 2 (−2 + π)e1 e2 + (−4 + π 2 )e22 10 6 π π 16 The approximate mine1 ,e2 f (e1 , e2 )is achieved at e1 ≈ 0.15803140522028478, e2 ≈ 0.10228209565004716. So, using only two terms, an approximate solution of the ODE-BVP is given by  πr  u(r) ≈ 0.15803140522028478(1 − r) + 0.10228209565004716 cos 2 =−


[By the way, the problem did not ask for a graph of the approximate solution, but here it is if you’re curious. As in Example 14.5 in Section 14.1, the Rayleigh-Ritz method’s approximate solution is given as the dashed, red graph.]
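As an added cross-check (not in the original manual), the Ex. 1 and Ex. 2 quadratics above can be minimized exactly in rational arithmetic, and their minimizers describe the same approximate solution, since d₁(1 − r) + d₂(1 − r)² = (d₁ + d₂)(1 − r) − d₂ r(1 − r).

```python
from fractions import Fraction as F

def argmin(a, b, c, d, e):
    """Minimizer of a t1^2 + b t1 t2 + c t2^2 + d t1 + e t2 (2x2 normal eqs)."""
    det = 4 * a * c - b * b
    return (-2 * c * d + b * e) / det, (-2 * a * e + b * d) / det

# Quadratics for problem 14.1.2.2, Ex. 1 and Ex. 2, from above:
c1, c2 = argmin(F(1, 6), F(1, 6), F(2, 15), F(-1, 10), F(-1, 15))
d1, d2 = argmin(F(1, 6), F(1, 6), F(2, 15), F(-1, 10), F(-1, 30))

assert (c1, c2) == (F(14, 55), F(1, 11))    # = 0.254545..., 0.090909...
assert (d1, d2) == (F(19, 55), F(-1, 11))   # = 0.345454..., -0.090909...
assert c1 == d1 + d2 and c2 == -d2          # same function u(r) in both bases
print("14.1.2.2 minimizers agree across bases")
```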

Figure 4: Answer key for third example of a solution to problem 14.1.2.2

The three examples of approximate solutions agree very much with each other. The only slight disagreement is that the third approximate solution has a slightly larger value of y(0).

14.1.2.3. It makes sense to look for an approximate solution that also satisfies the boundary conditions.

Ex. 1: For an approximate solution in the form u(r) = c₁(1 − r) + c₂r(1 − r), substituting the trial function into the problem's functional, Mathematica™ gave

f(c₁, c₂) = −(1/10)c₁ + (11/30)c₁² − (1/15)c₂ + (1/5)c₁c₂ + (9/70)c₂².

The approximate min over c₁, c₂ of f(c₁, c₂) is achieved at c₁ ≈ 0.08333333442105048, c₂ ≈ 0.19444444183629322. So, using only two terms, an approximate solution of the ODE-BVP is given by

u(r) ≈ 0.08333333442105048(1 − r) + 0.19444444183629322 r(1 − r).

[By the way, the problem did not ask for a graph of the approximate solution, but here it is if you're curious. As in Example 14.5 in Section 14.1, the Rayleigh-Ritz method's approximate solution is given as the dashed, red graph.]

Figure 5: Answer key for first example of a solution to problem 14.1.2.3


Ex. 2: For an approximate solution in the form u(r) = d₁(1 − r) + d₂(1 − r)², substituting the trial function into the problem's functional, Mathematica™ gave

f(d₁, d₂) = −(1/10)d₁ + (11/30)d₁² − (1/30)d₂ + (8/15)d₁d₂ + (31/105)d₂².

The approximate min over d₁, d₂ of f(d₁, d₂) is achieved at d₁ ≈ 0.2777777777871004, d₂ ≈ −0.19444444445493192. So, using only two terms, an approximate solution of the ODE-BVP is given by

u(r) ≈ 0.2777777777871004(1 − r) − 0.19444444445493192(1 − r)².

[By the way, the problem did not ask for a graph of the approximate solution, but here it is if you're curious. As in Example 14.5 in Section 14.1, the Rayleigh-Ritz method's approximate solution is given as the dashed, red graph.]

Figure 6: Answer key for second example of a solution to problem 14.1.2.3

Ex. 3: For an approximate solution in the form u(r) = e₁(1 − r) + e₂ cos(πr/2), substituting the trial function into the problem's functional, Mathematica™ gave

f(e₁, e₂) = −(1/10)e₁ + (11/30)e₁² + (4/π⁴)(−48 + 24π − π³)e₂ + (4/π⁴)(192 − 64π + π³)e₁e₂ + (1/(48π²))(192 − 20π² + 3π⁴)e₂².

The approximate min over e₁, e₂ of f(e₁, e₂) is achieved at e₁ ≈ −0.16338185122419183, e₂ ≈ 0.24393320411062294. Using only two terms, an approximate solution of the ODE-BVP is given by

u(r) ≈ −0.16338185122419183(1 − r) + 0.24393320411062294 cos(πr/2).

[By the way, the problem did not ask for a graph of the approximate solution, but here it is if you're curious. As in Example 14.5 in Section 14.1, the Rayleigh-Ritz method's approximate solution is given as the dashed, red graph.]

The three examples of approximate solutions agree very much with each other. The only slight disagreement is that the third approximate solution has slightly larger values of |y″(x)|.
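Parallel to the cross-check for 14.1.2.2, the Ex. 1 and Ex. 2 quadratics of 14.1.2.3 can be minimized exactly (an added check, not in the manual), and the change of basis d₁(1 − r) + d₂(1 − r)² = (d₁ + d₂)(1 − r) − d₂ r(1 − r) confirms that both describe the same function.

```python
from fractions import Fraction as F

def argmin(a, b, c, d, e):
    """Minimizer of a t1^2 + b t1 t2 + c t2^2 + d t1 + e t2 (2x2 normal eqs)."""
    det = 4 * a * c - b * b
    return (-2 * c * d + b * e) / det, (-2 * a * e + b * d) / det

# Quadratics for problem 14.1.2.3, Ex. 1 and Ex. 2, from above:
c1, c2 = argmin(F(11, 30), F(1, 5), F(9, 70), F(-1, 10), F(-1, 15))
d1, d2 = argmin(F(11, 30), F(8, 15), F(31, 105), F(-1, 10), F(-1, 30))

assert (c1, c2) == (F(1, 12), F(7, 36))     # = 0.083333..., 0.194444...
assert (d1, d2) == (F(5, 18), F(-7, 36))    # = 0.277777..., -0.194444...
assert c1 == d1 + d2 and c2 == -d2          # same function u(r) in both bases
print("14.1.2.3 minimizers agree across bases")
```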


Figure 7: Answer key for third example of a solution to problem 14.1.2.3

Section 14.2.5

14.2.5.1. This problem fits the form of Theorem 14.1 in Section 14.2, with F(x, y, y′) = −p(x)(y′)² − 2yf(x), ya = y(0) = 0 and yb = y(L) = 0, so the minimizer must satisfy the Euler-Lagrange equation

0 ≡ ∂F/∂y − d/dx[∂F/∂y′] = −2f(x) − d/dx[−2p(x)y′],

that is, (p(x)y′)′ = f(x), 0 < x < L. The latter is an ODE satisfied by all solutions of the calculus of variations problem.

14.2.5.2. This problem fits the form of Theorem 14.1 in Section 14.2, with F(x, y, y′) = −(y′)² + x²y², ya = y(0) = 0 and yb = y(L) = 0, so the minimizer must satisfy the Euler-Lagrange equation

0 ≡ ∂F/∂y − d/dx[∂F/∂y′] = 2x²y − d/dx[−2y′],

that is, y″ + x²y = 0, 0 < x < L. The latter is an ODE satisfied by all solutions of the calculus of variations problem.

14.2.5.3. This problem fits the form of Corollary 14.1 in Section 14.2, with F(x, y, u, ∂u/∂x, ∂u/∂y) = (∂u/∂x)² + (∂u/∂y)² − 2uf(x, y), so the minimizer must satisfy the Euler-Lagrange equation

0 ≡ ∂F/∂u − ∂/∂x[∂F/∂(∂u/∂x)] − ∂/∂y[∂F/∂(∂u/∂y)] = −2f(x, y) − 2 ∂/∂x[∂u/∂x] − 2 ∂/∂y[∂u/∂y], (x, y) in D,

that is,

∂²u/∂x² + ∂²u/∂y² = −f(x, y), (x, y) in D.

The latter is a PDE satisfied by all solutions of the calculus of variations problem.

14.2.5.4. We proceed, at first, in a way similar to that of Example 14.8 in Section 14.2: Define

J[u] = ∬_D [ (∂u/∂x)² + (∂u/∂y)² ] dA.

Turyn, January 7, 2014

page 6

Let u0 (x, y) be a solution of this problem’s minimization problem and δu(x, y) be its variation. Both u0 and ∂u ∂(δu) u = u0 + δu must satisfy the Neumann BC ≡ 0, on ∂D, so ≡ 0 on ∂D. We calculate ∂n ∂n ¨  ∂u0 ∂(δu) 2  ∂u0 ∂(δu) 2  dA J[ u0 + δu ] = + + + ∂x ∂x ∂y ∂y D ¨ = D



∂u0 ∂x

2

¨ 

so δJ = 2

D

∂u0 ∂(δu) +2 · + ∂x ∂x



∂(δu) ∂x

∂u0 ∂(δu) ∂u0 ∂(δu) · + · ∂x ∂x ∂y ∂y

2

 +

∂u0 ∂y

2

∂u0 ∂(δu) +2 · + ∂y ∂y



∂(δu) ∂y

2 ! dA,

¨



(∇u0 ) • (∇δu) dA.

dA = 2 D

Corollary 6.1 in Section 6.7 implies that ∇u0 • ∇δu = ∇ • (δu ∇u0 ) − δu∇2 u0 , so the divergence theorem implies ¨ ‰ ¨ ‰  ∂u0 2 b ds − 2 ds − 2 δu ∇2 u0 dA δJ = 2 δu ∇u0 ) • n δu ∇ u0 dA = 2 (δu) ∂n D ∂D D ∂D ¨ =0−2 δu ∇2 u0 dA, D

because u0 satisfies the homogeneous Neumann BC on ∂D. Other than the requirement that ∂D, arbitrariness of δu implies stationarity, that is, δJ = 0, implies

∂(δu) ≡ 0 on ∂n

∇2 u0 = 0 in D, that is, u0 satisfies Laplace’s PDE in D. 14.2.5.5. This problem fits the form of Theorem 14.1 in Section 14.2, with F (x, y, y 0 ) = −x y 0 y(1) = ya = 0 and y(3) = yb = −1, so the minimizer must satisfy the Euler-Lagrange equation 0≡

2

+ xy 2 ,

i d h ∂F i ∂F d h − = −2xy − − 2xy 0 , 0 ∂y dx ∂y dx

that is, (xy 0 )0 + xy = 0, 1 < x < 3, which can be rewritten as the ODE x2 y 00 + xy 0 + x2 y = 0, 1 < x < 3, which is Bessel’s equation of order zero. So, the solution of the minimization problem must satisfy the ODE-BVP  2 00   x y + xy 0 + x2 y = 0, 1 < x < 3,  (?)   y(1) = 0, y(3) = −1. Bessel’s equation of order zero has solutions, J0 (x) and Y0 (x), that are known with as much accuracy as the trigonometric functions sin x and cos x are known. The general solution of ODE y 00 + x1 y 0 + y = 0 is y(x) = c1 J0 (x)+c2 Y0 (x), where c1 , c2 are arbitrary constants. Substituting this into the boundary conditions gives   J0 (1)c1 + Y0 (1)c2 = 0 , J0 (3)c1 + Y0 (3)c2 = −1

c Larry

Turyn, January 7, 2014

page 7

a system of two algebraic equations in the unknowns $c_1,c_2$. Using tabulated values of $J_0(1),Y_0(1),J_0(3),Y_0(3)$, we arrive at the solution of $(\star)$:
\[ y(x)\approx 0.2662994102\,J_0(x)-2.469811754\,Y_0(x). \]
[By the way, this ODE-BVP also was studied using numerical, approximate solutions in Example 8.19 in Section 8.8 and Example 8.21 in Section 8.9.]

14.2.5.6. This problem fits the form of Theorem 14.1 in Section 14.2, with $F(x,y,y')=x^2(y')^2$, $y(-1)=y_a=-1$ and $y(1)=y_b=1$, so the minimizer must satisfy the Euler-Lagrange equation
\[ 0\equiv\frac{\partial F}{\partial y}-\frac{d}{dx}\Big[\frac{\partial F}{\partial y'}\Big]=0-\frac{d}{dx}\big[2x^2y'\big], \]
that is, $(x^2y')'=0$, $-1<x<1$. Integrate twice with respect to $x$ to get successively
\[ (x^2y')'=0\iff x^2y'=c_1\iff y'=c_1x^{-2}\iff y=c_2-c_1x^{-1}, \]
where $c_1,c_2$ are arbitrary constants. If there were an admissible solution, that is, a $y(x)$ that is continuous and piecewise continuously differentiable on $[-1,1]$, then continuity at $x=0$ would require that $c_1=0$, hence the solution is of the form $y=c_2$. The BCs would then require $-1=y(-1)=c_2=y(1)=1$, which is impossible. So, no, there is no admissible solution on $[-1,1]$.

14.2.5.7. We are allowed to vary $\delta y=\delta y(x)$, $\delta y'=\delta y'(x)$, $\delta v=\delta v(x)$, and $\delta v'=\delta v'(x)$ arbitrarily, except for requiring that $\delta y$ and $\delta v$ be continuously differentiable on $(a,b)$, that $\delta y(a)=\delta y(b)=\delta v(a)=\delta v(b)=0$, and that $\delta y'$ and $\delta v'$ be piecewise continuous on $(a,b)$. The second group of assumptions comes from the need for $y_0(x)+\delta y(x)$ to satisfy the boundary conditions $y(a)=y_a$, $y(b)=y_b$, $v(a)=v_a$, $v(b)=v_b$. In addition, consistency, that is, $(\delta y)'=\delta(y')$ and $(\delta v)'=\delta(v')$, implies that $\int_a^b\delta y'(x)\,dx=\big[\delta y(x)\big]_a^b=\delta y(b)-\delta y(a)=0$ and $\int_a^b\delta v'(x)\,dx=\big[\delta v(x)\big]_a^b=0$. Analogous to linear approximation in $\mathbb{R}^5$, we have
\[ \delta F(x,y,y',v,v')=\frac{\partial F}{\partial y}\cdot\delta y+\frac{\partial F}{\partial y'}\cdot\delta y'+\frac{\partial F}{\partial v}\cdot\delta v+\frac{\partial F}{\partial v'}\cdot\delta v', \]
where each partial derivative is evaluated at $(x,y,y',v,v')$.
There is no partial derivative of $F$ with respect to an $x$ term because $x$ is not being varied in the minimization process. So, writing $(\cdots)$ for $\big(x,y(x),y'(x),v(x),v'(x)\big)$,
\[ \delta J=\int_a^b\delta F(x,y,y',v,v')\,dx=\int_a^b\Big[\frac{\partial F}{\partial y}(\cdots)\cdot\delta y(x)+\frac{\partial F}{\partial y'}(\cdots)\cdot\delta y'(x)\Big]dx+\int_a^b\Big[\frac{\partial F}{\partial v}(\cdots)\cdot\delta v(x)+\frac{\partial F}{\partial v'}(\cdots)\cdot\delta v'(x)\Big]dx. \]
Concerning the second and fourth terms, integration by parts gives
\[ \int_a^b\frac{\partial F}{\partial y'}(\cdots)\cdot\delta y'(x)\,dx=\Big[\frac{\partial F}{\partial y'}(\cdots)\cdot\delta y(x)\Big]_a^b-\int_a^b\frac{d}{dx}\Big[\frac{\partial F}{\partial y'}(\cdots)\Big]\cdot\delta y(x)\,dx. \]
Similarly,
\[ \int_a^b\frac{\partial F}{\partial v'}(\cdots)\cdot\delta v'(x)\,dx=\Big[\frac{\partial F}{\partial v'}(\cdots)\cdot\delta v(x)\Big]_a^b-\int_a^b\frac{d}{dx}\Big[\frac{\partial F}{\partial v'}(\cdots)\Big]\cdot\delta v(x)\,dx. \]
But, $\delta y(a)=\delta y(b)=0$ and $\delta v(a)=\delta v(b)=0$, so
\[ \delta J=\int_a^b\Big[\frac{\partial F}{\partial y}(\cdots)-\frac{d}{dx}\Big[\frac{\partial F}{\partial y'}(\cdots)\Big]\Big]\cdot\delta y(x)\,dx+\int_a^b\Big[\frac{\partial F}{\partial v}(\cdots)-\frac{d}{dx}\Big[\frac{\partial F}{\partial v'}(\cdots)\Big]\Big]\cdot\delta v(x)\,dx. \]
Written briefly,
\[ \delta J=\int_a^b\Big[\frac{\partial F}{\partial y}-\frac{d}{dx}\Big[\frac{\partial F}{\partial y'}\Big]\Big]\,\delta y\,dx+\int_a^b\Big[\frac{\partial F}{\partial v}-\frac{d}{dx}\Big[\frac{\partial F}{\partial v'}\Big]\Big]\,\delta v\,dx. \]
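The two-equation Euler-Lagrange system just derived can be checked mechanically with a computer algebra system. A sketch using SymPy's `euler_equations`; the integrand F below is a hypothetical choice for illustration, not one taken from the problem set:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')
v = sp.Function('v')

# Hypothetical integrand with two unknown functions:
# F(x, y, y', v, v') = (y')^2 + (v')^2 + 2*y*v
F = y(x).diff(x)**2 + v(x).diff(x)**2 + 2*y(x)*v(x)

# euler_equations returns one Euler-Lagrange equation per unknown function,
# i.e., the system  dF/dy - d/dx[dF/dy'] = 0  and  dF/dv - d/dx[dF/dv'] = 0.
eqs = euler_equations(F, [y(x), v(x)], [x])
for eq in eqs:
    print(eq)
```

Here the system works out to $2v - 2y'' = 0$ and $2y - 2v'' = 0$, exactly what the hand derivation above produces for this F.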
14.2.5.8. Let $\varphi$ be the angle with respect to vertical, with $\varphi=0$ for the pendulum normally at rest and $\varphi=\pi$ for the inverted pendulum, and define $\theta$ to be the usual angle in the $xy$-plane. Assume that the total potential energy is $V=mgz=mg\ell(1-\cos\varphi)$ and the kinetic energy is $T=\frac12m\big[(\ell\dot\varphi)^2+(\ell(\sin\varphi)\dot\theta)^2\big]$. The Lagrangian is
\[ L\triangleq T-V=\frac12m\big[(\ell\dot\varphi)^2+(\ell(\sin\varphi)\dot\theta)^2\big]-mg\ell(1-\cos\varphi). \]
The action is
\[ I\triangleq\int_{t_1}^{t_2}L\,dt=\int_{t_1}^{t_2}\Big[\frac12m\ell^2\dot\varphi^2+\frac12m\ell^2(\sin^2\varphi)\dot\theta^2-mg\ell(1-\cos\varphi)\Big]dt. \]
Using integration by parts, the variation of the action is
\[ \delta I=\int_{t_1}^{t_2}m\ell^2\dot\varphi\,(\delta\dot\varphi)\,dt+\int_{t_1}^{t_2}m(\ell\dot\theta)^2\sin\varphi\cos\varphi\,(\delta\varphi)\,dt+\int_{t_1}^{t_2}m\ell^2(\sin^2\varphi)\dot\theta\,(\delta\dot\theta)\,dt+\int_{t_1}^{t_2}mg\ell(-\sin\varphi)(\delta\varphi)\,dt \]
\[ =\Big[m\ell^2\dot\varphi(\delta\varphi)\Big]_{t_1}^{t_2}-\int_{t_1}^{t_2}m\ell^2\ddot\varphi(\delta\varphi)\,dt+\int_{t_1}^{t_2}m(\ell\dot\theta)^2\sin\varphi\cos\varphi\,(\delta\varphi)\,dt \]
\[ \quad+\Big[m\ell^2(\sin^2\varphi)\dot\theta(\delta\theta)\Big]_{t_1}^{t_2}-\int_{t_1}^{t_2}m\ell^2\big[2(\sin\varphi\cos\varphi)\dot\varphi\dot\theta+(\sin^2\varphi)\ddot\theta\big](\delta\theta)\,dt+\int_{t_1}^{t_2}mg\ell(-\sin\varphi)(\delta\varphi)\,dt \]
\[ =m\ell^2\dot\varphi(t_2)\delta\varphi(t_2)-m\ell^2\dot\varphi(t_1)\delta\varphi(t_1)-\int_{t_1}^{t_2}m\ell^2\ddot\varphi(\delta\varphi)\,dt+\int_{t_1}^{t_2}m(\ell\dot\theta)^2\sin\varphi\cos\varphi\,(\delta\varphi)\,dt \]
\[ \quad+m\ell^2\big(\sin^2\varphi(t_2)\big)\dot\theta(t_2)\,\delta\theta(t_2)-m\ell^2\big(\sin^2\varphi(t_1)\big)\dot\theta(t_1)\,\delta\theta(t_1)-\int_{t_1}^{t_2}m\ell^2\big[2(\sin\varphi\cos\varphi)\dot\varphi\dot\theta+(\sin^2\varphi)\ddot\theta\big](\delta\theta)\,dt+\int_{t_1}^{t_2}mg\ell(-\sin\varphi)(\delta\varphi)\,dt. \]
Among all motions that start and end at fixed endpoints $\varphi(t_1)=\varphi_1$, $\varphi(t_2)=\varphi_2$, $\theta(t_1)=\theta_1$, and $\theta(t_2)=\theta_2$, the variations $\delta\varphi$ satisfy $\delta\varphi(t_1)=\delta\varphi(t_2)=0$ and $\delta\theta(t_1)=\delta\theta(t_2)=0$, hence
\[ \delta I=-\int_{t_1}^{t_2}m\ell^2\ddot\varphi(\delta\varphi)\,dt+\int_{t_1}^{t_2}m(\ell\dot\theta)^2\sin\varphi\cos\varphi\,(\delta\varphi)\,dt+\int_{t_1}^{t_2}mg\ell(-\sin\varphi)(\delta\varphi)\,dt-\int_{t_1}^{t_2}m\ell^2\big[2(\sin\varphi\cos\varphi)\dot\varphi\dot\theta+(\sin^2\varphi)\ddot\theta\big](\delta\theta)\,dt. \]
So, stationarity with respect to independent variations $\delta\varphi$ and $\delta\theta$ implies
\[ \begin{cases} \ell\ddot\varphi-\ell\dot\theta^2\sin\varphi\cos\varphi+g\sin\varphi\equiv0, & t_1<t<t_2,\\ 2(\cos\varphi)\dot\varphi\dot\theta+(\sin\varphi)\ddot\theta\equiv0, & t_1<t<t_2. \end{cases} \]
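The stationarity system for the spherical pendulum can be sanity-checked numerically: the second equation, $2(\cos\varphi)\dot\varphi\dot\theta+(\sin\varphi)\ddot\theta\equiv0$, is equivalent to conservation of $\sin^2\varphi\,\dot\theta$ (angular momentum about the vertical, up to the factor $m\ell^2$) along solutions. A hedged sketch; the parameter values and initial conditions below are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, ell = 9.81, 1.0  # illustrative values, not from the problem statement

def rhs(t, s):
    # s = [phi, phidot, theta, thetadot]; the two stationarity equations
    # solved for the highest derivatives (valid while sin(phi) != 0):
    phi, dphi, theta, dtheta = s
    ddphi = dtheta**2 * np.sin(phi) * np.cos(phi) - (g / ell) * np.sin(phi)
    ddtheta = -2.0 * np.cos(phi) * dphi * dtheta / np.sin(phi)
    return [dphi, ddphi, dtheta, ddtheta]

# Start well away from phi = 0 with nonzero theta-dot, so sin(phi) stays nonzero.
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0, 1.5], rtol=1e-10, atol=1e-10)

phi, dphi, theta, dtheta = sol.y
p = np.sin(phi)**2 * dtheta  # should be constant along the motion
print("variation in sin(phi)^2 * theta':", p.max() - p.min())
```

The printed variation is at the level of the integrator tolerance, confirming the conservation law hidden in the second Euler-Lagrange equation.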
14.2.5.9. Define $J[\,y\,]\triangleq\int_0^L\big[\frac12EI(y'')^2+yf(x)\big]dx$, where $y(x)$ is an admissible function, namely one that (1) is once continuously differentiable on the interval $(0,L)$, (2) has second derivative that is piecewise continuous on $(0,L)$, and (3) satisfies the BCs $y(0)=y(L)=0$. We are allowed to vary $\delta y=\delta y(x)$, $\delta y'=\delta y'(x)$, and $\delta y''=\delta y''(x)$ arbitrarily, except for requiring that $\delta y$ and $\delta y'$ be continuously differentiable on $(0,L)$, that $\delta y(0)=\delta y(L)=0$, and that $\delta y''$ be piecewise continuous on $(0,L)$. The second group of assumptions comes from the need for $y_0(x)+\delta y(x)$ to satisfy the boundary conditions $y(0)=0$ and $y(L)=0$. In addition, consistency, that is, $(\delta y)'=\delta(y')$ and $(\delta y')'=\delta(y'')$, implies $\int_0^L\delta y'(x)\,dx=\big[\delta y(x)\big]_0^L=\delta y(L)-\delta y(0)=0$. The work is similar to that for Example 14.9 in Section 14.2.
(a) Suppose $y_0(x)$ is the minimizer of (14.2) in Section 14.1, that is, of the minimization in problem 14.2.5.9. We calculate the variation
\[ \delta J=\delta\int_0^L\Big[\frac12EI(y'')^2+yf(x)\Big]dx=\int_0^L\big[EIy_0''(x)\,\delta y''(x)+f(x)\,\delta y(x)\big]dx. \]
At a minimizer $y_0(x)$, stationarity implies that
\[ 0=\delta J=\int_0^L\big[EIy_0''(x)\,\delta y''(x)+f(x)\,\delta y(x)\big]dx. \]
(b) Next, using $(\delta y')'=\delta(y'')$ and $(\delta y)'=\delta(y')$, integrate by parts twice successively to get
\[ 0=\delta J=\Big[EIy_0''(x)\,\delta y'(x)\Big]_0^L-\int_0^LEIy_0'''(x)\,\delta y'(x)\,dx+\int_0^Lf(x)\,\delta y(x)\,dx, \]
hence
\[ (\star)\qquad 0=\delta J=\Big[EIy_0''(x)\,\delta y'(x)\Big]_0^L-\Big[EIy_0'''(x)\,\delta y(x)\Big]_0^L+\int_0^LEIy_0''''(x)\,\delta y(x)\,dx+\int_0^Lf(x)\,\delta y(x)\,dx. \]
Note that $\delta y(0)=\delta y(L)=0$ implies that
\[ \Big[EIy_0'''(x)\,\delta y(x)\Big]_0^L=EIy_0'''(L)\,\delta y(L)-EIy_0'''(0)\,\delta y(0)=EIy_0'''(L)\cdot0-EIy_0'''(0)\cdot0=0. \]
Substitute this into $(\star)$ and combine the two integrals to get
\[ (\star\star)\qquad 0=EIy_0''(L)\,\delta y'(L)-EIy_0''(0)\,\delta y'(0)+\int_0^L\big[EIy_0''''(x)+f(x)\big]\delta y(x)\,dx. \]
(c) In $(\star\star)$, we are allowed to vary independently (1) the function $\delta y(x)$ for $0<x<L$, (2) the number $\delta y'(L)$, and (3) the number $\delta y'(0)$. It follows that $(\star\star)$ implies
\[ y_0''(L)=0,\qquad y_0''(0)=0, \]
and $EIy_0''''(x)+f(x)\equiv0$, $0<x<L$. So, the natural boundary conditions $y''(0)=y''(L)=0$ appear as a consequence of the minimization. Putting everything together, the minimizer $y_0(x)$ satisfies the fourth-order ODE-BVP
\[ (\star)\qquad\begin{cases} EIy''''+f(x)=0,\\ y(0)=y(L)=0,\quad y''(0)=y''(L)=0. \end{cases} \]

14.2.5.10. Define $J[\,y\,]\triangleq\int_0^LF(x,y,y',y'')\,dx$, where $y(x)$ is an admissible function, namely one that is once
continuously differentiable on the interval $(0,L)$, whose second derivative is piecewise continuous on $(0,L)$, and that satisfies the BCs $y(0)=y(L)=0$. We are allowed to vary $\delta y$, $\delta y'$, and $\delta y''$ arbitrarily, except for requiring that $\delta y$ and $\delta y'$ be continuously differentiable on $(0,L)$, $\delta y(0)=\delta y(L)=0$, and that $\delta y''$ be piecewise continuous on $(0,L)$. The second group of assumptions comes from the need for $y_0(x)+\delta y(x)$ to satisfy the boundary conditions $y(0)=0$ and $y(L)=0$. In addition, consistency, that is, $(\delta y)'=\delta(y')$ and $(\delta y')'=\delta(y'')$, implies that $\int_0^L\delta y'(x)\,dx=\big[\delta y(x)\big]_0^L=\delta y(L)-\delta y(0)=0$. The work is similar to that for Example 14.9 in Section 14.2. Analogous to linear approximation in $\mathbb{R}^4$, we have
\[ \delta F(x,y,y',y'')=\frac{\partial F}{\partial y}(x,y,y',y'')\cdot\delta y+\frac{\partial F}{\partial y'}(x,y,y',y'')\cdot\delta y'+\frac{\partial F}{\partial y''}(x,y,y',y'')\cdot\delta y''. \]
There is no partial derivative of $F$ with respect to an $x$ term because $x$ is not being varied in the minimization process. So, writing $(\cdots)$ for $\big(x,y(x),y'(x),y''(x)\big)$,
\[ \delta J=\int_0^L\delta F(x,y,y',y'')\,dx=\int_0^L\Big[\frac{\partial F}{\partial y}(\cdots)\cdot\delta y(x)+\frac{\partial F}{\partial y'}(\cdots)\cdot\delta y'(x)+\frac{\partial F}{\partial y''}(\cdots)\cdot\delta y''(x)\Big]dx. \]
Concerning the second of the three terms, using integration by parts,
\[ \int_0^L\frac{\partial F}{\partial y'}(\cdots)\cdot\delta y'(x)\,dx=\Big[\frac{\partial F}{\partial y'}(\cdots)\cdot\delta y(x)\Big]_0^L-\int_0^L\frac{d}{dx}\Big[\frac{\partial F}{\partial y'}(\cdots)\Big]\cdot\delta y(x)\,dx. \]
But, $\delta y(0)=\delta y(L)=0$, so
\[ (A)\qquad \int_0^L\frac{\partial F}{\partial y'}(\cdots)\cdot\delta y'(x)\,dx=-\int_0^L\frac{d}{dx}\Big[\frac{\partial F}{\partial y'}(\cdots)\Big]\cdot\delta y(x)\,dx. \]
Similarly, concerning the third of the three terms, using $(\delta y')'=\delta(y'')$ and $(\delta y)'=\delta(y')$, integrate by parts twice successively to get
\[ \int_0^L\frac{\partial F}{\partial y''}(\cdots)\cdot\delta y''(x)\,dx=\Big[\frac{\partial F}{\partial y''}(\cdots)\cdot\delta y'(x)\Big]_0^L-\int_0^L\frac{d}{dx}\Big[\frac{\partial F}{\partial y''}(\cdots)\Big]\cdot\delta y'(x)\,dx \]
\[ =\Big[\frac{\partial F}{\partial y''}(\cdots)\cdot\delta y'(x)\Big]_0^L-\Big[\frac{d}{dx}\Big[\frac{\partial F}{\partial y''}(\cdots)\Big]\cdot\delta y(x)\Big]_0^L+\int_0^L\frac{d^2}{dx^2}\Big[\frac{\partial F}{\partial y''}(\cdots)\Big]\cdot\delta y(x)\,dx. \]
But, $\delta y(0)=\delta y(L)=0$, so
\[ (B)\qquad \int_0^L\frac{\partial F}{\partial y''}(\cdots)\cdot\delta y''(x)\,dx=\Big[\frac{\partial F}{\partial y''}(\cdots)\cdot\delta y'(x)\Big]_0^L+\int_0^L\frac{d^2}{dx^2}\Big[\frac{\partial F}{\partial y''}(\cdots)\Big]\cdot\delta y(x)\,dx. \]
Substitute (A) and (B) and use stationarity to imply that
\[ (\star)\qquad 0=\delta J=\int_0^L\Big[\frac{\partial F}{\partial y}\big(x,y,y',y''\big)-\frac{d}{dx}\Big[\frac{\partial F}{\partial y'}\big(x,y,y',y''\big)\Big]+\frac{d^2}{dx^2}\Big[\frac{\partial F}{\partial y''}\big(x,y,y',y''\big)\Big]\Big]\cdot\delta y(x)\,dx \]
\[ \qquad\qquad+\frac{\partial F}{\partial y''}\big(L,y(L),y'(L),y''(L)\big)\cdot\delta y'(L)-\frac{\partial F}{\partial y''}\big(0,y(0),y'(0),y''(0)\big)\cdot\delta y'(0). \]
In $(\star)$, we are allowed to vary independently (1) the function $\delta y(x)$ for $0<x<L$, (2) the number $\delta y'(L)$, and (3) the number $\delta y'(0)$. It follows that $(\star)$ implies the desired result,
\[ \frac{\partial F}{\partial y}\big(x,y_0(x),y_0'(x),y_0''(x)\big)-\frac{d}{dx}\Big[\frac{\partial F}{\partial y'}\big(x,y_0(x),y_0'(x),y_0''(x)\big)\Big]+\frac{d^2}{dx^2}\Big[\frac{\partial F}{\partial y''}\big(x,y_0(x),y_0'(x),y_0''(x)\big)\Big]\equiv0,\quad 0<x<L. \]
$(\star)$ also implies the natural boundary conditions
\[ \frac{\partial F}{\partial y''}\big(L,y(L),y'(L),y''(L)\big)=0\quad\text{and}\quad\frac{\partial F}{\partial y''}\big(0,y(0),y'(0),y''(0)\big)=0. \]
The latter two results were not asked for in the problem statement.

14.2.5.11. We are allowed to vary $\delta\mathbf{y}=\delta\mathbf{y}(x)$ and $\delta\mathbf{y}'=\delta\mathbf{y}'(x)$ arbitrarily, except for requiring that $\delta\mathbf{y}$ be continuously differentiable on $(a,b)$, $\delta\mathbf{y}(a)=\delta\mathbf{y}(b)=\mathbf{0}$, and that $\delta\mathbf{y}'$ be piecewise continuous on $(a,b)$ with $\int_a^b\delta\mathbf{y}'(x)\,dx=\mathbf{0}$. The second assumption comes from the need for $\mathbf{y}_0(x)+\delta\mathbf{y}(x)$ to satisfy the boundary conditions $\mathbf{y}(a)=\mathbf{y}_a$, $\mathbf{y}(b)=\mathbf{y}_b$. In addition, consistency, that is, $(\delta\mathbf{y})'=\delta(\mathbf{y}')$, implies that $\int_a^b\delta\mathbf{y}'(x)\,dx=\big[\delta\mathbf{y}(x)\big]_a^b=\delta\mathbf{y}(b)-\delta\mathbf{y}(a)=\mathbf{0}$. Analogous to linear approximation in $\mathbb{R}^3$, we have
\[ \delta F(x,\mathbf{y},\mathbf{y}')=\nabla_{\mathbf{y}}F\big(x,\mathbf{y},\mathbf{y}'\big)\bullet\delta\mathbf{y}+\nabla_{\mathbf{y}'}F\big(x,\mathbf{y},\mathbf{y}'\big)\bullet\delta\mathbf{y}'. \]
There is no partial derivative of $F$ with respect to an $x$ term because $x$ is not being varied in the minimization process. So,
\[ \delta J=\int_a^b\delta F\big(x,\mathbf{y},\mathbf{y}'\big)\,dx=\int_a^b\Big[\nabla_{\mathbf{y}}F\big(x,\mathbf{y},\mathbf{y}'\big)\bullet\delta\mathbf{y}+\nabla_{\mathbf{y}'}F\big(x,\mathbf{y},\mathbf{y}'\big)\bullet\delta\mathbf{y}'\Big]dx. \]
Concerning the last term, integration by parts gives
\[ \int_a^b\nabla_{\mathbf{y}'}F\big(x,\mathbf{y},\mathbf{y}'\big)\bullet\delta\mathbf{y}'\,dx=\Big[\nabla_{\mathbf{y}'}F\big(x,\mathbf{y},\mathbf{y}'\big)\bullet\delta\mathbf{y}\Big]_a^b-\int_a^b\frac{d}{dx}\Big[\nabla_{\mathbf{y}'}F\big(x,\mathbf{y},\mathbf{y}'\big)\Big]\bullet\delta\mathbf{y}\,dx. \]
But, $\delta\mathbf{y}(a)=\delta\mathbf{y}(b)=\mathbf{0}$, so
\[ \delta J=\int_a^b\nabla_{\mathbf{y}}F\big(x,\mathbf{y},\mathbf{y}'\big)\bullet\delta\mathbf{y}\,dx-\int_a^b\frac{d}{dx}\Big[\nabla_{\mathbf{y}'}F\big(x,\mathbf{y},\mathbf{y}'\big)\Big]\bullet\delta\mathbf{y}\,dx. \]
Stationarity at a global minimum gives
\[ 0=\delta J=\int_a^b\nabla_{\mathbf{y}}F\big(x,\mathbf{y},\mathbf{y}'\big)\bullet\delta\mathbf{y}\,dx-\int_a^b\frac{d}{dx}\Big[\nabla_{\mathbf{y}'}F\big(x,\mathbf{y},\mathbf{y}'\big)\Big]\bullet\delta\mathbf{y}\,dx. \]
Because $\delta\mathbf{y}(x)$ is arbitrary except for the requirements that it be continuously differentiable and satisfy $\delta\mathbf{y}(a)=\delta\mathbf{y}(b)=\mathbf{0}$, we conclude that necessary is that
\[ \nabla_{\mathbf{y}}F\big(x,\mathbf{y},\mathbf{y}'\big)-\frac{d}{dx}\Big[\nabla_{\mathbf{y}'}F\big(x,\mathbf{y},\mathbf{y}'\big)\Big]\equiv\mathbf{0},\quad a<x<b. \]
In more explicit notation, this says that if $\mathbf{y}=[\,y_1(x)\ \ldots\ y_n(x)\,]$, the necessary conditions are that, for $i=1,\ldots,n$,
\[ \frac{\partial F}{\partial y_i}\big(x,\mathbf{y},\mathbf{y}'\big)-\frac{d}{dx}\Big[\frac{\partial F}{\partial y_i'}\big(x,\mathbf{y},\mathbf{y}'\big)\Big]\equiv0,\quad a<x<b. \]

14.2.5.12. The action is
\[ I=\frac12\int_{t_1}^{t_2}\big[m_1\dot x_1^2+m_2\dot x_2^2-k_1x_1^2-k_2(x_2-x_1)^2-k_3x_2^2\big]dt. \]
The variation of the action is
\[ \delta I=\int_{t_1}^{t_2}\big[m_1\dot x_1(\delta\dot x_1)+m_2\dot x_2(\delta\dot x_2)-k_1x_1(\delta x_1)-k_2(x_2-x_1)(\delta x_2-\delta x_1)-k_3x_2(\delta x_2)\big]dt. \]
Using integration by parts, we see that the variation of the action is
\[ \delta I=\Big[m_1\dot x_1(\delta x_1)\Big]_{t_1}^{t_2}-\int_{t_1}^{t_2}m_1\ddot x_1(\delta x_1)\,dt+\Big[m_2\dot x_2(\delta x_2)\Big]_{t_1}^{t_2}-\int_{t_1}^{t_2}m_2\ddot x_2(\delta x_2)\,dt \]
\[ \qquad+\int_{t_1}^{t_2}\big[-k_1x_1(\delta x_1)-k_2(x_2-x_1)(\delta x_2-\delta x_1)-k_3x_2(\delta x_2)\big]dt. \]
Among all motions that start and end at fixed endpoints $x_1(t_1)=x_{1,1}$, $x_1(t_2)=x_{1,2}$, $x_2(t_1)=x_{2,1}$, and $x_2(t_2)=x_{2,2}$, the variations $\delta x_1(t)$ and $\delta x_2(t)$ satisfy $\delta x_1(t_1)=\delta x_1(t_2)=\delta x_2(t_1)=\delta x_2(t_2)=0$, hence
\[ \delta I=-\int_{t_1}^{t_2}m_1\ddot x_1(\delta x_1)\,dt-\int_{t_1}^{t_2}m_2\ddot x_2(\delta x_2)\,dt+\int_{t_1}^{t_2}\big[-k_1x_1(\delta x_1)-k_2(x_2-x_1)(\delta x_2-\delta x_1)-k_3x_2(\delta x_2)\big]dt, \]
that is,
\[ \delta I=\int_{t_1}^{t_2}\big[-m_1\ddot x_1-k_1x_1+k_2(x_2-x_1)\big](\delta x_1)\,dt+\int_{t_1}^{t_2}\big[-m_2\ddot x_2-k_2(x_2-x_1)-k_3x_2\big](\delta x_2)\,dt. \]
So, stationarity with respect to independent variations $\delta x_1$ and $\delta x_2$ implies
\[ \begin{cases} -m_1\ddot x_1-k_1x_1+k_2(x_2-x_1)\equiv0, & t_1<t<t_2,\\ -m_2\ddot x_2-k_2(x_2-x_1)-k_3x_2\equiv0, & t_1<t<t_2, \end{cases} \]
that is,
\[ \begin{cases} m_1\ddot x_1=-k_1x_1+k_2(x_2-x_1),\\ m_2\ddot x_2=-k_2(x_2-x_1)-k_3x_2, \end{cases}\qquad t_1<t<t_2, \]
which can be rewritten as system (5.12) of Section 5.1.

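The two-mass system just derived, $m_1\ddot x_1=-k_1x_1+k_2(x_2-x_1)$ and $m_2\ddot x_2=-k_2(x_2-x_1)-k_3x_2$, conserves the energy $T+V$ whose Lagrangian generated it, which gives a quick numerical consistency check. A sketch; the masses, stiffnesses, and initial data are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 1.0, 2.0          # illustrative parameters, not from the text
k1, k2, k3 = 3.0, 1.5, 2.0

def rhs(t, s):
    # s = [x1, x1', x2, x2']; the stationarity equations solved for accelerations
    x1, v1, x2, v2 = s
    a1 = (-k1 * x1 + k2 * (x2 - x1)) / m1
    a2 = (-k2 * (x2 - x1) - k3 * x2) / m2
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, -0.5, 0.25], rtol=1e-10, atol=1e-10)

x1, v1, x2, v2 = sol.y
E = 0.5 * (m1 * v1**2 + m2 * v2**2) + 0.5 * (k1 * x1**2 + k2 * (x2 - x1)**2 + k3 * x2**2)
print("energy drift over the run:", E.max() - E.min())
```

The drift stays at the level of the integrator tolerance, as it should for equations derived from the action with $T+V$ conserved.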
Section 14.3.2

14.3.2.1. This problem partially fits the form of Theorem 14.2 in Section 14.3, with $F(x,u,u')=(u')^2-q(x)u^2$, $G(x,u,u')=u^2$, and $u(0)=0$, but the second BC, $u'(L)=0$, does not fit the Theorem. In order to adapt Theorem 14.2 to problem 14.3.2.1 we only need to calculate as in the derivation of (14.18) in Section 14.2.
Define $J[\,u\,]\triangleq\int_0^LF\big(x,u(x),u'(x)\big)\,dx$, where $F(x,u,u')\triangleq(u')^2-q(x)u^2$ and $u(x)$ is an admissible function that satisfies the BCs $u(0)=u'(L)=0$. We are allowed to vary $\delta u=\delta u(x)$ and $\delta u'=\delta u'(x)$ arbitrarily, except for requiring that $\delta u$ be continuously differentiable on $(0,L)$, $\delta u(0)=0$, $\delta u'$ be piecewise continuous on $(0,L)$, $(\delta u')(L)=0$, and $(\delta u)'=\delta(u')$. The second assumption comes from the need for $u_0(x)+\delta u(x)$ to satisfy the boundary condition $u(0)=0$, the next to last assumption follows from the need for $u_0(x)+\delta u(x)$ to satisfy the boundary condition $u'(L)=0$, and the last assumption is called "consistency."
Analogous to linear approximation in $\mathbb{R}^3$, we have
\[ \delta F(x,u,u')=\frac{\partial F}{\partial u}(x,u,u')\cdot\delta u+\frac{\partial F}{\partial u'}(x,u,u')\cdot\delta u'. \]
There is no partial derivative of $F$ with respect to an $x$ term because $x$ is not being varied in the minimization process. So,
\[ \delta J=\int_0^L\delta F(x,u,u')\,dx=\int_0^L\Big[\frac{\partial F}{\partial u}\big(x,u(x),u'(x)\big)\cdot\delta u(x)+\frac{\partial F}{\partial u'}\big(x,u(x),u'(x)\big)\cdot\delta u'(x)\Big]dx. \]
Concerning the last term, integration by parts gives
\[ \int_0^L\frac{\partial F}{\partial u'}\big(x,u(x),u'(x)\big)\cdot\delta u'(x)\,dx=\Big[\frac{\partial F}{\partial u'}\big(x,u(x),u'(x)\big)\cdot\delta u(x)\Big]_0^L-\int_0^L\frac{d}{dx}\Big[\frac{\partial F}{\partial u'}\big(x,u(x),u'(x)\big)\Big]\cdot\delta u(x)\,dx. \]
But, $\delta u(0)=0$, so $\frac{\partial F}{\partial u'}\big(0,u(0),u'(0)\big)\cdot\delta u(0)=0$. As to the other term on the boundary, in problem 14.3.2.1, $F(x,u,u')\triangleq(u')^2-q(x)u^2$, so
\[ \frac{\partial F}{\partial u'}\big(L,u(L),u'(L)\big)\cdot\delta u(L)=2u'(L)\cdot\delta u(L)=2\cdot0\cdot\delta u(L)=0. \]
Putting everything together,
\[ \delta J=\int_0^L\frac{\partial F}{\partial u}\big(x,u(x),u'(x)\big)\cdot\delta u(x)\,dx-\int_0^L\frac{d}{dx}\Big[\frac{\partial F}{\partial u'}\big(x,u(x),u'(x)\big)\Big]\cdot\delta u(x)\,dx. \]
Having adapted the calculation of $\delta J$ to problem 14.3.2.1, we can use the conclusion of Theorem 14.2 in Section 14.3 to see that there must be a Lagrange multiplier $\lambda$ such that the minimizer, $u(x)$, must satisfy the equation
\[ 0\equiv\frac{\partial(F-\lambda G)}{\partial u}-\frac{d}{dx}\Big[\frac{\partial(F-\lambda G)}{\partial u'}\Big]=-2q(x)u-2\lambda u-2\big(u'-\lambda\cdot0\big)',\quad 0<x<L, \]
that is, $u''+\big(q(x)+\lambda\big)u=0$, $0<x<L$. The latter is an ODE satisfied by all solutions of the calculus of variations problem. The desired ODE-BVP is
\[ \begin{cases} u''+\big(q(x)+\lambda\big)u=0, & 0<x<L,\\ u(0)=0,\quad u'(L)=0. \end{cases} \]

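The eigenvalue problem $u''+(q(x)+\lambda)u=0$, $u(0)=0$, $u'(L)=0$ reached in 14.3.2.1 can be solved numerically by shooting: integrate from $u(0)=0$, $u'(0)=1$ and find the $\lambda$ at which $u'(L)=0$. A sketch under the illustrative assumptions $q\equiv0$ and $L=1$, for which the smallest eigenvalue is exactly $(\pi/2)^2$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

L = 1.0
q = lambda x: 0.0  # sample choice; the text leaves q(x) general

def uprime_at_L(lam):
    # Shoot from u(0)=0, u'(0)=1; eigenvalues are the roots of u'(L) as a
    # function of lambda (the natural BC from the variational derivation).
    def rhs(x, s):
        u, du = s
        return [du, -(q(x) + lam) * u]
    sol = solve_ivp(rhs, (0.0, L), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

lam = brentq(uprime_at_L, 1.0, 5.0)  # bracket chosen around the known root
print(lam, (np.pi / 2)**2)
```

For $q\equiv0$ the exact eigenfunction is $u=\sin(\sqrt\lambda\,x)$, and the computed root agrees with $(\pi/2)^2\approx2.4674$ to integrator accuracy.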
14.3.2.2. This problem partially fits the form of Theorem 14.2 in Section 14.3, with $F(x,u,u')=(u')^2-q(x)u^2$, $G(x,u,u')=\sigma(x)u^2$, and $u(0)=0$, but the second BC, $u'(L)=0$, does not fit the Theorem. In order to adapt Theorem 14.2 to problem 14.3.2.2 we only need to calculate as in the derivation of (14.18) in Section 14.2.
Define $J[\,u\,]\triangleq\int_0^LF\big(x,u(x),u'(x)\big)\,dx$, where $F(x,u,u')\triangleq(u')^2-q(x)u^2$ and $u(x)$ is an admissible function that satisfies the BCs $u(0)=u'(L)=0$. We are allowed to vary $\delta u$ and $\delta u'$ arbitrarily, except for requiring that $\delta u$ be continuously differentiable on $(0,L)$, $\delta u(0)=0$, $\delta u'$ be piecewise continuous on $(0,L)$, $(\delta u')(L)=0$, and $(\delta u)'=\delta(u')$. The second assumption comes from the need for $u_0(x)+\delta u(x)$ to satisfy the boundary condition $u(0)=0$, the next to last assumption follows from the need for $u_0(x)+\delta u(x)$ to satisfy the boundary condition $u'(L)=0$, and the last assumption is called "consistency."
Exactly as in problem 14.3.2.1, integration by parts and the boundary conditions give
\[ \delta J=\int_0^L\frac{\partial F}{\partial u}\big(x,u(x),u'(x)\big)\cdot\delta u(x)\,dx-\int_0^L\frac{d}{dx}\Big[\frac{\partial F}{\partial u'}\big(x,u(x),u'(x)\big)\Big]\cdot\delta u(x)\,dx, \]
because $\delta u(0)=0$ kills the boundary term at $x=0$ and $\frac{\partial F}{\partial u'}\big(L,u(L),u'(L)\big)\cdot\delta u(L)=2u'(L)\cdot\delta u(L)=2\cdot0\cdot\delta u(L)=0$ kills the boundary term at $x=L$.
Having adapted the calculation of $\delta J$ to problem 14.3.2.2, we can use the conclusion of Theorem 14.2 in Section 14.3 to see that there must be a Lagrange multiplier $\lambda$ such that the minimizer, $u(x)$, must satisfy the equation
\[ 0\equiv\frac{\partial(F-\lambda G)}{\partial u}-\frac{d}{dx}\Big[\frac{\partial(F-\lambda G)}{\partial u'}\Big]=-2q(x)u-2\lambda\sigma(x)u-2\big(u'-\lambda\cdot0\big)',\quad 0<x<L, \]
that is, $u''+\big(q(x)+\lambda\sigma(x)\big)u=0$, $0<x<L$. The desired ODE-BVP is
\[ \begin{cases} u''+\big(q(x)+\lambda\sigma(x)\big)u=0, & 0<x<L,\\ u(0)=0,\quad u'(L)=0. \end{cases} \]

14.5.4.2. Define $J[\,y\,]\triangleq\int_0^1\big[(y')^2+\cos(\pi x)\,y^2-2xy\big]dx$, let $h=0.2$ and $x_j=jh$, and use the uniform tent basis functions $T_j(x)=\phi\big(\frac{x-x_j}{h}\big)$, $j=0,1,2,3,4$. Using only four terms, an approximate solution of the ODE-BVP is given by
\[ y(x)\approx0.03149295017\,T_1(x)+0.05586880884\,T_2(x)+0.06481824190\,T_3(x)+0.04905365855\,T_4(x). \]
Here are the details of the Mathematica work used to find this approximate solution: After defining h = 0.2, the basic tent function $T_1(x)$ was, as in Example 14.16, defined by

T1[t_] := Piecewise[{{t/h, 0 < t ≤ h}, {1 − (t − h)/h, h < t ≤ 2h}, {0, −1 < t ≤ 0}, {0, 2h < t ≤ 1}}]
Figure 8: Answer key for problem 14.5.4.1

Noting that $T_2(x)=T_1(x-h)$, $T_3(x)=T_1(x-2h)$, $T_4(x)=T_1(x-3h)$, we defined

y[x_, c1_, c2_, c3_, c4_] := c1*T1[x] + c2*T1[x − h] + c3*T1[x − 2h] + c4*T1[x − 3h]

and the functional by

J[c1_, c2_, c3_, c4_] := Evaluate[Integrate[(D[y[x, c1, c2, c3, c4], x])^2 + Cos[π x]*(y[x, c1, c2, c3, c4])^2 − 2 x y[x, c1, c2, c3, c4], {x, 0, 1}]]

Then we used FindMinimum[J[c1, c2, c3, c4], {{c1, 0.1}, {c2, 0.1}, {c3, 0.1}, {c4, 0.1}}] to get output

{−0.021356, {c1 → 0.03149295017, c2 → 0.05586880884, c3 → 0.06481824190, c4 → 0.04905365855}}.

In a figure we show the approximate solution of the ODE-BVP as a dashed, red graph and also Mathematica's approximate solution y(x) as the solid, blue graph. [By the way, the problem did not ask for a graph of the approximate solution, but here it is if you're curious. As in Example 14.16 in Section 14.5, the Rayleigh-Ritz method's approximate solution is given as a dashed, red graph and Mathematica's approximate solution y(x) as the solid, blue graph.]
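The same Rayleigh-Ritz minimization can be reproduced outside Mathematica. A hedged Python sketch: the exact integral is replaced by per-segment numerical quadrature (the tent-function combination is linear on each subinterval, so the integrand is smooth segment by segment), and the recovered coefficients should agree with the values above to a few decimal places:

```python
import numpy as np
from scipy.optimize import minimize

h = 0.2
nodes = np.linspace(0.0, 1.0, 6)  # x_0, ..., x_5

def J(c):
    # y = c1*T1 + ... + c4*T4 is piecewise linear, pinned to 0 at x = 0 and 1.
    vals = np.concatenate(([0.0], c, [0.0]))  # nodal values of y
    total = 0.0
    for j in range(5):  # integrate segment by segment (no kink inside a segment)
        xs = np.linspace(nodes[j], nodes[j + 1], 201)
        slope = (vals[j + 1] - vals[j]) / h
        ys = vals[j] + slope * (xs - nodes[j])
        f = slope**2 + np.cos(np.pi * xs) * ys**2 - 2.0 * xs * ys
        total += (xs[1] - xs[0]) * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    return total

res = minimize(J, np.full(4, 0.1), method='BFGS')
print(res.fun)  # should be near the FindMinimum value, about -0.021356
print(res.x)    # should be near (0.0315, 0.0559, 0.0648, 0.0491)
```

Because the quadratic form $\int_0^1[(y')^2+\cos(\pi x)y^2]\,dx$ is positive definite on these basis functions, the minimizer is unique and BFGS converges to it from the same starting guesses used with FindMinimum.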
Figure 9: Answer key for problem 14.5.4.2

14.5.4.3. Define $J[\,y\,]\triangleq\int_0^1\big[(y')^2+xy^2-2(1+x)y\big]dx$. Let $h=0.5$ and $x_j=jh$ for $j=-1,0,1,2,3$, let $\psi(t)$ be the cubic spline, and let the cubic uniform basis spline functions be $C_j(x)\triangleq\psi\big(\frac{x-x_j}{h}\big)$, $j=-1,0,1,2,3$. Using
only five terms, an approximate solution of the ODE-BVP is given by
\[ y(x)\approx-0.707679286807\,C_{-1}(x)+0.0445065089233\,C_0(x)+0.529653251113\,C_1(x)+0.697245134775\,C_2(x)+0.529653251113\,C_3(x). \]
Here are the details of the Mathematica work used to find this approximate solution: After defining h = 0.5, the cubic spline BB[x] $\triangleq\psi(x)$ was, as in Example 14.17, defined by

BB[x_] := Piecewise[{{3x^3 − 6x^2 + 4, …}, {(2 − x)^3, …}, {0, …}}]

$a>0$, whose solution is the set of points $z$ that are a distance $a$ from the point $z_0$. So, $|z+1-i\sqrt3|=1$ is the circle of radius 1 centered at $-1+i\sqrt3$.

(b) Let $z=x+iy$, where $x,y$ are real. $|z-1|=|z-i2|$ implies $|(x-1)+iy|^2=|z-1|^2=|z-i2|^2=|x+i(y-2)|^2$ $\iff(x-1)^2+y^2=x^2+(y-2)^2\iff x^2-2x+1+y^2=x^2+y^2-4y+4\iff-2x+1=-4y+4$, hence the points lie on the line $y=\frac12x+\frac34$. We should check that all points on that line satisfy the original equation, which we manipulated by squaring and thus could conceivably have had spurious solutions created:
\[ \mathrm{LHS}=|z-1|=\Big|(x-1)+i\Big(\frac12x+\frac34\Big)\Big|=\sqrt{(x-1)^2+\Big(\frac12x+\frac34\Big)^2}=\sqrt{x^2-2x+1+\frac14x^2+\frac34x+\frac9{16}}=\sqrt{\frac54x^2-\frac54x+\frac{25}{16}} \]
versus
\[ \mathrm{RHS}=|z-i2|=\Big|x+i\Big(\frac12x+\frac34-2\Big)\Big|=\Big|x+i\Big(\frac12x-\frac54\Big)\Big|=\sqrt{x^2+\frac14x^2-\frac54x+\frac{25}{16}}=\sqrt{\frac54x^2-\frac54x+\frac{25}{16}}=\mathrm{LHS}. \]
So, every point on the line $y=\frac12x+\frac34$ does satisfy the original equation, $|z-1|=|z-i2|$.
(c) Let $z=x+iy$, where $x,y$ are real. $|z-i|=|z+1|$ implies $|x+i(y-1)|^2=|z-i|^2=|z+1|^2=|(x+1)+iy|^2\iff x^2+(y-1)^2=(x+1)^2+y^2\iff x^2+y^2-2y+1=x^2+2x+1+y^2\iff-2y+1=2x+1$, hence the points lie on the line $y=-x$. We should check that all points, $x-ix$, on that line satisfy the original equation, which we manipulated by squaring and thus could conceivably have had spurious solutions created:
\[ \mathrm{LHS}=|z-i|=|(x-ix)-i|=|x-i(x+1)|=\sqrt{x^2+(x+1)^2}=\sqrt{2x^2+2x+1} \]
versus
\[ \mathrm{RHS}=|z+1|=|(x-ix)+1|=|(x+1)-ix|=\sqrt{(x+1)^2+x^2}=\sqrt{2x^2+2x+1}=\mathrm{LHS}. \]
So, every point on the line $y=-x$ does satisfy the original equation, $|z-i|=|z+1|$.

© Larry Turyn, January 8, 2014

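The two perpendicular-bisector computations above can be spot-checked numerically with complex arithmetic; a small sketch:

```python
import numpy as np

xs = np.linspace(-10.0, 10.0, 401)

# (b): points z on y = x/2 + 3/4 satisfy |z - 1| = |z - 2i|
z = xs + 1j * (xs / 2 + 0.75)
assert np.allclose(np.abs(z - 1), np.abs(z - 2j))

# (c): points z on y = -x satisfy |z - i| = |z + 1|
z = xs - 1j * xs
assert np.allclose(np.abs(z - 1j), np.abs(z + 1))

print("both perpendicular-bisector checks pass")
```

This mirrors the algebraic verification: each line is the set of points equidistant from the two given points, so no spurious solutions were introduced by squaring.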
15.1.4.19. $w=f(z)=\dfrac1z$ and $z\neq0\iff z=f(w)=\dfrac1w$ and $w\neq0$.
(a) Define $A=\{z:|z|=3\}$. We have $w\in f(A)\iff3=|z|=\big|\frac1w\big|=\frac1{|w|}$ and $w\neq0\iff|w|=\frac13$ and $w\neq0$, so the image of $A$ under the inversion mapping is $f(A)=\big\{w:|w|=\frac13\big\}$. Also, see the picture.
Figure 4: Transformation in 15.1.4.19(a)

(b) Define $A=\{z:|z-1|=1\}$. Note that $0\in A$. We have
\[ w\in f(A)\iff1=|z-1|=\Big|\frac1w-1\Big|=\frac{|1-w|}{|w|}\ \text{and}\ w\neq0\iff|w|=|1-w|\ \text{and}\ w\neq0. \]
Let $w=u+iv$, where $u,v$ are real. Then $|w|=|1-w|\Rightarrow u^2+v^2=|w|^2=|1-w|^2=|(1-u)+iv|^2=(1-u)^2+v^2=1-2u+u^2+v^2\iff0=1-2u\iff u=\frac12$, so the image of $\widehat A\triangleq\{z:|z-1|=1,\ z\neq0\}$ under the inversion mapping is contained in the line $L\triangleq\big\{w=\frac12+iv:-\infty<v<\infty\big\}$.
Can we change contained in to equals? We use analysis similar to that used in Example 15.4 in Section 15.1: Does the image of $\widehat A$ include all of the points in the line $L$ in the $w$-plane? Every $z$ in $A=\{z:|z-1|=1\}$ is of the form $z=1+e^{it}$ for some real $t$. We define
\[ \widehat A=\{z\neq0:z\ \text{is in}\ A\}=\big\{1+e^{it}:-\pi<t<\pi\big\}. \]
It follows that for every $z$ in $\widehat A$,
\[ u+iv=w=f(z)=\frac1{1+e^{it}}=\frac1{(1+\cos t)+i\sin t}. \]
Rationalizing the denominator gives
\[ u+iv=\frac{(1+\cos t)-i\sin t}{(1+\cos t)^2+\sin^2t}=\frac{(1+\cos t)-i\sin t}{1+2\cos t+\cos^2t+\sin^2t}=\frac{(1+\cos t)-i\sin t}{2+2\cos t}=\frac12-i\,\frac{\sin t}{2(1+\cos t)}. \]
For any $w=u+iv$ satisfying $\mathrm{Re}(w)=\frac12$, we need to show that there is at least one value of $t$ in the interval $-\pi<t<\pi$ for which $g(t)\triangleq-\dfrac{\sin t}{2(1+\cos t)}=v$. [By the way, we chose the interval to be $-\pi<t<\pi$ in order to avoid $t$ for which the denominator, $1+\cos t$, is zero.] We calculate
\[ (\star)\qquad \lim_{t\to-\pi^+}g(t)=\lim_{t\to-\pi^+}-\frac{\sin t}{2(1+\cos t)}=\lim_{t\to-\pi^+}-\frac{(1-\cos t)\sin t}{2(1+\cos t)(1-\cos t)}=\lim_{t\to-\pi^+}-\frac{1-\cos t}{2\sin t}=\infty, \]
because $1-\cos t\to2$ while $\sin t\to0^-$, and similarly
\[ (\star\star)\qquad \lim_{t\to\pi^-}g(t)=\lim_{t\to\pi^-}-\frac{1-\cos t}{2\sin t}=-\infty. \]
Because $g(t)$ is continuous for $-\pi<t<\pi$, $(\star)$ and $(\star\star)$ imply $g(t)$ takes on all values in the interval $(-\infty,\infty)$. This concludes the explanation why $f(\widehat A)=L=\big\{w=\frac12+iv:-\infty<v<\infty\big\}$, a vertical line in the $w$-plane. Also, see the picture.
Figure 5: Transformation in 15.1.4.19(b)

(c) Define $A=\{z:|z+2|=2\}$. Note that $0\in A$. We have
\[ w\in f(A)\iff2=|z+2|=\Big|\frac1w+2\Big|=\frac{|1+2w|}{|w|}\ \text{and}\ w\neq0\iff2|w|=|1+2w|\ \text{and}\ w\neq0. \]
Let $w=u+iv$, where $u,v$ are real. Then $2|w|=|1+2w|\Rightarrow4u^2+4v^2=(2|w|)^2=|1+2w|^2=|(1+2u)+i2v|^2=(1+2u)^2+4v^2=1+4u+4u^2+4v^2\iff0=1+4u\iff u=-\frac14$, so the image of $\widehat A\triangleq\{z:|z+2|=2,\ z\neq0\}$ under the inversion mapping is contained in the line $L\triangleq\big\{w=-\frac14+iv:-\infty<v<\infty\big\}$.
Can we change contained in to equals? We use analysis similar to that used in Example 15.4 in Section 15.1: Does the image of $\widehat A$ include all of the points in the line $L$ in the $w$-plane? Every $z$ in $A=\{z:|z+2|=2\}$ is of the form $z=-2+2e^{it}$ for some real $t$. We define
\[ \widehat A=\{z\neq0:z\ \text{is in}\ A\}=\big\{-2+2e^{it}:0<t<2\pi\big\}. \]
It follows that for every $z$ in $\widehat A$,
\[ u+iv=w=f(z)=\frac1{-2+2e^{it}}=\frac1{(-2+2\cos t)+i2\sin t}=\frac12\cdot\frac1{(-1+\cos t)+i\sin t}. \]
Rationalizing the denominator gives
\[ u+iv=\frac{(-1+\cos t)-i\sin t}{2\big[(-1+\cos t)^2+\sin^2t\big]}=\frac{(-1+\cos t)-i\sin t}{2\big[1-2\cos t+\cos^2t+\sin^2t\big]}=\frac{(-1+\cos t)-i\sin t}{4(1-\cos t)}=-\frac14-i\,\frac{\sin t}{4(1-\cos t)}. \]
For any $w=u+iv$ satisfying $\mathrm{Re}(w)=-\frac14$, we need to show that there is at least one value of $t$ in the interval $0<t<2\pi$ for which $g(t)\triangleq-\dfrac{\sin t}{4(1-\cos t)}=v$. [By the way, we chose the interval to be $0<t<2\pi$ in order to avoid $t$ for which the denominator, $1-\cos t$, is zero.] We calculate
\[ (\star)\qquad \lim_{t\to0^+}g(t)=\lim_{t\to0^+}-\frac{\sin t}{4(1-\cos t)}=\lim_{t\to0^+}-\frac{(1+\cos t)\sin t}{4(1+\cos t)(1-\cos t)}=\lim_{t\to0^+}-\frac{1+\cos t}{4\sin t}=-\infty, \]
and similarly
\[ (\star\star)\qquad \lim_{t\to2\pi^-}g(t)=\lim_{t\to2\pi^-}-\frac{1+\cos t}{4\sin t}=\infty. \]
Because $g(t)$ is continuous for $0<t<2\pi$, $(\star)$ and $(\star\star)$ imply $g(t)$ takes on all values in the interval $(-\infty,\infty)$. This concludes the explanation why $f(\widehat A)=L=\big\{w=-\frac14+iv:-\infty<v<\infty\big\}$, a vertical line in the $w$-plane. Also, see the picture.

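Parts (b) and (c) can be confirmed numerically: sampling each circle (omitting the excluded point $z=0$) and applying $w=1/z$, every image point should have the predicted constant real part. A sketch:

```python
import numpy as np

# (b): |z - 1| = 1 without z = 0 (z = 0 occurs at t = +/- pi)
t = np.linspace(-np.pi + 0.01, np.pi - 0.01, 500)
w = 1.0 / (1.0 + np.exp(1j * t))
assert np.allclose(w.real, 0.5)

# (c): |z + 2| = 2 without z = 0 (z = 0 occurs at t = 0)
t = np.linspace(0.01, 2 * np.pi - 0.01, 500)
w = 1.0 / (-2.0 + 2.0 * np.exp(1j * t))
assert np.allclose(w.real, -0.25)

print("images lie on Re w = 1/2 and Re w = -1/4, as computed")
```

The imaginary parts of the sampled images range over a wide interval and grow without bound near the deleted point, consistent with the limit arguments showing the image is the whole vertical line.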
Figure 6: Transformation in 15.1.4.19(c)

(d) Define $A=\{z:|2z+i|=1\}$. Note that $0\in A$. We have
\[ w\in f(A)\iff1=|2z+i|=\Big|\frac2w+i\Big|=\frac{|2+iw|}{|w|}\ \text{and}\ w\neq0\iff|w|=|2+iw|\ \text{and}\ w\neq0. \]
Let $w=u+iv$, where $u,v$ are real. Then $|w|=|2+iw|\Rightarrow u^2+v^2=|w|^2=|2+iw|^2=|(2-v)+iu|^2=(2-v)^2+u^2=4-4v+v^2+u^2\iff0=4(1-v)\iff v=1$, so the image of $\widehat A\triangleq\{z:|2z+i|=1,\ z\neq0\}$ under the inversion mapping is contained in the line $L\triangleq\{w=u+i:-\infty<u<\infty\}$.
Can we change contained in to equals? Every $z$ in $A=\{z:|2z+i|=1\}=\big\{z:\big|z+\frac i2\big|=\frac12\big\}$ is of the form
\[ z=-\frac i2+\frac12e^{it} \]
for some real $t$. Note that $z=0$ corresponds to $e^{it}=i$, that is, to $t=\frac\pi2$ (mod $2\pi$), so we define
\[ \widehat A=\{z\neq0:z\ \text{is in}\ A\}=\Big\{\frac12\big(-i+e^{it}\big):\frac\pi2<t<\frac{5\pi}2\Big\}. \]
It follows that for every $z$ in $\widehat A$,
\[ u+iv=w=f(z)=\frac2{-i+e^{it}}=\frac2{\cos t+i(-1+\sin t)}. \]
Rationalizing the denominator gives
\[ u+iv=\frac{2\big[\cos t-i(-1+\sin t)\big]}{\cos^2t+(-1+\sin t)^2}=\frac{2\big[\cos t-i(-1+\sin t)\big]}{\cos^2t+1-2\sin t+\sin^2t}=\frac{2\big[\cos t-i(-1+\sin t)\big]}{2(1-\sin t)}=\frac{\cos t}{1-\sin t}+i\,\frac{1-\sin t}{1-\sin t}=\frac{\cos t}{1-\sin t}+i. \]
For any $w=u+iv$ satisfying $\mathrm{Im}(w)=1$, we need to show that there is at least one value of $t$ in the interval $\frac\pi2<t<\frac{5\pi}2$ for which $g(t)\triangleq\dfrac{\cos t}{1-\sin t}=u$. Arguing as in parts (b) and (c), $g$ is continuous on $\big(\frac\pi2,\frac{5\pi}2\big)$, and since $\frac{\cos t}{1-\sin t}=\frac{1+\sin t}{\cos t}$,
\[ \lim_{t\to\pi/2^+}g(t)=-\infty\quad\text{and}\quad\lim_{t\to5\pi/2^-}g(t)=\infty, \]
so $g(t)$ takes on all values in $(-\infty,\infty)$ and $f(\widehat A)=L$, a horizontal line in the $w$-plane.

(c) By Theorem 15.8 in Section 15.2, where it exists, namely at points $z=x+i0$, we have
\[ f'(z)=f'(x+i0)=\frac{\partial u}{\partial x}(x,0)+i\,\frac{\partial v}{\partial x}(x,0)=2x+i0=2x=z+\bar z. \]
15.2.5.13. When $x,y$ are real we have $f(z)=(x-iy)(2-x^2-y^2)=x(2-x^2-y^2)-iy(2-x^2-y^2)=u(x,y)+iv(x,y)$, so the CR (Cauchy-Riemann) equations are
\[ \begin{cases} \dfrac{\partial u}{\partial x}=\dfrac{\partial v}{\partial y}: & 2-y^2-3x^2=-2+x^2+3y^2,\\[4pt] \dfrac{\partial u}{\partial y}=-\dfrac{\partial v}{\partial x}: & -2xy=-2xy. \end{cases} \]
In the $z$-plane, the CR equations are satisfied if, and only if, $2-y^2-3x^2=-2+x^2+3y^2$, if, and only if, $4=4x^2+4y^2$, that is, if, and only if, $x^2+y^2=1$. The functions $u(x,y)=x(2-x^2-y^2)$ and $v(x,y)=-y(2-x^2-y^2)$ are polynomials in $x,y$, hence are continuous everywhere. By Theorem 15.8 in Section 15.2, $f$ is differentiable only at the points on the circle $x^2+y^2=1$.

15.2.5.14. (a) $f(z)$ is differentiable at $z=0$, because there exists
\[ f'(0)\triangleq\lim_{z\to0}\frac{f(z)-f(0)}{z}=\lim_{x+iy\to0}\frac{x^2\sin\frac1x+ixy\sin\frac1x-0}{x+iy}=\lim_{x+iy\to0}\frac{\big(x^2+ixy\big)\big(\sin\frac1x\big)(x-iy)}{x^2+y^2} \]
\[ =\lim_{x+iy\to0}\frac{\big(x^3+xy^2+i(x^2y-x^2y)\big)\sin\frac1x}{x^2+y^2}=\lim_{x+iy\to0}\frac{x(x^2+y^2)\sin\frac1x}{x^2+y^2}=\lim_{x\to0}x\sin\frac1x=0, \]
by the Squeeze Theorem of Calculus 1: For all $x\neq0$, $-1\leq\sin\frac1x\leq1\Rightarrow-|x|\leq x\sin\frac1x\leq|x|$.
(b) Using the definition of $f(z)$, for $x\neq0$, $u(x,y)\triangleq x^2\sin\frac1x$, and for $x=0$, $u(x,y)\triangleq0$. So, for $x\neq0$,
\[ \frac{\partial u}{\partial x}(x,y)=\frac{\partial}{\partial x}\Big[x^2\sin\frac1x\Big]=2x\sin\frac1x+x^2\Big(\cos\frac1x\Big)\frac{d}{dx}\Big[\frac1x\Big]=2x\sin\frac1x-\cos\frac1x. \]
For $x=0$, we have
\[ \frac{\partial u}{\partial x}(0,y)\triangleq\lim_{x\to0}\frac{u(x,y)-u(0,y)}{x}=\lim_{x\to0}\frac{x^2\sin\frac1x-0}{x}=\lim_{x\to0}x\sin\frac1x=0, \]
   

x

= 0,

.

  

∂u (x, y) is not continuous at (x, y) = (0, 0), because ∂x      ∂u 1 1 lim (x, y) = lim 2x sin − cos x→0 ∂x x→0 x x   1 does not exist. Why? While there exists limx→0 2x sin by the Squeeze Theorem of Calculus 1, x   ∂u 1 lim (x, y) = − lim cos x→0 ∂x x→0 x

In fact,

1 1 → ∞ as x → 0+ , and cos (θ) does not exist due to oscillation, as θ , → ∞. x x ∂u (c) These results do not contradict Theorem 15.7 in Section 15.2 because (0, y) does exist, even though ∂x ∂u lim (x, y) does not exist. x→0 ∂x does not exist because

c Larry

Turyn, January 8, 2014

page 16

b be a unit tangent vector at a point on the streamline Ψ = k2 . Because ∇Ψ is normal to the 15.2.5.15. T b Define streamline at that point, ∇Ψ 6= 0 and 0 = ∇Ψ • T. q1 , It follows that

1 ∇Ψ. ||∇Ψ||

n o b q1 , T

is an o.n. basis for R2 . Express v in terms of this basis to get b T b = 0 · q1 + constant · T. b v = hv, q1 i q1 + h∇Ψ, Ti This says that v is parallel to the tangent vector to the streamline, that is, the streamline gives the flow of fluid particles.   3  z /|z|2 , if z 6= 0  is not differentiable at z = 0, because for z = ρeiθ in polar 15.2.5.16. f (z) ,   0, if z = 0 exponential form, z3 −0 f (z) − f (0) (ρe−iθ )3 z3 |z|2 0 f (0) , lim = lim = lim = lim = lim e−i4θ z→0 z→0 z→0 z|z|2 ρ→0 (ρeiθ )ρ2 ρ→0 z z does not exist because e−i4θ can be any complex number of magnitude one. As for the Cauchy-Riemann equations, we have, for z 6= 0, f (z) =

z3 (x − iy)3 x3 − 3xy 2 −3x2 y + y 3 = = + i = u(x, y) + iv(x, y). |z|2 x2 + y 2 x2 + y 2 x2 + y 2

At z = x + iy = 0 + i0, both u(0, 0) , 0 and v(0, 0) , 0. The Cauchy-Riemann equations are satisfied at z = 0, because x3 − 0 −0 2 u(x, 0) − u(0, 0) x3 ∂u (0, 0) , lim = lim x + 0 = lim 3 = lim 1 = 1, x→0 x→0 x→0 x x→0 ∂x x x 0 + y3 −0 ∂v v(0, y) − v(0, 0) y3 ∂u 0 + y2 (0, 0) , lim = lim = lim 3 = lim 1 = 1 = (0, 0), y→0 y→0 y→0 y y→0 ∂y y y ∂x and

0−0 −0 ∂u u(0, y) − u(0, 0) x2 + y 2 (0, 0) , lim = lim = lim 0 = 0, y→0 y→0 y→0 ∂y y y −0 + 0 −0 ∂v v(x, 0) − v(0, 0) ∂u x2 + y 2 − (0, 0) , lim = lim = lim 0 = 0 = (0, 0) . x→0 x→0 x→0 ∂x x x ∂y

So, u, v satisfying the Cauchy-Riemann equations at a point z0 is not enough to imply that f = u + iv is differentiable at z0 .
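A numerical illustration of the directional dependence (an added check, not in the original): the difference quotient f(z)/z = e^{−i4θ} takes different values along different rays through 0, so f′(0) cannot exist.

```python
import cmath

# f(z) = conj(z)^3 / |z|^2 for z != 0; the quotient f(z)/z equals e^{-i 4 theta}
def f(z):
    return 0 if z == 0 else (z.conjugate() ** 3) / (abs(z) ** 2)

rho = 1e-6
z0 = complex(rho, 0.0)                     # theta = 0    -> quotient e^{0}      = 1
z1 = rho * cmath.exp(1j * cmath.pi / 8)    # theta = pi/8 -> quotient e^{-i pi/2} = -i
q_real = f(z0) / z0
q_diag = f(z1) / z1
```

Since the two quotients differ, the limit as z → 0 does not exist, in agreement with the text.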


Section 15.3.3

15.3.3.1. When x, y are real we have f(z) = x − y + i(x + y) = u(x, y) + iv(x, y), so the CR (Cauchy-Riemann) equations are

∂u/∂x = 1 = ∂v/∂y  and  ∂u/∂y = −1 = −∂v/∂x.

In the z−plane, the CR equations are satisfied everywhere. The functions u(x, y) = x − y and v(x, y) = x + y are polynomials in x, y, hence are continuous everywhere. By Theorem 15.8 in Section 15.2, f(z) is differentiable everywhere in the z−plane, hence f(z) is analytic everywhere in the z−plane. By definition, that means that f(z) is an entire function. Also, everywhere

f′(z) = f′(x + iy) = ∂u/∂x (x, y) + i ∂v/∂x (x, y) = 1 + i.

15.3.3.2. Using Definition 15.4 in Section 15.2, for any z₀ ≠ 0,

f′(z₀) = lim_{z→z₀} [f(z) − f(z₀)]/(z − z₀) = lim_{z→z₀} [(3/2)z + 1/(2z) − (3/2)z₀ − 1/(2z₀)]/(z − z₀) = lim_{z→z₀} [(3/2)(z − z₀) + (z₀ − z)/(2z₀z)]/(z − z₀) = lim_{z→z₀} [3/2 − 1/(2z₀z)] = 3/2 − 1/(2z₀²).

Since every point z₀ in the set S = {z : z ≠ 0} has a disk D_r(z₀) contained in S, f(z) is analytic in the set S. There, f′(z) = 3/2 − 1/(2z²).

15.3.3.3. Using Definition 15.4 in Section 15.2, for any z₀ ≠ i,

f′(z₀) = lim_{z→z₀} [f(z) − f(z₀)]/(z − z₀) = lim_{z→z₀} [1/(z − i) − 1/(z₀ − i)]/(z − z₀) = lim_{z→z₀} [(z₀ − z)/((z₀ − i)(z − i))]/(z − z₀) = lim_{z→z₀} −1/((z₀ − i)(z − i)) = −1/(z₀ − i)².

Since every point z₀ in the set S = {z : z ≠ i} has a disk D_r(z₀) contained in S, f(z) is analytic in the set S. There, f′(z) = −1/(z − i)².

15.3.3.4. Assume that the unspecified domain D is the whole complex plane or, perhaps, the largest set in the complex plane on which u and its first and second partial derivatives are continuous. Later, we may have to specify the domain D. The given function u(x, y) = (−1 + 2x)y is a real-valued function, as should be the desired harmonic conjugate function v. We want v to satisfy the Cauchy-Riemann equations, hence

∂v/∂y = ∂u/∂x = ∂/∂x[(−1 + 2x)y] = 2y ⟹ v = ∫(2y) ∂y = y² + g(x),


where g(x) is an arbitrary function of x alone. Substitute v into the other Cauchy-Riemann equation to get

−1 + 2x = ∂u/∂y = −∂v/∂x = −∂/∂x[y² + g(x)] = −g′(x),

hence g′(x) ≡ 1 − 2x, so g(x) = x − x². So, v(x, y) = y² + x − x² + c is a harmonic conjugate of u, for any real constant c.

15.3.3.5. Assume that the unspecified domain D is the whole complex plane or, perhaps, the largest set in the complex plane on which u and its first and second partial derivatives are continuous. Later, we may have to specify the domain D. The given function u(x, y) = x − 2xy is a real-valued function, as should be the desired harmonic conjugate function v. We want v to satisfy the Cauchy-Riemann equations, hence

∂v/∂y = ∂u/∂x = ∂/∂x[x − 2xy] = 1 − 2y ⟹ v = ∫(1 − 2y) ∂y = y − y² + g(x),

where g(x) is an arbitrary function of x alone. Substitute v into the other Cauchy-Riemann equation to get

−2x = ∂u/∂y = −∂v/∂x = −∂/∂x[y − y² + g(x)] = −g′(x),

hence g′(x) ≡ 2x, so g(x) = x². So, v(x, y) = y − y² + x² + c is a harmonic conjugate of u, for any real constant c.

15.3.3.6. Assume that the unspecified domain D is the whole complex plane or, perhaps, the largest set in the complex plane on which u and its first and second partial derivatives are continuous. Later, we may have to specify the domain D. The given function u(x, y) = 8x³y − 8xy³ + 1 is a real-valued function, as should be the desired harmonic conjugate function v. We want v to satisfy the Cauchy-Riemann equations, hence

∂v/∂y = ∂u/∂x = ∂/∂x[8x³y − 8xy³ + 1] = 24x²y − 8y³ ⟹ v = ∫(24x²y − 8y³) ∂y = 12x²y² − 2y⁴ + g(x),

where g(x) is an arbitrary function of x alone. Substitute v into the other Cauchy-Riemann equation to get

8x³ − 24xy² = ∂u/∂y = −∂v/∂x = −∂/∂x[12x²y² − 2y⁴ + g(x)] = −24xy² − g′(x),

hence g′(x) ≡ −8x³, so g(x) = −2x⁴. So, v(x, y) = 12x²y² − 2y⁴ − 2x⁴ + c is a harmonic conjugate of u, for any real constant c.

15.3.3.7. First, we will find a harmonic conjugate function v for the given function u(x, y) = x − 2xy. We want v to satisfy the Cauchy-Riemann equations, hence

∂v/∂y = ∂u/∂x = ∂/∂x[x − 2xy] = 1 − 2y ⟹ v = ∫(1 − 2y) ∂y = y − y² + g(x),

where g(x) is an arbitrary function of x alone. Substitute v into the other Cauchy-Riemann equation to get

−2x = ∂u/∂y = −∂v/∂x = −∂/∂x[y − y² + g(x)] = −g′(x),

hence g′(x) ≡ 2x, so g(x) = x². So, v(x, y) = y − y² + x² + c is a harmonic conjugate of u, for any real constant c. It follows that the simplest such desired function f(z) is given by

f(z) = u(x, y) + iv(x, y) = (x − 2xy) + i(y − y² + x²) = x + iy + (−2xy + i(x² − y²)) = z + iz².
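The identification f(z) = z + iz² can be spot-checked numerically (a verification sketch added here, not in the original):

```python
# Check Re(z + i z^2) = x - 2xy and Im(z + i z^2) = y - y^2 + x^2 at sample points
def f(z):
    return z + 1j * z * z

ok = True
for x in (-1.5, 0.0, 0.7, 2.0):
    for y in (-2.0, 0.3, 1.0):
        z = complex(x, y)
        u = x - 2 * x * y
        v = y - y * y + x * x
        ok = ok and abs(f(z).real - u) < 1e-12 and abs(f(z).imag - v) < 1e-12
```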


Further, because f(z) is a polynomial we have f′(z) = 1 + 2iz.

15.3.3.8. First, we will find a harmonic conjugate function v for the given function u(x, y) = 4xy³ − 4x³y. We want v to satisfy the Cauchy-Riemann equations, hence

∂v/∂y = ∂u/∂x = ∂/∂x[4xy³ − 4x³y] = 4y³ − 12x²y ⟹ v = ∫(4y³ − 12x²y) ∂y = y⁴ − 6x²y² + g(x),

where g(x) is an arbitrary function of x alone. Substitute v into the other Cauchy-Riemann equation to get

12xy² − 4x³ = ∂u/∂y = −∂v/∂x = −∂/∂x[y⁴ − 6x²y² + g(x)] = 12xy² − g′(x),

hence g′(x) ≡ 4x³, so g(x) = x⁴. So, v(x, y) = y⁴ − 6x²y² + x⁴ + c is a harmonic conjugate of u, for any real constant c. It follows that the simplest such desired function f(z) is given by

f(z) = u(x, y) + iv(x, y) = (4xy³ − 4x³y) + i(y⁴ − 6x²y² + x⁴) = i(x⁴ + i4x³y − 6x²y² − i4xy³ + y⁴) = iz⁴.

Further, because f(z) is a polynomial we have f′(z) = 4iz³.

15.3.3.9. First, we will find a harmonic conjugate function v for the given function u(x, y) = y² + x³ − x² − 3xy² + 2. We want v to satisfy the Cauchy-Riemann equations, hence

∂v/∂y = ∂u/∂x = ∂/∂x[y² + x³ − x² − 3xy² + 2] = 3x² − 2x − 3y² ⟹ v = ∫(3x² − 2x − 3y²) ∂y = 3x²y − 2xy − y³ + g(x),

where g(x) is an arbitrary function of x alone. Substitute v into the other Cauchy-Riemann equation to get

2y − 6xy = ∂u/∂y = −∂v/∂x = −∂/∂x[3x²y − 2xy − y³ + g(x)] = −6xy + 2y − g′(x),

hence g′(x) ≡ 0. So, v(x, y) = 3x²y − 2xy − y³ + c is a harmonic conjugate of u, for any real constant c. It follows that the simplest such desired function f(z) is given by

f(z) = u(x, y) + iv(x, y) = (y² + x³ − x² − 3xy² + 2) + i(3x²y − 2xy − y³) = 2 − (x² + i2xy − y²) + (x³ + i3x²y − 3xy² − iy³) = 2 − z² + z³.

Further, because f(z) is a polynomial we have f′(z) = −2z + 3z².

15.3.3.10. Note that on the errata page we changed the problem to have u(x, y) = 3x²y − y³ + 2xy + y. First, we will find a harmonic conjugate function v for u(x, y). We want v to satisfy the Cauchy-Riemann equations, hence

∂v/∂y = ∂u/∂x = ∂/∂x[3x²y − y³ + 2xy + y] = 6xy + 2y ⟹ v = ∫(6xy + 2y) ∂y = 3xy² + y² + g(x),

where g(x) is an arbitrary function of x alone. Substitute v into the other Cauchy-Riemann equation to get

3x² − 3y² + 2x + 1 = ∂u/∂y = −∂v/∂x = −∂/∂x[3xy² + y² + g(x)] = −3y² − g′(x),

hence g′(x) ≡ −3x² − 2x − 1, so g(x) = −x³ − x² − x. So, v(x, y) = 3xy² + y² − x³ − x² − x + c is a harmonic conjugate of u, for any real constant c.
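The pair u, v found in 15.3.3.10 can be checked against the Cauchy-Riemann equations by central finite differences (an added verification, not in the original):

```python
# Finite-difference check of u_x = v_y and u_y = -v_x for
# u = 3x^2 y - y^3 + 2xy + y and v = 3xy^2 + y^2 - x^3 - x^2 - x
def u(x, y):
    return 3 * x * x * y - y ** 3 + 2 * x * y + y

def v(x, y):
    return 3 * x * y * y + y * y - x ** 3 - x * x - x

h = 1e-6
x0, y0 = 0.8, -0.4
u_x = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
u_y = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
v_x = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
v_y = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)
```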


It follows that the simplest such desired function f(z) is given by

f(z) = u(x, y) + iv(x, y) = (3x²y − y³ + 2xy + y) + i(3xy² + y² − x³ − x² − x) = −i(x + iy) − i(x² + i2xy − y²) − i(x³ + i3x²y − 3xy² − iy³) = −iz − iz² − iz³.

Further, because f(z) is a polynomial we have f′(z) = −i − 2iz − 3iz².

15.3.3.11. For z = x + iy ≠ 0, we want v to satisfy the Cauchy-Riemann equations, hence

∂v/∂y = ∂u/∂x = ∂/∂x[y/(x² + y²)] = −2xy/(x² + y²)²,

hence, using the substitution w = x² + y², with ∂w/∂y = 2y,

v = ∫(−2xy/(x² + y²)²) ∂y = ∫(−x/w²) ∂w = x/w + g(x) = x/(x² + y²) + g(x),

where g(x) is an arbitrary function of x alone. Substitute v into the other Cauchy-Riemann equation to get

−1/(x² + y²) + 2y²/(x² + y²)² = −∂u/∂y = −∂/∂y[y/(x² + y²)] = ∂v/∂x = ∂/∂x[x/(x² + y²) + g(x)] = 1/(x² + y²) − 2x²/(x² + y²)² + g′(x),

hence

g′(x) ≡ −1/(x² + y²) + 2y²/(x² + y²)² − 1/(x² + y²) + 2x²/(x² + y²)² = −2/(x² + y²) + 2(x² + y²)/(x² + y²)² = 0,

hence g(x) = c, for an arbitrary constant c. So,

v(x, y) = x/(x² + y²) + c

is a harmonic conjugate of u, for any real constant c, on the domain D = {z = x + iy : x² + y² > 0}.

15.3.3.12. As given, x = r cos θ and y = r sin θ imply

∂U/∂r = (∂u/∂x)(∂x/∂r) + (∂u/∂y)(∂y/∂r) = cos θ · ∂u/∂x + sin θ · ∂u/∂y = cos θ · ∂v/∂y − sin θ · ∂v/∂x

and

(1/r) ∂V/∂θ = (1/r)[(∂v/∂x)(∂x/∂θ) + (∂v/∂y)(∂y/∂θ)] = (1/r)[−r sin θ · ∂v/∂x + r cos θ · ∂v/∂y] = −sin θ · ∂v/∂x + cos θ · ∂v/∂y = ∂U/∂r.

Similarly, using the Cauchy-Riemann equations,

∂V/∂r = (∂v/∂x)(∂x/∂r) + (∂v/∂y)(∂y/∂r) = cos θ · ∂v/∂x + sin θ · ∂v/∂y = −cos θ · ∂u/∂y + sin θ · ∂u/∂x

and

(1/r) ∂U/∂θ = (1/r)[(∂u/∂x)(∂x/∂θ) + (∂u/∂y)(∂y/∂θ)] = (1/r)[−r sin θ · ∂u/∂x + r cos θ · ∂u/∂y] = −sin θ · ∂u/∂x + cos θ · ∂u/∂y = −∂V/∂r.
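The polar Cauchy-Riemann equations ∂U/∂r = (1/r)∂V/∂θ and ∂V/∂r = −(1/r)∂U/∂θ just derived can be spot-checked on the entire function f(z) = z², for which U = r² cos 2θ and V = r² sin 2θ (an added numerical check, not in the original):

```python
import math

# f(z) = z^2 in polar form: U = r^2 cos(2 theta), V = r^2 sin(2 theta)
U = lambda r, t: r * r * math.cos(2 * t)
V = lambda r, t: r * r * math.sin(2 * t)

h = 1e-6
r0, t0 = 1.3, 0.7
U_r = (U(r0 + h, t0) - U(r0 - h, t0)) / (2 * h)
U_t = (U(r0, t0 + h) - U(r0, t0 - h)) / (2 * h)
V_r = (V(r0 + h, t0) - V(r0 - h, t0)) / (2 * h)
V_t = (V(r0, t0 + h) - V(r0, t0 - h)) / (2 * h)
```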


15.3.3.13. The given function U(r, θ) = v∞(r + a²/r) cos θ is a real-valued function, defined on the domain r ≠ 0 in the z−plane, as should be the desired harmonic conjugate function V. We want V to satisfy the Cauchy-Riemann equations, hence

(1/r) ∂V/∂θ = ∂U/∂r = ∂/∂r[v∞(r + a²/r) cos θ] = v∞(1 − a²/r²) cos(θ),

so

V = ∫ v∞(r − a²/r) cos(θ) ∂θ = v∞(r − a²/r) sin(θ) + g(r),

where g(r) is an arbitrary function of r alone. Substitute V into the other Cauchy-Riemann equation to get

−v∞(1 + a²/r²) sin θ = (1/r) ∂U/∂θ = (1/r) ∂/∂θ[v∞(r + a²/r) cos θ] = −∂V/∂r = −∂/∂r[v∞(r − a²/r) sin(θ) + g(r)] = −v∞(1 + a²/r²) sin(θ) − g′(r),

hence g′(r) ≡ 0. So, V = v∞(r − a²/r) sin(θ) + c is a harmonic conjugate of U, for any real constant c.

15.3.3.14. The given function U(r, θ) = v∞(r + a²/r) cos(θ) − κθ is a real-valued function, defined on the domain r ≠ 0 in the z−plane, as should be the desired harmonic conjugate function V. We want V to satisfy the Cauchy-Riemann equations, hence

(1/r) ∂V/∂θ = ∂U/∂r = ∂/∂r[v∞(r + a²/r) cos(θ) − κθ] = v∞(1 − a²/r²) cos(θ),

so

V = ∫ v∞(r − a²/r) cos(θ) ∂θ = v∞(r − a²/r) sin(θ) + g(r),

where g(r) is an arbitrary function of r alone. Substitute V into the other Cauchy-Riemann equation to get

−v∞(1 + a²/r²) sin(θ) − κ/r = (1/r) ∂U/∂θ = −∂V/∂r = −∂/∂r[v∞(r − a²/r) sin(θ) + g(r)] = −v∞(1 + a²/r²) sin(θ) − g′(r),

hence g′(r) ≡ κ/r. So, V = v∞(r − a²/r) sin(θ) + κ ln(r) + c is a harmonic conjugate of U, for any real constant c.

15.3.3.15. The given function

U(r, θ) = u(x, y) = 2xy/(x² + y²)² = 2r² cos(θ) sin(θ)/(r²)² = 2 cos(θ) sin(θ)/r² = sin(2θ)/r²

is a real-valued function, defined on the domain r ≠ 0 in the z−plane, as should be the desired harmonic conjugate function V. We want V to satisfy the Cauchy-Riemann equations, hence

(1/r) ∂V/∂θ = ∂U/∂r = ∂/∂r[sin(2θ)/r²] = −2 sin(2θ)/r³ ⟹ V = ∫ −(2 sin(2θ)/r²) ∂θ = cos(2θ)/r² + g(r),

where g(r) is an arbitrary function of r alone. Substitute V into the other Cauchy-Riemann equation to get

2 cos(2θ)/r³ = (1/r) ∂U/∂θ = (1/r) ∂/∂θ[sin(2θ)/r²] = −∂V/∂r = −∂/∂r[cos(2θ)/r² + g(r)] = 2 cos(2θ)/r³ − g′(r),


hence g′(r) ≡ 0. So, V = cos(2θ)/r² + c is a harmonic conjugate of U, for any real constant c. We must convert back to the original independent variables to get that a harmonic conjugate of u(x, y) = 2xy/(x² + y²)² is given by

v(x, y) = V(r, θ) = cos(2θ)/r² + c = r²(cos²(θ) − sin²(θ))/r⁴ + c = (x² − y²)/(x² + y²)² + c,

for any real constant c.

15.3.3.16. rⁿ cos nθ has harmonic conjugate rⁿ sin nθ because zⁿ = rⁿ cos nθ + irⁿ sin nθ is entire.

15.3.3.17. Method 1: (Tricky) rⁿ sin nθ = −(−rⁿ sin nθ) = Re(−izⁿ), so

−izⁿ = Re(−izⁿ) + i Im(−izⁿ) = rⁿ sin(nθ) + i(−rⁿ cos(nθ)) ≜ U(r, θ) + iV(r, θ),

hence V(r, θ) ≜ −rⁿ cos(nθ) is a harmonic conjugate for U(r, θ) ≜ rⁿ sin nθ.

Method 2: Use the Cauchy-Riemann equations in polar coordinates, namely, the result of problem 15.3.3.12.

15.3.3.18.

y/((x + 2)² + y²) = Re((y + i(x + 2))/((x + 2)² + y²)) = Re(i(−iy + (x + 2))/((x + 2)² + y²)) = Re(i((x + 2) − iy)/(((x + 2) + iy)((x + 2) − iy))) = Re(i/((x + 2) + iy)) = Re(i/(z + 2)),

so f(z) = i/(z + 2) is a function that is analytic in the domain {z : z ≠ −2} and has Re(f(z)) = y/((x + 2)² + y²).
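The identity Re(i/(z + 2)) = y/((x + 2)² + y²) can be confirmed numerically (an added check, not in the original):

```python
# Check Re(i/(z+2)) = y / ((x+2)^2 + y^2) away from z = -2
def lhs(z):
    return (1j / (z + 2)).real

ok = all(
    abs(lhs(complex(x, y)) - y / ((x + 2) ** 2 + y ** 2)) < 1e-12
    for x in (-1.0, 0.5, 3.0)
    for y in (-2.0, 0.1, 1.7)
)
```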

15.3.3.19. (a) For all z = x + iy, where x and y are real,

f(z) = e^{αz} = e^{(a+ib)(x+iy)} = e^{(ax−by)+i(bx+ay)} = e^{(ax−by)} e^{i(bx+ay)} = e^{(ax−by)}(cos(bx + ay) + i sin(bx + ay)) = e^{(ax−by)} cos(bx + ay) + i e^{(ax−by)} sin(bx + ay) ≜ u(x, y) + iv(x, y).

(b) First, check the Cauchy-Riemann equations:

∂u/∂x = ∂/∂x[e^{(ax−by)} cos(bx + ay)] = e^{(ax−by)}(a cos(bx + ay) − b sin(bx + ay)),

∂v/∂y = ∂/∂y[e^{(ax−by)} sin(bx + ay)] = e^{(ax−by)}(−b sin(bx + ay) + a cos(bx + ay)),

∂u/∂y = ∂/∂y[e^{(ax−by)} cos(bx + ay)] = e^{(ax−by)}(−b cos(bx + ay) − a sin(bx + ay)),

and

−∂v/∂x = −∂/∂x[e^{(ax−by)} sin(bx + ay)] = e^{(ax−by)}(−a sin(bx + ay) − b cos(bx + ay)).

So, the Cauchy-Riemann equations are satisfied everywhere in the z−plane. Because (1) the functions cos(θ), sin(θ) are everywhere continuous in θ, and (2) the functions defined by p(x, y) ≜ ax − by and q(x, y) ≜ bx + ay are polynomials in x and y and thus are continuous everywhere in the z = (x + iy)−plane, Theorem 15.8 in Section 15.2 implies that f(z) = e^{αz} is entire, that is, analytic everywhere.


15.3.3.20. U(x, y) = (u(x, y))² − (v(x, y))² is reminiscent of the fact that z² = (x + iy)² = x² − y² + i2xy. So, if we try

f(z) ≜ (g(z))² = (u(x, y) + iv(x, y))² = (u(x, y))² − (v(x, y))² + i2u(x, y)v(x, y),

then we see that Re(f(z)) = U(x, y), as we wanted. So, let f(z) = (g(z))².

15.3.3.21. Method 1: If u(x, y) has harmonic conjugate v(x, y) then ∂u/∂x ≡ ∂v/∂y and ∂u/∂y ≡ −∂v/∂x on some domain. Define w(x, y) = −u(x, y). It follows that ∂v/∂x ≡ ∂w/∂y and ∂v/∂y ≡ −∂w/∂x on that domain, that is, w = −u is a harmonic conjugate of v on that domain.

Method 2: (Tricky): If u(x, y) has harmonic conjugate v(x, y) then f(z) ≜ u(x, y) + iv(x, y) is analytic on some domain. It follows that g(z) ≜ −if(z) = v(x, y) − iu(x, y) is analytic on that domain, hence v(x, y) has harmonic conjugate −u(x, y) on that domain.

Method 2 : (Tricky): If u(x, y) has harmonic conjugate v(x, y) then f (z) , u(x, y) + iv(x, y) is analytic on some domain. It follows that g(z) , −if (z) = v(x, y) − iu(x, y) is analytic on that domain, hence v(x, y) has harmonic conjugate −u(x, y) on that domain. 15.3.3.22. This is potential flow for potential function Φ(x, y) if the velocity is v(x, y) = 3(x2 − y 2 )ˆ ı − 6xy ˆ =

∂Φ ∂Φ ˆ ı+ ˆ. ∂x ∂y

As discussed in Example 15.23 in Section 15.3 and Example 15.14 in Section 15.2, the streamline function Ψ(x, y) is to be chosen so that f (z) = Φ(x, y) + iΨ(x, y) is analytic on some domain. As we saw in the present section, Ψ is a harmonic conjugate of Φ. So, by the Cauchy-Riemann equations, we want Ψ to satisfy (?)

∂Φ ∂Ψ ≡ = 3(x2 − y 2 ) ∂y ∂x

and (??)

∂Φ ∂Ψ ≡− = 6xy ∂x ∂y

on some domain D. From (??), we integrate to get ˆ Ψ = (6xy) ∂x = 3x2 y + g(y), where g(y) is an arbitrary function of y alone. Substitute this into (?), the first Cauchy-Riemann equation, to get 3(x2 − y 2 ) =

 ∂Φ ∂Ψ ∂  2 = = 3x y + g(y) = 3x2 + g 0 (y) ∂x ∂y ∂y

This gives us g 0 (y) = −3y 2 , so g(y) = −y 3 + e c, where e c is an arbitrary real constant. For convenience, choose e c = 0 to get the streamline function Ψ(x, y) = 3x2 y − y 3 . So, the solution curves, that is, the streamlines, are 3x2 y − y 3 = c =constant. 15.3.3.23. This is potential flow for potential function Φ(x, y) if the velocity is v(x, y) = 2y ˆ ı + 2x ˆ =

∂Φ ∂Φ ˆ ı+ ˆ. ∂x ∂y

As discussed in Example 15.23 in Section 15.3 and Example 15.14 in Section 15.2, the streamline function Ψ(x, y) is to be chosen so that f(z) = Φ(x, y) + iΨ(x, y)


is analytic on some domain. As we saw in the present section, Ψ is a harmonic conjugate of Φ. So, by the Cauchy-Riemann equations, we want Ψ to satisfy

(⋆) ∂Ψ/∂y ≡ ∂Φ/∂x = 2y

and

(⋆⋆) ∂Ψ/∂x ≡ −∂Φ/∂y = −2x

on some domain D. From (⋆⋆), we integrate to get

Ψ = ∫(−2x) ∂x = −x² + g(y),

where g(y) is an arbitrary function of y alone. Substitute this into (⋆), the first Cauchy-Riemann equation, to get

2y = ∂Φ/∂x = ∂Ψ/∂y = ∂/∂y[−x² + g(y)] = g′(y).

This gives us g′(y) = 2y, so g(y) = y² + c̃, where c̃ is an arbitrary real constant. For convenience, choose c̃ = 0 to get the streamline function Ψ(x, y) = −x² + y². So, the solution curves, that is, the streamlines, are the hyperbolas −x² + y² = c = constant.

15.3.3.24. Define S = {z : z is in O but z ≠ z_k for k = 1, ..., n}. Suppose z₀ is any point in S. Because z₀ is in O, there exists an R > 0 such that D_R(z₀) is contained in O. Also, because z₀ is in S, z₀ ≠ z_k for k = 1, ..., n, hence r_k ≜ |z₀ − z_k| > 0 for k = 1, ..., n. Let r > 0 be the minimum of the finite list of numbers r₁, ..., r_n. Let r̄ be the minimum of r and R. Then r̄ > 0 and D_r̄(z₀) is contained in S. So, for every z₀ in S, there is an r̄ > 0 such that D_r̄(z₀) is contained in S. By definition, S is an open set.
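For both flows in 15.3.3.22 and 15.3.3.23, the velocity field should be orthogonal to ∇Ψ, so that Ψ is constant along the flow, i.e., the level curves of Ψ really are streamlines. A quick numerical check of v • ∇Ψ = 0 (an addition, not in the original):

```python
# (velocity, grad Psi) pairs for the two flows:
# 15.3.3.22: v = (3(x^2 - y^2), -6xy), Psi = 3x^2 y - y^3
# 15.3.3.23: v = (2y, 2x),             Psi = -x^2 + y^2
cases = [
    (lambda x, y: (3 * (x * x - y * y), -6 * x * y),
     lambda x, y: (6 * x * y, 3 * x * x - 3 * y * y)),
    (lambda x, y: (2 * y, 2 * x),
     lambda x, y: (-2 * x, 2 * y)),
]
dots = []
for vel, grad in cases:
    for x, y in [(1.0, 2.0), (-0.5, 0.3), (2.0, -1.0)]:
        (vx, vy), (gx, gy) = vel(x, y), grad(x, y)
        dots.append(vx * gx + vy * gy)
```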


Section 15.4.6

15.4.6.1. Arg(z) = 3π/4 ⟺ z = √2 γ e^{i3π/4} = −γ + iγ, for some (real) γ > 0. The stated problem requires that

2² = |z − i|² = |−γ + iγ − i|² = |−γ + i(γ − 1)|² = (−γ)² + (γ − 1)² = 2γ² − 2γ + 1,

hence 2γ² − 2γ − 3 = 0. This implies that

γ = (2 ± √(2² − 4·2·(−3)))/(2·2) = (2 ± 2√7)/4.

The solutions of the original problem are

z = −γ + iγ = (−1 + i)γ = (−1 + i)(1 ± √7)/2.

15.4.6.2. Arg(z) = 5π/6 ⟺ z = 2γ e^{i5π/6} = −γ√3 + iγ, for some (real) γ > 0. The stated problem requires that

3² = |z − i|² = |−γ√3 + iγ − i|² = |−γ√3 + i(γ − 1)|² = (−γ√3)² + (γ − 1)² = 4γ² − 2γ + 1,

hence 4γ² − 2γ − 8 = 0, that is, 2γ² − γ − 4 = 0. This implies that

γ = (1 ± √(1² − 4·2·(−4)))/(2·2) = (1 ± √33)/4.

The solutions of the original problem are

z = −γ√3 + iγ = (−√3 + i)γ = (−√3 + i)(1 ± √33)/4.

15.4.6.3. (a) tan θ = 1/(−1) and −1 + i is in the second quadrant together imply that

arg(−1 + i) = {3π/4 + 2πk : k is any integer} and Arg(−1 + i) = 3π/4.

(b) tan θ = −1/(−√3) and −√3 − i is in the third quadrant together imply that

arg(−√3 − i) = {−5π/6 + 2πk : k is any integer} and Arg(−√3 − i) = −5π/6.

(c) arg(2e^{i5π/3}) = {5π/3 + 2πk : k is any integer} and Arg(2e^{i5π/3}) = −π/3.

(d) Using results of parts (a) and (b) of this problem,

arg((−√3 − i)/(−1 + i)) = arg((2e^{i7π/6})/(√2 e^{i3π/4})) = arg(√2 e^{i5π/12}) = {5π/12 + 2πk : k is any integer}

and Arg((−√3 − i)/(−1 + i)) = 5π/12.

15.4.6.4. (a) tan θ = (−2√2)/(−2√2) = 1 and −2√2 − i2√2 is in the third quadrant together imply that

arg(−2√2 − i2√2) = {−3π/4 + 2πk : k is any integer} and Arg(−2√2 − i2√2) = −3π/4.
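The principal arguments computed above can be cross-checked numerically with `cmath.phase`, which returns Arg(z) in (−π, π] (an added verification, not in the original):

```python
import cmath, math

# Principal arguments for the sample points of 15.4.6.3 and 15.4.6.4(a)
p1 = cmath.phase(-1 + 1j)                                     # expect  3*pi/4
p2 = cmath.phase(complex(-math.sqrt(3), -1.0))                # expect -5*pi/6
p3 = cmath.phase(complex(-2 * math.sqrt(2), -2 * math.sqrt(2)))  # expect -3*pi/4
```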


(b) (1 + i)(−√3 − i) = (√2 e^{iπ/4})(2e^{−i5π/6}) = 2√2 e^{−i7π/12} implies that

arg((1 + i)(−√3 − i)) = {−7π/12 + 2πk : k is any integer} and Arg((1 + i)(−√3 − i)) = −7π/12.

(c) We have

arg((1 + i√3)/(−1 + i)) = arg((2e^{iπ/3})/(√2 e^{i3π/4})) = arg(√2 e^{−i5π/12}) = {−5π/12 + 2πk : k is any integer}

and Arg((1 + i√3)/(−1 + i)) = −5π/12.

15.4.6.5. (a) False; a clue is that Im(2Log(z)) is in the interval (−2π, 2π] while Im(Log(z²)) is in the interval (−π, π]. An explicit counterexample is given by z = 2e^{i3π/4}, for which Log(z²) = Log(4e^{i3π/2}) = ln 4 − iπ/2 but 2Log(z) = 2(ln 2 + i3π/4) = ln 4 + i3π/2.

(b) True, because for all z ≠ 0,

Log(√z) = Log(z^{\1/2\}) = Log(|z|^{1/2} e^{iArg(z)/2}) = ln(|z|^{1/2}) + i Arg(z)/2 = (1/2) ln |z| + (i/2) Arg(z) = (1/2)(ln |z| + iArg(z)) = (1/2) Log(z).

15.4.6.6. A clue is that for z ≠ 0, Im(Log(1/z)) is in the interval (−π, π] while Im(−Log(z)) is in the interval [−π, π). So, the proposed identity is not true for z = 0 or for z with Arg(z) = π. For z = ρe^{iArg(z)} satisfying both z ≠ 0 and −π < Arg(z) < π, we have

Log(1/z) = ln|1/z| + iArg(1/z) = ln(|z|^{−1}) + iArg(ρ^{−1}e^{−iArg(z)}) = (−1) ln |z| + i(−Arg(z)) = −(ln |z| + iArg(z)) = −Log(z).

So, the final conclusion is that Log(1/z) = −Log(z) for all z satisfying both z ≠ 0 and −π < Arg(z) < π.

15.4.6.7. (a) A clue is that Im(Log(e^z)) lies in the interval (−π, π]. So, only if Im(z) lies in the interval (−π, π] can it be true that Log(e^z) = z. Recall that, by definition, w = Log(ẑ) is the unique w for which both e^w = ẑ and −π < Im(w) ≤ π. Assume that z satisfies −π < Im(z) ≤ π. Then z is an example of a complex number w₀ that satisfies both e^{w₀} = e^z and −π < Im(w₀) ≤ π. It follows that w₀ = z is the unique w for which e^w = e^z and −π < Im(w) ≤ π. So, Log(e^z) = z for all z for which −π < Im(z) ≤ π.

(b) When it exists, Log(z) is an element of the set {w : e^w = z}, so by definition of that set, e^{Log(z)} = z.
So, e^{Log(z)} = z is true for all z ≠ 0, the set of which is the punctured plane. [This explains why Theorem 15.16 in Section 15.4 is true.]

15.4.6.8. −1 + i = √2 e^{i3π/4} = e^{ln(√2)} e^{i3π/4} = e^{(1/2)ln(2) + i3π/4}, so

e^{1−z/2} = −1 + i ⟺ e^{1−z/2} = e^{(1/2)ln(2)+i(3π/4)} ⟺ 1 − z/2 = (1/2)ln(2) + i(3π/4) + i2πn,


for some integer n. So, the solutions are

z = 2 − ln(2) − i(3π/2 + 4πn), where n is any integer.

15.4.6.9.

e^z/(e^z + 1) = i ⟺ e^z = i(e^z + 1) ⟺ (1 − i)e^z = i ⟺ e^z = i/(1 − i) = e^{iπ/2}/(√2 e^{−iπ/4}) = 2^{−1/2} e^{i3π/4} = e^{−(1/2)ln 2 + i3π/4}.

So, the solutions of the original equation e^z/(e^z + 1) = i are given by

z = −(1/2)ln 2 + i(3π/4 + 2nπ), where n is any integer.

and

− π ≤ −Im(w3 ) < π.

It follows, by addition, that −2π + (−π) < Im(w1 ) + Im(w2 ) − Im(w3 ) < 2π + π. By the definitions of w1 , w2 , w3 , this says that (?)

− 3π < 2kπ = Im (Log(z1 ) + Log(z2 ) − Log(z1 z2 )) < 3π.

Because k is an integer, the only possibilities that make (star) true are that k = −1, k = 0, or k = 1. This finishes the explanation for why Log(z1 z2 ) + i2kπ = Log(z1 ) + Log(z2 ), where k must be either −1, 0, or 1. p √ \1/2\ i Arg(z)/2 15.4.6.11. Define the principal . √  square root function by z, that is z √ , |z| e n A clue is that Arg z lies in the interval (−π, π], while n Arg ( z) lies in the interval (−nπ, nπ]. Ex. For z = ei5π/6 and n = 2, we have √  p  p    π Arg z n = Arg ei5π/3 = Arg e−iπ/3 = Arg e−iπ/6 = − 6 c Larry

Turyn, January 8, 2014

page 28

versus n Arg

 5π   p √  z = 2Arg ei5π/6 = 2Arg ei5π/12 = . 6

2

15.4.6.12. Prelude: f (z) , |z|α ei(3α−2)Arg(z) = eα α2 = (3α − 2), then it would follow that f (z) = e(3α−2) ln |z| + i(3α−2)Arg(z) = e(3α−2)

2

ln |z| + i(3α−2)Arg(z)

ln |z| + iArg(z)



 = e

. If a real number α happens to satisfy

ln |z| + iArg(z)

 (3α−2)

(3α−2)  = eLog(z)

= z (3α−2) is analytic. By the way, the solutions of α2 = (3α − 2), that is, 0 = α2 − 3α + 2 = (α − 1)(α − 2), are α = 1 and α = 2. This does not tell us whether there are other values of α for which f (z) is analytic, but it will help us check our subsequent work. Anyway, the instructions said to use the Cauchy-Riemann equations to study for which real α we have that f (z) is analytic. To do that, for z = x + iy = r eiθ with x, y, and θ = Arg(z) being real and real r ≥ 0, along with α being real, we have    2 2 2 f (z) , |z|α ei(3α−2)Arg(z) = (r2 )α /2 ei(3α−2)θ = rα cos (3α − 2)θ + i sin (3α − 2)θ   2 2 = rα cos (3α − 2)θ + irα sin (3α − 2)θ , U (r, θ) + iV (r, θ). It is convenient to use the Cauchy-Riemann equations in polar coordinates, that is, the results of problem 15.3.3.12. The first C-R equation requires that i  2 ∂ h α2 ∂U = r cos (3α − 2)θ = (α2 )rα −1 cos (3α − 2)θ ∂r ∂r and

i  2 1 ∂ h α2 1 ∂V = r sin (3α − 2)θ = (3α − 2)rα −1 cos (3α − 2)θ , r ∂θ r ∂θ

be equal, hence (α2 ) = (3α − 2). The second C-R equation requires that i  2 ∂ h α2 ∂V = r sin (3α − 2)θ = (α2 )rα −1 sin (3α − 2)θ ∂r ∂r and −

i  2 1 ∂ h α2 1 ∂U =− r cos (3α − 2)θ = (3α − 2)rα −1 sin (3α − 2)θ r ∂θ r ∂r

be equal, hence, again, (α2 ) = (3α − 2). So, the discussion in the prelude turns out to be completely relevant, and f (z) is analytic if, and only if, α = 1 or α = 2. 15.4.6.13. (2i)i , eilog(2i) = ei·{ln |2i|+iArg(2i)+i2πk: k=integer} = n o  π π π = ei·{ln |2|+i 2 +i2πk: k=integer} = ei·(ln |2|+i 2 +i2πk) : k = integer = e− 2 −2πk · ei ln 2 : k is any integer .

1

15.4.6.14. (−1 − i)1/3 = e 3 ·{ln |−1−i|+iArg(−1−i)+i2πk: k=integer} = n 1 1 o −3π 1/2 3π 1 = e 3 ·{ln |2 |+i 4 +i2πk: k=integer} = e 3 ·( 2 ln |2|−i 4 +i2πk) : k = integer n ln(2) o n o π = e 6 · e−i 4 · ei2πk/3 : k is any integer = 21/6 e−iπ/4 , 21/6 e5π/12 , 21/6 e−i11π/12 . c Larry

Turyn, January 8, 2014

page 29

15.4.6.15. Recall that form is reiθ , where −π < θ ≤ π. From Example 15.28(b) in √ the polar exponential 5π Section 15.4, Log(− 3 + i) = ln 2 + i 6 , so q √ √ √ 1 1 1 5π 1 3 − 3 + i = (− 3 + i)\ 3 \ , e 3 Log(− 3+i) = e 3 (ln 2+i 6 ) = e 3

ln 2

· ei5π/18 =

√ 3 2 ei5π/18 .

15.4.6.16. Recall that the polar exponential form is reiθ , where −π < θ ≤ π. n 1 o √ √ √ 1 (− 3 + i)1/3 = e 3 log(− 3+i) = e 3 (Log(− 3+i)+i2πk) :k is any integer o n√ o n 1 5π 3 2 · ei5π/18 · ei2πk/3 : k is any integer = e 3 (ln 2+i( 6 +2πk)) : k is any integer = o n√ o n√ √ √ √ √ 3 3 3 3 3 3 2 ei5π/18 , 2ei17π/18 , 2 ei29π/18 = 2 ei5π/18 , 2e−iπ/18 , 2 e−i7π/18 . = √ is a finite set. We also see that, as in Example 15.3(c), (− 3 + i)1/3 is the set of all solutions, that is, all √ three solutions, of the equation z 3 = − 3 + i. √ 1 15.4.6.17. 3 z , e 3 Log(z) . We note that √the exponential function is entire and Log(z) is differentiable everywhere except the ray Arg(z) = π, so 3 z is also differentiable everywhere except the ray Arg(z) = π, also known as the non-positive real axis. √ 2 1 3 15.4.6.18. z 2 , e 3 Log(z ) . We note that the exponential function is entire and the squaring function g(z) , z 2 is entire. Also, the Log function is differentiable everywhere except the ray Arg(z) = π, so Log(z 2 ) is differentiable everywhere except where Arg(z 2 ) = π, that is, the set of z for which Arg(z) = ± π2 . √ 3 The latter set is the imaginary axis. So, z 2 is differentiable everywhere except on the imaginary axis. 15.4.6.19. There no solution of the equation ew = 0 because for all w = u + iv with u, v being real, q 2 2 q  √ |ew | = |eu (cos v + i sin v)| = eu cos v + eu sin v = e2u cos2 v + sin2 v = e2u = eu > 0.

15.4.6.20. Define f (z) = Arg(z). We know that f (z) is continuous everywhere except on the non-positive real axis, where f (z) is discontinuous, from discussion following Example 15.24 in Section 15.4. As for differentiability of f (z) for z not on the non-positive real axis, we will use the fact that Arg(z) = θ , U (r, θ) + iV (r, θ) to study the Cauchy-Riemann equations in polar coordinates, as given in the results of problem 15.3.3.12. So, U (r, θ) ≡ θ and V (r, θ) ≡ 0. We have that ∂U ∂ h i = θ ≡0 ∂r ∂r

and

1 ∂V 1 ∂ h i = 0 ≡0 r ∂θ r ∂θ

are equal. The second C-R equation requires that ∂V ∂ h i = 0 ≡0 ∂r ∂r

and



1 ∂U 1 ∂ h i 1 =− θ =− , r ∂θ r ∂r r

hence f (z) is differentiable nowhere. But, the problem asked about differentiability of 2 e (r, θ) + iVe (r, θ). g(z) = (f (z)) = θ2 , U

We have that e ∂U ∂ h 2i = θ ≡0 ∂r ∂r

and

1 ∂ Ve 1 ∂ h i = 0 ≡0 r ∂θ r ∂θ

c Larry

Turyn, January 8, 2014

page 30

are equal. We calculate that ∂ Ve ∂ h i 0 ≡0 = ∂r ∂r

and



e 2θ 1 ∂U 1 ∂ h 2i θ =− , =− r ∂θ r ∂r r

so the second C-R equation is satisfied if, and only if, θ = 0 and r 6= 0. So, g(z) differentiable on the positive real axis and nowhere else. To find g 0 (z) where it exists, in polar coordinates we will derive a formula(s) for g 0 (z) to replace fore (r, θ) , mulas (15.15) and (15.16) in Section 15.2. Given g(z) = f (x + iy) = u(x, y) + iv(x, y), define U e u(r cos θ, r sin θ), V (r, θ) , v(r cos θ, r sin θ). From discussion in problem 15.3.12, we note that e ∂u ∂x ∂u ∂y ∂u ∂u ∂u ∂v ∂U = · + · = cos θ · + sin θ · = cos θ · − sin θ · ∂r ∂x ∂r ∂y ∂r ∂x ∂y ∂x ∂x Similarly, we have ∂ Ve ∂v ∂x ∂v ∂y ∂v ∂u = · + · = cos θ · + sin θ · ∂r ∂x ∂r ∂y ∂r ∂x ∂x So, 

e   ∂U cos θ  ∂r       =  e   ∂V sin θ ∂r

  ∂u   ∂x       ∂v  cos θ ∂x

− sin θ

implies that 

  ∂u cos θ  ∂x       =  ∂v   sin θ ∂x

−1  ∂ U e   ∂r      e ∂V cos θ ∂r

− sin θ





    = 

cos θ

− sin θ

  ∂U e  ∂r    e ∂V cos θ ∂r



sin θ

  . 

Formula (15.15) in Section 15.2 gives e ∂u ∂v ∂U ∂ Ve g (z) = +i = cos θ + sin θ +i ∂x ∂x ∂r ∂r 0

e ∂U ∂ Ve − sin θ + cos θ ∂r ∂r

! .

e ∂U ∂ Ve ≡ 0 and ≡ 0, so g 0 (z) = 0, where it exists. ∂r ∂r So, the only places where g 0 (z) exists, namely, at x0 + i 0 = 0, where x0 > 0, it has zero derivative. [By the way, several other approaches to find g 0 (z), where it exists, ran into difficulties: To find g 0 (z) where it exists, first note that ! !  2   2  2 x x −1 −1 p p g(z) = Arg(z) = ε cos = cos , u(x, y) + iv(x, y), x2 + y 2 x2 + y 2

In this problem,

where ε = sgn(y), if y 6= 0, and ε = 0 if y = 0. So, v(x, y) ≡ 0 and u(x, y) =

cos

−1

x





!2

. x2 + y 2 At any z0 = x0 + i0, where x0 > 0, from formula (15.16) in Section 15.2 we could try to calculate   ∂v −x0 y0 ∂u x0 −1 g 0 (z0 ) = (x0 , y0 ) − i (x0 , y0 ) = 0 − i 2 cos−1 p 2 ·r · 2 2 3/2  2 2 ∂y ∂y (x x0 + y0 0 + y0 ) 1 − √ x20 2 p

x0 +y0

but, unfortunately, r

−1 2 x0 2 2 x0 +y0

is undefined at (x0 , y0 ) = (x0 , 0).

1− √

c Larry

Turyn, January 8, 2014

page 31

2 Alternatively, noting that g(z0 ) = Arg(x0 + i0) = (0)2 = 0, we could try to use the definition of derivative and L’Hôpital‘s Rule to try to calculate 2 Arg(z) − 0 g(z) − g(z0 ) = lim g (z0 ) = lim z→z0 z→z0 z − z0 z − z0 0

But, because we know that Arg(z) is not differentiable at any z0 = x0 + i0, where x0 > 0, we can’t use L’Hôpital‘s Rule.] (  α

15.4.6.21. If α is an integer then z , e

=

 eα



ln |z|+iArg(z)

αlog(z)

=

e

α

ln |z|+i Arg(z)+2πk

  · ei2παk : k is any integer = eα

=

 eα



) : k is any integer 

ln |z|+iArg(z)

 · 1 : k is any integer



ln 2+iArg(z)

is a single value. 15.4.6.22. Log(z) = ln |z| + iArg(z) chooses from log(z) = {ln |z| + iArg(z) + i2πk : k is any integer} the value with k = 0. By definition of Arg(z), −π < Arg(z) ≤ π, so for any value of the integer k 6= 0, ln |z| + iArg(z) + i2πk has imaginary part not in the interval (−π, π]. So, Log(z) = ln |z| + iArg(z) is the only element w in log(z) for which −π < Im(w) ≤ π. 15.4.6.23. If x is a real, positive number, then Log(x) = ln |x| + iArg(x + i0) = ln |x| + i · 0 = ln x.

15.4.6.24. Let x₁, y₁, x₂, y₂ be any real numbers, and let z₁ = x₁ + iy₁ and z₂ = x₂ + iy₂.

(i) RHS = e^{x₁+iy₁} e^{x₂+iy₂} = (e^{x₁}(cos y₁ + i sin y₁))(e^{x₂}(cos y₂ + i sin y₂)) = e^{x₁}e^{x₂}(cos y₁ cos y₂ − sin y₁ sin y₂ + i(sin y₁ cos y₂ + cos y₁ sin y₂)).

Using trigonometric identities for real numbers we have

RHS = e^{x₁+x₂}(cos(y₁ + y₂) + i sin(y₁ + y₂)) = e^{z₁+z₂} = LHS.

(ii) RHS = e^{x₁+iy₁}/e^{x₂+iy₂} = (e^{x₁}(cos y₁ + i sin y₁))/(e^{x₂}(cos y₂ + i sin y₂)) = (e^{x₁}/e^{x₂}) · ((cos y₁ + i sin y₁)(cos y₂ − i sin y₂))/(cos²y₂ + sin²y₂).

Using trigonometric identities for real numbers we have

RHS = e^{x₁−x₂} · (cos y₁ cos y₂ + sin y₁ sin y₂ + i(sin y₁ cos y₂ − cos y₁ sin y₂))/1 = e^{x₁−x₂}(cos(y₁ − y₂) + i sin(y₁ − y₂)) = e^{z₁−z₂} = LHS.

(iii) Using DeMoivre's Law,

RHS = e^{n(x₁+iy₁)} = e^{nx₁}(cos ny₁ + i sin ny₁) = (e^{x₁})ⁿ (cos y₁ + i sin y₁)ⁿ = (e^{x₁}(cos y₁ + i sin y₁))ⁿ = (e^{z₁})ⁿ = LHS.
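The three exponent laws just verified algebraically can also be confirmed numerically. This short check is an addition (not part of the printed manual):

```python
import cmath

# Spot-check (added) of the exponent laws of 15.4.6.24 at arbitrary sample points:
# e^{z1+z2} = e^{z1} e^{z2},  e^{z1-z2} = e^{z1}/e^{z2},  e^{n z1} = (e^{z1})^n.
z1, z2, n = 0.3 + 1.2j, -1.1 + 0.7j, 5
err_sum = abs(cmath.exp(z1 + z2) - cmath.exp(z1) * cmath.exp(z2))
err_diff = abs(cmath.exp(z1 - z2) - cmath.exp(z1) / cmath.exp(z2))
err_pow = abs(cmath.exp(n * z1) - cmath.exp(z1) ** n)
```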


15.4.6.25. Because x > 0, for all h sufficiently small we have that x + h > 0, so

∂v/∂x(x, 0) = lim_{h→0} (v(x + h, 0) − v(x, 0))/h = lim_{h→0} (cos⁻¹((x + h)/√((x + h)² + 0²)) − 0)/h = lim_{h→0} (cos⁻¹(1) − 0)/h = lim_{h→0} (0 − 0)/h = 0 = −∂u/∂y(x, 0).

Regarding the second of the Cauchy-Riemann equations, (15.33) in Section 15.4, because x > 0 and y = 0, we calculate

∂v/∂y(x, 0+) = lim_{y→0+} (v(x, y) − v(x, 0))/y ≜ g′(0+),

where we keep x fixed and g(y) ≜ cos⁻¹(x/√(x² + y²)). But we know from Calculus I that g′(0+) exists by the chain rule and equals

−lim_{y→0+} (1/√(1 − (x/√(x² + y²))²)) · ∂/∂y(x/√(x² + y²)) = −lim_{y→0+} (1/√(y²/(x² + y²))) · (−xy/(x² + y²)^{3/2})

= −lim_{y→0+} (√(x² + y²)/|y|) · (−xy/(x² + y²)^{3/2}) = lim_{y→0+} (x/(x² + y²)) · (y/|y|) = 1/x,

and, similarly,

∂v/∂y(x, 0−) = lim_{y→0−} (v(x, y) − v(x, 0))/y ≜ k′(0−),

where we keep x fixed and k(y) ≜ −cos⁻¹(x/√(x² + y²)). But we know from Calculus I that k′(0−) exists by the chain rule and equals

lim_{y→0−} (1/√(1 − (x/√(x² + y²))²)) · ∂/∂y(x/√(x² + y²)) = ... = lim_{y→0−} (√(x² + y²)/|y|) · (−xy/(x² + y²)^{3/2}) = ... = lim_{y→0−} x/(x² + y²) = 1/x.

Because the left- and right-hand limits are equal, there exists ∂v/∂y(x, 0) and it equals 1/x = ∂u/∂x(x, 0).

15.4.6.26. Choose any z₀ ≜ x₀ + i0 on the positive real axis, hence x₀ > 0, hence |x₀| = x₀. For any path z → z₀, that is, for z = x + iy → x₀ + i0, that is, for x → x₀ and y → 0, we have that eventually x > 0. Recall formula (15.25) in Section 15.4. For those y ≥ 0, we have, as x → x₀ and y → 0, that

Arg(z) = cos⁻¹(x/√(x² + y²)) → cos⁻¹(x₀/√(x₀² + 0²)) = cos⁻¹(x₀/√(x₀²)) = cos⁻¹(x₀/|x₀|) = cos⁻¹(x₀/x₀) = cos⁻¹(1) = 0,

and for those y < 0, we have, as x → x₀ and y → 0, that

Arg(z) = −cos⁻¹(x/√(x² + y²)) → −cos⁻¹(x₀/√(x₀²)) = −cos⁻¹(x₀/|x₀|) = −cos⁻¹(x₀/x₀) = −cos⁻¹(1) = 0.

So, there exists lim_{z→z₀} Arg(z) = 0.

Because Arg(z₀) = Arg(x₀ + i0) = 0, it follows that f(z) ≜ Arg(z) is continuous at z₀ ≜ x₀ + i0, where x₀ > 0.
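This two-sided limit is easy to see numerically. The following illustration is an addition (not in the printed manual):

```python
import cmath

# Numerical illustration (added) of 15.4.6.26: Arg(z) -> 0 as z -> x0 > 0,
# approaching along paths from the upper and from the lower half plane.
x0 = 2.0
args_above = [cmath.phase(complex(x0 + t, t)) for t in (1e-2, 1e-4, 1e-6)]
args_below = [cmath.phase(complex(x0 - t, -t)) for t in (1e-2, 1e-4, 1e-6)]
max_arg = max(abs(a) for a in args_above + args_below)   # shrinks toward 0
```

At a point on the negative real axis the two one-sided limits would instead be π and −π, which is why Arg is discontinuous there.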


Section 15.5.1

15.5.1.1. Using Theorem 15.18 in Section 15.5 with z₁ = π/4 and z₂ = i, along with (15.40) and (15.41) in Section 15.5, we have

sin(π/4 + i) = sin(π/4) cos(i) + cos(π/4) sin(i) = (1/√2) cosh(1) + i (1/√2) sinh(1) = (1/√2)(cosh(1) + i sinh(1)).

15.5.1.2. For z = x + iy, where x, y are real, the equation is

2 = sin(x + iy) = sin x cos(iy) + cos x sin(iy) = sin x cosh y + i cos x sinh y.

Separating the real and imaginary parts gives the system of equations

(1) 2 = sin x cosh y
(2) 0 = cos x sinh y.

Equation (2) is easier to solve than (1): 0 = cos x sinh y ⇐⇒ (i) 0 = cos x or (ii) 0 = sinh y. (2)(ii) is true only for y = 0. (2)(i) is true only for x = (n − ½)π, for any integer n.

Substitute x = (n − ½)π into the first equation, (1): 2 = sin x cosh y = sin((n − ½)π) cosh y = (−1)^{n+1} cosh y.

For n = even = 2k, (1) becomes cosh y = −2, which has no solution. For n = odd = 2ℓ − 1, (1) becomes cosh y = 2, whose solutions are found using

2 = cosh y ≜ (e^y + e^{−y})/2 ⇐⇒ 4 = e^y + e^{−y} ⇐⇒ 4e^y = e^y · (e^y + e^{−y}) = (e^y)² + 1,

so substituting w = e^y turns that equation into 4w = w² + 1, that is, w² − 4w + 1 = 0, whose solutions are

e^y = (4 ± √12)/2 = 2 ± √3.

This gives us y = ln(2 ± √3). Alternatively, substitute y = 0 into the first equation, (1): 2 = sin x cosh 0 = sin x, which has no solution for the real number x because −1 ≤ sin x ≤ 1 for all real x.

Putting everything together, the set of solutions is

{ ((2ℓ − 1) − ½)π + i ln(2 ± √3) : any integer ℓ }.

15.5.1.3. For z = x + iy, where x, y are real, the equation is

2 = cos(x + iy) = cos x cos(iy) − sin x sin(iy) = cos x cosh y − i sin x sinh y.

Separating the real and imaginary parts gives the system of equations

(1) 2 = cos x cosh y
(2) 0 = −sin x sinh y.

Equation (2) is easier to solve than (1): 0 = sin x sinh y ⇐⇒ (i) 0 = sin x or (ii) 0 = sinh y. (2)(i) is true only for x = nπ for any integer n. (2)(ii) is true only for y = 0.


Substitute x = nπ into the first equation, (1): 2 = cos x cosh y = cos(nπ) cosh y = (−1)ⁿ cosh y.

For n = even = 2k, (1) becomes cosh y = 2, whose solutions would be y = arccosh(2) if we had defined such a function. Instead, there is a solution technique of independent interest:

2 = cosh y ≜ (e^y + e^{−y})/2 ⇐⇒ 4 = e^y + e^{−y} ⇐⇒ 4e^y = e^y · (e^y + e^{−y}) = (e^y)² + 1,

so substituting w = e^y turns that equation into 4w = w² + 1, that is, w² − 4w + 1 = 0, whose solutions are

e^y = (4 ± √12)/2 = 2 ± √3.

This gives us y = ln(2 ± √3); both roots are allowed because 2 − √3 > 0. So far, our solutions are z = x + iy = 2kπ + i ln(2 ± √3).

For n = odd = 2ℓ − 1, (1) becomes 0 < cosh y = −2, which has no solution.

Substitute y = 0 into the first equation, (1), to get 2 = cos x cosh 0 = cos x · 1 = cos x ≤ 1, which has no solution.

Putting everything together, the set of solutions is

{ 2kπ + i ln(2 ± √3) : any integer k }.

15.5.1.4. For z = x + iy, where x, y are real, the equation is

−1/2 = cos(x + iy) = cos x cos(iy) − sin x sin(iy) = cos x cosh y − i sin x sinh y.

Separating the real and imaginary parts gives the system of equations

(1) −1/2 = cos x cosh y
(2) 0 = −sin x sinh y.

Equation (2) is easier to solve than (1): 0 = sin x sinh y ⇐⇒ (i) 0 = sin x or (ii) 0 = sinh y. (2)(i) is true only for x = nπ for any integer n. (2)(ii) is true only for y = 0.

Substitute x = nπ into the first equation, (1): −1/2 = cos x cosh y = cos(nπ) cosh y = (−1)ⁿ cosh y.

For n = odd = 2ℓ − 1, (1) becomes cosh y = 1/2, for which there is no solution because cosh y ≥ 1 for all y. For n = even = 2k, (1) becomes cosh y = −1/2 < 0, which has no solution.

Substitute y = 0 into the first equation, (1), to get −1/2 = cos x cosh 0 = cos x · 1 = cos x, which has infinitely many solutions: x = ±2π/3 + 2kπ, where k is any integer.

Putting everything together, the set of solutions is

{ ±2π/3 + 2kπ + i0 : any integer k }.

15.5.1.5. For z = x + iy, where x, y are real, the equation is

i3 = sin(x + iy) = sin x cos(iy) + cos x sin(iy) = sin x cosh y + i cos x sinh y.


Separating the real and imaginary parts gives the system of equations

(1) 0 = sin x cosh y
(2) 3 = cos x sinh y.

Equation (1) is easier to solve than (2): 0 = sin x cosh y ⇐⇒ (i) 0 = sin x or (ii) 0 = cosh y. (1)(ii) is never true. (1)(i) is true only for x = nπ, for any integer n.

Substitute x = nπ into the second equation, (2): 3 = cos x sinh y = cos(nπ) sinh y = (−1)ⁿ sinh y.

For n = even = 2k, (2) becomes sinh y = 3, whose solutions would be y = arcsinh(3) if we had defined such a function. Instead, there is a solution technique of independent interest:

3 = sinh y ≜ (e^y − e^{−y})/2 ⇐⇒ 6 = e^y − e^{−y} ⇐⇒ 6e^y = e^y · (e^y − e^{−y}) = (e^y)² − 1,

so substituting w = e^y turns that equation into 6w = w² − 1, that is, w² − 6w − 1 = 0, whose solutions are

e^y = (6 ± √40)/2 = 3 ± √10.

This gives us only y = ln(3 + √10), because 0 < e^y cannot equal 3 − √10 < 0. So far, our only solutions are z = x + iy = 2kπ + i ln(3 + √10).

For n = odd = 2ℓ − 1, (2) becomes sinh y = −3, whose solutions are found using

−3 = sinh y ≜ (e^y − e^{−y})/2 ⇐⇒ −6 = e^y − e^{−y} ⇐⇒ −6e^y = e^y · (e^y − e^{−y}) = (e^y)² − 1,

so substituting w = e^y turns that equation into −6w = w² − 1, that is, w² + 6w − 1 = 0, whose solutions are

e^y = (−6 ± √40)/2 = −3 ± √10.

Again, this gives us only y = ln(−3 + √10).

Putting everything together, the set of solutions is

{ 2kπ + i ln(3 + √10) : any integer k } ∪ { (2ℓ − 1)π + i ln(−3 + √10) : any integer ℓ }.

15.5.1.6. For z = x + iy, where x, y are real, the equation is

−cosh π = cos(x + iy) = cos x cos(iy) − sin x sin(iy) = cos x cosh y − i sin x sinh y.

Separating the real and imaginary parts gives the system of equations

(1) −cosh π = cos x cosh y
(2) 0 = −sin x sinh y.

Equation (2) is easier to solve than (1): 0 = sin x sinh y ⇐⇒ (i) 0 = sin x or (ii) 0 = sinh y. (2)(i) is true only for x = nπ for any integer n. (2)(ii) is true only for y = 0.

Substitute x = nπ into the first equation, (1): −cosh π = cos x cosh y = cos(nπ) cosh y = (−1)ⁿ cosh y.


For n = even = 2k, (1) becomes cosh y = −cosh π, for which there is no solution because cosh y ≥ 1 for all y. For n = odd = 2ℓ − 1, (1) becomes cosh y = cosh π, which has exactly two solutions, y = ±π, because cosh(y) is a strictly increasing function for 0 ≤ y < ∞ and cosh(y) is an even function.

Substitute y = 0 into the first equation, (1): −cosh π = cos x cosh 0 = cos x has no solution because −cosh π < −1 and −1 ≤ cos x ≤ 1 for all x.

Putting everything together, the set of solutions is

{ (2ℓ − 1)π ± iπ : any integer ℓ }.

15.5.1.7. For z = x + iy, where x, y are real, the equation is

−i sinh(π/2) = sin(x + iy) = sin x cos(iy) + cos x sin(iy) = sin x cosh y + i cos x sinh y.

Separating the real and imaginary parts gives the system of equations

(1) 0 = sin x cosh y
(2) −sinh(π/2) = cos x sinh y.

Equation (1) is easier to solve than (2): 0 = sin x cosh y ⇐⇒ (i) 0 = sin x or (ii) 0 = cosh y. (1)(ii) is never true. (1)(i) is true only for x = nπ, for any integer n.

Substitute x = nπ into the second equation, (2): −sinh(π/2) = cos x sinh y = cos(nπ) sinh y = (−1)ⁿ sinh y, that is,

sinh y = (−1)^{n+1} sinh(π/2).

For n = even = 2k, (2) becomes sinh y = −sinh(π/2), hence y = −π/2. For n = odd = 2ℓ − 1, (2) becomes sinh y = sinh(π/2), hence y = π/2.

Putting everything together, the set of solutions is

{ nπ + i(−1)^{n+1} π/2 : any integer n }.

15.5.1.8. For z = x + iy, where x, y are real, the equation is

−i sinh 3 = sin(x + iy) = sin x cos(iy) + cos x sin(iy) = sin x cosh y + i cos x sinh y.

Separating the real and imaginary parts gives the system of equations

(1) 0 = sin x cosh y
(2) −sinh 3 = cos x sinh y.

Equation (1) is easier to solve than (2): 0 = sin x cosh y ⇐⇒ (i) 0 = sin x or (ii) 0 = cosh y. (1)(ii) is never true. (1)(i) is true only for x = nπ, for any integer n.

Substitute x = nπ into the second equation, (2): −sinh 3 = cos x sinh y = cos(nπ) sinh y = (−1)ⁿ sinh y, that is, sinh y = (−1)^{n+1} sinh 3.

For n = even = 2k, (2) becomes sinh y = −sinh 3, hence y = −3. For n = odd = 2ℓ − 1, (2) becomes sinh y = sinh 3, hence y = 3.

Putting everything together, the set of solutions is

{ nπ + i · 3(−1)^{n+1} : any integer n }.

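The solution sets of 15.5.1.7 and 15.5.1.8 have the same shape, so both can be verified at once numerically. This check is an addition (not part of the printed manual):

```python
import cmath, math

# Spot-check (added) of 15.5.1.7 and 15.5.1.8: the claimed solutions
# z = n*pi + i*(-1)**(n+1)*c satisfy sin(z) = -i*sinh(c), for c = pi/2 and c = 3.
worst = 0.0
for c in (math.pi / 2, 3.0):
    target = -1j * math.sinh(c)
    for n in range(-3, 4):
        z = n * math.pi + 1j * ((-1) ** (n + 1)) * c
        worst = max(worst, abs(cmath.sin(z) - target))
```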

15.5.1.9. For z = x + iy, where x, y are real, the equation is

i sinh 3 = cos(x + iy) = cos x cos(iy) − sin x sin(iy) = cos x cosh y − i sin x sinh y.

Separating the real and imaginary parts gives the system of equations

(1) 0 = cos x cosh y
(2) sinh 3 = −sin x sinh y.

Equation (1) is easier to solve than (2): 0 = cos x cosh y ⇐⇒ (i) 0 = cos x or (ii) 0 = cosh y. (1)(i) is true only for x = (n − ½)π for any integer n. (1)(ii) is never true.

Substitute x = (n − ½)π into the second equation, (2): sinh 3 = −sin x sinh y = −sin((n − ½)π) sinh y = (−1)ⁿ sinh y, that is, sinh y = (−1)ⁿ sinh 3.

For n = even = 2k, (2) becomes sinh y = sinh 3, hence y = 3. For n = odd = 2ℓ − 1, (2) becomes sinh y = −sinh 3, hence y = −3.

Putting everything together, the set of solutions is

{ (n − ½)π + i · 3(−1)ⁿ : any integer n }.

15.5.1.10. This is the same problem as 15.5.1.6...sorry about that!

15.5.1.11. For z = x + iy, where x, y are real, the equation is

sin(x + iy) = sin x cos(iy) + cos x sin(iy) = sin x cosh y + i cos x sinh y = −cos(x + iy) = −(cos x cos(iy) − sin x sin(iy)) = −cos x cosh y + i sin x sinh y.

Separating the real and imaginary parts gives the system of equations

sin x cosh y = −cos x cosh y
cos x sinh y = sin x sinh y,

that is,

(1) cosh y · (cos x + sin x) = 0
(2) sinh y · (cos x − sin x) = 0.

The first equation is easier to analyze than the second: (1) requires that cosh y = 0 or cos x + sin x = 0. But cosh y = 0 is never true, because cosh y ≥ 1 for all real y. So, (1) requires that cos x + sin x = 0, that is, sin x = −cos x, that is, tan x = −1. The latter has solutions x = −π/4 + nπ, where n is any integer. Substitute those values of x into (2) to get

0 = sinh y · (cos x − sin x) = sinh y · (cos(−π/4 + nπ) − sin(−π/4 + nπ)) = sinh y · ((−1)ⁿ (1/√2) − (−1)^{n+1} (1/√2)) = (−1)ⁿ √2 sinh y,

which implies that y = 0.

Putting everything together, the set of solutions is

{ −π/4 + nπ + i0 : any integer n }.

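The conclusion of 15.5.1.11, that all solutions of sin z = −cos z are the real numbers −π/4 + nπ, can be probed numerically. This check is an addition (not part of the printed manual):

```python
import cmath, math

# Spot-check (added) of 15.5.1.11: the real numbers z = -pi/4 + n*pi satisfy
# sin(z) = -cos(z), while a sample point off the real axis does not.
worst = max(abs(cmath.sin(-math.pi / 4 + n * math.pi) + cmath.cos(-math.pi / 4 + n * math.pi))
            for n in range(-3, 4))
off_axis = abs(cmath.sin(-math.pi / 4 + 1j) + cmath.cos(-math.pi / 4 + 1j))
```

The off-axis residual is large (it equals √2 |sinh 1| at that point), consistent with y = 0 being forced.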

15.5.1.12. For all complex numbers z = x + iy, where x, y are real, we have

(a) cosh(z) ≜ (e^z + e^{−z})/2 = (e^{x+iy} + e^{−x−iy})/2 = (e^x(cos y + i sin y) + e^{−x}(cos y − i sin y))/2

= ((e^x + e^{−x})/2) cos y + i ((e^x − e^{−x})/2) sin y = cosh x cos y + i sinh x sin y.

(b) We will use the definitions of cosh(z) and sinh(z) in terms of exponential functions:

cosh(z) = i sinh(z) ⇐⇒ (e^z + e^{−z})/2 = i (e^z − e^{−z})/2 ⇐⇒ e^z + e^{−z} = i(e^z − e^{−z})

⇐⇒ (1 − i)e^z = −(1 + i)e^{−z} ⇐⇒ e^z(1 − i)e^z = −e^z(1 + i)e^{−z} = −(1 + i) ⇐⇒ e^{2z} = −(1 + i)/(1 − i),

so we want to solve

e^{2z} = −(1 + i)(1 + i)/((1 − i)(1 + i)) = −(2i)/2 = −i = e^{−iπ/2}.

The solutions are

z = ½(−iπ/2 + i2πk) = i(−π/4 + πk),

where k is any integer. [By the way, there is another approach, but it does not seem to work well: cosh z = i sinh z ⇐⇒ cosh x cos y + i sinh x sin y = i(sinh x cos y + i cosh x sin y) ⇐⇒ ... ]

(c) We will use the definition of cosh(z) in terms of exponential functions:

−e = cosh(z) ≜ (e^z + e^{−z})/2 ⇐⇒ −2e = e^z + e^{−z} ⇐⇒ −2e · e^z = e^{2z} + 1 ⇐⇒ (e^z)² + 2e · e^z + 1 = 0,

which is a quadratic equation in w ≜ e^z, so

e^z = w = (−2e ± √(4e² − 4))/2 = −e ± √(e² − 1) = e^{iπ} · (e ∓ √(e² − 1)) = e^{ln(e ∓ √(e² − 1)) + iπ},

so the solutions are

z = ln(e ∓ √(e² − 1)) + i(π + 2πk),

where k is any integer.

15.5.1.13. Start on the more complicated side and use basic facts to get to the other side:

sin z₁ cos z₂ + cos z₁ sin z₂ = ((e^{iz₁} − e^{−iz₁})/(i2)) · ((e^{iz₂} + e^{−iz₂})/2) + ((e^{iz₁} + e^{−iz₁})/2) · ((e^{iz₂} − e^{−iz₂})/(i2))

= (1/(i4)) (e^{i(z₁+z₂)} − e^{i(−z₁+z₂)} + e^{i(z₁−z₂)} − e^{−i(z₁+z₂)}) + (1/(i4)) (e^{i(z₁+z₂)} + e^{i(−z₁+z₂)} − e^{i(z₁−z₂)} − e^{−i(z₁+z₂)})

= (1/(i4)) · 2 · (e^{i(z₁+z₂)} − e^{−i(z₁+z₂)}) = (e^{i(z₁+z₂)} − e^{−i(z₁+z₂)})/(i2) ≜ sin(z₁ + z₂),

thus establishing (15.39) in Section 15.5.
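Both the root set found in 15.5.1.12(b) and the addition formula of 15.5.1.13 lend themselves to a quick numerical confirmation. This check is an addition (not part of the printed manual):

```python
import cmath, math

# Spot-check (added) of 15.5.1.12(b): z = i(-pi/4 + pi*k) solves cosh(z) = i*sinh(z);
# and of 15.5.1.13: sin(z1+z2) = sin z1 cos z2 + cos z1 sin z2.
worst_b = max(abs(cmath.cosh(z) - 1j * cmath.sinh(z))
              for z in (1j * (-math.pi / 4 + math.pi * k) for k in range(-3, 4)))

z1, z2 = 0.6 - 0.8j, -1.3 + 0.4j
worst_13 = abs(cmath.sin(z1 + z2)
               - (cmath.sin(z1) * cmath.cos(z2) + cmath.cos(z1) * cmath.sin(z2)))
```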

15.5.1.14. For z = x + iy, where x, y are real, the equation is b + i0 = sin(x + iy) = sin x cos(iy) + cos x sin(iy) = sin x cosh y + i cos x sinh y.


Separating the real and imaginary parts gives the system of equations

(1) b = sin x cosh y
(2) 0 = cos x sinh y.

Equation (2) is easier to solve than (1): 0 = cos x sinh y ⇐⇒ (i) 0 = cos x or (ii) 0 = sinh y. (2)(ii) is true only for y = 0. (2)(i) is true for x = (n − ½)π, for any integer n.

Substitute x = (n − ½)π into the first equation, (1): b = sin x cosh y = sin((n − ½)π) cosh y = (−1)^{n+1} cosh y.

For n = even = 2k, (1) becomes cosh y = −b, which has no solution unless b = −1, because −1 < b ≤ 1 would imply that cosh y < 1. If b = −1, then the only solution for y would be y = 0. For n = odd = 2ℓ − 1, (1) becomes cosh y = b, which has no solution unless b = 1, because −1 ≤ b < 1 would imply that cosh y < 1. If b = 1, then the only solution for y would be y = 0.

To summarize what we have so far: for |b| = 1, x = (n − ½)π for some integer(s) n and y = 0 gives a solution(s) for z = x + iy. For |b| < 1, case (2)(i) gives no solution for z. So, we see that |b| ≤ 1 implies that y = Im(z) = 0, that is, the only solutions for z are real.

15.5.1.15. For z = x + iy, where x, y are real, the equation is

b = cos(x + iy) = cos x cos(iy) − sin x sin(iy) = cos x cosh y − i sin x sinh y.

Separating the real and imaginary parts gives the system of equations

(1) b = cos x cosh y
(2) 0 = −sin x sinh y.

Equation (2) is easier to solve than (1): 0 = sin x sinh y ⇐⇒ (i) 0 = sin x or (ii) 0 = sinh y. (2)(i) is true only for x = nπ for any integer n. (2)(ii) is true only for y = 0.

Substitute x = nπ into the first equation, (1): b = cos x cosh y = cos(nπ) cosh y = (−1)ⁿ cosh y.

For n = even = 2k, (1) becomes cosh y = b, which has no solution unless b = 1, because −1 ≤ b < 1 would imply that cosh y < 1. If b = 1, then the only solution for y would be y = 0. For n = odd = 2ℓ − 1, (1) becomes cosh y = −b, which has no solution unless b = −1, because −1 < b ≤ 1 would imply that cosh y < 1. If b = −1, then the only solution for y would be y = 0.

To summarize what we have so far: for |b| = 1, x = nπ for some integer(s) n and y = 0 gives a solution(s) for z = x + iy. For |b| < 1, case (2)(i) gives no solution for z. So, we see that |b| ≤ 1 implies that y = Im(z) = 0, that is, the only solutions for z are real.

15.5.1.16. For z = x + iy, where x, y are real. First, we calculate that sinh(z) ≜

(e^z − e^{−z})/2 = (e^{x+iy} − e^{−x−iy})/2 = (e^x(cos y + i sin y) − e^{−x}(cos y − i sin y))/2

= ((e^x − e^{−x})/2) cos y + i ((e^x + e^{−x})/2) sin y = sinh x cos y + i cosh x sin y.

For z = x + iy, where x, y are real, the equation is

0 + ib = sinh(x + iy) = sinh x cos y + i cosh x sin y.


Separating the real and imaginary parts gives the system of equations

(1) 0 = sinh x cos y
(2) b = cosh x sin y.

Equation (1) is easier to solve than (2): 0 = sinh x cos y ⇐⇒ (i) 0 = sinh x or (ii) 0 = cos y. (1)(i) is true only for x = 0, in which case the corresponding solution z is imaginary. (1)(ii) is true only for y = (n − ½)π, for any integer n.

Substitute y = (n − ½)π into the second equation, (2): b = cosh x sin y = sin((n − ½)π) cosh x = (−1)^{n+1} cosh x.

For n = even = 2k, (2) becomes cosh x = −b, which has no solution unless b = −1, because −1 < b ≤ 1 would imply that cosh x < 1. If b = −1, then the only solution for x would be x = 0. For n = odd = 2ℓ − 1, (2) becomes cosh x = b, which has no solution unless b = 1, because −1 ≤ b < 1 would imply that cosh x < 1. If b = 1, then the only solution for x would be x = 0.

To summarize what we have so far: for |b| = 1, y = (n − ½)π for some integer(s) n and x = 0 gives a solution(s) for z = x + iy. For |b| < 1, case (1)(ii) gives no solution for z. So, we see that |b| ≤ 1 implies that x = Re(z) = 0, that is, the only solutions for z are imaginary.

15.5.1.17. Method 1: Let z = x + iy, where x, y are real. Using the result of problem 15.5.1.12, we want to solve

b = cosh(z) = cosh x cos y + i sinh x sin y.

Separating the real and imaginary parts gives the system of equations

(1) b = cosh x cos y
(2) 0 = sinh x sin y.

Equation (2) is easier to solve than (1): 0 = sinh x sin y ⇐⇒ (i) 0 = sinh x or (ii) 0 = sin y. (2)(i) is true only for x = 0. (2)(ii) is true only for y = nπ, where n is any integer.

Substitute y = nπ into the first equation, (1): b = cosh x cos y = cos(nπ) cosh x = (−1)ⁿ cosh x.

For n = even = 2k, (1) becomes cosh x = b, which has no solution unless b = 1, because −1 ≤ b < 1 would imply that cosh x < 1. If b = 1, then the only solution for x would be x = 0. For n = odd = 2ℓ − 1, (1) becomes cosh x = −b, which has no solution unless b = −1, because −1 < b ≤ 1 would imply that cosh x < 1. If b = −1, then the only solution for x would be x = 0.

To summarize what we have so far: for |b| = 1, y = nπ for some integer(s) n and x = 0 gives a solution(s) for z = x + iy. For |b| < 1, case (2)(ii) gives no solution for z. So, we see that |b| ≤ 1 implies that x = Re(z) = 0, that is, the only solutions for z are imaginary.

Method 2: A generalization of (15.40) in Section 15.5 to all complex numbers gives us that cosh(z) = cos(iz). So, b = cosh(z) if, and only if, b = cos(iz). But we know from the result of problem 15.5.1.15 that cos(iz) = b only if Im(iz) = 0, that is, only if Re(z) = 0. So, if |b| ≤ 1 then x = Re(z) = 0, that is, the only solutions for z are imaginary.
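The two identities on which 15.5.1.16 and 15.5.1.17 rest can be confirmed numerically. This check is an addition (not part of the printed manual):

```python
import cmath

# Spot-check (added) of the identities used in 15.5.1.16-15.5.1.17:
# cosh(x+iy) = cosh x cos y + i sinh x sin y, and cosh(z) = cos(iz).
z = 0.7 + 2.1j
x, y = z.real, z.imag
decomposed = cmath.cosh(x) * cmath.cos(y) + 1j * cmath.sinh(x) * cmath.sin(y)
err_decomp = abs(cmath.cosh(z) - decomposed)
err_cos_iz = abs(cmath.cosh(z) - cmath.cos(1j * z))
```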


Section 15.6.4

15.6.4.1. (a) 1/(z − 2) = −1/(2 − z) = (−1/2) · 1/(1 − z/2) = Σ_{j=0}^∞ (−1/2)(z/2)^j = Σ_{j=0}^∞ −(2^{−j−1}) z^j converges for |z| < 2, so

f(z) = 3z^{−1} + Σ_{j=0}^∞ −(2^{−j−1}) z^j

is a Laurent series that converges for 0 < |z| < 2.

(b) 1/(z − 2) = (1/z) · 1/(1 − 2/z) = (1/z) Σ_{j=0}^∞ (2/z)^j = Σ_{j=0}^∞ 2^j z^{−j−1} converges for |z| > 2, so

f(z) = 3z^{−1} + Σ_{j=1}^∞ 2^{j−1} z^{−j} = 4z^{−1} + Σ_{j=2}^∞ 2^{j−1} z^{−j}

is a Laurent series that converges for |z| > 2.

15.6.4.2. (a) The domain being 1 < |z − 1| suggests that the expansion should be in powers of (z − 1). We rewrite

1/(z − 2) = 1/((z − 1) − 1) = (1/(z − 1)) · 1/(1 − 1/(z − 1)) = Σ_{j=0}^∞ (z − 1)^{−j} · (z − 1)^{−1} = Σ_{j=0}^∞ (z − 1)^{−j−1},

which converges for |z − 1| > 1, so

f(z) = 3/(z − 1) + Σ_{j=0}^∞ (z − 1)^{−j−1} = 3(z − 1)^{−1} + Σ_{j=1}^∞ (z − 1)^{−j} = 4(z − 1)^{−1} + Σ_{j=2}^∞ (z − 1)^{−j}

is a Laurent series that converges for |z − 1| > 1.

(b) The domain being 0 < |z − 1| < 1 suggests that the expansion should be in powers of (z − 1). We rewrite

1/(z − 2) = 1/((z − 1) − 1) = −1/(1 − (z − 1)) = Σ_{j=0}^∞ (−1)(z − 1)^j,

which converges for 0 < |z − 1| < 1, so

f(z) = 3(z − 1)^{−1} + Σ_{j=0}^∞ (−1)(z − 1)^j

is a Laurent series that converges for 0 < |z − 1| < 1.

15.6.4.3. For the first term we have

(1) 1/(z − 1) = −1/(1 − z) = Σ_{j=0}^∞ (−1) z^j,  converges for |z| < 1

(2) 1/(z − 1) = (1/z) · 1/(1 − 1/z) = Σ_{j=0}^∞ z^{−j−1} = Σ_{k=1}^∞ z^{−k},  converges for |z| > 1.

For the second term we have

(3) 1/(z + 2) = 1/(2 + z) = (1/2) · 1/(1 − (−z/2)) = Σ_{j=0}^∞ ((−1)^j/2^{j+1}) z^j,  converges for |z| < 2

(4) 1/(z + 2) = (1/z) · 1/(1 − (−2/z)) = Σ_{j=0}^∞ (−2)^j z^{−j−1} = Σ_{k=1}^∞ (−2)^{k−1} z^{−k},  converges for |z| > 2.

(a) For 1 < |z| < 2, combining series (2) and (3), we have

f(z) = 1/(z − 1) + 1/(z + 2) = Σ_{k=1}^∞ z^{−k} + Σ_{j=0}^∞ ((−1)^j/2^{j+1}) z^j.

(b) For 0 < |z| < 1, combining series (1) and (3), we have

f(z) = 1/(z − 1) + 1/(z + 2) = Σ_{j=0}^∞ (−1) z^j + Σ_{j=0}^∞ ((−1)^j/2^{j+1}) z^j = Σ_{j=0}^∞ (−1 + (−1)^j/2^{j+1}) z^j.

15.6.4.4. For the first term we have

(1) 1/(z + 1) = 1/(1 − (−z)) = Σ_{j=0}^∞ (−1)^j z^j,  converges for |z| < 1

(2) 1/(z + 1) = (1/z) · 1/(1 − (−1/z)) = Σ_{j=0}^∞ (−1)^j z^{−j−1} = Σ_{k=1}^∞ (−1)^{k−1} z^{−k},  converges for |z| > 1.

For the second term we have

(3) 1/(z − 2) = −1/(2 − z) = (−1/2) · 1/(1 − z/2) = Σ_{j=0}^∞ (−1/2^{j+1}) z^j,  converges for |z| < 2

(4) 1/(z − 2) = (1/z) · 1/(1 − 2/z) = Σ_{j=0}^∞ 2^j z^{−j−1} = Σ_{k=1}^∞ 2^{k−1} z^{−k},  converges for |z| > 2.

Ex. 1: For |z| > 2, combining series (2) and (4), we have

f(z) = 1/(z + 1) + 1/(z − 2) = Σ_{k=1}^∞ (−1)^{k−1} z^{−k} + Σ_{k=1}^∞ 2^{k−1} z^{−k} = Σ_{k=1}^∞ ((−1)^{k−1} + 2^{k−1}) z^{−k}.

Ex. 2: For |z| < 1, combining series (1) and (3), we have

f(z) = 1/(z + 1) + 1/(z − 2) = Σ_{j=0}^∞ (−1)^j z^j + Σ_{j=0}^∞ (−1/2^{j+1}) z^j = Σ_{j=0}^∞ ((−1)^j − 1/2^{j+1}) z^j.

Ex. 3: For 1 < |z| < 2, combining series (2) and (3), ...
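The two expansions just obtained in Ex. 1 and Ex. 2 can be checked by summing enough terms at a sample point in each region. This check is an addition (not part of the printed manual):

```python
# Spot-check (added) of 15.6.4.4: partial sums of the Ex. 1 and Ex. 2 series
# are compared with f(z) = 1/(z+1) + 1/(z-2) at points in their regions of validity.
def f(z):
    return 1 / (z + 1) + 1 / (z - 2)

N = 200
z_out = 3.0 + 1.0j   # in |z| > 2, where the Ex. 1 series applies
ex1 = sum(((-1) ** (k - 1) + 2 ** (k - 1)) * z_out ** (-k) for k in range(1, N))
z_in = 0.3 - 0.2j    # in |z| < 1, where the Ex. 2 series applies
ex2 = sum(((-1) ** j - 1 / 2 ** (j + 1)) * z_in ** j for j in range(N))
err_out = abs(ex1 - f(z_out))
err_in = abs(ex2 - f(z_in))
```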


15.6.4.5. Ex. 1: Partial fractions gives f(z) = A/z + B/(z − 1) for some constants A, B to be determined. Multiply through by z(z − 1), the denominator of the LHS, to get

1 = A(z − 1) + Bz.

Substitute in z = 0 to get 1 = −A, hence A = −1; substitute in z = 1 to get 1 = B. Geometric series implies that

f(z) = −1/z + 1/(z − 1) = −1/z − 1/(1 − z) = −1/z − Σ_{j=0}^∞ z^j

is a Laurent series that converges in the domain 0 < |z| < 1.

Ex. 2: Without use of partial fractions,

f(z) = 1/(z(z − 1)) = (1/z) · 1/(z − 1) = (1/z) · 1/(z(1 − z^{−1})) = (1/z²) Σ_{ℓ=0}^∞ z^{−ℓ} = Σ_{ℓ=0}^∞ z^{−ℓ−2},

which is a Laurent series that converges in the domain 1 < |z|.

15.6.4.6. For the first term we have

(1) 1/(z + 2) = (1/2) · 1/(1 − (−z/2)) = Σ_{j=0}^∞ (1/2)(−1/2)^j z^j,  converges for |z| < 2

(2) 1/(z + 2) = (1/z) · 1/(1 − (−2/z)) = Σ_{j=0}^∞ (−2)^j z^{−j−1} = Σ_{k=1}^∞ (−2)^{k−1} z^{−k},  converges for |z| > 2.

For the second term we have

(3) 1/(z − 3) = (−1/3) · 1/(1 − z/3) = Σ_{j=0}^∞ (−1/3^{j+1}) z^j,  converges for |z| < 3

(4) 1/(z − 3) = (1/z) · 1/(1 − 3/z) = Σ_{j=0}^∞ 3^j z^{−j−1} = Σ_{k=1}^∞ 3^{k−1} z^{−k},  converges for |z| > 3.

Ex. 1: For |z| > 3, combining series (2) and (4), we have

f(z) = 1/(z + 2) + 1/(z − 3) = Σ_{k=1}^∞ (−2)^{k−1} z^{−k} + Σ_{k=1}^∞ 3^{k−1} z^{−k} = Σ_{k=1}^∞ ((−2)^{k−1} + 3^{k−1}) z^{−k}.

Ex. 2: For |z| < 2, combining series (1) and (3), we have

f(z) = 1/(z + 2) + 1/(z − 3) = Σ_{j=0}^∞ (1/2)(−2)^{−j} z^j + Σ_{j=0}^∞ (−1/3^{j+1}) z^j = Σ_{j=0}^∞ ((1/2)(−2)^{−j} − 1/3^{j+1}) z^j.

Ex. 3: For 2 < |z| < 3, combining series (2) and (3), ...


15.6.4.7. Partial fractions gives f(z) = A/(z + 2) + B/(2z + 1) for some constants A, B to be determined. Multiply through by (z + 2)(2z + 1), the denominator of the LHS, to get

−(z + 5) = A(2z + 1) + B(z + 2).

Substitute in z = −2 to get −3 = −3A, hence A = 1; substitute in z = −1/2 to get −9/2 = (3/2)B, hence B = −3. Geometric series implies that

f(z) = 1/(z + 2) − 3/(2z + 1) = (1/2) · 1/(1 − (−z/2)) − 3 · 1/(1 − (−2z)) = (1/2) Σ_{j=0}^∞ (−z/2)^j − 3 Σ_{j=0}^∞ (−2z)^j.

That is,

(?) f(z) = Σ_{j=0}^∞ ((1/2)(−1/2)^j − 3(−2)^j) z^j.

Both series converge as long as both |−z/2| < 1 and |−2z| < 1. So, (?) gives a Laurent series in the domain 0 < |z| < 1/2.

Using again the partial fractions of part (a),

f(z) = 1/(z + 2) − 3/(2z + 1) = (1/2) · 1/(1 − (−z/2)) − (3/(2z)) · 1/(1 − (−2z)^{−1}),

hence

(??) f(z) = (1/2) Σ_{j=0}^∞ (−z/2)^j − (3/(2z)) Σ_{j=0}^∞ (−1/(2z))^j.

Both series converge as long as both |−z/2| < 1 and |−1/(2z)| < 1, that is, as long as |z| < 2 and |z| > 1/2. So, (??) gives a Laurent series in the domain 1/2 < |z| < 2.

15.6.4.8. It would be difficult to raise a power series of cos(z) to the second power. Instead, we will work in a way similar to Example 15.41 in Section 15.6 by using the trigonometric identity

cos²z = (1 + cos(2z))/2.

So, using the Taylor series of cos(2z), which converges for all z, we have that

f(z) = (1/z) cos²z = 1/(2z) + (1/(2z)) cos(2z) = 1/(2z) + (1/(2z)) (1 + ((−1)¹/2!)(2z)² + ((−1)²/4!)(2z)⁴ + ... + ((−1)^k/(2k)!)(2z)^{2k} + ...)

= 1/z + ((−1)¹ 2¹/2!) z + ((−1)² 2³/4!) z³ + ... + ((−1)^k 2^{2k−1}/(2k)!) z^{2k−1} + ... .

So, f(z) has

f(z) = z^{−1} + Σ_{k=1}^∞ ((−1)^k 2^{2k−1}/(2k)!) z^{2k−1}

as a Laurent series that converges for all z ≠ 0.
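The Laurent series just found for 15.6.4.8 can be verified against cos²z/z directly. This check is an addition (not part of the printed manual):

```python
import cmath, math

# Spot-check (added) of 15.6.4.8: the Laurent series z**(-1) +
# sum_{k>=1} (-1)**k * 2**(2k-1) / (2k)! * z**(2k-1) agrees with cos(z)**2 / z.
def laurent(z, K=40):
    s = 1 / z
    for k in range(1, K + 1):
        s += (-1) ** k * 2 ** (2 * k - 1) * z ** (2 * k - 1) / math.factorial(2 * k)
    return s

z = 0.8 - 0.5j
err = abs(laurent(z) - cmath.cos(z) ** 2 / z)
```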

15.6.4.9. f(z) = z e^{1/z} = z · Σ_{j=0}^∞ (1/j!)(1/z)^j = z · (1 + 1/z + Σ_{k=2}^∞ (1/k!) z^{−k}) = (1 + z) + Σ_{k=2}^∞ (1/k!) z^{−k+1} = (1 + z) + Σ_{j=1}^∞ (1/(j+1)!) z^{−j}.


This Laurent series converges for 0 < |z| < ∞, by using the ratio test:

lim_{j→∞} |a_{j+1}/a_j| = lim_{j→∞} | (1/(j+2)!) z^{−j−1} / ((1/(j+1)!) z^{−j}) | = lim_{j→∞} (1/(j+2)) |z^{−1}| = 0

for all z ≠ 0.

15.6.4.10. We were given the Maclaurin series

arcsin(z) = Σ_{j=0}^∞ a_j z^j = z · (1 + Σ_{k=1}^∞ [1·3···(2k−1)]/[2·4···(2k)] · z^{2k}/(2k+1)).

Define

b_k = 1 for k = 0, and b_k = [1·3···(2k−1)]/[2·4···(2k)] · 1/(2k+1) for k ≥ 1,

so arcsin(z) = z · Σ_{k=0}^∞ b_k z^{2k}.

In fact, the series for arcsin(z) converges for |z| < 1, by using the ratio test:

lim_{k→∞} |b_{k+1} z^{2(k+1)} / (b_k z^{2k})| = lim_{k→∞} [(2k+1)/(2k+2)] · [(2k+1)/(2k+3)] · |z|² = lim_{k→∞} [(2 + k^{−1})/(2 + 2k^{−1})] · [(2 + k^{−1})/(2 + 3k^{−1})] · |z|² = |z|².



∞ X

!2 bk z

2k

k=0

=z·

∞ k X X k=0

! b` bk−`

z 2k .

`=0

This is a Laurent series for f (z) that converges for |z| < 1.

15.6.4.11. By the Binomial Theorem, (x + 1)j =

j X `=0

4j = (3 + 1)j =

j X `=0

j! x` 1j−` . With x = 3, we have `!(j − `)! j

X j! 1 3` 1j−` = j! · 3` , `!(j − `)! `!(j − `)! `=0

which implies (?). Next, generalize this to give an explanation of why the law of exponents (15.27)(i) in Section 15.4, that is, ez1 +z2 = ez1 ez2 , is true: The Binomial Theorem states that (z1 + z2 )j =

j X `=0

hence

j! z ` z j−` , `!(j − `)! 1 2

j

X 1 1 (z1 + z2 )j = z ` z j−` . j! `!(j − `)! 1 2 `=0

c Larry

Turyn, January 8, 2014

page 47

Apply this and the convolution formula for the product of two series of complex numbers, namely, ! ∞ ! ! j ∞ ∞ X X X X a` bn = a` bj−` `=0

n=0

j=0

`=0

[Note that this follows from the convolution formula in Section 15.6 by taking z = 1.] z1 z2

e e

=

∞ X 1 ` z `! 1 `=0

!

∞ X 1 n z n! 2 n=0

!

j ∞ X X 1 1 ` = z1 z2k−` `! (k − `)! j=0 `=0

! =

∞ X 1 (z1 + z2 )j = ez1 +z2 . j! j=0


Section 15.7.2

15.7.2.1. Ex: f(z) = 1/((z + i)(z − i)(z + 1)³) = 1/((z² + 1)(z + 1)³).
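As a quick sanity check (an addition; not part of the printed manual), the claimed pole orders of this example can be probed numerically: multiplying f by the correct power of (z − z₀) produces a finite nonzero limit.

```python
# Spot-check (added) of 15.7.2.1: for f(z) = 1/((z**2+1)*(z+1)**3),
# (z - i)*f(z) has a finite nonzero limit at z = i (simple pole), and
# (z + 1)**3 * f(z) has a finite nonzero limit at z = -1 (pole of order 3).
def f(z):
    return 1 / ((z ** 2 + 1) * (z + 1) ** 3)

eps = 1e-6
simple = eps * f(1j + eps)        # approaches 1/((2i)(1+i)**3), magnitude 1/(4*sqrt(2))
third = (eps ** 3) * f(-1 + eps)  # approaches 1/((-1)**2 + 1) = 1/2
```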

15.7.2.2. First, define h(z) = e^z − 1 − z, and we will find the order of the zero of h(z) at z₀ = 0:

h(0) = e⁰ − 1 − 0 = 0; h′(z) = e^z − 1, h′(0) = e⁰ − 1 = 0; h″(z) = e^z, h″(0) = 1 ≠ 0,

so h(z) has a zero of order two at z₀ = 0. By Theorem 15.26(b) in Section 15.7, f(z) ≜ 1/h(z) = 1/(e^z − 1 − z)

has a pole of order two at z₀ = 0.

15.7.2.3. First, define h(z) = (e^z − 1)², and we will find the order of the zero of h(z) at z₀ = 0:

h(0) = (e⁰ − 1)² = 0; h′(z) = 2(e^z − 1)e^z, h′(0) = 0; h″(z) = 2(2e^{2z} − e^z), h″(0) = 2 ≠ 0,

so h(z) has a zero of order two at z₀ = 0. By Theorem 15.26(b) in Section 15.7, f(z) ≜ 1/h(z) = 1/(e^z − 1)²

has a pole of order two at z₀ = 0.

15.7.2.4. f(z) = 1/(z²(z + 1)) = 1/h(z), so we will study the zeros of the third-degree polynomial h(z) ≜ z²(z + 1) = z³ + z²:

h(0) = 0²(0 + 1) = 0; h′(z) = 3z² + 2z, h′(0) = 0; h″(z) = 6z + 2, h″(0) = 2 ≠ 0,

so h(z) has a zero of order two at z₀ = 0. By Theorem 15.26(b) in Section 15.7, f(z) ≜ 1/h(z) has a pole of order two at z₀ = 0. Also,

h(−1) = (−1)²(−1 + 1) = 0 and h′(−1) = 3 − 2 = 1 ≠ 0,

so h(z) has a zero of order one at z₀ = −1. By Theorem 15.26(b) in Section 15.7, f(z) ≜ 1/h(z) has a pole of

order one at z₀ = −1. To summarize, f(z) has a simple pole at z₁ = −1 and a pole of order two at z₂ = 0, and nowhere else. 15.7.2.5. f(z) =

1/(e^z + 1) = 1/h(z), so we will study the zeros of h(z) ≜ e^z + 1. First, we need to find the zero(s) of h(z):

0 = e^z + 1 ⇐⇒ e^z = −1 = e^{iπ} ⇐⇒ z = iπ + i2πk,

where k is any integer.


Fix any integer k. We have h(i(2k + 1)π) = e^{i(2k+1)π} + 1 = −1 + 1 = 0 and h′(z) = e^z, so h′(i(2k + 1)π) = e^{i(2k+1)π} = −1 ≠ 0. Thus h(z) has a zero of order one at z_k ≜ i(2k + 1)π. By Theorem 15.26(b) in Section 15.7, f(z) ≜

1 has a h(z)

simple pole at zk , i(2k + 1)π for every integer k, and nowhere else. 15.7.2.6. f (z) =

z 1 z 1 + = + has singularities at z0 = 0, z1 = −i2, and z2 = i2, and z2 + 4 z (z + i2)(z − i2) z

nowhere else. Method 1 : (a) f (z) =

1 z

 1+

z2 (z + i2)(z − i2)

 =

z2 1 g(z), where g(z) , 1+ is analytic 1 (z − 0) (z + i2)(z − i2)

at z0 = 0 and has g(0) = 1 6= 0. By Theorem 15.26(a) in Section 15.7, f (z) has a simple pole at z0 = 0.   z2 1 z z + i2 1 1+ = g(z), where g(z) , + is analytic at z1 = −i2 (b) f (z) = z (z + i2)(z − i2) (z + i2)1 z − i2 z 1 −i2 + −i2+i2 and has g(−i2) = −i2−i2 −i2 = 2 + 0 6= 0. By Theorem 15.26(a) in Section 15.7, f (z) has a simple pole at z1 = −i2.   1 z z − i2 z2 1 (c) f (z) = g(z), where g(z) , + is analytic at z1 = i2 1+ = 1 z (z + i2)(z − i2) (z − i2) z + i2 z i2 and has g(i2) = i2+i2 + i2−i2 = 21 + 0 6= 0. By Theorem 15.26(a) in Section 15.7, f (z) has a simple pole at i2 z2 = i2. To summarize, each of z = 0, i2, −i2 is a simple pole for f (z).

Method 2 : Partial fractions gives z A B = + (z + i2)(z − i2) z + i2 z − i2

⇐⇒

z = A(z − i2) + B(z + i2).

Substituting z = −i2 gives −i2 = A(−i2 − i2) + B · 0, hence A = 21 . Substituting z = i2 gives i2 = A · 0 + B(i2 + i2), hence B = 21 . So, 1 1 1 1 1 f (z) = · + · + . 2 z + i2 2 z − i2 z By Theorem 15.26(c) in Section 15.7, each of z = 0, i2, −i2 is a simple pole for f (z). 15.7.2.7. Ex: f (z) =

(z + i)(z − i) z2 + 1 = (z + 1)3 (z + 1)3

15.7.2.8. (a) Because f(z) has a zero of order m at $z_0$, $f(z) = (z - z_0)^m g(z)$ for some function g that is analytic at $z_0$ and satisfies $g(z_0) \ne 0$. We have
$$f'(z) = m(z - z_0)^{m-1} g(z) + (z - z_0)^m g'(z) = (z - z_0)^{m-1} \widetilde{g}(z),$$
where $\widetilde{g}(z) \triangleq m g(z) + (z - z_0) g'(z)$ is also analytic at $z_0$, by the anticipated Theorem 15.4 in Section 15.9, and $\widetilde{g}(z_0) = m g(z_0) \ne 0$. It follows that f′(z) has a zero of order m − 1 at $z_0$.
(b) If m = 1, $f(z) = (z - z_0) g(z)$ for some function g that is analytic at $z_0$ and satisfies $g(z_0) \ne 0$. It follows that $f'(z) = g(z) + (z - z_0) g'(z)$, so $f'(z_0) = g(z_0) + 0 \cdot g'(z_0) = g(z_0) \ne 0$. So, f′ does not have a zero at $z_0$ if m = 1.

15.7.2.9. Because f(z) has a pole of order m at $z_0$, $f(z) = \dfrac{h(z)}{(z - z_0)^m}$ for some function h that is analytic at $z_0$ and satisfies $h(z_0) \ne 0$. We have
$$f'(z) = \frac{-m\,h(z) + (z - z_0) h'(z)}{(z - z_0)^{m+1}} = \frac{\widetilde{h}(z)}{(z - z_0)^{m+1}},$$

˜ where h(z) , −mh(z) + (z − z0 )h0 (z) is also analytic at z0 , by the anticipated Theorem 15.4 in Section 15.9, and e h(z0 ) = −mh(z0 ) 6= 0. It follows that f 0 (z) has a pole of order m + 1 at z0 . 15.7.2.10. By Theorem 15.24 in Section 15.7, for k = 1, ..., 4, fk (z) = (z − z0 )2 gk (z) for some function gk (z) that is both analytic and has gk (z0 ) 6= 0. So,    f1 (z) f2 (z) h(z) , det = f1 (z)f4 (z) − f2 (z)f3 (z) = (z − z0 )4 g1 (z)g4 (z) − g2 (z)g3 (z) f3 (z) f4 (z) and H(z) ,

(z − z0 )2 g4 (z) f4 (z) g4 (z)  = (z − z0 )−2 · . = 4 h(z) (z − z0 ) g1 (z)g4 (z) − g2 (z)g3 (z) g1 (z)g4 (z) − g2 (z)g3 (z)

Define G(z) , g1 (z)g4 (z) − g2 (z)g3 (z), which is analytic at z0 . Case 1 : If G(z0 ) 6= 0, then H(z) has a pole of order two at z0 , because of Theorem 15.26(b) in Section 15.7, g4 (z0 ) 6= 0. along with the fact that g4 (z0 ) 6= 0 implies that G(z0 ) Case 2 : If G(z) has a zero of order m at z0 , then H(z) has a pole of order (m − 2) at z0 , because Theorem g4 (z) 15.26(b) in Section 15.7 implies has a pole of order m at z0 , so Theorem 15.28(b) in Section 15.7 G(z) g4 (z) implies that (z − z0 )−2 · has a pole of order (m − 2) at z0 . G(z) 15.7.2.11. By Theorem 15.24 in Section 15.7, h(z) having a zero of order m at z0 implies there is a function k(z) such that h(z) = (z − z0 )m k(z), where k(z0 ) 6= 0 and k(z) is analytic at z0 . So, (?) f (z) =

g(z) g(z) g(z)/k(z) = = . h(z) (z − z0 )m k(z) (z − z0 )m

Because k(z0 ) 6= 0, and we were given that g(z0 ) 6= 0, and both g(z) and k(z) are analytic at z0 it follows g(z) that the function is analytic at z0 and non-zero at z0 . By this and (?), Theorem 15.26(a) in Section k(z) 15.7 applies to justify that f (z) has a pole of order m at z0 . 15.7.2.12. We are given that f (z) = f1 (z) + f2 (z). We are given that f1 (z) has a pole of order m at z0 , hence by Definition 15.22(c) in Section 15.7, f (z) = a−m (z − z0 )−m +

∞ X

aj (z − z0 )j ,

j=−m+1

where a−m 6= 0. We are given that f2 (z) is analytic at z0 or has a pole of order less than m at z0 , hence f2 (z) =

∞ X

bj (z − z0 )j .

j=−m+1

[By the way, b−m+1 = ... = b−1 = 0 if f2 (z) is analytic at z0 .]

We have f (z) = f1 (z) + f2 (z) = a−m (z − z0 )−m +

∞ X j=−m+1

= a−m (z − z0 )−m +

∞ X

∞ X

aj (z − z0 )j +

bj (z − z0 )j

j=−m+1

(aj + bj ) (z − z0 )j ,

j=−m+1

hence f (z) has a pole of order m at z0 , by Definition 15.22(c) in Section 15.7. 15.7.2.13. By Theorem 15.24 in Section 15.7, because g(z) has a zero of order m at z0 , and h(z) has a zero of order n at z0 , there are functions k(z) and `(z) that are analytic and non-zero at z0 and such that g(z) = k(z)(z − z0 )m

and h(z) = `(z)(z − z0 )n ,

So, f (z) , g(z)h(z) = k(z)`(z)(z − z0 )m (z − z0 )n that is,  (?) f (z) = k(z)`(z) (z − z0 )m+n . Because k(z)`(z) is analytic and non-zero at z0 , it follows that f (z) has a zero of order (m + n) at z0 . 15.7.2.14. By Theorem 15.24 in Section 15.7, there is a function `(z) that is analytic and non-zero at z0 for which h(z) = `(z)(z − z0 )m . By Theorem 15.27 in Section 15.7, there is a function k(z) that is analytic and non-zero at z0 for which g(z) = (z − z0 )−m k(z). So,  (?) f (z) , g(z)h(z) = (z − z0 )m−m k(z)`(z) = k(z)`(z). Because k(z)`(z) is analytic and non-zero at z0 , (?) implies that f (z) has a removable singularity at z0 . 15.7.2.15. By Theorem 15.24 in Section 15.7, there is a function `(z) that is analytic and non-zero at z0 for which h(z) = `(z)(z − z0 )n . By Theorem 15.27 in Section 15.7, there is a function k(z) that is analytic and non-zero at z0 for which g(z) = (z − z0 )−m k(z). So,  (?) f (z) , g(z)h(z) = (z − z0 )n−m k(z)`(z) .  Because k(z)`(z) is analytic and non-zero at z0 , (?) implies that f (z) has a removable singularity at z0 . The extension of f (z) defined in Theorem 15.25 in Section 15.7 has a zero of order n − m, by Theorem 15.24 in Section 15.7. 15.7.2.16. By Theorem 15.27 in Section 15.7, g(z) = (z − z0 )−m k(z), where k(z) is both analytic at z0 and has k(z0 ) 6= 0. It follows that for z 6= z0 ,  1  1 1 m = = (z − z ) , 0 g(z) (z − z0 )−m k(z) k(z) where k(z0 ) 6= 0. If we define

then

 1    (z − z0 )m ,  g 1 k(z) , g(z)   0,

 z 6= z0   z = z0

,

 

g 1 has a zero of order m at z0 , by Theorem 15.24. g(z)


Section 15.8.3

15.8.3.1. Parametrizing the curve as $z(t) = 3e^{it}$, $0 \le t \le 2\pi$,
$$\oint_{|z|=3} \bar{z}\, dz = \int_0^{2\pi} 3e^{-it}\,\big(i3e^{it}\big)\, dt = \int_0^{2\pi} 9i\, dt = i18\pi.$$

15.8.3.2. Parametrizing the curve as $z(t) = i + e^{it}$, $0 \le t \le 2\pi$,
$$\oint_{|z-i|=1} \bar{z}\, dz = \int_0^{2\pi} \big({-i} + e^{-it}\big)\big(ie^{it}\big)\, dt = \int_0^{2\pi} \big(e^{it} + i\big)\, dt = \Big[-ie^{it} + it\Big]_0^{2\pi} = -ie^{i2\pi} + i + i2\pi = i2\pi.$$
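Neither integrand above is analytic ($\bar{z}$ fails the Cauchy–Riemann equations), so the values genuinely depend on the curve; still, both can be checked numerically. The sketch below is not part of the original solution: it is a plain-Python check using the periodic trapezoid rule, with `circle_integral` a helper defined here for the purpose.

```python
import cmath
import math

def circle_integral(f, center, radius, n=4096):
    # Trapezoid rule on the parametrization z(t) = center + radius*e^{it},
    # 0 <= t < 2*pi; for smooth periodic integrands this converges rapidly.
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = center + radius * cmath.exp(1j * h * k)
        total += f(z) * (1j * (z - center))  # f(z(t)) * z'(t), z'(t) = i*(z - center)
    return total * h

# 15.8.3.1: integral of conj(z) over |z| = 3; expect 18*pi*i.
val1 = circle_integral(lambda z: z.conjugate(), 0.0, 3.0)
# 15.8.3.2: integral of conj(z) over |z - i| = 1; expect 2*pi*i.
val2 = circle_integral(lambda z: z.conjugate(), 1j, 1.0)
```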

15.8.3.3. First, partial fractions gives
$$\frac{z}{(z-i)(z-2i)} = \frac{A}{z-2i} + \frac{B}{z-i} \iff z = A(z-i) + B(z-2i).$$
Substituting in z = i implies i = B(−i), hence B = −1, and substituting in z = 2i implies 2i = A(i), hence A = 2. So,
$$f(z) \triangleq \frac{z}{(z-i)(z-2i)} = \frac{2}{z-2i} - \frac{1}{z-i}.$$
In the domain $D \triangleq D_0\big(\tfrac{7}{4}\big)$, for example, the function $\dfrac{2}{z-2i}$ is analytic, so the Cauchy–Goursat Theorem (Theorem 15.36 in Section 15.8) implies
$$\oint_{|z|=3/2} \frac{2}{z-2i}\, dz = 0.$$
The positively oriented circle $|z| = \tfrac{3}{2}$ can be deformed in D to the positively oriented circle $|z-i| = \tfrac{1}{2}$, so the Deformation Theorem (Theorem 15.35 in Section 15.8) and Theorem 15.32 in Section 15.8 together imply
$$\oint_{|z|=3/2} \frac{1}{z-i}\, dz = \oint_{|z-i|=1/2} \frac{1}{z-i}\, dz = 2\pi i.$$
Putting everything together yields
$$\oint_{|z|=3/2} \frac{z}{(z-i)(z-2i)}\, dz = \oint_{|z|=3/2} \frac{2}{z-2i}\, dz - \oint_{|z|=3/2} \frac{1}{z-i}\, dz = 0 - 2\pi i = -2\pi i.$$

15.8.3.4. First, partial fractions gives

⇐⇒

z = A(z − i) + B(z − 2i).

Substituting in z = i implies i = B(−i), hence B = −1, and substituting in z = 2i implies 2i = A(i), hence A = 2. So, z 2 1 f (z) , = − . (z − i)(z − 2i) z − 2i z − i Define the domain D , D0 (3). The positively oriented circle |z| = 25 can be deformed in D to the positively oriented circle |z − 2i| = 12 , so the Deformation Theorem (Theorem 15.35 in Theorem 15.8) and Theorem 15.32 in Section 15.8 together imply ‰ ‰ 2 2 dz = dz = 2 · 2πi. 5 z − 2i 1 z − 2i |z|= 2 |z−2i|= 2 c Larry

Turyn, January 8, 2014

page 53

The positively oriented circle |z| = 52 can be deformed in D to the positively oriented circle |z − i| = 21 , so the Deformation Theorem (Theorem 15.35 in Theorem 15.8) and Theorem 15.32 in Section 15.8 together imply ‰ ‰ 1 1 dz = dz = 2πi. 5 1 z − i z − i |z|= 2 |z−i|= 2 Putting everything together yields ‰ ‰ ‰ z 2 1 dz = dz − dz = 4πi − 2πi = 2πi. 5 (z − i)(z − 2i) 5 z − 2i 5 z − i |z|= 2 |z|= 2 |z|= 2 15.8.3.5. Use two things from Section 15.4.4 in Section 15.4: (1) Log 3π (z) = w is the unique w in log(z) satisfying − π + 4

3π 3π < Im(w) ≤ π + 4 4

and

 1 d 3π Log 3π + π is not in arg(z). (z) = , for all z 6= 0 for which 4 dz z 4 An example of such a domain D is shown in the figure. So, we can calculate ˆ   1 dz = Log 3π (2) − Log 3π (−2) = ln |2| + i · 0 − ln |2| + iπ = −i π. 4 4 C z (2)

Figure 8: Answer key for problem 15.8.3.5

15.8.3.6. Use two things from Section 15.4.4 in Section 15.4: (1) Log 3π (z) = w is the unique w in log(z) satisfying − π + 4

3π 3π < Im(w) ≤ π + 4 4

and

 1 d 3π Log 3π (z) = , for all z 6= 0 for which + π is not in arg(z). 4 dz z 4 An example of such a domain D is shown in the figure. So, we can calculate ˆ  π 1 π (−i) − Log 3π (3) = ln | − i| − i dz = Log 3π − ln 3 + i · 0 = − ln 3 − i . 4 4 z 2 2 C (2)

15.8.3.7. (a) Given a function f (z) = u(x, y) + iv(x, y) and writing z = x + iy, where x and y are real, ˆ ˆ ˆ   f (z) dz = u(x, y) + iv(x, y) d(x + iy) = u(x, y) + iv(x, y) (dx + idy) C

C

C

u(x, y)dx − v(x, y)dy + i u(x, y)dy + v(x, y)dx) ,

= C

ˆ

hence

ˆ

(?)

ˆ (u dx − v dy) + i

f (z) dz = C

C

(v dx + u dy). C

(b) A complex number is zero if, and only´ if, both its real and imaginary parts are zero. In order to have path independence of the contour integral C f (z) dz, ˆ we must have path independence of both the real and ˆ (u dx − v dy) and

imaginary parts of the right hand side of (?), that is,

(v dx + u dy).

C

C

(c) Assume that the functions u(x, y) and v(x, y) are nice enough that Green’s Theorem, that is, Theorem 7.13 in Section 7.3, applies to the vector field F = u ˆ ı − v ˆ. Then ˆ ˆ (u dx − v dy) = F • dr. C

C

ˆ (u dx − v dy), for any two simple, smooth paths C1 and C2 that

So, in order to have path independence of C

have the same initial and terminal points and that do not intersect, the contour C , C1 ∪ (− C2 ) is closed, simple, and smooth, hence ˆ ˆ ˆ ˆ ˆ F • dr = F • dr + F • dr = F • dr − F • dr = 0. C

C1

−C2

But, Green’s Theorem implies then that ˆ ˆ ‹  0= F • dr = uˆ ı − v ˆ • dr = C

C

D

C1



C2

∂(−v) ∂(u) − ∂x ∂y







dA = D

∂u ∂v − − ∂y ∂x

 dA.

This being true for all such simple, smooth paths C1 and C2 in D, we must have 0≡ which can be rewritten as (??)

∂u ∂v + , ∂y ∂x ∂u ∂v ≡− , ∂y ∂x

which states that the vector field F satisfies the exactness criterion (6.43) in Section 6.4. Similarly, define the vector field G = v ˆ ı + u ˆ. Then ˆ ˆ (v dx + u dy) = G • dr. C

C

ˆ (v dx + u dy), for any two simple, smooth paths C1 and C2 that

So, in order to have path independence of C

have the same initial and terminal points and that do not intersect, the contour C , C1 ∪ (− C2 ) is closed, simple, and smooth, hence ˆ ˆ ˆ ˆ ˆ G • dr = G • dr + G • dr = G • dr − G • dr = 0. C

C1

−C2

But, Green’s Theorem implies then that ˆ ˆ 0= G • dr = C

C

C1

‹  vˆ ı + u ˆ • dr = D

C2



∂(u) ∂(v) − ∂x ∂y

 dA


This being true for all such simple, smooth paths C1 and C2 in D, we must have 0≡

∂u ∂v − , ∂x ∂y

which can be rewritten as (? ? ?)

∂u ∂v ≡ , ∂x ∂y

which states that the vector field G satisfies the exactness criterion (6.43) in Section 6.4. (d) (??) implies that the second Cauchy-Riemann equation, that the first Cauchy-Riemann equation,

∂v ∂u = − , must be satisfied. (? ? ?) implies ∂y ∂x

∂u ∂v = , must be satisfied. ∂x ∂y

(e) In Example 15.54 in Section 15.8 we were doing a contour integral of 1 1 (x − iy) x y) = = = 2 − 2 . 2 z x + iy (x + iy)(x − iy) x +y x + y2   x y This is similar to problems 7.2.5.19’s integrand, ˆ ı+ 2 ˆ . Moreover, x2 + y 2 x + y2 f (z) =

i i(x − iy) y x) i = = = 2 +i 2 . z x + iy (x + iy)(x − iy) x + y2 x + y2   y x is similar to problems 7.2.5.20’s integrand, − 2 ˆ ı +  ˆ . x + y2 x2 + y 2 In Section 7.2, it takes a lot of work to find a potential function in order to use the fundamental theorem of line integrals. In Example 15.54 in Section 15.8, there was little effort needed to find an anti-derivative  1 d because we know that Logσ (z) = . dz z 15.8.3.8. Let C be any positively oriented circle centered at z0 . Then there is an r > 0 such that C : z(t) = z0 + reit , 0 ≤ t ≤ 2π. So, with m = 1, we were asked to calculate ˆ ˆ 2π ˆ 2π 1 1 1 0 dz = z (t) dt = (−r sin t + ir cos t) dt 1 (z − z ) z(t) − z z + r cos t + ir sin t −  z 0 0 0 0  C 0 0  ˆ

= 0



−r sin t + ir cos t dt = r cos t + ir sin t

ˆ

0



i(r cos t + ir sin t) dt = r cos t + ir sin t

ˆ



i dt = 2πi. 0

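The conclusion of 15.8.3.8, that $\oint \frac{1}{z - z_0}\,dz = 2\pi i$ for any positively oriented circle centered at $z_0$, can be spot-checked numerically. This sketch is not in the original; the center `z0` and radius `r` below are arbitrary sample values, and `circle_integral` is a helper defined here.

```python
import cmath
import math

def circle_integral(f, center, radius, n=4096):
    # Trapezoid rule on z(t) = center + radius*e^{it}; spectrally accurate
    # for integrands smooth on the circle.
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = center + radius * cmath.exp(1j * h * k)
        total += f(z) * (1j * (z - center))  # f(z(t)) * z'(t)
    return total * h

# Arbitrary sample center and radius (any choice should give 2*pi*i):
z0 = 2.0 - 1.0j
r = 0.7
val = circle_integral(lambda z: 1 / (z - z0), z0, r)
```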

Section 15.9.4

15.9.4.1. Partial fractions gives
$$f(z) = \frac{1}{z(z+1)(z-2)} = \frac{A}{z} + \frac{B}{z+1} + \frac{C}{z-2} \iff 1 = A(z+1)(z-2) + Bz(z-2) + Cz(z+1),$$
where A, B, C are constants to be determined. Substitute in z = 0 to get 1 = −2A; substitute in z = −1 to get 1 = 3B; and substitute in z = 2 to get 1 = 6C. So,
$$f(z) = -\frac{1}{2}\cdot\frac{1}{z} + \frac{1}{3}\cdot\frac{1}{z+1} + \frac{1}{6}\cdot\frac{1}{z-2}.$$
So, the singularities of f are at z = 0, −1, 2, and the corresponding residues are
$$\mathrm{Res}[\,f;0\,] = \mathrm{Res}\Big[-\frac{1}{2}\cdot\frac{1}{z};\,0\Big] = -\frac{1}{2},\qquad \mathrm{Res}[\,f;-1\,] = \mathrm{Res}\Big[\frac{1}{3}\cdot\frac{1}{z+1};\,-1\Big] = \frac{1}{3},$$
and
$$\mathrm{Res}[\,f;2\,] = \mathrm{Res}\Big[\frac{1}{6}\cdot\frac{1}{z-2};\,2\Big] = \frac{1}{6}.$$

1 A B C = + 2+ z 2 (z + 1) z z z+1

⇐⇒

1 = Az(z + 1) + B(z + 1) + Cz 2 .

where A, B, C are constants to be determined. Substitute in z = 0 to get 1 = B, and substitute in z = −1 to get 1 = C. So, 1 = Az(z + 1) + (z + 1) + z 2

⇐⇒

1 − (z + 1) − z 2 = Az(z + 1)

⇐⇒

−z(z + 1) = Az(z + 1).

so A = −1. We have

1 1 1 + 2 + . z z z+1 So, the singularities of f are at z = 0, −1, and the corresponding residues are h i h i h i h 1 i 1 1 Res f ; 0 = Res − + 2 ; 0 = −1 and Res f ; −1 = Res ; −1 = 1. z z z+1 f (z) = −
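These residues can also be cross-checked numerically (not part of the original solution); note the check at z = 0 works even though that point is a pole of order two, since $\frac{1}{2\pi i}\oint f\,dz$ picks out the Laurent coefficient $a_{-1}$ regardless of the pole's order. `circle_integral` is a helper defined here.

```python
import cmath
import math

def circle_integral(f, center, radius, n=4096):
    # Trapezoid rule on z(t) = center + radius*e^{it}; spectrally accurate
    # for integrands smooth on the circle.
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = center + radius * cmath.exp(1j * h * k)
        total += f(z) * (1j * (z - center))
    return total * h

f = lambda z: 1 / (z * z * (z + 1))
res0 = circle_integral(f, 0.0, 0.4) / (2j * math.pi)    # expect -1
resm1 = circle_integral(f, -1.0, 0.4) / (2j * math.pi)  # expect  1
```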

15.9.4.3. First, let us find where the denominator, z sin z, is zero: For z = x+iy, where x, y are real, 0 = sin z gives    (1) 0 = sin x cosh y  0 + i0 = sin(x + iy) = sin x cosh(y) + i cos x sinh(y) ⇐⇒ .   (2) 0 = cos x sinh y Because cosh y ≥ 1 for all y, equation (1) is true only for (1) x = nπ, where n is an integer. Substitute x = nπ into equation (2) to get 0 = cos(nπ) sinh y = (−1)n sinh y, hence y = 0. 1 So, the only zeros of z sin z are at z = nπ, where n is an integer. The only singularities of f (z) = z sin z in D2 ( π2 ) are at z = 0 and at z = π. h(z) , z sin z has a zero of order two at z = 0, because h(0) = 0 h0 (z) = sin z + z cos z,

h0 (0) = 0

h00 (z) = 2 cos z − z sin z,

$h''(0) = 2 \ne 0.$

By Theorem 15.26 in Section 15.7, f (z) ,

1 has a pole of order two at z = 0. h(z)

h(z) , z sin z has a zero of order one at z = π, because h(π) = 0 h0 (z) = sin z + z cos z,

h0 (π) = −π 6= 0.

1 has a simple pole at z = π. h(z) We can use Theorem 15.41 in Section 15.9 to find the residues:    h  h  h i 1 1 i 1 z i 1 d d d 2 2 (z − 0) · f (z) = z · = Res[ f ; 0 ] = lim lim lim 2! z→0 dz 2! z→0 dz z sin z 2! z→0 dz sin z

By Theorem 15.26 in Section 15.7, f (z) ,

1 (z)0 sin z − z(sin z)0 1 sin z − z cos z lim = lim 2! z→0 2! z→0 sin2 z sin2 z which equals, using L’Hôpital‘s Rule =

=

1 1 (sin z − z cos z)0 cos z − 1 · cos z − z · (− sin z) 1 z sin z 1 z = lim lim = lim = lim = 0. 2! z→0 2! z→0 2 sin z cos z 2! z→0 2 sin z cos z 2! z→0 2 cos z (sin2 z)0

For the other residue, eventually using L’Hôpital‘s Rule we have      1 1 (z − π) 1 (z − π)0 Res[ f ; π ] = lim (z − π) f (z) = lim (z − π) · = lim lim = lim z→π z→π z→π z z→π z sin z sin z π z→π (sin z)0   1 1 1 = lim =− . π z→π cos z π In summary, in D2 ( π2 ), the only singularities of f are at z = 0, π and the corresponding residues are Res[ f ; 0 ] = 0, Res[ f ; π ] = − π1 . 15.9.4.4. Method 1: Partial fractions gives f (z) =

z A B = + (z − i)(z − 2i) z − i z − 2i

⇐⇒

z = A(z − 2i) + B(z − i).

where A, B, C are constants to be determined. Substitute in z = i to get i = A(−i), and substitute in i2 = B(i). So, f (z) = − We have ‰ |z|=3

z dz = (z − i)(z − 2i)



 − |z|=3

1 2 + . z−i z − i2

1 2 + z−i z − i2



 dz = −

|z|=3

1 dz + 2 z−i

‰ |z|=3

1 dz. z − i2

Using Theorem 15.30 in Section 15.8, or Theorem 15.38 in Section 15.9, we have
$$\oint_{|z|=3} \frac{z}{(z-i)(z-2i)}\, dz = -2\pi i + 2(2\pi i) = 2\pi i.$$
Method 2: Using Theorem 15.42 in Section 15.9, as in Example 15.64 in Section 15.9, we have
$$\oint_C \frac{z}{(z-i)(z-2i)}\, dz = \oint_C f(z)\, dz = 2\pi i\,\big(\mathrm{Res}[\,f;i\,] + \mathrm{Res}[\,f;i2\,]\big)$$

= 2πi

 (z  − i) · 

z  (z  − i)(z − 2i) 



at

  z · +  (z  − 2i)  − i) at (z  − 2i)(z z=i 

 = 2πi (−1 + 2) = 2πi. z=2i

Method 3: Using a dumbell contour, with C0 enclosing z = i and C1 enclosing z = 2i, as shown in the figure, (15.71) in Section 15.9, and Cauchy’s Integral Formula allows us to calculate ‰ C

z dz = (z − i)(z − 2i)

ˆ C0

 = 2πi

z dz + (z − i)(z − 2i) z z − 2i at



ˆ C1

 + 2πi

z=i

z dz = (z − i)(z − 2i)

z z − i at

ˆ C0

z z ˆ z − 2i dz + z − i dz z−i C1 z − 2i

 = −2πi + 4πi = 2πi. z=2i
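All three methods agree on the value 2πi; as a further sanity check (not part of the original solution), the contour integral over |z| = 3 can be computed numerically with `circle_integral`, a helper defined here.

```python
import cmath
import math

def circle_integral(f, center, radius, n=4096):
    # Trapezoid rule on z(t) = center + radius*e^{it}; spectrally accurate
    # for integrands smooth on the circle.
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = center + radius * cmath.exp(1j * h * k)
        total += f(z) * (1j * (z - center))
    return total * h

# Both simple poles i and 2i lie inside |z| = 3; expect 2*pi*i.
val = circle_integral(lambda z: z / ((z - 1j) * (z - 2j)), 0.0, 3.0)
```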

Figure 9: Answer key for problem 15.9.4.4 ‰



15.9.4.5. |z|= 23

z cos z z cos z + (z − 1)2 z+2



 dz =

|z|= 23

z cos z dz + (z − 1)2

‰ |z|= 32

z cos z dz = z+2

‰ |z|= 32

z cos z dz +0, (z − 1)2

z cos z 3 is analytic on and inside the curve C : |z| = . z+2 2 For the first term, whose integrand has a pole of order 2 = m + 1 at z = 1, use Theorem 15.39, that is, Cauchy’s Integral Formula in general, to get   ‰ ‰ i  z cos z z cos z z cos z 2πi d h 2πi  + dz = dz = z cos z = cos z − z sin z 2 2 3 3 (z − 1) z + 2 (z − 1) 1! dz 1! z=1 z=1 |z|= 2 |z|= 2 because

 = 2πi cos(1) − sin(1) . ‰



15.9.4.6. |z|=3

z cos z z cos z + (z − 1)2 z+2



 dz =

|z|=3

z cos z dz + (z − 1)2

‰ |z|=3

z cos z dz. z+2

The first term’s integrand has a pole of order 2 = m + 1 at z = 1, and the second term’s integrand has a simple pole at z = −2. Use Theorem 15.39, that is, Cauchy’s Integral Formula in general, to get ‰ i   z cos z 2πi d h 2πi  dz = z cos z = cos z − z sin z = 2πi cos(1) − sin(1) 2 1! dz 1! z=1 z=1 |z|=3 (z − 1)


and

‰ |z|=3

   z cos z = 2πi − 2 cos(2) . dz = 2πi · z cos z z+2 z=−2

Putting everything together, we have   ‰    z cos z z cos z + dz = 2πi cos(1) − sin(1) + 2πi − 2 cos(2) = 2πi cos(1) − sin(1) − 2 cos(2) . 2 (z − 1) z+2 |z|=3 ez 15.9.4.7. is analytic on and inside the curve C : |z| = 3, so z − iπ

‰ |z|=3

ez dz = 0. z − iπ

ez on or inside the curve C : |z| = 4 is at z = iπ. Define f (z) = ez . z − iπ Cauchy’s Integral Formula, Theorem 15.38 in Section 15.9, implies that ‰ ‰ f (z) ez dz = dz = 2πif (iπ) = 2πieiπ = 2πi · (−1) = −2πi. z − iπ z − iπ |z|=4 |z|=4 15.9.4.8. The only singularity of

1 . The only singularity of f (z) on or inside the curve C : |z − 1| = 3 is (z + 3)(z − 3) at z = 3. Using Theorem 15.42 in Section 15.9, as in Example 15.64 in Section 15.9, we have   ‰ ‰  1  πi 1 1    dz = f (z) dz = 2πi · Res[ f ; 3 ] = 2πi · (z − 3) · = 2πi . =   2 (z  − 3)(z + 3) at z=3 6 3  |z−1|=3 z − 9 C 15.9.4.9. Define f (z) ,

z on or inside the curve C : |z| = 3 is at z = 0, but this is sin z a removable singularity because L’Hôpital‘s Rule implies that there exists 15.9.4.10. The only singularity of f (z) ,

lim

z→0

z (z)0 1 = lim = lim = 1. 0 z→0 cos z sin z z→0 (sin z)

By Theorem 15.41(a) in Section 15.9, Res[ f ; 0 ] = 0, so by Theorem 15.42 in Section 15.9, ‰ ‰ z dz = f (z) dz = 2πi · Res[ f ; 0 ] = 0. |z|=3 sin z C z 15.9.4.11. The only singularities of f (z) , on or inside the curve C : |z − π2 | = 2 are at z = 0 and sin z z = π. The singularity at z = 0 is a removable singularity because L’Hôpital‘s Rule implies that there exists lim

z→0

z (z)0 1 = lim = lim = 1. z→0 cos z sin z z→0 (sin z)0

By Theorem 15.41(a) in Section 15.9, Res[ f ; 0 ] = 0. For the residue at z = π, first we find the order of the pole there. First, we note that h(z) , sin z has a simple zero at z = π, because h(π) = 0 . h0 (z) = cos z, h0 (π) = −1 6= 0,


By Theorem 15.24 in Section 15.7, sin z = (z − π)1 k(z), where k(z) is analytic at z = π and k(π) 6= 0. So, z z z k(z) f (z) = , = = sin z (z − π)1 k(z) (z − π)1 hence f (z) has a simple pole at z = π. It follows from Theorem 15.41(c) in Section 15.9, along with use of L’Hôpital‘s Rule, that        z 1 (z − π) (z − π)0 Res[ f ; π ] = lim (z−π)· = (π) lim = lim z lim = (π) lim = −π. z→π z→π cos z z→π z→π z→π (sin z)0 sin z sin z So, by Theorem 15.42 in Section 15.9, ‰ ‰ z dz = f (z) dz = 2πi · (Res[ f ; 0 ] + Res[ f ; π ]) = 2πi · (0 + (−π)) = −2π 2 i. sin z |z− π |=2 C 2 15.9.4.12. First, let us find where the denominator, ez + 1, is zero: ez + 1 = 0 ⇐⇒ ez = −1 = eiπ ⇐⇒ z = iπ + i2kπ,

where k in any integer.

z is analytic on and inside the curve C : |z| = 2. Cauchy’s residue theorem, as stated in Theorem +1 15.42 in Section 15.9, is, unfortunately, irrelevant to‰this problem. The Cauchy-Goursat integral theorem, z that is, Theorem 15.36 in Section 15.8, tells us that dz = 0. z |z|=2 e + 1 So,

ez
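Since the nearest zeros of $e^z + 1$ are at $\pm i\pi$ with $|\pm i\pi| = \pi > 2$, the integrand is analytic on and inside |z| = 2 and the integral vanishes; a numerical confirmation (not in the original solution), using the helper `circle_integral` defined here:

```python
import cmath
import math

def circle_integral(f, center, radius, n=4096):
    # Trapezoid rule on z(t) = center + radius*e^{it}; spectrally accurate
    # for integrands smooth on the circle.
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = center + radius * cmath.exp(1j * h * k)
        total += f(z) * (1j * (z - center))
    return total * h

# z/(e^z + 1) is analytic inside |z| = 2, so the integral should be 0.
val = circle_integral(lambda z: z / (cmath.exp(z) + 1), 0.0, 2.0)
```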

15.9.4.13. The denominator, z 2 + ω 2 = (z + iω)(z − iω), is zero at z = −iω and z = iω, both of which are zeiξz inside the curve C : |z| = R. Define f (z) , 2 . Cauchy’s residue theorem, as stated in Theorem 15.42 z + ω2 in Section 15.9, implies that ‰ ‰ zeiξz dz = f (z) dz = 2πi · (Res[ f ; −iω ] + Res[ f ; iω ]) 2 2 |z−1|=3 z + ω C    z eiξz z eiξz ·   + (z − iω)   − iω) at z=−iω  + iω) at (z  + iω)(z (z  − iω)(z     −iω eiξ(−iω) iω eiξ(iω) eξω + e−ξω = 2πi + = 2πi · = 2πi cosh(ξω). −i2ω i2ω 2  · = 2πi·  (z  + iω)

 z=iω
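The identity $\oint_{|z|=R} \frac{z e^{i\xi z}}{z^2+\omega^2}\,dz = 2\pi i \cosh(\xi\omega)$ (for R > |ω|) can be spot-checked numerically; this sketch is not part of the original solution, and ξ = 0.7, ω = 1.3, R = 3 are arbitrary sample values. `circle_integral` is a helper defined here.

```python
import cmath
import math

def circle_integral(f, center, radius, n=4096):
    # Trapezoid rule on z(t) = center + radius*e^{it}; spectrally accurate
    # for integrands smooth on the circle.
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = center + radius * cmath.exp(1j * h * k)
        total += f(z) * (1j * (z - center))
    return total * h

xi, om = 0.7, 1.3   # sample values; any xi, and any R > om, should work
f = lambda z: z * cmath.exp(1j * xi * z) / (z * z + om * om)
val = circle_integral(f, 0.0, 3.0)
expected = 2j * math.pi * math.cosh(xi * om)
```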

15.9.4.14. The denominator, z 2 − ω 2 , is zero at z = −ω and z = ω, both of which are inside the curve zeiξz C : |z| = R. Define f (z) , 2 . Cauchy’s residue theorem, as stated in Theorem 15.42 in Section 15.9, z − ω2 implies that ‰ ‰ zeiξz dz = f (z) dz = 2πi · (Res[ f ; −ω ] + Res[ f ; ω ]) 2 2 |z−1|=3 z − ω C = 2πi ·



 = 2πi

· (z  + ω) 

 z eiξz    (z + ω)(z − ω) at 

−ω eiξ(−ω) ω eiξ(ω) + −2ω 2ω

 = 2πi ·

z=−ω

 · +  (z  − ω)

 z eiξz    (z − ω)(z + ω) at 

 z=ω

e−iξω + eiξω = 2πi cos(ξω). 2


Section 15.10.5 15.10.5.1. Parametrize the unit circle |z| = 1 by C : z = z(θ) = eiθ , 0 ≤ θ ≤ 2π, which has positive orientation. For z on C, eiθ − e−iθ z − z −1 sin θ = = = (2iz)−1 (z 2 − 1). 2i 2i dz dz (θ) = ieiθ = iz, hence dθ = . By Theorem 15.29 in Section 15.8, On C, we have dθ iz ˆ 2π ‰ ‰ 1 dz dz 1 −(2z)2 dθ = =  2 iz 2 2 iz 2 2 −1 2 1 + sin θ 0 |z|=1 1 + (2iz) |z|=1 −(2z) + (z − 1) (z − 1) ‰ = 4i |z|=1



z −4z 2

+

(z 2

2

− 1)

dz = 4i |z|=1

z4

z dz. − 6z 2 + 1

2 The four singularities are where the denominator is zero, that is, where 0 = z 4 − 6z 2 + 1 = z 2 − 6z 2 + 1, that is, where √ √ 6 ± 62 − 4 2 = 3 ± 2 2, z = 2 that is, q q √ √ z = ± 3 + 2 2, ± 3 − 2 2. p p √ √ Of these singularities, only z3 = 3 − 2 2 and z4 = − 3 − 2 2 are inside C : |z| = 1. Because f (z) ,

z4

z z  = p p √ √  √ ,  2 − 6z + 1 z2 − 3 − 2 2 z − 3 − 2 2 z + 3 − 2 2

both z3 and z4 are simple poles of f (z). It is relatively straightforward to use Theorem 15.41(c) in Section 15.9 to calculate that ˆ 2π ‰ 1 dθ = 4i f (z) dz = 4i · 2πi ( Res[ f (z); z3 ] + Res[ f (z); z4 ] ) 1 + sin2 θ 0 |z|=1   q  z √     −2 2 = −8π z−3 p p √  √ √ at z=√3−2√2    2 z − 3 − 2 2 z −  3−2 2 · z+ 3−2 2     q  z √ √ √     −8π z+3 −2 2 p p  √ √ √      2 z − 3 − 2 2 z + 3 − 2 2 · z − 3 − 2 2 at z=− 3−2 2    p p √ √ 1 3 − 2 2 − 3 − 2 2 √ √ ·  p = −8π · p p √ √ + p √ √  (3 − 2 2) − 3 − 2 2 3−2 2+ 3−2 2 − 3−2 2− 3−2 2 = −8π ·

1 √ · −4 2



1 1 + 2 2



$= \pi\sqrt{2}.$

15.10.5.2. Using the change of variables φ = θ − π, ˆ 2π ˆ π ˆ 2π ˆ π ˆ π 1 1 1 1 1 dθ = dθ + dθ = dθ + dφ 2 2 2 2 2 1 + cos θ 1 + cos θ 0 0 1 + cos θ π 0 1 + cos θ 0 1 + cos (φ + π) ˆ π 1 =2 dθ, 2θ 1 + cos 0 c Larry

Turyn, January 8, 2014

page 62
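As a sanity check (not part of the original solution), the value $\pi\sqrt{2} \approx 4.4429$ for $\int_0^{2\pi} \frac{d\theta}{1+\sin^2\theta}$ can be confirmed with a plain trapezoid sum, which is extremely accurate for smooth periodic integrands; plain Python, no external libraries:

```python
import math

# Periodic trapezoid rule for the integral of 1/(1 + sin^2(theta)) over
# [0, 2*pi]; for a smooth 2*pi-periodic integrand this converges very fast.
n = 20000
h = 2 * math.pi / n
val = h * sum(1.0 / (1.0 + math.sin(h * k) ** 2) for k in range(n))
expected = math.pi * math.sqrt(2)   # the value obtained above
```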

2 2 because cos(φ + π) = − cos(φ) = cos2 φ. So, ˆ ˆ π 1 2π 1 √ 1 π 1 dθ = dθ = · π 2 = √ , 2 2 2 0 1 + cos θ 2 2 0 1 + cos θ by using the result of Example 15.65 in Section 15.10. 15.10.5.3. Using the change of variables φ = θ − π, ˆ 2π ˆ π ˆ 2π ˆ π ˆ π 1 1 1 1 1 dθ = dθ + dθ = dθ + dφ 2 2 2 2 2 1 + sin θ 1 + sin θ 1 + sin θ 1 + sin θ 1 + sin (φ + π) 0 0 π 0 0 ˆ π 1 =2 2 dθ, 0 1 + sin θ 2 2 because sin(φ + π) = − sin(φ) = sin2 φ. So, ˆ π ˆ 1 1 2π 1 1 √ π dθ = dθ = · π 2 = √ , 2 2 2 0 1 + sin θ 2 2 0 1 + sin θ by using the result of problem 15.10.5.1. 1 1 15.10.5.4. Similar to work in Example 15.66 in Section 15.10, define g(x) = 2 and f (z) = 2. 2 2 (x + 4) (z + 4) ˆ ∞ We will explain why the improper integral g(x) dx exists and equals −∞

2πi Res[ f (z); i2 ] = ... = ˆ

π . 16



g(x) dx exists because both

First, the improper integral −∞

ˆ

b

lim

b→∞

0

ˆ

dx (x2

+ 4)

2

and

lim

a→−∞

exist, by a Comparison Theorem for definite ˆ integrals. ˆ ∞ Because both of the improper integrals g(x)dx and ˆ



−∞

R→∞

ˆ

0

g(x) dx = lim

+ 4)

2

0

g(x)dx are convergent, we have ˆ

R

g(x) dx + lim −R

dx (x2

−∞

0

ˆ

a

0

R

g(x) dx = lim

R→∞

0

R→∞

g(x) dx. −R

Consider the contour CR = CR,1 + CR,2 shown in the figure, which also indicates by × the singularities of f (z) at z = ±i2. We calculate ˆ π ˆ π ˆ ˆ 1 1 1 iθ iRe dθ = f (z) dz = dz = iReiθ dθ.   2 2 2 2 ei2θ + 4)2 2 (R 0 CR,2 CR,2 (z + 4) 0 iθ (Re ) + 4 Using Lemma 15.1 in Section 15.10, as R → ∞, ˆ ˆ ˆ ˆ π π π 1 R 1 iθ iθ iRe dθ ≤ iRe dθ ≤ f (z) dz = 2 2 2 dθ → 0. 2 2 i2θ 2 i2θ CR,2 0 (R e + 4) + 4) 0 (R e 0 (R − 4) Also, CR,1 : z = x + i0, −R ≤ x ≤ R, so ˆ

ˆ

R

f (z) dz = CR,1

g(x) dx. −R


Figure 10: Complex integration to evaluate a real integral Putting things together, we have ˆ

ˆ

lim

f (z) dz = lim

R→∞

For R > 2,

g(x) dx

CR,2

ˆ

!

R

f (z) dz +

R→∞

CR,1 +CR,2

ˆ



=0+

−R

g(x) dx. −∞

ˆ f (z) dz = 2πi Res[ f (z); i2 ]. CR,1 +CR,2

Finally, ˆ ∞ −∞

ˆ

dx (x2 + 4)

=

2

f (z) dz = 2πi Res[ f (z); 2i ] = CR,1 +CR,2

=

i 2πi d h 1  2   (z − 2i)  1! dz  (z  − 2i)2 (z + 2i)2 z=2i 

 2πi  2πi  2 2  π = . − − = 1! (z + 2i)3 z=2i 1! (4i)3 16

In the last steps of the calculations, we used the fact that f (z) =

1 2

(z − 2i) (z + 2i)

2

has a pole of order

two at z = 2i, along with Theorem 15.41(b) in Section 15.9. x2 z2 15.10.5.5. Similar to work in Example 15.66 in Section 15.10, define g(x) = and f (z) = 2 2. (x2 + 4) (z 2 + 4) ˆ ∞ We will explain why the improper integral g(x) dx exists and equals −∞

π . 4

2πi Res[ f (z); i2 ] = ... = ˆ



g(x) dx exists because both

First, the improper integral −∞

ˆ

b

lim

b→∞

0

ˆ

dx (x2

+ 4)

2

and

lim

a→−∞

exist, by a Comparison Theorem for definite ˆ integrals. ˆ ∞ Because both of the improper integrals g(x)dx and ˆ



−∞

R→∞

ˆ

0

g(x) dx = lim

g(x) dx + lim −R

dx (x2

+ 4)

2

0

g(x)dx are convergent, we have

−∞

0

ˆ

a

0

R→∞

ˆ

R

R

g(x) dx = lim 0

R→∞

g(x) dx. −R


Figure 11: Complex integration to evaluate a real integral Consider the contour CR = CR,1 + CR,2 shown in the figure, which also indicates by × the singularities of f (z) at z = ±i2. We calculate ˆ ˆ ˆ π ˆ π z2 R2 ei2θ R2 ei2θ iθ f (z) dz = iRe dθ = dz = iReiθ dθ.   2 2 2 2 ei2θ + 4)2 2 (R CR,2 CR,2 (z + 4) 0 0 (Reiθ ) + 4 Using work similar to that which explained Lemma 15.1 in Section 15.10, as R → ∞, ˆ ˆ ˆ ˆ π π π 2 i2θ R e R3 R2 ei2θ iθ iθ iRe dθ ≤ iRe dθ ≤ f (z) dz = 2 2 dθ → 0. 2 CR,2 0 (R2 ei2θ + 4)2 0 (R2 ei2θ + 4) 0 (R − 4) Also, CR,1 : z = x + i0, −R ≤ x ≤ R, so ˆ

ˆ

R

f (z) dz = CR,1

Putting things together, we have ˆ lim f (z) dz = lim R→∞

R→∞

CR,1 +CR,2

For R > 2,

g(x) dx. −R

ˆ

ˆ f (z) dz +

CR,2

ˆ

!

R

g(x) dx



=0+

−R

g(x) dx. −∞

ˆ f (z) dz = 2πi Res[ f (z); i2 ]. CR,1 +CR,2

Finally, ˆ ∞ −∞

ˆ

x2 dx (x2 + 4)

2

=

f (z) dz = 2πi Res[ f (z); 2i ] = CR,1 +CR,2

i 2πi d h z2  2   (z − 2i)  1! dz  (z  − 2i)2 (z + 2i)2 z=2i 

0 i z2 2πi d h 2πi  (z 2 )0 (z + 2i)2 − z 2 (z + 2i)2  = = 1! dz (z + 2i)2 z=2i 1! (z + 2i)4 z=2i

=

2πi  (2z)(z + 2i)2 − z 2 2(z + 2i)  2πi  (2z)(z + 2i) − 2z 2  = 1! (z + 2i)4 1! (z + 2i)3 z=2i z=2i

=

 2πi  4iz 2πi  8  π = − = . 1! (z + 2i)3 z=2i 1! (4i)3 4

In the last steps of the calculations, we used the fact that f (z) =

z2 2

(z − 2i) (z + 2i)

2

has a pole of order

two at z = 2i, along with Theorem 15.41(b) in Section 15.9.
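Both values from 15.10.5.4 and 15.10.5.5 can be cross-checked numerically (not part of the original solutions): since each real integral equals $2\pi i\,\mathrm{Res}[\,f(z); 2i\,]$, it also equals the integral of f over a small circle enclosing only the pole at z = 2i. `circle_integral` is a helper defined here.

```python
import cmath
import math

def circle_integral(f, center, radius, n=4096):
    # Trapezoid rule on z(t) = center + radius*e^{it}; spectrally accurate
    # for integrands smooth on the circle.
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = center + radius * cmath.exp(1j * h * k)
        total += f(z) * (1j * (z - center))
    return total * h

# A circle of radius 1 around z = 2i stays far from the other pole at -2i.
val1 = circle_integral(lambda z: 1 / (z * z + 4) ** 2, 2j, 1.0)      # expect pi/16
val2 = circle_integral(lambda z: z * z / (z * z + 4) ** 2, 2j, 1.0)  # expect pi/4
```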

15.10.5.6. Parametrize the unit circle |z| = 1 by C : z = z(θ) = eiθ , 0 ≤ θ ≤ 2π, which has positive orientation. For z on C, eiθ − e−iθ z − z −1 sin θ = = = (2iz)−1 (z 2 − 1) 2i 2i and z + z −1 eiθ + e−iθ = = (2z)−1 (z 2 + 1) . cos θ = 2i 2 dz dz On C, we have (θ) = ieiθ = iz, hence dθ = . By Theorem 15.29 in Section 15.8, dθ iz 2 2 ‰ ˆ 2π ‰ z2 − 1 (2iz)−1 (z 2 − 1) dz sin2 θ −1 1 dθ = = · dz 2 1 + cos2 θ iz i |z|=1 z (2z)2 + (z 2 + 1)2 0 |z|=1 1 + (2z)−1 (z 2 + 1) ‰ =i |z|=1

2 z2 − 1  dz z z 4 + 6z 2 + 1

The five singularities are where the denominator is zero, that is, z0 , 0 and the four roots of the polynomial 2 z 4 + 6z 2 + 1 = z 2 + 6z 2 + 1, that is, where 2

z =

−6 ±

√ √ 62 − 4 = −3 ± 2 2, 2

that is,

q √ √ 3 − 2 2, ±i 3 + 2 2. p p √ √ Of these five singularities, only z0 = 0, z3 = i 3 − 2 2, and z4 = −i 3 − 2 2 are inside C : |z| = 1. Because 2 2 z2 − 1 z2 − 1 = f (z) , p p √  √  √ , z z 4 + 6z 2 + 1 z z2 + 3 + 2 2 z − i 3 − 2 2 z + i 3 − 2 2 q

z = ±i

both z3 and z4 are simple poles of f (z). It is relatively straightforward to use Theorem 15.41(c) in Section 15.9 to calculate that ˆ 2π ‰   sin2 θ dθ = i f (z) dz = i · 2πi Res[ f (z); z ] + Res[ f (z); z ] + Res[ f (z); z ] 0 3 4 1 + cos2 θ 0 |z|=1 2 z2 − 1 = −2π z z (z 4 + 6z 2 + 1) at

z=0

2  q  √ z2 − 1    √ √ −2π z −i  3 − 2 2      p p  √ √ √      z z 2 + 3 + 2 2 z − i 3− 2 2 · z + i 3 − 2 2 at z=i 3−2 2  2   q  √ z2 − 1   −2π z +i  3−2 2      p p √  √ √ at z=−i √3−2√2    2 z z + 3 + 2 2 z + i 3− 2 2 · z − i 3 − 2 2  √ 2  −(3 − 2 2) − 1 = −2π 1 + p p √ √ √  p √ √  i 3 − 2 2 −(3 − 2 2) + 3 + 2 2 i 3 − 2 2 + i 3 − 2 2 

√ 2  −(3 − 2 2) − 1 + p p √ √ √  p √ √  −i 3 − 2 2 −(3 − 2 2) + 3 + 2 2 −i 3 − 2 2 − i 3 − 2 2


 √ 2 √ 2 − 2(2 − 2 2) − 2(2 − 2 2) = −2π  1 + p p p √ √  p √ + √ √  √   i 3 − 2 2 4 2 2i 3 − 2 2 −i 3 − 2 2 4 2 −2i 3 − 2 2 ! √ ! √ √   √ (2 − 2)2 √  (6 − 4 2) 4(2 − 2)2 √ √ √ √ = π −1 + 2 · = π −1 + 2 · = −2π 1 + 2 ·  (3 − 2 2) −8 2(3 − 2 2) (3 − 2 2)  

√ √ = π(−2 + 2 2) = 2π(−1 + 2). 15.10.5.7. Parametrize the unit circle |z| = 1 by C : z = z(θ) = eiθ , 0 ≤ θ ≤ 2π, which has positive orientation. For z on C, eiθ + e−iθ z + z −1 cos θ = = = (2z)−1 (z 2 + 1). 2i 2 dz dz On C, we have (θ) = ieiθ = iz, hence dθ = . By Theorem 15.29 in Section 15.8, dθ iz ‰ ‰ ˆ 2π 1 (2z)2 dz dz 1 dθ = =  2 2 2 2 2 2 + cos θ iz iz |z|=1 2 + (2z)−1 (z 2 + 1) |z|=1 2(2z) + (z + 1) 0 ‰ ‰ z z = −4i dz = −4i dz. 2 4 2 2 2 |z|=1 8z + (z + 1) |z|=1 z + 10z + 1 2 The four singularities are where the denominator is zero, that is, where 0 = z 4 + 10z 2 + 1 = z 2 + 10z 2 + 1, that is, where √ √ −10 ± 102 − 4 2 z = = −5 ± 2 6, 2 that is, q q √ √ z = ±i 5 + 2 6, ±i 5 − 2 6. p p √ √ Of these singularities, only z3 = i 5 − 2 6 and z4 = −i 5 − 2 6 are inside C : |z| = 1. Because f (z) ,

z z = p p √  √  √ , z 4 + 10z 2 + 1 2 z +5+2 6 z−i 5−2 6 z+i 5−2 6

both z3 and z4 are simple poles of f (z). It is relatively straightforward to use Theorem 15.41(c) in Section 15.9 to calculate that ˆ 2π ‰ 1 dθ = −4i f (z) dz = −4i · 2πi ( Res[ f (z); z3 ] + Res[ f (z); z4 ] ) 2 + cos2 θ 0 |z|=1   q  √  z   √ √ = 8π z −i  5 − 2 6      p p  √ √ √     z 2 + 5 + 2 6 z − i 5−2 6 · z + i 5 − 2 6 at z=i 5−2 6    q  √ z 2 +8π z +i  5− 6      p p √  √ √ at z=−i √5−2√6    2 z + 5 + 2 6 z + i 5− 2 6 · z − i 5 − 2 6    p p √ √ 1 5−2 6 − 5−2 6 √ √ ·  p = 8π · p p √ √ + p √ √  −(5 − 2 6) + 5 + 2 6 5−2 6+ 5−2 6 − 5−2 6− 5−2 6 1 = 8π · √ · 4 6



1 1 + 2 2



$$= \frac{2\pi}{\sqrt{6}} = \sqrt{\tfrac{2}{3}}\;\pi.$$
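As a sanity check (not part of the original solution), the value $\frac{2\pi}{\sqrt{6}} \approx 2.5651$ for $\int_0^{2\pi} \frac{d\theta}{2+\cos^2\theta}$ can be confirmed with a plain trapezoid sum in plain Python:

```python
import math

# Periodic trapezoid rule for the integral of 1/(2 + cos^2(theta)) over
# [0, 2*pi]; smooth periodic integrand, so the sum converges very fast.
n = 20000
h = 2 * math.pi / n
val = h * sum(1.0 / (2.0 + math.cos(h * k) ** 2) for k in range(n))
expected = 2 * math.pi / math.sqrt(6)   # the value obtained above
```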

15.10.5.8. Please delete this problem, because it appears that we cannot use the method of Example 15.66 in Section 15.10. In particular, the use of an estimate similar to that of Lemma 15.1 in Section 15.10 does not work well, because it is not true that ˆ π R2 dθ → 0, as R → ∞. 2 0 R −1  15.10.5.9. Because cos ωx = Re eiωx , we have   ˆ ∞ 2 ˆ ∞ x cos ωx x2 iωx P. v. dx = Re P. v. e dx . 4 4 −∞ x + 1 −∞ x + 1 z2 eiωz and use the same contour CR = CR,1 + CR,2 found in Example 15.66 in Section 15.10 +1 and shown in Figure 15.32. At any point on CR,2 given by z = Reiθ = x + iy that lies in the upper half-plane, that is, has y > 0, we have iωz iω(x+iy) iωx −ωy = 1 · e−ωy < 1. e = e = e e Let f (z) ,

So,

z4

ˆ ˆ ˆ 2 π π Reiθ R2 eiω(x+iy) z2 iωz iωReiθ iθ e dz = · iReiθ dθ e iRe dθ ≤ 4 4 4 i4θ 0 (Reiθ ) + 1 CR,2 z + 1 |R e + 1| 0 ˆ π R3 < dθ → 0, as R → ∞. 4 0 R −1

So, similar to the work of Example 15.66 in Section 15.10, the convergent improper integral is given by ! ! ˆ ∞ 2 ˆ ∞ 2 ˆ R 2 ˆ x cos ωx z2 x cos ωx x cos ωx iωz dx = P. v. dx , lim dx = lim Re e dz . 2 2 2 4 R→∞ R→∞ −∞ x + 1 −R x + 1 CR,1 z + 1 −∞ x + 1 We need to find the singularities of f (z) that are inside CR : 0 = z4 + 1

z 4 = −1 = eiπ ,

⇐⇒

which gives four zeros: eiπ/4 , ei3π/4 , ei5π/4 , and ei7π/4 , of which only z1 , eiπ/4 and z2 , ei3π/4 are inside CR . So, using L’Hôpital‘s Rule  ˆ ∞ 2 h z 2 eiωz i h z 2 eiωz i x cos ωx iπ/4 i3π/4 dx = Re 2πi Res 4 ; e + Res 4 ; e 2 z +1 z +1 −∞ x + 1 



= Re 2πi

lim z→eiπ/4

= Re 2πi

lim z→eiπ/4

(z − ei3π/4 )z 2 eiωz (z − eiπ/4 )z 2 eiωz + lim z4 + 1 z4 + 1 z→ei3π/4

(z − eiπ/4 )z 2 eiωz (z 4 + 1)0

  = Re 2πi lim z→eiπ/4

0 +

lim z→ei3π/4



(z − ei3π/4 )z 2 eiωz (z 4 + 1)0

0 !!

z 2 eiωz + 2z(z − eiπ/4 )eiωz + iω(z − eiπ/4 )z 2 eiωz 4z 3

z 2 eiωz + 2z(z − eiπ/4 )eiωz + iω(z − eiπ/4 )z 2 eiωz  4z 3 z→ei3π/4   iω − √12 +i √12    eiπ/2 eiω √12 +i √12 ei3π/2 e = Re 2πi + 4ei3π/4 4eiπ/4 +

lim


 = Re

 √ √ πi  iω/√2 −ω/√2 −i3π/4 ie e e − ie−iω/ 2 e−ω/ 2 e−iπ/4 2



   ω   ω   ω  1  ω  π −ω/√2 1 1   1  √ −i√ =− e − √ − i √ − cos √ − i sin √ Re cos √ + i sin √ 2 2 2 2 2 2 2 2 2    ω   ω   ω   ω  π −ω/√2 √  π −ω/√2 =− e 2 − cos √ + sin √ =√ e cos √ − sin √ 2 2 2 2 2 2   √ ω π = π e−ω/ 2 cos √ − . 4 2
