Numerical Methods
Babu Ram Formerly Dean, Faculty of Physical Sciences, Maharshi Dayanand University, Rohtak
Copyright © 2010 Dorling Kindersley (India) Pvt. Ltd This book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, resold, hired out, or otherwise circulated without the publisher’s prior written consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser and without limiting the rights under copyright reserved above, no part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording or otherwise), without the prior written permission of both the copyright owner and the above-mentioned publisher of this book. ISBN 978-81-317-3221-2 10 9 8 7 6 5 4 3 2 1 Published by Dorling Kindersley (India) Pvt. Ltd., licensees of Pearson Education in South Asia Head Office: 7th Floor, Knowledge Boulevard, A-8 (A), Sector 62, NOIDA, 201 309, UP, India. Registered Office: 14 Local Shopping Centre, Panchsheel Park, New Delhi 110 017, India Typeset by Integra Software Services Pvt. Ltd., Pondicherry, India. Printed in India at Baba Barkha Nath Printers, New Delhi.
In the Memory of MY PARENTS Smt. Manohari Devi and Sri. Makhan Lal
Contents

Preface ix

1 Preliminaries 1
1.1 Approximate Numbers and Significant Figures 1
1.2 Classical Theorems Used in Numerical Methods 2
1.3 Types of Errors 4
1.4 General Formula for Errors 5
1.5 Order of Approximation 7
Exercises 9

2 Non-Linear Equations 11
2.1 Classification of Methods 11
2.2 Approximate Values of the Roots 11
2.3 Bisection Method (Bolzano Method) 12
2.4 Regula–Falsi Method 15
2.5 Convergence of Regula–Falsi Method 16
2.6 Newton–Raphson Method 20
2.7 Square Root of a Number Using Newton–Raphson Method 23
2.8 Order of Convergence of Newton–Raphson Method 24
2.9 Fixed Point Iteration 25
2.10 Convergence of Iteration Method 26
2.11 Square Root of a Number Using Iteration Method 27
2.12 Sufficient Condition for the Convergence of Newton–Raphson Method 28
2.13 Newton's Method for Finding Multiple Roots 29
2.14 Newton–Raphson Method for Simultaneous Equations 32
2.15 Graeffe's Root Squaring Method 37
2.16 Muller's Method 41
2.17 Bairstow Iterative Method 45
Exercises 49

3 Linear Systems of Equations 51
3.1 Direct Methods 51
3.2 Iterative Methods for Linear Systems 70
3.3 The Method of Relaxation 79
3.4 Ill-Conditioned System of Equations 82
Exercises 82

4 Eigenvalues and Eigenvectors 85
4.1 Eigenvalues and Eigenvectors 85
4.2 The Power Method 88
4.3 Jacobi's Method 94
4.4 Given's Method 101
4.5 Householder's Method 109
4.6 Eigenvalues of a Symmetric Tri-diagonal Matrix 115
4.7 Bounds on Eigenvalues (Gerschgorin Circles) 117
Exercises 119

5 Finite Differences and Interpolation 122
5.1 Finite Differences 122
5.2 Factorial Notation 130
5.3 Some More Examples of Finite Differences 132
5.4 Error Propagation 139
5.5 Numerical Unstability 143
5.6 Interpolation 143
5.7 Use of Interpolation Formulae 162
5.8 Interpolation with Unequal-Spaced Points 163
5.9 Newton's Fundamental (Divided Difference) Formula 164
5.10 Error Formulae 168
5.11 Lagrange's Interpolation Formula 171
5.12 Error in Lagrange's Interpolation Formula 179
5.13 Hermite Interpolation Formula 180
5.14 Throwback Technique 185
5.15 Inverse Interpolation 188
5.16 Chebyshev Polynomials 195
5.17 Approximation of a Function with a Chebyshev Series 198
5.18 Interpolation by Spline Functions 200
5.19 Existence of Cubic Spline 202
Exercises 209

6 Curve Fitting 213
6.1 Least Square Line Approximation 213
6.2 The Power Fit y = ax^m 219
6.3 Least Square Parabola (Parabola of Best Fit) 221
Exercises 227

7 Numerical Differentiation 228
7.1 Centered Formula of Order O(h²) 228
7.2 Centered Formula of Order O(h⁴) 229
7.3 Error Analysis 230
7.4 Richardson's Extrapolation 231
7.5 Central Difference Formula of Order O(h⁴) for f″(x) 234
7.6 General Method for Deriving Differentiation Formulae 235
7.7 Differentiation of a Function Tabulated in Unequal Intervals 244
7.8 Differentiation of Lagrange's Polynomial 245
7.9 Differentiation of Newton Polynomial 246
Exercises 250

8 Numerical Quadrature 252
8.1 General Quadrature Formula 252
8.2 Cote's Formulae 256
8.3 Error Term in Quadrature Formula 258
8.4 Richardson Extrapolation (or Deferred Approach to the Limit) 263
8.5 Simpson's Formula with End Correction 265
8.6 Romberg's Method 267
8.7 Euler–Maclaurin Formula 277
8.8 Double Integrals 279
Exercises 285

9 Difference Equations 288
9.1 Definitions and Examples 288
9.2 Homogeneous Difference Equation with Constant Coefficients 289
9.3 Particular Solution of a Difference Equation 293
Exercises 299

10 Ordinary Differential Equations 301
10.1 Initial Value Problems and Boundary Value Problems 301
10.2 Classification of Methods of Solution 301
10.3 Single-Step Methods 301
10.4 Multistep Methods 330
10.5 Stability of Methods 344
10.6 Second Order Differential Equation 349
10.7 Solution of Boundary Value Problems by Finite Difference Method 352
10.8 Use of the Formula D²y_n = h²(1 + D²/12 − D⁴/240 + …)f_n to Solve Boundary Value Problems 355
10.9 Eigenvalue Problems 357
Exercises 360

11 Partial Differential Equations 363
11.1 Formation of Difference Equation 363
11.2 Geometric Representation of Partial Difference Quotients 364
11.3 Standard Five Point Formula and Diagonal Five Point Formula 365
11.4 Point Jacobi's Method 366
11.5 Gauss–Seidel Method 366
11.6 Solution of Elliptic Equation by Relaxation Method 376
11.7 Poisson's Equation 379
11.8 Eigenvalue Problems 383
11.9 Parabolic Equations 389
11.10 Iterative Method to Solve Parabolic Equations 399
11.11 Hyperbolic Equations 402
Exercises 407

12 Elements of C Language 412
12.1 Programming Language 412
12.2 C Language 412
12.3 C Tokens 412
12.4 Library Functions 416
12.5 Input Operation 417
12.6 Output Operation 418
12.7 Control (Selection) Statements 419
12.8 Structure of a C Program 423
12.9 Programs of Certain Numerical Methods in C Language 425

Appendix
Model Paper 1 A-1
Model Paper 2 A-11
Model Paper 3 A-18
Model Paper 4 A-28
Model Paper 5 A-38
Bibliography B-1
Index I-1
Preface

The present text is intended to provide a fairly substantial grounding in interpolation, numerical differentiation and integration, least square approximation, numerical solution of non-linear equations, systems of linear equations, ordinary and partial differential equations, and eigenvalue problems. Various numerical methods have been described technically, and their convergence and error propagation have been studied. Most of these methods are implemented efficiently on computers; programs in C language for the important and typical methods have been provided in the last chapter. A sufficient number of solved examples has been provided to make the matter understandable to students. As such, the text will meet the requirements of engineering and science students at the undergraduate and postgraduate levels.

I wish to record my thanks to my family members for their encouragement and to Sushma S. Pradeep for assistance in the preparation of this manuscript. I am thankful to the editorial team, especially Anita Yadav, Thomas Mathew and Vamanan Namboodiri of Pearson Education, for their continuous support at all levels.

BABU RAM
1
Preliminaries
Numerical Analysis is a branch of mathematics in which we analyse and solve problems that require calculation. The methods (techniques) used for this purpose are called Numerical Methods (techniques). These techniques are used to solve algebraic or transcendental equations, ordinary or partial differential equations, and integral equations, and to obtain the value of a function at an argument in an interval in which some values of the function are given. In numerical analysis we do not strive for exactness; instead, we try to devise a method that will yield an approximate solution differing from the exact solution by less than a specified tolerance. An approximate calculation is one that involves approximate data, approximate methods, or both. The error in the computed result may be due to errors in the given data and errors of calculation. There is no remedy for the error in the given data, but the second kind of error can usually be made as small as we wish: the calculations are carried out in such a way as to make the error of calculation negligible.
1.1
APPROXIMATE NUMBERS AND SIGNIFICANT FIGURES
The numbers of the type 3, 6, 2, 5/4, 7.35 are called exact numbers because there is no approximation associated with them. On the other hand, numbers like √2 and π are exact numbers that cannot be expressed exactly by a finite number of digits when written in decimal form. Numbers having a finite number of digits approximate such numbers. An approximate number is a number that is used as an approximation to an exact number and differs only slightly from the exact number for which it stands. For example,
(i) 1.4142 is an approximate number for √2,
(ii) 3.1416 is an approximate number for π,
(iii) 2.061 is an approximate number for 27/13.1.
A significant figure is any of the digits 1, 2, …, 9; the digit 0 is also a significant figure except when it is used to fix the decimal point or to fill the places of unknown or discarded digits. For example, 1.4142 contains five significant figures, whereas 0.0034 has only two significant figures: 3 and 4.
If we attempt to divide 22 by 7, we get
22/7 = 3.142857…
In practical computation, we must cut it down to a manageable form such as 3.14 or 3.143. The process of cutting off superfluous digits and retaining as many digits as desired is called rounding off. Thus to round off a number, we retain a certain number of digits, counted from the left, and drop the others. The numbers are rounded off so as to cause the least possible error. To round off a number to n significant figures,
(i) Discard all digits to the right of the nth digit.
(ii) (a) If the discarded number is less than half a unit in the nth place, leave the nth digit unchanged.
(b) If the discarded number is greater than half a unit in the nth place, increase the nth digit by 1.
(c) If the discarded number is exactly half a unit in the nth place, increase the nth digit by 1 if it is odd, otherwise leave the nth digit unaltered. Thus, in this case, the nth digit shall be an even number. The reason for this step is that even numbers are more exactly divisible by many more numbers than are odd numbers and so there will be fewer leftover errors in computation when the rounded numbers are left even. When a given number has been rounded off according to the above rule, it is said to be correct to n significant figures. The rounding off procedure discussed in (i) and (ii) above is called symmetric round off. On the other hand, the process of dropping extra digits (without using symmetric round off) of a given number is called chopping or truncation of number. For example, if we are working on a computer with fixed word length of seven digits, then a number like 83.7246734 will be stored as 83.72467 by dropping extra digits 3 and 4. Thus, error in the approximation is 0.0000034. EXAMPLE 1.1 Round off the following numbers correctly to four significant figures: 81.9773, 48.365, 21.385, 12.865, 27.553. Solution. After rounding off, 81.9773 becomes 81.98, 48.365 becomes 48.36, 21.385 becomes 21.38, 12.865 becomes 12.86, 27.553 becomes 27.55.
1.2
CLASSICAL THEOREMS USED IN NUMERICAL METHODS
The following theorems will be used in the derivation of some of the numerical methods and in the study of the error analysis of the numerical methods.

Theorem 1.1. (Rolle's Theorem). Let f be a function such that
(i) f is continuous in [a, b],
(ii) f is derivable in (a, b),
(iii) f(a) = f(b).
Then there exists at least one ξ ∈ (a, b) such that f′(ξ) = 0.

The following version of Rolle's Theorem will be used in the error analysis of Lagrange's interpolation formula.

Theorem 1.2. (Generalized Rolle's Theorem). Let f be an n-times differentiable function in [a, b]. If f vanishes at the (n + 1) distinct points x₀, x₁, …, xₙ in (a, b), then there exists a number ξ ∈ (a, b) such that f⁽ⁿ⁾(ξ) = 0.

It follows from Theorem 1.2 that "between any two zeros of a polynomial f(x) of degree ≥ 2, there lies at least one zero of the polynomial f′(x)."

Theorem 1.3. (Intermediate Value Theorem). Let f be continuous in [a, b] and let f(a) < k < f(b). Then there exists a number ξ ∈ (a, b) such that f(ξ) = k.

Theorem 1.4. (Mean Value Theorem). If
(i) f is continuous in [a, b],
(ii) f is derivable in (a, b),
then there exists at least one ξ ∈ (a, b) such that
(f(b) − f(a))/(b − a) = f′(ξ), a < ξ < b.

The following theorem is useful in locating the roots of a given equation.

Theorem 1.5. If f is continuous in [a, b] and if f(a) and f(b) are of opposite signs, then there exists at least one ξ ∈ (a, b) such that f(ξ) = 0.

The following theorems of Taylor are frequently used in numerical methods.

Theorem 1.6. (Taylor's Theorem). Let f be continuous and possess continuous derivatives of order n in [a, b]. If x₀ ∈ [a, b] is a fixed point, then for every x ∈ [a, b] there exists a number ξ lying between x₀ and x such that
f(x) = f(x₀) + (x − x₀) f′(x₀) + ((x − x₀)²/2!) f″(x₀) + … + ((x − x₀)ⁿ⁻¹/(n − 1)!) f⁽ⁿ⁻¹⁾(x₀) + Rₙ(x),
where
Rₙ(x) = ((x − x₀)ⁿ/n!) f⁽ⁿ⁾(ξ), x₀ < ξ < x.
If x = x₀ + h, then we get
f(x₀ + h) = f(x₀) + h f′(x₀) + (h²/2!) f″(x₀) + … + (hⁿ⁻¹/(n − 1)!) f⁽ⁿ⁻¹⁾(x₀) + (hⁿ/n!) f⁽ⁿ⁾(ξ)
= f(x₀) + h f′(x₀) + (h²/2!) f″(x₀) + … + (hⁿ⁻¹/(n − 1)!) f⁽ⁿ⁻¹⁾(x₀) + O(hⁿ).

As a corollary to Taylor's Theorem, we have
f(x) = f(0) + x f′(0) + (x²/2!) f″(0) + … + (xⁿ/n!) f⁽ⁿ⁾(0) + …,
which is called Maclaurin's Expansion for the function f.
Theorem 1.7. (Taylor's Theorem for Functions of Several Variables). If f(x, y) and all its partial derivatives of order n are finite and continuous for all points (x, y) in the domain a ≤ x ≤ a + h, b ≤ y ≤ b + k, then
f(a + h, b + k) = f(a, b) + d f(a, b) + (1/2!) d² f(a, b) + … + (1/(n − 1)!) dⁿ⁻¹ f(a, b) + Rₙ,
where
d = h ∂/∂x + k ∂/∂y
and
Rₙ = (1/n!) dⁿ f(a + θh, b + θk), 0 < θ < 1.
Putting a = b = 0, h = x, k = y, we get
f(x, y) = f(0, 0) + d f(0, 0) + (1/2!) d² f(0, 0) + … + (1/(n − 1)!) dⁿ⁻¹ f(0, 0) + Rₙ,
where
Rₙ = (1/n!) dⁿ f(θx, θy), 0 < θ < 1.
This result is called Maclaurin's Theorem for functions of several variables.
Theorem 1.8. (Fundamental Theorem of Integral Calculus). If f is continuous over [a, b], then there exists a function F, called the anti-derivative of f, such that
∫ₐᵇ f(x) dx = F(b) − F(a),
where F′(x) = f(x).
The second version of the above theorem is given below.
Theorem 1.9. If f is continuous over [a, b] and a ≤ x ≤ b, then
d/dx ∫ₐˣ f(t) dt = f(x), that is, F′(x) = f(x),
where
F(x) = ∫ₐˣ f(t) dt.
1.3
TYPES OF ERRORS
In numerical computation, the quantity "True value − Approximate value" is called the error. We come across the following types of errors in numerical computation:
1. Inherent Error (initial error). Inherent error is the quantity which is already present in the statement (data) of the problem before its solution. This type of error arises due to the use of approximate values in the given data, because there are limitations of the mathematical tables and calculators. This type of error can also arise due to human mistakes; for example, one can write, by mistake, 67 instead of 76. The error in this case is called a transposing error.
2. Round-off Error. This error arises due to rounding off the numbers during computation and occurs due to the limitation of computing aids. However, this type of error can be minimized by
(i) avoiding the subtraction of nearly equal numbers or division by a small number;
(ii) retaining at least one more significant figure at each step of calculation.
3. Truncation Error. It is the error caused by using approximate formulas during computation, such as the error that arises when a function f(x) is evaluated from an infinite series for x after truncating it at a certain stage. For example, we will see that in the Newton–Raphson method for finding the roots of an equation, if x is the true value of the root of f(x) = 0 and x₀ and h are the approximate value and correction, respectively, then by Taylor's Theorem,
f(x₀ + h) = f(x₀) + h f′(x₀) + (h²/2!) f″(x₀) + … = 0.
To find the correction h, we truncate the series just after the first derivative. Therefore, some error occurs due to this truncation.
4. Absolute Error. If x is the true value of a quantity and x₀ is the approximate value, then x − x₀ is called the absolute error.
5. Relative Error. If x is the true value of a quantity and x₀ is the approximate value, then (x − x₀)/x is called the relative error.
6. Percentage Error. If x is the true value of a quantity and x₀ is the approximate value, then ((x − x₀)/x) × 100 is called the percentage error. Thus, the percentage error is 100 times the relative error.
1.4
GENERAL FORMULA FOR ERRORS
Let
u = f(u₁, u₂, …, uₙ)   (1.1)
be a function of u₁, u₂, …, uₙ, which are subject to the errors Δu₁, Δu₂, …, Δuₙ, respectively. Let Δu be the error in u caused by the errors Δu₁, Δu₂, …, Δuₙ in u₁, u₂, …, uₙ, respectively. Then
u + Δu = f(u₁ + Δu₁, u₂ + Δu₂, …, uₙ + Δuₙ).   (1.2)
Expanding the right-hand side of equation (1.2) by Taylor's Theorem for a function of several variables, we have
u + Δu = f(u₁, u₂, …, uₙ) + (Δu₁ ∂/∂u₁ + … + Δuₙ ∂/∂uₙ) f + (1/2!)(Δu₁ ∂/∂u₁ + … + Δuₙ ∂/∂uₙ)² f + …
Since the errors are relatively small, we neglect their squares, products, and higher powers and have
u + Δu = f(u₁, u₂, …, uₙ) + (Δu₁ ∂/∂u₁ + … + Δuₙ ∂/∂uₙ) f.   (1.3)
Subtracting equation (1.1) from equation (1.3), we have
Δu = (∂f/∂u₁) Δu₁ + (∂f/∂u₂) Δu₂ + … + (∂f/∂uₙ) Δuₙ,
or
Δu = (∂u/∂u₁) Δu₁ + (∂u/∂u₂) Δu₂ + … + (∂u/∂uₙ) Δuₙ,
which is known as the general formula for error. We note that the right-hand side is simply the total derivative of the function u. For the relative error Eᵣ of the function u, we have
Eᵣ = Δu/u = (∂u/∂u₁)(Δu₁/u) + (∂u/∂u₂)(Δu₂/u) + … + (∂u/∂uₙ)(Δuₙ/u).
EXAMPLE 1.2
If u = 5xy²/z³ and the errors in x, y, z are each 0.001, compute the relative maximum error (Eᵣ)max in u when x = y = z = 1.
Solution. We have u = 5xy²/z³. Therefore
∂u/∂x = 5y²/z³, ∂u/∂y = 10xy/z³, ∂u/∂z = −15xy²/z⁴,
and so
Δu = (5y²/z³) Δx + (10xy/z³) Δy − (15xy²/z⁴) Δz.
But it is given that Δx = Δy = Δz = 0.001 and x = y = z = 1. Therefore,
(Δu)max ≈ |(5y²/z³) Δx| + |(10xy/z³) Δy| + |(15xy²/z⁴) Δz| = 5(0.001) + 10(0.001) + 15(0.001) = 0.03.
Thus, the relative maximum error (Eᵣ)max is given by
(Eᵣ)max = (Δu)max/u = 0.03/5 = 0.006.
EXAMPLE 1.3
Given that
a = 10.00 ± 0.05, b = 0.0356 ± 0.0002, c = 15300 ± 100, d = 62000 ± 500,
find the maximum value of the absolute error in (i) a + b + c + d, and (ii) c³.
Solution. If a₁, b₁, c₁, and d₁ are the true values of a, b, c, and d, respectively, then
|(a₁ + b₁ + c₁ + d₁) − (a + b + c + d)| = |(a₁ − a) + (b₁ − b) + (c₁ − c) + (d₁ − d)|
≤ |a₁ − a| + |b₁ − b| + |c₁ − c| + |d₁ − d|
≤ 0.05 + 0.0002 + 100 + 500 = 600.0502,
which is the required maximum value of the absolute error in a + b + c + d.
Further, if E is the error in c, then
|(c + E)³ − c³| = |E³ + 3cE² + 3c²E| ≤ (100)³ + 3(15300)(100)² + 3(15300)²(100)
= 10⁶ + 459(10⁶) + 70227(10⁶)
= 10¹⁰(0.0001 + 0.0459 + 7.0227)
= 10¹⁰(7.0687) ≈ 7.0687 × 10¹⁰,
which is the required maximum absolute error.
EXAMPLE 1.4
Find the number of terms of the exponential series such that their sum gives the value of eˣ correct to five decimal places for all values of x in the range 0 ≤ x ≤ 1.
Solution. The remainder term in the expansion of eˣ is
Rₙ(x) = (xⁿ/n!) e^ξ, 0 < ξ < x.
Therefore, the maximum absolute error is
(eₐ)max = (xⁿ/n!) eˣ.
The maximum relative error is
(eᵣ)max = (xⁿ eˣ/n!)/eˣ = xⁿ/n! = 1/n! at x = 1.
For five-decimal accuracy at x = 1, we have
1/n! < (1/2) × 10⁻⁵,
which yields n = 9. Therefore, the number of terms in the exponential series should be 9.
1.5
ORDER OF APPROXIMATION
A function F(h) is said to approximate f(h) with order of approximation O(hⁿ) if
|f(h) − F(h)| ≤ M|h|ⁿ, that is, if f(h) − F(h) = O(hⁿ).
For example, if
1/(1 − x) = 1 + x + x² + x³ + x⁴ + …,
then we write
1/(1 − x) = 1 + x + x² + x³ + O(x⁴)
to the fourth order of approximation. Similarly,
sin t = t − t³/3! + t⁵/5! − t⁷/7! + …
can be written as
sin t = t − t³/3! + t⁵/5! + O(t⁷).
The number x₁ is said to approximate x to d significant digits if d is the largest positive integer such that
|x − x₁|/|x| < 10⁻ᵈ/2.
EXAMPLE 1.5
Consider the Taylor expansions
sin x = x − x³/3! + x⁵/5! + O(x⁷),
cos x = 1 − x²/2! + x⁴/4! + O(x⁶).
Determine the order of approximation for their sum and product.
Solution. Since O(x⁶) + O(x⁷) = O(x⁶), we have
sin x + cos x = x − x³/3! + x⁵/5! + O(x⁷) + 1 − x²/2! + x⁴/4! + O(x⁶)
= 1 + x − x²/2! − x³/3! + x⁴/4! + x⁵/5! + O(x⁶) + O(x⁷)
= 1 + x − x²/2! − x³/3! + x⁴/4! + x⁵/5! + O(x⁶).
Hence the order of approximation for the sum of the given expansions is O(x⁶). Further,
sin x cos x = (x − x³/3! + x⁵/5! + O(x⁷))(1 − x²/2! + x⁴/4! + O(x⁶))
= (x − x³/3! + x⁵/5!)(1 − x²/2! + x⁴/4!) + (x − x³/3! + x⁵/5!) O(x⁶) + (1 − x²/2! + x⁴/4!) O(x⁷) + O(x⁶)O(x⁷)
= x − x³/2! − x³/3! + x⁵/4! + x⁵/2!3! + x⁵/5! − x⁷/2!5! − x⁷/3!4! + x⁹/4!5! + O(x⁶) + O(x⁷) + O(x⁶)O(x⁷).
Since O(x⁶) + O(x⁷) = O(x⁶) and O(x⁶)O(x⁷) = O(x¹³), we have
sin x cos x = x − x³(1/2! + 1/3!) + x⁵(1/4! + 1/2!3! + 1/5!) − x⁷(1/2!5! + 1/3!4!) + x⁹/4!5! + O(x⁶) + O(x¹³)
= x − (2/3)x³ + (2/15)x⁵ + O(x⁶) + O(x⁹) + O(x¹³)
= x − (2/3)x³ + (2/15)x⁵ + O(x⁶).
Hence the order of approximation for the product of the given expansions is also O(x⁶).
EXAMPLE 1.6
Find the order of approximation for the sum and product of the following expansions:
eʰ = 1 + h + h²/2! + h³/3! + O(h⁴),
cos h = 1 − h²/2! + h⁴/4! + O(h⁶).
Solution. Since O(h⁴) + O(h⁶) = O(h⁴), we have
eʰ + cos h = 1 + h + h²/2! + h³/3! + O(h⁴) + 1 − h²/2! + h⁴/4! + O(h⁶)
= 2 + h + h³/3! + h⁴/4! + O(h⁴) + O(h⁶)
= 2 + h + h³/3! + O(h⁴) + O(h⁶)
= 2 + h + h³/3! + O(h⁴).
Hence the order of approximation for the sum is O(h⁴). On the other hand,
eʰ cos h = (1 + h + h²/2! + h³/3! + O(h⁴))(1 − h²/2! + h⁴/4! + O(h⁶))
= (1 + h + h²/2! + h³/3!)(1 − h²/2! + h⁴/4!) + (1 − h²/2! + h⁴/4!) O(h⁴) + (1 + h + h²/2! + h³/3!) O(h⁶) + O(h⁴)O(h⁶)
= 1 + h − h³/3 − 5h⁴/24 − h⁵/24 + h⁶/48 + h⁷/144 + O(h⁴) + O(h⁶) + O(h⁴)O(h⁶)
= 1 + h − h³/3 + O(h⁴) + O(h⁶) + O(h¹⁰)
= 1 + h − h³/3 + O(h⁴).
Hence the order of approximation for the product is O(h⁴).
EXERCISES
1. Round off the following numbers to three decimal places:
(i) 498.5561 (ii) 52.2756 (iii) 0.70035 (iv) 48.21416.
Ans. (i) 498.556 (ii) 52.276 (iii) 0.700 (iv) 48.214.
2. Round off to four significant figures:
(i) 19.235101 (ii) 49.85561 (iii) 0.0022218.
Ans. (i) 19.24 (ii) 49.86 (iii) 0.002222.
3. Find the number of terms of the exponential series such that their sum gives the value of eˣ correct to eight decimal places at x = 1.
Hint. eˣ = 1 + x + x²/2! + x³/3! + … + xⁿ⁻¹/(n − 1)! + (xⁿ/n!) e^ξ, 0 < ξ < x.
Thus, the maximum absolute error at ξ = x equals (xⁿ/n!) eˣ, and so
maximum relative error = ((xⁿ/n!) eˣ)/eˣ = xⁿ/n! = 1/n!,
since x = 1. For eight-decimal accuracy at x = 1, we have
1/n! < (1/2) × 10⁻⁸,
which yields n = 12.
4. If u = 10x³y²z² and the errors in x, y, z are, respectively, 0.03, 0.01, 0.02 at x = 3, y = 1, z = 2, calculate the absolute error and the percentage relative error in the calculation of u.
Ans. 75.6, 7%.
5. What is the order of approximation of
cos t = 1 − t²/2! + t⁴/4! + O(t⁶)?
Ans. O(t⁶).
6. Find the order of approximation for the sum and product of the expansions
1/(1 − h) = 1 + h + h² + h³ + O(h⁴) and cos h = 1 − h²/2! + h⁴/4! + O(h⁶).
Ans. O(h⁴).
2
Non-Linear Equations
The aim of this chapter is to discuss the most useful methods for finding the roots of equations with numerical coefficients. Polynomial equations of degree ≤ 4 can be solved by standard algebraic methods. But no general method exists for finding, in terms of their coefficients, the roots of equations of the type a log x = bx + c or aeˣ = b tan x + 4; such equations are called transcendental equations. Therefore, we take the help of numerical methods to solve equations of this type.
Let f be a continuous function. Any number ξ for which f(ξ) = 0 is called a root of the equation f(x) = 0; ξ is also called a zero of the function f(x). A zero ξ is said to be of multiplicity p if we can write f(x) = (x − ξ)ᵖ g(x), where g(x) is bounded at ξ and g(ξ) ≠ 0. If p = 1, then ξ is said to be a simple zero, and if p > 1, then ξ is called a multiple zero.
2.1
CLASSIFICATION OF METHODS
The methods for finding roots numerically may be classified into the following two types:
1. Direct Methods. These methods require no knowledge of an initial approximation and are used for solving polynomial equations. The best known method is Graeffe's root squaring method.
2. Iterative Methods. There are many such methods; we shall discuss some of them in this chapter. In these methods, successive approximations to the solution are used. We begin with a first approximation and successively improve it till we get a result to our satisfaction. For example, the Newton–Raphson method is an iterative method.
Let {xᵢ} be a sequence of approximate values of the root of an equation obtained by an iteration method, and let x denote the exact root of the equation. Then the iteration method is said to be convergent if and only if
lim (n→∞) |xₙ − x| = 0.
An iteration method is said to be of order p if p is the smallest number for which there exists a finite constant k such that
|xₙ₊₁ − x| ≤ k |xₙ − x|ᵖ.
2.2
APPROXIMATE VALUES OF THE ROOTS
Let
f(x) = 0   (2.1)
be the equation whose roots are to be determined. If we take a set of rectangular co-ordinate axes and plot the graph of
y = f(x),   (2.2)
then the values of x where the graph crosses the x-axis are the roots of the given equation (2.1), because at these points y is zero and therefore equation (2.1) is satisfied. However, the following fundamental theorem is more useful than a graph.
Theorem 2.1. If f is continuous on [a, b] and if f(a) and f(b) are of opposite signs, then there is at least one real root of f(x) = 0 between a and b.
In many cases, the approximate values of the real roots of f(x) = 0 are found by writing the equation in the form
f₁(x) = f₂(x)   (2.3)
and then plotting, on the same axes, the graphs of the two equations y₁ = f₁(x) and y₂ = f₂(x). The abscissas of the points of intersection of these two curves are the real roots of the given equation, because at these points y₁ = y₂ and therefore f₁(x) = f₂(x); hence equation (2.3), and consequently f(x) = 0, is satisfied.
For example, consider the equation x log₁₀ x = 1.2. We write the equation in the form
f(x) = x log₁₀ x − 1.2 = 0.
It is obvious from the table given below that f(2) and f(3) are of opposite signs:
x    :   1      2      3     4
f(x) : −1.20  −0.60  0.23  1.21
Therefore, a root lies between x = 2 and x = 3, and this is the only root. The approximate value of the root can also be found by writing the equation in the form
log₁₀ x = 1.2/x
and then plotting the graphs of y₁ = log₁₀ x and y₂ = 1.2/x. The abscissa of the point of intersection of these graphs is the desired root.
2.3
BISECTION METHOD (BOLZANO METHOD)
Suppose that we want to find a zero of a continuous function f. We start with an initial interval [a₀, b₀], where f(a₀) and f(b₀) have opposite signs. Since f is continuous, the graph of f will cross the x-axis at a root x = ξ lying in [a₀, b₀], as shown in Figure 2.1.

[Figure 2.1: the graph of y = f(x) crossing the x-axis at (ξ, 0), with the endpoints (a₀, f(a₀)) below and (b₀, f(b₀)) above the axis and the midpoint c₀ between them.]
The bisection method systematically moves the endpoints of the interval closer and closer together until we obtain an interval of arbitrarily small width that contains the root. We choose the midpoint c₀ = (a₀ + b₀)/2 and then consider the following possibilities:
(i) If f(a₀) and f(c₀) have opposite signs, then a root lies in [a₀, c₀].
(ii) If f(c₀) and f(b₀) have opposite signs, then a root lies in [c₀, b₀].
(iii) If f(c₀) = 0, then x = c₀ is a root.
If (iii) happens, then there is nothing more to do, as c₀ is the root in that case. If either (i) or (ii) happens, let [a₁, b₁] be the interval (representing [a₀, c₀] or [c₀, b₀]) containing the root, where f(a₁) and f(b₁) have opposite signs. Let c₁ = (a₁ + b₁)/2 and let [a₂, b₂] represent [a₁, c₁] or [c₁, b₁] such that f(a₂) and f(b₂) have opposite signs. Then the root lies between a₂ and b₂. Continuing with this process, we construct at each step an interval [aₙ₊₁, bₙ₊₁], equal to [aₙ, cₙ] or [cₙ, bₙ], which contains the root and whose width is half that of [aₙ, bₙ].
Theorem 2.2. Let f be a continuous function on [a, b] and let ξ ∈ [a, b] be a root of f(x) = 0. If f(a) and f(b) have opposite signs and {cₙ} represents the sequence of midpoints generated by the bisection process, then
|ξ − cₙ| ≤ (b − a)/2ⁿ⁺¹, n = 0, 1, 2, …,
and hence {cₙ} converges to the root x = ξ, that is, lim (n→∞) cₙ = ξ.
Proof. Since both the root ξ and the midpoint cₙ lie in [aₙ, bₙ], the distance from cₙ to ξ cannot be greater than half the width of [aₙ, bₙ], as shown in Figure 2.2.

[Figure 2.2: the interval [aₙ, bₙ] with midpoint cₙ; the distance |ξ − cₙ| is at most |bₙ − aₙ|/2.]

Thus,
|ξ − cₙ| ≤ |bₙ − aₙ|/2 for all n.
But we note that
|b₁ − a₁| = |b₀ − a₀|/2,
|b₂ − a₂| = |b₁ − a₁|/2 = |b₀ − a₀|/2²,
|b₃ − a₃| = |b₂ − a₂|/2 = |b₀ − a₀|/2³,
…
|bₙ − aₙ| = |bₙ₋₁ − aₙ₋₁|/2 = |b₀ − a₀|/2ⁿ.
Hence,
|ξ − cₙ| ≤ |b₀ − a₀|/2ⁿ⁺¹ for all n,
and so lim (n→∞) |ξ − cₙ| = 0, that is, lim (n→∞) cₙ = ξ.
EXAMPLE 2.1
Find a real root of the equation x³ + x² − 1 = 0 using the bisection method.
Solution. Let f(x) = x³ + x² − 1. Then f(0) = −1 and f(1) = 1. Thus, a real root of f(x) = 0 lies between 0 and 1. Therefore, we take x₀ = 0.5. Then
f(0.5) = (0.5)³ + (0.5)² − 1 = 0.125 + 0.25 − 1 = −0.625.
This shows that the root lies between 0.5 and 1, and we get
x₁ = (1 + 0.5)/2 = 0.75.
Then
f(x₁) = (0.75)³ + (0.75)² − 1 = 0.421875 + 0.5625 − 1 = −0.015625.
Hence, the root lies between 0.75 and 1. Thus, we take
x₂ = (1 + 0.75)/2 = 0.875,
and then
f(x₂) = 0.669922 + 0.765625 − 1 = 0.435547 (+ve).
It follows that the root lies between 0.75 and 0.875. We take
x₃ = (0.75 + 0.875)/2 = 0.8125,
and then
f(x₃) = 0.536377 + 0.660156 − 1 = 0.196533 (+ve).
Therefore, the root lies between 0.75 and 0.8125. So, let
x₄ = (0.75 + 0.8125)/2 ≈ 0.781,
which yields
f(x₄) = (0.781)³ + (0.781)² − 1 = 0.086 (+ve).
Thus, the root lies between 0.75 and 0.781. We take
x₅ = (0.750 + 0.781)/2 ≈ 0.765
and note that f(0.765) = 0.0329 (+ve). Hence, the root lies between 0.75 and 0.765. So, let
x₆ = (0.750 + 0.765)/2 ≈ 0.7575,
and then
f(0.7575) = 0.4346 + 0.5738 − 1 = 0.0084 (+ve).
Therefore, the root lies between 0.75 and 0.7575. Proceeding in this way, the next approximations shall be x₇ = 0.7538, x₈ = 0.7556, x₉ = 0.7547, x₁₀ = 0.7551, x₁₁ = 0.7549, x₁₂ = 0.75486, and so on.
EXAMPLE 2.2
Find a root of the equation x³ − 3x − 5 = 0 by the bisection method.
Solution. Let f(x) = x³ − 3x − 5. Then we observe that f(2) = −3 and f(3) = 13. Thus, a root of the given equation lies between 2 and 3. Let x₀ = 2.5. Then
f(2.5) = (2.5)³ − 3(2.5) − 5 = 3.125 (+ve).
Thus, the root lies between 2.0 and 2.5. Then
x₁ = (2 + 2.5)/2 = 2.25.
We note that f(2.25) = −0.359375 (−ve). Therefore, the root lies between 2.25 and 2.5. Then we take
x₂ = (2.25 + 2.5)/2 = 2.375
and observe that f(2.375) = 1.2715 (+ve). Hence, the root lies between 2.25 and 2.375. Therefore, we take
x₃ = (2.25 + 2.375)/2 = 2.3125.
Now f(2.3125) = 0.4289 (+ve). Hence, a root lies between 2.25 and 2.3125. We take
x₄ = (2.25 + 2.3125)/2 = 2.28125.
Now f(2.28125) = 0.0281 (+ve). We observe that the root lies very near to 2.28125. Let us try 2.280. Then f(2.280) = 0.0124 (+ve). Thus, the root is approximately 2.280.
2.4
REGULA–FALSI METHOD
The Regula–Falsi method, also known as method of false position, chord method or secant method, is the oldest method for finding the real roots of a numerical equation. We know that the root of the equation f (x) = 0 corresponds to abscissa of the point of intersection of the curve y = f (x) with the x-axis. In Regula–Falsi method, we replace the curve by a chord in the interval, which contains a root of the equation f (x) = 0. We take the point of intersection of the chord with the x-axis as an approximation to the root.
Numerical Methods
Suppose that a root x = ξ lies in the interval (xₙ₋₁, xₙ) and that the corresponding ordinates f(xₙ₋₁) and f(xₙ) have opposite signs. The equation of the straight line through the points P(xₙ, f(xₙ)) and Q(xₙ₋₁, f(xₙ₋₁)) is

[f(x) − f(xₙ)] / [f(xₙ₋₁) − f(xₙ)] = (x − xₙ) / (xₙ₋₁ − xₙ).   (2.4)

Let this straight line cut the x-axis at xₙ₊₁. Since the ordinate vanishes where the line (2.4) cuts the x-axis, putting the ordinate equal to zero at x = xₙ₊₁ gives

xₙ₊₁ = xₙ − [(xₙ₋₁ − xₙ) / (f(xₙ₋₁) − f(xₙ))] f(xₙ).   (2.5)
Figure 2.3 (the chord through Q(xₙ₋₁, f(xₙ₋₁)) and P(xₙ, f(xₙ)) cuts the x-axis at xₙ₊₁; P₁(xₙ₊₁, f(xₙ₊₁)) lies on the curve)

Now f(xₙ₋₁) and f(xₙ₊₁) have opposite signs. Therefore, it is possible to apply the approximation again to determine a line through the points Q and P₁. Proceeding in this way, we find that as the points approach ξ, the curve between them becomes more nearly a straight line. Equation (2.5) can also be written in the form

xₙ₊₁ = [xₙ f(xₙ₋₁) − xₙ₋₁ f(xₙ)] / [f(xₙ₋₁) − f(xₙ)], n = 1, 2, ….   (2.6)

Equation (2.5) or (2.6) is the required formula for the Regula–Falsi method.
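Formula (2.6) translates directly into code (a minimal sketch, not from the text; the stopping test is my choice, and the sample equation is taken from Example 2.3 below):

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """Method of false position: intersect the chord through
    (a, f(a)) and (b, f(b)) with the x-axis, then keep the
    subinterval across which f changes sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)   # formula (2.6)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x

# Example 2.3: x^3 - 5x - 7 = 0 has a root between 2 and 3
root = regula_falsi(lambda x: x**3 - 5*x - 7, 2, 3)
print(round(root, 3))
```

Unlike the pure secant iteration, this version keeps the root bracketed between a and b at every step.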
2.5
CONVERGENCE OF REGULA–FALSI METHOD
Let ξ be the actual root of the equation f(x) = 0, so that f(ξ) = 0. Let xₙ = ξ + εₙ, where εₙ is the error involved at the nth step while determining the root. Using

xₙ₊₁ = [xₙ f(xₙ₋₁) − xₙ₋₁ f(xₙ)] / [f(xₙ₋₁) − f(xₙ)], n = 1, 2, …,

we get

ξ + εₙ₊₁ = [(ξ + εₙ) f(ξ + εₙ₋₁) − (ξ + εₙ₋₁) f(ξ + εₙ)] / [f(ξ + εₙ₋₁) − f(ξ + εₙ)]
and so

εₙ₊₁ = [εₙ f(ξ + εₙ₋₁) − εₙ₋₁ f(ξ + εₙ)] / [f(ξ + εₙ₋₁) − f(ξ + εₙ)].

Expanding the right-hand side by Taylor's series, we get

εₙ₊₁ = [εₙ(f(ξ) + εₙ₋₁ f′(ξ) + ½εₙ₋₁² f″(ξ) + …) − εₙ₋₁(f(ξ) + εₙ f′(ξ) + ½εₙ² f″(ξ) + …)] / [(f(ξ) + εₙ₋₁ f′(ξ) + ½εₙ₋₁² f″(ξ) + …) − (f(ξ) + εₙ f′(ξ) + ½εₙ² f″(ξ) + …)],

that is, since f(ξ) = 0,

εₙ₊₁ = k εₙ₋₁ εₙ + O(εₙ²),   (2.7)

where

k = f″(ξ) / (2 f′(ξ)).

We now try to determine a number m such that

εₙ₊₁ = A εₙᵐ   (2.8)

and

εₙ = A εₙ₋₁ᵐ, that is, εₙ₋₁ = A^(−1/m) εₙ^(1/m).

From equations (2.7) and (2.8), we get

εₙ₊₁ = k εₙ₋₁ εₙ = k A^(−1/m) εₙ^(1/m) εₙ,

and so

A εₙᵐ = k A^(−1/m) εₙ^(1 + 1/m).

Equating the powers of εₙ on both sides, we get

m = 1 + 1/m, or m² − m − 1 = 0,

which yields

m = (1 ± √5)/2 = 1.618 (taking the +ve value). Hence,

εₙ₊₁ = A εₙ^1.618.
Thus, the Regula–Falsi method is of order 1.618.

EXAMPLE 2.3
Find a real root of the equation x³ − 5x − 7 = 0 using the Regula–Falsi method.

Solution. Let f(x) = x³ − 5x − 7. We note that f(2) = −9 and f(3) = 5. Therefore, one root of the given equation lies between 2 and 3. By the Regula–Falsi method, we have

xₙ₊₁ = [xₙ f(xₙ₋₁) − xₙ₋₁ f(xₙ)] / [f(xₙ₋₁) − f(xₙ)], n = 1, 2, 3, ….
We start with x₀ = 2 and x₁ = 3. Then

x₂ = [x₁ f(x₀) − x₀ f(x₁)] / [f(x₀) − f(x₁)] = [3(−9) − 2(5)] / (−9 − 5) = 37/14 ≈ 2.6.

Now f(2.6) = −2.424 and f(3) = 5. Therefore,

x₃ = [x₂ f(x₁) − x₁ f(x₂)] / [f(x₁) − f(x₂)] = [(2.6)(5) − 3(−2.424)] / (5 + 2.424) = 2.73.

Now f(2.73) = −0.30358. Since we are getting close to the root, we calculate f(2.75), which is found to be 0.046875. Thus, the next approximation is

x₄ = [2.75 f(2.73) − 2.73 f(2.75)] / [f(2.73) − f(2.75)] = [2.75(−0.30358) − 2.73(0.046875)] / (−0.30358 − 0.046875) = 2.7473.

Now f(2.747) = −0.0062. Therefore,

x₅ = [2.75 f(2.747) − 2.747 f(2.75)] / [f(2.747) − f(2.75)] = [2.75(−0.0062) − 2.747(0.046875)] / (−0.0062 − 0.046875) = 2.74724.
Thus, the root is 2.747 correct up to three decimal places.

EXAMPLE 2.4
Solve x log₁₀ x = 1.2 by the Regula–Falsi method.

Solution. We have f(x) = x log₁₀ x − 1.2. Then f(2) = −0.60 and f(3) = 0.23. Therefore, the root lies between 2 and 3. Taking x₀ = 2 and x₁ = 3,

x₂ = [x₁ f(x₀) − x₀ f(x₁)] / [f(x₀) − f(x₁)] = [3(−0.6) − 2(0.23)] / (−0.6 − 0.23) = 2.723.

Now f(2.72) = 2.72 log₁₀(2.72) − 1.2 = −0.01797. Since we are getting closer to the root, we calculate

f(2.75) = 2.75 log₁₀(2.75) − 1.2 = 2.75(0.4393) − 1.2 = 0.00816.

Therefore,

x₃ = [2.75(−0.01797) − 2.72(0.00816)] / (−0.01797 − 0.00816) = (−0.04942 − 0.02219) / (−0.02613) = 2.7405.

Now f(2.74) = 2.74 log₁₀(2.74) − 1.2 = 2.74(0.43775) − 1.2 = −0.00056. Thus, the root lies between 2.74 and 2.75 and is closer to 2.74. Therefore,

x₄ = [2.75(−0.00056) − 2.74(0.00816)] / (−0.00056 − 0.00816) = 2.7408.

Thus, the root is 2.740 correct up to three decimal places.
EXAMPLE 2.5
Find by the Regula–Falsi method the real root of the equation log₁₀ x − cos x = 0 correct to four decimal places.

Solution. Let f(x) = log₁₀ x − cos x. Then

f(1) = 0 − 0.54 = −0.54 (−ve),
f(1.5) = 0.176 − 0.071 = 0.105 (+ve).

Therefore, one root lies between 1 and 1.5, nearer to 1.5. We start with x₀ = 1, x₁ = 1.5. Then, by the Regula–Falsi method,

xₙ₊₁ = [xₙ f(xₙ₋₁) − xₙ₋₁ f(xₙ)] / [f(xₙ₋₁) − f(xₙ)]

and so

x₂ = [x₁ f(x₀) − x₀ f(x₁)] / [f(x₀) − f(x₁)] = [1.5(−0.54) − 1(0.105)] / (−0.54 − 0.105) = 1.41860 ≈ 1.42.

But f(x₂) = f(1.42) = 0.1523 − 0.1502 = 0.0021. Therefore,

x₃ = [x₂ f(x₁) − x₁ f(x₂)] / [f(x₁) − f(x₂)] = [1.42(0.105) − 1.5(0.0021)] / (0.105 − 0.0021) = 1.41836 ≈ 1.4184.

Now f(1.418) = 0.151676 − 0.152202 = −0.000526. Hence, the next iteration is

x₄ = [x₃ f(x₂) − x₂ f(x₃)] / [f(x₂) − f(x₃)] = [1.418(0.0021) − 1.42(−0.000526)] / (0.0021 + 0.000526) = 1.41840.

Hence, the root correct to four decimal places is 1.4184.
EXAMPLE 2.6
Find the root of the equation cos x − x eˣ = 0 by the secant method correct to four decimal places.

Solution. The given equation is f(x) = cos x − x eˣ = 0. We note that f(0) = 1 (+ve) and f(1) = cos 1 − e = 0.5403 − 2.7183 = −2.178 (−ve). Hence, a root of the given equation lies between 0 and 1. By the secant method, we have

xₙ₊₁ = xₙ − [(xₙ₋₁ − xₙ) / (f(xₙ₋₁) − f(xₙ))] f(xₙ).

So taking the initial approximations x₀ = 0, x₁ = 1, with f(x₀) = 1 and f(x₁) = −2.178, we have

x₂ = x₁ − [(x₀ − x₁) / (f(x₀) − f(x₁))] f(x₁) = 1 − [(0 − 1)/(1 + 2.178)](−2.178) = 0.3147.

Further, f(x₂) = f(0.3147) = 0.5198. Therefore,
x₃ = x₂ − [(x₁ − x₂)/(f(x₁) − f(x₂))] f(x₂) = 0.3147 − [(1 − 0.3147)/(−2.178 − 0.5198)](0.5198) = 0.4467.

Further, f(x₃) = f(0.4467) = 0.2036. Therefore,

x₄ = x₃ − [(x₂ − x₃)/(f(x₂) − f(x₃))] f(x₃) = 0.4467 − [(0.3147 − 0.4467)/(0.5198 − 0.2036)](0.2036) = 0.5318,

and f(x₄) = f(0.5318) = −0.0432. Therefore,

x₅ = x₄ − [(x₃ − x₄)/(f(x₃) − f(x₄))] f(x₄) = 0.5318 − [(0.4467 − 0.5318)/(0.2036 + 0.0432)](−0.0432) = 0.5168,

and f(x₅) = f(0.5168) = 0.0029. Now

x₆ = x₅ − [(x₄ − x₅)/(f(x₄) − f(x₅))] f(x₅) = 0.5168 − [(0.5318 − 0.5168)/(−0.0432 − 0.0029)](0.0029) = 0.5177,

and f(x₆) = f(0.5177) = 0.0002. The next iteration is

x₇ = x₆ − [(x₅ − x₆)/(f(x₅) − f(x₆))] f(x₆) = 0.5177 − [(0.5168 − 0.5177)/(0.0029 − 0.0002)](0.0002) = 0.51776.

We observe that x₆ = x₇ up to four decimal places. Hence, x = 0.5177 is a root of the given equation correct to four decimal places.
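The order 1.618 derived in Section 2.5 can be checked numerically with the pure secant iteration (a small experiment of my own, not part of the text; Example 2.3's equation is reused, and the order is estimated from consecutive error ratios):

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=20):
    """Pure secant iteration; returns the full list of iterates."""
    xs = [x0, x1]
    while len(xs) < max_iter:
        f0, f1 = f(xs[-2]), f(xs[-1])
        if f0 == f1 or abs(f1) < tol:
            break
        xs.append(xs[-1] - f1 * (xs[-2] - xs[-1]) / (f0 - f1))
    return xs

f = lambda x: x**3 - 5*x - 7
xs = secant(f, 2.0, 3.0)
root = xs[-1]

# estimate the order p from ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})
errs = [abs(x - root) for x in xs[:-1]]
for e0, e1, e2 in zip(errs, errs[1:], errs[2:]):
    if min(e0, e1, e2) > 1e-13:
        print(round(math.log(e2 / e1) / math.log(e1 / e0), 2))
```

The printed estimates drift toward the golden ratio 1.618 as the iterates close in on the root.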
2.6
NEWTON–RAPHSON METHOD
If the derivative of a function f can be easily found and is a simple expression, then the real roots of the equation f(x) = 0 can be computed rapidly by the Newton–Raphson method. Let x₀ denote the approximate value of the desired root and let h be the correction which must be applied to x₀ to give the exact value of the root x. Thus, x = x₀ + h, and so the equation f(x) = 0 reduces to f(x₀ + h) = 0. Expanding by Taylor's theorem, we have

f(x₀ + h) = f(x₀) + h f′(x₀) + (h²/2!) f″(x₀ + θh), 0 < θ < 1.

Hence,

f(x₀) + h f′(x₀) + (h²/2) f″(x₀ + θh) = 0.

If h is relatively small, we may neglect the term containing h² and have f(x₀) + h f′(x₀) = 0. Hence,

h = −f(x₀)/f′(x₀)
and so the improved value of the root becomes

x₁ = x₀ + h = x₀ − f(x₀)/f′(x₀).

If we use x₁ as the approximate value, then the next approximation to the root is

x₂ = x₁ − f(x₁)/f′(x₁).

In general, the (n + 1)th approximation is

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ), n = 0, 1, 2, 3, ….   (2.9)

Formula (2.9) is called the Newton–Raphson method.

The expression h = −f(x₀)/f′(x₀) is the fundamental formula in the Newton–Raphson method. This formula tells us that the larger the derivative, the smaller is the correction to be applied to get the correct value of the root. This means that when the graph of f is nearly vertical where it crosses the x-axis, the correct value of the root can be found very rapidly and with very little labor. On the other hand, if the value of f′(x) is small in the neighborhood of the root, the value of h given by the fundamental formula would be large, and therefore the computation of the root would be a slow process. Thus, the Newton–Raphson method should not be used when the graph of f is nearly horizontal where it crosses the x-axis. Further, the method fails if f′(x) = 0 in the neighborhood of the root.

EXAMPLE 2.7
Find the smallest positive root of x³ − 5x + 3 = 0.

Solution. We observe that there is a root between −2 and −3, a root between 1 and 2, and a (smallest positive) root between 0 and 1. We have

f(x) = x³ − 5x + 3, f′(x) = 3x² − 5.

Taking x₀ = 1, we have

x₁ = x₀ − f(x₀)/f′(x₀) = 1 − f(1)/f′(1) = 1 − (−1)/(−2) = 1 − 0.5 = 0.5,

x₂ = x₁ − f(x₁)/f′(x₁) = 0.5 + 0.625/4.25 ≈ 0.64,

x₃ = 0.64 + 0.062144/3.7712 = 0.6565,

x₄ = 0.6565 + 0.000446412125/3.70702325 = 0.656620,

x₅ = 0.656620 + 0.00000115976975/3.70655053 = 0.656620431.

We observe that the convergence is very rapid even though x₀ was not very near to the root.
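Formula (2.9) can be sketched in a few lines (a minimal illustration, not from the text; Example 2.7's equation and starting value are reused, and the stopping test is my choice):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: repeatedly apply x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        h = -f(x) / df(x)     # the correction term
        x = x + h
        if abs(h) < tol:
            return x
    raise RuntimeError("did not converge")

# Example 2.7: smallest positive root of x^3 - 5x + 3 = 0
root = newton_raphson(lambda x: x**3 - 5*x + 3,
                      lambda x: 3*x**2 - 5, 1.0)
print(round(root, 9))
```

Stopping on a small correction |h| matches the observation above that the correction term shrinks rapidly near the root.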
EXAMPLE 2.8 Find the positive root of the equation
x⁴ − 3x³ + 2x² + 2x − 7 = 0

by the Newton–Raphson method.

Solution. We have f(x) = x⁴ − 3x³ + 2x² + 2x − 7, with f(0) = −7, f(1) = −5, f(2) = −3, and f(3) = 17. Thus, the positive root lies between 2 and 3. The Newton–Raphson formula becomes

xₙ₊₁ = xₙ − (xₙ⁴ − 3xₙ³ + 2xₙ² + 2xₙ − 7) / (4xₙ³ − 9xₙ² + 4xₙ + 2).

Taking x₀ = 2.1, the improved approximations are

x₁ = 2.39854269, x₂ = 2.33168543, x₃ = 2.32674082, x₄ = 2.32671518, x₅ = 2.32671518.

Since x₄ = x₅, the Newton–Raphson formula gives no new value of x, and the approximate root is correct to eight decimal places.

EXAMPLE 2.9
Use the Newton–Raphson method to solve the transcendental equation eˣ = 5x.

Solution. Let f(x) = eˣ − 5x = 0. Then f′(x) = eˣ − 5. The Newton–Raphson formula becomes

xₙ₊₁ = xₙ − (e^xₙ − 5xₙ) / (e^xₙ − 5), n = 0, 1, 2, 3, ….
The successive approximations are

x₀ = 0.4, x₁ = 0.2551454079, x₂ = 0.2591682786, x₃ = 0.2591711018, x₄ = 0.2591711018.

Thus, the value of the root is correct to ten decimal places.

EXAMPLE 2.10
Find by the Newton–Raphson method the real root of the equation 3x = cos x + 1.

Solution. The given equation is f(x) = 3x − cos x − 1 = 0. We have f(0) = −2 (−ve) and f(1) = 3 − 0.5403 − 1 = 1.4597 (+ve). Hence, one of the roots of f(x) = 0 lies between 0 and 1, and the values at 0 and 1 show that the root is nearer to 1. So let us take x₀ = 0.6. Further, f′(x) = 3 + sin x. Therefore, the Newton–Raphson formula gives

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) = xₙ − (3xₙ − cos xₙ − 1)/(3 + sin xₙ) = (xₙ sin xₙ + cos xₙ + 1)/(3 + sin xₙ).
Hence,

x₁ = (x₀ sin x₀ + cos x₀ + 1)/(3 + sin x₀) = [0.6(0.5646) + 0.8253 + 1]/(3 + 0.5646) = 0.6071,

x₂ = (x₁ sin x₁ + cos x₁ + 1)/(3 + sin x₁) = [(0.6071)(0.5705) + 0.8213 + 1]/(3 + 0.5705) = 0.6071.
Hence, the required root, correct to four decimal places, is 0.6071.

EXAMPLE 2.11
Using the Newton–Raphson method, find a root of the equation f(x) = x sin x + cos x = 0 correct to three decimal places, assuming that the root is near x = π.

Solution. We have f(x) = x sin x + cos x = 0. Therefore,

f′(x) = x cos x + sin x − sin x = x cos x.

Since the root is nearer to π, we take x₀ = π. By the Newton–Raphson method,

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) = xₙ − (xₙ sin xₙ + cos xₙ)/(xₙ cos xₙ) = (xₙ² cos xₙ − xₙ sin xₙ − cos xₙ)/(xₙ cos xₙ).

Thus,

x₁ = (x₀² cos x₀ − x₀ sin x₀ − cos x₀)/(x₀ cos x₀) = [π²(−1) − 0 + 1]/(−π) = (π² − 1)/π = 8.87755/3.142857 = 2.824,

x₂ = (x₁² cos x₁ − x₁ sin x₁ − cos x₁)/(x₁ cos x₁) = [(7.975)(−0.95) − (2.824)(0.3123) + 0.95]/[(2.824)(−0.95)] ≈ 2.8022,

x₃ = (x₂² cos x₂ − x₂ sin x₂ − cos x₂)/(x₂ cos x₂) = [(7.8512)(−0.9429) − (2.8022)(0.3329) + 0.9429]/[(2.8022)(−0.9429)] ≈ 2.797.

Calculate x₄ and x₅ similarly.
2.7
SQUARE ROOT OF A NUMBER USING NEWTON–RAPHSON METHOD
Suppose that we want to find the square root of N. Let x = √N, that is, x² = N.
We have f(x) = x² − N = 0. Then the Newton–Raphson method yields

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) = xₙ − (xₙ² − N)/(2xₙ) = (1/2)(xₙ + N/xₙ), n = 0, 1, 2, 3, ….

For example, if N = 10, taking x₀ = 3 as an initial approximation, the successive approximations are

x₁ = 3.166666667, x₂ = 3.162280702, x₃ = 3.162277660, x₄ = 3.162277660,

correct up to nine decimal places.

Alternatively, take f(x) = x³ − Nx, so that f(x) = 0 gives x = √N (apart from the root x = 0). Now f′(x) = 3x² − N, and the Newton–Raphson method gives

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) = xₙ − (xₙ³ − Nxₙ)/(3xₙ² − N) = 2xₙ³/(3xₙ² − N).

Taking x₀ = 3, the successive approximations to √10 are

x₁ = 3.176, x₂ = 3.1623, x₃ = 3.16227, x₄ = 3.16227,

correct up to five decimal places.

Suppose now that we want to find the pth root of N. Then consider f(x) = xᵖ − N. The Newton–Raphson formula yields

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) = xₙ − (xₙᵖ − N)/(p xₙᵖ⁻¹) = [(p − 1)xₙᵖ + N]/(p xₙᵖ⁻¹), n = 0, 1, 2, 3, ….

For p = 3, the formula reduces to

xₙ₊₁ = (2xₙ³ + N)/(3xₙ²) = (1/3)(2xₙ + N/xₙ²).

If N = 10 and we start with the approximation x₀ = 2, then

x₁ = (1/3)(4 + 10/4) = 2.16666, x₂ = 2.154503616, x₃ = 2.154434692, x₄ = 2.154434690, x₅ = 2.154434690,

correct up to eight decimal places.
2.8
ORDER OF CONVERGENCE OF NEWTON–RAPHSON METHOD
Suppose f(x) = 0 has a simple root at x = ξ, and let εₙ be the error in the nth approximation, so that xₙ = ξ + εₙ. Applying Taylor's expansion of f(xₙ) and f′(xₙ) about the root ξ, we have

f(xₙ) = Σ_{r=1}^{∞} aᵣ εₙʳ and f′(xₙ) = Σ_{r=1}^{∞} r aᵣ εₙʳ⁻¹,

where aᵣ = f⁽ʳ⁾(ξ)/r!. Then

f(xₙ)/f′(xₙ) = εₙ − (a₂/a₁) εₙ² + O(εₙ³).

Therefore, the Newton–Raphson formula

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

gives

ξ + εₙ₊₁ = ξ + εₙ − [εₙ − (a₂/a₁) εₙ² + O(εₙ³)]

and so

εₙ₊₁ = (a₂/a₁) εₙ² = [f″(ξ)/(2 f′(ξ))] εₙ².

If |f″(ξ)/(2 f′(ξ))| < 1, then

εₙ₊₁ < εₙ².   (2.10)

It follows therefore that the Newton–Raphson method has quadratic convergence (or second order convergence) if |f″(ξ)/(2 f′(ξ))| < 1.

The inequality (2.10) implies that if the correction term f(xₙ)/f′(xₙ) begins with n zeros, then the result is correct to about 2n decimals. Thus, in the Newton–Raphson method, the number of correct decimals roughly doubles at each stage.
2.9
FIXED POINT ITERATION
Let f be a real-valued function f: ℝ → ℝ. A point x is said to be a fixed point of f if f(x) = x. For example, if I: ℝ → ℝ is the identity mapping, then every point of ℝ is a fixed point of I, since I(x) = x for all x ∈ ℝ. Similarly, a constant map of ℝ into ℝ has a unique fixed point.

Consider the equation

f(x) = 0.   (2.11)

The fixed point iteration approach to the solution of equation (2.11) is to rewrite it in the form of an equivalent relation

x = φ(x).   (2.12)

Then any solution of equation (2.11) is a fixed point of the iteration function φ. Thus, the task of solving the equation is reduced to finding the fixed points of the iteration function φ.
Let x₀ be an initial approximation to the root of equation (2.11) (obtained from the graph of f or otherwise). We substitute this value of x₀ into the right-hand side of equation (2.12) and obtain a better approximation x₁ given by x₁ = φ(x₀). The successive approximations are then

x₂ = φ(x₁), x₃ = φ(x₂), …, xₙ₊₁ = φ(xₙ), n = 0, 1, 2, 3, ….

The iteration xₙ₊₁ = φ(xₙ), n = 0, 1, 2, 3, …, is called fixed point iteration. Obviously, the Regula–Falsi method and the Newton–Raphson method are iteration processes of this kind.
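The scheme xₙ₊₁ = φ(xₙ) takes only a few lines (a minimal sketch, not part of the text; the rewriting x = eˣ/5 of the equation eˣ = 5x from Example 2.14 below is used as the test function):

```python
import math

def fixed_point(phi, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- phi(x) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# e^x = 5x rewritten as x = e^x / 5 (converges near the smaller root)
root = fixed_point(lambda x: math.exp(x) / 5, 0.3)
print(round(root, 6))
```

The iteration converges here because |φ′(x)| = eˣ/5 < 1 near the root, anticipating the condition derived in Section 2.10.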
2.10 CONVERGENCE OF ITERATION METHOD

We are interested in determining the condition under which the iteration method converges, that is, for which xₙ₊₁ converges to the solution of x = φ(x) as n → ∞. Thus, if xₙ₊₁ = xₙ up to the number of significant figures considered, then xₙ is a solution to that degree of approximation.

Let ξ be the true solution of x = φ(x), that is,

ξ = φ(ξ).   (2.13)

The first approximation is

x₁ = φ(x₀).   (2.14)

Subtracting equation (2.14) from equation (2.13), we get

ξ − x₁ = φ(ξ) − φ(x₀) = (ξ − x₀) φ′(ξ₀), ξ₀ between x₀ and ξ,

by the Mean Value Theorem. Similar equations hold for the successive approximations, so that

ξ − x₂ = (ξ − x₁) φ′(ξ₁), ξ₁ between x₁ and ξ,
ξ − x₃ = (ξ − x₂) φ′(ξ₂), ξ₂ between x₂ and ξ,
…
ξ − xₙ₊₁ = (ξ − xₙ) φ′(ξₙ), ξₙ between xₙ and ξ.

Multiplying together all these equations, we get

ξ − xₙ₊₁ = (ξ − x₀) φ′(ξ₀) φ′(ξ₁) … φ′(ξₙ)

and so

|ξ − xₙ₊₁| = |ξ − x₀| |φ′(ξ₀)| … |φ′(ξₙ)|.

If each of |φ′(ξ₀)|, …, |φ′(ξₙ)| is less than or equal to k < 1, then

|ξ − xₙ₊₁| ≤ |ξ − x₀| kⁿ⁺¹ → 0 as n → ∞.
Hence, the error ξ − xₙ₊₁ can be made as small as we please by repeating the process a sufficient number of times. Thus, the condition for convergence is |φ′(x)| < 1 in the neighborhood of the desired root.

Consider the iteration formula xₙ₊₁ = φ(xₙ), n = 0, 1, 2, …. If ξ is the true solution of x = φ(x), then ξ = φ(ξ). Therefore,

|ξ − xₙ₊₁| = |φ(ξ) − φ(xₙ)| = |ξ − xₙ| |φ′(ξₙ)| ≤ k |ξ − xₙ|, |φ′(ξₙ)| ≤ k < 1,

which shows that the iteration method has linear convergence. This slow rate of convergence can be accelerated in the following way: we write

ξ − xₙ₊₁ = (ξ − xₙ) k,
ξ − xₙ₊₂ = (ξ − xₙ₊₁) k.

Dividing, we get

(ξ − xₙ₊₁)/(ξ − xₙ) = (ξ − xₙ₊₂)/(ξ − xₙ₊₁),

or

(ξ − xₙ₊₁)² = (ξ − xₙ₊₂)(ξ − xₙ),

which, on solving for ξ, gives

ξ = xₙ₊₂ − (xₙ₊₂ − xₙ₊₁)²/(xₙ₊₂ − 2xₙ₊₁ + xₙ) = xₙ₊₂ − (Δxₙ₊₁)²/(Δ²xₙ).   (2.15)

Formula (2.15) is called Aitken's Δ²-method.
2.11 SQUARE ROOT OF A NUMBER USING ITERATION METHOD

Suppose that we want to find the square root of a number, say N. This is equivalent to finding x such that x² = N, that is, x = N/x. Adding x to both sides and halving gives

x = (1/2)(x + N/x).

Thus, if x₀ is the initial approximation to the square root, then

xₙ₊₁ = (1/2)(xₙ + N/xₙ), n = 0, 1, 2, ….

Suppose N = 13. We begin with an initial approximation to √13 found by the bisection method: the solution lies between 3.5625 and 3.625, so we start with x₀ = (3.5625 + 3.6250)/2 ≈ 3.59375. Then, using the above iteration formula, we have

x₁ = 3.6055705, x₂ = 3.6055513, x₃ = 3.6055513,

correct up to seven decimal places.
2.12 SUFFICIENT CONDITION FOR THE CONVERGENCE OF NEWTON–RAPHSON METHOD

We know that an iteration method xₙ₊₁ = φ(xₙ) converges if |φ′(x)| < 1. The Newton–Raphson method is an iteration method with

φ(x) = x − f(x)/f′(x),

and therefore it converges if |φ′(x)| < 1, that is, if

|1 − [(f′(x))² − f(x) f″(x)]/(f′(x))²| < 1,

that is, if

|f(x) f″(x)| < (f′(x))²,

which is the required sufficient condition for the convergence of the Newton–Raphson method.

EXAMPLE 2.12
Derive an iteration formula to solve f(x) = x³ + x² − 1 = 0 and solve the equation.

Solution. Since f(0) and f(1) are of opposite signs, there is a root between 0 and 1. We write the equation in the form

x³ + x² = 1, that is, x²(x + 1) = 1, or x² = 1/(1 + x),

or equivalently,

x = 1/√(1 + x).

Then

x = φ(x) = (1 + x)^(−1/2), φ′(x) = −1/[2(1 + x)^(3/2)],

so that |φ′(x)| < 1 for all x > 0. Hence, this iteration method is applicable. We start with x₀ = 0.75 and obtain the next approximations to the root as

x₁ = φ(x₀) = 1/√1.75 ≈ 0.7559, x₂ = φ(x₁) ≈ 0.7546578, x₃ ≈ 0.7549249, x₄ ≈ 0.7548674, x₅ ≈ 0.754880, x₆ ≈ 0.7548772, x₇ ≈ 0.7548777,

correct up to six decimal places.

EXAMPLE 2.13
Find, by the method of iteration, a root of the equation 2x − log₁₀ x = 7.

Solution. The fixed point form of the given equation is

x = (1/2)(log₁₀ x + 7).

From the intersection of the graphs y₁ = 2x − 7 and y₂ = log₁₀ x, we find that the approximate value of the root is 3.8. Therefore,
x₀ = 3.8,
x₁ = (1/2)(log₁₀ 3.8 + 7) ≈ 3.78989,
x₂ = (1/2)(log₁₀ 3.78989 + 7) ≈ 3.789313,
x₃ = (1/2)(log₁₀ 3.789313 + 7) ≈ 3.78928026,
x₄ ≈ 3.789278, x₅ ≈ 3.789278,

correct up to six decimal places.

EXAMPLE 2.14
Use the iteration method to solve the equation eˣ = 5x.

Solution. The iteration formula for the given problem is

xₙ₊₁ = (1/5) e^xₙ.

We start with x₀ = 0.3 and get the successive approximations

x₁ = (1/5)(1.34985881) = 0.269972, x₂ = 0.26198555, x₃ = 0.25990155, x₄ = 0.259360482, x₅ = 0.259220188, x₆ = 0.259183824, x₇ = 0.259174399, x₈ = 0.259171956, x₉ = 0.259171323, x₁₀ = 0.259171159,

correct up to six decimal places. If we use Aitken's Δ²-method, then

x₃ = x₂ − (Δx₁)²/(Δ²x₀) = x₂ − (x₂ − x₁)²/(x₂ − 2x₁ + x₀) = 0.26198555 − 0.000063783/0.02204155 = 0.259091,

and so on.
2.13 NEWTON'S METHOD FOR FINDING MULTIPLE ROOTS

If ξ is a multiple root of an equation f(x) = 0, then f(ξ) = f′(ξ) = 0, and the correction f(x)/f′(x) in the Newton–Raphson method approaches the indeterminate form 0/0, so the ordinary method loses its rapid convergence. In the case of multiple roots, we proceed as follows. Let ξ be a root of multiplicity m. Then

f(x) = (x − ξ)ᵐ A(x).   (2.16)

We make use of a localized approach: in the immediate vicinity (neighborhood) of x = ξ, relation (2.16) can be written as

f(x) ≈ A(x − ξ)ᵐ,

where A = A(ξ) is effectively constant. Then

f′(x) ≈ mA(x − ξ)ᵐ⁻¹, f″(x) ≈ m(m − 1)A(x − ξ)ᵐ⁻², and so on.

We thus obtain

f′(x)/f(x) ≈ m/(x − ξ),

or

ξ ≈ x − m f(x)/f′(x),

where x is close to ξ. This is a modification of Newton's rule for a multiple root. Thus, if x₁ is in the neighborhood of a root ξ of multiplicity m of the equation f(x) = 0, then

x₂ = x₁ − m f(x₁)/f′(x₁)

is an even closer approximation to ξ. Hence, in general, we have

xₙ₊₁ = xₙ − m f(xₙ)/f′(xₙ).   (2.17)

Remark 2.1. (i) The case m = 1 of equation (2.17) yields the Newton–Raphson method.

(ii) If two roots are close to a number, say x, then f(x + ε) ≈ 0 and f(x − ε) ≈ 0, that is,

f(x) + ε f′(x) + (ε²/2!) f″(x) + … = 0,
f(x) − ε f′(x) + (ε²/2!) f″(x) + … = 0.

Since ε is small, adding the above expressions, we get

2f(x) + ε² f″(x) ≈ 0,

or

ε² = −2 f(x)/f″(x), that is, ε = ±√(−2 f(x)/f″(x)).

So in this case, we take the two approximations as x + ε and x − ε and then apply the Newton–Raphson method to each.
EXAMPLE 2.15
The equation x⁴ − 5x³ − 12x² + 76x − 79 = 0 has two roots close to x = 2. Find these roots to four decimal places.

Solution. We have

f(x) = x⁴ − 5x³ − 12x² + 76x − 79,
f′(x) = 4x³ − 15x² − 24x + 76,
f″(x) = 12x² − 30x − 24.

Thus,

f(2) = 16 − 40 − 48 + 152 − 79 = 1,
f″(2) = 48 − 60 − 24 = −36.

Therefore,

ε = ±√(−2f(2)/f″(2)) = ±√(2/36) = ±0.2357.

Thus, the initial approximations to the roots are

x₀ = 2.2357 and y₀ = 1.7643.

The application of the Newton–Raphson method to x₀ yields

x₁ = x₀ − f(x₀)/f′(x₀) = 2.2357 + 0.0008 = 2.2365,
x₂ = x₁ − f(x₁)/f′(x₁) = 2.2365 + 0.00459 = 2.24109,
x₃ = x₂ − f(x₂)/f′(x₂) = 2.24109 − 0.00019 ≈ 2.2410.

Thus, one root, correct to four decimal places, is 2.2410. Similarly, the second root, correct to four decimal places, is found to be 1.7684.

EXAMPLE 2.16
Find a double root of the equation x³ − 5x² + 8x − 4 = 0 near 1.8.

Solution. We have

f(x) = x³ − 5x² + 8x − 4, f′(x) = 3x² − 10x + 8,

and x₀ = 1.8. Therefore,

f(x₀) = f(1.8) = 5.832 − 16.2 + 14.4 − 4 = 0.032,
f′(x₀) = 9.72 − 18 + 8 = −0.28.

Hence, with m = 2,

x₁ = x₀ − 2f(x₀)/f′(x₀) = 1.8 − 2(0.032)/(−0.28) = 2.02857.

We take x₁ = 2.028. Then

f(x₁) = 8.3407 − 20.5639 + 16.224 − 4 = 0.0008,
f′(x₁) = 12.3384 − 20.28 + 8 = 0.0584.

Therefore,

x₂ = x₁ − 2f(x₁)/f′(x₁) = 2.028 − 2(0.0008)/0.0584 = 2.0006,

which is quite close to the actual double root 2.
EXAMPLE 2.17
Find the double root of x³ − x² − x + 1 = 0 close to 0.8.

Solution. We have

f(x) = x³ − x² − x + 1, f′(x) = 3x² − 2x − 1.

We choose x₀ = 0.8. With m = 2, the iteration is

xₙ₊₁ = xₙ − 2 f(xₙ)/f′(xₙ),

and so

x₁ = x₀ − 2f(0.8)/f′(0.8) = 0.8 − 2[(0.8)³ − (0.8)² − 0.8 + 1]/[3(0.8)² − 2(0.8) − 1] = 1.01176,

x₂ = x₁ − 2f(1.0118)/f′(1.0118) = 1.0118 − 0.0126 = 0.9992,

which is very close to the actual double root 1.
2.14 NEWTON–RAPHSON METHOD FOR SIMULTANEOUS EQUATIONS

We consider the case of two equations in two unknowns. So let the given equations be

Φ(x, y) = 0,   (2.18)
Ψ(x, y) = 0.   (2.19)

Now if x₀, y₀ are the approximate values of a pair of roots and h, k are the corrections, we have

x = x₀ + h and y = y₀ + k.

Then equations (2.18) and (2.19) become

Φ(x₀ + h, y₀ + k) = 0,   (2.20)
Ψ(x₀ + h, y₀ + k) = 0.   (2.21)

Expanding equations (2.20) and (2.21) by Taylor's theorem for a function of two variables, we have

Φ(x₀ + h, y₀ + k) = Φ(x₀, y₀) + h(∂Φ/∂x)₀ + k(∂Φ/∂y)₀ + … = 0,
Ψ(x₀ + h, y₀ + k) = Ψ(x₀, y₀) + h(∂Ψ/∂x)₀ + k(∂Ψ/∂y)₀ + … = 0,

where the subscript 0 indicates that the partial derivatives are evaluated at (x₀, y₀). Since h and k are relatively small, their squares, products, and higher powers can be neglected. Hence,

Φ(x₀, y₀) + h(∂Φ/∂x)₀ + k(∂Φ/∂y)₀ = 0,   (2.22)
Ψ(x₀, y₀) + h(∂Ψ/∂x)₀ + k(∂Ψ/∂y)₀ = 0.   (2.23)

Solving equations (2.22) and (2.23) by Cramer's rule, we get
h = −[Φ(x₀, y₀)(∂Ψ/∂y)₀ − Ψ(x₀, y₀)(∂Φ/∂y)₀] / D,

k = −[(∂Φ/∂x)₀ Ψ(x₀, y₀) − (∂Ψ/∂x)₀ Φ(x₀, y₀)] / D,

where

D = (∂Φ/∂x)₀(∂Ψ/∂y)₀ − (∂Φ/∂y)₀(∂Ψ/∂x)₀.

Thus, x₁ = x₀ + h, y₁ = y₀ + k. Additional corrections can be obtained by repeated application of these formulae with the improved values of x and y substituted at each step.

Proceeding as in Section 2.10, we can prove that the iteration process for solving the simultaneous equations Φ(x, y) = 0 and Ψ(x, y) = 0 converges if

|∂Φ/∂x| + |∂Ψ/∂x| < 1 and |∂Φ/∂y| + |∂Ψ/∂y| < 1.
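The correction step (2.22)–(2.23) can be written compactly in code (a minimal sketch, not from the text; the sample system, starting point, and stopping test are my own choices):

```python
def newton_2d(F, G, Fx, Fy, Gx, Gy, x, y, tol=1e-10, max_iter=50):
    """Newton-Raphson for two equations F(x,y)=0, G(x,y)=0,
    solving the linearized system by Cramer's rule at each step."""
    for _ in range(max_iter):
        D = Fx(x, y) * Gy(x, y) - Fy(x, y) * Gx(x, y)
        h = -(F(x, y) * Gy(x, y) - G(x, y) * Fy(x, y)) / D
        k = -(Fx(x, y) * G(x, y) - Gx(x, y) * F(x, y)) / D
        x, y = x + h, y + k
        if abs(h) < tol and abs(k) < tol:
            return x, y
    return x, y

# sample system (my choice): x^2 + y^2 - 4 = 0,  x*y - 1 = 0
x, y = newton_2d(lambda x, y: x * x + y * y - 4,
                 lambda x, y: x * y - 1,
                 lambda x, y: 2 * x, lambda x, y: 2 * y,
                 lambda x, y: y,     lambda x, y: x,
                 2.0, 0.5)
print(round(x, 6), round(y, 6))
```

The four derivative arguments play the roles of (∂Φ/∂x), (∂Φ/∂y), (∂Ψ/∂x), and (∂Ψ/∂y) in the text.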
Remark 2.2. The Newton–Raphson method for simultaneous equations can be used to find complex roots. In fact, the equation f(z) = 0 is u(x, y) + iv(x, y) = 0, so writing the equation as the pair

u(x, y) = 0, v(x, y) = 0,

we can find x and y, thereby yielding the complex root.

EXAMPLE 2.18
Solve by the Newton–Raphson method

x + 3 log₁₀ x − y² = 0,
2x² − xy − 5x + 1 = 0.

Solution. On plotting the graphs of these equations on the same set of axes, we find that they intersect at the points (1.4, −1.5) and (3.4, 2.2). We shall compute the second set of values correct to four decimal places. Let

Φ(x, y) = x + 3 log₁₀ x − y², Ψ(x, y) = 2x² − xy − 5x + 1.

Then

∂Φ/∂x = 1 + 3μ/x, μ = 0.43429, so that ∂Φ/∂x = 1 + 1.30287/x,
∂Φ/∂y = −2y,
∂Ψ/∂x = 4x − y − 5,
∂Ψ/∂y = −x.

Now x₀ = 3.4, y₀ = 2.2. Therefore,

Φ(x₀, y₀) = 0.1545, Ψ(x₀, y₀) = −0.72,
(∂Φ/∂x)₀ = 1.383, (∂Φ/∂y)₀ = −4.4,
(∂Ψ/∂x)₀ = 6.4, (∂Ψ/∂y)₀ = −3.4.

Putting these values in

Φ(x₀, y₀) + h₁(∂Φ/∂x)₀ + k₁(∂Φ/∂y)₀ = 0,
Ψ(x₀, y₀) + h₁(∂Ψ/∂x)₀ + k₁(∂Ψ/∂y)₀ = 0,

we get

0.1545 + 1.383 h₁ − 4.4 k₁ = 0,
−0.72 + 6.4 h₁ − 3.4 k₁ = 0.

Solving these for h₁ and k₁, we get h₁ = 0.157 and k₁ = 0.085. Thus,

x₁ = 3.4 + 0.157 = 3.557, y₁ = 2.2 + 0.085 = 2.285.

Now

Φ(x₁, y₁) = −0.011, Ψ(x₁, y₁) = 0.3945,
(∂Φ/∂x)₁ = 1.367, (∂Φ/∂y)₁ = −4.57,
(∂Ψ/∂x)₁ = 6.943, (∂Ψ/∂y)₁ = −3.557.
Putting these values in

Φ(x₁, y₁) + h₂(∂Φ/∂x)₁ + k₂(∂Φ/∂y)₁ = 0,
Ψ(x₁, y₁) + h₂(∂Ψ/∂x)₁ + k₂(∂Ψ/∂y)₁ = 0,

and solving the equations so obtained, we get h₂ = −0.0685, k₂ = −0.0229. Hence,

x₂ = x₁ + h₂ = 3.4885 and y₂ = y₁ + k₂ = 2.2621.

Repeating the process, we get h₃ = −0.0013, k₃ = −0.000561. Hence, the third approximations are

x₃ = 3.4872 and y₃ = 2.26154.

On finding the next approximation, we observe that the above approximation is correct to four decimal places.

EXAMPLE 2.19
Find the roots of 1 + z² = 0, taking the initial approximation (x₀, y₀) = (1/2, 1/2).

Solution. We have

f(z) = 1 + (x + iy)² = 1 + x² − y² + 2ixy = u + iv,

where u(x, y) = 1 + x² − y², v(x, y) = 2xy. Then

∂u/∂x = 2x, ∂u/∂y = −2y, ∂v/∂x = 2y, ∂v/∂y = 2x.

Taking the initial approximation (x₀, y₀) = (1/2, 1/2), we have

u(x₀, y₀) = 1 + 1/4 − 1/4 = 1,
v(x₀, y₀) = 2(1/2)(1/2) = 1/2,
uₓ(x₀, y₀) = 2(1/2) = 1, u_y(x₀, y₀) = −2(1/2) = −1,
vₓ(x₀, y₀) = 2(1/2) = 1, v_y(x₀, y₀) = 2(1/2) = 1.
Putting these values in

u(x₀, y₀) + h₁ uₓ(x₀, y₀) + k₁ u_y(x₀, y₀) = 0,
v(x₀, y₀) + h₁ vₓ(x₀, y₀) + k₁ v_y(x₀, y₀) = 0,

we get

1 + h₁ − k₁ = 0 and 1/2 + h₁ + k₁ = 0.

Solving these equations for h₁ and k₁, we get h₁ = −3/4, k₁ = 1/4. Hence,

x₁ = x₀ + h₁ = 1/2 − 3/4 = −1/4,
y₁ = y₀ + k₁ = 1/2 + 1/4 = 3/4.

Now

u(x₁, y₁) = 1 + 1/16 − 9/16 = 1/2,
v(x₁, y₁) = 2(−1/4)(3/4) = −3/8,
uₓ(x₁, y₁) = 2(−1/4) = −1/2,
u_y(x₁, y₁) = −2(3/4) = −3/2,
vₓ(x₁, y₁) = 2(3/4) = 3/2,
v_y(x₁, y₁) = 2(−1/4) = −1/2.

Putting these values in

u(x₁, y₁) + h₂ uₓ(x₁, y₁) + k₂ u_y(x₁, y₁) = 0,
v(x₁, y₁) + h₂ vₓ(x₁, y₁) + k₂ v_y(x₁, y₁) = 0,

we get

1/2 − (1/2)h₂ − (3/2)k₂ = 0 and −3/8 + (3/2)h₂ − (1/2)k₂ = 0.

Solving these equations, we get h₂ = 13/40, k₂ = 9/40. Hence,

x₂ = x₁ + h₂ = −1/4 + 13/40 = 3/40 = 0.075,
y₂ = y₁ + k₂ = 3/4 + 9/40 = 39/40 = 0.975.

Proceeding in the same fashion, we get x₃ = 0.00172 and y₃ = 0.9973, approaching the exact root z = i.
2.15 GRAEFFE'S ROOT SQUARING METHOD

Graeffe's root squaring method is applicable to polynomial equations only. The advantage of this method is that it does not require any prior information about the roots and it gives all the roots of the equation. Let the given equation be

f(x) = a₀xⁿ + a₁xⁿ⁻¹ + a₂xⁿ⁻² + … + aₙ₋₁x + aₙ = 0.

If x₁, x₂, …, xₙ are the roots of this equation, then we have

f(x) = a₀(x − x₁)(x − x₂) … (x − xₙ) = 0.   (2.24)

Multiplying equation (2.24) by the function

(−1)ⁿ f(−x) = a₀(x + x₁)(x + x₂) … (x + xₙ),   (2.25)

we get

(−1)ⁿ f(x) f(−x) = a₀²(x² − x₁²)(x² − x₂²) … (x² − xₙ²) = 0.   (2.26)

Putting x² = y, equation (2.26) becomes

F(y) = a₀²(y − x₁²)(y − x₂²) … (y − xₙ²) = 0.   (2.27)

The roots of equation (2.27) are x₁², x₂², …, xₙ², and are thus the squares of the roots of the given equation; applying the process k times gives an equation whose roots are the mth powers of the original roots, where m = 2ᵏ. The relations between the roots x₁, x₂, …, xₙ and the coefficients a₀, a₁, …, aₙ of the nth degree equation are

a₁/a₀ = −(x₁ + x₂ + … + xₙ),
a₂/a₀ = x₁x₂ + x₁x₃ + …,
a₃/a₀ = −(x₁x₂x₃ + x₁x₂x₄ + …),
…
aₙ/a₀ = (−1)ⁿ x₁x₂ … xₙ.

It follows that the roots x₁ᵐ, x₂ᵐ, …, xₙᵐ and the coefficients b₀, b₁, …, bₙ of the final transformed equation (after the squarings, with m = 2ᵏ) are connected by the corresponding relations

b₁/b₀ = −(x₁ᵐ + x₂ᵐ + … + xₙᵐ) = −x₁ᵐ[1 + (x₂/x₁)ᵐ + … + (xₙ/x₁)ᵐ],
b₂/b₀ = x₁ᵐx₂ᵐ + x₁ᵐx₃ᵐ + … = x₁ᵐx₂ᵐ[1 + (x₃/x₂)ᵐ + (x₄/x₂)ᵐ + …],
b₃/b₀ = −(x₁ᵐx₂ᵐx₃ᵐ + x₁ᵐx₂ᵐx₄ᵐ + …) = −x₁ᵐx₂ᵐx₃ᵐ[1 + (x₄/x₃)ᵐ + …],
…
bₙ/b₀ = (−1)ⁿ x₁ᵐx₂ᵐ … xₙᵐ.

Let the order of the magnitude of the roots be |x₁| > |x₂| > … > |xₙ|. When the roots are sufficiently separated, the ratios (x₂/x₁)ᵐ, (x₃/x₂)ᵐ, etc., are negligible in comparison with unity. Hence, the relations between the roots and the coefficients in the final transformed equation after the squarings are

b₁/b₀ = −x₁ᵐ, b₂/b₀ = x₁ᵐx₂ᵐ, b₃/b₀ = −x₁ᵐx₂ᵐx₃ᵐ, …, bₙ/b₀ = (−1)ⁿ x₁ᵐx₂ᵐ … xₙᵐ.

Dividing each of these equations after the first by the preceding equation, we obtain

b₂/b₁ = −x₂ᵐ, b₃/b₂ = −x₃ᵐ, …, bₙ/bₙ₋₁ = −xₙᵐ.   (2.28)

From b₁/b₀ = −x₁ᵐ and equations (2.28), we get

b₀x₁ᵐ + b₁ = 0, b₁x₂ᵐ + b₂ = 0, b₂x₃ᵐ + b₃ = 0, …, bₙ₋₁xₙᵐ + bₙ = 0.

The root squaring process has thus broken up the original equation into n simple equations from which the magnitudes of the desired roots can be found easily.

Remark 2.3. The multiplication by (−1)ⁿ f(−x) can be carried out by the following scheme. Under each coefficient write its square, then subtract twice the product of the two adjacent coefficients, add twice the product of the next pair out, and so on:

c₀ = a₀²,
c₁ = a₁² − 2a₀a₂,
c₂ = a₂² − 2a₁a₃ + 2a₀a₄,
c₃ = a₃² − 2a₂a₄ + 2a₁a₅ − 2a₀a₆,
c₄ = a₄² − 2a₃a₅ + 2a₂a₆ − 2a₁a₇ + 2a₀a₈,
….

The coefficients of the transformed equation are then bⱼ = (−1)ʲ cⱼ, so that its terms carry alternating signs when all the squared roots are real and positive.
EXAMPLE 2.20
Solve the equation x³ − 5x² − 17x + 20 = 0 by Graeffe's root squaring method (squaring three times).

Solution. We have f(x) = x³ − 5x² − 17x + 20 = 0, with coefficients 1, −5, −17, 20. For the first squaring, the scheme of Remark 2.3 gives

c₁ = (−5)² − 2(1)(−17) = 25 + 34 = 59,
c₂ = (−17)² − 2(−5)(20) = 289 + 200 = 489,
c₃ = (20)² = 400,

and so the equation obtained after the first squaring is

y³ − 59y² + 489y − 400 = 0.
For the second squaring, from the coefficients 1, −59, 489, −400 we get

c₁ = (−59)² − 2(1)(489) = 3481 − 978 = 2503,
c₂ = (489)² − 2(−59)(−400) = 239121 − 47200 = 191921,
c₃ = (−400)² = 16(10)⁴,

and so the equation obtained after the second squaring is

z³ − 2503z² + 191921z − 16(10)⁴ = 0.

For the third squaring, from the coefficients 1, −2503, 191921, −16(10)⁴ we get

c₁ = (−2503)² − 2(1)(191921) = 6265009 − 383842 = 5881167,
c₂ = (191921)² − 2(−2503)(−16(10)⁴) = 36833670241 − 800960000 = 36032710241,
c₃ = (16(10)⁴)² = 256(10)⁸.

Thus, the equation obtained after the third squaring is
u³ − 5881167u² + 36032710241u − 256(10)⁸ = 0.

Hence, the roots are given by

x₁⁸ = 5881167/1, so that |x₁| = 7.0175,
x₂⁸ = 36032710241/5881167, so that |x₂| = 2.9744,
x₃⁸ = 256(10)⁸/36032710241, so that |x₃| = 0.958170684.

Substituting these values in f(x) shows that the actual roots are 7.0175, −2.9744, and 0.9582.

EXAMPLE 2.21
Apply Graeffe's root squaring method to find all the roots of the equation x³ − 2x² − 5x + 6 = 0.

Solution. We have f(x) = x³ − 2x² − 5x + 6 = 0, with coefficients 1, −2, −5, 6. Three squarings give:

First squaring: c₁ = (−2)² − 2(1)(−5) = 4 + 10 = 14, c₂ = (−5)² − 2(−2)(6) = 25 + 24 = 49, c₃ = 6² = 36, giving

y³ − 14y² + 49y − 36 = 0.

Second squaring: c₁ = (−14)² − 2(1)(49) = 196 − 98 = 98, c₂ = 49² − 2(−14)(−36) = 2401 − 1008 = 1393, c₃ = 36² = 1296, giving

z³ − 98z² + 1393z − 1296 = 0.

Third squaring: c₁ = (−98)² − 2(1)(1393) = 9604 − 2786 = 6818, c₂ = (1393)² − 2(−98)(−1296) = 1940449 − 254016 = 1686433, c₃ = (1296)² = 1679616, giving

u³ − 6818u² + 1686433u − 1679616 = 0.
Therefore,

x₁⁸ = 6818/1, so that |x₁| = 3.0144433,
x₂⁸ = 1686433/6818, so that |x₂| = 1.9914253,
x₃⁸ = 1679616/1686433, so that |x₃| = 0.99949382.

These values are in good agreement with the actual values, since the actual roots of the given equation are x = 3, −2, and 1.
EXAMPLE 2.22
Find all the roots of the equation x^3 − 4x^2 + 5x − 2 = 0 by Graeffe's root squaring method, squaring thrice.
Solution. The given equation is f(x) = x^3 − 4x^2 + 5x − 2 = 0. Graeffe's root squaring method yields:
First squaring:
b1 = (−4)^2 − 2(5) = 16 − 10 = 6,
b2 = (5)^2 − 2(−4)(−2) = 25 − 16 = 9,
b3 = (−2)^2 = 4.
Second squaring:
b1 = (6)^2 − 2(9) = 36 − 18 = 18,
b2 = (9)^2 − 2(6)(4) = 81 − 48 = 33,
b3 = (4)^2 = 16.
Third squaring:
b1 = (18)^2 − 2(33) = 324 − 66 = 258,
b2 = (33)^2 − 2(18)(16) = 1089 − 576 = 513,
b3 = (16)^2 = 256.
Hence,
x1^8 = 258, and so x1 = 2.00194,
x2^8 = 513/258, and so x2 = 1.0897,
x3^8 = 256/513, and so x3 = 0.9168.
We observe that the estimates x2 and x3 do not settle down but straddle the value 1; moreover, each new middle coefficient is approximately half the square of the previous one (for example, 513 ≈ (33)^2/2). This is the signature of a double root. The magnitude r of the double root is obtained from r^16 = b3/b1 = 256/258, so that r = (256/258)^(1/16) = 0.99951. Thus, the magnitudes of the roots are 2.00194 and 0.99951 (twice); the actual roots of the equation are 2, 1, 1.
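The squaring step used in Examples 2.20–2.22 can be sketched in code. The short routine below is an illustration, not part of the text: the function names are mine, and the squared polynomial is formed as (−1)^n f(√y) f(−√y), which reproduces the coefficient rule b_k = a_k^2 − 2a_{k−1}a_{k+1} + 2a_{k−2}a_{k+2} − ⋯.

```python
def graeffe_step(a):
    """One Graeffe root-squaring step.

    `a` holds the coefficients of a monic polynomial f in descending
    powers.  The result holds the (monic, descending) coefficients of
    the polynomial whose roots are the squares of the roots of f.
    """
    n = len(a) - 1
    prod = [0.0] * (2 * n + 1)          # coefficients of f(x) * f(-x)
    for i, ai in enumerate(a):
        for j, aj in enumerate(a):
            sj = -1.0 if (n - j) % 2 else 1.0   # sign of a_j in f(-x)
            prod[i + j] += ai * sj * aj
    sign = -1.0 if n % 2 else 1.0       # normalize so the result is monic
    return [sign * c for c in prod[::2]]  # odd powers of x cancel

def root_magnitudes(a, squarings=3):
    """Approximate root magnitudes after the given number of squarings."""
    c = list(map(float, a))
    for _ in range(squarings):
        c = graeffe_step(c)
    m = 2 ** squarings
    return [abs(c[k] / c[k - 1]) ** (1.0 / m) for k in range(1, len(c))]
```

Applied to Example 2.21, `root_magnitudes([1, -2, -5, 6], 3)` reproduces the magnitudes 3.0144, 1.9914, 0.9995 found by hand.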
2.16 MULLER'S METHOD
Muller's method is an iterative method which does not require the derivative of the function. In this method, the function f(x) is approximated by a second degree curve in the neighborhood of a root. The method converges almost as fast as Newton's method and can be used to find real or complex zeros of a function.
Let x_{i-2}, x_{i-1}, and x_i be approximations to a root of the equation f(x) = 0 and let y_{i-2} = f(x_{i-2}), y_{i-1} = f(x_{i-1}), and y_i = f(x_i). Let
y = A(x − x_i)^2 + B(x − x_i) + y_i   (2.29)
be the parabola passing through (x_{i-2}, y_{i-2}), (x_{i-1}, y_{i-1}), and (x_i, y_i). Therefore,
A(x_{i-1} − x_i)^2 + B(x_{i-1} − x_i) = y_{i-1} − y_i,   (2.30)
A(x_{i-2} − x_i)^2 + B(x_{i-2} − x_i) = y_{i-2} − y_i.   (2.31)
Solving equations (2.30) and (2.31) for A and B, we get
A = [(x_{i-2} − x_i)(y_{i-1} − y_i) − (x_{i-1} − x_i)(y_{i-2} − y_i)] / [(x_{i-1} − x_{i-2})(x_{i-1} − x_i)(x_{i-2} − x_i)],   (2.32)
B = [(x_{i-2} − x_i)^2 (y_{i-1} − y_i) − (x_{i-1} − x_i)^2 (y_{i-2} − y_i)] / [(x_{i-2} − x_{i-1})(x_{i-1} − x_i)(x_{i-2} − x_i)].   (2.33)
The quadratic equation A(x_{i+1} − x_i)^2 + B(x_{i+1} − x_i) + y_i = 0, with A and B given by equations (2.32) and (2.33), yields the next approximation x_{i+1} as
x_{i+1} = x_i + [−B ± √(B^2 − 4Ay_i)]/(2A) = x_i − 2y_i/[B ± √(B^2 − 4Ay_i)],   (2.34)
the second form being obtained by rationalization. The sign in the denominator is chosen so that the denominator is largest in magnitude.

EXAMPLE 2.23
Using Muller's method, find a root of the equation x^3 − x − 1 = 0.
Solution. We are given that f(x) = x^3 − x − 1 = 0. We note that f(1) = −1, f(1.2) = −0.472, f(1.5) = 0.875.
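The iteration (2.32)–(2.34) can be sketched as follows. This is an illustrative implementation (the function name and defaults are mine); the complex square root lets the same code pursue complex zeros, as the text notes Muller's method can.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, itmax=50):
    """Muller iteration: fit a parabola through the last three points
    (eqs. 2.32, 2.33) and step with the rationalized formula (2.34)."""
    for _ in range(itmax):
        y0, y1, y2 = f(x0), f(x1), f(x2)
        denom = (x1 - x0) * (x1 - x2) * (x0 - x2)
        A = ((x0 - x2) * (y1 - y2) - (x1 - x2) * (y0 - y2)) / denom
        B = ((x1 - x2) ** 2 * (y0 - y2) - (x0 - x2) ** 2 * (y1 - y2)) / denom
        disc = cmath.sqrt(B * B - 4 * A * y2)
        # choose the sign that makes the denominator largest in magnitude
        den = B + disc if abs(B + disc) >= abs(B - disc) else B - disc
        x3 = x2 - 2 * y2 / den
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2
```

With the starting triple of Example 2.23, `muller(lambda x: x**3 - x - 1, 1.0, 1.2, 1.5)` converges to the root 1.32472 found there.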
Thus, one root of the equation lies between 1.2 and 1.5. Let x_{i-2} = 1, x_{i-1} = 1.2, and x_i = 1.5 be the initial approximations. Then y_{i-2} = −1, y_{i-1} = −0.472, and y_i = 0.875. Therefore,
A = [(−0.5)(−1.347) − (−0.3)(−1.875)] / [(0.2)(−0.3)(−0.5)] = 3.7,
B = [(−0.5)^2(−1.347) − (−0.3)^2(−1.875)] / [(−0.2)(−0.3)(−0.5)] = 5.6.
Thus, the quadratic equation is 3.7(x − 1.5)^2 + 5.6(x − 1.5) + 0.875 = 0 and the next approximation to the root is
x_{i+1} = 1.5 − 2(0.875)/[5.6 + √(31.36 − 12.95)] = 1.5 − 0.1769 = 1.3231.
We note that f(1.3231) = −0.006890. Repeating the process with x_{i-2} = 1.2, x_{i-1} = 1.3231, and x_i = 1.5, we get x_{i+1} = 1.3247.

EXAMPLE 2.24
Use Muller's method to find a root of the equation f(x) = x^3 − x − 2 = 0.
Solution. We have f(x) = x^3 − x − 2 = 0. We note that f(1.4) = −0.656, f(1.5) = −0.125, f(1.6) = 0.496. Thus, one root lies between 1.5 and 1.6. Let x_{i-2} = 1.4, x_{i-1} = 1.5, and x_i = 1.6 be the initial approximations. Then y_{i-2} = −0.656, y_{i-1} = −0.125, y_i = 0.496. Therefore,
A = [(−0.2)(−0.621) − (−0.1)(−1.152)] / [(0.1)(−0.1)(−0.2)] = 4.5,
B = [(−0.2)^2(−0.621) − (−0.1)^2(−1.152)] / [(−0.1)(−0.1)(−0.2)] = 6.66.
Therefore, the approximating quadratic equation is 4.5(x − 1.6)^2 + 6.66(x − 1.6) + 0.496 = 0
and the next approximation to the root is
x_{i+1} = 1.6 − 2(0.496)/[6.66 + √(44.3556 − 8.928)] = 1.52134.
We note that f(x_{i+1}) = f(1.52134) = −0.00199. Hence, the approximate value of the root is quite satisfactory.

EXAMPLE 2.25
Apply Muller's method to find a root of the equation cos x − x e^x = 0.
Solution. We are given that f(x) = cos x − x e^x = 0. We note that
f(−1) = 0.540302305 + 0.3678794411 = 0.908181746,
f(0) = 1,
f(1) = −2.177979523.
Therefore, one root of the given equation lies between 0 and 1. Let x_{i-2} = −1, x_{i-1} = 0, x_i = 1 be the initial approximations of the root. Then y_{i-2} = 0.908181746, y_{i-1} = 1, y_i = −2.177979523. Therefore,
A = [(−2)(3.177979523) − (−1)(3.086161269)] / [(1)(−1)(−2)] ≈ −1.635,
B = [(−2)^2(3.177979523) − (−1)^2(3.086161269)] / [(−1)(−1)(−2)] ≈ −4.813.
Then
x_{i+1} = x_i − 2y_i/[B − √(B^2 − 4Ay_i)]
= 1 − 2(−2.177979523)/[(−4.813) − √(23.164969 − 14.24398608)]
= 1 − (−4.355959046)/(−7.799801453) ≈ 0.4415.
Now we take the approximations x_{i-2} = 0, x_{i-1} = 0.4415, x_i = 1. Then y_{i-2} = 1, y_{i-1} = 0.217563, y_i = −2.177979523. Therefore,
A = [(−1)(2.395542533) − (−0.5585)(3.177979523)] / [(0.4415)(−0.5585)(−1)] ≈ −2.5170,
B = [(−1)^2(2.395542533) − (−0.5585)^2(3.177979523)] / [(−0.4415)(−0.5585)(−1)] ≈ −5.6950.
Then
x_{i+1} = x_i − 2y_i/[B − √(B^2 − 4Ay_i)]
= 1 − 2(−2.177979523)/[(−5.6950) − √(32.4330121 − 21.92789784)]
= 1 − (−4.355959046)/(−8.936159401) = 1 − 0.487453149 ≈ 0.51255.
Repeating this process twice more, we get the approximate root 0.517, correct to three decimal places.

EXAMPLE 2.26
Apply Muller's method to find the real root of the equation x^3 − x^2 − x − 1 = 0.
Solution. The given equation is f(x) = x^3 − x^2 − x − 1 = 0. We note that f(0) = −1, f(1) = −2, f(2) = 1. Thus, one root lies between 1 and 2. Let x_{i-2} = 0, x_{i-1} = 1, and x_i = 2, so that y_{i-2} = −1, y_{i-1} = −2, and y_i = 1. Therefore,
A = [(−2)(−3) − (−1)(−2)] / [(1)(−1)(−2)] = (6 − 2)/2 = 2,
B = [(−2)^2(−3) − (−1)^2(−2)] / [(−1)(−1)(−2)] = (−12 + 2)/(−2) = 5.
Therefore,
x_{i+1} = 2 − 2(1)/[5 + √(25 − 4(2)(1))] = 2 − 2/(5 + 4.123) = 2 − 0.2192 = 1.7808.
We note that f(1.7808) = −0.3047, which is negative, so the root lies between 1.7808 and 2. Therefore, for the second iteration, we set x_{i-2} = 1, x_{i-1} = 1.78, and x_i = 2. Then y_{i-2} = −2, y_{i-1} = f(1.78) = −0.3086, y_i = 1. Therefore,
A = [(1 − 2)(−0.3086 − 1) − (1.78 − 2)(−2 − 1)] / [(1.78 − 1)(1.78 − 2)(1 − 2)] = (1.3086 − 0.66)/0.1716 ≈ 3.78,
B = [(1 − 2)^2(−1.3086) − (1.78 − 2)^2(−3)] / [(1 − 1.78)(1.78 − 2)(1 − 2)] = (−1.3086 + 0.1452)/(−0.1716) ≈ 6.78.
Hence,
x_{i+1} = 2 − 2(1)/[6.78 + √(45.97 − 4(3.78)(1))] = 2 − 2/12.33 ≈ 1.8378.
We note that f(1.8378) = −0.0081. Therefore, x ≈ 1.838 is already a satisfactory approximation to the root, which is 1.8393 to four decimal places.
2.17 BAIRSTOW ITERATIVE METHOD
The Bairstow iterative method is used to extract a quadratic factor from a given polynomial. After obtaining the quadratic factors, the roots (real or complex) can be found by solving the quadratic equations. Let
f(x) = x^n + a1 x^(n-1) + ... + a_{n-1} x + a_n = 0   (2.35)
be a polynomial equation of degree n. We wish to find a quadratic factor x^2 + px + q of equation (2.35). Dividing the polynomial f(x) by x^2 + px + q, we get
f(x) = (x^2 + px + q)(x^(n-2) + b1 x^(n-3) + ... + b_{n-3} x + b_{n-2}) + Rx + S.   (2.36)
Obviously, (x^2 + px + q) will be a factor of f(x) if R = S = 0. Thus, our aim is to find p and q such that
R(p, q) = 0 and S(p, q) = 0.   (2.37)
The two equations in (2.37) are non-linear in p and q, so the Newton–Raphson method for simultaneous equations is applicable. Let p1 and q1 be the true values of p and q, and let Δp and Δq be the corrections; then p1 = p + Δp and q1 = q + Δq.
Therefore,
R(p1, q1) = R(p + Δp, q + Δq) = 0 and S(p1, q1) = S(p + Δp, q + Δq) = 0.
Applying Taylor's series expansion for two variables, we get
R(p1, q1) = R(p, q) + Δp ∂R/∂p + Δq ∂R/∂q + ... = 0
and
S(p1, q1) = S(p, q) + Δp ∂S/∂p + Δq ∂S/∂q + ... = 0.
Neglecting squares and higher powers of Δp and Δq, we have
Δp ∂R/∂p + Δq ∂R/∂q = −R(p, q),   (2.38)
Δp ∂S/∂p + Δq ∂S/∂q = −S(p, q).   (2.39)
Solving equations (2.38) and (2.39) for Δp and Δq, we have
Δp = (S R_q − R S_q)/(R_p S_q − R_q S_p) and Δq = (R S_p − S R_p)/(R_p S_q − R_q S_p),   (2.40)
where R_p = ∂R/∂p, etc.
We now determine expressions for R and S in terms of p and q. To do so, we equate coefficients of x^(n-1), x^(n-2), ... on both sides of equation (2.36) to get
a1 = b1 + p, so that b1 = a1 − p,
a2 = b2 + p b1 + q, so that b2 = a2 − p b1 − q,
a3 = b3 + p b2 + q b1, so that b3 = a3 − p b2 − q b1,
...
a_k = b_k + p b_{k-1} + q b_{k-2}, so that b_k = a_k − p b_{k-1} − q b_{k-2},
...
a_{n-1} = R + p b_{n-2} + q b_{n-3}, so that R = a_{n-1} − p b_{n-2} − q b_{n-3},
a_n = S + q b_{n-2}, so that S = a_n − q b_{n-2}.   (2.41)
Thus, if we introduce the recurrence formula
b_k = a_k − p b_{k-1} − q b_{k-2} for k = 1, 2, ..., n,   (2.42)
with b_0 = 1 and b_{-1} = 0, then the coefficients of the polynomial x^(n-2) + b1 x^(n-3) + ... + b_{n-3} x + b_{n-2} of degree n − 2, called the deflated polynomial, can be determined. We observe that
R = a_{n-1} − p b_{n-2} − q b_{n-3} = b_{n-1}   (2.43)
and
S = a_n − q b_{n-2} = b_n + p b_{n-1}.   (2.44)
Therefore, R and S can be determined once b_{n-1} and b_n are known. To find Δp and Δq from equation (2.40), we require the partial derivatives R_p, R_q, S_p, and S_q. From the recurrence (2.42), we have
∂b_k/∂p = −b_{k-1} − p ∂b_{k-1}/∂p − q ∂b_{k-2}/∂p, with ∂b_0/∂p = ∂b_{-1}/∂p = 0,   (2.45)
∂b_k/∂q = −b_{k-2} − p ∂b_{k-1}/∂q − q ∂b_{k-2}/∂q, with ∂b_0/∂q = ∂b_{-1}/∂q = 0.   (2.46)
If we set ∂b_k/∂p = −c_{k-1} and ∂b_k/∂q = −c_{k-2}, k = 1, 2, ..., n, then equations (2.45) and (2.46) reduce to, respectively,
c_{k-1} = b_{k-1} − p c_{k-2} − q c_{k-3}   (2.47)
and
c_{k-2} = b_{k-2} − p c_{k-3} − q c_{k-4}.   (2.48)
Therefore, we can introduce a recurrence relation to find the c_k in terms of the b_k as
c_k = b_k − p c_{k-1} − q c_{k-2}, k = 1, 2, ..., n − 1, with c_0 = 1 and c_{-1} = 0.   (2.49)
Hence, the relations (2.43) and (2.44) yield
R_p = ∂b_{n-1}/∂p = −c_{n-2},  S_p = ∂b_n/∂p + b_{n-1} + p ∂b_{n-1}/∂p = b_{n-1} − c_{n-1} − p c_{n-2},
R_q = ∂b_{n-1}/∂q = −c_{n-3},  S_q = ∂b_n/∂q + p ∂b_{n-1}/∂q = −(c_{n-2} + p c_{n-3}).
Putting these values of the partial derivatives in equation (2.40) and simplifying, we get
Δp = (b_{n-1} c_{n-2} − b_n c_{n-3}) / [c_{n-2}^2 − c_{n-3}(c_{n-1} − b_{n-1})],
Δq = [b_n c_{n-2} − b_{n-1}(c_{n-1} − b_{n-1})] / [c_{n-2}^2 − c_{n-3}(c_{n-1} − b_{n-1})].   (2.50)
Hence, if p0 and q0 are initial values of p and q, then the first improved values are
p1 = p0 + Δp and q1 = q0 + Δq,
and, in general, the improved values are given by
p_{k+1} = p_k + Δp and q_{k+1} = q_k + Δq,
where Δp and Δq are evaluated at p = p_k and q = q_k. The process is repeated till the result is obtained to the desired accuracy.
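The recurrences (2.42) and (2.49) together with the corrections (2.50) can be sketched as follows. This is an illustration under my own naming; the sample polynomial and starting values in the usage note are taken from Exercise 21 below and are assumptions, not part of the derivation.

```python
def bairstow(a, p, q, tol=1e-12, itmax=100):
    """Extract a quadratic factor x^2 + p*x + q from the monic
    polynomial x^n + a[1]x^(n-1) + ... + a[n]  (requires a[0] == 1).

    Uses b_k = a_k - p b_{k-1} - q b_{k-2} (eq. 2.42),
         c_k = b_k - p c_{k-1} - q c_{k-2} (eq. 2.49),
    and the Newton corrections of eq. (2.50)."""
    n = len(a) - 1
    for _ in range(itmax):
        b = [0.0] * (n + 1)
        c = [0.0] * (n + 1)
        for k in range(n + 1):
            bk1 = b[k - 1] if k >= 1 else 0.0
            bk2 = b[k - 2] if k >= 2 else 0.0
            b[k] = a[k] - p * bk1 - q * bk2
            ck1 = c[k - 1] if k >= 1 else 0.0
            ck2 = c[k - 2] if k >= 2 else 0.0
            c[k] = b[k] - p * ck1 - q * ck2
        D = c[n - 2] ** 2 - c[n - 3] * (c[n - 1] - b[n - 1])
        dp = (b[n - 1] * c[n - 2] - b[n] * c[n - 3]) / D
        dq = (b[n] * c[n - 2] - b[n - 1] * (c[n - 1] - b[n - 1])) / D
        p, q = p + dp, q + dq
        if abs(dp) + abs(dq) < tol:
            break
    return p, q
```

For example, for x^4 − 4x^3 − 7x^2 + 22x + 24 (Exercise 21), starting from p = −1.9, q = −2.9, the iteration converges to p = −2, q = −3, i.e. to the factor x^2 − 2x − 3 = (x − 3)(x + 1).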
Remark 2.4. (i) The values of b_k and c_k can be computed by a double synthetic-division scheme: write 1, a1, a2, ..., an in the top row; below each a_k enter −p0 b_{k-1} and −q0 b_{k-2} and add the column to obtain b_k; then repeat the same process on the row 1, b1, ..., b_{n-1} to obtain c1, c2, ..., c_{n-1}.
(ii) Since we have used the Newton–Raphson method to solve R(p, q) = 0 and S(p, q) = 0, Bairstow's method is a second order process.

EXAMPLE 2.27
Determine a quadratic factor of the equation x^4 − 5x^3 + 3x^2 + 5x − 9 = 0 using Bairstow's method.
Solution. We take the initial values of p and q as p0 = −3 and q0 = −5.
Carrying out the recurrences (2.42) and (2.49) with these values gives b1, ..., b4 and c1, ..., c3, and equation (2.50) then yields Δp = 0.09 and Δq = 0.08. Hence,
p1 = −3 + 0.09 = −2.91 and q1 = −5 + 0.08 = −4.92.
Repeating the computation with the new values of p and q gives Δp = 0.00745 and Δq = 0.00241, and so
p2 = p1 + Δp = −2.90255 and q2 = q1 + Δq = −4.91759.
The next iterations yield p3 = −2.902953, q3 = −4.917736 and p4 = −2.902954, q4 = −4.917738. Hence, the approximate factorization is
x^4 − 5x^3 + 3x^2 + 5x − 9 = (x^2 − 2.90295x − 4.91773)(x^2 − 2.09704x + 1.83011).
EXERCISES
1. Find the root of the equation x − cos x = 0 by the bisection method. Ans. 0.739.
2. Find a positive root of the equation x e^x = 1 lying between 0 and 1 using the bisection method. Ans. 0.567.
3. Solve x^3 − 4x − 9 = 0 by the Bolzano method. Ans. 2.706.
4. Use the Regula–Falsi method to solve x^3 + 2x^2 + 10x − 20 = 0. Ans. 1.3688.
5. Use the method of false position to obtain a root of the equation x^3 − x − 4 = 0. Ans. 1.796.
6. Solve e^x sin x = 1 by the Regula–Falsi method. Ans. 0.5885.
7. Using the Newton–Raphson method, find a root of the equation x log10 x = 1.2. Ans. 2.7406.
8. Use the Newton–Raphson method to obtain a root of x − cos x = 0. Ans. 0.739.
9. Solve sin x = 1 + x^3 by the Newton–Raphson method. Ans. −1.24905.
10. Find the real root of the equation 3x = cos x + 1 using the Newton–Raphson method. Ans. 0.6071.
11. Derive the formula x_{i+1} = (1/2)(x_i + N/x_i) to determine the square root of N. Hence calculate the square root of 2. Ans. 1.414214.
12. Find a real root of the equation cos x = 3x − 1 correct to three decimal places using the iteration method. Hint: the iteration formula is x_{n+1} = (1 + cos x_n)/3. Ans. 0.607.
13. Using the iteration method, find a root of the equation x^3 + x^2 − 100 = 0. Ans. 4.3311.
14. Find the double root of the equation x^3 − x^2 − x + 1 = 0 near 0.9. Ans. 1.0001.
15. Use Newton's method to solve
x^2 − y^2 = 4, x^2 + y^2 = 16,
taking the starting value as (2.828, 2.828). Ans. x = 3.162, y = 2.450.
16. Use Newton's method to solve
x^2 − 2x − y + 0.5 = 0, x^2 + 4y^2 − 4 = 0,
taking the starting value as (2.0, 0.25). Ans. x = 1.900677, y = 0.311219.
17. Find the real roots of the equation x^3 − 6x^2 + 11x − 6 = 0 using Graeffe's root squaring method. Ans. 3, 2, 1.
18. Find the roots of the equation x^3 − 8x^2 + 17x − 10 = 0 using Graeffe's root squaring method. Ans. 5, 2.001, 0.9995.
19. Using Muller's method, find the root of the equation x^3 − 2x − 5 = 0 which lies between 2 and 3. Ans. 2.094568.
20. Find the root lying between 1 and 2 of the equation x^3 − x^2 − x − 1 = 0 using Muller's method. Ans. 1.8393.
21. Solve the equation x^4 − 4x^3 − 7x^2 + 22x + 24 = 0 using the Bairstow method. Ans. quadratic factors (x^2 − 2.00004x − 8.00004) and (x^2 − 2x − 3); the roots are −1, −2, 3, 4.
22. Solve the equation x^4 − 8x^3 + 39x^2 − 62x + 50 = 0 using the Bairstow method. Ans. 1 ± i, 3 ± 4i.
3
Linear Systems of Equations
In this chapter, we shall study direct and iterative methods to solve linear system of equations. Among the direct methods, we shall study Gauss elimination method and its modification by Jordan, Crout, and triangularization methods. Among the iterative methods, we shall study Jacobi and Gauss–Seidel methods.
3.1
DIRECT METHODS
Matrix Inversion Method
Consider the system of n linear equations in n unknowns:
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn.   (3.1)
The matrix form of the system (3.1) is
AX = B,   (3.2)
where A = [a_ij] is the n × n coefficient matrix, X = [x1 x2 ... xn]^T, and B = [b1 b2 ... bn]^T.
Suppose A is non-singular, that is, det A ≠ 0. Then A^(-1) exists. Therefore, premultiplying (3.2) by A^(-1), we get A^(-1)AX = A^(-1)B, or X = A^(-1)B. Thus, finding A^(-1), we can determine X and so x1, x2, ..., xn.

EXAMPLE 3.1
Solve the equations
x + y + 2z = 1
x + 2y + 3z = 1
2x + 3y + z = 2.
Solution. We have
det A = det [ 1 1 2 ; 1 2 3 ; 2 3 1 ] = −4 ≠ 0,
so A^(-1) exists. Also
A^(-1) = (1/4) [ 7 −5 1 ; −5 3 1 ; 1 1 −1 ].
Hence,
X = A^(-1)B = (1/4) [ 7 −5 1 ; −5 3 1 ; 1 1 −1 ] [1; 1; 2] = (1/4) [4; 0; 0] = [1; 0; 0],
and so x = 1, y = 0, z = 0.
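The inverse in Example 3.1 can be computed mechanically by Gauss–Jordan elimination on [A | I]. The sketch below is illustrative (names are mine) and assumes the pivots encountered are non-zero, as in this example.

```python
def mat_inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented block [A | I].
    Assumes every pivot met along the way is non-zero (no row swaps)."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for r in range(n):
        d = M[r][r]
        M[r] = [v / d for v in M[r]]          # normalize the pivot row
        for k in range(n):
            if k != r:                        # eliminate above and below
                m = M[k][r]
                M[k] = [vk - m * vr for vk, vr in zip(M[k], M[r])]
    return [row[n:] for row in M]             # right block is A^(-1)
```

For the matrix of Example 3.1, `mat_inverse([[1,1,2],[1,2,3],[2,3,1]])` returns (1/4)[7 −5 1; −5 3 1; 1 1 −1], and multiplying by B = [1, 1, 2] gives the solution [1, 0, 0].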
Gauss Elimination Method
This is the simplest method of step-by-step elimination; it reduces the system of equations to an equivalent upper triangular system, which can be solved by back substitution. Let the system of equations be
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn.
The matrix form of this system is AX = B, where A = [a_ij], X = [x1 x2 ... xn]^T, and B = [b1 b2 ... bn]^T.
The augmented matrix is
[A : B] = [ a11 a12 ... a1n | b1 ; a21 a22 ... a2n | b2 ; ... ; an1 an2 ... ann | bn ].
The number a_rr at position (r, r) that is used to eliminate x_r in rows r + 1, r + 2, ..., n is called the rth pivotal element, and the rth row is called the pivotal row. In the first step, a11 is the pivotal element and the first row is the pivotal row. The multipliers m_{k,1} = a_{k1}/a11 are the multiples of row 1 that are subtracted from row k for k = 2, 3, ..., n; this eliminates the elements in the first column below the diagonal. The result after this elimination is
[ a11 a12 ... a1n | b1 ]
[ 0 c22 ... c2n | d2 ]
[ 0 c32 ... c3n | d3 ]
[ ... ]
[ 0 cn2 ... cnn | dn ],
and the second row becomes the new pivotal row, with multipliers m_{k,2} = c_{k2}/c22 for k = 3, 4, ..., n.
The second row (now the pivotal row) is used to eliminate the elements in the second column that lie below the diagonal; the elements m_{k,2} are the multiples of row 2 that are subtracted from row k for k = 3, 4, ..., n. Continuing this process, we arrive at an upper triangular augmented matrix whose rows are
(a11 a12 ... a1n | b1), (0 c22 ... c2n | d2), ..., (0 0 ... hnn | pn).
Hence, the given system of equations reduces to
a11 x1 + a12 x2 + ... + a1n xn = b1
c22 x2 + ... + c2n xn = d2
...
hnn xn = pn.
In the above set of equations, each equation has one variable fewer than its predecessor. From the last equation, we have xn = pn/hnn. Putting this value of xn in the preceding equation, we can find x_{n-1}. Continuing in this way, and finally putting the values of x2, x3, ..., xn in the first equation, x1 can be determined. The process discussed here is called back substitution.
Remark 3.1. It may occur that the pivot element, even if it is different from zero, is very small and gives rise to large errors. The reason is that a small coefficient has usually been formed as the difference between two almost equal numbers. This difficulty is overcome by suitable permutations of the given equations. It is therefore recommended that the pivotal equation be the one with the largest leading coefficient in magnitude.

EXAMPLE 3.2
Express the following system in augmented matrix form and find an equivalent upper triangular system and the solution:
2x1 + 4x2 − 6x3 = −4
x1 + 5x2 + 3x3 = 10
x1 + 3x2 + 2x3 = 5.
Solution. The augmented matrix for the system, with the first row as pivotal row and multipliers m2,1 = 0.5 and m3,1 = 0.5, is
[ 2 4 −6 | −4 ]
[ 1 5 3 | 10 ]
[ 1 3 2 | 5 ].
The result after the first elimination is
[ 2 4 −6 | −4 ]
[ 0 3 6 | 12 ]
[ 0 1 5 | 7 ],
with the second row as pivotal row and m3,2 = 1/3. The result after the second elimination is
[ 2 4 −6 | −4 ]
[ 0 3 6 | 12 ]
[ 0 0 3 | 3 ].
Therefore, back substitution yields
3x3 = 3, and so x3 = 1,
3x2 + 6x3 = 12, and so x2 = 2,
2x1 + 4x2 − 6x3 = −4, and so x1 = −3.
Hence, the solution is x1 = −3, x2 = 2, and x3 = 1.

EXAMPLE 3.3
Solve by Gauss elimination method:
10x − 7y + 3z + 5u = 6
−6x + 8y − z − 4u = 5
3x + y + 4z + 11u = 2
5x − 9y − 2z + 4u = 7.
Solution. The augmented matrix for the given system, with the first row as pivotal row and multipliers m2,1 = −0.6, m3,1 = 0.3, m4,1 = 0.5, is
[ 10 −7 3 5 | 6 ]
[ −6 8 −1 −4 | 5 ]
[ 3 1 4 11 | 2 ]
[ 5 −9 −2 4 | 7 ].
The first elimination yields
[ 10 −7 3 5 | 6 ]
[ 0 3.8 0.8 −1 | 8.6 ]
[ 0 3.1 3.1 9.5 | 0.2 ]
[ 0 −5.5 −3.5 1.5 | 4 ],
with m3,2 = 0.81579 and m4,2 = −1.4474. The result after the second elimination is
[ 10 −7 3 5 | 6 ]
[ 0 3.8 0.8 −1 | 8.6 ]
[ 0 0 2.4474 10.3158 | −6.8158 ]
[ 0 0 −2.3421 0.0526 | 16.4476 ],
with m4,3 = −0.957. The result after the third elimination is
[ 10 −7 3 5 | 6 ]
[ 0 3.8 0.8 −1 | 8.6 ]
[ 0 0 2.4474 10.3158 | −6.8158 ]
[ 0 0 0 9.9248 | 9.9249 ].
Therefore, back substitution yields
9.9248u = 9.9249, and so u ≈ 1,
2.4474z + 10.3158u = −6.8158, and so z = −6.9999 ≈ −7,
3.8y + 0.8z − u = 8.6, and so y = 4,
10x − 7y + 3z + 5u = 6, and so x = 5.
Hence, the solution of the given system is x = 5, y = 4, z = −7, and u = 1.

EXAMPLE 3.4
Solve the following equations by Gauss elimination method:
2x + y + z = 10, 3x + 2y + 3z = 18, x + 4y + 9z = 16.
Solution. The augmented matrix for the given system of equations, with m2,1 = 3/2 and m3,1 = 1/2, is
[ 2 1 1 | 10 ]
[ 3 2 3 | 18 ]
[ 1 4 9 | 16 ].
1 8.6 · . ¨ 0 3.8 ¨0 0 2.4474 10.3158 6.8158 · ¨ · 0 0 9.9248 9.9249 ·¸ ¨© 0 Therefore, back substitution yields 9.9248u 9.9249 and so u y 1 2.4474 z 10.3158u 6.8158 and so z 6.9999 y 7 3.8y 0.8z u 8.6 and so y 4 10x 7y 3z 5u 6 and so x 5. Hence, the solution of the given system is x = 5, y = 4, z = −7, and u = 1. EXAMPLE 3.4 Solve the following equations by Gauss elimination method: 2x y z 10, 3x 2y 3z 18, x 4y 9z 16. Solution. The given equations are 2x + y + z =10, 3x + 2y + 3z = 18, x + 4y + 9z = 16. The augmented matrix for given system of equations is pivot l § 2 1 1 10 ¶ j pivotal row · ¨ m2 ,1 3/ 2 ¨ 3 2 3 18 · m3,1 1/ 2 ¨© 1 4 9 16 ·¸
55
56
Numerical Methods
The result of first Gauss elimination is §2 ¨ ¨0 ¨ ¨ ¨0 ©
1 1 10 ¶ · 1 3 3 · j pivotal row · 2 2 · 7 17 11· ¸ 2 2
The second elimination yields § 2 1 1 10 ¶ ¨ · ¨0 1 3 3· ¨ · 2 2 ¨0 0 2 1 · 0 © ¸ Thus, the given system equations reduces to 2x + y + z = 10 0.5 y + 1.5z = 3
2 z = 10. Hence, back substitution yields z = 5, y = 9, x = 7.
Jordan Modification to Gauss Method Jordan modification means that the elimination is performed not only in the equation below but also in the equation above the pivotal row so that we get a diagonal matrix. In this way, we have the solution without further computation. n3 Comparing the methods of Gauss and Jordan, we find that the number of operations is essentially 3 n3 for Jordan method. Hence, Gauss method should usually be preferred over Jordan for Gauss method and 2 method. To illustrate this modification we reconsider Example 3.2. The result of first elimination is unchanged and we have m1,2 4 / 3 § 2 4 6 ¨ pivot l ¨ 0 3 6 m3,2 1/ 3 ¨© 0 1 5
4 ¶ · 12 · j pivotal row 7 ·¸
Now, the second elimination as per Jordan modification yields m1,3 14 / 3 § 2 0 14 ¨ m2 ,3 2 ¨ 0 3 6 pivot l ¨© 0 0 3
20 ¶ · 12 · 3 ·¸ j pivotal row
The third elimination as per Jordan modification yields § 2 0 0 6 ¶ ¨ ·. ¨0 3 0 6 · ¨© 0 0 3 3 ·¸
Linear Systems of Equations
Hence, 2x1 = 6 and so x1 = 3, 3x2 = 6 and so x2 = 2, 3x3 = 3 and so x3 = 1. EXAMPLE 3.5 Solve x + 2y + z = 8 2 x + 3 y + 4z = 2 0 4x + 3y + 2z = 1 6 by Gauss–Jordan method. Solution. The augmented matrix for the given system of equations is pivot l § 1 2 1 ¨ m2,1 2 ¨ 2 3 4 m 4 ¨© 4 3 2 3,1
8 ¶ j pivotal row · 20 · 16 ·¸
The result of first elimination is m1,2 2 § 1 2 1 ¨ pivot l ¨0 1 2 m3,2 5 ¨©0 5 2
8 ¶ · 4 · j pivotal row
16 ·¸
The second Gauss–Jordan elimination yields m1,3 5/12 § 1 0 5 ¨ m2 ,3 1/ 6 ¨0 1 2 pivot l ¨©0 0 12
16 ¶ · 4 ·
36 ·¸ j pivotal row
The third Gauss–Jordan elimination yields §1 0 0 1 ¶ ¨ ·
2 · . ¨0 1 0 ¨0 0 12 36 · ¸ © Therefore, x 1, y 2, and z 3 is the required solution. EXAMPLE 3.6 Solve 10x + y + z = 12 x +10y + z = 12 x + y + 10z = 12 2 by Gauss–Jordan method.
57
58
Numerical Methods
Solution. The augmented matrix for the given system is pivot l §10 1 1 ¨ m2,1 1/10 ¨ 1 10 1 m 1/10 ¨© 1 1 10 3,1
12 ¶ j pivotal row · 12 · 12 ·¸
The first Gauss–Jordan elimination yields m1,2 10 / 99 §10 1 1 ¨ pivot l ¨ 0 99 /10 9 /10 m3,2 1/11 ¨© 0 9 /10 99 /10
12 ¶ · 108/10 · j pivotal row 108/10 ·¸
Now the Gauss–Jordan elimination gives m1,3 10 /108 §10 0 10 /11 ¨ m2,3 11/120 ¨ 0 99 /10 9 /10 0 108/11 pivot l ¨© 0
120 /11¶ · 108/10 · 108/11·¸ j pivotal row
The next Gauss–Jordan elimination yields 0 0 10 ¶ §10 ¨ 0 99 / 10 0 99 / 10 ·· . ¨ ¨© 0 0 108 / 11 108 / 11·¸ Hence, the solution of the given system is x 1, y 1, z 1. EXAMPLE 3.7 Solve by Gauss–Jordan method x y z 9 2 x 3 y 4 z 13 3x 4 y 5z 40. Solution. The augmented matrix for the given system is § 1 1 1 9 ¶ j pivotalr ow ¨ · m21 2 ¨ 2 3 4 13 · m31 3 ¨© 3 4 5 40 ·¸ The first Gauss–Jordan elimination yields 1 5 §1 1 1 9 ¶ ¨ · ¨0 5 2 5· j pivotal row. 1 ¨0 1 2 1 ·¸ 3 m32 © 5 m12
Linear Systems of Equations
59
The second Gauss elimination yields § ¶ 7 m13 7 /12 ¨ 1 0 5 8 · ¨ · m23 10 /12 ¨0 5 2 5· ¨ · j pivotal row 12 ¨0 0 12 · ¨© ·¸ 5 The third Gauss elimination yields § ¨1 0 0 ¨ ¨0 5 0 ¨ 12 ¨0 0 5 ©
¶ 1 · ·
15· . · 12 · ¸
Thus, we have attained the diagonal form of the system. Hence, the solution is x 1, y
15 12(5) 3, z 5. 5 12
Triangularization (Triangular Factorization) Method We have seen that Gauss elimination leads to an upper triangular matrix, where all diagonal elements are 1. We shall now show that the elimination can be interpreted as the multiplication of the original coefficient matrix A by a suitable lower triangular matrix. Hence, in three dimensions, we put § l11 ¨ ¨ l 21 ¨l © 31
0
l 22 l32
0 ¶ § a11 ·¨ 0 · ¨ a21 l33 ·¸ ¨© a31
a12 a22 a32
a13 ¶ § 1 u12 · ¨ a23 · ¨0 1 a33 ·¸ ¨©0 0
u13 ¶ · u23 · . 1 ·¸
In this way, we get nine equations with nine unknowns (six l elements and three u elements). If the lower and upper triangular matrices are denoted by L and U, respectively, we have LA U or A L 1U. Since L 1 is also a lower triangular matrix, we can find a factorization of A as a product of one lower triangular matrix and one upper triangular matrix. Thus, a non-singular matrix A is said to have a triangular factorization if it can be expressed as a product of a lower triangular matrix L and an upper triangular matrix U, that is, if A LU. For the sake of convenience, we can choose lii 1 or uii 1. Thus, the system of equations AX B is resolved into two simple systems as follows: AX B or LUX B or LY B and UX Y. Both the systems can be solved by back substitution.
60
Numerical Methods
EXAMPLE 3.8 Solve the following system of equations by triangularization method: x1 2 x2 3x3 14 2 x1 5x2 2 x3 18 3x1 x2 5x3 20. Solution. The matrix form of the given system is AX B, where
Let
§ x1 ¶ §14 ¶ §1 2 3¶ ¨ · ¨ · ¨ · A ¨ 2 5 2 · , X ¨ x2 · , B ¨18 · . ¨x · ¨ 20 · ¨© 3 1 5 ·¸ © ¸ © 3¸ A LU,
that is, § 1 2 3¶ § 1 ¨ · ¨ ¨ 2 5 2 · ¨ l21 ¨ 3 1 5· ¨l © ¸ © 31
0 1 l32
0 ¶ §u11 u12 ·¨ 0 · ¨ 0 u22 0 1 ·¸ ¨© 0
u13 ¶ · u23 · u33 ·¸
and so we have 1 u11 2 l21u11 and so l21 2 3 l31u11 and so l31 3 2 u12 5 l21u12 u22 2(2) u22 and so u22 1 1 l31u12 l32 u22 3(2) l32(1) and so l32 5 3 u13 2 l21u13 u23 2(3) u23 and so u23 –4 5 l31u13 l32u23 u33 3(3) (–5) (–4) u33 and so u33 –24. Hence, §1 0 0¶ §1 2 3 ¶ ¨ · ¨ · L ¨ 2 1 0 · and U ¨0 1 4 · . ¨ 3 5 1 · ¨0 0 24 · © ¸ © ¸ Now we have AX B or LUX B or LY B where UX Y. But LY B yields § 1 0 0 ¶ § y1 ¶ §14 ¶ ¨ ·¨ · ¨ · ¨ 2 1 0 · ¨ y2 · ¨18 · ¨ 3 5 1 · ¨ y · ¨ 20 · © ¸© 3¸ © ¸
Linear Systems of Equations
and we have
61
y1 = 14, 2y1 + y2 = 18 and so y2 −10, 3y1 5y2 + y3 = 20 and so y3 −72.
Then UX Y yields
§1 ¨ ¨0 ¨0 ©
2 3 ¶ § x1 ¶ § 14 ¶ ·¨ · ¨ · 1
4 · ¨ x2 · ¨ 10 · 0 24 ·¸ ¨© x3 ·¸ ¨© 72 ·¸
and so –24x3 –72 which yields x3 3, x2 – 4x3 –10 which yields x2 2, x1 2x2 x3 14 which yields x1 1. Hence, the required solution is x1 1, x2 2, and x3 3. EXAMPLE 3.9 Use Gauss elimination method to find triangular factorization of the coefficient matrix of the system x1 2x2 3x3 14 2x1 5x2 2x3 18 3x1 x2 5x3 20 and hence solve the system. Solution. In matrix form, we have AX B, where
Write
§ x1 ¶ §14 ¶ §1 2 3¶ ¨ · ¨ · A ¨ 2 5 2 · , X ¨ x2 · , B ¨¨18 ·· . ¨x · ¨© 20 ·¸ ¨© 3 1 5 ·¸ © 3¸ § 1 0 0 ¶ § 1 2 3 ¶ j pivotal row ¨ ·¨ · A IA ¨0 1 0 · ¨ 2 5 2 · m2 ,1 2 ¨0 0 1 · ¨ 3 1 5 · m3,1 3 © ¸© ¸
The elimination in the second member on the right-hand side is done by Gauss elimination method while in the first member l21 is replaced by m21 and l31 is replaced by m31. Thus, the first elimination yields § 1 0 0 ¶ §1 2 3 ¶ ¨ ·¨ · A ¨ 2 1 0 · ¨0 1 4 · j pivotal row ¨ 3 0 1 · ¨0 5 4 · m 5 3,2 © ¸© ¸ Then the second elimination gives the required triangular factorization as § 1 0 0 ¶ §1 2 3 ¶ A ¨¨ 2 1 0 ·· ¨¨0 1 4 ·· ¨© 3 5 1 ·¸ ¨©0 0 24 ·¸ LU,
62
Numerical Methods
where §1 0 0¶ ¨ · L ¨ 2 1 0 · and ¨ 3 5 1 · © ¸ The solution is then obtained as in Example 3.8.
§1 2 3 ¶ ¨ · U ¨0 1 4 · . ¨0 0 24 · © ¸
EXAMPLE 3.10 Solve 2x1 4x2 – 6x3 –4 x1 5x2 3x3 10 x1 3x2 2x3 5. Solution. Write A IA, that is, § 2 4 6 ¶ § 1 0 0 ¶ § 2 4 66 ¶ j pivotal row · ¨ · ¨ ·¨ ¨ 1 5 3 · ¨0 1 0 · ¨ 1 5 3 · m2 ,1 1/ 2 ¨ 1 3 2 · ¨0 0 1 · ¨ 1 3 2 · m 1/ 2 ¸ © ¸ © ¸© 3,1 Using Gauss elimination method, discussed in Example 3.9, the first elimination yields § 1 0 0 ¶ § 2 4 6 ¶ ¨ ·¨ · A ¨1/ 2 1 0 · ¨ 0 3 6 · j pivotal row ¨1/ 2 0 1 · ¨ 0 1 5 · m 1/ 3 3,2 © ¸© ¸ The second elimination yields § 1 0 0 ¶ § 2 4 6 ¶ ¨ ·¨ · A ¨1/ 2 1 0 · ¨ 0 3 6 · LU. ¨1/ 2 1/ 3 1 · ¨ 0 0 3 · © ¸© ¸ Therefore, AX B reduces to LUX B or LY B, UX Y. Now LY B gives § 1 0 0 ¶ § y1 ¶ § 4 ¶ ¨ ·¨ · ¨ · ¨1/ 2 1 0 · ¨ y2 · ¨10 · ¨1/ 2 1/ 3 1 · ¨ y · ¨ 5 · © ¸ © 3¸ © ¸ and so y1 – 4 1 y y2 10 which yields y2 12, 2 1 Then UX Y implies
1 1 y y y3 5 which yields y3 3. 2 1 3 2 § 2 4 6 ¶ § x1 ¶ § 4 ¶ ¨ ·¨ · ¨ · ¨ 0 3 6 · ¨ x2 · ¨ 12 · ¨0 0 3 · ¨ x · ¨ 3 · © ¸© 3¸ © ¸
Linear Systems of Equations
63
and so 3x3 3 which yields x3 1, 3x2 6x3 12 which yields x2 2, 2x1 4x2 – 6x3 –4 which yields x1 –3. Hence, the solution of the given system is x1 –3, x2 2, and x3 1. EXAMPLE 3.11 Solve x 3y 8z 4 x 4y 3z –2 x 3y 4z 1 by the method of factorization. Solution. The matrix form of the system is AX B, where
Write
§x¶ §1 3 8 ¶ §4¶ ¨ · ¨ · A ¨1 4 3 · , X ¨ y · and B ¨¨ 2 ·· . ¨z· ¨©1 3 4 ·¸ ¨© 1 ·¸ © ¸ § 1 0 0 ¶ §1 3 8 ¶ j pivotal row ¨ ·¨ · A IA ¨0 1 0 · ¨1 4 3 · m2,1 1 ¨0 0 1 · ¨1 3 4 · m3,1 1 © ¸© ¸
Applying Gauss elimination method to the right member and replacing l21 by m21 and l31 by m31 in the left member, we get §1 0 0 ¶ §1 3 8 ¶ A ¨¨1 1 0 ·· ¨¨0 1 5 ·· j pivotal row ©¨1 0 1 ·¸ ©¨0 0 4 ·¸ LU. Then AX B reduces to LUX B or LY B and UX Y. Now LY B gives §1 0 0 ¶ § y1 ¶ § 4 ¶ ¨ ·¨ · ¨ · ¨1 1 0 · ¨ y2 · ¨ 2 · ¨1 0 1 · ¨ y · ¨ 1 · © ¸© 3¸ © ¸ and so Then UX Y gives
y1 4, y2 –6, y1 y3 1 which implies y3 –3. §1 3 8 ¶ § x ¶ § 4 ¶ ¨ ·¨ · ¨ · ¨0 1 5 · ¨ y · ¨ 6 · . ¨0 0 4 · ¨ z · ¨ 3· © ¸© ¸ © ¸
Hence, the required solution is x
19 3 9 , y , z . 4 4 4
64
Numerical Methods
Triangularization of Symmetric Matrix When the coefficient matrix of the system of linear equations is symmetric, we can have a particularly simple triangularization in the form A LLT or § a11 ¨ ¨ a12 ¨K ¨ ¨K ¨ © a1n
a12 K ... a22 K K K K K K K K a2 n K K
a1n ¶ § l11 · ¨ a2 n · ¨ l21 ¨ K ·· ¨K K · ¨K · ¨ ann ¸ ¨© ln1
0
K
K
l22
0
K
K K K K K ln 1,n 1 ln 2 K
§ l112 ¨ ¨ l11l21 ¨ K ¨ ¨K ¨ ¨© ln1l11
K
l l
K K
l l
K K
11 21 2 2 21 22
l21 K K ln1 ¶ · l22 K K ln 2 · K K K K ·· K K K K· · 0 K K lnn ¸
0 ¶ § l11 ·¨ 0 ·¨ 0 K ·· ¨K ¨ 0 · ¨K ·¨ lnn ·¸ © 0
K K K K K K ln1l21 ln 2 l22 K K
l11ln1
¶ · l21ln1 l22 ln 2 · · K · · K 2 2 2 · ln1 ln 2 K lnn ·¸
Hence, l112
a11 ,
l21l31 l22 l32 a23 ,
l212 l222 a22
K,
K
l11l21 a12 ,
K K K K 2 l11ln1 l1n , l21ln1 l22 ln 2 a2 n , ln1 K lnn2 ann However, it may encounter with some terms which are purely imaginary but this does not imply any special complications. The matrix equation AX B reduces to LLT X B or LZ B and LT X Z. This method is known as the square root method and is due to Banachiewicz and Dwyer. EXAMPLE 3.12 Solve by square root method: 4x – y 2z 12 –x 5y 3z 10 2x 3y 6z 18. Solution. The matrix form of the given system is AX B, where §x¶ §12 ¶ § 4 1 2 ¶ ¨ · ¨ · A ¨ 1 5 3 · , X ¨ y · , B ¨¨10 ·· . ¨z· ¨©18 ·¸ ¨© 2 3 6 ·¸ © ¸
Linear Systems of Equations
The matrix A is symmetric. Therefore, we have triangularization of the type A LLT , that is, § 4 1 2 ¶ § l11 ¨ · ¨ ¨ 1 5 3 · ¨ l21 ¨ 2 3 6· ¨l © ¸ © 31
0 l22 l32
§ l112 ¨ ¨ l11l21 ¨ © l11l31
0 ¶ § l11 ·¨ 0 ·¨ 0 l33 ·¸ ¨© 0
l21 l22
l l
11 21 2 2 21 22
l l
l21l31 l22 l32
0
l31 ¶ · l32 · l33 ·¸ l11l31
¶ · l21l31 l22 l32 · . · l312 l322 l332 ¸
Hence, l112 4 and so l11 2, 1 l11l21 –1 and so l21 , 2 l11l31 2 and so l31 1, l212 l222 5 and so l22 5 l21l31 l22 l32 3 and so
1 19 7 . l 3 or l32 2 4 32 19
l312 l322 l332 6 and so 1 Thus, § 2 ¨ ¨ 1 L ¨ 2 ¨ ¨ ¨ 1 ©
Then, LZ B yields
§ 2 ¨ ¨ 1 ¨ 2 ¨ ¨ ¨ 1 ©
0 19 4 7 19
1 19 , 4 4
49 2 46 . l 6 or l33 19 33 19 0 19 4 7 19
0 ¶ · · 0 ·. · 46 · 19 ·¸
0 ¶ · § z1 ¶ §12 ¶ ·¨ · ¨ · 0 · ¨ z2 · ¨10 · · ¨ z · ¨18 · 46 · © 3 ¸ © ¸ 19 ·¸
and so z1 6
3
26 19 . z2 10 which yields z2 4 19
65
66
Numerical Methods
6
7 19
r
26
46 46 . z 18, which yields z3 19 3 19
19
Now LT X Z gives § ¨2 ¨ ¨ ¨0 ¨ ¨ ¨0 ¨©
1 2
19 4 0
¶ 1 · · § x ¶ §¨ 7 ·¨ · ¨ · ¨ y· 19 · ¨ · ¨ ©z¸ ¨ 46 ·· ¨ ¨ 19 ·¸ ©
6 ¶ · 26 · . 19 · · 46 · 19 ·¸
Hence, z 1, 4 19 7 26 2, or y 19 r y z 4 19 19 19 1 2 x y z 6 which gives x 3. 2 Hence, the solution is x 3, y 2, and z 1.
Crout’s Method Crout suggested a technique to determine systematically the entries of the lower and upper triangles in the factorization of a given matrix A. We describe the scheme of the method stepwise. Let the matrix form of the system (in three dimensions) be AX B, where
The augmented matrix is
§ a11 ¨ A ¨ a21 ¨a © 31
a12 a22 a32
§ x1 ¶ § b1 ¶ a13 ¶ ¨ · · ¨ · a23 · , X ¨ x2 · , B ¨ b2 · . ¨x · ¨b · a33 ·¸ © 3¸ © 3¸
§ a11 ¨ [ A : B] ¨ a21 ¨ ¨© a31
a12
a13
a22
a23
a32
a33
b1 ¶ · b2 · . b3 ·¸
The matrix of the unknowns (in factorization of A), called the derived matrix or auxiliary matrix, is § l11 u12 u13 y1 ¶ ¨ · ¨ l21 l22 u23 y2 · . ¨l · © 31 l32 l33 y3 ¸ The entries of this matrix are calculated as follows: Step 1. The first column of the auxiliary matrix is identical with the first column of the augmented matrix [ A : B] . Step 2. The first row to the right of the first column of the auxiliary matrix is obtained by dividing the corresponding elements in [ A : B] by the leading diagonal element a11. Step 3. The remaining entries in the second column of the auxiliary matrix are l22 and l32. These entries are equal to corresponding element in [ A : B] minus the product of the first element in that row and in that column. Thus,
\[
l_{22} = a_{22} - l_{21}u_{12}, \qquad l_{32} = a_{32} - l_{31}u_{12}.
\]
Step 4. The remaining elements of the second row of the auxiliary matrix are equal to: [corresponding element in [A : B] minus the product of the first element in that row and the first element in that column]/leading diagonal element in that row. Thus,
\[
u_{23} = \frac{a_{23} - l_{21}u_{13}}{l_{22}}, \qquad y_2 = \frac{b_2 - l_{21}y_1}{l_{22}}.
\]
Step 5. The remaining element of the third column of the auxiliary matrix equals the corresponding element in [A : B] minus the sum of the inner products of the previously calculated elements in the same row and column. Thus,
\[
l_{33} = a_{33} - (l_{31}u_{13} + l_{32}u_{23}).
\]
Step 6. The remaining element of the third row of the auxiliary matrix is equal to: [corresponding element in [A : B] minus the sum of inner products of the previously calculated elements in the same row and column]/leading diagonal element in that row. Thus,
\[
y_3 = \frac{b_3 - (l_{31}y_1 + l_{32}y_2)}{l_{33}}.
\]
Following this scheme, the lower and upper triangular matrices can be found, and then, using
\[
UX = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix},
\]
we can determine x_1, x_2, x_3.

EXAMPLE 3.13
Solve by Crout's method:
x_1 + 2x_2 + 3x_3 = 1
3x_1 + x_2 + x_3 = 0
2x_1 + x_2 + x_3 = 0.
Solution. The augmented matrix of the system is
\[
\begin{bmatrix} 1 & 2 & 3 & 1 \\ 3 & 1 & 1 & 0 \\ 2 & 1 & 1 & 0 \end{bmatrix}.
\]
Let the derived matrix be
\[
M = \begin{bmatrix} l_{11} & u_{12} & u_{13} & y_1 \\ l_{21} & l_{22} & u_{23} & y_2 \\ l_{31} & l_{32} & l_{33} & y_3 \end{bmatrix}.
\]
Then, following the scheme,
\[
l_{22} = 1 - 3(2) = -5, \qquad l_{32} = 1 - 2(2) = -3,
\]
\[
u_{23} = \frac{1 - 3(3)}{-5} = \frac85, \qquad y_2 = \frac{0 - 3(1)}{-5} = \frac35,
\]
\[
l_{33} = 1 - \left[2(3) + (-3)\tfrac85\right] = -\tfrac15, \qquad
y_3 = \frac{0 - \left[2(1) + (-3)\tfrac35\right]}{-\tfrac15} = 1,
\]
so that
\[
M = \begin{bmatrix} 1 & 2 & 3 & 1 \\ 3 & -5 & \tfrac85 & \tfrac35 \\ 2 & -3 & -\tfrac15 & 1 \end{bmatrix}.
\]
Now UX = Y gives
\[
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & \tfrac85 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} 1 \\ \tfrac35 \\ 1 \end{bmatrix}.
\]
Hence,
x_3 = 1,
x_2 + (8/5)x_3 = 3/5 and so x_2 = 3/5 − 8/5 = −1,
x_1 + 2x_2 + 3x_3 = 1 and so x_1 = 1 − 2x_2 − 3x_3 = 1 + 2 − 3 = 0.
Hence, the solution is x_1 = 0, x_2 = −1, and x_3 = 1.

EXAMPLE 3.14
Solve by Crout's method:
2x + y + 4z = 12
8x − 3y + 2z = 20
4x + 11y − z = 33.
Solution. The augmented matrix for the given system of equations is
\[
\begin{bmatrix} 2 & 1 & 4 & 12 \\ 8 & -3 & 2 & 20 \\ 4 & 11 & -1 & 33 \end{bmatrix}.
\]
With the derived matrix M as above, the scheme gives
\[
l_{22} = -3 - 8\left(\tfrac12\right) = -7, \qquad l_{32} = 11 - 4\left(\tfrac12\right) = 9,
\]
\[
u_{23} = \frac{2 - 8(2)}{-7} = 2, \qquad y_2 = \frac{20 - 8(6)}{-7} = 4,
\]
\[
l_{33} = -1 - [4(2) + 9(2)] = -27, \qquad y_3 = \frac{33 - [4(6) + 9(4)]}{-27} = 1,
\]
so that
\[
M = \begin{bmatrix} 2 & \tfrac12 & 2 & 6 \\ 8 & -7 & 2 & 4 \\ 4 & 9 & -27 & 1 \end{bmatrix}.
\]
Now UX = Y gives
\[
\begin{bmatrix} 1 & \tfrac12 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
= \begin{bmatrix} 6 \\ 4 \\ 1 \end{bmatrix}.
\]
By back substitution, we get
z = 1,
y + 2z = 4 and so y = 4 − 2z = 2,
x + (1/2)y + 2z = 6 and so x = 6 − 2z − (1/2)y = 3.
Hence, the required solution is x = 3, y = 2, z = 1.

EXAMPLE 3.15
Using Crout's method, solve the system
x + 2y − 12z + 8v = 27
5x + 4y + 7z − 2v = 4
−3x + 7y + 9z + 5v = 11
6x − 12y − 8z + 3v = 49.
Solution. The augmented matrix of the given system is
\[
\begin{bmatrix} 1 & 2 & -12 & 8 & 27 \\ 5 & 4 & 7 & -2 & 4 \\ -3 & 7 & 9 & 5 & 11 \\ 6 & -12 & -8 & 3 & 49 \end{bmatrix}.
\]
Then the auxiliary matrix is
\[
M = \begin{bmatrix}
1 & 2 & -12 & 8 & 27 \\
5 & -6 & -\tfrac{67}{6} & 7 & \tfrac{131}{6} \\
-3 & 13 & \tfrac{709}{6} & -\tfrac{372}{709} & -\tfrac{1151}{709} \\
6 & -24 & -204 & \tfrac{11319}{709} & 5
\end{bmatrix}.
\]
The solution is given by UX = Y, that is,
\[
\begin{bmatrix}
1 & 2 & -12 & 8 \\
0 & 1 & -\tfrac{67}{6} & 7 \\
0 & 0 & 1 & -\tfrac{372}{709} \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ v \end{bmatrix}
= \begin{bmatrix} 27 \\ \tfrac{131}{6} \\ -\tfrac{1151}{709} \\ 5 \end{bmatrix},
\]
or
x + 2y − 12z + 8v = 27
y − (67/6)z + 7v = 131/6
z − (372/709)v = −1151/709
v = 5.
Back substitution yields x = 3, y = −2, z = 1, v = 5.
3.2
ITERATIVE METHODS FOR LINEAR SYSTEMS
We have seen that the direct methods for the solution of simultaneous linear equations yield the solution after an amount of computation that is known in advance. In an iterative (indirect) method, by contrast, we start from an approximation to the true solution and, if the process converges, form a sequence of successively closer approximations, repeating until the required accuracy is obtained. The difference between direct and iterative methods is therefore that in a direct method the amount of computation is fixed, while in an iterative method it depends upon the accuracy required. In general, we prefer a direct method for solving a system of linear equations. But for matrices with a large number of zero elements, it is economical to use iterative methods.
Jacobi Iteration Method Consider the system
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
a_{31}x_1 + a_{32}x_2 + \cdots + a_{3n}x_n &= b_3 \\
&\ \,\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
\tag{3.3}
\]
in which the diagonal coefficients a_{ii} do not vanish. If this is not the case, the equations should be rearranged so that this condition is satisfied. Equations (3.3) can be written as
\[
\begin{aligned}
x_1 &= \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2 - \cdots - \frac{a_{1n}}{a_{11}}x_n \\
x_2 &= \frac{b_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1 - \cdots - \frac{a_{2n}}{a_{22}}x_n \\
&\ \,\vdots \\
x_n &= \frac{b_n}{a_{nn}} - \frac{a_{n1}}{a_{nn}}x_1 - \cdots - \frac{a_{n,n-1}}{a_{nn}}x_{n-1}
\end{aligned}
\tag{3.4}
\]
Suppose x_1^{(1)}, x_2^{(1)}, …, x_n^{(1)} are first approximations to the unknowns x_1, x_2, ..., x_n. Substituting them into the right side of equation (3.4), we find a system of second approximations:
\[
\begin{aligned}
x_1^{(2)} &= \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2^{(1)} - \cdots - \frac{a_{1n}}{a_{11}}x_n^{(1)} \\
x_2^{(2)} &= \frac{b_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1^{(1)} - \cdots - \frac{a_{2n}}{a_{22}}x_n^{(1)} \\
&\ \,\vdots \\
x_n^{(2)} &= \frac{b_n}{a_{nn}} - \frac{a_{n1}}{a_{nn}}x_1^{(1)} - \cdots - \frac{a_{n,n-1}}{a_{nn}}x_{n-1}^{(1)}
\end{aligned}
\]
In general, if x_1^{(k)}, x_2^{(k)}, …, x_n^{(k)} is the system of kth approximations, then the next approximation is given by the formula
\[
\begin{aligned}
x_1^{(k+1)} &= \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2^{(k)} - \cdots - \frac{a_{1n}}{a_{11}}x_n^{(k)} \\
x_2^{(k+1)} &= \frac{b_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1^{(k)} - \cdots - \frac{a_{2n}}{a_{22}}x_n^{(k)} \\
&\ \,\vdots \\
x_n^{(k+1)} &= \frac{b_n}{a_{nn}} - \frac{a_{n1}}{a_{nn}}x_1^{(k)} - \cdots - \frac{a_{n,n-1}}{a_{nn}}x_{n-1}^{(k)}
\end{aligned}
\]
This method, due to Jacobi, is called the method of simultaneous displacements or the Jacobi method.
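The update formula translates directly into code. The sketch below is illustrative (the function name and iteration count are choices for this example, not from the text); note that the new iterate is built entirely from the old one, which is exactly the "simultaneous displacements" idea:

```python
def jacobi(A, b, x0, iterations):
    """Jacobi method: every component of the new iterate is computed
    from the previous iterate only (simultaneous displacements)."""
    n = len(A)
    x = list(x0)
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# the diagonally dominant system of Example 3.16; solution (2, 1, 1)
A = [[5, -1, 1], [2, 8, -1], [-1, 1, 4]]
b = [10, 11, 3]
x = jacobi(A, b, [0, 0, 0], 25)
print([round(v, 6) for v in x])
```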
Gauss–Seidel Method
A simple modification of the Jacobi method yields faster convergence. Let x_1^{(1)}, x_2^{(1)}, …, x_n^{(1)} be the first approximations to the unknowns x_1, x_2, ..., x_n. Then the second approximations are given by
\[
\begin{aligned}
x_1^{(2)} &= \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2^{(1)} - \cdots - \frac{a_{1n}}{a_{11}}x_n^{(1)} \\
x_2^{(2)} &= \frac{b_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1^{(2)} - \frac{a_{23}}{a_{22}}x_3^{(1)} - \cdots - \frac{a_{2n}}{a_{22}}x_n^{(1)} \\
x_3^{(2)} &= \frac{b_3}{a_{33}} - \frac{a_{31}}{a_{33}}x_1^{(2)} - \frac{a_{32}}{a_{33}}x_2^{(2)} - \cdots - \frac{a_{3n}}{a_{33}}x_n^{(1)} \\
&\ \,\vdots \\
x_n^{(2)} &= \frac{b_n}{a_{nn}} - \frac{a_{n1}}{a_{nn}}x_1^{(2)} - \frac{a_{n2}}{a_{nn}}x_2^{(2)} - \cdots - \frac{a_{n,n-1}}{a_{nn}}x_{n-1}^{(2)}
\end{aligned}
\]
The entire process is repeated till the values of x_1, x_2, ..., x_n are obtained to the accuracy required. Thus, this method uses an improved component as soon as it is available, and so is called the method of successive displacements or the Gauss–Seidel method.
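In code, the only change from a Jacobi sketch is that each improved component is stored and used immediately. This illustrative version (names and iteration count are choices, not from the text) is checked against the system of Example 3.18:

```python
def gauss_seidel(A, b, x0, iterations):
    """Gauss-Seidel method: each improved component is written back into x
    at once, so later components in the same sweep already use it
    (successive displacements)."""
    n = len(A)
    x = list(x0)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# the system of Example 3.18; solution about (1.926, 3.573, 2.425)
A = [[54, 1, 1], [2, 15, 6], [-1, 6, 27]]
b = [110, 72, 85]
print([round(v, 3) for v in gauss_seidel(A, b, [0, 0, 0], 10)])
```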
Introducing the matrices
\[
A_1 = \begin{bmatrix} a_{11} & 0 & 0 & \cdots & 0 \\ a_{21} & a_{22} & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}
\quad \text{and} \quad
A_2 = \begin{bmatrix} 0 & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & 0 & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix},
\]
it can be shown that the condition for convergence of the Gauss–Seidel method is that the absolutely largest eigenvalue of $A_1^{-1}A_2$ must be absolutely less than 1. In fact, we have convergence if, for $i = 1, 2, \ldots, n$, $|a_{ii}| > S_i$, where $S_i = \sum_{k \neq i} |a_{ik}|$. Thus, for convergence, the coefficient matrix should have a clear diagonal dominance. It may be mentioned that the Gauss–Seidel method converges roughly twice as fast as the Jacobi method.

EXAMPLE 3.16
Starting with (x_0, y_0, z_0) = (0, 0, 0) and using the Jacobi method, find the next five iterations for the system
5x − y + z = 10
2x + 8y − z = 11
−x + y + 4z = 3.
Solution. The given equations can be written in the form
\[
x = \frac{y - z + 10}{5}, \qquad y = \frac{11 - 2x + z}{8}, \qquad z = \frac{3 + x - y}{4},
\]
respectively. Therefore, starting with (x_0, y_0, z_0) = (0, 0, 0), we get
x_1 = (0 − 0 + 10)/5 = 2
y_1 = (11 − 0 + 0)/8 = 1.375
z_1 = (3 + 0 − 0)/4 = 0.75.
The second iteration gives
x_2 = (1.375 − 0.75 + 10)/5 = 2.125
y_2 = (11 − 4 + 0.75)/8 = 0.96875
z_2 = (3 + 2 − 1.375)/4 = 0.90625.
The third iteration gives
x_3 = (0.96875 − 0.90625 + 10)/5 = 2.0125
y_3 = (11 − 4.250 + 0.90625)/8 = 0.95703125
z_3 = (3 + 2.125 − 0.96875)/4 = 1.0390625.
The fourth iteration yields
x_4 = (0.95703125 − 1.0390625 + 10)/5 = 1.98359375
y_4 = (11 − 4.0250 + 1.0390625)/8 = 1.00175781
z_4 = (3 + 2.0125 − 0.95703125)/4 = 1.01386719,
whereas the fifth iteration gives
x_5 = (1.00175781 − 1.01386719 + 10)/5 = 1.99757812
y_5 = (11 − 3.96718750 + 1.01386719)/8 = 1.00583496
z_5 = (3 + 1.98359375 − 1.00175781)/4 = 0.99545899.
One finds that the iterations converge to (2, 1, 1).
EXAMPLE 3.17
Using Gauss–Seidel iteration and the first iteration as (0, 0, 0), calculate the next three iterations for the solution of the system of equations given in Example 3.16.
Solution. The first iteration is (0, 0, 0). The next iteration is
x_1 = (0 − 0 + 10)/5 = 2
y_1 = (11 − 4 + 0)/8 = 0.875
z_1 = (3 + 2 − 0.875)/4 = 1.03125.
Then
x_2 = (0.875 − 1.03125 + 10)/5 = 1.96875
y_2 = (11 − 3.9375 + 1.03125)/8 = 1.01171875
z_2 = (3 + 1.96875 − 1.01171875)/4 = 0.98925781.
Further,
x_3 = (1.01171875 − 0.98925781 + 10)/5 = 2.00449219
y_3 = (11 − 4.00898438 + 0.98925781)/8 = 0.99753418
z_3 = (3 + 2.00449219 − 0.99753418)/4 = 1.00173950.
The iterations will converge to (2, 1, 1).
Remark 3.2. It follows from Examples 3.16 and 3.17 that the Gauss–Seidel method converges rapidly in comparison with the Jacobi method.

EXAMPLE 3.18
Solve
54x + y + z = 110
2x + 15y + 6z = 72
−x + 6y + 27z = 85
by the Gauss–Seidel method.
Solution. From the given equations, we have
\[
x = \frac{110 - y - z}{54}, \qquad y = \frac{72 - 2x - 6z}{15}, \qquad z = \frac{85 + x - 6y}{27}.
\]
We take the initial approximation as x_0 = y_0 = z_0 = 0. Then the first approximation is given by
x_1 = 110/54 = 2.0370
y_1 = (72 − 2x_1 − 6z_0)/15 = 4.5284
z_1 = (85 + x_1 − 6y_1)/27 = 2.2173.
The second approximation is given by
x_2 = (110 − y_1 − z_1)/54 = 1.9122
y_2 = (72 − 2x_2 − 6z_1)/15 = 3.6581
z_2 = (85 + x_2 − 6y_2)/27 = 2.4061.
The third approximation is
x_3 = (110 − y_2 − z_2)/54 = 1.9247
y_3 = (72 − 2x_3 − 6z_2)/15 = 3.5809
z_3 = (85 + x_3 − 6y_3)/27 = 2.4237.
The fourth approximation is
x_4 = (110 − y_3 − z_3)/54 = 1.9258
y_4 = (72 − 2x_4 − 6z_3)/15 = 3.5738
z_4 = (85 + x_4 − 6y_4)/27 = 2.4253.
The fifth approximation is
x_5 = (110 − y_4 − z_4)/54 = 1.9259
y_5 = (72 − 2x_5 − 6z_4)/15 = 3.5732
z_5 = (85 + x_5 − 6y_5)/27 = 2.4254.
Thus, the required solution, correct to three decimal places, is x = 1.926, y = 3.573, z = 2.425.
EXAMPLE 3.19
Solve
28x + 4y − z = 32
2x + 17y + 4z = 35
x + 3y + 10z = 24
by the Gauss–Seidel method.
Solution. From the given equations, we have
\[
x = \frac{32 - 4y + z}{28}, \qquad y = \frac{35 - 2x - 4z}{17}, \qquad z = \frac{24 - x - 3y}{10}.
\]
Taking the first approximation as x_0 = y_0 = z_0 = 0, we have
x_1 = 1.1428571, y_1 = 1.9243697, z_1 = 1.7084034
x_2 = 0.9289615, y_2 = 1.5475567, z_2 = 1.8428368
x_3 = 0.9875932, y_3 = 1.5090274, z_3 = 1.8485325
x_4 = 0.9933008, y_4 = 1.5070158, z_4 = 1.8485652
x_5 = 0.9935893, y_5 = 1.5069741, z_5 = 1.8485488
x_6 = 0.9935947, y_6 = 1.5069774, z_6 = 1.8485473.
Hence the solution, correct to four decimal places, is x = 0.9936, y = 1.5070, z = 1.8485.

EXAMPLE 3.20
Solve by the Gauss–Seidel method:
20x + y − 2z = 17
3x + 20y − z = −18
2x − 3y + 20z = 25.
Solution. The given equations can be written as
\[
x = \frac{1}{20}[17 - y + 2z], \qquad y = \frac{1}{20}[-18 - 3x + z], \qquad z = \frac{1}{20}[25 - 2x + 3y].
\]
Taking the initial approximation as (x_0, y_0, z_0) = (0, 0, 0), we have, by the Gauss–Seidel method,
x_1 = (1/20)[17 − 0 + 0] = 0.85
y_1 = (1/20)[−18 − 3(0.85) + 0] = −1.0275
z_1 = (1/20)[25 − 2(0.85) + 3(−1.0275)] = 1.0108
x_2 = (1/20)[17 + 1.0275 + 2(1.0108)] = 1.0024
y_2 = (1/20)[−18 − 3(1.0024) + 1.0108] = −0.9998
z_2 = (1/20)[25 − 2(1.0024) + 3(−0.9998)] = 0.9998
x_3 = (1/20)[17 + 0.9998 + 2(0.9998)] = 0.99997
y_3 = (1/20)[−18 − 3(0.99997) + 0.9998] = −1.00000
z_3 = (1/20)[25 − 2(0.99997) + 3(−1.00000)] = 1.00000.
The second and third iterations show that the solution of the given system of equations is x = 1, y = −1, z = 1.
Convergence of the Iteration Methods
(A) Condition of Convergence of the Iteration Methods
We know (see Section 2.14) that the conditions for convergence of the iteration process for solving the simultaneous equations f(x, y) = 0 and g(x, y) = 0 are
\[
\left|\frac{\partial f}{\partial x}\right| + \left|\frac{\partial g}{\partial x}\right| < 1
\quad \text{and} \quad
\left|\frac{\partial f}{\partial y}\right| + \left|\frac{\partial g}{\partial y}\right| < 1.
\]
This result can be extended to any finite number of equations. For example, consider the following system of three equations:
\[
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1, \qquad
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2, \qquad
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3.
\]
Then, in fixed-point form, we have
\[
x_1 = f(x_1, x_2, x_3) = \frac{1}{a_{11}}(b_1 - a_{12}x_2 - a_{13}x_3) = \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2 - \frac{a_{13}}{a_{11}}x_3, \tag{3.5}
\]
\[
x_2 = g(x_1, x_2, x_3) = \frac{1}{a_{22}}(b_2 - a_{21}x_1 - a_{23}x_3) = \frac{b_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1 - \frac{a_{23}}{a_{22}}x_3, \tag{3.6}
\]
\[
x_3 = h(x_1, x_2, x_3) = \frac{1}{a_{33}}(b_3 - a_{31}x_1 - a_{32}x_2) = \frac{b_3}{a_{33}} - \frac{a_{31}}{a_{33}}x_1 - \frac{a_{32}}{a_{33}}x_2. \tag{3.7}
\]
Then the conditions for convergence are
\[
\left|\frac{\partial f}{\partial x_1}\right| + \left|\frac{\partial g}{\partial x_1}\right| + \left|\frac{\partial h}{\partial x_1}\right| < 1, \tag{3.8}
\]
\[
\left|\frac{\partial f}{\partial x_2}\right| + \left|\frac{\partial g}{\partial x_2}\right| + \left|\frac{\partial h}{\partial x_2}\right| < 1, \tag{3.9}
\]
\[
\left|\frac{\partial f}{\partial x_3}\right| + \left|\frac{\partial g}{\partial x_3}\right| + \left|\frac{\partial h}{\partial x_3}\right| < 1. \tag{3.10}
\]
But partial differentiation of equations (3.5), (3.6), and (3.7) yields
\[
\frac{\partial f}{\partial x_1} = 0, \quad \frac{\partial f}{\partial x_2} = -\frac{a_{12}}{a_{11}}, \quad \frac{\partial f}{\partial x_3} = -\frac{a_{13}}{a_{11}},
\]
\[
\frac{\partial g}{\partial x_1} = -\frac{a_{21}}{a_{22}}, \quad \frac{\partial g}{\partial x_2} = 0, \quad \frac{\partial g}{\partial x_3} = -\frac{a_{23}}{a_{22}},
\]
\[
\frac{\partial h}{\partial x_1} = -\frac{a_{31}}{a_{33}}, \quad \frac{\partial h}{\partial x_2} = -\frac{a_{32}}{a_{33}}, \quad \frac{\partial h}{\partial x_3} = 0.
\]
Putting these values in inequalities (3.8), (3.9), and (3.10), we get
\[
\left|\frac{a_{21}}{a_{22}}\right| + \left|\frac{a_{31}}{a_{33}}\right| < 1, \tag{3.11}
\]
\[
\left|\frac{a_{12}}{a_{11}}\right| + \left|\frac{a_{32}}{a_{33}}\right| < 1, \tag{3.12}
\]
\[
\left|\frac{a_{13}}{a_{11}}\right| + \left|\frac{a_{23}}{a_{22}}\right| < 1. \tag{3.13}
\]
Adding the inequalities (3.11), (3.12), and (3.13), we get
\[
\left|\frac{a_{21}}{a_{22}}\right| + \left|\frac{a_{31}}{a_{33}}\right| + \left|\frac{a_{12}}{a_{11}}\right| + \left|\frac{a_{32}}{a_{33}}\right| + \left|\frac{a_{13}}{a_{11}}\right| + \left|\frac{a_{23}}{a_{22}}\right| < 3
\]
or
\[
\left[\left|\frac{a_{12}}{a_{11}}\right| + \left|\frac{a_{13}}{a_{11}}\right|\right] +
\left[\left|\frac{a_{21}}{a_{22}}\right| + \left|\frac{a_{23}}{a_{22}}\right|\right] +
\left[\left|\frac{a_{31}}{a_{33}}\right| + \left|\frac{a_{32}}{a_{33}}\right|\right] < 3. \tag{3.14}
\]
We note that inequality (3.14) is satisfied by the conditions
\[
\left|\frac{a_{12}}{a_{11}}\right| + \left|\frac{a_{13}}{a_{11}}\right| < 1 \quad \text{or} \quad |a_{11}| > |a_{12}| + |a_{13}|,
\]
\[
\left|\frac{a_{21}}{a_{22}}\right| + \left|\frac{a_{23}}{a_{22}}\right| < 1 \quad \text{or} \quad |a_{22}| > |a_{21}| + |a_{23}|,
\]
\[
\left|\frac{a_{31}}{a_{33}}\right| + \left|\frac{a_{32}}{a_{33}}\right| < 1 \quad \text{or} \quad |a_{33}| > |a_{31}| + |a_{32}|.
\]
Hence, the condition for convergence in the present case is
\[
|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{3} |a_{ij}|, \quad i = 1, 2, 3.
\]
For a system of n equations, the condition reduces to
\[
|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|, \quad i = 1, 2, \ldots, n. \tag{3.15}
\]
Thus, the process of iteration (Jacobi or Gauss–Seidel) will converge if, in each equation of the system, the absolute value of the diagonal coefficient is greater than the sum of the absolute values of all the remaining coefficients in that equation. A system of equations satisfying condition (3.15) is called a diagonally dominant system.

(B) Rate of Convergence of the Iteration Method
In view of equations (3.5), (3.6), and (3.7), the (k + 1)th Gauss–Seidel iteration is given by
\[
x_1^{(k+1)} = \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2^{(k)} - \frac{a_{13}}{a_{11}}x_3^{(k)}, \tag{3.16}
\]
\[
x_2^{(k+1)} = \frac{b_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1^{(k+1)} - \frac{a_{23}}{a_{22}}x_3^{(k)}, \tag{3.17}
\]
\[
x_3^{(k+1)} = \frac{b_3}{a_{33}} - \frac{a_{31}}{a_{33}}x_1^{(k+1)} - \frac{a_{32}}{a_{33}}x_2^{(k+1)}. \tag{3.18}
\]
Putting the value of x_1^{(k+1)} from equation (3.16) in (3.17), we get
\[
x_2^{(k+1)} = \frac{b_2}{a_{22}} - \frac{a_{21}}{a_{22}}\left[\frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2^{(k)} - \frac{a_{13}}{a_{11}}x_3^{(k)}\right] - \frac{a_{23}}{a_{22}}x_3^{(k)}
= \frac{b_2}{a_{22}} - \frac{a_{21}b_1}{a_{11}a_{22}} + \frac{a_{21}a_{12}}{a_{11}a_{22}}x_2^{(k)} + \frac{a_{21}a_{13}}{a_{11}a_{22}}x_3^{(k)} - \frac{a_{23}}{a_{22}}x_3^{(k)}.
\]
Then, treating the x_3 terms as unchanged between the two successive steps,
\[
x_2^{(k+2)} = \frac{b_2}{a_{22}} - \frac{a_{21}b_1}{a_{11}a_{22}} + \frac{a_{21}a_{12}}{a_{11}a_{22}}x_2^{(k+1)} + \frac{a_{21}a_{13}}{a_{11}a_{22}}x_3^{(k)} - \frac{a_{23}}{a_{22}}x_3^{(k)}.
\]
Hence,
\[
x_2^{(k+2)} - x_2^{(k+1)} = \frac{a_{21}a_{12}}{a_{11}a_{22}}\left(x_2^{(k+1)} - x_2^{(k)}\right). \tag{3.19}
\]
In terms of errors, equation (3.19) yields
\[
e_2^{(k+1)} = \frac{a_{21}a_{12}}{a_{11}a_{22}}\,e_2^{(k)}.
\]
Therefore, the error will decrease if
\[
\left|\frac{a_{12}a_{21}}{a_{11}a_{22}}\right| < 1.
\]
3.3
THE METHOD OF RELAXATION
In this method, a solution of all the unknowns is obtained simultaneously. The solution obtained is an approximation to a certain number of decimals. Let the system of n equations be
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= c_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= c_2 \\
&\ \,\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= c_n.
\end{aligned}
\]
Then the quantities
\[
\begin{aligned}
R_1 &= a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n - c_1 \\
R_2 &= a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n - c_2 \\
&\ \,\vdots \\
R_n &= a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n - c_n
\end{aligned}
\]
are called the residuals of the n equations. The solution of the n equations is a set of numbers x_1, x_2, ..., x_n that makes all the R_i equal to zero. We shall obtain an approximate solution by using an iterative method which makes the R_i smaller and smaller at each step, so that we get closer and closer to the exact solution. At each stage, the numerically largest residual is reduced to almost zero. We illustrate the method with the help of the following examples.

EXAMPLE 3.21
Solve the equations 10x − 2y − 2z = 6, −x + 10y − 2z = 7, −x − y + 10z = 8 by the relaxation method.
Solution. The residuals for the given system are
R_1 = 10x − 2y − 2z − 6
R_2 = −x + 10y − 2z − 7
R_3 = −x − y + 10z − 8.
The operations table for the system is

Δx  Δy  Δz |  ΔR1  ΔR2  ΔR3
 1   0   0 |   10   −1   −1
 0   1   0 |   −2   10   −1
 0   0   1 |   −2   −2   10

The table shows that an increment of 1 unit in x produces an increment of 10 units in R_1, −1 unit in R_2, and −1 unit in R_3. Similarly, the second and third rows show the effect of an increment of 1 unit in y and z, respectively. We start with the trivial solution x = y = z = 0. The relaxation table is

x_i  y_i  z_i |  R_1   R_2   R_3
 0    0    0  |  −6    −7    −8
 0    0    1  |  −8    −9     2
 0    1    0  | −10     1     1
 1    0    0  |   0     0     0
------------------------------
 1    1    1  |   0     0     0
All the residuals are zero. Thus, we have reached the solution, which is
x = Σx_i = 1, y = Σy_i = 1, z = Σz_i = 1.

EXAMPLE 3.22
Solve the equations
10x − 2y + z = 12
x + 9y − z = 10
2x − y + 11z = 20
by the relaxation method.
Solution. The residuals are given by
R_1 = 10x − 2y + z − 12
R_2 = x + 9y − z − 10
R_3 = 2x − y + 11z − 20.
The operations table is

Δx  Δy  Δz |  ΔR1  ΔR2  ΔR3
 1   0   0 |   10    1    2
 0   1   0 |   −2    9   −1
 0   0   1 |    1   −1   11

The relaxation table is

 x_i    y_i    z_i  |   R_1    R_2    R_3
  0      0      0   |  −12    −10    −20
  0      0      2   |  −10    −12      2
  0      1      0   |  −12     −3      1
  1      0      0   |   −2     −2      3
  0      0    −0.3  |  −2.3   −1.7   −0.3
 0.2     0      0   |  −0.3   −1.5    0.1
  ⋮      ⋮      ⋮   |    ⋮      ⋮      ⋮
------------------------------------------
1.184  1.170  1.709 |  0.009  0.005 −0.003
We observe that R_1, R_2, and R_3 are nearly zero now. Therefore,
x = Σx_i = 1.184, y = Σy_i = 1.170, z = Σz_i = 1.709.

EXAMPLE 3.23
Solve the equations
10x − 2y − 3z = 205
−2x + 10y − 2z = 154
−2x − y + 10z = 120
by the relaxation method.
Solution. The residuals for the given system are given by
R_1 = 10x − 2y − 3z − 205
R_2 = −2x + 10y − 2z − 154
R_3 = −2x − y + 10z − 120.
The operations table for the system is

Δx  Δy  Δz |  ΔR1  ΔR2  ΔR3
 1   0   0 |   10   −2   −2
 0   1   0 |   −2   10   −1
 0   0   1 |   −3   −2   10

We start with the trivial solution x = 0, y = 0, z = 0. The relaxation table is

x_i  y_i  z_i |  R_1   R_2   R_3
 0    0    0  | −205  −154  −120
20    0    0  |   −5  −194  −160
 0   19    0  |  −43    −4  −179
 0    0   18  |  −97   −40     1
10    0    0  |    3   −60   −19
 0    6    0  |   −9     0   −25
 0    0    2  |  −15    −4    −5
 2    0    0  |    5    −8    −9
 0    0    1  |    2   −10     1
 0    1    0  |    0     0     0
--------------------------------
32   26   21  |    0     0     0

We observe that we have reached the stage where all the residuals are zero. Thus, the solution has been reached. Adding the vertical columns of increments in x, y, and z, we get
x = Σx_i = 32, y = Σy_i = 26, z = Σz_i = 21.
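The hand bookkeeping above can be imitated in code: repeatedly pick the numerically largest residual and cancel it with a whole number of steps of the current size, then shrink the step. This is only an illustrative sketch (the step schedule, names, and sweep limit are choices, not from the text), checked against Example 3.21:

```python
def relax(A, c, steps=(1.0, 0.1, 0.01, 0.001), max_sweeps=1000):
    """Relaxation method: at each stage adjust the unknown belonging to
    the numerically largest residual R_i = (A x)_i - c_i, using whole
    multiples of the current step size."""
    n = len(A)
    x = [0.0] * n
    for step in steps:
        for _ in range(max_sweeps):
            R = [sum(A[i][j] * x[j] for j in range(n)) - c[i] for i in range(n)]
            i = max(range(n), key=lambda k: abs(R[k]))
            m = round(-R[i] / A[i][i] / step)   # whole number of steps
            if m == 0:
                break                            # residual below step size
            x[i] += m * step
    return x

# the system of Example 3.21; the increments reproduce x = y = z = 1
A = [[10, -2, -2], [-1, 10, -2], [-1, -1, 10]]
c = [6, 7, 8]
print(relax(A, c))
```

As in the worked examples, the process relies on diagonal dominance; without it, cancelling one residual can inflate the others.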
3.4
ILL-CONDITIONED SYSTEM OF EQUATIONS
A system of equations in which small changes in the coefficients result in large deviations in the solution is said to be an ill-conditioned system. Such systems of equations are very sensitive to round-off errors. For example, consider the system
3x_1 + x_2 = 9
3.015x_1 + x_2 = 3.
The solution of this system is
\[
x_1 = \frac{9 - 3}{3 - 3.015} = -400 \quad \text{and} \quad x_2 = 9 - 3x_1 = 1209.
\]
Now we round off the coefficient of x_1 in the second equation to 3.02. Then the solution of the system becomes
\[
x_1 = \frac{9 - 3}{3 - 3.02} = -300 \quad \text{and} \quad x_2 = 9 - 3x_1 = 909.
\]
Putting these values of x_1 and x_2 in the given system of equations, we have the residuals
\[
r_1 = 3(-300) + 909 - 9 = 0 \quad \text{and} \quad r_2 = 3.015(-300) + 909 - 3 = 1.5.
\]
Thus, the first equation is satisfied exactly, whereas we get a residual for the second equation. This happened due to rounding off the coefficient of x_1 in the second equation. Hence, the system in question is ill-conditioned.
Let A = [a_{ij}] be the n × n coefficient matrix of a given system. If C = AA^{-1}, computed in finite-precision arithmetic, is close to the identity matrix, then the system is well-conditioned; otherwise it is ill-conditioned. If we define the norm of the matrix A as
\[
\|A\| = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|,
\]
then the number \|A\|\,\|A^{-1}\| is called the condition number, which is a measure of the ill-conditionedness of the system. The larger the condition number, the more ill-conditioned the system.
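The row-sum norm and condition number just defined can be computed directly. The Gauss–Jordan inverse below is an illustrative helper for small matrices (with partial pivoting), not a routine from the text; the first test matrix is the ill-conditioned pair from the example above:

```python
def inverse(A):
    """Gauss-Jordan inverse of a small square matrix, with partial pivoting."""
    n = len(A)
    M = [list(row) + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]                # normalize pivot row
        for r in range(n):
            if r != c:
                M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]

def norm(A):
    """Row-sum (maximum absolute row sum) matrix norm."""
    return max(sum(abs(v) for v in row) for row in A)

def condition_number(A):
    return norm(A) * norm(inverse(A))

print(condition_number([[3, 1], [3.015, 1]]))   # large: ill-conditioned
print(condition_number([[5, -1], [2, 8]]))      # small: well-conditioned
```

For the ill-conditioned pair the condition number is on the order of a thousand, while the diagonally dominant matrix gives a small single-digit value.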
EXERCISES
1. Solve the system
2x + y + z = 10
3x + 2y + 3z = 18
x + 4y + 9z = 16
by the Gauss elimination method.
Ans. x = 7, y = −9, z = 5
2. Solve the following system of equations by the Gauss elimination method:
x1 + 2x2 + x3 = 3
3x1 − x2 − 2x3 = 1
2x1 − 2x2 − 3x3 = 2
Ans. x1 = −1, x2 = 4, x3 = −4
3. Solve the following system of equations by Gauss elimination method: 2x 2y z 12 3x 2y 2z 8 5x 10y 8z 10. Ans. x 12.75, y 14.375, z 8.75 4. Solve the following system of equations by Gauss–Jordan method: 5x 2y z 4 7x y 5z 8 3x 7y 4z 10. Ans. x 11.1927, y 0.8685, z 0.1407 5. Solve by Gauss–Jordan method: 2x1 x2 5x3 x4 5 x1 x2 3x3 4x4 1 3x1 6x2 2x3 x4 8 2x1 2x2 2x3 3x4 2. 6. Solve by Gauss–Jordan method: xyz9 2x 3y 4z 13 3x 4y 5z 40.
Ans. x1 = 2, x2 = 1/5, x3 = 0, x4 = 4/5
Ans. x 1, y 3, z 5
7. Solve by Gauss–Jordan method: 2x 3y z 1 x 4y 5z 25 3x 4y z 2.
Ans. x 8.7, y 5.7, z 1.3
8. Solve Exercise 4 by factorization method. 9. Solve the following system of equations by factorization method: 2x 3y z 9 x 2y 3z 6 3x y 2z 8. Ans. x 1.9444, y 1.6111, z 0.2777 10. Solve the following system of equations by Crout’s method: 3x 2y 7z 4 2x 3y z 5 3x 4y z 7. 11. Use Crout’s method to solve 2x 6y 8z 24 5x 4y 3z 2 3x y 2z 16.
Ans. x = 7/8, y = 9/8, z = 1/8
Ans. x 1, y 3, z 5
Numerical Methods
12. Solve by Crout’s method: 10x y z 12 2x 10y z 13 2x 2y 10z 14. 13. Use Jacobi’s iteration method to solve 5x 2y z 12 x 4y 2z 15 x 2y 5z 20.
Ans. x 1, y 1, z 1
Ans. x 1.08, y 1.95, z 3.16
14. Solve by Jacobi’s iteration method 10x 2y z 9 2x 20y 2z 44 2x 3y 10z 22.
Ans. x 1, y 2, z 3
15. Solve by Jacobi’s method 5x y z 10 2x 4y 12 x y 5z 1. 16. Use Gauss–Seidel method to solve 54x y z 110 2x 15y 6z 72 x 6y 27z 85
Ans. x 2.556, y 1.722, z 1.055
Ans. x 1.926, y 3.573, z 2.425
17. Find the solution, to three decimal places, of the system 83x 11y 4z 95 7x 52y 13z 104 3x 8y 29z 71 using Gauss–Seidel method. Ans. x 1.052, y 1.369, z 1.962 18. Solve Exercise 14 by Gauss–Seidel method. 19. Solve the following equations by Relaxation method: 3x 9y 2z 11 4x 2y 13z 24 4x 4y 3z 8.
Ans. x 1.35, y 2.10, z 2.84
20. Show that the following systems of equations are ill-conditioned:
(i) 2x1 + x2 = 25, 2.001x1 + x2 = 25.01;
(ii) y = 2x + 7, y = 2.01x + 3.
4
Eigenvalues and Eigenvectors
The theory of eigenvalues and eigenvectors is a powerful tool for solving problems in economics, engineering, and physics. These problems revolve around the values of a parameter λ for which A − λI becomes singular, where A is a linear transformation. The aim of this chapter is to study numerical methods for finding the eigenvalues of a given matrix A.
4.1
EIGENVALUES AND EIGENVECTORS
Definition 4.1. Let A be a square matrix of dimension n × n. The scalar λ is said to be an eigenvalue of A if there exists a non-zero vector X of dimension n such that AX = λX. The non-zero vector X is called the eigenvector corresponding to the eigenvalue λ. Thus, if λ is an eigenvalue of a matrix A, then
\[
AX = \lambda X, \quad X \neq 0 \tag{4.1}
\]
or
\[
[A - \lambda I]X = 0. \tag{4.2}
\]
Equation (4.2) has a non-trivial solution if and only if A − λI is singular, that is, if |A − λI| = 0. Thus, if
\[
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},
\]
then
\[
|A - \lambda I| = \begin{vmatrix} a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda \end{vmatrix} = 0,
\]
which is called the characteristic equation of the matrix A. Expanding this determinant, we get a polynomial of degree n, which has exactly n roots, not necessarily distinct. Substituting each value of λ in equation (4.1), we get the corresponding eigenvector. For each distinct eigenvalue λ, there exists at least one eigenvector X. Further, if λ is of multiplicity m, then there exist at most m linearly independent eigenvectors X_1, X_2, …, X_m corresponding to λ.
If the order of a given matrix A is large, then the number of terms involved in the expansion of the determinant |A − λI| is large, and so the chances of mistakes increase in the determination of the characteristic
polynomial of A. In such a case, the following procedure, known as the Faddeev–Leverrier method, is employed. Let the characteristic polynomial of an n × n matrix A be
\[
\lambda^n - c_1\lambda^{n-1} - c_2\lambda^{n-2} - \cdots - c_{n-1}\lambda - c_n.
\]
The Faddeev–Leverrier method yields the coefficients c_1, c_2, …, c_n by the formula
\[
c_i = \frac{1}{i}\,\operatorname{trace} A_i, \quad i = 1, 2, \ldots, n,
\]
where
\[
A_i = \begin{cases} A & \text{if } i = 1, \\ AB_{i-1} & \text{if } i = 2, 3, 4, \ldots, n, \end{cases}
\]
and B_i = A_i − c_iI, I being the n × n identity matrix. Thus, this method generates a sequence {A_i} of matrices which is used to determine the coefficients c_i. The correctness of the calculations can be checked with the result B_n = A_n − c_nI = 0 (the zero matrix).
As an illustration, consider the matrix
\[
A = \begin{bmatrix} 3 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 3 \end{bmatrix}.
\]
Let the characteristic polynomial of A be λ³ − c₁λ² − c₂λ − c₃. Then, following the Faddeev–Leverrier method, we have
\[
c_1 = \operatorname{trace} A_1 = \operatorname{trace} A = 3 + 3 + 3 = 9,
\]
\[
B_1 = A_1 - 9I = A - 9I = \begin{bmatrix} -6 & 1 & 0 \\ 0 & -6 & 1 \\ 0 & 0 & -6 \end{bmatrix},
\]
\[
A_2 = AB_1 = \begin{bmatrix} 3 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 3 \end{bmatrix}
\begin{bmatrix} -6 & 1 & 0 \\ 0 & -6 & 1 \\ 0 & 0 & -6 \end{bmatrix}
= \begin{bmatrix} -18 & -3 & 1 \\ 0 & -18 & -3 \\ 0 & 0 & -18 \end{bmatrix},
\]
\[
c_2 = \tfrac12\,\operatorname{trace} A_2 = \tfrac12(-54) = -27,
\]
\[
B_2 = A_2 + 27I = \begin{bmatrix} 9 & -3 & 1 \\ 0 & 9 & -3 \\ 0 & 0 & 9 \end{bmatrix},
\]
\[
A_3 = AB_2 = \begin{bmatrix} 27 & 0 & 0 \\ 0 & 27 & 0 \\ 0 & 0 & 27 \end{bmatrix}, \qquad
c_3 = \tfrac13\,\operatorname{trace} A_3 = \tfrac13(27 + 27 + 27) = 27.
\]
Hence, the characteristic polynomial of A is
\[
\lambda^3 - 9\lambda^2 + 27\lambda - 27.
\]
As a check, we note that B_3 = A_3 − 27I = 0. Hence the characteristic polynomial obtained above is correct.
As another example, consider the matrix
\[
A = \begin{bmatrix} 3 & 2 & 1 \\ 4 & 5 & -1 \\ 2 & 3 & 4 \end{bmatrix}.
\]
Let the required characteristic polynomial be λ³ − c₁λ² − c₂λ − c₃. Then, by the Faddeev–Leverrier method, we get
\[
c_1 = \operatorname{trace} A = 12,
\]
\[
B_1 = A - 12I = \begin{bmatrix} -9 & 2 & 1 \\ 4 & -7 & -1 \\ 2 & 3 & -8 \end{bmatrix},
\]
\[
A_2 = AB_1 = \begin{bmatrix} -17 & -5 & -7 \\ -18 & -30 & 7 \\ 2 & -5 & -33 \end{bmatrix},
\]
\[
c_2 = \tfrac12\,\operatorname{trace} A_2 = \tfrac12[-17 - 30 - 33] = -40,
\]
\[
B_2 = A_2 + 40I = \begin{bmatrix} 23 & -5 & -7 \\ -18 & 10 & 7 \\ 2 & -5 & 7 \end{bmatrix},
\]
\[
A_3 = AB_2 = \begin{bmatrix} 35 & 0 & 0 \\ 0 & 35 & 0 \\ 0 & 0 & 35 \end{bmatrix}, \qquad
c_3 = \tfrac13\,\operatorname{trace} A_3 = \tfrac13(35 + 35 + 35) = 35.
\]
Hence, the characteristic polynomial is
\[
\lambda^3 - 12\lambda^2 + 40\lambda - 35.
\]
Also B_3 = A_3 − 35I = 0, so the characteristic polynomial obtained above is correct.
We know that two n × n matrices A and B are said to be similar if there exists a non-singular matrix T such that B = T⁻¹AT. Also, an n × n matrix A is diagonalizable if it is similar to a diagonal matrix.
88
Numerical Methods
Definition 4.2. An eigenvalue of a matrix A that is larger in absolute value than any other eigenvalue of A is called the dominant eigenvalue. An eigenvector corresponding to a dominant eigenvalue is called a dominant eigenvector. The eigenvalues/eigenvectors other than the dominant eigenvalues/eigenvectors are called subdominant eigenvalues/subdominant eigenvectors. The spectral radius R( A ) of a matrix A is defined as the modulus of its dominant eigenvalue. Thus, R( A ) max L i , i 1, 2,K , n,
[ ]
where L i are the eigenvalues of A [ aij ]nr n .
4.2
THE POWER METHOD
This method is used to find the dominant eigenvalue of a given matrix A of dimension n × n. So we assume that the eigenvalues of A are L1 , L 2 ,K , L n , where | L1 | | L 2 | q K q | L n | Let v be a linear combination of eigenvectors v1 , v2 ,K , vn corresponding to L1 , L2 ,K , Ln , respectively. Thus, v c1v1 c2 v2 K cn vn . Since Avi L i vi , we have Av c1 Av1 c2 Av2 K cn Avn c1L1v1 c2 L2 v2 K cn Ln vn § ¤L ³ ¤L ³ ¶ L1 ¨ c1v1 c2 ¥ 2 ´ v2 K cn ¥ n ´ vn · , ¦ L1 µ ¦ L1 µ ·¸ ¨© 2 2 § ¶ ¤ L2 ³ ¤ Ln ³ ¨ A v L c1v1 c2 ¥ ´ v2 K cn ¥ ´ vn · , ¨ · ¦ L1 µ ¦ L1 µ © ¸ z z z z z z p p § ¶ ¤ L2 ³ ¤ Ln ³ p p ¨ A v L1 c1v1 c2 ¥ ´ v2 K cn ¥ ´ vn · . ¨ · ¦ L1 µ ¦ L1 µ © ¸ For large values of p, the vector 2
2 1
p
p
¤L ³ ¤L ³ c1v1 c2 ¥ 2 ´ v2 K cn ¥ n ´ vn ¦ L1 µ ¦ L1 µ will converge to c1v1 , which is the eigenvector corresponding to L1. The eigenvalue is obtained as L1 lim
p lc
( A p1v )r ( A p v )r
, r 1, 2,K , n,
where the index r signifies the rth component in the corresponding vector. The rate of convergence is deterL L mined by the quotient 2 . The convergence will be faster if 2 is very small. L1 L1 Given a vector Yk , we form two other vectors Yk 1 and Z k 1 as Z Z k 1 AYk , Yk 1 k 1 , where A k 1 max | ( Z k 1 )r | . r A k 1 The initial vector Y0 should be chosen in a convenient way. Generally, a vector with all components equal to 1 is tried. The smallest eigenvalue, if it is non-zero, can be found by using the power method on the inverse A 1 of the given matrix A.
Eigenvalues and Eigenvectors
89
Let A be a 3 r 3 matrix. Then, as discussed above, the largest and the smallest eigenvalues can be obtained by power method. The third eigenvalue is then given by trace of A – ( sum of the largest and smallest eigenvalues). The subdominant eigenvalues, using power method, can be determined easily using deflation method. The aim of deflation method is to remove first the dominant eigenvalue (obtained by power method). We explain this method for a symmetric matrix. Let A be a symmetric matrix of order n with L1 , L 2 ,K , L n as the eigenvalues. Then there exist the corresponding normalized vectors v1 , v2 ,..., vn , such that ª0 if i w j vi v Tj D ij « ¬1 if i j. Let L1 be the dominant eigenvalue and v1 be the corresponding eigenvector determined by the power method. Consider the matrix A L1v1v1T . Then, we note that ( A L1v1v1T )v1 Av1 L1v1 ( v1T v1 ) Av1 L1v1 L1v1 L1v1 0, ( A L1v1v1T )v2 Av2 L1v1 ( v1T v2 ) Av2 L2 v2 , ( A L1v1v1T )v3 Av3 L1v1 ( v1T v3 ) Av3 L3v3 , z z z z z z ( A L1v1v1T )vn Avn L1v1 ( v1T vn ) Avn Ln vn . It follows therefore that A L1v1v1T has the same eigenvalues as the matrix A except that the eigenvalue corresponding to L1 is now zero. Hence, the subdominant eigenvalue can be obtained by using power method on the matrix A L1v1v1T . EXAMPLE 4.1 Find the largest (dominant) eigenvalue of the matrix § 1 3 1¶ ¨ · A ¨ 3 2 4 ·. ¨ 1 4 10 · © ¸ Solution. Let us choose the initial vector as §0 ¶ ¨ · X1 ¨ 0 · . ¨1· © ¸ Then § 1¶ § 0.1¶ ¨ · ¨ · AX1 ¨ 4 · 10 ¨ 0.4 · 10 X 2 ¨10 · ¨ 1 · © ¸ © ¸ § 0.1 ¶ §0.009 ¶ ¨ · ¨ · AX 2 ¨ 4.5 · 11.7 ¨ 0.385 · 11.7 X 3 ¨11.7 · ¨ · © ¸ © 1 ¸
90
Numerical Methods
§0.014 ¶ § 0.164 ¶ ¨ · ¨ · AX 3 ¨ 4.797 · 11.531 ¨0.416 · 11.531X 4 ¨ 1 · ¨11.531· © ¸ © ¸ §0.022 ¶ § 0.262 ¶ ¨ · ¨ · AX 4 ¨ 4.874 · 11.650 ¨ 0.418 · 11.650 X5 ¨ 1 · ¨11.650 · © ¸ © ¸ § 0.025 ¶ § 0.276 ¶ ¨ · ¨ · AX5 ¨ 4.902 · 11.650 ¨0.422 · 11.650 X 6 . ¨ 1 · ¨11.650 · © ¸ © ¸ Thus, up to two decimal places, we get L 11.65 and the corresponding vector as § 0.025 ¶ ¨ · ¨0.422 · . ¨ 1 · © ¸ EXAMPLE 4.2 Find the largest eigenvalue of the matrix § 1 3 2 ¶ ¨ · A ¨ 4 4 1· . ¨6 3 5 · © ¸ Solution. Let us choose §1¶ ¨ · X1 ¨1· . ¨1· © ¸ Then §0¶ §0 ¶ ¨ · ¨ · AX1 ¨ 7 · 14 ¨0.5· 14 X 2 ¨14 · ¨1· © ¸ © ¸ §0.5¶ §0.0769 ¶ ¨ · ¨ · AX 2 ¨ 1 · 6.5 ¨ 0.1538 · 6.5X 3 ¨6.5· ¨ 1 · © ¸ © ¸ § 1.6155 ¶ § 0.2728 ¶ ¨ · ¨ · AX 3 ¨ 0.0772 · 5.9228 ¨ 0.0130 · 5.9228X 4 ¨ 5.9228 · ¨ 1 · © ¸ © ¸ § 2.3169 ¶ § 0.3504 ¶ ¨ · ¨ · AX 4 ¨ 0.1169 · 6.5974 ¨0.0177 · 6.5974 X5 . ¨ 6.5974 · ¨ 1 · © ¸ © ¸
Eigenvalues and Eigenvectors
91
Continuing in this fashion we shall obtain, after round off, §9¶ ¨ · AX y 7 ¨ 2 · . ¨ 30 · © ¸ §9¶ ¨ · Thus, the largest eigenvalue is 7 and the corresponding eigenvector is ¨ 2 · . ¨30 · © ¸
EXAMPLE 4.3 Determine the largest eigenvalue and the corresponding method: § 2 1 ¨ A ¨ 1 2 ¨ 0 1 ©
eigenvector of the following matrix using power 0¶ ·
1· . 2 ·¸
Solution. We have § 2 1 0 ¶ ¨ · A ¨ 1 2 1· . ¨ 0 1 2 · © ¸ We start with §1¶ ¨ · X1 ¨1· . ¨1· © ¸ Then, by power method, §1 ¶ §1 ¶ ¨ · ¨ · AX1 ¨0 · 1 ¨0 · 1X 2 ¨1 · ¨1 · © ¸ © ¸ §2 ¶ §1 ¶ ¨ · ¨ · AX 3 ¨ 2 · 2 ¨ 1· 2 X 3 ¨2 · ¨1 · © ¸ © ¸ §3 ¶ ¨4 · §3 ¶ ¨ · ¨ · AX 3 ¨ 4 · 4 ¨ 1· 4 X 4 ¨3 · ¨3 · © ¸ ¨ · ¨© 4 ·¸
92
Numerical Methods
§5 ¶ §5 ¶ ¨2 · ¨7 · ¨ · ¨ 14 · 14 ¨ · AX 4 ¨ ¨ 1· 3.5X5 4 · 4 ¨ · ¨ · 5 ¨ · ¨5 · ¨© 7 ·¸ ¨© 2 ·¸ § 17 ¶ § 17 ¶ ¨7 · ¨ 24 · ¨ · ¨ 24 · 24 ¨ · 24 AX5 ¨ X 3.46 X 6 ¨ 1 · 7 · 7 ¨ · 7 6 ¨ · 17 ¨ · ¨ 17 · ¨© 24 ·¸ ¨© 7 ·¸ § 29 ¶ § 29 ¶ ¨ 12 · ¨ 41 · ¨ · ¨ 41 · 41 ¨ · AX 6 ¨ ¨ 1 · 3.417 X 7 . 12 · 12 ¨ · ¨ · 29 ¨ · ¨ 29 · ¨© 41 ·¸ ¨© 12 ·¸ Thus, the largest eigenvalue is approximately 3.417 and the corresponding eigenvector is § 29 ¶ ¨ 41 · §0.7 ¶ ¨ · ¨ · ¨ 1 · ¨ 1 · . ¨ 29 · ¨0.7 · ¨ · © ¸ ¨© 41 ·¸ EXAMPLE 4.4 Use power method to find the dominant eigenvalue of the matrix § 3 1 0 ¶ ¨ · A ¨ 1 2 1· . ¨ 0 1 3 · © ¸ Using deflation method, find also the subdominant eigenvalues of A. Solution. We start with §1¶ ¨ · X1 ¨1· . ¨1· © ¸ Then §2¶ §1¶ ¨ · ¨ · AX1 ¨ 0 · 2 ¨ 0 · 2 X 2 ¨2· ¨1· © ¸ © ¸
AX₂ = (3, −2, 3)ᵀ = 3(1, −2/3, 1)ᵀ = 3X₃,
AX₃ = (11/3, −10/3, 11/3)ᵀ = (11/3)(1, −10/11, 1)ᵀ = (11/3)X₄,
AX₄ = (43/11, −42/11, 43/11)ᵀ = (43/11)(1, −42/43, 1)ᵀ = (43/11)X₅,
AX₅ = (171/43, −170/43, 171/43)ᵀ = (171/43)(1, −170/171, 1)ᵀ ≈ 3.9767(1, −0.994, 1)ᵀ.

Thus, the iterations are converging to the eigenvalue λ₁ = 4 with corresponding eigenvector (1, −1, 1)ᵀ. The normalized vector is v₁ = (1/√3)(1, −1, 1)ᵀ. We then have

    A₁ = A − λ₁v₁v₁ᵀ = [ 3 −1 0; −1 2 −1; 0 −1 3 ] − (4/3)[ 1 −1 1; −1 1 −1; 1 −1 1 ],
that is,

         [ 5/3   1/3  −4/3 ]
    A₁ = [ 1/3   2/3   1/3 ] .
         [−4/3   1/3   5/3 ]

Starting with X₁ = (1, 1, 1)ᵀ, we have

A₁X₁ = (2/3, 4/3, 2/3)ᵀ = (4/3)(1/2, 1, 1/2)ᵀ = (4/3)X₂,
A₁X₂ = (1/2, 1, 1/2)ᵀ = 1·(1/2, 1, 1/2)ᵀ = 1·X₂.

Hence, the subdominant eigenvalue is λ₂ = 1. Further, trace A = 3 + 2 + 3 = 8. Therefore λ₁ + λ₂ + λ₃ = 8, so λ₃ = 8 − λ₁ − λ₂ = 8 − 4 − 1 = 3. Hence, the eigenvalues of A are 4, 3, 1.
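The iteration and deflation used in Examples 4.1–4.4 can be sketched in Python. This is an illustrative sketch, not the book's own code; the function names `power_method` and the tolerance settings are chosen here for the example.

```python
import numpy as np

def power_method(A, x, tol=1e-6, max_iter=100):
    """Power method: repeatedly form AX and normalize by the component of
    largest magnitude; the scale factors converge to the dominant eigenvalue."""
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        new_lam = y[np.argmax(np.abs(y))]  # signed component of largest magnitude
        x = y / new_lam                    # normalized so the largest component is 1
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return new_lam, x

# Matrix of Example 4.4
A = np.array([[3.0, -1.0,  0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0,  3.0]])
lam1, v1 = power_method(A, np.array([1.0, 1.0, 1.0]))
v1n = v1 / np.linalg.norm(v1)              # normalized dominant eigenvector
A1 = A - lam1 * np.outer(v1n, v1n)         # deflation: remove lambda_1 from A
lam2, _ = power_method(A1, np.array([1.0, 1.0, 1.0]))
lam3 = np.trace(A) - lam1 - lam2           # trace supplies the last eigenvalue
print(sorted(round(x, 3) for x in (lam1, lam2, lam3)))  # the eigenvalues 1, 3, 4 in some order
```

Note that, as in the book's Example 4.4, which of the two remaining eigenvalues the deflated iteration finds depends on the starting vector; the trace identity recovers the third either way.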
4.3
JACOBI’S METHOD
This method is used to find the eigenvalues of a real symmetric matrix A. We know that the eigenvalues of a real symmetric matrix are real and that there exists a real orthogonal matrix O such that O⁻¹AO is a diagonal matrix. In this method, we produce the desired orthogonal matrix as a product of very special orthogonal matrices. Among the off-diagonal elements we choose the numerically largest element a_ik, that is, |a_ik| = max. The elements a_ii, a_ik, a_ki (= a_ik), and a_kk form a 2 × 2 submatrix

    [ a_ii  a_ik ]
    [ a_ki  a_kk ] ,

which can easily be transformed to diagonal form. We choose the transformation matrix, also called the rotation matrix, as the matrix O₁ whose (i, i) element is cos θ, (i, k) element is −sin θ, (k, i) element is sin θ, and (k, k) element is cos θ, while the remaining elements are identical with those of the unit matrix. Then the elements d_ii, d_ik, d_ki, and d_kk of the matrix D₁ = O₁⁻¹AO₁ are given by

d_ii = a_ii cos²θ + 2a_ik sin θ cos θ + a_kk sin²θ,
d_ik = d_ki = −(a_ii − a_kk) sin θ cos θ + a_ik(cos²θ − sin²θ),
d_kk = a_ii sin²θ − 2a_ik sin θ cos θ + a_kk cos²θ.

We choose θ such that d_ik = d_ki = 0, which yields (a_ii − a_kk) sin θ cos θ = a_ik(cos²θ − sin²θ), that is,

    tan 2θ = 2a_ik / (a_ii − a_kk).

We put R² = (a_ii − a_kk)² + 4a_ik².
Then, we obtain

    d_ii = ½(a_ii + a_kk + R),  d_kk = ½(a_ii + a_kk − R).
We note that d_ii + d_kk = a_ii + a_kk and d_ii d_kk = a_ii a_kk − a_ik². We perform a series of such two-dimensional rotations, each time choosing i and k such that |a_ik| = max. Then, taking O = O₁O₂ ⋯ O_r, the matrix D = O⁻¹AO comes closer and closer to a diagonal matrix as r increases, and the columns of O converge to the eigenvectors.

EXAMPLE 4.5
Use Jacobi's method to find the eigenvalues of the matrix

        [ 10  7   8   7 ]
    A = [  7  5   6   5 ]
        [  8  6  10   9 ]
        [  7  5   9  10 ] .

Solution. The maximum off-diagonal element is 9. So we have a_ik = a₃₄ = 9, a_ii = a₃₃ = 10, and a_kk = a₄₄ = 10. These elements form the 2 × 2 submatrix [10 9; 9 10]. Thus,

    tan 2θ = 2a_ik/(a_ii − a_kk) = 18/0 = ∞,

and so 2θ = 90°, that is, θ = 45°. Hence, the rotation matrix is

         [ 1  0    0       0    ]
    O₁ = [ 0  1    0       0    ]
         [ 0  0  cos θ  −sin θ  ]
         [ 0  0  sin θ   cos θ  ] .
With θ = 45°, the entries cos θ and sin θ in O₁ all equal 1/√2, and we compute O₁⁻¹AO₁.
The multiplication by the right-hand matrix affects only the third and fourth columns, whereas the multiplication by the left-hand matrix affects only the third and fourth rows. Also,

    R = √((a_ii − a_kk)² + 4a_ik²) = √((10 − 10)² + 4(9)²) = √324 = 18.

Therefore,

d_ii = ½(a_ii + a_kk + R) = ½(10 + 10 + 18) = 19,
d_kk = ½(a_ii + a_kk − R) = ½(10 + 10 − 18) = 1,
d_ik = d_ki = 0.

Further,

(1,3) element of O₁⁻¹AO₁ = (8 + 7)/√2 = 15/√2,
(1,4) element of O₁⁻¹AO₁ = (7 − 8)/√2 = −1/√2,
(2,3) element of O₁⁻¹AO₁ = (6 + 5)/√2 = 11/√2,
(2,4) element of O₁⁻¹AO₁ = (5 − 6)/√2 = −1/√2,
whereas the (1,1), (1,2), (2,1), and (2,2) elements of A remain unchanged. Hence, the first rotation yields

         [  10      7     15/√2  −1/√2 ]
    A₁ = [   7      5     11/√2  −1/√2 ]
         [ 15/√2  11/√2    19      0   ]
         [ −1/√2  −1/√2     0      1   ] .

Now the maximum off-diagonal element is 15/√2. So we take i = 1, k = 3, and have a_ii = 10, a_kk = 19, a_ik = 15/√2. Then

    tan 2θ = 2a_ik/(a_ii − a_kk) = 15√2/(10 − 19),

which yields θ = 56.4949°. We perform the second rotation in the above fashion. After 14 rotations, the diagonal elements of the resulting matrix are

    λ₁ = 0.010150, λ₂ = 0.843110, λ₃ = 3.858054, λ₄ = 30.288686.

The sum of the eigenvalues is 35 and their product is 0.999996, in good agreement with the exact characteristic equation λ⁴ − 35λ³ + 146λ² − 100λ + 1 = 0.
EXAMPLE 4.6
Use Jacobi's method to find the eigenvalues and the corresponding eigenvectors of the matrix

        [ 1   √2   2  ]
    A = [ √2   3   √2 ]
        [ 2   √2   1  ] .

Solution. The numerically largest off-diagonal element in the symmetric matrix A is a₁₃ = 2. We have a₁₁ = 1, a₃₃ = 1. Therefore,

    tan 2θ = 2a₁₃/(a₁₁ − a₃₃) = 4/0 = ∞,

which yields θ = 45°. Therefore, the transformation matrix (a rotation in the (1,3) plane) is taken as

         [ cos θ  0  −sin θ ]   [ 1/√2  0  −1/√2 ]
    O₁ = [   0    1    0    ] = [   0   1    0   ] .
         [ sin θ  0   cos θ ]   [ 1/√2  0   1/√2 ]

Thus, the first rotation is D = O₁⁻¹AO₁. We note that

R = √((a₁₁ − a₃₃)² + 4a₁₃²) = 4,
d₁₁ = ½(a₁₁ + a₃₃ + R) = 3,
d₃₃ = ½(a₁₁ + a₃₃ − R) = −1,
d₁₃ = d₃₁ = 0,
d₁₂ = d₂₁ = √2(1/√2) + √2(1/√2) = 2,
d₃₂ = d₂₃ = −√2(1/√2) + √2(1/√2) = 0,
d₂₂ = 3 (unchanged by the multiplication).
Thus,

        [ 3  2   0 ]
    D = [ 2  3   0 ] .
        [ 0  0  −1 ]

Now the maximum off-diagonal element is d₁₂ = 2, with d₁₁ = 3, d₂₂ = 3. Therefore, for the second rotation (in the (1,2) plane)

    tan 2θ = 2d₁₂/(d₁₁ − d₂₂) = 4/0 = ∞.

Hence θ = 45°, and the rotation matrix is

         [ cos θ  −sin θ  0 ]   [ 1/√2  −1/√2  0 ]
    O₂ = [ sin θ   cos θ  0 ] = [ 1/√2   1/√2  0 ] ,  O₂⁻¹ = O₂ᵀ.
         [   0       0    1 ]   [   0      0   1 ]

Thus, the second rotation is M = O₂⁻¹DO₂. For this rotation,

R = √((d₁₁ − d₂₂)² + 4d₁₂²) = 4,
m₁₁ = ½(d₁₁ + d₂₂ + R) = 5,
m₂₂ = ½(d₁₁ + d₂₂ − R) = 1,
m₁₂ = m₂₁ = 0,
m₃₃ = −1 (unchanged),
m₁₃ = m₃₁ = 0,
m₂₃ = m₃₂ = 0.

Hence,

        [ 5  0   0 ]
    M = [ 0  1   0 ]   (a diagonal matrix).
        [ 0  0  −1 ]

Therefore, the eigenvalues of the matrix A are 5, 1, and −1. The corresponding eigenvectors are the columns of the matrix

             [ 1/√2  0  −1/√2 ] [ 1/√2  −1/√2  0 ]   [ 1/2   −1/2   −1/√2 ]
    O = O₁O₂ = [ 0   1    0   ] [ 1/√2   1/√2  0 ] = [ 1/√2   1/√2    0   ] .
             [ 1/√2  0   1/√2 ] [   0      0   1 ]   [ 1/2   −1/2    1/√2 ]

Hence, the eigenvectors are

    (1/2, 1/√2, 1/2)ᵀ,  (−1/2, 1/√2, −1/2)ᵀ,  and  (−1/√2, 0, 1/√2)ᵀ.
EXAMPLE 4.7
Use Jacobi's method to find the eigenvalues of the symmetric matrix

        [ 2  3  1 ]
    A = [ 3  2  2 ] .
        [ 1  2  1 ]

Solution. The largest off-diagonal element in the given symmetric matrix A is a₁₂ = 3. We also have a₁₁ = 2, a₂₂ = 2. Therefore,

    tan 2θ = 2a₁₂/(a₁₁ − a₂₂) = 6/0 = ∞,

and so θ = 45°. Thus, the rotation matrix is

         [ cos θ  −sin θ  0 ]   [ 1/√2  −1/√2  0 ]
    O₁ = [ sin θ   cos θ  0 ] = [ 1/√2   1/√2  0 ] .
         [   0       0    1 ]   [   0      0   1 ]
The first rotation yields

                  [  5      0    3/√2 ]
    B = O₁⁻¹AO₁ = [  0     −1    1/√2 ] .
                  [ 3/√2   1/√2    1  ]

Now the largest off-diagonal element is b₁₃ = 3/√2, with b₁₁ = 5, b₃₃ = 1. Therefore,

    tan 2θ = 2b₁₃/(b₁₁ − b₃₃) = 3√2/4 = 1.0607,

which gives θ = 23.343°. Then sin θ = 0.3962 and cos θ = 0.9181. Hence, the rotation matrix (in the (1,3) plane) is

         [ 0.9181  0  −0.3962 ]
    O₂ = [   0     1     0    ] .
         [ 0.3962  0   0.9181 ]

The second rotation yields

                  [ 5.9147  0.2802    0    ]
    C = O₂⁻¹BO₂ = [ 0.2802   −1     0.6493 ] .
                  [   0     0.6493  0.0848 ]

(Note that in B we had b₁₂ = b₂₁ = 0, but these elements have been changed to 0.2802 by the second rotation. This is a disadvantage of Jacobi's method.) The next non-zero off-diagonal element is c₂₃ = 0.6493, and we also have c₂₂ = −1, c₃₃ = 0.0848. Thus,

    tan 2θ = 2c₂₃/(c₂₂ − c₃₃) = 1.2986/(−1.0848) = −1.1971,
which gives θ = 154.937°. Then sin θ = 0.4236, cos θ = −0.9058, and so the rotation matrix is

         [ 1     0        0     ]
    O₃ = [ 0  −0.9058  −0.4236  ] .
         [ 0   0.4236  −0.9058  ]

Therefore, the third rotation yields

                  [ 5.9147   0.2538   0.1187 ]
    D = O₃⁻¹CO₃ = [ 0.2538  −1.3035     0    ] .
                  [ 0.1187     0      0.3884 ]

After some further rotations the required eigenvalues (the diagonal elements of the nearly diagonal matrix) are approximately 5.9269, −1.3126, and 0.3856.
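The rotation loop of Jacobi's method can be sketched in Python. This is a minimal illustration, not the book's code; the function name `jacobi_eigenvalues` and the stopping rule are chosen here.

```python
import numpy as np

def jacobi_eigenvalues(A, tol=1e-10, max_rotations=500):
    """Jacobi's method: repeatedly annihilate the numerically largest
    off-diagonal element with a plane rotation; the diagonal of the
    transformed matrix converges to the eigenvalues."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))
        i, k = divmod(np.argmax(off), n)   # largest off-diagonal entry a_ik
        if off[i, k] < tol:
            break
        # theta from tan(2*theta) = 2*a_ik / (a_ii - a_kk)
        theta = 0.5 * np.arctan2(2.0 * A[i, k], A[i, i] - A[k, k])
        c, s = np.cos(theta), np.sin(theta)
        O = np.eye(n)
        O[i, i], O[i, k] = c, -s           # rotation in the (i, k) plane
        O[k, i], O[k, k] = s, c
        A = O.T @ A @ O
    return np.sort(np.diag(A))

# Matrix of Example 4.7
A = [[2, 3, 1], [3, 2, 2], [1, 2, 1]]
print(np.round(jacobi_eigenvalues(A), 4))  # approx. -1.3126, 0.3856, 5.9269
```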
4.4
GIVEN’S METHOD
As we observed in Example 4.6, in Jacobi's method the elements annihilated by a plane rotation may not remain zero during subsequent rotations. This difficulty was removed by Givens in his method, known as Givens' method. In this method, we first reduce the given symmetric matrix to a tri-diagonal symmetric matrix, and then the eigenvalues of that matrix are determined by using a Sturm sequence or by forming the characteristic equation, which can be solved by the theory of equations. We consider first the real symmetric matrix of order 3 given by

        [ a₁₁  a₁₂  a₁₃ ]
    A = [ a₁₂  a₂₂  a₂₃ ] .
        [ a₁₃  a₂₃  a₃₃ ]

In this matrix, there is only one non-tri-diagonal element, a₁₃, which is to be reduced to zero. Thus, only one rotation is required. The transformation is made with the help of the orthogonal rotation matrix O₁ in the (2,3) plane of the type

         [ 1    0       0    ]
    O₁ = [ 0  cos θ  −sin θ  ] .        (4.3)
         [ 0  sin θ   cos θ  ]

Then the (1,3) element in the matrix B = O₁⁻¹AO₁ is −a₁₂ sin θ + a₁₃ cos θ. Thus, the (1,3) element in B will be zero if a₁₂ sin θ = a₁₃ cos θ, that is, if tan θ = a₁₃/a₁₂. Finding the values of sin θ and cos θ from here gives us the transforming matrix O₁. Thus, we can find the tri-diagonal form B of the given symmetric matrix A of order 3.
Now consider the real symmetric matrix of order 4 given by

        [ a₁₁  a₁₂  a₁₃  a₁₄ ]
    A = [ a₁₂  a₂₂  a₂₃  a₂₄ ]
        [ a₁₃  a₂₃  a₃₃  a₃₄ ]
        [ a₁₄  a₂₄  a₃₄  a₄₄ ] .

In this matrix, there are three non-tri-diagonal elements: a₁₃, a₁₄, and a₂₄. Thus, three rotations are required to reduce the given matrix to tri-diagonal form. As discussed above, to annihilate a₁₃ we take the orthogonal rotation matrix O₁ in the (2,3) plane as given in expression (4.3) and obtain a matrix B with zeros in the (1,3) and (3,1) positions. To reduce the (1,4) element of B to zero, we use a rotation in the (2,4) plane. The orthogonal rotation matrix O₂ in the (2,4) plane is

         [ 1    0     0     0    ]
    O₂ = [ 0  cos θ   0  −sin θ  ]
         [ 0    0     1     0    ]
         [ 0  sin θ   0   cos θ  ] .

Then the (1,4) element in the matrix C = O₂⁻¹BO₂ is −b₁₂ sin θ + b₁₄ cos θ. Thus, the (1,4) element in C will be zero if tan θ = b₁₄/b₁₂. Finding the values of sin θ and cos θ, we get the transforming matrix O₂, and C is obtained with zeros in the (1,3), (3,1), (1,4), and (4,1) positions; the zero created earlier at (1,3) is not disturbed, because this rotation does not involve the third coordinate. To annihilate the element in the (2,4) position, we perform a rotation in the (3,4) plane, taking the orthogonal rotation matrix O₃ as

         [ 1  0    0       0    ]
    O₃ = [ 0  1    0       0    ]
         [ 0  0  cos θ  −sin θ  ]
         [ 0  0  sin θ   cos θ  ] .

Then the (2,4) element in the matrix D = O₃⁻¹CO₃ is −c₂₃ sin θ + c₂₄ cos θ. Thus, the (2,4) element in D will be zero if tan θ = c₂₄/c₂₃. Putting the values of sin θ and cos θ in D, we get the required tri-diagonal form of the matrix A.
In case the matrix A is of order n, the number of plane rotations required to reduce it to tri-diagonal form is ½(n − 1)(n − 2).

EXAMPLE 4.8
Use Givens' method to find the eigenvalues of the symmetric matrix

        [ 2  3  1 ]
    A = [ 3  2  2 ] .
        [ 1  2  1 ]

Solution. In the matrix A there is only one non-tri-diagonal element, a₁₃ = 1, which is to be reduced to zero. Thus, only one rotation is required. To annihilate a₁₃, we take the orthogonal rotation matrix in the (2,3) plane

        [ 1    0       0    ]
    O = [ 0  cos θ  −sin θ  ] ,
        [ 0  sin θ   cos θ  ]

where tan θ = a₁₃/a₁₂.
Here tan θ = 1/3, so that sin θ = 1/√10 and cos θ = 3/√10. Then

                  [ 2     √10     0   ]
    B = O⁻¹AO =   [ √10   31/10  13/10] ,
                  [ 0     13/10  −1/10]

which is the required tri-diagonal form. The characteristic equation of this tri-diagonal matrix is

    | 2−λ      √10        0     |
    | √10    31/10−λ    13/10   | = 0,
    |  0      13/10    −1/10−λ  |
which gives

    (2 − λ)[(31/10 − λ)(−1/10 − λ) − 169/100] − √10[√10(−1/10 − λ)] = 0

or

    λ³ − 5λ² − 6λ + 3 = 0.

The approximate roots of this characteristic equation are 0.3856, −1.3126, and 5.9269, in agreement with Example 4.7.

EXAMPLE 4.9
Use Givens' method to reduce the symmetric matrix

        [ 3  2  1 ]
    A = [ 2  3  2 ]
        [ 1  2  3 ]

to tri-diagonal form and find its eigenvalues.
Solution. Let

        [ 1    0       0    ]
    O = [ 0  cos θ  −sin θ  ]
        [ 0  sin θ   cos θ  ]
L 3 5L 2 6L 3 0. The approximate roots of this characteristic equation are 0.3856, −1.3126, and 5.9269. EXAMPLE 4.9 Use the Given’s method to reduce the symmetric matrix § 3 2 1¶ ¨ · A ¨2 3 2· ¨ 1 2 3· © ¸ to tri-diagonal form and find its eigenvalues. Solution. Let §1 0 ¨ O ¨0 cos Q ¨0 sin Q ©
0 ¶ ·
sin Q · cos Q ·¸
a 2 1 1 . and cos Q be the orthogonal matrix in the (2,3) plane, where tan Q 13 . Thus, sin Q a12 2 5 5 Therefore, §1 0 ¶ · §3 2 1¶ ¨ 1 · ¨ · ¨0 5 · ¨2 3 2· ¨ ¨ · 2 · ¨© 1 2 3 ·¸ ¨ 0 ¨ 5 ·¸ ©
§1 0 ¨ 2 ¨0 B O 1AO ¨ 5 ¨ ¨0 1 ¨ 5 © § 3 ¨ ¨ ¨ 5 ¨ ¨0 ¨©
5 23 5 6 5
0¶ · 6· . 5· · 7· 5 ·¸
The characteristic equation for this tri-diagonal matrix is 3 L 5 0
5 23
L 5 6 5
0 6 5 7
L 5
0
0 2 5 1 5
0 ¶ · 1 ·
5· · 2 · 5 ¸·
or

    λ³ − 9λ² + 18λ − 8 = 0.

Clearly λ = 2 satisfies this equation. The reduced equation is λ² − 7λ + 4 = 0, which yields

    λ = (7 ± √(49 − 16))/2 = (7 ± √33)/2.

Hence, the eigenvalues of the given matrix are 2, (7 + √33)/2, and (7 − √33)/2.
EXAMPLE 4.10
Use Givens' method to reduce the symmetric matrix

        [ 8  −6   2 ]
    C = [−6   7  −4 ]
        [ 2  −4   3 ]

to tri-diagonal form and find its eigenvalues.
Solution. Let

        [ 1    0       0    ]
    O = [ 0  cos θ  −sin θ  ]
        [ 0  sin θ   cos θ  ]

be the orthogonal rotation matrix in the (2,3) plane, where tan θ = a₁₃/a₁₂ = 2/(−6) = −1/3. Thus, sin θ = −1/√10 and cos θ = 3/√10. Therefore,

                  [   8    −2√10   0 ]
    B = O⁻¹CO =   [ −2√10    9    −2 ] .
                  [   0     −2     1 ]
The characteristic equation of the above tri-diagonal matrix is

    |  8−λ   −2√10    0   |
    | −2√10   9−λ    −2   | = 0
    |   0     −2     1−λ  |

or
    λ(λ² − 18λ + 45) = 0.

Hence, λ = 0, 3, and 15 are the required eigenvalues.

EXAMPLE 4.11
Use Givens' method to reduce the symmetric matrix

        [ 1  2  2 ]
    A = [ 2  1  2 ]
        [ 2  2  1 ]

to tri-diagonal form and find its eigenvalues.
Solution. Let

        [ 1    0       0    ]
    O = [ 0  cos θ  −sin θ  ]
        [ 0  sin θ   cos θ  ]

be the orthogonal rotation matrix in the (2,3) plane, where tan θ = a₁₃/a₁₂ = 1. Thus, sin θ = cos θ = 1/√2. Therefore,

                  [ 1    2√2   0 ]
    B = O⁻¹AO =   [ 2√2   3    0 ] .
                  [ 0     0   −1 ]

The characteristic equation of this tri-diagonal matrix is

    | 1−λ   2√2     0   |
    | 2√2   3−λ     0   | = 0
    |  0     0   −1−λ   |

or

    λ³ − 3λ² − 9λ − 5 = 0.

Hence, the characteristic roots are −1, −1, and 5.
2· · 1 · 2 ¸·
EXAMPLE 4.12
Use Givens' method to reduce the symmetric matrix

        [ 1  2  2  2 ]
    A = [ 2  1  2  2 ]
        [ 2  2  1  3 ]
        [ 2  2  3  1 ]

to tri-diagonal form.
Solution. In the given symmetric matrix there are three non-tri-diagonal elements: a₁₃, a₁₄, and a₂₄. Thus, three rotations are required to reduce the matrix to tri-diagonal form. To annihilate a₁₃, we use the orthogonal rotation matrix in the (2,3) plane

         [ 1    0       0     0 ]
    O₁ = [ 0  cos θ  −sin θ   0 ]
         [ 0  sin θ   cos θ   0 ]
         [ 0    0       0     1 ] ,

where tan θ = a₁₃/a₁₂ = 1. Thus sin θ = cos θ = 1/√2, and so the first rotation yields

                    [  1    2√2    0     2   ]
    B = O₁⁻¹AO₁ =   [ 2√2    3     0    5/√2 ]
                    [  0     0    −1    1/√2 ] .
                    [  2    5/√2  1/√2   1   ]
To reduce the element (1,4) in B to zero, we use the rotation O2 in (2,4) plane given by
         [ 1    0     0     0    ]
    O₂ = [ 0  cos θ   0  −sin θ  ]
         [ 0    0     1     0    ]
         [ 0  sin θ   0   cos θ  ] ,

where tan θ = b₁₄/b₁₂ = 2/(2√2) = 1/√2. Thus cos θ = √2/√3 and sin θ = 1/√3. Hence, the second rotation yields

                    [  1      2√3      0        0    ]
    C = O₂⁻¹BO₂ =   [ 2√3    17/3    1/√6   1/(3√2)  ]
                    [  0     1/√6     −1      1/√3   ] .
                    [  0    1/(3√2)  1/√3    −5/3    ]
To annihilate the (2,4) element in C, we use the rotation matrix O₃ in the (3,4) plane:

         [ 1  0    0       0    ]
    O₃ = [ 0  1    0       0    ]
         [ 0  0  cos θ  −sin θ  ]
         [ 0  0  sin θ   cos θ  ] ,

where tan θ = c₂₄/c₂₃ = 1/√3. Thus sin θ = 1/2 and cos θ = √3/2. Hence, the third rotation yields

                    [  1    2√3    0     0 ]
    D = O₃⁻¹CO₃ =   [ 2√3  17/3   √2/3   0 ]
                    [  0   √2/3  −2/3    0 ] ,
                    [  0    0      0    −2 ]
which is the required tri-diagonal form.
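The sequence of plane rotations described above can be sketched in Python. This is a hypothetical helper (the name `givens_tridiagonalize` is chosen here), following the rule that a rotation in the (j, k) plane with tan θ = a_ik/a_ij annihilates the element (i, k).

```python
import numpy as np

def givens_tridiagonalize(A):
    """Reduce a symmetric matrix to tri-diagonal form by Givens rotations.
    Each rotation in the (j, k) plane, with tan(theta) = A[i, k] / A[i, j]
    and j = i + 1, annihilates the non-tri-diagonal element A[i, k]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for i in range(n - 2):               # row whose far elements we zero
        for k in range(i + 2, n):        # element (i, k) to annihilate
            j = i + 1
            if A[i, k] == 0.0:
                continue
            theta = np.arctan2(A[i, k], A[i, j])
            c, s = np.cos(theta), np.sin(theta)
            O = np.eye(n)
            O[j, j], O[j, k] = c, -s     # rotation in the (j, k) plane
            O[k, j], O[k, k] = s, c
            A = O.T @ A @ O
    return A

# Example 4.8: the result should be [[2, sqrt(10), 0], [sqrt(10), 3.1, 1.3], [0, 1.3, -0.1]]
A = [[2, 3, 1], [3, 2, 2], [1, 2, 1]]
print(np.round(givens_tridiagonalize(A), 4))
```

Later rotations never disturb zeros already created, because each works only in coordinate planes that leave the earlier rows and columns fixed.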
4.5
HOUSEHOLDER’S METHOD
This method is used for finding the eigenvalues of real symmetric matrices. The first step of the method consists of reducing the given matrix A to a band (tri-diagonal) matrix. This is carried out by orthogonal transformations. The orthogonal matrices, denoted by P_r, are of the form

    P = I − 2WWᵀ,        (4.4)

where W is a column vector such that WᵀW = 1. We note that

    Pᵀ = (I − 2WWᵀ)ᵀ = I − 2(WWᵀ)ᵀ = I − 2WWᵀ = P
and so P is symmetric. Further,

    PᵀP = (I − 2WWᵀ)(I − 2WWᵀ) = I − 4WWᵀ + 4W(WᵀW)Wᵀ = I − 4WWᵀ + 4WWᵀ = I,

and so P is orthogonal. Thus, P is a symmetric orthogonal matrix. The vector W_r is constructed with its first r − 1 components zero:

    W_r = (0, 0, …, 0, x_r, x_{r+1}, …, x_n)ᵀ.

With this choice of W_r, we form P_r = I − 2W_rW_rᵀ. Then equation (4.4) requires x_r² + x_{r+1}² + ⋯ + x_n² = 1. Now, put A = A₁ and form successively

    A_r = P_r A_{r−1} P_r,  r = 2, 3, …, n − 1.

At the first transformation, we get zeros in the positions (1,3), (1,4), …, (1,n) and in the corresponding places in the first column. At the second transformation, we get zeros in the positions (2,4), (2,5), …, (2,n) and in the corresponding places in the second column. The final result is a band (tri-diagonal) matrix

        [ α₁  β₁   0    0   ⋯     0     ]
        [ β₁  α₂  β₂    0   ⋯     0     ]
    B = [  0  β₂  α₃   β₃   ⋯     0     ]
        [  ⋯   ⋯   ⋯    ⋯   ⋯   β_{n−1} ]
        [  0   0   0    ⋯ β_{n−1}  α_n  ] .

Then the characteristic equation |B − λI| = 0 gives the required eigenvalues.
To describe the application of this method, we consider the real symmetric matrix A of order 3,

        [ a₁₁  a₁₂  a₁₃ ]
    A = [ a₁₂  a₂₂  a₂₃ ] .
        [ a₁₃  a₂₃  a₃₃ ]

We wish to find a real symmetric orthogonal matrix P₁ such that

             [ a₁₁   a′₁₂    0  ]
    P₁AP₁ =  [ a′₁₂  a′₂₂  a′₂₃ ] .        (4.5)
             [  0    a′₂₃  a′₃₃ ]

We take W = (0, w₂, w₃)ᵀ with WᵀW = w₂² + w₃² = 1. Then

                    [ 1      0         0      ]
    P₁ = I − 2WWᵀ = [ 0   1 − 2w₂²  −2w₂w₃    ] .        (4.6)
                    [ 0   −2w₂w₃    1 − 2w₃²  ]

The first row of P₁AP₁ is then

    ( a₁₁,  a₁₂(1 − 2w₂²) − 2a₁₃w₂w₃,  −2a₁₂w₂w₃ + a₁₃(1 − 2w₃²) ).        (4.7)

Comparing equations (4.5) and (4.7), we get a′₁₁ = a₁₁ and

    a′₁₂ = a₁₂(1 − 2w₂²) − 2a₁₃w₂w₃ = a₁₂ − 2w₂(a₁₂w₂ + a₁₃w₃) = a₁₂ − 2w₂q,        (4.8)
    0 = −2a₁₂w₂w₃ + a₁₃(1 − 2w₃²) = a₁₃ − 2w₃(a₁₂w₂ + a₁₃w₃) = a₁₃ − 2w₃q,        (4.9)

where q = a₁₂w₂ + a₁₃w₃.
Squaring and adding equations (4.8) and (4.9), we get

    (a′₁₂)² = a₁₂² + a₁₃² − 4q(a₁₂w₂ + a₁₃w₃) + 4q²(w₂² + w₃²) = a₁₂² + a₁₃² − 4q² + 4q² = a₁₂² + a₁₃²,

and so a′₁₂ = ±√(a₁₂² + a₁₃²). Thus,

    a₁₂ − 2qw₂ = ±√(a₁₂² + a₁₃²) = ±S, say,        (4.10)
    a₁₃ − 2w₃q = 0.        (4.11)

Multiplying equation (4.10) by w₂ and (4.11) by w₃ and adding, we get

    a₁₂w₂ + a₁₃w₃ − 2q(w₂² + w₃²) = ±Sw₂,  that is,  q − 2q = ±Sw₂,  so  q = ∓Sw₂.

Therefore, equation (4.10) yields a₁₂ ± 2Sw₂² = ±S, and so

    w₂² = ½(1 ∓ a₁₂/S).        (4.12)

Now equation (4.11) gives

    w₃ = a₁₃/(2q) = ∓ a₁₃/(2w₂√(a₁₂² + a₁₃²)).        (4.13)

The error in equation (4.13) will be minimum if w₂ is large. Therefore, the sign in equation (4.12) should be chosen the same as that of a₁₂, so that w₂² = ½(1 + |a₁₂|/S). Putting these values of w₂ and w₃ in W, we get P₁ = I − 2WWᵀ, which reduces A to the tri-diagonal form P₁AP₁.
Working Rule for Householder's Method
To find the vector W = (0, w₂, w₃)ᵀ, compute

    S = √(a₁₂² + a₁₃²),  w₂² = ½(1 ∓ a₁₂/S),  w₃ = ∓ a₁₃/(2w₂√(a₁₂² + a₁₃²)),

where the sign in w₂² is chosen the same as that of a₁₂. Then find P₁ = I − 2WWᵀ and compute P₁AP₁, and so on.

EXAMPLE 4.13
Use Householder's method to reduce the symmetric matrix

        [ 3  2  1 ]
    A = [ 2  3  2 ]
        [ 1  2  3 ]

to tri-diagonal form.
Solution. We have S = √(a₁₂² + a₁₃²) = √5 and

    w₂² = ½(1 + a₁₂/S) = ½(1 + 2/√5) = 0.9472,

or w₂ = 0.9732. Moreover,

    w₃ = a₁₃/(2w₂S) = 1/(2(0.9732)√5) = 0.2298.

Hence, W = (0, 0.9732, 0.2298)ᵀ and so

                    [ 1      0        0     ]
    P₁ = I − 2WWᵀ = [ 0  −0.8944  −0.4472   ] .
                    [ 0  −0.4472   0.8944   ]
Therefore, the first transformation yields

                 [    3     −2.2356     0    ]
    A₁ = P₁AP₁ = [ −2.2356   4.5983  −1.1999 ] ,
                 [    0     −1.1999   1.4002 ]

which is the required tri-diagonal form.

EXAMPLE 4.14
Reduce the symmetric matrix

        [ 1  3  4 ]
    A = [ 3  1  2 ]
        [ 4  2  1 ]

to tri-diagonal form by Householder's method.
Solution. We have

    S = √(a₁₂² + a₁₃²) = √(9 + 16) = 5,  w₂² = ½(1 + a₁₂/S) = ½(1 + 3/5) = 4/5,

and so w₂ = 2/√5, w₃ = a₁₃/(2w₂S) = 4/(2(2/√5)·5) = 1/√5. Therefore, W = (0, 2/√5, 1/√5)ᵀ
and so

                    [ 1    0     0  ]
    P₁ = I − 2WWᵀ = [ 0  −3/5  −4/5 ] .
                    [ 0  −4/5   3/5 ]

Now, the first transformation yields

                 [  1    −5       0   ]
    A₁ = P₁AP₁ = [ −5   73/25   14/25 ] ,
                 [  0   14/25  −23/25 ]
which is the required tri-diagonal form.
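The working rule for a 3 × 3 matrix can be sketched in Python. This is an illustrative sketch under the sign convention derived above (the name `householder_step` is chosen here, not taken from the book).

```python
import numpy as np

def householder_step(A):
    """One Householder similarity transformation P1*A*P1 for a symmetric
    3x3 matrix, with W = (0, w2, w3) built from the working rule."""
    a12, a13 = A[0, 1], A[0, 2]
    S = np.hypot(a12, a13)
    # choose the sign so that w2 is as large as possible (same sign as a12)
    w2 = np.sqrt(0.5 * (1 + abs(a12) / S))
    w3 = np.sign(a12) * a13 / (2 * w2 * S)
    W = np.array([0.0, w2, w3])
    P1 = np.eye(3) - 2.0 * np.outer(W, W)
    return P1 @ A @ P1

# Matrix of Example 4.14
A = np.array([[1.0, 3.0, 4.0],
              [3.0, 1.0, 2.0],
              [4.0, 2.0, 1.0]])
print(np.round(householder_step(A), 4))  # tri-diagonal: (0,1) entry -5, (0,2) entry 0
```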
4.6
EIGENVALUES OF A SYMMETRIC TRI-DIAGONAL MATRIX
We have seen that Given’s method and Householder’s method reduce a given matrix to a tri-diagonal matrix § a11 ¨ A1 ¨ a12 ¨0 ©
a12 a22
a11 L
a12
0
a12
a22 L
a23
0
a23
a33 L
a23
0¶ · a23 · . a33 ·¸
Then the characteristic roots are given by
A1 LI
0,
that is, f3 ( L ) 0, where f3 ( L ) ( a33 L )
a11 L a12
a12 a22 L
a23
( a33 L ) f 2 ( L ) a ( a11 L ) 2 23
( a33 L ) f 2 ( L ) a223 f1 ( L ),
a11 L
0
a12
a23
where

f₀(λ) = 1,
f₁(λ) = a₁₁ − λ = (a₁₁ − λ)f₀(λ),
f₂(λ) = | a₁₁−λ  a₁₂ ; a₁₂  a₂₂−λ | = (a₁₁ − λ)(a₂₂ − λ) − a₁₂² = (a₂₂ − λ)f₁(λ) − a₁₂² f₀(λ).

Thus, we have the recursion formula

f₀(λ) = 1,
f₁(λ) = (a₁₁ − λ)f₀(λ),
f₂(λ) = (a₂₂ − λ)f₁(λ) − a₁₂² f₀(λ),
f₃(λ) = (a₃₃ − λ)f₂(λ) − a₂₃² f₁(λ),
that is,

    f_k(λ) = (a_kk − λ)f_{k−1}(λ) − a²_{k−1,k} f_{k−2}(λ)  for 2 ≤ k ≤ n.

The sequence of functions f₀(λ), f₁(λ), f₂(λ), …, f_k(λ) is known as a Sturm sequence.

EXAMPLE 4.15
Using the Sturm sequence, find the eigenvalues of the matrix

        [ 1  2  2 ]
    A = [ 2  1  2 ] .
        [ 2  2  1 ]

Solution. We have seen in Example 4.11 that Givens' method reduces the given matrix to the tri-diagonal form

    [  1   2√2   0 ]
    [ 2√2   3    0 ] .
    [  0    0   −1 ]

Then the Sturm sequence is

f₀(λ) = 1,
f₁(λ) = (a₁₁ − λ)f₀(λ) = 1 − λ,
f₂(λ) = (a₂₂ − λ)f₁(λ) − a₁₂² f₀(λ) = (3 − λ)(1 − λ) − 8 = λ² − 4λ − 5,
f₃(λ) = (a₃₃ − λ)f₂(λ) − a₂₃² f₁(λ) = (−1 − λ)(λ² − 4λ − 5) − 0 = −(λ − 5)(λ + 1)(λ + 1).

Hence, f₃(λ) = 0 yields the eigenvalues −1, −1, 5.
EXAMPLE 4.16
Using the Sturm sequence, find the eigenvalues of the matrix

    [  8  −6   2 ]
    [ −6   7  −4 ] .
    [  2  −4   3 ]

Solution. The tri-diagonal form of the given matrix is (see Example 4.10)

    [   8    −2√10   0 ]
    [ −2√10    9    −2 ] .
    [   0     −2     1 ]

Then the Sturm sequence is

f₀(λ) = 1,
f₁(λ) = (a₁₁ − λ)f₀(λ) = 8 − λ,
f₂(λ) = (a₂₂ − λ)f₁(λ) − a₁₂² f₀(λ) = (9 − λ)(8 − λ) − 40 = λ² − 17λ + 32,
f₃(λ) = (a₃₃ − λ)f₂(λ) − a₂₃² f₁(λ) = (1 − λ)(λ² − 17λ + 32) − 4(8 − λ) = −λ³ + 18λ² − 45λ = −λ(λ − 3)(λ − 15).

Hence, λ = 0, 3, 15 are the eigenvalues of the given matrix.
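The recursion for the Sturm sequence is a few lines of code. The sketch below (function names chosen here, not the book's) also shows the property that makes Sturm sequences useful: the number of sign changes in f₀(λ), …, fₙ(λ) counts the eigenvalues smaller than λ, which allows eigenvalues to be located by bisection.

```python
def sturm_sequence(diag, off, lam):
    """Evaluate f_0, f_1, ..., f_n of a symmetric tri-diagonal matrix
    (diagonal `diag`, off-diagonal `off`) at the point lam, using
    f_k = (a_kk - lam) f_{k-1} - a_{k-1,k}^2 f_{k-2}."""
    f = [1.0, diag[0] - lam]
    for k in range(1, len(diag)):
        f.append((diag[k] - lam) * f[k] - off[k - 1] ** 2 * f[k - 1])
    return f

def count_eigenvalues_below(diag, off, lam):
    """Number of sign changes in the Sturm sequence equals the number of
    eigenvalues smaller than lam (zeros treated as positive, a common
    convention for this sketch)."""
    f = sturm_sequence(diag, off, lam)
    return sum(1 for a, b in zip(f, f[1:]) if (a >= 0) != (b >= 0))

# Tri-diagonal form of Example 4.16: diagonal (8, 9, 1), off-diagonal (-2*sqrt(10), -2)
diag, off = [8.0, 9.0, 1.0], [-(40.0 ** 0.5), -2.0]
# f_3 vanishes at the eigenvalues 0, 3, 15:
print([round(sturm_sequence(diag, off, lam)[-1], 6) for lam in (0.0, 3.0, 15.0)])
print(count_eigenvalues_below(diag, off, 4.0))  # two eigenvalues (0 and 3) lie below 4
```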
4.7
BOUNDS ON EIGENVALUES (GERSCHGORIN CIRCLES)
Some applications in engineering require only bounds on the eigenvalues instead of accurate approximations to them. These bounds can be obtained using the following two results, known as the Gerschgorin theorems.

Theorem 4.1. (First Gerschgorin Theorem). Every eigenvalue of an n × n matrix A = [a_ij] lies inside at least one of the circles (called Gerschgorin circles) in the complex plane with center a_ii and radius

    r_i = Σ_{j≠i} |a_ij|,  i = 1, 2, …, n.

In other words, all the eigenvalues of A lie in the union of the disks |z − a_ii| ≤ r_i, i = 1, 2, …, n, in the complex plane.

Proof: Let λ be an eigenvalue of A = [a_ij] and X the corresponding eigenvector. Then AX = λX, which yields

a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = λx₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = λx₂
⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯
a_i1x₁ + a_i2x₂ + ⋯ + a_inxₙ = λx_i        (4.14)
⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯
a_n1x₁ + a_n2x₂ + ⋯ + a_nnxₙ = λxₙ.
Let x_i be the component of X = (x₁, x₂, …, xₙ)ᵀ of largest magnitude, so that |x_m/x_i| ≤ 1 for m = 1, 2, …, n. Dividing the ith equation in (4.14) by x_i, we have

    λ − a_ii = a_i1(x₁/x_i) + a_i2(x₂/x_i) + ⋯ + a_{i,i−1}(x_{i−1}/x_i) + a_{i,i+1}(x_{i+1}/x_i) + ⋯ + a_in(xₙ/x_i).

Since |x_m/x_i| ≤ 1, we get

    |λ − a_ii| ≤ |a_i1| + |a_i2| + ⋯ + |a_{i,i−1}| + |a_{i,i+1}| + ⋯ + |a_in| = Σ_{j≠i} |a_ij|.

This completes the proof of the theorem.
Since the disk |z − a_ii| ≤ r_i is contained within the disk

    |z| ≤ |a_ii| + r_i = |a_ii| + Σ_{j≠i} |a_ij| = Σ_j |a_ij|,
centered at the origin, it follows that all the eigenvalues of the matrix A lie within the disk

    |z| ≤ max_i Σ_j |a_ij|,  i = 1, 2, …, n,

centered at the origin.

Theorem 4.2. (Second Gerschgorin Theorem). If the union of m of the Gerschgorin circles forms a connected region isolated from the remaining circles, then exactly m of the eigenvalues of A lie within that region.

EXAMPLE 4.17
Determine the Gerschgorin circles corresponding to the matrix

        [ 1  2  3 ]
    A = [ 2  4  6 ] .
        [ 3  6  1 ]

Solution. The three Gerschgorin circles are
(a) |z − 1| ≤ 2 + 3 = 5,
(b) |z − 4| ≤ 2 + 6 = 8,
(c) |z − 1| ≤ 3 + 6 = 9.

Thus, one eigenvalue lies within the circle centered at (1, 0) with radius 5, a second within the circle centered at (4, 0) with radius 8, and the third within the circle with center (1, 0) and radius 9. Since disk (a) lies within disk (c), all the eigenvalues of A lie within the region defined by disks (b) and (c). Hence,
    −4 ≤ λ ≤ 12  or  −8 ≤ λ ≤ 10,

and so every eigenvalue satisfies −8 ≤ λ ≤ 12.
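Computing the circles is a one-pass scan over the rows. The sketch below (a helper written for this illustration) prints the circles of Example 4.17 and the resulting real interval for a matrix with real eigenvalues.

```python
def gerschgorin_bounds(A):
    """Centers and radii of the Gerschgorin circles of A, plus the real
    interval [lo, hi] that contains every eigenvalue when the spectrum
    is real (e.g. for a real symmetric matrix)."""
    circles = []
    for i, row in enumerate(A):
        radius = sum(abs(v) for j, v in enumerate(row) if j != i)
        circles.append((row[i], radius))
    lo = min(c - r for c, r in circles)
    hi = max(c + r for c, r in circles)
    return circles, (lo, hi)

# Matrix of Example 4.17
A = [[1, 2, 3],
     [2, 4, 6],
     [3, 6, 1]]
circles, (lo, hi) = gerschgorin_bounds(A)
print(circles)   # [(1, 5), (4, 8), (1, 9)]
print(lo, hi)    # -8 12
```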
EXERCISES

1. Find the largest eigenvalue and the corresponding eigenvector of the matrix
   [1 2 3; 0 −4 2; 0 0 7].
   Ans. 7, (37/66, 2/11, 1)ᵀ

2. Using the power method, determine the largest eigenvalue and the corresponding eigenvector of the matrix
   [1 6 1; 1 2 0; 0 0 3].
   Ans. 4, (2, 1, 0)ᵀ

3. Using the power method, determine the largest eigenvalue and the corresponding eigenvector of the matrix
   [2 −1 0; −1 2 −1; 0 −1 2].
   Ans. 3.41, (0.74, −1, 0.67)ᵀ

4. Determine the largest eigenvalue and the corresponding eigenvector of the matrix
   [10 2 1; 2 10 2; 1 2 10].
   Ans. 9, (1, 0, −1)ᵀ

5. Using Jacobi's method, find the eigenvalues of the matrix
   [5 0 1; 0 −2 0; 1 0 5].
   Ans. 4, −2, 6

6. Reduce the matrix
   [2 1 3; 1 4 2; 3 2 3]
to tri-diagonal form by Given’s method. § 2 3.16 0 ¶ ¨ · Ans. ¨3.1 6 4.3 1.9 · ¨ 0
1.9 3.9 ·¸ © 7. Use Given’s method to reduce the matrix §3 1 1 ¶ ¨ · ¨1 3 2 · ¨1 2 3 · © ¸ to tri-diagonal form and find its eigenvalues using Sturm’s sequence.
8. Use the Given’s method to reduce the Hilbert matrix § ¨1 ¨ ¨1 ¨2 ¨ ¨1 ¨© 3
1 2 1 3 1 4
§ 3 ¨ Ans. ¨ 2 ¨ ¨© 0
2 0¶ · 5 0 · , 1, 4 o 3 · 0 1· ¸
1¶ 3 ·· 1· 4· · 1· 5 ·¸
   to a tri-diagonal matrix.
   Ans. [1, √13/6, 0; √13/6, 34/65, 9/260; 0, 9/260, 2/195]

9. Reduce the matrix
   [2 −1 −1; −1 2 −1; −1 −1 2]
   to tri-diagonal form by Householder's method and use a Sturm sequence to find its eigenvalues.
   Ans. 0, 3, 3

10. Reduce the matrix
   [1 4 3; 4 1 2; 3 2 1]
   to tri-diagonal form by Householder's method.
   Ans. [1, −5, 0; −5, 73/25, −14/25; 0, −14/25, −23/25]
11. Reduce the matrix
   [1 3 4; 3 2 1; 4 1 1]
   to tri-diagonal form by Householder's method.
   Ans. [1, −5, 0; −5, 58/25, 19/25; 0, 19/25, 17/25]

12. Using the Faddeev–Leverrier method, find the characteristic equation of the matrix
   (i) [1 0 0; 1 2 1; 0 2 3],  (ii) [1 1 2; 1 2 3; 0 1 1].
   Ans. (i) λ³ − 2λ² + λ + 2 = 0;  (ii) λ³ − 6λ² + 5λ = 0

13. Using the power method and the deflation method, find the dominant and subdominant eigenvalues of the matrix
   [2 2 0; 2 5 0; 0 0 3].
   Ans. 6, 3, 1

14. Determine the Gerschgorin circles corresponding to the matrix
   [10 1 0; 1 2 2; 0 2 3].
   Ans. |z − 10| ≤ 1, |z − 2| ≤ 3, |z − 3| ≤ 2

15. Using Gerschgorin circles, show that the eigenvalues of the matrix
   A = [2 2 0; 2 5 0; 0 0 3]
   satisfy the inequality 0 ≤ λ ≤ 7.
5
Finite Differences and Interpolation
Finite differences play a key role in the solution of differential equations and in the construction of interpolating polynomials. Interpolation is the art of reading between the tabulated values. The interpolation formulae are also used to derive formulae for numerical differentiation and integration.
5.1
FINITE DIFFERENCES
Suppose that a function y = f(x) is tabulated for the equally spaced arguments x0, x0 + h, x0 + 2h, ..., x0 + nh, giving the functional values y0, y1, y2, ..., yn. The constant difference h between two consecutive values of x is called the interval of differencing. The operator Δ defined by

        Δy0 = y1 − y0,
        Δy1 = y2 − y1,
        ..................
        Δyn−1 = yn − yn−1

is called Newton's forward difference operator. We note that the first difference Δf(x) = f(x + h) − f(x) is itself a function of x. Consequently, we can repeat the operation of differencing to obtain

        Δ²y0 = Δ(Δy0) = Δ(y1 − y0) = Δy1 − Δy0 = (y2 − y1) − (y1 − y0) = y2 − 2y1 + y0,

which is called the second forward difference. In general, the nth difference of f is defined by

        Δⁿyr = Δⁿ⁻¹yr+1 − Δⁿ⁻¹yr.
For example, let

        f(x) = x³ − 3x² + 5x + 7.

Taking the arguments as 0, 2, 4, 6, 8, 10, we have h = 2 and

        Δf(x) = (x + 2)³ − 3(x + 2)² + 5(x + 2) + 7 − (x³ − 3x² + 5x + 7) = 6x² + 6,
        Δ²f(x) = Δ(Δf(x)) = Δ(6x² + 6) = 6(x + 2)² − 6x² = 24x + 24,
        Δ³f(x) = 24(x + 2) − 24x = 48,
        Δ⁴f(x) = Δ⁵f(x) = ... = 0.

In tabular form, we have

Difference Table

x     f(x)    Δf(x)    Δ²f(x)    Δ³f(x)    Δ⁴f(x)    Δ⁵f(x)
0       7
                6
2      13                24
               30                    48
4      43               72                     0
              102                    48                   0
6     145              120                     0
              222                    48
8     367              168
              390
10    757
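The table above can be built mechanically. The following is a minimal sketch (not from the book; the function, the step h = 2, and the column layout simply follow the worked example), where each successive column is obtained by differencing the previous one:

```python
def difference_table(values):
    """Return [f, Δf, Δ²f, ...] as successive forward-difference columns."""
    table = [list(values)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

f = lambda x: x**3 - 3 * x**2 + 5 * x + 7
table = difference_table([f(x) for x in (0, 2, 4, 6, 8, 10)])
# table[1] -> [6, 30, 102, 222, 390]; table[3] -> [48, 48, 48]; table[4] -> [0, 0]
```

The constant third differences and vanishing fourth differences reproduce the hand computation above.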
Theorem 5.1. If f(x) is a polynomial of degree n, that is,

        f(x) = Σᵢ₌₀ⁿ aᵢxⁱ,

then Δⁿf(x) is constant and is equal to n! aₙhⁿ.

Proof: We shall prove the theorem by induction on n. If n = 1, then f(x) = a₁x + a₀ and

        Δf(x) = f(x + h) − f(x) = a₁h,

and so the theorem holds for n = 1. Assume now that the result is true for all degrees 1, 2, ..., n − 1. Consider f(x) = Σᵢ₌₀ⁿ aᵢxⁱ. Then, by the linearity of the operator Δ,

        Δⁿf(x) = Σᵢ₌₀ⁿ aᵢ Δⁿxⁱ.

For i < n, Δⁿxⁱ is the nth difference of a polynomial of degree less than n and hence must vanish by the induction hypothesis. Thus,

        Δⁿf(x) = aₙΔⁿxⁿ = aₙΔⁿ⁻¹(Δxⁿ) = aₙΔⁿ⁻¹[(x + h)ⁿ − xⁿ] = aₙΔⁿ⁻¹[nhxⁿ⁻¹ + g(x)],

where g(x) is a polynomial of degree less than n − 1. Hence, by the induction hypothesis,

        Δⁿf(x) = aₙΔⁿ⁻¹(nhxⁿ⁻¹) = aₙ(nh)(n − 1)! hⁿ⁻¹ = aₙ n! hⁿ.

Hence, by induction, the theorem holds.

Let y0, y1, ..., yn be the functional values of a function f for the arguments x0, x0 + h, x0 + 2h, ..., x0 + nh. Then the operator ∇ defined by

        ∇yr = yr − yr−1

is called Newton's backward difference operator.
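Theorem 5.1 is easy to check numerically. The sketch below (the cubic, the step, and the sample points are arbitrary choices of mine) computes Δⁿf(x) from its binomial expansion Δⁿf(x) = Σₖ (−1)ᵏ C(n, k) f(x + (n − k)h) and confirms the constant value n! aₙhⁿ:

```python
import math

def nth_difference(f, x, n, h):
    # Δⁿf(x) = Σ_k (−1)^k C(n, k) f(x + (n − k)h)
    return sum((-1) ** k * math.comb(n, k) * f(x + (n - k) * h)
               for k in range(n + 1))

f = lambda x: 4 * x**3 - 2 * x**2 + x - 9   # degree n = 3, leading coefficient a_n = 4
h = 2
vals = [nth_difference(f, x, 3, h) for x in (0, 1, 5, 10)]
# every value equals 3! * 4 * 2**3 = 192, independent of x
```

The lower-order terms of the polynomial drop out entirely, as the theorem predicts.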
The higher-order backward differences are

        ∇²yr = ∇yr − ∇yr−1,
        ∇³yr = ∇²yr − ∇²yr−1,
        ..................
        ∇ⁿyr = ∇ⁿ⁻¹yr − ∇ⁿ⁻¹yr−1.

Thus, the backward difference table becomes

x     y      1st           2nd           3rd
             difference    difference    difference
x0    y0
             ∇y1
x1    y1                   ∇²y2
             ∇y2                         ∇³y3
x2    y2                   ∇²y3
             ∇y3
x3    y3
EXAMPLE 5.1
Form the table of backward differences for the function

        f(x) = x³ − 3x² + 5x − 7

for x = −1, 0, 1, 2, 3, 4, and 5.

Solution.

x      y     1st          2nd          3rd          4th
             difference   difference   difference   difference
−1    −16
              9
0     −7                  −6
              3                          6
1     −4                   0                          0
              3                          6
2     −1                   6                          0
              9                          6
3      8                  12                          0
             21                          6
4     29                  18
             39
5     68
An operator E, known as the enlargement operator, displacement operator, or shifting operator, is defined by

        Eyr = yr+1.

Thus, the shifting operator moves the functional value f(x) to the next higher value f(x + h). Further,

        E²yr = E(Eyr) = Eyr+1 = yr+2,
        E³yr = E(E²yr) = Eyr+2 = yr+3,
        ..................
        Eⁿyr = yr+n.

Relations between Δ, ∇, and E
We know that

        Δyr = yr+1 − yr = Eyr − yr = (E − I)yr,

where I is the identity operator. Hence,

        Δ = E − I,  that is,  E = I + Δ.                                    (5.1)

Also, by definition,

        ∇yr = yr − yr−1 = yr − E⁻¹yr = (I − E⁻¹)yr,

and so ∇ = I − E⁻¹, that is, E⁻¹ = I − ∇ or

        E = (I − ∇)⁻¹.                                                      (5.2)

From equations (5.1) and (5.2), we have

        Δ = (I − ∇)⁻¹ − I = ∇(I − ∇)⁻¹                                      (5.3)

and

        ∇ = I − (I + Δ)⁻¹ = Δ(I + Δ)⁻¹.                                     (5.4)

From equations (5.3) and (5.4),

        Δ∇ = ∇Δ = Δ − ∇.                                                    (5.5)

Theorem 5.2.

        f(x + nh) = Σₖ₌₀ⁿ C(n, k) Δᵏfx,

where C(n, k) denotes the binomial coefficient n!/[k!(n − k)!].

Proof: We shall prove the result by mathematical induction. For n = 1, the theorem reduces to

        f(x + h) = fx + Δfx,

which is true. Assume now that the theorem is true for n − 1. Then

        Eⁿfx = E(Eⁿ⁻¹fx) = E Σᵢ C(n − 1, i) Δⁱfx,  by the induction hypothesis.

But E = I + Δ. So

        Eⁿfx = (I + Δ) Σᵢ C(n − 1, i)Δⁱfx = Σᵢ C(n − 1, i)Δⁱfx + Σⱼ C(n − 1, j − 1)Δʲfx.

The coefficient of Δᵏfx (k = 0, 1, 2, ..., n) is therefore

        C(n − 1, k) + C(n − 1, k − 1) = C(n, k).

Hence,

        Eⁿfx = Σₖ₌₀ⁿ C(n, k)Δᵏfx,

which completes the proof of the theorem.

As a special case of this theorem, we get

        fx = Eˣf0 = Σₖ C(x, k)Δᵏf0,

which is known as Newton's advancing difference formula and expresses the general functional value fx in terms of f0 and its differences.

Let h be the interval of differencing. Then the operator δ defined by

        δfx = f(x + h/2) − f(x − h/2)
is called the central difference operator. We note that

        δfx = f(x + h/2) − f(x − h/2) = E^(1/2)fx − E^(−1/2)fx.

Hence,

        δ = E^(1/2) − E^(−1/2).                                             (5.6)

Multiplying both sides by E^(1/2), we get

        E − δE^(1/2) − I = 0,  that is,  (E^(1/2) − δ/2)² = I + δ²/4,

so that

        E^(1/2) = δ/2 + (I + δ²/4)^(1/2)  and  E = I + δ²/2 + δ(I + δ²/4)^(1/2).   (5.7)

Also, using equation (5.7), we note that

        Δ = E − I = δ²/2 + δ(I + δ²/4)^(1/2)                                (5.8)

and

        ∇ = I − E⁻¹ = I − [I + δ²/2 − δ(I + δ²/4)^(1/2)] = δ(I + δ²/4)^(1/2) − δ²/2.   (5.9)

Conversely,

        δ = ΔE^(−1/2) = Δ(I + Δ)^(−1/2)                                     (5.10)

and

        δ = ∇E^(1/2) = ∇(I − ∇)^(−1/2).                                     (5.11)
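Theorem 5.2 (Newton's advancing difference formula) can be verified on a short table of arbitrary values; the helper below is a sketch of mine, not the book's notation. A value n steps ahead is exactly the binomial sum of the leading forward differences:

```python
import math

ys = [2, 7, 1, 8, 2, 8]              # arbitrary sample values f0, f1, ...
diffs = [list(ys)]                   # diffs[k][0] is Δᵏf0
while len(diffs[-1]) > 1:
    prev = diffs[-1]
    diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])

n = 4
lhs = ys[n]                          # Eⁿf0
rhs = sum(math.comb(n, k) * diffs[k][0] for k in range(n + 1))
```

Here the identity Eⁿ = (I + Δ)ⁿ is being exercised term by term.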
Let h be the interval of differencing. Then the operator μ defined by

        μfx = ½[f(x + h/2) + f(x − h/2)]

is called the mean value operator or averaging operator. We have

        μfx = ½[f(x + h/2) + f(x − h/2)] = ½[E^(1/2)fx + E^(−1/2)fx].

Hence,

        μ = ½[E^(1/2) + E^(−1/2)]                                           (5.12)

or

        2μ = E^(1/2) + E^(−1/2).                                            (5.13)

Also, we know that

        δ = E^(1/2) − E^(−1/2).                                             (5.14)

Adding equations (5.13) and (5.14), we get

        2E^(1/2) = 2μ + δ,  that is,  E^(1/2) = μ + δ/2.                    (5.15)

Also E^(1/2) = δ/2 + (I + δ²/4)^(1/2). Hence,

        μ + δ/2 = δ/2 + (I + δ²/4)^(1/2),  that is,  μ = (I + δ²/4)^(1/2).  (5.16)

The relation (5.16) yields

        μ² = I + δ²/4,  so that  δ²/4 = μ² − I  and  δ = 2(μ² − I)^(1/2).   (5.17)

Multiplying equation (5.13) throughout by E^(1/2), we get

        E − 2μE^(1/2) + I = 0,  that is,  (E^(1/2) − μ)² = μ² − I,

so that

        E^(1/2) = μ + (μ² − I)^(1/2)  and  E = 2μ² − I + 2μ(μ² − I)^(1/2).  (5.18)

Then

        Δ = E − I = 2μ² − 2I + 2μ(μ² − I)^(1/2)                             (5.19)

and

        ∇ = I − E⁻¹ = I − [2μ² − I − 2μ(μ² − I)^(1/2)] = 2I − 2μ² + 2μ(μ² − I)^(1/2).   (5.20)
The differential operator D is defined by

        Df(x) = f′(x).

By Taylor's Theorem, we have

        f(x + h) = f(x) + hf′(x) + (h²/2!)f″(x) + ...
                 = f(x) + hDf(x) + (h²/2!)D²f(x) + ...
                 = [I + hD + (h²/2!)D² + ...] f(x),

and so

        Ef(x) = f(x + h) = [I + hD + (h²/2!)D² + ...] f(x).

Hence,

        E = I + hD + (h²/2!)D² + ... = e^(hD) = e^U,  U = hD.               (5.21)

Then Δ = E − I = e^U − I and ∇ = I − E⁻¹ = I − e^(−U). We note that

        δ = E^(1/2) − E^(−1/2) = e^(U/2) − e^(−U/2) = 2 sinh(U/2)           (5.22)

and

        2μ = E^(1/2) + E^(−1/2) = e^(U/2) + e^(−U/2).                       (5.23)

Conversely, multiplying (5.23) by e^(U/2) gives e^U − 2μe^(U/2) + 1 = 0, so that

        e^(U/2) = μ + (μ² − I)^(1/2)  and  U = 2 log[μ + (μ² − I)^(1/2)].   (5.24)

Since, by equation (5.22), δ = 2 sinh(U/2), it follows that

        U = 2 sinh⁻¹(δ/2).                                                  (5.25)

From the above discussion, we obtain the following table of relations among the finite difference operators:
Each entry expresses the row operator in terms of the column operator.

            Δ                  ∇                  δ                          E               U = hD
Δ =         Δ            ∇(I − ∇)⁻¹     δ²/2 + δ(I + δ²/4)^(1/2)         E − I           e^U − I
∇ =   Δ(I + Δ)⁻¹              ∇         δ(I + δ²/4)^(1/2) − δ²/2         I − E⁻¹         I − e^(−U)
δ =   Δ(I + Δ)^(−1/2)  ∇(I − ∇)^(−1/2)            δ                E^(1/2) − E^(−1/2)   2 sinh(U/2)
E =       I + Δ          (I − ∇)⁻¹      I + δ²/2 + δ(I + δ²/4)^(1/2)        E               e^U
U =    log(I + Δ)      −log(I − ∇)            2 sinh⁻¹(δ/2)               log E               U
EXAMPLE 5.2
The expression δy0 cannot be computed directly from a difference scheme. Find its value expressed in known central differences.

Solution. We know that μ = (I + δ²/4)^(1/2), so that I = μ(I + δ²/4)^(−1/2). Therefore,

        δy0 = μδ(I + δ²/4)^(−1/2) y0 = μδ[I − δ²/8 + (3/128)δ⁴ − ...] y0
            = μδy0 − (1/8)μδ³y0 + (3/128)μδ⁵y0 − ... .                      (5.26)

But

        μδy0 = ½[δy_(1/2) + δy_(−1/2)] = ½(y1 − y−1),

and similarly

        μδ³y0 = ½[δ³y_(1/2) + δ³y_(−1/2)] = ½(δ²y1 − δ²y−1),
        μδ⁵y0 = ½(δ⁴y1 − δ⁴y−1).

Hence, equation (5.26) reduces to

        δy0 = ½(y1 − y−1) − (1/16)(δ²y1 − δ²y−1) + (3/256)(δ⁴y1 − δ⁴y−1) − ...,

which is the required form.
5.2
FACTORIAL NOTATION

A product of the form x(x − 1)(x − 2)...(x − r + 1) is called a factorial and is denoted by [x]^r. Thus,

        [x]¹ = x,
        [x]² = x(x − 1),
        [x]³ = x(x − 1)(x − 2),
        ..................
        [x]^r = x(x − 1)(x − 2)...(x − r + 1).

If h is the interval of differencing, then

        [x]^r = x(x − h)(x − 2h)...(x − (r − 1)h).

We observe that

        Δ[x]ⁿ = [x + h]ⁿ − [x]ⁿ
              = (x + h)x(x − h)...(x − (n − 2)h) − x(x − h)...(x − (n − 1)h)
              = x(x − h)...(x − (n − 2)h)[(x + h) − (x − (n − 1)h)]
              = nh[x]ⁿ⁻¹,
        Δ²[x]ⁿ = Δ(nh[x]ⁿ⁻¹) = nh Δ[x]ⁿ⁻¹ = n(n − 1)h²[x]ⁿ⁻²,
        ..................
        Δⁿ⁻¹[x]ⁿ = n(n − 1)...2 hⁿ⁻¹[x]¹ = n! hⁿ⁻¹x,
        Δⁿ[x]ⁿ = n! hⁿ⁻¹(x + h − x) = n! hⁿ.

If the interval of differencing is 1, then

        Δ[x]ⁿ = n[x]ⁿ⁻¹  and  Δⁿ[x]ⁿ = n!.

Thus, for h = 1, differencing [x]ⁿ is analogous to differentiating xⁿ.

EXAMPLE 5.3
Express f(x) = x³ − 2x² + x − 1 in factorial notation and show that Δ⁴f(x) = 0.

Solution. Suppose

        f(x) = x³ − 2x² + x − 1 = x(x − 1)(x − 2) + Bx(x − 1) + Cx + D.

Putting x = 0, we get −1 = D. Putting x = 1, we get −1 = C + D, and so C = −1 − D = 0. Putting x = 2, we get 1 = 2B + 2C + D, and so 2B = 1 − 2C − D = 1 − (−1) = 2, that is, B = 1. Hence,

        f(x) = x³ − 2x² + x − 1 = x(x − 1)(x − 2) + x(x − 1) − 1 = [x]³ + [x]² − 1.

Now,

        Δf(x) = 3[x]² + 2[x],
        Δ²f(x) = 6[x] + 2,
        Δ³f(x) = 6,
        Δ⁴f(x) = 0.
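The conversion used in Example 5.3 is repeated synthetic division by x, x − 1, x − 2, ...: each remainder is the next factorial coefficient. A sketch for h = 1 (the helper names are mine):

```python
def synth_div(coeffs, k):
    # divide a polynomial (coefficients, highest power first) by (x − k)
    q = [coeffs[0]]
    for c in coeffs[1:]:
        q.append(c + k * q[-1])
    return q[:-1], q[-1]            # quotient, remainder

def to_factorial(coeffs):
    # express p(x) = sum c_r * [x]^r; returns [c0, c1, ..., cn]
    cs, k = [], 0
    while coeffs:
        coeffs, r = synth_div(coeffs, k)
        cs.append(r)
        k += 1
    return cs

cs = to_factorial([1, -2, 1, -1])   # x³ − 2x² + x − 1  ->  [x]³ + [x]² + 0·[x] − 1
```

The result [−1, 0, 1, 1] reads from constant term upward, matching [x]³ + [x]² − 1 above.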
EXAMPLE 5.4
Find the function whose first difference is 2x³ + 3x² − 5x + 4.

Solution. Let f(x) be the required function. We are given that

        Δf(x) = 2x³ + 3x² − 5x + 4 = 2x(x − 1)(x − 2) + Bx(x − 1) + Cx + D.

Putting x = 0, we have 4 = D. Putting x = 1, we get 4 = C + D, and so C = 0. Putting x = 2, we get 22 = 2B + 2C + D, and so 2B = 22 − 2C − D = 22 − 4 = 18, that is, B = 9. Thus,

        Δf(x) = 2x(x − 1)(x − 2) + 9x(x − 1) + 4 = 2[x]³ + 9[x]² + 4.

"Integrating" Δf(x) (the finite analogue of integration), we get

        f(x) = (2/4)[x]⁴ + (9/3)[x]³ + 4[x] + C = ½[x]⁴ + 3[x]³ + 4[x] + C,

where C is the constant of integration.

5.3
SOME MORE EXAMPLES OF FINITE DIFFERENCES
EXAMPLE 5.5
Find the missing term in the following table:

x      0   1   2   3   4
f(x)   1   3   9   –   81

Solution. Since four entries y0, y1, y2, y4 are given, the function can be represented by a polynomial of degree three. The difference table, with the missing entry denoted y3, is

x   f(x)   Δf(x)     Δ²f(x)      Δ³f(x)       Δ⁴f(x)
0     1
             2
1     3                 4
             6                    y3 − 19
2     9               y3 − 15                  124 − 4y3
           y3 − 9                105 − 3y3
3    y3               90 − 2y3
           81 − y3
4    81

Since the polynomial is of degree 3, Δ⁴f(x) = 0, and so 124 − 4y3 = 0, which gives y3 = 31.
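The missing-term computation of Example 5.5 can be sketched in code: for five values of a degree-3 polynomial, Δ⁴y0 = 0 is one linear equation in the unknown entry. The dictionary layout below is my own choice, not the book's:

```python
import math

known = {0: 1, 1: 3, 2: 9, 4: 81}   # y3 is missing
n = 4                               # Δ⁴y0 = sum (−1)^k C(4,k) y_{4−k} = 0
const = sum((-1) ** k * math.comb(n, k) * known[n - k]
            for k in range(n + 1) if (n - k) in known)
k_missing = n - 3                   # y3 enters as the k = 1 term
coeff = (-1) ** k_missing * math.comb(n, k_missing)
y3 = -const / coeff                 # 124 − 4*y3 = 0  ->  y3 = 31
```

The constant collects to 124 and the coefficient of y3 is −4, exactly the equation solved by hand above.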
EXAMPLE 5.6
If y0 = 3, y1 = 12, y2 = 81, y3 = 2000, and y4 = 100, determine Δ⁴y0.

Solution. The difference table for the given data is

x      y      Δy      Δ²y      Δ³y      Δ⁴y
0       3
               9
1      12             60
              69               1790
2      81            1850               −7459
             1919              −5669
3    2000            −3819
            −1900
4     100

From the table, we have Δ⁴y0 = −7459.

EXAMPLE 5.7
Establish the relations
(i) Δ∇ = Δ − ∇ = δ²;
(ii) μδ = ½(Δ + ∇);
(iii) Δ = ∇E = δE^(1/2).
Solution. (i) We know that Δ = E − I and ∇ = I − E⁻¹. Therefore,

        Δ∇ = (E − I)(I − E⁻¹) = E − I − I + E⁻¹ = E + E⁻¹ − 2I              (5.27)

and

        Δ − ∇ = (E − I) − (I − E⁻¹) = E + E⁻¹ − 2I.                         (5.28)

Furthermore,

        δ = E^(1/2) − E^(−1/2),

and so

        δ² = E − 2I + E⁻¹.                                                  (5.29)

The result follows from equations (5.27), (5.28), and (5.29).

(ii) We have μ = ½(E^(1/2) + E^(−1/2)) and δ = E^(1/2) − E^(−1/2). Therefore,

        μδ = ½(E^(1/2) + E^(−1/2))(E^(1/2) − E^(−1/2)) = ½(E − E⁻¹)
           = ½[(E − I) + (I − E⁻¹)] = ½(Δ + ∇).

(iii) We have

        ∇E = (I − E⁻¹)E = E − I = Δ

and

        δE^(1/2) = (E^(1/2) − E^(−1/2))E^(1/2) = E − I = Δ.

Hence, Δ = ∇E = δE^(1/2).
EXAMPLE 5.8
Show that

        E^r = (E sinh 2rθ + sinh 2tθ)/sinh 2θ,

where t = 1 − r and θ = hD/2.

Solution. Since E = e^(hD) = e^(2θ), we have

        sinh 2θ = ½(e^(2θ) − e^(−2θ)) = ½(E − E⁻¹),
        sinh 2rθ = ½(E^r − E^(−r)),
        sinh 2tθ = ½(E^(1−r) − E^(r−1)).

Therefore,

        E sinh 2rθ + sinh 2tθ = ½[E^(r+1) − E^(1−r)] + ½[E^(1−r) − E^(r−1)]
                              = ½[E^(r+1) − E^(r−1)]
                              = ½ E^r (E − E⁻¹) = E^r sinh 2θ.

Dividing both sides by sinh 2θ gives the required result.
EXAMPLE 5.9
Show that

        Σₖ₌₀ⁿ⁻¹ Δ²fₖ = Δfₙ − Δf0.

Solution. We have

        Δ²f0 = Δ(Δf0) = Δ(f1 − f0) = Δf1 − Δf0,
        Δ²f1 = Δf2 − Δf1,
        ..................
        Δ²fₙ₋₁ = Δfₙ − Δfₙ₋₁.

Adding, the intermediate terms cancel, and we get

        Σₖ₌₀ⁿ⁻¹ Δ²fₖ = Δfₙ − Δf0.
EXAMPLE 5.10
Show that

        Σₖ₌₀ⁿ⁻¹ δ²f₂ₖ₊₁ = tanh(U/2)(f₂ₙ − f0),  U = hD.

Solution. We have

        δ² = (E^(1/2) − E^(−1/2))²
           = (E^(1/2) − E^(−1/2))(E^(1/2) + E^(−1/2)) · (E^(1/2) − E^(−1/2))/(E^(1/2) + E^(−1/2))
           = (E − E⁻¹) · (e^(U/2) − e^(−U/2))/(e^(U/2) + e^(−U/2))
           = tanh(U/2)(E − E⁻¹).

Thus,

        δ²f₂ₖ₊₁ = tanh(U/2)[Ef₂ₖ₊₁ − E⁻¹f₂ₖ₊₁] = tanh(U/2)[f₂ₖ₊₂ − f₂ₖ].

Therefore,

        Σₖ₌₀ⁿ⁻¹ δ²f₂ₖ₊₁ = tanh(U/2)[(f2 − f0) + (f4 − f2) + ... + (f₂ₙ − f₂ₙ₋₂)]
                        = tanh(U/2)[f₂ₙ − f0].
EXAMPLE 5.11
Find the cubic polynomial f(x) which takes on the values

        f(0) = −5, f(1) = 1, f(2) = 9, f(3) = 25, f(4) = 55, f(5) = 105.

Solution. The difference table for the given function is given below:

x    f(x)   Δf(x)   Δ²f(x)   Δ³f(x)   Δ⁴f(x)
0     −5
              6
1      1              2
              8                  6
2      9              8                   0
             16                  6
3     25             14                   0
             30                  6
4     55             20
             50
5    105

Now,

        fx = Eˣf0 = (I + Δ)ˣf0
           = [I + xΔ + (x(x − 1)/2!)Δ² + (x(x − 1)(x − 2)/3!)Δ³] f0
           = −5 + 6x + ((x² − x)/2)(2) + ((x³ − 3x² + 2x)/6)(6)
           = x³ − 2x² + 7x − 5,

which is the required cubic polynomial.

EXAMPLE 5.12
Determine Δ¹⁰[(1 + ax)(1 + bx²)(1 + cx³)(1 + dx⁴)].

Solution. We have

        Δ¹⁰[(1 + ax)(1 + bx²)(1 + cx³)(1 + dx⁴)] = Δ¹⁰[abcd x¹⁰ + Ax⁹ + Bx⁸ + ... + 1].

The polynomial in the square bracket is of degree 10. Therefore, Δ¹⁰f(x) is constant and is equal to aₙ n! hⁿ. In this case we have aₙ = abcd, n = 10, h = 1. Hence,

        Δ¹⁰f(x) = abcd (10)!.
EXAMPLE 5.13
Show that

        δ²y5 = y6 − 2y5 + y4.

Solution. We know that δ² = Δ∇. Therefore,

        δ²y5 = Δ∇y5 = Δ(y5 − y4) = (y6 − y5) − (y5 − y4) = y6 − 2y5 + y4.
EXAMPLE 5.14
Show that

        (Δ²/E)eˣ · (Eeˣ)/(Δ²eˣ) = eˣ,

the interval of differencing being h.

Solution. We note that

        Eeˣ = e^(x+h),
        Δeˣ = e^(x+h) − eˣ = eˣ(e^h − 1),
        Δ²eˣ = eˣ(e^h − 1)²,

and

        (Δ²/E)eˣ = Δ²E⁻¹eˣ = Δ²e^(x−h) = e^(x−h)(e^h − 1)².

Hence,

        (Δ²/E)eˣ · (Eeˣ)/(Δ²eˣ) = e^(x−h)(e^h − 1)² · e^(x+h)/[eˣ(e^h − 1)²] = eˣ.

EXAMPLE 5.15
Show that

(i) δⁿyx = Σₖ₌₀ⁿ (−1)ᵏ [n!/(k!(n − k)!)] y_(x + n/2 − k);
(ii) Δ C(n, i + 1) = C(n, i), the difference being taken with respect to n (h = 1).

Solution. (i) We have

        δⁿ = (E^(1/2) − E^(−1/2))ⁿ = E^(−n/2)(E − I)ⁿ
           = E^(−n/2)[Eⁿ − C(n, 1)Eⁿ⁻¹ + C(n, 2)Eⁿ⁻² − ... + (−1)ⁿI].

Therefore,

        δⁿyx = Σₖ₌₀ⁿ (−1)ᵏ C(n, k) E^(n/2 − k) yx
             = Σₖ₌₀ⁿ (−1)ᵏ [n!/(k!(n − k)!)] y_(x + n/2 − k).

(ii) We have C(n, i + 1) = n!/[(i + 1)!(n − i − 1)!]. Now,

        Δ C(n, i + 1) = C(n + 1, i + 1) − C(n, i + 1)
                      = [n!/((i + 1)!(n − i)!)][(n + 1) − (n − i)]
                      = n!(i + 1)/[(i + 1)!(n − i)!]
                      = n!/[i!(n − i)!] = C(n, i).
EXAMPLE 5.16
Assuming that the following values of y belong to a polynomial of degree 4, find the missing values in the table:

x   0    1   2    3   4   5   6   7
y   1   −1   1   −1   1   –   –   –

Solution. The difference table of the given data is shown below:

x    y      Δy       Δ²y       Δ³y            Δ⁴y
0    1
           −2
1   −1                4
            2                   −8
2    1               −4                        16
           −2                    8
3   −1                4                    Δ³y2 − 8
            2                  Δ³y2
4    1              Δ²y3                 Δ³y3 − Δ³y2
          Δy4                  Δ³y3
5   y5              Δ²y4                 Δ³y4 − Δ³y3
          Δy5                  Δ³y4
6   y6              Δ²y5
          Δy6
7   y7

Since the polynomial of the data is of degree 4, Δ⁴y should be constant. One of the fourth differences is 16; hence, all of the fourth differences must be 16. But then

        Δ³y2 − 8 = 16, giving Δ³y2 = 24,
        Δ³y3 − Δ³y2 = 16, giving Δ³y3 = 40,
        Δ³y4 − Δ³y3 = 16, giving Δ³y4 = 56,
        Δ²y3 − 4 = Δ³y2 = 24, and so Δ²y3 = 28,
        Δ²y4 = Δ³y3 + Δ²y3 = 40 + 28 = 68,
        Δ²y5 = Δ³y4 + Δ²y4 = 56 + 68 = 124,
        Δy4 = Δ²y3 + Δy3 = 28 + 2 = 30,
        Δy5 = Δ²y4 + Δy4 = 68 + 30 = 98,
        Δy6 = Δ²y5 + Δy5 = 124 + 98 = 222,
        y5 = y4 + Δy4 = 1 + 30 = 31,
        y6 = y5 + Δy5 = 31 + 98 = 129,
        y7 = y6 + Δy6 = 129 + 222 = 351.

Hence, the missing terms are y5 = 31, y6 = 129, y7 = 351.
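Example 5.16's bookkeeping can be compressed: for a degree-4 polynomial, Δ⁵y = 0, so each new value satisfies y_(m+1) = 5y_m − 10y_(m−1) + 10y_(m−2) − 5y_(m−3) + y_(m−4). A sketch (the loop layout is mine):

```python
import math

ys = [1, -1, 1, -1, 1]              # the known values y0 ... y4
for _ in range(3):                  # append y5, y6, y7
    ys.append(sum((-1) ** (k + 1) * math.comb(5, k) * ys[-k]
                  for k in range(1, 6)))
```

This recovers 31, 129, 351 in three steps instead of rebuilding the whole difference table by hand.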
5.4
ERROR PROPAGATION

Let y0, y1, ..., y8 be the values of the function f at the arguments x0, x1, ..., x8, respectively. Suppose an error ε is committed in y4 during tabulation. To study the error propagation, we use the difference table. For the sake of convenience, we construct the difference table up to the fourth difference only. If the error in y4 is ε, then the tabulated value at x4 is y4 + ε. The difference table of the data is as shown below:

x    y         Δy           Δ²y           Δ³y           Δ⁴y
x0   y0
              Δy0
x1   y1                     Δ²y0
              Δy1                         Δ³y0
x2   y2                     Δ²y1                        Δ⁴y0 + ε
              Δy2                         Δ³y1 + ε
x3   y3                     Δ²y2 + ε                    Δ⁴y1 − 4ε
              Δy3 + ε                     Δ³y2 − 3ε
x4   y4 + ε                 Δ²y3 − 2ε                   Δ⁴y2 + 6ε
              Δy4 − ε                     Δ³y3 + 3ε
x5   y5                     Δ²y4 + ε                    Δ⁴y3 − 4ε
              Δy5                         Δ³y4 − ε
x6   y6                     Δ²y5                        Δ⁴y4 + ε
              Δy6
x7   y7                     Δ²y6
              Δy7
x8   y8
We note that
(i) the error propagates in a triangular pattern (shown by the fan lines) and grows quickly with the order of the difference;
(ii) the coefficients of the error ε in any column are binomial coefficients with alternating signs; thus, the errors in the third-difference column are ε, −3ε, 3ε, −ε;
(iii) the algebraic sum of the errors in any difference column is zero;
(iv) if the table has even-order differences, then the maximum error lies on the same horizontal line as the tabular value in error.

EXAMPLE 5.17
One entry in the following table of a polynomial of degree 4 is incorrect. Correct the entry by locating it:

x   1.0     1.1     1.2     1.3     1.4     1.5     1.6     1.7     1.8     1.9     2.0
y   1.0000  1.5191  2.0736  2.6611  3.2816  3.9375  4.6363  5.3771  6.1776  7.0471  8.0000
Solution. The difference table for the given data is shown below. Since the polynomial is of degree four, the fourth difference must be constant. But we note that the fourth differences oscillate for the larger values of x. The largest fourth difference, 0.0186, occurs at x = 1.6, which suggests that the error in the value of f is at x = 1.6. Draw the fan lines as shown in the difference table.

x      y        Δy        Δ²y        Δ³y        Δ⁴y
1.0   1.0000
               0.5191
1.1   1.5191             0.0354
               0.5545              −0.0024
1.2   2.0736             0.0330                 0.0024
               0.5875               0.0000
1.3   2.6611             0.0330                 0.0024
               0.6205               0.0024
1.4   3.2816             0.0354                 0.0051   ← fan lines
               0.6559               0.0075
1.5   3.9375             0.0429                −0.0084
               0.6988              −0.0009
1.6   4.6363             0.0420                 0.0186
               0.7408               0.0177
1.7   5.3771             0.0597                −0.0084
               0.8005               0.0093
1.8   6.1776             0.0690                 0.0051
               0.8695               0.0144
1.9   7.0471             0.0834
               0.9529
2.0   8.0000

Then, taking 1.6 as x0, we have

        Δ⁴f−4 + ε = 0.0051,
        Δ⁴f−3 − 4ε = −0.0084,
        Δ⁴f−2 + 6ε = 0.0186,
        Δ⁴f−1 − 4ε = −0.0084,
        Δ⁴f0 + ε = 0.0051.

We want all the (true) fourth differences to be alike. Eliminating Δ⁴f between any two of the compatible equations and solving for ε will serve our purpose. For example, subtracting the second equation from the first, we get 5ε = 0.0051 − (−0.0084) = 0.0135, and so ε = 0.0027. Putting this value of ε in the above equations, we note that all the fourth differences become 0.0024. Further, the tabulated value is

        f(1.6) + ε = 4.6363,

which yields

        f(1.6) = 4.6363 − ε = 4.6363 − 0.0027 = 4.6336.

Thus, the error was a transposing error, that is, writing 63 instead of 36 while tabulating.
EXAMPLE 5.18
Find and correct the error, by means of differences, in the data:

x   0   1   2   3    4    5    6     7     8     9     10
y   2   5   8   17   38   75   140   233   362   533   752

Solution. The difference table for the given data is shown below. The largest fourth difference, −12, occurs at x = 5, so there is some error in the value f(5). The fan lines are drawn, and we note from the table that

Difference table

x     y      Δ       Δ²      Δ³      Δ⁴
0     2
             3
1     5              0
             3               6
2     8              6               0
             9               6
3    17             12              −2   ← fan lines
            21               4
4    38             16               8
            37              12
5    75             28             −12
            65               0
6   140             28               8
            93               8
7   233             36              −2
           129               6
8   362             42               0
           171               6
9   533             48
           219
10  752

Taking 5 as x0,

        Δ⁴f−4 + ε = −2,
        Δ⁴f−3 − 4ε = 8,
        Δ⁴f−2 + 6ε = −12,
        Δ⁴f−1 − 4ε = 8,
        Δ⁴f0 + ε = −2,

and, from the third differences,

        Δ³f−3 + ε = 4,
        Δ³f−2 − 3ε = 12,
        Δ³f−1 + 3ε = 0,
        Δ³f0 − ε = 8.

Subtracting the second equation from the first (for both sets shown above), we get 5ε = −10 (for the first set) and 4ε = −8 (for the second set). Hence ε = −2. We now have

        f(5) + ε = 75.

Therefore, the true value of f(5) is

        f(5) = 75 − ε = 75 − (−2) = 77.
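The recipe of Examples 5.17 and 5.18 (locate the row of the largest fourth difference, then recover ε from adjacent Δ⁴ values) can be sketched as follows; it assumes a single, isolated error well inside the table:

```python
ys = [2, 5, 8, 17, 38, 75, 140, 233, 362, 533, 752]   # data of Example 5.18

d4 = ys
for _ in range(4):                                     # four differencing passes
    d4 = [d4[i + 1] - d4[i] for i in range(len(d4) - 1)]

peak = max(range(len(d4)), key=lambda i: abs(d4[i]))   # row of the +6ε entry
eps = (d4[peak] - d4[peak - 1]) / 10                   # (c + 6ε) − (c − 4ε) = 10ε
bad = peak + 2                                         # Δ⁴ at index i is centred on y_{i+2}
corrected = ys[bad] - eps
```

On the data above this locates ys[5] = 75 and returns ε = −2, corrected value 77, matching the hand computation.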
5.5
NUMERICAL INSTABILITY

Subtraction of two nearly equal numbers causes a considerable loss of significant digits and may magnify the error in later calculations. For example, if we subtract 63.994 from 64.395, which are correct to five significant figures, their difference 0.401 is correct only to three significant figures. A similar loss of significant figures occurs when a number is divided by a small divisor. For example, consider

        f(x) = 1/(1 − x²),  x = 0.9.

The true value of f(0.9) is 0.526316 × 10. If x is approximated by x* = 0.900005, that is, if an error appears in the sixth figure, then f(x*) = 0.526341 × 10. Thus, an error in the sixth place has caused an error in the fifth place of f(x). We note, therefore, that every arithmetic operation performed during computation gives rise to some error, which, once generated, may decay or grow in subsequent calculations. In some cases the error may grow so large as to make the computed result totally redundant. We call such a process (procedure) numerically unstable. Numerical instability may be avoided by adopting a calculation procedure that avoids subtraction of nearly equal numbers or division by small numbers, or by retaining more digits in the mantissa.

EXAMPLE 5.19 (Wilkinson)
Consider the polynomial

        P20(x) = (x − 1)(x − 2)...(x − 20) = x²⁰ − 210x¹⁹ + ... + (20)!.

The zeros of this polynomial are 1, 2, ..., 20. Let the coefficient of x¹⁹ be changed from −210 to −(210 + 2⁻²³). This is a very small absolute change, of magnitude 10⁻⁷ approximately. Most computers generally neglect this small change, which occurs after 23 binary bits. But we note that, while the smaller zeros of the new polynomial are obtained with good accuracy, the larger roots are changed by a large amount. The largest change occurs in the roots 16 and 17; for example, in place of 16 we get 16.73 ± 2.81i, whose magnitude is approximately 17.
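A back-of-envelope version of Example 5.19 (a sketch; the first-order perturbation estimate δx_k ≈ a·x_k¹⁹/|P′(x_k)| is my own calculation, not the book's):

```python
import math

def dP(k):
    # |P'(k)| at the root k of P(x) = (x−1)(x−2)...(x−20): product of |k − j|, j ≠ k
    return math.prod(abs(k - j) for j in range(1, 21) if j != k)

a = 2.0 ** -23                       # the perturbation of the x¹⁹ coefficient
shift = {k: a * k**19 / dP(k) for k in (1, 5, 16, 20)}
# shift[1] is astronomically small, while shift[16] >> 1: the linear estimate
# itself breaks down there, and the roots near 16 and 17 move off into the
# complex plane, as quoted in the example.
```

The contrast between shift[1] and shift[16] is exactly the ill conditioning Wilkinson's example illustrates.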
5.6
INTERPOLATION
Interpolation is the process of finding the value of a function for a value of the argument (independent variable) lying within an interval for which some values are given. Thus, interpolation is the art of reading between the lines of a given table. Extrapolation is the process of finding the value of a function outside an interval for which some values are given. We now discuss interpolation processes for equal spacing.

(A) Newton's Forward Difference Formula
Let ..., f−2, f−1, f0, f1, f2, ... be the values of a function for ..., x0 − 2h, x0 − h, x0, x0 + h, x0 + 2h, .... Suppose that we want to compute the function value fp for x = x0 + ph, where in general −1 < p < 1. We have

        fp = f(x0 + ph),  p = (x − x0)/h,

where h is the interval of differencing. Then, using the shift operator and the Binomial Theorem, we have

        fp = Eᵖf0 = (I + Δ)ᵖf0
           = [I + pΔ + (p(p − 1)/2!)Δ² + (p(p − 1)(p − 2)/3!)Δ³ + ...] f0
           = f0 + pΔf0 + (p(p − 1)/2!)Δ²f0 + (p(p − 1)(p − 2)/3!)Δ³f0 + ... .   (5.30)

The expression (5.30) is called Newton's forward difference formula for interpolation.

(B) Newton's Backward Difference Formula
Let ..., f−2, f−1, f0, f1, f2, ... be the values of a function for ..., x0 − 2h, x0 − h, x0, x0 + h, x0 + 2h, .... Suppose that we want to compute the function value fp for x = x0 + ph, −1 < p < 1. Using Newton's backward differences, we have

        fp = Eᵖf0 = (I − ∇)⁻ᵖf0
           = [I + p∇ + (p(p + 1)/2!)∇² + (p(p + 1)(p + 2)/3!)∇³ + ...] f0
           = f0 + p∇f0 + (p(p + 1)/2!)∇²f0 + (p(p + 1)(p + 2)/3!)∇³f0 + ...,

which is known as Newton's backward difference formula for interpolation.

Remark 5.1. It is clear from the differences used that
(i) Newton's forward difference formula is used for interpolating values of the function near the beginning of a set of tabulated values, and
(ii) Newton's backward difference formula is used for interpolating values of the function near the end of a set of tabulated values.

EXAMPLE 5.20
Calculate approximate values of sin x for x = 0.54 and x = 1.36 using the following table:

x       0.5      0.7      0.9      1.1      1.3      1.5
sin x   0.47943  0.64422  0.78333  0.89121  0.96356  0.99749
Solution. We take x0 = 0.50, xp = 0.54, and

        p = (0.54 − 0.50)/0.2 = 0.2.

Difference table

x     sin x     1st        2nd        3rd        4th        5th
0.5   0.47943
               0.16479
0.7   0.64422            −0.02568
               0.13911              −0.00555
0.9   0.78333            −0.03123                0.00125
               0.10788              −0.00430                0.00016
1.1   0.89121            −0.03553                0.00141
               0.07235              −0.00289
1.3   0.96356            −0.03842
               0.03393
1.5   0.99749

Using Newton's forward difference formula, we have

        fp = f0 + pΔf0 + (p(p − 1)/2!)Δ²f0 + (p(p − 1)(p − 2)/3!)Δ³f0
               + (p(p − 1)(p − 2)(p − 3)/4!)Δ⁴f0 + (p(p − 1)(p − 2)(p − 3)(p − 4)/5!)Δ⁵f0
           = 0.47943 + 0.2(0.16479) + (0.2)(−0.8)/2 (−0.02568) + (0.2)(−0.8)(−1.8)/6 (−0.00555)
               + (0.2)(−0.8)(−1.8)(−2.8)/24 (0.00125) + (0.2)(−0.8)(−1.8)(−2.8)(−3.8)/120 (0.00016)
           = 0.47943 + 0.032958 + 0.002054 − 0.000266 − 0.000042 + 0.000004
           ≈ 0.514138.

Further, the point x = 1.36 lies toward the end of the tabulated values. Therefore, to find the value of the function at x = 1.36, we use Newton's backward difference formula with xp = 1.36, x0 = 1.3, and

        p = (1.36 − 1.30)/0.2 = 0.3.

Then

        fp = f0 + p∇f0 + (p(p + 1)/2!)∇²f0 + (p(p + 1)(p + 2)/3!)∇³f0 + (p(p + 1)(p + 2)(p + 3)/4!)∇⁴f0
           = 0.96356 + 0.3(0.07235) + (0.3)(1.3)/2 (−0.03553) + (0.3)(1.3)(2.3)/6 (−0.00430)
               + (0.3)(1.3)(2.3)(3.3)/24 (0.00125)
           = 0.96356 + 0.021705 − 0.006928 − 0.000643 + 0.000154
           ≈ 0.977849.
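Newton's forward difference formula (5.30) in code, reproducing the first part of Example 5.20; the incremental update of the binomial coefficient C(p, k) is an implementation choice of mine:

```python
def newton_forward(xs, ys, x):
    # f_p = sum_k C(p, k) * Δᵏf0, with p = (x − x0)/h   (formula (5.30))
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    diffs, col = [ys[0]], list(ys)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])            # Δᵏf0 is the top entry of the kth column
    total, binom = 0.0, 1.0
    for k, d in enumerate(diffs):
        total += binom * d
        binom *= (p - k) / (k + 1)      # C(p, k+1) from C(p, k)
    return total

xs = [0.5, 0.7, 0.9, 1.1, 1.3, 1.5]
ys = [0.47943, 0.64422, 0.78333, 0.89121, 0.96356, 0.99749]
approx = newton_forward(xs, ys, 0.54)   # ≈ 0.51414, close to sin 0.54
```

With p = 0 the series collapses to f0, so tabulated points are reproduced exactly.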
EXAMPLE 5.21
Find the cubic polynomial f(x) which takes on the values

        f(0) = −4, f(1) = −1, f(2) = 2, f(3) = 11, f(4) = 32, f(5) = 71.

Find f(6) and f(2.5).

Solution. The difference table for the given data is

x    f(x)   Δf(x)   Δ²f(x)   Δ³f(x)
0    −4
              3
1    −1               0
              3                  6
2     2               6
              9                  6
3    11              12
             21                  6
4    32              18
             39
5    71

Using Newton's forward difference formula, we have

        fx = f0 + xΔf0 + (x(x − 1)/2!)Δ²f0 + (x(x − 1)(x − 2)/3!)Δ³f0
           = −4 + 3x + ((x² − x)/2)(0) + ((x³ − 3x² + 2x)/6)(6)
           = x³ − 3x² + 5x − 4,

which is the required cubic polynomial. Therefore,

        f(6) = 6³ − 3(6²) + 5(6) − 4 = 216 − 108 + 30 − 4 = 134.

On the other hand, if we calculate f(6) using Newton's forward difference formula directly, we take x0 = 0, p = (6 − 0)/1 = 6, and have

        f6 = f0 + 6Δf0 + (6·5/2)Δ²f0 + (6·5·4/6)Δ³f0 = −4 + 6(3) + 15(0) + 20(6) = 134,

the exact value of f(6). Again, taking x0 = 2, we have p = (2.5 − 2.0)/1 = 0.5. Therefore,

        f2.5 = f0 + pΔf0 + (p(p − 1)/2!)Δ²f0 + (p(p − 1)(p − 2)/3!)Δ³f0
             = 2 + 0.5(9) + (0.5)(−0.5)/2 (12) + (0.5)(−0.5)(−1.5)/6 (6)
             = 2 + 4.50 − 1.50 + 0.375 = 5.375,

the exact value of f(2.5).

EXAMPLE 5.22
Find a cubic polynomial in x for the following data:

x   0    1   2    3    4    5
y   −3   3   11   27   57   107

Solution. The difference table for the given data is

x    y     Δy    Δ²y    Δ³y
0    −3
            6
1     3            2
            8             6
2    11            8
           16             6
3    27           14
           30             6
4    57           20
           50
5   107

Using Newton's forward difference formula, we have

        fx = f0 + xΔf0 + (x(x − 1)/2!)Δ²f0 + (x(x − 1)(x − 2)/3!)Δ³f0
           = −3 + 6x + ((x² − x)/2)(2) + ((x³ − 3x² + 2x)/6)(6)
           = x³ − 2x² + 7x − 3.

EXAMPLE 5.23
The area A of a circle of diameter d is given for the following values:

d   80     85     90     95     100
A   5026   5674   6362   7088   7854

Calculate the area of a circle of diameter 105.

Solution. The difference table for the given data is

d     A      ∇      ∇²     ∇³    ∇⁴
80   5026
             648
85   5674           40
             688          −2
90   6362           38            4
             726           2
95   7088           40
             766
100  7854

Letting xp = 105, x0 = 100, and p = (105 − 100)/5 = 1, we use Newton's backward difference formula

        fp = f0 + p∇f0 + (p(p + 1)/2!)∇²f0 + (p(p + 1)(p + 2)/3!)∇³f0 + (p(p + 1)(p + 2)(p + 3)/4!)∇⁴f0.

Therefore,

        f(105) = 7854 + 766 + 40 + 2 + 4 = 8666.

Remark 5.2. We note (in the above example) that if a tabulated function is a polynomial, then interpolation and extrapolation would give exact values.

(C) Central Difference Formula
Let ..., f−2, f−1, f0, f1, f2, ... be the values of a function f for ..., x0 − 2h, x0 − h, x0, x0 + h, x0 + 2h, .... Then
        δE^(1/2) = (E^(1/2) − E^(−1/2))E^(1/2) = E − I = Δ,

and so

        Δ = δE^(1/2),  Δ² = δ²E,  Δ³ = δ³E^(3/2).

Thus,

        Δf−2 = δE^(1/2)f−2 = δf−3/2,  Δf−1 = δf−1/2,  Δf0 = δf1/2,  Δf1 = δf3/2,

and so on. Similarly,

        Δ²f−2 = δ²Ef−2 = δ²f−1,  Δ²f−1 = δ²f0,  Δ²f0 = δ²f1,

and so on. Further,

        Δ³f−2 = δ³E^(3/2)f−2 = δ³f−1/2,  Δ³f−1 = δ³f1/2,

and

        Δ⁴f−2 = δ⁴E²f−2 = δ⁴f0,

and so on. Hence, the central difference table is expressed as

x      f(x)     δfx         δ²fx       δ³fx        δ⁴fx
x−2    f−2
               δf−3/2
x−1    f−1                 δ²f−1
               δf−1/2                 δ³f−1/2
x0     f0                  δ²f0                    δ⁴f0
               δf1/2                  δ³f1/2
x1     f1                  δ²f1
               δf3/2
x2     f2
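The lowest-order entries of the central difference table translate directly into code (a sketch; the sanity check on f(x) = x² with h = 1 is my own choice):

```python
def second_central(fm1, f0, f1):
    # δ²f0 = f1 − 2 f0 + f−1
    return f1 - 2 * f0 + fm1

def mean_first_central(fm1, f1):
    # μδf0 = ½(δf_{1/2} + δf_{−1/2}) = (f1 − f−1)/2
    return (f1 - fm1) / 2

# sanity check on f(x) = x² around x0 = 3, using f(2), f(3), f(4)
d2 = second_central(4.0, 9.0, 16.0)     # equals 2 = h²·f''(3) exactly here
md = mean_first_central(4.0, 16.0)      # equals 6 = h·f'(3) exactly here
```

These two quantities, δ²f0 and μδf0, are precisely the building blocks of the central interpolation formulae that follow.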
Now we are in a position to develop central difference interpolation formulae.

(C1) Gauss's Forward Interpolation Formula: Let ..., f−2, f−1, f0, f1, f2, ... be the values of the function f at ..., x0 − 2h, x0 − h, x0, x0 + h, x0 + 2h, .... Suppose that we want to compute the function value for x = x0 + ph. In Gauss's forward formula, we use the differences δf1/2, δ²f0, δ³f1/2, δ⁴f0, .... The value fp can be written as

        fp = f0 + g1δf1/2 + g2δ²f0 + g3δ³f1/2 + g4δ⁴f0 + ...,

where g1, g2, g3, ... are constants to be determined. Since δ = ΔE^(−1/2), we have

        δf1/2 = Δf0,  δ²f0 = Δ²(I + Δ)⁻¹f0,  δ³f1/2 = Δ³(I + Δ)⁻¹f0,  δ⁴f0 = Δ⁴(I + Δ)⁻²f0,

so the above equation can be written as

        (I + Δ)ᵖ = I + g1Δ + g2Δ²(I − Δ + Δ² − ...) + g3Δ³(I − Δ + ...) + g4Δ⁴(I − 2Δ + ...) + ... .

The left-hand side equals

        I + pΔ + (p(p − 1)/2!)Δ² + (p(p − 1)(p − 2)/3!)Δ³ + (p(p − 1)(p − 2)(p − 3)/4!)Δ⁴ + ... .

Comparing coefficients of the powers of Δ on both sides, we get

        g1 = p,
        g2 = p(p − 1)/2!,
        g3 − g2 = p(p − 1)(p − 2)/3!,  and so  g3 = (p + 1)p(p − 1)/3!,
        g4 − g3 + g2 = p(p − 1)(p − 2)(p − 3)/4!,  and so  g4 = (p + 1)p(p − 1)(p − 2)/4!,

and so on. Hence,

        fp = f0 + pδf1/2 + (p(p − 1)/2!)δ²f0 + ((p + 1)p(p − 1)/3!)δ³f1/2
               + ((p + 1)p(p − 1)(p − 2)/4!)δ⁴f0 + ...
           = f0 + C(p, 1)δf1/2 + C(p, 2)δ²f0 + C(p + 1, 3)δ³f1/2 + C(p + 1, 4)δ⁴f0 + C(p + 2, 5)δ⁵f1/2 + ...,

where C(a, k) denotes the generalized binomial coefficient a(a − 1)...(a − k + 1)/k!.
(C2) Gauss's Backward Interpolation Formula: In Gauss's backward formula, we use the differences δf−1/2, δ²f0, δ³f−1/2, δ⁴f0, .... Thus, fp can be written as

        fp = f0 + g1δf−1/2 + g2δ²f0 + g3δ³f−1/2 + g4δ⁴f0 + ...,

where g1, g2, g3, ... are constants to be determined. Since

        δf−1/2 = Δ(I + Δ)⁻¹f0,  δ²f0 = Δ²(I + Δ)⁻¹f0,  δ³f−1/2 = Δ³(I + Δ)⁻²f0,  δ⁴f0 = Δ⁴(I + Δ)⁻²f0,

the above equation gives

        (I + Δ)ᵖ = I + g1Δ(I − Δ + Δ² − ...) + g2Δ²(I − Δ + ...) + g3Δ³(I − 2Δ + ...) + g4Δ⁴(I − ...) + ... .

Comparing coefficients of the powers of Δ with

        (I + Δ)ᵖ = I + pΔ + (p(p − 1)/2!)Δ² + (p(p − 1)(p − 2)/3!)Δ³ + ...,

we get

        g1 = p,
        g2 − g1 = p(p − 1)/2!,  and so  g2 = (p + 1)p/2!,
        g3 − g2 + g1 = p(p − 1)(p − 2)/3!,  and so  g3 = (p + 1)p(p − 1)/3!,
        g4 = (p + 2)(p + 1)p(p − 1)/4!,

and so on. Hence,

        fp = f0 + pδf−1/2 + ((p + 1)p/2!)δ²f0 + ((p + 1)p(p − 1)/3!)δ³f−1/2
               + ((p + 2)(p + 1)p(p − 1)/4!)δ⁴f0 + ...
           = f0 + C(p, 1)δf−1/2 + C(p + 1, 2)δ²f0 + C(p + 1, 3)δ³f−1/2 + C(p + 2, 4)δ⁴f0 + ... .
(C3) Stirling's Interpolation Formula: In this formula, we use the differences δf−1/2, δf1/2, δ²f0, δ³f−1/2, δ³f1/2, δ⁴f0, ....

By Gauss's forward interpolation formula, we have

        fp = f0 + C(p, 1)δf1/2 + C(p, 2)δ²f0 + C(p + 1, 3)δ³f1/2 + C(p + 1, 4)δ⁴f0 + ...,   (5.31)

and by Gauss's backward interpolation formula, we have

        fp = f0 + C(p, 1)δf−1/2 + C(p + 1, 2)δ²f0 + C(p + 1, 3)δ³f−1/2 + C(p + 2, 4)δ⁴f0 + ... .   (5.32)

Adding equations (5.31) and (5.32) and dividing by 2, we get

        fp = f0 + (p/2)[δf−1/2 + δf1/2] + ½[C(p, 2) + C(p + 1, 2)]δ²f0
               + ½C(p + 1, 3)[δ³f−1/2 + δ³f1/2] + ½[C(p + 1, 4) + C(p + 2, 4)]δ⁴f0 + ...
           = f0 + p μδf0 + (p²/2!)δ²f0 + (p(p² − 1)/3!) μδ³f0 + (p²(p² − 1)/4!)δ⁴f0 + ...,

which is the required Stirling's formula.

Second Method: We may write

        fp = f0 + S1[δf−1/2 + δf1/2] + S2δ²f0 + S3[δ³f−1/2 + δ³f1/2] + S4δ⁴f0 + ...,   (5.33)

where S1, S2, ... are constants to be determined. Converting the central differences to forward differences as before and comparing coefficients of the powers of Δ in

        (I + Δ)ᵖ = I + pΔ + (p(p − 1)/2!)Δ² + ...,

we get

        S1 = p/2,
        S2 = p(p − 1)/2! + S1 = p²/2,
        S3 = (p + 1)p(p − 1)/(2·3!),
        S4 = p²(p² − 1)/4!.

Thus,

        fp = f0 + (p/2)[δf−1/2 + δf1/2] + (p²/2)δ²f0 + ((p + 1)p(p − 1)/(2·3!))[δ³f−1/2 + δ³f1/2]
               + (p²(p² − 1)/4!)δ⁴f0 + ...,

which is again Stirling's formula.
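Stirling's formula through the δ⁴ term can be sketched as follows; the test function and step are arbitrary choices of mine, and the central differences are expanded explicitly from five symmetric values:

```python
import math

def stirling(f, x0, h, p):
    # Stirling's formula through δ⁴, from the values f(x0−2h) ... f(x0+2h)
    fm2, fm1, f0, f1, f2 = (f(x0 + k * h) for k in (-2, -1, 0, 1, 2))
    d1 = ((f0 - fm1) + (f1 - f0)) / 2                                  # μδf0
    d2 = f1 - 2 * f0 + fm1                                             # δ²f0
    d3 = ((f1 - 3*f0 + 3*fm1 - fm2) + (f2 - 3*f1 + 3*f0 - fm1)) / 2    # μδ³f0
    d4 = f2 - 4*f1 + 6*f0 - 4*fm1 + fm2                                # δ⁴f0
    return (f0 + p * d1 + p**2 / 2 * d2
            + p * (p**2 - 1) / 6 * d3
            + p**2 * (p**2 - 1) / 24 * d4)

approx = stirling(math.sin, 1.0, 0.1, 0.3)   # approximates sin(1.03)
```

Stirling's formula is most accurate for small |p|, that is, near the middle of the table, which is why x is taken close to x0 here.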
Finite Differences and Interpolation
155
(C4) Bessel’s Interpolation Formula Let K , f 2 , f 1 , f0 , f1 , f 2 ,K be the values of a function at K , x0 h x0 h x0 x0 + h x0 + 2h,K. Suppose that we want to compute the function value f p for x x0 + ph. In Bessel’s formula, we use the differences as indicated in the table below. In this method f0 f 1 , D 2 f0 2 f1 3 f 1 4 f0 4 f1 ,Kare used to approximate 2
2
f p . These values are shown in this difference table in boldface. Therefore, f p can be written in the form fp
f0 B1 f 1 B2
2
f0 D 2 f1
B3D 3 f 1
2
B4 (D 4 f0
4
f1 ) K
2
where B1 B2 ,K are the constants to be determined. x
f (x)
x 2
f 2
D 2 fx
D fx Df
3 2
f 1
x 1
D 2 f 1 Df
D3 f
1 2
f0
x0
2
1 2
f0
D f1
4
f0
f1
3
2
2
f1
x1
D 4 fx
D 3 fx
D 4 f1
D f1 2
Df3 2
x2
f2
The above equation can be written as

$$E^p f_0 = f_0 + B_1\,\delta E^{1/2} f_0 + B_2\left(\delta^2 f_0 + \delta^2 E f_0\right) + B_3\,\delta^3 E^{1/2} f_0 + B_4\left(\delta^4 f_0 + \delta^4 E f_0\right) + \cdots$$

or

$$(I+\Delta)^p = I + B_1\Delta + B_2\left(\frac{\Delta^2}{1+\Delta} + \Delta^2\right) + B_3\,\frac{\Delta^3}{1+\Delta} + B_4\left[\frac{\Delta^4}{(1+\Delta)^2} + \frac{\Delta^4}{1+\Delta}\right] + \cdots$$

or

$$(I+\Delta)^p = I + B_1\Delta + B_2\left[\Delta^2\left(I - \Delta + \Delta^2 - \cdots\right) + \Delta^2\right] + B_3\,\Delta^3\left(I - \Delta + \cdots\right) + B_4\left[\Delta^4\left(I - 2\Delta + \cdots\right) + \Delta^4\left(I - \Delta + \cdots\right)\right] + \cdots. \tag{5.34}$$

The left-hand side equals

$$I + p\Delta + \frac{p(p-1)}{2!}\Delta^2 + \frac{p(p-1)(p-2)}{3!}\Delta^3 + \frac{p(p-1)(p-2)(p-3)}{4!}\Delta^4 + \cdots. \tag{5.35}$$

Comparing coefficients of the powers of $\Delta$ in equations (5.34) and (5.35), we have
$$B_1 = p, \qquad 2B_2 = \frac{p(p-1)}{2!} \quad\text{and so}\quad B_2 = \frac{1}{2}\,\frac{p(p-1)}{2!},$$

$$-B_2 + B_3 = \frac{p(p-1)(p-2)}{3!} \quad\text{and so}\quad B_3 = \frac{p(p-1)(p-2)}{3!} + \frac{1}{2}\,\frac{p(p-1)}{2!} = \frac{p\left(p-\frac{1}{2}\right)(p-1)}{3!},$$

and

$$B_2 - B_3 + 2B_4 = \frac{p(p-1)(p-2)(p-3)}{4!},$$

which yields

$$B_4 = \frac{1}{2}\left[\frac{p(p-1)(p-2)(p-3)}{4!} + \frac{p(p-1)(p-2)}{3!}\right] = \frac{1}{2}\,\frac{(p+1)p(p-1)(p-2)}{4!}.$$

Similarly,

$$B_5 = \frac{(p+1)\,p\left(p-\frac{1}{2}\right)(p-1)(p-2)}{5!},$$
and so on. Therefore,

$$f_p = f_0 + p\,\delta f_{1/2} + \frac{p(p-1)}{2!}\left[\frac{\delta^2 f_0 + \delta^2 f_1}{2}\right] + \frac{p\left(p-\frac{1}{2}\right)(p-1)}{3!}\,\delta^3 f_{1/2} + \frac{(p+1)p(p-1)(p-2)}{4!}\left[\frac{\delta^4 f_0 + \delta^4 f_1}{2}\right] + \cdots$$

$$= f_0 + p\,\delta f_{1/2} + \binom{p}{2}\mu\delta^2 f_{1/2} + \frac{p\left(p-\frac{1}{2}\right)(p-1)}{3!}\,\delta^3 f_{1/2} + \binom{p+1}{4}\mu\delta^4 f_{1/2} + \cdots.$$

Since $f_0 = \mu f_{1/2} - \frac{1}{2}\delta f_{1/2}$, this can be rewritten as

$$f_p = \mu f_{1/2} + \left(p - \frac{1}{2}\right)\delta f_{1/2} + \binom{p}{2}\mu\delta^2 f_{1/2} + \frac{1}{3}\left(p - \frac{1}{2}\right)\binom{p}{2}\delta^3 f_{1/2} + \binom{p+1}{4}\mu\delta^4 f_{1/2} + \cdots.$$

If we put $p = \frac{1}{2}$, we get

$$f_{1/2} = \mu f_{1/2} - \frac{1}{8}\,\mu\delta^2 f_{1/2} + \frac{3}{128}\,\mu\delta^4 f_{1/2} - \cdots,$$

which is called the formula for interpolating to halves or the formula for halving an interval. It is used for computing values of the function midway between any two given values.
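The halving formula can be verified numerically. In the sketch below (the helper `halve` and the test polynomial are our own illustrative choices, not from the text), six tabulated values supply the mean values needed; since every odd-order term of Bessel's formula carries the factor $(p - \frac{1}{2})$, the truncation at the fourth difference is exact for polynomials up to degree five.

```python
# Interval-halving formula: f_{1/2} = mu f - (1/8) mu d2 + (3/128) mu d4,
# where each "mu" term is the mean of the two bracketing entries.

def halve(f):
    """f = [f_-2, f_-1, f_0, f_1, f_2, f_3]; estimate of f_{1/2}."""
    d1 = [f[i + 1] - f[i] for i in range(5)]
    d2 = [d1[i + 1] - d1[i] for i in range(4)]  # second diffs at -1, 0, 1, 2
    d3 = [d2[i + 1] - d2[i] for i in range(3)]
    d4 = [d3[i + 1] - d3[i] for i in range(2)]  # fourth diffs at 0 and 1
    mu_f = (f[2] + f[3]) / 2
    mu_d2 = (d2[1] + d2[2]) / 2
    mu_d4 = (d4[0] + d4[1]) / 2
    return mu_f - mu_d2 / 8 + 3 * mu_d4 / 128

# Exact for an arbitrary quintic, since the odd terms vanish at p = 1/2.
P = lambda x: x**5 - 4 * x**2 + x
x0, h = 1.0, 0.25
vals = [P(x0 + k * h) for k in range(-2, 4)]
print(abs(halve(vals) - P(x0 + h / 2)) < 1e-9)  # True
```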
(C5) Everett's Interpolation Formula
Let $\ldots, f_{-2}, f_{-1}, f_0, f_1, f_2, \ldots$ be the values of the function $f$ at $\ldots, x_0 - 2h, x_0 - h, x_0, x_0 + h, x_0 + 2h, \ldots$. Suppose that we want to compute the function value for $x = x_0 + ph$. In Everett's formula, we use differences of even order only. Thus, we use the values $f_0$, $f_1$, $\delta^2 f_0$, $\delta^2 f_1$, $\delta^4 f_0$, $\delta^4 f_1, \ldots$, which are shown in boldface in the difference table below:

x        f(x)      δf_x         δ²f_x      δ³f_x        δ⁴f_x
x_{-2}   f_{-2}
                   δf_{-3/2}
x_{-1}   f_{-1}                 δ²f_{-1}
                   δf_{-1/2}               δ³f_{-1/2}
x_0      f_0                    δ²f_0                   δ⁴f_0
                   δf_{1/2}                δ³f_{1/2}
x_1      f_1                    δ²f_1                   δ⁴f_1
                   δf_{3/2}
x_2      f_2
By Bessel’s formula, we have ¤ 1³ p p ´ ( 1) µ ¦ p( p 1) § D f D f ¶ D 3 f1 f 0 pD f 1 ¨ · ! 2 2 3 © ¸ 2 2 4 4 ¶ § ( 1) p( 1) p( 2) D f D f1 · K ¨ 2 4! ¸ © 2
fp
2
Since Everett’s formula expresses f p in terms of even differences lying on the horizontal lines through f 0 and f 1, therefore we convert D 1 D 3 f 1 ,K in the Bessel’s formula into even differences. 2
2
By doing so, we have
fp
¤ p¥ ¦
p( p 1) § D f0 D f1 ¶ ¨ · 2 2 © ¸ 4 4 ¶ § f1 ( p 1 p( p 1)( 2) D f0 · K ¨ 4 2 ¸ © 2
f0 p f1 f0 )
( ) f0 pff1
(
2
1³ ( p 1) 2µ [ 3!
2
f1
2
f0 ]
p( )( ) 2 ( ) p( )( ) ) 4 D f0 D f0 K 5! 3! ) p( ) 2 ( ) ) p( )( ) 4 D f1 D f1 K 3! 5!
Finite Differences and Interpolation
159
¤ 2 p³ 2 ¤ 3 p³ 4 D f0 ¥ D f0 K ( ) f0 ¥ ´ ¦ 3 µ ¦ 5 ´µ ¤ p 1³ 2 ¤ p 2³ 4 p ¥ D f ¥ D f K ´ ¦ 3 µ ¦ 5 ´µ ¤ q 1³ 2 ¤ q 2³ 4 D f0 ¥ D f0 K qff0 ¥ ´ ¦ 3 µ ¦ 5 ´µ ¤ p 1³ 2 ¤ p 2³ 4 D f1 ¥ D f1 K, pff1 ¥ ´ ¦ 3 µ ¦ 5 ´µ where q = 1 − p. Second Method: Let K f 2 f 1 f0 f1 f 2 K be the values of a function f for K x0 h x0 h x0 x0 + h x0 + 2h K. We want to compute f p, where x x0 + ph. We use the even differences lying on the horizontal lines through f 0 and f 1. So let fp
E0 f0
E4D 4 f0 K
E2D 2 f0
F0 f1 F2
f1 F4
2
4
f1 K.
Therefore,

$$E^p f_0 = E_0 f_0 + E_2\,\delta^2 f_0 + E_4\,\delta^4 f_0 + \cdots + F_0\,E f_0 + F_2\,\delta^2 E f_0 + F_4\,\delta^4 E f_0 + \cdots$$

or

$$(I+\Delta)^p = E_0 + E_2\,\frac{\Delta^2}{1+\Delta} + E_4\,\frac{\Delta^4}{(1+\Delta)^2} + \cdots + F_0(I+\Delta) + F_2\,\Delta^2 + F_4\,\frac{\Delta^4}{1+\Delta} + \cdots. \tag{5.36}$$

The left-hand side of equation (5.36) is

$$I + p\Delta + \frac{p(p-1)}{2!}\Delta^2 + \frac{p(p-1)(p-2)}{3!}\Delta^3 + \frac{p(p-1)(p-2)(p-3)}{4!}\Delta^4 + \cdots,$$
whereas the right-hand side is

$$E_0 + E_2\left[\Delta^2\left(I - \Delta + \Delta^2 - \cdots\right)\right] + E_4\left[\Delta^4\left(I - 2\Delta + \cdots\right)\right] + \cdots + F_0(I+\Delta) + F_2\,\Delta^2 + F_4\left[\Delta^4\left(I - \Delta + \cdots\right)\right] + \cdots.$$

Comparing coefficients of $\Delta, \Delta^2, \Delta^3, \ldots$ on both sides, we get

$$F_0 = p, \qquad E_0 + F_0 = 1 \quad\text{and so}\quad E_0 = 1 - p,$$

$$-E_2 = \frac{p(p-1)(p-2)}{3!} \quad\text{and so}\quad E_2 = \binom{2-p}{3},$$

$$E_2 + F_2 = \frac{p(p-1)}{2!} \quad\text{and so}\quad F_2 = \binom{p+1}{3} = \frac{p(p^2-1)}{3!}.$$
Similarly, other coefficients are obtained. Hence,

$$f_p = (1-p)f_0 + \binom{2-p}{3}\delta^2 f_0 + \binom{3-p}{5}\delta^4 f_0 + \cdots + p f_1 + \binom{p+1}{3}\delta^2 f_1 + \binom{p+2}{5}\delta^4 f_1 + \cdots$$

$$= q f_0 + \binom{q+1}{3}\delta^2 f_0 + \binom{q+2}{5}\delta^4 f_0 + \cdots + p f_1 + \binom{p+1}{3}\delta^2 f_1 + \binom{p+2}{5}\delta^4 f_1 + \cdots,$$

where $q = 1 - p$.
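Everett's formula is convenient to program because it needs only the even differences on the lines through $f_0$ and $f_1$. A hedged sketch (the helper name and test polynomial are our own illustrative choices): truncated after the fourth differences, the formula interpolates all six underlying nodes, so it is exact for polynomials up to degree five.

```python
# Everett's formula through fourth differences:
#   f_p = q f0 + q(q^2-1)/3! d2f0 + q(q^2-1)(q^2-4)/5! d4f0
#       + p f1 + p(p^2-1)/3! d2f1 + p(p^2-1)(p^2-4)/5! d4f1,  q = 1 - p.

def everett(f, p):
    """f = [f_-2, f_-1, f_0, f_1, f_2, f_3]; estimate of f_p at x0 + p*h."""
    q = 1 - p
    d1 = [f[i + 1] - f[i] for i in range(5)]
    d2 = [d1[i + 1] - d1[i] for i in range(4)]
    d3 = [d2[i + 1] - d2[i] for i in range(3)]
    d4 = [d3[i + 1] - d3[i] for i in range(2)]
    d2f0, d2f1 = d2[1], d2[2]       # even differences through f0 and f1
    d4f0, d4f1 = d4[0], d4[1]
    return (q * f[2] + q * (q*q - 1) / 6 * d2f0
            + q * (q*q - 1) * (q*q - 4) / 120 * d4f0
            + p * f[3] + p * (p*p - 1) / 6 * d2f1
            + p * (p*p - 1) * (p*p - 4) / 120 * d4f1)

P = lambda x: 2 * x**5 - x**3 + 3   # arbitrary quintic for the check
x0, h, p = 0.0, 0.1, 0.37
vals = [P(x0 + k * h) for k in range(-2, 4)]
print(abs(everett(vals, p) - P(x0 + p * h)) < 1e-9)  # True
```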
Remark 5.3. The Gauss forward, Gauss backward, Stirling, Bessel, Everett, Newton forward, and Newton backward interpolation formulae are called classical formulae and are used for equally spaced arguments.

EXAMPLE 5.24
The function y is given in the table below:

x    0.01      0.02      0.03      0.04      0.05
y    98.4342   48.4392   31.7775   23.4492   18.4542
Find y for x = 0.0341.

Solution. The central difference table is

x       y          δ           δ²          δ³          δ⁴
0.01    98.4342
                   -49.9950
0.02    48.4392                33.3333
                   -16.6617                -24.9999
0.03    31.7775                 8.3334                  19.9998
                    -8.3283                 -5.0001
0.04    23.4492                 3.3333
                    -4.9950
0.05    18.4542

Letting $x_0 = 0.03$, we have

$$p = \frac{x - x_0}{h} = \frac{0.0341 - 0.030}{0.01} = 0.41.$$

Using Bessel's formula, we have

$$f_p = f_0 + p\,\delta f_{1/2} + \frac{p(p-1)}{4}\left(\delta^2 f_0 + \delta^2 f_1\right) + \frac{p\left(p-\frac{1}{2}\right)(p-1)}{3!}\,\delta^3 f_{1/2} + \frac{(p+1)p(p-1)(p-2)}{4!}\left(\frac{\delta^4 f_0 + \delta^4 f_1}{2}\right) + \cdots$$

$$= 31.7775 - 0.41(8.3283) - \frac{0.2419}{4}(11.6667) + \cdots$$
$$= 27.475924 \text{ approximately}.$$

EXAMPLE 5.25
If third differences are constant, prove that

$$y_{x+\frac{1}{2}} = \frac{1}{2}\left(y_x + y_{x+1}\right) - \frac{1}{16}\left(\Delta^2 y_{x-1} + \Delta^2 y_x\right).$$

Solution. Bessel's formula in this case becomes

$$y_p = \frac{y_0 + y_1}{2} + \left(p - \frac{1}{2}\right)\Delta y_0 + \frac{p(p-1)}{2!}\left[\frac{\Delta^2 y_{-1} + \Delta^2 y_0}{2}\right] + \frac{p\left(p-\frac{1}{2}\right)(p-1)}{3!}\,\Delta^3 y_{-1},$$

because the differences higher than the third order will be equal to zero by the hypothesis. Putting $p = \frac{1}{2}$, we get

$$y_{1/2} = \frac{y_0 + y_1}{2} - \frac{1}{16}\left(\Delta^2 y_{-1} + \Delta^2 y_0\right).$$

Changing the origin to $x$, we have

$$y_{x+\frac{1}{2}} = \frac{1}{2}\left(y_x + y_{x+1}\right) - \frac{1}{16}\left(\Delta^2 y_{x-1} + \Delta^2 y_x\right).$$
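The identity of Example 5.25 is easy to confirm numerically: any cubic has constant third differences, so the two sides must agree exactly (the particular cubic and the point x below are arbitrary illustrative choices).

```python
# Check: y_{x+1/2} = (y_x + y_{x+1})/2 - (D2 y_{x-1} + D2 y_x)/16
# holds exactly whenever third differences are constant (y a cubic).
y = lambda t: t**3 - 2 * t**2 + 5 * t - 7      # arbitrary illustrative cubic
d2 = lambda t: y(t + 2) - 2 * y(t + 1) + y(t)  # second forward difference at t
x = 3
lhs = y(x + 0.5)
rhs = (y(x) + y(x + 1)) / 2 - (d2(x - 1) + d2(x)) / 16
print(abs(lhs - rhs) < 1e-9)  # True
```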
EXAMPLE 5.26
Given $y_0, y_1, y_2, y_3, y_4, y_5$ (fifth differences constant), prove that

$$y_{5/2} = \frac{c}{2} + \frac{25(c-b) + 3(a-c)}{256},$$

where $a = y_0 + y_5$, $b = y_1 + y_4$, $c = y_2 + y_3$.

Solution. Putting $p = \frac{1}{2}$ in the Bessel's formula, we have

$$y_{1/2} = \frac{y_0 + y_1}{2} - \frac{1}{8}\left(\frac{\Delta^2 y_{-1} + \Delta^2 y_0}{2}\right) + \frac{3}{128}\left(\frac{\Delta^4 y_{-2} + \Delta^4 y_{-1}}{2}\right).$$

Shifting the origin to 2, we obtain

$$y_{5/2} = \frac{1}{2}\left(y_2 + y_3\right) - \frac{1}{16}\left(\Delta^2 y_1 + \Delta^2 y_2\right) + \frac{3}{256}\left(\Delta^4 y_0 + \Delta^4 y_1\right). \tag{5.37}$$

But

$$\Delta^2 y_1 = y_3 - 2y_2 + y_1, \qquad \Delta^4 y_0 = y_4 - 4y_3 + 6y_2 - 4y_1 + y_0, \quad\text{etc.}$$

Substituting these values in equation (5.37), we get the required result.
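As a numerical sanity check of Example 5.26: a degree-5 polynomial has constant fifth differences, so the closed form must reproduce its midpoint value exactly (the quintic below is an arbitrary illustrative choice).

```python
# Verify y_{5/2} = c/2 + (25(c - b) + 3(a - c))/256 for an arbitrary quintic,
# where a = y0 + y5, b = y1 + y4, c = y2 + y3.
y = lambda t: t**5 - 3 * t**4 + 2 * t - 1
ys = [y(t) for t in range(6)]          # y_0 .. y_5
a = ys[0] + ys[5]
b = ys[1] + ys[4]
c = ys[2] + ys[3]
rhs = c / 2 + (25 * (c - b) + 3 * (a - c)) / 256
print(abs(y(2.5) - rhs) < 1e-9)  # True
```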
5.7 USE OF INTERPOLATION FORMULAE
We know that the Newton formulae with forward and backward differences are most appropriate for calculation near the beginning and the end, respectively, of the tabulation, and their use is mainly restricted to such situations. The Gaussian forward and backward formulae terminated with an even difference are equivalent to each other and to Stirling's formula terminated with the same difference. The Gaussian forward formula terminated with an odd difference is equivalent to the Bessel formula terminated with the same difference. The Gaussian backward formula launched from $x_0$ and terminated with an odd difference is equivalent to the Bessel formula launched from $x_{-1}$ and terminated with the same difference. Thus, in place of using a Gaussian formula, we may use an equivalent formula of either Stirling or Bessel, for which the coefficients are extensively tabulated.

To interpolate near the middle of a given table, Stirling's formula gives the most accurate result for $-\frac{1}{4} \le p \le \frac{1}{4}$, and Bessel's formula is most efficient near $p = \frac{1}{2}$, say $\frac{1}{4} \le p \le \frac{3}{4}$. When the highest difference to be retained is odd, Bessel's formula is recommended, and when the highest difference to be retained is even, Stirling's formula is preferred.

In the case of Stirling's formula, the term containing the third difference, viz.,

$$\frac{p(p^2-1)}{6}\,\mu\delta^3 f_0,$$

may be neglected if its contribution to the interpolation is less than half a unit in the last place. This means that

$$\left|\frac{p(p^2-1)}{6}\,\mu\delta^3 f_0\right| < \frac{1}{2} \quad\text{for all } p \text{ in the range } 0 \le p \le 1.$$

But the maximum value of $\left|\dfrac{p(p^2-1)}{6}\right|$ on this range is 0.064, and so we have

$$0.064\left|\mu\delta^3 f_0\right| < \frac{1}{2}, \quad\text{or}\quad \left|\mu\delta^3 f_0\right|