
NUMERICAL METHODS FOR ENGINEERS (with programs in MATLAB)

By

Dr. Arti Kaushik (Assistant Professor, Department of Mathematics, Maharaja Agrasen Institute of Technology, Rohini Sec-22, Delhi)

STANDARD BOOK HOUSE unit of :

RAJSONS PUBLICATIONS PVT. LTD.

1705-A, Nai Sarak, PB.No. 1074, Delhi-110006 Ph.: +91-(011)-23265506 Show Room: 4262/3, First Lane, G-Floor, Gali Punjabian, Ansari Road, Darya Ganj, New Delhi-110002 Ph.: +91-(011) 43751128 Tel Fax : +91-(011)43551185, Fax: +91-(011)-23250212 E-mail: [email protected] www.standardbookhouse.in

Published by:

RAJINDER KUMAR JAIN Standard Book House Unit of: Rajsons Publications Pvt. Ltd.

1705-A, Nai Sarak, Delhi - 110006 Post Box: 1074 Ph.: +91-(011)-23265506 Fax: +91-(011)-23250212 Showroom: 4262/3, First Lane, G-Floor, Gali Punjabian Ansari Road, Darya Ganj New Delhi-110002 Ph.: +91-(011)-43751128, +91-(011)-43551185 E-mail: sbhl0@hotmail.com Web: www.standardbookhouse.in

First Edition : 2018

© Publishers

All rights are reserved with the Publishers. This book, or any part thereof, may not be reproduced, represented or photocopied in any manner without the prior written permission of the Publishers. Price: Rs. 280.00, US $ 25

ISBN : 978-81-89401-61-0

Typeset by: N.D. Enterprises, Delhi.

Printed by: R.K. Print Media Company, New Delhi

Preface to the First Edition

I am delighted to present to my readers, students and teachers, this book on Numerical Methods with codes in MATLAB and C++. This book has been written primarily for undergraduate students studying Numerical Analysis courses in universities and engineering colleges. The content covers both the basic concepts of numerical methods and more advanced topics such as Partial Differential Equations.

The book has been designed with the primary goal of providing students with a sound introduction to numerical methods and of making the learning a pleasurable experience. The content is arranged in a logical manner with clarity in presentation. The book includes numerous examples which help the students become more and more proficient in applying the methods. A salient feature of the book is the computer programs written in C++ and also in MATLAB.

I have made conscious efforts to make the book student friendly. I hope the students will find the book engaging and helpful. Sincere efforts have been made to eliminate printing errors and other mistakes, but perfection cannot be claimed. I shall be grateful to my readers for constructive suggestions to improve the book.

I am thankful to my wonderful daughter Aanya for understanding my busy schedule while I was working on this book. I dedicate this book to my parents Sh. K.K. Kaushik and Smt. S. Kaushik, who continuously supported and encouraged me to complete this book.

Dr. Arti Kaushik
1 May, 2018

Contents

CHAPTER 1 ERRORS AND APPROXIMATIONS
1.1 Introduction
1.2 Accuracy of Numbers
1.3 Inherent Errors
1.4 Numerical Errors
1.5 Modeling Errors
1.6 Blunders
1.7 Some Fundamental Definitions of Error Analysis
1.8 Error Propagation
1.9 Conditioning and Stability
Exercises

CHAPTER 2 ROOTS OF NON-LINEAR EQUATIONS
2.1 Introduction
2.2 Basic Properties of Equations
2.3 Iterative Methods
2.4 Order of Convergence
2.5 Bisection Method (Bolzano Method)
2.6 Regula Falsi Method (Method of False Position)
2.7 Secant Method
2.8 Fixed Point Method
2.9 Newton-Raphson Method
2.10 Muller's Method
2.11 Complex Roots
Exercises

CHAPTER 3 SOLUTION OF SIMULTANEOUS LINEAR ALGEBRAIC EQUATIONS
3.1 Introduction
3.2 Existence of Solution
3.3 Direct Methods of Solution
3.4 Iterative Methods
3.5 Convergence of Iteration Methods
3.6 Ill-Conditioned Equations
Exercises

CHAPTER 4 EMPIRICAL LAWS AND CURVE FITTING
4.1 Introduction
4.2 Linear Law
4.3 Laws Reducible to Linear Law
4.4 Method of Group Averages
4.5 Equations Involving Three Constants
4.6 Principle of Least Squares
4.7 Method of Moments
Exercises

CHAPTER 5 CALCULUS OF FINITE DIFFERENCES
5.1 Introduction
5.2 Finite Differences
5.3 Operators
5.4 Properties of the Operators
5.5 Relations Between the Operators
5.6 Difference of a Polynomial
5.7 Factorial Polynomial
5.8 Inverse Operator D⁻¹
5.9 Summation of Series
Exercises

CHAPTER 6 INTERPOLATION FOR EQUAL INTERVALS
6.1 Introduction
6.2 Gregory-Newton Forward Interpolation Formula (Newton's Forward Interpolation Formula)
6.3 Gregory-Newton Backward Interpolation Formula
6.4 Central Difference Interpolation Formulae
6.5 Gauss Forward Interpolation Formula
6.6 Gauss Backward Interpolation Formula
6.7 Stirling's Formula
6.8 Bessel's Formula
6.9 Laplace-Everett's Formula
6.10 Advantages of Central Difference Interpolation Formula
Exercises

CHAPTER 7 INTERPOLATION WITH UNEQUAL INTERVALS
7.1 Introduction
7.2 Divided Differences
7.3 Properties of Divided Differences
7.4 Newton's Divided Difference Interpolation Formula
7.5 Lagrange's Interpolation Formula
7.6 Inverse Interpolation
Exercises

CHAPTER 8 NUMERICAL DIFFERENTIATION
8.1 Introduction
8.2 Derivatives Using Newton's Forward Difference Formula
8.3 Derivatives Using Newton's Backward Difference Formula
8.4 Derivatives Using Stirling's Formula
8.5 Maxima and Minima of a Function
Exercises

CHAPTER 9 NUMERICAL INTEGRATION
9.1 Introduction
9.2 Newton-Cotes Quadrature Formula
9.3 Trapezoidal Rule
9.4 Romberg's Method
9.5 Simpson's One-Third Rule
9.6 Simpson's Three-Eighth Rule
9.7 Boole's Rule
9.8 Weddle's Rule
9.9 Numerical Methods for Double Integrals
Exercises

CHAPTER 10 NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS
10.1 Introduction
10.2 Solution of Differential Equations
10.3 Taylor Series Method
10.4 Picard's Method
10.5 Euler's Method
10.6 Modified Euler's Method
10.7 Runge-Kutta Methods
10.8 Predictor-Corrector Methods
10.9 Milne's Predictor-Corrector Method
10.10 Adam-Bashforth Method
Exercises

CHAPTER 11 NUMERICAL SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS
11.1 Introduction
11.2 Difference Quotient
11.3 Finite Difference Approximations to Partial Derivatives
11.4 Elliptic Equations
11.4.1 Liebmann's Iterative Method
11.5 Parabolic Equations
11.6 Hyperbolic Equations
Exercises

CHAPTER 12 PROGRAMMING TOOLS AND TECHNIQUES IN MATLAB AND C++
12.1 Introduction
12.2 C++
12.3 Programs of Standard Methods in C++
12.4 MATLAB
12.5 Programs of Standard Methods in MATLAB

Index

CHAPTER 1 ERRORS AND APPROXIMATIONS

1.1 INTRODUCTION

Numerical techniques are widely used by scientists and engineers for solving their problems. These techniques provide answers even to those problems which have no analytical solution. However, the results obtained from numerical techniques are only approximate solutions, which can be made as accurate as we desire. The accuracy of a result obtained using numerical techniques depends on the error estimate. This makes the analysis of errors and approximations important for the study of numerical methods. In this chapter we discuss the different types of errors and approximations, how they arise, and how they affect the results obtained using numerical techniques.

1.2 ACCURACY OF NUMBERS

1. Exact Numbers. These are numbers that are exact by definition. They are called exact because they have no uncertainty in their values; e.g., 7, 15/2, 22/4, etc. are exact numbers.
2. Approximate Numbers. The numbers which are not exact are called approximate numbers; e.g., $\sqrt{3} = 1.73\ldots$ is not exact due to the presence of infinitely many non-recurring digits. Other examples of approximate numbers are π and e, whose values are 3.142... and 2.718..., respectively.


3. Significant Figures. The significant figures are the digits in a number that express the precision of a measurement rather than its magnitude. There are certain rules for counting the number of significant digits:
(a) All non-zero digits are significant. Every 1, 2, 3, 4, 5, 6, 7, 8 and 9 counts as significant. The number 123,456,789 has nine significant digits.
(b) Leading and trailing zeros that are only placeholders are not significant; such zeros merely indicate the magnitude of the number. The placeholder zeros are not significant in 3,500,000 (2 significant figures) and in 0.000,007,070 (4 significant figures).
(c) All zeros between two other digits are significant. The number 1.023 has a significant zero, for a total of four significant digits.
(d) All zeros to the right of the decimal point and to the right of other digits are significant. For instance, the number 34.600 has five significant digits, two of which are zeros.
4. Accuracy. It refers to the number of significant digits in a value; e.g., 43.876 is accurate to five significant digits.
5. Precision. It refers to the number of decimal positions; e.g., 43.876 has a precision of 0.001.

Example 1.1 What is the accuracy of the following numbers? (a) 96.453 (b) 0.002345 (c) 4300.00 (d) 88
Solution: (a) 96.453 has five significant digits. (b) 0.002345 has four significant digits. (c) 4300.00 has six significant digits. (d) 88 has two significant digits.

Example 1.2 What is the precision of the following numbers? (a) 1.9086 (b) 5.76 (c) 9.63245
Solution: (a) 1.9086 has a precision of 0.0001. (b) 5.76 has a precision of 0.01. (c) 9.63245 has a precision of 0.00001.

1.3 INHERENT ERRORS

These are the errors which are present in the data supplied to the model. They are also called input errors. They consist of two components, namely data errors and conversion errors.


(a) Data Errors. These are also called empirical errors. They arise when the data for a problem are obtained from some experiment and hence have limited accuracy and precision. The accuracy of such data is limited by the unavoidable limitations of instrumentation and reading.
(b) Conversion Errors. These errors are also known as representation errors. They arise due to the limited ability of computers to store data exactly. We know that the floating-point representation retains only a specified number of digits; the digits that are not retained constitute this error.

1.4 NUMERICAL ERRORS

Numerical errors arise during the process of execution of a numerical method. They are also called procedural errors. These errors consist of two components, namely round-off errors and truncation errors. They can be reduced by designing suitable techniques for implementing the solution.
(a) Round-off Errors. Round-off error occurs because computers use a fixed number of bits, and hence a fixed number of digits, to represent numbers. In a numerical computation, round-off errors are introduced at every stage of the computation. Hence, though an individual round-off error at a given step may be small, the cumulative effect can be significant. When more digits are needed to represent a number than can be stored, the number is rounded to fit the available length. This is done either by chopping or by symmetric rounding.
Chopping. Rounding a number by chopping amounts to dropping the extra digits; the given number is simply truncated. Suppose that we are using a computer with a fixed word length of four digits. Then the truncated representation of the number 72.32451 will be 72.32; the digits 451 are dropped. To evaluate the error due to chopping, consider the normalized representation of the given number x, i.e.
\[ x = 72.32451 = 0.7232451 \times 10^2 = (0.7232 + 0.0000451) \times 10^2 = (0.7232 + 0.451 \times 10^{-4}) \times 10^2 \]
Therefore, the chopping error in representing x is $0.451 \times 10^{2-4}$.


So, in general, if x is the true value of a given number, $f_x \times 10^E$ is the normalized form of the rounded (chopped) number, and $g_x \times 10^{E-d}$ is the normalized form of the chopping error, then
\[ x = f_x \times 10^E + g_x \times 10^{E-d} \qquad \ldots(1) \]
Since $0 < g_x < 1$, the chopping error is less than $10^{E-d}$.
Symmetric Round-off Error. In the symmetric round-off method the last retained significant digit is rounded up by 1 if the first discarded digit is greater than or equal to 5. In other words, if $g_x$ in Eq. (1) is such that $|g_x| \geq 0.5$, then the last digit in $f_x$ is raised by 1 before chopping $g_x \times 10^{E-d}$. For example, let x = 72.918671 and y = 18.63421 be two given numbers to be rounded to five-digit numbers. The normalized forms of x and y are $0.7291867 \times 10^2$ and $0.1863421 \times 10^2$. On rounding these numbers to five digits we get $0.72919 \times 10^2$ and $0.18634 \times 10^2$ respectively. Now, with reference to Eq. (1), we have
\[ |\text{rounding error}| = |g_x| \times 10^{E-d} \ \text{ if } |g_x| < 0.5, \qquad |\text{rounding error}| = |g_x - 1| \times 10^{E-d} \ \text{ if } |g_x| \geq 0.5. \]
In either case, the error is at most $0.5 \times 10^{E-d}$.
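The two rounding modes can be mimicked in a few MATLAB statements. The following sketch was written for this discussion and is not one of the book's programs; the variable names E, chopped and rounded are ours.

    % Chopping vs. symmetric rounding of x to d significant digits (illustrative sketch).
    x = 72.32451;   d = 4;
    E = floor(log10(abs(x))) + 1;                  % exponent in x = f * 10^E, 0.1 <= |f| < 1
    chopped = fix(x * 10^(d - E)) * 10^(E - d)     % 72.3200, the digits 451 are dropped
    x = 72.918671;  d = 5;
    E = floor(log10(abs(x))) + 1;
    rounded = round(x * 10^(d - E)) * 10^(E - d)   % 72.9190, first discarded digit is 6 >= 5

Here fix drops the digits beyond the d-th (chopping), while round implements symmetric rounding of the last retained digit.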

(b) Truncation Errors. Often an approximation is used in place of an exact mathematical procedure. For instance, consider the Taylor series expansion of, say, $\sin x$:
\[ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \]
Practically we cannot use all of the infinite number of terms in the series for computing the sine of the angle x. We usually terminate the process after a certain number of terms. The error that results from such a termination or truncation is called the 'truncation error'. Usually, in evaluating logarithms, exponentials, trigonometric functions, hyperbolic functions, etc., an infinite series of the form
\[ S = \sum_{i=0}^{\infty} a_i x^i \]
is replaced by a finite series $S = \sum_{i=0}^{n} a_i x^i$. Thus, a truncation error of $\sum_{i=n+1}^{\infty} a_i x^i$ is introduced in the computation.

Example 1.3 Evaluate the exponential function $e^x$ (i) using the first three terms at x = 0.2, (ii) using the first four terms at x = 1, (iii) using the first six terms at x = 1.
Solution: We have
\[ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \frac{x^6}{6!} + \cdots \]
(i) Using the first three terms at x = 0.2:
\[ e^{0.2} \approx 1 + x + \frac{x^2}{2!} = 1 + 0.2 + \frac{0.2^2}{2!} = 1.22 \]
The neglected terms give the truncation error
\[ \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots = \frac{0.008}{6} + \frac{0.0016}{24} + \cdots = 0.00133 + 0.000066 + \cdots = 0.133 \times 10^{-2} + 0.0066 \times 10^{-2} + \cdots \]
Therefore, the truncation error is less than $10^{-2}$.
(ii) Using the first four terms at x = 1:
\[ e^x \approx 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} = 1 + 1 + \frac{1}{2} + \frac{1}{6} = 2.66666 \]
Therefore the truncation error is e - 2.66666666 = 2.71828183 - 2.66666666 = 0.05161517 = 5.1615 x 10^-2.
(iii) Using the first six terms at x = 1:
\[ e^x \approx 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} = 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac{1}{120} = 2.71666666 \]


Therefore the truncation error is e - 2.71666666 = 2.71828183 - 2.71666666 = 0.00161517 = 1.615 x 10^-3.

The concept of truncation error is important because many of the iterative methods used in numerical analysis are infinite processes and can be applied only by truncating them after a finite number of steps. While using these methods, we can reduce the truncation error by using a better numerical model, which usually increases the number of arithmetic operations. But in that case the round-off error usually increases, as it grows with the number of arithmetic operations. So care should be taken while choosing a particular numerical method so as to minimize the overall error in the solution.
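The same truncation behaviour can be checked numerically. The following is a minimal MATLAB sketch written for this discussion (not taken from the book's program listings); with n = 6 and x = 1 it reproduces part (iii) of Example 1.3.

    % Truncation error when exp(x) is computed from the first n terms of its series.
    x = 1;  n = 6;                             % keep the first six terms
    S = sum(x.^(0:n-1) ./ factorial(0:n-1));   % 1 + x + x^2/2! + ... = 2.716666...
    trunc_err = exp(x) - S                     % about 1.615e-3, as in Example 1.3(iii)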

1.5 MODELING ERRORS

Mathematical models form the basis for the numerical solution. They are developed to represent physical processes using certain parameters and variables which define the process. In many cases, it is not feasible to include all the parameters, or the whole of the process, in the formulation of the problem. Therefore, we introduce certain assumptions to simplify the problem. For example, while developing a simple model for calculating the velocity of a car, we may not be able to account for the air resistance or the condition of the road, and so on. All such neglected parameters introduce errors in the output of such models. Since a model is the basic input to the numerical process, no numerical method will give appropriate results if the model is developed erroneously. So it is important to formulate and develop a refined model incorporating more parameters and features of the problem; in this way we can reduce modelling errors. Many times, refining and improving the model results in a more complex model which may take more time for the implementation of the solution process. It is also not always true that a refined model provides better results. Therefore, it is necessary to maintain a balance between the level of accuracy and the complexity of the model. A model must include only those features that are essential to reduce the error to an acceptable level.

1.6 BLUNDERS

The errors which arise due to human imperfection are called blunders. These errors can lead to disastrous results. They may originate from wrong assumptions, errors in deriving the mathematical equations, selection of an inappropriate method, mistakes in data input, etc. These types of errors are easily avoidable by acquiring proper knowledge of the problem and learning all aspects of the numerical method to be used.

1.7 SOME FUNDAMENTAL DEFINITIONS OF ERROR ANALYSIS

(a) Absolute Error. Suppose that $x_t$ and $x_a$ denote the true and approximate values of a datum; then the error incurred on approximating $x_t$ by $x_a$ is $e = x_t - x_a$, and the absolute error $e_a$, i.e. the magnitude of the error, is $e_a = |x_t - x_a|$.
(b) Relative Error. The relative error or normalized error $e_r$ in representing a true datum $x_t$ by an approximate value $x_a$ is defined by
\[ e_r = \frac{\text{Absolute error}}{|\text{True value}|} = \frac{e_a}{|x_t|} = \frac{|x_t - x_a|}{|x_t|} = 1 - \frac{x_a}{x_t} \]
and $e_r\% = e_r \times 100$. Sometimes $e_r$ is defined with the approximate value in the denominator, i.e. $e_r = |x_t - x_a|/|x_a|$.

Example 1.4 If the true value is x = 1.41421 and the approximate value is x' = 1.414, calculate the absolute, relative and percentage errors.
Solution: Absolute error: e_a = 1.41421 - 1.414 = 0.00021
Relative error: e_r = (1.41421 - 1.414)/1.41421 = 0.00014849
Percentage error: e_r x 100 = 0.00014849 x 100 = 0.014849%

Example 1.5 If 0.333 is the approximate value of 1/3, then find its absolute, relative and percentage errors.
Solution: Given that the true value is x = 1/3 and the approximate value is x' = 0.333.
Absolute error: e_a = 1/3 - 0.333 = 0.000333
Relative error: e_r = 0.000333/0.333333 = 0.000999
Percentage error: e_r x 100 = 0.000999 x 100 = 0.0999%

Example 1.6 If x = 0.006789, find the absolute, relative and percentage errors if x is rounded off to three decimal places.
Solution: The value of x after rounding off to three decimal places is 0.007.
Absolute error: e_a = |0.006789 - 0.007| = 0.000211
Relative error: e_r = 0.000211/0.006789 = 0.031079
Percentage error: e_r x 100 = 0.031079 x 100 = 3.1079%

(c) Machine Epsilon. Let us assume that we have a decimal computer system. We know that we would encounter round-off error when a number is represented in floating-point form. The relative round-off error due to chopping is defined by
\[ e_r = \frac{g_x \times 10^{E-d}}{f_x \times 10^E} \]
Here we know that $g_x < 1.0$ and $f_x \geq 0.1$, so that
\[ e_r < \frac{1 \times 10^{E-d}}{0.1 \times 10^E} = 10^{-d+1}. \]
SOLUTION OF SIMULTANEOUS LINEAR ALGEBRAIC EQUATIONS

Triangular Factorization Method. In this method the coefficient matrix A is written as the product A = LU of a lower triangular matrix L with unit diagonal ($l_{11} = l_{22} = l_{33} = \dots = 1$) and an upper triangular matrix U. The elements of U and L are obtained from
\[ u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik}u_{kj}, \qquad j \geq i, \]
\[ l_{ij} = \frac{1}{u_{jj}}\left(a_{ij} - \sum_{k=1}^{j-1} l_{ik}u_{kj}\right), \qquad j = 1, 2, \dots, (i-1), \]
with $l_{i1} = a_{i1}/u_{11}$ for $i = 2, 3, \dots, n$. It is important to note that for the computation of any element we require the values of the elements in the previous columns as well as the values of the elements in the column above that element. This means that we should compute the elements column by column, from left to right, and within each column from top to bottom.

Example 3.10 Solve the system of equations using the Triangular Factorization method:
x + 5y + z = 14
2x + y + 3z = 13
3x + y + 4z = 17
Solution: The system can be written as AX = B, where
\[ A = \begin{pmatrix} 1 & 5 & 1 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \quad B = \begin{pmatrix} 14 \\ 13 \\ 17 \end{pmatrix} \]

Let
\[ \begin{pmatrix} 1 & 5 & 1 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \end{pmatrix} = LU = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix} \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix} \]
Notice that $u_{11} = 1$, $u_{12} = 5$, $u_{13} = 1$. Therefore
\[ LU = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix} \begin{pmatrix} 1 & 5 & 1 \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix} = \begin{pmatrix} 1 & 5 & 1 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \end{pmatrix} \]
Hence $l_{21} = 2$; $5l_{21} + u_{22} = 1$; $l_{21} + u_{23} = 3$. Therefore $l_{21} = 2$, $u_{22} = -9$, $u_{23} = 1$.
Again $l_{31} = 3$; $5l_{31} + l_{32}u_{22} = 1$; $l_{31} + l_{32}u_{23} + u_{33} = 4$. Therefore
\[ l_{32} = \frac{1 - 15}{-9} = \frac{14}{9}, \qquad u_{33} = -\frac{5}{9}. \]
Since LUX = B, putting UX = Z we have LZ = B, i.e.
\[ \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & \frac{14}{9} & 1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 14 \\ 13 \\ 17 \end{pmatrix} \]
which gives $z_1 = 14$; $2z_1 + z_2 = 13$; $3z_1 + \frac{14}{9}z_2 + z_3 = 17$. Solving, we get $z_1 = 14$, $z_2 = -15$, $z_3 = -\frac{5}{3}$.
Now UX = Z can be written as
\[ \begin{pmatrix} 1 & 5 & 1 \\ 0 & -9 & 1 \\ 0 & 0 & -\frac{5}{9} \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 14 \\ -15 \\ -\frac{5}{3} \end{pmatrix} \]
This reduces the system of equations to
x + 5y + z = 14, -9y + z = -15, -(5/9)z = -5/3,

which on back substitution gives z = 3, y = 2, x = 1 as the solution.
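A compact MATLAB transcription of this factorization is given below. It is our illustration, not one of the book's listed programs; it follows the recurrences stated above directly and uses no pivoting, so the pivots $u_{ii}$ are assumed to be nonzero.

    % Triangular (Doolittle) factorization A = L*U with unit diagonal in L,
    % applied to the system of Example 3.10.  No pivoting is performed.
    A = [1 5 1; 2 1 3; 3 1 4];   b = [14; 13; 17];
    n = size(A,1);   L = eye(n);   U = zeros(n);
    for i = 1:n
        for j = i:n                                   % i-th row of U
            U(i,j) = A(i,j) - L(i,1:i-1)*U(1:i-1,j);
        end
        for k = i+1:n                                 % i-th column of L
            L(k,i) = (A(k,i) - L(k,1:i-1)*U(1:i-1,i)) / U(i,i);
        end
    end
    z = L\b;   x = U\z                                % x = [1; 2; 3], as found above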

Crout's Method. This is another direct method in which we factorize the coefficient matrix A as LU using another approach. In this method we take
\[ L = \begin{pmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{pmatrix}, \qquad U = \begin{pmatrix} 1 & u_{12} & \cdots & u_{1n} \\ 0 & 1 & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \]

Next we follow similar steps to obtain the elements of L and U as used in the Triangular Factorization Method.

Example 3.11 Solve the system of equations using Crout's method:
x - y = 0
-2x + 4y - 2z = -1
-y + 2z = 3/2
Solution: The system can be written as AX = B, where
\[ A = \begin{pmatrix} 1 & -1 & 0 \\ -2 & 4 & -2 \\ 0 & -1 & 2 \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \quad B = \begin{pmatrix} 0 \\ -1 \\ 1.5 \end{pmatrix} \]
Let
\[ LU = \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix} \begin{pmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & -1 & 0 \\ -2 & 4 & -2 \\ 0 & -1 & 2 \end{pmatrix} \]
Notice that, from the first column, $l_{11} = a_{11} = 1$, $l_{21} = a_{21} = -2$, $l_{31} = 0$. From the first row, $l_{11}u_{12} = -1$ and $l_{11}u_{13} = 0$, so $u_{12} = -1$ and $u_{13} = 0$. From the second column, $l_{21}u_{12} + l_{22} = 4$, so $l_{22} = 4 - (-2)(-1) = 2$; also $l_{21}u_{13} + l_{22}u_{23} = -2$, i.e. $0 + 2u_{23} = -2$, so $u_{23} = -1$. From the third row, $l_{31}u_{12} + l_{32} = -1$, therefore $l_{32} = -1$, and $l_{31}u_{13} + l_{32}u_{23} + l_{33} = 2$, therefore $l_{33} = 1$. Thus
\[ A = LU = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 2 & 0 \\ 0 & -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix} \]
Now AX = LUX = B. Put UX = Z, so that LZ = B:
\[ \begin{pmatrix} 1 & 0 & 0 \\ -2 & 2 & 0 \\ 0 & -1 & 1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \\ 3/2 \end{pmatrix} \]
Solving, we get $z_1 = 0$, $z_2 = -\frac{1}{2}$, $z_3 = 1$. From UX = Z, we have
\[ \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ -1/2 \\ 1 \end{pmatrix} \]
Solving, we get $x = \frac{1}{2}$, $y = \frac{1}{2}$, $z = 1$ as the solution.
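Once L and U are known, solving AX = B amounts to a forward substitution for LZ = B followed by a back substitution for UX = Z. The following is a minimal MATLAB sketch of the two sweeps, written here for illustration and using the Crout factors of Example 3.11; MATLAB's backslash operator performs the same triangular solves internally.

    % Forward and back substitution with the Crout factors of Example 3.11.
    L = [1 0 0; -2 2 0; 0 -1 1];   U = [1 -1 0; 0 1 -1; 0 0 1];   B = [0; -1; 1.5];
    n = length(B);   Z = zeros(n,1);   X = zeros(n,1);
    for i = 1:n                        % forward substitution: L*Z = B
        Z(i) = (B(i) - L(i,1:i-1)*Z(1:i-1)) / L(i,i);
    end
    for i = n:-1:1                     % back substitution: U*X = Z
        X(i) = (Z(i) - U(i,i+1:n)*X(i+1:n)) / U(i,i);
    end
    X                                  % returns [0.5; 0.5; 1], i.e. x = y = 1/2, z = 1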

3.4 ITERATIVE METHODS

The direct methods discussed above become unsuitable for larger systems of equations or when most of the coefficients are zero: they not only affect the accuracy of the solution through round-off errors but also become tedious. In such cases, iterative methods provide an alternative. In an iterative method we start from an approximation to the true solution and obtain better and better approximations from a computation cycle repeated as often as necessary to achieve the desired level of accuracy. In this way, the amount of computation in an iterative method depends on the desired degree of accuracy. Iterative methods introduce truncation errors, and hence it is important to estimate the magnitude of this error as well as the rate of convergence of the iteration process. In fact, iterative methods are self-correcting methods, and any error made in a computation is corrected in the subsequent iterations.

Gauss Jacobi Method. This method is one of the simplest iterative methods; it extends the idea of the fixed point iteration method (discussed in Chapter 2) to a system of linear equations. Consider a system of n equations in n unknowns:
\[ a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1 \]
\[ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n = b_2 \]
\[ \vdots \]
\[ a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n = b_n \]
Rewriting the system, we get
\[ x_1 = \frac{1}{a_{11}}(b_1 - a_{12}x_2 - a_{13}x_3 - \dots - a_{1n}x_n) \]
\[ x_2 = \frac{1}{a_{22}}(b_2 - a_{21}x_1 - a_{23}x_3 - \dots - a_{2n}x_n) \]
\[ \vdots \]
\[ x_n = \frac{1}{a_{nn}}(b_n - a_{n1}x_1 - a_{n2}x_2 - \dots - a_{n,n-1}x_{n-1}) \]
Assuming $x_1^{(0)}, x_2^{(0)}, \dots, x_n^{(0)}$ to be the initial guess for $x_1, x_2, \dots, x_n$, we can compute the next set of values for these unknowns as follows:
\[ x_1^{(1)} = \frac{1}{a_{11}}(b_1 - a_{12}x_2^{(0)} - a_{13}x_3^{(0)} - \dots - a_{1n}x_n^{(0)}) \]
\[ x_2^{(1)} = \frac{1}{a_{22}}(b_2 - a_{21}x_1^{(0)} - a_{23}x_3^{(0)} - \dots - a_{2n}x_n^{(0)}) \]
\[ \vdots \]
\[ x_n^{(1)} = \frac{1}{a_{nn}}(b_n - a_{n1}x_1^{(0)} - a_{n2}x_2^{(0)} - \dots - a_{n,n-1}x_{n-1}^{(0)}) \]
Now using $x_1^{(1)}, x_2^{(1)}, \dots, x_n^{(1)}$, we can get the next set of values:
\[ x_1^{(2)} = \frac{1}{a_{11}}(b_1 - a_{12}x_2^{(1)} - a_{13}x_3^{(1)} - \dots - a_{1n}x_n^{(1)}) \]
\[ x_2^{(2)} = \frac{1}{a_{22}}(b_2 - a_{21}x_1^{(1)} - a_{23}x_3^{(1)} - \dots - a_{2n}x_n^{(1)}) \]
\[ \vdots \]
\[ x_n^{(2)} = \frac{1}{a_{nn}}(b_n - a_{n1}x_1^{(1)} - a_{n2}x_2^{(1)} - \dots - a_{n,n-1}x_{n-1}^{(1)}) \]


The process can continue till we obtain a desired level of accuracy.
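In matrix terms each Jacobi step solves for every unknown using only values from the previous iteration, which amounts to splitting A into its diagonal D and remainder R. A short MATLAB sketch of the iteration follows; it is our illustration, not a program from the book, and it assumes, as in the examples below, that A is diagonally dominant so that the iteration converges.

    % Gauss Jacobi iteration for A*x = b (system of Example 3.12 below).
    A = [8 -3 2; 4 11 -1; 6 3 12];   b = [20; 33; 35];
    x = zeros(3,1);   D = diag(diag(A));   R = A - D;
    for k = 1:50
        x_new = D \ (b - R*x);                   % every component uses old values only
        if max(abs(x_new - x)) < 1e-5, break, end
        x = x_new;
    end
    x                                            % tends to about [3.0168; 1.9859; 0.9118]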

Example 3.12 Solve the system of equations using the Gauss Jacobi method:
8x - 3y + 2z = 20
4x + 11y - z = 33
6x + 3y + 12z = 35
Solution: From the system we can write
x = (1/8)(20 + 3y - 2z), y = (1/11)(33 - 4x + z), z = (1/12)(35 - 6x - 3y).
Let us assume the initial values as x(0) = y(0) = z(0) = 0.
First Iteration:
x(1) = (1/8)(20 + 3y(0) - 2z(0)) = (1/8)(20 + 3(0) - 2(0)) = 2.5
y(1) = (1/11)(33 - 4x(0) + z(0)) = (1/11)(33 - 4(0) + (0)) = 3.0
z(1) = (1/12)(35 - 6x(0) - 3y(0)) = (1/12)(35 - 6(0) - 3(0)) = 2.91666
Second Iteration:
x(2) = (1/8)(20 + 3y(1) - 2z(1)) = (1/8)(20 + 3(3.0) - 2(2.91666)) = 2.89583
y(2) = (1/11)(33 - 4x(1) + z(1)) = (1/11)(33 - 4(2.5) + (2.91666)) = 2.35606
z(2) = (1/12)(35 - 6x(1) - 3y(1)) = (1/12)(35 - 6(2.5) - 3(3.0)) = 0.91666
Third Iteration:
x(3) = (1/8)(20 + 3y(2) - 2z(2)) = (1/8)(20 + 3(2.35606) - 2(0.91666)) = 3.15435
y(3) = (1/11)(33 - 4x(2) + z(2)) = (1/11)(33 - 4(2.89583) + (0.91666)) = 2.03030
z(3) = (1/12)(35 - 6x(2) - 3y(2)) = (1/12)(35 - 6(2.89583) - 3(2.35606)) = 0.87973
Fourth Iteration:
x(4) = (1/8)(20 + 3y(3) - 2z(3)) = (1/8)(20 + 3(2.03030) - 2(0.87973)) = 3.04143
y(4) = (1/11)(33 - 4x(3) + z(3)) = (1/11)(33 - 4(3.15435) + (0.87973)) = 1.93293
z(4) = (1/12)(35 - 6x(3) - 3y(3)) = (1/12)(35 - 6(3.15435) - 3(2.03030)) = 0.83191
Fifth Iteration:
x(5) = (1/8)(20 + 3y(4) - 2z(4)) = (1/8)(20 + 3(1.93293) - 2(0.83191)) = 3.01687
y(5) = (1/11)(33 - 4x(4) + z(4)) = (1/11)(33 - 4(3.04143) + (0.83191)) = 1.96965
z(5) = (1/12)(35 - 6x(4) - 3y(4)) = (1/12)(35 - 6(3.04143) - 3(1.93293)) = 0.91271
Sixth Iteration:
x(6) = (1/8)(20 + 3y(5) - 2z(5)) = (1/8)(20 + 3(1.96965) - 2(0.91271)) = 3.01044
y(6) = (1/11)(33 - 4x(5) + z(5)) = (1/11)(33 - 4(3.01687) + (0.91271)) = 1.98593
z(6) = (1/12)(35 - 6x(5) - 3y(5)) = (1/12)(35 - 6(3.01687) - 3(1.96965)) = 0.91581
Seventh Iteration:
x(7) = (1/8)(20 + 3y(6) - 2z(6)) = (1/8)(20 + 3(1.98593) - 2(0.91581)) = 3.01577
y(7) = (1/11)(33 - 4x(6) + z(6)) = (1/11)(33 - 4(3.01044) + (0.91581)) = 1.98855
z(7) = (1/12)(35 - 6x(6) - 3y(6)) = (1/12)(35 - 6(3.01044) - 3(1.98593)) = 0.91496
Eighth Iteration:
x(8) = (1/8)(20 + 3y(7) - 2z(7)) = (1/8)(20 + 3(1.98855) - 2(0.91496)) = 3.01694
y(8) = (1/11)(33 - 4x(7) + z(7)) = (1/11)(33 - 4(3.01577) + (0.91496)) = 1.98653
z(8) = (1/12)(35 - 6x(7) - 3y(7)) = (1/12)(35 - 6(3.01577) - 3(1.98855)) = 0.91164
Ninth Iteration:
x(9) = (1/8)(20 + 3y(8) - 2z(8)) = 3.01703
y(9) = (1/11)(33 - 4x(8) + z(8)) = 1.98576
z(9) = (1/12)(35 - 6x(8) - 3y(8)) = 0.91156
Tenth Iteration:
x(10) = (1/8)(20 + 3y(9) - 2z(9)) = 3.01678
y(10) = (1/11)(33 - 4x(9) + z(9)) = 1.98576
z(10) = (1/12)(35 - 6x(9) - 3y(9)) = 0.91169
Since the values are correct up to three decimal places, we take x(10), y(10) and z(10) as the solution.

Example 3.13 Solve the system of equations using the Gauss Jacobi method:
x + y + 54z = 110
27x + 6y - z = 85
6x + 15y + 2z = 72
Solution: The coefficient matrix is not diagonally dominant as it stands, so we rewrite the system so as to make the coefficient matrix diagonally dominant:
27x + 6y - z = 85
6x + 15y + 2z = 72
x + y + 54z = 110
Solving for x, y, z, we get
x = (1/27)(85 - 6y + z), y = (1/15)(72 - 6x - 2z), z = (1/54)(110 - x - y).
Assuming the initial values as x(0) = y(0) = z(0) = 0:
First Iteration:
x(1) = (1/27)(85 - 6y(0) + z(0)) = (1/27)(85 - 6(0) + (0)) = 3.14815
y(1) = (1/15)(72 - 6x(0) - 2z(0)) = (1/15)(72 - 6(0) - 2(0)) = 4.80
z(1) = (1/54)(110 - x(0) - y(0)) = (1/54)(110 - (0) - (0)) = 2.03704
Second Iteration:
x(2) = (1/27)(85 - 6y(1) + z(1)) = (1/27)(85 - 6(4.80) + (2.03704)) = 2.15693
y(2) = (1/15)(72 - 6x(1) - 2z(1)) = (1/15)(72 - 6(3.14815) - 2(2.03704)) = 3.26913
z(2) = (1/54)(110 - x(1) - y(1)) = (1/54)(110 - (3.14815) - (4.80)) = 1.88985
Third Iteration:
x(3) = (1/27)(85 - 6y(2) + z(2)) = (1/27)(85 - 6(3.26913) + (1.88985)) = 2.49167
y(3) = (1/15)(72 - 6x(2) - 2z(2)) = (1/15)(72 - 6(2.15693) - 2(1.88985)) = 3.68525
z(3) = (1/54)(110 - x(2) - y(2)) = (1/54)(110 - (2.15693) - (3.26913)) = 1.93655
Proceeding in the same way, we get
Fourth Iteration: x(4) = 2.40093, y(4) = 3.54513, z(4) = 1.92265
Fifth Iteration: x(5) = 2.43155, y(5) = 3.58327, z(5) = 1.92692
Sixth Iteration: x(6) = 2.42323, y(6) = 3.57046, z(6) = 1.92565
Seventh Iteration: x(7) = 2.42603, y(7) = 3.57395, z(7) = 1.92604
Eighth Iteration: x(8) = 2.42527, y(8) = 3.57278, z(8) = 1.92593
Hence x = 2.425, y = 3.573, z = 1.926 may be taken as the solution correct up to three decimal places.

Gauss Seidel Method. The Gauss Seidel method is an improvement of the Gauss Jacobi method. In this method we use the most recent values of the unknowns as soon as they become available at any point of the iteration process. Assuming $x_1^{(0)}, x_2^{(0)}, \dots, x_n^{(0)}$ to be the initial guess for $x_1, x_2, \dots, x_n$, we can compute the next set of values for these unknowns as follows:
\[ x_1^{(1)} = \frac{1}{a_{11}}(b_1 - a_{12}x_2^{(0)} - a_{13}x_3^{(0)} - \dots - a_{1n}x_n^{(0)}) \]
\[ x_2^{(1)} = \frac{1}{a_{22}}(b_2 - a_{21}x_1^{(1)} - a_{23}x_3^{(0)} - \dots - a_{2n}x_n^{(0)}) \]
\[ \vdots \]
\[ x_n^{(1)} = \frac{1}{a_{nn}}(b_n - a_{n1}x_1^{(1)} - a_{n2}x_2^{(1)} - \dots - a_{n,n-1}x_{n-1}^{(1)}) \]
Now using $x_1^{(1)}, x_2^{(1)}, \dots, x_n^{(1)}$, we can get the next set of values:
\[ x_1^{(2)} = \frac{1}{a_{11}}(b_1 - a_{12}x_2^{(1)} - a_{13}x_3^{(1)} - \dots - a_{1n}x_n^{(1)}) \]
\[ x_2^{(2)} = \frac{1}{a_{22}}(b_2 - a_{21}x_1^{(2)} - a_{23}x_3^{(1)} - \dots - a_{2n}x_n^{(1)}) \]
\[ \vdots \]
\[ x_n^{(2)} = \frac{1}{a_{nn}}(b_n - a_{n1}x_1^{(2)} - a_{n2}x_2^{(2)} - \dots - a_{n,n-1}x_{n-1}^{(2)}) \]
and so on. The process can continue till we obtain a desired level of accuracy. Since the most recent values of the unknowns are used at each stage of the iteration, the convergence of the Gauss Seidel method is very fast as compared to the Gauss Jacobi method; the rate of convergence of the Gauss Seidel method is almost double that of the Gauss Jacobi method.
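The only change from the Jacobi sketch given earlier is that each component is overwritten as soon as its new value is computed, so the most recent values are used immediately. A corresponding MATLAB sketch, again our illustration and again assuming a diagonally dominant coefficient matrix:

    % Gauss Seidel iteration for A*x = b (system of Example 3.12 / Example 3.14).
    A = [8 -3 2; 4 11 -1; 6 3 12];   b = [20; 33; 35];
    n = length(b);   x = zeros(n,1);
    for k = 1:10                                 % a few sweeps suffice here
        for i = 1:n
            x(i) = (b(i) - A(i,[1:i-1, i+1:n]) * x([1:i-1, i+1:n])) / A(i,i);
        end
    end
    x                                            % about [3.0168; 1.9859; 0.9118]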

Example 3.14 Solve the system of equations given in Example 3.12 using the Gauss Seidel method.
Solution: The given system of equations is
8x - 3y + 2z = 20
4x + 11y - z = 33
6x + 3y + 12z = 35
As done in the Gauss Jacobi method, we write
x = (1/8)(20 + 3y - 2z), y = (1/11)(33 - 4x + z), z = (1/12)(35 - 6x - 3y).
Assuming the initial values as x(0) = y(0) = z(0) = 0:
First Iteration:
x(1) = (1/8)(20 + 3y(0) - 2z(0)) = (1/8)(20 + 3(0) - 2(0)) = 2.5
y(1) = (1/11)(33 - 4x(1) + z(0)) = (1/11)(33 - 4(2.5) + (0)) = 2.090909
z(1) = (1/12)(35 - 6x(1) - 3y(1)) = (1/12)(35 - 6(2.5) - 3(2.090909)) = 1.143939
Second Iteration:
x(2) = (1/8)(20 + 3y(1) - 2z(1)) = (1/8)(20 + 3(2.090909) - 2(1.143939)) = 2.998106
y(2) = (1/11)(33 - 4x(2) + z(1)) = (1/11)(33 - 4(2.998106) + (1.143939)) = 2.013774
z(2) = (1/12)(35 - 6x(2) - 3y(2)) = (1/12)(35 - 6(2.998106) - 3(2.013774)) = 0.914170
Third Iteration:
x(3) = (1/8)(20 + 3y(2) - 2z(2)) = (1/8)(20 + 3(2.013774) - 2(0.914170)) = 3.026623
y(3) = (1/11)(33 - 4x(3) + z(2)) = (1/11)(33 - 4(3.026623) + (0.914170)) = 1.982516
z(3) = (1/12)(35 - 6x(3) - 3y(3)) = (1/12)(35 - 6(3.026623) - 3(1.982516)) = 0.907726
Fourth Iteration:
x(4) = (1/8)(20 + 3y(3) - 2z(3)) = (1/8)(20 + 3(1.982516) - 2(0.907726)) = 3.016512
y(4) = (1/11)(33 - 4x(4) + z(3)) = (1/11)(33 - 4(3.016512) + (0.907726)) = 1.985607
z(4) = (1/12)(35 - 6x(4) - 3y(4)) = (1/12)(35 - 6(3.016512) - 3(1.985607)) = 0.912009
Fifth Iteration:
x(5) = (1/8)(20 + 3y(4) - 2z(4)) = (1/8)(20 + 3(1.985607) - 2(0.912009)) = 3.016600
y(5) = (1/11)(33 - 4x(5) + z(4)) = (1/11)(33 - 4(3.016600) + (0.912009)) = 1.985964
z(5) = (1/12)(35 - 6x(5) - 3y(5)) = (1/12)(35 - 6(3.016600) - 3(1.985964)) = 0.911876
Sixth Iteration:
x(6) = (1/8)(20 + 3y(5) - 2z(5)) = (1/8)(20 + 3(1.985964) - 2(0.911876)) = 3.016767
y(6) = (1/11)(33 - 4x(6) + z(5)) = (1/11)(33 - 4(3.016767) + (0.911876)) = 1.985892
z(6) = (1/12)(35 - 6x(6) - 3y(6)) = (1/12)(35 - 6(3.016767) - 3(1.985892)) = 0.911810
Since the values are correct up to three decimal places, we take x(6), y(6) and z(6) as the solution. It is clear from this example that the Gauss Seidel method converges faster than the Gauss Jacobi method.


Example 3.15 Solve the system of equations using Gauss Seidal method. 5x – y = 9 – x + 5y – z = 9 – y + 5z = – 6 Solution: The system can be rewritten as x=

1 (9 + y ) 5

y=

1 (4 + x + z ) 5

z=

1 ( −6 + y ) 5

Assuming the initial values as x(0) = y(0) = z(0) = 0 First Iteration:

x(1) = = y(1) = = z(1) = =

1 (9 + y (0) ) 5 1 (9 + 0) = 1.8 5 1 (4 + x (1) + z (0) ) 5 1 (4 + (1.8) + 0) = 1.16 5 1 ( −6 + y (1) ) 5 1 ( −6 + (1.16)) = −0.968 5

Second Iteration: x(2) = =

1 (9 + y (1) ) 5 1 (9 + 1.16) = 2.032 5

74

Numerical Methods with Program in MATLAB

y(2) = = z(2) = =

1 (4 + x (2) + z (1) ) 5 1 (4 + (2.032) + ( −0.968)) = 1.0128 5 1 ( −6 + y (2) ) 5 1 ( −6 + (1.0128)) = −0.99744 5

Third Iteration: x(3) = = y(3) = = z(3) = =

1 (9 + y (2) ) 5 1 (9 + 1.0128) = 2.00256 5 1 (4 + x (3) + z (2) ) 5 1 (4 + (2.00256) + ( −0.99744)) = 1.001024 5 1 ( −6 + y (3) ) 5 1 ( −6 + (1.001024)) = −0.999795 5

Fourth Iteration: x(4) = = y(4) = = z(4) =

1 (9 + y (3) ) 5 1 (9 + 1.001024) = 2.00020 5 1 (4 + x (4) + z (3) ) 5 1 (4 + (2.00020) + ( −0.999795)) = 1.00008 5 1 ( −6 + y (4) ) 5

Solution of Simultaneous Linear Algebraic Equations

=

75

1 ( −6 + (1.00008)) = −0.999983 5

Fifth Iteration: x(5) = = y(5) = = z(5) = =

1 (9 + y (4) ) 5 1 (9 + 1.00008) = 2.00001 5 1 (4 + x (5) + z (4) ) 5 1 (4 + (2.00001) + ( −0.999983)) = 1.00000 5 1 ( −6 + y (5) ) 5 1 ( −6 + (1.00000)) = −0.99999 5

Since the values are correct up to three decimal places, we take x(5), y(5) and z(5) as the solution.

Relaxation Method. This method is an improvement of the Gauss Seidel method aimed at faster convergence. The basic idea is to take the change generated in a Gauss Seidel iteration step and extrapolate the new value of the unknown by a factor r of this change. The new relaxed value is given by
\[ x_{i,r}^{(k+1)} = x_i^{(k)} + r\left(x_i^{(k+1)} - x_i^{(k)}\right) = r\,x_i^{(k+1)} + (1 - r)\,x_i^{(k)} \qquad \ldots(9) \]
The parameter r is called the relaxation parameter. This step is applied successively to each element of the vector X during the iteration procedure, and hence the method is called the Successive Relaxation Method. The parameter r may be assigned a value between 0 and 2 as follows: 0 < r
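A sketch of how the relaxation step of Eq. (9) sits on top of a Gauss Seidel sweep is given below. It is our illustration on the system of Example 3.12; r = 1 reproduces the plain Gauss Seidel iteration, and other values of r in the interval (0, 2) may speed up or slow down the convergence depending on the system.

    % Successive relaxation: Gauss Seidel update followed by the extrapolation of Eq. (9).
    A = [8 -3 2; 4 11 -1; 6 3 12];   b = [20; 33; 35];
    r = 1.1;                                     % relaxation parameter, 0 < r < 2
    n = length(b);   x = zeros(n,1);
    for k = 1:20
        for i = 1:n
            gs = (b(i) - A(i,[1:i-1, i+1:n]) * x([1:i-1, i+1:n])) / A(i,i);
            x(i) = r*gs + (1 - r)*x(i);          % relaxed value, Eq. (9)
        end
    end
    x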