NUMERICAL ANALYSIS, ALGORITHMS AND COMPUTATION [1 ed.] 0853127514



English Pages 161 Year 1988


Table of contents:
Cover
Title
Table of contents
Preface
1 Introduction
2 Iterative methods for non-linear equations
3 Interpolation
4 Numerical integration
5 Numerical solution of ordinary differential equations
6 Systems of linear equations
Appendix
Index


Ellis Horwood Series in Mathematics and Its Applications
Series Editor: G. M. BELL, Professor of Mathematics, King's College (KQC), University of London

Statistics, Operational Research and Computational Mathematics
Editor: B. W. CONOLLY, Professor of Mathematics (Operational Research), Queen Mary College, University of London


NUMERICAL ANALYSIS, ALGORITHMS AND COMPUTATION

J. MURPHY, Head of Department of Computational Physics, Sowerby Research Centre, British Aerospace plc, Bristol; D. RIDOUT, Department of Computer Science and Mathematics, University of Aston, Birmingham; and BRIGID McSHANE, Department of Mathematics, Cheadle Hulme School, Cheadle, Cheshire

This book takes an algorithmic pathway to the teaching of numerical analysis and computation, which are important for degree courses in mathematics, engineering and science. It simplifies and minimises the number of advanced mathematical concepts to make the text more acceptable to the reader; a knowledge of calculus is assumed. The book commences with an explanation of the mean value theorem and Taylor's theorem, which are of particular importance in this area. An understanding of this introductory material is essential for the remaining chapters. The treatment of interpolation, numerical integration and differential equations is strongly dependent on difference operators, an approach which allows a smooth transition through the next section of the text. A chapter on systems of linear equations covers a range of methods and can, like the review of iteration methods for non-linear equations, be read independently of other chapters. Many of the problems presented can be solved with a hand calculator; however, since numerical analysis cannot be separated from computation, the book stresses the preference that the numerical algorithms be implemented on a computer. A number of algorithms have been included (with a Pascal structure) and, for clarity, these algorithms are enclosed in boxes throughout. The reader is encouraged to enlarge these algorithms to full programs, either in Pascal or another language, and test them on examples.

Readership: Mathematics, computer science, engineering.


NUMERICAL ANALYSIS, ALGORITHMS AND COMPUTATION

MATHEMATICS AND ITS APPLICATIONS
Series Editor: G. M. BELL, Professor of Mathematics, King's College London (KQC), University of London

NUMERICAL ANALYSIS, STATISTICS AND OPERATIONAL RESEARCH
Editor: B. W. CONOLLY, Professor of Mathematics (Operational Research), Queen Mary College, University of London

Mathematics and its applications are now awe-inspiring in their scope, variety and depth. Not only is there rapid growth in pure mathematics and its applications to the traditional fields of the physical sciences, engineering and statistics, but new fields of application are emerging in biology, ecology and social organization. The user of mathematics must assimilate subtle new techniques and also learn to handle the great power of the computer efficiently and economically. The need for clear, concise and authoritative texts is thus greater than ever and our series will endeavour to supply this need. It aims to be comprehensive and yet flexible. Works surveying recent research will introduce new areas and up-to-date mathematical methods. Undergraduate texts on established topics will stimulate student interest by including applications relevant at the present day. The series will also include selected volumes of lecture notes which will enable certain important topics to be presented earlier than would otherwise be possible. In all these ways it is hoped to render a valuable service to those who learn, teach, develop and use mathematics.

Mathematics and Its Applications

Series Editor: G. M. BELL, Professor of Mathematics, King's College London (KQC), University of London

Blum, W., Applications and Modelling in Learning and Teaching Mathematics


NUMERICAL ANALYSIS, ALGORITHMS AND COMPUTATION

J. MURPHY, Ph.D.
Head of Department of Computational Physics
Sowerby Research Centre, British Aerospace plc, Bristol

D. RIDOUT,

Ph.D.

Department of Computer Science and Mathematics University of Aston, Birmingham and

BRIGID McSHANE, B.Sc.
Department of Mathematics
Cheadle Hulme School, Cheshire


ELLIS HORWOOD LIMITED Publishers · Chichester

Halsted Press: a division of JOHN WILEY & SONS
New York · Chichester · Brisbane · Toronto

First published in 1988 by
ELLIS HORWOOD LIMITED
Market Cross House, Cooper Street, Chichester, West Sussex, PO19 1EB, England

The publisher's colophon is reproduced from James Gillison's drawing of the ancient Market Cross, Chichester.

Distributors:

Australia and New Zealand:
JACARANDA WILEY LIMITED
GPO Box 859, Brisbane, Queensland 4001, Australia

Canada:
JOHN WILEY & SONS CANADA LIMITED
22 Worcester Road, Rexdale, Ontario, Canada

Europe and Africa:


JOHN WILEY & SONS LIMITED

Baffins Lane, Chichester, West Sussex, England

North and South America and the rest of the world:
Halsted Press: a division of JOHN WILEY & SONS
605 Third Avenue, New York, NY 10158, USA

South-East Asia:
JOHN WILEY & SONS (SEA) PTE LIMITED
37 Jalan Pemimpin #05-04 Block B, Union Industrial Building, Singapore 2057

Indian Subcontinent:


WILEY EASTERN LIMITED
4835/24 Ansari Road

Daryaganj, New Delhi 110002, India

© 1988 J. Murphy, D. Ridout and B. McShane/Ellis Horwood Limited

British Library Cataloguing in Publication Data
Murphy, J.
Numerical analysis, algorithms and computation
1. Applied mathematics. Numerical analysis. Computation
I. Title  II. Ridout, D. (Dennis), 1934-  III. McShane, B. (Brigid), 1962-  IV. Series

519.4

Library of Congress CIP available

ISBN 0-85312-751-4 (Ellis Horwood Limited, Library Edn.)
ISBN 0-7458-0549-3 (Ellis Horwood Limited, Student Edn.)
ISBN 0-470-21214-4 (Halsted Press)

Printed in Great Britain by Hartnolls, Bodmin

COPYRIGHT NOTICE
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the permission of Ellis Horwood Limited, Market Cross House, Cooper Street, Chichester, West Sussex, England.


Table of Contents

1 Introduction
1.1 Numerical/computational modelling
1.2 Number representation and error
1.3 Some prerequisites from Calculus

2 Iterative methods for non-linear equations
2.1 Introduction
2.2 Simple iteration by algebraic transformation
2.3 Test for convergence
2.4 Rate of convergence
2.5 The bisection method
2.6 The Newton-Raphson method for solving f(x) = 0
2.7 Equations with nearly equal roots
2.8 Real roots of polynomial equations
Exercises 2

3 Interpolation
3.1 Finite differences and difference operators
3.2 Interpolation formulae involving forward and backward differences
3.3 Divided differences and Newton's divided difference formula
3.4 Central difference interpolation formulae
3.5 Choices of interpolation polynomial
Exercises 3

4 Numerical integration
4.1 Approximation of the integrand
4.2 Some well-known integration formulae
4.2.1 The Trapezium rule



4.2.2 Simpson's rule
4.3 Truncation error and classification of integration formulae
4.4 Higher order integration formulae
4.5 Choice of formula
4.6 Romberg integration
4.7 Closed and open integration formulae
Exercises 4

5 Numerical solution of ordinary differential equations
5.1 Multi-step methods
5.1.1 Methods based on open integration
5.1.2 Methods based on closed integration
5.1.3 Starting methods
5.1.4 More general multi-step methods
5.1.5 Accuracy of multi-step formulae
5.1.6 Systems of equations and higher order equations
5.2 Single-step methods
5.2.1 First-order equations
5.2.2 Systems of equations and higher order equations
Exercises 5

6 Systems of linear equations
6.1 Direct methods
6.1.1 Triangular systems: forward and backward substitution
6.1.2 Gaussian elimination
6.1.3 Triangular factorization
6.2 Iterative techniques
6.2.1 Jacobi's method
6.2.2 The Gauss-Seidel method
6.2.3 Successive over-relaxation (S.O.R.)
6.2.4 Convergence
6.2.5 The iterative improvement procedure for removing round-off errors
Exercises 6

Appendix  Homogeneous linear difference equations
Index


Preface

Numerical analysis and computation are important parts of many degree programmes in mathematics, engineering and science. This text is suitable for first- and second-year undergraduates following programmes within these subject areas. The material for this text has arisen from courses given to first- and second-year undergraduates in mathematics, engineering and science. To make the text more acceptable to students we have tried to minimize the number of advanced mathematical concepts. We have also attempted to present the more difficult concepts in a way that is understandable. The clarity has been enhanced by involving an author who has recently graduated and is closer to the difficulty of meeting this material for the first time. However, it is assumed that the reader will have a good knowledge of Calculus. The mean value theorem and Taylor's theorem are of particular importance and are presented in Chapter 1. An understanding of the introductory material in Chapter 1 is essential for almost the entire text. With the exception of Chapter 1, Chapters 2 and 6 are independent of the rest of the text. However, students who are studying this subject for the first time should read Chapters 3, 4 and 5 in that order because of the strong dependence of the later chapters on these early chapters. The material in Chapter 6 will be more easily understood by students who have attended an introductory course on linear algebra. Familiarity with the notation and some of the ideas of linear algebra is a distinct advantage. To allow time for a parallel course in linear algebra to take place we have put this material at the end of the text. The material of Chapter 6 can be brought forward for second-year students who have studied linear algebra in their first year. Our treatment of interpolation, numerical integration and differential equations is strongly dependent on difference operators. This approach allows a smooth transition through Chapters 3, 4 and 5.

A knowledge of finite difference calculus can be valuable to practising scientists and engineers involved in mathematical and numerical modelling, except those who prefer the black-box approach. There is a thin line


between making the text acceptable to students and not compromising the mathematical rigour. This is particularly so in the numerical solution of differential equations, which deserves a separate text. We have derived some of the better known methods and given a relatively simple treatment of errors and stability. A more thorough treatment is beyond the intended scope of this text. Since this is an introductory text, many of the problems and exercises can be solved with a hand calculator. However, numerical analysis cannot be separated from computation. The application of numerical analysis to a given problem usually leads to a numerical scheme or algorithm, which is a sequence of arithmetic operations that leads to the solution of the problem. In many cases it is preferable or essential for these numerical algorithms to be implemented on a computer. Consequently computational efficiency influences trends in numerical analysis. Within the limits of presenting introductory material we have tried to emphasize this point throughout the text and have provided a number of algorithms with a Pascal structure. For clarity the algorithms are separated from the text throughout by being ruled off. Students are encouraged to enlarge these algorithms to a full program, either in Pascal or in another scientific language, and to test them on the examples.

ACKNOWLEDGEMENTS
The authors are extremely grateful to Mrs E. P. Bailey for her patience and commitment whilst typing the manuscript with its tedious mathematical notation.

1 Introduction

1.1 NUMERICAL/COMPUTATIONAL MODELLING

To investigate problems that arise in science, technology and commerce it is helpful to construct models which approximate the real situation. We are not referring to tangible models but to mathematical/computational models that give an approximate description of the behaviour of a real system. These models are usually formed by considering the important features or physical processes occurring in the real system and incorporating them into a system of equations, which may be algebraic but is often differential. The unknown quantities that we are seeking, and that relate to the behaviour of the real system, appear in these equations. Consequently the equations must be solved to determine these unknowns. Real systems vary in their complexity. Although we have no control over this, we do have control over the complexity of the models constructed to approximate the real systems. Simple systems can be approximated by simple models and in some cases the associated equations can be solved analytically. Complex systems can also be approximated by simple models but the models may not give an accurate description of the behaviour of the real system. A more accurate description of the real system may be obtained by incorporating more features of the real system into the model. Not surprisingly, the associated system of equations may be difficult to solve. Analytic solutions may not exist and the only way forward is to seek approximate solutions via numerical and computational methods. An introduction to these methods is given in this text.

1.2 NUMBER REPRESENTATION AND ERROR

The discussion that follows is equally applicable to positive and negative numbers, but for convenience in notation we discuss positive numbers only.


The decimal numbers that we work with can be expressed in the form

a = a_n 10^n + a_{n-1} 10^{n-1} + ··· + a_m 10^m = Σ_{k=m}^{n} a_k 10^k,   (1.1)

where n > m and a_k, k = n(-1)m, is an integer between 0 and 9. The notation we commonly use is

(1.2)

Thus n - m + 1 digits are required to represent such a number. The number n - m + 1 can be extremely large and is infinite for many numbers, e.g. 1/3, π, √2. Obviously, there is a limit on the magnitude of n - m + 1 when storing a number on a calculator or digital computer. This unavoidable error is called round-off error. The restriction on the magnitude of numbers stored on computers and calculators would be quite severe if the form (1.2) was used. This restriction can be eased by expressing a real number as a number between zero and unity multiplied by a power of ten:

a = b × 10^N.   (1.3)

When a number is written in this form, b is called the mantissa and N is called the characteristic. This form can be obtained from (1.1) by tak[...]

[...] φ₁(x) satisfies the conditions in Rolle's theorem. Thus, there exists a point x = c, a < c < b, such that φ′₁(c) = 0. Further, differentiation gives

φ′₁(x) = F′₂(x) + 2 ((b - x)/(b - a)²) F₂(a)

and

F′₂(x) = -f′(x) + f′(x) - (b - x)f″(x) = -(b - x)f″(x),

leading to

φ′₁(x) = 2 ((b - x)/(b - a)²) {f(b) - f(a) - (b - a)f′(a) - ½(b - a)²f″(x)}.


Substituting x = c leads to the result

f(b) = f(a) + (b - a)f′(a) + ½(b - a)²f″(c),   (1.12)

where a < c < b. Equations (1.11) and (1.12) are special cases of Taylor's theorem. To obtain the general result, consider the function

φ_n(x) = F_n(x) - ((b - x)/(b - a))^n F_n(a),

where

F_n(x) = f(b) - f(x) - (b - x)f′(x) - ··· - ((b - x)^{n-1}/(n - 1)!) f^{(n-1)}(x).

Then φ_n(a) = φ_n(b) = 0 and, provided that the first n - 1 derivatives of f(x) are continuous for [...]

[...] (i) [...] + 4.17x + 5.139 = 0,  (ii) x = 3 sin x.

We therefore need to develop methods for finding numerical approximations to the roots of such equations. The method employed is called iteration. We concentrate on single non-linear equations in one unknown such as (i) or (ii). These equations can be expressed in the form

f(x) = 0.   (2.1)

Denoting an exact root of this equation by α, so that f(α) = 0, iteration involves making an initial guess, x₀ say, to the root α and generating a sequence

x₁, x₂, x₃, ...

from the initial guess, with the intention that the sequence converges to the root α. Below we discuss how to obtain the initial guess and compute the sequence. Convergence is achieved if, after sufficient elements of the sequence have been computed, the difference between x_n and α decreases as n increases. Convergence is detected when the difference between successive elements decreases. The computation is terminated when two successive elements x_n, x_{n+1} satisfy

|x_{n+1} - x_n| < ε [...]

[...] g′(x) > 0 on the interval [1/4, 1/3]. Consequently the maximum and minimum values of g(x) will occur at the ends of this interval. These are g(1/3) = 0.256 and g(1/4) = 0.310, and therefore g(x) certainly satisfies the condition 1/4 ≤ g(x) ≤ 1/3 for any x in [1/4, 1/3]. To examine the behaviour of g′(x) on the interval let

h(x) = (2/π) (x + 1/2)/√{1 - (x + 1/2)⁴};

then

h′(x) = (2/π) {1 + (x + 1/2)⁴}/{1 - (x + 1/2)⁴}^{3/2},

which is positive on [1/4, 1/3]. Therefore h(x) is monotonic on [1/4, 1/3] and its maximum and minimum values occur at the end points. These are h(1/4) = 0.577 and h(1/3) = 0.737. Consequently |g′(x)| < 1 over the entire interval [1/4, 1/3] and convergence is guaranteed. This last result can also be shown algebraically. The result |g′(x)| < 1 will hold when x satisfies the inequality

(2/π)(x + 1/2) < √(1 - (x + 1/2)⁴)

and for the range of values of x under consideration [...]
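The simple-iteration scheme described above, with its stopping rule |x_{n+1} - x_n| < ε, can be sketched in a few lines. The book's algorithms are given with a Pascal structure; the sketch below uses Python instead, and the test equation x = cos x is an illustrative choice (not one of the book's examples) whose root α ≈ 0.739 satisfies |g′(α)| < 1, so convergence is guaranteed.

```python
import math

def fixed_point(g, x0, eps=1e-10, max_iter=100):
    """Iterate x_{n+1} = g(x_n), stopping when |x_{n+1} - x_n| < eps."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < eps:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge within max_iter steps")

# Illustrative rearrangement x = g(x) with g = cos; |g'(x)| = |sin x| < 1
# near the root, so the convergence test of section 2.3 is satisfied.
root = fixed_point(math.cos, x0=0.5)
```

Note that the stopping rule only detects that successive elements agree; when |g′| is close to 1 the iterates can agree to within ε while still being noticeably further from α.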

and to get to x = x_j + rh we have to shift a fraction of the step length, the fraction being determined by r, away from the closest point x_j. This leads to a logical extension of the definition of the shift operator (see equations (3.9), (3.12)), namely that

E^r f(x) = f(x + rh)   (3.28)

for any value of r. In particular, the value that we want to interpolate can now be represented by E^r f_j or E^r f(x_j) since

f(x) = f(x_j + rh) = E^r f(x_j) = E^r f_j,   -1/2 ≤ r ≤ 1/2.   (3.29)

Thus we can use the shift operator to shift part way between two tabular points, and this can be related to the problem of estimating f(x) when x lies between any two tabular points. Equation (3.29) gives us the notation E^r f_j for the value f(x) = f(x_j + rh) that we want to interpolate. In section 3.2 we express f(x) = E^r f_j in terms of forward and backward differences, i.e. the numbers that appear in finite difference tables. However, first we investigate whether or not it is reasonable to approximate a function by a polynomial.

Differences of a polynomial; approximation of a function by a polynomial

Before fitting polynomials to a given set of data we need to examine whether or not a polynomial can give a reasonable representation of the data. To approach this problem we first consider the process of differencing a polynomial. We consider a general polynomial of degree m:

p_m(x) = a₀x^m + a₁x^{m-1} + ··· + a_m.

The forward difference of this function is

Δp_m(x) = p_m(x + h) - p_m(x)

and since

p_m(x + h) = a₀(x + h)^m + a₁(x + h)^{m-1} + ··· + a_m

we have

Δp_m(x) = a₀{(x + h)^m - x^m} + terms of lower degree.

In particular the term in x^m disappears so that Δp_m(x) is a polynomial of degree m - 1. In a similar manner the term in x^{m-1} will disappear when forming Δ(Δp_m(x)), i.e. Δ²p_m(x) is a polynomial of degree m - 2. Repeated application of this process shows that Δ^m p_m(x) is a polynomial of degree 0, i.e. a constant. Thus the mth-order differences of an mth-degree polynomial are constant. The same result is obtained when using the backward difference operator ∇ and central difference operator δ. It is instructive for


the reader to form a finite difference table for any cubic polynomial, e.g. form a difference table for y = x³ - 3x² + 2x + 1 for x = 0.0(0.2)1.0. All of the entries in the third difference column will have the same value. Suppose now that we form a finite difference table for a given set of data and find that the third-order differences, for example, are constant to within round-off error. The above result suggests that it is reasonable to represent the function with a cubic. More generally, if we find that the mth-order differences are constant to within round-off error we may represent the function with a polynomial of degree m.
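The differencing exercise suggested above is easy to mechanize. The sketch below (in Python rather than the book's Pascal structure; the helper name is ours) builds the forward-difference table for the suggested cubic and exhibits the constant third-difference column, each entry being a₀ · 3! · h³ = 6 × 0.2³ = 0.048 up to round-off.

```python
def difference_table(ys):
    """Return [values, first differences, second differences, ...]."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# y = x^3 - 3x^2 + 2x + 1 tabulated at x = 0.0(0.2)1.0
xs = [0.2 * i for i in range(6)]
ys = [x**3 - 3 * x**2 + 2 * x + 1 for x in xs]
table = difference_table(ys)
third = table[3]  # third-order differences: constant for a cubic
```

Running this with noisy data instead would show third differences that are constant only to within the noise level, which is exactly the practical test described in the text.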

3.2 INTERPOLATION FORMULAE INVOLVING FORWARD AND BACKWARD DIFFERENCES


From equation (3.29) we now present a simple but non-rigorous derivation of interpolation formulae involving forward and backward finite differences. A more rigorous derivation is described at the end of section 3.4. To express f(x) = f(x_j + rh) = E^r f_j in terms of forward differences we substitute E = 1 + Δ from equation (3.20):

f(x) = f(x_j + rh) = E^r f_j = (1 + Δ)^r f_j
     = {1 + rΔ + (r(r - 1)/2!)Δ² + ···} f_j,

i.e.

f(x) = f_j + rΔf_j + (r(r - 1)/2!)Δ²f_j + ··· + (r(r - 1)···(r - n + 1)/n!)Δⁿf_j + e_n(r),   (3.30)

where e_n(r) is the error resulting from terminating the series after the nth-order difference Δⁿf_j. In a similar manner we can express f(x) = f(x_j + rh) = E^r f_j in terms of backward differences by substituting E = (1 - ∇)⁻¹ from equation (3.21):

f(x) = f(x_j + rh) = E^r f_j = (1 - ∇)⁻ʳ f_j
     = {1 + r∇ + (r(r + 1)/2!)∇² + ···} f_j,

i.e.

f(x) = f(x_j + rh) = f_j + r∇f_j + (r(r + 1)/2!)∇²f_j + ··· + (r(r + 1)···(r + n - 1)/n!)∇ⁿf_j + e_n(r).   (3.31)

Equations (3.30), (3.31) are called the Gregory-Newton forward and backward interpolation formulae respectively. The nth-degree polynomial, obtained by omitting e_n(r) from the Gregory-Newton forward formula, passes through the points (x_k, f_k) for k = j(1)j + n. When e_n(r) is omitted from the Gregory-Newton backward formula the resulting nth-degree polynomial passes through the points (x_k, f_k) for k = j(-1)j - n. These statements are not justified by the simple derivations of the formulae presented


here but are an immediate consequence of the derivation of the Gregory-Newton formulae from Newton's divided difference formula (section 3.4). However, the next example demonstrates these statements for the case n = 2.

Example 3.4
The functions

p₂(r) = f₄ + rΔf₄ + (r(r - 1)/2!)Δ²f₄

and

q₂(s) = f₆ + s∇f₆ + (s(s + 1)/2!)∇²f₆,

where r and s are defined by x = x₄ + rh = x₆ + sh, give quadratic approximations to f(x₄ + rh) and f(x₆ + sh). Show that they represent the quadratic passing through (x₄, f₄), (x₅, f₅), (x₆, f₆).

From the relationship x = x₄ + rh = x₆ + sh we see that, when x = x₄, r takes the value zero and s = (x₄ - x₆)/h = -2. Substituting these values into the above formulae gives

p₂(0) = f₄

q₂(-2) = f₆ - 2∇f₆ + ∇²f₆ = f₆ - 2(f₆ - f₅) + (f₆ - 2f₅ + f₄) = f₄,

showing that they both pass through (x₄, f₄). Similarly, when x = x₅ we have

r = (x₅ - x₄)/h = 1,   s = (x₅ - x₆)/h = -1

and substituting these values into the formulae for p₂(r) and q₂(s) gives

p₂(1) = f₄ + Δf₄ = f₄ + (f₅ - f₄) = f₅

q₂(-1) = f₆ - ∇f₆ = f₆ - (f₆ - f₅) = f₅.

Thus both quadratics pass through (x₅, f₅). Finally, when x = x₆ we see that s = 0 and r = 2, and

p₂(2) = f₄ + 2Δf₄ + Δ²f₄ = f₄ + 2(f₅ - f₄) + (f₆ - 2f₅ + f₄) = f₆

q₂(0) = f₆,

i.e. both quadratics pass through (x₆, f₆). More generally, there is a unique polynomial passing through a given set of points and the Gregory-Newton formulae give two alternative forms of this polynomial. Later we meet other forms of this unique polynomial.
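A sketch of the truncated Gregory-Newton forward formula (3.30), written in Python for brevity (the book's algorithms use a Pascal structure; the function names here are ours). The data are tabulated from a cubic, so the degree-3 interpolant should reproduce the underlying function to round-off, in line with the constant-third-difference result of section 3.1.

```python
def forward_differences(fs):
    """diffs[k][i] holds the kth forward difference of f at x_i."""
    diffs = [list(fs)]
    for _ in range(len(fs) - 1):
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return diffs

def gregory_newton_forward(xs, fs, x, n):
    """Evaluate (3.30) truncated after the nth-order difference, from x_0."""
    h = xs[1] - xs[0]
    r = (x - xs[0]) / h
    diffs = forward_differences(fs)
    value, coeff = 0.0, 1.0
    for k in range(n + 1):
        value += coeff * diffs[k][0]
        coeff *= (r - k) / (k + 1)  # builds r(r-1)...(r-k)/(k+1)!
    return value

# Tabulated cubic from section 3.1: y = x^3 - 3x^2 + 2x + 1, x = 0.0(0.2)1.0.
xs = [0.2 * i for i in range(6)]
ys = [x**3 - 3 * x**2 + 2 * x + 1 for x in xs]
val = gregory_newton_forward(xs, ys, 0.31, 3)
```

The coefficient update avoids computing factorials explicitly: after step k the running coefficient is exactly the binomial factor multiplying the (k+1)th-order difference.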


To extend the above example a little, note that

p₃(r) = p₂(r) + (r(r - 1)(r - 2)/3!) Δ³f₄

passes through the same points as p₂(r), namely (x₄, f₄), (x₅, f₅), (x₆, f₆), and the next forward point of the set, (x₇, f₇). However,

q₃(s) = q₂(s) + (s(s + 1)(s + 2)/3!) ∇³f₆

passes through (x₄, f₄), (x₅, f₅), (x₆, f₆) and the next backward point of the set, (x₃, f₃). In general, for each additional term included, the resulting polynomial passes through the next point of the set, forward or backward.

>
2 - Xl

and substituting for f[x1+ 1o x 1+ 1 ] and f[x 1,xJ+ 1 ] leads to f[x

X X ] 1· )+11 J•l

f(x 1)

-=c.x1 -x,._,)(x1 -x1.. 2 ) +

+

f(xs, il ... , (111 such thal the mhdegre.e polynomial. •

+ (x - x 0 )a, + (x - .x0 )(x- x 1 )a2 + (.'( - .x0 )(x -.x 1)(x-.x 2 )a3 + ... +(.x-x0 )(x-x1 ) .. (x-.x._ 1 )a, thro ugh the n + J points txi, / 1) for j = 0(1)11. Thus we require

l).35)

p.(x1)=f(x1). j=O(l)11.

(3.36j

p.(x) = a0 pa~es

Smee (3.35) will be used to approximate f(x) we wnte

f(x) = a 0 + (x -x0 )o 1 + (x -x0 )(x -x 1)a 2

+ ... + tx -

x 0 )(x - .x 1) .. -(x - x._ 1)a. + e.(x),

(3.37)

i.e. e.(x) = f(x) - p.(x) represents lhe error in the approximalion of f(x) by p.(.x) when x :Px1• Obviously the error is zero when x = .x1, i.e. the coefficients a0 ,a 1 ,ai. ... are


chosen such that (3.36) is satisfied, in which case

e_n(x_j) = 0,   j = 0(1)n.   (3.38)

We determine the coefficients a₀, a₁, a₂, ... by setting x = x₀, x₁, x₂, ... in turn into (3.37), using (3.38). Thus setting x = x₀ in (3.37) we see immediately that a₀ = f(x₀). To determine a₁ we move a₀ to the left-hand side of (3.37), substitute a₀ = f(x₀) and divide through by (x - x₀):

(f(x) - f(x₀))/(x - x₀) = a₁ + (x - x₁)a₂ + (x - x₁)(x - x₂)a₃ + ···   (3.39)

[...] x₁, x₂] + ···.   (3.49)
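The coefficient-by-coefficient elimination just described is exactly how divided-difference tables are built in practice: each a_k is the divided difference f[x₀, ..., x_k]. The Python sketch below (function names ours; sample data taken from Exercise 14) computes a₀, a₁, ..., a_n and evaluates (3.35) by nested multiplication, so that the interpolant reproduces the data points exactly.

```python
def divided_differences(xs, fs):
    """Return [a_0, a_1, ..., a_n] with a_k = f[x_0, ..., x_k]."""
    coeffs = list(fs)
    n = len(xs)
    for k in range(1, n):
        # Overwrite in place, highest index first, so that after pass k
        # coeffs[j] holds f[x_{j-k}, ..., x_j].
        for j in range(n - 1, k - 1, -1):
            coeffs[j] = (coeffs[j] - coeffs[j - 1]) / (xs[j] - xs[j - k])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate a_0 + (x-x_0)a_1 + (x-x_0)(x-x_1)a_2 + ... (nested form)."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

# Data from Exercise 14: the cubic through (0,1), (0.5,1.375),
# (1.5,0.625), (2.1,1.231).
xs = [0.0, 0.5, 1.5, 2.1]
fs = [1.0, 1.375, 0.625, 1.231]
a = divided_differences(xs, fs)
```

The nested (Horner-like) evaluation needs only n multiplications per point, which is one practical reason Newton's form is preferred over expanding the polynomial in powers of x.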

We use (3.45) to express the divided differences in terms of central differences. The even-order differences have been expressed in the form f[x₋ₚ, ..., x₀, ..., xₚ]. Equating the smallest and largest suffix to the corresponding suffices in (3.45) gives

j = -p,   j + k = p.

Thus j = -p, k = 2p and from (3.45)

f[x₋ₚ, ..., x₀, ..., xₚ] = δ²ᵖf₀/((2p)! h²ᵖ). [...]

Exercises 3

1. [...] δf_{3/2}, δ²f_{1/2}, μδf_{1/2}, δ²f₋₁, δ³f_{1/2} and μδ³f₁, if they exist.

2. From the definitions of the operators involved show that

(i) ∇³f_k = f_k - 3f_{k-1} + 3f_{k-2} - f_{k-3},
(ii) δ = E^{1/2} - E^{-1/2},
(iii) μ = ½(E^{1/2} + E^{-1/2}),
(iv) E = 1 + δ²/2 + δ{1 + δ²/4}^{1/2}.

(Hint: (iv) follows from (ii), but a plus or minus sign appears in front of the square root in (iv). Checking (iv) with a simple test function such as f(x) = x shows that the minus sign should be discarded. The algebraic manipulation (squaring) introduces this incorrect solution.)

5. Show that

(i) Δ tan⁻¹(x) = tan⁻¹[h/(1 + hx + x²)],


(ii) Δ sin⁻¹(x) = sin⁻¹[(x + h)√(1 - x²) - x√{1 - (x + h)²}] for sufficiently small h,

(iii) Δ sin ax = 2 sin(ah/2) cos{a(x + h/2)},

(iv) Δ cos ax = -2 sin(ah/2) sin{a(x + h/2)}.

6. Show that

(i) Δᵐf_k = δᵐE^{m/2} f_k = ∇ᵐEᵐ f_k,

(ii) Δ(f_k g_k) = f_k Δg_k + g_{k+1} Δf_k,

(iii) Δ(f_k/g_k) = (g_k Δf_k - f_k Δg_k)/(g_k g_{k+1}).

7. Show that

(i) δ sin² ax = sin ah sin 2ax,

(ii) δ tan ax = sin ah/{cos a(x - h/2) cos a(x + h/2)},

(iii) δʳ(1/x) = (-1)ʳ r! hʳ/{(x - rh/2)···(x - h/2) x (x + h/2)···(x + rh/2)},

(iv) δᵐ sin ax = (2 sin(ah/2))ᵐ sin{ax + mπ/2},

(v) δᵐ cos ax = (2 sin(ah/2))ᵐ cos{ax + mπ/2},

(vi) δᵐ cos² ax = ½(2 sin ah)ᵐ cos{2ax + mπ/2}.

8. Derive equations (3.24) and (3.25) from equation (3.23) and Exercise 6(i).

9. Determine whether the functions represented by the following data can be approximated by polynomials over the specified ranges:
(i) f(x) takes the values 1.0000, 1.2840, 2.7183, 9.4877, 54.5982 for x = 0.0(0.5)2.0,
(ii) f(x) takes the values 1.4142, 1.5811, 1.7321, 1.8708, 2.0000 for x = 2.0(0.5)4.0.

10. Estimate the values of f(1.15) and f(1.98) for the function specified numerically in Exercise 1 and state the accuracy of your answer.

11. A function takes the values 1.0000, 1.0960, 1.0480, 0.9520, 0.9040, 1.0000 for x = 0.0(0.2)1.0. Show that a cubic can represent this set of data over the range considered. Determine this cubic in terms of x.

12. Show that

p₃(x) = f₁ + rΔf₁ + (r(r - 1)/2!)Δ²f₁ + (r(r - 1)(r - 2)/3!)Δ³f₁,

where x = x₁ + rh, and

q₃(x) = f₄ + s∇f₄ + (s(s + 1)/2!)∇²f₄ + (s(s + 1)(s + 2)/3!)∇³f₄,

where x = x₄ + sh, are alternative forms of the cubic polynomial passing through (x₁, f₁), (x₂, f₂), (x₃, f₃) and (x₄, f₄).

13. The function f(x) takes the values 1.00000, 1.04603, 1.09417, 1.14454, 1.29722, 1.25232 for x = 0.0(0.1)0.5. Estimate values for f(0.05) and f(0.47).

14. Express each of

f₁(x) = f(0) + x f[0, 0.5] + x(x - 0.5) f[0, 0.5, 1.5] + x(x - 0.5)(x - 1.5) f[0, 0.5, 1.5, 2.1]

and

f₂(x) = f[1.5] + (x - 1.5) f[1.5, 0] + (x - 1.5)x f[1.5, 0, 2.1] + (x - 1.5)x(x - 2.1) f[1.5, 0, 2.1, 0.5]

in the form ax³ + bx² + cx + d given that f(0) = 1, f(0.5) = 1.375, f(1.5) = 0.625, f(2.1) = 1.231. Verify that both are alternative forms of the cubic passing through (0, 1), (0.5, 1.375), (1.5, 0.625) and (2.1, 1.231).

15. From the data (0.25, 0.77110), (0.37, 1.17941), (0.42, 1.36233), (0.52, 1.75788), (0.60, 2.10946), (0.75, 2.87928) for (x, f(x)), estimate values for f(0.3), f(0.5) and f(0.7), giving the accuracy of your answers.

16. Use mathematical induction with respect to k to prove equation (3.34).

17. The function f(x) takes the values 0.25000, 0.33693, 0.46769, 0.65266, 0.91103, 1.27388 for x = 0.0(0.2)1.0. Estimate values for f(0.37) and f(0.52), stating the accuracy of your answers.

18. The function f(x) takes the values 0.0000, 0.0995, 0.1960, 0.2867, 0.3688, 0.4401, 0.4983, 0.5419 for x = 0.0(0.2)1.4. Use the most appropriate interpolation formula to estimate f(0.1), f(0.72), f(0.77) and f(1.32), stating the accuracy of your answers.

19. Derive the Gauss backward formula, equation (3.54), from Newton's divided difference formula.

20. (i) Show that Stirling's formula (3.57), when terminated after the second-order difference term, is a form of the quadratic passing through (x₋₁, f₋₁), (x₀, f₀) and (x₁, f₁) but that Bessel's formula (3.61) is not.
(ii) Show that Bessel's formula (3.61), when terminated after the third-order difference term, is a form of the cubic passing through (x₋₁, f₋₁), (x₀, f₀), (x₁, f₁) and (x₂, f₂) but that Stirling's formula (3.57) is not.

4 Numerical Integration

Two situations can arise in which we need to obtain a numerical approximation to the definite integral

I = ∫ₐᵇ f(x) dx.   (4.1)

These are (i) the explicit form of the integrand f(x) is not known but is specified numerically by the data points (x_i, f_i) for i = 0(1)n, where x₀ = a and x_n = b, and (ii) the explicit form of the integrand is known but is too complicated to be integrated analytically.

However, when the integrand is known explicitly, a set of data (x_i, f_i) for i = 0(1)n can always be generated. Thus we develop methods for solving problems of type (i), and the same methods can be applied to problems of type (ii). We limit our discussion to the situation when the data are equally spaced with respect to x, i.e. x_{i+1} = x_i + h. When the integrand is specified by the data points (x_i, f_i), i = 0(1)n, the problem of deriving a formula that approximates the integral (4.1) amounts to finding an approximate continuous representation of the integrand over the range [a, b]. This can be achieved by fitting interpolation polynomials to the data (x_i, f_i), i = 0(1)n, as discussed in the previous chapter. Consequently the elementary integration of interpolation polynomials leads to formulae that approximate the definite integral (4.1). Given a set of equally spaced data, (x_i, f_i), i = 0(1)n, there are many ways in which interpolation polynomials can be used to approximate the integrand over the entire range. One method is to use a polynomial of degree n that passes through the n + 1 data points (x_i, f_i). With this approach the integrand is approximated by a function that is smooth on [x₀, x_n], i.e. the approximating function has a continuous derivative on

63

,. 1:. n I

i: '

Ill ~

J, ' 64

NumcriClll inregr:uion

[x 0 ,xJ. However, if 11 is large, tbis leads to a complicated fo nn ula. Alternatively lhe

interval (x 0 ,xJ can be diviJc: O~r~l.

(4.5)

4.2 Some well-known integration formulae

That is, for the interval [x_i, x_{i+1}] we take

∫_{x_i}^{x_{i+1}} f(x) dx = ∫_0^1 f(x_i + rh) h dr = ∫_0^1 (f_i + r Δf_i) h dr + ε_i,    (4.6)

where ε_i is the local truncation error of the integration formula on [x_i, x_{i+1}] associated with the linear approximation (4.5). From equations (4.4) and (4.6) we see that this local truncation error is given by

ε_i = ∫_0^1 { (r(r-1)/2!) Δ²f_i + (r(r-1)(r-2)/3!) Δ³f_i + ··· } h dr.    (4.7)

The limits for r in (4.6) and (4.7) have been obtained from the linear relation x = x_i + rh, i.e. r = 0 when x = x_i and r = 1 when x = x_{i+1}. This relation also leads to dx = (dx/dr) dr = h dr. Integrating (4.6) yields

∫_{x_i}^{x_{i+1}} f(x) dx = h[ r f_i + (r²/2) Δf_i ]_0^1 + ε_i = h[ f_i + (1/2) Δf_i ] + ε_i = (h/2)(f_i + f_{i+1}) + ε_i,    (4.8)

since Δf_i = f_{i+1} - f_i. We obtain an expression for the error ε_i using an approach that lacks mathematical rigour but leads to the correct form. We consider the leading contribution to the error in (4.7). Thus, integrating only the first term of (4.7) we obtain

ε_i = -(h/12) Δ²f_i + ··· .    (4.9)
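The single-interval formula (4.8) and the error estimate (4.9) are easy to check numerically. The sketch below (in Python rather than the book's Pascal; the function names are ours) also sums (4.8) over sub-intervals to give the composite trapezium rule.

```python
def trapezium(f, x0, h):
    # single-interval formula (4.8): (h/2)(f_i + f_{i+1})
    return h / 2 * (f(x0) + f(x0 + h))

def trapezium_composite(f, a, b, n):
    # (4.8) applied to each of the n sub-intervals of [a, b] and summed
    h = (b - a) / n
    return sum(trapezium(f, a + i * h, h) for i in range(n))

# For f(x) = x^2 the leading error term -h^3 f''/12 is exact,
# since f'' = 2 is constant: the error is -h^3/6.
h = 0.1
exact = h ** 3 / 3                                 # integral of x^2 over [0, h]
err = exact - trapezium(lambda x: x * x, 0.0, h)
```

For f(x) = x² the computed error agrees with -h³f''/12 to rounding error, which is the point of expressing the error in derivative form.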

It is normal practice to express the error in terms of derivatives rather than differences. This is more useful because differences depend on the spacing h used in a particular problem. This convention allows us to classify and compare the error terms associated with different numerical integration formulae. To express (4.9) in terms of derivatives we relate the differential operator D = d/dx to the forward difference operator Δ. Using the definition of Δ and a Taylor expansion we have

Δf(x) = f(x + h) - f(x)
      = { f(x) + h f'(x) + (h²/2!) f''(x) + ··· } - f(x)
      = { hD + (h²/2!) D² + ··· } f(x).

Thus the operator in curly brackets must be equivalent to Δ, i.e.

Δ = hD + (h²/2!) D² + ··· .

In general we will use this result in the form

Δ^s = { hD + (h²/2!) D² + ··· }^s = h^s D^s + ···    (4.10)

for different values of s and consider the leading term only. Using (4.10) with s = 2, equation (4.9) can be written

ε_i = -(h³/12) f''(x_i) + ··· .    (4.11)

Thus our non-rigorous approach shows that the leading contribution to the error term on [x_i, x_{i+1}] is -h³f''(x_i)/12. Using a more rigorous mathematical derivation it can be shown that the error on [x_i, x_{i+1}] is given exactly by

ε_i = -(h³/12) f''(ξ_i),  x_i < ξ_i < x_{i+1}.    (4.12)

On each pair of sub-intervals, Simpson's rule gives

∫_{x_i}^{x_{i+2}} f(x) dx = (h/3){f_i + 4f_{i+1} + f_{i+2}} - (h⁵/90) f⁽⁴⁾(ξ_i).

This can be written

∫_{x_0}^{x_n} f(x) dx = (h/3){f_0 + 4f_1 + 2f_2 + 4f_3 + ··· + 2f_{n-2} + 4f_{n-1} + f_n} + e_G,    (4.29)

where e_G is the global error and is given by

e_G = -(h⁵/90){f⁽⁴⁾(ξ_0) + f⁽⁴⁾(ξ_2) + ··· + f⁽⁴⁾(ξ_{n-2})}.    (4.30)

To remember the composite form of Simpson's rule it is useful to observe the pattern of coefficients 1 4 2 4 2 4 ··· 2 4 1. Alternatively, observe that four is the coefficient of all function values with odd suffices in the composite form of Simpson's rule and that two is the coefficient of the function values with even suffices. The first and last function values, f_0 and f_n, are exceptions. Simpson's composite formula can only be applied when n + 1, the number of data points, is odd. When n + 1 is odd there are an even number of sub-intervals [x_i, x_{i+1}], which can be paired off into the sub-intervals [x_i, x_{i+2}]. The division in (4.19) is then possible but is not possible when n + 1 is even. This places a restriction on the use of Simpson's composite formula when the integrand is only known in the form of a discrete set of numerical data. If the integrand is known analytically, an odd number of equally spaced data points can always be generated. Although the form of the global truncation error in (4.30) is of little use in practice, it does lead to a useful upper bound. Since |f⁽⁴⁾(ξ_i)| ≤ max_{a≤x≤b} |f⁽⁴⁾(x)| for each of the n/2 terms in (4.30),

|e_G| ≤ (n/2)(h⁵/90) max_{a≤x≤b} |f⁽⁴⁾(x)| = ((b-a)h⁴/180) max_{a≤x≤b} |f⁽⁴⁾(x)|.
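The 1 4 2 4 ··· 2 4 1 coefficient pattern of (4.29) translates directly into code. A sketch in Python (the book's algorithms are given in Pascal; this transcription and its function name are ours):

```python
def simpson_composite(f, a, b, n):
    # composite Simpson's rule (4.29): n sub-intervals, n even
    # (equivalently n + 1 data points, an odd number)
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's composite rule")
    h = (b - a) / n
    total = f(a) + f(b)                                # end coefficients 1 ... 1
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)    # 4 for odd, 2 for even suffices
    return h / 3 * total
```

Because the global error (4.30) involves the fourth derivative, the rule is exact for cubics, which makes a convenient check.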

y(x_{n+1}) = y(x_n) + ∫_{x_n}^{x_{n+1}} f(x, y) dx.    (5.7)

To develop integration formulae from (5.7) we need to approximate the integrand f(x, y) over the interval [x_n, x_{n+1}]. At this stage a value for f(x, y) at x = x_{n+1} is not available since the estimate w_{n+1} ≈ y_{n+1} is not known. Thus we need an integration formula that does not contain f_{n+1} = f(x_{n+1}, y_{n+1}) ≈ f(x_{n+1}, w_{n+1}), i.e. an integration formula that is open with respect to x = x_{n+1} is required (see section 4.7). This is achieved by approximating the integrand by a polynomial that does not pass through the point (x_{n+1}, f_{n+1}), i.e. the integrand must be approximated by a polynomial passing through some of the previously estimated points (x_i, f_i), i = 0(1)n. Using an approximation of this type in (5.7) leads to an estimate w_{n+1} for y(x_{n+1}). Once this is known, f_{n+1} = f(x_{n+1}, y_{n+1}) can also be estimated and an alternative approximation to the integrand f(x, y) can be obtained using a polynomial that passes through (x_{n+1}, f_{n+1}) as well as some of (x_i, f_i), i = 0(1)n. This leads to a closed integration formula and a new estimate for y(x_{n+1}) via equation (5.7). This new estimate is normally more accurate than the initial estimate. The closed-integration approach cannot be used without an initial estimate for f_{n+1}. Thus, in practice, formulae that result from open integration are used to supply an initial estimate, which is then improved by formulae that result from closed integration. The formulae used to obtain the initial estimate are called predictor formulae and the formulae that improve the initial estimates are called corrector formulae. This leads to the so-called predictor-corrector methods.

5.1.1 Methods based on open integration

Since estimates of y_0, y_1, ..., y_n are already available, estimates of the integrand f_i = f(x_i, y_i) can also be calculated for i = 0(1)n. Consequently the integrand f(x, y) can be approximated by the Nth-degree polynomial passing through the last N + 1 points, (x_i, f_i) for i = n(-1)n-N (N < n). In these circumstances the most appropriate form of this polynomial is the Gregory-Newton backward interpolation polynomial based at x = x_n (see equation (3.31)). Approximating the integrand in this way, equation (5.7) takes the form

y(x_{n+1}) = y(x_n) + ∫_0^1 { f_n + r∇f_n + (r(r+1)/2!)∇²f_n + ··· + (r(r+1)···(r+N-1)/N!)∇^N f_n } h dr + ε_{n+1},    (5.8)

where x = x_n + rh and ε_{n+1} is the error associated with the approximation of the integrand by this polynomial, i.e. ε_{n+1} is the error resulting from the termination of the Gregory-Newton backward formula after the term in the Nth-order difference, and is given by

ε_{n+1} = ∫_0^1 { (r(r+1)···(r+N)/(N+1)!) ∇^{N+1} f_n + ··· } h dr.    (5.9)

If ε_{n+1} is omitted from (5.8) the solution of the resulting difference equation will be an approximation to y(x_n). Denoting this approximation by w_n, equation (5.8) can be written

w_{n+1} = w_n + h Σ_{k=0}^{N} a_k ∇^k f(x_n, w_n),    (5.10)

where

a_0 = 1,  a_k = ∫_0^1 (r(r+1)···(r+k-1)/k!) dr  for k = 1(1)N.    (5.11)

Evaluating the elementary integral (5.11) for the first few values of k shows that (5.10) can be written

w_{n+1} = w_n + h{ f_n + (1/2)∇f_n + (5/12)∇²f_n + (3/8)∇³f_n + (251/720)∇⁴f_n + (95/288)∇⁵f_n + ··· + a_N ∇^N f_n }.    (5.12)

Taking N = 0, 1, 2, 3, 4 in turn and using equation (3.24) to express the backward differences in terms of function values (also see Example 3.2) leads to the following integration schemes:

First-order, one-step
w_{n+1} = w_n + h f_n,  ε_{n+1} = (1/2)h²y''(ξ_n).    (5.13)

Second-order, two-step
w_{n+1} = w_n + (h/2)(3f_n - f_{n-1}),  ε_{n+1} = (5/12)h³y'''(ξ_n).    (5.14)

Third-order, three-step
w_{n+1} = w_n + (h/12)(23f_n - 16f_{n-1} + 5f_{n-2}),  ε_{n+1} = (3/8)h⁴y⁽⁴⁾(ξ_n).    (5.15)

Fourth-order, four-step
w_{n+1} = w_n + (h/24)(55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3}),  ε_{n+1} = (251/720)h⁵y⁽⁵⁾(ξ_n).    (5.16)

Fifth-order, five-step
w_{n+1} = w_n + (h/720)(1901f_n - 2774f_{n-1} + 2616f_{n-2} - 1274f_{n-3} + 251f_{n-4}),  ε_{n+1} = (95/288)h⁶y⁽⁶⁾(ξ_n).    (5.17)
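As a concrete check of these explicit schemes, the sketch below advances the fourth-order formula (5.16) by one step for the test problem y' = y, y(0) = 1, using exact starting values. Python is used for illustration, and the choice of test problem is ours, not the book's.

```python
import math

def ab4_step(w_n, fs, h):
    # fourth-order Adams-Bashforth formula (5.16);
    # fs = [f_n, f_{n-1}, f_{n-2}, f_{n-3}]
    return w_n + h / 24 * (55 * fs[0] - 59 * fs[1] + 37 * fs[2] - 9 * fs[3])

f = lambda x, y: y                        # test problem y' = y, exact solution e^x
h = 0.1
xs = [0.0, 0.1, 0.2, 0.3]
ws = [math.exp(x) for x in xs]            # exact starting values w_0 ... w_3
fs = [f(x, w) for x, w in zip(xs, ws)]
w4 = ab4_step(ws[-1], fs[::-1], h)        # estimate of y(0.4)
```

The one-step error here is roughly the predicted (251/720)h⁵y⁽⁵⁾(ξ), of the order of 10⁻⁶ for this h.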

The order of these schemes is defined in the same way as for integration formulae in Chapter 4: a scheme with error term containing the factor h^{k+1} has order k. Clearly higher-order schemes can be obtained by taking larger values of N in (5.12). All of the formulae that result from equation (5.12) are called Adams-Bashforth formulae. They were derived by open integration and a consequence of this is that they lead to an explicit expression for w_{n+1} ≈ y(x_{n+1}) in terms of previous estimates of y and f(x, y). Thus in a given problem, when f(x, y) is known, these formulae are easy to use to advance the solution from x = x_n to x = x_{n+1}. The Adams-Bashforth formulae have been derived by truncating equation (5.12). The term after which (5.12) was truncated was determined by the value of N. In each case the form of the error associated with this truncation is given. Although not rigorous, the correct form of these error terms can be obtained in the same way as error terms were obtained for numerical integration formulae in Chapter 4, i.e. by examining the leading contribution to the terms omitted from (5.12). This is evident when comparing the coefficients in each error term of equations (5.13)-(5.17) with equation (5.12). As an example consider the derivation of equation (5.15). This is obtained by taking N = 2 in (5.12), giving

w_{n+1} = w_n + h{ f_n + (1/2)∇f_n + (5/12)∇²f_n },    (5.18)

which leads to (5.15) on substituting ∇f_n = f_n - f_{n-1} and ∇²f_n = f_n - 2f_{n-1} + f_{n-2}.

From (5.9) the error associated with this formula is

ε_{n+1} = ∫_0^1 { (r(r+1)(r+2)/3!) ∇³f_n + ··· } h dr.    (5.19)

The coefficient of ∇³f_n here is (5.11) with k = 3 and this has been evaluated in (5.12), i.e.

ε_{n+1} = h{ (3/8)∇³f_n + ··· }.    (5.20)

To express the error in terms of derivatives we use the relations

∇ = hD - (h²/2!)D² + (h³/3!)D³ - ···    (5.21)

and

∇^s = h^s D^s + ··· .    (5.22)

The reader is left to derive these relations between the backward difference operator ∇ and the differential operator D (Exercise 4). Similar results were derived involving the forward difference operator Δ in equations (4.10), (4.11). Using (5.22) with s = 3, (5.20) can be written

ε_{n+1} = (3/8)h⁴ f'''_n + ··· .    (5.23)

A more rigorous approach shows that this leading term gives the precise form of the error when the derivative is evaluated at a different value of x, ξ_n say, i.e. it can be shown

that

ε_{n+1} = (3/8)h⁴ f'''(ξ_n).    (5.24)

Further, since dy/dx = f(x, y) (equation (5.1)), equation (5.24) can also be written as

ε_{n+1} = (3/8)h⁴ y⁽⁴⁾(ξ_n).    (5.25)

The error terms for the other schemes can be obtained similarly. By examining the error terms associated with the schemes (5.13)-(5.17) it is evident that, in general, larger values of N lead to more accurate formulae. This is because the approximation to the integrand is the source of the error, and larger values of N give better approximations to the integrand. In equation (5.8) the integrand has been approximated by the Nth-degree polynomial passing through (x_n, f_n), (x_{n-1}, f_{n-1}), ..., (x_{n-N}, f_{n-N}). Thus as N increases, the polynomial approximation passes through more of the previous data points, leading to smaller errors, as expected.

5.1.2 Methods based on closed integration

Once a value for y(x_{n+1}) has been estimated by any of the schemes derived in section 5.1.1, f_{n+1} = f(x_{n+1}, y_{n+1}) can be estimated. The integrand f(x, y) in (5.7) can then be approximated over [x_n, x_{n+1}] by a polynomial passing through (x_{n+1}, f_{n+1}) as well as some of the previous points (x_k, f_k) for k = 0(1)n. This will lead to a closed-integration formula for the approximation of the integral. The most convenient form of the Nth-degree polynomial passing through the last N + 1 data points (x_k, f_k) for k = n+1(-1)n-N+1 is the Gregory-Newton backward formula based at x = x_{n+1} (equation (3.31) with j = n + 1). Using the polynomial to approximate the integrand, equation (5.7) takes the form

y(x_{n+1}) = y(x_n) + ∫_{-1}^0 { f_{n+1} + s∇f_{n+1} + (s(s+1)/2!)∇²f_{n+1} + ··· + (s(s+1)···(s+N-1)/N!)∇^N f_{n+1} } h ds + ε_{n+1},    (5.26)

where s is defined by x = x_{n+1} + sh and ε_{n+1} is the error associated with the approximation of the integrand by this polynomial. Thus ε_{n+1} is the error resulting from the termination of the Gregory-Newton backward formula after the term in the Nth-order difference and is given by

ε_{n+1} = ∫_{-1}^0 { (s(s+1)···(s+N)/(N+1)!) ∇^{N+1} f_{n+1} + ··· } h ds.    (5.27)

If ε_{n+1} is omitted from (5.26) the solution of the resulting difference equation will be an approximation to y(x_n). Denoting this approximation by w_n, equation (5.26) can be written

w_{n+1} = w_n + h Σ_{k=0}^{N} b_k ∇^k f(x_{n+1}, w_{n+1}),    (5.28)

where

b_0 = 1,  b_k = ∫_{-1}^0 (s(s+1)···(s+k-1)/k!) ds  for k = 1(1)N.    (5.29)

Carrying out the elementary integrations to evaluate the first few coefficients in (5.28) leads to

w_{n+1} = w_n + h{ f_{n+1} - (1/2)∇f_{n+1} - (1/12)∇²f_{n+1} - (1/24)∇³f_{n+1} - (19/720)∇⁴f_{n+1} - (3/160)∇⁵f_{n+1} - ··· + b_N ∇^N f_{n+1} }.    (5.30)

Taking N = 1, 2, 3, 4 in turn and using equation (3.24) to express the backward differences in terms of function values leads to the following integration schemes:

Second-order, one-step
w_{n+1} = w_n + (h/2){f(x_{n+1}, w_{n+1}) + f_n},  ε_{n+1} = -(1/12)h³y'''(ξ_n).    (5.31)

Third-order, two-step
w_{n+1} = w_n + (h/12){5f(x_{n+1}, w_{n+1}) + 8f_n - f_{n-1}},  ε_{n+1} = -(1/24)h⁴y⁽⁴⁾(ξ_n).    (5.32)

Fourth-order, three-step
w_{n+1} = w_n + (h/24){9f(x_{n+1}, w_{n+1}) + 19f_n - 5f_{n-1} + f_{n-2}},  ε_{n+1} = -(19/720)h⁵y⁽⁵⁾(ξ_n).    (5.33)

Fifth-order, four-step
w_{n+1} = w_n + (h/720){251f(x_{n+1}, w_{n+1}) + 646f_n - 264f_{n-1} + 106f_{n-2} - 19f_{n-3}},  ε_{n+1} = -(3/160)h⁶y⁽⁶⁾(ξ_n).    (5.34)

Higher-order schemes can be obtained by taking larger values of N in equation (5.30). All of the formulae that result from equation (5.26) are called Adams-Moulton formulae. Whereas the Adams-Bashforth schemes were obtained using open integration, and explicit formulae of the form

w_{n+1} = w_n + h φ(f_n, f_{n-1}, ..., f_{n-N})    (5.35)

were obtained, the Adams-Moulton schemes have been obtained via closed integration and they have the general form

w_{n+1} = w_n + h ψ(f(x_{n+1}, w_{n+1}), f_n, f_{n-1}, ..., f_{n-N+1}),    (5.36)

i.e. the Adams-Moulton schemes yield implicit formulae for w_{n+1}. Consequently Adams-Moulton schemes have to be used iteratively to find w_{n+1}. The iterative scheme is obtained by expressing (5.36) in the form

w_{n+1}^{(k+1)} = w_n + h ψ(f(x_{n+1}, w_{n+1}^{(k)}), f_n, f_{n-1}, ..., f_{n-N+1}),    (5.37)

where w_{n+1}^{(k)} is the kth estimate of w_{n+1}. The convergence of the scheme is discussed later, but when it does converge it yields the solution of (5.36).

All of the schemes (5.31)-(5.34) must normally be applied in the iterative form (5.37), e.g. to find w_{n+1} from (5.32) we express it in the form

w_{n+1}^{(k+1)} = w_n + (h/12){5f(x_{n+1}, w_{n+1}^{(k)}) + 8f_n - f_{n-1}}.    (5.38)

Equation (5.37) will generate the sequence w_{n+1}^{(1)}, w_{n+1}^{(2)}, ... from a starting value or initial guess w_{n+1}^{(0)}. This initial guess or prediction can be obtained from an explicit formula of the form (5.35). Thus, in practice, the Adams-Bashforth schemes and Adams-Moulton schemes are used together. The former supplies a predicted solution and the latter corrects or improves this initial prediction. Methods of this type are called predictor-corrector methods. Predictor-corrector methods other than the Adams-Bashforth-Moulton type do exist. A scheme that used to be popular was the Milne-Simpson method. This is described in section 5.1.4. When a predictor-corrector method is used to solve a differential equation, predictor and corrector formulae of the same order are chosen. For example, one of the most popular predictor-corrector methods uses the fourth-order Adams-Bashforth formula (5.16) for prediction and the fourth-order Adams-Moulton formula (5.33) for correction:

w_{n+1}^{(0)} = w_n + (h/24){55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3}}
w_{n+1}^{(k+1)} = w_n + (h/24){9f(x_{n+1}, w_{n+1}^{(k)}) + 19f_n - 5f_{n-1} + f_{n-2}}.    (5.39)

The overall accuracy of a solution would not be affected by using a predictor formula of lower order than the corrector formula. However, this may lead to additional calculations to achieve a specified accuracy; if the predictor solution is not sufficiently accurate, more iterations with the corrector formula may be necessary. A feature of multi-step methods is that they can only be used when the numerical solution is already available over several consecutive steps. For example, the prediction formula in (5.39) can only be used to calculate w_{n+1} when values of w_n, w_{n-1}, w_{n-2} and w_{n-3} are available. This is necessary for the calculation of f_n, f_{n-1}, f_{n-2} and f_{n-3}. Recalling that we are solving the initial-value problem (5.1) with initial condition of the form (5.2), the only detail available at the start of the problem is w_0 = y(x_0). Consequently a different method must be used to calculate w_1, w_2 and w_3. More generally, when using an Nth-order predictor formula a different method must be used to calculate w_1, w_2, w_3, ..., w_{N-1}. It is therefore necessary to develop starting methods before we can apply the linear multi-step schemes to specific problems.
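The predictor-corrector pair (5.39) can be sketched in a few lines of code. Python is used for illustration, and the test problem y' = y with exact starting values is our own choice, not the book's.

```python
import math

def abm4_step(f, xs, ws, h, corrector_iterations=2):
    # predictor: fourth-order Adams-Bashforth formula (5.16)
    f3, f2, f1, f0 = [f(x, w) for x, w in zip(xs[-4:], ws[-4:])]
    # here f0 = f_n, f1 = f_{n-1}, f2 = f_{n-2}, f3 = f_{n-3}
    w = ws[-1] + h / 24 * (55 * f0 - 59 * f1 + 37 * f2 - 9 * f3)
    # corrector: fourth-order Adams-Moulton formula (5.33), iterated as in (5.37)
    x_next = xs[-1] + h
    for _ in range(corrector_iterations):
        w = ws[-1] + h / 24 * (9 * f(x_next, w) + 19 * f0 - 5 * f1 + f2)
    return w

f = lambda x, y: y                        # test problem y' = y, exact solution e^x
h = 0.1
xs = [0.0, 0.1, 0.2, 0.3]
ws = [math.exp(x) for x in xs]            # exact starting values
w4 = abm4_step(f, xs, ws, h)              # corrected estimate of y(0.4)
```

Two corrector sweeps usually suffice for small h; the corrected value is noticeably closer to e^0.4 than the prediction alone, consistent with the smaller error constant in (5.33).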

5.1.3 Starting methods

(i) Taylor series method

One method of initiating the solution of the initial-value problem

dy/dx = f(x, y),  y(x_0) = y_0

is via the Taylor expansion

y(x_0 + rh) = y_0 + (rh/1!)y_0' + (r²h²/2!)y_0'' + ··· + (r^k h^k/k!)y_0^{(k)} + (r^{k+1} h^{k+1}/(k+1)!)y^{(k+1)}(ξ).    (5.40)

For the problem of Example 5.1, dy/dx = x + y² with y(0) = 0, successive differentiation gives y(0) = 0, y'(0) = 0, y''(0) = 1, y⁽³⁾(0) = 0, y⁽⁴⁾(0) = 0. Since f(x, y) = x + y², Picard's method (5.43) yields

y⁽¹⁾(x) = ∫_0^x ( x + (y⁽⁰⁾(x))² ) dx.

Proceeding with Picard's method we obtain

y⁽³⁾(x) = ∫_0^x ( x + (y⁽²⁾(x))² ) dx
        = ∫_0^x ( x + x⁴/4 + x⁷/20 + x¹⁰/400 ) dx
        = x²/2 + x⁵/20 + x⁸/160 + x¹¹/4400.
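Picard's iteration for this problem is mechanical enough to automate: each stage squares the current polynomial, adds the x term, and integrates term by term. A sketch with exact rational coefficients (Python's fractions module; this illustration and its helper names are ours, not the book's):

```python
from fractions import Fraction

def poly_mul(a, b):
    # product of polynomials given as coefficient lists (a[i] is the x^i coefficient)
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def picard_step(p):
    # y_{k+1}(x) = integral from 0 to x of (t + y_k(t)^2) dt, for f(x, y) = x + y^2
    integrand = poly_mul(p, p)
    integrand += [Fraction(0)] * max(0, 2 - len(integrand))
    integrand[1] += 1                                   # add the 'x' term
    return [Fraction(0)] + [c / Fraction(i + 1) for i, c in enumerate(integrand)]

p = [Fraction(0)]                                       # y^(0)(x) = 0
for _ in range(3):
    p = picard_step(p)                                  # y^(1), y^(2), y^(3)
```

Three steps reproduce y⁽³⁾(x) = x²/2 + x⁵/20 + x⁸/160 + x¹¹/4400 exactly, confirming the hand calculation above.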

In this last stage of the iteration we notice that y⁽³⁾(x) contains two additional terms compared to y⁽²⁾(x). We only retain the lower-order term and discard the higher-order, spurious term. Thus we have

y⁽³⁾(x) = x²/2 + x⁵/20 + x⁸/160

and substituting x = 0.1, 0.2 and 0.3 in turn leads to the estimates

y(0.1) = 0.00500,  y(0.2) = 0.02001,  y(0.3) = 0.04512.

Examination of the magnitude of the last term, x⁸/160, in y⁽³⁾(x), for each value of x, indicates that these results are accurate to five decimal places. For the initial-value problem specified in Example 5.1, Picard's method has been a little easier to use than the Taylor series method. This depends on the form of f(x, y) and is certainly not always the case. Notice that

y(rh) ≈ r²h²/2 + r⁵h⁵/20

from the Taylor series method (Example 5.1) agrees exactly with

y⁽²⁾(x) = x²/2 + x⁵/20

from Picard's method (Example 5.2), since x = x_0 + rh, x_0 = 0. The additional term in y⁽³⁾(x) gave the increased accuracy when using Picard's method. With some extra effort this additional term could also have been included in the Taylor series approach.

(iii) Iterative improvement

With some problems the Taylor series method and Picard's method are difficult or laborious to apply for finding starting values to a required accuracy. An alternative approach is to use one of these methods to obtain initial estimates for the starting values. Then an iterative method can be used to obtain better estimates for these starting values and achieve the required accuracy. We describe a suitable iterative method for improving the starting values. Suppose that we are planning to use the fourth-order predictor-corrector method (5.39). Then from the initial condition y(x_0) = y_0 we must obtain starting values for w_1 ≈ y_1, w_2 ≈ y_2 and w_3 ≈ y_3. Initial estimates for these starting values can be obtained using Taylor's series method or Picard's method. From these initial estimates we can obtain the approximations f_1 ≈ f(x_1, w_1), f_2 ≈ f(x_2, w_2) and f_3 ≈ f(x_3, w_3). Then the integrand f(x, y) in (5.42) can be approximated by the cubic polynomial passing


through (x_0, f_0), (x_1, f_1), (x_2, f_2) and (x_3, f_3). Using the Gregory-Newton forward version of this polynomial (equation (3.30)) we obtain

y_r = y_0 + ∫_0^r { f_0 + rΔf_0 + (r(r-1)/2!)Δ²f_0 + (r(r-1)(r-2)/3!)Δ³f_0 } h dr + ε_r,    (5.45)

where x = x_0 + rh, y_r = y(x_0 + rh) and ε_r is the error associated with the cubic approximation to the integrand. Carrying out the elementary integrations in (5.45) and setting r = 1, 2, 3 in turn leads to new estimates for y_1, y_2, y_3:

w_1 = w_0 + h{f_0 + (1/2)Δf_0 - (1/12)Δ²f_0 + (1/24)Δ³f_0},  ε_1 = -(19/720)h⁵y⁽⁵⁾(ξ)    (5.46)
w_2 = w_0 + h{2f_0 + 2Δf_0 + (1/3)Δ²f_0},  ε_2 = -(h⁵/90)y⁽⁵⁾(ξ)    (5.47)
w_3 = w_0 + h{3f_0 + (9/2)Δf_0 + (9/4)Δ²f_0 + (3/8)Δ³f_0},  ε_3 = -(3/80)h⁵y⁽⁵⁾(ξ).    (5.48)

The error terms can be obtained by considering the leading contribution to the terms omitted from the Gregory-Newton formula, as described in the derivations of the linear multi-step formulae. Using equation (3.23) to express the finite differences in terms of function values we obtain

w_1 = w_0 + (h/24){9f_0 + 19f_1 - 5f_2 + f_3}    (5.49)
w_2 = w_0 + (h/3){f_0 + 4f_1 + f_2}    (5.50)
w_3 = w_0 + (3h/8){f_0 + 3f_1 + 3f_2 + f_3}.    (5.51)

Since f_0 and approximate values for f_1, f_2 and f_3 are available, these formulae can be used to obtain new estimates for w_1, w_2 and w_3. If these values are not in agreement with the initial estimates, to the required accuracy, f_1, f_2 and f_3 should be recalculated. Equations (5.49), (5.50) and (5.51) are then used again to obtain another set of estimates for w_1, w_2 and w_3. Proceeding in this manner we generate the finite sequences

w_1^{(0)}, w_1^{(1)}, w_1^{(2)}, ..., w_1^{(N)}
w_2^{(0)}, w_2^{(1)}, w_2^{(2)}, ..., w_2^{(N)}
w_3^{(0)}, w_3^{(1)}, w_3^{(2)}, ..., w_3^{(N)},

where the value of N is such that |w_i^{(N)} - w_i^{(N-1)}| is smaller than the required accuracy for i = 1, 2, 3.

Written out with the iteration indicated, the scheme is

w_1^{(i+1)} = w_0 + (h/24){9f_0 + 19f_1^{(i)} - 5f_2^{(i)} + f_3^{(i)}}
w_2^{(i+1)} = w_0 + (h/3){f_0 + 4f_1^{(i)} + f_2^{(i)}}    (5.53)
w_3^{(i+1)} = w_0 + (3h/8){f_0 + 3f_1^{(i)} + 3f_2^{(i)} + f_3^{(i)}}.

However, the convergence of the iteration may be improved by using the latest estimates of w_1, w_2 and w_3 as soon as they are available, i.e. instead of (5.53) use the similar scheme

w_1^{(i+1)} = w_0 + (h/24){9f_0 + 19f_1^{(i)} - 5f_2^{(i)} + f_3^{(i)}}
w_2^{(i+1)} = w_0 + (h/3){f_0 + 4f_1^{(i+1)} + f_2^{(i)}}    (5.54)
w_3^{(i+1)} = w_0 + (3h/8){f_0 + 3f_1^{(i+1)} + 3f_2^{(i+1)} + f_3^{(i)}}.

We have derived these iterative formulae to improve the starting values for a fourth-order predictor-corrector method. In a similar manner, iterative formulae can be derived for predictor-corrector methods of different order.

Example 5.3

For the initial-value problem

dy/dx = x + y²,  y(0) = 0

use a starting method to estimate y(0.2), y(0.4), y(0.6) and use a fourth-order predictor-corrector method to estimate y(0.8).

In this example we demonstrate the use of the iterative improvement technique and the fourth-order predictor-corrector formulae (5.39). Consequently we need not be too concerned about the accuracy of the initial estimates for the starting values. The initial-value problem in this example was also examined in Examples 5.1 and 5.2, but different function values were required. Following Example 5.1 the Taylor series method leads to

y(rh) ≈ r²h²/2 + r⁵h⁵/20 + ···

and for this problem we take h = 0.2. Working to five decimal places throughout and taking r = 1, 2, 3 in turn leads us to the initial estimates

w_1^{(0)}(0.2) = 0.02002,  w_2^{(0)}(0.4) = 0.08051,  w_3^{(0)}(0.6) = 0.18389

for the starting values y(0.2), y(0.4) and y(0.6). At this stage it is not necessary to check the accuracy since that will be part of the iterative improvement scheme. From these starting values we determine initial estimates for f_1 = f(0.2, y(0.2)), f_2 = f(0.4, y(0.4)) and f_3 = f(0.6, y(0.6)). In this problem, f(x, y) = x + y² and therefore

f_1^{(0)} = 0.20040,  f_2^{(0)} = 0.40648,  f_3^{(0)} = 0.63382.

Substituting these values and w_0 = 0, f_0 = 0 into the first equation of (5.54) yields

100

Numerical solution of ordinary differential equations

w."'-1 - ... -

pP;.1(O} = 4 u1..>(0) = -7 1r~O)= - 2.

Using these numerical values in (5.40) with 11 - 1/10 we obtain u(x 0 +r/10) =H-'-l ( -r ) 2 10

2(

2

3

3

r ) -7 ( -r ),. +··· +-6I ( -10 24 10 .

I(

v(x 0 +r/l0)= I+ - - r ) - - -r ) .. + ··· . 3 10 12 10

I' 112

"lumerical solurion of ordinary differenr ial equations

Noting that x 0 - 0 and setting r • I , 2 in turn yields U1=11(Q. J) = l.005J,

u 1 = u(0.2)'"" 1.0209

v 1 - v(O. I)= l.0007,

v2 = v(0.2) = 1.0052

to four decimal places. Applying the third-order Adams-Bashforth formula (5.15) to each of the equations for u and v we obtain

u_{n+1}^{(0)} = u_n + (h/12)(23f_n - 16f_{n-1} + 5f_{n-2})
v_{n+1}^{(0)} = v_n + (h/12)(23g_n - 16g_{n-1} + 5g_{n-2}),

where f(x, u, v) = x + u - v², g(x, u, v) = x² - v + u², f_n = f(x_n, u_n, v_n) and g_n = g(x_n, u_n, v_n). The index (0) distinguishes the initial predicted values from improved estimates that we calculate below. We will use these formulae to estimate u_3 = u(0.3) and v_3 = v(0.3). From the initial conditions and starting values we calculate

f_0 = 0.0000,  f_1 = 0.1037,  f_2 = 0.2105
g_0 = 0.0000,  g_1 = 0.0195,  g_2 = 0.0770

and using the above formulae with n = 2 we obtain the predicted values

u_3^{(0)} = 1.0474,  v_3^{(0)} = 1.0174.

To improve on these predicted values we apply the third-order Adams-Moulton formula (5.32) (also see (5.38)) to the differential equations for u and v, giving

u_{n+1}^{(k+1)} = u_n + (h/12){5f(x_{n+1}, u_{n+1}^{(k)}, v_{n+1}^{(k)}) + 8f_n - f_{n-1}}
v_{n+1}^{(k+1)} = v_n + (h/12){5g(x_{n+1}, u_{n+1}^{(k)}, v_{n+1}^{(k)}) + 8g_n - g_{n-1}},

where the indices in brackets indicate the iteration. From the predicted values u_3^{(0)}, v_3^{(0)} we use these formulae to generate the sequences u_3^{(0)}, u_3^{(1)}, u_3^{(2)}, ... and v_3^{(0)}, v_3^{(1)}, v_3^{(2)}, ... until convergence. Thus, as for the predictor formulae, we set n = 2. Then we set k = 0 to calculate u_3^{(1)}, v_3^{(1)}. We have f(x_3, u_3^{(0)}, ...

    (* interchange rows if necessary *)
    IF MaxRow <> i THEN
      FOR j := i TO n + 1 DO
        BEGIN
          Temp := A[i, j];
          A[i, j] := A[MaxRow, j];
          A[MaxRow, j] := Temp
        END;
    (* zero remainder of column *)
    InvAii := 1/A[i, i];
    FOR k := i + 1 TO n DO
      BEGIN
        Scale := InvAii * A[k, i];
        FOR j := i TO n + 1 DO
          A[k, j] := A[k, j] - Scale * A[i, j]
      END
  END;
  (* back substitution *)
  x[n] := A[n, n + 1]/A[n, n];
  FOR i := n - 1 DOWNTO 1 DO
    BEGIN
      xi := A[i, n + 1];
      FOR j := i + 1 TO n DO
        xi := xi - A[i, j] * x[j];
      x[i] := xi/A[i, i]
    END;
  write(x)
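A complete Python transcription of the same algorithm may help the reader test it; this is an illustrative sketch (the book's version is Pascal with 1-based arrays, this one is 0-based).

```python
def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix,
    # followed by back substitution
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]       # augmented matrix
    for i in range(n):
        # partial pivoting: move the largest |entry| in column i up to row i
        max_row = max(range(i, n), key=lambda r: abs(M[r][i]))
        if max_row != i:
            M[i], M[max_row] = M[max_row], M[i]
        # zero remainder of column
        inv_aii = 1.0 / M[i][i]
        for k in range(i + 1, n):
            scale = inv_aii * M[k][i]
            for j in range(i, n + 1):
                M[k][j] -= scale * M[i][j]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        xi = M[i][n]
        for j in range(i + 1, n):
            xi -= M[i][j] * x[j]
        x[i] = xi / M[i][i]
    return x
```

For the small system 2x + y = 3, x + 3y = 4 the routine returns x = y = 1, as substitution confirms.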


Systems of linear equations

6.1.3 Triangular factorization

This is an efficient direct method for solving the system of equations (6.1). It is particularly efficient when several systems of equations with the same matrix of coefficients have to be solved. The method involves finding lower and upper triangular matrices

L = (l_ij)_{n×n} with l_ij = 0 for j > i    (6.15a)

and

U = (u_ij)_{n×n} with u_ij = 0 for j < i,    (6.15b)

such that

A = LU.    (6.16)

The form of the matrices L and U, defined in (6.15), can be seen below in equations (6.18) and (6.19). We will describe the procedure for calculating L and U later. At this stage we will assume that L and U are known and show that the solution to (6.1) can be obtained fairly easily by forward and backward substitution. Substituting (6.16) into (6.1b) gives

LUx = b.    (6.17)

Defining the column vector z by

z = Ux,    (6.18)

i.e.

[u_11 u_12 u_13 ··· u_1n] [x_1]   [z_1]
[0    u_22 u_23 ··· u_2n] [x_2]   [z_2]
[0    0    u_33 ··· u_3n] [x_3] = [z_3]
[···                    ] [··· ]  [··· ]
[0    0    0    ··· u_nn] [x_n]   [z_n],

we write (6.17) in the form

Lz = b,    (6.19a)

i.e.

[l_11 0    0    ··· 0   ] [z_1]   [b_1]
[l_21 l_22 0    ··· 0   ] [z_2]   [b_2]
[l_31 l_32 l_33 ··· 0   ] [z_3] = [b_3]
[···                    ] [··· ]  [··· ]
[l_n1 l_n2 l_n3 ··· l_nn] [z_n]   [b_n].    (6.19b)

Thus to solve the system of equations (6.1) we first calculate z by solving the system of equations (6.19). This solution can be obtained using the forward substitution algorithm (see (6.7))

z_1 = b_1/l_11,  z_i = ( b_i - Σ_{k=1}^{i-1} l_ik z_k )/l_ii,  i = 2(1)n.    (6.20)

The required solution x can then be found by solving the upper triangular system of equations (6.18). This system of equations can be solved by the backward substitution algorithm (see (6.5))

x_n = z_n/u_nn,  x_i = ( z_i - Σ_{k=i+1}^{n} u_ik x_k )/u_ii,  i = n-1(-1)1.    (6.21)

We now discuss the calculation of the matrices L and U. The matrices L and U are not uniquely determined by the matrix equation (6.16). These two matrices together contain n² + n unknown elements (this is most easily seen by sliding the two matrices on top of each other; the overlapping diagonals give the additional n unknown elements). Thus when comparing elements on the left- and right-hand sides of (6.16) we have n² equations and n² + n unknowns. Consequently we require a further n conditions to uniquely determine the matrices L and U. There are three additional sets of n conditions that are commonly used. These are

Doolittle's method:  l_ii = 1,  i = 1(1)n    (6.22)
Choleski's method:   l_ii = u_ii,  i = 1(1)n    (6.23)
Crout's method:      u_ii = 1,  i = 1(1)n.    (6.24)

There is no major difference between the three methods or significant advantage of one method over the others. We will describe Doolittle's method. To summarize the problem, we wish to determine L and U such that A = LU, where

A = (a_ij)_{n×n},  L = (l_ij)_{n×n},  U = (u_ij)_{n×n}

and

    [1    0     0     ··· 0]       [u_11 u_12 u_13 ··· u_1n]
    [l_21 1     0     ··· 0]       [0    u_22 u_23 ··· u_2n]
L = [l_31 l_32  1     ··· 0],  U = [0    0    u_33 ··· u_3n].    (6.25)
    [···                   ]       [···                    ]
    [l_n1 l_n2  l_n3  ··· 1]       [0    0    0    ··· u_nn]

By matrix multiplication we see that

a_ij = Σ_{k=1}^{n} l_ik u_kj

and since l_ik = 0 for k > i and u_kj = 0 for k > j this reduces to

a_ij = Σ_{k=1}^{min(i,j)} l_ik u_kj,

or

a_ij = l_i1 u_1j + l_i2 u_2j + ··· + l_ii u_ij  for i ≤ j    (6.26)
a_ij = l_i1 u_1j + l_i2 u_2j + ··· + l_ij u_jj  for i > j.    (6.27)

Returning to the worked example: l_21 and u_1j for j = 2, 3, 4 have already been calculated (the first row of U). Hence the second row of U is

u_2 = [0  3.9  0.4  -2.5].

Using (6.32d) the elements in the second column of L are given by

l_j2 = (a_j2 - l_j1 u_12)/u_22 = (a_j2 + 3 l_j1)/3.9

for j = 3, 4. Thus the second column of L is

l_2 = (0  1  1.0769  -0.1795)^T.

Returning to (6.32c) the elements in the third row of U are given by

u_3j = a_3j - Σ_{k=1}^{2} l_3k u_kj = a_3j - (0.4 u_1j + 1.0769 u_2j)

for j = 3, 4 and consequently the third row is

u_3 = [0  0  5.7692  2.6923].

Using (6.32d) the only element to be calculated in the third column of L is

l_j3 = ( a_j3 - Σ_{k=1}^{2} l_jk u_k3 )/u_33 = (a_j3 - 2 l_j1 - 0.4 l_j2)/5.7692

for j = 4. Hence the third column of L is

l_3 = [0  0  1  0.1511]^T.

Using (6.32c) the only element to calculate in the fourth row of U is

u_4j = a_4j - Σ_{k=1}^{3} l_4k u_kj = a_4j - (0.1 u_1j - 0.1795 u_2j + 0.1511 u_3j)

with j = 4. Hence the fourth row of U is

u_4 = [0  0  0  0.6444].

No calculation is needed for the fourth column of L, which is

l_4 = [0  0  0  1]^T.

Thus the matrices L and U are given by

    [1    0       0      0]       [10  -3   2       5     ]
L = [0.3  1       0      0],  U = [0   3.9  0.4    -2.5   ].
    [0.4  1.0769  1      0]       [0   0    5.7692  2.6923]
    [0.1 -0.1795  0.1511 1]       [0   0    0       0.6444]

Note that the calculation can be checked by forming the product LU, which should give the original matrix of coefficients A. For this problem, equation (6.19b) takes the form

[1    0       0      0] [z_1]   [30]
[0.3  1       0      0] [z_2] = [ 5]
[0.4  1.0769  1      0] [z_3]   [10]
[0.1 -0.1795  0.1511 1] [z_4]   [ 6]

and, using the forward substitution algorithm (6.20) (note l_jj = 1 for all j), we calculate, in turn,

z_1 = b_1 = 30
z_2 = b_2 - l_21 z_1 = 5 - 0.3 × 30 = -4
z_3 = b_3 - (l_31 z_1 + l_32 z_2) = 10 - (0.4 × 30 + 1.0769 × (-4)) = 2.3076
z_4 = b_4 - (l_41 z_1 + l_42 z_2 + l_43 z_3) = 6 - (0.1 × 30 + (-0.1795) × (-4) + 0.1511 × 2.3076) = 1.9333.

Hence the system of equations (6.18) takes the form

[10  -3   2       5     ] [x_1]   [30    ]
[0   3.9  0.4    -2.5   ] [x_2] = [-4    ]
[0   0    5.7692  2.6923] [x_3]   [2.3076]
[0   0    0       0.6444] [x_4]   [1.9333]

and, using the backward substitution algorithm (6.21), we calculate, in turn,

x_4 = z_4/u_44 = 1.9333/0.6444 = 3.0002
x_3 = (z_3 - u_34 x_4)/u_33 = (2.3076 - 2.6923 × 3.0002)/5.7692 = -1.0001
x_2 = (z_2 - u_23 x_3 - u_24 x_4)/u_22 = (-4 - 0.4 × (-1.0001) + 2.5 × 3.0002)/3.9 = 1.0001
x_1 = (z_1 - u_12 x_2 - u_13 x_3 - u_14 x_4)/u_11 = (30 + 3 × 1.0001 - 2 × (-1.0001) - 5 × 3.0002)/10 = 2.0000.

Since we have carried out all intermediate working to four decimal places we round our final results to three decimal places to obtain

x_1 = 2.000,  x_2 = 1.000,  x_3 = -1.000,  x_4 = 3.000.

It can be checked by substitution that the original system of equations has the integer solutions

x_1 = 2,  x_2 = 1,  x_3 = -1,  x_4 = 3.
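The Doolittle procedure of this section, alternating rows of U with columns of L and then applying (6.20) and (6.21), can be sketched in Python. This is an illustration with 0-based indices; the test system is reconstructed from the triangular factors of the worked example above (forming LU with those factors reproduces it).

```python
def doolittle_lu(A):
    # Doolittle factorization: l_ii = 1, rows of U and columns of L in turn
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):                  # ith row of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):              # ith column of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    z = [0.0] * n
    for i in range(n):                         # forward substitution (6.20), l_ii = 1
        z[i] = b[i] - sum(L[i][k] * z[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):             # backward substitution (6.21)
        x[i] = (z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[10.0, -3.0, 2.0, 5.0],
     [3.0, 3.0, 1.0, -1.0],
     [4.0, 3.0, 7.0, 2.0],
     [1.0, -1.0, 1.0, 2.0]]
b = [30.0, 5.0, 10.0, 6.0]
L, U = doolittle_lu(A)
x = lu_solve(L, U, b)
```

With exact arithmetic this gives l_21 = 0.3, u_22 = 3.9 and the solution (2, 1, -1, 3), matching the hand calculation.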

6.2 ITERATIVE TECHNIQUES

With non-linear equations, such as those encountered in Chapter 2, iterative techniques may be the only means of obtaining the solution to an equation or system of equations. With linear equations the solution can usually be obtained either by direct or by iterative methods. Direct methods may seem to be the obvious choice for obtaining the solution of a system of linear equations. However, there are problems for which iterative techniques provide an equally efficient or more efficient means of obtaining the solution. For linear systems of small dimension, iterative techniques are seldom used, since the time required for sufficient accuracy exceeds that required by direct methods. However, for large linear systems with a sparse matrix of coefficients (a matrix with a large number of zero entries is said to be sparse) iterative techniques can be efficient in terms of computer storage and time requirements. Such systems of equations arise frequently in the numerical solution of boundary-value problems and partial differential equations. We want to obtain an iterative scheme that will generate a sequence of estimates x_i^{(k)}, i = 1(1)n, with the intention that

x_i^{(k)} → x_i,  i = 1(1)n as k → ∞.

This can be achieved by relating new estimates x_i^{(k+1)}, i = 1(1)n, to known estimates x_i^{(k)}. There are various ways of doing this for a given linear system of equations. Some of the commonly used methods are presented in sections 6.2.1, 6.2.2 and 6.2.3. The iterative technique described in section 6.2.5, namely iterative improvement, is a special method for detecting and removing round-off error.

6.2.1 Jacobi's method

This scheme can be obtained by simply rearranging the system of equations (6.1a) such that x_i is isolated on the left-hand side of the ith equation for i = 1(1)n,

x_i = ( b_i - Σ_{j=1, j≠i}^{n} a_ij x_j )/a_ii,    (6.34)

and indicating the iteration by attaching 'k+1' to the isolated x and attaching 'k' to all other x's on the right-hand side,

x_i^{(k+1)} = ( b_i - Σ_{j=1, j≠i}^{n} a_ij x_j^{(k)} )/a_ii;    (6.35)

taking k = 0, 1, 2, ... in turn generates the sequence x^{(1)}, x^{(2)}, ... from an initial guess x^{(0)}. In the Gauss-Seidel scheme the latest estimates x_j^{(k+1)} are used on the right-hand side as soon as they are available, giving

x_i^{(k+1)} = ( b_i - Σ_{j=1}^{i-1} a_ij x_j^{(k+1)} - Σ_{j=i+1}^{n} a_ij x_j^{(k)} )/a_ii,    (6.40)

which should be compared with the Jacobi scheme in (6.35).
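Both updates are a few lines of code. A Python sketch (illustrative; the function names are ours), with the Jacobi sweep (6.35) using only the previous iterate and the Gauss-Seidel sweep (6.40) overwriting components as it goes, applied to the diagonally dominant system of the example that follows:

```python
def jacobi_sweep(A, b, x):
    # one Jacobi iteration (6.35): every component uses the old estimates
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_sweep(A, b, x):
    # one Gauss-Seidel iteration (6.40): new components are used immediately
    n = len(b)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

A = [[-19.0, 1.0, 5.0, 3.0],
     [2.0, -17.0, 2.0, 1.0],
     [3.0, 1.0, -20.0, 7.0],
     [1.0, 4.0, 3.0, -23.0]]
b = [-34.0, 24.0, 63.0, -73.0]
x = [0.0] * 4
for _ in range(50):
    x = gauss_seidel_sweep(A, b, x)
residual = max(abs(sum(A[i][j] * x[j] for j in range(4)) - b[i]) for i in range(4))
```

The first Gauss-Seidel sweep from the zero vector reproduces x_1^{(1)} = 34/19 ≈ 1.78947, and repeated sweeps drive the residual toward zero, as expected for a diagonally dominant matrix.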

Example 6.3

Use the Gauss-Seidel method to solve the system of equations

-19x_1 + x_2 + 5x_3 + 3x_4 = -34
2x_1 - 17x_2 + 2x_3 + x_4 = 24
3x_1 + x_2 - 20x_3 + 7x_4 = 63
x_1 + 4x_2 + 3x_3 - 23x_4 = -73.

Rearranging these equations to apply the Gauss-Seidel iteration yields

x_1^{(k+1)} = (-34 - x_2^{(k)} - 5x_3^{(k)} - 3x_4^{(k)})/(-19)
x_2^{(k+1)} = (24 - 2x_1^{(k+1)} - 2x_3^{(k)} - x_4^{(k)})/(-17)
x_3^{(k+1)} = (63 - 3x_1^{(k+1)} - x_2^{(k+1)} - 7x_4^{(k)})/(-20)
x_4^{(k+1)} = (-73 - x_1^{(k+1)} - 4x_2^{(k+1)} - 3x_3^{(k+1)})/(-23)

and taking the initial guess x^{(0)} = (0  0  0  0)^T leads to the estimates

x_1^{(1)} = 34/19 = 1.78947
x_2^{(1)} = (24 - 2(1.78947))/(-17) = -1.20124
x_3^{(1)} = (63 - 3(1.78947) + 1.20124)/(-20) = -2.94164
x_4^{(1)} = (-73 - 1.78947 - 4(-1.20124) - 3(-2.94164))/(-23) = 2.65911

Continuing the iteration to obtain four-decimal-place accuracy leads to the successive estimates

x_1: