Numerical Analysis: A First Course in Scientific Computation [Reprint 2021 ed.] 9783110891997, 9783110140316



de Gruyter Textbook Deuflhard/Hohmann • Numerical Analysis

Peter Deuflhard Andreas Hohmann

Numerical Analysis A First Course in Scientific Computation Translated from the German by F. A. Potra and F. Schulz

Walter de Gruyter · Berlin · New York 1995

Authors Peter Deuflhard Andreas Hohmann Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10 D-10711 Berlin Germany

and

Freie Universität Berlin Institut für Mathematik Arnimallee 2-6 D-14195 Berlin Germany

1991 Mathematics Subject Classification: Primary: 65-01; Secondary: 65Bxx, 65Cxx, 65Dxx, 65Fxx, 65Gxx. Title of the German original edition: Numerische Mathematik I, Eine algorithmisch orientierte Einführung, 2. Auflage, Walter de Gruyter · Berlin · New York, 1993. With 62 figures and 14 tables.

Printed on acid-free paper which falls within the guidelines of the ANSI to ensure permanence and durability.

Library of Congress Cataloging-in-Publication Data

Deuflhard, P. (Peter) [Numerische Mathematik I. English] Numerical analysis : a first course in scientific computation / Peter Deuflhard, Andreas Hohmann ; translated by F. A. Potra and F. Schulz, p. cm. Includes bibliographical references (p. ) and index. ISBN 3-11-014031-4 (cloth : acid-free). ISBN 3-11-013882-4 (pbk. : acid-free) 1. Numerical analysis - Data processing. I. Hohmann, Andreas, 1964. II. Title. QA297.D4713 1995 519.4—dc20 94-46993 CIP

Die Deutsche Bibliothek — Cataloging-in-Publication-Data Numerical analysis / Peter Deuflhard ; Andreas Hohmann. Transl. by F. A. Potra and F. Schulz. — Berlin ; New York : de Gruyter. (De Gruyter textbook) Bd. 2 verf. von Peter Deuflhard und Folkmar Bornemann NE: Deuflhard, Peter; Hohmann, Andreas; Bornemann, Folkmar 1. A first course in scientific computation. - 1995 ISBN 3-11-013882-4 kart. ISBN 3-11-014031-4 Pp.

© Copyright 1995 by Walter de Gruyter & Co., D-10785 Berlin. All rights reserved, including those of translation into foreign languages. No part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Printed in Germany. Printing: Gerike GmbH, Berlin. Binding: Lüderitz & Bauer GmbH, Berlin.

Preface

The topic of Numerical Analysis is the development and the understanding of computational methods for the numerical solution of mathematical problems. Such problems typically arise from areas outside of Mathematics, such as Science and Engineering. Numerical Analysis is therefore situated directly at the confluence of Mathematics, Computer Science, the Natural Sciences, and Engineering. A new interdisciplinary field, called Scientific Computing, has been evolving rapidly. The driving force of this evolution is the recent vigorous development of both computers and algorithms, which has encouraged the refinement of mathematical models for physical, chemical, technical or biological phenomena to such an extent that their computer simulations now describe reality with sufficient accuracy. In this process, the complexity of solvable problems has been expanding continuously. New areas of the natural sciences and engineering, which until recently had been considered rather closed, have thus opened up. Today, Scientific Computing contributes to numerous areas of industry (chemistry, electronics, robotics, the automotive industry, air and space technology, etc.) as well as to important problems of society (balance of economy and ecology in the use of primary energy, global climate models, spread of epidemics).

The momentum of the entire interdisciplinary network of Scientific Computing is felt at each of its nodes, including, of course, Numerical Analysis. Consequently, fundamental changes in the selection of the material and in the presentation of lectures and seminars have occurred, with an impact even on introductory courses. The present book takes this development into account. It is intended as an introductory course for students of mathematics and the natural sciences, as well as for mathematicians and natural scientists working in research and development in industry and universities. Readers are assumed to have basic knowledge of undergraduate Linear Algebra and Calculus. More advanced mathematical knowledge, say about differential equations, is not a required prerequisite, since this elementary textbook intentionally excludes the numerics of differential equations. As a further deliberate restriction, the presentation of topics like interpolation or integration is limited to the one-dimensional case. Nevertheless, many essential concepts of modern Numerical Analysis, which play an important role in


numerical differential equation solving, are treated on the simplest possible model problems. The aim of the book is to develop algorithmic feeling and thinking. After all, the algorithmic approach is historically one of the roots of today's Mathematics. That is why historical names like Gauss, Newton and Chebyshev are found in numerous places of the subsequent text together with contemporary names. The orientation towards algorithms should by no means be misunderstood: in fact, the most efficient algorithms do require a substantial amount of mathematical theory, which will be developed in the text. As a rule, elementary mathematical arguments are preferred. Wherever meaningful, the reasoning appeals to geometric intuition, which also explains the quite large number of graphical representations. Notions like scalar product and orthogonality are used throughout, in the finite dimensional case as well as in function spaces. In spite of the elementary presentation, the book does contain a significant number of rather recent results, some of which have not been published elsewhere. In addition, some of the more classical results are derived in a way that differs significantly from the more standard derivations. Last but not least, the selection of the material expresses the scientific taste of the authors.

The first author has taught Numerical Analysis courses since 1978 at different German institutions such as the University of Technology in Munich, the University of Heidelberg, and now the Free University of Berlin. Over the years he has co-influenced the dynamic development of Scientific Computing through his research activities. Needless to say, he has presented his research results in numerous invited talks at international conferences and seminars at renowned universities and industrial research centers all over the world. The second author entered the field of Numerical Analysis rather recently, after having graduated in pure mathematics from the University of Bonn. Both authors hope that this combination of a senior and a junior has had a stimulating effect on the presentation in this book. Moreover, it is certainly a clear indication of the old dream of the unity of pure and applied mathematics.

Of course, the authors stand on the shoulders of others. In this respect, the first author remembers with gratitude the time when he was a graduate student of Roland Bulirsch. Numerous ideas of the colleagues Ernst Hairer and Gerhard Wanner (University of Geneva) and intensive discussions with Wolfgang Dahmen (Technical University of Aachen) have influenced our presentation. Cordial thanks go to Folkmar Bornemann for his many stimulating ideas and discussions, especially on the formulation of the error analysis in Chapter 2. We also want to thank our colleagues at the Konrad Zuse Center Berlin, in particular Michael Wulkow, Ralf Kornhuber, Ulli Nowak and Karin Gatermann, for many suggestions and a constructive atmosphere.


This book is a translation of our German textbook "Numerische Mathematik I (Eine algorithmisch orientierte Einführung)", second edition. Many thanks to our translators, Florian Potra and Friedmar Schulz, and to Erlinda Cadano-Körnig for her excellent work in the final polishing of the English version. May this version be received by Numerical Analysis students as well as the original German version was.

Peter Deuflhard and Andreas Hohmann

Berlin, May 1994

Teaching Hints

The present textbook addresses students of Mathematics, Computer Science and the Natural Sciences, covering typical material for introductory courses in Numerical Analysis with a clear emphasis on Scientific Computing.

We start with Gaussian elimination for linear equations as a classical algorithm and discuss additional devices such as pivoting strategies and iterative refinement. Chapter 2 contains the indispensable error analysis based on the fundamental ideas of Wilkinson. The condition of a problem and the stability of algorithms are presented in a unified framework and exemplified by illustrative cases. Only the linearized theory of error analysis is presented, avoiding, however, the typical "ε-battle"; rather, only differentiation is needed as an analytical tool. As a specialty we derive a stability indicator which allows for a rather simple classification of numerical stability. The theory is then worked out for the case of linear equations, thus supplying a posteriori a deeper understanding of Chapter 1.

In Chapter 3 we deal with methods of orthogonalization in connection with linear least squares problems. We introduce the extremely useful calculus of pseudoinverses, which is then immediately applied in Chapter 4. There, we consider iterative methods for systems of nonlinear equations (Newton's method), nonlinear least squares problems (Gauss-Newton method) and parameter-dependent problems (continuation methods) in close mutual connection. Special attention is given to the affine invariant form of the convergence theory and the iterative algorithms.

A presentation of the power method (direct and inverse) and the QR-algorithm for symmetric eigenvalue problems follows in Chapter 5. The restriction to the real symmetric case is motivated from the beginning by a condition analysis of the general eigenvalue problem. In this context the singular value decomposition, which is so important in applications, fits in perfectly.

After the first five rather closely connected chapters, the remaining four chapters also comprise a closely connected sequence. The sequence begins in Chapter 6 with an extensive treatment of the theory of three-term recurrence relations, which play a key role in the realization of orthogonal projections in function spaces. Moreover, the significant recent spread of symbolic computing has renewed interest in special functions also within Numerical Analysis.


The condition of three-term recurrences is presented via the discrete Green's function. Numerical algorithms for the computation of special functions are exemplified for spherical harmonics and Bessel functions.

In Chapter 7 classical interpolation and approximation in the one-dimensional special case are presented first, followed by non-classical methods like Bezier techniques and splines, which nowadays play a central role in CAD (Computer Aided Design) and CAGD (Computer Aided Geometric Design), i.e. special disciplines of computer graphics. Our presentation in Chapter 8, which treats iterative methods for the solution of large symmetric linear systems, is conveniently based on Chapter 6 (three-term recurrences) and Chapter 7 (min-max property of Chebyshev polynomials). The same is true for the Lanczos algorithm for large symmetric eigenvalue problems.

The final Chapter 9 turns out to be a bit longer: it carries the bulk of the task of explaining principles of the numerical solution of differential equations by means of the simplest problem type, which here is numerical quadrature. After the historical Newton-Cotes formulas and the Gauss quadrature, we progress towards the classical Romberg quadrature as a first example of an adaptive algorithm, which, however, only adapts the approximation order. The formulation of the quadrature problem as an initial value problem opens the possibility of working out a fully adaptive Romberg quadrature (with order and stepsize control) and, at the same time, a didactic first step into extrapolation methods, which play a prominent role in the solution of ordinary differential equations. The alternative formulation of the quadrature problem as a boundary value problem is exploited for the derivation of an adaptive multigrid algorithm: in this way we once more present an important class of methods for ordinary and partial differential equations in the simplest possible case.

For a typical university term the contents of the book might be too rich. For a possible partitioning of the presented material into two parts we recommend the closely connected sequences Chapters 1-5 and Chapters 6-9. Of course, different "teaching paths" can be chosen. For this purpose, we give the following connection diagram:


As can be seen from this diagram, the chapters of the last row (Chapters 4, 5, 8, and 9) can be skipped without spoiling the flow of teaching, according to personal scientific taste. Chapter 4 could be integrated into a course on "Nonlinear optimization", Chapters 5 and 8 into a course on "Numerical linear algebra", or Chapter 9 into "Numerical solution of differential equations". At the end of each chapter we added exercises. Beyond these explicit exercises, further programming exercises may be selected from the numerous algorithms which are given informally (usually as pseudocode) throughout the textbook. All algorithms mentioned in the text are internationally accessible via the electronic library eLib of the Konrad Zuse Center. In the interactive mode eLib can be reached via:

Datex-P: +45050331033 (WIN), +2043623331033 (IXI)
INTERNET: elib.ZIB-berlin.de (130.73.108.11)
login: elib (no password necessary)

In addition, there is the following e-mail access:

X.400: S=eLib;OU=sc;P=ZIB-Berlin;A=dbp;C=de
INTERNET: [email protected]
BITNET: [email protected]
UUCP: unido!sc.ZIB-Berlin.dbp.de!eLib

Especially for users of the Internet there is an "anonymous ftp" access (elib.ZIB-Berlin.de, 130.73.108.11).

Contents

1 Linear Systems
  1.1 Solution of Triangular Systems
  1.2 Gaussian Elimination
  1.3 Pivoting Strategies and Iterative Refinement
  1.4 Cholesky's Method for Symmetric Positive Definite Matrices
  1.5 Exercises
2 Error Analysis
  2.1 Sources of Errors
  2.2 Condition of Problems
    2.2.1 Norm-wise condition analysis
    2.2.2 Component-wise condition analysis
  2.3 Stability of Algorithms
    2.3.1 Stability concepts
    2.3.2 Forward analysis
    2.3.3 Backward analysis
  2.4 Application to Linear Systems
    2.4.1 A closer look at solvability
    2.4.2 Backward analysis of Gaussian elimination
    2.4.3 Assessment of approximate solutions
  2.5 Exercises
3 Linear Least Squares Problems
  3.1 Least Squares Method of Gauss
    3.1.1 Formulation of the problem
    3.1.2 Normal equations
    3.1.3 Condition
    3.1.4 Solution of normal equations
  3.2 Orthogonalization Methods
    3.2.1 Givens rotations
    3.2.2 Householder reflections
  3.3 Generalized Inverses
  3.4 Exercises
4 Nonlinear Systems and Least Squares Problems
  4.1 Fixed Point Iterations
  4.2 Newton's Method for Nonlinear Systems
  4.3 Gauss-Newton Method for Nonlinear Least Squares Problems
  4.4 Nonlinear Systems Depending on Parameters
    4.4.1 Structure of the solution
    4.4.2 Continuation methods
  4.5 Exercises
5 Symmetric Eigenvalue Problems
  5.1 Condition of General Eigenvalue Problems
  5.2 Power Method
  5.3 QR-Algorithm for Symmetric Eigenvalue Problems
  5.4 Singular Value Decomposition
  5.5 Exercises
6 Three-Term Recurrence Relations
  6.1 Theoretical Foundations
    6.1.1 Orthogonality and three-term recurrence relations
    6.1.2 Homogeneous and non-homogeneous recurrence relations
  6.2 Numerical Aspects
    6.2.1 Condition numbers
    6.2.2 Idea of the Miller algorithm
  6.3 Adjoint Summation
    6.3.1 Summation of dominant solutions
    6.3.2 Summation of minimal solutions
  6.4 Exercises
7 Interpolation and Approximation
  7.1 Classical Polynomial Interpolation
    7.1.1 Uniqueness and condition number
    7.1.2 Hermite interpolation and divided differences
    7.1.3 Approximation error
    7.1.4 Min-max property of Chebyshev polynomials
  7.2 Trigonometric Interpolation
  7.3 Bezier Techniques
    7.3.1 Bernstein polynomials and Bezier representation
    7.3.2 De Casteljau's algorithm
  7.4 Splines
    7.4.1 Spline spaces and B-splines
    7.4.2 Spline interpolation
    7.4.3 Computation of cubic splines
  7.5 Exercises
8 Large Symmetric Systems of Equations and Eigenvalue Problems
  8.1 Classical Iteration Methods
  8.2 Chebyshev Acceleration
  8.3 Method of Conjugate Gradients
  8.4 Preconditioning
  8.5 Lanczos Methods
  8.6 Exercises
9 Definite Integrals
  9.1 Quadrature Formulas
  9.2 Newton-Cotes Formulas
  9.3 Gauss-Christoffel Quadrature
    9.3.1 Construction of the quadrature formula
    9.3.2 Computation of knots and weights
  9.4 Classical Romberg Quadrature
    9.4.1 Asymptotic expansion of the trapezoidal sum
    9.4.2 Idea of extrapolation
    9.4.3 Details of the algorithm
  9.5 Adaptive Romberg Quadrature
    9.5.1 Principle of adaptivity
    9.5.2 Estimation of the approximation error
    9.5.3 Derivation of the algorithm
  9.6 Hard Integration Problems
  9.7 Adaptive Multigrid Quadrature
    9.7.1 Local error estimation and refinement rules
    9.7.2 Global error estimation and details of the algorithm
  9.8 Exercises
References
Notation
Index

1 Linear Systems

We start with the classical Gaussian elimination method for solving systems of linear equations. Carl Friedrich Gauss (1777-1855) describes the method in his 1809 work on celestial mechanics "Theoria Motus Corporum Coelestium" [33] by saying "the values can be obtained with the usual elimination method". The method was used there in connection with the least squares method (cf. Section 3). In fact the method had been used previously by Lagrange in 1759 and had been known in China as early as the first century B.C. The problem is to solve a system of n linear equations
$$\begin{array}{ccccccccc}
a_{11}x_1 &+& a_{12}x_2 &+& \cdots &+& a_{1n}x_n &=& b_1 \\
a_{21}x_1 &+& a_{22}x_2 &+& \cdots &+& a_{2n}x_n &=& b_2 \\
 & & & & \vdots & & & & \\
a_{n1}x_1 &+& a_{n2}x_2 &+& \cdots &+& a_{nn}x_n &=& b_n
\end{array}$$

or, in short form, $Ax = b$, where $A \in \mathrm{Mat}_n(\mathbf{R})$ is a real $(n,n)$-matrix and $b, x \in \mathbf{R}^n$ are real $n$-vectors. Before starting to compute the solution $x$, we should ask ourselves whether or not the system is solvable. From linear algebra we know the following result, which characterizes solvability in terms of the determinant of the matrix $A$.

Theorem 1.1 Let $A \in \mathrm{Mat}_n(\mathbf{R})$ be a real square matrix with $\det A \neq 0$ and $b \in \mathbf{R}^n$. Then there exists a unique solution $x \in \mathbf{R}^n$ of $Ax = b$.

Each elimination step $A^{(k)} \to A^{(k+1)}$ can be represented as a premultiplication by a matrix $L_k \in \mathrm{Mat}_n(\mathbf{R})$, i.e.
$$A^{(k+1)} = L_k A^{(k)} \quad \text{and} \quad b^{(k+1)} = L_k b^{(k)} .$$
(In case of operations on columns one obtains an analogous postmultiplication.) The matrix
$$L_k = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & -l_{k+1,k} & 1 & \\ & & \vdots & & \ddots & \\ & & -l_{n,k} & & & 1 \end{pmatrix}$$
is called a Frobenius matrix; it has the nice property that its inverse $L_k^{-1}$ is obtained from $L_k$ by changing the signs of the $l_{ik}$'s. Furthermore the product of the $L_k^{-1}$'s satisfies
$$L := L_1^{-1} \cdots L_{n-1}^{-1} = \begin{pmatrix} 1 & & & \\ l_{21} & 1 & & \\ l_{31} & l_{32} & 1 & \\ \vdots & & \ddots & \ddots \\ l_{n1} & \cdots & l_{n,n-1} & 1 \end{pmatrix} .$$
In this way we have reduced the system $Ax = b$ to the equivalent triangular system $Rx = z$ with
$$R = L^{-1}A \quad \text{and} \quad z = L^{-1}b .$$


A lower (resp. upper) triangular matrix whose main diagonal elements are all equal to one is called a unit lower (resp. upper) triangular matrix. The above representation A = LR of the matrix A as a product of a unit lower triangular matrix L and an upper triangular matrix R is called the Gaussian triangular factorization, or briefly the LR factorization of A. In the English literature the matrix R is often denoted by U (from upper triangular) and the corresponding Gaussian triangular factorization is called the LU factorization. If such a factorization exists, then L and R are uniquely determined (cf. Exercise 1.2).

Algorithm 1.4 Gaussian Elimination.
a) A = LR    (triangular factorization, R upper and L unit lower triangular matrix)
b) Lz = b    (forward substitution)
c) Rx = z    (backward substitution).

The memory scheme for the Gaussian elimination is based upon the representation (1.5) of the matrices $A^{(k)}$. In the remaining memory locations one can store the $l_{ik}$'s, because the other elements, with values 0 or 1, do not have to be stored. The entire memory cost for Gaussian elimination amounts to $n(n+1)$ memory locations, i.e. as many as needed to define the problem. The cost in terms of the number of multiplications is $\sum_{k=1}^{n-1} k^2 \approx n^3/3$ for a) and $\sum_{k=1}^{n} k \approx n^2/2$ both for b) and c). Therefore the main cost comes from the LR factorization. However, if different right hand sides $b_1, \ldots, b_j$ are considered, then this factorization has to be carried out only once.
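To make the three stages of Algorithm 1.4 concrete, the following is a minimal Python sketch of the LR (LU) factorization without pivoting together with forward and backward substitution. The function names and the use of NumPy are illustrative choices, not part of the book's text, and the sketch assumes that all pivots $a_{kk}^{(k)}$ are nonzero.

```python
import numpy as np

def lr_factorize(A):
    """Return unit lower triangular L and upper triangular R with A = L @ R.
    Assumes all pivots are nonzero (no pivoting, cf. Algorithm 1.4 a))."""
    R = np.array(A, dtype=float)
    n = R.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]      # multiplier l_ik
            R[i, k:] -= L[i, k] * R[k, k:]   # eliminate entry (i, k)
    return L, R

def forward_substitution(L, b):
    """Solve L z = b for unit lower triangular L (Algorithm 1.4 b))."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):
        z[i] = b[i] - L[i, :i] @ z[:i]
    return z

def backward_substitution(R, z):
    """Solve R x = z for upper triangular R (Algorithm 1.4 c))."""
    n = len(z)
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (z[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x

# usage: solve A x = b for a small example matrix
A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
b = [1.0, 2.0, 3.0]
L, R = lr_factorize(A)
x = backward_substitution(R, forward_substitution(L, b))
print(x, np.allclose(np.array(A) @ x, b))
```

Once L and R are stored, additional right hand sides only require the two triangular solves, in line with the cost count above.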

1.3 Pivoting Strategies and Iterative Refinement

As seen from the simple example
$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$
there are cases where the triangular factorization fails even when $\det A \neq 0$. However, an interchange of rows leads to the simplest LR factorization we can imagine, namely
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} .$$

In the numerical implementation of Gaussian elimination difficulties can arise not only when pivot elements vanish, but also when they are "too small".

Example 1.5 (cf. [30]) We compute the solution of the system

(a)  $1.00 \cdot 10^{-4}\, x_1 + 1.00\, x_2 = 1.00$
(b)  $1.00\, x_1 + 1.00\, x_2 = 2.00$

on a machine which, for the sake of simplicity, works only with three exact decimal figures. By completing the numbers with zeros, we obtain the "exact" solution with four correct figures

$$x_1 = 1.000, \qquad x_2 = 0.9999,$$

and with three correct figures

$$x_1 = 1.00, \qquad x_2 = 1.00 .$$

Let us now carry out the Gaussian elimination on our computer, i.e. in three exact decimal figures:

$$l_{21} = \frac{a_{21}}{a_{11}} = \frac{1.00}{1.00 \cdot 10^{-4}} = 1.00 \cdot 10^{4},$$
$$(1.00 - 1.00 \cdot 10^{4} \cdot 1.00 \cdot 10^{-4})\, x_1 + (1.00 - 1.00 \cdot 10^{4} \cdot 1.00)\, x_2 = 2.00 - 1.00 \cdot 10^{4} \cdot 1.00 .$$

Thus we obtain the upper triangular system

$$1.00 \cdot 10^{-4}\, x_1 + 1.00\, x_2 = 1.00$$
$$-1.00 \cdot 10^{4}\, x_2 = -1.00 \cdot 10^{4}$$

and the "solution"

$$x_2 = 1.00 \ \text{(true)}, \qquad x_1 = 0.00 \ \text{(false!)} .$$

However, if before starting the elimination we interchange the rows,

(a)  $1.00\, x_1 + 1.00\, x_2 = 2.00$
(b)  $1.00 \cdot 10^{-4}\, x_1 + 1.00\, x_2 = 1.00,$

then $l_{21} = 1.00 \cdot 10^{-4}$, which yields the upper triangular system

$$1.00\, x_1 + 1.00\, x_2 = 2.00$$
$$1.00\, x_2 = 1.00$$

as well as the "true solution"

$$x_2 = 1.00, \qquad x_1 = 1.00 .$$

By interchanging the rows in the above example we obtain $|l_{21}| \le 1$ and $|\tilde a_{11}| \ge |\tilde a_{21}|$. Thus, the new pivot $\tilde a_{11}$ is the largest element, in absolute value, of the first column. We can deduce the partial pivoting or column pivoting strategy from the above considerations. This strategy is to choose at each Gaussian elimination step as pivot row the one having the largest element in absolute value within the pivot column. More precisely, we can formulate the following algorithm:

Algorithm 1.6 Gaussian elimination with column pivoting

a) In elimination step $A^{(k)} \to A^{(k+1)}$ choose a $p \in \{k,\ldots,n\}$ such that
$$|a_{pk}^{(k)}| \ge |a_{jk}^{(k)}| \quad \text{for } j = k,\ldots,n .$$
Row $p$ becomes pivot row.

b) Interchange rows $p$ and $k$: $A^{(k)} \to \tilde A^{(k)}$ with
$$\tilde a_{ij}^{(k)} = \begin{cases} a_{pj}^{(k)} & \text{if } i = k, \\ a_{kj}^{(k)} & \text{if } i = p, \\ a_{ij}^{(k)} & \text{otherwise.} \end{cases}$$
Now we have
$$|l_{ik}| = \frac{|\tilde a_{ik}^{(k)}|}{|\tilde a_{kk}^{(k)}|} = \frac{|a_{ik}^{(k)}|}{|a_{pk}^{(k)}|} \le 1 .$$

c) Perform the next elimination step for $\tilde A^{(k)}$, i.e. $\tilde A^{(k)} \to A^{(k+1)}$.
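A hedged Python sketch of Algorithm 1.6 follows: the same elimination as in the earlier sketch, but in each step the row with the largest pivot-column entry in absolute value is swapped into the pivot position. The bookkeeping of the permutation via an index vector is an implementation choice, not prescribed by the text.

```python
import numpy as np

def lr_factorize_pivot(A):
    """LR factorization with column (partial) pivoting: returns p, L, R with A[p] = L @ R."""
    R = np.array(A, dtype=float)
    n = R.shape[0]
    L = np.eye(n)
    p = np.arange(n)                           # row permutation as an index vector
    for k in range(n - 1):
        q = k + np.argmax(np.abs(R[k:, k]))    # pivot row: largest |a_ik^(k)| in column k
        if q != k:                             # interchange rows k and q
            R[[k, q], :] = R[[q, k], :]
            L[[k, q], :k] = L[[q, k], :k]
            p[[k, q]] = p[[q, k]]
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]        # |l_ik| <= 1 by construction
            R[i, k:] -= L[i, k] * R[k, k:]
    return p, L, R

# the matrix of Example 1.5: with pivoting the small pivot 1e-4 is avoided
A = [[1e-4, 1.0], [1.0, 1.0]]
p, L, R = lr_factorize_pivot(A)
print(p, np.allclose(np.array(A)[p], L @ R))
```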


Remark 1.7 Instead of column pivoting with row interchange one can also perform row pivoting with column interchange. Both strategies require at most $O(n^2)$ additional operations. If we combine both methods and look at each step for the largest element in absolute value of the entire remaining matrix, then we need $O(n^3)$ additional operations. This total pivoting strategy is therefore almost never employed.

In the following formal description of the triangular factorization with partial pivoting we use permutation matrices $P \in \mathrm{Mat}_n(\mathbf{R})$. For each permutation $\pi \in S_n$ we define the corresponding matrix
$$P_\pi = [\, e_{\pi(1)} \cdots e_{\pi(n)} \,],$$
where $e_j = (\delta_{1j},\ldots,\delta_{nj})^T$ is the $j$-th unit vector. A permutation $\pi$ of the rows of the matrix $A$ can be expressed as a premultiplication by $P_\pi$:

Permutation of rows $\pi$: $A \to P_\pi A$,

and analogously a permutation $\pi$ of the columns as a postmultiplication:

Permutation of columns $\pi$: $A \to A P_\pi$.

It is known from linear algebra that the mapping $\pi \mapsto P_\pi$ is a group homomorphism $S_n \to O(n)$ of the symmetric group $S_n$ into the orthogonal group $O(n)$. In particular we have
$$P_\pi^{-1} = P_\pi^T .$$
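The correspondence $\pi \mapsto P_\pi$ and the relation $P_\pi^{-1} = P_\pi^T$ can be checked directly in a few lines of Python; this is only an illustration of the statement above, not code from the book.

```python
import numpy as np

def permutation_matrix(pi):
    """P_pi = [e_pi(0), ..., e_pi(n-1)] for a permutation pi of 0,...,n-1."""
    n = len(pi)
    P = np.zeros((n, n))
    P[pi, np.arange(n)] = 1.0        # column j is the unit vector e_pi(j)
    return P

pi = [2, 0, 3, 1]
P = permutation_matrix(pi)
print(np.allclose(P.T @ P, np.eye(4)))          # P^{-1} = P^T (orthogonality)
A = np.arange(16.0).reshape(4, 4)
print(np.allclose(P @ A, A[np.argsort(pi)]))    # row i of P A is row pi^{-1}(i) of A
```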

The determinant of the permutation matrix is just the sign of the corresponding permutation, $\det P_\pi = \operatorname{sgn} \pi \in \{\pm 1\}$, i.e. it is equal to $+1$ if $\pi$ consists of an even number of transpositions, and $-1$ otherwise. The following proposition shows that, theoretically, the triangular factorization with partial pivoting fails only when the matrix $A$ is singular.

Theorem 1.8 For every invertible matrix $A$ there exists a permutation matrix $P$ such that a triangular factorization of the form $PA = LR$ is possible. Here $P$ can be chosen so that all elements of $L$ are less than or equal to one in absolute value, i.e. $|L| \le 1$ component-wise.


Proof. We employ the LR factorization algorithm with column pivoting. Since $\det A \neq 0$, there is a transposition $\tau_1 \in S_n$ such that the first diagonal element $a_{11}^{(1)}$ of the matrix $A^{(1)} = P_{\tau_1} A$ is different from zero and is also the largest element in absolute value in the first column, i.e.
$$|a_{11}^{(1)}| \ge |a_{i1}^{(1)}| \quad \text{for } i = 1,\ldots,n .$$
After eliminating the remaining elements of the first column we obtain the matrix
$$A^{(2)} = L_1 P_{\tau_1} A = \begin{pmatrix} a_{11}^{(1)} & * & \cdots & * \\ 0 & & & \\ \vdots & & B & \\ 0 & & & \end{pmatrix}$$

with a matrix $B \in \mathrm{Mat}_{n-1}(\mathbf{R})$. Proceeding in the same way with the remaining columns, we arrive after $n-1$ steps at an upper triangular matrix
$$R = L_{n-1} P_{\tau_{n-1}} \cdots L_1 P_{\tau_1} A . \qquad (1.6)$$
If $\pi \in S_n$ only permutes numbers $\ge k+1$, then the Frobenius matrix
$$L_k = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & -l_{k+1,k} & \ddots & \\ & & \vdots & & \\ & & -l_{n,k} & & 1 \end{pmatrix}$$


satisfies
$$\tilde L_k = P_\pi L_k P_\pi^{-1} = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & -l_{\pi(k+1),k} & \ddots & \\ & & \vdots & & \\ & & -l_{\pi(n),k} & & 1 \end{pmatrix} . \qquad (1.7)$$

Therefore we can separate the Frobenius matrices $L_k$ and the permutations $P_{\tau_k}$ by inserting in (1.6) the identities $P_{\tau_k}^{-1} P_{\tau_k}$, i.e.
$$R = L_{n-1} \cdot P_{\tau_{n-1}} L_{n-2} P_{\tau_{n-1}}^{-1} \cdot P_{\tau_{n-1}} P_{\tau_{n-2}} L_{n-3} P_{\tau_{n-2}}^{-1} P_{\tau_{n-1}}^{-1} \cdots L_1 P_{\tau_1} A .$$
Hence we obtain
$$R = \tilde L_{n-1} \cdots \tilde L_1 P_{\pi_0} A \quad \text{with} \quad \tilde L_k = P_{\pi_k} L_k P_{\pi_k}^{-1} ,$$

where $\pi_{n-1} := \mathrm{id}$ and $\pi_k = \tau_{n-1} \cdots \tau_{k+1}$ for $k = 0,\ldots,n-2$. Since the permutation $\pi_k$ interchanges in fact only numbers $\ge k+1$, the matrices $\tilde L_k$ are of the form (1.7). Consequently $P_{\pi_0} A = LR$ with $L := \tilde L_1^{-1} \cdots \tilde L_{n-1}^{-1}$, or explicitly

$$L = \begin{pmatrix} 1 & & & & \\ l_{\pi_1(2),1} & 1 & & & \\ l_{\pi_1(3),1} & l_{\pi_2(3),2} & 1 & & \\ \vdots & & & \ddots & \\ l_{\pi_1(n),1} & \cdots & & & 1 \end{pmatrix},$$
and therefore $|L| \le 1$.



Note that we have used the Gaussian elimination algorithm with column pivoting to constructively prove an existence theorem.

Remark 1.9 Let us also note that the determinant of A can be easily computed by using the PA = LR factorization of Theorem 1.8 via the formula


$$\det A = \det(P) \cdot \det(LR) = \operatorname{sgn}(\pi_0) \cdot r_{11} \cdots r_{nn} .$$
A warning should be made against the naive computation of determinants! As is well known, multiplication of a linear system by an arbitrary scalar $\alpha$ results in
$$\det(\alpha A) = \alpha^n \det A .$$
This trivial transformation may be used to convert a "small" determinant into an arbitrarily "large" one and the other way around. The only invariants under this class of trivial transformations are the Boolean quantities $\det A = 0$ or $\det A \neq 0$; for an odd $n$ we have additionally $\operatorname{sgn}(\det A)$. The above noted theoretical difficulty will lead later on to a completely different characterization of the solvability of linear systems. Furthermore, it is apparent that the pivoting strategy can be arbitrarily changed by multiplying different rows by different scalars. This observation leads to the question of scaling. By row scaling we mean premultiplication of $A$ by a diagonal matrix,
$$A \to D_r A , \quad D_r \text{ a diagonal matrix,}$$
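Remark 1.9 in code: a short Python sketch of how det A can be read off a pivoted triangular factorization, here taken from scipy.linalg.lu (A = P L U with P a permutation matrix, L unit lower and U upper triangular); the helper name det_via_lu is an illustrative choice.

```python
import numpy as np
from scipy.linalg import lu

def det_via_lu(A):
    """det A = sgn(permutation) * product of the diagonal of the upper triangular factor."""
    P, L, U = lu(np.array(A, dtype=float))
    sign = round(np.linalg.det(P))        # +1 or -1 for a permutation matrix
    return sign * np.prod(np.diag(U))

A = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 9.0]]
print(det_via_lu(A), np.linalg.det(A))    # both approximately -3.0
```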

and analogously, by column scaling we mean postmultiplication by a diagonal matrix,
$$A \to A D_c , \quad D_c \text{ a diagonal matrix.}$$
(As we have already seen in the context of Gaussian elimination, linear operations on the rows of a matrix can be expressed by premultiplication with suitable matrices, and correspondingly operations on columns are represented by postmultiplication.) Mathematically speaking, scaling changes the length of the basis vectors of the range (row scaling) and of the domain (column scaling) of the linear mapping defined by the matrix $A$, respectively. If this mapping models a physical phenomenon, then we can interpret scaling as a change of unit, or gauge transformation (e.g. from Å to km). In order to make the solution of the linear system $Ax = b$ independent of the choice of unit we have to appropriately scale the system by pre- or postmultiplying the matrix $A$ by suitable diagonal matrices:
$$A \to \bar A := D_r A D_c , \quad \text{where } D_r = \operatorname{diag}(\ldots) .$$

For $i \ge k$ the Cholesky factorization $A = L L^T$ yields
$$a_{ik} = l_{i1} l_{k1} + \cdots + l_{i,k-1} l_{k,k-1} + l_{ik} l_{kk} .$$

The sophistication of the method is contained in the sequence of computations for the elements of $L$. As for the computational cost we have $\sim n^3/6$ multiplications and $n$ square roots. In contrast, the rational Cholesky factorization $A = LDL^T$ requires no square roots, but only rational operations (whence the name). By smart programming the cost can be kept here also to $\sim n^3/6$. An advantage of the rational Cholesky factorization is that almost singular matrices $D$ can be recognized. Also the method can be extended to symmetric indefinite matrices ($x^T A x \neq 0$ for all $x \neq 0$).
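A minimal Python sketch of the Cholesky factorization A = L Lᵀ described above, assuming A is symmetric positive definite; the rational variant A = L D Lᵀ mentioned in the text would avoid the square roots. Function names are illustrative.

```python
import numpy as np

def cholesky_factorize(A):
    """Return lower triangular L with A = L @ L.T for spd A (~ n^3/6 multiplications)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        # a_kk = l_k1^2 + ... + l_k,k-1^2 + l_kk^2
        L[k, k] = np.sqrt(A[k, k] - L[k, :k] @ L[k, :k])
        for i in range(k + 1, n):
            # a_ik = l_i1 l_k1 + ... + l_i,k-1 l_k,k-1 + l_ik l_kk
            L[i, k] = (A[i, k] - L[i, :k] @ L[k, :k]) / L[k, k]
    return L

A = np.array([[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]])
L = cholesky_factorize(A)
print(np.allclose(L @ L.T, A))
```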

Remark 1.14 The supplemental spd property has obviously led to a noticeable reduction of the computational cost. At the same time, this property forms the basis of completely different types of solution methods, which will be described in Chapter 8.

1.5 Exercises

Exercise 1.1 Give an example of a full nonsingular (3,3)-matrix for which Gaussian elimination without pivoting fails.

Exercise 1.2
a) Show that the unit (nonsingular) lower (upper) triangular matrices form a subgroup of GL(n).
b) Apply a) to show that the representation A = LR of a nonsingular matrix A ∈ GL(n) as the product of a unit lower triangular matrix L and a nonsingular upper triangular matrix R is unique, provided it exists.
c) If A = LR as in b), then L and R can be computed by Gaussian triangular factorization. Why is this another proof of b)? Hint: use induction.

Exercise 1.3 A matrix A ∈ Mat_n(R) is called strictly diagonally dominant if
$$|a_{ii}| > \sum_{j=1,\; j \neq i}^{n} |a_{ij}| \quad \text{for } i = 1,\ldots,n .$$

Show that Gaussian triangular factorization can be performed for any matrix A ∈ Mat_n(R) with a strictly diagonally dominant transpose A^T. In particular any such A is invertible. Hint: use induction.

Exercise 1.4 The numerical range W(A) of a matrix A ∈ Mat_n(R) is defined as the set
$$W(A) := \{ (Ax, x) \mid (x,x) = 1,\; x \in \mathbf{R}^n \} .$$
Here (·,·) is the Euclidean scalar product on R^n.
a) Show that the matrix A ∈ Mat_n(R) has an LR factorization (L unit lower triangular, R upper triangular) if and only if the origin is not contained in the numerical range of A, i.e. 0 ∉ W(A). Hint: use induction.

b) Use a) to show that the matrix
$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 7 \\ 3 & 5 & 3 \end{pmatrix}$$
has no LR factorization.

Exercise 1.5 Program the Gaussian triangular factorization. The program should read the data A and b from a data file and should be tested on the following examples:
a) with the matrix from Example 1.1,
b) with n = 1, A = 25 and b = 4,
c) with a_ij = P'1 and b_i = i for n = 7, 15 and 50.

Compare in each case the computed and the exact solutions.

Exercise 1.6 Gaussian factorization with column pivoting applied to the matrix A delivers the factorization PA = LR, where P is the permutation matrix produced during elimination. Show that:
a) Gaussian elimination with column pivoting is invariant with respect to
   i) permutation of the rows of A (with the trivial exception that there are several elements of equal absolute value per column),
   ii) multiplication of the matrix by a number $\alpha \neq 0$, $A \to \alpha A$.
b) If D is a diagonal matrix, then Gaussian elimination with column pivoting applied to $\bar A := AD$ delivers the factorization $P\bar A = L\bar R$ with $\bar R = RD$.
Consider the corresponding behavior for a row pivoting strategy with column interchange as well as for total pivoting with row and column interchange.

Exercise 1.7 Let the matrix A ∈ Mat_n(R) be symmetric positive definite.
a) Show that $|a_{ij}| \le \sqrt{a_{ii} a_{jj}}$ for all i, j.
0 for i = 1 , . . . , n and j ^ i. Let A = A1-1"1, y l ' 2 ' , . . . , A^ that:

be produced during Gaussian elimination. Show

a) $|a_{11}| \ge |a_{i1}|$ for $i = 2,\ldots,n$;  b) $\sum_{i=2}^{n} a_{ij}^{(2)} = 0$ for $j = 2,\ldots,n$;

21

c) $a_{ii}^{(2)} \le a_{ii}^{(1)} < 0$ for $i = 2,\ldots,n$;  d) $a_{ij}^{(2)} \ge a_{ij}^{(1)} \ge 0$ for $i, j = 2,\ldots,n$ and $j \neq i$;

e) If the diagonal elements produced successively during the first $n-2$ Gaussian elimination steps are all nonzero (i.e. $a_{ii}^{(i)} < 0$ for $i = 1,\ldots,n-1$), then $a_{nn}^{(n)} = 0$.

Exercise 1.11 A problem from astrophysics ("cosmic maser") can be formulated as a system of (n + 1) linear equations in n unknowns of the form

\

/

1

1

(

xx

\

\ Xn /

\1 /

where A is the matrix from Exercise 1.10. In order to solve this system we apply Gaussian elimination on the matrix A with the following two additional rules, where the matrices produced during elimination are denoted again by $A^{(k)}$ and the relative machine precision is denoted by eps.
a) If during the algorithm $|a_{kk}^{(k)}| \le |a_{kk}|\,\mathrm{eps}$ for some $k < n$, then shift simultaneously column k and row k to the end and the other columns and rows towards the front (rotation of rows and columns).
b) If $|a_{kk}^{(k)}| \le |a_{kk}|\,\mathrm{eps}$ for all remaining $k \le n-1$, then terminate the algorithm.
Show that:
i) If the algorithm does not terminate in b), then after $n-1$ elimination steps it delivers a factorization of A as $PAP = LR$, where P is a permutation and $R = (r_{ij})$ is an upper triangular matrix with $r_{nn} = 0$, $r_{ii} < 0$ for $i = 1,\ldots,n-1$ and $r_{ij} \ge 0$ for $j > i$.
ii) The system has in this case a unique solution x, and all components of x are nonnegative (interpretation: probabilities). Give a simple scheme for computing x.

Exercise 1.12 Program the algorithm developed in Exercise 1.11 for solving the special system of equations and test the program on two examples



of your choice of dimensions n = 5 and n = 7, as well as on the matrix
$$\begin{pmatrix} -2 & 2 & 0 & 0 \\ 2 & -4 & 1 & 1 \\ 0 & 2 & -1 & 1 \\ 0 & 0 & 0 & -2 \end{pmatrix} .$$

/

Exercise 1.13 Let a linear system Cx = b be given, where C is an invertible (2n, 2n)-matrix of the following special form:
$$C = \begin{pmatrix} A & B \\ B & A \end{pmatrix}, \quad A, B \text{ invertible.}$$
a) Let $C^{-1}$ be partitioned as C:
$$C^{-1} = \begin{pmatrix} E & F \\ G & H \end{pmatrix} .$$
Prove Schur's identity:
$$E = H = (A - BA^{-1}B)^{-1} \quad \text{and} \quad F = G = (B - AB^{-1}A)^{-1} .$$
b) Let $x = (x_1, x_2)^T$ and $b = (b_1, b_2)^T$ be likewise partitioned and
$$(A + B)y_1 = b_1 + b_2, \qquad (A - B)y_2 = b_1 - b_2 .$$
Show that
$$x_1 = \tfrac{1}{2}(y_1 + y_2), \qquad x_2 = \tfrac{1}{2}(y_1 - y_2) .$$
Numerical advantage?

.

2 Error Analysis

In the last chapter we introduced a class of methods for the numerical solution of linear systems. There, from a given input $(A, b)$ we computed the solution $f(A,b) = A^{-1}b$. In a more abstract formulation the problem consists in evaluating a mapping $f : U \subset X \to Y$ at a point $x \in U$. The numerical solution of such a problem $(f, x)$ computes the result $f(x)$ from the input $x$ by means of an algorithm that eventually produces some intermediate values as well:

input data → algorithm → output data.

In this chapter we want to see how errors arise in this process and in particular to see if Gaussian elimination is indeed a dependable method. The errors in the numerical result arise from errors in the data or input errors as well as from errors in the algorithm.

In principle we are powerless against the former, as they belong to the given problem and at best they can be avoided by changing the setting of the problem. The situation appears to be different with the errors caused by the algorithm. Here we have the chance to avoid, or to diminish, errors by changing the method. The distinction between the two kind of errors will lead us in what follows to the notions of condition of a problem and stability of an algorithm. First we want to discuss the possible sources of errors.

2.1

Sources of Errors

Even when input data are considered to be given exactly, errors in the data may still occur because of the machine representation of non-integer numbers. With today's usual floating point representation, a number $z$ of "real



type" is represented as $z = a\,d^e$, where the basis $d$ is a power of two (as a rule 2, 8 or 16) and the exponent $e$ is an integer of a given maximum number of binary positions, $e \in \{e_{\min},\ldots,e_{\max}\} \subset \mathbf{Z}$. The so-called mantissa $a$ is either 0 or a number satisfying $d^{-1} \le |a| < 1$ and has the form
$$a = v \sum_{i=1}^{l} a_i d^{-i},$$
where $v \in \{\pm 1\}$ is the sign, $a_i \in \{0,\ldots,d-1\}$ are the digits (it is assumed that $a = 0$ or $a_1 \neq 0$), and $l$ is the length of the mantissa. The numbers that are representable in this way form a subset
$$F := \{x \in \mathbf{R} \mid \text{there are } a, e \text{ as above, so that } x = a\,d^e\}$$
of the real numbers. The range of the exponent $e$ defines the largest and smallest number that can be represented on the machine (by which we mean the processor together with the compiler). The length of the mantissa is responsible for the relative precision of the representation of real numbers on the given machine. Every number $x \neq 0$ with
$$d^{\,e_{\min}-1} \le |x| \le d^{\,e_{\max}}(1 - d^{-l})$$
is represented as a floating point number $\mathrm{fl}(x)$ by rounding to the closest machine number, whose relative error is estimated by
$$\frac{|x - \mathrm{fl}(x)|}{|x|} \le \mathrm{eps} := \frac{d^{1-l}}{2} .$$

Here we use for division the convention $0/0 = 0$ and $x/0 = \infty$ for $x > 0$. We say that we have an underflow when $|x|$ is smaller than the smallest machine number $d^{\,e_{\min}-1}$, and an overflow when $|x| > d^{\,e_{\max}}(1 - d^{-l})$. We call eps the relative machine precision or the machine epsilon. In the literature this quantity is also denoted by u for "unit roundoff" or "unit round". For single precision in FORTRAN, or float in C, we usually have eps ≈ 10^{-7}. Let us imagine that we wanted to enter into the machine a mathematically exact real number x, for example x = π = 3.141592653589... It is known theoretically that π, as an irrational number, cannot be represented with a finite mantissa and is therefore a quantity affected by errors on any computer, e.g. for eps = 10^{-7},
$$\pi \mapsto \mathrm{fl}(\pi) = 3.141593 , \qquad |\mathrm{fl}(\pi) - \pi| \le \mathrm{eps} \cdot \pi .$$
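The quantities just introduced can be inspected directly in Python; the values shown assume IEEE formats (d = 2 with l = 53 for double and l = 24 for single precision), which is an assumption about the machine, not part of the text.

```python
import math
import numpy as np

# relative machine precision eps = d^(1-l)/2 for d = 2
eps_double = 2.0 ** -53
eps_single = 2.0 ** -24
print(eps_double, np.finfo(np.float64).eps / 2)   # agree
print(eps_single, np.finfo(np.float32).eps / 2)   # ~ 6e-8, the "eps ~ 1e-7" of the text

# pi is irrational, so rounding to single precision introduces a relative error below eps
fl_pi = float(np.float32(math.pi))
print(fl_pi)                                       # 3.1415927410125732
print(abs(fl_pi - math.pi) <= eps_single * math.pi)  # True
```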

2.2 Condition of Problems

25

In the above it is essential to note that the number $x$, after being introduced into the machine, is indistinguishable from other numbers $\tilde x$ that are rounded to the same floating point number $\mathrm{fl}(\tilde x) = \mathrm{fl}(x)$. In particular a real number that is obtained by appending zeros to a machine number is by no means distinguished. Therefore it is absurd to aim at an "exact" input. This insight will decisively influence the following error analysis. A further important source of errors for the input data are the measurement errors, when input quantities are obtained from experimental data. Such data $x$ are usually given together with the absolute error $\delta x$, the so-called tolerance: the distance from $x$ to the "true" value $\tilde x$ can be estimated component-wise by $|x - \tilde x| \le \delta x$. In many important practical situations the relative precision $|\delta x / x|$ lies between $10^{-2}$ and $10^{-3}$, a quantity that in general outweighs by far the rounding of the input data. One often uses in this context the term technical precision. Let us now go to the second group of error sources, the errors in the algorithm. The realization of an elementary operation

$$\circ \in \{+, -, \cdot, /\}$$

2.2

Condition of Problems

Now we ask ourselves the question: How do perturbations of input variables influence the result independently of the choice of algorithm? We have seen

2 Error Analysis

26

in the above description of input errors that the input x is logically indistinguishable from all inputs x that are within the range of a given precision. Instead of the "exact" input x we should instead consider an input set E, that contains all perturbed inputs x (see figure 2.1). A machine number x

Figure 2.1: Input and output sets

represents therefore the input set E = {x e R I \x - x\ < eps |x|} . If we know that the input x is given with absolute tolerance 6x, then the defining input quantities form the set E = {x e R I \x-x\

fix) the set valued mapping / : E R = f{E) . The effects of perturbations of input quantities on the output can be read only through the ratio between the input and output sets. We call condition of the problem described by (/, x) a certain characterization of the ratio between E and R. Example 2.1 In order to develop a feeling for this concept that has not yet been defined precisely, let us examine beforehand the geometrical problem of determining graphically the intersection point r of two lines g and h in the plane (see Figure 2.2). When solving this problem graphically it is not possible to represent the lines g and h exactly. The question is then how strongly does the constructed intersection point depend on the drawing error (or: input error). The input set E in our example consists of all lines g and h laying within drawing precision from g and h respectively, and the output set R of the corresponding intersection points f. We see at once that the ratio between the input and the output sets depends strongly on the angle £ (g, h) at which g and h intersect. If g and h are nearly perpendicular then

2.2 Condition

of

27

Problems

Figure 2.2: Nearly perpendicular intersection of two lines g, h (well-conditioned)

g

r



h Figure 2.3: Small angle intersection of two lines g, h (ill-conditioned)

the intersection point f varies about the same as the lines g and h. However if the angle £ (g, h) is small, i.e. the lines g and h are nearly parallel then one has great difficulty to locate precisely the intersection point with the naked eye. (see Figure 2.3). Actually, the intersection point f moves several times more than the small perturbation of the lines. We can therefore say that the determination of the intersection point is well conditioned in the first case, and ill-conditioned in the second case. We arrive thus at a mathematical definition of the concept of conditioning. For the sake of simplicity we assume that the problem (/, x) is given by a mapping / : U C R™ Rm from a open subset U C R " into R m , a point x € U and a (relative or absolute) precision 6 of the input data. The precision 6 can be given either through a norm || • || on R™ ||i — x|| R m are equal to a first approximation or in their leading order for x —> xq , in short g(x) = h(x)

for x —»

XQ,

g(x) — h(x) + o(||/i(x)||) for x —> x0 , where the Landau symbol lo(||/i(x)||) for x —> xo' denotes a generic function 0, such that ||/(f) - f(x) || 0, such that \\f{x)-f{x)\\> ||x-x|| , , f , v.| < Krel || M for X -> X. II/MII ll®ll Thus Kabs and /irei describe the increase of the absolute and the relative errors respectively. A problem (/, x) is well-conditioned if its condition number is small and ill-conditioned if it is large. Naturally, the meaning of

2.2 Condition of Problems

29

"small" and "large" has to be considered separately for each problem. For the relative condition number, unity serves for orientation: it corresponds to the case of pure rounding of the result (see the discussion in Section 2.1). Further below we will show in a sequence of illustrative examples what "small" and "large" means. If / is differentiate in x, then according to the mean value theorem, we can express the condition numbers in terms of the derivative: Kabs = ||/'(a:)|| and « r e l = J ^ L | | / ' ( x ) | | ,

(2.3)

where ||/'(a0|| i s the norm of the Jacobian f'{x) £ Mat m ,„(R) in the subordinate (or operator) norm \\Ax\\ P|| : = sup 1 ^--/ 1 = sup ||Ae|| for A € Mat m i „(R) x^o IfII ||a;||=i For illustration let us compute the condition numbers of some simple problems. Example 2.3 Condition of addition (resp. subtraction). mapping / : R 2 —> R , (^j with derivative f'(a,b) R2

^ /(«> b):=a

Addition is a linear

+ b

= (1,1) € M a t i ^ R ) . If we choose the 1-norm on ||(a,&)T|| = |a| + |&|

and the absolute value on R , then it follows that the subordinate matrix norm (see Exercise 2.8) is ||/'M)|| = ||(i,i)|| = i . Therefore the condition numbers of addition are ^abs — 1 &nd

|a|

K^rel —

+

|b|

\a + b\

Hence for the addition of two numbers of the same sign we have Kre\ = 1. On the other hand it turns out that the subtraction of two nearly equal numbers is ill-conditioned according to the relative condition number, because in this case we have |a + 6| aA. Together with properties i) and iii) this favors the use of condition numbers rather than determinants for characterizing the solvability of a linear system. We will go deeper into this subject in Section 2.4.1. Example 2.10 Condition of nonlinear systems. Assume that we want to solve a nonlinear system /(:r) = y, where / : Ft" —> R " is a continuously differentiate function and y £ R™ is the input quantity (mostly y = 0). We see immediately that the problem is well defined only if the derivative f'{x) is invertible. In this case, according to the Inverse Function Theorem, / is also invertible in a neighborhood of y, i.e. x = f~1{y). The derivative satisfies

(.f-1y(y) = f'(x)-1• The condition numbers of the problem ( / _ 1 , y) are therefore Kabs - ll(/ _ 1 )'(y)ll = ll/'Or)- 1 !! and « r e l = M M i y ' ^ ^

.

The conclusion clearly agrees with the geometrical determination of the intersection point of two lines. If /tabS or Krei are large we have a situation similar to the small angle intersection of two lines (see. Figure 2.4).

2.2.2

Component-wise condition analysis

After being convinced of the the value of the norm-wise condition analysis, we want to introduce similar concepts for component-wise error analysis. The latter is often proves beneficial because all individual components are afflicted by some relative errors, and therefore some phenomena cannot be explained by norm-wise considerations. Furthermore, a norm-wise consideration does not take into account any eventual special structure of the matrix A, but rather it analyses the behavior relative to arbitrary perturbations 6A, i.e. including those that do not preserve this special structure.

34

2 Error

Analysis

/

y = 0

x

Figure 2.4: Ill-conditioned zero at xq, well-conditioned zero at x\.

E x a m p l e 2.11 The solution of a linear system Ax = b with a diagonal matrix

is obviously a well-conditioned problem, because the equations are completely independent of each other (one says also decoupled). Here we implicitly assume that the admissible perturbations preserve the diagonal form. The norm-wise condition number k00(A), «00(^)

= P"1||00

1

\\A\\

becomes however arbitrary large for small e < 1. It describes the condition for arbitrary perturbations of the matrix. The example suggests that the notion of condition defined in Section 2.2.1 turns out to be deficient in some situations. Intuitively, we expect that the condition number of a diagonal matrix, i.e. of a completely decoupled linear system, is equal to one, as in the case of a scalar linear equation. The following component-wise analysis will lead us to such a condition number. If we want to carry over the concept of Section 2.2.1 to the component-wise setting, we have to replace norms with absolute values of the components. We will work this out in detail for the relative error concept. Definition 2.12 The (prototype) component-wise condition number of the problem (/, x) is the smallest number Kre\ > 0, such that ii m - f ( x ) \ \ II/MII OO

< Krel max

\Xi

Xj,

for x —>x.

35

2.2 Condition of Problems

Remark 2.13 Alternatively we can define the relative component-wise condition number also by max *

\fi(Z) ~ fi(x)\ l/»Wl

• \Xi~Xi\ < Kre[ max — : — ; — * Ft I

_ tor x —» x.

The condition number defined in this way is even submultiplicative, i.e. Krel{goh,x) < K r e l ( g , h ( x ) ) -Krel(h,x) . By analogy to (2.3) we can compute this condition number for differentiable mappings via the derivative at x. Application of the mean value theorem f ( x ) - f ( x )= [ f'(x Jt=0 gives component-wise \f(x)-f(x)\
R ,

(x,y)^(x,y)

at (x,y). Since / is differentiable with f'(x,y) = (yT,xT), the component-wise relative condition number is =

|| |(y r ,a; r )| \(x,y)\ ||oc =

2(|x|,|y|)

it follows that

36

2 Error Analysis

Example 2.16 dition number). a linear system component-wise

Component-wise condition of a linear system (Skeel's conIf we consider, as in Example 2.7, the problem of with b as input, then we obtain the following value of the relative condition number: _ Il IA" 1 ! |6| lloo _ II l^" 1 ! |6| Hoc "•rel — p-^lloo IWloo

This number was introduced by A_1b can be estimated by

With it the error x — x , x

SKEEL [69].

"V,,*11-
R n , A — i > f(A) = A~lb, is differentiable with f(A)

C = -A^CA^b

= —A~1Cx

for C e Mat„(R) .

It follows that (see Exercise 2.14) the component-wise relative condition number is given by _ I I \ f ( A ) \ \A\ lloo _ II Ml-1! \A\ M ||oo "rel — !l/(^)l!oo IN,» If we bring the results together and consider perturbations in both A and b, then the relative condition numbers add up and we obtain as condition number for the combined problem K

re\ =

|| i ^ i ^ i \x\ + ¡a-1] |b| i u N—M

—^

|| l A - 1 ! ! ^ |g| | U ¡1—¡1



Taking for x the vector e = ( 1 , . . . , 1), yields the following characterization of the component-wise condition of Ax = b for arbitrary right hand sides b 1 2


U, t)i—> Pv with \\v - Pv || = min \\v - u|| ueu

is linear and is called the orthogonal projection from V onto U. Remark 3.6 The theorem generally holds also when U is replaced by an affine subspace W = wq + U C V, where wq € V and U is a subspace of V parallel to W. Then for all v e V and w EW it follows v— w S U± .

||v — u;|| = min ||u — lu'H w'GW This defines, as in Remark 3.5, a function P : V —> W,

v i—> Pv with ||v — Pv|| = min ||u — w|| w€W

which is an afSne mapping called the orthogonal projection of V onto the afiine subspace W. This consideration will prove to be quite useful in Chapter 8. With Theorem 3.4 we can easily prove a statement on the existence and uniqueness of the solution of the linear least squares problem. T h e o r e m 3.7 The vector x € R n is a solution of the linear least squares problem ||6 — Ax\\ = min, if and only if it satisfies the so called normal equations AtAX

= ATb

.

(3.3)

In particular, the linear least squares problem is uniquely solvable if and only if the rank of A is maximal, i.e. rank A = n.

3.1 Least Squares Method of Gauss

Proof.

67

By applying Theorem 3.4 to V = R m and U = R(A) we get ||6 - AxII = min

{b - Ax, Ax') = 0 for all x' e R™ (AT(b - Ax), x') = 0 for all x' G R " AT(b -Ax) ^

ATAx

=

= 0 ATb

and therefore the first statement. The second part follows from the fact that AT A is invertible if and only if rank A = n. • Remark 3.8 Geometrically, the normal equations mean precisely that b — Ax is normal to C R m , hence the name.

3.1.3

Condition

We begin our condition analysis with the orthogonal projection P : R m —> V, b H-» Pb, onto a subspace V of R m (see Figure 3.3). Clearly the relative

Figure 3.3: Projection onto the subspace V

condition number of the projection problem (P, b) corresponding to the input b depends strongly on the angle d of intersection between b and the subspace V. If the angle is small, i.e. b « Pb, then perturbations of b leave the result Pb nearly unchanged. On the other hand if b is almost perpendicular to V then small perturbations of b produce relatively large variations of Pb. These observations are reflected in the following lemma.

68

3 Linear Least Squares Problems

L e m m a 3.9 Let P : R m —> V be the orthogonal projection onto a subspace V o/Rn. For an input b let $ denote the angle between b and V, i.e. . \\b-Pb\\2 sini? = -——:—— . Then the relative condition number of the problem (P, b) corresponding to the Euclidean norm satisfies K

=

1 cos "d

Proof. According to the Pythagorean theorem ||P6|| 2 = ||i>||2 - \\b - Pb\\2 and therefore \\Pbf 1 - sin2 # = cos21? . \M 2 Because P is linear it follows that the relative condition number of (P, b) satisfies, as stated

For the next theorem we also need the following relationship between the condition numbers of A and AT A corresponding to the Euclidean norm. L e m m a 3.10 For a matrix A e Mai m > „(R) of maximal rank p = n we have k2(AtA) = K2(A)2 . Proof. According to Definition (2.5) the condition number of a rectangular matrix satisfies k2(A)2

:

max

l|Ac 2 Ms=1 H min|| x || 2= i H Ac II2

(ATAx, x) _ Xmax(ATA) = T min||x||2=i (A Ax, x) X min(ATA)

m a x

llxll2=i

K2(ATA)

• With these preparations the following result on the condition of a linear least squares problem no longer comes as a surprise.

3.1 Least Squares Method of Gauss

69

T h e o r e m 3.11 Let A e M a t m i „ ( R ) , m > n, be a matrix of full column rank, b € R r a , andx the (unique) solution of the linear least squares problem ||6 — Ax||2 = min . We assume that x 0 and we denote by fl the angle between b and the range space R(A) of A, i. e.

with residual r — b — Ax. Euclidean norm satisfies:

Then the relative condition number of x in the

a) corresponding to perturbations

ofb

« -< ^cosi? b) corresponding to perturbations K

v(3-4)

'

of A

< K2 (A)

+ K2 {A) 2

tain? .

(3.5)

Proof. a) The solution x is given through the normal equations by the linear mapping = (ATA)~1ATb x = so that K=

\\A\\2\\(ATA)-iAT\\2\\b\\2 IIAII2NI2

VWhWbh IMk

It is easily seen that for a full column rank matrix A the condition number K2(A) is precisely

min

ll*ll2=i \ \ A x h

Now, as in Lemma 3.9, the assertion follows from 2\\x\\2

^

' - \\Ax\W

'

cos 1?

b) Here we consider x = (A) = (ATA)~1ATb as a function of A. Because the matrices of rank n form an open subset of M a t m , n ( R ) , (j) is differentiate in a neighborhood of A. We construct the directional derivative i,ei) is always 1 and therefore does not need to be stored. For the cost of this method we obtain: a) ~ 2n 2 m multiplications, if m > n, b) ~ | n 3 multiplications, if m w n. For m « n we have about the same cost as for the Cholesky method for the normal equations. For n the cost is worse by a factor of two but the method has the stability advantage discussed above. As in the case of Gaussian elimination there is also a pivoting strategy for the QR factorization, the column exchange strategy of B U S I N G E R and G O L U B [10], In contrast to Gaussian elimination this strategy is of minor importance for the numerical stability of the algorithm. If one pushes the column with maximal 2-norm to the front, so that after the change we have 117^112 = max ||2f>|| 2, J 3

then the diagonal elements rkk of R satisfy \rkk\ = ||r ( f e ) || D and |r f c + 1 ; f c + 1 | < |rfefc|, for the matrix norm ||j4||d := maxj \\Aj112. If p := rank (yl), then we obtain theoretically that after p steps the matrix

R

S

0

0

80

3 Linear Least Squares

Problems

with an invertible upper triangular matrix R G Mat p (R) and a matrix S € Mat P i „_ p (R). Because of roundoff errors we obtain instead of this the following matrix

( R̃  S̃ )
( 0  Ẽ )

where the entries of Ẽ ∈ Mat_{m−p,n−p}(R) are "very small". As the rank of the matrix is not generally known in advance, we have to decide during the algorithm when to neglect the rest matrix. In the course of the QR factorization with column exchange the following criterion for the so-called rank decision presents itself in a convenient way. If we define the numerical rank p for a relative precision δ of the matrix A by the condition

|r_{p+1,p+1}| < δ |r₁₁| ≤ |r_pp| ,

then it follows directly that ...

... m ≥ n and rank A = n, is uniquely determined. Clearly it depends linearly on b and it is formally denoted by x = A⁺b. Under the above assumptions the normal equations imply that

A⁺ = (AᵀA)⁻¹Aᵀ .

Because A⁺A = I is precisely the identity, A⁺ is also called the pseudo-inverse of A. The definition of A⁺ can be extended to arbitrary matrices A ∈ Mat_{m,n}(R). In this case the solution of ‖b − Ax‖ = min is in general no longer uniquely determined. On the contrary, if we denote by

P̄ : R^m → R(A) ⊂ R^m

the orthogonal projection of R^m onto the image space R(A), then according to Theorem 3.4 the solutions form an affine subspace

L(b) := {x ∈ R^n | ‖b − Ax‖ = min} = {x ∈ R^n | Ax = P̄b} .

Nevertheless, in order to enforce uniqueness we choose the smallest solution x ∈ L(b) in the Euclidean norm ‖·‖, and we denote again x = A⁺b. According to Remark 3.6, x is precisely the orthogonal projection of the origin 0 ∈ R^n onto the affine subspace L(b) (see Figure 3.6). If x̄ ∈ L(b) is an arbitrary solution of ‖b − Ax‖ = min, then we obtain all the solutions by translating the nullspace N(A) of A by x̄, i.e.

L(b) = x̄ + N(A) .

Here the smallest solution x must be perpendicular to the nullspace N(A); in other words, x is the uniquely determined vector x ∈ N(A)⊥ with ‖b − Ax‖ = min.

Definition 3.15 The pseudo-inverse of a matrix A ∈ Mat_{m,n}(R) is a matrix A⁺ ∈ Mat_{n,m}(R), such that for all b ∈ R^m the vector x = A⁺b is the smallest solution of ‖b − Ax‖ = min, i.e.

A⁺b ∈ N(A)⊥ and ‖b − AA⁺b‖ = min .


Figure 3.6: "Smallest" solution of the least squares problem as a projection of 0 onto L(b)

The situation can be most clearly represented by a commutative diagram (where i denotes each time the inclusion operator): A⁺ maps R^m to R^n, the projection P̄ = AA⁺ maps R^m onto R(A), and the projection P = A⁺A maps R^n onto R(A⁺) = N(A)⊥.

We can easily read off that the projection P̄ is precisely AA⁺, while P = A⁺A describes the projection from R^n onto the orthogonal complement N(A)⊥ of the nullspace. Furthermore, because of the projection property, we obviously have A⁺AA⁺ = A⁺ and AA⁺A = A. As seen in the following theorem, the pseudo-inverse is uniquely determined by these two properties and the symmetry of the orthogonal projections P = A⁺A and P̄ = AA⁺.

Theorem 3.16 The pseudo-inverse A⁺ ∈ Mat_{n,m}(R) of a matrix A ∈ Mat_{m,n}(R) is uniquely characterized by the following properties:

i) (A⁺A)ᵀ = A⁺A

ii) (AA⁺)ᵀ = AA⁺

iii) A⁺AA⁺ = A⁺

iv) AA⁺A = A.

The properties i) through iv) are also called the Penrose axioms.

Proof. We have already seen that A⁺ satisfies properties i) through iv), because A⁺A and AA⁺ are orthogonal projections onto N(A)⊥ = R(A⁺) and R(A) respectively. Conversely, i) through iv) imply that P := A⁺A and P̄ := AA⁺ are orthogonal projections, because Pᵀ = P = P² and P̄ᵀ = P̄ = P̄². Analogously, from iii) and P = A⁺A it follows that N(P) = N(A). Thus the projections P and P̄ are uniquely determined (independently of A⁺) by properties i) through iv). From this the uniqueness of A⁺ follows: if A⁺ and Â⁺ both satisfy conditions i) through iv), then P = A⁺A = Â⁺A and P̄ = AA⁺ = AÂ⁺, and therefore

A⁺ = A⁺AA⁺ = A⁺AÂ⁺ = Â⁺AÂ⁺ = Â⁺ . □

Remark 3.17 If only part of the Penrose axioms hold, then we talk about generalized inverses. A detailed investigation is found e.g. in the book of Nashed [58].
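The Penrose axioms can be verified numerically for any concrete matrix. The following sketch uses NumPy's pinv (which computes A⁺ via the singular value decomposition) on an arbitrary rank-deficient example, not one from the text.

```python
# Numerical check of the Penrose axioms i)-iv) of Theorem 3.16.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])              # rank 2: the least squares solution is not unique
Ap = np.linalg.pinv(A)

print(np.allclose((Ap @ A).T, Ap @ A))    # i)   (A+A)^T = A+A
print(np.allclose((A @ Ap).T, A @ Ap))    # ii)  (AA+)^T = AA+
print(np.allclose(Ap @ A @ Ap, Ap))       # iii) A+AA+  = A+
print(np.allclose(A @ Ap @ A, A))         # iv)  AA+A   = A
```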

Now we want to derive a way of computing the smallest solution x = A⁺b for an arbitrary matrix A ∈ Mat_{m,n}(R) and b ∈ R^m with the help of the QR factorization. Let p := rank A ≤ min(m, n) be the rank of the matrix A. In order to simplify notation we neglect permutations and bring A to upper triangular form by orthogonal transformations Q ...

... The idea of fixed point iteration consists of transforming this equation into an equivalent fixed point equation x = φ(x). If we try the two corresponding fixed point iterations with the starting value x₀ = 1.2, then we obtain the numerical values in Table 4.1. We see that the first sequence diverges (tan x has a pole at π/2 and x₂ > π/2), whereas the second one converges. The convergent sequence has the property that at about every second iteration there is another correct decimal. Obviously not every naively constructed fixed point iteration converges. Therefore we consider now general sequences {x_k} which are given by an iteration mapping x_{k+1} = φ(x_k). If we want to estimate the difference of two consecutive terms

|x_{k+1} − x_k| = |φ(x_k) − φ(x_{k−1})|


Table 4.1: Comparison of the fixed point iterations φ₁ and φ₂

    k   x_{k+1} = ½ tan x_k     x_{k+1} = arctan(2x_k)
    0   1.2                     1.2
    1   1.2860                  1.1760
    2   1.70... > π/2           1.1687
    3                           1.1665
    4                           1.1658
    5                           1.1656
    6                           1.1655
    7                           1.1655
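The two columns of Table 4.1 can be reproduced with a few lines of Python; this is only an illustrative sketch of the two iterations, with eight steps from x₀ = 1.2.

```python
# The two fixed point iterations of Table 4.1.
import math

x1 = x2 = 1.2
for k in range(8):
    print(f"{k}: {x1:10.4f}  {x2:10.4f}")
    x1 = 0.5 * math.tan(x1)      # phi_1: diverges once an iterate passes pi/2
    x2 = math.atan(2.0 * x2)     # phi_2: converges to x* = 1.1655...
```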

by the difference of the previous terms |x_k − x_{k−1}| (naturally we have the geometric series in mind), we are necessarily led to the following theoretical characterization:

Definition 4.2 Let I = [a, b] ⊂ R be an interval and φ : I → R a mapping. Then φ satisfies a Lipschitz condition on I with Lipschitz constant θ if

|φ(x) − φ(y)| ≤ θ |x − y| for all x, y ∈ I .

The Lipschitz constant θ can be easily computed if φ is continuously differentiable.

Lemma 4.3 If φ : I → R is continuously differentiable, φ ∈ C¹(I), then

sup_{x,y∈I, x≠y} |φ(x) − φ(y)| / |x − y| = sup_{z∈I} |φ′(z)| .

Proof. By the mean value theorem, for all x, y ∈ I there is a ξ between x and y with

φ(x) − φ(y) = φ′(ξ)(x − y) ,

which yields the statement upon taking absolute values and suprema. □

Theorem 4.4 Let I = [a, b] ⊂ R be an interval and ...

can only be constructed if f is differentiable and f′(x) does not vanish at least in a neighborhood of the solution. The convergence properties of the method will be analyzed later in a general theoretical framework.

Example 4.9 Computation of the square root. We have to solve the equation

f(x) := x² − c = 0 .

In a computer the number c has the floating point representation c = a · 2^p with 0.5 ≤ a < 1 and p ∈ Z, with mantissa a and exponent p. Therefore

√c = ...
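Newton's method applied to f(x) = x² − c, namely x_{k+1} = (x_k + c/x_k)/2, is easy to sketch in code. The following lines are an illustration, not the book's algorithm; in particular the reduction of c to mantissa and exponent discussed above is omitted, and the starting value is an assumption of this sketch.

```python
# Newton's method for f(x) = x^2 - c = 0, assuming c > 0.
def newton_sqrt(c, tol=1e-15, kmax=50):
    x = c if c >= 1.0 else 1.0              # crude starting value
    for _ in range(kmax):
        x_new = 0.5 * (x + c / x)           # x_{k+1} = (x_k + c/x_k)/2
        if abs(x_new - x) <= tol * x_new:
            return x_new
        x = x_new
    return x

print(newton_sqrt(2.0))                     # 1.4142135623730951
```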
... for k ≥ 0 and

lim_{k→∞} x^k = x* .

The speed of convergence can be estimated by

‖x^{k+1} − x*‖ ≤ (ω/2) ‖x^k − x*‖²  for k = 0, 1, ... .

Moreover, the solution x* is unique in B_{2/ω}(x*).

Proof. First, we use the Lipschitz condition (4.4) to derive the following result for all x, y ∈ D:

‖F′(x)⁻¹ (F(y) − F(x) − F′(x)(y − x))‖ ≤ ...
... ω > 0 and κ* < 1 be two constants such that

‖F′(x)⁺ (F′(x + sv) − F′(x)) v‖ ≤ s ω ‖v‖²     (4.15)

for all s ∈ [0,1], x ∈ D and v ∈ R^n with x + v ∈ D, and assume that

‖F′(x)⁺ F(x*)‖ ≤ κ* ‖x − x*‖     (4.16)

for all x ∈ D. If for a given starting point x⁰ ∈ D we have

ρ := ‖x⁰ − x*‖ < 2(1 − κ*)/ω =: σ ,     (4.17)

then the sequence {x^k} defined by the Gauss-Newton method (4.14) stays in the open ball B_ρ(x*) and converges towards x*, i.e.

‖x^k − x*‖ < ρ for k ≥ 0 and lim_{k→∞} x^k = x* .

The speed of convergence can be estimated by

‖x^{k+1} − x*‖ ≤ ...

... D, such that x(0) = x₀ and

S ∩ U = {(x(s), λ₀ + s) | s ∈ ]−ε, ε[} .

4.4.2 Continuation methods

Now we turn to the numerical computation of the solution of a parameterized system of equations. First we assume that the derivative F_x(x, λ) is invertible for all (x, λ) ∈ D × [a, b]. In the last section we have seen that in this case the solution set of the parameterized system is made up of differentiable curves that can be parameterized by λ. The idea of the continuation methods consists of computing successive points on such a solution curve. If we keep the parameter λ fixed, then

F(x, λ) = 0     (4.27)


is a nonlinear system in x as treated in Section 4.2. Therefore we can try to compute a solution with the help of Newton's method

F_x(x^k, λ) Δx^k = −F(x^k, λ) and x^{k+1} = x^k + Δx^k .     (4.28)

Let us assume that we have found a solution (XQ, Ao) and and we want to compute another solution (xi, A]) on the solution curve (x(s), Ao + s) through (xo, Ao). Then in selecting the starting point x° := x for the Newton Iteration (4.28), with fixed value of the parameter A = Ai we can use the fact that both solutions (x'o, Ao) and (xi, Aj) lie on the same solution curve. The simplest possibility is indicated in Figure 4.5. We take as starting point

Figure 4.5: Idea of classical continuation method

the old solution and we set x := XQ. This choice, originally suggested by Poincaré in his book on celestial mechanics [60] is called today classical continuation. A geometric view brings yet another choice: instead of going parallel to the A - a x i s we can move along the tangent (x'(0), 1) to the solution curve (x(s), A o + s ) at the point (xo, Ao) and choose x := £q + (Ai — Ao)x'(O) as starting point (see Figure 4.6). This is the tangential continuation method.

Figure 4.6: Tangential continuation method


If we differentiate the equation

F(x(s), λ₀ + s) = 0

with respect to s at s = 0, it follows that

F_x(x₀, λ₀) x′(0) = −F_λ(x₀, λ₀) ,

i.e. the slope x′(0) is computed from a linear system similar to the Newton corrector (4.28). Thus each continuation step contains two sub-steps: first the choice of a point (x̄, λ₁) as close as possible to the curve, and second the iteration from the starting point x̄ back to a solution (x₁, λ₁) on the curve, where Newton's method appears to be the most appropriate because of its quadratic convergence. The first sub-step is frequently called predictor, the second sub-step corrector, and the whole process a predictor-corrector method. If we denote by s := λ₁ − λ₀ the stepsize, then the dependence of the starting point x̄ on s for the two possibilities encountered so far can be expressed as

x̄(s) = x₀

for the classical continuation and

x̄(s) = x₀ + s x′(0)

for the tangential continuation. The most difficult problem in the construction of a continuation algorithm consists of choosing appropriately the steplength s in conjunction with the predictor-corrector strategy. The optimist who chooses a too large stepsize s must constantly reduce the steplength and therefore ends up with too many unsuccessful steps. T h e pessimist on the other hand chooses the stepsize too small and ends up with too many successful steps. Both variants waste computing time. In order to minimize cost, we want therefore to chose the steplength as large as possible while still ensuring the convergence of Newton's method. R e m a r k 4.21 In practice one should take care of a third criterion, namely not to leave the present solution curve and "jump" onto another solution curve without noticing it (see Figure 4.7). The problem of "jumping over" becomes important especially when considering bifurcations of solutions. Naturally, the maximal feasible stepsize s m a x for which Newton's method with starting point x° := x(s) and fixed parameter A = Ao + s converges depends on the quality of the predictor step. The better the curve is predicted the larger the stepsize. For example the point x(s) given by the tangential method appears graphically to be closer to the curve than the point given by the classical method. In order to describe more precisely this deviation of the predictor from the solution curve we introduce the order of approximation of two curves (see [18]):


Definition 4.22 Let x and x̄ be two curves x, x̄ : [−ε, ε] → Rⁿ in Rⁿ. We say that the curve x̄ approximates the curve x with order p ∈ N at s = 0, if ‖x(s) − x̄(s)‖ = O(|s|^p) for s → 0, i.e. if there are constants s₀ > 0 and η > 0 such that

‖x(s) − x̄(s)‖ ≤ η |s|^p for all |s| ≤ s₀ .

From the mean value theorem it follows immediately that for a sufficiently differentiable mapping F the classical continuation has order p = 1, while the tangential continuation has order p = 2. The constant η can be given explicitly. For the classical continuation we set x̄(s) = x̄(0).

Lemma 4.23 For any continuously differentiable curve x : [−ε, ε] → Rⁿ we have

‖x(s) − x(0)‖ ≤ η s with η := max_{t∈[−ε,ε]} ‖x′(t)‖ .

Proof.

According to the Lagrange form of the mean value theorem it follows that

‖x(s) − x(0)‖ = ‖s ∫₀¹ x′(τs) dτ‖ ≤ η s . □

Lemma 4.24 Let x : [−ε, ε] → Rⁿ be a twice differentiable curve and x̄(s) = x(0) + s x′(0). Then

‖x(s) − x̄(s)‖ ≤ η s² with η := ½ max_{t∈[−ε,ε]} ‖x″(t)‖ .


Proof. As in the proof of Lemma 4.23 we have

x(s) − x̄(s) = x(s) − x(0) − s x′(0) = ∫₀¹ ( s x′(τs) − s x′(0) ) dτ = ∫₀¹ ∫₀¹ x″(τσs) s² τ dσ dτ

and therefore

‖x(s) − x̄(s)‖ ≤ ½ s² max_{t∈[−ε,ε]} ‖x″(t)‖ . □

The following theorem puts into context a continuation method of order p as predictor and Newton's method as corrector. It characterizes the maximal feasible stepsize s_max for which Newton's method applied to x⁰ := x̄(s) with fixed parameter λ₀ + s converges.

Theorem 4.25 Let D ⊂ Rⁿ be open and convex and let F : D × [a, b] → Rⁿ be a continuously differentiable parameterized system, such that F_x(x, λ) is invertible for all (x, λ) ∈ D × [a, b]. Furthermore, let ω > 0 be given such that F satisfies the Lipschitz condition

‖F_x(x, λ)⁻¹ (F_x(x + sv, λ) − F_x(x, λ)) v‖ ≤ s ω ‖v‖² .     (4.29)

Also let (x(s), λ₀ + s), s ∈ [−ε, ε], be a continuously differentiable solution curve around (x₀, λ₀), i.e.

F(x(s), λ₀ + s) = 0 and x(0) = x₀ ,

and x̄(s) a continuation method (predictor) of order p with

‖x(s) − x̄(s)‖ ≤ η s^p for all |s| ≤ ε .

Then Newton's method (4.28) with starting point x⁰ = x̄(s) converges towards the solution x(s) of F(x, λ₀ + s) = 0, whenever

s ≤ ...

... > Θ̄ ‖Δx^k‖ , then we reduce this stepsize by a factor β < 1 and we perform again the Newton iteration with the new stepsize

s′ := β s ,

i.e. with the new starting point x° = x(s') and the new parameter A := Ao + s'. This process is repeated until either the convergence criterion (4.32) for Newton's method is satisfied or we get below a minimal stepsize ,s m i n . In the latter case we suspect that the assumptions on F are violated and we are for example in an immediate neighborhood of a turning point or a


bifurcation point. On the other hand we can choose a larger stepsize for the next step if Newton's method converges "too fast". This can also be read from the two Newton corrections. If

‖Δx̄¹‖ ≤ ¼ ‖Δx⁰‖ ,     (4.33)

then the method converges "too fast", and we can enlarge the stepsize for the next predictor step by the factor β, i.e. we suggest the stepsize s/β.

Here the choice of β motivated by (4.31) is consistent with (4.32) and (4.33). The following algorithm describes the tangential continuation from a solution (x₀, a) up to the right endpoint λ = b of the parameter interval.

Algorithm 4.26 Tangential continuation. The procedure newton(x, λ) contains the (ordinary) Newton method (4.28) for the starting point x⁰ = x and fixed value of the parameter λ. The Boolean variable done specifies whether the procedure has computed the solution accurately enough after at most kmax steps. Besides this information and (if necessary) the solution x, the program returns the quotient

Θ := ‖Δx̄¹‖ / ‖Δx⁰‖

of the norms of the simplified and ordinary Newton correctors. The procedure continuation realizes the continuation method with the stepsize control described above. Beginning with a starting point x for the solution of F(x, a) = 0 at the left endpoint λ = a of the parameter interval, the program tries to follow the solution curve up to the right endpoint λ = b. The program terminates if this is achieved, or if the stepsize s becomes too small, or if the maximal number imax of computed solutions is exceeded.

function [done, x, Θ] = newton(x, λ)
  for k = 0 to kmax do
    A := F_x(x, λ);
    solve A Δx = −F(x, λ);
    x := x + Δx;
    solve A Δx̄ = −F(x, λ);          (use again the factorization of A)
    if k = 0 then
      Θ := ‖Δx̄‖/‖Δx‖;                (for the next predicted stepsize)
    end
    if ‖Δx̄‖ ≤ tol then
      done := true; break;            (solution found)
    end
    if ‖Δx̄‖ > Θ̄ ‖Δx‖ then
      done := false; break;           (monotonicity violated)
    end
  end
  if k > kmax then
    done := false;                    (too many iterations)
  end

function continuation(x)
  λ₀ := a;
  [done, x₀, Θ] = newton(x, λ₀);
  if not done then
    poor starting point x for F(x, a) = 0
  else
    s := s₀;                          (starting stepsize)
    for i = 0 to imax do
      solve F_x(xᵢ, λᵢ) x′ = −F_λ(xᵢ, λᵢ);
      repeat
        x̄ := xᵢ + s x′;
        λᵢ₊₁ := λᵢ + s;
        [done, xᵢ₊₁, Θ] = newton(x̄, λᵢ₊₁);
        if not done then
          s := β s;
        elseif Θ ≤ Θ̄/4 then
          s := s/β;
        end
        s := min(s, b − λᵢ₊₁);
      until s < s_min or done
      if not done then
        break;                        (algorithm breaks down)
      elseif λᵢ₊₁ = b then
        break;                        (terminated, solution xᵢ₊₁)
      end
    end
  end
end
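For concreteness, the same predictor-corrector scheme can be sketched in Python. This is an illustrative sketch modelled on Algorithm 4.26, not the authors' code; the functions F, Fx and Flam (the derivative with respect to λ), the tolerances and the values of β and Θ̄ are assumptions of this sketch.

```python
# Tangential continuation with an ordinary Newton corrector (sketch).
import numpy as np

def newton(F, Fx, x, lam, tol=1e-10, kmax=25, theta_bar=0.5):
    theta = 1.0
    for k in range(kmax):
        A = Fx(x, lam)
        dx = np.linalg.solve(A, -F(x, lam))          # ordinary Newton correction
        x = x + dx
        dx_bar = np.linalg.solve(A, -F(x, lam))      # simplified correction: old Jacobian, new residual
        if k == 0 and np.linalg.norm(dx) > 0:
            theta = np.linalg.norm(dx_bar) / np.linalg.norm(dx)
        if np.linalg.norm(dx_bar) <= tol:
            return True, x, theta                    # solution found
        if np.linalg.norm(dx_bar) > theta_bar * np.linalg.norm(dx):
            return False, x, theta                   # monotonicity test violated
    return False, x, theta                           # too many iterations

def continuation(F, Fx, Flam, x, a, b, s0=0.1, smin=1e-8, beta=1/np.sqrt(2), imax=100):
    lam = a
    done, x, theta = newton(F, Fx, x, lam)
    if not done:
        raise RuntimeError("poor starting point for F(x, a) = 0")
    path = [(lam, x.copy())]
    s = min(s0, b - lam)
    for _ in range(imax):
        if lam >= b:
            break
        xprime = np.linalg.solve(Fx(x, lam), -Flam(x, lam))   # tangent: Fx * x' = -F_lam
        while True:
            x_pred = x + s * xprime                           # tangential predictor
            done, x_corr, theta = newton(F, Fx, x_pred, lam + s)
            if done:
                break
            s *= beta                                         # corrector failed: reduce stepsize
            if s < smin:
                raise RuntimeError("stepsize below smin (turning or bifurcation point?)")
        lam, x = lam + s, x_corr
        path.append((lam, x.copy()))
        if theta <= 0.25:                                     # criterion (4.33): enlarge next step
            s /= beta
        s = min(s, b - lam)
    return path
```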


Remark 4.27 There is a significantly more efficient stepsize control strategy which uses the fact that the quantities ω and η can be locally approximated by quantities accessible from the algorithm. That strategy is also well founded theoretically. However, its description cannot be done within the frame of this introduction; it is presented in detail in [18].

We want to describe yet another variant of the tangential continuation because it fits well the context of Chapter 3 and Section 4.3. It allows at the same time dealing with turning points (x, λ) with rank F′(x, λ) = n and F_x(x, λ) singular. In the neighborhood of such a point the automatically chosen stepsizes s of the continuation method described above become arbitrarily small, because the solution curve around (x, λ) cannot be parameterized anymore with respect to the parameter λ. We overcome this difficulty by giving up the "special role" of the parameter λ and consider instead directly the underdetermined nonlinear system in y = (x, λ),

F(y) = 0 with F : D ⊂ R^{n+1} → R^n .

We assume again that the Jacobian F′(y) of this system has full rank for all y ∈ D. Then for each solution y₀ ∈ D there is a neighborhood U ⊂ R^{n+1} and a differentiable curve y : ]−ε, ε[ → D characterizing the solution set S := {y ∈ D | F(y) = 0} around y₀, i.e.

S ∩ U = {y(s) | s ∈ ]−ε, ε[} .

If we differentiate the equation F(y(s)) = 0 with respect to s at s = 0, it follows that

F′(y(0)) y′(0) = 0 ,     (4.34)

i.e. the tangent y′(0) to the solution curve spans exactly the nullspace of the Jacobian F′(y₀). Since F′(y₀) has maximal rank, the tangent is by (4.34) uniquely determined up to a scalar factor. Therefore we define for all y ∈ D the normalized tangent t(y) ∈ R^{n+1} by

F′(y) t(y) = 0 and ‖t(y)‖₂ = 1 ,

which is uniquely determined up to its orientation (i.e. up to a factor ±1). We choose the orientation of the tangent during the continuation process


such that two successive tangents t₀ = t(y₀) and t₁ = t(y₁) form an acute angle, i.e. (t₀, t₁) > 0. This guarantees that we are not going backward on the solution curve. With it we can also define tangential continuation for turning points by

ȳ = ȳ(s) := y₀ + s t(y₀) .

Beginning with the starting vector y⁰ = ȳ we want to find y(s) on the curve "as fast as possible". The vague expression "as fast as possible" can be interpreted geometrically as "almost orthogonal" to the tangent at a nearby point y(s) on the curve. However, since the tangent t(y(s)) is at our disposal only after computing y(s), we substitute t(y(s)) by the best approximation available at the present time, t(y^k). According to the geometric interpretation of the pseudo-inverse (cf. Section 3.3) this leads to the iterative scheme

Δy^k := −F′(y^k)⁺ F(y^k) and y^{k+1} := y^k + Δy^k .     (4.35)

The iterative scheme (4.35) is obviously a Gauss-Newton method for the underdetermined system F(y) = 0. We mention without proof that if F′(y) has maximal rank, then this method is quadratically convergent in a neighborhood of the solution, the same as the ordinary Newton method. The proof is found in Chapter 3 of the book [15].
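A single step of (4.35) can be written down directly with a least squares solver: NumPy's lstsq returns the minimum-norm solution of the underdetermined linearized system, which is exactly −F′(y)⁺F(y). The following lines are a sketch with a hypothetical example (a circle in R²), not an implementation from the text.

```python
# One Gauss-Newton step (4.35) for an underdetermined system F : R^{n+1} -> R^n.
import numpy as np

def gauss_newton_step(F, Fprime, y):
    J = Fprime(y)                                   # n x (n+1) Jacobian
    dy, *_ = np.linalg.lstsq(J, -F(y), rcond=None)  # minimum-norm solution = -F'(y)^+ F(y)
    return y + dy

# usage sketch: a circle F(y) = y_1^2 + y_2^2 - 1 (n = 1, so y has 2 components)
F  = lambda y: np.array([y[0]**2 + y[1]**2 - 1.0])
dF = lambda y: np.array([[2.0*y[0], 2.0*y[1]]])
y = np.array([1.1, 0.3])
for _ in range(5):
    y = gauss_newton_step(F, dF, y)
print(y, F(y))                                      # y lies (almost) on the circle
```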

Figure 4.8: Tangential continuation through turning points .

Here we want to examine the computation of the correction Δy^k. We will drop the index k. The correction Δy in (4.35) is the shortest solution in the solution set Z(y) of the underdetermined linear problem

Z(y) := {z ∈ R^{n+1} | F′(y)z + F(y) = 0} .


By applying a Gaussian elimination (with row pivoting and possibly column exchange, cf. Section 1.3) or a QR factorization (with column exchange, cf. Section 3.2.2) we succeed relatively easily in computing some solution ẑ ∈ Z(y) as well as a nullspace vector t(y) with

F′(y) t(y) = 0 .

Then the following equation holds:

Δy = −F′(y)⁺ F(y) = F′(y)⁺ F′(y) ẑ .

As we have seen in Section 3.3, P = F′(y)⁺ F′(y) is the projection onto the orthogonal complement of the nullspace of F′(y), and therefore

P = I − t(y) t(y)ᵀ .

For the correction Δy it follows that

Δy = ẑ − (ẑ, t(y)) t(y) .

With this we have a simple computational scheme for the pseudo-inverse (with rank defect 1), provided we only have at our disposal some solution ẑ and nullspace vector t. The Gauss-Newton method given in (4.35) is thus easily implementable in close interplay with tangential continuation. For choosing the stepsize we adopt a strategy similar to that described in Algorithm 4.26. If the iterative method does not converge, then we reduce the steplength s by a factor β = 1/√2. If the iterative method converges "too fast", we enlarge the stepsize for the next predictor step by the factor 1/β. This empirical continuation method is comparatively effective even in relatively complicated problems.

Remark 4.28 For this tangential continuation method there is also a theoretically based and essentially more effective stepsize control, whose description is found in [21]. Additionally, there one utilizes approximations of the Jacobian instead of F′(y). Extremely effective programs for parameterized systems work on this basis (see Figures 4.9 and 4.10).

Remark 4.29 The description of the solutions of the parameterized system (4.23) is also called a parameter study. On the other hand, parameterized systems are also used for enlarging the convergence region of a method for solving nonlinear systems. The idea here is to work our way, step by step, from a previously solved problem

G(x) = 0

to the actual problem

F(x) = 0 .

For this we construct a parameterized problem

H(x, λ) = 0 , λ ∈ [0, 1] ,

that connects the two problems:

H(x, 0) = G(x) and H(x, 1) = F(x) for all x.

Such a mapping H is called an embedding of the problem F(x) = 0, or a homotopy. The simplest example is the so-called linear embedding

H(x, λ) := λ F(x) + (1 − λ) G(x) .

Problem specific embeddings are certainly preferable (see Example 4.30). If we apply a continuation method to this parameterized problem H(x, λ) = 0, where we start with a known solution x₀ of G(x) = 0, then we obtain a homotopy method for solving F(x) = 0.
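As a minimal illustration of such a homotopy method, the following sketch combines the linear embedding with classical continuation: the solution for the previous value of λ serves as Newton starting point for the next one. All function names, the number of λ-steps and the tolerances are assumptions of this sketch.

```python
# Homotopy method with linear embedding H(x, lam) = lam*F(x) + (1-lam)*G(x).
import numpy as np

def newton(H, Hx, x, lam, tol=1e-12, kmax=50):
    for _ in range(kmax):
        dx = np.linalg.solve(Hx(x, lam), -H(x, lam))
        x = x + dx
        if np.linalg.norm(dx) <= tol:
            return x
    raise RuntimeError("Newton did not converge")

def homotopy(F, Fx, G, Gx, x0, steps=20):
    H  = lambda x, lam: lam * F(x) + (1.0 - lam) * G(x)
    Hx = lambda x, lam: lam * Fx(x) + (1.0 - lam) * Gx(x)
    x = x0                                   # x0 solves G(x) = 0, i.e. H(x, 0) = 0
    for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
        x = newton(H, Hx, x, lam)            # classical continuation: reuse previous x
    return x
```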

Figure 4.9: Continuation for the trivial embedding (left) and a problem specific embedding (right); represented here is xg with respect to A.

Example 4.30 Continuation for different embeddings. The following problem is given in [36]:

F(x) := x − φ(x) , φᵢ(x) := exp(cos(i · Σ_{j=1}^{10} x_j)) , i = 1, ..., 10 .

The trivial embedding

H(x, λ) = λ F(x) + (1 − λ) x = x − λ φ(x)

with starting point x⁰ = (0, ..., 0) at λ = 0 is suggested for it. The continuation with respect to λ leads indeed for λ = 1 to the solution (see Figure 4.9, left), but the problem specific embedding

Hᵢ(x, λ) := xᵢ − exp(λ · cos(i · Σ_{j=1}^{10} x_j)) , i = 1, ..., 10

with starting point x⁰ = (0, ..., 0) at λ = 0 is clearly advantageous (see Figure 4.9, right). One should remark that there are no bifurcations in this example. The intersections of the solution curves appear only in the projection onto the coordinate plane (x₈, λ). The points on both solution branches mark the intermediate values computed automatically by the program: their number is a measure for the computing cost required to go from λ = 0 to λ = 1. The above example has an illustrative character. It can be easily transformed into a purely scalar problem and solved as such (Exercise 4.6). Therefore we add another, more interesting problem.

Example 4.31 Brusselator. In [63] a chemical reaction-diffusion equation is considered as a discrete model where two chemical substances with concentrations z = (x, y) in several cells react with each other according to the rule

φ(z) := ( A − (B + 1)x + x²y , Bx − x²y )ᵀ ...

... |φ′(x)| ≠ 1, let us define the following iterative procedures for k = 0, 1, ...:

(I) x_{k+1} := φ(x_k) ,

(II) x_{k+1} := φ⁻¹(x_k) .

Show that at least one of the two iterations is locally convergent.

Exercise 4.3 Let f ∈ C¹[a, b] be a function having a simple root x* ∈ [a, b], and let p(x) be the uniquely determined quadratic interpolation polynomial through the three nodes

(a, f_a), (c, f_c), (b, f_b), with a < c < b and f_a f_b < 0 .

a) Show that p has exactly one simple zero y in [a, b].

b) Given a formal procedure

y = y(a, b, c, f_a, f_b, f_c)

that computes the zero of p in [a, b], construct an algorithm for evaluating x* with a prescribed precision eps.

Exercise 4.4 In order to accelerate the convergence of a linearly convergent fixed point method in R¹,

x_{i+1} := φ(x_i) , x₀ given, x* fixed point,

we can use the so-called Δ²-method of Aitken. This consists of computing from the sequence {x_i} a transformed sequence {x̂_i},

x̂_i := x_i − (Δx_i)² / Δ²x_i ,

where Δ is the difference operator Δx_i := x_{i+1} − x_i.

a) Show that: if the sequence {x_i}, with x_i ≠ x*, satisfies

x_{i+1} − x* = (k + δ_i)(x_i − x*) ,

where |k| < 1 and {δ_i} is a null sequence, lim_{i→∞} δ_i = 0, then the sequence {x̂_i} is well defined for sufficiently large i and has the property that

lim_{i→∞} (x̂_i − x*) / (x_i − x*) = 0 .

b) For implementing the method one computes only x₀, x₁, x₂ and x̂₀ and then one starts the iteration with the improved starting point x̂₀ (Steffensen's method). Try this method on our trusted example φ₁(x) := (tan x)/2 and φ₂(x) := arctan 2x with starting point x₀ = 1.2.
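The Δ² transformation itself is only a few lines of code. The following sketch applies it to the convergent iteration φ₂(x) = arctan 2x; it merely illustrates the formula of the exercise and is not meant as a complete solution.

```python
# Aitken's Delta^2 transformation applied to x_{i+1} = arctan(2 x_i).
import math

def aitken(x0, phi, n):
    xs = [x0]
    for _ in range(n + 2):
        xs.append(phi(xs[-1]))
    # x_hat_i = x_i - (x_{i+1} - x_i)^2 / (x_{i+2} - 2 x_{i+1} + x_i)
    return [x - (x1 - x)**2 / (x2 - 2.0*x1 + x) for x, x1, x2 in zip(xs, xs[1:], xs[2:])]

phi2 = lambda x: math.atan(2.0 * x)
print(aitken(1.2, phi2, 5))      # converges noticeably faster than the x_i themselves
```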


Exercise 4.5 Compute the solution of the nonlinear least squares problem arising in Feulgen hydrolysis by the ordinary Gauss-Newton method (from a software package or written by yourself) for the data from Table 4.3 and the starting points given there. Hint: In this special case the ordinary GaussNewton method converges faster than the damped method (cf. Figure 4.3). Exercise 4.6 Compute the solution of F(x)

= x − φ(x) = 0 with

φᵢ(x) := exp(cos(i · Σ_{j=1}^{10} x_j)) , i = 1, ..., 10 ,     (4.37)

by first setting up an equation for u = Σ_{j=1}^{10} x_j and then solving it.

Exercise 4.7 Let there be given a function F : D —>R", D C R™ , F £ C2{D). We consider the approximation of the Jacobian J(x) = F'{x) by divided differences

J̃(x) := (1/η) [ Δ₁F(x), ..., ΔₙF(x) ] , where ΔᵢF(x) := F(x + ηeᵢ) − F(x) .

In order to obtain a sufficiently good approximation of the Jacobian we compute the quantity

κ(η) := ‖F(x + ηeᵢ) − F(x)‖ / ‖F(x)‖

and require that

κ(η̄) ≐ κ₀ := √(10 eps) , where eps is the relative machine precision. Show that

κ(η) = c₁ η + c₂ η² for η → 0 .

Specify a rule that provides an estimate for η̄ in case κ(η) ...

... > 3 if xᵢ > 0 , i = 1, ..., n. Compute (by hand) the special degenerate case λ = 3.

b) Write a program for a continuation method with ordinary Newton's method as local iterative procedure and empirical stepsize strategy. c) Test the program on the above example with A > 3. Exercise 4 . 1 0 Prove the following theorem: Let D C R™ be open and convex and let F : D C R n —> R " be differentiate. Suppose there exists


a solution x* ∈ D such that F′(x*) is invertible. Assume further that the following (affine-invariant) Lipschitz condition is satisfied for all x, y ∈ D:

‖F′(x)⁻¹ (F′(y) − F′(x))‖ ≤ ...

... λ : ]−ε, ε[ → C , t ↦ λ(t) , such that λ(0) = λ₀ and λ(t) is a simple eigenvalue of A + tC. Using again the fact that λ₀ is simple, we deduce the existence of a continuously differentiable function

x : ]−ε, ε[ → Cⁿ , t ↦ x(t) ,

such that x(0) = x₀ and x(t) is an eigenvector of A + tC for the eigenvalue λ(t) (x(t) can be explicitly computed with adjoint determinants, see Exercise 5.2). If we differentiate the equation

(A + tC) x(t) = λ(t) x(t)

with respect to t at t = 0, then it follows that

C x₀ + A x′(0) = λ₀ x′(0) + λ′(0) x₀ .

If we multiply from the right by y₀ (in the sense of the scalar product), then we obtain

(C x₀, y₀) + (A x′(0), y₀) = (λ₀ x′(0), y₀) + (λ′(0) x₀, y₀) .

As (λ′(0) x₀, y₀) = λ′(0)(x₀, y₀) and

(A x′(0), y₀) = (x′(0), A* y₀) = λ₀ (x′(0), y₀) = (λ₀ x′(0), y₀) ,

it follows that

λ′(0) = (C x₀, y₀) / (x₀, y₀) .

Hence we have computed the derivative of λ in the direction of the matrix C. The continuous differentiability of the directional derivative implies the differentiability of λ with respect to A, and

λ′(A) C = λ′(0) = (C x₀, y₀) / (x₀, y₀)

5.1 Condition of General Eigenvalue Problems for all C e Mat n (C).



To compute the condition of the eigenvalue problem (A, A) we must calculate the norm of the mapping \'{A) a as linear mapping, \'(A)

: M a t n ( C ) -> C , C ~ ^

^

y)

,

where x is an eigenvector for the simple eigenvalue Ao of A and y is an adjoint eigenvector for the eigenvalue Ao of A*. On Mat„(C) we choose the matrix norm induced by the Euclidean vector norm, and on C the absolute value. For each matrix C e Mat n (C) we have (the Cauchy-Schwarz inequality) \{Cx,y)\ • • • > crr > ay+i = • • • = ap = 0, then rank A = r, ker A = span{ V r + i , . . . ,Vn}

and im A = s p a n { [ / i , . . . , Ur} .

3. The Euclidean norm of A is the largest singular value, i.e. ‖A‖₂ = σ₁.

4. The Frobenius norm ‖A‖_F = (Σ_{i,j} |a_{ij}|²)^{1/2} is equal to (σ₁² + ⋯ + σ_p²)^{1/2}.

5. The condition number of A relative to the Euclidean norm is equal to the quotient of the largest and the smallest singular values, i.e. κ₂(A) = σ₁/σ_p.

6. The squares σ₁², ..., σ_p² of the singular values are the eigenvalues of AᵀA and AAᵀ corresponding to the eigenvectors V₁, ..., V_p and U₁, ..., U_p respectively.