Core-Chasing Algorithms for the Eigenvalue Problem
Fundamentals of Algorithms
Editor-in-Chief: Nicholas J. Higham, University of Manchester
The SIAM series on Fundamentals of Algorithms is a collection of short user-oriented books on state-of-the-art numerical methods. Written by experts, the books provide readers with sufficient knowledge to choose an appropriate method for an application and to understand the method's strengths and limitations. The books cover a range of topics drawn from numerical analysis and scientific computing. The intended audiences are researchers and practitioners using the methods and upper level undergraduates in mathematics, engineering, and computational science. Books in this series not only provide the mathematical background for a method or class of methods used in solving a specific problem but also explain how the method can be developed into an algorithm and translated into software. The books describe the range of applicability of a method and give guidance on troubleshooting solvers and interpreting results. The theory is presented at a level accessible to the practitioner. MATLAB® software is the preferred language for codes presented since it can be used across a wide variety of platforms and is an excellent environment for prototyping, testing, and problem solving. The series is intended to provide guides to numerical algorithms that are readily accessible, contain practical advice not easily found elsewhere, and include understandable codes that implement the algorithms.
Editorial Board
Raymond Chan The Chinese University of Hong Kong
Patrick Farrell University of Oxford
Sven Leyffer Argonne National Laboratory
Paul Constantine Colorado School of Mines
Ilse Ipsen North Carolina State University
Jennifer Pestana University of Strathclyde
Timothy A. Davis Texas A&M
C.T. Kelley North Carolina State University
Sivan Toledo Tel Aviv University
Randall J. LeVeque University of Washington
Series Volumes
Aurentz, J. L., Mach, T., Robol, L., Vandebril, R., and Watkins, D. S., Core-Chasing Algorithms for the Eigenvalue Problem
Gander, M. J. and Kwok, F., Numerical Analysis of Partial Differential Equations Using Maple and MATLAB
Asch, M., Bocquet, M., and Nodet, M., Data Assimilation: Methods, Algorithms, and Applications
Birgin, E. G. and Martínez, J. M., Practical Augmented Lagrangian Methods for Constrained Optimization
Bini, D. A., Iannazzo, B., and Meini, B., Numerical Solution of Algebraic Riccati Equations
Escalante, R. and Raydan, M., Alternating Projection Methods
Hansen, P. C., Discrete Inverse Problems: Insight and Algorithms
Modersitzki, J., FAIR: Flexible Algorithms for Image Registration
Chan, R. H.-F. and Jin, X.-Q., An Introduction to Iterative Toeplitz Solvers
Eldén, L., Matrix Methods in Data Mining and Pattern Recognition
Hansen, P. C., Nagy, J. G., and O'Leary, D. P., Deblurring Images: Matrices, Spectra, and Filtering
Davis, T. A., Direct Methods for Sparse Linear Systems
Kelley, C. T., Solving Nonlinear Equations with Newton's Method
Jared L. Aurentz Instituto de Ciencias Matemáticas Madrid, Spain
Thomas Mach
Nazarbayev University Astana, Kazakhstan
Leonardo Robol
Istituto di Scienza e Tecnologie dell’Informazione Pisa, Italy
Raf Vandebril KU Leuven Leuven, Belgium
David S. Watkins
Washington State University Pullman, Washington
Core-Chasing Algorithms for the Eigenvalue Problem
Society for Industrial and Applied Mathematics Philadelphia
Copyright © 2018 by the Society for Industrial and Applied Mathematics
10 9 8 7 6 5 4 3 2 1
All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA.
No warranties, express or implied, are made by the publisher, authors, and their employers that the programs contained in this volume are free of error. They should not be relied on as the sole basis to solve a problem whose incorrect solution could result in injury to person or property. If the programs are employed in such a manner, it is at the user's own risk and the publisher, authors, and their employers disclaim all liability for such misuse.
Trademarked names may be used in this book without the inclusion of a trademark symbol. These names are used in an editorial context only; no infringement of trademark is intended.
MATLAB is a registered trademark of The MathWorks, Inc. For MATLAB product information, please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA, 508-647-7000, Fax: 508-647-7001, [email protected], www.mathworks.com.
Publications Director: Kivmars H. Bowling
Executive Editor: Elizabeth Greenspan
Developmental Editor: Gina Rinelli Harris
Managing Editor: Kelly Thomas
Production Editor: David Riegelhaupt
Copy Editor: Susan Fleshman
Production Manager: Donna Witzleben
Production Coordinator: Cally A. Shrader
Compositor: Cheryl Hufnagle
Graphic Designer: Lois Sellers
Royalties from the sale of this book are placed in a fund to help students attend SIAM meetings and other SIAM-related activities. This fund is administered by SIAM, and qualified individuals are encouraged to write directly to SIAM for guidelines. Library of Congress Cataloging-in-Publication Data CIP data available at www.siam.org/books/fa13.
SIAM is a registered trademark.
Contents

Preface  vii

1  Core Transformations  1
   1.1  Upper Hessenberg Matrices  1
   1.2  QR Decomposition Using Core Transformations  2
   1.3  Operating with Core Transformations  6
   1.4  Some Details about Core Transformations  13
   1.5  Backward Stability Basics  20

2  Francis's Algorithm  23
   2.1  Description of the Algorithm  23
   2.2  Why the Algorithm Works  26
   2.3  Choice of Shifts  33
   2.4  Where Is QR?  34

3  Francis's Algorithm as a Core-Chasing Algorithm  37
   3.1  Single-Shift Algorithm  37
   3.2  Double-Shift Algorithm  41
   3.3  Convergence and Deflation  45
   3.4  The Singular Case  49
   3.5  Backward Stability of Francis's Algorithm  51
   3.6  In Pursuit of Efficient Cache Use and Parallelism  52

4  Some Special Structures  59
   4.1  Unitary Matrices  59
   4.2  Unitary-Plus-Rank-One Matrices, Companion Matrices  60
   4.3  Backward Stability: Unitary-Plus-Rank-One Case  72
   4.4  Symmetric Matrices  81
   4.5  Symmetric-Plus-Rank-One Matrices  84

5  Generalized and Matrix Polynomial Eigenvalue Problems  89
   5.1  The Moler–Stewart QZ Algorithm  89
   5.2  The Companion Pencil  90
   5.3  Matrix Polynomial Eigenvalue Problems  94
   5.4  Unitary-Plus-Rank-k Matrices  102

6  Beyond Upper Hessenberg Form  105
   6.1  QR Decomposition by Core Transformations  105
   6.2  Reduction to Condensed Form  107
   6.3  Core-Chasing Algorithms on Twisted Forms  114
   6.4  Double Steps and Multiple Steps  120
   6.5  Convergence Theory  133
   6.6  The Unitary Case Revisited  139

Bibliography  143

Index  149
Preface

Eigenvalue problems are ubiquitous in engineering and science. We assume that the reader already knows this and agrees with us that the study of numerical algorithms for solving eigenvalue problems is a worthwhile endeavor.
Prerequisites

We assume that our reader already has some experience with matrix computations and is familiar with the standard notation and terminology of the subject. We also assume knowledge of the basic concepts of linear algebra. A reader who has covered most of [73] or [72] will be in a good position to appreciate this monograph. These books, which were not selected at random, are just two of many from which one could have studied. Some others are the books by Trefethen and Bau [59], Golub and Van Loan [39], Demmel [29], and Björck [18].
This Book

This monograph is about a class of methods for solving matrix eigenvalue problems. Of course the methods are also useful for computing related objects such as eigenvectors and invariant subspaces. We will introduce new algorithms along the way, but we are also advocating a new way of viewing and implementing existing algorithms, notably Francis's implicitly shifted QR algorithm [36].

Our first message is that if we want to compute the eigenvalues of a matrix A, it is often advantageous to store A in QR-decomposed form. That is, we write A = QR, where Q is unitary and R is upper triangular, and we store Q and R instead of A. This may appear to be an inefficient approach but, as we shall see, it often is not. Most matrices that arise in applications have some special structures, and these often imply special structures for the factors Q and R. For example, if A is upper Hessenberg, then Q is also upper Hessenberg, and it follows from this that Q can be stored very compactly. As another example, suppose A is unitary. Then Q = A, and R is the identity matrix, so we don't have to store R at all.

Every matrix can be transformed to upper Hessenberg form by a unitary similarity transformation. We will study this and related transformations in detail in Chapter 6, but for the early chapters of the book we will simply take the transformation for granted; we will assume that A is already in Hessenberg form. Thus we consider an arbitrary upper Hessenberg matrix in QR-decomposed form and show how to compute its eigenvalues. Our method proceeds by a sequence of similarity transformations that drive the matrix toward upper triangular form. Once the matrix is triangular, the eigenvalues can be read from the main diagonal.
In fact our method is just a new implementation of Francis's implicitly shifted QR algorithm. The storage space requirement is O(n^2) because we must store the upper triangular matrix R, and the flop count is O(n^3).

Once we know how to handle general matrices, we consider how the procedure can be simplified in special cases. The easiest is the unitary case, where R = I. This results in an immediate reduction in the storage requirement to O(n) and a corresponding reduction of the computational cost to O(n^2) flops. A similar case is that of a companion matrix, which is both upper Hessenberg and unitary-plus-rank-one. This results in an R that is unitary-plus-rank-one. Once we have figured out how to store R using only O(n) storage, we again get an algorithm that runs in O(n^2) flops. The unitary case is old [42], but our companion algorithm is new [7].

A structure that arises frequently in eigenvalue problems is symmetry: A = A^T. This very important structure does not translate into any obvious structure for the factors Q and R, so it does not fit into our framework in an obvious way. In Section 4.4 we show that the symmetric problem can be solved using our methodology: we turn it into a unitary problem by a Cayley transform [10]. We did not seriously expect that this approach would be faster than all of the many other existing methods for the symmetric eigenvalue problem [73, Section 7.2], but we were pleased to find that it is not conspicuously slow. Our solution to the symmetric problem serves as a stepping stone to the solution of the symmetric-plus-rank-one problem, which includes comrade and colleague matrices [14] as important special cases. If a polynomial p is presented as a linear combination of Chebyshev or Legendre polynomials, for example, the coefficients can be placed into a comrade matrix with eigenvalues equal to the zeros of p. A Cayley transform turns the symmetric-plus-rank-one problem into a unitary-plus-rank-one problem, which we can solve by our fast companion solver. Our O(n^2) algorithm gives us a fast way to compute the zeros of polynomials expressed in terms of these classic orthogonal polynomial bases.

We also study the generalized eigenvalue problem, for which the Moler–Stewart QZ algorithm is the appropriate variant of Francis's algorithm. We show how to apply this to a companion pencil using the same approach as for the companion matrix, but utilizing two unitary-plus-rank-one upper triangular matrices instead of one [4]. We then extend this methodology to matrix polynomial eigenvalue problems. A block companion pencil is formed and then factored into a large number of unitary-plus-rank-one factors. The resulting algorithm is advantageous when the degree of the matrix polynomial is large [6]. The final chapter discusses the reduction to Hessenberg form and generalizations of Hessenberg form. We introduce generalizations of Francis's algorithm that can be applied to these generalized Hessenberg forms [64].

This monograph is a summary and report on a research project that the five of us (in various combinations) have been working on for several years now. Included within these covers is material from a large number of recent sources, including [4, 5, 6, 7, 8, 9, 10, 12, 54, 62, 63, 64, 74]. We have also included some material that has not yet been submitted for publication. This book exists because the senior author decided that it would be worthwhile to present a compact and unified treatment of our findings in a single volume.
The actual writing was done by the senior author, who wanted to ensure uniformity of style and viewpoint (and, one could also say, prejudices). But the senior author is only the scribe, who (of course) benefitted from substantial feedback from the other
authors. Moreover, the book would never have come into existence without the work of a team over the course of years. We are excited about the outcome of our research, and we are pleased to share it with you. The project is not finished by any means, so a second edition of this book might appear at some time in the future.
This Book's Title

Francis's algorithm, in its standard form, is a bulge-chasing algorithm. Each iteration begins with a matrix in upper Hessenberg form. A tiny bulge is created in the upper Hessenberg form, and that bulge is then chased away. In our reformulation of the algorithm, we do not chase a bulge. Instead of introducing a bulge, we introduce an extraneous core transformation. We then chase that core transformation through the matrix until it disappears via a fusion operation. Thus our algorithms are core-chasing algorithms. Core transformations, which are defined in Chapter 1, are familiar objects. For example, Givens rotations are core transformations.
Software

Fortran implementations of many of the algorithms described in this monograph are collected in the package EISCOR, which can be accessed at www.siam.org/books/fa13
Acknowledgment

We thank three anonymous reviewers who read a preliminary version of the book and provided valuable feedback.
Chapter 1
Core Transformations
1.1 Upper Hessenberg Matrices

Suppose we have a matrix A ∈ C^{n×n}, that is, A is n × n and has complex entries. (We will also frequently consider the special case where A has real entries, i.e., A ∈ R^{n×n}.) We are going to study algorithms that compute the eigenvalues (and possibly related quantities) of A using floating-point arithmetic on a digital computer. A common preprocessing step is to transform the matrix to upper Hessenberg form. Recall that a matrix B is upper Hessenberg if its entries below the first subdiagonal are all zero, that is, b_{ij} = 0 if i > j + 1. Pictorially, portraying the case n = 6, this means that B has the form

\[ B = \begin{bmatrix} \times & \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times & \times \\ & \times & \times & \times & \times & \times \\ & & \times & \times & \times & \times \\ & & & \times & \times & \times \\ & & & & \times & \times \end{bmatrix}. \]
The entries that are marked with the symbol × can take on any values, while those that are left blank are all zeros. A lower Hessenberg matrix is a matrix that is the transpose of an upper Hessenberg matrix. We will seldom mention lower Hessenberg matrices. If we refer to a Hessenberg matrix without a modifier, we mean an upper Hessenberg matrix.

Every A ∈ C^{n×n} is unitarily similar to an upper Hessenberg matrix. In other words, there is a unitary matrix Q (i.e., Q^* = Q^{-1}) such that B = Q^*AQ is upper Hessenberg. We recall that Q^* denotes the conjugate transpose of Q. The matrices Q and B are far from unique. The first column of Q, call it q_1, can be chosen arbitrarily, subject only to the constraint ‖q_1‖_2 = 1. Once q_1 has been chosen, Q and B are almost uniquely determined. This is made precise in Theorems 2.2.4 and 2.2.6. The matrices Q and B can be computed by a direct method in O(n^3) floating-point operations (flops) [73, Section 5.5]. If A is a real matrix, then Q and B can be taken to be real, and everything can be done in real arithmetic. This important step in the solution process is discussed in detail in [72, 73] and elsewhere, and we will have something to say about it in Chapter 6, but for now we will take it for granted. Let us assume that our given matrix A is already in Hessenberg form.
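The reduction itself is deferred to Chapter 6, but for readers who want to experiment immediately, the short MATLAB snippet below (an illustration of ours, not code from the book) uses the built-in hess to produce the unitary similarity just described and checks the Hessenberg pattern.

```matlab
% Illustration only: MATLAB's hess computes the reduction B = P'*A*P
% to upper Hessenberg form mentioned above.
n = 6;
A = randn(n) + 1i*randn(n);   % a random complex matrix
[P, B] = hess(A);             % P is unitary, B is upper Hessenberg
norm(P'*A*P - B)              % ~1e-15: the similarity holds
norm(tril(B, -2))             % ~0: entries below the first subdiagonal vanish
```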
1.2 QR Decomposition Using Core Transformations

Every matrix has a QR decomposition, which can be computed in O(n^3) flops, but the QR decomposition of an upper Hessenberg matrix is particularly easy to compute. Given an upper Hessenberg A, we want to compute a unitary Q and an upper triangular R such that A = QR. We achieve this by transforming A to upper triangular form by applying unitary transformations on the left. We use standard matrix notation: a_{ij} denotes the entry in the ith row and jth column of A.

We begin by transforming a_{21} to zero. There is a unitary matrix \(\hat{Q}_1 \in \mathbb{C}^{2\times 2}\) such that

\[ \hat{Q}_1^* \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} = \begin{bmatrix} r_{11} \\ 0 \end{bmatrix} \]

for some r_{11}. For example, we can take

\[ r_{11} = \sqrt{|a_{11}|^2 + |a_{21}|^2}, \qquad c_1 = a_{11}/r_{11}, \qquad s_1 = a_{21}/r_{11}, \tag{1.2.1} \]

and

\[ \hat{Q}_1 = \begin{bmatrix} c_1 & -\overline{s}_1 \\ s_1 & \overline{c}_1 \end{bmatrix}. \]

One easily checks that \(\hat{Q}_1\) is unitary and does the job.¹ Now let \(Q_1 \in \mathbb{C}^{n\times n}\) be the unitary matrix obtained by inserting \(\hat{Q}_1\) into the upper left corner of an identity matrix, i.e.,

\[ Q_1 = \operatorname{diag}\{\hat{Q}_1, 1, \ldots, 1\} = \begin{bmatrix} c_1 & -\overline{s}_1 & & & \\ s_1 & \overline{c}_1 & & & \\ & & 1 & & \\ & & & \ddots & \\ & & & & 1 \end{bmatrix}. \]

If we now perform the transformation A → Q_1^*A, only the first two rows are altered, and the entry a_{21} is transformed to zero.

Now that we have gotten a_{21} out of the way, we are in a position to annihilate a_{32}. Let \(\hat{Q}_2 \in \mathbb{C}^{2\times 2}\) be a unitary matrix such that

\[ \hat{Q}_2^* \begin{bmatrix} \tilde{a}_{22} \\ a_{32} \end{bmatrix} = \begin{bmatrix} r_{22} \\ 0 \end{bmatrix} \]

for some r_{22}. Here \(\tilde{a}_{22}\) denotes the (2, 2) entry of Q_1^*A, not A. \(\hat{Q}_2\) can be constructed in the same way as \(\hat{Q}_1\) was; we omit the details. Now we build a unitary \(Q_2 \in \mathbb{C}^{n\times n}\) by inserting \(\hat{Q}_2\) into an identity matrix in the right place:

\[ Q_2 = \operatorname{diag}\{1, \hat{Q}_2, 1, \ldots, 1\} = \begin{bmatrix} 1 & & & & \\ & c_2 & -\overline{s}_2 & & \\ & s_2 & \overline{c}_2 & & \\ & & & \ddots & \\ & & & & 1 \end{bmatrix}. \]

¹This construction fails if r_{11} = 0, but in that case we have a_{21} = 0 to begin with, and we can take c_1 = 1 and s_1 = 0.
Then the transformation Q_1^*A → Q_2^*Q_1^*A alters only rows two and three, creates a zero in position (3, 2), and does not disturb the zero that was previously created in position (2, 1). The pattern is now clear. The next transformation, Q_2^*Q_1^*A → Q_3^*Q_2^*Q_1^*A, creates a zero in position (4, 3), and so on. After n − 1 steps we will have transformed A to an upper triangular matrix R = Q_{n−1}^* ··· Q_1^*A. If we then define Q = Q_1 ··· Q_{n−1}, we have R = Q^*A or A = QR, with Q unitary and R upper triangular. This is our QR decomposition.

The matrices Q_i that we have utilized here are examples of what we shall call core transformations. A core transformation is a unitary matrix C_i that differs from the identity matrix only in the submatrix at the intersection of rows and columns i and i + 1. This 2 × 2 submatrix is called the active part of C_i. Note that the subscript i on the symbol C_i tells us where the active part of C_i is. We will adhere strictly to this convention. We note also that if C_i is a core transformation, then so is C_i^{−1} = C_i^*. Sometimes we get lazy and simply write "core" as an abbreviation of "core transformation".²

It is easy to make an approximate count of the flops required for the QR decomposition. The operation

\[ \begin{bmatrix} c & -\overline{s} \\ s & \overline{c} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} cx - \overline{s}y \\ sx + \overline{c}y \end{bmatrix} \]

takes four multiplications and two additions or, briefly, six flops. When we multiply A by Q_1^*, we do n such operations for a total of 6n flops. When we apply Q_2^*, we do about the same thing, except that we can skip the first column. Thus we do 6(n − 1) flops. When we apply Q_3^*, the cost is 6(n − 2) flops, and so on. Adding these up we get approximately 3n^2 flops. So far we have ignored the cost of generating the core transformations. From (1.2.1) it is clear that the cost is O(1). Since we have to generate n − 1 core transformations, the total cost is O(n), which means it is negligible relative to the 3n^2 flops required for the transformation of A to R. Thus the total cost can be reckoned to be 3n^2, or more briefly O(n^2). (This is for an upper Hessenberg A and contrasts with the O(n^3) cost for general A.)

Our algorithms will work with the factors Q and R instead of A. Of course we will not form the product Q = Q_1 ··· Q_{n−1} explicitly; we will work with the factors Q_1, ..., Q_{n−1} directly. This is extremely economical from the storage point of view, as each Q_i is determined by two numbers c_i and s_i. Thus the storage space needed for storing the n − 1 factors of Q is O(n) instead of O(n^2). Of course we also have to store R, which requires O(n^2) space if we store it in the conventional way. As we shall see, there are some special cases in which we can do better. The most obvious is the unitary case, for which R = I, so we don't have to store it at all.
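The whole procedure is easy to express in MATLAB (the series' language of choice). The sketch below is ours, not part of the EISCOR package; the function name hess_qr_cores is hypothetical. It follows (1.2.1) directly and stores Q only through the pairs (c_i, s_i), omitting the refinements (renormalization, the extra diagonal factor D) discussed later in this chapter.

```matlab
function [c, s, R] = hess_qr_cores(A)
% QR decomposition of an upper Hessenberg matrix A by n-1 core
% transformations.  Q is returned implicitly as the vectors c and s:
% the active part of Q_i is [c(i) -conj(s(i)); s(i) conj(c(i))].
  n = size(A, 1);
  c = zeros(n-1, 1);  s = zeros(n-1, 1);
  R = A;
  for i = 1:n-1
      x = R(i, i);  y = R(i+1, i);
      r = hypot(abs(x), abs(y));       % sqrt(|x|^2 + |y|^2), overflow-safe
      if r == 0
          c(i) = 1;  s(i) = 0;         % nothing to annihilate
      else
          c(i) = x/r;  s(i) = y/r;     % as in (1.2.1)
      end
      % Apply Q_i^* to rows i and i+1 (columns i:n suffice).
      R(i:i+1, i:n) = [conj(c(i)) conj(s(i)); -s(i) c(i)] * R(i:i+1, i:n);
      R(i+1, i) = 0;                   % clean the annihilated entry
  end
end
```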
A Few More Details When a matrix is reduced to Hessenberg form, it is always possible to do the reduction in such a way that the resulting A has all of its subdiagonal entries 2 In some of our work [11,12] we have made use of core transformations that are not unitary; we relaxed the definition above so that Ci was allowed to be merely nonsingular. We do not make use of any such transformations here; in this monograph the core transformations are always unitary.
aj+1,j real and nonnegative. (Alternatively, it is always possible to adjust the matrix afterward by a diagonal unitary similarity transformation so that this holds.) If we then build Q1 as specified in (1.2.1), then s1 will be real and nonnegative; in fact all of the si , i = 1, . . . , n−1 will be real and nonnegative. We note also that all of the main-diagonal entries of R turn out real and nonnegative, with the exception of rnn . In some of our algorithms we find it convenient to insert an additional unitary diagonal factor D into our representation: A = Q1 · · · Qn−1 DR. If we do this, we certainly arrange for rnn also to be real and nonnegative. Moreover, this allows us to write (more efficient) algorithms for which the si and rii (which get modified again and again in the course of the iterations) remain real and nonnegative throughout. Finally, we note that if the entire matrix A is real to begin with, then all entries of Q1 , . . . , Qn−1 , and R can be taken to be real, and the double-shift Francis algorithm will keep them real.
Notation for Core Transformations

Throughout this book we will use a shorthand for displaying core transformations that we have found to be convenient for explaining our algorithms. At the beginning of the QR decomposition we make the transformation A → Q_1^*A, which transforms A by multiplying it on the left by the core transformation Q_1^*. We will depict the product Q_1^*A as
×××××× ×××××× ××××× ×××× ××× ××
.
The core Q∗1 is denoted by a small double arrow that points at the first two rows of A to indicate that these are the two rows on which Q∗1 operates. The result of the operation is to transform the (2, 1) entry of A to zero. Thus we have ×××××× ×××××× ××××× ×××× ××× ××
=
×××××× ××××× ××××× ×××× ××× ××
.
The second step of the QR decomposition multiplies Q∗1 A on the left by Q∗2 to create a zero in position (3, 2). The result is depicted as
×××××× ×××××× ××××× ×××× ××× ××
=
×××××× ××××× ×××× ×××× ××× ××
.
Next we apply Q∗3 , resulting in
×××××× ×××××× ××××× ×××× ××× ××
×××××× ××××× ×××× ××× ××× ××
=
.
Continuing the process to its conclusion, we obtain Q∗n−1 · · · Q∗1 A = R, which we depict, here in the case n = 6, as
×××××× ×××××× ××××× ×××× ××× ××
×××××× ××××× ×××× ××× ×× ×
=
.
Inverting the core transformations one after the other, we obtain A = Q1 · · · Qn−1 R = QR, which we depict as ×××××× ×××××× ××××× ×××× ××× ××
=
×××××× ××××× ×××× ××× ×× ×
.
(1.2.2)
This is our QR decomposition with Q = Q1 · · · Qn−1 =
.
RQ Decomposition As an alternative to the QR decomposition, one can just as easily do an RQ decomposition. This is achieved by operating on A from the right and working from bottom to top. We begin by choosing a core transformation Qn−1 such that the transformation A → AQ∗n−1 , which acts on columns n − 1 and n of A, creates a zero in position (n, n − 1). For example, we can take rnn = | ann |2 + | an,n−1 |2 , cn−1 = ann /rnn , sn−1 = an,n−1 /rnn (1.2.3) and then define the active part of Qn−1 by cn−1 −sn−1 ˆ Qn−1 = . sn−1 cn−1
Pictorially we have ×××××× ×××××× ××××× ×××× ××× ××
×××××× ×××××× ××××× ×××× ××× ×
=
.
We must always keep in mind that when we apply a core transformation on the right, it acts on columns of A. The next step is a transformation AQ∗n−1 → AQ∗n−1 Q∗n−2 that creates a zero in position (n − 1, n − 2). Pictorially we have ×××××× ×××××× ××××× ×××× ××× ××
×××××× ×××××× ××××× ×××× ×× ×
=
.
Continuing this process all the way to the top, we obtain AQ∗n−1 · · · Q∗1 = R: ×××××× ×××××× ××××× ×××× ××× ××
=
×××××× ××××× ×××× ××× ×× ×
.
Inverting the core transformations we obtain A = RQ1 · · · Qn−1 = RQ: ×××××× ×××××× ××××× ×××× ××× ××
=
×××××× ××××× ×××× ××× ×× ×
.
Of course the factors R, Q1 , . . . , Qn−1 in the RQ decomposition are different from those in the QR decomposition. The methodology that we develop in this monograph could be applied equally well to either the QR or RQ decomposition. Our decision to use QR and not RQ was arbitrary.
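As a quick sanity check of the QR variant, one can rebuild Q explicitly from the stored pairs and compare QR with A. This usage sketch assumes the hypothetical hess_qr_cores function from the earlier sketch; forming Q densely is done here only for testing, never in the actual algorithms.

```matlab
% Verify A = Q*R for the core-transformation QR of a Hessenberg matrix.
n = 8;
A = triu(randn(n) + 1i*randn(n), -1);        % a random upper Hessenberg matrix
[c, s, R] = hess_qr_cores(A);
Q = eye(n);
for i = 1:n-1                                % Q = Q_1*Q_2*...*Q_{n-1}
    G = eye(n);
    G(i:i+1, i:i+1) = [c(i) -conj(s(i)); s(i) conj(c(i))];
    Q = Q*G;
end
norm(A - Q*R)/norm(A)                        % should be of order 1e-15
```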
1.3 Operating with Core Transformations

Consider a situation like
×××××× ×××××× ××××× ×××× ××× ××
=
×××××× ××××× ×××× ××× ××× ××
,
which we have imported from above. We want to focus on the three core transformations here, so let’s just consider them separately:
C3 C2 C1 =
.
It is important to note that these matrices do not commute as, in general, C3 C2 = C2 C3 and C2 C1 = C1 C2 . The situation changes, however, if C2 is left out. C1 and C3 do commute since the rows they act on do not overlap: C3 C1 = C1 C3 =
=
=
.
In the end we have placed the symbols for C1 and C3 one atop the other since the order does not matter. We will do this routinely. We note that in general two core transformations Ci and Cj commute if | i − j | ≥ 2.
Fusion

Now we consider how to deal with core transformations that act on overlapping rows and therefore interact in a nontrivial way. In the simplest case we have two adjacent core transformations A_i and B_i acting on the same rows. In this case we can simply multiply the two core transformations together to form a single core transformation: A_iB_i = C_i. Naturally enough, we call this operation fusion. Fusions are crucial to our algorithms, but they occur only at the very beginning and the very end of an iteration.
The Turnover

A much more commonly occurring operation is the turnover. Suppose we have three adjacent core transformations A_iB_{i+1}C_i that don't quite line up. If we multiply these three matrices together, the product has action only in the three rows and columns i, i + 1, and i + 2, so it is an essentially 3 × 3 unitary matrix. It is possible to refactor this matrix in the form \(\hat{A}_{i+1}\hat{B}_i\hat{C}_{i+1}\), thereby "turning over" the pattern:

\[ A_iB_{i+1}C_i = \hat{A}_{i+1}\hat{B}_i\hat{C}_{i+1}. \tag{1.3.1} \]

This operation, which can be done in either direction, is what we call a turnover. We defer to Section 1.4 a discussion of how to do it in practice.
The Turnover as a Shift-Through Operation

The upper Hessenberg matrix Q of the QR decomposition (1.2.2) is a product of a descending sequence of n − 1 core transformations, and its inverse is the product of an ascending sequence.
The turnover operation is usually used to pass a core transformation through such a descending or ascending sequence. For example, suppose we have a product C1 C2 C3 C4 B2 composed of a descending sequence C1 C2 C3 C4 with B2 to the right, which can be pictured as
We want to somehow pass B_2 through the descending sequence. First of all, B_2 commutes with C_4, so C_1C_2C_3C_4B_2 = C_1C_2C_3B_2C_4. B_2 does not commute with C_3, but we can do a turnover: C_2C_3B_2 = \(\hat{B}_3\hat{C}_2\hat{C}_3\), which gives C_1C_2C_3B_2C_4 = C_1\(\hat{B}_3\hat{C}_2\hat{C}_3\)C_4. Finally, \(\hat{B}_3\) commutes with C_1, so C_1\(\hat{B}_3\hat{C}_2\hat{C}_3\)C_4 = \(\hat{B}_3\)C_1\(\hat{C}_2\hat{C}_3\)C_4. Putting this all together we have

\[ C_1C_2C_3C_4B_2 = C_1C_2C_3B_2C_4 = C_1\hat{B}_3\hat{C}_2\hat{C}_3C_4 = \hat{B}_3C_1\hat{C}_2\hat{C}_3C_4. \tag{1.3.2} \]

This is the shift-through operation. The core transformation \(\hat{B}_3\) that comes out on the left is, of course, different from the core that went in on the right. It's even been moved down by one position. Two of the transformations in the descending sequence have been altered as well. Since a turnover can be done in either direction, this process can be reversed, so it can be used to pass a core transformation from left to right through a descending sequence, moving the extra core transformation up by one position.

Clearly we can also pass a core transformation through an ascending sequence. The shift-through from right to left looks like this:

\[ C_4C_3C_2C_1B_3 = C_4C_3C_2B_3C_1 = C_4\hat{B}_2\hat{C}_3\hat{C}_2C_1 = \hat{B}_2C_4\hat{C}_3\hat{C}_2C_1. \tag{1.3.3} \]

Of course we can also go in the other direction. In all of these examples we show the core transformation being passed through a sequence of length four, but in fact the sequences can have any length. In each case exactly two of the core transformations in the sequence are altered in the process.
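In code, a shift-through step touches only two cores of the sequence. The sketch below is ours and assumes a helper turnover implementing (1.3.1) (one possible realization is sketched at the end of Section 1.4); each core is stored as a parameter pair [c s].

```matlab
function [c, s, b, j] = pass_left_through_descending(c, s, b, j)
% Pass the core b (acting in rows j and j+1) from the right to the left of
% the descending sequence Q_1...Q_{n-1} stored in the vectors c and s.
% Only cores j and j+1 of the sequence take part in the turnover
% C_j*C_{j+1}*B_j = Bhat_{j+1}*Chat_j*Chat_{j+1}; b reappears one position lower.
% (Assumes j+1 <= length(c).)
  [bh, q1h, q2h] = turnover([c(j) s(j)], [c(j+1) s(j+1)], b);
  c(j)   = q1h(1);  s(j)   = q1h(2);
  c(j+1) = q2h(1);  s(j+1) = q2h(2);
  b = bh;
  j = j + 1;                    % the extra core has moved down by one position
end
```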
A Conservation Law for Turnovers

Consider a single turnover that transforms C_1C_2B_1 to \(\hat{B}_2\hat{C}_1\hat{C}_2\). We can think of C_1 and C_2 as being part of some long descending sequence. After the turnover they will be replaced in the sequence by \(\hat{C}_1\) and \(\hat{C}_2\), respectively. Suppose that the active part of C_j is \(\begin{bmatrix} * & * \\ s_j & * \end{bmatrix}\), and the active part of \(\hat{C}_j\) is \(\begin{bmatrix} * & * \\ \hat{s}_j & * \end{bmatrix}\), for j = 1, 2. The conserved quantity is the product s_1s_2.

Theorem 1.3.1. Under the conditions described immediately above, \(|\hat{s}_1\hat{s}_2| = |s_1s_2|\). If the core transformations all have determinant 1 (as in (1.4.1) below), then \(\hat{s}_1\hat{s}_2 = s_1s_2\).

Proof. The product C_1C_2 is an upper Hessenberg matrix whose active part is 3 × 3. Depicting only that part we have

\[ C_1C_2 = \begin{bmatrix} * & * & * \\ s_1 & * & * \\ & s_2 & * \end{bmatrix}. \]

Depicting the entire turnover C_1C_2B_1 = \(\hat{B}_2\hat{C}_1\hat{C}_2\) in this way, treating \(\hat{C}_1\hat{C}_2\) as a unit, and partitioning the matrices suggestively, we get

\[ \begin{bmatrix} * & * & * \\ s_1 & * & * \\ & s_2 & * \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & \\ b_{21} & b_{22} & \\ & & 1 \end{bmatrix} = \begin{bmatrix} 1 & & \\ & \hat{b}_{22} & \hat{b}_{23} \\ & \hat{b}_{32} & \hat{b}_{33} \end{bmatrix} \begin{bmatrix} * & * & * \\ \hat{s}_1 & * & * \\ & \hat{s}_2 & * \end{bmatrix}. \tag{1.3.4} \]

Picking out an appropriate equation from (1.3.4),

\[ \begin{bmatrix} s_1 & * \\ & s_2 \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} \hat{b}_{22} & \hat{b}_{23} \\ \hat{b}_{32} & \hat{b}_{33} \end{bmatrix} \begin{bmatrix} \hat{s}_1 & * \\ & \hat{s}_2 \end{bmatrix}, \]

taking determinants and absolute values, we obtain \(|s_1s_2| = |\hat{s}_1\hat{s}_2|\). If the cores B_1 and \(\hat{B}_2\) have determinant 1, then \(s_1s_2 = \hat{s}_1\hat{s}_2\).

We have stated the theorem for a turnover from right to left, but it is obviously also true for the reverse turnover by the same argument. A second conservation law can be gleaned as follows. Transpose the given turnover to get a new turnover, and apply Theorem 1.3.1 to that. We leave this to the reader.
Ascending and Descending Sequences

A descending sequence of core transformations is a unitary upper Hessenberg matrix; an ascending sequence is lower Hessenberg. In our algorithms the core transformations in such a sequence will typically all be nontrivial, i.e., nondiagonal. In connection with this we say that an upper Hessenberg matrix A is properly upper Hessenberg if all of its subdiagonal entries a_{j+1,j} are nonzero.

Theorem 1.3.2. Let Q = Q_1 ··· Q_{n−1} be a unitary upper Hessenberg matrix, and suppose that the active part of the core transformation Q_j is \(\begin{bmatrix} * & * \\ s_j & * \end{bmatrix}\), j = 1, ..., n − 1. Then the subdiagonal entries of Q are q_{j+1,j} = s_j, j = 1, ..., n − 1. Thus Q is properly upper Hessenberg if and only if all of the core transformations Q_j are nontrivial.

The proof is an easy exercise. You got some practice for this when you worked through the proof of Theorem 1.3.1. An analogous result holds for ascending sequences. We will need the following result in Section 4.2.

Theorem 1.3.3. Consider a shift-through operation, passing a core transformation through an ascending or descending sequence. If the core transformations in the ascending or descending sequence were all nontrivial before the shift-through operation, then they would all be nontrivial afterwards. In fact \(|\hat{s}_1 \cdots \hat{s}_{n−1}| = |s_1 \cdots s_{n−1}|\), where s_j and \(\hat{s}_j\) denote the subdiagonal entry of the jth core transformation before and after the shift-through operation.
Passing a Core Transformation through a Triangular Matrix Since each QR decomposition contains an upper triangular matrix, we need to be able to pass a core transformation through a triangular matrix. Suppose we have an upper-triangular R and a core transformation Ci , and we form the
˜ The transformation Ci acts on columns i and i + 1 of R, and product RCi = R. ˜ the product R is no longer upper triangular, as it has an extra nonzero entry (a bulge) in position (i + 1, i). In the case n = 6 and i = 3 we have ×××××× ××××× ×××× ××× ×× ×
=
×××××× ××××× ×××× +××× ×× ×
,
˜ from the left by where the red plus sign denotes the bulge. Now we can act on R ∗ ˆ a core transformation Ci , acting on rows i and i + 1, such that the extra nonzero ˜ is again upper triangular. We ˆ = Cˆ ∗ R entry is annihilated and the resulting R i ˆ or pictorially ˜ = Cˆi R, have RCi = R ×××××× ××××× ×××× ××× ×× ×
=
×××××× ××××× ×××× +××× ×× ×
=
×××××× ××××× × × × × . (1.3.5) ××× ×× ×
We have passed the core transformation through the upper triangular matrix. ˆ is upper Both have been changed in the process, but the form is the same: R ˆ triangular, and Ci is a core transformation with its active part in rows/columns i and i + 1. We will depict this passing-through operation briefly by
.
Clearly we can also pass a core transformation from left to right:
.
The cost of passing a core transformation through R depends on the size of R. ˜ acting on columns i and i + 1, requires O(i) flops, The operation R → RCi = R, ∗˜ ˆ acting on rows i and i + 1, requires O(n − i). ˜ and the operation R → Ci R = R, Thus the total is O(n) flops. This is in contrast with the fusion and turnover operations, whose cost is fixed and independent of the size of the matrix in which these operations are taking place. The cost of a fusion or turnover operation is O(1), not O(n). Thus, for large matrices, the passing-through operation is by far the most expensive of the operations we have considered here. We showed in Section 1.2 how to do a QR decomposition and also an RQ decomposition of an upper Hessenberg matrix. We note in passing that we can transform a QR decomposition into an RQ decomposition, or vice versa, simply by passing the core transformations through the upper triangular matrix. In this discussion we have tacitly assumed that R is nonsingular. If R is singular, it will have at least one zero on the main diagonal. Notice that if
ri+1,i+1 = 0 above, the indicated bulge will not materialize. This is not necessarily a problem; it just means that the core transformation Cˆi that comes out on the left is the identity matrix. Later on, when we perform this operation in the course of solving the eigenvalue problem, it will become a problem. We will have more to say about this in Section 3.4.
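A possible MATLAB realization of the passing-through operation (1.3.5) is sketched below. This is our sketch (not EISCOR code); the annihilating core is built as in the construction (1.4.3) of the next section.

```matlab
function [ch, sh, R] = pass_through_R(c, s, i, R)
% Pass the core with pair (c, s), acting in rows/columns i and i+1,
% through the upper triangular matrix R:  R*C_i = Chat_i*Rhat.
  n = size(R, 1);
  % R -> R*C_i: only columns i and i+1 change; a bulge appears at (i+1, i).
  R(1:i+1, i:i+1) = R(1:i+1, i:i+1) * [c -conj(s); s conj(c)];
  % Build Chat_i to annihilate the bulge.
  x = R(i, i);  y = R(i+1, i);
  r = hypot(abs(x), abs(y));
  if r == 0
      ch = 1;  sh = 0;              % the bulge did not materialize
  else
      ch = x/r;  sh = y/r;
  end
  % R -> Chat_i' * R: only rows i and i+1 change; R is triangular again.
  R(i:i+1, i:n) = [conj(ch) conj(sh); -sh ch] * R(i:i+1, i:n);
  R(i+1, i) = 0;
end
```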
1.4 Some Details about Core Transformations

Most of the time we will be intentionally vague about the precise form of our core transformations. They are any essentially 2 × 2 unitary matrices that can be fused, turned over, and passed through upper triangular matrices. In this section, and only in this section, we will be a bit more precise. The reader could skip this material on a first reading, referring back to it only as needed.

In Section 1.2 we suggested the use of core transformations of the form

\[ \begin{bmatrix} c & -\overline{s} \\ s & \overline{c} \end{bmatrix}, \qquad |c|^2 + |s|^2 = 1, \tag{1.4.1} \]

which are complex analogues of plane rotators. Note that these transformations have determinant 1. One easily checks that, conversely, a core transformation that has determinant 1 must have the form (1.4.1). All of the codes we have written so far use core transformations of this type. Usually we have even taken care to make s real and nonnegative.

The decision to use rotators was arbitrary. We could equally well have used "plane reflectors"

\[ \begin{bmatrix} c & \overline{s} \\ s & -\overline{c} \end{bmatrix}, \qquad |c|^2 + |s|^2 = 1, \]

which have determinant −1, and there are other possibilities. In this section we will consider only core transformations of the form (1.4.1).

We also specify precisely how to build a core transformation Q that creates a zero, say

\[ Q^* \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \overline{c} & \overline{s} \\ -s & c \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} r \\ 0 \end{bmatrix}. \]

Set m ← max{|x|, |y|}. If m = 0, then set c ← 1, s ← 0, r ← 0, and move on. Otherwise set

\[ x \leftarrow x/m, \qquad y \leftarrow y/m. \tag{1.4.2} \]

This normalization step eliminates any possibility of overflow or harmful underflow in the subsequent squaring operations:

\[ r \leftarrow \sqrt{|x|^2 + |y|^2}, \qquad c \leftarrow x/r, \qquad s \leftarrow y/r, \qquad r \leftarrow mr. \tag{1.4.3} \]

Notice that s is real and nonnegative if y is, and r is always real and nonnegative.
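The construction (1.4.2)-(1.4.3) translates directly into MATLAB. The following sketch is ours (the name make_core is hypothetical); it is reused in several of the other sketches in this chapter.

```matlab
function [c, s, r] = make_core(x, y)
% Build c, s with |c|^2 + |s|^2 = 1 such that
% [conj(c) conj(s); -s c] * [x; y] = [r; 0], following (1.4.2)-(1.4.3).
  m = max(abs(x), abs(y));
  if m == 0
      c = 1;  s = 0;  r = 0;  return
  end
  x = x/m;  y = y/m;                 % (1.4.2): guard against over/underflow
  r = sqrt(abs(x)^2 + abs(y)^2);     % (1.4.3)
  c = x/r;  s = y/r;
  r = m*r;
end
```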
Accuracy of Construction of Core Transformations

In practice we do all of our computations in floating-point arithmetic. As the reader probably knows, the basic arithmetic operations can usually be done accurately, but not always [48, 73]. For example, if we compute a product z = xy,
the result of the floating-point computation will not be exactly z but rather some nearby \(\hat{z}\) satisfying

\[ \hat{z} = z(1 + \delta), \qquad \text{where } |\delta| \le u. \tag{1.4.4} \]
Here, and throughout this monograph, u denotes the unit roundoff for floatingpoint arithmetic. By far the most commonly used floating-point arithmetic standard for scientific computing is IEEE 754-2008 binary64, for which u = 2−53 ≈ 10−16 . Since u is so tiny, (1.4.4) shows that z is computed to high relative accuracy (δ is the relative error).3 A result analogous to (1.4.4) holds for division as well. It also holds for addition of two real numbers of the same sign. The only arithmetic operation that does not guarantee high relative accuracy is the addition operation z = x+y in the case where x and y cancel out to some extent, resulting in a relatively small z. (The same applies to subtraction, which is a form of addition.) If we look at the procedure for building a core transformation shown above in (1.4.2) and (1.4.3), we see that there are divisions and multiplications, which can be done to high relative accuracy. A well-implemented square root function also yields high relative accuracy. There is just one addition operation, but this is a sum of two positive numbers, so it also gives high relative accuracy. We deduce that this construction produces results that are highly accurate in the relative sense. This is an excellent and very helpful result. Theorem 1.4.1. Suppose we construct a core transformation as shown above in (1.4.2) and (1.4.3). Then, assuming accurate input values x and y, the key quantities c, s, and r will be produced to high relative accuracy. This means that the computed values cˆ, sˆ, and rˆ satisfy cˆ = c(1 + δc ),
sˆ = s(1 + δs ),
rˆ = r(1 + δr ),
where | δc |, | δs |, and | δr | are all on the order of the unit roundoff u.
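As a quick numerical sanity check (of unitarity and of the zero creation, not of the full relative-accuracy claim), one can run the hypothetical make_core sketch from above on random complex data:

```matlab
x = randn + 1i*randn;  y = randn + 1i*randn;
[c, s, r] = make_core(x, y);
abs(abs(c)^2 + abs(s)^2 - 1)                    % ~1e-16: unitary to working precision
norm([conj(c) conj(s); -s c]*[x; y] - [r; 0])   % ~1e-16: the zero is created
```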
Efficient Computation of Square Roots In our algorithms it frequently happens that we take the square root of a number that is extremely close to 1. In these cases we can skip the computation of m and the division shown in (1.4.2) because danger of √ we know there is no 2 2 over/underflow. In (1.4.3) we compute r ← 1 + δ, where δ = | x | + | y | − 1. If | δ | is extremely small, we can take a shortcut in the square root computation and save some time. The Taylor expansion √ 1 1 1 + δ = 1 + δ − δ 2 + O(δ 3 ) 2 8 shows that the approximation r ≈ 1 + .5 δ is perfectly adequate if | δ | is sufficiently small. This is much cheaper than a standard square root computation. 3 In this discussion we ignore the possibility of overflow or underflow, which can occur if extremely large or small numbers arise. In IEEE 754-2008 binary64 the over/underflow levels are approximately 10308 and 10−308 , respectively.
Looking at (1.4.3), we see that we are√going to divide by r subsequently, so we might prefer to compute rˆ = r−1 = 1/ 1 + δ directly. The Taylor expansion 1 3 1 √ = 1 − δ + δ 2 + O(δ 3 ) 2 8 1+δ shows that the approximation rˆ ≈ 1 − .5 δ is good enough if δ is sufficiently small. Exactly when is this approximation justified? Let u denote the unit roundoff introduced above. For any real satisfying | | < u, the number 1 + rounds to √ 2 1. In our square root computation, if | δ | < u (e.g., 10−8 ), we have | δ | < u, so the Taylor expansions show that the errors in our approximations of r and rˆ are less than √ the unit roundoff. In situations described below, we typically have | δ | ≈ u u, so we can use these approximations comfortably. √ To summarize the procedure (valid when | δ | < u), instead of (1.4.3) we compute 2
2
δ ← | x | + | y | − 1,
rˆ ← 1 − .5 δ,
c ← rˆ x,
s ← rˆ y.
(1.4.5)
Fusion Details

If we fuse two transformations of the form (1.4.1), we get another transformation of the same form. Specifically

\[ \begin{bmatrix} c_1 & -\overline{s}_1 \\ s_1 & \overline{c}_1 \end{bmatrix} \begin{bmatrix} c_2 & -\overline{s}_2 \\ s_2 & \overline{c}_2 \end{bmatrix} = \begin{bmatrix} c_3 & -\overline{s}_3 \\ s_3 & \overline{c}_3 \end{bmatrix}, \tag{1.4.6} \]

where

\[ c_3 = c_1c_2 - \overline{s}_1s_2 \qquad \text{and} \qquad s_3 = s_1c_2 + \overline{c}_1s_2. \tag{1.4.7} \]

So the form is preserved, and computing a fusion is just a matter of a few flops. But this brings us to an important point. If we were working with exact data and we could do the computations (1.4.7) in exact arithmetic, the important equation |c_3|^2 + |s_3|^2 = 1 would be satisfied automatically. In practice we have inexact data and use floating-point arithmetic, so it's not quite satisfied. To be safe, once we have computed c_3 and s_3 by (1.4.7), we do an additional normalization step (1.4.5) to ensure that the core transformation remains unitary to working precision. (In (1.4.5) we input c_3 and s_3 for x and y, respectively, and get refined c_3 and s_3 as output.)

Notice that if s_1 and s_2 are real, it does not follow that s_3 is real. Thus if we want to keep our s's real, we need to work a bit harder. We shall discuss this problem below.
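In MATLAB the fusion and the subsequent normalization look as follows. This is our sketch; the conjugates reflect the determinant-1 form (1.4.1), and renormalize plays the role of (1.4.5).

```matlab
function [c3, s3] = fuse(c1, s1, c2, s2)
% Fuse two cores of the form (1.4.1) into one, as in (1.4.6)-(1.4.7).
  c3 = c1*c2 - conj(s1)*s2;
  s3 = s1*c2 + conj(c1)*s2;
  [c3, s3] = renormalize(c3, s3);    % keep the core unitary to working precision
end

function [c, s] = renormalize(c, s)
% Cheap renormalization in the spirit of (1.4.5), valid when |c|^2 + |s|^2
% is already very close to 1.
  d  = abs(c)^2 + abs(s)^2 - 1;
  rh = 1 - 0.5*d;                    % approximates 1/sqrt(1 + d)
  c = rh*c;  s = rh*s;
end
```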
Turnover Details

Anyone who has ever tried to program the turnover operation has discovered that there are good and bad ways to do it, some accurate and some not. An accurate turnover is crucial to all of the algorithms considered in this book. The
required operation (1.3.1) is briefly summarized as
C1 C2 B1 =
=
××× ××× ×××
=
ˆ ˆ ˆ = B2 C1 C2 .
(1.4.8)
We have borrowed some notation from Theorem 1.3.1. The three core transformations on the left are multiplied together to form the essentially 3 × 3 matrix in the middle, which we will call G. This is routine. The question is how to factor G to obtain the three core transformations on the right. ˆ2∗ G has a zero in ˆ2 such that B We begin by building a core transformation B the (3, 1) position. Pictorially ˆ ∗G = B 2
××× ××× ×××
××× ××× ××
=
.
ˆ2 . The (2, 1) entry Of course we use the procedure described at (1.4.3) to build B ∗ ˆ of B2 G is real and nonnegative. Next we build a core transformation Cˆ1 such ˆ2∗ G has a zero in the (2, 1) position. Pictorially that Cˆ1∗ B ˆ 2∗ G = Cˆ1∗ B
××× ××× ×××
=
××× ×× ××
.
Notice that for the computation of Cˆ1 we can use the abbreviated procedure given by (1.4.5) instead of (1.4.3). This is because the first column of the unitary ˆ ∗ G has norm one (in exact arithmetic), so the square root that is matrix B 2 computed for the Cˆ1 computation is almost exactly 1 to begin with. ˆ ∗ G is real and nonnegative; in fact it must equal 1 The (1, 1) entry of Cˆ1∗ B 2 because the matrix is unitary. Moreover, the (1, 2) and (1, 3) entries must both ˆ2∗ G is a core transformation, which we will call Cˆ2 . Since be zero. Thus Cˆ1∗ B ˆ2 Cˆ1 Cˆ2 , pictorially G=B ××× ××× ×××
=
,
our turnover is (nearly) complete. Since we have opted to use core transformations that have determinant 1, all of the matrices in this discussion have determinant 1, including Cˆ2 , so Cˆ2 is of the form (1.4.1), i.e., ⎤ ⎡ 1 cˆ2 −sˆ2 ⎦ . Cˆ2 = ⎣ sˆ2 cˆ2 An alternative way to compute sˆ2 is suggested by Theorem 1.3.1. Using notation from that theorem we have s1 s2 . (1.4.9) sˆ2 = sˆ1
The entries s1 and s2 are from the cores C1 and C2 , which are available before the turnover, and sˆ1 is from Cˆ1 , which has just been computed. Using (1.4.9) to compute sˆ2 has the advantage that it respects the conservation law sˆ1 sˆ2 = s1 s2 . This is not to say that equality is preserved exactly, as there are rounding errors. Looking at (1.4.9), we see that there is one multiplication operation and one division, each of which is performed to high relative accuracy. The resulting sˆ1 and sˆ2 therefore satisfy sˆ1 sˆ2 = s1 s2 (1+δ), where | δ | ≈ u. Notice that we are not guaranteeing that sˆ1 and sˆ2 are individually computed to high relative accuracy, but their product is. We recommend using (1.4.9) to compute sˆ2 , as this enables simpler and stronger backward stability theorems than would otherwise be possible. Once we have computed sˆ2 , we must not accept Cˆ2 as is; we must carry out the additional normalization step (1.4.5) (using cˆ2 and sˆ2 as input and getting refined cˆ2 and sˆ2 as output) to ensure that Cˆ2 is unitary to working precision. We know from experience that this normalization is crucial to the computation of all of the core transformations. If it is neglected, over the course of many iterations of an algorithm, the core transformations will gradually become less 2 2 and less unitary. That is, the amount by which the equations | c | + | s | = 1 fail to hold will grow steadily, and the accuracy of the algorithm will suffer. A careful look at what we have done reveals that all of the information we need for the turnover comes from the first two columns of G. Therefore there is no need to build or operate on the third column, and this saves some arithmetic. Each turnover operation requires three square roots, but two of them can be done by the short-cut formula (1.4.5). A substantial fraction of the cost of doing a turnover lies in these three square roots, so doing two of them by the simpler formula results in a significant saving. We note in passing that the turnover operation is related to the RQ and QR decompositions of the matrix G. In fact the operation we have just described is nothing other than a QR decomposition of G. We don’t see any R matrix because in the unitary case R = I. (It looks a bit different from the QR decomposition discussed earlier in this chapter because G is not upper Hessenberg.) By contrast, the core transformations we started with,
=
××× ××× ×××
,
constitute an RQ decomposition (again with R = I). Thus the turnover operation just trades an RQ decomposition for a QR decomposition.
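One way to organize the computation just described is sketched below in MATLAB (our code, not EISCOR's): form the essentially 3 × 3 matrix G explicitly and peel off the two zero-creating cores with make_core from (1.4.2)-(1.4.3). For brevity the sketch uses the straightforward formula for the last core rather than the recommended (1.4.9), and it omits the final renormalization.

```matlab
function [Bh2, Ch1, Ch2] = turnover(C1, C2, B1)
% Turnover: refactor G = C1*C2*B1 (C1, B1 act in rows 1-2, C2 in rows 2-3)
% as Bhat2*Chat1*Chat2.  Cores are parameter pairs [c s].
  act = @(p) [p(1) -conj(p(2)); p(2) conj(p(1))];          % active part of a core
  G = blkdiag(act(C1), 1) * blkdiag(1, act(C2)) * blkdiag(act(B1), 1);
  % Annihilate G(3,1) with Bhat2 acting on rows 2 and 3 ...
  [b2, t2, ~] = make_core(G(2,1), G(3,1));
  G(2:3, :) = [conj(b2) conj(t2); -t2 b2] * G(2:3, :);
  % ... then annihilate G(2,1) with Chat1 acting on rows 1 and 2.
  [c1, t1, ~] = make_core(G(1,1), G(2,1));
  G(1:2, :) = [conj(c1) conj(t1); -t1 c1] * G(1:2, :);
  % What remains of G is the core Chat2, active in rows 2 and 3.
  Bh2 = [b2 t2];
  Ch1 = [c1 t1];
  Ch2 = [G(2,2) G(3,2)];
end
```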
Keeping It Real Earlier in this chapter we remarked that we can always arrange for an upper Hessenberg matrix A to have real and nonnegative entries on the subdiagonal: aj+1,j ≥ 0, j = 1, . . . , n − 1. Let’s assume we have done this, and assume also that we have inserted a unitary diagonal matrix D into the QR decomposition of A for added flexibility: A = Q1 Q2 · · · Qn−1 DR. Then we can arrange for all of the entries on the main diagonal of R to be real and nonnegative. Moreover, all the si in the core transformations Qi are real and
nonnegative. We now consider how to keep these quantities real in the course of doing fusions, turnovers, and passing core transformations through R. First of all, because of the presence of D in the QR decomposition, we need to be able to pass a core transformation through a unitary diagonal matrix. To this end, note that cˆ −s d2 c −s d1 = , d2 c d1 s cˆ s ˆ and s. where cˆ = cd1 d−1 2 . For safety do the normalization (1.4.5) on c Now let’s look at fusions. We already observed that in the fusion (1.4.6), s3 will generally fail to be real even if s1 and s2 are real and nonnegative. We can get rid of an unwanted phase by factoring it out and absorbing it into the unitary diagonal matrix D. For example, given a core transformation c −s s c with a complex s, let sˆ = | s | and δ = s/ˆ s. Then, letting cˆ = cδ, we have cˆ −ˆ s δ c −s = . c s sˆ cˆ δ The unitary diagonal matrix diag δ, δ can be absorbed into D. Don’t forget to do the normalization (1.4.5) on cˆ and sˆ. Also, whenever D is updated, any newly computed main-diagonal entry di should be normalized by di ← di /| di | (making use of (1.4.5)) to ensure the D remains unitary to working precision. Consider next the problem (1.3.5) of passing a core transformation through R. It is not hard to show that there are no difficulties here; the quantities that start out real and nonnegative remain real and nonnegative in the end. If we look at the crucial submatrices in (1.3.5), i.e., rows and columns i and i + 1, we have (using simplified notation) cˆ −ˆ s r˜11 r˜12 c −s rˆ11 rˆ12 r11 r12 = = . 0 r22 c r˜21 r˜22 0 rˆ22 s sˆ cˆ If we start out on the left and end up on the right, initially we have r11 ≥ 0, r22 ≥ 0, and s ≥ 0. Notice that the “bulge” element r˜21 = r22 s is also real
T
T plays the role of x y in our and nonnegative. The vector r˜11 r˜21 standard procedure (1.4.3) to determine cˆ and sˆ. Since y = r˜21 is real and nonnegative, so are sˆ and rˆ11 . Since both of the core transformations in this equation have determinant 1, we must have r11 r22 = rˆ11 rˆ22 , which implies that rˆ22 is also real and nonnegative. Finally, we consider the turnover, which also causes no problems. If the core transformations going into the turnover all have real and nonnegative s’s, then so do the core transformations that come out. To see this take a close look at the essentially 3 × 3 matrix in (1.4.8), which we have called G. We have ⎤⎡ ⎤ ⎤⎡ ⎡ 1 γ1 −σ1 c1 −s1 ⎦⎣ ⎦, c1 γ1 c2 −s2 ⎦ ⎣σ1 (1.4.10) G = C1 C2 B1 = ⎣s1 1 s2 c2 1
with s1 ≥ 0, s2 ≥ 0, and σ1 ≥ 0. After the turnover we will have ⎤⎡ ⎤ ⎡ ⎤⎡ 1 cˆ1 −sˆ1 1 ˆ2 Cˆ1 Cˆ2 = ⎣ ⎦⎣ γ2 −σ 2 ⎦ ⎣sˆ1 cˆ2 −sˆ2 ⎦ . G=B cˆ1 σ2 γ2 1 sˆ2 cˆ2
(1.4.11)
We want to show that σ2 , sˆ1 , and sˆ2 are all real and nonnegative. An easy computation using (1.4.10) shows that g31 = s2 σ1 ≥ 0. This implies ˆ2 will be real and nonnegative, as we see by reviewing the that the entry σ2 in B ˆ ∗ G will computational details following (1.4.8). Then, since the (2, 1) entry of B 2 also be real and nonnegative, the same must be true of the entry sˆ1 in Cˆ1 . Now all that remains is to show that sˆ2 is real and nonnegative, and this follows from s1 (1.4.9). the formula sˆ2 = s1 s2 /ˆ
The Fully Real Case

If the matrix A is real to begin with, then the decomposition A = QR is also real. In the detailed QR factorization A = Q_1 ··· Q_{n−1}DR, Q_1, ..., Q_{n−1} are real orthogonal core transformations, D is a real orthogonal diagonal matrix, and R is a real upper triangular matrix. It is easy to check that all of the fundamental operations preserve realness: The fusion of two real core transformations results in a real core transformation. A turnover of three real core transformations results in real outputs. When we pass a real core transformation through a real triangular matrix, the results are real. Thus, if we start out real, we stay real.
Upward Turnovers

So far we have seen how to do what we call a downward turnover. This is our most common need, but sometimes we have to go in the other direction; that is, we need an upward turnover.
Clearly the code for an upward turnover must be about the same as for a downward one. In fact, there is no need for two separate codes; the code that does downward turnovers can also be used to do upward turnovers. To see this we introduce the flip matrix

\[ F = \begin{bmatrix} & & 1 \\ & 1 & \\ 1 & & \end{bmatrix} \]

and note that F^2 = I. If

\[ Q = \begin{bmatrix} c & -\overline{s} & \\ s & \overline{c} & \\ & & 1 \end{bmatrix}, \qquad \text{then} \qquad (FQF)^* = \begin{bmatrix} 1 & & \\ & c & -s \\ & \overline{s} & \overline{c} \end{bmatrix}, \]

and vice versa. Now suppose we have A_2B_1C_2 and we want to do an upward turnover to get \(\hat{A}_1\hat{B}_2\hat{C}_1\). The equation

\[ A_2B_1C_2 = \hat{A}_1\hat{B}_2\hat{C}_1 \]

can be multiplied by F on the left and right and conjugate transposed to yield

\[ (FC_2F)^*(FB_1F)^*(FA_2F)^* = (F\hat{C}_1F)^*(F\hat{B}_2F)^*(F\hat{A}_1F)^*, \]

which is an equivalent downward turnover. Executing this downward turnover in an appropriate way yields the desired upward turnover.
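In code, the flip trick amounts to conjugating the s-parameter of each core and reversing the roles of the factors. The sketch below is ours and reuses the hypothetical downward turnover helper from the previous subsection; because the refactorization is unique only up to unimodular scalings, the returned cores may differ from a particular normalized choice by such factors, but their product is the required one.

```matlab
function [Ah1, Bh2, Ch1] = upward_turnover(A2, B1, C2)
% Upward turnover A2*B1*C2 = Ahat1*Bhat2*Chat1, realized via the flip trick
% and the downward turnover.  Cores are parameter pairs [c s].
  fct = @(p) [p(1) conj(p(2))];     % parameters of (F*X*F)' for a core X
  % (F*C2*F)'*(F*B1*F)'*(F*A2*F)' is a downward turnover problem ...
  [Bp, C1p, C2p] = turnover(fct(C2), fct(B1), fct(A2));
  % ... and flipping back gives the factors of the upward turnover.
  Ah1 = fct(C2p);  Bh2 = fct(C1p);  Ch1 = fct(Bp);
end
```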
1.5 Backward Stability Basics

The tools we have developed in this chapter are quite simple. They involve only matrix-matrix multiplications (in which at least one of the matrices is unitary) and QR and RQ decompositions, all of which are normwise backward stable operations [48]. It follows that algorithms built from these tools, including all algorithms developed in this book, are normwise backward stable.⁴

In our backward error analyses and throughout this monograph, the norm symbol ‖·‖ will always denote the 2-norm. Thus, when applied to a vector it means the Euclidean norm, and when applied to a matrix it means the spectral norm. However, the choice of norms is not crucial to our analysis. We could use other common norms such as ‖·‖_1 or ‖·‖_∞ without changing the results. The symbol u will always stand for the unit roundoff of the floating-point arithmetic. The most commonly used floating-point arithmetic standard is IEEE 754-2008 binary64 (double precision), for which u = 2^{-53} ≈ 10^{-16}.

We will follow the customary practice of first-order error analysis; terms of second and higher orders will be ignored. The symbol ≐ will be used to indicate that two quantities are equal to first order. For example, if a = b + cu + du^2, we can write a ≐ b + cu. We will use the symbol ≲ as an informal inequality sign. An inequality like x ≲ u means that x ≤ C_m u, where C_m is an unspecified constant of modest size. It is constant for a fixed number of operations m but can grow as a low-degree polynomial in m at worst. (Typically m denotes the size of the matrix.)

⁴However, we do not mean to imply that the proof of backward stability is always trivial; sometimes it is quite involved, e.g., in the unitary-plus-rank-one case (Section 4.3).
Theorem 1.5.1. Let Q be the product of an ascending or descending sequence of core transformations, and suppose we pass a finite number of core transforˆ In exact arithmetic mations through Q by shift-through operations to yield Q. ∗ ˆ ˆ U Q = QV , or Q = U QV , where V and U are the products of the incoming and outgoing core transformations, respectively (or vice versa). Then, in floatingˆ = U ∗ (Q + δQ)V , where δQ u. point arithmetic, Q Notice that we have established a backward error on the product Q. We have not pushed the error back onto the individual core transformations. We do not assert that each Qi has a tiny backward error Ei . Proof. Suppose, for the sake of argument, that Q is a descending sequence: Q = ˜ iQ ˜ i+1 = Qi Qi+1 Vi . Q1 · · · Qn−1 . It suffices to consider a single turnover Ui+1 Q This is accomplished by multiplying the core transformations together and then doing a QR decomposition by rotators. These are backward stable operations, ˜ iQ ˜ i+1 = Qi Qi+1 Vi + Et , where Et u. so, in floating-point arithmetic, Ui+1 Q (This is true regardless of whether or not we use the recommended formula ˜ i+1 = U ∗ (Qi Qi+1 + Et )Vi , where Et = Et V ∗ , and hence ˜ iQ (1.4.9).) Thus Q i+1 i Et = Et . Inserting this result into the descending sequence Q and the ˜ we have Q ˜ = U ∗ (Q + E )Vi . Since the whole transfortransformed version Q, t i+1 ˆ = U ∗ (Q+δQ)V , mation is just a sequence of such operations, we conclude that Q where δQ u. Theorem 1.5.2. Let R be an upper triangular matrix, and suppose we pass a ˆ In exact arithmetic finite number of core transformations through R to yield R. ∗ ˆ ˆ V R = RU , or R = V RU , where U and V are the products of the incoming and outgoing core transformations, respectively (or vice versa). Then, in floatingˆ = V ∗ (R + δR)U , where δR u R . point arithmetic, R This is a consequence of the backward stability of matrix multiplication [48]. In Chapter 3 we will consider unitary similarity transformations Aˆ = U ∗ AU , ˆ = V ∗ RU . Then we will be ˆ R, ˆ Q ˆ = U ∗ QV , and R where A = QR, Aˆ = Q able to use Theorems 1.5.1 and 1.5.2 together to deduce that in floating-point ˆ = U ∗ (Q + δQ)V , R ˆ = V ∗ (R + δR)U , and Aˆ = U ∗ (A + δA)U , where arithmetic Q . δA = δQ R + Q δR, and hence δA u R = u A .
Chapter 2
Francis’s Algorithm
In this chapter we pause in our development of core-chasing algorithms to describe Francis's implicitly shifted QR (bulge-chasing) algorithm and explain why it works. This material overlaps significantly with [74] and [73, Sections 6.1–6.3]. We will draw extensively from [74]; see [74] for historical background that is not covered here.
2.1 Description of the Algorithm

Suppose we are given an upper Hessenberg matrix A, and we want to find its eigenvalues. Francis's algorithm is by far the most popular tool for the task. Like all methods for computing eigenvalues, Francis's algorithm is iterative. Each iteration is a unitary similarity transformation that replaces the given Hessenberg matrix by a new one that is (hopefully) closer to upper triangular form in the sense that its subdiagonal entries a_{j+1,j} are closer to zero. Since similarity transformations preserve eigenvalues, each of the matrices produced by the iterations will have the same eigenvalues as the original matrix A. Over the course of many iterations, the iterates will become (close enough to) triangular, and we can therefore read the (nearly exact) eigenvalues from the main diagonal.

This is the rough picture. As we shall see below, the subdiagonal entries do not all tend to zero at the same rate. Typically one or more entries near the bottom converge to zero rapidly, allowing the problem to be deflated to a smaller size. Moreover, we do not always get to triangular form. Instead the limiting form might be block triangular. This happens, for example, when the matrix is real and we want to stay in real arithmetic. Complex conjugate pairs of eigenvalues are extracted in 2 × 2 blocks.

First we will describe an iteration of Francis's algorithm. Then we will explain why the algorithm works. We can assume that the starting matrix A is not just upper Hessenberg but properly upper Hessenberg. This means that all of the subdiagonal entries of A are nonzero: a_{j+1,j} ≠ 0 for j = 1, 2, . . . , n − 1. Indeed, if A is not properly upper Hessenberg, then we can clearly decompose the problem into two or more subproblems that can be solved separately. We will therefore assume that A is properly Hessenberg from the outset.

An iteration of Francis's algorithm of degree m begins by picking m shifts ρ1, . . . , ρm. In principle m can be any positive integer, but in practice it should be fairly small. In this monograph we will focus mainly on the cases m = 1 and m = 2. The rationale for shift selection will be explained later. There are many reasonable ways to choose shifts; one simple and generally good strategy is to take ρ1, . . . , ρm to be the eigenvalues of the m × m submatrix in the lower right-hand corner of A. This is an easy computation if m = 2. Now define f(A) = (A − ρmI) ··· (A − ρ1I). We do not actually compute f(A), as this would be too expensive. Francis's algorithm just needs the first column x = f(A)e1, which is easily computed by m successive matrix-vector multiplications. The computation is especially cheap because A is upper Hessenberg. It is easy to check that (A − ρ1I)e1 has nonzero entries only in its first two positions, and (A − ρ2I)(A − ρ1I)e1 has nonzero entries only in its first three positions, and so on. As a consequence, x has nonzero entries only in its first m + 1 positions. The entire computation of x depends only on the first m columns of A (and the shifts) and requires negligible computational effort if m is small.

The next step is to build a unitary matrix Q0 whose first column is proportional to x; i.e., Q0e1 = βx for some nonzero scalar β. For this purpose an elementary reflector (Householder transformation) is often used [39, 73], but we will do something else. Since this monograph is about computing with core transformations, we will build Q0 as a product of m cores. The entries of x beyond x_{m+1} are all zero. Let C_m* be a core transformation (acting in rows m and m + 1) that transforms x_{m+1} to zero; i.e., C_m*x has a zero in position m + 1 (and below). Then let C_{m−1}* create a zero in position m; i.e., C_{m−1}*C_m*x has a zero in position m (and below). Continue to create zeros in this way. The last core transformation in the sequence is C1*, which creates a zero in position 2. Thus C1* ··· C_m*x = αe1 for some nonzero α. Pictorially, in the case m = 2,

    C1*C2*x = C1*C2*[ × × × 0 0 0 ]ᵀ = [ α 0 0 0 0 0 ]ᵀ.
Letting Q0 = Cm ··· C1, we have Q0*x = αe1 or Q0e1 = βx, where β = α⁻¹, as desired. Now use Q0 to perform a similarity transformation: A → Q0*AQ0. This disturbs the Hessenberg form, but only slightly. Because Q0 = Cm ··· C1, the transformation A → Q0*A affects only the first m + 1 rows, and the transformation Q0*A → Q0*AQ0 affects only the first m + 1 columns. Since a_{m+2,m+1} ≠ 0, row m + 2 gets filled in by this transformation. Below row m + 2, we have a big block of zeros, and these remain zero. The total effect is that there is a bulge in the Hessenberg form. In the case m = 2 and n = 6, the matrix Q0*AQ0 looks like

    C1*C2* ·
    [ × × × × × × ]
    [ × × × × × × ]
    [   × × × × × ]
    [     × × × × ]
    [       × × × ]
    [         × × ]
    · C2C1  =
    [ × × × × × × ]
    [ × × × × × × ]
    [ + × × × × × ]
    [ + + × × × × ]
    [       × × × ]
    [         × × ].
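For concreteness, here is a minimal MATLAB sketch (ours, not taken from the book) of this first step for m = 2: it computes x = (A − ρ2I)(A − ρ1I)e1 and the two rotators whose product plays the role of Q0. The random Hessenberg test matrix and the use of planerot are our own illustrative choices.

    n = 6;
    A = triu(randn(n), -1);                 % a random upper Hessenberg matrix
    rho = eig(A(n-1:n, n-1:n));             % two shifts from the trailing 2 x 2 block
    e1 = [1; zeros(n-1, 1)];
    x = (A - rho(2)*eye(n)) * ((A - rho(1)*eye(n)) * e1);   % nonzero only in x(1:3)
    % (for a complex-conjugate pair of shifts, x is real up to roundoff)
    [G2, y] = planerot(x(2:3));  x(2:3) = y;   % C2' acts on rows 2:3, zeros x(3)
    [G1, y] = planerot(x(1:2));  x(1:2) = y;   % C1' acts on rows 1:2, zeros x(2)
    alpha = x(1);                              % now x = alpha*e1
    % The active parts of C2 and C1 are G2' and G1'; Q0 = C2*C1 has first
    % column proportional to f(A)*e1.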
In this 6 × 6 matrix the bulge looks huge. If you envision, say, the 100 × 100 case (keeping m = 2), you will see that the matrix is almost upper Hessenberg with a tiny bulge at the top. This is a “2 × 2” bulge corresponding to m = 2. What does the bulge look like in the case m = 1? m = 3? The rest of the iteration consists of returning this matrix to upper Hessenberg form by the “standard” algorithm. This begins with a transformation Q∗1 that acts only on rows 2 through n and creates the desired zeros in the first column. Since the first column already consists of zeros after row m + 2, the scope of Q∗1 can be restricted a lot further in this case. It needs to act on rows 2 through m + 2 and create zeros in positions (3, 1), . . . , (m + 2, 1). The usual tool for this task is an elementary reflector, but again we will build Q1 as a product of m core transformations: Q1 = Bm+1 · · · B2 . For illustration let us stick with the case m = 2: Q1 = B3 B2 or Q∗1 = B2∗ B3∗ , where B3∗ and B2∗ create zeros in positions (4, 1) and (3, 1), respectively. Applying Q∗1 on the left, we get a matrix Q∗1 Q∗0 AQ0 that has the Hessenberg form restored in the first column. Completing the similarity transformation, we multiply by Q1 = B3 B2 on the right. B3 acts on columns 3 and 4 and creates a new nonzero in position (5, 3). B2 acts on columns 2 and 3 and creates a new nonzero in position (5, 2). Pictorially
    B2*B3* ·
    [ × × × × × × ]
    [ × × × × × × ]
    [ + × × × × × ]
    [ + + × × × × ]
    [       × × × ]
    [         × × ]
    · B3B2  =
    [ × × × × × × ]
    [ × × × × × × ]
    [   × × × × × ]
    [   + × × × × ]
    [   + + × × × ]
    [         × × ].
The bulge has not gotten any smaller, but it has been pushed one position to the right and downward. This establishes the pattern for the process. The next transformation will push the bulge over and down one more position, and so on. Thinking again of the 100 × 100 case, we see that a long sequence of such transformations will chase the bulge down through the matrix until it is finally pushed off the bottom. At this point, Hessenberg form will have been restored and the Francis iteration will be complete. For obvious reasons, Francis's algorithm is called a bulge-chasing algorithm.

An iteration of Francis's algorithm of degree m can be summarized briefly as follows:

1. Pick some shifts ρ1, . . . , ρm.
2. Compute x = f(A)e1 = (A − ρmI) ··· (A − ρ1I)e1.
3. Compute a unitary Q0 whose first column is proportional to x.
4. Do a similarity transformation A → Q0*AQ0, creating a bulge.
5. Return the matrix to upper Hessenberg form by chasing the bulge.

Letting Â denote the final result of the Francis iteration, we have

    Â = Q*_{n−2} ··· Q1*Q0* A Q0Q1 ··· Q_{n−2},

where Q0 is the transformation that creates the bulge, and Q1, . . . , Q_{n−2} are the transformations that chase it. Letting Q = Q0Q1 ··· Q_{n−2},
we have
Aˆ = Q∗ AQ.
Recall that Q0 was built in such a way that Q0e1 = βx = βf(A)e1. It is easy to check that all of the other Qi leave e1 invariant: Qie1 = e1, i = 1, . . . , n − 2. For example, Q1 = B_{m+1} ··· B2 is a product of core transformations that do not "touch the first row," and similarly for Q2, . . . , Q_{n−1}. We conclude that Qe1 = Q0Q1 ··· Q_{n−2}e1 = Q0e1 = βx. We summarize these findings in a theorem.

Theorem 2.1.1. A Francis iteration of degree m with shifts ρ1, . . . , ρm effects a unitary similarity transformation Â = Q*AQ, where Â is upper Hessenberg and

    Qe1 = βf(A)e1 = β(A − ρmI) ··· (A − ρ1I)e1

for some nonzero β. In words, the first column of Q is proportional to the first column of f(A).

Our subsequent discussion will depend only upon the properties of a Francis iteration stated in this theorem: A Hessenberg matrix is transformed to another Hessenberg matrix by a unitary similarity transformation with the "right" first column. Any transformation that has these properties qualifies as a Francis iteration. During our sketch above, we mentioned that the transforming matrices can be built from either elementary reflectors or core transformations. It doesn't matter how we do it as long as the properties stated in Theorem 2.1.1 hold. Later on we will see that the matrix Q is essentially uniquely determined by its first column, regardless of how it is built. This is the content of the famous implicit-Q theorem.

In the next section we will see why Francis's method is so powerful. Repeated iterations with well-chosen shifts typically result in rapid convergence in the sense that a_{n−m+1,n−m} → 0 quadratically.⁶ In a few iterations, a_{n−m+1,n−m} will be small enough to be considered zero. We can then deflate the problem:

    A = [ A11  A12 ]
        [  0   A22 ].

The (small) m × m matrix A22 can be resolved into m eigenvalues with negligible work. (Think of the case m = 2, for example.) We can then focus on the remaining (n − m) × (n − m) submatrix A11 and go after another set of m eigenvalues.

⁶This is a simplification. It can happen (especially when m is not so small, e.g., m = 8) that some shifts are much better than others, resulting in a_{n−k+1,n−k} → 0 for some k < m.
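The following MATLAB sketch carries out one single-shift (m = 1) Francis iteration on a Hessenberg matrix stored conventionally, i.e., the explicit bulge chase just summarized. It is our own illustration, not the book's code, and it uses MATLAB's planerot for the core transformations.

    function A = francis_single_shift(A, rho)
    % One degree-1 Francis iteration (bulge chase) on a properly upper
    % Hessenberg matrix A with shift rho.
        n = size(A, 1);
        % Create the bulge: Q0 acts on rows/columns 1:2, Q0*e1 ~ (A - rho*I)*e1.
        [G, ~] = planerot([A(1,1) - rho; A(2,1)]);
        A(1:2, :) = G  * A(1:2, :);
        A(:, 1:2) = A(:, 1:2) * G';
        % Chase the bulge: at step i the bulge sits at position (i+1, i-1).
        for i = 2:n-1
            [G, ~] = planerot(A(i:i+1, i-1));
            A(i:i+1, i-1:n) = G * A(i:i+1, i-1:n);   % zeros A(i+1, i-1)
            A(i+1, i-1) = 0;                         % clean up roundoff
            A(:, i:i+1) = A(:, i:i+1) * G';          % completes the similarity
        end
    end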
2.2 Why the Algorithm Works

Subspace Iteration

At the heart of Francis's algorithm lies a very basic iteration, the power method. In the k-dimensional version, which is known as subspace iteration, we pick a
k-dimensional subspace S of Cⁿ and, through repeated multiplication by A, build the sequence of subspaces

    S, AS, A²S, A³S, . . . .                (2.2.1)

Here by AS we mean {Ax | x ∈ S}. To avoid complications in the discussion, we will make some simplifying assumptions. We will suppose that all of the spaces in the sequence have the same dimension k.⁷ We also assume that A is a diagonalizable matrix with n linearly independent eigenvectors v1, . . . , vn and associated eigenvalues λ1, . . . , λn.⁸ Sort the eigenvalues and eigenvectors so that | λ1 | ≥ | λ2 | ≥ ··· ≥ | λn |. Any vector x can be expressed as a linear combination x = c1v1 + c2v2 + ··· + cnvn for some (unknown) c1, . . . , cn. Thus clearly

    A^j x = c1 λ1^j v1 + ··· + ck λk^j vk + c_{k+1} λ_{k+1}^j v_{k+1} + ··· + cn λn^j vn,    j = 1, 2, 3, . . . .

If | λk | > | λ_{k+1} |, the components in the directions v1, . . . , vk will grow relative to those in the directions v_{k+1}, . . . , vn as j increases. As a consequence, unless our choice of S was very unlucky, the sequence (2.2.1) will converge to the k-dimensional invariant subspace spanned by v1, . . . , vk. The convergence is linear with ratio | λ_{k+1}/λk |, which means roughly that the error is reduced by a factor of approximately | λ_{k+1}/λk | on each iteration [72, 75]. Often the ratio | λ_{k+1}/λk | will be close to 1, so convergence will be slow. In an effort to speed up convergence, we could consider replacing A by some f(A) in (2.2.1), giving the iteration

    S, f(A)S, f(A)²S, f(A)³S, . . . .                (2.2.2)

If f(z) = (z − ρm) ··· (z − ρ1) is a polynomial of degree m, each step of (2.2.2) amounts to m steps of (2.2.1) with shifts ρ1, . . . , ρm. The eigenvalues of f(A) are f(λ1), . . . , f(λn). If we renumber them so that | f(λ1) | ≥ ··· ≥ | f(λn) |, the rate of convergence of (2.2.2) will be | f(λ_{k+1})/f(λk) |. Now we have a bit more flexibility. Perhaps by a wise choice of shifts ρ1, . . . , ρm, we can make this ratio small, at least for some values of k.

⁷Of course it can happen that the dimension decreases in the course of the iterations. This does not cause any problems for us. In fact, it is good news [72], but we will not discuss that case here.

⁸The nondiagonalizable case is more complicated but leads to similar conclusions. See [75] or [72], for example.
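A minimal MATLAB sketch of shifted subspace iteration may help fix ideas. It is our illustration, with arbitrarily chosen shifts and a random matrix; the QR factorization is used only to re-orthonormalize the basis at each step.

    % Subspace iteration driven by f(A) = (A - rho2*I)(A - rho1*I).
    n = 50;  k = 4;
    A = randn(n);
    rho1 = 0.9;  rho2 = 1.1;                       % illustrative shifts (assumed)
    S = orth(randn(n, k));                         % orthonormal basis of the start space
    for j = 1:20
        S = (A - rho2*eye(n)) * ((A - rho1*eye(n)) * S);
        [S, ~] = qr(S, 0);                         % re-orthonormalize; span is f(A)^j * S0
    end
    % The columns of S now approximate a basis of the invariant subspace
    % belonging to the k eigenvalues for which |f(lambda)| is largest.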
Subspace Iteration with Changes of Coordinate System

A similarity transformation Â = Q*AQ is just a change of coordinate system. A and Â are two matrices that represent the same linear operator with respect to two different bases. Each vector in Cⁿ is the coordinate vector of some vector v in the vector space on which the operator acts. If the vector v has coordinate vector x before the change of coordinate system, it will have coordinate vector Q*x afterwards.

Now consider a step of subspace iteration applied to the special subspace

    E_k = span{e1, . . . , ek},
where ei is the standard basis vector with a 1 in the ith position and zeros elsewhere. The vectors f (A)e1 , . . . , f (A)ek are a basis for the space f (A)E k . Let q1 , . . . , qk be an orthonormal basis for f (A)E k , which could be obtained by some variant of the Gram–Schmidt process, for example. Let qk+1 , . . . , qn be additional orthonormal vectors such that q1 , .. . , qn together form an
orthonormal basis of Cⁿ, and let Q = [ q1 ··· qn ] ∈ Cⁿˣⁿ. Since q1, . . . , qn are orthonormal, Q is a unitary matrix. Now use Q to make a change of coordinate system Â = Q*AQ. Let us see what this change of basis does to the space f(A)E_k. Since f(A)E_k = span{q1, . . . , qk}, we check the vectors q1, . . . , qk. Under the change of coordinate system, these get mapped to Q*q1, . . . , Q*qk. Because the columns of Q are the orthonormal vectors q1, . . . , qn, the vectors Q*q1, . . . , Q*qk are exactly e1, . . . , ek. Thus the change of coordinate system maps f(A)E_k back to E_k.

Now, if we want to do another step of subspace iteration, we can work in the new coordinate system, applying f(Â) to E_k. Then we can do another change of coordinate system and map f(Â)E_k back to E_k. If we continue to iterate in this manner, we produce a sequence of unitarily similar matrices (Aj) through successive changes of coordinate system, and we are always dealing with the same subspace E_k. This is a version of subspace iteration for which the subspace stays fixed and the matrix changes.

What does convergence mean in this case? As j increases, Aj becomes closer and closer to having E_k as an invariant subspace. The special space E_k is invariant under Aj if and only if Aj has the block-triangular form

    Aj = [ A11^{(j)}  A12^{(j)} ]
         [     0      A22^{(j)} ],

where A11^{(j)} is k × k. Of course, we just approach invariance; we never attain it exactly. In practice we will have

    Aj = [ A11^{(j)}  A12^{(j)} ]
         [ A21^{(j)}  A22^{(j)} ],

where A21^{(j)} → 0 linearly with ratio | f(λ_{k+1})/f(λk) |. Eventually A21^{(j)} will get small enough that we can set it to zero and split the eigenvalue problem into two smaller problems.

If one thinks about implementing subspace iteration with changes of coordinate system in a straightforward manner, as outlined above, it appears to be a fairly expensive procedure. Fortunately there is a way to implement this method on upper Hessenberg matrices at reasonable computational cost, namely Francis's algorithm. This amazing procedure effects subspace iteration with changes of coordinate system, not just for one k, but for k = 1, . . . , n − 1 all at once. At this point we are not yet ready to demonstrate this fact, but we can get a glimpse at what is going on.
Francis’s algorithm begins by computing the vector f (A)e1 . This is a step of the power method, or subspace iteration with k = 1, mapping E 1 = span{e1 } to f (A)E 1 . Then, according to Theorem 2.1.1, this is followed by a change of coordinate system Aˆ = Q∗ AQ in which span{q1 } = f (A)E 1 . This is exactly the case k = 1 of subspace iteration with a change of coordinate system. Thus if we perform iterations repeatedly with the same shifts (the effect of changing shifts will be discussed later), the sequence of iterates (Aj ) so produced will converge linearly to the form ⎡ ⎤ ∗ ··· ∗ λ1 ⎢ 0 ∗ ··· ∗ ⎥ ⎢ ⎥ ⎢ . .. .. ⎥ . ⎣ .. . . ⎦ 0
∗ ···
∗ (j)
Since the iterates are upper Hessenberg, this just means that a21 → 0. The convergence is linear with ratio | f (λ2 )/f (λ1 ) |. This is just the tip of the iceberg. To get the complete picture, we must introduce one more concept.
Krylov Subspaces Given a nonzero vector x, the sequence of Krylov subspaces associated with x is defined by K1 (A, x) = span{x}, K2 (A, x) = span{x, Ax}, K3 (A, x) = span x, Ax, A2 x , and in general Kk (A, x) = span x, Ax, . . . , Ak−1 x , k = 1, 2, . . . , n. This is clearly a nested sequence: K1 (A, x) ⊆ K2 (A, x) ⊆ K3 (A, x) ⊆ · · · . Typically Kk (A, x) has dimension k, but not always. It is easy to check that growth of dimension stops as soon as we get to an invariant subspace: Kj (A, x) = Kj+1 (A, x) = Kj+2 (A, x) = · · · if and only if Kj (A, x) is invariant under A. We need to make a couple of observations about Krylov subspaces. The first is that wherever there are Hessenberg matrices, there are Krylov subspaces. Theorem 2.2.1. Let A be properly upper Hessenberg. Then E k = span{e1 , . . . , ek } = Kk (A, e1 ),
k = 1, . . . , n.
The proof is an easy exercise. Theorem 2.2.2. Suppose H = Q−1 AQ, where H is properly upper Hessenberg. Let q1 , . . . , qn denote the columns of Q, and let Qk = span{q1 , . . . , qk }, k = 1, . . . , n. Then Qk = Kk (A, q1 ), k = 1, . . . , n. In our application, Q is unitary, but this theorem is valid for any nonsingular Q. In the special case Q = I, it reduces to Theorem 2.2.1.
Proof. We use induction on k. For k = 1, the result holds trivially. Now, for the induction step, rewrite the similarity transformation as AQ = QH. Equating kth columns of this equation we have, for k = 1, . . . , n − 1,

    Aqk = Σ_{i=1}^{k+1} qi h_{ik}.

The sum stops at k + 1 because H is upper Hessenberg. We can rewrite this as

    q_{k+1} h_{k+1,k} = Aqk − Σ_{i=1}^{k} qi h_{ik}.                (2.2.3)

Our induction hypothesis is that Q_k = span{q1, . . . , qk} = Kk(A, q1). Since qk ∈ Kk(A, q1), it is clear that Aqk ∈ K_{k+1}(A, q1). Then, since h_{k+1,k} ≠ 0, equation (2.2.3) implies that q_{k+1} ∈ K_{k+1}(A, q1). Thus Q_{k+1} = span{q1, . . . , q_{k+1}} ⊆ K_{k+1}(A, q1). Since q1, . . . , q_{k+1} are linearly independent, and the dimension of K_{k+1}(A, q1) is at most k + 1, these subspaces must be equal.

We remark in passing that (2.2.3) has computational as well as theoretical importance. It is the central equation of the Arnoldi process, one of the most important algorithms for large, sparse matrix computations [13].

Our second observation about Krylov subspaces is in connection with subspace iteration. Given a matrix A, which we take for granted as our given object of study, we can say that each nonzero vector x contains the information to build a whole sequence of Krylov subspaces K1(A, x), K2(A, x), K3(A, x), . . . . Now consider a step of subspace iteration applied to a Krylov subspace. One easily checks that f(A)Kk(A, x) = Kk(A, f(A)x), as a consequence of the equation Af(A) = f(A)A. Thus the result of a step of subspace iteration on a Krylov subspace generated by x is another Krylov subspace, namely the one generated by f(A)x. It follows that the power method x → f(A)x implies a whole sequence of nested subspace iterations, in the following sense. The vector x generates a whole sequence of Krylov subspaces Kk(A, x), k = 1, . . . , n, and f(A)x generates the sequence Kk(A, f(A)x), k = 1, . . . , n. Moreover, the kth space Kk(A, f(A)x) is the result of one step of subspace iteration on the kth space Kk(A, x).
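For readers who want to see (2.2.3) in action, here is a minimal MATLAB sketch of the Arnoldi process. It is ours (with no breakdown test); it builds orthonormal vectors q1, . . . , q_{m+1} and the corresponding entries h_{ik}.

    function [Q, H] = arnoldi(A, q1, m)
    % Arnoldi process built on equation (2.2.3): A*q_k = sum_{i=1}^{k+1} q_i*h_{i,k}.
        n = size(A, 1);
        Q = zeros(n, m+1);  H = zeros(m+1, m);
        Q(:, 1) = q1 / norm(q1);
        for k = 1:m
            v = A * Q(:, k);
            for i = 1:k                      % Gram-Schmidt against q_1, ..., q_k
                H(i, k) = Q(:, i)' * v;
                v = v - Q(:, i) * H(i, k);
            end
            H(k+1, k) = norm(v);
            Q(:, k+1) = v / H(k+1, k);       % equation (2.2.3), rescaled
        end
    end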
Back to Francis’s Algorithm Each iteration of Francis’s algorithm executes a step of the power method e1 → f (A)e1 followed by a change of coordinate system. As we have just seen, the step of the power method implies subspace iterations on the corresponding Krylov subspaces: Kk (A, e1 ) → Kk (A, f (A)e1 ), k = 1, . . . , n. This is not just some theoretical action. Since Francis’s algorithm operates on properly upper Hessenberg matrices, the Krylov subspaces in question reside in the columns of the transforming matrices. Indeed, a Francis iteration makes a change of coordinate system Aˆ = Q∗ AQ. It can happen that Aˆ has one or more zeros on the subdiagonal. This is a rare
and lucky event, which allows us to reduce the problem immediately to two smaller eigenvalue problems.9 Let us assume we have not been so lucky. Then Aˆ is properly upper Hessenberg, and the following theorem holds. Theorem 2.2.3. Consider an iteration of Francis’s algorithm of degree m, Aˆ = Q∗ AQ, where A and Aˆ are properly upper Hessenberg matrices. Let f (z) = (z − ρ1 ) · · · (z − ρm ), where ρ1 , . . . , ρm are the shifts. For each k (1 ≤ k ≤ n − 1) the iteration A → Aˆ effects one step of subspace iteration driven by f (A), followed by a change of coordinate system. This means that f (A)E k = Qk . The change of coordinate system Aˆ = Q∗ AQ maps Qk back to E k . Proof. We can apply Theorem 2.2.2, along with Theorems 2.1.1 and 2.2.1, to deduce that, for k = 1, 2, . . . , n − 1, Qk = Kk (A, q1 ) = Kk (A, f (A)e1 ) = f (A) Kk (A, e1 ) = f (A)E k .
Now suppose we pick some shifts ρ1, . . . , ρm, let f(z) = (z − ρm) ··· (z − ρ1), and do a sequence of Francis iterations with this fixed f to produce the sequence (Aj). Then, for each k for which | f(λ_{k+1})/f(λk) | < 1, we have a_{k+1,k}^{(j)} → 0 linearly with rate | f(λ_{k+1})/f(λk) |. Thus all of the ratios | f(λ_{k+1})/f(λk) |,
k = 1, . . . , n − 1,
are important. If any one of them is small, then one of the subdiagonal entries will converge rapidly to zero, allowing us to break the problem into smaller eigenvalue problems.
Implicit Q

The conclusions we have drawn follow from just two things: (1) The matrix Â = Q*AQ is properly upper Hessenberg, and (2) q1 = βf(A)e1. "Everything" is determined by q1. Let us formalize this.

We will say that two unitary transforming matrices Q and Q̃ are essentially identical if there exists a (unitary) diagonal matrix D such that Q̃ = QD. This is clearly an equivalence relation, and it is also easy to check that Q and Q̃ are essentially identical if and only if they "span the same subspaces" in the sense that

    span{q1, . . . , qk} = span{q̃1, . . . , q̃k},    k = 1, . . . , n.                (2.2.4)

Theorem 2.2.4 (implicit Q). Suppose H = Q*AQ and H̃ = Q̃*AQ̃, where H and H̃ are properly upper Hessenberg, and Q and Q̃ are unitary and have essentially the same first column: q̃1 = q1d1 for some scalar d1. Then Q and Q̃ are essentially identical.

It follows that H and H̃ are "essentially identical" in a slightly different sense: If Q̃ = QD, then H̃ = D*HD. Thus the first column of Q (essentially) determines everything.

⁹Assuming exact arithmetic, this happens when and only when one or more of the shifts is exactly an eigenvalue of A [72, Section 4.6].
Proof. By two applications of Theorem 2.2.2 we have, for k = 1, . . . , n, span{q1 , . . . , qk } = Kk (A, q1 ) = Kk (A, q˜1 ) = span{˜ q1 , . . . , q˜k }. Thus (2.2.4) holds, and we are done. ˜ are essentially identical, To show that (2.2.4) really does imply that Q and Q q1 , . . . , q˜k−1 } proceed as follows. For any k, Kk−1 = span{q1 , . . . , qk−1 } = span{˜ and Kk = span{q1 , . . . , qk } = span{˜ q1 , . . . , q˜k }. Let S denote the (clearly onedimensional) orthogonal complement of Kk−1 in Kk . By orthogonality of each set of vectors, we see that S = span{qk } = span{˜ qk }. Thus q˜k = qk dk for some ˜ = QD. scalar dk . Letting D = diag{d1 , . . . , dn }, we have Q Because of implicit Q, the exact details of our Francis bulge chase are irrelevant. For example, it does not matter whether we use elementary reflectors or core transformations to chase the bulge. All that matters is that we complete a unitary similarity transformation Aˆ = Q∗ AQ, where Aˆ is (properly) upper Hessenberg and the first column of Q points in the right direction, i.e., q1 = Qe1 = βf (A)e1 .
Some Fine Points

After a Francis iteration, the resulting Hessenberg matrix Â is almost always properly Hessenberg (although some subdiagonal entries may be quite small), but what if it is not? What if â_{j+1,j} = 0 for some j? To answer this question we consider an extension of Theorem 2.2.2.

Theorem 2.2.5. Suppose H = Q⁻¹AQ, where H is upper Hessenberg. Suppose further that for some j < n, h_{k+1,k} ≠ 0 for k = 1, . . . , j − 1. Then

    span{q1, . . . , qk} = Kk(A, q1),    k = 1, . . . , j.

Moreover, h_{j+1,j} = 0 if and only if span{q1, . . . , qj} = Kj(A, q1) is invariant under A.

The proof is left as an exercise. The induction proof of Theorem 2.2.2 remains valid for k = 1, . . . , j. Then one must check carefully what happens at step j + 1.

Now we generalize the implicit-Q theorem.

Theorem 2.2.6. Suppose H = Q*AQ and H̃ = Q̃*AQ̃, where H and H̃ are upper Hessenberg, and for some j < n, h_{k+1,k} ≠ 0 for k = 1, . . . , j − 1. Suppose Q and Q̃ are unitary and have essentially the same first column: q̃1 = q1d1 for some scalar d1. Then h̃_{k+1,k} ≠ 0 for k = 1, . . . , j − 1, and there exist scalars dk such that q̃k = qkdk for k = 1, . . . , j. Moreover, h_{j+1,j} = 0 if and only if h̃_{j+1,j} = 0 if and only if Kj(A, q1) is invariant under A.

Proof. Since h_{k+1,k} ≠ 0 for k = 1, . . . , j − 1, we can invoke Theorem 2.2.5 to deduce that span{q1, . . . , qk} = Kk(A, q1) for k = 1, . . . , j. Moreover, also by Theorem 2.2.5, we can deduce that the spaces Kk(A, q1) are not invariant under A for k = 1, . . . , j − 1 because h_{k+1,k} ≠ 0. Thus Kk(A, q̃1) (the exact same spaces) are not invariant under A for k = 1, . . . , j − 1. The noninvariance of K1(A, q̃1) implies that h̃21 ≠ 0, which further implies that span{q̃1, q̃2} =
K2(A, q̃1) by Theorem 2.2.5 with j = 2. Now proceeding inductively we deduce that h̃_{k+1,k} ≠ 0 for k = 1, . . . , j − 1, and finally

    span{q̃1, . . . , q̃k} = Kk(A, q̃1),    k = 1, . . . , j.

It also follows from Theorem 2.2.5 that h_{j+1,j} = 0 if and only if h̃_{j+1,j} = 0 if and only if Kj(A, q1) = Kj(A, q̃1) is invariant under A. The fact that q̃k = qkdk for k = 1, . . . , j can be deduced in exactly the same way as in the proof of Theorem 2.2.4.

Now consider a Francis iteration resulting in Â satisfying â_{k+1,k} ≠ 0 for k = 1, . . . , j − 1 and â_{j+1,j} = 0. Then Theorem 2.2.6 tells us that the first j columns of Q are essentially uniquely determined by q1 = βf(A)e1. It is also easy to check that the Francis iteration effects nested subspace iterations of dimensions k = 1, . . . , j. The space span{q1, . . . , qj} = Kj(A, q1) is invariant under A. Â has the block-triangular form

    Â = [ Â11  Â12 ]
        [  0   Â22 ],

so the eigenvalue problem has been split in two. Â11 has as its eigenvalues those eigenvalues of A that are associated with the invariant subspace Kj(A, q1). In typical occurrences of such splittings, j is large and n − j is small, so the eigenvalues of Â22 can be determined very quickly and the focus then shifts to Â11.
2.3 Choice of Shifts

The convergence results we have obtained so far are based on the assumption that we are going to pick some shifts ρ1, . . . , ρm in advance and use them over and over again. Of course, this is not what really happens. In practice, we get access to better and better shifts as we proceed, so we might as well use them. How, then, does one choose good shifts?

To answer this question we start with a thought experiment. Suppose that we are somehow able to find m shifts that are excellent in the sense that each of them approximates one eigenvalue well (good to several decimal places, say) and is not nearly so close to any of the other eigenvalues. Assume the m shifts approximate m different eigenvalues. We have f(z) = (z − ρm) ··· (z − ρ1), where each zero of f approximates an eigenvalue well. Then, for each eigenvalue λk that is well approximated by a shift, | f(λk) | will be tiny. There are m such eigenvalues. For each eigenvalue that is not well approximated by a shift, | f(λk) | will not be small. Thus, exactly m of the numbers | f(λk) | are small. If we renumber the eigenvalues so that | f(λ1) | ≥ | f(λ2) | ≥ ··· ≥ | f(λn) |, we will have | f(λ_{n−m}) | ≫ | f(λ_{n−m+1}) |, and | f(λ_{n−m+1})/f(λ_{n−m}) | ≪ 1. This will cause a_{n−m+1,n−m}^{(j)} to converge to zero rapidly. Once this entry gets
very small, the bottom m × m submatrix

    [ a_{n−m+1,n−m+1}^{(j)}  ···  a_{n−m+1,n}^{(j)} ]
    [          ⋮                         ⋮          ]                (2.3.1)
    [ a_{n,n−m+1}^{(j)}      ···  a_{n,n}^{(j)}     ]

will be nearly separated from the rest of the matrix. Its eigenvalues will be excellent approximations to eigenvalues of A, eventually even substantially better than the shifts that were used to find them.¹⁰ It makes sense, then, to take the eigenvalues of (2.3.1) as new shifts, replacing the old ones. This will result in a reduction of the ratio | f(λ_{n−m+1})/f(λ_{n−m}) | and even faster convergence. At some future point it would make sense to replace these shifts by even better ones to get even better convergence.

This thought experiment shows that at some point it makes sense to use the eigenvalues of (2.3.1) as shifts, but several questions remain. How should the shifts be chosen initially? At what point should we switch to the strategy of using eigenvalues of (2.3.1) as shifts? How often should we update the shifts?

A great deal of testing and experimentation has led to the following empirical conclusions. The shifts should be updated on every iteration, and it is okay to use the eigenvalues of (2.3.1) as shifts right from the very start. At first they will be poor approximations, and progress will be slow. After a few (or sometimes more) iterations, they will begin to home in on eigenvalues and the convergence rate will improve. With new shifts chosen on each iteration, the method normally converges quadratically [72, 75], which means that, once | a_{n−m+1,n−m}^{(j)} | becomes sufficiently small, | a_{n−m+1,n−m}^{(j+1)} | will be roughly the square of | a_{n−m+1,n−m}^{(j)} |. Thus successive values of | a_{n−m+1,n−m}^{(j)} | could be approximately 10⁻³, 10⁻⁶, 10⁻¹², 10⁻²⁴, for example. Very quickly this number gets small enough that we can declare it to be zero and split off an m × m submatrix and m eigenvalues. Then we can deflate the problem and go after the next set of m eigenvalues.

There is one caveat. The strategy of using the eigenvalues of (2.3.1) as shifts does not always work, so variants have been introduced. And it is safe to say that all shifting strategies now in use are indeed variants of this simple strategy. The strategies used by the codes in MATLAB® and other modern software are quite good, but they might not be unbreakable. Certainly nobody has been able to prove that they are. It is an open question to come up with a shifting strategy that provably always works and normally yields rapid convergence. It is a common phenomenon that numerical methods work better than we can prove they work. In Francis's algorithm the introduction of dynamic shifting (new shifts on each iteration) dramatically improves the convergence rate but at the same time makes the analysis much more difficult.

¹⁰The eigenvalues that are well approximated by (2.3.1) are indeed the same as the eigenvalues that are well approximated by the shifts.
2.4 Where Is QR? Francis’s algorithm is commonly known as the implicitly shifted QR algorithm. This (QR) is a curious name, given that the algorithm does not involve any QR decompositions, nor did we make use of any QR decompositions in our 10 The eigenvalues that are well approximated by (2.3.1) are indeed the same as the eigenvalues that are well approximated by the shifts.
explanation of the algorithm. One might eventually like to see the connection between Francis’s algorithm and the QR decomposition. Theorem 2.4.1. Consider a Francis iteration Aˆ = Q∗ AQ of degree m with shifts ρ1 , . . . , ρm . Let f (z) = (z − ρm ) · · · (z − ρ1 ), as usual. Then there is an upper triangular matrix R such that f (A) = QR. In words, the transforming matrix Q of a Francis iteration is exactly the unitary factor in the QR decomposition of f (A). Proof. We can easily prove this theorem with the aid of Krylov matrices, defined by
    K(A, x) = [ x   Ax   ···   A^{n−1}x ].

The reader can easily verify the following facts. K(A, e1) and K(Â, e1) are upper triangular because A and Â are Hessenberg, K(A, e1) is nonsingular because A is properly upper Hessenberg, f(A)K(A, e1) = K(A, f(A)e1), and K(A, Qe1) = Q K(Â, e1). Putting these together and remembering that f(A)e1 = αQe1, we have

    f(A)K(A, e1) = K(A, f(A)e1) = αK(A, Qe1) = αQ K(Â, e1).

Thus f(A) = QR, where R = αK(Â, e1)K(A, e1)⁻¹ is upper triangular.

Theorem 2.4.1 is also valid for a long sequence of Francis iterations, for which ρ1, . . . , ρm is the long list of all shifts used in all iterations (m could be a million). The next theorem makes this precise. Consider a sequence of Francis iterations, starting from A0 and producing, at the kth step, Ak = Qk*A_{k−1}Qk. Let fk denote the polynomial built from the shifts for the kth step (which can have arbitrary degree). Then, by Theorem 2.4.1, fk(A_{k−1}) = QkRk for some upper triangular Rk.

Theorem 2.4.2. Define f̌k = fk ··· f1. Thus f̌k is the high-degree polynomial whose zeros are all of the shifts that were employed in the first k iterations. Let Q̌k = Q1 ··· Qk and Řk = Rk ··· R1. Then

    Ak = Q̌k* A0 Q̌k    and    f̌k(A0) = Q̌k Řk.
Together with Theorem 2.4.1, this shows that (in exact arithmetic) the effect of a number of iterations of low degree is equivalent to that of a single iteration of high degree using the same shifts. Proof. The proof is by induction on k. The theorem holds in the case k = 1 ˇ k if ˇkR by Theorem 2.4.1. We complete the proof by showing that fˇk (A0 ) = Q ˇ ˇ ˇ fk−1 (A0 ) = Qk−1 Rk−1 . Using this induction hypothesis along with the equation
fk(A_{k−1}) = QkRk from the kth step, as well as the evident facts f̌k = fk f̌_{k−1}, fk(A0) = Q̌_{k−1} fk(A_{k−1}) Q̌*_{k−1}, Q̌k = Q̌_{k−1}Qk, and Řk = Rk Ř_{k−1}, we get

    f̌k(A0) = fk(A0) f̌_{k−1}(A0)
            = Q̌_{k−1} fk(A_{k−1}) Q̌*_{k−1} Q̌_{k−1} Ř_{k−1}
            = Q̌_{k−1} Qk Rk Ř_{k−1} = Q̌k Řk.
This is a useful tool for formal convergence proofs. In fact, all formal convergence results for Francis’s and related algorithms that we are aware of make use of this theorem or a variant of it. Theorem 2.4.2 suggests that it makes no difference whether we do many iterations of low degree or one iteration of high degree with the same shifts. In practice, i.e., in floating-point arithmetic, there is a difference. Iterations of high degree are ineffective because of shift blurring [69], [72, Chapter 7], so many iterations of low degree are preferred over a single iteration of high degree. In this chapter we have focused on computation of eigenvalues. Francis’s algorithm can be adapted in straightforward ways to the computation of eigenvectors and invariant subspaces as well [39, 72, 73].
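Theorem 2.4.1 is easy to check numerically. The following MATLAB sketch is ours, not from the book; it builds f(A) for two shifts, takes its QR decomposition, and verifies that Q*AQ is again upper Hessenberg, i.e., that an explicit shifted QR step reproduces the result of a Francis iteration.

    n = 8;
    A = hess(randn(n));                          % a (generically properly) Hessenberg matrix
    rho = eig(A(n-1:n, n-1:n));                  % two shifts
    fA = (A - rho(2)*eye(n)) * (A - rho(1)*eye(n));
    [Q, R] = qr(fA);
    Ahat = Q' * A * Q;
    norm(tril(Ahat, -2))                         % of the order of roundoff: Ahat is Hessenberg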
Chapter 3
Francis’s Algorithm as a Core-Chasing Algorithm
Our first objective is to show how to implement Francis’s algorithm when A is presented in the QR-decomposed form A = QR = Q1 · · · Qn−1 R, where Q1 , . . . , Qn−1 are core transformations. As we indicated in Chapter 1, we may sometimes include a unitary diagonal matrix D in our factorization. This is a mere detail which, along with all of the other details discussed in Section 1.4, will be ignored here in the interest of avoiding clutter. For now we are going to assume that A is nonsingular, deferring the singular case to Section 3.4. If A is nonsingular, then so is R, so all of its main-diagonal entries are nonzero. This, along with the assumption that all of the core transformations Qi are nontrivial, ensures that A is properly upper Hessenberg.
3.1 Single-Shift Algorithm

An iteration begins with the choice of a single shift ρ and computes (A − ρI)e1, the first column of A − ρI. Then a unitary matrix U1 (called Q0 in Chapter 2) with first column proportional to (A − ρI)e1 is built. Since only the first two entries of this vector are nonzero, a core transformation can do the job. Indeed, we can compute c1 ← a11 − ρ, s1 ← a21, r ← sqrt(| c1 |² + | s1 |²), c1 ← c1/r, s1 ← s1/r and take U1 to be the core transformation with active part

    [ c1  −s1 ]
    [ s1   c1 ].

We then do a unitary similarity, transforming A to U1*AU1. In Chapter 2 we showed how the algorithm proceeds when A is stored in the conventional manner. There we highlighted the case m = 2, but now we have m = 1. The left multiplication by U1* recombines the first two rows of A, leaving the matrix in Hessenberg form, but the right multiplication recombines the first two columns, disturbing the Hessenberg form: There is now a bulge in the Hessenberg form at position (3, 1). The next step is to build a core
transformation U2 such that left multiplication by U2∗ , acting on rows 2 and 3, transforms the bulge to zero. When the similarity transformation is completed by right multiplication by U2 (acting on columns 2 and 3), a new bulge is formed at position (4, 2):
    U2*U1*AU1U2  =  U2* ·
    [ × × × × × × ]
    [ × × × × × × ]
    [ + × × × × × ]
    [     × × × × ]
    [       × × × ]
    [         × × ]
    · U2  =
    [ × × × × × × ]
    [ × × × × × × ]
    [   × × × × × ]
    [   + × × × × ]
    [       × × × ]
    [         × × ].
The bulge has been chased down and to the right by one position. Clearly we can now build a core transformation U3 to chase the bulge to position (5, 3), and so on. Eventually the bulge reaches the bottom. At step n − 1, it is eliminated from position (n, n − 2) by left multiplication by U*_{n−1}. The subsequent right multiplication by U_{n−1} acts on columns n − 1 and n, creating no new bulge. The matrix has been returned to Hessenberg form, and the iteration is complete. The result of the iteration is a new upper Hessenberg matrix

    Â = U*_{n−1} ··· U1* A U1 ··· U_{n−1}.
Now let’s see what this bulge chase looks like in our current scenario, in which A is presented in the QR-decomposed form A = Q1 · · · Qn−1 R, i.e.,
A = QR = [diagram: the descending sequence Q1 ··· Q_{n−1} of core transformations followed by the upper triangular matrix R].
The first step is to build the core transformation U1 , as described above, and transform A to U1∗ AU1 . Pictorially U1∗ AU1 =
[diagram: U1* to the left of the descending sequence; U1 to the right of R].
Clearly we can fuse the two core transformations on the left, U1∗ and Q1 , to form a single new (modified) Q1 . Let’s do this. The core transformation on the right, U1 , requires more effort. We begin by passing it through the upper-triangular matrix to bring it into contact with the other cores:
[diagram: U1 passed through R, now sitting just to the right of the descending sequence].
The presence of the extra core transformation (red) implies that the matrix is no longer in upper Hessenberg form. If we were to multiply everything together, we would find a bulge in the Hessenberg form at position (3, 1). In our present formulation we don’t see the bulge; we see an extra core transformation, which we call the misfit. Now, instead of chasing the bulge, we chase the misfit. This is a core-chasing algorithm. The first step is a turnover to shift the misfit through the descending sequence that represents Q:
[diagram: the turnover shifts the misfit through the descending sequence; the new misfit emerges on the left].
The resulting new misfit, which we will call U2 , is (essentially) the same as the matrix U2 appearing in the bulge chase above (as can be shown). The next step is to do a similarity transform, multiplying by U2∗ on the left and U2 on the right. This causes U2 to disappear from the left and show up on the right:
[diagram: after the similarity transformation, U2 appears on the right of the matrix].
Now the misfit gets the same treatment as before: we pass it through the triangular matrix, and then we shift it through the descending sequence of core transformations. Depicting both moves in a single diagram, we have
[diagram: U2 passed through R and turned over through the descending sequence; the new misfit U3 emerges on the left].
The misfit on the left, which we will call U3 , is (essentially) the same as the U3 that was produced in the bulge chase above. The pattern should now be clear. We will do a similarity transformation to move U3 to the other side of the matrix. Then we’ll pass it through the upper triangular part and shift it through the descending sequence, causing U4 to pop out on the left. We indicate all three moves in a single diagram:
[diagram: similarity transformation, pass through R, and turnover; U4 emerges on the left, one position lower].
We continue the chase in this manner until the misfit arrives at the bottom, at step n − 1. Once we pass Un−1 through the triangular part, we find that the resulting misfit is in a perfect position to be fused with the bottom transformation in the descending sequence:
[diagram: the misfit sits directly next to the bottom core transformation of the descending sequence].
Performing this fusion, we are done:
[diagram: the final descending sequence times the upper triangular matrix; Hessenberg form restored].
The matrix has been returned to Hessenberg form; we have arrived at

    Â = U*_{n−1} ··· U1* A U1 ··· U_{n−1},

with Â in the factored form Â = Q̂1 ··· Q̂_{n−1} R̂.

Let U = U1U2 ··· U_{n−1}, and recall that U1 was constructed so that U1e1 = β(A − ρI)e1. It is also clear that Uke1 = e1 for k = 2, . . . , n − 1, so Ue1 = β(A − ρI)e1. Thus we have completed a single Francis iteration: Â = U*AU with Ue1 = β(A − ρI)e1. See Theorem 2.1.1 and the ensuing discussion of the implicit-Q theorem. During the discussion we made the assertion that the Ui that appear in the similarity transformations in the core chase are (essentially) the same as the Ui that appear in the bulge chase. We leave the proof as an exercise for the reader (a detailed implicit-Q theorem), but we have not relied on this equivalence for our justification that the core-chasing algorithm effects a Francis iteration.

The flop count for an iteration in this format is roughly comparable to the count for an iteration in the standard format. The expensive part is passing the core transformations through the triangular matrix. There are n − 1 such events, each costing O(n) flops. Therefore the cost of each iteration is O(n²) flops.
Choice of Shift The first detail we consider is the choice of shift. Recall that the rationale for shifting was discussed in Section 2.3. The simplest choice is the Rayleigh quotient shift ρ = ann . Since we don’t have A stored in the conventional form, we need to do a bit of work (but not much) to get this shift. One easily checks that the last row of A is the same as the last row of Qn−1 R, so just a bit of work gives us ann . Specifically, ρ = ann = sn−1 rn−1,n + cn−1 rnn .
A better choice is the Wilkinson shift, which is determined as follows. Compute the eigenvalues of the submatrix

    [ a_{n−1,n−1}  a_{n−1,n} ]
    [ a_{n,n−1}    a_{n,n}   ]                (3.1.1)

by a careful implementation of the quadratic formula, and take the one that is closer to a_{nn} as the shift. This strategy requires knowledge of entries from the bottom two rows of A. It is easy to check that the bottom two rows of A are the same as the bottom two rows of Q_{n−2}Q_{n−1}R, so the submatrix (3.1.1) can easily be obtained in O(1) operations. Another possibility is to compute the two eigenvalues of (3.1.1) and use them both as shifts (on two consecutive steps). See also Section 3.2.

This procedure can be generalized to give a large number of shifts at once. Say we want to generate m shifts, where (in practice) m ≪ n. Generate the lower right-hand m × m submatrix of A, which is the same as the corresponding submatrix of Q_{n−m} ··· Q_{n−1}R. (Better yet, leave the submatrix in factored form.) Then use an auxiliary routine to compute the m eigenvalues of this submatrix. Use these as shifts for m consecutive steps. This procedure is inexpensive as long as m ≪ n. We will show how to make use of it in Section 3.6.
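A minimal MATLAB sketch of the Wilkinson shift follows. It is ours, and it uses the plain quadratic formula rather than the careful, cancellation-avoiding implementation recommended above; B is the trailing 2 × 2 submatrix (3.1.1).

    function rho = wilkinson_shift(B)
    % Return the eigenvalue of the 2x2 matrix B that is closer to B(2,2).
        tr = B(1,1) + B(2,2);
        dt = B(1,1)*B(2,2) - B(1,2)*B(2,1);
        d  = sqrt(tr^2/4 - dt);              % may be complex
        mu1 = tr/2 + d;  mu2 = tr/2 - d;
        if abs(mu1 - B(2,2)) < abs(mu2 - B(2,2))
            rho = mu1;
        else
            rho = mu2;
        end
    end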
3.2 Double-Shift Algorithm

In many applications the matrix A is real, and it is natural to hope for an algorithm that works entirely in real arithmetic. But real matrices often have complex eigenvalues, and if we wish to find the complex eigenvalues rapidly, we will have to use complex shifts. This puts us in the realm of complex arithmetic. The double-shift Francis algorithm will allow us to use complex shifts while staying in real arithmetic by taking the shifts in complex-conjugate pairs.

Suppose A is real and properly upper Hessenberg, and we wish to perform a double Francis step on A. We start by choosing two shifts ρ1 and ρ2, and we will insist that these either be a complex conjugate pair or both be real. For example, we could take ρ1 and ρ2 to be the eigenvalues of the submatrix

    [ a_{n−1,n−1}  a_{n−1,n} ]
    [ a_{n,n−1}    a_{n,n}   ].

Regardless of whether the shifts are real or a conjugate pair, the matrix

    f(A) = (A − ρ2I)(A − ρ1I) = A² − (ρ1 + ρ2)A + ρ1ρ2I

is real, and so is the vector f(A)e1. Next we build a unitary Q0 whose first column is proportional to x = f(A)e1. As we showed in Chapter 2, Q0 can be built as a product of two core transformations: Q0 = C2C1, where C1 and C2 are constructed so that

    C1*C2*x = C1*C2*[ × × × 0 ··· 0 ]ᵀ = [ × 0 ··· 0 ]ᵀ.
Since x = f (A)e1 is real, C1 and C2 are real orthogonal matrices. We use these to set our similarity transformation in motion. Since they are real, everything will stay real throughout the core chase. Our initial similarity transformation is C1T C2T AC2 C1
[diagram: C1ᵀ and C2ᵀ on the left of the factored form Q1 ··· Q_{n−1}R, with C2 and C1 on the right].
One of the core transformations on the left can be removed by a turnover followed by a fusion. Restricting our focus to the “Q” part of the picture, we have
[diagram: a turnover followed by a fusion removes one of the core transformations on the left],
giving the big picture
[diagram: the full factored form with three misfits].
We have three misfits to chase. We can bring the two on the right into contact with the one on the left by passing them both through the triangular matrix and the descending sequence of core transformations. First we move C2 :
[diagram: C2 passed through R and turned over through the descending sequence].
Then we move C1 :
[diagram: C1 passed through R and turned over through the descending sequence].                (3.2.1)
The “obvious” next move is to perform a similarity transformation that removes the three misfits from the left side of the picture and makes them appear
on the right. We can then pass the three back through the triangular part and the descending sequence, causing them to reappear on the left but moved down by one position. We then repeat the process . . . . While this certainly works, there is a better course of action. First turn the misfits over, and then do a similarity transformation with two of them, leaving the third behind. To keep things clear, let’s do this step in several pictures. After the turnover we have
[diagram: the three misfits after the turnover].
Now we do a similarity transform that moves the two leftmost misfits to the right:
[diagram: the two leftmost misfits moved to the right by a similarity transformation].
Then we pass the two misfits back through the matrix. For clarity we do one at a time:
[diagram: the first misfit passed back through the matrix] and then [diagram: the second misfit passed back through the matrix].
We are now back to where we were at (3.2.1) but with the misfits moved down by one position. We are ready for another turnover, similarity transform, and so on. This procedure is certainly more economical than the “obvious” one that we first mentioned, as it passes only two core transformations through the matrix on each step instead of three. It is also more in line with our description of the double Francis step in Chapter 2, in which the similarity transformation on each step used two (not three) core transformations. To complete the description we take a close look at what happens when the misfits get to the bottom. Suppose we’ve chased the misfits to the bottom and
turned them over one last time:
[diagram: the misfits at the bottom after the final turnover].
We do the similarity transformation, but at this stage it may be clearer if we move one misfit at a time. First we move the leftmost misfit to the right. When we try to pass it back through the matrix, we find that it comes into a position where it can be fused with the bottom transformation in the descending sequence:
[diagram: the leftmost misfit, after passing back through the matrix, sits next to the bottom core of the descending sequence].
We do the fusion, resulting in
[diagram: the factored form after this fusion].
Then we move the next misfit to the right and pass it back through the matrix. We find that it comes into a position where it can be fused with the other remaining misfit:
[diagram: the next misfit, after passing back through the matrix, sits next to the remaining misfit].
We do this fusion, resulting in
[diagram: the factored form after this fusion].
We do one last similarity transformation with the one remaining misfit; we pass it through the upper triangular matrix and find that we can fuse it with the
bottom core transformation in the descending sequence:
[diagram: the last misfit, after passing through R, sits next to the bottom core of the descending sequence].
We do this last fusion, and the iteration is complete. We know that we have completed a double Francis iteration because we have done a unitary similarity transformation Aˆ = U ∗ AU to upper Hessenberg form, where (as you can easily check) U e1 is proportional to f (A)e1 = (A − ρ2 I)(A − ρ1 I)e1 . The entire procedure is done in real arithmetic.
Multishift Algorithms We resist the urge to write a section on the triple-shift Francis iteration. As we indicated in Chapter 2, we can do Francis iterations of any degree in principle, and iterations of all degrees can certainly be implemented using core transformations. A substantial generalization was presented in [64] and will be discussed in Chapter 6.
3.3 Convergence and Deflation

Immediately before each iteration, regardless of which version of Francis's algorithm we are using, we check to make sure the matrix has no zeros on the subdiagonal. If any a_{j+1,j} is zero, the problem can immediately be split into two smaller subproblems. In practice a_{j+1,j} is considered to be zero whenever it is sufficiently small. In the standard version of the algorithm the usual criterion is that if

    | a_{j+1,j} | ≤ u( | a_{j,j} | + | a_{j+1,j+1} | ),                (3.3.1)

then a_{j+1,j} will be set to zero. Here u denotes again the unit roundoff for floating-point arithmetic (about 10⁻¹⁶ for IEEE 754-2008 binary64). We could use the same criterion in our core-chasing version, but it would not be convenient, as we would have to compute the relevant entries a_{j+1,j}, a_{jj}, and a_{j+1,j+1} before checking the condition (3.3.1). Instead we use the much simpler criterion

    | s_j | ≤ u.                (3.3.2)

Here s_j is, of course, the off-diagonal entry in

    [ c_j  −s_j ]
    [ s_j   c_j ],

the active part of the factor Qj in the decomposition A = Q1 ··· Q_{n−1}R. It is easy to check that a_{j+1,j} = s_j r_{jj}, so criterion (3.3.2) is equivalent to | a_{j+1,j} | ≤ u | r_{jj} |, which is similar to (3.3.1).

In the single-shift case, the shifts are designed to make a_{n,n−1} → 0 rapidly (typically quadratically). What this means for our core-chasing formulation is that s_{n−1} → 0 rapidly, so we normally have | s_{n−1} | ≤ u after just a few steps. We
can then set sn−1 = 0 and count ann = cn−1 rnn as an eigenvalue. Subsequent iterations can be confined to the leading (n − 1) × (n − 1) submatrix. This is called deflation. The subsequent iterations will cause sn−2 → 0 rapidly, and it will soon be possible to deflate out another eigenvalue, and so on. While this is going on, most of the other sj will tend slowly toward zero. Sometimes one will get small enough that it can be set to zero, thus splitting the eigenvalue problem into two smaller pieces. The double-shift case is designed to make sn−2 → 0 rapidly. Once | sn−2 | ≤ u, we can set it to zero and extract a pair of eigenvalues from an−1,n−1 an−1,n . an,n−1 an,n This is a double deflation. It can also sometimes happen that sn−1 → 0 rapidly, allowing the deflation of a single eigenvalue. This can occur if one of the two shifts approximates an eigenvalue much better than the other one does. Once again it will happen that many of the other sj tend slowly toward zero.
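In code, the deflation test (3.3.2) is just a comparison of | s_j | with the unit roundoff. The sketch below is ours and assumes the sines s_1, . . . , s_{n−1} of the core transformations are stored in a vector s; the numerical values shown are purely illustrative.

    s = [0.83; 0.41; 3.0e-17; 0.66; 2.0e-20];   % hypothetical sines s_1, ..., s_5
    u = eps/2;                                  % unit roundoff 2^(-53)
    j = find(abs(s) <= u);                      % positions where (3.3.2) permits deflation
    s(j) = 0;                                   % deflate: the problem splits at these j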
Backward Stability of Deflation

Let B be the matrix obtained from A = Q1 ··· Q_{n−1}R by setting s_j = 0. Then B = Q1 ··· Q̌_j ··· Q_{n−1}R, where Q̌_j has active part

    [ c_j      ]
    [      c_j ].

Then A − B = Q1 ··· (Q_j − Q̌_j) ··· Q_{n−1}R. The active part of Q_j − Q̌_j is

    [        −s_j ]
    [ s_j         ],

so ‖Q_j − Q̌_j‖ = | s_j | (we continue with our convention, established in Chapter 1, that all norms are 2-norms, i.e., Euclidean norms for vectors and spectral norms for matrices). Thus

    ‖A − B‖ ≤ | s_j | ‖R‖ = | s_j | ‖A‖.                (3.3.3)

Therefore, if we use criterion (3.3.2), a deflation makes an error in A that is bounded above by u‖A‖. This implies normwise backward stability for the deflation. Clearly the same can be said for the criterion (3.3.1), and the argument is even simpler, but there is one sense in which (3.3.2) is better. We explain this now.

Let λ be a simple eigenvalue of A with right and left eigenvectors x and yᵀ, respectively: Ax = λx and yᵀA = λyᵀ. Define

    κ(λ) = κ(λ, A) = ‖x‖ ‖y‖ / | yᵀx |,                (3.3.4)

and note that this positive quantity is independent of rescalings of x and y and hence well defined. Let C be a (sufficiently) small perturbation of A, and let E = A − C. By continuity of eigenvalues, C has an eigenvalue μ that is close
to λ. The following well-known bound is proved in [73, Section 7.1] and many other places:

    | λ − μ | ≤ κ(λ)‖E‖ + O(‖E‖²).                (3.3.5)

This bound is sharp, so it shows that κ(λ) is a condition number for the eigenvalue λ. We can apply (3.3.5) directly to our deflation scenario.

Theorem 3.3.1. Let λ be a simple eigenvalue of A = Q1 ··· Q_{n−1}R, and let B be the matrix obtained from A by setting s_j to zero, where (3.3.2) holds. Let μ be the eigenvalue of B that is closest to λ. Then

    | λ − μ | ≤ u‖A‖κ(λ) + O(u²).

Proof. Letting E = A − B we have ‖E‖ ≤ u‖A‖ by (3.3.3) and (3.3.2). We can now combine this with (3.3.5) to get the result.

Of course essentially the same result is obtained if the criterion (3.3.1) is used instead of (3.3.2), so there seems to be no significant difference so far. But now comes a small surprise. If criterion (3.3.2) is used, the absolute bound in Theorem 3.3.1 can be replaced by a relative bound, provided we make the additional assumption that A is nonsingular. This was proved in [54] using ideas from Eisenstat and Ipsen [34].

Theorem 3.3.2. Under the assumptions of Theorem 3.3.1 and the additional assumption that A is nonsingular,

    | λ − μ | ≤ u | λ | κ(λ) + O(u²).

Proof. Write A = A1A2, where A1 = Q1 ··· Q_{j−1} and A2 = Qj ··· Q_{n−1}R, and note that A1 is unitary and A2 is nonsingular. Let v be a right eigenvector of B corresponding to the eigenvalue μ. A few elementary manipulations transform the equation Bv = μv into Av − Ev = μv (where E = A − B), then

    A2v − A1⁻¹EA2⁻¹A2v = μA1⁻¹A2⁻¹A2v,

and finally

    (μA1⁻¹A2⁻¹ + A1⁻¹EA2⁻¹)A2v = A2v.                (3.3.6)

Let Ã = μA1⁻¹A2⁻¹, Ẽ = −A1⁻¹EA2⁻¹, B̃ = Ã − Ẽ, and ṽ = A2v. Then (3.3.6) takes the much simpler form

    (Ã − Ẽ)ṽ = B̃ṽ = ṽ,

showing that 1 is an eigenvalue of B̃.

Notice that A1ÃA1⁻¹ = μA⁻¹, so Ã is unitarily similar to μA⁻¹. Corresponding to the eigenvalue λ of A, μA⁻¹ has the eigenvalue μ/λ with the same eigenvectors. Therefore the condition number is the same: κ(μ/λ, μA⁻¹) = κ(λ, A). The eigenvectors of Ã corresponding to μ/λ are not the same; they are A1⁻¹x and yᵀA1. But A1 is unitary, so the condition number is unchanged: κ(μ/λ, Ã) = κ(λ, A).

B̃ is a slight perturbation of Ã, so it has an eigenvalue that approximates μ/λ well, and that eigenvalue is in fact 1. If we now apply (3.3.5) to Ã and
Ẽ = Ã − B̃, we get

    | μ/λ − 1 | ≤ κ(μ/λ, Ã)‖Ẽ‖ + O(‖Ẽ‖²).                (3.3.7)

We already know that κ(μ/λ, Ã) = κ(λ, A). Now consider ‖Ẽ‖. Recalling the definitions we see right away that Ẽ = −A1⁻¹(A − B)A2⁻¹ = −(Q_j − Q̌_j)Q_j⁻¹, so ‖Ẽ‖ = ‖Q_j − Q̌_j‖ = | s_j | ≤ u. Substituting this back into (3.3.7), we get

    | (μ − λ)/λ | ≤ κ(λ, A) u + O(u²),

which is the desired result.

Theorem 3.3.2 can make a big difference in some cases, as the following simple example shows.

Example 3.3.3. Let ε be a small number satisfying 0 < ε < u, and let

    A = [ 1  2 ]
        [ ε  ε ].

The sum of its eigenvalues is 1 + ε and the product is −ε, so there is one positive and one negative eigenvalue. A simple calculation shows that these are λ1 = 1 + 2ε − O(ε²) and λ2 = −ε + O(ε²). One can also check that they are well conditioned. If we apply deflation criterion (3.3.1) to this matrix, it deflates to

    B = [ 1  2 ]
        [ 0  ε ],

which has eigenvalues μ1 = 1 and μ2 = ε. We see that μ1 has high relative accuracy, but μ2 does not; it even has the wrong sign. This cannot happen if we use the deflation criterion (3.3.2), as Theorem 3.3.2 guarantees. To apply (3.3.2) to A, we need the QR decomposition, which is, to first order,

    A = QR ≈ [ 1  −ε ] [ 1   2 ]
             [ ε   1 ] [ 0  −ε ].

According to (3.3.2), we can deflate this problem by setting the ε to zero in Q, yielding

    B = [ 1   2 ]
        [ 0  −ε ],

which has eigenvalues μ1 = 1 and μ2 = −ε, both of which have high relative accuracy.

As a variant on this example, consider the matrix

    Ã = [ ε  2 ]
        [ ε  1 ].

Here again criterion (3.3.1) allows a deflation to give computed eigenvalues μ1 = 1 and μ2 = ε as before. In this case (3.3.2) does not allow a deflation, as

    Ã = Q̃R̃ = (1/√2) [ 1  −1 ] · (1/√2) [ 2ε   3 ]
                     [ 1   1 ]          [  0  −1 ].

The entry s1 = 1/√2 in Q̃ is much too large for a deflation.
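Example 3.3.3 can be reproduced numerically. The following MATLAB sketch is ours; it uses MATLAB's qr in place of the rotator-based factorization, but the effect of the two deflation criteria is the same.

    ep = 1e-20;                 % 0 < ep < u
    A  = [1 2; ep ep];          % true eigenvalues are about 1 + 2*ep and -ep
    B1 = [1 2; 0 ep];           % deflation by criterion (3.3.1)
    eig(B1)                     % returns 1 and ep: wrong sign for the small eigenvalue
    [Q, R] = qr(A);
    Q(2,1) = 0;  Q(1,2) = 0;    % deflation by criterion (3.3.2): zero the "s" part of Q
    B2 = Q * R;
    eig(B2)                     % returns 1 and -ep: correct sign, high relative accuracy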
Theorem 3.3.2 is a pleasing result, and so is Example 3.3.3, but the reader is cautioned not to draw an overly optimistic conclusion. Theorem 3.3.2 applies only to the roundoff error that takes place during the deflation. This is just one of many roundoff errors that occur in the course of the Francis iterations, and most of those errors do not yield such a positive result. If one wants to compare the computed eigenvalues with the eigenvalues of the original matrix, one gets a result that looks more like Theorem 3.3.1 than 3.3.2. See the discussion at the end of Section 3.5.
3.4 The Singular Case Up to now we have made the convenient assumption that A is nonsingular. If A = Q1 · · · Qn−1 R is exactly singular, then so is R, so rii = 0 for some i. The reader can easily check that if rii = 0 for i < n, then ai+1,i = 0, so A is not properly upper Hessenberg, and the theory from Chapter 2 does not apply. The reader can check further that a Francis core chase does indeed break down in this case. The zero on the main diagonal of R causes the misfit to collapse to the identity matrix at some point, and the iteration dies. Clearly we need to take some special action in this case.
A Zero at the Bottom

Consider first the case rnn = 0. We have A = QR, where Q = Q1 · · · Qn−1 is a descending sequence of core transformations and R is upper triangular with a zero at the bottom, i.e., rnn = 0. Do a similarity transformation that moves the entire matrix Q to the right. Then pass the core transformations back through R, and notice that when Qn−1 is multiplied into R, it acts on columns n − 1 and n and does not create a bulge. Thus nothing comes out on the left. Or, if you prefer, we can say that Q̂n−1 = I comes out on the left. Now we can deflate out the zero eigenvalue.

It is easy to relate what we have done here to established theory. It is well known [72, 73] that if there is a zero eigenvalue, one step of the basic QR algorithm with zero shift (A = QR, RQ = Â) will extract it. This is exactly what we have
done here. One can equally well check that if one does a Francis core chase with single shift ρ = 0 (as in Section 3.1), this is exactly what results.
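For readers who want to see this in action, here is a minimal MATLAB sketch (ours, using the dense built-in qr rather than the core-chasing machinery) of the classical fact just cited: one explicitly shifted QR step with shift zero on a singular, properly upper Hessenberg matrix deflates the zero eigenvalue at the bottom.

    % One zero-shift QR step on a singular upper Hessenberg matrix.
    n = 6;
    A = hess(randn(n));                      % random properly upper Hessenberg matrix
    A(:, n) = A(:, 1:n-1) * randn(n-1, 1);   % overwrite the last column to make A singular
    [Q, R] = qr(A);                          % r(n,n) is zero up to roundoff
    A1 = R * Q;                              % basic QR step with zero shift
    disp(abs([A1(n, n-1)  A1(n, n)]))        % both tiny: the zero eigenvalue has been deflated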
A Zero in the Middle

Now consider the case rii = 0 with i < n. We have A = QR with Q = Q1 · · · Qn−1 and a zero in position (i, i) of R. Pass Qi+1, . . . , Qn−1 from left to right through R to get them out of the way. Since these do not touch row or column i, the zero at rii is preserved. Now the way is clear to multiply Qi into R without creating a bulge. This gets rid of Qi. Now the core transformations that were passed through R can be returned to their initial positions either by passing them back through R or by doing a similarity transformation. Either way the result is a factorization Q1 · · · Qi−1 Qi+1 · · · Qn−1 R from which the core transformation Qi is absent.
The eigenvalue problem has been decoupled into two smaller eigenvalue problems, one of size i × i at the top and one of size (n − i) × (n − i) at the bottom. The upper problem has a zero eigenvalue, which can be deflated immediately by doing a similarity transformation with Q1 · · · Qi−1 and passing those core
transformations back through R as explained earlier.
We now have an eigenvalue problem of size (i − 1) × (i − 1) at the top, a deflated zero eigenvalue in position (i, i), and an eigenvalue problem of size (n−i)×(n−i) at the bottom.
3.5 Backward Stability of Francis's Algorithm

We anticipated this result in Section 1.5, and we have done some additional preparation in Section 3.3.

Theorem 3.5.1. The single- and double-shift Francis algorithms, implemented as described in this chapter, are normwise backward stable. If A is the starting matrix and Â is the final matrix, then Â = U∗(A + δA)U, where ‖δA‖ ≲ u ‖A‖.

Proof. The Hessenberg matrix A is stored as a product QR. We remark that the initial step of computing Q and R is backward stable. The unitary matrix U is the product of all of the core transformations Ui that took part in similarity transformations. In the algorithm (single- or double-shift) each such Ui is immediately passed through R and ejected as a new core transformation Vi. That Vi is (usually) then shifted through the descending sequence Q to form a new Ui+1, which is used in the next similarity transformation. An exception to this comes at the end of the iteration, when Vi is absorbed into Qi by fusion. Let V denote the product of all of the core transformations Vi. Then Â = Q̂R̂, where (in exact arithmetic) Q̂ = U∗QV and R̂ = V∗RU. We can now apply Theorems 1.5.1 and 1.5.2 to deduce that in floating-point arithmetic Q̂ = U∗(Q + δQ)V and R̂ = V∗(R + δR)U, where ‖δQ‖ ≲ u and ‖δR‖ ≲ u ‖R‖ = u ‖A‖. Actually Theorem 1.5.1 is not directly applicable as stated because it does not take the fusions or the deflations into account. However, the fusions are just matrix-matrix multiplications, which are backward stable; the tiny backward errors incurred can be absorbed into δQ. The same is true for the deflations. Thus Â = Q̂R̂ = U∗(Q + δQ)V V∗(R + δR)U = U∗(A + δA)U, where δA ≐ δQ R + Q δR. We deduce that ‖δA‖ ≲ u ‖A‖.

Theorem 3.5.1 says that the final matrix is unitarily similar to a matrix A + δA that is close to the original matrix A. We cannot immediately deduce from this that the eigenvalues of Â are close to those of A because we have to take condition numbers, defined in (3.3.4), into account. Let λ be a simple eigenvalue of A that is reasonably isolated from the other eigenvalues. Then, by continuity, A + δA (and hence also Â) will have a unique eigenvalue λ + δλ that
is close to λ. Applying (3.3.5) and Theorem 3.5.1, we get | δλ | ≲ u κ(λ) ‖A‖, where κ(λ) is the condition number. Thus we can say that simple eigenvalues that are well conditioned are perturbed only slightly by Francis's algorithm.
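This kind of backward stability is easy to observe experimentally. The following MATLAB sketch is ours; it uses the built-in schur (LAPACK's Francis-type code) as a stand-in for the algorithms of this chapter and measures the normwise residual of the computed Schur decomposition, which is of order u.

    % Observing normwise backward stability of a Francis-type eigenvalue solver.
    n = 200;
    A = hess(randn(n) + 1i*randn(n));   % complex upper Hessenberg test matrix
    [U, T] = schur(A, 'complex');       % A + dA = U*T*U', with ||dA|| = O(u*||A||)
    backward_error = norm(U*T*U' - A) / norm(A);
    fprintf('relative backward error = %.2e  (u = %.2e)\n', backward_error, eps/2)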
3.6 In Pursuit of Efficient Cache Use and Parallelism

Computers have hierarchical memories. Typically there is a large main memory and a much smaller cache. In order to operate on data, the data must be in cache, so we try to organize our computations so that everything that we need is already in cache. If our entire matrix A fits into cache, that's great! If not, we will only be able to store a portion of A in cache at any given time. Whenever we need a part of A that's not in cache, we must move that part in, also moving some other part out. This is called a cache miss, and it entails a substantial delay. Thus we want to design algorithms that minimize cache misses. If the matrix A is fairly large, it will be far too big to fit into the computer's cache, and cache misses will occur frequently during each iteration of Francis's algorithm, causing a severe degradation of performance. A significant improvement, due to Braman, Byers, and Mathias [19], was to rearrange the algorithm so that it chases a long chain of bulges at once. This allows a reorganization of the arithmetic so that cache misses are minimized. We can do something similar with core-chasing algorithms, and that is what this section is about. This idea can be carried further. Instead of chasing one long chain of bulges, one can chase several long chains at once, opening up opportunities to parallelize the algorithm [46]. Another useful technique worth mentioning is aggressive early deflation [20]. We will not get into this topic, except to assert that it could be adapted to our current scenario.
Single-Shift Algorithm

Suppose we want to chase m misfits at once. (We are going to do m Francis iterations of degree one, not a single iteration of degree m.) We use an auxiliary routine to compute m shifts; for example, we can compute the eigenvalues of the lower right-hand m × m submatrix and use them as shifts. Each shift can be used to launch one misfit. We start the first misfit and chase it through the matrix once:
.
To make room for a second misfit we must do a similarity transformation and
pass the first misfit through one more time:
.
Moving the misfit to the left for clarity, we now have the situation
,
and there is room to start a second misfit and pass it through the matrix once:
.
To make room for a third misfit, we must move the existing misfits further down. First we do a similarity transformation
,
and then we pass both misfits back through:
.
Now we have the situation
,
and there is room to introduce a third misfit and pass it through the matrix:
.
(3.6.1)
If we wish to add a fourth misfit, we must move these three downward. We continue this way until we have introduced all m misfits. For the purpose of drawing pictures, let us suppose that m = 3, so that we are done introducing misfits at (3.6.1). We are depicting the case m = 3 in a matrix of size n = 7, but the reader should imagine something like m = 50 and n = 5000. Once all misfits have been introduced, the first chasing step begins with a similarity transformation that moves the chain of misfits to the right:
.
Then the misfits are chased back through R and Q:
.
(3.6.2)
We then do another similarity transformation and pass the misfits through again, and we keep doing this until we get to the bottom. However, we will do it by stages. At any given time we will be working within a small window of rows/columns of size s. A good window size is s ≈ 2m, so think of s = 100, for example. We start with the misfits at the top of a window and push them to the bottom. Then we dissolve that window and form a new window with the misfits at the top. Thus the windows overlap significantly.
The bulk of the storage space is occupied by the upper triangular matrix R, and this is where most of the flops take place. Each time a misfit is passed through R, O(n) flops are executed. This is the source of the cache misses, so this is where our focus will be.
We have depicted the first two windows of rows/columns in an upper triangular matrix. First the red window is formed with the misfits at the top. Once the action has been chased to the bottom of the red window, it is dissolved, and the green window is formed. Then we chase from top to bottom of the green window, we dissolve it, and we form a new window. Continuing in this way, we move from one window to the next, until we get to the bottom. Now, what is the point of these windows? In (3.6.3) a typical window is depicted in R:
        [ *    R12   *   ]
    R = [      R22   R23 ] ,          (3.6.3)
        [            *   ]

where R22 is the s × s diagonal block determined by the current window and the asterisks denote blocks that are not touched during the chase through this window.
As the misfits are chased through this part of the matrix, updates will take place in submatrices R22 , R12 , and R23 , but we do not have to do the updates all at the same time. The small (s × s) submatrix R22 can be kept in cache, and updates are done on it immediately as the core chase proceeds. This is essential to the functioning of the algorithm. Let’s say the total transformation that takes place on R22 as the misfits are chased from top to bottom of the window is R22 → V ∗ R22 U , where U and V are products of incoming and outgoing core transformations, respectively. The blue transformations in (3.6.2) go into U , and the green ones go into V . We save these core transformations so that we can apply them to the larger submatrices R12 and R23 all at once afterwards. The appropriate transformations are R12 → R12 U and R23 → V ∗ R23 . There are at least two ways to organize this computation. The core transformations that constitute U and V can be stored separately, or they can be physically accumulated into s × s unitary matrices U and V . The latter option is easier to think about, so let’s consider it first. We will call this the lumped method. For execution of the update R23 → V ∗ R23 , for example, the short, fat block R23 can be partitioned further into roughly square blocks
R23 = B1 B2 · · · Bk .
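As a sketch of the lumped update described next, each block Bi is brought into cache and updated by one s × s matrix–matrix multiply. The MATLAB fragment below is our own illustration; the sizes, and the assumption that s divides the width of R23 evenly, are hypothetical.

    % Lumped update R23 <- V'*R23, performed block by block as in the text.
    s = 100;                              % window size
    [V, ~] = qr(randn(s));                % stand-in for the accumulated s-by-s unitary matrix V
    R23 = randn(s, 1200);                 % short, fat block to the right of the window
    k = size(R23, 2) / s;                 % number of square blocks (assumed to divide evenly)
    for i = 1:k
        cols = (i-1)*s + (1:s);           % bring block Bi into cache ...
        R23(:, cols) = V' * R23(:, cols); % ... and update it with one s x s matrix multiply
    end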
The Bi can be brought into cache one by one for the updates Bi → V ∗ Bi . The update of the tall skinny block R12 can be done analogously. In the ideal situation Bi is an exactly square s × s block. Then the update Bi → V ∗ Bi is an s × s matrix-matrix multiply. This means that 2s3 flops are done on 2s2 data items. If s is large enough, the time required to move data in and out of cache will be completely masked by the flops. This tells us that we should make s as large as we can, subject to the constraint that we must be able to fit several s × s blocks into cache at once. We need room for R22 , U , V , and a couple of blocks Bi and Bi+1 , say. We also need to fit in a few core transformations, namely the m misfits and the part of the matrix Q that is in the window. This is O(s) data, so it is relatively insignificant. The reason for requiring room for two blocks Bi and Bi+1 is that this provides enough room to move blocks into and out of cache and do flops simultaneously. We still need to consider the question of the relationship between the number of misfits m and the window size s. Earlier we suggested that s ≈ 2m is a good choice. In order to justify this claim we now consider the second viewpoint, in which the core transformations that constitute U and V are stored separately rather than accumulated. We call this the nonlumped method. On each step an ascending sequence of m misfits (blue in (3.6.2)) is added to U . After two steps we have a pair of ascending sequences
.
Suppose the window size is s = 8. Then five steps will bring us to the bottom of the window, so the total matrix U associated with this window consists of five ascending sequences of length three,
,
for a total of 15 core transformations. Suppose instead of m = 3 we had m = 4. Then four steps would bring us to the bottom of a window of size 8, and U would be a product of four ascending sequences of length four,
,
for a total of 16 core transformations. The same considerations hold for V (green in (3.6.2)). In general, to push a chain of m misfits through a window of size s, we take s − m steps. Thus U and V each consist of s − m ascending sequences of length m, which constitute a parallelogram of m(s − m) core transformations. For a given s, chosen based on the cache size, the product m(s − m) is maximized when m = s/2. This is the basis for our claim that s ≈ 2m is a good choice. Let us now assume we have s = 2m, so that the number of core transformations constituting U and V is s2 /4 each. We do updates of the type Bi → V ∗ Bi just as before, but now the application of V ∗ means multiplying Bi by a sequence of s2 /4 core transformations. Assuming Bi is s × s, we have 6s flops per core transformation or 1.5s3 flops total. Again we are performing O(s3 ) flops on O(s2 ) data, so the flop count will mask the cost of moving data into and out of cache if s is large enough.
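The count m(s − m) is easy to tabulate; the following small MATLAB check (ours, for a hypothetical window size s = 100) shows the maximum at m = s/2.

    % Core transformations per window in the nonlumped method: m*(s-m), maximized at m = s/2.
    s = 100;
    m = 10:10:90;
    disp([m; m.*(s - m)])    % the count peaks at m = 50, i.e., at s = 2m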
Double-Shift Algorithm Now, instead of m single steps, we consider m/2 double steps, where m is even. Recall that each double step involves three misfits, but only two of them participate in each similarity transformation. In (3.6.4) we have depicted the case of two double steps, i.e., m = 4:
.
(3.6.4)
We are going to chase these misfits forward, but before we do so, we execute m/2 turnovers to get the configuration
.
Then we do a similarity with an ascending sequence of m core transformations, just as in the single shift case:
.
Once we have passed this ascending sequence back through the matrix, we will have to do another m/2 turnovers before the next similarity transformation. Aside from this, the algorithm is just the same as the single-shift algorithm. The discussion of how to accumulate the similarity transformation is exactly the same as in the single-shift case already discussed above, except that m is now constrained to be an even number.
Assessment Of the two methods of handling U and V , which is better? In the nonlumped method the flop count per iteration is exactly the same as for the original algorithm, which is aesthetically satisfying. In the lumped method, by contrast, the flop count goes up. This might, nevertheless, be the better way, because a great deal of effort has been spent on making the matrix-matrix multiply operation fast, and we can take advantage of that.14 On the other hand, if some effort were put into making the nonlumped method fast, perhaps it could be made as fast as or faster than the lumped method. Any such effort would have to take careful account of architectural details that we have ignored here. We have considered a very simple hierarchy consisting only of main memory and cache. In real machines the cache is itself hierarchical, consisting of several levels of increasingly fast and small caches. This would have to be taken into account, not to mention the fact that today’s processors have multiple cores (where here we are using the word core to mean processing unit ). We conclude by mentioning an advantage of the core-chasing approach over the bulge-chasing approach. The pictures in this section make it clear that the misfits are packed together very tightly; it would be impossible to pack them tighter. A chain of m misfits occupies a mere m + 1 rows. Contrast this with the case when bulges are chased. If we chase m bulges of degree one, each bulge occupies two rows, so m bulges in a chain will occupy 2m rows, making the bulge-chasing approach less efficient. It turns out that there is a way to pack the bulges more tightly [51], but it is not at all obvious and requires some ingenuity. In the core-chasing approach we get optimally packed misfits for free, without even having to think about it.
14 We refer the reader to the Intel Math Kernel Library: https://software.intel.com/en-us/intel-mkl
Chapter 4
Some Special Structures
4.1 Unitary Matrices At the Householder Symposium in Spa, Belgium, in 2014, Alan Edelman asked us what is available in the way of fast codes for solving the unitary eigenvalue problem. He was interested in computing the eigenvalues of large numbers of unitary matrices from a circular ensemble [52]. These matrices are already in upper Hessenberg form and are presented as a product of n − 1 core transformations, so they exactly fit into our theory. At that time we were not thinking about the unitary problem, but we had just developed a successful method for companion matrices, which are unitary-plus-rank-one. On searching the web we found that, although many papers on the unitary eigenvalue problem have been written,15 starting with the work of Gragg [42], almost no software was available. We found only one item, the divide-and-conquer code of Ammar, Reichel, and Sorensen [2, 3]. Because of the paucity of available codes and the complete lack of any implementations of Francis’s algorithm, we decided to produce some code of our own. This was easy; we just took our unitary-plus-rank-one code and removed the hard parts, resulting in [8]. Unitary upper Hessenberg matrices arise in several applications. Killip and Nenciu [52] describe an ensemble of n × n random unitary Hessenberg matrices whose eigenvalues are distributed according to the Gibbs distribution for n particles of the Coulomb gas on the unit circle. Gauss–Szeg˝ o quadrature formulas [41, 43, 68] are formulas of maximal degree for estimating integrals with respect to measures with support on the unit circle. The sample points are the eigenvalues of a certain unitary matrix, and the weights are the squares of the absolute values of the first components of the eigenvectors. Pisarenko frequency estimates [56] can be computed by solving a unitary eigenvalue problem, as described by Cybenko [25]. If A is unitary, the factorization A = Q1 · · · Qn−1 R simplifies to A = Q1 . . . Qn−1 because R = I. The single- and double-shift algorithms described in Sections 3.1 15 Works on the unitary eigenvalue problem include [1, 2, 3, 21, 22, 26, 28, 30, 38, 42, 44, 45, 47, 57, 58, 65, 66].
and 3.2 can be carried out just as shown there, except that the operations that pass core transformations through R can be ignored. This implies huge savings in work, as we are leaving out the expensive part of the algorithm. The computational cost of a single-shift iteration is about n turnovers at O(1) flops per turnover for O(n) flops per iteration instead of O(n2 ). For a real, orthogonal double-shift iteration we require about 3n real turnovers costing O(n) flops. In either case, making the standard assumption that it takes O(n) iterations to find all n eigenvalues, we have a cost of O(n2 ) flops to find all eigenvalues, instead of the O(n3 ) that are required in general. It is also important that only n − 1 core transformations (and perhaps one diagonal matrix) are needed to store the entire matrix, so the memory requirement is O(n) instead of O(n2 ). This implies that even for quite large n, the entire problem can be stored in the fastest level of cache, so there is no problem with cache misses. For details on performance see [8]. We have a bit more to say about the unitary problem in Section 6.6.
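To make the O(n) storage claim concrete, here is a small MATLAB sketch (ours, not the code of [8]) that represents a random unitary upper Hessenberg matrix by n − 1 rotation angles, forms the dense product only for checking, and confirms that its eigenvalues lie on the unit circle.

    % A unitary upper Hessenberg matrix stored as n-1 core transformations.
    n = 8;
    theta = 2*pi*rand(n-1, 1);          % one rotation angle per core transformation
    H = eye(n);
    for i = 1:n-1
        G = eye(n);
        G(i:i+1, i:i+1) = [cos(theta(i)) -sin(theta(i)); sin(theta(i)) cos(theta(i))];
        H = H * G;                      % H = Q1*Q2*...*Q(n-1), unitary and upper Hessenberg
    end
    lambda = eig(H);                    % eigenvalues lie on the unit circle
    disp(max(abs(abs(lambda) - 1)))     % ~ machine precision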
4.2 Unitary-Plus-Rank-One Matrices, Companion Matrices If a matrix has the form A = Q + B, where Q is unitary and B has rank one, we call A a unitary-plus-rank-one matrix. Notice that if we apply a unitary similarity transformation to A, the result U ∗ AU = U ∗ QU + U ∗ BU is also unitary-plus-rank-one. Thus, in particular, if we carry out an iteration of Francis’s algorithm on an upper Hessenberg unitary-plus-rank-one matrix, all of the structure is preserved. A simple example of a matrix of this type is a companion matrix.
Companion Matrices

Given a monic polynomial p(λ) = λⁿ + an−1 λⁿ⁻¹ + · · · + a1 λ + a0 of degree n, the companion matrix of p is the n × n matrix

        [ 0              −a0   ]
    A = [ 1   0          −a1   ]          (4.2.1)
        [     ⋱   ⋱       ⋮    ]
        [         1      −an−1 ] .

It is easy to show that the characteristic polynomial of A is p; hence the eigenvalues of A are the roots of the equation p(λ) = 0. One popular way to compute the zeros of a polynomial is to form the associated companion matrix and compute its eigenvalues. Since A is upper Hessenberg, we can apply Francis's algorithm directly to A to get the eigenvalues. If we do this without exploiting the special structure of the matrix, beyond the fact that it is upper Hessenberg, it will cost us O(n³) flops and O(n²) storage space. This is what the MATLAB roots command does.
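As a quick illustration (ours, with hypothetical coefficients), the following MATLAB fragment builds the companion matrix (4.2.1) and checks its eigenvalues against the roots command.

    % Roots of the monic polynomial with coefficients a = [a0 a1 ... a_{n-1}]
    % obtained as eigenvalues of the companion matrix (4.2.1).
    a = [1 5 -2 3];                          % hypothetical example: a0 = 1, a1 = 5, a2 = -2, a3 = 3
    n = numel(a);
    A = diag(ones(n-1, 1), -1);              % ones on the subdiagonal
    A(:, n) = -a(:);                         % last column is -[a0; a1; ...; a_{n-1}]
    mu = eig(A);                             % companion eigenvalues = roots of p
    disp(sort(mu))
    disp(sort(roots([1 fliplr(a)])))         % MATLAB's roots command, for comparison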
Since the initial data for this problem consists of the n coefficients an−1, . . . , a0, it is natural to ask whether we can somehow cut the costs to O(n²) flops and O(n) storage, as we did in the unitary case. It turns out that we can. Letting c denote any complex number with modulus one, we observe that A is the sum of the unitary matrix that has ones on the subdiagonal and c in position (1, n) and the rank-one matrix whose last column is (−c − a0, −a1, . . . , −an−1)ᵀ and whose other columns are zero,
so A is unitary-plus-rank-one. We are going to exploit this structure. Since we always store our matrices in QR-decomposed form, let us take a look at the QR decomposition of the companion matrix. This turns out to be very easy, as it can be obtained simply by permuting rows: ⎡ ⎤ ⎤⎡ 1 1 −a1 ⎢ 1 ⎢ 0 ⎥ 1 −a2 ⎥ ⎢ ⎥ ⎥⎢ ⎢ ⎥ ⎥ ⎢ . .. . . .. .. ⎥ ⎢ .. A = QR = ⎢ (4.2.2) ⎥. . ⎢ ⎥ ⎥⎢ ⎣ ⎦ ⎦ ⎣ 1 0 1 −an−1 −a0 1 0 Of course we will write Q in factored form Q1 · · · Qn−1 . If we take our core transformations to be rotators, as we usually do, then each Qi will have 0 −1 1 0 as its active part. The careful reader will notice that we have to make a slight adjustment in the case of even n. This can be done by introducing an additional unitary diagonal factor D = diag{1, . . . , 1, −1} into the factorization, as discussed in Section 1.4. The entry −a0 will not necessarily be positive or even real. If we wish to require that all of the main-diagonal entries of R be real and positive, we can arrange this as well by making an adjustment in D. We will not further belabor this point. Since A is unitary-plus-rank-one, R must necessarily also have this structure, and this is in fact obvious, as R = I + zˇeTn , where zˇ =
    [ −a1   · · ·   −an−1   −1 − a0 ]ᵀ.
Submatrices of Unitary Hessenberg Matrices Before moving on, we mention one other situation in which unitary-plus-rankone matrices arise. Consider an m × m unitary upper Hessenberg matrix written as a product of core transformations H = Q1 Q2 · · · Qm−1 . Let A denote the n × n submatrix from the upper left-hand corner of H, where 1 ≤ n < m. A is clearly upper Hessenberg and almost, but not quite, unitary. To see this, consider the factored form H = (Q1 · · · Qn−1 )Qn (Qn+1 · · · Qm ). The factor Q1 . . . Qn−1 is an essentially n × n unitary Hessenberg matrix. That is, its active part is n × n and situated in the upper left-hand corner. This matrix is almost, but not
quite, equal to A. The factor Qn+1 · · · Qm−1 is an essentially (m − n) × (m − n) unitary upper Hessenberg matrix situated in the lower right-hand corner. The core transformation Qn connects them. Let
    [ cn   −sn ]
    [ sn    cn ]

denote the active part of Qn, as usual, and define Q̌n = diag{1, . . . , 1, cn} ∈ Cn×n. Then a straightforward computation shows that A = Q1 · · · Qn−1 Q̌n. Here we are redefining the core transformations Qi so that they are n × n instead of m × m. In fact a better name for Q̌n would be R, as this matrix is upper triangular, and A = Q1 · · · Qn−1 R is the QR decomposition in factored form. We note that R is unitary-plus-rank-one: R = I + zˇ eₙᵀ, where zˇ = (cn − 1)en. Thus A is unitary-plus-rank-one.
Compact Representation of R Previous works on companion and unitary-plus-rank-one matrices include [15, 16,17,24,27,33,60]. The representation presented below is from our papers [4,7]. The work that is closest to ours is that of Chandrasekaran et al. [24]. Like us, they store the matrix in QR-decomposed form, but their representation of R is different from ours. In both of the examples above, our unitary-plus-rank-one matrix has the form A = Q1 · · · Qn−1 R,
where R = I + zˇeTn .
(4.2.3)
The rank-one part is confined to the last column. The reader might like to prove that every unitary-plus-rank-one matrix is unitarily similar to an upper Hessenberg matrix in the form (4.2.3). The similarity transformation can be accomplished by a direct method in O(n3 ) flops. We now consider the problem of representing an upper triangular R = I + zˇeTn compactly using core transformations. R can be written as ⎡ ⎢ ⎢ R=⎢ ⎣
1 ..
z1 .. .
.
1 zn−1 zn
⎤ ⎥ ⎥ ⎥, ⎦
where zn = 1 + zˇn , and zi = zˇi for i < n. We begin by embedding R in a larger upper triangular matrix ⎡ ⎢ ⎢ ⎢ R=⎢ ⎢ ⎣
1 ..
z1 .. .
.
1 0 ···
0
zn−1 zn 0
⎤ 0 .. ⎥ . ⎥ ⎥ , 0 ⎥ ⎥ ⎦ 1 0
which has an extra zero row and a nearly zero column. (This is an important step, as we shall soon see.) The 1 in column n + 1 is inserted to preserve the
unitary-plus-rank-one structure: ⎡ 1 0 ⎢ .. . .. ⎢ . ⎢ R=⎢ 1 0 ⎢ ⎣ 0 0 · · · 0 −1
⎤ ⎡ 0 0 .. ⎥ ⎢ ⎢ . ⎥ ⎥ ⎢ + ⎥ 0 ⎥ ⎢ ⎢ 1 ⎦ ⎣ 0 0
..
z1 .. .
.
0 ···
zn−1 zn 1
0
⎤ 0 .. ⎥ . ⎥ ⎥ . 0 ⎥ ⎥ ⎦ 0 0
Thus we have R = Zn + z en T , where Zn is a core transformation and
z = z1 · · ·
zn
1
(4.2.4) T
.
(4.2.5)
T
. Thus R and z are extended versions We will also write z = z1 · · · zn of R and z. As a general rule, underlined quantities will be objects of size n + 1 that are extensions of objects of size n. The information about the rank-one part is encapsulated in the vector z . Our next move will be to “roll up” that vector using core transformations. Let Cn be a core transformation such that Cn z has a zero in position n + 1. Since zn+1 = 1 = 0, Cn must be nontrivial (nondiagonal), and the entry in position n of Cn z must be nonzero; in fact its magnitude must be at least 1. Now let Cn−1 be a core transformation such that Cn−1 Cn z has a zero in position n. Cn−1 is also nontrivial, and it deposits a nonzero entry in position n − 1 of Cn−1 Cn z . Continuing in this way, we generate Cn−2 , . . . , C1 , all nontrivial, such that C1 · · · Cn z = αe1 ,
where | α | = z 2 .
(4.2.6)
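The roll-up (4.2.6) can be carried out with ordinary Givens rotations. The following MATLAB sketch is ours; it forms dense (n + 1) × (n + 1) matrices purely for checking, whereas the actual codes keep only the 2 × 2 active parts. It constructs Cn, Cn−1, . . . , C1 from the bottom up.

    % "Rolling up" the (n+1)-vector z: find cores C1,...,Cn with C1*...*Cn*z = alpha*e1.
    zb = [randn(5, 1); 1];                   % stand-in for z = [z1; ...; zn; 1], here n = 5
    np1 = numel(zb);
    C = eye(np1);  v = zb;                   % C will accumulate C1*C2*...*Cn (dense, for checking)
    for i = np1-1:-1:1                       % Cn first (rows n, n+1), then Cn-1, ..., C1
        [G, v(i:i+1)] = planerot(v(i:i+1));  % 2x2 rotator zeroing the lower of the two entries
        Ci = eye(np1);  Ci(i:i+1, i:i+1) = G;
        C = Ci * C;                          % left-multiplying accumulates C1*...*Cn
    end
    alpha = v(1);                            % |alpha| = norm(zb)
    disp(norm(C*zb - alpha*eye(np1, 1)))     % ~ machine precision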
Pictorially we have
[diagram: the ascending sequence of core transformations C1, . . . , Cn applied to the (n + 1)-vector z yields α e1.]
The information about the rank-one part is now encoded in C1 , . . . , Cn . Let C = C1 C2 · · · Cn . Since it is a product of a descending sequence of nontrivial core transformations, C is a properly upper Hessenberg unitary matrix. Letting B = CZn and w = αen , we have R = C ∗ (B + e1 w T ).
(4.2.7)
Notice that B is also a unitary upper Hessenberg matrix, so it can be factored into a descending sequence of core transformations: B = B1 · · · Bn . In fact it is obvious that we can take Bi = Ci , for i = 1, . . . , n − 1, and Bn = Cn Zn . Expanding (4.2.7) we have R = Cn∗ · · · C1∗ (B1 · · · Bn + e1 w T ).
(4.2.8)
Pictorially
R=
···
+
,
where the dots denote the rank-one part. With this picture we are beginning to de-emphasize the rank-one part because, as we shall soon see, we will not need to store it explicitly; the information is encoded in the core transformations Bi and Ci . We are going to store R in the form (4.2.8). This consists of 2n core transformations (and a rank-one part that will not be stored), so the storage requirement is O(n). Our real object of interest is R, which is the upper left n×n submatrix of R. Let P denote the (n+1)×n prolongation matrix consisting of an identity matrix with one extra row of zeros appended at the bottom. Then R = P T RP , so, technically, our storage scheme is R = P T Cn∗ · · · C1∗ (B1 · · · Bn + e1 w T )P.
(4.2.9)
The symbol P is there just to make the dimensions come out right. Its presence costs us nothing in terms of storage space or flop count.
A Useful Extension In our discussion of matrix polynomials in Section 5.3 we will encounter unitaryplus-rank-one matrices of the form ⎡ ⎢ ⎢ ⎢ R=⎢ ⎢ ⎣
1 ..
⎤
z1 .. .
. 1
⎥ ⎥ ⎥ ⎥, ⎥ ⎦
zi−1 zi In−i
which are just like the matrices we have been studying, except that the spike might not be in the last column. We will show that matrices of this type can also be stored in the form (4.2.9). You can skip over this bit for now if you want to. Given such an R with i < n, add a zero row and a nearly zero column to R to make ⎡ ⎤ 0 1 z1 ⎢ .. .. ⎥ .. ⎢ . . . ⎥ ⎢ ⎥ ⎢ ⎥ 0 1 z i−1 ⎢ ⎥. R=⎢ ⎥ 1 z i ⎢ ⎥ ⎢ .. ⎥ ⎣ In−i . ⎦ 0 ··· 0 0 ··· 0
We have R = P T RP , as before. Moreover, R is unitary-plus-rank-one: ⎡ ⎤ ⎡ 0 1 0 z1 ⎢ ⎥ ⎢ .. . . . .. .. .. ⎥ ⎢ ⎢ . ⎢ ⎥ ⎢ ⎢ ⎥ ⎢ 0 1 0 z i−1 T ⎥+⎢ R = Z +zei = ⎢ ⎢ ⎢ 0 zi 1 ⎥ ⎢ ⎥ ⎢ ⎢ .. ⎥ ⎢ ⎣ In−i . ⎦ ⎣ 0n−i 0 ··· 1 ··· 0 0 · · · 0 −1 ··· where z=
z1
···
zi
0 ···
0 −1
T
⎤ 0 .. ⎥ . ⎥ ⎥ 0 ⎥ ⎥, 0 ⎥ ⎥ .. ⎥ . ⎦ 0
.
Let Cn , Cn−1 , . . . , C1 be core transformations such that C1 · · · Cn z = αe1 , where | α | = z 2 . Because of the zeros near the bottom of z , the core transformations Cn , . . . , Ci+1 are all of a very simple form, having active part F = [ 01 10 ].16 If we let Fj denote a core transformation with active part F , then Cj = Fj for j = i + 1, . . . , n. Let C = C1 · · · Cn . Then R = C ∗ (CZ + e1 w T ),
where
w T = αei T .
Let B = CZ = C1 · · · Ci Fi+1 · · · Fn Z. It is easy to check that ⎤ ⎡ 1 ⎥ ⎢ .. ⎥ ⎢ . ⎥ ⎢ ⎥ ⎢ 1 ⎥ ⎢ ⎢ 0 0 ··· 0 1 ⎥ Fi+1 Fi+2 · · · Fn Z = ⎢ ⎥ = Fi Fi+1 · · · Fn , ⎥ ⎢ 1 ⎥ ⎢ ⎥ ⎢ .. ⎦ ⎣ . 1 0 so B = C1 · · · Ci−1 Ci Fi Fi+1 · · · Fn = B1 B2 · · · Bn , where Bi = Ci Fi , Bj = Cj for j = 1, . . . , i − 1, and Bj = Fj = Cj for j = i + 1, . . . , n. Thus R = C ∗ (B + e1 w T ) = Cn∗ · · · C1∗ (B1 · · · Bn + e1 w T ), and R has exactly the form (4.2.9). Finally, we remark that all of the core transformations C1 , . . . , Cn are nontrivial.
Passing a Core Transformation through R Our algorithms for the unitary-plus-rank-one case consist precisely of the algorithms described in Chapter 3, except that the general R shown there is replaced 16 In
our codes the active part is the explanation easier.
0 1 0 −1 1 0 , but we have used the simpler 1 0 here to make
by the storage scheme (4.2.9). In order to justify this replacement we need to show several things. First, we need to show how to pass a core transformation through R when it is stored in the form (4.2.9). It is easier to think about R, ignoring the P and Pᵀ, which have no significant effect.17 Suppose we have RUi, where i < n, and we want to pass Ui through R, pulling out a core transformation Vi on the left:

    R Ui = C∗(B + e1 wᵀ)Ui = C∗(B Ui + e1 wᵀ Ui) = C∗(Wi+1 B̂ + e1 ŵᵀ),

where ŵᵀ = wᵀ Ui and B Ui = Wi+1 B̂. The latter transformation is a shift-through operation of type
.
Only the core transformations Bi and Bi+1 are altered in B; the turnover is Bi Bi+1 Ui = Wi+1 B̂i B̂i+1. Continuing the operation, we have

    C∗(Wi+1 B̂ + e1 ŵᵀ) = C∗ Wi+1 (B̂ + Wi+1⁻¹ e1 ŵᵀ) = Vi Ĉ∗(B̂ + e1 ŵᵀ) = Vi R̂,

where Vi Ĉ∗ = C∗ Wi+1, and we are using the fact that Wi+1⁻¹ e1 = e1. The transformation Vi Ĉ∗ = C∗ Wi+1 is a shift-through operation of type
.
The turnover is Ci+1∗ Ci∗ Wi+1 = Vi Ĉi+1∗ Ĉi∗. This completes the transformation. Pictorially, ignoring the rank-one part, we have
[diagram: the core transformation Ui enters the factored R = Cn∗ · · · C1∗ (B1 · · · Bn + · · ·) on the right; an intermediate core Wi+1 passes between the descending sequence B1 · · · Bn and the ascending sequence Cn∗ · · · C1∗, and Vi emerges on the left.]
Notice what happens in the case i = n − 1, when the core transformation is at 17 If H is an n × n core transformation (i < n), then P H = H P , where H is the (n + 1) × i i i i (n + 1) version of Hi .
the bottom of the matrix:
[diagram: the same operation with i = n − 1; the misfit Un−1 enters on the right, a core Wn with active part in rows and columns n and n + 1 passes between the sequences, and Vn−1 emerges on the left.]
Temporarily a core transformation Wn is created. This has its active part in rows and columns n and n + 1, and this is the reason why we had to add a row and a column to R. The cost of this operation is two turnovers, regardless of the dimension of the matrix. Thus this storage scheme allows us to pass a core transformation through R in O(1) work instead of O(n).
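For contrast, here is what passing a core transformation through a conventionally stored upper triangular R looks like; this is the O(n) operation that the compact scheme above replaces. The MATLAB sketch is ours, with planerot supplying the rotator that removes the bulge.

    % Passing a core transformation through a dense upper triangular R (the O(n) version).
    n = 7;  i = 3;
    R = triu(randn(n));
    theta = 2*pi*rand;
    Ui = eye(n);  Ui(i:i+1, i:i+1) = [cos(theta) -sin(theta); sin(theta) cos(theta)];
    T = R * Ui;                               % creates a single bulge at position (i+1, i)
    [G, ~] = planerot(T(i:i+1, i));           % rotator that annihilates the bulge from the left
    Rhat = T;  Rhat(i:i+1, :) = G * T(i:i+1, :);    % upper triangular again
    Vi = eye(n);  Vi(i:i+1, i:i+1) = G';      % so that R*Ui = Vi*Rhat
    disp([abs(Rhat(i+1, i))  norm(R*Ui - Vi*Rhat)]) % bulge gone; factorization holds to roundoff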
Justification of the Operation Earlier we emphasized that the core transformations C1 , . . . , Cn are all nontrivial to begin with. It is important to note that the nontriviality is preserved. This is a consequence of Theorem 1.3.3. Another property that holds initially is that the bottom row (row n + 1) of R is zero. It is important to note, and easy to show, that this feature is preserved. When we multiply R on the right by Ui with i < n, columns i and i + 1 of R are recombined, but this obviously leaves the zeros in the bottom row. At the end of the transformation we extract Vi on the left. This is equivalent to multiplying by Vi−1 on the left and effects a recombination of rows i and i + 1. Row n + 1 is left untouched. Thus the zeros in the bottom row of R remain zero. Yet another property that holds initially is that the matrix R given by (4.2.9) is upper triangular. If we pass a core transformation through R, it continues to have the form (4.2.9), but is it still upper triangular? This is an important question, and it is not obvious that the answer is yes. Now we are in a position to prove it. Theorem 4.2.1. Suppose R = C ∗ (B + e1 w T ) = Cn∗ · · · C1∗ (B1 · · · Bn + e1 w T ), where the core transformations C1 , . . . , Cn are all nontrivial and the bottom row of R is zero. Then R is upper triangular, and so is the matrix R given by (4.2.9). Proof. We have
    R = [ R   × ]
        [ 0   0 ] ,          (4.2.10)
where × denotes some vector. We just need to show that R is upper triangular. Since R = C∗(B + e1 wᵀ), we have H = C R, where H = B + e1 wᵀ. We rewrite this equation in the partitioned form

    [ ×   × ]   [ ×   × ] [ R   × ]
    [ H̃   × ] = [ C̃   × ] [ 0   0 ] ,          (4.2.11)

where H̃ and C̃ are both n × n, and ×'s represent quantities that are not of immediate interest. The fact that the core transformations Ci are all nontrivial
implies that C is a proper upper Hessenberg matrix, which implies that C̃ is upper triangular and nonsingular. Moreover, H̃ is upper triangular because H is upper Hessenberg. We note further that H̃ = C̃R, which implies

    R = C̃⁻¹ H̃.
(4.2.12)
Since H̃ and C̃⁻¹ are upper triangular, R must also be upper triangular.

Now let us turn our attention to the rank-one part. We note that the vector w gets modified during the transformation R Ui = Vi R̂: in fact ŵᵀ = wᵀ Ui, but the general form e1 wᵀ is preserved. Now we are ready to prove the theorem that shows that the information about w is encoded in the core transformations.

Theorem 4.2.2. The vector wᵀ = −ρ⁻¹ eᵀn+1 C∗ B, where ρ = eᵀn+1 C∗ e1. The nonzero scalar ρ is the product of the subdiagonal entries (of the active parts) of C1∗, . . . , Cn∗.

Proof. The last row of R is zero, so

    0 = eᵀn+1 R = eᵀn+1 C∗(B + e1 wᵀ) = eᵀn+1 C∗ B + eᵀn+1 C∗ e1 wᵀ.

This can be solved for wᵀ to yield the desired result. The scalar ρ = eᵀn+1 C∗ e1 is easily shown to be equal to the product of the subdiagonal entries (of the active parts) of the Ci∗ by direct computation.

This theorem provides a formula that can be used to compute w or any needed components at any time and thereby justifies our claim that there is no need to store w explicitly. However, it will turn out that our algorithm never needs any part of w anyway. We will make use of Theorem 4.2.2 in the backward error analysis.
Another Look at w This small section is not crucial to the further development, but it gives some insight into the relationship between the vector wT and the fact that R is upper triangular. The form
w T = w1 · · · wn 0 , with wn+1 = 0, holds initially and forever. The matrix H = B + e1 wT is upper Hessenberg. When we apply C ∗ to H, we get the upper triangular matrix R = C ∗ H. We will show here that the vector wT is uniquely determined (among vectors satisfying wn+1 = 0) by the condition that R be upper triangular. To see this we take a closer look at the transformation R = Cn∗ · · · C2∗ C1∗ H. The transformation H → C1∗ H must transform h21 to zero. (If it doesn’t, the subsequent transformations will spread nonzeros down the entire first column.) Thus h11 ∗ c1 s1 = , −s1 c1 h21 0 which implies −s1 h11 + c1 h21 = 0, and hence h11 = (c1 /s1 )h21 . Since h11 = b11 + w1 and h21 = b21 , this equation implies that w1 = (c1 /s1 )b21 − b11 . With
any other choice of w1 , R will fail to be upper triangular. It is important that C1 is nontrivial since this ensures that s1 = 0. The transformation C1∗ H → C2∗ C1∗ H must set h32 to zero. Now the equations are more complicated, but it is not hard to see that this fact determines w2 ˜ 22 = (c2 /s2 )h32 , where uniquely. Indeed, as in the first step we deduce that h ˜ h22 is not the original h22 but the value after modification by C1∗ . Because C1 ˜ 22 contains a term that is a nonzero multiple of w2 . Thus the is nontrivial, h ˜ equation h22 = (c2 /s2 )h32 determines w2 uniquely. Notice that this argument relies on both C1 and C2 being nontrivial. Continuing in this way, we deduce that w3 , . . . , wn are all uniquely determined. Our conclusion follows: w T is the unique vector (among those with with wn+1 = 0) that makes C ∗ (B + e1 w T ) upper triangular.
Putting It All Together The description of an iteration of Francis’s algorithm in the unitary-plus-rankone case is now complete: It is exactly as described in Chapter 3, except that R is stored in the form (4.2.9). In the preceding paragraphs we have shown that R stored in the form (4.2.9) is indeed upper triangular, and the information about the rank-one part is encoded in the core transformations. Now we put it all together explicitly for the reader’s convenience. We will describe an iteration of the single-shift Francis algorithm. The double-shift iteration is similar. Initially we have
T
A = QP RP =
Cn∗ · · · C1∗
Q1 · · · Qn−1
+
···
,
B1 · · · Bn
where the dots again represent the rank-one part. The matrices P T and P are invisible in our depiction, but note that the descending sequence Q1 · · · Qn−1 , which defines Q, is shorter than the ascending and descending sequences that define R. If we suppress the rank-one part, we can write this more simply as
.
The iteration begins with a choice of shift ρ, which we then use to construct a core transformation U1 that has just the right first column, as described at the
beginning of Section 3.1. We then make a similarity transformation A → U1∗ AU1 :
.
The core transformation U1∗ on the left can be fused with Q1 , but U1 on the right is the misfit, which needs to be chased away. We begin by passing it through R and Q. This requires three turnovers:
.
We then do a similarity transformation to move the misfit from the left to the right, and we pass it through again at a cost of three more turnovers:
.
Then we do another similarity transform, and so on. After chasing the misfit through the matrix a total of n − 2 times, it reaches the bottom. After the final similarity transformation, the misfit sits on the right, as shown:
.
We pass it through R one more time, bringing it into a position where it can be fused with Qn−1 :
.
Once we have done the fusion, the iteration is complete.
The total work is about 3n turnovers, so it is O(n) per iteration. The storage space required is about 3n core transformations, so that is also O(n).
Computing Entries of A

At certain points in the algorithm we need to compute elements of A explicitly. We already have formulas, namely R = C̃⁻¹H̃ and A = QR. Now we need to show that the needed entries can be computed efficiently. The points at which we need elements of A are the shift computation, the computation of the transformation that starts each iteration, and the final eigenvalue computation. In the course of computing elements of A we must compute elements of R. For example, for the transformation that starts an iteration in the double-shift case, we need the submatrix

    [ a11   a12 ]
    [ a21   a22 ]
    [  0    a32 ] ,

and for this we need

    [ r11   r12 ]
    [  0    r22 ] .

A typical shift computation requires the submatrix

    [ an−1,n−1   an−1,n ]
    [  an,n−1     an,n  ] .

For this we need

    [ rn−2,n−1   rn−2,n ]
    [ rn−1,n−1   rn−1,n ] .
    [     0       rn,n  ]
In each case we need just a few entries of R on, or near, the main diagonal. The same is true in the final eigenvalue computation. For example, if we just need to compute a single eigenvalue located at aii, we need only rii. We will show that each of these computations can be done easily in O(1) time. We make use of (4.2.12), which we now write as

    H̃ = C̃ R.          (4.2.13)

Recall that all three of these matrices are upper triangular, and the main-diagonal entries of H̃ and C̃ have indices (i + 1, i). Suppose, for example, we need rii. Then from (4.2.13) we see that hi+1,i = ci+1,i rii, so

    rii = hi+1,i / ci+1,i.          (4.2.14)

hi+1,i and ci+1,i are the subdiagonal entries of the core transformations Bi and Ci, respectively, so we have these numbers in hand. Now suppose we also need ri−1,i. Picking an appropriate equation out of (4.2.13), exploiting triangularity of C̃ and R, we have

    [ hi,i   ]   [ ci,i−1    ci,i  ] [ ri−1,i ]
    [ hi+1,i ] = [   0     ci+1,i  ] [  ri,i  ] ,          (4.2.15)

which we can solve by back substitution. The first step yields rii by the formula already given above, and the second step gives ri−1,i with just a bit more work.
The entries from H and C needed for this computation can be computed from Bi−1 , Bi , Ci−1 , and Ci with negligible effort. If we also need ri−2,i , we just need to carry the back solve one step further. To construct the additional required entries from H and C, we need to bring Bi−2 and Ci−2 into play. Once the required entries from R have been generated, the entries of A that we need can be obtained by applying O(1) core transformations from Q. Notice that these computations do not make explicit use of the vector wT . Instead they use the fact that R is upper triangular, which is a way of using w T implicitly.
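The formula (4.2.14) is easy to verify on a small example. The following MATLAB sketch is ours; it builds a unitary-plus-rank-one R in the form (4.2.4), rolls up z to obtain C and B as dense matrices purely for illustration (the actual codes keep only the cores), and recovers a diagonal entry of R from two subdiagonal entries.

    % Recovering r(i,i) from the compact representation via (4.2.14): r(i,i) = h(i+1,i)/c(i+1,i).
    n = 6;
    zb = [randn(n, 1); 1];                        % extended vector z = [z1;...;zn;1] of (4.2.5)
    Zn = eye(n+1);  Zn(n:n+1, n:n+1) = [0 1; -1 0];    % the core transformation in (4.2.4)
    en = [zeros(n-1, 1); 1; 0];                   % e_n in the (n+1)-dimensional space
    Rbar = Zn + zb*en';                           % (n+1)x(n+1) upper triangular, zero last row
    C = eye(n+1);  v = zb;                        % roll up z: C = C1*...*Cn with C*zb = alpha*e1
    for i = n:-1:1
        [G, v(i:i+1)] = planerot(v(i:i+1));
        Ci = eye(n+1);  Ci(i:i+1, i:i+1) = G;
        C = Ci * C;
    end
    B = C * Zn;                                   % H = B + e1*w' satisfies H = C*Rbar
    i = 4;
    rii = B(i+1, i) / C(i+1, i);                  % formula (4.2.14)
    disp([rii  Rbar(i, i)])                       % the two values agree to roundoff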
4.3 Backward Stability: Unitary-Plus-Rank-One Case The proof of backward stability in Theorem 3.5.1, which is based in part on Theorem 1.5.2, assumes that R is stored in the conventional way. Now we need to show that backward stability continues to hold when R is unitary-plus-rankone and stored in the compact form. Since the operations involve exclusively unitary core transformations, we expect a good result. We will indeed get an excellent result in the end, but it will take some effort. Our analysis follows [4]. First we consider the accuracy of the procedure by which the unitary-plusrank-one matrix R = I + zˇeTn is transformed to the form P T (C ∗ (B + e1 w T ))P . The component parts, computed in floating-point arithmetic, are inexact, but we assert that the procedure is backward stable. C is produced in the process (4.2.6) of “rolling up” the vector z : Cz = C1 · · · Cn z = α e1 . Applying Theorem 1.4.1 inductively, we find that the core transformations C1 , . . . , Cn are all produced to high relative accuracy. So are B1 , . . . , Bn , which are initially the same as C1 , . . . , Cn , except that Bn = Cn Zn . The vector w T is given implicitly by Theorem 4.2.2 as wT = −ρ−1 eTn+1 C ∗ B, where ρ = eTn+1 C ∗ e1 . Since these depend entirely on C and B, they are also accurate in appropriate senses. Backward stability follows. We omit the details. In the course of the iterations of Francis’s algorithm, R gets transformed ˆ + e1 w ˆ = P T Cˆ ∗ (B ˆ T )P . Using notation from the proof of Theorem 3.5.1, to R ∗ ˆ R = V RU . U is the product of all of the core transformations that were passed ˆ and V is the product of all of the core into R in the course of converting R to R, transformations that emerged from such operations. To get rid of the extraneous notation P and P T , we will work with the larger matrices R = C ∗ (B +e1 w T ) and ˆ . Letting U = diag{U, 1} and V = diag{V, 1}, we have (in exact arithmetic) R ˆ = V ∗R U . R We will deal with the unitary and rank-one parts of R separately: R = Ru + Ro , where Ru = C ∗ B and Ro = C ∗ e1 w T . First let’s focus on the unitary ˆ = V ∗ C ∗ BU = V ∗ R U . Recall the diagram ˆ = Cˆ ∗ B part Ru . We have R u u
[diagram, repeated from Section 4.2: a core transformation U i enters Ru = C ∗ B on the right, an intermediate W i+1 passes between B and C ∗, and V i emerges on the left],
which shows how core transformations get passed through Ru . For each core transformation U i that is passed through, an intermediate W i+1 is created. Let W denote the product of all such W i+1 . We note for use below that W e1 = e1 . ˆ = V ∗ C ∗ W W ∗ BU , and more precisely B ˆ = W ∗ BU and Cˆ = We have Cˆ ∗ B ∗ W CV . These are the equations in exact arithmetic. By Theorem 1.5.1 we know that in floating-point arithmetic ˆ = W ∗ (B + δB)U B
and Cˆ = W ∗ (C + δC)V ,
(4.3.1)
where δB u and δC u. Thus ˆ = V ∗ (C ∗ + δC ∗ )(B + δB)U = V ∗ (R + δR )U , R u u u . where δRu = δC ∗ B + C ∗ δB, and therefore δRu u.
(4.3.2)
Now let us consider the rank-one part Ro = C ∗ e1 w T . By Theorem 4.2.2, w = −ρ−1 eTn+1 C ∗ B, where ρ = eTn+1 C ∗ e1 , so T
R0 = −ρ−1 C ∗ e1 eTn+1 Ru . Note that ρ remains invariant in exact arithmetic: If ρˆ = eTn+1 Cˆ ∗ e1 , then ρˆ = eTn+1 V ∗ C ∗ W e1 = eTn+1 C ∗ e1 = ρ because W e1 = e1 and V en+1 = en+1 . In floating-point arithmetic we do not have ρˆ = ρ but instead ρˆ = eTn+1 (C + δC)∗ e1 = ρ + δρ
with
| δρ | u.
(4.3.3)
This shows that the backward error δρ is small in an absolute sense, but it might not be small relative to ρ itself in situations where | ρ | is small. This result is good enough to yield a satisfactory error analysis, but we can do better. In Section 1.4 we recommended that a turnover use (1.4.9) for improved accuracy. From this point on we will assume that our turnover satisfies (1.4.9), as this assumption yields more accurate algorithms and a stronger and simpler error analysis.18 We need to take a closer look at C. For convenience we will assume that each Cj has its active part in the form cj −sj , (4.3.4) sj cj with sj real, and similarly for Cˆj . Theorem 4.3.1. In floating-point arithmetic, if the turnover satisfies (1.4.9), then sˆ1 · · · sˆn = s1 · · · sn (1 + δ), where | δ | u. 18 In [7] we made the mistake of not taking into account the error in ρ. We corrected this in the technical report [5]. In neither of those publications did we use a turnover that satisfies (1.4.9). Subsequently we learned that a one-line change in the turnover, incorporating (1.4.9), gives more accurate results and a simpler and stronger backward error analysis [4]. That is what we are presenting here.
Proof. C is transformed to Cˆ by a large number of turnovers, but it suffices to consider the action of a single turnover affecting, say, cores Cj and Cj+1 , transforming them to Cˆj and Cˆj+1 . By Theorem 1.3.1 we would have sˆj sˆj+1 = sj sj+1 in exact arithmetic. In practice, using (1.4.9), we have sˆj+1 = (sj sj+1 /ˆ sj )(1+δ), where δ is the result of the two tiny roundoff errors from the multiply and divide operations and satisfies | δ | u. Thus sˆ1 · · · sˆn = s1 · · · sn (1 + δ). Since this is true for each turnover, it is also true for a large number of turnovers. Theorem 4.3.1 is relevant here because the product that is (nearly) conserved is almost exactly equal to ρ. Indeed, assuming the form (4.3.4) for the cores and using the definition ρ = eTn+1 C ∗ e1 , straightforward computation shows that ρ = (−1)n s1 s2 · · · sn . Thus Theorem 4.3.1 implies ρˆ = ρ(1 + δρr )
with | δρr | u.
(4.3.5)
This is an improvement on (4.3.3) in that it shows that the backward error in ρ is tiny relative to ρ. Another important point is that ρ is closely related to the scalar α originally defined in (4.2.6). Indeed, Cz = α e1 , so C ∗ e1 = α−1 z . Thus ρ = eTn+1 C ∗ e1 = α−1 eTn+1 z = α−1 zn+1 = α−1 , so α = ρ−1 . ˆo = In exact arithmetic the transformation from the original Ro to the final R V ∗ Ro U has the form ˆ ˆ = −ρˆ−1 Cˆ ∗ e1 eT R R o n+1 u = −ρ−1 V ∗ C ∗ e1 eTn+1 Ru U , where we have again used W e1 = e1 and V en+1 = en+1 . In floating-point arithmetic we have, by (4.3.1) and (4.3.5), ˆ = −ρ−1 (1 + δρr )−1 V ∗ (C + δC)∗ e1 eT (R + δR )U . R o u u n+1
(4.3.6)
The backward error on Ro is the quantity δR0 given by ˆ o = V ∗ (Ro + δRo )U . R . We can approximate δR0 by expanding (4.3.6), using (1 + δρ)−1 = 1 − δρ, and −1 ignoring all second-order terms. Recalling that ρ = α, we get . δR0 = α(δρr C ∗ e1 eTn+1 Ru − δC ∗ e1 eTn+1 Ru − C ∗ e1 eTn+1 δRu ). Taking norms we have δR0 | α |(| δρr | + δC + δRu ). Since | α | = z ≈ R , we deduce that δR0 u R = u A . If we now add the unitary and rank-one parts together, we have ˆ = V ∗ (R + δR)U , R where δR = δRu + δRo , and therefore δR u A .
ˆ P = V ∗ (R + δR)U , ˆ = PTR Now we cut the matrices down to size n × n: R T where δR = P (δR)P obviously satisfies δR ≤ δR u A .
(4.3.7)
Theorem 4.3.2. The single- and double-shift Francis algorithms for unitaryplus-rank-one matrices are normwise backward stable. If A is the starting matrix and Aˆ is the final matrix, then Aˆ = U ∗ (A + δA)U , where δA u A . Proof. A + δA = (Q + δQ)(R + δR), where δQ u by Theorem 1.5.1 and . δR u A by (4.3.7). Thus δA = δQ R + Q δR, and therefore δA u A .
Backward Stability of Polynomial Rootfinding

We now specialize to the case where we have a monic polynomial p(λ) = a0 + a1 λ + a2 λ² + · · · + an−1 λⁿ⁻¹ + λⁿ, and we compute its roots by computing the eigenvalues of the companion matrix

        [ 0              −a0   ]
    A = [ 1   0          −a1   ]
        [     ⋱   ⋱       ⋮    ]
        [         1      −an−1 ] .

If we let a denote the vector of coefficients of p, i.e.,

    a = [ a0   · · ·   an−1   1 ]ᵀ,

then we have ‖a‖ ≈ ‖A‖. In the context of polynomial rootfinding there is a stronger form of backward stability that one can consider, namely backward stability on the polynomial coefficients. Suppose we find the roots of p by any method. The computed roots will not be exactly the zeros of p, but we hope that they are the zeros of a "nearby" polynomial. Let λ1, . . . , λn denote the computed zeros, and let p̂(λ) = (λ − λ1) · · · (λ − λn). This is the monic polynomial that has the computed roots as its zeros. We can compute the coefficients of p̂, p̂(λ) = â0 + â1 λ + â2 λ² + · · · + ân−1 λⁿ⁻¹ + λⁿ (using extended-precision arithmetic), and the corresponding coefficient vector

    â = [ â0   · · ·   ân−1   1 ]ᵀ.

The normwise backward error on the polynomial coefficients is ‖â − a‖. This type of backward error was studied by Edelman and Murakami [31]. They showed that if the roots are the exact eigenvalues of A + δA, where
Figure 4.1. Backward error on the companion matrix (left) and the coefficient vector (right) as a function of a when roots are computed by unstructured LAPACK code [4].
δA γ, then a ˆ − a γ a . For example, if we compute the eigenvalues of the companion matrix by any backward stable algorithm such as the ones we have studied in this monograph, we have δA u A ≈ u a (see Theorems 3.5.1 and 4.3.2). Therefore we can take γ = u a and deduce that 2
a ˆ − a u a . The square on a is alarming but real. Figure 4.1 shows the backward error as a function of a when we compute the roots of the companion matrix using the unstructured Francis code ZHSEQR from LAPACK.19 Twelve hundred random polynomials with varying norm between 1 and 1012 are produced. For each sample we plot the backward error against a (a single point). Black lines 2 with slopes corresponding to a and a performance are also provided. The graph on the left is the backward error on the companion matrix A. We see that this grows linearly as a function of a . This is consistent with the backward stability of Francis’s algorithm, which guarantees that the computed eigenvalues are the exact eigenvalues of a slightly perturbed matrix A + δA with δA u A ≈ u a . The graph on the right shows the backward error on the coefficients of the polynomial as a function of a . Note that the growth is quadratic in a in the worst cases, consistent with the analysis of Edelman and Murakami [31]. In this section we will show that our unitary-plus-rank-one Francis algorithm does better than what is predicted by [31]. The special structure of the problem implies special structure for the backward error. By exploiting this structure we are able to provide an argument that yields the result a ˆ − a u a with no square on a . Thus our structured Francis algorithm is superior to the unstructured Francis algorithm from the point of view of backward stability. This is illustrated in Figure 4.2. Of course a picture does not constitute a proof; now we have to prove it. The upper triangular matrix R can be written in numerous ways. In (4.2.4) we wrote it as R = Zn + z eTn , with unitary part Ru = Zn = C ∗ B and rank-one 19
www.netlib.org/lapack/
Figure 4.2. Backward error of the companion Francis algorithm on the polynomial coefficients [4].
part Ro = z eTn . We will find it useful to write the rank-one part in a standard form: (4.3.8) Ro = αxyT with x = y = 1. We can do this by defining x = C ∗ e1 = α−1 z
and y = en .
(4.3.9)
If we now define x = P T x and y = P T y = en , we have R = P T RP = P T Ru P + αxy T . Since Ru = Zn , we see easily that P T Ru P = I − en y T . Thus x ≤ 1,
R = I + (αx − en )y T ,
y = 1.
(4.3.10)
Now we we will rewrite the backward error in a way that reflects (4.3.10). Lemma 4.3.3. If the companion Francis algorithm is applied to the companion matrix A = QR, where R = I +(α x−en )y T , then the backward error δR satisfies R + δR = (I + δI) + (α(x + δx) − en )(y + δy)T , where δI u, δx u, and δy u. Proof. Revisiting (4.3.6) we find that Ro + δRo = −ρ−1 (1 + δρr )−1 (C + δC)∗ e1 eTn+1 (Ru + δRu ). We will rewrite this in the form Ro + δRo = α(x + δx)(y + δy )T
with δx u and δy u,
(4.3.11)
a perturbed version of (4.3.8). Noting that C ∗ e1 = x and −eTn+1 Ru = −eTn+1 Zn = eTn = y T , and remembering that ρ−1 = α, we see that we can achieve the desired form by defining δy T = −eTn+1 δRu , and δx implicitly by x + δx = (1 + δρr )−1 (C + δC)∗ e1 .
(4.3.12)
. Since δRu u, we have δy u. Expanding (4.3.12) we find that δx = −δρr x + δC ∗ e1 , and we deduce that δx u. Now using (4.3.11), R + δR = Ru + δRu + α (x + δx)(y + δy)T . Projecting this down to n × n matrices we get R + δR = Ru + δRu + α (x + δx)(y + δy)T ,
(4.3.13)
with δRu u, δx u, and δy u. Since Ru = P T Ru P = I − en y T , we have Ru + δRu = I − en y T + δRu = I − en (y + δy)T + en δy T + δRu = I + δI − en (y + δy)T ,
(4.3.14)
where δI = en δy T + δRu , and δI u. Substituting (4.3.14) into (4.3.13), we get R + δR = I + δI + (α(x + δx) − en )(y + δy)T . This completes the proof. We now have all we need to know about R, and we move on to A and p(λ) = det(λI − A). From (4.3.10) we get A = QR = Q + Q(αx − en )y T , and therefore (4.3.15) λI − A = (λI − Q) − Q(α x − en )y T . To make use of this equation we need the following known result [49, p. 26]. For the reader’s convenience we sketch a proof. Lemma 4.3.4. Let K ∈ Cn×n , and let w, v ∈ Cn . Then det(K + w v T ) = det(K) + v T adj(K)w, where adj(K) denotes the adjugate matrix of K. Proof. We will prove the result for nonsingular K; it then follows for all K by continuity. The special case det(I + u v T ) = 1 + v T u is easily verified. Thus det(K + w v T ) = det(K) det(I + K −1 w v T ) = det(K)(1 + v T K −1 w) = det K + v T adj(K)w, as adj(K) = det(K) K −1 when K is nonsingular. Corollary 4.3.5. If A = QR = Q(I + (α x − en )y T ), then p(λ) = det(λI − A) = det(λI − Q) − y T adj(λI − Q)Q(α x − en ). Proof. Apply Lemma 4.3.4 to (4.3.15). This gives the characteristic polynomial of the companion matrix in terms of a much simpler characteristic polynomial plus a correction term. Recall that the QR decomposition of the companion matrix is given in (4.2.2), and notice that det(λI − Q) = λn − 1. Note also that in the companion case T
z = αx = −a0 . . . −an−1 1 , and | α | = z = a .
4.3. Backward Stability: Unitary-Plus-Rank-One Case
79
We will find it useful to write the characteristic polynomial of Q in the form det(λI − Q) = λn − 1 =
n
qk λk ,
k=0
where qn = 1, q0 = −1, and qk = 0 otherwise. The entries of the adjugate matrix are determinants of order n − 1, so adj(λI − Q) is a matrix polynomial of degree n − 1: n adj(λI − Q) = Gk λk , (4.3.16) k=0
with Gk ≈ 1 for k = 1, . . . , n − 1, and Gn = 0. Using Corollary 4.3.5 and (4.3.16) we can write the characteristic polynomial of A as p(λ) = det(λI − A) =
n
qk − y T Gk Q(α x − en ) λk .
(4.3.17)
k=0
The roots that we actually compute are the zeros of a perturbed polynomial pˆ(λ) = det(λI − (A + δA)) =
n
(ak + δak )λk .
k=0
Our plan now is to use (4.3.17) to determine the effect of the perturbation δA on the coefficients of the characteristic polynomial. That is, we want bounds on | δak |. Lemma 4.3.6. If δQ u, then adj(λI − (Q + δQ)) =
n
(Gk + δGk )λk
(4.3.18)
k=0
and det(λI − (Q + δQ)) =
n
(qk + δqk )λk ,
(4.3.19)
k=0
with δGk u and | δqk | u, k = 0, . . . , n − 1. Proof. For the bounds | δqk | u we rely on Edelman and Murakami [31]. We 2 have δq u q = 2u. The adjugate and the determinant are related by the fundamental equation B adj(B) = det(B)I. Applying this with B = λI − Q, we get (λI − Q)
n k=0
Gk λk =
n
qk Iλk .
k=0
Expanding the left-hand side and equating like powers of λ, we obtain the recurrence Gk = QGk+1 + qk+1 I, k = n − 1, . . . , 0. (4.3.20) This is one half of the Faddeev–Leverrier method [35, p. 260], [37, p. 87]. Starting from Gn = 0, and knowing the coefficients qk , we can use (4.3.20) to obtain all
80
Chapter 4. Some Special Structures
of the coefficients of adj(λI − Q). In fact, Gn−1 = I, Gn−2 = Q, Gn−3 = Q2 , and so on. The recurrence holds equally well with Q replaced by Q + δQ. We have Gk + δGk = (Q + δQ)(Gk+1 + δGk+1 ) + (qk+1 + δqk+1 )I, so
. δGk = δQ Gk+1 + Q δGk+1 + δqk+1 I.
(4.3.21)
If δGk+1 u, we can deduce from (4.3.21) that δGk u. Since we have δGn = 0 to begin with, we get by induction that δGk u for all k. We are now ready to prove our main theorem, which was predicted by Figure 4.2. Theorem 4.3.7. Suppose we apply the companion Francis algorithm to the monic polynomial p with coefficient vector a. Let pˆ, with coefficient vector a ˆ, denote the monic polynomial that has the computed roots as its exact zeros. Then a ˆ − a u a . Proof. From Theorem 4.3.2 and the discussion leading up to it, we know that pˆ is the characteristic polynomial of a matrix A + δA = (Q + δQ)(R + δR) with δQ u. We will use the form of R + δR derived in Lemma 4.3.3: A + δA = (Q + δQ)(R + δR) = (Q + δQ)[(I + δI) + (α(x + δx) − en )(y + δy)T ] + (Q + δQ)(α(x + δx) − en )(y + δy)T , = (Q + δQ) . = u. Now, using (4.3.17) with A replaced by where δQ Q δI + δQ and δQ A + δA, we get pˆ(λ) = det(λI − (A + δA)) (4.3.22) n (qk + δqk ) + (y + δy)T (Gk + δGk )(Q + δQ)(α (x + δx) − en ) λk , = k=0
where δGk u and | δqk | u by Lemma 4.3.6. Expanding (4.3.22) and ignoring higher-order terms, we obtain . δak = δqk + δy T Gk Q(α x − en ) + y T δGk Q(α x − en ) + y T Gk δQ(α x − en ) + y T Gk Q(α δx) for k = 0, . . . , n − 1. Each term on the right-hand side has one factor that is u. All terms except the first contain exactly one factor α and other factors that are ≈ 1. Thus | δak | u | α | = u a , and therefore δa u a . So far throughout this section we have assumed for convenience that we are dealing with a monic polynomial. In practice we will often have a nonmonic p, which we make monic by rescaling it. The following theorem covers this case.
4.4. Symmetric Matrices
81
Theorem 4.3.8. Suppose we compute the roots of a nonmonic polynomial p with coefficient vector a by applying the companion Francis algorithm to the monic polynomial p/an with coefficient vector a/an . Let p˜ denote the monic polynomial that has the computed roots as its exact zeros, let pˆ = an p˜, and let a ˆ denote the coefficient vector of pˆ. Then a ˆ − a u a . Proof. an .
Apply Theorem 4.3.7 to p/an , and then rescale by multiplying by
4.4 Symmetric Matrices An upper Hessenberg matrix that is Hermitian (A∗ = A) must automatically be tridiagonal. We can arrange for its off-diagonal entries to be real and nonnegative. Since its main-diagonal entries must be real, we can assume that A is real. There are several excellent methods [73, Section 7.2] for computing the eigenvalues of a real, symmetric, tridiagonal matrix, and it would be amazing if we could invent something better. We have not managed to do that so far, but we did come up with a method that works fairly well [10], and we describe it in this section. The Cayley transform λ+i μ = ϕ(λ) = λ−i is a one-to-one map of the extended complex plane onto itself that maps the extended real line onto the unit circle. Its inverse is given by λ = ϕ−1 (μ) = i
μ+1 . μ−1
Every Hermitian matrix A has a Cayley transform defined by U = ϕ(A) = (A + iI)(A − iI)−1 = (A − iI)−1 (A + iI). Straightforward matrix algebra shows that U ∗ U = I, so U is unitary. Now we focus on the case where A is real, symmetric, and tridiagonal. Then A + iI is also tridiagonal, so a QR decomposition A + iI = QR can be computed in O(n) flops. Q is upper Hessenberg and a product of n−1 core transformations: Q = Q1 Q2 · · · Qn−1 . R is upper triangular with three nonzero diagonals. As we shall see, we will need only the main diagonal of R. Since A is real, we have A − iI = A + iI = Q R. Thus −1
U = (A + iI)(A − iI)−1 = QRR
QT .
−1
RR = Q∗ U Q is unitary as well as upper triangular, so it must be diagonal. Call it D. The main diagonal entries of D are given by dj = rjj /rjj , so the main diagonal of R gives us D. Now we have U = QDQT = Q1 · · · Qn−1 DQTn−1 · · · QT1 ,
82
Chapter 4. Some Special Structures
or pictorially
U=
,
where Q is shown in black and QT in green. We have not depicted the diagonal matrix D, which lies between Q and QT . In Section 1.4 we discussed the trivial operation of passing a core transformation through a diagonal matrix. We will need to do this repeatedly in our algorithm, but we will ignore D in our pictures, as usual. Our plan is to reduce U to upper Hessenberg form by eliminating the green transformations. We have devised two procedures: One removes the green cores at the top, starting at the top and moving downwards; the other removes them at the bottom, starting from the bottom. These two procedures have the same cost, but it turns out that it is cheaper to combine the two, removing the top cores from the top and the bottom cores from the bottom. We therefore describe the two procedures simultaneously. We remove the top core by a similarity followed by a fusion. The bottom core requires only a fusion:
.
The result is
.
Now the next two core transformations can be removed by somewhat longer pathways requiring one turnover each,
,
4.4. Symmetric Matrices
83
resulting in
.
Now the last cores can be removed via lengthier paths requiring two turnovers each:
.
We have now reached upper Hessenberg form in our small example. In a larger matrix we would continue to the next step, in which two more core transformations are removed at a cost of three turnovers each, and so on. After about n/2 steps, we will get to upper Hessenberg form. The total number of turnovers required is about n2 /4, so the total flop count is O(n2 ). Once Hessenberg form is reached, the eigenvalues can be computed by the method of Section 4.1 for an additional O(n2 ) flops. For each eigenvalue μ of U , λ=i
μ+1 μ−1
is an eigenvalue of A. Performance of the algorithm is discussed in [10]. A shortcoming of this method is that it uses complex arithmetic in the course of solving a real problem. Nevertheless it works surprisingly well. It is not as fast as the fastest methods for solving this problem, but it is about as fast as the symmetric tridiagonal QR code in LAPACK. Now let’s consider stability. It is clear that the unitary part of the algorithm is backward stable, so we just need to consider the transformation from symmetric to unitary and back. The stability of these transformations hinges on the behavior of the maps ϕ and ϕ−1 . Of course they are not bounded in general, so we need to restrict the domains appropriately. Since ϕ(∞) = 1, large eigenvalues of A will be mapped to a cluster of eigenvalues of U near 1. Thus eigenvalues that are initially well separated can end up clustered together. Obviously we must avoid this. The simple solution is to rescale A if necessary so that its norm is not too large. It turns out that a very small A is also problematic since then all of the eigenvalues of U will be clustered in a small arc around −1. Thus we should scale A so that A ≈ 1. Then the eigenvalues of A lie in an interval [−a, a] with a ≈ 1, and they get mapped to eigenvalues of U in a compact circular arc that is roughly half a circle and stays away from 1. The maps ϕ and ϕ−1 restricted to these domains are well behaved, and backward stability follows. For a detailed analysis see [10].
84
Chapter 4. Some Special Structures
4.5 Symmetric-Plus-Rank-One Matrices Comrade Matrices In Section 4.2 we considered the problem of computing the zeros of a polynomial p(λ) = a0 + a1 λ + · · · + an−1 λn−1 + λn expressed in terms of the monomial basis. Other bases are sometimes used. If p0 , p1 , p2 , . . . is any basis of polynomials such that pk has degree exactly k, then we can write p uniquely as p(λ) = c0 p0 (λ) + c1 p1 (λ) + · · · + cn pn (λ). The most commonly used nonmonomial bases are classical families of orthogonal polynomials, especially Chebyshev and Legendre, that can be generated by symmetric three-term recurrences of the form βk pk+1 (λ) = (λ − αk )pk (λ) − βk−1 pk−1 (λ), In these cases the zeros of ⎡ α0 ⎢ β0 ⎢ ⎢ ⎢ T A = T + xen = ⎢ ⎢ ⎢ ⎣
k = 0, 1, 2, . . . .
p are exactly the eigenvalues of the comrade matrix ⎤ β0 −cˆ0 ⎥ α1 β1 −ˆ c1 ⎥ ⎥ β1 α2 β2 −ˆ c2 ⎥ (4.5.1) ⎥, .. .. .. .. ⎥ . . . . ⎥ βn−3 αn−2 βn−2 − cˆn−2 ⎦ βn−2 αn−1 − cˆn−1
where cˆk = ck /cn , k = 0, 1, . . . , n − 1. For a proof see [12] or the book by Barnett [14]. In the special case of Chebyshev polynomials the comrade matrix is called a colleague matrix [40]. The comrade matrix is the sum of a symmetric tridiagonal matrix T and a rank-one “spike matrix” xeTn . Thus it is symmetric-plus-rank-one. We leave it as an exercise for the reader to show that every symmetric-plus-rank-one matrix is unitarily similar to a matrix in the form (4.5.1), and the transformation can be achieved by a direct method in O(n3 ) flops.
Solution via the Cayley Transform We will consider the eigenvalue problem for comrade matrices (4.5.1). A good algorithm for this and similar problems was proposed by Eidelman, Gemignani, and Gohberg [32]. An alternative is our method [12], which uses nonunitary similarity transformations. We do not claim to have anything better than [32], but we do have something different and interesting. We apply a Cayley transform to (4.5.1) to make a unitary-plus-rank-one matrix. We then reduce it to Hessenberg form by a method similar to that in Section 4.4. Then we compute the eigenvalues by the method of Section 4.2. Taking a hint from the previous section, we perform a QR decomposition T +iI = QR, which also gives T −iI = Q R. The unitary matrix Q = Q1 · · · Qn−1 is upper Hessenberg. Let U denote the (unitary) Cayley transform of T : U = (T + iI)(T − iI)−1 = QDQT , where D = RR
−1
is unitary and diagonal.
4.5. Symmetric-Plus-Rank-One Matrices
85
Now we want to consider the Cayley transform of A = T + xeTn . First of all, A + iI = (T + iI) + xeTn = QR + xeTn = Q(I + Q∗ xeTn R−1 )R. −1 T Note that eTn R−1 = rnn en , so
A + iI = Q(I + weTn )R,
−1 ∗ where w = rnn Q x.
Next we note that −1
A − iI = (T − iI) + xeTn = Q R + xeTn = Q(I + QT xeTn R
)R,
so A − iI = Q(I + zeTn )R,
T where z = r −1 nn Q x.
(4.5.2)
(If x is real, then z = w.) Using the Sherman–Morrison formula we have (A − iI)−1 = R
−1
(I + β zeTn )QT ,
where β = −1/(1 + zn ).
(4.5.3)
Thus the Cayley transform of A is B = (A + iI)(A − iI)−1 = Q(I + weTn )D(I + βzeTn )QT = Q(D + weTn D + βDzeTn + β(eTn Dz)weTn )QT , where D is the unitary diagonal matrix defined earlier. Noting that eTn D = dnn eTn and eTn Dz = dnn zn , we deduce that B = Q(D + veTn )QT ,
(4.5.4)
where v = dnn (1 + β zn )w + β Dz. One easily checks that the computational cost of producing the form (4.5.4) is O(n). B is clearly unitary-plus-rank-one. If we can transform it to Hessenberg form, then we can find its eigenvalues by the method developed in Section 4.2. The method we described in the previous section for transforming QDQT to Hessenberg form can also be used here with slight modifications. There we showed that we have the option of removing the extra core transformations from either the top or the bottom. In the current context we must remove them from the top in order to preserve the form veTn of the rank-one part. Pictorially the process looks as follows. We start with Q(D +
veTn )QT
=
D+
veTn
.
86
Chapter 4. Some Special Structures
To get to Hessenberg form we must remove the core transformations shown in green. Suppose we have removed the top two and now we are going to remove the third. The procedure begins as follows:
D + veTn
.
First we do a similarity transformation to move the core from right to left. Then we do an upward turnover and pass the resulting core transformation Gi through the unitary-plus-rank-one matrix D + veTn . In the previous section we ignored the trivial interactions with the diagonal matrix D, but now we must be a bit more careful. Passage of the core transformation Gi through D was ˆG ˆ i , where D ˆ and G ˆi discussed very briefly in Section 1.4. We have Gi D = D ˆ −1 )G ˆ + Gi veT G ˆi. differ hardly at all from D and Gi . Thus Gi (D + veTn ) = (D n i ˆ −1 = eT because i < n − 1, so Gi (D + veT ) = (D ˆ + vˆeT )G ˆi, Notice that eTn G n n n i where vˆ = Gi v. This shows that the passing-through operation requires only the ˆ ← D, and G ˆ i ← Gi . The generic form veTn of the rank-one updates v ← Gi v, D part is preserved. The procedure continues with another similarity transformation followed by a turnover and another operation of passing a core through the unitary-plusrank-one part. Finally, one more similarity transformation followed by a fusion completes the process:
D + veTn
.
Preservation of the veTn form of the rank-one part holds even on removal of the very last core transformation. The first step in this process is
D + veTn
.
Even on this first step the core that is passed through the unitary-plus-rank-one part does not touch the nth row or column and so does not alter the vector eTn in the rank-one part. Notice that if we had tried to remove the core transformations at the bottom, this structure would not have been preserved. The total cost of the reduction to Hessenberg form is O(n2 ) since there are about n2 /2 turnovers and as many updates of the vector v in the rank-one part. Once the Hessenberg form has been reached, the eigenvalues of B can be computed by the method of Section 4.2. Then the eigenvalues of A are given by
4.5. Symmetric-Plus-Rank-One Matrices
87
the transformation λ = ϕ−1 (μ) = i
μ+1 . μ−1
Stability Equation (4.5.3) shows that the method fails outright if zn = −1, so we want to make sure that zn is nowhere near −1. Bad spots for the Cayley transform are ±i. The point i gets mapped to ∞, so any eigenvalues of A near i will be mapped to extremely large eigenvalues of B. Eigenvalues near −i will be mapped close to 0, and we want to avoid that as well. We can solve all of these problems by rescaling A so that A is significantly less than 1. For example, suppose we use a rescaled A = T + xeTn for which T ≤ 1/3 and x ≤ 1/3. Using (4.5.2) we get −1 | zn | ≤ z ≤ | rnn | x ≤ R−1 x .
Recalling that T + iI = QR, we see that R−1 = (T + iI)−1 ≤ 1. This last inequality holds because the norm of the normal matrix (T + iI)−1 is equal to the modulus of its largest eigenvalue. Since the eigenvalues of the symmetric T are all real, those of T + iI all have modulus at least 1, so (T + iI)−1 ≤ 1. Since also x ≤ 1/3, we have | zn | ≤ 1/3, so zn is not close to −1. The conditions on the norms of T and x imply that A ≤ 2/3, so all of the eigenvalues of A lie in the compact ball K = {λ | | λ | ≤ 2/3}, and all eigenvalues of B = ϕ(A) will lie in the compact ball ϕ(K) = {μ | | μ + 2.6 | ≤ 2.4}. The restrictions of the functions ϕ and ϕ−1 to K and ϕ(K), respectively, are well behaved. Since we use a backward stable method to compute the eigenvalues of B, backward stability follows.
Chapter 5
Generalized and Matrix Polynomial Eigenvalue Problems
Many eigenvalue problems present themselves most naturally as generalized eigenvalue problems Av = λBv, where we have a pair of square matrices (A, B), also commonly referred to as a pencil A − λB. A pencil is regular if there is at least one λ ∈ C for which A − λB is nonsingular. We will consider only regular pencils. The (finite) eigenvalues of the pencil are those complex numbers λ for which A − λB is singular, i.e., det(A − λB) = 0. We now consider the question of finding these eigenvalues. If B is nonsingular, the eigenvalues of the pencil are exactly the eigenvalues of the matrix AB −1 . This is an example of a product eigenvalue problem [71, 72]. We would like to find these quantities without forming the product or computing an inverse. If B is singular, then ∞ is considered to be an eigenvalue of A − λB, as then 0 is an eigenvalue of the reciprocal pencil B − μA.
5.1 The Moler–Stewart QZ Algorithm The QZ algorithm of Moler and Stewart [55] is a version of Francis’s algorithm that computes the eigenvalues of AB −1 without forming the product explicitly, working directly on the matrices A and B. In this section we will show how to implement this algorithm using core transformations. We will assume initially that B −1 exists, but in the end we will have an algorithm that works perfectly well when B is singular. A preliminary procedure can reduce (A, B) to Hessenberg-triangular form (with A upper Hessenberg and B upper triangular) by a unitary equivalence transformation costing O(n3 ) flops [39, 73]. We will assume that this has been done. We can even assume, without loss of generality, that A is properly upper Hessenberg. Then AB −1 is also properly upper Hessenberg, and we will apply Francis’s algorithm to this matrix. We can decompose A into QS, where Q = Q1 · · · Qn−1 is a descending sequence, and S is upper triangular. Then AB −1 = QR, where R = SB −1 is upper triangular. The algorithm is exactly as described in Section 3.1 (single shift) or 3.2 (double shift), except that now the matrix R is stored in the form of the two 89
90
Chapter 5. Generalized and Matrix Polynomial Eigenvalue Problems
upper triangular constituents S and B. We just need to show how to pass a core transformation through R, and we’ll be done. First assume we have B −1 given explicitly. Then clearly we can do the operation in two steps,
S
B −1
,
Ui
Xi
Vi
putting Ui in on the right and extracting Vi on the left. But we don’t have B −1 ; we have B. So we replace the step
B −1
Xi
Ui
by the equivalent step
B
Ui∗
. Xi∗
(5.1.1)
To summarize, starting with Ui , we pass Ui∗ from left to right through B to get Xi∗ , as depicted in (5.1.1). Then we pass Xi from right to left through S, Vi
S
, Xi
to get Vi . That’s all there is to it. This algorithm clearly has about twice the flop count as the standard Francis algorithm, as it has to do twice as many of the expensive pass-through operations per iteration. As promised, we have provided an implementation that does not require B −1 . We can run the algorithm even if B −1 does not exist, and it works just fine. The performance in this case is discussed in [72, Section 6.5]. See also [67] and [70]. However, K˚ agstr¨ om and Kressner [50] argue in favor of deflating out all infinite eigenvalues beforehand if possible.
5.2 The Companion Pencil The material in this section is drawn mainly from [4]. In Section 4.2 we saw how to compute the zeros of a monic polynomial by computing the eigenvalues of its companion matrix, which is unitary-plus-rank-one. Given a nonmonic polynomial p(λ) = an λn + an−1 λn−1 + · · · + a1 λ + a0 , we can make it monic by dividing all of the coefficients by an . As far as we know so far, this is the best course of action. However, an alternative is to work with
5.2. The Companion Pencil
91
the companion pencil given by ⎡ ⎢ 1 ⎢ A − λB = ⎢ ⎣
..
⎤
−a0 −a1 .. .
.
⎡
⎥ ⎢ ⎥ ⎢ ⎥ − λ⎢ ⎦ ⎣
⎤
1 ..
⎥ ⎥ ⎥. ⎦
. 1
1 −an−1
an
It is easy to show that characteristic polynomial det(A−λB) is p. one can consider ⎡ ⎡ ⎤ 1 b1 −s0 ⎢ ⎢ 1 ⎥ .. . −s 1 .. ⎢ ⎢ ⎥ . − λ⎢ A − λB = ⎢ ⎥ . . .. .. ⎣ ⎣ ⎦ 1 bn−1 1 −sn−1 bn
More generally ⎤ ⎥ ⎥ ⎥, ⎦
(5.2.1)
where s0 = a0 , si + bi = ai for i = 1, . . . , n − 1, and bn = an . Thus there is considerable flexibility in how we distribute the coefficients between A and B. However we do it, A − λB is a Hessenberg-triangular pencil, so we can apply the Moler–Stewart algorithm to it directly. Moreover, both A and B in (5.2.1) are unitary-plus-rank-one, so we will be able to build a fast algorithm by using the storage scheme that we developed in Section 4.2. We will store the Hessenberg matrix A in QR-decomposed form: ⎤⎡ ⎤ ⎡ 1 −s1 1 ⎢ ⎢ 1 0 ⎥ 1 −s2 ⎥ ⎥⎢ ⎥ ⎢ ⎢ ⎥ ⎥ ⎢ . .. . . .. ⎥ ⎢ .. .. (5.2.2) A = QS = ⎢ ⎥. . ⎥⎢ ⎥ ⎢ ⎣ ⎦ ⎦ ⎣ 1 0 1 −sn−1 1 0 −s0 The unitary Hessenberg matrix Q will be stored as a descending sequence of n − 1 core transformations, as usual. Both B in (5.2.1) and S in (5.2.2) have the form I + zˇeTn , which emerged in Section 4.2. This means that we can use the storage scheme developed there to store these matrices, so each of S and B will be stored in 2n core transformations. Thus the total storage requirement for the pencil will be about 5n core transformations, i.e., O(n). The operation of passing a core transformation through R = SB −1 will also be much cheaper. The operation B
Ui∗
Xi∗
(see the previous section) becomes
Ui∗
Xi∗
,
92
Chapter 5. Generalized and Matrix Polynomial Eigenvalue Problems
while the operation S
Xi
Vi becomes
Vi
Xi
,
at a cost of two turnovers each (see Section 4.2). The cost of passing a core transformation through the entire matrix QSB −1 , including Q, is thus five turnovers. The cost of a Francis iteration is 5n turnovers for a single step or 11n for a double step, so the entire cost to find all roots is O(n2 ), just as for the companion matrix method. This method is not as fast as the companion matrix method, which requires only 3n turnovers for a single step and 7n for a double step.
Backward Stability We now consider the backward error of the companion pencil Francis algorithm. The zeros of a nonmonic polynomial p are found by computing the eigenvalues of a pencil A − λB of the form (5.2.1). We will assume that the vectors s and b appearing there are chosen in a reasonable way, by which we mean that max{ s , b } ≈ a , where a is the coefficient vector of p as before. Notice that in this setting we have the freedom to rescale the coefficients of the polynomial by an arbitrary factor. Thus we can always arrange to have a ≈ 1, for example. This is the advantage of this approach, and this is what allows us to get an optimal backward error bound in this case. When we run the companion pencil Francis algorithm on (A, B), we obtain Aˆ = U ∗ (A + δA)X,
ˆ = U ∗ (B + δB)X, B
where δA and δB are the backward errors. We begin with an analogue of Theorem 4.3.2. Theorem 5.2.1. The backward errors of the companion pencil Francis algorithm satisfy the following: (a) Aˆ = U ∗ (A + δA)X, where δA u a . ˆ = U ∗ (B + δB)X, where δB u a . (b) B Proof. The proof for A = QR is identical to the proof of Theorem 4.3.2. The proof for B is even simpler because B is already upper triangular; there is no unitary Q factor to take into account.
93 2
106
2
a
106
a a
10
−2
10−2
10−10 10−18
a
101
105 a
109
101
105 a
109
a ˆ − a
max{ δA , δB }
5.2. The Companion Pencil
10−10 10−18
Figure 5.1. Backward error max{ δA , δB } (left) and a ˆ − a (right) plotted against a for the companion pencil Francis algorithm [4].
The left panel of Figure 5.1 gives numerical confirmation of Theorem 5.2.1. We see that the growth is linear in a as claimed. Now let us consider the backward error on the polynomial coefficient vector a. We will compute an optimally scaled backward error as follows. Given the computed roots, we build the monic polynomial p˜ (with coefficient vector a ˜) that has these as its exact roots. We then let pˆ = γ p˜ (with coefficient vector a ˆ), where γ is chosen so that γ a ˜ − a is minimized. We hope to get a backward error a ˆ − a that is linear in a , but the right panel of Figure 5.1 shows what we actually get. The backward error seems to grow quadratically in a , which is a disappointment. Combining Theorem 5.2.1 with the analysis of Edelman and Murakami [31], we get the following result. Theorem 5.2.2. The backward error of the companion pencil Francis algorithm on the polynomial coefficient vector satisfies 2
a ˆ − a u a . So far it looks like the companion matrix algorithm is (surprisingly) more accurate than the companion pencil algorithm, but we have not yet taken into account the freedom to rescale that we have in the companion pencil case. Theorem 5.2.3. Suppose we compute the zeros of the polynomial p with coefficient vector a by applying the companion pencil Francis algorithm to polynomial p/ a with coefficient vector a/ a . Then the backward error satisfies a ˆ − a u a . Proof. By Theorem 5.2.2 the backward error on b = a/a satisfies 2
ˆb − b u b = u. Therefore the backward error on a, which is a ˆ−a = a (ˆb−b), satisfies a ˆ − a = ˆ a b − b u a .
94
Chapter 5. Generalized and Matrix Polynomial Eigenvalue Problems 2
2
a
106
a
10−2
10−2
10−10 10−18
a
101
105 a
109
a
101
105 a
109
a ˆ − a
a ˆ − a
106
10−10 10−18
Figure 5.2. Backward error of the scaled structured algorithm (left) and unstructured algorithm (right) [4].
In the interest of full disclosure we must point out that this argument applies equally well to any stable method for computing the eigenvalues of the pencil. If we use, for example, the unstructured Moler–Stewart algorithm on the rescaled polynomial p/ a , we will get the same result. Figure 5.2 provides numerical confirmation of Theorem 5.2.3. In the left panel we have the backward error for the companion pencil Francis algorithm, and in the right panel we have the backward error of the unstructured Moler–Stewart code from LAPACK. When we first devised the companion pencil algorithm, we fully expected to find classes of problems for which it succeeds but the companion matrix algorithm fails. So far we have not found any; the companion matrix algorithm is much more robust than we had originally believed. Since the companion matrix algorithm is faster, our recommendation at this time is to use it and not the companion pencil algorithm. We do not exclude the possibility that classes of problems for which the companion pencil algorithm has superior performance will be found in the future.
5.3 Matrix Polynomial Eigenvalue Problems This section is drawn mainly from [6]. Consider a regular matrix polynomial of some positive degree d: P (λ) = Pd λd + Pd−1 λd−1 + · · · + P1 λ + P0 , where the coefficient matrices Pi are k × k. The adjective regular means that there is at least one λ ∈ C such that P (λ) is nonsingular. A complex number λ is called an eigenvalue of P if P (λ) is singular. If the leading coefficient Pd is singular, then ∞ is also considered to be an eigenvalue of P . We will focus on the case when Pd is nonsingular.20 This eigenvalue problem contains as special cases the problems we have considered so far in this chapter. The case d = 1, k = n was studied in Section 5.1, and the case d = n, k = 1 was discussed in Section 5.2. The algorithm that we will describe below requires a preprocessing step, so let’s deal with that immediately. The leading and trailing coefficients Pd and P0 20 Infinite
eigenvalues pose no problem for our algorithm, as is shown in [6]. See also [70].
5.3. Matrix Polynomial Eigenvalue Problems
95
can be reduced to Hessenberg-triangular form by a unitary equivalence and then further reduced to triangular form by the Moler–Stewart algorithm, which was discussed in Section 5.1. That is, we can find unitary U and X such that the matrices Pˆd = U ∗ Pd X and Pˆ0 = U ∗ P0 X are both upper triangular. We can also then compute Pˆi = U ∗ Pi X, i = 1, . . . , d − 1, to yield the transformed matrix polynomial Pˆ (λ) = Pˆd λd + Pˆd−1 λd−1 + · · · + Pˆ1 λ + Pˆ0 , which clearly has the same eigenvalues as the original. The cost of this transformation is O(k 3 ) for the reduction to Hessenberg-triangular form followed by the Moler–Stewart algorithm. Each of the updates Pˆi = U ∗ Pi X costs O(k 3 ) as well, for a total of O(dk 3 ). From this point on we will drop the hats and simply assume that P0 and Pd are upper triangular. Generalizing from Section 5.2, we can easily show that λ is an eigenvalue of the matrix polynomial P if and only if it is an eigenvalue of the block companion pencil ⎡ ⎤ ⎤ ⎡ I −P0 ⎢ ⎥ ⎢ I .. −P1 ⎥ ⎢ ⎥ ⎥ ⎢ . − λ⎢ A − λB = ⎢ ⎥ ⎥. . . .. .. ⎣ ⎦ ⎦ ⎣ I Pd I −Pd−1 More generally one can consider ⎡ −M0 ⎢ I −M1 ⎢ A − λB = ⎢ .. . .. ⎣ . I
−Md−1
⎤
⎡
⎢ ⎥ ⎢ ⎥ ⎥ − λ⎢ ⎣ ⎦
I
N1 ..
. I
Nd−1 Nd
⎤ ⎥ ⎥ ⎥, ⎦
(5.3.1)
where M0 = P0 , Mi + Ni = Pi for i = 1, . . . , d − 1, and Nd = Pd . Because Nd is upper triangular, so is B. A is a block upper Hessenberg matrix, and ⎤ ⎡ ⎤⎡ I I −M1 ⎢ I ⎢ I −M2 ⎥ 0 ⎥ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ .. . . . .. .. .. ⎥ ⎢ (5.3.2) A = QS = ⎢ ⎥. . ⎥ ⎢ ⎥⎢ ⎦ ⎣ ⎦ ⎣ I −Mn−1 I 0 −M0 I 0 Because M0 is upper triangular, S is a true upper triangular matrix. Thus (5.3.2) is a QR decomposition. One of the ways to solve this eigenvalue problem is to reduce the pencil A − λB to Hessenberg-triangular form and then apply the Moler–Stewart algorithm, ignoring the companion structure. This costs O(n3 ), where n = dk (the dimension of A and B), and so O(d3 k 3 ) flops. We will develop a faster algorithm that does the job in O(d2 k 3 ) and can therefore be expected to be advantageous in problems where d is large. Our algorithm is based on factorizations of the matrices B and S, which have the same form. Each of these has the generic form I X1 , (5.3.3) X2
96
Chapter 5. Generalized and Matrix Polynomial Eigenvalue Problems
where I is a large identity matrix, X1 is gular. We can further refine this to ⎡ I X11 ⎣ X21 0
where X1 = X11
X12 , X2 = X021
tall and skinny, and X2 is upper trian⎤ X12 X22 ⎦ , X32
X22 X32
, and X21 and X32 are upper trian 1 gular. Here we are just splitting the columns of X into two skinnier sets of X2 columns. Next we note the simple factorization ⎡ ⎤ ⎤ ⎡ ⎤⎡ I X11 X12 I X12 I X11 ⎣ ⎦, X21 X22 ⎦ = ⎣ I X22 ⎦ ⎣ X21 X32 X32 I which one easily verifies. Now we can apply this recursively to prove the following factorization. Theorem 5.3.1. Let B be an n × n matrix (n = dk) of the form (5.3.3), where X2 is k × k and upper triangular. Then B = Bn Bn−1 · · · Bn−k+1 , where each Bi has the form ⎡ ⎢ ⎢ ⎢ Bi = ⎢ ⎢ ⎣
1 ..
⎤
x1i .. .
. 1
⎥ ⎥ ⎥ ⎥, ⎥ ⎦
xi−1,i xii In−i
where the ith column of Bi is the same as the ith column of B. Notice that we get this factorization for free. The factors Bi are unitary-plus-rank-one and of the form that we studied in Section 4.2. Thus we can store them using the compact scheme developed there. The unitary part can be represented as a product of 2n core transformations consisting of one ascending sequence and one descending sequence. Moreover, these core transformations have the information about the rank-one part encoded within them, so we do not have to store the rank-one part explicitly. Now let’s apply this to our problem. The eigenvalue problem for the pencil A − λB is equivalent to the eigenvalue problem for AB −1 = QSB −1 = QR, where R = SB −1 . S and B both have factorizations S = Sn · · · Sn−k+1 and B = Bn · · · Bn−k+1 given by Theorem 5.3.1, so −1 R = Sn · · · Sn−k+1 Bn−k+1 · · · Bn−1 .
(5.3.4)
We will need to be able to pass core transformations through R. This clearly requires passing through 2k compactly stored unitary-plus-rank-one matrices at a cost of two turnovers each. Of course we do not form or operate on the inverse factors Bi−1 ; we do the equivalent operations on the uninverted Bi . Thus each pass-through R will cost 4k turnovers.
5.3. Matrix Polynomial Eigenvalue Problems
97
We can’t quite apply Francis’s algorithm yet, as ⎡ ⎢ I ⎢ ⎢ Q=⎢ ⎢ ⎣
..
⎤
I 0 .. .
⎥ ⎥ ⎥ ⎥ ⎥ 0 ⎦ 0
. I I
is not upper Hessenberg. We will have to reduce the matrix to upper Hessenberg form. Clearly Q = C k , where ⎡ ⎢ 1 ⎢ ⎢ C=⎢ ⎢ ⎣
..
⎤
1 0 .. .
⎥ ⎥ ⎥ ⎥ ∈ Rn×n ⎥ 0 ⎦ 1 0
. 1
is upper Hessenberg. Thus C can be stored as a descending sequence of n−1 core transformations. In fact C = F1 · · · Fn−1 , where each Fi is a core transformation
with active part F = 01 10 . Q = C k is represented by k such descending sequences. In the case k = 3, d = 3, and n = 9, we have
A = QR =
R
.
We reduce A to Hessenberg form by chasing away k − 1 of the k descending sequences that define Q. There are multiple ways to do this. We will start with the core transformation shown in blue below, and we will chase it through the matrix like this:
.
Eventually it gets to a position, shown in red, where it can be fused with one of the core transformations at the bottom. Next we go after the core transformation
98
Chapter 5. Generalized and Matrix Polynomial Eigenvalue Problems
shown in green below. We chase it away like this:
.
Now we have the situation A = QR =
.
One is tempted to try to remove the top core transformation (shown in red) from the first descending sequence. The reader can try this and see that s/he runs into difficulties. In fact we know from Galois theory that there is no direct method that reduces a matrix to triangular form, so we should not expect to be able to remove the red core transformation. Instead we remove the blue one and then the green one. Then we move on to the third layer. Eventually we will have completely removed all but one of the descending sequences, at which point the matrix is upper Hessenberg. Let’s do a rough flop count for the reduction to Hessenberg form. Consider the first core transformation that we eliminated. Its trip through the Q part of the matrix required exactly n−1 turnovers for a cost of O(n). Each complete trip through Q lowered the core transformation by k positions, so the total number of times that the core transformation passed through R was about n/k. Recall from (5.3.4) that R is actually a product of 2k unitary-plus-rank-one matrices, and the cost of passing a core transformation through R is 4k turnovers. Thus the total number of turnovers for the “R” part of the journey was about 4k(n/k) = 4n. Combining the Q and R costs, we see that about 5n total turnovers are required. This is the cost of removing a single core transformation from the top layer of Q. We remove k − 1 core transformations from the top layer, so the cost for the entire top layer is about 5nk turnovers. The cost of removing the second layer and each subsequent layer is slightly less. Taking this into account, we see that the total cost of removing all n layers is about 2.5n2 k = 2.5k 3 d2 turnovers. Thus the total cost of the reduction to Hessenberg form is O(k 3 d2 ) flops. Once the reduction to Hessenberg form is complete, we use Francis iterations to find the eigenvalues. Each single iteration passes a misfit through R about n times at a cost of 4k turnovers each time.21 Thus the cost of one iteration is 21 This
is a massive product eigenvalue problem of the type studied in [71] and [72, Chapter 8].
5.3. Matrix Polynomial Eigenvalue Problems
99
4nk turnovers. Reckoning O(n) iterations total, as usual, the total flop count is O(n2 k) = O(k 3 d2 ). This is the same order of magnitude as for the reduction to Hessenberg form. We conclude that the cost of the whole process is O(k 3 d2 ). Thus our method is advantageous when d is large. See [6] for performance of the algorithm in the complex case.
Eigenvector Computation If eigenvectors are wanted as well, a complete set can be computed for an additional O(k 3 d2 ) flops. See [6] for details.
Backward Stability The method that we have derived in this section is normwise backward stable if the matrix polynomial is scaled appropriately, as we will show in Theorem 5.3.5. First of all, recall that there is a preliminary step where we transform P0 and Pd to triangular form. This requires a unitary reduction to Hessenberg-triangular form, application of the Moler–Stewart QZ algorithm, and some matrix-matrix multiplications. These operations are all backward stable, so the preliminary step is backward stable. Now suppose that we have a block companion pencil (A, B) with A = QS = QSn · · · Sn−k+1 and B = Bn · · · Bn−k+1 , where B and S are products of unitaryplus-rank-one matrices as described in Theorem 5.3.1. The matrix Q = C k is unitary but not yet Hessenberg. Our algorithm begins with the reduction of Q to Hessenberg form and then continues with the Francis (Moler–Stewart) iterations to triangularize the pencil. We deal with both stages at once. In the end we ˆ B) ˆ from which we can read off the eigenvalues. In floating-point have a pencil (A, ˆ −1 = U ∗ (A + δA)(B + δB)−1 U , where U is the product arithmetic we have AˆB of all of the core transformations that participated in similarity transformations. ˆ −1 = U ∗ (A + δA)XX ∗ (B + δB)−1 U or, better yet, More precisely we have AˆB Aˆ = U ∗ (A + δA)X,
ˆ = U ∗ (B + δB)X. B
(5.3.5)
We want to show that the backward errors δA and δB are small. The first equation in (5.3.5) can be broken down further into ˆ = U ∗ (Q + δQ)V, Q
Sˆ = V ∗ (S + δS)X,
(5.3.6)
ˆ S. ˆ where Aˆ = Q Since S = Sn · · · Sn−k+1 , we have S + δS = (Sn + δSn ) · · · (Sn−k+1 + δSn−k+1 ),
(5.3.7)
where δSj denotes backward error on the unitary-plus-rank-one factor Sj for j = n − k + 1, . . . , n. Fix j for now, and recall the identity-plus-spike form of Sj : ⎤ ⎡ 1 z1j ⎥ ⎢ .. .. ⎥ ⎢ . . ⎥ ⎢ Sj = ⎢ ⎥, 1 z j−1,j ⎥ ⎢ ⎦ ⎣ zjj In−j
100
Chapter 5. Generalized and Matrix Polynomial Eigenvalue Problems
which can clearly be written as Sj = Sj,u + Sj,o = I + (z − ej )y T , where z=
z1j
···
zjj
0 ···
0
T
,
and y = ej . Since we prefer to work with unit vectors, we let z = α x and write Sj = I +(αx−ej ) y T , where x = y = 1, and | α | = z = Sj,o ≤ Sj ≤ S . Lemma 5.3.2. The backward error δSj is given implicitly by Sj + δSj = I + δI + (α(x + δx) − ej )(y + δy)T , and therefore
. δSj = δI + α δx y T + (α x − ej ) δy T ,
where δI u, δx u, and δy u. This lemma is analogous to Lemma 4.3.3, and the proof is the same. We have focused on the factors of S here, but the exact same analysis applies to the factors of B. Theorem 5.3.3. Let δA, δB, δQ, and δS be the backward errors defined in (5.3.5) and (5.3.6). Then (a) δQ u, (b) δS u S 2 , 2
(c) δB u B , (d) δA u A 2 . . Proof. If (a) and (b) are true, then (d) follows, since δA = δQ S + Q δS, and A = S . Part (a) is routine, involving nothing but backward stable operations on unitary matrices, and we leave it as an exercise for the reader. The proofs of (b) and (c) are identical because S and B have identical structure. We will prove (b). Expanding (5.3.7) we obtain . δS =
n
Sn · · · Sj+1 δSj Sj−1 · · · Sn−k+1 .
(5.3.8)
j=n−k+1
We focus on a single term from this sum. From Lemma 5.3.2 we know that . δSj = δI + α δx y T + (α x − ej ) δy T ,
T where x is a unit vector of the form x = x1 · · · xj 0 · · · 0 and y = ej . We are going to substitute this form for δSj into (5.3.8), but first we note that Sn · · · Sj+1 (α x − ej ) = α x − ej because xj+1 = · · · = xn = 0, and similarly y T Sj−1 · · · Sn−k+1 = y T .
5.3. Matrix Polynomial Eigenvalue Problems
101
Therefore . Sn · · · Sj+1 δSj Sj−1 · · · Sn−k+1 = Sn · · · Sj+1 δI Sj−1 · · · Sn−k+1 + (αx − ej ) δy T Sj−1 · · · Sn−k+1 (5.3.9) + α Sn · · · Sj+1 δx y T . Noting that Sn · · · Sj+1 ≤ S , Sj−1 · · · Sn−k+1 ≤ S , and | α | ≈ Sj ≤ 2 S , we see that all three terms on the right-hand side of (5.3.9) are u S . 2 Therefore, by (5.3.8), δS u S . As an aside we point out that Lemma 5.3.2 (like Lemma 4.3.3) depends on the assumption that the turnover satisfies (1.4.9). It turns out that Theorem 5.3.3 remains valid even if the turnover does not use (1.4.9), but a more careful proof is required. See [6]. We are not happy about the squares on the norms in the bounds in Theorem 5.3.3, but we can get rid of them by scaling the problem appropriately. First we define a norm on the matrix polynomial P (λ) in some reasonable way, i.e., d 2 Pi . P = i=0
When we build the pencil A − λB (5.3.1) from P , we have some flexibility in the choice of the blocks Mi and Ni . We just need to satisfy Mi + Ni = Pi . Now, in the interest of stability, we will assume that the blocks have been chosen so that max{ Mi , Ni } ≈ Pi . This ensures that max{ A , B } ≈ P . If we let η = P , then we can form a scaled pencil Pˇ (λ) = η −1 P (λ) satˇ such isfying Pˇ = 1. Then, when we build a corresponding pencil Aˇ − λB that ˇ i , Nˇi } ≈ Pˇi , max{ M ˇ ≈ 1. As an immediate consequence of Theowe will have Aˇ ≈ 1 and B rem 5.3.3 we have the following result. Theorem 5.3.4. If we run our companion pencil algorithm on the scaled pencil ˇ satisfy as described immediately above, the backward errors on Aˇ and B δ Aˇ u
and
ˇ u. δB
It is also important to consider the backward error on the coefficients of the polynomial. We get a good result if we scale before computing. Theorem 5.3.5. Suppose we apply our companion pencil algorithm to the scaled matrix pencil Pˇ (λ) = η −1 P (λ), where η = P . Then the computed eigenvalues are the exact eigenvalues of a perturbed pencil P (λ) + δP (λ) = (Pd + δPd )λd + · · · + (P1 + δP1 )λ + (P0 + δP0 ), where δP u P .
102
Chapter 5. Generalized and Matrix Polynomial Eigenvalue Problems
Proof. By Theorem 5.3.4 the computed eigenvalues are exactly the eigenvalues ˇ − λ(B ˇ + δ B) ˇ with Aˇ u and B ˇ u. By Van of a pencil (Aˇ + δ A) Dooren and Dewilde [61, Section 4] the computed eigenvalues are therefore the exact eigenvalues of a perturbed pencil Pˇ (λ) + δ Pˇ (λ) with δ Pˇ u. If we now multiply this pencil by η, we deduce that the computed eigenvalues are the exact eigenvalues of P (λ) + δP (λ), where δP (λ) = η δ Pˇ (λ) and δP = η δ Pˇ P u. For more details see [6].
5.4 Unitary-Plus-Rank-k Matrices In the previous section we showed how to solve matrix polynomial eigenvalue problems by rewriting them as pencils involving upper triangular matrices of the form I X1 , (5.4.1) X2 where X1 is tall and skinny, and X2 is upper triangular. These matrices are clearly unitary-plus-rank-k (in fact, identity-plus-rank-k), where k is the number of columns in X1 . We dealt with these matrices by factoring each one into a product of k unitary-plus-rank-one matrices, as shown in Theorem 5.3.1. In this section we demonstrate that any unitary-plus-rank-k matrix can be handled in the same way. We content ourselves with a brief sketch. Let A be a unitary-plus-rank-k matrix. Thus A = U + ZY , where U is unitary, Z is n × k, Y is k × n, and ZY has full rank k. We can equally well write this as A = U (I + XY ), where X = U −1 Z, and this form will be more convenient for us. We sketch a reduction to block Hessenberg form. Each step of the reduction will be a unitary similarity transformation A ← Q∗ AQ, which will be executed as U ← Q∗ U Q and I + XY ← Q∗ (I + XY )Q. We partition U , X, and Y into blocks: ⎡ ⎤ ⎤ ⎡ X1 U11 U12 · · · U1d ⎢ X2 ⎥ ⎢ U21 U22 · · · U2d ⎥
⎢ ⎥ ⎥ ⎢ , X = ⎢ . ⎥ , Y = Y1 Y2 · · · Yd , U =⎢ . ⎥ . . . . . . . . ⎣ . ⎦ ⎣ . . . . ⎦ Ud1
Ud2
···
Udd
Xd
where each of the main-diagonal blocks Uii is k×k, except that U11 will be smaller if k does not divide n. Here d = n/k. X and Y are partitioned conformably to U . The sole purpose of this partition is to simplify the explanation of the algorithm.
The first step is to multiply Y1 · · · Yd−1 Yd by an n × n unitary matrix Q on the right to obtain
0 · · · 0 Y˜d , where Y˜d is upper triangular. This is a short, fat RQ decomposition. Q can be a product of k elementary reflectors, for example. In the interest of nonproliferation of notation, we will drop the tilde from Y˜d and simply refer to this new
5.4. Unitary-Plus-Rank-k Matrices
103
upper triangular matrix as Yd . To complete a similarity transformation we do the updates X ← Q∗ X and U ← Q∗ U Q. This, of course, changes all of the blocks Xi and Uii , but we do not give them new names. At this point it is convenient to merge Yd into X. That is, we form a new X by the updates Xi ← Xi Yd , i = 1, . . . , d. This leaves us with
Y = 0 ··· 0 I , (5.4.2) a form which Y will retain from this point on. Now we have ⎡ ⎤ I X1 ⎢ ⎥ .. .. ⎢ ⎥ . . I + XY = ⎢ ⎥, ⎣ I Xd−1 ⎦ I + Xd
(5.4.3)
which is in the form (5.4.1), except that Xd is not upper triangular. At a cost of O(k 3 ) we can transform Xd to upper triangular form by first transforming it to upper Hessenberg form and then applying Francis’s algorithm. Let’s say ˜ is the transforming matrix such that Q ˜ ∗ Xd Q ˜ is upper triangular. Recycling Q ˜ notation, let Q = diag{In−k , Q} and perform a similarity transformation with Q. Looking at the form (5.4.3) we see that the update of I + XY is accomplished by ˜ i = 1, . . . , k −1, and I +Xd ← I + Q ˜ The identity-plus-rank-k ˜ ∗ Xd Q. Xi ← Xi Q, part retains the form (5.4.3), but now Xd is upper triangular. We must also do an update U ← Q∗ U Q, which affects the dth block row and dth block column of U . The rest of the reduction consists of transforming U to block upper Hessenberg form by rows from bottom to top. The first step multiplies
Ud1 · · · Ud,d−2 Ud,d−1 ˜ of size (n − k) × (n − k) to transform it to on the right by a unitary matrix Q the form
0 · · · 0 Ud,d−1 with the new Ud,d−1 upper triangular. This is another short, fat RQ decom˜ position, and again be taken to be a product of k elementary reflectors. Q can ˜ Letting Q = diag Q, Ik , we do an update U ← Q∗ U Q to transform U to the form ⎤ ⎡ U11 U12 ··· U1,d−1 U1d ⎢ U21 U22 ··· U2,d−1 U2d ⎥ ⎥ ⎢ ⎢ .. ⎥ .. .. ⎥ ⎢ . . . ⎥ ⎢ ⎣ Ud−1,1 Ud−1,2 · · · Ud−1,d−1 Udd ⎦ 0 0 ··· Ud,d−1 Udd with Ud,d−1 upper triangular. When we apply this similarity transformation to the rank-k part, the blocks X1 , . . . , Xd−1 are updated, butthe upper triangular ∗ ˜ ∗ , Ik does not affect the Xd is untouched because the update by Q = diag Q last block row. The form of Y remains as in (5.4.2). The same will be true of all subsequent steps. The next step creates zero blocks in row d − 1, transforms the block Ud−1,d−2 to upper triangular form, and so on. After d − 1 steps, we arrive at block
104
Chapter 5. Generalized and Matrix Polynomial Eigenvalue Problems
Hessenberg form. In this form all blocks Ui,j with i ≥ j + 2 are zero. All blocks Ui,i−1 are upper triangular, except that U21 , which will typically have fewer than k columns, will be upper trapezoidal. The flop count for this reduction is O(n3 ). Now we can forget about the partition and note that the transformed U has k-Hessenberg form. This means that only the first k diagonals below the main diagonal can contain nonzero entries. Thus U can be factored into a product of k descending sequences of core transformations. The identity-plus-rank-k part I + XY has the form (5.4.3) with Xd upper triangular, so it can be factored into a product of k unitary-plus-rank-one matrices as shown in Theorem 5.3.1. Thus is exactly the form we considered in the previous section. In fact, it’s even a bit simpler. We can reduce the matrix to upper Hessenberg form and then apply Francis’s algorithm to find the eigenvalues as shown in the previous section in O(d2 k 3 ) flops.
Chapter 6
Beyond Upper Hessenberg Form
Up to now we have restricted our attention to upper Hessenberg matrices. In this chapter we will introduce a large family of condensed forms that are generalizations of upper Hessenberg form, and we will develop generalizations of Francis’s algorithm that operate on them. In [64] we have shown that the performance of the algorithm can be affected by the choice of condensed form, and since the form can evolve over the course of iterations, there is the prospect of building a faster algorithm that chooses the condensed form adaptively. Although the realization of this prospect lies in the future, we are pleased to include this new theory here because we find it really interesting, and we hope the reader will too. The material in this chapter is drawn mainly from [62] and [64], but see also [63].
6.1 QR Decomposition by Core Transformations Consider a completely general matrix
A =
×××××× ×××××× ×××××× ×××××× ×××××× ××××××
.
(For the first part of the chapter we will stick with the case n = 6 for illustration.) Our methodology in this monograph has been to work with matrices in QRdecomposed form, so let’s take a close look at the QR decomposition of A. The usual tool for performing QR decompositions is the reflector, as one reflector suffices for each column. But this is a book about core transformations, so let’s look at how a QR decomposition is done by cores. First we create a zero in position (n, 1):
×××××× ×××××× ×××××× ×××××× ×××××× ××××××
=
105
×××××× ×××××× ×××××× ×××××× ×××××× ×××××
.
106
Chapter 6. Beyond Upper Hessenberg Form
Then we make a zero in position (n − 1, 1):
×××××× ×××××× ×××××× ×××××× ×××××× ××××××
=
×××××× ×××××× ×××××× ×××××× ××××× ×××××
.
Continuing in this way, we clear out the entire first column:
×××××× ×××××× ×××××× ×××××× ×××××× ××××××
=
×××××× ××××× ××××× ××××× ××××× ×××××
.
Then we move on to the second column, starting with
×××××× ×××××× ×××××× ×××××× ×××××× ××××××
=
×××××× ××××× ××××× ××××× ××××× ××××
=
×××××× ××××× ×××× ×××× ×××× ××××
.
=
×××××× ××××× ×××× ××× ××× ×××
.
and continuing to
×××××× ×××××× ×××××× ×××××× ×××××× ××××××
Then we clear out the third column:
×××××× ×××××× ×××××× ×××××× ×××××× ××××××
Continuing in this manner we clear out n − 1 columns, reducing the matrix to triangular form:
×××××× ×××××× ×××××× ×××××× ×××××× ××××××
=
×××××× ××××× ×××× ××× ×× ×
.
6.2. Reduction to Condensed Form
107
If we now invert the core transformations to move them to the other side, we obtain our QR decomposition ×××××× ×××××× ×××××× ×××××× ×××××× ××××××
=
×××××× ××××× ×××× ××× ×× ×
Q
A
,
R
where the core transformations that comprise Q are in the shape of a pyramid.
6.2 Reduction to Condensed Form Now that we have A in QR-decomposed form, we will show how to reduce it to upper Hessenberg form. Then we will introduce some generalizations.
A = QR =
.
(6.2.1)
To get to upper Hessenberg form we need to chase away most of the core transformations that comprise the matrix
Q =
,
leaving only the single descending sequence that is shown in black. Before we begin, we spread out the Q part of the picture for clarity:
A = QR =
.
The first step is to eliminate the core transformation in the lower left-hand corner of the pyramid by doing a similarity transformation, then passing the core transformation through R, and then fusing it with the core transformation
108
Chapter 6. Beyond Upper Hessenberg Form
in the lower right-hand corner of the pyramid:
.
Then we just move up the left side of the pyramid. The second elimination is
,
,
the third is
and the fourth is
.
After four (or, in general, n − 2) eliminations we have
,
6.2. Reduction to Condensed Form
109
which might be better displayed more compactly as
.
We have removed one layer from the left-hand side of the pyramid. We can now proceed in just the same way to remove a second layer. This requires n − 3 eliminations in general and results in
.
We continue to peel off layers. After removing a total of n − 2 layers, we arrive at upper Hessenberg form
A = QR =
.
(6.2.2)
We leave it to the reader to check that the flop count is O(n3 ), as expected. (This is deterministic, not iterative.) In (6.2.2) we have recycled the symbols A, Q, and R, but it should be understood that these are completely different from the A, Q, and R in (6.2.1). We will do this repeatedly in this section; there will be many equations A = QR, but they are all different.
A Different Reduction To get to upper Hessenberg form we started from (6.2.1) with Q in the form of a pyramid and stripped core transformations from the left-hand side of the pyramid. One can equally well arrive at a (different) condensed form by removing core transformations from the right-hand side of the pyramid. To illustrate this we again spread out Q, but now in the other direction:
A = QR =
.
110
Chapter 6. Beyond Upper Hessenberg Form
We begin by eliminating the core transformation in the lower right-hand corner of the pyramid. This is done by passing it to the right through R, then doing a similarity transformation to move it to the left, and then fusing it with the core transformation in the lower left-hand corner of the pyramid:
.
Then we move on up the right side of the pyramid. The second elimination is
,
,
the third is
and the fourth is
.
After four (or, in general, n − 2) eliminations we have
,
6.2. Reduction to Condensed Form
111
or more compactly
.
We have removed one layer from the right-hand side of the pyramid. Now we can remove another layer by the exact same procedure to obtain
.
We continue in this manner. After removing n − 2 layers, we arrive at the condensed form
A = QR =
.
(6.2.3)
This is clearly not upper Hessenberg. The matrix Q is itself lower Hessenberg, and if we multiply Q and R together, we get a matrix that has no obvious structure. Of course we have no intention of multiplying Q by R; we will keep the matrix in the factored form (6.2.3), which turns out to be every bit as useful as upper Hessenberg form. In fact (if R is nonsingular) it is the inverse of an upper Hessenberg matrix, so we will call it inverse Hessenberg form. (It is also a lower quasi-separable matrix.)
Exponentially Many Reductions The two reductions that we have considered so far started with A in QRdecomposed form
A = QR =
,
(6.2.4)
where the core transformations that comprise Q form a pyramid. (For the continuing discussion we find it convenient to move up to larger examples, so we are
112
Chapter 6. Beyond Upper Hessenberg Form
now taking n = 9.) For the first reduction we removed layers from the left-hand side of the pyramid, while for the second we removed layers from the right. These two procedures can be mixed. For example, we can remove a layer from the left side first and then remove a layer from the right, resulting in
.
If we now remove another layer from the left, followed by another layer from the right, we get
.
Continuing this process to its conclusion, we obtain the condensed form A = QR =
.
(6.2.5)
The matrix Q in this form is intermediate between upper and lower Hessenberg. It’s an example of what’s come to be known as a CMV matrix.22 Now we have three condensed forms, but there are many more. Starting from the pyramid form (6.2.4), we can strip away layers from the left and right in any order. To keep track of the 2n−2 possibilities, we introduce the notion of a position vector, a vector with n − 2 components consisting of the symbols and r, for example
p= r r r r . (6.2.6) 22 The name “CMV” comes from the initials of the authors Cantero, Moral, and Vel´ azquez [23], but they were far from the first to discuss matrices of this type. Perhaps the earliest mention was due to Kimura [53] in the context of digital filter design. See also [1, 21, 68], for example.
6.2. Reduction to Condensed Form
113
The two r’s at the beginning of p indicate that we should begin by removing two layers from the right side of the pyramid:
.
Then the three ’s in a row indicate that the next three layers should be taken from the left side:
.
The final two r’s indicate that the last two layers should come off of the right side, resulting in the condensed form
A = QR =
,
(6.2.7)
which we refer to as the condensed form associated with the position vector p in (6.2.6). In this and each of the condensed forms that we have considered so far, there is one core transformation Qi acting on rows i and i + 1 for each i from 1 to n − 1. We now formally define a condensed form (or extended Hessenberg form) to be a matrix in the form A = Qσ1 Qσ2 · · · Qσn−1 R,
(6.2.8)
where σ1 , . . . , σn−1 is a permutation of 1, . . . , n−1, Qσi is a core transformation acting on rows σi and σi +1, and R is upper triangular. A proper condensed form is a condensed form in which all of the core transformations Qσi are nontrivial. For example, upper Hessenberg form is the condensed form A = Q1 · · · Qn−1 R corresponding to the identity permutation. The inverse Hessenberg form (6.2.3) is the condensed form A = Qn−1 · · · Q1 R corresponding to the permutation that
114
Chapter 6. Beyond Upper Hessenberg Form
reverses the integers 1, . . . , n − 1. The condensed form (6.2.5) with Q in CMV form is A = Q1 Q3 Q5 · · · Q2 Q4 Q6 · · · R. But now we notice that the permutation notation is less than satisfactory because multiple permutations can refer to the same form. This is so because Q1, Q3, Q5 and all of the odd Qi commute with each other, so they can be put in any order. The same is true for all the even Qi, so they can also be put in any order. All that matters is that the odds come before the evens. Thus any permutation that lists all of the odd numbers before any of the even numbers corresponds to the condensed form (6.2.5). The condensed form in (6.2.7) is A = Q3 Q2 Q1 Q8 Q7 Q4 Q5 Q6 R, but it is also Q3 Q2 Q1 Q4 Q8 Q7 Q5 Q6 R because Q4 commutes with Q7 and Q8. The reader can easily discover many other permutations that correspond to this condensed form. The strength of the position vector notation is that each vector p corresponds to a unique condensed form. We introduced this notation in the context of stripping layers from the pyramid form (6.2.4), but there is a simpler interpretation. Referring to our example (6.2.6), which led to (6.2.7), we see that the first component p1 = r implies that Q1 sits to the right of Q2. The second component p2 = r implies that Q2 sits to the right of Q3. The third component p3 = ℓ implies that Q3 sits to the left of Q4, and so on. In general, if pi = ℓ (resp., pi = r), this means that Qi lies to the left (resp., right) of Qi+1. This is all that matters. We don't care about the relative positions of, say, Q2 and Q4 because these two core transformations commute. We do care about the relative positions of Q2 and Q3, and also of Q3 and Q4. The position vector gives us this information. For each condensed form there is a unique position vector and vice versa. There are 2^{n−2} position vectors and exactly as many condensed
forms. The position vector corresponding to upper Hessenberg form is p = ( ℓ  ℓ  · · ·  ℓ ), the vector corresponding to inverse Hessenberg form (6.2.3) is p = ( r  r  · · ·  r ), and the vector corresponding to the CMV form (6.2.5) is p = ( ℓ  r  ℓ  r  · · · ). Here is one more example. The position vector
q= r r r corresponds to the condensed form
A = QR = [core-transformation diagram].
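The pairwise interpretation of the position vector is easy to turn into code. The following MATLAB sketch is our own illustration, not code from the book; the function name condensed_Q and the convention of storing p as a character vector of 'l' and 'r' are assumptions of ours. It builds the unitary factor Q of a condensed form from given 2 × 2 cores in an order consistent with a given position vector.

% Build Q consistent with a position vector p.  G is a cell array of n-1
% unitary 2x2 blocks; p is a char vector of 'l'/'r', with p(i) = 'l' meaning
% that Q_i lies to the left of Q_{i+1} in the product.
function Q = condensed_Q(G, p)
  n = numel(G) + 1;
  order = 1;                        % left-to-right order of the factors
  for i = 1:n-2
      if p(i) == 'l'
          order = [order, i+1];     % Q_{i+1} goes to the right of Q_i
      else
          order = [i+1, order];     % Q_{i+1} goes to the left of Q_i
      end
  end
  Q = eye(n);
  for k = order                     % multiply the cores in left-to-right order
      Ck = eye(n);  Ck(k:k+1, k:k+1) = G{k};
      Q = Q * Ck;
  end
end

Only the consecutive constraints matter: nonadjacent cores commute, so any ordering consistent with p yields the same product.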
The numerous condensed forms for unitary matrices introduced here appeared already in the paper by Kimura [53] on realizations of digital filters.
6.3 Core-Chasing Algorithms on Twisted Forms
The condensed forms that we have just introduced are often referred to as twisted forms or zigzag forms. So far we know how to execute Francis's algorithm as a
core-chasing algorithm on upper Hessenberg matrices. Now we are going to introduce generalizations of Francis’s algorithm that chase misfits on arbitrary twisted forms. For this we must assume that A is nonsingular.23 We also assume that we have a proper condensed form, meaning that all of the core transformations that comprise Q are nontrivial. It is easy to check that if the core transformation Qi is trivial, then the problem decouples into two smaller subproblems, one of size i and the other of size n − i.
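In code the decoupling test is a one-line check on each core. Here is a minimal MATLAB sketch (our own illustration; the precise deflation criterion used in practice may differ), assuming each core Qi is stored as a 2 × 2 block C{i}:

% Return the indices i at which the condensed form decouples, i.e., at which
% the core Q_i is numerically trivial (diagonal).
function idx = trivial_cores(C)
  idx = [];
  for i = 1:numel(C)
      if abs(C{i}(2,1)) <= eps * (abs(C{i}(1,1)) + abs(C{i}(2,2)))
          idx(end+1) = i;   % the eigenvalue problem splits into a leading
      end                   % i-by-i and a trailing (n-i)-by-(n-i) subproblem
  end
end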
Core Chasing on Inverse Hessenberg Form
Let's start by looking at a simple special case. Suppose A has the inverse Hessenberg form
A = QR = [core-transformation diagram],
corresponding to the position vector
p = ( r  r  r  r ).
To execute a generalized Francis iteration of degree one on this form, we begin by choosing a nonzero shift ρ. Recall that the standard Francis iteration on upper Hessenberg form begins by building a core transformation U1 with the first column proportional to x = (A − ρI)e1, but that is clearly not the right action here. Since A is not upper Hessenberg, x is a full vector, by which we mean that (generically) none of its entries is zero. However, its inverse
A^{−1} = R^{−1}Q^* = [core-transformation diagram]
is upper Hessenberg, and it turns out that the correct action is to take x = A−1 (A − ρI)e1 = (I − ρA−1 )e1 = −ρ(A−1 − ρ−1 I)e1 . It’s easy to compute x without forming A−1 (or R−1 ) explicitly; just one small backsolve on a 2 × 2 submatrix of R is required. Only the first two entries of x are nonzero, so we can find a core transformation U1 such that U1 e1 is proportional to x. We then do a similarity transformation A → U1∗ AU1 to obtain
[core-transformation diagram].
23 In Section 3.4 we showed what to do in the singular case. There we considered only upper Hessenberg matrices, but the arguments given there can be extended in a straightforward way to arbitrary twisted forms.
U1∗ and U1 are shown in red. We can get rid of U1 by passing it through R and fusing it with Q1 :
[core-transformation diagram].
U1∗ is the misfit, and we must chase it through the matrix to get rid of it. The process is just as in the Hessenberg case, except that the misfit is chased in the opposite direction, i.e., left to right. The first step is to shift the misfit through the ascending sequence that represents Q, then pass it from left to right through R, and then move it back over to the left side by a similarity transformation:
[core-transformation diagram].
Then we repeat the process until the misfit arrives at the bottom. The final step is
[core-transformation diagram].
The misfit is fused with the bottom core transformation in the ascending sequence, and the iteration is complete. It is easy to justify this procedure. In fact what we have accomplished here is just a Francis iteration on A−1 with shift ρ−1 , done without forming A−1 explicitly. We leave it as an exercise for the reader to review the double Francis step in Section 3.2 and then work out how to do the double step on an inverse Hessenberg matrix. Of course, this is just an ordinary double step on A−1 , done without forming A−1 explicitly.
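To illustrate how cheap the start of such an iteration is, here is a MATLAB sketch of the computation of x and U1 for the single-shift step on an inverse Hessenberg form. This is our own sketch, not the book's code; C1 denotes the 2 × 2 block of Q1, R is the upper triangular factor, and rho is a nonzero shift.

% First two entries of x = (I - rho*A^{-1})e1 for A = Q_{n-1}...Q_1*R, and the
% core U1 with U1*e1 proportional to x.  Only Q1 and R(1:2,1:2) are needed.
function U1 = first_core_inv_hess(C1, R, rho)
  w = C1' * [1; 0];                % first two entries of Q^* e1
  y = R(1:2,1:2) \ w;              % first two entries of A^{-1} e1 (2x2 backsolve)
  x = [1; 0] - rho * y;            % first two entries of (I - rho*A^{-1}) e1
  c = x(1) / norm(x);  s = x(2) / norm(x);
  U1 = [c, -conj(s); s, conj(c)];  % 2x2 block of U1; U1*[1;0] is proportional to x
end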
Second Example
Now let's look at a slightly more complicated example. Consider a condensed form
A = QR = [core-transformation diagram],
corresponding to the position vector
p = ( ℓ  ℓ  r  r ).
Let's figure out how to do a core chase on this condensed form. Since p1 = ℓ, we start by building a core transformation U1 whose first column is proportional to x = (A − ρI)e1. It is easy to check that only the first two entries of x are nonzero. (If we had had p1 = r, we would have begun the step with x = A^{−1}(A − ρI)e1 as in the inverse Hessenberg case.) Then we do a similarity transformation A → U1^* AU1:
[core-transformation diagram].
U1∗ can be eliminated by fusion with Q1 . U1 is our misfit, which must be chased through the matrix to the bottom. We begin by passing it through R and then Q:
[core-transformation diagram].
Then we do a similarity transformation to move the misfit from the left to the right in preparation for passing it back through the matrix again:
[core-transformation diagram].
In this picture we also have moved Q5 Q4 to the left (allowed by commutativity with Q1 Q2 ) to make room for the arrival of the misfit:
[core-transformation diagram].
Up until now we have been proceeding exactly as in the upper Hessenberg case, but now we have come to the “corner.” The misfit is lodged between Q2 and Q4 and can go no further. In fact it no longer appears to be a misfit; we declare a
new misfit, shown in red,
[core-transformation diagram],
which we will now chase to the right,
[core-transformation diagram],
and again
[core-transformation diagram].
Now we are almost done. We can pass the misfit through one more time and then fuse it with Q5 to complete the iteration:
[core-transformation diagram].   (6.3.1)
The final configuration is
[core-transformation diagram].
Notice that this is different from the starting configuration; we have a new position vector
p̂ = ( ℓ  r  r  r ).
Notice also that there is more than one way to finish the iteration. Instead of (6.3.1), we could have gone in the other direction:
[core-transformation diagram],   (6.3.2)
resulting in
[core-transformation diagram],
which has position vector
p̂ = ( ℓ  r  r  ℓ ).
What happened here? When we got to the “corner,” we changed chasing direction. Moreover, the position of the corner moved up by one. When we got to the bottom we had a choice of ending the iteration with either pn−2 = r or pn−2 = ℓ. To summarize, here is what happens to the position vector in a single core-chasing iteration: p1 is discarded and the new position vector has p̂i = pi+1 for i = 1, . . . , n − 3. Then pn−2 can be either r or ℓ, depending on whether we choose to end the iteration as illustrated in (6.3.1) or as in (6.3.2). Stated informally, the components of p “move up by one,” and we are free to put either ℓ or r in the last position.
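The bookkeeping for the position vector is trivial. In a MATLAB sketch (an assumption on our part: p is stored as a character vector of 'l' and 'r', and the values below are hypothetical):

p = 'llrr';                     % position vector before the iteration
choice = 'r';                   % free choice of the last entry ('l' or 'r')
p_hat = [p(2:end), choice];     % entries move up by one; last entry is chosen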
More Examples; Exercises for the Reader
As an exercise the reader can check what happens during a core chase in a more complex situation like
Q = [core-transformation diagram],
for which the position vector is p = r  r  r  r  r .
Since p1 = r, we start with x = (I − ρA−1 )e1 . The initial misfit is on the left and gets chased to the right. When we get to the first corner, the misfit gets stuck. We declare a new misfit and start chasing it to the left. Also the position of the corner moves up by one. When we get to the second corner, the misfit gets stuck again. We declare a new misfit and start chasing it to the right. This corner also moves up one position. At the bottom we get to make a decision. Do we stay with “r” or do we make a new corner? The final configuration is
Q̂ = [core-transformation diagram],
where we get to choose either blue or green. The final position vector is p̂ = r  r  r  r  x , where x = r if we choose blue and x = ℓ if we choose green. The reader can also check that our findings remain true even in the case of a CMV configuration
Q = [core-transformation diagram],
which has a new “corner” on every step. The computational complexity of the algorithm does not depend on the number of corners.
6.4 Double Steps and Multiple Steps
It is possible to do double steps and, in fact, multiple steps of arbitrary degree on an arbitrary twisted form. An iteration of degree m begins with the choice of m nonzero shifts ρ1, . . . , ρm. Suppose that in the first m positions of the position vector p the symbols ℓ and r appear i and j times, respectively. The iteration begins by calculating a vector x = αA^{−j}(A − ρ1 I) · · · (A − ρm I)e1. The scalar α is any convenient scale factor. The equation
A^{−1}(A − ρI) = −ρ(A^{−1} − ρ^{−1} I)   (6.4.1)
shows that x can also be expressed as
x = βA^i (A^{−1} − ρ1^{−1} I) · · · (A^{−1} − ρm^{−1} I)e1.   (6.4.2)
Alternatively we can express x using a product containing i factors of the form A − ρk I and j factors of the form A^{−1} − ρk^{−1} I, in any order, with no additional power of A. The following algorithm gives a procedure for computing x:
x ← e1
for k = 1, . . . , m
    if pk = ℓ:  x ← αk (A − ρk I) x
    if pk = r:  x ← αk (I − ρk A^{−1}) x      (6.4.3)
The αk are any convenient scaling factors. The vector x given by (6.4.1), (6.4.2), or (6.4.3) has the form
x = [ x1  · · ·  xm+1  0  · · ·  0 ]^T.   (6.4.4)
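Before examining why x has this form, here is a direct MATLAB transcription of (6.4.3). It is a sketch only: we assume A is available explicitly and the position vector is stored as a character vector of 'l' and 'r', whereas in the fast algorithms A and A^{−1} would of course be applied in factored form.

% x proportional to A^{-j}(A - rho_1*I)...(A - rho_m*I)*e1, via loop (6.4.3).
function x = first_vector(A, p, rho)
  n = size(A,1);  m = numel(rho);
  x = [1; zeros(n-1,1)];                 % x <- e1
  for k = 1:m
      if p(k) == 'l'
          x = A*x - rho(k)*x;            % x <- (A - rho_k I) x
      else
          x = x - rho(k)*(A\x);          % x <- (I - rho_k A^{-1}) x
      end
      x = x / norm(x);                   % the scale factors alpha_k
  end
end

Generically only the first m + 1 entries of the result are nonzero, in agreement with (6.4.4).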
It is easy to see why this is true by thinking about what happens during the execution of (6.4.3). Consider, for example,
A = QR = [core-transformation diagram],   (6.4.5)
which has p = ( r  ℓ  ℓ  ℓ ). The form of A^{−1} is
A^{−1} = R^{−1}Q^* = [core-transformation diagram].   (6.4.6)
Let x(k) denote the value of the vector x in (6.4.3) after k passes through the loop. Initially we have x(0) = e1 , which has only one nonzero entry. Since p1 = r, the first time through the loop multiplies x(0) by A−1 . Looking at (6.4.6) we see that Q∗2 , . . . , Q∗5 leave x(0) invariant, but Q∗1 does not. After multiplication by Q∗1 , the second entry of the “x” vector is nonzero, but all entries below the second remain zero. Multiplication by R−1 preserves this form. Thus x(1) has zeros after the second position. The reader can easily check that if we had multiplied by A instead of A−1 on this step, we would not have gotten the desired result.
On the second time through the loop, x(1) is multiplied by A because p2 = ℓ. Referring to (6.4.5) and carefully considering the effect of each factor in the decomposed form, we find that x(2) emerges with zeros after the third position. The entry in the third position will (generically) be nonzero because of the multiplication by Q2. Again the reader can check that if we had multiplied by A^{−1} instead of A on this step, we would have had an unsatisfactory result. On the third time through we again multiply by A because p3 = ℓ. Referring again to (6.4.5) and carefully checking what happens when x(2) is multiplied by A, we find that x(3) emerges with zeros after the fourth position. Again it is crucial that we multiplied by A and not A^{−1} on this step. The pattern should now be clear. Because x has the form (6.4.4), there exist core transformations U1, . . . , Um such that
U1^* U2^* · · · Um^* x = γe1
(6.4.7)
for some nonzero γ. Let Ũ = U1 · · · Um. We begin the iteration with the unitary similarity transformation
A → Ũ^* A Ũ.
The rest of the iteration consists of returning the matrix to a condensed form. A complete description of the process is given in [64], to which we refer the curious reader. Although the process is not all that complicated, the complete description is daunting. We therefore opt here to illustrate the process by examples. For later reference we note that (6.4.1) and (6.4.7) imply that
Ũ e1 = αA^{−j}(A − ρ1 I) · · · (A − ρm I)e1
(6.4.8)
for some nonzero α. We observed that during single steps the pattern in the position vector p moves up by one on each iteration. It is therefore reasonable to expect that an iteration of degree m will cause the pattern to move up m positions, and this is indeed what happens. For example,
if we do a double step on a twisted form that initially has position vector p = r  r , the position vector after the step will be p̂ = r  r  x  y , where we can make x and y take on any of the four possible combinations of ℓ and r.
Example with m = 2
Consider an iteration of degree m = 2 on the matrix
A = [core-transformation diagram],
which has position vector
p = ( ℓ  ℓ  ℓ  r  r ).
First we obtain two shifts ρ1 and ρ2. Because the top of the matrix looks like an ordinary upper Hessenberg matrix (p1 = p2 = ℓ), we compute x = α(A − ρ1 I)(A − ρ2 I)e1, just as in the upper Hessenberg case. Then we determine core transformations U2 and U1 such that U1^* U2^* x = γe1, and we set the step in motion by the similarity transformation A → U1^* U2^* AU1 U2, resulting in
[core-transformation diagram].
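For completeness, here is one way to compute the two initial cores in MATLAB using planerot (our own sketch, not the book's code; real data and a column vector x are assumed for simplicity, and U1, U2 are returned as full matrices for clarity):

% Given x with only x(1:3) nonzero, build cores U2 (rows 2,3) and U1 (rows 1,2)
% such that U1'*U2'*x is a multiple of e1.
function [U1, U2] = initial_cores_degree2(x)
  n = numel(x);
  [G2, x(2:3)] = planerot(x(2:3));   % rotate x(3) into x(2)
  U2 = eye(n);  U2(2:3, 2:3) = G2';
  [G1, x(1:2)] = planerot(x(1:2));   % rotate x(2) into x(1)
  U1 = eye(n);  U1(1:2, 1:2) = G1';
end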
We immediately pass U2 U1 through R to bring it into contact with the other core transformations:
[core-transformation diagram].
In the course of the core chase there will be many similarity transformations, and each is accompanied by an operation of passing the core transformations through R, either immediately before or immediately after the similarity transform. This being understood, we now simplify the picture by removing R from the display:
[core-transformation diagram].
Here we have the original matrix in black (with R invisible). We call this the main sequence. We also have a descending sequence in blue and an ascending sequence in green. We need to chase the blue and green sequences away to restore a condensed form. The process starts out exactly the same as the double Francis iteration described in Section 3.2, but now we introduce some new terminology and change our viewpoint slightly. In [64] we introduced the limb, which consists initially of the first m − 1 core transformations in the main sequence. In our current example m − 1 = 1, so the limb consists of a single core transformation,
which we show in red:
[core-transformation diagram].
We begin with a preparatory step that shifts the limb through the descending sequence. This requires one turnover and results in
[core-transformation diagram].
Notice that this move was possible because p1 = ℓ. In the case p1 = r, we would have shifted the limb through the ascending sequence, and it would have ended up on the right. Now that the limb is out of the way, it is possible to fuse the bottom core transformation in the descending sequence with the top remaining core transformation in the main sequence. We consider the product of this fusion to be a member of the descending sequence. This is possible because p2 = ℓ. In the case p2 = r, we would have fused with the ascending sequence instead. After the fusion we have
[core-transformation diagram].
Now we are ready for the first step. Noting that p3 = ℓ, we reassign the top remaining core transformation in the main sequence, now considering it to be part of the descending sequence:
[core-transformation diagram].
Then we shift the ascending sequence through the descending sequence via two turnovers:
[core-transformation diagram].
This is possible because p3 = ℓ. In the case p3 = r, we would have adjoined a transformation to the ascending sequence and passed the descending sequence through the ascending sequence. Next we move the limb back to the middle by passing it through the ascending sequence:
[core-transformation diagram].
Finally, we do a similarity transformation that moves the ascending sequence from the left side back to the right side:
[core-transformation diagram].
The top core transformation in the descending sequence is now finished; it will not participate in any subsequent operations. We remove it from the descending sequence; it becomes the first core transformation in the finished part, and it will be the top core transformation Q̂1 when the core chase is done. Since it lies to the left of both the descending and the ascending sequences, we will inevitably have p̂1 = ℓ. This is a consequence of the fact p3 = ℓ, which caused us to adjoin a core to the descending sequence and pass the ascending sequence through it. For emphasis we write p̂1 = p3 = ℓ. The situation is now
[core-transformation diagram],
where the finished part is shown in black at the top. The reader can check that every move to this point has been exactly the same as in the Francis double step described in Section 3.2, although we have used different vocabulary to describe it. At this point, as we embark on the second
step of the core chase, something different happens. Since p4 = r, we cannot pass the ascending sequence through the descending sequence as we did on the previous step. Instead we detach the top remaining core transformation from the main sequence and assign it to the ascending sequence. Then we shift the descending sequence through the ascending sequence,
[core-transformation diagram],
we shift the limb back to the middle,
[core-transformation diagram],
and we move the descending sequence back from the right to the left by a similarity transformation,
[core-transformation diagram].
We now remove the top core from the ascending sequence and adjoin it to the finished part:
[core-transformation diagram].
Clearly we are going to have pˆ2 = r. This is a consequence of p4 = r, which caused us to adjoin a core to the ascending sequence and pass the descending sequence through it. Again, for emphasis, pˆ2 = p4 = r. We see that the position vector is “moving up by two.” For the next step, since p5 = r, we will do as in the previous step. We adjoin a core to the ascending sequence and then shift the descending sequence through it. We then move the limb to the middle by shifting it through the descending
sequence. After these moves, which the reader can work out for herself, we have
[core-transformation diagram].
The next move is a similarity transformation that moves the descending sequence from the right back to the left. The top core is removed from the ascending sequence and added to the finished part. Now the situation is
[core-transformation diagram].
Clearly p̂3 = p5 = r. Now we are getting to the bottom, and at this step we have a choice. The last core in the main sequence can be adjoined to either the descending sequence (the “ℓ” choice) or the ascending sequence (the “r” choice). Let's arbitrarily make the “ℓ” choice. We extend the descending sequence, and then we shift the ascending sequence through it. Then we move the limb back to the middle by shifting it through the descending sequence. The result is
[core-transformation diagram].
We move the ascending sequence from the left back to the right by a similarity transformation. The top core transformation from the descending sequence joins the finished part:
[core-transformation diagram].
Clearly p̂4 = ℓ, as planned. The bottom transformations from the descending and ascending sequences can now be fused. The fused product can be considered to be part of the descending sequence (the “ℓ” choice) or the ascending sequence
(the “r” choice). Let’s make the “r” choice, just for variety’s sake:
[core-transformation diagram].
We then shift the descending sequence through the ascending sequence. This requires one turnover. The next move would ordinarily be to shift the limb to the center by passing it through the ascending sequence. But now we have reached the point where the limb can simply be fused with the ascending sequence. After this fusion, the limb is gone:
[core-transformation diagram].
We move the descending sequence (now consisting of one core) back to the left by a similarity transformation. The top transformation from the ascending sequence gets transferred to the finished part:
[core-transformation diagram].
We have p̂5 = r as planned. The iteration is completed by fusing the final blue and green core transformations. The position vector is
p̂ = ( ℓ  r  r  ℓ  r ).
The first three entries are rigidly determined by the fact that the pattern in p moves up by two. The last two were determined by choices we made at the end of the iteration.
Example with m = 3
Consider a step of degree m = 3 on the matrix
A = QR = [core-transformation diagram],
where R is not shown. The position vector is
p = ( r  ℓ  r  ℓ  r ).
We pick three shifts and use them to compute x = αA^{−2}(A − ρ1 I)(A − ρ2 I)(A − ρ3 I)e1 by algorithm (6.4.3). We then compute core transformations U1, U2, and U3 such that U1^* U2^* U3^* x = γe1. The similarity transformation A → Ũ^* A Ũ, where Ũ = U3 U2 U1, results in
[core-transformation diagram].
The blue descending sequence is Ũ^*, and the green ascending sequence is Ũ. The top two core transformations in the main sequence, shown in red, constitute the limb. (We remind the reader that the limb, defined in [64], initially consists of the first m − 1 cores in the main sequence.) We begin the core chase with a preparatory step that moves the limb either to the left or the right and does a fusion. In this case p2 = ℓ, which implies that we cannot move the limb to the right. We must move it to the left, shifting it through the descending sequence via two turnovers. Since p3 = r, we can fuse the top remaining transform in the main sequence with the ascending (green) sequence. After these moves we have
[core-transformation diagram].
Now the core-chasing loop begins. Since p4 = ℓ, we add a core to the descending sequence. Then we pass the ascending sequence through the descending sequence via three turnovers, resulting in
[core-transformation diagram].
Next we move the limb back to the middle by shifting it through the ascending sequence:
[core-transformation diagram].
We move the ascending sequence from the left side back to the right side by a similarity transformation. We detach a core from the top of the descending sequence. This is the beginning of the finished part. We have p̂1 = p4 = ℓ; the matrix now looks like
[core-transformation diagram],
with the finished part in black at the top. For the next step p5 = r, so we add a core to the ascending sequence and pass the descending sequence through the ascending sequence, resulting in
[core-transformation diagram].
We shift the limb to the middle:
[core-transformation diagram].
Then we move the descending sequence back to the left side by a similarity transformation. We also remove a core from the top of the ascending sequence and add it to the finished part:
[core-transformation diagram].
Note that p̂2 = p5 = r.
Now we are getting to the bottom, so we can choose either ℓ or r for the next step. Suppose we choose ℓ. Then we augment the descending sequence by adjoining the last remaining core from the main branch, and we pass the ascending sequence through it. We also move the limb to the middle by passing it through the descending sequence. The result is
[core-transformation diagram].
A similarity transformation moves the ascending sequence back to the right side. A core is unhitched from the top of the descending sequence and added to the finished part (with p̂3 = ℓ, as planned). We fuse the bottom cores from the ascending and descending sequences. The result of the fusion is shown in neutral black:
[core-transformation diagram].
We are ready for another step. Suppose we choose ℓ again. The black core at the bottom becomes part of the descending sequence, and we shift the ascending sequence through:
[core-transformation diagram].
Next we pass the limb back to the middle. Let’s look at this step closely. We pass the first core transformation to the middle by a turnover:
[core-transformation diagram].
It is impossible to pass the second one through. Instead we fuse it with the
bottom green transformation:
[core-transformation diagram].
We move the ascending sequence back to the right by a similarity transformation, we fuse the bottom two cores in the descending and ascending sequences, and we detach a core from the top of the descending sequence:
[core-transformation diagram].
For the final step let’s choose option r. The black (recently fused) core at the bottom becomes part of the green sequence, and we pass the remaining core of the blue descending sequence through. We then attempt to pass (what’s left of) the limb back to the middle, but instead it just gets fused with the ascending sequence. The limb is now gone. A similarity transformation moves the blue descending sequence back to the left. A core is detached from the top of the ascending sequence, resulting in pˆ5 = r:
[core-transformation diagram].
The ascending and descending sequences are now reduced to one core transformation each. We fuse them, and the iteration is complete:
[core-transformation diagram].
The position vector is
p̂ = ( ℓ  r  ℓ  ℓ  r ).
The first two components were forced on us by moving the previous position vector “up three”, but the last three components are consequences of choices that we made.
As we mentioned earlier, a complete description of the algorithm can be found in [64], but the two examples that we have just covered show all of the essential ideas. The iteration begins with a matrix A in condensed form associated with a position vector p, and it ends with a matrix Â in condensed form associated with a (possibly) different position vector p̂. The essential relationships are stated in the following theorem.
Theorem 6.4.1. Consider a generalized Francis iteration of degree m with shifts ρ1, . . . , ρm, applied to a condensed matrix A with position vector p, resulting in a condensed matrix Â with position vector p̂. Let j denote the number of r symbols among the first m entries of p. Then Â = U^* AU, where the unitary matrix U satisfies U e1 = αA^{−j}(A − ρ1 I) · · · (A − ρm I)e1 for some nonzero scalar α. The position vectors p and p̂ are related by p̂i = pi+m, i = 1, . . . , n − m − 2.
Proof. The transforming matrix U is a product U = Ũ Ǔ, where Ũ = U1 · · · Um is the initial transformation specified by (6.4.1), and Ǔ is the product of all of the other core transformations that participated in similarity transformations. It is easy to check that none of the core transformations of which Ǔ is composed touches the first row or column, so Ǔ e1 = e1. Combining this fact with (6.4.8), we deduce that U e1 = αA^{−j}(A − ρ1 I) · · · (A − ρm I)e1. The relationship between p and p̂ follows from the complete description of the algorithm given in [64], but it is also made quite clear by the two examples that we have considered above.
6.5 Convergence Theory
The convergence theory that we presented in Chapter 2 can be extended to generalized Francis iterations on arbitrary twisted forms. Krylov subspaces played a big role in Chapter 2. For the extended theory we must introduce extended Krylov subspaces.
Extended Krylov Subspaces
Consider a nonsingular matrix A in condensed form with position vector p. For k = 1, 2, . . . , n − 1, we define the extended Krylov subspace Kp,k(A, v) to be a space spanned by k vectors from the bilateral sequence
. . . , A^3 v, A^2 v, Av, v, A^{−1}v, A^{−2}v, A^{−3}v, . . . ,
(6.5.1)
where the vectors are taken in an order determined by the position vector p. The first vector is always v. The second vector is taken to be either Av or A^{−1}v, depending upon whether p1 = ℓ or p1 = r, respectively. In general, once i vectors have been chosen from the sequence (6.5.1), the next vector is taken to be the
first vector to the left of v that has not already been taken if pi = ℓ. Otherwise (pi = r) we choose the first vector to the right of v that has not already been taken. For example, in the upper Hessenberg case, for which
p = ( ℓ  ℓ  · · ·  ℓ ),
we get the standard Krylov subspaces Kp,k(A, v) = span{v, Av, A^2 v, . . . , A^{k−1}v} = Kk(A, v). In the inverse Hessenberg case, for which
p = ( r  r  · · ·  r ),
we get Krylov subspaces associated with A^{−1}, Kp,k(A, v) = span{v, A^{−1}v, A^{−2}v, . . . , A^{−(k−1)}v} = Kk(A^{−1}, v). In the CMV case, for which
p = ( ℓ  r  ℓ  r  · · · ),
we get
Kp,k(A, v) = span{v, Av, A^{−1}v, A^2 v, A^{−2}v, . . .}.
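The selection rule is easy to mechanize. The following MATLAB sketch (our own helper, not from the book; the name krylov_exponents and the 'l'/'r' character encoding of p are assumptions of ours) lists the exponents e(1), . . . , e(k) such that Kp,k(A, v) = span{A^{e(1)}v, . . . , A^{e(k)}v}; the irregular example below can be used to check it.

% 'l' takes the next positive power of A, 'r' the next negative power.
% The first vector is always v itself (power 0).
function e = krylov_exponents(p, k)
  e = 0;  hi = 0;  lo = 0;
  for i = 1:k-1
      if p(i) == 'l'
          hi = hi + 1;  e(end+1) = hi;
      else
          lo = lo - 1;  e(end+1) = lo;
      end
  end
end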
Finally, let's consider an irregular example with
p = ( r  r  ℓ  r  ℓ  ℓ ).
Then
Kp,1(A, v) = span{v},  Kp,2(A, v) = span{v, A^{−1}v},  Kp,3(A, v) = span{v, A^{−1}v, A^{−2}v},  . . . ,
Kp,7(A, v) = span{v, A^{−1}v, A^{−2}v, Av, A^{−3}v, A^2 v, A^3 v}.
Each extended Krylov space Kp,k(A, v) is just a power-shifted version of a standard Krylov space associated with A or A^{−1}.
Lemma 6.5.1. Suppose that in the first k − 1 components of p the symbol ℓ appears i times and the symbol r appears j times (i + j = k − 1). Then Kp,k(A, v) = span{A^{−j}v, . . . , A^i v} = A^{−j}Kk(A, v) = A^i Kk(A^{−1}, v).
Standard Krylov spaces obey the inclusion AKk(A, v) ⊆ Kk+1(A, v). Here we must be a bit more careful.
Lemma 6.5.2. For k = 1, . . . , n − 2, the following hold:
(a) If pk = ℓ, then AKp,k(A, v) ⊆ Kp,k+1(A, v).
(b) If pk = r, then A^{−1}Kp,k(A, v) ⊆ Kp,k+1(A, v).
Both of these lemmas follow immediately from the definition of Kp,k(A, v). For k = 1, . . . , n, define E_k = span{e1, . . . , ek}. For each k, E_k is invariant under the upper triangular matrix R and under all core transformations Qj except Qk. If y ∈ E_{k+1} and Qj y = z, then zk+1 = yk+1 unless j is k or k + 1.
Theorem 6.5.3. Let A be a matrix in proper condensed form (6.2.8) with position vector p. Then, for k = 1, . . . , n, E_k = Kp,k(A, e1).
Proof. The proof is by induction on k. The case k = 1 is trivial. Now let us show that if E_k = Kp,k(A, e1) for some k < n, then E_{k+1} = Kp,k+1(A, e1). Since Kp,k(A, e1) ⊆ Kp,k+1(A, e1), we automatically have that E_k ⊆ Kp,k+1(A, e1). If we can show that ek+1 ∈ Kp,k+1(A, e1), we will be done. We can rewrite (6.2.8) as
A = QL Qk QR R,
(6.5.2)
where QL and QR are the products of the core transformations to the left and right of Qk, respectively. This decomposition is (in most cases) not unique, and any such decomposition can be used. We consider two cases. First suppose pk = ℓ. Then, using part (a) of Lemma 6.5.2, we have AE_k ⊆ Kp,k+1(A, e1). Using the factorization (6.5.2) we are going to show the existence of x ∈ E_k and z ∈ E_{k+1} with zk+1 ≠ 0, such that z = Ax. Then z ∈ AE_k, so z ∈ Kp,k+1(A, e1). By the form of z, and since e1, . . . , ek ∈ Kp,k+1(A, e1), we easily deduce that ek+1 ∈ Kp,k+1(A, e1). It remains to produce vectors x and z such that z = Ax, x ∈ E_k, z ∈ E_{k+1}, and zk+1 ≠ 0. Since QR does not contain the factor Qk, the space E_k is invariant under QR R. Since this matrix is nonsingular, it maps E_k onto E_k. Therefore there is an x ∈ E_k such that QR Rx = ek. Let y = Qk ek. Then y ∈ E_{k+1} and, since Qk is nontrivial, yk+1 ≠ 0. Let z = QL y. Since pk = ℓ, QL does not contain the factor Qk+1. Thus E_{k+1} is invariant under QL, and z ∈ E_{k+1}. Moreover, zk+1 = yk+1 ≠ 0 because QL does not contain either of the factors Qk or Qk+1. Putting the pieces together we have z = Ax, where x ∈ E_k, z ∈ E_{k+1}, and zk+1 ≠ 0. Now we consider the case pk = r. Using part (b) of Lemma 6.5.2, we have A^{−1}E_k ⊆ Kp,k+1(A, e1). If we can show the existence of x ∈ E_k, z ∈ E_{k+1} with z = A^{−1}x and zk+1 ≠ 0, we will have z ∈ Kp,k+1(A, e1), and we can deduce as in the previous case that ek+1 ∈ Kp,k+1(A, e1). To produce x and z with the desired properties we use (6.5.2) in its inverse form A^{−1} = R^{−1}Q_R^* Q_k^* Q_L^* and make arguments similar to those in the previous case. It is crucial that, since pk = r, Q_{k+1}^* is a factor of Q_L^*, not Q_R^*.
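Theorem 6.5.3 is easy to check numerically. The sketch below is our own illustration; it reuses the hypothetical helpers condensed_Q and krylov_exponents introduced earlier, builds a random proper condensed form, and verifies that the vectors A^{e(j)} e1 spanning Kp,k(A, e1) have negligible components below position k.

n = 8;  p = 'rrlrll';                           % a proper condensed form, n - 2 entries
G = arrayfun(@(i) orth(randn(2) + 1i*randn(2)), 1:n-1, 'UniformOutput', false);
R = triu(randn(n) + 1i*randn(n));
A = condensed_Q(G, p) * R;
k = 5;
e = krylov_exponents(p, k);
K = zeros(n, k);
for j = 1:k, K(:, j) = A^e(j) * [1; zeros(n-1,1)]; end
disp(norm(K(k+1:n, :)) / norm(K))               % should be at roundoff level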
The Main Result
The effect of an iteration of degree m of our algorithm is summarized by Theorem 6.4.1. The iteration begins with a proper condensed form A with position vector p and ends with a condensed form Â with position vector p̂. Typically Â will also be a proper condensed form, but not necessarily. If (and only if) one (or more) of the shifts is exactly an eigenvalue, then at least one of the core transformations will be trivial.24 This latter case is the lucky case, which allows us to reduce the problem to smaller problems immediately. Let's suppose we have not been so lucky. Then the following theorem holds.
Theorem 6.5.4. Suppose Â = U^* AU, where Â is in proper condensed form with position vector p̂. Let u1, . . . , un denote the columns of U, and let U_k = span{u1, . . . , uk}, k = 1, . . . , n. Then
U_k = Kp̂,k(A, u1),   k = 1, . . . , n.
Proof. Applying Theorem 6.5.3 to Â, we have, for k = 1, . . . , n, E_k = Kp̂,k(Â, e1). Thus
U_k = U E_k = U Kp̂,k(Â, e1) = Kp̂,k(U Â U^*, U e1) = Kp̂,k(A, u1)
for k = 1, . . . , n.
The next theorem, which is our main result, shows that each iteration of our algorithm effects subspace iterations with a change of coordinate system (Section 2.2) on a nested sequence of extended Krylov spaces. It follows that, given good choices of shifts, successive iterates will move quickly toward block-triangular form, leading to a deflation (signaled by one of the core transformations becoming trivial).
Theorem 6.5.5. Consider one iteration of degree m, Â = U^* AU, where A and Â are in proper condensed form with position vectors p and p̂, respectively. Let f(z) = (z − ρ1) · · · (z − ρm), where ρ1, . . . , ρm are the shifts. For a given k (1 ≤ k ≤ n − 1) let i(k) and j(k) denote the number of symbols ℓ and r, respectively, in the first k − 1 positions of p (i(k) + j(k) = k − 1). Define î(k) and ĵ(k) analogously with respect to p̂. Then the iteration A → Â effects one step of subspace iteration driven by
A^{î(k)−i(k)−j(m+1)} f(A),   (6.5.3)
followed by a change of coordinate system. This means that
A^{î(k)−i(k)−j(m+1)} f(A) E_k = U_k.
The change of coordinate system Â = U^* AU maps U_k back to E_k.
24 We leave this as an exercise for the reader, who can consult [72, Section 4.6] for hints.
Proof. Recall from Theorem 6.4.1 that u1 = U e1 = αA^{−j(m+1)} f(A)e1. Using Theorems 6.5.3 and 6.4.1, Lemma 6.5.1, and Theorem 6.5.4, in that order, we get
A^{î(k)−i(k)−j(m+1)} f(A) E_k = A^{î(k)−i(k)−j(m+1)} f(A) Kp,k(A, e1)
    = A^{î(k)−i(k)} Kp,k(A, A^{−j(m+1)} f(A)e1)
    = A^{î(k)−i(k)} Kp,k(A, u1) = Kp̂,k(A, u1) = U_k.
This convergence theorem differs from the theorem proved in Chapter 2 in some significant ways. For one thing, the operator (6.5.3) that drives the subspace iterations depends on k. There are, however, some special cases when it does not.
First of all, in the upper Hessenberg case p = ( ℓ  ℓ  · · ·  ℓ ) we have i(k) = î(k) = k − 1 for all k, and j(m + 1) = 0, so the driving function is just f(A). Theorem 6.5.5 reduces to Theorem 2.2.3.
In the inverse Hessenberg case p = ( r  r  · · ·  r ) we have i(k) = î(k) = 0 for all k, and j(m + 1) = m, so the driving function is
A^{−m} f(A) = β(A^{−1} − ρ1^{−1} I) · · · (A^{−1} − ρm^{−1} I).
This is different from f(A), but it is the same for all k. Consider any position vector p that is periodic with period m, i.e., pi+m = pi for all i. Then we can choose to make p̂ = p. If we do this, then î(k) = i(k) for all k, and the driving function is the same for all k and equals A^{−j} f(A), where j = j(m + 1). The Hessenberg and inverse Hessenberg forms are special cases of periodic p. As another special case consider double steps, i.e., m = 2, on the CMV form p = ( ℓ  r  ℓ  r  · · · ). We have j(3) = 1, so the driving function is A^{−1}(A − ρ1 I)(A − ρ2 I). Now let's get back to the general situation and consider what happens when s successive steps are taken. To begin with we will assume that the same shifts are taken on all iterations. If we consider the subspace iterations that effectively take place in the original coordinate system, we have
E_k → A^{ĩ−j̃} (f(A))^s E_k,
where ĩ is the difference in the number of times the symbol ℓ appears in the first k − 1 positions of the position vector of the final condensed matrix versus the initial condensed matrix, and j̃ is the sum over all iterations of the number of times the symbol r appears in the first m entries of the position vector. The power ĩ is bounded: −(n − 2) ≤ ĩ ≤ n − 2. This power can be significant in the short term, but in the long run it will not be. If the ℓ–r pattern is periodic with period m, we will have ĩ = 0 and j̃ = js. Let us now restrict our attention to the periodic case for simplicity. Let λ1, . . . , λn denote the eigenvalues of A, ordered so that | λ1^{−j} f(λ1) | ≥ | λ2^{−j} f(λ2) | ≥ · · · ≥ | λn^{−j} f(λn) |. Then the average improvement per step, taken over a sequence of s steps, is given approximately by
| (λk/λk+1)^j f(λk+1)/f(λk) |.   (6.5.4)
If all the shifts are excellent approximations to m eigenvalues of A, then in the case k = n − m we will have | f(λk+1)/f(λk) | ≪ 1, and there will be rapid convergence for that value of k (meaning Qk goes rapidly to diagonal form). If we now make the more realistic assumption that the shifts are changed on each step, (6.5.4) is replaced by
| (λk/λk+1)^j | · | ∏_{i=1}^{s} fi(λk+1)/fi(λk) |^{1/s},
where fi is the polynomial determined by the shifts on the ith iteration. With good shift choices, locally quadratic convergence is obtained.
Shift Selection
First we consider a couple of strategies that we discussed in [64]. The conceptually simplest procedure is to compute the eigenvalues of the lower right m × m submatrix and use them as the shifts. This normally yields quadratic convergence [72, 75]. In order to implement this strategy, we must form the lower right-hand submatrix by multiplying some of the core transformations into the upper triangular matrix. Usually this is a trivial task. For example, in the upper Hessenberg case, we just have to multiply in the bottom m core transformations in order from bottom to top. The total cost of the shift computation is O(m^3), which is O(1) if m is a small fixed number. However, in the inverse Hessenberg case, we must multiply in all n − 1 core transformations, as we must go in order from nearest to furthest from the triangular matrix. Since we need only update the last m columns, the cost is only O(nm) = O(n), which is not too bad, but it is nevertheless more than we are used to paying for shifts. Note that in the CMV case the computation costs O(1). In the random case the cost will also be O(1) with high probability. Because this strategy is occasionally more expensive than we would like, we consider another strategy in which we just multiply the bottom m core transformations into R. This strategy uses as shifts the eigenvalues of the lower right-hand m × m submatrix of
Qj1 · · · Qjm R,   (6.5.5)
where j1, . . . , jm is a permutation of n − m, . . . , n − 1 that leaves these core transformations in the same relative order as they are in the factorization of A. If pn−m−1 = ℓ, this submatrix is exactly the same as the lower right-hand submatrix of A, so this strategy is the same as the first strategy. When pn−m−1 = r, there will be a difference. However, if Qn−m = I, which is the case when the bottom m × m submatrix is disconnected from the rest of the matrix, there is again no difference. If Qn−m is close to the identity, as we have when we are close to convergence, then the difference will be slight, and the difference in shifts will also be slight. As a consequence, this strategy also yields quadratic convergence. We tried both strategies [64] and found that they gave similar results. Therefore we prefer the second, less expensive, shift strategy. Now consider a third possibility. Compute the eigenvalues of the (m + 1) × (m + 1) matrix taken from the lower right-hand corner of the matrix (6.5.5). For this we just need to use the bottom right (m + 1) × (m + 1) submatrix of R together with the given core transformations. We leave the matrix in factored form and compute the eigenvalues by generalized Francis iterations of degree one
or two. This is surely the most elegant course of action. The m + 1 computed eigenvalues can be used for an iteration of degree m + 1, or just the first m (in order of computation) can be kept for an iteration of degree m. These strategies yield performance similar to that of the two other strategies discussed above.
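The second strategy is simple to express in MATLAB. The sketch below is our own illustration, not the book's code: C{i} holds the 2 × 2 block of the core Qi, and ord lists the indices n − m, . . . , n − 1 in the same relative order in which those cores appear in the factorization of A. Only the last m columns of R are touched.

% Shifts = eigenvalues of the trailing m-by-m block of Q_{j1}...Q_{jm}R, as in (6.5.5).
function rho = trailing_shifts(C, R, ord)
  n = size(R,1);  m = numel(ord);
  B = R(:, n-m+1:n);                    % only the last m columns are needed
  for i = m:-1:1                        % apply the rightmost factor first
      k = ord(i);
      B(k:k+1, :) = C{k} * B(k:k+1, :);
  end
  rho = eig(B(n-m+1:n, :));             % trailing m-by-m block
end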
Convergence in the Single-Shift Case
Now consider the simple case m = 1, with an eye to trying to determine whether we can obtain some advantage from a good choice of p̂n−2 at each step. With good shifts we expect rapid convergence when k = n − 1, so let's focus on that case. Theorem 6.5.5 gives (for r(A) = A^{î(n−1)−i(n−1)−j(2)}(A − ρI))
U_{n−1} = r(A) E_{n−1}.
If we assign numerical values ℓ = 1 and r = 0 to the entries of the position vectors, we have î(n − 1) − i(n − 1) − j(2) = p̂n−2 − p1 − j(2). It is easy to check that p1 + j(2) = 1 always, so î(n − 1) − i(n − 1) − j(2) = p̂n−2 − 1. Thus
r(A) = A − ρI  if p̂n−2 = ℓ,      r(A) = A^{−1} − ρ^{−1}I  if p̂n−2 = r.
The associated ratios of eigenvalues are
| λn − ρ | / | λn−1 − ρ |  if p̂n−2 = ℓ   and   | λn^{−1} − ρ^{−1} | / | λn−1^{−1} − ρ^{−1} |  if p̂n−2 = r.
Since
( λn^{−1} − ρ^{−1} ) / ( λn−1^{−1} − ρ^{−1} ) = ( λn−1/λn ) · ( λn − ρ ) / ( λn−1 − ρ ),   (6.5.6)
it should be better to take p̂n−2 = r if and only if | λn−1 | < | λn |. Here the eigenvalues are numbered so that λn − ρ is the smallest (in magnitude) eigenvalue of A − ρI, and λn−1 − ρ is the second smallest. It is also assumed that λn^{−1} − ρ^{−1} is the smallest eigenvalue of A^{−1} − ρ^{−1}I, and λn−1^{−1} − ρ^{−1} is the second smallest. This is certainly not always true, but it is typically true, especially if ρ approximates λn well.
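In code the choice suggested by (6.5.6) is a one-line test. Here is an assumption-laden MATLAB sketch: |λn| and |λn−1| are not known in practice and would have to be replaced by estimates, for instance from the trailing block used in the shift computation, and the numbers below are hypothetical.

n = 8;  p_hat = repmat('l', 1, n-2);      % hypothetical new position vector
abs_lam_n = 0.3;  abs_lam_nm1 = 0.9;      % hypothetical magnitude estimates
if abs_lam_nm1 < abs_lam_n
    p_hat(n-2) = 'r';                     % finish the chase as in a step on A^{-1}
else
    p_hat(n-2) = 'l';                     % finish the chase as in a step on A
end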
6.6 The Unitary Case Revisited
Now consider the generalized Francis algorithm applied to a unitary matrix. We will show that in this case the order of the factors is irrelevant [8]. Suppose we have a unitary matrix in condensed form consisting of factors Q1, . . . , Qn−1 in any order, say Qσ1 Qσ2 · · · Qσn−1. (Recall that R = I in the unitary case.) Now suppose we do a generalized Francis step to get Q̂τ1 Q̂τ2 · · · Q̂τn−1. We will show that the resulting cores Q̂1, . . . , Q̂n−1 are always the same, regardless of the permutations σ and τ, i.e., regardless of the initial and final position vectors p and p̂. We will restrict our analysis to iterations of degree one (m = 1), relying on the fact that a step of higher degree m is equivalent to m steps of degree 1. Consider the beginning of a single step with shift ρ applied to a unitary matrix A. In the case p1 = ℓ, the top two core transformations are in the order Q1 Q2. We
build a core U1 with first column proportional to x = (A − ρI)e1 = (Q1 − ρI)e1, and then we do a similarity transformation by U1 to get U1^* Q1 Q2 U1. Here it is important that R = I. In general, U1 would have been passed through R, which would alter it. In the unitary case there is no R, so U1 arrives next to Q2 unchanged. The step proceeds with a fusion U1^* Q1 = Q̃1, and then a turnover is applied to the product Q̃1 Q2 U1. Now let's see what happens in the case p1 = r. The top two core transformations are in the order Q2 Q1. We build a core Ũ1 with first column proportional to z = (I − ρA^{−1})e1 = (I − ρQ1^*)e1. Notice that z = Q1^*(Q1 − ρI)e1 = Q1^* x, so Ũ1 = Q1^* U1. A similarity transformation by Ũ1 yields Ũ1^* Q2 Q1 Ũ1. When we fuse Q1 with Ũ1, we obtain Q1 Ũ1 = U1. Notice also that Ũ1^* = U1^* Q1 = Q̃1, so after the fusion we have Q̃1 Q2 U1, exactly as we had in the case p1 = ℓ. Then we do a turnover, resulting in exactly the same core transformations, regardless of whether p1 = ℓ or p1 = r. Now we do an induction step. Suppose we have chased the misfit forward several steps, resulting in
[two core-transformation diagrams, with cores labelled a, b, c, d],
corresponding to the cases pk = ℓ and pk = r, respectively. In either case we do a turnover that replaces the cores labelled a, b, c by new cores labelled x, y, z, resulting in
[the corresponding pair of core-transformation diagrams, now involving x, y, z and d].
In the left case we do a similarity transformation that moves the core labelled x from left to right. In the right case we move z from right to left, resulting in
[the corresponding pair of core-transformation diagrams].
Again it is important that R = I, so the core transformations arrive at their destination unchanged. In each case the core labelled y is done and is added to the completed part. In both cases, what is left is the same:
[a core-transformation diagram with the cores labelled z, d, x].
These will be turned over on the next step, regardless of whether pk+1 = ℓ or r. This completes the induction. We note finally that when we get to the end of the iteration we have a configuration
[a core-transformation diagram with cores labelled u, v, w]
after the final turnover. If we want to end with pn−2 = ℓ, we move the core labelled u from left to right and fuse it with w. If we want pn−2 = r, we move w from right to left and fuse it with u. Either way we are doing exactly the same fusion, resulting in exactly the same Q̂n−1. We deduce that the order of the core transformations is irrelevant. This result shows that the various QR-like or QZ-like algorithms for the unitary eigenvalue problem, including the very interesting method of Bunse-Gerstner and Elsner [21], are essentially the same. This
being the case, we might as well stick with the upper Hessenberg form p = ( ℓ  ℓ  · · ·  ℓ ), and that is what we did in [8]. The reader might well wonder whether this result is at odds with the convergence theory of the previous section. There and in [64] we showed that the rate of convergence can be influenced by the choice of pattern. Notice, however, that in the single-shift case (6.5.6) shows that the decisive ratio is | λn−1/λn |, and that ratio is always 1 in the unitary case.
Bibliography [1] G. S. Ammar, W. B. Gragg, and L. Reichel. On the eigenproblem for orthogonal matrices. In Proceedings of the 25th IEEE Conference on Decision & Control, pages 1963–1966. IEEE, New York, 1986. (Cited on pp. 59, 112) [2] G. S. Ammar, L. Reichel, and D. C. Sorensen. An implementation of a divide and conquer algorithm for the unitary eigenproblem. ACM Trans. Math. Software, 18:292–307, 1992. (Cited on p. 59) [3] G. S. Ammar, L. Reichel, and D. C. Sorensen. Corrigendum: Algorithm 730: An implementation of a divide and conquer algorithm for the unitary eigenproblem. ACM Trans. Math. Software, 20:161, 1994. (Cited on p. 59) [4] J. L. Aurentz, T. Mach, L. Robol, R. Vandebril, and D. S. Watkins. Fast and backward stable computation of roots of polynomials, part II: Backward error analysis; companion matrix and companion pencil. SIAM J. Matrix Anal. Appl., submitted. (Cited on pp. viii, 62, 72, 73, 76, 77, 90, 93, 94) [5] J. L. Aurentz, T. Mach, L. Robol, R. Vandebril, and D. S. Watkins. Fast and backward stable computation of roots of polynomials, part IIA: General backward error analysis. Technical Report TW683, KU Leuven, Department of Computer Science, 2017. (Cited on pp. viii, 73) [6] J. L. Aurentz, T. Mach, L. Robol, R. Vandebril, and D. S. Watkins. Fast and backward stable computation of the eigenvalues and eigenvectors of matrix polynomials. Math. Comp., to appear. (Cited on pp. viii, 94, 99, 101, 102) [7] J. L. Aurentz, T. Mach, R. Vandebril, and D. S. Watkins. Fast and backward stable computation of roots of polynomials. SIAM J. Matrix Anal. Appl., 36:942–973, 2015. SIAM Outstanding Paper Prize, 2017. (Cited on pp. viii, 62, 73) [8] J. L. Aurentz, T. Mach, R. Vandebril, and D. S. Watkins. Fast and stable unitary QR algorithm. Electron. Trans. Numer. Anal., 44:327–341, 2015. (Cited on pp. viii, 59, 60, 139, 141) [9] J. L. Aurentz, T. Mach, R. Vandebril, and D. S. Watkins. A note on companion pencils. Contemp. Math., 658:91–101, 2016. (Cited on p. viii) [10] J. L. Aurentz, T. Mach, R. Vandebril, and D. S. Watkins. Computing the eigenvalues of symmetric tridiagonal matrices via a Cayley transform. Electron. Trans. Numer. Anal., 46:447–459, 2017. (Cited on pp. viii, 81, 83) [11] J. L. Aurentz, R. Vandebril, and D. S. Watkins. Fast computation of the zeros of a polynomial via factorization of the companion matrix. SIAM J. Sci. Comput., 35:A255–A269, 2013. (Cited on p. 3) 143
Bibliography [12] J. L. Aurentz, R. Vandebril, and D. S. Watkins. Fast computation of eigenvalues of companion, comrade, and related matrices. BIT, 54:7–30, 2014. (Cited on pp. viii, 3, 84) [13] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. SIAM, Philadelphia, 2000. (Cited on p. 30) [14] S. Barnett. Polynomials and Linear Control Systems. Marcel Dekker, New York, 1983. (Cited on pp. viii, 84) [15] D. A. Bini, P. Boito, Y. Eidelman, L. Gemignani, and I. Gohberg. A fast implicit QR eigenvalue algorithm for companion matrices. Linear Algebra Appl., 432:2006– 2031, 2010. (Cited on p. 62) [16] D. A. Bini, F. Daddi, and L. Gemignani. On the shifted QR iteration applied to companion matrices. Electron. Trans. Numer. Anal., 18:137–152, 2004. (Cited on p. 62) [17] D. A. Bini, Y. Eidelman, L. Gemignani, and I. Gohberg. Fast QR eigenvalue algorithms for Hessenberg matrices which are rank-one perturbations of unitary matrices. SIAM J. Matrix Anal. Appl., 29:566–585, 2007. (Cited on p. 62) ˚. Bj¨ [18] A orck. Numerical Methods in Matrix Computations. Springer, Cham, 2015. (Cited on p. vii) [19] K. Braman, R. Byers, and R. Matthias. The multishift QR algorithm. Part I: Maintaining well-focused shifts and level 3 performance. SIAM J. Matrix Anal. Appl., 23:929–947, 2002. (Cited on p. 52) [20] K. Braman, R. Byers, and R. Matthias. The multishift QR algorithm. Part II: Aggressive early deflation. SIAM J. Matrix Anal. Appl., 23:948–973, 2002. (Cited on p. 52) [21] A. Bunse-Gerstner and L. Elsner. Schur parameter pencils for the solution of the unitary eigenproblem. Linear Algebra Appl., 154–156:741–778, 1991. (Cited on pp. 59, 112, 141) [22] A. Bunse-Gerstner and C. He. On a Sturm sequence of polynomials for unitary Hessenberg matrices. SIAM J. Matrix Anal. Appl., 16:1043–1055, 1995. (Cited on p. 59) [23] M. J. Cantero, L. Moral, and L. Vel´ azquez. Five-diagonal matrices and zeros of orthogonal polynomials on the unit circle. Linear Algebra Appl., 362:29–56, 2003. (Cited on p. 112) [24] S. Chandrasekaran, M. Gu, J. Xia, and J. Zhu. A fast QR algorithm for companion matrices. Oper. Theory Adv. Appl., 179:111–143, 2007. (Cited on p. 62) [25] G. Cybenko. Computing Pisarenko frequency estimates. In Proceedings of the Princeton Conference on Information Systems and Sciences, pages 587–591, 1985. (Cited on p. 59) [26] R. J. A. David and D. S. Watkins. Efficient implementation of the multishift QR algorithm for the unitary eigenvalue problem. SIAM J. Matrix Anal. Appl., 28:623–633, 2006. (Cited on p. 59) [27] S. Delvaux, K. Frederix, and M. Van Barel. An algorithm for computing the eigenvalues of block companion matrices. Numer. Algorithms, 62:261–287, 2012. (Cited on p. 62)
[28] S. Delvaux and M. Van Barel. Eigenvalue computation for unitary rank structured matrices. J. Comput. Appl. Math., 213:268–287, 2008. (Cited on p. 59) [29] J. W. Demmel. Applied Numerical Linear Algebra. SIAM, Philadelphia, 1997. (Cited on p. vii) [30] P. J. Eberlein and C. P. Huang. Global convergence of the QR algorithm for unitary matrices with some results for normal matrices. SIAM J. Numer. Anal., 12:97–104, 1975. (Cited on p. 59) [31] A. Edelman and H. Murakami. Polynomial roots from companion matrix eigenvalues. Math. Comp., 64:763–776, 1995. (Cited on pp. 75, 76, 79, 93) [32] Y. Eidelman, L. Gemignani, and I. Gohberg. Efficient eigenvalue computation for quasiseparable Hermitian matrices under low rank perturbation. Numer. Algorithms, 47:253–273, 2008. (Cited on p. 84) [33] Y. Eidelman, I. C. Gohberg, and I. Haimovici. Separable Type Representations of Matrices and Fast Algorithms—Volume 2: Eigenvalue Method. Volume 235 in Operator Theory: Advances and Applications. Springer, Basel, 2013. (Cited on p. 62) [34] S. C. Eisenstat and I. C. F. Ipsen. Three absolute perturbation bounds for matrix eigenvalues imply relative bounds. SIAM J. Matrix Anal. Appl., 20:149–158, 1998. (Cited on p. 47) [35] D. K. Faddeev and V. N. Faddeeva. Computational Methods of Linear Algebra. W. H. Freeman, San Francisco, London, 1963. (Cited on p. 79) [36] J. G. F. Francis. The QR transformation, parts I and II. Computer J., 4:265–271, 332–345, 1961. (Cited on p. vii) [37] F. R. Gantmacher. Theory of Matrices, Volume I. Chelsea, Providence, RI, 1959. (Cited on p. 79) [38] L. Gemignani. A unitary Hessenberg QR-based algorithm via semiseparable matrices. J. Comput. Appl. Math., 184:505–517, 2005. (Cited on p. 59) [39] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, 4th edition, 2013. (Cited on pp. vii, 24, 36, 89) [40] I. J. Good. The colleague matrix, a Chebyshev analogue of the companion matrix. Q. J. Math., 12:61–68, 1961. (Cited on p. 84) [41] W. B. Gragg. Positive definite Toeplitz matrices, the Arnoldi process for isometric operators, and Gaussian quadrature on the unit circle (in Russian). In Numerical Methods in Linear Algebra, E. S. Nikolaev (ed.), Moscow University Press, Moscow, Russia, pages 16–32, 1982. (Cited on p. 59) [42] W. B. Gragg. The QR algorithm for unitary Hessenberg matrices. J. Comput. Appl. Math., 16:1–8, 1986. (Cited on pp. viii, 59) [43] W. B. Gragg. Positive definite Toeplitz matrices, the Arnoldi process for isometric operators, and Gaussian quadrature on the unit circle. J. Comput. Appl. Math., 46:183–198, 1993. (Cited on p. 59) [44] W. B. Gragg. Stabilization of the UHQR algorithm. In Proceedings of the Guangzhou International Symposium on Computational Mathematics, Marcel Dekker, New York, pages 139–154, 1999. (Cited on p. 59)
Bibliography [45] W. B. Gragg and L. Reichel. A divide and conquer method for unitary and orthogonal eigenproblems. Numer. Math., 57:695–718, 1990. (Cited on p. 59) [46] R. Granat, B. K˚ agstr¨ om, and D. Kressner. A novel parallel QR algorithm for hybrid distributed memory HPC systems. SIAM J. Sci. Comput., 32:2345–2378, 2010. (Cited on p. 52) [47] M. Gu, R. Guzzo, X.-B. Chi, and X.-Q. Cao. A stable divide and conquer algorithm for the unitary eigenproblem. SIAM J. Matrix Anal. Appl., 25:385–404, 2003. (Cited on p. 59) [48] N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, Philadelphia, 2nd edition, 2002. (Cited on pp. 13, 20, 21) [49] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, UK, 2nd edition, 2013. (Cited on p. 78) [50] B. K˚ agstr¨ om and D. Kressner. Multishift variants of the QZ algorithm with aggressive early deflation. SIAM J. Matrix Anal. Appl., 29:199–227, 2006. (Cited on p. 90) [51] L. Karlsson, D. Kressner, and B. Lang. Optimally packed chains of bulges in multishift QR algorithms. ACM Trans. Math. Software, 40: art. 12, 2014. (Cited on p. 58) [52] R. Killip and I. Nenciu. Matrix models for circular ensembles. Int. Math. Res. Not., 2004(no. 50):2665–2701, 2004. (Cited on p. 59) [53] H. Kimura. Generalized Schwarz form and lattice-ladder realizations of digital filters. IEEE Trans. Circuits Syst., 32:1130–1139, 1985. (Cited on pp. 112, 114) [54] T. Mach and R. Vandebril. On deflations in extended QR algorithms. SIAM J. Matrix Anal. Appl., 35:559–579, 2014. (Cited on pp. viii, 47) [55] C. B. Moler and G. W. Stewart. An algorithm for generalized matrix eigenvalue problems. SIAM J. Numer. Anal., 10:241–256, 1973. (Cited on p. 89) [56] V. F. Pisarenko. The retrieval of harmonics from a covariance function. Geophys. J. Roy. Astron. Soc., 33:347–366, 1973. (Cited on p. 59) [57] M. Stewart. Stability properties of several variants of the unitary Hessenberg QRalgorithm. In V. Olshevsky, editor, Structured Matrices in Mathematics, Computer Science and Engineering, II. Volume 281 of Contemporary Mathematics. American Mathematical Society, Providence, RI, pages 57–72, 2001. (Cited on p. 59) [58] M. Stewart. An error analysis of a unitary Hessenberg QR algorithm. SIAM J. Matrix Anal. Appl., 28:40–67, 2006. (Cited on p. 59) [59] L. N. Trefethen and D. Bau III. Numerical Linear Algebra. SIAM, Philadelphia, 1997. (Cited on p. vii) [60] M. Van Barel, R. Vandebril, P. Van Dooren, and K. Frederix. Implicit double shift QR-algorithm for companion matrices. Numer. Math., 116:177–212, 2010. (Cited on p. 62) [61] P. Van Dooren and P. Dewilde. The eigenstructure of an arbitrary polynomial matrix: Computational aspects. Linear Algebra Appl., 50:545–579, 1983. (Cited on p. 102)
[62] R. Vandebril. Chasing bulges or rotations? A metamorphosis of the QR-algorithm. SIAM J. Matrix Anal. Appl., 32:217–247, 2011. (Cited on pp. viii, 105) [63] R. Vandebril and D. S. Watkins. An extension of the QZ algorithm beyond the Hessenberg-upper triangular pencil. Electron. Trans. Numer. Anal., 40:17–35, 2012. (Cited on pp. viii, 105) [64] R. Vandebril and D. S. Watkins. A generalization of the multishift QR algorithm. SIAM J. Matrix Anal. Appl., 33:759–779, 2012. (Cited on pp. viii, 45, 105, 122, 123, 129, 133, 138, 141) [65] T. L. Wang and W. B. Gragg. Convergence of the shifted QR algorithm for unitary Hessenberg matrices. Math. Comp., 71:1473–1496, 2002. (Cited on p. 59) [66] T. L. Wang and W. B. Gragg. Convergence of the unitary QR algorithm with unimodular Wilkinson shift. Math. Comp., 72:375–385, 2003. (Cited on p. 59) [67] R. C. Ward. The combination shift QZ algorithm. SIAM J. Numer. Anal., 12:835– 853, 1975. (Cited on p. 90) [68] D. S. Watkins. Some perspectives on the eigenvalue problem. SIAM Rev., 35:430– 471, 1993. (Cited on pp. 59, 112) [69] D. S. Watkins. The transmission of shifts and shift blurring in the QR algorithm. Linear Algebra Appl., 241–243:877–896, 1996. (Cited on p. 36) [70] D. S. Watkins. Performance of the QZ algorithm in the presence of infinite eigenvalues. SIAM J. Matrix Anal. Appl., 22:364–375, 2000. (Cited on pp. 90, 94) [71] D. S. Watkins. Product eigenvalue problems. SIAM Rev., 47:3–40, 2005. (Cited on pp. 89, 98) [72] D. S. Watkins. The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods. SIAM, Philadelphia, 2007. (Cited on pp. vii, 1, 27, 31, 34, 36, 49, 89, 90, 98, 136, 138) [73] D. S. Watkins. Fundamentals of Matrix Computations. Wiley, New York, 3rd edition, 2010. (Cited on pp. vii, viii, 1, 13, 23, 24, 36, 47, 49, 81, 89) [74] D. S. Watkins. Francis’s algorithm. Amer. Math. Monthly, 118:387–403, 2011. (Cited on pp. viii, 23) [75] D. S. Watkins and L. Elsner. Convergence of algorithms of decomposition type for the eigenvalue problem. Linear Algebra Appl., 143:19–47, 1991. (Cited on pp. 27, 34, 138)
Index

adjugate matrix, 78
Arnoldi process, 30
ascending sequence, 8, 123
backward stability
    and turnovers, 17
    companion pencil, 92
    first results, 20
    matrix polynomial, 99
    of deflation, 46
    of Francis's algorithm, 51
    unitary-plus-rank-one case, 72
bulge chasing, 25
cache, 52, 60
Cayley transform, 81
CMV matrix, 112
colleague matrix, 84
companion matrix, 60
companion pencil, 91
    block, 95
comrade matrix, 84
condensed form, 107
    defined, 113
    proper, 113
    twisted or zigzag, 114
core transformation
    defined, 3
    nontrivial, 11
decomposition, QR, 2
deflation, 26, 34, 46
    aggressive early, 52
    singular case, 49
descending sequence, 8, 123
eigenvalue
    of matrix polynomial, 94
    of pencil, 89
extended Krylov subspace, 133
Faddeev–Leverrier method, 79
floating-point standard, 14
flop, 1
fusion, 7
    details, 15
Hessenberg matrix, 1
    generalized, 107
    inverse, 111
    proper, 11, 23
Hessenberg-triangular form, 89
IEEE binary 64, 14
implicit-Q theorem, 26, 31
inverse Hessenberg form, 111
Krylov matrix, 35
Krylov subspace, 29
    extended, 133
less sim symbol (≲), 20
limb, 123, 129
main sequence, 123
matrix polynomial, 94
misfit, 39
Moler–Stewart algorithm, 89
multishift algorithms, 45, 120
overflow, 13, 14
pencil, 89
    companion, 91
polynomial
    matrix, 94
    orthogonal, 84
    scalar, 60, 84, 90
position vector, 112
    simple interpretation, 114
power method, 26, 29, 30
QR decomposition, 2
    and Francis algorithm, 35
    by core transformations, 105
QZ algorithm, 89
Rayleigh quotient shift, 40
regular
    matrix polynomial, 94
    pencil, 89
roundoff, unit, 14, 20, 45
shift through, 9
shifts, 23, 41, 138
    rationale for, 33
    Rayleigh quotient, 40
    Wilkinson, 41
stability, see backward stability
subspace iteration, 26
    with changes of coordinate system, 27, 136
turnover, 8
    as RQ to QR decomposition, 17
twisted form, 114
underflow, 13, 14
unit roundoff, 14, 20, 45
unitary eigenvalue problem, 59, 139
unitary-plus-rank-k matrix, 102
unitary-plus-rank-one matrix, 60
upper Hessenberg matrix, 1
Wilkinson shift, 41
zigzag form, 114
Fundamentals of Algorithms
Core-Chasing Algorithms for the Eigenvalue Problem
Eigenvalue computations are ubiquitous in science and engineering. John Francis's implicitly shifted QR algorithm has been the method of choice for small to medium-sized eigenvalue problems since its invention in 1959. This book presents a new view of this classical algorithm. While Francis's original procedure chases bulges, the new version chases core transformations, which

• allows the development of fast algorithms for eigenvalue problems with a variety of special structures, and
• leads to a fast and backward stable algorithm for computing the roots of a polynomial by solving the companion matrix eigenvalue problem.

The authors received a SIAM Outstanding Paper prize for this work. This book will be of interest to researchers in numerical linear algebra and their students.
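As a quick illustration of the root-finding application mentioned above, the following MATLAB sketch computes the roots of a polynomial as the eigenvalues of its companion matrix. This is only the naive version of the idea, using the built-in compan and eig on an arbitrarily chosen example polynomial rather than the authors' core-chasing code; the book's algorithm instead exploits the unitary-plus-rank-one structure of the companion matrix to obtain a fast, backward stable structured solve.

% Naive companion-matrix root finder (illustration only; not the book's
% structured core-chasing algorithm).
p = [1 -6 11 -6];      % coefficients of x^3 - 6x^2 + 11x - 6
C = compan(p);         % upper Hessenberg companion matrix of p
r = eig(C);            % its eigenvalues are the roots of p
disp(sort(r))          % prints the roots 1, 2, 3 (up to rounding)

MATLAB's own roots function proceeds in essentially this way, handing the companion matrix to an unstructured eigensolver.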
Jared L. Aurentz is a Severo Ochoa postdoctoral fellow at the Institute of Mathematical Sciences in Madrid and was previously a postdoctoral researcher in the numerical analysis group at the Mathematical Institute, University of Oxford. His research interests include developing fast algorithms for sparse and rank-structured matrices and algorithms at the intersection of linear algebra and approximation theory.
Thomas Mach is an assistant professor at Nazarbayev University, Kazakhstan. He was one of the LAA Early Career Speakers at the 21st Conference of the International Linear Algebra Society in 2017. He has authored more than 20 publications; his research interest is structure-preserving algorithms, especially for eigenvalue problems.

Leonardo Robol is a researcher at the Institute of Information Science and Technologies (ISTI) of the National Research Council of Italy (CNR). He received an honorable mention for the Householder Prize in 2017. His research interests include the use of rank structures in matrix computations, particularly eigenvalue problems and matrix equations.

Raf Vandebril is a professor at KU Leuven. His thesis led to two books on semiseparable matrices (eigenvalue problems and system solving), co-authored with Marc Van Barel and Nicola Mastronardi. His primary research interests are eigenvalue computations and structured matrices.

David S. Watkins is professor emeritus of mathematics at Washington State University. He is the author of three books on matrix computations and numerous scientific publications, including several articles in SIAM Review.
For more information about SIAM books, journals, conferences, memberships, or activities, contact:
Society for Industrial and Applied Mathematics 3600 Market Street, 6th Floor Philadelphia, PA 19104-2688 USA +1-215-382-9800 • Fax +1-215-386-7999 [email protected] • www.siam.org
ISBN 978-1-611975-33-8