
Conference Board of the Mathematical Sciences

CBMS Regional Conference Series in Mathematics Number 115

The Mutually Beneficial Relationship of Graphs and Matrices

Richard A. Brualdi

American Mathematical Society with support from the National Science Foundation


http://dx.doi.org/10.1090/cbms/115


Published for the Conference Board of the Mathematical Sciences by the American Mathematical Society Providence, Rhode Island with support from the National Science Foundation

NSF-CBMS Regional Research Conference in the Mathematical Sciences held at Iowa State University, Ames, IA, July 12–16, 2010. Partially supported by the National Science Foundation. The author acknowledges support from the Conference Board of the Mathematical Sciences and NSF grant DMS-0938261.

2000 Mathematics Subject Classification. Primary 05C50; Secondary 05B35, 05C20, 15A15, 15A18, 15B35.

For additional information and updates on this book, visit www.ams.org/bookpages/cbms-115

Library of Congress Cataloging-in-Publication Data

Brualdi, Richard A.
The mutually beneficial relationship of graphs and matrices / Richard A. Brualdi.
p. cm. — (CBMS regional conference series in mathematics ; no. 115)
Includes bibliographical references and index.
ISBN 978-0-8218-5315-3 (alk. paper)
1. Matrices. 2. Graph theory. 3. Combinatorial analysis. 4. Algebras, Linear. I. Title.
QA188.B794 2011
511.5—dc23
2011014290

Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy a chapter for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given.

Republication, systematic copying, or multiple reproduction of any material in this publication is permitted only under license from the American Mathematical Society. Requests for such permission should be addressed to the Acquisitions Department, American Mathematical Society, 201 Charles Street, Providence, Rhode Island 02904-2294 USA. Requests can also be made by e-mail to [email protected].

© 2011 by the American Mathematical Society. All rights reserved. The American Mathematical Society retains all rights except those granted to the United States Government. Printed in the United States of America. The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability. Visit the AMS home page at http://www.ams.org/

Dedicated to my Grandchildren:
Samantha Brualdi Shalgos
Andrew Brualdi Shalgos
Olga Brualdi

Contents

Preface

Chapter 1. Some Fundamentals
  1.1. Nonnegative Matrices
  1.2. Symmetric Matrices
  Bibliography

Chapter 2. Eigenvalues of Graphs
  2.1. Some Basic Properties
  2.2. Eigenvalues and Graph Parameters
  2.3. Graphs with small λmax
  2.4. Laplacian Matrix of a Graph
  Bibliography

Chapter 3. Rado-Hall Theorem and Applications
  3.1. Rado-Hall Theorem
  3.2. Applications
  Bibliography

Chapter 4. Colin de Verdière Number
  4.1. Motivation and Definition
  4.2. Colin de Verdière Number and Graph Properties
  Bibliography

Chapter 5. Classes of Matrices of Zeros and Ones
  5.1. Equivalent Formulations
  5.2. The Classes A(R, S)
  5.3. A Generalization
  Bibliography

Chapter 6. Matrix Sign Patterns
  6.1. Sign-Nonsingular Matrices
  6.2. An Application
  6.3. Spectrally Arbitrary Sign Patterns
  Bibliography

Chapter 7. Eigenvalue Inclusion and Diagonal Products
  7.1. Some Classical Theorems
  7.2. Diagonal Products and Nonsingularity
  Bibliography

Chapter 8. Tournaments
  8.1. Landau's Theorem
  8.2. A Special Tournament in T(R)
  8.3. Eigenvalues of Tournament Matrices
  Bibliography

Chapter 9. Two Matrix Polytopes
  9.1. The Doubly Stochastic Polytope
  9.2. Alternating Sign Matrices
  9.3. The Alternating Sign Matrix Polytope
  9.4. ASM Patterns
  Bibliography

Chapter 10. Digraphs and Eigenvalues of (0, 1)-matrices
  10.1. (0, 1)-matrices with all Eigenvalues Positive
  10.2. Totally Nonnegative Matrices
  10.3. Totally Nonnegative (0, 1)-matrices
  Bibliography

Index

Preface

This monograph is based on the ten lectures I gave at Iowa State University in Ames during the week of July 12–16, 2010. The purpose of the lectures was to show the fascinating and mutually beneficial relationship between matrices and graphs (the nonzero pattern of a matrix): (i) knowledge about one of the graphs that can be associated with a matrix is used to illuminate matrix properties and to get better information about the matrix, and (ii) linear algebraic properties of one of the matrices associated with a graph are used to get useful combinatorial information about the graph.

The lectures were not intended to be comprehensive on any of the topics treated; they could not have been within the time framework imposed by ten one-hour lectures. Nor were the lectures intended to cover all instances in which the interplay between matrices and graphs has turned out to be useful; again an impossibility within the time framework. The particular content of the lectures was chosen for its accessibility, beauty, and current relevance, and for the possibility of enticing the audience to want to learn more. It was, of course, influenced by the author's personal interests and expertise. In this monograph, I have stayed within the context of the lectures, and have avoided writing a more comprehensive book. In most cases I have not given original references for results if they are readily available in one or more of the books referenced. Just as we did for the lectures, we assume that the reader is familiar with many of the basic concepts and facts of matrix theory and graph theory. We define some standard terms, but many are presumed known and can be found in most elementary and advanced books.

I am indebted to Leslie Hogben and Bryan Shader for organizing this CBMS Regional Conference and for suggesting me as principal lecturer. They did a superb job, from recruiting a diverse group of participants to arranging a stimulating and fun daily schedule with afternoon and evening activities. I would also like to express my gratitude to the participants for their attention, stimulating questions, and camaraderie. Finally, I want to thank the Department of Mathematics of Iowa State University for hosting the conference and the National Science Foundation, under grant number DMS-0938261, for financially supporting it.


http://dx.doi.org/10.1090/cbms/115/01

CHAPTER 1

Some Fundamentals

In this chapter, motivated by a natural problem, we introduce some of the fundamental concepts and ideas of nonnegative and symmetric matrices that we shall use throughout this book.

1.1. Nonnegative Matrices

We begin with a motivating problem: Let n be a positive integer and consider the numbers 1, 2, 3, . . . , n². How should these numbers be arranged in an n × n matrix to give the largest spectral radius? Equivalently, how should the edges of the complete bipartite graph Kn,n be given the weights 1, 2, . . . , n² in order that its (weighted) biadjacency matrix has the largest spectral radius? Note that in this question we are not asking for the largest spectral radius itself (although that would certainly be of interest if it could be written down) but only for the pattern of the numbers that achieves the maximum spectral radius. Thus the question is a combinatorial question (the pattern) about a linear algebraic concept (the spectral radius).

Before proceeding with this question, we explain some of the terms used. If A is an n × n complex matrix, then A has n eigenvalues λ1, λ2, . . . , λn. The spectral radius of A is ρ(A) = max{|λ1|, |λ2|, . . . , |λn|}, the maximum modulus of the eigenvalues of A. A bipartite graph is a graph whose vertices can be bipartitioned (partitioned into two sets) so that each edge joins a vertex in one set to a vertex in the other set. The bipartite graph is complete provided it contains all possible edges between the two parts of the bipartition; if the two parts have m and n vertices, respectively, we get the complete bipartite graph Km,n. We usually think of and draw a bipartite graph with one set of vertices in a vertical line on the left (the left vertices) and the other set of vertices in a vertical line on the right (the right vertices).

Example 1.1. A bipartite graph G ⊆ K3,4 with 3 left vertices and 4 right vertices is shown in Fig. 1.1.

Figure 1.1

Taking the left vertices in order from top to bottom and also the right vertices from top to bottom, the biadjacency matrix is the 3 × 4 (0,1)-matrix

    [ 1 1 1 0 ]
    [ 1 0 0 1 ]
    [ 0 1 0 1 ]

Here the rows correspond to the left vertices and the columns correspond to the right vertices. An entry equal to 1 indicates the presence of an edge between a left vertex and a right vertex; an entry equal to 0 indicates the absence of such an edge. If the edges are weighted, then in place of the 1s we use the corresponding weights. (One could then think of the 0s as corresponding to edges of weight 0.)

The entries of the matrices in our problem are all positive. Hence the theory of positive matrices, more generally the theory of nonnegative matrices, applies. This theory tells us that all the matrices considered in the problem have a positive real eigenvalue equal to their spectral radius, which we denote by λmax (λmax(A) if we need to refer to a specific matrix A without ambiguity). Thus we seek a pattern with the largest λmax.

Example 1.2. Let n = 2 so that we want the pattern of 1, 2, 3, 4 in a 2 × 2 matrix that gives the largest spectral radius. It is easy to check that this largest spectral radius is attained by the matrix

    [ 4 3 ]
    [ 2 1 ]

whose eigenvalues are (5 ± √33)/2. Thus we can say that λmax = (5 + √33)/2 = 5.3723.

In general, there are (n²)! possible arrangements of 1, 2, 3, . . . , n² into an n × n matrix. Some of these matrices will be permutation similar, that is, obtained from one another by simultaneous row and column permutations, and so related as in B = P⁻¹AP where P is a permutation matrix; others will be transposes of one another, and thus will have the same eigenvalues and so the same spectral radius. This reduces (n²)! to (n²)!/(2n!), still a very large number.
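For such small n the maximizing pattern can be checked by exhaustive search. The following sketch (Python with NumPy; the code and names are ours, not part of the text) enumerates all (n²)! arrangements for n = 2 and reports a maximizer, which is permutation similar or transposed to the matrix of Example 1.2.

    import itertools
    import numpy as np

    def spectral_radius(M):
        # rho(M): maximum modulus of the eigenvalues
        return max(abs(ev) for ev in np.linalg.eigvals(M))

    n = 2
    arrangements = (np.array(p, dtype=float).reshape(n, n)
                    for p in itertools.permutations(range(1, n * n + 1)))
    best = max(arrangements, key=spectral_radius)
    print(best)                   # permutation similar/transposed to [[4, 3], [2, 1]]
    print(spectral_radius(best))  # 5.3722... = (5 + sqrt(33)) / 2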


Example 1.3. Let n = 3. What arrangement of 1, 2, 3, . . . , 9 into a 3 × 3 matrix gives the largest λmax? Some possibilities are

    [ 9 8 5 ]                          [ 9 8 5 ]
    [ 7 6 3 ]  with λmax = 16.7409,    [ 7 6 2 ]  with λmax = 16.7451.
    [ 4 2 1 ]                          [ 4 3 1 ]

Notice how the interchange of 2 and 3 in the first matrix gives a larger λmax in the second matrix. Thus, not surprisingly, λmax is sensitive to small perturbations in the positions of the entries. In fact, the arrangement that gives the largest λmax is

    [ 9 8 4 ]
    [ 7 6 3 ]  with λmax = 16.7750.
    [ 5 2 1 ]

The arrangement of 1, 2, 3, . . . , n² into an n × n matrix that gives the largest λmax is unknown in general (probably there is no general description of a pattern that gives the maximum), but there is a 1964 theorem of Schwarz [7] which considerably reduces the number of patterns that need to be considered. It holds for any sequence of n² nonnegative numbers.

Theorem 1.4. Let n be a positive integer and let c1, c2, c3, . . . , cn² be a sequence of nonnegative real numbers. Then an arrangement of these numbers into an n × n matrix with the largest λmax can be found among those matrices A = [aij] which are monotone nonincreasing in each row and each column:

    ai1 ≥ ai2 ≥ · · · ≥ ain (1 ≤ i ≤ n),    a1j ≥ a2j ≥ · · · ≥ anj (1 ≤ j ≤ n).

There may be matrices with the largest λmax that do not satisfy the monotone property stated in the theorem.

Example 1.5. All of the matrices

    [ 1 1 1 ]    [ 1 1 1 ]    [ 1 1 1 ]
    [ 1 1 1 ]    [ 1 1 0 ]    [ 1 0 1 ]
    [ 1 0 0 ]    [ 1 0 1 ]    [ 1 1 0 ]

have the largest λmax, namely 1 + √2, for the sequence 0, 0, 1, 1, 1, 1, 1, 1, 1.

How does one prove such a theorem as Theorem 1.4, in which we have to compare λmax's without knowing their values? There is a well-developed theory of nonnegative matrices going back to O. Perron and G. Frobenius (with additional contributions of H. Wielandt), usually called the Perron-Frobenius theory of nonnegative matrices, that provides powerful techniques for such comparisons. Since eigenvalues, and in particular the spectral radius, depend continuously on the entries of a matrix, any 0s among the numbers ci can be replaced by a small positive number ε, and thus we can assume that c1, c2, c3, . . . , cn² are all positive. This allows us to avoid questions of reducibility (which we will address later) and to use the strongest possible theorems.


Let A be an n × n positive matrix. Then the following properties hold:

PF1. λmax is a simple eigenvalue of A.
PF2. There is a positive eigenvector of A for the eigenvalue λmax: Ax = λmax x, x a positive vector.
PF3. λmax is the only eigenvalue of A with a corresponding nonnegative eigenvector.
PF4. If y is a positive vector and α is a scalar such that Ay ≥ αy (entrywise) but Ay ≠ αy, then λmax > α.
PF5. If y is a positive vector and α is a scalar such that Ay ≤ αy (entrywise) but Ay ≠ αy, then λmax < α.
PF6. If B is an n × n matrix with B ≥ A (entrywise) but B ≠ A, then λmax(B) > λmax(A).
PF7. Let rmin be the minimum row sum of A and let rmax be the maximum row sum of A. Then rmin ≤ λmax ≤ rmax, with equality at either end if and only if rmin = rmax.

The basic properties here are PF1 and PF2, and there are many expositions of their proofs. We shall assume PF1 and PF2, and show how the other properties follow.
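Before deriving the remaining properties, here is a quick numerical sanity check of PF6 and PF7 on a randomly chosen positive matrix (a sketch only; the matrix and the perturbation are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.uniform(0.1, 1.0, size=(5, 5))         # a positive 5 x 5 matrix

    lam_max = max(np.linalg.eigvals(A).real)       # the Perron root is real
    row_sums = A.sum(axis=1)
    assert row_sums.min() <= lam_max <= row_sums.max()       # PF7

    B = A.copy()
    B[2, 3] += 0.5                                 # B >= A entrywise, B != A
    assert max(np.linalg.eigvals(B).real) > lam_max          # PF6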

Proofs of PF3 to PF7 from PF1 and PF2: Let A = [aij] and let x = (x1, x2, . . . , xn)t be a positive eigenvector of the transpose matrix At for λmax. Then xtA = λmax xt, that is,

    ∑_{j=1}^{n} xj aji = λmax xi    (1 ≤ i ≤ n).

Let y = (y1, y2, . . . , yn)t be an arbitrary positive vector, and let

    μi = (1/yi) ∑_{j=1}^{n} aij yj    (1 ≤ i ≤ n).

Then

    ∑_{i=1}^{n} (μi − λmax) xi yi = ∑_{i=1}^{n} (μi yi) xi − ∑_{i=1}^{n} (λmax xi) yi
        = ∑_{i=1}^{n} (∑_{j=1}^{n} aij yj) xi − ∑_{i=1}^{n} (∑_{j=1}^{n} xj aji) yi
        = ∑_{i=1}^{n} ∑_{j=1}^{n} (aij xi yj − aji xj yi)
        = 0.

Now xi yi > 0 for i = 1, 2, . . . , n. Thus if all the μi are equal to the same number μ, that is, y is a positive eigenvector of A with eigenvalue μ, then, since by the above ∑_{i=1}^{n} (μ − λmax) xi yi = 0, we have μ = λmax. Thus PF3 holds. If not all the μi are equal, then ∑_{i=1}^{n} (μi − λmax) xi yi = 0 implies that

    min{μi : 1 ≤ i ≤ n} < λmax < max{μi : 1 ≤ i ≤ n},

from which we get PF4 and PF5. PF6 now follows from PF5. PF7 follows from PF4 and PF5 by taking y to be the vector jn of n 1s.

We are now in a position to prove Theorem 1.4.

Proof. Let An be the set of all n × n matrices whose n² entries are the numbers c1, c2, c3, . . . , cn². Let ρ be the largest λmax attainable by a matrix in An, and let An^max be the subset of An consisting of those matrices with ρ as an eigenvalue. Finally, let A = [aij] be a matrix in An^max, and let x = (x1, x2, . . . , xn)t be a positive vector such that Ax = ρx. After simultaneous permutations of rows and columns, we may assume that x1 ≥ x2 ≥ · · · ≥ xn > 0.

Suppose we interchange two consecutive entries in some row of A, say we interchange akl and ak,l+1, to get a matrix B ∈ An. Then Bx − ρx = Bx − Ax = z, where z has only one possible nonzero entry, namely the entry (ak,l+1 − akl)(xl − xl+1) in the kth position. If xl = xl+1, then Bx = ρx, and thus B is also in An^max. Now suppose that xl > xl+1 and ak,l+1 > akl. Then the only nonzero entry of z is positive, and hence Bx ≥ ρx but Bx ≠ ρx. By PF4, λmax(B) > ρ, a contradiction. Thus, if xl > xl+1, we must have akl ≥ ak,l+1. Since k and l were arbitrary, we conclude that there is a matrix A′ ∈ An^max, obtained from A by rearranging the entries in each row, such that the entries in each row are monotone nonincreasing. Repeating the above argument on (A′)t, we conclude that there is a matrix A′′ ∈ An^max, obtained from A′ by rearranging the entries in each column, such that the entries in each column are monotone nonincreasing. The proof is completed by noting (an exercise) that the entries in each row of A′′ continue to be monotone nonincreasing after the entries in each column of A′ are rearranged to be monotone nonincreasing. □

Example 1.6. Let n = 3 and consider the nine numbers 1, 1, 1, 1, 2, 2, 2, 2, 2. Then, using Theorem 1.4 and the fact that a matrix and its transpose have the same λmax, we see that one of the following matrices attains the largest λmax among all the 3 × 3 matrices whose entries are four 1s and five 2s:

    A1 = [ 2 2 2 ]    A2 = [ 2 2 2 ]
         [ 2 2 1 ]         [ 2 1 1 ]
         [ 1 1 1 ]         [ 2 1 1 ]

In fact, λmax(A1) = 4.7913 < 4.8284 = λmax(A2).

In a similar way one can prove the following theorem.

Theorem 1.7. Let n be a positive integer and let c1, c2, c3, . . . , cn² be a sequence of nonnegative real numbers. Then an arrangement of these numbers into an n × n matrix with the smallest λmax can be found among those matrices A = [aij] which are monotone nonincreasing in each row and monotone nondecreasing in each column:

    ai1 ≥ ai2 ≥ · · · ≥ ain (1 ≤ i ≤ n),    a1j ≤ a2j ≤ · · · ≤ anj (1 ≤ j ≤ n).

For later reference we remark that the properties PF1 to PF7 hold for all square, irreducible nonnegative matrices. Here an n × n matrix A is reducible provided there exists a permutation matrix P such that

    P⁻¹AP = [ A1    O  ]
            [ A21   A2 ]

where A1 is r × r and O is the r × (n − r) zero matrix, for some integer r with 0 < r < n, and A is irreducible otherwise. If A is reducible, then the eigenvalues of A are those of A1 taken together with those of A2, and λmax(A) = max{λmax(A1), λmax(A2)}. If A is reducible, then λmax is a nonnegative (not necessarily positive) eigenvalue of A with a nonnegative (not necessarily positive) eigenvector, but λmax may not be a simple eigenvalue, since it may be an eigenvalue of both A1 and A2. In general, if X is an n × n matrix, there exists a permutation matrix Q such that we get the triangular block form

    Q⁻¹XQ = [ X1                         ]
            [ X21  X2                    ]
            [ X31  X32  X3               ]
            [ ...                        ]
            [ Xs1  Xs2  Xs3  · · ·  Xs   ]

(with zero blocks above the diagonal), where s is a positive integer and X1, X2, . . . , Xs are irreducible square matrices. The matrices X1, X2, . . . , Xs are the irreducible components of X, and they are uniquely determined up to simultaneous permutations of their rows and columns.
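In graph terms, the triangular block form is the condensation of the digraph of X into its strongly connected components: X is irreducible exactly when that digraph is strongly connected. A small sketch of this test (our illustration, using Boolean reachability via powers of I + X; not from the text):

    import numpy as np

    def is_irreducible(X):
        """True exactly when the digraph of X is strongly connected."""
        n = X.shape[0]
        M = (np.eye(n) + (X != 0)).astype(int)
        P = np.linalg.matrix_power(M, n - 1)   # (i, j) > 0 iff a path i -> j exists
        return bool((P > 0).all())

    print(is_irreducible(np.array([[0, 1, 0],
                                   [0, 0, 1],
                                   [1, 0, 0]])))         # a 3-cycle: True
    print(is_irreducible(np.triu(np.ones((3, 3)))))      # upper triangular: False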


In [7], motivated by Theorem 1.4, the following fascinating question was considered. We continue with the notation used in the proof of Theorem 1.4.

Question: Let c1, c2, c3, . . . , cn² be n² distinct nonnegative numbers arranged so that

    0 ≤ c1 < c2 < c3 < · · · < cn².

Let An^(≥) be the subset of An consisting of those matrices in which the entries in each row and in each column are decreasing, and let An^max(≥) = An^max ∩ An^(≥). By Theorem 1.4, An^max(≥) ≠ ∅. Similarly, let Nn^(≥) be the set of all matrices whose entries are 1, 2, 3, . . . , n² with the entries in each row and each column decreasing. A matrix A in An belongs to An^(≥) if and only if the matrix g(A), obtained from A by replacing each ci with i, belongs to Nn^(≥). Determine the range of the function

(1.1)    g : An^max(≥) → Nn^(≥),

over all possible 0 ≤ c1 < c2 < c3 < · · · < cn².

First let n = 2. Then N2^(≥) consists only of

    [ 4 3 ]
    [ 2 1 ]

and its transpose. Thus the function (1.1) is onto.

Now let n = 3. Then N3^(≥) consists of 21 matrices and their transposes. As shown in [7], the range of (1.1) consists only of the three matrices

    [ 9 8 5 ]    [ 9 8 4 ]    [ 9 8 4 ]
    [ 7 4 3 ]    [ 7 6 3 ]    [ 7 5 3 ]
    [ 6 2 1 ]    [ 5 2 1 ]    [ 6 2 1 ]

and their transposes. Thus for n = 3, we have a test set of size 3 to determine the biggest λmax. For the smallest λmax, a test set of 13 matrices is determined in [7], but it is not shown that this test set is minimal.

In case c1, c2, . . . , cn² only contains 0s and 1s, say e 1s and n² − e 0s, the matrices in An are the biadjacency matrices of bipartite graphs with 2n vertices and e edges (equivalently, adjacency matrices of digraphs with n vertices and e edges, some of which may be loops). Friedland [5] determined those (0, 1)-matrices with e 1s (digraphs with e edges) with the largest λmax for several classes of values of e. For a summary of these and other results, see [2].

1.2. Symmetric Matrices

We begin this section by reviewing some of the basic facts about real symmetric matrices. Let A = [aij] be an n × n real symmetric matrix.


S1. All the eigenvalues of A are real numbers, and thus can be ordered as λmin = λn ≤ λn−1 ≤ · · · ≤ λ1 = λmax. Also there exist eigenvectors x1, x2, . . . , xn of A for λ1, λ2, . . . , λn, respectively, that form an orthonormal basis of ℝⁿ.

S2. For every nonzero vector x ∈ ℝⁿ,

    λn ≤ (xtAx)/(xtx) ≤ λ1.

Equivalently, for every vector x ∈ ℝⁿ of length 1, that is, xtx = 1, λn ≤ xtAx ≤ λ1. In fact, λ1 = max{xtAx : xtx = 1} and λn = min{xtAx : xtx = 1}.

S3. More generally, each of the eigenvalues of A can be characterized using subspaces of ℝⁿ. Let x1, x2, . . . , xn be an orthonormal basis of ℝⁿ consisting of eigenvectors of A for λ1, λ2, . . . , λn, respectively. Then for a nonzero vector y ∈ span{xi, . . . , xn},

    λi ≥ (ytAy)/(yty), with equality for y = xi,

and for a nonzero vector y ∈ span{x1, . . . , xi},

    λi ≤ (ytAy)/(yty), with equality for y = xi.

Thus

    λi = inf{ sup{ (ytAy)/(yty) : y ∈ W } : W a subspace of ℝⁿ of dimension n − i + 1 }.

S4. (Interlacing of eigenvalues I.) Let Q be an n × m real matrix with orthonormal columns (QtQ = Im), and let the eigenvalues of the m × m matrix QtAQ be μ1 ≥ μ2 ≥ · · · ≥ μm. Then

    λ_{n−m+i} ≤ μi ≤ λi    (i = 1, 2, . . . , m).

S5. (Interlacing of eigenvalues II.) If μ1 ≥ μ2 ≥ · · · ≥ μm are the eigenvalues of an m × m principal submatrix of A, then

    λ_{n−m+i} ≤ μi ≤ λi    (i = 1, 2, . . . , m).

In particular, if μ1 ≥ μ2 ≥ · · · ≥ μn−1 are the eigenvalues of an (n−1) × (n−1) principal submatrix of A, then

    λn ≤ μn−1 ≤ λn−1 ≤ · · · ≤ μ2 ≤ λ2 ≤ μ1 ≤ λ1.


Proofs of S1 to S5: Properties S1 to S3 are standard and proofs can be found in many references. Property S4 in this general form was formulated by Haemers (see e.g. [4]). To prove it, let x1, x2, . . . , xn be orthonormal eigenvectors of A for its eigenvalues λ1, λ2, . . . , λn, respectively. Let y1, y2, . . . , ym be orthonormal eigenvectors of QtAQ for its eigenvalues μ1, μ2, . . . , μm, respectively. Let Vi be the subspace of dimension i spanned by the vectors y1, y2, . . . , yi (i = 1, 2, . . . , m), let Ui−1 be the subspace of dimension i − 1 spanned by x1, x2, . . . , xi−1, and let Wi−1 be the subspace spanned by Qtx1, Qtx2, . . . , Qtxi−1. The orthogonal complement Wi−1⊥ (in ℝᵐ) has dimension at least m − i + 1, and hence there exists a nonzero vector zi ∈ Vi ∩ Wi−1⊥. Then (Qzi)txj = (zi)tQtxj = 0 for j = 1, 2, . . . , i − 1, and so Qzi ∈ Ui−1⊥, the subspace of dimension n − i + 1 spanned by xi, xi+1, . . . , xn. Thus by S2 and S3,

    λi ≥ ((Qzi)tA(Qzi))/((Qzi)t(Qzi)) = ((zi)tQtAQzi)/((zi)tzi) ≥ μi.

The other inequality follows by applying this argument to −A. This proves S4.

To prove S5, consider the m × m principal submatrix A[i1, i2, . . . , im] determined by rows and columns with indices i1, i2, . . . , im. Let Q be the n × m matrix whose columns are the (orthonormal) standard unit vectors e_{i1}, e_{i2}, . . . , e_{im}. Then A[i1, i2, . . . , im] = QtAQ, and S5 follows from S4.

A useful corollary, also due to Haemers, can be derived from property S4.

Corollary 1.8. Let A be an n × n real symmetric matrix with eigenvalues λ1 ≥ λ2 ≥ · · · ≥ λn. Let α1, α2, . . . , αm be a partition of {1, 2, . . . , n} into m nonempty sets of consecutive integers, where |αi| = ni (i = 1, 2, . . . , m), and let

    [ A11  A12  · · ·  A1m ]
    [ A21  A22  · · ·  A2m ]
    [ ...                  ]
    [ Am1  Am2  · · ·  Amm ]

be the corresponding partition of A (the matrix Aij is the submatrix A[αi|αj] of A determined by the rows with index in αi and the columns with index in αj). Finally, let B = [bij] be the m × m matrix where

    bij = (the sum of the entries of Aij)/ni    (i, j = 1, 2, . . . , m),

the average row sum of Aij. Then the eigenvalues of B interlace the eigenvalues of A.

Proof. Define the n × m matrix Q where, for i = 1, 2, . . . , m, the ith column has 1/√ni in the positions of αi and 0s elsewhere. The matrix Q has orthonormal columns, and QtAQ is similar to B (conjugation by diag(√n1, . . . , √nm) takes one to the other), so they have the same eigenvalues. Now apply S4. □
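A small numerical check of Corollary 1.8 (a sketch; the matrix and the partition below are made up for illustration): the eigenvalues of the quotient matrix B should interlace those of A.

    import numpy as np

    A = np.array([[0, 1, 1, 0, 1],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 1, 1],
                  [0, 1, 1, 0, 1],
                  [1, 0, 1, 1, 0]], dtype=float)   # a symmetric matrix
    blocks = [[0, 1], [2, 3, 4]]                   # alpha_1, alpha_2 (consecutive)

    B = np.array([[A[np.ix_(bi, bj)].sum() / len(bi) for bj in blocks]
                  for bi in blocks])               # average row sums of the A_ij

    lam = np.sort(np.linalg.eigvalsh(A))[::-1]
    mu = np.sort(np.linalg.eigvals(B).real)[::-1]  # real: B is similar to Qt A Q
    n, m = len(A), len(blocks)
    for i in range(m):                             # lam[n-m+i] <= mu[i] <= lam[i]
        assert lam[n - m + i] - 1e-9 <= mu[i] <= lam[i] + 1e-9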


There is an analogue of Theorem 1.4 for symmetric, nonnegative matrices with 0s on the main diagonal, due to Brualdi and Hoffman [3], and we discuss it now. Let n ≥ 2 be an integer and let c1, c2, . . . , c_{n(n−1)/2} be nonnegative numbers. Let Sn be the set of all n × n symmetric matrices with 0s on the main diagonal whose entries above the main diagonal are the numbers c1, c2, . . . , c_{n(n−1)/2}. Let Sn^(≥) be the subset of Sn consisting of those matrices A = [aij] such that whenever i < j, then akl ≥ aij for all k < l with k ≤ i and l ≤ j; equivalently, the entries in each row and column above the main diagonal are monotone nonincreasing. The following theorem is a strong symmetric analogue of Theorem 1.4.

Theorem 1.9. If A is a matrix in Sn with the largest λmax, then there is an n × n permutation matrix P such that P⁻¹AP ∈ Sn^(≥).

Proof. Let ρ be the largest λmax attainable by a matrix in Sn, and let A = [aij] be a matrix in Sn with λmax = ρ. Since A is a nonnegative matrix, there exists a nonnegative eigenvector x = (x1, x2, . . . , xn)t of A for its eigenvalue ρ with xtx = 1. Since P⁻¹AP ∈ Sn for every A ∈ Sn and every n × n permutation matrix P, we may assume that x1 ≥ x2 ≥ · · · ≥ xn ≥ 0.

First suppose that there exist p and q with p < q such that ap,q+1 > apq. Let B be the matrix in Sn obtained by switching apq and ap,q+1, and switching aqp and aq+1,p. Then

    xtBx − ρ = xtBx − xtAx = 2(ap,q+1 − apq) xp (xq − xq+1) ≥ 0.

Hence by S2, λmax(B) ≥ ρ, with strict inequality if xp(xq − xq+1) ≠ 0. Thus xp = 0 or xq = xq+1. To complete the proof, we show that each of these possibilities leads to a contradiction.

Suppose that xp > 0. Then xq = xq+1 and xtBx = ρ, so that Bx = ρx. On the other hand, the qth component of Bx and that of Ax differ by (ap,q+1 − apq)xp > 0, a contradiction. Thus xp = 0, and for some s ≤ p − 1, we have xs > 0 = xs+1 = · · · = xn. Therefore ρ = λmax(A[1, 2, . . . , s]). If there were an off-diagonal 0 in A[1, 2, . . . , s], then since (x1, x2, . . . , xs) is a positive vector, we could move ap,q+1 to its position (and move aq+1,p to the symmetrically opposite position) and, by S2, increase the λmax, a contradiction. Thus A[1, 2, . . . , s] contains only positive entries off the main diagonal and thus is irreducible. The (s + 1) × (s + 1) matrix B defined by

    B = [ A[1, 2, . . . , s]   z ]
        [ zt                  0 ]

where z has a positive entry followed by all 0s, is irreducible and, by PF6 for irreducible matrices, satisfies λmax(B) > λmax(A[1, 2, . . . , s]) = ρ. Since ap,q+1 is positive, there is a matrix B′ in Sn containing B as a principal submatrix; hence by S5, λmax(B′) > ρ, another contradiction.

In a similar way we can get a contradiction if there are integers p and q with p + 1 < q and ap+1,q > apq. □
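Theorem 1.9 can be checked by brute force for small graphs. The sketch below (ours, illustrative only) searches all graphs with 5 vertices and 6 edges for the largest λmax; the maximizer found is a labeling of K4 together with an isolated vertex, in accordance with the staircase property asserted by the theorem.

    import itertools
    import numpy as np

    n, e = 5, 6
    pairs = list(itertools.combinations(range(n), 2))

    def lam_max(edges):
        A = np.zeros((n, n))
        for i, j in edges:
            A[i, j] = A[j, i] = 1
        return np.linalg.eigvalsh(A)[-1]

    best = max(itertools.combinations(pairs, e), key=lam_max)
    print(sorted(best), lam_max(best))   # a labeling of K4 plus an isolated
                                         # vertex; lam_max = 3.0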


For nonsymmetric matrices the analogue of the stronger conclusion of Theorem 1.9 (that every matrix with the largest λmax has the monotone property after simultaneous permutations) is not true in general; see Example 1.5.

In case c1, c2, . . . , c_{n(n−1)/2} only contains 0s and 1s, say e 1s and n(n−1)/2 − e 0s, the matrices in Sn are the adjacency matrices of graphs with n vertices and e edges (no loops). Theorem 1.9 then asserts that in order for a matrix in Sn to achieve the largest λmax, it must have the property (after simultaneous row and column permutations) that whenever apq = 1 with p < q, then aij = 1 for all i and j with i < j and i ≤ p and j ≤ q. For each e, Rowlinson [8] (see also [4]) determined the matrices with the largest λmax, that is, those graphs on n vertices and e edges with the largest λmax of their adjacency matrices.

A general reference for nonnegative matrices is [1], and a general reference for symmetric matrices is [6], but there are many others.

Bibliography

[1] R.B. Bapat and T.E.S. Raghavan, Nonnegative Matrices and Applications, Encyclopedia of Mathematics and its Applications, 64, Cambridge University Press, Cambridge, UK, 1997.
[2] R.A. Brualdi, Spectra of digraphs, Linear Algebra Appl., 432 (2010), 2181–2213.
[3] R.A. Brualdi and A.J. Hoffman, On the spectral radius of (0,1)-matrices, Linear Algebra Appl., 65 (1985), 133–146.
[4] D. Cvetković, P. Rowlinson, and S. Simić, Eigenspaces of Graphs, Encyclopedia of Mathematics and its Applications, 66, Cambridge University Press, Cambridge, UK, 1997.
[5] S. Friedland, The maximum eigenvalue of 0-1 matrices with prescribed number of 1s, Linear Algebra Appl., 69 (1985), 33–69.
[6] B.N. Parlett, The Symmetric Eigenvalue Problem, Classics in Applied Mathematics, 20, SIAM, Philadelphia, PA, 1998.
[7] B. Schwarz, Rearrangements of square matrices with non-negative elements, Duke Math. J., 31 (1964), 45–62.
[8] P. Rowlinson, On the maximum index of graphs with a prescribed number of edges, Linear Algebra Appl., 110 (1988), 43–53.

http://dx.doi.org/10.1090/cbms/115/02

CHAPTER 2

Eigenvalues of Graphs

In this chapter we demonstrate how certain linear algebraic properties of the adjacency matrix of a graph can be used to obtain information about structural properties of the graph.

2.1. Some Basic Properties

Let G be a graph of order n. Thus G has a vertex set V of n vertices, which we take to be 1, 2, . . . , n, and an edge set E consisting of pairs {i, j} of distinct vertices. Sometimes we write these pairs as ij (or as ji). With this labeling of vertices, the adjacency matrix of G is the n × n matrix A = [aij] where

    aij = 1 if ij is an edge, and aij = 0 otherwise.

Thus A is a symmetric, nonnegative matrix with 0s on its main diagonal, and both the Perron-Frobenius theory and the theory of real symmetric matrices apply. If we reorder the vertices (attach the labels 1, 2, . . . , n in a different way), the result is a matrix P⁻¹AP for some permutation matrix P. The eigenvalues of the graph G are the eigenvalues of the adjacency matrix A, and they do not depend on the particular labeling chosen. Since A is a symmetric matrix with zero trace, we order these eigenvalues as

    λn ≤ λn−1 ≤ · · · ≤ λ1,  where λ1 + λ2 + · · · + λn = 0,

and hence λn ≤ 0. Here λ1 is what we have also called λmax. We use λ1 and λmax interchangeably and write λmax(G) when it is necessary to avoid ambiguity.

So what does A tell us about G? On the one hand, everything, since A is just another way to specify G. But G, and so A, have "secrets" that have to be coaxed out of them.

Example 2.1. The adjacency matrix of the complete graph Kn of order n is the matrix A = Jn − In, where Jn is the n × n matrix of all 1s. This graph has eigenvalues −1, . . . , −1, n − 1. (For the eigenvalue n − 1, A has eigenvector jn = (1, 1, . . . , 1)t. The matrix A − (−1)In = Jn has rank 1, and thus −1 is an eigenvalue of multiplicity n − 1.)


The adjacency matrix of the complete bipartite graph Km,n is

    A = [ Om,m  Jm,n ]
        [ Jn,m  On,n ]

where Jk,l denotes, in general, a k × l matrix of all 1s. We have

    A² = [ nJm   Om,n ]
         [ On,m  mJn  ]

a matrix of rank 2. Since the trace of A equals 0, the eigenvalues of Km,n are −√(mn), 0, . . . , 0, √(mn).

A graph G with adjacency matrix A is connected if and only if there does not exist a permutation matrix P such that

    P⁻¹AP = [ A1  O  ]
            [ O   A2 ]

where A1 and A2 are nonvacuous square matrices, that is, if and only if A is irreducible. Since the eigenvalues of A are those of A1 together with those of A2, it follows that the eigenvalues of a graph G are obtained by putting together the eigenvalues of each of its connected components. As a result, in dealing with eigenvalues of graphs, there is usually no loss in generality in assuming that G is connected. In this case, properties PF1 to PF7 and S1 to S5 apply to the irreducible adjacency matrix A.

We now consider a number of elementary relations between a connected graph G and its irreducible adjacency matrix A whose eigenvalues are λn ≤ λn−1 ≤ · · · ≤ λ1.

E1. The (i, j)-entry of A^k is the number of walks of length k between vertex i and vertex j. In particular, the trace of A² is twice the number of edges of G. Since the eigenvalues of A^k are λn^k, λn−1^k, . . . , λ1^k, the number of closed walks of length k in G equals

    ∑_{i=1}^{n} λi^k,

the kth spectral moment of A.

Proof. Property E1 is easily established. □

E2. Let δ = δ(G) and Δ = Δ(G) be, respectively, the smallest and largest of the degrees d1, d2, . . . , dn of the vertices of G. Then δ ≤ λ1 ≤ Δ. In fact,

    (d1 + d2 + · · · + dn)/n ≤ λ1,

where the left side is the average degree of a vertex of G.

Proof. The first inequalities are a consequence of PF7. The second inequality follows from S2 by choosing x = jn ∈ ℝⁿ, the vector of all 1s. □
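A short sketch illustrating E1 and E2 on an arbitrarily chosen connected graph of order 4 (the numbers, not the argument, are ours):

    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # a connected graph on 4 vertices
    k = 3
    eig = np.linalg.eigvalsh(A)

    closed_walks = np.trace(np.linalg.matrix_power(A, k))
    print(closed_walks, (eig ** k).sum())       # 6.0 6.0 (one triangle, two ways)

    deg = A.sum(axis=1)
    assert deg.mean() <= eig[-1] <= deg.max()   # E2: average degree <= lambda_1 <= Delta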


E3. If the diameter of G is d, then the number of distinct eigenvalues of G is at least d + 1. Put another way, if the number of distinct eigenvalues of G is p, then the diameter of G is at most p − 1.

Proof. The adjacency matrix A is symmetric, so A is similar to a diagonal matrix and the minimal polynomial of A has degree equal to p. There exist vertices k and l which are at distance d in G; hence the (k, l)-entry of A^d is positive but the (k, l)-entry of A^r is 0 for r < d. Hence A^d is not a linear combination of In, A, A², . . . , A^{d−1}, implying that the degree p of the minimal polynomial is at least d + 1. □

E4. If H is an induced subgraph of G, then the eigenvalues of H interlace those of G.

Proof. This is a direct consequence of S5, since the adjacency matrix of H is a principal submatrix of the adjacency matrix of G. □

E5. The graph G is bipartite if and only if its collection of eigenvalues is symmetric about 0; in fact, if and only if λn = −λ1.

Proof. The graph G is bipartite if and only if its adjacency matrix A can be taken in the form

    A = [ Op   A1 ]
        [ A1t  Oq ]

for some p and q with p + q = n. Let x = (x1, x2)t be an eigenvector of A for an eigenvalue λ, where x1 and x2 have p and q components, respectively. Then

    A [ x1 ]  =  [ A1 x2  ]  =  λ [ x1 ] ,
      [ x2 ]     [ A1t x1 ]       [ x2 ]

and

    A [ −x1 ]  =  [  A1 x2  ]  =  [  λx1 ]  =  −λ [ −x1 ] .
      [  x2 ]     [ −A1t x1 ]     [ −λx2 ]        [  x2 ]

This implies that A has as many eigenvalues equal to λ as it has equal to −λ.

Now suppose that the eigenvalues of A are symmetric about 0. Then

    trace(A^{2k+1}) = ∑_{i=1}^{n} λi^{2k+1} = 0  for all k ≥ 0.

It follows from E1 that G cannot have any closed walks of odd length; in particular, G has no odd cycles. Hence G is bipartite.

Now assume only that λn = −λ1. The largest eigenvalue of A² is λ1², and it is not a simple eigenvalue. Thus by PF1 for irreducible nonnegative matrices, A² cannot be irreducible. Thus after


simultaneous permutations of rows and columns, we may assume that

    A² = [ B1    Ok,l ]
         [ Ol,k  B2   ]

for some positive integers k and l with k + l = n. This determines a bipartition of the vertex set into two nonempty parts V1 and V2, of sizes k and l, respectively, such that there are no walks of length 2 between V1 and V2. Suppose a and b are distinct vertices in V1 that are joined by an edge. Take a shortest path from a to a vertex in V2, say a = w0, w1, . . . , wp, wp+1, where w0, w1, . . . , wp are in V1 and wp+1 is in V2. If p ≥ 1, then wp−1, wp, wp+1 is a walk of length 2 from V1 to V2. If p = 0, then b, a, w1 is such a walk. Thus in both cases we get a contradiction, proving that there are no edges joining vertices in V1. Similarly, there are no edges joining vertices in V2, and we conclude that G is bipartite. □

Property E5 illustrates very clearly how knowledge of eigenvalues can imply structural properties of a graph.

2.2. Eigenvalues and Graph Parameters

In this section we shall see several instances of how graph eigenvalues can give information about some fundamental graph parameters. The graph parameters imply something about the structure of a graph, which in turn implies something about the structure of its adjacency matrix. This is then used to get information about the eigenvalues.

Let G be a graph of order n, and let n+, n0, and n− denote, respectively, the number of positive, zero, and negative eigenvalues of G. We consider two graph parameters. The independence number of G is the maximum size α(G) of a set of vertices no two of which are joined by an edge (the largest order of an induced subgraph with no edges). The chromatic number of G is the smallest integer χ(G) such that the vertices can be partitioned into χ(G) independent sets (the smallest number of colors that can be used to color the vertices so that no two vertices of the same color are joined by an edge). We implicitly assume that G is connected, although this may not always be necessary.

Theorem 2.2. α(G) ≤ min{n − n+, n − n−}.

Proof. Since G has an independent set of vertices of cardinality α(G), G has an induced subgraph H of order α(G) with no edges, whose adjacency matrix is therefore O_{α(G)}. By the interlacing property S5,

    λ_{n−α(G)+i}(G) ≤ λi(H) ≤ λi(G)    (1 ≤ i ≤ α(G)).

Thus λ_{α(G)}(G) ≥ λ_{α(G)}(H) = 0,


and so n− ≤ n − α(G), equivalently, α(G) ≤ n − n−. Using −A in place of A, we also get α(G) ≤ n − n+. □
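As a quick illustration (a sketch; the graph is chosen arbitrarily), the bound of Theorem 2.2 is attained for the 5-cycle C5, where α(C5) = 2:

    import numpy as np

    n = 5
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1      # adjacency matrix of C5

    eig = np.linalg.eigvalsh(A)
    n_plus = int((eig > 1e-9).sum())                   # positive eigenvalues: 3
    n_minus = int((eig < -1e-9).sum())                 # negative eigenvalues: 2
    print(min(n - n_plus, n - n_minus))                # 2 = alpha(C5)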



A clique of the graph G is a subset of its vertices every pair of which are joined by an edge. Thus a clique is a subset of the vertices that induces a complete graph. The clique number of G is the maximum size β(G) of a clique. For each U ⊆ V , U is a clique of G if and only if its complement U = V \ U is an independent set of G. Thus β(G) = α(G). Let n>−1 , n=−1 , n−1 − 1, n − n 0 since k ≤ n − 2. Also −1 − λi > 0 if and only if λi < −1, and −1 − λi < 0 if and only if λi > −1. The conclusion now follows. 


An upper bound on the chromatic number in terms of eigenvalues was discovered by Wilf (1967). It is a consequence of the following lemma. Recall that the minimum degree of a vertex of a graph G is denoted by δ(G).

Lemma 2.5. We have χ(G) ≤ 1 + p, where p = max{δ(H) : H an induced subgraph of G}.

Proof. We consider the vertices of G in some order and color them sequentially using a greedy algorithm: we color the first vertex using any color, and when we come to the kth vertex, we choose a color that has not been used to color any of the first k − 1 vertices that are joined to the kth vertex by an edge. Thus, when coloring a vertex, what matters is the number of previously colored vertices joined to it. We show that there is an ordering of the vertices for which the greedy algorithm colors the vertices with at most 1 + p colors. Such an ordering is determined as follows:

• The graph Hn = G has a vertex xn of degree at most p.
• The graph Hn−1 = G − xn has a vertex xn−1 of degree at most p.
• The graph Hn−2 = G − {xn−1, xn} has a vertex xn−2 of degree at most p.
• . . .
• The graph H1 = G − {x2, . . . , xn−1, xn} has only one vertex x1, and its degree is 0.

In this way we get an ordering x1, x2, . . . , xn of the vertices of G such that each vertex is joined by an edge to at most p vertices that come before it. For this ordering, the greedy algorithm never needs more than 1 + p colors for a coloring of G. □

Theorem 2.6. χ(G) ≤ 1 + λ1(G).

Proof. By PF7 and S5, we have δ(H) ≤ λ1(H) ≤ λ1(G) for all induced subgraphs H of G. By Lemma 2.5, χ(G) ≤ 1 + λ1(G).
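Lemma 2.5 and Theorem 2.6 can be illustrated computationally. The sketch below (our code; the vertex ordering follows the proof of Lemma 2.5) greedily colors the 5-cycle and compares the number of colors used with the bound 1 + λ1 of Theorem 2.6:

    import numpy as np

    n = 5
    adj = {i: {(i + 1) % n, (i - 1) % n} for i in range(n)}   # the 5-cycle C5

    order = []                  # removal order: repeatedly delete a vertex of
    left = dict(adj)            # minimum degree, as in the proof of Lemma 2.5
    while left:
        v = min(left, key=lambda u: len(left[u]))
        order.append(v)
        for u in left[v]:
            left[u] = left[u] - {v}
        del left[v]
    order.reverse()             # color in the reverse of the removal order

    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = min(c for c in range(n) if c not in used)

    A = np.zeros((n, n))
    for v in adj:
        for u in adj[v]:
            A[v, u] = 1
    print(max(color.values()) + 1, "<=", 1 + np.linalg.eigvalsh(A)[-1])  # 3 <= 3.0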



Hoffman (1977) discovered a lower bound on the chromatic number in terms of the smallest and largest eigenvalues.

Theorem 2.7. χ(G) ≥ 1 − λ1/λn for n ≥ 2.

Proof. First note that since G is connected and n ≥ 2, G has at least one edge, and λn must be negative. Thus −λ1/λn is a positive number. Suppose we have a coloring of G that uses exactly q colors. Then the vertex set V can be partitioned into q independent sets V1, V2, . . . , Vq. Thus the adjacency matrix of G can be taken in the form

    A = [ Om1   *     · · ·  *   ]
        [ *     Om2   · · ·  *   ]
        [ ...                    ]
        [ *     *     · · ·  Omq ]

Let

    P = [ 1···1  0···0  · · ·  0···0 ]
        [ 0···0  1···1  · · ·  0···0 ]
        [ ...                        ]
        [ 0···0  0···0  · · ·  1···1 ]

be the q × n characteristic matrix of this partition; thus for i = 1, 2, . . . , q, row i has mi consecutive 1s in the positions shown. Let z = (z1, z2, . . . , zn)t with ztz = 1 be a positive eigenvector of A for its largest eigenvalue λ1. Let Q = P diag(z1, z2, . . . , zn), where diag(z1, z2, . . . , zn) is the n × n diagonal matrix with diagonal entries z1, z2, . . . , zn. The rows of P and Q are pairwise orthogonal and nonzero. Hence there is a diagonal matrix D = diag(a1, a2, . . . , aq) such that R = DQ has orthonormal rows. By the interlacing property S4, λmax(RARt) ≤ λ1. Let y = (a1⁻¹, a2⁻¹, . . . , aq⁻¹)t. Then ytR = zt and 1 = ztz = (ytR)(ytR)t = ytRRty = yty. Then ytRARty = ztAz = λ1, and so by property S2, λ1 ≤ λmax(RARt). Therefore λ1 = λmax(RARt). Thus

    (q − 1)λmin(RARt) + λ1 = (q − 1)λmin(RARt) + λmax(RARt) ≤ trace(RARt) = 0.

By the interlacing property S4, λmin(RARt) ≥ λn, and we get that (q − 1)λn + λ1 ≤ 0, and thus (χ(G) − 1)λn + λ1 ≤ 0, as desired. □

2.3. Graphs with small λmax

Since the largest eigenvalue of a connected graph is bounded from below by its average degree, one would expect that graphs with a small λmax are rare and have simple structure. Consider a connected graph G of order n.

(1) λmax = 0 if and only if G = K1 (the trivial graph with one vertex and no edges). To see this one could use the average degree remark above, or note that if G had an edge, then

    [ 0 1 ]
    [ 1 0 ]

would be a principal submatrix of the adjacency matrix of G with 1 as an eigenvalue; then use the interlacing property S5.


(2) λmax ≤ 1 with equality if and only if G = K2 (in which case λmax = 1). If n ≥ 3, then since G is connected, it has a path of length 2 and thus has

    [ 0 1 0 ]
    [ 1 0 1 ]
    [ 0 1 0 ]

as a principal submatrix. For this matrix the largest eigenvalue is √2 > 1. Now use again the interlacing property S5.

(3) 1 < λmax ≤ √2 if and only if G is the complete bipartite graph K1,2, for which the largest eigenvalue is √2. See the argument for (2) above and use the interlacing property S5 again.

(4) There are many graphs with λmax = 2:
(a) The cycle Cn for n ≥ 3.
(b) The complete bipartite graph K1,4.
(c) The path on 5 vertices with a path of length 2 attached to its middle vertex.
(d) The path on 7 vertices with an edge attached to its middle vertex.
(e) The path on 8 vertices with an edge attached to a vertex at distance 2 from one of its end vertices.
(f) The graph obtained from two paths of length 2 (two copies of K1,2) by joining their middle vertices by a path of length n − 5 (n ≥ 6).

The graphs in (a)-(f) are called Smith graphs. It is straightforward to check that all these graphs have λmax = 2. As shown in the next theorem, they contain as induced subgraphs all graphs with λmax ≤ 2.

Theorem 2.8. The connected graphs with λmax ≤ 2 are precisely the connected induced subgraphs of the Smith graphs.

Proof. We only give a rough outline of the proof. A connected graph G of order n can be iteratively constructed by starting with K1 and adding a new vertex together with one or more edges connecting it to old vertices. By interlacing (S5), λmax cannot decrease when a new vertex is added. Since λmax is not to exceed 2, G is either a cycle Cn (so λmax = 2) or a tree. Since λmax(K1,4) = 2, no tree with λmax ≤ 2 other than K1,4 itself can have a vertex of degree 4 or more. If the largest degree of a vertex is 2, then G is a path and therefore an induced subgraph of a cycle Cn. Thus the only remaining case to handle is a tree with largest degree equal to 3. If G has more than one vertex of degree 3, then G must be one of the graphs in item (4)(f) (with λmax = 2). Otherwise G is a tree with a unique vertex of degree 3, and thus consists of three paths emanating from a common vertex. If all these paths have length 2 or more, then G must be the graph in item


(4)(c). Thus one of the paths has length 1. If the other two paths have length 3 or more, then G must be the graph in item (4)(d). Otherwise, one of the other paths has length less than 3, and G is an induced subgraph of one of the graphs in items (4)(d) and (4)(e). □

2.4. Laplacian Matrix of a Graph

A graph can also be described by another matrix whose linear algebraic properties hold combinatorial information about the graph. This matrix has also been extensively investigated. Again let G be a graph of order n with vertices taken to be 1, 2, . . . , n. Let the number of edges of G be m and assume the edges have been labeled as e1, e2, . . . , em. The vertex-edge incidence matrix of G is the n × m (0, 1)-matrix B whose (i, j)-entry is 1 if vertex i is a vertex of the edge ej (1 ≤ i ≤ n, 1 ≤ j ≤ m). Thus the number of 1s in each column of B is 2, and the number of 1s in a row is the degree of the corresponding vertex. A signed vertex-edge incidence matrix of G is a matrix C obtained from B by replacing one of the 1s in each column with a −1. (Obviously C is not unique in general. One can think of orienting each edge and then assigning a +1 to the initial vertex of the edge and a −1 to the terminal vertex.) Both B and C determine G up to the labeling of vertices and edges.

The Laplacian matrix of G is the n × n symmetric matrix L = D − A, where A is the adjacency matrix of G and D is the diagonal matrix diag(d1, d2, . . . , dn) whose diagonal entries are the degrees of the corresponding vertices. Since all row sums of L equal 0, 0 is an eigenvalue of L with corresponding eigenvector jn = (1, 1, . . . , 1)t. We have L = CCt, and thus L is a singular, positive semidefinite, symmetric matrix. Let the eigenvalues of L be μ1 ≥ μ2 ≥ · · · ≥ μn−1 ≥ μn = 0. The following is a classical result.

Theorem 2.9. The determinant of each (n−1) × (n−1) submatrix of the Laplacian matrix of G has absolute value equal to the number τ(G) of spanning trees of G; in fact, the determinant of the (n−1) × (n−1) submatrix obtained from L by deleting row i and column j equals (−1)^{i+j} τ(G).

Proof. Here is a brief outline of a proof. First suppose that G is not connected. Then τ(G) = 0. Also L is the direct sum of two matrices, each of which has 0 as an eigenvalue and is singular. Thus the rank of L is at most n − 2, and each (n−1) × (n−1) submatrix has a zero determinant. Now suppose that G is connected. Then the rank of L can be shown to equal n − 1, and thus the dimension of the nullspace of L is 1. Since jn = (1, 1, . . . , 1)t is in the nullspace, the nullspace is spanned by jn. Since L adj(L) = det(L)In = O, each column of the adjugate adj(L) of L is a constant vector. Since L is


symmetric, so is adj(L), and hence adj(L) is a constant multiple of Jn, the n × n matrix of all 1s. It remains to show that this constant is τ(G). To do this, we have only to show that one (n−1) × (n−1) principal submatrix of L has determinant equal to τ(G). Applying the Cauchy-Binet theorem for the determinant of the product of two rectangular matrices to an (n−1) × (n−1) principal submatrix of L = CCt, we obtain τ(G). □

Examining the coefficient of λ in the characteristic polynomial of L, we see that

    τ(G) = (1/n) ∏_{i=1}^{n−1} μi.

Let μ(G) = μn−1, the second smallest eigenvalue of the Laplacian matrix of the graph G. Fiedler called μ(G) the algebraic connectivity of G. By Theorem 2.9, the algebraic connectivity of G is positive if and only if G is connected. The vertex connectivity ν(G) of G is the smallest number of vertices whose removal results in a disconnected graph or a single vertex. Let G = Kn. Then ν(Kn) = n − 1 and μ(Kn) = n, since the eigenvalues of its Laplacian matrix (n − 1)In − (Jn − In) = nIn − Jn are n, . . . , n, 0. The next theorem shows that the algebraic connectivity furnishes a lower bound for the vertex connectivity. We omit the proof.

Theorem 2.10. Let G be a graph of order n different from Kn. Then μ(G) ≤ ν(G).

The edge connectivity of a graph G is the smallest number of edges whose removal leaves a disconnected graph. The edge connectivity of a graph with a single vertex is 0. Since the edge connectivity is at least as large as the vertex connectivity, the algebraic connectivity gives a lower bound on the edge connectivity too.

The algebraic connectivity can also be used to give information about the expansion properties of a graph.

Theorem 2.11. Let G be a graph of order n with vertex set V. Then for U ⊆ V, the number of edges between U and its complement V \ U is at least

    μ(G) |U| |V \ U| / n.

Proof. If U = ∅ or V, then the theorem holds trivially. Otherwise, let k = |U| and let x = (x1, x2, . . . , xn)t be defined by

    xi = n − k if i ∈ U,  and  xi = −k if i ∈ V \ U.


Then xtjn = 0 and xtx = kn(n − k). Hence

    xtLx = ∑_{ij an edge} (xi − xj)² = (number of edges between U and V \ U) · n².

It follows from property S3 that

    μ(G) ≤ (xtLx)/(xtx) = (number of edges between U and V \ U) · n² / (kn(n − k)).

Thus the number of edges between U and V \ U is at least μ(G) k(n − k)/n. □
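Theorems 2.9 and 2.11 are easy to check numerically on a small example; the following sketch (ours) does so for K4, for which τ(K4) = 16 and μ(K4) = 4:

    import itertools
    import numpy as np

    n = 4
    A = np.ones((n, n)) - np.eye(n)                  # adjacency matrix of K4
    L = np.diag(A.sum(axis=1)) - A                   # Laplacian matrix

    print(round(np.linalg.det(L[1:, 1:])))           # 16 = 4^(4-2) spanning trees

    mu = np.sort(np.linalg.eigvalsh(L))[1]           # algebraic connectivity mu(G)
    for k in range(1, n):
        for U in itertools.combinations(range(n), k):
            W = [v for v in range(n) if v not in U]
            cut = sum(A[u, w] for u in U for w in W)
            assert cut >= mu * k * (n - k) / n - 1e-9    # Theorem 2.11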

Two good general graph theory references are [1, 2]. Most of the results in this chapter can be found in the books [1, 3, 4]. The spectra of graphs are thoroughly investigated in [4].

Bibliography

[1] B. Bollobás, Modern Graph Theory, Graduate Texts in Mathematics 184, Springer, New York, 1998.
[2] J.A. Bondy and U.S.R. Murty, Graph Theory, Graduate Texts in Mathematics 244, Springer, New York, 2008.
[3] R.A. Brualdi and H.J. Ryser, Combinatorial Matrix Theory, Encyclopedia of Mathematics and its Applications 39, Cambridge University Press, Cambridge, 1991.
[4] D. Cvetković, P. Rowlinson, and S. Simić, An Introduction to the Theory of Graph Spectra, London Math. Society Student Texts 75, Cambridge University Press, Cambridge, 2010.

http://dx.doi.org/10.1090/cbms/115/03

CHAPTER 3

Rado-Hall Theorem and Applications

In this chapter we discuss an important tool for proving existence theorems in combinatorics. The Rado-Hall theorem concerns the existence of independent transversals of a family of subsets of a finite set X on which a matroid is defined. In most of the applications of this theorem, either the set X is arbitrary and all subsets are independent (this is the setting of Hall's theorem), or the set X can be taken to be a subset of the vectors in a vector space where independence is just linear independence of vectors (this, in the more general setting of matroids, is the setting of Rado's theorem).

3.1. Rado-Hall Theorem

First we recall what a matroid is. Let X be a finite set and let I be a collection of subsets of X, called independent sets, satisfying:
(1) ∅ ∈ I;
(2) if A ∈ I and A′ ⊆ A, then A′ ∈ I;
(3) if A1, A2 ∈ I and |A1| < |A2|, then there exists x ∈ A2 \ A1 such that A1 ∪ {x} ∈ I.

Then M = (X, I) is a matroid on X. In order for a collection of elements of X to be independent, it cannot contain a repeated element. We can allow X to be infinite, but then we assume that the cardinalities of the independent sets are bounded. The simplest example of a matroid is obtained by declaring every subset of X to be independent; such matroids are called uniform matroids. The next simplest example, or at least the most familiar, is obtained by taking X to be a subset of the vectors in a vector space and I to be all the linearly independent subsets of X.

There are many basic concepts which can also be used to give an alternative definition of a matroid. A dependent set is just a subset of X that is not independent. A circuit is a minimal dependent set, that is, a dependent set that becomes independent when any element is removed from it. A basis is a maximal independent set; it follows in a straightforward manner that all bases have the same cardinality. More generally, if Y ⊆ X, then all maximal independent subsets of Y have the same cardinality, and this common cardinality is the rank of Y, denoted ρ(Y). The rank function of a uniform matroid is just the cardinality function. The rank of X, the common size of the bases, is called the rank of the matroid. Two important properties of the rank function of a matroid are the nondecreasing


property: ρ(Y1) ≤ ρ(Y2) whenever Y1 ⊆ Y2, and the submodular inequality:

(3.1)    ρ(Y1 ∪ Y2) + ρ(Y1 ∩ Y2) ≤ ρ(Y1) + ρ(Y2)    (Y1, Y2 ⊆ X).

Verification of (3.1) is an easy exercise. If Y1 and Y2 are subspaces of a vector space, so that their ranks are just their dimensions, then (3.1) becomes an equation.

Let C = (C1, C2, . . . , Cm) be a family of subsets of X on which a matroid M is defined. Then a family (x1, x2, . . . , xm) of elements of X is a system of distinct representatives, abbreviated SDR, of C provided x1, x2, . . . , xm are m distinct elements and x1 ∈ C1, x2 ∈ C2, . . . , xm ∈ Cm. If, in addition, the set {x1, x2, . . . , xm} is an independent set in M, then (x1, x2, . . . , xm) is an independent SDR.

Theorem 3.1 (Rado-Hall). The family C = (C1, C2, . . . , Cm) has an independent SDR if and only if

(3.2)    ρ(⋃_{j∈J} Cj) ≥ |J|    (∅ ≠ J ⊆ {1, 2, . . . , m}).

Proof. That (3.2) is necessary for there to exist an independent SDR is clear. Now assume that (3.2) holds. If |Cj| = 1 for all j = 1, 2, . . . , m, then (3.2) implies that the unique elements in the Cj form an independent SDR. To complete the proof, we show that if some Cj has more than one element, then one of its elements can be removed from Cj without violating (3.2) for the new family. (If we had a wide enough page, the remaining proof would be a one-liner, a string of inequalities and equalities.)

Suppose, for instance, that |C1| ≥ 2. Let x and y be distinct elements in C1. Assume that neither of the families (C1 \ {x}, C2, . . . , Cm) and (C1 \ {y}, C2, . . . , Cm) satisfies the corresponding condition (3.2). Since (C1, C2, . . . , Cm) satisfies (3.2), there exist J1, J2 ⊆ {2, 3, . . . , m} such that

    ρ((C1 \ {x}) ∪ ⋃_{j∈J1} Cj) ≤ |J1|  and  ρ((C1 \ {y}) ∪ ⋃_{j∈J2} Cj) ≤ |J2|.

Then

    |J1| + |J2| ≥ ρ((C1 \ {x}) ∪ ⋃_{j∈J1} Cj) + ρ((C1 \ {y}) ∪ ⋃_{j∈J2} Cj)
        ≥ ρ(C1 ∪ ⋃_{j∈J1∪J2} Cj) + ρ(((C1 \ {x}) ∪ ⋃_{j∈J1} Cj) ∩ ((C1 \ {y}) ∪ ⋃_{j∈J2} Cj))
        ≥ ρ(C1 ∪ ⋃_{j∈J1∪J2} Cj) + ρ(⋃_{j∈J1∩J2} Cj)
        ≥ 1 + |J1 ∪ J2| + |J1 ∩ J2|
        = 1 + |J1| + |J2|,

a contradiction. Here we have used the submodular inequality, the nondecreasing property of the rank function, and (3.2) (note that (C1 \ {x}) ∪ (C1 \ {y}) = C1 since x ≠ y). By using the above argument we can reduce each Ci to one element with (3.2) continuing to hold, and thus obtain an independent SDR. □
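For the linear matroid (X a set of vectors, with rank given by matrix rank), condition (3.2) is directly computable. The following sketch (the ground set and the family are made up for illustration) verifies (3.2) for a small family that has the independent SDR (a, b, d):

    import itertools
    import numpy as np

    X = {                                     # ground set: labeled vectors in R^3
        'a': (1, 0, 0), 'b': (0, 1, 0), 'c': (1, 1, 0), 'd': (0, 0, 1),
    }
    C = [{'a', 'b'}, {'b', 'c'}, {'c', 'd'}]  # the family C_1, C_2, C_3

    def rank(S):
        return np.linalg.matrix_rank(np.array([X[s] for s in S])) if S else 0

    ok = all(rank(set().union(*(C[j] for j in J))) >= len(J)
             for r in range(1, len(C) + 1)
             for J in itertools.combinations(range(len(C)), r))
    print(ok)   # True, so an independent SDR exists, e.g. (a, b, d)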

3.2. APPLICATIONS

27

Corollary 3.2. Let t be a positive integer with t ≤ m. Then there exists a subfamily consisting of t of the sets of C = (C1 , C2 , . . . , Cm ) which has an independent SDR if and only if (3.3)

ρ(∪j∈J Cj ) ≥ |J| − (m − t) (∅ = J ⊆ {1, 2, . . . , m}).

Proof. Note that (3.3) is automatically satisfied if |J| ≤ m − t. Let Y be a set with Y ∩X = ∅ and |Y | = m−t. Let M be the matroid on X ∪Y in which Z ⊆ X ∪Y is independent if and only if Z ∩X is an independent set of M. It is easy to check that this does define a matroid on X ∪ Y whose rank function ρ for U ⊆ X ∪ Y , satisfies ρ (U ) = ρ(U ∩ X) + |U ∩ Y |. There exists a subfamily consisting of t of the sets of C = (C1 , C2 , . . . , Cm ) having an  )= SDR which is independent in M if and only if the family (C1 , C2 , . . . , Cm  (C1 ∪ Y, C2 ∪ Y, . . . , Cm ∪ Y ) has an SDR which is independent in M . For ∅ = J ⊆ {1, 2, . . . , m}, we have ρ (∪j∈J Cj ) = ρ(∪j∈J Cj ) + |Y | = ρ(∪j∈J Cj ) + m − t, 

and the corollary follows.

If M is the uniform matroid on X, then Theorem 3.1 and Corollary 3.2 give, respectively, the conditions | ∪j∈J Cj | ≥ |J| and | ∪j∈J Cj | ≥ |J| − (m − t),

(J ⊆ {1, 2, . . . , m}

for the existence of an SDR and an SDR of a subfamily consisting of t sets. 3.2. Applications In this section we give an application of Theorem 3.1 to a graph partitioning problem, known as the Graham-Pollak theorem. In fact, we shall first give two proofs using elementary linear algebra. Then we shall discuss an extension of this theorem whose proof relies on Theorem 3.1. Recall that a clique of a graph is complete subgraph. A biclique of a graph is a complete bipartite subgraph. The biclique partition number of a graph G is the smallest number bp(G) of bicliques into which its edges may be partitioned. The complete graph Kn can be partitioned into n − 1 bicliques in a basically trivial way by recursively taking all edges meeting a vertex and then deleting that vertex and the edges that meet it. The result is a partition of the edge of Kn into bicliques K1,n−1 , K1,n−2 , . . . , K1,1 . There are less trivial ways to partition the edges of Kn into n bicliques. One can simply take any biclique Kr,s of Kn obtained by partitioning its vertices into two parts of size r and s with r + s = n. Removing the edges of this Kr,s , we are left with a Kr and a Ks . Proceeding recursively, we obtain a partition of Kr into r − 1 bicliques and a partition of Ks into s − 1 bicliques and hence a partition of Kn into 1 + (r − 1) + (s − 1) = r + s − 1 = n − 1 bicliques. Can we do better than n − 1 bicliques? The answer is no, and all known proofs require some elementary facts about matrices. Theorem 3.3. For all n ≥ 2, bp(Kn ) = n − 1.

28

3. RADO-HALL THEOREM AND APPLICATIONS

Proof. Here is the first proof which uses the fact that the eigenvalues of a skew-symmetric matrix have zero real part, and the fact that a matrix of rank p cannot be expressed as a sum of fewer that p rank 1 matrices. Let B1 , B2 , . . . , Br be a partition of the edges of Kn into complete bipartite graphs. We regard the Bi as spanning subgraphs of Kn so that they have in general vertices of degree zero. Let Ai be the adjacency matrix of Bi (i = 1, 2, . . . , r). Since Bi is complete bipartite (except for isolated vertices), there is a permutation matrix Pi such that ⎡ ⎤ Oai Jai ,bi Oai ,ci Obi Obi ,ci ⎦ (i = 1, 2, . . . , r). Pi−1 Ai Pi = ⎣ Jbi ,ai Oci ,ai Oci ,bi Oci The Ai have rank equal to 2. Let ⎡ Oai Jai ,bi Pi−1 Ai Pi = ⎣ Obi ,ai Obi Oci ,ai Oci ,bi

⎤ Oai ,ci Obi ,ci ⎦ Oci

(i = 1, 2, . . . , r)

be obtained from Pi−1 Ai Pi by replacing Jbi ,ai with a zero matrix. The Ai have rank equal to 1. Finally, let ⎡ ⎤ Oai −Jai ,bi Oai ,ci Obi Obi ,ci ⎦ (i = 1, 2, . . . , r) Pi−1 Qi Pi = ⎣ Jbi ,ai Oci ,ai Oci ,bi Oci be obtained from Pi−1 Ai Pi by replacing Jai ,bi with −Jai ,bi . The Qi are skewsymmetric matrices and hence their eigenvalues have zero real part. Since the adjacency matrix of Kn is Jn − In , we have J n − In = =

r  i=1 r 

Ai (Qi + 2Ai )

i=1

=

r 

Qi + 2

i−1

= Q+2

r 

Ai

i=1 r 

Ai ,

i=1

where Q is a skew-symmetric matrix and thus In + Q = J n − 2

r 

Ai .

i=1

Since Q is a skew-symmetric matrix, the eigenvalues of Q have real part equal to 0, and hence the eigenvalues of In + Q have real part equal to 1.

3.2. APPLICATIONS

29

Therefore, Q is an n × n nonsingular matrix. But In + Q is expressed as a sum of r + 1 matrices of rank 1. Hence r + 1 ≥ n and so r ≥ n − 1. The second proof uses the fact that the dimensions of the left null space and rank (dimension of the row space) of a k × l matrix add up to k. Continuing with the notation used in the first proof, we have that r 

Ai = T,

i=1

where T is a (0, 1)-matrix satisfying T + T t = Jn − In . (T is a tournament matrix, the topic of Chapter 8.) Let T  = [jn | T ], the n × n + 1 matrix obtained from T by inserting an initial column of all 1s. Suppose that x is a vector in the left null space of T  . Thus xt T  = 0 implies that xt jn = 0 and xt T = 0. We calculate xt (Jn − In )x in two ways. First we have that xt (Jn − In )x = xt Jn x − xt x = −xt x. But we also have that xt (Jn −In )x = xt (T +T t )x = xt T x+xt T t x = (xt T )x+xt (xt T )t = 0+0 = 0. Hence xt x = 0 and the left null space of T  contains only the zero vector implying that T  has rank equal to n. Thus T must have rank at least equal to n − 1. As in the first proof, this implies that r ≥ n − 1.  We now obtain a lower bound on the biclique partition number of any graph. As in Chapter 2, n+ , n− , and n0 denote, respectively, the number of positive, negative, and zero eigenvalues of a graph of order n. Theorem 3.4. Let G be a graph of order n. Then bp(G) ≥ max{n+ , n− }. Proof. Let A be the adjacency matrix of G and consider the quadratic form  xi xj . qG (x) = xt Ax = 2 {ij an edge} Let W + be the subspace of n of dimension n+ spanned by the eigenvectors of the n+ positive eigenvalues of G. Let the subspaces W − and W 0 of dimensions n− and n0 , respectively, be defined in an analogous way. Since A is a symmetric matrix, n = W + ⊕ W − ⊕ W 0 (a direct sum). Then q(x) is positive definite on W + and positive semi-definite on W + ⊕W 0 , a subspace of dimenison n+ + n0 . Similarly, q(x) is negative definite on W − and negative semi-definite on W − ⊕ W 0 . These subspaces are maximal with respect to their defining properties.

30

3. RADO-HALL THEOREM AND APPLICATIONS

Let B1 , B2 , . . . , Br be a partition of the edges of G into complete bipartite graphs. It follows from Example 2.1 that each Bi has one positive and one negative eigenvalue. We have qG (x) = qG1 (x) + qG2 (x) + · · · + qGr (x), where each qGi (x) is positive semi-definite on a subspace Ui of n of dimension n − 1. Thus ∩ri=1 Ui is a subspace of n of dimension at least n − r on which qG (x) is positive semi-definite. It follows that n − r ≤ n+ + n0 and hence that r ≥ n − (n+ + n0 ) = n− . Similarly, r ≥ n+ and the theorem follows.



Theorem 3.3 follows from Theorem 3.4 since for Kn we have n− = n − 1. A further generalization of Theorem 3.3 demonstrates even more vividly the power of linear algebra in certain graph theory problems. It also shows the power of the Rado-Hall theorem. To formulate the theorem more clearly, we introduce some language. We think of a biclique bipartition of the edges of a graph G as a coloring of its edges so that the set of edges of the same color form a complete bipartite subgraph. A subgraph of G is multicolored provided all of its edges have different colors. The following theorem is from [1]. Theorem 3.5. For each partition of the edges of Kn into bicliques there exists a multicolored spanning tree. More generally, if G is a graph of order n, then for each partition of the edges of G into bicliques, there is a multicolored spanning forest with at least max{n+ , n− } edges. Proof. First we note that the first assertion follows from the second. since n− = n−1 for Kn , and a spanning forest with n−1 edges is a spanning tree. Let E be the set of edges of G where m = |E|. We consider the matroid MG on E where a set F of edges is independent if and only if it does not contain a cycle of G, that is, if and only if F is the set of edges of a spanning forest. The edge sets of the cycles are the circuits of MG . If G is connected, the bases of MG are the edge sets of spanning trees. The rank of a set F of edges, is n − c where c is the number of connected components of the spanning subgraph of G with edge set F . Let B be the n × m vertex-edge incidence matrix of G, and regard B as a matrix over the finite field GF(2) of two elements. If we identify the edges with the columns of B, then the matroid M(G) is the matroid where a set of edges is independent if and only if the corresponding columns of B are linearly independent over GF(2). Let B1 , B2 , . . . , Br be a partition of the edge set E into complete bipartite subgraphs, and let E1 , E2 , . . . , Er be their respective edge sets (the color classes). We shall apply Theorem 3.1 to the family (E1 , E2 , . . . , Er ). Let J ⊆ {1, 2, . . . , r} and let H be the spanning subgraph of G with edges set ∪j∈J Ej . Let the connected components of H be H1 , H2 , . . . , Hk , and

BIBLIOGRAPHY

31

choose a vertex vi in Hi for i = 1, 2, . . . , k. Let G be the subgraph of G induced on the vertex set {v1 , v2 , . . . , vk }. The adjacency matrix of the subgraph G is a principal submatrix of the adjacency matrix of G. By the interlacing property S5 for the eigenvalues of a symmetric matrix, G has at least n− − (n − k) = k − n + n− negative eigenvalues and at least n+ − (n − k) = k − n + n+ positive eigenvalues. By Theorem 3.4 applied to G , bp(G ) ≥ k − n + max{n+ , n− }. Now (Ei ∩ E  : i ∈ {1, 2, . . . , r} \ J) is a decomposition of G into complete bipartite subgraphs. Hence r − |J| ≥ k − n + max{n+ , n− }, equivalently,

n − k ≥ |J| − (r − max{n+ , n− }).

Hence

ρ(∪j∈J Ej ) = n − k ≥ |J| − (r − max{n+ , n− }). Thus by Corollary 3.2 there exist a set of max{n+ , n− } independent edges,  that is, a spanning forest with max{n+ , n− } differently colored edges. A general reference on systems of distinct representatives and the RadoHall theorem is the book [3]. A general reference for matroids is the book [2]. Other applications of Theorem 3.1 can be found in [1]. Bibliography [1] N. Alon, R.A. Brualdi, and B.L. Shader, Multicolored forests in bipartite decompositions of graphs, J. Combin. Theory, Ser. B, 53 (1991), 143–148. [2] J.G. Oxley, Matroid Theory, Oxford University Press, Oxford, UK, 1992. [3] L. Mirsky, Transversal Theory, Mathematics in Science and Engineering 75, Academic Press, New York-London, 1971.

http://dx.doi.org/10.1090/cbms/115/04

CHAPTER 4

Colin de Verdi` ere Number In this chapter, we discuss a deep linear algebraic invariant associated with a graph that was introduced by Colin de Verdi`ere in 1990. Its motivation was the study of the second largest eigenvalue of certain Schr¨ odinger operators. It contains a surprising amount of information about a graph. Our treatment of this difficult invariant is necessarily brief, and we include only one proof.

4.1. Motivation and Definition Let G be a connected graph with vertices labeled as 1, 2, . . . , n. We weight the edges of G with positive numbers and then consider the n × n symmetric matrix A = [aij ] where for i = j,  weight of ij if ij is an edge of G aij = 0 if ij is not an edge of G. Thus far, the matrix A looks like a weighted form of the adjacency matrix of G, since the 1s have been replaced with arbitrary positive numbers and the 0s remain as 0s. Assuming that the diagonal entries remain equal to zero, then, since G is connected, A is an irreducible, nonnegative symmetric matrix, and it follows from Chapter 1 that A has n real eigenvalues of which the largest is positive and has multiplicity one. Since the diagonal entries carry no information about the graph G, we now allow them to be arbitrary, but only up to a point. No matter what real values we put on the main diagonal, we get a symmetric matrix with n real eigenvalues, but for some choices we may lose the simplicity of the largest eigenvalue. By using the diagonal entries to add a multiple of the identity matrix In , we can shift the second largest eigenvalue to 0, so that we have only one positive eigenvalue. Then, by convention, we multiply the matrix by −1 so that the second smallest eigenvalue is now 0. Now we want to examine how large a multiplicity of 0 as an eigenvalue we can get, subject to the restrictions on the nonzero off-diagonal entries as determined by the graph G. This is the Colin de Verdi`ere number of G. With the above in mind, we now give a precise definition of the Colin de Verdi`ere number of a graph G of order n ≥ 2. Consider all n × n symmetric matrices M = [mij ] such that 33

` 4. COLIN DE VERDIERE NUMBER

34

⎧ ⎨

λ2 ≥ · · · ≥ λn , and let M = λ0 In − A where λ0 is a number satisfying λ1 > λ0 > λ2 . Then M has exactly one negative eigenvalue λ0 − λ1 , and hence C2 and C3 hold. Also M is nonsingular, and so C3 holds as well since any matrix X with M X = O satisfies X = O. Finally, note that since M has a negative eigenvalue, its rank is at least 1 and so μ(G) ≤ n − 1. We can also conclude that μ(G) ≥ 1 as follows. Let A be the adjacency matrix of G, and let A be the matrix obtained from −A by putting generic entries on the main diagonal so that A has n distinct eigenvalues λ1 > λ2 > · · · > λn . Let M = A − λn−1 In . Then M has exactly one negative eigenvalue and exactly one zero eigenvalue. Thus M has rank equal to n − 1 and co-rank equal to 1. Suppose X = O satisfies the properties in C3. Since the co-rank of M equals 1, X must have rank equal to 1. Since X has zeros on the diagonal and is symmetric, this is impossible. It follows that μ(G) ≥ 1. By taking a negative number c with large absolute value and considering, for any M satisfying C1 and C2, the matrix −(cIn + M ), we get an irreducible nonnegative matrix which has a positive eigenvector x for its largest eigenvalue. It follows that x is a positive eigenvector of M for its unique negative eigenvalue. The Strong Arnold Property seems rather mysterious. It is a technical dimensional assumption and we give a brief explanation. Consider the n+1 2 2 n space Sn ⊆ of all n × n real symmetric matrices with standard inner product  aij bij (A = [aij ], B = [bij ]). A·B = i,j

Let the rank of M be k, and let Mk be the manifold of all matrices in Sn with rank at most k. Let OG be the linear space of all matrices A = [aij ] ∈ Sn such that aij = 0 whenever i = j and ij is not an edge of G. Then M ∈ Mk and the tangent space T (M ) of Mk at M consists of all matrices of the form M U + U t M where U is an arbitrary n × n real matrix; such matrices are symmetric since M is symmetric, and hence lie in Sn . The normal space N (M ) of Mk at M consists of all matrices X ∈ Sn such that M X = O. Two manifolds intersecting at a point x intersect transversally at x provided their normal vectors at x are linearly independent. Then, the Strong Arnold

` 4.2. COLIN DE VERDIERE NUMBER AND GRAPH PROPERTIES

35

Property is equivalent to saying that Mk and Ok intersect transversally at M . For some more details, see [4] 4.2. Colin de Verdi` ere Number and Graph Properties Let G be a graph. A minor of G is any graph that arises from G by a sequence of deletions and contractions of edges and deletions of isolated vertices (vertices of degree 0), deleting any loops that arise and deleting any multiple copies of edges that may arise. A proper minor of G is any minor different from G itself. A parameter associated with a graph G is minor monotone provided its value on any minor of G is at most its value on G. A fundamental theorem proved by Colin de Verdi`ere [1] is the following. Theorem 4.1. The Colin de Verdi`ere number of a graph is minor monotone, that is, if G is a graph and H is a minor of G, then μ(H) ≤ μ(G). We do not prove this theorem here. There is a fundamental and remarkable theorem of Robertson and Seymour [5] that is relevant here. Theorem 4.2. Any collection G of graphs such that no graph in G is (isomorphic to) a minor of another graph in G is finite. A graph property P (formally, P is any collection of graphs) is minorclosed provided that any minor of a graph with property P also has property P. A forbidden minor of a minor-closed property P is any graph not having property P each of whose proper minors has property P. Theorem 4.2 asserts that every minor-closed property has only a finite number of forbidden minors. A classical theorem illustrating Theorem 4.2 is the Kuratowski theorem on planar graphs: A graph G is planar if and only if it forbids as minors the two graphs K5 and K3,3 . We now compute the Colin de Verdi`ere numbers of some graphs. Example 4.3. (Colin de Verdi`ere number of a complete graph) Consider Kn with n ≥ 2. For M take the matrix −Jn of all −1s. Then M satisfies C1. The eigenvalues of −Jn are −n, 0, . . . , 0 and hence M satisfies C2. If X = [xij ] is an n × n symmetric matrix such that M X = O and xij = 0 if i = j or mij = 0, then X = O and hence M also satisfies C3. Since M has rank 1 and so nullity n − 1, it follows that μ(Kn ) ≥ n − 1. Since the Colin de Verdi`ere number cannot exceed n − 1, we conclude that μ(Kn ) = n − 1. In fact, for n ≥ 3, the complete graph Kn is the only graph G of order n with μ(G) = n − 1. Example 4.4. (Colin de Verdi`ere number of the complement of a complete graph) Consider Kn with n ≥ 2. Then for any M used in the evaluation of μ(Kn ) only the diagonal entries can be zero. Since M is to have one exactly negative eigenvalue, exactly one diagonal entry is negative. If there were two zeros on the diagonal, say in the (1, 1) and (2, 2) positions, then

36

` 4. COLIN DE VERDIERE NUMBER

the matrix X whose only nonzero entries are 1s in positions (1, 2) and (2, 1) would satisfy M X = O and thus M would not satisfy C3. Hence M has only one zero on the main diagonal and thus has rank n − 1. Therefore μ(Kn ) ≤ 1. Since μ(G) is always at least 1, we conclude that μ(Kn ) = 1. Example 4.5. (Colin de Verdi`ere number of a path) Let Pn be the path 1, 2, 3, . . . , n of order n. The adjacency matrix A of Pn has 2(n − 1) ones and they are on the superdiagonal and subdiagonal. If M is a matrix satisfying C1, then M has rank at least n−1, since the submatrix obtained from M by deleting column 1 and row n is nonsingular. Thus any matrix M satisfying C1 has corank at most 1 and thus μ(Pn ) ≤ 1. Since μ(G) is always at least 1, we conclude that μ(Pn ) = 1. The following important theorem shows that the Colin de Verdi`ere number can be used to characterize certain topological properties of graphs. Theorem 4.6. Let G be a graph. Then (a) μ(G) ≤ 1 if and only if the connected components of G are paths. (b) μ(G) ≤ 2 if and only if G is an outerplanar graph. (c) μ(G) ≤ 3 if and only if G is a planar graph. (d) μ(G) ≤ 4 if and only if G is linklessly embeddable in 3 . Parts (a), (b), and (c) of this theorem are due to Colin de Verdi`ere. Part (d) in the forward direction is due to Robertson, Seymour, and Thomas, and in the backward direction to Lov´asz and Schrijver (see [3]). In the remainder of this chapter, we give van der Holst’s combinatorial and linear algebraic proof of the backward direction of part (c) of Theorem 4.6. This proof does not rely on the Strong Arnold Property directly but uses its consequence that the Colin de Verdi`ere number is minor monotone (see Theorem 4.1). The next lemma is used in the proof. ] In that lemma, for x = (x1 , x2 , . . . , xn )t ∈ n , supp(x) = {i : xi = 0}, supp+ (x) = {i : xi > 0}, and supp− (x) = {i : xi < 0} denote, respectively, the support, positive support, and negative support of x. We have supp(x) = supp+ (x) ∪ supp− (x). Lemma 4.7. Let G be a connected graph with vertices {1, 2, . . . , n}, and let M = [mij ] be an n × n real symmetric matrix satisfying properties C1 and C2. Let x = (x1 , x2 , . . . , xn )t ∈ n be a nonzero vector in the null space of M that has minimal support among all nonzero vectors in the null space. Then the subgraphs of G induced on supp+ (x) and on supp− (x) are both connected. Proof. Let K = supp− (x). Then K = ∅. Otherwise, x is a nonnegative eigenvector of M (for the eigenvalue 0), and so of the nonnegative, irreducible matrix −(−cIn + M ) for c large enough. This would be a contradiction, since M has a positive eigenvector for its unique negative eigenvalue, giving a second nonnegative eigenvector for the matrix −(−cIn + M ). Similarly, with L = supp+ (x), L = ∅.

` 4.2. COLIN DE VERDIERE NUMBER AND GRAPH PROPERTIES

37

Suppose to the contrary that the subgraph of G induced on supp+ (x) is not connected. Then L = I ∪ J where I and J are nonempty, and I ∩ J = ∅, and there are no edges in G joining a vertex in I and a vertex in J. Thus mij = mji = 0 for all i ∈ I and j ∈ J. Thus M x = 0 implies that M [I|I]xI + M [I|K]xK = 0 and M [J|J]xJ + M [J|K]xK = 0, where xK and xJ are the subvectors of X with coordinates indexed by K and J, respectively. Let z be a positive eigenvector of M for its (unique) negative eigenvalue (as already remarked, the existence of z follows from the Perron-Frobenius theory). Let p=

zIt xI , zJt xJ

and define y = (y1 , y2 , . . . , yn ) ∈ n by ⎧ xi (i ∈ I) ⎨ −pxi (i ∈ J) . yi = ⎩ 0 (i ∈ I ∪ J). Then supp(y) = I ∪ J, and z t y = zIt xI − pzJt xJ = 0. Also, since M [I|J] = M [J|I] = O, we have y t M y = yIt M [I|I]yI + yJt M [J|J]YJ = xtI M [I|I]xI + p2 xtJ M [J|J]xJ = −xtI M [I|K]xK − p2 xtJ M [J|K]xJ ≤ 0, because xI and xJ are positive, xK is negative, and M [I|K] and M [J|K] are nonpositive. Since (i) z is an eigenvector for the unique negative eigenvalue (so the λmin ) of M (ii) z t y = 0, and (iii) M is a symmetric matrix, we now conclude that M y = 0; otherwise min {y t M y} < 0,

{y:y t z=0}

and by property S3 for symmetric matrices, M would have a second nonnegative eigenvalue, a contradiction. Thus y is a nonzero vector in the null space of M . Because supp(y) = I ∪ J and K = ∅, supp(y) is a proper subset of supp(x), a contradiction. Hence the subgraph of G induced on supp+ (x) is connected, and in a similar way we see that the subgraph induced on  supp− (x) is also connected. Theorem 4.8. If G is a planar graph, then μ(G) ≤ 3. Proof. Here we shall use properties of planar graphs. Because μ(G) is minor monotone, we may assume that G is a maximal planar graph (any additional edge results in a graph that is not planar). So G is 3-connected and, when embedded in the plane, has a triangular face with vertex set U

38

` 4. COLIN DE VERDIERE NUMBER

of cardinality 3. Suppose to the contrary that μ(G) ≥ 4, and let M be a matrix satisfying C1, C2, and C3 with co-rank equal to μ(G). Thus the dimension of the null space of M is at least 4, and so contains a nonzero vector x = (x1 , x2 , . . . , xn )t with xi = 0 for all i ∈ U . We take x to be such a vector with minimal support. Since G is 3-connected, there exist pairwise vertex-disjoint paths P1 , P2 , and P3 where each path starts at a vertex not in supp(x) but adjacent to a vertex in supp(x), and ends at a vertex in U . By Lemma 4.7, since supp+ (x) and supp− (x) induce connected subgraphs of G, they each can be contracted to a single vertex. We now delete all vertices of G not contained in supp(x) or in any of the paths Pi and then contract each Pi to a single vertex. If we now insert in G a vertex in the interior of the triangular face with vertex set U , we obtain a another planar graph that contains K3,3 as a subgraph, contradicting its planarity.  Bibliography [1] Y. Colin de Verdi`ere, On a new graph invariant and a criterion for planarity, in: Graph Structure Theory: Contemporary Mathematics, vol. 147, American Mathematical Society, Providence, 1993, 137–147. [2] H. van der Holst, A short proof of the planarity characterization of Colin de Verdi`ere, J. Combin. Theory, Ser. B, 65 (1995), 269–271. [3] H. van der Holst, L. Lov´ asz, and A. Schrijver, The Colin de Verdi`ere graph parameter, in: Graph Theory and Computational Biology (Balatonlelle, 1996), Mathematical Studies, Janos Bolyai Math. Soc., Budapest, 1999, 29–85. [4] L. Lov´ asz, Geometric Representation of Graphs, Chapter 7: The Colin de Verdi`ere number, unpublished. [5] N. Robertson and P.D. Seymour, Graph Minors. XX. Wagner’s conjecture, J. Combin. Theory Ser. B, 92 (2004), 325–357.

http://dx.doi.org/10.1090/cbms/115/05

CHAPTER 5

Classes of Matrices of Zeros and Ones In this chapter, we consider classes of (0,1)-matrices with a prescribed number of 1s in each row and column, equivalently, bipartite graphs whose vertices have prescribed degrees. There is considerable literature on this topic and we discuss only a small part of it. The first question is that of existence and we give a proof of the Gale-Ryser criterion using the Rado-Hall theorem. A recent generalization of these classes is also discussed. 5.1. Equivalent Formulations Matrices of 0s and 1s are basic objects of study in combinatorics. The number of m×n (0,1)-matrices equals 2mn , everything from the matrix Om,n of all 0s to the matrix Jm,n of all 1s. There are bijections between such matrices and other fundamental objects of study. A family A = (A1 , A2 , . . . , Am ) of subsets of a set {x1 , x2 , . . . , xn } can be viewed as a (0, 1)-matrix by passing to its m × n incidence matrix A = [aij ] where  1 if xj ∈ Ai aij = 0 otherwise. A bipartite graph G with vertices bipartitioned into two sets {u1 , u2 , . . . , um } and {w1 , w2 , . . . , wn } has as biadjacency matrix the m × n matrix A = [aij ] where  1 if there is an edge joining ui and vj , aij = 0 otherwise. Thus to study (0, 1)-matrices is to study families of subsets of a set and bipartite graphs. Since the class of m × n (0,1)-matrices is so large and so varied, it is natural to focus on certain natural subclasses. One type of class is defined as follows. Let R = (r1 , r2 , . . . , rm ) and S = (s1 , s2 , . . . , sn ) be nonnegative integral vectors with (5.1)

r1 + r2 + · · · + rm = s1 + s2 + · · · + sn .

Then A(R, S) denotes the class of all m × n (0, 1)-matrices with ri 1s in row i (i = 1, 2, . . . , m) and sj 1s in column j (j = 1, 2, . . . , n). R and S are called the row sum vector and column sum vector, respectively. In terms of the above correspondences, (i) A(R, S) is the class of all families of subsets (A1 , A2 , . . . , Am ) of {x1 , x2 . . . . , xn } where |Ai | = ri (i = 1, 2, . . . , m) and xj is an element of sj of the sets A1 , A2 , . . . , Am (j = 1, 2, . . . , n), and (ii) 39

40

5. CLASSES OF MATRICES OF ZEROS AND ONES

A(R, S) is the class of all bipartite graphs with bipartition {u1 , u2 , . . . , um } and {w1 , w2 , . . . , wn } where the degree of ui is ri (i = 1, 2, . . . , m) and the degree of wj is sj (j = 1, 2, . . . , n). There is another equivalent, but not obvious, formulation of (0, 1)-matrices in terms of objects called Young tableaux. We refer the interested reader to [1]. 5.2. The Classes A(R, S) Throughout this section we assume that m and n are fixed positive integers and, without loss of generality, that R = (r1 , r2 , . . . , rm ) and S = (s1 , s2 , . . . , sn ) are monotone nonincreasing integral vectors with r1 + r2 + · · · + rm = s1 + s2 + · · · + sn . With m and n fixed, we can easily give the generating function for the number (which may be zero) of matrices in each of the classes A(R, S). Theorem 5.1. m  n   (1 + xi yj ) = |A(R, S)|xr11 xr22 · · · xrmm y1s1 y2s2 · · · ynsn . i=1 j=1

R,S

Proof. Expanding the product m  n 

m n

(1 + xi yj ) =

i=1

m  n 

i=1 j=1

+ xi yj ), we get

(xi yj )aij

i=1 j=1

where the summation extends over the aij = 0 or 1

j=1 (1

2mn

choices for aij :

(1 ≤ i ≤ m, 1 ≤ j ≤ n).

Thus each term corresponds to an m × n (0, 1)-matrix. Collecting like terms gives the identity in the statement of the theorem.  The identity in Theorem 5.1 is not very useful for evaluating the numbers |A(R, S)|. There has been considerable work on asymptotic estimates for these numbers, under certain assumptions. The condition (5.1) is a necessary condition for A(R, S) to be nonempty but, in general, it is far from sufficient. Example 5.2. Let R = (4, 3, 1, 1, 1) and S = (4, 4, 1, 1). Consider the possibility of a matrix in A(R, S). In the first two columns there must be a total of 4 + 4 = 8 1s. Rows 1,2,3,4,5 can contribute, respectively, at most 2, 2, 1, 1, 1 1s in the first two columns for a total of 2 + 2 + 1 + 1 + 1 = 7 1s. Hence in this case, A(R, S) = ∅. The argument in Example 5.2 can be applied to any R = (r1 , r2 , . . . , rm ) and (s1 , s2 , . . . , sn ). If one allows nonnegative integral entries, then (5.1) is sufficient for the existence of a matrix in the class N (R, S) of nonnegative integral matrices

5.2. THE CLASSES A(R, S)

41

with row sum vector R and column sum vector S. Assume (5.1) holds. If m = 1, then   s1 s2 · · · sn works and is unique. If n = 1, then ⎡ ⎢ ⎢ ⎢ ⎣

r1 r2 .. .

⎤ ⎥ ⎥ ⎥ ⎦

rm works and is also unique. Let m ≥ 2 and n ≥ 2, and let a11 = min{r1 , s1 }. If min{r1 , s1 } = r1 , then we use ⎡ ⎤ r1 0 0 · · · 0 ⎢ ⎥ ⎢ ⎥  ⎣ ⎦ A and proceed by induction with A where s1 is diminished by r1 . If min{r1 , s1 } = s1 , then we use ⎡ ⎤ s1 ⎢ 0 ⎥ ⎢ ⎥  ⎢ 0 ⎥ A ⎢ ⎥ ⎢ .. ⎥ ⎣ . ⎦ 0 and proceed by induction with A where r1 is diminished by s1 . In order for A(R, S) to be nonempty, r1 ≤ n. So let’s start out by assuming that r1 ≤ n and that makes our notation simpler. Let ri∗ = |{k : rk ≥ i}| (i = 1, 2, 3, . . . , n). The vector R∗ = (r1∗ , r2∗ , r3∗ , . . . , rn∗ ) is the conjugate sequence of R. The vector R∗ is always monotone nonincreasing (even if R isn’t). Since r1 ≤ n, r1∗ + r2∗ + · · · + rn∗ = r1 + r2 + · · · + rm = s1 + s2 + · · · + sn . The class A(R, R∗ ) contains a unique matrix AR in which the 1s in each row are left-justified. For instance, with m = 4, n = 5, and R = (4, 3, 3, 2), we have R∗ = (4, 4, 3, 1, 0) and ⎡ ⎤ 1 1 1 1 0 ⎢ 1 1 1 0 0 ⎥ ⎥ AR = ⎢ ⎣ 1 1 1 0 0 ⎦. 1 1 0 0 0 The Gale-Ryser conditions for the existence of a matrix with prescribed row and column sum vectors are contained in the following theorem.

42

5. CLASSES OF MATRICES OF ZEROS AND ONES

Theorem 5.3. The class A(R, S) is nonempty if and only if (5.2)

s1 + s2 + · · · + sk ≤ r1∗ + r2∗ + · · · + rk∗

(k = 1, 2, . . . , n)

with equality for k = n. Proof. The relationship between S and R∗ given in the theorem is called majorization we write S  R∗ and say that S is majorized by R. We have already verified the necessity of S  R∗ for A(R, S) to be nonempty. We give two proofs of the sufficiency; later we outline a constructive proof along the lines used by both Gale and Ryser. So suppose that S  R∗ . First proof: This proof is by contradiction. Choose a counterexample, with m + n minimum, and for such an m and n with the common value r1 + r2 + · · · + rm = s1 + s2 + · · · + sn minimum. We consider two cases.

Case 1: li=1 si = li=1 ri∗ for some l with 1 ≤ l < n, that is, a nontriv ) ial coincidence in the majorization inequalities. Let R = (r1 , r2 , . . . , rm   where ri = min{l, ri } for i = 1, 2, . . . , m, and let S = (s1 , s2 , . . . , sl ).  ) where r  = r − r  for i = 1, 2, . . . , m, and let Let R = (r1 , r2 , . . . , rm i i i  S = (sl+1 , sl+2 , . . . , sn ). Then we have S   R∗ and S   R∗ . By minimality, there exists a matrix A ∈ A(R , S  ) and a matrix A ∈ A(R .S  ). The matrix   A A is in A(R, S) giving a contradiction.

l

l ∗ Case 2: i=1 si < i=1 ri for all l with 1 ≤ l < n. By minimality, rm ≥ 1 and sn ≥ 1. Let R = (r1 , . . . , rm−1 , rm − 1) and S  = (s1 , . . . , sn−1 , sn − 1). The assumptions in this case imply that S   R∗ . Hence by minimality, there exists a matrix A = [aij ] ∈ A(R , S  ). If amn = 0, we may replace amn with 1 and get a matrix in A(R, S), a contradiction. Thus amn = 1. Since rm − 1 ≤ n − 1, there exists an integer q < n such that amq = 0. Since sq ≥ sn , there exists an integer p such that apq = 1 and apn = 0. Interchanging the 1s and 0s in the 2 × 2 submatrix determined by rows p and m and columns q and n, we get a matrix in A(R , S  ) with a 0 in position (m, n). Now as in Case 1, we get another contradiction, finishing this proof. Second proof: This proof uses Theorem 3.1 and is based on the proof sketch given in [4]. Let E = (E1 , E2 , . . . , Em ) be the family of subsets of X = {(i, j) : 1 ≤ i ≤ m, 1 ≤ j ≤ n} where Ei = {(i, j) : 1 ≤ j ≤ n}

(1 ≤ i ≤ m).

For i = 1, 2, . . . , m, let Mi be the matroid of rank ri on Ei in which a subset of Ei is independent if and only if its cardinality is at most ri . Let M be the matroid on X where F ⊆ X is independent set if and only if

5.2. THE CLASSES A(R, S)

43

|F ∩ Ei | ≤ ri for i = 1, 2, . . . , m. This matroid is the direct sum of the matroids M1 , M2 , . . . , Mm . The rank function ρ of M satisfies m  ρ(F ) = min{|F ∩ Ei |, ri } (F ⊆ X). i=1

Consider the family A of subsets of X given by A = (A1 , . . . , A1 , A2 , . . . , A2 , . . . , An , . . . , An )          s1

s2

sn

where Aj = {(i, j) : 1 ≤ i ≤ m} (1 ≤ j ≤ n). Since r1 + r2 + · · · + rm = s1 + s2 + · · · + sn , there is a bijection between the matrices in A(R, S) and the independent SDRs of A: A = [aij ] ∈ A(R, S) ↔ ({(i, j) : aij = 1, 1 ≤ i ≤ m} ⊆ Aj : 1 ≤ j ≤ n) . In this bijection we are disregarding which copy of each Ai has which of the elements of {(i, j) : aij = 1, 1 ≤ j ≤ m} as its representative. So we have only to check that the family A satisfies (3.2), that is, we have to check that (5.3)

ρ(∪nj=1 (pj Aj )) ≥

n 

(0 ≤ pj ≤ sj , 1 ≤ j ≤ n).

pj

j=1

Here by pj Aj we mean taking Aj pj times. The value of the left hand side of (5.3) depends only on which pj are not zero while, for pj ≥ 1, the right hand side is largest when pj = sj . Thus, with K = {j : pj ≥ 1}, (5.3) is equivalent to  sj (K ⊆ {1, 2, . . . , n}). (5.4) ρ(∪j∈K Aj ) ≥ j∈K

Now ∪j∈K Aj is the union of all the positions {(i, k) : 1 ≤ i ≤ m, k ∈ K}, that is, the positions in the columns of an m×n matrix whose indices belong to K. Thus m  min{|K|, ri }, ρ(∪j∈K Aj ) = i=1

and thus depends only on |K| and not on K itself. By the assumed monotonicity of S = (s1 , s2 , . . . , sn ), max

{K:|K|=k}



sj =

j∈K

k 

sj

(1 ≤ k ≤ n).

j=1

Putting this all together, we see that (5.4) is equivalent to m  i=1

min{k, ri } ≥

k  j=1

sj

(1 ≤ k ≤ n),

44

5. CLASSES OF MATRICES OF ZEROS AND ONES

and as we have seen this is equivalent to S  R∗ . Thus the second proof is also complete.  Neither of the proofs given of Theorem 5.3 is like the original proofs of Gale and Ryser. Their proofs are constructive and produce a matrix in A(R, S) when it is nonempty. We briefly outline the constructive argument. First, define an interchange to be the transformation that replaces one of the following 2 × 2 submatrices of a matrix with the other:     1 0 0 1 and . 0 1 1 0 Matrices in an A(R, S) are closed under interchanges, since an interchange does not affect the row and column sums. Even more than that, interchanges can be used to generate an entire class A(R, S) starting from any single matrix in the class. To see this, let A be any matrix in A(R, S) and suppose in the last column, there is a 0 in row i and a 1 in row j where i < j and ri > rj . There there must be another column having a 1 in row i and a 0 in row j. Applying an interchange moves the 1 in row j of the last column up to row i. Similarly, if there is a 1 in row i and a 0 in row j of the last column where i < j and ri = rj , we may perform an interchange and move the 1 in row i down to row j. This implies that if A(R, S) = ∅, then there is a matrix in A(R, S) such that the 1s in the last column occur in those rows with the largest row sum, with preference given to the lower row in the case of ties. Now delete the last column of this matrix and iterate (the row sums are still monotone nonincreasing by the tie-breaking rule). The result is a  ∈ A(R, S) such that starting with any matrix A we can get to A  matrix A by a sequence of interchanges. Since interchanges are reversible, we can get from any matrix in A(R, S) to any other by a sequence of interchanges. The proof of Ryser for the nonemptiness of A(R, S) shows that the majorization condition S  R∗ allows one to use the above iteration and inductively  in A(R, S)). construct a matrix (the matrix A Using interchanges we get a natural graph G(R, S) with vertex set A(R, S) = ∅ where we put an edge between two matrices in A(R, S) provided they differ by a single interchange. It thus follows that G(R, S) is a connected graph. The distance function of this graph is given by dist(A, B) =

d(A, B) − q(A, B) (A, B ∈ A(R, S)), 2

where d(A, B) is the number of positions in which A and B differ, and q(A, B) is the largest number of cycles into which the edges of a certain digraph D(A, B) can be partitioned. Here D(A, B) is the bipartite digraph with m+n vertices partitioned into sets {u1 , u2 , . . . , um } and {w1 , w2 , . . . , wn } where there is an edge from ui to wj if A has a 1 in position (i, j) and B has a 0, and an edge from wj to ui if A has a 0 in position (i, j) and B has a 1.

5.3. A GENERALIZATION

45

5.3. A Generalization In this section we discuss a recent common generalization [3, 2] of the classes A(R, S) of (0, 1)-matrices and N (R, S) of nonnegative integral vectors. Let b(1) , b(2) , . . . , b(m) be nonnegative integral vectors in n , and let an m × n nonnegative integral matrix be defined by ⎡ (1) ⎤ b ⎢ b(2) ⎥ ⎥ ⎢ B = ⎢ . ⎥. . ⎣ . ⎦ b(m) Without loss of generality, we assume that the vectors b(1) , b(2) , . . . , b(m) have been ordered so that the row sums of B are monotone nonincreasing. Let S = (s1 , s2 , . . . , sn ) be a nonnegative integral vector; note that we do not assume that S is monotone nonincreasing, for we would lose some generality in doing so. The class A(B|S) is defined to be the set of all m × n nonnegative integral matrices whose row vectors a(1) , a(2) , . . . , a(m) satisfy the majorization conditions a(i)  b(i) (i = 1, 2, . . . , m) and whose column sum vector T satisfies the majorization condition T  S. Let R = (r1 , r2 , . . . , rm ) be a nonnegative integral vector. We consider two special cases. Let b(i) = (ri , 0 . . . , 0) (i = 1, 2, . . . , m). We then have A(B|S) = N (R, S). Now let bi = (1, . . . , 1, 0, . . . , 0) (i = 1, 2, . . . , m).    ri

We then have A(B|S) = A(R, S). Thus B(R, S) encompasses both of these two classical classes of matrices. A classical result of Muirhead about majorization is useful in determining the nonemptiness of a class A(B|S) and for constructing a matrix in the class. A transfer (from j to k) of an integral vector v = (v1 , v2 , . . . , vn ) is the transformation that, for some j and k with vj > vk , replaces v with the vector w obtained from v by subtracting 1 from vj and adding 1 to vk . It is easy to check that in this case, w  v.

46

5. CLASSES OF MATRICES OF ZEROS AND ONES

Lemma 5.4. (M uirhead, 1903) For integral vectors u = (u1 , u2 , . . . , un ) and v = (v1 , v2 , . . . , vn ), u  v if and only if u can be obtained from v by a finite sequence of transfers. Lemma 5.5. Consider a nonempty class A(B|S) and let T be a nonnegative integral vector obtained from S by a transfer. Let A be any matrix in A(B|S). Then there is a matrix A in A(B|T ) which is obtained from A by a transfer applied to one of the rows of A. Proof. Let A = [aij ] be a matrix in A(B|S). Suppose that

mT is ob> s . Since tained from S by a transfer from j to k where s j k i=1 aij =

a , there exists an i such that a > a . A transfer from j sj > sk = m ij ik ik i=1 (i) (i) (i) (i) (i) to k in row a of A results in a vector a with a  a  b and hence a matrix in A(B|T ). 

m (i) Theorem 5.6. The class A(B|S) is nonempty if and only if S  i=1 b .

(i)  Proof. Let S  = m i=1 b . If A(B|T ) = ∅, then clearly T  S . Conversely, suppose that S  S  . The class A(B|S  ) = ∅ since it contains B. By Lemma 5.4, S can be gotten from S  by a finite sequence of transfers. By Lemma 5.5, there is a matrix in A(B|S) obtained from B by a finite sequence of transfers on rows.  The proof of Theorem 5.6 provides a constructive algorithm to obtain a matrix in a nonempty class A(B|S) that generalizes the construction of a matrix in A(R, S). Of course, one does not need to know that A(B|S) is nonempty before starting the algorithm. If the class A(B|S) is empty, the algorithm will fail. Example 5.7. Let S = (6, 4, 4) and let ⎡ ⎤ 2 2 0 B = ⎣ 2 1 1 ⎦. 3 3 0 We have S  = (7, 6, 1). Starting with the matrix B in A(B|S  ), we can produce a matrix in A(B|S) by a finite sequence of transfers on rows: ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 2 2 0 1 2 1 1 2 1 1 2 1 ⎣ 2 1 1 ⎦ → ⎣ 2 1 1 ⎦ → ⎣ 2 1 1 ⎦ → ⎣ 2 1 1 ⎦ ∈ A(B|S). 3 3 0 3 3 0 3 2 1 3 1 2 Example 5.8. Let S = (2, 2) and let   2 0 B= . 1 1 Then S = (2, 2)  (3, 1) = S  and so A(B|S  ) = ∅. In fact, the only matrix in A(B|S  ) is the matrix   1 1  . B = 1 1 Thus we have A(B|S  ) = A(B  |S  ).

BIBLIOGRAPHY

47

Example 5.8 shows that the matrix B used in defining a nonempty class A(B|S) may not be unique. If B and B  are two nonnegative integral matrices, we write B   B provided each row of B  is majorized by the corresponding row of B. The matrix B is minimal for the vector S provided there does not exist a matrix B  with B  = B and B   B. The matrix B in Example 5.8 is not minimal for (2, 2). The following theorem is proved in [2]. Theorem 5.9. For each nonempty class A(B|S) there exists a unique matrix B ∗ such that A(B ∗ |S) = A(B|S) and B ∗ is minimal for S. Finally, we state another theorem in [2] that generalizes the result for A(R, S) that, starting from any matrix in A(R, S), interchanges suffice to get all matrices in a class A(R, S). Let t be a positive integer, and let   a b F = c d be a 2 × 2 nonnegative integral matrix with a ≥ b + t and d ≥ u + t. A t-interchange replaces F with the matrix   a−t b+t  , F = c+t d−t or vice-versa. We may apply a t-interchange to 2 × 2 submatrices of a given matrix. If A ∈ A(B|S), we obtain another matrix in A(B|S). Theorem 5.10. Let A1 and A2 be matrices in a class A(B|S). Then A1 can be transformed into A2 by a sequence of t-transfers in such a way that all intermediate matrices also belong to A(B|S). Bibliography [1] R.A. Brualdi, Combinatorial Matrix Classes, Encyclopedia of Mathematics and its Applications 108, Cambridge University Press, Cambridge, 2006. [2] R.A. Brualdi and G Dahl, Majorization classes of integral matrices, Linear Algebra Appl., (2011), to appear. [3] G. Dahl, Majorization permutahedra and (0, 1)-matrices, Linear Algebra Appl. 432 (2010), 3265–3271. [4] D.J.A. Welsh, Matroid Theory, Academic Press, London, 1976.

http://dx.doi.org/10.1090/cbms/115/06

CHAPTER 6

Matrix Sign Patterns In this chapter we focus on the sign pattern of a real matrix by which we mean the (0, 1, −1)-matrix (or, (0, +, −)-matrix) obtained by replacing each entry with its sign. Matrix sign patterns have been an important theme in many modern matrix investigations. A square matrix sign pattern can be viewed as a weighted digraph in which the weights of edges are either +1 or −1, and properties of this digraph often play a crucial role in investigations. We shall consider only two topics within this broad area, namely, sign-nonsingular matrices and spectrally arbitrary sign patterns, and only a small part of each. 6.1. Sign-Nonsingular Matrices Let A = [aij ] be an n × n real matrix. The classical definition of the determinant of A is  sign(σ)a1σ(1)a2σ(2) · · · aσ(n) (6.1) det A = σ∈Sn

where the summation extends over all permutations σ in the symmetric group Sn . Here sign(σ) is the sign of the permutation σ, generally defined as (−1)k where k is the number of inversions of σ. The n entries a1σ(1) , a2σ(2) , . . . , aσ(n) are simultaneously one entry from each row and one entry from each column, and we call such a set of n entries a diagonal of A. The product a1σ(1) a2σ(2) · · · aσ(n) is called a diagonal product as it consists of the product of the n entries of a diagonal of A. With sign(σ) we get a signed diagonal product. Thus the determinant of A is the sum of all the (nonzero) signed diagonal products. Closely related to the determinant of A is the permanent of A defined by  a1σ(1) a2σ(2) · · · aσ(n) , perA = σ∈Sn

the sum of all the (nonzero) diagonal products of A. With the matrix A we associate a weighted digraph D(A) whose vertices are 1, 2, . . . , n where there is an edge from vertex i to vertex j of weight aij (1 ≤ i, j ≤ n). Here loops are a possibility. If aij = 0 we can either assume that there is an edge from i to j of weight 0, or assume, as we usually do, that there isn’t an edge from i to j. The classical partition of a permutation in Sn into permutation cycles corresponds to a partition of the vertices of 49

50

6. MATRIX SIGN PATTERNS

D(A) into sets each of which is the set of vertices of a digraph cycle. Thus permutations in Sn are in a bijective correspondence with spanning sets of pairwise disjoint cycles of D(A). In terms of the cycles of a permutation σ, the sign of σ is equal to (−1)n−#σ = (−1)n+#σ where #σ is the number of cycles of σ. Thus we can rewrite the determinant of A as n   (6.2) det A = (−1)n (−1)#σ aiσ(i) . σ∈Sn

i=1

The sign of a real number a is defined by ⎧ ⎨ 1 if a > 0 −1 if a < 0 sign(a) = ⎩ 0 if a = 0. The sign pattern of the n × n matrix A is the n × n (0, 1, −1)-matrix sign(A) obtained from A by replacing each of its entries with its sign. We are interested in knowing when one can assert that the determinant of a matrix is nonzero (the matrix is nonsingular) using only the knowledge of the signs of its entries. For this, it suffices to start only with (0, 1, −1)matrices. Thus now assume that A is a n × n (0, 1, −1)-matrix. Let the qualitative class of A be defined by Q(A) = {B : B is an n × n matrix with the same sign pattern as A}. A n × n (0, 1, −1)-matrix is sign-nonsingular, abbreviated SNS-matrix, provided every matrix in Q(A) is nonsingular. The following lemma is a basic first step towards understanding SNS-matrices. Lemma 6.1. Let A = [aij ] be an n × n (0, 1, −1)-matrix. Then every matrix with the same sign pattern as A has a nonzero determinant if and only if A has a nonzero diagonal product and every nonzero signed diagonal product has the same sign. Proof. (For emphasis we remark that the sign of a nonzero signed diagonal product p = sign(σ)a1σ(1)a2σ(2) · · · aσ(n) is the sign of p.) If the conditions in the lemma hold, then every matrix with the same sign pattern as A has a nonzero determinant, since there is a nonzero term in the determinant (6.1) and there can be no cancellation. Conversely, suppose that A has two nonzero signed diagonal products, one equal to +1 and one equal to −1. Then, emphasizing the entries of A in the positions of these diagonal products, that is, making them very large in absolute value, we can find a matrix with the same sign pattern as A with a positive determinant and one with a negative determinant. Since the determinant is a continuous function of its entries, there must be a matrix with the same sign pattern as A whose determinant equals zero. 

6.1. SIGN-NONSINGULAR MATRICES

51

To further study SNS-matrices, we establish some normalizing conditions that can be made without loss of generality. Let A = [aij ] be an n × n (0, 1, −1)-matrix. In order for A to be an SNS-matrix, there has to be a nonzero diagonal product. By permuting rows (which doesn’t affect whether the determinant is zero or not), we may assume that the diagonal product corresponding to the main diagonal is nonzero. By multiplying some rows by −1, (which doesn’t affect whether the determinant is zero or not), we can then assume that all the entries on the main diagonal equal −1. In terms of the weighted digraph D(A), we have a loop at each vertex with weight −1 and the signed diagonal product corresponding to the main diagonal equals (−1)n . Thus by Lemma 6.1, and with this normalization, in order for A to be an SNS-matrix every nonzero signed diagonal product must equal (−1)n . Since all the entries on the main diagonal equal −1, it follows that every cycle γ in D(A) gives rise to a nonzero diagonal product of A: if γ is a cycle of length n − t, append to γ the loops at all vertices that are not vertices of γ to get a spanning set of t + 1 pairwise disjoint cycles of D(A). Using (6.2) we see that the corresponding term in the determinant of A is   (−1)n (−1)t+1 × (−1)t (product of the weights of γ) , and this equals (−1)n+1 (product of the weights of γ) , equivalently, (−1)n ((−1)product of the weights of γ) . Thus if A is an SNS-matrix, the product of the weights of the edges in a cycle of D(A) always equals −1. The following theorem is due to Bassett, Maybee, and Quirk (see [3]). Theorem 6.2. Let A be an n×n (0, 1, −1)-matrix each of whose diagonal entries equals −1. Then A is an SNS-matrix if and only if the product of the weights of the edges of a cycle of the digraph D(A) always equals −1. Proof. If the product of the weights of each cycle is −1, then any collection of t spanning cycles corresponds to a term in the determinant of A of value (−1)n (−1)t × (−1)t = (−1)n , and hence A is an SNS-matrix. The converse follows from the argument preceding the theorem.  Example 6.3. The following are SNS-matrices in our normalized form: ⎡ ⎤ ⎡ ⎤ −1 1 0 0   −1 1 0 ⎢ −1 −1 −1 −1 1 0 ⎥ ⎥. 1 ⎦ , H4 = ⎢ , H3 = ⎣ −1 −1 H2 = ⎣ 1 −1 −1 −1 −1 1 ⎦ −1 −1 −1 −1 −1 −1 −1 In fact (see [3]), these matrices, and their obvious generalization to Hn for n ≥ 5 contain the largest number of nonzero entries in an SNS-matrix of

52

6. MATRIX SIGN PATTERNS

order n, with this number equal to n2 + 3n − 2 . 2 Example 6.4. A nontrivial example of an SNS-matrix without any −1s is ⎡ ⎤ 1 1 0 1 0 0 0 ⎢ 0 1 1 0 1 0 0 ⎥ ⎢ ⎥ ⎢ 0 0 1 1 0 1 0 ⎥ ⎢ ⎥ ⎢ 0 0 0 1 1 0 1 ⎥. ⎢ ⎥ ⎢ 1 0 0 0 1 1 0 ⎥ ⎢ ⎥ ⎣ 0 1 0 0 0 1 1 ⎦ 1 0 1 0 0 0 1 n2 − (1 + 2 + · · · + (n − 2)) =

This matrix is a circulant matrix and, in fact, is the point-line incidence matrix of the projective plane of order 2. It has determinant and permanent equal to 24. Replacing any 0 by ±1 results in a matrix that is not an SNSmatrix. One of the interests in SNS-matrices arises from their application in computing the permanent of a matrix, a notoriously difficult computational problem. In fact, computing the permanent of a (0, 1)-matrix is what is called a #P complete problem; informally, this implies that if there were a polynomial time algorithm to compute the permanent of a n × n (0, 1)matrix, then many combinatorial problems, e.g. the number of Hamilton cycles in a graph, would have polynomial time algorithms. Let A be an n × n (0, 1)-matrix, and let A be a matrix obtained from A by replacing some 1s with −1s (a signing of A) in such a way that perA = ± det A; in fact, since we could multiply a row of A by −1 and still have an SNS-matrix, we could assume in such a situation that perA = det A. Since in the determinant of an SNS-matrix there is no cancellation among any of its nonzero terms, we have a generic identity: If we replace the 1s in A by distinct indeterminates to get a matrix X of zeros and indeterminates, and apply to X the same signing that produced A from A to get X  , then perX = ± det X  . For example, the SNS matrix H3 from Example 6.3 gives the algebraic identity ⎡ ⎤ ⎡ ⎤ x2 0 x1 x2 0 −x1 x5 ⎦ . per ⎣ x3 x4 x5 ⎦ = − det ⎣ −x3 −x4 x6 x7 x8 −x6 −x7 −x8 Example 6.5. The matrix



⎤ 1 1 1 J3 = ⎣ 1 1 1 ⎦ 1 1 1

cannot be signed to give an SNS-matrix. One way to argue this is as follows. Since perJ3 = 6, we would have to have a signing of J3 with determinant

6.2. AN APPLICATION

53

equal to 6. The three terms in the determinant of a 3 × 3 matrix corresponding to permutations of positive sign (the even permutations) and the three terms corresponding to permutations of negative sign (the odd permutations) each partition the 9 entries of a 3 × 3 matrix, a crucial property that does not extend to 4 × 4 matrices. Thus if the entries of J3 are signed to have a determinant of 6, we need an even number of −1s for each of the terms of positive sign and so a total of an even number of −1s; and we need an odd number of −1s for each of the terms of negative sign and so a total of an odd number of −1s. Thus J3 cannot be signed to give an SNS-matrix. In fact, interpreted properly (see [3] for details), J3 is the unique obstacle for a n × n (0, 1)-matrix to have a signing that is an SNS-matrix. An important theorem of Kastelyn, stated in terms of SNS-matrices, was motivated by the dimer problem of statistical mechanics. It enabled the computation via determinants of the number of ways to tile a p × q rectangular grid, where pq is even, with dominoes (dimers). Theorem 6.6. Let A be the n×n biadjacency matrix of a planar bipartite graph G ⊆ Kn,n . Then there is a signing of A which is an SNS-matrix. Theorem 6.6 has been generalized in [5, 10] to arbitrary genus as follows. Theorem 6.7. Let A be the n×n biadjacency matrix of a bipartite graph G ⊆ Kn,n of genus g. Then there are 4g signings A1 , A2 , . . . , A4g of A such that 4g  g 2 perA = det Ai . i=1

We now give an application of SNS-matrices to a very different kind of combinatorial problem. 6.2. An Application In our application, we make use of a rectangular version of SNS-matrices called L-matrices. Let A be an m × n (0, 1, −1)-matrix. As for square matrices, let Q(A) be the set of all m × n real matrices with the same sign pattern as A. Then A is an L-matrix provided that every matrix in Q(A) has linearly independent rows. Square L-matrices are the same as SNS-matrices. If A is an L-matrix, then clearly m ≤ n. Example 6.8. Let



⎤ 1 1 1 1 1 −1 ⎦ . A3 = ⎣ 1 −1 1 1 −1 −1

Let B be a matrix in Q(A). Using the fact that every 3 × 1 sign pattern or its negative is a column of A, we see that a nontrivial linear combination of the rows of B cannot equal zero. Thus A3 is an L-matrix. This example

54

6. MATRIX SIGN PATTERNS

obviously generalizes to give an m × 2m−1 L-matrix with no zeros, defined inductively by   Am−1 Am−1 (m ≥ 4). Am = 1 · · · 1 −1 · · · − 1 (In fact, this inductive construction can begin with m = 2, taking A1 = [1].) These L-matrices have the property that deleting any column results in a matrix that is not an L-matrix.   (0, 1)Example 6.9. We can construct examples Ck of 2k + 1 × 2k+1 k matrices that are L-matrices by taking as columns in lexicographic order (say) all 2k + 1 × 1 column vectors with k 0s and k + 1 1s (k ≥ 1). For instance, when k = 2 we get ⎡ ⎤ 0 0 0 0 1 1 1 1 1 1 ⎢ 0 1 1 1 0 0 0 1 1 1 ⎥ ⎢ ⎥ ⎥ C2 = ⎢ ⎢ 1 0 1 1 0 1 1 0 0 1 ⎥. ⎣ 1 1 0 1 1 0 1 0 1 0 ⎦ 1 1 1 0 1 1 0 1 0 0 To see that C2 is an L-matrix, let B2 ∈ Q(C2 ). Take a nontrivial linear combination of the rows of B2 with coefficients, p1 , p2 , p3 , p4 , p5 not all 0. There must be both positive and negative numbers among the pi with, say, at least one but at most three positive. But then some coordinate in the linear combination is positive. Thus the rows of B2 are linearly independent, and we conclude that C2 is an L-matrix. One can argue in a similar way that Ck is an L-matrix for each k ≥ 1. An m × n (0, 1)-matrix without any rows of all zeros can always be regarded as the incidence matrix M of a family A = (A1 , A2 , . . . , Am ) of m nonempty subsets of the n element set X = {x1 , x2 , . . . , xn }. The family A has the equal union property provided that there exist nonempty, disjoint sets K, L ⊆ {1, 2, . . . , m} such that ∪i∈K Ai = ∪i∈L Ai . More generally, given an integer r ≥ 2, one can consider the r-fold equal union property whereby there exist r nonempty, pairwise disjoint subsets I1 , I2 , . . . , Ir of {1, 2, . . . , m} such that ∪i∈I1 Ai = ∪i∈I2 Ai = · · · = ∪i∈Ir Ai . Using a version of the Rado-Hall theorem, Lindstr¨ om [7] proved the following theorem, with another proof given by Tverberg [11] using his generalization of a classical theorem of Radon. Theorem 6.10. If m > n(r − 1), then the family A = (A1 , A2 , . . . , Am ) of nonempty subsets of X = {x1 , x2 , . . . , xn } has the r-fold equal union property.

6.3. SPECTRALLY ARBITRARY SIGN PATTERNS

55

The following result [6] shows in particular that the equal union property of sets and the negation of the L-matrix property of (0, 1)-matrices are equivalent. Theorem 6.11. The family A = (A1 , A2 , . . . , Am ) of nonempty subsets of X = {x1 , x2 , . . . , xn } has the equal union property if and only if its m × n incidence matrix M = [mij ] is not an L-matrix (so not an SNS-matrix when m = n). Proof. First assume that A has the equal union property, and let K and L be nonempty, disjoint subsets of {1, 2, . . . , m} such that ∪k∈K Ak and ∪l∈L Al both equal P. For each p ∈ P , let rp equal the number of k ∈ K such that p ∈ Ak , and let sp equal the number of l ∈ L such that p ∈ Al . Define an m × n matrix B = [bij ] by bkp = 1/rp if k ∈ K and p ∈ P , blp = 1/sp if l ∈ L and p ∈ P , and bij = mij for all other i and j. Then B is in Q(M ), and moreover the sum of rows k of B with k ∈ K equals the sum of rows l of B with l ∈ L. Since the two sets of rows of B in these sums are nonempty and disjoint, the rows of B are linearly dependent, and hence M is not an L-matrix. Conversely, suppose that M is not an L-matrix. Then there exists B ∈ Q(A) such that a nontrivial linear combination of the rows of B equals zero. Since B is a nonnegative matrix with no zero rows, there must be both positive and negative coefficients in this linear combination. It follows that the sets in A corresponding to rows with a positive coefficient and the sets corresponding to rows with a negative coefficient have the same union.  The following corollary of Theorem 6.11 is the special case r = 2 of Theorem 6.10. Corollary 6.12. A family of n + 1 nonempty subsets of a set of n elements always has the equal union property. It follows from Corollary 6.12 that if a family A = (A1 , A2 , . . . , Am ) of nonempty subsets of a set X satisfies | ∪i∈K Ai | < |K| for some K ⊆ {1, 2, . . . , m} with |K| ≥ 2, then A has the equal union property. Hence it follows from the Rado-Hall Theorem of Chapter 3 that if A does not have the equal union property, then A has an SDR. 6.3. Spectrally Arbitrary Sign Patterns Let A be an n × n (0, 1, −1)-matrix. Then A is a spectrally arbitrary sign pattern, abbreviated SAP, provided every monic real poynomial of degree n is the characteristic polynomial of some matrix in Q(A). Thus A is an SAP if and only if every collection of n real or complex numbers which is closed under complex conjugation is the collection of n eigenvalues of some matrix in Q(A). SAPs were introduced in [4], and recently there has been considerable interest in their properties; see e.g. [8] where it has been proved

56

6. MATRIX SIGN PATTERNS

that an n × n irreducible SAP must have at least 2n − 1 nonzero entries and conjectured that at least 2n nonzero entries are required. Here we focus only on one recent, somewhat surprising, result. The sign pattern A is called potentially nilpotent provided there exists a nilpotent matrix in Q(A). A spectrally arbitrary pattern is potentially nilpotent but the converse need not hold. For example, a potentially nilpotent matrix A with all 0s on its main diagonal cannot be spectrally arbitrary, since every matrix in Q(A) has eigenvalues that sum to zero. Verifying that a sign pattern is a SAP is a nontrivial exercise in general. Example 6.13. The matrix ⎡ −1 1 0 0 ⎢ −1 0 1 0 ⎢ ⎢ 0 −1 0 1 ⎢ ⎣ 0 0 −1 0 0 0 0 −1

0 0 0 1 1

⎤ ⎥ ⎥ ⎥ ⎥ ⎦

can be shown to be spectrally arbitrary. The matrix ⎡ ⎤ 1 1 −1 0 ⎢ −1 −1 1 0 ⎥ ⎢ ⎥ ⎣ 0 0 0 1 ⎦ 1 1 0 0 is potentially nilpotent (in fact, is nilpotent itself) but it is not spectrally arbitrary. The following theorem [9] shows that potential nilpotence and no zeros implies spectrally arbitrary. Theorem 6.14. Let A be an n × n potentially nilpotent matrix without any zeros. Then A is an SAP. Proof. Let N be a nilpotent matrix in Q(A), and let P be a nonsingular matrix such that N = P ΛP −1 where Λ is the Jordan Canonical Form (JCF) of N ; thus Λ is a direct sum of Jordan blocks of the form ⎡ ⎤ 0 1 0 ··· 0 ⎢ 0 0 1 ··· 0 ⎥ ⎢ ⎥ ⎢ .. .. . . . . ⎥ (6.3) . ⎢ . . . . 0 ⎥ ⎢ ⎥ ⎣ 0 0 0 ··· 1 ⎦ 0 0 0 ··· 0 We first show that we can get a nilpotent matrix in Q(A) whose JCF has only one Jordan block, equivalently, no power strictly less than n equals the zero matrix. Suppose Λ has more than one Jordan block, and let Λ be obtained from Λ by replacing each 0 on the superdiagonal with  > 0. Then N = P Λ P −1

BIBLIOGRAPHY

57

is nilpotent and its JCF can have only one Jordan block. Moreover, since A does not have any zeros, N is in Q(A) for  small enough. Thus we may assume that Λ has only one Jordan block, and thus that A is similar to an n × n matrix of the form (6.3). Now let p(x) = xn +an−1 xn−1 +· · ·+a1 x+a0 be a real monic polynomial of degree n, and let ⎡ ⎤ 0 1 0 ··· 0 ⎢ 0 0 1 ··· 0 ⎥ ⎢ ⎥ ⎢ ⎥ . . . . .. .. .. .. Cp(x) = ⎢ 0 ⎥ ⎢ ⎥ ⎣ 0 0 0 ··· 1 ⎦ −a0 −a1 −a2 · · · −an−1 be the companion matrix of p(x). If the |ai | are sufficiently small for all i, then, again since A has no zeros, P Cp(x) P −1 has the same sign pattern as A. Let x! n = xn + can−1 xn−1 + c2 an−2 xn−2 + · · · + cn an q(x) = c p c where c is a real nonzero constant. For c sufficiently small, all nonleading coefficients of q(x) are less than  in absolute value and thus 1 P Cq(x) P −1 c has sign pattern A. Computing its characteristic polynomial, we get: # " " # # " 1 1 −1 −1 = det P xIn − Cq(x) P det xIn − P Cq(x) P c c # " 1 = det xIn − Cq(x) c   1 = n det cxIn − Cq(x) c 1 = n q(cx) c 1 cx ! = n cn p c c = p(x). 

Thus A is an SAP. Bibliography

[1] R.A. Brualdi, Combinatorial Matrix Classes, Encyclopedia of Mathematics and its Applications 108, Cambridge University Press, Cambridge, 2006. [2] R.A. Brualdi and D. Cvetkovi´c, A Combinatorial Approach to Matrix Theory and its Applications, CRC Press, Boca Raton, FL, 2009, [3] R.A. Brualdi and B.L. Shader, Matrices of Sign-Solvable Linear Systems, Cambridge Tracts in Mathematics 116, Cambridge University Press, Cambridge, 1995.

58

6. MATRIX SIGN PATTERNS

[4] J.H. Drew, C.R. Johnson, D.D. Olesky, and P. van den Driessche, Spectrally arbitrary patterns, Linear Algebra Appl., 308 (2000), 121–137. [5] A. Galluccio and M. Loebl, On the theory of Pfaffian orientations. I. Perfect matchings and permanents, Electr. J. Combin., 6 (1999), R6. [6] D.P. Jacobs and R.E. Jamison, A note on equal unions in families of sets, Discrete Math., 241 (2001), 387–393. [7] B. Lindstr¨ om, A theorem on families of sets, J. Combin. Theory, Ser. A, 13 (1972), 272–277. [8] T. Britz, J.J. McDonald, D.D. Olesky, and P. van den Driessche, Minimally spectrally arbitrary patterns, SIAM J. Matrix Anal. Applics., 26 (2004), 257–271. [9] R. Pereira, Nilpotent matrices and spectrally arbitrary patterns, Electr. J. Linear Algebra, 16 (2007), 232–236. [10] G. Tesler, Matchings on graphs on non-orientable surfaces, J. Combin. Theory, Ser. B, 78 (2000), 198–231. [11] H. Tverberg, On equal unions of sets, in: Studies in Pure Mathematics, ed. by L. Mirsky, Academic Press, London, 1971, 249–250.

http://dx.doi.org/10.1090/cbms/115/07

CHAPTER 7

Eigenvalue Inclusion and Diagonal Products Diagonal dominance and inclusion regions for eigenvalues are much studied themes in matrix theory that are closely related. That diagonal dominance implies nonsingularity is usually attributed to both L´evy and Desplanques. The celebrated eigenvalue inclusion region of Gerˇsgorin follows from it, and also implies it. Taking into account the digraph associated with a matrix can lead to stronger results. In addition, by considering all the nonzero diagonal products of a matrix, more refined information can be obtained. 7.1. Some Classical Theorems Let A = [aij ] be an n × n complex matrix, and let R = (r1 , r2 , . . . , rn ) where  |aij | (i = 1, 2, . . . , n), ri = j =i

the sum of the magnitudes of the off-diagonal elements in row i. Then A is called diagonally dominant provided |aii | > ri

(i = 1, 2, . . . , n),

and weakly diagonally dominant provided |aii | ≥ ri

(i = 1, 2, . . . , n)

with strict inequality for at least one i. The following classical result is the L´evy-Desplanques theorem. Theorem 7.1. If A = [aij ] is diagonally dominant, then A is nonsingular. Proof. Suppose to the contrary that A is singular. Then there exists a nonzero vector x = (x1 , x2 , . . . , xn )t such that Ax = 0. Let |xk | = max{|xi | : 1 ≤ i ≤ n}. Then Ax = 0 implies that

n 

akj xj = 0,

j=1

or, equivalently, akk xk = −

 j =k

59

akj xj .

60

7. EIGENVALUE INCLUSION AND DIAGONAL PRODUCTS

Hence, by the triangle inequality, |akk ||xk | ≤



⎛ ⎞  |akj ||xj | ≤ ⎝ |akj ⎠ |xk |.

j =k

Thus |akk | ≤

j =k



|akj | = rk ,

j =k



a contradiction. Theorem 7.1 immediately implies Gerˇsgorin’s theorem. Theorem 7.2. The union of the circular disks {z : |z − aii | ≤ ri } contains all the eigenvalues of A.

Proof. Let λ be an eigenvalue of A and apply Theorem 7.1 to the singular matrix λIn − A which therefore cannot be diagonally dominant.  Let A = [aij ] be an n × n complex matrix. Since A is not assumed to be symmetric it doesn’t necessarily have an associated graph of order n (although we could associate a bipartite graph of order 2n). We associate with A a digraph D(A) in which edges have directions (that is, are ordered pairs of vertices) as follows. The vertices of D(A) are 1, 2, . . . , n where vertex i corresponds to both row i and column i. There is an edge in D(A) from vertex i to vertex j if and only if i = j and aij = 0. Note that D(A), as we have defined it here does not take into account the diagonal elements of A, and so D(A) does not have any loops unlike the D(A) in Chapter 5. A digraph is strongly connected provided one can get from any specified vertex to any other specified vertex by a walk that follows edges in their given directions; only walks of length at most n − 1 need be considered. The following standard result equates a matrix property with a digraph property. Lemma 7.3. The matrix A is irreducible if and only if the digraph D(A) is strongly connected. Proof. Here is a quick proof. If A is not irreducible, then there is a permutation matrix P such that we get a decomposition   A1 Ok,n−k −1 (7.1) P AP = A21 A2 for some integer k with 1 ≤ k < n. Clearly, D(A) is then not strongly connected; one can never “escape” from the vertices corresponding to the rows/columns of A1 . Conversely, if D(A) is not strongly connected, then take a vertex u that cannot reach every other vertex by walks. If we let U be the set of vertices reachable from u (this includes u), then U is a proper nonempty subset of the vertex set. Moreover, from the definition of

7.1. SOME CLASSICAL THEOREMS

61

U , there cannot be any edges from U to its complement U and this leads to a decomposition as in (7.1).  We make a couple of observations about irreducible matrices. For a complex matrix A, let |A| denote the nonnegative matrix obtained from A by replacing each entry with its absolute value. We have D(A) = D(|A|). Since |A| is a nonnegative matrix, an equivalent way to say that A is irreducible is that (I + |A|)n−1 or, equivalently, I + |A| + |A|2 + · · · + |A|n−1 , is a positive matrix. This is because the (i, j)-entry of |A|k is positive if and only if there is a walk from vertex i to vertex j of length k in D(A). Now suppose that A is an n×n irreducible matrix each of whose diagonal elements is nonzero. Since D(A) is strongly connected, it follows that every edge of D(A) is an edge of some cycle γ. The cycle γ corresponds to the product of nonzero entries of A of the form ai1 ,i2 , ai2 ,j2 , . . . , aik−1 ,ik , aik ,i1 where k is the length of the cycle. By ‘completing’ this product using the nonzero diagonal elements app with p = i1 , i2 , . . . , ik (cf. Section 6.1) we get a nonzero diagonal product of A. This proves the following. Lemma 7.4. Every nonzero entry of an n × n irreducible matrix A each of whose diagonal elements is nonzero is an entry of some nonzero diagonal product of A. We can now extend Theorem 7.1 to weakly diagonally dominant, irreducible matrices to get the following result of Taussky. Theorem 7.5. Let A = [aij ] be an n × n weakly diagonally dominant, irreducible matrix. Then A is nonsingular. Proof. The case n = 1 is trivial. Let n ≥ 2 and suppose that A is singular. We refer to the proof of Theorem 7.1. Let U = {j : |xj | = |xk |}. Then it follows that |ajj | = rj (j ∈ U ). Since A is weakly diagonally dominant, U is a proper and nonempty subset of {1, 2, . . . , n}. Since A is irreducible, there exists p ∈ U and q ∈ U such that apq = 0. Since p ∈ U , it follows from the proof of Theorem 7.1 that |xq | = |xk |, for otherwise we get |app | < rp , a contradiction. But then this  contradicts q ∈ U . An immediate corollary is the following. Corollary 7.6. Let A = [aij ] be an n × n irreducible matrix. A boundary point of the union of the circular disks {z : |z − aii ) ≤ ri } can be an eigenvalue of A only if it is a boundary point of all of the circular disks, A better theorem and corollary takes into account the digraph of a matrix.

62

7. EIGENVALUE INCLUSION AND DIAGONAL PRODUCTS

Theorem 7.7. Let A = [aij ] be an n × n complex matrix. Suppose that   |aii | > ri (γ a cycle of D(A)). (7.2) i∈γ

i∈γ

(Here we are identifying a cycle with the set of vertices it passes through.) Then A is nonsingular. If A is irreducible, then in (7.2), we can replace the strict inequality by a weak inequality with strict inequality for at least one cycle γ. All the eigenvalues of an n × n complex matrix A = [aij ] lie in the union of the lemniscates ⎫ ⎧ ⎨   ⎬ |x − aii | ≤ ri (γ a cycle of D(A)). (7.3) x: ⎭ ⎩ i∈γ

i∈γ

If A is assumed to be irreducible, a boundary point of the union of the lemniscates (7.3) can be an eigenvalue only if it is a boundary point of each of the lemniscates. A proof of Theorem 7.7 will follow from the recent results [1] in the next section. 7.2. Diagonal Products and Nonsingularity The conditions for nonsingularity in the above theorems depend not on the entries of A = [aij ] themselves, but only on their magnitudes. (This is in contrast to SNS matrices where nonsingularity depends only on the signs of the entries and not on their magnitudes.) In fact, the conditions for nonsingularity above depend only on the nonnegative matrix B = [bij ] where ⎧ if aij = 0 and i = j ⎨ ri 0 if aij = 0 and i = j bij = ⎩ |aii | if i = j. The matrix B has the same digraph as A, that is, D(A) = D(B). The condition (7.2) that   |aii | > ri (γ a cycle of D(A)) i∈γ

i∈γ

implies that every nonzero diagonal product of B, other than that corresponding to its main diagonal, is strictly less than the main diagonal product |a11 ||a22 | · · · |ann |. In the irreducible case, our condition in Theorem 7.7 is that every nonzero diagonal product of B is less or equal to the main diagonal product |a11 ||a22 | · · · |ann | with strict inequality for at least one diagonal product. This discussion motivates that which follows [1]. Let a = (a1 , a2 , . . . , an ) and r = (r1 , r2 , . . . , rn ) be positive vectors. Let D be a strongly connected digraph with vertices 1, 2, . . . , n. In this chapter we are assuming that D does not have loops. In terms of a, r, and D and

7.2. DIAGONAL PRODUCTS AND NONSINGULARITY

63

motivated by the matrix B above, we define an n × n nonnegative matrix B(a, r, D) = [bij ] by ⎧ ⎨ ri if i = j and there is an edge from i to j 0 if i = j and there is not an edge from i to j bij = ⎩ ai if i = j. Thus the digraph D(B) is the digraph D we started with. Since D is strongly connected, B(a, r, D) is an irreducible matrix. We define M(a, r, D) to be the class of all n × n nonnegative matrices A = [aij ] such that  aij = ri (i = 1, 2, . . . , n), and D(A) = D. aii = ai (i = 1, 2, . . . , n), j =i

The following theorems all refer to these constructions. Theorem 7.8. If the main diagonal product of B(a, r, D) is the unique diagonal product of maximum value, then there exists a positive vector x = (x1 , x2 , . . . , xn ) such that for all matrices A = [aij ] ∈ M(a, r, D)  aij xj (i = 1, 2, . . . , n). ai x i > j =i

Thus if X is the nonsingular diagonal matrix diag(x1 , x2 , . . . , xn ), then AX is diagonally dominant and so every matrix C with |C| = A is nonsingular. Theorem 7.9. If the main diagonal product of B(a, r, D) is a diagonal product of maximum value and there is a nonzero diagonal product of lesser value, then there exists a positive vector x = (x1 , x2 , . . . , xn ) such that for all matrices A = [aij ] ∈ M(a, r, D)  ai xi ≥ aij xj (i = 1, 2, . . . , n) j =i

with strict inequality for at least one i. Thus if X is the nonsingular diagonal matrix diag(x1 , x2 , . . . , xn ), then AX is weakly diagonally dominant and, since D is strongly connected, every matrix C with |C| = A is nonsingular. Theorem 7.10. If all nonzero diagonal products of B(a, r, D) have the same value, then there exists a positive vector x = (x1 , x2 , . . . , xn ) such that for all matrices A = [aij ] ∈ M(a, r, D)  aij xj (i = 1, 2, . . . , n). ai x i = j =i

Thus if X is the nonsingular diagonal matrix diag(x1 , x2 , . . . , xn ), then, where A is a matrix obtained from A by negating all its off-diagonal entries, all row sums of A X equal zero, and hence every real matrix C with |C| ∈ M(a, r, D) having positive diagonal entries and nonpositive off-diagonal entries is singular.

64

7. EIGENVALUE INCLUSION AND DIAGONAL PRODUCTS

Theorem 7.11. If there is at least one nonzero diagonal product of B(a, r, D) of value greater that the main diagonal product, then there exists a singular complex matrix C with |C| ∈ M(a, r, D) and a complex vector z = 0 such that Cz = 0. We shall prove only Theorems 7.8–7.10. The principal tool is the duality theorem of linear programming applied to the special case of the optimal assignment problem (OAP). We use the OAP in its multiplicative form as opposed to its usual additive form. The standard algorithms for the OAP can be used to produce the positive vector x whose existence is given in the theorems. Theorem 7.12. Let B = [bij ] be an n × n nonnegative matrix with at least one nonzero diagonal product. Then the maximum diagonal product of B equals min{y1 y2 · · · yn z1 z2 · · · zn : yi zj ≥ bij for all bij > 0.} Lemma 7.13. If the main diagonal product of B(a, r, D) is a diagonal product of maximum value, then there exists a positive vector x = (x1 , x2 , . . . , xn ) such that ai xi ≥ ri xj for all edges (i, j) of D. Proof. We apply Theorem 7.12 to the matrix B(a, r, D) = [bij ]. Since the main diagonal product of B(a, r, D) is the maximum among all diagonal products and yi zi ≥ bii = ai in the dual problem in Theorem 7.12, it follows that yi zi = ai for all i, and hence ai (i = 1, 2, . . . , n). (7.4) zi = yi For each edge (i, j) of D (that is, for which bij = ri ), yi zj ≥ bij = ri and thus zj ≥ ri /yi . Combining this with (7.4). we get zj ri ≥ for all edges (i, j) of D. (7.5) zi ai Now let xk = 1/zk for k = 1, 2, . . . , n. Then by (7.5) we have ri , and so xi z j ≥ ai ai x i ≥

ri = ri xj for all edges (i, j) of D. zj 

We can now give the proofs of Theorems 7.8, 7.9, and 7.10. Proof. First assume that all nonzero diagonal products of B(a, r, D) have the same value. Then as in the proof of Lemma 7.13 and using its notation, we have that yi zj = ri for all edges (i, j) of D and hence (7.6)

ai xi = ri xj

for all edges (i, j) of D.

7.2. DIAGONAL PRODUCTS AND NONSINGULARITY

65

Equation 7.6 implies that for each specified i = 1, 2, . . . , n, xj is constant for each j such that (i, j) is an edge of D. Since for each A = [aij ] ∈ M(a, r, D),   ri = aij = aij , {j:(i,j) is an edge of D }

j =i

we get that ai x i =



aij xj

(i = 1, 2, . . . , n and A = [aij ] ∈ M(a, r, D)),

j =i

and this proves Theorem 7.10. Now assume that the main diagonal product of B(a, r, D) is a diagonal product of maximum value and there is a nonzero diagonal product of lesser value. Then ai xi ≥ ri xj for all edges (i, j) of D, with strict inequality for at least one edge (i, j). Thus ⎞ ⎛  ai x i ≥ ⎝ aij ⎠ xj j =i

=



aij xj

{j:(i,j) is an edge of D }





aij xj ,

j =i

with strict inequality for at least one i. This proves Theorem 7.9.  Finally, assume that the main diagonal product ni=1 ai of B(a, r, D) is strictly greater than  every other diagonal product. Thus for each i = 1, 2, . . . , n, we have that ni=1 ai is strictly greater than every diagonal product of B(a, r, D) that does not use its ith diagonal element ai . There exists a t > 0 such that with a = a − (t, t, . . . , t), the matrix B(a , r, D) satisfies the hypotheses of Theorem 7.9. Thus there exists a positive vector x = (x1 , x2 , . . . , xn ) such that for all matrices A = [aij ] ∈ M(a , r, D)  ai xi = (ai − t)xi ≥ aij xj (i = 1, 2, . . . , n). j =i

Hence for all matrices A = [aij ] ∈ M(a, r, D)  aij xj (i = 1, 2, . . . , n). ai x i > j =i

This proves Theorem 7.8.



We do not prove Theorem 7.11 here. It relies on a theorem of Camion and Hoffman (see [1]) which asserts: If N = [nij ] is an n × n nonnegative matrix, then all n × n complex matrices the modulus of whose entries is given by N are nonsingular if and only if there is a permutation matrix P

66

7. EIGENVALUE INCLUSION AND DIAGONAL PRODUCTS

and a diagonal matrix D with positive diagonal entries such that P N D is a diagonally dominant matrix. Theorem 7.7 is a consequence of Theorems 7.8 and 7.9. The condition (7.2) implies that the main diagonal product of A is strictly greater than every other diagonal product. We conclude this chapter with some comments about the case where the vector a = (a1 , a2 , . . . , an ) contains some zeros. In this case, matrices A ∈ M(a, r, D) may have positive entries that do not belong to any positive diagonal. Such entries can be replaced with zeros (edges of D can be removed), since they do not contribute to the determinant and thus do not affect nonsingularity. Thus, with the continued assumption that D is strongly connected and assuming that D has more than one cycle (otherwise we have a trivial situation), we have the following theorem [1]. Theorem 7.14. If the strongly connected digraph D has at least two cycles and at least one ai equals zero, then there is a singular complex n × n matrix C such that |C| ∈ M(a, r, D) and a complex nonzero vector z such that Cz = 0. All of the quantities whose existence is asserted in Theorems 7.8 to 7.14 can be produced in polynomial time. Bibliography [1] E. Boros, R.A. Brualdi, Y. Crama, and A.J. Hoffman, Gerˇsgorin variations III: On a theme of Brualdi and Varga, Linear Algebra Appl., 428 (2008), 14–19. [2] R.S. Varga, Gerˇsgorin and his Circles, Springer Series in Computational Mathematics, 36, Springer-Verlag, Berlin, 2004.

http://dx.doi.org/10.1090/cbms/115/08

CHAPTER 8

Tournaments Tournaments are complete graphs in which each edge has been given one of the two possible orientations. They are models of round-robin tournaments with players p1 , p2 , . . . , pn in which every pair of players plays one game and there are no ties: the vertices correspond to the players, and an edge directed from the vertex representing pi to the vertex representing pj signifies that player pi beats player pj . Landau’s classical theorem determines the possible scores that can result in a round-robin tournament. There are recent proofs of Landau’s theorem and generalizations that specify some structure of the resulting tournament. The adjacency matrix of a tournament is an anti-symmetric (0, 1)-matrix called a tournament matrix. The eigenvalues of tournament matrices have some special properties. 8.1. Landau’s Theorem A tournament of order n is an orientation Tn of the complete graph Kn . Thus Tn has n vertices, usually labeled as 1, 2, . . . , n where for i = j, either there is an edge from vertex i to vertex j, or one from vertex j to vertex i, but not both. The number of edges from vertex i is its outdegree, and the number of edges into vertex i is its indegree. It is customary to call the outdegree of a vertex its score and to call the indegree its loss. The score vector of Tn is R = (r1 , r2 , . . . , rn ) where ri is the score of vertex i. The loss vector of Tn is S = (s1 , s2 , . . . , sn ) where si is the loss of vertex i. Since each vertex meets exactly n − 1 edges, the loss vector S = (s1 , s2 , . . . , sn ) is determined by the score vector: S = (n − 1, n − 1, . . . , n − 1) − R. We have

n  i=1

ri =

n  i=1

" # n si = , 2

the number of edges of Kn . By reversing the directions of each edge of Tn we get a new tournament Tn with score vector S and loss vector R. Example 8.1. An example of a tournament of order 6 with score vector R = (1, 2, 2, 3, 3, 4) and loss vector S = (4, 3, 3, 2, 2, 1) is given in Figure 8.1. Vertices are labeled a, b, c, d, e, f with the number in parenthesis equal to the score of that vertex. 67

68

8. TOURNAMENTS

c(2)

b(2)

f (4)

a(1)

e(3) d(3) Figure 8.1 Let A = [aij ] be the n × n adjacency matrix of Tn . Then A is an antisymmetric (0, 1)-matrix with zeros on its main diagonal, that is, A satisfies A + At = Jn − In . The adjacency matrix of a tournament is called a tournament matrix. The score vector (respectively, loss vector) of a tournament is the row sum vector (respectively, column sum vector) of the corresponding tournament matrix. The matrix At is the adjacency matrix of the reversed tournament Tn . We usually blur the distinction between a tournament and a tournament matrix. For instance, we refer to the score vector of a tournament matrix, rather than its row sum vector. Example 8.2. With the vertices taken in the order a, b, c, d, e, f , the tournament matrix corresponding to the tournament in Example 8.1 is ⎡ ⎤ 0 0 1 0 0 0 ⎢ 1 0 0 0 0 1 ⎥ ⎢ ⎥ ⎢ 0 1 0 1 0 0 ⎥ ⎢ ⎥ ⎢ 1 1 0 0 1 0 ⎥. ⎢ ⎥ ⎣ 1 1 1 0 0 0 ⎦ 1 0 1 1 1 0 The matrix



0 ⎢ 0 ⎢ ⎣ 1 1

1 0 0 1

0 1 0 0

⎤ 0 0 ⎥ ⎥ 1 ⎦ 0

is a 4×4 tournament matrix with score vector R = (1, 1, 2, 2) and loss vector S = (2, 2, 1, 1). If A is a tournament matrix, then, for each n × n permutation matrix P , P −1 AP is also a tournament matrix, corresponding to a reordering of the vertices of the associated tournament. Thus, as in the example, we

8.1. LANDAU’S THEOREM

69

may assume that the score vector R = (r1 , r2 , . . . , rn ) of a tournament is monotone nondecreasing: 0 ≤ r1 ≤ r2 ≤ · · · ≤ rn ≤ n − 1. Let Tn be a tournament with such a score vector R, and let K ⊆ {1, 2, . . . , n} with |K| = k. Then the principal submatrix A[K] of A determined by the rows and columns with indices   in K is a k ×k tournament matrix, and hence A[K] contains a total of k2 1s. This implies that " #  k ri ≥ (K ⊆ {1, 2, . . . , n}, |K| = k, 1 ≤ k ≤ n) (8.1) 2 i∈K

with equality when K = {1, 2, . . . , n}. By the monotonicity assumption on R, the left hand side is smallest when K = {1, 2, . . . , k}. Thus the inequalities in (8.1) are equivalent to the n inequalities " # k  k ri ≥ (1 ≤ k ≤ n) (8.2) 2 i=1

with equality when k = n. Note that if we have equality in (8.2) for some k with 1 ≤ k ≤ n − 1, then   Ok,n−k A1 A= Jn−k,k A2 and A is reducible (the corresponding tournament is not strongly connected). We also observe that (8.2) is equivalent to the majorization condition R  (0, 1, 2, . . . , n − 1). (Here we note that since R and (0, 1, 2, . . . , n − 1) are monotone nondecreasing with the same sum, the inequalities in the majorization condition have to be reversed and so are equivalent to (8.2).) The vector (0, 1, 2, . . . , n − 1) is the score vector of the transitive tournament in which player j beats player i if and only if i < j. The corresponding tournament matrix is a lower triangular matrix with 1s everywhere below the main diagonal, and so 0s everywhere on and above the main diagonal. We have shown that the inequalities (8.2) are necessary conditions for the existence of a tournament with specified score vector R. Landau’s theorem is that these inequalities suffice for there to exist a tournament with score vector R. Let T (R) denote the set of all tournaments (tournament matrices) with score vector R. Theorem 8.3. Let R = (r1 , r2 , . . . , rn ) be a monotone, nonincreasing vector of nonnegative integers. Then T (R) is nonempty if and only if " # k  k ri ≥ (k = 1, 2, . . . , n) (8.3) 2 i=1

with equality when k = n.

70

8. TOURNAMENTS

Proof. There are several ways to prove the sufficiency part of this theorem as there are for the Gale-Ryser Theorem 5.3: (i) by contradiction, choosing a counterexample with n smallest and for that n with r1 smallest; (ii) by a constructive algorithm, putting sn = n − 1 − rn 1s in column n in those rows with the largest row sums, giving preference to the topmost positions in the case of ties (to retain the monotonicity property); and (iii) by using the Rado-Hall Theorem 3.1. We refer to [4] for details of proofs (i) and (ii), and give the recent proof (iii) that shows that Landau’s theorem can be regarded as a special case of the Rado-Hall theorem [5]. Let X = {(i, j) : 1 ≤ i, j ≤ n, i = j}. We define a matroid M on X in terms of its circuits (minimal dependent sets): the circuits of M are the n2 disjoint sets {(i, j), (j, i) : 1 ≤ i, j ≤ n, i = j} of cardinality 2. Thus Y ⊆ X is independent in M if and only if it does not contain both (i, j) and (j, i) for some i = j. This clearly defines a matroid; in fact, we can identify the elements of X with n × n real matrices where, for i = j, (i, j) with i < j is identified with the matrix that has a 1 in the (i, j) and (j, i) entries and 0s elsewhere, and (j, i) is identified with the matrix that has a 2 in the (i, j) and (j, i) entries. For instant with n = 3 and i = 1, j = 3, (1, 3) and (3, 1) are identified with ⎡ ⎤ ⎡ ⎤ 0 0 1 0 0 2 ⎣ 0 0 0 ⎦ and ⎣ 0 0 0 ⎦ , 1 0 0 2 0 0 respectively. Taking independence of these matrices to mean linear independence over the real field, we obtain the matroid M described above. The rank function of M is given by  min{|U ∩ {(i, j), (j, i)}, 1} (U ⊆ X). ρ(U ) = {(i,j):1≤i 0 from these entries. Then A = A , A− , and for  small enough, A , A− ∈ Ωn and 1 1 A = A + A− . 2 2 Thus A is a convex combination of two matrices in Ωn different from A, and  hence A is not an extreme point of Ωn . There are many known and interesting properties of the polytope Ωn . For instance, the 1-dimensional faces (the edges of the vertex-edge graph of the polytope Ωn ) correspond to pairs of permutation matrices Pσ and Pτ such that the permutation στ −1 has exactly one cycle of length strictly greater than 1. Also the faces of Ωn correspond bijectively to n × n (0, 1)matrices B with “total support,” meaning that given each 1 of B there is a permutation matrix P containing a 1 in the same position where P ≤ B (entrywise). For more details on these and other properties, see [3]. 9.2. Alternating Sign Matrices An alternating sign matrix, abbreviated ASM, is an n × n (0, 1, −1)matrix such that the 1’s and −1’s alternate in each row and column, beginning and ending with a 1. Example 9.2. Every permutation matrix is an ASM. Other examples of ASMs are: ⎡ ⎤ ⎡ ⎤ 0 0 1 0 0 0 1 0 0 0 ⎢ 0 ⎢ 0 1 −1 1 0 ⎥ 0 0 0 1 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ 1 −1 ⎢ ⎥ 1 −1 1 ⎥ ⎢ ⎥ and ⎢ 1 −1 1 0 0 ⎥ . ⎣ 0 ⎦ ⎣ 1 −1 1 0 0 1 0 0 0 ⎦ 0 0 1 0 0 0 0 0 1 0 ASMs can be regarded as generalizations of permutation matrices since the permutation matrices are the ASMs without any −1s. The alternating property implies that the partial row and column sums of an ASM, starting at either end of a row or column equal 0 or 1, with the full row and column sum equal to 1. The following elementary properties of ASMs follow from their definition: (i) the number of 1s and −1s in each row and column is odd; (ii) the first and last nonzero entry in each row and column is a 1. (iii) the first and last rows and columns each contain exactly one 1 and no −1’s. 
Reversing the order of the rows or columns of an ASM results in another ASM, but if A is an n×n ASM and P and Q are n×n permutation matrices, then P AQ is generally not an ASM (it is, of course, if A is also a permutation matrix).

80

9. TWO MATRIX POLYTOPES

Since the number of n × n permutation matrices is n!, and since permutation matrices are ASMs, it is natural to ask for the number of n × n ASMs. The primary interest in ASMs in the mathematics community originated from the alternating sign matrix conjecture of Mills, Rumsey, and Welch in 1983; see [2] for a history of this conjecture and its relation with other combinatorial constructs. This conjecture, proved by Zeilberger in 1996, asserts that the number of n × n ASMs equals 1!4!7! · · · (3n − 2)! . n!(n + 1)!(n + 2)! · · · (2n − 1)! The first nine such numbers are 1, 2, 7, 42, 429, 7436, 218348, 10850216, and 911835460. As later discovered by Kuperberg, who gave another proof of the ASM conjecture, physicists working in statistical mechanics had been studying ASMs under the rubric square ice. Square ice is a system of water molecules frozen in a square lattice. There are oxygen atoms at each vertex of an n × n lattice, with hydrogen atoms between each pair of successive oxygen atoms in a row or column, and on either vertical side of the lattice, but not on the two horizontal sides. For example, with n = 4 we have H

O H H O H H O H H O

H

O H H O H H O H H O

H

O H H O H H O H H O

H

O H H O H H O H H O

H H . H H

Each oxygen atom is to be attached to two hydrogen atoms to get a water molecule H2 O, in such a way that no two oxygen atoms are attached to any of the same hydrogen atoms. There are six possible configurations in which an oxygen atom can be attached to two hydrogen atoms:

H ← O → H

H ↑ H ← O

H ↑ O → H

H ↑ O ↓ H

O → H ↓ H

H ← O ↓ . H

The top two configurations correspond to 1 and −1, respectively. The other four configurations correspond to 0. For example, for n = 4 we have:

9.3. THE ALTERNATING SIGN MATRIX POLYTOPE

H ← O ↓ H H ← O

H ← O ↓ H → H

H ← O

O → H ↓ H

H ↑ H ← O

H ← O

H ↑ H ← O

H ↑ H ← O

→ H

H ↑ O ↓ H

O → H ↓ H

H ← O

O → H ↓ H

H ← O

and this corresponds to the ASM ⎡ 0 ⎢ 1 ⎢ ⎣ 0 0

→ H

81

→ H

→ H

H ↑ O → H H ↑ O → H

⎤ 0 1 0 0 −1 1 ⎥ ⎥. 1 0 0 ⎦ 0 1 0

9.3. The Alternating Sign Matrix Polytope 2

We may regard the n×n ASMs as elements of n , and then their convex 2 hull (all their convex combinations) forms a convex polytope in n , called the n × n ASM polytope [1, 5] and denoted here by Θn . Since the n × n permutation matrices are ASMs, Θn contains the doubly stochastic polytope Ωn . The polytope Ωn is defined by linear constraints, and we determined its extreme points to be the n × n permutation matrices. The polytope Θn is defined by convex combinations of ASMs, and so its extreme points are ASMs but it is not immediate that every n × n ASM is an extreme point. To show that every ASM is an extreme point of Θn , we need to show that an n × n ASM cannot be written as a nontrivial convex combination of ASMs different from itself. We first characterize Θn in terms of linear constraints. In Striker [5], the ASM polytope Θn is defined as the convex hull of the n × n ASMs, as we have done here, and then characterized in terms of linear constraints as in the next theorem; in Behrend and Knight [1], Θn is defined by the linear constraints in that theorem whose set of extreme points is then shown to be the set of all n × n ASMs. Theorem 9.3. The ASM polytope is the set of all n × n real matrices X = [xij ] satisfying the following linear constraints: (9.4)

(row partial sums)

0≤

k  j=1

xij ≤ 1

(1 ≤ k ≤ n, 1 ≤ i ≤ n),

82

9. TWO MATRIX POLYTOPES

with equality on the right when k = n, and (9.5)

(column partial sums)

0≤

l 

xij ≤ 1

(1 ≤ l ≤ n, 1 ≤ j ≤ n),

i=1

with equality on the right when l = n. In particular, the entries of X satisfy −1 ≤ xij ≤ 1

(1 ≤ i, j ≤ n).

Proof. Every n × n ASM satisfies (9.4) and (9.5). Thus the convex hull of the n × n ASMs, that is, Θn , is contained in the polytope determined by (9.4) and (9.5). It suffices to show that every n × n matrix X = [xij ] satisfying (9.4) and (9.5) is a convex combination of ASMs. Let k l   xij (1 ≤ k, i ≤ n), and clj = xij (1 ≤ l, j ≤ n). rik = j=1

i=1

Then 0 ≤ rik ≤ 1 (1 ≤ k, i ≤ n), and 0 ≤ clj ≤ 1 (1 ≤ l, j ≤ n), and, in particular, rin = 1

(1 ≤ i ≤ n), and cnj = 1

(1 ≤ j ≤ n).

Define ri0 = 0 (1 ≤ i ≤ n) and c0j = 0 (1 ≤ j ≤ n). Then by (9.4) and (9.5), (9.6)

rij − ri,j−1 = xij = cij − ci−1,j

(1 ≤ i, j ≤ n).

If all partial row and column sums of X equal 0 or 1, then clearly A is an ASM. We now assume that A has at least one partial sum α with 0 < α < 1. Call such a partial sum a deficient partial sum. Consider the row and column partial sums ri,j−1 , rij , ci−1,j , cij “surrounding” an entry xij of X. We classify these partial sums relative to xij in the obvious way as left, right, above, and below. In general, a right partial sum of an entry is a left partial sum of the next entry in its row; similarly a below partial sum of an entry is an above partial sum of the next entry in its column. It follows from (9.6) that if xij has one deficient partial sum around it, it has a second deficient partial sum around it. Consider the graph G whose vertices are the positions (i, j) of X for which xij has at least one, and so at least two, deficient partial sums around it, with an edge between two vertices if and only if they share a deficient partial sum (one right and one left, or one below and one above). The edges of this graph are either horizontal (connecting two consecutive positions in a row) or vertical (connecting two consecutive positions in a column). Since each vertex of G with nonzero degree has degree at least 2, it follows that G has a cycle. A corner vertex of this cycle is a vertex at which a horizontal and vertical edge meet. Let X , respectively X− be obtained from X by alternating adding

9.4. ASM PATTERNS

83

and subtracting (respectively, subtracting and adding)  from the entries corresponding to the corner vertices of this cycle. Then X = X , X− , and for  small enough, X , X− ∈ Θn . Moreover, 1 1 X = X + X− . 2 2 Thus X is a nontrivial convex combination of matrices in Θn , and hence X is not an extreme point.  All (full) row and columns sums of a matrix X in Θn equal 1, implying that the last entry in each row and column of X is uniquely determined by the other (n − 1)2 entries. This implies the following corollary. Corollary 9.4. The dimension of the ASM polytope Θn equals (n−1)2 . Corollary 9.5. The extreme points of the ASM polytope Θn are precisely the n × n ASMs. Proof. Let A be an n × n ASM. If A is not an extreme point of Θn , then A can be written as 1 1 A= X+ Y 2 2 where X, Y ∈ Θn , and X = A and Y = A. Since each row and column partial sum of A equals 0 or 1, each row and column partial sum of X and Y equals 0 or 1, and the corresponding row and column partial sums of X and Y equal the corresponding row and column partial sums of A. This easily implies that X = Y = A, a contradiction.  Properties of the facets, and more generally the faces of Θn , are developed in [5]. In [1] “higher spin alternating sign matrices” are defined leading to a generalization of the ASM polytope Θn . 9.4. ASM Patterns An n × n alternating sign matrix A has a decomposition as A = A1 − A2 where A1 and A2 are (0, 1)-matrices. The pattern of A is defined to be the  = A1 + A2 which specifies the positions occupied by the (0, 1)-matrix A nonzero entries of A. By the defining alternating sign property, an ASM is uniquely determined by its pattern. The pattern of an ASM has an odd number of 1s in each row and column. Example 9.6. Let

⎡ ⎢ ⎢ ⎢ B=⎢ ⎢ ⎢ ⎣

0 0 0 1 0 0

0 0 1 0 0 0

0 1 0 1 1 0

1 0 1 1 1 1

0 0 1 1 1 0

0 0 0 1 0 0

⎤ ⎥ ⎥ ⎥ ⎥. ⎥ ⎥ ⎦

84

9. TWO MATRIX POLYTOPES

Then B is the pattern of the ASM ⎡ ⎤ 0 0 0 + 0 0 ⎢ 0 0 + 0 0 0 ⎥ ⎢ ⎥ ⎢ 0 + 0 − + 0 ⎥ ⎥ A=⎢ ⎢ + 0 − + − + ⎥ ⎢ ⎥ ⎣ 0 0 + − + 0 ⎦ 0 0 0 + 0 0 where we use ± to indicate ±1. Patterns of ASMs are investigated in [4] and we very briefly discuss some of their elementary properties and their relation with the matrix class A(R, S) discussed in Chapter 5.  and let R = (r1 , r2 , . . . , rn ) be Let A be an n × n ASM with pattern A,  and S = (s1 , s2 , . . . , sn ) the column sum vector. the row sum vector of A We have by use of properties (i)-(iii) of an ASM, (a) r1 + r2 + · · · + rn = s1 + s2 + · · · + sn , (b) r1 = rn = s1 = sn = 1, and (c) the ri and si are odd positive integers. Let n-vectors jn and kn be defined by jn = (1, 1, . . . , 1) and kn = (1, 3, 5, . . . , 5, 3, 1). Thus, for instance, k6 = (1, 3, 5, 5, 3, 1) and k7 = (1, 3, 5, 7, 5, 3, 1). Example 9.7. The vector jn is the row and column sum vector of n × n ASMs that are permutation matrices. The vector kn is the row and column sum vector of a special n × n ASM called a diamond ASM and denoted by Dn . We illustrate D6 and D7 from which the general definition is apparent: ⎡ ⎤ ⎡ ⎤ 0 0 0 + 0 0 0 0 0 0 + 0 0 ⎢ 0 0 + − + 0 0 ⎥ ⎢ 0 0 + − + 0 ⎥ ⎢ ⎥ ⎢ ⎢ 0 + − + − + 0 ⎥ ⎥ ⎢ 0 + − + − + ⎥ ⎢ ⎥ ⎥ ⎢ ⎥ D6 = ⎢ ⎢ + − + − + 0 ⎥ , D7 = ⎢ + − + − + − + ⎥ . ⎢ ⎥ ⎢ 0 + − + − + 0 ⎥ ⎢ ⎥ ⎣ 0 + − + 0 0 ⎦ ⎣ 0 0 + − + 0 0 ⎦ 0 0 + 0 0 0 0 0 0 + 0 0 0 An elementary inductive argument is used in [4] to prove the following result concerning the row and column sum vectors of the pattern of an ASM.  has row sum Theorem 9.8. Let A be an n × n ASM whose pattern A vector R and column sum vector S. Then jn ≤ R, S ≤ kn (entrywise). Moreover, R = S = jn if and only if the A is a permutation matrix, and R = S = kn if and only if A is a diamond ASM.

9.4. ASM PATTERNS

85

Let R = (r1 , r2 , . . . , rn ) and S = (s1 , s2 , . . . , sn ) satisfy (a)-(c). Let  A(R, S) denote the set of all patterns of n × n ASMs whose row sum vector is R and whose column sum vector is S. Thus   : A an n × n ASM}. A(R, S) = A(R, S) ∩ {A  With R and S satisfying (a)-(c), we may have A(R, S) = ∅. For instance, if  R = jn , then S must equal jn in order that A(R, S) = ∅. It is easy to show  that A(R, S) = ∅ if R = S = (1, 1, 5, 1, 1). However. the following theorem is proved in [4]. Theorem 9.9. Let R = (r1 , r2 , . . . , rn ) be a vector of odd integers. Then there is an n × n ASM whose pattern has row sum vector R if and only if jn ≤ R ≤ kn (entrywise).

(9.7)

The diamond matrix Dn with n odd is a symmetric matrix. An n × n symmetric ASM A = [aij ] can be regarded as the adjacency matrix of a signed graph G(A) with linearly ordered vertices 1, 2, . . . , n with a positive edge between vertices i and j if and only if aij = +1 and a negative edge if and only if aij = −1. In this way we get an alternating sign graph. By requiring A to have all zeros on its main diagonal we get a signed graph without loops. A signed graph is an alternating sign graph provided that if i is any vertex and {j1 , j2 , . . . , jk } are the vertices joined to i by an edge where 1 ≤ j1 < j2 < · · · < jk ≤ n, then {i, j1 , }, {i, j3 }, {i, j5 }, . . . are positive edges and {i, j2 }, {i, j4 }, . . . are negative edges. Since the sum of the degrees of the vertices of a graph without loops is even, and since the degrees must be odd, such a graph must have an even number of vertices. Example 9.10. Let ⎡ ⎢ ⎢ ⎢ A=⎢ ⎢ ⎢ ⎣

⎤ 0 0 0 + 0 0 0 0 + − + 0 ⎥ ⎥ 0 + 0 0 − + ⎥ ⎥. + − 0 0 + 0 ⎥ ⎥ 0 + − + 0 0 ⎦ 0 0 + 0 0 0

Then A is a symmetric ASM with corresponding ASG given in Figure 9.1.

86

9. TWO MATRIX POLYTOPES

2



4

+

1

+ + 6

+

3



+ 5

Figure 9.1 Other properties of symmetric ASMs and alternating sign graphs, including he maximum number of edges in an ASG without loops, are determined in [4]. Bibliography [1] R.E. Behrend and V.A. Knight, Higher spin alternating matrices, Electr. J. Combinatorics, 14 (2007), #R83. [2] D.M. Bressoud, Proofs and Confirmations: The Story of the Alternating Sign Conjecture, Math. Association of America, Cambridge University Press, Cambridge, 1999. [3] R.A. Brualdi, Combinatorial Matrix Classes, Encyclopedia of Mathematics and its Applications 108, Cambridge University Press, Cambridge, 2006. [4] R.A. Brualdi, K.P. Kiernan, S.A. Meyer, and M.W. Schroeder, Patterns of alternating sign matrices, submitted. [5] J. Striker, The alternating sign matrix polytope, Electr. J. Combinatorics, 16 (2009), #R41.

http://dx.doi.org/10.1090/cbms/115/10

CHAPTER 10

Digraphs and Eigenvalues of (0, 1)-matrices A square (0, 1)-matrix A is the adjacency matrix of a digraph and, since A is a nonnegative matrix, the Perron-Frobenius theory applies. Thus A has a nonnegative eigenvalue which is at least as large as the modulus of every other eigenvalue. But what if we require that all eigenvalues of A are not only real but nonnegative, or even positive? In this chapter we investigate this question which leads to the important subclass of nonnegative matrices known as totally nonnegative matrices. Those (0, 1)-matrices which are totally nonnegative are very special. 10.1. (0, 1)-matrices with all Eigenvalues Positive Let A = [aij ] be an n × n (0, 1)-matrix. It follows from the PerronFrobenius theory that A has a nonnegative eigenvalue, indeed A has a positive eigenvalue unless there exists a permutation matrix P such that P −1 AP is a triangular matrix with only 0s on its main diagonal. The eigenvalues of a (0, 1)-matrix may all be nonnegative and even positive. Example 10.1. Besides the identity matrix which has all eigenvalues equal to 1, we have the following matrices with eigenvalues as shown: ⎡ ⎤ ⎡ ⎤ 1 0 1 1 0 1 ⎣ 1 1 0 ⎦ (0, 0, 2), ⎣ 1 1 1 ⎦ (1, 1, 1) and 1 1 0 0 0 1 ⎡

1 ⎢ 1 ⎢ ⎣ 0 1

0 1 0 1

0 0 1 1

⎤ 1 ) √ * 5 3 ± 0 ⎥ ⎥ 0, 1, . 1 ⎦ 2 1

The first 3 × 3 matrix A above satisfies the A3 = 2A2 and hence its characteristic polynomial is x2 (x − 2). The n × n (0, 1)-matrix A = [aij ] is the adjacency matrix of a digraph D(A) with vertices {1, 2, . . . , n} in which for 1 ≤ i, j ≤ n, there is an edge from vertex i to vertex j if and only if aij = 1. This digraph may have loops since there may be 1s on the main diagonal of A. The eigenvalues of a digraph are the eigenvalues of its adjacency matrix. We can ask our basic question in two ways: 87

88

10. DIGRAPHS AND EIGENVALUES OF (0, 1)-MATRICES

I. When are all the eigenvalues of a (0, 1)-matrix positive? Nonnegative? II. Which digraphs have all of their eigenvalues positive? Nonnegative? Weisstein (see [4]) counted the number of digraphs with n labeled vertices with only positive eigenvalues for n = 1, 2, 3, 4, 5 and found the answer to be 1, 3, 25, 543, 29281, respectively. Checking this sequence on the on-line encyclopedia of integer sequences, he found that it agreed with the beginning of the sequence that counts the number of acyclic digraphs (so no loops) on n labeled vertices. McKay et al. [4] then proved the following theorem. Theorem 10.2. For n ≥ 1, the number of acyclic digraphs with n labeled vertices equals the number of n × n (0, 1)-matrices all of whose eigenvalues are positive. In fact, McKay et al. proved that if all the eigenvalues are positive then, in fact, all the eigenvalues equal 1. Since the vertices of an acyclic digraph can be ordered as 1, 2, . . . , n so that all edges (i, j) from a vertex i to a vertex j satisfy i < j, a result stronger than that stated in Theorem 10.2 was proved, which we now state in matrix form. Theorem 10.3. For n ≥ 1, the eigenvalues of an n × n (0, 1)-matrix are all positive if and only if there exists a permutation matrix P such that P −1 AP is a lower triangular matrix with all 1s on its main diagonal. In particular, all the eigenvalues equal 1, Referring to the theorem, we have that P −1 AP − In = P −1 (A − In )P , and so A − In is the adjacency matrix of an acyclic digraph. Generalizing Theorem 10.3, Brualdi and Kirkland [2] proved the following result. Theorem 10.4. Let A be an n × n (0, 1)-matrix with n ≥ 1 such that there is an integer r with trace(A) ≤ r. In addition, suppose that A has r positive eigenvalues and n − r eigenvalues equal to 0. Then there exists a permutation matrix P such that P −1 AP is a lower triangular matrix with r 1s and n − r 0s on the main diagonal. In particular, trace(A) = r and the r positive eigenvalues of A equal 1. Proof. Let the eigenvalues of A be λ1 ≥ λ2 ≥ · · · ≥ λr > 0 = λr+1 = · · · = λn . Then using the arithmetic-geometric mean inequality we get 1 λ1 + λ2 + · · · + λr traceA = ≥ (λ1 λ2 · · · λr ) r . r r Using the characteristic polynomial of A, we get    det A[α] = λi = λ1 λ2 · · · λr > 0 (10.2)

(10.1)

1≥

α⊆{1,2,...,n},|α|=r

α⊆{1,2,...,n},|α|=r i∈α

10.1. (0, 1)-MATRICES WITH ALL EIGENVALUES POSITIVE

89

where A[α] is the principal submatrix of A determined by the indices in α. Since A is a (0, 1)-matrix, the left hand side of (10.2) is integral, and thus λ1 λ2 · · · λr ≥ 1. Thus, (10.1) becomes (10.3)

1≥

1 λ1 + λ2 + · · · + λr traceA = ≥ (λ1 λ2 · · · λr ) r ≥ 1. r r

Therefore we have equality throughout, and so λ1 = λ2 = · · · = λr = 1. Thus A has r eigenvalues equal to 1 and n − r eigenvalues equal to 0; in particular, trace(A) = r. We now invoke the Perron-Frobenius theory of nonnegative matrices as outlined in Chapter 1. Let A1 , A2 , . . . , Ak be the irreducible components of A. Then λmax (Ai ) is a simple eigenvalue of Ai for i = 1, 2, . . . , k. Thus k ≥ r where r of the Ai satisfy λmax (Ai ) = 1 with the other eigenvalues equal to 0. The other k − r Ai satisfy λmax (Ai ) = 0 with all eigenvalues equal to 0 and so these Ai are 1 × 1 zero matrices. Now each Ai with λmax (Ai ) = 1 being an irreducible (0, 1)-matrix has at least one 1 in each row and column. If a row or column had more than one 1, then by PF7, λmax (Ai ) > 1, a contradiction. Thus each Ai with λmax (Ai ) = 1 is a permutation matrix, and since all other eigenvalues equal 0, is a 1 × 1 permutation matrix, that is, equals I1 . Summarizing A has r irreducible components (see Section 1.1) equal to I1 , and the other irreducible components of A equal O1 , and thus k = n. Therefore there is a permutation matrix P such that P −1 AP is a lower triangular matrix with r 1s and n − r 0s on the main diagonal.  Since the trace of an n × n (0, 1)-matrix cannot exceed n, Theorem 10.3 is the special case r = n of Theorem 10.4. The trace assumption in Theorem 10.4 is needed. For example, the 4 × 4 matrix ⎡ ⎤ 1 0 0 1 ⎢ 1 1 0 0 ⎥ ⎥ A=⎢ ⎣ 0 0 1 1 ⎦ 1 1 1 1 √

has trace equal to 4 and has nonnegative eigenvalues 0, 1, 3±2 5 of which three are positive. But there does not exist a permutation matrix P such that P −1 AP is a lower triangular matrix (because A does not have a row √ with exactly one 1, or use the fact that if there were such a P then 3±2 5 would each have to be on the main diagonal). The above theorems suggest a more general investigation of (0, 1)-matrices (equivalently, digraphs) all of whose eigenvalues are nonnegative. Note that, in particular, such matrices have all real eigenvalues but symmetry is not an assumption. In this general form, the problem seems intractable, so we make a stronger assumption [2].

90

10. DIGRAPHS AND EIGENVALUES OF (0, 1)-MATRICES

10.2. Totally Nonnegative Matrices An m × n matrix is totally nonnegative, abbreviated TN, provided each of its square submatrices has a nonnegative determinant. An n × n matrix is totally positive provided each of its square submatrices has a positive determinant. TN matrices are obviously nonnegative matrices. Two recent books on totally nonnegative matrices are [3, 5] but note that in [5] a totally nonnegative matrix is called totally positive in agreement with its historical introduction, and what we have called totally positive matrices are called strictly totally positive. The following is a basic theorem about TN matrices which explains the relevance of TN matrices in this chapter. Theorem 10.5. All the eigenvalues of a square TN matrix are nonnegative. If a matrix has all nonnegative eigenvalues, then so does every matrix obtained from it by simultaneous row and column permutations. But the property of being a TN matrix is not invariant under simultaneous row and column permutations. Also a matrix can have all nonnegative eigenvalues without being a TN matrix. Example 10.6. The matrix



⎤ 1 0 1 A1 = ⎣ 1 1 0 ⎦ 1 1 0

has eigenvalues 0, 0, 2 but it is not totally nonnegative, since the 2×2 submatrix in the upper right corner has determinant equal to −1. It is also easy to check that the rows and columns of A1 cannot be simultaneously permuted to a TN matrix. Thus it is not true that a (0, 1)-matrix has all nonnegative eigenvalues if and only if its rows and columns can be simultaneously permuted to a TN matrix; no such luck! The matrix ⎡ ⎤ 1 1 0 A2 = ⎣ 1 1 1 ⎦ 1 1 1 √

is easily seen to be TN and has nonnegative eigenvalues 0, 3±2 5 . Simultaneously permuting rows and column by interchanging rows 2 and 3, and columns 2 and 3, results in the matrix ⎡ ⎤ 1 0 1 ⎣ 1 1 1 ⎦, 1 1 1 which is not TN, since the 2 × 2 submatrix in the upper right corner has determinant equal to −1. An important property of the pattern of a TN matrix is due to de Boor and Pinkus [1].

10.3. TOTALLY NONNEGATIVE (0, 1)-MATRICES

91

Theorem 10.7. Let A be an m × n TN matrix with no zero row or zero column. Then the positive entries in each row and and in each column occur consecutively. In addition, the last (respectively, first) positive entry in a row does not occur to the left of the last positive entry in the preceding row. A similar conclusion holds for columns. Proof. The theorem asserts have a double staircase pattern as ⎡ + + ⎢ 0 + ⎢ ⎢ 0 + ⎢ ⎢ 0 0 ⎢ ⎣ 0 0 0 0

that the positive entries in a TN matrix in ⎤ + + 0 0 + + 0 0 ⎥ ⎥ + + + 0 ⎥ ⎥. + + + 0 ⎥ ⎥ 0 + + + ⎦ 0 0 + +

Since A is a TN matrix it cannot have any 2 × 2 submatrix of the forms       0 + + + 0 + , , and . + 0 + 0 + + For later reference, we note that this is the only property of a TN matrix that we use. Consider a 0 in some row: ⎤ ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣

α β

0 δ

γ

⎥ ⎥ ⎥ ⎥ ⎥. ⎥ ⎥ ⎥ ⎦

Not both α and β can contain a +, and not both γ and δ can contain a +. So α is a zero column or β is a zero row, and δ is a zero column or γ is a zero row. Since A has no zero rows or zero columns, not both β and γ can be zero rows and not both α and γ can be zero columns. So α is a zero column and γ is a zero row, or β is a zero row and γ is a zero column. Thus the positive entries occur consecutively in each row and column. The other conclusions of the theorem now follow from the restriction on 2 × 2 submatrices as given above. 

10.3. Totally Nonnegative (0, 1)-matrices By Theorem 10.7, to determine whether or not a (0, 1)-matrix, without zero rows or zero columns, is TN, we have to determine whether or not a

92

10. DIGRAPHS AND EIGENVALUES OF (0, 1)-MATRICES

(0, 1)-matrix with a double staircase pattern is TN. For example, ⎡ ⎤ 1 1 0 0 0 0 ⎢ 1 1 1 1 0 0 ⎥ ⎢ ⎥ ⎢ 1 1 1 1 1 0 ⎥ ⎢ ⎥ ⎢ 0 0 1 1 1 0 ⎥ ⎢ ⎥ ⎣ 0 0 0 1 1 1 ⎦ 0 0 0 0 1 1 has the double staircase pattern, but it is not TN because it has a 3 × 3 submatrix (in rows 2, 3, 5 and columns 3, 4, 5) ⎡ ⎤ 1 1 0 F3 = ⎣ 1 1 1 ⎦ 0 1 1 whose determinant equals −1. The 2 × 2 matrices identified in the proof of Theorem 10.7 and the 3 × 3 matrix F3 suffice to characterize TN (0, 1)matrices without zero rows and columns [2]. Theorem 10.8. Let A be an m × n (0, 1)-matrix without zero rows and columns. Then A is TN if and only if A does not have a submatrix equal to one of ⎡ ⎤       1 1 0 0 1 1 1 0 1 (10.4) , , , and F3 = ⎣ 1 1 1 ⎦ . 1 0 1 0 1 1 0 1 1 Proof. We have already observed that each of the four matrices in (10.4) has a negative determinant, and thus cannot be a submatrix of a TN matrix. Assume that A = [aij ] does not have a zero row or a zero column and does not have a submatrix equal to any of the matrices in (10.4). By Theorem 10.7 (see the note in its proof), we may assume that A has a double staircase pattern. We argue inductively that A is a TN matrix. If m = 2, this is easily verified. Now assume that m ≥ 3. If there are only 1s in column 1, then each column of A has all its 0s above its 1s; it is then easy to check that the determinant of each square submatrix of A equals 0 or 1. A similar conclusion holds if there are only 1s in row 1. We now assume that there is a 0 in column 1 and a 0 in row 1. If either row 1 or column 1 contains exactly one 1, then that 1 is in position (1, 1) and the conclusion follows by induction. Thus we now assume that a12 = a21 = 1. Since A has a double staircase pattern we also have a22 = 1. Let p ≥ 3 be defined by a11 = · · · = ap−1,1 = 1 and ap1 = · · · = am1 = 0. If ap2 = 0, then columns 1 and 2 are identical, and we can complete the proof by induction. Thus we now assume that ap2 = 1. If ap3 = 0, then the double staircase pattern implies that rows 1 and 2 are identical. and again we complete the proof by induction. So assume that ap3 = 1. Then ai3 = 1 for 2 ≤ i ≤ p, since otherwise rows 1 and 2 are identical. We also have a13 = 1, since otherwise A has a submatrix equal to F3 , Repeating this

BIBLIOGRAPHY

93

argument first with column 4 replacing column 3, we see that, since F3 is not a submatrix of A, we eventually are able to conclude that A has two identical rows, and then complete the proof by induction.  The four matrices in Theorem 10.8 are the forbidden submatrices for a (0, 1)-matrix without zero rows and columns to be TN. Additional properties of TN (0, 1)-matrices are established in [2], and we now summarize them. TN1. An n × n irreducible TN (0, 1)-matrix with n ≥ 2 has 0 as an eigenvalue of multiplicity at least  n2 . TN2. For n ≥ 2, let ⎡ ⎤ 1 1 0 0 ··· 0 ⎢ 1 1 1 0 ··· 0 ⎥ ⎢ ⎥ ⎢ 1 1 1 1 ··· 0 ⎥ ⎢ ⎥ Hn = ⎢ .. .. .. . . .. ⎥ ⎢ . . . . ··· . ⎥ ⎢ ⎥ ⎣ 1 1 1 1 ··· 1 ⎦ 1 1 1 1 ··· 1 be the n × n lower Hessenberg matrix with 1s on and below the main diagonal and with 1s on the super diagonal too. Then Hn is an irreducible TN matrix, the rank of Hn equals n − 1, and 0 is an eigenvalue of geometric multiplicity 1 but algebraic multiplicity  n2 . TN3. The minimum spectral radius of an n × n irreducible TN matrix 2π , and is attained by the matrix Hn . (The n × n is 2 + 2 cos n−2 irreducible TN (0, 1)-matrices with this spectral radius are characterized in [2]. In fact, not only do the have this same spectral radius, but they also have the same spectrum.) TN4. For n ≥ 2, the maximum number of 0s in an n × n irreducible TN (0, 1)-matrix equals (n − 2)2 . (The matrices with this number of 0s are characterized in [2].) Let us end with a question of H.S. Wilf: How many n × n (irreducible) (0, 1)-matrices have all of their eigenvalues nonnegative? Bibliography [1] C. de Boor and A, Pinkus, The approximation of a totally positive band matrix by a strictly banded totally positive one, Linear Algebra Appl. 42 (1982), 81–98. [2] R.A. Brualdi and S. Kirkland, Totally nonnegative (0, 1)-matrices, Linear Algebra Appl. 432 (2010), 1650–1652. [3] S. Fallat and C.R. Johnson, Totally Nonnegative Matrices, Princeton University Press, Princeton, 2011. [4] B.D. McKay, F. Oggier, G.F. Royle, N.J.A. Sloane, I.M. Wanless, and H.S. Wilf, Acyclic digraphs and eigenvalues of (0, 1)-matrices, J. Integer Seq., 7 (2004), Article 04.3.3 (electronic). [5] A. Pinkus, Totally positive matrices, Cambridge Tracts in Mathematics, 181, Cambridge University Press, Cambridge,2010.

Index

alternating sign matrix (ASM), 79
  pattern, 83
bipartite graph, 1, 78
  complete, 1
  eigenvalues, 15
  left vertex, 1
  right vertex, 1
diagonal product, 49, 61–64
  signed, 49
digraph, 49, 60, 87
  acyclic, 88
  cycle, 51
  eigenvalues, 87
  nonnegative, 88
  positive, 88
  strongly connected, 60
  weighted, 49
dimer problem, 53
equal union property, 54
graph, 13
  adjacency matrix, 13
    weighted, 33
  algebraic connectivity, 22
  alternating sign, 85
  biclique, 27
  biclique partition number, 27
    lower bound, 29
  chromatic number, 16
    lower bound, 18
    upper bound, 18
  clique, 17, 27
  Colin de Verdière number, 33–35
    complete graph, 35
    linklessly embeddable, 36
    outerplanar, 36
    path, 36
    planar, 36, 37
  complement, 17, 35
  connected, 14
  diameter, 15
  edge connectivity, 22
  eigenvalues, 13, 14
  independence number, 16
  Laplacian matrix, 21
  minor, 35
    forbidden, 35
    minor-closed, 35
    proper, 35
  multicolored, 30
    spanning forest, 30
    spanning tree, 30
  planar, 35
  regular, 17
  Smith, 20
  spanning tree number, 21
  vertex connectivity, 22
interchange, 44
  generalization, 47
  graph, 44
L-matrix, 53
Laplacian matrix, 21
lemniscate, 62
majorization, 42, 45, 69, 73
  Muirhead's lemma, 45
matrix
  absolute value, 61
  alternating sign, 79
    conjecture, 80
  ASM, 79
  biadjacency, 39
  classes, 39, 45, 69, 85
  co-rank, 34
  determinant, 21, 49
  diagonally dominant, 59, 63
  double staircase pattern, 91
  doubly stochastic, 77
  incidence, 39, 54
  irreducible, 6, 14, 60, 61
  L-matrix, 53
  permanent, 49, 52
  reducible, 6
  sign pattern, 50, 55
    potentially nilpotent, 56
    SAP, 55
    sign-nonsingular, 50
    SNS, 50, 51
    spectrally arbitrary, 55
  signing, 52
  simultaneous permutation, 2
  skew-symmetric, 28
    eigenvalues, 28
  symmetric, 7
    interlacing, 8
  TNN, 90
  totally nonnegative, 90
    characterization, 92
  totally positive, 90
  weakly diagonally dominant, 59, 61
matroid, 25, 43, 70
  basis, 25
  circuit, 25
  dependent set, 25
  independent set, 25
  rank, 25
  rank function, 25
  submodular inequality, 26
  uniform, 25
minor monotone, 35
optimal assignment problem (OAP), 63
  duality theorem, 64
Perron-Frobenius theory, 3
polytope
  alternating sign matrix, 81
    extreme points, 83
  doubly stochastic, 78
    extreme points, 78
qualitative class, 50
SDR, 26, 55
  independent, 26, 43
spectral radius, 1, 2
square ice, 80
Strong Arnold Property, 34
theorem
  Ao-Hanson-Guiduli-Gyárfás-Thomassé-Weidl, 72
  Bassett-Maybee-Quirk, 51
  Birkhoff, 78
  Brauer-Gentry, 74
  Brualdi-Kirkland, 88
  Camion-Hoffman, 65
  Colin de Verdière, 35
  de Boor-Pinkus, 91
  Gale-Ryser, 41, 70, 72, 73
  Geršgorin, 60
  Graham-Pollak, 27
  Kasteleyn, 53
  Kuratowski, 35
  Landau, 69, 73
  Lindström-Tverberg, 54
  McKay-Oggier-Royle-Sloane-Wanless-Wilf, 88
  Rado, 42
  Rado-Hall, 26, 27, 55, 70, 78
  Robertson-Seymour, 35
  Schwarz, 3
  Striker-Behrend-Knight, 81
  Taussky, 61
  Zeilberger, 80
tournament, 67
  loss, 67
  loss vector, 67
  nearly regular, 74
  regular, 74
  score, 67
  score vector, 67
  transitive, 69, 71, 72
tournament matrix, 68
  eigenvalues, 74
  spectral radius, 75
transfer, 45
vector
  support, 36
    negative, 36
    positive, 36


Graphs and matrices enjoy a fascinating and mutually beneficial relationship. This interplay has benefited both graph theory and linear algebra. In one direction, knowledge about one of the graphs that can be associated with a matrix can be used to illuminate matrix properties and to get better information about the matrix. Examples include the use of digraphs to obtain strong results on diagonal dominance and eigenvalue inclusion regions and the use of the Rado-Hall theorem to deduce properties of special classes of matrices. Going the other way, linear algebraic properties of one of the matrices associated with a graph can be used to obtain useful combinatorial information about the graph. The adjacency matrix and the Laplacian matrix are two well-known matrices associated to a graph, and their eigenvalues encode important information about the graph. Another important linear algebraic invariant associated with a graph is the Colin de Verdière number, which, for instance, characterizes certain topological properties of the graph. This book is not a comprehensive study of graphs and matrices. The particular content of the lectures was chosen for its accessibility, beauty, and current relevance, and for the possibility of enticing the audience to want to learn more.
