Current Natural Sciences
Changjiang BU, Lizhu SUN and Yimin WEI
Sign Pattern for Generalized Inverses
Printed in France
EDP Sciences – ISBN(print): 978-2-7598-2599-8 – ISBN(ebook): 978-2-7598-2600-1 – DOI: 10.1051/978-2-7598-2599-8

All rights relative to translation, adaptation and reproduction by any means whatsoever are reserved, worldwide. In accordance with the terms of paragraphs 2 and 3 of Article 41 of the French Act dated March 11, 1957, "copies or reproductions reserved strictly for private use and not intended for collective use" and, on the other hand, analyses and short quotations for example or illustrative purposes, are allowed. Otherwise, "any representation or reproduction – whether in full or in part – without the consent of the author or of his successors or assigns, is unlawful" (Article 40, paragraph 1). Any representation or reproduction, by any means whatsoever, will therefore be deemed an infringement of copyright punishable under Articles 425 and following of the French Penal Code. The printed edition is not for sale in Chinese mainland. Customers in Chinese mainland please order the print book from Science Press. ISBN of the China edition: Science Press 978-7-03-068568-1

© Science Press, EDP Sciences, 2021
Preface
Generalized inverses have wide applications in science and engineering. The sign pattern of generalized inverses has important theoretical significance and application value in the qualitative analysis of systems and in combinatorial matrix theory. P.A. Samuelson, a Nobel Prize winner in economics, reduced the qualitative analysis of an economic model to the sign-solvability of a linear system, which initiated the study of the sign patterns of matrices. Later, R.A. Brualdi, an AMS fellow, and B.L. Shader, a SIAM fellow, systematically investigated the sign pattern of the inverse of a matrix, which established the basic theory of the sign pattern of generalized inverses. From 1995 to 2003, B.L. Shader, Jia-Yu Shao and other scholars studied the sign pattern of the Moore–Penrose inverse of matrices and obtained a series of important results. In 2010, P. van den Driessche, M. Catral and D.D. Olesky gave the group inverses of the adjacency matrices of a class of broom graphs, and proved that the sign pattern of these group inverses is unique. In 2011, we proposed the concepts of the sign group invertible matrix, the strongly sign group invertible matrix and the matrix with unique sign Drazin inverse, and established many new characterization results. Furthermore, we extended the research to the ray pattern of complex matrices, and characterized the matrices with ray Moore–Penrose inverse and ray Drazin inverse. We also initiated the study of the sign pattern of tensors. The sign pattern of generalized inverses is not only an extension and development of the sign pattern of matrices but also a new frontier topic; many groundbreaking research problems remain, and new ideas are needed to solve them.
This monograph introduces the technical methods for characterizing the sign patterns of generalized inverses and reviews the newest research on this topic. It can be used as a reference or textbook for researchers and graduate students in matrix theory, combinatorics, algebra, etc.
DOI: 10.1051/978-2-7598-2599-8.c901 © Science Press, EDP Sciences, 2021
The authors are grateful to their supervisors Professors Chongguang Cao, Baodong Zheng and Zhihao Cao for their strong support and long-term guidance. They thank Professors Jia-Yu Shao, Jipu Ma, Jiguang Sun, Erxiong Jiang, Jiaoxun Kuang, Hongke Du, Boyin Wang, Gong-ning Chen, Hong You, Guorong Wang, Yonglin Chen, Musheng Wei, Guoliang Chen, Wenyu Sun, Shufang Xu, Zhongxiao Jia, Xinguo Liu, Liping Huang, Anping Liao, Jianzhou Liu, Yongzhong Song, Hua Dai, Tingzhu Huang, Wen Li, Chuanlong Wang, Xingzhi Zhan, Yifeng Xue, An Chang, Xiying Yuan, Chunyuan Deng, Jianlong Chen, Zhengke Miao, Xiaodong Zhang, Yongjian Hu, Zhenghong Yang, Dongxiu Xie, Yuwen Wang, Qingwen Wang, Qingxiang Xu, Bin Zheng, Xiaoji Liu, Hanyu Li, Xian Zhang, Xiaomin Tang, Erfang Shan, Haiying Shan, Qianglian Huang, Haifeng Ma, Lihua You, Zhaoliang Xu, A. Ben-Israel, N. Castro González, M. Catral, D. Cvetković Ilić, D. Djordjević, R. Hartwig, J. Koliha, C. Meyer, M. Nashed, P. Stanimirović, N. Thome, V. Rakocević, Liqun Qi, Sanzheng Qiao, P. Wedin, H. Werner, Zi-cai Li, Chi-Kwong Li, Fuzhen Zhang, Jiu Ding, Rencang Li, Zhongshan Li, Xiaoqing Jin, Li Qiu, Xiezhang Li, Jun Ji, Xuzhou Chen, Jianming Miao, Yongge Tian and others. The authors would like to thank Professor Eric King-wah Chu, who read this book carefully and provided valuable comments and suggestions. This work was supported by the National Natural Science Foundation of China under grants 11371109, 11771099, 11801115, 12071097 and 12042103, the Natural Science Foundation of Heilongjiang Province under grant QC2018002, the Fundamental Research Funds for the Central Universities, the Innovation Program of Shanghai Municipal Education Committee, the Shanghai Key Laboratory of Contemporary Applied Mathematics, and the Key Laboratory of Mathematics for Nonlinear Sciences of Fudan University.
Notations

Unless otherwise stated, the following notations will be used in the book.

$\mathbb{N}$: set of positive integers
$[n]$: $[n] = \{1, 2, \ldots, n\}$, $n \in \mathbb{N}$
$I$: identity matrix
$I_n$: $n \times n$ identity matrix
$A^D$: Drazin inverse of matrix $A$
$A^{\pi}$: $A^{\pi} = I - AA^D$
$A^{\#}$: group inverse of matrix $A$
$A^{(1)}$: a $\{1\}$-inverse of matrix $A$
$A^{+}$: Moore–Penrose inverse of matrix $A$
$A^{\Pi}$: $A^{\Pi} = I - AA^{+}$
$A^{*}$: conjugate transpose of matrix $A$
$A^{\top}$: transpose of matrix $A$
$\mathrm{tr}(A)$: trace of square matrix $A$
$\mathrm{rank}(A)$: rank of matrix $A$
$\rho(A)$: term rank of matrix $A$
$\mathrm{ind}(A)$: Drazin index of matrix $A$
$R(A)$: range of matrix $A$
$N(A)$: null space of matrix $A$
$Q(A)$: sign pattern class of matrix $A$
$A[\alpha, \beta]$: submatrix of matrix $A$ with row index set $\alpha$ and column index set $\beta$
$A[:, \beta]$: submatrix of matrix $A$ formed by all rows of $A$ and the columns indexed by $\beta$
$A[\alpha, :]$: submatrix of matrix $A$ formed by all columns of $A$ and the rows indexed by $\alpha$
$A(\alpha, \beta)$: submatrix of matrix $A$ obtained by removing the rows in $\alpha$ and the columns in $\beta$
$(A)_{ij}$: $(i,j)$-entry of matrix $A$
$\det(A)$: determinant of matrix $A$
$\mathrm{sgn}(A)$: sign pattern of matrix $A$
$\mathrm{ray}(A)$: ray pattern of matrix $A$
$F^{m \times n}$: set of $m \times n$ matrices over a field $F$
$\mathbb{C}$: complex field
$\mathbb{R}$: real field
$\mathbb{K}$: skew field
$F^{n}$: set of $n$-dimensional vectors over $F$
$S_n(m)$: set of all subsets of $[m]$ with $n$ elements
$N(q, T)$: number of elements of a finite integer set $T$ that are less than or equal to $q \in T$
$N_r(A)$: number of rows of matrix $A$
$N_c(A)$: number of columns of matrix $A$
$F^{n_1 \times n_2 \times \cdots \times n_k}$: set of $n_1 \times n_2 \times \cdots \times n_k$-dimensional $k$-order tensors over $F$
$F_k^{[m,n]}$: set of $m \times n \times \cdots \times n$-dimensional $k$-order tensors over $F$
$A_R^{k}$: right inverse of the $k$-order tensor $A$
$A_L^{k}$: left inverse of the $k$-order tensor $A$
$A\{R_k\}$: set of right inverses of the $k$-order tensor $A$
$A\{L_k\}$: set of left inverses of the $k$-order tensor $A$
Contents

Preface ......................................................... III
Notations ....................................................... V

CHAPTER 1
Generalized Inverses ............................................ 1
1.1 Matrix Decompositions ....................................... 1
1.2 Moore–Penrose Inverse ....................................... 2
1.3 Drazin Inverse .............................................. 5
1.4 Group Inverse ............................................... 8
1.5 Generalized Inverses and System of Linear Equations ......... 12
1.6 Graph and Matrix ............................................ 14

CHAPTER 2
Generalized Inverses of Partitioned Matrices .................... 19
2.1 Drazin Inverse of Partitioned Matrices ...................... 19
2.2 Group Inverse of Partitioned Matrices ....................... 45
2.3 Additive Formulas for Drazin Inverse and Group Inverse ...... 71
2.4 Drazin Inverse Index for Partitioned Matrices ............... 96

CHAPTER 3
SNS and S2NS Matrices ........................................... 101
3.1 Sign-Solvability of Linear Equations ........................ 101
3.2 Characterizations for SNS and S2NS Matrices via Digraphs .... 108
3.3 Ray Nonsingular and Ray S2NS Matrices ....................... 116

CHAPTER 4
Sign Pattern for Moore–Penrose Inverse .......................... 123
4.1 Least Squares Sign-Solvability .............................. 123
4.2 Matrices with Signed Moore–Penrose Inverse .................. 125
4.3 Triangular Partitioned Matrices with Signed Moore–Penrose Inverse ... 138
4.4 Ray Pattern for Moore–Penrose Inverse ....................... 143

CHAPTER 5
Sign Pattern for Drazin Inverse ................................. 149
5.1 Matrices with Signed Drazin Inverse ......................... 149
5.2 Upper Triangular Partitioned Matrices with Signed Drazin Inverse ... 151
5.3 Anti-Triangular Partitioned Matrices with Signed Drazin Inverse ... 161
5.4 Bipartite Matrices with Signed Drazin Inverse ............... 171
5.5 Sign Pattern of Group Inverse ............................... 176
5.6 Ray Pattern of Drazin Inverse ............................... 187

CHAPTER 6
Sign Pattern for Tensors ........................................ 197
6.1 Tensors ..................................................... 197
6.2 Inverse of Tensors .......................................... 200
6.3 Minimum and Maximum Rank of Sign Pattern Tensors ............ 204
6.4 Sign Nonsingular Tensors .................................... 208

References ...................................................... 215
Chapter 1
Generalized Inverses

Generalized inverses have wide applications in many fields such as numerical analysis, cryptography, operations research, probability and statistics, combinatorics, optimization, astronomy, earth sciences, managerial economics, and various engineering sciences [50, 56, 64, 205]. Different research problems have given rise to many types of generalized inverses, for example, the Moore–Penrose inverse, the $\{1\}$-inverse, the $\{2\}$-inverse, the Drazin inverse, the group inverse and the Bott–Duffin inverse. There are several monographs on generalized inverses [5, 8, 34, 60, 80, 96, 104, 120, 158, 164, 193, 194, 196, 200, 202, 207]. Other related research on perturbation analysis and preservation of generalized inverses can be found in [2, 98, 117, 186, 213, 214, 224]. This chapter contains the basic concepts and tools necessary to follow the results presented in the forthcoming chapters. We cover some standard matrix-theoretic tools and various graph notions and invariants.
1.1
Matrix Decompositions
In this section, we list some common matrix decompositions in matrix theory. These decompositions help the reader better understand the properties of generalized inverses of matrices. Let $\mathbb{K}^{m \times n}$, $\mathbb{C}^{m \times n}$ and $\mathbb{R}^{m \times n}$ denote the sets of all $m \times n$ matrices over a skew field $\mathbb{K}$, the complex field $\mathbb{C}$ and the real field $\mathbb{R}$, respectively. Let $\mathbb{K}^n$, $\mathbb{C}^n$ and $\mathbb{R}^n$ denote the sets of $n$-dimensional vectors over $\mathbb{K}$, $\mathbb{C}$ and $\mathbb{R}$, respectively.

Theorem 1.1 [232]. Let $A \in \mathbb{K}^{m \times n}$ with $\mathrm{rank}(A) = r$. Then there exist invertible matrices $P$ and $Q$ such that
\[ A = P \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} Q, \]
where $I_r$ denotes the identity matrix of order $r$. The decomposition of theorem 1.1 is called the equivalent decomposition of $A$.
DOI: 10.1051/978-2-7598-2599-8.c001 © Science Press, EDP Sciences, 2021
Theorem 1.2 [39, 232]. Let $A \in \mathbb{K}^{n \times n}$. Then there exists an invertible matrix $P$ such that
\[ A = P \begin{pmatrix} D & 0 \\ 0 & N \end{pmatrix} P^{-1}, \]
where $D$ is an invertible matrix and $N$ is a nilpotent matrix. The decomposition of theorem 1.2 is called the core-nilpotent decomposition of $A$.

Next, we introduce the full-rank factorization.

Theorem 1.3 [8]. Let $A \in \mathbb{K}^{m \times n}$ with $\mathrm{rank}(A) = r$. Then there exist a full column rank matrix $B \in \mathbb{K}^{m \times r}$ and a full row rank matrix $C \in \mathbb{K}^{r \times n}$ such that $A = BC$.

Proof. Theorem 1.1 gives invertible matrices $P$ and $Q$ such that
\[ A = P \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} Q = P \begin{pmatrix} I_r \\ 0 \end{pmatrix} \begin{pmatrix} I_r & 0 \end{pmatrix} Q. \]
Take $B = P \begin{pmatrix} I_r \\ 0 \end{pmatrix}$ and $C = \begin{pmatrix} I_r & 0 \end{pmatrix} Q$. Then $A = BC$ is a full-rank factorization of $A$. ■

For $A \in \mathbb{C}^{m \times n}$, $A^{\top}$ and $A^{*}$ denote the transpose and conjugate transpose of $A$, respectively. The matrices $AA^{*}$ and $A^{*}A$ are both positive semidefinite, and they have the same nonzero eigenvalues; let $\lambda_1, \lambda_2, \ldots, \lambda_r$ be the nonzero eigenvalues of $AA^{*}$, where $r = \mathrm{rank}(A)$. Then $\sqrt{\lambda_1}, \sqrt{\lambda_2}, \ldots, \sqrt{\lambda_r}$ are the singular values of $A$. Next is the singular value decomposition of $A$.

Theorem 1.4 [94, 132, 212]. Let $A \in \mathbb{C}^{m \times n}$. Then there exist unitary matrices $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ such that
\[ A = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^{*}, \]
where $D$ is a diagonal matrix whose diagonal elements are the singular values of $A$.
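The decompositions above can be illustrated numerically. The following sketch (an added illustration assuming Python with NumPy; the function name is ours, not the book's notation) obtains a full-rank factorization in the sense of theorem 1.3 from the singular value decomposition of theorem 1.4:

```python
import numpy as np

def full_rank_factorization(A, tol=1e-10):
    """Full-rank factorization A = B C via the SVD A = U diag(s) V*:
    take B = U_r diag(s_r) (full column rank) and C = V_r* (full row rank)."""
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))          # numerical rank of A
    B = U[:, :r] * s[:r]              # m x r, full column rank
    C = Vh[:r, :]                     # r x n, full row rank
    return B, C

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])          # rank(A) = 2
B, C = full_rank_factorization(A)
assert np.allclose(B @ C, A)
assert np.linalg.matrix_rank(B) == B.shape[1] == 2
assert np.linalg.matrix_rank(C) == C.shape[0] == 2
```

Here $B$ and $C$ inherit their full column and row rank from the positivity of the retained singular values.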
1.2
Moore–Penrose Inverse
The concept of generalized inverses was first introduced by I. Fredholm [92] in 1903, for a particular generalized inverse of an integral operator, called the pseudoinverse. In 1912, W.A. Hurwitz [111] gave a simple algebraic construction of the pseudoinverse using the finite dimensionality of the null spaces of Fredholm operators. Generalized inverses of differential operators appeared in D. Hilbert's discussion of generalized Green's functions in 1904 [107]. The generalized inverse of a matrix was first introduced in 1920 by E.H. Moore [156], a member of the US National Academy of Sciences, in the Bulletin of the American Mathematical Society, where a generalized inverse is defined using
projectors of matrices. In 1955, R. Penrose OM FRS, a foreign member of the US National Academy of Sciences and a 2020 Nobel laureate in Physics, gave an equivalent definition using matrix equations [159]. The following definition of the generalized inverse is due to R. Penrose.

Definition 1.1 [5]. Let $A \in \mathbb{C}^{m \times n}$. If a matrix $X \in \mathbb{C}^{n \times m}$ satisfies the following four equations
\[ AXA = A, \quad XAX = X, \quad (AX)^{*} = AX, \quad (XA)^{*} = XA, \]
then $X$ is called the Moore–Penrose inverse of $A$ (abbreviated as the M–P inverse), denoted by $A^{+}$.

Obviously, if $A \in \mathbb{C}^{n \times n}$ is nonsingular, then $A^{-1} = A^{+}$. The following theorem shows that the M–P inverse exists and is unique.

Theorem 1.5 [5]. For a matrix $A \in \mathbb{C}^{m \times n}$, $A^{+}$ exists and is unique.

Proof. By the singular value decomposition of matrices, there exist unitary matrices $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ such that $A = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^{*}$, where $D$ is a nonsingular positive diagonal matrix. Let $X = V \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^{*}$. It can be verified that $X$ is the M–P inverse of $A$:
\[ AXA = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^{*} = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^{*} = A, \]
\[ XAX = V \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^{*} = V \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^{*} = X, \]
\[ (AX)^{*} = \left( U \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} U^{*} \right)^{*} = U \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} U^{*} = AX, \qquad (XA)^{*} = \left( V \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} V^{*} \right)^{*} = V \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} V^{*} = XA. \]
Therefore, $A^{+}$ always exists. The uniqueness of $A^{+}$ is proved as follows. If $X_1$ and $X_2$ are both M–P inverses of $A$, then
\begin{align*}
X_1 &= X_1AX_1 = X_1(AX_2A)X_1 = X_1(AX_2)(AX_1) = X_1(AX_2)^{*}(AX_1)^{*} \\
    &= X_1(AX_1AX_2)^{*} = X_1(AX_2)^{*} = X_1AX_2 = X_1AX_2AX_2 \\
    &= (X_1A)^{*}(X_2A)^{*}X_2 = (X_2AX_1A)^{*}X_2 = (X_2A)^{*}X_2 = X_2AX_2 = X_2.
\end{align*}
Therefore, $A^{+}$ exists and is unique. ■

The following theorem is observed from the proof of theorem 1.5.
Theorem 1.6 [5, 34]. Let $A \in \mathbb{C}^{m \times n}$ with singular value decomposition $A = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^{*}$, where $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ are unitary matrices and $D$ is a nonsingular positive diagonal matrix. Then
\[ A^{+} = V \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^{*}. \]
The next theorem gives a formula for the M–P inverse using the full-rank factorization.

Theorem 1.7 [193]. Let $A \in \mathbb{C}^{m \times n}$ with full-rank factorization $A = BC$. Then
\[ A^{+} = C^{*}(CC^{*})^{-1}(B^{*}B)^{-1}B^{*}. \]

Proof. Since $\mathrm{rank}(B^{*}B) = \mathrm{rank}(B)$, the matrix $B^{*}B$ is nonsingular. Similarly, $CC^{*}$ is nonsingular. Let $X = C^{*}(CC^{*})^{-1}(B^{*}B)^{-1}B^{*}$. Substituting $X$ into the equations of definition 1.1, we obtain $X = A^{+}$. ■
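The formula of theorem 1.7 is directly computable. A small sketch (assuming NumPy; `pinv_via_full_rank` is an illustrative name of ours) builds $A^{+}$ from a full-rank factorization and checks the four Penrose equations of definition 1.1:

```python
import numpy as np

def pinv_via_full_rank(B, C):
    """M-P inverse from a full-rank factorization A = B C:
    A^+ = C* (C C*)^{-1} (B* B)^{-1} B*."""
    Bh, Ch = B.conj().T, C.conj().T
    return Ch @ np.linalg.inv(C @ Ch) @ np.linalg.inv(Bh @ B) @ Bh

# A = B C with B of full column rank and C of full row rank
B = np.array([[1., 0.], [2., 1.], [0., 1.]])
C = np.array([[1., 1., 0., 2.], [0., 1., 1., 0.]])
A = B @ C
X = pinv_via_full_rank(B, C)

# the four Penrose equations
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
assert np.allclose((A @ X).conj().T, A @ X)
assert np.allclose((X @ A).conj().T, X @ A)
```

By the uniqueness in theorem 1.5, `X` necessarily agrees with any other method of computing $A^{+}$, such as `np.linalg.pinv(A)`.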
From the above theorem, we can obtain the following corollary.

Corollary 1.1. If $A \in \mathbb{C}^{m \times n}$ is of full column rank, then $A^{+} = (A^{*}A)^{-1}A^{*}$. If $A \in \mathbb{C}^{m \times n}$ is of full row rank, then $A^{+} = A^{*}(AA^{*})^{-1}$.

We list some properties of the M–P inverse [5, 34].

Theorem 1.8. If $A \in \mathbb{C}^{m \times n}$, then
(1) $(A^{+})^{+} = A$;
(2) $(A^{*})^{+} = (A^{+})^{*}$;
(3) $(kA)^{+} = k^{-1}A^{+}$, $0 \neq k \in \mathbb{C}$;
(4) $(A^{*}A)^{+} = A^{+}(A^{*})^{+}$, $(AA^{*})^{+} = (A^{*})^{+}A^{+}$;
(5) $(PAQ)^{+} = Q^{*}A^{+}P^{*}$, where $P \in \mathbb{C}^{m \times m}$ and $Q \in \mathbb{C}^{n \times n}$ are unitary matrices;
(6) if $A$ is of full column rank, then $A^{+}A = I_n$; if $A$ is of full row rank, then $AA^{+} = I_m$.

Proof. We know $A \in \mathbb{C}^{m \times n}$ can be decomposed as $A = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^{*}$, where $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ are unitary matrices and $D$ is a nonsingular positive diagonal matrix. Theorem 1.6 gives $A^{+} = V \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^{*}$. Hence the statements (1)–(6) are established. ■
Theorem 1.9. Let $A \in \mathbb{C}^{m \times n}$. If $A = \begin{pmatrix} B & 0 \end{pmatrix}$, then $A^{+} = \begin{pmatrix} B^{+} \\ 0 \end{pmatrix}$. If $A = \begin{pmatrix} B \\ 0 \end{pmatrix}$, then $A^{+} = \begin{pmatrix} B^{+} & 0 \end{pmatrix}$.
Definition 1.2 [8]. Let $A \in \mathbb{K}^{m \times n}$. If $X \in \mathbb{K}^{n \times m}$ satisfies $AXA = A$, then $X$ is called a $\{1\}$-inverse of $A$, denoted by $A^{(1)}$.

The $\{1\}$-inverse of a matrix is not unique in general. For $A \in \mathbb{C}^{m \times n}$, $A^{+}$ is a $\{1\}$-inverse of $A$. If $A \in \mathbb{C}^{n \times n}$ is nonsingular, then $A^{(1)} = A^{-1}$. From the definition of the $\{1\}$-inverse, the following properties are obtained.

Theorem 1.10. If $A \in \mathbb{K}^{m \times n}$, then
(1) $\mathrm{rank}(A) \leqslant \mathrm{rank}(A^{(1)})$;
(2) $(PAQ)^{(1)} = Q^{-1}A^{(1)}P^{-1}$, where $P$ and $Q$ are invertible;
(3) $AA^{(1)}$ and $A^{(1)}A$ are both idempotent matrices.

A general representation of the $\{1\}$-inverses of a matrix is given by the equivalent decomposition of the matrix as follows.

Theorem 1.11. Let the equivalent decomposition of $A \in \mathbb{K}^{m \times n}$ be $A = P \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} Q$, where $P$ and $Q$ are invertible. Then
\[ M = Q^{-1} \begin{pmatrix} I & X \\ Y & Z \end{pmatrix} P^{-1} \]
is a $\{1\}$-inverse of $A$, where $X$, $Y$ and $Z$ are arbitrary compatible matrices. Moreover, every $\{1\}$-inverse of $A$ has a representation of the above form.

Proof. If $M = Q^{-1} \begin{pmatrix} M_1 & X \\ Y & Z \end{pmatrix} P^{-1}$ is a $\{1\}$-inverse of $A$, then
\[ AMA = P \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} QQ^{-1} \begin{pmatrix} M_1 & X \\ Y & Z \end{pmatrix} P^{-1}P \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} Q = P \begin{pmatrix} M_1 & 0 \\ 0 & 0 \end{pmatrix} Q = A. \]
Therefore, $M_1 = I$, while $X$, $Y$ and $Z$ are arbitrary. ■

From theorem 1.11, we know that $A^{(1)}$ always exists, and if $A$ is not invertible, then $A$ has infinitely many $\{1\}$-inverses.
1.3
Drazin Inverse
In 1958, M.P. Drazin [84] proposed the concept of the pseudo-inverse in the course of studying associative rings and semigroups. This pseudo-inverse is now called the Drazin inverse. In order to introduce the Drazin inverse of a matrix, we first present the Drazin index.
Definition 1.3 [8]. For $A \in \mathbb{K}^{n \times n}$, the smallest integer $k \geqslant 0$ such that $\mathrm{rank}(A^{k}) = \mathrm{rank}(A^{k+1})$ is called the Drazin index of $A$, denoted by $\mathrm{ind}(A)$.

Clearly, a square matrix $A$ is invertible if and only if $\mathrm{ind}(A) = 0$.

Definition 1.4 [8]. For a matrix $A \in \mathbb{K}^{n \times n}$ with $\mathrm{ind}(A) = k$, the matrix $X \in \mathbb{K}^{n \times n}$ satisfying
\[ A^{k}XA = A^{k}, \quad XAX = X, \quad AX = XA \]
is called the Drazin inverse of $A$, denoted by $A^{D}$.

Theorem 1.12. For $A \in \mathbb{K}^{n \times n}$, $A^{D}$ exists and is unique.

Proof. Let the core-nilpotent decomposition of $A \in \mathbb{K}^{n \times n}$ be $A = P \begin{pmatrix} D & 0 \\ 0 & N \end{pmatrix} P^{-1}$, where $D$ is invertible, $N^{k} = 0$ and $N^{k-1} \neq 0$. By the definition of the Drazin index, we have $k = \mathrm{ind}(A) = \mathrm{ind}(N)$. Let $X = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}$. We check that $X$ is the Drazin inverse of $A$:
\[ A^{k}XA = P \begin{pmatrix} D^{k} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} D & 0 \\ 0 & N \end{pmatrix} P^{-1} = P \begin{pmatrix} D^{k} & 0 \\ 0 & 0 \end{pmatrix} P^{-1} = A^{k}, \]
\[ XAX = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} D & 0 \\ 0 & N \end{pmatrix} \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1} = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1} = X, \]
\[ AX = P \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} P^{-1} = XA. \]
Therefore, $X$ is the Drazin inverse of $A$.

The Drazin inverse of a matrix is unique. Let both $X$ and $Y$ be Drazin inverses of $A$. Put $E = AX = XA$ and $F = AY = YA$; then $E^{2} = E$ and $F^{2} = F$. We have
\[ E = AX = A^{k}X^{k} = A^{k}YAX^{k} = AYA^{k}X^{k} = FA^{k}X^{k} = FE, \qquad F = YA = Y^{k}A^{k} = Y^{k}A^{k}XA = FE. \]
Then $E = F$. Thus,
\[ X = AX^{2} = EX = FX = YAX = YE = YF = Y^{2}A = AY^{2} = Y, \]
i.e., $A^{D}$ is unique. ■

By the proof of the above theorem, the following theorem is given.
Theorem 1.13. Let the core-nilpotent decomposition of $A \in \mathbb{K}^{n \times n}$ be $A = P \begin{pmatrix} D & 0 \\ 0 & N \end{pmatrix} P^{-1}$, where $D$ is invertible and $N$ is nilpotent. Then
\[ A^{D} = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}, \]
and $\mathrm{ind}(A) = \mathrm{ind}(N)$.

By theorem 1.13, it is easy to obtain the following equivalent definition of the Drazin inverse.

Definition 1.5. Let $A, X \in \mathbb{K}^{n \times n}$. If there exists an integer $l \geqslant \mathrm{ind}(A)$ such that
\[ A^{l}XA = A^{l}, \quad XAX = X, \quad AX = XA, \]
then $X$ is called the Drazin inverse of $A$.

The basic properties of the Drazin inverse are summarized in the following theorem [5, 34].

Theorem 1.14. Let $A \in \mathbb{K}^{n \times n}$. Then
(1) $(kA)^{D} = k^{-1}A^{D}$, $0 \neq k \in \mathbb{K}$;
(2) $(PAP^{-1})^{D} = PA^{D}P^{-1}$, where $P \in \mathbb{K}^{n \times n}$ is invertible;
(3) $(A^{D})^{r} = (A^{r})^{D}$;
(4) $A^{D} = 0$ if and only if $A$ is a nilpotent matrix.

Proof. Let the core-nilpotent decomposition of $A$ be $A = P \begin{pmatrix} D & 0 \\ 0 & N \end{pmatrix} P^{-1}$, where $D$ is invertible and $N$ is nilpotent. By theorem 1.13, we know $A^{D} = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}$. It is easy to prove that the statements (1)–(4) hold. ■

Theorem 1.15. Let $A \in \mathbb{K}^{n \times n}$. Then $A^{D} = A^{l}(A^{2l+1})^{(1)}A^{l}$, where $l \geqslant \mathrm{ind}(A)$.

Proof. Let the core-nilpotent decomposition of $A$ be $A = P \begin{pmatrix} D & 0 \\ 0 & N \end{pmatrix} P^{-1}$, where $D$ is invertible and $N$ is nilpotent. Since $l \geqslant \mathrm{ind}(A)$, we have $N^{l} = 0$, so
\[ A^{l} = P \begin{pmatrix} D^{l} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}, \qquad A^{2l+1} = P \begin{pmatrix} D^{2l+1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}, \]
and, by theorem 1.11,
\[ (A^{2l+1})^{(1)} = P \begin{pmatrix} D^{-(2l+1)} & X \\ Y & Z \end{pmatrix} P^{-1}, \]
where $X$, $Y$ and $Z$ are arbitrary compatible matrices. Then
\[ A^{l}(A^{2l+1})^{(1)}A^{l} = P \begin{pmatrix} D^{l} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} D^{-(2l+1)} & X \\ Y & Z \end{pmatrix} \begin{pmatrix} D^{l} & 0 \\ 0 & 0 \end{pmatrix} P^{-1} = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}. \]
It follows from theorem 1.13 that $A^{D} = A^{l}(A^{2l+1})^{(1)}A^{l}$. ■
The following theorem is a product formula for the Drazin inverse.

Theorem 1.16 [139]. For $A \in \mathbb{K}^{m \times n}$ and $B \in \mathbb{K}^{n \times m}$, $(AB)^{D} = A[(BA)^{2}]^{D}B$.

Proof. Let $X = A[(BA)^{2}]^{D}B$. We check that $X$ is the Drazin inverse of $AB$:
\[ ABX = ABA[(BA)^{D}]^{2}B = A(BA)^{D}B, \qquad XAB = A[(BA)^{D}]^{2}BAB = A(BA)^{D}B, \]
\[ XABX = A(BA)^{D}BA[(BA)^{2}]^{D}B = A[(BA)^{2}]^{D}B = X. \]
Let $k = \max\{\mathrm{ind}(AB), \mathrm{ind}(BA)\}$. Then
\[ (AB)^{k+2}X = (AB)^{k+1}ABA[(BA)^{2}]^{D}B = (AB)^{k+1}A(BA)^{D}B = A(BA)^{k+1}(BA)^{D}B = A(BA)^{k}B = (AB)^{k+1}. \]
It follows from definition 1.5 that $X = (AB)^{D}$. ■

Theorem 1.17 [103]. For $A \in \mathbb{K}^{m \times n}$ and $B \in \mathbb{K}^{n \times m}$, $|\mathrm{ind}(AB) - \mathrm{ind}(BA)| \leqslant 1$.

Proof. From the definition of the Drazin inverse, we know that $M^{p+1}M^{D} = M^{p}$ if and only if $p \geqslant \mathrm{ind}(M)$ for any matrix $M \in \mathbb{C}^{n \times n}$. Let $k = \mathrm{ind}(AB)$. By theorem 1.16, we have
\[ (BA)^{k+2}(BA)^{D} = (BA)^{k+2}B[(AB)^{2}]^{D}A = B(AB)^{k+2}[(AB)^{D}]^{2}A = B(AB)^{k}A = (BA)^{k+1}. \]
Therefore $k+1 \geqslant \mathrm{ind}(BA)$, i.e., $\mathrm{ind}(BA) - \mathrm{ind}(AB) \leqslant 1$. Similarly, we can obtain $\mathrm{ind}(AB) - \mathrm{ind}(BA) \leqslant 1$. Thus, $|\mathrm{ind}(AB) - \mathrm{ind}(BA)| \leqslant 1$. ■
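The formula of theorem 1.15 makes the Drazin inverse computable with only matrix powers and one $\{1\}$-inverse; since the M–P inverse is a $\{1\}$-inverse, it can serve here. A small sketch (assuming NumPy; the function names are ours):

```python
import numpy as np

def drazin_index(A, tol=1e-10):
    """Smallest k >= 0 with rank(A^k) = rank(A^{k+1}) (definition 1.3)."""
    n = A.shape[0]
    k, Ak = 0, np.eye(n)
    while np.linalg.matrix_rank(Ak, tol) != np.linalg.matrix_rank(Ak @ A, tol):
        k, Ak = k + 1, Ak @ A
    return k

def drazin_inverse(A):
    """A^D = A^l (A^{2l+1})^(1) A^l (theorem 1.15), taking the
    M-P inverse as the {1}-inverse of A^{2l+1}."""
    l = drazin_index(A)
    Al = np.linalg.matrix_power(A, l)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * l + 1)) @ Al

A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])          # ind(A) = 2
k = drazin_index(A)
AD = drazin_inverse(A)
Ak = np.linalg.matrix_power(A, k)
# the defining equations of definition 1.4
assert np.allclose(Ak @ AD @ A, Ak)
assert np.allclose(AD @ A @ AD, AD)
assert np.allclose(A @ AD, AD @ A)
```

The same routine can be used to check product identities such as theorem 1.16 on examples.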
1.4
Group Inverse
The Drazin inverse of a matrix with the Drazin index not more than 1 is the group inverse. Now, the definition of the group inverse is given.
Definition 1.6 [8]. Let $A \in \mathbb{K}^{n \times n}$. The matrix $X \in \mathbb{K}^{n \times n}$ satisfying
\[ AXA = A, \quad XAX = X, \quad AX = XA \]
is called the group inverse of $A$, denoted by $A^{\#}$. If $A^{\#}$ exists, then we say $A$ is group invertible.

Next, we address the existence and uniqueness of the group inverse.

Theorem 1.18 [8, 85]. For $A \in \mathbb{K}^{n \times n}$, $A^{\#}$ exists if and only if $\mathrm{rank}(A) = \mathrm{rank}(A^{2})$. When $A^{\#}$ exists, it is unique.

For a matrix $A \in \mathbb{K}^{n \times n}$ with $\mathrm{ind}(A) \leqslant 1$, the core-nilpotent decomposition gives $A = P \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} P^{-1}$, and then $A^{\#} = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}$.

The basic properties of the group inverse are summarized in the following theorem [5, 34].

Theorem 1.19. Let $A \in \mathbb{K}^{n \times n}$. If $A^{\#}$ exists, then
(1) $(kA)^{\#} = k^{-1}A^{\#}$, $0 \neq k \in \mathbb{K}$;
(2) $(PAP^{-1})^{\#} = PA^{\#}P^{-1}$, where $P \in \mathbb{K}^{n \times n}$ is nonsingular;
(3) $(A^{\#})^{r} = (A^{r})^{\#}$, where $r > 0$ is an integer;
(4) $A^{\#} = 0$ if and only if $A = 0$;
(5) $\mathrm{rank}(A) = \mathrm{rank}(A^{\#})$.

Proof. If $A$ is group invertible, then there exist nonsingular matrices $Q$ and $D$ such that $A = Q \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} Q^{-1}$. Thus,
\[ A^{\#} = Q \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} Q^{-1}. \]
It is easy to check that the statements (1)–(5) hold. ■

Theorem 1.20. For $A \in \mathbb{K}^{n \times n}$,
(1) if $k$ is the smallest positive integer such that $(A^{k})^{\#}$ exists, then $k = \mathrm{ind}(A)$;
(2) for any integer $l \geqslant \mathrm{ind}(A)$, $(A^{l})^{\#}$ exists;
(3) $A^{D} = (A^{l})^{\#}A^{l-1}$, where $l$ is a positive integer with $l \geqslant \mathrm{ind}(A)$.

Proof. Let the core-nilpotent decomposition of $A$ be $A = P \begin{pmatrix} D & 0 \\ 0 & N \end{pmatrix} P^{-1}$, where $D$ is invertible and $N$ is nilpotent. Then $N^{l} = 0$ if and only if $l \geqslant \mathrm{ind}(A)$, and
\[ A^{D} = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}. \]
It is easy to check that the statements (1)–(3) hold. ■

The following formulas are used to calculate group inverses.

Theorem 1.21. Let $A \in \mathbb{K}^{n \times n}$. If $A$ is group invertible, then $A^{\#} = A(A^{3})^{(1)}A$.

Proof. If $A$ is group invertible, then there exist invertible matrices $P$ and $D$ such that $A = P \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} P^{-1}$. Then
\[ A^{\#} = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1}, \qquad (A^{3})^{(1)} = P \begin{pmatrix} D^{-3} & X \\ Y & Z \end{pmatrix} P^{-1}, \]
where $X$, $Y$ and $Z$ are arbitrary compatible matrices. It is easy to observe that
\[ A(A^{3})^{(1)}A = P \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} D^{-3} & X \\ Y & Z \end{pmatrix} \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} P^{-1} = P \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^{-1} = A^{\#}. \] ■

Theorem 1.22. Let $A = BC$ be a full-rank factorization of $A \in \mathbb{K}^{n \times n}$. Then $A^{\#}$ exists if and only if $CB$ is invertible. In this case,
\[ A^{\#} = B((CB)^{-1})^{2}C. \]

Proof. If $\mathrm{rank}(A) = r$, then $CB \in \mathbb{K}^{r \times r}$. Since $B$ is of full column rank and $C$ is of full row rank, we have
\[ \mathrm{rank}(A^{2}) = \mathrm{rank}(BCBC) = \mathrm{rank}(CB). \]
Then theorem 1.18 gives that $A^{\#}$ exists if and only if $CB$ is invertible. If $A^{\#}$ exists, it follows from theorem 1.16 that $A^{\#} = B[(CB)^{-1}]^{2}C$. ■

Theorem 1.23. Let $A \in \mathbb{K}^{m \times n}$ and $B \in \mathbb{K}^{n \times m}$. If $(AB)^{\#}$ exists, then $[(BA)^{2}]^{\#}$ exists and
\[ (AB)^{\#} = A[(BA)^{2}]^{\#}B. \]
Proof. If $(AB)^{\#}$ exists, then it follows from theorem 1.17 that
\[ \mathrm{ind}(BA) \leqslant \mathrm{ind}(AB) + 1 \leqslant 2, \qquad \mathrm{ind}[(BA)^{2}] \leqslant 1. \]
By theorem 1.16, we have $(AB)^{\#} = A[(BA)^{2}]^{\#}B$. ■

If $[(BA)^{2}]^{\#}$ exists, $(AB)^{\#}$ does not necessarily exist. For example, let $A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$; then $AB = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $(BA)^{2} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$. Then $(BA)^{2}$ is group invertible, but $AB$ is not.

The following theorem gives an equivalent definition of the group inverse.

Theorem 1.24. For a matrix $A \in \mathbb{K}^{n \times n}$, $X$ is the group inverse of $A$ if and only if
\[ XA^{2} = A, \qquad X^{2}A = X. \]

Proof. If $XA^{2} = A$ and $X^{2}A = X$, then
\[ \mathrm{rank}(A) = \mathrm{rank}(XA^{2}) \leqslant \mathrm{rank}(A^{2}) \leqslant \mathrm{rank}(A), \]
so $\mathrm{rank}(A) = \mathrm{rank}(A^{2})$ and $A^{\#}$ exists. Hence
\[ XA = XA^{2}A^{\#} = AA^{\#}, \qquad X = X^{2}A = X(XA) = XAA^{\#} = AA^{\#}A^{\#} = A^{\#}. \]
If $X = A^{\#}$, then
\[ XA^{2} = A^{\#}A^{2} = A, \qquad X^{2}A = A^{\#}A^{\#}A = A^{\#} = X. \] ■
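Theorem 1.22 yields a simple computational route to $A^{\#}$. The sketch below (assuming NumPy; the SVD is used only to produce one full-rank factorization, and the function name is ours) implements it:

```python
import numpy as np

def group_inverse(A, tol=1e-10):
    """Group inverse via a full-rank factorization A = B C (theorem 1.22):
    A^# exists iff C B is invertible, and then A^# = B ((C B)^{-1})^2 C."""
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))
    B = U[:, :r] * s[:r]              # full column rank
    C = Vh[:r, :]                     # full row rank
    CB = C @ B
    if abs(np.linalg.det(CB)) < tol:  # rank(A) != rank(A^2)
        raise ValueError("A is not group invertible")
    W = np.linalg.inv(CB)
    return B @ W @ W @ C

A = np.array([[2., 0., 0.],
              [0., 3., 0.],
              [0., 0., 0.]])
X = group_inverse(A)
# the defining equations of definition 1.6
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
assert np.allclose(A @ X, X @ A)
```

For the nilpotent matrix $AB$ of the counterexample above, the same routine raises an error, since $CB$ is singular.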
Let $R(A)$ and $N(A)$ be the range and null space of $A$, respectively.

Definition 1.7 [5, 193]. For a matrix $A \in \mathbb{C}^{m \times n}$, a matrix $X \in \mathbb{C}^{n \times m}$ satisfying $XAX = X$ is called a $\{2\}$-inverse of $A$. If $X$ is a $\{2\}$-inverse of $A$ with $R(X) = T$ and $N(X) = S$, then $X$ is called the $\{2\}$-inverse of $A$ with prescribed range $T$ and null space $S$, denoted by $A^{(2)}_{T,S}$.

For specified $T$ and $S$, $A^{(2)}_{T,S}$ does not necessarily exist; the conditions for its existence are given in [5]. In 1998, Y. Wei established an algebraic expression for $A^{(2)}_{T,S}$ in terms of the group inverse.

Theorem 1.25 [203]. For a matrix $A \in \mathbb{C}^{m \times n}$, let $T$ be a subspace of $\mathbb{C}^{n}$ of dimension $s \leqslant \mathrm{rank}(A)$, and let $S$ be a subspace of $\mathbb{C}^{m}$ of dimension $m - s$. Suppose $G \in \mathbb{C}^{n \times m}$ satisfies $R(G) = T$ and $N(G) = S$. If $A$ has a $\{2\}$-inverse $A^{(2)}_{T,S}$, then $\mathrm{ind}(AG) = \mathrm{ind}(GA) = 1$ and $A^{(2)}_{T,S} = G(AG)^{\#} = (GA)^{\#}G$.
It is well known that the M–P inverse, the Drazin inverse and the group inverse are all $\{2\}$-inverses with specified range and null space [5, 193].
1.5
Generalized Inverses and System of Linear Equations
Solving linear systems is an important research topic in linear algebra and matrix theory [41, 116, 138]. Generalized inverses play an important role in solving singular linear systems [61, 202]. A brief introduction is given in this section.

Definition 1.8. Let $A \in \mathbb{C}^{m \times n}$ and $b \in \mathbb{C}^{m}$. If the linear equation $Ax = b$ is solvable (resp. unsolvable), then $Ax = b$ is called consistent (resp. inconsistent).

We know that the linear equation $Ax = b$ is consistent if and only if $b \in R(A)$. The $\{1\}$-inverse is used to represent the general solution of a consistent linear equation as follows.

Theorem 1.26. Let $A \in \mathbb{C}^{m \times n}$ and $b \in \mathbb{C}^{m}$. The general solution of the consistent equation $Ax = b$ is
\[ x = A^{(1)}b + (I - A^{(1)}A)z, \]
for any vector $z \in \mathbb{C}^{n}$.

Proof. Since $AA^{(1)}A = A$ and $b \in R(A)$, we have $AA^{(1)}b = b$, i.e., $A^{(1)}b$ is a solution of $Ax = b$. Next, we prove that $(I - A^{(1)}A)z$ (for all $z \in \mathbb{C}^{n}$) gives the general solution of $Ax = 0$. Since $A^{(1)}A$ is an idempotent matrix, we know $N(A^{(1)}A) = R(I - A^{(1)}A)$. Since $N(A^{(1)}A) = N(A)$, the vectors $(I - A^{(1)}A)z$ form the general solution of $Ax = 0$. Thus,
\[ x = A^{(1)}b + (I - A^{(1)}A)z \]
is the general solution of $Ax = b$. ■

Example [18]. Let $A \in \mathbb{C}^{3 \times 4}$ with $\mathrm{rank}(A) = 2$, and let $b \in \mathbb{C}^{3}$ with $b \in R(A)$, so that the equation $Ax = b$ is consistent. Let $P \in \mathbb{C}^{3 \times 3}$ and $Q \in \mathbb{C}^{4 \times 4}$ be invertible matrices such that
\[ PAQ = \begin{pmatrix} I_2 & 0 \\ 0 & 0 \end{pmatrix}. \]
From theorem 1.11, we obtain $A^{(1)} = Q \begin{pmatrix} I_2 & X \\ Y & Z \end{pmatrix} P$ for any $X \in \mathbb{C}^{2}$, $Y \in \mathbb{C}^{2 \times 2}$ and $Z \in \mathbb{C}^{2}$. Take $A^{(1)} = Q \begin{pmatrix} I_2 & 0 \\ 0 & 0 \end{pmatrix} P$. Theorem 1.26 gives the general solution of $Ax = b$:
\[ x = A^{(1)}b + (I - A^{(1)}A)u, \]
where $u = (u_1, u_2, u_3, u_4)^{\top}$ and $u_1$, $u_2$, $u_3$ and $u_4$ are arbitrary complex numbers. □
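The construction of theorem 1.26 can be replayed numerically; since $A^{+}$ is a $\{1\}$-inverse, it can play the role of $A^{(1)}$. A sketch assuming NumPy, with an illustrative rank-deficient matrix of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1., 1., 2.],
              [2., 1., 1.],
              [3., 2., 3.]])          # third row = first + second, rank 2
b = A @ np.array([1., 0., 1.])        # b in R(A), so A x = b is consistent

A1 = np.linalg.pinv(A)                # A^+ serves as a {1}-inverse A^(1)
n = A.shape[1]
for _ in range(5):                    # every choice of z yields a solution
    z = rng.standard_normal(n)
    x = A1 @ b + (np.eye(n) - A1 @ A) @ z
    assert np.allclose(A @ x, b)
```

The first term is one particular solution; the second term ranges over the null space of $A$ as $z$ varies, exactly as in the proof.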
For an inconsistent equation $Ax = b$, we look for an $x$ that minimizes the Euclidean distance between $Ax$ and $b$, i.e., a least-squares solution.

Definition 1.9. Let $A \in \mathbb{C}^{m \times n}$ and $b \in \mathbb{C}^{m}$. A vector $x_0$ is called a least-squares solution of $Ax = b$ if, for all $u \in \mathbb{C}^{n}$,
\[ \|Ax_0 - b\| \leqslant \|Au - b\|, \]
where $\|\cdot\|$ is the 2-norm.

Definition 1.10. Let $A \in \mathbb{C}^{m \times n}$ and $b \in \mathbb{C}^{m}$. A vector $x_0 \in \mathbb{C}^{n}$ is called the minimum-norm least-squares solution of $Ax = b$ if $x_0$ is a least-squares solution of $Ax = b$ and $\|x_0\| \leqslant \|u\|$ for any other least-squares solution $u$.

Theorem 1.27. Let $A \in \mathbb{C}^{m \times n}$ and $b \in \mathbb{C}^{m}$. Then $x_0 = A^{+}b$ is the minimum-norm least-squares solution of the inconsistent equation $Ax = b$.

Proof. Obviously, $[(I - AA^{+})b]^{*}A = 0$. Then $(I - AA^{+})b \in R(A)^{\perp}$, and
\[ \|Ax - b\|^{2} = \|Ax - AA^{+}b - (I - AA^{+})b\|^{2} = \|Ax - AA^{+}b\|^{2} + \|(I - AA^{+})b\|^{2}. \]
Thus, $x_0 = A^{+}b$ is a least-squares solution of $Ax = b$, and the least-squares solutions of $Ax = b$ are exactly the solutions of $Ax = AA^{+}b$, whose general form is
\[ x = A^{+}b + (I - A^{+}A)u, \]
where $u \in \mathbb{C}^{n}$ is an arbitrary vector. Since the inner product
\[ (A^{+}b, (I - A^{+}A)u) = u^{*}(I - A^{+}A)^{*}A^{+}b = u^{*}(I - A^{+}A)A^{+}b = 0, \]
we have
\[ \|A^{+}b\|^{2} \leqslant \|A^{+}b\|^{2} + \|(I - A^{+}A)u\|^{2} = \|A^{+}b + (I - A^{+}A)u\|^{2}, \]
i.e., $x_0 = A^{+}b$ is the minimum-norm least-squares solution of $Ax = b$. ■
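The statement of theorem 1.27 can be checked numerically. A sketch assuming NumPy, with a rank-deficient $A$ of our own choosing:

```python
import numpy as np

# x0 = A^+ b is the minimum-norm least-squares solution
A = np.array([[1., 1.],
              [1., 1.],
              [1., 1.]])              # rank 1
b = np.array([1., 0., 0.])            # b not in R(A): inconsistent system

x0 = np.linalg.pinv(A) @ b            # = (1/6, 1/6)
r0 = np.linalg.norm(A @ x0 - b)

u = np.array([1.0 / 3.0, 0.0])        # another least-squares solution
assert np.allclose(np.linalg.norm(A @ u - b), r0)   # same residual
assert np.linalg.norm(x0) < np.linalg.norm(u)       # but x0 has smaller norm
```

Here every least-squares solution satisfies $x_1 + x_2 = 1/3$, and $A^{+}b$ picks the one of minimum norm.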
1.6
Graph and Matrix
We assume the reader is familiar with the basics of graph theory. An undirected graph $G = (V, E)$ consists of a set of vertices $V$ and a set of edges $E$; each edge of $G$ is a two-element subset of $V$, that is, each edge $e$ consists of two different vertices of $G$. If there is an edge $e$ joining two vertices $u$ and $v$, then $u$ and $v$ are adjacent, denoted by $u \sim v$; $e$ and $u$ (resp. $e$ and $v$) are incident, denoted by $u \sim e$ (resp. $v \sim e$). If $G$ contains no loops and no parallel edges, then $G$ is a simple graph. All graphs considered in this section are simple and undirected. If the vertex set and edge set of a graph $H$ are subsets of the vertex set and edge set of a graph $G$, respectively, then $H$ is a subgraph of $G$. If $V = V_1 \cup V_2$ with $V_1 \cap V_2 = \emptyset$ and each edge of $G$ connects a vertex in $V_1$ with a vertex in $V_2$, then $G$ is called a bipartite graph.

Let $G$ be a graph with vertex set $V = \{v_1, v_2, \ldots, v_n\}$ and edge set $E = \{e_1, e_2, \ldots, e_m\}$. A matrix $B = (b_{ij}) \in \mathbb{R}^{n \times m}$ is called the incidence matrix of $G$ if
\[ b_{ij} = \begin{cases} 1, & v_i \sim e_j, \\ 0, & \text{otherwise.} \end{cases} \]
Obviously, each column of the incidence matrix has exactly two 1s, and the other elements are all zero. A vertex-edge sequence $v_1, e_1, v_2, \ldots, v_k, e_k, v_{k+1}$ of $G$ is called a path if $v_1, v_2, \ldots, v_{k+1}$ are distinct vertices and $e_i = v_iv_{i+1}$ is an edge of $G$ ($i = 1, 2, \ldots, k$). The number of edges in a path is called the length of the path. When $v_{k+1} = v_1$, the vertex-edge sequence is called a cycle. If any two vertices of $G$ are connected by a path, then $G$ is connected. A maximal connected subgraph of a graph $G$ is called a connected component of $G$. For two vertices $u$ and $v$ of a connected graph $G$, the length of the shortest path connecting $u$ and $v$ is called the distance between $u$ and $v$, denoted by $d_G(u, v)$. A connected graph without cycles is called a tree. If a positive number is assigned to each edge of $G$, then $G$ is a weighted graph; the value assigned to edge $e$ is called the weight of $e$, denoted by $w_e$.
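The incidence matrix can be constructed directly from an edge list. A sketch assuming NumPy and 0-indexed vertices (an indexing convention of this illustration, not of the book):

```python
import numpy as np

def incidence_matrix(n, edges):
    """0/1 vertex-edge incidence matrix B of a simple graph:
    B[i, j] = 1 iff vertex i is incident with edge j."""
    B = np.zeros((n, len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        B[u, j] = B[v, j] = 1
    return B

# a path on four vertices: v0 - v1 - v2 - v3
edges = [(0, 1), (1, 2), (2, 3)]
B = incidence_matrix(4, edges)
# each column has exactly two 1s, as noted above
assert (B.sum(axis=0) == 2).all()
# row sums are the vertex degrees
assert B.sum(axis=1).tolist() == [1, 2, 2, 1]
```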
Let $G$ be a weighted graph with vertex set $V=\{1,2,\ldots,n\}$. The real symmetric matrix $A=(a_{ij})$ of order $n$ is the adjacency matrix of $G$ if
$$a_{ij}=\begin{cases}w_{ij}, & i\sim j,\\ 0, & \text{otherwise,}\end{cases}$$
where $w_{ij}$ is the weight of the edge $(i,j)$. Let $D$ be the diagonal matrix of order $n$ whose $i$-th diagonal entry is $d_i=\sum_{j=1}^{n}a_{ij}$. The matrices $L=D-A$ and $Q=D+A$ are the
Generalized Inverses
15
Laplacian matrix and signless Laplacian matrix of $G$, respectively. The generalized inverses of the Laplacian matrix have important applications in the theory of graph spectra [118, 122, 123, 229] and in the study of resistance distance and the Kirchhoff index [1, 125, 187]. Next we briefly introduce the application of generalized inverses of the Laplacian matrix to the calculation of the resistance distance.

Let $G$ be a connected weighted graph. If a resistor is placed along each edge of $G$ (the resistance value being the reciprocal of the weight on the edge), then the effective resistance between any two vertices $a$ and $b$ is called the resistance distance between $a$ and $b$ [1, 125], denoted by $R_{ab}$. If a voltage source is applied between two vertices $a$ and $b$, and $Y>0$ is the total current flowing out of $a$ (and into $b$), then by Ohm's law the resistance distance between $a$ and $b$ is
$$R_{ab}=\frac{v_a-v_b}{Y},$$
where $v_a$ and $v_b$ are the voltages at the vertices $a$ and $b$, respectively. Let $y_{ij}$ denote the current from $i$ to $j$; clearly $y_{ij}=-y_{ji}$. According to Kirchhoff's current law, we have
$$\sum_{j\sim i}y_{ij}=\begin{cases}Y, & i=a,\\ -Y, & i=b,\\ 0, & i\neq a,b.\end{cases} \qquad (1.1)$$
The conductance on each edge $(i,j)$ (the reciprocal of the resistance) is the weight $w_{ij}$ of the edge $(i,j)$. By (1.1),
$$\sum_{j\sim i}y_{ij}=\sum_{j\sim i}w_{ij}(v_i-v_j)=\begin{cases}Y, & i=a,\\ -Y, & i=b,\\ 0, & i\neq a,b.\end{cases} \qquad (1.2)$$
Let $L$ be the Laplacian matrix of the weighted graph $G$, and denote the $i$-th column of the identity matrix by $e_i$. By (1.2),
$$Lv=Y(e_a-e_b), \qquad (1.3)$$
where $v$ is the column vector of the voltages at the vertices. Theorem 1.26 gives the general solution of the linear system (1.3):
$$v=YL^{(1)}(e_a-e_b)+\bigl(I-L^{(1)}L\bigr)u,$$
where $u$ is arbitrary. The effective resistance between the vertices $a$ and $b$ is
$$R_{ab}=\frac{v_a-v_b}{Y}=\frac{1}{Y}(e_a-e_b)^{\top}v=(e_a-e_b)^{\top}L^{(1)}(e_a-e_b)+\frac{1}{Y}(e_a-e_b)^{\top}\bigl(I-L^{(1)}L\bigr)u.$$
By (1.3), we obtain
$$(e_a-e_b)^{\top}=\frac{1}{Y}v^{\top}L, \qquad (e_a-e_b)^{\top}\bigl(I-L^{(1)}L\bigr)=0.$$
Therefore
$$R_{ab}=(e_a-e_b)^{\top}L^{(1)}(e_a-e_b)=L^{(1)}_{aa}+L^{(1)}_{bb}-L^{(1)}_{ab}-L^{(1)}_{ba},$$
where $L^{(1)}_{ab}$ is the $(a,b)$-entry of $L^{(1)}$. This is the formula for the resistance distance in terms of a generalized inverse. Since $L$ is symmetric, we have $L^{\#}=L^{+}$, which is a symmetric $\{1\}$-inverse of $L$, and the following theorem is obtained.

Theorem 1.28 [1, 125]. Let $L$ be the Laplacian matrix of a connected weighted graph $G$. Then the resistance distance between any two vertices $a$ and $b$ is
$$R_{ab}=L^{(1)}_{aa}+L^{(1)}_{bb}-L^{(1)}_{ab}-L^{(1)}_{ba}=L^{\#}_{aa}+L^{\#}_{bb}-2L^{\#}_{ab}=L^{+}_{aa}+L^{+}_{bb}-2L^{+}_{ab}.$$
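As a quick numerical illustration of theorem 1.28 (our own sketch, not part of the book), the pseudoinverse $L^{+}$ can be computed with NumPy and plugged into the formula. The example below uses the path graph on three vertices with unit weights, whose end-to-end resistance is that of two unit resistors in series.

```python
import numpy as np

# Path graph 1-2-3 with unit weights; L = D - A is its Laplacian.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A
Lp = np.linalg.pinv(L)               # L^+, a symmetric {1}-inverse of L

def resistance(a, b):
    # R_ab = L+_aa + L+_bb - 2 L+_ab  (theorem 1.28)
    return Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b]

print(round(resistance(0, 2), 6))    # two unit resistors in series -> 2.0
```

Any other symmetric $\{1\}$-inverse of $L$ would give the same value, which is exactly the point of the theorem.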
J. Zhou, L. Sun, W. Wang and C. Bu gave a formula for the resistance distance of bipartite graphs.

Theorem 1.29 [229]. Let $Q$ be the signless Laplacian matrix of a connected graph $G$. If $G$ is bipartite, then the resistance distance between vertices $a$ and $b$ is
$$R_{ab}=\begin{cases}Q^{(1)}_{aa}+Q^{(1)}_{bb}+Q^{(1)}_{ab}+Q^{(1)}_{ba}, & d_G(a,b)\ \text{is odd},\\[2pt] Q^{(1)}_{aa}+Q^{(1)}_{bb}-Q^{(1)}_{ab}-Q^{(1)}_{ba}, & d_G(a,b)\ \text{is even}.\end{cases}$$

Proof. Since $G$ is bipartite, its adjacency matrix can be written as $A=\begin{pmatrix}0 & B\\ B^{\top} & 0\end{pmatrix}$, so that
$$Q=\begin{pmatrix}D_1 & B\\ B^{\top} & D_2\end{pmatrix}, \qquad L=\begin{pmatrix}D_1 & -B\\ -B^{\top} & D_2\end{pmatrix}.$$
Therefore
$$Q=\begin{pmatrix}I & 0\\ 0 & -I\end{pmatrix}L\begin{pmatrix}I & 0\\ 0 & -I\end{pmatrix}, \qquad Q^{(1)}=\begin{pmatrix}I & 0\\ 0 & -I\end{pmatrix}L^{(1)}\begin{pmatrix}I & 0\\ 0 & -I\end{pmatrix}.$$
Hence $L^{(1)}_{ab}=\pm Q^{(1)}_{ab}$, with the minus sign exactly when $a$ and $b$ lie on opposite sides of the bipartition, that is, when $d_G(a,b)$ is odd. Theorem 1.28 then gives
$$R_{ab}=\begin{cases}Q^{(1)}_{aa}+Q^{(1)}_{bb}+Q^{(1)}_{ab}+Q^{(1)}_{ba}, & d_G(a,b)\ \text{is odd},\\[2pt] Q^{(1)}_{aa}+Q^{(1)}_{bb}-Q^{(1)}_{ab}-Q^{(1)}_{ba}, & d_G(a,b)\ \text{is even}.\end{cases}$$
This completes the proof. ■
Let $G$ be a connected weighted graph. The sum of the resistance distances between all pairs of vertices is called the Kirchhoff index of $G$ [125], denoted by $Kf(G)=\sum_{u\neq v}R_{uv}$. Let $e$ denote the all-ones column vector.

Lemma 1.1. For the Laplacian matrix $L$ of a graph $G$, we have $L^{\#}e=0$.

Proof. Since each row sum of $L$ is zero, we have $Le=0$. Therefore $L^{\#}e=L^{\#}LL^{\#}e=(L^{\#})^{2}Le=0$. ■

Theorem 1.30 [187]. Let $G$ be a connected graph of order $n$. Then
$$Kf(G)=n\,\mathrm{tr}(L^{(1)})-e^{\top}L^{(1)}e=n\,\mathrm{tr}(L^{\#})=n\,\mathrm{tr}(L^{+}).$$

Proof. By theorem 1.28, we have
$$Kf(G)=\sum_{u\neq v}\bigl(L^{(1)}_{uu}+L^{(1)}_{vv}-L^{(1)}_{uv}-L^{(1)}_{vu}\bigr)=n\,\mathrm{tr}(L^{(1)})-e^{\top}L^{(1)}e.$$
Since $L^{\#}=L^{+}$ is a $\{1\}$-inverse of $L$, lemma 1.1 gives $Kf(G)=n\,\mathrm{tr}(L^{\#})=n\,\mathrm{tr}(L^{+})$. ■
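Theorem 1.30 is equally easy to test numerically. The sketch below (ours, not from the book) compares $n\,\mathrm{tr}(L^{+})$ with the pairwise sum of resistance distances on the complete graph $K_4$, where every resistance equals $2/n$ and hence $Kf(K_n)=n-1$.

```python
import numpy as np

n = 4
L = n * np.eye(n) - np.ones((n, n))   # Laplacian of the complete graph K_n
Lp = np.linalg.pinv(L)

kf_trace = n * np.trace(Lp)                        # theorem 1.30
kf_pairs = sum(Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]  # pairwise resistance sum
               for u in range(n) for v in range(u + 1, n))
print(round(kf_trace, 6), round(kf_pairs, 6))      # both equal n - 1 = 3.0
```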
If the entries of a matrix are all nonnegative real numbers, then the matrix is nonnegative [79, 93, 110]. A real matrix $A$ is called an $M$-matrix if $A$ can be written as $A=sI-B$, where $B$ is a nonnegative matrix and $s$ is no less than the spectral radius of $B$ [7, 208, 223]. If $s$ equals the spectral radius of $B$, then $A$ is a singular $M$-matrix; otherwise $A$ is a nonsingular $M$-matrix. The Laplacian matrix of a graph is a singular $M$-matrix [119, 121]. Reference [120] summarizes the applications of the group inverse of singular $M$-matrices. Recently, many classical results have been generalized from $M$-matrices to $M$-tensors [74].
Chapter 2
Generalized Inverses of Partitioned Matrices

The Drazin inverse and the group inverse have important applications in singular differential equations [32, 35], Markov chains [34, 120, 225], iterative methods [5, 129], cryptography and so on, and they have attracted widespread attention. There are rich results on the Drazin inverse and group inverse of matrices. Recently, the study of generalized inverses has been extended to tensors (hypermatrices) [78, 185, 189, 190]. This chapter presents representations for the Drazin inverse and group inverse of block matrices.
2.1 Drazin Inverse of Partitioned Matrices
The concept of the Drazin inverse of a matrix was proposed by M.P. Drazin in 1958 [84]. In 1976, S.L. Campbell, C.D. Meyer and N.J. Rose used the Drazin inverse to represent the solutions of linear systems of singular differential equations [35]. In 1977, C.D. Meyer used a limit method to obtain the formula for the Drazin inverse of an upper triangular block matrix $\begin{pmatrix}A & B\\ 0 & D\end{pmatrix}$, where $A$ and $D$ are square [152]. It is defined that $A^{\pi}=I-AA^{D}$, and when $A$ is group invertible, $A^{\pi}=I-AA^{\#}$.

Theorem 2.1 [152]. Let $M=\begin{pmatrix}A & B\\ 0 & D\end{pmatrix}\in\mathbb{C}^{n\times n}$ with $A\in\mathbb{C}^{r\times r}$. Then
$$M^{D}=\begin{pmatrix}A^{D} & X\\ 0 & D^{D}\end{pmatrix},$$
where $X=\sum_{i=0}^{l_A-1}A^{\pi}A^{i}B(D^{D})^{i+2}+\sum_{i=0}^{l_D-1}(A^{D})^{i+2}BD^{i}D^{\pi}-A^{D}BD^{D}$, $l_A=\mathrm{ind}(A)$ and $l_D=\mathrm{ind}(D)$.
DOI: 10.1051/978-2-7598-2599-8.c002 © Science Press, EDP Sciences, 2021
Corollary 2.1 [152]. Let $M=\begin{pmatrix}A & B\\ 0 & 0\end{pmatrix}$ with $A\in\mathbb{C}^{r\times r}$. Then
$$M^{D}=\begin{pmatrix}A^{D} & (A^{D})^{2}B\\ 0 & 0\end{pmatrix}.$$

Corollary 2.2 [152]. Let $M=\begin{pmatrix}A & 0\\ B & 0\end{pmatrix}\in\mathbb{C}^{n\times n}$ with $A\in\mathbb{C}^{r\times r}$. Then
$$M^{D}=\begin{pmatrix}A^{D} & 0\\ B(A^{D})^{2} & 0\end{pmatrix}.$$
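The triangular structure asserted by theorem 2.1 can be checked numerically with the well-known identity $A^{D}=A^{k}\bigl(A^{2k+1}\bigr)^{+}A^{k}$, valid for any $k\geqslant\mathrm{ind}(A)$ (so $k=n$ is always safe). The helper `drazin` below and the concrete blocks are our own illustrative choices, not taken from the book.

```python
import numpy as np

def drazin(A, k=None):
    # A^D = A^k (A^(2k+1))^+ A^k holds for any k >= ind(A); k = n is always safe.
    n = A.shape[0]
    k = n if k is None else k
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# Upper triangular M = [[A, B], [0, D]]: by theorem 2.1, M^D is again
# upper triangular with diagonal blocks A^D and D^D.
A = np.array([[1., 1.], [0., 0.]])   # idempotent, so A^D = A
B = np.eye(2)
D = np.array([[0., 1.], [0., 0.]])   # nilpotent, so D^D = 0
M = np.block([[A, B], [np.zeros((2, 2)), D]])
MD = drazin(M)
print(np.allclose(MD[2:, :2], 0), np.allclose(MD[:2, :2], drazin(A)))
```

The pseudoinverse-based route is convenient for small experiments, though it is not the numerically preferred algorithm for computing Drazin inverses.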
In 1979, S.L. Campbell and C.D. Meyer posed the open problem of finding an explicit representation for the Drazin inverse of a block matrix $\begin{pmatrix}A & B\\ C & D\end{pmatrix}$, where $A$ and $D$ are square [34] — the representation problem for $2\times 2$ block matrices. In 1983, S.L. Campbell investigated two classes of singular differential equations and exhibited the relation between the Drazin inverse of the coefficient matrices and the solutions [33]. Let $E,F,G\in\mathbb{C}^{n\times n}$. When $E$ is singular, the system
$$Ex'(t)+Fx(t)=0 \quad (t>0)$$
is called a singular (or implicit) differential equation. If there exists a scalar $\lambda$ such that $\lambda E+F$ is nonsingular, then the above equation is equivalent to
$$E_1x'(t)+F_1x(t)=0,$$
where $E_1=(\lambda E+F)^{-1}E$ and $F_1=(\lambda E+F)^{-1}F$. It is easy to see that $E_1F_1=F_1E_1$. Then the solution is
$$x(t)=e^{-E_1^{D}F_1t}E_1^{D}E_1x(0),$$
where $x(0)$ is a consistent initial condition of dimension $n$. Consider the second-order differential equation
$$Ex''(t)+Fx'(t)+Gx(t)=0.$$
Assume that there exists a scalar $\lambda$ such that $\lambda^{2}E+\lambda F+G$ is nonsingular; then the solutions of the system are uniquely determined by a consistent initial condition. Let $x(t)=e^{\lambda t}y(t)$. Then the above system can be rewritten as a first-order singular system in $\bigl(y(t),y'(t)\bigr)^{\top}$ with coefficient matrices built from $E_2=(\lambda^{2}E+\lambda F+G)^{-1}(2\lambda E+F)$ and $F_2=(\lambda^{2}E+\lambda F+G)^{-1}F$. Thus, finding the functional solution of the above equation is equivalent to finding an explicit representation of the Drazin inverse of $\begin{pmatrix}E_2 & F_2\\ I & 0\end{pmatrix}$ (see [33]).
For the square matrix
$$M=\begin{pmatrix}A & B\\ C & D\end{pmatrix} \quad (\text{where } A \text{ and } D \text{ are also square}),$$
the problem of representing the Drazin inverse of $M$ has attracted much attention and has developed rapidly over the last few years. Rich results on the representations under different conditions on the subblocks $A$, $B$, $C$ and $D$ have been established. Here we first introduce the results related to conditions on the Schur complement of $M$.

For the matrix $M$, when $A$ is invertible, $S=D-CA^{-1}B$ is called the Schur complement of $M$ with respect to $A$. In this case, $M$ is invertible if and only if $S=D-CA^{-1}B$ is invertible, and the inverse of $M$ can be expressed in terms of the subblocks and the Schur complement as
$$M^{-1}=\begin{pmatrix}A^{-1}+A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1}\\ -S^{-1}CA^{-1} & S^{-1}\end{pmatrix}. \qquad (2.1)$$
For the matrix $M$, when $A$ is singular, $S=D-CA^{D}B$ is called a generalized Schur complement of $M$. The generalized inverses of $M$ have been investigated in terms of the generalized Schur complement and the subblocks. In 1989, J. Miao gave the following result.

Theorem 2.2 [153]. For $M=\begin{pmatrix}A & B\\ C & D\end{pmatrix}\in\mathbb{C}^{n\times n}$, if $S=D-CA^{D}B=0$, $A^{\pi}B=0$ and $CA^{\pi}=0$, then
$$M^{D}=\begin{pmatrix}I\\ CA^{D}\end{pmatrix}\bigl[(AW)^{D}\bigr]^{2}A\,\bigl(I,\ A^{D}B\bigr),$$
where $W=AA^{D}+A^{D}BCA^{D}$.

In 1998, Y.M. Wei gave the following results [204].

Theorem 2.3 [204]. Let $M=\begin{pmatrix}A & B\\ C & D\end{pmatrix}$. If $A^{\pi}B=0$, $CA^{\pi}=0$ and $S=D-CA^{D}B$ is invertible, then
$$M^{D}=\begin{pmatrix}A^{D}+A^{D}BS^{-1}CA^{D} & -A^{D}BS^{-1}\\ -S^{-1}CA^{D} & S^{-1}\end{pmatrix}.$$

Theorem 2.4 [204]. For $M=\begin{pmatrix}A & B\\ C & D\end{pmatrix}$, let $\Gamma=C(A^{D})^{2}B$. If $A^{\pi}B=0$, $CA^{\pi}=0$, $D-CA^{D}B=0$, $B(I+\Gamma)^{D}(I+\Gamma)=B$ and $(I+\Gamma)^{D}(I+\Gamma)\Gamma=\Gamma$, then
$$M^{D}=\begin{pmatrix}X & Y\\ Z & W\end{pmatrix},$$
where
$$\begin{aligned}
X&=\bigl[I-A^{D}B(I+\Gamma)^{D}CA^{D}\bigr]A^{D}\bigl[I-A^{D}B(I+\Gamma)^{D}CA^{D}\bigr],\\
Y&=\bigl[I-A^{D}B(I+\Gamma)^{D}CA^{D}\bigr](A^{D})^{2}B(I+\Gamma)^{D},\\
Z&=(I+\Gamma)^{D}C(A^{D})^{2}\bigl[I-A^{D}B(I+\Gamma)^{D}CA^{D}\bigr],\\
W&=(I+\Gamma)^{D}C(A^{D})^{3}B(I+\Gamma)^{D}.
\end{aligned}$$

Theorem 2.5 [204]. For $M=\begin{pmatrix}A & B\\ C & D\end{pmatrix}$, if $A^{\pi}B=0$, $CA^{\pi}=0$, $\mathrm{ind}(S)=1$, $B(I+\Gamma)^{D}(I+\Gamma)=B$, $(I+\Gamma)^{D}(I+\Gamma)\Gamma=\Gamma$, $D(I+\Gamma)^{D}(I+\Gamma)=D$ and $\Gamma(D-CA^{D}B)=(D-CA^{D}B)\Gamma$, then
$$M^{D}=U\begin{pmatrix}A^{D} & 0\\ 0 & 0\end{pmatrix}W+\begin{pmatrix}-A^{D}B\\ I\end{pmatrix}S^{\#}\bigl(-CA^{D},\ I\bigr),$$
where
$$\begin{aligned}
U&=I-\begin{pmatrix}-A^{D}B\\ I\end{pmatrix}S^{\pi}(I+\Gamma)^{D}\bigl(-CA^{D},\ I\bigr), \qquad \Gamma=C(A^{D})^{2}B,\\
W&=I-\begin{pmatrix}-A^{D}B\\ I\end{pmatrix}(I+\Gamma)^{D}S^{\pi}\bigl(-CA^{D},\ I\bigr), \qquad S=D-CA^{D}B.
\end{aligned}$$
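Theorem 2.3 can be exercised on a small example of our own in which $A$ is singular but $A^{\pi}B=0$, $CA^{\pi}=0$ and the generalized Schur complement is invertible; the `drazin` helper uses the identity $A^{D}=A^{k}(A^{2k+1})^{+}A^{k}$, $k\geqslant\mathrm{ind}(A)$.

```python
import numpy as np

def drazin(A, k=None):
    # A^D = A^k (A^(2k+1))^+ A^k holds for any k >= ind(A); k = n is always safe.
    n = A.shape[0]
    k = n if k is None else k
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# Blocks meeting the hypotheses of theorem 2.3:
# A^pi B = 0, C A^pi = 0, and S = D - C A^D B invertible.
A = np.array([[2., 0.], [0., 0.]])
B = np.array([[1.], [0.]])
C = np.array([[1., 0.]])
D = np.array([[1.]])

AD = drazin(A)
S = D - C @ AD @ B
Sinv = np.linalg.inv(S)
R = np.block([[AD + AD @ B @ Sinv @ C @ AD, -AD @ B @ Sinv],
              [-Sinv @ C @ AD,              Sinv]])
M = np.block([[A, B], [C, D]])
print(np.allclose(drazin(M), R))   # the theorem's formula matches M^D
```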
In 2005, R.E. Hartwig, X.Z. Li and Y.M. Wei gave the following results [101].

Theorem 2.6 [101]. For $M=\begin{pmatrix}A & B\\ C & D\end{pmatrix}\in\mathbb{C}^{n\times n}$, if $CA^{\pi}B=0$, $AA^{\pi}B=0$ and $S=D-CA^{D}B$ is invertible, then $\mathrm{ind}(M)\leqslant\mathrm{ind}(A)+1$ and
$$M^{D}=\left(I+\begin{pmatrix}0 & A^{\pi}B\\ 0 & 0\end{pmatrix}R\right)R\left(I+\sum_{i=0}^{\mathrm{ind}(A)-1}R^{i+1}\begin{pmatrix}0 & 0\\ CA^{i}A^{\pi} & 0\end{pmatrix}\right),$$
where $R=\begin{pmatrix}A^{D}+A^{D}BS^{-1}CA^{D} & -A^{D}BS^{-1}\\ -S^{-1}CA^{D} & S^{-1}\end{pmatrix}$.

Theorem 2.7 [101]. For $M=\begin{pmatrix}A & B\\ C & D\end{pmatrix}\in\mathbb{C}^{n\times n}$, if $D-CA^{D}B=0$, $CA^{\pi}B=0$ and $AA^{\pi}B=0$, then
$$\mathrm{ind}(M)\leqslant\mathrm{ind}(AW)+\mathrm{ind}(A)+2,$$
and
$$M^{D}=\left(I+\begin{pmatrix}0 & A^{\pi}B\\ 0 & 0\end{pmatrix}R_{1}\right)R_{1}\left(I+\sum_{i=0}^{\mathrm{ind}(A)-1}R_{1}^{i+1}\begin{pmatrix}0 & 0\\ CA^{i}A^{\pi} & 0\end{pmatrix}\right),$$
where $W=AA^{D}+A^{D}BCA^{D}$ and $R_{1}=\begin{pmatrix}I\\ CA^{D}\end{pmatrix}\bigl[(AW)^{D}\bigr]^{2}A\,\bigl(I,\ A^{D}B\bigr)$.
In 2009, M.F. Martínez-Serrano and N. Castro-González gave the representation for the Drazin inverse of block matrices under the conditions $A^{2}A^{\pi}B=0$, $CAA^{\pi}B=0$, $BCA^{\pi}B=0$ and $S=D-CA^{D}B=0$ [149]. In 2011, X.Z. Li gave the representation for the Drazin inverse under the conditions $A^{\pi}B=0$, $CA^{\pi}=0$, $(AW)^{\pi}BSS^{\#}=0$ and $SS^{\#}C(AW)^{\pi}=0$, where $W=AA^{D}+A^{D}B\bigl(I-SS^{\#}\bigr)CA^{D}$ and $S=D-CA^{D}B$ is group invertible [128]. C. Cao et al. established the following representation for the Drazin inverse of an anti-diagonal block matrix over skew fields [139].

Theorem 2.8 [139]. Let $M=\begin{pmatrix}0 & B\\ C & 0\end{pmatrix}\in\mathbb{K}^{n\times n}$. Then
$$M^{D}=\begin{pmatrix}0 & (BC)^{D}B\\ C(BC)^{D} & 0\end{pmatrix}=\begin{pmatrix}0 & B(CB)^{D}\\ C(BC)^{D} & 0\end{pmatrix}=\begin{pmatrix}0 & (BC)^{D}B\\ (CB)^{D}C & 0\end{pmatrix}=\begin{pmatrix}0 & B(CB)^{D}\\ (CB)^{D}C & 0\end{pmatrix},$$
where $0$ is a square matrix whose entries are all zero.

In 2005, N. Castro-González et al. [43] gave the representation for the Drazin inverse of $\begin{pmatrix}I & I\\ E & 0\end{pmatrix}$.

Theorem 2.9 [43]. Let $F=\begin{pmatrix}I & I\\ E & 0\end{pmatrix}\in\mathbb{C}^{n\times n}$ with $E\in\mathbb{C}^{k\times k}$. If $\mathrm{ind}(E)=r$, then

(1) $F^{D}=\begin{pmatrix}Y_{1}E^{\pi} & E^{D}+Y_{2}E^{\pi}\\ E^{D}E+Y_{2}EE^{\pi} & -E^{D}+(Y_{1}-Y_{2})E^{\pi}\end{pmatrix}$,

where $Y_{1}=\sum_{j=0}^{r-1}(-1)^{j}C(2j,j)E^{j}$, $Y_{2}=\sum_{j=0}^{r-1}(-1)^{j}C(2j+1,j)E^{j}$ and $C(a,b)=\dfrac{a!}{b!\,(a-b)!}$; and

(2) $(F^{D})^{2}=\begin{pmatrix}E^{D}+W_{1}E^{\pi} & -(E^{D})^{2}+W_{2}E^{\pi}\\ -E^{D}+W_{2}EE^{\pi} & E^{D}+(E^{D})^{2}+(W_{1}-W_{2})E^{\pi}\end{pmatrix}$,

where $W_{1}=\sum_{j=0}^{r-1}(-1)^{j}C(2j+1,j)E^{j}$ and $W_{2}=\sum_{j=0}^{r-1}(-1)^{j}C(2j+2,j)E^{j}$.

In 2006, N. Castro-González, E. Dopazo and J. Robles gave the representation for the Drazin inverse of $\begin{pmatrix}I & B\\ C & 0\end{pmatrix}$ under certain conditions [44]. In 2007, X.Z. Li and Y.M. Wei gave the representation for the Drazin inverse under the conditions $AA^{\pi}B=0$, $BCAA^{D}=0$, $CA^{\pi}B=0$ and $DCAA^{D}=0$ [130].

In 2011, C. Bu et al. gave the representation for the Drazin inverse of $\begin{pmatrix}A & A\\ B & 0\end{pmatrix}$ when $A^{2}=A$ [28]. In order to prove this result, some lemmas on the Drazin inverse and binomial coefficients are presented.

Lemma 2.1. Let $A,B\in\mathbb{C}^{n\times n}$. Then $(AB)^{D}A=A(BA)^{D}$ and $B(AB)^{D}=(BA)^{D}B$.

Proof. It follows from theorem 1.16 that
$$(AB)^{D}A=A\bigl[(BA)^{D}\bigr]^{2}BA=A(BA)^{D}, \qquad B(AB)^{D}=BA\bigl[(BA)^{D}\bigr]^{2}B=(BA)^{D}B.$$
This completes the proof. ■
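Lemma 2.1 and theorem 2.8 are identities, so they can be spot-checked on random matrices. The sketch below is our own; it uses a pseudoinverse-based Drazin routine ($A^{D}=A^{k}(A^{2k+1})^{+}A^{k}$ for $k\geqslant\mathrm{ind}(A)$).

```python
import numpy as np

def drazin(A, k=None):
    # A^D = A^k (A^(2k+1))^+ A^k holds for any k >= ind(A); k = n is always safe.
    n = A.shape[0]
    k = n if k is None else k
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# Lemma 2.1: (BC)^D B = B (CB)^D.
print(np.allclose(drazin(B @ C) @ B, B @ drazin(C @ B)))

# Theorem 2.8: M = [[0, B], [C, 0]] has M^D = [[0, (BC)^D B], [(CB)^D C, 0]].
Z = np.zeros((3, 3))
M = np.block([[Z, B], [C, Z]])
MD = np.block([[Z, drazin(B @ C) @ B], [drazin(C @ B) @ C, Z]])
print(np.allclose(drazin(M), MD))
```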
For integers $n$ and $k$, denote the binomial coefficient by $C(n,k)$. Then the following holds.

Lemma 2.2 [192].
(1) $C(n,k)=C(n-1,k)+C(n-1,k-1)$;
(2) $C(n,n-k)=C(n,k)=\frac{n}{k}C(n-1,k-1)$, $\ k\neq 0$.

Lemma 2.3. For an integer $r\geqslant 2$, let $s(r)$ be the integer part of $\frac{r}{2}$, and let $k$ be an integer. Then
(1) $\displaystyle\sum_{i=0}^{k-1}\frac{(-1)^{i}}{r-i}C(r-i,i)C(2(k-i)-1,k-i)=\frac{(-1)^{k-1}}{r}C(r-1-k,k-1)$, $\quad 2\leqslant k\leqslant s(r)$;
(2) $\displaystyle\sum_{i=0}^{s(r+1)}\frac{(-1)^{i}}{r-i}C(r-i,i)C(2(k-i)-1,k-i)=0$, $\quad s(r)<k\leqslant r-1$.

Proof. Result (1) follows from statement (1) of lemma 2.3 in [43]. By statement (2) of lemma 2.3 in [43], we obtain
$$\sum_{i=0}^{s(r)}\frac{(-1)^{i}}{r-i}C(r-i,i)C(2(k-i)-1,k-i)=0.$$
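Identity (1) of lemma 2.3 can be verified in exact arithmetic; the check below is our own snippet, not part of the book.

```python
from fractions import Fraction
from math import comb

def lhs(r, k):
    # sum_{i=0}^{k-1} (-1)^i/(r-i) * C(r-i,i) * C(2(k-i)-1, k-i)
    return sum(Fraction((-1) ** i, r - i) * comb(r - i, i) * comb(2 * (k - i) - 1, k - i)
               for i in range(k))

def rhs(r, k):
    # (-1)^(k-1)/r * C(r-1-k, k-1)
    return Fraction((-1) ** (k - 1), r) * comb(r - 1 - k, k - 1)

pairs = [(r, k) for r in range(4, 13) for k in range(2, r // 2 + 1)]
print(all(lhs(r, k) == rhs(r, k) for r, k in pairs))
```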
Note that $s(r)<k\leqslant r-1$. Also, $s(r)=s(r+1)$ if $r$ is even, and $C(r-s(r+1),s(r+1))=0$ if $r$ is odd. So result (2) holds. ■

Lemma 2.4. Suppose that $N$ is a nilpotent matrix of index $r$. Let $X_{1}=\sum_{j=0}^{r-1}(-1)^{j}C(2j,j)N^{j}$ and $X_{2}=\sum_{j=0}^{r-1}(-1)^{j}C(2j+1,j)N^{j}$. Then
(1) $X_{1}^{2}+X_{2}^{2}N=X_{2}$;
(2) $2X_{1}X_{2}-X_{2}^{2}=\sum_{j=0}^{r-1}(-1)^{j}C(2j+2,j)N^{j}$;
(3) $\displaystyle\sum_{i=0}^{s(r+1)}C(r-i,i)N^{i}X_{1}+\sum_{i=0}^{s(r+1)}C(r-1-i,i)N^{i+1}X_{2}=\sum_{i=0}^{s(r+1)}C(r-1-i,i)N^{i}$;
(4) $\displaystyle\sum_{i=0}^{s(r+1)}C(r-1-i,i)N^{i}(X_{1}-X_{2})+\sum_{i=0}^{s(r+1)}C(r-i,i)N^{i}X_{2}=\sum_{i=0}^{s(r+1)}C(r-2-i,i)N^{i}$.

Proof. The proofs of (1) and (2) can be found (for $Z_{1}$ and $Z_{2}$) on pp. 259--260 of [43]. The proofs of (3) and (4) are similar, so we only prove (3). If $r=0,1$, then (3) holds trivially. Next we discuss the case $r\geqslant 2$. Let $S$ denote the left-hand side of (3). Then
$$S=\sum_{i=0}^{s(r+1)}C(r-i,i)N^{i}+\sum_{i=0}^{s(r+1)}C(r-i,i)N^{i}(X_{1}-I)+\sum_{i=0}^{s(r+1)}C(r-1-i,i)N^{i}(X_{2}N).$$
Since $X_{1}-I=\sum_{j=1}^{r-1}(-1)^{j}C(2j,j)N^{j}$ and $X_{2}N=-\sum_{j=1}^{r-1}(-1)^{j}C(2j-1,j)N^{j}$, we have
$$S=\sum_{i=0}^{s(r+1)}C(r-i,i)N^{i}+\sum_{i=0}^{s(r+1)}\sum_{j=1}^{r-1}(-1)^{j}\bigl[C(r-i,i)C(2j,j)-C(r-1-i,i)C(2j-1,j)\bigr]N^{i+j}.$$
It follows from (2) of lemma 2.2 that $C(2j,j)=2C(2j-1,j)$ and $C(r-1-i,i)=\frac{r-2i}{r-i}C(r-i,i)$. Then
$$S=\sum_{i=0}^{s(r+1)}C(r-i,i)N^{i}+\sum_{i=0}^{s(r+1)}\sum_{j=1}^{r-1}(-1)^{j}\frac{r}{r-i}C(r-i,i)C(2j-1,j)N^{i+j}.$$
Let $U$ denote the double sum. Setting $k=i+j$,
$$U=\sum_{k=1}^{s(r+1)+r-1}r(-1)^{k}\sum_{i=0}^{s(r+1)}\frac{(-1)^{i}}{r-i}C(r-i,i)C(2(k-i)-1,k-i)\,N^{k}.$$
When $k\geqslant r$ we have $N^{k}=0$, so the outer sum may be taken over $1\leqslant k\leqslant r-1$. When $s(r)<k\leqslant r-1$, the inner sum vanishes by (2) of lemma 2.3, so the outer sum may be taken over $1\leqslant k\leqslant s(r)$. Since $C(2(k-i)-1,k-i)=0$ when $i\geqslant k$, the inner sum may be taken over $0\leqslant i\leqslant k-1$:
$$U=-N+\sum_{k=2}^{s(r)}r(-1)^{k}\sum_{i=0}^{k-1}\frac{(-1)^{i}}{r-i}C(r-i,i)C(2(k-i)-1,k-i)\,N^{k}.$$
Applying (1) of lemma 2.3 for $2\leqslant k\leqslant s(r)$, we get
$$U=-N+\sum_{k=2}^{s(r)}(-1)^{2k-1}C(r-1-k,k-1)N^{k}=-\sum_{k=1}^{s(r+1)}C(r-1-k,k-1)N^{k}.$$
From (1) of lemma 2.2, we obtain $C(r-i,i)=C(r-1-i,i)+C(r-1-i,i-1)$. Then
$$S=\sum_{i=0}^{s(r+1)}C(r-i,i)N^{i}-\sum_{k=1}^{s(r+1)}C(r-1-k,k-1)N^{k}=\sum_{i=0}^{s(r+1)}C(r-1-i,i)N^{i}. \qquad\blacksquare$$
P j p jk Lemma 2.5. Let U ðkÞ ¼ r1 , k ¼ 1; 0; 1, j¼k ð1Þ C ð2j k þ 1; j þ 1ÞðABÞ ðABÞ Psðr þ 1Þ iþk and V ðkÞ ¼ i¼0 C ðr k i; iÞðABÞ ; k ¼ 0; 1, where A; B 2 Cnn , r ¼ indðABÞ. Then U ð1Þ þ U ð0Þ ¼ ABU ð1Þ; U 2 ð1Þ þ U 2 ð0ÞAB ¼ U ð0Þ; 2U ð1ÞU ð0ÞðABÞ2 U 2 ð0ÞAB ¼ U ð1ÞAB; Psðr þ 1Þ V ð0ÞU ð1Þ þ V ð1ÞU ð0Þ ¼ i¼0 C ðr 1 i; iÞðABÞp ðABÞi ; Psðr þ 1Þ (5) V ð1ÞU ð1ÞAB þ V ð0ÞU ð0ÞAB ¼ i¼0 C ðr 2 i; iÞðABÞp ðABÞi þ 1 : (1) (2) (3) (4)
Proof. U ð1Þ þ U ð0Þ ¼
r1 X
ð1Þ j C ð2j þ 2; j þ 1ÞðABÞp ðABÞj þ 1
j¼1
þ
r1 X ð1Þ j C ð2j þ 1; j þ 1ÞðABÞp ðABÞ j j¼0
r 1 X ¼ ð1Þ j C ð2j; jÞðABÞp ðABÞ j j¼0
þ
r1 X
ð1Þ j C ð2j þ 1; j þ 1ÞðABÞp ðABÞ j
j¼0
¼
r 1 X
ð1Þ j C ð2j; j þ 1ÞðABÞp ðABÞ j
j¼1
¼ ABU ð1Þ: Therefore (1) holds. By the core-nilpotent decomposition, there exists a nonsingular matrix P 2 Cnn such that D 0 P 1 ; AB ¼ P 0 N1 where D 2 Crr is nonsingular and N1 is r–nilponent. Furthermore, we have 0 0 p ðABÞ AB ¼ P P 1 : 0 N1 Let matrix N in ð1Þ of lemma 2.4 be N1 . Then N ¼ 0 and X12 þ Pr1 j j j¼0 ð1Þ C ð2j þ 1; jÞN1 . Thus, let matrix N in (1) of Lemma 2.4 be ðABÞ AB. It is easy to see (2) holds. From the statement (1) and ABU ðkÞ ¼ U ðkÞAB, k ¼ 1; 0; 1, it yields that X22 N1 ¼ p
2U ð1ÞU ð0ÞðABÞ2 U 2 ð0ÞAB ¼ 2½U ð1Þ þ U ð0ÞU ð0ÞAB U 2 ð0ÞAB ¼ 2U ð1ÞU ð0ÞAB þ U 2 ð0ÞAB:
Sign Pattern for Generalized Inverses
28
Similar to the proof of (2), it follows from (2) of lemma 2.4 that 2U ð1ÞU ð0ÞAB þ U 2 ð0ÞAB ¼
r1 X
ð1Þ j C ð2j þ 2; jÞðABÞp ðABÞj þ 1 :
j¼0
Manipulation gives 2U ð1ÞU ð0ÞðABÞ2 U 2 ð0ÞAB
¼ ¼ ¼
rP 1 j¼0
r P
ð1Þ j C ð2j þ 2; jÞðABÞp ðABÞj þ 1
ð1Þ j C ð2j; j þ 1ÞðABÞp ðABÞ j
j¼1 rP 1 j¼1
ð1Þ j C ð2j; j þ 1ÞðABÞp ðABÞ j
¼ U ð1ÞAB: So (3) holds. Similar to the proof of (2), It follows from (3) of lemma 2.4 that (4) holds. The statement (4) of lemma 2.4 gives that V ð1Þ½U ð1Þ U ð0Þ þ V ð0ÞU ð0ÞAB ¼
sðr þ 1Þ X
C ðr 2 i; iÞðABÞp ðABÞi þ 1 ;
i¼0
■
and by (1), we have (5) holds. A A Theorem 2.10. Let M ¼ 2 Cnn and A2 ¼ A. Then B 0 P11 ðlÞ P12 ðlÞ l M ¼ ðl > 2Þ; P21 ðlÞ P22 ðlÞ
ð2:2Þ
where P11 ðlÞ
¼ P12 ðlÞ þ
P21 ðlÞ
¼ P22 ðlÞ þ
sðlÞ P i¼0 sðlÞ P i¼0
C ðl 2 i; iÞðABÞi þ 1 ; C ðl 3 i; iÞðBAÞi þ 1 B;
P12 ðlÞ ¼
sðlÞ P
C ðl 1 i; iÞðABÞi A;
i¼0 sðlÞ P
P22 ðlÞ ¼
i¼0
C ðl 2 i; iÞðBAÞi þ 1 :
Proof. We prove this theorem by induction. When l ¼ 2, the theorem holds obviously. Assume that (2.2) holds when l ¼ k. Then when l ¼ k þ 1, Mk þ1 ¼ MkM P11 ðkÞ P12 ðkÞ A A ¼ P21 ðkÞ P22 ðkÞ B 0 P11 ðkÞA þ P12 ðkÞB P11 ðkÞA ¼ : P21 ðkÞA þ P22 ðkÞB P21 ðkÞA
Generalized Inverses of Partitioned Matrices
29
Calculation gives that k þ 1 ¼ P11 ðkÞA M 12 ¼
sðkÞ X
C ðk 1 i; iÞðABÞi A þ
i¼0
¼
sðkÞ þ1 X
sðkÞ X
C ðk 2 i; iÞðABÞi þ 1 A
i¼0
C ðk i; iÞðABÞi A
i¼0
¼
sðk þ 1Þ X
C ðk i; iÞðABÞi A
i¼0
¼ P12 ðk þ 1Þ; and
Mk þ1
11
¼ P11 ðkÞA þ P12 ðkÞB sðkÞ X ¼ M k þ 1 12 þ C ðk 1 i; iÞðABÞi þ 1 i¼0
¼
sðk þ 1Þ X
C ðk i; iÞðABÞi A þ
i¼0
sðk þ 1Þ X
C ðk 1 i; iÞðABÞi þ 1
i¼0
¼ P11 ðk þ 1Þ: Similarly, it yields k þ 1 ¼ P21 ðk þ 1Þ and M k þ 1 22 ¼ P22 ðk þ 1Þ: M 21 ■
Hence (2.2) holds. Theorem 2.11 [28]. Let M ¼
A B
A 0
Then
with submatrices A; B 2 Cnn and A2 ¼ A.
E11 M ¼ E21 D
E12 ; E22
where E11 ¼ E12 ABU ð1Þ ðABÞD ;
E12 ¼ U ð0ÞA þ ðABÞD A;
E21 ¼ E22 þ B½U ð0Þ þ U ð1Þ þ ðBAÞD B þ B½ðABÞD 2 ; U ðkÞ ¼
E22 ¼ BU ð1ÞA ðBAÞD ;
r 1 X ð1Þ j C ð2j k þ 1; j þ 1ÞðABÞp ðABÞjk ; r ¼ indðM Þ > 1; k ¼ 1; 0; 1: j¼k
Sign Pattern for Generalized Inverses
30
E11 E12 Proof. Let X ¼ . Next we check that X is the Drazin inverse of M . E21 E22 Firstly, it is verified that XM ¼ MX. Applying lemma 2.1 and (1) of lemma 2.5, and since AU ð0ÞA ¼ U ð0ÞA, we obtain ðMXÞ12 ¼ AE12 þ AE22 ¼ AU ð0ÞA þ ðABÞD A ABU ð1ÞA AðBAÞD ¼ ½U ð0Þ ABU ð1ÞA ð1Þ
¼ U ð1ÞA; ðXM Þ12 ¼ E11 A ¼ U ð0ÞA þ ðABÞD A ABU ð1ÞA ðABÞD A ¼ ½U ð0Þ ABU ð1ÞA ð1Þ
¼ U ð1ÞA:
Hence, ðMXÞ12 ¼ ðXM Þ12 ¼ U ð1ÞA: It follows from (2.3) and ABU ð0Þ ¼ U ð0ÞAB that ðMXÞ11 ¼ AE11 þ AE21 ¼ AE12 þ AE22 þ ABU ð0Þ þ ABðABÞD ¼ ðMXÞ12 þ ABU ð0Þ þ ABðABÞD ¼ U ð1ÞA þ ABU ð0Þ þ ABðABÞD ; and ðXM Þ11 ¼ E11 A þ E12 B ¼ ðXM Þ12 þ E12 B ¼ U ð1ÞA þ U ð0ÞAB þ ABðABÞD ¼ U ð1ÞA þ ABU ð0Þ þ ABðABÞD : Then ðMXÞ11 ¼ ðXM Þ11 ¼ U ð1ÞA þ ABU ð0Þ þ ABðABÞD : The other subblocks can be obtained similarly, with ðMXÞ21 ¼ ðXM Þ21 ¼ ðBAÞD BA BðABÞD þ BU ð0ÞA BU ð1ÞAB; ðMXÞ22 ¼ ðXM Þ22 ¼ BAðBAÞD þ BU ð0ÞA: Thus,
F11 MX ¼ XM ¼ F21
F12 ; F22
ð2:3Þ
Generalized Inverses of Partitioned Matrices
31
where F11 ¼ ðABÞD AB þ U ð0ÞAB þ F12 ; D
F21 ¼ BðABÞ BU ð1ÞAB þ F22 ;
F12 ¼ U ð1Þ; F22 ¼ BAðBAÞD þ BU ð0ÞA:
Secondly, it is verified that XMX ¼ X: E11 E12 F11 F12 XMX ¼ E21 E22 F21 F22 E11 F11 þ E12 F21 E11 F12 þ E12 F22 ¼ : E21 F11 þ E22 F21 E21 F12 þ E22 F22 Since U ðkÞðABÞ ¼ ðABÞU ðkÞ, U ðkÞðABÞD ¼ 0 and AU ðkÞA ¼ U ðkÞA (k ¼ 1; 0; 1), and by (1)–(3) of lemma 2.5, it yields that ðXMXÞ12 ¼ E11 F12 þ E12 F22 ¼ ½U ð0ÞA þ ðABÞD A ABU ð1Þ ðABÞD ½U ð1ÞA þ ½U ð0ÞA þ ðABÞD A½BAðBAÞD þ BU ð0ÞA ¼ ½ABU ð1Þ U ð0ÞU ð1ÞA þ U 2 ð0ÞABA þ ðABÞD A ð1Þ
ð2:4Þ
D
¼ ½U 2 ð1Þ þ U 2 ð0ÞABA þ ðABÞ A
ð2Þ
¼ U ð0ÞA þ ðABÞD A ¼ E12 ;
and ðXMXÞ11 ¼ E11 F11 þ E12 F21 ¼ E11 F12 þ E12 F22 þ E11 ½ðABÞD AB þ U ð0ÞAB þ E12 ½BðABÞD BU ð1ÞAB ¼ ðXMXÞ12 þ ½U ð0ÞA þ ðABÞD A ABU ð1Þ ðABÞD ½ðABÞD AB þ U ð0ÞAB þ ½U ð0ÞA þ ðABÞD A½BðABÞD BU ð1ÞAB
ð2:5Þ
¼ E12 ½2U ð1ÞU ð0ÞAB U 2 ð0ÞðABÞ ðABÞD ð3Þ
¼ E12 ABU ð1Þ ðABÞD ¼ E11 :
Similar calculation shows that ðXMXÞ21 ¼ E21 ; ðXMXÞ22 ¼ E22 : It follows from (2.4), (2.5) and (2.6) that XMX ¼ X holds. Finally, it is verified that M r þ 1 X ¼ M r . From theorem 2.10, we have
ð2:6Þ
Sign Pattern for Generalized Inverses
32
M
r þ1
X¼
P11 ðr þ 1Þ
P12 ðr þ 1Þ
E11
E12
P21 ð1 þ 1Þ P22 ðr þ 1Þ E21 E22 P11 ðr þ 1ÞE11 þ P12 ðr þ 1ÞE21 P11 ðr þ 1ÞE12 þ P12 ðr þ 1ÞE22 ¼ ; P21 ðr þ 1ÞE11 þ P22 ðr þ 1ÞE21 P21 ðr þ 1ÞE12 þ P22 ðr þ 1ÞE22
where P11 ðr þ 1Þ ¼ V ð0ÞA þ V ð1Þ; P12 ðr þ 1Þ ¼ V ð0ÞA; ð0ÞBA þ V ð1ÞB; P22 ðr þ 1Þ ¼ V ð0ÞBA; P21 ðr þ 1Þ ¼ V V ðkÞ ¼
sðr þ 1Þ X
C ðr k i; iÞðABÞi þ k ; k ¼ 0; 1;
i¼0
ðkÞ ¼ V
sðr þ 1Þ X
C ðr 1 k i; iÞðBAÞi þ k ; k ¼ 0; 1:
i¼0
Using (1), (4) and (5) of lemma 2.5, the subblocks of M r þ 1 X are deduced to r þ1 M X 12 ¼ P11 ðr þ 1ÞE12 þ P12 ðr þ 1ÞE22 ¼ ½V ð0ÞA þ V ð1Þ½U ð0ÞA þ ðABÞD A þ V ð0ÞA½ðBAÞD BU ð1ÞA ¼ V ð1ÞðABÞD A þ V ð1ÞU ð0ÞA þ V ð0Þ½U ð0Þ ABU ð1ÞA ð1Þ
¼ V ð1ÞðABÞD A þ ½V ð1ÞU ð0Þ V ð0ÞU ð1ÞA
ð4Þ
¼ V ð1ÞðABÞD A þ
sðr þ 1Þ X
C ðr 1 i; iÞðABÞp ðABÞi A
i¼0
¼
sðr þ 1Þ X
C ðr 1 i; iÞðABÞi þ 1 ðABÞD A
i¼0
þ
sðr þ 1Þ X
C ðr 1 i; iÞðABÞp ðABÞi A
i¼0
¼
sðr þ 1Þ X
C ðr 1 i; iÞðABÞi A
i¼0
¼
sðrÞ X
C ðr 1 i; iÞðABÞi A
i¼0
¼ P12 ðrÞ;
Generalized Inverses of Partitioned Matrices
33
and r þ1 M X 11 ¼ P11 ðr þ 1ÞE11 þ P12 ðr þ 1ÞE21 ¼ P11 ðr þ 1ÞE12 þ P12 ðr þ 1ÞE22 þ P11 ðr þ 1Þ½ABU ð1Þ ðABÞD þ P12 ðr þ 1Þ½BU ð0Þ þ BU ð1Þ þ ðBAÞD B þ B½ðABÞD 2 ¼ ðM r þ 1 XÞ12 þ ½V ð0ÞAB V ð1ÞðABÞD þ V ð0ÞU ð0ÞAB V ð1ÞU ð1ÞAB ð5Þ
¼ P12 ðrÞ þ ½V ð0ÞAB V ð1ÞðABÞD þ
sðr þ 1Þ X
C ðr 2 i; iÞðABÞi þ 1 ðABÞp
i¼0
¼ P12 ðrÞ þ
sðr þ 1Þ X
C ðr 1 i; i 1ÞðABÞi þ 1 ðABÞD
i¼0
þ
sðr þ 1Þ X
C ðr 2 i; iÞðABÞi þ 1 ðABÞp
i¼0
¼ P12 ðrÞ þ
sðr þ 1Þ X
C ðr 1 i; i 1ÞðABÞi þ 1 ðABÞD
i¼1
sðr X þ 1Þ þ 1
C ðr 1 i; i 1ÞðABÞi þ 1 ðABÞD
i¼1
þ
sðr þ 1Þ X
C ðr 2 i; iÞðABÞi þ 1
i¼0
¼ P12 ðrÞ þ
sðr þ 1Þ X
C ðr 2 i; iÞðABÞi þ 1
i¼0
þ C ðr 2 sðr þ 1Þ; sðr þ 1ÞÞðABÞi þ 1 ðABÞD :
Since C ðr 2 sðr þ 1Þ; sðr þ 1ÞÞ ¼ 0, we have
M r þ 1X
11
¼ P12 ðrÞ þ
sðr þ 1Þ X
C ðr 2 i; iÞðABÞi þ 1
i¼0
¼ P12 ðrÞ þ
sðrÞ X
C ðr 2 i; iÞðABÞi þ 1
i¼0
¼ P11 ðrÞ: Similarly,
r þ1 X 21 ¼ P21 ðrÞ; M r þ 1 X 22 ¼ P22 ðrÞ: M
Hence M r þ 1 X ¼ M r . This completes the proof. In 2012, C. Deng extended the above theorems [69].
■
Sign Pattern for Generalized Inverses
34
A A Theorem 2.12 [69]. Let M ¼ 2 Cnn . If BAAp ¼ 0 and AAD ðBA ABÞ B 0 AAD ¼ 0, then ! p p i s P AA AA 0 0 Ri þ 2 MD ¼ R þ p p Ap BAAD 0 0 i¼0 A BA 0 0 I þR ; AAD BAp 0 where R¼ C1 ¼
"
~ p þ C2 B ~B ~p C1 B D ~B ~ þ C1 B ~B ~ p AD B
# ~ D þ C1 B ~p B ; ~ D þ C2 B ~B ~p B
r 1 X ~ j; ð1Þ j C ð2j þ 1; jÞðAD Þj þ 1 B
C2 ¼
j¼0
r 1 X ~ j; ð1Þ j C ð2j þ 2; jÞðAD Þj þ 2 B j¼0
~ ¼ AAD BAAD ; and indðBÞ ~ ¼ r: indðAÞ ¼ s; B For a rational n and an integer k, the binomial coefficient 8 Qk1 > < i¼0 ðniÞ ; if k [ 0; k Cn ¼ 1; k! if k ¼ 0; > : 0; if k\0: In 2009, C. Bu et al. established the representation for the Drazin inverse of E F under the condition EF ¼ FE [26]. In order to presented the proof of this I 0 result, we introduce some lemmas. Lemma 2.6. For the nonnegative integers n, k and l, the following hold P Pk i ki i ki k ¼ Cnk þ l , ki¼0 C1=2 C1=2 ¼ C1 ; (1) i¼0 Cn Cl
kl (2) Cnk Ckl ¼ Cnl Cnl ; nk k k1 , k 6¼ 0; (3) Cn ¼ Cn ¼ nk Cn1 P i i k lk (4) i > 0 ð1Þ Cn Cli ¼ Cln ;
k (5) Cl ¼ ð1Þk C k ; Pk l þ 1 i l þ k1 ki ki k (6) C2ðkiÞ ¼ Clk . i¼0 l þ 1i Cl þ 1i ð1Þ
Proof. Statements (1)–(5) are given in [100]. It follows from the below derivation that statement (6) holds,
Generalized Inverses of Partitioned Matrices k P i¼0
35
ki ki l þ1 i C2ðkiÞ l þ 1i Cl þ 1i ð1Þ
ð1Þ
¼ ðl þ 1Þð1Þ
k
k P
(
ð1Þi l þ 12i l þ 1i Cl þ 1i
ð2Þ
i¼0 k kP i P
ð3Þ
i¼0 j¼0 k kP i P
¼ ðl þ 1Þð1Þk
¼ ðl þ 1Þð1Þk
¼ ðl þ 1Þð1Þk ð4Þ
ð5Þ
i
ð1Þ kij i j l þ 1i Cl þ 1i Cl þ 1k þ j C2kl1 ð1Þi l þ 1k þ j
i¼0
j kj 1 l þ 1k þ j C2kl1 Ck1j j¼0 k þ 1Þð1Þk l þ1 1 C2kl1
¼ ðl þ 1Þð1Þk
¼ ðl
j¼0
!) j Clkij þ 12i C2kl1
lk þ j i j Cli Cl þ 1k þ j C2kl1 kj P j i lk þ j i 1 ð1Þ Cli Cl þ 1k þ j l þ 1k þ j C2kl1
i¼0 j¼0
k P j¼0 k P
kP i
k ¼ Clk : ðiÞ
The notation ¼ means the ¼ holds due to the statement ðiÞ.
■
Lemma 2.7. For E; F 2 Cnn if EF ¼ FE, then E D F ¼ FE D ; and EF D ¼ F D E. Proof. We know that the Drazin inverse of a square matrix is some polynomial of itself, so the conclusion is obvious. ■ E F 2 Cnn , where E and F are square. If Theorem 2.13 [26]. Let M ¼ I 0 EF ¼ FE, then 2 3 sðlÞ sðl1Þ P k l2k k P k l12k k þ 1 Clk E F Cl1k E F 6 7 6 k¼0 7 k¼0 ð2:7Þ M l ¼ 6 sðl1Þ 7; sðlÞ1 4 P k 5 P l12k k k l22k k þ 1 Cl1k E F Cl2k E F k¼0
k¼0
where l > 2 is a positive integer and sðlÞ is the integer part of 2l . Proof. This theorem is proved by induction. This theorem holds obviously for M 2 . Assume that (2.7) holds for M l . Then M l þ 1 is deduced as X11 X12 l þ1 l ¼M M ¼ M ; X21 X22
Sign Pattern for Generalized Inverses
36
where X11 ¼
sðlÞ X k¼0
¼
sðlÞ X k¼0
¼
k Clk E l þ 12k F k þ
k Clk E l þ 12k F k þ
sðl þ 1Þ X k¼0
¼
sðl þ 1Þ X k¼0
X12 ¼
sðlÞ X k¼0
sðl1Þ X k¼0
k Cl1k E l12k F k þ 1
sðl1Þ Xþ 1
k Clk E l þ 12k F k þ
k¼1 sðl þ 1Þ X k¼0
k1 l þ 12k k Clk E F
k1 l þ 12k k Clk E F
Clkþ 1k E l þ 12k F k ;
k Clk E l2k F k þ 1 :
Similarly, one can obtain sðl1Þ sðlÞ1 X X k k X21 ¼ Cl1k E l12k F k and X22 ¼ Cl2k E l22k F k þ 1 : k¼0
k¼0
■
Hence (2.7) holds.
E F 2 Cnn with square subblocks E and F I 0 satisfying EF ¼ FE. Let F be a nilpotent matrix and indðFÞ ¼ r þ 1 (r > 0). Then EDF p FF D ; (1) when r ¼ 0, M D ¼ F D þ ðE D Þ2 F p EF D (2) when r > 1, 2 3 r r D 2k þ 1 k D 2k k P P k k 1 k k 4 C E F 4 C E F 1=2 1=2 2 6 k¼0 7 7: MD ¼ 6 k¼1
r r 4 1P P 2k þ 2 k 2k þ 1 k 5 k þ1 k þ1 k 2 4k þ 1 C1=2 ED F 4k C1=2 þ 2C1=2 F ED Theorem 2.14 [26]. Let M ¼
k¼0
k¼1
ð2:8Þ
Proof. When r ¼ 0, the result holds obviously. Next it is proved that the result holds when r > 1. Since EF ¼ FE, it follows from lemma 2.7 that E D F ¼ FE D . In this proof, it is specified that ðE D Þ0 ¼ EE D and F 0 ¼ I . Let X be the right hand side of (2.8). In the following, we check that X is the Drazin inverse of M .
Generalized Inverses of Partitioned Matrices
37
Firstly, it is verified MX ¼ XM . Calculation shows the subblocks of MX and XM are ðMX Þ11 ¼ ¼
r D 2k k 1 X D 2k þ 2 k þ 1 k k þ1 4k C1=2 E F 4k þ 1 C1=2 E F 2 k¼0 k¼0
r X
r D 2k k 1 X D 2k k k k 4k C1=2 E F 4k C1=2 E F 2 k¼1 k¼0
r X
¼ EE D þ ðXM Þ11 ¼
r D 2k k 1 X D 2k k k k 4k C1=2 E F 4k C1=2 E F 2 k¼1 k¼0
r X
¼ EE D þ ðMX Þ12 ¼ ¼ ¼
ðMX Þ21 ¼
r D 2k k 1X k 4k C1=2 E F ; 2 k¼1
r r
2k þ 1 D 2k1 k X 1X k k k þ1 4k C1=2 E F þ 4k C1=2 þ 2C1=2 Fk þ1 ED 2 k¼1 k¼1
r1 r 1
2k þ 1 D 2k þ 1 k þ 1 X 1X k þ1 k k þ1 4k þ 1 C1=2 E F þ 4k C1=2 þ 2C1=2 Fk þ1 ED 2 k¼0 k¼0
r 1 X k¼0
ðXM Þ12 ¼
r D 2k k 1X k 4k C1=2 E F ; 2 k¼1
r X k¼0
r X k¼0
D 2k þ 1 k þ 1 k 4k C1=2 E F ; r1 D 2k þ 1 k þ 1 X D 2k þ 1 k þ 1 k k 4k C1=2 E F ¼ 4k C1=2 E F ; k¼0
D 2k þ 1 k k 4k C1=2 E F ;
r r
2k þ 1 D 2k þ 1 k X 1X k þ1 k k þ1 4k þ 1 C1=2 E F þ 4k C1=2 þ 2C1=2 Fk ED 2 k¼0 k¼1 r
2k þ 1 X 1 k þ1 k k þ1 ¼ 4k þ 1 C1=2 þ 4k C1=2 þ 2C1=2 Fk ED 2 k¼0
ðXM Þ21 ¼
¼
r X k¼0
ðMX Þ22 ¼ ðXM Þ22 ¼
D 2k þ 1 k k 4k C1=2 E F ;
r D 2k k 1X k 4k C1=2 E F ; 2 k¼1
r r D 2k þ 2 k þ 1 D 2k k 1X 1X k þ1 k 4k þ 1 C1=2 E F ¼ 4k C1=2 E F : 2 k¼0 2 k¼1
Sign Pattern for Generalized Inverses
38
Secondly, it is verified XMX ¼ X. Calculation shows the first subblock of XMX is ðXMX Þ11 ¼
r X k¼0
D 2k þ 1 k k 4k C1=2 E F
! ! r r X 1 X 2k þ 2 2k þ 1 k þ1 k þ 4k þ 1 C1=2 ED Fk þ1 4k C1=2 ED Fk 2 k¼0 k¼0 ! ! r r X D 2k þ 1 k þ 1 D 2k þ 2 k 1 X k k k þ1 k þ1 4 C1=2 E F 4 C1=2 E F 2 k¼0 k¼0 ¼
r X k¼0
¼
D 2k þ 1 k k 4k C1=2 E F
þ
r X r D 2i þ 2j þ 3 i þ j þ 1 1X j iþ1 4i þ j þ 1 C1=2 C1=2 E F 2 i¼0 j¼0
r X r 1X j þ 1 D 2i þ 2j þ 3 i þ j þ 1 i 4i þ j þ 1 C1=2 C1=2 E F 2 i¼0 j¼0
r X k¼0
D 2k þ 1 k k 4k C1=2 E F
¼ ðXÞ11 : The derivations of ðXMX Þ12 for the cases r > 2 and r ¼ 1 are similar, and we only deduce for the case r > 2 here. It follows from (1) of lemma 2.6 that !2 r r D 2k k 1 X D 2k k 1X k k k k ðXMX Þ12 ¼ 4 C1=2 E F 4 C1=2 E F 2 k¼1 4 k¼1 ! ! r r
2k þ 1 X 1 X 2k1 k1 k k þ1 þ 4k C1=2 ED Fk 4k C1=2 þ 2C1=2 Fk ED 4 k¼1 k¼1 r r n D 2k k X 2k 1X k ¼ 4k C1=2 E F þ 4k1 E D F k 2 k¼1 k¼2 !) k 1 k 1 k 1 X X X i ki i1 ki i1 k þ 1i C1=2 C1=2 þ C1=2 C1=2 þ 2 C1=2 C1=2 i¼1
i¼1
i¼1
r r D 2k k X 2k 1X k ¼ 4k C1=2 E F þ 4k1 0 E D F k 2 k¼1 k¼2
ð1Þ
¼
r D 2k k 1X k 4k C1=2 E F 2 k¼1
¼ ðXÞ12 : Similarly, we have ðXMX Þ21 ¼ ðXÞ21 and ðXMX Þ22 ¼ ðXÞ22 . Thus, XMX ¼ X.
Generalized Inverses of Partitioned Matrices
39
Finally, it is verified M l þ 1 X ¼ M l for all the integers l > indðM Þ. Since F is nilpotent and of index r þ 1, it follows from theorem 2.13 that 2 3 r rP 1 P k l2k k k l12k k þ 1 Clk E F Cl1k E F 6 7 6 7 k¼0 M l ¼ 6 r k¼0 7: rP 1 4P k l12k k k l22k k þ 1 5 Cl1k E F Cl2k E F k¼0
k¼0
By (5) of lemma 2.6, it yields
M l þ 1X
11
r X
¼
k¼0
Clkþ 1k E l þ 12k F k
r 1 X
þ
k¼0
¼
! k Clk E l2k F k þ 1
r k X X k¼0
!
i¼0
r X k¼0
k 4k C1=2 E
D 2k þ 1
! Fk
r D 2k þ 2 k 1X k þ1 4k þ 1 C1=2 E F 2 k¼0 !
!
ki Cliþ 1i 4ki C1=2 E l2k F k
! k X 1 i1 k þ 1i k þ 1i C 4 C1=2 E l2k F k 2 k¼1 i¼1 l þ 1i r X
! k 1X i1 k þ 1i k þ 1i ¼E þ Cl þ 1i 4 C1=2 E l2k F k 2 i¼1 k¼1 i¼0 ! r k X X 1 i 1 k l i ki ki ¼E þ Cl þ 1i Cli 4 C1=2 þ Clk E l2k F k 2 2 k¼1 i¼0 ! r k X 1X l þ1 1 k ki ki l i Cl þ 1i ð1Þ C2ðkiÞ þ Clk E l2k F k ¼E þ 2 l þ 1 i 2 i¼0 k¼1 r X 1 1 k ð5Þ l k ¼E þ Clk þ Clk E l2k F k 2 2 k¼1 l
¼
r X k¼0
r k X X
ki Cliþ 1i 4ki C1=2
k Clk E l2k F k
¼ M l 11 : Similarly, one can obtain the other subblocks of M l þ 1 X are equal to the corresponding subblocks of M l , with l þ1 X 12 ¼ M l 12 ; M l þ 1 X 21 ¼ M l 21 ; M l þ 1 X 22 ¼ M l 22 : M This completes the proof.
■
The remainder of an integer l divided by 2 is denoted by l%2. In 2010, C. Bu et al. gave the following result.
Sign Pattern for Generalized Inverses
40
B 2 Cnn with square subblocks A and D. D A B Suppose that indðAÞ ¼ iA , indðBC Þ ¼ iBC , ind ¼ iABC and C 0 indð0 DÞ ¼ iD . If ABC ¼ 0 and DC ¼ 0, then U ð1Þ T1 MD ¼ ; CU ð2Þ T2
A Theorem 2.15 [24]. Let M ¼ C
where iX D 1
T1 ¼ ðAD þ BCV ð2ÞÞBD D þ
V ði þ 1ÞBD i Dp
i¼0
þ
iABC X1
ðAp BCU ð2ÞÞRðiÞ AD þ BCV ð2Þ BSðiÞ ;
i¼1
T2 ¼ ðI CV ð1ÞBÞDD þ
iX D 1
CV ði þ 2ÞBD i Dp
i¼0
þ
iABC X1
ððCU ð1ÞÞRðiÞ þ ðI CV ð1ÞB ÞSðiÞÞ;
i¼1
RðiÞ ¼
sði1Þ X
ðBC Þk Ai12k BðDD Þi þ 1 ;
SðiÞ ¼
k¼0
þ ðBC ÞD V ðlÞ ¼ XðAD Þl
ðBC ÞD
sðl1Þ
k
ðAD Þl2k
sðl þ 1Þ YAAl%2 þ ðBC ÞD Al%2 Ap ;
sðl1Þ X k¼1
þ ðBC ÞD iBC 1 X
sðl1Þ X k¼1
X¼
C ðBC Þk Ai22k BðDD Þi þ 1 ;
k¼0
U ðlÞ ¼ XðAD Þl1
sðiÞ1 X
sðl1Þ
ðBC ÞD
k
ðAD Þl þ 12k
sðl þ 1Þ YAl%2 þ ð1Þl þ 1 ðBC ÞD ðAp Þl%2 ðAD Þðl þ 1Þ%2 ;
ðBC Þp ðBC Þi ðAD Þ2i þ 1 ;
sði AÞ X
ððBC ÞD Þi þ 1 Ap A2i1 ;
i¼1
i¼0
when k\j, it is defined
Y ¼
Pk
¼ 0. A B For the block matrix M ¼ , scholars give the representations for the C D Drazin inverse of M under the following different conditions: i¼j
(1) A ¼ I , DC ¼ 0 [130]; (2) BC ¼ 0, BD ¼ 0, DC ¼ 0 [102]; (3) Dp C ¼ C , BCAp ¼ 0, DCAp ¼ 0, AD BC ¼ 0 [63];
Generalized Inverses of Partitioned Matrices (4) (5) (6) (7) (8) (9) (10)
41
BC ¼ 0, BDC ¼ 0, BD 2 ¼ 0 [83]; BD i C ¼ 0, i ¼ 0; 1; . . .; n 1 [97]; D ¼ 0, BCAp ¼ 0, CAD B is invertible [68]; BD p C ¼ 0, BDD D ¼ 0, DD p C ¼ 0 [83]; ABC ¼ 0, D ¼ 0 [24]; BC ¼ 0, BD ¼ 0 [70]; or AB ¼ 0, D ¼ 0 [71].
For more conclusions, please refer to the references [45, 81, 90, 157, 195, 201, 230]. In [20], the representation for the Drazin inverse of an anti-triangular block matrix was established. Furthermore, we study the sign pattern for the Drazin inverse of the block matrix in chapter 5. Let A B M¼ 2 Cðn þ mÞðn þ mÞ ; ð2:9Þ C 0 where A 2 Cnn . The singular value decomposition gives that there exists unitary matrices U 2 Cnn and V 2 Cmm such that D 0 ; ð2:10Þ UBV ¼ 0 0 where the r r matrix D is invertible and positively diagonal, and r ¼ rankðB Þ. Let A1 A 2 C1 C2 UAU ¼ ; VCU ¼ ; ð2:11Þ A3 A 4 C3 C4 where A1 ; C1 2 Crr . Then hence
e U ; M ¼ UM D e U ; MD ¼ U M
3 A 1 D A2 0 6 7 e ¼ 6 C1 0 C2 0 7; M ð2:13Þ 4 A 3 0 A4 0 5 C3 0 C4 0 2 3 I 0 0 0 7 0 6 U 6 0 0 I 0 7 is unitary. Thus, we obtain a decomposition for and U ¼ 0 V 40 I 0 05 0 0 0 I A B the anti-triangular block matrix M ¼ and change the problem of finding C 0 the Drazin inverse of matrix M to the problem of finding that of the matrix in (2.13). Thus, we give the following Drazin inverse of an anti-triangular block matrix. where
2
ð2:12Þ
Sign Pattern for Generalized Inverses
42
Theorem 2.16 [20]. Let M be the form as in (2.9). If rankðBCBÞ ¼ rankðBÞ, and B X AB ¼ 0, then F E D M ¼ B þ þ B Z CEB þ þ B Z CF 2 B Z CEB þ AE B þ AE 0 0 F E I 0 þ Y þ Y BZ C 0 B þ B þ AE B Z CF B Z CE þ I 0 0 þ Y 2; BZ C 0 where Y ¼
i p i þ 2 " l1 X BB þ A B X A B X A B X 0 E i p þ B þ AE B þ BC B X A B X A B X i¼0 B ECF 0 ; B þ AF B þ AECF 0
0
#
0
E ¼ ðB þ BCBB þ Þ þ ; F ¼ ðB X AÞD ; l ¼ ind ðB X AÞ; and B X ¼ I BB þ ; B Z ¼ I B þB:
Proof. Since B X AB ¼ 0 and rankðBCBÞ ¼ rankðBÞ, is follows from (2.10) and (2.11) that A3 ¼ 0 and C1 is invertible. Then (2.13) implies 2 3 A 1 D A2 0 6 7 e ¼ 6 C1 0 C2 0 7: M 4 0 0 A4 0 5 C3 0 C4 0 Partition the above 2 A1 6 C1 e ¼6 M 4 0 C3 where
matrix as D 0 0 0
A2 C2 A4 C4 2
A1 A0 ¼ 4 C 1 0
3 0 07 7 ¼: A0 0 05 0
D 0 0
0 0 þ 0 B0
0 e þ B; e ¼: A 0
3 A2 C2 5; B0 ¼ ½C3 0 C4 : A4
It follows from corollary 2.2 that D 2 A0 0 D eD þ B eD : e e A M ¼ ¼A B0 0
ð2:14Þ
e D , partition A0 into the following four In order to obtain the expression of ð AÞ subblocks
Generalized Inverses of Partitioned Matrices 2
where
A1 C1
N1 ¼
3 A2 N1 5 C2 ¼: 0 A4
D 0 0
A1 A 0 ¼ 4 C1 0
43 N2 ; N4
D A2 ; N2 ¼ ; N4 ¼ A4 : 0 C2
Since C1 is invertible, we have N1 is invertible. Then theorem 2.1 gives that 1 N1 X ¼ AD ; 0 0 N4D P 1 i þ 2 N2 N4i N4p N11 N2 N4D , l ¼ indðN4 Þ. So where X ¼ l1 i¼0 N1 2 1 3 D N1 X 0 e D ¼ A0 0 ¼ 4 0 A N4D 0 5: 0 0 0 0 0 Combining (2.12) and (2.14) yields 2 e D U : e D U þ U B e D U ¼ U A e A M D ¼ UM e D U as Rewrite U A
2
N11 D e 4 UA U ¼ U 0 0
X N4D 0
3 0 f1 þ X e þ N4 ; 0 5U ¼ N 0
where
2 l1 X
0
0
f1 N c4 ¼ U6 f1 i þ 2 N f2 N f4 i N c4 N f2 N4 ; N N 4 0 N4p i¼0 0 0 2 1 3 2 3 0 N2 0 N1 0 0 7 7 f1 ¼ U6 f2 ¼ U6 N 0 0 5U ; N 4 0 4 0 0 0 5U ; 0 0 0 0 0 0 2 3 2 3 0 0 0 0 0 0 7 6 7 f4 ¼ U6 N 4 0 N4 0 5U ; N4 ¼ U4 0 N D 0 5U : e ¼ X
4
0
0
Calculation gives that f1 ¼ 0þ N B
0
ð2:15Þ
0
0
0
E BB þ AB X f ; N ¼ 2 þ B AE B þ BCB X
0 ; 0
ð2:16Þ
0
3
7 0 5U ; 0
Sign Pattern for Generalized Inverses
44
and
X f4 ¼ B A N 0
F 0 ; N4 ¼ 0 0
p X X 0 0 c4 ¼ B A B ; N ; 0 0 0 D F ¼ B X A . Since B X AB ¼ 0,
þ
and we have where E ¼ ðB þ BCBB þ Þ X D X X D . Substituting the above equations into (2.16), we obtain B A ¼ B AB E e D U ¼ F UA þY; ð2:17Þ B þ B þ AE where Y
¼
lP 1 i¼0
0 Bþ
E B þ AE
i þ 2 "
ECF B þ AF B þ AECF
Note that e ¼ U BU
i p BB þ A B X A B X A B X i p B þ BC B X A B X A B X 0 ; l ¼ indðB X AÞ: 0
0 BZ C
# 0 0
0 ; 0
ð2:18Þ
and 2 e D U ¼ U BU e D U U A e D U : e A e UA UB
ð2:19Þ
Substituting (2.17) and (2.18) into (2.19), we have 2 e D U e A UB ¼
F E 0 0 0 0 Y þ Y B þ B þ AE B Z CF B Z CE BZ C 0 0 0 0 0 2 þ Y : þ B Z CEB þ þ B Z CF 2 B Z CEB þ AE BZ C 0
Substituting (2.17) and (2.20) into (2.15), we obtain F E MD ¼ B þ þ B Z CEB þ þ B Z CF 2 B Z CEB þ AE B þ AE 0 0 F E I 0 þ Y þ Y BZ C 0 B þ B þ AE B Z CF B Z CE þ I 0 0 þ Y 2: BZ C 0 In the above theorem, if C ¼ B , one can obtain the following result.
ð2:20Þ
■
Generalized Inverses of Partitioned Matrices
45
A B Theorem 2.17 [20]. Let M ¼ 2 Cnn with a square A 2 Crr . If B 0 B X AB ¼ 0, then i þ 2 i p lP 1 0 ðB Þþ BB þ A B X A B X A B X 0 MD ¼ þ B þ AðB Þþ 0 0 i¼0 B " # X D ðB Þþ B A þ ; D Bþ Bþ A BX A B þ AðB Þþ where B X ¼ I BB þ ; B Z ¼ I B þ B; l ¼ indðB X AÞ.
2.2
Group Inverse of Partitioned Matrices
If the Drazin index of a matrix A is no larger than 1, that is rankðAÞ ¼ rankðA2 Þ, then AD ¼ A# . In this book, some equality and inequality on rank [150] play an important role in the research for the existence of group inverse. In the past few decades, rich conclusions on the existence and expressions for the group inverse of 2 2 block matrices have emerged. S.L. Campbell and C.D. Meyer [34], N. Castro-González and J.Y. Vélez-Cerrada [47] and Q. Xu et al. [210] gave that following expression on group inverse. B1 B1 T Theorem 2.18 [34, 47, 210]. For B ¼ 2 Cðm þ nÞðm þ nÞ with T 2 SB1 SB1 T Cmn , S 2 Cnm and invertible subblock B1 2 Cmm , we have (1) B is group invertible if and only if I þ TS is invertible; (2) if B is group invertible, then ½ðI þ TSÞB1 ðI þ TSÞ1 ½ðI þ TSÞB1 ðI þ TSÞ1 T B# ¼ ; S½ðI þ TS ÞB1 ðI þ TS Þ1 S½ðI þ TSÞB1 ðI þ TSÞ1 T
and
I ðI þ TSÞ1 B ¼ SðI þ TSÞ1 p
ðI þ TSÞ1 T : I SðI þ TSÞ1 T
A B C D B are invertible [58]. In over general field F under the conditions A and I þ CA2 A B 2001, C. Cao gave the existence and expression for M ¼ (where A and D 0 D are square) over skew fields K [36]. In 1996, X. Chen and R.E. Hartwig obtained the existence for M ¼
Sign Pattern for Generalized Inverses
46
A B Theorem 2.19 [36]. For M ¼ 2 Knn whit subblock A 2 Krr , M # exists 0 C if and only if A# and C # exist and rankðM Þ ¼ rankðAÞ þ rankðC Þ. If M # exists, then # A X # M ¼ ; 0 C# where X ¼ ðA# Þ2 BðI CC # Þ þ ðI AA# ÞBðC # Þ2 A# BC # . A 0 Theorem 2.20 [36]. For M ¼ 2 Knn with subblock A 2 Krr , M # exists B C if and only if A# and C # exist and rankðM Þ ¼ rankðAÞ þ rankðC Þ. If M # exists, then # A 0 M# ¼ ; X C# where X ¼ ðC # Þ2 BðI AA# Þ þ ðI CC # ÞBðA# Þ2 C # BA# . A B Corollary 2.3 [36]. For M ¼ 2 Knn with subblock A 2 Krr , M # exists if 0 0 and only if A# exists and rankðAÞ ¼ rank½A B . If M # exists, then # 2 # A B : A # M ¼ 0 0
0 2 Knn with A 2 Krr , M # exists if and only 0 A # if A exists and rankðAÞ ¼ rank . If M # exists, then B A# 0 # M ¼ : 2 B A# 0
Corollary 2.4 [36]. For M ¼
A B
In 2002, C. Bu et al. established some formulas for the product of two matrices over skew fields [14]. In 2005, N. Castro-González et al. gave the following expression [43]. I I Theorem 2.21 [43]. Let F ¼ , where E 2 Cnn and indðEÞ ¼ 1. Then E 0 Ep E# þ Ep F# ¼ : I Ep E #
Generalized Inverses of Partitioned Matrices
47
A B For the anti-triangular block matrix M ¼ , C. Cao et al. gave the exisC 0 tence and expressions for the group inverse of M under some conditions [37, 38, 182]. In [25], the existence and expressions for M # ware established when the subblock A is linear combination or product of B and C . Moreover, there are many results on group inverse of operator and matrices over rings [46, 209]. From 2008 to 2016, C. Bu et al. established the expressions for the group inverse of matrices with idempotent or invertible subblocks, and the group inverse of the Laplacian matrix of weighted graphs. Lemma 2.8 [14]. For A 2 Kmn and B 2 Knm , if rankðAÞ ¼ rankðBAÞ and rankðBÞ ¼ rankðABÞ, then ðABÞ# and ðBAÞ# exist. From the core-nilpotent decomposition, it is easy to obtain the following lemma. Lemma 2.9 [30]. For A 2 Knn , if A2 ¼ A, then there exists an invertible matrix P 2 Knn such that I 0 1 P ; A¼P r 0 0 where r ¼ rankðAÞ and Ir is the r r unity matrix. Lemma 2.10. For A; B 2 Knn , if A2 ¼ A, rankðAÞ ¼ r and rankðBÞ ¼ rankðBABÞ, then there exists an invertible matrix P 2 Knn such that B1 X B1 P 1 ; B¼P YB1 YB1 X where B1 2 Krr , X 2 KrðnrÞ , Y 2 KðnrÞr and B1# exists. Proof. Since rankðBÞ ¼ rankðBABÞ 6 rankðBAÞ 6 rankðBÞ, we have rankðBÞ ¼ rankðBAÞ: By rankðBÞ ¼ rankðBABÞ 6 rankðABÞ 6 rankðBÞ; we obtain rankðBÞ ¼ rankðABÞ: Then rankðAÞ ¼ rankðBAÞ ¼ rankðBÞ ¼ rankðABÞ: Since A2 ¼ A, and from lemma 2.9, there exists an invertible matrix P 2 Knn such that B B2 1 I 0 1 P ; B¼P 1 A¼P r P ; 0 0 B3 B4
Sign Pattern for Generalized Inverses
48
where B1 2 Krr ; B2 2 KrðnrÞ ; B3 2 KðnrÞr and B4 2 KðnrÞðnrÞ : Note that B1 0 1 B1 B2 1 P ; BA ¼ P P : AB ¼ P B3 0 0 0 Then rankðBÞ ¼ rankðABÞ, hence rank½ B1 B2 ¼ rankðBÞ. It is easy to see ðnrÞr there exists a matrix such that B3 ¼ YB1 ; B4 ¼ YB2 . Thus, we have Y 2K B2 B1 P 1 : Similarly, it follows from rankðAÞ ¼ rankðBAÞ that B¼P YB1 YB2 B1 B1 X B¼P P 1 ; X 2 KrðnrÞ : YB1 YB1 X Since rankðAÞ ¼ rankðBAÞ ¼ rankðBÞ ¼ rankðABÞ; we have rankðBÞ1 ¼ rankðB12 Þ ¼ rankðBÞ, that is B1# exists.
■
Lemma 2.11. For matrices A; B 2 Knn , if A2 ¼ A and rankðBÞ ¼ rankðBABÞ, then the follows hold ðBAÞ# BAB ¼ B; AðABÞ# ¼ ðABÞ# ; ðBAÞ# A ¼ ðBAÞ# ; ðABÞ# A ¼ AðBAÞ# ; ðBAÞ# B ¼ BðABÞ# ; ðABÞ# ABAðABÞ# ¼ ðABÞ# ; AðBAÞ# ðABÞ# AB ¼ ðABÞ# ; ðBAÞ# BAðABÞ# A ¼ ðBAÞ# ; BðABÞ# ABA ¼ BA:
(1) (2) (3) (4)
Proof. It follows from lemma 2.8 that ðABÞ# and ðBAÞ# exist. Corollary 2.4 and lemma 2.10 give that B1 0 1 B1 B1 X 1 P ; BA ¼ P P ; AB ¼ P 0 0 YB1 0 and
"
ðBAÞ# BAB ¼ P
B1#
YB1# B1 ¼P YB1 ¼ B:
#
B1 0 0 YB1 0 B1 X P 1 YB1 X 0
B1 YB1
B1 X P 1 YB1 X
■
Thus, (1) holds. From lemma 2.10, (2)–(4) can be obtained directly. A A Theorem 2.22 [30]. For M ¼ with A; B 2 Knn and A2 ¼ A, the following B 0 results hold (1) M # if and only if rankðBÞ ¼ rankðBABÞ; (2) if M # exists, then
Generalized Inverses of Partitioned Matrices
M
#
49
A ðABÞ# þ ðABÞ# A ðABÞ# ABA ¼ ðBAÞ# B þ ðBAÞ# ðABÞ# AB ðBAÞ#
Proof. Calculation shows that
rankðM Þ ¼ rank
A
A
B
0
A þ ðABÞ# A ðABÞ# ABA : ðBAÞ#
¼ rank
0
A
B
0
¼ rankðAÞ þ rankðBÞ; and
AB A ¼ rank BA BA 0 BA 0 A 0 A ¼ rank ¼ rank BAB BA BAB 0 ¼ rankðAÞ þ rankðBABÞ:
rankðM 2 Þ ¼ rank
A þ AB
A
We know M # exists if and only if rankðM Þ ¼ rankðM 2 Þ, that is rankðBÞ ¼ rankðBABÞ. Hence (1) holds. Y 1 Y2 # # Lemma 2.8 gives that ðABÞ and ðBAÞ exist. Let X ¼ , where Y 3 Y4 Y1 ¼ A ðABÞ# þ ðABÞ# A ðABÞ# ABA;
Y2 ¼ A þ ðABÞ# A ðABÞ# ABA;
Y3 ¼ ðBAÞ# B þ ðBAÞ# ðABÞ# AB ðBAÞ# ;
Y4 ¼ ðBAÞ# :
Next, we prove X satisfies the definition of group inverse of M . Firstly, we will show MXM ¼ M . Calculation gives A A Y 1 Y2 A A MXM ¼ B 0 Y 3 Y4 B 0
AðY1 þ Y3 ÞA þ AðY2 þ Y4 ÞB ¼ BY1 A þ BY2 B
AðY1 þ Y3 ÞA : BY1 A
It follows from lemma 2.11 that its (2,1)-subblock is BY1 A þ BY2 B ¼ BðA ðABÞ# þ ðABÞ# A ðABÞ# ABAÞA þ BðA þ ðABÞ# A ðABÞ# ABAÞB ¼ BA BðABÞ# ABA þ BAB þ BðABÞ# AB BðABÞ# ABAB ¼ ðBAÞ# BAB ¼ B:
Sign Pattern for Generalized Inverses
50
Similarly, we have AðY1 þ Y3 ÞA þ AðY2 þ Y4 ÞB ¼ A; AðY1 þ Y3 ÞA ¼ A; BY1 A ¼ 0: Secondly, we will show XMX ¼ X. Calculation gives Y 1 Y2 A A Y1 Y2 XMX ¼ Y 3 Y4 B 0 Y3 Y4 Y1 AY1 þ Y2 BY1 þ Y1 AY3 Y1 AY2 þ Y2 BY2 þ Y1 AY4 ¼ : Y3 AY1 þ Y4 BY1 þ Y3 AY3 Y3 AY2 þ Y4 BY2 þ Y3 AY4 It follows from lemma 2.11 that the (1,1)-subblock is Y1 AY1 þ Y2 BY1 þ Y1 AY3 ¼ Y1 þ AðBAÞ# B þ AðBAÞ# ðABÞ# AB AðBAÞ# þ ðABÞ# ABAðABÞ# ðABÞ# ABAðABÞ# A þ ðABÞ# ABAðABÞ# ABA ðABÞ# ABAðBAÞ# B ðABÞ# ABAðBAÞ# ðABÞ# AB þ ðABÞ# ABAðBAÞ# ðABÞ# þ ðABÞ# A ðABÞ# ABA ¼ Y1 þ ðABÞ# AB þ ðABÞ# ðABÞ# A þ ðABÞ# ðABÞ# A þ ðABÞ# ABA ðABÞ# AB ðABÞ# þ ðABÞ# A ðABÞ# þ ðABÞ# A ðABÞ# ABA ¼ Y1 : It is easy to see that Y1 AY2 þ Y2 BY2 þ Y1 AY4 ¼ Y2 ; Y3 AY1 þ Y4 BY1 þ Y3 AY3 ¼ Y3 ; Y3 AY2 þ Y4 BY2 þ Y3 AY4 ¼ Y4 ; and
A ðABÞ# ABA þ ðABÞ# AB MX ¼ XM ¼ ðBAÞ# BA ðBAÞ# B Thus, X ¼ M # . Theorem 2.23 [29]. For M ¼
A B
A 0
A ðABÞ# ABA : ðBAÞ# BA ■
with A; B 2 Knn , if rankðBÞ > rankðAÞ,
then (1) M is group invertible if and only if rankðAÞ ¼ rankðBÞ ¼ rankðABÞ ¼ rankðBAÞ; M11 M12 # , (2) if M is group invertible, then M ¼ M21 M22
Generalized Inverses of Partitioned Matrices
51
where M11 ¼ ðABÞ# A ðABÞ# A2 ðBAÞ# B;
M12 ¼ ðABÞ# A;
M22 ¼ BðABÞ# A2 ðBAÞ#
M21 ¼ ðBAÞ# B BðABÞ# A2 ðBAÞ# þ BðABÞ# AðABÞ# A2 ðBAÞ# B:
ð2:21Þ
A B ; the existence for M # was given under the conditions C D that A is invertible and the Schur complement S ¼ D CA1 B ¼ 0 [47]. In [73], For matrix M ¼
~¼ was given under the condition C the expression for M# AAp Ap BS D CAp Ap BS p is group invertible, where S ¼ D CAD B. Next, we S p CAp SS p establish the group inverse of M when A is invertible and the Schur complement S ¼ D CA1 B is group invertible. A B Lemma 2.12. Let M ¼ 2 Kmm , where A 2 Knn is invertible and C D S ¼ D CA1 B. Let S p ¼ Imn SS # . If S # exists and R ¼ A2 þ BS p C is invertible, then (1) CAR1 ¼ CA1 DS p CR1 ; (2) R1 AB ¼ A1 B R1 BS p D. Proof. From CA1 ¼ CA1 In ¼ CA1 ðA2 R1 þ BS p CR1 Þ ¼ CAR1 þ ðD SÞS p CR1 ; A1 B ¼ In A1 B ¼ ðR1 A2 þ R1 BS p C ÞA1 B ¼ R1 AB þ R1 BS p ðD SÞ; one can verify the result directly. ■ A B Theorem 2.24 [17]. Let M ¼ 2 Kmm , where A 2 Knn is invertible and C D S ¼ D CA1 B. Let S p ¼ Imn SS # . If S # exists, then (1) M # exists if and only if R is invertible, where R ¼ A2 þ BS p C ; (2) if M # exists, then X Y # ; M ¼ Z W where X ¼ AR1 ðA þ BS # C ÞR1 A; Y ¼ AR1 ðA þ BS # C ÞR1 BS p AR1 BS # ; Z ¼ S p CR1 ðA þ BS # C ÞR1 A S # CR1 A; W ¼ S p CR1 ðA þ BS # C ÞR1 BS p S # CR1 BS p S p CR1 BS # þ S # :
Sign Pattern for Generalized Inverses
52
Proof. Since A is invertible, we have A B A rankðM Þ ¼ rank ¼ rank C D 0 A 0 ¼ rank ; 0 S
B D CA1 B
where S ¼ D CA1 B. So rankðM Þ ¼ rankðAÞ þ rankðSÞ. If S # exists, then 2 A þ BC AB þ BD rankðM 2 Þ ¼ rank CA þ DC CB þ D 2 AB þ BD A2 þ BC ¼ rank CA þ DC CA1 ðA2 þ BC Þ CB þ D 2 CA1 ðAB þ BDÞ 2 A þ BC AB þ BD ¼ rank SC SD 2 A þ BC AB þ BD ðA2 þ BC ÞA1 B ¼ rank SC SD SCA1 B 2 A þ BC BS ¼ rank SC S2 2 R BS A þ BC BSS # C BS ¼ rank ¼ rank 0 S2 SC S 2 S # C S2 R 0 R BS BS # S 2 ¼ rank ¼ rank ; 0 S2 0 S2 hence rankðM 2 Þ ¼ rankðRÞ þ rankðS 2 Þ. We know M # exists if and only if rankðM Þ ¼ rankðM 2 Þ, then rankðM 2 Þ ¼ rankðRÞ þ rankðSÞ. Hence M # exists if and only if rankðAÞ ¼ rankðRÞ, that is M # exists if and only if R is invertible. Thus, (1) holds. X Y Let E ¼ : Next we will proveE ¼ M # . Calculation gives Z W AX þ BZ AY þ BW ME ¼ ; CX þ DZ CY þ DW EM ¼
XA þ YC
XB þ YD
ZA þ WC
ZB þ WD
;
Generalized Inverses of Partitioned Matrices
53
AX þ BZ ¼ A2 R1 A þ BS # C R1 A þ BS p CR1 A þ BS # C R1 A BS # CR1 A ¼ ðA2 R1 þ BS p CR1 Þ A þ BS # C R1 A BS # CR1 A ¼ In A þ BS # C R1 A BS # CR1 A ¼ AR1 A; AY þ BW ¼ ðA2 R1 þ BS p CR1 Þ A þ BS # C R1 BS p þ BS # ðA2 R1 þ BS p CR1 ÞBS # BS # CR1 BS p ¼ A þ BS # C R1 BS p þ BS # BS # BS # CR1 BS p ¼ AR1 BS p : From the statement (1) of lemma 2.12, we obtain CX þ DZ ¼ CAR1 A þ BS # C R1 A þ DS p CR1 A þ BS # C R1 A DS # CR1 A ¼ ðCA1 DS p CR1 Þ A þ BS # C R1 A DS # CR1 A þ DS p CR1 A þ BS # C R1 A ¼ CA1 A þ BS # C R1 A DS # CR1 A ¼ S p CR1 A; CY þ DW ¼ CAR1 A þ BS # C R1 BS p CAR1 BS # DS p CR1 BS # þ DS # þ DS p CR1 A þ BS # C R1 BS p DS # CR1 BS p ¼ CA1 A þ BS # C R1 BS p CA1 BS # DS # CR1 BS p þ DS # ¼ S p CR1 BS p þ SS # ; XA þ YC ¼ AR1 A þ BS # C R1 A2 þ AR1 A þ BS # C R1 BS p C AR1 BS # C ¼ AR1 A þ BS # C ðR1 A2 þ R1 BS p C Þ AR1 BS # C ¼ AR1 A þ BS # C In AR1 BS # C ¼ AR1 A: By statement (2) of lemma 2.12, we have XB þ YD ¼ AR1 A þ BS # C R1 AB þ AR1 A þ BS # C R1 BS p D AR1 BS # D ¼ AR1 A þ BS # C A1 B AR1 BS # D ¼ AR1 BS p ;
Sign Pattern for Generalized Inverses
54
ZA þ WC ¼ S p CR1 A þ BS # C ðR1 A2 þ R1 BS p C Þ S p CR1 BS # C þ S # C S # C ðR1 A2 þ R1 BS p C Þ ¼ S p CR1 A þ BS # C S p CR1 BS # C þ S # C S # C ¼ S p CR1 A; ZB þ WD ¼ S p CR1 ðA þ BS # C ÞR1 AB þ S p CR1 A þ BS # C R1 BS p D S # CR1 BS p D S p CR1 BS # D þ S # D S # CR1 AB ¼ S p CR1 A þ BS # C A1 B S # CA1 B S p CR1 BS # D þ S # D ¼ S p CR1 BS p þ SS # : Hence ME ¼ EM ¼
AR1 A
AR1 BS p
; #
S p CR1 A S p CR1 BS p þ SS A B AR1 A AR1 BS p ; MEM ¼ C D S p CR1 A S p CR1 BS p þ SS # X Y AR1 BS p AR1 A EME ¼ : S p CR1 A S p CR1 BS p þ SS # Z W
Lemma 2.12 shows that MEM ¼ M and EME ¼ E hold. Thus, we get E ¼ M # . ■ A B Let M ¼ 2 Cnn (where A and D are square), A is group invertible and C D the generalized Schur complement is S ¼ D CA# B. In [226], the expressions for the group inverse of M were given under the conditions (1) S ¼ 0; or (2) S # exists and CAp B ¼ 0. A B Lemma 2.13 [5]. Let M ¼ 2 Cnn (where A and D are square). If A is C D group invertible, then
rankðM Þ ¼ rank ðI CAp ðCAp Þð1Þ ÞðD CA# BÞðI ðAp BÞð1Þ Ap BÞ þ rankðAÞ þ rankðAp BÞ þ rankðCAp Þ; where Ap ¼ I AA# . Lemma 2.14 [5]. For A 2 Cmn andB 2 Cnt , then (1) RðABÞ ¼ RðAÞ if and only if rankðABÞ ¼ rankðAÞ; (2) N ðABÞ ¼ N ðBÞ if and only if rankðABÞ ¼ rankðBÞ, where RðAÞ and N ðAÞ denote the range space and null space of A, respectively.
Generalized Inverses of Partitioned Matrices
55
Lemma 2.15 [14]. For A 2 Cmn and B 2 Cnm , if rankðAÞ ¼ rankðBAÞ and rankðBÞ ¼ rankðABÞ, then AB and BA are group invertible. Lemma 2.16. ForA 2 Cnn , B 2 Cnt ; and C 2 Ctn , let V ¼ A2 þ BC . If A# exists, rankðV Þ ¼ rankðAÞ þ rankðAp BÞ and rankðCAp Þ ¼ rankðCAp BÞ ¼ rankðAp BÞ, then V is group invertible and the following are equivalent: ð1Þ VV # A ¼ A; ð2Þ VV # B ¼ B; ð3Þ AVV # ¼ A; ð4Þ CVV # ¼ C ; ð5Þ VV # Ap ¼ Ap VV # :
A . ThenV ¼ EF. Since A is group invertC ible, we have AAp ¼ Ap A ¼ 0. Lemma 2.13 gives 2 A AB rankðFEÞ ¼ rank ¼ rankðAÞ þ rankðCAp BÞ: CA CB Proof. LetE ¼ ½ A
B , and let F ¼
Hence rankðEÞ ¼ rankðAÞ þ rankðAp BÞ; rankðFÞ ¼ rankðAÞ þ rankðCAp Þ: Since rankðEFÞ ¼ rankðV Þ ¼ rankðAÞ þ rankðAp BÞ; and rankðCAp Þ ¼ rankðCAp BÞ ¼ rankðAp BÞ; we know rankðEÞ ¼ rankðFÞ ¼ rankðEFÞ ¼ rankðFEÞ: Then by lemma 2.15, V is group invertible. Next we prove (1)–(5) hold. Since rankðEFÞ ¼ rankðEÞ; from statement (1) of lemma 2.14, we have RðEFÞ ¼ RðEÞ. Hence EFðEFÞ# E ¼ E, that is VV # A ¼ A and VV # B ¼ B: Similarly, statement (2) of lemma 2.14 gives N ðEFÞ ¼ N ðFÞ. Then FðEFÞðEFÞ# ¼ F; that is AVV # ¼ A and CVV # ¼ C : Since VV # A ¼ A and AVV # ¼ A; we have VV # Ap ¼ VV # AA# ¼ Ap VV # : Thus, (1)–(5) hold. ■ Lemma 2.17 [25]. LetA; G 2 Cnn , where indðAÞ ¼ k. Then G ¼ AD if and only if Ak GA ¼ Ak ; AG ¼ GA and rankðGÞ rankðAk Þ:
A B Lemma 2.18 [73]. LetM ¼ 2 Cmm , where A 2 Cnn and the generalized C D Schur complement S ¼ D CAD B. If AAp Ap BS D CAp ¼ 0, Ap BS p ¼ 0, S p CAp ¼ 0, SS p ¼ 0 andCAp B ¼ 0, then M is group invertible if and only if W0 ¼ I þ AD BS p CAD is invertible. In this case,
Sign Pattern for Generalized Inverses
56
Ap B 0 BS p M ¼ R þ I WRW I þ R ; S pD CAp DS p D A þ AD BS D CAD AD BS D where R ¼ , W ¼ W01 I , Ap ¼ I AAD , and S D CAD SD S p ¼ I SS D . A B Theorem 2.25. Let M ¼ 2 Cmm , where A 2 Cnn . If A is group invertible C D and the generalized Schur complement S ¼ D CA# B ¼ 0, then #
0 S pC
(1) M is group invertible if and only if rankðA2 þ BC Þ ¼ rankðAÞ þ rankðAp BÞ; rankðCAp Þ ¼ rankðCAp BÞ ¼ rankðAp BÞ; (2) if M is group invertible, then V is group invertible and X Y ; M# ¼ Z W where X ¼ AV # AV # A þ AV # Ap þ Ap V # A; Z ¼ CV # AV # A þ CV # Ap ; # # p # # # 2 Y ¼ AV AV B þ A V B; W ¼ CV AV B;V ¼ A þ BC . Proof. (1) Since A is group invertible and S ¼ 0, by lemma 2.13, we have rankðM Þ ¼ rankðAÞ þ rankðAp BÞ þ rankðCAp Þ and 2 A þ BC AB þ BD rankðM 2 Þ ¼ rank CA þ DC CB þ D2 2 r2 þ ðCA# Þr1 A þ BC AB þ BD ¼ rank 0 CAp B c2 þ c1 ðA# B Þ 0 A2 þ BC ¼ rank 0 CAp B ¼ rankðA2 þ BC Þ þ rankðCAp BÞ: Then M is group invertible if and only if rankðAÞ þ rankðAp BÞ þ rankðCAp Þ ¼ rankðA2 þ BC Þ þ rankðCAp BÞ; that is rankðA2 þ BC Þ rankðAÞ rankðAp BÞ ¼ rankðCAp Þ rankðCAp BÞ:
Generalized Inverses of Partitioned Matrices
Since
rankðA þ BC Þ ¼ rank ½ A 2
we know
A B C
rank½ A
57
B ¼ rankðAÞ þ rankðAp BÞ;
rankðA2 þ BC Þ rankðAÞ rankðAp BÞ 0:
Obviously, rankðCAp Þ rankðCAp BÞ 0: Then M is group invertible if and only if rankðA2 þ BC Þ ¼ rankðAÞ þ rankðAp BÞ and rankðCAp Þ ¼ rankðCAp BÞ: Similarly, M # exists if and only if rankðA2 þ BC Þ ¼ rankðAÞ þ rankðCAp Þ and rankðAp BÞ ¼ rankðCAp BÞ: Hence M # exists if and only if rankðA2 þ BC Þ ¼ rankðAÞ þ rankðAp BÞ and rankðCAp Þ ¼ rankðCAp BÞ ¼ rankðAp BÞ:
(2) Statement (1) X Y LetG ¼ Z W A MG ¼ C X GM ¼ Z
of lemma 2.16 implies that V . Then
is group invertible.
X Y B AX þ BZ AY þ BW ¼ ; CA# B Z W CX þ CA# BZ CY þ CA# BW Y A B XA þ YC XB þ YCA# B ¼ ; W C CA# B ZA þ WC ZB þ WCA# B
and by statements (1) and (5) of lemma 2.16, we have ðMGÞ11 ¼ A2 V # AV # A þ A2 V # Ap þ BCV # AV # A þ BCV # Ap ¼ ðA2 þ BC ÞV # AV # A þ ðA2 þ BC ÞV # Ap ð1Þ
¼ VV # AV # A þ VV # Ap ¼ AV # A þ VV # Ap ; ð1Þ
ðMGÞ12 ¼ A2 V # AV # B þ BCV # AV # B ¼ VV # AV # B ¼ AV # B; ðMGÞ21 ¼ CAV # AV # A þ CAV # Ap þ CAp V # A þ CA# BCV # AV # A þ CA# BCV # Ap ¼ CA# ðA2 þ BC ÞV # AV # A þ CA# ðA2 þ BC ÞV # Ap þ CAp V # A ð1Þð5Þ
¼ CA# VV # AV # A þ CA# VV # Ap þ CAp V # A ¼ CV # A; ðMGÞ22 ¼ CAV # AV # B þ CAp V # B þ CA# BCV # AV # B ð1Þ
¼ CA# VV # AV # B þ CAp V # B ¼ CV # B:
Sign Pattern for Generalized Inverses
58
Statements (3) and (5) of lemma 2.16 give ðGM Þ11 ¼ AV # AV # A2 þ Ap V # A2 þ AV # AV # BC þ Ap V # BC ¼ AV # AV # V þ Ap V # V ð3Þð5Þ
¼ AV # A þ VV # Ap ;
ðGM Þ12 ¼ AV # AV # AB þ AV # Ap B þ Ap V # AB þ AV # AV # BCA# B þ Ap V # BCA# B ¼ AV # AV # ðA2 þ BC ÞA# B þ AV # Ap B þ Ap V # ðA2 þ BC ÞA# B ¼ AV # AV # VA# B þ AV # Ap B þ Ap V # VA# B ð3Þð5Þ
¼ AV # AA# B þ AV # Ap B ¼ AV # B; ð3Þ
ðGM Þ21 ¼ CV # AV # A2 þ CV # AV # BC ¼ CV # AV # V ¼ CV # A; ðGM Þ22 ¼ CV # AV # AB þ CV # Ap B þ CV # AV # BCA# B ð3Þ
¼ CV # AV # VA# B þ CV # Ap B ¼ CV # AA# B þ CV # Ap B ¼ CV # B:
Then
AV # A þ VV # Ap MG ¼ GM ¼ CV # A
AV # B ; CV # B
thus, we have B AV # A þ VV # Ap AV # B A MGM ¼ CV # A CV # B C CA# B AV # A2 þ AV # BC AV # AB þ VV # Ap B þ AV # BCA# B ¼ CV # A2 þ CV # BC CV # AB þ CV # BCA# B AV # V AV # VA# B þ VV # Ap B ¼ : CV # V CV # VA# B
Statements (2)–(5) of lemma 2.16 give MGM ¼ M : Hence AV # AV # A þ AV # Ap þ Ap V # A AV # AV # B þ Ap V # B G¼ CV # AV # A þ CV # Ap CV # AV # B p # AV # AV # A þ AV # Ap AV # AV # B A V A Ap V # B ¼ þ CV # AV # A þ CV # Ap CV # AV # B 0 0 p A # A V #2 ¼ V A V #2 B : V AV # A þ V # Ap V # AV # B þ 0 C
Generalized Inverses of Partitioned Matrices
So
A
V # AV # A þ V # Ap V # AV # B C p A V #2 þ rank V A V #2 B 0 p 2 A A ðA þ BC Þ rank þ rank C 0 p A A BC ¼ rank þ rank 0 C
rankðGÞ rank
59
rankðAÞ þ rankðCAp Þ þ rankðAp BÞ ¼ rankðM Þ: Then rankðGÞ rankðM Þ: Thus by lemma 2.17, we obtainG ¼ M # . ■ A B Corollary 2.5. Let M ¼ 2 Cmm , where A 2 Cnn . If D is group invertible C D and the Generalized Schur complement S ¼ A BD # C ¼ 0, then (1) M is group invertible if and only if rankðD 2 þ CBÞ ¼ rankðDÞ þ rankðDp C Þ; rankðBD p Þ ¼ rankðBD p C Þ ¼ rankðDp C Þ:
(2) If M is group invertible, then V is group invertible and X Y ; M# ¼ Z W where X ¼ BV # DV # C ; Y ¼ BV # DV # D þ BV # Dp ; V ¼ D2 þ CB; Z ¼ DV # DV # C þ Dp V # C ; W ¼ DV # DV # D þ DV # Dp þ Dp V # D: Proof. We know 1 A B 0 In D C 0 In M¼ ¼ ¼ PNP 1 ; C D Imn 0 B A Imn 0 0 In D C where P ¼ , and N ¼ . Then M is group invertible if and only Imn 0 B A if N # exists and M # ¼ PN # P 1 . Thus by theorem 2.25, this result is obtained. ■
Sign Pattern for Generalized Inverses
60
A B Theorem 2.26. LetM ¼ 2 Cmm , where A 2 Cnn is group invertible. If C D CAp B ¼ 0 and S ¼ D CA# B is group invertible, then (1) M is group invertible if and only if rankðA2 þ BS p C Þ ¼ rankðAÞ; Ap BS p ¼ 0; S p CAp ¼ 0; and Ap BS # CAp ¼ 0; (2) if M if group invertible, then 0 Ap B 0 R þ I WRW I þ R M# ¼ S pC S pD CAp where A# þ A# BS # CA# R¼ S # CA#
A# BS # S#
BS p DS p
;
; and W ¼ ðI þ A# BS p CA# Þ1 I :
Proof. (1) Since A and S are group invertible, we have A B A Ap B rankðM Þ ¼ rank ¼ rank CAp S CAp S A Ap BS # CAp Ap BS p ¼ rank CAp S A Ap BS # CAp Ap BS p ¼ rank S p CAp S p # p p A A BS CA A BS p ¼ rank þ rankðSÞ S p CAp 0 Ap BS # CAp Ap BS p ¼ rankðAÞ þ rank þ rankðSÞ; S p CAp 0
and
rankðM Þ ¼ rank 2
¼ rank
A2 þ BC
AB þ BD
CA þ DC CB þ D 2 2 A þ BC AB þ BD
SC 2 A þ BS p C ¼ rank SC
¼ rank
A2 þ BC
SC 2 A þ BC
¼ rank SD SC 2 0 A þ BS p C ¼ rank S2 0
¼ rankðA2 þ BS p C Þ þ rankðS 2 Þ:
AB þ BD CAp B þ SD BS
0 S2
S2
Generalized Inverses of Partitioned Matrices
61
(1) The sufficiency is obviously. Next we show the necessity. If M is group invertible, then rankðM Þ ¼ rankðM 2 Þ, that is Ap BS # CAp Ap BS p ¼ rankðA2 þ BS p C Þ: rankðAÞ þ rank S p CAp 0 Then rankðA þ BS C Þ ¼ rank ½ A p
2
¼ rank A
p
BS
A
rankð½ A
C
BS p Þ
BS p AA# BS p ¼ rank½ A
Ap BS p
¼ rankðAÞ þ rankðAp BS p Þ; that is rankðA2 þ BS p C Þ rankðAÞ þ rankðAp BS p Þ: Hence
Ap BS # CAp Ap BS p rankðAÞ þ rankðA BS Þ rankðAÞ þ rank S p CAp 0 2 p ¼ rankðA þ BS C Þ rankðAÞ þ rankðAp BS p Þ; p
so
rank
p
Ap BS # CAp S p CAp
Thus, we have S p CAp ¼ 0. Next we consider p
Ap BS p 0
¼ rankðAp BS p Þ:
A
A
rankðA þ BS C Þ ¼ rank ½ A B p rank S C S pC A A ¼ rank p ¼ rank p S C S p CAA# S CAp 2
¼ rankðAÞ; that is rankðA2 þ BS p C Þ rankðAÞ: Then
Ap BS # CAp rankðAÞ þ rank S p CAp
Ap BS p 0
¼ rankðA2 þ BS p C Þ rankðAÞ;
Sign Pattern for Generalized Inverses
62
hence
rank
Ap BS # CAp S p CAp
Ap BS p 0
¼ 0:
Thus, we know M is group invertible if and only if rankðA2 þ BS p C Þ ¼ rankðAÞ; Ap BS p ¼ 0; S p CAp ¼ 0 and Ap BS # CAp ¼ 0: (2) Since M is group invertible, we obtain the expression of M # from lemma 2.18. ■
A Lemma 2.19 [222]. Let M ¼ B
B C
be a Hermition. Then Ap B ¼ 0; BC p ¼ 0.
In what follow, A > 0 and A 6 0 mean that the entries of A are respectively nonnegative and nonpositive. Lemma 2.20 [7]. Let A be a nonsingular M -matrix. Then A1 > 0. Lemma 2.21 [121]. Let L be the Laplacian matrix of a weightedly connected graph. Then each proper principal submatrix of L is a nonsingular M -matrix. L1 L2 Theorem 2.27 [19]. Let L ¼ be the Laplacian matrix of a weighted graph L> L3 2 G (where L1 and L3 are square matrices). Then the following hold (1) Lp1 L2 ¼ 0, L2 Lp3 ¼ 0; # (2) the generalized Schur complement S ¼ L3 L> 2 L1 L2 of L is a Laplacian matrix of a weighted graph. Proof. Since L is a positive semidefinite matrix, it follows from lemma 2.19 that Lp1 L2 ¼ 0 and L2 Lp3 ¼ 0. There exists a permutation matrix Q such that L1 ¼ Q diag½L1 L2 Ls Q > ; where Li ði ¼ 1; 2; . . .; rÞ is a connected component of G and Li ði ¼ r þ 1; . . .; sÞ is a principal submatrix of the Laplacian matrix of a connected component ofG. Let B ¼ Q > L2 ¼ ½ B 1
B2
Bs > ;
where the number of the columns of Bi equals the number of the rows of Li . When 1 6 i 6 r, we have Li being the Laplacian matrix of a connected component of G. Then Bi ¼ 0 ði ¼ 1; 2; . . .; rÞ. When r þ 1 6 i 6 s, we have Li being a principal submatrix of the Laplacian matrix of a connected component ofG. Lemma 2.21 gives that Li ði ¼ r þ 1; . . .; sÞ is an invertible M -matrix. So
Generalized Inverses of Partitioned Matrices
63
# # 1 1 > L# 1 ¼ Q diag½L1 Lr Lr þ 1 Ls Q ; # S ¼ L3 L > 2 L1 L2 # 1 1 ¼ L3 B > diag½L# 1 Lr Lr þ 1 Ls B s X > Bi L1 ¼ L3 i Bi : i¼r þ 1
Since Li is a nonsingular M -matrix, it follows from lemma 2.20 that > 0 ði ¼ r þ 1; . . .; sÞ. We know Q is a permutation matrix, then Bi 6 0, and P > each nondiagonal entry of S ¼ L3 si¼r þ 1 Bi L1 i Bi is no greater than 0. From LJ ¼ 0, we obtain L1 J þ L2 J ¼ 0 and L> 2 J þ L3 J ¼ 0. Then
L1 i
# > > > # > p p SJ ¼ L3 J L> 2 L1 L2 J ¼ L2 J þ L2 L1 L1 J ¼ L2 L1 J ¼ ðL1 L2 Þ J ¼ 0: # It is easy to see S ¼ L3 L> 2 L1 L2 is symmetric matrix. Since each nondiagonal entry of S is less than or equal to 0 and SJ ¼ 0, we have S being the Laplacian matrix of a weighted graph. ■ L1 L2 Theorem 2.28 [19]. Let L ¼ be the Laplacian matrix of a weighted graph L> L3 2 G (where L1 and L3 are square). Then X Y # ; L ¼ Y> Z
where X ¼ L1 R# KR# L1 ;
Y ¼ L1 R# KR# L2 S p L1 R# L2 S # ;
# # p # > # p p > # # # Z ¼ S p L> 2 R KR L2 S S L2 R L2 S S L2 R L2 S þ S ;
R ¼ L21 þ L2 S p L> 2;
K ¼ L1 þ L2 S # L> 2;
# S ¼ L3 L> 2 L1 L2 :
Proof. Since L1 and L3 are symmetric, there exist orthogonal matrices P1 and P2 such that D1 0 > D2 0 > P ; L 3 ¼ P2 P ; L1 ¼ P 1 0 0 1 0 0 2 where D1 and D2 are invertible diagonal matrices. Then 1 1 D1 0 > # D2 0 > ¼ P ; L ¼ P L# P P : 1 2 1 3 0 0 1 0 0 2 M1 M2 > Let L2 ¼ P1 P . Theorem 2.27 gives Lp1 L2 ¼ 0 and Lp3 L> 2 ¼ 0. Hence M3 M4 2 M2 ¼ 0, M3 ¼ 0 and M4 ¼ 0, thus, we have
Sign Pattern for Generalized Inverses
64 2 D1 6 0 6 0 6 P2 4 M1>
P1 L ¼ 0 #
¼U
M#
0
0
0
0
3#
0
M1
0 0
0 D2
07 7 7 05
0
0
0
0
"
P1> 0
0 P2>
#
U 1 ;
2 3 I 0 0 0 7 D1 0 6 6 0 0 I 0 7 and D1 is an invertible where M ¼ > 4 M1 P2 0 I 0 05 0 0 0 I diagonal matrix. Note that the Schur complement D2 M1> D1 1 M1 is a real symmetric matrix, then it is group invertible. Theorem 2.24 implies that e e X Y # M ¼ e> f ; Y W
M1 P1 , U ¼ D2 0
where e p D1 R e #; e ¼ D1 R e 1 K eR e 1 D1 ; Y e 1 K eR e 1 M1 S e 1 M1 S e ¼ D1 R X e pM > R e #M > R e pM > R e #; ep S ep S e# þ S f¼S e 1 K eR e 1 M1 S e 1 M1 S e 1 M1 S W 1
1
1
e ¼ D2 M > D1 M1 : e pM >; K e ¼ D2 þ M1 S e ¼ D1 þ M1 S # M > ; S R 1 1 1 1 1 # M 0 From L# ¼ U U 1 , one obtains the expression for L# . 0 0
■
For an m n matrix A, let a and b be the subsets of the row index set ½m ¼ f1; 2; . . .; mg and column index set ½n ¼ f1; 2; . . .; ng, respectively. Let a and b denote the complements of a and b in ½m and ½n, respectively. Let A½a; b denote the submatrix of A with row index set a and column index set b. Next, we use the example in figure 2.1 to illustrate theorem 2.28.
FIG. 2.1 – Weighted graph G.
Generalized Inverses of Partitioned Matrices Example 2.1. The Laplacian 2 2 6 0 6 6 0 6 6 0 L¼6 6 0 6 6 0 6 4 0 2
65
matrix of the graph G in figure 2.1 is 3 0 0 0 0 0 0 2 0 0 0 0 0 0 0 7 7 0 4 1 0 1 2 0 7 7 0 1 7 3 1 2 0 7 7: 0 0 3 5 2 0 0 7 7 0 1 1 2 7 3 0 7 7 0 2 2 0 3 7 0 5 0 0 0 0 0 0 2
Let
L1 L¼ L> 2
L2 ; L3
ð2:23Þ
where L1 ¼ L½f1; 2g; f1; 2g, L2 ¼ L½f1; 2g; f1; 2g, and L3 lation shows 2 4 1 0 1 6 1 7 3 1 6 6 0 3 5 2 # 6 S ¼ L3 L > 2 L1 L2 ¼ 6 1 1 2 7 6 4 2 2 0 3 0 0 0 0 2
548=11725 139=1675 1152=11725 11=1675 11=1675 261=1675 363=11725 34=1675 318=11725 99=1675 0 0
2052=11725 6 548=11725 6 6 139=1675 S# ¼ 6 6 438=11725 6 4 93=11725 0
2
1=5 6 1=5 6 Sp ¼ 6 6 1=5 4 1=5 0
1=5 1=5 1=5 1=5 0
1=5 1=5 1=5 1=5 0
¼ L½f1; 2g; f1; 2g. Calcu2 2 0 3 7 0
438=11725 363=11725 34=1675 1122=11725 83=11725 0
1=5 1=5 1=5 1=5 0
0 0 0 0 1
3 0 07 7 07 7; 07 7 05 0 93=11725 318=11725 99=1675 83=11725 1187=11725 0
þ L2 S p L> 2
8 ¼ 0
3 0 07 7 07 7; 07 7 05 0
3 7 7 7; 7 5
0 1=8 0 2 # # > R¼ ;R ¼ ; K ¼ L1 þ L2 S L2 ¼ 0 0 0 0 X Y From theorem 2.28, we have L# ¼ , where Y> Z L21
1=5 1=5 1=5 1=5 0
ð2:22Þ
0 : 0
Sign Pattern for Generalized Inverses
66
X
Z
1=8 0 0 0 0 0 ¼ ; Y ¼ 0 0 0 0 0 0 2 2052=11725 548=11725 6 548=11725 1152=11725 6 6 139=1675 11=1675 ¼6 6 438=11725 363=11725 6 4 93=11725 318=11725 0 0
0 1=8 ; 0 0 139=1675 11=1675 261=1675 34=1675 99=1675 0
438=11725 363=11725 34=1675 1122=11725 83=11725 0
93=11725 318=11725 99=1675 83=11725 1187=11725 0
3 0 0 7 7 0 7 7: 0 7 7 0 5 1=8
Applying theorem 2.28, a formula to compute the resistance distance is established. Theorem 2.29 [19]. Let L be the Laplacian matrix of a weighted graph G, and let i and j be two vertices of G belonging to the same connected component. Then the resistance distance between i and j is Xij ¼ X> , where ¼ ½ 1 1 ; # S ¼ L3 L> 2 L1 L2 ;
X ¼ L1 R# KR# L1 ; L1 ¼ L½fi; jg; fi; jg;
R ¼ L21 þ L2 S p L> 2;
K ¼ L1 þ L2 S # L> 2;
L2 ¼ L½fi; jg; fi; jg;
L3 ¼ Lðfi; jg; fi; jgÞ:
Proof. There exists a permutation matrix P such that L ¼ P whereL1 ¼ L½fi; jg; fi; jg. Theorem 2.28 gives X Y ; L# ¼ Y> Z
L1 L> 2
L2 > P , L3
where X ¼ L1 R# KR# L1 ; Y ¼ L1 R# KR# L2 S p L1 R# L2 S # ; # # p # > # p p > # # # Z ¼ S p L> 2 R KR L2 S S L2 R L2 S S L2 R L2 S þ S ; # > > # R ¼ L21 þ L2 S p L> 2 ; K ¼ L1 þ L2 S L 2 ; S ¼ L3 L2 L1 L2 :
It follows from theorem 1.28 that the resistance distance between i and j is ■ Xij ¼ ½ 1 1 X ½ 1 1 > . Next, we present an example of using the above formula to compute the resistance distance between vertices 4 and 6 in figure 2.1. Example 2.2. Let G be the weighted graph in figure 2.1, and let L (as in (2.22)) be the Laplacain matrix of G. Partition L as in (2.23), where L1 ¼ L½f4; 6g; f4; 6g; L2 ¼ L½f4; 6g; f4; 6g; and L3 ¼ Lðf4; 6g; f4; 6gÞ. We obtain
Generalized Inverses of Partitioned Matrices 2
S
2 6 0 6 6 0 # 6 ¼ L3 L > L L ¼ 2 1 2 6 0 6 4 0 2 2
1=8 6 0 6 6 0 # S ¼6 6 0 6 4 0 1=8
0 0 0 0 0 0
1=2 6 0 6 6 0 ¼6 6 0 6 4 0 1=2
Sp
K
¼
L1 þ L 2 S # L > 2
R
#
0 0 0 0 0 11=3 0 5=6 0 17=6 0 0
0 0 188=1407 142=1407 46=1407 0 2
31=1920 1=1920 ¼ ; 1=1920 31=1920
0 0 5=6 137=48 97=48 0
0 0 17=6 97=48 233=48 0
0 0 0 0 142=1407 46=1407 227=1407 85=1407 85=1407 131=1407 0 0
0 0 1 0 0 1=3 0 1=3 0 1=3 0 0
3516=469 ¼ 372=469
67
0 0 1=3 1=3 1=3 0
372=469 ; 3420=469
3 2 0 7 7 0 7 7; 0 7 7 0 5 2
3 1=8 0 7 7 0 7 7; 0 7 7 0 5 1=8
3 1=2 0 7 7 0 7 7; 0 7 7 0 5 1=2
0 0 1=3 1=3 1=3 0
R¼
L21
þ L2 S p L> 2
62 ¼ 2
2 ; 62
1152=11725 363=11725 X ¼ L1 R KR L1 ¼ : 363=11725 1122=11725 #
#
By theorem 2.29, the resistance distance between 4 and 6 is 1 1152=11725 363=11725 ¼ 120=469: X46 ¼ ½ 1 1 1 363=11725 1122=11725
Q1 Q2 Let Q ¼ be the signless Laplacian matrix of a weighted graph G Q2> Q3 (where Q1 and Q3 are square). We know Q is positive semidefinite. Lemma 2.19 implies Q1p Q2 ¼ 0 andQ2 Q3p ¼ 0. Applying those properties, the expression of Q # can be obtained. Q1 Q2 Theorem 2.30 [19]. Let Q ¼ be the signless Laplacian matrix of a Q2> Q3 weighted graph G, where Q1 and Q3 are square. Then X Y ; Q# ¼ Y> Z
Sign Pattern for Generalized Inverses
68
where
X ¼ Q1 R# KR# Q1 ; Y ¼ Q1 R# KR# Q2 S p Q1 R# Q2 S # ; Z ¼ S p Q2> R# KR# Q2 S p S # Q2> R# Q2 S p S p Q2> R# Q2 S # þ S # ; R ¼ Q12 þ Q2 S p Q2> ; K ¼ Q1 þ Q2 S # Q2> ; S ¼ Q3 Q2> Q1# Q2 :
For a complex matrix B, let P B ¼ I BB þ . Clearly, B þ P B ¼ 0 and P B B ¼ 0. Then we have the following result. Lemma 2.22 [227]. For B 2 Cnm ; C 2 Cms , if rankðBÞ ¼ rankðBC Þ, then P ðBCC þ Þ B ¼ 0. In the following, the expression for the group inverse of an anti-triangular matrix is given, and we will study the sign pattern of the group inverse in chapter 5. A B Theorem 2.31 [227]. Let M ¼ 2 Cðn þ mÞðn þ mÞ , and C 2 Cmn be a C 0 matrix with full column rank. Then (1) M # exists if and only if rankðBÞ ¼ rankðBC Þ and BC þ P ðBCC þ Þ A is invertible; (2) if M # exists, then FP ðBCC þ Þ þ P B AE 1 BB þ FB ; M# ¼ GP ðBCC þ Þ þ CE 1 BB þ GB where E ¼ BC þ P ðBCC þ Þ A; G ¼ ½CE 1 P ðBCC þ Þ CE 1 BB þ AE 1 ; ½P B AE 1 P ðBCC þ Þ þ BB þ P B AE 1 BB þ AE 1 :
and
Proof. Since C is of full column rank. C þ C ¼ I . Then 0 B A AC þ C B rankðM Þ ¼ rank ¼ rank ¼ rankðBÞ þ n: C 0 C 0 Then we have rankðM 2 Þ ¼ rank ¼ rank
A2 þ BC
AB
CA CB A þ BC AC þ CA AB AC þ CB 2
CA CB BC 0 I ¼ rank ¼ rank CA CB 0 C A BC 0 6 rank ¼ rankðBC Þ þ n 0 C
BC
0
6 rankðBÞ þ n ¼ rankðM Þ:
0 B
F¼
Generalized Inverses of Partitioned Matrices
69
If M # exists, then rankðM Þ ¼ rankðM 2 Þ; and rankðBÞ ¼ rankðBC Þ. Since C has D full column rank, there exist unitary matrices U and V such that C ¼ U V , 0 b ; and B ¼ V ½ B1 B2 U , where D is invertible diagonal matrix. Let A ¼ V AV nn where B1 2 C . Then b DB1 DB2 D A BC ¼ VB1 DV ; CA ¼ U U : V ; CB ¼ U 0 0 0 If M # exists, then rank½ B1 B2 ¼ rankðBÞ ¼ rankðBC Þ ¼ rankðB1 Þ. Hence there exists X 2 CnðmnÞ such that B2 ¼ B1 X. Thus, we have 2 3 0 0 B1 D BC 0 6 b 7 rankðM 2 Þ ¼ rank DB1 DB1 X 5 ¼ rank4 D A CA CB 0 0 0 B1 D 0 ¼ rank b : A B1 Applying the singular value decomposition, there exist unitary matrices P and Q S 0 Q , where S is an invertible diagonal matrix. Suppose that such that B1 ¼ P 0 0 b ¼ P A1 A2 Q ; and D ¼ Q T1 T2 Q , then A A3 A 4 T3 T4 2 3 ST1 ST2 0 0 6 0 B1 D 0 0 0 07 6 7 rankðM 2 Þ ¼ rank b ¼ rank6 7 4 A1 A B1 A2 S 0 5 2
ST1
6 ¼ rank4 A3 0 ST1 ¼ rank A3
ST2
3
A3
A4
ST1 7 A4 0 5 ¼ rank A3 0 S ST2 þ rankðBÞ: A4 0
0
0
ST2 þ rankðB1 Þ A4
So M # exists if and only if rankðM 2Þ ¼ rankðM Þ ¼ rankðBÞ þ n, or if and only if ST1 ST2 rankðBÞ ¼ rankðBC Þ and is invertible. By the singular value decomA3 A4 position of B1 , we obtain
Sign Pattern for Generalized Inverses
70
S 1 ¼Q 0 b ¼ P ST1 then B1 D þ P B1 A A3 sition of C that B1þ
þ
C ¼V D
1
0 U ;
0 0 0 þ P ; P ; P B1 ¼ I B1 B1 ¼ P 0 I 0 ST2 Q . It follows from the singular value decompoA4 þ
BCC ¼ V ½ B1
0 U ;
þ þ
ðBCC Þ ¼ U
B1þ 0
V ;
b : P ðBCC þ Þ ¼ V P B1 V ; BC þ P ðBCC þ Þ A ¼ V ðB1 D þ P B1 AÞV ST1 ST2 It is clearly that is invertible if and only if BC þ P ðBCC þ Þ A is A3 A4 invertible. Hence M # exists if and only if rankðBÞ ¼ rankðBC Þ and BC þ P ðBCC þ Þ A is invertible. Let FP ðBCC þ Þ þ P B AE 1 BB þ FB X¼ ; GP ðBCC þ Þ þ CE 1 BB þ GB where E ¼ BC þ P ðBCC þ Þ A; G ¼ ½CE 1 P ðBCC þ Þ CE 1 BB þ AE 1 ; and F ¼ ½P B AE 1 P ðBCC þ Þ þ BB þ P B AE 1 BB þ AE 1 : Lemma 2.22 gives P ðBCC þ Þ B ¼ 0. Then P ðBCC þ Þ P B ¼ P ðBCC þ Þ ðI BB þ Þ ¼ P ðBCC þ Þ . Then we have " # FP ðBCC þ Þ þ P B AE 1 BB þ FB A B XM ¼ GP ðBCC þ Þ þ CE 1 BB þ GB C 0 FE þ P B AE 1 BB þ A P B AE 1 B ¼ GE þ CE 1 BB þ A CE 1 B " # P B AE 1 P ðBCC þ Þ þ BB þ P B AE 1 B ¼ ; CE 1 P ðBCC þ Þ CE 1 B " XM ¼ 2
P B AE 1 P ðBCC þ Þ þ BB þ
CE 1 P ðBCC þ Þ P B A þ BB þ A B ¼ C 0 A B ¼ C 0 ¼ M:
P B AE 1 B CE 1 B
#
A
B
C
0
Generalized Inverses of Partitioned Matrices
71
We know P ðBCC þ Þ B ¼ 0 and P ðBCC þ Þ P B ¼ P ðBCC þ Þ . Let H ¼ P ðBCC þ Þ . Then FH þ P B AE 1 BB þ FB P B AE 1 H þ BB þ P B AE 1 B X 2M ¼ GH þ CE 1 BB þ CE 1 H GB CE 1 B Y1 Y 2 ; ¼ Y3 Y 4 where Y1 ¼ FHAE 1 H þ P B AE 1 BB þ þ FBCE 1 H ¼ FH þ P B AE 1 BB þ ; Y2 ¼ FHAE 1 B þ FBCE 1 B ¼ FB; Y3 ¼ GHAE 1 H þ CE 1 BB þ þ GBCE 1 H ¼ GH þ CE 1 BB þ ; Y4 ¼ GHAE 1 B þ GBCE 1 B ¼ GB: Then X 2 M ¼ X. It follows from theorem 1.24 that X ¼ M # .
■
Furthermore, one obtains the following result from the above theorem. A B Theorem 2.32 [227]. Let M ¼ 2 Cðn þ mÞðn þ mÞ with a full row rank subC 0 block B 2 Cnm . Then M # exists if and only if rankðC Þ ¼ rankðBC Þ and BC þ AðB þ BC Þp is invertible. If M # exists, then þ ðB BC Þp F þ C þ CE 1 AC p ðB þ BC Þp G þ C þ CE 1 B ; M# ¼ CF CG where E ¼ BC þ AðB þ BC Þp ; G ¼ E 1 ½ðB þ BC Þp E 1 B AC þ CE 1 B; and F ¼ E 1 ½ðB þ BC Þp E 1 AC p þ C þ C AC þ CE 1 AC p :
2.3
Additive Formulas for Drazin Inverse and Group Inverse
In 1958, M.P. Drazin gave the formula for the Drazin inverse of the sum of matrices P and Q under the condition PQ ¼ QP ¼ 0. Theorem 2.33 [84]. For P; Q 2 Cnn , if PQ ¼ QP ¼ 0, then ðP þ QÞD ¼ P D þ Q D . After that, other formulas for P þ Q emerged under more general conditions. In 2001, R.E. Hartwig et al. extended the above formulas. Theorem 2.34 [102]. Let P; Q 2 Cnn , with the core-nilpotent decompositions P ¼ CP þ NP andQ ¼ CQ þ NQ , respectively. If PQ ¼ QP, then ðP þ QÞD ¼ ðCP þ CQ ÞD ½I þ ðCP þ CQ ÞD ðNP þ NQ Þ1 ¼ ðCP þ CQ ÞD ½I þ ðCP þ CQ ÞD NP 1 ½I þ ðCP þ CQ ÞD NQ 1 :
Sign Pattern for Generalized Inverses
72
Theorem 2.35 [102]. Let P; Q 2 Cnn . If PQ ¼ 0, then ðP þ Q ÞD ¼ Q p ðP þ Q ÞðP þ Q ÞD ¼ Q p
t1 X
t1 i þ 1 X i þ 1 i p Qi P D þ QD PP ;
i¼0
i¼0
t1 X
t1 X i þ 1 i þ 1 i p Qi P D P þQ QD P P þ QQ D PP D ;
i¼0
i¼0
where maxfindðPÞ; indðQÞg 6 t 6 indðPÞ þ indðQÞ: In 2005, Castro-González gave the Drazin inverse of P þ Q under the conditionsP D Q ¼ 0, PQ D ¼ 0 and Q p PQP p ¼ 0 [42]. In 2009, M.F. Martínez-Serrano and N. Castro-González gave the Drazin inverse of P þ Q under the conditions P 2 Q ¼ 0 and Q 2 ¼ 0 [149]. In 2009, C. Deng established the Drazin inverse of P þ Q when P 2 ¼ P and Q 2 ¼ Q [67]. In 2010, X. Liu et al. gave the formulas for ðP QÞD , ½ðP PQ ÞD 2 and ðPQÞ# whenP 2 ¼ P, Q 2 ¼ Q, P 3 Q ¼ QP and Q 3 P ¼ PQ [137]. In 2010, J. Benítez et al. established the expression for the group inverse of c1 P þ c2 Q when P,Q 2 Cnn are k power idempotent matrices (where P k ¼ P and Q k ¼ Q) and PQ ¼ 0, with c1 ; c2 2 C [6]. In 2011, H. Yang et al. gave the formula for ðP þ QÞD under the conditions 2 PQ ¼ 0 and PQP ¼ 0 [215]. In 2010, C. Deng et al. established a formula for ðP þ QÞD under the condition PQ ¼ QP [72]. In 2011, Y. Wei et al. gave the following result on the Drazin inverse. Theorem 2.36 [206]. For P; Q 2 Cnn , if PQ ¼ QP and indðPÞ ¼ k, then ðP þ QÞD ¼ ðI þ P D QÞD P D þ P p
k 1 X
ðQ D Þi þ 1 ðPÞi
i¼0
¼ ðI þ P QÞ P þ P Q D ðI þ PP p Q D Þ1 ; D
D
D
p
ðP þ QÞD ðP þ QÞ ¼ ðI þ P D QÞD P D ðP þ QÞ þ P p QQ D : Theorem 2.37 [206]. Let B ¼ P þ Q 2 Cnn , where P is a nonnilpotent and PQ ¼ QP. For every eigenvalue l of B, there exists a nonzero eigenvalue k of P such that j l kj 6 jðP D QÞ; j kj where jðP D QÞ is the spectral radius of P D Q. There are more results on the Drazin inverse of the sum of two matrices in [82, 216, 231].
Generalized Inverses of Partitioned Matrices
73
The additive formulas for the Drazin inverse of two matrices was extended to that of more matrices. J. Chen et al. [57] gave the Drazin inverse and group inverse of the sum of matrices ðP; Q; R; SÞ satisfying PQ ¼ QP ¼ 0; PS ¼ SQ ¼ QR ¼ RP ¼ 0; RD ¼ RD ¼ 0:
ð2:24Þ
Theorem 2.38 [57]. For P; Q; R 2 Cnn , with ðP; Q; R; 0Þ satisfying (2.24), let M ¼ P þ Q þ R. If indðPÞ 6 1, indðQÞ 6 1, and indðRÞ > 2, then the following hold: (1) indðM Þ 6 indðRÞ; (2) indðM Þ 6 1 if and only if k2 k3 P p j þ1 # j P ki2 P # i i þ 1 p k2 P ðP Þ R Q þ P R ðQ Þ ¼ P p RQ p þ ðP # Þi Ri þ j þ 1 ðQ # Þ j ; i¼0
j¼1
i¼1 j¼1
and if M is group invertible, then M# ¼
k X
ðP # Þi Ri1 Q p þ
i¼1
k X
P p Rj1 ðQ # Þ j
j¼1
k 1 X ki X ðP # Þi Ri þ j þ 1 ðQ # Þ j ; i¼1 j¼1
where k ¼ indðRÞ. Theorem 2.39 [57]. For P; Q; R; S 2 Cnn , with ðP; Q; R; SÞ satisfying (2.24), let M ¼ P þ Q þ R þ S. (1) If SP ¼ SR ¼ 0, then M D ¼ P D þ QD þ
ls X
ðQ D Þl S l1
l¼2
þ
ls X l1 X n 1 X ni X
ðP D Þk þ i Ri þ j ðQ D Þlk þ j S l1
l¼2 k¼1 i¼1 j¼1
þ
ls X n 1 X ni X ðP D Þl þ i þ j Ri Q j Q p S l1 l¼1 i¼1 j¼0
þ
ls X n 1 X ni X
P p P j Ri ðQ D Þl þ i þ j S l1
l¼1 i¼1 j¼0
þ
ls X n 2 ni1 X X ðP D Þl þ i þ n Ri þ j Q nj Q p S l1 l¼1 i¼1
þ
l¼1 i¼1
j¼1
ls X n 2 ni1 X X
P p P ni Ri þ j ðQ D Þl þ n þ j S l1
j¼1
ls X l X n 1 X ni X ðP D Þk þ i1 Ri þ j1 ðQ D Þlk þ j S l1 ; l¼1 k¼1 i¼1 j¼1
where ls ¼ indðSÞ.
Sign Pattern for Generalized Inverses
74 (2) If RS ¼ QS ¼ 0, then M D ¼ P D þ QD þ
ls X
S l1 ðP D Þl
l¼2
þ
ls X l1 X n 2 ni1 X X l¼2 k¼1 i¼1
þ
ls X n 1 X ni X
S l1 ðP D Þk þ i Ri þ j ðQ D Þlk þ j
j¼1
S l1 ðP D Þl þ i þ j Ri Q j Q p
l¼1 i¼1 j¼0
þ
ls X n1 X ni X
S l1 P p P j Ri ðQ D Þl þ i þ j
l¼1 i¼1 j¼0
þ
ls X n 2 ni1 X X l¼1 i¼1
þ
j¼1
ls X n 2 ni1 X X l¼1 i¼1
S l1 ðP D Þl þ n þ i Ri þ j Q nj Q p S l1 P p P ni Ri þ j ðQ D Þl þ n þ j
j¼1
ls X l X n 1 X ni X
S l1 ðP D Þk þ i1 Ri þ j1 ðQ D Þlk þ j ;
l¼1 k¼1 i¼1 j¼1
where ls ¼ indðSÞ. From 2012 to 2016, C. Bu et al. presented a series of results on the Drazin inverse and group inverse of a sum of matrices and block matrices. Theorem 2.40 [15]. For P; Q 2 Cnn , if P 2 Q ¼ 0 and Q 2 P ¼ 0, then ðP þ QÞD ¼ ðPQÞD P QðPQÞD l
i þ 1
i þ 1 X D D þ ðPQÞ P þ Q ðPQÞ P 2i P p þ Q 2i Q p i¼0
l
X þ ðPQÞi ðPQÞp P þ QðPQÞi ðPQÞp ðP D Þ2i þ 2 þ ðQ D Þ2i þ 2 ;
i¼0
where l ¼ max indðP 2 Þ; indðQ 2 Þ; indðPQ Þ : Proof. From the definition of Drazin inverse, it is easy to see that ðP þ QÞD ¼ ðP þ QÞðP 2 þ Q 2 þ PQ þ QPÞD D I ¼ ðP þ Q Þ ½ PQ þ QP; I 2 : P þ Q2
ð2:25Þ
Generalized Inverses of Partitioned Matrices
75
Theorem 1.16 gives D I ½ PQ þ QP; I 2 P þ Q2 " ¼ ½ PQ þ QP;
I
PQ þ QP P 3 Q þ P 2 QP þ Q 2 PQ þ Q 3 P
I P 2 þ Q2
D #2
I : P 2 þ Q2 ð2:26Þ
Since P 2 Q ¼ 0 and Q 2 P ¼ 0, we have D I ½ PQ þ QP; I 2 P þ Q2 PQ þ QP ¼ ½ PQ þ QP; I 0 Theorem 2.1 gives PQ þ QP 0
I P 2 þ Q2
D
I P 2 þ Q2
ðPQ þ QPÞD ¼ 0
D !2
I : P 2 þ Q2
X ; ðP 2 þ Q 2 ÞD
where D X ¼ ðPQ þ QPÞD P 2 þ Q 2 j1
i þ 2 X i D þ ðPQ þ QPÞD P 2 þ Q 2 I ðP 2 þ Q 2 Þ P 2 þ Q 2 i¼0
þ I ðPQ þ QP ÞðPQ þ QPÞD
k1
X
ðPQ þ QP Þi
P 2 þ Q2
D i þ 2
;
i¼0
and j ¼ indðP 2 þ Q 2 Þ; k ¼ indðPQ þ QP Þ: Then we have D X ¼ ðPQ þ QPÞD P 2 þ Q 2 m 1
i þ 2 X i D þ ðPQ þ QPÞD P 2 þ Q 2 I ðP 2 þ Q 2 Þ P 2 þ Q 2 i¼0
1
m X D i þ 2 ðPQ þ QP Þi P 2 þ Q 2 ; þ I ðPQ þ QP ÞðPQ þ QPÞD
i¼0
and m ¼ max indðP 2 þ Q 2 Þ; indðPQ þ QP Þ : Hence D I ½ PQ þ QP; I 2 P þ Q2 2 I X ðPQ þ QP ÞD ¼ ½ PQ þ QP; I : D P 2 þ Q2 0 ðP 2 þ Q 2 Þ
ð2:27Þ
Sign Pattern for Generalized Inverses
76
Since P 2 Q ¼ 0 and Q 2 P ¼ 0, and from theorem 2.35, we have ðPQ þ QPÞD ¼ ðPQÞD þ ðQPÞD ;
ð2:28Þ
ðP 2 þ Q 2 ÞD ¼ ðP D Þ2 þ ðQ D Þ2 :
ð2:29Þ
Substituting X, (2.28) and (2.29) into (2.27), and substituting (2.27) into (2.25), we obtain ðP þ QÞD ¼ ðPQÞD P QðPQÞD m
i þ 1
i þ 1 X P 2i P p þ Q 2i Q p þ ðPQÞD P þ Q ðPQÞD i¼0
m
X ðPQÞi ðPQÞp P þ QðPQÞi ðPQÞp ðP D Þ2i þ 2 þ ðQ D Þ2i þ 2 ; þ i¼0
where m ¼ max indðP 2 þ Q 2 Þ; indðPQ þ QP Þ : Let l ¼ max indðP 2 Þ; indðQ 2 Þ; indðPQ Þ . (1) When l > m, the result is obtained. (2) When l\m, l 6 i 6 m, we have P 2i P p ¼ 0; Q 2i Q p ¼ 0; ðPQ Þi ðPQ Þp ¼ 0;
then the result holds.
■
Theorem 2.41 [15]. For P; Q 2 Cnn , if P 3 Q ¼ 0, QPQ ¼ 0 and QP 2 Q ¼ 0, then h i 2 ðP þ QÞD ¼ I þ PQ D þ P 2 Q D þ PP D þ P 2 Q D P D P D þ
l1 n X
i þ 1 i þ 3 o QpQi P D þ P 2Qp Qi P D
i¼0
l1 nh X 2 i D i þ 1 i p 2i þ 2 o þ I þ PQ D þ P 2 Q D P P þ PQ p Q 2i I þ QP D P D Q ; i¼0
where l ¼ maxfindðPÞ; indðQÞg: D P , and from theorem 1.16, we have Proof. Since ðP þ Q ÞD ¼ ½ I ; Q I !D D P P PQ 2 P ¼ ½I; Q ½I; Q I I Q I 2 D P P þ PQ P 2 Q þ PQ 2 : ¼ ½I; Q 2 I P þQ Q þ PQ
Generalized Inverses of Partitioned Matrices
77
Let A¼ M¼
P 2 þ PQ
0
P þQ
Q 2 þ PQ
; B¼
P 2 þ PQ
P 2 Q þ PQ 2
P þQ
Q 2 þ PQ
0 P 2 Q þ PQ 2 0
0
;
:
Then M ¼ A þ B. Since QP 2 Q ¼ 0, we obtain B 2 A ¼ 0 and A2 B ¼ 0. Theorem 2.40 implies M D ¼ ðABÞD A BðABÞD m
i þ 1
i þ 1 X D D þ ðABÞ A þ B ðABÞ A2i Ap þ B 2i B p i¼0
m
X þ ðABÞi ðABÞp A þ BðABÞi ðABÞp ðAD Þ2i þ 2 þ ðB D Þ2i þ 2 ; i¼0
where m ¼ max indðA2 Þ; indðB 2 Þ; indðABÞ : Since B D ¼ 0, ðAB Þ2 ¼ 0 and ðAB ÞD ¼ 0, the above equality is reduced to M D ¼ AD þ BðAD Þ2 þ ABðAD Þ3 þ BABðAD Þ4 : Theorem 2.1 shows
" AD ¼
ðP 2 þ PQ Þ X
D
# 0 D ; ðQ 2 þ PQ Þ
ð2:30Þ
ð2:31Þ
where X¼
j1 h X 2 D ii þ 2 i p Q þ PQ ðP þ Q Þ P 2 þ PQ P 2 þ PQ i¼0
þ
k 1 X
Q 2 þ PQ
i¼0
Q 2 þ PQ
D
p
h i D ii þ 2 Q 2 þ PQ ðP þ Q Þ P 2 þ PQ
D ðP þ Q Þ P 2 þ PQ ;
where j ¼ indðQ 2 þ PQÞ; and k ¼ indðP 2 þ PQÞ: Since QPQ ¼ 0 and P 3 Q ¼ 0, and by theorem 2.35, we have 4 2 D 2 P þ PQ ¼ P D þ PQ P D ; ð2:32Þ
Q 2 þ PQ
D
2 3 ¼ QD þ P QD :
ð2:33Þ
Substituting (2.32) and (2.33) into (2.31), and (2.31) into (2.30), the result is obtained. ■
Sign Pattern for Generalized Inverses
78
Applying the above theorem, the following representation for Drazin inverse of block matrices is established. A B Theorem 2.42 [15]. For M ¼ 2 Cnn (where A and D are square), if C D S ¼ D CAD B ¼ 0, ABCAp ¼ 0 and Ap ABC ¼ 0, then "
D
M ¼
#
ðBCAp ÞD A
0
0
0
"
þ
ðBCAp ÞD B
0
#
ðI V ð1ÞÞ C ðAp BC ÞD AD B 0 0 11
i þ 1 nX þ1
3 ðBCAp ÞD A2i 0 B B CC A þ @ AAD BCAD ðAW ÞD 0þ @ A A
i þ 1 0 i¼1 A2i1 0 C ðBCAp ÞD 82 3
i þ 1 nX þ1 > < 0 ðBCAp ÞD B 6 7 þ 4 5U ðiÞ
i þ 1
i þ 1 > D D p p D : i¼1 C ðA BC Þ A B C ðA BC Þ " #) 0 ðAp BC Þi1 ðAp BC Þp B þ V ð2i þ 1Þ; C ðAp BC Þi1 ðAp BC Þp C ðAp BC Þi1 ðAp BC Þp AD B
where
C ðAp BC ÞD
B
0
I U ðjÞ ¼ ðAW Þp ðAW Þ2j1 A I ; AD B ; D CA
j I D V ðjÞ ¼ A I ; AD B ; ð AW Þ D CA n
o W ¼ AAD þ AD BCAD ; n ¼ max indðAp BC Þ; indðA2 Þ; ind ðAW Þ2 :
Proof. Let A 2 Crr . Since AAD þ Ap ¼ Ir and S ¼ D CAD B ¼ 0, we have 2 D A B AAp Ap B A A AAD B M¼ ¼ : þ C D 0 0 C CAD B Let
P¼
A 2 AD C
AAp AAD B ; Q ¼ D 0 CA B
Ap B : 0
Since ABCAp ¼ 0 and Ap ABC ¼ 0, we obtain P 2 Q ¼ 0 andQ 2 ¼ 0. Note that k ¼ indðAÞ, then Q k þ 1 ¼ 0 and Q D ¼ 0; Q p ¼ I : Theorem 2.40 gives M D ¼ ½PðQPÞD þ ðQPÞD QP p l
i þ 1
i þ 1 X þ ½P ðQPÞD þ ðQPÞD QðP 2i P p þ Q 2i Þ i¼1
þ
l X i¼0
½PðQPÞi ðQPÞp þ ðQPÞi ðQPÞp QðP D Þ2i þ 2 ;
ð2:34Þ
Generalized Inverses of Partitioned Matrices
79
where l ¼ max indðP 2 Þ; indðQ 2 Þ; indðPQ Þ : p A BC Ap BCAD B Since QP ¼ , and by theorem 2.1, we have 0 0 D p ðAp BC ÞD AD B : ðQP ÞD ¼ ðA BC Þ 0 0 Let
A2 AD P1 ¼ CAAD
0 AAD B ; P2 ¼ CAp CAD B
0 : 0
Then P ¼ P1 þ P2 , P2 P1 ¼ 0 and P22 ¼ 0: Since ABCAp ¼ 0, we have P1 P2 ¼ 0. By lemma 2.25, we obtain P D ¼ P1D . Let S1 be the generalized Schur complement of P1 . Then S1 ¼ CAD B CAAD ðA2 AD ÞD AAD B ¼ 0; ðA2 AD Þp AAD B ¼ 0; CAAD ðA2 AD Þp ¼ 0: Lemma 2.2 gives D i P ¼
I CAD
ðAW ÞD
i þ 1 A I ; AD B ;
where W ¼ AAD þ AD BCAD , i > 1. Let I U ðjÞ ¼ ðAW Þp ðAW Þ2j1 A I ; AD B ; D CA
j I D V ðjÞ ¼ A I ; AD B ; ð AW Þ D CA where W ¼ AAD þ AD BCAD . Substituting P; Q; ðQP ÞD and P D into (2.34), it yields " # " # 0 ðBCAp ÞD B ðBCAp ÞD A 0 D M ¼ þ ðI V ð1ÞÞ 0 0 C ðAp BC ÞD C ðAp BC ÞD AD B 0 2 31
i þ 1 l þ1
3 X ðBCAp ÞD A2i 0 B 6 7C A B þ @AAD BCAD ðAW ÞD þ 4 5A
i þ 1 0 0 i¼0 C ðBCAp ÞD A2i1 0 82 3
i þ 1 l þ1 > < 0 ðBCAp ÞD B X 6 7 þ 4 5U ðiÞ
i þ 1
i þ 1 > i¼1 : C ðAp BC ÞD C ðAp BC ÞD AD B " # ) 0 ðAp BC Þi1 ðAp BC Þp B þ V ð2i þ 1Þ ; C ðAp BC Þi1 ðAp BC Þp C ðAp BC Þi1 ðAp BC Þp AD B
Sign Pattern for Generalized Inverses
80
where l ¼ max indðP 2 Þ; indðQ 2 Þ; indðPQ Þ : n
o Let n ¼ max indðAp BC Þ; indðA2 Þ; ind ðAW Þ2 . (1) When n > l, the result can be obtained; (2) When n\l, n 6 i 6 l, then A2i Ap ¼ 0; ðAp BC Þi ðAp BC Þp ¼ 0; ðAW Þ2i þ 1 ðAW Þp ¼ 0; ■
hence the result holds.
A B 2 Cnn (where A is square), if CAD BC ¼ 0, C 0 ABCAp ¼ 0 andAp ABC ¼ 0, then 2 3 ðAp BC ÞD 0 A B 5 i2 MD ¼ 4 h XU ð0Þ ðCAp B ÞD C 0 C ðBCAp ÞD A I 2 h 3 ii þ 1 p D 2i n ð BCA Þ A 0 X6 7 A B þ 4 h 5 ii þ 2 0 0 i¼1 C ðBCAp ÞD A2i þ 1 0 ( " # ) n X ðAp BC Þp ðAp BC Þi1 0 iþ1 þ X V ðiÞ þ U ðiÞ ; ðCAp B Þp ðCAp B Þi1 0 i¼1
Theorem 2.43 [15]. For M ¼
where
U ðjÞ ¼
I D
CA I
ðAW ÞD
2j þ 1 A A þ AD BC ;
B ;
ðAW Þp ðAW Þ2j1 A A þ AD BC ; B ; CA " # ðAp BC ÞD 0 X¼ ; W ¼ AAD þ AD BCAD ; 0 ðCAp B ÞD n h io n ¼ max indðAp BC Þ; indðA2 Þ; ind ðAW Þ2 :
V ðjÞ ¼
Proof. Let
D
A P¼ C
0 B ; Q¼ 0 CAD B
0 : CAD B
Then M ¼ P þ Q. Since CAD BC ¼ 0, we have PQ ¼ 0 and Q 2 ¼ 0. Then theorem 2.35 implies M D ¼ P D þ ðP D Þ2 Q:
ð2:35Þ
Generalized Inverses of Partitioned Matrices
81
Since ABCAp ¼ 0, Ap ABC ¼ 0 and CAD B CAD B ¼ 0, and by theorem 2.42, the representation of P D is obtained. Let
2j þ 1 I A A þ AD BC ; B ; U ðjÞ ¼ ðAW ÞD D CA I V ðjÞ ¼ ðAW Þp ðAW Þ2j1 A A þ AD BC ; B ; D CA " # ðAp BC ÞD 0 X¼ ; 0 ðCAp B ÞD where W ¼ AAD þ AD BCAD . Substituting the representations of P D and Q into ■ (2.35), the representation of M D is obtained. In [188], the explicit formulas of ðP þ QÞD over skew fields were established under the following conditions: (i) PQ 2 ¼ 0; P 2 QP ¼ 0; ðQPÞ2 ¼ 0; or (ii) P 2 QP ¼ 0; P 3 Q ¼ 0; Q 2 ¼ 0: Clearly, (iÞ generalizes the case of PQ 2 ¼ 0 and PQP ¼ 0 in [215], and (ii) generalizes the case of P 2 Q ¼ 0 and Q 2 ¼ 0 given by M.F. Martínez-Serrano and N. Castro-González in [149]. In order to prove the result, we apply theorems 2.1 and 2.35. Note that theorems 2.1 and 2.35 are over complex fields. In fact, the theorems also hold over skew fields, here we omit the proof. Lemma 2.23 [139]. Let A 2Kmn andB 2Knm . Then
D ðAB ÞD ¼ A ðBAÞ2 B and ðAB ÞD A ¼ AðBAÞD : Theorem 2.44. Let P; Q 2Knn . If PQ 2 ¼ 0; P 2 QP ¼ 0 and ðQPÞ2 ¼ 0, then h i ðP þ QÞD ¼ ðQ p Q D P þ PQðP D Þ2 ÞðP D Þ2 þ ðQ D Þ3 ðP þ QÞP p ðP þ QÞ þ
m 2 1 X
2i þ 4 Q 2i þ 1 Q p ðP þ Q Þ P D ðP þ Q Þ
i¼0
þ
m 1 1 X
ðQ D Þ2i þ 5 ðP þ Q ÞP 2i þ 2 P p ðP þ Q Þ;
i¼0
where m1 ¼ indðP 2 Þ and m2 ¼ indðQ 2 Þ:
Sign Pattern for Generalized Inverses
82 Proof. It is easy to see that
ðP þ QÞD ¼ ðP þ QÞððP þ QÞ2 ÞD ¼ ðP þ QÞðP 2 þ PQ þ QP þ Q 2 ÞD
ð2:36Þ
D
¼ ðP þ QÞðM þ N Þ ; where M ¼ P 2 þ PQ and N ¼ Q 2 þ QP. Since PQ 2 ¼ 0 and P 2 QP ¼ 0, we have MN ¼ 0. It follows from theorem 2.35 that ðM þ N ÞD ¼
l1 s1 X X i þ 1 ðN D Þi þ 1 M i ðI MM D Þ þ I NN D N i M D ; i¼0
ð2:37Þ
i¼0
where l ¼ indðM Þ; s ¼ indðN Þ. Theorem 2.35 shows that N D ¼ ðI Q 2 ðQ 2 ÞD Þ
l1 X
Q 2i ððQPÞD Þi þ 1 þ
i¼0
s1 X
ðQ D Þ2ði þ 1Þ ðQPÞi ðI QPðQPÞD Þ;
i¼0
where l ¼ indðQ 2 Þ; s ¼ indðQPÞ. Note that ðQPÞ2 ¼ 0, then ðQPÞD ¼ 0. Hence, ND ¼
s1 X ðQ D Þ2ði þ 1Þ ðQPÞi ¼ ðQ 2 ÞD þ ðQ D Þ3 P: i¼0
It follows Similarly,
from
PðQ 2 ÞD ¼ PQ 2 ðQ 4 ÞD ¼ 0 that
ðN D Þ2 ¼ ðQ D Þ4 þ ðQ D Þ5 P.
ðN D Þi ¼ ðQ D Þ2i þ ðQ D Þ2i þ 1 P;
ð2:38Þ
for i > 1. From P 2 QP ¼ 0; ðQPÞ2 ¼ 0 and lemma 2.23, we get M D ¼ ½PðP þ QÞD ¼ P½ðP 2 þ QPÞ2 D ðP þ QÞ ¼ PðP 4 þ QP 3 ÞD ðP þ QÞ: By applying theorem 2.35 to P 4 þ QP 3 , we have ðP 4 þ QP 3 ÞD ¼ ðP D Þ4 þ QðP D Þ5 ; hence, M D ¼ ðP D Þ2 þ ðP D Þ3 Q þ PQðP D Þ4 þ PQðP D Þ5 Q: It follows from P D PQP D ¼ ðP 2 ÞD P 2 QPðP 2 ÞD ¼ 0 that ðM D Þ2 ¼ ðP D Þ4 þ ðP D Þ5 Q þ PQðP D Þ6 þ PQðP D Þ7 Q. Similarly, ðM D Þi ¼ ðP D Þ2i þ ðP D Þ2i þ 1 Q þ PQðP D Þ2i þ 2 þ PQðP D Þ2i þ 3 Q;
ð2:39Þ
for i > 1. By substituting (2.38) and (2.39) into (2.37), and substituting (2.37) into (2.36), it yields that
Generalized Inverses of Partitioned Matrices
83
h i ðP þ Q ÞD ¼ ðQ p Q D P þ PQðP D Þ2 ÞðP D Þ2 þ ðQ D Þ3 ðP þ QÞP p ðP þ QÞ þ
s1 X
2i þ 4 Q 2i þ 1 Q p ðP þ Q Þ P D ðP þ QÞ
ð2:40Þ
i¼0
þ
l1 X
ðQ D Þ2i þ 5 ðP þ Q ÞP 2i þ 2 P p ðP þ Q Þ:
i¼0
Let m1 ¼ indðP 2 Þ and m2 ¼ indðQ 2 Þ. For i > m1 ; j > m2 ; we have P 2i þ 2 P p ¼ 0; Q 2j þ 1 Q p ¼ 0: In (2.40), replace l and s with m1 andm2 , respectively. Thus, the proof is complete. ■ Similarly, we can obtain the following theorems. Theorem 2.45. Let P; Q 2 Knn . If P 2 Q ¼ 0; QPQ 2 ¼ 0 and ðQPÞ2 ¼ 0, then h i ðP þ Q ÞD ¼ ðP þ Q Þ Q p ðP þ QÞðP D Þ3 þ ðQ D Þ2 ðP p QP D þ ðQ D Þ2 PQÞ þ
m 1 1 X
2i þ 5 ðP þ Q ÞQ 2i þ 2 Q p ðP þ Q Þ P D
i¼0
þ
m 2 1 X
ðP þ Q ÞðQ D Þ2i þ 4 ðP þ Q ÞP 2i þ 1 P p ;
i¼0
where m1 ¼ indðQ Þ; m2 ¼ indðP 2 Þ: 2
Theorem 2.46. Let P; Q 2Knn . If P 2 QP ¼ 0; P 3 Q ¼ 0 and Q 2 ¼ 0, then ðP þ Q ÞD ¼
nX 2 1
P ðQP Þi ðQP Þp þ ðQP Þi ðQP Þp P
i¼0
þ
PD
2i þ 2
nX 1 1
i þ 1
i þ 1 P ðQP ÞD þ ðQP ÞD P P 2i P p
i¼0
þ ðQP ÞD Q þ PððQPÞD Þ2 PQ P D ; where n1 ¼ indðP 2 Þ; n2 ¼ indðQPÞ: Proof. It is easy to see that ðP þ QÞD ¼ ðP þ QÞððP þ QÞ2 ÞD ¼ ðP þ QÞðP 2 þ PQ þ QP þ Q 2 ÞD D
¼ ðP þ QÞðM þ N Þ ;
ð2:41Þ
Sign Pattern for Generalized Inverses
84
where M ¼ P 2 þ Q 2 ; and N ¼ PQ þ QP. Since P 2 QP ¼ 0; P 3 Q ¼ 0 and Q 2 ¼ 0, we have ðM þ N ÞD ¼
l1 s1 X X i þ 1 ðN D Þi þ 1 M i ðI MM D Þ þ I NN D N i M D ; i¼0
ð2:42Þ
i¼0
where l ¼ indðM Þ; s ¼ indðN Þ. Clearly, ðM D Þi ¼ ðP D Þ2i ;
ð2:43Þ
for i > 1. Note that matrix N satisfies the condition of theorem 2.35, then l1
X
i þ 1 ðN ÞD ¼ I ðQPÞD QP ðQPÞi ðPQÞD
þ
s1 X
i¼0
ðQPÞD
i þ 1
ðPQÞi I PQðPQÞD ;
i¼0
where l ¼ indðQP Þ and s ¼ indðPQ Þ. It follows from P 2 QP ¼ 0 that PðPQÞD ¼ P 2 QPQððPQÞ3 ÞD ¼ 0 and ðQPÞD ðPQÞ2 ¼ ððQPÞ2 ÞD QP 2 QPQ ¼ 0. Hence, 1
i þ 1
X N D ¼ I ðQPÞD QP ðPQÞD þ ðQPÞD ðPQÞi I PQðPQÞD D
D
i¼0 D 2
¼ ðPQÞ þ ðQPÞ þ ððQPÞ Þ PQ: It follows from Q 2 ¼ 0 and P 2 QP ¼ 0 that ðN D Þ2 ¼ ððPQÞD Þ2 þ ððQPÞD Þ2 þ ððQPÞD Þ3 PQ. Similarly, ðN D Þi ¼ ððPQÞD Þi þ ððQPÞD Þi þ ððQPÞD Þi þ 1 PQ;
ð2:44Þ
for i > 1. By substituting (2.43) and (2.44) into (2.42), and substituting (2.42) into (2.41), we obtain l1
i þ 1
i þ 1 X D D D ðP þ Q Þ ¼ P ðQP Þ þ ðQP Þ P P 2i P p i¼0
þ
s1 X
P ðQP Þi ðQP Þp þ ðQP Þi ðQP Þp P
2i þ 2 PD
i¼0
þ ðQP ÞD Q þ PððQPÞD Þ2 PQ P D : Let n1 ¼ indðP 2 Þ and n2 ¼ indðQPÞ. For i > n1 ; j > n2 ; we have P 2i P p ¼ 0; ðQPÞ j ðQPÞp ¼ 0:
ð2:45Þ
Generalized Inverses of Partitioned Matrices
85
In (2.45), replace l and s with n1 andn2 , respectively. The proof is complete. ■ Theorem 2.47. Let P; Q 2Knn . If PQP 2 ¼ 0; QP 3 ¼ 0 and Q 2 ¼ 0, then nX
i þ 1
i þ 1 1 1 D D D 2i p ðP þ Q Þ ¼ P P P ðPQ Þ þ ðPQ Þ P i¼0
þ
nX 2 1
PD
2i þ 2
½P ðPQ Þi ðPQ Þp þ ðPQ Þi ðPQ Þp P
i¼0
þ Q ðPQ ÞD þ QPððPQÞD Þ2 P P D ; where n1 ¼ indðP 2 Þ; and n2 ¼ indðPQÞ: We apply the additive formulas in theorems 2.44–2.47 to establish the representations for the Drazin inverse of block matrices, where A B ð2:46Þ M¼ 2 Knn ; A 2 Krr ; S ¼ D CAD B; C D under the following conditions: (i) A2 Ap BC ¼ 0; BCAp BC ¼ 0; CAAp BC ¼ 0; S ¼ 0; or (ii) A2 BC ¼ 0; ABCA ¼ 0; ABCB ¼ 0; S ¼ 0: Obviously, the above condition (i) generalizes the case of A2 Ap B ¼ 0, CA AB ¼ 0, BCAp B ¼ 0 and S ¼ 0 in [149], and the case of AAp BC ¼ 0, CAp BC ¼ 0 and S ¼ 0 in [215]; condition (ii) generalizes the case of ABC ¼ 0 and S ¼ 0 in [149]. p
Theorem 2.48. Let M be in the form in (2.46). If A2 Ap BC ¼ 0, BCAp BC ¼ 0, S ¼ 0 and CAAp BC ¼ 0, then iþ1 p ! s1 X A A 0 ðP1D Þi þ 1 M D ¼ M 2 ðP1D Þ4 I þ M; CAi Ap 0 i¼0
where s ¼ indðAÞ and AD BCAD ; t > 1:
t P1D ¼
I ½ðAW ÞD t þ 1 A I ; AD B , W ¼ AAD þ CAD
0 Ap B A AAD B Proof. Note that M ¼ :¼ P þ Q. Obviously, Q 2 ¼ 0. þ 0 0 C CAD B The conditions A2 Ap BC ¼ 0, BCAp BC ¼ 0 and CAAp BC ¼ 0 imply that P 2 QP ¼ 0 and ðQPÞ2 ¼ 0. Theorem 2.44 implies ð2:47Þ M D ¼ P 2 þ QP þ PQ ðP D Þ4 ðP þ Q Þ:
Sign Pattern for Generalized Inverses
86
AAp 0 A2 AD AAD B P¼ :¼ P1 þ P2 . Obviously, þ CAp 0 CAAD CAD B ¼ 0, where s ¼ indðAÞ. It follows from theorem 2.35 that
We
consider
P2 P1 ¼ 0 and P2s þ 1
PD ¼
s X i¼0
Decompose P1D : 2 D AA D P1 ¼ CAAD
AAD B CAD B
D
ðP1D Þi þ 1 P2i :
I ¼ CAD
0 I
AW 0
ð2:48Þ
AD B 0
D
I CAD
0 ; I
where W ¼ AAD þ AD BCAD . It follows from theorem 2.1 that D D AW AAD B ððAW ÞD Þ2 AAD B ¼ ðAW Þ 0 0 0 0 I D D 2 D ¼ : ðAW Þ ; ððAW Þ Þ AA B 0 Since ðAW ÞD A2 AD ¼ ðAW ÞD A, we have I D P1 ¼ ½ðAW ÞD 2 A I ; AD B : CAD Manipulation shows that
t P1D ¼
I ½ðAW ÞD t þ 1 A I ; AD B ; CAD
ð2:49Þ
for t > 1. By substituting (2.49) into (2.48), we obtain the expression of P D , ■ substituting P D into (2.47), the expression of M D is obtained. Using theorem 2.45, we have the following theorem. Theorem 2.49. Let M be in the form as (2.46) BCA2 Ap ¼ 0; BCAAp B ¼ 0, S ¼ 0 and BCAp BC ¼ 0, then ! s1 i þ 1 p X A A 0 D D iþ1 þ I ðP1D Þ4 M 2 ; M ¼M ðP1 Þ i p CA A 0 i¼0 where s ¼ indðAÞ, AD BCAD ; t > 1:
t P1D ¼
I D tþ1 A I ; AD B , and W ¼ AAD þ D ½ðAW Þ CA
Theorem 2.50. Let M be in the form in (2.46). If A2 BC ¼ 0; ABCA ¼ 0, S ¼ 0 and ABCB ¼ 0, then
Generalized Inverses of Partitioned Matrices 2
BWA þ AD D 3 D 4 M ¼ þ BððCBÞ Þ ½ðCBÞ2 CAD CABC WA2 þ ðCBÞD CAp
87 3 BWB þ ðI BðCBÞD C ÞðAD Þ2 B 5 ; WAB ðCBÞD CAD B
P 1 1 P 2 1 where W ¼ wi¼0 ððCBÞD Þi þ 2 CA2i Ap þ wi¼0 ðCBÞp ðCB Þi C ðAD Þ2i þ 4 ; w1 ¼ ind 2 ðA Þ and w2 ¼ indðCB Þ. A B 0 0 Proof. Note that M ¼ þ :¼ P þ Q. Obviously, Q 2 ¼ 0. The 0 CAD B C 0 conditions A2 BC ¼ 0; ABCA ¼ 0 and ABCB ¼ 0 imply that P 2 QP ¼ 0; and P 3 Q ¼ 0. It follows from theorem 2.46 that nX
i þ 1
i þ 1 1 1 D D D P ðQP Þ þ ðQP Þ P P 2i P p M ¼ i¼0
þ
nX 2 1
P ðQP Þi ðQP Þp þ ðQP Þi ðQP Þp P
PD
2i þ 2
ð2:50Þ
i¼0
þ ðQP ÞD Q þ PððQPÞD Þ2 PQ P D ; where n1 ¼ indðP 2 Þ, and n2 ¼ indðQPÞ: Theorem 2.1 leads to ðAD Þ2i þ 2 ðAD Þ2i þ 3 B D 2i þ 2 ¼ ; ðP Þ 0 0 ððQPÞD Þi þ 1 ¼
0 ððCBÞD Þi þ 2 CA
0 ; ððCBÞD Þi þ 1
ð2:51Þ
ð2:52Þ
for i > 0. By substituting (2.51) and (2.52) into (2.50), we have BWA þ C BWB þ ðI BðCBÞD C ÞðAD Þ2 B ; MD ¼ WA2 þ ðCBÞD CAp WAB ðCBÞD CAD B where W¼
nX 1 1
ððCBÞD Þi þ 2 CA2i Ap þ
i¼0 D
nX 2 1
ðCBÞp ðCB Þi C ðAD Þ2i þ 4 ;
i¼0 D 3
2
ð2:53Þ
D
C ¼ A þ BððCBÞ Þ ½ðCBÞ CA CABC : Let w1 ¼ indðA2 Þ and w2 ¼ indðCB Þ. For i > w1 ; j > w2 , we have A2i Ap ¼ 0; ðCBÞ j ðCBÞp ¼ 0: In (2.53), replace n1 and n2 with w1 and w2 , respectively. We complete the proof. ■
Sign Pattern for Generalized Inverses
88
Applying theorem 2.47, we obtain the following theorem. Theorem 2.51. Let M be in the form in (2.46). If BCA2 ¼ 0; ABCA ¼ 0, S ¼ 0 and CBCA ¼ 0, then AWC þ AD ðI BðCBÞD C Þ þ BCABððCBÞD Þ3 C A2 W þ Ap BðCBÞD MD ¼ ; C WC þ C ðAD Þ2 ðI BðCBÞD C Þ CAW CAD BðCBÞD P 2 1 2i p P 1 1 D 2i þ 4 ðA Þ BðCBÞp ðCB Þi þ wi¼0 A A BððCBÞD Þi þ 2 , where W ¼ wi¼0 indðCB Þ and w2 ¼ indðA2 Þ.
w1 ¼
Lemma 2.24 [215]. LetP, Q 2 Cnn , where indðPÞ ¼ r and indðQÞ ¼ s. If QPQ ¼ 0 and P 2 Q ¼ 0, then ! 2 X D D 2 D 2 D i D 3i ; ðQ Þ ðP Þ ðP þ QÞ ¼ Y1 þ Y2 þ PQ Y1 ðP Þ þ ðQ Þ Y2 i¼1
where Y1 ¼
s1 X
Q p Q i ðP D Þi þ 1 ; Y2 ¼
i¼0
r 1 X ðQ D Þi þ 1 P i P p :
ð2:54Þ
i¼0
Lemma 2.25 [143]. Let P, Q 2 Cnn , where indðPÞ ¼ r and indðQÞ ¼ s. If PQP 2 ¼ 0, PQPQ ¼ 0, PQ 2 P ¼ 0 and PQ 3 ¼ 0, then ! 2 X D D 2 D 2 D i D 3i ðQ Þ ðP Þ ðP þ QÞ ¼ Y1 þ Y2 þ Y1 ðP Þ þ ðQ Þ Y2 PQ i¼1 D 3
D 3
þ Y1 ðP Þ þ ðQ Þ Y2
3 X
D i
! D 4i
ðQ Þ ðP Þ
ðPQP þ PQ 2 Þ;
i¼1
where Y1 and Y2 are as in (2.54). Theorem 2.52 [23]. For P; Q 2 Cnn , let Si ¼ ðP þ QÞi . If S2i1 PS2i1 QS2ði1Þ ¼ 0, S2i1 PS2i1 P ¼ 0 andQS2ði1Þ PQS2ði1Þ Q 2 ¼ 0, then
3 ðP þ QÞD ¼ S2i þ 1 ðS1 QS2ði1Þ ÞD S2ð2i1Þ ;
Generalized Inverses of Partitioned Matrices
89
where
3
2
2 ðS1 QS2ði1Þ ÞD ¼ Y1 ðPQS2ði1Þ ÞD þ ðQ 2 S2ði1Þ ÞD Y2
2
j
3j X ðQ 2 S2ði1Þ ÞD ðPQS2ði1Þ ÞD þ PQS2ði1Þ Q 2 S2ði1Þ j¼1
4
4 fY1 ðPQS2ði1Þ ÞD þ ðQ 2 S2ði1Þ ÞD Y2
4
j
5j X ðQ 2 S2ði1Þ ÞD ðPQS2ði1Þ ÞD g; j¼1
Y1 ¼
t1 X ðQ 2 S2ði1Þ Þp ðQ 2 S2ði1Þ Þ j ððPQS2ði1Þ ÞD Þj þ 1 ; j¼0
Y2 ¼
t1 X ððQ 2 S2ði1Þ ÞD Þj þ 1 ðPQS2ði1Þ Þ j ðPQS2ði1Þ Þp ; j¼0
t ¼ maxfindðPQS2ði1Þ Þ; indðQ 2 S2ði1Þ Þg: Proof. Theorem 1.16 gives
D
ðP þ QÞ ¼ ½ P; I Þ
P
I
QP
Q
D !2
I
Q D S1 P S1 I ¼ ½ P; I Q QS1 P QS1 I S1 P I 2 D I 0 ¼ ½ P; I M QS1 P Q 0 S1 Q i D i2 I ¼ ½ S2 P; S1 M M : S1 Q
Since
M¼
By induction, we have
S1 P S1 QS1 P
S2i1 P M ¼ S1 QS2i1 P i
where Si ¼ ðP þ QÞi . Let
S2i1 P M ¼ F þ G; F ¼ 0 i
ð2:55Þ
I : S1 Q S2ði1Þ ; S1 QS2ði1Þ
S2ði1Þ 0 ;G ¼ S1 QS2ði1Þ S1 QS2i1 P
ð2:56Þ
0 : 0
Sign Pattern for Generalized Inverses
90
Obviously, G 2 ¼ 0 and G D ¼ 0. Since S2i1 PS2i1 P ¼ 0 and S2i1 PS2i1 QS2ði1Þ ¼ 0, we have FGFG ¼ 0 and FGF 2 ¼ 0. Then lemma 2.25 implies ðM i ÞD ¼ F D þ GðF D Þ2 þ ðF D Þ2 G þ GðF D Þ3 G þ ðF D Þ3 GF þ GðF D Þ4 GF: By theorem 2.1, we have D
F ¼
P
"
ðS2i1 PÞD 0
ð2:57Þ
#
P ðS1 QS2ði1Þ ÞD
:
Since S2i1 PS2i1 P ¼ 0 and S2i1 PS2i1 QS2ði1Þ ¼ 0, we have ðS2i1 PÞD ¼ 0 and 2D ¼ S2ði1Þ S1 QS2ði1Þ . By manipulation, we get 2
k þ 1 3 D 6 0 S2ði1Þ ðS1 QS2ði1Þ Þ 7 ðF D Þk ¼ 4 ð2:58Þ
k 5; D 0 ðS1 QS2ði1Þ Þ
where k > 1. Let R ¼ PQS2ði1Þ and T ¼ Q 2 S2ði1Þ . Then S1 QS2ði1Þ ¼ R þ T . Since QS2ði1Þ PQS2ði1Þ Q 2 ¼ 0, we have TRT ¼ 0 and R2 T ¼ 0. Then lemma 2.24 implies ðS1 QS2ði1Þ ÞD ¼ Y1 þ Y2 þ PQS2ði1Þ Q 2 S2ði1Þ
Y1 ððPQS2ði1Þ ÞD Þ2 þ ððQ 2 S2ði1Þ ÞD Þ2 Y2
2 X
! ððQ 2 S2ði1Þ ÞD Þ j ððPQS2ði1Þ ÞD Þ3j ;
j¼1
where Y1 and Y2 are as in (2.54). By manipulation, we obtain
k
k1
k1 ðS1 QS2ði1Þ ÞD ¼ Y1 ðPQS2ði1Þ ÞD þ ðQ 2 S2ði1Þ ÞD Y2
k1
j
kj X ðQ 2 S2ði1Þ ÞD ðPQS2ði1Þ ÞD þ PQS2ði1Þ Q 2 S2ði1Þ j¼1
k þ 1
k þ 1 ½Y1 ðPQS2ði1Þ ÞD þ ðQ 2 S2ði1Þ ÞD Y2 # kX þ 1
j
k þ 2j ðQ 2 S2ði1Þ ÞD ðPQS2ði1Þ ÞD ; j¼1
ð2:59Þ where k > 1. Substituting (2.58) and (2.59) into (2.57), and substituting (2.57) and (2.56) into (2.55), the representation of ðP þ QÞD is obtained. ■ When i in the above theorem equals 1 or 2, one obtains two corollaries.
Generalized Inverses of Partitioned Matrices
91
Corollary 2.6 [23]. For P, Q 2 Cnn , if ðP þ QÞPðP þ QÞP ¼ 0, ðP þ QÞ PðP þ QÞQ ¼ 0 and QPQ 3 ¼ 0, then
3 ðP þ QÞD ¼ ðP þ QÞ3 ððP þ QÞQÞD ðP þ QÞ2 ; where 2
3
2 4
3i X 2i ððP þ QÞQÞD ¼ Y1 ðPQÞD þ Q D Y2 QD ðPQÞD
þ PQ
3
Y1 ðPQÞ
D
4
i¼1
þ Q
D 8
Y2
4 X
Q
D 2i
!
5i D ðPQÞ ;
i¼1
Y1 ¼
t1 X ðQ 2 Þp ðQÞ2i ððPQÞD Þi þ 1 ; i¼0
Y2 ¼
t1 X
ðQ D Þ2ði þ 1Þ ðPQÞi ðPQÞp ;
i¼0
t ¼ maxfindðPQÞ; indðQ 2 Þg: Corollary 2.7 [23]. Let P; Q 2 Cnn . If ðP þ QÞ3 PðP þ QÞ3 P ¼ 0, ðP þ QÞ3 PðP þ QÞ3 QðP þ QÞ2 ¼ 0 andQðP þ QÞ2 PQðP þ QÞ2 Q 2 ¼ 0, then
3 ðP þ QÞD ¼ ðP þ QÞ5 ððP þ QÞQðP þ QÞ2 ÞD ðP þ QÞ6 ; where
3
2
2 ððP þ QÞQðP þ QÞ2 ÞD ¼ Y1 ðPQðP þ QÞ2 ÞD þ ðQ 2 ðP þ QÞ2 ÞD Y2
i
3i 2 P ðQ 2 ðP þ QÞ2 ÞD ðPQðP þ QÞ2 ÞD i¼1
þ PQðP þ QÞ2 Q 2 ðP þ QÞ2
4
4 Y1 ðPQðP þ QÞ2 ÞD þ ðQ 2 ðP þ QÞ2 ÞD Y2
i
5i 4 P 2 D 2 D 2 ðQ ðP þ QÞ Þ ðPQðP þ QÞ Þ ; i¼1
Y1 ¼
t1 X
ðQ 2 ðP þ QÞ2 Þp ðQ 2 ðP þ QÞ2 Þi ððPQðP þ QÞ2 ÞD Þi þ 1 ;
i¼0
Y2 ¼
t1 X
ððQ 2 ðP þ QÞ2 ÞD Þi þ 1 ðPQðP þ QÞ2 Þi ðPQðP þ QÞ2 Þp ;
i¼0
t ¼ maxfindðPQðP þ QÞ2 Þ; indðQ 2 ðP þ QÞ2 Þg:
Sign Pattern for Generalized Inverses
92
Applying the above theorem, the following representation for the Drazin inverse of block matrices is obtained. A B Theorem 2.53 [23]. Let M ¼ 2 Cnn , where A 2 Crr . If D ¼ CAD B, C D AAp BCA ¼ 0, CAp BCA ¼ 0, and BCAp BC ¼ 0, then k1 P iþ1 0 0 0 Ap B D M E E Iþ ¼ Iþ E 0 0 CAi Ap 0 i¼0 p p 0 A B 0 A B þ Iþ E E2 0 0 0 0 p k1 P iþ1 0 0 0 0 Ap B 3 A BC þ E E E þ Iþ 0 CAi Ap B 0 0 0 0 i¼0 p p k1 P iþ1 0 0 0 Ap B 4 0 A BCA B þ E E E þ Iþ CAi Ap BC 0 0 0 0 0 i¼0 k1 P iþ1 0 0 þ ; E i p 0 CA A BCAp B i¼0 where Ei ¼
I CAD
ðAW ÞD
i þ 1 A I ; AD B ; i > 1;
W ¼ AAD þ AD BCAD ; indðAÞ ¼ k: Proof. Partition M :
0 M¼ 0
Ap B A þ 0 C
AAD B CAD B
:¼ P þ Q:
Since AAp BCA ¼ 0, CAp BCA ¼ 0 and BCAp BC ¼ 0, we have P 2 ¼ 0, QPQ 2 ¼ 0 and PQPQ ¼ 0. Then from corollary 2.7, it is easy to see M D ¼ Q D þ ðQ D Þ2 P þ ðQ D Þ3 PQ þ ðQ D Þ4 PQP þ PðQ D Þ2 þ PðQ D Þ3 P þ PðQ D Þ4 PQ þ PðQ D Þ5 PQP: Partition Q as
AAp Q¼ CAp
2 D 0 A A þ 0 CAAD
AAD B CAD B
:¼ P1 þ Q1 :
Obviously, P1 Q1 ¼ 0 and P1k þ 1 ¼ 0, where k ¼ indðAÞ. Lemma 2.24 gives Q D ¼ Q1D þ Q1D
k 1 X ðQ1D Þi þ 1 P1i þ 1 ; i¼0
Generalized Inverses of Partitioned Matrices
93
and
\[
(Q^D)^s=(Q_1^D)^s+(Q_1^D)^s\sum_{i=0}^{k-1}(Q_1^D)^{i+1}P_1^{i+1},\qquad
P_1^{i+1}=\begin{pmatrix}A^{i+1}A^{\pi}&0\\CA^{i}A^{\pi}&0\end{pmatrix},\quad i\geqslant 0.
\]
Note that for $Q_1$ the generalized Schur complement $S_1=CA^DB-CAA^D(A^2A^D)^DAA^DB=0$, $(A^2A^D)^{\pi}AA^DB=0$ and $CAA^D(A^2A^D)^{\pi}=0$. Let $E=Q_1^D$; then theorem 2.2 implies
\[
E=\begin{pmatrix}I\\CA^D\end{pmatrix}\bigl((AW)^D\bigr)^2A\,\bigl(I,\ A^DB\bigr),
\]
where $W=AA^D+A^DBCA^D$. Manipulation gives
\[
E^{i}=\begin{pmatrix}I\\CA^D\end{pmatrix}\bigl((AW)^D\bigr)^{i+1}A\,\bigl(I,\ A^DB\bigr),\quad i\geqslant 1.
\]
This completes the proof. ∎

Theorem 2.54 [23]. Let $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in\mathbb{C}^{n\times n}$, where $A\in\mathbb{C}^{r\times r}$. If $D=CA^DB$, $CA^{\pi}BC=0$, $BCA^{\pi}AB=0$ and $BCA^{\pi}A^2=0$, then
\[
M^D=(I+S)E+(I+S)E^2P+(I+S)E^3PQ+P(I+S)E^2+P(I+S)E^3P+P(I+S)E^4PQ,
\]
where
\[
P=\begin{pmatrix}0&0\\CA^{\pi}&0\end{pmatrix},\qquad
Q=\begin{pmatrix}A&B\\CAA^D&CA^DB\end{pmatrix},\qquad
S=\sum_{i=0}^{k-1}\begin{pmatrix}A^{i+1}A^{\pi}&A^{i}A^{\pi}B\\0&0\end{pmatrix}E^{i+1},
\]
\[
E^{i}=\begin{pmatrix}I\\CA^D\end{pmatrix}\bigl((AW)^D\bigr)^{i+1}A\,\bigl(I,\ A^DB\bigr),\quad i\geqslant 1,\qquad
W=AA^D+A^DBCA^D,\qquad \mathrm{ind}(A)=k.
\]

Proof. Partition $M$ as
\[
M=\begin{pmatrix}0&0\\CA^{\pi}&0\end{pmatrix}+\begin{pmatrix}A&B\\CAA^D&CA^DB\end{pmatrix}:=P+Q.
\]
Since $CA^{\pi}BC=0$, $BCA^{\pi}AB=0$ and $BCA^{\pi}A^2=0$, we have $P^2=0$, $QPQ^2=0$ and $PQP=0$. Then corollary 2.7 leads to
\[
M^D=Q^D+(Q^D)^2P+(Q^D)^3PQ+P(Q^D)^2+P(Q^D)^3P+P(Q^D)^4PQ.
\]
Partition $Q$ as
\[
Q=\begin{pmatrix}A^2A^D&AA^DB\\CAA^D&CA^DB\end{pmatrix}+\begin{pmatrix}AA^{\pi}&A^{\pi}B\\0&0\end{pmatrix}:=P_1+Q_1.
\]
Obviously, $P_1Q_1=0$ and $Q_1^{k+1}=0$, where $k=\mathrm{ind}(A)$. Theorem 2.35 implies
\[
Q^D=P_1^D+\sum_{i=0}^{k-1}Q_1^{i+1}(P_1^D)^{i+1}P_1^D,
\]
and
\[
(Q^D)^s=(P_1^D)^s+\sum_{i=0}^{k-1}Q_1^{i+1}(P_1^D)^{i+1}(P_1^D)^s,\qquad
Q_1^{i+1}=\begin{pmatrix}A^{i+1}A^{\pi}&A^{i}A^{\pi}B\\0&0\end{pmatrix},\quad i\geqslant 0.
\]
Note that for $P_1$ the generalized Schur complement $S_1=CA^DB-CAA^D(A^2A^D)^DAA^DB=0$, $(A^2A^D)^{\pi}AA^DB=0$ and $CAA^D(A^2A^D)^{\pi}=0$. Let $E=P_1^D$; then theorem 2.2 implies
\[
E=\begin{pmatrix}I\\CA^D\end{pmatrix}\bigl((AW)^D\bigr)^2A\,\bigl(I,\ A^DB\bigr),
\]
where $W=AA^D+A^DBCA^D$. Then we obtain
\[
E^{i}=\begin{pmatrix}I\\CA^D\end{pmatrix}\bigl((AW)^D\bigr)^{i+1}A\,\bigl(I,\ A^DB\bigr),\quad i\geqslant 1.
\]
This completes the proof. ∎

Theorem 2.55 [31]. For $P,Q\in\mathbb{K}^{n\times n}$, if $PQ=0$, then
(1) $(P+Q)^{\#}$ exists if and only if $P^2$ and $Q^2$ are group invertible and $\mathrm{rank}(P+Q)=\mathrm{rank}(P^2)+\mathrm{rank}(Q^2)$;
(2) if $(P+Q)^{\#}$ exists, then
\[
(P+Q)^{\#}=(P^2)^{\#}P+Q(Q^2)^{\#}+QXP,
\]
where $X=((Q^2)^{\#})^2(P+Q)(P^2)^{\pi}+(Q^2)^{\pi}(P+Q)((P^2)^{\#})^2-(Q^2)^{\#}(P+Q)(P^2)^{\#}$.

Proof. By the core-nilpotent decomposition, there exists an invertible matrix $U$ such that
\[
P=U\begin{pmatrix}D&0\\0&N\end{pmatrix}U^{-1},\tag{2.60}
\]
where $D$ is invertible and $N$ is nilpotent. Let $Q=U\begin{pmatrix}Q_1&Q_2\\Q_3&Q_4\end{pmatrix}U^{-1}$, where the dimensions of $Q_1$ and $D$ are the same. Since $PQ=0$, we have
\[
Q=U\begin{pmatrix}0&0\\Q_3&Q_4\end{pmatrix}U^{-1},\qquad NQ_3=0,\quad NQ_4=0.\tag{2.61}
\]
Theorem 2.20 implies that $P+Q=U\begin{pmatrix}D&0\\Q_3&N+Q_4\end{pmatrix}U^{-1}$ is group invertible if and only if $(N+Q_4)^{\#}$ exists. Similarly, there exists an invertible matrix $V$ such that
\[
Q_4=V\begin{pmatrix}R&0\\0&S\end{pmatrix}V^{-1},\tag{2.62}
\]
where $R$ is invertible and $S$ is nilpotent. Let $N=V\begin{pmatrix}N_1&N_2\\N_3&N_4\end{pmatrix}V^{-1}$, where the dimensions of $N_1$ and $R$ are the same. Since $NQ_4=0$, we have
\[
N=V\begin{pmatrix}0&N_2\\0&N_4\end{pmatrix}V^{-1},\qquad N_2S=0,\quad N_4S=0.\tag{2.63}
\]
Then by theorem 2.19, $N+Q_4=V\begin{pmatrix}R&N_2\\0&N_4+S\end{pmatrix}V^{-1}$ is group invertible if and only if $(N_4+S)^{\#}$ exists. Since $N_4$ and $S$ are nilpotent and $N_4S=0$, the sum $N_4+S$ is nilpotent. Hence $(P+Q)^{\#}$ exists if and only if $N_4+S=0$.

(Necessity) Suppose $(P+Q)^{\#}$ exists. From the above deduction, $N_4+S=0$, i.e., $N_4=-S$. Since $N_4S=0$, we have $N_4^2=0$ and $S^2=0$. Since $N_4=-S$, and by (2.63), we have $N^2=0$. From (2.60), $P^2$ is group invertible. Since $S^2=0$, and by (2.62), $Q_4^2$ is group invertible. Consequently $Q^2=U\begin{pmatrix}0&0\\Q_4Q_3&Q_4^2\end{pmatrix}U^{-1}$ and
\[
\mathrm{rank}(Q^2)=\mathrm{rank}\begin{pmatrix}0&0\\Q_4Q_3&Q_4^2\end{pmatrix}=\mathrm{rank}(Q_4^2).
\]
Theorem 2.20 implies that $Q^2$ is group invertible. Hence
\[
\mathrm{rank}(P+Q)=\mathrm{rank}(D)+\mathrm{rank}(N+Q_4)=\mathrm{rank}(D)+\mathrm{rank}(Q_4^2)=\mathrm{rank}(P^2)+\mathrm{rank}(Q^2).
\]

(Sufficiency) If $P^2$ and $Q^2$ are group invertible, then by (2.60) and (2.61) we have $N^2=0$, $Q_4^2$ is group invertible and $\mathrm{rank}(Q^2)=\mathrm{rank}(Q_4^2)=\mathrm{rank}(R)$. Since
\[
\mathrm{rank}(P+Q)=\mathrm{rank}(D)+\mathrm{rank}(N+Q_4)=\mathrm{rank}(P^2)+\mathrm{rank}(N+Q_4)=\mathrm{rank}(P^2)+\mathrm{rank}(Q^2)=\mathrm{rank}(P^2)+\mathrm{rank}(R),
\]
we have $\mathrm{rank}(N+Q_4)=\mathrm{rank}(R)$. Since $N+Q_4=V\begin{pmatrix}R&N_2\\0&N_4+S\end{pmatrix}V^{-1}$, we obtain $N_4+S=0$. Hence $(P+Q)^{\#}$ exists, implying that
\[
(P+Q)^{\#}=U\begin{pmatrix}D&0\\Q_3&N+Q_4\end{pmatrix}^{\#}U^{-1}
=U\begin{pmatrix}D^{-1}&0\\(N+Q_4)^{\pi}Q_3D^{-2}-(N+Q_4)^{\#}Q_3D^{-1}&(N+Q_4)^{\#}\end{pmatrix}U^{-1}
\]
\[
=(P^2)^{\#}P+Q(Q^2)^{\#}+QXP,
\]
where $X=((Q^2)^{\#})^2(P+Q)(P^2)^{\pi}+(Q^2)^{\pi}(P+Q)((P^2)^{\#})^2-(Q^2)^{\#}(P+Q)(P^2)^{\#}$. This completes the proof. ∎

Theorem 2.56 [31]. Let $P,Q\in\mathbb{K}^{n\times n}$. If $PQ=0$, then $P+Q$ is invertible if and only if $P$ and $Q$ are group invertible and $\mathrm{rank}(P+Q)=n$. If $P+Q$ is invertible, then
\[
(P+Q)^{-1}=Q^{\pi}P^{\#}+Q^{\#}P^{\pi}.
\]

Proof. (Necessity) If $P+Q$ is invertible, then theorem 2.55 gives $\mathrm{rank}(P+Q)=\mathrm{rank}(P^2)+\mathrm{rank}(Q^2)=n$. Since $PQ=0$, we have $\mathrm{rank}(P)+\mathrm{rank}(Q)\leqslant n=\mathrm{rank}(P^2)+\mathrm{rank}(Q^2)$. Hence $\mathrm{rank}(P^2)=\mathrm{rank}(P)$ and $\mathrm{rank}(Q^2)=\mathrm{rank}(Q)$, so $P$ and $Q$ are group invertible.

(Sufficiency) Decompose $P$ and $Q$ as in (2.60) and (2.61). Since $P$ and $Q$ are group invertible, we have $N=0$, and theorem 2.20 gives that $Q_4$ is group invertible with $\mathrm{rank}(Q)=\mathrm{rank}(Q_4)$. Since $n=\mathrm{rank}(P+Q)\leqslant\mathrm{rank}(P)+\mathrm{rank}(Q)\leqslant n$ (the last inequality because $PQ=0$), we have $\mathrm{rank}(P)+\mathrm{rank}(Q)=\mathrm{rank}(D)+\mathrm{rank}(Q_4)=n$, so $Q_4$ is invertible. Hence $P+Q$ is invertible. If $P+Q$ is invertible, then
\[
(P+Q)^{-1}=U\begin{pmatrix}D&0\\Q_3&Q_4\end{pmatrix}^{-1}U^{-1}
=U\begin{pmatrix}D^{-1}&0\\-Q_4^{-1}Q_3D^{-1}&Q_4^{-1}\end{pmatrix}U^{-1}
=Q^{\pi}P^{\#}+Q^{\#}P^{\pi}.
\]
This completes the proof. ∎
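Theorem 2.56 is easy to check numerically. A minimal sketch (not from the book; the matrices below are hypothetical examples chosen so that $PQ=0$ and both factors are idempotent, hence each equals its own group inverse):

```python
import numpy as np

P = np.array([[1., 0.], [0., 0.]])   # P^2 = P, hence P^# = P
Q = np.array([[0., 0.], [1., 1.]])   # Q^2 = Q, hence Q^# = Q
assert np.allclose(P @ Q, 0)         # the hypothesis PQ = 0

Ppi = np.eye(2) - P                  # P^pi = I - P P^#
Qpi = np.eye(2) - Q                  # Q^pi = I - Q Q^#
lhs = np.linalg.inv(P + Q)
rhs = Qpi @ P + Q @ Ppi              # Q^pi P^# + Q^# P^pi
```

Here `lhs` and `rhs` agree, illustrating the formula $(P+Q)^{-1}=Q^{\pi}P^{\#}+Q^{\#}P^{\pi}$.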
2.4 Drazin Inverse Index for Partitioned Matrices
This section studies the relations between the Drazin index of a block matrix and the Drazin indices of its blocks. In 1989, D. Hershkowitz et al. characterized the Drazin index of upper triangular block matrices.

Theorem 2.57 [105]. Let $M=\begin{pmatrix}A&X\\0&B\end{pmatrix}\in\mathbb{C}^{n\times n}$, where $a=\mathrm{ind}(A)$ and $b=\mathrm{ind}(B)$. Let $X_k=\sum_{i=1}^{k}A^{k-i}XB^{i-1}$ (with $X_0=0$). Then $\mathrm{ind}(M)=a+b$ if and only if
\[
X\bigl[\mathcal{R}(B^{b-1})\cap\mathcal{N}(B)\bigr]\not\subseteq\mathcal{R}(A)+\mathcal{N}(A^{a-1}).
\]
Further, R. Bru et al. presented more general results on the Drazin index of upper triangular block matrices.

Theorem 2.58 [10]. Let $M=\begin{pmatrix}A&X\\0&B\end{pmatrix}\in\mathbb{C}^{n\times n}$, where $a=\mathrm{ind}(A)$ and $b=\mathrm{ind}(B)$. Let $X_k=\sum_{i=1}^{k}A^{k-i}XB^{i-1}$ ($X_0=0$).
(1) If $\mathrm{ind}(M)=a+b-(k-1)$, then $X_k[\mathcal{R}(B^{b-k})\cap\mathcal{N}(B^k)]\not\subseteq\mathcal{R}(A^k)+\mathcal{N}(A^{a-k})$, where $1\leqslant k\leqslant\min\{a,b\}$;
(2) $\mathrm{ind}(M)=a+b-(k-1)$ if and only if $X_i[\mathcal{R}(B^{b-i})\cap\mathcal{N}(B^i)]\subseteq\mathcal{R}(A^i)+\mathcal{N}(A^{a-i})$ $(i=0,1,2,\ldots,k-1)$ and $X_k[\mathcal{R}(B^{b-k})\cap\mathcal{N}(B^k)]\not\subseteq\mathcal{R}(A^k)+\mathcal{N}(A^{a-k})$, where $1\leqslant k\leqslant\min\{a,b\}$;
(3) $\mathrm{ind}(M)=\max\{a,b\}$ if and only if $X_i[\mathcal{R}(B^{b-i})\cap\mathcal{N}(B^i)]\subseteq\mathcal{R}(A^i)+\mathcal{N}(A^{a-i})$ $(i=0,1,2,\ldots,\min\{a,b\})$;
(4) $\mathrm{ind}(M)\leqslant q$ if and only if $\mathrm{ind}(A)\leqslant q$, $\mathrm{ind}(B)\leqslant q$ and $A^{\pi}X_qB^{\pi}=0$, where $q$ is a positive integer;
(5) the following conditions are equivalent:
\[
X_k[\mathcal{R}(B^{b-k})\cap\mathcal{N}(B^k)]\subseteq\mathcal{R}(A^k)+\mathcal{N}(A^{a-k}),\qquad
A^{\pi}A^{a-k}X_kB^{b-k}B^{\pi}=0,
\]
where $1\leqslant k\leqslant\min\{a,b\}$;
(6) $\mathrm{ind}(M)=\max\{a,b\}$ if and only if $A^{\pi}A^{a-i}X_iB^{b-i}B^{\pi}=0$ $(i=0,1,2,\ldots,\min\{a,b\})$.

In 2005, R.E. Hartwig et al. characterized the Drazin index of $2\times 2$ block matrices.
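The two extreme cases above can be seen on small examples. In the following sketch (not from the book) the coupling block $X$ is a hypothetical choice that pushes $\mathrm{ind}(M)$ up to $a+b$, while $X=0$ gives $\mathrm{ind}(M)=\max\{a,b\}$:

```python
import numpy as np

def ind(A, tol=1e-10):
    """Drazin index: smallest k with rank(A^(k+1)) == rank(A^k)."""
    r_prev, M = A.shape[0], np.eye(A.shape[0])
    for k in range(A.shape[0] + 1):
        M = M @ A
        r = np.linalg.matrix_rank(M, tol=tol)
        if r == r_prev:
            return k
        r_prev = r
    return A.shape[0]

A = np.array([[0., 1.], [0., 0.]])   # nilpotent Jordan block, a = ind(A) = 2
B = np.array([[0.]])                 # b = ind(B) = 1
X = np.array([[0.], [1.]])           # coupling block
M = np.block([[A, X], [np.zeros((1, 2)), B]])    # full coupling: ind(M) = a + b
M0 = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 2)), B]])  # ind(M0) = max{a, b}
```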
Theorem 2.59 [101]. For $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in\mathbb{C}^{n\times n}$, if $BC=0$, $DC=0$ and $D$ is nilpotent, then
\[
\mathrm{ind}(M)\leqslant\mathrm{ind}(A)+\mathrm{ind}(D)+1,
\]
and
\[
M^D=\begin{pmatrix}I\\CA^D\end{pmatrix}A^D\Bigl(I,\ \sum_{i=0}^{\mathrm{ind}(D)-1}(A^D)^{i+1}BD^{i}\Bigr).
\]
In 2012, Q. Xu et al. studied the Drazin inverse of triangular and anti-triangular block matrices.

Theorem 2.60 [211]. Let $H=\begin{pmatrix}A&B\\0&C\end{pmatrix}\in\mathbb{C}^{n\times n}$, where $A$ and $C$ are square, $\mathrm{ind}(A)=s$, $\mathrm{ind}(C)=t$ and $\bar{k}=\max\{s,t\}\geqslant 1$. Let $V_k=\sum_{i=0}^{k-1}A^{i}BC^{k-1-i}$, $V_0=0$, where $k\in\mathbb{N}$ is a positive integer. Then
\[
\mathrm{ind}(H)=\min\{k\in\mathbb{N}\mid k\geqslant\bar{k},\ A^{\pi}V_kC^{\pi}=0\}.
\]

Theorem 2.61 [211]. Let $H=\begin{pmatrix}A&B\\C&0\end{pmatrix}\in\mathbb{C}^{n\times n}$, where $A$ is square. If $AB=0$, $CA=0$ and $k=\max\{\mathrm{ind}(A^2+BC),\,\mathrm{ind}(CB)\}\geqslant 1$, then
\[
H^D=\begin{pmatrix}A^D&B(CB)^D\\C(BC)^D&0\end{pmatrix},
\]
and
(1) $\mathrm{ind}(H)\in\{2k-1,\,2k\}$;
(2) $\mathrm{ind}(H)=2k-1$ if and only if $A^{2k}A^D=A^{2k-1}$, $B(CB)^{k-1}(CB)^{\pi}=0$ and $C(BC)^{k-1}(BC)^{\pi}=0$.

Theorem 2.62 [211]. Let $H=\begin{pmatrix}A&B\\D&C\end{pmatrix}\in\mathbb{C}^{n\times n}$, where $C$ is nilpotent, $\mathrm{ind}(A)=s$ and $\mathrm{ind}(C)=t$. Let $V_k=\sum_{i=0}^{k-1}A^{i}BC^{k-1-i}$, $V_0=0$, where $k\in\mathbb{N}$ is a positive integer. If $BD=0$ and $CD=0$, then
\[
\mathrm{ind}(H)=\min_{k\geqslant\max\{s,1\}}\bigl\{k\ \big|\ A^{k+1}L=V_k,\ DA^{k-1}A^{\pi}=0,\ DA^{k}L=DV_{k-1}+C^{k}\bigr\},
\]
where $L=\sum_{j=0}^{t-1}(A^D)^{j+2}BC^{j}$.
In 2014, L. Yu et al. characterized the Drazin index of sums and products of matrices [220].

Lemma 2.26 [151]. Let $A\in\mathbb{C}^{n\times n}$, and let $p$ be a nonnegative integer. Then $\lim_{\varepsilon\to 0^{+}}\varepsilon^{p}(A+\varepsilon I)^{-1}$ exists if and only if $p\geqslant\mathrm{ind}(A)$.
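Lemma 2.26 is easy to observe numerically. A sketch (not from the book): for $A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ with $\mathrm{ind}(A)=2$, the quantity $\varepsilon^{p}(A+\varepsilon I)^{-1}$ stays bounded as $\varepsilon\to 0^{+}$ exactly when $p\geqslant 2$:

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])   # ind(A) = 2
# Here (A + eps I)^{-1} = [[1/eps, -1/eps^2], [0, 1/eps]], so p = 1 diverges
# like 1/eps while p = 2 converges to the norm of [[0, -1], [0, 0]].
norms = {p: [np.linalg.norm(eps ** p * np.linalg.inv(A + eps * np.eye(2)))
             for eps in (1e-2, 1e-4, 1e-6)]
         for p in (1, 2)}
```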
Lemma 2.27 [103, 152]. Let $M=\begin{pmatrix}A&0\\B&C\end{pmatrix}\in\mathbb{C}^{n\times n}$, where $A$ is square. Then
\[
\max\{\mathrm{ind}(A),\mathrm{ind}(C)\}\leqslant\mathrm{ind}(M)\leqslant\mathrm{ind}(A)+\mathrm{ind}(C).
\]

Theorem 2.63 [220]. For $P,Q\in\mathbb{C}^{n\times n}$, if $P^DQ=0$ and $PQP^{\pi}=0$, then
\[
\mathrm{ind}(P+Q)\leqslant\mathrm{ind}(P)+\mathrm{ind}(Q)-1.
\]

Proof. By the core-nilpotent decomposition, there exist an invertible matrix $U$, an invertible matrix $D$ and a nilpotent matrix $N$ such that
\[
P=U\begin{pmatrix}D&0\\0&N\end{pmatrix}U^{-1}.
\]
Then $P^D=U\begin{pmatrix}D^{-1}&0\\0&0\end{pmatrix}U^{-1}$ and $\mathrm{ind}(P)=\mathrm{ind}(N)$. Let $Q=U\begin{pmatrix}Q_1&Q_2\\Q_3&Q_4\end{pmatrix}U^{-1}$, where $Q_1$ is a square matrix of the same order as $D$. Since $P^DQ=0$, we have $Q_1=0$ and $Q_2=0$. Then
\[
Q=U\begin{pmatrix}0&0\\Q_3&Q_4\end{pmatrix}U^{-1}.
\]
Since $PQP^{\pi}=0$, we have $NQ_4=0$. Since
\[
P+Q=U\begin{pmatrix}D&0\\Q_3&N+Q_4\end{pmatrix}U^{-1},
\]
and by lemma 2.27, we obtain $\mathrm{ind}(P+Q)=\mathrm{ind}(N+Q_4)$. Since $NQ_4=0$, we obtain
\[
\varepsilon(N+Q_4+\varepsilon I)=(N+\varepsilon I)(Q_4+\varepsilon I).
\]
Let $m=\mathrm{ind}(Q_4)+\mathrm{ind}(N)-1$. Lemma 2.26 implies that
\[
\lim_{\varepsilon\to 0^{+}}\varepsilon^{m}(N+Q_4+\varepsilon I)^{-1}=\lim_{\varepsilon\to 0^{+}}\varepsilon^{m+1}(Q_4+\varepsilon I)^{-1}(N+\varepsilon I)^{-1}
\]
exists. Hence
\[
\mathrm{ind}(P+Q)=\mathrm{ind}(N+Q_4)\leqslant m=\mathrm{ind}(Q_4)+\mathrm{ind}(N)-1.
\]
Lemma 2.27 gives $\mathrm{ind}(Q_4)\leqslant\mathrm{ind}(Q)$. Since $\mathrm{ind}(P)=\mathrm{ind}(N)$, we have
\[
\mathrm{ind}(P+Q)\leqslant\mathrm{ind}(P)+\mathrm{ind}(Q)-1.
\]
The proof is complete. ∎
Theorem 2.64 [220]. For $A,B\in\mathbb{C}^{n\times n}$, if $s\geqslant\max\{\mathrm{ind}(AB),\mathrm{ind}(BA)\}$, then $(AB)^{s}$ and $(BA)^{s}$ are similar.

Proof. We know $(AB)^0$ and $(BA)^0$ are similar, so we consider the case $s\geqslant 1$. By the equivalence decomposition, there exist invertible matrices $P$ and $Q$ such that $A=P\begin{pmatrix}I_r&0\\0&0\end{pmatrix}Q$, where $I_r$ is the $r\times r$ identity matrix and $r=\mathrm{rank}(A)$. Let $B=Q^{-1}\begin{pmatrix}B_1&B_2\\B_3&B_4\end{pmatrix}P^{-1}$, where $B_1\in\mathbb{C}^{r\times r}$. Then
\[
AB=P\begin{pmatrix}B_1&B_2\\0&0\end{pmatrix}P^{-1},\qquad
(AB)^{s}=P\begin{pmatrix}B_1^{s}&B_1^{s-1}B_2\\0&0\end{pmatrix}P^{-1},
\]
\[
BA=Q^{-1}\begin{pmatrix}B_1&0\\B_3&0\end{pmatrix}Q,\qquad
(BA)^{s}=Q^{-1}\begin{pmatrix}B_1^{s}&0\\B_3B_1^{s-1}&0\end{pmatrix}Q.
\]
Since $s\geqslant\max\{\mathrm{ind}(AB),\mathrm{ind}(BA)\}$, the matrices $(AB)^{s}$ and $(BA)^{s}$ are group invertible. By corollaries 2.3 and 2.4, there exist matrices $X$ and $Y$ such that $B_1^{s}X=B_1^{s-1}B_2$ and $YB_1^{s}=B_3B_1^{s-1}$. Then
\[
(AB)^{s}=P\begin{pmatrix}B_1^{s}&B_1^{s}X\\0&0\end{pmatrix}P^{-1}
=P\begin{pmatrix}I&-X\\0&I\end{pmatrix}\begin{pmatrix}B_1^{s}&0\\0&0\end{pmatrix}\begin{pmatrix}I&X\\0&I\end{pmatrix}P^{-1},
\]
\[
(BA)^{s}=Q^{-1}\begin{pmatrix}I&0\\Y&I\end{pmatrix}\begin{pmatrix}B_1^{s}&0\\0&0\end{pmatrix}\begin{pmatrix}I&0\\-Y&I\end{pmatrix}Q.
\]
Hence $(AB)^{s}$ and $(BA)^{s}$ are both similar to $\begin{pmatrix}B_1^{s}&0\\0&0\end{pmatrix}$, and therefore similar to each other. ∎
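The hypothesis $s\geqslant\max\{\mathrm{ind}(AB),\mathrm{ind}(BA)\}$ cannot be dropped. A numerical sketch (not from the book) with hypothetical matrices for which $\mathrm{ind}(AB)=1$ and $\mathrm{ind}(BA)=2$:

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
B = np.array([[1., 0.], [0., 0.]])
AB, BA = A @ B, B @ A    # AB = 0, while BA = [[0, 1], [0, 0]]
# For s = 1 < max{ind(AB), ind(BA)} = 2, AB and BA are not similar
# (their ranks already differ); for s = 2 both powers vanish, so
# (AB)^2 and (BA)^2 are similar, as theorem 2.64 guarantees.
```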
Chapter 3
SNS and S²NS Matrices

In this chapter, we present some results on the sign pattern of nonsingular matrices over the real field, which are closely related to the sign-solvability of linear equations.

3.1 Sign-Solvability of Linear Equations
For a real number $a$, let $\mathrm{sgn}\,a$ or $\mathrm{sgn}(a)$ denote the sign of $a$, i.e.,
\[
\mathrm{sgn}(a)=\begin{cases}1,&a>0,\\-1,&a<0,\\0,&a=0.\end{cases}
\]
For a real matrix $A=(a_{ij})_{m\times n}$, the matrix $(\mathrm{sgn}\,a_{ij})_{m\times n}$ is called the sign pattern (matrix) of $A$ [221], denoted by $\mathrm{sgn}(A)$. For example,
\[
A=\begin{pmatrix}3&-2&2\\-2&4&5\\0&0&-3\end{pmatrix},\qquad
\mathrm{sgn}(A)=\begin{pmatrix}1&-1&1\\-1&1&1\\0&0&-1\end{pmatrix}.
\]
The set of matrices with the same sign pattern as $A$ is called the sign pattern class (or qualitative class) of $A$, denoted by $Q(A)$, i.e.,
\[
Q(A)=\{B\mid\mathrm{sgn}(B)=\mathrm{sgn}(A)\}.
\]
Sign pattern matrices form an active field in combinatorial matrix theory. Research on the foundations of sign pattern matrices and on generalized inverses of matrices can be found in [86, 112, 147, 217–219]. The study of sign-solvable linear equations [166] originated from the work of P.A. Samuelson, a Nobel Prize winner, in his classic book Foundations of Economic Analysis [99]. Next, we give the definition of sign-solvable linear equations.
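These definitions translate directly into code. A minimal sketch (not from the book; the sample matrix is a hypothetical example with entries of all three signs):

```python
import numpy as np

def sign_pattern(A):
    """Entrywise sign pattern sgn(A), with entries in {-1, 0, 1}."""
    return np.sign(A).astype(int)

def in_same_qualitative_class(A, B):
    """True iff sgn(A) = sgn(B), i.e. B lies in Q(A)."""
    return np.array_equal(sign_pattern(A), sign_pattern(B))

A = np.array([[3, -2, 2], [-2, 4, 5], [0, 0, -3]])
```

Scaling all magnitudes leaves the qualitative class unchanged, while flipping signs leaves it.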
DOI: 10.1051/978-2-7598-2599-8.c003 © Science Press, EDP Sciences, 2021
Definition 3.1 [13, 135]. For $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^{m}$, if the linear equation $\widetilde{A}x=\widetilde{b}$ is solvable and the sign patterns of the solutions of $\widetilde{A}x=\widetilde{b}$ are the same for all $\widetilde{A}\in Q(A)$ and $\widetilde{b}\in Q(b)$, then $Ax=b$ is called sign-solvable. The sign pattern class of the solutions of the sign-solvable linear equation $Ax=b$ is denoted by $Q(Ax=b)$.

For example, the linear system
\[
\begin{pmatrix}1&-1\\1&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\1\end{pmatrix}
\]
is sign-solvable, and the sign solution is a positive vector.

Definition 3.2 [13]. For $A\in\mathbb{R}^{m\times n}$ and $B\in\mathbb{R}^{m\times p}$, if the matrix equation $\widetilde{A}X=\widetilde{B}$ is solvable and the sign patterns of the solutions of $\widetilde{A}X=\widetilde{B}$ are the same for all $\widetilde{A}\in Q(A)$ and $\widetilde{B}\in Q(B)$, then $AX=B$ is called sign-solvable. Clearly, the sign-solvability of $AX=B$ is equivalent to the sign-solvability of $Ax=b$ for every column $b$ of $B$.

Definition 3.3 [175]. For a real matrix $A$, if each matrix $\widetilde{A}\in Q(A)$ is of full column rank, then $A$ is called an L-matrix. For example,
\[
A=\begin{pmatrix}1&1&1\\1&1&-1\\1&-1&1\\1&-1&-1\end{pmatrix}
\]
is an L-matrix [13].

The following theorem gives a necessary and sufficient condition for the sign-solvability of the linear equation $Ax=0$.

Theorem 3.1. The linear equation $Ax=0$ is sign-solvable if and only if $A$ is an L-matrix.

Proof. (Sufficiency) If each $\widetilde{A}\in Q(A)$ is of full column rank, then the only solution of $\widetilde{A}x=0$ is the trivial solution. Thus $Ax=0$ is sign-solvable.

(Necessity) Suppose that $Ax=0$ is sign-solvable. Since $x=0$ is a solution of $Ax=0$, the equation $\widetilde{A}x=0$ has only the solution $x=0$ for each $\widetilde{A}\in Q(A)$, i.e., each $\widetilde{A}$ is of full column rank. ∎

The following is a necessary condition for the sign-solvability of non-homogeneous linear equations.

Theorem 3.2 [13]. If the linear equation $Ax=b$ is sign-solvable, then $A$ is an L-matrix.

Proof. Suppose that $Ax=b$ is sign-solvable but $A$ is not an L-matrix. Then there exist $\widetilde{A}\in Q(A)$ and a nonzero vector $z$ such that $\widetilde{A}z=0$. Let $\widetilde{x}$ be a solution of
SNS and S2NS Matrices
103
$\widetilde{A}x=b$. For any real number $c$, we have $\widetilde{A}(\widetilde{x}+cz)=b$, i.e., $\widetilde{x}+cz\in Q(Ax=b)$. Thus we can find a real number $c$ such that $\mathrm{sgn}(\widetilde{x}+cz)\neq\mathrm{sgn}(\widetilde{x})$. This contradicts the assumption that $Ax=b$ is sign-solvable. So $A$ is an L-matrix. ∎

A square L-matrix is called a sign-nonsingular matrix [13].

Definition 3.4. For a real square matrix $A$, if each matrix $\widetilde{A}\in Q(A)$ is nonsingular, then $A$ is called a sign-nonsingular matrix, abbreviated SNS-matrix.

In order to characterize SNS-matrices, the concept of a signed determinant is given as follows.

Definition 3.5 [13, 135]. For $A\in\mathbb{R}^{n\times n}$, if $\mathrm{sgn}(\det(\widetilde{A}))=\mathrm{sgn}(\det(A))$ holds for each matrix $\widetilde{A}\in Q(A)$, then $A$ has a signed determinant.

The standard determinant expansion of a matrix $A=(a_{ij})$ of order $n$ is
\[
\det(A)=\sum_{\sigma}\mathrm{sgn}(\sigma)\,a_{1\sigma(1)}\cdots a_{n\sigma(n)},
\]
where $\sigma$ runs over the permutations of $\{1,2,\ldots,n\}$ and $\mathrm{sgn}(\sigma)$ denotes the sign of the permutation.

Lemma 3.1. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$. Then $A$ has a signed determinant if and only if one of the following statements holds:
(1) Each term in the standard determinant expansion of $A$ is zero.
(2) There exists a nonzero term in the standard determinant expansion of $A$, and all the nonzero terms have the same sign.

Proof. Sufficiency is obvious. Next we show the necessity. Suppose that $A$ has a signed determinant but statement (1) does not hold. Then there exists at least one nonzero term in the standard determinant expansion of $A$. Let
\[
t_{\sigma}=\mathrm{sgn}(\sigma)\,a_{1\sigma(1)}\cdots a_{n\sigma(n)}
\]
be a nonzero term. Take a matrix $A_1\in Q(A)$ satisfying
\[
(A_1)_{ij}=\begin{cases}\varepsilon a_{ij},&j=\sigma(i),\\a_{ij},&\text{otherwise},\end{cases}
\]
where $\varepsilon>0$. When $\varepsilon$ is sufficiently large, we have $\mathrm{sgn}(\det(A_1))=\mathrm{sgn}(t_{\sigma})$. Since $A$ has a signed determinant, all the nonzero terms in the standard determinant expansion of $A$ must have the same sign. ∎

From lemma 3.1, we obtain the following characterization of SNS-matrices.

Theorem 3.3 [13]. For a real square matrix $A$, the following statements are equivalent:
(1) $A$ is an SNS-matrix.
(2) $\det(A)\neq 0$ and $A$ has a signed determinant.
(3) There exists a nonzero term in the standard determinant expansion of $A$, and all the nonzero terms have the same sign.

Theorem 3.3 is an important tool for recognizing SNS-matrices.

Example 3.1 [13]. Let $n$ be an integer with $n\geqslant 2$. Take the Hessenberg matrix
\[
H_n=\begin{pmatrix}
-1&1&0&\cdots&0&0\\
-1&-1&1&\cdots&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
-1&-1&-1&\cdots&-1&1\\
-1&-1&-1&\cdots&-1&-1
\end{pmatrix}\in\mathbb{R}^{n\times n}.
\]
Since each term in the standard determinant expansion of $H_n$ is $(-1)^{n}$, from theorem 3.3, $H_n$ is an SNS-matrix.

If $\det(\widetilde{A})=0$ holds for each $\widetilde{A}\in Q(A)$, then we say $A$ has an identically zero determinant. For $A\in\mathbb{R}^{n\times n}$ and $u\in\mathbb{R}^{n}$, let $A(i\to u)$ denote the matrix obtained from $A$ by replacing the $i$-th column with $u$ $(i=1,2,\ldots,n)$.

Theorem 3.4 [13]. Let $A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^{n}$. Then the linear equation $Ax=b$ is sign-solvable if and only if $A$ is an SNS-matrix and, for $i=1,2,\ldots,n$, each $A(i\to b)$ is either an SNS-matrix or has an identically zero determinant.

Proof. Assume that $Ax=b$ is sign-solvable. It follows from theorem 3.2 that $A$ is an SNS-matrix. For each $\widetilde{A}\in Q(A)$ and $\widetilde{b}\in Q(b)$, by Cramer's rule, $\widetilde{A}x=\widetilde{b}$ has a unique solution $\widetilde{x}=(\widetilde{x}_1,\widetilde{x}_2,\ldots,\widetilde{x}_n)^{\top}$, where
\[
\widetilde{x}_i=\frac{\det(\widetilde{A}(i\to\widetilde{b}))}{\det(\widetilde{A})},\qquad i=1,2,\ldots,n.
\]
Theorem 3.3 implies that $\mathrm{sgn}(\det(\widetilde{A}))=\mathrm{sgn}(\det(A))$. Thus, for any $i\in\{1,2,\ldots,n\}$, the signs of $\det(\widetilde{A}(i\to\widetilde{b}))$ and $\det(A(i\to b))$ must agree, so each $A(i\to b)$ is either an SNS-matrix or has an identically zero determinant. The converse is an immediate consequence of Cramer's rule. ∎

Definition 3.6 [13, 135]. Let $A$ be an $m\times n$ matrix. The term rank of $A$, denoted by $\rho(A)$, is the maximum number of nonzero entries of $A$ no two of which lie in the same row or column. If $\rho(A)=n$ (respectively $\rho(A)=m$), then $A$ is said to be of full column (respectively row) term rank; both are referred to as full term rank matrices.

Theorem 3.5. Let $A\in\mathbb{R}^{n\times n}$. Then $A$ has an identically zero determinant if and only if $\rho(A)<n$.

Proof. By the proof of lemma 3.1, $A$ has an identically zero determinant if and only if statement (1) of lemma 3.1 holds, i.e., $\rho(A)<n$. ∎
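For small matrices, theorem 3.3 can be checked by brute force over the standard determinant expansion. A sketch (not from the book), tested on the Hessenberg matrix $H_5$ of example 3.1:

```python
import numpy as np
from itertools import permutations

def nonzero_term_signs(A):
    """Signs of the nonzero terms in the standard determinant expansion of A."""
    n = len(A)
    signs = set()
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = (-1) ** inversions * np.prod([A[i][p[i]] for i in range(n)])
        if term != 0:
            signs.add(1 if term > 0 else -1)
    return signs

def is_sns(A):
    """Theorem 3.3: A is an SNS-matrix iff some term of the standard determinant
    expansion is nonzero and all nonzero terms have the same sign."""
    return len(nonzero_term_signs(A)) == 1

# H_n from example 3.1: -1 on and below the diagonal, +1 on the superdiagonal.
H5 = -np.tril(np.ones((5, 5))) + np.diag(np.ones(4), 1)
```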
If $A$ is an SNS-matrix, does $\mathrm{sgn}(A_1^{-1})=\mathrm{sgn}(A_2^{-1})$ hold for any matrices $A_1,A_2\in Q(A)$? The answer is no.

Example 3.2 [13, 135]. The matrices
\[
A_1=\begin{pmatrix}1&-1&0\\2&1&-1\\1&1&1\end{pmatrix},\qquad
A_2=\begin{pmatrix}1&-1&0\\1&1&-1\\1&1&1\end{pmatrix}
\]
are both SNS-matrices with the same sign pattern, and
\[
A_1^{-1}=\begin{pmatrix}\tfrac{2}{5}&\tfrac{1}{5}&\tfrac{1}{5}\\[2pt]-\tfrac{3}{5}&\tfrac{1}{5}&\tfrac{1}{5}\\[2pt]\tfrac{1}{5}&-\tfrac{2}{5}&\tfrac{3}{5}\end{pmatrix},\qquad
A_2^{-1}=\begin{pmatrix}\tfrac{1}{2}&\tfrac{1}{4}&\tfrac{1}{4}\\[2pt]-\tfrac{1}{2}&\tfrac{1}{4}&\tfrac{1}{4}\\[2pt]0&-\tfrac{1}{2}&\tfrac{1}{2}\end{pmatrix}.
\]
Clearly, the signs of $(A_1^{-1})_{31}$ and $(A_2^{-1})_{31}$ are distinct.

Definition 3.7 [13, 135]. Let $A$ be an SNS-matrix. If $\mathrm{sgn}(\widetilde{A}^{-1})=\mathrm{sgn}(A^{-1})$ for each $\widetilde{A}\in Q(A)$, then $A$ is called a strong SNS-matrix, abbreviated S²NS-matrix.

Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$. It is easy to check that the matrix equation $AX=I_n$ is sign-solvable if and only if $A$ is an S²NS-matrix. By theorem 3.3, the following result is obtained immediately.

Theorem 3.6 [13]. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$. Then $A$ is an S²NS-matrix if and only if the following statements hold:
(1) $A$ is an SNS-matrix.
(2) If $a_{ij}=0$, then $A(i,j)$ is an SNS-matrix or has an identically zero determinant, where $A(i,j)$ is the submatrix obtained from $A$ by deleting the $i$-th row and $j$-th column.

Example 3.3 [13]. Let
\[
G_n=\begin{pmatrix}
1&-1&0&\cdots&0&0\\
0&1&-1&\cdots&0&0\\
0&0&1&\cdots&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&\cdots&1&-1\\
1&0&0&\cdots&0&1
\end{pmatrix}.
\]
By calculation, all terms in the standard determinant expansion of $G_n$ are positive. By theorem 3.3, $G_n$ is an SNS-matrix. Furthermore, there is only one nonzero term in the standard determinant expansion of each $(n-1)$-order submatrix of $G_n$, so each $(n-1)$-order submatrix of $G_n$ is an SNS-matrix. Thus, it follows from theorem 3.6 that $G_n$ is an S²NS-matrix.
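The contrast between examples 3.2 and 3.3 can be observed numerically: sampling random magnitudes within the sign pattern of $G_5$ always produces inverses with one and the same sign pattern, while the pair $A_1, A_2$ of example 3.2 does not. A sketch (not from the book; the magnitude range is a hypothetical choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_in_Q(S, rng):
    """A random matrix with the same sign pattern as S (arbitrary magnitudes)."""
    return np.sign(S) * rng.uniform(0.5, 2.0, np.shape(S))

# G5 as in example 3.3: 1 on the diagonal, -1 on the superdiagonal, 1 in (5,1).
G5 = np.eye(5) - np.diag(np.ones(4), 1)
G5[4, 0] = 1.0
patterns = {tuple(np.sign(np.linalg.inv(random_in_Q(G5, rng))).astype(int).ravel())
            for _ in range(200)}   # S2NS: expect exactly one inverse sign pattern

# Example 3.2: same sign pattern, but the (3,1)-entries of the inverses differ.
A1 = np.array([[1., -1., 0.], [2., 1., -1.], [1., 1., 1.]])
A2 = np.array([[1., -1., 0.], [1., 1., -1.], [1., 1., 1.]])
```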
Definition 3.8 [13, 135]. Let $B\in\mathbb{R}^{n\times(n+1)}$. If each $n$-order submatrix of $B$ is an SNS-matrix, then $B$ is called an $S^{*}$-matrix.

Let $A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^{n}$. If the solutions of $Ax=b$ have no zero coordinates, then by theorem 3.4, $Ax=b$ is sign-solvable if and only if $B=[A,\ b]$ is an $S^{*}$-matrix.

Let $B=[A,\ b]$ be an $S^{*}$-matrix, where $A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^{n}$, and let $x=(x_1,x_2,\ldots,x_{n+1})^{\top}$. Then $Bx=0$ is equivalent to
\[
A(x_1,x_2,\ldots,x_n)^{\top}=-x_{n+1}b.
\]
If $x_{n+1}=0$, then $x=0$ since $A$ is an SNS-matrix. If $x_{n+1}\neq 0$, then all coordinates of $x$ are nonzero by Cramer's rule. Thus there is a vector $w$ without zero coordinates such that $x\in\{0\}\cup Q(w)\cup Q(-w)$. Then we obtain the following theorem.

Theorem 3.7. For $B\in\mathbb{R}^{n\times(n+1)}$, $B$ is an $S^{*}$-matrix if and only if there exists a vector $w\in\mathbb{R}^{n+1}$ without zero coordinates such that the null space of each matrix $\widetilde{B}\in Q(B)$ is contained in $\{0\}\cup Q(w)\cup Q(-w)$.

Theorem 3.8 [13]. Let $A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^{n}$. The linear equation $Ax=b$ is sign-solvable if and only if for any $\widetilde{A}\in Q(A)$ and $\widetilde{b}\in Q(b)$ the following statements hold:
(1) $\widetilde{A}x=\widetilde{b}$ is solvable.
(2) There exists a vector $w$ such that the null space of $[\widetilde{A},\ \widetilde{b}]$ is contained in $\{0\}\cup Q(w)\cup Q(-w)$.

Proof. Suppose that $Ax=b$ is sign-solvable; then $\widetilde{A}x=\widetilde{b}$ is solvable for any $\widetilde{A}\in Q(A)$ and $\widetilde{b}\in Q(b)$. It follows from theorem 3.2 that $A$ is an L-matrix. Let $z$ be a vector satisfying $[\widetilde{A},\ \widetilde{b}]z=0$, let $c$ denote the last coordinate of $z$, and let $z'$ be obtained from $z$ by deleting its last coordinate. Clearly, $\widetilde{A}z'=-c\widetilde{b}$. If $c=0$, then $z'=0$ since $A$ is an L-matrix. If $c\neq 0$, then $-z'/c\in Q(Ax=b)$ since $Ax=b$ is sign-solvable. Thus there exists a vector $w$ such that the null space of $[\widetilde{A},\ \widetilde{b}]$ is contained in $\{0\}\cup Q(w)\cup Q(-w)$. The converse follows in a similar way. ∎

Let $A$ be an $m\times n$ matrix. For a positive integer $p$, let $[p]$ denote $\{1,2,\ldots,p\}$, and let $\alpha$ and $\beta$ denote subsets of $[m]$ and $[n]$, respectively. Let $A[\alpha,\beta]$ be the submatrix of $A$ whose row index set is $\alpha$ and whose column index set is $\beta$. If $\alpha=[m]$ or $\beta=[n]$, we shorten the notation $A[\alpha,\beta]$ to $A[:,\beta]$ or $A[\alpha,:]$, respectively. Let $A(\alpha,\beta)$ be the submatrix obtained from $A$ by deleting the rows in $\alpha$ and the columns in $\beta$. For a vector $z$, $z[\alpha]$ is the vector consisting of the coordinates of $z$ whose indices are in $\alpha$.

Theorem 3.9 [124, 148]. For $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^{m}$, let $z=(z_1,z_2,\ldots,z_n)^{\top}$ be a solution of the linear equation $Ax=b$. Let
\[
\beta=\{j\mid z_j\neq 0\},\qquad \alpha=\{i\mid a_{ij}\neq 0\ \text{for some}\ j\in\beta\}.
\]
Then $Ax=b$ is sign-solvable if and only if the matrix $[A[\alpha,\beta],\ b[\alpha]]$ is an $S^{*}$-matrix and $A(\alpha,\beta)$ is an L-matrix.

Proof. Without loss of generality, assume that $\beta=\{1,2,\ldots,l\}$ and $\alpha=\{1,2,\ldots,k\}$, where $l$ and $k$ are nonnegative integers (when $\beta=\varnothing$ or $\alpha=\varnothing$, the corresponding blocks are vacuous). Then
\[
A=\begin{pmatrix}A_1&A_3\\0&A_2\end{pmatrix},
\]
where $A_1$ is a $k\times l$ matrix without zero rows, and the linear system $Ax=b$ can be written as
\[
A_1x^{(1)}+A_3x^{(2)}=b[\alpha],\qquad A_2x^{(2)}=0.
\]

(Necessity) Since $Ax=b$ is sign-solvable, we have $x^{(2)}=0$. Then $A_1x^{(1)}=b[\alpha]$ is sign-solvable, $\begin{pmatrix}x^{(1)}\\x^{(2)}\end{pmatrix}\in Q(z)$, and
\[
Q\bigl(A_1x^{(1)}=b[\alpha]\bigr)=Q\bigl((z_1,z_2,\ldots,z_l)^{\top}\bigr).
\]
Theorem 3.2 implies that $A$ and $A_1$ are both L-matrices. When $\beta$ is empty, $A_2=A(\alpha,\beta)=A$ is an L-matrix. When $\beta$ is nonempty, since $A_1$ is an L-matrix, we know that $k\geqslant l$. Next, we prove that $l=k$. Without loss of generality, assume that the first $l$ rows of $A_1$ are linearly independent. Then the linear equation
\[
A_1[\beta,:]\,x^{(1)}=b[\beta]
\]
has a unique solution $x^{(1)}=z[\beta]$, and $x^{(1)}$ has no zero entries. If $k>l$, since each row of $A_1$ has a nonzero entry, there exists a matrix $\widetilde{A}_1\in Q(A_1)$ such that $\widetilde{A}_1[\beta,:]=A_1[\beta,:]$ and
\[
\widetilde{A}_1z[\beta]\neq b[\alpha].
\]
This contradicts the sign-solvability of $A_1x^{(1)}=b[\alpha]$. Hence $l=k$. By theorems 3.7 and 3.8, we get that
\[
[A_1,\ b[\alpha]]\tag{3.1}
\]
is an $S^{*}$-matrix.

Next, we prove that $A_2$ is an L-matrix. For any $\widetilde{A}_2\in Q(A_2)$, suppose that $\widetilde{u}$ satisfies $\widetilde{A}_2\widetilde{u}=0$. Since $A_1$ is an SNS-matrix, the equation $A_1x^{(1)}=b[\alpha]-A_3\widetilde{u}$ has a solution. Let
\[
\widetilde{A}=\begin{pmatrix}A_1&A_3\\0&\widetilde{A}_2\end{pmatrix}\in Q(A).
\]
Since the solution of $\widetilde{A}x=b$ belongs to the sign pattern class of $z$, we know that $\widetilde{u}=0$, and hence $A_2$ is an L-matrix.

(Sufficiency) Assume that the matrix in (3.1) is an $S^{*}$-matrix and $A_2$ is an L-matrix. For any
\[
\widetilde{A}=\begin{pmatrix}\widetilde{A}_1&\widetilde{A}_3\\0&\widetilde{A}_2\end{pmatrix}\in Q(A),\qquad \widetilde{b}\in Q(b),
\]
the linear system $\widetilde{A}x=\widetilde{b}$ can be written as
\[
\widetilde{A}_1x^{(1)}+\widetilde{A}_3x^{(2)}=\widetilde{b}[\alpha],\qquad \widetilde{A}_2x^{(2)}=0.\tag{3.2}
\]
Since $\widetilde{A}_2$ is an L-matrix and the matrix in (3.1) is an $S^{*}$-matrix, (3.2) has a unique solution $\begin{pmatrix}\widetilde{z}^{(1)}\\\widetilde{z}^{(2)}\end{pmatrix}$, where $\widetilde{z}^{(2)}=0$ and $\widetilde{z}^{(1)}$ is the unique solution of $\widetilde{A}_1x^{(1)}=\widetilde{b}[\alpha]$. Thus $Ax=b$ is sign-solvable. ∎

By theorem 3.9, the study of the sign-solvability of linear equations reduces to the study of $S^{*}$-matrices and L-matrices. In [13], there are many results on $S^{*}$-matrices and L-matrices. Characterizations of sign-solvable linear equations via digraphs can be found in [169].
3.2 Characterizations for SNS and S²NS Matrices via Digraphs

Signed digraphs are an effective tool for studying SNS and S²NS-matrices. Firstly, we introduce some concepts on signed digraphs [13].

Let $D$ be a digraph. If we assign a positive or negative sign to each arc of $D$, then $D$ is called a signed digraph. A path $\alpha$ from vertex $i$ to vertex $j$ of length $k$ in $D$ is a sequence $i=i_0,i_1,\ldots,i_{k-1},i_k=j$ of distinct vertices such that $(i_{p-1},i_p)$ is an arc of $D$ from $i_{p-1}$ to $i_p$ $(p=1,2,\ldots,k)$. We denote the path $\alpha$ by
\[
i_0\to i_1\to\cdots\to i_{k-1}\to i_k.
\]
When the initial and terminal vertices coincide, the sequence is called a directed cycle, denoted by $j_1\to j_2\to\cdots\to j_k\to j_1$. The digraph $D$ is strongly connected provided that for each pair of distinct vertices $u$ and $v$ there is a path in $D$ from $u$ to $v$ and a path from $v$ to $u$. The maximal strongly connected subdigraphs of $D$ are called the strongly connected components of $D$.
Definition 3.9 [13]. For an $n\times n$ matrix $A=(a_{ij})$, the directed graph with vertex set $V=\{1,2,\ldots,n\}$ and arc set $E=\{(i,j)\mid a_{ij}\neq 0,\ i\neq j\}$ is called the associated digraph of $A$, denoted by $D(A)$. Note that $D(A)$ is determined by the off-diagonal entries of $A$.

Definition 3.10 [12]. Let $A$ be a square matrix. If there exists a permutation matrix $P$ such that
\[
PAP^{\top}=\begin{pmatrix}B_1&0\\B_2&B_3\end{pmatrix},
\]
where $B_1$ and $B_3$ are square matrices, then $A$ is called reducible; otherwise, $A$ is called irreducible.

Theorem 3.10 [12]. A square matrix $A$ is irreducible if and only if $D(A)$ is strongly connected.

By definition 3.10, for any reducible matrix $A$ there exists a permutation matrix $P$ such that
\[
PAP^{\top}=\begin{pmatrix}A_1&0&\cdots&0\\A_{21}&A_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\A_{k1}&A_{k2}&\cdots&A_k\end{pmatrix},
\]
where $A_1,A_2,\ldots,A_k$ are irreducible matrices and $D(A_1),\ldots,D(A_k)$ are the strongly connected components of $D(A)$. Each $A_i$ is also called an irreducible component of $A$.

Definition 3.11 [135]. Let $A$ be a square matrix. If there exist permutation matrices $P$ and $Q$ such that
\[
PAQ=\begin{pmatrix}B_1&0\\B_2&B_3\end{pmatrix},
\]
where $B_1$ and $B_3$ are square matrices, then $A$ is called partly decomposable; otherwise, $A$ is called fully indecomposable. By definitions 3.10 and 3.11, a fully indecomposable matrix is an irreducible matrix.

Definition 3.12 [13]. For a square matrix $A$, assign the number $\mathrm{sgn}\,a_{ij}$ to each arc $(i,j)$ of $D(A)$; the resulting digraph is called the signed digraph of $A$, denoted by $S(A)$. The sign of a subgraph of $S(A)$ is the product of the signs of all its arcs.

Example 3.4. For a square matrix $A$, let $\alpha$ be a path in the signed digraph $S(A)$, with
\[
\alpha=(i=i_0\to i_1\to\cdots\to i_{k-1}\to i_k=j).
\]
The sign of $\alpha$ is
\[
\mathrm{sgn}(\alpha)=\mathrm{sgn}(a_{i_0i_1}a_{i_1i_2}\cdots a_{i_{k-1}i_k}).
\]
In particular, the sign of a path of length $0$ is $1$.

Example 3.5. Let $\gamma$ be a directed cycle in the signed digraph $S(A)$, with $\gamma=j_1\to j_2\to\cdots\to j_k\to j_1$. The sign of $\gamma$ is
\[
\mathrm{sgn}(\gamma)=\mathrm{sgn}(a_{j_1j_2}a_{j_2j_3}\cdots a_{j_{k-1}j_k}a_{j_kj_1}).
\]
If the sign of a directed path or directed cycle is $+1$ (respectively $-1$), we say that the path or cycle is positive (respectively negative).

Let $A$ be a square matrix of order $n$. If there is a nonzero term in the standard determinant expansion of $A$, then there exist a permutation matrix $P$ and an invertible diagonal matrix $D$ such that all the diagonal entries of $DPA$ are negative. Thus, in investigating SNS-matrices, one may assume without loss of generality that the diagonal entries of $A$ are all negative. In the following, a characterization of SNS-matrices is given in terms of signed digraphs.

Theorem 3.11 [3]. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a matrix whose diagonal entries are negative. Then $A$ is an SNS-matrix if and only if each directed cycle in the signed digraph $S(A)$ is negative.

Proof. (Necessity) Assume that $A$ is an SNS-matrix, and let
\[
\gamma=j_1\to j_2\to\cdots\to j_k\to j_1
\]
be a directed cycle of $S(A)$. Then
\[
(-1)^{k-1}a_{j_1j_2}a_{j_2j_3}\cdots a_{j_{k-1}j_k}a_{j_kj_1}\prod_{i\neq j_1,j_2,\ldots,j_k}a_{ii}
\]
is a nonzero term in the standard determinant expansion of $A$, and its sign is $(-1)^{n-1}\mathrm{sgn}(\gamma)$. From theorem 3.3, we have
\[
(-1)^{n-1}\mathrm{sgn}(\gamma)=\mathrm{sgn}(a_{11}a_{22}\cdots a_{nn})=(-1)^{n}.
\]
Hence $\mathrm{sgn}(\gamma)=-1$.

(Sufficiency) Assume that each directed cycle $\gamma$ of $S(A)$ is negative. Let $\sigma$ be a permutation of $\{1,2,\ldots,n\}$ such that
\[
t_{\sigma}=\mathrm{sgn}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)}
\]
is a nonzero term in the standard determinant expansion of $A$. Let $\gamma_1,\gamma_2,\ldots,\gamma_l$ be the (permutation) cycles of $\sigma$ of length greater than one, and let $p$ denote the number of cycles of length $1$. Each $\gamma_i$ corresponds to a directed cycle of $S(A)$. Since the sign of the permutation $\sigma$ is $(-1)^{n-l-p}$ and each diagonal entry of $A$ is negative, we have
\[
\mathrm{sgn}(t_{\sigma})=(-1)^{n-l-p}(-1)^{p}\prod_{i=1}^{l}\mathrm{sgn}(\gamma_i)=(-1)^{n}.
\]
Hence the sign of each nonzero term in the standard determinant expansion of $A$ is $(-1)^{n}$. By theorem 3.3, $A$ is an SNS-matrix. ∎

Let $A=(a_{ij})$ be an $n\times n$ matrix whose diagonal entries are negative. Assign a weight $\mathrm{wt}(a_{ij})$ to each arc $(i,j)$ of $D(A)$, where
\[
\mathrm{wt}(a_{ij})=\begin{cases}0,&a_{ij}>0,\\1,&a_{ij}<0,\end{cases}
\]
and define the weight of a directed cycle in $D(A)$ as the sum of the weights of its arcs. By theorem 3.11, $A$ is an SNS-matrix if and only if the weight of every directed cycle in $D(A)$ is odd.

Example 3.6 [13]. Let
\[
H_5=\begin{pmatrix}
-1&1&0&0&0\\
-1&-1&1&0&0\\
-1&-1&-1&1&0\\
-1&-1&-1&-1&1\\
-1&-1&-1&-1&-1
\end{pmatrix}.
\]
In section 3.1 we verified that $H_5$ is an SNS-matrix; here we give another verification by theorem 3.11. The positive arcs of $S(H_5)$ are $(1,2),(2,3),(3,4),(4,5)$, and the negative arcs are the arcs $(i,j)$ with $i>j$. Each directed cycle of $S(H_5)$ has the form
\[
p\to p+1\to p+2\to\cdots\to q\to p,
\]
where $1\leqslant p<q\leqslant 5$. Such a cycle uses exactly one negative arc, namely $(q,p)$, so the sign of each directed cycle in $S(H_5)$ is negative. By theorem 3.11, $H_5$ is an SNS-matrix.

From theorem 3.11, we obtain the following corollary.

Corollary 3.1 [13]. Let $A=(a_{ij})$ be an SNS-matrix of order $n$ whose diagonal entries are negative, and suppose $A$ has a zero entry $a_{rs}=0$. Let $E_{rs}$ be the $n\times n$ $(0,1)$-matrix whose unique nonzero entry is in the $(r,s)$-position. Then $A+E_{rs}$ or $A-E_{rs}$ is an SNS-matrix if and only if all the paths from $s$ to $r$ in the signed digraph $S(A)$ have the same sign. Furthermore, if all the paths from $s$ to $r$ are positive, then $A-E_{rs}$ is an SNS-matrix; if all the paths from $s$ to $r$ are negative, then $A+E_{rs}$ is an SNS-matrix; and if there are no paths from $s$ to $r$, then $A+E_{rs}$ and $A-E_{rs}$ are both SNS-matrices.
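The cycle criterion of theorem 3.11 can be checked by brute force for small matrices. A sketch (not from the book) that enumerates all directed simple cycles of $S(A)$, tested on $H_5$ from example 3.6:

```python
from itertools import permutations

def all_cycles_negative(A):
    """Theorem 3.11 check for a small matrix with negative diagonal:
    every directed cycle of S(A) must have a negative sign."""
    n = len(A)
    for k in range(2, n + 1):
        for cyc in permutations(range(n), k):
            if cyc[0] != min(cyc):            # fix the rotation; each cycle once
                continue
            arcs = list(zip(cyc, cyc[1:] + (cyc[0],)))
            if all(A[i][j] != 0 for i, j in arcs):
                sign = 1
                for i, j in arcs:
                    sign *= 1 if A[i][j] > 0 else -1
                if sign > 0:                   # a positive cycle: not SNS
                    return False
    return True

H5 = [[-1, 1, 0, 0, 0],
      [-1, -1, 1, 0, 0],
      [-1, -1, -1, 1, 0],
      [-1, -1, -1, -1, 1],
      [-1, -1, -1, -1, -1]]
```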
Proof. Since $A$ is an SNS-matrix, it follows from theorem 3.11 that $A-E_{rs}$ is an SNS-matrix if and only if each directed cycle in the signed digraph $S(A-E_{rs})$ containing the arc $(r,s)$ is negative. Hence $A-E_{rs}$ is an SNS-matrix if and only if all the paths from $s$ to $r$ in $S(A)$ are positive. Similarly, $A+E_{rs}$ is an SNS-matrix if and only if all the paths from $s$ to $r$ in $S(A)$ are negative. ∎

Next, characterizations of S²NS-matrices are given in terms of signed digraphs.

Theorem 3.12 [13]. Let $A=(a_{ij})$ be an $n\times n$ matrix whose diagonal entries are negative. Then $A$ is an S²NS-matrix if and only if the following statements hold:
(1) Each directed cycle in the signed digraph $S(A)$ is negative.
(2) All paths with the same initial vertex and the same terminal vertex have the same sign.

Proof. Let $E_{ij}$ be the $(0,1)$-matrix of order $n$ whose only nonzero entry is in the $(i,j)$-position. It follows from theorem 3.6 that $A$ is an S²NS-matrix if and only if $A$ is an SNS-matrix and, for each $i$ and $j$ with $a_{ij}=0$, either $A+E_{ij}$ or $A-E_{ij}$ is an SNS-matrix. Then by theorem 3.11 and corollary 3.1, $A$ is an S²NS-matrix if and only if (1) and (2) hold. ∎

If $A$ is an S²NS-matrix, then the sign pattern of the inverse of each matrix in $Q(A)$ is determined by the sign pattern of $A$. In order to discuss the sign pattern of the inverses of an S²NS-matrix, we prove the following lemma.

Lemma 3.2 [13]. Let $A=(a_{ij})$ be an $n\times n$ matrix whose diagonal entries are negative. Then there exists $\widetilde{A}\in Q(A)$ such that the $(r,s)$-entry of $\widetilde{A}^{-1}$ is positive (or negative) if and only if there exists a negative (or positive) path from $r$ to $s$ in the signed digraph $S(A)$, where $r\neq s$.

Proof. (Sufficiency) Without loss of generality, assume that $A$ is a $(0,1,-1)$-matrix. Suppose that there exists a negative path
\[
r\to i_1\to\cdots\to i_{k-2}\to s
\]
in $S(A)$. Let $e_s$ be the column vector of order $n$ whose $s$-th component is $1$ and whose other components are all zero. Then
\[
(-1)^{k-1}(-1)^{n-k}a_{ri_1}\cdots a_{i_{k-2},s}=(-1)^{n-1}a_{ri_1}\cdots a_{i_{k-2},s}
\]
is a nonzero term in the standard determinant expansion of $A(r\to e_s)$. Since the path is negative, the sign of $a_{ri_1}\cdots a_{i_{k-2},s}$ is negative, so there exists a matrix $\widetilde{A}\in Q(A)$ such that the sign of $\det[\widetilde{A}(r\to e_s)]$ is $(-1)^{n}$. Thus, the $(r,s)$-entry of $\widetilde{A}^{-1}$ is positive. Similarly, if there exists a positive path from $r$ to $s$ in $S(A)$, then there exists a matrix $\widetilde{A}\in Q(A)$ such that the $(r,s)$-entry of $\widetilde{A}^{-1}$ is negative.
SNS and S2NS Matrices
113
(Necessity) Assume that there exists a matrix Ã ∈ Q(A) such that the (r, s)-entry of Ã⁻¹ is positive. Then there exists a term t_r in the standard determinant expansion of Ã(r → e_s) such that the sign of t_r is (−1)^n. There exist integers i₁, i₂, …, i_{k−2} such that

t_r = (−1)^{k−1} (1) a_{r,i₁} ⋯ a_{i_{k−2},s} t′,

where t′ is a term in the standard determinant expansion of Ã({r, i₁, …, i_{k−2}, s}). Note that

t′_r = (−1)^k t′

is a nonzero term in the standard determinant expansion of A, obtained by replacing the path by the (negative) diagonal entries of its k vertices. Since A is an SNS-matrix, we have sgn(t′_r) = (−1)^n = sgn(t_r). So sgn(a_{r,i₁} ⋯ a_{i_{k−2},s}) = −1, and

r → i₁ → ⋯ → i_{k−2} → s

is a negative path of S(A) from r to s. Similarly, if there exists a matrix Ã ∈ Q(A) such that the (r, s)-entry of Ã⁻¹ is negative, then there is a positive path of S(A) from r to s. ■

Let A = (a_ij) be an SNS-matrix of order n with a negative main diagonal. Then the signs of det(A) and det(A({i}, {i})) (for all i ∈ {1, 2, …, n}) are (−1)^n and (−1)^{n−1}, respectively. Hence, for each matrix B ∈ Q(A), all the diagonal entries of B⁻¹ are negative. For the off-diagonal entries of B⁻¹, we have the following theorem.

Theorem 3.13 [13]. Let A = (a_ij) be an SNS-matrix of order n whose diagonal entries are negative.

(1) The (s, r)-entries of the matrices in the set {Ã⁻¹ | Ã ∈ Q(A)} have the same sign if and only if all the paths from s to r in the signed digraph S(A) have the same sign.
(2) If all the paths from s to r in S(A) have the same sign ε, then the (s, r)-entries of the matrices in the set {Ã⁻¹ | Ã ∈ Q(A)} have the sign −ε (here ε = 0 if there is no path from s to r).
(3) If A is fully indecomposable and a_rs ≠ 0, then the sign of the (s, r)-entry of each matrix in the set {Ã⁻¹ | Ã ∈ Q(A)} is sgn(a_rs).

Proof. Lemma 3.2 implies that statements (1) and (2) hold. Assume that A is fully indecomposable and a_rs ≠ 0. Since the associated digraph D(A) is strongly connected, there is a path from s to r in D(A). Joining the arc (r, s) to any such path, we obtain a directed cycle, which by theorem 3.11 is negative. Hence the sign of each path from s to r in the signed digraph S(A) is −sgn(a_rs). Thus, statement (3) follows from (2). ■
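As a numerical sanity check of theorem 3.13, the following sketch samples random members of the qualitative class of a small fully indecomposable SNS matrix with negative main diagonal (a hypothetical 2 × 2 instance chosen for illustration, not taken from the text) and verifies that all the inverses share one sign pattern, with the (2, 1)-entry of the inverse carrying the sign of a₁₂ as statement (3) predicts.

```python
import numpy as np

# Sign pattern with negative diagonal; the only cycle 1 -> 2 -> 1 has
# sign (+1)(-1) = -1, so every member of Q(A) is nonsingular (SNS).
S = np.array([[-1.0, 1.0],
              [-1.0, -1.0]])

rng = np.random.default_rng(1)
sign_patterns = set()
for _ in range(200):
    A_tilde = S * rng.uniform(0.1, 10.0, size=S.shape)  # random member of Q(A)
    sign_patterns.add(tuple(np.sign(np.linalg.inv(A_tilde)).ravel()))

# One common sign pattern; the (2,1)-entry is positive, matching sgn(a_12).
assert sign_patterns == {(-1.0, -1.0, 1.0, -1.0)}
```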
Sign Pattern for Generalized Inverses
114
Let (A)_ij denote the (i, j)-entry of the matrix A. By theorem 3.13, the following results can be obtained.

Corollary 3.2 [172]. Let A = (a_ij) ∈ R^{n×n} be an S²NS-matrix whose diagonal entries are negative. Then the entries of A⁻¹ satisfy the following properties:

(1) (A⁻¹)_ii < 0 (i = 1, 2, …, n).
(2) If i ≠ j, then (A⁻¹)_ij ≠ 0 if and only if there exists a path in S(A) from i to j.
(3) If i ≠ j and (A⁻¹)_ij ≠ 0, then sgn[(A⁻¹)_ij] = −ε, where ε is the sign of the paths from i to j in S(A).

If a matrix A does not have zero entries, then A is called a totally nonzero matrix.

Theorem 3.14 [172]. Let A be an S²NS-matrix. Then A⁻¹ is totally nonzero if and only if A is fully indecomposable.

We have the following results on fully indecomposable S²NS-matrices.

Theorem 3.15 [11]. Let A be a fully indecomposable S²NS-matrix. If (A)_pq ≠ 0, then sgn[(A⁻¹)_qp] = sgn[(A)_pq].

Theorem 3.16 [13]. Let A be a fully indecomposable S²NS-matrix. Then each square submatrix of A is an SNS-matrix or has an identically zero determinant.

Definition 3.13 [174]. A signed digraph S is called a strong sign nonsingular (S²NS) signed digraph if S satisfies the following two conditions:

(1) The sign of each cycle in S is negative.
(2) All the paths in S with the same initial vertex and terminal vertex have the same sign.

By theorem 3.12, the following corollary can be obtained.

Corollary 3.3 [175]. Let A be a real square matrix whose diagonal entries are negative. Then A is an S²NS-matrix if and only if S(A) is an S²NS signed digraph.

By corollary 3.3, the study of S²NS-matrices is equivalent to the study of S²NS signed digraphs. From definition 3.13, every subgraph of an S²NS signed digraph is also an S²NS signed digraph.

Theorem 3.17 [175]. Let S be an S²NS signed digraph. Then there exists at most one arc between any two strongly connected components.

Definition 3.14.
A digraph D is called an S 2 NS underlying digraph if the arcs of D can be suitably assigned signs such that the resulting signed digraph is an S 2 NS signed digraph. The digraph D which is not an S 2 NS underlying digraph is called an S 2 NS forbidden configuration, abbreviated FC. If D is an FC, but any proper subdigraph of D is not an FC, then D is called a minimal forbidden configuration, abbreviated MFC.
By the above definition, it is easy to check that an FC always contains an MFC as a subdigraph. Clearly, no MFC is a subgraph of any S²NS signed digraph. Finding all MFCs is a problem which has not yet been completely resolved [173, 174].

Let D be a digraph. A splitting of a vertex x of D inserts a new vertex x₁ and a new arc (x, x₁), and replaces every arc (x, v) leaving x by the arc (x₁, v). A subdivision of an arc (u, v) of D removes the arc (u, v) and inserts a new vertex u₁ together with the two new arcs (u, u₁) and (u₁, v). The reverse digraph D′ of a digraph D is the digraph obtained by reversing the directions of all the arcs of D. These graph operations have the following properties [174]:

Proposition 3.1. If a digraph D₁ can be obtained from D by finitely many vertex splittings and arc subdivisions, then D is an S²NS underlying digraph if and only if D₁ is an S²NS underlying digraph.

Proposition 3.2. If the digraph D′ is the reverse digraph of D, then D is an S²NS underlying digraph (MFC) if and only if D′ is an S²NS underlying digraph (MFC).

Now we give some examples of MFCs [168, 174, 191].

Example 3.7 [191]. Let D₃ be the digraph with vertex set {v, x, y} and arc set {(x, y), (y, x), (x, v), (y, v)}. The digraph D₃ is an MFC. Actually, any arc subdivision of D₃ and the reverse digraph of D₃ are also MFCs.

Example 3.8 [168]. Let k ≥ 2 be a positive integer and t₁, t₂, …, t_k be nonnegative integers. Let P(t_i) be an undirected path of length t_i with end vertices u_i and v_i. Let G(t_i) be the digraph obtained by replacing each edge of P(t_i) by a pair of oppositely directed arcs (i = 1, 2, …, k). Let D(t₁, t₂, …, t_k) be the digraph obtained by adding new vertices y₁, y₂, …, y_k, x₁, x₂, …, x_k to the disjoint union of G(t₁), G(t₂), …, G(t_k), together with the arcs

(x_i, v_i) (i = 1, 2, …, k), (x_i, u_{i+1}) (i = 1, 2, …, k, indices mod k), (u_i, y_i) (i = 1, 2, …, k), (v_i, y_i) (i = 1, 2, …, k).

When t₁ + t₂ + ⋯ + t_k is odd, D(t₁, t₂, …, t_k) is an MFC.

In [174], the definition of D(t₁, t₂, …, t_k) in example 3.8 was modified, and a more general digraph was defined.

Example 3.9 [174]. Let I be a subset of the index set {1, 2, …, k}. Replace the new arcs (u_i, y_i) and (v_i, y_i) in example 3.8 by the arcs

(u_i, y₁), (v_i, y₂) (i ∈ I); (u_i, y₂), (v_i, y₁) (i ∉ I).

This produces a new digraph, denoted by D_I(t₁, t₂, …, t_k). When t₁ + t₂ + ⋯ + t_k is odd, D_I(t₁, t₂, …, t_k) is an MFC.
By examining the above three examples, the following properties of S²NS underlying digraphs are obtained (see [174]).

(M1) Let D₃ be the digraph defined in example 3.7, and let D₃′ be the reverse digraph of D₃. If a digraph D is an S²NS underlying digraph, then D does not contain a subdivision of D₃ or D₃′ as a subgraph.

(M2) Let D_I(t₁, t₂, …, t_k) be the digraph defined in example 3.9, and let D_I′(t₁, t₂, …, t_k) be its reverse digraph. If a digraph D is an S²NS underlying digraph, then D does not contain a subgraph obtained from D_I(t₁, t₂, …, t_k) or D_I′(t₁, t₂, …, t_k) by finitely many vertex splittings and arc subdivisions.

In [174], the authors pointed out that conditions (M1) and (M2) together are not sufficient for a digraph D to be an S²NS underlying digraph.
3.3
Ray Nonsingular and Ray S 2 NS Matrices
Let z = re^{iθ} be a complex number, where r is the modulus of z, i = √−1, and θ = arg(z) is the argument [127]. The complex number e^{iθ} is called the ray pattern of z, denoted by ray z or ray(z). For a complex matrix A = (a_ij), the complex matrix (ray a_ij) is called the ray pattern (matrix) of A, denoted by ray(A). When A is a real matrix, ray(A) is the sign pattern (matrix) of A. Hence, ray pattern matrices are a generalization of sign pattern matrices [87].

Example.

A = [ 1+i  2  2+2i ]         ray(A) = [ e^{iπ/4}   1   e^{iπ/4} ]
    [ 1−i  0   1   ],                 [ e^{−iπ/4}  0      1     ]
    [  0   0   0   ]                  [    0       0      0     ].
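The ray pattern map is straightforward to compute entrywise; the sketch below implements the definition directly (the matrix entries are illustrative, matching the example above as reconstructed, not authoritative).

```python
import numpy as np

def ray(A, tol=1e-12):
    """Entrywise ray pattern: a_ij / |a_ij| for nonzero entries, 0 otherwise."""
    A = np.asarray(A, dtype=complex)
    out = np.zeros_like(A)
    nz = np.abs(A) > tol
    out[nz] = A[nz] / np.abs(A[nz])
    return out

A = np.array([[1 + 1j, 2, 2 + 2j],
              [1 - 1j, 0, 1],
              [0, 0, 0]])
R = ray(A)

assert np.isclose(R[0, 0], np.exp(1j * np.pi / 4))   # ray(1+i) = e^{i pi/4}
assert np.isclose(R[1, 0], np.exp(-1j * np.pi / 4))  # ray(1-i) = e^{-i pi/4}
assert R[0, 1] == 1 and R[1, 1] == 0                 # ray of a positive real is 1
```

A real matrix run through `ray` returns exactly its sign pattern, illustrating why ray patterns generalize sign patterns.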
Definition 3.15. The set of complex matrices with the same ray pattern as a matrix A is called the ray pattern class of A, denoted by Q(A), i.e.,

Q(A) = {B | ray(B) = ray(A)}.

Similar to the SNS and S²NS-matrices, the definitions of ray nonsingular matrices and ray S²NS-matrices are given as follows.

Definition 3.16 [178]. For a complex matrix A, if all the matrices in Q(A) are nonsingular, then A is called a ray nonsingular matrix. If, in addition, the ray patterns of the inverses of all the matrices in Q(A) are the same, then A is called a ray S²NS-matrix.
The characterization of ray nonsingular matrices is still an unresolved problem [140, 141]. Research on the determinantal regions of ray pattern matrices can be found in [142, 177].

By theorem 3.3, we know that an SNS-matrix has a signed determinant. Inspired by this, the definition of DRU matrices is given in [178].

Definition 3.17. Let A be a ray nonsingular matrix. If arg[det(Ã)] = arg[det(A)] for each Ã ∈ Q(A), then A is called a determinant ray unique (DRU) matrix.

Let D be a digraph with vertex set {1, 2, …, n}. If we assign a ray weight e^{iθ} to each arc of D, then D is called a ray digraph. For a subgraph W of D, the product of the ray weights on all the arcs of W is called the ray weight of W, denoted by ray(W).

Definition 3.18 [178]. For a complex square matrix A, assign the ray weight ray(a_ij) to each arc of the associated digraph D(A); the resulting ray digraph is denoted by S(A).

Matrices A, B ∈ C^{m×n} are said to be ray-permutation equivalent if B can be obtained from A by permuting rows and columns and multiplying rows and columns by nonzero complex numbers. Clearly, the ray nonsingularity and the ray S²NS property of matrices are preserved under ray-permutation equivalence. Furthermore, a DRU matrix remains a DRU matrix under ray-permutation equivalence. We always assume that all the diagonal entries of A are −1 in the study of DRU matrices.

Theorem 3.18 [178]. Let A = (a_ij) be an n × n complex matrix whose diagonal entries are −1. Then A is a DRU matrix if and only if the ray weight of every cycle of the associated ray digraph S(A) is −1.

Proof. The matrix A is a DRU matrix if and only if the ray pattern of each nonzero term in the standard determinant expansion of A is (−1)^n. Each such nonzero term satisfies

ray(sgn(σ) a_{1σ(1)} ⋯ a_{nσ(n)}) = (−1)^{n+k} ∏_{i=1}^{k} ray(C_i),   (3.3)

where sgn(σ) = (−1)^{n+k}, σ is a permutation of {1, 2, …, n}, and C₁, C₂, …, C_k are the disjoint cycles in S(A) formed by the arcs (1, σ(1)), …, (n, σ(n)). Then A is a DRU matrix if and only if the ray weight of each cycle in S(A) is −1. ■

Definition 3.19 [178]. Let A be a complex square matrix. If there exist nonsingular matrices in Q(A), and any two nonsingular matrices B, C ∈ Q(A) satisfy ray(B⁻¹) = ray(C⁻¹), then A is called a conditional ray S²NS-matrix.

The following theorem considers the relationship between ray S²NS-matrices and conditional ray S²NS-matrices.
Theorem 3.19 [178]. Let A be a complex square matrix. Then the following statements are equivalent:

(1) A is a ray S²NS-matrix.
(2) A is a conditional ray S²NS-matrix.
(3) A is a conditional ray S²NS-matrix and A is a DRU matrix.

Proof. It is obvious that (1) ⇒ (2) and (3) ⇒ (1). So we only need to prove (2) ⇒ (3). Suppose that statement (2) holds and (3) does not hold. Then there exist B, C ∈ Q(A) whose entries are distinct in exactly one position and for which one of the following cases holds:

Case 1. det(B) ≠ 0, det(C) ≠ 0 and arg[det(B)] ≠ arg[det(C)].
Case 2. det(B) ≠ 0, det(C) = 0.

Without loss of generality, assume that the entries (B)₁₁ and (C)₁₁ are distinct. If case 1 holds, then

(B⁻¹)₁₁ det(B) = det[B({1}, {1})] = (C⁻¹)₁₁ det(C).

Since [(B)₁₁ − (C)₁₁] det[B({1}, {1})] = det(B) − det(C) ≠ 0, we have det[B({1}, {1})] ≠ 0. Thus

(B⁻¹)₁₁ ≠ 0, (C⁻¹)₁₁ ≠ 0 and arg[(B⁻¹)₁₁] ≠ arg[(C⁻¹)₁₁].

This contradicts the assumption that A is a conditional ray S²NS-matrix.

Next, we consider case 2. Let C′ be the matrix obtained from C by replacing (C)₁₁ with (1 + ε)(C)₁₁ − ε(B)₁₁. When ε > 0 is small enough, we have C′ ∈ Q(A) and

det(C′) = (1 + ε) det(C) − ε det(B) = −ε det(B) ≠ 0.

Applying case 1 to B and C′, we obtain a contradiction. ■
Let A be a complex square matrix. If det(Ã) = 0 for each matrix Ã ∈ Q(A), then we say A has an identically zero determinant. By theorem 3.19, we have the following corollary.

Corollary 3.4 [178]. A complex n × n matrix A is a ray S²NS-matrix if and only if A satisfies the following two statements:

(1) A is a DRU matrix.
(2) Each submatrix of A of order n − 1 is either a DRU matrix or has an identically zero determinant.

Theorem 3.20 [178]. Let A = (a_ij) be a ray pattern matrix of order n with diagonal entries −1. Then A is a ray S²NS-matrix if and only if A satisfies the following two statements:
(1) The ray of every cycle of S(A) is −1.
(2) Each pair of paths in S(A) with the same initial vertex and the same terminal vertex have the same ray.

Proof. (Necessity) Statement (1) follows directly from theorems 3.18 and 3.19. Let P and Q be paths in S(A) from vertex i to vertex j, and let

P* = P + {all loops at vertices outside P},  Q* = Q + {all loops at vertices outside Q}.

Then P*, Q* are subdigraphs of S(A), and the arc sets of P* and Q* correspond to nonzero terms in the determinant expansion of the matrix A({j}, {i}), denoted by d_P and d_Q, respectively. By (3.3),

(−1)^{i+j} d_P = (−1)^{n+k} ray(P*) = (−1)^{n+1} ray(P),   (3.4)

where k − 1 is the number of loops at vertices outside P. Similarly, we have

(−1)^{i+j} d_Q = (−1)^{n+1} ray(Q).

By corollary 3.4,

A is a ray S²NS-matrix ⇒ A({j}, {i}) is a DRU matrix ⇒ d_P = d_Q ⇒ ray(P) = ray(Q).

(Sufficiency) By (1), each matrix A({i}, {i}) (i = 1, 2, …, n) is a DRU matrix. For i ≠ j, suppose that A({j}, {i}) does not have an identically zero determinant and that there are at least two nonzero terms in its standard determinant expansion; let d₁ and d₂ denote any two of them. Let D_t be the subdigraph of S(A) whose arc set corresponds to the entries in d_t (t = 1, 2). Then

D_t = P_t + C_{t1} + ⋯ + C_{t r_t}  (t = 1, 2),

where P_t is a path from i to j and C_{t1}, …, C_{t r_t} are disjoint cycles (whose ray weights are −1). Similar to (3.4), we have

(−1)^{i+j} d_t = (−1)^{n+1} ray(P_t)  (t = 1, 2).

So

(2) ⇒ ray(P₁) = ray(P₂) ⇒ d₁ = d₂ ⇒ A({j}, {i}) is a DRU matrix.

By corollary 3.4, A is a ray S²NS-matrix. ■
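Theorem 3.20 can be probed numerically. The following sketch uses a hypothetical 2 × 2 upper triangular ray pattern with diagonal −1 (its only cycles are the loops, with ray −1, and condition (2) holds trivially since there is a single path between any two vertices) and checks that every member of the ray pattern class has an inverse with the same ray pattern.

```python
import numpy as np

def ray(A, tol=1e-12):
    """Entrywise ray pattern: a_ij / |a_ij| for nonzero entries, 0 otherwise."""
    A = np.asarray(A, dtype=complex)
    out = np.zeros_like(A)
    nz = np.abs(A) > tol
    out[nz] = A[nz] / np.abs(A[nz])
    return out

theta = 0.7  # arbitrary argument for the single off-diagonal entry
rng = np.random.default_rng(2)

patterns = set()
for _ in range(100):
    a, b, c = rng.uniform(0.1, 10.0, size=3)
    # Random member of Q(A): only the moduli vary, the rays are fixed.
    A_tilde = np.array([[-a, b * np.exp(1j * theta)],
                        [0.0, -c]])
    patterns.add(tuple(np.round(ray(np.linalg.inv(A_tilde)), 8).ravel()))

assert len(patterns) == 1  # one common ray pattern for all the inverses
```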
From the above theorem, we have the following corollary.

Corollary 3.5 [178]. Let A ∈ C^{n×n} be a ray S²NS-matrix with diagonal entries −1, and let (A⁻¹)_ij be the entry in the (i, j)-position of A⁻¹. Then we have:
(1) (A⁻¹)_ii < 0 (i = 1, 2, …, n).
(2) If i ≠ j, then (A⁻¹)_ij ≠ 0 if and only if there is a path in S(A) from i to j.
(3) If i ≠ j and (A⁻¹)_ij ≠ 0, then ray[(A⁻¹)_ij] = −ε, where ε is the common ray of all the paths in S(A) from i to j.

Similar to the S²NS signed digraph, the ray S²NS digraph can be defined.

Definition 3.20 [178]. A ray digraph S is called a ray S²NS digraph if S satisfies the following two conditions:

(1) The ray of every cycle of S is −1.
(2) Each pair of paths in S with the same initial vertex and the same terminal vertex have the same ray.

Section 3.1 introduced sign-solvable linear systems. In [178], the authors generalized sign-solvable linear systems to ray solvable linear systems, defined as follows.

Definition 3.21. Let A ∈ C^{m×n} and b ∈ C^m. If Ãx = b̃ is solvable for any Ã ∈ Q(A) and b̃ ∈ Q(b), and the ray patterns of all the solutions of Ãx = b̃ are the same, then Ax = b is called ray solvable.

Definition 3.22. A complex matrix A is called an L-matrix if Ã has linearly independent columns for each Ã ∈ Q(A).

Similar to theorem 3.2, we have the following result.

Theorem 3.21. If the linear equation Ax = b is ray solvable, then A is an L-matrix.

Definition 3.23 [178]. Let A ∈ C^{n×(n+1)}. If each n × n submatrix of A is a DRU matrix, then A is called a ray S*-matrix.

The following result generalizes theorem 3.9 to complex matrices.

Theorem 3.22 [178]. Let A ∈ C^{m×n} and b ∈ C^m. Then the linear equation Ax = b is ray solvable if and only if the matrix [A, b] can be transformed into a matrix of the form

[ A₁  B   b₁ ]
[ 0   A₂  0  ],   (3.5)

by permuting the rows of [A, b] and the columns of A, where A₂ (which might be vacuous) is an L-matrix and [A₁, b₁] (which might be vacuous) is a ray S*-matrix.

Proof. (Sufficiency) By Cramer's rule, the linear equation A₁y = b₁ is ray solvable. Since A₂ is an L-matrix, the equation A₂y = 0 has only the zero solution. Thus, Ax = b is ray solvable.
(Necessity) Assume that Ax = b has a solution

x = [ u ]
    [ 0 ].
By permuting the columns of A, we may assume that u has no zero coordinates. By permuting the rows of [A, b], we further transform [A, b] into the form

[ A₁  B   b₁ ]
[ 0   A₂  0  ],

where A₁ has no zero rows (and both A₁, A₂ might be vacuous).

Since Ax = b is ray solvable, Ãx = b̃ has a solution for any Ã ∈ Q(A) and b̃ ∈ Q(b), of the form

x̃ = [ ũ ]
     [ 0 ],

where ray(ũ) = ray(u). Theorem 3.21 implies that A₁ is an L-matrix; hence A₁ is a k × l matrix with k ≥ l. Suppose that k > l. Since A₁ has no zero rows and u has no zero coordinates, there exists Ã₁ ∈ Q(A₁) such that Ã₁y = b₁ has no solution. This contradicts the ray solvability of Ax = b. Thus, k = l.

Suppose that A₁ is not a DRU matrix. Then there exist A₁′, A₁″ ∈ Q(A₁) whose entries differ in only one column (say the first column) and arg[det(A₁′)] ≠ arg[det(A₁″)]. The first coordinates of the solutions of A₁′y = b₁ and A₁″y = b₁ would then have distinct ray patterns, a contradiction. Thus, A₁ is a DRU matrix. Since A₁y = b₁ is ray solvable and its solution has no zero coordinates, Cramer's rule shows that [A₁, b₁] is a ray S*-matrix.

Finally, for any Ã₂ ∈ Q(A₂), let z₀ satisfy Ã₂z₀ = 0. Then there exists a matrix Ã ∈ Q(A) such that the solutions of Ãx = b can be expressed as

x = [ v  ]
    [ z₀ ].

Since Ax = b is ray solvable, we have z₀ = 0, and A₂ is an L-matrix. ■
Chapter 4

Sign Pattern for Moore–Penrose Inverse

In chapter 3, we introduced several results on SNS and S²NS matrices. The research on S²NS matrices focuses on the sign patterns of the inverse matrices. In this chapter, we introduce some results on the sign patterns of the M–P inverses of matrices.
4.1
Least Squares Sign-Solvability
In 1995, B.L. Shader proposed and studied the least squares sign-solvable linear system, which is a generalization of the sign-solvable linear system.

Definition 4.1. Let A ∈ R^{m×n} and b ∈ R^m. If, for any Ã ∈ Q(A) and b̃ ∈ Q(b), the least squares solutions of Ax = b and Ãx = b̃ have the same sign pattern, then Ax = b is called a least squares sign-solvable system.

An example of a least squares sign-solvable linear system, quoted from [167], follows.

Example 4.1. Consider the linear system Ax = b with

A = [ p  r  t ]        [ 0 ]
    [ q  0  0 ],   b = [ 0 ]
    [ 0  s  0 ]        [ 0 ]
    [ 0  0  u ]        [ v ],

where p, q, r, s, t, u, v are positive numbers. Since Ax = b is unsolvable, Ax = b is not a sign-solvable linear system. The least squares solutions of Ax = b are the solutions of the normal equation AᵀAx = Aᵀb, i.e., the solutions of

[ p²+q²   pr     pt   ]       [ 0  ]
[  pr    r²+s²   rt   ] x  =  [ 0  ]
[  pt     rt    t²+u² ]       [ uv ].
DOI: 10.1051/978-2-7598-2599-8.c004 © Science Press, EDP Sciences, 2021
The solution of this equation is

x = (uv / det(AᵀA)) [ −ps²t,  −q²rt,  p²s² + q²r² + q²s² ]ᵀ.
Since A is an L-matrix, for any Ã ∈ Q(A) and b̃ ∈ Q(b) the least squares solutions of Ãx = b̃ always have the above form, and the sign pattern of the least squares solutions is [−1, −1, 1]ᵀ. Thus, Ax = b is a least squares sign-solvable linear system.

Theorem 4.1 [167]. If the linear system Ax = b is least squares sign-solvable, then A is an L-matrix.

Proof. Suppose A is not an L-matrix. Then there exist a matrix Ã ∈ Q(A) and a nonzero vector y such that Ãy = 0. Let z be a least squares solution of Ãx = b. Then z + ky is also a least squares solution of Ãx = b, where k is an arbitrary real number. Take k₁, k₂ such that

sgn(z + k₁y) ≠ sgn(z + k₂y).

Then Ax = b is not least squares sign-solvable, a contradiction. So A is an L-matrix. ■

Theorem 4.2. If the linear system Ax = b is least squares sign-solvable, then the least squares solution of Ãx = b̃ is Ã⁺b̃ for any Ã ∈ Q(A) and b̃ ∈ Q(b).

Proof. It follows from theorem 4.1 that A is an L-matrix. Then ÃᵀÃ is nonsingular for each matrix Ã ∈ Q(A). Since the least squares solution of Ãx = b̃ is the solution of ÃᵀÃx = Ãᵀb̃, we have that

x = (ÃᵀÃ)⁻¹ Ãᵀ b̃ = Ã⁺ b̃

is a least squares solution. ■
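Example 4.1 and theorem 4.2 can be tested numerically. The sketch below (assuming the all-positive reading of the parameters p, …, v reconstructed above) draws random magnitudes and confirms that the least squares solution always has sign pattern [−1, −1, 1]ᵀ.

```python
import numpy as np

rng = np.random.default_rng(3)
patterns = set()
for _ in range(200):
    p, q, r, s, t, u, v = rng.uniform(0.1, 10.0, size=7)
    A = np.array([[p, r, t],
                  [q, 0, 0],
                  [0, s, 0],
                  [0, 0, u]])
    b = np.array([0.0, 0.0, 0.0, v])
    # Least squares solution of the (unsolvable) system Ax = b.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    patterns.add(tuple(np.sign(x)))

assert patterns == {(-1.0, -1.0, 1.0)}  # the common sign pattern [-1, -1, 1]^T
```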
In order to study least squares sign-solvable linear systems, B.L. Shader proposed the concept of matrices with signed M–P inverses.

Definition 4.2 [167]. Let A be a real matrix. If

sgn(Ã⁺) = sgn(A⁺)

for each Ã ∈ Q(A), then A is called a matrix with a signed M–P inverse; we also say that A⁺ is signed.

Clearly, a matrix with a signed M–P inverse is a generalization of an S²NS matrix. The following result is obtained from theorems 4.1 and 4.2.
Sign Pattern for Moore–Penrose Inverse
125
Theorem 4.3. Let A ∈ R^{m×n}. Then the m linear systems Ax = e_i, where e_i is the i-th column of I (i = 1, 2, …, m), are all least squares sign-solvable if and only if A is an L-matrix and A⁺ is signed.

The zero pattern of A is the matrix obtained from A by replacing all the nonzero entries by 1. B.L. Shader gave the following result.

Theorem 4.4 [167]. For A ∈ R^{(n+1)×n}, if the zero pattern of A is the vertex-edge incidence matrix of a tree, then A⁺ is signed.
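Theorem 4.4 can be illustrated numerically with the path 1–2–3–4, a tree on four vertices; the particular signing below (all entries positive) is an assumption made for the demonstration, since the theorem constrains only the zero pattern.

```python
import numpy as np

# Zero pattern: vertex-edge incidence matrix of the path 1-2-3-4,
# a 4x3 matrix, i.e., (n+1) x n with n = 3.
Z = np.array([[1.0, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])

rng = np.random.default_rng(4)
patterns = set()
for _ in range(200):
    A_tilde = Z * rng.uniform(0.1, 10.0, size=Z.shape)  # member of Q(A)
    patterns.add(tuple(np.sign(np.linalg.pinv(A_tilde)).ravel()))

assert len(patterns) == 1  # A+ is signed: one sign pattern across Q(A)
```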
4.2
Matrices with Signed Moore–Penrose Inverse
In 2001, J. Shao and H. Shan gave complete characterizations of matrices with a signed M–P inverse [175]. In order to obtain the structural characteristics of a matrix with a signed M–P inverse, we first present two important lemmas. Throughout, ρ(A) denotes the term rank of A.

Lemma 4.1 [175]. Let A be a real matrix of order n such that ρ(A) = n. If A is not an SNS matrix, then there exist invertible matrices A₁, A₂ ∈ Q(A) and integers p, q such that

(A₁⁻¹)_qp (A₂⁻¹)_qp < 0.

Proof. Since ρ(A) = n, there exists a nonsingular matrix B ∈ Q(A). Since A is not an SNS matrix, there also exists a singular matrix C ∈ Q(A). Without loss of generality, assume that B differs from C in exactly one position (p, q), where (B)_pq = b, (C)_pq = c and b ≠ c. Thus, we have

det(B) = det(B) − det(C) = (−1)^{p+q} (b − c) det[B({p}, {q})].

Take ε with 0 < ε < |c|, and let

A_i = C + (−1)^i ε E_pq,  i = 1, 2,

where E_pq = e_p e_qᵀ is the matrix whose (p, q)-entry is 1 and whose other entries are zero. Then A₁ and A₂ are both invertible matrices in Q(A), and

det(A_i) = (−1)^i ε (−1)^{p+q} det[A_i({p}, {q})],  i = 1, 2.

Thus,

(A_i⁻¹)_qp = 1 / ((−1)^i ε),  i = 1, 2.

So

(A₁⁻¹)_qp (A₂⁻¹)_qp = −1/ε² < 0. ■
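The perturbation in the proof of lemma 4.1 is easy to reproduce numerically; the sketch below uses the all-ones 2 × 2 sign pattern (a hypothetical instance: it has term rank 2 but is not SNS, since the all-ones member is singular).

```python
import numpy as np

C = np.array([[1.0, 1.0],
              [1.0, 1.0]])           # singular member of Q(A)
E11 = np.array([[1.0, 0.0],
                [0.0, 0.0]])
eps = 0.01                           # 0 < eps < |c| keeps both matrices in Q(A)

A1 = C - eps * E11                   # invertible members of Q(A)
A2 = C + eps * E11

# The (1,1)-entries of the two inverses have opposite signs, as the lemma asserts.
product = np.linalg.inv(A1)[0, 0] * np.linalg.inv(A2)[0, 0]
assert product < 0
```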
For A = (a_ij), B = (b_ij) ∈ R^{m×n}, let A ∘ B denote the Hadamard product of A and B [108], i.e., A ∘ B = (a_ij b_ij) ∈ R^{m×n}. The notation A ≥ 0 means A is a nonnegative matrix, i.e., all the entries of A are nonnegative.

Lemma 4.2 [175]. Let A ∈ R^{m×n} be a matrix with a signed M–P inverse satisfying ρ(A) = n. Then each n × n submatrix of A with full term rank is an S²NS matrix.

Proof. For an n × n submatrix B of A with full term rank, suppose (after permuting rows)

A = [ B ]
    [ C ].

We first show that B₁⁻¹ ∘ B₂⁻¹ ≥ 0 for any two nonsingular matrices B₁, B₂ ∈ Q(B), arguing by contradiction. Suppose that there exist nonsingular matrices B₁, B₂ ∈ Q(B) and integers p, q such that

(B₁⁻¹)_qp (B₂⁻¹)_qp < 0.

For ε > 0, let

A_i(ε) = [ B_i ]
         [ εC  ]   (i = 1, 2).

Then A_i(ε) ∈ Q(A). Since A_i(ε) has full column rank, we have

A_i(ε)⁺ = (A_i(ε)ᵀ A_i(ε))⁻¹ A_i(ε)ᵀ,  i = 1, 2.

Considering the limit of the determinant of A_i(ε)ᵀ A_i(ε), we have

lim_{ε→0} det(A_i(ε)ᵀ A_i(ε)) = lim_{ε→0} det(B_iᵀ B_i + ε² Cᵀ C) = [det(B_i)]² ≠ 0.

Therefore, A_i(ε)⁺ is a continuous function of ε [5, 34], and we obtain

lim_{ε→0} A_i(ε)⁺ = A_i(0)⁺ = [ B_i ; 0 ]⁺ = [ B_i⁻¹, 0 ],  i = 1, 2.

Since (B₁⁻¹)_qp (B₂⁻¹)_qp < 0, for a sufficiently small ε > 0 we have

sgn(A₁(ε)⁺) ≠ sgn(A₂(ε)⁺).

This contradicts the fact that A⁺ is signed. So for any two nonsingular matrices B₁, B₂ ∈ Q(B), we have B₁⁻¹ ∘ B₂⁻¹ ≥ 0. It then follows from lemma 4.1 that B is an SNS matrix. Moreover, for any p, q and any nonsingular B₁, B₂ ∈ Q(B), we have

(B₁⁻¹)_qp (B₂⁻¹)_qp ≥ 0.
Since B is an SNS matrix, this gives

det[B₁({p}, {q})] det[B₂({p}, {q})] ≥ 0.

Thus, B({p}, {q}) has an identically zero determinant or is an SNS matrix. Hence B is an S²NS matrix. ■

The following result can be obtained from lemma 4.2.

Theorem 4.5 [175]. Let A ∈ R^{m×n} (n ≤ m) be a matrix with a signed M–P inverse. Then each n × n submatrix of A has an identically zero determinant or is an S²NS matrix.

For the set [m] = {1, 2, …, m}, let S_n(m) be the set of all subsets of [m] with n elements. For a subset X of [m], let X̄ be the complementary set of X in [m]. The Cauchy–Binet formula for determinants is as follows.

Lemma 4.3 [108]. Let A ∈ C^{n×m} and B ∈ C^{m×n}, where n ≤ m. Then

det(AB) = Σ_{F∈S_n(m)} det(A[:, F]) det(B[F, :]).

Let F be a finite set of integers. For q ∈ F, let N(q, F) denote the number of elements of F which are less than or equal to q. The following lemma is obtained using the Cauchy–Binet formula.

Lemma 4.4 [175]. Let A ∈ R^{m×n} be a matrix with full column rank. Then

(A⁺)_pq = (1/det(AᵀA)) Σ_{q∈F∈S_A} (−1)^{p+N(q,F)} det(A[F∖{q}, {p}̄]) det(A[F, :])
        = (1/det(AᵀA)) Σ_{q∈F∈S_A} (A[F, :]⁻¹)_{p,N(q,F)} (det(A[F, :]))²,

where S_A = {F | F ∈ S_n(m), det(A[F, :]) ≠ 0}.

Proof. Since A has full column rank,

A⁺ = (AᵀA)⁻¹ Aᵀ.

Write B = AᵀA. Then

(A⁺)_pq = Σ_{i=1}^{n} (B⁻¹)_pi a_qi = (1/det(B)) Σ_{i=1}^{n} (−1)^{p+i} a_qi det[B({i}, {p})].

Notice that

B({i}, {p}) = Aᵀ[{i}̄, :] A[:, {p}̄].

From lemma 4.3, we have

(A⁺)_pq = (1/det(B)) Σ_{i=1}^{n} (−1)^{p+i} a_qi Σ_{F′∈S_{n−1}(m)} det(Aᵀ[{i}̄, F′]) det(A[F′, {p}̄])
        = ((−1)^p/det(B)) Σ_{F′∈S_{n−1}(m)} det(A[F′, {p}̄]) Σ_{i=1}^{n} (−1)^i a_qi det(A[F′, {i}̄])
        = (1/det(B)) Σ_{q∈F∈S_n(m)} (−1)^{p+N(q,F)} det(A[F∖{q}, {p}̄]) det(A[F, :]).

If A[F, :] is nonsingular, then

(A[F, :]⁻¹)_{p,N(q,F)} = (−1)^{p+N(q,F)} det(A[F∖{q}, {p}̄]) / det(A[F, :]).

Thus,

(A⁺)_pq = (1/det(AᵀA)) Σ_{q∈F∈S_A} (A[F, :]⁻¹)_{p,N(q,F)} (det(A[F, :]))². ■
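The second expression in lemma 4.4 can be verified numerically against numpy's pseudoinverse. The sketch below sums the contribution of each row selection F; indices are shifted to 0-based, so the column index of A[F, :]⁻¹ corresponds to N(q, F) − 1.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.normal(size=(m, n))          # generic full-column-rank matrix

gram_det = np.linalg.det(A.T @ A)    # equals sum of det(A[F,:])^2 by Cauchy-Binet
approx = np.zeros((n, m))
for F in itertools.combinations(range(m), n):
    AF = A[list(F), :]
    dF = np.linalg.det(AF)
    if abs(dF) < 1e-12:
        continue                     # F not in S_A
    AF_inv = np.linalg.inv(AF)
    for col, q in enumerate(F):      # col plays the role of N(q, F) - 1
        approx[:, q] += AF_inv[:, col] * dF ** 2
approx /= gram_det

assert np.allclose(approx, np.linalg.pinv(A))
```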
The following theorem gives a characterization of the full term rank matrices with signed M–P inverses.

Theorem 4.6 [175]. Let A ∈ R^{m×n} satisfy ρ(A) = n. Then A⁺ is signed if and only if A satisfies the following two conditions:

(1) If F ∈ S_n(m) satisfies ρ(A[F, :]) = n, then A[F, :] is an S²NS matrix.
(2) For each pair of integers p, q with 1 ≤ q ≤ m, 1 ≤ p ≤ n and each pair of sets F₁, F₂ with q ∈ F_i ∈ S_A (i = 1, 2),

∏_{i=1}^{2} (A[F_i, :]⁻¹)_{p,N(q,F_i)} ≥ 0,

where S_A = {F | F ∈ S_n(m), ρ(A[F, :]) = n}.

Proof. (Necessity) If A⁺ is signed, then it follows from lemma 4.2 that condition (1) holds and A is an L-matrix. So for each Ã ∈ Q(A), Ã⁺ can be written in the form given in lemma 4.4. We prove condition (2) by contradiction. Suppose that there exist integers p, q and sets F₁, F₂ ∈ S_A with q ∈ F₁ ∩ F₂ such that

∏_{i=1}^{2} (A[F_i, :]⁻¹)_{p,N(q,F_i)} < 0.

Without loss of generality, let

(A[F₁, :]⁻¹)_{p,N(q,F₁)} > 0,  (A[F₂, :]⁻¹)_{p,N(q,F₂)} < 0.

Take

A_k = (a_ij^{(k)}) ∈ Q(A),  k = 1, 2,

where

a_ij^{(k)} = a_ij if i ∈ F_k, and a_ij^{(k)} = ε a_ij if i ∉ F_k,

and ε is a positive number. For a sufficiently small ε, it follows from lemma 4.4 that

(A₁⁺)_pq > 0,  (A₂⁺)_pq < 0.

This contradicts the fact that A⁺ is signed. So condition (2) holds.

(Sufficiency) Since condition (1) holds and ρ(A) = n, A is an L-matrix. For any Ã ∈ Q(A) and F ∈ S_A, we have

sgn(Ã[F, :]⁻¹) = sgn(A[F, :]⁻¹).

By condition (2), we have

sgn( Σ_{q∈F∈S_A} (Ã[F, :]⁻¹)_{p,N(q,F)} (det(Ã[F, :]))² ) = sgn( Σ_{q∈F∈S_A} (A[F, :]⁻¹)_{p,N(q,F)} (det(A[F, :]))² ).

It follows from lemma 4.4 that

sgn[(Ã⁺)_pq] = sgn[(A⁺)_pq].

Thus, A⁺ is signed. ■
Theorem 4.6 leads to the following corollaries.

Corollary 4.1 [175]. Let

A = [ B ]
    [ C ]  ∈ R^{m×n}

with ρ(B) = n. If A⁺ is signed, then B⁺ is signed.
Corollary 4.2 [175]. Let A ∈ R^{m×n} with ρ(A) = n. Then the following statements are equivalent:

(1) A⁺ is signed.
(2) For any n × n submatrices A[F₁, :] and A[F₂, :] with full term rank that contain at least one common row, A[F₁ ∪ F₂, :] is a matrix with a signed M–P inverse.
(3) For each k × n submatrix B of A (n ≤ k ≤ 2n − 1) with ρ(B) = n, B⁺ is signed.

Theorem 4.7 [175]. Let A ∈ R^{m×n} (m ≥ n ≥ 2) be a matrix with a signed M–P inverse. If each n × n submatrix of A has full term rank, then

(1) m = n and A is an S²NS matrix, or
(2) A has the same zero pattern as the vertex-edge incidence matrix of a tree.

For A ∈ R^{m×n}, theorem 4.6 gives a necessary and sufficient condition for A⁺ to be signed when ρ(A) = n. Next, we consider the case ρ(A) < n. Let N_r(A) and N_c(A) denote the numbers of rows and columns of A, respectively. Several auxiliary lemmas are given as follows.

Lemma 4.5 [175]. Let A ∈ R^{m×n} satisfy ρ(A) < n ≤ m. Then there exist permutation matrices P and Q such that

A = P [ B  0 ] Q,
      [ C  D ]

where ρ(B) = N_c(B) and ρ(D) = N_r(D).

Lemma 4.6 [175]. Let

A = [ B  0 ]
    [ C  D ]

be a real matrix. Then ρ(A) = N_c(B) + N_r(D) if and only if ρ(B) = N_c(B) and ρ(D) = N_r(D).

Proof. The necessity follows from ρ(A) ≤ ρ(B) + N_r(D) and ρ(A) ≤ N_c(B) + ρ(D). The sufficiency follows from ρ(A) ≥ ρ(B) + ρ(D). ■

Lemma 4.7 [175]. Let

A = [ B  0 ]
    [ C  D ]

be a real matrix such that rank(B) = N_c(B) and rank(D) = N_r(D). Then

A⁺ = [ B⁺        0  ]
     [ −D⁺CB⁺   D⁺ ].

Proof. Since rank(B) = N_c(B) and rank(D) = N_r(D), we get B⁺B = I and DD⁺ = I. Let

X = [ B⁺        0  ]
    [ −D⁺CB⁺   D⁺ ].

Then
AX = [ BB⁺  0 ],   XA = [ I   0  ],   (AX)ᵀ = AX,   (XA)ᵀ = XA,
     [ 0    I ]         [ 0  D⁺D ]

AXA = [ BB⁺B  0 ] = A,   XAX = [ B⁺        0  ] = X.
      [  C    D ]              [ −D⁺CB⁺   D⁺ ]

Thus, X = A⁺. ■

For A ∈ R^{m×n} with ρ(A) < n ≤ m, by lemma 4.5, characterizing a matrix A with a signed M–P inverse is equivalent to characterizing

[ B  0 ]
[ C  D ]

with a signed M–P inverse, where ρ(B) = N_c(B) and ρ(D) = N_r(D).

Theorem 4.8 [175]. Let

A = [ B  0 ]
    [ C  D ]

be a real matrix such that ρ(B) = N_c(B) and ρ(D) = N_r(D). Then A⁺ is signed if and only if the following two conditions hold:

(1) B⁺ and D⁺ are signed.
(2) For any B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D), we have

sgn(D̃⁺ C̃ B̃⁺) = sgn(D⁺ C B⁺).

Proof. (Necessity) Since ρ(B) = N_c(B) and ρ(D) = N_r(D), there exist B₁ ∈ Q(B) and D₁ ∈ Q(D) for which

rank(B₁) = N_c(B),  rank(D₁) = N_r(D).

We take

A₁ = [ B₁  0  ] ∈ Q(A).
     [ C   D₁ ]

From lemma 4.7, we have

A₁⁺ = [ B₁⁺          0   ]
      [ −D₁⁺CB₁⁺    D₁⁺ ].

For any B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D), let

Ã = [ B̃  0 ] ∈ Q(A).
    [ C̃  D̃ ]

Since A⁺ is signed, we have

Ã⁺ = [ X  0 ],
     [ Z  Y ]
where X ∈ Q(B₁⁺), Y ∈ Q(D₁⁺) and Z ∈ Q(−D₁⁺CB₁⁺). Since
[B̃ 0; C̃ D̃]⁺ = [X 0; Z Y],
we have X = B̃⁺ and Y = D̃⁺. Since X ∈ Q(B₁⁺) and Y ∈ Q(D₁⁺), it follows that B⁺ and D⁺ are signed, i.e., condition (1) holds.
Since B⁺ is signed and ρ(B) = N_c(B), lemma 4.2 implies that B is an L-matrix. Similarly, Dᵀ is an L-matrix. For any B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D), lemma 4.7 leads to
[B̃ 0; C̃ D̃]⁺ = [B̃⁺ 0; −D̃⁺C̃B̃⁺ D̃⁺].
Since A⁺ is signed, (2) holds.
(Sufficiency) Since (1) holds, by lemma 4.2, B and Dᵀ are L-matrices. For any B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D), by lemma 4.7, we have
[B̃ 0; C̃ D̃]⁺ = [B̃⁺ 0; −D̃⁺C̃B̃⁺ D̃⁺].
Since (2) holds, A⁺ is signed. ■
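The block formula of lemma 4.7, used repeatedly in the proof above, is easy to sanity-check numerically. The sketch below is ours (the shapes and random data are arbitrary choices, not from the book): it builds A = [B 0; C D] with B of full column rank and D of full row rank and compares the formula against a library pseudoinverse.

```python
import numpy as np

# Illustrative check of lemma 4.7:
# for A = [B 0; C D] with B of full column rank and D of full row rank,
# A+ = [B+ 0; -D+ C B+  D+].
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 2))   # full column rank (generically)
D = rng.standard_normal((2, 3))   # full row rank (generically)
C = rng.standard_normal((2, 2))
A = np.block([[B, np.zeros((3, 3))], [C, D]])

Bp, Dp = np.linalg.pinv(B), np.linalg.pinv(D)
X = np.block([[Bp, np.zeros((2, 2))], [-Dp @ C @ Bp, Dp]])
print(np.allclose(np.linalg.pinv(A), X))   # True
```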
In order to describe statement (2) of theorem 4.8, we need the notion of a multipartite signed digraph.

Definition 4.3 [175]. Let A = (a_ij) ∈ ℝ^{m×r}, B = (b_ij) ∈ ℝ^{r×s} and C = (c_ij) ∈ ℝ^{s×n}. A graph G(A, B, C) is called a multipartite signed digraph if the following two conditions hold.
(1) The vertex set V of G(A, B, C) is
V = X ∪ Y ∪ Z ∪ W,
where X = {x₁, ..., x_m}, Y = {y₁, ..., y_r}, Z = {z₁, ..., z_s} and W = {w₁, ..., w_n}.
(2) In G(A, B, C), there is an arc from x_i to y_j (with the sign of a_ij) if and only if a_ij ≠ 0; there is an arc from y_p to z_q (with the sign of b_pq) if and only if b_pq ≠ 0; and there is an arc from z_u to w_v (with the sign of c_uv) if and only if c_uv ≠ 0.
Next, we use the multipartite signed digraph to describe condition (2) of theorem 4.8.

Theorem 4.9 [175]. Let B, C, D be real matrices satisfying N_c(C) = N_c(B) and N_r(C) = N_r(D). If B⁺ and D⁺ are both signed, then the following conditions are equivalent:
(1) For any B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D), we have
sgn(D̃⁺C̃B̃⁺) = sgn(D⁺CB⁺).
(2) For each C̃ ∈ Q(C), we have
sgn(D⁺C̃B⁺) = sgn(D⁺CB⁺).
(3) In G(D⁺, C, B⁺), each pair of directed paths of length 3 with the same initial vertex and terminal vertex has the same sign.
When B contains no zero columns and D contains no zero rows, condition (3) is equivalent to the following condition (4).
(4) The block matrix
X = [−I D⁺ 0 0; 0 −I C 0; 0 0 −I B⁺; 0 0 0 −I]
is an S²NS matrix.

Proof. (1)⇒(2) is obvious.
(2)⇒(3). Suppose that (3) does not hold. Then there exist i, j, p₁, q₁, p₂, q₂ such that
(D⁺)_{ip₁}(C)_{p₁q₁}(B⁺)_{q₁j} > 0 and (D⁺)_{ip₂}(C)_{p₂q₂}(B⁺)_{q₂j} < 0.
We take C_k ∈ Q(C) (k = 1, 2) such that
(C_k)_{pq} = (C)_{pq} if (p, q) = (p_k, q_k), and (C_k)_{pq} = ε(C)_{pq} otherwise,
where ε is a sufficiently small positive number. Then
(D⁺C₁B⁺)_{ij} = Σ_{p,q} (D⁺)_{ip}(C₁)_{pq}(B⁺)_{qj} > 0, (D⁺C₂B⁺)_{ij} = Σ_{p,q} (D⁺)_{ip}(C₂)_{pq}(B⁺)_{qj} < 0,
contradicting (2).
(3)⇒(1). Note that
(D⁺CB⁺)_{ij} = Σ_{p,q} (D⁺)_{ip}(C)_{pq}(B⁺)_{qj}.
By (3), the product of any two terms on the right-hand side of this formula is nonnegative. Since B⁺ and D⁺ are both signed, for any B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D) we have
sgn((D⁺CB⁺)_{ij}) = sgn(Σ_{p,q} (D⁺)_{ip}(C)_{pq}(B⁺)_{qj}) = sgn(Σ_{p,q} (D̃⁺)_{ip}(C̃)_{pq}(B̃⁺)_{qj}) = sgn((D̃⁺C̃B̃⁺)_{ij}).
Thus, (1) holds.
(3)⇔(4). Since the diagonal entries of X are negative, X is an S²NS matrix if and only if the signed digraph S(X) is an S²NS signed digraph. Since X is an upper triangular matrix, S(X) contains no cycles. Thus,
S(X) = G(D⁺, C, B⁺).
So X is an S²NS matrix if and only if each pair of directed paths in G(D⁺, C, B⁺) with the same initial and terminal vertices has the same sign. If B contains no zero columns and D contains no zero rows, then B⁺ contains no zero rows and D⁺ contains no zero columns, and X is an S²NS matrix if and only if each pair of directed paths of length 3 in G(D⁺, C, B⁺) with the same initial and terminal vertices has the same sign. ■

Combining theorems 4.8 and 4.9, we obtain the following result.

Theorem 4.10. Let A = [B 0; C D] be a real matrix satisfying ρ(B) = N_c(B) and ρ(D) = N_r(D). Then A⁺ is signed if and only if both B⁺ and D⁺ are signed and the block matrix
X = [−I D⁺ 0 0; 0 −I C 0; 0 0 −I B⁺; 0 0 0 −I]
is an S²NS matrix.

Proof. Since ρ(B) = N_c(B) and ρ(D) = N_r(D), B contains no zero columns and D contains no zero rows. From theorems 4.8 and 4.9, A⁺ is signed if and only if both B⁺ and D⁺ are signed and
X = [−I D⁺ 0 0; 0 −I C 0; 0 0 −I B⁺; 0 0 0 −I]
is an S²NS matrix.
■
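Condition (3) of theorem 4.9 can be checked by brute force on small examples. The helper below is our sketch (the function name is not from the book): it enumerates all length-3 paths i → p → q → j in G(D⁺, C, B⁺) and tests whether paths with common endpoints agree in sign.

```python
import numpy as np
from itertools import product

def paths_sign_consistent(Dp, C, Bp):
    """Brute-force check of theorem 4.9, condition (3): every pair of
    length-3 directed paths in G(Dp, C, Bp) with the same initial and
    terminal vertices must carry the same sign."""
    Dp, C, Bp = map(np.asarray, (Dp, C, Bp))
    m, r = Dp.shape
    s, n = Bp.shape
    for i, j in product(range(m), range(n)):
        signs = {np.sign(Dp[i, p] * C[p, q] * Bp[q, j])
                 for p in range(r) for q in range(s)
                 if Dp[i, p] * C[p, q] * Bp[q, j] != 0}
        if len(signs) > 1:        # two i -> j paths with opposite signs
            return False
    return True

print(paths_sign_consistent([[1, 1]], [[1, 0], [0, 1]], [[1], [1]]))   # True
print(paths_sign_consistent([[1, 1]], [[1, 0], [0, -1]], [[1], [1]]))  # False
```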
Next, we introduce sign majorization relations for real numbers and real matrices.

Definition 4.4 [175]. Let a and b be two real numbers. If b = 0 or ab > 0, then b is said to be sign majorized by a, denoted b ⪯ a.

Definition 4.5 [175]. Let A = (a_ij) and B = (b_ij) be two m×n real matrices. If b_ij ⪯ a_ij for all i ∈ {1, 2, ..., m} and j ∈ {1, 2, ..., n}, then B is said to be sign majorized by A, denoted B ⪯ A.

By definition 4.5, B ⪯ A if and only if there exists Ã ∈ Q(A) such that B can be obtained by replacing some nonzero entries of Ã by zero. The relation ⪯ satisfies the following properties:
(1) For any real matrix A, A ⪯ A.
(2) If A ⪯ B and B ⪯ A, then sgn(A) = sgn(B).
(3) If A ⪯ B and B ⪯ C, then A ⪯ C.
So '⪯' is a partial order relation on the set of sign pattern matrices.

Lemma 4.8. Let A ∈ ℝ^{n×n} be an S²NS matrix such that B ⪯ A and ρ(B) = n. Then B is an S²NS matrix and B⁻¹ ⪯ A⁻¹.

Proof. Without loss of generality, suppose that all the diagonal entries of A and B are negative. Since A is an S²NS matrix, the associated signed digraph S(A) is an S²NS signed digraph. From B ⪯ A, S(B) is a subgraph of S(A). Thus, S(B) is also an S²NS signed digraph, i.e., B is an S²NS matrix.
Since all the diagonal entries of A and B are negative, and A and B are both S²NS matrices, all the diagonal entries of A⁻¹ and B⁻¹ are negative. Thus,
(B⁻¹)_{ii} ⪯ (A⁻¹)_{ii}, i = 1, 2, ..., n.
Suppose (B⁻¹)_{ij} ≠ 0 and i ≠ j. Then
sgn[(B⁻¹)_{ij}] = the common sign of all the paths from i to j in S(B) = the common sign of all the paths from i to j in S(A) = sgn[(A⁻¹)_{ij}].
So (B⁻¹)_{ij} ⪯ (A⁻¹)_{ij} for all i, j, i.e., B⁻¹ ⪯ A⁻¹. ■
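Definition 4.5 is straightforward to state as an entrywise predicate. The sketch below is ours (the function name and encoding are not from the book):

```python
import numpy as np

def sign_majorized(B, A):
    """True iff B is sign majorized by A (definition 4.5):
    entrywise, either b_ij = 0 or a_ij * b_ij > 0."""
    B, A = np.asarray(B), np.asarray(A)
    return bool(np.all((B == 0) | (A * B > 0)))

print(sign_majorized([[1, 0], [0, -2]], [[3, 5], [-1, -1]]))  # True
print(sign_majorized([[1]], [[-1]]))                          # False
```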
The following theorem generalizes the sign properties in lemma 4.8 from square matrices to rectangular matrices.

Theorem 4.11. Let A, B ∈ ℝ^{m×n} satisfy B ⪯ A and ρ(B) = n. If A⁺ is signed, then B⁺ is signed and B⁺ ⪯ A⁺.

Proof. We first prove that B⁺ is signed. Let B₁ be an n×n submatrix of B with ρ(B₁) = n, and let A₁ be the n×n submatrix of A occupying the same rows. Then B₁ ⪯ A₁ and ρ(A₁) = n. It follows from theorem 4.6 that A₁ is an S²NS matrix. From lemma 4.8, B₁ is an S²NS matrix. Thus, B satisfies condition (1) of theorem 4.6.
Next, we verify that B satisfies condition (2) of theorem 4.6. Let
S_A = {F | F ∈ S_n(m), ρ(A[F, :]) = n}, S_B = {F | F ∈ S_n(m), ρ(B[F, :]) = n}.
Then S_B ⊆ S_A. Since A satisfies condition (2) of theorem 4.6, for any p, q and q ∈ F_i ∈ S_B (i = 1, 2), we have
∏_{i=1}^{2} (A[F_i, :]⁻¹)_{p,N(q,F_i)} ≥ 0.
Note that A[F_i, :] and B[F_i, :] are S²NS matrices, and B[F_i, :] ⪯ A[F_i, :], i = 1, 2. By lemma 4.8, we obtain
B[F_i, :]⁻¹ ⪯ A[F_i, :]⁻¹, i = 1, 2.
So
∏_{i=1}^{2} (B[F_i, :]⁻¹)_{p,N(q,F_i)} ≥ 0.
Thus, B satisfies condition (2) of theorem 4.6, and B⁺ is signed.
For any p, q and q ∈ F ∈ S_B ⊆ S_A, we have
B[F, :] ⪯ A[F, :], B[F, :]⁻¹ ⪯ A[F, :]⁻¹.
It follows from lemma 4.4 that
(B⁺)_{pq} ⪯ (A⁺)_{pq}, i.e., B⁺ ⪯ A⁺. ■

For A ∈ ℝ^{m×n} with ρ(A) < n ≤ m, by lemma 4.5 the study of the sign pattern of A with an M–P inverse can be transformed to that of the sign pattern of the partitioned matrix [B 0; C D] with an M–P inverse, where ρ(B) = N_c(B) and ρ(D) = N_r(D).
Theorem 4.12. Let A = [B 0; C D] be a matrix with a signed M–P inverse satisfying ρ(B) = N_c(B) and ρ(D) = N_r(D). Then for any A₁ satisfying A₁ ⪯ A and ρ(A₁) = ρ(A), A₁⁺ is signed and A₁⁺ ⪯ A⁺.

Proof. A₁ has the block form
A₁ = [B₁ 0; C₁ D₁],
where B₁ ⪯ B, C₁ ⪯ C, D₁ ⪯ D. It follows from lemma 4.6 that
ρ(A) = N_c(B) + N_r(D).
So
ρ(A₁) = ρ(A) = N_c(B) + N_r(D) = N_c(B₁) + N_r(D₁).
By lemma 4.6, we obtain ρ(B₁) = N_c(B₁) and ρ(D₁) = N_r(D₁).
Since A⁺ is signed, theorem 4.8 shows that B⁺ and D⁺ are signed. From theorem 4.11, B₁⁺ and D₁⁺ are signed, and
B₁⁺ ⪯ B⁺, D₁⁺ ⪯ D⁺.
Since A⁺ is signed, the multipartite signed digraph G(D⁺, C, B⁺) satisfies condition (3) of theorem 4.9. Since B₁⁺ ⪯ B⁺, C₁ ⪯ C and D₁⁺ ⪯ D⁺, the signed digraph G(D₁⁺, C₁, B₁⁺) is a subgraph of G(D⁺, C, B⁺). Thus, G(D₁⁺, C₁, B₁⁺) satisfies condition (3) of theorem 4.9. From theorems 4.8 and 4.9, A₁⁺ is signed.
Since both B₁⁺ and D₁⁺ are signed, and ρ(B₁) = N_c(B₁), ρ(D₁) = N_r(D₁), we have rank(B₁) = N_c(B₁) and rank(D₁) = N_r(D₁). From lemma 4.7, we know
A₁⁺ = [B₁⁺ 0; −D₁⁺C₁B₁⁺ D₁⁺].
For any B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D),
sgn(D̃⁺C̃B̃⁺) = sgn(D⁺CB⁺).
Since B₁⁺ ⪯ B⁺, C₁ ⪯ C and D₁⁺ ⪯ D⁺, we get D₁⁺C₁B₁⁺ ⪯ D⁺CB⁺. Then
A₁⁺ = [B₁⁺ 0; −D₁⁺C₁B₁⁺ D₁⁺] ⪯ [B⁺ 0; −D⁺CB⁺ D⁺] = A⁺.
The proof is complete. ■
The following result follows from theorems 4.11 and 4.12.

Theorem 4.13. Let A, A₁ ∈ ℝ^{m×n} (n ≤ m) be two matrices satisfying A₁ ⪯ A and ρ(A₁) = ρ(A). If A⁺ is signed, then A₁⁺ is signed and A₁⁺ ⪯ A⁺.

The following result is obtained from theorem 4.13.

Theorem 4.14. Let B be a submatrix of a real matrix A such that ρ(B) = ρ(A). If A⁺ is signed, then B⁺ is also signed.

Proof. Without loss of generality, suppose that
A = [B C; D 0].
Let
A₁ = [B 0; 0 0].
Then A₁ ⪯ A and ρ(A₁) = ρ(B) = ρ(A). By theorem 4.13, A₁⁺ is signed, i.e., B⁺ is signed. ■

Theorem 4.14 need not hold without the assumption ρ(B) = ρ(A). For example, let
A = [1 0 0 0; −1 1 0 0; −1 −1 1 0; −1 −1 −1 1], B = [−1 −1; −1 −1];
then B is a submatrix of A with ρ(B) < ρ(A); A⁺ is signed, but B⁺ is not signed. A study of the number of nonzero entries of matrices with signed M–P inverses can be found in [171].
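The distinction in the counterexample above can be probed numerically. The crude randomized test below is entirely ours (it samples the qualitative class by positive entrywise rescaling, so it can only refute signedness, never prove it); the matrices used are one concrete reconstruction of the example: a lower triangular S²NS pattern versus a 2×2 all-same-sign block.

```python
import numpy as np

def mp_sign_appears_constant(A, trials=200, seed=0):
    """Heuristic: rescale each nonzero entry of A by a random positive
    factor (staying inside Q(A)) and compare the sign patterns of the
    pseudoinverses. Returns False as soon as two patterns disagree."""
    rng = np.random.default_rng(seed)
    pattern = lambda M: np.sign(np.round(np.linalg.pinv(M), 10))
    A = np.asarray(A, dtype=float)
    ref = pattern(A)
    return all(np.array_equal(pattern(A * rng.uniform(0.1, 10.0, A.shape)), ref)
               for _ in range(trials))

A = np.eye(4) - np.tril(np.ones((4, 4)), -1)   # diag 1, strictly lower -1
B = -np.ones((2, 2))                           # rho(B) < rho(A)
print(mp_sign_appears_constant(A), mp_sign_appears_constant(B))  # True False
```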
4.3
Triangular Partitioned Matrices with Signed Moore–Penrose Inverse
Let A ∈ ℝ^{m×n} be a matrix with a signed M–P inverse satisfying ρ(A) = n. If A has no zero rows, then A is permutation equivalent to the matrix [175]
[A₁ 0 ⋯ 0; A₂₁ A₂ ⋯ 0; ⋮ ⋮ ⋱ ⋮; A_{k1} A_{k2} ⋯ A_k],
where each A_i is one of the following types:
(1) a column without zero entries;
(2) an S²NS matrix;
(3) a matrix having the same zero pattern as the vertex–edge incidence matrix of a tree.

Motivated by these facts, B.L. Shader [167] posed a basic problem: establish necessary and sufficient conditions, in terms of the blocks A_ij, for the above block matrix to have a signed M–P inverse. J. Shao and H. Shan systematically established a series of results on this problem in [175, 176]. In this section, we introduce the results in this area. We first present three auxiliary lemmas.

Lemma 4.9 [175]. Let A = [A₁ 0; c B] ∈ ℝ^{m×n}, where A₁ = (a₁, a₂, ..., a_k)ᵀ (k ≥ 2) is a column without zero entries, and ρ(B) = n − 1. Then A⁺ is signed if and only if
D = [a₁ 0; c B]
is a matrix with a signed M–P inverse.

Proof. The necessity follows from corollary 4.1. We now prove the sufficiency. Since D⁺ is signed, D satisfies condition (1) of theorem 4.6. Clearly, ρ(A) = n, and A satisfies condition (1) of theorem 4.6. Next, we show that A satisfies condition (2) of theorem 4.6. Let T₁, T₂ be two row index sets of A such that T₁ ∩ T₂ ≠ ∅ and ρ(A[T₁, :]) = ρ(A[T₂, :]) = n. Then T₁ ∪ T₂ contains at most two elements of {1, 2, ..., k}. Consider the following two cases.

Case 1. T₁ ∪ T₂ contains at most one element of {1, 2, ..., k}. Since D⁺ is signed, the pair T₁, T₂ satisfies condition (2) of theorem 4.6, so A⁺ is signed.

Case 2. T₁ ∪ T₂ contains two elements of {1, 2, ..., k}. Let T₁ = {1} ∪ R₁ and T₂ = {2} ∪ R₂, where R₁ and R₂ contain no elements of {1, 2, ..., k}. Let
S_i = {j − k | j ∈ R_i}, c_i = c[S_i, :], B_i = B[S_i, :], i = 1, 2.
Then
A[T_i, :] = [a_i 0; c_i B_i], A[T_i, :]⁻¹ = [a_i⁻¹ 0; −B_i⁻¹c_i a_i⁻¹ B_i⁻¹], i = 1, 2.
Let T₃ = {1} ∪ R₂. Similarly, we obtain
A[T₃, :]⁻¹ = [a₁⁻¹ 0; −B₂⁻¹c₂ a₁⁻¹ B₂⁻¹].
For q ∈ T₁ ∩ T₂, we deduce that q ∈ R₁ ∩ R₂ ⊆ T₃ and N(q, T₂) = N(q, T₃) ≥ 2.
Comparing A[T₂, :]⁻¹ with A[T₃, :]⁻¹, we have
(A[T₂, :]⁻¹)_{p,N(q,T₂)} = (A[T₃, :]⁻¹)_{p,N(q,T₃)}.
Both A[T₁, :] and A[T₃, :] are submatrices of D. Since D satisfies condition (2) of theorem 4.6, we have
(A[T₁, :]⁻¹)_{p,N(q,T₁)} (A[T₃, :]⁻¹)_{p,N(q,T₃)} ≥ 0.
Since (A[T₂, :]⁻¹)_{p,N(q,T₂)} = (A[T₃, :]⁻¹)_{p,N(q,T₃)}, A satisfies condition (2) of theorem 4.6, and A⁺ is signed. ■
If A is a fully indecomposable n×n matrix, then each (n−1)×(n−1) submatrix of A has term rank n − 1 [12].

Lemma 4.10 [175]. Let A ∈ ℝ^{m×n} (n ≥ 2) be a matrix without zero rows, and let A⁺ be signed. If A contains a fully indecomposable n×n submatrix B, then m = n, i.e., A = B.

Proof. Suppose m ≥ n + 1. Let C be an (n+1)×n submatrix of A containing B. By corollary 4.1, C⁺ is signed. Since C has no zero rows and B is fully indecomposable, each n×n submatrix of C has term rank n. From theorem 4.7, C has the same zero pattern as the vertex–edge incidence matrix of a tree. Then a column of B contains at most one nonzero entry, which contradicts the fact that B is fully indecomposable. ■

Lemma 4.11 [175]. Let A ∈ ℝ^{m×2} (m ≥ 2) be a matrix without zero rows and columns. Then A⁺ is signed if and only if one of the following conditions holds:
(1) A ∈ ℝ^{2×2} is an S²NS matrix without zero entries.
(2) A is permutation equivalent to
[a₁ ⋯ a_k a_{k+1} 0 ⋯ 0; 0 ⋯ 0 b₁ b₂ ⋯ b_r]ᵀ,
where a₁, ..., a_k, b₁, ..., b_r are nonzero, and a_{k+1} is arbitrary.

Proof. (Necessity) If A contains a 2×2 fully indecomposable submatrix, then by lemma 4.10, A ∈ ℝ^{2×2} is an S²NS matrix without zero entries. If A does not contain a 2×2 fully indecomposable submatrix, then A contains at most one row without zero entries, and A is permutation equivalent to
[a₁ ⋯ a_k a_{k+1} 0 ⋯ 0; 0 ⋯ 0 b₁ b₂ ⋯ b_r]ᵀ,
where a₁, ..., a_k, b₁, ..., b_r are nonzero, and a_{k+1} is arbitrary.
(Sufficiency) We only need to consider the case when condition (2) holds. Note that every 2×2 or 3×2 submatrix of
[a₁ ⋯ a_k a_{k+1} 0 ⋯ 0; 0 ⋯ 0 b₁ b₂ ⋯ b_r]ᵀ
has a signed M–P inverse. Thus, A⁺ is signed by corollary 4.2. ■
In the following, we study the signed M–P inverses of two classes of special triangular matrices.

Theorem 4.15 [175]. Let A = [A₁ 0; A₂₁ A₂] ∈ ℝ^{m×n}, where A₁ = (a₁, a₂, ..., a_r)ᵀ is a column without zero entries. Then the following statements hold:
(1) If A₂ is a column without zero entries, then A⁺ is signed if and only if A₂₁ contains at most one nonzero entry.
(2) If A₂ is an S²NS matrix, then A⁺ is signed if and only if the matrix [1 0; A₂₁ A₂] is an S²NS matrix.
(3) If A₂ has the same zero pattern as the vertex–edge incidence matrix of a tree, then A⁺ is signed if and only if A₂₁ contains at most one nonzero entry.

Proof. (1) By lemma 4.11, statement (1) holds.
(2) The necessity follows from theorem 4.6, and the sufficiency follows from lemma 4.9.
(3) (Necessity) Suppose A₂₁ ≠ 0. Let
B = [a₁ 0; A₂₁ A₂].
By corollary 4.1, B⁺ is signed. Note that each n×n submatrix of B has term rank n. From theorem 4.7, B has the same zero pattern as the vertex–edge incidence matrix of a tree. Then A₂₁ contains at most one nonzero entry.
(Sufficiency) If A₂₁ = 0, then A⁺ is signed. We consider the case when A₂₁ contains exactly one nonzero entry. Then
B = [a₁ 0; A₂₁ A₂]
has the same zero pattern as the vertex–edge incidence matrix of a tree, so B⁺ is signed, and A⁺ is signed by lemma 4.9. ■

Theorem 4.16 [175]. Let A = [A₁ 0; A₂₁ A₂] ∈ ℝ^{m×n}, where A₁ is a k×k fully indecomposable S²NS matrix (k ≥ 2). Then the following statements hold:
(1) If A₂ = (a₁, a₂, ..., a_r)ᵀ is a column without zero entries, then A⁺ is signed if and only if either A₂₁ = 0, or r = 1 and A₂₁ contains exactly one nonzero entry.
(2) If A₂ is a fully indecomposable S²NS matrix, then A⁺ is signed if and only if A₂₁ contains at most one nonzero entry.
(3) If A₂ has the same zero pattern as the vertex–edge incidence matrix of a tree, then A⁺ is signed if and only if A₂₁ = 0.

Proof. (1) The sufficiency is obvious. For the necessity, suppose A₂₁ ≠ 0 and r ≥ 2. Let
B = [A₁ 0; B₂₁ B₂]
be an (n+1)×n submatrix of A, where B₂ = (a_i, a_j)ᵀ is a 2-dimensional column and B₂₁ ≠ 0. By corollary 4.1, B⁺ is signed. Since B₂₁ ≠ 0 and A₁ is fully indecomposable, each n×n submatrix of B has term rank n. Theorem 4.7 implies that B has the same zero pattern as the vertex–edge incidence matrix of a tree. Since B₂₁ ≠ 0, some column of A₁ contains at most one nonzero entry, which contradicts the fact that A₁ is fully indecomposable. Thus, r = 1 when A₂₁ ≠ 0. If r = 1 and A₂₁ contains at least two nonzero entries, then the signed digraph S(A) is not an S²NS signed digraph by theorem 3.17, so A is not an S²NS matrix, i.e., A⁺ is not signed, a contradiction. Hence either A₂₁ = 0, or r = 1 and A₂₁ contains exactly one nonzero entry.
(2) (Necessity) If both A₁ and A₂ are fully indecomposable and A₂₁ contains at least two nonzero entries, then the signed digraph S(A) is not an S²NS signed digraph by theorem 3.17; A is not an S²NS matrix and hence A⁺ is not signed, a contradiction. So A₂₁ contains at most one nonzero entry.
(Sufficiency) If both A₁ and A₂ are fully indecomposable and A₂₁ contains at most one nonzero entry, then A is an S²NS matrix, i.e., A⁺ is signed.
(3) If A₂₁ = 0, then A⁺ is signed. We only need to show the necessity. Suppose that A₂₁ ≠ 0. Each n×n submatrix of A has term rank n since A₁ is fully indecomposable. Then A has the same zero pattern as the vertex–edge incidence matrix of a tree by theorem 4.7. Since A₂₁ ≠ 0, some column of A₁ contains at most one nonzero entry, which contradicts the fact that A₁ is fully indecomposable. Thus, A₂₁ = 0. ■
4.4
Ray Pattern for Moore–Penrose Inverse
Definition 4.6 [16]. For a complex matrix A, if ray(Ã⁺) = ray(A⁺) for all Ã ∈ Q(A), then A is said to possess a ray M–P inverse (or A⁺ is said to be ray unique).

Clearly, the concept of matrices having ray M–P inverses generalizes that of the ray S²NS matrix. Since (A*)⁺ = (A⁺)*, A has a ray M–P inverse if and only if A* has a ray M–P inverse. If A = PBQ for some permutation matrices P and Q, then A⁺ = QᵀB⁺Pᵀ; in this case, A⁺ is ray unique if and only if B⁺ is ray unique. In this section, we present characterizations of matrices with ray M–P inverses, which extend some results on the characterization of matrices with signed M–P inverses in [171].

Lemma 4.12 [16]. Let A ∈ ℂ^{n×n} be a DRU matrix. If (A₁⁻¹)_{qp}(A₂⁻¹)_{qp} = 0 or ray((A₁⁻¹)_{qp}) = ray((A₂⁻¹)_{qp}) for all p, q ∈ [n] and A₁, A₂ ∈ Q(A), then A is a ray S²NS matrix.

Proof. Since A is a DRU matrix and (A⁻¹)_{qp} = (−1)^{p+q} det(A(p|q))/det(A), we have det(A₁(p|q)) det(A₂(p|q)) = 0 or ray(det(A₁(p|q))) = ray(det(A₂(p|q))) for all A₁, A₂ ∈ Q(A).
If ρ(A(p|q)) < n − 1, then det(A₁(p|q)) = det(A₂(p|q)) = 0, i.e., (A₁⁻¹)_{qp} = (A₂⁻¹)_{qp} = 0 for A₁, A₂ ∈ Q(A).
If ρ(A(p|q)) = n − 1, then there exist nonzero terms in the determinantal expansion of A(p|q). If the determinantal expansion of A(p|q) contained two nonzero terms with different ray patterns, there would exist A₁, A₂ ∈ Q(A) such that 0 ≠ ray(det(A₁(p|q))) ≠ ray(det(A₂(p|q))) ≠ 0, a contradiction. Hence all nonzero terms in the determinantal expansion of A(p|q) have the same ray pattern, i.e., A(p|q) is a DRU matrix. Since A is a DRU matrix, ray((A₁⁻¹)_{qp}) = ray((A₂⁻¹)_{qp}) for A₁, A₂ ∈ Q(A). Hence A is a ray S²NS matrix.
■
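The entrywise ray map used throughout this section is simple to realize numerically. The helper below is our sketch (the name `ray` just follows the text's notation):

```python
import numpy as np

def ray(Z):
    """Entrywise ray pattern of a complex matrix: z/|z| if z != 0, else 0."""
    Z = np.asarray(Z, dtype=complex)
    out = np.zeros_like(Z)
    nz = Z != 0
    out[nz] = Z[nz] / np.abs(Z[nz])
    return out

print(ray([[2j, 0], [3, -1]]))   # [[1j, 0], [1, -1]]
```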
Lemma 4.13 [16]. Let A ∈ ℂ^{n×n}. If ρ(A) = n and A is not a DRU matrix, then there exist nonsingular matrices B, C ∈ Q(A) and integers p, q ∈ [n] such that ray((B⁻¹)_{qp}) ≠ ray((C⁻¹)_{qp}) and (B⁻¹)_{qp}(C⁻¹)_{qp} ≠ 0.

Proof. We consider the following two cases.
Case 1: A is ray nonsingular. Since A is not a DRU matrix, there exist nonsingular matrices B, C ∈ Q(A) such that ray(det(B)) ≠ ray(det(C)) and B differs from C in exactly one entry, i.e., there exist p, q ∈ [n] such that (B)_{pq} ≠ (C)_{pq} and (B)_{ij} = (C)_{ij} for (i, j) ≠ (p, q). Then
(B⁻¹)_{qp} det(B) = (C⁻¹)_{qp} det(C) = (−1)^{p+q} det(B(p|q)).
Since
(−1)^{p+q} ((B)_{pq} − (C)_{pq}) det(B(p|q)) = det(B) − det(C) ≠ 0,
we have 0 ≠ ray((B⁻¹)_{qp}) ≠ ray((C⁻¹)_{qp}) ≠ 0.
Case 2: A is not ray nonsingular. Since ρ(A) = n, there exist a nonsingular matrix B ∈ Q(A) and a singular matrix C′ ∈ Q(A) such that B differs from C′ in exactly one entry, i.e., there exist p, q ∈ [n] such that (B)_{pq} ≠ (C′)_{pq} and (B)_{ij} = (C′)_{ij} for (i, j) ≠ (p, q). Let C = (1 + ε)C′ − εB (ε > 0). Then C ∈ Q(A) for sufficiently small ε > 0. Since C agrees with B and C′ outside the (p, q) entry and det is linear in that entry, det(C) = (1 + ε) det(C′) − ε det(B) = −ε det(B) ≠ 0, so ray(det(B)) ≠ ray(det(C)). From the proof of Case 1, we have 0 ≠ ray((B⁻¹)_{qp}) ≠ ray((C⁻¹)_{qp}) ≠ 0. ■

If A has full column rank, then A⁺ = (A*A)⁻¹A*. From corollary 3.3 in [227], we obtain the following lemma over the complex field.

Lemma 4.14. Let A ∈ ℂ^{m×n} be a matrix with full column rank. Then
(A⁺)_{pq} = (1/det(A*A)) Σ_{q∈T∈S_A} (−1)^{N(q,T)+p} det(A[T|:])* det(A[T∖{q} | :∖{p}])
= (1/det(A*A)) Σ_{q∈T∈S_A} |det(A[T|:])|² (A[T|:]⁻¹)_{p,N(q,T)},
where S_A = {F | F ∈ S_n(m), det(A[F|:]) ≠ 0} and det(A[T|:])* denotes the complex conjugate.

Lemma 4.15 [194]. Let M = [A B; C D], where A is nonsingular and rank(M) = rank(A). Then
M⁺ = [E EP*; Q*E Q*EP*],
where E = (I + QQ*)⁻¹ A⁻¹ (I + P*P)⁻¹, P = CA⁻¹ and Q = A⁻¹B.

It is easy to see that lemma 4.7 also holds for complex matrices.

Lemma 4.16 [171]. Let A = [B 0; C D], where rank(B) = N_c(B) and rank(D) = N_r(D). Then
A⁺ = [B⁺ 0; −D⁺CB⁺ D⁺].
For a matrix M with ρ(M) = r, there always exists an r×r submatrix A of M such that ρ(A) = ρ(M). We show that A is a ray S²NS matrix whenever M⁺ is ray unique, which extends theorem 3.3 in [167] and theorem 2.A in [171].

Theorem 4.17. Let M be a matrix with a ray M–P inverse. If A ∈ ℂ^{r×r} is a submatrix of M satisfying ρ(A) = ρ(M) = r, then A is a ray S²NS matrix.

Proof. Since A is a submatrix of M, there exist permutation matrices U, V such that M = U [A B; C D] V. Since ρ(A) = ρ(M) and M⁺ is ray unique, we have D = 0
and N = [A B; C 0] has a ray M–P inverse. We first prove that A is a DRU matrix. If A is not a DRU matrix, then by lemma 4.13, there exist nonsingular matrices A₁, A₂ ∈ Q(A) and p, q ∈ [r] such that 0 ≠ ray((A₁⁻¹)_{qp}) ≠ ray((A₂⁻¹)_{qp}) ≠ 0. Take
N_i(ε) = [A_i εB; εC 0] ∈ Q(N) (ε > 0, i = 1, 2).
Since rank(N_i(ε)) ≤ ρ(N) = ρ(A) and A₁, A₂ are nonsingular, we have rank(N_i(ε)) = rank(A_i). By lemma 4.15, we have
N_i(ε)⁺ = [E EP*; Q*E Q*EP*],
where E = (I + QQ*)⁻¹ A_i⁻¹ (I + P*P)⁻¹, P = εCA_i⁻¹ and Q = εA_i⁻¹B. So
lim_{ε→0⁺} N_i(ε)⁺ = [A_i⁻¹ 0; 0 0].    (4.1)
Since 0 ≠ ray((A₁⁻¹)_{qp}) ≠ ray((A₂⁻¹)_{qp}) ≠ 0, we have ray(N₁(ε)⁺) ≠ ray(N₂(ε)⁺) for sufficiently small ε, contradicting the fact that N⁺ is ray unique. Hence A is a DRU matrix.
Since A is a DRU matrix and N⁺ is ray unique, by (4.1), we obtain (B₁⁻¹)_{qp}(B₂⁻¹)_{qp} = 0 or ray((B₁⁻¹)_{qp}) = ray((B₂⁻¹)_{qp}) for all p, q ∈ [r] and nonsingular B₁, B₂ ∈ Q(A). By lemma 4.12, A is a ray S²NS matrix.
■
For every matrix A, there exist permutation matrices P, Q such that A = P [B 0; C D] Q, where ρ(B) = N_c(B) and ρ(D) = N_r(D) (see (5.4) in [13]). Hence A⁺ is ray unique if and only if [B 0; C D] has a ray M–P inverse (with ρ(B) = N_c(B) and ρ(D) = N_r(D)).

Theorem 4.18. Let A = [B 0; C D], where ρ(B) = N_c(B) and ρ(D) = N_r(D). Then A⁺ is ray unique if and only if the following two conditions hold:
(1) B⁺ and D⁺ are ray unique.
(2) ray(D̃⁺C̃B̃⁺) = ray(D⁺CB⁺) for all B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D).

Proof. Since ρ(B) = N_c(B) and ρ(D) = N_r(D), there exist B₁ ∈ Q(B) and D₁ ∈ Q(D) satisfying rank(B₁) = N_c(B) and rank(D₁) = N_r(D). Let A₁ = [B₁ 0; C D₁]. By lemma 4.16,
A₁⁺ = [B₁⁺ 0; −D₁⁺CB₁⁺ D₁⁺].
For Ã = [B̃ 0; C̃ D̃] ∈ Q(A), since ray(Ã⁺) = ray(A₁⁺), we get Ã⁺ = [X 0; Z Y], where X ∈ Q(B₁⁺), Y ∈ Q(D₁⁺)
and Z ∈ Q(−D₁⁺CB₁⁺). From the definition of the M–P inverse, we get X = B̃⁺ and Y = D̃⁺. Hence ray(B̃⁺) = ray(B₁⁺) and ray(D̃⁺) = ray(D₁⁺). So B⁺ and D⁺ are ray unique, and part (1) holds.
Since B⁺ and D⁺ are ray unique, and ρ(B) = N_c(B) and ρ(D) = N_r(D), by theorem 4.17 each matrix in Q(B) has full column rank and each matrix in Q(D) has full row rank. For B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D), by lemma 4.16 we have
[B̃ 0; C̃ D̃]⁺ = [B̃⁺ 0; −D̃⁺C̃B̃⁺ D̃⁺].    (4.2)
Since A⁺ is ray unique, we have ray(D̃⁺C̃B̃⁺) = ray(D⁺CB⁺), and part (2) holds.
If parts (1) and (2) hold, then by (4.2), A⁺ is ray unique. ■

The following result is an equivalent statement for part (2) of theorem 4.18.

Theorem 4.19. Let B, C, D be matrices with N_c(C) = N_c(B) and N_r(C) = N_r(D). Suppose that B⁺ and D⁺ are both ray unique. Then the following are equivalent:
(1) ray(D̃⁺C̃B̃⁺) = ray(D⁺CB⁺) for all B̃ ∈ Q(B), C̃ ∈ Q(C) and D̃ ∈ Q(D).
(2) ray(D⁺C̃B⁺) = ray(D⁺CB⁺) for all C̃ ∈ Q(C).

Proof. (1) ⇒ (2). Obvious.
(2) ⇒ (1). Let M = D⁺CB⁺. Then
(M)_{ij} = Σ_{k₁=1}^{N_r(C)} Σ_{k₂=1}^{N_c(C)} (D⁺)_{ik₁} (C)_{k₁k₂} (B⁺)_{k₂j}.    (4.3)

Suppose that part (1) does not hold. Then there exist i₁, j₁, p₁, q₁, p₂, q₂ such that
0 ≠ ray((D⁺)_{i₁p₁}(C)_{p₁q₁}(B⁺)_{q₁j₁}) ≠ ray((D⁺)_{i₁p₂}(C)_{p₂q₂}(B⁺)_{q₂j₁}) ≠ 0.
Let C_k ∈ Q(C) (k = 1, 2) be the matrix with entries
(C_k)_{pq} = ε(C)_{pq} if (p, q) = (p_k, q_k), and (C_k)_{pq} = (C)_{pq} otherwise (ε > 0).
Let M_k = D⁺C_kB⁺ (k = 1, 2). From (4.3), we obtain
(M₁)_{i₁j₁} = Σ_{k₁,k₂} (D⁺)_{i₁k₁}(C)_{k₁k₂}(B⁺)_{k₂j₁} + (ε − 1)(D⁺)_{i₁p₁}(C)_{p₁q₁}(B⁺)_{q₁j₁},
(M₂)_{i₁j₁} = Σ_{k₁,k₂} (D⁺)_{i₁k₁}(C)_{k₁k₂}(B⁺)_{k₂j₁} + (ε − 1)(D⁺)_{i₁p₂}(C)_{p₂q₂}(B⁺)_{q₂j₁}.
When ε is sufficiently large, ray((M₁)_{i₁j₁}) ≠ ray((M₂)_{i₁j₁}), a contradiction to part (2). Hence (2) ⇒ (1).
■
In theorem 4.18, A⁺ being ray unique implies that B and D have ray M–P inverses (with ρ(B) = N_c(B) and ρ(D) = N_r(D)). Next we give necessary and sufficient conditions for B⁺ to be ray unique.

Theorem 4.20. Let A ∈ ℂ^{m×n} with ρ(A) = n. Then A⁺ is ray unique if and only if the following conditions hold:
(1) Every n×n submatrix of A with term rank n is a ray S²NS matrix.
(2) For each p ∈ [n], q ∈ [m] and each pair of sets T₁, T₂ with q ∈ T_i ∈ {T | T ∈ S_n(m), ρ(A[T|:]) = n} (i = 1, 2), we have
ray((A[T₁|:]⁻¹)_{p,N(q,T₁)}) = ray((A[T₂|:]⁻¹)_{p,N(q,T₂)}) or ∏_{i=1}^{2} (A[T_i|:]⁻¹)_{p,N(q,T_i)} = 0.
Proof. If A⁺ is ray unique, then by theorem 4.17, part (1) holds. Suppose that part (2) does not hold. Then there exist p ∈ [n], q ∈ [m] and T₁, T₂ ∈ {T | T ∈ S_n(m), ρ(A[T|:]) = n} with q ∈ T₁ ∩ T₂ such that
0 ≠ ray((A[T₁|:]⁻¹)_{p,N(q,T₁)}) ≠ ray((A[T₂|:]⁻¹)_{p,N(q,T₂)}) ≠ 0.
Let A_k ∈ Q(A) (k = 1, 2) be the matrix with entries
(A_k)_{ij} = (A)_{ij} if i ∈ T_k, and (A_k)_{ij} = ε(A)_{ij} if i ∉ T_k (ε > 0).
For sufficiently small ε, by lemma 4.14, we have ray((A₁⁺)_{pq}) ≠ ray((A₂⁺)_{pq}), contradicting the fact that A⁺ is ray unique. So part (2) holds.
Suppose that parts (1) and (2) hold. Then A[T|:] is a ray S²NS matrix for all T ∈ {T | T ∈ S_n(m), ρ(A[T|:]) = n}. By lemma 4.14, A⁺ is ray unique. ■
Next is an example for theorem 4.20.

Example. Let A = [a 0; b c; 0 d] ∈ ℂ^{3×2}, where a, b, c, d are nonzero complex numbers. Since A has full column rank, we have
A⁺ = (A*A)⁻¹A* = (1/det(A*A)) [ (c̄c + d̄d)ā   d̄db̄   −b̄cd̄ ;  −c̄bā   āac̄   (āa + b̄b)d̄ ].
Since each matrix in Q(A) has full column rank, A⁺ is ray unique. Matrix A has three 2×2 submatrices
A₁ = [a 0; b c], A₂ = [b c; 0 d], A₃ = [a 0; 0 d].
Clearly A₁, A₂, A₃ are ray S²NS matrices, and
A₁⁻¹ = [a⁻¹ 0; −c⁻¹ba⁻¹ c⁻¹], A₂⁻¹ = [b⁻¹ −b⁻¹cd⁻¹; 0 d⁻¹], A₃⁻¹ = [a⁻¹ 0; 0 d⁻¹].
It is easy to see that A₁, A₂ and A₃ satisfy conditions (1) and (2) of theorem 4.20.
We can obtain the following result from theorems 4.18, 4.19 and 4.20.

Theorem 4.21. Let A = [B 0; C D], where ρ(B) = N_c(B) and ρ(D) = N_r(D). Then A⁺ is ray unique if and only if the following statements hold:
(1) B and D satisfy the conditions on A in theorem 3.4.
(2) ray(D⁺C̃B⁺) = ray(D⁺CB⁺) for all C̃ ∈ Q(C).
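The closed form displayed in the example can be spot-checked with concrete values. The numbers below are our arbitrary choices:

```python
import numpy as np

# Numeric spot-check of the 3x2 closed form A+ = (A* A)^(-1) A*.
a, b, c, d = 1 + 2j, -1 + 1j, 2 - 1j, 0.5 + 1j     # arbitrary nonzero values
A = np.array([[a, 0], [b, c], [0, d]])
ac, bc, cc, dc = np.conj(a), np.conj(b), np.conj(c), np.conj(d)

det = (ac * a + bc * b) * (cc * c + dc * d) - (bc * c) * (cc * b)
Ap = np.array([[(cc * c + dc * d) * ac, dc * d * bc, -bc * c * dc],
               [-cc * b * ac, ac * a * cc, (ac * a + bc * b) * dc]]) / det
print(np.allclose(Ap, np.linalg.pinv(A)))   # True
```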
Chapter 5
Sign Pattern for Drazin Inverse

In 1947, P.A. Samuelson, the Nobel laureate in economics, reduced the qualitative analysis of economic models to the problem of sign-solvability of linear systems, and initiated the research field of sign patterns of matrices. Later, R.A. Brualdi, B.L. Shader and others studied S²NS matrices systematically. Since 1995, the properties of matrices with signed M–P inverses have been studied by B.L. Shader, Jiayu Shao, Haiying Shan and others, generalizing the study of S²NS matrices. In 2010, M. Catral, D.D. Olesky and P. van den Driessche expressed the group inverse of a class of adjacency matrices of directed broom trees in terms of graph weights, and pointed out that the group inverses of such binary matrices have a constant sign pattern. In 2011, we proposed and studied sign group invertible matrices, strong sign group invertible matrices and matrices with signed Drazin inverses. The research results on matrices with signed Drazin inverses are presented in this chapter.
5.1
Matrices with Signed Drazin Inverse
In 2012, the concept of matrices with signed Drazin inverses was introduced [228]. The study of matrices with signed Drazin inverses deepens and generalizes the study of S²NS matrices.

Definition 5.1 [228]. Let A be a square real matrix. We say that A has a signed Drazin inverse (or simply, A^D is signed) if
sgn(Ã^D) = sgn(A^D)
for each matrix Ã ∈ Q(A).

The following two results are not difficult to prove from the definition of matrices with signed Drazin inverses.
DOI: 10.1051/978-2-7598-2599-8.c005 © Science Press, EDP Sciences, 2021
Theorem 5.1. Let A be a square real matrix. Then A has a signed Drazin inverse if and only if Aᵀ has a signed Drazin inverse.

Proof. Since (Aᵀ)^D = (A^D)ᵀ, the result follows.
■
Theorem 5.2. Let A be a square real matrix and B = PAPᵀ, where P is a permutation matrix. Then A^D is signed if and only if B^D is signed.

Proof. Since B^D = PA^DPᵀ, the result follows.
■
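The book works with Drazin inverses symbolically; for numerical experiments, one known representation is A^D = A^k (A^{2k+1})⁺ A^k for any k ≥ ind(A), and taking k = n always suffices. The sketch below is ours:

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^k (A^(2k+1))+ A^k with k = n >= ind(A)."""
    n = A.shape[0]
    Ak = np.linalg.matrix_power(A, n)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Ak

A = np.array([[2.0, 1.0], [0.0, 0.0]])      # index-1 example
AD = drazin(A)
print(np.allclose(A @ AD, AD @ A),          # A^D commutes with A
      np.allclose(AD @ A @ AD, AD),         # A^D A A^D = A^D
      np.allclose(A @ A @ AD, A))           # A^(k+1) A^D = A^k (ind A = 1)
```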
The properties of having a signed inverse or a signed M–P inverse are preserved under permutation equivalence. However, permutation equivalence does not preserve the property of having a signed Drazin inverse, which makes it technically difficult to characterize matrices with signed Drazin inverses.
In order to characterize nilpotent matrices with signed Drazin inverses, we introduce the concept of sign nilpotence.

Definition 5.2 [88]. Let A be a square real matrix. We say that A is sign nilpotent if each matrix in Q(A) is nilpotent.

Theorem 5.3. Let A be a real nilpotent matrix. Then A^D is signed if and only if A is sign nilpotent.

Proof. Since A^D = 0, A^D is signed if and only if Ã^D = 0 for any Ã ∈ Q(A), i.e., if and only if A is sign nilpotent. ■

Let A be a square real matrix. By [88], A is sign nilpotent if and only if A is permutationally similar to a strictly upper triangular matrix.

Theorem 5.4 [228]. Let A ∈ ℝ^{n×n}. If ρ(A) = n, then A^D is signed if and only if A is an S²NS matrix.

Proof. Clearly an S²NS matrix has a signed Drazin inverse. If A is not an SNS matrix, by lemma 4.1, there exist nonsingular matrices A₁, A₂ ∈ Q(A) and indices p, q such that
(A₁⁻¹)_{qp} (A₂⁻¹)_{qp} < 0,
a contradiction to A^D being signed. Hence A has a signed Drazin inverse if and only if A is an S²NS matrix. ■

In [227], Zhou et al. proposed and studied SGI matrices and S²GI matrices, which generalize the concepts of SNS matrix and S²NS matrix, respectively.

Definition 5.3 [227]. Let A be a square real matrix. If every matrix in Q(A) is group invertible, we say that A is a sign group invertible (SGI) matrix. An SGI matrix A is called a strong sign group invertible (S²GI) matrix if the group inverse of each Ã ∈ Q(A) satisfies
sgn(Ã^#) = sgn(A^#).

Obviously, an S²GI matrix is a special matrix with a signed Drazin inverse. In [49], the authors show that the matrices associated with a class of weighted broom graphs (see figure 5.1) are S²GI matrices.

FIG. 5.1 – The broom graph H_{2kl}.
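One direction of the sign-nilpotence characterization quoted after theorem 5.3 is immediate to see numerically: any matrix with a strictly upper triangular zero pattern satisfies Aⁿ = 0 regardless of entry magnitudes, so the pattern is sign nilpotent. A quick illustration (ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = np.triu(rng.standard_normal((n, n)), k=1)          # strictly upper triangular
print(np.allclose(np.linalg.matrix_power(A, n), 0))    # True: nilpotent
```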
5.2
Upper Triangular Partitioned Matrices with Signed Drazin Inverse
For any reducible real matrix M, there exists a permutation matrix P such that
M = P [A B; 0 C] Pᵀ,
where A and C are square matrices. Hence, characterizing general reducible matrices with signed Drazin inverses is equivalent to characterizing the block triangular matrix [A B; 0 C] with a signed Drazin inverse.
Bu, Wang, Zhou and Sun considered block upper triangular matrices with signed Drazin inverses [21], and their results are presented in this section.

Theorem 5.5 [228]. Let M = [A B; 0 C] be a square matrix with a signed Drazin inverse, where A is square. Then A^D and C^D are signed.

Proof. By theorem 2.1,
AD M ¼ 0 D
where X¼
j1 X
A
D iþ2
! BC
i
X ; CD
k 1 X i þ 2 Ai B C D I CC þ I AAD
i¼0
j ¼ indðC Þ; k ¼ indðAÞ:
D
i¼0
! AD BC D ;
Sign Pattern for Generalized Inverses
152
Note that the Drazin inverses of matrices in QðM Þ all have the above forms. Since ■ M D is signed, then AD and C D are signed. For a reducible matrix A, there exists a 2 A11 A12 6 0 A22 6 A ¼ P 6 .. .. 4 . . 0
0
permutation matrix P such that 3 A1m A2m 7 7 .. 7P > ; .. . 5 .
Amm
where Aii (i ¼ 1; 2; . . .; m) are the irreducible components of A. By theorem 5.5, we obtain the following corollary. Corollary 5.1 [228]. Let A be a reducible matrix. If AD is signed, then each irreducible component of A has a signed Drazin inverse. For nilpotent matrices being signed, the nilpotent index of sgn C is the maximum of nilpotent indices of all matrices in QðAÞ. Let M ½p; : and M ½:; r denote the p-th row and the r-th column of matrix M , respectively. Next we will present a necessary condition for a class of block upper triangular matrices with signed Drazin inverses. A B Theorem 5.6 [21]. Let M ¼ be a square matrix with a signed Drazin 0 C inverse, where A; C are square and sgn(C) is potentially nilpotent. Then there exists a permutation matrix P such that I 0 A B I 0 A F ¼ ; 0 P 0 C 0 P> 0 N where N ¼ PCP > is a strictly upper triangular matrix. Moreover, we have the following results; e 2 QðAÞ and F e 2 QðFÞ, we have (1) For any A n o n o e D Þ2 F e ½:; 1 ¼ sgn ðAD Þ2 F½:; 1 : sgn ð A e 2 QðAÞ, B e 2 QðC Þ, we have e 2 QðBÞ and C (2) For any A ! j1 i þ 2 X 2 D D i e e e > 0; e eC A A B B i¼1
where j is the nilpotent index of sgnðC Þ.
(3) For any $A_1, A_2 \in Q(A)$ and $B_1, B_2 \in Q(B)$, we have
$$\left((A_1^D)^2 B_1\right) \circ \left((A_2^D)^2 B_2\right) \geq 0.$$

(4) If the entries of $(A^D)^2 B$ are all nonzero, then
$$\operatorname{sgn}(M^D) = \operatorname{sgn}\begin{pmatrix} A^D & (A^D)^2 B \\ 0 & 0 \end{pmatrix}.$$

Proof. By theorem 5.5, $C^D$ is signed. Since $\operatorname{sgn}(C)$ is potentially nilpotent, by theorem 5.3, $C$ is sign nilpotent. Hence there exists a permutation matrix $P$ such that $PCP^{\top} = N$ is a strictly upper triangular matrix. Let
$$L = \begin{pmatrix} I & 0 \\ 0 & P \end{pmatrix} M \begin{pmatrix} I & 0 \\ 0 & P^{\top} \end{pmatrix} = \begin{pmatrix} A & F \\ 0 & N \end{pmatrix}.$$
Then $L^D$ is signed. For any $\tilde{L} = \begin{pmatrix} \tilde{A} & \tilde{F} \\ 0 & \tilde{N} \end{pmatrix} \in Q(L)$, by theorem 2.1 we have
$$\tilde{L}^D = \begin{pmatrix} \tilde{A}^D & X \\ 0 & 0 \end{pmatrix},$$
where $X = \sum_{i=0}^{j-1} (\tilde{A}^D)^{i+2} \tilde{F} \tilde{N}^{i} = (\tilde{A}^D)^2 \tilde{F} + \sum_{i=1}^{j-1} (\tilde{A}^D)^{i+2} \tilde{F} \tilde{N}^{i}$, with $j$ the nilpotent index of $\operatorname{sgn}(C)$. Since $\tilde{N}$ is strictly upper triangular, the first column of $X$ is
$$X[:,1] = (\tilde{A}^D)^2 \tilde{F}[:,1],$$
and we have $\operatorname{sgn}\{(\tilde{A}^D)^2 \tilde{F}[:,1]\} = \operatorname{sgn}\{(A^D)^2 F[:,1]\}$. Hence part (1) holds.

The $r$-th column of $X$ is
$$X[:,r] = (\tilde{A}^D)^2 \tilde{F}[:,r] + \sum_{i=1}^{j-1} (\tilde{A}^D)^{i+2} \tilde{F} \tilde{N}^{i}[:,r].$$
Since $\tilde{N}$ is strictly upper triangular, the column vector $\tilde{F}\tilde{N}^{i}[:,r]$ is a linear combination of $\tilde{F}[:,1], \tilde{F}[:,2], \ldots, \tilde{F}[:,r-1]$. If the Hadamard product of $(\tilde{A}^D)^2 \tilde{F}[:,r]$ and $\sum_{i=1}^{j-1} (\tilde{A}^D)^{i+2} \tilde{F}\tilde{N}^{i}[:,r]$ is not nonnegative, then there exists an integer $q$ such that
$$\operatorname{sgn}\Bigl\{(\tilde{A}^D)^2[q,:]\,\tilde{F}[:,r]\Bigr\} = -\operatorname{sgn}\Bigl\{\sum_{i=1}^{j-1} (\tilde{A}^D)^{i+2}[q,:]\,\tilde{F}\tilde{N}^{i}[:,r]\Bigr\} \neq 0.$$
We can then choose $\tilde{F}[:,1], \tilde{F}[:,2], \ldots, \tilde{F}[:,r]$ such that $(X)_{qr} > 0$, and we can also choose them such that $(X)_{qr} < 0$, a contradiction to $L^D$ being signed. Hence
$$\left((\tilde{A}^D)^2 \tilde{F}\right) \circ \left(\sum_{i=1}^{j-1} (\tilde{A}^D)^{i+2} \tilde{F}\tilde{N}^{i}\right) \geq 0.$$
Since $F = BP^{\top}$ and $N = PCP^{\top}$, we have
$$\left((\tilde{A}^D)^2 \tilde{B}\right) \circ \left(\sum_{i=1}^{j-1} (\tilde{A}^D)^{i+2} \tilde{B}\tilde{C}^{i}\right) \geq 0,$$
and part (2) holds.

For any $\tilde{M} = \begin{pmatrix} \tilde{A} & \tilde{B} \\ 0 & \tilde{C} \end{pmatrix} \in Q(M)$, by theorem 2.1 we get
$$\tilde{M}^D = \begin{pmatrix} \tilde{A}^D & X \\ 0 & 0 \end{pmatrix},$$
where $X = \sum_{i=0}^{j-1} (\tilde{A}^D)^{i+2} \tilde{B} \tilde{C}^{i} = (\tilde{A}^D)^2 \tilde{B} + \sum_{i=1}^{j-1} (\tilde{A}^D)^{i+2} \tilde{B}\tilde{C}^{i}$. Since $M^D$ is signed, parts (3) and (4) follow from part (2). $\blacksquare$
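The block formula for $M^D$ quoted from theorem 2.1, which drives the proofs above, can be verified numerically. A sketch assuming numpy; `drazin` is a hypothetical helper built on the classical representation $A^D = A^{k}(A^{2k+1})^{+}A^{k}$, valid for any $k \geq \operatorname{ind}(A)$:

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k with k = n >= ind(A)."""
    n = A.shape[0]
    Ak = np.linalg.matrix_power(A, n)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Ak

def block_drazin(A, B, C):
    """Drazin inverse of M = [[A, B], [0, C]] via the formula of theorem 2.1."""
    n, m = A.shape[0], C.shape[0]
    AD, CD = drazin(A), drazin(C)
    j = k = n + m  # any bound >= ind(C), ind(A) works; the extra terms vanish
    X = sum(np.linalg.matrix_power(AD, i + 2) @ B @ np.linalg.matrix_power(C, i)
            for i in range(j)) @ (np.eye(m) - C @ CD)
    X += (np.eye(n) - A @ AD) @ sum(
        np.linalg.matrix_power(A, i) @ B @ np.linalg.matrix_power(CD, i + 2)
        for i in range(k))
    X -= AD @ B @ CD
    return np.vstack([np.hstack([AD, X]),
                      np.hstack([np.zeros((m, n)), CD])])

A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0], [-1.0]])
C = np.array([[0.0]])                       # a nilpotent trailing block
M = np.block([[A, B], [np.zeros((1, 2)), C]])
assert np.allclose(block_drazin(A, B, C), drazin(M), atol=1e-8)
```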
Definition 5.4 [21]. For two sign pattern matrices $\mathcal{A} \in \mathbb{R}^{m \times n}$ and $\mathcal{B} \in \mathbb{R}^{n \times p}$, we say that the $(i,j)$-entry of $\mathcal{A}\mathcal{B}$ is an uncertain entry if there are two nonzero terms in the sum $\sum_{k=1}^{n} (\mathcal{A})_{ik}(\mathcal{B})_{kj}$ that have opposite signs.

In order to further characterize block triangular matrices with signed Drazin inverses, the following lemma is needed.

Lemma 5.1 [21]. Let $A$ be a nonsingular real matrix, and let $\mathcal{A} = \operatorname{sgn}(A^{-1})$. If the $(i,j)$-entry of $\mathcal{A}^2$ is an uncertain entry, then there exist nonsingular matrices $A_1, A_2 \in Q(A)$ such that
$$(A_1^{-2})_{ij} > 0, \quad (A_2^{-2})_{ij} < 0.$$

Proof. If the $(i,j)$-entry of $\mathcal{A}^2$ is an uncertain entry, then there exist integers $p, q$ ($p \neq q$) such that
$$(A^{-1})_{ip}(A^{-1})_{pj} > 0, \quad (A^{-1})_{iq}(A^{-1})_{qj} < 0.$$
Let $D_r(m)$ denote the diagonal matrix obtained from the identity matrix $I$ by replacing its $r$-th diagonal entry by a positive number $m$. Let
$$A_1 = D_p(m)A, \quad A_2 = D_q(m)A;$$
then $A_1, A_2 \in Q(A)$ and
$$A_1^{-1} = A^{-1}D_p\!\left(\tfrac{1}{m}\right), \quad A_2^{-1} = A^{-1}D_q\!\left(\tfrac{1}{m}\right).$$
By computation,
$$(A_1^{-2})_{ij} = \sum_{k} (A_1^{-1})_{ik}(A_1^{-1})_{kj} = (A_1^{-1})_{ip}(A_1^{-1})_{pj} + \sum_{k \neq p} (A_1^{-1})_{ik}(A_1^{-1})_{kj},$$
and similarly for $(A_2^{-2})_{ij}$ with $q$ in place of $p$.

If $j \neq p$ and $j \neq q$, then
$$(A_1^{-2})_{ij} = \frac{(A^{-1})_{ip}(A^{-1})_{pj}}{m} + \sum_{k \neq p}(A^{-1})_{ik}(A^{-1})_{kj}, \quad (A_2^{-2})_{ij} = \frac{(A^{-1})_{iq}(A^{-1})_{qj}}{m} + \sum_{k \neq q}(A^{-1})_{ik}(A^{-1})_{kj}.$$
We can choose $m$ small enough that
$$\operatorname{sgn}[(A_1^{-2})_{ij}] = \operatorname{sgn}\!\left(\frac{(A^{-1})_{ip}(A^{-1})_{pj}}{m}\right) > 0, \quad \operatorname{sgn}[(A_2^{-2})_{ij}] = \operatorname{sgn}\!\left(\frac{(A^{-1})_{iq}(A^{-1})_{qj}}{m}\right) < 0.$$

If $j = p \neq q$, then
$$(A_1^{-2})_{ij} = \frac{(A^{-1})_{ip}(A^{-1})_{pj}}{m^2} + \frac{1}{m}\sum_{k \neq p}(A^{-1})_{ik}(A^{-1})_{kj}, \quad (A_2^{-2})_{ij} = \frac{(A^{-1})_{iq}(A^{-1})_{qj}}{m} + \sum_{k \neq q}(A^{-1})_{ik}(A^{-1})_{kj}.$$
Let
$$a = (A^{-1})_{ip}(A^{-1})_{pj} > 0, \quad b = \sum_{k \neq p}(A^{-1})_{ik}(A^{-1})_{kj};$$
then
$$(A_1^{-2})_{ij} = \frac{a}{m^2} + \frac{b}{m}.$$
Consider the limit
$$\lim_{m \to 0} \frac{b/m}{a/m^2} = \lim_{m \to 0} \frac{bm}{a} = 0.$$
So there exists $m > 0$ such that $\frac{|b|}{m} < \frac{a}{m^2}$. Hence we can choose $m$ such that
$$\operatorname{sgn}[(A_1^{-2})_{ij}] = \operatorname{sgn}\!\left(\frac{(A^{-1})_{ip}(A^{-1})_{pj}}{m^2}\right) > 0, \quad \operatorname{sgn}[(A_2^{-2})_{ij}] = \operatorname{sgn}\!\left(\frac{(A^{-1})_{iq}(A^{-1})_{qj}}{m}\right) < 0.$$

If $j = q \neq p$, similar to the above arguments, we can choose $m$ such that $(A_1^{-2})_{ij} > 0$ and $(A_2^{-2})_{ij} < 0$.

The proof is completed. $\blacksquare$
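Definition 5.4 translates directly into code. A small sketch, assuming numpy; `uncertain_entries` is a hypothetical helper name:

```python
import numpy as np

def uncertain_entries(A, B):
    """Definition 5.4: the (i, j) entry of the sign-pattern product A*B is
    'uncertain' when the sum over k of A[i,k]*B[k,j] mixes a positive and a
    negative term."""
    A, B = np.sign(np.asarray(A)), np.sign(np.asarray(B))
    m, p = A.shape[0], B.shape[1]
    out = np.zeros((m, p), dtype=bool)
    for i in range(m):
        for j in range(p):
            terms = A[i, :] * B[:, j]
            out[i, j] = (terms > 0).any() and (terms < 0).any()
    return out

A = [[1, -1],
     [0,  1]]
B = [[1, 0],
     [1, 1]]
U = uncertain_entries(A, B)
# entry (0,0) sums a +1 and a -1 term, so it is uncertain; all others are not
assert U[0, 0] and not U[0, 1] and not U[1, 0] and not U[1, 1]
```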
The following theorem studies block upper triangular matrices with signed Drazin inverses that contain a sub-block of full term rank.

Theorem 5.7 [21]. Let $M = \begin{pmatrix} A & B \\ 0 & C \end{pmatrix}$ be a square matrix, where $A$ is square with full term rank, $B$ has at least one column without zero entries, and $\operatorname{sgn}(C)$ is potentially nilpotent. If $M^D$ is signed, then:

(1) $A$ is an $S^2NS$ matrix.

(2) Neither $\mathcal{A}^2$ nor $\mathcal{A}^2\mathcal{B}$ has uncertain entries, where $\mathcal{A} = \operatorname{sgn}(A^{-1})$ and $\mathcal{B} = \operatorname{sgn}(B)$.

(3) For any $\tilde{A} \in Q(A)$, $\tilde{B} \in Q(B)$ and $\tilde{C} \in Q(C)$, we have
$$\left(\tilde{A}^{-2}\tilde{B}\right) \circ \left(\sum_{i=1}^{j-1} (\tilde{A}^{-1})^{i+2} \tilde{B}\tilde{C}^{i}\right) \geq 0,$$
where $j$ is the nilpotent index of $\operatorname{sgn}(C)$.
Proof. It follows from theorem 5.5 that AD and C D are signed. Since A has full term rank, by theorem 5.4, A is an S 2 NS matrix. Since sgn(C ) is potentially nilpotent, by theorem 5.3, C is sign nilpotent. For any e B e A e M ¼ e 2 QðM Þ; 0 C by theorem 2.1,
e 1 X A ; 0 0 P P e 1 i þ 2 B e 2 B e 1 Þi þ 2 B ei ¼ A e i , where j is the eC e þ j1 ð A eC where X ¼ j1 i¼0 ð A Þ i¼1 nilpotent index of sgnðC Þ. By theorem 5.6, ! j1 X e 1 Þi þ 2 B e 2 B e i > 0: e eC ðA A eD¼ M
i¼1
Note that B has at least one column (the r-th column, say) without zero entries. Then column is e 2 B½:; e r þ X½:; r ¼ A
j1 X
e 1 Þi þ 2 B e i ½:; r: eC ðA
i¼1
Let A ¼ sgnðA1 Þ. If A2 has at least one uncertain entry, by lemma 5.1, there exist integers p; q and nonsingular matrices A1 ; A2 2 QðAÞ such that 2 ðA2 1 Þpq [ 0; ðA2 Þpq \0:
e and B e ½:; r has no zero entries, we can choose A e ½:; r such that ðXÞ [ 0. Since B pr e and B e ½:; r such that ðXÞ \0, a contradiction to M D being We can also choose A pr signed. Hence A2 has no uncertain entries. Let B ¼ sgn ðBÞ. Next we show that A2 Bs has no uncertain entries for any s, where Bs ¼ sgn ðB½:; sÞ, and s is any column of matrix B. If the m-th entry of A2 Bs is an uncertain entry, then there exist B1 ; B2 2 QðFÞ such that A2 ½m; :B1 ½:; s [ 0; A2 ½m; :B2 ½:; s\0: Since
e 2 e
A
B
! j1 X i þ 2 1 i e Þ B e > 0; eC ðA i¼1
Sign Pattern for Generalized Inverses
158
e s such that ðXÞ [ 0, and we can also choose B½:; e s such that we can choose B½:; ms 2 D ðXÞms \0, a contradiction to M being signed. Then A B has no uncertain entries. Hence part (2) and part (3) hold. ■ Lemma 5.2 [12]. Let A be a square real matrix nonzero diagonal entries. Its associated digraph DðAÞ is strongly connected if and only if A is fully indecomposable. A block upper triangular matrix containing a special full rank subblock with a signed Drazin inverse is considered in the following. A B Theorem 5.8 [21]. Let M ¼ be a square matrix with a signed Drazin 0 C inverse, where A is square with nonzero diagonal entries, B has at least one column without zero entries, and sgn ðC Þ is potentially nilpotent. Then there exist permutation matrices P; Q such that 2 3 > A1 0 F1 P 0 A B P 0 ¼ 4 0 A2 F2 5; 0 Q 0 C 0 Q> 0 0 N where N ¼ QCQ > is a strictly upper triangular matrix. Moreover, we have: (1) A1 and A2 are upper triangular with negative diagonal entries, and their associated signed digraphs SðA1 Þ and SðA2 Þ are S 2 NS signed digraphs. (2) Neither A21 F 1 nor A22 F 2 have uncertain entries, where Ai ¼ sgnðA1 i Þ; F i ¼ sgn ðFi Þ; i ¼ 1; 2: (3) For any (4) For any
e 2 QðM Þ, the eigenvalues of M e consist of all diagonal entries of M e. M e e i 2 QðFi Þ (i ¼ 1; 2) and N e 2 QðN Þ, we have A i 2 QðAi Þ, F ! j1 X 2 e 1 Þk þ 2 F ek N e k > 0; i ¼ 1; 2; A Fi ðA i
k¼1
k
where j is the nilpotent index of sgnðC Þ. Proof. By theorem 5.6, there exists a permutation matrix Q such that N ¼ QCQ > is a strictly upper triangular matrix. Since all diagonal entries of A are nonzero, A has full term rank. By theorem 5.7, A is an S 2 NS matrix, and neither A2 nor A2 B have uncertain entries, where A ¼ sgn ðA1 Þ; B ¼ sgn ðBÞ: If A is irreducible, then its associated digraph DðAÞ is strongly connected. By lemma 5.2, A is fully indecomposable. From theorem 3.14 we know that A1 is totally nonzero. If there exist integers p; q such that ðAÞpp [ 0; ðAÞqq \0;
Sign Pattern for Drazin Inverse
159
by theorem 3.15, we have ðA1 Þpp [ 0; ðA1 Þqq \0: Since A2 has no uncertain entries, we know that ðA1 Þpq ¼ ðA1 Þqp ¼ 0; a contradiction to A1 being totally nonzero. So all diagonal entries of A have the same sign. If A is an n n matrix ðn > 2Þ, since A is irreducible, there exist i; j (i\j) such that ðAÞij 6¼ 0. By theorem 3.15, ðAÞji ¼ sgnððAÞij Þ: Moreover, we have ðA1 Þij ¼ ð1Þi þ j
detðAðj; iÞÞ ; detðAÞ
where Aðj; iÞ denotes the submatrix of A obtained by deleting the j-th row and the i-th column of A. Since all diagonal entries of A have the same sign, by theorem 3.3, we get sgn ðdetðAÞÞ ¼ sgnððAÞn11 Þ: Since the term rank of Aðj; iÞ is n 1, by theorem 3.16, Aðj; iÞ is an SNS matrix. Theorem 3.3 implies that sgnðdetðAðj; iÞÞÞ ¼ ð1Þji1 sgn ððAÞn2 11 ðAÞij Þ: Hence we have sgnððA1 Þij Þ ¼ ð1Þi þ j
sgn ðdetðAðj; iÞÞÞ ¼ sgn ððAÞij Þ ¼ sgn ððA1 Þji Þ 6¼ 0: sgnðdetðAÞÞ
Recall that A1 is totally nonzero. Since ðA1 Þij and ðA1 Þji have opposite signs, ðA2 Þii is an uncertain entry, a contradiction. Hence A is a 1 1 S 2 NS matrix if A is irreducible. If A is reducible, then according to the above arguments, each irreducible component of A is a 1 1 S 2 NS matrix of. So A is permutation similar to an upper triangular S 2 NS matrix. If ðAÞii [ 0 and ðAÞjj \0, then ðA1 Þii [ 0 and ðA1 Þjj \0. Since A2 has no uncertain entries, we have ðA1 Þij ¼ ðA1 Þji ¼ 0:
A1 0 Hence A ¼ sgn ðA Þ is permutation similar to , where A1 and A2 0 A2 are upper triangular sign patterns with negative diagonal entries. Hence there exists a permutation matrix P such that 1
Sign Pattern for Generalized Inverses
160
A1 PAP ¼ 0 >
0 ; A2
where A1 and A2 are upper triangular S 2 NS matrices with negative diagonal entries, and sgn ðA1 i Þ ¼ Ai ; i ¼ 1; 2: By corollary 3.3, the associated signed digraphs SðA1 Þ and SðA2 Þ are S 2 NS signed digraphs. Hence part (1) holds. Let F1 > PBQ ¼ ; F2 where F1 has the same number of rows as A1 . Since A2 B has no uncertain entries, both A21 F 1 and A22 F 2 have no uncertain entries, where F i ¼ sgnðFi Þ (i ¼ 1; 2). Hence part (2) holds. Since there exists permutation matrices P; Q such that 2 3 A1 0 F1 A B P> 0 P 0 ¼ 4 0 A2 F2 5; 0 C 0 Q 0 Q> 0 0 N part (3) holds. Theorem 5.7 implies that part (4) holds. Next, we present the necessary and sufficient conditions for
A 0
B 0
■ with a
signed Drazin inverse under certain conditions. A B Theorem 5.9 [21]. Let M ¼ be a square real matrix, where A is a square 0 0 matrix with full term rank, and B has at least one column without zero entries. The Drazin inverse M D is signed if and only if A is an S 2 NS matrix, and neither A2 nor A2 B have uncertain entries, where A ¼ sgn ðA1 Þ; B ¼ sgn ðBÞ. If M D is signed, then M is an S 2 GI matrix, and A A2 B # sgn ðM Þ ¼ : 0 0 Proof. If M D is signed, by theorem 5.7, we have A is an S 2 NS matrix, and neither A2 nor A2 B have uncertain entries. By corollary 2.3, M is an S 2 GI matrix, and A A2 B sgn ðM # Þ ¼ : 0 0
Sign Pattern for Drazin Inverse
161
e 2 QðAÞ and B e 2 QðBÞ, we have If A is an S 2 NS matrix, for any A e 1 A e B e 2 B e # e A : ¼ A 0 0 0 0 If neither A2 nor A2 B have uncertain entries, then M D is signed.
5.3
■
Anti-Triangular Partitioned Matrices with Signed Drazin Inverse
Bu, Zhou, Sun and Wei considered block anti-triangular matrices with signed Drazin inverses [20, 227, 228], and we present these results in this section. Firstly, three auxiliary theorems are given. A B Theorem 5.10. Let M ¼ 2 Cðn þ mÞðn þ mÞ , where B 2 Cnm ; C 2 Cmn C 0 and rank ðBÞ ¼ rank ðC Þ ¼ n. Then M # exists if and only BC is nonsingular. If M # exists, then 0 ðBC Þ1 B # M ¼ : C ðBC Þ1 C ðBC Þ1 AðBC Þ1 B Proof. By theorem 2.31, we prove the existence of M # and its expression.
■
þ
Theorem 5.11 [228]. Let C be an m n real matrix such that C is signed and e 2 QðBÞ and qðC Þ ¼ n. Let B be an n m real matrix, and B 2 QðC > Þ. For any B e 2 QðC Þ, B e is nonsingular and eC C e Þ1 BÞ e ¼ sgnððBC Þ1 BÞ ¼ sgnðC þ Þ; eC sgnðð B e ðB e Þ1 Þ ¼ sgnðC ðBC Þ1 Þ ¼ sgnðB þ Þ: eC sgnð C e is zero or each Proof. By lemma 4.2, the determinant of each n n submatrix of C 2 e n n submatrix of C is an S NS matrix. By lemma 4.3, we obtain X eÞ ¼ e ½F; :Þ: eC e FÞ detð C detð B detð B½:; F2Sn ðmÞ
Since B 2 QðC > Þ, e ½F; :Þ> g: e FÞ ¼ sgnfð C sgnð B½:; By theorem 3.3, we have e ½F; :ÞÞ ¼ sgnðdetðC ½F; :ÞÞ: e FÞÞ ¼ sgnðdetð C sgnðdetð B½:;
Sign Pattern for Generalized Inverses
162
e Þ ¼ 0 if and only if the determinant of each n-order submatrix of C e eC Then detð B e e e e is zero, i.e., qðC Þ\n. Since qðC Þ ¼ n, we have detð B C Þ [ 0 and B C is nonsingular. Let e ðB e Þ1 ¼ ðypq Þ e Þ1 B eC e ¼ ðxpq Þ eC ; Y ¼C : X ¼ ðB nm
mn
Similar to the proof of lemma 4.4, we obtain X ð1Þp e ½Fnfqg; fpgÞ detð B½:; e FÞ; ð1ÞN ðq;FÞ detð C xpq ¼ eÞ eC detð B q2F2Sn ðmÞ
ypq ¼
ð1Þq eÞ eC detð B
X
e ½F; :Þ: e ð1ÞN ðp;FÞ detð B½fqg; FnfpgÞ detð C
p2F2Sn ðmÞ
Let SC ¼ fFjF 2 Sn ðmÞ; detðC ½F; :Þ 6¼ 0g. Since e ½F; :ÞÞ ¼ sgnðdetðC ½F; :ÞÞ; e FÞÞ ¼ sgnðdetð C sgnðdetð B½:; we have
X ð1Þp e ½Fnfqg; fpgÞ detð B½:; e FÞ; ð1ÞN ðq;FÞ detð C e e detð B C Þ q2F2SC X ð1Þq e ½F; :Þ: e ¼ ð1ÞN ðp;FÞ detð B½fqg; FnfpgÞ detð C e Þ p2F2S eC detð B C
xpq ¼ ypq
e ½F; : and B½:; e F are reversible, then For F 2 SC , C e ½Fnfqg; fpgÞ detð C ; e ½F; :Þ detð C e ð1Þq þ N ðp;FÞ detð B½fqg; FnfpgÞ : ¼ e FÞ detð B½:;
ð1Þ e ½F; :1 C p;N ðq;FÞ ¼ e F1 B½:; N ðp;FÞ;q Hence
p þ N ðq;FÞ
X 1 e e ½F; :1 e C p;N ðq;FÞ detð C ½F; :Þ detð B½:; FÞ; e Þ q2F2S eC detð B C X 1 e e e F1 ¼ B½:; N ðp;FÞ;q detð B½:; FÞ detð C ½F; :Þ: e Þ q2F2S eC detð B C
xpq ¼ ypq
e ½F; : are two S 2 NS matrices, we have e F and C For F 2 SC , B½:; e ½F; :ÞÞ 6¼ 0: e FÞÞ ¼ sgnðdetð C sgnðdetð B½:; Thus,
e ½F; :1 sgn C ½F; :1 ¼ sgn C p;N ðq;FÞ p;N ðq;FÞ ; e F1 sgn B½:; F1 ¼ sgn B½:; N ðp;FÞ;q N ðp;FÞ;q :
Sign Pattern for Drazin Inverse
Since
163
X
eÞ ¼ eC detð B
e ½F; :Þ [ 0; e FÞ detð C detð B½:;
F2SC
by theorem 4.6, we have e Þ1 B eC e ¼ sgn½ðBC Þ1 B ¼ sgnðC þ Þ; sgn½ð B 1 e ðB e Þ ¼ sgn½C ðBC Þ1 ¼ sgnðB þ Þ: eC sgn½ C The proof is completed.
■
C 0 be a real matrix with qðC Þ ¼ A B is signed. For any Ai 2 QðAÞ; Bi 2 QðBÞ and Ci 2
Theorem 5.12 [228]. Let M ¼ Nc ðC Þ; qðBÞ ¼ Nr ðBÞ and M þ QðC Þ ði ¼ 1; 2; 3; 4Þ, we have
sgnðB2> ðB1 B2> Þ1 A1 ðC2> C1 Þ1 C2> Þ ¼ sgnðB4> ðB3 B4> Þ1 A2 ðC4> C3 Þ1 C4> Þ; 1 > > 1 > > 1 sgnðC1 ðC2> C1 Þ1 A> 3 ðB1 B2 Þ B1 Þ ¼ sgnðC3 ðC4 C3 Þ A4 ðB3 B4 Þ B3 Þ: Proof. By theorem 4.8, B; B > ; C ; C > have signed M–P inverses. By theorem 5.11, the following matrices B1 B2> ; B3 B4> ; C2> C1 ; C4> C3 ; are nonsingular, and sgnðB2> ðB1 B2> Þ1 Þ ¼ sgnðB4> ðB3 B4> Þ1 Þ ¼ sgnðB þ Þ; sgnððC2> C1 Þ1 C2> Þ ¼ sgnððC4> C3 Þ1 C4> Þ ¼ sgnðC þ Þ; sgnðC1 ðC2> C1 Þ1 Þ ¼ sgnðC3 ðC4> C3 Þ1 Þ ¼ sgnðC > Þ þ ; sgnððB1 B2> Þ1 B1 Þ ¼ sgnððB3 B4> Þ1 B3 Þ ¼ sgnðB > Þ þ : Let E1 ¼ B2> ðB1 B2> Þ1 ; E2 ¼ B4> ðB3 B4> Þ1 ; F1 ¼ ðC2> C1 Þ1 C2> ; F2 ¼ ðC4> C3 Þ1 C4> : Then we have sgnðE1 Þ ¼ sgnðE2 Þ ¼ sgnðB þ Þ; sgnðF1 Þ ¼ sgnðF2 Þ ¼ sgnðC þ Þ: By theorems 4.8 and 4.9, in the multipartite signed digraph GðB þ ; A; C þ Þ, every pair of (directed) paths of length 3 with the same initial vertex and the same terminal vertex have the same sign. Note that X ðEk Ak Fk Þij ¼ ðEk Þip ðAk Þpq ðFk Þqj ðk ¼ 1; 2Þ: p;q
Sign Pattern for Generalized Inverses
164
Thus, the product of any two terms in the above summation is nonnegative. For all indices i and j, we have ! P ðE1 Þip ðA1 Þpq ðF1 Þqj sgnððE1 A1 F1 Þij Þ ¼ sgn p;q
¼ sgn
P p;q
! ðE2 Þip ðA2 Þpq ðF2 Þqj
¼ sgnððE2 A2 F2 Þij Þ: Hence we arrive at sgnðB2> ðB1 B2> Þ1 A1 ðC2> C1 Þ1 C2> Þ ¼ sgnðB4> ðB3 B4> Þ1 A2 ðC4> C3 Þ1 C4> Þ; 1 > > 1 > > 1 sgnðC1 ðC2> C1 Þ1 A> 3 ðB1 B2 Þ B1 Þ ¼ sgnðC3 ðC4 C3 Þ A4 ðB3 B4 Þ B3 Þ: ■
The proof is complete.
Now we present the necessary and sufficient condition for a class of anti-triangular matrices with signed Drazin inverses. A B Theorem 5.13 [228]. Let M ¼ 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn ; qðC Þ ¼ C 0 C 0 > D n and B 2 QðC Þ. Then M is signed if and only if has a signed M–P A B inverse. If M D is signed, then M is an S 2 GI matrix and 0 Cþ # sgnðM Þ ¼ sgn : B þ B þ AC þ Proof. If
C A
0 B
has a signed M–P inverse, by theorem 4.8, C þ is signed. For any e ¼ M
e A e C
e B 0
2 QðM Þ;
e is nonsingular. In this case, we have rank ð BÞ e Þ ¼ n. eC e ¼ rank ð C by theorem 5.11, B It follows from theorem 5.10 that e Þ1 B eC e 0 ðB # e M ¼ : e B e Þ1 B e ðB e Þ1 Að e ðB e Þ1 C eC eC e eC C e # Þ ¼ sgnðM # Þ, i.e., M is an S 2 GI matrix. By theorems 5.11 and 5.12, sgnð M D Next we assume that M is signed. Since qðC Þ ¼ n, there exists a permutation matrix P such that D C ¼P ; E
Sign Pattern for Drazin Inverse
165
where D is a square matrix of order n with qðDÞ ¼ n. Now we show that D is an SNS matrix. If D is not an SNS matrix, by lemma 4.1, there exist nonsingular matrices D1 ; D2 2 QðDÞ and indices p; q, with 1 p n and 1 q n such that ðD11 Þqp ðD21 Þqp \0: For e [ 0, let
Di Ci ðeÞ ¼ P ; i ¼ 1; 2; eE
then we have
A Ci ðeÞ> Mi ðeÞ ¼ Ci ðeÞ 0
2 QðM Þ; i ¼ 1; 2:
Since Ci ðeÞ is of full column rank, we have Ci ðeÞ þ ¼ ðCi ðeÞ> Ci ðeÞÞ1 Ci ðeÞ> ; ðCi ðeÞ> Þ þ ¼ Ci ðeÞðCi ðeÞ> Ci ðeÞÞ1 : By theorem 5.10, Mi ðeÞD ¼
0 ðCi ðeÞ> Þ þ
Ci ðeÞ þ : ðCi ðeÞ> Þ þ ACi ðeÞ þ
Considering the determinant of Ci ðeÞ> Ci ðeÞ, we have lim detðCi ðeÞ> Ci ðeÞÞ ¼ lim detðDi> Di þ e2 E > EÞ ¼ ½detðDi Þ2 6¼ 0: e!0
e!0
Hence Mi ðeÞD is a continuous function of e for sufficiently small e. Then 0 Ci ð0Þ þ D D lim Mi ðeÞ ¼ Mi ð0Þ ¼ : e!0 ðCi ð0Þ> Þ þ ðCi ð0Þ> Þ þ ACi ð0Þ þ Note that Ci ð0Þ þ ¼
Di 0
þ
P > ¼ Di1
0 P>:
Since ðD11 Þqp ðD21 Þqp \0, we know that sgn½M1 ðeÞD 6¼ sgn½M2 ðeÞD ; for sufficiently small e [0, a contradiction to M D is signed. So D is an SNS matrix. D e 2 QðAÞ and C e 2 QðC Þ, by In this case, C ¼ P is an L-matrix. For any A E theorem 5.10,
Sign Pattern for Generalized Inverses
166
e A e C
e> C 0
D ¼
0
e >Þ þ ðC
eþ C eC eþ : e >Þ þ A ð C
Since M D is signed, C þ is signed and eC e þ Þ ¼ sgnððC > Þ þ AC þ Þ: e >Þ þ A sgnðð C Hence
0 Cþ sgnðM Þ ¼ sgn : B þ B þ AC þ C 0 C Theorem 4.8 implies that has signed M–P inverse, i.e., A C> A signed M–P inverse. D
0 has a B ■
From theorems 5.13, 4.8, 4.9 and 4.10, we can deduce the following three results. A B Theorem 5.14. Let M ¼ 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn ; qðC Þ ¼ n and C 0 B 2 QðC > Þ. The Drazin inverse M D is signed if and only the following two conditions are satisfied: (1) C þ is signed. e 2 QðAÞ, we have (2) For any A e þ Þ ¼ sgnðB þ AC þ Þ: sgnðB þ AC
A B 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn ; qðC Þ ¼ n and C 0 B 2 QðC > Þ. The Drazin inverse M D is signed if and only the following two conditions are satisfied: Theorem 5.15. Let M ¼
(1) C þ is signed. (2) In the multipartite signed digraph GðB þ ; A; C þ Þ, the (directed) paths of length 3 have the same sign if the (directed) paths having the same initial vertex and the same terminal vertex.
A B Theorem 5.16. Let M ¼ 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn ; qðC Þ ¼ n; C 0 and B 2 QðC > Þ. The Drazin inverse M D is signed if and only C þ is signed, and 2 3 I B þ 0 0 6 0 I A 0 7 7; X ¼6 4 0 0 I C þ 5 0 0 0 I is an S 2 NS matrix.
Sign Pattern for Drazin Inverse
167
The relation between matrices with signed M–P inverse and sign majorization is given in theorem 4.13. Drazin inverses of block anti-triangular matrices have similar properties. A B Theorem 5.17 [228]. Let M ¼ 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn ; qðC Þ ¼ C 0 A1 B1 > n and B 2 QðC Þ. Let M1 ¼ 2 Rðm þ nÞðm þ nÞ , where C1 2 Rmn and C1 0 B1 2 QðC1> Þ, with M1 M and qðM1 Þ ¼ qðM Þ. If M D is signed, then M1D is signed and M1D M D . Proof. Let
C N ¼ A
0 C1 ; N1 ¼ B A1
0 ; B1
we have N1 N and qðN1 Þ ¼ qðM1 Þ ¼ qðM Þ ¼ qðN Þ: If M D is signed, by theorem 5.13, N þ is signed. By theorem 4.12, N1þ is signed and N1þ N þ . By lemma 4.6, we have qðN Þ ¼ Nc ðC Þ þ Nr ðBÞ ¼ Nc ðC1 Þ þ Nr ðB1 Þ: Since qðN1 Þ ¼ qðN Þ, by lemma 4.6, we get qðC1 Þ ¼ qðB1 Þ ¼ n. Theorem 5.13 implies that M1D is signed and 0 Cþ 0 C1þ D D sgnðM Þ ¼ sgn ; sgnðM1 Þ ¼ sgn : B þ B þ AC þ B1þ B1þ A1 C1þ Since N þ and N1þ are signed, by theorem 4.8, C þ and C1þ are signed. By lemma 4.2, C and C1 are L-matrices. So we have rankðC Þ ¼ rankðBÞ ¼ rankðC1 Þ ¼ rankðB1 Þ ¼ n: By lemma 4.7, N
þ
Cþ ¼ B þ AC þ
0 C1þ þ þ þ ; N1 ¼ B1 A1 C1þ B
0 : B1þ
Since N1þ N þ , we obtain M1D M D . ■ A E> Let M ¼ , where E 2 Rmn , n 6 m. When qðEÞ ¼ n, by theorem 5.13, E 0 E 0 D M is signed if and only if has signed M–P inverse. If qðEÞ\n, by A E> lemma 4.5, there exist permutation matrices P and Q such that
Sign Pattern for Generalized Inverses
168
B E¼P C
0 Q; D
where qðBÞ ¼ Nc ðBÞ; qðDÞ ¼ Nr ðDÞ. The characterization for the signed M D is reduced to that for the signed Drazin inverse of the block matrix 2 3 A1 A 2 B > C > 6 A3 A 4 0 D > 7 6 7: 4B 0 0 0 5 C D 0 0 In order to discuss the signed Drazin inverse of block matrices, an auxiliary theorem is given as follows. A E Theorem 5.18. Let M ¼ 2 Cnn , where A 2 Crr and ðE ÞX A ¼ 0. We E 0 have I 0 0 Eþ : MD ¼ þ X þ þ ðE Þ AðE Þ I ðE Þ ðE Þ AE þ Proof. By theorem 2.17, we have an expression of M D . ■ 2 3 A1 A2 B > C > 6 0 0 0 D> 7 7, where qðBÞ ¼ Nc ðBÞ and Theorem 5.19 [20]. Let M ¼ 6 4B 0 0 0 5 C D 0 0 B 0 has a signed M–P inverse. qðDÞ ¼ Nr ðDÞ. If M D is signed, then C D F Proof. Without loss of generality, assume that B ¼ , where F is a square G matrix and qðFÞ ¼ qðBÞ ¼ Nc ðBÞ. Now we show that F is an SNS matrix. If F is not an SNS matrix, by lemma 4.1, there exist non-singular matrices F1 ; F2 2 QðFÞ and integers p; q such that ðF11 Þqp ðF21 Þqp \0. For [ 0, let Fi Bi ðÞ ¼ ; i ¼ 1; 2: G e 2 QðDÞ such that rankð DÞ e ¼ Nr ðDÞ. Then Since qðDÞ ¼ Nr ðDÞ, there exists D we have A Ei ðÞ> Mi ðÞ ¼ 2 QðM Þ; Ei ðÞ 0 Bi ðÞ 0 A 1 A2 and Ei ðÞ ¼ where A ¼ e . Since rankðBi ðÞÞ ¼ Nc ðBÞ and 0 0 C D e ¼ Nr ðDÞ, by lemma 4.7, we obtain rankð DÞ
Sign Pattern for Drazin Inverse
Ei ðÞ
þ
169
Bi ðÞ þ ¼ e þ CBi ðÞ þ D
0 eþ : D
Since Bi ðÞ> has full row rank, we obtain Bi ðÞ> ðBi ðÞ> Þ þ ¼ I . By computation, we have ðEi ðÞ> ÞX A ¼ 0. By theorem 5.18, we get I 0 Ei ðÞ þ D Mi ðÞ ¼ > þ > þ > þ þ ðÞ Þ AðEi ðÞ> ÞX ðE ðEi ðÞ Þ ðEi ðÞ Þ AEi ðÞ i
0 : I
Since Bi ðÞ has full column rank, we have Bi ðÞ þ ¼ ðBi ðÞ> Bi ðÞÞ1 Bi ðÞ> : Considering the determinant of Bi ðÞ> Bi ðÞ, we have lim detðBi ðÞ> Bi ðÞÞ ¼ lim detðFi> Fi þ 2 G > GÞ ¼ ½detðFi Þ2 6¼ 0: !0
!0
Hence Bi ðÞ þ and ðMi ðÞÞD are continuous functions of . Then lim Mi ðÞD ¼ Mi ð0ÞD !0 " 0 ¼ ðEi ð0Þ> Þ þ Note that Ei ð0Þ
þ
Ei ð0Þ þ ðEi ð0Þ> Þ þ AEi ð0Þ þ
Bi ð0Þ þ ¼ e þ CBi ð0Þ þ D
#
I ðEi ð0Þ> Þ þ AðEi ð0Þ> ÞX
0 eþ ; D
Bi ð0Þ þ ¼ ðBi ð0Þ> Bi ð0ÞÞ1 Bi ð0Þ> ¼ Fi1
0 : I
0 :
Since ðF11 Þqp ðF21 Þqp \0, we know that sgn½M1 ðÞD 6¼ sgn½M2 ðÞD ; for sufficiently small [ 0, a contradiction to sgn½M1 ðÞD ¼ sgn½M2 ðÞD . Hence F is F an SNS matrix, and B ¼ is an L-matrix. Similarly, we can prove D > is an G e 0 B 0 B e L-matrix. Let E ¼ . For E ¼ e e 2 QðEÞ; by lemma 4.7, we have C D C D eþ B 0 þ e E ¼ eB eþC eþ : eþ D D
Sign Pattern for Generalized Inverses
170
f1 A f2 A > > e> þ e e e Since B has full row rank, we have B ð B Þ ¼ I . For A ¼ 2 0 0 e e> e ¼ 0. Let M e ¼ A E e > ÞX A . By theorem 5.18, QðAÞ; by computation, we get ð E e E 0 we obtain eþ I 0 0 E eD¼ M e E e > ÞX I : e > Þ þ Að eE eþ e > Þ þ ð E e >Þ þ A ðE ðE B 0 Since M D is signed, we know that E ¼ has a signed M–P inverse. ■ C D An application of the sign pattern of group inverses of block anti-triangular matrices is presented in the following. A B Theorem 5.20 [227]. Let M ¼ 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn has a C 0 Y1 signed M–P inverse, qðC Þ ¼ n and B 2 QðC > Þ. Let Y ¼ 2 Rðm þ nÞm , where Y2 e 2 QðM Þ and Y e 2 QðY Þ, if the linear Y2 2 Rmm is a diagonal matrix. For any M system X1 e e ðX1 2 Rnm ; X2 2 Rmm Þ; M ¼Y X2 is solvable, then the sign pattern of X1 is determined by the sign patterns of B, C and Y2 uniquely. Proof. For any e ¼ M
e A e C
e B 0
" e ¼ 2 QðM Þ; Y
f1 Y f2 Y
# 2 QðY Þ:
e is nonsingular. By theorem 5.10, M eC e # exists. By theorem 5.11, we know that B e X1 ¼ Y e is solvable, then its general solution is If M X2 X1 e þ ðI M e #M e ÞZ ; e #Y ¼M X2 where Z is an arbitrary ðm þ nÞ m matrix. Theorem 5.10 implies that e Þ1 B eC e 0 ðB # e M ¼ : e B e Þ1 B e ðB e Þ1 Að e ðB e Þ1 C eC eC e eC C
Sign Pattern for Drazin Inverse
171
By calculating, we have I 0 0 0 # e # e e e M M ¼ e ðB e Þ1 B e ðB e Þ1 B eC eC e ; IM M ¼ 0 IC e : 0 C Z1 Suppose that Z ¼ (where Z2 is an m m matrix), then Z2 # #" " f1 e Þ1 B e eC Y X1 0 ðB ¼ e B e Þ1 B e ðB e Þ1 Að e ðB e Þ1 C eC eC e eC X2 C f2 Y 0 0 Z1 þ : 1 e e eCÞ B e 0 I C ðB Z2 e Þ1 B e Þ1 BÞ eC eC eY f2 : By theorem 5.11, we know that sgnðð B e ¼ Hence X1 ¼ ð B 1 sgnððBC Þ BÞ: Since Y2 is a diagonal matrix, the sign pattern of X1 is determined by the sign patterns of B, C and Y2 uniquely. ■
5.4
Bipartite Matrices with Signed Drazin Inverse
In this section, some results of symmetric bipartite matrices with signed Drazin inverses are introduced. Firstly, the definition of sign symmetry is given. Definition 5.5 [228]. A square real matrix A is said to be sign symmetric, if sgnðAÞ ¼ sgnðA> Þ. 0 B The adjacency matrix of a bipartite directed graph has block form , C 0 0 B hence the block matrix is also called a bipartite matrix. C 0 0 B Lemma 5.3 [228]. Let the real matrix M ¼ , where the zero blocks are C 0 square and M is sign symmetric. If M D is signed, then C þ is signed, and 0 Cþ sgnðM D Þ ¼ sgn : ðC > Þ þ 0 Proof. For any e ¼ M
0 e C
e> C 0
2 QðM Þ:
Sign Pattern for Generalized Inverses
172
By theorem 2.8, we have 0 eD¼ M e ðC e >C e ÞD C
e >C e ÞD C e> ðC 0
If M D is signed, then C þ is signed and 0 D sgnðM Þ ¼ sgn ðC > Þ þ
¼
0
e >Þ þ ðC
Cþ : 0 ■
The proof is complete.
eþ C : 0
0 B 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn ; C 0 qðC Þ ¼ n, and M is sign symmetric. The Drazin inverse M D is signed if and only if C þ is signed. Theorem 5.21 [228].
Let
M¼
Proof. By theorem 5.13, M D is signed if and only if C þ is signed. ■ 0 C Corollary 5.2 [228]. Let M ¼ 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn ; B 0 qðC Þ ¼ n, and M is sign symmetric. The Drazin inverse M D is signed if and only if B þ is signed. Proof. Clearly we have
M¼
0 In
Im 0
0 C
B 0
0 Im
In : 0
Then D B 0 In : 0 Im 0 0 B D Hence M is signed if and only if has a signed Drazin inverse. Since C 0 B 2 QðC > Þ, by theorem 5.21, M D is signed if and only if B þ is signed. 0 B Theorem 5.22 [228]. Let M ¼ 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn ; C 0 qðC Þ\n 6 m, and M is sign symmetric. The Drazin inverse M D is signed if and only C þ is signed.
0 M ¼ In D
Im 0
0 C
Proof. Since qðC Þ\n 6 m, by lemma 4.5, there exist permutation matrices P and Q such that C1 0 Q; C ¼P C2 C3
Sign Pattern for Drazin Inverse
173
where qðC1 Þ ¼ Nc ðC1 Þ; qðC3 Þ ¼ Nr ðC3 Þ: Since M is sign symmetric, then B B2 > B ¼ Q> 1 P ; 0 B3 where Bi 2 QðCi> Þ; i ¼ 1; 2; 3. If M D is signed, by lemma 5.3, C þ is signed, thus by theorem 4.8, C1þ ; C3þ ; B1þ and B3þ are signed. For any " # e1 0 C e ¼P C Q 2 QðC Þ; e3 e2 C C " # e e2 B > B1 e P > 2 QðBÞ; B¼Q e3 0 B by theorem 2.8, we get
0 e C
e B 0
D
0 ¼ e e e ð C BÞD C
e BÞ e C e D Bð : 0
It is obvious that
D e1 B e1 B e1 B e2 C e1 e2 C > e BÞ e C e D ¼ Q> B Bð e3 B e2 B e2 B e3 C e1 C e2 þ C e3 P ; 0 B D e1 B e1 0 e1 B e1 e2 C C e BÞ e ¼P C e DC ðC e3 B e2 B e2 B e 3 Q: e2 C e1 C e2 þ C e3 C C
e3 B e 1 and C e1C e 3 are nonsingular. Then we have Theorem 5.11 implies that B e1 B e1 B e 1ðB e 1 Þ1 B e1 B e1 B e1 B e1 B e1 B e1 B e1C e 1 Þp C e2 ¼ C e2 C e 1ðC e 1 ÞD C e2 ¼ C e2 C e1C e 2 ¼ 0; ðC p D 1 e e e e e e e e e e e e e e e e e e e e C 2 B 1 ð C 1 B 1 Þ ¼ C 2 B 1 C 2 B 1 C 1 B 1 ð C 1 B 1 Þ ¼ C 2 B 1 C 2 B 1 C 1 ð B 1 C 1 Þ B 1 ¼ 0; e1 B e 1ðB e 1 Þ1 B e2 B e3 B e2 B e1 B e2 B e3 B e2 B e3 B e2 þ C e3 C e 1ðC e 1 ÞD C e2 ¼ C e2 þ C e3 C e1C e2 ¼ C e 3: e1C C
By theorem 2.3, we have e1 B e1 C e2 B e1 C
e1 B e2 C e3 B e2 B e2 þ C e3 C
D
¼
N1 N3
N2 ; N4
where e1 B e1 B e3 B e1 B e1 B e2 B e 1 ÞD þ ð C e 1 ÞD C e 2ðC e 3 Þ1 C e 1ðC e 1 ÞD ; N1 ¼ ð C D 1 1 e e e e1 B e3 B e e2 B e 1Þ C1 B e 2ðC3 B e 3 Þ ; N3 ¼ ð C 3 B e 3Þ C e 1ðC e 1 Þ D ; N4 ¼ ð C e 3 Þ1 : N2 ¼ ð C 1 B Hence
e e 2 N4 > e e 1 N2 þ B B D > B 1 N 1 þ B 2 N3 e e e Bð C BÞ ¼ Q P ; e 3 N3 e 3 N4 B B e 1 þ N2 C e 2 N2 C e3 e BÞ e ¼ P N1 C e DC ðC e 1 þ N4 C e 2 N4 C e 3 Q: N3 C
Sign Pattern for Generalized Inverses
174
Notice that e1 B e1 B e 1ðB e 1 Þ1 B e1 ¼ C e 1 Þ1 : e 1ðC e1C e1C e 1 ÞD ¼ ð B e 1; ðC e 1 ÞD C B By computation, we have e 1 Þ1 B e1 e1C ðB 0 e BÞ e C e D ¼ Q> Bð P>; e3 B e3 B e 2ðB e 1 Þ1 B e1C e 3 Þ1 C e1 B e 3ðC e 3 Þ1 e 3ðC B e 1ðB e3 B e 1 Þ1 C e 1 Þ1 B e3 e 1ðB e1C e1C e 2ðC e 3 Þ1 C C De e e ð C BÞ C ¼ P Q: e3 B e3 e 3 Þ1 C 0 ðC By theorem 5.11, we have e 1ðB e 1 Þ1 B e 1 Þ1 ¼ sgn½C1 ðB1 C1 Þ1 ; e1C e1C e 1 ¼ sgn½ðB1 C1 Þ1 B1 ; sgn½ C sgn½ð B e3 B e3 B e 3 ¼ sgn½ðC3 B3 Þ1 C3 ; sgn½ B e 3ðC e 3 Þ1 C e 3 Þ1 ¼ sgn½B3 ðC3 B3 Þ1 : sgn½ð C Theorem 5.12 indicates that e3 B e 2ðB e 1 Þ1 B e 3ðC e1C e 3 Þ1 C e 1 ¼ sgn½B3 ðC3 B3 Þ1 C2 ðB1 C1 Þ1 B1 ; sgn½ B 1 1 e3 B e 1ðB e1Þ B e 3 ¼ sgn½C1 ðB1 C1 Þ1 B2 ðC3 B3 Þ1 C3 : e1C e 2ðC e 3Þ C sgn½ C Hence M D is signed.
■
0 C 2 Rðm þ nÞðm þ nÞ , where C 2 Rmn ; B 0 qðC Þ\n 6 m, and M is sign symmetric. The Drazin inverse M D is signed if and only if B þ is signed.
Corollary
5.3 [228].
Let
Proof. It is clear that
M¼
0 In
Im 0
0 M ¼ In
Im 0
M¼
0 C
B 0
0 C
B 0
0 Im
In : 0
Then D
D
0 Im
In : 0
0 B has signed Drazin inverse. By C 0 theorem 5.22, that M D is signed if and only if B þ is signed. ■ 0 B Theorem 5.23 [228]. Let real matrix M ¼ , where the zero blocks are C 0 square and M is sign symmetric. Hence M D is signed if and only if C þ is signed. If M D is signed, then Hence M D is signed if and only if
Sign Pattern for Drazin Inverse
175
D
sgnðM Þ ¼ sgn
0
ðC > Þ þ
Cþ : 0
Proof. By theorems 5.21 and 5.22 and corollaries 5.2 and 5.3, we deduce that M D is signed if and only if C þ is signed. If M D is signed, by lemma 5.3, we have 0 Cþ D sgnðM Þ ¼ sgn : ðC > Þ þ 0 The proof is complete.
■
We can obtain the following corollary from theorem 5.23. 0 B Corollary 5.4 [228]. Let real matrix M ¼ , where the zero blocks are square C 0 and M is sign symmetric. If M D is signed, then M D is nonnegative (resp. nonpositive) if and only if C þ is nonnegative (or nonpositive). Definition 5.6 [170]. A real matrix A is said to have a doubly signed M–P inverse, if both A and A þ have a signed M–P inverses. Definition 5.7 [228]. A real matrix A is said to have a doubly signed Drazin inverse, if both A and AD have a signed Drazin inverses. 0 C> Theorem 5.24 [228]. Let real matrix M ¼ , where the zero blocks are C 0 square and M is sign symmetric. Matrix M has a doubly signed Drazin inverse if and only if C has a doubly signed M–P inverse. Proof. If M has a doubly signed Drazin inverse, by theorem 5.23, C þ is signed and 0 Cþ sgnðM D Þ ¼ sgn : ðC > Þ þ 0 Since M D has a signed Drazin inverse, by theorem 5.23, C þ has a signed M–P inverse. Hence C has a doubly signed M–P inverse. If C has a doubly signed M–P inverse, by theorem 5.23, M D is signed and 0 Cþ D sgnðM Þ ¼ sgn : ðC > Þ þ 0 Since C þ has a signed M–P inverse, by theorem 5.23, M D has a signed Drazin inverse. Hence M has a doubly signed Drazin inverse. ■ Theorem 5.25 [228]. Let M be a sign symmetric bipartite matrix and N be a sign symmetric matrix such that N M , and qðN Þ ¼ qðM Þ. If M D is signed, then N D is signed and N D M D .
Proof. $M$ can be written as
$$M=\begin{pmatrix}0&B\\C&0\end{pmatrix},$$
where $B\in Q(C^\top)$. Since $N$ is sign symmetric and $N\preceq M$, we have
$$N=\begin{pmatrix}0&B_1\\C_1&0\end{pmatrix},$$
where $B_1\in Q(C_1^\top)$, $B_1\preceq B$ and $C_1\preceq C$. By $B_1\preceq B$ and $C_1\preceq C$, we obtain
$$q(B_1)\leq q(B),\qquad q(C_1)\leq q(C).$$
Hence
$$q(N)=q(B_1)+q(C_1)\leq q(B)+q(C)=q(M).$$
Since $q(N)=q(M)$, we get $q(B_1)=q(B)$ and $q(C_1)=q(C)$. If $M^D$ is signed, then by theorem 5.23, $C^+$ is signed. By theorem 4.13, $C_1^+$ is signed and $C_1^+\preceq C^+$. By theorem 5.23, $N^D$ is signed and
$$\operatorname{sgn}(N^D)=\operatorname{sgn}\begin{pmatrix}0&C_1^+\\(C_1^\top)^+&0\end{pmatrix},\qquad
\operatorname{sgn}(M^D)=\operatorname{sgn}\begin{pmatrix}0&C^+\\(C^\top)^+&0\end{pmatrix}.$$
Since $C_1^+\preceq C^+$, we have $N^D\preceq M^D$. $\blacksquare$
5.5 Sign Pattern of Group Inverse

Let $M=\begin{pmatrix}A&B\\C&0\end{pmatrix}$ be a complex square matrix, where $A$ is square, and write $B^\pi=I-BB^+$. Assume that $BCB^\pi=0$, $\operatorname{rank}(BC)=\operatorname{rank}(B)$ and the group inverse of $\begin{pmatrix}B^\pi AB^\pi&0\\CB^\pi&0\end{pmatrix}$ exists. We show that the group inverse of $M$ exists if and only if $\operatorname{rank}\big(BC+A(B^\pi AB^\pi)^\pi B^\pi A\big)=\operatorname{rank}(B)$. In this case, we give a representation of $M^\#$ in terms of the group inverses and Moore–Penrose inverses of its subblocks. Using this representation, we obtain a necessary and sufficient condition for the real block matrix $\begin{pmatrix}A&D_1&Y_1\\D_2&0&0\\Y_2&0&0\end{pmatrix}$ to be an $S^2GI$-matrix, where $A$ is square, $D_1$ and $D_2$ are invertible, and $\widetilde Y_2\widetilde Y_1=0$ for all $\widetilde Y_i\in Q(Y_i)$, $i=1,2$.
To arrive at the above results, some lemmas on the group inverse of a $2\times2$ block matrix and its sign pattern are presented. First, we define sign orthogonality.

Definition 5.8. If $\widetilde A\widetilde B=0$ for all matrices $\widetilde A\in Q(A)$ and $\widetilde B\in Q(B)$ (with $N_c(A)=N_r(B)$), then the matrices $A$ and $B$ are said to be sign orthogonal.

Lemma 5.4 [179]. Let $A$ be a real $n\times n$ matrix such that $q(A)=n$ and $A$ is not an SNS matrix. Then there exist invertible matrices $A_1$ and $A_2$ in $Q(A)$ and integers $p,q$ with $1\leq p,q\leq n$ such that $(A_1^{-1})_{q,p}(A_2^{-1})_{q,p}<0$.

In [179, theorem 4.2], Shao and Shan gave a result on $\operatorname{sgn}(\widetilde A\widetilde B^+\widetilde C^+)$ for all matrices $\widetilde A\in Q(A)$, $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$. Using methods similar to those of [179, theorem 4.2], we establish a result on $\operatorname{sgn}(\widetilde A\widetilde B\widetilde C)$ for $\widetilde A\in Q(A)$, $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$.

Lemma 5.5. Let $A$, $B$ and $C$ be real matrices with $N_c(A)=N_r(B)$ and $N_c(B)=N_r(C)$. If $\operatorname{sgn}(A\widetilde BC)=\operatorname{sgn}(ABC)$ for all $\widetilde B\in Q(B)$, then $\operatorname{sgn}(\widetilde A\widetilde B\widetilde C)=\operatorname{sgn}(ABC)$ for all $\widetilde A\in Q(A)$, $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$.

Proof. Let $D=ABC$. Then
$$(D)_{i,j}=\sum_{k_2=1}^{N_c(B)}\sum_{k_1=1}^{N_r(B)}(A)_{i,k_1}(B)_{k_1,k_2}(C)_{k_2,j}.\tag{5.1}$$
If there exist $\widetilde A\in Q(A)$, $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$ such that $\operatorname{sgn}(\widetilde A\widetilde B\widetilde C)\neq\operatorname{sgn}(ABC)$, then there exist integers $i_1$, $j_1$, $p_1$, $p_2$, $q_1$, $q_2$ with $(p_1,q_1)\neq(p_2,q_2)$ such that
$$\operatorname{sgn}\big((A)_{i_1,p_1}(B)_{p_1,q_1}(C)_{q_1,j_1}\big)=+1,\qquad
\operatorname{sgn}\big((A)_{i_1,p_2}(B)_{p_2,q_2}(C)_{q_2,j_1}\big)=-1.$$
For $k=1,2$, let
$$(B_k)_{p,q}=\begin{cases}\dfrac{1}{\epsilon}(B)_{p,q},&p=p_k,\ q=q_k,\\[2pt](B)_{p,q},&\text{otherwise},\end{cases}$$
where $\epsilon>0$, $p=1,\dots,N_r(B)$ and $q=1,\dots,N_c(B)$. Clearly, $B_1,B_2\in Q(B)$ and
$$\operatorname{sgn}(AB_1C)=\operatorname{sgn}(AB_2C).\tag{5.2}$$
Let $D_1=AB_1C$ and $D_2=AB_2C$. By (5.1),
$$(D_1)_{i_1,j_1}=\sum_{k_2=1}^{N_c(B)}\sum_{k_1=1}^{N_r(B)}(A)_{i_1,k_1}(B)_{k_1,k_2}(C)_{k_2,j_1}+\Big(\frac{1}{\epsilon}-1\Big)(A)_{i_1,p_1}(B)_{p_1,q_1}(C)_{q_1,j_1},$$
$$(D_2)_{i_1,j_1}=\sum_{k_2=1}^{N_c(B)}\sum_{k_1=1}^{N_r(B)}(A)_{i_1,k_1}(B)_{k_1,k_2}(C)_{k_2,j_1}+\Big(\frac{1}{\epsilon}-1\Big)(A)_{i_1,p_2}(B)_{p_2,q_2}(C)_{q_2,j_1}.$$
When $\epsilon$ is sufficiently small,
$$\operatorname{sgn}\big((D_1)_{i_1,j_1}\big)=\operatorname{sgn}\big((A)_{i_1,p_1}(B)_{p_1,q_1}(C)_{q_1,j_1}\big)=+1,$$
$$\operatorname{sgn}\big((D_2)_{i_1,j_1}\big)=\operatorname{sgn}\big((A)_{i_1,p_2}(B)_{p_2,q_2}(C)_{q_2,j_1}\big)=-1.$$
Thus $\operatorname{sgn}((D_1)_{i_1,j_1})\neq\operatorname{sgn}((D_2)_{i_1,j_1})$, which contradicts (5.2). So $\operatorname{sgn}(\widetilde A\widetilde B\widetilde C)=\operatorname{sgn}(ABC)$ for $\widetilde A\in Q(A)$, $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$. $\blacksquare$

Theorem 5.26. Let $M=\begin{pmatrix}A&B\\C&0\end{pmatrix}\in\mathbb C^{(n+m)\times(n+m)}$ with $A\in\mathbb C^{n\times n}$, and suppose that the group inverse of $\begin{pmatrix}B^\pi AB^\pi&0\\CB^\pi&0\end{pmatrix}$ exists. Let $\mathcal C=BC+A(B^\pi AB^\pi)^\pi B^\pi A$. If $BCB^\pi=0$ and $\operatorname{rank}(BC)=\operatorname{rank}(B)$, then:

(i) $M^\#$ exists if and only if $\operatorname{rank}(\mathcal C)=\operatorname{rank}(B)$;
(ii) if $M^\#$ exists, then $M^\#=\begin{pmatrix}X&Y\\Z&W\end{pmatrix}$, where
$$X=JAGAH-JAH-GAH-JAG+G+H+J,$$
$$Y=\mathcal C^+B+JAGA\mathcal C^+B-JA\mathcal C^+B-GA\mathcal C^+B,$$
$$Z=(C-CGA)\mathcal C^+(I+AGAH-AH-AG)+CG^2(I-AH),$$
$$W=(C-CGA)\mathcal C^+A(GA\mathcal C^+B-\mathcal C^+B)-CG^2A\mathcal C^+B,$$
$$J=(B^\pi AB^\pi)^\pi B^\pi A\mathcal C^+,\qquad H=\mathcal C^+A(B^\pi AB^\pi)^\pi B^\pi,\qquad G=(B^\pi AB^\pi)^\#.$$

Proof. By the singular value decomposition, there exist unitary matrices $U\in\mathbb C^{n\times n}$ and $V\in\mathbb C^{m\times m}$ such that
$$UBV^*=\begin{pmatrix}D&0\\0&0\end{pmatrix},\tag{5.3}$$
where $D$ is an $r\times r$ invertible diagonal matrix and $r=\operatorname{rank}(B)$. Let
$$UAU^*=\begin{pmatrix}A_1&A_2\\A_3&A_4\end{pmatrix},\qquad VCU^*=\begin{pmatrix}C_1&C_2\\C_3&C_4\end{pmatrix},\tag{5.4}$$
where $A_1,C_1\in\mathbb C^{r\times r}$. Then $M=\mathcal U^*\widetilde M\mathcal U$, where
$$\mathcal U=\begin{pmatrix}I&0&0&0\\0&0&I&0\\0&I&0&0\\0&0&0&I\end{pmatrix}\begin{pmatrix}U&0\\0&V\end{pmatrix}$$
is a unitary matrix, and
$$\widetilde M=\begin{pmatrix}A_1&D&A_2&0\\C_1&0&C_2&0\\A_3&0&A_4&0\\C_3&0&C_4&0\end{pmatrix}.\tag{5.5}$$
If $M^\#$ exists, then
$$M^\#=\mathcal U^*\widetilde M^\#\mathcal U.\tag{5.6}$$
Since $BCB^\pi=0$ and $\operatorname{rank}(BC)=\operatorname{rank}(B)$, (5.3) and (5.4) imply that $C_2=0$ and $C_1$ is invertible. Partition $\widetilde M$ in (5.5) further:
$$\widetilde M=\begin{pmatrix}A_1&D&A_2&0\\C_1&0&0&0\\A_3&0&A_4&0\\C_3&0&C_4&0\end{pmatrix}=:\begin{pmatrix}N_1&N_2\\N_3&N_4\end{pmatrix},$$
where
$$N_1=\begin{pmatrix}A_1&D\\C_1&0\end{pmatrix},\quad N_2=\begin{pmatrix}A_2&0\\0&0\end{pmatrix},\quad N_3=\begin{pmatrix}A_3&0\\C_3&0\end{pmatrix},\quad N_4=\begin{pmatrix}A_4&0\\C_4&0\end{pmatrix}.$$
It is easy to see that $N_1^{-1}=\begin{pmatrix}0&C_1^{-1}\\D^{-1}&-D^{-1}A_1C_1^{-1}\end{pmatrix}$. A calculation shows that the Schur complement is $\widetilde M/N_1:=N_4-N_3N_1^{-1}N_2=N_4$. It follows from (5.3) and (5.4) that
$$\begin{pmatrix}0&0\\0&N_4\end{pmatrix}=\mathcal U\begin{pmatrix}B^\pi AB^\pi&0\\CB^\pi&0\end{pmatrix}\mathcal U^*.$$
Note that the group inverse of $\begin{pmatrix}B^\pi AB^\pi&0\\CB^\pi&0\end{pmatrix}$ exists, so does that of $\widetilde M/N_1$. Since $N_1$ is invertible, by theorem 2.24, $\widetilde M^\#$ exists if and only if $R=(N_1)^2+N_2(\widetilde M/N_1)^\pi N_3$ is invertible.

According to corollary 2.4, we have $(\widetilde M/N_1)^\#=\begin{pmatrix}A_4^\#&0\\C_4(A_4^\#)^2&0\end{pmatrix}$. Calculations yield
$$R=(N_1)^2+N_2\big(I-(\widetilde M/N_1)(\widetilde M/N_1)^\#\big)N_3
=\begin{pmatrix}A_1^2+A_2A_3-A_2A_4A_4^\#A_3+DC_1&A_1D\\C_1A_1&C_1D\end{pmatrix}.$$
Since $C_1$ is invertible,
$$\operatorname{rank}(R)=\operatorname{rank}\begin{pmatrix}A_1^2+A_2A_3-A_2A_4A_4^\#A_3+DC_1&A_1D\\C_1A_1&C_1D\end{pmatrix}
=\operatorname{rank}\begin{pmatrix}A_2A_3-A_2A_4A_4^\#A_3+DC_1&0\\C_1A_1&C_1D\end{pmatrix}.$$
Hence $R$ is invertible if and only if $A_2A_3-A_2A_4A_4^\#A_3+DC_1$ is invertible, that is,
$$\operatorname{rank}(A_2A_3-A_2A_4A_4^\#A_3+DC_1)=r=\operatorname{rank}(B).$$
Furthermore, we have
$$\mathcal C=BC+A(B^\pi AB^\pi)^\pi B^\pi A=U^*\begin{pmatrix}A_2A_3-A_2A_4A_4^\#A_3+DC_1&0\\0&0\end{pmatrix}U.$$
Thus $R$ is invertible if and only if $\operatorname{rank}(\mathcal C)=\operatorname{rank}(B)$. Applying theorem 2.24, we get
$$\widetilde M^\#=\begin{pmatrix}\widetilde X&\widetilde Y\\\widetilde Z&\widetilde W\end{pmatrix},$$
where
$$\widetilde X=N_1R^{-1}KR^{-1}N_1,$$
$$\widetilde Y=N_1R^{-1}KR^{-1}N_2(\widetilde M/N_1)^\pi-N_1R^{-1}N_2(\widetilde M/N_1)^\#,$$
$$\widetilde Z=(\widetilde M/N_1)^\pi N_3R^{-1}KR^{-1}N_1-(\widetilde M/N_1)^\#N_3R^{-1}N_1,$$
$$\widetilde W=(\widetilde M/N_1)^\pi N_3R^{-1}KR^{-1}N_2(\widetilde M/N_1)^\pi-(\widetilde M/N_1)^\#N_3R^{-1}N_2(\widetilde M/N_1)^\pi$$
$$\qquad\quad-(\widetilde M/N_1)^\pi N_3R^{-1}N_2(\widetilde M/N_1)^\#+(\widetilde M/N_1)^\#,$$
$$K=N_1+N_2(\widetilde M/N_1)^\#N_3.$$
By (5.6),
$$M^\#=\mathcal U^*\widetilde M^\#\mathcal U=\mathcal U^*\begin{pmatrix}\widetilde X&\widetilde Y\\\widetilde Z&\widetilde W\end{pmatrix}\mathcal U.\tag{5.7}$$
From (5.3), we obtain
$$BB^+=U^*\begin{pmatrix}I&0\\0&0\end{pmatrix}U,\qquad B^\pi=U^*\begin{pmatrix}0&0\\0&I\end{pmatrix}U,$$
$$B^+B=V^*\begin{pmatrix}I&0\\0&0\end{pmatrix}V,\qquad I-B^+B=V^*\begin{pmatrix}0&0\\0&I\end{pmatrix}V.$$
By the above expressions and (5.4), we deduce
$$\mathcal U^*\begin{pmatrix}N_1&0\\0&0\end{pmatrix}\mathcal U=\begin{pmatrix}BB^+ABB^+&B\\B^+BC&0\end{pmatrix},\tag{5.8}$$
$$\mathcal U^*\begin{pmatrix}0&N_2\\0&0\end{pmatrix}\mathcal U=\begin{pmatrix}BB^+AB^\pi&0\\0&0\end{pmatrix},\tag{5.9}$$
$$\mathcal U^*\begin{pmatrix}0&0\\N_3&0\end{pmatrix}\mathcal U=\begin{pmatrix}B^\pi ABB^+&0\\(I-B^+B)CBB^+&0\end{pmatrix}.\tag{5.10}$$
Similarly,
$$\mathcal U^*\begin{pmatrix}0&0\\0&\widetilde M/N_1\end{pmatrix}\mathcal U=\begin{pmatrix}B^\pi AB^\pi&0\\CB^\pi&0\end{pmatrix}.\tag{5.11}$$
Note that $BCB^\pi=0$. Since the group inverse of $\widetilde M/N_1$ exists, corollary 2.4 and (5.11) imply that
$$\mathcal U^*\begin{pmatrix}0&0\\0&(\widetilde M/N_1)^\#\end{pmatrix}\mathcal U=\begin{pmatrix}G&0\\CG^2&0\end{pmatrix},\tag{5.12}$$
where $G=(B^\pi AB^\pi)^\#$. Similarly,
$$\mathcal U^*\begin{pmatrix}0&0\\0&(\widetilde M/N_1)^\pi\end{pmatrix}\mathcal U=\begin{pmatrix}(B^\pi AB^\pi)^\pi B^\pi&0\\-CG&I-B^+B\end{pmatrix}.$$
It follows from (5.8)–(5.10) and (5.12) that
$$\mathcal U^*\begin{pmatrix}K&0\\0&0\end{pmatrix}\mathcal U=\begin{pmatrix}BB^+AGABB^++BB^+ABB^+&B\\B^+BC&0\end{pmatrix}.$$
Note that
$$R=\begin{pmatrix}A_1^2+A_2A_3-A_2A_4A_4^\#A_3+DC_1&A_1D\\C_1A_1&C_1D\end{pmatrix}.$$
Computations yield that the Schur complement of $C_1D$ in $R$ is
$$S=A_2A_3-A_2A_4A_4^\#A_3+DC_1.\tag{5.13}$$
Since $S$ is invertible, by (2.1), we have
$$R^{-1}=\begin{pmatrix}S^{-1}&-S^{-1}A_1C_1^{-1}\\-D^{-1}A_1S^{-1}&D^{-1}C_1^{-1}+D^{-1}A_1S^{-1}A_1C_1^{-1}\end{pmatrix}.\tag{5.14}$$
Manipulations show that
$$\mathcal U^*\begin{pmatrix}S^{-1}&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}\mathcal U=\begin{pmatrix}\mathcal C^+&0\\0&0\end{pmatrix},\tag{5.15}$$
$$\mathcal U^*\begin{pmatrix}0&-S^{-1}A_1C_1^{-1}&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}\mathcal U=\begin{pmatrix}0&-\mathcal C^+A(B^+BC)^+\\0&0\end{pmatrix},\tag{5.16}$$
$$\mathcal U^*\begin{pmatrix}0&0&0&0\\-D^{-1}A_1S^{-1}&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}\mathcal U=\begin{pmatrix}0&0\\-B^+A\mathcal C^+&0\end{pmatrix},\tag{5.17}$$
$$\mathcal U^*\begin{pmatrix}0&0&0&0\\0&D^{-1}C_1^{-1}+D^{-1}A_1S^{-1}A_1C_1^{-1}&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}\mathcal U
=\begin{pmatrix}0&0\\0&B^+A\mathcal C^+A(B^+BC)^++B^+(B^+BC)^+\end{pmatrix}.\tag{5.18}$$
Combining (5.15)–(5.18) yields
$$\mathcal U^*\begin{pmatrix}R^{-1}&0\\0&0\end{pmatrix}\mathcal U=\begin{pmatrix}\mathcal C^+&-\mathcal C^+A(B^+BC)^+\\-B^+A\mathcal C^+&B^+A\mathcal C^+A(B^+BC)^++B^+(B^+BC)^+\end{pmatrix}.\tag{5.19}$$
Substituting (5.8)–(5.14) and (5.19) into (5.7) produces $M^\#=\begin{pmatrix}X&Y\\Z&W\end{pmatrix}$, where
$$X=JAGAH-JAH-GAH-JAG+G+H+J,$$
$$Y=\mathcal C^+B+JAGA\mathcal C^+B-JA\mathcal C^+B-GA\mathcal C^+B,$$
$$Z=(C-CGA)\mathcal C^+(I+AGAH-AH-AG)+CG^2(I-AH),$$
$$W=(C-CGA)\mathcal C^+A(GA\mathcal C^+B-\mathcal C^+B)-CG^2A\mathcal C^+B,$$
$$J=(B^\pi AB^\pi)^\pi B^\pi A\mathcal C^+,\qquad H=\mathcal C^+A(B^\pi AB^\pi)^\pi B^\pi.\qquad\blacksquare$$
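The representation from corollary 2.4 used in the proof above, $\begin{pmatrix}A&0\\C&0\end{pmatrix}^\#=\begin{pmatrix}A^\#&0\\C(A^\#)^2&0\end{pmatrix}$, is easy to check numerically through the group-inverse axioms. This is our own sketch; the matrices are arbitrary, and $A$ is taken invertible so that $A^\#=A^{-1}$.

```python
import numpy as np

# corollary 2.4: for N = [[A, 0], [C, 0]] with A invertible (A# = A^-1),
# the group inverse is N# = [[A^-1, 0], [C A^-2, 0]].
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
C = np.array([[3.0, -1.0]])
Ainv = np.linalg.inv(A)

N  = np.block([[A, np.zeros((2, 1))], [C, np.zeros((1, 1))]])
Ns = np.block([[Ainv, np.zeros((2, 1))], [C @ Ainv @ Ainv, np.zeros((1, 1))]])

assert np.allclose(N @ Ns, Ns @ N)      # N and N# commute
assert np.allclose(Ns @ N @ Ns, Ns)     # N# N N# = N#
assert np.allclose(N @ N @ Ns, N)       # N^2 N# = N
```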
For the matrix $M=\begin{pmatrix}A&B\\C&0\end{pmatrix}$ in theorem 5.26, when $C=B^*$, we have the following result.

Corollary 5.5. Let $M=\begin{pmatrix}A&B\\B^*&0\end{pmatrix}$, where $A\in\mathbb C^{n\times n}$ and $B\in\mathbb C^{n\times m}$. Assume that the group inverse of $B^\pi AB^\pi$ exists, and let $\mathcal C=BB^*+A(B^\pi AB^\pi)^\pi B^\pi A$. The following statements hold:

(1) $M^\#$ exists if and only if $\operatorname{rank}(\mathcal C)=\operatorname{rank}(B)$;
(2) if $M^\#$ exists, then $M^\#=\begin{pmatrix}X&Y\\Z&W\end{pmatrix}$, where
$$X=JAGAH-JAH-GAH-JAG+G+H+J,$$
$$Y=\mathcal C^+B+JAGA\mathcal C^+B-JA\mathcal C^+B-GA\mathcal C^+B,$$
$$Z=B^*\mathcal C^++B^*\mathcal C^+AGAH-B^*\mathcal C^+AH-B^*\mathcal C^+AG,$$
$$W=B^*\mathcal C^+AGA\mathcal C^+B-B^*\mathcal C^+A\mathcal C^+B,$$
$$G=(B^\pi AB^\pi)^\#,\qquad J=(B^\pi AB^\pi)^\pi B^\pi A\mathcal C^+,\qquad H=\mathcal C^+A(B^\pi AB^\pi)^\pi B^\pi.$$

Let $D$ be a nonsingular matrix. If $\operatorname{sgn}(\widetilde D^{-1}\widetilde Y_1)=\operatorname{sgn}(D^{-1}Y_1)$ and $\operatorname{sgn}(\widetilde Y_2\widetilde D^{-1})=\operatorname{sgn}(Y_2D^{-1})$ for all $\widetilde D\in Q(D)$, $\widetilde Y_1\in Q(Y_1)$ and $\widetilde Y_2\in Q(Y_2)$ (with $N_c(D)=N_r(Y_1)$ and $N_c(Y_2)=N_r(D)$), then $D^{-1}Y_1$ and $Y_2D^{-1}$ are called sign unique.

Theorem 5.27. Let $N=\begin{pmatrix}A&D_1&Y_1\\D_2&0&0\\Y_2&0&0\end{pmatrix}$ be a real square matrix, where $A$ is square, $D_1$ and $D_2$ are invertible, and $Y_1$ and $Y_2$ are sign orthogonal. The matrix $N$ is an $S^2GI$-matrix if and only if the following hold:

(i) $D_1^{-1}Y_1$ and $Y_2D_2^{-1}$ are sign unique;
(ii) $U=\begin{pmatrix}I&Y_2D_2^{-1}&0&0\\0&D_1&A&0\\0&0&D_2&D_1^{-1}Y_1\\0&0&0&I\end{pmatrix}$ is an $S^2NS$-matrix.

Proof. Let $B=(D_1\ \ Y_1)$ and $C=\begin{pmatrix}D_2\\Y_2\end{pmatrix}$. Then $N=\begin{pmatrix}A&B\\C&0\end{pmatrix}$. Since $D_1$ and $D_2$ are invertible and $Y_1$ and $Y_2$ are sign orthogonal, we have $B^\pi=0$ and $\begin{pmatrix}B^\pi AB^\pi&0\\CB^\pi&0\end{pmatrix}=0$. By computing, we obtain $BCB^\pi=0$ and $\operatorname{rank}\big(BC+A(B^\pi AB^\pi)^\pi B^\pi A\big)=\operatorname{rank}(BC)=\operatorname{rank}(B)$. By theorem 5.26, the group inverse of $N$ exists and
$$N^\#=\begin{pmatrix}0&D_2^{-1}&-D_2^{-1}D_1^{-1}Y_1\\D_1^{-1}&X_1&X_2\\-Y_2D_2^{-1}D_1^{-1}&X_3&X_4\end{pmatrix},\tag{5.20}$$
where
$$X_1=-D_1^{-1}AD_2^{-1},\qquad X_2=D_1^{-1}AD_2^{-1}D_1^{-1}Y_1,$$
$$X_3=Y_2D_2^{-1}D_1^{-1}AD_2^{-1},\qquad X_4=-Y_2D_2^{-1}D_1^{-1}AD_2^{-1}D_1^{-1}Y_1.$$
Next, we show that the conditions are necessary. Since $D_1$ is invertible, we have $q(D_1)=N_r(D_1)$. We first show that $D_1$ and $D_2$ are SNS-matrices. Suppose $D_1$ is not an SNS-matrix. By lemma 5.4, there exist matrices $\widehat D_1,\widetilde D_1\in Q(D_1)$ such that $(\widehat D_1^{-1})_{q,p}(\widetilde D_1^{-1})_{q,p}<0$, where $p,q$ are integers with $1\leq p,q\leq N_r(D_1)$. Let
$$N_1=\begin{pmatrix}A&\widehat D_1&Y_1\\D_2&0&0\\Y_2&0&0\end{pmatrix},\qquad N_2=\begin{pmatrix}A&\widetilde D_1&Y_1\\D_2&0&0\\Y_2&0&0\end{pmatrix}.$$
Clearly, $N_1,N_2\in Q(N)$ and $N_1^\#$, $N_2^\#$ exist. From (5.20) and $(\widehat D_1^{-1})_{q,p}(\widetilde D_1^{-1})_{q,p}<0$, we have $(N_1^\#)_{N_r(A)+q,p}(N_2^\#)_{N_r(A)+q,p}<0$. This contradicts $N$ being an $S^2GI$-matrix. Thus $D_1$ is an SNS-matrix. Similarly, $D_2$ is an SNS-matrix.

Therefore, for every $\widehat N=\begin{pmatrix}\widehat A&\widehat D_1&\widehat Y_1\\\widehat D_2&0&0\\\widehat Y_2&0&0\end{pmatrix}\in Q(N)$, we have
$$\widehat N^\#=\begin{pmatrix}0&\widehat D_2^{-1}&-\widehat D_2^{-1}\widehat D_1^{-1}\widehat Y_1\\\widehat D_1^{-1}&\widehat X_1&\widehat X_2\\-\widehat Y_2\widehat D_2^{-1}\widehat D_1^{-1}&\widehat X_3&\widehat X_4\end{pmatrix},$$
where
$$\widehat X_1=-\widehat D_1^{-1}\widehat A\widehat D_2^{-1},\qquad\widehat X_2=\widehat D_1^{-1}\widehat A\widehat D_2^{-1}\widehat D_1^{-1}\widehat Y_1,$$
$$\widehat X_3=\widehat Y_2\widehat D_2^{-1}\widehat D_1^{-1}\widehat A\widehat D_2^{-1},\qquad\widehat X_4=-\widehat Y_2\widehat D_2^{-1}\widehat D_1^{-1}\widehat A\widehat D_2^{-1}\widehat D_1^{-1}\widehat Y_1.$$
Since $N$ is an $S^2GI$-matrix, we have
$$\operatorname{sgn}(\widehat D_1^{-1})=\operatorname{sgn}(D_1^{-1}),\qquad\operatorname{sgn}(\widehat D_2^{-1})=\operatorname{sgn}(D_2^{-1}),$$
$$\operatorname{sgn}(\widehat D_2^{-1}\widehat D_1^{-1}\widehat Y_1)=\operatorname{sgn}(D_2^{-1}D_1^{-1}Y_1),\qquad\operatorname{sgn}(\widehat Y_2\widehat D_2^{-1}\widehat D_1^{-1})=\operatorname{sgn}(Y_2D_2^{-1}D_1^{-1}),$$
$$\operatorname{sgn}(\widehat X_i)=\operatorname{sgn}(X_i),\quad i=1,2,3,4,$$
for all $\widehat D_1\in Q(D_1)$, $\widehat D_2\in Q(D_2)$, $\widehat Y_1\in Q(Y_1)$ and $\widehat Y_2\in Q(Y_2)$. Since $\operatorname{sgn}(\widehat D_1^{-1})=\operatorname{sgn}(D_1^{-1})$ and $\operatorname{sgn}(\widehat D_2^{-1})=\operatorname{sgn}(D_2^{-1})$, both $D_1$ and $D_2$ are $S^2NS$-matrices. Hence there exists a permutation matrix $P_1$ such that $(P_1D_2)_{i,i}\neq0$ for $i=1,2,\dots,N_r(P_1D_2)$.
Let $Q_1=\begin{pmatrix}I&0&0\\0&P_1&0\\0&0&I\end{pmatrix}$ and $W_1=Q_1NQ_1^T=\begin{pmatrix}A&D_1P_1^T&Y_1\\P_1D_2&0&0\\Y_2&0&0\end{pmatrix}$. It follows from theorem 5.26 that
$$W_1^\#=\begin{pmatrix}0&D_2^{-1}P_1^T&-D_2^{-1}D_1^{-1}Y_1\\P_1D_1^{-1}&P_1X_1P_1^T&P_1X_2\\-Y_2D_2^{-1}D_1^{-1}&X_3P_1^T&X_4\end{pmatrix}.$$
Note that $N$ is an $S^2GI$-matrix; thus so is $W_1$. Hence $\operatorname{sgn}(\widetilde D_2^{-1}\widetilde D_1^{-1}\widetilde Y_1)=\operatorname{sgn}(D_2^{-1}D_1^{-1}Y_1)$ for $\widetilde D_1\in Q(D_1)$, $\widetilde D_2\in Q(D_2)$ and $\widetilde Y_1\in Q(Y_1)$.

Next, we prove that $D_1^{-1}Y_1$ and $Y_2D_2^{-1}$ are sign unique. Suppose $D_1^{-1}Y_1$ is not sign unique. Let $Z_1=D_1P_1^T$ and $Z_2=P_1D_2$. Then $Z_1^{-1}Y_1=P_1D_1^{-1}Y_1$ is not sign unique, i.e., there exist integers $i_1$, $i_2$ and matrices $\widetilde Y_1,\widehat Y_1\in Q(Y_1)$ such that $(Z_1^{-1}\widetilde Y_1)_{i_1,i_2}>0$ and $(Z_1^{-1}\widehat Y_1)_{i_1,i_2}<0$. For $1\leq i\leq N_r(Z_2)$ and $\epsilon>0$, let
$$\widetilde Z_2[i,:]=\begin{cases}Z_2[i,:],&i\neq i_1,\\\epsilon Z_2[i,:],&i=i_1.\end{cases}$$
Clearly, $\widetilde Z_2\in Q(Z_2)$. It is easy to see that
$$(\widetilde Z_2^{-1})_{i_1,i}=\begin{cases}(Z_2^{-1})_{i_1,i},&i\neq i_1,\\[2pt]\dfrac{1}{\epsilon}(Z_2^{-1})_{i_1,i},&i=i_1\end{cases}\qquad(1\leq i\leq N_r(Z_2)).$$
Note that
$$(\widetilde Z_2^{-1}Z_1^{-1}\widetilde Y_1)_{i_1,i_2}=\frac{1}{\epsilon}(Z_2^{-1})_{i_1,i_1}(Z_1^{-1}\widetilde Y_1)_{i_1,i_2}+\sum_{i=1,\,i\neq i_1}^{N_c(Z_2)}(Z_2^{-1})_{i_1,i}(Z_1^{-1}\widetilde Y_1)_{i,i_2},$$
$$(\widetilde Z_2^{-1}Z_1^{-1}\widehat Y_1)_{i_1,i_2}=\frac{1}{\epsilon}(Z_2^{-1})_{i_1,i_1}(Z_1^{-1}\widehat Y_1)_{i_1,i_2}+\sum_{i=1,\,i\neq i_1}^{N_c(Z_2)}(Z_2^{-1})_{i_1,i}(Z_1^{-1}\widehat Y_1)_{i,i_2}.$$
When $\epsilon$ is sufficiently small, we get
$$\operatorname{sgn}\big((\widetilde Z_2^{-1}Z_1^{-1}\widetilde Y_1)_{i_1,i_2}\big)=\operatorname{sgn}\Big(\frac{1}{\epsilon}(Z_2^{-1})_{i_1,i_1}(Z_1^{-1}\widetilde Y_1)_{i_1,i_2}\Big),$$
$$\operatorname{sgn}\big((\widetilde Z_2^{-1}Z_1^{-1}\widehat Y_1)_{i_1,i_2}\big)=\operatorname{sgn}\Big(\frac{1}{\epsilon}(Z_2^{-1})_{i_1,i_1}(Z_1^{-1}\widehat Y_1)_{i_1,i_2}\Big).$$
Since
$$\operatorname{sgn}\big((Z_1^{-1}\widetilde Y_1)_{i_1,i_2}\big)=-\operatorname{sgn}\big((Z_1^{-1}\widehat Y_1)_{i_1,i_2}\big)\neq0,$$
we have
$$\operatorname{sgn}\big((\widetilde Z_2^{-1}Z_1^{-1}\widetilde Y_1)_{i_1,i_2}\big)=-\operatorname{sgn}\big((\widetilde Z_2^{-1}Z_1^{-1}\widehat Y_1)_{i_1,i_2}\big)\neq0.$$
This contradicts the assumption that $W_1$ is an $S^2GI$-matrix. So $D_1^{-1}Y_1$ is sign unique, and $\operatorname{sgn}(\widetilde D_2^{-1}\widetilde H_1)=\operatorname{sgn}(D_2^{-1}D_1^{-1}Y_1)$ for $\widetilde D_2\in Q(D_2)$ and $\widetilde H_1\in Q(D_1^{-1}Y_1)$. Similarly, $Y_2D_2^{-1}$ is sign unique, and $\operatorname{sgn}(\widetilde H_2\widetilde D_1^{-1})=\operatorname{sgn}(Y_2D_2^{-1}D_1^{-1})$ for $\widetilde H_2\in Q(Y_2D_2^{-1})$ and $\widetilde D_1\in Q(D_1)$.

We then prove part (ii) of the theorem. Let
$$L_1=D_1^{-1},\qquad L_2=D_2^{-1},\qquad L_3=D_2^{-1}D_1^{-1}Y_1,\qquad L_4=Y_2D_2^{-1}D_1^{-1}.$$
Then
$$X_1=-L_1AL_2,\qquad X_2=L_1AL_3,\qquad X_3=L_4AL_2,\qquad X_4=-L_4AL_3.$$
Since $\operatorname{sgn}(\widehat D_1^{-1}\widehat A\widehat D_2^{-1})=\operatorname{sgn}(D_1^{-1}AD_2^{-1})$ for $\widehat D_1\in Q(D_1)$, $\widehat A\in Q(A)$ and $\widehat D_2\in Q(D_2)$, we have $\operatorname{sgn}(L_1\widetilde AL_2)=\operatorname{sgn}(L_1AL_2)$ for $\widetilde A\in Q(A)$. It follows from lemma 5.5 that $\operatorname{sgn}(\widehat L_1\widehat A\widehat L_2)=\operatorname{sgn}(L_1AL_2)$ for $\widehat L_1\in Q(L_1)$, $\widehat A\in Q(A)$ and $\widehat L_2\in Q(L_2)$. Similarly, $\operatorname{sgn}(\widehat L_1\widehat A\widehat L_3)=\operatorname{sgn}(L_1AL_3)$, $\operatorname{sgn}(\widehat L_4\widehat A\widehat L_2)=\operatorname{sgn}(L_4AL_2)$ and $\operatorname{sgn}(\widehat L_4\widehat A\widehat L_3)=\operatorname{sgn}(L_4AL_3)$ for $\widehat A\in Q(A)$ and $\widehat L_i\in Q(L_i)$, $i=1,2,3,4$. Let
$$U=\begin{pmatrix}I&Y_2D_2^{-1}&0&0\\0&D_1&A&0\\0&0&D_2&D_1^{-1}Y_1\\0&0&0&I\end{pmatrix}.$$
Since $D_1$ and $D_2$ are SNS-matrices, $U$ is an SNS-matrix. We then have
$$U^{-1}=\begin{pmatrix}I&-L_4&L_4AL_2&-L_4AL_3\\0&L_1&-L_1AL_2&L_1AL_3\\0&0&L_2&-L_3\\0&0&0&I\end{pmatrix}.\tag{5.21}$$
Clearly, $\operatorname{sgn}(\widehat U^{-1})=\operatorname{sgn}(U^{-1})$ for every
$$\widehat U=\begin{pmatrix}\widehat I&\widehat H_2&0&0\\0&\widehat D_1&\widehat A&0\\0&0&\widehat D_2&\widehat H_1\\0&0&0&\widehat I\end{pmatrix}\in Q(U),$$
where $\widehat I\in Q(I)$, $\widehat D_1\in Q(D_1)$, $\widehat D_2\in Q(D_2)$, $\widehat A\in Q(A)$, $\widehat H_1\in Q(D_1^{-1}Y_1)$ and $\widehat H_2\in Q(Y_2D_2^{-1})$. Hence $U$ is an $S^2NS$-matrix. So (i) and (ii) hold.

Conversely, if (i) and (ii) hold, then by (5.20) and (5.21), $N$ is an $S^2GI$-matrix. $\blacksquare$

Theorem 5.28. Let $N=\begin{pmatrix}A&I&Y_1\\I&0&0\\Y_2&0&0\end{pmatrix}$ be a real square matrix, where $A$ is square, and $Y_1$ and $Y_2$ are sign orthogonal. The matrix $N$ is an $S^2GI$-matrix if and only if $\operatorname{sgn}(\widetilde Y_2\widetilde A)=\operatorname{sgn}(Y_2A)$, $\operatorname{sgn}(\widetilde Y_2\widetilde A\widetilde Y_1)=\operatorname{sgn}(Y_2AY_1)$ and $\operatorname{sgn}(\widetilde A\widetilde Y_1)=\operatorname{sgn}(AY_1)$ for all $\widetilde A\in Q(A)$, $\widetilde Y_1\in Q(Y_1)$ and $\widetilde Y_2\in Q(Y_2)$.
Proof. From theorem 5.27, $N$ is an $S^2GI$-matrix if and only if
$$U=\begin{pmatrix}I&Y_2&0&0\\0&I&A&0\\0&0&I&Y_1\\0&0&0&I\end{pmatrix}$$
is an $S^2NS$-matrix. Clearly, $U$ is an SNS-matrix. Calculations give
$$U^{-1}=\begin{pmatrix}I&-Y_2&Y_2A&-Y_2AY_1\\0&I&-A&AY_1\\0&0&I&-Y_1\\0&0&0&I\end{pmatrix}.$$
Since $\operatorname{sgn}(\widetilde Y_2\widetilde A)=\operatorname{sgn}(Y_2A)$, $\operatorname{sgn}(\widetilde Y_2\widetilde A\widetilde Y_1)=\operatorname{sgn}(Y_2AY_1)$ and $\operatorname{sgn}(\widetilde A\widetilde Y_1)=\operatorname{sgn}(AY_1)$ for all $\widetilde A\in Q(A)$, $\widetilde Y_1\in Q(Y_1)$ and $\widetilde Y_2\in Q(Y_2)$, $U$ is an $S^2NS$-matrix. Hence $N$ is an $S^2GI$-matrix. $\blacksquare$
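The inverse used in the proof above is just the inverse of a unitriangular block matrix: with $N$ the strictly upper block part of $U$, $U^{-1}=I-N+N^2-N^3$. A small numerical sketch (block sizes and entries are our own choices):

```python
import numpy as np

# U = [[I, Y2, 0, 0], [0, I, A, 0], [0, 0, I, Y1], [0, 0, 0, I]] and its
# closed-form inverse from the proof of theorem 5.28.
A  = np.array([[1.0, 2.0], [0.0, -1.0]])
Y1 = np.array([[1.0], [0.0]])
Y2 = np.array([[0.0, 3.0]])
z  = np.zeros

U = np.block([
    [np.eye(1), Y2,        z((1, 2)), z((1, 1))],
    [z((2, 1)), np.eye(2), A,         z((2, 1))],
    [z((2, 1)), z((2, 2)), np.eye(2), Y1       ],
    [z((1, 1)), z((1, 2)), z((1, 2)), np.eye(1)],
])
U_inv = np.block([
    [np.eye(1), -Y2,       Y2 @ A,    -Y2 @ A @ Y1],
    [z((2, 1)), np.eye(2), -A,        A @ Y1      ],
    [z((2, 1)), z((2, 2)), np.eye(2), -Y1        ],
    [z((1, 1)), z((1, 2)), z((1, 2)), np.eye(1)  ],
])

assert np.allclose(np.linalg.inv(U), U_inv)
```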
5.6 Ray Pattern of Drazin Inverse

Real matrices with ray M–P inverses are characterized in [167, 172, 175]. Complex matrices with a ray M–P inverse are studied in section 4.4.

Definition 5.9. For a square complex matrix $A$, if $\operatorname{ray}(\widetilde A^D)=\operatorname{ray}(A^D)$ for all $\widetilde A\in Q(A)$, then $A$ is said to have a ray Drazin inverse (or $A^D$ is said to be ray unique).

Clearly, matrices with ray Drazin inverses are generalizations of ray $S^2NS$ matrices. Some real matrices with ray Drazin inverses are given in [20, 21, 49, 227, 228]. Until now, there has been no result on complex matrices with ray Drazin inverses. In this section, we present some properties of matrices whose Drazin inverses or M–P inverses are ray unique, and use them to obtain necessary and sufficient conditions for some anti-triangular block matrices [145] to possess ray Drazin inverses.

Lemma 5.6 [227]. Let $M=\begin{pmatrix}A&B\\C&0\end{pmatrix}\in\mathbb C^{(n+m)\times(n+m)}$, where $A\in\mathbb C^{n\times n}$. If $BC$ is nonsingular, then
$$M^D=\begin{pmatrix}0&(BC)^{-1}B\\C(BC)^{-1}&-C(BC)^{-1}A(BC)^{-1}B\end{pmatrix},$$
and
$$\big((BC)^{-1}B\big)_{pq}=\frac{(-1)^p}{\det(BC)}\sum_{q\in F\in S_n(m)}(-1)^{N(q,F)}\det\big(C[F\setminus\{q\},\{p\}^c]\big)\det\big(B[:,F]\big),$$
$$\big(C(BC)^{-1}\big)_{pq}=\frac{(-1)^q}{\det(BC)}\sum_{p\in F\in S_n(m)}(-1)^{N(p,F)}\det\big(B[\{q\}^c,F\setminus\{p\}]\big)\det\big(C[F,:]\big),$$
where $S_n(m)$ denotes the family of $n$-element subsets of $[m]$ and $N(q,F)$ denotes the position of $q$ in $F$.
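Lemma 5.6's block formula for $M^D$ can be verified numerically through the Drazin axioms; in fact, with $BC$ nonsingular the matrix below turns out to have index one, so $M^2M^D=M$. A minimal sketch with arbitrary random matrices (note $m\geq n$ is needed for $BC$ to be nonsingular):

```python
import numpy as np

# Lemma 5.6: M = [[A, B], [C, 0]] with BC nonsingular has
# M^D = [[0, (BC)^-1 B], [C (BC)^-1, -C (BC)^-1 A (BC)^-1 B]].
rng = np.random.default_rng(0)
n, m = 2, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
S = np.linalg.inv(B @ C)                      # (BC)^-1, generically exists

M  = np.block([[A, B], [C, np.zeros((m, m))]])
MD = np.block([[np.zeros((n, n)), S @ B],
               [C @ S, -C @ S @ A @ S @ B]])

assert np.allclose(M @ MD, MD @ M)            # commutativity
assert np.allclose(MD @ M @ MD, MD)           # M^D M M^D = M^D
assert np.allclose(M @ M @ MD, M)             # index one here: M^2 M^D = M
```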
Lemma 5.7 [101]. Let $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}$ be a square complex matrix, where $A$ is nonsingular and $\operatorname{rank}(M)=\operatorname{rank}(A)$. Then
$$M^D=\begin{pmatrix}\big((AW)^D\big)^2A&\big((AW)^D\big)^2B\\CA^{-1}\big((AW)^D\big)^2A&CA^{-1}\big((AW)^D\big)^2B\end{pmatrix},$$
where $W=I+A^{-1}BCA^{-1}$.

The following theorem presents a necessary condition for a class of matrices to be ray unique.

Theorem 5.29 [145]. Let $M\in\mathbb C^{n\times n}$ be a matrix with a ray Drazin inverse. If $A\in\mathbb C^{r\times r}$ is a principal submatrix of $M$ satisfying $q(A)=q(M)=r$, then $A$ is a ray $S^2NS$ matrix.

Proof. If $q(A)=q(M)=r$, then $M$ is permutation similar to a block matrix $N=\begin{pmatrix}A&B\\C&0\end{pmatrix}$, and $N$ has a ray Drazin inverse.

We first prove that $A$ is a DRU matrix. If $A$ is not a DRU matrix, then by lemma 4.13 there exist nonsingular $A_1,A_2\in Q(A)$ and $p,q\in[r]$ such that $\operatorname{ray}((A_1^{-1})_{pq})\neq\operatorname{ray}((A_2^{-1})_{pq})$ and $(A_1^{-1})_{pq}(A_2^{-1})_{pq}\neq0$. Let $N_i(\epsilon)=\begin{pmatrix}A_i&\epsilon B\\\epsilon C&0\end{pmatrix}\in Q(N)$ (with $\epsilon>0$, $i=1,2$). Since $\operatorname{rank}(N_i(\epsilon))\leq q(N)=q(A)$ and $A_1,A_2$ are nonsingular, we have $\operatorname{rank}(N_i(\epsilon))=\operatorname{rank}(A_i)$. By lemma 5.7, we have
$$N_i(\epsilon)^D=\begin{pmatrix}\big((A_iW)^D\big)^2A_i&\epsilon\big((A_iW)^D\big)^2B\\\epsilon CA_i^{-1}\big((A_iW)^D\big)^2A_i&\epsilon^2CA_i^{-1}\big((A_iW)^D\big)^2B\end{pmatrix},$$
where $W=I+\epsilon^2A_i^{-1}BCA_i^{-1}$. Note that $A_iW$ is nonsingular for sufficiently small $\epsilon$, so
$$\lim_{\epsilon\to0^+}N_i(\epsilon)^D=\begin{pmatrix}A_i^{-1}&0\\0&0\end{pmatrix}.$$
Since $\operatorname{ray}((A_1^{-1})_{pq})\neq\operatorname{ray}((A_2^{-1})_{pq})$ and $(A_1^{-1})_{pq}(A_2^{-1})_{pq}\neq0$, for sufficiently small $\epsilon$ we have $\operatorname{ray}(N_1(\epsilon)^D)\neq\operatorname{ray}(N_2(\epsilon)^D)$, a contradiction to $N^D$ being ray unique. Hence $A$ is a DRU matrix.

Since $A$ is a DRU matrix and $N^D$ is ray unique, the expression for $\lim_{\epsilon\to0^+}N_i(\epsilon)^D$ shows that $(A_1^{-1})_{pq}(A_2^{-1})_{pq}=0$ or $\operatorname{ray}((A_2^{-1})_{pq})=\operatorname{ray}((A_1^{-1})_{pq})$ for $p,q\in[r]$ and $A_1,A_2\in Q(A)$. By theorem 4.17, $A$ is a ray $S^2NS$ matrix. $\blacksquare$

Next we present a result on an $m\times n$ complex matrix with term rank $n$ whose M–P inverse is ray unique. This result will be used to derive several necessary and sufficient conditions for some matrices to have ray Drazin inverses.
Theorem 5.30 [145]. Let $C$ be an $m\times n$ complex matrix such that $C^+$ is ray unique and $q(C)=n$, and let $B$ be an $n\times m$ complex matrix with $B\in Q(C^*)$. Then $\widetilde B\widetilde C$ is nonsingular for $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$, and
$$\operatorname{ray}\big((\widetilde B\widetilde C)^{-1}\widetilde B\big)=\operatorname{ray}(C^+),\qquad
\operatorname{ray}\big(\widetilde C(\widetilde B\widetilde C)^{-1}\big)=\operatorname{ray}(B^+).$$

Proof. Clearly, $\widetilde C$ and $\widetilde B$ both have ray M–P inverses. By theorem 4.17, each $n\times n$ submatrix of $\widetilde C$ or $\widetilde B$ either has zero determinant or is a ray $S^2NS$ matrix. From the Cauchy–Binet formula, we obtain
$$\det(\widetilde B\widetilde C)=\sum_{F\in S_C}\det(\widetilde B[:,F])\det(\widetilde C[F,:]),\tag{5.22}$$
where $S_C=\{F\in S_n(m)\mid\det(C[F,:])\neq0\}$.

For $F\in S_C$, $\widetilde B[:,F]$ and $\widetilde C[F,:]$ are ray $S^2NS$ matrices. By corollary 3.4, $\widetilde B[:,F]$ is a DRU matrix. For $\widetilde B[:,F]\in Q(B[:,F])=Q(C[F,:]^*)$, we obtain
$$\arg\big(\det(\widetilde B[:,F])\big)=-\arg\big(\det(\widetilde C[F,:])\big).$$
Hence $\det(\widetilde B[:,F])\det(\widetilde C[F,:])>0$. Then there is a nonzero term in (5.22), and every such term is positive. So $\det(\widetilde B\widetilde C)>0$ for $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$.

Let $X=(\widetilde B\widetilde C)^{-1}\widetilde B$ and $Y=\widetilde C(\widetilde B\widetilde C)^{-1}$. By lemma 5.6, we have
$$(X)_{pq}=\frac{(-1)^p}{\det(\widetilde B\widetilde C)}\sum_{q\in F\in S_C}(-1)^{N(q,F)}\det\big(\widetilde C[F\setminus\{q\},\{p\}^c]\big)\det\big(\widetilde B[:,F]\big),$$
$$(Y)_{pq}=\frac{(-1)^q}{\det(\widetilde B\widetilde C)}\sum_{p\in F\in S_C}(-1)^{N(p,F)}\det\big(\widetilde B[\{q\}^c,F\setminus\{p\}]\big)\det\big(\widetilde C[F,:]\big).$$
For $q\in F\in S_C$, we have
$$\big(\widetilde C[F,:]^{-1}\big)_{p,N(q,F)}=\frac{(-1)^{p+N(q,F)}\det\big(\widetilde C[F\setminus\{q\},\{p\}^c]\big)}{\det(\widetilde C[F,:])},$$
$$\big(\widetilde B[:,F]^{-1}\big)_{N(p,F),q}=\frac{(-1)^{q+N(p,F)}\det\big(\widetilde B[\{q\}^c,F\setminus\{p\}]\big)}{\det(\widetilde B[:,F])}.$$
Hence
$$(X)_{pq}=\frac{1}{\det(\widetilde B\widetilde C)}\sum_{q\in F\in S_C}\big(\widetilde C[F,:]^{-1}\big)_{p,N(q,F)}\det\big(\widetilde C[F,:]\big)\det\big(\widetilde B[:,F]\big),$$
$$(Y)_{pq}=\frac{1}{\det(\widetilde B\widetilde C)}\sum_{p\in F\in S_C}\big(\widetilde B[:,F]^{-1}\big)_{N(p,F),q}\det\big(\widetilde B[:,F]\big)\det\big(\widetilde C[F,:]\big).$$
For $F\in S_C$, $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$, recall that $\det(\widetilde B\widetilde C)>0$, $\det(\widetilde B[:,F])\det(\widetilde C[F,:])>0$, and $\widetilde B[:,F]$ and $\widetilde C[F,:]$ are ray $S^2NS$ matrices. By lemma 4.14, we have
$$(C^+)_{pq}=\frac{1}{\det(C^*C)}\sum_{q\in T\in S_C}\big|\det(C[T,:])\big|^2\big(C[T,:]^{-1}\big)_{p,N(q,T)}.$$
So $\operatorname{ray}((X)_{pq})=\operatorname{ray}((C^+)_{pq})$. Hence for $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$, we obtain
$$\operatorname{ray}\big((\widetilde B\widetilde C)^{-1}\widetilde B\big)=\operatorname{ray}(C^+).$$
In a similar way, we prove that
$$\operatorname{ray}\big(\widetilde C(\widetilde B\widetilde C)^{-1}\big)=\operatorname{ray}(B^+).$$

If each arc of a digraph $G$ is assigned a complex number of modulus one as the ray weight of the arc, then $G$ is called a ray digraph. The ray of a directed path $P$ in $G$ is defined to be the product of the ray weights of all the arcs of $P$ [177].

Let $A_i\in\mathbb C^{n_i\times n_{i+1}}$ ($i=1,2,\dots,m$) be complex matrices. The multipartite ray digraph $G(A_1,A_2,\dots,A_m)$ is defined as follows.

(a) The vertex set $V$ of $G(A_1,A_2,\dots,A_m)$ has a partition $V=V_1\cup V_2\cup\cdots\cup V_{m+1}$, where $V_i=\{x_1^{(i)},\dots,x_{n_i}^{(i)}\}$, $i=1,\dots,m+1$.
(b) There is an arc from $x_p^{(i)}$ to $x_q^{(i+1)}$ (with ray weight $\operatorname{ray}((A_i)_{pq})$) if and only if $(A_i)_{pq}\neq0$, and these are all the arcs of $G(A_1,A_2,\dots,A_m)$.

The following is an extension of theorem 4.2 in [175], which will be used to study block matrices with ray Drazin inverses.

Theorem 5.31. Let $A_i\in\mathbb C^{n_i\times n_{i+1}}$ ($i=1,2,\dots,m$). Then the following statements are equivalent:

(1) $\operatorname{ray}(\widetilde A_1\widetilde A_2\cdots\widetilde A_m)=\operatorname{ray}(A_1A_2\cdots A_m)$ for all $\widetilde A_i\in Q(A_i)$, $i=1,2,\dots,m$;
(2) for some $i\in\{1,\dots,m\}$, $\operatorname{ray}(A_1\cdots A_{i-1}\widetilde A_iA_{i+1}\cdots A_m)=\operatorname{ray}(A_1A_2\cdots A_m)$ for all $\widetilde A_i\in Q(A_i)$;
(3) in $G(A_1,A_2,\dots,A_m)$, all directed paths of length $m$ with the same initial vertex and the same terminal vertex have the same ray.

Proof. Similar to that of theorem 4.2 in [175].
■
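The Cauchy–Binet expansion (5.22) that drives the proof of theorem 5.30 can be checked directly. A small numerical sketch (sizes and the random seed are our own choices):

```python
import numpy as np
from itertools import combinations

# Cauchy-Binet: det(B C) = sum over n-subsets F of [m] of
#   det(B[:, F]) * det(C[F, :]).
rng = np.random.default_rng(1)
n, m = 2, 4
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))

lhs = np.linalg.det(B @ C)
rhs = sum(np.linalg.det(B[:, list(F)]) * np.linalg.det(C[list(F), :])
          for F in combinations(range(m), n))
assert np.isclose(lhs, rhs)
```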
Using theorems 5.29 and 5.31, we present necessary and sufficient conditions for a class of block matrices to have ray Drazin inverses, as follows.
Theorem 5.32 [145]. Let $M=\begin{pmatrix}A&B\\C&0\end{pmatrix}\in\mathbb C^{n\times n}$, where $A\in\mathbb C^{r\times r}$ is a principal submatrix of $M$ satisfying $q(A)=q(M)=r$. If $\widetilde B\widetilde C=0$ for all $\widetilde B\in Q(B)$ and $\widetilde C\in Q(C)$, then $M^D$ is ray unique if and only if the following conditions hold:

(1) $A$ is a ray $S^2NS$ matrix;
(2) in $G(A^{-1},A^{-1},B)$, all directed paths of length 3 with the same initial vertex and the same terminal vertex have the same ray;
(3) in $G(C,A^{-1},A^{-1})$, all directed paths of length 3 with the same initial vertex and the same terminal vertex have the same ray.

Proof. If $M^D$ is ray unique, then by theorem 5.29, $A$ is a ray $S^2NS$ matrix, so (1) holds. For any $\widetilde M=\begin{pmatrix}\widetilde A&\widetilde B\\\widetilde C&0\end{pmatrix}\in Q(M)$, since $\widetilde A$ is nonsingular and $\operatorname{rank}(\widetilde M)\leq q(M)=q(A)$, we have $\operatorname{rank}(\widetilde M)=\operatorname{rank}(\widetilde A)$. Since $\widetilde B\widetilde C=0$, by lemma 5.7,
$$\widetilde M^D=\begin{pmatrix}\widetilde A^{-1}&\widetilde A^{-2}\widetilde B\\\widetilde C\widetilde A^{-2}&\widetilde C\widetilde A^{-3}\widetilde B\end{pmatrix}.$$
Since $M^D$ is ray unique, we have $\operatorname{ray}(\widetilde A^{-2}\widetilde B)=\operatorname{ray}(A^{-2}B)$ and $\operatorname{ray}(\widetilde C\widetilde A^{-2})=\operatorname{ray}(CA^{-2})$. By theorem 5.31, we deduce that (2) and (3) hold.

Conversely, if (1), (2) and (3) hold, then for $\widetilde M=\begin{pmatrix}\widetilde A&\widetilde B\\\widetilde C&0\end{pmatrix}\in Q(M)$ we have
$$\widetilde M^D=\begin{pmatrix}\widetilde A^{-1}&\widetilde A^{-2}\widetilde B\\\widetilde C\widetilde A^{-2}&\widetilde C\widetilde A^{-3}\widetilde B\end{pmatrix}.$$
By theorem 5.31, we have $\operatorname{ray}(\widetilde A^{-2}\widetilde B)=\operatorname{ray}(A^{-2}B)$ and $\operatorname{ray}(\widetilde C\widetilde A^{-2})=\operatorname{ray}(CA^{-2})$. Since $\widetilde C\widetilde A^{-3}\widetilde B=\widetilde C\widetilde A^{-2}\cdot\widetilde A\cdot\widetilde A^{-2}\widetilde B$ and all directed paths of length 3 with the same initial vertex and the same terminal vertex in $G(CA^{-2},A,A^{-2}B)$ have the same ray, by theorem 5.31 we have $\operatorname{ray}(\widetilde C\widetilde A^{-3}\widetilde B)=\operatorname{ray}(CA^{-3}B)$. Hence $M^D$ is ray unique. $\blacksquare$

By theorems 5.29 and 5.31, we have a necessary and sufficient condition for a class of block matrices to have ray Drazin inverses, as follows.

Theorem 5.33 [145]. Let $M=\begin{pmatrix}A&B\\C&0\end{pmatrix}\in\mathbb C^{(m+n)\times(m+n)}$, where $C\in\mathbb C^{m\times n}$, $B\in Q(C^*)$ and $q(C)=n$. Then $M^D$ is ray unique if and only if $\begin{pmatrix}C&0\\A&B\end{pmatrix}$ has a ray M–P inverse.

Proof. If $\begin{pmatrix}C&0\\A&B\end{pmatrix}$ has a ray M–P inverse, then by lemma 4.15, $C^+$ is ray unique and $\operatorname{ray}(B^+\widetilde AC^+)=\operatorname{ray}(B^+AC^+)$ for $\widetilde A\in Q(A)$. For $\widetilde M=\begin{pmatrix}\widetilde A&\widetilde B\\\widetilde C&0\end{pmatrix}\in Q(M)$, it follows from theorem 5.30 that $\widetilde B\widetilde C$ is nonsingular. From lemma 5.6, we get
$$\widetilde M^D=\begin{pmatrix}0&(\widetilde B\widetilde C)^{-1}\widetilde B\\\widetilde C(\widetilde B\widetilde C)^{-1}&-\widetilde C(\widetilde B\widetilde C)^{-1}\widetilde A(\widetilde B\widetilde C)^{-1}\widetilde B\end{pmatrix}.$$
By theorem 5.30, $\operatorname{ray}\big((\widetilde B\widetilde C)^{-1}\widetilde B\big)=\operatorname{ray}(C^+)$ and $\operatorname{ray}\big(\widetilde C(\widetilde B\widetilde C)^{-1}\big)=\operatorname{ray}(B^+)$. Since $\operatorname{ray}(B^+\widetilde AC^+)=\operatorname{ray}(B^+AC^+)$, by theorem 5.31 we obtain
$$\operatorname{ray}\big(\widetilde C(\widetilde B\widetilde C)^{-1}\widetilde A(\widetilde B\widetilde C)^{-1}\widetilde B\big)=\operatorname{ray}(B^+AC^+).$$
Hence $M^D$ is ray unique.

If $M^D$ is ray unique, then by $q(C)=n$ there exists a permutation matrix $P$ such that $C=P\begin{pmatrix}D\\E\end{pmatrix}$, where $D$ is a square $n\times n$ matrix with $q(D)=n$. We now show that $D$ is a DRU matrix. If it is not, then by lemma 4.13 there exist nonsingular matrices $D_1,D_2\in Q(D)$ and $p,q\in[n]$ such that $\operatorname{ray}((D_1^{-1})_{pq})\neq\operatorname{ray}((D_2^{-1})_{pq})$ and $(D_1^{-1})_{pq}(D_2^{-1})_{pq}\neq0$. Let $C_i(\epsilon)=P\begin{pmatrix}D_i\\\epsilon E\end{pmatrix}$ (with $\epsilon>0$, $i=1,2$). For $M_i(\epsilon)=\begin{pmatrix}A&C_i(\epsilon)^*\\C_i(\epsilon)&0\end{pmatrix}\in Q(M)$, by lemma 5.6, we have
$$M_i(\epsilon)^D=\begin{pmatrix}0&X_i(\epsilon)\\Y_i(\epsilon)&-Y_i(\epsilon)AX_i(\epsilon)\end{pmatrix},$$
where $X_i(\epsilon)=\big(C_i(\epsilon)^*C_i(\epsilon)\big)^{-1}C_i(\epsilon)^*$ and $Y_i(\epsilon)=C_i(\epsilon)\big(C_i(\epsilon)^*C_i(\epsilon)\big)^{-1}$. For sufficiently small $\epsilon$, we have
$$\lim_{\epsilon\to0}M_i(\epsilon)^D=M_i(0)^D=\begin{pmatrix}0&X_i(0)\\Y_i(0)&-Y_i(0)AX_i(0)\end{pmatrix},$$
where $X_i(0)=\big(C_i(0)^*C_i(0)\big)^{-1}C_i(0)^*=\begin{pmatrix}D_i^{-1}&0\end{pmatrix}P^T$ ($i=1,2$). Since $\operatorname{ray}((D_1^{-1})_{pq})\neq\operatorname{ray}((D_2^{-1})_{pq})$ and $(D_1^{-1})_{pq}(D_2^{-1})_{pq}\neq0$, for sufficiently small $\epsilon$ we get $\operatorname{ray}(M_1(\epsilon)^D)\neq\operatorname{ray}(M_2(\epsilon)^D)$, a contradiction to $M^D$ being ray unique. So $D$ is a DRU matrix. In this case, we have $\widetilde C^+=(\widetilde C^*\widetilde C)^{-1}\widetilde C^*$ for $\widetilde C\in Q(C)$. Since $M^D$ is ray unique, $\begin{pmatrix}\widetilde A&\widetilde C^*\\\widetilde C&0\end{pmatrix}\in Q(M)$ has a ray Drazin inverse. By lemma 5.6, we have
$$\begin{pmatrix}\widetilde A&\widetilde C^*\\\widetilde C&0\end{pmatrix}^D=\begin{pmatrix}0&\widetilde C^+\\(\widetilde C^*)^+&-(\widetilde C^*)^+\widetilde A\widetilde C^+\end{pmatrix}.$$
Hence $C^+$ is ray unique and $\operatorname{ray}\big((\widetilde C^*)^+\widetilde A\widetilde C^+\big)=\operatorname{ray}\big((C^*)^+AC^+\big)$. By lemma 4.15, this implies that $\begin{pmatrix}C&0\\A&C^*\end{pmatrix}$ has a ray M–P inverse, i.e., $\begin{pmatrix}C&0\\A&B\end{pmatrix}$ has a ray M–P inverse. $\blacksquare$

From theorem 5.33 and lemma 4.15, we can obtain the following theorem.
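The block formula used in the proof of theorem 5.32 — for nonsingular $A$ with $BC=0$ and $\operatorname{rank}(M)=\operatorname{rank}(A)$, $M^D=\begin{pmatrix}A^{-1}&A^{-2}B\\CA^{-2}&CA^{-3}B\end{pmatrix}$ — can be checked through the Drazin axioms. The matrices below are our own choices, picked so that $BC=0$ and $CA^{-1}B=0$ (the latter forces $\operatorname{rank}(M)=\operatorname{rank}(A)$):

```python
import numpy as np

# M = [[A, B], [C, 0]] with B C = 0 and rank(M) = rank(A).
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
B = np.array([[1.0, 2.0],
              [0.0, 0.0]])
C = np.array([[0.0, -2.0],
              [0.0,  1.0]])
Ai = np.linalg.inv(A)
assert np.allclose(B @ C, 0) and np.allclose(C @ Ai @ B, 0)

M  = np.block([[A, B], [C, np.zeros((2, 2))]])
MD = np.block([[Ai, Ai @ Ai @ B],
               [C @ Ai @ Ai, C @ Ai @ Ai @ Ai @ B]])

assert np.allclose(M @ MD, MD @ M)     # commutativity
assert np.allclose(MD @ M @ MD, MD)    # M^D M M^D = M^D
assert np.allclose(M @ M @ MD, M)      # M^2 M^D = M
```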
Theorem 5.34 [145]. Let $M=\begin{pmatrix}A&B\\C&0\end{pmatrix}\in\mathbb C^{(m+n)\times(m+n)}$, where $C\in\mathbb C^{m\times n}$, $q(C)=n$ and $B\in Q(C^*)$. Then $M^D$ is ray unique if and only if $C^+$ is ray unique and $\operatorname{ray}(B^+\widetilde AC^+)=\operatorname{ray}(B^+AC^+)$ for $\widetilde A\in Q(A)$.

By theorem 5.34 we have the following result.

Corollary 5.6 [145]. Let $M=\begin{pmatrix}0&B\\C&0\end{pmatrix}\in\mathbb C^{(m+n)\times(m+n)}$, where $C\in\mathbb C^{m\times n}$, $q(C)=n$ and $B\in Q(C^*)$. Then $M^D$ is ray unique if and only if $C^+$ is ray unique.

Corollary 5.7 [145]. Let $M=\begin{pmatrix}0&C\\B&0\end{pmatrix}\in\mathbb C^{(m+n)\times(m+n)}$, where $C\in\mathbb C^{m\times n}$, $q(C)=n$ and $B\in Q(C^*)$. Then $M^D$ is ray unique if and only if $C^+$ is ray unique.

Proof. Clearly we have
$$M=\begin{pmatrix}0&I_m\\I_n&0\end{pmatrix}\begin{pmatrix}0&B\\C&0\end{pmatrix}\begin{pmatrix}0&I_n\\I_m&0\end{pmatrix}.$$
Then
$$M^D=\begin{pmatrix}0&I_m\\I_n&0\end{pmatrix}\begin{pmatrix}0&B\\C&0\end{pmatrix}^D\begin{pmatrix}0&I_n\\I_m&0\end{pmatrix}.$$
Hence $M^D$ is ray unique if and only if $\begin{pmatrix}0&B\\C&0\end{pmatrix}$ has a ray Drazin inverse. By corollary 5.6, $M^D$ is ray unique if and only if $C^+$ is ray unique. $\blacksquare$

Theorem 5.35 [145]. Let $M=\begin{pmatrix}0&B\\C&0\end{pmatrix}\in\mathbb C^{(m+n)\times(m+n)}$, where $C\in\mathbb C^{m\times n}$, $q(C)<n\leq m$ and $B\in Q(C^*)$. Then $M^D$ is ray unique if and only if $C^+$ is ray unique.

Proof. For $\begin{pmatrix}0&\widetilde C^*\\\widetilde C&0\end{pmatrix}\in Q(M)$, by theorem 2.8, we have
$$\begin{pmatrix}0&\widetilde C^*\\\widetilde C&0\end{pmatrix}^D=\begin{pmatrix}0&(\widetilde C^*\widetilde C)^D\widetilde C^*\\\widetilde C(\widetilde C^*\widetilde C)^D&0\end{pmatrix}=\begin{pmatrix}0&\widetilde C^+\\(\widetilde C^*)^+&0\end{pmatrix}.$$
If $M^D$ is ray unique, then $C^+$ is ray unique.

Next we show that $M^D$ is ray unique if $C^+$ is ray unique. Since $q(C)<n\leq m$, by lemma 4.5 there exist permutation matrices $P$ and $Q$ such that
$$C=P\begin{pmatrix}C_1&0\\C_2&C_3\end{pmatrix}Q,$$
where $q(C_1)=N_c(C_1)$ and $q(C_3)=N_r(C_3)$. Since $B\in Q(C^*)$, we have
$$B=Q^*\begin{pmatrix}B_1&B_2\\0&B_3\end{pmatrix}P^*,$$
where $B_i\in Q(C_i^*)$ ($i=1,2,3$). Since $C^+$ is ray unique, by lemma 4.15, $C_1^+$, $C_3^+$, $B_1^+$ and $B_3^+$ are ray unique. For
$$\widetilde C=P\begin{pmatrix}\widetilde C_1&0\\\widetilde C_2&\widetilde C_3\end{pmatrix}Q\in Q(C)\quad\text{and}\quad\widetilde B=Q^*\begin{pmatrix}\widetilde B_1&\widetilde B_2\\0&\widetilde B_3\end{pmatrix}P^*\in Q(B),$$
by theorem 2.8, we get
$$\begin{pmatrix}0&\widetilde B\\\widetilde C&0\end{pmatrix}^D=\begin{pmatrix}0&\widetilde B(\widetilde C\widetilde B)^D\\(\widetilde C\widetilde B)^D\widetilde C&0\end{pmatrix}.$$
Clearly we have
$$\widetilde B(\widetilde C\widetilde B)^D=Q^*\begin{pmatrix}\widetilde B_1&\widetilde B_2\\0&\widetilde B_3\end{pmatrix}\begin{pmatrix}\widetilde C_1\widetilde B_1&\widetilde C_1\widetilde B_2\\\widetilde C_2\widetilde B_1&\widetilde C_2\widetilde B_2+\widetilde C_3\widetilde B_3\end{pmatrix}^DP^*,$$
$$(\widetilde C\widetilde B)^D\widetilde C=P\begin{pmatrix}\widetilde C_1\widetilde B_1&\widetilde C_1\widetilde B_2\\\widetilde C_2\widetilde B_1&\widetilde C_2\widetilde B_2+\widetilde C_3\widetilde B_3\end{pmatrix}^D\begin{pmatrix}\widetilde C_1&0\\\widetilde C_2&\widetilde C_3\end{pmatrix}Q.$$
Theorem 5.30 implies that $\widetilde B_1\widetilde C_1$ and $\widetilde C_3\widetilde B_3$ are nonsingular. Then we have
$$\big(I-\widetilde C_1\widetilde B_1(\widetilde C_1\widetilde B_1)^D\big)\widetilde C_1\widetilde B_2=\widetilde C_1\widetilde B_2-\widetilde C_1(\widetilde B_1\widetilde C_1)^{-1}\widetilde B_1\widetilde C_1\widetilde B_2=0,$$
$$\widetilde C_2\widetilde B_1\big(I-\widetilde C_1\widetilde B_1(\widetilde C_1\widetilde B_1)^D\big)=\widetilde C_2\widetilde B_1-\widetilde C_2\widetilde B_1\widetilde C_1(\widetilde B_1\widetilde C_1)^{-1}\widetilde B_1=0,$$
$$\widetilde C_2\widetilde B_2+\widetilde C_3\widetilde B_3-\widetilde C_2\widetilde B_1(\widetilde C_1\widetilde B_1)^D\widetilde C_1\widetilde B_2=\widetilde C_2\widetilde B_2+\widetilde C_3\widetilde B_3-\widetilde C_2\widetilde B_2=\widetilde C_3\widetilde B_3.$$
By theorem 2.3, we have
$$\begin{pmatrix}\widetilde C_1\widetilde B_1&\widetilde C_1\widetilde B_2\\\widetilde C_2\widetilde B_1&\widetilde C_2\widetilde B_2+\widetilde C_3\widetilde B_3\end{pmatrix}^D=\begin{pmatrix}N_1&N_2\\N_3&N_4\end{pmatrix},$$
where
$$N_1=(\widetilde C_1\widetilde B_1)^D+(\widetilde C_1\widetilde B_1)^D\widetilde C_1\widetilde B_2(\widetilde C_3\widetilde B_3)^{-1}\widetilde C_2\widetilde B_1(\widetilde C_1\widetilde B_1)^D,$$
$$N_2=-(\widetilde C_1\widetilde B_1)^D\widetilde C_1\widetilde B_2(\widetilde C_3\widetilde B_3)^{-1},$$
$$N_3=-(\widetilde C_3\widetilde B_3)^{-1}\widetilde C_2\widetilde B_1(\widetilde C_1\widetilde B_1)^D,$$
$$N_4=(\widetilde C_3\widetilde B_3)^{-1}.$$
Hence we have
$$\widetilde B(\widetilde C\widetilde B)^D=Q^*\begin{pmatrix}\widetilde B_1N_1+\widetilde B_2N_3&\widetilde B_1N_2+\widetilde B_2N_4\\\widetilde B_3N_3&\widetilde B_3N_4\end{pmatrix}P^*,$$
$$(\widetilde C\widetilde B)^D\widetilde C=P\begin{pmatrix}N_1\widetilde C_1+N_2\widetilde C_2&N_2\widetilde C_3\\N_3\widetilde C_1+N_4\widetilde C_2&N_4\widetilde C_3\end{pmatrix}Q.$$
Note that $\widetilde B_1(\widetilde C_1\widetilde B_1)^D=(\widetilde B_1\widetilde C_1)^{-1}\widetilde B_1$ and $(\widetilde C_1\widetilde B_1)^D\widetilde C_1=\widetilde C_1(\widetilde B_1\widetilde C_1)^{-1}$. By computation, we have
$$\widetilde B(\widetilde C\widetilde B)^D=Q^*\begin{pmatrix}(\widetilde B_1\widetilde C_1)^{-1}\widetilde B_1&0\\-\widetilde B_3(\widetilde C_3\widetilde B_3)^{-1}\widetilde C_2(\widetilde B_1\widetilde C_1)^{-1}\widetilde B_1&\widetilde B_3(\widetilde C_3\widetilde B_3)^{-1}\end{pmatrix}P^*,$$
$$(\widetilde C\widetilde B)^D\widetilde C=P\begin{pmatrix}\widetilde C_1(\widetilde B_1\widetilde C_1)^{-1}&-\widetilde C_1(\widetilde B_1\widetilde C_1)^{-1}\widetilde B_2(\widetilde C_3\widetilde B_3)^{-1}\widetilde C_3\\0&(\widetilde C_3\widetilde B_3)^{-1}\widetilde C_3\end{pmatrix}Q.$$
By theorem 5.30, we have
$$\operatorname{ray}\big[(\widetilde B_1\widetilde C_1)^{-1}\widetilde B_1\big]=\operatorname{ray}(C_1^+),\qquad\operatorname{ray}\big[\widetilde C_1(\widetilde B_1\widetilde C_1)^{-1}\big]=\operatorname{ray}(B_1^+),$$
$$\operatorname{ray}\big[(\widetilde C_3\widetilde B_3)^{-1}\widetilde C_3\big]=\operatorname{ray}(B_3^+),\qquad\operatorname{ray}\big[\widetilde B_3(\widetilde C_3\widetilde B_3)^{-1}\big]=\operatorname{ray}(C_3^+).$$
Theorem 5.31 implies that
$$\operatorname{ray}\big[\widetilde B_3(\widetilde C_3\widetilde B_3)^{-1}\widetilde C_2(\widetilde B_1\widetilde C_1)^{-1}\widetilde B_1\big]=\operatorname{ray}\big[C_3^+C_2C_1^+\big],$$
$$\operatorname{ray}\big[\widetilde C_1(\widetilde B_1\widetilde C_1)^{-1}\widetilde B_2(\widetilde C_3\widetilde B_3)^{-1}\widetilde C_3\big]=\operatorname{ray}\big[B_1^+B_2B_3^+\big].$$
Hence $M^D$ is ray unique.
■
Corollary 5.8 [145]. Let $M=\begin{pmatrix}0&C\\B&0\end{pmatrix}\in\mathbb C^{(m+n)\times(m+n)}$, where $C\in\mathbb C^{m\times n}$, $q(C)<n\leq m$ and $B\in Q(C^*)$. Then $M^D$ is ray unique if and only if $C^+$ is ray unique.

Proof. Clearly we have
$$M=\begin{pmatrix}0&I_m\\I_n&0\end{pmatrix}\begin{pmatrix}0&B\\C&0\end{pmatrix}\begin{pmatrix}0&I_n\\I_m&0\end{pmatrix}.$$
Then
$$M^D=\begin{pmatrix}0&I_m\\I_n&0\end{pmatrix}\begin{pmatrix}0&B\\C&0\end{pmatrix}^D\begin{pmatrix}0&I_n\\I_m&0\end{pmatrix}.$$
Hence $M^D$ is ray unique if and only if $\begin{pmatrix}0&B\\C&0\end{pmatrix}$ has a ray Drazin inverse. By theorem 5.35, $M^D$ is ray unique if and only if $C^+$ is ray unique. $\blacksquare$
Chapter 6
Sign Pattern for Tensors

With the development of multidimensional data analysis theory, tensors, as higher-order generalizations of matrices, are increasingly applied in many different fields, such as signal processing [183], computer vision [144] and analytical chemistry [126]. Compared with the large number of applications, the theoretical results on tensors are far from adequate. Scholars focus on topics in tensor theory such as eigenvalues of tensors [22, 53–55, 75, 161, 163], spectral theory of hypergraphs [59, 62], nonnegative tensors [51, 52], tensor equations [9, 77, 154, 155, 198, 199], and so on. In this chapter, we concentrate on the sign pattern of tensors.
6.1 Tensors

For positive integers $n_1,n_2,\dots,n_k$, let $f$ be a map from $[n_1]\times[n_2]\times\cdots\times[n_k]$ to a field $\mathbb F$, where $[n_i]=\{1,2,\dots,n_i\}$, $i=1,2,\dots,k$. For $i_1\cdots i_k\in[n_1]\times[n_2]\times\cdots\times[n_k]$, let $f(i_1\cdots i_k)=a_{i_1\cdots i_k}$, so that $f:i_1\cdots i_k\mapsto a_{i_1\cdots i_k}$. The multidimensional array
$$\mathcal A=(a_{i_1i_2\cdots i_k}),\qquad 1\leq i_j\leq n_j,\ j=1,\dots,k,$$
with $n_1n_2\cdots n_k$ entries is called a tensor of $k$-order $n_1\times n_2\times\cdots\times n_k$-dimension. Tensors can be regarded as higher-order matrices, and 2-order tensors are matrices. Figure 6.1 shows a 1-order 2-dimension tensor (a 2-dimension vector), a 2-order 2-dimension tensor (a $2\times2$ matrix) and a 3-order 2-dimension tensor.

Let $\mathbb C^{n_1\times n_2\times\cdots\times n_k}$ and $\mathbb R^{n_1\times n_2\times\cdots\times n_k}$ be the sets of $k$-order $n_1\times n_2\times\cdots\times n_k$-dimension tensors over the complex field $\mathbb C$ and the real field $\mathbb R$, respectively. When $n_1=n_2=\cdots=n_k$, $\mathcal A$ is called a $k$-order $n$-dimension tensor. Let $\mathbb C_k^{[m,n]}$ denote the set of all $m\times n\times\cdots\times n$ tensors of $k$-order over the complex field. A $k$-order $n$-dimension tensor $\mathcal A$ is called a diagonal tensor if its entries are zero except possibly for the diagonal entries $a_{jj\cdots j}$, $j=1,\dots,n$. A $k$-order $n$-dimension tensor $\mathcal I=(\delta_{i_1i_2\cdots i_k})$ is called a unit tensor, where $\delta_{i_1i_2\cdots i_k}=1$ if $i_1=i_2=\cdots=i_k$, and $\delta_{i_1i_2\cdots i_k}=0$ otherwise.
DOI: 10.1051/978-2-7598-2599-8.c006 © Science Press, EDP Sciences, 2021
Sign Pattern for Generalized Inverses
198
FIG. 6.1 – Vector, matrix, tensor.

Let $\mathcal{A} = (a_{i_1 \cdots i_k}) \in \mathbb{C}^{n_1 \times \cdots \times n_k}$ and $B_p \in \mathbb{C}^{m_p \times n_p}$ ($p = 1, \ldots, k$). The multilinear transform of $\mathcal{A}$ is denoted by
$$(B_1, \ldots, B_k) \cdot \mathcal{A} = \mathcal{A}' \in \mathbb{C}^{m_1 \times \cdots \times m_k},$$
where the entries of $\mathcal{A}'$ are $a'_{j_1 \cdots j_k} = \sum_{i_1, \ldots, i_k = 1}^{n_1, \ldots, n_k} (B_1)_{j_1 i_1} \cdots (B_k)_{j_k i_k}\, a_{i_1 \cdots i_k}$ [65, 133]. When $k = 2$, $\mathcal{A}$ is a matrix, and we have $(B_1, B_2) \cdot \mathcal{A} = B_1 \mathcal{A} B_2^{\top}$.
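The multilinear transform is direct to compute with `numpy.einsum`; a minimal sketch (array values are arbitrary, chosen only for illustration), which also checks the matrix case $(B_1, B_2) \cdot \mathcal{A} = B_1 \mathcal{A} B_2^{\top}$:

```python
import numpy as np

# multilinear transform (B1, B2, B3) . A of a 3-order tensor:
# a'_{j1 j2 j3} = sum_{i1,i2,i3} (B1)_{j1 i1} (B2)_{j2 i2} (B3)_{j3 i3} a_{i1 i2 i3}
A = np.arange(8.0).reshape(2, 2, 2)
B1 = np.array([[1.0, 2.0], [0.0, 1.0]])
B2 = np.array([[1.0, 0.0], [1.0, 1.0]])
B3 = np.array([[2.0, 0.0], [0.0, 3.0]])
Aprime = np.einsum('ai,bj,ck,ijk->abc', B1, B2, B3, A)

# for a 2-order tensor (a matrix) the transform is B1 M B2^T
M = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(np.einsum('ai,bj,ij->ab', B1, B2, M), B1 @ M @ B2.T)
```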
For a vector $x = (x_1, x_2, \ldots, x_n)^{\top}$ and a tensor $\mathcal{A} \in \mathbb{C}_k^{[m,n]}$, $\mathcal{A}x^{k-1}$ is an $m$-dimension vector whose $i$-th component is
$$\left( \mathcal{A}x^{k-1} \right)_i = \sum_{i_2, \ldots, i_k \in [n]} a_{i i_2 \cdots i_k}\, x_{i_2} x_{i_3} \cdots x_{i_k}, \qquad (6.1)$$
where $i \in [m]$ [162]. The above product of tensors and vectors can be expressed by the multilinear transform as $(I, x, \ldots, x) \cdot \mathcal{A}$. The reference [133] introduces more operations on tensors.

In 2005, L. Qi defined the eigenvalues of tensors using tensor equations [162]. In fact, an eigenvalue of a tensor $\mathcal{A} \in \mathbb{C}_k^{[n,n]}$ is a complex number $\lambda$ satisfying the tensor equation
$$(\mathcal{A} - \lambda \mathcal{I}) x^{k-1} = 0,$$
where $0 \ne x \in \mathbb{C}^n$ and $\mathcal{I}$ is the unit tensor.

For nonzero vectors $a_j \in \mathbb{C}^{n_j}$ ($j = 1, \ldots, k$), let $(a_j)_i$ be the $i$-th component of $a_j$. The Segre outer product of $a_1, \ldots, a_k$, denoted by $a_1 \otimes \cdots \otimes a_k$, is called a rank one tensor; its $i_1 \cdots i_k$-entry is $a_{i_1 \cdots i_k} = (a_1)_{i_1} \cdots (a_k)_{i_k}$ (see [65]). The rank of a tensor $\mathcal{A} \in \mathbb{C}^{n_1 \times \cdots \times n_k}$, denoted by $\mathrm{rank}(\mathcal{A})$, is the smallest $r$ such that $\mathcal{A}$ can be written as a sum of $r$ rank one tensors, that is,
$$\mathcal{A} = \sum_{j=1}^{r} a_1^j \otimes \cdots \otimes a_k^j, \qquad (6.2)$$
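Both the product (6.1) and the decomposition (6.2) are easy to sketch numerically; a minimal numpy illustration (all entries chosen arbitrarily):

```python
import numpy as np

# a 3-order 2-dimension tensor A = (a_ijl) with arbitrary entries
A = np.arange(8.0).reshape(2, 2, 2)

# (6.1) with k = 3: (A x^2)_i = sum_{j,l} a_ijl x_j x_l
x = np.array([1.0, 2.0])
Ax2 = np.einsum('ijl,j,l->i', A, x, x)

# the same product written as the multilinear transform (I, x, x) . A
assert np.allclose(Ax2, np.einsum('pi,j,l,ijl->p', np.eye(2), x, x, A))

# (6.2): a sum of two rank one (Segre outer product) tensors
B = (np.einsum('i,j,l->ijl', [1.0, 0.0], [1.0, 1.0], [0.0, 2.0])
     + np.einsum('i,j,l->ijl', [0.0, 1.0], [2.0, 1.0], [1.0, 0.0]))
assert B.shape == (2, 2, 2)   # written with r = 2 terms, so rank(B) <= 2
```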
FIG. 6.2 – A 3-order tensor $\mathcal{A}$.

where the vectors $0 \ne a_i^j \in \mathbb{C}^{n_i}$, $i = 1, \ldots, k$ and $j = 1, \ldots, r$ [65, 133]. For example, figure 6.2 shows a 3-order tensor. If $A \in \mathbb{C}^{n \times m}$ is a matrix, then the equivalence decomposition gives $A = P \Lambda Q$, where $P \in \mathbb{C}^{n \times n}$ and $Q \in \mathbb{C}^{m \times m}$ are invertible matrices and $\Lambda$ is a diagonal matrix whose first $r$ diagonal entries are 1 and whose remaining entries are zero. We have
$$A = \sum_{i=1}^{r} P_i \otimes Q_i,$$
where $P_i$ is the $i$-th column of $P$, $Q_i$ is the $i$-th row of $Q$ and $r$ is the rank of $A$.

Proposition 6.1 [65]. Let $\mathcal{A} \in \mathbb{C}^{n_1 \times \cdots \times n_k}$ and $L_i \in \mathbb{C}^{m_i \times n_i}$ ($i = 1, \ldots, k$). Then
$$\mathrm{rank}((L_1, \ldots, L_k) \cdot \mathcal{A}) \le \mathrm{rank}(\mathcal{A}).$$
If $L_1, \ldots, L_k$ are nonsingular, then equality holds.

Let $\mathcal{A}_{d_1 \cdots d_{s-1} d_{s+1} \cdots d_k}$ be the vector consisting of the $d_1 \cdots d_{s-1}\, i\, d_{s+1} \cdots d_k$-entries of $\mathcal{A} \in \mathbb{C}^{n_1 \times \cdots \times n_k}$, where $d_1, \ldots, d_{s-1}, d_{s+1}, \ldots, d_k$ are fixed and $i$ runs over $[n_s]$, that is,
$$\mathcal{A}_{d_1 \cdots d_{s-1} d_{s+1} \cdots d_k} = \left( a_{d_1 \cdots d_{s-1} 1 d_{s+1} \cdots d_k}, a_{d_1 \cdots d_{s-1} 2 d_{s+1} \cdots d_k}, \ldots, a_{d_1 \cdots d_{s-1} n_s d_{s+1} \cdots d_k} \right)^{\top} \in \mathbb{C}^{n_s}.$$
Taking all $d_1, \ldots, d_{s-1}, d_{s+1}, \ldots, d_k \in [n_1] \times \cdots \times [n_{s-1}] \times [n_{s+1}] \times \cdots \times [n_k]$, we obtain $n_1 n_2 \cdots n_{s-1} n_{s+1} \cdots n_k$ vectors, and the set of all these vectors is denoted by
$$V_s(\mathcal{A}) = \left\{ \mathcal{A}_{d_1 \cdots d_{s-1} d_{s+1} \cdots d_k} \mid d_j \in [n_j],\ j = 1, \ldots, s-1, s+1, \ldots, k \right\}.$$
The dimension
$$r_s(\mathcal{A}) = \dim(\mathrm{span}(V_s(\mathcal{A})))$$
is called the $s$-th order rank of $\mathcal{A}$. The tuple $\mathrm{rank}(\mathcal{A}) = (r_1(\mathcal{A}), \ldots, r_k(\mathcal{A}))$ is called the multilinear rank of $\mathcal{A}$ (see [65]); which of the two meanings of $\mathrm{rank}$ is intended will always be clear from the context. For two real tuples $a = (a_1, a_2, \ldots, a_n)$ and $b = (b_1, b_2, \ldots, b_n)$, in this chapter $a \le b$ (or $a = b$) means $a_i \le b_i$ (or $a_i = b_i$) for $i = 1, 2, \ldots, n$.
Proposition 6.2 [65]. Let $\mathcal{A} \in \mathbb{C}^{n_1 \times \cdots \times n_k}$ and $L_i \in \mathbb{C}^{m_i \times n_i}$ ($i = 1, \ldots, k$). For the multilinear rank we have
$$\mathrm{rank}((L_1, \ldots, L_k) \cdot \mathcal{A}) \le \mathrm{rank}(\mathcal{A})$$
(componentwise). If $L_1, \ldots, L_k$ are nonsingular, then equality holds.

Proposition 6.3 [65]. Let $\mathcal{A} \in \mathbb{C}^{n_1 \times \cdots \times n_k}$. We have
$$\mathrm{rank}(\mathcal{A}) \ge \max\{ r_i(\mathcal{A}) \mid i = 1, \ldots, k \}.$$
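By definition, $r_s(\mathcal{A})$ is the dimension of the span of the mode-$s$ fibers of $\mathcal{A}$, i.e. the matrix rank of the mode-$s$ unfolding, so the $s$-th order ranks are easy to compute. A small numpy sketch (the helper name `order_rank` is ours, not the book's):

```python
import numpy as np

def order_rank(A, s):
    """s-th order rank r_s(A): the dimension of the span of the
    mode-s fibers, i.e. the matrix rank of the mode-s unfolding."""
    M = np.moveaxis(A, s, 0).reshape(A.shape[s], -1)
    return np.linalg.matrix_rank(M)

# for a rank one tensor every r_s equals 1, consistent with
# proposition 6.3: rank(A) >= max_s r_s(A)
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
A = np.einsum('i,j,k->ijk', a, b, c)
assert [order_rank(A, s) for s in range(3)] == [1, 1, 1]
```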
6.2 Inverse of Tensors
A tensor equation represented by a tensor $\mathcal{A} = (a_{i_1 \ldots i_k}) \in \mathbb{C}_k^{[m,n]}$ is
$$\mathcal{A}x^{k-1} = b, \qquad (6.3)$$
for $x = (x_i) \in \mathbb{C}^n$ and $b = (b_i) \in \mathbb{C}^m$. For example, when $\widetilde{\mathcal{A}} = (a_{ijk})$ is a $2 \times 2 \times 2$ tensor and $b = (b_1, b_2)^{\top}$, the equation $\widetilde{\mathcal{A}}x^2 = b$ is equivalent to
$$\begin{cases} a_{111} x_1^2 + (a_{112} + a_{121}) x_1 x_2 + a_{122} x_2^2 = b_1, \\ a_{211} x_1^2 + (a_{212} + a_{221}) x_1 x_2 + a_{222} x_2^2 = b_2. \end{cases}$$
Each component of (6.1) is generally a homogeneous polynomial of high degree, so (6.3) is a non-linear equation. The set of solutions $x$ of the equation $\mathcal{A}x^{k-1} = 0$ is an important object studied in algebraic geometry [4]. Computing the solutions of (6.3) is NP-hard even if the tensor $\mathcal{A}$ is of low order and dimension [106].

The determinant of a $k$-order $n$-dimension tensor $\mathcal{A}$, denoted by $\det(\mathcal{A})$, is the resultant of the system of homogeneous equations $\mathcal{A}x^{k-1} = 0$, where $x \in \mathbb{R}^n$ [180]. In [162], Qi studied the determinants of symmetric tensors. In [180], Shao proved that $\det(\mathcal{A})$ is the unique polynomial in the entries of $\mathcal{A}$ with the following three properties:

(i) $\det(\mathcal{A}) = 0$ if and only if the system of homogeneous equations $\mathcal{A}x^{k-1} = 0$ has a nonzero solution;
(ii) $\det(\mathcal{I}) = 1$;
(iii) $\det(\mathcal{A})$ is an irreducible polynomial in the entries of $\mathcal{A}$ when the entries $a_{i_1 \cdots i_k}$ ($i_1, \ldots, i_k \in [n]$) are all viewed as independent variables.

If $\det(\mathcal{A}) \ne 0$, then $\mathcal{A}$ is called a nonsingular tensor. The problem of solving tensor equations has great significance both in theory and in engineering practice. In [109], the relation between the solvability of a multilinear equation and the singularity of its representing tensor was given:

(i) The tensor $\mathcal{A} \in \mathbb{C}_k^{[n,n]}$ is singular if and only if $\mathcal{A}x^{k-1} = 0$ has a nonzero solution.
(ii) If $\mathcal{A} \in \mathbb{C}_k^{[n,n]}$ is nonsingular, then $\mathcal{A}x^{k-1} = b$ has solutions for each $b \in \mathbb{C}^n$.

In [76], the authors showed that $\mathcal{A}x^{k-1} = b$ has a unique positive solution if $\mathcal{A}$ is a nonsingular M-tensor and $b$ is a positive vector.
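The $2 \times 2 \times 2$ expansion above can be confirmed directly; a small numpy check (the coefficients and the point $x$ are arbitrary, and indices are 0-based in the code):

```python
import numpy as np

# an arbitrary 2x2x2 tensor A = (a_ijk)
A = np.array([[[2.0, -1.0], [0.5, 3.0]],
              [[1.0, 4.0], [-2.0, 0.0]]])
x1, x2 = 1.5, -0.5
x = np.array([x1, x2])

# left-hand side of A x^2 computed from the definition (6.1)
lhs = np.einsum('ijk,j,k->i', A, x, x)

# the expanded quadratic system from the text
b1 = A[0,0,0]*x1**2 + (A[0,0,1] + A[0,1,0])*x1*x2 + A[0,1,1]*x2**2
b2 = A[1,0,0]*x1**2 + (A[1,0,1] + A[1,1,0])*x1*x2 + A[1,1,1]*x2**2
assert np.allclose(lhs, [b1, b2])
```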
In [180], Shao et al. defined the following general tensor product.

Definition 6.1. For tensors $\mathcal{A} \in \mathbb{C}_k^{[m,n]}$ and $\mathcal{B} \in \mathbb{C}_t^{[n,s]}$, the general tensor product of them is $\mathcal{A} \cdot \mathcal{B} \in \mathbb{C}_{(k-1)(t-1)+1}^{[m,s]}$, with entries
$$(\mathcal{A} \cdot \mathcal{B})_{i \alpha_1 \cdots \alpha_{k-1}} = \sum_{i_2, \ldots, i_k \in [n]} a_{i i_2 \cdots i_k}\, b_{i_2 \alpha_1} \cdots b_{i_k \alpha_{k-1}}, \qquad (6.4)$$
where $i \in [m]$ and $\alpha_1, \ldots, \alpha_{k-1} \in [s]^{t-1}$.

Obviously, when $\mathcal{A}$ and $\mathcal{B}$ are of 2-order, the above product is the usual matrix product. Equations (6.1) and (6.4) give $\mathcal{A}x^{k-1} = \mathcal{A} \cdot x$. In this chapter we sometimes use $\mathcal{A} \cdot x$ instead of $\mathcal{A}x^{k-1}$, and "$\cdot$" stands for the general tensor product. This product satisfies the following properties:

(1) $(\mathcal{A}_1 + \mathcal{A}_2) \cdot \mathcal{B} = \mathcal{A}_1 \cdot \mathcal{B} + \mathcal{A}_2 \cdot \mathcal{B}$, where $\mathcal{A}_1, \mathcal{A}_2 \in \mathbb{C}^{[n_1, n_2]}$ and $\mathcal{B} \in \mathbb{C}^{n_2 \times \cdots \times n_{k+1}}$.
(2) $\mathcal{A} \cdot (\mathcal{B}_1 + \mathcal{B}_2) = \mathcal{A} \cdot \mathcal{B}_1 + \mathcal{A} \cdot \mathcal{B}_2$, where $\mathcal{A} \in \mathbb{C}^{n_1 \times n_2}$ and $\mathcal{B}_1, \mathcal{B}_2 \in \mathbb{C}^{n_2 \times \cdots \times n_{k+1}}$.
(3) $\mathcal{A} \cdot I_{n_2} = \mathcal{A}$ and $I_{n_2} \cdot \mathcal{B} = \mathcal{B}$, where $\mathcal{A} \in \mathbb{C}^{[n_1, n_2]}$, $\mathcal{B} \in \mathbb{C}^{n_2 \times \cdots \times n_{k+1}}$, and $I_{n_2}$ is the $n_2 \times n_2$ identity matrix.
(4) $\mathcal{A} \cdot (\mathcal{B} \cdot \mathcal{C}) = (\mathcal{A} \cdot \mathcal{B}) \cdot \mathcal{C}$, where $\mathcal{A} \in \mathbb{C}^{[n_1, n_2]}$, $\mathcal{B} \in \mathbb{C}^{[n_2, n_3]}$ and $\mathcal{C} \in \mathbb{C}^{n_3 \times \cdots \times n_r}$.

Proposition 6.4 [180]. Let $\mathcal{A}$ be an $m$-order $n$-dimension tensor, and let $\mathcal{B}$ be a $k$-order $n$-dimension tensor. Then
$$\det(\mathcal{A} \cdot \mathcal{B}) = \det(\mathcal{A})^{(k-1)^{n-1}} \det(\mathcal{B})^{(m-1)^{n}}.$$
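Definition 6.1 can be prototyped by brute force for tiny tensors. The sketch below (the function name `gen_product` is ours; the loops are exponential, for illustration only) checks property (3), $\mathcal{A} \cdot I_n = \mathcal{A}$, and the identity $\mathcal{A} \cdot x = \mathcal{A}x^{k-1}$:

```python
import itertools
import numpy as np

def gen_product(A, B):
    """Brute-force general tensor product of definition 6.1:
    A is k-order with shape (m, n, ..., n), B is t-order with shape
    (n, s, ..., s); the result has order (k-1)(t-1)+1."""
    k, t = A.ndim, B.ndim
    n = A.shape[1]
    s = B.shape[1] if t > 1 else 1
    out_idx = (k - 1) * (t - 1)              # number of alpha indices
    out = np.zeros((A.shape[0],) + (s,) * out_idx)
    for i in range(A.shape[0]):
        for alphas in itertools.product(range(s), repeat=out_idx):
            # split the alphas into k-1 multi-indices of length t-1
            groups = [alphas[j*(t-1):(j+1)*(t-1)] for j in range(k - 1)]
            val = 0.0
            for rest in itertools.product(range(n), repeat=k - 1):
                term = A[(i,) + rest]
                for ij, g in zip(rest, groups):
                    term *= B[(ij,) + g]
                val += term
            out[(i,) + alphas] = val
    return out

A = np.arange(8.0).reshape(2, 2, 2)          # arbitrary 3-order tensor
x = np.array([1.0, 2.0])

assert np.allclose(gen_product(A, np.eye(2)), A)                       # A . I = A
assert np.allclose(gen_product(A, x), np.einsum('ijk,j,k->i', A, x, x))  # A . x = A x^2
```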
Based on the general tensor product, a kind of tensor inverse was given in [27].

Definition 6.2 [27]. Let $\mathcal{A}$ be a tensor of $m$-order $n$-dimension, and let $\mathcal{B}$ be a tensor of $k$-order $n$-dimension. If $\mathcal{A} \cdot \mathcal{B} = \mathcal{I}$, then $\mathcal{A}$ is called an $m$-order left inverse of $\mathcal{B}$, and $\mathcal{B}$ is called a $k$-order right inverse of $\mathcal{A}$.

Proposition 6.5. Let $\mathcal{A} \in \mathbb{C}_m^{[n_1, n_2]}$, and let $\mathcal{I}$ be the unit tensor of $n_2$-dimension. If $\mathcal{A} \cdot \mathcal{I} = 0$, then $\mathcal{A} = 0$.

Proof. By definition 6.1, we have
$$(\mathcal{A} \cdot \mathcal{I})_{i \alpha_1 \cdots \alpha_{m-1}} = \begin{cases} a_{i i_2 \cdots i_m}, & \alpha_j = (i_{j+1}, \ldots, i_{j+1}),\ j = 1, \ldots, m-1, \\ 0, & \text{otherwise}. \end{cases}$$
Hence $\mathcal{A} \cdot \mathcal{I} = 0$ implies that $\mathcal{A} = 0$. ■

Proposition 6.6. Let $\mathcal{A} \in \mathbb{C}^{n_1 \times \cdots \times n_k}$, and let $\mathcal{I}$ be a unit tensor of $m$-order $n_1$-dimension. If $\mathcal{I} \cdot \mathcal{A} = 0$, then $\mathcal{A} = 0$.

Proof. By contradiction, suppose that $\mathcal{A}$ has a nonzero entry $a_{i_1 \cdots i_k}$. By definition 6.1, $\mathcal{C} = \mathcal{I} \cdot \mathcal{A}$ has the nonzero entry $c_{i_1 \alpha \cdots \alpha} = a_{i_1 \cdots i_k}^{m-1}$, where $\alpha = i_2 \cdots i_k$, a contradiction. Hence $\mathcal{I} \cdot \mathcal{A} = 0$ implies that $\mathcal{A} = 0$. ■

Proposition 6.7 [162]. For a unit tensor $\mathcal{I}$, we have $\det(\mathcal{I}) = 1$.
Proposition 6.8 [162]. Let $\mathcal{A}$ be an $m$-order $n$-dimension tensor. Then $\det(\mathcal{A}) = 0$ if and only if $\mathcal{A} \cdot x = 0$ has a nonzero solution.

Proposition 6.9. Let $f_1(x_1, \ldots, x_n), \ldots, f_r(x_1, \ldots, x_n)$ be homogeneous polynomials of degree $m$ in the variables $x_1, \ldots, x_n$, and let $F(x_1, \ldots, x_n) = (f_1(x_1, \ldots, x_n), \ldots, f_r(x_1, \ldots, x_n))^{\top}$. If $r < n$, then $F(x_1, \ldots, x_n) = 0$ has a nonzero solution.

Proof. Let $\mathcal{A} = (a_{i_1 \cdots i_{m+1}})$ be an $(m+1)$-order $n$-dimension tensor such that
$$f_i(x_1, \ldots, x_n) = \sum_{i_2, \ldots, i_{m+1} \in [n]} a_{i i_2 \cdots i_{m+1}}\, x_{i_2} \cdots x_{i_{m+1}}, \qquad i \in [r],$$
and $a_{i_1 \cdots i_{m+1}} = 0$ for $i_1 \ge r + 1$. Then $F(x_1, \ldots, x_n) = 0$ and $\mathcal{A}x^m = 0$ have the same solutions, where $x = (x_1, \ldots, x_n)^{\top}$. By corollary 2.5 in [109], we obtain $\det(\mathcal{A}) = 0$. By proposition 6.8, $F(x_1, \ldots, x_n) = 0$ has a nonzero solution. ■

Let $\mathcal{A}\{R_k\}$ denote the set of all $k$-order right inverses of a tensor $\mathcal{A}$.

Theorem 6.1. Let $\mathcal{A}$ be an $m$-order $n$-dimension diagonal tensor. Then the following statements hold:

(1) $\mathcal{A}$ has a $k$-order left inverse if and only if $a_{i \cdots i} \ne 0$, $i = 1, \ldots, n$. Moreover, the $k$-order diagonal tensor $\mathcal{D}$ with diagonal entries $d_{i \cdots i} = a_{i \cdots i}^{-(k-1)}$ is the unique $k$-order left inverse of $\mathcal{A}$.
(2) $\mathcal{A}$ has a $k$-order right inverse if and only if $a_{i \cdots i} \ne 0$, $i = 1, \ldots, n$. In this case, we have
$$\mathcal{A}\{R_k\} = \left\{ \mathcal{B} \mid \mathcal{B} \text{ is a } k\text{-order diagonal tensor and } b_{i \cdots i}^{m-1} = a_{i \cdots i}^{-1},\ i = 1, \ldots, n \right\}.$$

Proof. If $\mathcal{A}$ has a $k$-order left inverse, then there exists a $k$-order $n$-dimension tensor $\mathcal{D}$ such that $\mathcal{D} \cdot \mathcal{A} = \mathcal{I}$. Since $\mathcal{A}$ is diagonal, by definition 6.1, $\mathcal{A}$ has a $k$-order left inverse if and only if $a_{i \cdots i} \ne 0$, $i = 1, \ldots, n$. From $\mathcal{D} \cdot \mathcal{A} = \mathcal{I}$ we know that $\mathcal{D}$ is diagonal and $d_{i \cdots i} = a_{i \cdots i}^{-(k-1)}$. Hence statement (1) holds. If $\mathcal{A}$ has a $k$-order right inverse, then there exists a $k$-order $n$-dimension tensor $\mathcal{B}$ such that $\mathcal{A} \cdot \mathcal{B} = \mathcal{I}$. Since $\mathcal{A}$ is diagonal, by definition 6.1, $\mathcal{A}$ has a $k$-order right inverse if and only if $a_{i \cdots i} \ne 0$, $i = 1, \ldots, n$. From $\mathcal{A} \cdot \mathcal{B} = \mathcal{I}$ we know that $\mathcal{B}$ is diagonal and $a_{i \cdots i} b_{i \cdots i}^{m-1} = 1$. Hence statement (2) holds. ■

Theorem 6.2. Let $\mathcal{A}$ be an $m$-order $n$-dimension tensor. Then $\mathcal{A}$ has a 2-order left inverse if and only if there exists a nonsingular matrix $P$ such that $\mathcal{A} = P \cdot \mathcal{I}$. Moreover, $P^{-1}$ is the unique 2-order left inverse of $\mathcal{A}$.

Proof. If $\mathcal{A} = P \cdot \mathcal{I}$ for a nonsingular matrix $P$, then $\mathcal{A}$ has the 2-order left inverse $P^{-1}$. If $X$ is a 2-order left inverse of $\mathcal{A}$, then $X \cdot \mathcal{A} = \mathcal{I}$. Propositions 6.4 and 6.7 imply that $X$ is nonsingular, so $\mathcal{A} = X^{-1} \cdot \mathcal{I}$. Suppose that $Y$ is also a 2-order left inverse of $\mathcal{A}$; then $\mathcal{A} = Y^{-1} \cdot \mathcal{I}$. Hence $X^{-1} \cdot \mathcal{I} = Y^{-1} \cdot \mathcal{I}$, i.e. $(X^{-1} - Y^{-1}) \cdot \mathcal{I} = 0$. By proposition 6.5, we have $X = Y$.
So X is the unique 2-order left inverse of A. ■
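Theorem 6.2 can be checked concretely: with a 2-order (matrix) left factor, the general product (6.4) reduces to a contraction on the first index. A minimal numpy sketch ($P$ is an arbitrary nonsingular matrix):

```python
import numpy as np

n = 3
# unit tensor I of 3-order n-dimension
I3 = np.zeros((n, n, n))
for i in range(n):
    I3[i, i, i] = 1.0

P = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 3.0]])      # arbitrary nonsingular matrix

# A = P . I : for a matrix left factor the general product is
# (P . B)_{i,k,l} = sum_j P[i,j] B[j,k,l]
A = np.einsum('ij,jkl->ikl', P, I3)

# P^{-1} is a 2-order left inverse of A: P^{-1} . A = I
left = np.einsum('ij,jkl->ikl', np.linalg.inv(P), A)
assert np.allclose(left, I3)
```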
Theorem 6.3. Let $\mathcal{A}$ be an $m$-order $n$-dimension tensor. For the polynomial equation $\mathcal{A} \cdot x = b$, if there exists an $n \times n$ matrix $P$ such that $(\mathcal{A} \cdot P)^{(L_2)}$ exists, then
$$x = P \left[ (\mathcal{A} \cdot P)^{(L_2)} \cdot b \right]^{[(m-1)^{-1}]}.$$

Proof. Since $(\mathcal{A} \cdot P)^{(L_2)} \cdot \mathcal{A} \cdot P = \mathcal{I}$, propositions 6.4 and 6.7 give $\det(\mathcal{A} \cdot P) \ne 0$ and $\det(P) \ne 0$. Let $y = P^{-1} x$; then $\mathcal{A} \cdot P \cdot y = b$ and $\mathcal{I} \cdot y = (\mathcal{A} \cdot P)^{(L_2)} \cdot b$. Since $\mathcal{I} \cdot y = y^{[m-1]}$, the entrywise $(m-1)$-th power of $y$, we have $y = [(\mathcal{A} \cdot P)^{(L_2)} \cdot b]^{[(m-1)^{-1}]}$. Hence $x = P y = P [(\mathcal{A} \cdot P)^{(L_2)} \cdot b]^{[(m-1)^{-1}]}$. ■
Theorem 6.4. Let $\mathcal{A}$ be an $m$-order $n$-dimension tensor. Then $\mathcal{A}$ has a 2-order right inverse if and only if there exists a nonsingular matrix $Q$ such that $\mathcal{A} = \mathcal{I} \cdot Q$. In this case, we have
$$\mathcal{A}\{R_2\} = \left\{ Q^{-1} Y \mid Y^{m-1} = I,\ Y \text{ is a diagonal matrix} \right\}.$$

Proof. If $\mathcal{A} = \mathcal{I} \cdot Q$ for some nonsingular matrix $Q$, then $\mathcal{A}$ has the 2-order right inverse $Q^{-1}$. If $X$ is a 2-order right inverse of $\mathcal{A}$, then $\mathcal{A} \cdot X = \mathcal{I}$. Propositions 6.4 and 6.7 imply that $X$ is a nonsingular matrix, so $\mathcal{A} = \mathcal{I} \cdot X^{-1}$. Hence, if $\mathcal{A}$ has a 2-order right inverse, then there exists a nonsingular matrix $Q$ such that $\mathcal{A} = \mathcal{I} \cdot Q$. Let $P$ be any 2-order right inverse of $\mathcal{A}$; then $\mathcal{A} \cdot P = \mathcal{I} \cdot Q \cdot P = \mathcal{I}$. Let $Y = Q P$; then $\mathcal{I} = \mathcal{I} \cdot Y$. By definition 6.1, $Y$ is diagonal and $Y^{m-1} = I$. So $P = Q^{-1} Y$. ■

Theorem 6.5. Let $\mathcal{A}$ and $\mathcal{B}$ be tensors such that $\mathcal{A} \cdot \mathcal{B} = 0$. Then the following hold:

(1) If $\mathcal{A}^{(L_2)}$ (or $\mathcal{B}^{(L_2)}$) exists, then $\mathcal{B} = 0$ (or $\mathcal{A} = 0$).
(2) If $\mathcal{A}\{R_2\} \ne \emptyset$ (or $\mathcal{B}\{R_2\} \ne \emptyset$), then $\mathcal{B} = 0$ (or $\mathcal{A} = 0$).

Proof. If $\mathcal{A}^{(L_2)}$ exists, then by theorem 6.2 there exists a nonsingular matrix $P$ such that $P \cdot \mathcal{I} \cdot \mathcal{B} = 0$, so $\mathcal{I} \cdot \mathcal{B} = 0$. By proposition 6.6, we get $\mathcal{B} = 0$. If $\mathcal{B}^{(L_2)}$ exists, then by theorem 6.2 there exists a nonsingular matrix $P$ such that $\mathcal{A} \cdot P \cdot \mathcal{I} = 0$. By proposition 6.5, we get $\mathcal{A} \cdot P = 0$, hence $\mathcal{A} = 0$. So part (1) holds.

If $\mathcal{A}\{R_2\} \ne \emptyset$, then by theorem 6.4 there exists a nonsingular matrix $Q$ such that $\mathcal{I} \cdot Q \cdot \mathcal{B} = 0$. By proposition 6.6, we obtain $Q \cdot \mathcal{B} = 0$, and hence $\mathcal{B} = 0$. If $\mathcal{B}\{R_2\} \ne \emptyset$, then by theorem 6.4 there exists a nonsingular matrix $Q$ such that $\mathcal{A} \cdot \mathcal{I} \cdot Q = 0$, so $\mathcal{A} \cdot \mathcal{I} = 0$. By proposition 6.5, we have $\mathcal{A} = 0$. Hence part (2) holds. ■

In [136], the existence of the left and right inverses of tensors was settled.

Theorem 6.6 [136]. Let $\mathcal{A}$ be an $m$-order $n$-dimension tensor. Then $\mathcal{A}$ has a 2-order left inverse (respectively, right inverse) if and only if $\mathcal{A}$ has a $k$-order left inverse (respectively, right inverse), where $k > 2$.
Theorem 6.7 [136]. Let $\mathcal{A}$ be an $m$-order $n$-dimension tensor. If $\mathcal{A}$ has a left inverse, then $\mathcal{I}_k \cdot P$ is the unique $k$-order left inverse of $\mathcal{A}$, where $P$ is a 2-order left inverse of $\mathcal{A}$ and $\mathcal{I}_k$ is the $k$-order unit tensor.

Theorem 6.8 [136]. Let $\mathcal{A}$ be an $m$-order $n$-dimension tensor. If $\mathcal{A}$ has a right inverse, then
$$\mathcal{A}\{R_k\} = \left\{ (QY) \cdot \mathcal{I}_k \mid Y^{m-1} = I,\ Y \text{ is a diagonal matrix} \right\},$$
where $Q$ is a 2-order right inverse of $\mathcal{A}$.

In [181], Shao et al. characterized the left and right inverses of triangular blocked tensors. In [190], generalized inverses of tensors via the general product of tensors were proposed, which extend the generalized inverses of matrices. The $\{1\}$-inverse of tensors was used to represent the solutions of (6.3), which generalizes theorem 1.26 in the case $z = 0$. Furthermore, representations for the $\{1\}$-inverse and the group inverse of some diagonal block, row block and column block tensors were established. In addition, some formulas and results on the $\{i\}$-inverse ($i = 1, 2, 5$) and the group inverse of tensors were obtained, and an example of computing the $\{1\}$-inverse and $\{2\}$-inverse of tensors was presented. Moreover, the Moore–Penrose inverse, the Drazin inverse and some properties of tensors via the Einstein product were studied [113–115, 146, 184, 189].
6.3 Minimum and Maximum Rank of Sign Pattern Tensors
For a real matrix $A$, let $\mathrm{mr}(A) = \min\{\mathrm{rank}(B) \mid B \in Q(A)\}$ and $\mathrm{Mr}(A) = \max\{\mathrm{rank}(B) \mid B \in Q(A)\}$ be the minimum rank and maximum rank of the sign pattern of $A$, respectively. It is well known that $\mathrm{Mr}(A) = \rho(A)$ [99]. The research on the minimum rank of sign pattern matrices has found important applications in communication complexity and neural networks [91, 134, 165]. Some results on the minimum rank of sign pattern matrices are given in [66, 131]. In this section, we give the definitions of sign pattern tensors, the minimum (maximum) rank of sign pattern tensors and the term rank of tensors.

Definition 6.3. A tensor whose entries are from the set $\{+, -, 0\}$ is called a sign pattern tensor. For a tensor $\mathcal{A} = (a_{i_1 \cdots i_k}) \in \mathbb{R}^{n_1 \times \cdots \times n_k}$, $\mathrm{sgn}(\mathcal{A}) = (\mathrm{sgn}(a_{i_1 \cdots i_k}))$ is called the sign pattern (tensor) of $\mathcal{A}$, and $Q(\mathcal{A}) = \{\widehat{\mathcal{A}} \mid \mathrm{sgn}(\widehat{\mathcal{A}}) = \mathrm{sgn}(\mathcal{A})\}$ is called the sign pattern class (or qualitative class) of $\mathcal{A}$.

Definition 6.4. For a real tensor $\mathcal{A}$, $\mathrm{mr}(\mathcal{A}) = \min\{\mathrm{rank}(\mathcal{B}) \mid \mathcal{B} \in Q(\mathcal{A})\}$ and $\mathrm{Mr}(\mathcal{A}) = \max\{\mathrm{rank}(\mathcal{B}) \mid \mathcal{B} \in Q(\mathcal{A})\}$ are called the minimum rank and maximum rank of the sign pattern of $\mathcal{A}$, respectively. The maximum number of nonzero entries of $\mathcal{A}$, no two of which share the same index in the same dimension, is called the term rank of $\mathcal{A}$, denoted by $\rho(\mathcal{A})$. For the unit tensor $\mathcal{I}$ of $n$-dimension, it is easy to see that $\rho(\mathcal{I}) = n$.
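The term rank of definition 6.4 can be computed by exhaustive search for tiny tensors; a sketch (the function name `term_rank` is ours, and the search is exponential, for illustration only):

```python
import numpy as np
from itertools import combinations

def term_rank(T):
    """Brute-force term rank: the size of the largest set of nonzero
    entries no two of which share an index in any dimension."""
    nz = list(zip(*np.nonzero(T)))
    for r in range(len(nz), 0, -1):
        for S in combinations(nz, r):
            if all(len({pos[d] for pos in S}) == r for d in range(T.ndim)):
                return r
    return 0

# the unit tensor of 3-order 3-dimension has term rank n = 3
n = 3
I3 = np.zeros((n, n, n))
for i in range(n):
    I3[i, i, i] = 1.0
assert term_rank(I3) == 3
assert term_rank(np.eye(2)) == 2   # matrix case agrees with the usual notion
```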
In this section, we present a sufficient condition for sign pattern tensors to have the same minimum rank or maximum rank, the relations of the minimum (maximum) rank between a sign pattern tensor and its subtensors, and a necessary and sufficient condition for the minimum rank of a sign pattern tensor to be 1.

For $\mathcal{A} = (a_{i_1 \cdots i_k}) \in \mathbb{R}^{n_1 \times \cdots \times n_k}$, let $\mathcal{A}^{\top(p,q)} = (a'_{i_1 \cdots i_{p-1} i_q i_{p+1} \cdots i_{q-1} i_p i_{q+1} \cdots i_k})$ ($p, q \in [k]$) be the $(p,q)$ transpose of $\mathcal{A}$, where $a'_{i_1 \cdots i_{p-1} i_q i_{p+1} \cdots i_{q-1} i_p i_{q+1} \cdots i_k} = a_{i_1 \cdots i_p \cdots i_q \cdots i_k}$ ($i_j \in [n_j]$, $j = 1, \ldots, k$) [180]. The tensor $\mathcal{A} = (a_{i_1 \cdots i_k}) \in \mathbb{R}^{n_1 \times \cdots \times n_k}$ can be written as
$$\mathcal{A} = \sum_{i_k = 1}^{n_k} \sum_{i_{k-1} = 1}^{n_{k-1}} \cdots \sum_{i_1 = 1}^{n_1} a_{i_1 \cdots i_k}\, e_{i_1}^{(n_1)} \otimes \cdots \otimes e_{i_k}^{(n_k)},$$
where $e_{i_j}^{(n_j)}$ is the $n_j$-dimension unit vector whose $i_j$-th component is 1, $i_j \in [n_j]$ ($j = 1, \ldots, k$). Let $L_i \in \mathbb{R}^{c_i \times n_i}$ ($i = 1, \ldots, k$). From [65], we have
$$\mathcal{B} = (L_1, \ldots, L_k) \cdot \mathcal{A} = \sum_{i_k = 1}^{n_k} \sum_{i_{k-1} = 1}^{n_{k-1}} \cdots \sum_{i_1 = 1}^{n_1} a_{i_1 \cdots i_k}\, L_1 e_{i_1}^{(n_1)} \otimes \cdots \otimes L_k e_{i_k}^{(n_k)}. \qquad (6.5)$$
Let $L_i$ ($i = 1, 2, \ldots, k$) be a permutation matrix or a diagonal matrix with diagonal entries 1 or $-1$. By the above formula, we have $(L_1, \ldots, L_k) \cdot \widetilde{\mathcal{A}} \in Q(\mathcal{B})$ for $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$. The following result is obtained.

Theorem 6.9. For $k$-order real tensors $\mathcal{A}$ and $\mathcal{B}$, $\mathrm{mr}(\mathcal{A}) = \mathrm{mr}(\mathcal{B})$ and $\mathrm{Mr}(\mathcal{A}) = \mathrm{Mr}(\mathcal{B})$ if one of the following conditions holds:

(1) $\mathrm{sgn}(\mathcal{B}) = \mathrm{sgn}(\mathcal{A}^{\top(p,q)})$.
(2) There exist permutation matrices $P_1, \ldots, P_k$ such that $\mathrm{sgn}(\mathcal{B}) = (P_1, \ldots, P_k) \cdot \mathrm{sgn}(\mathcal{A})$.
(3) There exist diagonal matrices $D_1, \ldots, D_k$ with diagonal elements 1 or $-1$ such that $\mathrm{sgn}(\mathcal{B}) = (D_1, \ldots, D_k) \cdot \mathrm{sgn}(\mathcal{A})$.

Proof. Taking the $(p,q)$ transpose of $\mathcal{A}$ exchanges the vectors $a_p^j$ and $a_q^j$ in (6.2). It is easy to see that $\mathrm{rank}(\mathcal{A}) = \mathrm{rank}(\mathcal{A}^{\top(p,q)})$, $\mathrm{mr}(\mathcal{A}) = \mathrm{mr}(\mathcal{A}^{\top(p,q)})$ and $\mathrm{Mr}(\mathcal{A}) = \mathrm{Mr}(\mathcal{A}^{\top(p,q)})$. It follows that (1) holds.

Next we prove (2) and (3). Suppose $\mathcal{B} = (P_1, \ldots, P_k) \cdot \mathcal{A}$, $\mathrm{mr}(\mathcal{A}) = r_1$ and $\mathrm{mr}(\mathcal{B}) = r_2$. There exists $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$ such that $\mathrm{rank}(\widetilde{\mathcal{A}}) = r_1$. According to (6.5) and proposition 6.1, $r_1 = \mathrm{rank}((P_1, \ldots, P_k) \cdot \widetilde{\mathcal{A}})$. Since $(P_1, \ldots, P_k) \cdot \widetilde{\mathcal{A}} \in Q(\mathcal{B})$, we have $\mathrm{mr}(\mathcal{B}) = r_2 \le r_1$. Similarly, we obtain $r_1 \le r_2$. Therefore $r_1 = r_2$, that is, $\mathrm{mr}(\mathcal{A}) = \mathrm{mr}(\mathcal{B})$. By the same method, it follows that $\mathrm{Mr}(\mathcal{A}) = \mathrm{Mr}(\mathcal{B})$. ■

For the tensor $\mathcal{A} = (a_{i_1 \cdots i_k}) \in \mathbb{R}^{n_1 \times \cdots \times n_k}$, let
$$(\mathcal{A})_i^{(j)} = (a_{i_1 \cdots i_{j-1}\, i\, i_{j+1} \cdots i_k}) \in \mathbb{R}^{n_1 \times \cdots \times n_{j-1} \times n_{j+1} \times \cdots \times n_k}$$
be a subtensor of $\mathcal{A}$ obtained by fixing the $j$-th index to be $i$, where $j \in [k]$, $i \in [n_j]$. It can be regarded as a slice of $\mathcal{A}$ on mode $j$. Unfolding $\mathcal{A}$ into the subtensors (slices) on mode $j$ yields $\mathcal{A} = ((\mathcal{A})_1^{(j)}, \ldots, (\mathcal{A})_{n_j}^{(j)})$ [133]. Obviously, $(\mathcal{A}^{\top(1,p)})_i^{(1)} = (\mathcal{A})_i^{(p)}$ ($i = 1, \ldots, n_p$; $p \in \{2, \ldots, k\}$).

Theorem 6.10. For $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_k}$, let $\mathcal{A} = ((\mathcal{A})_1^{(1)}, \ldots, (\mathcal{A})_{n_1}^{(1)})$ be the mode-1 unfolded expression and $\mathcal{A}_1 = ((\mathcal{A})_1^{(1)}, \ldots, (\mathcal{A})_{n_1 - 1}^{(1)})$.

(1) If $(\mathcal{A})_{n_1}^{(1)} = 0$, then $\mathrm{mr}(\mathcal{A}) = \mathrm{mr}(\mathcal{A}_1)$ and $\mathrm{Mr}(\mathcal{A}) = \mathrm{Mr}(\mathcal{A}_1)$.
(2) If $\mathrm{sgn}((\mathcal{A})_{n_1}^{(1)}) = c\, \mathrm{sgn}((\mathcal{A})_l^{(1)})$, where $l \in [n_1 - 1]$ and $c = 1$ or $-1$, then $\mathrm{mr}(\mathcal{A}) = \mathrm{mr}(\mathcal{A}_1)$.
b1 a1j
Let a1j ¼
r X j¼1
!
a1j akj :
; where b1j 2 Rn1 1 and a1j
n1
n1
is the n1 -th component of a1j ,
j ¼ 1; . . .; r. Since ðAÞðn11Þ ¼ 0, it yields that r j X b1 A¼ akj : 0 j¼1 P Therefore, A1 ¼ rj¼1 b1j akj , so rankðAÞ ¼ rankðA1 Þ, mrðAÞ ¼ mrðA1 Þ and MrðAÞ ¼ MrðA1 Þ. Thus, (1) holds. (2) Without loss of generality, we suppose that l ¼ 1. Let vector c j ¼ ðða1j Þ1 ; . . .; ða1j Þn1 1 ; cða1j Þ1 Þ> ðj ¼ 1; . . .; r; c ¼ 1 or 1Þ;
where a1j ðk 2 ½n1 1Þ is the k-th component of a1j . Let k
B¼
r X j¼1
ð1Þ ð1Þ ð1Þ c j a2j akj ¼ ðAÞ1 ; . . .; ðAÞn1 1 ; cðAÞ1 : ð1Þ
Since sgnððAÞðn11Þ Þ ¼ c sgnððAÞ1 Þ, we obtain B 2 QðAÞ, so mrðBÞ ¼ mrðAÞ. Take 2 3 1 0 0 6 0 1 07 P¼6 . 7 2 Rn1 n1 . Then we get .. . . 4 ... . .. 5 . c
0
1
$$P c^j = \left( (a_1^j)_1, \ldots, (a_1^j)_{n_1 - 1}, 0 \right)^{\top}$$
for $j \in [r]$. Let $\mathcal{C} = (P, I, \ldots, I) \cdot \mathcal{B}$. Then
$$\mathcal{C} = \left( (\mathcal{A})_1^{(1)}, \ldots, (\mathcal{A})_{n_1 - 1}^{(1)}, 0 \right).$$
By (1) of theorem 6.10, we have $\mathrm{rank}(\mathcal{C}) = \mathrm{rank}(\mathcal{A}_1)$ and $\mathrm{mr}(\mathcal{A}) = \mathrm{mr}(\mathcal{A}_1)$. It follows from proposition 6.1 that $\mathrm{rank}(\mathcal{B}) = \mathrm{rank}(\mathcal{C}) = \mathrm{rank}(\mathcal{A}_1)$. Hence $\mathrm{mr}(\mathcal{A}_1) = \mathrm{mr}(\mathcal{B}) = \mathrm{mr}(\mathcal{A})$. Thus, (2) holds. ■

By removing the rows (columns) whose sign patterns are zero, or the same as (or opposite to) those of other rows (columns), the concept of the sign condensed matrix was introduced; it is used to characterise the minimum rank of sign pattern matrices [99]. Similarly, we present the concept of the condensed tensor and use it to characterise the minimum rank of sign pattern tensors.

We condense a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_k}$ by removing the slice $(\mathcal{A})_j^{(i)}$ from the mode-$i$ unfolded expression $\mathcal{A} = ((\mathcal{A})_1^{(i)}, \ldots, (\mathcal{A})_{n_i}^{(i)})$ if $(\mathcal{A})_j^{(i)}$ is a zero tensor or has the same sign pattern as (or the opposite of) another slice in the mode-$i$ unfolded expression of $\mathcal{A}$. Thus we obtain a tensor $C_i(\mathcal{A})$. By removing such slices from all the unfolded expressions of $\mathcal{A}$ ($i = 1, \ldots, k$), we obtain a tensor $C(\mathcal{A})$. By theorem 6.10, the minimum rank of a sign pattern tensor is unchanged by condensing. Hence the minimum ranks of the sign patterns of $C(\mathcal{A})$ and $\mathcal{A}$ are the same, that is, $\mathrm{mr}(\mathcal{A}) = \mathrm{mr}(C(\mathcal{A}))$.

There are results on necessary and sufficient conditions for the minimum rank of a sign pattern matrix to be 1 or 2 [91, 99]. Next, we give some results on sign pattern tensors with minimum rank 1.

Theorem 6.11. Let $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_k}$. Then $\mathrm{mr}(\mathcal{A}) = 1$ if and only if $\mathrm{sgn}(C(\mathcal{A})) = +$ or $-$.

Proof. According to the above discussion, the sufficiency is obvious. Next we prove the necessity. Since $\mathrm{mr}(\mathcal{A}) = 1$, there exists a tensor $\mathcal{A}_1 \in Q(\mathcal{A})$ such that $\mathrm{rank}(\mathcal{A}_1) = 1$. Thus $\mathcal{A}_1$ can be written in the rank one form $\mathcal{A}_1 = a_1 \otimes \cdots \otimes a_k$, where $a_i \ne 0$ and $a_i \in \mathbb{R}^{n_i}$ ($i = 1, \ldots, k$). Condensing $\mathcal{A}_1$, we get
$$C_1(\mathcal{A}_1) = C_1(a_1 \otimes a_2 \otimes \cdots \otimes a_k) = \overline{a}_1 \otimes a_2 \otimes \cdots \otimes a_k,$$
where $\overline{a}_1$ is a nonzero component of $a_1$, and
$$C_2(\overline{a}_1 \otimes a_2 \otimes \cdots \otimes a_k) = \overline{a}_1 \otimes \overline{a}_2 \otimes a_3 \otimes \cdots \otimes a_k,$$
where $\overline{a}_2$ is a nonzero component of $a_2$. Further,
$$C(\mathcal{A}_1) = \overline{a}_1 \otimes \overline{a}_2 \otimes \cdots \otimes \overline{a}_k \ne 0,$$
where $\overline{a}_i$ is a nonzero component of $a_i$ ($i = 3, \ldots, k$). Obviously, $\mathrm{sgn}(C(\mathcal{A}_1)) = +$ or $-$. Note that $\mathcal{A}_1 \in Q(\mathcal{A})$; then $\mathrm{sgn}(C(\mathcal{A}_1)) = \mathrm{sgn}(C(\mathcal{A})) = +$ or $-$. Hence the necessity holds. ■
For $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_k}$, let $\gamma_i$ be a nonempty subset of $[n_i]$ and let $\mathcal{A}[\gamma_1, \ldots, \gamma_k]$ be the subtensor of $\mathcal{A}$ obtained by removing the entries whose $i$-th indices are not in $\gamma_i$, $i = 1, \ldots, k$; then $\mathcal{A}[\gamma_1, \ldots, \gamma_k] = (b_{i_1 \cdots i_k}) \in \mathbb{R}^{m_1 \times \cdots \times m_k}$, where $m_i$ denotes the number of elements of $\gamma_i$, $i = 1, \ldots, k$. The following theorem relates the minimum (maximum) rank of a sign pattern tensor to that of its subtensors.

Theorem 6.12. For $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_k}$, let $\gamma_j$ be a subset of $[n_j]$ ($j = 1, \ldots, k$) and let $\mathcal{B} = \mathcal{A}[\gamma_1, \ldots, \gamma_k]$ be a subtensor of $\mathcal{A}$. Then $\mathrm{mr}(\mathcal{A}) \ge \mathrm{mr}(\mathcal{B})$ and $\mathrm{Mr}(\mathcal{A}) \ge \mathrm{Mr}(\mathcal{B})$.

Proof. Without loss of generality, suppose that $\mathcal{A} \ne 0$. Let $\mathrm{mr}(\mathcal{A}) = r_1$ and $\mathrm{mr}(\mathcal{B}) = r_2$. Then there exist a tensor $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$ and nonzero vectors $a_i^j \in \mathbb{R}^{n_i}$ ($i = 1, \ldots, k$; $j = 1, \ldots, r_1$) such that
$$\widetilde{\mathcal{A}} = \sum_{j=1}^{r_1} a_1^j \otimes \cdots \otimes a_k^j.$$
Let $\widetilde{\mathcal{B}} = \sum_{j=1}^{r_1} a_1^j[\gamma_1] \otimes \cdots \otimes a_k^j[\gamma_k]$. Obviously, $\widetilde{\mathcal{B}} \in Q(\mathcal{B})$, so $r_1 \ge r_2$.

Let $\mathrm{Mr}(\mathcal{B}) = c_1$. Then there exists a tensor $\widetilde{\mathcal{B}} \in Q(\mathcal{B})$ such that $\mathrm{rank}(\widetilde{\mathcal{B}}) = c_1$. Take a tensor $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$ satisfying $\widetilde{\mathcal{A}}[\gamma_1, \ldots, \gamma_k] = \widetilde{\mathcal{B}}$. Suppose that $\mathrm{rank}(\widetilde{\mathcal{A}}) = c_2$; then there exist nonzero vectors $a_i^j$ ($i = 1, \ldots, k$; $j = 1, \ldots, c_2$) such that
$$\widetilde{\mathcal{A}} = \sum_{j=1}^{c_2} a_1^j \otimes \cdots \otimes a_k^j.$$
It is easy to see that
$$\widetilde{\mathcal{B}} = \sum_{j=1}^{c_2} a_1^j[\gamma_1] \otimes \cdots \otimes a_k^j[\gamma_k],$$
so $c_2 \ge c_1$. Therefore $\mathrm{Mr}(\mathcal{A}) \ge c_2 \ge c_1 = \mathrm{Mr}(\mathcal{B})$. Thus, the theorem holds. ■

6.4 Sign Nonsingular Tensors
In this section, we present the definition of sign nonsingular tensors.

Definition 6.5. Let $\mathcal{A}$ be a $k$-order $n$-dimension tensor. If each tensor in $Q(\mathcal{A})$ is a nonsingular tensor, then $\mathcal{A}$ is called a sign nonsingular (SNS) tensor.

Let $\mathcal{A}$ be a $k$-order $n$-dimension tensor. If the sign patterns of the solutions of $\widetilde{\mathcal{A}}x^{k-1} = 0$ are the same for all $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$, then 0 is the unique solution of $\widetilde{\mathcal{A}}x^{k-1} = 0$. Therefore $\det(\widetilde{\mathcal{A}}) \ne 0$, and $\mathcal{A}$ is a sign nonsingular tensor. Thus, $\mathcal{A}x^{k-1} = 0$ is sign solvable if and only if $\mathcal{A}$ is a sign nonsingular tensor.
Definition 6.6. Let $\mathcal{A}$ be a $k$-order $n$-dimension tensor. If each tensor in $Q(\mathcal{A})$ has an $m$-order left (right) inverse, then $\mathcal{A}$ is said to have an $m$-order sign left (right) inverse.

In this section we show, among other results, that the minimum rank of the sign pattern of a sign nonsingular tensor is not less than its dimension; that the maximum rank of a sign pattern tensor is not less than its term rank; that a tensor having a sign left (right) inverse is a sign nonsingular tensor; and a necessary and sufficient condition for a real tensor to have a 2-order sign left or right inverse.

Let $\mathcal{A}$ be a $k$-order tensor as in (6.2). The vectors in $V_i(\mathcal{A})$ ($i \in [k]$) are linear combinations of $a_i^1, \ldots, a_i^r$, so
$$r_i(\mathcal{A}) = \mathrm{rank}(M_i) \le \min\{n_i, r\},$$
where $M_i = (a_i^1\ \cdots\ a_i^r) \in \mathbb{R}^{n_i \times r}$. If $r_i(\mathcal{A}) < n_i$, then there exists an invertible matrix $P_i$ such that $P_i M_i$ has at least one zero row ($i \in [k]$); we may take the zero row to be the $n_i$-th. Let $\mathcal{B} = (I, \ldots, I, P_i, I, \ldots, I) \cdot \mathcal{A}$. Unfolding $\mathcal{B}$ into slices on mode $i$ yields
$$\mathcal{B} = (I, \ldots, I, P_i, I, \ldots, I) \cdot \mathcal{A} = \sum_{j=1}^{r} a_1^j \otimes \cdots \otimes a_{i-1}^j \otimes P_i a_i^j \otimes a_{i+1}^j \otimes \cdots \otimes a_k^j = \left( (\mathcal{B})_1^{(i)}, \ldots, (\mathcal{B})_{n_i - 1}^{(i)}, 0 \right). \qquad (6.6)$$
Let $\mathcal{T} = (Q_1, \ldots, Q_{i-1}, P_i, Q_{i+1}, \ldots, Q_k) \cdot \mathcal{A}$, where $Q_j$ is an invertible matrix ($j \in [k]$, $j \ne i$). Unfolding $\mathcal{T}$ into slices on mode $i$ gives
$$\mathcal{T} = (Q_1, \ldots, Q_{i-1}, P_i, Q_{i+1}, \ldots, Q_k) \cdot \mathcal{A} = \left( (\mathcal{T})_1^{(i)}, \ldots, (\mathcal{T})_{n_i - 1}^{(i)}, 0 \right). \qquad (6.7)$$

If $A \in \mathbb{R}^{n \times n}$ is a sign nonsingular matrix, then $\mathrm{mr}(A) = \mathrm{Mr}(A) = n$. For tensors, the following result is obtained.

Theorem 6.13. For a $k$-order tensor $\mathcal{A} \in \mathbb{R}^{n \times \cdots \times n}$, if $\mathcal{A}$ is a sign nonsingular tensor, then $\mathrm{rank}(\widetilde{\mathcal{A}}) = (n, \ldots, n)$ (the multilinear rank) for $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$, and $\mathrm{mr}(\mathcal{A}) \ge n$.
Proof. We prove the theorem by contradiction. Suppose there exists a tensor $\mathcal{A}_1 \in Q(\mathcal{A})$ such that $\mathrm{rank}(\mathcal{A}_1) \ne (n, \ldots, n)$. Then there exists an integer $i \in [k]$ such that $r_i(\mathcal{A}_1) < n$.

If $i = 1$, then $r_1(\mathcal{A}_1) < n$. From (6.6), there exists an invertible matrix $P$ such that
$$\mathcal{B} = (P, I, \ldots, I) \cdot \mathcal{A}_1 = P \cdot \mathcal{A}_1 = \left( (\mathcal{B})_1^{(1)}, \ldots, (\mathcal{B})_{n-1}^{(1)}, 0 \right).$$
It follows from proposition 6.9 that $\mathcal{B}x^{k-1} = 0$ has a nonzero solution $x \in \mathbb{C}^n$, so $\det(\mathcal{B}) = 0$. By proposition 6.4 we obtain $\det(\mathcal{B}) = (\det(P))^{(k-1)^{n-1}} \det(\mathcal{A}_1)$, so $\det(\mathcal{A}_1) = 0$. This contradicts the fact that $\mathcal{A}$ is a sign nonsingular tensor. Therefore $r_1(\widetilde{\mathcal{A}}) = n$ for $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$.

If $i \ne 1$, without loss of generality suppose that $i = 2$. From the above discussion and (6.7), there exists an invertible matrix $P$ such that
$$\mathcal{B} = (I, P^{\top}, \ldots, P^{\top}) \cdot \mathcal{A}_1 = \mathcal{A}_1 \cdot P = \left( (\mathcal{B})_1^{(2)}, \ldots, (\mathcal{B})_{n-1}^{(2)}, 0 \right). \qquad (6.8)$$
It follows from (6.8) that the coefficient of the term $x_n^{k-1}$ in $\mathcal{B}x^{k-1}$ is zero, where $x = (x_1, x_2, \ldots, x_n)^{\top} \in \mathbb{C}^n$. Hence $\mathcal{B}x^{k-1} = 0$ can be written as
$$\begin{cases} f_1(x_1, \ldots, x_{n-1}) + x_n g_1(x) = 0, \\ \quad \vdots \\ f_n(x_1, \ldots, x_{n-1}) + x_n g_n(x) = 0, \end{cases}$$
where $f_i(x_1, \ldots, x_{n-1})$ ($i = 1, \ldots, n$) is a homogeneous polynomial of degree $k - 1$, $g_i(x)$ ($i = 1, \ldots, n$) is a homogeneous polynomial of degree $k - 2$, and the degree of $x_n$ in $g_i(x)$ ($i = 1, \ldots, n$) is not greater than $k - 3$. So there exists a nonzero solution $x = (0, \ldots, 0, x_n)^{\top}$ of $\mathcal{B}x^{k-1} = 0$ with $x_n \ne 0$. Hence $\det(\mathcal{B}) = 0$. By proposition 6.4 we obtain $\det(\mathcal{B}) = \det(\mathcal{A}_1)(\det(P))^{(k-1)^{n}}$, so $\det(\mathcal{A}_1) = 0$. This contradicts the assumption that $\mathcal{A}$ is a sign nonsingular tensor. So we have $r_i(\mathcal{A}_1) = n$ ($i = 1, \ldots, k$). Therefore $\mathrm{rank}(\widetilde{\mathcal{A}}) = (n, \ldots, n)$ for $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$. By proposition 6.3, we have $\mathrm{mr}(\mathcal{A}) \ge n$. ■

Remark 6.1. There exist sign nonsingular tensors $\mathcal{A} \in \mathbb{R}^{n \times \cdots \times n}$ with $\mathrm{mr}(\mathcal{A}) > n$. Consider the 3-order 2-dimension tensor $\mathcal{A} = (a_{i_1 i_2 i_3})$ with $a_{111} = 2$, $a_{122} = a_{212} = 3$ and all other entries zero. The equation $\mathcal{A}x^2 = 0$ reads
$$\begin{cases} 2x_1^2 + 3x_2^2 = 0, \\ 3x_1 x_2 = 0. \end{cases}$$
It is easy to see that, for every $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$, the equation $\widetilde{\mathcal{A}}x^2 = 0$ has the unique solution $x = (0, 0)^{\top}$. So $\det(\widetilde{\mathcal{A}}) \ne 0$ for $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$, and $\mathcal{A}$ is a sign nonsingular tensor. According to [8], we obtain $\mathrm{rank}(\widetilde{\mathcal{A}}) = 3 > 2$ for $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$.

The maximum rank of a sign pattern matrix is equal to its term rank [99]. The following result shows that the maximum rank of a sign pattern tensor is not less than its term rank.

Theorem 6.14. For $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_k}$, $\mathrm{Mr}(\mathcal{A}) \ge \rho(\mathcal{A})$.

Proof. Without loss of generality, suppose that $\mathcal{A} \ne 0$. Let $\rho(\mathcal{A}) = c$ and $\gamma = [c]$. Then there exist permutation matrices $P_1, \ldots, P_k$ such that $\mathcal{B} = (P_1, \ldots, P_k) \cdot \mathcal{A}$ has a subtensor $\mathcal{B}[\gamma, \ldots, \gamma] = (b_{i_1 \cdots i_k})$, $i_1, \ldots, i_k \in \gamma$, with nonzero diagonal entries. Take a tensor $\mathcal{B}_1 = (\widetilde{b}_{i_1 \cdots i_k}) \in Q(\mathcal{B}[\gamma, \ldots, \gamma])$ such that
$$\widetilde{b}_{i_1 \cdots i_k} = \begin{cases} \varepsilon\, b_{i_1 \cdots i_k}, & \text{if } i_1 = \cdots = i_k\ (\varepsilon > 0), \\ b_{i_1 \cdots i_k}, & \text{otherwise}. \end{cases}$$
By proposition 4 of [162], we deduce that $\prod_{i=1}^{c} \widetilde{b}_{i \cdots i}^{(k-1)^{c-1}} = \varepsilon^{c(k-1)^{c-1}} \prod_{i=1}^{c} b_{i \cdots i}^{(k-1)^{c-1}}$ is a term in $\det(\mathcal{B}_1)$, and the degree with respect to $\varepsilon$ of every other term is not greater than $c(k-1)^{c-1} - 2$. Hence, when $\varepsilon$ is large enough, we have
$$\mathrm{sgn}(\det(\mathcal{B}_1)) = \mathrm{sgn}\!\left( \varepsilon^{c(k-1)^{c-1}} \prod_{i=1}^{c} b_{i \cdots i}^{(k-1)^{c-1}} \right) \ne 0.$$
By theorem 6.13, $\mathrm{rank}(\mathcal{B}_1) \ge c$, so $\mathrm{Mr}(\mathcal{B}[\gamma, \ldots, \gamma]) \ge c$. By theorems 6.9 and 6.12, we have $\mathrm{Mr}(\mathcal{A}) = \mathrm{Mr}(\mathcal{B})$ and $\mathrm{Mr}(\mathcal{B}) \ge \mathrm{Mr}(\mathcal{B}[\gamma, \ldots, \gamma])$, so $\mathrm{Mr}(\mathcal{A}) \ge c$. Therefore, the theorem holds. ■

Next is an example with $\mathrm{Mr}(\mathcal{A}) > \rho(\mathcal{A})$.

Example 6.1. Consider the 3-order 2-dimension tensor $\mathcal{A} = (a_{i_1 i_2 i_3})$ with $a_{111} = 2$, $a_{112} = a_{121} = 0$, $a_{122} = a_{211} = a_{221} = a_{222} = 1$ and $a_{212} = -1$. It is easy to see that $\rho(\mathcal{A}) = 2$. According to [8], we obtain $\mathrm{rank}(\widetilde{\mathcal{A}}) = 3$ for $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$. Therefore $\mathrm{Mr}(\mathcal{A}) > \rho(\mathcal{A})$.

Sign nonsingular matrices are investigated in [13], and the left and right inverses of tensors are defined in [27]. In the following, we present some results on sign left and sign right inverses.

Theorem 6.15. Let $\mathcal{A}$ be a $k$-order $n$-dimension tensor. If $\mathcal{A}$ has an $m$-order sign left or sign right inverse, then $\mathcal{A}$ is a sign nonsingular tensor.

Proof. Suppose that $\mathcal{A}$ has an $m$-order sign left inverse. Then for each $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$ there exists an $m$-order $n$-dimension tensor $\mathcal{P}$ such that $\mathcal{P} \cdot \widetilde{\mathcal{A}} = \mathcal{I}$. By proposition 6.4, $(\det(\mathcal{P}))^{(k-1)^{n-1}} (\det(\widetilde{\mathcal{A}}))^{(m-1)^{n}} = \det(\mathcal{I}) = 1$, so $\det(\widetilde{\mathcal{A}}) \ne 0$. Therefore $\mathcal{A}$ is a sign nonsingular tensor. Similarly, a tensor having an $m$-order sign right inverse is a sign nonsingular tensor. ■

Let $\mathcal{A} = (a_{i_1 \cdots i_k}) \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_2}$ be a $k$-order tensor and let $M(\mathcal{A}) = (m_{ij}) \in \mathbb{R}^{n_1 \times n_2}$ denote the majorization matrix of $\mathcal{A}$, where $m_{ij} = a_{ij \cdots j}$ ($i \in [n_1]$, $j \in [n_2]$) [160].

Necessary and sufficient conditions for a matrix to be sign nonsingular are given in [13]. Next we present a necessary and sufficient condition for a tensor to have a 2-order sign left or sign right inverse.

Theorem 6.16. Let $\mathcal{A} = (a_{i_1 \cdots i_k}) \in \mathbb{R}^{n \times \cdots \times n}$ ($k \ge 3$). Then $\mathcal{A}$ has a 2-order sign left inverse if and only if both of the following hold:

(1) The entries of $\mathcal{A}$ are all zero except possibly for the $a_{ij \cdots j}$ ($i, j \in [n]$).
(2) The majorization matrix $M(\mathcal{A})$ is an SNS matrix.

Proof. We prove the necessity first. Suppose that $\mathcal{A}$ has a 2-order sign left inverse. It follows from theorem 6.2 that for each $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$ there exists an invertible matrix $\widetilde{P}$ such that $\widetilde{\mathcal{A}} = \widetilde{P} \cdot \mathcal{I}$. By the general tensor product, (1) holds and $\widetilde{P} = M(\widetilde{\mathcal{A}})$. Hence each matrix in $Q(M(\mathcal{A}))$ is nonsingular; thus $M(\mathcal{A})$ is an SNS matrix.

Next we prove the sufficiency. Since the entries of $\mathcal{A}$ satisfy
$$a_{i j_2 \cdots j_k} = \begin{cases} m_{ij}, & \text{if } j_2 = \cdots = j_k = j\ (i, j_2, \ldots, j_k \in [n]), \\ 0, & \text{otherwise}, \end{cases}$$
each $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$ satisfies $\widetilde{\mathcal{A}} = M(\widetilde{\mathcal{A}}) * \mathcal{I}$. Note that $M(\widetilde{\mathcal{A}}) \in Q(M(\mathcal{A}))$ and $M(\mathcal{A})$ is an SNS matrix; thus, the 2-order left inverse of $\widetilde{\mathcal{A}}$ exists. Hence, $\mathcal{A}$ has a 2-order sign left inverse. ■

A $k$-order tensor $\mathcal{A} = (a_{i_1 \cdots i_k}) \in \mathbb{R}^{n \times \cdots \times n}$ is called sign symmetric if $\operatorname{sgn}(a_{i_1 \cdots i_k}) = \operatorname{sgn}(a_{\sigma(i_1, \ldots, i_k)})$, where $\sigma(i_1, \ldots, i_k)$ is an arbitrary permutation of the indices $i_1, \ldots, i_k$.

Theorem 6.17. Let $\mathcal{A} = (a_{i_1 \cdots i_k}) \in \mathbb{R}^{n \times \cdots \times n}$ ($k \geq 3$). Then, $\mathcal{A}$ has a 2-order sign right inverse if and only if there exist a permutation matrix $P$ and a diagonal matrix $D$ with diagonal elements being $1$ or $-1$ such that $\operatorname{sgn}(\mathcal{A}) = DP * \mathcal{I}$. Especially, when $k$ is odd, $D = I$.

Proof. We prove the sufficiency first. Suppose that there exist a permutation matrix $P$ and a diagonal matrix $D$ with diagonal elements being $1$ or $-1$ such that $\operatorname{sgn}(\mathcal{A}) = DP * \mathcal{I}$ (when $k$ is odd, $D = I$). For $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$, there exists a matrix $\widetilde{D} \in Q(D)$ such that $\widetilde{\mathcal{A}} = \widetilde{D}P * \mathcal{I}$. For such a $\widetilde{D}$, there exists $D_1 \in Q(D)$ such that $D_1^{k-1} = \widetilde{D}$. Let $P_2 = P^{\top} D_1^{-1}$. Then
$$\widetilde{\mathcal{A}} * P_2 = \widetilde{D}P * \mathcal{I} * P^{\top} D_1^{-1} = \mathcal{I},$$
so each tensor in $Q(\mathcal{A})$ has a 2-order right inverse.

Next we prove the necessity. Suppose that $\mathcal{A}$ has a 2-order sign right inverse. According to theorem 6.4, for $\widetilde{\mathcal{A}} \in Q(\mathcal{A})$ there exists an invertible matrix $W \in \mathbb{R}^{n \times n}$ such that
$$\widetilde{\mathcal{A}} = \mathcal{I} * W = \sum_{i=1}^{n} e_i \circ \underbrace{w_i \circ \cdots \circ w_i}_{k-1},$$
where $e_i$ is the $i$-th column of the identity matrix, and $w_i \neq 0$ denotes the $i$-th row of $W$ ($i = 1, \ldots, n$). Unfolding $\widetilde{\mathcal{A}}$ into slices along the first index yields $\widetilde{\mathcal{A}} = (\widetilde{\mathcal{A}}_1^{(1)}, \ldots, \widetilde{\mathcal{A}}_n^{(1)})$, where each slice $\widetilde{\mathcal{A}}_i^{(1)}$ is a $(k-1)$-order tensor which can be written in the rank-one form
$$\widetilde{\mathcal{A}}_i^{(1)} = w_i \circ \cdots \circ w_i,$$
for $i \in [n]$. It is easy to see that $\widetilde{\mathcal{A}}_i^{(1)}$ is sign symmetric and $\operatorname{mr}(\widetilde{\mathcal{A}}_i^{(1)}) = \operatorname{Mr}(\widetilde{\mathcal{A}}_i^{(1)}) = 1$ ($i = 1, \ldots, n$). Let $p_i$ be the number of the nonzero elements of $w_i$. Obviously, $q(\widetilde{\mathcal{A}}_i^{(1)}) \geq p_i$. According to theorem 6.14, we have $1 = \operatorname{Mr}(\widetilde{\mathcal{A}}_i^{(1)}) \geq q(\widetilde{\mathcal{A}}_i^{(1)}) \geq p_i \geq 1$, so $p_i = 1$ ($i = 1, \ldots, n$). Let $\widehat{w}_i$ be the only nonzero element of $w_i$ ($i = 1, \ldots, n$). So the tensor $\widetilde{\mathcal{A}}_i^{(1)}$ has only one nonzero element $a_{ij \cdots j} = (\widehat{w}_i)^{k-1}$ ($i = 1, \ldots, n$). Especially, when $k$ is odd, $\operatorname{sgn}(a_{ij \cdots j}) = +$. Hence,
$$a_{i j_2 \cdots j_k} = \begin{cases} m_{ij}, & \text{if } j_2 = \cdots = j_k = j \ (i, j \in [n]), \\ 0, & \text{otherwise}, \end{cases}$$
for the majorization matrix $M(\mathcal{A}) = (m_{ij})$. So, $\operatorname{sgn}(m_{ij}) = \operatorname{sgn}(w_{ij})^{k-1} \neq 0$. Since $W$ is invertible and there is only one nonzero element in each row of $W$, $M(\mathcal{A})$ can be permuted into a diagonal matrix. Therefore, there exist a permutation matrix $P$ and a diagonal matrix $D$ with the diagonal entries being $1$ or $-1$ such that $\operatorname{sgn}(M(\mathcal{A})) = DP$. Thus, $\operatorname{sgn}(\mathcal{A}) = DP * \mathcal{I}$. ■
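The sufficiency construction above ($D_1$ with $D_1^{k-1} = \widetilde{D}$, then $P_2 = P^{\top} D_1^{-1}$) can be checked numerically. The sketch below is a minimal illustration, not part of the text: the helper names and the instance $k = 4$, $n = 3$ are mine (an even $k$ is chosen so that $D \neq I$ is allowed; for odd $k$ the theorem forces $D = I$). The products implement the general tensor product restricted to tensor-matrix and matrix-tensor cases, as used throughout this chapter.

```python
import numpy as np

def tensor_matrix_product(A, B):
    # (A * B)_{i j2 j3 j4} = sum_{s2,s3,s4} a_{i s2 s3 s4} b_{s2 j2} b_{s3 j3} b_{s4 j4}
    return np.einsum('istu,sj,tk,ul->ijkl', A, B, B, B)

def matrix_tensor_product(M, T):
    # (M * T)_{i j2 j3 j4} = sum_s m_{is} t_{s j2 j3 j4}
    return np.einsum('is,sjkl->ijkl', M, T)

def identity_tensor(n, k=4):
    # I_{i1...ik} = 1 iff i1 = ... = ik
    I = np.zeros((n,) * k)
    for i in range(n):
        I[(i,) * k] = 1.0
    return I

n, k = 3, 4
rng = np.random.default_rng(0)
P = np.eye(n)[[2, 0, 1]]                 # a permutation matrix
d_sign = np.array([1.0, -1.0, 1.0])      # diagonal of D, entries +-1

# A member A~ of Q(A) with sgn(A) = DP * I: keep the signs of D,
# randomize the magnitudes.
d_tilde = d_sign * rng.uniform(1.0, 2.0, n)
A_tilde = matrix_tensor_product(np.diag(d_tilde) @ P, identity_tensor(n, k))

# D1 with D1^{k-1} = D~ (k - 1 = 3 is odd, so the real cube root keeps the
# sign), and the 2-order right inverse P2 = P^T D1^{-1}.
d1 = np.cbrt(d_tilde)
P2 = P.T @ np.diag(1.0 / d1)

# A~ * P2 equals the identity tensor, as in the sufficiency proof.
assert np.allclose(tensor_matrix_product(A_tilde, P2), identity_tensor(n, k))
```

Repeating the run with other magnitudes in $Q(D)$ confirms that every tensor in $Q(\mathcal{A})$ admits such a 2-order right inverse, which is exactly the sign-right-invertibility asserted by the theorem.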