Polynomial and rational matrices: Applications in dynamical systems theory [1st Edition.] 1846286042, 9781846286049, 9781846286056

English, 513 pages, 2006

Communications and Control Engineering

Published titles include: Stability and Stabilization of Infinite Dimensional Systems with Applications Zheng-Hua Luo, Bao-Zhu Guo and Omer Morgul

Identification and Control Using Volterra Models Francis J. Doyle III, Ronald K. Pearson and Babatunde A. Ogunnaike

Nonsmooth Mechanics (Second edition) Bernard Brogliato

Non-linear Control for Underactuated Mechanical Systems Isabelle Fantoni and Rogelio Lozano

Nonlinear Control Systems II Alberto Isidori

Robust Control (Second edition) Jürgen Ackermann

L2 -Gain and Passivity Techniques in Nonlinear Control Arjan van der Schaft

Flow Control by Feedback Ole Morten Aamo and Miroslav Krsti´c

Control of Linear Systems with Regulation and Input Constraints Ali Saberi, Anton A. Stoorvogel and Peddapullaiah Sannuti

Robust and H∞ Control Ben M. Chen

Computer Controlled Systems Efim N. Rosenwasser and Bernhard P. Lampe

Control of Complex and Uncertain Systems Stanislav V. Emelyanov and Sergey K. Korovin

Robust Control Design Using H∞ Methods Ian R. Petersen, Valery A. Ugrinovski and Andrey V. Savkin

Model Reduction for Control System Design Goro Obinata and Brian D.O. Anderson

Control Theory for Linear Systems Harry L. Trentelman, Anton Stoorvogel and Malo Hautus

Functional Adaptive Control Simon G. Fabri and Visakan Kadirkamanathan

Positive 1D and 2D Systems Tadeusz Kaczorek

Learning and Generalization (Second edition) Mathukumalli Vidyasagar

Constrained Control and Estimation Graham C. Goodwin, María M. Seron and José A. De Doná

Randomized Algorithms for Analysis and Control of Uncertain Systems Roberto Tempo, Giuseppe Calafiore and Fabrizio Dabbene

Switched Linear Systems Zhendong Sun and Shuzhi S. Ge

Subspace Methods for System Identification Tohru Katayama

Digital Control Systems Ioan D. Landau and Gianluca Zito

Multivariable Computer-controlled Systems Efim N. Rosenwasser and Bernhard P. Lampe

Dissipative Systems Analysis and Control (Second edition) Bernard Brogliato, Rogelio Lozano, Bernhard Maschke and Olav Egeland

Algebraic Methods for Nonlinear Control Systems (Second edition) Giuseppe Conte, Claude H. Moog and Anna Maria Perdon

Tadeusz Kaczorek

Polynomial and Rational Matrices Applications in Dynamical Systems Theory


Tadeusz Kaczorek, Prof. dr hab. inż. Institute of Control and Industrial Electronics Faculty of Electrical Engineering Warsaw University of Technology 00-662 Warsaw ul. Koszykowa 75m. 19 Poland

Series Editors E.D. Sontag · M. Thoma · A. Isidori · J.H. van Schuppen

British Library Cataloguing in Publication Data
Kaczorek, T. (Tadeusz), 1932-
Polynomial and rational matrices : applications in dynamical systems theory. - (Communications and control engineering)
1. Automatic control - Mathematics 2. Electrical engineering - Mathematics 3. Matrices 4. Linear systems 5. Polynomials
I. Title
629.8'312
ISBN-13: 9781846286049
ISBN-10: 1846286042
Library of Congress Control Number: 2006936878
Communications and Control Engineering Series ISSN 0178-5354
ISBN 978-1-84628-604-9
e-ISBN 1-84628-605-0

Printed on acid-free paper

© Springer-Verlag London Limited 2007 Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers. The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made. 987654321 Springer Science+Business Media springer.com

Preface

This monograph covers selected applications of polynomial and rational matrices to the theory of both continuous-time and discrete-time linear systems. It is an extended English version of the preceding Polish edition, which was based on lectures delivered by the author to Ph.D. students of the Faculty of Electrical Engineering at Warsaw University of Technology during the academic year 2003/2004. The monograph consists of eight chapters, an appendix and a list of references.

Chapter 1 is devoted to polynomial matrices. It covers the following topics: basic operations on polynomial matrices, the generalised Bezoute theorem, the Cayley-Hamilton theorem, elementary operations on polynomial matrices, the choice of a basis for a space of polynomial matrices, equivalent polynomial matrices, row- and column-reduced matrices, the Smith canonical form of polynomial matrices, elementary divisors and zeros of polynomial matrices, similarity of polynomial matrices, the Frobenius and Jordan canonical forms, cyclic matrices, pairs of polynomial matrices, greatest common divisors and smallest common multiplicities of matrices, the generalised Bezoute identity, regular and singular matrix pencil decompositions, and the Weierstrass-Kronecker canonical form of a matrix pencil.

Rational functions and matrices are discussed in Chap. 2. With the basic definitions and operations on rational functions introduced at the beginning, the following issues are subsequently addressed: decomposition into a sum of rational functions, operations on rational matrices, the decomposition of a matrix into a sum of rational matrices, the inverse matrix of a polynomial matrix and its reducibility, the McMillan canonical form of rational matrices, the relatively prime factorization of rational matrices, and the application of rational matrices in the synthesis of control systems.

Chapter 3 addresses normal matrices and systems.
A rational matrix is called normal if every non-zero minor of size 2 of the polynomial matrix of the denominator is divisible by the minimal polynomial of this matrix. It has been proved that a rational matrix is normal if and only if its McMillan polynomial is equal to the smallest common denominator of all the elements of the rational matrix. Further, the following issues are discussed: the fractional forms of normal


matrices, the sum and product of normal matrices, the inverse matrix of a normal matrix, the decomposition of normal matrices into the sum of normal matrices, the structural decomposition of normal matrices, the normalisation of matrices via feedback, and electrical circuits as examples of normal systems.

The problem of the realisation of normal matrices is addressed in Chap. 4. After the problem formulation, the following issues are discussed: necessary and sufficient conditions for the existence of minimal and cyclic realisations, methods of computing the realisation with the state matrix in both the Frobenius and Jordan canonical forms, structural stability, and the computation of the normal transfer function matrix.

Chapter 5 is devoted to normal singular systems. In particular, it focuses on discrete singular systems, cyclic pairs of matrices, the normal inverse matrices of cyclic pairs, normal transfer matrices, reachability and cyclicity of singular systems, cyclicity of feedback systems, and the computation of equivalent standard systems for singular systems. It is shown that electrical circuits consisting of resistances and inductances, or resistances and capacitances, together with ideal voltage (current) sources, constitute examples of singular continuous-time systems. Both the Kalman decomposition and the structural decomposition of the transfer matrix are generalised to the case of singular systems.

Polynomial, rational and algebraic matrix equations are discussed in Chap. 6. The chapter begins with unilateral polynomial equations with two unknown matrices. Subsequently the following issues are addressed: the computation of minimal degree solutions to matrix equations, bilateral polynomial equations, the computation of rational solutions to polynomial equations, matrix equations of the m-th order, the Kronecker product of matrices and its applications, and methods for computing solutions to Sylvester and Lyapunov matrix equations.
Chapter 7 is devoted to the problem of realisation and perfect observers for linear systems. A new method for computing a minimal realisation for a given improper transfer matrix is provided together with the existence conditions; subsequently, methods for computing full- and reduced-order observers, as well as functional perfect observers, for 1D and 2D systems are given.

In Chap. 8 some new results (published and unpublished) are presented on positive linear discrete-time and continuous-time systems with delays: asymptotic and robust stability, reachability, minimum energy control and the positive realisation problem. The Appendix contains some basic definitions and theorems pertaining to the controllability and observability of linear systems.

The monograph contains some original results of the author, most of which have already been published. It is hoped that this monograph will be of value to Ph.D. students and researchers in the fields of control theory and circuit theory. It can also be recommended to undergraduates in electrical engineering, electronics, mechatronics and computer engineering.

I would like to express my gratitude to Professors M. Busłowicz and J. Klamka, the reviewers of the Polish version of the book, for their valuable comments and


suggestions, which helped to improve this monograph. I also wish to thank my Ph.D. students, the first readers of the manuscript, for their remarks. I wish to extend my special thanks to my Ph.D. students Maciej Twardy, Konrad Markowski and Stefan Krzemiński for their valuable help in the preparation of this English edition.

T. Kaczorek

Contents

Notation  xv

1 Polynomial Matrices  1
  1.1 Polynomials  1
  1.2 Basic Notions and Basic Operations on Polynomial Matrices  5
  1.3 Division of Polynomial Matrices  9
  1.4 Generalized Bezoute Theorem and the Cayley-Hamilton Theorem  16
  1.5 Elementary Operations on Polynomial Matrices  20
  1.6 Linear Independence, Space Basis and Rank of Polynomial Matrices  23
  1.7 Equivalents of Polynomial Matrices  27
    1.7.1 Left and Right Equivalent Matrices  27
    1.7.2 Row and Column Reduced Matrices  30
  1.8 Reduction of Polynomial Matrices to the Smith Canonical Form  32
  1.9 Elementary Divisors and Zeros of Polynomial Matrices  37
    1.9.1 Elementary Divisors  37
    1.9.2 Zeros of Polynomial Matrices  39
  1.10 Similarity and Equivalence of First Degree Polynomial Matrices  42
  1.11 Computation of the Frobenius and Jordan Canonical Forms of Matrices  45
    1.11.1 Computation of the Frobenius Canonical Form of a Square Matrix  45
    1.11.2 Computation of the Jordan Canonical Form of a Square Matrix  47
  1.12 Computation of Similarity Transformation Matrices  49
    1.12.1 Matrix Pair Method  49
    1.12.2 Elementary Operations Method  54
    1.12.3 Eigenvectors Method  57
  1.13 Matrices of Simple Structure and Diagonalisation of Matrices  59
    1.13.1 Matrices of Simple Structure  59
    1.13.2 Diagonalisation of Matrices of Simple Structure  61
    1.13.3 Diagonalisation of an Arbitrary Square Matrix by the Use of a Matrix with Variable Elements  65
  1.14 Simple Matrices and Cyclic Matrices  67
    1.14.1 Simple Polynomial Matrices  67
    1.14.2 Cyclic Matrices  69
  1.15 Pairs of Polynomial Matrices  75
    1.15.1 Greatest Common Divisors and Lowest Common Multiplicities of Polynomial Matrices  75
    1.15.2 Computation of Greatest Common Divisors of a Polynomial Matrix  77
    1.15.3 Computation of Greatest Common Divisors and Smallest Common Multiplicities of Polynomial Matrices  78
    1.15.4 Relatively Prime Polynomial Matrices and the Generalised Bezoute Identity  84
    1.15.5 Generalised Bezoute Identity  86
  1.16 Decomposition of Regular Pencils of Matrices  87
    1.16.1 Strictly Equivalent Pencils  87
    1.16.2 Weierstrass Decomposition of Regular Pencils  92
  1.17 Decomposition of Singular Pencils of Matrices  95
    1.17.1 Weierstrass-Kronecker Theorem  95
    1.17.2 Kronecker Indices of Singular Pencils and Strict Equivalence of Singular Pencils  102

2 Rational Functions and Matrices  107
  2.1 Basic Definitions and Operations on Rational Functions  107
  2.2 Decomposition of a Rational Function into a Sum of Rational Functions  116
  2.3 Basic Definitions and Operations on Rational Matrices  124
  2.4 Decomposition of Rational Matrices into a Sum of Rational Matrices  128
  2.5 The Inverse Matrix of a Polynomial Matrix and Its Reducibility  132
  2.6 Fraction Description of Rational Matrices and the McMillan Canonical Form  136
    2.6.1 Fractional Forms of Rational Matrices  136
    2.6.2 Relatively Prime Factorization of Rational Matrices  146
    2.6.3 Conversion of a Rational Matrix into the McMillan Canonical Form  152
  2.7 Synthesis of Regulators  155
    2.7.1 System Matrices and the General Problem of Synthesis of Regulators  155
    2.7.2 Set of Regulators Guaranteeing Given Characteristic Polynomials of a Closed-loop System  159

3 Normal Matrices and Systems  163
  3.1 Normal Matrices  163
    3.1.1 Definition of the Normal Matrix  163
    3.1.2 Normality of the Matrix [Is - A]^-1 for a Cyclic Matrix  164
    3.1.3 Rational Normal Matrices  168
  3.2 Fraction Description of Normal Matrices  170
  3.3 Sum and Product of Normal Matrices and Normal Inverse Matrices  175
    3.3.1 Sum and Product of Normal Matrices  175
    3.3.2 The Normal Inverse Matrix  180
  3.4 Decomposition of Normal Matrices  182
    3.4.1 Decomposition of Normal Matrices into the Sum of Normal Matrices  182
    3.4.2 Structural Decomposition of Normal Matrices  185
  3.5 Normalisation of Matrices Using Feedback  191
    3.5.1 State-feedback  191
    3.5.2 Output-feedback  197
  3.6 Electrical Circuits as Examples of Normal Systems  200
    3.6.1 Circuits of the Second Order  200
    3.6.2 Circuits of the Third Order  203
    3.6.3 Circuits of the Fourth Order and the General Case  210

4 The Problem of Realization  219
  4.1 Basic Notions and Problem Formulation  219
  4.2 Existence of Minimal and Cyclic Realisations  220
    4.2.1 Existence of Minimal Realisations  220
    4.2.2 Existence of Cyclic Realisations  224
  4.3 Computation of Cyclic Realisations  226
    4.3.1 Computation of a Realisation with the Matrix A in the Frobenius Canonical Form  226
    4.3.2 Computation of a Cyclic Realisation with Matrix A in the Jordan Canonical Form  232
  4.4 Structural Stability and Computation of the Normal Transfer Matrix  244
    4.4.1 Structural Controllability of Cyclic Matrices  244
    4.4.2 Structural Stability of Cyclic Realisation  245
    4.4.3 Impact of the Coefficients of the Transfer Function on the System Description  247
    4.4.4 Computation of the Normal Transfer Matrix on the Basis of Its Approximation  249

5 Singular and Cyclic Normal Systems  255
  5.1 Singular Discrete Systems and Cyclic Pairs  255
    5.1.1 Normal Inverse Matrix of a Cyclic Pair  257
    5.1.2 Normal Transfer Matrix  260
  5.2 Reachability and Cyclicity  264
    5.2.1 Reachability of Singular Systems  264
    5.2.2 Cyclicity of Feedback Systems  267
  5.3 Computation of Equivalent Standard Systems for Linear Singular Systems  272
    5.3.1 Discrete-time Systems and Basic Notions  272
    5.3.2 Computation of Fundamental Matrices  276
    5.3.3 Equivalent Standard Systems  279
    5.3.4 Continuous-time Systems  282
  5.4 Electrical Circuits as Examples of Singular Systems  285
    5.4.1 RL Circuits  285
    5.4.2 RC Circuits  288
  5.5 Kalman Decomposition  291
    5.5.1 Basic Theorems and a Procedure for System Decomposition  291
    5.5.2 Conclusions and Theorems Following from System Decomposition  295
  5.6 Decomposition of Singular Systems  298
    5.6.1 Weierstrass-Kronecker Decomposition  298
    5.6.2 Basic Theorems  299
  5.7 Structural Decomposition of a Transfer Matrix of a Singular System  305
    5.7.1 Irreducible Transfer Matrices  305
    5.7.2 Fundamental Theorem and Decomposition Procedure  306

6 Matrix Polynomial Equations, and Rational and Algebraic Matrix Equations  313
  6.1 Unilateral Polynomial Equations with Two Variables  313
    6.1.1 Computation of Particular Solutions to Polynomial Equations  313
    6.1.2 Computation of General Solutions to Polynomial Equations  319
    6.1.3 Computation of Minimal Degree Solutions to Polynomial Matrix Equations  322
  6.2 Bilateral Polynomial Matrix Equations with Two Unknowns  325
    6.2.1 Existence of Solutions  325
    6.2.2 Computation of Solutions  328
  6.3 Rational Solutions to Polynomial Matrix Equations  332
    6.3.1 Computation of Rational Solutions  332
    6.3.2 Existence of Rational Solutions of Polynomial Matrix Equations  333
    6.3.3 Computation of Rational Solutions to Polynomial Matrix Equations  334
  6.4 Polynomial Matrix Equations  336
    6.4.1 Existence of Solutions  336
    6.4.2 Computation of Solutions  337
  6.5 The Kronecker Product and Its Applications  340
    6.5.1 The Kronecker Product of Matrices and Its Properties  340
    6.5.2 Applications of the Kronecker Product to the Formulation of Matrix Equations  343
    6.5.3 Eigenvalues of Matrix Polynomials  345
  6.6 The Sylvester Equation and Its Generalization  347
    6.6.1 Existence of Solutions  347
    6.6.2 Methods of Solving the Sylvester Equation  349
    6.6.3 Generalization of the Sylvester Equation  357
  6.7 Algebraic Matrix Equations with Two Unknowns  358
    6.7.1 Existence of Solutions  358
    6.7.2 Computation of Solutions  360
  6.8 Lyapunov Equations  361
    6.8.1 Solutions to Lyapunov Equations  361
    6.8.2 Lyapunov Equations with a Positive Semidefinite Matrix  363

7 The Realisation Problem and Perfect Observers of Singular Systems  367
  7.1 Computation of Minimal Realisations for Singular Linear Systems  367
    7.1.1 Problem Formulation  367
    7.1.2 Problem Solution  369
  7.2 Full- and Reduced-order Perfect Observers  376
    7.2.1 Reduced-order Observers  378
    7.2.2 Perfect Observers for Standard Systems  384
  7.3 Functional Observers  392
  7.4 Perfect Observers for 2D Systems  396
  7.5 Perfect Observers for Systems with Unknown Inputs  400
    7.5.1 Problem Formulation  400
    7.5.2 Problem Solution  402
  7.6 Reduced-order Perfect Observers for 2D Systems with Unknown Inputs  409
    7.6.1 Problem Formulation  408
    7.6.2 Problem Solution  411

8 Positive Linear Systems with Delays  421
  8.1 Positive Discrete-time and Continuous-time Systems  421
    8.1.1 Discrete-time Systems  421
    8.1.2 Continuous-time Systems  424
  8.2 Stability of Positive Linear Discrete-time Systems with Delays  425
    8.2.1 Asymptotic Stability  425
    8.2.2 Stability of Systems with Pure Delays  432
    8.2.3 Robust Stability of Interval Systems  434
  8.3 Reachability and Minimum Energy Control  437
    8.3.2 Minimum Energy Control  442
  8.4 Realisation Problem for Positive Discrete-time Systems  446
    8.4.1 Problem Formulation  446
    8.4.2 Problem Solution  447
  8.5 Realisation Problem for Positive Continuous-time Systems with Delays  456
    8.5.1 Problem Formulation  456
    8.5.2 Problem Solution  457
  8.6 Positive Realisations for Singular Multi-variable Discrete-time Systems with Delays  463
    8.6.1 Problem Formulation  463
    8.6.2 Problem Solution  466

A Selected Problems of Controllability and Observability of Linear Systems  473
  A.1 Reachability  473
  A.2 Controllability  477
  A.3 Observability  480
  A.4 Reconstructability  483
  A.5 Dual System  485
  A.6 Stabilizability and Detectability  485

References  487

Index  501

Notation

A  matrix
A^T  transpose of A
A*  conjugate of A
A^-1  inverse of A
Adj A  adjoint (adjugate) of A
A(s)  polynomial matrix
A_J  Jordan canonical form of A
A_S(s)  Smith canonical form of A(s)
det A  determinant of A
D_{n-1}(λ)  greatest common divisor of all the elements of Adj[λI_n - A]
L[i × c]  multiplication of the i-th row by the number c ≠ 0
L[i, j]  interchange of the i-th and j-th rows
L[i + j × b(s)]  addition of the j-th row multiplied by the polynomial b(s) to the i-th row
m × n  dimension of a matrix with m rows and n columns
P[i × c]  multiplication of the i-th column by the number c ≠ 0
P[i, j]  interchange of the i-th and j-th columns
P[i + j × b(s)]  addition of the j-th column multiplied by the polynomial b(s) to the i-th column
tr A  trace of A
rank A  rank of A
Im A  image of A
Ker A  kernel of A
W(A)
M(λ)

1 Polynomial Matrices

For the polynomials w1(s) of degree n and w2(s) of degree m in (1.1.2): if n > m, then the sum is a polynomial of degree n; if m > n, then the sum is a polynomial of degree m. If n = m, then the sum is a polynomial of degree n when a_n + b_n ≠ 0, and a polynomial of degree less than n when a_n + b_n = 0. Thus we have

deg[w1(s) + w2(s)] ≤ max{deg[w1(s)], deg[w2(s)]}.  (1.1.4)

In the same vein we define the difference of two polynomials. A polynomial whose coefficients are the products of the coefficients a_i and the scalar λ, i.e.,

λ w(s) = Σ_{i=0}^{n} λ a_i s^i,  (1.1.5)

is called the product of the polynomial (1.1.1) and the scalar λ (a scalar can be regarded as a polynomial of zero degree). A polynomial of the form

w1(s) w2(s) = Σ_{i=0}^{n+m} c_i s^i  (1.1.6a)

is called the product of the polynomials (1.1.2), where

c_i = Σ_{k=0}^{i} a_k b_{i-k},  i = 0, 1, …, n + m  (1.1.6b)

(a_k = 0 for k > n, b_k = 0 for k > m).
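The sum and product formulas above translate directly into code. The following is a minimal sketch (not from the book) assuming the common convention that a polynomial is stored as a coefficient list [a_0, …, a_n], lowest degree first; the names poly_add, poly_mul and deg are illustrative only.

```python
def poly_add(w1, w2):
    """Sum of two polynomials given as coefficient lists [a_0, ..., a_n]."""
    s = [0] * max(len(w1), len(w2))
    for i, a in enumerate(w1):
        s[i] += a
    for i, b in enumerate(w2):
        s[i] += b
    # strip leading zero coefficients (the a_n + b_n = 0 case) so deg() is well defined
    while len(s) > 1 and s[-1] == 0:
        s.pop()
    return s

def poly_mul(w1, w2):
    """Product via the convolution c_i = sum_k a_k * b_(i-k) of (1.1.6b)."""
    c = [0] * (len(w1) + len(w2) - 1)
    for i, a in enumerate(w1):
        for k, b in enumerate(w2):
            c[i + k] += a * b
    return c

def deg(w):
    """Degree of a stripped coefficient list."""
    return len(w) - 1

# w1(s) = 1 + 2s + s^2, w2(s) = 3 + s
w1, w2 = [1, 2, 1], [3, 1]
assert deg(poly_add(w1, w2)) <= max(deg(w1), deg(w2))   # (1.1.4)
assert deg(poly_mul(w1, w2)) == deg(w1) + deg(w2)       # (1.1.7)
```

Note that the degree bound (1.1.4) is an inequality precisely because leading coefficients can cancel: poly_add([1, 0, 1], [1, 0, -1]) collapses to the constant polynomial [2].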

From (1.1.6a) it follows that

deg[w1(s) w2(s)] = n + m,  (1.1.7)

since a_n b_m ≠ 0 for a_n ≠ 0, b_m ≠ 0.
Let w2(s) in (1.1.2) be a nonzero polynomial and n > m; then there exist exactly two polynomials q(s) and r(s) such that

w1(s) = w2(s) q(s) + r(s),  (1.1.8)

where

deg[r(s)] < deg[w2(s)] = m.  (1.1.9)

The polynomial q(s) is called the integer part when r(s) z 0 and the quotient when r(s) = 0, and r(s) is called the remainder.

Polynomial Matrices

3

If r(s) = 0, then w1(s) = w2(s)q(s); we say then that the polynomial w1(s) is divisible without remainder by the polynomial w2(s), or equivalently, that the polynomial w2(s) divides the polynomial w1(s) without remainder, which is denoted by w2(s) | w1(s). We also say that the polynomial w2(s) is a divisor of the polynomial w1(s). Let us consider the polynomials (1.1.2). We say that a polynomial d(s) is a common divisor of the polynomials w1(s) and w2(s) if there exist polynomials w̄1(s) and w̄2(s) such that

w1(s) = d(s)w̄1(s),  w2(s) = d(s)w̄2(s).   (1.1.10)

A polynomial d_m(s) is called a greatest common divisor (GCD) of the polynomials w1(s) and w2(s) if every common divisor of these polynomials is a divisor of d_m(s). A GCD d_m(s) of the polynomials w1(s) and w2(s) is determined uniquely up to multiplication by a constant factor and satisfies the equality

d_m(s) = w1(s)m1(s) + w2(s)m2(s),   (1.1.11)

where m1(s) and m2(s) are polynomials, which we can determine using Euclid's algorithm or the elementary operations method. The essence of Euclid's algorithm is as follows. Using division of polynomials, we determine the sequences of polynomials q1, q2, ..., q_{k+1} and r1, r2, ..., r_k satisfying

w1 = w2 q1 + r1
w2 = r1 q2 + r2
r1 = r2 q3 + r3
. . . . . . . . . .
r_{k-2} = r_{k-1} q_k + r_k
r_{k-1} = r_k q_{k+1}.   (1.1.12)

We stop the computations when the last nonzero remainder r_k has been computed and r_{k-1} is found to be divisible without remainder by r_k. With r1, r2, ..., r_{k-1} eliminated from (1.1.12), we obtain (1.1.11) for d_m(s) = r_k. Thus the last nonzero remainder r_k is a GCD of the polynomials w1(s) and w2(s).

Example 1.1.1. Let

w1 = w1(s) = s³ + 3s² + 3s + 1,  w2 = w2(s) = s² - s + 1.   (1.1.13)

4

Polynomial and Rational Matrices

Using Euclid's algorithm we compute

w1 = w2 q1 + r1,  q1 = s + 4,  r1 = 6s - 3,
w2 = r1 q2 + r2,  q2 = (1/6)s - 1/12,  r2 = 3/4.   (1.1.14)

Here we stop, because r1 is divisible without remainder by r2. Thus r2 is a GCD of the polynomials (1.1.13). Elimination of r1 from (1.1.14) yields

w1(-q2) + w2(1 + q1 q2) = r2,

that is,

(s³ + 3s² + 3s + 1)(-(1/6)s + 1/12) + (s² - s + 1)((1/6)s² + (7/12)s + 2/3) = 3/4.
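Euclid's algorithm (1.1.12) and the Bezout-type identity (1.1.11) for Example 1.1.1 can be replayed step by step; a sketch with SymPy, whose `div` returns the quotient and remainder of a polynomial division:

```python
import sympy as sp

s = sp.symbols('s')
w1 = s**3 + 3*s**2 + 3*s + 1
w2 = s**2 - s + 1

q1, r1 = sp.div(w1, w2, s)     # w1 = w2*q1 + r1
q2, r2 = sp.div(w2, r1, s)     # w2 = r1*q2 + r2
q3, r3 = sp.div(r1, r2, s)     # r1 = r2*q3, r3 = 0: stop, r2 is a GCD

# eliminating r1 gives (1.1.11): r2 = w1*(-q2) + w2*(1 + q1*q2)
bezout = sp.expand(w1*(-q2) + w2*(1 + q1*q2))
```

Since r2 is a nonzero constant, the two polynomials of (1.1.13) are coprime up to that constant factor, as the text's identity shows.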

The polynomials (1.1.2) are called relatively prime (or coprime) if and only if their monic GCD is equal to 1. From (1.1.11) for d_m(s) = 1 it follows that the polynomials w1(s) and w2(s) are coprime if and only if there exist polynomials m1(s) and m2(s) such that

w1(s)m1(s) + w2(s)m2(s) = 1.   (1.1.15)

Dividing both sides of (1.1.11) by d_m(s), we obtain

w̄1(s)m1(s) + w̄2(s)m2(s) = 1,   (1.1.16)

where

w̄k(s) = wk(s)/d_m(s) for k = 1, 2.

Thus if d_m(s) is a GCD of the polynomials w1(s) and w2(s), then the polynomials w̄1(s) and w̄2(s) are coprime. Let s1, s2, ..., s_p be distinct roots, of multiplicities m1, m2, ..., m_p (m1 + m2 + ... + m_p = n) respectively, of the equation w(s) = 0. The numbers s1, s2, ..., s_p are called the zeros of the polynomial (1.1.1). This polynomial can be uniquely written in the form

w(s) = a_n (s - s1)^m1 (s - s2)^m2 ... (s - s_p)^mp.   (1.1.17)


1.2 Basic Notions and Basic Operations on Polynomial Matrices

A matrix whose elements are polynomials over a field is called a polynomial matrix over that field (briefly, a polynomial matrix)

A(s) = [a_ij(s)], i = 1, ..., m, j = 1, ..., n,  a_ij(s) ∈ ℝ[s].   (1.2.1)

An ordered pair of the number of rows m and the number of columns n is called the dimension of the matrix (1.2.1) and is denoted by m×n. The set of polynomial matrices of dimension m×n over the field of real numbers will be denoted by ℝ^(m×n)[s]. The following matrix is an example of a 2×2 polynomial matrix over the field of real numbers

A0(s) = [[s² + 2s + 1, s + 2],
         [2s² + s + 3, 3s² + s + 3]] ∈ ℝ^(2×2)[s].   (1.2.2)

Every polynomial matrix can be written in the form of a matrix polynomial. For example, the matrix (1.2.2) can be written in the form of the matrix polynomial

A0(s) = [[1, 0],[2, 3]]s² + [[2, 1],[1, 1]]s + [[1, 2],[3, 3]] = A_2 s² + A_1 s + A_0.   (1.2.3)
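The passage from (1.2.2) to (1.2.3) just collects, for each power of s, the matrix of its coefficients, and either form can be evaluated at a point. A sketch with NumPy (`eval_matrix_poly` is a helper name introduced here; it uses Horner's scheme):

```python
import numpy as np

# coefficient matrices of (1.2.3): A0(s) = A2*s^2 + A1*s + A0
A2 = np.array([[1, 0], [2, 3]])
A1 = np.array([[2, 1], [1, 1]])
A0 = np.array([[1, 2], [3, 3]])

def eval_matrix_poly(coeffs, s):
    """Evaluate A_q s^q + ... + A_1 s + A_0 at a scalar s (coeffs highest first)."""
    val = np.zeros_like(coeffs[0])
    for C in coeffs:            # Horner's scheme
        val = val * s + C
    return val

# the entry-wise polynomial matrix (1.2.2) at s = 2, for comparison
s = 2
direct = np.array([[s**2 + 2*s + 1, s + 2],
                   [2*s**2 + s + 3, 3*s**2 + s + 3]])
```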

Let a matrix of the form (1.2.1) be expressed as the matrix polynomial

A(s) = A_q s^q + ... + A_1 s + A_0,  A_k ∈ ℝ^(m×n),  k = 0, 1, ..., q.   (1.2.4)

If A_q is not a zero matrix, then the number q is called its degree and is denoted by q = deg A(s). For example, the matrix (1.2.2) (and also (1.2.3)) has degree q = 2. If n = m and det A_q ≠ 0, then the matrix (1.2.4) is called regular. The sum of two polynomial matrices

A(s) = [a_ij(s)] = Σ_{k=0}^{q} A_k s^k and B(s) = [b_ij(s)] = Σ_{k=0}^{t} B_k s^k   (1.2.5)

of the same dimension m×n is defined in the following way


A(s) + B(s) = [a_ij(s) + b_ij(s)] =
  Σ_{k=0}^{t} (A_k + B_k)s^k + Σ_{k=t+1}^{q} A_k s^k for q > t,
  Σ_{k=0}^{q} (A_k + B_k)s^k for q = t,
  Σ_{k=0}^{q} (A_k + B_k)s^k + Σ_{k=q+1}^{t} B_k s^k for q < t.   (1.2.6)

If q = t and A_q + B_q ≠ 0, then the sum (1.2.6) is a polynomial matrix of degree q; if A_q + B_q = 0, then this sum is a polynomial matrix of degree not greater than q. Thus we have

deg[A(s) + B(s)] ≤ max{deg[A(s)], deg[B(s)]}.   (1.2.7)

In the same vein, we define the difference of two polynomial matrices. A polynomial matrix whose every entry is the product of the corresponding entry of the matrix (1.2.1) and the scalar λ is called the product of the polynomial matrix (1.2.1) and the scalar λ:

λA(s) = [λa_ij(s)], i = 1, ..., m, j = 1, ..., n.

From this definition, for λ ≠ 0 we have deg[λA(s)] = deg[A(s)]. Multiplication of two polynomial matrices can be carried out if and only if the number of columns of the first matrix (1.2.1) is equal to the number of rows of the second matrix

B(s) = [b_ij(s)], i = 1, ..., n, j = 1, ..., p,  B(s) = Σ_{k=0}^{t} B_k s^k.   (1.2.8)

k 0

A polynomial matrix of the form

C(s) = [c_ij(s)], i = 1, ..., m, j = 1, ..., p,  C(s) = A(s)B(s) = Σ_{k=0}^{q+t} C_k s^k   (1.2.9)

is called the product of these polynomial matrices, where

C_k = Σ_{l=0}^{k} A_l B_{k-l},  k = 0, 1, ..., q+t  (A_l = 0 for l > q, B_l = 0 for l > t).   (1.2.10)


From (1.2.10) it follows that C_{q+t} = A_q B_t, and this matrix is nonzero if at least one of the matrices A_q and B_t is nonsingular, in other words, if one of the matrices A(s) and B(s) is regular. Thus we have the relationship

deg[A(s)B(s)] = deg[A(s)] + deg[B(s)] if at least one of these matrices is regular,
deg[A(s)B(s)] ≤ deg[A(s)] + deg[B(s)] otherwise.   (1.2.11)

For example, the product of the polynomial matrices

A(s) = [[s² + s, 2s² - s - 1],[-s² + 2s + 1, -2s² + 2]] = [[1, 2],[-1, -2]]s² + [[1, -1],[2, 0]]s + [[0, -1],[1, 2]],

B(s) = [[2s + 2, s + 3],[-s + 1, -(1/2)s - 1]] = [[2, 1],[-1, -1/2]]s + [[2, 3],[1, -1]]

is the following polynomial matrix

A(s)B(s) = [[7s² + 2s - 1, (5/2)s² + (9/2)s + 1],[4s + 4, s² + 6s + 1]] = [[7, 5/2],[0, 1]]s² + [[2, 9/2],[4, 6]]s + [[-1, 1],[4, 1]],

whose degree is smaller than the sum deg[A(s)] + deg[B(s)], since

A_2 B_1 = [[1, 2],[-1, -2]]·[[2, 1],[-1, -1/2]] = [[0, 0],[0, 0]].
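The coefficient rule (1.2.10) and the degree drop caused by a vanishing leading product can be checked numerically. A sketch with NumPy, using the coefficient matrices of the worked example above (stored lowest degree first; `matpoly_mul` is a helper name introduced here):

```python
import numpy as np

# A(s) = A[0] + A[1]s + A[2]s^2, B(s) = B[0] + B[1]s
A = [np.array([[0, -1], [1,  2]]),
     np.array([[1, -1], [2,  0]]),
     np.array([[1,  2], [-1, -2]])]
B = [np.array([[2.0, 3.0], [1.0, -1.0]]),
     np.array([[2.0, 1.0], [-1.0, -0.5]])]

def matpoly_mul(A, B):
    """C_k = sum_l A_l B_{k-l}, the matrix analogue of (1.2.10)."""
    C = [np.zeros_like(B[0]) for _ in range(len(A) + len(B) - 1)]
    for l, Al in enumerate(A):
        for j, Bj in enumerate(B):
            C[l + j] = C[l + j] + Al @ Bj
    return C

C = matpoly_mul(A, B)
# the leading coefficient C_3 = A_2 B_1 vanishes, so deg(AB) < deg A + deg B
```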

The matrix (1.2.4) can be written in the form

A(s) = s^q A_q + ... + sA_1 + A_0,   (1.2.12)

since multiplication of the matrix A_i (i = 1, 2, ..., q) by the scalar s is commutative. Substituting a matrix S in place of the scalar s into (1.2.4) and (1.2.12), we obtain the following, usually different, matrices

A_p(S) = A_q S^q + ... + A_1 S + A_0,
A_l(S) = S^q A_q + ... + SA_1 + A_0.

The matrix A_p(S) (A_l(S)) is called the right-sided (left-sided) value of the matrix A(s) for s = S. Let

C(s) = A(s) + B(s).

It is easy to verify that

C_p(S) = A_p(S) + B_p(S) and C_l(S) = A_l(S) + B_l(S).

Consider the polynomial matrices (1.2.5).

Theorem 1.2.1. If the matrix S commutes with the matrices A_i for i = 1, 2, ..., q and B_j for j = 1, 2, ..., t, then the right-sided and the left-sided value of the product of the matrices (1.2.5) for s = S is equal to the product of the right-sided and left-sided values, respectively, of these matrices for s = S.

Proof. Taking into account the polynomial matrices (1.2.5), we can write

D(s) = A(s)B(s) = (Σ_{i=0}^{q} A_i s^i)(Σ_{j=0}^{t} B_j s^j) = Σ_{i=0}^{q} Σ_{j=0}^{t} A_i B_j s^(i+j)

and

D(s) = A(s)B(s) = (Σ_{i=0}^{q} s^i A_i)(Σ_{j=0}^{t} s^j B_j) = Σ_{i=0}^{q} Σ_{j=0}^{t} s^(i+j) A_i B_j.

Substituting the matrix S in place of the scalar s, we obtain

D_p(S) = Σ_{i=0}^{q} Σ_{j=0}^{t} A_i B_j S^(i+j) = (Σ_{i=0}^{q} A_i S^i)(Σ_{j=0}^{t} B_j S^j) = A_p(S)B_p(S),

since B_j S = S B_j for j = 1, 2, ..., t, and

D_l(S) = Σ_{i=0}^{q} Σ_{j=0}^{t} S^(i+j) A_i B_j = (Σ_{i=0}^{q} S^i A_i)(Σ_{j=0}^{t} S^j B_j) = A_l(S)B_l(S),

since SA_i = A_i S for i = 1, 2, ..., q. ∎


1.3 Division of Polynomial Matrices

Consider polynomial matrices A(s) and B(s) with det A(s) ≠ 0 and deg A(s) < deg B(s). The matrix A(s) need not be regular, i.e., the matrix of coefficients of the highest power of the variable s may be singular.

Theorem 1.3.1. If det A(s) ≠ 0, then for the pair of polynomial matrices A(s) and B(s) with deg B(s) > deg A(s) there exists a pair of matrices Q_p(s), R_p(s) such that

B(s) = Q_p(s)A(s) + R_p(s),  deg A(s) > deg R_p(s),   (1.3.1a)

and there exists a pair of matrices Q_l(s), R_l(s) such that

B(s) = A(s)Q_l(s) + R_l(s),  deg A(s) > deg R_l(s).   (1.3.1b)

Proof. Dividing the elements of the matrix B(s) Adj A(s) by the polynomial det A(s), we obtain a pair of matrices Q_p(s), R_1(s) such that

B(s) Adj A(s) = Q_p(s) det A(s) + R_1(s),  deg[det A(s)] > deg R_1(s).   (1.3.2)

Post-multiplication of (1.3.2) by A(s)/det A(s) yields

B(s) = Q_p(s)A(s) + R_p(s),   (1.3.3)

since Adj A(s) A(s) = I_n det A(s), where

R_p(s) = R_1(s)A(s)/det A(s).   (1.3.4)

From (1.3.4) we have

deg R_p(s) = deg R_1(s) + deg A(s) - deg[det A(s)] < deg A(s),

since deg[det A(s)] > deg R_1(s). The proof of the equality (1.3.1b) is similar. ∎

Remark 1.3.1. The pairs of matrices Q_p(s), R_p(s) and Q_l(s), R_l(s) satisfying the equalities (1.3.1) are not uniquely determined (are not unique), since

B(s) = [Q_p(s) + C(s)]A(s) + R_p(s) - C(s)A(s)   (1.3.5a)

and

B(s) = A(s)[Q_l(s) + C(s)] + R_l(s) - A(s)C(s)   (1.3.5b)

hold for an arbitrary matrix C(s) satisfying

deg[C(s)A(s)] < deg A(s),  deg[A(s)C(s)] < deg A(s).

Example 1.3.1. For the matrices

A(s) = [[s, 1],[-1, 1]],  B(s) = [[s, -s],[-1, s² + 1]]

determine the matrices Q_p(s), R_p(s) satisfying the equality (1.3.1a). In this case det A_1 = 0 and det A(s) = s + 1. We compute

Adj A(s) = [[1, -1],[1, s]],  B(s) Adj A(s) = [[0, -s² - s],[s², s³ + s + 1]],

and with (1.3.2) taken into account we have

[[0, -s² - s],[s², s³ + s + 1]] = [[0, -s],[s - 1, s² - s + 2]](s + 1) + [[0, 0],[1, -1]],

i.e.,

Q_p(s) = [[0, -s],[s - 1, s² - s + 2]],  R_1(s) = [[0, 0],[1, -1]].

According to (1.3.4) we obtain

R_p(s) = R_1(s)A(s)/det A(s) = [[0, 0],[1, 0]].

Consider two polynomial matrices

A(s) = A_n s^n + A_{n-1} s^(n-1) + ... + A_1 s + A_0,   (1.3.6a)
B(s) = B_m s^m + B_{m-1} s^(m-1) + ... + B_1 s + B_0.   (1.3.6b)


Theorem 1.3.2. If A(s) and B(s) are square polynomial matrices of the same dimension and A(s) is regular (det A_n ≠ 0), then there exists exactly one pair of polynomial matrices Q_p(s), R_p(s) satisfying the equality

B(s) = Q_p(s)A(s) + R_p(s),   (1.3.7a)

and exactly one pair of polynomial matrices Q_l(s), R_l(s) satisfying the equality

B(s) = A(s)Q_l(s) + R_l(s),   (1.3.7b)

where deg A(s) > deg R_p(s) and deg A(s) > deg R_l(s).

Proof. If n > m, then Q_p(s) = 0 and R_p(s) = B(s). Assume that m ≥ n. By the assumption det A_n ≠ 0, the inverse matrix A_n^(-1) exists. Note that the matrix B_m A_n^(-1) s^(m-n) A(s) has the term with the highest power of s equal to B_m s^m. Hence

B(s) = B_m A_n^(-1) s^(m-n) A(s) + B^(1)(s),

where B^(1)(s) is a polynomial matrix of degree m_1 ≤ m - 1 of the form

B^(1)(s) = B^(1)_{m1} s^m1 + B^(1)_{m1-1} s^(m1-1) + ... + B^(1)_1 s + B^(1)_0.

If m_1 ≥ n, then we repeat this procedure, taking the matrix B^(1)_{m1} instead of the matrix B_m, and obtain

B^(1)(s) = B^(1)_{m1} A_n^(-1) s^(m1-n) A(s) + B^(2)(s),

where

B^(2)(s) = B^(2)_{m2} s^m2 + B^(2)_{m2-1} s^(m2-1) + ... + B^(2)_1 s + B^(2)_0  (m_2 < m_1).

Continuing this procedure, we obtain the sequence of polynomial matrices B(s), B^(1)(s), B^(2)(s), ... of decreasing degrees m, m_1, m_2, ..., respectively. In step r we obtain the matrix B^(r)(s) of degree m_r < n and

B(s) = [B_m A_n^(-1) s^(m-n) + B^(1)_{m1} A_n^(-1) s^(m1-n) + ... + B^(r-1)_{m_{r-1}} A_n^(-1) s^(m_{r-1}-n)]A(s) + B^(r)(s),

that is, the equality (1.3.7a) for

Q_p(s) = B_m A_n^(-1) s^(m-n) + B^(1)_{m1} A_n^(-1) s^(m1-n) + ... + B^(r-1)_{m_{r-1}} A_n^(-1) s^(m_{r-1}-n),
R_p(s) = B^(r)(s).   (1.3.8)

Now we show that there exists only one pair Q_p(s), R_p(s) satisfying (1.3.7a). Assume that there exist two different pairs Q_p^(1)(s), R_p^(1)(s) and Q_p^(2)(s), R_p^(2)(s) such that

B(s) = Q_p^(1)(s)A(s) + R_p^(1)(s)   (1.3.9a)

and

B(s) = Q_p^(2)(s)A(s) + R_p^(2)(s),   (1.3.9b)

where deg A(s) > deg R_p^(1)(s) and deg A(s) > deg R_p^(2)(s). From (1.3.9) we have

[Q_p^(1)(s) - Q_p^(2)(s)]A(s) = R_p^(2)(s) - R_p^(1)(s).   (1.3.10)

For Q_p^(1)(s) ≠ Q_p^(2)(s), the matrix [Q_p^(1)(s) - Q_p^(2)(s)]A(s) is a polynomial matrix of degree at least n, whereas [R_p^(2)(s) - R_p^(1)(s)] is a polynomial matrix of degree less than n. Hence from (1.3.10) it follows that Q_p^(1)(s) = Q_p^(2)(s) and R_p^(1)(s) = R_p^(2)(s). Similarly, one can prove that

Q_l(s) = A_n^(-1) B_m s^(m-n) + A_n^(-1) B^(1)_{m1} s^(m1-n) + ... + A_n^(-1) B^(r-1)_{m_{r-1}} s^(m_{r-1}-n),
R_l(s) = B^(r)(s),   (1.3.11)

where now the matrices B^(i)(s) are generated by the left-division step B^(1)(s) = B(s) - A(s)A_n^(-1)B_m s^(m-n), and so on. ∎

The matrices Q_p(s), R_p(s) (Q_l(s), R_l(s)) are called, respectively, the right (left) quotient and remainder from division of the matrix B(s) by the matrix A(s). From the proof of Theorem 1.3.2 the following algorithm for determining the matrices Q_p(s), R_p(s) (Q_l(s), R_l(s)) ensues.

Procedure 1.3.1.
Step 1: Given the matrix A_n, compute A_n^(-1).
Step 2: Compute

B_m A_n^(-1) s^(m-n) A(s)  (A(s)A_n^(-1) B_m s^(m-n))

and

B^(1)(s) = B(s) - B_m A_n^(-1) s^(m-n) A(s) = B^(1)_{m1} s^m1 + ... + B^(1)_1 s + B^(1)_0
(B^(1)(s) = B(s) - A(s)A_n^(-1) B_m s^(m-n) = B^(1)_{m1} s^m1 + ... + B^(1)_1 s + B^(1)_0).

Step 3: If m_1 ≥ n, then compute

B^(1)_{m1} A_n^(-1) s^(m1-n) A(s)  (A(s)A_n^(-1) B^(1)_{m1} s^(m1-n))

and

B^(2)(s) = B^(1)(s) - B^(1)_{m1} A_n^(-1) s^(m1-n) A(s) = B^(2)_{m2} s^m2 + ... + B^(2)_1 s + B^(2)_0
(B^(2)(s) = B^(1)(s) - A(s)A_n^(-1) B^(1)_{m1} s^(m1-n) = B^(2)_{m2} s^m2 + ... + B^(2)_1 s + B^(2)_0).

Step 4: If m_2 ≥ n, then, substituting m_1 and B^(1)(s) by m_2 and B^(2)(s) respectively, compute B^(3)(s). Repeat this procedure r times until m_r < n.
Step 5: Compute the matrices Q_p(s), R_p(s) (Q_l(s), R_l(s)) from (1.3.8) ((1.3.11)).

Example 1.3.2. Given the matrices

A(s) = [[s² + 1, -s],[-s, s² - s]] and B(s) = [[s⁴ + s² - 1, s³ + s² - 2s],[-2s² + s, s³ + 3s + 2]],

determine the matrices Q_p(s), R_p(s) and Q_l(s), R_l(s) satisfying (1.3.7). The matrix A(s) is regular, since

A_2 = [[1, 0],[0, 1]], and B_4 = [[1, 0],[0, 0]].

Using Procedure 1.3.1 we compute the following.
Steps 1-3: In this case

B_4 A_2^(-1) s² A(s) = [[s², 0],[0, 0]]·[[s² + 1, -s],[-s, s² - s]] = [[s⁴ + s², -s³],[0, 0]]

and

B^(1)(s) = B(s) - B_4 A_2^(-1) s² A(s) = [[-1, 2s³ + s² - 2s],[-2s² + s, s³ + 3s + 2]].

Since m_1 = 3, n = 2, and

B^(1)_3 = [[0, 2],[0, 1]],

we have

B^(1)_3 A_2^(-1) s A(s) = [[-2s², 2s³ - 2s²],[-s², s³ - s²]]

and

B^(2)(s) = B^(1)(s) - B^(1)_3 A_2^(-1) s A(s) = [[2s² - 1, 3s² - 2s],[-s² + s, s² + 3s + 2]].

Step 4: We repeat the procedure, since m_2 = 2 = n. Taking into account that

B^(2)_2 = [[2, 3],[-1, 1]],

we compute

B^(2)_2 A_2^(-1) A(s) = [[2, 3],[-1, 1]]·[[s² + 1, -s],[-s, s² - s]] = [[2s² - 3s + 2, 3s² - 5s],[-s² - s - 1, s²]]

and

B^(3)(s) = B^(2)(s) - B^(2)_2 A_2^(-1) A(s) = [[3s - 3, 3s],[2s + 1, 3s + 2]].

Step 5: The degree of this matrix is less than the degree of the matrix A(s). Hence, according to (1.3.8), we obtain

Q_p(s) = B_4 A_2^(-1) s² + B^(1)_3 A_2^(-1) s + B^(2)_2 A_2^(-1) = [[1, 0],[0, 0]]s² + [[0, 2],[0, 1]]s + [[2, 3],[-1, 1]] = [[s² + 2, 2s + 3],[-1, s + 1]]

and

R_p(s) = B^(3)(s) = [[3s - 3, 3s],[2s + 1, 3s + 2]].

We compute Q_l(s) and R_l(s) using the left-division version of Procedure 1.3.1.
Steps 1-3: We compute

A(s)A_2^(-1) B_4 s² = [[s² + 1, -s],[-s, s² - s]]·[[s², 0],[0, 0]] = [[s⁴ + s², 0],[-s³, 0]]

and

B^(1)(s) = B(s) - A(s)A_2^(-1) B_4 s² = [[-1, s³ + s² - 2s],[s³ - 2s² + s, s³ + 3s + 2]].

Taking into account that m_1 = 3 > n = 2 and

B^(1)_3 = [[0, 1],[1, 1]],

we compute

A(s)A_2^(-1) B^(1)_3 s = [[s² + 1, -s],[-s, s² - s]]·[[0, 1],[1, 1]]s = [[-s², s³ - s² + s],[s³ - s², s³ - 2s²]]

and

B^(2)(s) = B^(1)(s) - A(s)A_2^(-1) B^(1)_3 s = [[s² - 1, 2s² - 3s],[-s² + s, 2s² + 3s + 2]].

Step 4: We repeat the procedure, since m_2 = 2 = n. Taking into account that

B^(2)_2 = [[1, 2],[-1, 2]],

we have

A(s)A_2^(-1) B^(2)_2 = [[s² + 1, -s],[-s, s² - s]]·[[1, 2],[-1, 2]] = [[s² + s + 1, 2s² - 2s + 2],[-s², 2s² - 4s]]

and

B^(3)(s) = B^(2)(s) - A(s)A_2^(-1) B^(2)_2 = [[-s - 2, -s - 2],[s, 7s + 2]].

Step 5: The degree of this matrix is less than the degree of the matrix A(s). Hence, according to (1.3.11), we have

Q_l(s) = A_2^(-1) B_4 s² + A_2^(-1) B^(1)_3 s + A_2^(-1) B^(2)_2 = [[1, 0],[0, 0]]s² + [[0, 1],[1, 1]]s + [[1, 2],[-1, 2]] = [[s² + 1, s + 2],[s - 1, s + 2]],
R_l(s) = B^(3)(s) = [[-s - 2, -s - 2],[s, 7s + 2]].
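Procedure 1.3.1 mechanizes directly. A minimal sketch of the right-division case with SymPy (`poly_deg`, `coeff`, and `right_divide` are helper names introduced here), checked against the right-division results of Example 1.3.2:

```python
import sympy as sp

s = sp.symbols('s')

def poly_deg(M):
    """Degree of a polynomial matrix (-1 for the zero matrix)."""
    degs = [sp.Poly(sp.expand(e), s).degree() for e in M if sp.expand(e) != 0]
    return max(degs, default=-1)

def coeff(M, k):
    """Matrix coefficient of s**k in a polynomial matrix."""
    return M.applyfunc(lambda e: sp.expand(e).coeff(s, k))

def right_divide(B, A):
    """Procedure 1.3.1: B = Qp*A + Rp with deg Rp < deg A (A regular)."""
    n = poly_deg(A)
    An_inv = coeff(A, n).inv()            # Step 1
    Q, R = sp.zeros(*B.shape), sp.expand(B)
    while poly_deg(R) >= n:               # Steps 2-4
        m = poly_deg(R)
        T = coeff(R, m) * An_inv * s**(m - n)
        Q, R = Q + T, sp.expand(R - T * A)
    return sp.expand(Q), R                # Step 5

A = sp.Matrix([[s**2 + 1, -s], [-s, s**2 - s]])
B = sp.Matrix([[s**4 + s**2 - 1, s**3 + s**2 - 2*s],
               [-2*s**2 + s, s**3 + 3*s + 2]])
Qp, Rp = right_divide(B, A)
```

The left division is analogous, with the leading-coefficient product applied on the other side (R - A * An_inv * coeff(R, m) * s**(m - n)).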

1.4 The Generalized Bezout Theorem and the Cayley-Hamilton Theorem

Let us consider the division of a square polynomial matrix

F(s) = F_n s^n + F_{n-1} s^(n-1) + ... + F_1 s + F_0 ∈ ℝ^(m×m)[s]   (1.4.1)

by the polynomial matrix of the first degree [I_m s - A], where F_k ∈ ℝ^(m×m), k = 0, 1, ..., n, and A ∈ ℝ^(m×m). The right (left) remainder R_p (R_l) from division of F(s) by [I_m s - A] is a polynomial matrix of zero degree, i.e., it does not depend on s.

Theorem 1.4.1 (Generalized Bezout theorem). The right (left) remainder R_p (R_l) from division of the matrix F(s) by [I_m s - A] is equal to F_p(A) (F_l(A)), i.e.,

R_p = F_p(A) = F_n A^n + F_{n-1} A^(n-1) + ... + F_1 A + F_0 ∈ ℝ^(m×m),   (1.4.2a)
R_l = F_l(A) = A^n F_n + A^(n-1) F_{n-1} + ... + A F_1 + F_0 ∈ ℝ^(m×m).   (1.4.2b)

Proof. Post-dividing the matrix F(s) by [I_m s - A], we obtain

F(s) = Q_p(s)[I_m s - A] + R_p,

and pre-dividing by the same matrix, we obtain

F(s) = [I_m s - A]Q_l(s) + R_l.

Substituting the matrix A in place of the scalar s in the above relationships, we obtain

F_p(A) = Q_p(A)(A - A) + R_p = R_p

and

F_l(A) = (A - A)Q_l(A) + R_l = R_l. ∎

The following important corollary ensues from Theorem 1.4.1.

Corollary 1.4.1. A polynomial matrix F(s) is post-divisible (pre-divisible) without remainder by [I_m s - A] if and only if F_p(A) = 0 (F_l(A) = 0).

Let φ(s) be the characteristic polynomial of a square matrix A of degree n, i.e.,

φ(s) = det[I_n s - A] = s^n + a_{n-1}s^(n-1) + ... + a_1 s + a_0.

From the definition of the inverse matrix we have

[I_n s - A] Adj[I_n s - A] = I_n φ(s)   (1.4.3a)

and

Adj[I_n s - A][I_n s - A] = I_n φ(s).   (1.4.3b)

It follows from (1.4.3) that the polynomial matrix I_n φ(s) is post-divisible and pre-divisible by [I_n s - A]. According to Corollary 1.4.1, this is possible if and only if I_n φ(A) = φ(A) = 0. Thus the following theorem has been proved.


Theorem 1.4.2 (Cayley-Hamilton). Every square matrix A satisfies its own characteristic equation

φ(A) = A^n + a_{n-1}A^(n-1) + ... + a_1 A + a_0 I_n = 0.   (1.4.4)

Example 1.4.1. The characteristic polynomial of the matrix

A = [[1, 2],[3, 4]]   (1.4.5)

is

φ(s) = det[I_n s - A] = det[[s - 1, -2],[-3, s - 4]] = s² - 5s - 2.

It is easy to verify that

φ(A) = A² - 5A - 2I_2 = [[1, 2],[3, 4]]² - 5[[1, 2],[3, 4]] - 2[[1, 0],[0, 1]] = [[0, 0],[0, 0]].
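Theorem 1.4.2 is easy to confirm numerically for the matrix (1.4.5); a quick NumPy check, with the coefficients read off from φ(s) = s² - 5s - 2:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

# phi(A) = A^2 - 5A - 2I must be the zero matrix by Cayley-Hamilton
phi_A = A @ A - 5 * A - 2 * np.eye(2, dtype=int)
```

For a 2x2 matrix the coefficients come from tr A = 5 and det A = -2, which is why φ(s) has this form.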

Theorem 1.4.3. Let a polynomial w(s) ∈ ℝ[s] be of degree N and let A ∈ ℝ^(n×n), where N ≥ n. Then there exists a polynomial r(s) of degree less than n such that

w(A) = r(A).   (1.4.6)

Proof. Dividing the polynomial w(s) by the characteristic polynomial φ(s) of the matrix A, we obtain

w(s) = q(s)φ(s) + r(s),

where q(s) and r(s) are the quotient and the remainder of the division of the polynomial w(s) by φ(s), respectively, and deg φ(s) = n > deg r(s). With the matrix A substituted in place of the scalar s and (1.4.4) taken into account, we obtain

w(A) = q(A)φ(A) + r(A) = r(A). ∎



Example 1.4.2. The following polynomial is given

w(s) = s⁶ - 5s⁵ - 3s⁴ + 5s³ + 2s² + 3s + 2.

Using (1.4.6), one has to compute w(A) for the matrix (1.4.5). The characteristic polynomial of this matrix is φ(s) = s² - 5s - 2. Dividing the polynomial w(s) by φ(s), we obtain

w(s) = (s⁴ - s²)(s² - 5s - 2) + 3s + 2,

that is,

r(s) = 3s + 2.

Hence

w(A) = r(A) = 3A + 2I_2 = 3[[1, 2],[3, 4]] + 2[[1, 0],[0, 1]] = [[5, 6],[9, 14]].

The above considerations can be generalized to the case of square polynomial matrices.

Theorem 1.4.4. Let W(s) ∈ ℝ^(n×n)[s] be a square polynomial matrix of degree N and let A ∈ ℝ^(n×n), where N ≥ n. There exists a polynomial matrix R(s) of degree less than n such that

W_p(A) = R_p(A) and W_l(A) = R_l(A),   (1.4.7)

where W_p(A) and W_l(A) are the right-sided and left-sided values, respectively, of the matrix W(s) with A substituted in place of s.

Proof. Dividing the entries of the matrix W(s) by the characteristic polynomial φ(s) of A, we obtain

W(s) = Q(s)φ(s) + R(s),

where Q(s) and R(s) are the quotient and the remainder, respectively, of the division of W(s) by φ(s), and deg φ(s) = n > deg R(s). With A substituted in place of the scalar s and (1.4.4) taken into account, we obtain

W_p(A) = Q_p(A)φ(A) + R_p(A) = R_p(A)

and

W_l(A) = Q_l(A)φ(A) + R_l(A) = R_l(A). ∎




Example 1.4.3. Given the polynomial matrix

W(s) = [[s⁶ - 5s⁵ - 2s⁴ - s² + 3s - 1, s⁵ - 5s⁴ - 2s³ + s + 1],
        [s⁴ - 5s³ - 3s² + 5s + 3, 2s⁶ - 10s⁵ - 4s⁴ - s + 2]],

one has to compute W_p(A) and W_l(A) for the matrix (1.4.5) using (1.4.7). Dividing every entry of W(s) by the characteristic polynomial φ(s) = s² - 5s - 2 of the matrix A, we obtain

W(s) = [[s⁴ - 1, s³],[s² - 1, 2s⁴]](s² - 5s - 2) + [[-2s - 3, s + 1],[1, -s + 2]],

i.e.,

R(s) = [[-2s - 3, s + 1],[1, -s + 2]] = [[-2, 1],[0, -1]]s + [[-3, 1],[1, 2]].

Hence

W_p(A) = R_p(A) = [[-2, 1],[0, -1]]A + [[-3, 1],[1, 2]] = [[-2, 1],[0, -1]]·[[1, 2],[3, 4]] + [[-3, 1],[1, 2]] = [[-2, 1],[-2, -2]]

and

W_l(A) = R_l(A) = A·[[-2, 1],[0, -1]] + [[-3, 1],[1, 2]] = [[1, 2],[3, 4]]·[[-2, 1],[0, -1]] + [[-3, 1],[1, 2]] = [[-5, 0],[-5, 1]].

1.5 Elementary Operations on Polynomial Matrices

Definition 1.5.1. The following operations are called elementary operations on a polynomial matrix A(s) ∈ ℝ^(m×n)[s]:
1. Multiplication of any i-th row (column) by a number c ≠ 0.
2. Addition to any i-th row (column) of the j-th row (column) multiplied by any polynomial w(s).
3. The interchange of any two rows (columns), e.g., of the i-th and the j-th rows (columns).

1.5 Elementary Operations on Polynomial Matrices Definition 1.5.1. The following operations are called elementary operations on a polynomial matrix A(s) mun[s]: 1. Multiplication of any i-th row (column) by the number c z 0. 2. Addition to any i-th row (column) of the j-th row (column) multiplied by any polynomial w(s). 3. The interchange of any two rows (columns), e.g., of the i-th and the j-th rows (columns).


From now on we will use the following notation:
L[i×c] : multiplication of the i-th row by the number c ≠ 0,
P[i×c] : multiplication of the i-th column by the number c ≠ 0,
L[i+j×w(s)] : addition to the i-th row of the j-th row multiplied by the polynomial w(s),
P[i+j×w(s)] : addition to the i-th column of the j-th column multiplied by the polynomial w(s),
L[i, j] : the interchange of the i-th and the j-th rows,
P[i, j] : the interchange of the i-th and the j-th columns.

It is easy to verify that the above elementary operations, when carried out on rows, are equivalent to pre-multiplication of the matrix A(s) by the following matrices:

L_m(i, c) ∈ ℝ^(m×m) : the identity matrix I_m with the (i, i) entry replaced by c,

L_d(i, j, w(s)) ∈ ℝ^(m×m)[s] : the identity matrix I_m with the polynomial w(s) inserted in position (i, j),

L_z(i, j) ∈ ℝ^(m×m) : the identity matrix I_m with the i-th and j-th rows interchanged.   (1.5.1)

The same operations carried out on columns are equivalent to post-multiplication of the matrix A(s) by the matrices

P_m(i, c) ∈ ℝ^(n×n),  P_d(i, j, w(s)) ∈ ℝ^(n×n)[s],  P_z(i, j) ∈ ℝ^(n×n),   (1.5.2)

defined analogously to (1.5.1) with m replaced by n; in P_d(i, j, w(s)) the polynomial w(s) is inserted in position (j, i), so that post-multiplication adds the j-th column multiplied by w(s) to the i-th column.

It is easy to verify that the determinants of the polynomial matrices (1.5.1) and (1.5.2) are nonzero and do not depend on the variable s. Such matrices are called unimodular matrices.
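The three row-operation matrices of (1.5.1) and the unimodularity claim can be checked mechanically. A sketch with SymPy for m = 3 (`L_mult`, `L_add`, and `L_swap` are helper names introduced here):

```python
import sympy as sp

s = sp.symbols('s')
m = 3

def L_mult(i, c):
    """L[i x c]: identity with c at position (i, i)."""
    M = sp.eye(m)
    M[i, i] = c
    return M

def L_add(i, j, w):
    """L[i + j x w(s)]: identity with the polynomial w(s) at (i, j)."""
    M = sp.eye(m)
    M[i, j] = w
    return M

def L_swap(i, j):
    """L[i, j]: identity with rows i and j interchanged."""
    M = sp.eye(m)
    M[i, i] = M[j, j] = 0
    M[i, j] = M[j, i] = 1
    return M

# determinants are nonzero constants independent of s: unimodular matrices
d1 = L_mult(0, 5).det()
d2 = L_add(2, 0, s**3 + 1).det()
d3 = L_swap(0, 1).det()
```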


1.6 Linear Independence, Space Basis and Rank of Polynomial Matrices

Let a_i = a_i(s), i = 1, ..., n, be the i-th column of a polynomial matrix A(s) ∈ ℝ^(m×n)[s]. We will consider these columns as m-dimensional polynomial vectors, a_i ∈ ℝ^m[s], i = 1, ..., n.

Definition 1.6.1. The vectors a_i ∈ ℝ^m[s] are called linearly dependent over the field of rational functions ℝ(s) if and only if there exist rational functions w_i = w_i(s) ∈ ℝ(s), not all equal to zero, such that

w_1 a_1 + w_2 a_2 + ... + w_n a_n = 0 (the zero vector).   (1.6.1)

Conversely, these vectors are called linearly independent over the field of rational functions if the equality (1.6.1) implies w_i = 0 for i = 1, ..., n. For example, the polynomial vectors

a_1 = [1, s]^T,  a_2 = [s, 1 + s²]^T   (1.6.2)

are linearly independent over the field of rational functions, since the equation

w_1 a_1 + w_2 a_2 = [[1, s],[s, s² + 1]]·[w_1, w_2]^T = [0, 0]^T

has only the zero solution

[w_1, w_2]^T = [[1, s],[s, s² + 1]]^(-1)·[0, 0]^T = [0, 0]^T.

We will show that the rational functions w_i, i = 1, ..., n, in (1.6.1) can be replaced by polynomials p_i = p_i(s), i = 1, ..., n. To accomplish this, we multiply both sides of (1.6.1) by the smallest common denominator of the rational functions w_i, i = 1, ..., n. We then obtain

p_1 a_1 + p_2 a_2 + ... + p_n a_n = 0,   (1.6.3)

where the p_i = p_i(s) are polynomials. For example, the polynomial vectors

a_1 = [1, s]^T,  a_2 = [s + 1, s² + s]^T   (1.6.4)

are linearly dependent over the field of rational functions, since for

w_1 = 1 and w_2 = -1/(s + 1)

we obtain

w_1 a_1 + w_2 a_2 = [1, s]^T - (1/(s + 1))·[s + 1, s² + s]^T = [0, 0]^T.   (1.6.5)

Multiplying both sides of (1.6.5) by the smallest common denominator of the rational functions w_1 and w_2, which is equal to s + 1, we obtain

(s + 1)[1, s]^T - [s + 1, s² + s]^T = [0, 0]^T.

If the number of polynomial vectors of the space ℝ^n[s] is larger than n, then these vectors are linearly dependent. For example, adding to the two linearly independent vectors (1.6.2) an arbitrary vector

a = [a_11, a_21]^T ∈ ℝ²[s],

we obtain linearly dependent vectors, i.e.,

p_1 a_1 + p_2 a_2 + p_3 a = 0   (1.6.6)

for p_1, p_2, p_3 ∈ ℝ[s] not simultaneously equal to zero. Assuming, for example, p_3 = -1, from (1.6.6) and (1.6.2) we obtain

[[1, s],[s, s² + 1]]·[p_1, p_2]^T = [a_11, a_21]^T

and

[p_1, p_2]^T = [[1, s],[s, s² + 1]]^(-1)·[a_11, a_21]^T = [[s² + 1, -s],[-s, 1]]·[a_11, a_21]^T = [(s² + 1)a_11 - s·a_21, -s·a_11 + a_21]^T.

Thus the vectors a_1, a_2, a are linearly dependent for any vector a.

Definition 1.6.2. Polynomial vectors b_i = b_i(s) ∈ ℝ^n[s], i = 1, ..., n, are called a basis of the space ℝ^n[s] if they are linearly independent over the field of rational functions and an arbitrary vector a ∈ ℝ^n[s] from this space can be represented as a linear combination of these vectors, i.e.,

a = p_1 b_1 + p_2 b_2 + ... + p_n b_n,   (1.6.7)

where p_i ∈ ℝ[s], i = 1, ..., n.

There exist many different bases of the same space. For example, for the space ℝ²[s] we can adopt the vectors (1.6.2) as a basis. Solving the system of equations

[a_1 a_2]·[p_1, p_2]^T = [[1, s],[s, s² + 1]]·[p_1, p_2]^T = [a_11, a_21]^T

for an arbitrary vector a = [a_11, a_21]^T ∈ ℝ²[s], we obtain

[p_1, p_2]^T = [[1, s],[s, s² + 1]]^(-1)·[a_11, a_21]^T = [(s² + 1)a_11 - s·a_21, -s·a_11 + a_21]^T.

As a basis of this space we can also adopt

e_1 = [1, 0]^T,  e_2 = [0, 1]^T.

In this case, p_1 = a_11 and p_2 = a_21.

Definition 1.6.3. The number of linearly independent rows (columns) of a polynomial matrix A(s) ∈ ℝ^(n×m)[s] is called its normal rank (briefly, rank). The rank of a polynomial matrix A(s) can also be equivalently defined as the highest order of a minor of this matrix that is a nonzero polynomial. The rank of a matrix A(s) ∈ ℝ^(n×m)[s] is not greater than the number of its rows n or columns m, i.e.,

rank A(s) ≤ min(n, m).   (1.6.8)

If a square matrix A(s) ∈ ℝ^(n×n)[s] is of full rank, i.e., rank A(s) = n, then its determinant is a nonzero polynomial w(s), i.e.,

det A(s) = w(s) ≠ 0.   (1.6.9)

Such a matrix is called nonsingular or invertible. It is called singular when det A(s) = 0 (the zero polynomial). For example, the square matrix built from the linearly independent vectors (1.6.2) is nonsingular, since

det[[1, s],[s, 1 + s²]] = 1,

and the matrix built from the linearly dependent vectors (1.6.4) is singular, since

det[[1, s + 1],[s, s² + s]] = 0.

Theorem 1.6.1. Elementary operations carried out on a polynomial matrix do not change its rank.

Proof. Let

Ā(s) = L(s)A(s)P(s) ∈ ℝ^(n×m)[s],   (1.6.10)

where L(s) ∈ ℝ^(n×n)[s] and P(s) ∈ ℝ^(m×m)[s] are unimodular matrices of elementary operations on rows and columns, respectively. From (1.6.10) we immediately have

rank Ā(s) = rank[L(s)A(s)P(s)] = rank A(s),

since L(s) and P(s) are unimodular matrices. ∎

For example, carrying out the operation L[2+1×(-s)] on the rows of the matrix built from the columns (1.6.2), we obtain

[[1, 0],[-s, 1]]·[[1, s],[s, s² + 1]] = [[1, s],[0, 1]].

Both polynomial matrices [[1, s],[s, s² + 1]] and [[1, s],[0, 1]] are full rank matrices.
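Theorem 1.6.1 and the closing example in code: the elementary row operation L[2+1×(-s)] is pre-multiplication by a unimodular matrix and leaves the rank unchanged. A sketch with SymPy:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[1, s], [s, s**2 + 1]])   # columns are the vectors (1.6.2)
L = sp.Matrix([[1, 0], [-s, 1]])         # L[2 + 1 x (-s)], det L = 1

B = sp.expand(L * A)                     # should give [[1, s], [0, 1]]
```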


1.7 Equivalence of Polynomial Matrices

1.7.1 Left and Right Equivalent Matrices

Definition 1.7.1. Two polynomial matrices $A(s), B(s) \in \mathbb{R}^{m\times n}[s]$ are called left (right), or row (column), equivalent if and only if one of them can be obtained from the other as a result of a finite number of elementary operations carried out on its rows (columns), i.e.,
\[
B(s) = L(s)\, A(s) \quad\text{or}\quad B(s) = A(s)\, P(s) , \tag{1.7.1}
\]
where L(s) (P(s)) is the product of unimodular matrices of elementary operations on rows (columns).

Definition 1.7.2. Two polynomial matrices $A(s), B(s) \in \mathbb{R}^{m\times n}[s]$ are called equivalent if and only if one of them can be obtained from the other as a result of a finite number of elementary operations carried out on its rows and columns, i.e.,
\[
B(s) = L(s)\, A(s)\, P(s) , \tag{1.7.2}
\]
where L(s) and P(s) are the products of unimodular matrices of elementary operations on rows and columns, respectively.

Theorem 1.7.1. A full rank polynomial matrix $A(s) \in \mathbb{R}^{m\times l}[s]$ is left equivalent to an upper triangular matrix of the form
\[
\bar A(s) = L(s)\, A(s) =
\begin{cases}
\begin{bmatrix}
a_{11}(s) & a_{12}(s) & \cdots & a_{1l}(s) \\
0 & a_{22}(s) & \cdots & a_{2l}(s) \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_{ll}(s) \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0
\end{bmatrix}
& \text{for } m > l ,
\\[1ex]
\begin{bmatrix}
a_{11}(s) & a_{12}(s) & \cdots & a_{1m}(s) \\
0 & a_{22}(s) & \cdots & a_{2m}(s) \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_{mm}(s)
\end{bmatrix}
& \text{for } m = l ,
\\[1ex]
\begin{bmatrix}
a_{11}(s) & a_{12}(s) & \cdots & a_{1m}(s) & \cdots & a_{1l}(s) \\
0 & a_{22}(s) & \cdots & a_{2m}(s) & \cdots & a_{2l}(s) \\
\vdots & \vdots & \ddots & \vdots & & \vdots \\
0 & 0 & \cdots & a_{mm}(s) & \cdots & a_{ml}(s)
\end{bmatrix}
& \text{for } m < l ,
\end{cases}
\tag{1.7.3}
\]
where the elements $a_{1i}(s), a_{2i}(s), \ldots, a_{i-1,i}(s)$ are polynomials of degree less than that of $a_{ii}(s)$ for $i = 1,2,\ldots,\min(m,l)$, and L(s) is the product of the unimodular matrices of elementary operations carried out on rows.

Proof. Among the nonzero entries of the first column of the matrix A(s) we choose one that is a polynomial of the lowest degree and, carrying out a row interchange L[1, j], we move it to the position (1,1). Denote this entry by $a_{11}(s)$. Then we divide all remaining entries of the first column by $a_{11}(s)$, obtaining
\[
a_{i1}(s) = a_{11}(s)\, q_{i1}(s) + r_{i1}(s) \quad\text{for } i = 2, 3, \ldots, m ,
\]
where $q_{i1}(s)$ is the quotient and $r_{i1}(s)$ the remainder of the division of $a_{i1}(s)$ by $a_{11}(s)$. Carrying out $L[i+1\times(-q_{i1}(s))]$, we replace the entry $a_{i1}(s)$ with the remainder $r_{i1}(s)$. If not all remainders are equal to zero, then we choose the one that is a polynomial of the lowest degree and, carrying out L[i, j], we move it to the position (1,1). Denoting this remainder by $r_{11}(s)$, we repeat the above procedure, taking $r_{11}(s)$ instead of $a_{11}(s)$. The degree of $r_{11}(s)$ is lower than the degree of $a_{11}(s)$. After a finite number of steps, we obtain a matrix $\tilde A(s)$ of the form
\[
\tilde A(s) = \begin{bmatrix}
a_{11}(s) & a_{12}(s) & \cdots & a_{1l}(s) \\
0 & a_{22}(s) & \cdots & a_{2l}(s) \\
\vdots & \vdots & \ddots & \vdots \\
0 & a_{m2}(s) & \cdots & a_{ml}(s)
\end{bmatrix}.
\]
We repeat the above procedure for the first column of the submatrix obtained from $\tilde A(s)$ by deleting the first row and the first column. We then obtain a matrix of the form
\[
\hat A(s) = \begin{bmatrix}
a_{11}(s) & a_{12}(s) & a_{13}(s) & \cdots & a_{1l}(s) \\
0 & a_{22}(s) & \hat a_{23}(s) & \cdots & \hat a_{2l}(s) \\
0 & 0 & \hat a_{33}(s) & \cdots & \hat a_{3l}(s) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \hat a_{m3}(s) & \cdots & \hat a_{ml}(s)
\end{bmatrix}.
\]
If $a_{12}(s)$ is not a polynomial of lower degree than $a_{22}(s)$, then we divide $a_{12}(s)$ by $a_{22}(s)$ and, carrying out $L[1+2\times(-q_{12}(s))]$, we replace the entry $a_{12}(s)$ with $\bar a_{12}(s) = r_{12}(s)$, where $q_{12}(s)$ and $r_{12}(s)$ are the quotient and the remainder of the division of $a_{12}(s)$ by $a_{22}(s)$, respectively. Next, we consider the submatrix obtained from $\hat A(s)$ by removing the first two rows and the first two columns. Continuing this procedure, we obtain the matrix (1.7.3). □
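The basic step of the proof, dividing one polynomial entry by another to get a quotient and a lower-degree remainder, is exactly polynomial division; a short sympy sketch (the two entries below are illustrative, not taken from a specific matrix):

```python
import sympy as sp

s = sp.symbols('s')
# illustrative entries: divide a_i1(s) = s**3 + 1 by a_11(s) = s**2 - 2
q, r = sp.div(s**3 + 1, s**2 - 2, s)
print(q, r)  # s 2*s + 1   i.e. s**3 + 1 = s*(s**2 - 2) + (2*s + 1)
```

The remainder has lower degree than the divisor, which is what guarantees termination of the reduction procedure.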

An algorithm for determining the left equivalent matrix of the form (1.7.3) follows immediately from the above proof.

Example 1.7.1. The given matrix
\[
A(s) = \begin{bmatrix}
1 & s & 2 \\
s+1 & s+2 & 1 \\
s^2 & s^3+1 & 2s^2
\end{bmatrix}
\]
is to be transformed to the left equivalent form (1.7.3). To accomplish this, we carry out the following elementary operations:
\[
A(s)
\;\xrightarrow[\;L[3+1\times(-s^2)]\;]{\;L[2+1\times(-(s+1))]\;}\;
\begin{bmatrix}
1 & s & 2 \\
0 & 2-s^2 & -2s-1 \\
0 & 1 & 0
\end{bmatrix}
\;\xrightarrow{\;L[2,3]\;}\;
\begin{bmatrix}
1 & s & 2 \\
0 & 1 & 0 \\
0 & 2-s^2 & -2s-1
\end{bmatrix}
\;\xrightarrow[\;L[3+2\times(s^2-2)]\;]{\;L[1+2\times(-s)]\;}\;
\begin{bmatrix}
1 & 0 & 2 \\
0 & 1 & 0 \\
0 & 0 & -2s-1
\end{bmatrix}.
\]
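The chain of row operations in Example 1.7.1 can be replayed step by step; a minimal sympy sketch, assuming the reconstructed entries above:

```python
import sympy as sp

s = sp.symbols('s')
M = sp.Matrix([[1, s, 2],
               [s + 1, s + 2, 1],
               [s**2, s**3 + 1, 2*s**2]])

M[1, :] = M[1, :] - (s + 1)*M[0, :]      # L[2+1x(-(s+1))]
M[2, :] = M[2, :] - s**2*M[0, :]         # L[3+1x(-s^2)]
M = M.applyfunc(sp.expand)
M.row_swap(1, 2)                          # L[2,3]
M[0, :] = M[0, :] - s*M[1, :]             # L[1+2x(-s)]
M[2, :] = M[2, :] + (s**2 - 2)*M[1, :]    # L[3+2x(s^2-2)]
M = M.applyfunc(sp.expand)
print(M)  # Matrix([[1, 0, 2], [0, 1, 0], [0, 0, -2*s - 1]])
```

The result is upper triangular, and in each column the entries above the diagonal have lower degree than the diagonal entry, as Theorem 1.7.1 requires.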

Theorem 1.7.2. A full rank polynomial matrix $A(s) \in \mathbb{R}^{m\times n}[s]$ is right equivalent to a lower triangular matrix of the form
\[
\bar A(s) = A(s)\, P(s) =
\begin{cases}
\begin{bmatrix}
a_{11}(s) & 0 & \cdots & 0 & 0 & \cdots & 0 \\
a_{21}(s) & a_{22}(s) & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
a_{m1}(s) & a_{m2}(s) & \cdots & a_{mm}(s) & 0 & \cdots & 0
\end{bmatrix}
& \text{for } n > m ,
\\[1ex]
\begin{bmatrix}
a_{11}(s) & 0 & \cdots & 0 \\
a_{21}(s) & a_{22}(s) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1}(s) & a_{m2}(s) & \cdots & a_{mm}(s)
\end{bmatrix}
& \text{for } n = m ,
\\[1ex]
\begin{bmatrix}
a_{11}(s) & 0 & \cdots & 0 \\
a_{21}(s) & a_{22}(s) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1}(s) & a_{n2}(s) & \cdots & a_{nn}(s) \\
\vdots & \vdots & & \vdots \\
a_{m1}(s) & a_{m2}(s) & \cdots & a_{mn}(s)
\end{bmatrix}
& \text{for } n < m ,
\end{cases}
\tag{1.7.4}
\]
where the elements $a_{i1}(s), a_{i2}(s), \ldots, a_{i,i-1}(s)$ are polynomials of lower degree than that of $a_{ii}(s)$ for $i = 1,2,\ldots,\min(m,n)$, and P(s) is the product of unimodular matrices of elementary operations carried out on columns.

1.7.2 Row and Column Reduced Matrices

The degree of the i-th column (row) of a polynomial matrix is the highest degree of a polynomial that is an entry of this column (row). The degree of the i-th column (row) of the matrix A(s) will be denoted by $\deg c_i[A(s)]$ ($\deg r_i[A(s)]$) or, shortly, $\deg c_i$ ($\deg r_i$). Let $L_c$ ($L_r$) be the matrix built from the coefficients of the highest powers of the variable s in the columns (rows) of the matrix A(s). For example, for the polynomial matrix
\[
A(s) = \begin{bmatrix}
s^2+1 & s & 3s \\
s+2 & s & 2 \\
s^2 & s+1 & 2s+1
\end{bmatrix}, \tag{1.7.5}
\]

we have $\deg A(s) = 2$,
\[
\deg c_1 = 2, \quad \deg c_2 = \deg c_3 = 1, \qquad \deg r_1 = \deg r_3 = 2, \quad \deg r_2 = 1 ,
\]
and
\[
L_c = \begin{bmatrix} 1 & 1 & 3 \\ 0 & 1 & 0 \\ 1 & 1 & 2 \end{bmatrix}, \qquad
L_r = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}.
\]
The matrix (1.7.5) can be written, using the above matrices, as follows:
\[
A(s) = \begin{bmatrix} 1 & 1 & 3 \\ 0 & 1 & 0 \\ 1 & 1 & 2 \end{bmatrix}
\begin{bmatrix} s^2 & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & s \end{bmatrix}
+ \begin{bmatrix} 1 & 0 & 0 \\ s+2 & 0 & 2 \\ 0 & 1 & 1 \end{bmatrix}
\]

or
\[
A(s) = \begin{bmatrix} s^2 & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & s^2 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}
+ \begin{bmatrix} 1 & s & 3s \\ 2 & 0 & 2 \\ 0 & s+1 & 2s+1 \end{bmatrix}.
\]
In the general case, for a matrix $A(s) \in \mathbb{R}^{m\times n}[s]$ we have
\[
A(s) = L_c \operatorname{diag}\bigl[\, s^{\deg c_1}, s^{\deg c_2}, \ldots, s^{\deg c_n} \,\bigr] + \bar A(s) \tag{1.7.6}
\]
and
\[
A(s) = \operatorname{diag}\bigl[\, s^{\deg r_1}, s^{\deg r_2}, \ldots, s^{\deg r_m} \,\bigr] L_r + \tilde A(s) , \tag{1.7.7}
\]
where $\bar A(s)$ and $\tilde A(s)$ are polynomial matrices satisfying the conditions
\[
\deg c_i \bar A(s) < \deg c_i A(s), \qquad \deg r_i \tilde A(s) < \deg r_i A(s) .
\]

If m = n and $\det L_c \neq 0$, then the determinant of the matrix (1.7.6) is a polynomial of degree
\[
n_c = \sum_{i=1}^{n} \deg c_i ,
\]
since
\[
\det A(s) = \det L_c \, \det \operatorname{diag}\bigl[\, s^{\deg c_1}, \ldots, s^{\deg c_n} \,\bigr] + \ldots = s^{n_c} \det L_c + \ldots ,
\]
where the dots denote terms of lower degree. Similarly, if $\det L_r \neq 0$, then the determinant of the matrix (1.7.7) is a polynomial of degree
\[
n_r = \sum_{j=1}^{m} \deg r_j .
\]

Definition 1.7.3. A polynomial matrix A(s) is said to be column (row) reduced if and only if its matrix $L_c$ ($L_r$) is a full rank matrix.

Thus a square matrix A(s) is column (row) reduced if and only if $\det L_c \neq 0$ ($\det L_r \neq 0$). For example, the matrix (1.7.5) is column reduced but not row reduced, since

\[
\det L_c = \begin{vmatrix} 1 & 1 & 3 \\ 0 & 1 & 0 \\ 1 & 1 & 2 \end{vmatrix} = -1 \neq 0 , \qquad
\det L_r = \begin{vmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \end{vmatrix} = 0 .
\]
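The leading-coefficient matrices $L_c$ and $L_r$ can be assembled programmatically from the column and row degrees; a hedged sympy sketch for the matrix (1.7.5):

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[s**2 + 1, s, 3*s],
               [s + 2, s, 2],
               [s**2, s + 1, 2*s + 1]])

# column and row degrees: the highest entry degree in each column / row
col_deg = [max(sp.degree(A[i, j], s) for i in range(3)) for j in range(3)]
row_deg = [max(sp.degree(A[i, j], s) for j in range(3)) for i in range(3)]

# Lc (Lr): coefficient of s**deg in each entry of the column (row)
Lc = sp.Matrix(3, 3, lambda i, j: A[i, j].coeff(s, col_deg[j]))
Lr = sp.Matrix(3, 3, lambda i, j: A[i, j].coeff(s, row_deg[i]))

print(Lc.det(), Lr.det())  # -1 0  -> column reduced but not row reduced
```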

From the above considerations and Theorems 1.7.1 and 1.7.2, the following important corollary immediately follows.

Corollary 1.7.1. By carrying out elementary operations on columns or rows only, it is possible to transform a nonsingular polynomial matrix to column reduced form or row reduced form, respectively.

1.8 Reduction of Polynomial Matrices to the Smith Canonical Form

Consider a polynomial matrix $A(s) \in \mathbb{R}^{m\times n}[s]$ of rank r.

Definition 1.8.1. A polynomial matrix of the form
\[
A_S(s) = \begin{bmatrix}
i_1(s) & 0 & \cdots & 0 & 0 & \cdots & 0 \\
0 & i_2(s) & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & i_r(s) & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0
\end{bmatrix}
\in \mathbb{R}^{m\times n}[s], \qquad r \le \min(m,n) , \tag{1.8.1}
\]
is called the Smith canonical form of the matrix $A(s) \in \mathbb{R}^{m\times n}[s]$, where $i_1(s), i_2(s), \ldots, i_r(s)$ are nonzero monic polynomials (their coefficients at the highest powers of the variable s are equal to one), called the invariant polynomials, such that the polynomial $i_{k+1}(s)$ is divisible without remainder by the polynomial $i_k(s)$, i.e., $i_k \mid i_{k+1}$ for $k = 1, \ldots, r-1$.

Theorem 1.8.1. For an arbitrary polynomial matrix $A(s) \in \mathbb{R}^{m\times n}[s]$ of rank r ($r \le \min(m,n)$) there exists its equivalent Smith canonical form (1.8.1).

Proof. Among the entries of the matrix A(s) we find a nonzero one that is a polynomial of the lowest degree with respect to s and, interchanging rows and columns, we move it to the position (1,1). Denote this entry by $a_{11}(s)$. Assume at the beginning that all entries of the matrix A(s) are divisible without remainder by $a_{11}(s)$. Dividing the entries $a_{i1}(s)$ of the first column and the entries $a_{1j}(s)$ of the first row by $a_{11}(s)$, we obtain
\[
a_{i1}(s) = a_{11}(s)\, q_{i1}(s) \quad (i = 2, 3, \ldots, m), \qquad
a_{1j}(s) = a_{11}(s)\, q_{1j}(s) \quad (j = 2, 3, \ldots, n),
\]
where $q_{i1}(s)$ and $q_{1j}(s)$ are the quotients of the division of $a_{i1}(s)$ and $a_{1j}(s)$ by $a_{11}(s)$, respectively. Subtracting from the i-th row (i = 2,3,…,m) the first row multiplied by $q_{i1}(s)$ and, respectively, from the j-th column (j = 2,3,…,n) the first column multiplied by $q_{1j}(s)$, we obtain a matrix of the form
\[
\begin{bmatrix}
a_{11}(s) & 0 & \cdots & 0 \\
0 & a_{22}(s) & \cdots & a_{2n}(s) \\
\vdots & \vdots & \ddots & \vdots \\
0 & a_{m2}(s) & \cdots & a_{mn}(s)
\end{bmatrix}. \tag{1.8.2}
\]
If the coefficient at the highest power of s of the polynomial $a_{11}(s)$ is not equal to 1, then we multiply the first row (or column) by the reciprocal of this coefficient.

Assume next that not all entries of the matrix A(s) are divisible without remainder by $a_{11}(s)$ and that such entries are placed in the first row and the first column. Dividing the entries of the first row and the first column by $a_{11}(s)$, we obtain
\[
a_{1i}(s) = a_{11}(s)\, q_{1i}(s) + r_{1i}(s) \quad (i = 2, 3, \ldots, n),
\qquad
a_{j1}(s) = a_{11}(s)\, q_{j1}(s) + r_{j1}(s) \quad (j = 2, 3, \ldots, m),
\]
where $q_{1i}(s)$, $q_{j1}(s)$ are the quotients and $r_{1i}(s)$, $r_{j1}(s)$ are the remainders of the division of $a_{1i}(s)$ and $a_{j1}(s)$ by $a_{11}(s)$, respectively. Subtracting from the j-th row (i-th column) the first row (column) multiplied by $q_{j1}(s)$ (by $q_{1i}(s)$), we replace the entry $a_{j1}(s)$ ($a_{1i}(s)$) by the remainder $r_{j1}(s)$ ($r_{1i}(s)$). Next, among these remainders we find a polynomial of the lowest degree with respect to s and, interchanging rows and columns, we move it to the position (1,1). We denote this polynomial by $r_{11}(s)$. If not all entries of the first row and the first column are divisible without remainder by $r_{11}(s)$, then we repeat this procedure, taking the polynomial $r_{11}(s)$ instead of the polynomial $a_{11}(s)$. The degree of the polynomial $r_{11}(s)$ is lower than the degree of $a_{11}(s)$. After a finite number of steps, we obtain in the position (1,1) a polynomial that divides without remainder all the entries of the first row and the first column.

If an entry $a_{ik}(s)$ is not divisible by $a_{11}(s)$, then by adding the i-th row (or the k-th column) to the first row (the first column), we reduce this case to the previous one. Repeating this procedure, we finally obtain in the position (1,1) a polynomial that divides without remainder all the entries of the matrix. Further, we proceed in the same way as in the first case, when all the entries of the matrix are divisible without remainder by $a_{11}(s)$.

If not all entries $a_{ij}(s)$ (i = 2,3,…,m; j = 2,3,…,n) of the matrix (1.8.2) are equal to zero, then we find a nonzero entry among them that is a polynomial of the lowest degree with respect to s and, interchanging rows and columns, we move it to the position (2,2). Proceeding further as above, we obtain a matrix of the form
\[
\begin{bmatrix}
a_{11}(s) & 0 & 0 & \cdots & 0 \\
0 & a_{22}(s) & 0 & \cdots & 0 \\
0 & 0 & a_{33}(s) & \cdots & a_{3n}(s) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & a_{m3}(s) & \cdots & a_{mn}(s)
\end{bmatrix},
\]

where $a_{22}(s)$ is divisible without remainder by $a_{11}(s)$, and all elements $a_{ij}(s)$ (i = 3,4,…,m; j = 3,4,…,n) are divisible without remainder by $a_{22}(s)$. Continuing this procedure, we obtain a matrix of the Smith canonical form (1.8.1). □

From this proof the following algorithm for determining the Smith canonical form follows immediately, as illustrated by the following example.

Example 1.8.1. To transform the polynomial matrix
\[
A(s) = \begin{bmatrix}
(s+2)^2 & (s+2)(s+3) & s+2 \\
(s+2)(s+3) & (s+2)^2 & s+3
\end{bmatrix} \tag{1.8.3}
\]
to the Smith canonical form, we carry out the following elementary operations.

Step 1: We carry out the operation P[1, 3]:
\[
A_1(s) = \begin{bmatrix}
s+2 & (s+2)(s+3) & (s+2)^2 \\
s+3 & (s+2)^2 & (s+2)(s+3)
\end{bmatrix}.
\]
All entries of this matrix are divisible without remainder by s + 2, with the exception of the entry s + 3.

Step 2: Taking into account the equality
\[
\frac{s+3}{s+2} = 1 + \frac{1}{s+2} ,
\]
we carry out the operation L[2+1×(−1)]:
\[
A_2(s) = \begin{bmatrix}
s+2 & (s+2)(s+3) & (s+2)^2 \\
1 & -(s+2) & s+2
\end{bmatrix}.
\]

Step 3: We carry out the operation L[1, 2]:
\[
A_3(s) = \begin{bmatrix}
1 & -(s+2) & s+2 \\
s+2 & (s+2)(s+3) & (s+2)^2
\end{bmatrix}.
\]

Step 4: We carry out the operations P[2+1×(s+2)] and P[3+1×(−(s+2))]:
\[
A_4(s) = \begin{bmatrix}
1 & 0 & 0 \\
s+2 & (s+2)(2s+5) & 0
\end{bmatrix}.
\]

Step 5: We carry out the operations L[2+1×(−(s+2))] and P[2×(1/2)]:
\[
A_S(s) = \begin{bmatrix}
1 & 0 & 0 \\
0 & (s+2)\left(s+\tfrac{5}{2}\right) & 0
\end{bmatrix}.
\]

This matrix is the desired Smith canonical form of (1.8.3).

From the divisibility of the invariant polynomials, $i_k \mid i_{k+1}$, $k = 1, \ldots, r-1$, it follows that there exist polynomials $d_1, d_2, \ldots, d_r$ such that
\[
i_1 = d_1, \quad i_2 = d_1 d_2, \quad \ldots, \quad i_r = d_1 d_2 \cdots d_r .
\]
Hence the matrix (1.8.1) can be written in the form
\[
A_S(s) = \begin{bmatrix}
d_1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
0 & d_1 d_2 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & d_1 d_2 \cdots d_r & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0
\end{bmatrix}. \tag{1.8.1a}
\]

Theorem 1.8.2. The invariant polynomials $i_1(s), i_2(s), \ldots, i_r(s)$ of the matrix (1.8.1) are uniquely determined by the relationship
\[
i_k(s) = \frac{D_k(s)}{D_{k-1}(s)} \quad\text{for } k = 1, 2, \ldots, r , \tag{1.8.4}
\]
where $D_k(s)$ is the greatest common divisor of all minors of degree k of the matrix A(s), and $D_0(s) = 1$.

Proof. We will show that elementary operations do not change $D_k(s)$. An elementary operation of type 1), consisting of the multiplication of an i-th row (column) by a number c ≠ 0, causes the multiplication of the minors containing this row (column) by the number c; thus it does not change $D_k(s)$. An elementary operation of type 2), consisting of adding to an i-th row (column) the j-th row (column) multiplied by a polynomial w(s), does not change $D_k(s)$ if a minor of degree k contains either both the i-th and the j-th row or neither of them. If a minor of degree k contains the i-th row but not the j-th row, then we can represent it as a linear combination of two minors of degree k of the matrix A(s); hence the greatest common divisor of the minors of degree k does not change. Finally, an operation of type 3), consisting of the interchange of the i-th and j-th rows (columns), does not change $D_k(s)$ either, since as a result of this operation a minor of degree k either does not change (neither of the two rows (columns) belongs to this minor), or changes only its sign (both rows belong to the minor), or is replaced by another minor of degree k of the matrix A(s) (only one of these rows belongs to the minor). Thus equivalent matrices A(s) and $A_S(s)$ have the same divisors $D_1(s), D_2(s), \ldots, D_r(s)$. From the Smith canonical form (1.8.1) it follows that
\[
D_1(s) = i_1(s), \quad D_2(s) = i_1(s)\, i_2(s), \quad \ldots, \quad D_r(s) = i_1(s)\, i_2(s) \cdots i_r(s) . \tag{1.8.5}
\]

From (1.8.5) we immediately obtain the formula (1.8.4). □

Using the polynomials $d_1, d_2, \ldots, d_r$ we can write the relationship (1.8.5) in the form
\[
D_1(s) = d_1, \quad D_2(s) = d_1^2 d_2, \quad \ldots, \quad D_r(s) = d_1^{r} d_2^{r-1} \cdots d_r . \tag{1.8.6}
\]

From Definition 1.8.1 and Theorems 1.8.1 and 1.8.2, the following important corollary can be derived.

Corollary 1.8.1. Two matrices $A(s), B(s) \in \mathbb{R}^{m\times n}[s]$ are equivalent if and only if they have the same invariant polynomials.
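Theorem 1.8.2 gives a direct way to recompute the invariant polynomials of the matrix (1.8.3) from greatest common divisors of minors; a sympy sketch (the normalization of the gcds to monic polynomials is our addition):

```python
import sympy as sp
from functools import reduce
from itertools import combinations

s = sp.symbols('s')
A = sp.Matrix([[(s + 2)**2, (s + 2)*(s + 3), s + 2],
               [(s + 2)*(s + 3), (s + 2)**2, s + 3]])

# D_1: monic gcd of all entries; D_2: monic gcd of all 2x2 minors (D_0 = 1)
D1 = sp.monic(reduce(sp.gcd, [sp.expand(e) for e in A]), s)
D2 = sp.monic(reduce(sp.gcd,
                     [sp.expand(A[:, list(c)].det())
                      for c in combinations(range(A.cols), 2)]), s)

i1 = D1                     # = D_1 / D_0
i2 = sp.cancel(D2 / D1)     # = D_2 / D_1
print(sp.factor(i2))
```

Both results agree with the Smith form obtained in Example 1.8.1: $i_1 = 1$ and $i_2 = (s+2)(s+5/2)$.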

1.9 Elementary Divisors and Zeros of Polynomial Matrices

1.9.1 Elementary Divisors

Consider a polynomial matrix $A(s) \in \mathbb{R}^{m\times n}[s]$ of rank r, whose Smith canonical form $A_S(s)$ is given by the formula (1.8.1). Let the k-th invariant polynomial of this matrix be of the form
\[
i_k(s) = (s - s_1)^{m_{k1}} (s - s_2)^{m_{k2}} \cdots (s - s_q)^{m_{kq}} . \tag{1.9.1}
\]
From the divisibility of the polynomial $i_{k+1}(s)$ by the polynomial $i_k(s)$ it follows that
\[
m_{r,1} \ge m_{r-1,1} \ge \cdots \ge m_{1,1} \ge 0, \quad \ldots, \quad
m_{r,q} \ge m_{r-1,q} \ge \cdots \ge m_{1,q} \ge 0 . \tag{1.9.2}
\]

If, for example, $i_1(s) = 1$, then $m_{11} = m_{12} = \cdots = m_{1q} = 0$.

Definition 1.9.1. Each of the expressions (different from 1)
\[
(s - s_1)^{m_{11}}, \; (s - s_2)^{m_{12}}, \; \ldots, \; (s - s_q)^{m_{rq}}
\]
appearing in the invariant polynomials (1.9.1) is called an elementary divisor of the matrix A(s). For example, the elementary divisors of the polynomial matrix (1.8.3) are $(s+2)$ and $(s+5/2)$.

The elementary divisors of a polynomial matrix are uniquely determined. This follows immediately from the uniqueness of the invariant polynomials of polynomial matrices. Equivalent polynomial matrices possess the same elementary divisors. For a polynomial matrix of known dimensions, its rank together with its elementary divisors uniquely determines its Smith canonical form. For example, knowing the elementary divisors
\[
(s-1), \; (s-1), \; (s-1), \; (s-2), \; (s-2)^2, \; (s-3)
\]
of a polynomial matrix of rank r = 4 and dimension 4×4, we can write the Smith canonical form of this polynomial matrix:
\[
A_S(s) = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & s-1 & 0 & 0 \\
0 & 0 & (s-1)(s-2) & 0 \\
0 & 0 & 0 & (s-1)(s-2)^2(s-3)
\end{bmatrix}. \tag{1.9.3}
\]
Consider a polynomial block-diagonal matrix of the form

\[
A(s) = \operatorname{diag}\,[\, A_1(s), A_2(s) \,] = \begin{bmatrix} A_1(s) & 0 \\ 0 & A_2(s) \end{bmatrix}. \tag{1.9.4}
\]
Let $A_{kS}(s)$ be the Smith canonical form of the matrix $A_k(s)$, k = 1,2, and let
\[
(s - s_{k1})^{m_{11}^{k}}, \; \ldots, \; (s - s_{kq_k})^{m_{r_k q_k}^{k}}
\]
be its elementary divisors. Taking into account that equivalent polynomial matrices have the same elementary divisors, we establish that the set of elementary divisors of the matrix (1.9.4) is the union of the sets of elementary divisors of $A_k(s)$, k = 1,2.

Example 1.9.1. Determine the elementary divisors of the block-diagonal matrix (1.9.4) for

\[
A_1(s) = \begin{bmatrix}
s-1 & 1 & 0 \\
0 & s-1 & 1 \\
0 & 0 & s-1
\end{bmatrix}, \qquad
A_2(s) = \begin{bmatrix}
s-1 & 1 & 0 \\
0 & s-1 & 0 \\
0 & 0 & s-2
\end{bmatrix}. \tag{1.9.5}
\]
It is easy to check that the Smith canonical forms of the matrices (1.9.5) are
\[
A_{1S}(s) = \begin{bmatrix}
1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & (s-1)^3
\end{bmatrix}, \qquad
A_{2S}(s) = \begin{bmatrix}
1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & (s-1)^2 (s-2)
\end{bmatrix}. \tag{1.9.6}
\]
The elementary divisors of the matrices (1.9.5) are thus $(s-1)^3$ and $(s-1)^2$, $(s-2)$, respectively. It is easy to show that the Smith canonical form of the matrix (1.9.4) with the blocks (1.9.5) is equal to
\[
A_S(s) = \operatorname{diag}\bigl[\, 1, \; 1, \; 1, \; 1, \; (s-1)^2, \; (s-1)^3 (s-2) \,\bigr] \tag{1.9.7}
\]

and its elementary divisors are $(s-1)^2$, $(s-1)^3$, $(s-2)$.

Consider a matrix $A \in \mathbb{R}^{n\times n}$ and its corresponding polynomial matrix $[I_n s - A]$. Let
\[
[\, I_n s - A \,]_S = \operatorname{diag}\,[\, i_1(s), i_2(s), \ldots, i_n(s) \,] , \tag{1.9.8}
\]
where
\[
i_k(s) = (s - s_1)^{m_{k1}} (s - s_2)^{m_{k2}} \cdots (s - s_q)^{m_{kq}}, \quad k = 1, \ldots, n , \tag{1.9.9}
\]
and $s_1, s_2, \ldots, s_q$, $q \le n$, are the eigenvalues of the matrix A.

Definition 1.9.2. Each of the expressions (different from 1)
\[
(s - s_1)^{m_{11}}, \; (s - s_2)^{m_{12}}, \; \ldots, \; (s - s_q)^{m_{nq}}
\]
appearing in the invariant polynomials (1.9.9) is called an elementary divisor of the matrix A. The elementary divisors of the matrix A are uniquely determined, and they determine its essential structural properties.

1.9.2 Zeros of Polynomial Matrices

Consider a polynomial matrix $A(s) \in \mathbb{R}^{m\times n}[s]$ of rank r, whose Smith canonical form is given by (1.8.1). From (1.8.5) it follows that
\[
D_r(s) = i_1(s)\, i_2(s) \cdots i_r(s) . \tag{1.9.10}
\]

Definition 1.9.3. The zeros of the polynomial (1.9.10) are called the zeros of the polynomial matrix A(s).

The zeros of the polynomial matrix A(s) can be equivalently defined as those values of the variable s for which this matrix loses its full (normal) rank. For example, for the polynomial matrix (1.8.3) we have
\[
D_r(s) = (s+2)(s+2.5) .
\]
Thus the zeros of the matrix are $s_1^0 = -2$, $s_2^0 = -2.5$. It is easy to verify that for these values of the variable s the matrix (1.8.3) (whose normal rank is equal to 2) has a rank equal to 1. If the polynomial matrix A(s) is square and of full rank r = n, then
\[
\det A(s) = c\, D_r(s) , \qquad \text{where } c \text{ is a constant coefficient independent of } s , \tag{1.9.11}
\]

and the zeros of this matrix coincide with the roots of its characteristic equation det A(s) = 0. For example, for the first of the matrices (1.9.5) we have
\[
\det A_1(s) = \begin{vmatrix}
s-1 & 1 & 0 \\
0 & s-1 & 1 \\
0 & 0 & s-1
\end{vmatrix} = (s-1)^3 .
\]
Thus this matrix has the zero s = 1 of multiplicity 3. The same result is obtained from (1.9.10), since $D_r(s) = (s-1)^3$ for $A_{1S}(s)$.
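Both zero computations can be checked numerically: the normal rank drops exactly at the zeros. A sympy sketch for the matrices (1.8.3) and $A_1(s)$:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[(s + 2)**2, (s + 2)*(s + 3), s + 2],
               [(s + 2)*(s + 3), (s + 2)**2, s + 3]])
A1 = sp.Matrix([[s - 1, 1, 0],
                [0, s - 1, 1],
                [0, 0, s - 1]])

print(A.rank())                              # 2  (normal rank)
print(A.subs(s, -2).rank())                  # 1  (rank drops at the zero s = -2)
print(A.subs(s, sp.Rational(-5, 2)).rank())  # 1  (and at the zero s = -5/2)
print(sp.factor(A1.det()))                   # (s - 1)**3
print(A1.subs(s, 1).rank())                  # 2  (rank drops at the zero s = 1)
```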

Theorem 1.9.1. Let a polynomial matrix $A(s) \in \mathbb{R}^{m\times n}[s]$ have a (normal) rank equal to $r \le \min(m,n)$. Then
\[
\operatorname{rank} A(\tilde s) =
\begin{cases}
r & \text{for } \tilde s \notin \sigma_A , \\
r - d_i & \text{for } \tilde s = s_i \in \sigma_A ,
\end{cases} \tag{1.9.12}
\]
where $\sigma_A$ is the set of the zeros of the matrix A(s) and $d_i$ is the number of distinct elementary divisors containing $s_i$.

Proof. By the definition of a zero, the matrix A(s) does not lose its full rank if we substitute in place of the variable s a number that does not belong to the set $\sigma_A$, i.e., $\operatorname{rank} A(\tilde s) = r$ for $\tilde s \notin \sigma_A$. Elementary operations do not change the rank of a polynomial matrix; in view of this, $\operatorname{rank} A(s) = \operatorname{rank} A_S(s) = r$, where r is the number of invariant polynomials (including those equal to 1). If an invariant polynomial contains $s_i$, then this polynomial is equal to zero for $s = s_i$. Thus we have $\operatorname{rank} A(s_i) = r - d_i$, $s_i \in \sigma_A$, since the number of invariant polynomials containing $s_i$ is equal to the number of distinct elementary divisors containing $s_i$. □

For instance, the polynomial matrix (1.9.3), of full column rank, has one elementary divisor containing $s_1^0 = 3$, two elementary divisors containing $s_2^0 = 2$, and three elementary divisors containing $s_3^0 = 1$. In view of this, according to (1.9.12) we have
\[
\operatorname{rank} A_S(3) = 3, \quad \operatorname{rank} A_S(2) = 2, \quad \operatorname{rank} A_S(1) = 1 .
\]
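The rank formula (1.9.12) can be confirmed directly on the Smith form (1.9.3); a short sympy sketch:

```python
import sympy as sp

s = sp.symbols('s')
AS = sp.diag(1, s - 1, (s - 1)*(s - 2), (s - 1)*(s - 2)**2*(s - 3))

# normal rank r = 4; d_i = number of distinct elementary divisors containing s_i
for s0, d in [(3, 1), (2, 2), (1, 3)]:
    print(s0, AS.subs(s, s0).rank())  # rank equals 4 - d_i
```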

Remark 1.9.1. A unimodular matrix $U(s) \in \mathbb{R}^{n\times n}[s]$ does not have any zeros, since det U(s) = c, where c is a certain nonzero constant independent of the variable s.

Theorem 1.9.2. An arbitrary rectangular polynomial matrix $A(s) \in \mathbb{R}^{m\times n}[s]$ of full rank that does not have any zeros can be written in the form
\[
A(s) =
\begin{cases}
[\, I_m \;\; 0 \,]\, P(s) & \text{for } m < n , \\[1ex]
L(s) \begin{bmatrix} I_n \\ 0 \end{bmatrix} & \text{for } m > n ,
\end{cases} \tag{1.9.13}
\]
where $P(s) \in \mathbb{R}^{n\times n}[s]$ and $L(s) \in \mathbb{R}^{m\times m}[s]$ are unimodular matrices.

Proof. If m < n and the matrix does not have any zeros, then applying elementary operations on columns we can bring this matrix to the form $[\, I_m \;\; 0 \,]$. Similarly, if m > n and the matrix does not have any zeros, then applying elementary operations on rows we can bring this matrix to the form $\begin{bmatrix} I_n \\ 0 \end{bmatrix}$. □

Remark 1.9.2. From the relationship (1.9.13) it follows that a polynomial matrix built from an arbitrary number of rows or columns of a matrix that does not have any zeros never has any zeros.

Theorem 1.9.3. An arbitrary polynomial matrix $A(s) \in \mathbb{R}^{m\times n}[s]$ of rank $r \le \min(m,n)$ having zeros can be presented in the form of the product of matrices
\[
A(s) = B(s)\, C(s) , \tag{1.9.14}
\]
where $B(s) = L^{-1}(s)\, \operatorname{diag}\,[\, i_1(s), \ldots, i_r(s), 0, \ldots, 0 \,] \in \mathbb{R}^{m\times m}[s]$ is a matrix containing all the zeros of the matrix A(s), and
\[
C(s) =
\begin{cases}
[\, I_m \;\; 0 \,]\, P^{-1}(s) & \text{for } n > m , \\[1ex]
P^{-1}(s) & \text{for } n = m , \\[1ex]
\begin{bmatrix} I_n \\ 0 \end{bmatrix} P^{-1}(s) & \text{for } n < m .
\end{cases} \tag{1.9.15}
\]

Proof. Let $L(s) \in \mathbb{R}^{m\times m}[s]$ and $P(s) \in \mathbb{R}^{n\times n}[s]$ be the unimodular matrices of elementary operations on rows and on columns, respectively, reducing the matrix A(s) to the Smith canonical form $A_S(s)$, i.e.,
\[
A_S(s) = L(s)\, A(s)\, P(s) . \tag{1.9.16}
\]
Pre-multiplying (1.9.16) by $L^{-1}(s)$ and post-multiplying it by $P^{-1}(s)$, we obtain
\[
A(s) = L^{-1}(s)\, A_S(s)\, P^{-1}(s) = B(s)\, C(s) ,
\]
since
\[
A_S(s) =
\begin{cases}
\operatorname{diag}\,[\, i_1(s), \ldots, i_r(s), 0, \ldots, 0 \,]\; [\, I_m \;\; 0 \,] & \text{for } n > m , \\[1ex]
\operatorname{diag}\,[\, i_1(s), \ldots, i_r(s), 0, \ldots, 0 \,] & \text{for } n = m , \\[1ex]
\operatorname{diag}\,[\, i_1(s), \ldots, i_r(s), 0, \ldots, 0 \,] \begin{bmatrix} I_n \\ 0 \end{bmatrix} & \text{for } n < m .
\end{cases}
\]
From (1.9.15) it follows that the matrix $C(s) \in \mathbb{R}^{m\times n}[s]$ does not have any zeros, since the matrix $P^{-1}(s)$ is a unimodular matrix. □

1.10 Similarity and Equivalence of First Degree Polynomial Matrices

Definition 1.10.1. Two square matrices A and B of the same dimension are said to be similar if and only if there exists a nonsingular matrix P such that
\[
B = P^{-1} A P , \tag{1.10.1}
\]
and the matrix P is called a similarity transformation matrix.

Theorem 1.10.1. Similar matrices have the same characteristic polynomials, i.e.,
\[
\det\,[\, sI - B \,] = \det\,[\, sI - A \,] . \tag{1.10.2}
\]

Proof. Taking into account (1.10.1), we can write
\[
\det\,[\, sI - B \,] = \det\bigl[\, sP^{-1}P - P^{-1}AP \,\bigr] = \det\bigl[\, P^{-1}(sI - A)P \,\bigr]
= \det P^{-1} \,\det\,[\, sI - A \,]\, \det P = \det\,[\, sI - A \,] ,
\]
since $\det P^{-1} = (\det P)^{-1}$. □
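Theorem 1.10.1 is easy to spot-check in sympy; the matrix P below is an arbitrary nonsingular choice of ours, not one fixed by the text:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[1, 1, 0], [0, 1, 0], [-1, 0, 2]])
P = sp.Matrix([[1, 0, 1], [0, 1, 1], [1, 1, 0]])   # det P = -2, nonsingular
B = P.inv() * A * P                                 # a matrix similar to A

pA = sp.expand((s*sp.eye(3) - A).det())
pB = sp.expand((s*sp.eye(3) - B).det())
assert pA == pB                                     # same characteristic polynomial
print(sp.factor(pA))
```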

Theorem 1.10.2. The polynomial matrices [sI − A] and [sI − B] are equivalent if and only if the matrices A and B are similar.

Proof. First, we show that if the matrices A and B are similar, then the polynomial matrices [sI − A] and [sI − B] are equivalent. If A and B are similar, i.e., they satisfy the relationship (1.10.1), then
\[
[\, sI - B \,] = [\, sI - P^{-1}AP \,] = P^{-1}\,[\, sI - A \,]\, P .
\]
This relationship is a special case (for $L(s) = P^{-1}$ and $P(s) = P$) of the relationship (1.7.2). Thus the polynomial matrices [sI − A] and [sI − B] are equivalent.

We now show that if the matrices [sI − A] and [sI − B] are equivalent, then the matrices A and B are similar. Assuming that the matrices [sI − A] and [sI − B] are equivalent, we have
\[
[\, sI - B \,] = L(s)\,[\, sI - A \,]\, P(s) , \tag{1.10.3}
\]
where L(s) and P(s) are unimodular matrices. The determinant of the matrix L(s) is different from zero and does not depend on the variable s. In view of this, the inverse matrix
\[
Q(s) = L^{-1}(s)
\]
is a polynomial, unimodular matrix as well. Dividing the matrix Q(s) on the left by [sI − A] and the matrix P(s) on the right by [sI − B], we obtain
\[
Q(s) = [\, sI - A \,]\, Q_1(s) + Q_0 , \tag{1.10.4}
\]
\[
P(s) = P_1(s)\,[\, sI - B \,] + P_0 , \tag{1.10.5}
\]
where $Q_1(s)$ and $P_1(s)$ are polynomial matrices, and the matrices $Q_0$ and $P_0$ do not depend on the variable s. Pre-multiplying (1.10.3) by $Q(s) = L^{-1}(s)$, we obtain
\[
Q(s)\,[\, sI - B \,] = [\, sI - A \,]\, P(s) \tag{1.10.6}
\]
and, after substitution of (1.10.4) and (1.10.5) into (1.10.6),
\[
[\, sI - A \,]\,\bigl[\, Q_1(s) - P_1(s) \,\bigr]\,[\, sI - B \,] = [\, sI - A \,]\, P_0 - Q_0\,[\, sI - B \,] . \tag{1.10.7}
\]
Note that the following equality must hold:
\[
Q_1(s) = P_1(s) , \tag{1.10.8}
\]
since otherwise the left-hand side of (1.10.7) would be a matrix polynomial of degree at least 2, and the right-hand side a matrix polynomial of degree at most 1. Taking the equality (1.10.8) into account, from (1.10.7) we obtain
\[
Q_0\,[\, sI - B \,] = [\, sI - A \,]\, P_0 . \tag{1.10.9}
\]
Division of the matrix L(s) on the left by [sI − B] yields
\[
L(s) = [\, sI - B \,]\, L_1(s) + L_0 , \tag{1.10.10}
\]
where $L_1(s)$ is a polynomial matrix and $L_0$ is a matrix independent of the variable s. We will show that the matrices $Q_0$ and $L_0$ are nonsingular matrices satisfying the condition
\[
Q_0 L_0 = I . \tag{1.10.11}
\]
Substitution of (1.10.4) and (1.10.10) into the equality

\[
Q(s)\, L(s) = I
\]
yields
\[
I = Q(s)\, L(s) = \bigl[ (sI - A)\, Q_1(s) + Q_0 \bigr] \bigl[ (sI - B)\, L_1(s) + L_0 \bigr]
= [\, sI - A \,]\, Q_1(s)\,[\, sI - B \,]\, L_1(s) + Q_0\,[\, sI - B \,]\, L_1(s)
+ [\, sI - A \,]\, Q_1(s)\, L_0 + Q_0 L_0 . \tag{1.10.12}
\]
Note that this equality can be satisfied if and only if
\[
[\, sI - A \,]\, Q_1(s)\,[\, sI - B \,]\, L_1(s) + Q_0\,[\, sI - B \,]\, L_1(s) + [\, sI - A \,]\, Q_1(s)\, L_0 = 0 , \tag{1.10.13}
\]
since otherwise the left-hand side of (1.10.12) would be a matrix polynomial of degree zero and the right-hand side a matrix polynomial of at least the first degree. With (1.10.13) taken into account, from (1.10.12) we obtain the equality (1.10.11). From this equality the nonsingularity of the matrices $Q_0$ and $L_0$, as well as the equality $L_0 = Q_0^{-1}$, follow immediately. Pre-multiplication of (1.10.9) by $Q_0^{-1} = L_0$ yields
\[
[\, sI - B \,] = L_0\,[\, sI - A \,]\, P_0
\]
and
\[
B = L_0 A P_0 , \qquad L_0 P_0 = I .
\]
From these relationships it follows that the matrices A and B are similar. □

Theorem 1.10.3. The matrices A and B are similar if and only if the matrices [sI − A] and [sI − B] have the same invariant polynomials.

Proof. According to Corollary 1.8.1, two polynomial matrices are equivalent if and only if they have the same invariant polynomials. From Theorem 1.10.2 it follows immediately that the polynomial matrices [sI − A] and [sI − B] are equivalent, and hence have the same invariant polynomials, if and only if the matrices A and B are similar. Thus the matrices A and B are similar if and only if the matrices [sI − A] and [sI − B] have the same invariant polynomials. □

1.11 Computation of the Frobenius and Jordan Canonical Forms of Matrices

1.11.1 Computation of the Frobenius Canonical Form of a Square Matrix

Consider n×n matrices of the form
\[
F = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_0 & -a_1 & -a_2 & \cdots & -a_{n-1}
\end{bmatrix}, \qquad
\bar F = \begin{bmatrix}
0 & 0 & \cdots & 0 & -a_0 \\
1 & 0 & \cdots & 0 & -a_1 \\
0 & 1 & \cdots & 0 & -a_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -a_{n-1}
\end{bmatrix},
\]
\[
\hat F = \begin{bmatrix}
-a_{n-1} & -a_{n-2} & \cdots & -a_1 & -a_0 \\
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{bmatrix}, \qquad
\tilde F = \begin{bmatrix}
-a_{n-1} & 1 & 0 & \cdots & 0 \\
-a_{n-2} & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-a_1 & 0 & 0 & \cdots & 1 \\
-a_0 & 0 & 0 & \cdots & 0
\end{bmatrix}. \tag{1.11.1}
\]
We say that the matrices in (1.11.1) have Frobenius canonical forms (or normal canonical forms). Expanding along the row (or the column) containing $a_0, a_1, \ldots, a_{n-1}$, it is easy to show that
\[
\det\,[\, I_n s - F \,] = \det\,[\, I_n s - \bar F \,] = \det\,[\, I_n s - \hat F \,] = \det\,[\, I_n s - \tilde F \,]
= s^n + a_{n-1} s^{n-1} + \ldots + a_1 s + a_0 . \tag{1.11.2}
\]

We will show that the polynomial (1.11.2) is the only invariant polynomial of the matrices (1.11.1) different from 1. Detailed considerations will be given only for the matrix F; the proof in the other three cases is similar. Deleting the first column and the n-th row in the matrix
\[
[\, I_n s - F \,] = \begin{bmatrix}
s & -1 & 0 & \cdots & 0 \\
0 & s & -1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & -1 \\
a_0 & a_1 & a_2 & \cdots & s + a_{n-1}
\end{bmatrix}, \tag{1.11.3}
\]
we obtain a minor $M_{n-1}$ equal to $(-1)^{n-1}$. With the above in mind, the greatest common divisor of all minors of degree n − 1 of this matrix is equal to 1, i.e., $D_{n-1}(s) = 1$. From the relationship (1.8.4) it follows that the polynomial (1.11.2) is the only invariant polynomial of the matrix F different from 1.

Let $A \in \mathbb{R}^{n\times n}$ and let the monic polynomials
\[
i_1(s) = 1, \; \ldots, \; i_p(s) = 1, \; i_{p+1}(s), \; \ldots, \; i_n(s)
\]

be the invariant polynomials of the polynomial matrix $[I_n s - A]$, where $i_{p+1}(s), \ldots, i_n(s)$ are polynomials of at least the first degree such that $i_k(s)$ divides (without remainder) $i_{k+1}(s)$ (k = p+1,…,n−1). The matrix [sI − A] reduced to the Smith canonical form is then
\[
[\, sI - A \,]_S = \operatorname{diag}\,[\, 1, \ldots, 1, \; i_{p+1}(s), \ldots, i_n(s) \,] . \tag{1.11.4}
\]
Let $F_{p+1}, \ldots, F_n$ be the matrices of the form (1.11.1) that correspond to the invariant polynomials $i_{p+1}(s), \ldots, i_n(s)$. From the considerations of Sect. 1.10 it follows that the quasi-diagonal matrix
\[
F_A = \begin{bmatrix}
F_{p+1} & 0 & \cdots & 0 \\
0 & F_{p+2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & F_n
\end{bmatrix} \tag{1.11.5}
\]
and the matrix A have the same invariant polynomials. Thus, according to Theorem 1.10.2, the matrices A and $F_A$ are similar. Hence there exists a nonsingular matrix P such that
\[
A = P\, F_A\, P^{-1} . \tag{1.11.6}
\]
The matrix $F_A$ given by (1.11.5) is called a Frobenius canonical form or a normal canonical form of the square matrix A. Thus the following important theorem has been proved.

Theorem 1.11.1. For every matrix $A \in \mathbb{R}^{n\times n}$ there exists a nonsingular matrix $P \in \mathbb{R}^{n\times n}$ such that the equality (1.11.6) holds.

Example 1.11.1. The following matrix is given:

\[
A = \begin{bmatrix}
1 & 1 & 0 \\
0 & 1 & 0 \\
-1 & 0 & 2
\end{bmatrix}. \tag{1.11.7}
\]
Carrying out the elementary operations P[1+2×(s−1)], L[2+1×(s−1)], P[3+1×(−s+2)], L[2×(−1)], L[2+3×(s−1)²], L[1×(−1)], L[2, 3], L[1, 2] on the matrix
\[
[\, sI_3 - A \,] = \begin{bmatrix}
s-1 & -1 & 0 \\
0 & s-1 & 0 \\
1 & 0 & s-2
\end{bmatrix},
\]
we transform this matrix to its Smith canonical form
\[
[\, sI_3 - A \,]_S = \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & (s-1)^2 (s-2)
\end{bmatrix}.
\]
Thus the matrix A has only one invariant polynomial different from one:
\[
i_3(s) = (s-1)^2 (s-2) = s^3 - 4s^2 + 5s - 2 .
\]

In view of this, the Frobenius canonical form of the matrix (1.11.7) is the following:
\[
F_A = \begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
2 & -5 & 4
\end{bmatrix}. \tag{1.11.8}
\]
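A quick check of Example 1.11.1: the matrix (1.11.7) and its Frobenius form (1.11.8) share the characteristic polynomial $s^3 - 4s^2 + 5s - 2$, as similarity requires (sympy sketch):

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[1, 1, 0], [0, 1, 0], [-1, 0, 2]])       # matrix (1.11.7)
FA = sp.Matrix([[0, 1, 0], [0, 0, 1], [2, -5, 4]])      # Frobenius form (1.11.8)

pA = sp.expand((s*sp.eye(3) - A).det())
pF = sp.expand((s*sp.eye(3) - FA).det())
print(pA)  # s**3 - 4*s**2 + 5*s - 2
assert pA == pF
```

The bottom row of $F_A$ reads off the negated coefficients $-a_0, -a_1, -a_2$ of this polynomial, which is how the companion form is built.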

1.11.2 Computation of the Jordan Canonical Form of a Square Matrix

Consider an elementary divisor of the form
\[
(s - s_0)^m . \tag{1.11.9}
\]
We will show that the polynomial (1.11.9) is the only elementary divisor of a square matrix of the form
\[
J = J(s_0, m) = \begin{bmatrix}
s_0 & 1 & 0 & \cdots & 0 \\
0 & s_0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
0 & 0 & 0 & \cdots & s_0
\end{bmatrix} \in \mathbb{R}^{m\times m} \tag{1.11.10a}
\]
or
\[
J' = J'(s_0, m) = \begin{bmatrix}
s_0 & 0 & \cdots & 0 & 0 \\
1 & s_0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & s_0 & 0 \\
0 & 0 & \cdots & 1 & s_0
\end{bmatrix} \in \mathbb{R}^{m\times m} . \tag{1.11.10b}
\]

The determinant of the polynomial matrix

[ sI m  J ]

ª s  s0 « 0 « « # « « 0 «¬ 0

1 s  s0 # 0 0

0 1 # 0 0

... 0 ... 0 % # ... s  s0 ... 0

0 º 0 »» # » » 1 » s  s0 »¼

(1.11.11)

is equal to the polynomial (1.11.9). The minor M_{m−1} obtained from the matrix (1.11.11) by removing the first column and the m-th row is equal to (−1)^{m−1}. Thus a greatest common divisor of all minors of degree m−1 of the matrix (1.11.11) is equal to 1, D_{m−1}(s) = 1. From (1.8.4) it follows that the polynomial (1.11.9) is the only invariant polynomial of the matrix (1.11.11) different from 1. The proof for the matrix J_c is similar. The matrices J and J_c are called Jordan blocks of the first and the second type, respectively. If q elementary divisors correspond to one eigenvalue, then q Jordan blocks correspond to this eigenvalue. Let J_1, J_2, …, J_p be Jordan blocks of the form (1.11.10a) (or (1.11.10b)) corresponding to the elementary divisors of the matrix A, where p is the number of elementary divisors of this matrix. Note that all these elementary divisors of the matrix A are also the elementary divisors of a quasi-diagonal matrix of the form

\[
J_A=\begin{bmatrix}
J_1 & 0 & \cdots & 0 \\
0 & J_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & J_p
\end{bmatrix}\in\mathbb{R}^{n\times n}.
\tag{1.11.12}
\]

Matrices having the same elementary divisors also have the same invariant polynomials. In view of this, according to Theorem 1.10.2, the matrices A and J_A, having the same invariant polynomials, are similar. Thus there exists a nonsingular matrix T such that

\[
A=T\,J_A\,T^{-1}.
\tag{1.11.13}
\]

The matrix (1.11.12) is called the Jordan canonical form of the matrix A, or shortly the Jordan matrix. Thus the following important theorem has been proved.

Theorem 1.11.2. For every matrix A ∈ ℝ^{n×n} there exists a nonsingular matrix T ∈ ℝ^{n×n} such that the equality (1.11.13) holds. If all elementary divisors of the matrix A are of the first degree (in the relationship (1.11.9), m = 1), then the Jordan matrix is diagonal. Thus we have the following important corollary.

Corollary 1.11.1. A matrix A is similar to the diagonal matrix consisting of its eigenvalues if and only if all its elementary divisors are of the first degree.

Example 1.11.2. The matrix (1.11.7) has only one invariant polynomial different from one, equal to i(s) = (s + 1)²(s + 2). Thus this matrix has two elementary divisors, (s + 1)² and (s + 2). Hence the Jordan canonical form of the matrix (1.11.7) is equal to

\[
J_A=\begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -2 \end{bmatrix}.
\]
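The block structure can be confirmed with sympy's `jordan_form` (a convenience of that library, not a method used in the text):

```python
import sympy as sp

A = sp.Matrix([[-1, 1, 0], [0, -1, 0], [-1, 0, -2]])   # matrix (1.11.7)
T, J = A.jordan_form()          # returns T, J with A = T * J * T^{-1}

assert sp.simplify(T * J * T.inv() - A) == sp.zeros(3, 3)
# eigenvalues -1 (one 2x2 block) and -2 (one 1x1 block)
assert sorted(J[i, i] for i in range(3)) == [-2, -1, -1]
```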

1.12 Computation of Similarity Transformation Matrices

1.12.1 Matrix Pair Method

A cyclic matrix A ∈ ℝ^{n×n} and its Frobenius form F_A are given. Compute a nonsingular matrix P ∈ ℝ^{n×n} such that


\[
P A P^{-1}=F_A=\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_0 & -a_1 & -a_2 & \cdots & -a_{n-1}
\end{bmatrix}.
\tag{1.12.1}
\]

For the given matrix A we choose a row matrix c ∈ ℝ^{1×n} such that

\[
\det\begin{bmatrix} c \\ cA \\ \vdots \\ cA^{n-1} \end{bmatrix}\neq 0.
\tag{1.12.2}
\]

Almost every matrix c chosen by a trial and error method will satisfy the condition (1.12.2), since the rows c that fail it lie on a hypersurface in the parameter space. We choose the matrix P in such a way that the condition (1.12.1) holds and

\[
cP^{-1}=[\,1\;\;0\;\;\cdots\;\;0\,]\in\mathbb{R}^{1\times n}.
\tag{1.12.3}
\]

Let p_i (i = 1, 2, …, n) be the i-th row of the matrix P. Using (1.12.1) and (1.12.3), we can write

\[
\begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix}A=
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_0 & -a_1 & -a_2 & \cdots & -a_{n-1}
\end{bmatrix}
\begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix}
\tag{1.12.4}
\]

and

\[
c=[\,1\;\;0\;\;\cdots\;\;0\,]\begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix}.
\]

Carrying out the multiplication and comparing appropriate rows in (1.12.4), we obtain

\[
p_1=c,\quad p_2=p_1A,\quad p_3=p_2A,\ \ldots,\ p_n=p_{n-1}A.
\tag{1.12.5}
\]

Using (1.12.5) we can compute the unknown rows p1,p2,…,pn of the matrix P. Thus we have the following procedure for computation of the matrix P.


Procedure 1.12.1.
Step 1: Compute the coefficients a_0, a_1, …, a_{n−1} of the polynomial

\[
\det[I_n s-A]=s^n+a_{n-1}s^{n-1}+\cdots+a_1 s+a_0.
\tag{1.12.6}
\]

Step 2: Knowing a_0, a_1, …, a_{n−1}, compute the matrix F_A.
Step 3: Choose c ∈ ℝ^{1×n} such that the condition (1.12.2) holds.
Step 4: Using (1.12.5), compute the rows p_1, p_2, …, p_n of the matrix P.

Example 1.12.1. The following cyclic matrix is given:

\[
A=\begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ -1 & 0 & -2 \end{bmatrix}.
\tag{1.12.7}
\]

One has to compute a matrix P transforming this matrix by similarity to the Frobenius canonical form F_A. Using Procedure 1.12.1, we obtain the following.

Step 1: The characteristic polynomial of the matrix (1.12.7) has the form

\[
\det[I_n s-A]=\begin{vmatrix} s+1 & -1 & 0 \\ 0 & s+1 & 0 \\ 1 & 0 & s+2 \end{vmatrix}=(s+2)(s+1)^2=s^3+4s^2+5s+2.
\tag{1.12.8}
\]

Step 2: Thus the matrix F_A has the form

\[
F_A=\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -5 & -4 \end{bmatrix}.
\tag{1.12.9}
\]

Step 3: We choose c = [1 0 −1], which satisfies the condition (1.12.2), since

\[
\det\begin{bmatrix} c \\ cA \\ cA^2 \end{bmatrix}=\begin{vmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ -2 & -1 & -4 \end{vmatrix}=-4.
\]

Step 4: Using (1.12.5), we obtain

\[
p_1=c=[\,1\;\;0\;\;-1\,],\quad
p_2=p_1A=[\,0\;\;1\;\;2\,],\quad
p_3=p_2A=[\,-2\;\;-1\;\;-4\,].
\]


Thus the matrix P has the form

\[
P=\begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix}=\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ -2 & -1 & -4 \end{bmatrix}.
\tag{1.12.10}
\]

If we search for a matrix P̄ that satisfies the condition

\[
\bar{P}^{-1}A\bar{P}=\bar{F}_A=\begin{bmatrix}
0 & 0 & \cdots & 0 & -a_0 \\
1 & 0 & \cdots & 0 & -a_1 \\
0 & 1 & \cdots & 0 & -a_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -a_{n-1}
\end{bmatrix},
\tag{1.12.11}
\]

then it is convenient to choose a column matrix b ∈ ℝ^n in such a way that

\[
\det[\,b,\ Ab,\ \ldots,\ A^{n-1}b\,]\neq 0.
\tag{1.12.12}
\]

Let p̄_i (i = 1, …, n) be the i-th column of the matrix P̄. Using (1.12.11) and P̄^{-1}b = [1 0 … 0]^T ∈ ℝ^n, we can write

\[
A[\,\bar{p}_1\;\;\bar{p}_2\;\cdots\;\bar{p}_n\,]=
[\,\bar{p}_1\;\;\bar{p}_2\;\cdots\;\bar{p}_n\,]
\begin{bmatrix}
0 & 0 & \cdots & 0 & -a_0 \\
1 & 0 & \cdots & 0 & -a_1 \\
0 & 1 & \cdots & 0 & -a_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -a_{n-1}
\end{bmatrix},\qquad
b=[\,\bar{p}_1\;\;\bar{p}_2\;\cdots\;\bar{p}_n\,]\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}.
\tag{1.12.13}
\]

Multiplying and comparing appropriate columns in (1.12.13), we obtain

\[
\bar{p}_1=b,\quad \bar{p}_2=A\bar{p}_1,\quad \bar{p}_3=A\bar{p}_2,\ \ldots,\ \bar{p}_n=A\bar{p}_{n-1}.
\tag{1.12.14}
\]

Using (1.12.14), we can successively compute the columns p̄_1, p̄_2, …, p̄_n of the matrix P̄. Thus we have the following procedure for computation of the matrix P̄.


Procedure 1.12.2.
Step 1: The same as in Procedure 1.12.1.
Step 2: Knowing the coefficients a_0, a_1, …, a_{n−1} of the polynomial (1.12.6), compute the matrix F̄_A.
Step 3: Choose b ∈ ℝ^n such that the condition (1.12.12) is satisfied.
Step 4: Using (1.12.14), compute the columns p̄_1, p̄_2, …, p̄_n of the matrix P̄.

Example 1.12.2. Find a matrix P̄ transforming the matrix (1.12.7) by similarity into its canonical form F̄_A. Using Procedure 1.12.2 we obtain the following.
Step 1: The characteristic polynomial of the matrix (1.12.7) has the form (1.12.8).
Step 2: Thus the matrix F̄_A has the form

\[
\bar{F}_A=\begin{bmatrix} 0 & 0 & -2 \\ 1 & 0 & -5 \\ 0 & 1 & -4 \end{bmatrix}.
\tag{1.12.15}
\]

Step 3: We choose b = [0 1 −1]^T, which satisfies the condition (1.12.12), since

\[
\det[\,b,\ Ab,\ A^2b\,]=\begin{vmatrix} 0 & 1 & -2 \\ 1 & -1 & 1 \\ -1 & 2 & -5 \end{vmatrix}=2.
\]

Step 4: Using (1.12.14), we obtain

\[
\bar{p}_1=\begin{bmatrix}0\\1\\-1\end{bmatrix},\qquad
\bar{p}_2=A\bar{p}_1=\begin{bmatrix}1\\-1\\2\end{bmatrix},\qquad
\bar{p}_3=A\bar{p}_2=\begin{bmatrix}-2\\1\\-5\end{bmatrix}.
\]

Thus the desired matrix has the form

\[
\bar{P}=[\,\bar{p}_1\;\;\bar{p}_2\;\;\bar{p}_3\,]=\begin{bmatrix} 0 & 1 & -2 \\ 1 & -1 & 1 \\ -1 & 2 & -5 \end{bmatrix}.
\]

The above considerations can be generalised to the remaining Frobenius canonical forms F̂_A and F̃_A of the matrix A.


1.12.2 Elementary Operations Method

Substituting the matrix B in place of the variable s into (1.10.5) and (1.10.10), we obtain

\[
P(B)=P_0,\qquad L(B)=L_0.
\tag{1.12.16}
\]

Thus from the relationship B = L_0AP_0 it follows that if the matrices A and B are similar, i.e., B = P^{-1}AP, then the transformation matrix P is given by the formula

\[
P=P(B)=[L(B)]^{-1},
\tag{1.12.17}
\]

where P(s) and L(s) are unimodular matrices in the equality

\[
[sI-B]=L(s)[sI-A]P(s).
\tag{1.12.18}
\]

To compute P(s), using elementary operations we reduce the matrices [sI − A] and [sI − B] to the Smith canonical form:

\[
[sI-A]_S=L_1(s)[sI-A]P_1(s),
\tag{1.12.19}
\]
\[
[sI-B]_S=L_2(s)[sI-B]P_2(s),
\tag{1.12.20}
\]

where

\[
P_1(s)=P_{11}(s)P_{12}(s)\cdots P_{1k_1}(s),
\tag{1.12.21}
\]
\[
P_2(s)=P_{21}(s)P_{22}(s)\cdots P_{2k_2}(s),
\tag{1.12.22}
\]

and P_{11}(s), P_{12}(s), …, P_{1k_1}(s) and P_{21}(s), P_{22}(s), …, P_{2k_2}(s) are the matrices of elementary operations carried out on the columns of [sI − A] and [sI − B], respectively. The matrices L_1(s) and L_2(s) are defined similarly. Similarity of the matrices A and B implies that

\[
[sI-A]_S=[sI-B]_S.
\]

Taking into account (1.12.19) and (1.12.20), we obtain

\[
L_2(s)[sI-B]P_2(s)=L_1(s)[sI-A]P_1(s),
\]

i.e.,

\[
[sI-B]=L_2^{-1}(s)L_1(s)[sI-A]P_1(s)P_2^{-1}(s).
\tag{1.12.23}
\]


From a comparison of (1.12.18) with (1.12.23), and of (1.12.21) with (1.12.22), we obtain

\[
P(s)=P_1(s)P_2^{-1}(s)=P_{11}(s)P_{12}(s)\cdots P_{1k_1}(s)\,P_{2k_2}^{-1}(s)\cdots P_{22}^{-1}(s)P_{21}^{-1}(s).
\tag{1.12.24}
\]

Thus we compute the matrix P(s) by carrying out on the identity matrix the elementary operations given by the matrices P_{11}(s), P_{12}(s), …, P_{1k_1}(s), P_{2k_2}^{-1}(s), …, P_{22}^{-1}(s), P_{21}^{-1}(s).

When computing the inverses of the matrices of elementary operations, we use the following relationships:

\[
P^{-1}[i\times c]=P\Big[i\times\frac{1}{c}\Big],\qquad
P^{-1}[i+j\times b(s)]=P[i+j\times(-b(s))],\qquad
P^{-1}[i,j]=P[j,i]=P[i,j].
\tag{1.12.25}
\]

From the above considerations, the following algorithm for computation of the matrix P can be inferred.

Algorithm 1.12.1.
Step 1: Transforming the matrices [sI − B] and [sI − A] to their Smith canonical forms, determine the sequence of elementary operations given by the matrices P_{11}(s), P_{12}(s), …, P_{1k_1}(s), P_{21}(s), P_{22}(s), …, P_{2k_2}(s).
Step 2: Carrying out the elementary operations given by the matrices P_{11}(s), P_{12}(s), …, P_{1k_1}(s), P_{2k_2}^{-1}(s), …, P_{22}^{-1}(s), P_{21}^{-1}(s) on the identity matrix, compute the matrix P(s).
Step 3: Substituting the matrix B in place of s in the matrix P(s), compute the matrix P = P(B).

Example 1.12.3. Compute a matrix P that transforms the matrix (1.11.7) to the Frobenius canonical form (1.11.8). In this case, the matrix F_A is the matrix B.

Step 1: To reduce the matrix

\[
[sI-F_A]=\begin{bmatrix} s & -1 & 0 \\ 0 & s & -1 \\ 2 & 5 & s+4 \end{bmatrix}
\]

to its Smith canonical form

\[
[sI-F_A]_S=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & (s+1)^2(s+2) \end{bmatrix},
\]

the following elementary operations need to be carried out: L[3 + 2×(s+4)], P[2 + 3×s], P[1 + 2×s], L[3 + 1×(s² + 4s + 5)], L[1×(−1)], L[2×(−1)], P[2, 3], P[1, 2].

Step 2: In Example 1.11.1, to reduce the matrix [I₃s − A] to the Smith canonical form, the following elementary operations were applied: P[1 + 2×(s+1)], L[2 + 1×(s+1)], P[3 + 1×(s+2)], L[2×(−1)], L[2 + 3×(s+1)²], L[1×(−1)], L[2, 3], L[1, 2].

To compute the matrix P(s), the following elementary operations have to be carried out on the columns of the identity matrix of order three: P[1 + 2×(s+1)], P[3 + 1×(s+2)], P[2, 3], P[1, 2], P[1 + 2×(−s)], P[2 + 3×(−s)].

Then we obtain

\[
P(s)=\begin{bmatrix} -2(s+1) & 1 & 0 \\ -2(s+1)^2 & 1 & 1 \\ 1 & 0 & 0 \end{bmatrix}=
\begin{bmatrix} 0 & 0 & 0 \\ -2 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}s^2+
\begin{bmatrix} -2 & 0 & 0 \\ -4 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}s+
\begin{bmatrix} -2 & 1 & 0 \\ -2 & 1 & 1 \\ 1 & 0 & 0 \end{bmatrix}.
\]

Step 3: We substitute the matrix

\[
F_A=\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -5 & -4 \end{bmatrix}
\]

in place of the variable s. We obtain

\[
P=P(F_A)=\begin{bmatrix} -2 & -1 & 0 \\ -2 & -3 & -1 \\ 1 & 0 & 0 \end{bmatrix}.
\]

It is easy to check that this matrix transforms the matrix (1.11.7) into the form F_A.
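That final check can be written out explicitly (numpy sketch):

```python
import numpy as np

A = np.array([[-1., 1., 0.], [0., -1., 0.], [-1., 0., -2.]])   # (1.11.7)
F = np.array([[0., 1., 0.], [0., 0., 1.], [-2., -5., -4.]])    # (1.11.8)
P = np.array([[-2., -1., 0.], [-2., -3., -1.], [1., 0., 0.]])  # P = P(F_A)

# similarity relation F_A = P^{-1} A P from (1.12.17)/(1.12.18)
assert np.allclose(np.linalg.inv(P) @ A @ P, F)
```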

1.12.3 Eigenvectors Method

Let a matrix A ∈ ℝ^{n×n} and its Jordan canonical form (1.11.12), containing p blocks of the form (1.11.10a), be given. Let the i-th block, corresponding to the eigenvalue s_i, have the dimensions m_i×m_i (i = 1, …, p). The following matrix

\[
T=[\,T_1\;\;T_2\;\cdots\;T_p\,],\qquad
T_i=[\,t_{i1}\;\;t_{i2}\;\cdots\;t_{im_i}\,],
\tag{1.12.26}
\]

satisfying (1.11.13), is to be computed. Post-multiplying (1.11.13) by T, we obtain

\[
AT=TJ_A
\]

and, after taking into account (1.12.26), (1.11.10) and (1.11.12),

\[
AT_i=T_iJ_i\quad\text{for } i=1,\ldots,p,
\]

and

\[
[A-Is_i]\,t_{i1}=0,\quad
[A-Is_i]\,t_{i2}=t_{i1},\ \ldots,\
[A-Is_i]\,t_{im_i}=t_{i,m_i-1},\qquad i=1,\ldots,p.
\tag{1.12.27}
\]

For the eigenvalue s_i, from the first of the equations (1.12.27) we compute the column t_{i1}; knowing t_{i1}, we compute from the second equation the column t_{i2}, and finally from the last equation we compute the column t_{im_i}. Repeating these computations successively for i = 1, 2, …, p, we obtain the desired matrix (1.12.26).

Example 1.12.4. Compute the matrix T transforming the matrix

\[
A=\begin{bmatrix} 1 & 2 & 1 & 0 \\ -1 & 3 & 1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\tag{1.12.28}
\]

to its Jordan canonical form

\[
J_A=\begin{bmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.
\tag{1.12.29}
\]

From (1.12.29) it follows that the matrix (1.12.28) has one eigenvalue s₁ = 2 of multiplicity 3 and one eigenvalue s₂ = 1 of multiplicity 1. In this case, the matrix (1.12.26) has the form

\[
T=[\,T_1\;\;T_2\,]=[\,t_{11}\;\;t_{12}\;\;t_{13}\;\;t_{21}\,].
\]

For i = 1 the equations (1.12.27) take the form

\[
[A-Is_1]\,t_{11}=\begin{bmatrix} -1 & 2 & 1 & 0 \\ -1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}t_{11}=\begin{bmatrix}0\\0\\0\\0\end{bmatrix},
\]
\[
[A-Is_1]\,t_{12}=\begin{bmatrix} -1 & 2 & 1 & 0 \\ -1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}t_{12}=t_{11},
\]
\[
[A-Is_1]\,t_{13}=\begin{bmatrix} -1 & 2 & 1 & 0 \\ -1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}t_{13}=t_{12},
\]

and for i = 2

\[
[A-Is_2]\,t_{21}=\begin{bmatrix} 0 & 2 & 1 & 0 \\ -1 & 2 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}t_{21}=\begin{bmatrix}0\\0\\0\\0\end{bmatrix}.
\]

Solving these equations successively, we obtain

\[
t_{11}=\begin{bmatrix}1\\0\\1\\0\end{bmatrix},\quad
t_{12}=\begin{bmatrix}1\\1\\0\\0\end{bmatrix},\quad
t_{13}=\begin{bmatrix}0\\0\\1\\0\end{bmatrix},\quad
t_{21}=\begin{bmatrix}0\\0\\0\\1\end{bmatrix},
\]

and the desired matrix has the form

\[
T=[\,t_{11}\;\;t_{12}\;\;t_{13}\;\;t_{21}\,]=\begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.
\]

If the blocks have the form (1.11.10b), the considerations are similar.
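A quick numerical confirmation of Example 1.12.4 (our sketch):

```python
import numpy as np

A = np.array([[1., 2., 1., 0.], [-1., 3., 1., 0.],
              [0., 1., 2., 0.], [0., 0., 0., 1.]])    # (1.12.28)
T = np.array([[1., 1., 0., 0.], [0., 1., 0., 0.],
              [1., 0., 1., 0.], [0., 0., 0., 1.]])    # eigenvector chains

J = np.linalg.inv(T) @ A @ T                          # should equal (1.12.29)
assert np.allclose(J, [[2., 1., 0., 0.], [0., 2., 1., 0.],
                       [0., 0., 2., 0.], [0., 0., 0., 1.]])
```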

1.13 Matrices of Simple Structure and Diagonalisation of Matrices

1.13.1 Matrices of Simple Structure

Consider a matrix A ∈ ℝ^{n×n} whose characteristic polynomial has the form

\[
\psi(\lambda)=\det[I_n\lambda-A]=\lambda^n+a_{n-1}\lambda^{n-1}+\cdots+a_1\lambda+a_0.
\tag{1.13.1}
\]

The roots λ₁, λ₂, …, λ_p (p ≤ n) of the equation ψ(λ) = 0 are called the eigenvalues of the matrix A, and the set of these eigenvalues is called the spectrum of this matrix.

Definition 1.13.1. We say that an eigenvalue λ_i has algebraic multiplicity n_i if λ_i is an n_i-fold root of the equation ψ(λ) = 0, i.e.,

\[
\psi(\lambda_i)=\psi'(\lambda_i)=\cdots=\psi^{(n_i-1)}(\lambda_i)=0,\quad \psi^{(n_i)}(\lambda_i)\neq 0,\qquad i=1,\ldots,p,
\tag{1.13.2}
\]

where ψ^{(k)}(λ) = d^kψ(λ)/dλ^k, i.e.,

\[
\psi(\lambda)=(\lambda-\lambda_1)^{n_1}(\lambda-\lambda_2)^{n_2}\cdots(\lambda-\lambda_p)^{n_p}.
\tag{1.13.3}
\]

We say that an eigenvalue λ_i has geometric multiplicity m_i if

\[
\operatorname{rank}[I_n\lambda_i-A]=n-m_i,\qquad i=1,\ldots,p.
\tag{1.13.4}
\]


From the Jordan canonical form of the matrix A it follows that n_i ≥ m_i for i = 1, …, p.

Definition 1.13.2. A matrix A ∈ ℝ^{n×n} for which n_i = m_i for i = 1, …, p is called a matrix of simple structure. Otherwise we say that the matrix has a complex structure. For example, the matrix

\[
A=\begin{bmatrix} 2 & a \\ 0 & 2 \end{bmatrix}
\tag{1.13.5}
\]

for a = 0 is a matrix of simple structure, since n₁ = m₁ = 2, and for a ≠ 0 it is a matrix of complex structure, since n₁ = 2, m₁ = 1 (rank [I₂·2 − A] = 1).

Theorem 1.13.1. The similar matrices A ∈ ℝ^{n×n} and B = PAP^{-1}, det P ≠ 0, have eigenvalues of the same algebraic and geometric multiplicities.

Proof. According to Theorem 1.10.1, the similar matrices A and B share the same characteristic polynomial, i.e.,

\[
\det[I_n\lambda-A]=\det[I_n\lambda-B].
\tag{1.13.6}
\]

The equality (1.13.6) implies that the matrices A and B have the same eigenvalues of the same algebraic multiplicities. From the relationship

\[
\operatorname{rank}[I_n\lambda_i-B]=\operatorname{rank}\big[P[I_n\lambda_i-A]P^{-1}\big]=\operatorname{rank}[I_n\lambda_i-A],\qquad i=1,\ldots,p,
\tag{1.13.7}
\]

it follows that the eigenvalues of the matrices A and B also have the same geometric multiplicities. ∎

From the Jordan canonical structure and (1.13.4) the following important corollary ensues.

Corollary 1.13.1. The geometric multiplicity m_i of an eigenvalue λ_i, i = 1, …, p, of the matrix A is equal to the number of Jordan blocks corresponding to this eigenvalue.

Theorem 1.13.2. A matrix A ∈ ℝ^{n×n} is of simple structure if and only if all its elementary divisors are of the first degree.


Proof. According to Corollary 1.11.1, the matrix A is similar to the diagonal matrix consisting of its eigenvalues if and only if all its elementary divisors are of the first degree. In this case

\[
\operatorname{rank}[I_n\lambda_i-A]=n-n_i\quad\text{for } i=1,\ldots,p.
\tag{1.13.8}
\]

In view of this, m_i = n_i for i = 1, …, p, and A is a matrix of simple structure if and only if all its elementary divisors are of the first degree. ∎

Example 1.13.1. The matrix (1.13.5) is a matrix of simple structure if and only if a = 0, since the Smith canonical form of the matrix

\[
[I_2s-A]=\begin{bmatrix} s-2 & -a \\ 0 & s-2 \end{bmatrix}
\]

is equal to

\[
[I_2s-A]_S=\begin{bmatrix} s-2 & 0 \\ 0 & s-2 \end{bmatrix}\ \text{for } a=0,\qquad
[I_2s-A]_S=\begin{bmatrix} 1 & 0 \\ 0 & (s-2)^2 \end{bmatrix}\ \text{for } a\neq 0.
\]

For a = 0, the matrix (1.13.5) has two elementary divisors of the first degree, and for a z 0, it has one elementary divisor (s  2)2. According to Theorem 1.13.2, the matrix (1.13.5) is thus of simple structure if and only if a = 0.

1.13.2 Diagonalisation of Matrices of Simple Structure

Theorem 1.13.3. For every matrix A ∈ ℝ^{n×n} of simple structure there exists a nonsingular matrix P ∈ ℝ^{n×n} such that

\[
P^{-1}AP=\operatorname{diag}[\lambda_1,\lambda_2,\ldots,\lambda_n],
\tag{1.13.9}
\]

where some of the eigenvalues λ_i, i = 1, …, n, can be equal.

Proof. From the fact that A is a matrix of simple structure it follows that to every eigenvalue λ_i there correspond as many eigenvectors P_i as the multiplicity of this eigenvalue amounts to:

\[
AP_i=\lambda_iP_i\quad\text{for } i=1,\ldots,n.
\tag{1.13.10}
\]

The eigenvectors P₁, P₂, …, P_n are linearly independent. Hence the matrix P = [P₁, P₂, …, P_n] is nonsingular. From (1.13.10) for i = 1, …, n we have

\[
AP=P\operatorname{diag}[\lambda_1,\lambda_2,\ldots,\lambda_n].
\tag{1.13.11}
\]

Pre-multiplying (1.13.11) by P^{-1}, we obtain (1.13.9). ∎

In particular, in the case when A has distinct eigenvalues λ₁, λ₂, …, λ_n, the following important corollary ensues from Theorem 1.13.3.

Corollary 1.13.2. Every matrix A ∈ ℝ^{n×n} with distinct eigenvalues λ₁, λ₂, …, λ_n can be transformed by similarity to the diagonal form diag [λ₁, λ₂, …, λ_n].

To compute the eigenvectors P₁, P₂, …, P_n, we solve the equation

\[
[I_n\lambda_i-A]P_i=0\quad\text{for } i=1,\ldots,n,
\tag{1.13.12}
\]

or we take as P_i any nonzero column of Adj [I_nλ_i − A]. From the definition of the inverse matrix,

\[
[I_n\lambda-A]^{-1}=\frac{\operatorname{Adj}[I_n\lambda-A]}{\det[I_n\lambda-A]},
\]

we have

\[
[I_n\lambda-A]\operatorname{Adj}[I_n\lambda-A]=I_n\det[I_n\lambda-A].
\tag{1.13.13}
\]

Substituting λ = λ_i into (1.13.13) and taking into account that det [I_nλ_i − A] = 0, we obtain

\[
[I_n\lambda_i-A]\operatorname{Adj}[I_n\lambda_i-A]=0\quad\text{for } i=1,\ldots,p.
\tag{1.13.14}
\]

From (1.13.14) it follows that every nonzero column of Adj [I_nλ_i − A] is an eigenvector corresponding to the eigenvalue λ_i of the matrix A.

Example 1.13.2. Compute a matrix P that transforms the matrix

\[
A=\frac{1}{2}\begin{bmatrix} 3 & 1 & -1 \\ -1 & 5 & 1 \\ -2 & 2 & 4 \end{bmatrix}
\tag{1.13.15}
\]

to the diagonal form. The characteristic equation of the matrix (1.13.15),

\[
\det[I_n\lambda-A]=\begin{vmatrix} \lambda-\tfrac{3}{2} & -\tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & \lambda-\tfrac{5}{2} & -\tfrac{1}{2} \\ 1 & -1 & \lambda-2 \end{vmatrix}=\lambda^3-6\lambda^2+11\lambda-6=0,
\]

has three real roots λ₁ = 1, λ₂ = 2, λ₃ = 3. To compute the eigenvectors P₁, P₂, P₃, we compute the adjoint (adjugate) matrix

\[
\operatorname{Adj}[I_n\lambda-A]=\begin{bmatrix}
\lambda^2-\tfrac{9}{2}\lambda+\tfrac{9}{2} & \tfrac{1}{2}\lambda-\tfrac{3}{2} & -\tfrac{1}{2}\lambda+\tfrac{3}{2} \\
-\tfrac{1}{2}\lambda+\tfrac{1}{2} & \lambda^2-\tfrac{7}{2}\lambda+\tfrac{5}{2} & \tfrac{1}{2}\lambda-\tfrac{1}{2} \\
-\lambda+2 & \lambda-2 & \lambda^2-4\lambda+4
\end{bmatrix}.
\tag{1.13.16}
\]

As the eigenvectors P₁, P₂, P₃ of the matrix (1.13.15) we take the third column of the adjoint matrix successively for λ = 1, λ = 2, λ = 3. The matrix built from these vectors (after multiplication of the third column for λ = 2 by 2) has the form

\[
P=[P_1,P_2,P_3]=\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{bmatrix}
\]

and its inverse is

\[
P^{-1}=\frac{1}{2}\begin{bmatrix} 1 & -1 & 1 \\ 1 & 1 & -1 \\ -1 & 1 & 1 \end{bmatrix}.
\]

Hence

\[
P^{-1}AP=\frac{1}{2}\begin{bmatrix} 1 & -1 & 1 \\ 1 & 1 & -1 \\ -1 & 1 & 1 \end{bmatrix}\cdot\frac{1}{2}\begin{bmatrix} 3 & 1 & -1 \\ -1 & 5 & 1 \\ -2 & 2 & 4 \end{bmatrix}\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{bmatrix}=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}.
\]

Example 1.13.3. Compute a matrix P that reduces the matrix

\[
A=\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ -1 & 1 & 1 \end{bmatrix}
\tag{1.13.17}
\]

(1.13.17)

to the diagonal form. The characteristic equation of the matrix (1.13.17)

det > I 3O  A @

O2

0

0

0

O2

0

1

1

O 1

(O  2) 2 (O  1)

0

has one double root O  and one root of multiplicity 1, O . The matrix (1.13.17) is a matrix of simple structure, since

rank > I 3O1  A @

ª 0 0 0º rank «« 0 0 0 »» 1 . «¬ 1 1 1 »¼

Thus, using a similarity transformation, the matrix (1.13.17) can be reduced to the diagonal form. From the equation

\[
[I_3\lambda_1-A]P_i=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & -1 & 1 \end{bmatrix}P_i=0\qquad (i=1,2)
\]

it follows that as the eigenvectors P₁ and P₂ we can adopt

\[
P_1=\begin{bmatrix}1\\0\\-1\end{bmatrix},\qquad
P_2=\begin{bmatrix}1\\1\\0\end{bmatrix}.
\]

Solving the equation

\[
[I_3\lambda_2-A]P_3=\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 1 & -1 & 0 \end{bmatrix}P_3=0,
\]

we obtain P₃ = [0 0 1]^T. Thus

\[
P=[P_1,P_2,P_3]=\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}.
\]

It is easy to verify that

\[
P^{-1}AP=\begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 1 & -1 & 1 \end{bmatrix}\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ -1 & 1 & 1 \end{bmatrix}\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}=\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
\]

1.13.3 Diagonalisation of an Arbitrary Square Matrix by the Use of a Matrix with Variable Elements

Let a square matrix A and a diagonal matrix Λ of the same dimension be given. We will show that an arbitrary matrix A can be transformed to the diagonal form Λ by a transformation matrix with variable elements.

Theorem 1.13.4. For an arbitrary matrix A ∈ ℝ^{n×n} and a given diagonal matrix Λ ∈ ℝ^{n×n} there exists a nonsingular matrix

\[
T(t)=e^{(A-\Lambda)t}
\tag{1.13.18}
\]

such that

\[
(AT-\dot{T})T^{-1}=\Lambda.
\tag{1.13.19}
\]

Proof. From (1.13.18) it follows that this matrix is nonsingular for arbitrary matrices A and Λ. Taking into account that

\[
\dot{T}=(A-\Lambda)e^{(A-\Lambda)t}=(A-\Lambda)T,
\]

we obtain

\[
(AT-\dot{T})T^{-1}=[AT-(A-\Lambda)T]\,T^{-1}=\Lambda.\qquad\blacksquare
\]


Example 1.13.4. Compute a matrix T that transforms the matrix

\[
A=\begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}
\]

to the diagonal form

\[
\Lambda=\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}.
\]

Note that the given matrix A is in Jordan canonical form and cannot be transformed to a diagonal form by a similarity transformation (with a constant matrix P), since it does not have a simple structure. Using (1.13.18) we compute

\[
T=e^{(A-\Lambda)t}=\exp\!\left(\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}t\right)=\begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}.
\]

Taking into account that

\[
T^{-1}=\begin{bmatrix} 1 & -t \\ 0 & 1 \end{bmatrix},\qquad
\dot{T}=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},
\]

it is easy to check that

\[
\Lambda=(AT-\dot{T})T^{-1}=\left\{\begin{bmatrix} 2 & 2t+1 \\ 0 & 2 \end{bmatrix}-\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\right\}\begin{bmatrix} 1 & -t \\ 0 & 1 \end{bmatrix}=\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}.
\]
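Relation (1.13.19) for this example can be verified symbolically. A sympy sketch (here we expand e^{Nt} = I + Nt by hand, since N = A − Λ is nilpotent):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1], [0, 2]])
Lam = sp.Matrix([[2, 0], [0, 2]])

N = A - Lam                      # nilpotent: N**2 = 0
T = sp.eye(2) + N * t            # T(t) = e^{Nt} = I + Nt, i.e. [[1, t], [0, 1]]
assert N**2 == sp.zeros(2, 2)
assert T.diff(t) == N * T        # Tdot = (A - Lambda) T

residual = (A * T - T.diff(t)) * T.inv() - Lam   # (1.13.19)
assert sp.simplify(residual) == sp.zeros(2, 2)
```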

These considerations can be generalised to a matrix A(t) whose elements depend on time t. We will show that a square matrix A(t) of dimension n×n, with elements being continuous functions of time t, can be transformed to the diagonal form

\[
\Lambda(t)=\operatorname{diag}[\lambda_1(t),\lambda_2(t),\ldots,\lambda_n(t)].
\tag{1.13.20}
\]

Let the matrix Φ(t) be the solution of the matrix differential equation

\[
\dot{\Phi}(t)=A(t)\Phi(t),
\tag{1.13.21}
\]

satisfying, for example, the initial condition Φ(0) = I_n.

Let

\[
T(t)=\Phi(t)\,e^{-\int_0^t\Lambda(\tau)\,d\tau}.
\tag{1.13.22}
\]

It is known that the matrix (1.13.22) is nonsingular for every t ≥ 0. We will show that the matrix (1.13.22) satisfies the equation

\[
\dot{T}(t)=A(t)T(t)-T(t)\Lambda(t).
\tag{1.13.23}
\]

Differentiating the matrix (1.13.22) with respect to t and taking into account (1.13.21), we obtain

\[
\dot{T}(t)=\dot{\Phi}(t)e^{-\int_0^t\Lambda(\tau)d\tau}-\Phi(t)e^{-\int_0^t\Lambda(\tau)d\tau}\Lambda(t)
=A(t)\Phi(t)e^{-\int_0^t\Lambda(\tau)d\tau}-\Phi(t)e^{-\int_0^t\Lambda(\tau)d\tau}\Lambda(t)
=A(t)T(t)-T(t)\Lambda(t).
\]

From (1.13.23) we obtain

\[
\Lambda(t)=T^{-1}(t)\big[A(t)T(t)-\dot{T}(t)\big]=\operatorname{diag}[\lambda_1(t),\lambda_2(t),\ldots,\lambda_n(t)].
\]

Thus the desired matrix is given by the relationship (1.13.22), where the matrix Φ(t) is a solution of the equation (1.13.21).

1.14 Simple Matrices and Cyclic Matrices

1.14.1 Simple Polynomial Matrices

Consider a polynomial matrix A(s) ∈ ℂ^{m×n}[s] of rank r ≤ min(m, n).

Definition 1.14.1. A polynomial matrix A(s) ∈ ℂ^{m×n}[s] of rank r is called simple if and only if it has only one invariant polynomial distinct from 1. Taking (1.8.4) into account, we can equivalently define a simple matrix as a polynomial matrix satisfying the conditions

\[
D_1(s)=D_2(s)=\cdots=D_{r-1}(s)=1\quad\text{and}\quad D_r(s)=i_r(s),
\tag{1.14.1}
\]


where D_k(s), k = 1, …, r, is a greatest common divisor of all minors of size k of the matrix A(s). Thus the Smith canonical form of the simple matrix A(s) is equal to

\[
A_S(s)=
\begin{cases}
\big[\operatorname{diag}[1,\ldots,1,i_r(s)]\;\;0\big] & \text{for } n>m,\\[4pt]
\operatorname{diag}[1,\ldots,1,i_r(s)] & \text{for } n=m,\\[4pt]
\begin{bmatrix}\operatorname{diag}[1,\ldots,1,i_r(s)]\\ 0\end{bmatrix} & \text{for } m>n.
\end{cases}
\tag{1.14.2}
\]

Theorem 1.14.1. A polynomial matrix A(s) ∈ ℂ^{m×n}[s] of rank r is simple if and only if

\[
\operatorname{rank}A(s_{i0})=r-1\quad\text{for } s_{i0}\in\sigma_A,
\tag{1.14.3}
\]

where σ_A is the set of zeros of the matrix A(s).

Proof. The normal rank of the matrix A(s) and of its Smith canonical form AS(s) is the same, i.e., rank A(s) = rank AS(s) = r. From (1.14.2) it follows that the defect of the rank of the matrix A(s) is equal to 1 if and only if s is a zero of this matrix. „ From Definition 1.14.1 one obtains the following important corollary.

Corollary 1.14.1. A polynomial matrix A(s) is simple if and only if only one elementary divisor corresponds to each zero. Example 1.14.1. In Example 1.8.1 it was shown that to the polynomial matrix

Polynomial Matrices

A s

ª ( s  2) 2 « ¬( s  2)( s  3)

( s  2)( s  3) ( s  2) 2

s  2º » s  3¼

69

(1.14.4)

the Smith canonical form

A S (s)

0 0º ª1 «0 ( s  2)( s  2.5) 0 » ¬ ¼

(1.14.5)

corresponds. From (1.14.5) it follows that i1(s) = 1, i2(s) = (s+2)(s+2.5) and thus the matrix (1.14.4) is simple. It is easy to check that the matrix (1.14.4) loses its full rank equal to 2 for zeros s1 = 2 and s2 = 2.5, since A(2)

ª0 0 0 º «0 0 1 » , A(2.5) ¬ ¼

ª 0.25 0.25 0.5º « 0.25 0.25 0.5 » . ¬ ¼

We obtain the same result from the matrix (1.14.5).

1.14.2 Cyclic Matrices

Consider a matrix A ∈ ℂ^{n×n}.

Definition 1.14.2. A matrix A ∈ ℂ^{n×n} is called cyclic if and only if the polynomial matrix [I_ns − A] corresponding to it is simple. Consider the following matrices:

\[
F=\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_0 & -a_1 & -a_2 & \cdots & -a_{n-1}
\end{bmatrix},\qquad
\bar{F}=\begin{bmatrix}
0 & 0 & \cdots & 0 & -a_0 \\
1 & 0 & \cdots & 0 & -a_1 \\
0 & 1 & \cdots & 0 & -a_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -a_{n-1}
\end{bmatrix},
\]
\[
\hat{F}=\begin{bmatrix}
-a_{n-1} & -a_{n-2} & \cdots & -a_1 & -a_0 \\
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{bmatrix},\qquad
\tilde{F}=\begin{bmatrix}
-a_{n-1} & 1 & 0 & \cdots & 0 \\
-a_{n-2} & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-a_1 & 0 & 0 & \cdots & 1 \\
-a_0 & 0 & 0 & \cdots & 0
\end{bmatrix}.
\tag{1.14.6}
\]

We say that the matrices (1.14.6) have Frobenius canonical form. Expanding the determinant along the row (or column) containing a0,a1,…,an-1 it is easy to show that the following equality holds


\[
\det[I_ns-F]=\det[I_ns-\bar{F}]=\det[I_ns-\hat{F}]=\det[I_ns-\tilde{F}]=s^n+a_{n-1}s^{n-1}+\cdots+a_1s+a_0.
\tag{1.14.7}
\]

Theorem 1.14.2. The matrices (1.14.6) are cyclic for arbitrary values of the coefficients a₀, a₁, …, a_{n−1}.

Proof. We prove the theorem in detail only for the matrix F, since in the other cases the proof is similar. After deleting the first column and the n-th row of the matrix

\[
[I_ns-F]=\begin{bmatrix}
s & -1 & 0 & \cdots & 0 \\
0 & s & -1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & -1 \\
a_0 & a_1 & a_2 & \cdots & s+a_{n-1}
\end{bmatrix},
\tag{1.14.8}
\]

we obtain the minor M_{n−1} equal to (−1)^{n−1}. Thus the greatest common divisor D_{n−1}(s) of all the minors of degree n−1 of the matrix (1.14.8) is equal to 1, i.e., D_{n−1}(s) = 1. The condition (1.14.1) is thus satisfied and the matrix F is cyclic. ∎

Theorem 1.14.3. A matrix A = [a_{ij}] ∈ ℂ^{n×n} is cyclic if the following conditions are satisfied:

\[
a_{ij}\begin{cases} =0 & \text{for } j>i+1, \\ \neq 0 & \text{for } j=i+1, \end{cases}\qquad i,j=1,\ldots,n,
\tag{1.14.9a}
\]

or

\[
a_{ij}\begin{cases} =0 & \text{for } i>j+1, \\ \neq 0 & \text{for } i=j+1, \end{cases}\qquad i,j=1,\ldots,n.
\tag{1.14.9b}
\]

Proof. If the conditions (1.14.9a) are satisfied then after deleting the first column and the n-th row from the matrix

\[
[I_ns-A]=\begin{bmatrix}
s-a_{11} & -a_{12} & 0 & \cdots & 0 & 0 \\
-a_{21} & s-a_{22} & -a_{23} & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
-a_{n-1,1} & -a_{n-1,2} & -a_{n-1,3} & \cdots & s-a_{n-1,n-1} & -a_{n-1,n} \\
-a_{n1} & -a_{n2} & -a_{n3} & \cdots & -a_{n,n-1} & s-a_{nn}
\end{bmatrix},
\tag{1.14.10}
\]

we obtain the minor

\[
M_{n-1}=(-1)^{n-1}a_{12}a_{23}\cdots a_{n-1,n}\neq 0.
\]

Thus D_{n−1}(s) = 1 and the condition (1.14.1) is satisfied. In the case of (1.14.9b) the proof is similar. ∎


Example 1.14.2. Determine the conditions under which the matrix

\[
A_2=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\tag{1.14.11}
\]

is or is not a cyclic matrix. If a_{21} ≠ 0, then carrying out the elementary operations L[1 + 2×(1/a_{21})(s − a_{11})], L[2×(−a_{21})], L[1, 2] and L[2×a_{21}] on the matrix

\[
[I_2s-A_2]=\begin{bmatrix} s-a_{11} & -a_{12} \\ -a_{21} & s-a_{22} \end{bmatrix},
\]

we obtain its Smith canonical form, which is equal to

\[
[I_2s-A_2]_S=\begin{bmatrix} 1 & 0 \\ 0 & \varphi(s) \end{bmatrix},\qquad
\varphi(s)=\det[I_2s-A_2]=s^2-(a_{11}+a_{22})s+a_{11}a_{22}-a_{12}a_{21}.
\tag{1.14.12}
\]

From (1.14.12) it follows that for a_{21} ≠ 0 the matrix (1.14.11) is cyclic for any values of the other elements. We obtain a similar result for a_{12} ≠ 0. It is easy to check that for a_{12} = a_{21} = 0 the diagonal matrix

\[
A_2=\begin{bmatrix} a_{11} & 0 \\ 0 & a_{22} \end{bmatrix}
\]

is cyclic if and only if a_{11} ≠ a_{22}.

Theorem 1.14.4. A matrix A ∈ ℂ^{n×n} is cyclic if and only if exactly one Jordan block corresponds to each of its distinct eigenvalues, i.e.,

\[
J_A=\begin{bmatrix}
J(s_1,n_1) & 0 & \cdots & 0 \\
0 & J(s_2,n_2) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & J(s_p,n_p)
\end{bmatrix}\in\mathbb{C}^{n\times n},
\tag{1.14.13a}
\]

where

\[
J(s_k,n_k)=\begin{bmatrix}
s_k & 1 & 0 & \cdots & 0 \\
0 & s_k & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
0 & 0 & 0 & \cdots & s_k
\end{bmatrix}\in\mathbb{C}^{n_k\times n_k}
\tag{1.14.13b}
\]

or

\[
J(s_k,n_k)=\begin{bmatrix}
s_k & 0 & \cdots & 0 & 0 \\
1 & s_k & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & s_k & 0 \\
0 & 0 & \cdots & 1 & s_k
\end{bmatrix}\in\mathbb{C}^{n_k\times n_k},\qquad k=1,\ldots,p.
\]

Proof. The polynomial matrix

\[
I_ns-J_A=\operatorname{diag}[I_{n_1}s-J(s_1,n_1),\ \ldots,\ I_{n_p}s-J(s_p,n_p)]
\tag{1.14.14}
\]

is simple, since

\[
\operatorname{rank}[I_ns-J_A]\big|_{s=s_k}=n-1\quad\text{for } k=1,\ldots,p.
\tag{1.14.15}
\]

By virtue of Theorem 1.14.1 and Definition 1.14.2, the matrices (1.14.13) and A are cyclic. If at least two blocks correspond to one eigenvalue s_k, then the defect of the rank of the matrix (1.14.14) is greater than 1 and the matrix A is not cyclic. ∎

Example 1.14.3. From Theorem 1.14.4 it follows that the matrix

\[
A=\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & a \end{bmatrix}
\tag{1.14.16}
\]

is cyclic for a ≠ 1. However, it is not cyclic for a = 1, since then two Jordan blocks correspond to its eigenvalue equal to 1:

\[
J(1,2)=\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}\quad\text{and}\quad J(1,1)=[1].
\]

From Theorem 1.14.4 for J(sk,nk) = ak, nk = 1, k = 1,…,n one obtains the following important corollary.


Corollary 1.14.1. The diagonal matrix

\[
A=\operatorname{diag}[a_1,a_2,\ldots,a_n]\in\mathbb{C}^{n\times n}
\tag{1.14.17}
\]

is cyclic if and only if a_i ≠ a_j for i ≠ j.

Theorem 1.14.5. Let λ₁, λ₂, …, λ_p be the eigenvalues, of multiplicities n₁, n₂, …, n_p respectively, of the matrix A ∈ ℂ^{n×n}. This matrix is cyclic if and only if

\[
\operatorname{rank}[I_n\lambda_i-A]^{n_i}=\operatorname{rank}[I_n\lambda_i-A]^{n_i+1}=n-n_i\quad\text{for } i=1,\ldots,p.
\tag{1.14.18}
\]

Proof. It is known that a similarity transformation does not change the rank of a matrix:

\[
\operatorname{rank}[I_n\lambda_i-A]^{n_i}=\operatorname{rank}[I_n\lambda_i-J_A]^{n_i}\quad\text{for } i=1,\ldots,p,
\tag{1.14.19}
\]

where J_A is a Jordan canonical form of the matrix A. Taking into account (1.11.10), it is easy to verify that

\[
[I_{n_i}\lambda_i-J(\lambda_i,n_i)]^{n_i}=0\quad\text{for } i=1,\ldots,p.
\tag{1.14.20}
\]

From the Jordan canonical form J_A of the matrix A and (1.14.20) it follows that only one block corresponds to every eigenvalue λ_i if and only if the condition (1.14.18) is satisfied. Thus, by virtue of Theorem 1.14.4, the matrix A is cyclic if and only if the condition (1.14.18) is satisfied. ∎

Example 1.14.4. The matrix (1.14.16) for a ≠ 1 has one eigenvalue λ₁ = 1 of multiplicity n₁ = 2 and one eigenvalue λ₂ = a of multiplicity 1. It is easy to check that

\[
[I_3\lambda_1-A]=\begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1-a \end{bmatrix},\quad
[I_3\lambda_1-A]^2=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & (1-a)^2 \end{bmatrix},\quad
[I_3\lambda_1-A]^3=\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & (1-a)^3 \end{bmatrix},
\]
\[
\operatorname{rank}[I_3\lambda_1-A]^2=\operatorname{rank}[I_3\lambda_1-A]^3=\begin{cases} 1 & \text{for } a\neq 1, \\ 0 & \text{for } a=1, \end{cases}
\]

and

\[
[I_3\lambda_2-A]=\begin{bmatrix} a-1 & -1 & 0 \\ 0 & a-1 & 0 \\ 0 & 0 & 0 \end{bmatrix},\qquad
[I_3\lambda_2-A]^2=\begin{bmatrix} (a-1)^2 & -2(a-1) & 0 \\ 0 & (a-1)^2 & 0 \\ 0 & 0 & 0 \end{bmatrix},
\]
\[
\operatorname{rank}[I_3\lambda_2-A]^2=\begin{cases} 2 & \text{for } a\neq 1, \\ 0 & \text{for } a=1. \end{cases}
\]

Thus the condition (1.14.18) is satisfied, and the matrix is cyclic, if and only if a ≠ 1.
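The rank computations of Example 1.14.4 for a concrete value a = 2 (numpy sketch; the helper `ranks` is ours):

```python
import numpy as np

def ranks(A, lam, ni):
    """Ranks of (lam*I - A)^k for k = ni and ni + 1, as in (1.14.18)."""
    M = lam * np.eye(A.shape[0]) - A
    return [np.linalg.matrix_rank(np.linalg.matrix_power(M, k))
            for k in (ni, ni + 1)]

a = 2.0                                                 # any a != 1
A = np.array([[1., 1., 0.], [0., 1., 0.], [0., 0., a]])  # (1.14.16)
assert ranks(A, 1.0, 2) == [1, 1]                       # n - n1 = 3 - 2
assert ranks(A, a, 1) == [2, 2]                         # n - n2 = 3 - 1
```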

Theorem 1.14.6. A matrix A ∈ ℂ^{n×n} can be transformed by similarity to the Frobenius canonical form (1.14.6) or to the Jordan canonical form (1.14.13) if and only if the matrix A is cyclic.

Proof. It is known that there exist nonsingular similarity transformation matrices P₁ and P₂ such that

\[
A_F=P_1AP_1^{-1}\quad\text{and}\quad J_A=P_2AP_2^{-1}
\tag{1.14.21}
\]

if and only if the polynomial matrices [Ins - A], [Ins - AF] and [Ins - JA] are equivalent, i.e., they have the same invariant polynomials. This takes place if and only if the matrix A is cyclic. The sufficiency follows immediately by virtue of Theorems 1.11.1 and 1.11.2. „ Example 1.14.5. Consider the matrix (1.14.16). This matrix for a z 1 is cyclic and can be transformed by similarity into the Frobenius canonical form AF that is equal to

AF

1 0 º ª0 «0 0 1 »» , « «¬ a 2a  1 2  a »¼

since 0 º ª s  1 1 « 0  1 0 »» s « «¬ 0 0 s  a »¼ 2 3 ( s  1) ( s  a ) s  (2  a ) s 2  (2a  1) s  a.

det > I 3 s  A @

(1.14.22)

Polynomial Matrices

75

For a = 1, the matrix (1.14.16) has the Jordan canonical form with two blocks corresponding to an eigenvalue equal to 1 and is not cyclic. The matrix (1.14.16) for a = 1 cannot be transformed by similarity into the Frobenius canonical form.
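The passage from the characteristic polynomial (1.14.22) to the Frobenius (companion) form can be checked mechanically; a sympy sketch:

```python
import sympy as sp

s, a = sp.symbols('s a')
# Frobenius (companion) form for the characteristic polynomial
# det[I_3 s - A] = (s-1)^2 (s-a) = s^3 - (2+a) s^2 + (2a+1) s - a
AF = sp.Matrix([[0, 1, 0],
                [0, 0, 1],
                [a, -(2*a + 1), 2 + a]])
p = AF.charpoly(s).as_expr()
```

The last row of A_F carries the (negated) polynomial coefficients, so its characteristic polynomial is exactly (s−1)^2(s−a).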

1.15 Pairs of Polynomial Matrices

1.15.1 Greatest Common Divisors and Lowest Common Multiplicities of Polynomial Matrices

Let ℂ^{m×n}[s] be the set of m×n polynomial matrices with complex coefficients in the variable s.

Definition 1.15.1. A matrix B(s) ∈ ℂ^{m×q}[s] is called a left divisor (LD) of the matrix A(s) ∈ ℂ^{m×l}[s] if and only if there exists a matrix C(s) ∈ ℂ^{q×l}[s] such that

  A(s) = B(s)C(s).   (1.15.1)

A matrix C(s) ∈ ℂ^{q×l}[s] is called a right divisor (RD) of A(s) ∈ ℂ^{m×l}[s] if and only if there exists a matrix B(s) ∈ ℂ^{m×q}[s] such that (1.15.1) holds.

Definition 1.15.2. A matrix A(s) ∈ ℂ^{m×l}[s] is called a right multiplicity (RM) of a matrix B(s) ∈ ℂ^{m×q}[s] if and only if there exists a matrix C(s) ∈ ℂ^{q×l}[s] such that (1.15.1) holds. A matrix A(s) ∈ ℂ^{m×l}[s] is called a left multiplicity (LM) of a matrix C(s) ∈ ℂ^{q×l}[s] if and only if there exists a matrix B(s) ∈ ℂ^{m×q}[s] such that (1.15.1) holds.

Consider the two polynomial matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{m×p}[s].

Definition 1.15.3. A matrix L(s) ∈ ℂ^{m×q}[s] is called a common left divisor (CLD) of the matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{m×p}[s] if and only if there exist matrices A_1(s) ∈ ℂ^{q×l}[s] and B_1(s) ∈ ℂ^{q×p}[s] such that

  A(s) = L(s)A_1(s) and B(s) = L(s)B_1(s).   (1.15.2)

A matrix P(s) ∈ ℂ^{q×l}[s] is called a common right divisor (CRD) of the matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{p×l}[s] if and only if there exist matrices A_2(s) ∈ ℂ^{m×q}[s] and B_2(s) ∈ ℂ^{p×q}[s] such that

  A(s) = A_2(s)P(s) and B(s) = B_2(s)P(s).   (1.15.3)

Definition 1.15.4. A matrix D(s) ∈ ℂ^{p×l}[s] is called a common left multiplicity (CLM) of the matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{q×l}[s] if and only if there exist matrices D_1(s) ∈ ℂ^{p×m}[s] and D_2(s) ∈ ℂ^{p×q}[s] such that

  D(s) = D_1(s)A(s) and D(s) = D_2(s)B(s).   (1.15.4)

A matrix F(s) ∈ ℂ^{m×p}[s] is called a common right multiplicity (CRM) of the matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{m×q}[s] if and only if there exist matrices F_1(s) ∈ ℂ^{l×p}[s] and F_2(s) ∈ ℂ^{q×p}[s] such that

  F(s) = A(s)F_1(s) and F(s) = B(s)F_2(s).   (1.15.5)

Definition 1.15.5. A matrix L(s) ∈ ℂ^{m×q}[s] is called a greatest common left divisor (GCLD) of the matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{m×p}[s] if and only if
1. the matrix L(s) is a common left divisor of the matrices A(s) and B(s);
2. the matrix L(s) is a common right multiplicity of every common left divisor of the matrices A(s) and B(s), i.e., if A(s) = L_1(s)A_3(s) and B(s) = L_1(s)B_3(s), then L(s) = L_1(s)T(s), where L_1(s), A_3(s), B_3(s) and T(s) are polynomial matrices of appropriate dimensions.

A matrix P(s) ∈ ℂ^{q×l}[s] is called a greatest common right divisor (GCRD) of the matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{p×l}[s] if and only if
1. the matrix P(s) is a common right divisor of the matrices A(s) and B(s);
2. the matrix P(s) is a common left multiplicity of every common right divisor of the matrices A(s) and B(s), i.e., if A(s) = A_4(s)P_1(s) and B(s) = B_4(s)P_1(s), then P(s) = T(s)P_1(s), where A_4(s), P_1(s), B_4(s) and T(s) are polynomial matrices of appropriate dimensions.

Definition 1.15.6. A matrix D(s) ∈ ℂ^{p×l}[s] is called a smallest common left multiplicity (SCLM) of the matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{q×l}[s] if and only if
1. the matrix D(s) is a common left multiplicity of the matrices A(s) and B(s);
2. the matrix D(s) is a right divisor of every common left multiplicity of the matrices A(s) and B(s), i.e., if D̄(s) = D_3(s)A(s) and D̄(s) = D_4(s)B(s), then D̄(s) = T(s)D(s), where D̄(s), D_3(s), D_4(s) and T(s) are polynomial matrices of appropriate dimensions.

A matrix F(s) ∈ ℂ^{m×p}[s] is called a smallest common right multiplicity (SCRM) of the matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{m×q}[s] if and only if
1. the matrix F(s) is a common right multiplicity of the matrices A(s) and B(s);
2. the matrix F(s) is a left divisor of every common right multiplicity of the matrices A(s) and B(s), i.e., if F̄(s) = A(s)F_3(s) and F̄(s) = B(s)F_4(s), then F̄(s) = F(s)T(s), where F̄(s), F_3(s), F_4(s) and T(s) are polynomial matrices of appropriate dimensions.
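For 1×1 polynomial matrices these notions collapse to the familiar scalar ones: the GCLD/GCRD reduce to the ordinary polynomial gcd and the SCLM/SCRM to the ordinary lcm. A quick sympy illustration (the particular polynomials are our own example):

```python
import sympy as sp

s = sp.symbols('s')
# 1x1 case: GCLD/GCRD = polynomial gcd, SCLM/SCRM = polynomial lcm
A = sp.Poly((s + 1)**2 * (s - 2), s)
B = sp.Poly((s + 1) * (s + 3), s)
g = sp.gcd(A, B)   # greatest common divisor
l = sp.lcm(A, B)   # smallest common multiplicity (least common multiple)
```

In the scalar case the "unimodular" ambiguity of Remark 1.15.1 is just multiplication by a nonzero constant.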


1.15.2 Computation of Greatest Common Divisors of a Polynomial Matrix

Problem 1.15.1. Given C ∈ ℂ^{l×m}[s] and L ∈ ℂ^{l×l}[s], a matrix C_1 is to be computed such that

  C = LC_1,   (1.15.6)

where L is a lower triangular matrix and rank L ≥ rank C.

Solution. Assume that the matrix L of rank r has the form

  L = [g_11 0 ... 0 0 ... 0; g_21 g_22 ... 0 0 ... 0; ⋮; g_r1 g_r2 ... g_rr 0 ... 0; ⋮; g_l1 g_l2 ... g_lr 0 ... 0]   (1.15.7)

and the matrix C_1 the form

  C_1 = [x_11 ... x_1m; ⋮; x_l1 ... x_lm].   (1.15.8)

The equality (1.15.6) can be written in the form

  [c_11 ... c_1m; ⋮; c_l1 ... c_lm] = [g_11 0 ... 0; g_21 g_22 ... 0; ⋮; g_l1 ... g_lr ... 0] [x_11 ... x_1m; ⋮; x_l1 ... x_lm].   (1.15.9)

Carrying out the multiplication and comparing appropriate entries in (1.15.9), we obtain

  c_1j = g_11 x_1j, i.e., x_1j = c_1j / g_11, j = 1, ..., m,

and

  c_2j = g_21 x_1j + g_22 x_2j, i.e., x_2j = (c_2j − g_21 x_1j) / g_22.

Thus in the general case, for i ≤ r, we obtain

  x_ij = (1/g_ii) ( c_ij − Σ_{k=1}^{i−1} g_ik x_kj ).   (1.15.10)

The entries of the rows of the matrix C_1 with indices (i, j), i = r+1, ..., l, j = 1, ..., m can be chosen arbitrarily.

Example 1.15.1. Given the matrices

  C = [1+s 1+s 1−s²; 1−s 1+s 1; 2 0 −s],  L = [1+s 0 0; 1 s 0; 1 −1 0],

one has to compute a matrix C_1 that satisfies (1.15.6). In this case rank L = 2. According to (1.15.10), to compute x_1j we divide the first row of the matrix C by g_11 = 1+s, and then we subtract the first row of the matrix C_1 from the second row of the matrix C and divide the result by g_22 = s. Choosing the (arbitrary) third row, we thus obtain

  C_1 = [1 1 1−s; −1 1 1; 2 0 2−s].
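The recursion (1.15.10) is a forward substitution on each column of C. A sympy sketch (the helper name is ours; the matrices are those of the example as read here, with the free rows of C_1 set to zero):

```python
import sympy as sp

s = sp.symbols('s')

def solve_lower_triangular(C, L, r):
    """Solve C = L*C1 for C1 by the forward substitution (1.15.10);
    rows r+1..l of C1 are free and are set to zero here."""
    l, m = C.shape
    C1 = sp.zeros(L.shape[1], m)
    for j in range(m):
        for i in range(r):
            acc = C[i, j] - sum(L[i, k] * C1[k, j] for k in range(i))
            C1[i, j] = sp.cancel(acc / L[i, i])  # exact polynomial division
    return C1

L = sp.Matrix([[1 + s, 0, 0], [1, s, 0], [1, -1, 0]])
C = sp.Matrix([[1 + s, 1 + s, 1 - s**2], [1 - s, 1 + s, 1], [2, 0, -s]])
C1 = solve_lower_triangular(C, L, r=2)
```

Because the third column of L is zero, the third row of C_1 does not influence the product, which is exactly why it may be chosen arbitrarily.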

Example 1.15.2. Given C ∈ ℂ^{l×m}[s] and P ∈ ℂ^{m×m}[s], one has to compute a matrix C_2 such that

  C = C_2 P,   (1.15.11)

where P is an upper triangular matrix and rank P ≥ rank C. Using transposition, the solution of this dual problem can be reduced to the solution of Problem 1.15.1.

1.15.3 Computation of Greatest Common Divisors and Smallest Common Multiplicities of Polynomial Matrices

Theorem 1.15.1. A matrix L(s) ∈ ℂ^{m×m}[s] is a GCLD of the matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{m×q}[s] (m ≤ l + q) if and only if the matrices

  [A(s) B(s)] and [L(s) 0]   (1.15.12)

are right equivalent.

Proof. If the matrices (1.15.12) are right equivalent, then there exists a unimodular matrix U(s) = [U_11(s) U_12(s); U_21(s) U_22(s)] such that

  [A(s) B(s)] [U_11(s) U_12(s); U_21(s) U_22(s)] = [L(s) 0]   (1.15.13)

and

  [A(s) B(s)] = [L(s) 0] [V_11(s) V_12(s); V_21(s) V_22(s)],   (1.15.14)

where [V_11(s) V_12(s); V_21(s) V_22(s)] = U^{−1}(s). From (1.15.14) we have

  A(s) = L(s)V_11(s) and B(s) = L(s)V_12(s).

Thus the matrix L(s) is a CLD of the matrices A(s) and B(s). To show that the matrix L(s) is a GCLD of A(s) and B(s), we take into account the relationship

  A(s)U_11(s) + B(s)U_21(s) = L(s),   (1.15.15)

which ensues from (1.15.13). Hence it follows that every CLD of the matrices A(s) and B(s) is also an LD of the matrix L(s). Thus the matrix L(s) is an RM of every CLD of A(s) and B(s), i.e., a GCLD of these matrices.

Now we will show that if a matrix L(s) is a GCLD of the matrices A(s) and B(s), then the matrices (1.15.12) are right equivalent. By assumption we have

  A(s) = L(s)A_1(s), B(s) = L(s)B_1(s),   (1.15.16)

where a GCLD of the matrices A_1(s) and B_1(s) is the identity matrix I_m. From (1.15.16) we have

  [A(s) B(s)] = [L(s) 0] [A_1(s) B_1(s); N(s) M(s)],   (1.15.17)

where N(s) and M(s) are arbitrary polynomial matrices. We will show that there exist matrices N(s) and M(s) such that the matrix

  [A_1(s) B_1(s); N(s) M(s)]   (1.15.18)

is unimodular. Since a GCLD of A_1(s) and B_1(s) is the identity matrix I_m, there exists a unimodular matrix U_1(s) such that

  [A_1(s) B_1(s)] U_1(s) = [I_m 0].

The matrix U_1^{−1}(s) is also unimodular. Thus from the last relationship we have

  [A_1(s) B_1(s)] = [I_m 0] U_1^{−1}(s) = [I_m 0] [A_1(s) B_1(s); N(s) M(s)],

where N(s) and M(s) are the lower blocks of U_1^{−1}(s). Thus the matrix (1.15.18) is unimodular, and from (1.15.17) it follows that the matrices (1.15.12) are right equivalent. ∎

Corollary 1.15.1. If a matrix L(s) is a GCLD of the matrices A(s) and B(s), then there exist polynomial matrices U_11(s), U_21(s) such that (1.15.15) holds. The matrix L(s) can be chosen to be a lower triangular matrix.

Corollary 1.15.2. If a GCLD of the matrices A_1(s) and B_1(s) is equal to L(s) = I, then there exist polynomial matrices N(s) and M(s) such that the square matrix (1.15.18) is unimodular.

From (1.15.13) it follows that

  F(s) = A(s)U_12(s) = −B(s)U_22(s).   (1.15.19)

Theorem 1.15.2. The matrix F(s) given by the equality (1.15.19) is an SCRM of the matrices A(s) and B(s).


Proof. From Definition 1.15.4 and (1.15.19) it follows that the matrix F(s) is a CRM of the matrices A(s) and B(s). It remains to show that F(s) is a left divisor of every CRM of A(s) and B(s). To show this, it suffices to note that a GCRD of the matrices U_12(s), U_22(s) is the identity matrix I_{l+q−m}. ∎

To compute a GCLD and an SCRM of matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{m×q}[s], one can apply the following algorithm.

Algorithm 1.15.1.
Step 1: Write the matrices A(s), B(s) and the identity matrices I_l, I_q as

  [A(s) B(s); I_l 0; 0 I_q].

Step 2: Carrying out appropriate elementary operations on the columns of the matrix [A(s) B(s)], reduce it to the form [L(s) 0]. Carry out the same elementary operations on the columns of the matrix I_{l+q}. Partition the resulting matrix U(s) into the submatrices U_11(s), U_12(s), U_21(s), U_22(s) of dimensions corresponding to those of A(s) and B(s), i.e.,

  [A(s) B(s); I_l 0; 0 I_q] → (column operations) → [L(s) 0; U_11(s) U_12(s); U_21(s) U_22(s)].   (1.15.20)

Step 3: The GCLD and SCRM we seek are equal to L(s) in (1.15.20) and F(s) in (1.15.19), respectively.

Example 1.15.3. Compute a GCLD and an SCRM of the matrices

  A(s) = [s²+2s s; s+2 1],  B(s) = [s+2; 1].

In this case m = l = 2, q = 1. In order to compute L(s) and U(s), we write the matrices A(s), B(s) and I_2, I_1 as

  [s²+2s s s+2; s+2 1 1; 1 0 0; 0 1 0; 0 0 1]

and we perform the following elementary column operations: to column 1 we add column 2 multiplied by −(s+2); to column 3 we add column 2 multiplied by −1; finally we interchange columns 1 and 2 and then columns 2 and 3. This yields

  [s 2 0; 1 0 0; 0 0 1; 1 −1 −(2+s); 0 1 0].

Thus we have

  L(s) = [s 2; 1 0],  U(s) = [U_11(s) U_12(s); U_21(s) U_22(s)] = [0 0 1; 1 −1 −(2+s); 0 1 0].

We compute the SCRM of the matrices A(s) and B(s) using (1.15.19):

  F(s) = A(s)U_12(s) = [s²+2s s; s+2 1] [1; −(2+s)] = [0; 0].
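The column reduction above can be verified in sympy; the unimodular U(s) below is the one read off in this example (a verification sketch, not the book's exact operation sequence):

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[s**2 + 2*s, s], [s + 2, 1]])
B = sp.Matrix([[s + 2], [1]])
# unimodular matrix of column operations, as read off above
U = sp.Matrix([[0, 0, 1],
               [1, -1, -(2 + s)],
               [0, 1, 0]])
AB = sp.Matrix.hstack(A, B)
LZ = sp.expand(AB * U)        # should equal [L(s) 0]
L = LZ[:, :2]                 # the GCLD
F = sp.expand(A * U[:2, 2:])  # the SCRM, via (1.15.19)
```

Note that det A(s) = 0 here, which is why the SCRM degenerates to the zero matrix.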

Theorem 1.15.3. A matrix P(s) ∈ ℂ^{q×l}[s] is a GCRD of matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{p×l}[s] (m + p ≥ l) if and only if the matrices

  [A(s); B(s)] and [P(s); 0]   (1.15.21)

are left equivalent.

The proof of this theorem is similar to that of Theorem 1.15.1. Carrying out elementary operations on the rows, we make the following transformation

  [A(s) I_m 0; B(s) 0 I_p] → (row operations) → [P(s) U'_11(s) U'_12(s); 0 U'_21(s) U'_22(s)].   (1.15.22)

Carrying out on the rows of I_{m+p} the same elementary operations that transform [A(s); B(s)] to the form [P(s); 0], we compute the unimodular matrix

  U'(s) = [U'_11(s) U'_12(s); U'_21(s) U'_22(s)].

Corollary 1.15.3. If the matrix P(s) is a GCRD of the matrices A(s) and B(s), then there exist polynomial matrices U'_11(s) and U'_12(s) such that the equality

  U'_11(s)A(s) + U'_12(s)B(s) = P(s)   (1.15.23)

holds.

Corollary 1.15.4. If a GCRD of the matrices A_1(s) and B_1(s) is equal to P(s) = I, then there exist polynomial matrices N'(s) and M'(s) such that the square matrix

  [A_1(s) N'(s); B_1(s) M'(s)]   (1.15.24)

is unimodular.

Theorem 1.15.4. The matrix D(s) given by

  D(s) = U'_21(s)A(s) = −U'_22(s)B(s)   (1.15.25)

is an SCLM of the matrices A(s) and B(s). The proof of this theorem is similar to that of Theorem 1.15.2.

An algorithm for computing a GCRD and an SCLM of matrices A(s) and B(s) differs from Algorithm 1.15.1 only in that instead of the transformation (1.15.20) we carry out the transformation (1.15.22), and instead of elementary operations on columns we carry out elementary operations on rows. The GCRD we seek is equal to the matrix P(s), and the SCLM, equal to the matrix D(s), is computed from (1.15.25).

Remark 1.15.1. Greatest common divisors and smallest common multiplicities are determined uniquely only up to multiplication by a unimodular matrix. In this sense they are not unique; therefore we usually put the indefinite article "a" before these notions.


1.15.4 Relatively Prime Polynomial Matrices and the Generalised Bezout Identity

Definition 1.15.7. Matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{m×q}[s] are called relatively left prime (RLP) if and only if only unimodular matrices are their common left divisors. Matrices A(s) ∈ ℂ^{m×l}[s] and B(s) ∈ ℂ^{p×l}[s] are called relatively right prime (RRP) if and only if only unimodular matrices are their common right divisors.

Theorem 1.15.5. Matrices A(s) ∈ ℂ^{m×l}[s], B(s) ∈ ℂ^{m×q}[s] are RLP if and only if the matrices

  [A(s) B(s)] and [I_m 0]   (1.15.26)

are right equivalent. Matrices A(s) ∈ ℂ^{m×l}[s], B(s) ∈ ℂ^{p×l}[s] are RRP if and only if the matrices

  [A(s); B(s)] and [I_l; 0]   (1.15.27)

are left equivalent.

Proof. If the matrices (1.15.26) are right equivalent, then according to Theorem 1.15.1 a GCLD of the matrices A(s) and B(s) is I_m, i.e., these matrices are RLP. If the matrices A(s) and B(s) are RLP, then every GCLD is a unimodular matrix, which by elementary operations on the columns can be reduced to the form [I_m 0], i.e., the matrices (1.15.26) are right equivalent. The proof of the second part of the theorem is similar. ∎

From Corollary 1.15.1 for L(s) = I_m and from Corollary 1.15.3 for P(s) = I_l we obtain the following.

Corollary 1.15.5. If the matrices A(s) and B(s) are RLP, then there exist polynomial matrices U_11(s) and U_21(s) such that

  A(s)U_11(s) + B(s)U_21(s) = I_m.   (1.15.28)

If the matrices A(s) and B(s) are RRP, then there exist polynomial matrices U'_11(s) and U'_12(s) such that

  U'_11(s)A(s) + U'_12(s)B(s) = I_l.   (1.15.29)


The matrices U_11(s), U_21(s) and U'_11(s), U'_12(s) can be computed using Algorithm 1.15.1.

Example 1.15.4. Show that the matrices

  A(s) = [s² s; s+1 1],  B(s) = [s²+2; s]

are RLP and compute polynomial matrices U_11(s) and U_21(s) for them such that (1.15.28) holds.

We will show that the given matrices A(s) and B(s) have a GCLD equal to I_2. To accomplish this, we write down these matrices and the matrix I_3 in the form

  [A(s) B(s); I_2 0; 0 I_1] = [s² s s²+2; s+1 1 s; 1 0 0; 0 1 0; 0 0 1]

and we carry out elementary operations on the columns (as in Algorithm 1.15.1), obtaining

  [1 0 0; 0 1 0; 0 1 1; −½s −s −1−s−½s²; ½ 0 ½s].

Thus the given matrices A(s) and B(s) have a GCLD equal to I_2, and hence these matrices are RLP. From the matrix

  U(s) = [0 1 1; −½s −s −1−s−½s²; ½ 0 ½s]

we obtain

  U_11(s) = [0 1; −½s −s],  U_21(s) = [½ 0].

It is easy to verify that the matrices A(s), B(s), U_11(s), U_21(s) satisfy (1.15.28).
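The Bezout relation (1.15.28) for this example is easy to confirm with sympy (matrices as read here):

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[s**2, s], [s + 1, 1]])
B = sp.Matrix([[s**2 + 2], [s]])
U11 = sp.Matrix([[0, 1], [-s/2, -s]])
U21 = sp.Matrix([[sp.Rational(1, 2), 0]])
# A*U11 + B*U21 should collapse to the 2x2 identity
I2 = sp.expand(A * U11 + B * U21)
```

All the s-dependent terms cancel pairwise, leaving the identity, which is exactly what relative left primeness guarantees.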

1.15.5 Generalised Bezout Identity

Consider the polynomial RLP matrices A(s) ∈ ℂ^{m×n}[s], B(s) ∈ ℂ^{m×p}[s] (n + p ≥ m).

Theorem 1.15.6. If polynomial matrices A(s) ∈ ℂ^{m×n}[s] and B(s) ∈ ℂ^{m×p}[s] are RLP, then there exist polynomial matrices C(s), D(s), M_1(s), M_2(s), M_3(s) and M_4(s) of appropriate dimensions such that

  [A(s) B(s); C(s) D(s)] [M_1(s) M_2(s); M_3(s) M_4(s)] = [I_m 0; 0 I_{n+p−m}]   (1.15.30)

and

  [M_1(s) M_2(s); M_3(s) M_4(s)] [A(s) B(s); C(s) D(s)] = [I_m 0; 0 I_{n+p−m}].   (1.15.31)

Proof. By the assumption that the matrices A(s) and B(s) are RLP, there exists a unimodular matrix of elementary column operations

  U(s) = [U_1(s) U_2(s); U_3(s) U_4(s)] ∈ ℂ^{(n+p)×(n+p)}[s]

such that [A(s) B(s)]U(s) = [I_m 0]. Post-multiplying the latter equality by the matrix

  U^{−1}(s) = [V_1(s) V_2(s); V_3(s) V_4(s)],   (1.15.32)

we obtain

  [A(s) B(s)] = [I_m 0] [V_1(s) V_2(s); V_3(s) V_4(s)] = [V_1(s) V_2(s)],

i.e., the first m rows of the unimodular matrix (1.15.32) are [A(s) B(s)], and the following equality holds:

  U^{−1}(s)U(s) = [A(s) B(s); V_3(s) V_4(s)] [U_1(s) U_2(s); U_3(s) U_4(s)] = [I_m 0; 0 I_{n+p−m}].

Thus [C(s) D(s)] = [V_3(s) V_4(s)] and M_k(s) = U_k(s) for k = 1, 2, 3, 4. The identity (1.15.31) follows from the equality U(s)U^{−1}(s) = U^{−1}(s)U(s) = I_{n+p}. ∎

The following dual theorem can be proved in a similar way.

Theorem 1.15.7. If polynomial matrices A'(s) ∈ ℂ^{m×n}[s] and B'(s) ∈ ℂ^{p×n}[s] are RRP, then there exist polynomial matrices C'(s), D'(s), N_1(s), N_2(s), N_3(s) and N_4(s) of appropriate dimensions such that

  [A'(s) C'(s); B'(s) D'(s)] [N_1(s) N_2(s); N_3(s) N_4(s)] = [I_n 0; 0 I_{m+p−n}],   (1.15.33)
  [N_1(s) N_2(s); N_3(s) N_4(s)] [A'(s) C'(s); B'(s) D'(s)] = [I_n 0; 0 I_{m+p−n}].   (1.15.34)

1.16 Decomposition of Regular Pencils of Matrices

1.16.1 Strictly Equivalent Pencils

Definition 1.16.1. A pencil [Es − A] (or a pair of matrices (E, A)) is called regular if the matrices E and A are square and

  det [Es − A] ≠ 0 for some s ∈ ℂ.   (1.16.1)

Definition 1.16.2. Let E_k, A_k ∈ ℂ^{m×n} for k = 1, 2. The pencils [E_1s − A_1] and [E_2s − A_2] (or the pairs of matrices (E_1, A_1) and (E_2, A_2)) are called strictly equivalent if there exist nonsingular matrices P ∈ ℂ^{m×m}, Q ∈ ℂ^{n×n} (with entries independent of the variable s) such that

  P[E_1s − A_1]Q = E_2s − A_2.   (1.16.2)

Let D_k(s, t) (k = 1, ..., n) be the greatest common divisor of all the minors of degree k of the matrix [Es − At]. According to (1.8.4), the invariant polynomials of the matrix [Es − At] are uniquely determined by

  i_k(s, t) = D_{n−k+1}(s, t) / D_{n−k}(s, t) for k = 1, 2, ..., r.   (1.16.3)

Factoring the polynomials (1.16.3) into factors that cannot be factored further in the given field, we obtain the elementary divisors e_i(s, t) (i = 1, ..., p) of the matrix [Es − At]. Substituting t = 1 into e_i(s, t), we obtain the corresponding elementary divisors e_i(s) = e_i(s, 1) of [Es − A]. Knowing the e_i(s) of the matrix [Es − A], we can also compute the elementary divisors e_i(s, t) of [Es − At] using the relationship e_i(s, t) = t^q e_i(s/t), where q is the degree of the polynomial e_i(s). In this way we can find all finite elementary divisors of the matrix [Es − At], with the exception of elementary divisors of the form t^q. Elementary divisors of the form t^q are called infinite elementary divisors of the pencil [Es − A]. Infinite elementary divisors appear if and only if det E = 0.

For instance, bringing the pencil

  [Es − A] = [1 1; 1 1]s − [1 1; 1 2]

into the Smith canonical form

  [Es − A]_S = [1 0; 0 s−1],

we see that this pencil possesses the finite elementary divisor s − 1 and the infinite elementary divisor t, since e(s) = s − 1, q = 1 and t e(s/t) = s − t.

Consider two square pencils [E_1s − A_1] and [E_2s − A_2] of the same size such that

  det E_1 ≠ 0 and det E_2 ≠ 0.   (1.16.4)
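The procedure around (1.16.3), taking gcds of the k-minors of the homogeneous pencil [Es − At], can be sketched in sympy for the 2×2 pencil above (signs of E and A as read in this extraction):

```python
import sympy as sp
from functools import reduce

s, t = sp.symbols('s t')
E = sp.Matrix([[1, 1], [1, 1]])
A = sp.Matrix([[1, 1], [1, 2]])
P = E * s - A * t                      # homogeneous pencil [Es - At]
# D_1: gcd of all 1x1 minors; D_2: the single 2x2 minor
D1 = reduce(sp.gcd, [P[i, j] for i in range(2) for j in range(2)])
D2 = sp.factor(P.det())
# invariant polynomial D2/D1 = t*(s - t) up to sign, so the elementary
# divisors are s - t (finite: s - 1 at t = 1) and t (infinite)
```

The factor t in D2 signals the infinite elementary divisor, consistent with det E = 0.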

Theorem 1.16.1. If the condition (1.16.4) is satisfied, then the pencils [E_1s − A_1] and [E_2s − A_2] are equivalent if and only if they are strictly equivalent, i.e., the unimodular matrices L(s) and P(s) in the equation

  E_1s − A_1 = L(s)[E_2s − A_2]P(s)   (1.16.5)

can be replaced with matrices L and P, which are both independent of the variable s,

  E_1s − A_1 = L[E_2s − A_2]P.   (1.16.6)

Proof. The inverse matrix M(s) = L^{−1}(s) of a unimodular matrix L(s) is also a unimodular matrix. Pre-multiplying (1.16.5) by M(s), we obtain

  M(s)[E_1s − A_1] = [E_2s − A_2]P(s).   (1.16.7)

Left-dividing the matrix M(s) by [E_2s − A_2] and right-dividing the matrix P(s) by [E_1s − A_1], we obtain

  M(s) = [E_2s − A_2]Q(s) + M,  P(s) = T(s)[E_1s − A_1] + P,   (1.16.8)

where M and P are matrices independent of the variable s. Substituting (1.16.8) into (1.16.7), we obtain

  [E_2s − A_2][Q(s) − T(s)][E_1s − A_1] = [E_2s − A_2]P − M[E_1s − A_1].   (1.16.9)

This equality can hold only for T(s) = Q(s); otherwise the left-hand side would be a polynomial matrix of at least second degree, while the right-hand side would be a polynomial matrix of at most first degree. Taking T(s) = Q(s) into account in (1.16.9), we obtain

  M[E_1s − A_1] = [E_2s − A_2]P.   (1.16.10)

We will show that det M ≠ 0. Dividing the matrix L(s) by [E_1s − A_1], we obtain

  L(s) = [E_1s − A_1]R(s) + L,   (1.16.11)

where L is independent of the variable s. Using (1.16.11), (1.16.7) and (1.16.8) successively, we obtain

  I = M(s)L(s) = M(s)[E_1s − A_1]R(s) + M(s)L
    = [E_2s − A_2]P(s)R(s) + [E_2s − A_2]Q(s)L + ML
    = [E_2s − A_2][P(s)R(s) + Q(s)L] + ML.   (1.16.12)

The right-hand side of (1.16.12) is a matrix of degree zero (equal to an identity matrix) if and only if

  P(s)R(s) + Q(s)L = 0.   (1.16.13)

With the above taken into account, from (1.16.12) we have ML = I. Thus the matrix M is nonsingular and L = M^{−1}. Pre-multiplying (1.16.10) by L = M^{−1}, we obtain (1.16.6). ∎

From Theorem 1.16.1 we have the following important corollary.

Corollary 1.16.1. If the condition (1.16.4) is satisfied, then the notions of equivalence and strict equivalence of the pencils [E_1s − A_1] and [E_2s − A_2] coincide.

From the fact that two polynomial matrices are equivalent if and only if they have the same elementary divisors, and from Corollary 1.16.1, the following theorem ensues immediately.

Theorem 1.16.2. If the condition (1.16.4) is satisfied, then the pencils [E_1s − A_1] and [E_2s − A_2] are strictly equivalent if and only if they have the same finite elementary divisors.

If the condition (1.16.4) is not satisfied, then the pencils [E_1s − A_1] and [E_2s − A_2] may fail to be strictly equivalent in spite of having the same finite elementary divisors. For example, the pencils

  [E_1s − A_1] = [1 1 2; 1 1 2; 1 1 3]s − [2 1 3; 3 2 5; 3 2 6],
  [E_2s − A_2] = [1 1 1; 1 1 1; 1 1 1]s − [2 1 1; 1 2 1; 1 1 1]   (1.16.14)

are not strictly equivalent (since rank E_1 = 2, rank E_2 = 1), although they have the same finite elementary divisor s − 1, because they have different infinite elementary divisors. Performing elementary operations on the pencils [E_k s − A_k t], we obtain the assertion of this.

Theorem 1.16.3. Two regular pencils [E_1s − A_1] and [E_2s − A_2] are strictly equivalent if and only if they have the same finite and infinite elementary divisors.

Proof. The strict equivalence of the pencils [E_1s − A_1] and [E_2s − A_2] implies strict equivalence of the pencils [E_1s − A_1t] and [E_2s − A_2t]. In view of this, the pencils should have the same finite and infinite elementary divisors.

Conversely, let two regular pencils [E_1s − A_1] and [E_2s − A_2], which have the same finite and infinite elementary divisors, be given. Let

  s = aλ + bμ, t = cλ + dμ (ad − bc ≠ 0).   (1.16.15)

Substituting (1.16.15) into [E_1s − A_1t] and [E_2s − A_2t] yields

  [E_1s − A_1t] = [E_1(aλ + bμ) − A_1(cλ + dμ)] = [Ē_1λ − Ā_1μ],
  [E_2s − A_2t] = [E_2(aλ + bμ) − A_2(cλ + dμ)] = [Ē_2λ − Ā_2μ],   (1.16.16)

where

  Ē_1 = aE_1 − cA_1, Ā_1 = dA_1 − bE_1, Ē_2 = aE_2 − cA_2, Ā_2 = dA_2 − bE_2.   (1.16.17)

By the assumption of regularity of the pencils [E_1s − A_1t] and [E_2s − A_2t], one can choose numbers a and c such that

  det Ē_1 ≠ 0 and det Ē_2 ≠ 0.   (1.16.18)

If the condition (1.16.18) is satisfied, then the pencils [Ē_1λ − Ā_1μ] and [Ē_2λ − Ā_2μ] are strictly equivalent, and this fact implies that the pencils [E_1s − A_1t] and [E_2s − A_2t], as well as the starting-point pencils [E_1s − A_1], [E_2s − A_2], are strictly equivalent. ∎


1.16.2 Weierstrass Decomposition of Regular Pencils

Assume at the beginning that the rectangular matrices E, A ∈ ℂ^{q×n} are such that

  rank [Es − A] = q for some s ∈ ℂ.   (1.16.19)

Theorem 1.16.4. If the condition (1.16.19) is satisfied, then there exist full-rank matrices P ∈ ℂ^{q×n} and Q ∈ ℂ^{n×n} such that

  [Es − A] = P [I_{n1}s − A_1, 0; 0, Ns − I_{n2}] Q,   (1.16.20)

where n_1 is the greatest degree of a polynomial in the variable s that is a minor of degree q of the matrix [Es − A], n_1 + n_2 = n, and N is a nilpotent matrix of index ν (N^ν = 0).

Proof. If the condition (1.16.19) is satisfied, then there exists a number c such that the matrix F = [Ec − A] has full row rank. In this case there exists the right inverse of this matrix

  F_p = F^T [F F^T]^{−1} ∈ ℂ^{n×q},   (1.16.21)

which satisfies the condition F F_p = I_q. Note that

  [Es − A] = [E(s − c) + Ec − A] = [E(s − c) + F] = F [F_p E(s − c) + I_n].   (1.16.22)

According to the considerations in Sect. 4.2.2, there exists a nonsingular matrix T ∈ ℂ^{n×n} such that

  F_p E = T diag(J_1, J_0) T^{−1},   (1.16.23)

where J_1 ∈ ℂ^{n1×n1} is a nonsingular matrix and J_0 ∈ ℂ^{n2×n2} is a nilpotent matrix of index ν. The matrix T can be chosen in such a way that diag(J_1, J_0) has the Jordan canonical form. Substitution of (1.16.23) into (1.16.22) yields

  [Es − A] = F T diag(J_1(s − c) + I_{n1}, J_0(s − c) + I_{n2}) T^{−1}
           = F T diag(J_1, J_0c − I_{n2}) diag(I_{n1}s − A_1, Ns − I_{n2}) T^{−1}
           = P diag(I_{n1}s − A_1, Ns − I_{n2}) Q,   (1.16.24)

where

  P = F T diag(J_1, J_0c − I_{n2}),  A_1 = J_1^{−1}(J_1c − I_{n1}),  N = (J_0c − I_{n2})^{−1} J_0,  Q = T^{−1}.   (1.16.25)

Note that N^ν = 0, since J_0^ν = 0 and N^ν = (J_0c − I_{n2})^{−ν} J_0^ν = 0. ∎

Remark 1.16.3. Transforming A_1 and N to the Jordan canonical form, we obtain

  diag[H_{m1}s − I_{m1}, ..., H_{mt}s − I_{mt}, I_{n1}s − J],

where

  H_{mi} = [0 1 0 ... 0 0; 0 0 1 ... 0 0; ⋮; 0 0 0 ... 0 1; 0 0 0 ... 0 0] ∈ ℂ^{mi×mi} (i = 1, ..., t),

J is the Jordan canonical form of the matrix A_1, and m_1 + m_2 + ... + m_t + n_1 = n.

Theorem 1.16.4 generalises the classical Weierstrass theorem to the case of a rectangular pencil satisfying the condition (1.16.19). If q = n, then the matrix P is square and nonsingular, and

  P^{−1}[Es − A]Q^{−1} = [I_{n1}s − A_1, 0; 0, Ns − I_{n2}],   (1.16.26)

where n_1 is equal to the degree of the polynomial det [Es − A].

Theorem 1.16.5. If [Es − A] is a regular pencil, then there exist two nonsingular matrices P, Q ∈ ℂ^{n×n} such that (1.16.26) holds.

The transformation matrices P and Q appearing in (1.16.26) can be computed by use of (1.16.25). Another method of computing these matrices is provided below. Let s_i be the i-th root of the equation

  det [Es − A] = 0   (1.16.27)

and

  m_i = dim Ker [Es_i − A].   (1.16.28)

Compute finite eigenvectors v_ij^1 using the equation

  [Es_i − A] v_ij^1 = 0, for j = 1, ..., m_i,   (1.16.29)

and then the (finite) generalized eigenvectors v_ij^{k+1} from the equation

  [Es_i − A] v_ij^{k+1} = −E v_ij^k, for k ≥ 1.   (1.16.30)

Let

  m_∞ = dim Ker E = n − rank E.   (1.16.31)

We compute infinite eigenvectors v_∞j^1 from the equations

  E v_∞j^1 = 0, for j = 1, ..., m_∞,   (1.16.32)

and then the eigenvectors v_∞j^{k+1} from the equation

  E v_∞j^{k+1} = A v_∞j^k, for k ≥ 1.   (1.16.33)

The computed vectors are the columns of the desired matrices

  P = [E v_ij^k ⋮ A v_∞j^k],  Q^{−1} = [v_ij^k ⋮ v_∞j^k].   (1.16.34)

Using (1.16.29)-(1.16.33), one can easily verify that

  [Es − A] [v_ij^k ⋮ v_∞j^k] = [E v_ij^k ⋮ A v_∞j^k] [I_{n1}s − A_1, 0; 0, Ns − I_{n2}].   (1.16.35)

Pre-multiplying (1.16.35) by [E v_ij^k ⋮ A v_∞j^k]^{−1}, we obtain (1.16.26) for P and Q given by (1.16.34).

Example 1.16.1. Compute the matrices P and Q for the regular pencil whose matrices E and A have the form

  E = [1 0 0; 0 1 0; 0 0 0],  A = [1 0 −1; 0 1 0; −1 0 1].

In this case,

  det [Es − A] = det [s−1 0 1; 0 s−1 0; 1 0 −1] = −s(s − 1)

and n_1 = 2, n_2 = 1, s_1 = 1, s_2 = 0, m_1 = dim Ker [Es_1 − A] = 1, m_2 = dim Ker [Es_2 − A] = 1, m_∞ = 1. Using (1.16.29), (1.16.32) and (1.16.33), we compute successively

  [Es_1 − A] v_1 = [0 0 1; 0 0 0; 1 0 0] v_1 = 0,  v_1 = [0; 1; 0],  E v_1 = [0; 1; 0],

  [Es_2 − A] v_2 = [−1 0 1; 0 −1 0; 1 0 −1] v_2 = 0,  v_2 = [1; 0; 1],  E v_2 = [1; 0; 0],

  E v_3 = 0,  v_3 = [0; 0; 1],  A v_3 = [−1; 0; 1].

Thus from (1.16.34) we have

  P = [E v_1, E v_2, A v_3] = [0 1 −1; 1 0 0; 0 0 1],  Q^{−1} = [v_1, v_2, v_3] = [0 1 0; 1 0 0; 0 1 1].
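The decomposition obtained in Example 1.16.1 can be verified directly (a sympy sketch with the eigenvectors as computed above):

```python
import sympy as sp

s = sp.symbols('s')
E = sp.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
A = sp.Matrix([[1, 0, -1], [0, 1, 0], [-1, 0, 1]])
v1 = sp.Matrix([0, 1, 0])   # finite eigenvector for s1 = 1
v2 = sp.Matrix([1, 0, 1])   # finite eigenvector for s2 = 0
v3 = sp.Matrix([0, 0, 1])   # infinite eigenvector (E*v3 = 0)
P = sp.Matrix.hstack(E * v1, E * v2, A * v3)
Qinv = sp.Matrix.hstack(v1, v2, v3)
# Weierstrass form (1.16.26): diag(I2*s - A1, N*s - I1), A1 = diag(1, 0), N = 0
W = sp.expand(P.inv() * (E * s - A) * Qinv)
```

The result is block diagonal: the finite part I_2 s − diag(1, 0) carrying the eigenvalues 1 and 0, and the 1×1 infinite part 0·s − 1.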

1.17 Decomposition of Singular Pencils of Matrices 1.17.1 Weierstrass–Kronecker Theorem Definition 1.17.1. A pencil [Es – A] (E, A det [Es – A] for all s when m = n.

) is said to be singular if m z n or

mun

96

Polynomial and Rational Matrices

Let rank [Es – A] = r d min (m, n) for almost every s . Assume that r < n. In this case, the columns of the matrices [Es – A] are linearly dependent and the equation

> Es  A @ x

(1.17.1)

0

has a nonzero solution x = x(s). Among the polynomial solutions to (1.17.1) we seek solutions of the minimal degree p with respect to s having the form x( s )

x0  x1s  x2 s 2  "  x p s p .

(1.17.2)

Substituting (1.17.2) into (1.17.1) and comparing coefficients by the same powers of the variable s, we obtain the equations  Ax0

0, Exi 1  Axi

0, for i 1, ..., p and Ex p

0,

which can be written in the form 0 ªA 0 « E A 0 « « 0 E A « # # # « « 0 0 0 « 0 0 «¬ 0

0 º ª x0 º « » 0 »» « x1 » 0 » « x2 » » »« % # # »« # » ! E  A » « x p 1 » » »« ! 0 E »¼ «¬ x p »¼ ! 0

! 0 ! 0

ª0º «0» « » «0» « ». « #» «0» « » «¬ 0 »¼

(1.17.3)

Note that (1.17.3) has a solution if and only if the matrix

Gp

0 ªA 0 « E A 0 « « 0 E A « # # # « « 0 0 0 « 0 0 «¬ 0

! 0 ! 0

0 º 0 »» ! 0 0 » » % # # » ! E A » » ! 0 E »¼

( p  2) mu( p 1) n

does not have full column rank. By assumption p is minimal, thus we have rank Gi = (i+1)n, for i = 0,1,…,p1 and rank Gp < (p + 1)n. Lemma 1.17.1. If (1.17.1) has the solution (1.17.2) of the minimal degree p > 0, then the pencil [Es – A] is strictly equivalent to the pencil

Polynomial Matrices

$$\begin{bmatrix} \mathbf{L}_p & 0 \\ 0 & \tilde{\mathbf{E}}s - \tilde{\mathbf{A}} \end{bmatrix}, \qquad (1.17.4)$$

where

$$\mathbf{L}_p = \begin{bmatrix} s & 1 & 0 & \cdots & 0 & 0 \\ 0 & s & 1 & \cdots & 0 & 0 \\ 0 & 0 & s & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & s & 1 \end{bmatrix} \in \mathbb{C}^{p \times (p+1)}[s]$$

and the equation

$$[\tilde{\mathbf{E}}s - \tilde{\mathbf{A}}]\,x = 0 \qquad (1.17.5)$$

does not have polynomial solutions of degree smaller than $p$.

Proof. Consider the linear operator $[\mathbf{E}s - \mathbf{A}]$ mapping $\mathbb{C}^n$ into $\mathbb{C}^m$. We will show that one can choose bases in $\mathbb{C}^n$ and $\mathbb{C}^m$ in such a way that the corresponding pencil $[\mathbf{E}s - \mathbf{A}]$ has the form

$$\begin{bmatrix} \mathbf{L}_p & \mathbf{B}s + \mathbf{C} \\ 0 & \tilde{\mathbf{E}}s - \tilde{\mathbf{A}} \end{bmatrix}. \qquad (1.17.6)$$

The linear operator equation corresponding to (1.17.1) is

$$[\mathbf{E}s - \mathbf{A}]\,x = 0, \qquad (1.17.7)$$

where

$$x = x(s) = x_0 + x_1 s + x_2 s^2 + \dots + x_p s^p.$$

Similarly as for (1.17.1), we obtain

$$\mathbf{A}x_0 = 0, \qquad \mathbf{E}x_{i-1} = \mathbf{A}x_i \ \text{ for } i = 1, \dots, p, \qquad \mathbf{E}x_p = 0. \qquad (1.17.8)$$

We will show that the vectors

$$\mathbf{A}x_1, \mathbf{A}x_2, \dots, \mathbf{A}x_p \qquad (1.17.9)$$


are linearly independent. Suppose that the vector $\mathbf{A}x_k$ depends linearly on the vectors $\mathbf{A}x_1, \dots, \mathbf{A}x_{k-1}$ ($k \le p$), that is,

$$\mathbf{A}x_k = a_1\mathbf{A}x_1 + \dots + a_{k-1}\mathbf{A}x_{k-1} \ \text{ for certain } a_i \in \mathbb{C}.$$

Using (1.17.8), we obtain

$$\mathbf{A}x_k = \mathbf{E}x_{k-1} \quad \text{and} \quad \mathbf{A}x_k = a_1\mathbf{E}x_0 + a_2\mathbf{E}x_1 + \dots + a_{k-1}\mathbf{E}x_{k-2},$$

so that $\mathbf{E}\hat x_{k-1} = 0$, where

$$\hat x_{k-1} = x_{k-1} - a_1 x_0 - a_2 x_1 - \dots - a_{k-1}x_{k-2}.$$

Note that

$$\mathbf{A}\hat x_{k-1} = \mathbf{A}x_{k-1} - a_1\mathbf{A}x_0 - a_2\mathbf{A}x_1 - \dots - a_{k-1}\mathbf{A}x_{k-2} = \mathbf{E}\left(x_{k-2} - a_2 x_0 - a_3 x_1 - \dots - a_{k-1}x_{k-3}\right) = \mathbf{E}\hat x_{k-2},$$

where

$$\hat x_{k-2} = x_{k-2} - a_2 x_0 - a_3 x_1 - \dots - a_{k-1}x_{k-3}.$$

Similarly,

$$\mathbf{A}\hat x_{k-2} = \mathbf{A}x_{k-2} - a_2\mathbf{A}x_0 - a_3\mathbf{A}x_1 - \dots - a_{k-1}\mathbf{A}x_{k-3} = \mathbf{E}\left(x_{k-3} - a_3 x_0 - \dots - a_{k-1}x_{k-4}\right) = \mathbf{E}\hat x_{k-3},$$

where

$$\hat x_{k-3} = x_{k-3} - a_3 x_0 - \dots - a_{k-1}x_{k-4}.$$

Continuing this procedure, we obtain

$$\mathbf{A}\hat x_{k-3} = \mathbf{E}\hat x_{k-4}, \ \dots, \ \mathbf{A}\hat x_1 = \mathbf{E}\hat x_0, \ \mathbf{A}\hat x_0 = 0,$$

where

$$\hat x_{k-4} = x_{k-4} - a_4 x_0 - \dots - a_{k-1}x_{k-5}, \ \dots, \ \hat x_1 = x_1 - a_{k-1}x_0, \ \hat x_0 = x_0.$$


Taking into account the above relationships, one can easily verify that the vector

$$x = \hat x(s) = \hat x_0 + \hat x_1 s + \hat x_2 s^2 + \dots + \hat x_{k-1}s^{k-1}, \qquad k \le p,$$

is a solution to (1.17.7) of degree smaller than $p$. This contradiction proves that the vectors (1.17.9) are linearly independent.

We will show by contradiction that the vectors $x_0, x_1, \dots, x_p$ are also linearly independent. Suppose that these vectors are linearly dependent, that is,

$$b_0 x_0 + b_1 x_1 + \dots + b_p x_p = 0 \ \text{ for some } b_i \in \mathbb{C}. \qquad (1.17.10)$$

In this case we obtain

$$b_1\mathbf{A}x_1 + b_2\mathbf{A}x_2 + \dots + b_p\mathbf{A}x_p = 0,$$

since $\mathbf{A}x_0 = 0$. The vectors (1.17.9) are linearly independent; in view of this, $b_1 = b_2 = \dots = b_p = 0$, and from (1.17.10) we obtain $b_0 x_0 = 0$. Note that $x_0 \neq 0$, since otherwise $s^{-1}x(s)$ would also be a solution of the equation. Hence $b_0 x_0 = 0$ implies $b_0 = 0$, and the vectors $x_0, x_1, \dots, x_p$ are linearly independent.

We choose the vectors (1.17.9) to be the first basis vectors of the space $\mathbb{C}^m$ and the vectors $x_0, x_1, \dots, x_p$ to be the first basis vectors of $\mathbb{C}^n$. Using (1.17.8), it is easy to verify that in this case the pencil $[\mathbf{E}s - \mathbf{A}]$ has the form (1.17.6). Note that (1.17.4) can be obtained from (1.17.6) by adding to $[\mathbf{B}s + \mathbf{C}]$ an appropriate linear combination, with coefficients independent of $s$, of the columns of $\mathbf{L}_p$ and the rows of $[\tilde{\mathbf{E}}s - \tilde{\mathbf{A}}]$.

We will show that (1.17.5) has no solutions of degree smaller than $p$. Taking into account (1.17.4), we can write

$$\begin{bmatrix} \mathbf{L}_p & 0 \\ 0 & \tilde{\mathbf{E}}s - \tilde{\mathbf{A}} \end{bmatrix}\begin{bmatrix} z \\ y \end{bmatrix} = 0, \qquad (1.17.11)$$

which is equivalent to

$$\mathbf{L}_p z = 0, \qquad [\tilde{\mathbf{E}}s - \tilde{\mathbf{A}}]\,y = 0. \qquad (1.17.12)$$

From the special form of $\mathbf{L}_p$ it follows that the equation $\mathbf{L}_p z = 0$ has a solution of degree $p$ of the form

$$z_i = (-1)^{i-1} s^{i-1} z_1 \qquad (i = 1, \dots, p+1)$$

for arbitrary $z_1$, where $z_i$ is the $i$-th component of the vector $z$. Thus the matrix $\mathbf{G}_{p-1}$ in (1.17.3) has full column rank, equal to $pn$.


The equation $[\tilde{\mathbf{E}}s - \tilde{\mathbf{A}}]y = 0$ has a solution of minimal degree $p$ if and only if the matrix

$$\bar{\mathbf{G}}_{p-1} = \begin{bmatrix} -\tilde{\mathbf{A}} & 0 & \cdots & 0 \\ \tilde{\mathbf{E}} & -\tilde{\mathbf{A}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & -\tilde{\mathbf{A}} \\ 0 & 0 & \cdots & \tilde{\mathbf{E}} \end{bmatrix} \in \mathbb{C}^{(p+1)(n-p) \times p(n-p-1)}$$

has full column rank, equal to $p(n - p - 1)$. From (1.17.4) it follows that the matrix $\mathbf{G}_{p-1}$ in (1.17.3), after an appropriate interchange of rows and columns, can be written in the form

$$\mathbf{G}_{p-1} = \begin{bmatrix} \hat{\mathbf{G}}_{p-1} & 0 \\ 0 & \bar{\mathbf{G}}_{p-1} \end{bmatrix},$$

where $\hat{\mathbf{G}}_{p-1}$ has dimensions $p(p+1) \times p(p+1)$ and corresponds to the equation $\mathbf{L}_p z = 0$. Note that the condition $\operatorname{rank}\mathbf{G}_{p-1} = np$ implies $\operatorname{rank}\hat{\mathbf{G}}_{p-1} = p(p+1)$ and $\operatorname{rank}\bar{\mathbf{G}}_{p-1} = p(n-p-1)$. Hence the equation $[\tilde{\mathbf{E}}s - \tilde{\mathbf{A}}]y = 0$ has no solution of degree smaller than $p$. ∎

In the general case we assume that:
1. $\operatorname{rank}[\mathbf{E}s - \mathbf{A}] = r < \min(m, n)$;
2. the columns and rows of $[\mathbf{E}s - \mathbf{A}]$ are linearly dependent over $\mathbb{C}$, i.e., there exist $x \in \mathbb{C}^n$ and $v \in \mathbb{C}^m$ (independent of $s$) such that

$$[\mathbf{E}s - \mathbf{A}]\,x = 0 \qquad (1.17.13)$$

and

$$[\mathbf{E}s - \mathbf{A}]^T v = 0. \qquad (1.17.14)$$

Let (1.17.13) have $p_0$ linearly independent solutions $x_1, x_2, \dots, x_{p_0}$. Choosing these solutions as the first $p_0$ basis vectors of the space $\mathbb{C}^n$, we obtain a strictly equivalent pencil that has the form

$$\begin{bmatrix} 0_{n,p_0} & \tilde{\mathbf{E}}s - \tilde{\mathbf{A}} \end{bmatrix}, \qquad (1.17.15)$$

where $0_{n,p_0}$ is a zero matrix of dimension $n \times p_0$.


Similarly, let (1.17.14) have $q_0$ linearly independent solutions $v_1, v_2, \dots, v_{q_0}$. Choosing these solutions as the first $q_0$ basis vectors of the space $\mathbb{C}^m$, we obtain a strictly equivalent pencil that has the form

$$\begin{bmatrix} 0_{n,p_0} & \!\!\begin{matrix} 0_{q_0,\,n-p_0} \\ \tilde{\mathbf{E}}s - \tilde{\mathbf{A}} \end{matrix} \end{bmatrix}, \qquad (1.17.16)$$

where $[\tilde{\mathbf{E}}s - \tilde{\mathbf{A}}]$ has rows and columns linearly independent over $\mathbb{C}$.

Let the columns of $[\tilde{\mathbf{E}}s - \tilde{\mathbf{A}}]$ be linearly dependent over the field of rational functions $\mathbb{C}(s)$ and let the equation

$$[\tilde{\mathbf{E}}s - \tilde{\mathbf{A}}]\,x = 0$$

have a polynomial solution of minimal degree $p_1$. Applying Lemma 1.17.1 to the pencil $[\tilde{\mathbf{E}}s - \tilde{\mathbf{A}}]$, we obtain a strictly equivalent pencil that has the form

$$\begin{bmatrix} 0_{n,p_0} & \!\!\begin{matrix} 0_{q_0,\,n-p_0} \\ \operatorname{diag}\left[\mathbf{L}_{p_1},\, \mathbf{E}_1 s - \mathbf{A}_1\right] \end{matrix} \end{bmatrix}, \qquad (1.17.17)$$

and the equation

$$[\mathbf{E}_1 s - \mathbf{A}_1]\,x = 0 \qquad (1.17.18)$$

has no polynomial solutions of degree smaller than $p_1$. If (1.17.18) has a polynomial solution of minimal degree $p_2$, then, continuing this procedure, we obtain a strictly equivalent pencil that has the form

$$\begin{bmatrix} 0_{n,p_0} & \!\!\begin{matrix} 0_{q_0,\,n-p_0} \\ \operatorname{diag}\left[\mathbf{L}_{p_1}, \mathbf{L}_{p_2}, \dots, \mathbf{L}_{p_w},\, \mathbf{E}_w s - \mathbf{A}_w\right] \end{matrix} \end{bmatrix}, \qquad (1.17.19)$$

where $p_1 \le p_2 \le \dots \le p_w$, and the equation $[\mathbf{E}_w s - \mathbf{A}_w]x = 0$ has no nonzero polynomial solutions. If the pencil $[\mathbf{E}_w s - \mathbf{A}_w]$ has linearly dependent rows over the field $\mathbb{C}(s)$ and the equation

$$[\mathbf{E}_w s - \mathbf{A}_w]^T v = 0$$

has a polynomial solution of minimal degree $q_1$, then, applying Lemma 1.17.1 to $[\mathbf{E}_w s - \mathbf{A}_w]^T$, we obtain a strictly equivalent pencil that has the form


$$\begin{bmatrix} 0_{n,p_0} & \!\!\begin{matrix} 0_{q_0,\,n-p_0} \\ \operatorname{diag}\left[\mathbf{L}_{p_1}, \dots, \mathbf{L}_{p_w}, \mathbf{L}^T_{q_1},\, \mathbf{E}'_1 s - \mathbf{A}'_1\right] \end{matrix} \end{bmatrix}, \qquad (1.17.20)$$

where the equation

$$[\mathbf{E}'_1 s - \mathbf{A}'_1]^T v = 0 \qquad (1.17.21)$$

has no polynomial solutions of degree smaller than $q_1$. If (1.17.21) has a polynomial solution of minimal degree $q_2$, then, continuing this procedure, we obtain a strictly equivalent pencil that has the form

$$\begin{bmatrix} 0_{n,p_0} & \!\!\begin{matrix} 0_{q_0,\,n-p_0} \\ \operatorname{diag}\left[\mathbf{L}_{p_1}, \dots, \mathbf{L}_{p_w}, \mathbf{L}^T_{q_1}, \dots, \mathbf{L}^T_{q_s},\, \mathbf{E}_0 s - \mathbf{A}_0\right] \end{matrix} \end{bmatrix}, \qquad (1.17.22)$$

where $[\mathbf{E}_0 s - \mathbf{A}_0]$ is a regular pencil. Applying Theorem 1.16.4 to the pencil $[\mathbf{E}_0 s - \mathbf{A}_0]$, we obtain the Weierstrass–Kronecker canonical form of a singular pencil, that is,

$$\begin{bmatrix} 0_{n,p_0} & \!\!\begin{matrix} 0_{q_0,\,n-p_0} \\ \operatorname{diag}\left[\mathbf{L}_{p_1}, \dots, \mathbf{L}_{p_w}, \mathbf{L}^T_{q_1}, \dots, \mathbf{L}^T_{q_s}, \mathbf{H}_{n_1}s - \mathbf{I}_{n_1}, \dots, \mathbf{H}_{n_t}s - \mathbf{I}_{n_t}, \mathbf{I}_r s - \mathbf{J}\right] \end{matrix} \end{bmatrix}, \qquad (1.17.23)$$

where the pencil $\operatorname{diag}\left[\mathbf{H}_{n_1}s - \mathbf{I}_{n_1}, \dots, \mathbf{H}_{n_t}s - \mathbf{I}_{n_t}, \mathbf{I}_r s - \mathbf{J}\right]$ corresponds to the regular pencil $[\mathbf{E}_0 s - \mathbf{A}_0]$. Thus we have proven the following Weierstrass–Kronecker theorem on the decomposition of a singular pencil.

Theorem 1.17.1. An arbitrary singular pencil $[\mathbf{E}s - \mathbf{A}]$ is strictly equivalent to the pencil (1.17.23).

1.17.2 Kronecker Indices of Singular Pencils and Strict Equivalence of Singular Pencils

Let us consider a pencil $[\mathbf{E}s - \mathbf{A}]$ with $\mathbf{E}, \mathbf{A} \in \mathbb{C}^{m \times n}$. Let $x_1(s)$ be a nonzero polynomial solution of minimal degree $p_1$ of the equation

$$[\mathbf{E}s - \mathbf{A}]\,x = 0. \qquad (1.17.24)$$


Among the polynomial solutions of this equation that are linearly independent of $x_1(s)$ over $\mathbb{C}(s)$, we choose a solution $x_2(s)$ of minimal degree $p_2$ ($p_2 \ge p_1$). Then, among the polynomial solutions of (1.17.24) that are linearly independent of $x_1(s)$ and $x_2(s)$ over $\mathbb{C}(s)$, we choose a solution $x_3(s)$ of minimal degree $p_3$ ($p_3 \ge p_2$). Continuing this procedure, we obtain a sequence of linearly independent polynomial solutions of (1.17.24) of the form

$$x_1(s), x_2(s), \dots, x_w(s) \qquad (w \le n) \qquad (1.17.25)$$

with degrees

$$p_1 \le p_2 \le \dots \le p_w. \qquad (1.17.26)$$

In the general case, for a given pencil $[\mathbf{E}s - \mathbf{A}]$ there exist many sequences of polynomial solutions (1.17.25) to (1.17.24). We will show that all these sequences of polynomial solutions have the same sequence of degrees (1.17.26). Suppose that $\bar x_1(s), \bar x_2(s), \dots, \bar x_w(s)$, with degrees $\bar p_1 \le \bar p_2 \le \dots \le \bar p_w$, is another sequence of polynomial solutions to (1.17.24). Let

$$p_1 = \dots = p_{n_1} < p_{n_1+1} = \dots = p_{n_2} < p_{n_2+1} \le \dots$$

and

$$\bar p_1 = \dots = \bar p_{\bar n_1} < \bar p_{\bar n_1+1} = \dots = \bar p_{\bar n_2} < \bar p_{\bar n_2+1} \le \dots$$

From this choice of $x_1(s)$ and $\bar x_1(s)$ it follows that $p_1 = \bar p_1$. Note that $\bar x_i(s)$ for $i = 1, \dots, \bar n_1$ is a linear combination of $x_1(s), \dots, x_{n_1}(s)$, since otherwise $x_{n_1+1}(s)$ in (1.17.25) could be replaced with a polynomial vector of degree smaller than $p_{n_1+1}$. Similarly, $x_i(s)$ for $i = 1, \dots, n_1$ is a linear combination of $\bar x_1(s), \dots, \bar x_{\bar n_1}(s)$. In view of this, $\bar n_1 = n_1$ and $\bar p_{n_1+1} = p_{n_1+1}$. Similarly it is easy to show that $\bar n_2 = n_2$ and $\bar p_{n_2+1} = p_{n_2+1}$, and so on.

Definition 1.17.2. The nonnegative integers $p_1, p_2, \dots, p_w$ are called the minimal column (Kronecker) indices of the pencil $[\mathbf{E}s - \mathbf{A}]$.

Let $v_1(s)$ be a nonzero polynomial solution of minimal degree $q_1$ of the equation

$$[\mathbf{E}s - \mathbf{A}]^T v = 0. \qquad (1.17.27)$$

Among the polynomial solutions of this equation that are linearly independent of $v_1(s)$ over $\mathbb{C}(s)$, we choose a solution $v_2(s)$ of minimal degree $q_2$ ($q_2 \ge q_1$).


Continuing this procedure, we obtain a sequence of polynomial solutions to (1.17.27) of the form

$$v_1(s), v_2(s), \dots, v_s(s) \qquad (s \le n) \qquad (1.17.28)$$

with degrees

$$q_1 \le q_2 \le \dots \le q_s. \qquad (1.17.29)$$

Similarly to (1.17.25) and (1.17.26), one can show that all sequences of polynomial solutions (1.17.28) to (1.17.27) have the same sequence of minimal degrees (1.17.29).

Definition 1.17.3. The nonnegative integers $q_1, q_2, \dots, q_s$ are called the minimal row (Kronecker) indices of the pencil $[\mathbf{E}s - \mathbf{A}]$.

Lemma 1.17.2. Strictly equivalent pencils have the same minimal column and row Kronecker indices.

Proof. Take strictly equivalent pencils $[\mathbf{E}_1 s - \mathbf{A}_1]$ and $[\mathbf{E}_2 s - \mathbf{A}_2]$, i.e., pencils related by $[\mathbf{E}_2 s - \mathbf{A}_2] = \mathbf{P}[\mathbf{E}_1 s - \mathbf{A}_1]\mathbf{Q}$. Pre-multiplying the equation

$$[\mathbf{E}_1 s - \mathbf{A}_1]\,x = 0 \qquad (1.17.30)$$

by the nonsingular matrix $\mathbf{P}$ and defining the new vector $z = \mathbf{Q}^{-1}x$ ($\mathbf{Q}$ is a nonsingular matrix), we obtain

$$\mathbf{P}[\mathbf{E}_1 s - \mathbf{A}_1]\mathbf{Q}\mathbf{Q}^{-1}x = [\mathbf{E}_2 s - \mathbf{A}_2]\,z = 0. \qquad (1.17.31)$$

Thus these pencils have the same minimal column indices, since the degree of $x$ in (1.17.30) is equal to the degree of $z$ in (1.17.31). Similarly, we can prove that these pencils have the same minimal row indices. ∎

Lemma 1.17.3. The Weierstrass–Kronecker canonical form (1.17.23) of the pencil $[\mathbf{E}s - \mathbf{A}]$ is completely determined by the $p_0$ minimal column indices equal to zero, the nonzero minimal column indices $p_1, p_2, \dots, p_w$, the $q_0$ minimal row indices equal to zero, the nonzero minimal row indices $q_1, q_2, \dots, q_s$, and by its finite and infinite elementary divisors.

Proof. The matrix $\mathbf{L}_{p_i}$ ($i = 1, \dots, w$) has only one minimal column index $p_i$, since the equation $\mathbf{L}_{p_i}z = 0$ has only one polynomial solution of degree $p_i$ and the rows of the matrix $\mathbf{L}_{p_i}$ are linearly independent. Similarly, the matrix $\mathbf{L}^T_{q_j}$ ($j = 1, \dots, s$)


has only one minimal row index $q_j$, since the equation $\mathbf{L}^T_{q_j}v = 0$ has only one polynomial solution of degree $q_j$ and the columns of the matrix $\mathbf{L}^T_{q_j}$ are linearly independent. It is easy to check that the matrix $\mathbf{L}_{p_i}$ (or $\mathbf{L}^T_{q_j}$) does not have any elementary divisors, since one of its minors of greatest order $p_i$ (respectively $q_j$) is equal to 1 and another is equal to $s^{p_i}$ ($s^{q_j}$). The first $p_0$ columns of the matrix (1.17.23) correspond to the polynomial solutions of (1.17.13); in view of this, the first $p_0$ minimal column indices of $[\mathbf{E}s - \mathbf{A}]$ are equal to zero. Dually, the first $q_0$ minimal row indices of $[\mathbf{E}s - \mathbf{A}]$ are equal to zero. Note that the pencil $[\mathbf{E}_0 s - \mathbf{A}_0]$ in (1.17.22) is regular; hence it is completely determined by its finite and infinite elementary divisors. From the block-diagonal form (1.17.23) it follows that the canonical form of the pencil $[\mathbf{E}s - \mathbf{A}]$ is completely determined by the minimal column and row indices and by the finite and infinite elementary divisors of every diagonal block. ∎

From Lemmas 1.17.2 and 1.17.3, and from the fact that two singular pencils having the same canonical form are strictly equivalent, the following Kronecker theorem can be inferred.

Theorem 1.17.2. (Kronecker) Two singular pencils $[\mathbf{E}_1 s - \mathbf{A}_1]$, $[\mathbf{E}_2 s - \mathbf{A}_2]$ with $\mathbf{E}_k, \mathbf{A}_k \in \mathbb{C}^{m \times n}$ ($k = 1, 2$) are strictly equivalent if and only if they have the same minimal column and row indices, as well as the same finite and infinite elementary divisors.
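The rank condition (1.17.3) that underlies these minimal indices is easy to probe numerically. The following is a minimal Python sketch under our own conventions (the example pencil, the helper names `rank` and `G`, and the exact-arithmetic approach are illustrative assumptions, not the book's notation):

```python
from fractions import Fraction

def rank(M):
    """Exact rank of a matrix over the rationals (Gaussian elimination)."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            for j in range(c, len(M[0])):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def G(E, A, p):
    """Block matrix of (1.17.3): (p+2) x (p+1) blocks, -A on the block
    diagonal and E on the first block subdiagonal."""
    m, n = len(A), len(A[0])
    rows = []
    for bi in range(p + 2):
        for i in range(m):
            row = []
            for bj in range(p + 1):
                for j in range(n):
                    if bi == bj:
                        row.append(-A[i][j])
                    elif bi == bj + 1:
                        row.append(E[i][j])
                    else:
                        row.append(0)
            rows.append(row)
    return rows

# Pencil [Es - A] = [[s, 1, 0], [0, s, 1]]: its minimal polynomial solution
# is x(s) = (1, -s, s^2)^T of degree p = 2.
E = [[1, 0, 0], [0, 1, 0]]
A = [[0, -1, 0], [0, 0, -1]]
print([rank(G(E, A, i)) for i in range(3)])  # [3, 6, 8]
```

Here rank G_0 = 3 = (0+1)n and rank G_1 = 6 = (1+1)n are full, while rank G_2 = 8 < (2+1)n = 9, confirming that p = 2 is the minimal degree.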

2 Rational Functions and Matrices

2.1 Basic Definitions and Operations on Rational Functions

A quotient of two polynomials $l(s)$ and $m(s)$ in the variable $s$, where $m(s)$ is a nonzero polynomial,

$$w(s) = \frac{l(s)}{m(s)} \qquad (2.1.1)$$

is called a rational function of the variable $s$. The set of rational functions with coefficients from a field $\mathbb{F}$ will be denoted by $\mathbb{F}(s)$. The field $\mathbb{F}$ can be the field $\mathbb{R}$ of real numbers, the field $\mathbb{C}$ of complex numbers, the field $\mathbb{Q}$ of rational numbers, or a field of rational functions of another variable $z$, etc. We say that the rational functions

$$w_1(s) = \frac{l_1(s)}{m_1(s)}, \qquad w_2(s) = \frac{l_2(s)}{m_2(s)} \qquad (2.1.2)$$

belong to the same equivalence class if and only if

$$l_1(s)\,m_2(s) = l_2(s)\,m_1(s). \qquad (2.1.3)$$

Let $l_1(s) = a(s)\bar l_1(s)$ and $m_1(s) = a(s)\bar m_1(s)$, where $a(s)$ is a greatest common divisor of $l_1(s)$ and $m_1(s)$. Then

$$w_1(s) = \frac{a(s)\bar l_1(s)}{a(s)\bar m_1(s)} = \frac{\bar l_1(s)}{\bar m_1(s)},$$


where $\bar l_1(s)$ and $\bar m_1(s)$ are relatively prime. Thus the rational function (2.1.1) represents the whole equivalence class. We say that the rational function (2.1.1) is in standard form if and only if the polynomials $l(s)$ and $m(s)$ are relatively prime and the polynomial $m(s)$ is monic (i.e., a polynomial in which the coefficient at the highest power of the variable $s$ is 1). Zeros of the numerator polynomial $l(s)$ are called finite zeros (shortly, zeros), and zeros of the denominator polynomial $m(s)$ are called finite poles (shortly, poles) of the rational function (2.1.1).

Definition 2.1.1. The order $r$ of the rational function (2.1.1) is the difference of the degrees of the denominator $m(s)$ and the numerator $l(s)$:

$$r = \deg m(s) - \deg l(s), \qquad (2.1.5)$$

where deg denotes the degree of a polynomial. Let

$$l(s) = l_m s^m + l_{m-1}s^{m-1} + \dots + l_1 s + l_0, \qquad m(s) = s^n + a_{n-1}s^{n-1} + \dots + a_1 s + a_0. \qquad (2.1.6)$$

If the polynomials (2.1.6) are the numerator and denominator, respectively, of the function (2.1.1), then the order of this function equals $r = n - m$; the function has $m$ finite zeros and $n$ finite poles. If $r = n - m < 0$, then the function has a pole of multiplicity $|r|$ at infinity ($s = \infty$), and if $r = n - m > 0$, then it has a zero of multiplicity $r$ at infinity ($s = \infty$).

Definition 2.1.2. The rational function (2.1.1) is called proper (or causal) if and only if its order is nonnegative ($r = \deg m(s) - \deg l(s) \ge 0$), and strictly proper (or strictly causal) if and only if its order is positive ($r = \deg m(s) - \deg l(s) > 0$).

Dividing the numerator $l(s)$ by the denominator $m(s)$, the rational function (2.1.1) can be presented in the form

$$w(s) = w_r s^{-r} + w_{r+1}s^{-(r+1)} + \dots, \qquad (2.1.7)$$

where $r$ is the order of the function given by (2.1.5), and $w_r, w_{r+1}, \dots$ are coefficients dependent on the coefficients of the polynomials $l(s)$ and $m(s)$. For example, division of the polynomial $l(s) = 2s + 1$ by $m(s) = s^2 + 2s + 3$ yields

$$w_1(s) = \frac{2s + 1}{s^2 + 2s + 3} = 2s^{-1} - 3s^{-2} + 9s^{-4} - \dots \qquad (2.1.8)$$

In this case, $m = 1$, $n = 2$, $r = n - m = 1$, $w_1 = 2$, $w_2 = -3$, $w_3 = 0$, $w_4 = 9, \dots$


The coefficients $w_r, w_{r+1}, \dots$ can also be computed in the following way. From the equality

$$w(s) = \frac{l_m s^m + l_{m-1}s^{m-1} + \dots + l_1 s + l_0}{s^n + m_{n-1}s^{n-1} + \dots + m_1 s + m_0} = w_r s^{-r} + w_{r+1}s^{-(r+1)} + \dots \qquad (2.1.9)$$

we obtain

$$l_m s^m + l_{m-1}s^{m-1} + \dots + l_1 s + l_0 = (s^n + m_{n-1}s^{n-1} + \dots + m_1 s + m_0)(w_r s^{-r} + w_{r+1}s^{-(r+1)} + \dots). \qquad (2.1.10)$$

Comparing coefficients of the same powers of the variable $s$, we obtain

$$w_r = l_m, \quad w_{r+1} = l_{m-1} - m_{n-1}w_r, \quad w_{r+2} = l_{m-2} - m_{n-1}w_{r+1} - m_{n-2}w_r, \ \dots \qquad (2.1.11)$$
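The recursion (2.1.11) is easy to carry out mechanically. Here is a minimal Python sketch (the helper name `laurent_coeffs` and the decreasing-power coefficient lists are our own conventions, not the book's):

```python
from fractions import Fraction

def laurent_coeffs(l, m, count):
    """First `count` coefficients w_1, w_2, ... of the expansion
    l(s)/m(s) = w_1 s^-1 + w_2 s^-2 + ...  (strictly proper, m monic),
    computed with the recursion (2.1.11)."""
    n = len(m) - 1

    def L(j):  # coefficient of s^j in l(s), zero outside the list
        i = len(l) - 1 - j
        return Fraction(l[i]) if 0 <= i < len(l) else Fraction(0)

    w = []
    for k in range(1, count + 1):
        wk = L(n - k) - sum(Fraction(m[j]) * w[k - 1 - j]
                            for j in range(1, min(k - 1, n) + 1))
        w.append(wk)
    return w

# w(s) = (2s + 1)/(s^2 + 2s + 3), cf. (2.1.8)
print(laurent_coeffs([2, 1], [1, 2, 3], 5))  # [2, -3, 0, 9, -18]
```

For the function (2.1.8) this reproduces w_1 = 2, w_2 = -3, w_3 = 0, w_4 = 9.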

Using (2.1.11) for the function (2.1.8), we obtain the same result as the one yielded by the polynomial division method.

From Definition 2.1.2 it follows that the function (2.1.1) is proper if and only if it can be presented in the form

$$w(s) = w_0 + w_1 s^{-1} + w_2 s^{-2} + \dots, \qquad (2.1.12)$$

and it is strictly proper if and only if $w_0 = 0$. Proper and strictly proper functions have no poles at infinity. The rational functions (2.1.2) are equal if and only if they satisfy the condition (2.1.3); rational functions of the form (2.1.7) are equal if and only if their corresponding coefficients $w_k$, $k = r, r+1, \dots$, are equal. A rational function of the form

$$w_1(s) + w_2(s) = \frac{l_1(s)}{m_1(s)} + \frac{l_2(s)}{m_2(s)} = \frac{l_1(s)m_2(s) + l_2(s)m_1(s)}{m_1(s)m_2(s)} \qquad (2.1.13)$$

is called the sum of the two rational functions (2.1.2). The sum of two rational functions of the form (2.1.7) is by definition the rational function whose coefficients $w_k$, $k = r, r+1, \dots$, are the sums of the corresponding coefficients of the two functions. For example, the sum of the rational function (2.1.8) and of

$$w_2(s) = \frac{2s^2 + 1}{s + 2} = 2s - 4 + 9s^{-1} - 18s^{-2} + 36s^{-3} - 72s^{-4} + \dots \qquad (2.1.14)$$

is the rational function

$$w_1(s) + w_2(s) = 2s - 4 + 11s^{-1} - 21s^{-2} + 36s^{-3} - 63s^{-4} + \dots \qquad (2.1.15)$$

The rational function

$$w_1(s)\,w_2(s) = \frac{l_1(s)\,l_2(s)}{m_1(s)\,m_2(s)} \qquad (2.1.16)$$

is called the product of the two rational functions (2.1.2). The product of two proper rational functions of the form (2.1.12) and

$$\bar w(s) = \bar w_0 + \bar w_1 s^{-1} + \bar w_2 s^{-2} + \dots$$

is by definition equal to

$$w(s)\,\bar w(s) = w_0\bar w_0 + (w_0\bar w_1 + w_1\bar w_0)s^{-1} + (w_0\bar w_2 + w_1\bar w_1 + w_2\bar w_0)s^{-2} + \dots = \sum_{k=0}^{\infty}\left(\sum_{i=0}^{k} w_i \bar w_{k-i}\right) s^{-k}. \qquad (2.1.17)$$

The product of two arbitrary functions of the form (2.1.7) is defined similarly. For example, the product of the rational functions (2.1.8) and (2.1.14) is

$$w_1(s)\,w_2(s) = (2s^{-1} - 3s^{-2} + 9s^{-4} - \dots)(2s - 4 + 9s^{-1} - 18s^{-2} + 36s^{-3} - \dots) = 4 - 14s^{-1} + 30s^{-2} + \dots$$
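The Cauchy-product rule (2.1.17) can be sketched in Python as follows (the dictionary representation keyed by exponents of s is our own convention; because the inputs are truncated, only the leading output coefficients are reliable):

```python
from fractions import Fraction

def series_mul(a, b, terms):
    """Multiply two truncated series given as dicts {exponent_of_s: coeff},
    keeping only exponents >= -terms (Cauchy product as in (2.1.17))."""
    out = {}
    for i, ai in a.items():
        for j, bj in b.items():
            k = i + j
            if k >= -terms:
                out[k] = out.get(k, Fraction(0)) + ai * bj
    return out

# w1(s) = 2s^-1 - 3s^-2 + 9s^-4 - ...   (2.1.8)
w1 = {-1: Fraction(2), -2: Fraction(-3), -3: Fraction(0), -4: Fraction(9)}
# w2(s) = 2s - 4 + 9s^-1 - 18s^-2 + 36s^-3 - ...   (2.1.14)
w2 = {1: Fraction(2), 0: Fraction(-4), -1: Fraction(9),
      -2: Fraction(-18), -3: Fraction(36)}
p = series_mul(w1, w2, 2)
print(p[0], p[-1], p[-2])  # 4 -14 30
```

This reproduces the leading coefficients 4, -14, 30 of the product computed above.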

It is easy to check that, with the operations of addition and multiplication thus defined, the set of rational functions satisfies the conditions of the definition of a field. The rational function $0/1$ is the zero of this field, and the rational function $1/1$ is its unity (0 denotes the zero polynomial, i.e., the polynomial with all coefficients zero, and 1 denotes the polynomial whose coefficients, with the exception of the one at $s^0$ (which is 1), are all zero). The set of causal rational functions with polynomial coefficients from a field $\mathbb{F}$ constitutes a ring, which will be denoted by $\mathbb{F}_p(s)$. The units of this ring are the proper rational functions of order $r = 0$.

A special case of proper rational functions are stable rational functions.

Definition 2.1.3. A proper rational function of the form (2.1.12) is called stable if and only if the sequence of its coefficients $w_0, w_1, w_2, \dots$ converges to zero. A proper rational function of the form (2.1.1) is stable if and only if all its poles have moduli less than 1. Stable rational functions with coefficients from the field $\mathbb{F}$ constitute a ring, which will be denoted by $\mathbb{F}_S(s)$. The units of this ring are the stable rational functions that have zeros only inside the unit disk ($|s| < 1$).

A proper rational function of the form (2.1.12) is called finite if $w_k = 0$ for $k > n$. Thus a finite rational function has the form of a polynomial in the variable $s^{-1}$,

$$w_n s^{-n} + w_{n-1}s^{-(n-1)} + \dots + w_1 s^{-1} + w_0.$$

A proper rational function of the form (2.1.1) is finite if $m(s) = s^p$. The set of finite functions with coefficients from a field $\mathbb{F}$ constitutes a ring, which will be denoted by $\mathbb{F}[s^{-1}]$.

Let $l(s)$ be a nonzero polynomial. Then the function

$$w^{-1}(s) = \frac{m(s)}{l(s)} \qquad (2.1.18)$$

is called the inverse of the rational function (2.1.1). The inverse function $w^{-1}(s)$ of the function (2.1.7) has the form

$$w^{-1}(s) = \hat w_{-r}s^{r} + \dots + \hat w_0 + \hat w_1 s^{-1} + \dots + \hat w_r s^{-r} + \hat w_{r+1}s^{-(r+1)} + \dots \qquad (2.1.19)$$

To compute its unknown coefficients $\hat w_{-r}, \dots, \hat w_0, \hat w_1, \dots, \hat w_r, \hat w_{r+1}, \dots$, we use the condition

$$w(s)\,w^{-1}(s) = 1 \qquad (2.1.20)$$

and the rule (2.1.17) of rational function multiplication. Substitution of (2.1.7) and (2.1.19) into (2.1.20) yields

$$\left(\sum_{i=r}^{\infty} w_i s^{-i}\right)\left(\sum_{j=-r}^{\infty} \hat w_j s^{-j}\right) = \sum_{i=r}^{\infty}\sum_{j=-r}^{\infty} w_i\hat w_j\, s^{-(i+j)} = 1. \qquad (2.1.21)$$

Equating the coefficients at the same powers of the variable $s$ in (2.1.21) implies

$$\sum_{q=r}^{r+k} w_q \hat w_{k-q} = \begin{cases} 1 & \text{for } k = 0, \\ 0 & \text{for } k = 1, 2, \dots \end{cases} \qquad (2.1.22)$$

Solving the above system of equations successively, we compute the desired coefficients of the function (2.1.19).

Example 2.1.1. For the rational function

$$w(s) = 2s^{-1} - 3s^{-2} + 9s^{-4} - \dots,$$

the inverse function of the form

$$w^{-1}(s) = \hat w_{-1}s + \hat w_0 + \hat w_1 s^{-1} + \hat w_2 s^{-2} + \dots$$

is to be computed. Equation (2.1.21) in this case takes the form

$$(2s^{-1} - 3s^{-2} + 9s^{-4} - \dots)(\hat w_{-1}s + \hat w_0 + \hat w_1 s^{-1} + \hat w_2 s^{-2} + \dots) = 1.$$

Equating the coefficients of the same powers of the variable $s$ implies

$$2\hat w_{-1} = 1, \qquad 2\hat w_0 - 3\hat w_{-1} = 0, \qquad 2\hat w_1 - 3\hat w_0 = 0, \qquad 2\hat w_2 - 3\hat w_1 + 9\hat w_{-1} = 0, \ \dots$$

and

$$\hat w_{-1} = \frac{1}{2}, \qquad \hat w_0 = \frac{3}{2}\hat w_{-1} = \frac{3}{4}, \qquad \hat w_1 = \frac{3}{2}\hat w_0 = \frac{9}{8}, \qquad \hat w_2 = \frac{3}{2}\hat w_1 - \frac{9}{2}\hat w_{-1} = -\frac{9}{16}.$$

Thus the desired function has the form

$$w^{-1}(s) = \frac{1}{2}s + \frac{3}{4} + \frac{9}{8}s^{-1} - \frac{9}{16}s^{-2} + \dots$$
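Solving the triangular system (2.1.22) term by term amounts to a standard series inversion; here is a minimal Python sketch (the helper name `invert_series` is ours):

```python
from fractions import Fraction

def invert_series(w, count):
    """Given w(s) = w_r s^-r + w_{r+1} s^-(r+1) + ... as the list
    [w_r, w_{r+1}, ...], return the first `count` coefficients
    [w^_{-r}, w^_{-r+1}, ...] of w^-1(s), solved from (2.1.22)."""
    w = [Fraction(c) for c in w]
    inv = [Fraction(1) / w[0]]  # from w_r * w^_{-r} = 1
    for k in range(1, count):
        # coefficient of s^-k in w(s) * w^-1(s) must vanish
        acc = sum(w[i] * inv[k - i]
                  for i in range(1, k + 1) if i < len(w))
        inv.append(-acc / w[0])
    return inv

# w(s) = 2s^-1 - 3s^-2 + 0s^-3 + 9s^-4 - ...  (Example 2.1.1)
print(invert_series([2, -3, 0, 9], 4))  # [1/2, 3/4, 9/8, -9/16]
```

The output reproduces the coefficients 1/2, 3/4, 9/8, -9/16 found above.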

An arbitrary rational function in the form of the series (2.1.7) is given. With the coefficients $w_k$, $k = r, r+1, \dots$, of this series known, compute the rational function in the form of the quotient of the two polynomials (2.1.1) that corresponds to the given function. The solution of this problem follows from the following lemma.

Lemma 2.1.1. Let

$$w(s) = \frac{l(s)}{m(s)} = \frac{l_{n-1}s^{n-1} + \dots + l_1 s + l_0}{s^n + m_{n-1}s^{n-1} + \dots + m_1 s + m_0} = w_1 s^{-1} + w_2 s^{-2} + w_3 s^{-3} + \dots$$

and

$$\mathbf{T}_k = \begin{bmatrix} w_1 & w_2 & \cdots & w_k \\ w_2 & w_3 & \cdots & w_{k+1} \\ \vdots & \vdots & \ddots & \vdots \\ w_k & w_{k+1} & \cdots & w_{2k-1} \end{bmatrix}, \qquad k = 1, 2, \dots \qquad (2.1.23)$$


If the polynomials $l(s)$ and $m(s)$ are relatively prime, then

$$\det \mathbf{T}_k \begin{cases} \neq 0 & \text{for } k = 1, 2, \dots, n, \\ = 0 & \text{for } k = n + 1. \end{cases} \qquad (2.1.24)$$
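The determinant test (2.1.24) can be run numerically with exact arithmetic; here is a minimal Python sketch (the helper name `hankel_det` is ours):

```python
from fractions import Fraction

def hankel_det(w, k):
    """det T_k for the Hankel matrix (2.1.23) built from w = [w_1, w_2, ...]."""
    T = [[Fraction(w[i + j]) for j in range(k)] for i in range(k)]
    det = Fraction(1)
    for c in range(k):  # exact Gaussian elimination
        piv = next((r for r in range(c, k) if T[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            T[c], T[piv] = T[piv], T[c]
            det = -det
        det *= T[c][c]
        for r in range(c + 1, k):
            f = T[r][c] / T[c][c]
            for j in range(c, k):
                T[r][j] -= f * T[c][j]
    return det

# Series of Example 2.1.2: w(s) = 2s^-1 - 3s^-2 + 9s^-4 - 18s^-5 + ...
w = [2, -3, 0, 9, -18]
print([hankel_det(w, k) for k in (1, 2, 3)])  # [2, -9, 0]
```

The first vanishing determinant is det T_3, so the denominator degree is n = 2, as found in Example 2.1.2 below.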

Proof. To simplify computations, we consider in detail the case $n = 2$; for $n > 2$ the considerations proceed similarly. Division of $l_1 s + l_0$ by $s^2 + m_1 s + m_0$ yields

$$\frac{l_1 s + l_0}{s^2 + m_1 s + m_0} = w_1 s^{-1} + w_2 s^{-2} + w_3 s^{-3} + \dots, \qquad (2.1.25)$$

where

$$w_1 = l_1 \neq 0, \quad w_2 = l_0 - m_1 w_1, \quad w_3 = -m_1 w_2 - m_0 w_1, \quad w_4 = -m_1 w_3 - m_0 w_2, \quad w_5 = -m_1 w_4 - m_0 w_3, \ \dots \qquad (2.1.26)$$

Then

$$\det \mathbf{T}_2 = \begin{vmatrix} w_1 & w_2 \\ w_2 & w_3 \end{vmatrix} = w_1(-m_1 w_2 - m_0 w_1) - w_2^2 = -(m_0 l_1^2 + l_0^2 - m_1 l_0 l_1) \neq 0,$$

since by assumption the zero $s_0 = -l_0/l_1$ of the rational function is not equal to its pole,

$$\left(-\frac{l_0}{l_1}\right)^2 + m_1\left(-\frac{l_0}{l_1}\right) + m_0 = \frac{l_0^2 - m_1 l_0 l_1 + m_0 l_1^2}{l_1^2} \neq 0.$$

We will show now that det T3 = 0. Indeed, using (2.1.26) and carrying out appropriate elementary row operations, we obtain

$$\det \mathbf{T}_3 = \begin{vmatrix} w_1 & w_2 & w_3 \\ w_2 & w_3 & w_4 \\ w_3 & w_4 & w_5 \end{vmatrix} = \begin{vmatrix} w_1 & w_2 & w_3 \\ w_2 & w_3 & w_4 \\ m_0 w_1 + m_1 w_2 + w_3 & m_0 w_2 + m_1 w_3 + w_4 & m_0 w_3 + m_1 w_4 + w_5 \end{vmatrix} = \begin{vmatrix} w_1 & w_2 & w_3 \\ w_2 & w_3 & w_4 \\ 0 & 0 & 0 \end{vmatrix} = 0,$$

where the third row was obtained by adding to it the first row multiplied by $m_0$ and the second row multiplied by $m_1$; the row vanishes since, by (2.1.26), $m_0 w_i + m_1 w_{i+1} + w_{i+2} = 0$ for $i = 1, 2, 3$. ∎

At the beginning, we consider the case of a strictly proper function of the form

$$w(s) = w_1 s^{-1} + w_2 s^{-2} + \dots \qquad (2.1.27)$$


Knowing the coefficients w1,w2,… of the function (2.1.27), we check the rank of the symmetric matrix (2.1.23) successively for k = 1, 2, ... According to Lemma 2.1.1, if det Tk z 0 for k = 1, ..., n and det Tn+1 = 0, then the desired degree of the denominator is n, i.e., s n  mn1s n1  ...  m1s  m0

m( s )

(2.1.28)

and the numerator l(s) is a polynomial of a degree of at most n–1 ln1 s n1  ...  l1s  l0 ,

l ( s)

(2.1.29)

since the function (2.1.27) is strictly proper. Division of (2.1.29) by (2.1.28) yields w1s 1  w2 s 2  ... ,

w( s )

(2.1.30)

where w1

ln1 , w2

ln2  ln1mn1 , w3

ln3  mn1 (ln2  ln1mn1 ), ...

(2.1.31)

Solving the following system of 2n equations w1 , w2

ln2  ln1mn1  w2 ,

w1

ln1

w3

ln3  ln1mn2  mn1 (ln2  ln1mn1 )  w3 ,..., w2 n

w2 n

(2.1.32)

with respect to lk and mk for k = 0, 1, 2, ..., n–1, we compute the desired polynomials (2.1.28) and (2.1.29). Example 2.1.2. The following strictly proper rational function is given w( s )

2s 1  3s 2  9s 4  18s 5  ... .

Compute the corresponding function of the form (2.1.1). In this case, the determinants of the matrices (2.1.23) are successively as follows det T1

det T3

w1

w2

2

3

w2

w3

3

0

w3

2

3

0

w3

w4

3

0

9

w4

w5

0

9

18

w1

2, det T2

w1

w2

w2 w3

0.

9,

Rational Functions and Matrices

115

Hence n = 2 and the desired polynomials are of the form m( s )

s 2  m1s  m0 , l ( s )

l1s  l0 .

Using (2.1.32), we obtain the equations l1

w1

2, l0  2m1

3,  2m0  (l0  2m1 )m1

0,  (l0  2m1 )m0

9

whose solutions are l0 = 1, l1 = 2, m0 = 3, m1 = 2. Thus the desired function is w( s )

l ( s) m( s )

2s  1 . s  2s  3 2

This result is consistent with (2.1.8). Now take into account an improper rational function of the form w r s r  ...  w1s  w0  w1s 1  w2 s 2  ... .

(2.1.33)

We decompose this function into the polynomial q( s)

w r s r  ...  w1s  w0

(2.1.34)

and the strictly proper function w( s )

w1s 1  w2 s 2  ... .

(2.1.35)

Using the method presented above we compute the function in the form of the quotient of the two polynomials, which corresponds to the strictly proper function (2.1.35), and then we add to it the polynomial (2.1.34), i.e., w( s )

where

l ( s )

l ( s)  w r s r  ...  w1s  w0 m( s )

r m ( s )( w s  ...  w s  w )  l ( s ) r 1 0

l ( s ) , m( s )

.

Example 2.1.3. With the given improper rational function w( s )

2s  4  9s 1  18s 2  36 s 3  72s 4  ...

(2.1.36)

116

Polynomial and Rational Matrices

compute the corresponding function in the form of the quotient of two polynomials. In this case, q(s) = 2s – 4 and w( s ) =9s-1 – 18s-2 + 36s-3 - 72s-4 To compute a function of the form (2.1.1) corresponding to the strictly proper function w ( s ) , we compute the determinants of the matrices (2.1.23) successively for k = 1, 2, ... We obtain det T1

9, det T2

9

18

18

36

0.

Hence n = 1 and the desired polynomials are m(s) = s + m0, l(s) = l0. Using (2.1.32), we obtain l0

w1

9,  m0l0

w2

18, m0

2.

The desired function, according to (2.1.36), is w( s )

l (s)  q( s) m( s )

9  2s  4 s2

2s 2  1 . s2

The result is consistent with (2.1.14).

2.2 Decomposition of a Rational Function into a Sum of Rational Functions An arbitrary rational function (2.1.1) can be uniquely decomposed into a sum of the strictly proper rational function r(s)/m(s) and the polynomial function q(s), i.e.,

w( s)

r ( s)  q( s) , m( s )

(2.2.1)

where deg r(s) < deg m(s). To decompose the rational function (2.1.1) into a sum (2.2.1), we divide the polynomial l(s) by m(s) and obtain l s

q s m s  r s ,

(2.2.2)

where q(s) and r(s) are the integer part and remainder, respectively, on division of l(s) by m(s). Substitution of (2.2.2) into (2.1.1) yields (2.2.1). If deg l(s) < deg m(s), then q(s) is a zero polynomial and l(s) = r(s).

Rational Functions and Matrices

117

For example the rational function (2.1.14) can be decomposed into the strictly proper rational function 9/(s + 2) and the polynomial 2s – 4, since 2s 2  1 s2

9  2s  4 . s2

Consider a strictly proper rational function of the form w( s )

l (s) , m1 ( s )m2 ( s )...m p ( s )

(2.2.3)

where the polynomials m1(s),m2(s),…,mp(s) are pair-wise relatively prime. We will show that the rational function (2.2.3) can be uniquely decomposed into a sum of strictly proper rational functions lk ( s ) , k mk ( s )

1, ..., p ,

i.e., w( s )

l ( s) l1 ( s ) l ( s)  2  ...  p , m1 ( s ) m2 ( s ) m p (s)

(2.2.4)

where deg lk(s) < deg mk(s) for k = 1, ..., p. To simplify the considerations assume that p = 2. Then from (2.2.3) and (2.2.4), we obtain l ( s) m1 ( s )m2 ( s )

l1 ( s ) l ( s)  2 m1 ( s ) m2 ( s )

l1 ( s )m2 ( s )  l2 ( s )m1 ( s ) . m1 ( s )m2 ( s )

(2.2.5)

Let l (s) m2 ( s )

ln1s n1  ln2 s n2  ...  l1s  l0 , m1 ( s )

s n1  an1 1s n1 1  ...  a1s  a0 ,

s n2  bn2 1s n2 1  ...  b1s  b0 ,

l1 ( s )

cn1 1s n1 1  cn1 2 s n1 2  ...  c1s  c0 ,

l2 ( s )

d n2 1s n2 1  d n2 2 s n2 2  ...  d1s  d 0 , n

(2.2.6) n1  n2 .

The equality (2.2.5) yields l (s)

l1 ( s )m2 ( s )  l2 ( s )m1 ( s )

(2.2.7)

118

Polynomial and Rational Matrices

and substitution of (2.2.6) into (2.2.7) produces ln1s n1  ln2 s n2  ...  l1s  l0

(cn1 1 s n1 1  cn1 2 s n1 2  ...  c1 s  c0 )

u( s n2  bn2 1s n2 1  ...  b1s  b0 )  (d n2 1s n2 1  d n2 2 s n2 2  ...  d1s  d 0 ) (2.2.8) u( s n1  an1 1s n1 1  ...  a1s  a0 ).

Comparing the coefficients at the same powers of the variable s in (2.2.8), we obtain a system of n linear equations of the form cn1 1  d n2 1

ln1 ,

cn1 2  cn1 1bn2 1  d n2 2  d n2 1an1 1

ln  2 ,

cn1 3  cn1 1bn2 1  cn1 1bn2 2  d n2 3  d n2 2 an1 1  d n2 1an1 2 , c1b0  c0b1  d1a0  d 0 a1 c0b0  d 0 a0

(2.2.9)

l1 ,

l0 .

It is easy to check that if m1(s), m2(s) are pair-wise relatively prime, then the matrix of the coefficients of the system (2.2.9) is nonsingular. Hence the system has exactly one solution with respect to the desired coefficients ck, k = 0,2,…,n11 of the polynomial l1(s) and the coefficients dk, k = 0,1,…,n21 of the polynomial l2(s) for the given coefficients ai, i = 0,1,…,n11, bj, j = 0,1,…,n21 and lk, k = 0,1,…,n1 of the polynomials m1(s), m2(s), l(s). Example 2.2.1. Decompose the rational function w( s )

l3 s 3  l2 s 2  l1s  l0 ( s  a1s  a2 )( s 2  b1s  b2 ) 2

(2.2.10)

(l0 , l1 , l2 , l3 , a1 , a2 , b1 , b2 given)

into a sum of strictly proper rational functions. In this case, l3 s 3  l2 s 2  l1s  l0 ( s 2  a1s  a2 )( s 2  b1s  b2 )

x s  x4 x1s  x2  2 3 . s  a1s  a2 s  b1s  b2 2

(2.2.11)

From (2.2.11) we have l3 s 3  l2 s 2  l1s  l0

( x1 s  x2 ) s 2  b1s  b2  ( x3 s  x4 ) s 2  a1 s  a2 .

Comparing the coefficients of the same powers of the variable s, we obtain

(2.2.12)

Rational Functions and Matrices

l3

x1  x3 , l2

x1b1  x2  x3 a1  x4 , l1

l0

x2b2  x4 a2 .

x1b2  x2b1  x3 a2  x4 a1 ,

119

(2.2.13)

Equation (2.2.13) can be written in the form ª1 «b « 1 «b2 « ¬0

0 1 b1 b2

1 a1 a2 0

0 º ª x1 º 1 »» «« x2 »» a1 » « x3 » »« » a2 ¼ ¬ x4 ¼

ªl3 º «l » « 2» . « l1 » « » ¬l0 ¼

(2.2.14)

The matrix of coefficients

A

ª1 «b « 1 «b2 « ¬0

0 1 b1 b2

1 a1 a2 0

0º 1 »» a1 » » a2 ¼

is nonsingular if a1 z b1, a2 z b2 (the polynomials s2+a1s+a2, s2+b1s+b2 are relatively prime). In this case, det A

(a2  b2 ) 2  b2 (a1  b1 ) 2  b1 (a1  b1 )(a2  b2 ) .

Solving (2.2.14), we obtain 1

ª x1 º ª 1 0 1 0 º ª l3 º « x » « b 1 a 1 » «l » 1 1 « 2» « 1 » « 2» « x3 » «b2 b1 a2 a1 » « l1 » a22  b1a1a2  b2 a12  2b2 a2  b12 a2  b1b2 a1  b22 « » « » « » ¬ x4 ¼ ¬ 0 b2 0 a2 ¼ ¬l0 ¼ ª a22  b1a1a2  b2 a12  b2 a2 º ª l3 º b1a2  b2 a1 a2  b2 a1  b1 « »« » 2    ) ( ) ( ) ( ) ( a b a b        a b a b a a a b a a b a 2 1 1 2 2 1 2 2 1 2 2 2 2 1 1 1 » « l2 » . u« « (b12 a2  b1b2 a1  b2 a2  b22 ) (b1a2  b2 a1 ) » « l1 » (a1  b1 ) a2  b2 « »« » b2 (a2  b2 ) b2 (b1a2  b2 a1 ) b2 (a1  b1 ) a2  b1a1  b12  b2 »¼ ¬l0 ¼ «¬

Thus the desired decomposition is of the form w( s )

x3 s  x4 x1s  x2  . s 2  a1s  a2 s 2  b1s  b2

Example 2.2.2. Decompose the strictly proper rational function

120

Polynomial and Rational Matrices

w( s )

s 2  3s  2 ( s 2  3s  2)( s  3)

(2.2.15)

into a sum of two strictly proper rational functions. In this case, s 2  3s  2, m1 ( s )

l (s)

s 2  3s  2, m2 ( s )

s  3.

According to (2.2.5) and (2.2.6) we seek c1s  c0 , l2 ( s )

l1 ( s)

d.

(2.2.16)

Equation (2.2.7) in this case takes the form s 2  3s  2

(c1s  c0 )( s  3)  d ( s 2  3s  2) .

(2.2.17)

Comparing the coefficients at the same powers of the variable s, we obtain the equations c1  d

1, c0  3c1  3d

3, 3c0  2d

2,

whose solution is c0 = 6, c1 = 9, d = 10. Thus the desired decomposition of the function (2.2.15) has the form s 2  3s  2 ( s  3s  2)( s  3) 2



9s  6 10  . s 2  3s  2 s  3

Consider a strictly proper rational function of the form

$$ w(s) = \frac{l(s)}{m(s)}, \quad m(s) = (s-s_1)^{n_1}(s-s_2)^{n_2}\cdots(s-s_p)^{n_p}, \qquad (2.2.18) $$

where

$$ \sum_{i=1}^{p} n_i = n = \deg m(s) > \deg l(s), \qquad (2.2.19) $$

and $s_1, s_2, \dots, s_p$ are the distinct poles of the function (2.2.18) with multiplicities $n_1, n_2, \dots, n_p$, respectively. The function (2.2.18) is a special case of the function (2.2.3) for

$$ m_k(s) = (s-s_k)^{n_k}, \quad k = 1, \dots, p . $$

Rational Functions and Matrices

121

The strictly proper rational function

$$ \frac{l_k(s)}{(s-s_k)^{n_k}}, \quad k = 1, \dots, p, $$

can be further decomposed and represented uniquely in the form

$$ \frac{l_k(s)}{(s-s_k)^{n_k}} = \sum_{i=1}^{n_k} \frac{l_{ki}}{(s-s_k)^{n_k-i+1}} . \qquad (2.2.20) $$

Using the decomposition (2.2.4) of the functions (2.2.18) and (2.2.20), we obtain

$$ w(s) = \frac{l(s)}{m(s)} = \sum_{k=1}^{p} \frac{l_k(s)}{(s-s_k)^{n_k}} = \sum_{k=1}^{p} \sum_{i=1}^{n_k} \frac{l_{ki}}{(s-s_k)^{n_k-i+1}}, \qquad (2.2.21) $$

where the coefficients $l_{ki}$ are given by the formula

$$ l_{ki} = \frac{1}{(i-1)!} \left. \frac{\partial^{\,i-1}}{\partial s^{\,i-1}} \left[ \frac{l(s)(s-s_k)^{n_k}}{m(s)} \right] \right|_{s=s_k} . \qquad (2.2.22) $$

This formula can be derived in the following way. Multiplication of (2.2.21) by $(s-s_k)^{n_k}$ yields

$$
\frac{l(s)(s-s_k)^{n_k}}{m(s)} = l_{11}\frac{(s-s_k)^{n_k}}{(s-s_1)^{n_1}} + \ldots + l_{1n_1}\frac{(s-s_k)^{n_k}}{s-s_1} + \ldots + l_{k1} + l_{k2}(s-s_k) + \ldots + l_{kn_k}(s-s_k)^{n_k-1} + \ldots + l_{p1}\frac{(s-s_k)^{n_k}}{(s-s_p)^{n_p}} + \ldots + l_{pn_p}\frac{(s-s_k)^{n_k}}{s-s_p} . \qquad (2.2.23)
$$

From (2.2.23) for $s = s_k$ we successively obtain

$$ l_{k1} = \left.\frac{l(s)(s-s_k)^{n_k}}{m(s)}\right|_{s=s_k}, \quad l_{k2} = \left.\frac{\partial}{\partial s}\left[\frac{l(s)(s-s_k)^{n_k}}{m(s)}\right]\right|_{s=s_k}, \quad l_{k3} = \left.\frac{1}{2}\frac{\partial^2}{\partial s^2}\left[\frac{l(s)(s-s_k)^{n_k}}{m(s)}\right]\right|_{s=s_k}, \ \ldots, $$

that is, the formula (2.2.22).

Example 2.2.3. Decompose the strictly proper rational function


$$ w(s) = \frac{s^2+3s+2}{(s-1)^2(s-2)} \qquad (2.2.24) $$

into the sum (2.2.21). In this case, $l(s) = s^2+3s+2$, $m(s) = (s-1)^2(s-2)$ and

$$ \frac{s^2+3s+2}{(s-1)^2(s-2)} = \frac{l_{11}}{(s-1)^2} + \frac{l_{12}}{s-1} + \frac{l_{21}}{s-2} . $$

Using (2.2.22), we obtain

$$ l_{11} = \left.\frac{l(s)(s-1)^2}{(s-1)^2(s-2)}\right|_{s=1} = \left.\frac{s^2+3s+2}{s-2}\right|_{s=1} = -6, $$

$$ l_{12} = \left.\frac{\partial}{\partial s}\left[\frac{l(s)}{s-2}\right]\right|_{s=1} = \left.\frac{(2s+3)(s-2)-(s^2+3s+2)}{(s-2)^2}\right|_{s=1} = -11, $$

$$ l_{21} = \left.\frac{l(s)(s-2)}{(s-1)^2(s-2)}\right|_{s=2} = \left.\frac{l(s)}{(s-1)^2}\right|_{s=2} = 12 . $$

Thus the desired decomposition of the function (2.2.24) has the form

$$ \frac{s^2+3s+2}{(s-1)^2(s-2)} = -\frac{6}{(s-1)^2} - \frac{11}{s-1} + \frac{12}{s-2} . $$
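The coefficients of Example 2.2.3 can be recomputed mechanically from (2.2.22) with exact arithmetic; a sketch (the helper names `peval` and `pderiv` are ad hoc, not from the text):

```python
from fractions import Fraction

def peval(p, s):
    """Evaluate polynomial [c0, c1, ...] (ascending powers) at s."""
    return sum(Fraction(c) * Fraction(s)**k for k, c in enumerate(p))

def pderiv(p):
    return [k * c for k, c in enumerate(p)][1:]

l = [2, 3, 1]          # l(s) = s^2 + 3s + 2

# Double pole s = 1: f(s) = l(s)/(s-2); l11 = f(1), l12 = f'(1) (quotient rule)
den = [-2, 1]          # s - 2
l11 = peval(l, 1) / peval(den, 1)
l12 = (peval(pderiv(l), 1) * peval(den, 1)
       - peval(l, 1) * peval(pderiv(den), 1)) / peval(den, 1)**2

# Simple pole s = 2: l21 = l(2)/(2-1)^2
l21 = peval(l, 2) / peval([1, -2, 1], 2)

assert (l11, l12, l21) == (-6, -11, 12)
```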

Now consider an improper rational function of the form

$$ w(s) = \frac{l(s)}{m_1(s)m_2(s)}, \quad \deg l(s) > \deg m_1(s) + \deg m_2(s), \qquad (2.2.25) $$

where $m_1(s)$, $m_2(s)$ are relatively prime. Separating from the function (2.2.25) the polynomial part $q(s)$ (according to the decomposition (2.2.1)), we obtain

$$ w(s) = \frac{\bar l(s)}{m_1(s)m_2(s)} + q(s), \qquad (2.2.26) $$

where $\deg m_1(s) + \deg m_2(s) > \deg \bar l(s)$. Using (2.2.5), one can write the function (2.2.26) in the form

$$ w(s) = \frac{l_1(s)}{m_1(s)} + \frac{l_2(s)}{m_2(s)} + q(s), \qquad (2.2.27) $$

where $\deg m_1(s) > \deg l_1(s)$, $\deg m_2(s) > \deg l_2(s)$. Let $p(s)$ be an arbitrary polynomial. The function (2.2.27) can then be written in the form

$$ w(s) = \left(\frac{l_1(s)}{m_1(s)} + p(s)\right) + \left(\frac{l_2(s)}{m_2(s)} + q(s) - p(s)\right) = \frac{\hat l_1(s)}{m_1(s)} + \frac{\hat l_2(s)}{m_2(s)}, \qquad (2.2.28) $$

where

$$ \hat l_1(s) = l_1(s) + m_1(s)p(s), \quad \hat l_2(s) = l_2(s) + m_2(s)\bigl(q(s) - p(s)\bigr) . $$

The decomposition (2.2.25) is thus not unique, since the polynomial $p(s)$ is arbitrary. If we separate from the functions $\hat l_1(s)/m_1(s)$, $\hat l_2(s)/m_2(s)$ the polynomial parts $q_1(s)$ and $q_2(s)$, respectively, we obtain

$$ w(s) = \frac{\bar l_1(s)}{m_1(s)} + q_1(s) + \frac{\bar l_2(s)}{m_2(s)} + q_2(s), \qquad (2.2.29) $$

where $\deg m_1(s) > \deg \bar l_1(s)$, $\deg m_2(s) > \deg \bar l_2(s)$. Comparison of (2.2.27) and (2.2.29) shows that uniqueness of the decomposition holds for $\bar l_1(s) = l_1(s)$, $\bar l_2(s) = l_2(s)$ and $q(s) = q_1(s) + q_2(s)$. Taking $p(s) = 0$ in (2.2.28), one can represent the function (2.2.25) as a sum

$$ w(s) = w_1(s) + w_2(s), \quad w_1(s) = \frac{l_1(s)}{m_1(s)}, \quad w_2(s) = \frac{l_2(s)}{m_2(s)} + q(s), \qquad (2.2.30) $$

or

$$ w(s) = w_1(s) + w_2(s), \quad w_1(s) = \frac{l_1(s)}{m_1(s)} + q(s), \quad w_2(s) = \frac{l_2(s)}{m_2(s)} . \qquad (2.2.31) $$

The decomposition (2.2.30) is called the minimal decomposition of the function (2.2.25) with respect to $m_1(s)$, and the decomposition (2.2.31) is called the minimal decomposition of the function (2.2.25) with respect to $m_2(s)$. Using the decomposition (2.2.4), one can generalise these considerations to the case of a rational function of the form (2.2.3).
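The separation of the polynomial part in (2.2.26) is ordinary polynomial long division. A sketch, using the assumed example $w(s) = s^4/((s+1)(s+2))$ rather than a function from the text:

```python
from fractions import Fraction

def polymul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polydivmod(num, den):
    """Long division (ascending coefficient lists): num = q*den + r."""
    den = [Fraction(c) for c in den]
    q = [Fraction(0)] * max(len(num) - len(den) + 1, 1)
    r = [Fraction(c) for c in num]
    while len(r) >= len(den):
        coef = r[-1] / den[-1]
        shift = len(r) - len(den)
        q[shift] = coef
        r = [c - coef * den[i - shift] if 0 <= i - shift < len(den) else c
             for i, c in enumerate(r)]
        r.pop()               # leading term cancels exactly
    return q, r

# Improper w(s) = s^4 / ((s+1)(s+2)): separate the polynomial part q(s)
m = polymul([1, 1], [2, 1])               # m1(s)*m2(s) = s^2 + 3s + 2
q, r = polydivmod([0, 0, 0, 0, 1], m)     # l(s) = s^4

assert q == [7, -3, 1]                    # q(s) = s^2 - 3s + 7
assert r == [-14, -15]                    # remainder -15s - 14, deg r < deg m
```

The remaining strictly proper part $\bar l(s)/(m_1(s)m_2(s))$ is then split exactly as in (2.2.27).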


2.3 Basic Definitions and Operations on Rational Matrices

A matrix $\mathbf{W}(s)$ with $m$ rows and $n$ columns whose entries are rational functions $w_{ij}(s)$ of a variable $s$ with coefficients from a field $\mathbb{F}$,

$$ \mathbf{W}(s) = \begin{bmatrix} w_{11}(s) & w_{12}(s) & \cdots & w_{1n}(s) \\ w_{21}(s) & w_{22}(s) & \cdots & w_{2n}(s) \\ \vdots & \vdots & \ddots & \vdots \\ w_{m1}(s) & w_{m2}(s) & \cdots & w_{mn}(s) \end{bmatrix}, \qquad (2.3.1) $$

is called a rational matrix. The set of rational matrices of dimensions $m \times n$ in a variable $s$ with coefficients from the field $\mathbb{F}$ will be denoted by $\mathbb{F}^{m \times n}(s)$. The field $\mathbb{F}$ can be the field of real numbers $\mathbb{R}$, of complex numbers $\mathbb{C}$, of rational numbers $\mathbb{Q}$, or a field of rational functions of another variable $z$, etc. With all the entries $w_{ij}(s)$ of the matrix (2.3.1) brought to the common denominator $m(s)$ with the coefficient of the highest power of $s$ equal to 1, the matrix can be expressed in the form

$$ \mathbf{W}(s) = \frac{\mathbf{L}(s)}{m(s)}, \qquad (2.3.2) $$

where $\mathbf{L}(s) \in \mathbb{F}^{m \times n}[s]$ is a polynomial matrix with coefficients from the field $\mathbb{F}$, and $m(s)$ is a polynomial. Let

$$ m(s) = (s-s_1)^{n_1}(s-s_2)^{n_2}\cdots(s-s_p)^{n_p}, \quad \sum_{i=1}^{p} n_i = n . \qquad (2.3.3) $$

Definition 2.3.1. The matrix (2.3.2) is called irreducible if and only if

$$ \mathbf{L}(s_k) \ne \mathbf{0}_{mn}, \quad k = 1, \dots, p, \qquad (2.3.4) $$

where $\mathbf{0}_{mn}$ is the zero matrix of dimensions $m \times n$.

If $\mathbf{L}(s_k) = \mathbf{0}_{mn}$, then all entries of the matrix $\mathbf{L}(s)$ are divisible by $(s-s_k)$ and the matrix (2.3.2) is reducible by $(s-s_k)$. An irreducible matrix of the form (2.3.2) is called a matrix in standard form. With the polynomial matrix $\mathbf{L}(s)$ expressed as the matrix polynomial

$$ \mathbf{L}(s) = \mathbf{L}_q s^q + \mathbf{L}_{q-1} s^{q-1} + \ldots + \mathbf{L}_1 s + \mathbf{L}_0, \qquad (2.3.5) $$

we can write the matrix (2.3.2) in the form

$$ \mathbf{W}(s) = \frac{\mathbf{L}_q s^q + \mathbf{L}_{q-1} s^{q-1} + \ldots + \mathbf{L}_1 s + \mathbf{L}_0}{m(s)} . \qquad (2.3.6) $$

For example, for the following rational matrix

$$ \mathbf{W}(s) = \begin{bmatrix} \dfrac{s}{s+1} & \dfrac{1}{s+2} & s \\[2mm] \dfrac{2}{s+2} & \dfrac{s+2}{s+1} & 2s \end{bmatrix} \qquad (2.3.7) $$

the least common denominator of its entries is the polynomial $m(s) = (s+1)(s+2)$, whose roots are $s_1 = -1$, $s_2 = -2$. The rational matrix (2.3.7) in the form (2.3.2) is equal to

$$ \mathbf{W}(s) = \frac{1}{(s+1)(s+2)} \begin{bmatrix} s(s+2) & s+1 & s(s+1)(s+2) \\ 2(s+1) & (s+2)^2 & 2s(s+1)(s+2) \end{bmatrix} = \frac{\mathbf{L}(s)}{m(s)} . \qquad (2.3.8) $$

This matrix is irreducible, since

$$ \mathbf{L}(s_1) = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad \mathbf{L}(s_2) = \begin{bmatrix} 0 & -1 & 0 \\ -2 & 0 & 0 \end{bmatrix} . $$

The form (2.3.8) is thus the standard form of the matrix (2.3.7). The matrix $\mathbf{L}(s)$ expressed as a matrix polynomial is equal to

$$ \mathbf{L}(s) = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 2 \end{bmatrix} s^3 + \begin{bmatrix} 1 & 0 & 3 \\ 0 & 1 & 6 \end{bmatrix} s^2 + \begin{bmatrix} 2 & 1 & 2 \\ 2 & 4 & 4 \end{bmatrix} s + \begin{bmatrix} 0 & 1 & 0 \\ 2 & 4 & 0 \end{bmatrix} . $$

In view of this, the matrix (2.3.7) in the form (2.3.6) is equal to

$$ \mathbf{W}(s) = \frac{1}{(s+1)(s+2)} \left\{ \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 2 \end{bmatrix} s^3 + \begin{bmatrix} 1 & 0 & 3 \\ 0 & 1 & 6 \end{bmatrix} s^2 + \begin{bmatrix} 2 & 1 & 2 \\ 2 & 4 & 4 \end{bmatrix} s + \begin{bmatrix} 0 & 1 & 0 \\ 2 & 4 & 0 \end{bmatrix} \right\} . \qquad (2.3.9) $$

Definition 2.3.2. The rational matrix (2.3.2) is called proper (or causal) if and only if $\deg m(s) \ge \deg \mathbf{L}(s)$, and strictly proper (or strictly causal) if and only if $\deg m(s) > \deg \mathbf{L}(s)$.

The matrix (2.3.7) is not proper since, as follows from (2.3.9), $\deg \mathbf{L}(s) = 3$ and $\deg m(s) = 2$.
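The irreducibility test of Definition 2.3.1 and the properness test of Definition 2.3.2 are both mechanical. A sketch for the matrix (2.3.8), with entries encoded as ascending coefficient lists (an implementation choice, not notation from the text):

```python
# Entries of L(s) from (2.3.8): s(s+2), s+1, s(s+1)(s+2); 2(s+1), (s+2)^2, 2s(s+1)(s+2)
L = [
    [[0, 2, 1], [1, 1],    [0, 2, 3, 1]],
    [[2, 2],    [4, 4, 1], [0, 4, 6, 2]],
]
m_roots = [-1, -2]                  # roots of m(s) = (s+1)(s+2)

def peval(p, s):
    return sum(c * s**k for k, c in enumerate(p))

# Irreducibility (Definition 2.3.1): L(s_k) must be a nonzero matrix at every root
for s0 in m_roots:
    Ls0 = [[peval(p, s0) for p in row] for row in L]
    assert any(v != 0 for row in Ls0 for v in row)

# Properness (Definition 2.3.2) fails here: deg m(s) = 2 < deg L(s) = 3
deg_L = max(len(p) - 1 for row in L for p in row)
deg_m = 2
assert deg_L == 3 and deg_m < deg_L   # (2.3.7) is not proper
```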


Dividing every entry of the matrix $\mathbf{L}(s)$ by $m(s)$, one can express the rational matrix (2.3.2) in the form

$$ \mathbf{W}(s) = \mathbf{W}_r s^{-r} + \mathbf{W}_{r+1} s^{-(r+1)} + \ldots, \qquad (2.3.10) $$

where $r = \deg m(s) - \deg \mathbf{L}(s)$, and $\mathbf{W}_r, \mathbf{W}_{r+1}, \dots$ are coefficient matrices that depend on the coefficients of the polynomial $m(s)$ and of the polynomial matrix $\mathbf{L}(s)$. For example, taking into account that

$$ \frac{s}{s+1} = 1 - s^{-1} + s^{-2} - s^{-3} + \ldots, \qquad \frac{1}{s+2} = s^{-1} - 2s^{-2} + 4s^{-3} - \ldots, $$
$$ \frac{2}{s+2} = 2s^{-1} - 4s^{-2} + 8s^{-3} - \ldots, \qquad \frac{s+2}{s+1} = 1 + s^{-1} - s^{-2} + s^{-3} - \ldots, $$

the matrix (2.3.7) can be written in the form

$$ \begin{bmatrix} \dfrac{s}{s+1} & \dfrac{1}{s+2} & s \\[2mm] \dfrac{2}{s+2} & \dfrac{s+2}{s+1} & 2s \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 2 \end{bmatrix} s + \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} + \begin{bmatrix} -1 & 1 & 0 \\ 2 & 1 & 0 \end{bmatrix} s^{-1} + \begin{bmatrix} 1 & -2 & 0 \\ -4 & -1 & 0 \end{bmatrix} s^{-2} + \ldots . \qquad (2.3.11) $$

In this case, $r = -1$.

The sum (difference) and the product of rational matrices are defined analogously to the sum (difference) and the product of two rational functions. Using (2.3.10), it is easy to show that the sum, the difference and the product of two strictly proper matrices are themselves strictly proper matrices. The set of proper (causal) rational matrices of the variable $s$, with coefficients from the field $\mathbb{F}$ and of dimensions $m \times n$, will be denoted by $\mathbb{F}_p^{m \times n}(s)$. The entries of these matrices belong to the ring $\mathbb{F}_p(s)$. Thus we can define an $\mathbb{F}_p(s)$-unimodular matrix as a nonsingular matrix whose determinant is a unit of the ring $\mathbb{F}_p(s)$.

Definition 2.3.3. The following operations are called $\mathbb{F}_p(s)$-elementary operations on the rows and on the columns of a matrix, respectively:
1. Multiplication of the $i$-th row (column) by a unit $w(s)$ of the ring $\mathbb{F}_p(s)$. This operation will be denoted by $L[i \times w(s)]$ ($P[i \times w(s)]$).
2. Addition to the $i$-th row (column) of the $j$-th row (column) multiplied by an arbitrary proper (causal) rational function $\bar w(s)$. This operation will be denoted by $L[i + j \times \bar w(s)]$ ($P[i + j \times \bar w(s)]$).
3. The interchange of two arbitrary rows (columns) $i$, $j$. This operation will be denoted by $L[i, j]$ ($P[i, j]$).
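Returning to the expansion (2.3.10)-(2.3.11): for a proper scalar entry the negative-power coefficients can be produced by series division after the substitution $x = 1/s$. A minimal sketch (the improper entry $s$ of (2.3.7) is excluded, since the routine assumes $\deg$ numerator $\le \deg$ denominator):

```python
from fractions import Fraction

def laurent(num, den, nterms):
    """Coefficients [c0, c1, ...] of num(s)/den(s) = c0 + c1/s + c2/s^2 + ...
    about s = infinity; requires deg num <= deg den (ascending coeff lists)."""
    d = len(den) - 1
    # substitute s = 1/x and clear x^d, then divide power series a(x)/b(x)
    a = [Fraction(num[d - k]) if 0 <= d - k < len(num) else Fraction(0)
         for k in range(nterms)]
    b = [Fraction(den[d - k]) if 0 <= d - k < len(den) else Fraction(0)
         for k in range(nterms)]
    c = []
    for n in range(nterms):
        c.append((a[n] - sum(b[j] * c[n - j] for j in range(1, n + 1))) / b[0])
    return c

# s/(s+1) = 1 - s^-1 + s^-2 - s^-3 + ...
assert laurent([0, 1], [1, 1], 4) == [1, -1, 1, -1]
# 1/(s+2) = s^-1 - 2 s^-2 + 4 s^-3 - ...
assert laurent([1], [2, 1], 4) == [0, 1, -2, 4]
# (s+2)/(s+1) = 1 + s^-1 - s^-2 + s^-3 - ...
assert laurent([2, 1], [1, 1], 4) == [1, 1, -1, 1]
```

Collecting these entrywise coefficients reproduces the matrices $\mathbf{W}_r, \mathbf{W}_{r+1}, \dots$ of (2.3.11).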

Analogously to polynomial matrices, we can define the $\mathbb{F}_p(s)$-equivalence of proper rational matrices.

Definition 2.3.4. Two proper rational matrices $\mathbf{W}_1(s)$ and $\mathbf{W}_2(s)$ of the same dimensions are called $\mathbb{F}_p(s)$-equivalent if and only if there exist $\mathbb{F}_p(s)$-unimodular matrices $\mathbf{L}_p(s)$ and $\mathbf{P}_p(s)$ such that

$$ \mathbf{W}_1(s) = \mathbf{L}_p(s)\mathbf{W}_2(s)\mathbf{P}_p(s) . \qquad (2.3.12) $$

The matrices $\mathbf{L}_p(s)$ and $\mathbf{P}_p(s)$ are products of matrices of $\mathbb{F}_p(s)$-elementary operations on rows and columns, respectively. By $\mathbb{F}_p(s)$-equivalence, every proper rational matrix $\mathbf{W}(s) \in \mathbb{F}_p^{m \times n}(s)$ can be converted to the Smith canonical form

$$ \mathbf{W}_S(s) = \operatorname{diag}\left[ s^{-d_1}, s^{-d_2}, \dots, s^{-d_r}, 0, \dots, 0 \right] \in \mathbb{F}_p^{m \times n}(s), \qquad (2.3.13) $$

where $d_1 \le d_2 \le \ldots \le d_r$ are nonnegative integers uniquely determined by the matrix $\mathbf{W}(s)$, and $r = \operatorname{rank}\mathbf{W}(s)$. For example, the proper rational matrix

$$ \mathbf{W}(s) = \begin{bmatrix} \dfrac{s}{s+1} & \dfrac{s}{s+2} \\[2mm] \dfrac{1}{s+1} & \dfrac{1}{s(s+2)} \end{bmatrix} $$

can be converted by $\mathbb{F}_p(s)$-equivalence into

$$ \mathbf{W}_S(s) = \begin{bmatrix} 1 & 0 \\ 0 & s^{-1} \end{bmatrix} \quad (d_1 = 0,\ d_2 = 1) $$

by the following $\mathbb{F}_p(s)$-elementary operations on rows: $L[2 + 1 \times (-1/s)]$, $L[1 \times (s+1)/s]$, and on columns: $P[2 + 1 \times (-(s+1)/(s+2))]$, $P[2 \times (s+2)/(1-s)]$. The matrices of these $\mathbb{F}_p(s)$-elementary operations are

$$ \mathbf{L}_p(s) = \begin{bmatrix} \dfrac{s+1}{s} & 0 \\[1mm] -\dfrac{1}{s} & 1 \end{bmatrix}, \quad \mathbf{P}_p(s) = \begin{bmatrix} 1 & -\dfrac{s+1}{1-s} \\[1mm] 0 & \dfrac{s+2}{1-s} \end{bmatrix} . $$

Definition 2.3.5. A rational matrix whose entries are stable proper (causal) functions is called a stable matrix.


The set of stable matrices with coefficients from the field $\mathbb{F}$ and of dimensions $m \times n$ will be denoted by $\mathbb{F}_S^{m \times n}(s)$. This set is a subset of the set of proper rational matrices $\mathbb{F}_p^{m \times n}(s)$. A matrix whose entries are finite proper functions is called a finite rational matrix. The set of finite rational matrices with coefficients from the field $\mathbb{F}$ and of dimensions $m \times n$ will be denoted by $\mathbb{F}^{m \times n}[s^{-1}]$.

Consider the rational function

$$ w(s) = \frac{l(s)}{m(s)}, \qquad (2.3.14) $$

such that $l(s)$ and $m(s)$ are relatively prime elements of one of the rings $\mathbb{F}_p(s)$, $\mathbb{F}_S(s)$, $\mathbb{F}[s^{-1}]$. Analogously, one can define the sets of rational matrices whose entries are rational functions of the form (2.3.14).

2.4 Decomposition of Rational Matrices into a Sum of Rational Matrices

An arbitrary rational matrix of the form (2.3.1) can be decomposed into the sum of a strictly proper rational matrix $\mathbf{R}(s)/m(s)$ and a polynomial matrix $\mathbf{Q}(s)$, i.e.,

$$ \mathbf{W}(s) = \frac{\mathbf{R}(s)}{m(s)} + \mathbf{Q}(s), \qquad (2.4.1) $$

where $\deg m(s) > \deg \mathbf{R}(s)$. In order to decompose the rational matrix (2.3.2) into the sum (2.4.1), we divide every entry $l_{ij}(s)$ of the matrix $\mathbf{L}(s)$ by $m(s)$:

$$ l_{ij}(s) = q_{ij}(s)m(s) + r_{ij}(s), \quad i = 1, \dots, m;\ j = 1, \dots, n, \qquad (2.4.2) $$

where $q_{ij}(s)$ and $r_{ij}(s)$ are the quotient and the remainder of the division, respectively. Substituting (2.4.2) into (2.3.2) and defining $\mathbf{Q}(s) = [q_{ij}(s)]$, $\mathbf{R}(s) = [r_{ij}(s)]$, we obtain (2.4.1). If $\deg \mathbf{L}(s) < \deg m(s)$, then $\mathbf{Q}(s)$ is a zero matrix and $\mathbf{R}(s) = \mathbf{L}(s)$.

Now a strictly proper rational matrix of the form

$$ \mathbf{W}(s) = \frac{\mathbf{L}(s)}{m_1(s)m_2(s)\cdots m_p(s)}, \qquad (2.4.3) $$

where the polynomials $m_1(s), m_2(s), \dots, m_p(s)$ are pairwise relatively prime, is taken into account.


The matrix (2.4.3) can be uniquely decomposed into the sum of $p$ strictly proper rational matrices

$$ \frac{\mathbf{L}_k(s)}{m_k(s)}, \quad k = 1, \dots, p, $$

i.e.,

$$ \mathbf{W}(s) = \frac{\mathbf{L}_1(s)}{m_1(s)} + \frac{\mathbf{L}_2(s)}{m_2(s)} + \ldots + \frac{\mathbf{L}_p(s)}{m_p(s)}, \qquad (2.4.4) $$

where $\deg m_k(s) > \deg \mathbf{L}_k(s)$, $k = 1, \dots, p$. In order to carry out the decomposition (2.4.4), one has to apply to every entry $l_{ij}(s)$ of the matrix $\mathbf{L}(s)$ the procedure introduced in point 2.

Consider the strictly proper matrix (2.3.2) for $m(s)$ of the form (2.2.19). This matrix is a special case of the matrix (2.4.3) for

$$ m_k(s) = (s-s_k)^{n_k}, \quad k = 1, \dots, p . $$

The strictly proper rational matrix

$$ \frac{\mathbf{L}_k(s)}{(s-s_k)^{n_k}}, \quad k = 1, \dots, p, $$

may be further uniquely decomposed into the form

$$ \frac{\mathbf{L}_k(s)}{(s-s_k)^{n_k}} = \sum_{i=1}^{n_k} \frac{\mathbf{L}_{ki}}{(s-s_k)^{n_k-i+1}}, \quad k = 1, \dots, p . \qquad (2.4.5) $$

The decomposition (2.4.5) applied to every term of the sum (2.4.4) yields

$$ \mathbf{W}(s) = \frac{\mathbf{L}(s)}{m(s)} = \sum_{k=1}^{p} \sum_{i=1}^{n_k} \frac{\mathbf{L}_{ki}}{(s-s_k)^{n_k-i+1}}, \qquad (2.4.6) $$

where the matrices $\mathbf{L}_{ki}$ of the coefficients are given by the formula

$$ \mathbf{L}_{ki} = \frac{1}{(i-1)!} \left. \frac{\partial^{\,i-1}}{\partial s^{\,i-1}} \left[ \frac{\mathbf{L}(s)(s-s_k)^{n_k}}{m(s)} \right] \right|_{s=s_k}, \quad k = 1, \dots, p;\ i = 1, \dots, n_k . \qquad (2.4.7) $$

This formula follows from applying (2.2.22) to every entry of the matrix $\mathbf{L}(s)$.


Example 2.4.1. Decompose the rational matrix

$$ \mathbf{W}(s) = \begin{bmatrix} \dfrac{s}{(s+1)^2} & \dfrac{1}{s+2} \\[2mm] \dfrac{2}{s+2} & \dfrac{4}{s+1} \end{bmatrix} . \qquad (2.4.8) $$

We write the matrix in the form (2.4.3):

$$ \mathbf{W}(s) = \frac{\mathbf{L}(s)}{m_1(s)m_2(s)}, \qquad (2.4.9) $$

where

$$ m_1(s) = (s+1)^2, \quad m_2(s) = s+2, \quad \mathbf{L}(s) = \begin{bmatrix} s(s+2) & (s+1)^2 \\ 2(s+1)^2 & 4(s+1)(s+2) \end{bmatrix} . $$

We want to decompose the matrix (2.4.8) into the form

$$ \mathbf{W}(s) = \frac{\mathbf{L}_{11}}{(s+1)^2} + \frac{\mathbf{L}_{12}}{s+1} + \frac{\mathbf{L}_{21}}{s+2} . \qquad (2.4.10) $$

Using (2.4.7), we obtain

$$ \mathbf{L}_{11} = \left.\frac{\mathbf{L}(s)}{s+2}\right|_{s=-1} = \begin{bmatrix} -1 & 0 \\ 0 & 0 \end{bmatrix}, $$
$$ \mathbf{L}_{12} = \left.\frac{\partial}{\partial s}\left[\frac{\mathbf{L}(s)}{s+2}\right]\right|_{s=-1} = \left.\frac{\dfrac{d\mathbf{L}(s)}{ds}(s+2) - \mathbf{L}(s)}{(s+2)^2}\right|_{s=-1} = \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix}, \qquad (2.4.11) $$
$$ \mathbf{L}_{21} = \left.\frac{\mathbf{L}(s)}{(s+1)^2}\right|_{s=-2} = \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix} . $$

Substitution of (2.4.11) into (2.4.10) yields the desired decomposition of the matrix (2.4.8):

$$ \mathbf{W}(s) = \frac{1}{(s+1)^2}\begin{bmatrix} -1 & 0 \\ 0 & 0 \end{bmatrix} + \frac{1}{s+1}\begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix} + \frac{1}{s+2}\begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix} . \qquad (2.4.12) $$
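The decomposition (2.4.12) can be verified by evaluating both sides at sample points with exact arithmetic; a sketch:

```python
from fractions import Fraction

def W(s):
    s = Fraction(s)
    return [[s/(s+1)**2, 1/(s+2)],
            [2/(s+2),    4/(s+1)]]

L11 = [[-1, 0], [0, 0]]
L12 = [[1, 0], [0, 4]]
L21 = [[0, 1], [2, 0]]

def decomposition(s):
    s = Fraction(s)
    return [[L11[i][j]/(s+1)**2 + L12[i][j]/(s+1) + L21[i][j]/(s+2)
             for j in range(2)] for i in range(2)]

# Agreement at sample points away from the poles s = -1, -2
for s0 in (1, 2, 3):
    assert W(s0) == decomposition(s0)
```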


Now an improper rational matrix of the form

$$ \mathbf{W}(s) = \frac{\mathbf{L}(s)}{m_1(s)m_2(s)} \qquad (2.4.13) $$

is taken into account, where $\deg m_1(s) + \deg m_2(s) < \deg \mathbf{L}(s)$ and the polynomials $m_1(s)$, $m_2(s)$ are relatively prime. Separating from the matrix (2.4.13) the polynomial part $\mathbf{Q}(s) \in \mathbb{F}^{m \times n}[s]$ (according to the decomposition (2.4.1)), we obtain

$$ \mathbf{W}(s) = \frac{\bar{\mathbf{L}}(s)}{m_1(s)m_2(s)} + \mathbf{Q}(s), \quad \deg \bar{\mathbf{L}}(s) < \deg m_1(s) + \deg m_2(s) . \qquad (2.4.14) $$

With the decomposition (2.4.4) applied, the matrix (2.4.14) can be written in the form

$$ \mathbf{W}(s) = \frac{\mathbf{L}_1(s)}{m_1(s)} + \frac{\mathbf{L}_2(s)}{m_2(s)} + \mathbf{Q}(s), \qquad (2.4.15) $$

where $\deg m_1(s) > \deg \mathbf{L}_1(s)$ and $\deg m_2(s) > \deg \mathbf{L}_2(s)$.

Adding to and subtracting from the right-hand side of (2.4.15) an arbitrary polynomial matrix $\mathbf{P}(s) \in \mathbb{F}^{m \times n}[s]$ yields

$$ \mathbf{W}(s) = \left(\frac{\mathbf{L}_1(s)}{m_1(s)} + \mathbf{P}(s)\right) + \left(\frac{\mathbf{L}_2(s)}{m_2(s)} + \mathbf{Q}(s) - \mathbf{P}(s)\right) = \frac{\hat{\mathbf{L}}_1(s)}{m_1(s)} + \frac{\hat{\mathbf{L}}_2(s)}{m_2(s)}, \qquad (2.4.16) $$

where

$$ \hat{\mathbf{L}}_1(s) = \mathbf{L}_1(s) + m_1(s)\mathbf{P}(s), \quad \hat{\mathbf{L}}_2(s) = \mathbf{L}_2(s) + m_2(s)\left[\mathbf{Q}(s) - \mathbf{P}(s)\right] . $$

Thus the decomposition (2.4.16) of the matrix (2.4.13) is not unique. If we separate from the improper matrices

$$ \frac{\hat{\mathbf{L}}_k(s)}{m_k(s)}, \quad k = 1, 2, $$

the polynomial parts $\mathbf{Q}_1(s)$ and $\mathbf{Q}_2(s)$, respectively, we obtain

$$ \mathbf{W}(s) = \frac{\bar{\mathbf{L}}_1(s)}{m_1(s)} + \mathbf{Q}_1(s) + \frac{\bar{\mathbf{L}}_2(s)}{m_2(s)} + \mathbf{Q}_2(s), \qquad (2.4.17) $$

where $\deg m_1(s) > \deg \bar{\mathbf{L}}_1(s)$ and $\deg m_2(s) > \deg \bar{\mathbf{L}}_2(s)$.

A comparison of (2.4.17) with (2.4.15) implies that uniqueness of the decomposition holds for $\bar{\mathbf{L}}_1(s) = \mathbf{L}_1(s)$, $\bar{\mathbf{L}}_2(s) = \mathbf{L}_2(s)$ and $\mathbf{Q}(s) = \mathbf{Q}_1(s) + \mathbf{Q}_2(s)$.

Taking $\mathbf{P}(s) = \mathbf{0}$ in (2.4.16), one can express the matrix (2.4.14) as the sum

$$ \mathbf{W}(s) = \mathbf{W}_1(s) + \mathbf{W}_2(s); \quad \mathbf{W}_1(s) = \frac{\mathbf{L}_1(s)}{m_1(s)}, \quad \mathbf{W}_2(s) = \frac{\mathbf{L}_2(s)}{m_2(s)} + \mathbf{Q}(s), \qquad (2.4.18) $$

or

$$ \mathbf{W}(s) = \mathbf{W}_1(s) + \mathbf{W}_2(s); \quad \mathbf{W}_1(s) = \frac{\mathbf{L}_1(s)}{m_1(s)} + \mathbf{Q}(s), \quad \mathbf{W}_2(s) = \frac{\mathbf{L}_2(s)}{m_2(s)} . \qquad (2.4.19) $$

The decomposition (2.4.18) is called the minimal decomposition of the matrix (2.4.13) with respect to $m_1(s)$, and the decomposition (2.4.19) is called the minimal decomposition of the matrix (2.4.13) with respect to $m_2(s)$. Using the decomposition (2.4.4), one can generalise the above considerations to the case of a rational matrix of the form (2.4.3).

2.5 The Inverse Matrix of a Polynomial Matrix and Its Reducibility

Consider an invertible (nonsingular) polynomial matrix $\mathbf{A}(s) \in \mathbb{F}^{n \times n}[s]$. Its inverse is the rational matrix $\mathbf{A}^{-1}(s) \in \mathbb{F}^{n \times n}(s)$. Let $\mathbf{U}(s), \mathbf{V}(s) \in \mathbb{F}^{n \times n}[s]$ be unimodular matrices of elementary operations on rows and columns, respectively, that convert this polynomial matrix into the Smith canonical form $\mathbf{A}_S(s)$, i.e.,

$$ \mathbf{A}_S(s) = \mathbf{U}(s)\mathbf{A}(s)\mathbf{V}(s) = \operatorname{diag}\left[i_1(s), i_2(s), \dots, i_n(s)\right], \qquad (2.5.1) $$

where $i_k(s)$, $k = 1, \dots, n$, are the monic invariant polynomials satisfying the divisibility condition $i_k(s) \mid i_{k+1}(s)$ for $k = 1, \dots, n-1$. From (2.5.1), we have

$$ \mathbf{A}(s) = \mathbf{U}^{-1}(s)\mathbf{A}_S(s)\mathbf{V}^{-1}(s) = \mathbf{U}^{-1}(s)\operatorname{diag}\left[i_1(s), i_2(s), \dots, i_n(s)\right]\mathbf{V}^{-1}(s), \qquad (2.5.2) $$


where the inverse matrices $\mathbf{U}^{-1}(s)$, $\mathbf{V}^{-1}(s)$ are also unimodular. Thus the inverse of the matrix (2.5.2) can be computed as

$$ \mathbf{A}^{-1}(s) = \left[\mathbf{U}^{-1}(s)\mathbf{A}_S(s)\mathbf{V}^{-1}(s)\right]^{-1} = \mathbf{V}(s)\mathbf{A}_S^{-1}(s)\mathbf{U}(s) = \mathbf{V}(s)\operatorname{diag}\left[i_1^{-1}(s), i_2^{-1}(s), \dots, i_n^{-1}(s)\right]\mathbf{U}(s) $$
$$ = \mathbf{V}(s)\,\frac{\operatorname{Adj}\left[\operatorname{diag}\left(i_1(s), i_2(s), \dots, i_n(s)\right)\right]}{i_1(s)i_2(s)\cdots i_n(s)}\,\mathbf{U}(s), \qquad (2.5.3) $$

where the adjoint matrix is of the form

$$ \operatorname{Adj}\left[\operatorname{diag}\left(i_1(s), i_2(s), \dots, i_n(s)\right)\right] = \operatorname{diag}\left[i_2(s)i_3(s)\cdots i_n(s),\ i_1(s)i_3(s)\cdots i_n(s),\ \dots,\ i_1(s)i_2(s)\cdots i_{n-1}(s)\right] . \qquad (2.5.4) $$

Note that in the general case reductions take place in the inverse matrix $\mathbf{A}^{-1}(s)$, since for certain roots of the invariant polynomials the adjoint matrix (2.5.4) is equal to a zero matrix. On the other hand, if

$$ i_1(s) = i_2(s) = \ldots = i_{n-1}(s) = 1, \qquad (2.5.5) $$

then the matrix (2.5.4) takes the form

$$ \operatorname{Adj}\left[\operatorname{diag}\left(i_1(s), i_2(s), \dots, i_n(s)\right)\right] = \operatorname{diag}\left[i_n(s), i_n(s), \dots, i_n(s), 1\right] \qquad (2.5.6) $$

and for all roots of the invariant polynomial $i_n(s)$ it is a nonzero matrix. In this case, there are no reductions in the inverse matrix $\mathbf{A}^{-1}(s)$. The condition (2.5.5) is also a necessary condition for the absence of reductions in the matrix $\mathbf{A}^{-1}(s)$: if it is not satisfied, then $i_{n-1}(s) \ne 1$ and, by the divisibility condition, $i_{n-1}(s)$ and $i_n(s)$ have at least one common root. For this root, the adjoint matrix (2.5.4) is equal to the zero matrix and a reduction by the corresponding factor occurs in the matrix $\mathbf{A}^{-1}(s)$. In this way, the following theorem has been proved.

Theorem 2.5.1. There are no reductions in the inverse matrix $\mathbf{A}^{-1}(s)$ if and only if the polynomial matrix $\mathbf{A}(s)$ is a simple matrix, i.e., the condition (2.5.5) is satisfied or, equivalently, the characteristic polynomial of this matrix is identical with its minimal polynomial.

Thus the inverse $\mathbf{A}^{-1}(s)$ of a simple matrix $\mathbf{A}(s)$ is of the form

$$ \mathbf{A}^{-1}(s) = \mathbf{V}(s)\operatorname{diag}\left[1, 1, \dots, 1, i_n^{-1}(s)\right]\mathbf{U}(s) . \qquad (2.5.7) $$


Example 2.5.1. Compute the inverse of the polynomial matrix

$$ \mathbf{A}(s) = \begin{bmatrix} s+1-(s+1)^2(s+2)^2 & (s+1)^2(s+2) \\ -2(s+1)(s+2)^2 & 2(s+1)(s+2) \end{bmatrix} \qquad (2.5.8) $$

and check whether any reductions occur in the inverse matrix. In this case,

$$ \mathbf{A}(s) = \begin{bmatrix} 1 & s+1 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} s+1 & 0 \\ 0 & (s+1)(s+2) \end{bmatrix} \begin{bmatrix} 1 & 0 \\ -(s+2) & 1 \end{bmatrix} . $$

Hence

$$ \mathbf{A}_S(s) = \begin{bmatrix} s+1 & 0 \\ 0 & (s+1)(s+2) \end{bmatrix}, \quad \mathbf{U}^{-1}(s) = \begin{bmatrix} 1 & s+1 \\ 0 & 2 \end{bmatrix}, \quad \mathbf{V}^{-1}(s) = \begin{bmatrix} 1 & 0 \\ -(s+2) & 1 \end{bmatrix} . $$

The matrix (2.5.8) is not simple, since $i_1(s) = s+1 \ne 1$, and reductions by $(s+1)$ will take place in the inverse matrix $\mathbf{A}^{-1}(s)$. Using (2.5.3) and (2.5.4), we obtain

$$ \mathbf{A}^{-1}(s) = \mathbf{V}(s)\,\frac{\operatorname{diag}\left[(s+1)(s+2),\ s+1\right]}{(s+1)^2(s+2)}\,\mathbf{U}(s) = \frac{1}{(s+1)^2(s+2)}\begin{bmatrix} 1 & 0 \\ s+2 & 1 \end{bmatrix}\begin{bmatrix} (s+1)(s+2) & 0 \\ 0 & s+1 \end{bmatrix}\begin{bmatrix} 1 & -\dfrac{s+1}{2} \\[1mm] 0 & \dfrac{1}{2} \end{bmatrix} $$
$$ = \begin{bmatrix} \dfrac{1}{s+1} & -\dfrac{1}{2} \\[2mm] \dfrac{s+2}{s+1} & -\dfrac{(s+1)(s+2)^2-1}{2(s+1)(s+2)} \end{bmatrix} . $$
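The inverse computed in Example 2.5.1 can be checked against $\mathbf{A}(s)\mathbf{A}^{-1}(s) = \mathbf{I}$ at sample points with exact rationals; a sketch:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def A(s):
    s = Fraction(s)
    return [[s + 1 - (s+1)**2 * (s+2)**2, (s+1)**2 * (s+2)],
            [-2*(s+1)*(s+2)**2,           2*(s+1)*(s+2)]]

def Ainv(s):
    s = Fraction(s)
    return [[1/(s+1),      Fraction(-1, 2)],
            [(s+2)/(s+1), -((s+1)*(s+2)**2 - 1) / (2*(s+1)*(s+2))]]

# A(s) Ainv(s) = I at sample points away from the roots s = -1, -2
for s0 in (0, 1, 2):
    assert matmul(A(s0), Ainv(s0)) == [[1, 0], [0, 1]]
```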

Example 2.5.2. Show that the matrix $[\mathbf{I}_n s - \mathbf{A}]^{-1}$ is not reducible for any coefficients $a_0, a_1, \dots, a_{n-1}$ of the matrix

$$ \mathbf{A} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix} . \qquad (2.5.9) $$


Taking into account that for the matrix (2.5.9)

$$ \det\left[\mathbf{I}_n s - \mathbf{A}\right] = \det \begin{bmatrix} s & -1 & 0 & \cdots & 0 \\ 0 & s & -1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & -1 \\ a_0 & a_1 & a_2 & \cdots & s+a_{n-1} \end{bmatrix} = s^n + a_{n-1}s^{n-1} + \ldots + a_1 s + a_0, $$

we obtain

$$ \left[\mathbf{I}_n s - \mathbf{A}\right]^{-1} = \frac{\operatorname{Adj}\left[\mathbf{I}_n s - \mathbf{A}\right]}{\det\left[\mathbf{I}_n s - \mathbf{A}\right]} = \frac{1}{s^n + a_{n-1}s^{n-1} + \ldots + a_1 s + a_0} \begin{bmatrix} * & * & \cdots & * & 1 \\ * & * & \cdots & * & * \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ * & * & \cdots & * & * \end{bmatrix}, \qquad (2.5.10) $$

where $*$ stands for an entry that does not matter in these considerations. It follows from (2.5.10) that the matrix $[\mathbf{I}_n s - \mathbf{A}]^{-1}$ is irreducible, since the entry $(1, n)$ of the adjoint matrix is equal to 1.

Example 2.5.3. Show that the matrix $[\mathbf{I}s - \mathbf{A}]^{-1}$ is irreducible if and only if the entry $a$ of the matrix

$$ \mathbf{A} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & a \end{bmatrix} \qquad (2.5.11) $$

is different from 1. Computation of the inverse $[\mathbf{I}s - \mathbf{A}]^{-1}$ of the matrix (2.5.11) yields

$$ \left[\mathbf{I}s - \mathbf{A}\right]^{-1} = \begin{bmatrix} s-1 & -1 & 0 \\ 0 & s-1 & 0 \\ 0 & 0 & s-a \end{bmatrix}^{-1} = \frac{1}{(s-1)^2(s-a)} \begin{bmatrix} (s-1)(s-a) & s-a & 0 \\ 0 & (s-1)(s-a) & 0 \\ 0 & 0 & (s-1)^2 \end{bmatrix} . \qquad (2.5.12) $$


From (2.5.12) it follows that the matrix $[\mathbf{I}s - \mathbf{A}]^{-1}$ for the matrix (2.5.11) is irreducible if and only if $a \ne 1$. Using elementary operations, it is easy to show that for $a = 1$

$$ \left[\mathbf{I}s - \mathbf{A}\right]_S = \begin{bmatrix} s-1 & -1 & 0 \\ 0 & s-1 & 0 \\ 0 & 0 & s-1 \end{bmatrix}_S = \begin{bmatrix} 1 & 0 & 0 \\ 0 & s-1 & 0 \\ 0 & 0 & (s-1)^2 \end{bmatrix} $$

and for $a \ne 1$

$$ \left[\mathbf{I}s - \mathbf{A}\right]_S = \begin{bmatrix} s-1 & -1 & 0 \\ 0 & s-1 & 0 \\ 0 & 0 & s-a \end{bmatrix}_S = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & (s-1)^2(s-a) \end{bmatrix} . $$
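The conclusion of Example 2.5.2 can be illustrated numerically: for a companion matrix, the $(1, n)$ entry of $[\mathbf{I}_n s - \mathbf{A}]^{-1}$ equals $1/\det[\mathbf{I}_n s - \mathbf{A}]$, i.e., the corresponding adjoint entry is the constant 1, so no reduction is possible. A sketch with assumed sample coefficients $a_0 = 2$, $a_1 = 3$, $a_2 = 4$:

```python
from fractions import Fraction

def inverse(M):
    """Invert a matrix by Gauss-Jordan elimination with exact rationals."""
    n = len(M)
    aug = [[Fraction(v) for v in row] + [Fraction(1 if i == j else 0)
           for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                aug[r] = [v - aug[r][col] * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

a0, a1, a2 = 2, 3, 4                 # assumed sample coefficients
A = [[0, 1, 0], [0, 0, 1], [-a0, -a1, -a2]]

def charpoly(s):                     # det[I s - A] = s^3 + a2 s^2 + a1 s + a0
    return s**3 + a2*s**2 + a1*s + a0

for s0 in (1, 2, 5):
    IsA = [[(s0 if i == j else 0) - A[i][j] for j in range(3)] for i in range(3)]
    inv = inverse(IsA)
    # entry (1, n) of the inverse is exactly 1/charpoly(s0)
    assert inv[0][2] == Fraction(1, charpoly(s0))
```

Since the adjoint entry is 1 regardless of $a_0, a_1, a_2$, it shares no root with the characteristic polynomial, which is the statement of Example 2.5.2.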

2.6 Fraction Description of Rational Matrices and the McMillan Canonical Form

2.6.1 Fractional Forms of Rational Matrices

We will show that an arbitrary rational matrix of the form (2.3.1) can be written in the form

$$ \mathbf{W}(s) = \mathbf{D}_l^{-1}(s)\mathbf{N}_l(s), \qquad (2.6.1a) $$
$$ \mathbf{W}(s) = \mathbf{N}_p(s)\mathbf{D}_p^{-1}(s), \qquad (2.6.1b) $$

where $\mathbf{D}_l(s) \in \mathbb{F}^{m \times m}[s]$ and $\mathbf{D}_p(s) \in \mathbb{F}^{n \times n}[s]$ are nonsingular matrices and $\mathbf{N}_l(s) \in \mathbb{F}^{m \times n}[s]$, $\mathbf{N}_p(s) \in \mathbb{F}^{m \times n}[s]$. According to the considerations in point 3, an arbitrary matrix of the form (2.3.1) can be expressed in the standard form (2.3.2). Taking $\mathbf{D}_l(s) = \mathbf{I}_m m(s)$ and $\mathbf{N}_l(s) = \mathbf{L}(s)$, we obtain from (2.3.2) the matrix $\mathbf{W}(s)$ in the form (2.6.1a). Taking $\mathbf{D}_p(s) = \mathbf{I}_n m(s)$ and $\mathbf{N}_p(s) = \mathbf{L}(s)$, we obtain the matrix $\mathbf{W}(s)$ in the form (2.6.1b).


If $\mathbf{D}_l(s) = \mathbf{I}_m m(s)$, then $\deg\det \mathbf{D}_l(s) = m \deg m(s)$, and if $\mathbf{D}_p(s) = \mathbf{I}_n m(s)$, then $\deg\det \mathbf{D}_p(s) = n \deg m(s)$. Note that premultiplication of the matrices $\mathbf{D}_l(s)$ and $\mathbf{N}_l(s)$ by an arbitrary nonsingular matrix $\mathbf{K}(s) \in \mathbb{F}^{m \times m}[s]$ does not change the matrix (2.6.1a), since

$$ \left[\mathbf{K}(s)\mathbf{D}_l(s)\right]^{-1}\mathbf{K}(s)\mathbf{N}_l(s) = \mathbf{D}_l^{-1}(s)\mathbf{K}^{-1}(s)\mathbf{K}(s)\mathbf{N}_l(s) = \mathbf{D}_l^{-1}(s)\mathbf{N}_l(s) = \mathbf{W}(s) . $$

Analogously, post-multiplication of the matrices $\mathbf{D}_p(s)$ and $\mathbf{N}_p(s)$ by an arbitrary nonsingular matrix $\mathbf{K}(s) \in \mathbb{F}^{n \times n}[s]$ does not change the matrix (2.6.1b), since

$$ \mathbf{N}_p(s)\mathbf{K}(s)\left[\mathbf{D}_p(s)\mathbf{K}(s)\right]^{-1} = \mathbf{N}_p(s)\mathbf{K}(s)\mathbf{K}^{-1}(s)\mathbf{D}_p^{-1}(s) = \mathbf{N}_p(s)\mathbf{D}_p^{-1}(s) = \mathbf{W}(s) . $$

Thus for a given rational matrix $\mathbf{W}(s)$ there are many pairs of matrices $(\mathbf{D}_l(s), \mathbf{N}_l(s))$ and $(\mathbf{D}_p(s), \mathbf{N}_p(s))$ that give the same matrix $\mathbf{W}(s)$; these pairs are not unique. If

$$ \mathbf{W}(s) = \frac{\mathbf{L}(s)}{m(s)} = \mathbf{D}_l^{-1}(s)\mathbf{N}_l(s) = \mathbf{N}_p(s)\mathbf{D}_p^{-1}(s), \qquad (2.6.2) $$

then

$$ \deg m(s) - \deg \mathbf{L}(s) \le \deg \mathbf{D}_l(s) - \deg \mathbf{N}_l(s), \qquad (2.6.3a) $$
$$ \deg m(s) - \deg \mathbf{L}(s) \le \deg \mathbf{D}_p(s) - \deg \mathbf{N}_p(s) . \qquad (2.6.3b) $$

From (2.6.2), we have

$$ \mathbf{D}_l(s)\mathbf{L}(s) = m(s)\mathbf{N}_l(s) \qquad (2.6.4) $$

and $\deg\left[\mathbf{D}_l(s)\mathbf{L}(s)\right] = \deg\left[m(s)\mathbf{N}_l(s)\right]$. Taking into account that

$$ \deg\left[\mathbf{D}_l(s)\mathbf{L}(s)\right] \le \deg \mathbf{D}_l(s) + \deg \mathbf{L}(s) \quad \text{and} \quad \deg\left[m(s)\mathbf{N}_l(s)\right] = \deg m(s) + \deg \mathbf{N}_l(s), $$

from (2.6.4) we obtain (2.6.3a). The proof of (2.6.3b) is analogous.


If $\deg m(s) > \deg \mathbf{L}(s)$, then from (2.6.3) it follows that $\deg \mathbf{D}_l(s) > \deg \mathbf{N}_l(s)$ and $\deg \mathbf{D}_p(s) > \deg \mathbf{N}_p(s)$. If, on the other hand, $\deg m(s) \ge \deg \mathbf{L}(s)$, then from (2.6.3) we have $\deg \mathbf{D}_l(s) \ge \deg \mathbf{N}_l(s)$ and $\deg \mathbf{D}_p(s) \ge \deg \mathbf{N}_p(s)$.

Example 2.6.1. The rational matrix

$$ \mathbf{W}(s) = \begin{bmatrix} \dfrac{1}{s+3} & \dfrac{1}{s+2} & \dfrac{1}{(s+2)(s+3)} \\[2mm] \dfrac{1}{s+2} & \dfrac{1}{s+3} & \dfrac{1}{(s+2)^2} \end{bmatrix} \qquad (2.6.5) $$

is to be converted to the forms (2.6.1a) and (2.6.1b). We write this matrix in the standard form (2.3.2):

$$ \mathbf{W}(s) = \frac{1}{(s+2)^2(s+3)} \begin{bmatrix} (s+2)^2 & (s+2)(s+3) & s+2 \\ (s+2)(s+3) & (s+2)^2 & s+3 \end{bmatrix} = \frac{\mathbf{L}(s)}{m(s)} . \qquad (2.6.6) $$

Taking

$$ \mathbf{D}_l(s) = \mathbf{I}_2 (s+2)^2(s+3), \quad \mathbf{N}_l(s) = \mathbf{L}(s), $$

we obtain

$$ \mathbf{W}(s) = \begin{bmatrix} (s+2)^2(s+3) & 0 \\ 0 & (s+2)^2(s+3) \end{bmatrix}^{-1} \begin{bmatrix} (s+2)^2 & (s+2)(s+3) & s+2 \\ (s+2)(s+3) & (s+2)^2 & s+3 \end{bmatrix} . \qquad (2.6.7) $$

On the other hand, taking

$$ \mathbf{D}_p(s) = \mathbf{I}_3 (s+2)^2(s+3), \quad \mathbf{N}_p(s) = \mathbf{L}(s), $$

we obtain

$$ \mathbf{W}(s) = \begin{bmatrix} (s+2)^2 & (s+2)(s+3) & s+2 \\ (s+2)(s+3) & (s+2)^2 & s+3 \end{bmatrix} \begin{bmatrix} (s+2)^2(s+3) & 0 & 0 \\ 0 & (s+2)^2(s+3) & 0 \\ 0 & 0 & (s+2)^2(s+3) \end{bmatrix}^{-1} . \qquad (2.6.8) $$

In this case,

$$ \deg\det \mathbf{D}_l(s) = m \deg m(s) = 2 \cdot 3 = 6 \quad \text{and} \quad \deg\det \mathbf{D}_p(s) = n \deg m(s) = 3 \cdot 3 = 9 . \qquad (2.6.9) $$

From (2.6.4) and (2.6.5) it follows that there are reductions between the entries of $\mathbf{D}_l^{-1}(s)$ and $\mathbf{N}_l(s)$, as well as of $\mathbf{D}_p^{-1}(s)$ and $\mathbf{N}_p(s)$. Thus the question arises under which conditions the reductions do not occur, i.e., the pairs $(\mathbf{D}_l(s), \mathbf{N}_l(s))$ and $(\mathbf{D}_p(s), \mathbf{N}_p(s))$ are irreducible and the degrees of the determinants of the matrices $\mathbf{D}_l(s)$ and $\mathbf{D}_p(s)$ are minimal. The following theorem answers this question.

Theorem 2.6.1. The pair $(\mathbf{D}_l(s), \mathbf{N}_l(s))$ is irreducible and the degree of the determinant of $\mathbf{D}_l(s)$ is minimal if and only if

$$ \operatorname{rank}\left[\mathbf{D}_l(s),\ \mathbf{N}_l(s)\right] = m \quad \text{for all } s \in \mathbb{C} . \qquad (2.6.10a) $$

The pair $(\mathbf{D}_p(s), \mathbf{N}_p(s))$ is irreducible and the degree of the determinant of $\mathbf{D}_p(s)$ is minimal if and only if

$$ \operatorname{rank}\begin{bmatrix} \mathbf{D}_p(s) \\ \mathbf{N}_p(s) \end{bmatrix} = n \quad \text{for all } s \in \mathbb{C} . \qquad (2.6.10b) $$

Proof. If the condition (2.6.10a) is satisfied, then only a unimodular matrix $\mathbf{U}(s)$, with $\deg\det \mathbf{U}(s) = 0$, can be a greatest common left divisor of the matrices $\mathbf{D}_l(s)$ and $\mathbf{N}_l(s)$. In this case, $\mathbf{D}_l(s)$ and $\mathbf{N}_l(s)$ are irreducible and the degree of $\det \mathbf{D}_l(s)$ is minimal. The condition (2.6.10a) is also necessary. Let $\mathbf{L}_l(s)$, which is not a unimodular matrix, be a common left divisor of the matrices $\mathbf{D}_l(s)$ and $\mathbf{N}_l(s)$, i.e., $\mathbf{D}_l(s) = \mathbf{L}_l(s)\bar{\mathbf{D}}_l(s)$, $\mathbf{N}_l(s) = \mathbf{L}_l(s)\bar{\mathbf{N}}_l(s)$. Then for those values of the variable $s$ for which $\det \mathbf{L}_l(s) = 0$, the condition (2.6.10a) is not satisfied, and

$$ \mathbf{W}(s) = \mathbf{D}_l^{-1}(s)\mathbf{N}_l(s) = \left[\mathbf{L}_l(s)\bar{\mathbf{D}}_l(s)\right]^{-1}\mathbf{L}_l(s)\bar{\mathbf{N}}_l(s) = \bar{\mathbf{D}}_l^{-1}(s)\bar{\mathbf{N}}_l(s) . $$

In this case, the pair $(\mathbf{D}_l(s), \mathbf{N}_l(s))$ is reducible and the degree of the determinant of $\mathbf{D}_l(s)$ is not minimal.


The proof of the second part of the theorem, for the pair $(\mathbf{D}_p(s), \mathbf{N}_p(s))$, is analogous (dual). ∎

Definition 2.6.1. An irreducible pair $(\mathbf{D}_l(s), \mathbf{N}_l(s))$ ($(\mathbf{D}_p(s), \mathbf{N}_p(s))$) yielding (2.6.1a) ((2.6.1b)) is called a left (right) minimal fraction form of the rational matrix $\mathbf{W}(s)$.

From the proof it follows that minimal fraction forms of a rational matrix are determined uniquely up to multiplication by unimodular matrices and that for minimal $\mathbf{D}_l(s)$ and $\mathbf{D}_p(s)$ the following equality holds:

$$ \deg\det \mathbf{D}_l(s) = \deg\det \mathbf{D}_p(s) . \qquad (2.6.11) $$

To compute a minimal pair $(\bar{\mathbf{D}}_l(s), \bar{\mathbf{N}}_l(s))$ from a given nonminimal (reducible) pair $(\mathbf{D}_l(s), \mathbf{N}_l(s))$, a greatest common left divisor $\mathbf{L}_l(s)$ of these matrices is to be determined. To accomplish this, we apply elementary operations to the columns of $[\mathbf{D}_l(s), \mathbf{N}_l(s)]$ and perform the reduction

$$ \begin{bmatrix} \mathbf{D}_l(s) & \mathbf{N}_l(s) \\ \mathbf{I}_m & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_n \end{bmatrix} \xrightarrow{\ R\ } \begin{bmatrix} \mathbf{L}_l(s) & \mathbf{0} \\ \mathbf{U}_1 & \mathbf{U}_2 \\ \mathbf{U}_3 & \mathbf{U}_4 \end{bmatrix}, \quad \mathbf{U}_1 = \mathbf{U}_1(s) \in \mathbb{F}^{m \times m}[s], \quad \mathbf{U}_4 = \mathbf{U}_4(s) \in \mathbb{F}^{n \times n}[s] \qquad (2.6.12) $$

($R$ denotes elementary operations on columns), where

$$ \begin{bmatrix} \mathbf{U}_1 & \mathbf{U}_2 \\ \mathbf{U}_3 & \mathbf{U}_4 \end{bmatrix} $$

is unimodular and partitioned into blocks of dimensions corresponding to those of $\mathbf{D}_l(s)$ and $\mathbf{N}_l(s)$. From (2.6.12), we have

$$ \left[\mathbf{D}_l(s),\ \mathbf{N}_l(s)\right]\begin{bmatrix} \mathbf{U}_1 & \mathbf{U}_2 \\ \mathbf{U}_3 & \mathbf{U}_4 \end{bmatrix} = \left[\mathbf{L}_l(s),\ \mathbf{0}\right] $$

and

$$ \left[\mathbf{D}_l(s),\ \mathbf{N}_l(s)\right]\begin{bmatrix} \mathbf{U}_1 \\ \mathbf{U}_3 \end{bmatrix} = \mathbf{L}_l(s), \quad \left[\mathbf{D}_l(s),\ \mathbf{N}_l(s)\right]\begin{bmatrix} \mathbf{U}_2 \\ \mathbf{U}_4 \end{bmatrix} = \mathbf{0} . \qquad (2.6.13) $$

The matrix $\mathbf{U}_4$ is nonsingular. From the second equation of (2.6.13), we obtain

$$ \mathbf{W}(s) = \mathbf{D}_l^{-1}(s)\mathbf{N}_l(s) = -\mathbf{U}_2\mathbf{U}_4^{-1} . \qquad (2.6.14) $$

The matrix

$$ \begin{bmatrix} \mathbf{U}_2 \\ \mathbf{U}_4 \end{bmatrix} $$

is of full rank for all $s \in \mathbb{C}$, since it is a part of a unimodular matrix. Thus $(\mathbf{U}_4, -\mathbf{U}_2)$ is a minimal (irreducible) pair. Using (2.6.14), we can compute a minimal pair $(\bar{\mathbf{D}}_l(s), \bar{\mathbf{N}}_l(s))$ for an arbitrary given pair $(\mathbf{D}_l(s), \mathbf{N}_l(s))$. Knowing a greatest common left divisor $\mathbf{L}_l(s)$, we can also compute a minimal pair from the relationships

$$ \bar{\mathbf{D}}_l(s) = \mathbf{L}_l^{-1}(s)\mathbf{D}_l(s), \quad \bar{\mathbf{N}}_l(s) = \mathbf{L}_l^{-1}(s)\mathbf{N}_l(s) . \qquad (2.6.15) $$

Analogously, to compute a minimal pair $(\bar{\mathbf{D}}_p(s), \bar{\mathbf{N}}_p(s))$ from a given nonminimal (reducible) pair $(\mathbf{D}_p(s), \mathbf{N}_p(s))$, one has to compute a greatest common right divisor $\mathbf{P}_p(s)$ of these matrices. Carrying out elementary operations on the rows of

$$ \begin{bmatrix} \mathbf{D}_p(s) \\ \mathbf{N}_p(s) \end{bmatrix}, $$

we make the reduction

$$ \begin{bmatrix} \mathbf{D}_p(s) & \mathbf{I}_n & \mathbf{0} \\ \mathbf{N}_p(s) & \mathbf{0} & \mathbf{I}_m \end{bmatrix} \xrightarrow{\ L\ } \begin{bmatrix} \mathbf{P}_p(s) & \mathbf{V}_1 & \mathbf{V}_2 \\ \mathbf{0} & \mathbf{V}_3 & \mathbf{V}_4 \end{bmatrix}, \quad \mathbf{V}_1 = \mathbf{V}_1(s) \in \mathbb{F}^{n \times n}[s], \quad \mathbf{V}_4 = \mathbf{V}_4(s) \in \mathbb{F}^{m \times m}[s], \qquad (2.6.16) $$

where

$$ \begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 \\ \mathbf{V}_3 & \mathbf{V}_4 \end{bmatrix} $$

is a unimodular matrix partitioned into blocks of dimensions corresponding to those of $\mathbf{D}_p(s)$ and $\mathbf{N}_p(s)$. From (2.6.16) we have

$$ \begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 \\ \mathbf{V}_3 & \mathbf{V}_4 \end{bmatrix}\begin{bmatrix} \mathbf{D}_p(s) \\ \mathbf{N}_p(s) \end{bmatrix} = \begin{bmatrix} \mathbf{P}_p(s) \\ \mathbf{0} \end{bmatrix} $$

and

$$ \mathbf{V}_1\mathbf{D}_p(s) + \mathbf{V}_2\mathbf{N}_p(s) = \mathbf{P}_p(s), \quad \mathbf{V}_3\mathbf{D}_p(s) + \mathbf{V}_4\mathbf{N}_p(s) = \mathbf{0} . \qquad (2.6.17) $$

$[\mathbf{V}_3, \mathbf{V}_4]$ is of full rank for all $s \in \mathbb{C}$, since it is a part of a unimodular matrix, and $\mathbf{V}_4$ is nonsingular. From the second relationship in (2.6.17), we obtain

$$ \mathbf{W}(s) = \mathbf{N}_p(s)\mathbf{D}_p^{-1}(s) = -\mathbf{V}_4^{-1}\mathbf{V}_3 . \qquad (2.6.18) $$


Using (2.6.18), we can compute a minimal pair $(\bar{\mathbf{D}}_p(s), \bar{\mathbf{N}}_p(s))$ for an arbitrary pair $(\mathbf{D}_p(s), \mathbf{N}_p(s))$. Knowing a greatest common right divisor $\mathbf{P}_p(s)$, we can compute a minimal pair from the equations

$$ \bar{\mathbf{D}}_p(s) = \mathbf{D}_p(s)\mathbf{P}_p^{-1}(s), \quad \bar{\mathbf{N}}_p(s) = \mathbf{N}_p(s)\mathbf{P}_p^{-1}(s) . \qquad (2.6.19) $$

Example 2.6.2. Using the solution of Example 2.6.1, compute the left and the right minimal fraction forms of the matrix (2.6.5). It is easy to check that the fraction forms (2.6.7) and (2.6.8) of this matrix are not minimal. Using the reduction (2.6.12), we compute a greatest common left divisor $\mathbf{L}_l(s)$ of the matrices $\mathbf{D}_l(s)$ and $\mathbf{N}_l(s)$: a sequence of elementary column operations applied to the array

$$ \begin{bmatrix} \mathbf{D}_l(s) & \mathbf{N}_l(s) \\ \mathbf{I}_2 & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_3 \end{bmatrix} = \begin{bmatrix} (s+2)^2(s+3) & 0 & (s+2)^2 & (s+2)(s+3) & s+2 \\ 0 & (s+2)^2(s+3) & (s+2)(s+3) & (s+2)^2 & s+3 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} $$

brings it to the form (2.6.12) with

$$ \mathbf{L}_l(s) = \begin{bmatrix} s+2 & 0 \\ 0 & 1 \end{bmatrix} . $$

Using (2.6.15) and (2.6.14), we obtain

$$ \bar{\mathbf{D}}_l(s) = \mathbf{L}_l^{-1}(s)\mathbf{D}_l(s) = \begin{bmatrix} s+2 & 0 \\ 0 & 1 \end{bmatrix}^{-1}\begin{bmatrix} (s+2)^2(s+3) & 0 \\ 0 & (s+2)^2(s+3) \end{bmatrix} = \begin{bmatrix} (s+2)(s+3) & 0 \\ 0 & (s+2)^2(s+3) \end{bmatrix}, \qquad (2.6.20) $$

$$ \bar{\mathbf{N}}_l(s) = \mathbf{L}_l^{-1}(s)\mathbf{N}_l(s) = \begin{bmatrix} s+2 & 0 \\ 0 & 1 \end{bmatrix}^{-1}\begin{bmatrix} (s+2)^2 & (s+2)(s+3) & s+2 \\ (s+2)(s+3) & (s+2)^2 & s+3 \end{bmatrix} = \begin{bmatrix} s+2 & s+3 & 1 \\ (s+2)(s+3) & (s+2)^2 & s+3 \end{bmatrix}, $$

$$ \mathbf{W}(s) = \bar{\mathbf{D}}_l^{-1}(s)\bar{\mathbf{N}}_l(s) = \begin{bmatrix} \dfrac{1}{s+3} & \dfrac{1}{s+2} & \dfrac{1}{(s+2)(s+3)} \\[2mm] \dfrac{1}{s+2} & \dfrac{1}{s+3} & \dfrac{1}{(s+2)^2} \end{bmatrix}, \quad \det \bar{\mathbf{D}}_l(s) = (s+2)^3(s+3)^2, \qquad (2.6.21) $$

and, with the blocks $\mathbf{U}_2$, $\mathbf{U}_4$ read off from the reduced array,

$$ \mathbf{W}(s) = -\mathbf{U}_2\mathbf{U}_4^{-1} = \begin{bmatrix} \dfrac{1}{s+3} & \dfrac{1}{s+2} & \dfrac{1}{(s+2)(s+3)} \\[2mm] \dfrac{1}{s+2} & \dfrac{1}{s+3} & \dfrac{1}{(s+2)^2} \end{bmatrix} . \qquad (2.6.22) $$

Using the reduction (2.6.16), we compute a greatest common right divisor $\mathbf{P}_p(s)$: a sequence of elementary row operations applied to the array

$$ \begin{bmatrix} \mathbf{D}_p(s) & \mathbf{I}_3 & \mathbf{0} \\ \mathbf{N}_p(s) & \mathbf{0} & \mathbf{I}_2 \end{bmatrix} $$

brings it to the form (2.6.16) with

$$ \mathbf{P}_p(s) = \begin{bmatrix} 0 & 0 & (s+2)(s+3) \\ 0 & s+2 & 0 \\ s+2 & 0 & 1 \end{bmatrix} . $$

Using (2.6.19) and (2.6.18), we obtain

$$ \bar{\mathbf{D}}_p(s) = \mathbf{D}_p(s)\mathbf{P}_p^{-1}(s) = \begin{bmatrix} (s+2)^2(s+3) & 0 & 0 \\ 0 & (s+2)^2(s+3) & 0 \\ 0 & 0 & (s+2)^2(s+3) \end{bmatrix}\begin{bmatrix} 0 & 0 & (s+2)(s+3) \\ 0 & s+2 & 0 \\ s+2 & 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} -1 & 0 & (s+2)(s+3) \\ 0 & (s+2)(s+3) & 0 \\ s+2 & 0 & 0 \end{bmatrix}, \qquad (2.6.23) $$

$$ \bar{\mathbf{N}}_p(s) = \mathbf{N}_p(s)\mathbf{P}_p^{-1}(s) = \begin{bmatrix} (s+2)^2 & (s+2)(s+3) & s+2 \\ (s+2)(s+3) & (s+2)^2 & s+3 \end{bmatrix}\begin{bmatrix} 0 & 0 & (s+2)(s+3) \\ 0 & s+2 & 0 \\ s+2 & 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 0 & s+3 & s+2 \\ 0 & s+2 & s+3 \end{bmatrix}, $$

$$ \mathbf{W}(s) = \bar{\mathbf{N}}_p(s)\bar{\mathbf{D}}_p^{-1}(s) = \begin{bmatrix} \dfrac{1}{s+3} & \dfrac{1}{s+2} & \dfrac{1}{(s+2)(s+3)} \\[2mm] \dfrac{1}{s+2} & \dfrac{1}{s+3} & \dfrac{1}{(s+2)^2} \end{bmatrix}, \quad \det \bar{\mathbf{D}}_p(s) = -(s+2)^3(s+3)^2, \qquad (2.6.24) $$

and, with the blocks $\mathbf{V}_3$, $\mathbf{V}_4$ read off from the reduced array,

$$ \mathbf{W}(s) = -\mathbf{V}_4^{-1}\mathbf{V}_3 = \begin{bmatrix} \dfrac{1}{s+3} & \dfrac{1}{s+2} & \dfrac{1}{(s+2)(s+3)} \\[2mm] \dfrac{1}{s+2} & \dfrac{1}{s+3} & \dfrac{1}{(s+2)^2} \end{bmatrix} . \qquad (2.6.25) $$

Comparing (2.6.22) to (2.6.24) and (2.6.21) to (2.6.25), we find that the appropriate results are the same as well, that deg det D l(s) = deg det D p(s) = 5 and is greater than the degree of the polynomial m(s), which is 3.

146

Polynomial and Rational Matrices

2.6.2 Relatively Prime Factorization of Rational Matrices Consider a strictly proper rational matrix G(s)= lim G ( s ) 0 .

mul

[s], i.e., satisfying the condition

sof

Problem 2.6.1. A strictly proper rational matrix G(s) mup[s] is given. Compute polynomial relatively left prime matrices A1(s) mum[s] and B1(s) mup[s] such that G (s)

A11 ( s )B1 ( s ) .

(2.6.26)

Problem 2.6.2. A strictly proper rational matrix G(s) mup[s] is given. Compute polynomial relatively right prime matrices A2(s) pup[s] and B2(s) mup[s] such that G (s)

B 2 ( s) A 21 ( s) .

(2.6.27)

Expressing a rational matrix G(s) in the form (2.6.26) or (2.6.27) is called relatively prime factorization. Below we first give the procedure of such a factorization, at first, to the form (2.6.26). Procedure 2.6.1. Step 1: Find the least common denominators mi(s) (i = 1,2,…,p) for the columns and write the matrix G(s) in the form G (s)

B( s ) A 1 ( s ) ,

(2.6.28)

where

A( s)

0 ª m1 ( s ) « 0 m 2 (s) « « # # « 0 «¬ 0

!

0 º 0 »» . % # » » ! m p ( s ) »¼ !

(2.6.29)

Step 2: Applying appropriate elementary operations on the rows, carry out the reduction ª A( s) I p « B( s ) 0 ¬

0 º L ª P( s) U1 ( s ) U 2 ( s ) º  o« I m »¼ U 3 ( s) U 4 ( s) »¼ ¬ 0

and compute the matrices U4(s), U3(s). Step 3: The desired factorisation (2.6.26) is readily obtained from the relationship

Rational Functions and Matrices

G (s)

U 41 ( s )U 3 ( s ) .

147

(2.6.30)

This procedure can be derived in the following way. From the equality ª U1 ( s ) U 2 ( s ) º ª A( s ) º « U ( s ) U ( s ) » «B( s ) » ¼ 4 ¬ 3 ¼¬

ª P( s) º « 0 », ¬ ¼

we have U 3 ( s ) A ( s )  U 4 ( s )B ( s )

0,

i.e., G ( s)

B( s) A 1 ( s)

 U 41 ( s )U 3 ( s ) ,

with the assumption det U4(s) z 0. We will show that indeed det U4(s) z 0. Let U 1 ( s )

ª V1 ( s ) « V ( s) ¬ 3

V2 ( s ) º . V4 ( s ) »¼

ª A( s) º « B( s ) » ¬ ¼

ª V1 ( s ) « ¬ V3 ( s )

V2 ( s ) º ª P( s ) º V4 ( s ) ¼» «¬ 0 »¼

(2.6.31)

From

it follows that A(s) = V1(s)P(s). Nonsingularity of A(s) implies det V1(s) z 0. On the other hand, from the relationship V2 ( s ) º ªI p  V11 ( s ) V2 ( s ) º ª V1 ( s ) » « V ( s) V ( s) » « Im ¼ 0 4 ¬ 3 ¼¬ 0 ª V1 ( s ) º « V ( s ) V ( s )  V ( s )V 1 ( s )V ( s ) » 4 3 1 2 ¬ 3 ¼

(2.6.32)

it follows that det ª¬ V4 ( s )  V3 ( s )V11 ( s )V3 ( s ) º¼ z 0 . Premultiplying (2.6.32) by U(s) and taking into account (2.6.31), we obtain

(2.6.33)

148

Polynomial and Rational Matrices

ªI p « ¬«0

 V11 ( s ) Im

V2 ( s ) º » ¼»

0 ª U1 ( s ) U 2 ( s ) º ª V1 ( s ) º « U ( s ) U ( s ) » « V ( s ) V ( s )  V ( s ) V 1 ( s ) V ( s ) » 4 3 1 3 ¼ 4 ¬ 3 ¼¬ 3

and Im

U 4 ( s ) ª¬ V4 ( s )  V3 ( s )V11 ( s )V3 ( s ) º¼ .

Hence after considering (2.6.32), we obtain det U4(s) z 0. The matrices U3(s) and U4(s) are relatively left prime, since [U3(s), U4(s)] is a part of the unimodular matrix U(s). The procedure of factorization (2.6.27) is as follows. Procedure 2.6.2. Step 1: Find the least common denominators mic (s) (i = 1,2,…,m) for rows and write the matrix G(s) in the form G (s)

Ac1 ( s )Bc( s ) ,

(2.6.34)

where

Ac( s )

0 ª m1c( s ) « 0 c2 ( s ) m « « # # « 0 ¬ 0

!

0 º 0 »» . % # » » ! m2c ( s ) ¼ !

(2.6.35)

Step 2: Applying elementary operations on columns carry out the reduction ªL( s ) ª Ac( s ) Bc( s ) º « I » R o «« U1 ( s ) 0 »  « m «¬ U 3 ( s ) «¬ 0 I l »¼

0º U 2 ( s ) »» U 4 ( s ) »¼

and compute the matrices U4(s), U2(s). Step 3: The desired factorization (2.6.27) is derived from the relationship G ( s)

U 2 ( s )U 41 ( s ) .

(2.6.36)

Example 2.6.3. Using Procedures 2.6.1 and 2.6.2, compute the factorizations (2.6.26) and (2.6.27) for the matrix

Rational Functions and Matrices

G ( s)

ª 1 « s  1 « « 1 ¬« s

2 º s 1» ». 1 » s  2 ¼»

1 s 2 s2

149

(2.6.37)

Applying Procedure 2.6.1, we compute the following. Step 1: We compute the least common denominators of all entries of the respective columns of this matrix and write them in the form

G (s)

0 ª s ( s  1) ª  s s  2 2( s  2) º « 0 s ( s  2) « s  1 ¼» « ¬ s  1 2s «¬ 0 0

1

0 º » 0 » . ( s  1)( s  2) »¼

Step 2: Carrying out appropriate elementary operations on the rows of the matrix

G (s)

ª s ( s  1) « 0 « « 0 « « s «¬ s  1

1 0 0 0 0º 0 1 0 0 0 »» ( s  1)( s  2) 0 0 1 0 0 » , » 2( s  2) 0 0 0 1 0» 0 0 0 0 1 »¼ s 1

0

0

s ( s  2)

0

0 s2 2s

we obtain ª1 « «0 «0 « «0 «¬0

and

0 4

2( s  2) ( s  1)

0 ( s  1)( s  2) 0 0 0 0

 32 3 2

0 s

0 3 0 1 s

3s  2 (3s  2)

3 2

 1 2s

9 2

9s

s 2( s  1) 0  s ( s  1) 4 s ( s  1)

1  12 s º » 1 2 s » 0 » » 0 » s ( s  2) »¼

150

Polynomial and Rational Matrices

P(s)

U 2 (s)

U 4 (s)

( s  2) º ª1 0 «0 4 ( s  1) »» , U1 ( s ) « ¬« 0 0 ( s  1)( s  2) ¼» 1  12 s º ª s « 2( s  1) » 1 2 s » , U3 (s) « «¬ 0 0 »¼ 0 ª  s ( s  1) º « 4s ( s  1) ». s s  ( 2) ¬ ¼

ª  32 « 3 « 2 «¬ 0

º 3  »» , 0 1 »¼ 0

3 2

9 2

1 s 2s º ª s «3s  2 (3s  2) 9 s » , ¬ ¼

Step 3: Thus

G (s)

1 4

U ( s )U3 ( s )

ª  s ( s  1) « 4s ( s  1) ¬

1

 ( s  1)  2 s º 0 º ª s . s ( s  2) »¼ «¬ 2  3s 3s  2  9s »¼

Now applying Procedure 2.6.2: Step 1: We compute the least common denominators of all entries of the respective rows of this matrix and write them in the form 1

G (s)

0 º ª s s  1 2s º ª s ( s  1) . « 0  2) »¼ «¬ s  2 2 s ( s s s »¼ ¬

Step 2: Carrying out appropriate elementary operations on the columns of the matrix

G ( s)

we obtain

 s s  1 2s º 0 ª s ( s  1) « 0 s ( s  2) s  2 2 s s »» « « 1 0 0 0 0» « » 1 0 0 0 », « 0 « 0 0 1 0 0» « » 0 0 1 0» « 0 « 0 0 0 0 1 »¼ ¬

Rational Functions and Matrices

0 0 ª 1 « 0 1 0 « « 9 9 1 « 4 8 « 0 4 « 0 « 1 « 0 4s 2 « « 3 3 s s « 1 s 4 8 « « 3 1 3 « 2  s   s 2s 2 2 4 ¬

151

º » 0 0 » » 4 s 15 » » 4( s  1) 0 » »  4( s  1) 4 s » » » 0 s » » » 0 2(4  3s ) » ¼ 0

0

and

L( s )

U3 (s)

ª9 9 º « 4 8 » , U ( s) 2 « » ¬« 0 0 »¼ 1 º » 2 » 3 s » , U 4 ( s) » 8 » 1 3 »   s 2 4 ¼»

ª1 0 º « 0 1 » , U1 ( s ) ¬ ¼ ª « 0 « « 1 3 s « 4 « « 2  3 s 2 ¬«

ª1 « 4 ¬

4 s 4( s  1)

15º , 0 ¼»

4s º ª 4 s 4( s  1) «s  s »» . 0 « «¬ 2 s 0 2(4  3s) »¼

Step 3: In view of this, G (s)

U 2 ( s )U 41 ( s ) 1

4 s º ª 4s 4( s  1) 4s 15º « ª 1  s »» . 0 « 4 4s ( s  1) 0 » «  s ¬ ¼ « 2s 0 2(4  3s ) »¼ ¬

Problem 2.6.3. A rational matrix G(s) mup[s] is given in the form of the left factorisation (2.6.26). Compute the right relatively prime factorisation (2.6.27) of this matrix. Solution. Applying elementary operations on columns we carry out the transformation

152

Polynomial and Rational Matrices

ª A1 ( s ) « « Im « 0 ¬

0 º B1 ( s ) º ªL1 ( s ) » R « o «V1 ( s ) V2 ( s ) »» . 0 »  «¬ V3 ( s ) V4 ( s ) »¼ I p »¼

(2.6.38)

The desired factorisation matrices are given by B 2 ( s)

V2 ( s ), A 2 ( s )

 V4 ( s ) or B 2 ( s )

 V2 ( s ), A 2 ( s )

V4 ( s ) . (2.6.39)

The relations in (2.6.39) can be derived in the following way. From (2.6.38) we have ª V2 ( s ) º » ¬ V4 ( s ) ¼

> A1 ( s) , B1 (s)@ «

0,

(2.6.40)

i.e., A1(s)V2(s) = -B1(s)V4(s). Nonsingularity of A1(s) implies nonsingularity of V4(s). Hence A11 ( s )B1 ( s )

 V2 ( s )V41 ( s ) .

The dual to Problem 2.6.3 can be formulated as follows. Problem 2.6.3c. A rational matrix G(s) mup[s] is given in the form of the right factorisation (2.6.27). Compute the left relatively prime factorisation (2.6.26) of this matrix. To solve this problem we use Step 2 from Procedure 2.6.1. The desired factorisation matrices are given by A1 ( s )

U 4 ( s ), B1 ( s )

or A1 ( s )

U 4 ( s ), B1 ( s )

U 3 ( s ),

(2.6.41)

 U 3 ( s ).

We proceed further in the same way as in Problem 2.6.3. 2.6.3 Conversion of a Rational Matrix into the McMillan Canonical Form Let the following rational matrix be given

W( s)

ª W11 ( s ) ! W1n ( s ) º « # % # »»  « «¬ Wm1 ( s ) ! Wmn ( s ) »¼

mun

( s)

(2.6.42)

Rational Functions and Matrices

153

whose rank is r d min(n, m). With the monic least common denominator of all entries of Wij(s) found, we can express the above matrix in the form W( s)

L( s ) , m( s )

(2.6.43)

where L(s) mun[s]. Applying elementary operations, we can transform L(s) into the Smith canonical form

L s ( s)

U ( s )L ( s ) V ( s )

ªi1 ( s ) « 0 « « # « « 0 « 0 « « # « 0 ¬

0

!

i2 ( s ) ! # % 0

!

0

!

#

%

0

!

0 ! 0º 0 0 ! 0 »» # # % #» » ir ( s) 0 ! 0 » , 0 0 ! 0» » # # % #» 0 0 ! 0 »¼ 0

(2.6.44)

where U(s) mum[s], V(s) nun[s] are unimodular matrices, i1(s),i2(s),…,ir(s) are the invariant polynomials such that ii+1(s) is divisible (without remainder) by ii(s). From (2.6.43) and (2.6.44), after reduction of all common factors occurring simultaneously in m(s) and ik(s), k = 1,…,r, we obtain WM ( s )

U(s) W( s)V ( s) ª l1 ( s) «\ ( s ) « 1 « « 0 « « # « « 0 « « « 0 « # « ¬« 0

L s ( s) m( s )

0

!

0

l2 ( s )

!

0

#

%

#

0

!

0

!

0

# 0

% !

# 0

\ 2 ( s)

where lk ( s )

\ k (s)

ik ( s ) m( s )

(k

lr ( s )

\ r ( s)

1, 2, ..., r )

º 0 ! 0» » » 0 ! 0» », # % #» » 0 ! 0» » » 0 ! 0» # % #» » 0 ! 0 ¼»

(2.6.45)

154

Polynomial and Rational Matrices

and li+1(s) is divisible (without remainder) by li(s), and \k-1(s) is divisible (without remainder) by \k(s). Definition 2.6.2. A matrix WM(s) given by (2.6.45) is called the McMillan canonical form of the matrix W(s). Using the contradiction method, we will show that \1(s) = m(s) and l1(s) = i1(s). Assume that \1(s) z m(s). In this case, every entry of the matrix L(s) is divisible by the appropriate factor of the polynomial m(s). Thus the polynomial m(s) cannot be the least common denominator of all entries of W(s), which contradicts the assumption. Hence \1(s) = m(s) and this implies immediately that l1(s) = i1(s). From the above considerations, the following procedure for computation of the McMillan canonical form (2.6.45) of the matrix W(s) can be derived. Procedure 2.6.3. Step 1: Compute the monic least common denominator m(s) of all entries of the matrix W(s). Step 2: Writing the matrix W(s) in the form (2.6.43), compute the polynomial matrix L(s). Step 3: Applying elementary operations, convert L(s) into the Smith canonical form LS(s). Step 4: Reduce common factors occurring in the polynomials m(s) and ik(s), and then compute the McMillan canonical form (2.6.45). Example 2.6.4. Compute the McMillan canonical form of the matrix (2.6.5). We proceed according to Procedure 2.6.3. Step 1: The least common denominator of all entries of the given matrix is m( s )

( s  2) 2 ( s  3) .

Steps 2 and 3: With the least common denominator found, the matrix W(s) takes the form

W( s)

L( s ) m( s )

ª ( s  2) 2 1 « 2 ( s  2) ( s  3) ¬« ( s  2)( s  3)

Applying elementary operations, we convert the matrix

L( s )

ª ( s  2) 2 « ¬« ( s  2)( s  3)

into the Smith canonical form

( s  2)( s  3) ( s  2)

2

s  2º » s  3 ¼»

( s  2)( s  3) ( s  2)

2

s  2º ». s  3 ¼»

Rational Functions and Matrices

ª1 «0 ¬

L s ( s)

0 ( s  2)( s  2, 5)

155

0º . 0 »¼

Step 4: The desired McMillan canonical form of the matrix (2.6.5) is

WM ( s )

1 ª « ( s  2) 2 ( s  3) « « 0 « ¬

L s ( s) m( s )

º 0 » ». s  2, 5 » 0» ( s  2)( s  3) ¼ 0

2.7 Synthesis of Regulators 2.7.1 System Matrices and the General Problem of Synthesis of Regulators Consider a discrete feedback system (Fig. 2.7.1) consisting of a plant with the matrix transfer function T0 z

Dl1N l

Dl

Dl z 

Dp

Dp z 

N p Dp1 ,

> z@, mum > z@,

pu p

Nl Np

Nl z 

pum

Np z 

>z@, >z@

(2.7.1)

pum

and a regulator with the matrix transfer function Tr z

Xl1Yl

Xl

Xl z 

Xp

Xp z 

Yp X p1 ,

> z@, pu p > z@,

mum

Yl Yp

Yl z  Yp z 

mu p

>z@, > z @.

(2.7.2)

mu p

We assume that the matrices T0 z 

pum

z ,

Tr z 

mu p

z

are proper lim T0 z z of

D0 

pum

, lim Tr z z of

or strictly proper D0 = 0 and Dr = 0.

Dr 

mu p

,

(2.7.3)

156

Polynomial and Rational Matrices

Fig. 2.1. Discrete-time system with feedback

From the scheme in Fig. 2.1 we can write the equations yi

T0 z ui

T0 z vi  zi , vi

Tr z yi , i 



^0, 1, ...` ,(2.7.4)

where yi, ui, vi and zi are vector sequences of plant output, control, regulator output, and disturbances. Substituting (2.7.1) and (2.7.2) into (2.7.4), we obtain Dl yi  N l vi

Xl vi  Yl yi

N l zi ,

0, i 



,

(2.7.5)

which we write in the form ª Dl «Y ¬ l

 N l º ª yi º Xl »¼ «¬ vi »¼

ª Nl º « 0 » zi . ¬ ¼

(2.7.6)

Definition 7.1.1. The polynomial matrix Sl

ª Dl «Y ¬ l

N l º  Xl »¼

p  m u p  m

> z@

(2.7.7a)

will be called the left system matrix of the closed-loop system, and the polynomial matrix Sp

ª Xp « Y ¬ p

Np º  D p »¼

p  m u p  m

> z@

(2.7.7b)

will be called the right system matrix of the closed-loop system. If the matrices Dl and Nl are relatively left prime (T0(z)=Dl-1Nl is irreducible), then according to the considerations in Sect. 1.15.3 there exist a unimodular matrix of elementary operations on columns U

ª U11 «U ¬ 21

U12 º , U11 U 22 »¼

U11 z 

pu p

> z @ , U 22

U 22 z 

mum

> z@,

(2.7.8)

Rational Functions and Matrices

157

such that

ªU N l @ « 11 ¬ U 21

> Dl

U12 º U 22 »¼

ª¬I p

0 º¼ .

(2.7.9)

Postmultiplying (2.7.9) by the unimodular matrix ª V11 V12 º «V », ¬ 21 V22 ¼ V11 z  pu p > z @ , V22

U 1 V11

(2.7.10) V22 z 

mum

> z@,

we obtain

> Dl

N l @

»I p

ªV 0 ¼º « 11 ¬ V21

Dl

V11 ,

Nl

V12 .

V12 º , V22 »¼

(2.7.11)

where (2.7.12)

From (2.7.9) we have Dl U12

and

N l U 22

Dl1N l

1 U12 U 22 ,

(2.7.13)

since det U22 z 0. Comparison of (2.7.1) to (2.7.13) implies that

Np

U12 , D p

U 22 .

(2.7.14)

According to the considerations in Sect. 1.15.3, provided that Dl and Nl are relatively prime, there exist polynomial matrices Xp and Yp such that Dl X p  N l Yp

Ip .

(2.7.15)

From (2.7.9) it follows that Dl U11  N l U 21

Ip .

(2.7.16)

Comparison of (2.7.15) to (2.7.16) yields Xp

U11 , Yp

U 21 .

(2.7.17)

158

Polynomial and Rational Matrices

The matrices (2.7.14) are relatively right prime. Thus there exist polynomial matrices Xl and Yl such that Yl N p  Xl D p

Im .

(2.7.18)

From ª V11 «V ¬ 21

V12 º ª U11 V22 »¼ «¬ U 21

U12 º U 22 »¼

ªI p « ¬0

0º I m ¼»

(2.7.19)

it follows that V21U12  V22 U 22

Im

and if we take into account (2.7.14), we obtain V21N p  V22 D p

Im .

(2.7.20)

Comparison of (2.7.18) to (2.7.20) yields Xl

V22 , Yl

V21 .

(2.7.21)

From (2.7.19) and (2.7.21), as well as (2.7.14) and (2.7.17), it follows that Sl

ª Dl «Y ¬ l

N l º Xl »¼

ª V11 «V ¬ 21

V12 º , Sp V22 »¼

ª Xp «Y ¬ p

Np º D p »¼

ª U11 «U ¬ 21

U12 º .(2.7.22) U 22 »¼

From (2.7.19) and (2.7.22), we have Sl S p

S p Sl

I pm .

(2.7.23)

This way the following theorem has been proved. Theorem 7.1.1. If the transfer matrix T0(z) of the plant is irreducible, then there exists the transfer matrix TJ(z) of the regulator such that the system matrices (2.7.7) of the closed-loop system satisfy (2.7.23) and are unimodular. The general problem of synthesis of regulator for a given plant can be formulated as follows. With the plant transfer matrix T0(z) given, one has to compute the transfer matrix Tr(z) of the regulator in such a way that the system matrix (2.7.7) of the closed-loop system has the desired dynamical properties; for instance, that it is a unimodular matrix or its determinant is equal to a given polynomial.

Rational Functions and Matrices

159

In order to compute the system matrices Sp and Sl, one has to proceed in the following way. 1. Applying elementary operations on columns and carrying out the reduction Nl º ª Ip » R o «« U11 0 »  «¬ U 21 I m »¼

ª Dl « «I p «¬ 0

2.

0 º U12 »» U 22 »¼

(2.7.24)

compute the unimodular matrix Compute the inverse of Sp, which is equal to Sl (since Sl = Sp-1).

2.7.2 Set of Regulators Guaranteeing Given Characteristic Polynomials of a Closed-loop System For a feedback system (Fig. 2.1), the transfer matrix (2.1.1) of the plant is given; the transfer matrix (2.1.2) of the regulator is to be computed in such a way that det S l

ªD det « l ¬ Yl

N l º Xl »¼

cw z ,

(2.7.25)

where w(z) is a given characteristic polynomial of the closed-loop system, and c is a constant independent of z. Theorem 7.2.1. Let Dl and Nl be relatively left prime matrices and Xl0 and Yl0 be matrices of regulator chosen in such a way that the system matrix Sl0

N l º Xl0 »¼

ª Dl «Y0 ¬ l

(2.7.26)

is unimodular. The set of transfer matrices satisfying the condition (2.7.25) is given by the relationships Xl

PXl0  QN l , Yl

PYl0  QDl ,

where the polynomial matrix P = P(s) det P

and Q = Q(s)

mum

[z] satisfies the condition

w z mup

(2.7.27)

[z] is an arbitrary polynomial matrix.

(2.7.28)

160

Polynomial and Rational Matrices

Proof. From (2.1.23) it follows that S 0p

ª¬Sl0 º¼

1

ª X0p « 0 ¬« Yp

Np º ». D p ¼»

(2.7.29)

Using (2.1.15) and (2.1.13), we obtain N l º ª X0p « Xl »¼ «¬  Yp0

ª Dl «Y ¬ l

Sl S 0p

Np º » D p »¼

ªI p «Q ¬

0º , P »¼

(2.7.30)

since Dl X0p  N l Yp0

Nl D p ,

(2.7.31)

Yl N p  Xl D p .

(2.7.32)

I p , Dl N p

and Q

Yl X0p  Xl Yp0 , P

It is easy to show that det P = det [YlNp + XlDp] is the characteristic polynomial of the closed-loop system and the condition (2.7.28) is satisfied. From (2.7.30), we have det S l S 0p

det Sl det S 0p

ªI det « p ¬Q

0º P »¼

det P

w z ,

i.e., det Sl

cw z where c

1 . det S p0

(2.7.33) „

Lemma 7.2.1. The matrix pair (2.7.32) is right equivalent to the pair (Yl, Xl) and is relatively left prime if and only if the pair (Yl, Xl) is relatively left prime. Proof. From (2.7.32), we have

>Q

P@

> Yl

ª X0 Xl @ « p0 «¬ Yp

Np º » D p »¼

> Yl

Xl @ S 0p .

(2.7.34)

Rational Functions and Matrices

161

Thus the pair (2.7.32) is right equivalent to the pair (Yl, Xl), since the matrix Sp0 is unimodular. According to Definition 1.15.7, the pair (2.7.32) is relatively left prime if and only if the pair (Yl, Xl) is relatively left prime. „ So far we have assumed that the matrices Dl and Nl are relatively left prime, which means that the transfer matrix T0(z) of the system is irreducible. Now assume that L = L(z) is the greatest common left divisor (GCLD) of the matrices Dl and Nl, i.e., Dl

LDl ,

Nl

LN l ,

L

L z 

pu p

> z@ .

(2.7.35)

Theorem 7.2.2. Let L be the GCLD of the matrices Dl and Nl; let X l0, Y l0 be the matrices of regulator chosen in such a way that the system matrix Sl0

ª Dl « 0 ¬ Yl

Nl º » Xl0 ¼

(2.7.36)

is unimodular. The set of transfer functions of the regulator satisfying the condition (2.7.25) exists if and only if w z det L

w z

(2.7.37)

and is determined by the relationships Xl

PXl0  QN l , Yl

PYl0  QDl ,

(2.7.38)

where w z

det P

and Q = Q (z)

mup

(2.7.39)

[z] is an arbitrary matrix.

Proof. Taking into account that S 0p

we can write

0 ¬ªSl ¼º

1

ª X0p « 0 ¬« Yp

Np º », D p ¼»

(2.7.40)

162

Polynomial and Rational Matrices

ª Dl «Y ¬ l

Sl S 0p

N l º ª X0p « Xl »¼ «¬ Yp0

Np º » D p »¼

ªL 0 º «Q P » , ¬ ¼

(2.7.41)

since Dl X0p  N l Yp0 Q

L,

Yl X0p  Xl Yp0 ,

Dl N p  N l D p P

L Dl N p  N l D p

Yl N p  Xl D p .

0,

(2.7.42)

From (2.7.41), we have det S l det S 0p

det L det P ,

(2.7.43)

and taking into account (2.7.37) and (2.7.39), we obtain det S l

det L det P det S 0p

cw z where c

1 . det S 0p

(2.7.44)

Note that equality in (2.7.44) holds if and only if the condition (2.7.37) is satisfied. „

3 Normal Matrices and Systems

3.1 Normal Matrices 3.1.1 Definition of the Normal Matrix Consider a rational matrix in the standard form W s

L s , m s

(3.1.1)

where L(s) mun[s] is a polynomial matrix and m(s) (a monic polynomial) is the least common denominator of the entries of the matrix W(s). We assume that the number of rows m and columns n of the matrix (3.1.1) is greater than or equal to two (that is, m, n t 2). Definition 3.1.1. A rational matrix of the form (3.1.1) is called normal if and only if every nonzero second-order minor of the polynomial matrix L(s) is divisible (without remainder) by the polynomial m(s). For example, the matrix

W s

ª 1 « s 1 « « 0 «¬

º 0 » » 1 » s  2 »¼

0 º ªs  2 1 « 0 s  1»¼ 1 2 s s   ¬

is normal, since the determinant of the matrix

L s m s

(3.1.2)

164

Polynomial and Rational Matrices

L s

0 º ªs  2 , « 0 s  1»¼ ¬

(3.1.3)

which is a second-order minor of this matrix, is divisible (without remainder) by the polynomial m(s) = (s + 1)(s + 2). On the other hand, the matrix ª s2 2 « « s  1 « « 0 ¬

W s

º 0 » » 1 » » s  1¼

0 º ªs  2 « 0 s  1»¼ s  1 ¬ 1

2

L s m s

(3.1.4)

is not normal, since the determinant of L(s) is not divisible (without remainder) by the polynomial m(s) = (s + 1)2. 3.1.2 Normality of the Matrix [Is – A]-1 for a Cyclic Matrix The inverse matrix [Is – A]-1 for any matrix A be written in the standard form

> Is  A @

1

nun

is a rational matrix, which can

LA s , m s

(3.1.5)

where LA(s) nun[s] and m(s) is a least common denominator. Applying elementary operations on rows and columns, we can reduce [Is – A] to its Smith canonical form

> Is  A @ S

U s > Is  A @ V s

diag ª¬i1 s , i2 s , ..., ir s , 0, ..., 0 º¼ 

nun

>s@ ,

(3.1.6)

where U(s) and V(s) are unimodular matrices of elementary operations on rows and columns; i1(s), i2(s), … ,ir(s) are the monic invariant polynomials satisfying the divisibility condition ik+1(s) | ik(s) (the polynomial ik+1(s) is divisible without remainder by the polynomial ik(s), k = 1,…,r-1, and r = rank LA(s)). The invariant polynomials are given by the formula ik s

Dk s , for k Dk 1 s

1, ..., r ,

D s 1 , 0

(3.1.7)

where Dk(s) is a greatest common divisor of all k-th order minors of [Is – A]. The minimal polynomial Is  A @

1

0 º ªs  1 1 « 0  s 1 0 »» . 2 « s  1 « 0 s  1»¼ 0 ¬ 1

In this case, the minor M 21

1

0

0 s 1

s 1

is not divisible by the polynomial (s – 1)2. Thus for a = 1, the matrix (3.1.16) is not normal. 3.1.3 Rational Normal Matrices Consider a rational matrix of dimensions mun in the standard form (3.1.1). Let WM s

U s W s V s

ª l1 s « « s  2 s  1@  s  2 s  1 « ». ¼ ¬1 1¼ ¬

We will show that one does not necessarily have to compute the Smith canonical form of L(s) in order to obtain the decomposition (3.4.11). Applying elementary operations on rows and columns, we can write the polynomial matrix L(s) in the form: U s L s V s

ª 1 w s º i s « », ¬k s L s ¼

(3.4.21)

where U(s) and V(s) are unimodular matrices of elementary operations w(s) 1u(n-1)[s], k(s) m-1[s], L (s) (m-1)u(n-1)[s] and i(s) [s]. This follows immediately from the possibility of reduction of L(s) to its Smith canonical form LS(s). Let P s

ª 1 º U 1 s i s « » , Q s ¬k s ¼

ª¬1 w s º¼ V 1 s .

(3.4.22)

Normal Matrices and Systems

189

Since the second-order minors of L(s) are divisible by m(s), the entries of the matrix i(s)[ L (s) – k(s)w(s)] are divisible by m(s), that is, i s ª¬L s  k s w s º¼

where Lˆ (s)

(m-1)u(n-1)

m s Lˆ s ,

(3.4.23)

[s].

Defining G s

01,n1 º 1 ª 0 U 1 s « » V s , ˆ ¬«0m1 L s ¼»

(3.4.24)

we obtain from (3.4.21)–( 3.4.24) L s

ª 1 w s º 1 U 1 s i s « » V s ¬ k s L s ¼ ­° ª 0 ª 1 º ª1 w s ¼º  « U  1 s ®i s « » ¬ «¬ 0 m 1 ¬ k s ¼ °¯ P s Q s  m s G s ,

01,n 1 º ½° 1 »¾ V s m s Lˆ s »¼ °¿

which is the desired decomposition (3.4.11). Example 3.4.3. Compute the decomposition (3.4.11) of the polynomial matrix (3.4.20). Carrying out the elementary operations L[1 + 2], P[1 + 2u(-1)] on the matrix (3.4.20), we obtain U s L s V s

s  1º ª 1 «  s  1 s  1» , ¬ ¼

where U s

ª1 1º «0 1» , V s ¬ ¼

ª 1 0º « 1 1 » , i s 1 . ¬ ¼

In this case, using (3.4.22)(3.4.24) we obtain P s

ª 1 º U 1 s i s « » ¬k s ¼

ª1 1º ª 1 º «0 1 » «  s  1» ¬ ¼¬ ¼

ª s2 º « s  1 » , ¼ ¬

190

Polynomial and Rational Matrices

>1

ª1 0 º s  1@ « » ¬1 1 ¼

Q s

ª¬1 w s º¼ V 1 s

G s

0 º 1 ª0 U 1 s « » V s ˆ ¬« 0 L s ¼»

>s  2

ª1 1º ª 0 «0 1 » «0 ¬ ¼¬

s  1@ ,

º ª1 0 º » s  1 s  2 ¼ «¬1 1 »¼ 0

ª 1 1º ». ¬1 1¼

s  1 s  2 «

Thus the desired decomposition (3.4.20) has the form 0 º ªs  2 « 0 s  1¼» ¬

ª s2 º ª 1 1º «  s  1 » > s  2 s  1@  s  1 s  2 « ». ¼ ¬1 1¼ ¬

The result is consistent with that obtained in Example 3.4.2. Corollary 3.4.1. Let s1,s2,…,sp be the poles (not necessarily distinct) of the rational matrix (3.4.10). Then rank L sk 1, for k

1, 2, ..., p .

(3.4.25)

The condition (3.4.10) follows from (3.4.11), since rank P(sk) = rank Q(sk) = 1. Corollaries 3.3.4 and 3.4.1 give the following criterion of normality of the matrix (3.4.10). Criterion 3.4.1. If the poles s1,s2,…,sp of a matrix are distinct, then this matrix is normal if and only if the condition (3.4.25) is satisfied. If the poles are multiple, then the matrix (3.4.10) is not normal when rank L sk ! 1 for certain k  ^1, 2, ..., p` .

(3.4.26)

Example 3.4.3. The rational matrix

W s

m s

º s  1 s  2 »» , » 1 1 » s 1 s2 ¼ 1 º ªs  2 s  1 s  1 s  2 , L s «    1»¼ s s s 1 2 ¬ L s m s

ª 1 « s 1 « « 1 « ¬s  2

1 s2

1

(3.4.27)

Normal Matrices and Systems

191

has only the distinct poles s1 = 1, s2 = 2. This matrix is not normal, since

rank L s1

ª1 0 1 º rank « » ¬0 1 0¼

2 ! 1.

We will obtain the same result by checking divisibility of the second-order minors of L(s) by m(s). The minor s 1

1

s  1

s  2 s 1

2

 s  2

s2  s  1

is not divisible by m(s) = (s + 1)(s + 2). Note that in the case of poles of multiplicities greater than 1, the condition (3.4.25) is not a sufficient condition for normality of the matrix (3.4.10). For example, the matrix with the double pole at the point s1 = 1

W s

L s m s

ª s2 2 « « s  1 « « 0 ¬

º 0 » » 1 » » s  1¼

0 º ªs  2 « 0 s  1»¼ s  1 ¬ 1

2

is not normal although it satisfies the condition (3.4.10), since rank L s1

ª1 0 º rank « » 1. ¬0 0¼

3.5 Normalisation of Matrices Using Feedback 3.5.1 State-feedback

Consider the system x y

Ax  Bu , Cx ,

with the state-feedback in the form

(3.5.1a) (3.5.1b)

192

Polynomial and Rational Matrices

u

v  Kx ,

(3.5.2)

where v m is a new input, and K into (3.5.1a) yields x

mun

is a gain matrix. Substitution of (3.5.2)

A  BK x  Bv .

(3.5.3)

The transfer matrix of the closed-loop system has the form Tz (s) C > I n s  (A  BK ) @ B . 1

(3.5.4)

The problem of normalisation of a transfer matrix using state-feedback can be formulated in the following way. Problem 3.5.1. Given a system in the form (3.5.1), with the matrix A non-cyclic and the pair (A, C) unobservable, compute the matrix K in such a way that the closed-loop transfer matrix of the system (3.5.4) is normal.

The solution to this problem is based on the following lemma. Lemma 3.5.1. If the matrix A is in its Frobenius canonical form

A

ª0 # « k «¬

I n1 º » »¼

nun

, k

> k1

k 2 ... kn @ ,

(3.5.5)

then for any nonzero matrix C the observability of the pair (A, C) can be always assured by an appropriate choice of k. Proof. The pair (A, C) is observable if and only if ªI s  A º rank « n » ¬ C ¼

n for all s 

.

Applying elementary operations on rows and columns, we can transform the matrix ª s 1 0 «0 s 1 « « # # # « 0 0 «0 « k1 k2 k3 « « c11 c12 c13 « # # # « ¬« c p1 c p 2 c p 3

! ! %

0 0 #

! s ! kn1 ! c1,n1 % # ! c p ,n1

º » » » » 1 » s  kn » » c1n » # » » c pn ¼» 0 0 #

Normal Matrices and Systems

193

to the following form 1 0 ª 0 « 0 0 1 « « # # # « 0 0 0 « « p0 s 0 0 « « p1 s 0 0 « # # # « «¬ p p s 0 0

! ! % ! ! ! % !

0º 0 »» #» » 1» , 0» » 0» #» » 0 »¼

(3.5.6)

where p0 s

s n  kn s n1  ...  k2 s  k1 ,

pi s

cin s n1  ...  ci 2 s  ci1 , i 1, 2, ..., p .

Carrying out appropriate elementary operations on the rows of the matrix (3.5.6) and appropriately choosing k1,…,kn, we obtain ª 0 1 0 « 0 0 1 « «# # # « 0 0 0 « «a 0 0 « «0 0 0 «# # # « «¬ 0 0 0

! ! % ! ! ! % !

0º 0 »» #» » 1» and a z 0 . 0» » 0» #» » 0 »¼

(3.5.7)

The matrix (3.5.7) for a z 0 is a full column rank matrix and thus the pair (A, C) is observable. „ Theorem 3.5.2. Let the matrix A of the system (3.5.1) by noncyclic and the pair (A, C) unobservable. Then there exists a matrix K such that the transfer matrix (3.5.4) is normal if and only if the pair (A, B) is controllable. Proof. Necessity. As is well-known, the pair (A+ BK, B) is controllable if and only if the pair (A, B) is controllable. If the pair (A, B) is not controllable, then the transfer matrix (3.5.4) is not normal. Thus if the pair (A, B) is not controllable, then there exists no K such that the transfer matrix (3.5.4) is normal.

194

Polynomial and Rational Matrices

Sufficiency. If the pair (A, B) is controllable, then there exists a nonsingular matrix T nun(s) such that

A

TAT

1

di ud j

A ij 

ª A11 ! A1m º « # % # »» , B « ¬« A m1 ! A mm ¼» , Bi 

di um

TB

ªB1 º « # », « » «¬B m »¼

(3.5.8a)

,

where

A ij

­ ª0 I di 1 º °« » for i °¬  ai ¼ ® °ª 0 º ° « a » for i z j ¯ ¬« ij ¼»

aij

[a0ij a1ij ... adijj 1 ], bi

j

ª0º «b » , ¬« i ¼»

, Bi

(3.5.8b)

where [0" 0 1 bi,i 1 ... bim ] and d1 , ..., d m

are controllability indices satisfying the condition m

¦d

i

n.

i 1

Let 1



ªb1 º « » «b2 » «#» « » ¬bm ¼

K

ª K1 º « k » , K1  ¬ ¼

ª1 b12 «0 1 « «# # « ¬0 0

" b1m º " b2 m »» % # » » " 1 ¼

1

(3.5.9)

and ( m1)un

, k

> k1

k2 ... kn @ 

1un

.

(3.5.10)

Using (3.5.8a) and (3.5.9), one can easily check that B

BBˆ

diag [b1 , ..., bm ], bi

[0

"

0

1]T 

di

,

(3.5.11)

Normal Matrices and Systems

195

where T denotes the transpose. Let

K

Bˆ 1KT1

ni

¦d

ª an1  en1 1 º « » # « » « a  e », nm 1 nm 1 1 « » «¬ anm  k »¼

(3.5.12)

where i

k

,

k 1

and a ni is the ni-th row of the matrix A i, ei is the i-th row of the identity matrix In and k is given by (3.5.10). Using (3.5.9), (3.5.11) and (3.5.12), one can easily check that Az

T(A  BK )T1

ª0 1 «0 0 « «# # « «0 0 «¬ k1 k2

A  BKT1

0 ! 1 ! # %

0º 0 »» # ». » 0 ! 1» k3 ! kn »¼

ˆ ˆ 1KT1 A  BBB

 A  BK

(3.5.13)

The matrix (3.5.13) is cyclic and k will be chosen in such a way that the pair (Az, C) is observable. According to Lemma 3.5.1, if Az has the Frobenius canonical form (3.5.13), then it is always possible to choose the elements k1,…,kn in such a way that the pair (Az, C) is observable. If Az is cyclic, the pair (A, B) is controllable and the pair (Az, C) is observable, then the transfer matrix (3.5.4) is normal. „ In a general case there exist many gain matrices K normalising the transfer matrix. If the pair (A, B) is controllable, then we can compute the matrix k using the following procedure. Procedure 3.5.1. Step 1: Compute a nonsingular matrix T that transforms the pair (A, B) to the ˆ B . canonical form (3.5.8), and A, B, B, Step 2: Using (3.5.12) compute K and

196

Polynomial and Rational Matrices

ˆ K = BKT

(3.5.14)

for the unknown matrix k. Step 3: Choose k in such a way that the pair (Az, C) is observable. Step 4: Compute the desired matrix K substituting k (computed in Step 3) into (3.5.14). Example 3.5.1. Consider the system (3.5.1) with matrices

A

ª0 1 «0 2 « «0 0 « ¬0 0

0 0º 0 1»» ,B 0 1» » 0 2 ¼

ª0 «1 « «0 « ¬0

0º 2 »» , C 0» » 1¼

ª0 1 0 0 º «0 0 0 1 » , D ¬ ¼

0.

(3.5.15)

It is easy to check that A is not cyclic, the pair (A, B) is controllable and the pair (A, C) is not observable. We seek a matrix K

ª k11 k12 «k k 2 ¬ 1

k14 º k4 »¼

k13 k3

such that the closed loop transfer matrix (3.5.4) is normal. Applying the above procedure, we obtain the following. Step 1: The matrices (3.5.15) already have the canonical form (3.5.8) and

A



ª A11 «A ¬ 21

ªb1 º «b » ¬ 2¼

1

A12 º A 22 ¼»

ª0 1 « «0 2 «0 0 « ¬0 0

ª1 2 º «0 1 » ¬ ¼

1

0º 0 1»» , 0 1» » 0 2 ¼ 0

ª 1 2 º «0 1 » , ¬ ¼

B

B

BBˆ

Step 2: Using (3.5.12) and (3.5.16), we compute K

and

ª  a2  e3 º « » ¬  a4  k ¼

ª 0 « k ¬ 1

2 k2

1  k3

ªB1 º «B » ¬ 2¼

1 º 2  k4 »¼

ª0 «1 « «0 « ¬0

ª0 « «1 «0 « ¬0 0º 0 »» . 0» » 1¼

0º 2 »» , 0» » 1¼

(3.5.16)

Normal Matrices and Systems

K

2 1 1 º ª1 2 º ª 0 «0 1 » « k k k 2  k » ¬ ¼¬ 1 2 3 4¼ 2  2 k 2 1  2 k3 2 k 4  3 º . 2  k4 »¼  k2  k3

ˆ BKT ª 2k1 « k ¬ 1

197

(3.5.17)

Step 3: The pair ( A z , C) for

Az

ª0 «0 « «0 « ¬ k1

1 0

0 1

0 k2

0 k3

0º 0 »» 1» » k4 ¼

is observable for k1 z 0 and arbitrary k2, k3, k4, since

ª C º rank « » ¬CA c ¼

ª0 «0 « «0 « ¬ k1

1

0

0

0

0 k2

1 k3

0º 1 »» 0» » k4 ¼

4

for k1 z 0 and arbitrary k2, k3, k4. Step 4: The desired gain matrix has the form (3.5.17) for k1 z 0 and arbitrary k2, k3, k4. 3.5.2 Output-feedback Consider the system (3.5.1) with an output-feedback in the form u

v  Fy ,

(3.5.18)

where F mup is a gain matrix. From (3.5.1a) and (3.5.18), we have x

A  BFC x  Bv .

(3.5.19)

The closed-loop transfer matrix has the form

Tc ( s) C > I n s  ( A  BFC) @ B . 1

(3.5.20)

The problem of normalisation of the transfer matrix using an output-feedback can be formulated in the following way. Given the system (3.5.1) with the

198

Polynomial and Rational Matrices

noncyclic matrix A, the controllable pair (A, B) and the observable pair (A, C), compute the matrix F in such a way that the closed-loop transfer matrix (3.5.20) is normal. If the pair (A, C) is not observable, then the pair (A + BFC , C) is not observable, and the closed-loop transfer matrix (3.5.20) is not normal, regardless of the matrix F. Thus the problem of normalisation of the transfer function using the output-feedback has a solution only if the pair (A, C) is observable. If additionally the pair (A, B) is controllable, then the problem of normalisation reduces to computation of the matrix F in such a way that the closed-loop system matrix ˆ z = A + BFC is cyclic. Let K = FC. In this case, using the approach provided in A the proof of Theorem 3.5.2, one can compute K, which is given by (3.5.14), in ˆ z = A + BK is a cyclic matrix. From the Kronecker–Capelli such a way that A theorem it follows that the equation K = FC has a solution for the given C and K if and only if rank C

ªC º rank « » . ¬K ¼

(3.5.21)

Thus the following theorem has been proved. Theorem 3.5.2. Let the pair (A, B) be controllable, the pair (A, C) observable, and A be a cyclic matrix. Then there exists a matrix F such that the transfer matrix (3.5.20) is normal if and only if the condition (3.5.21) is satisfied. If the condition (3.5.21) is satisfied, then applying elementary operations on the columns of the matrix K = FC, we obtain [K1 0] F[C1 0],

K1 

mu p

, C1 

pu p

(3.5.22)

and det C1 z 0, since by assumption C is a full row rank matrix. From (3.5.22), we obtain F

K1C11 .

(3.5.23)

Example 3.5.2. Consider the system (3.5.1) with the matrices

A

ª0 1 «0 2 « «0 0 « ¬0 0

0 0º 0 1»» ,B 0 1» » 0 2 ¼

ª0 «1 « «0 « ¬0

0º 2 »» ,C 0» » 1¼

ª1 1 0 0.5º « ». ¬0 2 1 1 ¼

(3.5.24)

It is easy to check that the pair (A, B) is controllable, the pair (A, C) observable, and A is not a cyclic matrix. We seek a matrix

Normal Matrices and Systems

F

ª f11 «f ¬ 21

199

f12 º f 22 »¼

such that the closed-loop transfer matrix is a normal matrix. In the same way as in Example 3.5.1 we compute the matrix K and from (3.5.17) for k1 = 1, k2 = 0, k3 = -1/2, k4 = 2, we obtain ª 2 2 0 1º « 1 0 0.5 0 » . ¬ ¼

K

(3.5.25)

In this case, the condition (3.5.21) is satisfied, since ª1 1 0 0.5º ªCº rank « rank « » » ¬1 2 1 0 ¼ ¬K ¼ ª 1 1 0 0.5º «0 2 1 1 »» rank « 2. «2 2 0 1 » « » ¬ 1 0 0.5 0 ¼

rank C

Applying elementary operations on the columns of the matrix

ªC º «K » ¬ ¼

ª1 «0 « «2 « ¬ 1

0.5º 1 »» , 2 0 1 » » 0 0.5 0 ¼

1 2

0 1

we obtain

ªC1 0 º «K 0» ¬ 1 ¼

0 ª1 «0 1 « «2 0 «  1 0.5 ¬

0 0 0 0

0º 0 »» . 0» » 0¼

Using (3.5.23), we obtain the desired matrix F

K1C11

ª2 0 º « 1 0.5» . ¬ ¼

(3.5.26)

200

Polynomial and Rational Matrices

3.6 Electrical Circuits as Examples of Normal Systems 3.6.1 Circuits of the Second Order Consider an electrical circuit with its scheme given in Fig. 3.6.1, with known resistances R1, R2, inductance L, capacity C, and source voltages e1 and e2.

Fig. 3.1 A circuit of the second order

Taking as state variables the current i in the coil and the voltage uC on the capacitor, we can write the equations e1 e2

di  uC , dt § duC · ¨ C dt  i ¸ R2  uC . © ¹

R1i  L

We write these equations in the form of the state equation

d ªi º dt «¬uC »¼

ª R1 « L « « 1 «¬ C

1 º ª1 » i L ª º «L » « 1 » «¬uC »¼ « 0  «¬ CR 2 »¼ 

º 0 » ªe º »« 1». 1 » ¬e2 ¼ CR 2 »¼

Denoting

x

ªi º «u » , A ¬ C¼

ª R1 « L « « 1 «¬ C

1 º L » », B 1 »  CR 2 ¼» 

ª1 «L « «0 «¬

º 0 » », u 1 » CR 2 »¼

ª e1 º « », ¬ e2 ¼

(3.6.1)

we obtain x

Ax  Bu .

(3.6.2a)

Normal Matrices and Systems

201

Take as an output the voltage on the coil y1=L di/dt and the current i2 of the voltage source e2, y2 = i2. In this case, the output equation takes the form

y

ª y1 º «y » ¬ 2¼

ªe1  R1i  uC º « » « e2  uC » R2 «¬ »¼

C

ª R1 « « 0 «¬

1 º 1 »» , D  R 2 »¼

Cx  Du ,

(3.6.2b)

0 º 1 »» . R 2 »¼

(3.6.3)

where ª1 « «0 «¬

The matrix A is cyclic and its characteristic polynomial

m s

det > Is  A @

R1 L 1  C

1 L

s

s

1 CR 2

(3.6.4)

§R R1  R 2 1 · s2  ¨ 1  ¸s  CR L LCR 2 © 2 ¹

is the same as the minimal one. The inverse matrix

> Is  A @

1

1 LCR2 s  R1CR2  L s  R1  R2 2

CR2 ª L sCR2  1 º u« » LR2 sLR2C  R1 R2C ¼ ¬

(3.6.5)

is a normal matrix. B and C are square nonsingular matrices. Hence the pair (A, B) is controllable and the pair (A, C) is observable. The transfer matrix of this circuit is irreducible and has the form

202

Polynomial and Rational Matrices

C > Is  A @ B  D

T s

1

1

R 1 º ª 1 º « s  1 L L » » 1 »» « 1 »  « 1 s R 2 »¼ « CR 2 »¼ ¬ C 1 LCR 2 s 2  R1CR 2  L s  R1  R 2

ª R1 « « 0 ¬«

ª  sR1CR 2  R1  R 2 « u« 1 «¬

ª1 «L « «0 «¬

º 0 » ª1 »« 1 » «0 « CR 2 »¼ ¬

 sL

º ª1 » «  sL  R1 »  « 0 »¼ «¬ R2

0 º 1 »» R 2 »¼

(3.6.6)

0 º 1 »» . R 2 »¼

It is easy to check that (3.6.6) is a normal matrix. Now we will perform structural decomposition of the matrix (3.6.5). Postmultiplying the polynomial matrix CR 2 ª sLCR 2  L º « LR sLCR 2  R1CR 2 »¼ 2 ¬

L s

(3.6.7)

by the matrix V

ª 0 1º « 1 0 » , ¬ ¼

we obtain L s V

CR 2 s C LR 2  L º ª . «  sLCR  R CR LR 2 »¼ 2 1 2 ¬

Using the notation adopted in Sect. 3.4.2 we obtain in this case U s

ª1 0 º «0 1 » , V s ¬ ¼

k s

 sL  R1 , L s

P s

ª 1 º U 1 s i s « » ¬k s ¼

V, i s

CR2 , w s

sL 

L C

and CR2 ª º « », «¬  sLCR2  R1CR2 »¼

L , CR2

Normal Matrices and Systems

ª L « sL  CR 2 ¬

Q s

ª¬1 w s º¼ V 1 s

G s

0 º 1 ª0 U 1 s « V s ˆ » ¬«0 L s ¼»

203

º 1» , ¼

ª 0 « R CL2 ¬ 2

0º . 0 »¼

It is easy to check that the following equality holds

>Is  A @

1

CR2 º 1 ª L sCR2  1 « » m s ¬ LR2 sLR2C  R1 R2C ¼

P s Q s  G s . m s

(3.6.8)

Note that the structural decomposition of the matrix (3.6.5) yields the structural decomposition of the transfer matrix (3.6.6), since T s

C > Is  A @ B  D 1

CP s Q s B  CG s B  D m s

P s Q s  G s , m s

(3.6.9)

where P s

CP s , Q s

Q s B, G s

CG s B  D .

3.6.2 Circuits of the Third Order Consider the electrical circuit with its scheme given by Fig. 3.2, with known resistances R1, R2, inductance L, capacities C1, C2 and source voltages e1 and e2. As the state variables we take the current in the coil i and the voltages u1 and u2 on the capacitors, as the outputs we take the voltages on the resistances R1, y1 = R1i and R2, y2 = R2i2. Using Kirchoff’s laws we can write the following equations for this circuit di  u1 , dt du e2 u2  R2C2 2  u1 , dt du1 du2 C1 , i  C2 dt dt

e1

R1i  L

204

Polynomial and Rational Matrices

Fig. 3.2 A circuit of the third order

which can be written in the form of the state equation

ªiº d « » u1 dt « » «¬u2 »¼

ª R « 1 « L « 1 « « C1 « « 0 ¬

1 L 1  R 2 C1 



1 R 2 C2

º ª1 » « »ª i º «L 1 »« » « » u1  « 0 R 2 C1 » « » « «¬u2 »¼ « 1 »  » «0 R 2C2 ¼ ¬ 0

º 0 » » 1 » ª e1 º . » R 2 C1 » «¬e2 »¼ 1 » » R 2C2 ¼

Denoting

x

ªiº «u » , A « 1» ¬«u2 ¼»

ª R « 1 « L « 1 « « C1 « « 0 ¬

1 L 1  R2C1 



1 R2C2

º » » 1 » »,B R2C1 » 1 »  » R2 C2 ¼ 0

ª1 « «L « «0 « « «0 ¬

º 0 » » 1 » » ,u R2C1 » 1 » » R2C2 ¼

ª e1 º «e » ,(3.6.10) ¬ 2¼

we obtain x

Ax  Bu .

(3.6.11a)

Taking into account that y1

R1i,

y2

R 2C2

du2 dt

e2  u1  u2 ,

we obtain the output equation of the form

Normal Matrices and Systems

ª y1 º « » ¬ y2 ¼

y

R1i ª º «e  u  u » ¬ 2 1 2¼

ª R1 « ¬0

ªiº 0 º « » ª 0 0 º ª e1 º u1  1 1¼» « » ¬« 0 1 ¼» ¬« e2 ¼» «¬u2 »¼

205

0

(3.6.11b)

Cx  Du

where C

ª R1 «0 ¬

0 0º , D 1 1»¼

ª0 0º «0 1 » . ¬ ¼

(3.6.12)

A is a cyclic matrix since the minor obtained after elimination of the first row and the third column of the matrix [Is – A] is equal to -1/(R2C1C2), therefore the greatest common divisor of Adj [Is – A] is 1. The characteristic polynomial (minimal) of A is R1 L 1  C1

1 L

s m s

det > Is  A @

0

s

1 R 2 C1

1 R 2 C2

0 

1 R 2 C1

s

1 R 2 C2

§R R1 R1 1 1 · 2 § 1    s3  ¨ 1  ¸s ¨ L L L L R C R C C R C R 2 1 2 2 ¹ 2 1 2C2 © © 1 2R1 2 · 1 .  2  ¸s  LR 2 C1C2 LR 22 C1C2 R 2 C1C2 ¹

(3.6.13)

The inverse

> Is  A @

1

where

ª R «s  1 L « « 1 «  « C1 « « 0 ¬

1 L s

1 R 2 C1

1 R 2C2

º » 0 » 1 »  » R 2 C1 » 1 » s » R 2 C2 ¼

1

L s , m s

(3.6.14)

206

Polynomial and Rational Matrices

L s ª 2 § 1 1 · 2  «s  ¨ ¸s  2 R C R C R C © ¹ 2 1 2 2 2 1C2 « « 1 1 « s « C1 R 2 C1C 2 « 1 «  « R 2C1C2 ¬ º » » » R1 1 » s LR 2 C1 R 2 C1 » » § 1 R1 · R1 1 » 2   s ¨ ¸s  LR 2 C1 LC1 »¼ © R 2 C1 L ¹ 



1 1 s L LR 2 C 2

§ 1 R · R1  1 ¸s  s2  ¨ LR 2 C 2 © R 2C2 L ¹ R1 1  s R 2 C2 LR 2 C 2

1 LR 2 C1

(3.6.15)

is a normal matrix, since all nonzero second order minors of the matrix (3.6.15) are divisible without remainder by the polynomial (3.6.13). The pair (A, B) of this circuit is controllable, since the matrix built from the first three columns of [B AB] is nonsingular ª1 « «L « det « 0 « « «0 ¬

0 1 R 2 C1 1 R 2C2

R1 º » L2 » 1 » » LC1 » » 0 » ¼





1 . L2 R 2 C1C2

(3.6.16)

If R1 z 0, then the pair (A, C) is observable too, since the matrix built from the first three rows of ªC º « AC » ¬ ¼

is nonsingular

Normal Matrices and Systems

ª « R1 « « det « 0 « 2 « R1  « L ¬

º 0» » » 1» » » 0» ¼

0 1 

R1 L



R12 . L

207

(3.6.17)

The transfer matrix of this circuit has the form T s

C > Is  A @ B  D 1

ª R «s  1 L « ª R1 0 0 º « 1 « 0 1 1» «  C ¬ ¼« 1 « « 0 ¬ ˆ 0 0 L s ª º « » m s , 0 1 ¬ ¼

1 L s

1 R 2 C1

1 R 2 C2

º 0 » » 1 »  » R 2 C1 » 1 » s » R 2 C2 ¼

1

ª1 « «L « «0 « « «0 ¬

º 0 » » 1 » »  (3.6.18) R 2 C1 » 1 » » R 2C2 ¼

where Lˆ s ª R1 2 § R1 R1 · 2 R1  « s ¨ ¸s  2 L L L L R C R C R © 2 1 2 2 ¹ 2 C1C 2 « « 1  s « C1 L ¬«



R1 2 R1 º s » LR 2 C1 LR 22 C1C2 » (3.6.19) » R 1 s3  1 s 2  s » L LC1 ¼»

is an irreducible matrix, since det Lˆ (s) is divisible without remainder by the polynomial (3.6.13). We will perform the structural decomposition of the matrix (3.6.14). To accomplish this, we write the matrix (3.6.15) in the form

208

Polynomial and Rational Matrices 1 LR2C1

L s

ª § LC1 · C 2L 2 R2C1 s  1 «  LR2C1s  ¨  L  ¸s  C R C C © 2 ¹ 2 2 2 « (3.6.20) « § · LC R1C1 L 2 1 « u  LR2 s   LR2C1s  ¨   R1 R2C1 ¸ s  « C2 C2 © C2 ¹ « LC1 RC L « s 1 1 « C2 C2 C2 ¬ 1 º »  Ls  R1 » 2  LR2C1s   L  R1 R2C1 s  R1  R2 ¼»

and then we postmultiply it by the matrix

V

ª0 0 1 º «0 1 0» . « » «¬1 0 0 »¼

(3.6.21)

Then we obtain L s V

1 LR2C1

ª 1 « « « u«  Ls  R1 « « «  LR2C1 s 2   L  R1 R2C1 s  R1  R2 «¬ § LC · 2L º  LR2 C1 s 2  ¨  L  1 ¸ s  » C R 2 ¹ 2 C2 » © » L  LR2 s  ». C2 » » L » C2 »¼

In this case,

R2 C1 s 

C1 C2

§ LC · RC  LR2C1s 2  ¨  1  R1 R2C1 ¸ s  1 1 C2 © C2 ¹ LC1 R1C1 s C2 C2

(3.6.22)

Normal Matrices and Systems

209

1 , LR2C1

U s

I3 , i s

w s

ª C1 « R2C1s  C2 ¬

k s

 Ls  R1 ª º «  LR C s 2   L  R R C s  R  R » , 2 1 1 2 1 1 2¼ ¬

L s

ª § LC1 · RC 2  R1 R2C1 ¸ s  1 1 «  LR2C1s  ¨  C C2 © ¹ 2 « « LC1 RC s 1 1 « C2 C2 ¬«

§ LC · 2L º  LR2C1s 2  ¨  L  1 ¸ s  », C2 ¹ R 2 C2 ¼ ©

(3.6.23) Lº » C2 » . » L » C2 ¼»

 LR2 s 

Using (3.4.22)–(3.4.24) and (3.6.23), we obtain P s

ª 1 º U 1 s i s « » ¬k s ¼

1 ª º 1 « »  Ls  R1 » LR 2 C1 « «¬  LR 2 C1s 2   L  R1R 2 C1 s  R1  R 2 »¼ ª º 1 « » LR 2 C1 « » « » R1 1 « », s R 2 C1 LR 2 C1 « » « » « s 2  § 1  R 1 · s  R1  1 » ¨ ¸ « LR 2 C1 LC1 »¼ © R 2 C2 L ¹ ¬ Q s

ª¬1 w s º¼ V 1

ª C1 «1 R 2 C1s  C 2 ¬

ª0 0 1 º § LC1 · 2L º « »  LR 2 C1s  ¨  L  » «0 1 0» ¸s  C R C © 2 ¹ 2 2 ¼ ¬«1 0 0 ¼» 2

ª § LC1 · 2L 2 «  LR 2 C1s  ¨  L  ¸s  C2 ¹ R 2 C2 © ¬ G (s)

R 2 C1s 

0 ª0 º 1 U 1 ( s ) « »V ª º L  i s s k s w s 0 ( ) ( ) ( ) ( ) ¬ ¼¼ ¬

º 1» , ¼ ª 0 0 0º « x 0 0» , « » «¬ y z 0 »¼

C1 C2

(3.6.24)

210

Polynomial and Rational Matrices

where § R L L · 2 § R1 1 2L · Ls 3  ¨ R1    1   2 ¸s ¨ ¸s R C R C R C R C C R © © 2 1 2 1 2 2 ¹ 2 2 1 2 C1C2 ¹ 2 R1 1   , C1C2 R2 R22C1C2 x

§ · § RC LC1 3L · 2 L  R1R 2 C1 ¸ s 3  ¨ R 2  1 1  2R 1   LR 2 C1s 4  ¨ 2 L  ¸s C C R C R © ¹ © 2 2 2 1 2 C2 ¹ § 1 3R1 R1 R  2R 1 1 2L · , ¨     s 2 2 ¸ C R C C R C C C R C1C1R 22 © 2 2 2 1 2 1 2 1 2 ¹ y

z

§ RRC § R C · RC R 2 ·  R2C1s 3  ¨  1 2 1  1  1 ¸ s 2  ¨  2  1 1  1  ¸s L C L C L L R © © 2 ¹ 2 2 C2 ¹ § 2 R1 1 ·  ¨  ¸. © R2C2 L C2 L ¹

The structural decomposition of the matrix (3.6.14) yields the structural decomposition of the transfer function (3.6.18). 3.6.3 Circuits of the Fourth Order and the General Case Consider an electrical circuit with its scheme given in Fig. 3.3, with known resistances R1, R2, R3, inductances L1, L2, capacities C1, C2, as well the source voltages e1, e2 and e3. As the state variables we take the currents i1 and i2 in the coils and the voltages u1, u2 on the capacitors, as the outputs y1 and y2 we take the voltage on the coil L1 and the current in the capacitor C2, respectively.

Fig. 3.3 A circuit of the fourth order

Normal Matrices and Systems

211

Using Kirchoff’s laws we can write the following equations for this circuit di1  u1  R3 i1  i2 , dt du e2 u2  R2C2 2  u1 , dt di e3 e2  R3 i1  i2  L2 2 , dt du1 du2 , C1 i1  C2 dt dt e1

R1i1  L1

which can be written in the form of the state equation ª i1 º « » d « i2 » dt « u1 » « » ¬u2 ¼ ª R1  R3 « L 1 « « R3 « L 2 « « 1 « « C1 « 0 « ¬



R3 L1



R3 L2



1 L1

º ª1 » «L » « 1 » ª i1 º « 0 »« » « 0 » « i2 »  « 1 » « u1 » «  »« » « 0 R2C1 » ¬u2 ¼ « « 1 »  » «0 R2C2 ¼ ¬ 0

0

0



1 R2C1

0



1 R2C2

0 

1 L2

1 R2C1 1 R2C2

º 0» (3.6.25) » 1» ªe º L2 » « 1 » » e . »« 2» 0 » «¬ e3 »¼ » » 0» ¼

Denoting

x

ª i1 º «i » « 2 », u « u1 » « » ¬u 2 ¼

A

ª R1  R3 « L 1 « « R3 « L 2 « « 1 « « C1 « 0 « ¬

ª e1 º « » «e2 » , «¬ e3 »¼ 

R3 L1



R3 L2



1 L1 0

0



1 R2C1

0



1 R2C2

º » » » 0 » », B 1 »  » R2C1 » 1 »  » R2C2 ¼ 0

ª1 «L « 1 « «0 « « «0 « « «0 ¬

0 

1 L2

1 R2C1 1 R2C2

º 0» » 1» L2 » », » 0» » » 0» ¼

212

Polynomial and Rational Matrices

we obtain x

Ax  Bu .

(3.6.26a)

Taking into account y1

di1  R1  R3 i1  R3i2  u1  e1 , dt du 1 1 1 C2 2  u1  u2  e2 , dt R2 R2 R2

L1

y2

we obtain the output equation y

ª y1 º «y » ¬ 2¼ ª  R1  R 3 R 3 « « 0 0 «¬

1 

1 R2

ª i1 º 0 º « » ª1 » i2 « 1 « »  » « u1 » «0 R 2 »¼ « » «¬ ¬u 2 ¼

0 º ª e1 º » «e » 0» « 2 » »¼ «¬ e3 »¼

0 1 R2

(3.6.26b)

Cx  Dy, where

C

ª  R1  R3  R3 « « 0 0 ¬«

1 

1 R2

0 º » 1 , D  » R2 ¼»

ª1 « «0 ¬«

0º ». 0» ¼»

0 1 R2

(3.6.27)

To show that A is a cyclic matrix, we transform the matrix [Is – A] by similarity (which does not change the characteristic polynomial) to the form

P > Is  A @ P T

We then obtain

Is  PAPT for P

ª0 «1 « «0 « ¬0

1 0 0º 0 0 0 »» 0 1 0» » 0 0 1¼

P

T

P 1

P .

Normal Matrices and Systems

213

PAPT

ª0 «1 « «0 « ¬0

1 0 0 0 0 1 0 0

ª R3 « L « 2 « R3 « L « 1 « « 0 « « « 0 ¬



ª R1  R3 « L 1 « 0º « R3  L2 0 »» «« 0» « 1 »« 1 ¼ « C1 « 0 « ¬ R 0  3 L2 R1  R3 L1



1 L1

1 C1



1 R2C1

0



1 R2C2



R3 L1



R3 L2



1 L1 0

0



1 R2C1

0



1 R2C2

º » » » ª0 0 »« » «1 1 » «0  »« R2C1 » ¬ 0 1 »  » R2C2 ¼ 0

1 0 0º 0 0 0 »» 0 1 0» » 0 0 1¼

º » » » 0 » ». 1 »  » R2C1 » 1 »  » R2C2 ¼ 0

(3.6.28)

Note that the minor obtained from [Is – PAPT] by elimination of the first row and the fourth column is equal to R3/(L1R2C1C2). The greatest common divisor of the entries of Adj [Is – PAPT] is 1, thus A is a cyclic matrix. The characteristic polynomial (minimal) of the matrix A is s

det > Is  A @

R1  R 3 L1 R3 L2 

R3 L1 s

R3 L2

1 C1

0

0

0

s

1 L1

0

0

0

1 R 2 C1

1 R 2C2

1 R 2 C1 s

1 R 2 C2

§ L L C  L1 L2C1  R3 R2C1C2 L2  R1 R2C1C2 L2  R3 R2C1C2 L1 · 3 s4  ¨ 1 2 2 ¸s R2 L1 L2C1C2 © ¹ § R C L  R3 L2C2  R3 L2C1  L1 R3C2  R1 L2C1  R1 L2C2  L1 R3C1 ¨ 2 2 2 R2 L1 L2C1C2 © R1 R2 R3C1C2 · 2 § L2  R1 R3C2  R2 R3C2  R1 R3C1 · R3 . ¸s ¨ ¸s  R2 L1 L2C1C2 ¹ R L L C C R L L 2 1 2 1 2 2 1 2 C1C2 © ¹

The inverse matrix

(3.6.29)

214

Polynomial and Rational Matrices

> Is  A @

1

R1  R 3 ª «s  L 1 « « R3 « L2 « « 1 «  C1 « « 0 « ¬

R3 L1 s

1 L1

R3 L2

0

0

1 s R 2 C1

0

1 R 2C2

º » » » 0 » » » 1 » R 2 C1 » 1 » s » R 2 C2 ¼

1

0

L s , (3.6.30) m s

where

L s ª s 3 R2C1C2 L2  s 2 C1C2 R2 R3  C1L2  C2 L2  sR3 C1  C2 « C1C2 R2 L2 « 2 «  s R2 R3C1C2  sR3 C1  C2 « « C1C2 R2 L2 « 2 s L 2 R2C2  s L2  R2 R3C2  R3 « « C1C2 R2 L2 « «  sL2  R3 « C1C2 R2 L2 ¬  s 2 R2 R3C1C2  sR3 C1  C2 C1C2 R2 L1 ª s L1 R2C1C2  s L1C2  L1C1  C1R1R2C2  R3 R2C1C2  º « » «¬  s R1C2  R1C1  R3C2  R3C 1  R2C2  1 »¼ C1C2 R2 L1 3

2

 sR2 R3C2  R3 C1C2 R2 L1 R3 C1C2 R2 L1

(3.6.31)

Normal Matrices and Systems

215

 s 2 L 2 R2C2  s L2  R2 R3C2  R3 L1 L2 R2C2 sR2 R3C2  R3 L1 L2 R2C2 ª s 3 L1 L2 R2C2  s 2 R2 R3 L1C2  R1 R2C2 L2  R2 R3C2 L2  L1 L2  º « » ¬«  s R1 R2 R3C2  R3  R1 L2  R3 L2  R1 R3 ¼» L1 L2 R2C2  s 2 L1 L2  s L1 R3  R1 L2  R3 L2  R1 R3 L1 L2 R2 L2 º » » »  R3 » L1 L2C1 R2 » »  s 2 L1 L2  s L1 R3  R1 L2  R3 L2  R1 R3 » » L1 L2C1 R2 » » ª s 3 L1 L2C1 R2  s 2 L1 L2  L1 R2 R3C1  R1 R2C1 L2  R2 R3C1 L2  º » « » » ¬«  s L1 R3  R1 L2  R1 R2 R3C1  R3 L2  R2 L2  R1 R3  R2 R3 ¼» » »¼ L1 L2C1 R2 sL2  R3 L1 L2C1 R2

is a normal matrix, since all nonzero second-order minors of the matrix (3.6.31) are divisible without remainder by the polynomial (3.6.29). The pair (A, B) of this circuit is controllable, since the matrix built from the first four columns of the matrix [B AB] is nonsingular ª1 « « L1 « «0 det « « «0 « « «0 ¬«



0

0

1 L2

1 L2



R1  R3 º L12 

R3 L1 L2

1 R2C1

0

1 L1C1

1 R2C2

0

0

The pair (A, C) is observable, since

» » » » » » » » » » ¼»

1 . L12 L2 R2C1C2

(3.6.32)

216

Polynomial and Rational Matrices

ªC º «CA » ¬ ¼

is nonsingular ªC º det « » ¬CA ¼  R1  R3 ª « 0 « « ª L C R 2  2L C R R  º 2 1 1 3 «« 2 1 1 » det « «¬  L2C1 R32  R32 L1C1  L1 L2 »¼ « L1 L2C1 « « 1  « R2C1 ¬ 1 1  R2

 R3 0 R1 R3 L2  R32 L1  L2 L1 L2 0

º » » » » 1 » z 0. R2C1 » » C1  C2 » R22C1C2 »¼ 0 1  R2

R1 R2C1  R2 R3C1  L1 L1 R2C1 C1  C2 R22C1C2

(3.6.33)

The transfer matrix of this circuit has the form T s

C > Is  A @ B  D 1

ª  R1  R3  R3 « « 0 0 «¬

R1  R3 R3 1 ª «s  L L1 L1 1 « « R3 R3 0 s « L2 L2 u« « 1 1 0 s «  C R 1 2 C1 « « 1 0 0 « R2C2 ¬ ª1 0 0 º ˆ » L s , « «0 1 0» m s R2 ¬« ¼»

º » » » 0 » » » 1 » R2C1 » 1 » s » R2C2 ¼ 0

1

1 1  R2 ª1 «L « 1 « «0 « « «0 « « «0 ¬

0 º » 1  » R2 »¼ 0 

1 L2

1 R2C1 1 R2C2

º 0» » 1» L2 »» » 0» » » 0» ¼

(3.6.34)

Normal Matrices and Systems

217

where Lˆ s ª§ L1 L2 R2C1C2 s 4  L1 L2C1  R3 L1 R2C1C2  L1 L2C2 s 3  · «¨ ¸ 2 ¸ «©¨  L1 R3C2  L1 R3C1 s ¹ « C2 L2 s 2  R3C2 s «¬ §

L1 R2C1C2 R3  L1 L2C2 s 3  ¨ L1C1R3  ©

L1 L2 L1C2 L2 · 2  ¸s  R2 R2C1 ¹

§LRC LR · ¨ 1 3 2  1 3 ¸s R2 ¹ © R2C1 4 L1 L2C1C2 s  L1 R3C1C2  R1 L2C1C2  R3 L2C1C2  

R3 L1C1 L1 L2C1 L1 L2C2 · 3 §   ¸ s  ¨ R1 R3C1C2  L2C2  R2 R2 ¹ R2 ©



R1 L2C2 R3 L2C1 R3 L1C2 R1 L2C1 R3 L2C2 · 2     ¸s  R2 R2 R2 R2 R2 ¹

§ R1 R3C1 R1 R3C2 L2 L2C2 · R3 C2 R3     ¨ ¸s  R R R R C R R2C1 2 2 2 2 1 ¹ 2 ©  L1 R3 R2C1C2 s 3  R3 L1 C2  C1 s 2 º », R3C2 s »¼ 4 3 m s s R2C1C2 L1 L2  s R1 R2C1C2 L2  R2 R3C1C2 L2

(3.6.35)

 R2 R3C1C2 L1  C1 L1 L2  C2 L1 L2  s 2 R3C2 L1  R1C1 L2  R1C2 L2  R1 R2 R3C1C2  R3C1 L2  R3C2 L2  R2C2 L2  R3C1 L1  s R1 R3C1  R1 R3C2  R2 R3C2  L2  R3 .

This is an irreducible and normal matrix, since all nonzero second degree minors of the matrix (3.6.35) are divisible without remainder by the polynomial (3.6.29). Analogously to the two previous cases, we can perform the structural decomposition of the inverse matrix (3.6.30) and transfer matrix (3.6.34). The above considerations can be generalised into electrical circuits of an arbitrary order. From the above considerations we can derive two important corollaries pertaining to electrical circuits of the n-th order (n is not less than 2), with at least two inputs m t 2 and at least two outputs p t 2, that is, min (n, m, p) t 2.

218

Polynomial and Rational Matrices

Corollary 3.6.1. Every matrix A of an electrical circuit of the second order (n = 2) is cyclic, and the inverse [Is – A]-1, as well as transfer matrix T(s) = =C[Is – A]-1B + D are normal. Corollary 3.6.2. The matrices A of typical electrical circuits consisting of resistances, inductances, capacities and source voltages (currents) are cyclic matrices and the inverses [Is – A]-1 are normal matrices. In particular cases, the values of R, L, C can be chosen in such a way that the pair (A, B) are not controllable or/and the pair (A, C) are not observable. In these cases, the transfer matrix T s

Lˆ s m s

may be reducible and then is not a normal matrix, i.e., not all nonzero second-order minors of the polynomial matrix Lˆ (s) are divisible without remainder by the polynomial m(s). Remark 3.6.1. If (A, B) is not a controllable pair, then some pole-zero cancellations occur in Adj > Is  A @ B . det > Is  A @

Analogously, if (A, C) is not an observable pair, then some pole-zero cancellations occur in CAdj > Is  A @ . det > Is  A @

4 The Problem of Realization

4.1 Basic Notions and Problem Formulation Consider a continuous system given by the equations x y

Ax  Bu , Cx  Du ,

(4.1.1a) (4.1.1b)

where x n, u m, y p are the state, the input and the output vectors, respectively, and A nun, B num, C pun and D pum. The transfer matrix of the system (4.1.1) is given by T s

C > Is  A @ B  D . 1

(4.1.2)

For the given matrices A, B, C and D there exists only one transfer matrix (4.1.2). On the other hand, for a given proper transfer matrix T(s) there are many matrices A, B, C and D satisfying (4.1.2). Definition 4.1.1. The quadruplet of the matrices: A nun, B num, C pun and D pum satisfying (4.1.2), is called a realisation of the given transfer matrix T(s) pum(s). It will be denoted Rn,m,p(T) or briefly Rn,m,p. Definition 4.1.2. A realisation Rn,m,p is called minimal if the matrix A has the minimal (least) dimension among all realisations of T(s). A minimal realisation will be denoted by R n,m,p. Definition 4.1.3. A minimal realisation R n,m,p is called cyclic (or simple) if the matrix A is cyclic. A cyclic realisation will be denoted by Rˆ n,m,p.

220

Polynomial and Rational Matrices

The matrix D for a given proper transfer matrix T(s) can be computed using the formula D

lim T s ,

(4.1.3)

s of

which results from (4.1.2), since lim > Is  A @

1

s of

0.

From (4.1.2) and (4.1.3), we have Tsp s

T s  D

C > Is  A @ B . 1

(4.1.4)

Having the proper matrix T(s) and using (4.1.4) we can compute the strictly proper matrix Tsp(s). The realisation problem can be formulated in the following way. With a proper rational matrix T(s) pum(s) given, compute the realisation Rn,m,p of this matrix. The minimal realisation problem can be formulated in the following way. With a proper rational matrix T(s) pum(s) given, compute a minimal realisation R n,m,p of this matrix. The problem of cyclic realisation is formulated as follows. With a proper rational matrix T(s) pum(s) given, compute a cyclic realisation Rˆ n,m,p of this matrix. In the case of a strictly proper transfer matrix Tsp(s) pum(s), the realisation problem reduces to the computation of only three matrices A, B, C satisfying (4.1.4).

4.2 Existence of Minimal and Cyclic Realisations 4.2.1 Existence of Minimal Realisations The theorem stated below provides us with necessary and sufficient conditions for the existence of a minimal realisation R n,m,p for a given rational proper transfer matrix T(s) pum(s). Theorem 4.2.1. A realisation (A, B, C, D) of a matrix T(s) is minimal if and only if (A, B) is a controllable pair and (A, C) is an observable pair. Proof. We will show by contradiction that if (A, B) is a controllable pair and (A, C) is an observable pair, then the realisation is minimal.

The Problem of Realization

221

Let (A, B, C), A nun and ( A,B,C ), A  nun be two different realizations for n ! n of the matrix T(s). From (4.1.4), we have C > Is  A @ B 1

1

C ª¬Is  A º¼ B

(4.2.1)

and

CA i B

CA i B, i

0, 1, ... .

(4.2.2)

From the assumption that (A, B) and ( A, B ) are controllable pairs and that (A, C) and ( A,C ) are observable pairs, it follows that rank S rank S

rank H rank H

n, n,

(4.2.3a) (4.2.3b)

where ª C º « CA » », S [B AB ! A n1B], H « « # » « n 1 » ¬CA ¼ ª C º « » CA » S = ª¬B AB … A n-1B º¼ , H = « . « # » « n-1 » ¬«CA ¼»

(4.2.3c)

(4.2.3d)

From (4.2.2) we have

HS

ª C º « CA » « » ªB AB … A n-1B º ¼ « # »¬ « n-1 » ¬CA ¼

ª CB CAB « 2 « CAB CA B « # # « n 1 «¬CA B CA n B

and

CA n1B º » ! CA n B » » % # » 2 n 1 ! CA B »¼ !

ª CB CAB « 2 CAB CA B « « # # « n1 «¬CA B CA n B

CA n1B º » ! CA n B » » % # » 2 n 1 ! CA B »¼ !

ª C º « » « CA » ªB AB ! A n1B º ¼ « # »¬ « n 1 » «¬CA »¼

HS

222

Polynomial and Rational Matrices

rank HS .

rank HS

(4.2.4)

The relationships rank HS = n, rank HS n and (4.2.4) lead to a contradiction since by assumption n ! n . Now we will show that if (A, B) is not a controllable pair or/and (A, C) is not an observable pair, then (A, B, C) is not a minimal realisation. If (A,B) is not a controllable pair, then there exists a nonsingular matrix P such that A

PAP 1

A1 

n1un1

ª A1 «0 ¬

, B1 

A2 º , B A 3 »¼ n1um

, A3 

PB

ªB1 º « 0 », C ¬ ¼

n  n1 u n n1

CP 1

, C1 

>C1

C2 @ ,

(4.2.5)

pun1

where (A1, B1) is a controllable pair and C > Is  A @ B 1

C1 > Is  A1 @ B1 . 1

(4.2.6)

From (4.2.6) it follows that (A1, B1, C1) is a realisation whose matrix A1 is of smaller size than that of A. Thus (A, B, C) is not a minimal realisation if (A,B) is not a controllable matrix. The proof that if (A, C) is not observable, then (A,B,C) is not a minimal realisation, is analogous. „ Theorem 4.2.2. If the triplet of matrices (A, B, C) is a minimal realisation R n,m,p of a strictly proper transfer matrix T(s) pum(s), then the triplet (PAP-1, BP, CP-1) is also a minimal realisation of the transfer matrix T(s) for an arbitrary nonsingular matrix P nun. Proof. We will show that the matrices PAP-1, BP, CP-1 satisfy the condition (4.1.4). Substituting these matrices into (4.1.4), we obtain 1

CP 1 ª¬Is  PAP 1 º¼ PB

1

1 CP 1 ª P > Is  A @ P 1 º PB ¬ ¼

CP 1P > Is  A @ P 1PB 1

C > Is  A @ B ,

since PP-1 = I. „

If (A, B, C) and ( A,B,C ) are two minimal realisations of the transfer matrix T(s), then there exists only one nonsingular matrix P such that A

PAP 1 , B

BP, C

CP 1 .

(4.2.7)

The Problem of Realization

223

Example 4.2.1. Given two minimal realisations (A, B, C) and ( A,B,C ) of a transfer matrix T(s), compute a nonsingular matrix P satisfying (4.2.7). From the assumption that (A, B, C) and ( A,B,C ) are two minimal realisations of the transfer matrix T(s), it follows that they satisfy the equality (4.2.2) and HS ,

HS

(4.2.8)

where n = n , and the matrices H, S, H and S are given by (4.2.3c) and (4.2.3d). The condition (4.2.3) implies that det [SST] = det [ S S T] z 0 and det [HTH] = = det [ H T H ] z 0. Post-multiplying (4.2.8) by S T, and computing H from the resulting relationship, we obtain 1

H

HSST ª¬SST º¼

P

SST ª¬SST º¼ .

HP ,

(4.2.9)

where 1

(4.2.10)

On the other hand, pre-multiplying (4.2.8) by H T and computing S from the resulting relationship, we obtain S

1

ª¬ H T H º¼ HT HS

P 1S ,

(4.2.11)

where 1

ª¬ H T H º¼ HT H .

P 1

(4.2.12)

Equality of the first m columns of (4.2.11) and the first p rows of (4.2.9) yields B

P 1B, C

CP .

(4.2.13)

One can easily verify that HAS

H AS .

(4.2.14)

Pre-multiplying (4.2.14) by H T, post-multiplying it by S T and then computing A from the resulting relationship, we obtain

224

Polynomial and Rational Matrices

A

ª¬H H º¼ T

1



HT H A SST ª¬SST º¼

1



P 1AP .

(4.2.15)

To show that P is the only feasible matrix, suppose that a matrix P also satisfies (4.2.7). In this case, the equality HP = H P yields H(P - P ) = 0, which implies that P = P , since H is a full column rank matrix. 4.2.2 Existence of Cyclic Realisations

We will provide the necessary and sufficient conditions for the existence of a cyclic realisation Rˆ n,m,p(A,B,C) for a given rational proper transfer matrix T(s) pum(s). Theorem 4.2.3. If A is a cyclic matrix and (A, B) is a controllable pair, then W s

Adj> Is  A @ B det > Is  A @

(4.2.16)

is an irreducible and normal matrix. If A is a cyclic matrix and (A,C) is an observable pair then W s

CAdj > Is  A @ det > Is  A @

(4.2.17)

is an irreducible and normal matrix. Proof. According to Theorem 2.5.1 Adj> Is  A @ det > Is  A @

is an irreducible matrix if A is a cyclic matrix, i.e., [Is – A] is a simple matrix, and at the same time, according to Theorem 3.1.1, it is a normal matrix as well. If (A, B) is a controllable pair, then there exist two polynomial matrices M(s) and N(s) such that

>Is  A @ M s  BN s

I.

(4.2.18)

Pre-multiplying (4.2.18) by [Is – A]-1, we obtain M s 

Adj> Is  A @ B N s det > Is  A @

>Is  A @

1

.

(4.2.19)

The Problem of Realization

225

From (4.2.19) it follows immediately that the matrix (4.2.16) is irreducible. Normality of the matrix (4.2.16) follows from normality of the matrix [Is – A] and the Binet–Cauchy theorem. The proof for (A, C) being an observable pair is analogous (dual). „ Theorem 4.2.4. The rational matrix CAdj > Is  A @ B det > Is  A @

W s

(4.2.20)

is irreducible if and only if the matrices A, B, C constitute a cyclic realization (A, B, C ) Rˆ n,m,p of the matrix W(s) pum(s). Proof. Necessity. If the matrices A, B, C do not constitute a cyclic realisation, then A is not a cyclic matrix or (A, B) is not a controllable pair or (A, C) is not an observable pair. If A is not a cyclic matrix, then

> Is  A @

1

Adj > Is  A @ det > Is  A @

is a reducible matrix. If (A, B) is not a controllable pair, then Adj > Is  A @ B det > Is  A @

is a reducible matrix and if (A, C) is an unobservable pair then CAdj > Is  A @ det > Is  A @

is reducible as well. Sufficiency. According to Theorem 4.2.3, if A is a cyclic matrix and (A, B) is a controllable pair, then the matrix (4.2.16) is irreducible, and if (A, C) is an observable pair, then the matrix (4.2.17) is irreducible. Thus if the matrices A, B, C constitute a cyclic realisation, then the matrix (4.2.20) is irreducible. „ Theorem 4.2.5. There exists a cyclic realisation for a rational proper (transfer) matrix T(s) pum(s) if and only if T(s) is a normal matrix.

226

Polynomial and Rational Matrices

Proof. Necessity. If there exists a cyclic realisation (A, B, C, D) of the matrix T(s), then [Is – A]-1 is a normal matrix and according to the Binet–Cauchy theorem [Is – A]-1B is a normal matrix. Normality of the matrix C[Is – A]-1 follows by virtue of Theorem 4.2.3. Sufficiency. If

L( s ) m( s )

T( s)

is a normal matrix, then using (4.2.3) we can compute the matrix D and the strictly proper matrix (4.2.4), and in turn compute the cyclic matrix A with the dimensions nun, n = deg m(s), the controllable pair (A, B), and the observable pair (A, C). „

4.3 Computation of Cyclic Realisations 4.3.1 Computation of a Realisation with the Matrix A in the Frobenius Canonical Form

The problem of computing a cyclic realisation (AF, B, C, D) for a rational matrix T(s), with the matrix AF in the Frobenius canonical form, can be formulated in the following way. Given a rational proper matrix T(s) pum(s), compute a minimal realisation (AF, B, C, D) Rˆ n,m,p with the matrix AF in the Frobenius canonical form

AF

ª 0 « 0 « « # « « 0 «¬  a0

1

0

!

0

1

!

# 0

# 0

%

a1

!

 a2 !

0 º 0 »» # ». » 1 »  an1 »¼

(4.3.1)

Given T(s) and using (4.1.3) we can compute the matrix D, and in turn the strictly proper rational matrix Tsp s

T s  D

C > Is  A F @ B 1

L s . m s

(4.3.2)

Thus the problem is reduced to computing a minimal realization (AF, B, C) Rˆ n,m,p of the strictly proper matrix Tsp(s) pum(s).

The Problem of Realization

227

The characteristic polynomial m(s) of the matrix (4.3.1), which is equal to the minimal polynomial Is  A @

s n  an1s n1  !  a1s  a0 .

(4.3.3)

One can easily show that Adj [Is - AF] of the matrix (4.3.1) has the form

Adj > Is  A F @

1 0 ! s 1 !

ªs «0 « Adj « # « «0 «¬ a0

1 º ª w s « » ¬M s k s ¼

nun

# 0 a1

0 0

# % # 0 ! s a2 ! an  2

º » » # » » 1 » s  an1 »¼ 0 0

(4.3.4)

>s@ ,

where w s

ª¬ mn1 s mn2 s ! m1 s º¼ ,

k s

T

ª¬ s s 2 ! s n1 º¼ , mn1 s s n1  an1s n2  !  a2 s  a1 mn2 s m1 s

(4.3.5)

s n2  an1s n3  !  a3 s  a2 s  an1

and M(s) (n-1)u(n-1)[s] is a polynomial matrix depending on the coefficients a0,a1,…,an-1. In order to perform the structural decomposition of the inverse [Is - AF]-1, we reduce the matrix (4.3.4) to the form (3.4.14). To this end, we pre-multiply the matrix (4.3.4) by U s

01,n1 º ª 1 « k s » ¬ I n1 ¼

(4.3.6a)

and post-multiply it by the unimodular matrix V s

Now we obtain

ª 0n11, « 1 ¬

I n1 º .  w s »¼

(4.3.6b)

228

Polynomial and Rational Matrices

01,n1 ª 1 º «0 », ¬ n11, M s  k s w s ¼

U s Adj> Is  A F @ V s

(4.3.7)

where [Is - AF]-1 is a normal matrix. Every nonzero second-order minor is divisible without remainder by m(s). Thus every entry of M (s) = M(s) – k(s)w(s) is divisible without remainder by m(s). Therefore, we have M s

ˆ s , M ˆ (s)  m s M

n 1 u n1

>s@ .

(4.3.8)

Taking into account that U 1 s

01,n1 º ª 1 1 «k s » , V s ¬ I n1 ¼

1 º ªw s « » ¬ I n1 0n11, ¼

(4.3.9)

as well as (4.3.8) and (4.3.7), we obtain Adj > Is  A F @

ª 1 U 1 s « ¬« 0n11,

PF s Q F s  m s G F s ,

01,n1 º 1 V s ˆ s »» m s M ¼

(4.3.10)

where PF s

ª 1 º U 1 s « » ¬0n11, ¼

QF s

ª¬1 01,n1 º¼ V 1 s ª¬ w s 1º¼ 01,n1 º 1 ª 0 U 1 s « V s ˆ s »» «¬ 0n11, M ¼

GF s

ª 1 º « » , ¬k s ¼

(4.3.11)

, 0 º ª 01,n1 «ˆ ». «¬ M s 0n11, »¼

From (4.3.2) and (4.3.10), we have

L s CAdj> Is  A F @ B P s Q s  m s G s , where

CPF s Q F s B  m s CG F s B

(4.3.12)

The Problem of Realization

229

ª 1 º C« », ¬k s ¼

P s

CPF s

Q s

QF s B

G s

CG F s B.

ª¬ w s 1º¼ B,

(4.3.13)

Let Ci be the i-th column of the matrix C, and Bi the i-th row of the matrix B, i = 1,2,…,n. Taking into account (4.3.13) and (4.3.5) we obtain ª 1 º « s » P s >C1 C2 ! Cn @ « » « # » « n1 » ¬s ¼ C1  C2 s  ! Cn s n1 P1  P2 s  P3 s 2  !  Pn s n , ª B1 º «B » Q s ª¬ mn1 s mn2 s ! m1 s 1º¼ « 2 » « # » « » ¬B n ¼ B1mn1 s  B 2 mn2 s  !  B n1m1 s  B n

(4.3.14)

B1s n1  an1B1  B 2 s n2  an2 B1  an1B 2  B 3 s n3 !  a1B1  a2 B 2  !  B n

Q1  Q 2 s  Q3 s 2  !  Q n s n1

where Pi

Ci , for i 1, 2, ! , n , an1B1  B 2 , Q n2

Qn

B1 , Q n1

Q1

a1B1  a2 B 2  !  an1B n1  B n .

(4.3.15a) an2 B1  an1B 2  B3 , ! ,

(4.3.15b)

With Qn, Qn-1, ..., Q1 known we can recursively compute from (4.3.15b) the rows Bi, i = 1,2,…,n of the matrix B Q n1  an1B1 , B3

B1

Qn , B2

Bn

Q1  a1B1  a2 B 2  !  an1B n1.

Q n2  an2 B1  an1B 2 , ! ,

(4.3.17)

From the above considerations we can derive the following procedure for computing the desired cyclic realisation (AF, B, C, D) of a given transfer matrix T(s) pun(s).

230

Polynomial and Rational Matrices

Procedure 4.3.1. Step 1: Using (4.1.3), compute the matrix D pum and the strictly proper matrix (4.3.2). Step 2: With the coefficients a0,a1,…,an-1 of the polynomial m(s) known, compute the matrix AF given by (4.3.1). Step 3: Performing the decomposition of the polynomial matrix L(s), compute the matrices P(s) and Q(s). Step 4: Using (4.3.15a) and (4.3.17), compute the matrices C and B. Example 4.3.1. Using Procedure 4.3.1, compute the cyclic realisation of the rational matrix ª s3  s  1 s3  s 2  2s  2 º 1 « ». s 3  s 2  2 s  1 ¬ s 3  s 2  2 s 2 s 3  2 s 2  5s  2 ¼

T s

(4.3.18)

It is easy to check that the matrix (4.3.18) is normal. Thus its cyclic realization exists. Using Procedure 4.3.1, we compute Step1: Using (4.1.3) and (4.3.2), we obtain D

lim T s s of

ª 1 1 º « 1 2» ¬ ¼

(4.3.19)

and Tsp s

T s  D

ª s 2  s  2 1º 1 « ». s  s  2s  1 ¬ 1 s¼ 3

2

(4.3.20)

Step 2: In this case, a0 = 1, a1 = 2, a2 = 1 and

AF

ª0 1 0º « » «0 0 1 ». «¬ 1 2 1»¼

Step 3: In order to perform the structural decomposition of the matrix L s

ª s 2  s  2 1º « » s¼ ¬ 1

it suffices to interchange its columns, i.e., to post-multiply it by

(4.3.21)

The Problem of Realization

231

ª0 1 º «1 0 » ¬ ¼

V s

and compute P(s) and Q(s) ª s 2  s  2 1 º ª 0 1 º ª1 s 2  s  2 º « »« » » « s ¼ ¬1 0 ¼ ¬ s 1 ¼ ¬ 1 0 ª1 º ª0 º 2 « s » ª¬1 s  s  2 º¼  « 0  s 3  s 2  2s  1» , ¬ ¼ ¬ ¼

L s V s

that is ª1 º «s» , Q s ¬ ¼

P s

ª0 1 º 2 ¬ª1 s  s  2 ¼º «1 0 » ¬ ¼

2 ¬ª s  s  2 1¼º .

Step 4: Taking into account that

ª1 º ª 0 º « 0 »  «1 » s ¬ ¼ ¬ ¼

P s

P1  P2 s

Q s

Q1  Q 2 s  Q3 s 2

and

> 2 1@  >1

0@ s  >1 0@ s 2 ,

from (4.3.15a) and (4.3.17), we obtain

B2

ª1 º ª0º ª0 º « 0 » , C2 P2 «1 » , C3 P3 «0 » , B1 ¬ ¼ ¬ ¼ ¬ ¼ Q 2  a2 B1 >1 0@  1>1 0@ > 0 0@ ,

B3

Q1  a1B1  a2 B 2

C1

P1

> 2 1@  2 >1

0@

>0 1@

Q3

>1

0@ ,

.

Hence the desired matrices B and C are

B

ª B1 º « » «B 2 » «¬ B3 »¼

ª1 0 º « » «0 0 » , C «¬0 1 »¼

>C1

C2

C3 @

ª1 0 0 º «0 1 0 » . ¬ ¼

(4.3.22)

232

Polynomial and Rational Matrices

It is easy to check that (AF, B) (determined by (4.3.21) and (4.3.22)) is a controllable pair and (AF, C) is an observable pair. Thus the obtained realisation is cyclic. 4.3.2 Computation of a Cyclic Realisation with Matrix A in the Jordan Canonical Form The problem of computing the cyclic realisation (AJ, B, C, D) Rˆ n,m,p of a given transfer matrix T(s) with the matrix AJ in the Jordan canonical form can be formulated as follows. Given a normal rational matrix T(s) pum(s), compute the minimal realisation (AJ, B, C, D) R n,m,p with the matrix AJ in the Jordan canonical form ª J1 «0 « «# « ¬« 0

AJ

0 J2

# 0

! !

0º 0 »» % # » » ! J p ¼»

diag ª¬ J1

J 2 ! J p º¼ ,

(4.3.23a)

with

J ci

J ci

0 ! 0

ª si «0 « «# « «0 «¬ 0

1 si

0

# % # 0 ! si 0 ! 0

ª si «1 « «0 « «# «¬ 0

0

0 ! 0

si

0 ! 0

1 #

si ! 0 # % # 0 ! 1

# 0

0

1 ! 0

0º 0 »» # » » 1» si »¼ 0º 0 »» 0» » #» si »¼

mi umi

,

(4.3.23b)

mi umi

,

where i = 1,2,…,p, and s1,s2,…,sp are different poles with multiplicities m1.m2,… …,mp, respectively, p

¦m

i

n

i 1

of the matrix T(s). With the matrix T(s) given, and using (4.1.3) we compute the matrix D, and then the strictly proper rational matrix (4.1.4).

The Problem of Realization

233

The problem has been reduced to the computation of the minimal realization (AJ, B, C) R n,m,p of the strictly proper matrix Tsp(s) pum(s). Firstly consider the case of poles of multiplicity 1 (m1 = m2 = … = mp = 1) of the matrix L( s ) , m( s )

Tsp ( s )

where m s

s  s1 s  s2 ! s  sn ,

si z s j , for i z j , i, j 1, ! , n, (4.3.24)

and s1,s2,…,sn are real numbers. In this case, Tsw (s) can be expressed in the following form Tsp s

n

Ti

¦ss i 1

,

(4.3.25)

i

where Ti

L si

lim s  si Tsp s

– si  s j n

s o si

, i 1, ! , n .

(4.3.26)

j 1 j zi

From (4.3.26) and (3.4.11) it follows that rank Ti

1, i 1, ! , n .

(4.3.27)

We decompose the matrix Ti into the product of the two matrices Bi and Ci of rank equal to 1 Ti

Ci B i , rank Ci

rank Bi

1, i 1, ! , n .

(4.3.28)

We will show that the matrices

AJ

diag > s1

s 2 ! sn @ , B

ª B1 º «B » « 1», C « # » « » ¬B n ¼

are a minimal realisation of the matrix Tsw (s).

>C1

C1 ! Cn @ (4.3.29)

234

Polynomial and Rational Matrices

To this end, we compute C > Is  A J @ B 1

>C1 n

ª 1 C1 ! Cn @ diag « ¬ s  s1 Ci B i

n

Ti

¦ss ¦ss i 1

i

i 1

1 s  s2

ª B1 º « » 1 º « B1 » ! » s  sn ¼ « # » « » ¬B n ¼

Tsp s .

i

Thus the matrices (4.3.29) are a realisation of the matrix Tsp(s). It is easy to check that

rank > Is  A J B @

ª s  s1 « 0 rank « « # « ¬ 0

!

0

s  s2 !

0

0 # 0

B1 º B2 »» # » » Bn ¼

% # ! s  sn

n

for all s , since rank Bi = 1 for i = 1,…,n. Analogously to the above

ª Is  A J º rank « » ¬ C ¼

ª s - s1 « 0 « rank « # « « 0 «¬ C1

0

!

s - s2 ! # 0

% !

C2

!

0 º 0 »» # » » s - sn » Cn »¼

n

for all s , since rank Ci = 1 for i = 1,…,n. Thus (AJ, B) is a controllable pair and (AJ, C) is an observable pair. Hence the realisation (4.3.29) is minimal. The desired cyclic realisation (4.3.29) can be computed using the following procedure. Procedure 4.3.2. Step 1: Using (4.3.26) compute the matrices Ti for i = 1,…,n. Step 2: Decompose the matrices Ti into the product (4.3.28) of the matrices Bi and Ci, i = 1,…,n. Step 3: Compute the desired cyclic realisation (4.3.29). Example 4.3.2. Given the normal strictly proper matrix

The Problem of Realization

Tsw s

1 ª « s 1 « 1 « « s  1 s  2 ¬

1 º s  1» » 1 » s  1 »¼

ª s  2 s  2º 1 , s s 1 2   «¬ 1 s  2 »¼

235

(4.3.30)

compute its cyclic realisation (AJ, B, C). In this case, m(s) = (s + 1)(s + 2) and the matrix (4.3.30) has the real poles s1 = -1 and s2 = -2. Using Procedure 4.3.2 we obtain the following. Step 1: Using (4.3.26), we obtain

T1

T2

lim s  s1 Tsp s s o s1

lim s  s2 Tsp s s o s2

1º ª 1 ª1 1º « 1 » «1 1» , « » 1 ¬ ¼ ¬ s  2 ¼ s 1 ªs  2 s  2º « s 1 s 1» ª 0 0º « » « 1 0 » . s  2» ¬ ¼ « 1 ¬« s  1 s  1 »¼ s 2

(4.3.31)

Step 2: We decompose the matrices (4.3.31) into the products (4.3.28) T1 C2

ª1 1º «1 1» C1B1 , C1 ¬ ¼ ª0º « 1» , B 2 >1 0@ ¬ ¼

ª1º «1» , B1 ¬¼

>1 1@ ,

T2

ª 0 0º « 1 0 » ¬ ¼

C2 B 2 ,

.

Step 3: Thus the desired cyclic realisation of the matrix (4.3.30) is AJ C

ª s1 «0 ¬

>C1

0º s2 »¼ C2 @

ª 1 0 º « 0 2 » , B ¬ ¼ 1 0 ª º «1 1» . ¬ ¼

ª B1 º «B » ¬ 2¼

ª1 1 º «1 0 » , ¬ ¼

(4.3.32)

If the matrix Tsp(s) has complex conjugated poles, then using Procedure 4.3.2, we obtain the cyclic realisation (4.3.29) with complex entries. In order to obtain a realisation with real entries, we additionally transform the complex realisation (4.3.29) by the similarity transformation. Let the equation m(s) = 0 have r distinct real roots s1,s2,…,sr and q distinct pairs of complex conjugated roots a1 + jb1, a1 – jb1,…,aq + jbq, aq  jbq, r + q = n. Let the complex realisation (4.3.29) have the form

236

Polynomial and Rational Matrices

AJ

diag[ s1

s2 … sr

B

ª B1 º « # »» « « Br » « » « c1  jd1 » « c1  jd1 » , « » # » « « c  jd » q» « q « c q  jd q » ¬ ¼

C

ª¬C 1 C2 … Cr

a1  jb1

a1  jb1 … aq  jbq

aq  jbq ],

(4.3.33)

g1  jh1

g 1 jh1 … g q  jhq

gq  jhq º¼ .

In this case, the similarity transformation matrix P has the form

P

diag >1 … 1 D1 … D1 @  C nu n , D1

1 ª1 j º . 2 «¬1  j »¼

(4.3.34)

Using (4.3.33) and (4.3.34), we obtain

AJ

P 1A J P

B

P 1B

C

CP

diag[ s1 … s r

A1 … A q ],

ª B1 º « # » « » « Br » « » « 2c1 » , « 2d » « 1» « # » « 2c » « q» «¬ 2dq »¼ ª¬C1 … C r

(4.3.35)

g1  h1 … g q

 hq º¼ ,

since

0 º ªa  jbk D1 1 « k D ak  jbk »¼ 1 ¬ 0 ª ck  jd k º ª 2ck º D1 1 « » « » , > gk  jhk ¬ ck  jd k ¼ ¬2d k ¼ Ak

ªak «b ¬ k

bk º , ak »¼

gk  jhk @ D1

Thus the realisation (4.3.35) has only real entries.

(4.3.36)

> gk

hk @ .

The Problem of Realization

237

Example 4.3.3. Given the normal matrix

Tsp s

s 3 º ª 1 1 , « 2 s  3 s  4 s  2 ¬  s 4 s  2»¼ 3

(4.3.37)

2

compute its real cyclic realisation (AJ, B, C). The matrix (4.3.37) has one real root s1 = 1 and the pair of the complex conjugated roots s2 = 1 + j, s3 = 1  j since

s  s1 s  s2 s  s3 s  1 s 1  j s  1  j

s 3  3s2  4 s  2 .

Applying Procedure 4.3.2, we obtain the following. Step 1: Using (4.3.26) we obtain

T1

lim s  s1 Tsp s

T2

lim s  s2 Tsp s

so s1

1 ª 1 « s  2s  2 ¬  s 2 2

1

ª1 2º « 1 2 » , ¬ ¼

so s2

ª 1 1 s  1 s  1  j «¬  s2

s 3 º » 4s  2¼ s

1 j

lim s  s3 Tsp s

T3

s3º 4 s  2 »¼ s

1º ª 1 « 2 1  j 2 » , « » ¬  j 1 j2 ¼

(4.3.38)

s os3

ª 1 1 s  1 s  1  j ¬« s 2

s 3 º 4 s  2 ¼» s

1  j

ª 1 « 2 « ¬ j

1º 1  j » 2 . » 1 j2 ¼

Step 2: Decomposing the matrices (4.3.38) into the products (4.3.28), we obtain

T1

T2

T3

ª1 2º ª1 º « 1 2» C1B1 , C1 « 1» , ¬ ¼ ¬ ¼ 1 1 ª º «  2 1  j 2 » C B , C 2 2 2 « » 1 2   j j ¬ ¼ 1 1 ª º «  2 1  j 2 » C B , C 3 3 3 « » 1  j2 ¼ ¬ j

B1

>1

2@ ,

ª 1º « » , B2 ¬ 2 j¼ ª 1 º « » , B3 ¬ 2 j ¼

1º ª 1 «¬  2 1  j 2 »¼ , 1º ª 1 « 2 1  j 2 » . ¬ ¼

Step 3: The desired cyclic realisation (4.3.29) with complex entries is

238

Polynomial and Rational Matrices

AJ

B

ª s1 «0 « «¬ 0 ª B1 º «B » « 2» «¬ B 3 »¼

0 s2 0

0º 0 »» s3 »¼

ª « 1 « « 1 « 2 « 1 « ¬ 2

0 0 ª 1 « 0 1  j 0 « «¬ 0 1  0 º 2 » » 1 1  j » , C >C1 2» 1» 1  j » 2¼

º », » j »¼

(4.3.39) 1 º ª1 1 C2 C 3 @ « ». ¬ 1 2 j 2 j ¼

In order to compute a real realization, we perform the similarity transformation (4.3.34) on the realisation (4.3.39)

P

diag >1 D1 @

ª «1 « «0 « « «0 ¬«

0 1 2 1 2

º 0 » » 1» . j 2» » 1 j » 2 ¼»

Using (4.3.35), we obtain

AJ

B

P 1 AJ P

ª «1 « «0 « « «0 ¬«

º 0 » » 1 1» j 2 2» » 1 1 j » 2 2 ¼» 0

1

ª «1 0 0 º« ª 1 « » 0 » «0 « 0 1  j « 1  j¼» « 0 ¬« 0 «0 ¬«

ª 1 0 0 º « 0 1 1» , « » «¬ 0 1 1»¼ ª ºª º 0 »«1 2 » «1 0 « »« » 1 1 1 1 P 1B « 0 1  j » j » « « 2 2 »« 2 2» » « »« 1 1 1 1 «0  j » « 1  j » «¬ 2 2 »¼ ¬« 2 2 »¼

ª1 2º « » « 1  2» , ¬« 0 1¼»

0 1 2 1 2

º 0 » » 1 » j 2» » 1 j » 2 ¼»

The Problem of Realization

C

CP

ª1 1 « 1 2 j ¬

ª «1 « 1 º« 0 2 j»¼ « « «0 ¬«

0 1 2 1 2

º 0 » » 1 j » 2 » 1» j » 2 »¼

239

ª1 1 0 º « 1 0 2» . ¬ ¼

Let in a general case

m s

s  s1 s  s2 m1

m2

… s  s p

mp

p

,

¦m

i

n,

i 1

where s1 ,s2 ,…, sp are real or complex conjugated poles. In this case, the matrix Tsw (s) can be expressed as

Tsp s

p

mi

¦¦ i 1 j 1

Tij

s  si

mi  j 1

,

(4.3.40)

where

Tij

d j 1 ª m 1 s  si i Tsp s º¼| s s . j 1 ¬ i j  1 ! d s

(4.3.41)

Let only one Jordan block J i of the form (4.3.23b) correspond to the i-th pole si with multiplicity mi , and the matrices B and C have the form

B

ª B1 º « » « B2 » , C « # » « » «¬ B p »¼

ª¬C 1 C2 … C p º¼

(4.3.42a)

where

Bi

ª B i1 º «B » « i2 » , C i « # » « » «¬Bimi »¼

Taking into account that

ªCi 1 Ci 2 … Cim º , i 1,2, … , p . ¬ i ¼

(4.3.42b)

240

Polynomial and Rational Matrices

ª 1 «s  s i « « « 0 « « # « « « 0 ¬

> Is  J i @

1

º » » » 1 … mi 1 » s  si » , i 1,2, … , p , (4.3.43) » % # » » 1 … » s  si ¼

1

1

s  si

2



1 s  si # 0

s  si

mi

we can write Ci > Is  Ji @ Bi 1

1 s  si

mi

¦C

B  ik ik

k 1

mi 1

1

s  si

2

¦C

ik

B ik 1  … 

k 1

(4.3.44)

1

s  si

mi

Ci1B imi .

A comparison of (4.3.40) to (4.3.44) yields j

Tij

¦C

ik

Bi,mi j k , for i 1, …, p, j 1, … , mi .

(4.3.45)

k 1

From (4.3.45) for j = 1, we obtain

Ti1

Ci1Bimi .

(4.3.46)

With the matrix Ti1 given, we decompose it into the column matrix Ci1 and the row matrix B imi . Now for (4.3.45), with j = 2, we obtain

Ti 2

Ci1 Bi,mi 1  Ci 2 Bi,m i .

(4.3.47)

With Ti2 and Ci1 , B i,mi known, we take as the vector Ci2 this column of the matrix Ti2 that corresponds to the first nonzero entry of the matrix B i,mi and we multiply it by the reciprocal of this entry. Then we compute

Ti(21 )

Ti 2  Ci 2 Bi,mi

Ci1Bi,mi 1

(4.3.48)

and Bi,m i 1 for the known vector Ci1 . From (4.3.45), for j = 3, we have

Ti3

Ci1 Bi,mi  2  Ci2 Bi,mi 1  Ci 3Bi,mi .

(4.3.49)

The Problem of Realization

241

With Ti3 and Ci2, Bi,mi 1 known, we can compute Ti 3

Ti 3  Ci 2 B i,mi 1

Ci1Bi,mi 2  Ci 3B i,mi

(4.3.50)

and then, in the same way as Ci2, we can choose Ci3 and compute Bi,mi 2 . Pursuing the procedure further, we can compute Ci1 Ci2,…, Ci,mi and Bi1 Bi2,…, Bi,mi . If the structural decomposition of the matrix L(s) of the following form is given L s

P s Q s  m s G s ,

(4.3.51)

then

s  si

mi

L s mi s

Tsw s

P s Q i s  s  si i G s , i 1, ! , p, (4.3.52) m

where mi s

m s

s  si

mi

, Qi s

Q s . mi s

(4.3.53)

Taking into account (4.3.53), we can write (4.3.41) in the following form Tij

d j 1 1 ª P s Qi s º¼ j  1 ! ds j 1 ¬

s si

for i 1, ! , p, j 1, ! , mi , (4.3.54)

since

d j 1 ª m s  si i G s º¼ ds j 1 ¬

s si

0 for j 1, ! , mi , i 1, ! , p .

From (4.3.54) it follows that the matrices Tij depend only on the matrices P(s) and Q(s) and do not depend on the matrix G(s). Knowing P(s) and Q(s) and using (4.3.54), we can compute the matrices Tij for i = 1,…,p and j = 1,…,mi. It is easy to check that for the matrices (AJ, B, C) determined by (4.3.23) and (4.3.42), (AJ, B) is a controllable pair and (AJ, C) is an observable pair. Thus these matrices constitute a cyclic realisation. If the poles s1,s2,…,sp are complex conjugated, then, according to (4.3.34), in order to obtain a real cyclic realisation one has to transform them by the similarity transformation. From the above considerations, one can derive the following important procedure for computing the cyclic realisation (AJ, B, C) for a given normal, strictly proper matrix Tsw (s) with multiple poles.

242

Polynomial and Rational Matrices

Procedure 4.3.3. Step 1: Compute the poles s1,s2,…,sp of the matrix Tsp(s) and their multiplicities m1,m2,…,mp. Step 2: Using (4.3.41) or (4.3.54) compute the matrices Tij for i = 1,…,p and j = 1,…,mi. Step 3: Using the procedure established above, compute the columns Ci1 Ci2,…, Ci,mi of the matrix Ci and the rows Bi1 Bi2,…, Bi,mi of the matrix Bi for i = 1,…,p. Step 4: Using (4.3.23) and (4.3.42) compute the desired realisation (AJ, B, C). Example 4.3.3. Given the normal matrix Tsp s

1

s  1 s  2 2

2

2 ª s  1 2  s  1 º « », ¬« s  1 s  2 s  2 ¼»

(4.3.55)

compute its cyclic realisation (AJ, B, C). Applying Procedure 4.3.3, we obtain the following. Step 1: The matrix (4.3.55) has the two double real poles: s1 = 1, m1 = 2, s2 = 2, m2 = 2. Step 2: Using (4.3.41), we obtain T11

s  1

2

Tsp s

1

s  2

2

s s1

2 ª s  1 2  s  1 º « » ¬« s  1 s  2 s  2 ¼» s

1

d ª 2 T12 s  1 Tsp s º¼ s s1 ¬ ds 2 ­ ª s  1 2  s  1 º ½° d ° 1 « »¾ ® ds ¯° s  2 2 «¬ s  1 s  2 s  2 »¼ ¿° s T21

s  2

2

Tsp s

ª0 0 º «0 1 » , ¬ ¼

1

ª0 0 º «1 1» , ¬ ¼

s s2

2 ª s  1 2  s  1 º » 2 « s  1 ¬« s  1 s  2 s  2 »¼ s

ª1 1º «0 0 » , ¬ ¼

1

2

d ª 2 T22 s  2 Tsp s º¼ s s2 ¬ ds 2 2 ­  s  1 º ½° d ° 1 ª s  1 « »¾ ® ds ¯° s  1 2 «¬ s  1 s  2 s  2 »¼ ¿° s

2

ª 0 0º « 1 1 » . ¬ ¼

The Problem of Realization

243

Step 3: Using (4.3.46) and (4.3.47), we obtain ª0 0 º ª0º «0 1 » C11B12 , C11 «1 » , B12 ¬ ¼ ¬ ¼ ª0 0 º T12 « » C11B11  C12 B12 . ¬1 1¼ We choose

T11

C12 B11 T21 T22

ª0º « 1» thus C11B11 ¬ ¼ >1 0@ , ª1 «0 ¬ ª0 « 1 ¬

1º 0 »¼ 0º 1 »¼

T12  C12 B12

C21B 22 , C21

ª1 º «0» ,B 22 ¬ ¼

>0 1@ ,

ª0 0 º ª 0 º «1 1»  « 1» > 0 1@ ¬ ¼ ¬ ¼

>1

ª0 0º «1 0 » , ¬ ¼

1@ ,

C21B 21  C22 B 22 .

We choose C22 B 21

ª0º « 1» thus C21B 21 ¬ ¼ > 0 0@ .

T22  C22 B 22

ª 0 0º ª 0 º « 1 1 »  « 1» >1 1@ ¬ ¼ ¬ ¼

ª0 0º «0 0» , ¬ ¼

Step 4: Using (4.3.23) and (4.3.42), we obtain the desired realisation

AJ

C

ª 1 1 0 0 º « 0 1 0 0 » « », B « 0 0 2 1 » « » ¬ 0 0 0 1¼

>C11

C12

C21 C 22 @

ª B11 º ª1 0 º « B » «0 1 » « 12 » « », « B 21 » « 0 0 » « » « » ¬ B 22 ¼ ¬1 1¼ ª0 0 1 0 º «1 1 0 1» . ¬ ¼

A question arises: Is it possible, using the similarity transformation, to obtain a cyclic realisation from a noncyclic realisation and vice versa? The following theorem provides us with the answer. Theorem 4.3.1. A realisation (PAP-1, PB, CP-1, D)Rn,m,p for an arbitrary nonsingular matrix P is a cyclic realisation if and only if (A, B, C, D) Rn,m,p is a cyclic realisation. Proof. According to Theorem 4.2.2 (PAP-1, PB, CP-1, D) is a minimal realisation if and only if (A, B, C) is a minimal realisation. We will show that the similarity transformation does not change the invariant polynomials of A. Let U and V be the

244

Polynomial and Rational Matrices

unimodular matrices of elementary operations on the rows and columns of [Is – A] transforming this matrix to its Smith canonical form, i.e.,

> I s  A @S

U s > Is  A @ V s .

(4.3.56)

Let U (s) = U(s)P-1 and V (s) = PV(s). U (s) and V (s) are also unimodular matrices for any nonsingular matrix P, since det U (s) = det U(s) det P-1 and det V (s) = det P det V(s), with det P and det P-1 independent of the variable s. We will show that the matrices U (s) and V (s) reduce the matrix [Is – PAP-1] to its Smith canonical form [Is – A]S. Using the definition of U (s) and V (s), and (4.3.56), we obtain U s [Is  PAP 1 ] V s U s > Is  A @ V s

U s P 1P > Is  A @ P 1PV s

>Is  A @S .

Thus the matrices [Is – PAP-1], [Is – A] have the same invariant polynomials. Hence (PAP-1, PB, CP-1, D) is a cyclic realisation if and only if (A, B, C, D) is a cyclic realisation. „

4.4 Structural Stability and Computation of the Normal Transfer Matrix 4.4.1 Structural Controllability of Cyclic Matrices A matrix A nun is called a cyclic matrix if its minimal polynomial 0) so that the sum A + BH is a cyclic matrix. Only for a particular choice of the matrix B and H is the sum A + BH a noncyclic matrix. As it is known, a matrix in the Frobenius canonical form

A

ª 0 « 0 « « # « « 0 «¬ a0

!

1

0

0

1

!

# 0

# 0

% !

a1

 a2 !

0 º 0 »» # » » 1 »  an1 »¼

(4.4.2)

is a cyclic matrix regardless of the values of the coefficients a0,a1,a2,..,an1. For example, the matrix

A

ª1 1 0 º «0 1 0 » « » «¬ 0 0 a »¼

(4.4.3)

is a cyclic matrix for all the values of the coefficient a z 1, and it is a noncyclic matrix only for a = 1. Let 'A nun be regarded as a disturbance (uncertainty) to the nominal matrix A nun, and take HB = 'A. Then, according to Theorem 4.4.1, since A is cyclic, the matrix A + 'A is also cyclic. 4.4.2 Structural Stability of Cyclic Realisation A minimal realisation (A, B, C, D) Rˆ n,m,p with the cyclic matrix A is called a cyclic realisation. Theorem 4.4.2. Let (A1, B1, C1, D1)Rn,m,p be a cyclic realisation and (A2, B2, C2, D2)Rn,m,p another realisation of the same dimensions. Then there exist such a number H0 > 0 that all the realisations

A1  H A 2 , B1  H B 2 ,C 1 H C2 , D1  H D2  Rn,m,p are cyclic realisations.

for H  H 0 ,

246

Polynomial and Rational Matrices

Proof. According to Theorem 4.4.1, if A1 is a cyclic matrix, then all the matrices A1 + HA2 are cyclic for |H| < H0. If (A1, B1) is a controllable pair then (A1 + HA2, B1 + HB2) is also controllable for all |H| < H1. Analogously, if (A1, C1) is an observable pair, then (A1 + HA2, C1 + HC2) is also observable for all |H| < H2. Thus the realisation (A1 + HA2, B1 + HB2, C1 + HC2) is a minimal one for |H| < min(H1, H2) = H0, and with (A1 + HA2) being a cyclic matrix it is a cyclic realisation as well. „ Example 4.4.1. A cyclic realisation (A1, B1, C1)R3,3,1 is given with

A1

ª0 « «0 «¬ a10

0º 1 »» , B1 a12 »¼

1 0 a11

ª0º « » « 0 » , C1 «¬1 »¼

>1

0 0@ ,

(4.4.4)

where a10, a11, a12 are arbitrary parameters. The question, arises for which values of the parameters a20, a21, a22, b and c in the matrices

A2

ª0 « «0 «¬ a20

1 0 a21

0º 1 »» , B 2 a22 »¼

ª0 º « » «0 » , C2 «¬b »¼

>0

c 0@

(4.4.5)

is the realisation (A1 + A2, B1 + B2, C1 + C2)R3,3,1 a cyclic one? We denote

A

A1  A 2

ª0 « «0 «¬ a0 >1 c

2 0 a1 0@ ,

C

C1  C2

ak

a1k  a2 k , for k

0º 2 »» , B a2 »¼

B1  B 2

ª 0 º « » « 0 », «¬1  b »¼

where 0,1, 2,

(4.4.6)

A is a cyclic matrix for all the values of the parameters a20, a21, and a22. (A, B) is a controllable pair for those values of the parameters a20, a21, a22 and b, for which det [B, AB, A2B] z 0, that is

The Problem of Realization

0

0

0

2 1  b

1  b

a2 1  b

det

4 1  b

2a2 1  b

2a

1

247

8 1  b z 0 for b z 1. (4.4.7) 3

 a22 1  b

(A, C) is an observable pair for those values of the parameters a20, a21, a22 and c, for which ª C º det «« CA »» z 0 , «¬CA 2 »¼

that is 1 c ª C º « » det « CA » 0 2 «¬CA 2 »¼ 2ca0 2ca1 for a1c 2 z 2  a2 c  a0 c3 ,

0 2c 4  2ca2

4 ª¬ 2  a2 c  a0 c3  a1c 2 º¼ z 0,

(4.4.8)

and taking (4.4.6) into account, we obtain

a20 c3  a21c 2  a22 c z a11c 2  a10 c3  a12 c  2 .

(4.4.9)

Thus (A, B, C) is a cyclic realisation for the parameters a20, a21, a22, b and c in the matrices (4.4.5) satisfying the condition (4.4.9) and b z 1. 4.4.3 Impact of the Coefficients of the Transfer Function on the System Description

Consider the transfer matrix T s

0 º ªs  2 . « s  1 s  2 ¬ 0 s  1  a »¼ 1

This matrix is normal if and only if a = 0, since the polynomial 0 s2 0 s 1 a

s  1  a s  2

is divisible without remainder by (s + 1)(s + 2) if and only if a = 0.

(4.4.10)

248

Polynomial and Rational Matrices

For a = 0 there exists a cyclic realisation (A, B, C) Rˆ 2,2,2 of the matrix (4.4.10) with A

ª 1 0 º « 0 2 » , B ¬ ¼

ª1 0 º «0 1 » , C ¬ ¼

ª1 0 º «0 1» , ¬ ¼

(4.4.11)

which can be computed using Procedure 4.3.2. Applying Procedure 4.3.2 for a z 0, we obtain lim s  s1 T s

T1

s o s1

ª1 «0 ¬

0º a »¼

C 1B 1 ,

1 ªs  2 s  2 «¬ 0 ª1 «0 ¬

C1

s o s2

ª0 « ¬0

0 º 1  a ¼»

C 2B 2 ,

0º , B1 1 »¼

1 ªs  2 s  1 «¬ 0

lim s  s 2 T s

T2

0 º s  1  a »¼

C2

ª1 «0 ¬

0º , a »¼

0 º s  1  a »¼

ª0º « » ,B2 ¬1 ¼

>0

s 1

s 2

1  a @.

Thus the desired minimal realisation is

A

C

ªI 2 s1 « 0 ¬

>C1

0º s2 »¼ C2 @

ª 1 «0 « «¬ 0 ª1 0 «0 1 ¬

0 0º 1 0 »» , B 0 2 »¼ 0º . 1 »¼

ª B1 º «B » ¬ 2¼

0 º ª1 «0 », a « » «¬0 1  a »¼

(4.4.12)

To the cyclic realisation (4.4.11) corresponds a system described by the following state equations x1 x2

 x1  u1 ,

y1

x1 ,

y2

x2 .

2 x2  u2 ,

(4.4.13)

To the minimal realisation (4.4.12) corresponds a system described by the following state equations

Problem of Realisation

x1 x2

249

 x1  u1 ,  x2  au2 ,

x3

2 x3  1  a u2 ,

y1

x1 ,

y2

x2  x3 .

(4.4.14)

Note that for a = 0 in (4.4.14) we do not obtain (4.4.13), and the pair (A, B) of the system (4.4.14) becomes not controllable. The above considerations can be generalised to the case of linear systems of any order. 4.4.4 Computation of the Normal Transfer Matrix on the Basis of Its Approximation Consider a transfer matrix Tp s

L s   pum s , m s

(4.4.15)

whose coefficients differ from the coefficients of a normal transfer matrix T s

L s   pum s . m s

(4.4.16)

The problem of computing the normal transfer function on the basis of its approximation can be formulated in the following way. With the transfer matrix (4.4.15) given, one has to compute the normal transfer matrix (4.4.16), which is a good approximation of the matrix (4.4.15). Below we provide a method for solving the problem. The method is based on the structural decomposition of the matrix (4.4.15) [168]. Applying elementary operations, we transform the polynomial matrix L (s) pum[s] into the form U s L s V s

w s º ª 1 i s « ». ¬k s M s ¼

(4.4.17)

where U(s) and V(s) are polynomial matrices of elementary operations on rows and columns, respectively; i(s) is a polynomial and w s  1u m1 > s @ , k s   p 1 > s @ , M s   p 1 u m1 > s @ .

(4.4.18)

250

Polynomial and Rational Matrices

Pre-multiplication of the matrix L1 s

w s º ª 1 « » ¬k s M s ¼

(4.4.19)

by the unimodular matrix U1 s

01,p 1 º ª 1 «k s » ¬ I p 1 ¼

and post-multiplication by the unimodular matrix V1 s

ª 1 « ¬ 0m1,1

w s º » I m1 ¼

yields U1 s L1 s V1 s

01,m1 ª 1 º «0 ». M s k s w s  ¬ p11, ¼

(4.4.20)

m s M1 s  R s ,

(4.4.21)

In this method we take M s  k s w s

where M1 s   p 1 u m1 > s @ , R s   p 1 u m1 > s @ , deg R s  deg m s .

In the further considerations we omit the polynomial matrix R(s). From (4.4.17) and (4.4.20), we have

L s

U 1 s i s L1 s V 1 s

01,m1 ª 1 º 1 1 U 1 s i s U11 s « » V1 s V s  s k s w s 0 M ¬ p 11, ¼ 01,p 1 º ª 1 01,m1 ª 1 º U 1 s i s « » «0  k s s k s w s I M ¼» p 1 ¼ ¬ p 11 , ¬ ª 1 u« ¬0m11,

w s º 1 » V s . I m1 ¼

(4.4.22)

Problem of Realisation

251

Using (4.4.21) and (4.4.22) and omitting R(s), we obtain 01,p 1 º ª 1 ª 1 U 1 s i s « »« ¬ k s I p 1 ¼ ¬ 0 p 11, ª 1 w s º 1 u« » V s 0 ¬ m11, I m1 ¼ L s

01,m1 º m s M1 s »¼

(4.4.23)

and T s

L s m s

P s Q s  G s , m s

(4.4.24)

where ª 1 º 1 i s U 1 s « » , Q s ª¬1 w s º¼ V s , k s ¬ ¼ (4.4.25) 01,m1 º 1 ª 0 1 G s i s U s « » V s . ¬ 0 p 1,1 M1 s ¼ The above considerations yield the following procedure for solving our problem. P s

Procedure 4.4.1. Step 1: Applying elementary operations, transform the matrix L (s) into the form (4.4.17) and compute the polynomial i(s) as well the unimodular matrices U(s) and V(s). Step 2: Choose M1(s) and R(s). Step 3: Using (4.4.25), compute the matrices P(s), Q(s) and G(s). Step 4: Using (4.4.24), compute the desired normal transfer matrix T(s). Example 4.4.2. Provided that the parameter a is small enough (close to zero), compute the normal transfer matrix for the matrix (4.4.10). Using Procedure 4.4.1, we obtain the following. Step 1: In this case, m (s) = m(s) = (s + 1)(s + 2) and L s

0 º ªs  2 . « 0 s  1  a »¼ ¬

Applying the elementary operations L[1 + 2] and P[1 + 2u(1)], we obtain

(4.4.26)

252

Polynomial and Rational Matrices

U s L s V s ª 1 a « s  1  a ¬

0 º ª 1 0º ª1 1º ª s  2 «0 1» « 0 s  1  a »¼ «¬ 1 1 »¼ ¬ ¼¬ s 1 a º ª 1 « s  1  aº 1 a » ». 1 a « » s  1  a¼ «  s 1  a s 1 a » «¬ 1  a 1  a »¼

Thus i s

1  a ,

V 1 s

ª1 1º « 0 1» , V s ¬ ¼

U s

ª1 0 º «1 1 » , w s ¬ ¼

ª 1 0º 1 « 1 1 » , U s ¬ ¼

s 1 a , k s 1 a



s 1 a , M s 1 a

Step 2: In this case, M s  k s w s

s 1  a s  1 a  2 1 a 1  a

s  1 s  2  a s  2 2 1  a

2

m s M1 s  R s .

We take M1 s

1

1  a

2

and R s

a

s2

1  a

2

.

Step 3: Using (4.4.25), we obtain ª 1 º i s U 1 s « » ¬k s ¼ 1 ª º ª1 1º « » 1  a « s a 1   » » ¬0 1 ¼ «  ¬ 1 a ¼ Q s ª¬1 w s º¼ V 1 s P s

ª s  1  a º ª1 0 º «¬1 1  a »¼ «¬1 1 »¼

ªs  2 «¬ 1  a

ª s2 º « s  1  a » , ¼ ¬

s 1 a º , 1  a »¼

ª1 1º «0 1 » , ¬ ¼ s 1 a . 1 a

The Problem of Realization

253

0 º 1 ª0 i s U 1 s « » V s ¬ 0 M1 s ¼ 0 º ª0 ª1 1º « ª1 0 º 1 ª 1 1º 1 »» « 1  a « » » « ». « ¬ 0 1 ¼ « 0 1  a 2 » ¬1 1 ¼ 1  a ¬ 1 1 ¼ ¼ ¬

G s

Step 4: Thus the desired matrix is T s

P s Q s  G s m s

ªs  2 «1  a 1 « s  1 s  2 « a «¬ 1  a

a º » 1 a ». 2 s 1  2a  1  2a  a » »¼ 1 a

(4.4.27)

Note that for a = 0 in (4.4.27), we obtain the normal transfer matrix T s

P s Q s  G s m s

ªs  2 « s  1 s  2 ¬ 0 1

which can be also obtained from (4.4.10) for a = 0.

0 º , s  1»¼

5 Singular and Cyclic Normal Systems

5.1 Singular Discrete Systems and Cyclic Pairs Consider the following discrete system Exi 1 yi

Axi  Bui , i 



{0, 1, ...} ,

Cxi  Dui

(5.1.1a) (5.1.1b)

where xi n, ui m, yi p are the state, input and output vectors, respectively, at the discrete instant i, and E, A nun, B num, C pun, D pum. The system (5.1.1) is called singular if det E = 0 and standard if det E z 0. We assume that det E = 0 and det[Ez  A ] z 0, for some z 

(the complex numbers field).

(5.1.2)

A system of the form (5.1.1) satisfying the condition (5.1.2) is called a regular system. The transfer matrix of the system (5.1.1) is given by

T(z )

C > Ez  A @ B  D . 1

(5.1.3)

This matrix can be written in the standard form T(z )

P (z ) , d (z )

(5.1.4)

where P(z) pum[z] ( pum[z] is the set of polynomial matrices of dimensions pum), d(z) is the minimal monic common denominator of all the elements of the matrix T(z).


Polynomial and Rational Matrices

Applying elementary operations on rows and columns, we can reduce the matrix $P(z) \in \mathbb{R}^{p\times m}[z]$ to its Smith canonical form
$$P_S(z) = \operatorname{diag}\left[ i_1(z), i_2(z), \ldots, i_r(z), 0, \ldots, 0 \right] \in \mathbb{R}^{p\times m}[z], \qquad (5.1.5)$$
where $i_1(z), \ldots, i_r(z)$ are the monic invariant polynomials satisfying the divisibility condition $i_k(z) \mid i_{k+1}(z)$, k = 1,…,r−1 (the polynomial $i_k(z)$ divides without remainder the polynomial $i_{k+1}(z)$), and r = rank P(z). The invariant polynomials are given by
$$i_k(z) = \frac{D_k(z)}{D_{k-1}(z)}, \quad D_0(z) = 1, \quad k = 1, \ldots, r, \qquad (5.1.6)$$

where $D_k(z)$ is a greatest common divisor of all the k-th order minors of the matrix P(z). The characteristic polynomial $\varphi(z) = \det[Ez - A]$ of the pair (E, A) and the minimal polynomial …

The pairs
$$E_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \ A_1 = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}, \ \det[E_1 z - A_1] = 2z + 1, \qquad E_2 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \ A_2 = \begin{bmatrix} 0 & 1 \\ -2 & -4 \end{bmatrix}, \ \det[E_2 z - A_2] = 4z + 2 \qquad (5.2.26)$$
have the common eigenvalue z = −0.5. Let the desired characteristic polynomials of the pairs $(E_1, A_{z1})$ and $(E_2, A_{z2})$ be $\varphi_{z1} = z + 1$ and $\varphi_{z2} = z + 2$. Using (5.2.22), (5.2.23), (5.2.25) and choosing appropriately the entries of the vector (5.2.24), we obtain
$$K_1 = [0, \ 1], \quad K_2 = [0, \ 3],$$
and
$$A_{z1} = A_1 + B_1 K_1 = \begin{bmatrix} 0 & 1 \\ -1 & -1 \end{bmatrix}, \quad A_{z2} = A_2 + B_2 K_2 = \begin{bmatrix} 0 & 1 \\ -2 & -1 \end{bmatrix}.$$
Therefore,
$$K = \operatorname{diag}[K_1 \ K_2] = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix},$$
and
$$A_z = A + BK = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -2 & -1 \end{bmatrix}.$$
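The closed-loop pencils above are easy to check numerically; a minimal sketch, assuming $B_1 = B_2 = [0, 1]^T$ (the input matrices are not visible in this fragment):

```python
import numpy as np

# Closed-loop pencils of the example, assuming B1 = B2 = [0, 1]^T
# (an assumption: the input matrices are not shown in this fragment).
E1  = np.array([[1.0, 0.0], [0.0, 0.0]])
A1  = np.array([[0.0, 1.0], [-1.0, -2.0]])
A2  = np.array([[0.0, 1.0], [-2.0, -4.0]])
B   = np.array([[0.0], [1.0]])
K1  = np.array([[0.0, 1.0]])
K2  = np.array([[0.0, 3.0]])
Az1 = A1 + B @ K1          # [[0, 1], [-1, -1]]
Az2 = A2 + B @ K2          # [[0, 1], [-2, -1]]

def pencil(E, A, z):
    """det[Ez - A], the characteristic polynomial of the pair (E, A) at z."""
    return np.linalg.det(E * z - A)
```

Evaluating at sample points confirms det[E₁z − A_{z1}] = z + 1 and det[E₁z − A_{z2}] = z + 2, while both open-loop pencils vanish at the common eigenvalue z = −0.5.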

It is easy to check that $\varphi(z) = \det[Ez - A_z] = (z + 1)(z + 2)$.

$$x = [x_1, x_2, \ldots, x_n]^T, \quad c = [c_1, c_2, \ldots, c_m]^T,$$

and $x_i$ along with $c_i$ are the i-th rows of X and C, respectively.

Proof. From (6.5.7), for the entry $c_{ij}$ of C we have
$$c_{ij} = a_i X b_j = \sum_{k=1}^{n} a_{ik} x_k b_j, \quad i = 1, \ldots, m; \ j = 1, \ldots, p, \qquad (6.5.9)$$
where $a_i$ is the i-th row of A, $b_j$ the j-th column of B, and $a_{ij}$ is the (i, j) entry of A. From Definition 6.5.1 and (6.5.8) we have
$$c_{ij} = \left( a_i \otimes b_j^T \right)x = \sum_{k=1}^{n} a_{ik} x_k b_j, \quad i = 1, \ldots, m; \ j = 1, \ldots, p. \qquad (6.5.10)$$
From a comparison of (6.5.9) to (6.5.10) it follows that (6.5.7) and (6.5.8) are equivalent. □

Taking in (6.5.7) B = $I_q$ (p = q) and A = $I_n$ (m = n), we obtain from Theorem 6.5.3 the following two important corollaries.

Corollary 6.5.1. The equation
$$AX = C, \quad A \in \mathbb{R}^{m\times n}, \quad C \in \mathbb{R}^{m\times q}, \qquad (6.5.11)$$
is equivalent to the equation


$$\left( A \otimes I_q \right)x = c. \qquad (6.5.12)$$

Corollary 6.5.2. The equation
$$XB = C, \quad B \in \mathbb{R}^{q\times p}, \quad X \in \mathbb{R}^{n\times q}, \qquad (6.5.13)$$
is equivalent to the equation
$$\left( I_n \otimes B^T \right)x = c. \qquad (6.5.14)$$

For instance, using (6.5.12) one can write the system of linear equations
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ x_{31} & x_{32} \end{bmatrix} = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{bmatrix}$$
in the following form
$$\begin{bmatrix} a_{11} & 0 & a_{12} & 0 & a_{13} & 0 \\ 0 & a_{11} & 0 & a_{12} & 0 & a_{13} \\ a_{21} & 0 & a_{22} & 0 & a_{23} & 0 \\ 0 & a_{21} & 0 & a_{22} & 0 & a_{23} \\ a_{31} & 0 & a_{32} & 0 & a_{33} & 0 \\ 0 & a_{31} & 0 & a_{32} & 0 & a_{33} \end{bmatrix}\begin{bmatrix} x_{11} \\ x_{12} \\ x_{21} \\ x_{22} \\ x_{31} \\ x_{32} \end{bmatrix} = \begin{bmatrix} b_{11} \\ b_{12} \\ b_{21} \\ b_{22} \\ b_{31} \\ b_{32} \end{bmatrix}.$$
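Corollary 6.5.1 can be exercised numerically; a minimal sketch with arbitrary illustrative matrices (not from the text), using the row-stacking convention of the theorem:

```python
import numpy as np

# Solve AX = C via the Kronecker form (A ⊗ I_q) x = c of (6.5.12).
# The vector x stacks the ROWS of X, as in the text; for a row-major
# NumPy array this is exactly what .flatten() produces.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
C = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 1.0]])
q = C.shape[1]

D = np.kron(A, np.eye(q))                 # (A ⊗ I_q)
x = np.linalg.solve(D, C.flatten())       # c stacks the rows of C
X = x.reshape(A.shape[1], q)
```

The result agrees with solving AX = C directly, column by column.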

Consider the matrix equation
$$A_1 X B_1 + A_2 X B_2 + \cdots + A_k X B_k = C, \qquad (6.5.15)$$
where $A_j$, $B_j$, j = 1,…,k, C and X are square matrices of the same size n. From the rows $x_1, x_2, \ldots, x_n$ of X and the rows $c_1, c_2, \ldots, c_n$ of C we build the n²-dimensional vectors
$$x = [x_1, x_2, \ldots, x_n]^T, \quad c = [c_1, c_2, \ldots, c_n]^T.$$
With $A_j X B_j$, j = 1,…,k, written in the equivalent form $[A_j \otimes B_j^T]x$, we can write (6.5.15) as
$$Dx = c, \qquad (6.5.16)$$

Matrix Polynomial Equations, and Rational and Algebraic Matrix Equations

where
$$D = A_1 \otimes B_1^T + A_2 \otimes B_2^T + \cdots + A_k \otimes B_k^T. \qquad (6.5.17)$$
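A quick numerical check of (6.5.16)–(6.5.17), with randomly generated matrices (illustrative, not from the text):

```python
import numpy as np

# Verify that A1 X B1 + A2 X B2 = C is solved by the Kronecker form
# D x = c with D = A1 ⊗ B1^T + A2 ⊗ B2^T (row-stacked x).
rng = np.random.default_rng(1)
n = 3
A1, B1, A2, B2 = (rng.standard_normal((n, n)) for _ in range(4))
X_true = rng.standard_normal((n, n))
C = A1 @ X_true @ B1 + A2 @ X_true @ B2

D = np.kron(A1, B1.T) + np.kron(A2, B2.T)
X = np.linalg.solve(D, C.flatten()).reshape(n, n)
```

For generic random data the matrix D is nonsingular and X recovers the matrix used to build C.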

Now consider the matrix equation
$$AX + XB = C, \qquad (6.5.18)$$
where $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{m\times m}$, $C \in \mathbb{R}^{n\times m}$ are given and $X \in \mathbb{R}^{n\times m}$ is the unknown. Let
$$x = [x_1, x_2, \ldots, x_n]^T, \quad c = [c_1, c_2, \ldots, c_n]^T,$$
where $x_i$ and $c_i$ are the i-th rows of X and C, respectively. Using (6.5.12) and (6.5.14) we can subordinate the vector $(A \otimes I_m)x$ to AX, and the vector $(I_n \otimes B^T)x$ to XB. Thus we can write (6.5.18) in the form
$$\left( A \otimes I_m + I_n \otimes B^T \right)x = c. \qquad (6.5.19)$$

6.5.3 Eigenvalues of Matrix Polynomials

Consider a polynomial of degree p of two independent variables x and y with complex coefficients $c_{ij}$ of the following form
$$w(x, y) = \sum_{i,j=0}^{p} c_{ij} x^i y^j. \qquad (6.5.20)$$
Let A and B be square matrices of sizes m and n, respectively, with their entries being either real or complex. Consider a square matrix of size mn, given by the formula
$$w(A, B) = \sum_{i,j=0}^{p} c_{ij} A^i \otimes B^j, \qquad (6.5.21)$$
where $A^i \otimes B^j$ is the Kronecker product of $A^i$ and $B^j$ (see Definition 6.5.1).

Theorem 6.5.4. If $\lambda_1, \lambda_2, \ldots, \lambda_m$ are the eigenvalues of A, and $\mu_1, \mu_2, \ldots, \mu_n$ are the eigenvalues of B, then $w(\lambda_i, \mu_j)$ for i = 1,2,…,m; j = 1,2,…,n are the eigenvalues of the matrix w(A, B) defined by (6.5.21).

Proof. Let $T_A$ and $T_B$ be nonsingular matrices transforming A and B to their respective Jordan canonical forms $A_J$ and $B_J$, i.e.,


$$A_J = T_A A T_A^{-1}, \quad B_J = T_B B T_B^{-1}. \qquad (6.5.22)$$
On the main diagonal of $A_J$ are the eigenvalues $\lambda_1, \ldots, \lambda_m$, and on the main diagonal of $B_J$ the eigenvalues $\mu_1, \ldots, \mu_n$. It follows from the definition of the Kronecker product that on the main diagonal of the matrix $A_J \otimes B_J$ are the products $\lambda_i \mu_j$, for i = 1,…,m; j = 1,…,n. Hence on the main diagonal of $w(A_J, B_J)$ are the values $w(\lambda_i, \mu_j)$, for i = 1,…,m; j = 1,…,n. We will show that $w(A_J, B_J)$ and $w(A, B)$ are similar matrices, thus having the same eigenvalues. Taking into account (6.5.22) and
$$\left( A_1 A_2 A_3 \right) \otimes \left( B_1 B_2 B_3 \right) = \left( A_1 \otimes B_1 \right)\left( A_2 \otimes B_2 \right)\left( A_3 \otimes B_3 \right),$$
we can write
$$A_J \otimes B_J = \left( T_A A T_A^{-1} \right) \otimes \left( T_B B T_B^{-1} \right) = \left( T_A \otimes T_B \right)\left( A \otimes B \right)\left( T_A^{-1} \otimes T_B^{-1} \right).$$
With the equality $T_A^{-1} \otimes T_B^{-1} = \left( T_A \otimes T_B \right)^{-1}$ taken into consideration, we have
$$A_J \otimes B_J = \left( T_A \otimes T_B \right)\left( A \otimes B \right)\left( T_A \otimes T_B \right)^{-1}.$$
Thus $A_J \otimes B_J$ and $A \otimes B$ are similar matrices. Hence
$$w(A_J, B_J) = \left( T_A \otimes T_B \right) w(A, B) \left( T_A \otimes T_B \right)^{-1};$$
$w(A_J, B_J)$ and $w(A, B)$, as similar matrices, have the same eigenvalues. □

From Theorem 6.5.4 for w(x, y) = x + y and w(x, y) = xy, we have the following corollaries.

Corollary 6.5.3. If $\lambda_1, \ldots, \lambda_m$ are the eigenvalues of A, and $\mu_1, \ldots, \mu_n$ are the eigenvalues of $B^T$, then $\lambda_i + \mu_j$ for i = 1,2,…,m; j = 1,2,…,n are the eigenvalues of the matrix
$$A \otimes I_n + I_m \otimes B^T. \qquad (6.5.23)$$


Corollary 6.5.4. If $\lambda_1, \ldots, \lambda_m$ are the eigenvalues of A, and $\mu_1, \ldots, \mu_n$ the eigenvalues of B, then $\lambda_i \mu_j$ for i = 1,2,…,m; j = 1,2,…,n are the eigenvalues of the matrix $A \otimes B$.
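Both corollaries are easy to confirm numerically; a small sketch with illustrative triangular matrices whose eigenvalues can be read off the diagonal:

```python
import numpy as np

# Corollary 6.5.3: eigenvalues of A ⊗ I_n + I_m ⊗ B^T are the sums  λi + μj.
# Corollary 6.5.4: eigenvalues of A ⊗ B are the products λi μj.
A = np.array([[1.0, 2.0], [0.0, 3.0]])    # eigenvalues 1, 3
B = np.array([[5.0, 0.0], [1.0, 7.0]])    # eigenvalues 5, 7

S = np.kron(A, np.eye(2)) + np.kron(np.eye(2), B.T)
P = np.kron(A, B)
sums  = np.sort(np.linalg.eigvals(S).real)   # expect {6, 8, 8, 10}
prods = np.sort(np.linalg.eigvals(P).real)   # expect {5, 7, 15, 21}
```

Here S is upper triangular with diagonal {6, 8, 8, 10} and P is block upper triangular with the spectra of B and 3B, so the expected sets can be verified by inspection.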

6.6 The Sylvester Equation and Its Generalization

6.6.1 Existence of Solutions

Consider the following Sylvester equation
$$AX - XB = C, \qquad (6.6.1)$$
where A, B are square matrices of size m and n, respectively, and C and X are rectangular matrices of dimension m×n. With A, B and C given, one has to compute the matrix X satisfying (6.6.1). From the rows $x_1, \ldots, x_m$ of X and the rows $c_1, \ldots, c_m$ of C we build the mn-dimensional vectors
$$x = [x_1, x_2, \ldots, x_m]^T, \quad c = [c_1, c_2, \ldots, c_m]^T. \qquad (6.6.2)$$
Using (6.5.19) we can write (6.6.1) as
$$Dx = c, \qquad (6.6.3)$$
where
$$D = A \otimes I_n - I_m \otimes B^T \qquad (6.6.4)$$
is a square matrix of size mn.

Theorem 6.6.1. Equation (6.6.1) has exactly one solution if and only if the matrices A and B do not have the same eigenvalues.

Proof. There exists one solution to (6.6.3) if and only if D is a nonsingular matrix. D is nonsingular if and only if all its eigenvalues are nonzero. According to Corollary 6.5.3, the numbers $\lambda_i - \mu_j$, for i = 1,2,…,m; j = 1,2,…,n, are the eigenvalues of the matrix (6.6.4). Thus D has nonzero eigenvalues if and only if the matrices A and B do not have common eigenvalues. In this case, D is a nonsingular matrix and (6.6.3) (thus also (6.6.1)) has exactly one solution
$$x = D^{-1}c. \qquad (6.6.5)$$


Note that the Lyapunov equation
$$A^T P + PA = -Q \qquad (6.6.6)$$
is a particular case of the Sylvester equation (6.6.1) for X = P, A := $A^T$, B := −A and C := −Q. In the particular case C = 0, (6.6.1) takes the form
$$AX - XB = 0. \qquad (6.6.7)$$
If A and B do not have the same eigenvalues, then D is a nonsingular matrix and the equation Dx = 0 has only the zero solution x = 0. Thus we have the following corollary.

Corollary 6.6.1. If the matrices A and B do not have the same eigenvalues, then (6.6.7) has only the zero solution X = 0. If A and B have at least one common eigenvalue, then (6.6.7) has a nonzero solution.

Theorem 6.6.2. If all the eigenvalues of A and of −B have negative real parts, then the unique solution to (6.6.1) is
$$X = -\int_0^\infty e^{At} C e^{-Bt}\, dt. \qquad (6.6.8)$$

Proof. Substituting (6.6.8) into (6.6.1), we obtain
$$AX - XB = -\int_0^\infty \left( A e^{At} C e^{-Bt} - e^{At} C e^{-Bt} B \right) dt = -\int_0^\infty \frac{d}{dt}\left[ e^{At} C e^{-Bt} \right] dt = -\left. e^{At} C e^{-Bt} \right|_0^\infty = C,$$
since the matrices A and −B are asymptotically stable and
$$\lim_{t \to \infty} e^{At} C e^{-Bt} = 0.$$
In this case, A and B do not have common eigenvalues and, according to Theorem 6.6.1, there exists only one solution to (6.6.1). □


6.6.2 Methods of Solving the Sylvester Equation

6.6.2.1 The Kronecker Product Method

When A and B do not have common eigenvalues, we can solve (6.6.1) by the use of the following procedure.

Procedure 6.6.1.
Step 1: From the rows of X and C build the vectors x and c of the form (6.6.2).
Step 2: Using (6.6.4), compute the matrix D.
Step 3: Using (6.6.5), compute the vector x and then the desired matrix X.

Example 6.6.1. Solve (6.6.1) with respect to X for the matrices
$$A = \begin{bmatrix} -2 & 1 \\ 0 & -3 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -3 & 3 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 1 \\ 0 & -1 & -1 \end{bmatrix}. \qquad (6.6.9)$$
Applying Procedure 6.6.1, we obtain the following.

Step 1: Using (6.6.2) we build the vector
$$c^T = [1, \ 0, \ 1, \ 0, \ -1, \ -1].$$

Step 2: According to (6.6.4), the matrix D is
$$D = A \otimes I_3 - I_2 \otimes B^T = \begin{bmatrix} -2 & 0 & -1 & 1 & 0 & 0 \\ -1 & -2 & 3 & 0 & 1 & 0 \\ 0 & -1 & -5 & 0 & 0 & 1 \\ 0 & 0 & 0 & -3 & 0 & -1 \\ 0 & 0 & 0 & -1 & -3 & 3 \\ 0 & 0 & 0 & 0 & -1 & -6 \end{bmatrix}.$$

Step 3: Using (6.6.5), we obtain the desired solution of the form
$$X = \begin{bmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \end{bmatrix} = \frac{1}{288}\begin{bmatrix} -119 & 34 & -59 \\ -9 & 126 & 27 \end{bmatrix}.$$
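Procedure 6.6.1 amounts to a few lines of code; a sketch on the matrices of Example 6.6.1:

```python
import numpy as np

# Example 6.6.1: solve AX - XB = C by the Kronecker product method.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -3.0, 3.0]])
C = np.array([[1.0, 0.0, 1.0],
              [0.0, -1.0, -1.0]])
m, n = C.shape

# Step 2: D = A ⊗ I_n - I_m ⊗ B^T  (6.6.4)
D = np.kron(A, np.eye(n)) - np.kron(np.eye(m), B.T)
# Step 3: solve D x = c and restore X from its rows  (6.6.5)
X = np.linalg.solve(D, C.flatten()).reshape(m, n)
```

Multiplying X by 288 reproduces the integer matrix of the example, [[−119, 34, −59], [−9, 126, 27]].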


6.6.2.2 Integration Method

If the matrices A and −B have all their eigenvalues with negative real parts, then we can solve (6.6.1) using the following procedure.

Procedure 6.6.2.
Step 1: Compute the minimal polynomials $\psi_A(\lambda)$, $\psi_B(\lambda)$ of the matrices A and −B.
Step 2: Compute $e^{At}$ and $e^{-Bt}$.
Step 3: From (6.6.8), compute the desired solution X.

Example 6.6.2. Using Procedure 6.6.2, solve (6.6.1) (with respect to X) for the matrices (6.6.9). The matrices A and −B have all their eigenvalues with negative real parts. Using Procedure 6.6.2, we obtain the following.

Step 1: In this case, the characteristic polynomials of the matrices A and −B are the same as their minimal polynomials:
$$\psi_A(\lambda) = \varphi_A(\lambda) = \det[\lambda I_m - A] = \begin{vmatrix} \lambda + 2 & -1 \\ 0 & \lambda + 3 \end{vmatrix} = (\lambda + 2)(\lambda + 3),$$
$$\psi_B(\lambda) = \varphi_B(\lambda) = \det[\lambda I_n + B] = \begin{vmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 1 & -3 & \lambda + 3 \end{vmatrix} = (\lambda + 1)^3.$$

Step 2: The matrix A has the two single eigenvalues $\lambda_1 = -2$, $\lambda_2 = -3$, and the matrix −B has the one eigenvalue $\mu_1 = -1$ of multiplicity 3. Using the Sylvester formula, we obtain
$$e^{At} = Z_1 e^{\lambda_1 t} + Z_2 e^{\lambda_2 t} = \frac{A - \lambda_2 I_m}{\lambda_1 - \lambda_2} e^{\lambda_1 t} + \frac{A - \lambda_1 I_m}{\lambda_2 - \lambda_1} e^{\lambda_2 t} = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} e^{-2t} + \begin{bmatrix} 0 & -1 \\ 0 & 1 \end{bmatrix} e^{-3t} = \begin{bmatrix} e^{-2t} & e^{-2t} - e^{-3t} \\ 0 & e^{-3t} \end{bmatrix}$$
and
$$e^{-Bt} = \left[ I_3 - (B - I_3)t + \tfrac{1}{2}(B - I_3)^2 t^2 \right] e^{\mu_1 t} = \begin{bmatrix} 1 + t + \tfrac{1}{2}t^2 & -t - t^2 & \tfrac{1}{2}t^2 \\ \tfrac{1}{2}t^2 & 1 + t - t^2 & -t + \tfrac{1}{2}t^2 \\ -t + \tfrac{1}{2}t^2 & 3t - t^2 & 1 - 2t + \tfrac{1}{2}t^2 \end{bmatrix} e^{-t},$$
where
$$B - I_3 = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \\ 1 & -3 & 2 \end{bmatrix}, \quad (B - I_3)^2 = \begin{bmatrix} 1 & -2 & 1 \\ 1 & -2 & 1 \\ 1 & -2 & 1 \end{bmatrix}.$$

Step 3: From (6.6.8) we have
$$X = -\int_0^\infty e^{At} C e^{-Bt}\, dt = \frac{1}{288}\begin{bmatrix} -119 & 34 & -59 \\ -9 & 126 & 27 \end{bmatrix};$$
the entries of the integrand are polynomials in t multiplied by $e^{-3t}$ and $e^{-4t}$, so the integral converges. The result is consistent with that obtained in Example 6.6.1.

6.6.2.3 Minimal Polynomial Method

The method is based on the following theorem.

Theorem 6.6.3. Let
$$\Psi_A(s) = s^m + a_{m-1}s^{m-1} + \cdots + a_1 s + a_0$$
be the minimal polynomial of A and
$$\Psi_B(s) = s^n + b_{n-1}s^{n-1} + \cdots + b_1 s + b_0$$
the minimal polynomial of B. If A and B do not have common eigenvalues, then the solution to (6.6.1) is given by
$$X = \left[ \Psi_B(A) \right]^{-1}\left[ C_n + b_{n-1}C_{n-1} + \cdots + b_1 C_1 \right] \qquad (6.6.10)$$
or
$$X = -\left[ C_m + a_{m-1}C_{m-1} + \cdots + a_1 C_1 \right]\left[ \Psi_A(B) \right]^{-1}, \qquad (6.6.11)$$
where
$$C_k = \sum_{i=1}^{k} A^{i-1} C B^{k-i}, \quad C_0 = 0, \quad k = 1, 2, \ldots. \qquad (6.6.12)$$

Proof. Using (6.6.1) and (6.6.12) we can write
$$C_0 = A^0 X - X B^0 = 0,$$
$$C_1 = AX - XB = C,$$
$$C_2 = A^2 X - X B^2 = AC + CB,$$
$$C_3 = A^3 X - X B^3 = A^2 C + ACB + CB^2,$$
$$\vdots$$
$$C_k = A^k X - X B^k = \sum_{i=1}^{k} A^{i-1} C B^{k-i}. \qquad (6.6.13)$$
Taking into account that $C_k = A^k X - X B^k$, k = 1, 2, …, we can write the expression $b_1 C_1 + b_2 C_2 + \cdots + b_{n-1}C_{n-1} + C_n$ in the form
$$\Psi_B(A) X - X \Psi_B(B) = b_1 C_1 + b_2 C_2 + \cdots + b_{n-1}C_{n-1} + C_n. \qquad (6.6.14)$$
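Formula (6.6.10) can be implemented directly from the recursion behind (6.6.12); a sketch that, for simplicity, uses the characteristic polynomial of B as the annihilating polynomial (for the companion matrix of Example 6.6.1 it coincides with the minimal polynomial):

```python
import numpy as np

# Minimal-polynomial method (6.6.10) for AX - XB = C, on Example 6.6.1.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -3.0, 3.0]])
C = np.array([[1.0, 0.0, 1.0],
              [0.0, -1.0, -1.0]])
n = B.shape[0]

b = np.poly(B)[::-1]        # b[k] = coefficient of s^k in Psi_B(s), b[n] = 1
Ck = np.zeros_like(C)       # C_0 = 0
Bk = np.eye(n)              # B^k
Ak = np.eye(A.shape[0])     # A^k
S = np.zeros_like(C)        # accumulates b_1 C_1 + ... + b_n C_n
PsiB_A = np.zeros_like(A)   # accumulates Psi_B(A)
for k in range(n + 1):
    PsiB_A = PsiB_A + b[k] * Ak
    if k >= 1:
        S = S + b[k] * Ck
    Ck = A @ Ck + C @ Bk    # C_{k+1} = A C_k + C B^k, from (6.6.12)
    Bk = Bk @ B
    Ak = Ak @ A
X = np.linalg.solve(PsiB_A, S)   # X = [Psi_B(A)]^{-1} (C_n + ... + b_1 C_1)
```

The result agrees with the Kronecker product method of Example 6.6.1.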

Then, invoking that $\Psi_B(B) = 0$, we obtain (6.6.10); (6.6.11) follows analogously from $\Psi_A(A) = 0$. □

$$\left[ I_r s_1 - F \right]^{-1} X - XE\left[ E s_1 - A \right]^{-1} = -\left[ I_r s_1 - F \right]^{-1} HC\left[ E s_1 - A \right]^{-1} \in \mathbb{R}^{r\times n}. \qquad (6.6.27)$$

Proof. If the condition (6.6.26) is satisfied, then there exists a number $s_1$ such that $[E s_1 - A]$ is a nonsingular matrix. With the matrix $XE s_1$ added to and subtracted from the left-hand side of (6.6.25), we obtain
$$X\left[ E s_1 - A \right] - \left[ I_r s_1 - F \right] XE = -HC. \qquad (6.6.28)$$


Pre-multiplying (6.6.28) by $[I_r s_1 - F]^{-1}$ and post-multiplying it by $[E s_1 - A]^{-1}$, we obtain (6.6.27). □

Note that the eigenvalues of A are the reciprocals of the eigenvalues of F, and
$$\det\left[ I_n s - B \right] = \det\left[ I_n s - E\left( E s_1 - A \right)^{-1} \right].$$

6.7 Algebraic Matrix Equations with Two Unknowns

6.7.1 Existence of Solutions

Consider the following matrix equation
$$XA + BY + XCY = D, \qquad (6.7.1)$$
where the matrices $A \in \mathbb{R}^{n\times q}$, $B \in \mathbb{R}^{p\times m}$, $C \in \mathbb{R}^{n\times m}$ and $D \in \mathbb{R}^{p\times q}$ are known. One has to compute the matrices $X \in \mathbb{R}^{p\times n}$ and $Y \in \mathbb{R}^{m\times q}$ satisfying the equation (6.7.1). From now on we will use the Moore–Penrose pseudo-inverse of a matrix, which we will shortly call the pseudo-inverse. The pseudo-inverse of $A \in \mathbb{R}^{m\times n}$, denoted $A^+ \in \mathbb{R}^{n\times m}$, is a matrix that satisfies the following conditions:
$$A A^+ A = A, \qquad (6.7.2a)$$
$$A^+ A A^+ = A^+, \qquad (6.7.2b)$$
$$\left( A A^+ \right)^T = A A^+, \qquad (6.7.2c)$$
$$\left( A^+ A \right)^T = A^+ A. \qquad (6.7.2d)$$
For an arbitrary $A \in \mathbb{R}^{m\times n}$ there exists exactly one pseudo-inverse $A^+ \in \mathbb{R}^{n\times m}$. It can be computed using the SVD decomposition of A [158]. If $A \in \mathbb{R}^{n\times n}$ is a nonsingular matrix, then $A^+ = A^{-1}$.

Theorem 6.7.1. Equation (6.7.1) has a solution if
$$\operatorname{rank}\left[ D + BC^+ A \right] \le \max(n, m), \qquad (6.7.3)$$


where $C^+$ is the pseudo-inverse of C.

Proof. Taking
$$Z_a = A + CY, \qquad (6.7.4a)$$
we can write (6.7.1) in the following form
$$X Z_a + BY = D. \qquad (6.7.5a)$$
Solving (6.7.4a) with respect to Y, we obtain
$$Y = C^+\left( Z_a - A \right), \qquad (6.7.6a)$$
and substituting (6.7.6a) into (6.7.5a) we have
$$\left( X + BC^+ \right) Z_a = D + BC^+ A. \qquad (6.7.7a)$$
The matrix $D + BC^+A \in \mathbb{R}^{p\times q}$ can be expressed as the product of two matrices $H \in \mathbb{R}^{p\times n}$ and $F \in \mathbb{R}^{n\times q}$ (obviously not unique)
$$D + BC^+ A = HF \qquad (6.7.8)$$
if the condition (6.7.3) is met. In this case, with (6.7.7a) and (6.7.8) taken into account, we obtain $X + BC^+ = H$ and $Z_a = F$. With H and F known, we can compute the desired matrices from the following relationships
$$X = H - BC^+, \quad Y = C^+\left( F - A \right). \qquad (6.7.9a)$$

On the other hand, taking
$$Z_b = B + XC, \qquad (6.7.4b)$$
we can write (6.7.1) as
$$XA + Z_b Y = D. \qquad (6.7.5b)$$
Solving (6.7.4b) with respect to X, we obtain
$$X = \left( Z_b - B \right) C^+, \qquad (6.7.6b)$$
and substituting (6.7.6b) into (6.7.5b), we have
$$Z_b\left( C^+ A + Y \right) = D + BC^+ A. \qquad (6.7.7b)$$
Thus $D + BC^+A$ can be expressed as the product (6.7.8), provided the condition (6.7.3) is met. In this case, with (6.7.7b) and (6.7.8) taken into account, we have $Z_b = H$ and $C^+A + Y = F$. With H and F known, we can compute
$$X = \left( H - B \right) C^+, \quad Y = F - C^+ A. \qquad (6.7.9b)$$ ∎

Remark 6.7.1. The decomposition (6.7.8) is not unique. Therefore, (6.7.1) has many different solutions X, Y.

6.7.2 Computation of Solutions

If the condition (6.7.3) is met, then we can compute a solution X, Y to (6.7.1) using the following procedure, which ensues from the proof of Theorem 6.7.1.

Procedure 6.7.1.
Step 1: Compute the pseudo-inverse $C^+$ of C and the matrix $D + BC^+A$.
Step 2: Decompose the matrix $D + BC^+A \in \mathbb{R}^{p\times q}$ into the product of $H \in \mathbb{R}^{p\times n}$ and $F \in \mathbb{R}^{n\times q}$.
Step 3: Using (6.7.9a), compute the desired solution X, Y.

Example 6.7.1. Using Procedure 6.7.1, compute a solution X, Y to (6.7.1) for the matrices
$$A = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ -1 & -1 \end{bmatrix}, \quad C = \begin{bmatrix} 2 & 3 \\ 1 & 1 \end{bmatrix}, \quad D = \begin{bmatrix} -10 \\ 10 \end{bmatrix}. \qquad (6.7.10)$$
In this case, m = p = n = 2, q = 1, and C is a nonsingular matrix. Hence
$$C^+ = C^{-1} = \begin{bmatrix} -1 & 3 \\ 1 & -2 \end{bmatrix}, \quad D + BC^+ A = \begin{bmatrix} -10 \\ 10 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ -1 & -1 \end{bmatrix}\begin{bmatrix} -1 & 3 \\ 1 & -2 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -5 \\ 8 \end{bmatrix}. \qquad (6.7.11)$$
The condition (6.7.3) is met, since
$$\operatorname{rank}\left[ D + BC^+ A \right] = \operatorname{rank}\begin{bmatrix} -5 \\ 8 \end{bmatrix} = 1 < n = 2.$$


Thus (6.7.1) has a solution. Applying Procedure 6.7.1, we obtain the following.

Step 1: The matrix $D + BC^+A$ has the form (6.7.11).
Step 2: The matrix (6.7.11) can be decomposed into the product of two matrices in different ways. We consider two cases of this decomposition:
$$D + BC^+ A = \begin{bmatrix} -5 \\ 8 \end{bmatrix} = H_1 F_1 \ \text{ for } \ H_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \ F_1 = \begin{bmatrix} -5 \\ 8 \end{bmatrix}, \qquad (6.7.12a)$$
$$D + BC^+ A = \begin{bmatrix} -5 \\ 8 \end{bmatrix} = H_2 F_2 \ \text{ for } \ H_2 = \begin{bmatrix} 0 & 5 \\ 1 & -1 \end{bmatrix}, \ F_2 = \begin{bmatrix} 7 \\ -1 \end{bmatrix}. \qquad (6.7.12b)$$
Step 3: Using (6.7.9a), we obtain for the case (6.7.12a)
$$X = H_1 - BC^+ = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ -1 & -1 \end{bmatrix}\begin{bmatrix} -1 & 3 \\ 1 & -2 \end{bmatrix} = \begin{bmatrix} 2 & -3 \\ 0 & 2 \end{bmatrix}, \quad Y = C^+\left( F_1 - A \right) = \begin{bmatrix} -1 & 3 \\ 1 & -2 \end{bmatrix}\left( \begin{bmatrix} -5 \\ 8 \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right) = \begin{bmatrix} 24 \\ -18 \end{bmatrix}, \qquad (6.7.13a)$$
and for the case (6.7.12b)
$$X = H_2 - BC^+ = \begin{bmatrix} 0 & 5 \\ 1 & -1 \end{bmatrix} - \begin{bmatrix} -1 & 3 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 1 & 0 \end{bmatrix}, \quad Y = C^+\left( F_2 - A \right) = \begin{bmatrix} -1 & 3 \\ 1 & -2 \end{bmatrix}\left( \begin{bmatrix} 7 \\ -1 \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right) = \begin{bmatrix} -15 \\ 12 \end{bmatrix}. \qquad (6.7.13b)$$
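Procedure 6.7.1 for the case (6.7.12a) can be scripted directly; `np.linalg.pinv` computes the Moore–Penrose pseudo-inverse used here:

```python
import numpy as np

# Example 6.7.1: solve XA + BY + XCY = D with Procedure 6.7.1, case (6.7.12a).
A = np.array([[1.0], [2.0]])
B = np.array([[1.0, 0.0], [-1.0, -1.0]])
Cm = np.array([[2.0, 3.0], [1.0, 1.0]])     # the matrix C of (6.7.1)
D = np.array([[-10.0], [10.0]])

# Step 1: pseudo-inverse and D + B C+ A  (here C is nonsingular, so C+ = C^-1)
Cp = np.linalg.pinv(Cm)
M = D + B @ Cp @ A                          # = [[-5], [8]], rank 1
# Step 2: decomposition M = H F with H = I_2, F = M
H, F = np.eye(2), M
# Step 3: the solution (6.7.9a)
X = H - B @ Cp                              # [[2, -3], [0, 2]]
Y = Cp @ (F - A)                            # [[24], [-18]]
```

Substituting X and Y back into (6.7.1) reproduces D.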

It is easy to check that the matrices (6.7.13a) and (6.7.13b) satisfy (6.7.1) for the matrices (6.7.10).

6.8 Lyapunov Equations

6.8.1 Solutions to Lyapunov Equations

Definition 6.8.1. The matrix equations
$$XA + A^T X = -Q, \qquad (6.8.1a)$$
$$AX + XA^T = -Q \qquad (6.8.1b)$$


are called the Lyapunov equations if the matrices $A \in \mathbb{R}^{n\times n}$ and $Q \in \mathbb{R}^{n\times n}$ (positive definite or positive semidefinite) are given, and $X \in \mathbb{R}^{n\times n}$ (positive definite) is the matrix we seek.

Theorem 6.8.1. Let A be asymptotically stable and Q be a symmetric, positive definite (or semidefinite) matrix. Then (6.8.1a) has exactly one solution of the form
$$X = \int_0^\infty e^{A^T t} Q e^{At}\, dt, \qquad (6.8.2a)$$
which is a positive definite (semidefinite) matrix, and (6.8.1b) has exactly one solution of the form
$$X = \int_0^\infty e^{At} Q e^{A^T t}\, dt, \qquad (6.8.2b)$$
which is a positive definite (semidefinite) matrix.

Proof. Substituting (6.8.2a) into (6.8.1a), we obtain
$$XA + A^T X = \int_0^\infty e^{A^T t} Q e^{At}\, dt\, A + A^T \int_0^\infty e^{A^T t} Q e^{At}\, dt = \int_0^\infty \frac{d}{dt}\left[ e^{A^T t} Q e^{At} \right] dt = \left. e^{A^T t} Q e^{At} \right|_0^\infty = -Q,$$
since by assumption A is asymptotically stable and $e^{At} \to 0$ for $t \to \infty$. We will show that if Q is positive definite (semidefinite), then the matrix (6.8.2a) is positive definite (semidefinite), that is, its quadratic form is positive definite (semidefinite): $z^T X z > 0$ ($z^T X z \ge 0$) for every $z \ne 0$. Using (6.8.2a) we can write
$$z^T X z = \int_0^\infty z^T e^{A^T t} Q e^{At} z\, dt. \qquad (6.8.3)$$
The matrices $e^{A^T t}$ and $e^{At}$ are nonsingular for every $t \ge 0$. Thus if Q is a positive definite (semidefinite) matrix, then it follows from (6.8.3) that $z^T X z > 0$ for every $z \ne 0$ ($z^T X z \ge 0$ for every z).


In order to show that (6.8.1a) has exactly one solution, we assume that it has two different solutions $X_1$ and $X_2$, that is,
$$X_1 A + A^T X_1 = -Q \quad\text{and}\quad X_2 A + A^T X_2 = -Q. \qquad (6.8.4)$$
Subtracting these equations one from another, we obtain
$$\left( X_1 - X_2 \right) A + A^T\left( X_1 - X_2 \right) = 0. \qquad (6.8.5)$$
Pre-multiplying (6.8.5) by $e^{A^T t}$ and post-multiplying it by $e^{At}$, we obtain
$$e^{A^T t}\left[ \left( X_1 - X_2 \right) A + A^T\left( X_1 - X_2 \right) \right] e^{At} = 0$$
and
$$\frac{d}{dt}\left[ e^{A^T t}\left( X_1 - X_2 \right) e^{At} \right] = 0. \qquad (6.8.6)$$
It follows from (6.8.6) that $e^{A^T t}(X_1 - X_2)e^{At}$ is a constant matrix for all t. Evaluating it for t = 0 gives $X_1 - X_2$, since $e^{At}|_{t=0} = I$; evaluating it for $t \to \infty$ gives zero, since $e^{At} \to 0$ for $t \to \infty$. Hence $X_1 - X_2 = 0$. The proof for (6.8.1b) is analogous. ∎

6.8.2 Lyapunov Equations with a Positive Semidefinite Matrix

In many cases, the matrix Q in the Lyapunov equation is of the form $Q = C^T C$ or $Q = BB^T$, i.e., it is a positive semidefinite matrix.

Theorem 6.8.2. If $A \in \mathbb{R}^{n\times n}$ is an asymptotically stable matrix, then the Lyapunov equation

$$XA + A^T X = -C^T C \qquad (6.8.7)$$
has exactly one positive definite solution of the form
$$X = \int_0^\infty e^{A^T t} C^T C e^{At}\, dt \qquad (6.8.8)$$


if and only if (A, C) is an observable pair.

Proof. According to Theorem 6.8.1, the solution to (6.8.7) has the form (6.8.8). We will prove the thesis by contradiction. Suppose the solution is not positive definite. Then there exists a vector $z \ne 0$ such that Xz = 0. In this case, we have from (6.8.8)
$$z^T X z = \int_0^\infty \left\| C e^{At} z \right\|^2 dt = 0, \qquad (6.8.9)$$
that is, $C e^{At} z = 0$. Differentiating this relationship repeatedly and evaluating the derivatives at t = 0, we obtain $C A^k z = 0$ for k = 0, 1, …, n−1, i.e.,
$$\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} z = 0. \qquad (6.8.10)$$
By assumption (A, C) is an observable pair. Thus from (6.8.10) we have z = 0, which contradicts the supposition that the matrix (6.8.8) is not positive definite. Thus the solution (6.8.8) is a positive definite matrix.

Also by contradiction, we will now show that the asymptotic stability of A and the positive definiteness of the matrix (6.8.8) imply the observability of the pair (A, C). Suppose that (A, C) is unobservable, that is,
$$\operatorname{rank}\begin{bmatrix} I s - A \\ C \end{bmatrix} < n \ \text{ for some } s \in \mathbb{C}.$$
In this case there exists an eigenvector x of the matrix A (Ax = sx) such that Cx = 0, and from (6.8.7) we obtain
$$x^* X A x + x^* A^T X x = \left( s + \bar{s} \right) x^* X x = -x^* C^T C x = -\left\| Cx \right\|^2 = 0, \quad\text{since } Cx = 0, \qquad (6.8.11)$$
where $x^*$ denotes the conjugate transpose of x and $\bar{s}$ the conjugate of s.


By assumption A is an asymptotically stable matrix, hence $s + \bar{s} < 0$. Thus from (6.8.11) we have $x^* X x = 0$. This leads to a contradiction, since by assumption X is a positive definite matrix. ∎

Theorem 6.8.3. If (A, C) is an observable pair, then A is an asymptotically stable matrix if and only if there exists exactly one symmetric positive definite matrix X satisfying (6.8.7).

Proof. According to Theorem 6.8.2, if A is an asymptotically stable matrix and (A, C) is an observable pair, then (6.8.7) has exactly one solution, which is positive definite and has the form (6.8.8). We will show that if (A, C) is an observable pair and X is positive definite, then A is asymptotically stable. Let x be an eigenvector of A corresponding to an eigenvalue s. In the same way as in the proof of Theorem 6.8.2, we can show that
$$\left( s + \bar{s} \right) x^* X x = -\left\| Cx \right\|^2. \qquad (6.8.12)$$
By assumption (A, C) is an observable pair, hence $Cx \ne 0$ and $x^* X x > 0$, since X is positive definite. Thus from (6.8.12) we have $s + \bar{s} < 0$; thus A is asymptotically stable. ∎

Let us sum up the foregoing considerations in the following important theorem.

Theorem 6.8.4. Let X be the solution to (6.8.7). In this case, we have the following.
1. If X is a positive definite matrix and (A, C) is an observable pair, then A is an asymptotically stable matrix.
2. If A is an asymptotically stable matrix and (A, C) is an observable pair, then X is a positive definite matrix.
3. If A is an asymptotically stable matrix and X is a positive definite matrix, then (A, C) is an observable pair.

Now consider the following Lyapunov equation
$$AX + XA^T = -BB^T. \qquad (6.8.13)$$
Taking into account that the controllability of the pair (A, B) is a dual notion to the observability of the pair (A, C), we immediately have the following theorems.

Theorem 6.8.5. If A is an asymptotically stable matrix, then (6.8.13) has exactly one solution X, which is symmetric and positive definite, if and only if (A, B) is a controllable pair.

Theorem 6.8.6. Let (A, B) be a controllable pair. Then A is an asymptotically stable matrix if and only if there exists exactly one solution to (6.8.13), which is symmetric and positive definite.

Theorem 6.8.7. Let X be the solution to (6.8.13). In this case, we have the following.
1. If X is a positive definite matrix and (A, B) is a controllable pair, then A is an asymptotically stable matrix.
2. If A is an asymptotically stable matrix and (A, B) is a controllable pair, then X is a positive definite matrix.
3. If A is an asymptotically stable matrix and X is a positive definite matrix, then (A, B) is a controllable pair.
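The Lyapunov equation (6.8.7) can be solved by the same vectorization device as the Sylvester equation; a sketch with an illustrative stable, observable pair:

```python
import numpy as np

# Solve A^T X + X A = -C^T C (6.8.7) by vectorization, then confirm that
# X is positive definite, as Theorem 6.8.2 predicts for an observable pair.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2 (stable)
C = np.array([[1.0, 0.0]])                 # (A, C) observable
n = A.shape[0]
Q = C.T @ C

# Column-stacked vec: vec(A^T X + X A) = (I ⊗ A^T + A^T ⊗ I) vec(X)
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
X = np.linalg.solve(K, -Q.flatten(order="F")).reshape(n, n, order="F")
eigs = np.linalg.eigvalsh((X + X.T) / 2)   # all positive: X > 0
```

For these matrices X = [[11/12, 1/4], [1/4, 1/12]], which is symmetric and positive definite.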

7 The Realisation Problem and Perfect Observers of Singular Systems

7.1 Computation of Minimal Realisations for Singular Linear Systems

7.1.1 Problem Formulation

Consider the following continuous-time singular system
$$E\dot{x} = Ax + B_0 u + B_1 \dot{u}, \qquad (7.1.1a)$$
$$y = Cx + Du, \qquad (7.1.1b)$$
where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$ and $y \in \mathbb{R}^p$ are the vectors of the state, input and output, respectively; $E, A \in \mathbb{R}^{n\times n}$, $B_0, B_1 \in \mathbb{R}^{n\times m}$, $C \in \mathbb{R}^{p\times n}$, $D \in \mathbb{R}^{p\times m}$. We assume that det E = 0 and (E, A) is a regular pair, i.e.,
$$\det[Es - A] \ne 0 \ \text{ for some } s \in \mathbb{C}, \qquad (7.1.2)$$
where $\mathbb{C}$ is the field of the complex numbers. The transfer matrix of the system (7.1.1) is
$$T(s) = C[Es - A]^{-1}\left( B_0 + s B_1 \right) + D. \qquad (7.1.3)$$
The transfer matrix (7.1.3) is called proper (strictly proper) if and only if
$$\lim_{s \to \infty} T(s) = K \in \mathbb{R}^{p\times m} \ \text{ and } \ K \ne 0 \ (K = 0). \qquad (7.1.4)$$


Otherwise, we call it improper. Equation (7.1.1) can be written as
$$\bar{E}\dot{\bar{x}} = \bar{A}\bar{x} + \bar{B}u, \qquad (7.1.5a)$$
$$y = \bar{C}\bar{x} + \bar{D}u, \qquad (7.1.5b)$$
where
$$\bar{E} = \begin{bmatrix} E & -B_1 \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{\bar{n}\times\bar{n}}, \quad \bar{x} = \begin{bmatrix} x \\ u \end{bmatrix} \in \mathbb{R}^{\bar{n}}, \quad \bar{n} = n + m,$$
$$\bar{A} = \begin{bmatrix} A & B_0 \\ 0 & -I_m \end{bmatrix} \in \mathbb{R}^{\bar{n}\times\bar{n}}, \quad \bar{B} = \begin{bmatrix} 0 \\ I_m \end{bmatrix} \in \mathbb{R}^{\bar{n}\times m}, \quad \bar{C} = [C \ \ 0] \in \mathbb{R}^{p\times\bar{n}}, \quad \bar{D} = D. \qquad (7.1.5c)$$
Equation (7.1.1) can also be written as
$$\bar{E}\dot{\bar{x}} = \tilde{A}\bar{x} + \tilde{B}u, \qquad (7.1.6a)$$
$$y = \tilde{C}\bar{x}, \qquad (7.1.6b)$$
where
$$\tilde{A} = \begin{bmatrix} A & B_0 \\ 0 & -I_m \end{bmatrix} \in \mathbb{R}^{\bar{n}\times\bar{n}}, \quad \tilde{B} = \begin{bmatrix} 0 \\ I_m \end{bmatrix} \in \mathbb{R}^{\bar{n}\times m}, \quad \tilde{C} = [C \ \ D] \in \mathbb{R}^{p\times\bar{n}}. \qquad (7.1.6c)$$

Definition 7.1.1. The matrices E, A, B₀, B₁, C and D are called a realisation of the transfer matrix $T(s) \in \mathbb{R}^{p\times m}(s)$ (the set of rational matrices of dimensions p×m in the variable s) if they satisfy the relationship (7.1.3). A realisation (E, A, B₀, B₁, C, D) is called a minimal realisation if the matrices E and A have minimal dimensions among all realisations of T(s).

A realisation $(\bar{E}, \bar{A}, \bar{B}, \bar{C}, \bar{D})$ or $(\bar{E}, \tilde{A}, \tilde{B}, \tilde{C})$ is a minimal one if and only if the system (7.1.5), or respectively the system (7.1.6), is completely controllable and completely observable. The system (7.1.5) is completely controllable if and only if
$$\operatorname{rank}\left[ \bar{E}, \bar{B} \right] = \bar{n} \quad\text{and}\quad \operatorname{rank}\left[ \bar{E}s - \bar{A}, \bar{B} \right] = \bar{n} \ \text{ for all finite } s \in \mathbb{C}, \qquad (7.1.7)$$
where $\bar{n}$ is the dimension of the state vector $\bar{x}$.


The system (7.1.5) is completely observable if and only if

ªE º rank « » ¬C ¼

ª Es  A º n and rank « » ¬ C ¼

n for all finite s 

.

(7.1.8)

The realisation problem can be formulated in the following way. Given a rational improper matrix T(s) pum(s), compute a realization (E, A, B0, B1, C, D ) and a minimal realisation  , B , C ) . (E, A, B, C, D) or (E, A

A solution to the problem by the method presented below was proposed for the first time in [72]. 7.1.2 Problem Solution An arbitrary rational matrix T(s) T( s )

pum

(s) can be written as

P( s) , d (s)

(7.1.9)

where P(s)is a polynomial matrix of dimension pum and d (s)

d q s q  d q 1s q 1  "  d1s  d 0

(7.1.10)

is the least common denominator of all the entries of T(s). Let N = deg P(s) be the degree of the polynomial matrix P(s) and N > q. The proposed method is based on the following theorem. Theorem 7.1.1. Let s

Z 1  O , d O z 0 and N ! q .

The rational matrix T(Z )

T( s )|s Y 1 O

P (Z ) d (Z )

in the variable Z is a proper matrix, i.e., deg d (Z )

N t deg P (Z ) .

(7.1.11)


Proof. Substituting $s = \omega^{-1} + \lambda$ into T(s), we obtain the rational matrix
$$T(\omega^{-1} + \lambda) = \frac{P(\omega^{-1} + \lambda)}{d(\omega^{-1} + \lambda)}, \qquad (7.1.12)$$
improper in the variable $\omega^{-1}$, since the degree of $P(\omega^{-1} + \lambda)$ with respect to $\omega^{-1}$ is N, and the degree of $d(\omega^{-1} + \lambda)$ is q. With both the numerator and denominator of (7.1.12) multiplied by $\omega^N$, we obtain (7.1.11), where $\deg \bar{d}(\omega) = N \ge \deg \bar{P}(\omega)$, since by assumption d(λ) ≠ 0. ∎

Note that Theorem 7.1.1 allows us to transform the problem of computing the realisation (E, A, B₀, B₁, C, D) of T(s) into the problem of computing the realisation $(A_\omega, B_\omega, C_\omega, D_\omega)$ of the proper matrix $\bar{T}(\omega)$, which can be computed using one of the well-known methods. Let
$$E = A_\omega, \quad A = I_n + \lambda A_\omega, \quad B_0 = \lambda B_\omega, \quad B_1 = -B_\omega, \quad C = C_\omega, \quad D = D_\omega. \qquad (7.1.13)$$
Substituting (7.1.13) and $s = \omega^{-1} + \lambda$ into (7.1.3), we obtain
$$T(s) = C[Es - A]^{-1}\left( B_0 + sB_1 \right) + D = C_\omega\left[ A_\omega\left( \omega^{-1} + \lambda \right) - \left( I_n + \lambda A_\omega \right) \right]^{-1}(\lambda - s)B_\omega + D_\omega = C_\omega\left[ A_\omega\omega^{-1} - I_n \right]^{-1}(\lambda - s)B_\omega + D_\omega = C_\omega\left[ I_n\omega - A_\omega \right]^{-1}B_\omega + D_\omega,$$
since $\omega^{-1} = s - \lambda$. Thus the following theorem has been proved.

Theorem 7.1.2. If $(A_\omega, B_\omega, C_\omega, D_\omega)$ is a realisation of the matrix $\bar{T}(\omega)$ given by (7.1.11), then the matrices (E, A, B₀, B₁, C, D) defined by (7.1.13) constitute a realisation of the matrix T(s).

The foregoing considerations endow us with the following procedure for computing the realisation (E, A, B₀, B₁, C, D) of T(s) and the minimal realisations $(\bar{E}, \bar{A}, \bar{B}, \bar{C}, \bar{D})$, $(\bar{E}, \tilde{A}, \tilde{B}, \tilde{C})$.


Procedure 7.1.1.
Step 1: Write the matrix T(s) in the form (7.1.9) and choose the scalar λ in such a way that d(λ) ≠ 0.
Step 2: Substitute $s = \omega^{-1} + \lambda$ into T(s), multiply both the numerator and the denominator of (7.1.12) by $\omega^N$, and compute $\bar{T}(\omega)$.
Step 3: Using one of the well-known methods provided in [150], compute the realisation $(A_\omega, B_\omega, C_\omega, D_\omega)$ of $\bar{T}(\omega)$.
Step 4: Using (7.1.13), compute the desired realisation (E, A, B₀, B₁, C, D) of T(s) and the minimal realisation $(\bar{E}, \bar{A}, \bar{B}, \bar{C}, \bar{D})$ or $(\bar{E}, \tilde{A}, \tilde{B}, \tilde{C})$.

Remark 7.1.1. For two different values of λ we obtain, in the general case, two different realisations $(A_\omega, B_\omega, C_\omega, D_\omega)$ and the corresponding two different realisations (E, A, B₀, B₁, C, D).

Remark 7.1.2. If d(0) ≠ 0, then it is convenient to assume λ = 0. In this case, we obtain from (7.1.13)
$$E = A_\omega, \quad A = I_n, \quad B_0 = 0, \quad B_1 = -B_\omega, \quad C = C_\omega, \quad D = D_\omega.$$

Applying the above procedure, we compute the realisation (E, A, B₀, B₁, C, D) of the following transfer function
$$T(s) = \frac{a_N s^N + \cdots + a_1 s + a_0}{s^q + b_{q-1}s^{q-1} + \cdots + b_1 s + b_0} \ \text{ for } \ N > q. \qquad (7.1.14)$$

Step 1: In this case,
$$P(s) = a_N s^N + \cdots + a_1 s + a_0, \quad d(s) = s^q + b_{q-1}s^{q-1} + \cdots + b_1 s + b_0. \qquad (7.1.15)$$
We choose λ in such a way that d(λ) ≠ 0.

Step 2: Substituting $s = \omega^{-1} + \lambda$ into (7.1.14), we obtain
$$T(\omega^{-1} + \lambda) = \frac{a_N(\omega^{-1} + \lambda)^N + \cdots + a_1(\omega^{-1} + \lambda) + a_0}{(\omega^{-1} + \lambda)^q + b_{q-1}(\omega^{-1} + \lambda)^{q-1} + \cdots + b_1(\omega^{-1} + \lambda) + b_0}, \qquad (7.1.16)$$
and with both the numerator and denominator of (7.1.16) multiplied by $\omega^N$ we obtain, for λ = 0,
$$\bar{T}(\omega) = \frac{a_0\omega^N + a_1\omega^{N-1} + \cdots + a_N}{b_0\omega^N + b_1\omega^{N-1} + \cdots + \omega^{N-q}} = \frac{a_0}{b_0} + \frac{\bar{a}_{N-1}\omega^{N-1} + \cdots + \bar{a}_1\omega + \bar{a}_0}{\omega^N + \bar{b}_{N-1}\omega^{N-1} + \cdots + \bar{b}_1\omega + \bar{b}_0}. \qquad (7.1.17)$$

Step 3: A controllable realisation of the transfer function (7.1.17) is
$$A_\omega = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\bar{b}_0 & -\bar{b}_1 & -\bar{b}_2 & \cdots & -\bar{b}_{N-1} \end{bmatrix} \in \mathbb{R}^{N\times N}, \quad B_\omega = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \in \mathbb{R}^N, \quad C_\omega = [\bar{a}_0 \ \bar{a}_1 \ldots \bar{a}_{N-1}], \quad D_\omega = \left[ \frac{a_0}{b_0} \right]. \qquad (7.1.18)$$
Note that if N > q, then $\bar{b}_0 = 0$, hence det $A_\omega$ = 0 and E is a singular matrix.

Step 4: Using (7.1.13) and (7.1.18), we obtain the desired realisation (E, A, B₀, B₁, C, D) of the transfer function (7.1.14).

Example 7.1.1. Compute two realisations (E, A, B₀, B₁, C, D) of the following transfer function

$$T(s) = \frac{s^2 + 2s + 3}{s + 1}. \qquad (7.1.19)$$
Applying the above procedure, we choose two different values of λ and obtain the following.

Step 1: In this case, $P(s) = s^2 + 2s + 3$ and $d(s) = s + 1$. We choose λ = 0 and λ = 1, since d(0) = 1 and d(1) = 2.

Step 2: Substituting $s = \omega^{-1}$ and $s = \omega^{-1} + 1$ into (7.1.19), we obtain
$$T(\omega^{-1}) = \frac{\omega^{-2} + 2\omega^{-1} + 3}{\omega^{-1} + 1} \qquad (7.1.20a)$$
and
$$T(\omega^{-1} + 1) = \frac{\omega^{-2} + 4\omega^{-1} + 6}{\omega^{-1} + 2}, \qquad (7.1.20b)$$
respectively. With both the numerator and the denominator of (7.1.20) multiplied by $\omega^2$, we obtain


$$\bar{T}_1(\omega) = \frac{3\omega^2 + 2\omega + 1}{\omega^2 + \omega} = 3 + \frac{-\omega + 1}{\omega^2 + \omega} \qquad (7.1.21a)$$
and
$$\bar{T}_2(\omega) = \frac{6\omega^2 + 4\omega + 1}{2\omega^2 + \omega} = 3 + \frac{\frac{1}{2}\omega + \frac{1}{2}}{\omega^2 + \frac{1}{2}\omega}, \qquad (7.1.21b)$$
respectively.

Step 3: The realisations of $\bar{T}_1(\omega)$ and $\bar{T}_2(\omega)$ are
$$A_\omega^1 = \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}, \quad B_\omega^1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C_\omega^1 = [1, \ -1], \quad D_\omega^1 = [3] \qquad (7.1.22a)$$
and
$$A_\omega^2 = \begin{bmatrix} 0 & 1 \\ 0 & -\frac{1}{2} \end{bmatrix}, \quad B_\omega^2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C_\omega^2 = \left[ \tfrac{1}{2}, \ \tfrac{1}{2} \right], \quad D_\omega^2 = [3], \qquad (7.1.22b)$$
respectively.

Step 4: Using (7.1.13) and (7.1.22), we obtain the desired realisations of the transfer function (7.1.19):
$$E_1 = A_\omega^1 = \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}, \quad A_1 = I_2, \quad B_0 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad B_1 = -B_\omega^1 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}, \quad C_1 = C_\omega^1 = [1, \ -1], \quad D_1 = D_\omega^1 = [3] \qquad (7.1.23a)$$
and
$$E_2 = A_\omega^2 = \begin{bmatrix} 0 & 1 \\ 0 & -\frac{1}{2} \end{bmatrix}, \quad A_2 = I_2 + \lambda A_\omega^2 = \begin{bmatrix} 1 & 1 \\ 0 & \frac{1}{2} \end{bmatrix}, \quad B_0 = \lambda B_\omega^2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad B_1 = -B_\omega^2 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}, \quad C_2 = C_\omega^2 = \left[ \tfrac{1}{2}, \ \tfrac{1}{2} \right], \quad D_2 = D_\omega^2 = [3], \qquad (7.1.23b)$$
respectively. It is easy to verify that the matrices (7.1.23) are indeed realisations of the transfer function (7.1.19).

374

Polynomial and Rational Matrices

Theorem 7.1.3. The singular system (7.1.5) is both completely controllable and completely observable if (AZ, BZ) is a controllable pair and (AZ, CZ) is an observable pair. Proof. In order to prove the complete controllability of the system (7.1.5), one has to show that the conditions (7.1.7) are satisfied for this system. We carry out the proof in detail for a SISO system (m=1, p=1). Without loss of generality, we can assume that the matrices AZ, BZ and CZ have the form (7.1.18). Using (7.1.5c), (7.1.13) and (7.1.7), we obtain  BZ ª E B1 B 0 º ªA rank « rank « Z » 0 I m ¼ 0 ¬0 ¬ 0 0 0º ª0 1 0 " 0 «0 0 1 " 0 0 0 »» « «# # # % # # #» rank « » N n. 0 0» «0 0 0 " 1 « 0 0 1 " b0 1 O » « » 0 1»¼ «¬ 0 0 0 " 0

rank ¬ª E, B ¼º

O BZ º I m ¼»

Thus the first of the conditions (7.1.7) is satisfied. The second is met as well, since ª Es  A B1s B 0 º rank « I m I m »¼ ¬ 0 ª A s  (I n  O A Z ) BZ s O BZ º rank « Z I m I m »¼ 0 ¬ " 0 0 0 0º ª 1 s  O «0 1 s  O " 0 0 0 »» « rank « 0 " # #» 0 0 sO « » 0 s  O " b0 ( s  O )  1 s O » «0 «¬ 0 " " 1 1»¼ 0 0

rank ª¬Es  A, B º¼

N

n,

for all finite s Analogously, in order to prove the complete observability of the system (7.1.5), one has to show that the conditions (7.1.8) are met for this system. Using (7.1.5c), (7.1.13) and (7.1.8), we obtain

The Realisation Problem and Perfect Observers of Singular Systems

ªE º rank « » ¬C ¼ ª0 «0 « «# « rank « 0 «0 « «0 «a ¬ 0

ª E B1 º rank «« 0 0 »» «¬C 0 »¼ " "

ª AZ rank «« 0 «¬ CZ

BZ º 0 »» 0 »¼

0º 0 »» 0» » 1» 0» » 0» » ¼

n.

1 0

0 1

0 0

# 0 0

# % 0 " 1 "

0 a1

0 " 0 a2 " aN 1

# 1 b0

N

375

Thus the first condition of (7.1.8) is met. The second one is met as well, since ªEs  A B1s º ª A Z s  (I n  O A Z ) BZ s º « » « I m » rank « I m »» rank « 0 0 CZ 0 »¼ 0 ¼» ¬« C ¬« " 0 0 0º ª 1 s  O «0 " 1 0 0 »» sO « «0 " 0 0 0» sO rank « » N n.  s  O " b0 ( s  O )  1 s » 0 «0 «0 1» 0 0 " " « » a1 aN 1 0 ¼» " " ¬« a0

ª Es  A º rank « » ¬ C ¼

„

Remark 7.1.3. Analogously one can prove that the system (7.1.6) is both completely controllable and completely observable, if (AZ, BZ) is a controllable pair and (AZ, CZ) is an observable pair. The foregoing considerations lead to the following important corollary that the matrices (7.1.13) are a minimal realisation of the transfer matrix (7.1.9). With the variable s replaced by z, we can apply the method for computing a minimal realization of a discrete-time singular system as well. The considerations can be generalised into the case of singular two-dimensional systems.

376

Polynomial and Rational Matrices

7.2 Full- and Reduced-order Perfect Observers Consider the following continuous-time singular system Ex

Ax  Bu ,

y

Cx ,

x

dx , x dt

x 0

(7.2.1a)

x0

(7.2.1b)

where x t  n , u

u t  m , y

y t   p

are the vectors of the state, input and output, respectively; E, A nun, B num, C pun. We will henceforth assume that det E = 0, rank B = m, rank C = p and det [Es – A] z 0 for certain s (the field of complex numbers). Consider also a continuous-time singular system described by the equation Exˆ

Axˆ  Bu  K Cxˆ  y ,

where xˆ = xˆ (t) in (7.2.1) and K

n

xˆ 0

xˆ0 ,

(7.2.2)

is the state vector, with u, y and E, A, B, C being the same as .

nup

Definition 7.2.1. The system (7.2.2) is called a full-order perfect observer for the system (7.2.1) if and only if xˆ (t) = x(t) for t > 0 and arbitrary initial conditions x0, xˆ . Theorem 7.2.1. There exists a perfect observer of the form (7.2.2) for the system (7.2.1) if it is completely observable, that is, ª Es  A º rank « » ¬ C ¼

n,

(7.2.3a)

for all finite s and ªEº rank « » ¬C ¼

n.

(7.2.3b)

Proof. Let e(t) = x(t) - xˆ (t), t t 0. From (7.2.1) and (7.2.2) we have Ee

Ex  Exˆ

A  KC e .

(7.2.4)

The Realisation Problem and Perfect Observers of Singular Systems

377

If the assumptions (7.2.3) are met, then there exists a matrix K such that det ª¬Es  A  KC º¼ D z 0 ,

(7.2.5)

for all s , where D is a scalar and independent of s. If the condition (7.2.5) is met, then from the expansion ª¬ Es  A  KC º¼

1

f

¦P ĭ s

i 

 i 1

i

it follows that )0 = 0 and according to (5.3.34) the solution to (7.2.4) is e t

eĭ0 A KC t ĭ0 Ex0

0 for t ! 0 ,

that is xˆ (t) = x(t), for t > 0. „

Another proof of this theorem is provided in [115, 116]. If the conditions (7.2.3) are met, then we can obtain an observer of the form (7.2.2) using the following procedure. Procedure 7.2.1. Step 1: Choose the matrix K so that the condition (7.2.5) is met. Step 2: Using (7.2.2) compute the desired observer. Example 7.2.1. Compute an observer of the form (7.2.2) for the system (7.2.1) with

E

ª1 0 0 º « » «0 1 0 » , A ¬«0 0 0 »¼

ª0 1 0º « » «1 2 0 » , B ¬« 0 0 1 »¼

ª1 0 º « » «0 1» , C ¬«1 2 »¼

>1

0 1@ . (7.2.6)

In this case, n =3, m = 2, p = 1. The conditions (7.2.3) are met, since

ª Es  A º rank « » ¬ C ¼

for all finite s , and

1 0º ªs « 1 s  2 0 » » rank « «0 1» 0 « » 1¼ 0 ¬1

3,

378

Polynomial and Rational Matrices

ª1 «0 rank « «0 « ¬1

ªE º rank « » ¬C ¼

0º 0 »» 0 0» » 0 1¼

0 1

3.

Thus there exists a perfect observer of the form (7.2.2) for this system. Step 1: Using (7.2.5) for K = [k1 k2 k3]T, we obtain

det ª¬ Es  A  KC º¼

k3  1 s

s  k1 1  k2  1 s  2  k3

0

k1 k2 k3  1

 2k3  k1  2 s  2k1  k2  k3  1 .

2

The condition (7.2.5) is satisfied for k1 = 0, k3 = 1 and k2 z 0 (k2 is arbitrary). For k2 = 1, one has K = [0 1 1]T. Step 2: The desired observer has the form ª1 0 0 º « 0 1 0 » xˆ « » «¬ 0 0 0 »¼

ª0 1 0 º ª0 1 º ª0º « 2 2 1» xˆ  « 0 1» u  «1 » y . « » « » « » «¬ 1 0 0 »¼ «¬1 2 »¼ «¬1 »¼

7.2.1 Reduced-order Observers Without losing generality we can assume C

>C1

C2 @ , det C1 z 0 ,

where C1 pup, C2 In this case,

Q

ªC11 « «¬ 0

pu(n-p)

.

C11C2 º » I n p »¼

(7.2.7)

is a nonsingular matrix and C CQ

ª¬I p

0 º¼ .

(7.2.8)

The Realisation Problem and Perfect Observers of Singular Systems

379

Defining the new state vector x

ª x1 º « x » , x1  ¬ 2¼

Q 1 x

p

n p

, x2 

,

we obtain from (7.2.1) and (7.2.8) Ex Ax  Bu , y Cx ,

(7.2.9a) (7.2.9b)

where E

ª E11 E12 º « » ¬E 21 E22 ¼ pu p

E11 , A11 

EQ,

A

, E22 , A 22 

ª ǹ11 « ¬ A 21 n  p u n p

A12 º » A 22 ¼

AQ,

(7.2.9c)

.

From (7.2.9) it follows that for a given output y the vector x1 is known. Thus a reduced-order observer should reconstruct only the vector x2. Consider the following continuous-time, singular system Eˆ 2 xˆ2 w

ˆ xˆ  Bˆ u  D ˆ yD ˆ y , A 2 0 1

xˆ2 0

xˆ20 ,

(7.2.10a)

ˆuH ˆ yH ˆ y , Fˆ xˆ2  G 0 1

(7.2.10b)

n-p

, u and y are the same as in (7.2.1), w = w(t) n-p, ˆ , Bˆ , D ˆ, H ˆ , A ˆ , D ˆ , Fˆ , G ˆ , H ˆ are real matrices of appropriate dimensions and E 2 0 1 0 1 ˆ det E 2 = 0.

where xˆ = xˆ (t)

Definition 7.2.2. The system (7.2.10) is called a reduced-order perfect observer for the system (7.2.1) if and only if w(t) = x2(t) for t > 0 and arbitrary initial conditions x0, xˆ 20. If ªE º rank « 12 » ¬E22 ¼

ª C1C º rank E « 1 2 »  n  p , ¬« I n p ¼»

then there exists a matrix of elementary operations on rows P ªE º P « 12 » ¬ E22 ¼

ª0º «E » , ¬ 2¼

(7.2.11)

nun

such that (7.2.12)

380

Polynomial and Rational Matrices

where E2 (n-p)u(n-p) is a singular matrix. Pre-multiplying (7.2.9a) by P and using (7.2.12), we obtain E11 x1 A11 x1  A12 x2  B1u , E21 x1  E2 x2 A 21 x1  A 22 x2  B 2u ,

(7.2.13a) (7.2.13b)

where ª E11 º «E » ¬ 21 ¼

ªE º ªB º P « 11 » , « 1 » ¬E21 ¼ ¬ B 2 ¼

A11 

pu p

pum

, B1 

ªA PB, « 11 ¬ A 21

, A 22 

A12 º A 22 »¼

n p u n p

PA,

, B2 

(7.2.13c) n p um

.

Substituting x1 = y into (7.2.13a) and (7.2.13b), we obtain E2 x2

A 22 x2  u ,

(7.2.14a) (7.2.14b)

y

A12 x2 ,

u

B 2u  A 21 y  E21 y

where

is the new input vector, and y

E11 y  A11 y  B1u

the new output. According to Theorem 7.2.1, there exists a perfect observer for the system (7.2.14) if det > E2 s  A 22 @ z 0 ,

(7.2.15)

for some s ªE s  A 22 º rank « 2 » ¬ A12 ¼

n p ,

(7.2.16a)

for all finite s and ªE º rank « 2 » ¬ A12 ¼

n.

(7.2.16b)

The Realisation Problem and Perfect Observers of Singular Systems

381

We will show that the condition (7.2.16a) is met if and only if (7.2.3a) is satisfied. Using (7.2.9c) and (7.2.13c), we can write ª Es  A º rank « » ¬ C ¼

0 º ª Es  A º ªQ 0 º °½ °­ ª P rank ® « »« »¾ »« °¯ ¬ 0 I n p ¼ ¬ C ¼ ¬ 0 I n p ¼ ¿°

ª E11s  A11  A12 º « » rank « E21s  A 21 E2 s  A 22 » « » Ip 0 ¬ ¼

ªE s  A 22 º p  rank « 2 ». ¬ A12 ¼

Thus the conditions (7.2.16a) and (7.2.3a) are equivalent. Theorem 7.2.2. There exists a reduced-order perfect observer of the form (7.2.10) for the system (7.2.1) if the conditions (7.2.11), (7.2.15), (7.2.3a) and (7.2.16b) are met. Proof. As already proved, the conditions (7.2.16a) and (7.2.3a) are equivalent. If the conditions (7.2.16) and (7.2.15) are met, then there exists a matrix K (n-p)up such that det > E2 s  A 22  KA12 @ D z 0 ,

(7.2.17)

for all s . In this case, there exists a reduced-order perfect observer of the form E2 xˆ2

A 22 xˆ2  u  K A12 xˆ2  y ,

(7.2.18)

xˆ2 t

x2 t ,

(7.2.19)

such that t!0.

If det E2 z 0, then there is no K satisfying the condition (7.2.17).

„

If the conditions (7.2.11), (7.2.15), (7.2.3a) and (7.2.16b) are met, then a reduced-order perfect observer of the form (7.2.10) for the system (7.2.1) can be computed using the following procedure. Procedure 7.2.2. Step 1:With C = [C1 C2] known, compute the matrix Q (given (7.2.7)) along with the matrices E , A . Step 2:Compute P satisfying (7.2.12) along with the matrices E2, A22, A12, A21, A11.

382

Polynomial and Rational Matrices

Step 3:Compute K satisfying the condition (7.2.17). Step 4: Using the equality E2 xˆ2

A 22 xˆ2  u  K A12 xˆ2  y ,

(7.2.20)

compute the desired reduced order-perfect observer. An estimate xˆ (t) of the state vector x(t) is given by xˆ t

ª y t º Q« » ¬ xˆ2 t ¼

ªC11 y t  C11C2 xˆ2 t º « ». xˆ2 t ¬ ¼

(7.2.21)

Example 7.2.2. Compute a reduced-order perfect observer of the form (7.2.20) for the system (7.2.1) with

E

ª1 0 1º «0 1 0 » , A « » «¬ 0 0 0 »¼

ª 0 1 1º «1 2 0 » , B « » «¬ 0 0 1 »¼

ª 1 0º « 0 1» , C « » «¬ 1 2 »¼

>1

0 1@ . (7.2.22)

In this case, n =3, m = 2, p = 1, C1 = [1], C2 = [0 -1] and there exists a reduced order-perfect observer, since § ª C1C º · rank ¨ E « 1 2 » ¸ ¨ « I n p » ¸ ¼¹ © ¬

§ ª1 0 1º ª 0 1 º · ¨ ¸ rank ¨ «« 0 1 0 »» ««1 0»» ¸ ¨ «0 0 0 » «0 1 » ¸ ¼¬ ¼¹ ©¬

and

ª Es  Aº rank « » ¬ C ¼

1 1  s º ªs « 1 s  2 0 »» rank « «0 1 » 0 « » 1 ¼ 0 ¬1

for all finite s . Step 1: Using (7.2.7) and (7.2.22), we obtain

3,

ª0 0 º rank ««1 0 »» 1 «¬0 0 »¼

The Realisation Problem and Perfect Observers of Singular Systems

Q

A

ª1 0 1 º «0 1 0 » , E « » «¬0 0 1 »¼

ªC11 « «¬ 0

C11C2 º » I n p »¼

AQ

ª 0 1 1º «1 2 1 » . « » «¬ 0 0 1 »¼

EQ

ª1 0 0 º «0 1 0» , « » «¬ 0 0 0 »¼

Step 2: In this case, P = I3 satisfy (7.2.12) and ª1 0 º ª A11 «0 0 » , « A ¬ ¼ ¬ 21

E2

ª B1 º «B » ¬ 2¼

PB

B

A12 º A 22 »¼

PA

ª 0 1 1º « » «1 2 1 » , «¬ 0 0 1 »¼

A

ª 1 0º « 0 1» . « » «¬ 1 2 »¼

The conditions (7.2.15) and (7.2.16b) are met since det > E2 s  A 22 @

s  2 1 0

1

2s

and ªE º rank « 2 » ¬ A12 ¼

ª1 0 º rank «« 0 0 »» «¬1 1»¼

2.

Step 3: Using (7.2.17) for K = [k1 k2]T, we obtain det ¬ªE2 s  A 22  KA12 ¼º

s  2  k1

k1  1

k2

k2  1

k2  1 s  k2  1 k1  2  k2 k1  1 . The condition (7.2.17) is satisfied for k2 = 1, k1 z 1. For k1 = 2, we have K

ª k1 º «k » ¬ 2¼

ª2º «1 » and det ¬ª E2 s  A 22  KA12 ¼º 1 . ¬ ¼

383

384

Polynomial and Rational Matrices

Step 4: The desired reduced-order perfect observer is ª1 0 º  «0 0 » xˆ2 ¬ ¼

ª 4 1º ª 0 1º ª1 º ª 2º «1 0 » xˆ2  « 1 2 » u  « 0» y  «1 » y  >1 0@ u , ¬ ¼ ¬ ¼ ¬ ¼ ¬ ¼

and the estimate xˆ (t) is given by

xˆ t

ªC11 y t  C11C2 xˆ2 t º « » xˆ2 t ¬ ¼

ª1 0 1 º «0 1 0 » ª y t º . « » « xˆ t » «¬0 0 1 »¼ ¬ 2 ¼

Remark 7.2.1. If ªE º rank « 12 » ¬ E22 ¼

n p ,

(7.2.23)

then there exists a standard reduced-order observer for the singular system (7.2.1). The procedure for computing such an observer is provided in [152]. 7.2.2 Perfect Observers for Standard Systems Consider the following continuous-time standard system x

Ax  B u , x 0

y

Cx ,

x0 ,

(7.2.24a) (7.2.24b)

with the feedback u

v  Fy

v  FCx ,

(7.2.25)

where F mup and v m is the new input. Substituting (7.2.25) into (7.2.24a), we obtain Ex

Ax  Bv, x 0

y

Cx ,

E

I n  BFC .

x0 ,

(7.2.26a) (7.2.26b)

where (7.2.27)

The Realisation Problem and Perfect Observers of Singular Systems

385

The matrix F is chosen in such a way as to assure the matrix (7.2.27) is singular. Then we build for the singular system (7.2.26) a full-order perfect observer, according to considerations in Sect. 7.2. We will show that for the standard system (7.2.24) ªI s  A º rank « n » ¬ C ¼

n for all s 

,

(7.2.28)

if and only if for the singular system (7.2.26) ª Es  A º rank « » ¬ C ¼

n for all finite s 

.

(7.2.29)

Using (7.2.27), we obtain ª Es  A º rank « » ¬ C ¼ § ªI n =rank ¨ « ¨ 0 ©¬

ª I  A  BFCs º rank « n » C ¬ ¼

BFs º ª I n s  A º · ¸ I p »¼ «¬ C »¼ ¸¹

ªI s  A º rank « n », ¬ C ¼

for all s  . We will also show that for the singular system (7.2.26) ªE º rank « » ¬C ¼

n for an arbitrary F .

(7.2.30)

Using (7.2.27), we can write ª I  BFC º rank « n » C ¬ ¼

ªE º rank « » ¬C ¼

§ ªI n rank ¨ « ¨ ©¬ 0

BF º ª I n º · ¸ I p ¼» «¬ C »¼ ¹¸

ªI º rank « n » ¬C¼

n.

As it is known, if the condition (7.2.28) is met, then there exists a nonsingular matrix P nun such that

A

1

P AP

C CP

ª A11 ! A1 p º « » « # % # », B « A p1 ! A pp » ¬ ¼

ª¬C1 C2 ! C p º¼ ,

P 1B,

(7.2.31a)

386

Polynomial and Rational Matrices

where ª 0 º ai »  «I »¼ ¬« di 1

A ii Ci

>0

ci @ 

n

¦d

d i ud i

pudi

, A ij

, ci

¬ª0 aij ¼º 

d i ud j

i z

j , i, j 1, ... , T

ª¬0 ! 0 1 c1,i 1 ! c1 p º¼ ,

(7.2.31b)

p

i

.

i 1

Let ˆ C

diag ¬ªcˆ1 , ! , cˆ p ¼º , cˆ i

>0

! 0 1@ 

1udi

.

It is easily verifiable that ˆ, C CC

(7.2.32)

where

 C

0 ª1 «c « 21 1 « # # « c c ¬« p1 p 2

! 0º ! 0 »» . % #» » ! 1 ¼»

(7.2.33)

Note that CB

CPP 1B

CB ,

(7.2.34)

and E

I

n

 BFC

P 1 I n  BFC P

P 1EP .

(7.2.35)

Using (7.2.35) and (7.2.32), we obtain E

ˆ , I n  BFC

(7.2.36)

F

 . FC

(7.2.37)

where

The Realisation Problem and Perfect Observers of Singular Systems

387

Theorem 7.2.2. Let the condition (7.2.28) be met and the matrices A , C have the form (7.2.31). There exists a matrix F such that

E

ªI t « 1 «0 «0 «¬

e1 0 e2

0º » 0 » , t1  t2 I t2 »» ¼

n  1, e1 

t1

, e2 

t2

,

(7.2.38)

if and only if CB z 0 .

(7.2.39)

Proof. Necessity. Bº ª I det « n » ¬ FC I m ¼

ªI  BFC B º det « n I m »¼ 0 ¬

det > I n  BFC@ ,

ªI det « n ¬0

det > I m  FCB @ .

but we also have Bº ª I det « n » ¬ FC I m ¼

B º I m  FCB »¼

Hence det E

det > I n  BFC@ det > I m  FCB @ .

If CB = 0, then det E = 1 for an arbitrary F. ˆ z 0, since det C  z 0. Hence for at Sufficiency. If CB = CB z 0, then also CB least one k we have cˆ k b k = b kk z 0, where b k is the k-th row of B and cˆ k is the k-th column of Cˆ = [ cˆ ij]. With the entries of F chosen in the following way

f ij

­ 1 , for i j ° ® bkk °0, otherwise ¯

k

,

(7.2.40)

we obtain

E

ˆ I n  BFC

I n  f kk bk cˆk

ªIt « 1 «0 «0 ¬«

e1 0 e2

0º » 0», I t2 »» ¼

388

Polynomial and Rational Matrices

where e1



T 1 ªbk1 bk 2 ! bk ,k 1 º¼ , e2 bkk ¬



T 1 ªbk ,k 1 bk ,k  2 ! bkn º¼ . bkk ¬

„

Theorem 7.2.3. There exists a feedback matrix K satisfying det ª¬Es  A  KC º¼ D z 0 ,

(7.2.41)

if and only if the conditions (7.2.28) and (7.2.39) are met. Proof. Sufficiency. If the conditions (7.2.28) and (7.2.39) are satisfied, then using (7.2.41), (7.2.31), (7.2.32), and (7.2.35), we obtain det ª¬ P 1 Es  A  KC P º¼ ˆ º, det ª¬Es  A  P 1KC º¼ det ªEs  A  KC ¬ ¼

det ª¬Es  A  KC º¼



(7.2.42)



where K

. P 1KC

(7.2.43)

Without loss of generality, in order to simplify the considerations, we assume

E

ªI n1 « 0 ¬«

eº , e 0 »¼»

> e1

e2 ! en1 @ . T

(7.2.44)

Let ª  A n  en 1 1 1 ¬ i § · ¨ ni ¦ d j ¸ , j 1 © ¹ K

 A n2  en2 1 !  A n p 1  en p1 1

 A np  k º , ¼

(7.2.45)

where A i is the i-th column of A , ei is the i-th column of the identity matrix In, and k = [k1 k2 … kn]T n. Using (7.2.31) and (7.2.45), it is easy to verify that ˆ A  KC

ª 0 º  k» . «I »¼ ¬« n1

(7.2.46)

The Realisation Problem and Perfect Observers of Singular Systems

389

Taking into account (7.2.42), (7.2.44) and (7.2.46), we obtain



det ª¬Es  A  KC º¼



ˆ º det ªEs  A  KC ¬ ¼ e1s  k1 º ªs 0 ! 0 « 1 s ! 0 e2 s  k2 »» « «# # % # » # « » « 0 0 ! s en1s  k n1 » «¬ 0 0 ! 1 »¼ kn e1s  k1  e2 s  k2 s  !  en1s  kn1 s n2  k n s n1

(7.2.47)

k1  e1  k2 s  !  en2  kn1 s n2  en1  kn s n1. Comparing both sides of (7.2.41) and (7.2.47), we obtain k

>D

e1 ! en1 @ . T

(7.2.48)

The necessity can be proved analogously to that for standard systems. „

Theorem 7.2.4. There exists a full-rank perfect observer for the system (7.2.24) of the following form Ex

Ax  Bu  K Cx  y ,

(7.2.49)

if the conditions (7.2.28) and (7.2.39) are met. Proof. If the assumption (7.2.39) is met, then a matrix F can be chosen so that the closed-loop system (7.2.26) is singular. According to Theorem 7.2.3, if the conditions (7.2.28) and (7.2.39) are met, then there exists a matrix K satisfying (7.2.41) and there exists a perfect observer of the form (7.2.49). „ If the conditions (7.2.28) and (7.2.39) are met, then a perfect observer of the form (7.2.49) can be obtained using the following procedure. Procedure 7.2.3. Step 1: Compute a matrix P satisfying (7.2.31). Step 2: Using (7.2.40) compute the matrix F , then F

 1 FC

(7.2.50)

390

Polynomial and Rational Matrices

and E

I n  BFC .

Step 3: Using (7.2.48) and (7.2.43), compute K and K

 1 . PKC

(7.2.51)

Step 4:Compute the desired observer Ex

A  KC x  Bu  Ky .

(7.2.52)

Example 7.2.3. For the standard system (7.2.24) with

A

ª0 « «1 «0 « ¬0

1 2 1 3

0 2º 0 1»» , B 0 0» » 1 1¼

ª1º « » « 0 », C « 1» « » ¬1¼

ª0 1 0 0 º «0 1 0 1 » , ¬ ¼

(7.2.53)

one has to compute the perfect observer (7.2.52) with D = 1. It is easily verifiable that the considered system satisfies the conditions (7.2.28) and (7.2.39), since

ªI s  A º rank « 4 » ¬ C ¼

1 0 2 º ªs « 1 s  2 0 1 »» « «0 1 s 0 » rank « » 3 1 s  1» «0 «0 1 0 0 » « » 1 0 1 »¼ «¬ 0

4 for all s 

and CB

ª0 º «1 » . ¬ ¼

Using Procedure 7.2.3 we obtain the following. Step 1: The matrices (7.2.53) already have the desired form (7.2.31) A = A, B = B, C = C and P = I4.

The Realisation Problem and Perfect Observers of Singular Systems

Step 2: Using (7.2.53) and (7.2.40), we obtain

F

>0

1@ , F

>0

 1 FC

ª1 0 º 1@ « » ¬1 1 ¼

1

>1

1@

and

E

ª1 «0 « «0 « ¬0

I 4  BFC

0 0 1º 0 »» . 0 1 1» » 0 0 0¼

1 0

Step 3: Using (7.2.48) and (7.2.45) and taking into account that e1

1, e2

0, e3

>1

1 and A 2

2 1 3@ , A 4 T

>2

1 0 1@ ,

we obtain k

K

K

>D

e1

e2

»  A 2  e3

 1 PKC

e3 @

>1

T

 A 4  k º¼

ª 1 3º « » « 2 0 » ª1 « 0 0 » «¬1 « » ¬ 3 0 ¼

1 0 1@ ,

ª 1 3º « 2 0 » « », «0 0» « » ¬ 3 0 ¼ ª2 1 « 0º « 2 «0 1 »¼ « ¬ 3

T

3º 0 »» . 0» » 0¼

Step 4: The desired observer has the form ª1 «0 « «0 « ¬0

0 1 0 0

0 1º 0 0 »» x 1 1» » 0 0¼

ª0 «1 « «0 « ¬0

0 0 1 0

0 1º ª1º ª 2 3º » « » « 2 0 » 0 1» 0 »y. x  « »u  « « 1» «0 0» 0 0» » « » « » 1 1¼ ¬1¼ ¬ 3 0 ¼

T

391

392

Polynomial and Rational Matrices

7.3 Functional Observers Consider the continuous-time singular system (7.2.1). We seek a system of the form F z  Gu  H y , z 0

Ez

w

z 0 ,

Lz ,

(7.3.1a) (7.3.1b)

which reconstructs the desired linear function of the state vector Kx, where K mun is known, z n is the state vector, w m is the output vector, and u, y as well as E are the same as for the system (7.2.1); F nun, G num, H nup, L mun. Definition 7.3.1. The system (7.3.1) is called a full-order functional observer for the system (7.2.1) if and only if w t

Ȁx t for t ! 0 ,

(7.3.2)

and arbitrary initial conditions x0, z0. Let e

xz.

(7.3.3)

Using (7.3.3), (7.2.1) and (7.3.1), we obtain Ee

Ex  Ez

A  HC x  Fz  B  G u .

(7.3.4)

If we choose F

A  HC,

B

G,

(7.3.5)

equation (7.3.4) takes the form Ee

Fe .

(7.3.6)

From (7.3.1b) for L = K, (7.3.2) and (7.3.3), we have Kx  w

Ke .

(7.3.7)

From Definition 7.3.1 and (7.3.7) it follows that the system (7.3.1) is a functional observer for the system (7.2.1) if and only if e(t) = 0 for t > 0. This condition is met if and only if there exists a matrix H such that

The Realisation Problem and Perfect Observers of Singular Systems

det ª¬Es  A  HC º¼

D,

393

(7.3.8)

where D is a nonzero scalar and independent of s. Theorem 7.3.1. Let the condition (7.2.3) be satisfied. A full-order perfect observer for the system (7.2.1) exists if and only if  rank A

 a º , rank ª¬ A ¼

(7.3.9)

where

 A

! an 0 º ª a0  D º » « ! an1 » a » , a « 1 » , « # » % # » » « » ! anr ¼ ¬ ar ¼ det > Es  A @ ar s r  ar 1s r 1  !  a1s  a0 , r d rank E  n , (7.3.10)

ª a10 «a « 11 « # « ¬ a1r

p s

a20 a21 # a2 r

and pk s

det ª¬h1 s ! h k 1 s cT h k 1 s ! h n s º¼ akr s r  !  ak 1s  ak 0 , k 1, ! , n,

ª¬h1 s ! h n s º¼

(7.3.11)

T T ¬ªE s  A ¼º

is the determinant of ETs - AT with its k-th column replaced by cT. Proof. Using the Binet–Cauchy theorem, one can easily show that det ª¬Es  A  HC º¼

p s  h1 p1 s  !  k n pn s ,

(7.3.12)

where HT

>h1

! hn @ .

From (7.3.8) and (7.3.12), we have h1 p1 s  h 2 p2 s !  k n pn s D  p s .

(7.3.13)

Comparing the coefficients at the same powers of the variable s, we obtain from (7.3.13) the following equation

394

Polynomial and Rational Matrices

 AH

a .

(7.3.14)

It follows by the Kronecker–Capelli theorem that (7.3.14) has a solution H if and only if the condition (7.3.9) is met. „ If the conditions of Theorem 7.3.1 are met, then a perfect observer of the form (7.3.1) can be obtained using the following procedure. Procedure 7.3.1. Step 1:Using (7.3.11), compute the polynomials p1(s),…,pn(s) and check if the condition (7.3.9) is met. If it is, go to Step 2, otherwise the problem is unsolvable. Step 2: For a given value of the scalar D, compute the matrix H satisfying (7.3.14). H can be also computed by choosing its elements in such a way that di

di H

0 for i 1, ..., q ,

(7.3.15)

and d0 = d0(H) = D, where det ª¬ET s  AT  CT HT º¼

det ª¬Es  A  HC º¼

d q s q  !  d1s  d 0 .

Step 3: Using (7.3.5), compute F, G and L = K. Example 7.3.1. Compute a functional perfect observer of the form (7.3.1) for the system (7.2.1) with

E

ª1 0 0 º «0 1 0 » , A « » «¬0 0 0 »¼

ª 0 1 0º « 0 0 1» , B « » «¬ 1 0 0 »¼

ª1 0 º «0 1 » , C « » «¬1 1»¼

>1

0 0@ , (7.3.16)

so that it reconstructs a linear function Kx for K

ª1 2 3 º « 2 1 2 » and D ¬ ¼

2.

In this case, the condition (7.2.3) is met, since

ª Es  A º rank « » ¬ C ¼

ª s 1 0 º «0 s 1» », rank « «1 0 0 » « » ¬1 0 0 ¼

(7.3.17)

The Realisation Problem and Perfect Observers of Singular Systems

395

for all finite s . Using Procedure 7.3.1, we obtain the following. Step 1: From (7.3.11) and (7.3.10), we have

¬ªE s  A ¼º T

T

ª¬h1 s h 2 s h 3 s º¼

p1 s

det »cT

h 2 s h 3 s ¼º

p2 s

det ª¬h1 s c

p3 s

det ª¬h1 s h 2 s c º¼

1 0 0

ªs 0 « « 1 s «¬ 0 1 0 1 s 0 0 1 0

1º 0 »» , 0 »¼ (c

C),

s

T

h 3 s º¼

1 1 1 0 0 0 0 0 s

p s

T

det > Es  A @

0

0,

1

1 s 0 0 1 0

1,

s 1 0 0 s 1 1, 1 0 0

and

 A

ª a10 «a « 11 «¬ a12

a20 a21 a22

a30 º a31 »» a32 »¼

ª0 0 1 º « 0 0 0 » , a « » «¬ 0 0 0 »¼

ª a 0 D º « a » « 1 » «¬ a2 »¼

ª 1º «0». « » «¬ 0 »¼

(7.3.18)

From (7.3.18), it follows that the condition (7.3.9) is satisfied. Step 2: The equation (7.3.14) for HT = [h1 h2 h3] has the form ª0 0 1 º ª h 1 º «0 0 0» « h » « »« 2» «¬0 0 0 ¼» ¬« h3 ¼»

ª 1º «0» « » ¬« 0 ¼»

and its solution is HT = [h1 h2 -1], where h1 and h2 are arbitrary. The same result is obtained with the use of the second method, which relies on the relationship (7.3.15), since

396

Polynomial and Rational Matrices

det ª¬ Es  A  HC º¼

s  h1  h2 1  h3

1 0 s 1 1  h3 0 0

D

2.

Step 3: Using (7.3.5) and (7.3.17), we obtain

F

ª h1 1 0 º « h 0 1» , G « 2 » «¬ 2 0 0 »¼

A  HC

B

ª1 0 º «0 1 » , L « » «¬1 1»¼

Ȁ

ª1 2 3 º « ». ¬ 2 1 2 ¼

The desired functional perfect observer is ª1 0 0 º ª h1 1 0 º ª1 0 º ªh1 º «0 1 0 » z « h 0 1 » z  «0 1 » u  « h » y, « » « 2 » « » « 2» «¬0 0 0 ¼» «¬ 2 0 0 ¼» «¬1 1¼» «¬ 1¼» ª1 2 3 º w « » z. ¬ 2 1 2 ¼

The foregoing considerations can be extended into the case of reduced-order functional perfect observers [71, 108, 115].

7.4 Perfect Observers for 2D Systems + be the set of nonnegative integers. Consider a two-dimensional (2D) system described by the singular second Fornasini–Marchesini model

Let

Exi 1, j 1

yij

A1 xi 1, j  A 2 xi , j 1  B1ui 1, j  B 2ui , j 1 ,

(7.4.1a)

Cxij ,

(7.4.1b)

where xij n, uij m and yij p are vectors of state, input and output, respectively, and E, Ak nun, Bk num, k = 1,2, C pun. We assume that det E = 0 and det > Ez  A k @ z 0 for some z 

and k

1 or k

2.

(7.4.2)

The boundary conditions for (7.4.1) are xi 0 for i 



and x0 j for j 



.

(7.4.1c)

The Realisation Problem and Perfect Observers of Singular Systems

397

We assume that the boundary conditions are subject to a jump-like change for i = 0 and j = 0. Consider the following 2D singular system A1 xi 1, j  A 2 xi , j 1  B1ui 1, j  B 2ui , j 1  Dyi1, j  Fyi , j 1 ,

Exi 1, j 1 wij

Cxij  Guij  Hyij ,

(7.4.3a) (7.4.3b)

with the boundary conditions xi 0 for i 



and x0 j for j 



,

(7.4.3c)

where xij 

n

, wij 

n

, E, A k , C 

nun

, Bk , G 

num

, k

1, 2, D, F, H 

nu p

.

Definition 7.4.1. The system (7.4.3) is called a perfect observer of the system (7.4.1) if and only if wij

xij , for i, j 



(7.4.4)

and for arbitrary boundary conditions of the form (7.4.1c) and (7.4.3c). Consider the following particular case of the system (7.4.3) Exi 1, j 1 A1 xi 1, j  A 2 xi , j 1  B1ui 1, j  B 2ui , j 1

(7.4.5a)

K1 Cxi 1, j  yi 1, j  K 2 Cxi , j 1  yi , j 1 , wij

xij , i, j 

where K1, K2



,

(7.4.5b)

nup

.

Theorem 7.4.1. The system (7.4.5) is a perfect observer of the system (7.4.1) if ªCº rank « » for k 1 or k 2 , ¬Ak ¼ ªE º ª Ez  A k º rank « » n and rank « » n for all finite z  ¬C ¼ ¬ C ¼ rank C

and k = 2 or k = 1.

(7.4.6) (7.4.7)

398

Polynomial and Rational Matrices

Proof. Let eij

xij  xij , i, j 



.

(7.4.8)

Using (7.4.8), (7.4.1a) and (7.4.5a), we obtain Eei 1, j 1

Exi 1, j 1  Exi 1, j 1

A1  K1C ei1, j  A 2  K 2C ei , j 1 .

(7.4.9)

If the condition (7.4.6) is met for k = 1, then K1 can be chosen in such a way that A1 = K1C, and from (7.4.9), we obtain Eei 1, j 1

A 2  K 2C ei , j 1 .

(7.4.10)

If the condition (7.4.7) is met for k = 2, then there exists a matrix K2 such that det ª¬Ez  A 2  K 2C º¼ D z 0, for some z  ,

(7.4.11)

and according to the considerations in Sect. 7.1 eij = 0 and wij = x ij for i, j +. The proof for k = 2 is analogous. „ If the conditions (7.4.6) and (7.4.7) are satisfied, then a perfect observer of the form (7.4.5) of the system (7.4.1) can be obtained using the following procedure. Procedure 7.4.1. Step 1: Compute K1 so that A1 = K1C. Step 2: With the matrices E, A2, C and scalar D given, compute the matrix K2 so that det ¬ªEz  A 2  K 2C ¼º D z 0 .

(7.4.12)

To this end, we can apply the method of elementary operations, provided in [122]. Step 3: Using (7.4.5) compute the desired observer. Remark 7.4.1. In the foregoing considerations one can interchange the role of the matrices A1 and A2, and K1 and K2, respectively. Example 7.4.1. Compute a perfect observer of the form (7.4.5) for the system (7.4.1) with

The Realisation Problem and Perfect Observers of Singular Systems

E

B1

ª1 «0 « «0 « ¬0

0 1 0 0

0 0 0 0

0º 0 »» , A1 1» » 0¼

ª1 0 º «0 2 » « », B 2 «1 1» « » ¬1 0 ¼

ª0 «1 « «0 « ¬0

1 2 0 0

0 0 1 0

ª0 1 º «1 1» « », C «1 0 » « » ¬0 1 ¼

0º 0 »» , A2 0» » 1¼

> 1

ª 1 «1 « « 2 « ¬ 3

0 1 0 1 0 2 0 3

0º 0 »» , 0» » 0¼

399

(7.4.13)

0 1 0@ .

The system satisfies the conditions (7.4.6) and (7.4.7), since ª Ez  A1 º 4 and rank « » ¬ C ¼

ªCº ªEº rank « » , rank « » A ¬C ¼ ¬ 2¼

rank C

4,

for all finite z . Thus there exists a perfect observer of the form (7.4.5) for this system. Taking into account Remark 7.4.1 and applying Procedure 7.4.1, we obtain the following. Step 1: In this case, K2 = [1 1 2 3]T, since A2 = K2C. Step 2: Using (7.4.12) it is easily verified that for K1 = [0 1 1 0]T and D = 1, we obtain

det ª¬Ez  A1  K1C º¼

1 0 0 z 0 z  2 1 0 0 0 z 1 0 0 0 1

1.

Step 3: The desired observer is ª1 «0 « «0 « ¬0 ª0 «1 « «1 « ¬0

0 1 0 0

0º ª0 » «0 0» xi 1, j 1 « «1 1» » « 0¼ ¬0 1º ª0º « 1» 1»» ui , j 1  « » yi 1, j «1» 0» » « » 1¼ ¬0¼ 0 0 0 0

1 2 0 0

0 1 0 0

0º ª1 0 º » «0 2 » 0» »u xi 1, j  « «1 1» i1, j 0» » « » 1¼ ¬1 0 ¼

ª1º « 1»  « » yi , j 1. «2» « » ¬3¼

400

Polynomial and Rational Matrices

With only slight modifications the foregoing considerations apply to 2D systems described by the Roesser model ª xh º E « iv1, j » «¬ xi , j 1 »¼ yij

>C1

ª A11 «A ¬ 21

A12 º ª xih, j º ª B11 º u , « » A 22 »¼ «¬ xiv, j »¼ «¬ B 22 »¼ ij

ª xh º C2 @ « iv, j » , i, j  «¬ xi , j »¼



(7.7.14)

,

where xih, j 

n1

, xiv, j 

n2

are the horizontal state vector and vertical state vector, respectively, ui,j yi,j p are the vectors of the state, input and output, respectively; ªA E, « 11 ¬ A 21

A12 º ª B11 º , , A 22 ¼» ¬«B 22 ¼»

>C1

m

and

C2 @

are real matrices of appropriate dimensions. If E = diag [E1 E2], (E1 n1un1, E2 n2un2), then the model (7.4.14) can be written in the form (7.4.1), where

xij

ª xih, j º « v » , A1 ¬« xi , j ¼»

ª 0 «A ¬ 21

B1

ª 0 º «B » , B 2 ¬ 22 ¼

ªB11 º « 0 », C ¬ ¼

0 º , A2 A 22 »¼

>C1

ª A11 « 0 ¬

A12 º , 0 »¼

C2 @ .

These considerations can be generalised into the case of the singular (2D) general model [147].

7.5 7.5.1

Perfect Observers for Systems with Unknown Inputs Problem Formulation

Consider the following linear continuous-time system x y

Ax  Bu  Dv , Cx ,

(7.5.1a) (7.5.1b)

The Realisation Problem and Perfect Observers of Singular Systems

401

where x = dx/dt, x n is the state vector, u q is the input vector, v m is the vector of unknown disturbances, y p is the output vector; A nun, B nuq, D num, C pun. We assume that rank C = p < n and rank D = m. We seek an r-th order perfect observer of the form E1 z Fz  Gu  Hy, xˆ Pz  Qy,

(7.5.2)

an observer that for t > 0 exactly reconstructs the state vector x in the presence of the unknown disturbance v, where z r is the state vector of the observer, xˆ is an estimate of x, E1, F rur, det E1 = 0, G ruq, H rup, P nur and Q nup. Let e r be an error of the observer defined as z  Tx ,

e

(7.5.3)

where T run. Differentiating (7.5.3) with respect to t and using (7.5.1) along with (7.5.2), we obtain E1e

E1 z  E1Tx

Fz  Gu  HCx  E1TAx  E1TBu  Ǽ1TDv

Fe  FT  E1TA  HC x  G  E1TB u  E1TDv .

If E1TB

G,

FT  E1TA  HC E1TD

(7.5.4) 0,

(7.5.5)

0,

(7.5.6)

then E1e

Fe .

(7.5.7)

Note that xˆ  x

Pz  QCx  x

Pz  QCx  PTx  PTx  x

Pe  QC  PT  I n x if

Pe,

402

Polynomial and Rational Matrices

PT  QC

>P

ªT º Q@ « » ¬C ¼

In .

(7.5.8)

According to the considerations in Sect. 7.1, if det E1 s  F D z 0 ,

(7.5.9)

where D does not depend on s, then e = 0 for t > 0. The problem of a reduced-order perfect observer with unknown disturbances can be formulated in the following way. Given the matrices A,B,C,D compute E1,F,G,H,T,P,Q in such a way that the relationships (7.5.4), (7.5.5), (7.5.6), (7.5.8) and (7.5.9) hold true. 7.5.2

Problem Solution

The relationship (7.5.5) can be written as

>F

ªTº H@ « » ¬C ¼

E1TA .

If rank F = r, then from the Sylvester inequality it follows that r + n – (r + p) d rank E1TA, and taking into account det E1 = 0, we obtain r > n  p. Since rank E1TD = 0, we have rank E1T + m – n d 0 and rank E1T < n  m. Hence rank E1 < n  m rank. Thus we have p t m. Lemma 7.5.1. There exist a pair of nonsingular matrices (L, R) that transform the system matrices into the form LAR D2

ª A1 A 2 º « A A » , CR 4¼ ¬ 3 ª0 I m p n º pum , «0 » 0 ¬ ¼  A

 C

ª¬0 I p º¼ , LD

if and only if rank C = p and rank D = m (p d m), where A1 A3 pu(n-p), A4 pup, D1 = [In-p 0] (n-p)um.

 D

ª D1 º « D » , (7.5.10) ¬ 2¼

(n-p)u(n-p)

, A2

(n-p)up

,

Proof. As it is known, if rank C = p, then there exists a nonsingular matrix R1 such that CR1 = [C1 C2], where C2 pup and rank C2 = p. Thus there exists a nonsingular matrix R such that

The Realisation Problem and Perfect Observers of Singular Systems

CR

CR1R 2

>C1

0 º ª I C2 @ « n1p 1 » ¬ C2 C1 C2 ¼

403

¬ª0 I p ¼º .

Analogously, using the matrix

L2

ˆ 1 ª D 1 « 1 ˆ ˆ  D ¬« 1 D2

0 º » I nm ¼»

and performing an appropriate partition into the blocks D1 and D2 of D, we obtain (7.5.10). „ Note that in the course of transformation of the matrices of the system (7.5.1) into the form (7.5.10), the state vector is also transformed, according to the relationship xˆ = R1x. It follows from the condition p < n that D2 is not a full rank matrix. Let r = 2n – m  p. We choose the matrices E1 and F to be of the form E1

ªI n p « 0 ¬

0 º , F 0nm »¼

ª 0 «D I ¬ nm

I n p º . 0 »¼

(7.5.11)

It is easily verifiable that the matrices (7.5.11) satisfy the condition (7.5.9). Let T

ª T1 «T ¬ 3

T2 º , T4 »¼

where T1 

n m u n  p

, T2 

n  m u p

, T3 

n p u n p

, T4 

n p u p

and X

 . FT  E1TA

(7.5.12)

Note that the equation HC = -X has a solution if and only if for the given matrices C and X rank C

ª Xº rank « » . ¬C ¼

(7.5.13)

404

Polynomial and Rational Matrices

From (7.5.13) it follows that this condition can be met if and only if the entries of the first n  p columns of X are zero. Let  [a ], i 1, ..., n, j 1, ..., n . T [tij ], i 1, ..., r , j 1, ..., n and A ij

Using (7.5.10) and (7.5.11), we obtain

X

ªt I n p º « 11 # 0 »¼ « «tr ,1 ¬

ª 0 «D I ¬ n p

ª tnm1,1 « # « «t2 nn p ,1 « « D t11 « # « ¬« D tnm ,1

! % ! ! % !

!

ª t11 « # ! t1,n º « » «t % # »  « n p ,1 0 ! tr ,n »¼ « « # « ¬« 0

tnm1,n º ª c11 # »» « # « t2 nm p ,n » « cn p ,1 »« D t1,n » « 0 # » « # » « D tnm,n ¼» ¬« 0

% ! ! % !

! % ! ! % !

t1,n º # »» ª a11 ! a1,n º t n  p ,n » « » » # % # » 0 »« « an ,1 ! an ,n » ¼ # »¬ » 0 ¼»

(7.5.14)

c1,n º # »» cn p ,n » », 0 » # » » 0 ¼»

where n

cij

¦t

i ,k

ak , j .

k 1

The condition (7.5.13) and D z 0 imply ti,j = 0, for i = 1,…,nm and j = 1,…,np, that is, T1 = 0, this in turn implies rank D2 < m. From (7.5.8) it follows that ªT º rank « » ¬C ¼

ª T1 « rank «T3 «0 ¬

T2 º » T4 » I p »¼

n.

(7.5.15)

If T1 = 0, then rank T3 = np. Let

T2

ª t1,n p 1 ! t1,n º « » % # » « # «tn p ,n p1 ! tn p ,n » ¬ ¼

n  p u p

.

The Realisation Problem and Perfect Observers of Singular Systems

405

The equalities n

tn  m i , j

ci , j

n

¦t

a

i ,l l , j

l 1

¦

l n  p 1

ti ,l al , j , for i, j 1, ..., n  p

are equivalent to T3

T2 A3 .

(7.5.16)

The condition (7.5.15) for T1 = 0 implies rank T3 = n  p. If p < n  p, then this condition cannot be met. Otherwise if p t n  p, T 2 has full rank n  p and rank A3 = np. This explains the choice made earlier that r = 2n – m  p. It guaranties that rank T3 = n  p. Let X1 be the matrix built from the columns n  p + 1,…,2n – m  p of X. Taking into account that HC = H[0 Im] = X = [0 X1], we obtain H X1 .

(7.5.17)

From (7.5.8), we have R

>P

ªTº Q@ « » R ¬C ¼

>P

ª TR º Q@ « » . ¬C¼

(7.5.18)

R is a nonsingular matrix, hence

>P

Q@



ªTR º R« » , ¬C¼

where  denotes the Moore–Penrose pseudo-inverse. The following procedure ensues from the foregoing considerations. Procedure 7.5.1. Step 1:Compute the nonsingular matrices L and R transforming the matrices of the system (7.5.1) into the form (7.5.10). Step 2:Choose the matrices E1 and F of the form (7.5.11) Step 3:Choose T1 = 0 and T2 with rank n-m. Step 4: Using the computed in Step 3 ti,j and (7.5.16), compute ti,j, i = nm+1,…,2nmp, j = 1,…,np. Step 5:Taking arbitrary values of ti,j (i = nm+1,…,2nmp, j = np+1,…,n) and using (7.5.4) along with (7.5.17), compute the matrices G and H. Step 6:Using (7.5.18) compute P and Q.

406

Polynomial and Rational Matrices

From (7.5.10), we have ª L 0 º ª Is  A D º ª R 0 º « »« 0 ¼» ¬« 0 I ¼» ¬0 I¼ ¬ C

ªLRs  A1 A2 « Is  A 4 «  A3 « Ip 0 ¬

D1 º » D2 » . 0 »¼

(7.5.19)

Assume at the beginning that rank D = m. Using the matrix D1 and applying elementary operations we can eliminate from LRs – A1 the entries dependent on s; with use of Ip, the same can be done for Is – A4. Hence ª Is  A D º rabk « 0 »¼ ¬ C

n  m for all s 

,

if and only if rank A3 = np. From (7.5.19) it follows that the condition p t np is satisfied if p t m, since p + m t n. Thus the following theorem has been proved. Theorem 7.5.1. Applying Procedure 7.5.1, one can compute the desired perfect observer if and only if 1. p t m, ª Is  A D º 2. rank « n  m for all s  . 0 »¼ ¬ C Example 7.5.1. Compute a perfect observer of the form (7.5.2) for the system (7.5.1) with

A

ª 1 0 0 0 0 º « 0 2 0 0 0 » « » « 0 0 3 0 0 » , Ǻ « » « 0 0 0 4 0 » ¬« 0 0 0 0 5¼»

D

ª1 «0 « «0 « «0 «¬0

0º 1 »» 0» , C » 0» 0 »¼

ª0 «0 « «1 « «0 «¬0

0º 0 »» 0» , » 1» 0 ¼»

ª0 0 1 0 0 º «0 0 0 1 0 » . « » «¬0 0 0 0 1 »¼

Applying Procedure 7.5.1 we obtain, the following. Step 1: The matrices (7.5.20) already have the desired forms (7.5.10).

(7.5.20)

The Realisation Problem and Perfect Observers of Singular Systems

407

Step 2: In this case, m = 2, p = 3 and we choose

E1

ª1 «0 « «0 « «0 «¬ 0

0 0 0 0º 1 0 0 0 »» 0 0 0 0» , F » 0 0 0 0» 0 0 0 0 »¼

ª 0 0 0 1 0º « 0 0 0 0 1» « » «D 0 0 0 0 » . « » « 0 D 0 0 0» «¬ 0 0 D 0 0 »¼

Step 3: In this example,

>T1

ª0 0 1 0 0º «0 0 0 1 0» . « » ¬«0 0 0 0 1 »¼

T2 @

Step 4: Using (7.5.16), we obtain

T

ª0 «0 « «0 « «1 «¬0

0 0 0 0 1

1 0 0 0 0

0 1 0 0 0

0º 0 »» 1» . » 0» 0 »¼

(7.5.21)

Taking into account (7.5.20) and (7.5.12), we obtain

X

ª0 «0 « «0 « «0 «¬0

0 3 0 0 0 D 0 0 0 0

0 4 0 D 0

0 º 0 »» 0 ». » 0 » D »¼

Step 5: Using (7.5.17) and (7.5.22), we obtain

H

ª 3 « 0 « « D « « 0 «¬ 0

0 4 0 D D

0º 0 »» 0» , » 0» 0 »¼

(7.5.22)

408

Polynomial and Rational Matrices

and from (7.5.4)

G

ª1 «0 « «0 « «0 «¬0

0º 1 »» 0» . » 0» 0 »¼

Step 6: From (7.5.18) and (7.5.21), we have

P

0 0 ª 0 « 0 0 0 « « 0, 5 0 0 « « 0 0, 5 0 0 0, 5 ¬« 0

1 0º 0 1 »» 0 0» , Q » 0 0» 0 0 ¼»

0 0 º ª 0 « 0 0 0 »» « « 0, 5 0 0 ». « » « 0 0, 5 0 » 0 0, 5¼» ¬« 0

Thus the desired observer is

ª1 «0 « «0 « «0 ¬« 0



0 0 0 0º 1 0 0 0 »» 0 0 0 0 » z » 0 0 0 0» 0 0 0 0 ¼» 0 0 ª 0 « 0 0 0 « «0, 5 0 0 « « 0 0, 5 0 «¬ 0 0 0, 5

ª0 0 0 «0 0 0 « «D 0 0 « «0 D 0 ¬« 0 0 D

1 0º ª1 «0 0 1 »» « 0 0» z  «0 » « 0 0» «0 0 0 ¼» ¬« 0

0º ª 3 « 0 1 »» « 0 » u  « D » « 0» « 0 0 ¼» ¬« 0

1 0º 0 0 º ª 0 » « 0 1» 0 0 »» « 0 0 0 » z  «0, 5 0 0 »y. » « » 0 0» « 0 0, 5 0 » «¬ 0 0 0 »¼ 0 0, 5»¼

0 4 0 D D

0º 0 »» 0» y, » 0» 0 ¼»

The Realisation Problem and Perfect Observers of Singular Systems

409

7.6 Reduced-order Perfect Observers for 2D Systems with Unknown Inputs 7.6.1 Problem Formulation Consider the following 2D system Exi 1, j yij

A 0 xij  A1 xi , j 1  Buij  Dvij ,

(7.6.1a)

i, j  '  ,

Cxij ,

(7.6.1b)

where xij n is the state vector, uij m the input vector, vij q the vector of unknown disturbances, yij p the output vector; E, A0, A1 nun, B num, D nuq, C pun. We assume that det E = 0 and rank C

p.

(7.6.2)

The boundary conditions for (7.6.1a) have the form

x0 j , for j  ' 

(7.6.3)

Consider the following singular 2D system E1 zi 1, j xˆij

F0 zij  F1 zi , j 1  Guij  H 0 yij  H1 yi , j 1 ,

(7.6.4a)

Pzij  Qyij ,

(7.6.4b)

with the boundary conditions z0 j for j  '  where xˆ ij n is an estimate of xij and zij r, E1, F0, F1 H0, H1 rup, det E1 = 0.

(7.6.4c) rur

, G

rum

,

Definition 7.6.1. The singular system (7.6.4) is called a reduced-order perfect observer of the system (7.6.1) with unknown disturbances, if xˆij

xij , for i, j  '  ,

and arbitrary boundary conditions of the form (7.6.3) and (7.6.4c). Let

(7.6.5)

410

Polynomial and Rational Matrices

zij  TExij

eij

(7.6.6)

be the error of the observer, with T obtain E1ei 1, j

E1 zi 1, j  E1TExi 1, j

run

. Using (7.6.6), (7.6.4) and (7.6.1), we

F0 eij  TExij

 F1 ei , j 1  TExi , j 1  Guij  H 0Cxij  H1Cxi , j 1 E1ȉǹ 0 xij  E1TA1 xi , j 1  E1TBuij  E1TDvij F0 eij  F1ei , j 1  F0 TE  H 0C  E1TA 0 xij

(7.6.7)

 F1TE  Ǿ1C  E1TA1 xi , j 1  G  E1TB uij  E1TDvij . If F0 TE  H 0C  E1TA 0

0,

F1TE  H1C  E1TA1

0,

G

(7.6.8a)

E1TB ,

(7.6.8b)

0,

(7.6.8c)

E1TD

then E1ei 1, j

F0 eij  F1ei , j 1 .

(7.6.9)

Form (7.6.4b), (7.6.6) and (7.6.1b), we have

xˆij  xij

P eij  TExij  QCxij  xij

xˆij  xij

Peij ,

Peij  PTE  QC  I n xij

and (7.6.10)

if and only if PTE  QC

>P

ªTE º Q@ « » ¬C¼

In .

(7.6.11)

Note that a pair of matrices P, Q satisfying (7.6.11) can be found if and only if

The Realisation Problem and Perfect Observers of Singular Systems

ªTE º rank « » ¬C¼

n.

411

(7.6.12)

From the equality ªTE º «C» ¬ ¼

ªT 0 º ª E º «0 I » « » , p ¼ ¬C ¼ ¬

it follows that the condition (7.6.12) implies ªEº rank « » ¬C ¼

n.

(7.6.13)

Henceforth we will assume that the condition (7.6.13) is met. The problem of computing a perfect observer can be formulated in the following manner. With the matrices E, A0, A1, B, C, D given, one has to compute the matrices of the observer (7.6.4) E1, F0, F1, G, H0, H1, P, Q so that the conditions (7.6.8) and (7.6.11) are met. 7.6.2 Problem Solution Lemma 7.6.1. Let the conditions (7.6.2) and (7.6.13) be met, and p + rank E = n. Then there exist nonsingular matrices U, V nun such that E Ak

ªI r c 0º « 0 0 » , r c rank E, C CV ¬ª0 I p ¼º , ¬ ¼ k k ª A11 º A12 nrc u nrc k   r cu r c , A k22   , , k 0,1, A11 UA k V « k k » (7.6.14) ¬ A 21 A 22 ¼

UEV

UB

ª B1 º n  r c u m r cu m , UD «B » , B1   , B 2   ¬ 2¼

D1   r cu q , D2  

n  r c u q

ª D1 º «D » , ¬ 2¼

.

Proof. As it is well-known there exist nonsingular matrices U, V1 UEV1

ªI rc «0 ¬

0º . 0 »¼

Let CV1 = [C1 C2], C1 pu(n-p), C2 (7.6.13) imply det C2 z 0. Hence the matrix

nun

such that (7.6.15)

pup

. The assumptions (7.6.2) and

412

Polynomial and Rational Matrices

V2

0 º ª I rc « C C C1 » 2 ¼ ¬ 2 1

(7.6.16)

is nonsingular and CV

>C1

C2 @ V2

¬ª0 I p ¼º , UEV

ªI rc «0 ¬

0º 0 »¼

E,

(7.6.17)

where V = V1V2.

„

Henceforth we assume that the matrices E and C are of the form (7.6.14). Lemma 7.6.2. If det > E1 z1  F0  F1 z2 @ D ,

(7.6.18)

where D is a nonzero scalar independent of z1 and z2, then a solution to (7.6.9) satisfies the condition eij

0, for i, j ! 0 .

(7.6.19)

Proof. Let e(z1, z1) be the 2D Z transform of eij, defined as e z1 , z2

Z ª¬eij º¼

f

f

¦¦ e z

i  j ij 1 2

z

.

(7.6.20)

i 0 j 0

Taking into account that Z ¬ªei 1, j ¼º

z1 ª¬e z1 , z2  e 0, z2 º¼ ,

Z ª¬ei , j 1 º¼

z2 ª¬e z1 , z2  e z1 , 0 º¼ ,

where e 0, z2

f

¦e

z , e z1 , 0

j 0j 2

j 0

f

¦e

i i0 1

z ,

i 0

we obtain from (7.6.9) e z1 , z2

>E1 z1  F0  F1 z2 @

1

¬ªE1 z1e 0, z2  F1 z2 e z1 , 0 ¼º .

(7.6.21)

The Realisation Problem and Perfect Observers of Singular Systems

413

If the condition (7.6.18) is met, then n1

>E1 z1  F0  F1 z2 @

n2

¦¦ T

1

k l  k , l 1 2

z z ,

(7.6.22)

k 1 l 1

where F01 , T1,2

T1,1

F01F1T1,1 , T2,1

F01Ǽ1T1,1 , !

and the pair (n1, n2) is the nilpotent index. Note that (7.6.18) implies det F0 z 0. Substituting (7.6.22) into (7.6.21), we obtain e z1 , z2

n1

n2

¦¦ T

z z ª¬E1 z1e 0, z2  F1 z2 e z1 , 0 º¼ .

k l  k , l 1 2

k 1 l 1

From (7.6.23) and (7.6.20) it follows that eij = 0, for i, jt 0.

(7.6.23)

„

If r =rc + 1 and E1 F1

ªI rc 0º « 0 0»  ¬ ¼ ªF1c 0 º « 0 0»  ¬ ¼

rur

r ur

, F0

ª 0 « D ¬

, F1c 

r cur c

I rc º  0 »¼

r ur

,

(7.6.24)

,

then the condition (7.6.18) is met, since ª I z  Fc z det « rc 1 1 2 D ¬

I r c º » D. 0 ¼

The choice of T is of crucial importance for the problem solution. Equation (7.6.8a) can be written as ªH0 º «H » C ¬ 1¼

ª E1TA 0  F0 TE º « E TA  F TE » . ¬ 1 1 1 ¼

For C = [0 Ip], (7.6.25) has the solution H

ªH0 º «H » , ¬ 1¼

(7.6.25)

414

Polynomial and Rational Matrices

if and only if ªWº rank « » ¬C¼

rank C ,

(7.6.26)

where ªE1TA 0  F0 TE º « E TA  F TE » ¬ 1 1 1 ¼

W

> W1

W2 @ , W1 

2 r ur c

, W2 

2 ru p

.

Lemma 7.6.3. Let the matrices E1, F0, and F1 have the form (7.6.24). The considered problem has a solution if T is chosen in such a way that ªT º rank « 1 » ¬T3 ¼ D

rc ,

(7.6.27a)

Ker > T1 T2 @ , 0, T12c

T11

(7.6.27b)

T1A10  T2 A 30 , F1cT1

T1A11  T2 A13 ,

(7.6.27c)

where T

ª T1 «T ¬ 3

T2 º , T2  T4 ¼»

r cu p

1ur c

, T3 

, T4 

1u p

, T1

ª T11 º «T » , ¬ 12 ¼

ª A1k A 2k º , « k k» ¬ A3 A 4 ¼ A1k  rcurc , A k2  rcu p , A 3k  purc , A k4  pu p , k 0,1.

T11 

1ur c

, T12 

r c1 ur c

, T12c

>0

ªT º I rc @ « 1 » , A k ¬ T3 ¼

Proof. If the condition (7.6.27a) is met, then (7.6.12) holds true, since

TE

ª T1 «T ¬ 3

0º ªTE º and rank « » 0 ¼» ¬C¼

E1T

ªT1 T2 º «0 0» ¬ ¼

With

ª T1 « rank «T3 «0 ¬

0º » 0» I p »¼

n.

The Realisation Problem and Perfect Observers of Singular Systems

415

it is easy to show that (7.6.27b) implies the condition (7.6.8c). The condition (7.6.26) is met if and only if W1 = 0. Taking into account that W

> W1

W2 @

ªE1TA 0  F0 TE º « E TA  F TE » ¬ 1 1 1 ¼

ª T1A10  T2 A 30  T12c T1A 02  T2 A 04 º « » 0 D T1 « », «T1A11  T2 A13  F1c T1 T1A12  T2 A14 » « » 0 0 ¬« ¼»

we obtain

W1

ª T1A10  T2 A 30  T12c º « » D T1 « » «T1A11  T2 A13  F1c T1 » « » 0 «¬ »¼

0.

(7.6.28)

If the conditions (7.6.27c) are met, then (7.6.28) holds true. If (7.6.28) holds, then from (7.6.25) we have H = W2, and from (7.6.8b) we can compute the matrix G. If the condition (7.6.27a) is met, then from (7.6.11) we can compute the matrices P,Q. In a general case (7.6.11) has many solutions. „ From (7.6.27) it follows that r t rc + q. Lemma 7.6.4. Let the matrices E1, F0, F1 have the form (7.6.24). There exists T run satisfying the conditions (7.6.27), if and only if ptq

(7.6.29)

and ªE z  A 0  A1 z2 rank « 1 1 C ¬ where

Dº 0 »¼

n  q for all z1 , z2  u

,

(7.6.29b)

is the field of complex numbers.

Proof. Note that there exists a matrix T such that ªE1T 0 º rank « » ¬ 0 Ip ¼

rc  p .

(7.6.30)

416

Polynomial and Rational Matrices

Using the Sylvester inequality along with (7.6.29), (37.6.0), (7.6.24), and (7.6.8c), we obtain °­ ª E1T 0 º ª E1 z1  A 0  A1 z2 rank ® « »« C ¯° ¬ 0 I p ¼ ¬

D º °½ ¾ 0 »¼ ¿°

ª E TEz1  E1TA 0  E1TA1 z2 E1TD º rank « 1 C 0 »¼ ¬ ª T z  T1A10  T2 A 30  T1A11  T2 A13 z2 rank « 1 1 0 «¬

(7.6.31)

T1A 02  T2 A 04  T1A12  T2 A14 z2 º » t rc  q Ip »¼

where E1TD = 0. The condition (7.6.31) is equivalent to the following one rank [T1 z1  T1A10  T2 A 30  T1A11  T2 A13 z2 ] t r c  q  p ,

(7.6.32)

which can be met if and only if (7.6.29) holds true. „

Theorem 7.6.1. Let r t rc + q, rc + p = n and the condition (7.6.13) be satisfied. The considered problem of the synthesis of a perfect observer has a solution if and only if the conditions (7.6.29) are met. Proof. The condition (7.6.13) implies (7.6.27a). There exists T such that T12c

>0

ªT º I rc @ « 1 » ¬T3 ¼

>T1

ªA0 º T2 @ « 10 » , ¬ A3 ¼

if and only if rank > T1

T2 @

ªA0 º rank « 10 » ¬ A3 ¼

rc .

A proper choice of F1c always makes the condition F1c T1 satisfied.

T1 A11  T2 A13

(7.6.33)

The Realisation Problem and Perfect Observers of Singular Systems

417

From (7.6.32) it follows that the condition (7.6.33) is met if and only if (7.6.29) is met. „ The foregoing considerations yield the following procedure for computing the observer (7.6.4). Procedure 7.6.1. Step 1:Compute the matrices U, V that transform the matrices E, C, Ak, B, D, k = 0,1 into the form (7.6.14). Step 2:Choose the matrices E1, F0, F1 that are of the form (7.6.24) Step 3:Choose the matrix T that satisfies the condition (7.6.27) for r t rc + q. Step 4:Compute

H

ªH0 º «H » ¬ 1¼

ª T1A 02 « « « T1A12 « «¬

 T2 A 04 º » 0 ».  T2 A14 » » 0 »¼

(7.6.34)

Step 5: Using (7.6.8b) and (7.6.11) compute G, P, and Q. Example 7.6.1. Compute a perfect observer of the form (7.6.4) for the system (7.6.1) with

E

B

ª1 0 «0 0 « ¬«0 0 ª1º « 2 », « » «¬ 1»¼

0º 0 »» , A 0 0 ¼» ª1º D «« 0 »» , «¬ 1»¼

ª 1 2 1 º « 2 0 3» , A 1 « » «¬ 1 1 2 ¼» C

ª0 1 0 º «0 0 2 » , « » ¬« 2 1 1¼»

(7.6.35)

ª0 1 0º «0 0 1 » . ¬ ¼

In this case, n = 3, rc = 1, m = q = 1, p = 2, r = 2. The conditions (7.6.29) are met, since

ª E z  A 0  A1 z2 rank « 1 1 C ¬

for all (z1, z2) u .

Dº 0 »¼

ª z1  1 « 2 « rank «1  2 z2 « « 0 «¬ 0

2  z2

1

0

3  2 z2

1  z2

2  z 2

1

0

0

1

1º 0 »» 1» » 0» 0 »¼

4,

418

Polynomial and Rational Matrices

Let T

ª t11 t12 «t ¬ 21 t22

t13 º . t23 »¼

Applying Procedure 7.6.1 we obtain, the following. Step 1: Matrices (7.6.35) already have the desired forms. Step 2: We choose E1

ª1 0 º « 0 0 » , F0 ¬ ¼

ª 0 « D ¬

1º , F1 0 »¼

ªf «0 ¬

Step 3: The conditions (7.6.27) are met if t11

0, t13

2t12 z 0

0, t21

and t12, t22, t23 are arbitrary. Step 4: Using (7.6.34), we obtain

H

ªH0 º «H » ¬ 1¼

ª T1A 02 « « « T1A12 « ¬«

 T2 A 04 º » 0 »  T2 A14 » » 0 ¼»

ª 0 3t12 º « » «0 0 » . « 0 2t12 » « » ¬0 0 ¼

Step 5: Using (7.6.8b) and (7.6.11), we obtain

G

E1TB

P

ª « p12 « « p21 « « p31 «¬

ª 0 t12 «0 0 ¬

ª1º 0º « » 2 0 »¼ « » «¬ 1»¼

and 1 º 2t12 » » 0 », Q » 0 » »¼

The observer we seek is

ª0 0 º « » «1 0 » . «¬0 1 »¼

ª 2t12 º « 0 », ¬ ¼

0º . 0 »¼

The Realisation Problem and Perfect Observers of Singular Systems

ª1 0 º ª 0 1º ª f 0º «0 0 » zi 1, j « D 0 » zij  « 0 0» zi , j 1 ¬ ¼ ¬ ¼ ¬ ¼ ª 2t12 º ª 0 3t12 º ª0 2t12 º  « » uij  « » yij  «0 0 » yi , j 1 , ¬ 0 ¼ ¬0 0 ¼ ¬ ¼ 1 º ª « p12 2t » ª0 0 º 12 « » xˆij « p21 0 » zij  ««1 0 »» yij , « » «¬0 1 »¼ 0 » « p31 «¬ »¼

where D, f, p12, p21, p31 are arbitrary.

419

8 Positive Linear Systems with Delays

8.1 Positive Discrete-time and Continuous-time Systems 8.1.1 Discrete-time Systems num

be the set of num matrices with entries from the field of real numbers and . The set of num matrices with real nonnegative entries will be denoted by num and +n = +nu1 The set of nonnegative integers will be denoted by + + Consider the discrete-time linear system with delays described by the equations

Let n =

nu1

q

h

xi 1

¦A

x

k i k

k 0

yi

 ¦ B j ui  j , i  '  ,

(8.1.1a)

j 0

Cxi  Dui ,

(8.1.1b)

where h and q are positive integers, xi n, ui m, yi p are the state, input and output vectors, respectively, and Ak nun (k = 0,1,…,h), Bj num (j = 0,1,…,q), C pun, D pum. The initial conditions for (8.1.1a) are given by x i   n , (i

0,1,..., h), u j   m ( j 1, 2,..., q ).

Theorem 8.1.1. The solution to (8.1.1a) is given by

(8.1.2)

422

Polynomial and Rational Matrices

ĭ(i ) x0 

xi

i 1

1 h  j 1

¦ ¦ ĭ(i  k )A

j h k 1

k 1 j

xj 

1 q  j 1

¦ ¦ ĭ(i  k )B

j q k 1

k 1 j

uj

(8.1.3)

q

 ¦¦ ĭ(i  1  k  j )B k u j , j 0 k 0

where 1 h · °½ °­§ Z 1 ®¨ zI n  ¦ A k z  k ¸ z ¾ k 0 ¹ °¿ °¯©

ĭ(i )

(8.1.4)

is the state-transition matrix and Z1 denotes the inverse z-transform. The state-transition matrix )(i) satisfies the equation ĭ(i  1)

A 0ĭ(i )  A1ĭ(i  1)  ...  A hĭ(i  h) ,

(8.1.5)

with the initial conditions ĭ(0)

I n , ĭ(i )

0, for i  0.

(8.1.6)

Proof. It is easy to verify that (8.1.3) satisfies the initial conditions (8.1.2). Substituting (8.1.3) into (8.1.1a) and using (8.1.5) and (8.1.6), we obtain q

h

¦A

x

k i k

k 0

h

 ¦ B j ui  j

¦A

j 0

k 0

¦ ĭ(i  2k )B

j q k 1

ĭ(i  1) x0 

k  j 1

1 h  j 1

¦ ¦ ĭ(i  k  1)A

j h k 1

1 q  j 1



h ª «ĭ(i  k ) x0  ¦ ĭ(i  2k ) A k  j 1 x j k 0 ¬

i  k 1 q º q u j  ¦ ¦ ĭi (i  2k  j  1)B k u j »  ¦ B j ui  j j 0 k 0 ¼ j0

1 h  j 1



k

¦ ĭ(i  k  1)B

i

k  j 1

xj

q

u j  ¦¦ ĭ(i  k  j )B k u j

k  j 1

j q k 1

xi 1.

j 0 k 0

Then (8.1.3) satisfies (8.1.1a). 

Definition 8.1.1. The system (8.1.1) is called (internally) positive if xi +n and yi +p (i +) for every x-i +n, u-j +m, i = 0,1,…,h, j = 1,2,…,q and all inputs ui +m, i +. Theorem 8.1.2. The system (8.1.1) is internally positive if and only if Ak 

nun 

, (k

0,1,..., h), B j 

num 

, (j

0,1,..., q), C 

pun 

, D

pum 

. (8.1.7)

Positive Linear Systems with Delays

423

Proof. Defining ª ui º «u » « i 1 » « # » « » «ui q1 » « ui q » ¬ ¼

xi

ª xi º « x » « i 1 » « # » « » « xi h1 » «¬ xi h »¼

A1 " A h1

A

ª A0 «I « n « # « «0 «¬ 0

B

ªB0 «0 « «# « «0 «¬ 0

B1 " B q 1 B q º 0 " 0 0 »» # % # # », » 0 " 0 0» 0 " 0 0 »¼

 C

>C

n 

, ui

" % " "

0 # 0 0

0 # 0 In

 0 " 0@ , D

m 

,

Ah º 0 »» # », » 0» 0 »¼

>D

(8.1.8)

(8.1.9a)

(8.1.9b)

0 " 0@ ,

(8.1.9c)

(8.1.1) can be written in the form xi 1 yi

Axi  B ui , i   x  D  u , C i



(8.1.10a)

,

(8.1.10b)

i

where n (h  1)n, m (q  1)m and

x0

ª x0 º « x » « 1 » « # » « » « x h1 » «¬ x h »¼

n 

, u0

ª u0 º «u » « 1 » « # » « » «u q 1 » « u q » ¬ ¼

m 

.

In [127] it is shown that system (8.1.10) is positive if and only if

(8.1.11)

424

Polynomial and Rational Matrices

   pun , D    pum . A   nun , B   num , C  

(8.1.12)

 and D  Hence, system (8.1.1) is positive if and only if the matrices A B C satisfy conditions (8.1.12) that are equivalent to (8.1.7).  8.1.2 Continuous-time Systems Consider the multivariable continuous-time system with delays x (t )

q

h

¦ A x(t  id )  ¦ B u(t  jd ), i

i 0

y (t )

j

(8.1.13)

j 0

Cx(t )  Du (t ),

where x(t) n, u(t) m, y(t) p are the state, input and output vectors, respectively and Ai nun, i = 1,…,h, Bj num, j = 0,1,…,q, C pun, D pum and d > 0 is a delay. Initial conditions for (8.1.13a) are given by x0 (t ) for t  [hd , 0] and u0 (t ) for t  [ hq , 0].

(8.1.14)

The solution x(t) of (8.1.13) satisfying (8.1.14) can be found by the use of the step method [67, pp.49]. Definition 8.1.2. The system (8.1.13) is called (internally) positive if for every x0(t) +m, t[-hd, 0], u0(t) +m, t[-qh, 0] and all inputs u(t) +, t t 0, we have x(t) +m and y(t) + for t t 0. Let Mn be the set of nun Metzler matrices, i.e., the set of nun real matrices with nonnegative off-diagonal entries. Theorem 8.1.3. The system (8.1.13) is positive if and only if A0 is a Metzler matrix and matrices Ai, i = 1,…,q, Bj, j = 0,1,…,q, C, D have nonnegative entries, i.e., A 0  M n , A i   nun , i 1,.., h, B j   num , j C   pun , D   pum .

0,1,..., q,

(8.1.15)

Proof. To simplify the notation, the essence of proof will be shown for h = q = 1. Using the step method [67, pp. 49] and defining the vectors

Positive Linear Systems with Delays

x t

ª x t º « » « x t  d » , u t « » # « » ¬« x t  kd ¼»

ª u t º « » « u t  d » , « » # « » ¬«u t  kd ¼»

z0 t

ª A1 t  d  B1u t  d º « » 0 « », « » # « » 0 ¬ ¼

425

(8.1.16)

and the matrices ª A0 «A « 1 «0 « « # «¬ 0

"

0

0

0 " A0 "

0 0

B0 B1

0 º ªB 0 «B 0 »» « 1 0 »,B « 0 A » « # # % # # » «# » «¬ 0 0 0 " A1 A 0 ¼ C >C 0 " 0@ , D > D 0 " 0@ , 0

A0 A1

0

"

0

0 " B0 "

0 0

0

#

#

%

#

0

0

" B1

0º 0 »» 0 », » (8.1.17) # » Ǻ 0 »¼

we may write the equations (8.1.13) in the form x (t )

Ax (t )  Bu (t )  z0 (t ) t  [0, d ],

y (t )

Cx (t )  Du (t ).

(8.1.18)

It is well-known [127] that the system (8.1.18) is positive if and only if the matrix A is a Metzler matrix and the matrices B, C and D have nonnegative entries. From the structure of the matrices (8.1.17), it follows that the system (8.1.13) is positive if and only if (8.1.15) holds. „

8.2 Stability of Positive Linear Discrete-time Systems with Delays 8.2.1 Asymptotic Stability Consider the positive discrete-time linear system with delays described by the homogeneous equation

426

Polynomial and Rational Matrices

h

A 0 xi  ¦ A k xi k , i 

xi 1



(8.2.1)

,

k 1

where h is a positive integer and Ak Defining

xi

ª xi º «x » « i 1 »  « # » « » ¬ xi h ¼

n

, n

nun +

(k = 0,1,…,h).

(h  1)n and A

ª A0 «I « n « # « ¬0

A1 " A h º 0 " 0 »»  # % # » » 0 In 0 ¼

nun 

, (8.2.2)

we may write (8.2.1) in the form xi 1

Axi , i 



(8.2.3)

.

The positive system (8.2.3) is called asymptotically stable if its solution xi

A i x0

satisfies the condition lim xi i of

0 for every x0 

n 

.

It is well-known that the positive system (8.2.3) is asymptotically stable if and only if all eigenvalues z1,z2,…,z n of the matrix A have moduli less than 1, i.e., zk  1, for k

1, 2, ! , n .

(8.2.4)

Theorem 8.2.1. [127]. The positive system (8.2.3) is asymptotically stable if and only if all coefficients a i (i = 0,1,…, n -1) of the characteristic polynomial det > I n z  A  I n @

z n  an 1 z n 1  !  a1 z  a0

(8.2.5)

are positive, i.e., a i > 0, for i = 0,1,…, n 1. Theorem 8.2.2. [165]. The positive system (8.2.3) is asymptotically stable if and only if all principal minors of the matrix A

ª¬ aij º¼

are positive, i.e.,

In  A

Positive Linear Systems with Delays

a11 a21

a11 ! 0,

a12 ! 0, a22

a11 a21 a31

a12 a22 a32

a13 a23 ! 0,! , det A ! 0 . a33

427

(8.2.6)

Using elementary row and column operations (that do not change the value of the determinant), we obtain ªI n z  A 0 « I n « det « 0 « # « «¬ 0

det > I n z  A @

 A1 In z I n # 0

!  A h1 0 ! !

0 # I n

% !

ª 0 « « I n det « 0 « « # « 0 ¬

I n z  A 0 z  A1 !  A h1

ª 0 « « I n det « 0 « « # « 0 ¬

0 0 I n # 0

det ª¬I n z

 A 0 z h  !  A h1  A h º¼

h 1

2

!

0 I n # 0 ! ! ! % !

! % !

0 0 0 # I n

0 0 # I n

A h º 0 »» 0 » » # » I n z »¼

A h º » 0 » 0 » ! » # » I n z »¼

(8.2.7)

I n z h1  A 0 z h  !  A h1 z  A h º » 0 » » 0 » # » » 0 ¼

z n  an 1 z n 1  !  a1 z  a0 .

Therefore, we have the following theorem. Theorem 8.2.3. The positive system with delays (8.2.1) is asymptotically stable if and only if all roots of the equation det ª¬ I n z h1  A 0 z h  !  A h1  A h º¼

z n  an 1 z n 1  !  a1 z  a0

have moduli less than 1. Using elementary row and column operations, we may write

0 (8.2.8)

428

Polynomial and Rational Matrices

det > I n ( z  1)  A @ ªI n z  1  A 0 « I n det « « # « 0 «¬

!  A h1  A1 I n z  1 ! 0

ªI n z  1  A 0 « I n det « « # « 0 «¬

I n z  1  A 0 z  1  A1

A2

0 #

0 #

!  A h1 ! 0 %

#

!

I n

0

! %

0 #

0

!

I n

!

I n

ª 0 « I det « n « # « «¬ 0

0

I n z  1  A 0 z  1  A1 2

0 # 0

Ah

º » » ! » » I n z  1 »¼ 0 #

!

0

0 ! I n !

0

0

#

0

º » 0 » » # » I n z  1 »¼

0 #

%

º » » » # » I n z  1 »¼ 0

2

Ah

 A 2 !  A h1

ª 0 « « I n det « 0 « « # « 0 ¬

#

Ah

0 #

#

%

0

! I n

M h z º » 0 » 0 », » # » 0 »¼

(8.2.9)

where Mh z

I n z  1

h 1

 A 0 z  1  A1 z  1 h

h 1

 !  A h1 z  1  A h . (8.2.10)

Theorem 8.2.4. The positive system with time-delays (8.2.1) is asymptotically stable if and only if all coefficients a i (i = 0,1,…, n -1) of the characteristic polynomial

det M q z z n  a n 1 z n 1  !  a1 z  a0

(8.2.11)

are positive, i.e., a I > 0, for i = 0,1,…, n -1. Proof. From (8.2.9) and (8.2.11) it follows that the characteristic equation det [In(z + 1) – A] = 0 is equal to det Mh(z) = 0. Applying Theorem 8.2.3 to the

Positive Linear Systems with Delays

429

system (8.2.1) written in the form (8.2.3), we obtain the hypothesis of Theorem 8.2.4.  Applying Theorem 8.2.4 to the system with delays (8.2.1) written in the form (8.2.3), we obtain the following theorem. Theorem 8.2.5. The positive system with delays (8.2.1) is asymptotically stable if and only if all principal minors of the matrix

A

ªI n  A 0 « I n « «  « ¬ 0

In  A

 A1   A q 1 In



0

 0

 

 I n

Aq º 0 »»  » » In ¼

(8.2.12)

are positive. Example 8.2.1. Consider the positive system (8.2.1) for n = 2, h = 1 with

A0

ª 0,1 0, 2 º «0, 2 0,1 » , A1 ¬ ¼

ª0, 4 0 º « 0 a» , B ¬ ¼

(8.2.13)

0.

Find values of the parameter a t 0 for which the system is asymptotically stable. In this case, the matrix (8.2.12) has the form

A

ªI n  A 0 « I n ¬

 A1 º I n »¼

ª 0, 9 0, 2 0, 4 0 º « a »» 0 « 0, 2 0, 9 . « 1 0 1 0» « » 1 0 1¼ ¬ 0

(8.2.14)

Using Theorem 8.2.5 for the system, we obtain a11

0, 9 ! 0,

a11 a21 a31

a12 a22 a32

a13 a23 a33

a11 a21

a12 a22

0, 9 0, 2 0, 2 0, 9

0, 9 0, 2 0, 4 0, 2 0, 9 0 1 0 1

0, 77 ! 0, 0, 5 0, 2 0, 2 0, 9

0, 41 ! 0,

430

Polynomial and Rational Matrices

0, 9 0, 2 0, 4 0 0, 2 0, 9 a 0

det A

1

0

1

0

0

1

0

1

0, 5

0, 2

0, 2

0, 9

0, 41  0, 5a ! 0.

Hence the system is asymptotically stable for 0 d a d 0,82. The same result can be obtained by the use of Theorems 8.2.4 or 8.2.3. It will be shown that the instability of the positive system (without delays) xi 1

nun 

A 0 xi , A 0 

(8.2.15)

always implies instability of the positive system with delays (8.2.1). Theorem 8.2.6. The positive system (with delays) (8.2.1) is unstable if the positive system (without delays) (8.2.15) is unstable. Proof. By Theorem 8.2.5, the system (8.2.15) is unstable if at least one of the principal minors of the matrix A0

ª¬ aij0 º¼

I n  A0

is not positive. The system (8.2.1) is unstable if at least one of the principal minors of the matrix

In  A

ªI n  A 0 « I n « « # « ¬ 0

 A1 !  A q 1 In

!

0

# 0

% !

# I n

A q º 0 »» # » » In ¼

(8.2.16)

is not positive. From (8.2.16) it follows that if at least one of the principal minors of the matrix In – A0 is not positive, then at least one of the principal minors of the matrix (8.2.16) is also not positive. Therefore, the instability of the system (8.2.15) always implies the instability of the system (8.2.1).  From Theorem 8.2.5, we have the following important corollary. Corollary 8.2.1. If the positive system (8.2.15) is unstable, then it is not possible to stabilize the system (8.2.1) by a suitable choice of the matrices Ak, k = 1,…,q. Theorem 8.2.7. The positive system (8.2.1) is unstable if at least one diagonal entry of the matrix A0 = [aij0] is greater than 1, i.e.,

Positive Linear Systems with Delays

akk0 ! 1, for some k  1, 2,! , n .

431

(8.2.17)

Proof. It is known [127, Theorem 2.15] that the positive system (8.2.15) is unstable if for at least one k(1,2,…,n) (8.2.17) holds. In this case, by Theorem 8.2.5 the positive system (8.2.1) is also unstable. 

Example 8.2.2. Consider the positive system (8.2.1) for n = 2, q = 1 with ª a11 «a ¬ 21

ª0,1 0, 2 º , A1 «0 2 »¼ ¬

A0

a12 º a22 »¼

a

ij

t 0, i, j 1, 2 .

(8.2.18)

The system (8.2.15) with A0 of the form (8.2.18) is unstable, since one of the eigenvalues of A0 is equal 2. The same result follows from Theorem 8.2.6, since a220 = 2 > 1. In this case, the matrix (8.2.16) has the form

In  A

ªI n  A 0 « I n ¬

 A1 º I n »¼

ª 0, 9 0, 2 a11 « 0 1 a21 « « 1 0 1 « 1 0 ¬ 0

 a12 º  a22 »» . 0 » » 1 ¼

(8.2.19)

Applying Theorem 8.2.5 to (8.2.19), we obtain a11

0, 9 ! 0,

a11

a12

a13

a21 a31

a22 a32

a23 a33

a11 a21

0, 9 0, 2

a12 a22

1

0

0, 9 0, 2 a11 0

1

a21

1

0

1

0, 9  0,

0, 9  a11

0, 2

 a21

1

0, 9  a11  0, 2a21 ,

det A

(8.2.20)

0, 9 0, 2  a11

 a12

0

1

a21

 a22

0, 9  a11

0, 2  a12

1

0

1

0

a21

1  a22

0

1

0

1

0, 9  a11 1  a22  a21 0, 2  a12 . From (8.2.20), it follows that for any entries of the matrix A1, the system (8.2.1) with (8.2.18) is unstable, since the second-order principal minor is negative.

432

Polynomial and Rational Matrices

8.2.2 Stability of Systems with Pure Delays

The system (8.2.1) is a system with pure delay if Ak ≡ 0 for k = 0,1,…,h−1. In such a case, this system is described by the homogeneous equation

    x_{i+1} = Ah x_{i−h},  i ∈ Z+.        (8.2.21)

From (8.1.9a), it follows that the matrix Ap of the equivalent system without delays

    x_{i+1} = Ap x_i

has the form (with n̄ = (h+1)n)

    Ap = [ 0   0  …  Ah ]
         [ In  0  …  0  ]
         [ ⋮       ⋱  ⋮ ] ∈ R+^{n̄×n̄}.        (8.2.22)
         [ 0   …  In  0 ]

The system (8.2.21) is asymptotically stable if and only if wh(z) ≠ 0 for |z| ≥ 1, where

    wh(z) = det(z^{h+1} In − Ah).        (8.2.23)

From Theorems 8.2.1 and 8.2.2 we have the following theorem.

Theorem 8.2.8. [30, 31]. The positive system (8.2.21) with pure delay is asymptotically stable if and only if one of the following equivalent conditions holds:
1. all coefficients of the polynomial wh(z+1) are positive, where wh(z) has the form (8.2.23),
2. all principal minors of the matrix Āp = I_{n̄} − Ap of the form

    Āp = [ In    0   …  −Ah ]
         [ −In   In  …   0  ]
         [ ⋮         ⋱   ⋮  ]        (8.2.24)
         [ 0     …  −In  In ]

are positive.

Proof. From the structure of the matrix (8.2.24), it follows that all principal minors of Āp of order from 1 to nh are always positive. Moreover, all principal minors of

Positive Linear Systems with Delays

433

Āp of order from nh+1 to (h+1)n are positive if and only if all principal minors of the matrix

    D̄ = In − Ah        (8.2.25)

are positive. From the above, it follows that if the system (8.2.21) with a fixed delay h > 0 (h a positive integer) is asymptotically stable, then the system x_{i+1} = Ah x_{i−p}, where p is any positive integer, is also asymptotically stable. Hence, the asymptotic stability of the positive system (8.2.21) with pure delay does not depend on the delay. ∎

Positivity of all principal minors of (8.2.25) is necessary and sufficient for asymptotic stability of the positive system without delay described by the equation [127]

    x_{i+1} = Ah x_i,  i ∈ Z+.        (8.2.26)

It is well known [127] that the system (8.2.26) is asymptotically stable if and only if all eigenvalues of the matrix Ah have moduli less than 1. From the above and [127], we have the following.

Theorem 8.2.9. [30]. The positive system (8.2.21) with pure delay is asymptotically stable if and only if one of the following equivalent conditions holds:
1. all principal minors of the matrix (8.2.25) are positive,
2. all coefficients of the polynomial

    det[(z+1) In − Ah] = z^n + a_{n−1} z^{n−1} + … + a_0        (8.2.27)

are positive, i.e., a_i > 0 for i = 0,1,…,n−1.

Lemma 8.2.1. [30]. The positive system (8.2.21) is not stable if at least one diagonal entry of the matrix Ah = [a_{h,ij}] is greater than 1, i.e., a_{h,kk} > 1 for some k ∈ {1,2,…,n}.

Example 8.2.3. Consider the positive system (8.2.21) with

    Ah = [ a    0.2  0   ]
         [ 0.4  0.1  0.1 ] .        (8.2.28)
         [ 1    0.3  b   ]

Find the values of the parameters a ≥ 0 and b ≥ 0 for which the system is asymptotically stable. In this case, the matrix (8.2.25) has the form

    D̄ = [ 1−a    −0.2   0    ]
         [ −0.4    0.9  −0.1  ] .        (8.2.29)
         [ −1     −0.3   1−b  ]

Computing all principal minors of (8.2.29), from condition 1 of Theorem 8.2.9 we obtain

    Δ1 = 1 − a > 0,  Δ2 = 0.82 − 0.9a > 0,  Δ3 = 0.77 − 0.87a − 0.82b + 0.9ab > 0.

These inequalities can be written in the form

    a < 0.9111,  0.77 − 0.87a − 0.82b + 0.9ab > 0.        (8.2.30)
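The region (8.2.30) can be probed numerically; a minimal sketch (NumPy, with the test values of a and b assumed only for illustration):

```python
import numpy as np

def leading_minors_positive(M):
    """True iff every leading principal minor of M is positive."""
    return all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, M.shape[0] + 1))

def D_bar(a, b):
    """Matrix (8.2.29), i.e. I - Ah for the Ah of (8.2.28)."""
    Ah = np.array([[a,   0.2, 0.0],
                   [0.4, 0.1, 0.1],
                   [1.0, 0.3, b]])
    return np.eye(3) - Ah

# a = 0.5, b = 0.2 satisfies (8.2.30): 0.77 - 0.435 - 0.164 + 0.09 = 0.261 > 0
print(leading_minors_positive(D_bar(0.5, 0.2)))   # True
# a = 0.95 violates a < 0.9111
print(leading_minors_positive(D_bar(0.95, 0.2)))  # False
```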

Hence, the system is asymptotically stable for a and b satisfying (8.2.30) and for any fixed delay (h = 1,2,…).

8.2.3 Robust Stability of Interval Systems

Let us consider a family of positive discrete-time systems with delays

    x_{i+1} = Σ_{k=0}^{h} Ak x_{i−k},  Ak ∈ [Ak−, Ak+] ⊂ R+^{n×n},        (8.2.31)

where Ak− = [a−_{kij}], Ak+ = [a+_{kij}], a_{kij} ∈ [a−_{kij}, a+_{kij}], a−_{kij} ≤ a+_{kij}, for k = 0,1,…,h.

The family (8.2.31) is called an interval family or an interval system with delays. The interval positive system (8.2.31) is called robustly stable if the system (8.2.1) is asymptotically stable for all Ak ∈ [Ak−, Ak+] (k = 0,1,…,h). If Ak ∈ [Ak−, Ak+], k = 0,1,…,h, then for the equivalent system (8.2.3) we have A ∈ A_I, where A is of the form (8.2.2), A_I = [A−, A+] and

    A− = [ A0−  A1−  …  Ah− ]        A+ = [ A0+  A1+  …  Ah+ ]
         [ In   0    …  0   ]             [ In   0    …  0   ]
         [ ⋮         ⋱  ⋮   ] ,           [ ⋮         ⋱  ⋮   ] .        (8.2.32)
         [ 0    …    In 0   ]             [ 0    …    In 0   ]


Theorem 8.2.10. [31]. The interval positive delay system (8.2.31) is robustly stable if and only if the positive system without delays

    x_{i+1} = A+ x_i,  i ∈ Z+        (8.2.33)

is asymptotically stable or, equivalently, the positive system with delays

    x_{i+1} = A0+ x_i + Σ_{k=1}^{h} Ak+ x_{i−k},  i ∈ Z+

is asymptotically stable.

Proof. The proof follows directly from the fact that all eigenvalues of every nonnegative matrix A ∈ [A−, A+] have moduli less than 1 if and only if all eigenvalues of A+ have moduli less than 1 [31]. ∎

From Theorem 8.2.10 it follows that the robust stability of the interval system (8.2.31) does not depend on the matrices Ak− ∈ R+^{n×n}, k = 0,1,…,h. Therefore, we may assume Ak− = 0 for k = 0,1,…,h. Moreover, if the system (8.2.1) is asymptotically stable for any fixed Ak = Akf ∈ R+^{n×n}, k = 0,1,…,h, then this system is also asymptotically stable for all Ak ∈ [0, Akf], k = 0,1,…,h. From the above and Theorems 8.2.1 and 8.2.2, we have the following theorem and lemma.

Theorem 8.2.11. [31]. The interval positive delay system (8.2.31) is robustly stable if and only if one of the following equivalent conditions holds:
1. all coefficients of the polynomial w+(z+1) are positive, where

    w+(z+1) = det[(z+1)^{h+1} In − Σ_{k=0}^{h} Ak+ (z+1)^{h−k}],        (8.2.34)

2. all principal minors of the matrix Ā+ = I_{n̄} − A+ of the form

    Ā+ = [ In − A0+   −A1+  …  −Ah+ ]
         [ −In         In   …   0   ]
         [ ⋮                ⋱   ⋮   ]        (8.2.35)
         [ 0           …   −In  In  ]

are positive.


Lemma 8.2.2. [31]. The interval positive delay system (8.2.31) is not robustly stable if the positive system (without delays) x_{i+1} = A0+ x_i is unstable, or at least one diagonal entry of the matrix A0+ is greater than 1.

Consider a family of positive discrete-time linear systems with delays

    x_{i+1} = Σ_{k=0}^{h} a_k x_{i−k},  a_k ∈ [a_k−, a_k+],        (8.2.36)

where 0 ≤ a_k− and a_k− ≤ a_k+ for k = 0,1,…,h.

The positive interval system without delays equivalent to (8.2.36) is described by

    x̄_{i+1} = As x̄_i,  As ∈ [As−, As+] ⊂ R+^{n̄×n̄},  n̄ = h + 1.        (8.2.37)

From Theorem 8.2.10 we have the following theorem.

Theorem 8.2.12. The interval positive system (8.2.36) with delays is robustly stable if and only if the positive system without delays

    x̄_{i+1} = As+ x̄_i,  i ∈ Z+,        (8.2.38)

where

    As+ = [ a0+  a1+  …  ah+ ]
          [ 1    0    …  0   ]
          [ ⋮         ⋱  ⋮   ]        (8.2.39)
          [ 0    …    1  0   ]

is asymptotically stable or, equivalently, the positive delay system

    x_{i+1} = Σ_{k=0}^{h} a_k+ x_{i−k},  i ∈ Z+

is asymptotically stable, that is,

    Δ_{n̄} = 1 − Σ_{k=0}^{h} a_k+ > 0.        (8.2.40)
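Condition (8.2.40) reduces the robust stability test for the scalar interval system (8.2.36) to a single inequality on the upper bounds of the interval coefficients; a minimal sketch (the coefficient values are assumed only for illustration):

```python
def robustly_stable_scalar(a_plus):
    """(8.2.40): robustly stable iff 1 - sum of the upper bounds a_k^+ > 0."""
    return 1 - sum(a_plus) > 0

print(robustly_stable_scalar([0.3, 0.2, 0.4]))  # True  (sum 0.9 < 1)
print(robustly_stable_scalar([0.6, 0.5]))       # False (sum 1.1 > 1)
```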


Let us consider the interval positive system with pure delay

    x_{i+1} = Ah x_{i−h},  Ah ∈ [Ah−, Ah+] ⊂ R+^{n×n}.        (8.2.41)

Theorem 8.2.13. [30, 31]. The interval positive system (8.2.41) with pure delay is robustly stable if and only if the positive delay system

    x_{i+1} = Ah+ x_{i−h},  i ∈ Z+        (8.2.42)

is asymptotically stable or, equivalently, the positive system without delays

    x_{i+1} = Ah+ x_i,  i ∈ Z+        (8.2.43)

is asymptotically stable.

From Theorem 8.2.13 it follows that the robust stability of the interval system (8.2.41) does not depend on the matrix Ah− ∈ R+^{n×n}. Therefore, we may assume Ah− = 0. From the above and Theorem 8.2.9, we have the following theorem.

Theorem 8.2.14. [30, 31]. The interval positive system (8.2.41) with pure delay is robustly stable if and only if one of the following equivalent conditions holds:
1. all principal minors of the matrix

    D̄+ = In − Ah+        (8.2.44)

are positive,
2. all coefficients of the polynomial

    det[(z+1) In − Ah+] = z^n + â_{n−1} z^{n−1} + … + â_0        (8.2.45)

are positive.

Lemma 8.2.3. [30, 31]. The positive interval system (8.2.41) is not robustly stable if at least one diagonal entry of the matrix Ah+ = [a+_{h,ij}] is greater than 1, i.e., a+_{h,kk} > 1 for some k ∈ {1,2,…,n}.
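By Theorem 8.2.13, robust stability of (8.2.41) reduces to a test on the single matrix Ah+. The sketch below (NumPy; the interval upper bound is assumed only for illustration) uses the equivalent spectral-radius form of the stability condition for the delay-free system (8.2.43).

```python
import numpy as np

def robustly_stable_pure_delay(Ah_plus):
    """Theorem 8.2.13: the interval system x(i+1) = Ah x(i-h), Ah in [Ah-, Ah+],
    is robustly stable iff the spectral radius of Ah+ is less than 1."""
    return np.max(np.abs(np.linalg.eigvals(Ah_plus))) < 1

# sample upper-bound matrix, assumed for illustration (eigenvalues 0.5 and 0.2)
Ah_plus = np.array([[0.3, 0.2],
                    [0.1, 0.4]])
print(robustly_stable_pure_delay(Ah_plus))  # True
```

A matrix with a diagonal entry greater than 1 (cf. Lemma 8.2.3) necessarily fails this test, since the spectral radius of a nonnegative matrix is at least its largest diagonal entry.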

8.3 Reachability and Minimum Energy Control

Consider the positive discrete-time linear system (8.1.1) for h = q with the initial conditions (8.1.2). The considerations for h ≠ q are similar.

Definition 8.3.1. A state xf ∈ R+^n is called reachable in N steps if there exists a sequence of inputs ui ∈ R+^m, i = 0,1,…,N−1, that transfers the system (8.1.1) from zero initial conditions (8.1.2) to the state xf.

Definition 8.3.2. If every state xf ∈ R+^n is reachable in N steps, then the system is called reachable in N steps.

Definition 8.3.3. If for every state xf ∈ R+^n there exists a natural number N such that the state xf is reachable in N steps, then the system is called reachable.

Recall that a set P ⊂ R^n is called a cone if the following implication holds: if x ∈ P, then αx ∈ P for every α ∈ R+. The cone P is called convex if for any x1, x2 ∈ P every point of the line segment x = (1−λ)x1 + λx2, 0 ≤ λ ≤ 1, belongs to P. The cone P is called solid if its interior contains a sphere K(x, r) with centre at the point x and radius r.

Theorem 8.3.1. The set of reachable states of the positive system (8.1.1) is a positive convex cone. This cone is solid if and only if there exists an N ∈ Z+ such that the rank of the reachability matrix

    R_N = [Ψ(N−1), Ψ(N−2), …, Ψ(1), Ψ(0)]        (8.3.1)

is equal to n, where

    Ψ(i) = Σ_{k=0}^{h} Φ(i−k) Bk,        (8.3.2)

and Φ(i) is the state-transition matrix.

Proof. For x_{−i} = 0 (i = 0,1,…,h), u_{−j} = 0 (j = 1,2,…,q) and i = N > 0, the solution of (8.1.1a) has the form

    x_N = Σ_{j=0}^{N−1} Σ_{k=0}^{h} Φ(N−1−k−j) Bk uj = R_N u_0^N,        (8.3.3)

where R_N has the form (8.3.1) with …

    C = [0  0  …  1].        (8.4.16)

Let

    b_j^0 = [b_{1j}^0, b_{2j}^0, …, b_{nj}^0]^T,  b_j^1 = [b_{1j}^1, b_{2j}^1, …, b_{nj}^1]^T,  j = 1,…,m,        (8.4.17)

be the j-th columns of the matrices B0 and B1, respectively. Then from (8.4.6) and (8.4.14), we have






    C Adj[In z^2 − A0 z − A1](b_j^0 z + b_j^1) = [1  z^2  …  z^{2n−2}] [ b_{1j}^0 z + b_{1j}^1 ]
                                                                       [ b_{2j}^0 z + b_{2j}^1 ]
                                                                       [          ⋮            ]
                                                                       [ b_{nj}^0 z + b_{nj}^1 ]        (8.4.18)

        = n_{j,2n−1} z^{2n−1} + n_{j,2n−2} z^{2n−2} + … + n_{j1} z + n_{j0},  j = 1,…,m.

Comparing the coefficients at the same powers of z in (8.4.18), we obtain

    b_{1j}^1 = n_{j0},  b_{1j}^0 = n_{j1},  b_{2j}^1 = n_{j2},  b_{2j}^0 = n_{j3},  …,  b_{nj}^1 = n_{j,2n−2},  b_{nj}^0 = n_{j,2n−1}

for j = 1,…,m, and

    B0 = [ n_{11}      n_{21}      …  n_{m1}     ]        B1 = [ n_{10}      n_{20}      …  n_{m0}     ]
         [ n_{13}      n_{23}      …  n_{m3}     ]             [ n_{12}      n_{22}      …  n_{m2}     ]
         [ ⋮           ⋮           ⋱  ⋮          ] ,           [ ⋮           ⋮           ⋱  ⋮          ] .        (8.4.19)
         [ n_{1,2n−1}  n_{2,2n−1}  …  n_{m,2n−1} ]             [ n_{1,2n−2}  n_{2,2n−2}  …  n_{m,2n−2} ]

Theorem 8.4.1. There exists a positive minimal realisation (8.4.3) of T(z) if the following conditions are satisfied:
1. T(∞) = lim_{z→∞} T(z) ∈ R+^{1×m},
2. the conditions

    n_{jk} ≥ 0,  j = 1,…,m,  k = 0,1,…,2n−1,        (8.4.20)
    a_k ≥ 0,  k = 0,1,…,2n−1        (8.4.21)

hold.

Proof. Condition 1 implies D ∈ R+^{1×m}. If the condition (8.4.21) is satisfied, then the matrices A0 and A1 of the forms (8.4.10) have nonnegative entries and their dimension is minimal for the given polynomial d(z). If additionally the condition (8.4.20) is satisfied, then B0, B1 ∈ R+^{n×m}. ∎


If the conditions of Theorem 8.4.1 are satisfied, then a positive minimal realisation (8.4.3) of T(z) can be found by the use of the following procedure.

Procedure 8.4.1.
Step 1: Using (8.4.7) and (8.4.8), find D and the strictly proper matrix Tsp(z).
Step 2: Knowing the coefficients a_k, k = 0,1,…,2n−1, of d(z), find the matrices (8.4.10).
Step 3: Knowing the coefficients n_{jk}, j = 1,…,m, k = 0,1,…,2n−1, find the matrices B0, B1.

Example 8.4.1. Given the transfer matrix

    T(z) = N(z)/d(z) = [2z^6 − 3z^5 + 2z^4 − z^3 − 4z^2 − z − 3,  z^6 − 2z^5 + z^4 − z^3 − z − 1] / (z^6 − 2z^5 − z^3 − 2z^2 − z − 2),        (8.4.22)

find a positive minimal realisation (8.4.3). Using the procedure, we obtain the following.

Step 1: From (8.4.7), we have

    D = lim_{z→∞} T(z) = [2  1]        (8.4.23)

and

    Tsp(z) = T(z) − D = [z^5 + 2z^4 + z^3 + z + 1,  z^4 + 2z^2 + 1] / (z^6 − 2z^5 − z^3 − 2z^2 − z − 2).        (8.4.24)

Step 2: Taking into account that a0 = a2 = a5 = 2, a1 = a3 = 1, a4 = 0 and using (8.4.10), we obtain

    A0 = [ 0 0 a1 ]   [ 0 0 1 ]        A1 = [ 0 0 a0 ]   [ 0 0 2 ]
         [ 0 0 a3 ] = [ 0 0 1 ] ,           [ 1 0 a2 ] = [ 1 0 2 ] .        (8.4.25)
         [ 0 0 a5 ]   [ 0 0 2 ]             [ 0 1 a4 ]   [ 0 1 0 ]

In this case, the third row R3(z) of the matrix Adj[In z^2 − A0 z − A1] has the form


    R3(z) = [1  z^2  z^4].        (8.4.26)

Step 3: In this case, the equality (8.4.18) has the form

    [1  z^2  z^4] [ b_{1j}^0 z + b_{1j}^1 ]
                  [ b_{2j}^0 z + b_{2j}^1 ] = n_{j5} z^5 + n_{j4} z^4 + n_{j3} z^3 + n_{j2} z^2 + n_{j1} z + n_{j0},  j = 1, 2,        (8.4.27)
                  [ b_{3j}^0 z + b_{3j}^1 ]

where

    n15 = 1, n14 = 2, n13 = 1, n12 = 0, n11 = 1, n10 = 1,
    n25 = 0, n24 = 1, n23 = 0, n22 = 2, n21 = 0, n20 = 1.

From (8.4.27), we have

    b_{11}^1 = 1, b_{11}^0 = 1, b_{21}^1 = 0, b_{21}^0 = 1, b_{31}^1 = 2, b_{31}^0 = 1,
    b_{12}^1 = 1, b_{12}^0 = 0, b_{22}^1 = 2, b_{22}^0 = 0, b_{32}^1 = 1, b_{32}^0 = 0,

and

    B0 = [ b_{11}^0  b_{12}^0 ]   [ 1 0 ]        B1 = [ b_{11}^1  b_{12}^1 ]   [ 1 1 ]
         [ b_{21}^0  b_{22}^0 ] = [ 1 0 ] ,           [ b_{21}^1  b_{22}^1 ] = [ 0 2 ] ,    C = [0  0  1].        (8.4.28)
         [ b_{31}^0  b_{32}^0 ]   [ 1 0 ]             [ b_{31}^1  b_{32}^1 ]   [ 2 1 ]
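The realisation obtained in this example can be cross-checked numerically: the transfer matrix recovered from the matrices in (8.4.23), (8.4.25) and (8.4.28) should agree with (8.4.22) at any point that is not a pole. A sketch (NumPy; the test point z is arbitrary):

```python
import numpy as np

A0 = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 2]], dtype=float)
A1 = np.array([[0, 0, 2], [1, 0, 2], [0, 1, 0]], dtype=float)
B0 = np.array([[1, 0], [1, 0], [1, 0]], dtype=float)
B1 = np.array([[1, 1], [0, 2], [2, 1]], dtype=float)
C  = np.array([[0, 0, 1]], dtype=float)
D  = np.array([[2, 1]], dtype=float)

def T_realization(z):
    """T(z) = C (I z^2 - A0 z - A1)^(-1) (B0 z + B1) + D."""
    H = np.eye(3) * z**2 - A0 * z - A1
    return C @ np.linalg.solve(H, B0 * z + B1) + D

def T_given(z):
    """Transfer matrix (8.4.22)."""
    d  = z**6 - 2*z**5 - z**3 - 2*z**2 - z - 2
    n1 = 2*z**6 - 3*z**5 + 2*z**4 - z**3 - 4*z**2 - z - 3
    n2 = z**6 - 2*z**5 + z**4 - z**3 - z - 1
    return np.array([[n1, n2]]) / d

z = 3.7  # arbitrary test point away from the poles
print(np.allclose(T_realization(z), T_given(z)))  # True
```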

The desired positive minimal realisation of (8.4.22) is given by (8.4.23), (8.4.25) and (8.4.28).

Up to now, the degree of the polynomial d(z) has been even, equal to 2n. Now let us assume that the degree of the denominator is odd. Consider the system with one delay in the state and two delays in the control

    x_{i+1} = A0 x_i + A1 x_{i−1} + B0 u_i + B1 u_{i−1} + B2 u_{i−2},        (8.4.29a)
    y_i = C x_i + D u_i,  i ∈ Z+,        (8.4.29b)

where x_i ∈ R^n, u_i ∈ R^m, y_i ∈ R, A_k ∈ R^{n×n}, k = 0,1, B_j ∈ R^{n×m}, j = 0,1,2, C ∈ R^{1×n}, D ∈ R^{1×m}.

The matrix transfer function of (8.4.29) has the form

    T(z) = C[In z − A0 − A1 z^{−1}]^{−1}(B0 + B1 z^{−1} + B2 z^{−2}) + D
         = C Adj[In z^2 − A0 z − A1](B0 z^2 + B1 z + B2) / (z det[In z^2 − A0 z − A1]) + D
         = N′(z)/d′(z) + D,        (8.4.30)

where

    N′(z) = C Adj[In z^2 − A0 z − A1](B0 z^2 + B1 z + B2) = [n′_{j,2n} z^{2n} + n′_{j,2n−1} z^{2n−1} + … + n′_{j1} z + n′_{j0}],  j = 1,…,m,        (8.4.31)

    d′(z) = z det[In z^2 − A0 z − A1] = z^{2n+1} − a_{2n−1} z^{2n} − … − a_1 z^2 − a_0 z = z^{2n+1} − Σ_{i=1}^{2n} d′_i z^i,  d′_i = a_{i−1},  d′_0 = 0,

and the coefficients a_k, k = 0,1,…,2n−1, are those of the polynomial d(z) defined by (8.4.6). Knowing the coefficients d′_i = a_{i−1}, i = 1,…,2n, of d′(z), we may find the matrices A0 and A1 of the form (8.4.10). Choosing the matrix C of the form (8.4.16), similarly to the previous case, we obtain

    C Adj[In z^2 − A0 z − A1](b_j^0 z^2 + b_j^1 z + b_j^2)
        = [1  z^2  …  z^{2n−2}] [ b_{1j}^0 z^2 + b_{1j}^1 z + b_{1j}^2 ]
                                [ b_{2j}^0 z^2 + b_{2j}^1 z + b_{2j}^2 ]
                                [               ⋮                      ]
                                [ b_{nj}^0 z^2 + b_{nj}^1 z + b_{nj}^2 ]        (8.4.32)
        = n′_{j,2n} z^{2n} + n′_{j,2n−1} z^{2n−1} + … + n′_{j1} z + n′_{j0},  j = 1,…,m,

where b_j^k is the j-th column of the matrix Bk, k = 0,1,2. Comparing the coefficients at the same powers of z in (8.4.32), we obtain the following 2n+1 equalities:

    b_{1j}^2 = n′_{j0},  b_{1j}^1 = n′_{j1},  b_{1j}^0 + b_{2j}^2 = n′_{j2},  b_{2j}^1 = n′_{j3},  b_{2j}^0 + b_{3j}^2 = n′_{j4},  b_{3j}^1 = n′_{j5},  …,
    b_{n−1,j}^0 + b_{nj}^2 = n′_{j,2n−2},  b_{nj}^1 = n′_{j,2n−1},  b_{nj}^0 = n′_{j,2n}        (8.4.33)

with 3n unknown entries b_{ij}^0, b_{ij}^1, b_{ij}^2, i = 1,…,n, j = 1,…,m, of the matrices B0, B1, B2. Note that we may choose arbitrarily n−1 entries of the matrix B0, for example b_{ij}^0 = 0 for i = 1,…,n−1, and find the remaining nonnegative entries of the matrices from (8.4.33). Therefore, the following theorem has been proved.

Theorem 8.4.2. There exists a positive minimal realisation (8.4.3) of the proper transfer matrix

    T(z) = N′(z)/d′(z) + D        (8.4.34)

with N′(z) and d′(z) defined by (8.4.31) if the following conditions are satisfied:
1. T(∞) = lim_{z→∞} T(z) ∈ R+^{1×m},
2. the coefficients of N′(z) and d′(z) satisfy the conditions

    n′_{jk} ≥ 0,  j = 1,…,m,  k = 0,1,…,2n,        (8.4.35)
    d′_i ≥ 0,  i = 1,…,2n,  and d′_0 = 0.        (8.4.36)

To find a positive minimal realisation (8.4.3) of (8.4.34), Procedure 8.4.1 with slight modifications can be used.

Example 8.4.2. Given the transfer matrix

    T(z) = [z^5 − z^4 − 3z^3 + 1,  2z^5 − 4z^4 − 5z^3 − 3z^2 + 2] / (z^5 − 2z^4 − 3z^3 − 2z^2 − z),        (8.4.37)

find the positive minimal realisation (8.4.3). Using the procedure, we obtain the following.

Step 1: From (8.4.7), we have

    D = lim_{z→∞} T(z) = [1  2]        (8.4.38)

and

    Tsp(z) = T(z) − D = N′(z)/d′(z) = [z^4 + 2z^2 + z + 1,  z^3 + z^2 + 2z + 2] / (z^5 − 2z^4 − 3z^3 − 2z^2 − z).        (8.4.39)

Step 2: Taking into account that a0 = 1, a1 = 2, a2 = 3, a3 = 2 and using (8.4.10), we obtain

    A0 = [ 0 a1 ]   [ 0 2 ]        A1 = [ 0 a0 ]   [ 0 1 ]
         [ 0 a3 ] = [ 0 2 ] ,           [ 1 a2 ] = [ 1 3 ] .        (8.4.40)

Step 3: In this case, the equality (8.4.32) has the form

    [1  z^2] [ b_{1j}^0 z^2 + b_{1j}^1 z + b_{1j}^2 ] = n′_{j4} z^4 + n′_{j3} z^3 + n′_{j2} z^2 + n′_{j1} z + n′_{j0},  j = 1, 2,        (8.4.41)
             [ b_{2j}^0 z^2 + b_{2j}^1 z + b_{2j}^2 ]

and

    n′_{14} = 1, n′_{13} = 0, n′_{12} = 2, n′_{11} = 1, n′_{10} = 1,
    n′_{24} = 0, n′_{23} = 1, n′_{22} = 1, n′_{21} = 2, n′_{20} = 2.

From (8.4.41), we have

    b_{11}^2 = 1, b_{11}^1 = 1, b_{11}^0 = 0, b_{21}^2 = 2, b_{21}^1 = 0, b_{21}^0 = 1,
    b_{12}^2 = 2, b_{12}^1 = 2, b_{12}^0 = 0, b_{22}^2 = 1, b_{22}^1 = 1, b_{22}^0 = 0,

and

    B0 = [ 0 0 ]        B1 = [ 1 2 ]        B2 = [ 1 2 ]
         [ 1 0 ] ,           [ 0 1 ] ,           [ 2 1 ] ,    C = [0  1].        (8.4.42)

456

Polynomial and Rational Matrices

Remark 8.4.2. Note that the role of the delays in the control and output of the system can be interchanged.

8.5 Realisation Problem for Positive Continuous-time Systems with Delays 8.5.1 Problem Formulation Consider the multi-variable continuous-time system with h delays in state and q delays in control x (t )

q

h

¦ A x(t  id )  ¦ B u(t  jd ), i

i 0

y (t )

j

(8.5.1)

j 0

Cx(t )  Du (t ),

where x(t) n, u(t) m, y(t) p are the state, input and output vectors, respectively, and Ai nun, i = 0,1,…,h, Bj num, j = 0,1,…,q, C pun, D pum and d > 0 is a delay. The transfer matrix of the system (8.5.1) is given by

T( s, w)

C [I m s  A 0  A1w  !  A h wh ]1

u[B 0  B1w  !  B q wq ]  D, w

(8.5.2)

e  hs .

Let Mn be the set of nun Metzler matrices. Definition 8.5.1. The matrices A 0  M n , A i   nun , i 1,.., h, B j   num , j C   pun , D   pum

0,1,..., q,

(8.5.3)

are called a positive realisation of a given transfer matrix T(s, w) if they satisfy the equality (8.5.2). A realisation is called minimal if the dimension nun of matrices Ai, i = 0,1,…,h, is minimal among all realisations of T(s, w) The positive realisation problem can be stated as follows. Given a proper transfer matrix T(s, w), find a positive realisation (8.5.3) of T(s, w). Sufficient conditions for solvability of the problem will be established and a procedure for the computation of a positive minimal realisation will be proposed below.

Positive Linear Systems with Delays

457

8.5.2 Problem Solution The transfer matrix (8.5.2) can be rewritten in the form

T( s, w)

C Adj H ( s, w) B 0  B1w  "  B q wq det H ( s, w)

D

(8.5.4)

N( s, w)  D, d ( s, w)

where H ( s, w) [I m s  A 0  A1w  !  A h wh ],



(8.5.5)



N(s, w) C Adj H(s, w) B 0  B1w  "  B q w , q

(8.5.6)

d ( s, w) det H(s, w). From (8.5.4), we have D

lim T( s, w) ,

(8.5.7)

s of

since lim H 1 ( s, w) 0. s of

The strictly proper part of T(s, w) is given by Tsp ( s, w)

T( s, w)  D

N( s, w) . d ( s, w)

(8.5.8)

Therefore, the positive realization problem has been reduced to finding matrices A0  M n , Ak 

mum 

,k

1,...,q, B j 

m 

, j 1, ! , q, C 

pun 

(8.5.9)

for a given strictly proper transfer matrix (8.5.8). Lemma 5.8.1. If

A0

ª0 «1 « «0 « «# «0 ¬

0 ! 0 0 ! 0

a00 º a01 »» 1 ! 0 a02 » , A i » # % # # » 0 ! 1 a0 n1 »¼

ª0 «0 « «# « «¬0

0 " 0 0 " 0

ai 0 º ai1 »» , i 1,..., h, (8.5.10) # % # # » » 0 " 0 ai n1 »¼

458

Polynomial and Rational Matrices

then d ( s,w)

det[I n s  A 0  A1w  "  A h wh ]

(8.5.11)

s n  d n1s n1  d n2 s n2  !  d1s  d 0

where dj

d j ( w)

ah , j wh  ah1, j wh1  !  a1 j w  a0 j , j

0,1,..., n  1 . (8.5.12)

Proof. Expansion of the determinant with respect to the n-th column yields

det [I n s  A 0  A1w  !  A h wh ]

d 0

0

1

s

" "

0

 d1

0

1 "

0

d 2

# 0 0

# 0 0

s

0

% # # " s  d n 2 " 1 s  d n1

s n  d n1s n1  d n2 s n2  !  d1s  d 0 .

Ŷ Remark 8.5.1. There exist many different matrices A0,A1,…,Ah giving the same desired polynomial d(s, w) [164,168,166,171, 173]. Remark 8.5.2. The matrix A0 is a Metzler matrix and the matrices A1,…,Ah have nonnegative entries if and only if the coefficients aij of the polynomial d(s, w) are nonnegative, except a0,n-1, which can be arbitrary. Remark 8.5.3. The dimension nun of matrices (8.5.10) is the smallest possible one for the given d(s, w). Lemma 8.5.2. If the matrices Ai, i=0,1,…,h, have the form (8.5.10), then the n-th row of the adjoint matrix Adj H(s, w) has the form R n ( s ) [1 s ! s n1 ] .

Proof. Taking into account that

Adj H( s, w) H( s, w)

I n d ( s, w) ,

(8.5.13)

Positive Linear Systems with Delays

459

it is easy to verify that R n ( s ) H ( s, w) [0 ! 0 1] d ( s, w) .

(8.5.14) Ŷ

The strictly proper matrix Tsp(s, w) can always be written in the form

Tsp ( s, w)

ª N1 ( s, w) º « » « d1 ( s, w) » « », # « » « N p ( s, w) » « d ( s, w) » ¬ p ¼

d k ( s, w)

s nk  d nk 1s nk 1  !  d1s  d 0 , k

(8.5.15)

where

di

ahi ii whi  !  a1ii w  a0i i , i

d i ( w)

1,..., p,

(8.5.16)

0,1,..., nk  1

is the least common denominator of the k-th row of Tsp(s, w) and N k ( s, w) [nk 1 ( s, w),..., nkm ( s, w)], k nk 1 nk 1 kj

nkj ( s, w) i kj

n

n

s

! a w  a , 1 1j

n w ! n w  n , i iq kj

i1 kj

q

i0 kj

0 0j

1,..., p, j

(8.5.17)

0,1,..., m,

0,1,..., nk  1.

By Lemma 8.5.1 we may associate to the polynomial (8.5.16) the matrices

ª0 0 « «1 0 A k 0 «0 1 « «# # «0 0 ¬ k 1,.., p, i

k º a00 k » ! 0 a01 » k » ! 0 a02 , A ki » % # # » ! 1 a0k nk 1 »¼ 1,..., hk ,

! 0

ª0 « «0 «# « ¬«0

aik0 º » aik1 » , # % # # » » 0 " 0 aiknk 1 ¼»

0 " 0 0 " 0

(8.5.18)

satisfying the condition d k ( s, w)

det [I nk s  A k 0  A k 1w  !  A khk whk ], k

1,.., p .

(8.5.19)

460

Polynomial and Rational Matrices

Let A0 Ai

Bk

C

nun

block diag [ A10 ! A p 0 ] 

nun

block diag [ A1i ! A pi ]  ª b11k " b1km º « » k « # % # » , bij k » «bpk1 " bpm ¬ ¼

ª bijk1 º « » « # », k «bijkni » ¬ ¼

block diag[c1 ! c p ], c

k

, (n

(8.5.20)

n1  "  n p ),

0,1,..., q; i 1,..., p; j 1,.., m, (8.5.21)

[0 ! 0 1] 

1unk

, k

1,..., p .

The number of delays q in control is equal to the degree of the polynomial matrix N(s, w) in variable w. From (8.5.8), (8.5.17), (8.5.22), and (8.5.24)( 8.5.26), we obtain for the j-th column of Tsp(s, w) Tspj (s, w) CH 1 (s, w)[B 0  B1w  !  Bq w q ] j

^

1 block diag[c1 ! c p ] §¨ block diag ª¬I n1 s  A10  A11w  !  A1h1 w h1 º¼ ,... © ª b10j  b11j w  !  b1qj w q º « » h # ...,[I np s  A p 0  A p1w  !  A php w p ]1 « » «bpj0  b1pj w  !  bpjq w q » ¬ ¼

`

­° 1 ½ 1 n 1 ° block diag ® [1 s ! s n1 1 ],!, [1 s ! s p ]¾ d p ( s, w ) ¯° d1 (s, w) ¿° ª b10j  b11j w  !  b1qj w q º « » u« # » «bpj0  b1pj w  !  bpjq w q » ¬ ¼ ª (b1qnj 1 w q  !  b11nj 1 w  b10jn1 )s n1 1  !  b1qj1w q  !  b111j w  b101j º « » d1 (s, w) « » « » # « » « (bqnp w q  !  b1n p w  b 0 n p )s n p 1  !  b q1w q  !  b11w  b01 » pj pj pj pj pj « pj » d p (s, w) «¬ »¼ ª n1 j (s, w) º « » « d1 (s, w) » « » , j 1,..., m, # « » « n pj (s, w) » « d (s, w) » ¬ p ¼

(8.5.23)

Positive Linear Systems with Delays

461

and nij(s, w) are given by (8.5.17). A comparison of the coefficients at the same powers of s and w of the equality (8.5.23) yields b101j

n100j , b111j

n101j ,..., b1qj1

n1n1j 1,0 , b11nj 1

! , b10jn1

n10jq ,...

n1n1j 1,1 ,..., b1qnj 1

n1n1j 1,q

""""""""""""""""""" b

01 pj

!, b

00 pj

11 pj

n ,b 0 n1 pj

n

01 pj

n ,..., b

n p 1,0 pj

1n p pj

,b

q1 pj

n

(8.5.24)

0q pj

n ,...

n p 1,1 pj

qn

,..., bpj p

n 1,q

n pjp

for j = 1,…,m.

Theorem 8.5.1. There exists a positive realisation (8.5.3) of T(s, w) if:
1. T(∞) = lim_{s→∞} T(s, w) ∈ R+^{p×m},        (8.5.25)
2. the coefficients of d_k(s, w), k = 1,…,p, are nonnegative, except a^k_{0,n_k−1}, which can be arbitrary, i.e.,

    a^k_{ij} ≥ 0,  i = 1,…,h_k;  j = 0,1,…,n_k−1;  k = 1,…,p,        (8.5.26)

3. the coefficients of N_j(s, w), j = 1,…,m, are nonnegative, i.e.,

    n^k_{ij} ≥ 0  for i = 1,…,p;  j = 1,…,m;  k = 0,1,…,q.        (8.5.27)

Proof. The condition (8.5.25) implies D ∈ R+^{p×m}. If the conditions (8.5.26) are satisfied, then the matrices (8.5.18) have nonnegative entries, except a^k_{0,n_k−1}, k = 1,…,p, which can be arbitrary. In this case, A0 ∈ Mn and Ai ∈ R+^{n×n}, i = 1,…,h. If additionally the conditions (8.5.27) are satisfied, then from (8.5.24) it follows that Bk ∈ R+^{n×m}, k = 0,1,…,q. The matrix C of the form (8.5.22) is independent of T(s, w) and always has nonnegative entries. ∎

Theorem 8.5.2. The realisation (8.5.3) of T(s, w) is minimal if the polynomials d_1(s, w), …, d_p(s, w) are relatively prime (coprime).

Proof. If the polynomials d_1(s, w), …, d_p(s, w) are relatively prime, then d(s, w) = d_1(s, w)⋯d_p(s, w) and by Remark 8.5.3 the matrices (8.5.20) have minimal dimensions. ∎


If the conditions of Theorem 8.5.2 are satisfied, then a positive minimal realisation (8.5.3) of T(s, w) can be found by the use of the following procedure.

Procedure 8.5.1.
Step 1: Using (8.5.7) and (8.5.8), find the matrix D and the strictly proper matrix Tsp(s, w).
Step 2: Knowing the coefficients of d_k(s, w), k = 1,…,p, find the matrices (8.5.18) and (8.5.20).
Step 3: Knowing the coefficients of N_j(s, w), j = 1,…,m, and using (8.5.24) and (8.5.21), find the matrices Bi, i = 0,1,…,q, and the matrix C.

Example 8.5.1. Using the above procedure, find a positive realisation (8.5.3) of the transfer matrix

    T(s, w) = [ (s^2 − (w^2 − w + 2)s − (w^2 + w)) / (s^2 − (w^2 + 2)s − (2w^2 + w + 1)),   (s^2 − s − (2w^2 + 1)) / (s^2 − (w^2 + 2)s − (2w^2 + w + 1)) ]
              [ (w^2 + 1) / (s − 2w^2 − w − 1),                                             (2s − 2w^2 − 2) / (s − 2w^2 − w − 1)                         ] .        (8.5.28)

It is easy to verify that the assumptions of Theorem 8.5.2 are satisfied. Using Procedure 8.5.1, we obtain the following.

Step 1: From (8.5.7) and (8.5.8), we have

    D = lim_{s→∞} T(s, w) = [ 1 1 ]
                            [ 0 2 ]        (8.5.29)

and

    Tsp(s, w) = T(s, w) − D = [ (ws + w^2 + 1) / (s^2 − (w^2 + 2)s − (2w^2 + w + 1)),   ((w^2 + 1)s + w) / (s^2 − (w^2 + 2)s − (2w^2 + w + 1)) ]
                              [ (w^2 + 1) / (s − 2w^2 − w − 1),                          2(w^2 + w) / (s − 2w^2 − w − 1)                       ] .        (8.5.30)

Step 2: Taking into account that

    d_1(s, w) = s^2 − (w^2 + 2)s − (2w^2 + w + 1),  d_2(s, w) = s − 2w^2 − w − 1,

and using (8.5.18) and (8.5.20), we obtain


    A0 = [ A_{10}  0      ]   [ 0 1 0 ]        A1 = [ A_{11}  0      ]   [ 0 1 0 ]        A2 = [ A_{12}  0      ]   [ 0 2 0 ]
         [ 0       A_{20} ] = [ 1 2 0 ] ,           [ 0       A_{21} ] = [ 0 0 0 ] ,           [ 0       A_{22} ] = [ 0 1 0 ] .        (8.5.31)
                              [ 0 0 1 ]                                  [ 0 0 1 ]                                  [ 0 0 2 ]

Step 3: In this case,

    n_{11}(s, w) = ws + w^2 + 1,  n_{12}(s, w) = (w^2 + 1)s + w,  n_{21}(s, w) = w^2 + 1,  n_{22}(s, w) = 2(w^2 + w).

Using (8.5.24) and (8.5.21), we obtain

    B0 = [ 1 0 ]        B1 = [ 0 1 ]        B2 = [ 1 0 ]
         [ 0 1 ] ,           [ 1 0 ] ,           [ 0 1 ] ,    C = [ 0 1 0 ]
         [ 1 0 ]             [ 0 2 ]             [ 1 2 ]          [ 0 0 1 ] .        (8.5.32)

8.6 Positive Realisations for Singular Multi-variable Discretetime Systems with Delays 8.6.1 Problem Formulation

Consider the discrete-time linear system with q state delays and q input delays described by the equations Ex(i  1)

¦ A q

j

x(i  j )  B j u (i  j ) ,

(8.6.1a)

j 0

y (i )

Cx(i ) i  '  ,

(8.6.1b)

464

Polynomial and Rational Matrices

where x(i) n, u(i) m, y(i) p are the state, input (control) and output vectors respectively, and E, Ak nun, Bk num, k = 0,1,…,q, C pun. It is assumed that det E = 0 and det ª¬ Ez q 1  A 0 z q  A1 z q 1  !  A q º¼ z 0 for some z  (the field of complex numbers).

(8.6.2)

The initial conditions for (8.6.1a) are given by x(i ) 

n

, u ( i ) 

m

for i

(8.6.3)

0,1,..., q.

Let us assume that the matrices E, A0, A1, B0, B1, C have the following canonical forms [81, 127] ª I n1 0 º n un E block diag ª¬E1 , E2 ,..., E p º¼  nun , Ei « » i i, «¬ 0 0 »¼ p

i 1,..., p, n

¦n , i

i 1

Aj a ji A qi

aqi

Bj

bilj C Ci

block diag ¬ª A j1 , A j 2 ,..., A jp ¼º  ª a ji º ni ni 1 , j « a ni »  , a ji  ¬ ji ¼ ª 0 º aqi »  ni uni , aqi «I «¬ ni 1 »¼ ª a1qi º « » ni 1 , j 1,..., q, « # » « aqini 1 » ¬ ¼ j ª b11 " b1jm º « » num , biij # « » j » «bpj1 " bpm ¬ ¼

nun

, A ji

¬ª 0 a ji ¼º 

ni uni

,

1,..., q  1; i 1,..., p, ª aqi º « a ni »  ¬ qi ¼

ni

,

ª bilj º « l ni » , ¬bil ¼

ª bilj º « » « # » , i 1,..., p; l 1,..., n, «bill ni » ¬ ¼ block diag ª¬C1 C2 ! C p º¼  >0 0 ! 1@  1uni , i 1,..., p.

(8.6.4) pun

,

Positive Linear Systems with Delays

465

Definition 8.6.1. The system (8.6.1) is called (internally) positive if for every x(k) +n, u(k) +m, k = 0,1,…,q, and all inputs u(i) +m, i +, we have x(i) +n and y(i) +p for i + Theorem 8.6.1. The system (8.6.1) with matrices of the forms (8.6.4) is positive if and only if akil t 0 for k ni ki

a

bijk 

0,1,..., q; i 1,..., p; l

0, a ! 0 for k ni qi

ni 

0,1,..., ni ,

(8.6.5a)

0,1,..., q  1; i 1,..., p,

for i 1,..., p; j 1,..., m; k

(8.6.5b)

0,1,..., q.

Proof. Let ª xk (i ) º « x (i ) »  ¬ knk ¼

xk (i )

nk

nk 1

, x (i ) 

, i



, k

1,..., p

(8.6.6a)

be the k-th (k = 1,…,p) subvector of x(i) corresponding to the k-th block of (8.6.4) and

A qk b

nk jk

ª 0 «I ¬« nk 2

º 0»  ¼»

ª¬b , ... , b jnk k1

jnk km

( nk 1)u( nk 1)

º¼ , enk

, B jk

j j ¬ªbk 1 , ... , bkm ¼º ,

>0, ... , 0 1@ 

1u( nk 1)

(8.6.6b)

.

Using (8.6.1a), (8.6.4) and (8.6.6), we may write q

q

j 0

j 0

A qk x (i  q)  ¦ a jk x jnk (i  j )  ¦ B jk u(i  j ) ,

x k (i  1)

(8.6.7a)

q

enk xk (i  q )  ¦ b njkk u (i  j ) .

aqknk xknk (i  q )

(8.6.7b)

j 0

If the conditions (8.6.5) are satisfied, then using (8.6.7a), for i=0,1,…,q, and the initial conditions (8.6.3), we may compute xk (i ) 

nk 1 

, for i 1,..., q  1.

Next from (8.6.7b) xknk (q  1) 

and from (8.6.7a)



466

Polynomial and Rational Matrices

xk (q  2) 

nk 1 

.

Continuing the procedure we may find xk (i ) 

nk 

, for i



and k

1,..., p

and from (8.6.1b) y(i) = Cx(i) +p for i +. The necessity follows immediately from the arbitrariness of the initial conditions (8.6.3) and of the input u(i) and can be shown in a similar way as for systems without delays [127]. Ŷ Remark 8.6.1. Using (8.6.6b) we may eliminate xn k from (8.6.7a) and (8.6.1b) and we obtain a standard positive system with delays and advanced arguments in control.

The transfer matrix of (8.6.1) is given by T( z )

C[Ez  A 0  A1 z 1  !  A q z  q ]1 (B 0  B1 z 1  !  B q z  q ) C[E z q1  A 0 z q  A1 z q 1  !  A q ]1 (B 0 z q  B1 z q 1  !  B q ).

(8.6.8)

Definition 8.6.2. Matrices (8.6.4) satisfying (8.6.5a) are called a positive realisation of the transfer matrix T(z) if they satisfy (8.6.8). The realisation is called minimal if the dimension nun of E, Ak, k = 0,1 is minimal among all realisations of T(z). The positive minimal realisation problem can be stated as follows. Given an improper transfer matrix T(z), find a positive (minimal) realisation of T(z) Solvability conditions for the positive (minimal) realizstion problem will be established and a procedure for computation of a positive (minimal) realisation of T(z) will be presented. 8.6.1 Problem Solution To solve the positive realisation problem we shall use the following two lemmas. Lemma 8.6.1. If the matrix E k has the form (8.6.4) and

Positive Linear Systems with Delays

" 0

A0k

ª0 « «0 «# « «0 « ¬«0

aqk º » " 0 a2 qk 1 » % # # » , A1k » " 0 ank 1 » » 0 ¼» " 0

A qk k

ª0 «1 « «0 « «# « «0 «0 ¬

0 " 0 0 " 0

ª0 « «0 «# « «0 « ¬« 0

467

" 0 aqk 1 º » " 0 a2 qk » % # # » , ... , » " 0 ank 2 » » 0 ¼» " 0

º » » 1 " 0 a2( qk 1) » », » # % # # » 0 " 0 a( nk 2)( qk 1) » 0 " 1 1 »¼

(8.6.9)

a0 aqk 1

then dk ( z)

det ª¬Ek z qk 1  A 0 k z qk  !  A qk ,k º¼

z nk  ank 1 z nk 1  !  a1 z  a0 , k

where nk

(8.6.10)

1,..., p.

(nk  1)(qk  1).

Proof. Expansion of the determinant with respect to the ni-th column yields det[Ek z qk 1  A 0 k z qk  !  A qk ,k ] z qk 1 1

!

0

aqk z qk  aqk 1 z qk 1  !  a0

z qk 1 !

0

a2 qk 1 z qk  a2 qk z qk 1  !  aqk 1

0

# 0

# 0

0

0

z  ank 1 z nk

% # qk 1 ! z ! nk 1

#

ank 1 z  ank 2 z qk

1

 !  a1 z  a0 , k

qk 1

 !  a( nk 2)( qk 1)

1 1,..., p.

Lemma 8.6.2. If the matrix E_k has the form (8.6.4) and the matrices A_{ik}, i = 0,1,…,q, have the forms (8.6.9), then the n_k-th row R_{n_k}(z) of the adjoint matrix Adj[E_k z^{q_k+1} − A_{0k} z^{q_k} − … − A_{q_k,k}] has the form

    R_{n_k}(z) = [1  z^{q_k+1}  …  z^{n̄_k}],  k = 1,…,p.        (8.6.11)

Proof. Taking into account that

    Adj[E_k z^{q_k+1} − A_{0k} z^{q_k} − … − A_{q_k,k}] [E_k z^{q_k+1} − A_{0k} z^{q_k} − … − A_{q_k,k}] = I_{n_k} d_k(z),

it is easy to verify that

    R_{n_k}(z) [E_k z^{q_k+1} − A_{0k} z^{q_k} − … − A_{q_k,k}] = [0  …  0  1] d_k(z).  ∎

Ŷ Let a given improper transfer matrix have the form ª n11 ( z ) n ( z) º , ... , 1m « » d1 ( z ) » « d1 ( z ) « », # « » n pm ( z ) » « n p1 ( z ) « d ( z ) , ... , d ( z ) » p ¬ p ¼

T( z )

where nkj ( z )

nkjkj z kj  !  n1kj z  nkj0

dk ( z)

z  akrk 1 z

t

rk

t

rk 1

k

1,..., p; j 1,..., m ,

 !  ak1 z  ak 0 .

(8.6.13a) (8.6.13b)

The number of delays q is equal to q

max (tk  rk ) (k

tk

max tkj , j 1,..., m .

k

1,..., p) ,

(8.6.14)

where j

If the matrices E_k, A_{jk} have the forms (8.6.4), then the minimal n_k is given by the formula

n_k \ge \frac{t_k + 1}{t_k - r_k + 1}, \quad k = 1,\dots,p.   (8.6.15)

The formula (8.6.15) can be justified as follows. If the matrix E_k has a canonical form, then

(n_k - 1)(t_k - r_k + 1) \ge r_k, \quad k = 1,\dots,p.   (8.6.16)

Solving (8.6.16) with respect to n_k, we obtain (8.6.15). Knowing the coefficients of the denominators d_1(z),\dots,d_p(z) of (8.6.12), we may find the matrices A_{jk} of the forms (8.6.4) such that (8.6.10) holds.

Let (B_0 z^q + \cdots + B_q)_j and T_j(z), j = 1,\dots,m, be the j-th columns of the matrix B_0 z^q + \cdots + B_q and of T(z), respectively. Using (8.6.8), (8.6.9) and (8.6.10), we obtain

T_j(z) = C [E z^{q+1} - A_0 z^q - \cdots - A_q]^{-1} (B_0 z^q + \cdots + B_q)_j
= \mathrm{blockdiag}[C_1, \dots, C_p] \, \mathrm{blockdiag}\{[E_1 z^{q_1+1} - A_{01} z^{q_1} - \cdots - A_{q_1,1}]^{-1}, \dots, [E_p z^{q_p+1} - A_{0p} z^{q_p} - \cdots - A_{q_p,p}]^{-1}\} \begin{bmatrix} b_{1j}^{01} z^q + \cdots + b_{1j}^{q1} \\ \vdots \\ b_{pj}^{0 n_p} z^q + \cdots + b_{pj}^{q n_p} \end{bmatrix}
= \mathrm{blockdiag}\Big\{ \frac{1}{d_1(z)} \big[1 \; z^{q_1+1} \; \cdots \; z^{(q_1+1)(n_1-1)}\big], \dots, \frac{1}{d_p(z)} \big[1 \; z^{q_p+1} \; \cdots \; z^{(q_p+1)(n_p-1)}\big] \Big\} \begin{bmatrix} b_{1j}^{01} z^q + \cdots + b_{1j}^{q1} \\ \vdots \\ b_{pj}^{0 n_p} z^q + \cdots + b_{pj}^{q n_p} \end{bmatrix}
= \begin{bmatrix} \dfrac{b_{1j}^{0 n_1} z^{t_{1j}} + b_{1j}^{1 n_1} z^{t_{1j}-1} + \cdots + b_{1j}^{q-1,1} z + b_{1j}^{q1}}{d_1(z)} \\ \vdots \\ \dfrac{b_{pj}^{0 n_p} z^{t_{pj}} + b_{pj}^{1 n_p} z^{t_{pj}-1} + \cdots + b_{pj}^{q-1,1} z + b_{pj}^{q1}}{d_p(z)} \end{bmatrix} = \begin{bmatrix} \dfrac{n_{1j}(z)}{d_1(z)} \\ \vdots \\ \dfrac{n_{pj}(z)}{d_p(z)} \end{bmatrix}, \quad j = 1,\dots,m,   (8.6.17)

where n_{kj}(z), k = 1,\dots,p, are defined by (8.6.13a). Comparing the coefficients at the same powers of z in the numerators of (8.6.17), we obtain

b_{1j}^{0 n_1} = n_{1j}^{t_{1j}}, \quad b_{1j}^{1 n_1} = n_{1j}^{t_{1j}-1}, \;\dots,\; b_{1j}^{q-1,1} = n_{1j}^1, \quad b_{1j}^{q1} = n_{1j}^0,
\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots\dots
b_{pj}^{0 n_p} = n_{pj}^{t_{pj}}, \quad b_{pj}^{1 n_p} = n_{pj}^{t_{pj}-1}, \;\dots,\; b_{pj}^{q-1,1} = n_{pj}^1, \quad b_{pj}^{q1} = n_{pj}^0, \quad j = 1,\dots,m.   (8.6.18)

Theorem 8.6.2. There exists a positive realisation of (8.6.12) if:
1. the coefficients of the denominators (8.6.13b) are nonnegative, i.e.,

a_{ki} \ge 0 \quad \text{for } k = 1,\dots,p; \; i = 0,1,\dots,r_k-1,   (8.6.19)

2. the coefficients of the numerators (8.6.13a) are nonnegative, i.e.,

n_{kj}^t \ge 0 \quad \text{for } k = 1,\dots,p; \; j = 1,\dots,m; \; t = 0,1,\dots,t_{kj}.   (8.6.20)

Proof. If the conditions (8.6.20) are satisfied, then from (8.6.18) it follows that B_j \in \mathbb{R}_+^{n \times m} for j = 0,1,\dots,q. Additionally, if the condition (8.6.19) is satisfied, then by Theorem 8.6.1 the realisation is positive. ∎

Theorem 8.6.3. The realisation of T(z) is minimal if the denominators d_1(z),\dots,d_p(z) are relatively prime (coprime).

Proof. If the denominators are relatively prime, then

d(z) = \det[E z^{q+1} - A_0 z^q - \cdots - A_q] = d_1(z) \cdots d_p(z)

and the matrices E, A_j, j = 0,1,\dots,q, have the minimal possible dimensions. ∎

If the conditions of Theorem 8.6.2 are satisfied, then a positive (minimal) realisation of (8.6.12) can be found by the use of the following procedure.

Procedure 8.6.1.
Step 1: Knowing the degrees t_k of the numerators n_{kj}(z) and r_k of the denominators d_k(z), find the number of delays q from (8.6.14) and the minimal n_k, k = 1,\dots,p, from (8.6.15).
Step 2: Using the coefficients of d_k(z), k = 1,\dots,p, find the matrices A_j, j = 0,1,\dots,q, E and C.
Step 3: Using (8.6.18), find the matrices B_j, j = 0,1,\dots,q.

Remark 8.6.2. The matrices E and C have the canonical forms (8.6.4) and their dimensions depend only on T(z).

Example 8.6.1. Find a positive realisation of the transfer matrix

T(z) = \begin{bmatrix} \dfrac{z^3 + 2z^2 + z + 3}{z^2 - 2z - 1} & \dfrac{3z^3 + 2z + 2}{z^2 - 2z - 1} \\[2mm] \dfrac{z^3 + 2z^2 + z}{z^2 - 3z} & \dfrac{z^2 + 3z}{z^2 - 3z} \end{bmatrix}.   (8.6.21)

It is easy to check that the transfer matrix (8.6.21) satisfies the conditions (8.6.19) and (8.6.20). Using the above procedure, we obtain the following.

Step 1: In this case t_1 = t_2 = 3, r_1 = r_2 = 2. Hence

q = \max_k (t_k - r_k) = 1,

and from (8.6.15) we obtain n_1 = n_2 = 2.

Step 2: Taking into account that d_1(z) = z^2 - 2z - 1 and d_2(z) = z^2 - 3z, we obtain

E = \begin{bmatrix} E_1 & 0 \\ 0 & E_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad A_0 = \begin{bmatrix} A_{01} & 0 \\ 0 & A_{02} \end{bmatrix} = \begin{bmatrix} 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix},

A_1 = \begin{bmatrix} A_{11} & 0 \\ 0 & A_{12} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix}, \quad C = \begin{bmatrix} C_1 & 0 \\ 0 & C_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.   (8.6.22)

Step 3: Using (8.6.18), we obtain

B_0 = \begin{bmatrix} b_{11}^{01} & b_{12}^{01} \\ b_{11}^{02} & b_{12}^{02} \\ b_{21}^{01} & b_{22}^{01} \\ b_{21}^{02} & b_{22}^{02} \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 1 & 3 \\ 1 & 3 \\ 1 & 0 \end{bmatrix}, \quad B_1 = \begin{bmatrix} b_{11}^{11} & b_{12}^{11} \\ b_{11}^{12} & b_{12}^{12} \\ b_{21}^{11} & b_{22}^{11} \\ b_{21}^{12} & b_{22}^{12} \end{bmatrix} = \begin{bmatrix} 3 & 2 \\ 2 & 0 \\ 0 & 0 \\ 2 & 1 \end{bmatrix}.   (8.6.23)

(8.6.24)

472

Polynomial and Rational Matrices

then the numerator and the denominator of the k-th row of the transfer matrix (8.6.12) should be multiplied by z vk , where vk

( nk  1)(qk  1)  rk .

Otherwise the obtained Aj, j=0,1,…,q, do not belong to a positive realisation of (8.6.12). For example, if the given transfer matrix (8.6.12) has the form

T( z )

ª z3  2z 2  z  3 « z2  2z  1 « « z2  2z  1 «¬ z 3

3z 3  2 z  2 º z 2  2 z  1 »» , z 3 » »¼ z 3

(8.6.25)

then for k = 2, we have n2 = 2 q2 = 1 r2 = 1 and v2 = (n2 – 1)(q2 + 1) – r2 = 1. In this case, by multiplying the numerator and denominator of the second row of (8.6.25) by z, we obtain the transfer matrix (8.6.21). The matrices A20 and A12 for the second row of (8.6.25) have the forms A 02

ª0 1º «0 0 » , A12 ¬ ¼

ª0 3º «1 0 » , ¬ ¼

and they do not belong to a positive realisation of (8.6.25).

Appendix Selected Problems of Controllability and Observability of Linear Systems

A.1 Reachability Consider the following discrete-time linear system xi 1 Axi  Bui , i yi Cxi  Dui ,

0, 1, ... ,

where xi n is the state vector, ui A nun, B num, C pun, D pum.

(A.1a) (A.1b) m

the input vector, yi

p

the output vector;

Definition A.1. The system (A.1) (or the pair (A,B)) is called reachable, if for every vector xf n there exists an integer q > 0 and a sequence of inputs {ui, i=0,1,…,q1} such that for x0 = 0, xq = xf. Theorem A.1. The system (A.1) is reachable if and only if one of the following conditions is met 1. rank [B, AB,..., A n1B] n ,

(A.2)

rank [I n z  A, B] n, for all finite z  ,

(A.3)

2.

474

Appendix

3. [I n z  A] and B are left coprime matrices.

(A.4)

Proof. Using the solution i 1

xi

A i x0  ¦ A i k 1Buk k 0

to equation (A.1a) for i = n, x0= 0 and taking into account that xn = xf, we obtain

n 1

xf

¦A

n  k 1

Buk

k 0

ªun1 º « » u ª¬B, A, B,..., A n1B º¼ « n2 » . «# » « » ¬ u0 ¼

(A.5)

From (A.5) it follows that for every xf there exits {ui, i=0,1,…,q1} if and only if the condition (A.2) is met. Let v n be a vector such that vTB = 0 and vTA = zvT for a certain complex variable z. In this case, vT AB

zvT B

0, vT A 2 B

zvT AB

0, ..., vT A n1B

that is, vT ¬ªB, AB, ..., A n1B ¼º

0.

The condition (A.2) thus implies v = 0. Hence from vT > I n z  A, B @ 0

(A.3) follows. From (A.3) it follows that there exists a unimodular matrix U

ª U1 U 2 º «U U »  4¼ ¬ 3

n  mu( n  m )

such that

> Iz  A , B @ U > I n 0 @

[ z] ,

0,

Appendix

475

and

>Iz  A@ U1  BU3

In .

(A.6)

Thus [Iz – A] and B are left coprime matrices. Let u1i

u10i  u11i z  u12i z 2    u1ni2 z n2 (i 1, ..., n),

u3i

u30i  u31i z  u32i z 2    u3ni1 z n1

(A.7)

be the i-th columns of the polynomial matrices U1 nun[z] and U3 mun[z], respectively. Substituting (A.7) into (A.6) and comparing the coefficients by the same powers of the variable z , we obtain Bu30i  Au10i

ei

Bu  u  Au11i 1 3i

0 1i

0 for i 1, ..., n ,

.......................................... n2 3i

Bu

n 1 3i

Bu

n 3 1i

 Au

n 2 1i

0

u u

n2 1i

(A.8)

0

where ei is the i-th column of the identity matrix In. Pre-multiplying the equations in (A.8) successively by A0,A1,A2,….,An1 and adding them up, we obtain Bu30i  ABu31i    A n1Bu3ni1

ei (i 1, ..., n)

and ªu30i º « 1 » u ª¬B, AB,..., A n1B º¼ « 3i » « » « » «¬u3ni1 »¼

ei (i 1, ..., n).

(A.9)

(A.9) implies the condition (A.2). The conditions (A.2), (A.3), and (A.4) are thus equivalent. 

476

Appendix

If the system (A.1) is not reachable, then the set of reachable states from the point x0 = 0 is given by the image of the matrix [B,AB,…,An-1B]. Example A.1. Show that the pair

A

ª0 «0 « «# « «0 ¬« a0

1 0 # 0 a1

0 1 # 0 a2

" 0 º " 0 »» % # », B » " 1 » " an1 ¼»

ª0º «0» « » « #» « » «0» ¬«1 ¼»

(A.10)

is reachable for arbitrary values of the coefficients a0,a1,…,an1. Using (A.3), we obtain

rank > I n z  A, B @

ª z « 0 « rank « # « « 0 «¬ a0

1 z # 0 a1

0 1 # 0  a2

" 0 " 0 % # " 1 " z  an1

0º 0 »» #» » 0» 1 »¼

n,

(A.11)

for all finite z . The last n columns of (A.11) are linearly independent whatever the values of the coefficients a0,a1,…,an-1. Now consider the following continuous-time linear system x y

§ Ax  Bu ¨ x © Cx  Du ,

dx · ¸, dt ¹

(A.12a) (A.12b)

where x = x(t) n is the state vector, u = u(t) m the input vector, y = y(t) output vector; A nun, B num, C pun, D pum.

p

the

Definition D.2. The system (A.12) (or the pair (A,B)) is called reachable if for every vector xf n there exists a time tf > 0 and an input u(t) over the interval [0, tf] such that for x0 = 0, x(tf) = xf. Theorem D.2 The system (A.12) is reachable if and only if one of the following conditions is satisfied:

Appendix

477

1. rank [B, AB, ..., A n1B] n ,

(A.13)

rank [I n s  A, B] n, for all finite s  ,

(A.14)

>I n s  A @

(A.15)

2.

3. and B are left prime.

The proof is similar to that of Theorem A.1. 

A.2. Controllability Definition A.3. The system (A.1) (or the pair (A,B)) is called controllable to zero if for an arbitrary initial state x0 z 0 there exists an integer q > 0 and a sequence of inputs {ui, i=0,1,…,q1} such that xq = 0. Theorem A.3. The system (A.1) is controllable to zero if and only if one of the following conditions is met: 1. Im A n  Im [B, AB,..., A n1B] ,

(A.16)

rank [I  dA, B]

(A.17)

2. n, for all finite d  ,

3. [I  dA] and B are left coprime.

(A.18)

Proof. Using the solution to (A.1), for i = n, xn = 0 we obtain

n 1

A n x0

¦ A nk 1Buk k 0

ªun1 º « » u  ª¬B, AB,..., A n1B º¼ « n2 » . «# » « » ¬ u0 ¼

(A.19)

478

Appendix

From (A.19) it follows that there exists a sequence of inputs {ui, i=0,1,…,q1} for an arbitrary x0 if and only if the condition (A.16) is met. Let v n be a vector such that vTB = 0 and vTA = zvT for a certain variable z. In the same manner as in the proof of Theorem A.1, we obtain vT[B,AB,…,An-1B] = 0. The condition (A.16) implies 0

vT A n

O n vT and thus O

0 or v

0.

Hence the matrix [Inz – A, B] has full row rank n for all finite z z 0, which is equivalent to the conditions (A.17). Analogously to the proof of Theorem A.1 one can show that the condition (A.17) implies (A.18), and the condition (A.18) in turn implies (A.16).  Remark A.1. Each of the conditions (A.13), (A.14) and (A.15) for the system (A.1) with singular A is only a sufficient condition, but not a necessary one for the controllability of the system. If det A z 0, then these conditions are also necessary conditions for the controllability of (A.1). For the system (A.1) with nonsingular A, the conditions of its controllability are equivalent to the conditions of its reachability. Example A.2. The pair of matrices A

ª0 a º «0 0 » , B ¬ ¼

ª1 º «0 » ¬ ¼

(A.20)

is not reachable, since rank > Iz  A, B @

ª z a 1 º rank « » 1, for z ¬0 z 0¼

0.

On the other hand, using (A.17), we obtain rank > I  dA, B @

ª1 da 1 º rank « » ¬0 1 0 ¼

2 for arbitrary a and d .

The pair (A.20) is thus controllable for arbitrary a. Note that in this case the state xf

ª0 º «1 » ¬ ¼

is not reachable from the state x0 = 0, since x0 does not belong to

Appendix

479

ª1 º Im[B, AB] Im « » . ¬0¼

On the other hand, the state x0

ª0 º «1 » ¬ ¼

can be brought to zero by the zero input sequence u0 = u1 = 0, since A2 = 0 for arbitrary a. Definition A.4. The system (A.12) (or the pair (A,B)) is called controllable to zero if for an arbitrary initial state x0 there exists a time tf > 0 and an input u = u(t) over the interval [0, tf]] such that x(tf) = 0. Theorem A.4. The system (A.12) is controllable to zero if and only if one of the conditions (A.16), (A.17), (A.18) of Theorem A.3 is met. The proof of this theorem follows similarly to that of Theorem A.3. 

Using the solution to (A.12a), for x(0) = x0, x(tf) = 0, we obtain

xf

e

At f

tf x0 

³e

0

A (t f W )

tf

Bu (W ) dW

0 and x0

 ³ e AW Bu (W ) dW 0

since eAt is a nonsingular matrix regardless of the matrix A. Hence the controllability of a continuous-time system is equivalent to its reachability for every A. Example A.3. We choose as the state variable x the voltage uc on the capacity of the electrical circuit in Fig. A.1, and as the input the source voltage u. Note that the voltage uc on the capacity is zero for an arbitrary value of the source voltage u. Therefore changing u we cannot reach any desired nonzero value of the voltage uc = xf z 0. Thus this circuit is an example of an uncontrollable system.

480

Appendix

Fig. A.1. Uncontrollable electrical circuit

A.3 Observability First consider the discrete system (A.1). Definition A.5. The system (A.1) (or the pair (A,C)) is called observable if there exists an integer q > 0 such that for given sequences of inputs {ui, i=0,1,…,q-1} and outputs {yi, i=0,1,…,q-1} one can determine the initial state x0 of this system. Theorem A.5. The system (A.1) is observable if and only if one of the following conditions is met: 1. ª C º « CA » » rank « « # » « » n 1 ¬«CA ¼»

n,

(A.21)

2. ªI z  A º rank « n » C ¼ ¬

n for all finite z 

,

(A.22)

3.

>I n z  A @

and C are right coprime.

Proof. Substituting the solution of (A.1a) into (A.1b), we obtain

(A.23)

Appendix

yic

481

i 1

yi  Dui  ¦ CA i k 1Buk

CA i x0 .

(A.24)

k 0

Using (A.24), for i ª y0c º « c» « y1 » « # » « » ¬ ync 1 ¼

0,1,..., n  1 , we have

ª C º « » « CA » x . « # » 0 « » n 1 ¬«CA ¼»

(A.25)

For the given sequences {ui, i=0,1,…,q1},{yi, i=0,1,…,q1} the sequence {y’i, i=0,1,…,n1} is known. From (A.25) we can determine x0 if and only if the condition (A.21) is met. Equivalence of the remaining conditions can be proved similarly (dually) as in Theorem A.1.  Example A.4. Show that the pair A

ª A1 «A ¬ 2

0º , C A 3 »¼

>C1

0@

is not observable for arbitrary submatrices A1rur, A2(n-r)ur, A3(n-r)u (n-r), C1pur. It is easy to verify that Ak

ª A1k « ¬« *

0 º », A 3k ¼»

(A.26)

where * denotes a submatrix insignificant in the following considerations. Using (A.21) and (A.26), we obtain ª C º « CA » « » « # » « » n 1 «¬CA »¼

0º ª C1 «CA » « 1 1 0» . « # #» « » n 1 0¼ ¬C1A1

(A.27)

From (A.27) it follows that the condition (A.21) is not met for arbitrary A1,A2,A3 and C1.

482

Appendix

Definition A.6. The system (A.12) (or the pair (A,C)) is called observable if there exists a time tf > 0 such that for given u(t) and y(t) for 0 d t d tf, one can determine the initial state x0 of this system. Theorem A.6. The system (A.12) is observable if and only if one of the conditions (A.21), (A.22), (A.23) of Theorem A.5 is met. The proof of this theorem follows similarly to that of Theorem A.5. 

Example A.5. We take as the state variables in the circuit in Fig. A.2 the voltage uC on the capacity and the current in the coil iL; as the input u we take the source current i, and as the output y we take the voltage uR on the resistance R, uR = Ri. The circuit is described by the following equations

d ª uc º « » dt ¬iL ¼

ª «0 « «1 ¬« L



1º ª1º C » ª uc º « » » « »  C i, y [0 « » i 0» ¬ L ¼ ¬ 0 ¼ ¼»

ª uc º 0] « »  Ri ¬iL ¼

Fig. A.2. Unobservable electrical circuit

The circuit is not observable, since C [0

ªC º 0] and thus « » ¬CA ¼

0.

Note that with both the source current i and the voltage uR known, we cannot determine the initial state ªuc (0) º « » ¬iL (0) ¼

of this circuit.

Appendix

483

It is easily verifiable that if we choose the voltage on the capacity as the output y, then the circuit is observable.
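The observability defect of the circuit can be checked numerically. A minimal sketch (assuming NumPy and the illustrative values L = C = 1, which are not specified in the example): with the output matrix [0 0] the test (A.21) fails completely, while choosing y = u_C makes the pair observable.

```python
import numpy as np

# State matrix of the circuit in Fig. A.2 for the assumed values L = C = 1.
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def obs_rank(C_out):
    """Rank of [C; CA] for the 2-state circuit, condition (A.21)."""
    return np.linalg.matrix_rank(np.vstack([C_out, C_out @ A]))

print(obs_rank(np.array([[0.0, 0.0]])))  # 0 : y = uR = Ri carries no state information
print(obs_rank(np.array([[1.0, 0.0]])))  # 2 : y = uC makes the circuit observable
```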

A.4 Reconstructability

First consider the discrete-time system (A.1).

Definition A.7. The system (A.1) (or the pair (A,C)) is called reconstructable if there exists an integer q > 0 such that for the two given sequences, input \{u_i, i = 0,1,\dots,q-1\} and output \{y_i, i = 0,1,\dots,q-1\}, one can determine the state vector x_q of this system.

Theorem A.7. The system (A.1) is reconstructable if and only if one of the following conditions is met:
1. \mathrm{Ker} \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} \subset \mathrm{Ker}\,A^n,   (A.28)
2. \mathrm{rank} \begin{bmatrix} I_n - dA \\ C \end{bmatrix} = n for all finite d \in \mathbb{C},   (A.29)
3. [I_n - dA] and C are right coprime.   (A.30)

The proof of this theorem is analogous (dual) to that of Theorem A.5.

Example A.6. The pair

A = \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}, \quad C = [1 \;\; 1]   (A.31)

is not observable, since

\mathrm{rank} \begin{bmatrix} C \\ CA \end{bmatrix} = \mathrm{rank} \begin{bmatrix} 1 & 1 \\ 3 & 3 \end{bmatrix} = 1.

One cannot determine the vector x_0 = [x_{01}, x_{02}]^T with y_0 and y_1 known for u_0 = u_1 = 0, since y_0 = C x_0 = x_{01} + x_{02} and y_1 = 3(x_{01} + x_{02}); that is, we know only the sum x_{01} + x_{02}. The pair (A.31) is reconstructable, since

\mathrm{rank} \begin{bmatrix} I_2 - dA \\ C \end{bmatrix} = \mathrm{rank} \begin{bmatrix} 1-d & -d \\ -2d & 1-2d \\ 1 & 1 \end{bmatrix} = 2 \quad \text{for all finite } d \in \mathbb{C}.

From the equation

x_2 = A^2 x_0 = \begin{bmatrix} 3 y_0 \\ 2 y_1 \end{bmatrix}

we can compute x_2 with y_0 and y_1 known.

Remark A.2. Each of the conditions (A.21), (A.22), (A.23) for the system (A.1) with A singular is only a sufficient condition, not a necessary one, for the reconstructability of this system. If \det A \ne 0, then these conditions are also necessary ones for the reconstructability of the system (A.1). For the system (A.1) with A nonsingular, the conditions of observability are equivalent to those of reconstructability.

Definition A.8. The system (A.12) (or the pair (A,C)) is called reconstructable if there exists a time t_f > 0 such that with u(t) and y(t) given for 0 \le t \le t_f one can determine the state vector x_f = x(t_f) of this system.

Theorem A.8. The system (A.12) is reconstructable if and only if one of the conditions (A.21), (A.22), (A.23) of Theorem A.5 is satisfied. The proof of this theorem is analogous to that of Theorem A.5. The reconstructability of the continuous-time system (A.12) is equivalent to its observability.
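The computation at the end of Example A.6 can be verified directly. A minimal sketch (assuming NumPy; the initial state x_0 = [1, 2]^T is an arbitrary choice, and the input is zero as in the example):

```python
import numpy as np

A = np.array([[1.0, 1.0], [2.0, 2.0]])
C = np.array([[1.0, 1.0]])

x0 = np.array([1.0, 2.0])       # unknown initial state, u0 = u1 = 0
y0 = (C @ x0).item()            # 3.0
y1 = (C @ A @ x0).item()        # 9.0

# The pair is not observable (rank [C; CA] = 1), yet x2 is reconstructable:
print(np.linalg.matrix_rank(np.vstack([C, C @ A])))  # 1
x2 = A @ A @ x0
print(x2)
print(np.allclose(x2, [3 * y0, 2 * y1]))  # True: x2 = [3 y0; 2 y1]
```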

A.5 Dual System

Definition A.9. The system

x_{i+1} = A^T x_i + C^T u_i, \quad y_i = B^T x_i   (A.32)

is called dual with respect to the system

x_{i+1} = A x_i + B u_i, \quad y_i = C x_i.   (A.33)

By virtue of Theorems A.1 and A.5 the following result ensues.

Theorem A.9. The system (A.33) is reachable (observable) if and only if its dual (A.32) is observable (reachable). The same theorem applies to the continuous-time system (A.12).
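Theorem A.9 reflects the identity that the observability matrix of the dual pair (A^T, B^T) is the transpose of the reachability matrix of (A, B), so the two rank tests must agree. A minimal numerical sketch (assuming NumPy; the pair below is an arbitrary example):

```python
import numpy as np

def reach_mat(A, B):
    """[B, AB, ..., A^{n-1}B] from condition (A.2)."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obs_mat(A, C):
    """[C; CA; ...; CA^{n-1}] from condition (A.21)."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

R = reach_mat(A, B)
O = obs_mat(A.T, B.T)   # observability matrix of the dual pair (A^T, B^T)
print(np.array_equal(O, R.T))  # True
print(np.linalg.matrix_rank(R) == np.linalg.matrix_rank(O))  # True
```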

A.6 Stabilizability and Detectability

Consider the discrete-time system (A.1). Let z_1, z_2, \dots, z_n be the eigenvalues of the matrix A of this system.

Definition A.10. The eigenvalue z_i of the system (A.1) is called controllable if

\mathrm{rank}\,[I_n z_i - A, B] = n, \quad i = 1,\dots,n.   (A.34)

The system (A.1) is reachable if and only if all the eigenvalues z_1, z_2, \dots, z_n are controllable.

Definition A.11. The system (A.1) is called stabilizable if all the unstable eigenvalues (|z_i| \ge 1) of this system are controllable.

Theorem A.10. The system (A.1) is stabilizable if and only if

\mathrm{rank}\,[I_n z - A, B] = n \quad \text{for all } |z| \ge 1.   (A.35)

The proof of this theorem is analogous to that of Theorem A.1.

Definition A.12. An eigenvalue z_i is called observable if

\mathrm{rank} \begin{bmatrix} I_n z_i - A \\ C \end{bmatrix} = n, \quad i = 1,\dots,n.   (A.36)

The system (A.1) is observable if and only if all the eigenvalues z_1, z_2, \dots, z_n are observable.

Definition A.13. The system (A.1) is called detectable if all the unstable eigenvalues (|z_i| \ge 1) of this system are observable.

Theorem A.11. The system (A.1) is detectable if and only if

\mathrm{rank} \begin{bmatrix} I_n z - A \\ C \end{bmatrix} = n \quad \text{for all } |z| \ge 1.   (A.37)

The proof of this theorem is analogous to that of Theorem A.5. Note that reachability (observability) always implies stabilizability (detectability) of the system (A.1). With merely slight modifications, the foregoing considerations apply to continuous-time systems of the form (A.12).
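Condition (A.35) only needs to be checked at the finitely many eigenvalues of A, since [I_n z - A, B] automatically has rank n whenever z is not an eigenvalue. A minimal sketch of the resulting test for stabilizability (assuming NumPy; the diagonal pair below is an arbitrary example, and the function name is hypothetical):

```python
import numpy as np

def is_stabilizable(A, B):
    """PBH-type test (A.35): rank [I z - A, B] = n at every unstable
    eigenvalue |z| >= 1 of A (discrete time)."""
    n = A.shape[0]
    for z in np.linalg.eigvals(A):
        if abs(z) >= 1:
            M = np.hstack([z * np.eye(n) - A, B])
            if np.linalg.matrix_rank(M) < n:
                return False
    return True

A = np.array([[2.0, 0.0], [0.0, 0.5]])
print(is_stabilizable(A, np.array([[1.0], [0.0]])))  # True: unstable mode z = 2 is controllable
print(is_stabilizable(A, np.array([[0.0], [1.0]])))  # False: z = 2 is uncontrollable
```

The dual test for detectability, condition (A.37), is obtained by applying the same check to the pair (A^T, C^T).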

Pomiary, Automatyka, Kontrola, Nr 1, 2001, pp. 3–6. [101] KACZOREK T.: SàAWIēSKI M., Functional observer for continuous-time linear systems. Pomiary, Automatyka, Kontrola PAK 9’2003, pp. 4–9. [102] KACZOREK T.: Generalization of the Cayley-Hamilton theorem for nonsquare matrices. Prace XVII SPET0-1995, pp. 77–83. [103] KACZOREK T.: Infinite eigenvalue assignment by output-feedbacks for singular systems. Inter. J. Appl. Math. And Comp. Sci., vol. 14, No 1, 2004, pp. 19–23. [104] KACZOREK T.: Influence of the state-feedback on cyclicity of linear systems. Materiaáy Konferencji Naukowo-Technicznej “Automation 2002”, 20-22 marca 2002, Warszawa, pp. 81–93. [105] KACZOREK T.: Linear Control Systems, vol. 1 and 2. Research Studies Press, J. Wiley, New York 1992-1993. [106] KACZOREK T.: Methods for computation of solutions to regular discrete-time linear systems. Appl. Math. and Comp. Science, vol. 5, no 4, 1995, pp. 635– 656.

References

493

[107] KACZOREK T.: Minimal order perfect functional observers for singular linear systems. Machine Intelligence & Robotic Control, vol. 4, No 2, 2002, pp. 71– 74. [108] KACZOREK T.: Minimal order deadbeat functional observers for singular 2D linear systems. Control & Cybernetics, vol. 32, No 2, 2003, pp. 301–311 [109] KACZOREK T.: Minimal order perfect functional observers for singular linear systems. IV International Workshop “Computational Problems of Electrical Engineering”, Zakopane, Sept. 2-5, 2002, pp. 52–55. [110] KACZOREK T.: Minimalny rząd obserwatorów funkcjonalnych liniowych ukáadów ciągáych. Pomiary, Automatyka, Kontrola, No 9, 2002, pp. 5–8. [111] KACZOREK T., CICHOCKI A.: Neural-type structured networks for solving algebraic Riccati, equations. Archives of Control Sciences, vol. 1, No 3-4, 1992, pp. 153–165. [112] KACZOREK T.: New algorithms of solving 2-D polynomial equations. Bull. Acad. Pol. Sci., Ser. Techn. Sci., vol.30, No 5-6, 1982, pp. 263–269. [113] KACZOREK T.: Obserwatory doskonaáe liniowych ukáadów jedno i dwuwymiarowych. XIV Krajowa Konferencja Automatyki, Zielona Góra, 24-27 czerwca 2002, pp. 3–12. [114] KACZOREK T.: On the solution of linear inhomogeneous matrix difference equations of order n with variable coefficients. Zastosowanie Matematyki (Applicationes Mathematicae), vol. 19, No 2, 1987, pp. 285–287. [115] KACZOREK T.: Perfect functional observers of singular continuous-time linear systems. Machine Intelligence & Robotic Control, vol. 4, No 1, 2002, pp. 77–82. [116] KACZOREK T.: Perfect observers for singular continuous-time linear systems. Int. Scientific Conf. “Energy Savings Electrical Engineering”, Warsaw, 1415 May 2001, pp. 247–251. [117] KACZOREK T.: Perfect observers for singular 2D Fornasini-Marchesini models. IEEE Transactions on Automatic Control, vol. 46, No 10, 2001, pp. 1671–1675. [118] KACZOREK T.: Perfect observers of standard linear systems. Bull. Pol. Acad. Techn. Sci., vol. 50, No 3, 2002, pp. 237–245. 
[119] KACZOREK T., SàAWIēSKI M.: Perfect observers for standard linear systems. 8th IEEE International Conference on Methods and Models in Automation and Robotics, 2002, 2-5 Sept. 2002, Szczecin, pp. 399–404. [120] KACZOREK T.: Perfect observers for singular 2D linear systems. Bull. Pol. Acad. Techn. Sci., vol. 49, No 1, 2001, pp. 141–147. [121] KACZOREK T., KRZEMIēSKI S.: Perfect reduced-order unknown-input observer for descriptor systems. 7th World Multiconference on Systems, Cybernetics and Informatics, July 27-30,2003, Orlando, Florida, USA. [122] KACZOREK T.: Perfect reduced-order unknown-input observer for 2D linear systems. V Int. Workshop Computational Problems of Electrical Engineering, Jazleevets, Ukraina, 26-29.08.2003, pp. 64–69. [123] KACZOREK T.: PodzielnoĞü minorów stopnia drugiego macierzy licznika transmitancji prze jej mianownik. Przegląd Elektrotechniczny, 12, 2001, pp. 297–302.

494

References

[124] KACZOREK T.: Polynomial and rational matrix interpolations for 2-D systems. Bull. Acad. Pol. Sci. 1994, vol. 42, No2, pp.45-51. [125] KACZOREK T.: Polynomial approach to pole shifting to infinity in singular systems by feedbacks. Bull. Pol. Acad. Techn. Sci., vol. 50, No 2 , 2002, pp. 134–144. [126] KACZOREK T.: Polynomial matrix equations in two indeterminates. Bull. Acad. Pol. Sci., Ser. Sci. Techn. vol. 30, No 1-2, 1982, pp. 95–100. [127] KACZOREK T.: Positive 1D and 2D Systems. Springer. Berlin Heidelberg New York, 2002. [128] KACZOREK T.: Positive 2D linear systems. Proc. of 13th Int. Conf. Systems Science, 15-18 Sept. 1998, Wrocáaw, vol. 1, pp. 50–67. [129] KACZOREK T.: Positive descriptor discrete-time linear systems. International Journal: Problems of Nonlinear Analysis in Engineering Systems, No 1(7) , 1998, pp. 38–54. [130] KACZOREK T.: Positive singular discrete linear systems. Bull. Pol. Acad. Techn. Sci., vol. 45, No 4, 1997, pp. 619–631. [131] KACZOREK T.: Proper rational solution to 2-D two-sided polynomial matrix equation. IEEE Trans. Auto. Control, vol. AC-32, No 9, 1987, pp. 826–828. [132] KACZOREK T.: Rational and polynomial solutions to 2-D polynomial matrix equations. Bull. Pol. Acad. Techn. Sci., vol. 39, No 1, 1991, pp. 105–109. [133] KACZOREK T.: Real solutions to two-sided polynomial matrix equations. Bull. Pol. Acad. Techn. Sci., vol. 35, No 11-12, 1987, pp. 673–677. [134] KACZOREK T.: Reachability and controllability of non-negative 2-D Roesser type models. Bull. Pol. Acad. Techn. Sci., vol. 44, No 1, 1998, pp. 408–410. [135] KACZOREK T.: Realization problem for singular 2D linear systems. Bull. Pol. Acad. Techn. Sci., vol. 6, No.3 , 1998, pp. 317–330. [136] KACZOREK T.: Realization problem for weakly positive linear systems. Bull. Pol. Acad. Techn. Sci., vol. 48, No 4, 2000, pp. 595–606. [137] KACZOREK T., KRZEMIēSKI S.: Reduced-order unknown-input perfect observer for descriptor. 
9th IEEE International Conference on Methods and Models in Automation and Robotics, MiĊdzyzdroje, 25-28.08.2003, pp. 375– 380. [138] KACZOREK T., DĄBROWSKI W.: Reduction of singular 2-D periodic systems to constant 2-D linear systems by the use of state feedback and FloquetLyapunov transformation. Bull. Pol. Acad. Techn.Sci., vol. 41, No 2, 1993, pp. 133–138. [139] KACZOREK T.: Relationship between infinite eigenvalue assignment for singular systems and solvability of polynomial matrix equations. 11th Mediterranean Conference on Control and Automation MED’03, June 18-20, 2003, Rhodes, Greece. [140] KACZOREK T.: Singular Compartmental Systems and Electrical Circuits. Elektronoje Modeliowanie, Vol. 26, 2004, pp. 81–98. [141] KACZOREK T.: Shifting to infinity of the eigenvalues in singular linear systems. Przegląd Elektrotechniczny, R. LXXIX 6/2003, pp. 439–443. [142] KACZOREK T.: Sáabo dodatnie ukáady w elektrotechnice. Przegląd Elektrotechniczny RLXXIV, 11’98, pp.277–281.

References

495

[143] KACZOREK T.: Sáabo dodatnie singularne ukáady dyskretne. Zeszyty Naukowe Politechniki ĝląskiej, Seria Automatyka z. 123, 1998, pp. 233–248. [144] KACZOREK T.: Solving of 2-D polynomial matrix equations. Functional-

Differential Systems and Related Topics, Proc. of III Int. Conference, BáaĪejewko, May 22-29 1983, pp. 127–132. [145] KACZOREK T., CICHOCKI A.: Solving of algebraic matrix equations by use of neural networks. Bull. Acad. Techn. Sci. vol. 40, No 1, 1992. pp. 61–68. [146] KACZOREK T., CICHOCKI A.: Solving of real matrix equations AX - YB = C making use of neural networks. Bull. Pol. Acad. Techn. Sci., vol. 40, No 3, 1992, pp. 273–279. [147] KACZOREK T.: Some Recent Developments in Perfect Observers. Triennial International Conference on Applied Automatic Syst. Orhid, Macedonia, Sept. 18-20, 2003, pp. 3–24. [148] KACZOREK T.: Some recent developments in positive systems. Proc. 7th Conference of Dynamical Systems Theory and Applications, àódĨ 2003, pp. 25-35. [149] KACZOREK T.: Special canonical form of matrices A,B,C of multivariable linear systems. Foundations of Control Engineering, 1979, vol. 4, No 1, pp. 27–34. [150] KACZOREK T.: Straight line reachability of Roesser model. IEEE Trans. on Autom. Contr., vol. AC-32, No 7, 1987, pp. 637–639. [151] KACZOREK T.: Teoria Sterowania i Systemów. PWN, Warszawa 1997. [152] KACZOREK T.: The relationship between the infinite eigenvalue assignment for singular systems and the solvability of polynomial matrix equations. Int. J. Appl. Math. and Comp. Sci., vol. 13, No 2, 2003, pp. 161–167. [153] KACZOREK T.: Transformation of matrices A,B,C of multivariable linear time-invariant systems to special canonical form. Bull. Acad. Pol. Sc. 1978, vol. 26, No 7. [154] KACZOREK T.: Transformation of time-varying implicit linear systems to their Weierstrass canonical form. Bull. Pol. Acad. Techn. Sci. vol.40, No 2, 1992, pp. 109–116. [155] KACZOREK T.: Two-Dimensional Linear Systems. Springer. Berlin Heidelberg New York 1985. [156] KACZOREK T.: Two-sided polynomial matrix equations. Bull. Acad. Pol. Techn. Sci., vol. 35, No 11-12, 1987, pp. 667–671. [157] KACZOREK T.: Weakly positive continuous-time linear systems. Bull. Pol. Acad. Techn. 
Sci., vol. 46, No. 2, 1998, pp. 233–245. [158] KACZOREK T.: Wektory i macierze w automatyce i elektrotechnice. WNT Warszawa 1998. [159] KACZOREK T.: Zero-degree solutions to AX + BY = C and invariant factors assignment problem. Bull. Acad. Pol. Sci., Ser. Sci. Techn.,vol.34, No 9-10, 1986, pp. 553–558. [160] KACZOREK T.: Zero-degree solutions to the bilateral polynomial matrix equations. Bull. Acad. Pol. Sci., Ser. Sci. Techn., vol. 34, No 9-10, 1986, pp. 547–552.

496

References

[161] KACZOREK T., TRACZYK T.: Partial derivative of the Vandermonde determinant and their application to the synthesis of linear control systems. Zastosowania Matematyki, 1973, t. XIII, z. 3, pp. 329–337. [162] KACZOREK. T.: Singular compartmental linear systems. Bull. Pol. Acad. Science, Vol. 51, No 4, 2003, 409–419. [163] KACZOREK. T.: Some recent developments in positive systems. 7th Conference on „Dynamical Systems Theory and Applications”, àódĨ 8-11.12.03. , pp. 23–35.

[164] KACZOREK. T.: Minimal realization for positive multivariable linear systems with delay. Proc. CAITA 2004, Conference on Advances in Internet Technologies and Applications, July 8-11, 2004, Purdue, USA, pp. 1–12 [165] KACZOREK. T.: Stability of positive discrete-time systems with time-delay. 8th World Muliconference on Systems, Cybernetics and Informatics, July 18-21, 2004 Orlando, Florida USA, pp. 321–324 [166] KACZOREK. T.: Structure decomposition and computation of minimal realization of normal transfer matrix of positive systems. 10th IEEE Int. Conf. on Methods and Models in Automation and Robotics, MMAR 2004, 30.08-2.09.2004, MiĊdzyzdroje, pp.93–100 [167] KACZOREK. T.: Realization problem for positive multivariable linear systems with time-delay. Int. Workshop “Computational Problems of Electrical Enginnering”, Zakopane, 1-4.09. 2004, pp. 186–192 [168] KACZOREK. T.: Realization problem for positive discrete- time systems with delays. Systems Science Vol. 30, Nr 4, 2004, pp. 17–30. [169] KACZOREK. T.: Positive 1D and 2D Systems. Springer. Berlin Heidelberg New York 2002. [170] KACZOREK. T.: Realization problem for positive discrete-time systems with delay. System Science, vol. 30, No. 4, 2004, pp. 117–130. [171] KACZOREK. T.: Realization problem for positive continuous-time systems with delays. Proc. Int. Conf. On Systems Engineering. Aug. 16-18, 2005, Las Vegas, USA. [172] KACZOREK. T.: Positive minimal realizations for singular discrete-time systems with one delay in state and one delay in control. Bull. Pol. Acad. Sci. Techn., vol 52, No 3, 2005. [173] KACZOREK. T.: Minimal realization for positive multivariable linear systems with delay. Int. J. Appl. Math. Comput. Sci., 2004, vol. 14, No 2, pp. 181–187. [174] KACZOREK. T., BUSàOWICZ M.: Recent developments in theory of positive discrete-time linear systems with delays – reachability, minimum energy control and realization problem. Pomiary, Automatyka, Kontrola, No 9, 2004, pp. 12–15. [175] KACZOREK. 
T., BUSàOWICZ M.: Determination of Positive Realization of Singular Continuous-Time Systems with Delays by State Variable Diagram Method. Proc. Conf. Automation, 22-24 March, Warsaw, 2006 pp. 352–361. [176] KACZOREK. T., BUSàOWICZ M.: Determination of realizations od composite positive systems with delays (Wyznaczanie realizacji záoĪonych ukáadów dodatnich z opóĨnieniami). XXVIII IC SPETO, MiĊdzynarodowa Konferencja z Podstaw Elektrotechniki i Teorii Obwodów, UstroĔ 11-14 maja 2005, pp. 349–353 [177] KACZOREK. T., BUSàOWICZ M.: Minimal realization for positive multivariable linear systems with delay. Int. Journal Applied Mathematics and computer Science, No 2, vol. 14, 2004, pp. 181–187 [178] KACZOREK. T., BUSàOWICZ M.: Minimal realization for positive multivariable linear systems with delay. Int. Journal Applied Mathematics and computer Science, No 2, vol. 14, 2004, pp. 181–187 [179] KAILATH T.: Linear Systems. Prentice Hall, Englewood Cliffs NY, 1980.

References

497

[180] KARCANIAS N.: Regular state-space realizations of singular system control problems. Proc. IEEE Conf. Decision and Control, 1987, pp. 1144–1146. [181] KIEàBASIēSKI A., SCHWETLICK H.: Numeryczna Algebra Liniowa. WNT, Warszawa 1992. [182] KLAMKA J.: Controllability of Dynamical Systems. Kluwer Academic Publ,. Dordecht, 1991. [183] KLAMKA J.: Function of 2-D matrix. Foundations of Control Engineering, vol. 9, 1984. [184] KLAMKA J.: SterowalnoĞü ukáadów dynamicznych, Warszawa-Wrocáaw, WNT 1990. [185] KOWALCZYK B.: Macierze i ich zastosowania. WNT, Warszawa 1976. [186] KUýERA V.: A contribution to matrix equations. Trans. Autom. Control. 1972, vol. AC-17, pp. 344–347. [187] KUýERA V.: Constant solutions of polynomial equations. Int. J. Control, vol. 53, No 2, 1991, pp. 495–502. [188] KUýERA V.: Discrete linear control: The polynomial equation approach. Academia, Prague 1979. [189] KUREK J. E.: Genericness of solution to N-dimensional polynomial matrix equation XA = I. IEEE Trans. on Circuits and Systems, vol. 37, No 8, 1990, pp. 1041–1043. [190] LAMPE B., ROSENWASSER E.: Algebraische Methoden zur Theorie der MehrgrĘȕen – Abtastsysteme. Universität Rostock, 2000. [191] LAMPE B., ROSENWASSER E.: Algebraic properties of irreducible transfer matrices. Avtomatika i Telemekhanika, N 7, 2000, pp. 31–43 (in Russian) (English translation: Automation and Remote Control, vol. 61, N 7, Pt. I, 2000, pp. 1091–1102. [192] LANG S.: Algebra. PWN Warszawa 1984. [193] LANG S.: Algebra. Reading, Mass, Addison–Wesley 1971. [194] LANGENHOP C. E.: The Laurent Expansion for a nearly singular matrix. Linear Algebra and its Applications 4, 1971, pp. 329–340. [195] LANKASTER P.: Theory of matrices. Academic Press, New York 1969. [196] LAUB A. J.: A Schur method for solving algebraic Riccati equations. IEEE Trans. Auto Control, AC-24(6). 1979, pp. 913–921. [197] LEVIN J.: On the matrix Riccati equation. IEEE Trans. Auto Control, 1967, vol. AC-12, pp. 746–749. [198] LEWIS F. 
L.: A survey of linear singular systems. Circuits Systems Signal Process, vol. 5, no 1, 1986, pp. 3–36. [199] LEWIS F. L.: Descriptor systems: Decomposition into forward and backward subsystems. IEEE Trans. Auto Control, vol. AC-29, 1984, pp. 167–170. [200] LEWIS F. L., MERTZIOS B. G.: Fundamental matrix of discrete singular systems, Circuits. Syst., Signal Processing, vol. 8, no 3. 1989, pp. 341–355. [201] LEWIS F. L.: Fundamental, reachability and observability matrices for discrete descriptor systems. IEEE Trans. Automat. Contr., vol. AC-30, 1985, pp. 502–505. [202] LEWIS F. L.: Further remarks on the Cayley-Hamilton theorem and Leverrier’s method for the matrix pencil (sE-A). IEEE Transactions on Automatic Control, vol. AC-31, No. 9, 1986, pp. 869–870.

498

References

[203] LEWIS F. L.: Techniques in 2-D implicit systems. Control and Dynamic Systems, vol. 69, pp. 89–131. [204] LUENBERGER D. G.: Canonical forms for linear multivariable systems. Trans. Autom. Control. June 1967, vol. AC-12, pp. 290–293. [205] LUENBERGER D. G.: Dynamic equations in descriptor form. IEEE Trans. Auto Control, vol. AC-22, No 3, 1977, pp. 312–321. [206] LUENBERGER D. G.: Time-invariant descriptor systems. Automatica, vol. 14, 1978, pp. 473–480. [207] MacDUFFEE C.C.: The theory of matrices. Chelsea, New York 1946. [208] MacLANE S., BIRKHOFF G.: Algebra. MacMillan, New York 1967. [209] MARCUS H., MINC H.: A survey of matrix theory and matrix inequalities. Allyn and Bacon, Boston 1964. [210] MINC H.: Nonnegative Matrices. New York, J. Wiley 1988. [211] MIN-YEN WU: A new concept of eigen-values and eigen-vectors and its applications. Trans. Autom. Control. 1980, vol. AC-25, No 4, pp. 824–826. [212] MISRA P., VAN DOOREN P., VARGA A.: Computation of structural invariants of generalized state-space systems. Automatica, vol. 30, no 12, 1994, pp. 1921–1936. [213] MITKOWSKI W.: Stabilizacja systemów dynamicznych. AGH, Kraków 1996. [214] MOSTOWSKI A., STARK M.: Elementy Algebry WyĪszej. PWN Warszawa 1958. [215] MUFTI I.H.: On the reduction of a system to canonical (phase-variable) form. Trans. Autom. Control. April 1965, vol. AC-10, pp. 206–207. [216] MURTHY D. N. P.: Controllability of a linear positive dynamic system. Int. J. Systems Sci., 1986, vol. 17, No 1, pp. 49–54. [217] NEWMAN M.: Integral matrices. Academic Press, New York 1972. [218] ORTEGA J. M.: Matrix Theory. Plenum Press, New York and London, 1976. [219] OHTA Y., MADEA H. and KODAMA S.: Reachability, observability and realizability of continuous-time positive systems. SIAM J. Control and Optimization, vol. 22, No 2, 1984, pp. 171–180. [220] PACE I. S., BARNETT S.: Efficient algorithms for linear system calculation. Smith form and common divisor of polynomial matrices. Int. J. System Sc. 1974, vol. 
5, No 6, pp. 403–411. [221] PENROSE R.: A generalized inverse for matrices. Proc. Cambridge Philos. Soc. 1955, vol. 51, pp. 406–413. [222] PERNELLA R.: Le calcul matriciel. Eyrolles Ed. Paris 1969. [223] PETERSEN I. R., HOLLOT C. V.: A Riccati equation approach to the stabilization of uncertain linear systems. Automatica, vol. 22, No 4, 1986, pp. 397–411. [224] POPOV V. M.: Invariant description of linear time-invariant controllable systems. SIAM J. Control. 1972, vol. 10, No 2, pp. 252–264. [225] PORTER V. A.: On the matrix Riccati equation. Trans. Autom. Control. 1967, vol. AC-12, pp. 746–749. [226] PRONGLE R. M., RAYNER A. A.: Generalized inverse matrices. Griffin, London 1971. [227] REID W. T.: A matrix differential equation of Riccati type. Amer. J. Math. 1946, vol. 68, pp. 237–246.

References

499

[228] ROTH W. E.: The equations AX-BY=C and AX-XB=C in matrices. Proc. Am. Math. Soc.1952, vol. 3, pp. 392–396. [229] SILVERMAN L. M.: Transformation of time-variable systems to canonical (phase-variable) form. Trans. Autom. Control. 1966, vol. AC-11, April, pp. 300–303. [230] SINHA N. K., ROZSA P.: Some canonical form for linear multivariable systems. Int. J. Control. 1976, vol. 23, No 6, pp. 865–883. [231] SMART N. M., BARNETT S.: The Algebra of Matrices in N-Dimensional Systems. IMA Jour. of Math. Contr. and Inform. 6, 1989, pp. 121–133. [232] SMART N. M., BARNETT S.: The Cayley-Hamilton Theorem in Two- and NDimensional Systems. IMA Jour. of Math. Contr. and Inform. 2, 1985, pp. 217–223. [233] SOLAK M. K.: Differential representations of multivariable linear systems with disturbances and polynomial matrix equations PX+RY=W and PX+YN=V. IEEE Trans. on Automatic Contr., vol. AC-30, No 7, 1985, pp. 687–690. [234] SOMMER R.: Entwurf nichlinear, zeitwarianter Systems durch Polynom. Regelungstechnik 27, 1979, H. 12, pp. 393–399. [235] ŠEBEK M.: 2-D polynomial equations. Kybernetika, vol. 19, No 3, 1983, pp. 212–224. [236] ŠEBEK M.: Characteristic polynomial assignment for delay-differential systems via 2-D polynomial equations. Kybernetika, vol. 23, No 5, 1987, pp. 345–359. [237] ŠEBEK M., KUýERA V.: Matrix equations arising in regulator problems. Kybernetika, vol. 17, No 2, 1981, pp. 128–139. [238] ŠEBEK M.: n-D polynomial matrix equations. IEEE Trans. Auto Control, vol. 33, No 5, 1988, pp. 499–502. [239] ŠEBEK M.: Two-sided equations and skew primness for n-D polynomial matrices. System & Control Letters 12, 1989, pp. 331–337. [240] THEODORU N. J.: M-dimensional Cayley-Hamilton theorem. IEEE Trans. Auto Control, AC-34, No 5, 1989, pp. 563–565. [241] TRZASKA Z., MARSZAàEK W.: Inversion of matrix pencils for generalized systems. Journal of the Frankllin Institute, Pergamon Press Ltd., 1992, pp. 479–490. [242] TUROWICZ A., MITKOWSKI W.: Teoria macierzy. 
Wydawnictwa AGH Kraków 1995. [243] TWARDY M., KACZOREK T.: Observer-based fault detection in dynamical systems – part I. Pomiary, Automatyka, Kontrola, No 7/8, 2004, pp. 5–11. [244] TWARDY M., KACZOREK T. Observer-based fault detection in dynamical systems – part II. Pomiary, Automatyka, Kontrola, No 7/8, 2004, pp. 11–14. [245] VAN DOOREN P.: The computation of Kronecker’s canonical form of a singular pencil. Linear Algebra and Its Applications, 1979, vol. 27, pp. 103– 140. [246] VARDULAKIS A. I. G.: Proper rational matrix diophantine equations and the exact model matching problem. IEEE Trans. Auto. Control AC-29, No 5, 1984, pp. 475–477.

500

References

[247] VAUGHAM D. R.: A negative exponential solution for the matrix Riccati equation. Trans. Autom. Control. 1969, vol. AC-14, pp. 72–79. [248] VIDYASAGAR M.: On matrix measures and convex Liapunov functions. J. Math Anl. and Appl., vol. 62, 1978, pp. 90–103. [249] WARMUS W.: Wektory i macierze. PWN Warszawa 1981. [250] WARWICK K.: Using the Cayley-Hamilton theorem with N-partitioned matrices. IEEE Trans. Automat. Contr. AC-28, 1983, pp. 1127–1128. [251] WILLEMS J. L.: On the existence of a nonpositive solution to the Riccati equation. IEEE Trans. Aut. Control, AC-19 (5), October 1974. [252] WOLOVICH W. A.: Linear multivariable systems. Springer. Berlin Heidelberg New York 1974. [253] WOLOVICH W. A.: Skew prime polynomial matrices. IEEE Trans. Auto. Control AC-23, No 5, 1978, pp. 880–887. [254] WOLOVICH W. A., ANTSAKLIS P.J.: The canonical diophantine equations with applications. SIAM J. Control and Optimization, vol. 22, No 5, 1984, pp. 777–787. [255] XIE G., LONG WANG L, Reachability and controllability of positive linear discrete-time systems with time-delays. in L. Benvenuti, A. De Santis and L. Farina (eds): Positive Systems, LNCIS 294, Springer. Berlin Heidelberg New York 2003, pp. 377–384. [256] YAKUBOVICH V. A.: Solution of certain matrix inequalities encountered in nonlinear control theory. Soviet Math. Dokl., vol. 5, 1964, pp. 652–656. [257] YAKUBOVICH V. A.: The solution of certain matrix inequalities in automatic control theory. Soviet Math. Dokl., vol. 3, 1962, pp. 620–623. (in Russian). [258] YIP E. L. and SINOVEC E. F., Solvability, controllability and observability of continuous descriptor systems. IEEE Trans. Auto Control, vol. AC-26, 1981, pp. 702–707. [259] YOKOYAMA R., KINNEN E.: Phase-variable canonical forms for linear, multi-input, multi-output systems. Int. J. Control. 1973, vol. 17, No 6, pp. 1297–1312. [260] ZIĉTAK K.: The lp solution of the nonlinear matrix equation XY=A. BIT 23, 1983, pp. 248–257. [261] ZURMÜHL.: Matrizen. 
Eine darstellung für Ingenieure. Springer. Berlin Heidelberg New York 1958. [262] ĩAK S. H.: On the polynomial matrix equation AX+YB=C. IEEE Trans. Auto. Control AC-30, No 12, 1985, pp. 1240–1242.

Index

Algebraic matrix equation 358
Asymptotic stability 423
Canonical form
    Frobenius 45
    Jordan 45, 231
    McMillan 152
    Smith 32
Computation of
    cyclic realization 231
    equivalent standard systems 272
    Frobenius canonical form 45
    fundamental matrices 276
    general solution of polynomial equations 319
    Jordan canonical form 45, 48
    minimal degree solution 322
    minimal realisation for singular linear systems 367
    normal transfer matrix 244
    particular solution of polynomial equations 313
    rational solution 332
    similarity transformation matrices 50
Computing
    greatest common divisors 77, 79
    smallest common multiple 79
Controllability 475
Cyclic pairs 255
Cyclic realization 220
    computation 226
    existence 224
Cyclicity 264, 267
Decomposition
    Kalman 291
    normal matrices 182
    rational function 116
    rational matrices 128
    regular pencil 87
    singular pencil 95
    singular systems 299
    structural 185, 305
    Weierstrass 91
    Weierstrass–Kronecker 299
Division of polynomial matrices 9
Dual system 482
Eigenvalues of matrix polynomial 345
Eigenvector method 57
Electrical circuit 200, 286
    fourth-order 210
    general case 210
    RC 288
    RL 286
    second-order 200
    third-order 203
Elementary
    divisors 37
    operation 20
    operations method 54
Equivalence 42
Equivalent standard systems 279
Fraction description of
    normal matrices 170
    rational matrices 136
Functional observers 391
Generalised Bezoute identity 84, 86
Generalization of Sylvester equation 357
Kronecker
    indices 102
    product 340
Linear independence 23
Lyapunov equation 361
Matrices
    column reduced 30
    cyclic 68, 69
    decomposition of regular pencil 87
    diagonalisation 60, 62
    Frobenius canonical form 45
    irreducible transfer 305
    Jordan canonical form 45
    left equivalent 27
    normal 163
    normalisation 191
    rational normal 168
    right equivalent 27
    row reduced 30
    simple 68
    simple structure 60
Matrix
    arbitrary square 65
    equation 313
    normal inverse 175, 180, 257
    normal transfer 260
    pair method 50
    variable elements 65
Minimum energy control 435, 440
Normal matrices
    fraction description 170
    product 175
    sum 175
Normal systems
    cyclic 255
    singular 255
Normality of matrix 164
Observability 478
Operations on polynomials 5
Output feedback 197
Perfect observers
    2D systems 396
    for standard systems 384
    for systems with unknown inputs 400
    full-order 375
    of singular systems 367
    reduced-order 375, 378, 408
Polynomial 1
Polynomial matrices
    division 9
    equivalents 27
    first degree 42
    greatest common divisors 75
    inverse matrix 132
    lowest common divisors 75
    operations upon 20
    pairs 75
    rank 23
    reduction 32
    relative prime 84
    simple 68
    zeros 37, 39
Polynomial matrix equations 313, 336
    bilinear with two unknowns 325
    rational solution 332
    unilateral with two variables 313
Polynomial operations 5
Problem of realisation 219
Rational
    function 107
    matrices 107, 124
Reachability 264, 435, 471
Realisation
    cyclic 220
    minimal 220
Realisation problem for
    positive continuous-time systems 453
    positive discrete-time systems 444
    singular multi-variable discrete-time systems with delays 461
Reconstructability 480
Robust stability 432
Similarity 42
Space basis 23
Stability of positive linear discrete-time systems with delay 423
Structural stability 244
Sylvester equation 347
Synthesis of regulators 155
System
    continuous-time 282, 422
    discrete-time 419
    linear singular 272, 367
    positive linear with delays 419
    singular discrete-time 255, 272
Theorem
    Bezoute 16
    Cayley–Hamilton 16
    Weierstrass–Kronecker 95