Also of Interest
Dynamic Calculus and Equations on Time Scales. Edited by Svetlin G. Georgiev, 2023. ISBN 978-3-11-118289-6, e-ISBN (PDF) 978-3-11-118297-1
Potentials and Partial Differential Equations: The Legacy of David R. Adams. Edited by Suzanne Lenhart, Jie Xiao, 2023. ISBN 978-3-11-079265-2, e-ISBN (PDF) 978-3-11-079272-0
Numerical Analysis on Time Scales. Svetlin G. Georgiev, Inci M. Erhan, 2022. ISBN 978-3-11-078725-2, e-ISBN (PDF) 978-3-11-078732-0
Integral Inequalities on Time Scales. Svetlin G. Georgiev, 2018. ISBN 978-3-11-070550-8, e-ISBN (PDF) 978-3-11-070555-3
Functional Analysis with Applications. Svetlin G. Georgiev, Khaled Zennir, 2019. ISBN 978-3-11-065769-2, e-ISBN (PDF) 978-3-11-065772-2
Svetlin G. Georgiev, Khaled Zennir
Differential Equations
Projector Analysis on Time Scales
Mathematics Subject Classification 2020 Primary: 34A09, 65L80, 15A22; Secondary: 65L05, 34D05 Authors Prof. Dr. Svetlin G. Georgiev Kliment Ohridski University of Sofia Department of Differential Equations Faculty of Mathematics and Informatics 1126 Sofia Bulgaria [email protected]
Prof. Khaled Zennir 51452 City Ashefaa- Ar-Rass Qassim Saudi Arabia [email protected]
ISBN 978-3-11-137509-0 e-ISBN (PDF) 978-3-11-137715-5 e-ISBN (EPUB) 978-3-11-137771-1 Library of Congress Control Number: 2023951725 Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de. © 2024 Walter de Gruyter GmbH, Berlin/Boston Cover image: MARHARYTA MARKO / iStock / Getty Images Plus Typesetting: VTeX UAB, Lithuania Printing and binding: CPI books GmbH, Leck www.degruyter.com
Preface
Time scale theory was initiated by Stefan Hilger in 1988 in his PhD thesis in order to unify the two approaches to dynamic modeling: difference equations and differential equations. Similar ideas had been used before and go back to the introduction of the Riemann–Stieltjes integral, which unifies sums and integrals. Many results for differential equations carry over easily to corresponding results for difference equations, while other results seem to be totally different in nature. For these reasons, the theory of dynamic equations is an active area of research. The time scale calculus can be applied to any field in which dynamic processes are described by discrete or continuous time models. Thus, the calculus of time scales has various applications involving noncontinuous domains such as certain bug populations, phytoremediation of metals, wound healing, maximization problems in economics and traffic problems. This book presents an introduction to the theory of dynamic-algebraic equations on time scales. It is primarily intended for senior undergraduate students and beginning graduate students of engineering and science courses. Students in the mathematical and physical sciences will find many sections of direct relevance. The book contains nine chapters. In Chapter 1, we introduce some basic definitions and concepts for dynamic and dynamic-algebraic equations on time scales. Chapter 2 investigates linear dynamic-algebraic equations with constant coefficients. Regular and σ-regular matrix pairs are introduced and investigated, and the Weierstrass–Kronecker canonical form for linear dynamic-algebraic equations with constant coefficients is considered. Admissible projectors, widely orthogonal admissible projectors and structural characteristic values are defined, and some of their properties are deduced. Coupling and completely coupling projectors are defined, and the considered linear dynamic-algebraic equations with constant coefficients are decoupled. In Chapter 3, we introduce the concepts of properly stated, preadmissible, admissible and regular matrix pairs. We construct matrix chains and prove that the matrix chain does not depend on the choice of the projector sequence. The equivalence of the constructed matrix chains is established, and some important properties of projector sequences are investigated and deduced. Chapter 4 is devoted to first kind linear time-varying dynamic-algebraic equations, which we classify as (σ, 1)-properly stated, (σ, 1)-algebraically nice at level 0 and k ≥ 1, (σ, 1)-nice at level 0 and k ≥ 1, and (σ, 1)-regular with tractability index 0 and ν ≥ 1. We deduce the inherent equation for the equation (4.1) and give a decomposition of the solutions, and we prove that the proposed decomposition process is reversible. Chapter 5 deals with second kind linear time-varying dynamic-algebraic equations, which we classify as properly stated, algebraically nice at level 0 and k ≥ 1, nice at level 0 and k ≥ 1, and regular with tractability index 0 and ν ≥ 1. We deduce a decomposition of the solutions of the defined equations and show that the described decomposition process is reversible. Chapter 6 is devoted to regular third kind linear time-varying dynamic equations with tractability index ν ≥ 1. We deduce
the inherent equation for the considered class of equations. A decoupling of the solutions is given, and it is shown that the constructed decoupling process is reversible. In Chapter 7, we introduce fourth kind linear time-varying dynamic-algebraic equations. We classify them as (1, σ)-regular and, using the properties of the leading term of the considered class of equations, we deduce the inherent equation for these equations. A procedure for decoupling the considered equations is given, and we prove that this procedure is reversible. In Chapter 8, we define jets of a function of one independent time scale variable and jets of a function of n independent real variables and one independent time scale variable. We introduce jet spaces and give some of their properties. Differentiable functions and total derivatives are defined. In Chapter 9, we investigate nonlinear dynamic-algebraic equations on arbitrary time scales. We define properly involved derivatives, constraints and consistent initial values for the considered equations. We introduce a linearization for nonlinear dynamic-algebraic equations and investigate the total derivative for regular linearized equations with tractability index one. This book is addressed to a wide audience of specialists such as mathematicians, physicists, engineers and biologists. It can be used as a textbook at the graduate level and as a reference book for several disciplines. The aim of this book is to present a clear and well-organized treatment of the concepts behind the development of the mathematics as well as solution techniques. The text material is presented in a readable and mathematically solid format.
Paris, January 2024
Svetlin G. Georgiev and Khaled Zennir
Contents

Preface   V

1 Introduction   1
1.1 Solvability concepts   1
1.2 Index concepts   8
1.3 Structure of dynamic systems on time scales   8
1.4 Constant coefficients   34
1.5 Advanced practical problems   44
1.6 Notes and references   46

2 Linear dynamic-algebraic equations with constant coefficients   47
2.1 Regular linear dynamic-algebraic equations with constant coefficients   47
2.2 The Weierstrass–Kronecker form   59
2.3 Structural characteristics   70
2.4 Decoupling   92
2.5 Complete decoupling   107
2.6 Advanced practical problems   114
2.7 Notes and references   116

3 P-projectors. Matrix chains   117
3.1 Properly stated matrix pairs   117
3.2 Matrix chains   119
3.3 Independency of the matrix chains   124
3.4 Alternative chain constructions   149
3.5 Equivalence of the P- and Π-chains   153
3.6 Some properties of the projectors Πi and Mi   170
3.7 Advanced practical problems   189
3.8 Notes and references   191

4 First kind linear time-varying dynamic-algebraic equations   192
4.1 A classification   192
4.2 A particular case   193
4.3 Standard form index one problems   197
4.4 Decoupling of first kind linear time-varying dynamic-algebraic equations of index one   203
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2   220
4.5.1 A reformulation   220
4.5.2 The component v_{ν−1}^σ   226
4.5.3 The components v_k^σ   227
4.5.4 Terms coming from U_k G_{ν−1} C^σ x^σ   233
4.5.5 Decoupling   240
4.6 Advanced practical problems   262
4.7 Notes and references   264

5 Second kind linear time-varying dynamic-algebraic equations   265
5.1 A classification   265
5.2 A particular case   266
5.3 Standard form index one problems   269
5.4 Decoupling of (σ, 1)-regular second-order linear time-varying dynamic equations of index one   274
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2   282
5.5.1 A reformulation   282
5.5.2 The component v_{ν−1}   288
5.5.3 The components v_k   289
5.5.4 Terms coming from U_k^σ G_{ν−1}^σ C x   296
5.5.5 Decoupling   304
5.6 Advanced practical problems   315
5.7 Notes and references   317

6 Third kind linear time-varying dynamic-algebraic equations   318
6.1 A classification   318
6.2 A particular case   319
6.3 Standard form index one problems   322
6.4 Decoupling of third kind linear time-varying dynamic-algebraic equations of index one   327
6.5 Decoupling of third kind linear time-varying dynamic-algebraic equations of index ≥ 2   339
6.5.1 A reformulation   339
6.5.2 The component v_{ν−1}^σ   345
6.5.3 The components v_k^σ. Terms coming from U_k G_{ν−1} C x^σ   346
6.6 Advanced practical problems   348
6.7 Notes and references   350

7 Fourth kind linear time-varying dynamic-algebraic equations   351
7.1 A classification   351
7.2 A particular case   352
7.3 Standard form index one problems   356
7.4 Decoupling of fourth-order linear time-varying dynamic-algebraic equations of index one   361
7.5 Decoupling of fourth kind linear dynamic-algebraic equations of index ≥ 2   366
7.5.1 A reformulation   366
7.5.2 The component v_{ν−1}   370
7.5.3 The components v_k. Terms coming from U_k^σ G_{ν−1}^σ C x   372
7.6 Advanced practical problems   373
7.7 Notes and references   376

8 Jets and jet spaces   377
8.1 The Taylor formula for a function of one independent time scale variable   377
8.2 The Taylor formula for a function of n independent real variables and one independent time scale variable   392
8.3 Jets of a function of one independent time scale variable   393
8.4 Jets of a function of n independent real variables and one independent time scale variable   395
8.5 Jet spaces   397
8.6 Total derivatives   399
8.7 Advanced practical problems   399
8.8 Notes and references   400

9 Nonlinear dynamic-algebraic equations   401
9.1 Properly involved derivatives   401
9.2 Constraints and consistent initial values   403
9.3 Linearization   411
9.4 Regular linearized equations with tractability index one   416
9.5 Advanced practical problems   421
9.6 Notes and references   422

A Elements of theory of matrices   423
A.1 Equivalent pairs of matrices   423
A.2 Kronecker canonical form   423
A.3 Projectors and subspaces   424
A.4 Generalized inverses   426
A.5 Matrix pencils   428
A.6 Parameter-dependent matrices and projectors   431

B Fréchet derivatives and Gâteaux derivatives   433
B.1 Remainders   433
B.2 Definition and uniqueness of the Fréchet derivative   435
B.3 The Gâteaux derivative   442

C Pötzsche's chain rule   445
C.1 Measure chains   445
C.2 Pötzsche's chain rule   447
C.3 A generalization of Pötzsche's chain rule   450

Bibliography   457
Index   459
1 Introduction
In this chapter, we introduce some basic definitions and concepts for dynamic and dynamic-algebraic equations on time scales. We also describe the structure of dynamic equations on time scales and present the Putzer algorithm for computing the matrix exponential of a matrix on an arbitrary time scale. Throughout, suppose that 𝕋 is a time scale with forward jump operator σ and delta differentiation operator Δ.
1.1 Solvability concepts
The dynamic behavior of many biological, population, economic and physical processes is modeled via dynamic equations. When the states of the considered system are constrained in some way, the corresponding mathematical model contains algebraic equations that describe these constraints.

Definition 1.1. Systems consisting of both dynamic and algebraic equations are called dynamic-algebraic systems, algebro-dynamic systems, implicit dynamic equations or singular systems.

The general form of a dynamic-algebraic equation is

F(t, x, x^σ, x^Δ) = 0,   (1.1)

where F : I × D1 × D2 × D3 → ℝ^m, I ⊆ 𝕋 is a time scale interval and D1, D2, D3 ⊆ ℝ^n are open, m, n ∈ ℕ. The meaning of the quantity x^Δ is as in the case of dynamic equations on time scales. On the one hand, it denotes the Hilger derivative of a Hilger differentiable function x : I → ℝ^n with respect to its argument t ∈ I. On the other hand, it is used as an independent variable of the function F, and we want F to determine a Hilger differentiable function that solves (1.1), i.e.,

F(t, x(t), x^σ(t), x^Δ(t)) = 0,   t ∈ I.
Example 1.1 (The logistic equation). Let N(t) be the size (density) of a population at time t and N(t + 1) be the size (density) of the population at time t + 1. The function N(t + 1)/N(t) is called the fitness function for the population, or the rate of population growth, or the net reproduction rate. One of the simplest population models is the logistic model, which is given by the equation

N(t + 1) = N(t)(1 + r(1 − N(t)/K)),   t ∈ ℤ,

where r and K are given functions.
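To make the discrete dynamics concrete, the following short Python sketch iterates the logistic equation above. The values of r, K and the initial density N0 are illustrative assumptions chosen only for this sketch; they are not taken from the text.

```python
# A minimal sketch: iterate N(t + 1) = N(t) * (1 + r * (1 - N(t) / K)) on T = Z.
# The values of r, K and N0 below are illustrative assumptions.

def logistic_step(N, r, K):
    """One step of the logistic difference equation."""
    return N * (1 + r * (1 - N / K))

def simulate(N0, r, K, steps):
    """Return the list [N(0), N(1), ..., N(steps)]."""
    values = [N0]
    for _ in range(steps):
        values.append(logistic_step(values[-1], r, K))
    return values

if __name__ == "__main__":
    # With 0 < r < 2 the population density approaches the carrying capacity K.
    for t, N in enumerate(simulate(N0=10.0, r=0.5, K=100.0, steps=10)):
        print(f"t = {t:2d}, N(t) = {N:.4f}")
```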
Example 1.2 (Tumor growth model). The growth of certain tumors can also be modeled by nonlinear integral equations. These models are usually described by systems of differential equations. However, it is possible to reconsider the models in the form of integral equations on time scales. In this section, we introduce two types of tumor growth models. Let N be the total number of tumor cells, which is divided into a population P that proliferates by splitting and a population Q, which remains quiescent. However, the proliferating cells can make a transition to the quiescent state with a rate r(N), which increases with the overall size of the tumor. The model is then given by the system of dynamic equations
P^Δ(x) = cP(x) − r(N)P(x),
Q^Δ(x) = r(N)P(x).
(1.2)
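On the time scale 𝕋 = ℤ the delta derivative in (1.2) is a forward difference, so the model can be stepped directly. The Python sketch below does this; the growth rate c and the particular increasing transition rate r(N) are assumptions made only for illustration (the text only requires that r increases with N).

```python
# A minimal sketch of the tumor model (1.2) on T = Z, where the delta derivative
# is a forward difference. The constant c and the form of r(N) are illustrative.

def r(N, theta=500.0):
    """An increasing transition rate, chosen only for illustration."""
    return N / (N + theta)

def step(P, Q, c=0.1):
    N = P + Q
    P_next = P + c * P - r(N) * P      # P^Delta = c P - r(N) P
    Q_next = Q + r(N) * P              # Q^Delta = r(N) P
    return P_next, Q_next

P, Q = 10.0, 0.0
for t in range(20):
    P, Q = step(P, Q)
print(f"After 20 steps: proliferating P = {P:.2f}, quiescent Q = {Q:.2f}")
```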
Example 1.3. We assume that x1 (t) and x2 (t) denote the prey population and the predator population, respectively. Let A, B, C and D be positive constants. Then the predator– prey model is given by the system of dynamic equations x1Δ (t) = Ax1 (t) − Bx1 (t)x2 (t),
x2^Δ(t) = −Cx2(t) + Dx1(t)x2(t).

Example 1.4 (Competition models). There are two types of competition: one occurs among individuals of the same species (intraspecific competition), and the other occurs among two (or more) species (interspecific competition). Intraspecific competition was accounted for in the one-dimensional models we have already seen. Interspecific competition will be our focus here. G. F. Gause (1935) conducted an experiment on three different species of paramecium, P. aurelia, P. caudatum and P. bursaria. T. Park (1954) conducted a similar experiment on two species of flour beetles, Tribolium castaneum and T. confusum. Based on these experiments, the competitive exclusion principle was established: if two species are very similar (such as sharing the same food, ecological niche, etc.), they cannot coexist. Let x(t) and y(t) be the population densities of species x and y. The Leslie–Gower competition model is given by the system

x(t + 1) = r1 K1 x(t) / (K1 + (r1 − 1)x(t) + c1 y(t)),
y(t + 1) = r2 K2 y(t) / (K2 + (r2 − 1)y(t) + c2 x(t)),   t ∈ ℤ,   (1.3)

where ri, Ki, ci, i = 1, 2, are given functions. We will discuss the question of existence of solutions. Uniqueness of solutions will be discussed in the context of initial value problems, when a solution satisfies the initial condition

x(t0) = x0
(1.4)
with given t0 ∈ I and x0 ∈ ℝ^n, and boundary value problems, where the solution is supposed to satisfy b(x(c), x(d)) = 0 with b : D1 × D1 → ℝ^l, I = [c, d], c, d ∈ 𝕋, c < d. A linear dynamic-algebraic equation with constant coefficients can be represented in the form

Ex^Δ = Ax + f(t)   (1.5)

or

Ex^Δ = Ax^σ + f(t),   (1.6)
where E, A ∈ M_{m×n}, f : I → ℝ^m. Here, M_{m×n} is the set of real matrices of type m × n. In order to study the system (1.1), we have to specify the kind of solutions in which one is interested. In this book, we will discuss two kinds of solutions for the equation (1.1): classical solutions and weak solutions. For the classical solutions of the system (1.1), we will use the following definition. The weak solutions will be discussed in Chapter 2 and Chapter 3.

Definition 1.2.
1. A function x ∈ C^1(I) is called a solution of (1.1) if it satisfies (1.1) pointwise.
2. The function x ∈ C^1(I) is called a solution of the IVP (1.1), (1.4), if it is a solution of (1.1) and satisfies (1.4).
3. An initial condition (1.4) is called consistent with F, if the associated IVP (1.1), (1.4) has at least one solution.
4. A problem is said to be solvable if it has at least one solution.

Example 1.5. Let 𝕋 = ℤ. Consider the system

x^Δ = \begin{pmatrix} -1 & 2 & 3 \\ 4 & -2 & 1 \\ 0 & 1 & 2 \end{pmatrix} x + \begin{pmatrix} -t^2 + 9t + 2 - \frac{3t}{t^2+1} \\ -2t^2 - 8t - 2 - \frac{t}{t^2+1} \\ -t^2 + 3t + \frac{1 - 5t - 5t^2 - 2t^3}{(t^2+1)(t^2+2t+2)} \end{pmatrix}

subject to the initial condition x(0) = (0, 0, 0)^T. We will show that the function

x(t) = \begin{pmatrix} t^2 + t \\ t^2 - 3t \\ \frac{t}{t^2+1} \end{pmatrix},   t ∈ ℤ,

is its solution. Here, σ(t) = t + 1, t ∈ 𝕋. Let

x1(t) = t^2 + t,   x2(t) = t^2 - 3t,   x3(t) = \frac{t}{t^2+1},   t ∈ ℤ.

Then

x1^Δ(t) = σ(t) + t + 1 = t + 1 + t + 1 = 2t + 2,
x2^Δ(t) = σ(t) + t - 3 = t + 1 + t - 3 = 2t - 2,
x3^Δ(t) = \frac{t^2 + 1 - t(σ(t) + t)}{(t^2+1)((σ(t))^2+1)} = \frac{t^2 + 1 - t(t + 1 + t)}{(t^2+1)((t+1)^2+1)} = \frac{1 - t - t^2}{(t^2+1)(t^2+2t+2)},   t ∈ 𝕋.

Therefore,

x^Δ(t) = \begin{pmatrix} x1^Δ(t) \\ x2^Δ(t) \\ x3^Δ(t) \end{pmatrix} = \begin{pmatrix} 2t + 2 \\ 2t - 2 \\ \frac{1 - t - t^2}{(t^2+1)(t^2+2t+2)} \end{pmatrix},   t ∈ ℤ.

Next,

\begin{pmatrix} -1 & 2 & 3 \\ 4 & -2 & 1 \\ 0 & 1 & 2 \end{pmatrix} x(t) + f(t) = \begin{pmatrix} -1 & 2 & 3 \\ 4 & -2 & 1 \\ 0 & 1 & 2 \end{pmatrix} \begin{pmatrix} t^2 + t \\ t^2 - 3t \\ \frac{t}{t^2+1} \end{pmatrix} + \begin{pmatrix} -t^2 + 9t + 2 - \frac{3t}{t^2+1} \\ -2t^2 - 8t - 2 - \frac{t}{t^2+1} \\ -t^2 + 3t + \frac{1 - 5t - 5t^2 - 2t^3}{(t^2+1)(t^2+2t+2)} \end{pmatrix}

= \begin{pmatrix} t^2 - 7t + \frac{3t}{t^2+1} \\ 2t^2 + 10t + \frac{t}{t^2+1} \\ t^2 - 3t + \frac{2t}{t^2+1} \end{pmatrix} + \begin{pmatrix} -t^2 + 9t + 2 - \frac{3t}{t^2+1} \\ -2t^2 - 8t - 2 - \frac{t}{t^2+1} \\ -t^2 + 3t + \frac{1 - 5t - 5t^2 - 2t^3}{(t^2+1)(t^2+2t+2)} \end{pmatrix}

= \begin{pmatrix} 2t + 2 \\ 2t - 2 \\ \frac{1 - t - t^2}{(t^2+1)(t^2+2t+2)} \end{pmatrix} = x^Δ(t),   t ∈ 𝕋.
Example 1.6. Let 𝕋 = 2ℕ0 . Consider the IVP −1 (−3 −1
1 1 1
2 1 −1) x Δ = (−1 −1 1
−2 0 −1
0 16t 3 + 11t 2 + 8t + 6 σ 1 ) x + ( 7t 2 − 13t − 6 ) , −1 8t 3 + 3t 2 − 9t − 3
t ∈ 𝕋,
1.1 Solvability concepts
�
2 x(1) = (3) . 4 We will prove that the function t2 + t x(t) = (t 3 + t 2 + t ) , t 2 + 3t
t ∈ 𝕋,
is its solution. Here, σ(t) = 2t, t ∈ 𝕋. Let x1 (t) = t 2 + t,
x2 (t) = t 3 + t 2 + t,
x3 (t) = t 2 + 3t,
t ∈ 𝕋.
Then 2
x1σ (t) = (σ(t)) + σ(t) = 4t 2 + 2t, 3
2
x2σ (t) = (σ(t)) + (σ(t)) + σ(t) = 8t 3 + 4t 2 + 2t, 2
x3σ (t) = (σ(t)) + 3σ(t) = 4t 2 + 6t,
t ∈ 𝕋.
Next, x1Δ (t) = σ(t) + t + 1 = 2t + t + 1 = 3t + 1, 2
x2Δ (t) = (σ(t)) + +tσ(t) + t 2 + σ(t) + t + 1 = 4t 2 + 2t 2 + t 2 + 2t + t + 1 = 7t 2 + 3t + 1, x3Δ (t) = σ(t) + t + 3 = 2t + t + 3 = 3t + 3,
t ∈ 𝕋.
Therefore, x1σ (t) 4t 2 + 2t σ 3 x (t) = (x2 (t)) = (8t + 4t 2 + 2t ) , x3σ (t) 4t 2 + 6t σ
t ∈ 𝕋,
and x1Δ (t) 3t + 1 x (t) = (x2Δ (t)) = (7t 2 + 3t + 1) , 3t + 3 x3Δ (t) Δ
t ∈ 𝕋.
Consequently, −1 (−3 −1
1 1 1
2 −1 −1) x Δ (t) = (−3 −1 −1
1 1 1
2 3t + 1 −1) (7t 2 + 3t + 1) −1 3t + 3
5
6 � 1 Introduction 7t 2 + 6t + 6 = (7t 2 − 9t − 6) , 7t 2 − 3t − 3
t ∈ 𝕋,
and 1 (−1 1
−2 0 −1
1 = (−1 1
0 16t 3 + 11t 2 + 8t + 6 σ 1 ) x (t) + ( 7t 2 − 13t − 6 ) 1 8t 3 + 3t 2 − 9t − 3 −2 0 −1
0 4t 2 + 2t 16t 3 + 11t 2 + 8t + 6 3 2 1 ) (8t + 4t + 2t ) + ( 7t 2 − 13t − 6 ) −1 4t 2 + 6t 8t 3 + 3t 2 − 9t − 3
−16t 3 − 4t 2 − 2t 16t 3 + 11t 2 + 8t + 6 =( ) + ( 7t 2 − 13t − 6 ) 4t 3 2 −8t + 4t + 6t 8t 3 + 3t 2 − 9t − 3 7t 2 + 6t + 6 −1 2 = (7t − 9t − 6) = (−3 7t 2 − 3t − 3 −1
1 1 1
2 −1) x Δ (t), −1
t ∈ 𝕋.
Note that x1 (1) = 12 + 1 = 2,
x2 (1) = 13 + 12 + 1 = 3,
x3 (1) = 12 + 3 = 4 and
2 x(1) = (3) . 4 Example 1.7. Let 𝕋 = 3ℤ. Consider the IVP (
−1 1
2 Δ −4 )x = ( 3 2 x(0) = (
1 3t 2 + 25t + 44 ) xσ + ( 2 ), −1 −t − t − 10
0 ). −2
We will prove that the function x(t) = (
t2 + t ), t −t−2 2
t ∈ [0, ∞),
t ∈ [0, ∞),
1.1 Solvability concepts
is its solution. Here, σ(t) = t + 3, t ∈ 𝕋. Let x1 (t) = t 2 + t,
x2 (t) = t 2 − t − 2,
t ∈ [0, ∞).
Then x1 (0) = 0,
x1σ (t) x2σ (t)
x2 (0) = −2, 2
= (σ(t)) + σ(t) = (t + 3)2 + t + 3 = t 2 + 7t + 12, 2
= (σ(t)) − σ(t) − 2 = (t + 3)2 − t − 3 − 2 = t 2 + 6t + 9 − t − 5 = t 2 + 5t + 4,
x1Δ (t) = σ(t) + t + 1 = t + 3 + t + 1 = 2t + 4, x2Δ (t) = σ(t) + t − 1 = t + 3 + t − 1 = 2t + 2,
t ∈ [0, ∞).
Hence, x (0) 0 x(0) = ( 1 ) = ( ) , x2 (0) −2
x σ (t) t 2 + 7t + 12 x σ (t) = ( 1σ ) = ( 2 ), x2 (t) t + 5t + 4 x Δ (t) 2t + 4 x Δ (t) = ( 1Δ ) = ( ), 2t + 2 x2 (t)
t ∈ [0, ∞).
Consequently, (
−1 1
2 Δ −1 ) x (t) = ( 3 1
2 2t + 4 2t )( )=( ), 3 2t + 2 8t + 10
t ∈ [0, ∞),
and (
−4 2
1 3t 2 + 25t + 44 ) x σ (t) + ( 2 ) −1 −t − t − 10 1 t 2 + 7t + 12 3t 2 + 25t + 44 )( 2 )+( 2 ) −1 t + 5t + 4 −t − t − 10
=(
−4 2
=(
−3t 2 − 23t − 44 3t 2 + 25t + 44 ) + ( ) t 2 + 9t + 20 −t 2 − t − 10
=(
2t −1 )=( 8t + 10 1
2 Δ ) x (t), 3
t ∈ [0, ∞).
�
7
8 � 1 Introduction Exercise 1.1. Let 𝕋 = 3ℕ0 . Check if the function t2 x(t) = ( t ) , 2
t ∈ 𝕋,
is a solution to the IVP −1 (1 1
1 −1 1
1 0 1 ) xΔ = ( 1 −1 1
1 0 1
1 −7t − 1 1 ) x σ + (−9t 2 + 4t − 3) , 0 −9t 2 + t + 1
t ∈ 𝕋,
1 x(1) = (1) . 1
1.2 Index concepts In the analysis of linear dynamic-algebraic equations with constant coefficients (1.5), (1.6), all properties of the system can be determined by computing the invariants of the matrix pair (E, A) under equivalence transformations. There are different approaches for investigations of (1.5), (1.6). Among these approaches, the differentiation index and perturbation index are the most widely used concepts. The main motivation to introduce an index is to classify different types dynamic-algebraic equations with respect to the difficulty to solve them analytically as well as numerically. The different kind of indices were introduced to show how far the dynamic-algebraic equations is away from the dynamic equations.
1.3 Structure of dynamic systems on time scales Definition 1.3. The first-order linear dynamic equation yΔ = p(t)y,
t ∈ 𝕋,
(1.7)
is called regressive if p ∈ R . Theorem 1.1. Suppose that p ∈ R , t0 ∈ 𝕋 and y0 ∈ ℝ. Then the unique solution of the IVP yΔ = p(t)y,
y(t0 ) = y0 ,
is given by y(t) = ep (t, t0 )y0 ,
t ∈ 𝕋.
(1.8)
1.3 Structure of dynamic systems on time scales
�
9
Proof. Let y be a solution of the IVP (1.8). We have Δ yΔ (t)e⊖p (σ(t), t0 ) = p(t)e⊖p (σ(t), t0 )y(t) = p(t)(e⊖p (t, t0 ) + μ(t)e⊖p (t, t0 ))y(t)
= p(t)(e⊖p (t, t0 ) + μ(t)(⊖p)(t)e⊖p (t, t0 ))y(t) = p(t)(1 − =
μ(t)p(t) )e (t, t )y(t) 1 + μ(t)p(t) ⊖p 0
p(t) e (t, t )y(t) = −(⊖p)(t)e⊖p (t, t0 )y(t) 1 + μ(t)p(t) ⊖p 0
Δ = −e⊖p (t, t0 )y(t),
t ∈ 𝕋,
whereupon Δ yΔ (t)e⊖p (σ(t), t0 ) + e⊖p (t, t0 )y(t) = 0,
t ∈ 𝕋,
or Δ
(ye⊖p (⋅, t0 )) (t) = 0,
t ∈ 𝕋.
Then y(t)e⊖p (t, t0 ) = c,
t ∈ 𝕋,
where c is a constant. Since y(t0 ) = y0 , we get y0 = c, i. e., y(t)e⊖p (t, t0 ) = y0 ,
t ∈ 𝕋.
y(t) = y0 ep (t, t0 ),
t ∈ 𝕋.
Consequently,
This completes the proof. Definition 1.4. If p ∈ R and f : 𝕋 → ℝ is rd-continuous, then the dynamic equation yΔ = p(t)y + f (t), is called regressive.
t ∈ 𝕋,
(1.9)
10 � 1 Introduction Theorem 1.2. Suppose that (1.9) is regressive. Let t0 ∈ 𝕋 and x0 ∈ ℝ. The unique solutions of the IVPs x Δ = −p(t)x σ + f (t),
x(t0 ) = x0 ,
t ∈ 𝕋,
(1.10)
and x Δ = p(t)x + f (t),
x(t0 ) = x0 ,
t ∈ 𝕋,
(1.11)
are given by t
x(t) = e⊖p (t, t0 )x0 + ∫ e⊖p (t, τ)f (τ)Δτ,
t ∈ 𝕋,
(1.12)
t ∈ 𝕋,
(1.13)
t0
and t
x(t) = ep (t, t0 )x0 + ∫ ep (t, σ(τ))f (τ)Δτ, t0
respectively. Proof. 1.
Consider x, defined by (1.12). We will prove that it satisfies (1.10). We have x(t0 ) = e⊖p (t0 , t0 )x0 = x0 ,
and t
Δ
x (t) = (⊖p)(t)e⊖p (t, t0 )x0 + ∫(⊖p)(t)e⊖p (t, τ)f (τ)Δτ + e⊖p (σ(t), t)f (t) t0
t
=−
p(t) p(t) f (t) e (t, t )x − , ∫ e⊖p (t, τ)f (τ)Δτ + 1 + μ(t)p(t) ⊖p 0 0 1 + μ(t)p(t) 1 + μ(t)p(t)
t ∈ 𝕋,
t0
σ(t)
x σ (t) = e⊖p (σ(t), t0 )x0 + ∫ e⊖p (σ(t), τ)f (τ)Δτ t0
σ(t)
t
= e⊖p (t, t0 )x0 (1 + (⊖p)(t)μ(t)) + ∫ e⊖p (σ(t), τ)f (τ)Δτ + ∫ e⊖p (σ(t), τ)f (τ)Δτ t0
t
t
=
1 1 e (t, t )x + ∫ e⊖p (t, τ)f (τ)Δτ + μ(t)e⊖p (σ(t), t)f (t) 1 + μ(t)p(t) ⊖p 0 0 1 + μ(t)p(t) t0
1.3 Structure of dynamic systems on time scales
�
t
μ(t) 1 1 = e (t, t )x + f (t), ∫ e⊖p (t, τ)f (τ)Δτ + 1 + μ(t)p(t) ⊖p 0 0 1 + μ(t)p(t) 1 + μ(t)p(t) t0
t ∈ 𝕋. Therefore, x Δ (t) + p(t)x σ (t)
t
p(t) p(t) f (t) =− e⊖p (t, t0 )x0 − ∫ e⊖p (t, τ)f (τ)Δτ + 1 + μ(t)p(t) 1 + μ(t)p(t) 1 + μ(t)p(t) t0
t
p(t) p(t) μ(t)p(t) + e (t, t )x + f (t) ∫ e⊖p (t, τ)f (τ)Δτ + 1 + μ(t)p(t) ⊖p 0 0 1 + μ(t)p(t) 1 + μ(t)p(t) t0
= f (t),
t ∈ 𝕋.
Now we multiply the equation (1.10) by ep (t, t0 ) and we get ep (t, t0 )x Δ (t) + p(t)ep (t, t0 )x σ (t) = ep (t, t0 )f (t),
t ∈ 𝕋,
or Δ
(ep (⋅, t0 )x) (t) = ep (t, t0 )f (t),
t ∈ 𝕋,
which we integrate from t0 to t and we obtain t
ep (t, t0 )x(t) − ep (t0 , t0 )x(t0 ) = ∫ ep (τ, t0 )f (τ)Δτ,
t ∈ 𝕋,
t0
or t
ep (t, t0 )x(t) = x0 + ∫ ep (τ, t0 )f (τ)Δτ,
t ∈ 𝕋,
t0
or t
x(t) = x0 e⊖p (t, t0 ) + ∫ e⊖p (t0 , τ)e⊖p (t, t0 )f (τ)Δτ t0
t
= x0 e⊖p (t, t0 ) + ∫ e⊖p (t, τ)f (τ)Δτ, t0
t ∈ 𝕋.
11
12 � 1 Introduction 2.
Let now x be defined by (1.13). Then t
x Δ (t) = p(t)ep (t, t0 )x0 + p(t) ∫ ep (t, σ(τ))f (τ)Δτ + ep (σ(t), σ(t))f (t) t0
= p(t)x(t) + f (t),
t ∈ 𝕋.
Now we multiply the equation (1.11) by e⊖p (t, t0 ) and we get e⊖p (t, t0 )x Δ (t) − p(t)e⊖p (t, t0 )x(t) = e⊖p (t, t0 )f (t),
t ∈ 𝕋,
or p(t) 1 e (t, t )x Δ (t) − e (t, t )x(t) 1 + μ(t)p(t) ⊖p 0 1 + μ(t)p(t) ⊖p 0 1 = e (t, t )f (t), t ∈ 𝕋, 1 + μ(t)p(t) ⊖p 0 or Δ e⊖p (σ(t), t0 )x Δ (t) + e⊖p (t, t0 )x(t) = e⊖p (σ(t), t0 )f (t),
t ∈ 𝕋,
or Δ
(e⊖p (⋅, t0 )x) (t) = e⊖p (σ(t), t0 )f (t),
t ∈ 𝕋,
which we integrate from t0 to t and we obtain t
e⊖p (t, t0 )x(t) − e⊖p (t0 , t0 )x(t0 ) = ∫ e⊖p (σ(τ), t0 )f (τ)Δτ, t0
or t
e⊖p (t, t0 )x(t) = x0 + ∫ ep (t0 , σ(τ))f (τ)Δτ,
t ∈ 𝕋,
t0
or t
x(t) = x0 ep (t, t0 ) + ∫ ep (t, t0 )ep (t0 , σ(τ))f (τ)Δτ t0
t
= x0 ep (t, t0 ) + ∫ ep (t, σ(τ))f (τ)Δτ, t0
This completes the proof.
t ∈ 𝕋.
t ∈ 𝕋,
1.3 Structure of dynamic systems on time scales
�
13
Suppose that A is a m × n-matrix on 𝕋, A = (aij )1≤i≤m,1≤j≤n , shortly A = (aij ), aij : 𝕋 → ℝ, 1 ≤ i ≤ m, 1 ≤ j ≤ n. Definition 1.5. We say that A is differentiable on 𝕋 if each entry of A is differentiable on 𝕋 and AΔ = (aijΔ ). Example 1.8. Let 𝕋 = ℤ, A(t) = (
t+1 2t − 3
t2 + t ), 2t − 3t + 2 2
t ∈ 𝕋.
We will find AΔ (t), t ∈ 𝕋. We have σ(t) = t + 1, a21 (t) = 2t − 3,
a11 (t) = t + 1,
a12 (t) = t 2 + t,
a22 (t) = 2t 2 − 3t + 2,
t ∈ 𝕋.
Then Δ a11 (t) = 1,
Δ a12 (t) = σ(t) + t + 1 = t + 1 + t + 1 = 2t + 2,
Δ a21 (t) = 2,
Δ a22 (t) = 2(σ(t) + t) − 3 = 2(t + 1 + t) − 3 = 4t + 2 − 3 = 4t − 1,
t ∈ 𝕋κ .
Therefore, AΔ (t) = (
1 2
2t + 2 ), 4t − 1
t ∈ 𝕋.
Example 1.9. Let 𝕋 = 2ℕ0 , t3 + t
A(t) = ( 1 1
t+1 t+2 2
−t 2
2
t
t + t) , t3
t ∈ 𝕋.
We will find AΔ (t), t ∈ 𝕋. We have σ(t) = 2t,
a11 (t) = t 3 + t,
a23 (t) = t 2 + t,
a31 (t) = 1,
t+1 , a13 (t) = t, a21 (t) = 1, t+2 a32 (t) = 2, a33 (t) = t 3 , t ∈ 𝕋. a12 (t) =
a22 (t) = −t 2 ,
Then 2
Δ a11 (t) = (σ(t)) + tσ(t) + t 2 + 1 = (2t)2 + 2t 2 + t 2 + 1 = 4t 2 + 3t 2 + 1 = 7t 2 + 1, t + 2 − (t + 1) 1 Δ Δ a12 (t) = = , a13 (t) = 1, (t + 2)(σ(t) + 2) 2(t + 1)(t + 2)
14 � 1 Introduction Δ a21 (t) = 0,
Δ a22 (t) = −(σ(t) + t) = −(2t + t) = −3t,
Δ a23 (t) = σ(t) + t + 1 = 2t + t + 1 = 3t + 1,
Δ a31 (t) = 0,
2
Δ a33 (t) = (σ(t)) + tσ(t) + t 2 = 4t 2 + 2t 2 + t 2 = 7t 2 ,
Δ a32 (t) = 0,
t ∈ 𝕋.
Therefore, 1 2(t+1)(t+2)
7t 2 + 1 A (t) = ( 0 0 Δ
1 3t + 1) , 7t 2
−3t 0
t ∈ 𝕋.
Example 1.10. Let 𝕋 = ℕ20 , A(t) = (
1 t+1 ) ,
t2 + 1 2
3t
t ∈ 𝕋.
We will find AΔ (t), t ∈ 𝕋. We have σ(t) = (√t + 1)2 ,
a11 (t) = t 2 + 1,
a12 (t) =
1 , t+1
a21 (t) = 2,
a22 (t) = 3t,
t ∈ 𝕋.
Then Δ a11 (t) = σ(t) + t = (√t + 1)2 + t = t + 2√t + 1 + t = 2t + 2√t + 1, 1 1 1 Δ a12 (t) = − =− =− , 2 √ (t + 1)(σ(t) + 1) (t + 1)(( t + 1) + 1) (t + 1)(t + 2√t + 2) Δ a21 (t) = 0,
Δ a22 (t) = 3,
t ∈ 𝕋.
Therefore, AΔ (t) = (
2t + 2√t + 1 0
1 − (t+1)(t+2 √t+2)
3
),
t ∈ 𝕋.
Exercise 1.2. Let 𝕋 = 2ℤ, A(t) = (
t3 2t + 4
t2 ), t−1
t ∈ 𝕋.
Find AΔ (t), t ∈ 𝕋. Answer. AΔ (t) = (
3t 2 + 6t + 4 2
2t + 2 ), 1
t ∈ 𝕋.
1.3 Structure of dynamic systems on time scales
Definition 1.6. If A is differentiable, then Aσ = (aijσ ). Theorem 1.3. If A is differentiable on 𝕋κ , then Aσ (t) = A(t) + μ(t)AΔ (t),
t ∈ 𝕋κ .
Proof. We have Aσ (t) = (aijσ (t)) = (aij (t) + μ(t)aijΔ (t)) = (aij (t)) + μ(t)(aijΔ (t)) = A(t) + μ(t)AΔ (t),
t ∈ 𝕋κ .
Below we suppose that B = (bij )1≤i≤m,1≤j≤n , bij : 𝕋 → ℝ, 1 ≤ i ≤ m, 1 ≤ j ≤ n. Theorem 1.4. Let A and B be differentiable on 𝕋κ . Then (A + B)Δ = AΔ + BΔ
on 𝕋κ .
Proof. We have (A + B)(t) = (aij (t) + bij (t)),
(A + B)Δ (t) = (aijΔ (t) + bΔij (t)) = (aijΔ (t)) + (bΔij (t)) = AΔ (t) + BΔ (t),
t ∈ 𝕋κ .
This completes the proof. Theorem 1.5. Let α ∈ ℝ and A be differentiable on 𝕋κ . Then (αA)Δ = αAΔ
on
𝕋κ .
Proof. We have (αA)Δ (t) = ((αaij )Δ (t)) = (αaijΔ (t)) = α(aijΔ (t)) = αAΔ (t),
t ∈ 𝕋κ .
This completes the proof. Theorem 1.6. Let m = n and A, B be differentiable on 𝕋κ . Then (AB)Δ (t) = AΔ (t)B(t) + Aσ (t)BΔ (t) = AΔ (t)Bσ (t) + A(t)BΔ (t), Proof. We have n
(AB)(t) = ( ∑ aik (t)bkj (t)), k=1
t ∈ 𝕋.
t ∈ 𝕋κ .
�
15
16 � 1 Introduction Then Δ
n
Δ
n
n
k=1
k=1
Δ σ (AB) (t) = (( ∑ aik bkj ) (t)) = ( ∑ (aik bkj )Δ (t)) = ( ∑ (aik (t)bkj (t) + aik (t)bΔkj (t))) k=1
n
n
k=1
k=1
Δ σ = ( ∑ aik (t)bkj (t)) + ( ∑ aik (t)bΔkj (t)) = AΔ (t)B(t) + Aσ (t)BΔ (t) n
Δ = ( ∑ (aik (t)bσkj (t) + aik (t)bΔkj (t))) k=1 n
n
k=1
k=1
Δ = ( ∑ aik (t)bσkj (t)) + ( ∑ aik (t)bΔkj (t)) = AΔ (t)Bσ (t) + A(t)BΔ (t),
t ∈ 𝕋κ .
This completes the proof. Example 1.11. Let 𝕋 = 2ℕ0 , A(t) = (
t 2
t−1 ), 3t + 1
B(t) = (
1 t+1
t ), t−1
t ∈ 𝕋.
Then (AB)(t) = (
t 2
t−1 1 )( 3t + 1 t + 1
= C(t) = (cij (t)),
t t2 + t − 1 )=( 2 t−1 3t + 4t + 3
2t 2 − 2t + 1 ) 3t 2 − 1
t ∈ 𝕋.
We have σ(t) = 2t, a11 (t) = t,
b11 (t) = 1,
2
a12 (t) = t − 1, b12 (t) = t,
c11 (t) = t + t − 1,
a21 (t) = 2,
b21 (t) = t + 1, 2
c12 (t) = 2t − 2t + 1,
a22 (t) = 3t + 1,
b22 (t) = t − 1,
c21 (t) = 3t 2 + 4t + 3,
c22 (t) = 3t 2 − 1,
Then Δ a11 (t) = 1,
bΔ11 (t) = 0,
Δ a12 (t) = 1,
bΔ12 (t) = 1,
Δ a21 (t) = 0,
bΔ21 (t) = 1,
Δ a22 (t) = 3,
bΔ22 (t) = 1,
Δ c11 (t) = σ(t) + t + 1 = 2t + t + 1 = 3t + 1,
Δ c12 (t) = 2(σ(t) + t) − 2 = 2(2t + t) − 2 = 6t − 2,
Δ c21 (t) = 3(σ(t) + t) + 4 = 3(2t + t) + 4 = 9t + 4,
Δ c22 (t) = 3(σ(t) + t) = 3(2t + t) = 9t,
Therefore,
t ∈ 𝕋.
t ∈ 𝕋.
1.3 Structure of dynamic systems on time scales
(AB)Δ (t) = (
3t + 1 9t + 4
6t − 2 ), 9t
t ∈ 𝕋.
Also, AΔ (t)B(t) = ( Aσ (t) = ( Aσ (t)BΔ (t) = (
1 0
1 1 )( 3 t+1
σ(t) 2 2t 2
t t+2 )=( t−1 3t + 3
σ(t) − 1 2t )=( 3σ(t) + 1 2 2t − 1 0 )( 6t + 1 1
2t − 1 ), 3t − 3
2t − 1 ), 6t + 1
1 2t − 1 )=( 1 6t + 1
AΔ (t)B(t) + Aσ (t)BΔ (t) = (
t+2 3t + 3
2t − 1 2t − 1 )+( 3t − 3 6t + 1
=(
3t + 1 9t + 4
6t − 2 ), 9t
4t − 1 ), 6t + 3
4t − 1 ) 6t + 3
t ∈ 𝕋.
Consequently, (AB)Δ (t) = AΔ (t)B(t) + Aσ (t)BΔ (t),
t ∈ 𝕋.
Next, Bσ (t) = (
1 σ(t) + 1
σ(t) 1 )=( σ(t) − 1 2t + 1
2t ), 2t − 1
AΔ (t)Bσ (t) = (
1 0
1 1 )( 3 2t + 1
2t 2t + 2 )=( 2t − 1 6t + 3
A(t)BΔ (t) = (
t 2
t−1 0 )( 3t + 1 1
1 t−1 )=( 1 3t + 1
AΔ (t)Bσ (t) + A(t)BΔ (t) = (
2t + 2 6t + 3
4t − 1 t−1 )+( 6t − 3 3t + 1
=(
3t + 1 9t + 4
6t − 2 ), 9t
4t − 1 ), 6t − 3
2t − 1 ), 3t + 3
2t − 1 ) 3t + 3
t ∈ 𝕋.
Therefore, (AB)Δ (t) = AΔ (t)Bσ (t) + A(t)BΔ (t),
t ∈ 𝕋.
Exercise 1.3. Let 𝕋 = 3ℤ, A(t) = (
t2 + 1 2t − 1
t−2 ), t+1
B(t) = (
t t
2t + 1 ), t−1
t ∈ 𝕋.
�
17
18 � 1 Introduction Prove (AB)Δ (t) = AΔ (t)Bσ (t) + A(t)BΔ (t),
t ∈ 𝕋κ .
Theorem 1.7. Let m = n and A−1 exists on 𝕋. Then (Aσ )
−1
σ
= (A−1 )
on
𝕋.
Proof. For any t ∈ 𝕋, we have A(t)A−1 (t) = I,
t ∈ 𝕋.
Then σ
Aσ (t)(A−1 ) (t) = I, whereupon σ
(Aσ ) (t) = (A−1 ) (t), −1
t ∈ 𝕋.
This completes the proof. Example 1.12. Let 𝕋 = lℕ0 , l > 0, A(t) = (
t+1 1
t+2 ), t+3
t ∈ 𝕋.
Then Aσ (t) = ( (Aσ ) (t) = −1
σ(t) + 1 1
σ(t) + 2 t+l+1 )=( σ(t) + 3 1
t+l+3 1 ( −1 (t + l)(t + l + 3) + 1
t+l+2 ), t+l+3
−t − l − 2 ), t+l+1
t ∈ 𝕋.
Next, A−1 (t) =
t+3 1 ( t(t + 3) + 1 −1
−t − 2 ), t+1
t ∈ 𝕋,
whereupon σ
(A−1 ) (t) = =
σ(t) + 3 1 ( −1 σ(t)(σ(t) + 3) + 1 t+l+3 1 ( −1 (t + l)(t + l + 3) + 1
−σ(t) − 2 ) σ(t) + 1 −t − l − 2 ), t+l+1
t ∈ 𝕋.
1.3 Structure of dynamic systems on time scales
�
19
Consequently, σ
(Aσ ) (t) = (A−1 ) (t), −1
t ∈ 𝕋.
Exercise 1.4. Let 𝕋 = 2ℕ0 and 1 t+1 1 ), t+2
t+2 A(t) = ( 2 t +1
t ∈ 𝕋.
Prove that σ
(Aσ ) (t) = (A−1 ) (t), −1
t ∈ 𝕋.
Theorem 1.8. Let m = n, A be differentiable on 𝕋κ and A−1 , (Aσ )−1 exist on 𝕋. Then Δ
(A−1 ) = −A−1 AΔ (Aσ )
−1
= −(Aσ ) AΔ A−1 −1
on 𝕋κ .
Proof. We have I = AA−1
on
𝕋,
whereupon, using Theorem 1.6, we get Δ
σ
Δ
O = I Δ = (AA−1 ) = AΔ (A−1 ) + A(A−1 ) = AΔ (Aσ ) = AΔ A−1 + Aσ (A−1 )
Δ
−1
+ A(A−1 )
Δ
on 𝕋κ .
Hence, Δ
Δ
A(A−1 ) = −AΔ (Aσ ) ,
Aσ (A−1 ) = −AΔ A−1
−1
on
𝕋κ ,
and Δ
(A−1 ) = −A−1 AΔ (Aσ ) , −1
Δ
(A−1 ) = −(Aσ ) AΔ A−1 −1
on
𝕋κ .
This completes the proof. Exercise 1.5. Let m = n, A and B be differentiable on 𝕋κ , B−1 and (Bσ )−1 exist on 𝕋. Prove Δ
(AB−1 ) = (AΔ − AB−1 BΔ )(Bσ )
−1
σ
= (AΔ − (AB−1 ) BΔ )B−1
on 𝕋κ .
Definition 1.7. We say that the matrix A is rd-continuous on 𝕋 if each entry of A is rdcontinuous. The class of such rd-continuous m × n matrix-valued functions on 𝕋 is denoted by Crd = Crd (𝕋) = Crd (𝕋, R
m×n
).
20 � 1 Introduction Below we suppose that A and B are n × n matrix-valued functions. Definition 1.8. We say that the matrix A is regressive with respect to 𝕋 provided I + μ(t)A(t)
is invertible for all t ∈ 𝕋κ .
The class of such regressive and rd-continuous functions is denoted, similar to the scalar case, by n×n
R = R (𝕋) = R (𝕋, ℝ
).
Theorem 1.9. The matrix-valued function A is regressive if and only if the eigenvalues λi (t) of A(t) are regressive for all 1 ≤ i ≤ n. Proof. Let j ∈ {1, . . . , n} be arbitrarily chosen and λj (t) is an eigenvalue corresponding to the eigenvector y(t). Then (1 + μ(t)λj (t))y(t) = Iy(t) + μ(t)λj (t)y(t) = Iy(t) + μ(t)A(t)y(t) = (I + μ(t)A(t))y(t), whereupon it follows the assertion. This completes the proof. Example 1.13. Let 𝕋 = 3ℕ0 and A(t) = (
t+2 t2 + 1
1 ), t
t ∈ 𝕋.
Consider the equation det (
t + 2 − λ(t) t2 + 1
1 ) = 0, t − λ(t)
t ∈ 𝕋.
We have (λ(t) − t)(λ(t) − t − 2) − t 2 − 1 = 0,
t ∈ 𝕋,
or 2
(λ(t)) − 2(t + 1)λ(t) + t 2 + 2t − t 2 − 1 = 0,
t ∈ 𝕋,
or 2
(λ(t)) − 2(t + 1)λ(t) + 2t − 1 = 0,
t ∈ 𝕋.
Therefore, λ1,2 (t) = t + 1 ± √(t + 1)2 − 2t + 1 = t + 1 ± √t 2 + 2t + 1 − 2t + 1 = t + 1 ± √t 2 + 2,
t ∈ 𝕋,
1.3 Structure of dynamic systems on time scales
�
21
and 1 + 3λ1,2 (t) = 0
⇐⇒
3t + 4 = ∓3√t 2 + 2
⇐⇒
1 + 3(t + 1 ± √t 2 + 2) = 0 2
2
9t + 24t + 16 = 9t + 18
⇐⇒ ⇐⇒
12t = 1.
Consequently, 1 + μ(t)λ1,2 (t) ≠ 0,
t ∈ 𝕋,
i. e., the matrix A is regressive. Theorem 1.10. Let A be 2 × 2 matrix-valued function. Then A is regressive if and only if tr A + μ det A is regressive. Here, tr A denotes the trace of the matrix A. Proof. Let A(t) = (
a11 (t) a21 (t)
a12 (t) ), a22 (t)
t ∈ 𝕋.
Then I + μ(t)A(t) = ( =(
1 0
0 μ(t)a11 (t) )+( 1 μ(t)a21 (t)
1 + μ(t)a11 (t) μ(t)a21 (t)
μ(t)a12 (t) ) μ(t)a22 (t)
μ(t)a12 (t) ), 1 + μ(t)a22 (t)
t ∈ 𝕋.
We get det(I + μ(t)A(t)) 2
= (1 + μ(t)a11 (t))(1 + μ(t)a22 (t)) − (μ(t)) a12 (t)a21 (t) 2
2
= 1 + μ(t)a22 (t) + μ(t)a11 (t) + (μ(t)) a11 (t)a22 (t) − (μ(t)) a12 (t)a21 (t)
(1.14)
2
= 1 + μ(t)(tr(A))(t) + (μ(t)) (det A)(t) = 1 + μ(t)((tr A)(t) + μ(t)(det A)(t)). 1.
Let A be regressive. Then det(I + μ(t)A(t)) ≠ 0,
t ∈ 𝕋κ .
Hence and (1.14), we obtain 1 + μ(t)((tr A)(t) + μ(t)(det A)(t)) ≠ 0,
t ∈ 𝕋κ ,
(1.15)
22 � 1 Introduction i. e., tr A + μ det A 2.
is regressive. Let tr A + μ det A be regressive. Then (1.15) holds. Hence and (1.14), we conclude that A is regressive. This completes the proof.
Definition 1.9. Assume that A and B are regressive on 𝕋. Then we define A ⊕ B, ⊖A, A ⊖ B by (A ⊕ B)(t) = A(t) + B(t) + μ(t)A(t)B(t), (A ⊖ B)(t) = (A ⊕ (⊖B))(t),
(⊖A)(t) = −(I + μ(t)A(t)) A(t), −1
t ∈ 𝕋,
respectively. Example 1.14. Let 𝕋 = 2ℕ and A(t) = (
1 2
t ), 3t
B(t) = (
t 2t
1 ), 3
t ∈ 𝕋.
Here, σ(t) = t + 2,
μ(t) = 2,
t ∈ 𝕋.
Then (A ⊕ B)(t) = A(t) + B(t) + μ(t)A(t)B(t) =(
1 2
t t )+( 3t 2t
=(
1+t 2(1 + t)
=(
1 + 3t + 4t 2 2 + 6t + 12t 2
1 1 ) + 2( 3 2
t t )( 3t 2t
1+t t + 2t 2 ) + 2( 3(1 + t) 2t + 6t 2
1 ) 3
1 + 3t ) 2 + 9t
3 + 7t ), 7 + 21t
(B ⊕ A)(t) = A(t) + B(t) + μ(t)B(t)A(t) =(
1 2
t t )+( 3t 2t
=(
1+t 2(1 + t)
1 t ) + 2( 3 2t
1 1 )( 3 2
1+t t+2 ) + 2( 3(1 + t) 2t + 6
t ) 3t
t 2 + 3t ) 2t 2 + 9t
1.3 Structure of dynamic systems on time scales
=( I + μ(t)B(t) = (
1+t 2 + 2t 1 0
0 t ) + 2( 1 2t
1 1 + 2t )=( 3 4t
det(I + μ(t)B(t)) = 7 + 14t − 8t = 7 + 6t, (I + μ(t)B(t))
−1
=
7 1 ( 7 + 6t −4t
7
−1
3t 1 ( 7 + 6t 2t
2 − 7+6t 1+2t 7+6t
7 1 ( 7 + 6t −4t
− 3t 1 ) = ( 7+6t 2t 2t + 3 − 7+6t
1 − 7+6t
2t+3 − 7+6t
), −2 t )( 1 + 2t 2t
=(
1 2
=( =( =
− 3t t ) + ( 7+6t 2t 3t − 7+6t
7+3t 7+6t 14+10t 7+6t 7+3t 7+6t 14+10t 7+6t
1 − 7+6t
2t+3 ) + 2 (
6t 2 +7t−1 7+6t ) 18t 2 +19t−3 7+6t 6t 2 +7t−1 7+6t ) 18t 2 +19t−3 7+6t
7−3t−4t 2 ( 7+6t −12t 2 −2t+14 7+6t
− 7+6t
+ 2( +(
2t 2 +t−3 7+6t ), 6t 2 +t−7 7+6t
−2t 2 −3t 7+6t −6t 2 −6t 7+6t
−4t 2 −6t 7+6t −12t 2 −12t 7+6t
1 2
1 ) 3
),
(A ⊖ B)(t) = (A ⊕ (⊖B))(t) = A(t) + (⊖B)(t) + μ(t)A(t)(⊖B)(t)
− 3t t ) ( 7+6t 2t 3t − 7+6t
−2t 2 −3t−1 7+6t ) −6t 2 −9t−2 7+6t
1 − 7+6t
2t+3 − 7+6t
)
−4t 2 −6t−2 7+6t ) −12t 2 −18t−4 7+6t
t ∈ 𝕋.
Exercise 1.6. Let 𝕋 = 2ℕ0 , A(t) = (
1 2
1 ), −1
B(t) = (
3 1
4 ), 0
t ∈ 𝕋.
Find (⊖A)(t),
(A ⊕ B)(t),
t ∈ 𝕋.
Answer. 3t−1
(⊖A)(t) = ( 1−3t −2
2
1−3t 2
−1 1−3t 2 ) , 1+3t 1−3t 2
(A ⊕ B)(t) = (
4 + 4t 3 + 5t
5 + 4t ), −1 + 8t
Theorem 1.11. (R , ⊕) is a group. Proof. Let A, B, C ∈ (R , ⊕). Then (I + μA)−1 ,
(I + μB)−1 ,
(I + μC)−1
23
1 + 7t + 2t 2 ), 3 + 21t + 4t 2
2 ), 7
−2 ) = ( 7+6t 4t 1 + 2t − 7+6t
(⊖B)(t) = −(I + μ(t)B(t)) B(t) = − =−
2t 2 + 6t 5 + 3t )=( 4t 2 + 18t 14 + 6t
1+t 2t + 4 )+( 3 + 3t 4t + 12
�
t ∈ 𝕋.
24 � 1 Introduction exist and I + μ(A ⊕ B) = I + μ(A + B + μAB) = I + μA + μB + μ2 AB = I + μA + (I + μA)μB = (I + μA)(I + μB). Therefore, (I + μ(A ⊕ B))
−1
exists. Also, O ⊕ A = A ⊕ O = A. Next, A ⊕ (−(I + μA)−1 A) = A − (I + μA)−1 A − μ(I + μA)−1 A2 = A − (I + μA)−1 (I + μA)A = A − A = O, i. e., the additive inverse of A under the addition ⊕ is −(I + μA)−1 A. Note that I + μ(−(I + μA)−1 A) = (I + μA)−1 (I + μA) − (I + μA)−1 μA = (I + μA)−1 and then −(I + μA)−1 A ∈ R . Also, (A ⊕ B) ⊕ C = (A ⊕ B) + C + μ(A ⊕ B)C = A + B + μAB + C + μ(A + B + μAB)C = A + B + μAB + C + μAC + μBC + μ2 ABC,
A ⊕ (B ⊕ C) = A + (B ⊕ C) + μA(B ⊕ C) = A + B + C + μBC + μA(B + C + μBC) = A + B + C + μBC + μAB + μAC + μ2 ABC.
Consequently, (A ⊕ B) ⊕ C = A ⊕ (B ⊕ C), i. e., in (R , ⊕) the associative law holds. This completes the proof. With A, we will denote the conjugate matrix of A. With AT , we will denote the transpose matrix of A and A∗ = (A)T is the conjugate transpose of A. Theorem 1.12. Let A and B be regressive. Then: 1. A∗ is regressive, 2. ⊖A∗ = (⊖A)∗ , 3. (A ⊕ B)∗ = B∗ ⊕ A∗ .
1.3 Structure of dynamic systems on time scales
�
25
Proof. Since A is regressive, there exists (I + μA)−1 . 1. We have (I + μA)(I + μA)−1 = I
⇒
(I + μA)(I + μA)−1 = I
⇒
((I + μA)−1 ) (I + μA∗ ) = I. ∗
Therefore, (I + μA∗ )
and
−1
(I + μAT )
−1
exist and (I + μA∗ )
−1
2.
= ((I + μA)−1 ) , ∗
(I + μAT )
−1
Consequently, A∗ is regressive. We have (⊖A)∗ = −((I + μA)−1 A) = −A∗ ((I + μA)−1 ) = −A∗ (I + μA∗ ) ∗
3.
T
= ((I + μA)−1 ) .
∗
−1
= ⊖A∗ .
We have (A + B)∗ = (A + B + μAB)∗ = A∗ + B∗ + μB∗ A∗ = B∗ + A∗ + μB∗ A∗ = B∗ ⊕ A∗ . This completes the proof.
Definition 1.10 (Matrix exponential functional). Let A ∈ R and t0 ∈ 𝕋. The unique solution of the IVP Y Δ = A(t)Y ,
Y (t0 ) = I,
is called the matrix exponential function. It is denoted by eA (⋅, t0 ). Theorem 1.13. Let A, B ∈ R and t, s, r ∈ 𝕋. Then: 1. e0 (t, s) = I, eA (t, t) = I, 2. eA (σ(t), s) = (I + μ(t)A(t))eA (t, s), ∗ 3. eA (s, t) = eA−1 (t, s) = e⊖A ∗ (t, s), 4. eA (t, s)eA (s, r) = eA (t, r), 5. eA (t, s)eB (t, s) = eA⊕B (t, s) if eA (t, s) and B commute. Proof. 1.
Consider the IVP Y Δ = O,
Then its unique solution is
Y (s) = I.
26 � 1 Introduction eO (t, s) = I. Now we consider the IVP Y Δ = A(t)Y ,
Y (s) = I.
By the definition of eA (⋅, s), we obtain eA (s, s) = I. 2.
By Theorem 1.3 and the definition of eA (⋅, s), we have eA (σ(t), s) = eA (t, s) + μ(t)eAΔ (t, s) = eA (t, s) + μ(t)A(t)eA (t, s) = (I + μ(t)A(t))eA (t, s).
3.
Let Z(t) = (eA−1 (t, s)) . ∗
Then Δ ∗
Z Δ (t) = ((eA−1 (t, s)) ) = −(eA−1 (t, s)eAΔ (t, s)(eA (σ(t), s)) )
−1 ∗
= −(eA−1 (t, s)A(t)eA (t, s)(eA (σ(t), s)) )
−1 ∗
= −(A(t)(eA (σ(t), s)) ) = −(A(t)((I + μ(t)A(t))eA (t, s)) ) −1 ∗
−1 ∗
= −(A(t)eA−1 (t, s)(I + μ(t)A(t)) )
−1 ∗
= (eA−1 (t, s)(−(I + μ(t)A(t)) A(t))) = (eA−1 (t, s)(⊖A)(t)) −1
∗
∗
= (⊖A)∗ (t)(eA (t, s)) = (⊖A∗ )(t)Z(t). ∗
Therefore, Z(t) = e⊖A∗ (t, s), i. e., e⊖A∗ (t, s) = (eA−1 (t, s)) , ∗
or ∗ eA−1 (t, s) = e⊖A ∗ (t, s).
4.
Let Z(t) = eA (t, s)eA (s, r).
1.3 Structure of dynamic systems on time scales
�
27
Then Z Δ (t) = eAΔ (t, s)eA (s, r) = A(t)eA (t, s)eA (s, r) = A(t)Z(t), Z(r) = eA (r, s)eA (s, r) = eA−1 (s, r)eA (s, r) = I = eA (r, r).
Therefore, eA (t, s)eA (s, r) = eA (t, r). 5.
Let Z(t) = eA (t, s)eB (t, s). Then Z Δ (t) = eAΔ (t, s)eBσ (t, s) + eA (t, s)eBΔ (t, s)
= A(t)eA (t, s)(I + μ(t)B(t))eB (t, s) + B(t)eA (t, s)eB (t, s) = A(t)(I + μ(t)B(t))eA (t, s)eB (t, s) + B(t)eA (t, s)eB (t, s)
= (A(t) + B(t) + μ(t)A(t)B(t))eA (t, s)eB (t, s) = (A ⊕ B)(t)Z(t),
Z(s) = eA (s, s)eB (s, s) = I. Consequently,
eA⊕B (t, s) = eA (t, s)eB (t, s). This completes the proof. Theorem 1.14. Let A ∈ R and t, s ∈ 𝕋. Then Δ
(eA (s, t))t = −eA (s, σ(t))A(t). Proof. We have Δ
Δ
Δ
Δ ∗
∗ (eA (s, t))t = (eA−1 (t, s))t = (e⊖A ∗ (t, s)) = ((e⊖A∗ (t, s)) ) t t
= ((⊖A∗ )(t)e⊖A∗ (t, s)) = (−(I + μ(t)A∗ (t)) A∗ (t)e⊖A∗ (t, s)) ∗
−1
= (−A∗ (t)(I + μ(t)A∗ (t)) e⊖A∗ (t, s)) −1
∗
∗
∗ = (−A∗ (t)e⊖A∗ (σ(t), s)) = −e⊖A ∗ (σ(t), s)A(t) = −eA (s, σ(t))A(t). ∗
This completes the proof. Theorem 1.15 (Variation of constants). Let A ∈ R and f : 𝕋 → ℝn be rd-continuous. Let also, t0 ∈ 𝕋 and y0 ∈ ℝn . Then the IVP
28 � 1 Introduction yΔ (t) = A(t)y + f (t),
y(t0 ) = y0 ,
(1.16)
has unique solution y : 𝕋 → ℝn and this solution is given by t
y(t) = eA (t, t0 )y0 + ∫ eA (t, σ(τ))f (τ)Δτ.
(1.17)
t0
Proof. 1.
Let y be given by (1.17). Then t
y(t) = eA (t, t0 )(y0 + ∫ eA (t0 , σ(τ))f (τ)Δτ), t0
t
Δ
y (t) = A(t)eA (t, t0 )(y0 + ∫ eA (t0 , σ(τ))f (τ)Δτ) + eA (σ(t), t0 )eA (t0 , σ(t))f (t) = A(t)y(t) + f (t),
t0
y(t0 ) = y0 , 2.
i. e., y satisfies the IVP (1.16). Let ỹ be another solution of the IVP (1.16). Let ̃ v(t) = eA (t0 , t)y(t). Then ̃ = eA (t, t0 )v(t) y(t) and ̃ + f (t) = ỹΔ (t) = A(t)eA (t, t0 )v(t) + eA (σ(t), t0 )vΔ (t), A(t)eA (t, t0 )v(t) + f (t) = A(t)y(t) whereupon eA (σ(t), t0 )vΔ (t) = f (t) or vΔ (t) = eA (t0 , σ(t))f (t). Hence, t
t
v(t) = v(t0 ) + ∫ eA (t0 , σ(τ))f (τ)Δτ = y0 + ∫ eA (t0 , σ(τ))f (τ)Δτ t0
t0
1.3 Structure of dynamic systems on time scales
�
29
and t
̃ = eA (t0 , t)v(t) = eA (t0 , t)(y0 + ∫ eA (t0 , σ(τ))f (τ)Δτ), y(t) t0
i. e., ỹ = y, where y is given by (1.17). This completes the proof. Theorem 1.16. Let A ∈ R and C be n × n differentiable matrix. If C is a solution of the dynamic equation, C Δ = A(t)C − C σ A(t), then C(t)eA (t, s) = eA (t, s)C(s).
(1.18)
Proof. We fix s ∈ 𝕋. Let F(t) = C(t)eA (t, s) − eA (t, s)C(s). Then F Δ (t) = C Δ (t)eA (t, s) + C(σ(t))eAΔ (t, s) − eAΔ (t, s)C(s)
= (A(t)C(t) − C(σ(t))A(t))eA (t, s) + C(σ(t))A(t)eA (t, s) − A(t)eA (t, s)C(s) = A(t)C(t)eA (t, s) − A(t)eA (t, s)C(s)
= A(t)(C(t)eA (t, s) − eA (t, s)C(s)) = A(t)F(t),
F(s) = C(s)eA (s, s) − eA (s, s)C(s) = C(s) − C(s) = 0, i. e., F Δ (t) = A(t)F(t),
F(s) = 0.
Hence and Theorem 1.15, we get F(t) = 0 and (1.18) holds. This completes the proof. Corollary 1.1. Let A ∈ R and C be an n × n constant matrix that commutes with A. Then C commutes with eA . Proof. We have Cσ = C
and C Δ = A(t)C − CA(t).
Hence and Theorem 1.16, it follows that C commutes with eA . This completes the proof.
30 � 1 Introduction Corollary 1.2. Let A ∈ R be a constant matrix. Then A commutes with eA . Proof. We apply Theorem 1.16 for C = A. This completes the proof. Theorem 1.17 (Variation of constants). Let A ∈ R and f : 𝕋 → ℝn be rd-continuous. Let also, t0 ∈ 𝕋 and x0 ∈ ℝn . Then the initial value problem x Δ = −A∗ (t)x σ + f (t),
x(t0 ) = x0 ,
(1.19)
has a unique solution x : 𝕋 → ℝn and this solution is given by t
x(t) = e⊖A∗ (t, t0 )x0 + ∫ e⊖A∗ (t, τ)f (τ)Δτ. t0
Proof. The equation (1.19) can be rewritten in the form x Δ (t) = −A∗ (t)(x(t) + μ(t)x Δ (t)) + f (t) = −A∗ (t)x(t) − μ(t)A∗ (t)x Δ (t) + f (t), whereupon (I + μ(t)A∗ (t))x Δ (t) = −A∗ (t)x(t) + f (t), and x Δ (t) = −A∗ (t)(I + μ(t)A∗ (t)) x(t) + (I + μ(t)A∗ (t)) f (t) −1
−1
= (⊖A∗ )(t)x(t) + (I + μ(t)A∗ (t)) f (t). −1
Hence and Theorem 1.15, we obtain t
x(t) = e⊖A∗ (t, t0 )x0 + ∫ e⊖A∗ (t, σ(τ))(I + μ(τ)A∗ (τ)) f (τ)Δτ −1
t0
t
= e⊖A∗ (t, t0 )x0 + ∫ eA∗ (σ(τ), t)((I + μ(τ)A(τ)) ) f (τ)Δτ −1 ∗
t0
t
= e⊖A∗ (t, t0 )x0 + ∫((I + μ(τ)A(τ)) eA (σ(τ), t)) f (τ)Δτ −1
∗
t0
t
t
= e⊖A∗ (t, t0 ) + ∫(eA (τ, t)) f (τ)Δτ = e⊖A∗ (t, t0 ) + ∫ e⊖A∗ (t, τ)f (τ)Δτ. ∗
t0
This completes the proof.
t0
1.3 Structure of dynamic systems on time scales
�
31
Theorem 1.18 (The Liouville formula). Let A ∈ R be a 2 × 2 matrix-valued function and assume that X is a solution of the equation X Δ = A(t)X. Then X satisfies Liouville’s formula det X(t) = etr A+μ det A (t, t0 ) det X(t0 ),
t ∈ 𝕋.
Proof. By Theorem 1.10, it follows that tr A + μ det A is regressive. Let A(t) = (
a11 (t) a21 (t)
a12 (t) ), a22 (t)
X(t) = (
x11 (t) x21 (t)
x12 (t) ). x22 (t)
Then x Δ (t) ( 11 Δ x21 (t)
Δ x12 (t)
a (t) ) = ( 11 Δ a21 (t) x22 (t) =(
a12 (t) x11 (t) )( a22 (t) x21 (t)
a11 (t)x11 (t) + a12 (t)x12 (t) a21 (t)x11 (t) + a22 (t)x21 (t)
x12 (t) ) x22 (t) a11 (t)x12 (t) + a12 (t)x22 (t) ), a21 (t)x12 (t) + a22 (t)x22 (t)
whereupon Δ x11 (t) = a11 (t)x11 (t) + a12 (t)x21 (t), { { { { Δ { {x12 (t) = a11 (t)x12 (t) + a12 (t)x22 (t), { {x Δ (t) = a21 (t)x11 (t) + a22 (t)x21 (t), { 21 { { { Δ x { 22 (t) = a21 (t)x12 (t) + a22 (t)x22 (t).
Then det X(t) = x11 (t)x22 (t) − x12 (t)x21 (t),
Δ σ Δ Δ σ Δ (det X)Δ (t) = x11 (t)x22 (t) + x11 (t)x22 (t) − x12 (t)x21 (t) − x12 (t)x21 (t) Δ x σ (t) x12 (t) ) + det ( 11 Δ x22 (t) x21 (t)
= det (
Δ x11 (t) x21 (t)
= det (
a11 (t)x11 (t) + a12 (t)x21 (t) x21 (t)
+ det (
Δ x11 (t) + μ(t)x11 (t) Δ x21 (t)
σ x12 (t)
Δ x22 (t)
)
a11 (t)x12 (t) + a12 (t)x22 (t) ) x22 (t)
Δ x12 (t) + μ(t)x12 (t) Δ x22 (t)
)
32 � 1 Introduction = (a11 (t)x11 (t)x22 (t) + a12 (t)x21 (t)x22 (t) − a11 (t)x12 (t)x21 (t) − a12 (t)x21 (t)x22 (t)) + det (
x11 (t)
x12 (t)
Δ x21 (t)
x22 (t)
) + μ(t) det (
x11 (t) = a11 (t) det X(t) + det ( Δ x21 (t) = a11 (t) det X(t) + det (
Δ x21 (t)
Δ x12 (t)
Δ x22 (t)
)
Δ x11 (t) ) + μ(t) det ( Δ Δ x22 (t) x21 (t)
x12 (t)
x11 (t) a21 (t)x11 (t) + a22 (t)x21 (t)
+ μ(t) det (
Δ x11 (t)
Δ x12 (t)
Δ x22 (t)
)
x12 (t) ) a21 (t)x12 (t) + a22 (t)x22 (t)
a11 (t)x11 (t) + a12 (t)x21 (t) Δ x21 (t)
a11 (t)x12 (t) + a12 (t)x22 (t) ) Δ x22 (t)
= a11 (t) det X(t) + (a21 (t)x11 (t)x12 (t) + a22 (t)x11 (t)x22 (t)
x11 (t) − a21 (t)x11 (t)x12 (t) − a22 (t)x21 (t)x12 (t)) + μ(t) (a11 (t) det ( Δ x21 (t) + a12 (t) det (
x21 (t) Δ x21 (t)
x22 (t) Δ x22 (t)
+ a12 (t) det (
x11 (t) a21 (t)x11 (t) + a22 (t)x21 (t)
)
x21 (t) a21 (t)x11 (t) + a22 (t)x21 (t)
x12 (t) ) a21 (t)x12 (t) + a22 (t)x22 (t)
x22 (t) )) a21 (t)x12 (t) + a22 (t)x22 (t)
= tr A(t) det X(t) + μ(t)(a11 (t)(a21 (t)x12 (t)x11 (t) + a22 (t)x22 (t)x11 (t)
− a21 (t)x11 (t)x12 (t) − a22 (t)x12 (t)x21 (t)) + a12 (t)(a21 (t)x21 (t)x12 (t) + a22 (t)x22 (t)x21 (t) − a21 (t)x11 (t)x22 (t) − a22 (t)x21 (t)x22 (t)))
= tr A(t) det X(t) + μ(t)(a11 (t)a22 (t) det X(t) − a12 (t)a21 (t) det X(t)) = tr A(t) det X(t) + μ(t) det A(t) det X(t) = (tr A(t) + μ(t) det A(t)) det X(t), i. e., (det X)Δ (t) = (tr A(t) + μ(t) det A(t)) det X(t). Therefore, det X(t) = etr A+μ det A (t, t0 ) det X(t0 ). This completes the proof.
Δ x22 (t)
))
= a11 (t) det X(t) + a22 (t) det X(t) + μ(t) (a11 (t) det (
x12 (t)
1.3 Structure of dynamic systems on time scales
�
33
Example 1.15. Let 𝕋 = lℤ, l > 0. Consider the IVP 3t X Δ (t) = ( 3
4t + 1 ) X(t), 2+t
X(0) = (
1 1
0 ), 1
t ∈ 𝕋,
t > 0.
Here, A(t) = (
3t 3
4t + 1 ), 2+t
μ(t) = l,
t ∈ 𝕋.
Then det A(t) = 3t(2 + t) − 3(4t + 1) = 6t + 3t 2 − 12t − 3 = 3t 2 − 6t − 3, tr A(t) = 3t + 2 + t = 2 + 4t,
tr A(t) + μ(t) det A(t) = 2 + 4t + l(3t 2 − 6t − 3) = 3lt 2 + (4 − 6l)t + 2 − 3l,
t ∈ 𝕋.
Let f (t) = 3lt 2 + (4 − 6l)t + 2 − 3l,
t ∈ 𝕋.
Then, using the Liouville formula, we get t−l
det X(t) = ef (t, 0) det X(0) = ef (t, 0) = ∏(1 + μ(s)f (s)) s=0
t−l
= ∏(1 + l(3ls2 + (4 − 6l)s + 2 − 3l)) s=0 t−l
= ∏(3l2 s2 + (4l − 6l2 )s − 3l2 + 2l + 1), s=0
t ∈ 𝕋,
t > 0.
Exercise 1.7. Let A ∈ R be a n×n matrix-valued function and assume that X is a solution of the equation X Δ = A(t)X. Prove that X satisfies the Liouville formula det X(t) = etr A+μ det A (t, t0 ) det X(t0 ), Hint. Use Theorem 1.18 and induction.
t ∈ 𝕋.
34 � 1 Introduction
1.4 Constant coefficients In this section, we suppose that A is a n × n constant matrix and A ∈ R . Let t0 ∈ 𝕋. Consider the equation x Δ = Ax.
(1.20)
Theorem 1.19. Let λ, ξ be an eigenpair of A. Then x(t) = eλ (t, t0 )ξ,
t ∈ 𝕋κ ,
is a solution of the equation (1.20). Proof. We have Aξ = λξ. Then x Δ (t) = eλΔ (t, t0 )ξ = λeλ (t, t0 )ξ = eλ (t, t0 )(λξ) = eλ (t, t0 )Aξ = A(eλ (t, t0 )ξ) = Ax(t),
t ∈ 𝕋κ .
This completes the proof. Example 1.16. Consider the system {
x1Δ (t) = −3x1 − 2x2 , x2Δ (t) = 3x1 + 4x2 .
Here, A=(
−3 3
−2 ). 4
Then 0 = det (
−3 − λ 3
−2 ) = (λ − 4)(λ + 3) + 6 = λ2 − λ − 6 4−λ
and λ1 = 3,
λ2 = −2.
The considered system is regressive for any time scale for which −2 ∈ R . Note that
1.4 Constant coefficients
ξ1 = (
1 ), −3
ξ2 = (
� 35
−2 ) 1
are eigenvalues corresponding to λ1 and λ2 , respectively. Therefore, x(t) = c1 e3 (t, t0 )ξ1 + c2 e−2 (t, t0 )ξ2 = c1 e3 (t, t0 ) (
1 −2 ) + c2 e−2 (t, t0 ) ( ) , −3 1
where c1 and c2 are real constants, is a solution of the considered system for any time scale for which −2 ∈ R . Example 1.17. Consider the system Δ
x (t) = x1 (t) − x2 (t), { { { 1Δ x2 (t) = −x1 (t) + 2x2 (t) − x3 (t), { { { Δ {x3 (t) = −x2 (t) + x3 (t). Here, 1 A = (−1 0
−1 2 −1
0 −1) . 1
Then 1−λ 0 = det(A − λI) = det ( −1 0
−1 2−λ −1
0 −1 ) = −(λ − 1)2 (λ − 2) + (λ − 1) + (λ − 1) 1−λ
= (λ − 1)(−(λ − 1)(λ − 2) + 2) = (λ − 1)(−λ2 + 3λ) = −λ(λ − 1)(λ − 3). Therefore, λ1 = 0,
λ2 = 1,
λ3 = 3.
Note that the matrix A is regressive for any time scale and 1 ξ1 = (1) , 1
1 ξ2 = ( 0 ) , −1
1 ξ3 = (−2) 1
are eigenvalues corresponding to λ1 , λ2 and λ3 , respectively. Consequently, x(t) = c1 ξ1 + c2 e1 (t, t0 )ξ2 + c3 e3 (t, t0 )ξ3
36 � 1 Introduction 1 1 1 = c1 (1) + c2 e1 (t, t0 ) ( 0 ) + c3 e3 (t, t0 ) (−2) , 1 −1 1 where c1 , c2 and c3 are constants, is a general solution of the considered system. Example 1.18. Consider the system Δ
x1 (t) = −x1 (t) + x2 (t) + x3 (t), { { { { { {x2Δ (t) = x2 (t) − x3 (t) + x4 (t), { { {x3Δ (t) = 2x3 (t) − 2x4 (t), { { { Δ {x4 (t) = 3x4 (t). Here, −1 0 A=( 0 0
1 1 0 0
1 −1 2 0
0 1 ). −2 3
Then −1 − λ 0 0 = det(A − λI) = det ( 0 0 = (λ + 1)(λ − 1)(λ − 2)(λ − 3)
1 1−λ 0 0
1 −1 2−λ 0
0 1 ) −2 3−λ
and λ1 = −1,
λ2 = 1,
λ3 = 2,
λ4 = 3.
The matrix A is regressive for any time scale for which −1 ∈ R . Note that 1 0 ξ1 = ( ) , 0 0
0 1 ξ2 = ( ) , 0 0
0 0 ξ3 = ( ) 1 0
and
0 0 ξ4 = ( ) 0 1
are eigenvectors corresponding to λ1 , λ2 , λ3 and λ4 , respectively. Consequently, x(t) = c1 e−1 (t, t0 )ξ1 + c2 e1 (t, t0 )ξ2 + c3 e2 (t, t0 )ξ3 + c4 e5 (t, t0 )ξ4
1.4 Constant coefficients
�
37
1 0 0 1 = c1 e−1 (t, t0 ) ( ) + c2 e1 (t, t0 ) ( ) 0 0 0 0 0 0 0 0 + c3 e2 (t, t0 ) ( ) + c4 e3 (t, t0 ) ( ) , 1 0 0 1 where c1 , c2 , c3 and c4 are real constants, is a general solution of the considered system. Exercise 1.8. Find a general solution of the system {
x1Δ (t) = x2 (t), x2Δ (t) = x1 (t).
Answer. 1 1 x(t) = c1 e1 (t, t0 ) ( ) + c2 e−1 (t, t0 ) ( ) , 1 −1 where c1 , c2 ∈ ℝ, for any time scale for which −1 ∈ R . Theorem 1.20. Assume that A ∈ R . If x(t) = u(t) + iv(t),
t ∈ 𝕋κ ,
is a complex vector-valued solution of the system (1.20), where u and v are real vectorvalued functions on 𝕋. Then u and v are real vector-valued solutions of the system (1.20) on 𝕋. Proof. We have x Δ (t) = A(t)x(t) = A(t)(u(t) + iv(t)) = A(t)u(t) + iA(t)v(t) = uΔ (t) + ivΔ (t),
t ∈ 𝕋κ .
Equating real and imaginary parts, we get uΔ (t) = A(t)u(t),
vΔ (t) = A(t)v(t),
This completes the proof. Example 1.19. Consider the system {
x1Δ (t) = x1 (t) + x2 (t),
x2Δ (t) = −x1 (t) + x2 (t).
t ∈ 𝕋κ .
38 � 1 Introduction Here, A=(
1 −1
1 ). 1
Then 0 = det(A − λI) = det (
1−λ −1
1 ) = (λ − 1)2 + 1 1−λ
= λ2 − 2λ + 1 + 1 = λ2 − 2λ + 2, whereupon λ1,2 = 1 ± i. Note that 1 ξ=( ) i is an eigenvector corresponding to the eigenvalue λ = 1 + i. We have 1 x(t) = e1+i (t, t0 ) ( ) i = e1 (t, t0 )(cos = e1 (t, t0 ) (( = e1 (t, t0 ) (
1 1+μ
(t, t0 ) + i sin
cos
1 1+μ
i cos
cos
1 1+μ
− sin
(t, t0 )
1 1+μ
(t, t0 )
1 1+μ
1 (t, t0 )) ( ) i
)+(
i sin
1 1+μ
− sin
1 1+μ
(t, t0 )
(t, t0 )
))
(t, t0 )
1 1+μ
sin 1 (t, t0 ) ) + ie1 (t, t0 ) ( 1+μ ). (t, t0 ) cos 1 (t, t0 ) 1+μ
Consequently, cos 1 (t, t0 ) 1+μ e1 (t, t0 ) ( ) − sin 1 (t, t0 ) 1+μ
and
sin 1 (t, t0 ) e1 (t, t0 ) ( 1+μ ) cos 1 (t, t0 ) 1+μ
are solutions of the considered system. Therefore, cos 1 (t, t0 ) sin 1 (t, t0 ) 1+μ x(t) = c1 e1 (t, t0 ) ( ) + c2 e1 (t, t0 ) ( 1+μ ), − sin 1 (t, t0 ) cos 1 (t, t0 ) 1+μ
1+μ
where c1 , c2 ∈ ℝ, is a general solution of the considered system.
1.4 Constant coefficients
� 39
Example 1.20. Consider the system x Δ (t) = x2 (t), { { { 1Δ x (t) = x3 (t), { { 2 { Δ {x3 (t) = 2x1 (t) − 4x2 (t) + 3x3 (t). Here, 0 A = (0 2
1 0 −4
0 1) . 3
Then −λ 0 = det(A − λI) = det ( 0 2
1 −λ −4
0 1 ) 3−λ
= −λ2 (λ − 3) + 2 − 4λ = −(λ3 − 3λ2 + 4λ − 2) = −(λ − 1)(λ2 − 2λ + 2), whereupon λ1 = 1,
λ2,3 = 1 ± i.
Note that 1 ξ1 = (1) 1
1 and ξ2 = (1 + i) 2i
are eigenvectors corresponding to the eigenvalues λ1 = 1 and λ2 = 1 + i, respectively. Note that 1 e1+i (t, t0 ) (1 + i) 2i = e1 (t, t0 )(cos
1 1+μ
(t, t0 ) + i sin cos
1 1+μ
1 1+μ
1 (t, t0 )) (1 + i) 2i
(t, t0 )
sin
1 1+μ
(t, t0 )
1 (t, t0 )) + i ((1 + i) sin 1 (t, t0 ))) = e1 (t, t0 ) (((1 + i) cos 1+μ 1+μ
2i cos
1 1+μ
(t, t0 )
2i sin
1 1+μ
(t, t0 )
40 � 1 Introduction cos
1 1+μ
(t, t0 )
i sin
1 1+μ
(t, t0 )
1 (t, t0 )) + ((−1 + i) sin 1 (t, t0 ))) = e1 (t, t0 ) (((1 + i) cos 1+μ 1+μ
2i cos
1 1+μ
cos
(t, t0 ) 1 1+μ
−2 sin
1 1+μ
(t, t0 )
(t, t0 )
sin
1 1+μ
(t, t0 )
1 (t, t0 ) − sin 1 (t, t0 )) + i (cos 1 (t, t0 ) + sin 1 (t, t0 ))) . = e1 (t, t0 ) ((cos 1+μ 1+μ 1+μ 1+μ
−2 sin
1 1+μ
(t, t0 )
2 cos
1 1+μ
(t, t0 )
Consequently, cos 1 (t, t0 ) 1 1+μ 1 (t, t0 ) − sin 1 (t, t0 )) x(t) = e1 (t, t0 ) (c1 (1) + c2 (cos 1+μ 1+μ 1 −2 sin 1 (t, t0 ) 1+μ
sin
1 1+μ
(t, t0 )
1 (t, t0 ) + sin 1 (t, t0 ))) , + c3 (cos 1+μ 1+μ
2 cos
1 1+μ
(t, t0 )
where c1 , c2 , c3 ∈ ℝ, is a general solution of the considered system. Exercise 1.9. Find a general solution of the system x Δ (t) = x1 (t) − 2x2 (t) + x3 (t), { { { 1Δ x (t) = −x1 (t) + x3 (t), { { 2 { Δ {x3 (t) = x1 (t) − 2x2 (t) + x3 (t). Theorem 1.21 (The Putzer algorithm). Let A ∈ R be a constant n × n matrix and t0 ∈ 𝕋. If λ1 , λ2 , . . ., λn are the eigenvalues of A, then n−1
eA (t, t0 ) = ∑ rk+1 (t)Pk , k=0
where r1 (t) . r(t) = ( .. ) rn (t) is the solution of the IVP
1.4 Constant coefficients
λ1 1 (0 Δ r =( .. . 0 (
P0 = I,
0 λ2 1 .. . 0
0 0 λ3 .. . 0
... ... ... .. . ...
0 0 0) ) r, .. . λn )
Pk+1 = (A − λk+1 I)Pk ,
�
1 0 ( ) r(t0 ) = (0) , .. . 0 ( )
41
(1.21)
0 ≤ k ≤ n − 1.
Proof. Since A is regressive, we have that all eigenvalues of A are regressive. Therefore, the IVP (1.21) has unique solution. We set n−1
X(t) = ∑ rk+1 (t)Pk . k=0
We have P1 = (A − λ1 I)P0 = (A − λ1 I),
P2 = (A − λ2 I)P1 = (A − λ2 I)(A − λ1 I), .. . Pn = (A − λn I)Pn−1 = (A − λn I) . . . (A − λ1 I) = 0. Therefore, n−1
Δ X Δ (t) = ∑ rk+1 (t)Pk , k=0 n−1
n−1
Δ X Δ (t) − AX(t) = ∑ rk+1 (t)Pk − A ∑ rk+1 (t)Pk k=0
k=0
n−1
n−1
Δ = r1Δ (t)P0 + ∑ rk+1 (t)Pk − A ∑ rk+1 (t)Pk k=1
n−1
k=0
n−1
= λ1 r1 (t)P0 + ∑ (rk (t) + λk+1 rk+1 (t))Pk − ∑ rk+1 (t)APk n−1
k=1
k=1
n−1
n−1
= ∑ rk (t)Pk + λ1 r1 (t)P0 + ∑ λk+1 rk+1 (t)Pk − ∑ rk+1 (t)APk k=1 n−1
n−1
k=1
k=0
k=1
n−1
k=0
= ∑ rk (t)Pk + ∑ λk+1 rk+1 (t)Pk − ∑ rk+1 (t)APk n−1
n−1
k=0
= ∑ rk (t)Pk − ∑ (A − λk+1 I)rk+1 (t)Pk k=1 n−1
k=0 n−1
= ∑ rk (t)Pk − ∑ rk+1 (t)Pk+1 = −rn (t)Pn = 0, k=1
k=0
t ∈ 𝕋κ .
42 � 1 Introduction Also, n−1
X(t0 ) = ∑ rk+1 (t0 )Pk = r1 (t0 )P0 = I. k=0
This completes the proof. Example 1.21. Consider the system x Δ (t) = 2x1 (t) + x2 (t) + 2x3 (t), { { { 1Δ x (t) = 4x1 (t) + 2x2 (t) + 4x3 (t), { { 2 { Δ {x3 (t) = 2x1 (t) + x2 (t) + 2x3 (t),
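On 𝕋 = ℤ the pieces of the Putzer algorithm are easy to compute explicitly: r^Δ becomes a forward difference, so r(t + 1) = r(t) + M r(t) with the lower bidiagonal matrix M from the theorem, and e_A(t, t0) = Σ r_{k+1}(t) P_k. The Python sketch below implements this discrete case and compares the result with the product representation e_A(t, t0) = (I + A)^{t − t0}; it is an illustration under these assumptions, not code from the text.

```python
import numpy as np

# Putzer algorithm on T = Z for a constant matrix A.
# r^Delta = M r becomes r(t+1) = r(t) + M r(t), and e_A(t, t0) = sum_k r_{k+1}(t) P_k.

A = np.array([[2.0, 1.0, 2.0],
              [4.0, 2.0, 4.0],
              [2.0, 1.0, 2.0]])      # the matrix of Example 1.21 below
n = A.shape[0]
eigvals = np.linalg.eigvals(A)       # lambda_1, ..., lambda_n (here 0, 0, 6 up to rounding)

# P_0 = I, P_{k+1} = (A - lambda_{k+1} I) P_k
P = [np.eye(n)]
for lam in eigvals[:-1]:
    P.append((A - lam * np.eye(n)) @ P[-1])

# M has the eigenvalues on the diagonal and ones on the first subdiagonal
M = np.diag(eigvals) + np.diag(np.ones(n - 1), k=-1)

t0, T = 0, 6
r = np.zeros(n, dtype=complex)
r[0] = 1.0                           # r(t0) = (1, 0, ..., 0)^T

for t in range(t0, T):
    r = r + M @ r                                        # one delta step of r^Delta = M r on Z
    eA = sum(r[k] * P[k] for k in range(n))              # Putzer representation of e_A(t+1, t0)
    expected = np.linalg.matrix_power(np.eye(n) + A, t + 1 - t0)
    assert np.allclose(eA, expected), t

print("Putzer algorithm reproduces e_A(t, t0) = (I + A)^(t - t0) for t = 1, ...,", T)
```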
t ∈ 𝕋κ .
Here,
$$
A = \begin{pmatrix} 2 & 1 & 2 \\ 4 & 2 & 4 \\ 2 & 1 & 2 \end{pmatrix}.
$$
Then
$$
0 = \det(A - \lambda I) = \det\begin{pmatrix} 2-\lambda & 1 & 2 \\ 4 & 2-\lambda & 4 \\ 2 & 1 & 2-\lambda \end{pmatrix} = -(\lambda - 2)^3 + 8 + 8 + 4(\lambda - 2) + 4(\lambda - 2) + 4(\lambda - 2)
$$
$$
= -(\lambda - 2)^3 + 12(\lambda - 2) + 16 = -(\lambda^3 - 6\lambda^2 + 12\lambda - 8 - 12\lambda + 24 - 16) = -(\lambda^3 - 6\lambda^2) = -\lambda^2(\lambda - 6),
$$
whereupon
$$
\lambda_1 = 0, \qquad \lambda_2 = 0, \qquad \lambda_3 = 6.
$$
Consider the IVPs
$$
\begin{aligned}
r_1^\Delta(t) &= 0, & r_1(t_0) &= 1,\\
r_2^\Delta(t) &= r_1(t), & r_2(t_0) &= 0,\\
r_3^\Delta(t) &= r_2(t) + 6 r_3(t), & r_3(t_0) &= 0.
\end{aligned}
$$
We have $r_1(t) = 1$, $t \in \mathbb{T}^\kappa$, and hence
$$
r_2^\Delta(t) = 1, \quad t \in \mathbb{T}^\kappa, \qquad r_2(t_0) = 0.
$$
Then
$$
r_2(t) = t - t_0, \quad t \in \mathbb{T}^\kappa,
$$
and
$$
r_3^\Delta(t) = t - t_0 + 6 r_3(t), \qquad r_3(t_0) = 0.
$$
Therefore,
$$
r_3(t) = \int_{t_0}^{t} e_6(t,\sigma(\tau))(\tau - t_0)\,\Delta\tau, \quad t \in \mathbb{T}^\kappa.
$$
Next,
$$
P_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad P_1 = (A - \lambda_1 I)P_0 = A P_0 = A = \begin{pmatrix} 2 & 1 & 2 \\ 4 & 2 & 4 \\ 2 & 1 & 2 \end{pmatrix},
$$
$$
P_2 = (A - \lambda_2 I)P_1 = A^2 = \begin{pmatrix} 2 & 1 & 2 \\ 4 & 2 & 4 \\ 2 & 1 & 2 \end{pmatrix}\begin{pmatrix} 2 & 1 & 2 \\ 4 & 2 & 4 \\ 2 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 12 & 6 & 12 \\ 24 & 12 & 24 \\ 12 & 6 & 12 \end{pmatrix}, \qquad P_3 = 0.
$$
Therefore,
$$
e_A(t,t_0) = r_1(t) P_0 + r_2(t) P_1 + r_3(t) P_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} + (t - t_0)\begin{pmatrix} 2 & 1 & 2 \\ 4 & 2 & 4 \\ 2 & 1 & 2 \end{pmatrix} + \left(\int_{t_0}^{t} e_6(t,\sigma(\tau))(\tau - t_0)\,\Delta\tau\right)\begin{pmatrix} 12 & 6 & 12 \\ 24 & 12 & 24 \\ 12 & 6 & 12 \end{pmatrix},
$$
and
$$
\begin{pmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{pmatrix} = e_A(t,t_0)\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix},
$$
where $c_1, c_2, c_3 \in \mathbb{R}$, is a general solution of the considered system.
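The Putzer algorithm is also convenient computationally. The following Python sketch (our illustration, not part of the text) specializes to the time scale 𝕋 = ℝ, where $e_A(t,t_0) = \exp(A(t-t_0))$: it builds the matrices $P_k$ and solves the IVP (1.21) numerically for the matrix of Example 1.21, then compares the result with scipy.linalg.expm.

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[2.0, 1.0, 2.0],
              [4.0, 2.0, 4.0],
              [2.0, 1.0, 2.0]])
lam = np.array([0.0, 0.0, 6.0])       # eigenvalues of A, in the chosen order
n = len(lam)

# P_0 = I, P_{k+1} = (A - lambda_{k+1} I) P_k
P = [np.eye(n)]
for k in range(n - 1):
    P.append((A - lam[k] * np.eye(n)) @ P[k])

# r^Delta = M r with M lower bidiagonal, r(t0) = (1, 0, ..., 0)^T; here Delta = d/dt
M = np.diag(lam) + np.diag(np.ones(n - 1), k=-1)
t0, t1 = 0.0, 0.3
sol = solve_ivp(lambda t, r: M @ r, (t0, t1), np.eye(n)[0], rtol=1e-10, atol=1e-12)
r = sol.y[:, -1]

E_putzer = sum(r[k] * P[k] for k in range(n))
# The difference should be tiny (limited only by the ODE solver tolerance)
print(np.max(np.abs(E_putzer - expm(A * (t1 - t0)))))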
Exercise 1.10. Using the Putzer algorithm, find $e_A(t,t_0)$, where
1. $A = \begin{pmatrix} 1 & 2 \\ -1 & 3 \end{pmatrix}$,
2. $A = \begin{pmatrix} 1 & -1 & 1 \\ 1 & 0 & 2 \\ -1 & 1 & 1 \end{pmatrix}$.
1.5 Advanced practical problems

Problem 1.1. Let $\mathbb{T} = 2\mathbb{Z}$. Check if the function
$$
x(t) = \begin{pmatrix} -1 \\ t^2 \\ t \end{pmatrix}, \quad t \in [0,\infty),
$$
is a solution to the IVP
$$
\begin{pmatrix} 1 & 0 & -2 \\ 1 & -1 & -1 \\ 0 & 1 & 0 \end{pmatrix} x^\Delta = \begin{pmatrix} 1 & -1 & 1 \\ 1 & 0 & -1 \\ 2 & -1 & 0 \end{pmatrix} x^\sigma + \begin{pmatrix} t^2 + 3t + 1 \\ -t + 1 \\ t^2 + 6t + 9 \end{pmatrix}, \quad t \in [0,\infty), \qquad x(0) = \begin{pmatrix} -1 \\ 0 \\ 0 \end{pmatrix}.
$$

Problem 1.2. Let $\mathbb{T} = 3^{\mathbb{N}_0}$,
$$
A(t) = \begin{pmatrix} \frac{t^2+2}{t+1} & t^2 + 3t \\ 4t - 1 & 3t \end{pmatrix}, \quad t \in \mathbb{T}.
$$
Find $A^\Delta(t)$, $t \in \mathbb{T}$.

Answer.
$$
A^\Delta(t) = \begin{pmatrix} \frac{3t^2+4t-2}{(t+1)(3t+1)} & 4t + 3 \\ 4 & 3 \end{pmatrix}, \quad t \in \mathbb{T}.
$$
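The stated answer can be checked symbolically. On $\mathbb{T} = 3^{\mathbb{N}_0}$ we have $\sigma(t) = 3t$ and $\mu(t) = 2t$, so $A^\Delta(t) = (A(3t) - A(t))/(2t)$. A minimal SymPy sketch (our illustration):

import sympy as sp

t = sp.symbols('t', positive=True)
A = sp.Matrix([[(t**2 + 2) / (t + 1), t**2 + 3*t],
               [4*t - 1,              3*t]])

# Delta derivative on 3^{N_0}: (A(sigma(t)) - A(t)) / mu(t) with sigma(t)=3t, mu(t)=2t
A_delta = sp.simplify((A.subs(t, 3*t) - A) / (2*t))
print(A_delta)   # each entry agrees with the stated answer (up to how SymPy prints the fraction)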
Problem 1.3. Let $\mathbb{T} = 3^{\mathbb{N}_0}$,
$$
A(t) = \begin{pmatrix} t + 10 & t^2 - 2t + 2 \\ t & t^2 + t + 1 \end{pmatrix}, \qquad B(t) = \begin{pmatrix} t - 2 & t + 1 \\ t^2 & t^3 - 1 \end{pmatrix}, \quad t \in \mathbb{T}.
$$
Prove
$$
(AB)^\Delta(t) = A^\Delta(t) B^\sigma(t) + A(t) B^\Delta(t), \quad t \in \mathbb{T}^\kappa.
$$
Problem 1.4. Let $\mathbb{T} = 3^{\mathbb{N}_0}$ and
$$
A(t) = \begin{pmatrix} t^2 + 2t + 2 & t + 1 \\ \frac{1}{t+1} & t^2 + 2 \end{pmatrix}, \quad t \in \mathbb{T}.
$$
Prove that
$$
\bigl(A^\sigma\bigr)^{-1}(t) = \bigl(A^{-1}\bigr)^\sigma(t), \quad t \in \mathbb{T}.
$$
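A quick symbolic check of this identity (our sketch, assuming SymPy): since $\sigma(t) = 3t$ on $3^{\mathbb{N}_0}$, applying $\sigma$ amounts to substituting $3t$ for $t$, which commutes with matrix inversion.

import sympy as sp

t = sp.symbols('t', positive=True)
A = sp.Matrix([[t**2 + 2*t + 2, t + 1],
               [1/(t + 1),      t**2 + 2]])

lhs = A.subs(t, 3*t).inv()      # (A^sigma)^{-1}(t)
rhs = A.inv().subs(t, 3*t)      # (A^{-1})^sigma(t)
print(sp.simplify(lhs - rhs))   # the zero matrix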
Problem 1.5. Let $\mathbb{T} = \mathbb{N}_0^2$ and
$$
A(t) = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}, \qquad B(t) = \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix}, \quad t \in \mathbb{T}.
$$
Find $(A \oplus B)(t)$, $t \in \mathbb{T}$.

Answer.
$$
\begin{pmatrix} 6(\sqrt{t} + 1) & -2(\sqrt{t} + 1) \\ -2(\sqrt{t} + 1) & 2\sqrt{t} + 3 \end{pmatrix}, \quad t \in \mathbb{T}.
$$
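Recall that $(A \oplus B)(t) = A(t) + B(t) + \mu(t) A(t) B(t)$, and on $\mathbb{T} = \mathbb{N}_0^2$ we have $\sigma(t) = (\sqrt{t}+1)^2$, so $\mu(t) = 2\sqrt{t} + 1$. The stated answer can be reproduced with the following SymPy sketch (our illustration):

import sympy as sp

t = sp.symbols('t', nonnegative=True)
mu = 2*sp.sqrt(t) + 1
A = sp.Matrix([[1, -1], [0, 1]])
B = sp.Matrix([[2, 0], [-1, 1]])

print(sp.simplify(A + B + mu * (A * B)))
# Matrix([[6*sqrt(t) + 6, -2*sqrt(t) - 2], [-2*sqrt(t) - 2, 2*sqrt(t) + 3]])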
Problem 1.6. Find a general solution of the system
$$
\begin{cases} x_1^\Delta(t) = 2x_1(t) + 3x_2(t),\\ x_2^\Delta(t) = x_1(t) + 4x_2(t). \end{cases}
$$

Answer.
$$
x(t) = c_1 e_1(t,t_0)\begin{pmatrix} 3 \\ -1 \end{pmatrix} + c_2 e_5(t,t_0)\begin{pmatrix} 1 \\ 1 \end{pmatrix},
$$
where $c_1, c_2 \in \mathbb{R}$.

Problem 1.7. Find a general solution of the system
$$
\begin{cases} x_1^\Delta(t) = -x_1(t) - x_2(t) - x_3(t),\\ x_2^\Delta(t) = x_1(t) - x_2(t) + 3x_3(t),\\ x_3^\Delta(t) = x_1(t) - x_2(t) + 4x_3(t). \end{cases}
$$

Problem 1.8. Using the Putzer algorithm, find $e_A(t,t_0)$, where
1. $A = \begin{pmatrix} 2 & 3 \\ 1 & -4 \end{pmatrix}$,
2. $A = \begin{pmatrix} -1 & 2 & 3 \\ 1 & 1 & -4 \\ 1 & -1 & 2 \end{pmatrix}$.
1.6 Notes and references

In this chapter, we introduce some basic definitions and concepts for dynamic and dynamic-algebraic equations on time scales. Some of the results in this chapter can be found in [1–4, 8–10].
2 Linear dynamic-algebraic equations with constant coefficients

Suppose that 𝕋 is a time scale with forward jump operator σ and delta differentiation operator Δ. Let I ⊆ 𝕋. In this chapter, we will investigate linear dynamic-algebraic equations with constant coefficients of the form
$$
A x^\Delta = B x^\sigma + f(t) \tag{2.1}
$$
and
$$
A x^\Delta = B x + f(t), \tag{2.2}
$$
and the corresponding homogeneous equations
$$
A x^\Delta = B x^\sigma, \tag{2.3}
$$
and
$$
A x^\Delta = B x, \tag{2.4}
$$
where A, B ∈ M_{m×m}, f ∈ 𝒞(I), f : I → ℝ^m, subject to the initial condition x(t_0) = x_0, where t_0 ∈ I, x_0 ∈ ℝ^m.
2.1 Regular linear dynamic-algebraic equations with constant coefficients

First, we will search a solution of the equation (2.1) in the form
$$
x(t) = e_\lambda(t_0, t) y, \quad t \in I, \tag{2.5}
$$
where λ ∈ R for any t ∈ I and y ∈ ℝ^m. Since λ ∈ R, t ∈ I, we have that e_λ(t_0, σ(t)) ≠ 0, t ∈ I. Then we have the following:
$$
x^\Delta(t) = \left(\frac{1}{e_\lambda(\cdot, t_0)}\right)^\Delta(t)\, y = -\frac{\lambda e_\lambda(t, t_0)}{e_\lambda(t, t_0)\, e_\lambda(\sigma(t), t_0)}\, y = -\lambda e_\lambda(t_0, \sigma(t))\, y = -\lambda x^\sigma(t), \quad t \in I.
$$
Hence,
$$
0 = A x^\Delta(t) - B x^\sigma(t) = -\lambda A x^\sigma(t) - B x^\sigma(t) = -(\lambda A + B) x^\sigma(t), \quad t \in I.
$$
Thus, (2.5) is a nontrivial solution to the equation (2.3) if λ is a zero to the polynomial pσ (λ) = det(λA + B) and y ≠ 0 satisfies the equation (λA + B)y = 0. Definition 2.1. A real number λ ∈ R and a vector y ∈ ℝm , y ≠ 0, are said to be σ-generalized eigenvalue and σ-generalized eigenvector, respectively, if pσ (λ) = 0
and (λA + B)y = 0.
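In concrete examples these objects can be computed directly: form $p_\sigma(\lambda) = \det(\lambda A + B)$, find its roots, and compute null vectors of $\lambda A + B$. The following SymPy sketch (our illustration; the matrices are those of Example 2.1 below) does exactly this.

import sympy as sp

lam = sp.symbols('lamda')
A = sp.Matrix([[1, 0], [0, 1]])
B = sp.Matrix([[-2, 1], [1, 0]])

p_sigma = sp.expand((lam*A + B).det())
print(p_sigma)                               # lamda**2 - 2*lamda - 1
for root in sp.roots(p_sigma):
    # null vectors of root*A + B: multiples of (1, 1 - sqrt(2))^T and (1, 1 + sqrt(2))^T
    print(root, (root*A + B).nullspace())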
Definition 2.2. The families {λA + B : λ ∈ ℝ} will be said σ-matrix pencil of the matrix pair (A, B). Example 2.1. Let 𝕋 = ℤ. Consider the following linear homogeneous dynamic-algebraic system: x1Δ = −2x1σ + x2σ , x2Δ = x2σ , which we can rewrite in the form (
1 0
Δ 0 x1 −2 ) ( Δ) = ( 1 1 x2
1 x1σ )( ). 0 x2σ
Here, σ(t) = t + 1,
μ(t) = 1,
t ∈ 𝕋,
and A=( Then
1 0
0 ), 1
B=(
−2 1
1 ). 0
pσ (λ) = det(λA + B) = det (λ ( λ = det (( 0
0 −2 )+( λ 1
1 0
0 −2 )+( 1 1
1 )) 0
1 λ−2 )) = det ( 0 1
1 ) λ
= λ(λ − 2) − 1 = λ2 − 2λ − 1 = 0 if λ1,2 = 1 ± √2. Note that 1 + λ1 μ(t) = 1 + 1 + √2 = 2 + √2 ≠ 0,
t ∈ 𝕋,
1 + λ2 μ(t) = 1 + 1 − √2 = 2 − √2 ≠ 0,
t ∈ 𝕋.
and
y
Therefore, λ1,2 ∈ R . Now, we will search a vector y1 = ( y21 ) ∈ ℝ2 so that (
λ1 − 2 1
1 y 0 ) ( 1) = ( ) λ1 y2 0
or (
−1 + √2 1
1 y 0 ) ( 1) = ( ) , 1 + √ 2 y2 0
or −(1 − √2)y1 + y2 = 0, y1 + (1 + √2)y2 = 0,
whereupon we can take y1 = (
1 ). 1 − √2
y
Now, we will search a vector y2 = ( y1211 ) ∈ ℝ2 so that ( or
λ2 − 2 1
1 y 0 ) ( 11 ) = ( ) λ2 y12 0
�
49
(
−1 − √2 1
1 y 0 ) ( 11 ) = ( ) , 1 − √2 y12 0
or −(1 + √2)y11 + y12 = 0, y11 + (1 − √2)y12 = 0. Thus, we can take y2 = (
1 ). 1 + √2
Therefore, x 1 (t) = eλ1 (t0 , t)y1 = e1+√2 (t0 , t) (
e1+√2 (t0 , t) 1 )=( ), 1 − √2 (1 − √2)e √ (t0 , t)
t ∈ 𝕋,
e1−√2 (t0 , t) 1 )=( ), √ 1+ 2 (1 + √2)e1−√2 (t0 , t)
t ∈ 𝕋,
1+ 2
and x 2 (t) = eλ2 (t0 , t)y1 = e1−√2 (t0 , t) (
are solutions of the considered linear homogeneous dynamic-algebraic system. Example 2.2. Let 𝕋 = 2ℕ0 . Consider the system x1Δ = x1σ + x3σ , x2Δ = 2x3σ ,
x2σ = 0. Here, σ(t) = 2t,
μ(t) = t,
t ∈ 𝕋,
and 1 A = (0 0
0 1 0
0 0) , 0
1 B = (0 0
Then 1 λA + B = λ (0 0
0 1 0
0 1 0 ) + (0 0 0
0 0 1
1 2) 0
0 0 1
1 2) . 0
λ = (0 0
0 λ 0
0 1 0) + (0 0 0
0 0 1
1 λ+1 2) = ( 0 0 0
0 λ 1
1 2) . 0
Hence, λ+1 det(λA + B) = det ( 0 0
0 λ 1
1 2 ) = 2(λ + 1) = 0 0
if λ = −1. We have 1 + λμ(t) = 1 − t ≠ 0 if t ≠ 1. Take I = 2ℕ . Then λ ∈ R for any t ∈ I. Now, we will search a vector y1 y = (y2 ) ∈ ℝ3 , y3 y ≠ 0, so that (−A + B)y = 0, or 0 (0 0
0 −1 1
1 y1 0 2 ) ( y2 ) = ( 0) , 0 y3 0
or y3 = 0,
−y2 + 2y3 = 0, y2 = 0.
Take 1 y = (0) . 0 Therefore,
�
51
52 � 2 Linear dynamic-algebraic equations with constant coefficients 1 e−1 (t0 , t) x(t) = e−1 (t0 , t)y = e−1 (t0 , t) (0) = ( 0 ) , 0 0
t ∈ I,
is a nontrivial solution to the considered linear homogeneous dynamic-algebraic equations. Example 2.3. Let 𝕋 = 3ℕ0 . Consider the following linear homogeneous dynamicalgebraic equations: x1Δ = x1σ , x3Δ = x2σ , x4Δ = x3σ , x5Δ = x4σ ,
x5σ = 0. Here, σ(t) = 3t,
μ(t) = 2t,
t ∈ 𝕋,
and 1 0 A = (0 0 0
0 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0) , 1 0
0 0 1 0 0
0 1 0 0 0 ) + (0 1 0 0 0
1 0 B = (0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0) . 0 1
Hence, 1 0 λA + B = λ (0 0 0 λ 0 = (0 0 0 Therefore,
0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 λ 0 0 0
0 0 λ 0 0
0 1 0 0 0) + (0 λ 0 0 0
0 1 0 0 0 0 1 0 0 0
0 0 1 0 0 0 0 1 0 0
0 0 0 1 0 0 0 0 1 0
0 0 0) 0 1 0 λ+1 0 0 0) = ( 0 0 0 1 0
0 1 0 0 0
0 λ 1 0 0
0 0 λ 1 0
0 0 0) . λ 1
λ+1 0 det(λA + B) = det ( 0 0 0 λ+1 = det ( 0 0
0 1 0 0 0
0 λ 1 0 0
0 1 0
0 0 λ 1 0
0 λ+1 0 0 0) = det ( 0 λ 0 1
0 λ) = λ + 1 = 0 1
if λ = −1. Note that 1 + λμ(t) = 1 − 2t ≠ 0,
t ∈ 𝕋.
Thus, λ ∈ R , t ∈ 𝕋. We will search a vector y1 y2 y = ( y3 ) ∈ ℝ 5 , y4 y5 y ≠ 0, so that (λA + B)y = 0 or 0 0 (0 0 0
0 1 0 0 0
0 −1 1 0 0
0 0 −1 1 0
0 y1 0 0 y2 0 0 ) (y3 ) = (0) , −1 y4 0 1 y5 0
or y2 − y3 = 0, y3 − y4 = 0, y4 − y5 = 0, y5 = 0. We take
0 1 0 0
0 λ 1 0
0 0 ) λ 1
�
53
54 � 2 Linear dynamic-algebraic equations with constant coefficients 1 0 y = (0) . 0 0 Therefore, 1 e−1 (t0 , t) 0 0 x(t) = e−1 (t0 , t)y = e−1 (t0 , t) (0) = ( 0 ) , 0 0 0 0
t ∈ 𝕋,
is a nontrivial solution of the considered linear homogenous dynamic-algebraic equations. Exercise 2.1. Let 𝕋 = 2ℤ. Find a nontrivial solution to the following linear homogeneous dynamic-algebraic equations x1Δ + x2Δ = x1σ , x2Δ = x2σ ,
x3σ = 0.
Now, we will search a nontrivial solution of the linear homogeneous dynamicalgebraic equation (2.4) in the form x(t) = eλ (t, t0 )y, where λ ∈ R , y ∈ ℝm , y ≠ 0. We have x Δ (t) = λeλ (t, t0 )y = λx(t),
t ∈ I.
Then 0 = Ax Δ (t) − Bx(t) = λAx(t) − Bx(t) = (λA − B)x(t) = eλ (t, t0 )(λA − B)y if λ is a zero of the polynomial p(λ) = det(λA − B) and y ∈ ℝm , y ≠ 0 satisfies (λA − B)y = 0.
�
55
Definition 2.3. A real number λ ∈ R and a real vector y ∈ ℝm , y ≠ 0 are said to be generalized eigenvalue and generalized eigenvector, respectively, if p(λ) = 0
and
(λA − B)y = 0.
Definition 2.4. The families {λA − B : λ ∈ ℝ} are said to be matrix pencil of the matrix pair (A, B). Example 2.4. Let 𝕋 = ℕ0 . Consider the system x1Δ = x1 + x3 , x2Δ = 2x3 , x2 = 0.
Here, σ(t) = t + 1,
μ(t) = 1,
t ∈ 𝕋,
and the matrices A and B are as in Example 2.2. Then λ λA − B = (0 0
0 λ 0
0 1 0) − (0 0 0
0 0 1
1 λ−1 2) = ( 0 0 0
0 λ −1
−1 −2) . 0
Hence, λ−1 det(λA − B) = det ( 0 0
0 λ −1
−1 −2) = 2(λ − 1) = 0 0
if λ = 1. We have 1 + λμ(t) = 1 + 1 = 2 ≠ 0,
t ∈ 𝕋,
Then λ ∈ R for any t ∈ 𝕋. Now, we will search a vector y1 y = (y2 ) ∈ ℝ3 , y3 y ≠ 0, so that
56 � 2 Linear dynamic-algebraic equations with constant coefficients (A − B)y = 0, or 0 (0 0
0 1 −1
−1 y1 0 −2) (y2 ) = (0) , 0 y3 0
or −y3 = 0,
y2 − 2y3 = 0, −y2 = 0.
Take 1 y = (0) . 0 Therefore, 1 e1 (t, t0 ) x(t) = e1 (t, t0 )y = e1 (t, t0 ) (0) = ( 0 ) , 0 0
t ∈ I,
is a nontrivial solution to the considered linear homogeneous dynamic-algebraic equations. Example 2.5. Let 𝕋 = 4ℕ0 . Consider the following linear homogeneous dynamicalgebraic equations: x1Δ = x1 ,
x3Δ = x2 ,
x4Δ = x3 ,
x5Δ = x4 , x5 = 0.
Here, σ(t) = t + 4,
μ(t) = 4,
and the matrices A and B are as in Example 2.3. Then
t ∈ 𝕋,
λ 0 λA − B = (0 0 0
0 0 0 0 0
λ−1 0 =( 0 0 0
0 λ 0 0 0
0 0 λ 0 0
0 −1 0 0 0
0 1 0 0 0) − ( 0 λ 0 0 0
0 λ −1 0 0
0 0 λ −1 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
�
0 0 0) 0 1
0 0 0 ). λ −1
Therefore, λ−1 0 det(λA − B) = det ( 0 0 0 λ−1 = det ( 0 0
0 −1 0 0 0 0 −1 0
0 λ −1 0 0
0 0 λ −1 0
0 λ−1 0 0 0 ) = − det ( 0 λ 0 −1
0 λ)=λ−1=0 −1
if λ = 1. Note that 1 + λμ(t) = 1 + 4 = 5 ≠ 0, Thus, λ ∈ R , t ∈ 𝕋. We will search a vector y1 y2 y = ( y3 ) ∈ ℝ 5 , y4 y5 y ≠ 0, so that (λA − B)y = 0 or
t ∈ 𝕋.
0 −1 0 0
0 λ −1 0
0 0 ) λ −1
57
58 � 2 Linear dynamic-algebraic equations with constant coefficients 0 0 (0 0 0
0 −1 0 0 0
0 1 −1 0 0
0 0 1 −1 0
0 y1 0 0 y2 0 0 ) (y3 ) = (0) , 1 y4 0 −1 y5 0
or −y2 + y3 = 0,
−y3 + y4 = 0, −y4 + y5 = 0, −y5 = 0.
We take 1 0 y = (0) . 0 0 Therefore, 1 e1 (t, t0 ) 0 0 x(t) = e1 (t, t0 )y = e1 (t, t0 ) (0) = ( 0 ) , 0 0 0 0
t ∈ 𝕋,
is a nontrivial solution of the considered linear homogenous dynamic-algebraic equations. Exercise 2.2. Let 𝕋 = 7ℤ. Find a nontrivial solution to the following linear homogeneous dynamic-algebraic equations: x1Δ + x2Δ = x1 ,
x2Δ = x2 , x3 = 0.
Definition 2.5. For a given matrix pair (A, B), A, B ∈ Mm×m , the σ-matrix pencil and the matrix pair (A, B) are said to be σ-regular if the polynomial pσ (λ) does not vanish identically. Otherwise, the σ-matrix pencil and the matrix pair (A, B) are said to be σ-singular and σ-nonregular, respectively.
2.2 The Weierstrass–Kronecker form
Definition 2.6. For a given matrix pair (A, B), A, B ∈ Mm×m , the matrix pencil and the matrix pair (A, B) are said to be regular if the polynomial p(λ) does not vanish identically. Otherwise, the matrix pencil and the matrix pair (A, B) are said to be singular and nonregular, respectively.
2.2 The Weierstrass–Kronecker form Suppose that the matrix pair (A, B), A, B ∈ Mm×m , is a regular matrix pair. Then there exist (see the Appendix of this book) nonsingular matrices P, Q ∈ Mm×m and integers 0 ≤ l ≤ m, 0 ≤ μ ≤ l, such that I PAQ = ( 0 W PBQ = ( 0
0 m−l ) , N l 0 m−l ) , I l
(2.6)
where I is the identity matrix, N is absent if l = 0; otherwise, N is nilpotent of index μ, i. e., N μ = 0, N μ−1 ≠ 0. Also, the real-valued matrix N has eigenvalue zero only and it can be transformed into its Jordan canonical form using a real-valued similarity transformation. Then the matrices P and Q can be chosen so that N is in Jordan canonical form. Definition 2.7. The Kronecker index of a regular matrix pair (A, B) and the Kronecker index of a regular dynamic-algebraic equation (2.2) are defined to be the nilpotency order μ in (2.6) and we will write ind(A, B) = μ. Definition 2.8. Equations (2.6) is said to be the Weierstrass–Kronecker form of the regular matrix pair (A, B). Now, we set y Q ( ) = x. z Then yΔ Q ( Δ ) = xΔ. z Hence, the equation (2.2) takes the form
60 � 2 Linear dynamic-algebraic equations with constant coefficients yΔ y AQ ( Δ ) = BQ ( ) + f (t), z z whereupon, multiplying by P both sides of the last equation, we arrive at yΔ y PAQ ( Δ ) = PBQ ( ) + Pf (t). z z Denote Pf (t) = (
f1 (t) ). f2 (t)
Now, using the Weierstrass–Kronecker form (2.6), we obtain the equation (
I 0
0 yΔ W ) ( Δ) = ( N z 0
0 y f (t) )( ) + ( 1 ). I z f2 (t)
Thus, we get two equations yΔ = Wy + f1 (t)
(2.7)
NzΔ = z + f2 (t).
(2.8)
and
The equation (2.7) is an explicit dynamic equation. For the equation (2.8), we have the following important result. Theorem 2.1. Let f2 ∈ C μ (I). Suppose that μ is the index of nilpotency of N. Then (2.8) has unique solution μ−1
j
z = − ∑ N j f2Δ . j=0
(2.9)
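Formula (2.9), namely $z = -\sum_{j=0}^{\mu-1} N^j f_2^{\Delta^j}$, is easy to verify with a computer algebra system. The following SymPy sketch (our illustration, specialized to 𝕋 = ℝ, where the Δ-derivative is the ordinary derivative) checks it for a single nilpotent block N of index μ = 3 and a smooth f₂.

import sympy as sp

t = sp.symbols('t')
N = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])              # nilpotent, N**3 = 0, so mu = 3
f2 = sp.Matrix([sp.sin(t), t**3, sp.exp(2*t)])

# z = -(f2 + N f2' + N^2 f2'')
z = -f2
deriv = f2
for j in range(1, 3):
    deriv = deriv.diff(t)
    z = z - (N**j) * deriv

# z should satisfy N z' = z + f2
print(sp.simplify(N * z.diff(t) - z - f2))   # the zero vector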
Proof. Let D be a linear operator which maps a continuously-differentiable function z to its Hilger derivative zΔ . Then the equation (2.8) takes the form NDz = z + f2 or (I − ND)z + f2 = 0, Hence, using the Neumann series, we find
2.2 The Weierstrass–Kronecker form
∞
μ−1
j=0
j=0
�
j
z = −(I − ND)−1 f2 = − ∑(ND)j f2 = − ∑ N j f2Δ z. Note that ∞
NzΔ − z − f2 = − ∑ N j+1 f2Δ
μ−1
j+1
j
+ ∑ N j f2Δ − f2 = f2 .
j=0
j=0
This completes the proof. Example 2.6. Let 𝕋 = 2ℕ0 . Consider (2.2). Let 0 A = (0 0
1 0 0
0 0) , 0
1 B = (0 0
0 1 0
0 0) 1
and
0 f (t) = (−t 3 ) . −t
Here, σ(t) = 2t, t ∈ 𝕋. We have the system x2Δ (t) = x1 (t),
0 = x2 (t) − t 3 , 0 = x3 (t) − t,
t ∈ 𝕋,
or x2Δ (t) = x1 (t), x2 (t) = t 3 ,
x3 (t) = t,
t ∈ 𝕋.
Note that 2
x2Δ (t) = (σ(t)) + tσ(t) + t 2 = 4t 2 + 2t 2 + t 2 = 7t 2 ,
t ∈ 𝕋.
Consequently, x1 (t) = 7t 2 ,
x2 (t) = t 3 ,
x3 (t) = t,
t ∈ 𝕋,
is the unique solution of the considered system independent of an initial condition. Example 2.7. Let 𝕋 = 3ℕ0 . Consider the system (2.1). Let 1 A = (0 0
1 1 0
0 0) , 0
1 B = (0 0
0 1 0
0 0) 1
and
0 f (t) = ( 0 ) . 9t 2
61
62 � 2 Linear dynamic-algebraic equations with constant coefficients Let also 1 x(1) = (1) . 1 Here, σ(t) = 3t, t ∈ 𝕋. We have the system x1Δ (t) + x2Δ (t) = x1σ (t), x2Δ (t) = x2σ (t),
0 = x3σ (t) − 9t 2 ,
t ∈ 𝕋,
or x1Δ (t) + x2Δ (t) = x1σ (t), x2Δ (t) = x2σ (t),
x3 (3t) = (3t)2 ,
t ∈ 𝕋.
Thus, x3 (t) = t 2 ,
t ∈ 𝕋.
Note that 1 , 1 − 2t
t ∈ 𝕋.
x2 (t) = e⊖(−1) (t, 1),
t ∈ 𝕋.
⊖(−1)(t) = Therefore,
Hence, x1Δ (t) = x1σ (t) −
1 e (t, 1), 1 − 2t ⊖(−1)
t ∈ 𝕋,
and t
x1 (t) = e⊖(−1) (t, 1) − ∫ 1
{e (t, 1)(1 − = { ⊖(−1) {1, Therefore,
t
1 1 e (τ, 1)e⊖(−1) (t, τ)Δτ = e⊖(−1) (t, 1)(1 − ∫ Δτ) 1 − 2τ ⊖(−1) 1 − 2τ t 3
j 2 ∑j=1 1−2j ),
1
t > 1, t = 1.
2.2 The Weierstrass–Kronecker form t
3 {e (t, 1)(1 − 2 ∑j=1 x1 (t) = { ⊖(−1) {1, x2 (t) = e⊖(−1) (t, 1),
x3 (t) = t 2 ,
j ), 1−2j
t > 1, t = 1,
t ∈ 𝕋,
is the unique solution of the considered IVP. Example 2.8. Let 𝕋 = 2ℕ0 . Consider the system (2.1). Let 1 0 A = (0 0 0
0 0 0 0 0
0 1 0 0 0
0 0 f (t) = ( 0 ) , 0 −t 4
0 0 1 0 0
0 0 0) , 1 0
1 0 B = (0 0 0
t ∈ 𝕋,
and 1 315 x(1) = (105) . 15 1 Here, σ(t) = 2t, t ∈ 𝕋. We have the system x1Δ (t) = x1 (t),
x3Δ (t) = x2 (t),
x4Δ (t) = x3 (t),
x5Δ (t) = x4 (t),
0 = x5 (t) − t 4 ,
Hence, x1 (t) = e1 (t, 1),
x3Δ (t) = x2 (t),
x4Δ (t) = x3 (t),
t ∈ 𝕋.
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0) , 0 1
� 63
64 � 2 Linear dynamic-algebraic equations with constant coefficients x5Δ (t) = x4 (t), x5 (t) = t 4 ,
t ∈ 𝕋.
Note that 3
2
x5Δ (t) = (σ(t)) + t(σ(t)) + t 2 σ(t) + t 3 = (2t)3 + t(2t)2 + 2t 3 + t 3 = 8t 3 + 4t 3 + 3t 3 = 15t 3 ,
t ∈ 𝕋,
and x4 (t) = 15t 3 ,
t ∈ 𝕋,
and 2
x4Δ (t) = 15((σ(t)) + tσ(t) + t 2 ) = 15((2t)2 + 2t 2 + t 2 ) = 105t 2 = x3 (t),
t ∈ 𝕋,
and x3Δ (t) = 105(σ(t) + t) = 105(2t + t) = 315t = x2 (t),
t ∈ 𝕋.
Therefore, x1 (t) = e1 (t, 1),
x2 (t) = 315t,
x3 (t) = 105t 2 ,
x4 (t) = 15t 3 , x5 (t) = t 4 ,
t ∈ 𝕋,
is the unique solution of the considered system. Exercise 2.3. Let 𝕋 = 3ℕ0 . Find the solution of the system (2.1) when 1 0 A = (0 0 0
0 0 0 0 0
t 2t f (t) = ( 0 ) , 0 0
0 −1 0 0 0
0 0 −1 0 0
t ∈ 𝕋,
0 0 0 ), −1 0
1 0 B = (0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0) , 0 1
2.2 The Weierstrass–Kronecker form
� 65
subject to the initial condition 1 1 x(1) = (−1) . 1 0 Now, we consider the regular homogeneous dynamic-algebraic equation (2.4). Under above notation, we have (
I 0
0 yΔ W ) ( Δ) = ( N z 0
0 y )( ), I z
whereupon yΔ = Wy,
NzΔ = z. Take z = 0. Then
y(t) = eW (t, t0 )y0 , where y0 ∈ ℝm−l . Hence, y e (t, t0 ) ( )=( W ) y0 z 0 and y e (t, t0 ) x = Q( ) = Q( W ) x0 , z 0 where y x0 = ( 0 ) . 0 Therefore, the solution space has finite dimension m − l and the solution depends smoothly by the initial condition x0 . Theorem 2.2. The linear homogeneous dynamic-algebraic equation (2.4) has a finitedimensional solution space if and only if the matrix pair (A, B) is regular.
66 � 2 Linear dynamic-algebraic equations with constant coefficients Proof. By the above observations, it follows that if (A, B) is a regular matrix pair, then the solution space is finite. Now, assume that the solution space of the equation (2.4) has finite dimension. Suppose that (A, B) is singular, i. e., det(λA − B) = 0. Then for any choice of m + 1 different values λ1 , . . . , λm+1 we find vectors ξ1 , . . . , ξm+1 so that (λj A + B)ξj = 0,
j ∈ {1, . . . , m + 1},
and m+1
∑ αj ξj = 0. j=1
Next, define m+1
x(t) = ∑ αj eλj (t, t0 )ξj . j=1
Then m+1
x Δ (t) = ∑ αj λj eλj (t, t0 )ξj , j=1
m+1
Ax Δ (t) = ∑ αj eλj (t, t0 )(λj Aξj ), j=1
m+1
Bx(t) = ∑ αj eλj (t, t0 )Bξj . j=1
Hence, m+1
Ax Δ (t) − Bx(t) = ∑ αj eλj (t, t0 )(λj A + B)ξj = 0. j=1
Thus, x is a solution to the homogeneous dynamic-algebraic equation (2.4) and it is a solution with initial data x(t0 ) = 0. For disjoint sets,
2.2 The Weierstrass–Kronecker form
�
67
{ξ1 , . . . , ξm+1 } we get different solutions to (2.2). Therefore, the solution space of (2.4) is not finite. This is a contradiction. Consequently, the matrix pair (A, B) is regular. This completes the proof. Example 2.9. Consider 1 0 A=( 0 0
−1 0 0 −1
0 0 0 0
0 −2 ), 0 0
0 0 B=( 0 0
0 0 1 0
1 0 0 0
0 0 ). 0 0
We have 1 0 λA − B = λ ( 0 0 λ 0 =( 0 0
0 0 0 0
−1 0 0 −1 −λ 0 0 −λ
0 0 0 0
0 0 −2 0 )−( 0 0 0 0
0 0 1 0
1 0 0 0
0 0 ) 0 0
0 0 −2λ 0 )−( 0 0 0 0
0 0 1 0
1 0 0 0
0 λ 0 0 )=( 0 0 0 0
−λ 0 −1 −λ
−1 0 0 0
0 −2λ ) 0 0
and λ 0 det(λA − B) = det ( 0 0
−λ 0 −1 −λ
−1 0 0 0
0 λ −2λ ) = −λ det (0 0 0 0
−1 0 0
0 −2λ) = 0. 0
Therefore, the matrix pair (A, B) is singular. By Theorem 2.2, it follows that the corresponding homogeneous dynamic-algebraic equation (2.4) has infinite solution space. Now, we rewrite the equation (2.4) in the form 1 0 ( 0 0
−1 0 0 −1
0 0 0 0
x1Δ 0 0 Δ x −2 0 ) ( 2Δ ) = ( 0 0 x3 Δ 0 0 x 4
or (x1 − x2 )Δ = x3 , −2x4Δ = 0,
0 0 1 0
1 0 0 0
0 x1 0 x ) ( 2) 0 x3 0 x4
68 � 2 Linear dynamic-algebraic equations with constant coefficients 0 = x2 ,
−x2Δ = x4 . Hence, x1Δ = x3 , x2 = 0,
x4 = 0 and t
x1 (t) = c + ∫ x3 (s)Δs, x2 (t) = 0,
t0
x4 (t) = 0. By the above representation of the solutions, it follows again that the solution space of the considered linear homogeneous dynamic-algebraic equation is infinite. Example 2.10. Let 𝕋, A and B be as in Example 2.4. Then the solution space of the equation (2.4) is finite. Example 2.11. Let 𝕋, A and B be as in Example 2.5. Then the solution space of the equation (2.4) is finite. Exercise 2.4. Let 𝕋 = 3ℕ0 and 1 0 (0 ( A=( (−4 (0 1 0 (
0 0 0 1 0 0 0
0 0 0 0 0 0 −1
0 0 0 0 0 0 0
−1 0 0 0 0 0 0
0 0 0 0 0 1 0
0 3 0) ) 0) ), 0) 1 0)
0 1 (1 ( B=( (0 (0 1 0 (
1 0 0 1 0 0 1
1 0 0 0 2 0 0
0 0 −1 0 0 0 0
Investigate if the solution space of the equation (2.4) is finite or not. Now, we consider the equation (2.8) and suppose that J1 N =(
J2
..
.
), Js
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 −1 0) ) 0) ). 0) 1 0)
where 0 Jj = (
1 ..
..
.
..
. .
1 0
) ∈ Mkj ×kj ,
j ∈ {1, . . . , s},
and k1 + k2 + ⋅ ⋅ ⋅ + ks = l. Note that the Kronecker index μ is equal to the order of the maximal Jordan block in N. By the equation (2.8), we get the equations Jj ηΔj (t) = ηj (t) + f2j (t),
j ∈ {1, . . . , s}.
(2.10)
Let ηj1 ηj2 ηj = ( . ) , .. ηjkj
f2j1 f2j2 f2j = ( . ) , .. f2jkj
j ∈ {1, . . . , s}.
Then, by the equation (2.10), we find 0 (
1
..
.
.. ..
ηΔj1
. .
ηj1 f2j1 ηj2 f2j2 ηΔj2 )( . ) = ( . ) + ( . ), .. .. .. 1 Δ ηjkj f2jkj ηjk 0 j
or ηΔj2 = ηj1 + f2j1 ,
ηΔj3 = ηj2 + f2j2 , .. .
ηΔjkj = ηjkj −1 + f2jkj −1 , 0 = ηjkj + f2jkj ,
Hence, ηjkj = −f2jkj ,
j ∈ {1, . . . , s}.
j ∈ {1, . . . , s},
70 � 2 Linear dynamic-algebraic equations with constant coefficients Δ ηjkj −1 = ηΔjkj − f2jkj −1 = −f2jk − f2jkj −1 , j 2
Δ Δ ηjkj −2 = ηΔjkj −1 − f2jkj −2 = −f2jk − f2jk − f2jkj −2 , j j −1
.. .
kj
l−1
Δ ηj1 = − ∑ f2jl . l=1
2.3 Structural characteristics Suppose that A, B ∈ Mm×m . Set C0 = A,
D0 = B,
R0 = ker C0
and Q0 ∈ Mm×m be a projector on R0 . Let also P0 = I − Q0 . Then we have the following relations: C0 P0 = C0
(2.11)
and P0 + Q0 = I,
Q0 P0 = P0 Q0 = 0, Q02 = Q0 ,
C0 Q0 = 0.
Then we can rewrite the equation (2.1) in the following form: C0 x Δ = D0 x σ + f , whereupon, using (2.11) and (2.12), we find C0 P0 x Δ = D0 (P0 + Q0 )x σ + f . We apply (2.13) and we obtain (C0 P0 + D0 Q0 P0 )x Δ = D0 P0 x σ + D0 Q0 x σ + f , from where, using (2.11), (2.14) and (2.15), we arrive at
(2.12) (2.13) (2.14) (2.15)
2.3 Structural characteristics
� 71
(C0 + D0 Q0 )P0 x Δ = C0 Q0 x σ + D0 Q02 x σ + D0 P0 x σ + f = (C0 + D0 Q0 )Q0 x σ + D0 P0 x σ + f or (C0 + D0 Q0 )(P0 x Δ − Q0 x σ ) = D0 P0 x σ + f . Set C1 = C0 + D0 Q0 ,
D1 = D0 P0 .
Therefore, C1 (P0 x Δ − Q0 x σ ) = D1 x σ + f . Let now, R1 = ker C1 , Q1 be a projector onto R1 and P1 = I − Q1 . Note that C1 P1 = C1 . Hence, using (2.16), we get C1 P1 (P0 x Δ − Q0 x σ ) = D1 (P1 + Q1 )x σ + f . Now, we use Q1 P1 = 0 and we get (C1 P1 + D1 Q1 P1 )(P0 x Δ − Q0 x σ ) = D1 (P1 + Q1 )x σ + f , or (C1 + D1 Q1 )P1 (P0 x Δ − Q0 x σ ) = D1 Q1 x σ + D1 P1 x σ + f . We apply that Q12 = Q1 ,
C1 Q1 = 0
(2.16)
72 � 2 Linear dynamic-algebraic equations with constant coefficients and we arrive at (C1 + D1 Q1 )P1 (P0 x Δ − Q0 x σ ) = (C1 Q1 + D1 Q12 )x σ + D1 P1 x σ + f or (C1 + D1 Q1 )P1 (P0 x Δ − Q0 x σ ) = (C1 + D1 Q1 )Q1 x σ + D1 P1 x σ + f , or (C1 + D1 Q1 )(P1 (P0 x Δ − Q0 x σ ) − Q1 x σ ) = D1 P1 x σ + f . We set C2 = C1 + D1 Q1 ,
D2 = D1 P 1 .
Then C2 (P1 (P0 x Δ − Q0 x σ ) − Q1 x σ ) = D2 x σ + f and so on. For j ≥ 0, we have Cj+1 = Cj + Dj Qj ,
Rj+1 = ker Cj+1 ,
Dj+1 = Dj Pj ,
Qj+1 ∈ Mm×m is a projector on Rj+1 . Then we have Cj (. . . (C2 (P1 (P0 x Δ − Q0 x σ ) − Q1 x σ )) . . .) = Dj x σ + f . As above, one can get that the equation (2.2) takes the form Cj (. . . (C2 (P1 (P0 x Δ − Q0 x σ ) − Q1 x)) . . .) = Dj x + f . Let rj = rank Cj and Πj = P0 . . . Pj . Then Bj+1 = Bj Pj = Bj−1 Pj−1 Pj . . . = B0 P0 . . . Pj−1 Pj = B0 Πj . Also, we have
2.3 Structural characteristics
�
73
ker Πj ⊆ ker Bj+1 . Since Cj+1 Pj = Cj Pj + Bj Qj Pj = Cj Pj = Cj , we obtain im C0 ⊆ im C1 ⊆ ⋅ ⋅ ⋅ ⊆ im Cj ⊆ im Cj+1 and r0 ≤ r1 ≤ ⋅ ⋅ ⋅ ≤ rj ≤ rj+1 . Moreover, Rj−1 ∩ Rj ⊆ Rj ∩ Rj+1 ,
j ≥ 1.
(2.17)
Thus, if Cj−1 z = 0
and Cj z = 0,
z ∈ ℝn ,
which corresponds to Qj−1 z = z,
Qj z = z,
Pj−1 z = z,
Pj z = z,
or
we conclude that Cj+1 z = (Cj + Dj Qj )z = Cj z + Dj Qj z = Bj z = Bj−1 Pj−1 z = 0. By (2.17), it follows that a nontrivial intersection Rk−1 ∩ Rk never allows an injective matrix Cj , j > k. Definition 2.9. For a given matrix pair (A, B), A, B ∈ Mm×m and k ∈ ℕ, define C0 = A,
D0 = B,
R0 = ker C0 ,
Q ∈ Mm×m is a projector on R0 . For j ≥ 1, set Cj = Cj−1 + Dj−1 Qj−1 , Dj = Dj−1 Pj−1 , Rj = ker Cj , R̂ j = (R0 + ⋅ ⋅ ⋅ + Rj−1 ) ∩ Rj
74 � 2 Linear dynamic-algebraic equations with constant coefficients and fix a complement Xj such that R0 + ⋅ ⋅ ⋅ + Rj−1 = R̂ j ⊕ Xj , choose a projector Qj such that im Qj = Rj ,
Xj ⊆ ker Qj ,
Pj = I − Qj ,
Πj = Πj−1 Pj .
The matrices C0 , C1 . . . , Ck are said to be admissible matrices, the projectors Q0 , Q1 , . . . , Qk are said to be admissible projectors. The matrices C0 , C1 . . . , Ck are said to be regular admissible matrices if R̂ j = {0},
j ∈ {1, . . . , k},
and in this case the projectors Q0 , Q1 , . . . , Qk are said to be regular admissible projectors. Example 2.12. Let 𝕋 = ℤ. Consider the system x1Δ = x1 + x2 + x3 + f1 , x2Δ = x1 + f2 ,
0 = x1 + x2 .
Here, 1 C0 = A = ( 0 0
0 1 0
0 0) , 0
1 D0 = B = ( 1 1
1 0 1
1 0) . 0
We have det C0 = 0. Let y1 y = (y2 ) ∈ ker C0 . y3 Then
1 (0 0
0 1 0
0 y1 0 0) (y2 ) = (0) 0 y3 0
or y1 = 0,
y2 = 0. Thus, a nullspace projector to C0 is 0 Q0 = (0 0
0 0 0
0 0) . 1
Hence, 1 C1 = C0 + D0 Q0 = (0 0 1 = (0 0
0 1 0
0 1 0
0 0 0 ) + (0 0 0
0 1 0) + (1 0 1 0 0 0
1 0 1
1 1 0) = (0 0 0
1 0 0) (0 0 0 0 1 0
0 0 0
0 0) 1
1 0) 0
and 1 P0 = I − Q0 = (0 0
0 1 0
0 0 ) − ( 0 0 1 0
1 D1 = D0 P0 = (1 1
1 0 1
1 1 0) (0 0 0
0 0 0
0 1 ) = ( 0 0 1 0
0 1 0
0 0) , 0
and 0 1 0
0 1 0) = (1 0 1
Note that det C1 = 0, i. e., the matrix C1 is a singular matrix. Let y1 y = (y2 ) ∈ ker C1 . y3
1 0 1
0 0) . 0
�
75
76 � 2 Linear dynamic-algebraic equations with constant coefficients Then 1 (0 0
0 1 0
1 y1 0 0) (y2 ) = (0) 0 y3 0
or y1 + y3 = 0, y2 = 0.
Thus, a nullspace projector to C1 is 1 Q1 = ( 0 −1
0 0 0
0 0) . 0
Then 1 C2 = C1 + D1 Q1 = (0 0 1 = (0 0
0 1 0
0 1 0
1 1 0) + (1 0 1
1 1 0) + (1 0 1 0 0 0
1 0 1
0 2 0) = ( 1 0 1
0 1 0) ( 0 0 −1 0 1 0
0 0 0
0 0) 0
1 0) . 0
We have det C2 = 0 − 1 = −1 ≠ 0, i. e., C2 is nonsingular. Observe that R0 = {(0, 0, y3 )T : y3 ∈ ℝ},
R2 = {(−y3 , 0, y3 )T : y3 ∈ ℝ}
and R0 ∩ R1 = {(0, 0, 0)}. Example 2.13. Let 𝕋 = 2ℕ0 . Consider the system x1Δ = x1σ + x3σ + f1 , x3Δ = x1σ + f2 ,
x1Δ = x1σ + x2σ + f3 .
2.3 Structural characteristics
Here, 1 C0 = A = (0 1
0 0 0
0 1) , 0
1 D0 = B = (1 1
0 0 1
1 0) . 0
We have det C0 = 0. Let y1 y = (y2 ) ∈ ker C0 . y3 Then 1 (0 0
0 0 0
0 y1 0 1 ) (y2 ) = (0) 0 y3 0
or y1 = 0,
y3 = 0. Thus, a nullspace projector to C0 is 0 Q0 = ( 1 0
0 0 0
0 0) . 0
Hence, 1 C1 = C0 + D0 Q0 = (0 1 1 = (0 1
0 0 0
0 0 0
0 0 1 ) + (0 0 1
0 1 1 ) + (1 0 1 0 0 0
0 0 1
0 1 0) = (0 0 2
We have that det C1 = 0.
1 0 0) ( 1 0 0 0 0 0
0 1) . 0
0 0 0
0 0) 0
� 77
78 � 2 Linear dynamic-algebraic equations with constant coefficients Moreover, 1 P0 = I − Q0 = (0 0
0 1 0
0 0 0) − ( 1 1 0
0 0 0
0 1 0) = (−1 0 0
1 D1 = D0 P0 = (1 1
0 0 1
1 1 0) (−1 0 0
0 1 0
0 1 0) = ( 1 1 0
0 1 0
0 0) , 1
and 0 0 1
1 0) . 0
Let y1 y = (y2 ) ∈ ker C1 . y3 Then 1 (0 2
0 0 0
0 y1 0 1 ) (y2 ) = (0) 0 y3 0
or y1 = 0,
y3 = 0,
2y1 = 0. Thus, a nullspace projector to C1 is 0 Q1 = ( 1 0
0 0 0
0 0) . 0
Then 1 C2 = C1 + D1 Q1 = (0 2 1 = (0 2
0 0 0
0 0 0
0 0 1 ) + (0 0 1
0 1 1) + (1 0 0 0 0 0
0 0 1
0 1 0 ) = (0 0 3
1 0 0) ( 1 0 0 0 0 0
0 1) . 0
0 0 0
0 0) 0
2.3 Structural characteristics
�
79
We have det C2 = 0, i. e., C2 is singular. Continuing in this way, we find 1 Cl = ( 0 l+1
0 0 0
0 1) , 0
l ∈ ℕ.
Thus, the sequence C0 ,
C1 , . . . , Ck , . . .
is a sequence of singular matrices. Observe that Rj = {(0, y2 , 0)T : y2 ∈ ℝ},
j ∈ ℕ0 .
Exercise 2.5. Let 𝕋 = 3ℕ0 and x1Δ = −x1σ + x2σ + f1 , x2Δ = x3σ + f2 ,
0 = x1 + x2 + x3 .
1. 2.
Find the matrices A and B in (2.1). Find the matrices C0 , C1 , C2 , C3 , C4 .
Definition 2.10. Let Q0 , . . . , Qk be admissible projectors for the matrix pair (A, B). If Q0 = Q0∗ and Xj = R̂ ⊥ j ∩ (R0 + ⋅ ⋅ ⋅ + Rj−1 ) and ker Qj = (R0 + ⋅ ⋅ ⋅ + Rj )⊥ ⊕ Xj ,
j ∈ {1, . . . , k},
then Q0 , Q1 , . . . , Qk are said to be widely orthogonal admissible projectors. Below, we will deduct some of the properties of the admissible projectors. Suppose that Q0 , Q1 , . . . , Qk are admissible projectors. Then we have the following:
80 � 2 Linear dynamic-algebraic equations with constant coefficients 1.
ker Πj = R0 + ⋅ ⋅ ⋅ + Rj , j ∈ {1, . . . , k}. Proof. Let z ∈ ker Πj be arbitrarily chosen. Then j
0 = Πj z = P0 . . . Pj z = ∏(I − Ql )z l=0
= z − Q0 z − Q1 z − ⋅ ⋅ ⋅ − Qj z − ⋅ ⋅ ⋅ − Q0 . . . Qj z,
j ∈ {0, . . . , k},
or z = Q0 z + ⋅ ⋅ ⋅ + Qj z + ⋅ ⋅ ⋅ + Q0 Q1 . . . Qj z ∈ R0 + ⋅ ⋅ ⋅ Rj ,
j ∈ {0, . . . , k}.
Since z ∈ ker Πj , j ∈ {0, . . . , k} was arbitrarily chosen and we get that it is an element of R0 + ⋅ ⋅ ⋅ + Rj , j ∈ {0, . . . , k}, we conclude that ker Πj ⊆ R0 + ⋅ ⋅ ⋅ + Rj ,
j ∈ {0, . . . , k}.
(2.18)
Now, we will prove the inverse inclusion of the inclusion (2.18). For j = 0, we have Π0 = P 0 and ker Π0 = ker P0 = R0 . Suppose that ker Πj−1 = R0 + ⋅ ⋅ ⋅ + Rj−1 , for j ∈ {0, . . . , k}. Since R0 + ⋅ ⋅ ⋅ + Rj = R0 + ⋅ ⋅ ⋅ + Rj−1 + Rj = Xj + R̂ j + Rj ,
j ∈ {0, . . . , k}.
Thus, each z ∈ R0 + ⋅ ⋅ ⋅ + Rj , j ∈ {0, . . . , k} can be represented in the form z = xj + ẑj + zj ,
j ∈ {0, . . . , k},
where xj ∈ Xj ⊆ R0 + ⋅ ⋅ ⋅ + Rj−1 ,
ẑj ∈ R̂ j ⊆ Rj ,
z j ∈ Rj ,
j ∈ {0, . . . , k}.
Now, since Qj , j ∈ {0, . . . , k}, are admissible projectors, we have Xj ⊆ ker Qj ,
j ∈ {0, . . . , k},
2.3 Structural characteristics
�
81
and Rj = im Qj ,
j ∈ {0, . . . , k}.
Thus, (I − Qj )z = z − Qj z = xj + ẑj + zj − Qj xj − Qj ẑj − Qj zj = xj − Qj xj = (I − Qj )xj ,
j ∈ {0, . . . , k},
and Πj z = Πj−1 Pj z = Πj−1 (I − Qj )z = Πj−1 (I − Qj )xj = Πj−1 xj − Πj−1 Qj xj = Πj−1 xj = 0,
j ∈ {0, . . . , k}.
Then z ∈ ker Πj , j ∈ {0, . . . , k}. Since z ∈ R0 + ⋅ ⋅ ⋅ + Rj was arbitrarily chosen and we get that it is an element of ker Πj , we find the inclusion R0 + ⋅ ⋅ ⋅ + Rj ⊆ ker Πj ,
j ∈ {0, . . . , k}.
By the last inclusion and (2.18), we find the desired result. This completes the proof. 2.
The products Πj = P0 . . . Pj ,
Πj−1 Qj = P0 . . . Pj−1 Qj ,
j ∈ {0, . . . , k},
are projectors. Proof. Let j ∈ {0, . . . , k} be arbitrarily chosen and fixed. Since im Ql = Rl and Rl ⊆ ker Πj , l ≤ j, we get Πj Pl = Πj (I − Ql ) = Πj ,
l ≤ j.
Hence, Π2j = Πj Πj = Πj P0 P1 . . . Pj = Πj P1 . . . Pj = ⋅ ⋅ ⋅ = Πj Pj = Πj and Πj Πj−1 = Πj P0 P1 . . . Pj−1 = Πj P1 . . . Pj−1 = ⋅ ⋅ ⋅ = Πj Pj−1 = Πj , and (Πj−1 Qj )2 = Πj−1 Qj Πj−1 Qj = Πj−1 (I − Pj )Πj−1 Qj
= Πj−1 Πj−1 Qj − Πj−1 Pj Πj−1 Qj = Πj−1 Qj − Πj Πj−1 Qj = Πj−1 Qj − Πj Qj
82 � 2 Linear dynamic-algebraic equations with constant coefficients = Πj−1 Qj − Πj (I − Pj ) = Πj−1 Qj − Πj + Πj Pj = Πj−1 Qj − Πj + Πj = Πj−1 Qj . This completes the proof. 3.
We have the inclusion R0 + ⋅ ⋅ ⋅ + Rj−1 ⊆ ker(Πj−1 Qj ),
j ∈ {1, . . . , k}.
Proof. Let j ∈ {1, . . . , k} and z ∈ R0 + ⋅ ⋅ ⋅ + Rj−1 be arbitrarily chosen. By (1) and R0 + ⋅ ⋅ ⋅ + Rj−1 ⊆ R0 + ⋅ ⋅ ⋅ + Rj , we get z ∈ ker Πj−1 ,
z ∈ ker Πj ,
i. e., we have Πj−1 z = 0,
Πj z = 0.
Hence, Πj−1 Qj z = Πj−1 (I − Pj )z = Πj−1 z − Πj−1 Pj z = Πj−1 z − Πj z = 0. Thus, z ∈ ker(Πj−1 Qj ). Since z ∈ R0 +⋅ ⋅ ⋅+Rj−1 was arbitrarily chosen and for it we get that it is an element of ker(Πj−1 Qj ), we obtain the desired inclusion. This completes the proof. 4.
Dj = Dj Πj−1 , j ∈ {1, . . . , k}. Proof. Let j ∈ {1, . . . , k} be arbitrarily chosen. By the definition of Dj , we get Dj = Dj−1 Pj−1 = Dj−2 Pj−2 Pj−1 = ⋅ ⋅ ⋅ = D0 P0 P1 . . . Pj−1 = D0 Πj−1 . Since Πj−1 is a projector, we have Π2j−1 = Πj−1 and Dj = D0 Πj−1 = D0 Πj−1 Πj−1 = Dj Πj−1 . This completes the proof.
5.
We have R̂ j ⊆ Rj ∩ ker Dj = Rj ∩ Rj+1 ⊆ R̂ j+1 ,
j ∈ {0, . . . , k}.
2.3 Structural characteristics
� 83
Proof. Let j ∈ {0, . . . , k} and z ∈ R̂ j = (R0 + ⋅ ⋅ ⋅ + Rj−1 ) ∩ Rj be arbitrarily chosen. Hence, z ∈ R0 + ⋅ ⋅ ⋅ + Rj−1
and
z ∈ Rj .
Applying (1), we get that z ∈ ker Πj−1 , i. e., Πj−1 z = 0. Hence and (4), we find Dj z = Dj Πj−1 z = 0, i. e., z ∈ ker Dj . Because z ∈ Rj and z ∈ ker Dj , we obtain z ∈ Rj ∩ ker Dj . Since z ∈ R̂ j was arbitrarily chosen and we get that it is an element of Rj ∩ ker Dj , we conclude that R̂ j ⊆ Rj ∩ ker Dj . Now, we suppose that z1 ∈ Rj ∩ ker Dj is arbitrarily chosen. Then z ∈ ker Dj and z ∈ Rj = im Qj = ker Cj . From here, Cj z1 = 0,
Qj z1 = 0,
Dj z 1 = 0
and Cj+1 z1 = (Cj + Dj Qj )z1 = Cj z1 + Dj Qj z1 = 0. Hence, z1 ∈ ker Cj+1 = Rj+1 . From z1 ∈ Rj and z1 ∈ Rj+1 , we obtain z1 ∈ Rj ∩ Rj+1 . Because z1 ∈ Rj ∩ ker Dj was arbitrarily chosen and we get that it is an element of Rj ∩ Rj+1 , we obtain the inclusion Rj ∩ ker Dj ⊆ Rj ∩ Rj+1 .
(2.19)
Let z2 ∈ Rj ∩ Rj+1 be arbitrarily chosen. Then z2 ∈ Rj and z2 ∈ Rj+1 . Hence, z2 ∈ ker Cj and z2 ∈ ker Cj+1 . Thus, z2 ∈ ker Cj ∩ ker Cj+1 .
84 � 2 Linear dynamic-algebraic equations with constant coefficients Therefore, Cj z 2 = 0 and 0 = Cj+1 z2 = (Cj + Dj Qj )z2 = Cj z2 + Dj Qj z2 = Dj Qj z2 = Dj z2 , i. e., z2 ∈ ker Dj . From here and from z2 ∈ Rj , we obtain z2 ∈ Rj ∩ ker Dj . Because z2 ∈ Rj ∩ Rj+1 was arbitrarily chosen and for it we get that it is an element of Rj ∩ ker Dj , we arrive at Rj ∩ Rj+1 ⊆ Rj ∩ ker Dj . By the last inclusion and by (2.19), we find Rj ∩ Rj+1 = Rj ∩ ker Dj . Let z3 ∈ Rj ∩ Rj+1 be arbitrarily chosen. Then z3 ∈ Rj and z3 ∈ Rj+1 . Hence, z3 ⊆ R0 + ⋅ ⋅ ⋅ + Rj . Hence, z3 ∈ (R0 + ⋅ ⋅ ⋅ + Rj ) ∩ Rj+1 = R̂ j+1 . Since z3 ∈ Rj ∩ Rj+1 was arbitrarily chosen and for it we get that it is an element of R̂ j+1 , we obtain Rj ∩ Rj+1 ⊆ R̂ j+1 . This completes the proof. 6.
If Q0 , . . . , Qk are widely orthogonal, then im Πj = (R0 + ⋅ ⋅ ⋅ + Rj )⊥ ,
Πj = Π⋆j ,
Πj−1 Qj = (Πj−1 Qj )⋆ ,
j ∈ {0, . . . , k}.
2.3 Structural characteristics
� 85
Proof. Let j ∈ {0, . . . , k} be arbitrarily chosen. For j = 0, by the definition of Π0 , we have Π0 = P0 = R⊥ 0,
Q0 = I − P0 .
Hence, Π0 = Π⋆0 . Assume that the assertion is true for j − 1. Since Xj ⊆ R0 + ⋅ ⋅ ⋅ + Rj−1 , we obtain Πj−1 Xj = 0. Next, using that im Πj = Πj−1 im Pj = Πj−1 ((R0 + ⋅ ⋅ ⋅ + Rj )⊥ + Xj ) = Πj−1 (R0 + ⋅ ⋅ ⋅ + Rj )⊥ . Note that (R0 + ⋅ ⋅ ⋅ + Rj )⊥ ⊆ (R0 + ⋅ ⋅ ⋅ + Rj−1 )⊥ = im Πj−1 . Therefore, im Πj = (R0 + ⋅ ⋅ ⋅ + Rj )⊥ . Moreover, Πj is the orthoprojector onto (R0 + ⋅ ⋅ ⋅ + Rj )⊥ along R0 + ⋅ ⋅ ⋅ + Rj . Thus, Πj = Π⋆j . Next, Πj−1 Qj = Πj−1 (I − Pj ) = Πj−1 − Πj−1 Pj = Πj−1 − Πj = Π⋆j−1 − Π⋆j = (Πj−1 − Πj )⋆ = (Πj−1 Qj )⋆ .
This completes the proof. 7.
If Q0 , . . . , Qk are regular admissible, then ker(Πj−1 Qj ) = ker Qj ,
j ∈ {1, . . . , k},
86 � 2 Linear dynamic-algebraic equations with constant coefficients and Qj Ql = 0,
l ∈ 0, . . . , j − 1,
j ∈ {1, . . . , k}.
Proof. Let j ∈ {1, . . . , k} be arbitrarily chosen and R̂ j = 0. Then Xj = R0 + ⋅ ⋅ ⋅ + Rj = R0 ⊕ ⋅ ⋅ ⋅ ⊕ Rj . Then, using (1) and the definition for regular admissible matrices, we obtain ker Πj−1 = R0 ⊕ ⋅ ⋅ ⋅ ⊕ Rj−1 = Xj ⊆ ker Qj .
(2.20)
Thus, Qj Ql = 0,
l ∈ {0, . . . , j − 1}.
Note that ker Qj ⊆ ker(Πj−1 Qj ).
(2.21)
Let z ∈ ker(Πj−1 Qj ) be arbitrarily chosen. Then we have Qj z ∈ ker Πj−1 ⊆ ker Qj , from where z ∈ ker Qj . Because z ∈ ker(Πj−1 Qj ) was arbitrarily chosen and we get that it is an element of ker Qj , we obtain the inclusion ker(Πj−1 Qj ) ⊆ ker Qj . Hence and (2.21), we find ker(Πj−1 Qj ) = ker Qj . This completes the proof. 8.
Cj Pj−1 = Cj−1 , j ∈ {1, . . . , k}. Proof. Let j ∈ {1, . . . , k} be arbitrarily chosen. By the definition, we have Cj = Cj−1 + Dj−1 Qj−1 . Hence,
2.3 Structural characteristics
�
87
Cj Pj−1 = Cj−1 Pj−1 + Dj−1 Qj−1 Pj−1 = Cj−1 Pj−1 = Cj−1 . This completes the proof. 9.
Cj+1 Qj = Dj Qj , j ∈ {0, . . . , k}. Proof. Let j ∈ {0, . . . , k} be arbitrarily chosen. Then Cj+1 = Cj + Dj Qj ,
Cj Qj = 0
and Cj+1 Qj = (Cj + Dj Qj )Qj = Cj Qj + Dj Qj2 = Dj Qj . This completes the proof. 10. I − Πj−1 = Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πj−2 Qj−1 ,
j ∈ {1, . . . , k}.
Proof. Let j ∈ {1, . . . , k} be arbitrarily chosen and fixed. Then Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πj−2 Qj−1
= I − P0 + P0 (I − P1 ) + P0 P1 (I − P2 ) + ⋅ ⋅ ⋅ + P0 P1 . . . Pj−2 (I − Pj−1 )
= I − P0 + P0 − P0 P1 + P0 P1 − P0 P1 P2
+ ⋅ ⋅ ⋅ − P0 P1 . . . Pj−2 + P0 P1 . . . Pj−2 − P0 P1 . . . Pj−1
= I − P0 P1 . . . Pj−1 = I − Πj−1 . This completes the proof.
Theorem 2.3. Let A, B ∈ Mm×m and {Cj }j≥0 be an admissible matrix sequence for the matrix pair (A, B) such that there is an integer ν so that Cν is nonsingular. Then Cν−1 A = Πν−1 + (I − Πν−1 )Cν−1 A(I − Πν−1 ), Cν−1 B
= Q0 + ⋅ ⋅ ⋅ + Qν−1 + (I −
Πν−1 )Cν−1 BΠν−1
(2.22) +
Πν−1 Cν−1 BΠν−1
and the matrix pair (A, B) is σ-regular. Proof. By (10), we get B(I − Πν−1 ) = B(Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πν−2 Qν−1 ) = D0 (Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πν−2 Qν−1 ) = D0 Q0 + D0 Π0 Q1 + D0 Π1 Q2 + ⋅ ⋅ ⋅ + D0 Πν−2 Qν−1
= D0 Q0 + D0 P0 Q1 + D0 P0 P1 Q2 + ⋅ ⋅ ⋅ + D0 P0 P1 . . . Pν−2 Qν−1
(2.23)
88 � 2 Linear dynamic-algebraic equations with constant coefficients = D0 Q0 + D1 Q1 + D1 P1 Q2 + ⋅ ⋅ ⋅ + D1 P1 . . . Pν−2 Qν−1 = D0 Q0 + D1 Q1 + D2 Q2 + ⋅ ⋅ ⋅ + D1 P1 . . . Pν−2 Qν−1 = ⋅ ⋅ ⋅ = D0 Q0 + D1 Q1 + D2 Q2 + ⋅ ⋅ ⋅ + Dν−1 Qν−1 . Now, we apply the definition of Cj , j ∈ {0, . . . , ν}, and we find B(I − Πν−1 ) = −C0 + C0 + D0 Q0 + D1 Q1 + D2 Q2 + ⋅ ⋅ ⋅ + Dν−1 Qν−1 = −C0 + C1 + D1 Q1 + D2 Q2 + ⋅ ⋅ ⋅ + Dν−1 Qν−1
= −C0 + C2 + D2 Q2 + ⋅ ⋅ ⋅ + Dν−1 Qν−1 = −C0 + C3 + ⋅ ⋅ ⋅ + Dν−1 Qν−1 = ⋅ ⋅ ⋅ = Cν − C0 . Hence and (8), we arrive at Cν = C0 + B(I − Πν−1 ) = A + B(I − Πν−1 )
(2.24)
and B(I − Πν−1 ) = Cν − C0 = Cν − C1 P0 = Cν − C2 P0 P1
= Cν − C3 P0 P1 P2 = ⋅ ⋅ ⋅ = Cν − Cν P0 P1 . . . Pν−1 = Cν − Cν Πν−1 = Cν (I − Πν−1 ),
and B(I − Πν−1 ) = Cν (I − P0 P1 ⋅ ⋅ ⋅ Pν−1 ) = Cν (I − (I − Q0 )(I − Q1 ) . . . (I − Qν−1 ))
= Cν (I − I + Q0 + Q1 + ⋅ ⋅ ⋅ + Qν−1 − Q0 Q1 + ⋅ ⋅ ⋅ + (−1)ν+1 Q0 Q1 . . . Qν−1 ) = Cν (Q0 + Q1 + ⋅ ⋅ ⋅ + Qν−1 ).
Therefore, Cν−1 B(I − Πν−1 ) = I − Πν−1 and Q0 + Q1 + ⋅ ⋅ ⋅ + Qν−1 = Cν−1 B(I − Πν−1 ), and Q0 + Q1 + ⋅ ⋅ ⋅ + Qν−1 = (I − Πν−1 )(Q0 + Q1 + ⋅ ⋅ ⋅ + Qν−1 ) = (I − Πν−1 )Cν−1 B(I − Πν−1 ), (2.25) and Πν−1 Cν−1 B(I − Πν−1 ) = Πν−1 (I − Πν−1 ) = Πν−1 − Π2ν−1 = Πν−1 − Πν−1 = 0, and
(2.26)
2.3 Structural characteristics
� 89
Cν−1 B(I − Πν−1 )Πν−1 = (I − Πν−1 )Πν−1 = Πν−1 − Π2ν−1 = Πν−1 − Πν−1 = 0. Next, by (2.24), we find I = Cν−1 Cν = Cν−1 (A + B(I − Πν−1 )) = Cν−1 A + Cν−1 B(I − Πν−1 )
(2.27)
and then Πν−1 = Πν−1 (Cν−1 A + Cν−1 B(I − Πν−1 )) = Πν−1 Cν−1 A + Πν−1 Cν−1 B(I − Πν−1 ) = Πν−1 Cν−1 A, (2.28) and Πν−1 = (Cν−1 A + Cν−1 B(I − Πν−1 ))Πν−1 = Cν−1 AΠν−1 + Cν−1 B(I − Πν−1 )Πν−1 = Cν−1 AΠν−1 . Now, we multiply the equation (2.28) with I − Πν−1 and we arrive at Πν−1 Cν−1 A(I − Πν−1 ) = Πν−1 (I − Πν−1 ) = Πν−1 − Π2ν−1 = Πν−1 − Πν−1 = 0.
(2.29)
Consequently, Πν−1 + (I − Πν−1 )Cν−1 A(I − Πν−1 ) = Πν−1 + Cν−1 A(I − Πν−1 ) − Πν−1 Cν−1 A(I − Πν−1 )
= Πν−1 + Cν−1 A(I − Πν−1 ) = Πν−1 + Cν−1 A − Cν−1 AΠν−1 = Πν−1 + Cν−1 A − Πν−1 = Cν−1 A,
i. e., (2.22) holds, and Q0 + ⋅ ⋅ ⋅ + Qν−1 + (I − Πν−1 )Cν−1 BΠν−1 + Πν−1 Cν−1 BΠν−1 = Cν−1 B(I − Πν−1 ) + (I − Πν−1 )Cν−1 BΠν−1 + Πν−1 Cν−1 BΠν−1 = Cν−1 B − Cν−1 BΠν−1 + Cν−1 BΠν−1 − Πν−1 Cν−1 BΠν−1 + Πν−1 Cν−1 BΠν−1 = Cν−1 B, i. e., (2.23) holds. Let Γ be the set of all eigenvalues of the matrix −Πν−1 Cν−1 B. Suppose that λ ∈ ̸ Γ. Consider the equation (λA + B)z = 0. It is equivalent to the equations 0 = Cν−1 (λA + B)z = λCν−1 Az + Cν−1 Bz (2.30)
= λCν−1 A(I − Πν−1 + Πν−1 )z + Cν−1 B(I − Πν−1 + Πν−1 )z =
λCν−1 AΠν−1 z
+
λCν−1 A(I
− Πν−1 )z +
Cν−1 BΠν−1 z
+
Cν−1 B(I
− Πν−1 )z.
90 � 2 Linear dynamic-algebraic equations with constant coefficients We multiply the last equation with Πν−1 and using (2.26), (2.29), we arrive at 0 = λΠν−1 Cν−1 AΠν−1 z + λΠν−1 Cν−1 A(I − Πν−1 )z + Πν−1 Cν−1 BΠν−1 z + Πν−1 Cν−1 B(I − Πν−1 )z = λΠν−1 Cν−1 AΠν−1 z + Πν−1 Cν−1 BΠν−1 z = λΠ2ν−1 z + Πν−1 Cν−1 BΠν−1 z = λΠν−1 z + Πν−1 Cν−1 BΠν−1 z = (λI + Πν−1 Cν−1 B)Πν−1 z. Therefore, Πν−1 z = 0. Now, (2.30) takes the form 0 = λCν−1 A(I − Πν−1 )z + Cν−1 B(I − Πν−1 )z, which we multiply by I − Πν−1 and we find 0 = λ(I − Πν−1 )Cν−1 A(I − Πν−1 )z + (I − Πν−1 )Cν−1 B(I − Πν−1 )z.
(2.31)
Note that by (2.27), we have Cν−1 A = I − Cν−1 B(I − Πν−1 ). Hence and (2.31), we find 0 = λ(I − Πν−1 )(I − Cν−1 B(I − Πν−1 ))(I − Πν−1 )z + (I − Πν−1 )Cν−1 B(I − Πν−1 )z
= λ(I − Πν−1 )2 z − λ(I − Πν−1 )Cν−1 B(I − Πν−1 )2 z + (I − Πν−1 )Cν−1 B(I − Πν−1 )2 z = λ(I − Πν−1 )z + (1 − λ)(I − Πν−1 )Cν−1 B(I − Πν−1 )2 z.
If λ = 1, then z = Πν−1 z = 0. Let λ ≠ 0. Then, applying (2.25), we get λ I + (I − Πν−1 )Cν−1 B(I − Πν−1 ))(I − Πν−1 )z 1−λ λ λ =( I + Q0 + ⋅ ⋅ ⋅ + Qν−1 )(z − Πν−1 z) = ( I + Q0 + ⋅ ⋅ ⋅ + Qν−1 )z. 1−λ 1−λ
0=(
We multiply the last equation by Qν−1 and we find 0=( Therefore,
λ λ 1 2 Q + Q0 Qν−1 + ⋅ ⋅ ⋅ + Qν−1 )z = ( Q + Qν−1 )z = Q z. 1 − λ ν−1 1 − λ ν−1 1 − λ ν−1
(2.32)
2.3 Structural characteristics
�
91
Qν−1 z = 0 and 0=(
λ I + Q0 + ⋅ ⋅ ⋅ + Qν−2 )z. 1−λ
We multiply the last equation with Qν−2 and we find 0=(
λ λ 1 2 Q + Q0 Qν−2 + ⋅ ⋅ ⋅ + Qν−2 )z = ( Q + Qν−2 )z = Q z, 1 − λ ν−2 1 − λ ν−2 1 − 0λ ν−2
i. e., Qν−2 z = 0. Continuing as above, we obtain Qj z = 0,
j ∈ {0, . . . , ν − 1}.
Hence, using (10), we find z = (I − Πν−1 )z = (Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πν−2 Qν−1 )z = 0. This completes the proof. Theorem 2.4. Let A, B ∈ Mm×m and {Cj }j≥0 be an admissible matrix sequence for the matrix pair (A, B) such that there is an integer ν so that Cν is nonsingular. Then the matrix pair (A, B) is regular. Proof. Let Γ1 be the set of all eigenvalues of the matrix Πν−1 Cν−1 B. Suppose that λ ∈ ̸ Γ1 . As in the proof of Theorem 2.3, we have 0 = (λI − Πν−1 Cν−1 B)Πν−1 z and 0 = λ(I − Πν−1 )Cν−1 A(I − Πν−1 )z + (I − Πν−1 )Cν−1 B(I − Πν )z = λ(I − Πν−1 )z + (1 − λ)(I − Πν−1 )Cν−1 B(I − Πν−1 )2 z.
If λ = 1, then z = 0. Let λ ≠ 1. Then, as we have deducted (2.32), we arrive at 0=( and
λ I + Q0 + ⋅ ⋅ ⋅ + Qν−1 )z 1−λ
(2.33)
92 � 2 Linear dynamic-algebraic equations with constant coefficients Qj z = 0,
j ∈ {0, . . . , ν − 1},
and z = (I − Πν−1 )z = (Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πν−2 Qν−1 )z = 0. This completes the proof. Definition 2.11. For any matrix pair (A, B), A, B ∈ Mm×m , the integers rj = rank Cj ,
j ≥ 0,
uj = dim R̂ j ,
j ≥ 1,
where {Cj }j≥0 is an admissible matrix sequence for the matrix pair (A, B), are called structural characteristic values. If there is an integer ν so that Cν is nonsingular, then we have r0 ≤ r1 ≤ ⋅ ⋅ ⋅ ≤ rν−1 < rν = m.
2.4 Decoupling Let A, B ∈ Mm×m and {Cj }j≥0 be the admissible matrix sequence to the matrix pair (A, B) such that there is an integer ν for which Cν is nonsingular. Then, by Theorem 2.3 and Theorem 2.4, it follows that the matrix pair (A, B) is σ-regular and regular. For the structural characteristic values, we have r0 ≤ r1 ≤ ⋅ ⋅ ⋅ ≤ rν−1 < rν = m. Let Q0 , Q1 , . . . , Qν be the involved projectors. Then Qν = 0,
Pν = I.
Hence, Πν = P0 P1 . . . Pν−1 Pν = P0 P1 . . . Pν−1 = Πν−1 . Moreover, R̂ j = Rj ∩ (R0 + ⋅ ⋅ ⋅ + Rj−1 ) = {0},
j ∈ {1, . . . , ν − 1},
and then R0 + ⋅ ⋅ ⋅ + Rj−1 = R0 ⊕ ⋅ ⋅ ⋅ ⊕ Rj−1 , By (7), it follows that
Xj = R0 ⊕ ⋅ ⋅ ⋅ ⊕ Rj−1 .
2.4 Decoupling
Ql Qj = 0,
l ∈ {1, . . . , ν − 1},
� 93
j ∈ {0, . . . , l − 1}.
Consider the equation (2.1). We have C0 = A,
D0 = B,
C0 P0 = C0 (I − Q0 ) = C0 − C0 Q0 = C0
and Π0 = P0 . Hence, C0 = C0 Π0 . Therefore, the equation (2.2) can be rewritten in the form Δ
C0 (Π0 x(⋅)) (t) = D0 x(t) + f (t).
(2.34)
We apply (9) and we find D0 = D0 I = D0 (P0 + Q0 ) = D0 P0 + D0 Q0 = D0 Π0 + D0 Q0 = D0 Π0 + C1 Q0 . By (8), we have C0 = C1 P0 = C1 (P1 + Q1 )P0 = C1 P1 P0 + C1 Q1 P0 = C1 P1 P0 . Hence, the equation (2.34) can be rewritten in the form Δ
C1 P1 P0 (Π0 x(⋅)) (t) = D0 Π0 x(t) + C1 Q0 x(t) + f (t) or C1 P1 P0 x Δ (t) = D0 Π0 x(t) + C1 Q0 x(t) + f (t). Next, we have the following relations: C1 P1 P0 = C1 (Π0 + I − Π0 )P1 P0 = C1 Π0 P1 P0 + C1 (I − Π0 )P1 P0 = C1 P0 P1 P0 + C1 (I − Π0 )(I − Q1 )(I − Q0 )
= C1 P0 (I − Q1 )(I − Q0 ) + C1 (I − Π0 )(I − Q0 − Q1 + Q1 Q0 ) = C1 P0 (I − Q1 − Q0 + Q1 Q0 ) + C1 (I − Π0 )(P0 − Q1 ) = C1 P0 (P1 − Q0 ) + C1 (P0 − P02 ) − C1 (I − P0 )Q1
= C1 (P0 P1 − P0 Q0 ) − C1 (I − P0 )Q1 = C1 P0 P1 − C1 (I − P0 )Q12
= C1 Π1 − C1 (I − P0 )Q1 Q1 = C1 Π1 − C1 (I − P0 )Q1 (I − Π0 + Π0 )Q1
(2.35)
94 � 2 Linear dynamic-algebraic equations with constant coefficients = C1 Π1 − C1 (I − P0 )Q1 (Q0 + Π0 )Q1 = C1 Π1 − C1 (I − P0 )Q1 (Q0 Q1 + Π0 Q1 ) = C1 Π1 − C1 (I − P0 )Q1 Π0 Q1 = C1 Π1 − C1 (I − Π0 )Q1 Π0 Q1 . Then we can rewrite the equation (2.35) as follows: (C1 Π1 − C1 (I − Π0 )Q1 Π0 Q1 )x Δ (t) = D0 Π0 x(t) + C1 Q0 x(t) + f (t) or Δ
C1 Π1 x Δ (t) − C1 (I − Π0 )Q1 Π0 Q1 (Π0 x(⋅)) (t) = D0 Π0 x(t) + C1 Q0 x(t) + f (t), or Δ
Δ
C1 (Π1 x(⋅)) (t) − C1 (I − Π0 )Q1 Π0 Q1 (Π0 x(⋅)) (t) − C1 Q0 x(t) = D0 Π0 x(t) + f (t), or Δ
Δ
C1 (Π1 x(⋅)) (t) − C1 (I − Π0 )Q1 (Π0 Q1 Π0 x(⋅)) (t) − C1 Q0 x(t) = D1 x(t) + f (t), or Δ
Δ
C1 (Π1 x(⋅)) (t) − C1 (I − Π0 )Q1 (Π0 Q1 (I − Q0 )x(⋅)) (t) − C1 Q0 x(t) = D1 x(t) + f (t), or Δ
Δ
C1 (Π1 x(⋅)) (t) − C1 (I − Π0 )Q1 (Π0 Q1 x(⋅)) (t) − C1 Q0 x(t) = D1 x(t) + f (t), or Δ
Δ
C1 (Π1 x(⋅)) (t) − D1 x(t) − C1 (Q0 x(t) + (I − Π0 )Q1 (Π0 Q1 x(⋅)) (t)) = f (t). By induction, we find Δ
Cj (Πj x(⋅)) (t) − Dj x(t) j−1
Δ
− Cj ∑(Ql x(t) + (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t)) = f (t), l=0
j ∈ {1, . . . , ν − 1}.
Now, we use that Cj+1 Pj+1 Pj = Cj+1 (I − Qj+1 )Pj = Cj+1 Pj = Cj and Dj Qj = Cj+1 Qj ,
Cj Ql = Cj+1 Pj Ql = Cj+1 (I − Qj )Ql = Cj+1 Ql ,
l ∈ {0, . . . , j − 1}.
(2.36)
2.4 Decoupling
� 95
Now, using that Πj P l = Πj ,
l ≤ j,
we get Πj+1 Pj Πj = Πj+1 Πj = Πj+1 P0 P1 . . . Pj = Πj+1 P1 . . . Pj = ⋅ ⋅ ⋅ = Πj+1 Pj = Πj+1 and Pj+1 Pj Πj = (Πj + I − Πj )Pj+1 Pj Πj = Πj Pj+1 Pj Πj + (I − Πj )Pj+1 Pj Πj = Πj+1 Pj Πj + (I − Πj )Pj+1 Pj Πj = Πj+1 + (I − Πj )Pj+1 Pj Πj
= Πj+1 + (I − Πj )(I − Qj+1 )(I − Qj )Πj = Πj+1 + (I − Πj )(I − Qj − Qj+1 )Πj = Πj+1 + (I − Πj )(Pj − Qj+1 )Πj = Πj+1 + (I − Πj )Pj Πj − (I − Πj )Qj+1 Πj
= Πj+1 + (I − Πj )Πj − (I − Πj )Qj+1 Πj = Πj+1 + (Πj − Π2j ) − (I − Πj )Qj+1 Πj = Πj+1 − (I − Πj )Qj+1 Πj = Πj+1 − (I − Πj )Qj+1 (I − Q0 ) . . . (I − Qj )
= Πj+1 − (I − Πj )Qj+1 (I − Q0 − ⋅ ⋅ ⋅ − Qj + Q0 Q1 + ⋅ ⋅ ⋅ + (−1)j+1 Q0 . . . Qj ) = Πj+1 − (I − Πj )Qj+1 = Πj+1 − (I − Πj )Qj+1 Qj+1 = Πj+1 − (I − Πj )Qj+1 (I − Πj + Πj )Qj+1
= Πj+1 − (I − Πj )Qj+1 (I − Πj )Qj+1 − (I − Πj )Qj+1 Πj Qj+1
= Πj+1 − (I − Πj )Qj+1 (Qj+1 − Πj Qj+1 ) − (I − Πj )Qj+1 Πj Qj+1 = Πj+1 − (I − Πj )Qj+1 Πj Qj+1 .
Hence, the equation (2.36) takes the form Δ
Cj+1 Pj+1 Pj (Πj x(⋅)) (t) − Dj (Pj + Qj )x(t) j−1
Δ
− Cj ∑(Ql x(t) + (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t)) = f (t) l=0
or Δ
Cj+1 (Πj+1 x(⋅)) (t) − Dj Pj x(t) − Dj Qj x(t) j−1
Δ
− Cj ∑(Ql x(t) + (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t)) = f (t), l=0
or Δ
Cj+1 (Πj+1 x(⋅)) (t) − Dj+1 x(t) − Cj+1 Qj x(t) j−1
j
l=0
l=0
Δ
− Cj+1 Pj ∑ Ql x(t) − Cj+1 Pj ∑(I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t) = f (t),
96 � 2 Linear dynamic-algebraic equations with constant coefficients or j
Δ
Δ
Cj+1 (Πj+1 x(⋅)) (t) − Dj+1 x(t) − Cj+1 ∑(Ql x(t) + (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t)) = f (t), l=0
j ∈ {1, . . . , ν}. In particular, for j = ν − 1, we have ν−1
Δ
Δ
Cν (Πν x(⋅)) (t) − Dν x(t) − Cν ∑ (Ql x(t) + (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t)) = f (t). l=0
Since Qν = 0, Pν = I, Πν = Πν−1 , by the last equation, we find Δ
ν−1
ν−2
l=0
l=0
Δ
Cν (Πν−1 x(⋅)) (t) − Dν x(t) − Cν ∑ Ql x(t) − Cν ∑ (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t) = f (t), whereupon Δ
ν−1
ν−2
l=0
l=0
Δ
(Πν−1 x(⋅)) (t) − Cν−1 Dν x(t) − ∑ Ql x(t) − ∑ (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t) = Cν−1 f (t). The last equation can be written as follows: Δ
(Πν−1 x(⋅)) (t) − Πν−1 Cν−1 Dν x(t) − (I − Πν−1 )Cν−1 Dν x(t) ν−1
ν−2
l=0
l=0
Δ
− ∑ Ql x(t) − ∑ (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t) = Πν−1 Cν−1 f (t) + (I − Πν−1 )Cν−1 f (t). From here, we can decoupled the last equation into two equations. Δ
(Πν−1 x(⋅)) (t) − Πν−1 Cν−1 Dν x(t) = Πν−1 Cν−1 f (t)
(2.37)
and ν−1
ν−2
l=0 −1 Πν−1 )Cν f (t).
l=0
Δ
(I − Πν−1 )Cν−1 Dν x(t) + ∑ Ql x(t) + ∑ (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t) = −(I −
(2.38)
Since Πj , j ∈ {0, . . . , ν}, are constant matrices, we can consider the equation (2.37) as a dynamic equation with respect to Πν−1 x. Now, we will show that the equation (2.38) is solvable. For this aim, we observe that we have the following relations: I − Πν−1 = I − P0 P1 P2 . . . Pν−1 = I − (I − Q0 )P1 P2 . . . Pν−1 = I − P1 P2 . . . Pν−1 + Q0 P1 P2 . . . Pν−1
= Q0 P1 P2 . . . Pν−1 + I − (I − Q1 )P2 . . . Pν−1
2.4 Decoupling
� 97
= Q0 P1 P2 . . . Pν−1 + Q1 P2 . . . Pν−1 + I − P2 . . . Pν−1
= ⋅ ⋅ ⋅ = Q0 P1 P2 . . . Pν−1 + Q1 P2 . . . Pν−1 + ⋅ ⋅ ⋅ + Qν−2 + I − Pν1 = Q0 P1 P2 . . . Pν−1 + Q1 P2 . . . Pν−1 + ⋅ ⋅ ⋅ + Qν−2 Pν−1 + Qν1 . For l ∈ {0, . . . , ν − 2}, we have the relations: Ql Pl+1 . . . Pν−1 Ql = Ql Pl+1 . . . Pν−2 (I − Qν−1 )Ql
= Ql Pl+1 . . . Pν−2 (Ql − Qν−1 Ql ) = Ql Pl+1 . . . Pν−2 Ql = Ql Pl+1 . . . Pν−3 (I − Qν−2 )Ql
= Ql Pl+1 . . . Pν−3 (Ql − Qν−2 Ql ) = Ql Pl+1 . . . Pν−3 Ql = ⋅ ⋅ ⋅ = Ql Pl+1 Ql = Ql (I − Ql+1 )Ql = Ql (Ql − Ql+1 Ql ) = Ql2 = Ql , and hence, (Ql Pl+1 . . . Pν−1 Ql )2 = Ql Pl+1 . . . Pν−1 Ql Ql Pl+1 . . . Pν−1 Ql
= Ql Pl+1 . . . Pν−1 Ql Pl+1 . . . Pν−1 Ql = Ql Pl+1 . . . Pν−1 Ql ,
i. e., Ql Pl+1 . . . Pν−1 Ql are projectors. Let l, m ∈ {0, . . . , ν − 2}, l ≠ m. If m > l, then Ql Pl+1 . . . Pν−1 Qm = Ql Pl+1 . . . Pm . . . Pν−2 Pν−1 Qm
= Ql Pl+1 . . . Pm . . . Pν−2 (I − Qν−1 )Qm
= Ql Pl+1 . . . Pm . . . Pν−2 (Qm − Qν−1 Qm )
= Ql Pl+1 . . . Pm . . . Pν−2 Qm = ⋅ ⋅ ⋅ = Ql Pl+1 . . . Pm Qm = 0. If m < l, using the last chain of relations, we find Ql Pl+1 . . . Pν−1 Qm = Ql Pl+1 . . . Pν−2 Qm = ⋅ ⋅ ⋅ = Ql Qm = 0. Let l ∈ {0, . . . , ν − 2} and m ∈ {0, . . . , l − 1}. Then Ql Pl+1 . . . Pν−1 (I − Πm )Qm+1
= Ql Pl+1 . . . Pν−1 (Q0 P1 . . . Pm + Q1 P2 . . . Pm + ⋅ ⋅ ⋅ + Qm−1 Pm + Qm )Qm+1
= Ql Pl+1 . . . Pν−1 (Q0 P1 . . . Pm Qm+1 + Q1 P2 . . . Pm Qm+1 + ⋅ ⋅ ⋅ + Qm−1 Pm Qm+1 + Qm Qm+1 ) = Ql Pl+1 . . . Pν−1 Qm Qm+1 = 0 and
98 � 2 Linear dynamic-algebraic equations with constant coefficients Ql (I − Πm )Qm+1 = Ql Pl+1 . . . Pν−1 (I − Πm )Qm+1 = 0. Let l ∈ {0, . . . , ν − 2}. Then Ql Pl+1 . . . Pν−1 (I − Πl )Ql+1
= Ql Pl+1 . . . Pν−1 (Q0 P1 . . . Pl−1 + Q1 P2 . . . Pl−1 + ⋅ ⋅ ⋅ + Ql−1 Pl + Ql )Ql+1 = Ql Pl+1 . . . Pν−1 Q0 P1 . . . Pl−1 Ql+1 + Q1 Pl+1 . . . Pν−1 Q1 P2 . . . Pl−1 Ql+1 + ⋅ ⋅ ⋅ + Ql Pl+1 . . . Pν−1 Ql−1 Pl Ql+1 + Ql Pl+1 . . . Pν−1 Ql Ql+1
= Ql Pl+1 . . . Pν−1 Ql Ql+1 = Ql Ql+1 . For j ∈ {0, . . . , ν − 2}, we have Qj Pj+1 . . . Pν−1 (I − Πν−1 )
= Qj Pj+1 . . . Pν−1 (Q0 P1 P2 . . . Pν−1 + Q1 P2 . . . Pν−1 + ⋅ ⋅ ⋅ + Qν−2 Pν−1 + Qν−1 ) = Qj Pj+1 . . . Pν−1 Q0 P1 P2 . . . Pν−1 + Qj Pj+1 . . . Pν−1 Q1 P2 . . . Pν−1
+ ⋅ ⋅ ⋅ + Qj Pj+1 . . . Pν−1 Qj Pj+1 . . . Pν−1 + ⋅ ⋅ ⋅ + Qj Pj+1 . . . Pν−1 Qν−2 Pν−1
+ Qj Pj+1 . . . Pν−1 Qν−1 = Qj Pj+1 . . . Pν−1 Qj Pj+1 . . . Pν−1 = Qj Pj+1 . . . Pν−1 . Moreover, Qν−1 (I − Πν−1 ) = Qν−1 (Q0 P1 . . . Pν−1 + Q1 P2 . . . Pν−1 + ⋅ ⋅ ⋅ + Qν−2 Pν−1 + Qν−1 )
= Qν−1 Q0 P1 . . . Pν−1 + Qν−1 Q1 P2 . . . Pν−1 + ⋅ ⋅ ⋅ + Qν−1 Qν−2 Pν−1 + Qn−1 Qν−1 = Qν−1 .
Now, we multiply by Qj Pj+1 . . . Pν−1 , j ∈ {0, . . . , ν − 2}, the equation (2.38) and we find Qj Pj+1 . . . Pν−1 (I − Πν−1 )Cν−1 Dν x(t) ν−2
+ Qj Pj+1 . . . Pν−1 ∑ Ql x(t) l=0
ν−1
Δ
+ Qj Pj+1 . . . Pν−1 ∑ (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t) l=0
= −Qj Pj+1 . . . Pν−1 (I − Πν−1 )Cν−1 f (t) or Qj Pj+1 . . . Pν−1 Cν−1 Dν x(t) + Qj x(t)
Δ
+ Qj Pj+1 . . . Pν−1 (I − Πj )Qj+1 (Πj Qj+1 x(⋅)) (t) ν−2
Δ
+ Qj Pj+1 . . . Pν−1 ∑ (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t) l=j+1
2.4 Decoupling
� 99
= −Qj Pj+1 . . . Pν−1 Cν−1 f (t), or Qj Pj+1 . . . Pν−1 Cν−1 Dν x(t) + Qj x(t) Δ
ν−2
Δ
+ Qj Qj+1 (Πj Qj+1 x(⋅)) (t) + ∑ Qj Pj+1 . . . Pν−1 Ql+1 (Πl Ql+1 x(⋅)) (t) l=j+1
(2.39)
= −Qj Pj+1 . . . Pν−1 Cν−1 f (t). Now, we multiply by Qν−1 the equation (2.38) and we find ν−1
Qν−1 (I − Πν−1 )Cν−1 Dν x(t) + Qν ∑ Ql x(t) ν−1
l=0
Δ
+ Qν−1 ∑ (I − Πl )Ql+1 (Πl Ql+1 x(⋅)) (t) l=0
= −Qν−1 (I − Πν−1 )Cν−1 f (t), or Δ
Qν−1 Cν−1 Dν x(t) + Qν−1 x(t) + Qν−1 (I − Πν−1 )Qν (Πν−1 Qν x(⋅)) (t) = −Qν−1 (I − Πν−1 )Cν−1 f (t), or Qν−1 Cν−1 Dν x(t) + Qν−1 x(t) = −Qν−1 (I − Πν−1 )Cν−1 f (t),
(2.40)
whereupon Qν−1 x(t) = −Qν−1 Cν−1 Dν x(t) − Qν−1 (I − Πν−1 )Cν−1 f (t). For j = ν − 2, by the equation (2.39), we find Δ
Qν−2 Pν−1 Cν−1 Dν x(t) + Qν−2 x(t) + Qν−2 Qν−1 (Πν−2 Qν−1 x(⋅)) (t) = −Qν−2 Pν−1 Cν−1 f (t), or Δ
Qν−2 x(t) = −Qν−2 Pν−1 Cν−1 Dν x(t) − Qν−2 Qν−1 (Πν−2 Qν−1 x(⋅)) (t) − Qν−2 Pν−1 Cν−1 f (t).
Continuing in this way, we obtain Ql x(t) with their dependence on Πν−1 x(t) and Ql+m x(t), m ∈ {1, . . . , ν−1−l}, l ∈ {0, . . . , ν−1}. To determine the whole solution x(t), we have a need of Q0 x(t) and the components Πj−1 Qj x(t), j ∈ {1, . . . , ν − 1} and we will use the expression
100 � 2 Linear dynamic-algebraic equations with constant coefficients I = Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πν−2 Qν−1 + Πν−1 . We multiply the equation (2.39) by Πj−1 , j ∈ {1, . . . , ν − 2}, and we get Πj−1 Qj Pj+1 . . . Pν−1 Cν−1 Dν x(t) + Πj−1 Qj x(t) ν−2
Δ
Δ
+ Πj−1 Qj Qj+1 (Πj Qj+1 x(⋅)) (t) + ∑ Πj−1 Qj Pj+1 . . . Pν−1 Ql+1 (Πl Ql+1 x(⋅)) (t) l=j+1
(2.41)
= −Πj−1 Qj Pj+1 . . . Pν−1 Cν−1 f (t). Now, we multiply the equation (2.40) by Πν−2 and we obtain Πν−2 Qν−1 Cν−1 Dν x(t) + Πν−2 Qν−1 x(t) = −Πν−2 Qν−1 (I − Πν−1 )Cν−1 f (t),
(2.42)
Set v0 (t) = Q0 x(t),
u(t) = Πν−1 x(t),
vj (t) = Πj−1 Qj x(t),
j ∈ {1, . . . , ν − 1}.
We get the representation of the solutions in the following form: x(t) = v0 (t) + v1 (t) + ⋅ ⋅ ⋅ + vν−1 (t) + u(t). Let N01 = Q0 Q1 ,
N0j = Q0 P1 . . . Pj−1 Qj ,
Nii+1 = Πi−1 Qi Qi−1 ,
j ∈ {2, . . . , ν − 1},
i ∈ {1, . . . , ν − 2},
Nij = Πi−1 Qi Pi+1 . . . Pj−1 Qj ,
j ∈ {i + 2, . . . , ν − 1},
i ∈ {1, . . . , ν − 2},
W = −Πν−1 Cν−1 Dν ,
H0 = Q0 P1 . . . Pν−1 Cν−1 Dν ,
Hj = Πj−1 Qj Pj+1 . . . Pν−1 Cν−1 Dν ,
Hν−1 = ld = l0 =
Πν−2 Qν−1 Cν−1 Dν , Πν−1 Cν−1 , −Q0 P1 . . . Pν−1 Cν−1 ,
lj = −Πj−1 Qj Pj+1 . . . Pν−1 Cν−1 ,
j ∈ {1, . . . , ν − 2},
j ∈ {1, . . . , ν − 2},
lν−1 = −Πν−2 Qν−1 Cν−1 . Then, by the equations (2.37), (2.39) and (2.40), we obtain the structured system
(2.43)
2.4 Decoupling
I
0
( ( (
N01 .. .
W H0 ( .. +( ( . .. . H ν−1 (
uΔ (t) 0 N0ν−1 ( )( 0 ) )( Δ ) ) ( v1 (t) ) ) . .. Nν−2ν−1 Δ ) (vν−1 (t))
... .. . .. . 0
( I
..
� 101
.
..
.
u(t) ld v0 (t) l0 ( ) ) ( v0 (t) ) ( .. ) ) ( . ) = ( . ) f (t). ) ( .. ) ( ) ( ) .. .. . . I) (lν−1 ) (vν−1 (t))
(2.44)
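The quantities entering this structured system are straightforward to compute for a concrete constant-coefficient pair. The following Python sketch (our illustration) reproduces, for the pair and the admissible projectors of Example 2.12 (revisited in Example 2.14 below), the matrix chain $C_j$, the index ν = 2, and the matrices $W = -\Pi_{\nu-1}C_\nu^{-1}D_\nu$ and $l^d = \Pi_{\nu-1}C_\nu^{-1}$ that appear in the first block row $u^\Delta + W u = l^d f$.

import numpy as np

A = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
B = np.array([[1., 1., 1.], [1., 0., 0.], [1., 1., 0.]])
# admissible projectors chosen in Example 2.12: Q0 onto ker C0, Q1 onto ker C1
Q = [np.diag([0., 0., 1.]),
     np.array([[1., 0., 0.], [0., 0., 0.], [-1., 0., 0.]])]

C, D, Pi = A, B, np.eye(3)
for Qj in Q:                       # C_{j+1} = C_j + D_j Q_j, D_{j+1} = D_j P_j, Pi_j = Pi_{j-1} P_j
    Pj = np.eye(3) - Qj
    C, D, Pi = C + D @ Qj, D @ Pj, Pi @ Pj
print(np.linalg.det(C))            # -1, so C_2 is nonsingular and nu = 2

Cinv = np.linalg.inv(C)
W  = -Pi @ Cinv @ D                # only the (2,2) entry is nonzero (= 1)
ld =  Pi @ Cinv                    # its second row is (0, 1, -1)
print(W)
print(ld)                          # hence the first block row reads u_2^Delta + u_2 = f_2 - f_3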
Theorem 2.5. Let A, B ∈ Mm×m and the dynamic algebraic equation (2.2) has characteristic values r0 ≤ ⋅ ⋅ ⋅ ≤ rν−1 < rν = m. If the functions u(⋅),
v0 (⋅),
...,
vν−1 (t)
u(t0 ) = Πν−1 u(t0 ),
t0 ∈ I,
are a solution to the system (2.44) and, if
then the compound x(⋅), defined by (2.43), is a solution to the dynamic-algebraic equation (2.2). Proof. By the properties of the projectors Pj , Qj , j ∈ {0, . . . , ν}, we have that Πl and Πl Ql , l ∈ {1, . . . , ν − 1}, are projectors. Observe that vj (t) = Πj−1 Qj x(t) = Πj−1 Qj (v0 (t) + v1 (t) + ⋅ ⋅ ⋅ + vν−1 (t) + u(t)) = Πj−1 Qj (Q0 x(t) + Π0 Q1 x(t) + ⋅ ⋅ ⋅ + Πj−2 Qj x(t) + vj (t) + Πj Qj+1 x(t) + ⋅ ⋅ ⋅ + Πν−2 Qν−1 x(t) + Πj−1 x(t))
= Πj−1 Qj Q0 x(t) + Πj−1 Qj Π0 Q1 x(t) + ⋅ ⋅ ⋅ + Πj−1 Qj Πj−2 Qj−1 x(t) + Πj−1 Qj x(t) + Πj−1 Qj Πj Qj+1 x(t) + ⋅ ⋅ ⋅ + Πj−1 Qj Πν−2 Qν−1 x(t) + Πj−1 Qj Πj−1 x(t)
= Πj−1 Qj vj (t) and
v0 (t) = Q0 v0 (t).
102 � 2 Linear dynamic-algebraic equations with constant coefficients By the first equation of the system (2.44), we find uΔ (t) + Wu(t) = ld f (t),
t ∈ I.
(2.45)
With uf , we will denote the unique solution of the last equation for which uf (t0 ) = u0 ∈ Im Πν−1 . We have t
uf (t) = e−W (t, t0 )u0 + ∫ e−W (t, σ(τ))ld f (τ)Δτ, t0
t ∈ I.
Note that Πν−1 W = Πν−1 Πν−1 Cν−1 Dν = Πν−1 Cν−1 Dν = W and Πν−1 ld = Πν−1 Πν−1 Cν−1 = Πν−1 Cν−1 = ld . Hence, Πν−1 e−W (t, t0 ) = e−W (t, t0 ),
t ∈ I.
Moreover, t
Πν−1 uf (t) = Πν−1 e−W (t, t0 )u0 + Πν−1 ∫ e−W (t, σ(τ))ld f (τ)Δτ t0
t
= e−W (t, t0 )u0 + ∫ e−W (t, σ(τ))ld f (τ)Δτ = uf (t), t0
t ∈ I.
Therefore, uf ∈ Im Πν−1 . By carry out the decoupling procedure in reverse order and putting the things together, we get the desired result. This completes the proof. Remark 2.1. Since the matrices Pj , Qj , j ∈ {0, . . . , ν}, A, B are constant matrices, the above decoupling procedure can be applied to the equation (2.1) and the corresponding matrices Nij , Hi , lj will be the same. Example 2.14. Let 𝕋 = 2ℕ0 . Consider the system x1Δ = x1 + x2 + x3 + f1 , x2Δ = x1 + f2 ,
2.4 Decoupling
� 103
0 = x 1 + x 2 + f3 , as in Example 2.12. Here, σ(t) = 2t,
μ(t) = t,
t ∈ 𝕋.
This system can be rewritten in the following form: x2Δ + x2 = f2 − f3 ,
−x1Δ + x3 = f3 − f1 ,
(2.46)
x1 + x2 = −f3 .
We will use the computations for P0 , P1 , P2 , Q0 , Q1 , Q2 , C0 , C1 , C2 , D0 , D1 , D2 by Example 2.12. Now, we will apply the decoupling system, described above. In this case, the system (2.44) takes the form I (0 0
0 0 0
0 uΔ W ) ( ) + ( N01 0 H0 0 vΔ1 H1
0 I 0
0 u ld ) ( ) = ( 0 v0 l0 ) f , I v1 l1
whereupon uΔ + Wu = ld f ,
N01 vΔ1 + H0 u + Iv0 = l0 f , H1 u + v1 = l1 f ,
or uΔ − Π1 C2−1 D2 u = Π1 C2−1 f ,
Q0 Q1 vΔ1 + Q0 P1 C2−1 D2 u + v0 = l0 f ,
Π0 Q1 C2−1 D2 u + v1 = −Π0 Q1 C2−1 f ,
or (Π1 x)Δ − Π1 C2−1 D2 Π1 x = Π1 C2−1 f ,
Q0 Q1 (Π0 Q1 x)Δ + Q0 P1 C2−1 D2 Π1 x + Q0 x = −Q0 P1 C2−1 f , Π0 Q1 C2−1 D2 Π1 x
+ Π0 Q1 x =
Now, we will find C2−1 . We have det C2 = −1 and the cofactors of the matrix C2 are as follows:
−Π0 Q1 C2−1 f .
(2.47)
C211 = 0,   C212 = 0,   C213 = −1,
C221 = 0,   C222 = −1,   C223 = 0,
C231 = −1,   C232 = 1,   C233 = 2.

Therefore,

\[
C_{2}^{-1} = -\begin{pmatrix} 0 & 0 & -1\\ 0 & -1 & 1\\ -1 & 0 & 2 \end{pmatrix}
           = \begin{pmatrix} 0 & 0 & 1\\ 0 & 1 & -1\\ 1 & 0 & -2 \end{pmatrix}.
\]
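If desired, such a cofactor computation can be double-checked numerically. The sketch below is only an illustration: the explicit matrix C2 is not displayed here (it comes from Example 2.12), so the value used in the script is reconstructed from the listed cofactors and det C2 = −1 and should be treated as an assumption.

```python
import numpy as np

def cofactor_inverse(M):
    """Inverse via the adjugate: M^{-1} = adj(M) / det(M),
    where adj(M)[j, i] is the (i, j) cofactor of M."""
    n = M.shape[0]
    cof = np.zeros_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / np.linalg.det(M)

# C2 reconstructed from the listed cofactors and det C2 = -1 (assumption)
C2 = np.array([[2.0, 0.0, 1.0],
               [1.0, 1.0, 0.0],
               [1.0, 0.0, 0.0]])

print(np.round(cofactor_inverse(C2)))                         # [[0,0,1],[0,1,-1],[1,0,-2]]
print(np.allclose(cofactor_inverse(C2), np.linalg.inv(C2)))   # True
```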
Next, 1 Π0 = (0 0
0 1 0
0 0) 0
and 1 Π1 = P0 P1 = (0 0
0 1 0
0 0 0) (0 0 1
0 1 0
0 0 0 ) = (0 1 0
0 1 0
0 0) . 0
Hence, 0 Π1 C2−1 D2 Π1 = (0 0
0 1 0
0 0 0 ) (0 0 1
0 1 0
1 0 −1) (0 −2 0
1 0 1
0 0 0) (0 0 0
0 = (0 0
0 1 0
0 0 0 ) (0 0 1
0 1 0
1 0 −1) (0 −2 0
1 0 1
0 0) 0
0 = (0 0
0 1 0
0 0 0 ) (0 0 0
1 −1 −1
0 0 0) = (0 0 0
0 −1 0
0 1 0
0 0) , 0
and 0 Π1 C2−1 = (0 0
0 1 0
0 0 0 ) (0 0 1
0 Q0 Q1 = (0 0
0 0 0
0 1 0) ( 0 1 −1
0 1 0
1 0 −1) = (0 −2 0
0 1 0
0 −1) , 0
0 0 0) = ( 0 0 −1
0 0 0
0 0) , 0
and 0 0 0
0 0) 0
and 1 Π0 Q1 = (0 0
0 1 0
0 1 0) ( 0 0 −1
0 0 0
0 1 0 ) = (0 0 0
0 0 0
0 0) , 0
and 0 Q0 P1 C2−1 D2 Π1 = (0 0
0 0 0
0 0 0) (0 1 1
0 1 0
0 0 0 ) (0 1 1
0 1 0
1 0 −1) (0 −2 0
1 0 1
0 = (0 0
0 0 0
0 0 0) (0 1 1
0 1 0
0 0 0 ) (0 1 1
0 1 0
1 0 −1) = (0 −2 0
0 = (0 0
0 0 0
0 0 0) (0 1 1
0 1 0
0 0 0 ) (0 1 0
0 1 0
0 0) 0
0 = (0 0
0 0 0
0 0 0) (0 1 0
0 1 0
0 0) = 0, 0
0 0 0) (0 0 0 0 1 0
0 1 0
0 0) 0
0 0) 0
and 0 Q0 P1 C2−1 = (0 0
0 0 0
0 0 0 ) (0 1 1
0 1 0
0 0 0) (0 1 1
0 1 0
0 = (0 0
0 0 0
0 0 0 ) (0 1 1
0 1 0
0 0 −1) = (0 −1 1
1 −1) −2 0 0 0
0 0 ), −1
and Π0 Q1 C2−1 D2 Π1
1 = (0 0
0 1 0
0 1 0) ( 0 0 −1
0 0 0
0 0 0) (0 0 1
0 1 0
1 0 −1) (0 −2 0
1 0 1
0 0 0) (0 0 0
1 = (0 0
0 1 0
0 1 0) ( 0 0 −1
0 0 0
0 0 0) (0 0 1
0 1 0
1 0 −1) (0 −2 0
1 0 1
0 0) 0
1 = (0 0
0 1 0
0 1 0) ( 0 0 −1
0 0 0
0 0 0) (0 0 0
1 −1 −1
0 0) 0
0 1 0
0 0) 0
106 � 2 Linear dynamic-algebraic equations with constant coefficients 1 = (0 0
0 1 0
0 0 0) (0 0 0
1 0 −1
0 0 0 ) = (0 0 0
1 0 0
0 0) , 0
and 1 Π0 Q1 = ( 0 0
0 1 0
0 1 0) ( 0 0 −1
0 0 0
0 1 0 ) = (0 0 0
0 0 0
0 0 0) (0 0 1
0 1 0
1 0 −1) = (0 −2 0
0 0) , 0
and Π0 Q1 C2−1
1 = (0 0
0 0 0
0 0 0
1 0) . 0
Then the system (2.47), can be rewritten as follows: 0 ((0 0 0 (0 −1
Δ
0 1 0
0 x1 0 0) (x2 )) − (0 0 x3 0
0 0 0
0 1 0) ((0 0 0
0 = − (0 1 0 (0 0
1 0 0
0 0 0
0 0 0
0 −1 0
0 x1 0 0 ) ( x2 ) = ( 0 0 x3 0 Δ
0 x1 0 0) (x2 )) + (0 0 x3 0
0 0 0
0 1 0
0 f1 −1) (f2 ) , 0 f3
0 x1 0 ) ( x2 ) 1 x3
0 f1 0 ) (f2 ) , −1 f3
0 x1 1 0) (x2 ) + (0 0 x3 0
0 0 0
0 x1 0 0 ) ( x2 ) = − ( 0 0 x3 0
0 0 0
1 f1 0 ) ( f2 ) , 0 f3
or 0 0 0 Δ (x2 ) − (−x2 ) = (f2 − f3 ) , 0 0 0 0 (0 −1
0 0 0
0 x1Δ 0 0 0) ( 0 ) + ( 0 ) = ( 0 ) , 0 0 x3 −f1 + f3 x2 x1 f3 ( 0 ) + ( 0 ) = − (0) , 0 0 0
whereupon we get the system (2.46).
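On an isolated time scale such as 𝕋 = 2ℕ0 , the inherent equation (2.45) can also be integrated step by step, since uΔ (t) = (u(σ(t)) − u(t))/μ(t) at right-scattered points. The following sketch is illustrative only: W, ld and f below are placeholder data, not the matrices of this example.

```python
import numpy as np

def solve_inherent_equation(W, ld, f, u0, t0=1.0, steps=8):
    """Step u^Delta + W u = ld f on T = 2^{N_0}.

    Here sigma(t) = 2t and mu(t) = t, so the delta derivative gives the
    exact recursion u(2t) = u(t) + t * (ld f(t) - W u(t)), i.e. the
    variation-of-constants formula written pointwise."""
    t, u = t0, np.asarray(u0, dtype=float)
    trajectory = [(t, u.copy())]
    for _ in range(steps):
        u = u + t * (ld @ f(t) - W @ u)
        t = 2.0 * t
        trajectory.append((t, u.copy()))
    return trajectory

# Placeholder data (purely illustrative):
W = np.diag([0.0, 1.0, 0.0])
ld = np.eye(3)
f = lambda t: np.zeros(3)
for t, u in solve_inherent_equation(W, ld, f, u0=[0.0, 1.0, 0.0]):
    print(t, u)
```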
Exercise 2.6. Let 𝕋 = 3ℤ. Consider the dynamic-algebraic equations x1Δ − x3Δ = x1σ − x2σ + x3σ + f1 , x2Δ = x1σ + x3σ + f2 ,
0 = 2x1σ − x2σ + x3σ + f3 .
1. 2. 3.
Determine the matrices A and B. Check if the matrix pair (A, B) is σ-regular and regular. If (A, B) is σ-regular, find an admissible matrix sequence {Cj }j≥0 and write the structured system.
2.5 Complete decoupling Suppose that A, B ∈ Mm×m and the matrix pair (A, B) is regular and has the characteristic values r0 ≤ ⋅ ⋅ ⋅ ≤ rν−1 < rν = m, and Q0 , Q1 , . . . , Qν−1 are admissible projectors for the matrix pair (A, B). Set Q0⋆ = Q0 P1 . . . Pν Cν−1 D0 ,
Qj⋆ = Qj Pj+1 . . . Pν−1 Cν−1 D0 Πj−1 ,
j ∈ {0, . . . , ν − 2},
Qν−1⋆ = Qν−1 Cν−1 D0 Πν−2 . We have Qν−1⋆ Wν−1 = Qν−1 Cν−1 D0 Πν−2 Qν−1 = Qν−1 Cν−1 Dν−1 Qν−1 = Qν−1 Cν−1 Cν Qν−1 = Qν−1 Qν−1 = Qν−1 .
Next, Qj⋆ Qj = Qj Pj+1 . . . Pν−1 Cν−1 D0 Πj−1 Qj = Qj Pj+1 . . . Pν−1 Cν−1 Dj Qj = Qj Pj+1 . . . Pν−1 Cν−1 Cj+1 Qj = Qj Pj+1 . . . Pν−1 Cν−1 Cj+2 Qj = ⋅ ⋅ ⋅ = Qj Pj+1 . . . Pν−1 Cν−1 Cν Qj = Qj Pj+1 . . . Pν−1 Qj = Qj Pj+1 . . . Pν−2 (I − Qν−1 )Qj
= Qj Pj+1 . . . Pν−2 (Qj − Qν−1 Qj ) = Qj Pj+1 . . . Pν−2 Qj = ⋅ ⋅ ⋅ = Qj Qj = Qj ,
j ∈ {1, . . . , ν − 2},
and Q0⋆ Q0 = Q0 P1 . . . Pν−1 Cν−1 D0 Q0 = Q0 P1 . . . Pν−1 Cν−1 C1 Q0 = Q0 P1 . . . Pν−1 Cν−1 C2 Q0
= ⋅ ⋅ ⋅ = Q0 P1 . . . Pν−1 Cν−1 Cν Q0 = Q0 P1 . . . Pν−1 Q0 = Q0 P1 . . . Pν−2 (I − Qν−1 )Q0
108 � 2 Linear dynamic-algebraic equations with constant coefficients = Q0 P1 . . . Pν−2 (Q0 − Qν−1 Q0 ) = Q0 P1 . . . Pν−2 Q0 = ⋅ ⋅ ⋅ = Q0 Q0 = Q0 . Therefore, Qj⋆ Qj = Qj ,
j ∈ {0, . . . , ν − 1}.
Next, Q0 Q0⋆ = Q0 Q0 P1 . . . Pν Cν−1 D0 = Q0 P1 . . . Pν Cν−1 D0 = Q0⋆ , and Qj Qj⋆ = Qj Qj Pj+1 . . . Pν−1 Cν−1 D0 Πν−1 = Qj Pj+1 . . . Pν−1 Cν−1 D0 Πν−1 = Qj⋆ ,
j ∈ {1, . . . , ν − 2},
and Qν−1 Qν−1⋆ = Qν−1 Qν−1 Cν−1 D0 Πν−2 = Qν−1 Cν−1 D0 Πν−2 = Qν−1⋆ . Consequently, Qj Qj⋆ = Qj⋆ ,
j ∈ {0, . . . , ν − 1}.
Hence, (Qj⋆ Qj )2 = Qj⋆ Qj Qj⋆ Qj = Qj⋆ Qj Qj = Qj⋆ Qj ,
j ∈ {0, . . . , ν − 1},
(Qj Qj⋆ )2 = Qj Qj⋆ Qj Qj⋆ = Qj Qj Qj⋆ = Qj Qj⋆ ,
j ∈ {0, . . . , ν − 1},
(Qj⋆ )2 = Qj⋆ Qj⋆ = Qj⋆ Qj Qj⋆ = Qj Qj⋆ = Qj⋆ ,
j ∈ {0, . . . , ν − 1}.
and
and
Therefore, Qj⋆ ,
Qj Qj⋆ ,
Qj⋆ Qj ,
j ∈ {0, . . . , ν − 1},
are projectors on Rj and R0 + ⋅ ⋅ ⋅ + Rj−1 ⊆ ker Qj⋆ ,
j ∈ {1, . . . , ν − 1}.
Define H0⋆ = Q0⋆ Πν−1 ,
Hj⋆ = Πj−1 Qj⋆ Πν−1 ,
j ∈ {1, . . . , ν − 1},
Q j = Qj , C j = Cj ,
j ∈ {0, . . . , ν − 2},
j ∈ {0, . . . , ν − 1},
Z ν = I + Qν−1 Qν−1⋆ Pν−1 .
Qν−1 = Qν−1⋆ ,
C ν = Cν + Dν−1 Qν−1⋆ Pν−1 ,
Note that C ν = Cν + Dν−1 Qν−1⋆ Pν−1 = Cν + Dν−1 Qν−1 Cν−1 Dν−1 Pν−1
= Cν + Cν Qν−1 Cν−1 Dν−1 Pν−1 = Cν + Cν Qν−1 Qν−1⋆ Pν−1 = Cν (I + Qν−1 Qν−1⋆ Pν−1 ) = Cν Zν
and (I − Qν Qν−1⋆ Pν−1 )(I + Qν Qν−1⋆ Pν−1 )
= I − Qν Qν−1⋆ Pν−1 + Qν Qν−1⋆ Pν−1 − Qν Qν−1⋆ Pν−1 Qν Qν−1 Pν−1 = I − Qν Qν−1⋆ Pν−1 Qν Qν−1 Pν−1 = I,
i. e., Zν−1 = I − Qν Qν−1⋆ Pν−1 . Hence, Qν−1⋆ Zν−1 = Qν−1⋆ (I − Qν Qν−1⋆ Pν−1 ) = Qν−1⋆ − Qν−1⋆ Qν Qν−1⋆ Pν−1 = Qν−1⋆ − Qν Qν−1⋆ Pν−1 = Qν−1⋆ − Qν−1⋆ Pν−1
= Qν−1⋆ − Qν−1⋆ (I − Qν−1 ) = Qν−1⋆ − Qν−1⋆ + Qν−1⋆ Qν−1 = Qν−1⋆ Qν−1 = Qν−1⋆ . Therefore, Qν−1⋆ = Qν−1 C ν D0 Πν−1 −1
= Qν−1 Zν−1 Cν−1 D0 Πν−2 = Qν−1 Cν−1 D0 Πν−2 = Qν−1⋆ = Qν−1 .
Denote H ν−1 = Πν−2 Qν−1⋆ Πν−1 . Then H ν−1 = 0. Continuing as above, we find H j = 0,
j ∈ {0, . . . , ν − 1}.
Then, starting with admissible projectors, we apply the above procedure first for k = ν − 1, then for k = ν − 2 and so on, down to k = 0. At each stage, the additional coupling coefficient vanishes and we finish with a complete decoupling of the two parts in (2.44).

Definition 2.12. Let A, B ∈ Mm×m , let (A, B) be σ-regular (regular) with the structural characteristic values r0 ≤ r1 ≤ ⋅ ⋅ ⋅ ≤ rν−1 < rν = m, and let Cj , j ∈ {0, . . . , ν}, be an admissible matrix sequence. If Hj = 0, j ∈ {0, . . . , ν − 1}, then the projectors Q0 , . . . , Qν−1 are said to be completely decoupling projectors for (2.1) ((2.2)).

Example 2.15. Let 𝕋 = ℤ. Consider the dynamic-algebraic equations in Example 2.12. By the previous example, we have

\[
C_{2}^{-1} = -\begin{pmatrix} 0 & 0 & -1\\ 0 & -1 & 1\\ -1 & 0 & 2 \end{pmatrix}
           = \begin{pmatrix} 0 & 0 & 1\\ 0 & 1 & -1\\ 1 & 0 & -2 \end{pmatrix}.
\]
We take a new projector 1 Q1 = Q1 C2−1 D1 = ( 0 −1 1 =(0 −1
0 0 0
1 =(0 −1
1 0 −1
0 0 0
0 1 0) ( 0 0 −1
0 0 0) (0 0 1 1 −1 −1
0 1 0
1 1 −1) (1 −2 1
1 0 1
0 0) 0
0 0) 0
0 0) . 0
Hence, 1 P1 = I − Q1 = (0 0
0 1 0
0 1 0) − ( 0 1 −1
1 0 −1
0 0 0 ) = (0 0 1
−1 1 1
0 0) , 1
and 1 Π1 = P0 P1 = (0 0 and
0 1 0
0 0 0) (0 0 1
−1 1 1
0 0 0 ) = (0 1 0
−1 1 0
0 0) , 0
1 C 2 = C1 + D1 Q1 = (0 0 1 = (0 0
0 1 0
0 1 0
1 1 0) + (1 0 1
1 1 0 ) + (1 0 1
1 1 1
1 0 1
0 2 0) = ( 1 0 1
0 1 0) ( 0 0 −1 1 2 1
1 0 −1
0 0) 0
1 0) . 0
Now, we will find C 2 . We have −1
det C 2 = 1 − 2 = −1 −1
and its cofactors are as follows: a11 = 0,
a23 = −1,
a12 = 0,
a31 = −2,
a13 = −1,
a32 = 1,
a21 = 1,
a33 = 3.
a22 = −1,
Therefore, 0 −1 C2 = − ( 0 −1
1 −1 −1
−2 0 1 ) = (0 3 1
−1 1 1
2 −1) −3
and 0 −1 C 2 D0 Π1 = (0 1
−1 1 1
2 1 −1) (1 −3 1
1 0 1
0 = (0 1
−1 1 1
2 0 −1) (0 −3 0
0 −1 0
1 0 0) (0 0 0
0 0) 0
−1 1 0
0 0 0 ) = (0 0 0
1 −1 −1
0 0) . 0
Hence, H 0 = Q0 P 1 C 2 D0 Π1 −1
and
0 = (0 0
0 0 0
0 0 0 ) (0 1 1
1 1 1
0 = (0 0
0 0 0
0 0 0 ) (0 1 0
−1 −1 −1
0 0 0 ) (0 1 0
1 −1 −1
0 0 0) = (0 0 0
0 0) 0 0 0 −1
0 0) , 0
112 � 2 Linear dynamic-algebraic equations with constant coefficients H 1 = P0 Q1 C 2 D0 Π1 −1
1 = (0 0
0 1 0
0 1 0) ( 0 0 −1
1 = (0 0
0 1 0
0 0 0 ) (0 0 0
1 0 −1 0 0 0
0 0 0) (0 0 0 0 0 0) = (0 0 0
1 −1 −1
0 0) 0
0 0 0
0 0) . 0
Therefore, the projectors Q0 and Q1 are admissible decoupling projectors. Next, 1 D2 = D1 P1 = (1 1
1 0 1
0 0 0) (0 0 1
−1 1 1
0 0 0 ) = (0 1 0
0 −1 0
0 0) , 0
and 0 −1 Π1 C 2 D2 Π1 = ( 0 0
−1 1 0
0 0 0) (0 0 1
−1 1 1
2 0 −1) (0 −3 0
0 −1 0
0 0 0) (0 0 0
0 = (0 0
−1 1 0
0 0 0) (0 0 1
−1 1 1
2 0 −1) (0 −3 0
0 −1 0
0 0) 0
0 = (0 0
−1 1 0
0 0 0) (0 0 0
1 −1 −1
0 0 0) = (0 0 0
1 −1 0
−1 1 0
0 0) , 0
and 0 −1 Π1 C 2 = (0 0
−1 1 0
0 0 0 ) (0 0 1
−1 1 1
2 0 −1) = (0 −3 0
−1 1 0
1 −1) , 0
0 Q0 Q1 = (0 0
0 0 0
0 1 0) ( 0 1 −1
1 0 −1
0 0 0) = ( 0 0 −1
0 0 −1
0 0) , 0
0 1 0) = (0 0 0
1 0 0
and
and 1 Π0 Q1 = (0 0
0 1 0
0 1 0) ( 0 0 −1
1 0 −1
0 0) , 0
0 0) 0
and 0 −1 Q0 P 1 C 2 = ( 0 0
0 0 0
0 0 0) (0 1 1
−1 1 1
0 0 0) (0 1 1
0 = (0 0
0 0 0
0 0 0) (0 1 1
−1 1 1
1 0 −1) = (0 −2 1
2 −1) −3
−1 1 1 0 0 1
0 0 ), −2
and −1 Π0 Q 1 C 2
1 = (0 0
1 0 0
0 0 0) (0 0 1
−1 1 1
2 0 −1) = (0 −3 0
0 0 0
1 0) . 0
In this way, the considered dynamic-algebraic equations decouple completely as follows: 0 ((0 0
−1 1 0
Δ
0 x1 0 0) (x2 )) − (0 0 x3 0
0 (0 −1
0 0 −1
0 1 0) ((0 0 0
1 −1 0 1 0 0
1 (0 0
0 x1 0 0) (x2 ) = (0 0 x3 0 Δ
−1 1 0
0 x1 0 0) (x2 )) = − (0 0 x3 1
0 0 1
0 f1 0 ) ( f2 ) , −2 f3
1 0 0
0 0 0
1 f1 0 ) ( f2 ) 0 f3
0 x1 0 0) (x2 ) = − (0 0 x3 0
or Δ
−x2 x2 −f2 + f3 ( x2 ) − (−x2 ) = ( f2 − f3 ) , 0 0 0 0 (0 −1
0 0 −1
Δ
0 x1 + x2 0 ), 0) ( 0 ) = − ( 0 0 0 f1 + f2 − 2f3 x1 + x2 f3 ( 0 ) = − (0) , 0 0
or
1 f1 −1) (f2 ) , 0 f3
114 � 2 Linear dynamic-algebraic equations with constant coefficients −x2Δ −x2 −f2 + f3 ( x2Δ ) + ( x2 ) = ( f2 − f3 ) , 0 0 0 Δ
0 0 ( ) = −( ), 0 0 Δ −(x1 + x2 ) 2f3 − f1 − f2 x1 + x2 −f3 ( 0 ) = ( 0 ), 0 0 or x2Δ + x2 = f2 − f3 ,
(x1 + x2 )Δ = f1 + f2 − 2f3 , x1 + x2 = −f3 .
Exercise 2.7. Let 𝕋 = 2ℕ0 . Consider the dynamic-algebraic equations: x1Δ = x1 + x2 + f1 , x3Δ = −x2 + f2 ,
x4Δ = −x3 + f3 ,
x5Δ = −x4 + f4 , 0 = x5 + f5 .
1. 2. 3.
Find the matrices A and B. Check if the matrix pair (A, B) is regular. Find completely decoupling projectors for the considered dynamic-algebraic equations.
2.6 Advanced practical problems Problem 2.1. Let 𝕋 = 2ℕ0 . Find a nontrivial solution to the following linear homogeneous dynamic-algebraic equations: x1Δ = x1σ ,
−x3Δ = x2σ , −x4Δ = x3σ , −x5Δ = x4σ , x5σ = 0.
2.6 Advanced practical problems
Problem 2.2. Let 𝕋 = 4ℕ0 . Find a nontrivial solution to the following linear homogeneous dynamic-algebraic equations: x1Δ = x1 ,
−x3Δ = x2 ,
−x4Δ = x3 ,
−x5Δ = x4 , x5 = 0.
Problem 2.3. Let 𝕋 = 4ℕ0 . Find the solution of the system (2.1) when 1 0 A = (0 0 0
0 0 0 0 0
0 2 0 0 0
−t 2 t f (t) = ( t 3 ) , t 0
0 0 1 0 0
0 0 0 ), −1 0
1 0 B = (0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0) , 0 1
t ∈ 𝕋,
subject to the initial condition −1 0 x(1) = ( 1 ) . 0 −1 Problem 2.4. Let 𝕋 = ℤ and 0 0 (−1 ( A=( (0 (0 1 (1
1 −1 0 1 0 0 0
1 0 0 0 0 0 1
0 0 0 0 1 0 0
1 0 −1 0 0 0 0
0 0 0 −1 0 0 0
−1 1 0) ) 0) ), −1) −1 0)
−1 1 (−1 ( B=( (−3 (1 0 (1
0 0 0 4 0 −1 −1
1 0 0 0 0 0 0
0 −2 1 0 0 0 0
Investigate if the solution space of the equation (2.4) is finite or not.
0 0 0 2 1 0 0
0 0 0 0 0 0 0
0 1 1) ) 0) ). 0) 0 0)
116 � 2 Linear dynamic-algebraic equations with constant coefficients Problem 2.5. Let 𝕋 = 3ℕ0 and x2Δ = x1σ + x3σ + f1 , x3Δ = x2σ + f2 ,
0 = x1 − x2 + 3x3 .
1. 2.
Find the matrices A and B in (2.1). Find the matrices C0 , C1 , C2 , C3 , C4 .
Problem 2.6. Let 𝕋 = 2ℕ0 . Consider the dynamic-algebraic equations x1Δ + x2Δ = −x1σ + 2x2σ − x3σ + f1 , x3Δ = −x1σ − 2x3σ + f2 ,
0 = x1σ + 2x2σ − x3σ + f3 .
1. 2. 3.
Determine the matrices A and B. Check if the matrix pair (A, B) is σ-regular and regular. If (A, B) is σ-regular, find an admissible matrix sequence {Cj }j≥0 and write the structured system.
Problem 2.7. Let 𝕋 = 3ℕ0 . Consider the dynamic-algebraic equations x2Δ = x1 + f1 ,
0 = x 2 + f2 ,
x3Δ = x3 + f3 . 1. 2. 3.
Find the matrices A and B. Check if the matrix pair (A, B) is regular. Find completely decoupling projectors for the considered dynamic-algebraic equations.
2.7 Notes and references In this chapter, linear dynamic-algebraic equations with constant coefficients are investigated. Regular and σ-regular matrix pairs are introduced and studied. The Weierstrass–Kronecker canonical form for linear dynamic-algebraic equations with constant coefficients is considered. Admissible projectors, widely orthogonal admissible projectors and structural characteristic values are defined, and some of their properties are deduced. Coupling and completely decoupling projectors are defined, and the considered linear dynamic-algebraic equations with constant coefficients are decoupled completely.
3 P-projectors. Matrix chains In this chapter, we introduce properly stated, regular, preadmissible and admissible matrix pairs. Some matrix chains on arbitrary time scales are constructed and investigated. Let 𝕋 be a time scale with forward jump operator σ and delta differentiation operator Δ. Suppose that I ⊆ 𝕋. With X , we will denote 𝕋 or ℝ, or Mm×m , and with C 1 (I) we will denote the space

C 1 (I) = {x : I → X : x ∈ C (I), x Δ exists on I and x Δ ∈ C (I)}.
3.1 Properly stated matrix pairs Suppose that A, B : I → Mm×m . Definition 3.1. We will say that the matrix pair (A, B) is properly stated on I if A(t) and B(t) satisfy ker A(t) ⊕ im B(t) = ℝm
(3.1)
for any t ∈ I, and both ker A(t) and im B(t) are C 1 -spaces. The condition (3.1) is called the transversality condition. Let (A, B) be a properly stated matrix pair on I. Then A(t), B(t) and A(t)B(t) have a constant rank on I and ker A(t)B(t) = ker B(t),
t ∈ I.
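The transversality condition (3.1) is easy to test numerically at a fixed point t. The following sketch is illustrative only; the matrices below are placeholder data (an assumption, not taken from the examples of this book), and the check is purely pointwise.

```python
import numpy as np
from scipy.linalg import null_space

def is_properly_stated(A, B):
    """Pointwise check of the transversality condition ker A + im B = R^m
    (direct sum) for a single pair of constant matrices."""
    m = A.shape[0]
    kerA = null_space(A)                      # columns: a basis of ker A
    stacked = np.hstack([kerA, B])            # columns of B span im B
    return (kerA.shape[1] + np.linalg.matrix_rank(B) == m
            and np.linalg.matrix_rank(stacked) == m)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.diag([0.0, 1.0, 1.0])
print(is_properly_stated(A, B))   # True: ker A = span{e1}, im B = span{e2, e3}
```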
Note that, by the transversality condition (3.1) together with the C 1 -assumption on ker A(t) and im B(t), it follows that there is a projector R ∈ C 1 (I) onto im B and along ker A on I.

Theorem 3.1. Let N : I → Mm×m , N ∈ C (I), dim N(t) = m − r, t ∈ I. Then there exists a smooth projector Q onto N on I if and only if N can be spanned by {n1 , . . . , nm−r } on I, where nj ∈ C 1 (I), j ∈ {1, . . . , m − r}.

Proof. 1. Suppose that there is a smooth projector Q onto N. Take t0 ∈ I arbitrarily. Let

n10 , . . . , n0m−r ∈ ℝm

be a basis of N(t0 ). We choose nj as the solution of the IVP

njΔ = QΔ nj on I,   nj (t0 ) = nj0 ,   j ∈ {1, . . . , m − r}.
118 � 3 P-projectors. Matrix chains Let P = I − Q. Then PQ = 0 and 0 = (PQ)Δ = PΔ Q + Pσ QΔ , whereupon PΔ Q = −Pσ QΔ . Next, Pnj = 0
on
I,
j ∈ {1, . . . , m − r},
and PnjΔ = PQΔ nj , and (Pnj )Δ = PΔ nj + Pσ njΔ = PΔ nj − PΔ Qnj = PΔ (I − Q)nj = PΔ Pnj = 0,
j ∈ {1, . . . , m − r}.
Because Pnj (t0 ) = 0, we conclude that Pnj (t) = 0,
2.
t ∈ I.
Thus, nj ∈ N. Since n1 , . . . , nm−r are linearly independent on I, we conclude that they span N. Let N be spanned by F = (n1 , . . . , nm−r ),
nj ∈ C 1 (I),
j ∈ {1, . . . , m − r},
on I. Take Q(t) = F(t)(F T (t)F(t)) F T (t), −1
t ∈ I.
Then Q(t)Q(t) = F(t)(F T (t)F(t)) F T (t)F(t)(F T (t)F(t)) F T (t) −1
−1
= F(t)(F T (t)F(t)) F T (t) = Q(t), −1
t ∈ I.
Note that Q ∈ C 1 (I). This completes the proof.
3.2 Matrix chains Suppose that (A, B) is a properly stated matrix pair on I and C : I → Mm×m , C ∈ C (I). Set G0 (t) = A(t)B(t),
t ∈ I.
Denote with P0 (t) a projector along ker G0 (t), t ∈ I. With B− (t), t ∈ I, we denote the {1, 2}-inverse of B(t), t ∈ I, determined by B(t)B− (t)B(t) = B(t),
B− (t)B(t) = P0 (t),
B− (t)B(t)B− (t) = B− (t), B(t)B− (t) = R(t),
t ∈ I.
(3.2)
Note that the matrix B− (t), t ∈ I, is uniquely determined by the conditions (3.2). Set Q0 (t) = I − P0 (t),
t ∈ I.
We remove the explicit dependence on t for the sake of notational simplicity. Write C0 = C,
C̃0 = C,
̃ 0 = G0 . G
This construction can be iterated for i ≥ 1 in the following manner: ̃ i have a constant rank ri on I. (A1) Gi , G ̃ i satisfies (A2) Ni = ker Gi = ker G (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ) ∩ Ni = {0}. We choose a continuous projector Qi onto Ni such that (A3) Qi Qj = 0, 0 ≤ j < i on I. Set Pi = I − Qi and assume (A4) DP0 P1 . . . Pi D− ∈ C 1 (I).
120 � 3 P-projectors. Matrix chains Assume that (A1)–(A4) hold. Define Δ
Ci Pi = Ci−1 Pi−1 + Ciσ Qiσ + Giσ D−σ (DP0 . . . Pi D− ) DP0 . . . Pi−1 ,
Δ σ σ σ C̃iσ = Ci−1 Pi−1 + Ci Qi + Gi D− (DP0 . . . Pi D− ) Dσ P0σ . . . Pi−1 , ̃ i+1 = G ̃ i + C̃i Qi . Gi+1 = Gi + Ci Qi , G
Remark 3.1. Note that the existence of a projector Qi so that Qi Qj = 0,
0 ≤ j < i,
relies on the fact that the condition (A2) makes it possible to choose a projector Qi onto Ni such that N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊆ ker Qi . Then ker Qj ⊆ ker Qj ,
j ≤ i.
Since Qj projects onto Nj , 0 ≤ j < i, we have im Qj = Nj . Hence, for any x ∈ ℝm and 0 ≤ j < i, we have Qj x ∈ Nj ⊆ ker Qi and then Qi Qj x = 0, i. e., Qi Qj = 0. Definition 3.2. Let the matrix pair (A, B) be properly stated matrix pair. Then the projectors P0 and Q0 are said to be admissible. Definition 3.3. A projector sequence {Q0 , . . . , Qk }, respectively {P0 , . . . , Pk }, with k ≥ 1, is said to be preadmissible up to level k if (A1) and (A2) hold for 0 ≤ i ≤ k and (A3) holds for 0 ≤ i < k. Definition 3.4. A projector sequence {Q0 , . . . , Qk }, respectively {P0 , . . . , Pk }, with k ≥ 1, is said to be admissible up to level k if (A1), (A2) and (A3) hold for 0 ≤ i ≤ k.
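At a fixed point t, the admissibility conditions behind Definitions 3.3 and 3.4 reduce to linear-algebra checks. The following sketch is a minimal illustration under the assumption that constant matrices Gi and Qi are already given (the matrices below are placeholder data); it verifies that each Qi is a projector onto ker Gi and that Qi Qj = 0 for j < i.

```python
import numpy as np
from scipy.linalg import null_space

def check_admissibility(Gs, Qs, tol=1e-10):
    """Pointwise check: each Q_i is a projector onto ker G_i (condition (A2)
    made concrete) and Q_i Q_j = 0 for j < i (condition (A3))."""
    for i, (G, Q) in enumerate(zip(Gs, Qs)):
        assert np.allclose(Q @ Q, Q, atol=tol)          # projector
        N = null_space(G)                               # basis of ker G_i
        assert np.allclose(Q @ N, N, atol=tol)          # ker G_i contained in im Q_i
        assert np.linalg.matrix_rank(Q) == N.shape[1]   # equality of dimensions
        for j in range(i):
            assert np.allclose(Q @ Qs[j], 0, atol=tol)  # Q_i Q_j = 0
    return True

# Placeholder data: ker G0 = span{e1}, ker G1 = span{(1, -1, 0)}
G0 = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
Q0 = np.diag([1.0, 0.0, 0.0])
G1 = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
Q1 = np.array([[0.0, -1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
print(check_admissibility([G0, G1], [Q0, Q1]))   # True
```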
Proposition 3.1. Let Q0 , . . . , Qk be an admissible up to level k projector sequence on I. Then ker P0 . . . Pk = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk . Proof. We will use the principle of mathematical induction. 1. Let k = 1. a) Suppose that z ∈ ker P0 P1 . Then P0 P1 z = 0.
(3.3)
Set z1 = P1 z = (I − Q1 )z = z − Q1 z, i. e., z = z1 + Q1 z. Hence, (3.3) holds if and only if z1 ∈ ker P0 = N0 . Note that Q1 z ∈ N1 . Therefore, z ∈ N0 ⊕ N1 . Because z ∈ ker P0 P1 was arbitrarily chosen and we get that it is an element of N0 ⊕ N1 , we arrive at the relation ker P0 P1 ⊆ N0 ⊕ N1 . b) Let z ∈ N0 ⊕ N1 be arbitrarily chosen. Then z = z0 + z1 , where z0 ∈ N0 and z1 ∈ N1 . Since ker P1 = N1 , we have P1 z1 = 0. Next, by z0 ∈ N0 , it follows that there is w0 ∈ ℝm so that z0 = Q0 w0 . Therefore,
(3.4)
122 � 3 P-projectors. Matrix chains P0 P1 z = P0 P1 (z0 + z1 ) = P0 P1 z0 + P0 P1 z1 = P0 P1 Q0 w0
= P0 (I − Q1 )Q0 w0 = P0 (Q0 − Q1 Q0 )w0 = P0 Q0 w0 = 0,
i. e., z ∈ ker P0 P1 . Since z ∈ N0 ⊕ N1 was arbitrarily chosen and we get that it is an element of ker P0 P1 , we obtain the relation N0 ⊕ N1 ⊆ ker P0 P1 . By the last inclusion and (3.4), we obtain ker P0 P1 = N0 ⊕ N1 . 2.
Assume that ker P0 P1 . . . Pi = N0 ⊕ N1 ⊕ ⋅ ⋅ ⋅ ⊕ Ni
3.
for some i ∈ {1, . . . , k − 1}. We will prove that ker P0 P1 . . . Pi+1 = N0 ⊕ N1 ⊕ ⋅ ⋅ ⋅ ⊕ Ni+1 . a) Let z ∈ ker P0 P1 . . . Pi Pi+1 be arbitrarily chosen. Then P0 P1 . . . Pi Pi+1 z = 0 and z1 = Pi+1 z = (I − Qi+1 )z = z − Qi+1 z, i. e., z = z1 + Qi+1 z. Note that P0 P1 . . . Pi z = 0. Therefore, z1 ∈ ker P0 . . . Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni . Next, Qi+1 z ∈ Ni+1 . Consequently, z = z1 + Qi+1 z ∈ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni ) ⊕ Ni+1 = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni+1 .
(3.5)
Since z ∈ ker P0 P1 . . . Pi+1 was arbitrarily chosen and we obtain that it is an element of N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni+1 , we conclude that ker P0 . . . Pi+1 ⊆ N0 ⊕ . . . Ni+1 .
(3.6)
b) z ∈ N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni+1 be arbitrarily chosen. Then z ∈ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni ) ⊕ Ni+1 . Hence, z = z1 + z2 , where z1 ∈ N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni = ker P0 . . . Pi ⊆ ker P0 . . . Pi and z2 ∈ Ni+1 . Since ker Pi+1 = Ni+1 , we get Pi+1 z2 = 0 and P0 . . . Pi z1 = 0, and Qi+1 z1 = 0. Hence, P0 . . . Pi+1 z = P0 . . . Pi+1 (z1 + z2 ) = P0 . . . Pi Pi+1 z1 + P0 . . . Pi+1 z2 = P0 . . . Pi Pi+1 z1 = P0 . . . Pi (I − Qi+1 )z1 = P0 . . . Pi z1 − P0 . . . Pi Qi+1 z1 = 0, i. e., z ∈ ker P0 . . . Pi+1 . Because z ∈ N0 ⊕⋅ ⋅ ⋅⊕Ni+1 was arbitrarily chosen and we get that it is an element of ker P0 . . . Pi+1 , we find N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni+1 ⊆ ker P0 . . . Pi+1 . Hence and (3.6), we get (3.5). This completes the proof.
3.3 Independency of the matrix chains In this section, we will prove that the matrix chains constructed in the previous section do not depend on the choice of the admissible up to level k projector sequence. Below, suppose that (A, B) is a properly stated matrix pair and C : I → Mm×m , C ∈ C (I). Lemma 3.1. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then
B = P 0 B− ,
(3.7)
Q0 Q 0 = Q 0 ,
(3.8)
P0 P0 = P0 ,
(3.10)
Q 0 Q0 = Q0 ,
(3.9)
P 0 P0 = P 0
(3.11)
and G0 Q0 = G0 Q0 = 0.
(3.12)
Proof. We have G0 = G 0 ,
C0 = C 0 ,
N0 = N 0
and ker P0 = ker P0 . Moreover, BB− B = B,
B− BB− = B− ,
BB− = R,
B − B = P0
BB− = R,
B B = P0 .
and −
BB B = B,
−
−
−
B BB = B ,
−
Hence, −
−
−
−
−
P0 B− = B BB− = B R = B BB = B , i. e., −
B = P 0 B− .
Since N0 = N 0 , im Q0 = N0 , im Q0 = N 0 , we have im Q0 = im Q0 . Take z ∈ ℝm arbitrarily. If z ∈ ker Q0 , then Q0 z = 0 and Q0 Q0 z = Q0 = 0 = Q0 z. If z ∈ im Q0 , then Q0 z = z and z ∈ im Q0 = im Q0 . Therefore, Q0 z = z and Q0 Q0 z = Q0 z = z = Q0 z. Because z ∈ ℝm was arbitrarily chosen, we conclude that Q0 Q 0 = Q 0 . Let y ∈ ℝm be arbitrarily chosen. If y ∈ ker Q0 , then Q0 y = 0 and Q0 Q0 y = Q0 = 0 = Q0 y. If y ∈ im Q0 , then y ∈ im Q0 and Q0 y = Q0 y = y. From here,
126 � 3 P-projectors. Matrix chains Q0 Q0 y = Q0 y = y = Q0 y. Because y ∈ ℝm was arbitrarily chosen, we arrive at Q 0 Q0 = Q0 . By (3.9), we find I − P0 = (I − P0 )(I − P0 ) = I − P0 − P0 + P0 P0 , whereupon P0 P0 = P0 . Since im Q0 = N0 = ker G0 = im Q0 , we get G0 Q0 = G0 Q0 = 0. This completes the proof. Lemma 3.2. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. If Z1 = I + Q0 Q0 P0 , then G1 = G1 Z1 and ker P0 P1 = ker P0 P1 = N0 ⊕ N1 = N 0 ⊕ N 1 . Proof. Applying (3.8) and (3.9), we get G 1 = G 0 + C 0 Q 0 = G 0 + C0 Q 0 = G 0 + C0 Q0 Q 0
= G0 + C0 Q0 Q0 (P0 + Q0 ) = G0 + C0 Q0 Q0 P0 + C0 Q0 Q0 Q0 = G0 + C0 Q0 Q0 P0 + C0 Q0 Q0 = G0 + C0 Q0 + C0 Q0 Q0 P0 = G1 + C0 Q0 Q0 P0 = G1 + (C0 Q0 + C0 Q02 )Q0 P0
= G1 + (G0 + C0 Q0 )Q0 Q0 P0 = G1 + G1 Q0 Q0 P0 = G1 (I + Q0 Q0 P0 ) = G1 Z1 . Hence, N1 = Z1−1 N 1 and thus, N0 ⊕ N1 = N 0 ⊕ N 1 . By Proposition 3.1, it follows that ker P0 P1 = ker P0 P1 = N0 ⊕ N1 = N 0 ⊕ N 1 . This completes the proof. Lemma 3.3. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then P0 P1 P0 P1 = P0 P1 ,
(3.13)
P0 P1 P0 P1 = P0 P1 ,
(3.14)
P0 P1 P0 = P0 P1
(3.15)
P0 P1 P0 = P0 P1 .
(3.16)
and
Proof. Let x ∈ ℝm be arbitrarily chosen. If x ∈ ker P0 P1 , then x ∈ ker P0 P1 and P0 P1 x = 0, P0 P1 x = 0.
Hence, P0 P1 P0 P1 x = P0 P1 0 = 0 = P0 P1 x. If x ∈ im P0 P1 , then P0 P1 x = x and P0 P1 P0 P1 x = P0 P1 x.
128 � 3 P-projectors. Matrix chains Because x ∈ ℝm was arbitrarily chosen, we conclude that (3.13) holds. Let now, y ∈ ℝm be arbitrarily chosen. If y ∈ ker P0 P1 , then y ∈ ker P0 P1 and P0 P1 y = 0,
P0 P1 y = 0
and P0 P1 P0 P1 y = P0 P1 0 = 0 = P0 P1 y. If y ∈ im P0 P1 , then P0 P1 y = y and P0 P1 P0 P1 y = P0 P1 y. Because y ∈ ℝm was arbitrarily chosen, we conclude that (3.14) holds. Next, P0 P1 P0 y = P0 (I − Q1 )(I − Q0 ) = P0 (I − Q1 − Q0 + Q1 Q0 ) = P0 (P1 − Q0 ) = P0 P1 − P0 Q0 = P0 P1 ,
i. e., (3.15) holds. As above, we get that (3.16) holds. This completes the proof. Lemma 3.4. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then −
ker BBP1 B = ker BP0 P1 B− . −
Proof. Let x ∈ ker BP0 P1 B be arbitrarily chosen. Then −
−
0 = BP0 P1 B x = BP0 P1 P0 B x = BP0 P1 B− x. Therefore, B− x ∈ P0 P1 = ker P0 P1 . Hence, P0 P1 B− x = 0 and BP0 P1 B− x = 0
(3.17)
−
−
and then, x ∈ ker BP0 P1 B . Because x ∈ ker BP0 P1 B was arbitrarily chosen and we get that it is an element of ker BP0 P1 B− , we conclude that −
ker BP0 P1 B ⊆ ker BP0 P1 B− .
(3.18)
Let now, y ∈ ker BP0 P1 B− be arbitrarily chosen. Then BP0 P1 B− y = 0, which holds if B− y ∈ ker P0 P1 = ker P0 P1 . Hence, −
BP0 P1 B− y = BP0 P1 P0 B− y = BP0 P1 B y. Therefore, y ∈ ker BP0 P1 B. Since y ∈ ker BP0 P1 B− was arbitrarily chosen and we get that − it is an element of ker BP0 P1 B , we conclude that −
ker BP0 P1 B− ⊆ ker BP0 P1 B . Hence, and (3.18), we get (3.17). This completes the proof. Lemma 3.5. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then G1 Q1 = −G1 Q0 Q0 P0 Q1 ,
BP0 = B.
Proof. Let Z1 be as in Lemma 3.2. Then G1 Q1 = G1 (Z1 + I − Z1 )Q1 = G1 Z1 Q1 + G1 (I − Z1 )Q1 . Since im Q1 = N 1 , we have im Z1 Q1 = Z1 N 1 and using that N 1 = ker G1 , we obtain G1 Z1 Q1 = 0. Let x ∈ ℝm be arbitrarily chosen. If x ∈ ker P0 , then x ∈ ker B and P0 x = 0,
Bx = 0,
BP0 x = B = 0 = P0 x = Bx.
130 � 3 P-projectors. Matrix chains If x ∈ im P0 , then P0 x = x and BP0 x = Bx. Because x ∈ ℝm was arbitrarily chosen, we conclude that BP0 = B. This completes the proof. Lemma 3.6. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then Δ
−
σ
C 1 = C1 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B + G1 Q0 R10 , where − Δ
R10 = (P0 − Q0 P0 − Q0 P1 )(BP0 P1 B ) B.
Proof. We have − Δ
−
− Δ
−
C 1 = C 0 P0 − G1 B (BP0 P1 B ) BP0 = C0 P0 − G1 Z1 B (BP0 P1 B ) BP0 . By the proof of Lemma 3.5, we have BP0 = B. Hence, − Δ
−
−
− Δ
C 1 = C0 P0 − G1 Z1 B (BP0 P1 B ) = C0 (P0 + Q0 )P0 − G1 Z1 B (BP0 P1 B ) B −
− Δ
= C0 P0 P0 + C0 Q0 P0 − G1 Z1 B (BP0 P1 B ) B. By (3.10), we have P0 P0 = P0 . Then −
− Δ
C 1 = C0 P0 + C0 Q0 P0 − G1 Z1 B (BP0 P1 B ) B. By (3.7) and (3.15), we find −
BP0 P1 B = BP0 P1 P0 B− = BP0 P1 B− .
Now, we apply (3.14) and we find −
BP0 P1 B = BP0 P1 P0 P1 B− = BP0 P1 P0 P0 P1 B− = BP0 P1 B− BP0 P1 B− . Therefore, − Δ
Δ
Δ
σ
Δ
(BP0 P1 B ) = (BP0 P1 B− BP0 P1 B− ) = (BP0 P1 B− ) (BP0 P1 B− ) + BP0 P1 B− (BP0 P1 B− ) and −
Δ
σ
−
Δ
σ
−
Δ
σ
Δ
−
C 1 = C0 P0 + C0 Q0 P0 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B − G1 Z1 B BP0 P1 B− (BP0 P1 B− ) B Δ
= C0 P0 + C0 Q0 P0 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B − G1 Z1 P0 P0 P1 B− (BP0 P1 B− ) B Δ
= C0 P0 + C0 Q0 P0 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B − G1 Z1 P0 P1 B− (BP0 P1 B− ) B Δ
Δ
= C0 P0 − G1 B− (BP0 P1 B− ) B + C0 Q0 P0 + G1 B− (BP0 P1 B− ) B −
Δ
σ
Δ
− G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B − G1 Z1 P0 P1 B− (BP0 P1 B− ) B Δ
−
Δ
σ
−
Δ
σ
Δ
σ
= C1 + C0 Q0 P0 + G1 B− (BP0 P1 B− ) B − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B Δ
− G1 Z1 P0 P1 B− (BP0 P1 B− ) B
Δ
= C1 + C0 Q0 P0 + G1 (I − Z1 P0 P1 )B− (BP0 P1 B− ) B − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B Δ
= C1 + (G0 + C0 Q0 )Q0 P0 + G1 (I − Z1 P0 P1 )B− (BP0 P1 B− ) B −
Δ
σ
− G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B
Δ
−
= C1 + G1 Q0 P0 + G1 (I − Z1 P0 P1 )B− (BP0 P1 B− ) B − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B, i. e., Δ
−
Δ
σ
C 1 = C1 + G1 Q0 P0 + G1 (I − Z1 P0 P1 )B− (BP0 P1 B− ) B − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) . Observe that I − Z1 P0 P1 = I + (I − Z1 − I)P0 P1 = I + (I − Z1 )P0 P1 − P0 P1
= I − (I − Q0 )(I − Q1 ) + (I − Z1 )P0 P1 = I − I + Q0 + Q1 − Q0 Q1 − Q0 Q0 P0 P0 P1 = Q1 + Q0 (I − Q1 ) − Q0 Q0 P0 P0 P1 = Q1 + Q0 P1 − Q0 Q0 P0 P0 P1 .
Hence, using Lemma 3.5, (3.8) and (3.10), we get G1 (I − Z1 P0 P1 ) = G1 (Q1 + Q0 P1 − Q0 Q0 P0 P0 P1 )
= G1 Q1 + G1 Q0 P1 − G1 Q0 Q0 P0 P0 P1
= −G1 Q0 Q0 P0 Q1 + G1 Q0 P1 − G1 Q0 Q0 P0 P0 P1 = −G1 Q0 P0 Q1 + G1 Q0 P1 − G1 Q0 P0 P1
= −G1 (Q0 P0 (Q1 + P1 ) + Q0 P1 ) = −G1 (Q0 P0 + Q0 P1 ).
132 � 3 P-projectors. Matrix chains Therefore, Δ
C 1 = C1 + G1 Q0 P0 − G1 (Q0 P0 + Q0 P1 )B− (BP0 P1 B− ) B −
Δ
σ
− G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B
Δ
= C1 + G1 Q0 P0 − G1 (Q0 Q0 P0 + Q0 Q0 P1 )B− (BP0 P1 B− ) B −
Δ
σ
− G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B
Δ
= C1 + G1 Q0 (P0 − Q0 P0 − Q0 P1 )B− (BP0 P1 B− ) B −
Δ
σ
− G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B −
Δ
σ
= C1 + G1 Q0 R10 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B. This completes the proof. Lemma 3.7. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then Q1 Q 1 = Q 1 , Q 1 Q1 = Q1 ,
Q1 − Q1 = Q1 Q1 P1 .
(3.19) (3.20) (3.21)
Proof. Let x ∈ ℝm be arbitrarily chosen. If x ∈ im Q1 , then x ∈ im Q1 and Q1 x = Q 1 x = x.
Hence, Q1 Q1 x = Q1 x = x = Q1 x. If x ∈ ker Q1 , then Q1 Q1 x = Q1 0 = 0 = Q1 x. Because x ∈ ℝm was arbitrarily chosen, we get (3.19). Let now, y ∈ ℝm be arbitrarily chosen. If y ∈ im Q1 , then y ∈ im Q1 and Q1 Q1 y = Q1 y = y = Q1 y. If y ∈ ker Q1 , then Q1 y = 0
and Q1 Q1 y = Q1 0 = 0 = Q1 y. Since y ∈ ℝm was arbitrarily chosen, we get (3.20). Now, applying (3.19) and (3.20), we find Q1 Q1 P1 = Q1 P1 = Q1 (I − Q1 ) = Q1 − Q1 Q1 = Q1 − Q1 , i. e., we obtain (3.21). This completes the proof. Lemma 3.8. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then Q1 = Q1 P0 = Q1 P0
(3.22)
Q 1 Z1 = Q 1 .
(3.23)
and
Proof. We apply (3.9) and using that Q1 Q0 = 0, we get Q1 P0 = Q1 (I − Q0 ) = Q1 − Q1 Q0 = Q1 − Q1 Q0 Q0 = Q1 and Q1 P0 = Q1 (I − Q0 ) = Q1 − Q1 Q0 = Q1 . Now, by the definition of Z1 and (3.8), we get Q1 Z1 = Q1 (I + Q0 Q0 P0 ) = Q1 + Q1 Q0 Q0 P0 = Q1 + Q1 Q0 P0 = Q1 . This completes the proof. Lemma 3.9. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then G2 = G2 Z2 + G2 P1 A20 , where
(3.24)
134 � 3 P-projectors. Matrix chains Δ
σ
Z2 = (I + Q0 R10 Q1 + Q1 Q1 P1 − Q0 Q0 P0 B− (BP0 P1 B− ) (BP0 P1 B− ) BQ1 )Z1 and − Δ
−
− σ
A20 = −B (BP0 P1 B ) (BP0 P1 B ) BQ1 Z1 .
Proof. We apply (3.21) and (3.23) and we find G2 = G1 + C 1 Q1 = G1 Z1 + C 1 Q1 = G1 Z1 + C 1 Q1 Z1 = (G1 + C 1 Q1 )Z1 Δ
−
σ
= (G1 + (C1 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B + G1 Q0 R10 )Q1 )Z1 Δ
−
σ
= (G1 + C1 Q1 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 + G1 Q0 R10 Q1 )Z1 Δ
−
σ
= (G1 + C1 Q1 + C1 (Q1 − Q1 ) − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 + G1 Q0 R10 Q1 )Z1 Δ
−
σ
= (G2 + C1 Q1 Q1 P1 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 + (G1 + C1 Q1 )Q0 R10 Q1 )Z1 Δ
−
σ
= (G2 + (G1 + C1 Q1 )Q1 Q1 P1 + G2 Q0 R10 Q1 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 )Z1 Δ
−
σ
= (G2 + G2 Q1 Q1 P1 + G2 Q0 R10 Q1 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 )Z1 −
Δ
σ
−
Δ
σ
= (G2 + G2 Q1 Q1 P1 + G2 Q0 R10 Q1 − G1 (Z1 − I)B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 Δ
−
σ
− G1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 )Z1
= (G2 + G2 Q1 Q1 P1 + G2 Q0 R10 Q1 − G1 Q0 Q0 P0 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 Δ
−
σ
− G2 P1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 )Z1
Δ
−
σ
= (G2 + G2 Q1 Q1 P1 + G2 Q0 R10 Q1 − (G1 + C1 Q1 )Q0 Q0 P0 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 Δ
−
σ
= (G2 + G2 Q1 Q1 P1 + G2 Q0 R10 Q1 − G2 Q0 Q0 P0 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 Δ
−
σ
− G2 P1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 )Z1
−
Δ
σ
−
Δ
σ
= G2 (I + Q1 Q1 P1 + G2 Q0 R10 Q1 − Q0 Q0 P0 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 Δ
−
σ
− P1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 )Z1
= G2 (I + Q1 Q1 P1 + G2 Q0 R10 Q1 − Q0 Q0 P0 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 )Z1 −
Δ
σ
− G2 P1 B (BP0 P1 B− ) (BP0 P1 B− ) BQ1 Z1
= G2 Z2 + G2 P1 A20 .
This completes the proof. Lemma 3.10. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then P0 P1 P2 P0 P1 P2 = P0 P1 P2 , P0 P1 P2 P0 P1 P2 = P0 P1 P2 ,
(3.25) (3.26)
P0 P1 P2 P0 = P0 P1 P2 ,
P0 P1 P2 P0 = P0 P1 P2 .
(3.27) (3.28)
Proof. Let x ∈ ℝm be arbitrarily chosen. If x ∈ ker P0 P1 P2 , then x ∈ ker P0 P1 P2 and P0 P1 P2 x = P0 P1 P2 x = 0, and P0 P1 P2 P0 P1 P2 x = P0 P2 P2 0 = 0 = P0 P1 P2 x. If x ∈ im P0 P1 P2 , then P0 P1 P2 x = x and P0 P1 P2 P0 P1 P2 x = P0 P1 P2 x. Because x ∈ ℝm was arbitrarily chosen, we get (3.25). As above, one can deduct (3.26). Next, P0 P1 P2 P0 = P0 P1 (I − Q2 )(I − Q0 ) = P0 P1 (I − Q0 − Q2 + Q2 Q0 ) = P0 P1 (P2 − Q0 )
= P0 P1 P2 − P0 P1 Q0 = P0 P1 P2 − P0 (I − Q1 )Q0 = P0 P1 P2 − P0 (Q0 − Q1 Q0 ) = P0 P1 P2 − P0 Q0 = P0 P1 P2 ,
i. e., we obtain (3.27), The proof of the equality (3.28) repeats the main idea for the proof of the equality (3.27) and we leave it for an exercise to the reader. This completes the proof. Lemma 3.11. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then C 2 = C2 + G2 Z2 A22 + G1 Z1 A21 + G2 Q0 R20 + G2 Q1 R21 + G2 P1 R22 , with certain coefficients A2j , j ∈ {1, 2}, and R2i , i ∈ {0, 1, 2}. Proof. Applying Lemma 3.6, we get −
− Δ
C 2 = C 1 P1 − G2 B (BP0 P1 P2 B ) BP0 P1 −
Δ
σ
= (C1 − G1 Z1 B (BP0 P1 B− ) (BP0 P1 B− ) B + G1 Q0 R10 )P1 −
− Δ
− G2 Z2 B (BP0 P1 P2 B ) BP0 P1 + G2 P1 A20 BP0 P1
136 � 3 P-projectors. Matrix chains = C1 P1 + G2 Z2 Ã 22 + G1 Z1 A21 + G2 P1 R22 = C1 P1 P1 + C1 Q1 P1 + G2 Z2 Ã 22 + G1 Z1 A21 + G2 P1 R22 = C1 P1 + C1 Q1 P1 + G2 Z2 Ã 22 + G1 Z1 A21 + G2 P1 R22 Δ
Δ
= C1 P1 − G2 B− (BP0 P1 P2 B− ) BP0 P1 + G2 Z2 B− (BP0 P1 P2 B− ) BP0 P1 Δ
+ G2 (I − Z2 )B− (BP0 P1 P2 B− ) BP0 P1 + C1 Q1 P1 + G2 Z2 Ã 22 + G1 Z1 A21 + G2 P1 R22 Δ
= C2 + G2 (I − Z2 )B− (BP0 P1 P2 B− ) BP0 P1 + (G1 + C1 Q1 )Q1 P1 + G2 Z2 A22 + G1 Z1 A21 + G1 P1 R22
= C2 + G2 Z2 A22 + G1 Z1 A21 + G2 Q0 R20 + G2 Q1 R21 + G2 P1 R22 . This completes the proof. Lemma 3.12. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then, for any i ∈ {1, . . . , k}, we have Qi Q i = Q i , Q i Qi = Qi ,
Q i − Qi = Qi Q i P i .
(3.29) (3.30) (3.31)
Proof. We will use the principal of the mathematical induction. 1. For i = 1, the assertion follows by (3.19), (3.20) and (3.21). 2. Assume that the assertion holds for some i − 1, i ∈ {2, . . . , k}. 3. We will prove the assertion for i. Let x ∈ ℝm be arbitrarily chosen. Suppose that x ∈ im Qi . Then x ∈ im Qi , and hence, we arrive at Qi x = Qi x = x. Therefore, Qi Qi x = Qi x = x = Qi x. Assume that x ∈ ker Qi . Then Qi Qi x = Qi 0 = 0 = Qi x. Because x ∈ ℝm was arbitrarily chosen, we get (3.29). Now, we take y ∈ ℝm arbitrarily. If y ∈ im Qi , then y ∈ im Qi and Qi Qi y = Qi y = y = Qi y. If y ∈ ker Qi , then
3.3 Independency of the matrix chains
� 137
Qi y = 0 and Qi Qi y = Qi 0 = 0 = Qi y. Since y ∈ ℝm was arbitrarily chosen, we get (3.30). From here, using (3.29) and (3.30), we obtain Qi Qi Pi = Qi Pi = Qi (I − Qi ) = Qi − Qi Qi = Qi − Qi , i. e., we obtain (3.31). Hence, by the principal of the mathematical induction, we conclude that (3.29), (3.30) and (3.3) hold for any i ∈ {1, . . . , k}. This completes the proof. Lemma 3.13. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then, for any i ∈ {1, . . . , k}, one has P0 P1 . . . Pi P0 P1 . . . Pi = P0 P1 . . . Pi , P0 P1 . . . Pi P0 P1 . . . Pi = P0 P1 . . . Pi , P0 P1 . . . Pi P0 = P0 P1 . . . Pi ,
P0 P1 . . . Pi P0 = P0 P1 . . . Pi .
(3.32) (3.33) 0 ≤ j ≤ i,
(3.34) (3.35)
Proof. We will use the principal of the mathematical induction. 1. Let i = 1. Then the assertion follows by (3.13), (3.14), (3.15) and (3.16). 2. Assume that the assertion holds for i − 1 for some i ∈ {2, . . . , k}. 3. We will prove the assertion for i. Let x ∈ ℝm be arbitrarily chosen. If x ∈ ker P0 P1 . . . Pi , then x ∈ ker P0 P1 . . . Pi and P0 P1 . . . Pi x = P0 P1 . . . Pi x = 0, and P0 P1 . . . Pi P0 P1 . . . Pi x = P0 P1 . . . Pi 0 = 0 = P0 P1 . . . Pi x. If x ∈ im P0 P1 . . . Pi , then P0 P1 . . . Pi x = x and P0 P1 . . . Pi P0 P1 . . . Pi x = P0 P1 . . . Pi x.
138 � 3 P-projectors. Matrix chains Because x ∈ ℝm was arbitrarily chosen, we get (3.32). As above, one can deduct (3.33). Next, P0 P1 . . . Pi P0 = P0 P1 . . . Pi−1 (I − Qi )(I − Q0 )
= P0 P1 . . . Pi−1 (I − Q0 − Qi + Qi Q0 ) = P0 P1 . . . Pi−1 (Pi − Q0 ) = P0 P1 . . . Pi − P0 P1 . . . Pi−1 Q0 = ⋅ ⋅ ⋅ = P0 P1 . . . Pi − P0 P1 Q0
= P0 P1 . . . Pi − P0 (I − Q1 )Q0 = P0 P1 . . . Pi − P0 (Q0 − Q1 Q0 ) = P0 P1 . . . Pi − P0 Q0 = P0 P1 . . . Pi ,
i. e., we obtain (3.34), The proof of the equality (3.35) repeats the main idea for the proof of the equality (3.34) and we leave it for an exercise to the reader. Hence, and the principal of the mathematical induction, we conclude that the assertion is true for any i ∈ {1, . . . , k}. This completes the proof. Lemma 3.14. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then P0 . . . Pi Pj = P0 . . . Pi , Proof. 1.
0 ≤ j ≤ i.
(3.36)
Let j = i. Then P0 . . . Pi Pi = P0 . . . Pi−1 Pi Pi = (P0 . . . Pi−1 )(Pi Pi ) = (P0 . . . Pi−1 )Pi = P0 . . . Pi .
2.
Let j < i. Then P0 . . . Pi Pj = P0 . . . Pi−1 Pi Pj = P0 . . . Pi−1 (I − Qi )(I − Qj ) = P0 . . . Pi−1 (I − Qi − Qj + Qi Qj ) = P0 . . . Pi−1 (I − Qi − Qj ) = P0 . . . Pi−1 (Pi − Qj ) = P0 . . . Pi−1 Pi − P0 . . . Pi−1 Qj = P0 . . . Pi − P0 . . . Pi−2 Pi−1 Qj = P0 . . . Pi − P0 . . . Pi−2 (I − Qi−1 )Qj = P0 . . . Pi − P0 . . . Pi−2 (Qj − Qi−1 Qj ) = P0 . . . Pi − P0 . . . Pi−2 Qj = ⋅ ⋅ ⋅ = P0 . . . Pi − P0 . . . Pj Qj = P0 . . . Pi . This completes the proof.
Lemma 3.15. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Then P0 . . . Pi Pj . . . P0 = P0 . . . Pi ,
0 ≤ j ≤ i.
Proof. Applying (3.36), we get P0 . . . Pi Pj . . . P0 = P0 . . . Pi Pj Pj−1 . . . P0 = (P0 . . . Pi Pj )Pj−1 . . . P0 = P0 . . . Pi Pj−1 . . . P0
(3.37)
= P0 . . . Pi Pj−1 Pj−2 . . . P0 = (P0 . . . Pi Pj−1 )Pj−2 . . . P0 = P0 . . . Pi Pj−2 . . . P0 = ⋅ ⋅ ⋅ = P0 . . . Pi P0 = P0 . . . Pi . This completes the proof. Theorem 3.2. The subspaces im Gi and N0 ⊕ ⊕Ni , i ∈ {0, . . . , k}, do not at all depend on the special choice of admissible projector functions Q0 , . . . , Qk . Proof. We will use the principal of the mathematical induction. 1. Let i = 1. Then the assertion follows by Lemma 3.2 and Lemma 3.6. 2. Assume that the assertion holds for some i ∈ {0, . . . , k − 1}. More precisely, assume that i
Gi = Gi Zi + ∑ Gi Pi−1 . . . Pj Aij , j=1
i−2
Zi = (I + Qi−1 Qi−1 Pi−1 + ∑ Qj Dij Qi−1 )Zi−1 , j=0
i
i−1
j=1
j=1
C i = Ci + ∑ Gj Zj Bij + Gi ∑ Qj + Cij , for some certain coefficients Aij , Cij , j ∈ {1, . . . , i − 1}, Bik , k ∈ {1, . . . , i}, Dil , l ∈ {0, . . . , i − 2}. Since Qi Qj = Qi Qj Qj = 0,
j ∈ {0, . . . , i − 1},
i ∈ {1, . . . , k},
we have that Qi Zi = Qi ,
i ∈ {1, . . . , k}.
Then Gi+1 = Gi + C i Qi
i−1
= Gi Zi + ∑ Gi Pj . . . Pi−1 Aij j=1
i
i−1
j=1
j=1
+ (Ci + ∑ Gj Zj Bij + Gi ∑ Qj Cij )Qi i−1
i
j=1
j=1
= (Gi + Ci Qi )Zi + Ci (Qi − Qi )Zi + ∑ Gi Pi−1 . . . Pj Aij + ∑ Gj Zj Bij Qi Zi i−1
+ Gi ∑ Qj Cij Qi Zi j=1
140 � 3 P-projectors. Matrix chains i−1
= Gi+1 Zi + Ci Qi Qi Pi Zi + ∑ Gi Pi−1 . . . Pj Aij j=1
i
i−1
j=1
j=1
+ ∑ Gj Zj Bij Qi Zi + Gi ∑ Qj Cij Qi Zi i−1
i
j=1
j=1
= Gi+1 Zi + (Gi + Ci Qi )Qi Qi Pi Zi + ∑ Gi+1 Pi Pi−1 . . . Pj Aij + ∑ Gj Zj Bij Qi Zi i−1
+ (Gi + Ci Qi ) ∑ Qj Cij Qi Zi j=1
i
i+1
j=1
j=1
= Gi+1 (I + Qi Qi Pi + ∑ Qj Di+1j Q − i)Zi + ∑ Gi+1 Pi . . . Pj Ai+1j i+1
= Gi+1 Zi+1 + ∑ Gi+1 Pi . . . Pj Ai+1j . j=1
Next, − Δ
−
C i+1 = C i Pi − Gi B (BP0 . . . Pi+1 B ) BP0 . . . Pi i
i−1
j=1
j=1
− Δ
−
= (Ci + ∑ Gj Zj Bij + Gi ∑ Qj Cij )Pi − Gi Zi B (BP0 . . . Pi+1 B ) BP0 . . . Pi i−1
− Δ
−
− ∑ Gi Pi−1 . . . Pi Aij B (BP0 . . . Pi+1 B ) BP0 . . . Pi j=1
i
i−1
= Ci Pi + ∑ Gj Zj Bij Pi + Gi ∑ Qj Cij Pi j=1
j=1
−
− Δ
− Gi Zi B (BP0 . . . Pi+1 B ) BP0 . . . Pi i−1
− Δ
−
− ∑ Gi Pi−1 . . . Pi Aij B (BP0 . . . Pi+1 B ) BP0 . . . Pi j=1
i
i−1
= Ci Pi Pi + Ci Qi Pi + ∑ Gj Zj Bij Pi + Gi ∑ Qj Cij Pi j=1
j=1
− Δ
−
− Gi Zi B (BP0 . . . Pi+1 B ) BP0 . . . Pi i−1
− Δ
−
− ∑ Gi Pi−1 . . . Pi Aij B (BP0 . . . Pi+1 B ) BP0 . . . Pi j=1
Δ
Δ
= Ci Pi − Gi B− (BP0 . . . Pi+1 B− ) BP0 . . . Pi + Gi Zi B− (BP0 . . . Pi+1 B− ) BP0 . . . Pi Δ
+ Gi (I − Zi )B− (BP0 . . . Pi+1 B− ) BP0 . . . Pi i
i−1
j=1
j=1
+ (Gi + Ci Qi )Qi Pi + ∑ Gj Zj Bij Pi + (Gi + Ci Qi ) ∑ Qj Cij Pi
− Δ
−
− Gi Zi B (BP0 . . . Pi+1 B ) BP0 . . . Pi i−1
− Δ
−
− ∑ Gi+1 Pi Pi−1 . . . Pi Aij B (BP0 . . . Pi+1 B ) BP0 . . . Pi j=1
Δ
= Ci+1 + Gi Zi B− (BP0 . . . Pi+1 B− ) BP0 . . . Pi Δ
+ Gi (I − Zi )B− (BP0 . . . Pi+1 B− ) BP0 . . . Pi i
i−1
+ Gi+1 Qi Pi + ∑ Gj Zj Bij Pi + Gi+1 ∑ Qj Cij Pi j=1
j=1
− Δ
−
− Gi Zi B (BP0 . . . Pi+1 B ) BP0 . . . Pi i−1
− Δ
−
+ ∑ Gi+1 (Zi+1 − I)Pi Pi−1 . . . Pi Aij B (BP0 . . . Pi+1 B ) BP0 . . . Pi j=1 i−1
− Δ
−
− ∑ Gi+1 Zi+1 Pi Pi−1 . . . Pi Aij B (BP0 . . . Pi+1 B ) BP0 . . . Pi j=1
i+1
i
j=1
j=1
= Ci+1 + ∑ Gj Zj Bi+1j + Gi+1 ∑ Qj Ci+1j . Therefore, the assertion follows for i + 1. Hence, and the principal of the mathematical induction, we conclude that the assertion holds for any i ∈ {0, . . . , k}. This completes the proof. As above, one can deduct the following result. ̃ i and N0 ⊕ Ni , i ∈ {0, . . . , k}, do not at all depend on the Theorem 3.3. The subspaces im G special choice of admissible projector functions Q0 , . . . , Qk . Example 3.1. Let 𝕋 = 2ℕ0 and 0 A(t) = (0 0
1 −t 0
0 B(t) = (0 0
0 1 0
1 C(t) = (0 0
0 0 −t
0 1) , 0 0 0) , 1 0 0) , 1
t ∈ 𝕋.
Here, σ(t) = 2t,
μ(t) = t,
t ∈ 𝕋.
142 � 3 P-projectors. Matrix chains Then 0 G0 (t) = A(t)B(t) = (0 0
1 −t 0
0 0 ) ( 1 0 0 0
0 1 0
0 0 ) = ( 0 0 1 0
1 −t 0
Therefore, A(t)B(t) = A(t),
t ∈ 𝕋,
and P0 (t) = R(t),
t ∈ 𝕋.
Let x1 (t) x(t) = (x2 (t)) ∈ ℝ3 , x3 (t) t ∈ 𝕋, be such that G0 x(t) = 0,
t ∈ 𝕋,
or 0 (0 0
1 −t 0
0 x1 (t) 0 ) ( ) = ( 1 x2 (t) 1) , 0 x3 (t) 0
t ∈ 𝕋,
or x2 (t) = 0,
−tx2 (t) + x3 (t) = 0,
t ∈ 𝕋,
or x2 (t) = 0,
x3 (t) = 0,
t ∈ 𝕋.
Then we take 1 Q0 (t) = (0 0
0 0 0
0 0) , 0
t ∈ 𝕋,
0 1) , 0
t ∈ 𝕋.
3.3 Independency of the matrix chains
� 143
and 1 P0 (t) = I − Q0 (t) = (0 0
0 1 0
0 1 0) − (0 1 0
0 0 0
0 0 0 ) = (0 0 0
0 1 0
0 0) , 1
t ∈ 𝕋.
Next, 0 G1 (t) = G0 (t) + C0 (t)Q0 (t) = (0 0 0 = (0 0
1 −t 0
0 1 1 ) + (0 0 0
0 0 0
1 −t 0
0 1 1 ) + (0 0 0
0 1 0) = (0 0 0
1 −t 0
0 0 −t
0 1 0) (0 1 0
0 1) , 0
and det G1 (t) = 0 + 0 + 0 − 0 − 0 − 0 = 0,
t ∈ 𝕋.
Let x1 (t) x(t) = (x2 (t)) ∈ ℝ3 , x3 (t) t ∈ 𝕋 be such that G1 (t)x(t) = 0,
t ∈ 𝕋,
or −1 (0 0
1 −t 0
0 x1 (t) 0 1 ) (x2 (t)) = (0) , 0 x3 (t) 0
or −x1 (t) + x2 (t) = 0,
−tx2 (t) + x3 (t) = 0,
t ∈ 𝕋,
or x2 (t) = x1 (t),
x3 (t) = tx1 (t),
t ∈ 𝕋.
t ∈ 𝕋,
t ∈ 𝕋,
0 0 0
0 0) 0
144 � 3 P-projectors. Matrix chains Take 0 Q1 (t) = (0 0
1 1 t
0 0) , 0
t ∈ 𝕋,
1 1 t
0 1 0) = (0 0 0
and 1 P1 (t) = I − Q1 (t) = (0 0
0 1 0
0 0 0) − (0 1 0
0 0) , 1
−1 0 −t
t ∈ 𝕋,
and 0 P0 (t)P1 (t) = (0 0
0 1 0
0 1 0) (0 1 0
−1 0 −t
0 0 0) = (0 1 0
b12 (t) b22 (t) b32 (t)
b13 (t) b23 (t)) , b33 (t)
0 0 −t
0 0) , 1
t ∈ 𝕋.
Now, we will find B− (t), t ∈ 𝕋, b11 (t) B− (t) = (b21 (t) b31 (t)
t ∈ 𝕋,
so that B− (t)B(t)B− (t) = B− (t), B− (t)B(t) = P0 (t),
B(t)B− (t)B(t) = B− (t),
B(t)B− (t) = P0 (t) = R(t),
t ∈ 𝕋.
We have 0 B(t)B− (t)B(t) = (0 0
0 1 0
0 b11 (t) 0) (b21 (t) 1 b31 (t)
0 = (0 0
0 1 0
0 0 0) (0 1 0
0 = (0 0
0 b22 b32
b12 (t) b22 (t) b32 (t)
b12 (t) b22 (t) b32 (t)
0 0 b23 ) = (0 b33 0
b13 (t) 0 b23 (t)) (0 b33 (t) 0
0 1 0
b13 (t) b23 (t)) b33 (t) 0 1 0
0 0) , 1
t ∈ 𝕋,
whereupon b22 (t) = 1,
b23 (t) = 0,
b32 (t) = 0,
b33 (t) = 1,
t ∈ 𝕋,
0 0) 1
3.3 Independency of the matrix chains
� 145
and b11 (t) B− (t) = (b21 (t) b31 (t)
b12 (t) 1 0
b13 (t) 0 ), 1
t ∈ 𝕋.
Also, b11 (t) B− (t)B(t) = (b21 (t) b31 (t) 0 = (0 0
0 1 0
b12 (t) 1 0 0 0) , 1
b13 (t) 0 0 ) (0 1 0
0 1 0
0 0 0 ) = (0 1 0
b12 (t) 1 0
b13 (t) 0 ) = P0 (t) 1
t ∈ 𝕋.
Therefore, b12 (t) = 0,
b13 (t) = 0,
t ∈ 𝕋,
and b11 (t) B− (t) = (b21 (t) b31 (t)
0 1 0
0 0) , 1
0 B(t)B− (t) = (0 0
0 1 0
0 b11 (t) 0) (b21 (t) 1 b31 (t)
0 1 0
0 0 0) = (b21 (t) 1 b31 (t)
0 = (0 0
0 1 0
0 0) , 1
t ∈ 𝕋.
Next,
t ∈ 𝕋.
Hence, b21 (t) = 0,
b31 (t) = 0,
t ∈ 𝕋,
and 0 B− (t) = (0 0 It remains to check that
0 1 0
0 0) , 1
t ∈ 𝕋.
0 1 0
0 0) = P0 (t) 1
146 � 3 P-projectors. Matrix chains B(t)B− (t)B(t) = B(t),
t ∈ 𝕋.
We have 0 B(t)B− (t)B(t) = (0 0
0 1 0
0 0 0) (0 1 0
0 1 0
0 0 0) (0 1 0
0 = (0 0
0 1 0
0 0 0) (0 1 0
0 1 0
0 0 0 ) = (0 1 0
0 1 0
0 0) 1 0 1 0
0 0) , 1
t ∈ 𝕋.
Therefore, Δ
C1 (t) = C0 (t)P0 (t) − G1 (t)B− (t)(B(t)P0 (t)P1 (t)B− (t)) B(t)P0 (t) 1 = (0 0
and
0 0 −t
0 0 0) (0 1 0
1 − (0 0
1 −t 0
0 × (0 0
0 1 0
0 0 1 ) (0 0 0 0 0 0) (0 1 0
0 1 0
0 0) 1
0 1 0 0 1 0
0 0 0) ((0 1 0
0 1 0
0 0 0) (0 1 0
0 0 −t
0 0 0) (0 1 0
0 1 0
0 1 0
0 0 0 ) (0 1 0
0 0 −t
0 0 0)) (0 1 0
Δ
0 0)) 1
0 0) 1
0 = (0 0
0 0 −t
0 0 0) − (0 1 0
1 −t 0
0 0 1 ) ((0 0 0
0 = (0 0
0 0 −t
0 0 0) − (0 1 0
1 −t 0
0 0 1 ) (0 0 0
0 0 −t
0 0 0) (0 1 0
0 = (0 0
0 0 −t
0 0 0) − (0 1 0
1 −t 0
0 0 1 ) (0 0 0
0 0 −1
0 0 0) (0 0 0
0 = (0 0
0 0 −t
0 0 0) − (0 1 0
0 −1 0
0 0 0) (0 0 0
0 1 0
0 = (0 0
0 0 −t
0 0 0) − (0 1 0
0 −1 0
0 0 0) = (0 0 0
Δ
0 0) 1 0 1 −t
0 0) 1
0 1 0 0 1 0
Δ
0 0) 1 0 0) 1
0 1 0
0 0) 1
3.3 Independency of the matrix chains
G2 (t) = G1 (t) + C1 (t)Q1 (t) 1 = (0 0
1 −t 0
0 0 1 ) + (0 0 0
0 1 −t
0 0 0 ) (0 1 0
1 = (0 0
1 −t 0
0 0 1 ) + (0 0 0
0 1 0
0 1 0 ) = (0 0 0
t 0) 0
−1 1 t
1 1−t 0
0 1) , 0
t ∈ 𝕋.
Let x1 (t) x(t) = (x2 (t)) ∈ ℝ3 , x3 (t) t ∈ 𝕋, be such that G2 (t)x(t) = 0,
t ∈ 𝕋,
or 1 (0 0
1 1−t 0
0 x1 (t) 0 1 ) (x2 (t)) = (0) , 0 x3 (t) 0
t ∈ 𝕋,
or x1 (t) + x2 (t) = 0,
(1 − t)x2 (t) + x3 (t) = 0,
t ∈ 𝕋,
or x2 (t) = −x1 (t),
x3 (t) = (1 − t)x1 (t),
t ∈ 𝕋,
and take 0 Q2 (t) = (0 0
−t t −t(1 − t)
1 −1 ) , 1−t
t ∈ 𝕋.
Hence, 1 P2 (t) = I − Q2 (t) = (0 0
0 1 0
0 0 0) − (0 1 0
−t t −t(1 − t)
1 −1 ) 1−t
� 147
148 � 3 P-projectors. Matrix chains 1 = (0 0
t 1−t t(1 − t)
−1 1 ), t
t ∈ 𝕋,
and 0 P0 (t)P1 (t)P2 (t) = (0 0
0 0 −t
0 1 0 ) (0 1 0
t 1−t t(1 − t)
−1 0 1 ) = (0 t 0
0 0 0
0 0) , 0
t ∈ 𝕋,
and Δ
C2 (t) = C1 (t)P1 − G2 (t)B− (t)(B(t)P0 (t)P1 (t)P2 (t)B− (t)) B(t)P0 (t)P1 (t) 0 = C1 (t)P1 (t) = (0 0
0 1 −t
0 1 0 ) (0 1 0
1 0 −t
0 0 0) = (0 1 0
0 0 −t
0 0) , 1
t ∈ 𝕋,
and 1 G3 (t) = G2 (t) + C2 (t)Q2 (t) = (0 0 1 = (0 0
1 1−t 0
0 0 1 ) + (0 0 0
1 1−t 0 0 0 −t
0 0 1 ) + (0 0 0
0 1 0) = (0 1 0
0 0 −t
1 1−t −t
0 0 0) (0 1 0 0 1) , 1
−t t −t(1 − t)
1 −1 ) 1−t
t ∈ 𝕋.
We have det G3 (t) = 1 − t + 0 + 0 − 0 + t − 0 = 1 ≠ 0,
t ∈ 𝕋.
Exercise 3.1. Let 𝕋 = 2ℤ and 1 A(t) = (0 0
−1 0 0
0 t) , 1
1 B(t) = (0 0
0 0 0
0 0) , 1
−1 C(t) = ( 0 0
1 0 0
0 0 ), t+1
t ∈ 𝕋.
Find 1. Q0 , Q1 , Q2 , C0 , C1 , C2 and G0 , G1 , G2 , ̃ 0, G ̃ 1, G ̃ 2. 2. C̃0 , C̃1 , C̃1 and G Definition 3.5. The matrix pair (A, B) is said to be regular or σ-regular with tractability index zero on I if A and B both are invertible on I.
Definition 3.6. The matrix pair (A, B) is said to be regular with tractability index ν ≥ 1 if there exists an admissible projector sequence {Q0 , . . . , Qν−1 } so that Gi , 0 ≤ i < ν, are singular and Gν is nonsingular. Definition 3.7. The matrix pair (A, B) is said to be regular with tractability index ν ≥ 1 ̃ i , 0 ≤ i < ν, are if there exists an admissible projector sequence {Q0 , . . . , Qν−1 } so that G ̃ ν is nonsingular. singular and G Definition 3.8. The matrix pair (A, B) is said to be regular if it is regular with any tractability index. Definition 3.9. The matrix pair (A, B) is said to be σ-regular if it is σ-regular with any tractability index.
3.4 Alternative chain constructions Suppose that the matrix pair (A, B) is properly stated on I and C : I → Mm×m , C ∈ C (I). We remove the dependences of t for the sake of notational simplicity. Define ̃ 0 = G0 = AB. G We choose a continuous projector Π0 along the space N0 = ker G0 . In addition, suppose that R is a continuous projector onto im B and along ker A, B− is defined with the following relations: BB− B = B,
B− BB− = B− ,
BB− = R,
B − B = Π0
and set M = I − Π0 ,
C0 = C̃0 = C, ̃1 = G ̃ 0 + C̃0 M0 . G1 = G0 + C0 M0 , G For i ≥ 1, this construction can be iterated as follows: (A5) Gi has a constant rank ri . (A6) Ni = ker Gi verifies (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ) ∩ Ni = {0}. If these both conditions hold, we choose a continuous projector
(3.38)
150 � 3 P-projectors. Matrix chains (A7) Πi along N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni with im Πi ⊆ im Πi−1 . Not that the condition (A7) can be met since we have N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Πi = ℝm and then there exists a space transversal to N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni
within
im Πi−1 .
Set Mi = Πi−1 − Πi and assume (A8) BΠi B− is in C 1 (I). Define Δ
Ci Πi = (Ci−1 + Ciσ Miσ + Giσ B−σ (BΠi B− ) B)Πi−1 ,
σ ̃ i B− (BΠi B− )Δ Bσ )Πσ , C̃iσ Πσi = (C̃i−1 + C̃i Mi + G i−1
Gi = Gi−1 + Ci Qi ,
̃i = G ̃ i−1 + C̃i−1 Qi−1 . G
We have Gj+1 = Gj + Cj Mj . Then σ Gj+1 = Gjσ + Cjσ Mjσ ,
whereupon Cj Mj = Gj+1 − Gj ,
σ Cjσ Mjσ = Gj+1 − Gjσ .
Hence, Δ
σ Ci Πi = (Ci−1 + Gi+1 − Giσ + Giσ B−σ (BΠi B− ) B)Πi−1 ,
σ ̃ i B− (BΠi B− )Δ Bσ )Πσ . C̃iσ Πσi = (C̃i−1 + Gi+1 − Gi + G i−1
Remark 3.2. The requirement (A7) is satisfied if Πi is chosen as the orthogonal projector along N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni and if this is the case, we have im Πi = (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni )⊥ ⊆ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 )⊥ = im Πi−1 .
3.4 Alternative chain constructions
Proposition 3.2. Suppose that (A5)–(A7) hold. Then: 1. Πi−1 Πi = Πi , 2. Πi Πi−1 = Πi . Proof. Let x ∈ ℝm be arbitrarily chosen. By (A7), we have im Πi ⊆ im Πi−1 1.
and
ker Πi−1 ⊆ ker Πi .
Suppose that x ∈ im Πi . Then x ∈ im Πi−1 and Πi x = x,
Πi−1 x = x,
Πi−1 Πi x = Πi−1 x = x = Πi x.
If x ∈ ker Πi , then Πi x = 0 and Πi−1 Πi x = Πi−1 0 = 0 = Πi x. Because x ∈ ℝm was arbitrarily chosen, we conclude that Πi−1 Πi = Πi . 2.
Assume that x ∈ im Πi−1 . Then Πi−1 x = x and Πi Πi−1 x = Πi x. If x ∈ ker Πi−1 , then x ∈ ker Πi . Hence, Πi−1 x = 0,
Πi x = 0
and Πi Πi−1 x = Πi 0 = 0 = Πi x. Since x ∈ ℝm was arbitrarily chosen, we arrive at the equality Πi Πi−1 = Πi . This completes the proof.
� 151
152 � 3 P-projectors. Matrix chains The above proposition can be generalized as follows. Proposition 3.3. Suppose that (A5)–(A7) hold. Then: 1. Πj Πi = Πi , 0 ≤ j < i, 2. Πi Πj = Πi , 0 ≤ j < i. Proof. Let x ∈ ℝm be arbitrarily chosen. By (A7), we have im Πi ⊆ im Πi−1 ⊆ im Πi−2 ⊆ ⋅ ⋅ ⋅ ⊆ im Πj . 1.
Suppose that x ∈ im Πi . Then x ∈ im Πj and Πi x = x,
Πj x = x,
Πj Πi x = Πj x = x = Πi x.
If x ∈ ker Πi , then Πi x = 0 and Πj Πi x = Πj 0 = 0 = Πi x. Because x ∈ ℝm was arbitrarily chosen, we conclude that Πj Πi = Πi . 2.
Assume that x ∈ im Πj . Then Πj x = x and Πi Πj x = Πi x. If x ∈ ker Πj , then x ∈ ker Πi . Hence, Πj x = 0,
Πi x = 0
and Πi Πj x = Πi 0 = 0 = Πi x. Since x ∈ ℝm was arbitrarily chosen, we arrive at the equality
3.5 Equivalence of the P- and Π-chains
Πi Πj = Πi . This completes the proof. Proposition 3.4. Suppose that (A5)–(A7) hold. Then Mi is a projector. Proof. Using Proposition 3.4, we get Mi Mi = (Πi−1 − Πi )(Πi−1 − Πi ) = Πi−1 Πi−1 − Πi−1 Πi − Πi Πi−1 + Πi Πi = Πi−1 − Πi − Πi + Πi = Πi−1 − Πi = Mi .
This completes the proof. Definition 3.10. If the matrix pair (A, B) is properly stated and (3.38) holds, then the continuous projector Π0 is said to be admissible. Definition 3.11. A projector sequence {Π0 , . . . , Πk }, k ≥ 1, is said to be preadmissible up to level k if (A5)–(A7) hold for 0 ≤ i ≤ k and (A8) holds for 0 ≤< k. Definition 3.12. A projector sequence {Π0 , . . . , Πk }, k ≥ 1, is said to be admissible up to level k if (A5)–(A8) hold for 0 ≤ i ≤ k.
3.5 Equivalence of the P- and Π-chains We will start this section with the following useful lemma. Lemma 3.16. Let {P0 , . . . , Pk } be an admissible up to level k P-projector sequence and Πi = P0 . . . Pi . Then Πi is a projector and im Πi ⊆ im Πi−1 . Proof. First, we will prove that Πi is a projector. Applying (3.37), we get Πi Πi = P0 . . . Pi P0 . . . Pi = P0 . . . Pi P0 P1 . . . Pi = (P0 . . . Pi P0 )P1 . . . Pi = (P0 . . . Pi )P1 . . . Pi = P0 . . . Pi P1 P2 . . . Pi = (P0 . . . Pi P1 )P2 . . . Pi = (P0 . . . Pi )P2 . . . Pi = P0 . . . Pi P2 . . . Pi = ⋅ ⋅ ⋅ = P0 . . . Pi Pi = P0 . . . Pi = Πi ,
i. e., Πi is a projector. Now, by Proposition 3.1, we have ker P0 . . . Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni or
154 � 3 P-projectors. Matrix chains ker Πi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni . Hence, im Πi = (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni )⊥ ⊆ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 )⊥ = im Πi−1 . This completes the proof. Corollary 3.1. Suppose that all conditions of Lemma 3.16 hold. Then Πσi is a projector and im Πσi ⊆ im Πσi−1 . Lemma 3.17. Let {P0 , . . . , Pk } be an admissible up to level k P-projector sequence and Πi = P0 . . . Pi and Mi = Πi−1 − Πi . Then Mi = Πi−1 Qi . Proof. We have Mi = Πi−1 − Πi = P0 . . . Pi−1 − P0 . . . Pi = P0 . . . Pi−1 − P0 . . . Pi−1 Pi = P0 . . . Pi−1 (I − Pi ) = Πi−1 Qi .
This completes the proof. Lemma 3.18. Let {P0 , . . . , Pk } be an admissible up to level k P-projector sequence and Πi = P0 . . . Pi and Δ
Ci Pi = Ci−1 Pi−1 + Ciσ Qiσ + Giσ B−σ (BP0 . . . Pi B− ) BP0 . . . Pi−1 . Then i
Δ
σ Ci Πi = (C0 + Gi+1 − G1σ + ∑ Gjσ B−σ (BΠj B− ) B)Πi−1 . j=1
Proof. We have Δ
Ci Pi = Ci−1 Pi−1 + Giσ B−σ (BP0 . . . Pi B− ) BP0 . . . Pi−1 Δ
σ = (Ci−2 Pi−2 + Gi−1 B−σ (BP0 . . . Pi−1 B− ) BP0 . . . Pi−2 )Pi−1 Δ
+ Giσ B−σ (BP0 . . . Pi B− ) BP0 . . . Pi−1
Δ
σ = Ci−2 Pi−2 Pi−1 + Gi−1 B−σ (BP0 . . . Pi−1 B− ) BP0 . . . Pi−2 Pi−1
+ Giσ B−σ (BP0 . . . Pi )Δ BP0 . . . Pi−1 1
Δ
σ −σ = Ci−2 Pi−2 Pi−1 + ∑ Gi−j B (BP0 . . . Pi−j B− ) BP0 . . . Pi−1 j=0
i−2
Δ
σ −σ = ⋅ ⋅ ⋅ = C1 P1 . . . Pi−1 + ∑ Gi−j B (BP0 . . . Pi−j B− ) BP0 . . . Pi−1 j=0
= (C0 P0 + i−2
Δ G0σ B−σ (BP0 P1 B− ) BP0 )P1 . . . Pi−1 Δ
σ −σ + ∑ Gi−j B (BP0 . . . Pi−j B− ) BP0 . . . Pi−1 j=0
Δ
= C0 P0 P1 . . . Pi−1 + G0σ B−σ (BP0 P1 B− ) BP0 P1 . . . Pi−1 i−2
Δ
σ −σ + ∑ Gi−j B (BP0 . . . Pi−j B− ) BP0 . . . Pi−1 j=0
i−1
Δ
σ −σ = C0 P0 P1 . . . Pi−1 + ∑ Gi−j B (BP0 . . . Pi−j B− ) BP0 . . . Pi−1 j=0
i−1
Δ
i
Δ
= C0 Πi−1 + ∑ Gjσ B−σ (BΠj B− ) BΠi−1 = (C0 + ∑ Gjσ B−σ (BΠj B− ) B)Πi−1 . j=1
j=1
This completes the proof. Lemma 3.19. Let {P0 , . . . , Pk } be an admissible up to level k P-projector sequence and Πi = P0 . . . Pi and σ σ ̃ i B− (BP0 . . . Pi B− )Δ Bσ Pσ . . . Pσ . C̃iσ = C̃i−1 Pi−1 +G 0 i−1
Then i
Δ C̃iσ = (C̃0σ + ∑ Gj B− (BΠj B− ) Bσ )Πσi−1 . j=1
156 � 3 P-projectors. Matrix chains Proof. We have σ σ ̃ i B− (BP0 . . . Pi B− )Δ Bσ Pσ . . . Pσ C̃iσ = C̃i−1 Pi−1 +G 0 i−1 Δ
σ σ ̃ i−1 B− (BP0 . . . Pi−1 B− ) Bσ Pσ . . . Pσ )Pσ = (C̃i−2 Pi−2 +G 0 i−2 i−1
̃ i B− (BP0 . . . Pi B− )Δ Bσ Pσ . . . Pσ +G 0 i−1
σ σ σ ̃ i−1 B− (BP0 . . . Pi−1 B− )Δ Bσ Pσ . . . Pσ Pσ = C̃i−2 Pi−2 Pi−1 +G 0 i−2 i−1
̃ i B− (BP0 . . . Pi )Δ Bσ Pσ . . . Pσ +G 0 i−1 1
Δ
σ σ σ ̃ i−j B− (BP0 . . . Pi−j B− ) Bσ Pσ . . . Pσ = C̃i−2 Pi−2 Pi−1 + ∑G 0 i−1 j=0
.. .
i−2
Δ
σ ̃ i−j B− (BP0 . . . Pi−j B− ) Bσ Pσ . . . Pσ = C̃1σ P1σ . . . Pi−1 + ∑G 0 i−1 j=0
=
(C̃0σ P0σ i−2
Δ
̃ 0 B (BP0 P1 B− ) Bσ Pσ )Pσ . . . Pσ +G 0 1 i−1 −
Δ
̃ i−j B− (BP0 . . . Pi−j B− ) Bσ Pσ . . . Pσ + ∑G 0 i−1 j=0
=
C̃0σ P0σ P1σ i−2
Δ
σ ̃ 0 B− (BP0 P1 B− ) Bσ Pσ Pσ . . . Pσ . . . Pi−1 +G 0 1 i−1 Δ
̃ i−j B− (BP0 . . . Pi−j B− ) Bσ Pσ . . . Pσ + ∑G 0 i−1 j=0
i−1
σ ̃ i−j B− (BP0 . . . Pi−j B− )Δ Bσ Pσ . . . Pσ = C̃0σ P0σ P1σ . . . Pi−1 + ∑G 0 i−1 j=0
i−1
Δ
̃ j B− (BΠj B− ) Bσ Πσ = C̃0σ Πσi−1 + ∑ G i−1 j=1
i
̃ j B− (BΠj B− )Δ Bσ )Πσ . = (C̃0σ + ∑ G i−1 j=1
This completes the proof. Lemma 3.20. Let {Π0 , . . . Πk } be an admissible up to level k Π-projector sequence and Δ
Ci = (Ci−1 + Giσ B−σ (BΠi B− ) B)Πi−1 . Then i
Δ
Ci = (C0 + ∑ Gjσ B−σ (BΠj B− ) B)Πi−1 . j=1
3.5 Equivalence of the P- and Π-chains
� 157
Proof. Applying Proposition 3.2 and Proposition 3.3, we get Δ
Δ
Ci = (Ci−1 + Giσ B−σ (BΠi B− ) B)Πi−1 = Ci−1 Πi−1 + Giσ B−σ (BΠi B− ) BΠi−1 Δ
Δ
σ = ((Ci−2 + Gi−1 B−σ (BΠi−1 B− ) B)Πi−2 )Πi−1 + Giσ B−σ (BΠi B− ) BΠi−1 Δ
Δ
σ = Ci−2 Πi−2 Πi−1 + Gi−1 B−σ (BΠi−1 B− ) BΠi−2 Πi−1 + Ci−1 Πi−1 + Giσ B−σ (BΠi B− ) BΠi−1 Δ
Δ
σ = Ci−2 Πi−1 + Gi−1 B−σ (BΠi−1 B− ) BΠi−1 + Giσ B−σ (BΠi B− ) BΠi−1 1
Δ
σ −σ = Ci−2 Πi−1 + ∑ Gi−j B (BΠi−j B− ) BΠi−1 j=0
i−2
Δ
σ −σ = ⋅ ⋅ ⋅ = C1 Πi−1 + ∑ Gi−j B (BΠi−j B− ) BΠi−1 j=0
i−2
Δ
Δ
σ −σ = ((C0 + G1σ B−σ (BΠ−1 ) B)Π0 )Πi−1 + ∑ Gi−j B (BΠi−j B− ) BΠi−1 j=0
i−2
Δ
Δ
σ −σ = C0 Π0 Πi−1 + G1σ B−σ (BΠ1 B− ) BΠ0 Πi−1 + ∑ Gi−j B (BΠi−j B− ) BΠi−1 j=0
i−2
Δ
Δ
σ −σ = C0 Πi−1 + G1σ B−σ (BΠ1 B− ) BΠi−1 + ∑ Gi−j B (BΠi−j B− ) BΠi−1 j=0
i−1
Δ
σ −σ = C0 Πi−1 + ∑ Gi−j B (BΠi−j B− ) BΠi−1 j=0
i−1
Δ
σ −σ = (C0 + ∑ Gi−j B (BΠi−j B− ) B)Πi−1 . j=0
This completes the proof.
Lemma 3.21. Let {Π0 , . . . , Πk } be an admissible up to level k Π-projector sequence and
C̃iσ = (C̃σi−1 + G̃i B− (BΠi B− )Δ Bσ )Πσi−1 .
Then
C̃iσ = (C̃0σ + ∑_{j=1}^{i} G̃j B− (BΠj B− )Δ Bσ )Πσi−1 .
Proof. By Proposition 3.2 and Proposition 3.3, we obtain σ ̃ i B− (BΠi B− )Δ Bσ )Πσ C̃iσ = (C̃i−1 +G i−1
σ ̃ i B− (BΠi B− )Δ Bσ Πσ = C̃i−1 Πσi−1 + G i−1 Δ
Δ
σ ̃ i−1 B− (BΠi−1 B− ) Bσ )Πσ )Πσ + G ̃ i B− (BΠi B− ) Bσ Πσ = ((C̃i−2 +G i−2 i−1 i−1
158 � 3 P-projectors. Matrix chains σ ̃ i−1 B− (BΠi−1 B− )Δ Bσ Πσ Πσ + C̃ σ Πσ + G ̃ i B− (BΠi B− )Δ Bσ Πσ = C̃i−2 Πσi−2 Πσi−1 + G i−2 i−1 i−1 i−1 i−1 Δ
Δ
σ ̃ i−1 B− (BΠi−1 B− ) Bσ Πσ + G ̃ i B− (BΠi B− ) Bσ Πσ = C̃i−2 Πσi−1 + G i−1 i−1 1
Δ
σ ̃ i−j B− (BΠi−j B− ) Bσ Πσ = C̃i−2 Πσi−1 + ∑ G i−1 j=0
i−2
̃ i−j B− (BΠi−j B− )Δ Bσ Πσ = ⋅ ⋅ ⋅ = C̃1σ Πσi−1 + ∑ G i−1 j=0
i−2
Δ
Δ
̃ 1 B− (BΠ− ) Bσ )Πσ )Πσ + ∑ G ̃ i−j B− (BΠi−j B− ) Bσ Πσ = ((C̃0σ + G B 0 i−1 i−1 j=0
i−2
̃ 1 B− (BΠ1 B− )Δ Bσ Πσ Πσ + ∑ G ̃ i−j B− (BΠi−j B− )Δ Bσ Πσ = C̃0σ Πσ0 Πσi−1 − G 0 i−1 i−1 j=0
i−2
Δ
Δ
̃ 1 B− (BΠ1 B− ) Bσ Πσ + ∑ G ̃ i−j B− (BΠi−j B− ) Bσ Πσ = C̃0σ Πσi−1 − G i−1 i−1 j=0
i−1
̃ i−j B− (BΠi−j B− )Δ Bσ Πσ = C̃0σ Πσi−1 + ∑ G i−1 j=0
i−1
Δ
̃ i−j B− (BΠi−j B− ) Bσ )Πσ . = (C̃0σ + ∑ G i−1 j=0
This completes the proof.
Lemma 3.22. Let {P0 , . . . , Pk } be an admissible up to level k P-projector sequence, Πi = P0 . . . Pi and
Ci = Ci−1 Pi−1 + Giσ B−σ (BP0 . . . Pi B− )Δ BP0 . . . Pi−1 .
Then Gi = Gi−1 + Ci−1 Mi−1 .
Proof. By Lemma 3.18, we have
Ci = (C0 + ∑_{j=1}^{i} Gjσ B−σ (BΠj B− )Δ B)Πi−1 .
Hence, using that Πi−1 is a projector, we get
Ci Πi−1 = (C0 + ∑_{j=1}^{i} Gjσ B−σ (BΠj B− )Δ B)Πi−1 Πi−1 = (C0 + ∑_{j=1}^{i} Gjσ B−σ (BΠj B− )Δ B)Πi−1 = Ci
and Ci Πi−1 Qi = Ci Qi . By Lemma 3.17, we have Mi = Πi−1 Qi . Therefore, Gi = Gi−1 + Ci−1 Qi−1 = Gi−1 + Ci−1 Πi−2 Qi−1 = Gi−1 + Ci−1 Mi−1 . This completes the proof. Lemma 3.23. Let {P0 , . . . , Pk } be an admissible up to level k P-projector sequence and Πi = P0 . . . Pi and σ σ ̃ i B− (BP0 . . . Pi B− )Δ Bσ Pσ . . . Pσ . C̃iσ = C̃i−1 Pi−1 +G 0 i−1
Then
G̃iσ = G̃σi−1 + C̃σi−1 Mσi−1 .
Proof. By Lemma 3.19, we arrive at the equation
C̃iσ = (C̃0σ + ∑_{j=1}^{i} G̃j B− (BΠj B− )Δ Bσ )Πσi−1 .
Note that Πσi−1 is a projector. Then
C̃iσ Πσi−1 = (C̃0σ + ∑_{j=1}^{i} G̃j B− (BΠj B− )Δ Bσ )Πσi−1 Πσi−1 = (C̃0σ + ∑_{j=1}^{i} G̃j B− (BΠj B− )Δ Bσ )Πσi−1 = C̃iσ
and C̃iσ Πσi−1 Qiσ = C̃iσ Qiσ . By Lemma 3.17, we have
Miσ = Πσi−1 Qiσ . Therefore,
G̃iσ = G̃σi−1 + C̃σi−1 Qσi−1 = G̃σi−1 + C̃σi−1 Πσi−2 Qσi−1 = G̃σi−1 + C̃σi−1 Mσi−1 .
This completes the proof.
By Lemma 3.16–Lemma 3.23, we get the following fundamental result.
Theorem 3.4. An admissible up to level k P-projector sequence {P0 , . . . , Pk } defines an admissible up to level k Π-projector sequence by setting Πi = P0 . . . Pi .
Lemma 3.24. Let {Π0 , . . . , Πk } be an admissible up to level k projector sequence. Let also Pi be the projector defined by ker Pi = Ni ,
im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Πi .
Then Πi−1 Pi is a projector. Proof. By (A7), we have that Πi−1 projects along N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 . Then I − Πi−1 projects onto N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊆ im Pi . Therefore, Pi (I − Πi−1 ) = I − Πi−1 , whereupon Pi − Pi Πi−1 = I − Πi−1 or Pi Πi−1 = Pi + Πi−1 − I. Hence, Πi−1 Pi Πi−1 Pi = Πi−1 (Pi + Πi−1 − I)Pi = Πi−1 (Pi Pi + Πi−1 Pi − Pi ) = Πi−1 (Pi + Πi−1 Pi − Pi ) = Πi−1 Πi−1 Pi = Πi−1 Pi .
This completes the proof.
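Throughout this section, projectors such as Pi and Πi are specified by prescribing an image and a kernel, as in Lemma 3.24. The following sketch is an added illustration (not part of the original text): it shows one way to assemble such a projector numerically and confirms the idempotency used in the proof above. The matrices U and W spanning the image and the kernel are arbitrary placeholders.

```python
import numpy as np

def projector(U, W):
    """Return the projector onto span(U) along span(W).

    U and W have columns spanning complementary subspaces, so [U W] is invertible.
    """
    M = np.hstack([U, W])                      # basis adapted to the splitting
    E = np.zeros((M.shape[1], M.shape[1]))
    E[:U.shape[1], :U.shape[1]] = np.eye(U.shape[1])
    return M @ E @ np.linalg.inv(M)            # identity on span(U), zero on span(W)

# Toy data: image spanned by the columns of U, kernel spanned by W.
U = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
W = np.array([[0.0], [0.0], [1.0]])
P = projector(U, W)

assert np.allclose(P @ P, P)                   # P is idempotent, i.e. a projector
assert np.allclose(P @ U, U)                   # P acts as the identity on its image
assert np.allclose(P @ W, 0.0)                 # P annihilates its kernel
```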
Lemma 3.25. Let {Π0 , . . . , Πk } be an admissible up to level k projector sequence and Pi be defined as in Lemma 3.24. Let also, Qi = I − Pi . Then ker Πi−1 Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni . Proof. Let x ∈ ker Πi−1 Pi be arbitrarily chosen. We have x = Pi x + Qi x. Since Πi−1 Pi x = 0, we conclude that Pi x ∈ ker Πi−1 = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 . Then Qi x ∈ Ni and x = Pi x + Qi x ∈ N0 ⊕ ⋅ ⋅ ⋅ Ni−1 ⊕ Ni . Because x ∈ ker Πi−1 Pi was arbitrarily chosen and we get that it is an element of N0 ⊕ ⋅ ⋅ ⋅ Ni , we get ker Πi−1 Pi ⊆ N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni . Let y ∈ N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 be arbitrarily chosen. Then y ∈ ker Πi−1 and Πi−1 y = 0. Since im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Πi , we obtain N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊆ im Pi . Hence, y ∈ im Pi and Pi y = y. Therefore, Πi−1 Pi y = Πi−1 y = 0,
(3.39)
i. e., y ∈ ker Πi−1 Pi . Because y ∈ N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 was arbitrarily chosen and we get that it is an element of ker Πi−1 Pi , we arrive at the relation N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊆ ker Πi−1 Pi .
(3.40)
Next, using that ker Pi ⊆ ker Πi−1 Pi , we obtain that Ni ⊆ ker Πi−1 Pi . Thus, applying (3.40), we obtain N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni ⊆ ker Πi−1 Pi . By the last relation and (3.39), we get the desired result. This completes the proof. Lemma 3.26. Let {Π0 , . . . , Πk } be an admissible up to level k projector sequence and Pi be defined as in Lemma 3.24. Then Πi−1 Pi = Πi . Proof. Let x ∈ ℝm be arbitrarily chosen. If x ∈ im Πi , then Πi x = x. Since im Πi ⊆ im Πi−1 , we have x ∈ im Πi−1 ⊆ im Pi and Pi x = x,
Πi−1 x = x,
and Πi−1 Pi x = Πi−1 x = x = Πi x. Let x ∈ ker Πi . Then
Πi x = 0. Now, using that ker Πi−1 Pi = ker Πi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni , we have that x ∈ ker Πi−1 Pi and Πi−1 Pi x = 0 = Πi x. Because x ∈ ℝm was arbitrarily chosen, we conclude that Πi−1 Pi = Πi . This completes the proof. Lemma 3.27. Let {Π0 , . . . , Πk } be an admissible up to level k projector sequence and Pi be defined as in Lemma 3.24. Then Πi = P0 . . . Pi ,
Mi = P0 . . . Pi−1 Qi .
Proof. Applying Lemma 3.26, we get Πi = Πi−1 Pi = Πi−2 Pi−1 Pi = ⋅ ⋅ ⋅ = Π0 P1 . . . Pi = P0 P1 . . . Pi . Hence, Mi = Πi−1 − Πi = P0 . . . Pi−1 − P0 . . . Pi = P0 . . . Pi−1 − P0 . . . Pi−1 Pi = P0 . . . Pi−1 (I − Pi ) = P0 . . . Pi−1 Qi .
Lemma 3.28. Let {Π0 , . . . , Πk } be an admissible up to level k projector sequence and Pi be defined as in Lemma 3.24 and Δ σ ̃ i B− (BΠi B− )Δ Bσ )Πσ , Ci = (Ci−1 + Giσ B−σ (BΠi B− ) B)Πi−1 , C̃iσ = (C̃i−1 +G i−1 ̃ ̃ ̃ Gi = Gi−1 + Ci Mi , Gi = Gi−1 + Ci−1 Mi .
Then Ci Πi = Ci Pi ,
C̃i Πσi = C̃i Piσ ,
Ci Πi−1 = Ci ,
C̃i Πσi−1 = C̃i
and Δ
Ci = Ci−1 Pi−1 + Giσ B−σ (BP0 . . . Pi B− ) BP0 . . . Pi−1 ,
σ σ ̃ i B− (BP0 . . . Pi B− )Δ Bσ Pσ . . . Pσ , C̃iσ = C̃i−1 Pi−1 +G 0 i−1 ̃σ = G ̃ σ + C̃ σ Qσ . G i i−1 i−1 i−1
Gi = Gi−1 + Ci−1 Qi−1 ,
164 � 3 P-projectors. Matrix chains Proof. First, note that Δ
Δ
Ci Πi−1 = (Ci−1 + Giσ B−σ (BΠi B− ) B)Πi−1 Πi−1 = (Ci−1 + Giσ B−σ (BΠi B− ) B)Πi−1 = Ci and σ ̃ i B− (BΠi B− )Δ Bσ )Πσ Πσ = (C̃ σ − G ̃ i B− (BΠi B− )Δ Bσ )Πσ = C̃ σ . C̃iσ Πσi−1 = (C̃i−1 −G i−1 i−1 i−1 i−1 i
Hence, Ci Πi = Ci Πi−1 Pi = Ci Pi and C̃iσ Πσi = C̃iσ Πσi−1 Piσ = C̃iσ Piσ . Therefore, Δ
Δ
Ci = (Ci−1 + Giσ B−σ (BΠi B− ) BΠi−1 = Ci−1 Πi−1 + Giσ B−σ (BΠi B− ) BΠi−1 Δ
Δ
= Ci−1 Πi−2 Pi−1 + Giσ B−σ (BΠi B− ) BΠi−1 = Ci−1 Pi−1 + Giσ B−σ (BP0 . . . Pi B− ) BP0 . . . Pi−1
and σ ̃ i B− (BΠi B− )Δ Bσ Πσ = C̃ σ Πσ − G ̃ i B− (BΠi B− )Δ Bσ Πσ C̃iσ = (C̃i−1 −G i−1 i−1 i−1 i−1 Δ
Δ
σ σ ̃ i B− (BΠi B− ) Bσ Πσ = C̃ σ Pσ − G ̃ i B− (BP0 . . . Pi B− ) Bσ Pσ . . . Pσ . = C̃i−1 Πσi−2 Pi−1 −G i−1 i−1 i−1 0 i−1
Moreover, Gi = Gi−1 + Ci−1 Mi−1 = Gi−1 + Ci−1 Πi−2 Qi−1 = Gi−1 + Ci−1 Qi−1 and ̃σ = G ̃ σ + C̃ σ M σ = G ̃ σ + C̃ σ Πσ Qσ = G ̃ σ + C̃ σ Qσ . G i i−1 i−1 i−1 i−1 i−1 i−2 i−1 i−1 i−1 i−1 This completes the proof. By Lemma 3.24–Lemma 3.28, we obtain the following result. Theorem 3.5. An admissible up to level k Π-projector sequence {Π0 , . . . , Πk } defines an admissible up to level k P-projector sequence {P0 , . . . , Pk } when Pi be the projector defined by ker Pi = Ni ,
im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Πi .
Corollary 3.2. Let the matrix pair (A, B) be a properly stated matrix pair on I. Then (A, B) is regular with tractability index ν ≥ 1 if and only if there exists an admissible up to level ν − 1 projector sequence {Π0 , . . . , Πν−1 } for which Gi , 0 ≤ i < ν, are singular and Gν is nonsingular.
Remark 3.3. Let the matrix pair (A, B) be regular with tractability index ν ≥ 1. Then Qi and Pi can be computed explicitly from the Π-projector sequence in the following manner:
Q0 = M0 ,
P0 = I − Q0 .
Next,
Gν = Gν−1 + Cν−1 Mν−1 = Gν−2 + Cν−2 Mν−2 + Cν−1 Mν−1 = ⋅ ⋅ ⋅ = G0 + ∑_{j=0}^{ν−1} Cj Mj = Gi + ∑_{j=i}^{ν−1} Cj Mj .
Now, using that
Gi Qi = 0,   Mk Qi = P0 . . . Pk−1 Qk Qi = 0,   k > i,
and
Mi Qi = P0 . . . Pi−1 Qi Qi = P0 . . . Pi−1 Qi = Mi ,
we arrive at
Gν Qi = (Gi + ∑_{j=i}^{ν−1} Cj Mj )Qi = Gi Qi + ∑_{j=i}^{ν−1} Cj Mj Qi = Mi Qi = Mi ,
whereupon
Qi = Gν−1 Mi .
Example 3.2. Let {Π0 , . . . , Πν−1 } be an admissible up to level ν ≥ 1 projector sequence. Let also Pi be the projector defined by
ker Pi = Ni ,
im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Πi
and Qi = I − Pi . We will prove that
I − Πi−1 = Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πi−2 Qi−1 ,   i ≥ 1.   (3.41)
We will use the principle of mathematical induction.
1. For i = 1, we have I − Π0 = I − P0 = Q0 , i. e., the assertion holds.
2. Assume that (3.41) holds for some i ∈ {1, . . . , ν − 2}.
3. We will prove that I − Πi = Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πi−1 Qi . Indeed, using (3.41), we have I − Πi = I − Πi−1 + Πi−1 − Πi = Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πi−2 Qi−1 + Mi = Q0 + Π0 Q1 + ⋅ ⋅ ⋅ + Πi−2 Qi−1 + Πi−1 Qi .
Hence, by the principle of mathematical induction, we conclude that (3.41) holds for any i ∈ {1, . . . , ν − 1}.
Example 3.3. Let {Π0 , . . . , Πν−1 }, {P0 , . . . , Pν−1 } and {Q0 , . . . , Qν−1 } be as in Example 3.2. We will prove that Qi−1 (I − Πi−1 ) = Qi−1 .
(3.42)
By (3.41), using that Qj Qi = 0,
j > i,
we get Qi−1 (I − Πi−1 ) = Qi−1 (Q0 + Π0 Q1 + Π1 Q2 + Π2 Q3 + ⋅ ⋅ ⋅ + Πi−2 Qi−1 )
= Qi−1 Q0 + Qi−1 P0 Q1 + Qi−1 P0 P1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1 = Qi−1 P0 Q1 + Qi−1 P0 P1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1
= Qi−1 (I − Q0 )Q1 + Qi−1 P0 P1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1
= (Qi−1 − Qi−1 Q0 )Q1 + Qi−1 P0 P1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1 = Qi−1 Q1 + Qi−1 P0 P1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1 = Qi−1 P0 P1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1
= Qi−1 (I − Q0 )P1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1
= (Qi−1 − Qi−1 Q0 )P1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1 = Qi−1 P1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1
= Qi−1 (I − Q1 )Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1
= (Qi−1 − Qi−1 Q1 )Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1
= Qi−1 Q2 + Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1 = Qi−1 P0 P1 P2 Q3 + ⋅ ⋅ ⋅ + Qi−1 P0 . . . Pi−2 Qi−1
= ⋅ ⋅ ⋅ = Qi−1 P0 P1 . . . Pi−2 Qi−1 = Qi−1 (I − Q0 )P1 . . . Pi−2 Qi−1
= (Qi−1 − Qi−1 Q0 )P2 . . . Pi−2 Qi−1 = Qi−1 P2 . . . Pi−2 Qi−1 = ⋅ ⋅ ⋅ = Qi−1 Pi−2 Qi−1 = Qi−1 (I − Qi−2 )Qi−1 = (Qi−1 − Qi−1 Qi−2 )Qi−1 = Qi−1 Qi−1 = Qi−1 , i. e., (3.42) holds. Example 3.4. Let {Π0 , . . . , Πν−1 }, {P − 0, . . . , Pν−1 } and {Q0 , . . . , Qν−1 } be as in Example 3.2. We will prove that Πi Pl = Πi ,
l ≤ i.
(3.43)
If l = i, then Πi Pi = P0 . . . Pi Pi = P0 . . . Pi = Πi . Suppose that l < i. Then Πi Pl = P0 . . . Pi−1 Pi Pl = P0 . . . Pi−1 (I − Qi )(I − Ql ) = P0 . . . Pi−1 (I − Qi − Ql + Qi Ql ) = P0 . . . Pi−1 (I − Qi − Ql ) = P0 . . . Pi−1 (Pi − Ql )
= P0 . . . Pi−1 Pi − P0 . . . Pi−2 Pi−1 Ql = Πi − P0 . . . Pi−2 (I − Qi−1 )Ql
= Πi − P0 . . . Pi−2 (Ql − Qi−1 Ql ) = Πi − P0 . . . Pi−2 Ql = ⋅ ⋅ ⋅ = Πi − P0 . . . Pl Ql = Πi , i. e., (3.43) holds. Example 3.5. Let {Π0 , . . . , Πν−1 }, {P0 , . . . , Pν−1 } and {Q0 , . . . , Qν−1 } be as in Example 3.2. We will prove that Pj . . . Pν−1 Pi = { 1.
Pj . . . Pν−1
Pj . . . Pν−1 − Qi
if i ≥ j, if i < j.
(3.44)
Let i ≥ j. Then Pj . . . Pν−1 Pi = Pj . . . Pν−2 Pν−1 Pi = Pj . . . Pν−2 (I − Qν−1 )(I − Qi )
= Pj . . . Pν−2 (I − Qν−1 − Qi + Qν−1 Qi ) = Pj . . . Pν−2 (Pν−1 − Qi ) = Pj . . . Pν−1 − Pj . . . Pν−2 Qi = Pj . . . Pν−1 − Pj . . . Pν−3 Pν−2 Qi = Pj . . . Pν−1 − Pj . . . Pν−3 (I − Qν−2 )Qi
= Pj . . . Pν−1 − Pj . . . Pν−3 (Qi − Qν−2 Qi ) = Pj . . . Pν−1 − Pj . . . Pν−3 Qi = ⋅ ⋅ ⋅ = Pj . . . Pν−1 − Pj . . . Pi Qi = Pj . . . Pν−1 .
2.
Let i < j. Then, using above computations, we obtain Pj . . . Pν−1 Pi = Pj . . . Pν−1 − Pj Qi = Pj . . . Pν−1 − (I − Qj )Qi = Pj . . . Pν−1 − (Qi − Qj Qi ) = Pj . . . Pν−1 − Qi .
Example 3.6. Let {Π0 , . . . , Πν−1 }, {P0 , . . . , Pν−1 } and {Q0 , . . . , Qν−1 } be as in Example 3.2. We will prove that Pj . . . Pν−1 Qi = { 1.
0
if
Qi
if
i ≥ j,
(3.45)
i < j.
Let i ≥ j. Applying (3.44), we get Pj . . . Pν−1 Qi = Pj . . . Pν−1 (I − Pi ) = Pj . . . Pν−1 − Pj . . . Pν−1 Pi = Pj . . . Pν−1 − Pj . . . Pν−1 = 0.
2.
Let i < j. Then Pj . . . Pν−1 Qi = Pj . . . Pν−1 (I − Pi ) = Pj . . . Pν−1 − Pj . . . Pν−1 Pi = Pj . . . Pν−1 − (Pj . . . Pν−1 − Qi )
= Pj . . . Pν−1 − Pj . . . Pν−1 + Qi = Qi . Example 3.7. Let {Π0 , . . . , Πν−1 }, {P0 , . . . , Pν−1 } and {Q0 , . . . , Qν−1 } be as in Example 3.2. We will prove that Qi Pj . . . Pν−1 = 0,
i ≥ j.
(3.46)
For i = j, the assertion is evident. Let i > j. Then Qi Pj . . . Pν−1 = Qi Pj Pj+1 . . . Pν−1 = Qi (I − Qj )Pj+1 . . . Pν−1 = (Qi − Qi Qj )Pj+1 . . . Pν−1 = Qi Pj+1 . . . Pν−1 = ⋅ ⋅ ⋅ = Qi Pi . . . Pν−1 = 0.
Example 3.8. Let {Π0 , . . . , Πν−1 }, {P0 , . . . , Pν−1 } and {Q0 , . . . , Qν−1 } be as in Example 3.2. We will prove that Pj . . . Pν−1 Πi ={ 1.
Pj . . . Pν−1 − Q0 P1 . . . Pi − Q1 P2 . . . Pi − ⋅ ⋅ ⋅ − Qj−1 Pj . . . Pi Pj . . . Pν−1 − Qi − Qi−1 Pi − Qi−2 Pi−1 Pi − ⋅ ⋅ ⋅ − Q0 P1 . . . Pi
if if
i ≥ j,
(3.47)
i < j.
Let j ≤ i. Then Pj . . . Pν−1 Πi = Pj . . . Pν−1 P0 P1 . . . Pi = (Pj . . . Pν−1 P0 )P1 . . . Pi = (Pj . . . Pν−1 − Q0 )P1 . . . Pi
3.5 Equivalence of the P- and Π-chains
� 169
= Pj . . . Pν−1 P1 . . . Pi − Q0 P1 . . . Pi = Pj . . . Pν−1 P1 P2 . . . Pi − Q0 P1 . . . Pi = (Pj . . . Pν−1 P1 )P2 . . . Pi − Q0 P1 . . . Pi
= (Pj . . . Pν−1 − Q1 )P2 . . . Pi − Q0 P1 . . . Pi
= Pj . . . Pν−1 P2 . . . Pi − Q1 P2 . . . Pi − Q0 P1 . . . Pi = ⋅ ⋅ ⋅
= Pj . . . Pν−1 Pj . . . Pi − Q0 P1 . . . Pi − Q1 P2 . . . Pi − Qj−1 Pj . . . Pi = Pj . . . Pν−1 − Q0 P1 . . . Pi − Q1 P2 . . . Pi − ⋅ ⋅ ⋅ − Qj−1 Pj . . . Pi . 2.
Let i < j. Then, using above computations, we find Pj . . . Pν−1 Πi = Pj . . . Pν−1 Pi − Q0 P1 . . . Pi − Q1 P2 . . . Pi − ⋅ ⋅ ⋅ − Qi−1 Pi
= Pj . . . Pν−1 − Qi − Qi−1 Pi − Qi−2 Pi−1 Pi − ⋅ ⋅ ⋅ − Q0 P1 . . . Pi .
Example 3.9. Let {Π0 , . . . , Πν−1 }, {P0 , . . . , Pν−1 } and {Q0 , . . . , Qν−1 } be as in Example 3.2. We will prove that Qi Pj . . . Pν−1 (I − Πl ) 0 if { { { { { {Qi (Pi+1 . . . Pν + Qi+1 Pi+2 . . . Pν + ⋅ ⋅ ⋅ + Qj−1 Pj . . . Pl ) if ={ { 0 if { { { { {Qi (Ql + Ql−1 Pl + ⋅ ⋅ ⋅ + Qi+1 Pi+2 . . . Pl + Pi+1 . . . Pl ) if
i ≥ j,
i < j ≤ l,
(3.48)
l < i < j, i < l < j.
We have the following. 1. Let i ≥ j. By (3.46), we have Qi Pj . . . Pν−1 = 0 and hence, Qi Pj . . . Pν−1 (I − Πl ) = 0. 2.
Let i < j. Then we have the following cases: a) Let l ≥ j. Then Pj . . . Pν−1 Πl = Pj . . . Pν−1 − Q0 P1 . . . Pl − Q1 P2 . . . Pl − ⋅ ⋅ ⋅ − Qj−1 Pj . . . Pl . Hence, Qi Pj . . . Pν−1
= Qi (Pj . . . Pν−1 − Pj . . . Pν−1 + Q0 P1 . . . Pl + Q1 P2 . . . Pl + ⋅ ⋅ ⋅ + Qj−1 Pj . . . Pl ) = Qi (Q0 P1 . . . Pl + Q1 P2 . . . Pl + ⋅ ⋅ ⋅ + Qj−1 Pj . . . Pl )
170 � 3 P-projectors. Matrix chains = Qi (Q0 P1 . . . Pl + Q1 P2 . . . Pl + ⋅ ⋅ ⋅ + Qi−1 Pi . . . Pl
+ Qi Pi+1 . . . Pl + Qi+1 Pi+2 . . . Pl + ⋅ ⋅ ⋅ + Qj−1 Pj . . . Pl )
= Qi Q0 P1 . . . Pl + Qi Q1 P2 . . . Pl + ⋅ ⋅ ⋅ + Qi Qi−1 Pi . . . Pl
+ Qi Qi Pi+1 . . . Pl + Qi Qi+1 Pi+2 . . . Pl + ⋅ ⋅ ⋅ + Qi Qj−1 Pj . . . Pl
= Qi Pi+1 . . . Pl + Qi Qi+1 Pi+2 . . . Pl + ⋅ ⋅ ⋅ + Qi Qj−1 Pj . . . Pl = Qi (Pi+1 . . . Pl + Qi+1 Pi+2 . . . Pl + ⋅ ⋅ ⋅ + Qj−1 Pj . . . Pl ). b) Let l < i < j. Then
Pj . . . Pν−1 Πl = Pj . . . Pν−1 − Ql Pl − Ql−2 Pl−1 Pl − ⋅ ⋅ ⋅ − Q0 P1 . . . Pl and Qi Pj . . . Pν−1 (I − Πl )
= Qi (Pj . . . Pν−1 − Pj . . . Pν−1 + Ql + Ql−1 Pl + Ql−2 Pl−1 Pl + ⋅ ⋅ ⋅ + Q0 P1 . . . Pl ) = Qi (Ql + Ql−1 Pl + Ql−1 Pl−1 Pl + ⋅ ⋅ ⋅ + Q0 P1 . . . Pl )
= Qi Ql + Qi Ql−1 Pl + Qi Ql−1 Pl−1 Pl + ⋅ ⋅ ⋅ + Qi Q0 P1 . . . Pl = 0. c)
Let i < l < j. Then Qi Pj . . . Pν−1 (I − Πl ) = Qi Ql + Qi Ql−1 Pl + ⋅ ⋅ ⋅ + Qi Qi Pi+1 . . . Pl + Qi Qi−1 Pi . . . Pl + ⋅ ⋅ ⋅ + Qi Q0 P1 . . . Pl
= Qi Ql + Qi Ql−1 Pl + ⋅ ⋅ ⋅ + Qi Pi+1 . . . Pl = Qi (Ql + Ql−1 Pl + ⋅ ⋅ ⋅ + Pi+1 . . . Pl ).
3.6 Some properties of the projectors Πi and Mi We will start this section with the following useful result. Theorem 3.6. Let T, S : I → Mm×m be projectors. 1. If im S ⊆ im T, then a) TS = S. b) ST is a projector onto im S along ker T ⊕ (ker S ∩ im T). 2. If ker S ⊆ ker T, then a) TS = T. b) ST is a projector onto im S ∩ (ker S ⊕ im T) along ker T.
Proof. 1. a) Let v ∈ ℝm be arbitrarily chosen. Then we have the following cases: i. If v ∈ im S, then v ∈ im T and Sv = Tv = v, and hence, TSv = Tv = v = Sv. ii.
If v ∈ im S, then v ∈ ker S and Sv = 0. Hence, TSv = T0 = 0 = Sv.
Because v ∈ ℝm was arbitrarily chosen, we conclude that TS = S. b) We have STST = SST = ST. Thus, ST is a projector. i. We will prove that im ST = im S.
(3.49)
be arbitrarily chosen. Then Sx = x
and
x ∈ im T,
whereupon Tx = x and STx = Sx = x, i. e., x ∈ im ST. Since x ∈ im S was arbitrarily chosen and we get that it is an element of im ST, we conclude that im S ⊆ im ST. Let y ∈ im ST be arbitrarily chosen. Then
(3.50)
172 � 3 P-projectors. Matrix chains STy = y.
(3.51)
If y ∈ ̸ im S, then y ∈ ker S and Sy = 0. Hence and (3.49), using that S is a projector, we obtain 0 = Sy = SSTy = STy, i. e., y ∈ ker ST. This is a contradiction and, therefore, y ∈ im S. Because y ∈ im ST was arbitrarily chosen and we get that it is an element of im S, we get the relation im ST ⊆ im S. ii.
By the last relation and (3.50), we get (3.49). Now, we will prove that ker ST = ker T ⊕ (ker S ∩ ker T).
(3.52)
Let v ∈ ker T ⊕ (ker S ∩ ker T) be arbitrarily chosen. Then v = v1 + v2 , where v1 ∈ ker T and v2 ∈ ker S ∩ im T. Hence, Tv1 = 0,
Sv2 = 0,
Tv2 = v2
and STv = ST(v1 + v2 ) = STv1 + STv2 = STv2 = Sv2 = 0, i. e., v ∈ ker ST. Since v ∈ ker T ⊕ (ker S ∩ ker T) was arbitrarily chosen and we get that it is an element of ker ST, we obtain the relation ker T ⊕ (ker S ∩ ker T) ⊆ ker ST. Let w ∈ ker ST be arbitrarily chosen. Then STw = 0.
(3.53)
Observe that w = (I − T)w + Tw. Then T(I − T)w = (T − TT)w = (T − T)w = 0, i. e., (I − T)w ∈ ker T. Next, STw = 0, i. e., Tw ∈ ker S. Moreover, TTw = Tw, i. e., Tw ∈ im T and Tw ∈ ker S ∩ im T. Consequently, w ∈ ker T ⊕ (ker S ∩ ker T), whereupon, using that w ∈ ker ST was arbitrarily chosen, we arrive at ker ST ⊆ ker T ⊕ (ker S ∩ ker T). 2.
By the last relation and the relation (3.53), we obtain (3.52). Now, we will prove the second part of our assertion. We have ker S ⊆ ker T. a) Let x ∈ ℝm be arbitrarily chosen. If x ∈ ker S, then x ∈ ker T and Sx = Tx = 0 and TSx = T0 = 0 = Tx. If x ∈ ̸ ker S, then x ∈ im S and Sx = x.
174 � 3 P-projectors. Matrix chains Hence, TSx = Tx. Because x ∈ ℝm was arbitrarily chosen, we obtain TS = T. b) First, note that STST = STT = ST, i. e., ST is a projector. i. We will prove that im ST = im S ∩ (ker S ⊕ im T). Let x ∈ im ST be arbitrarily chosen. Then STx = x. Let x ∈ im S. Then Sx = x. We represent x as follows: x = (I − T)x + Tx. We have Tx ∈ im T and S(I − T)x = (S − ST)x = Sx − STx = x − x = 0, i. e., (I − T)x ∈ ker S. Therefore, x ∈ ker S ⊕ im T. Hence, x ∈ im S ∩ (ker S ⊕ im T).
(3.54)
3.6 Some properties of the projectors Πi and Mi
� 175
Since x ∈ im ST was arbitrarily chosen and we get that it is an element of im S ∩ (ker S ⊕ im T), we obtain the relation im ST ⊆ im S ∩ (ker S ⊕ im T).
(3.55)
Let y ∈ im S ∩ (ker S ⊕ im T). Then y ∈ im S and y ∈ ker S ⊕ im T. By the last inclusion, we get the following representation y: y = y1 + y2 , where y1 ∈ ker S and y2 ∈ im T. Hence, y1 ∈ ker T and Ty2 = y2 , and y = Sy = S(y1 + y2 ) = Sy1 + Sy2 = Sy2 = STy2 , whereupon y ∈ im ST. Because y ∈ im S ∩ (ker S ⊕ im T) and we get that it is an element of y ∈ im ST, we find the relation im S ∩ (ker S ⊕ im T) ⊆ im ST. ii.
By the last inclusion and (3.55), we find (3.54). We will prove that ker ST = ker T. Let x ∈ ker ST be arbitrarily chosen. Then STx = 0 and then Tx = 0,
(3.56)
176 � 3 P-projectors. Matrix chains i. e., x ∈ ker T. Since x ∈ ker ST was arbitrarily chosen and we get that it is an element of ker T, we get ker ST ⊆ ker T.
(3.57)
Let y ∈ ker T be arbitrarily chosen. Then Ty = 0. Hence, STy = 0, i. e., y ∈ ker ST. Since y ∈ ker T was arbitrarily chosen and we get that it is an element of ker ST, we find ker T ⊆ ker ST. By the last relation and (3.57), we obtain (3.56). This completes the proof. Theorem 3.7. Let {Π0 , . . . , Πν−1 } be an admissible up to level ν ≥ 1 projector sequence. Let also, Pi be the projector defined by ker Pi = Ni ,
im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Pi ,
Mi = Πi−1 − Πi and Qi = I − Pi . Then 0 Mi Mj = { Mi
if if
i ≠ j, i = j.
Proof. We have ker Πi−1 = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊆ im Pi = ker Qi . Then ker Mi = ker Πi−1 Qi = ker Qi = im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni ⊕ im Πi . By Theorem 3.6, it follows that im Mi = im Πi−1 ∩ (ker Πi−1 ⊕ im Qi ) = im Πi−1 ∩ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ Ni ). Note that
(3.58)
3.6 Some properties of the projectors Πi and Mi
im Πj−1 ⊆ im Πi−1 ,
�
177
j > i,
and Mi Mj = Πi−1 Qi Πj−1 Mj . 1.
Let i = j. Then, using that Mi is a projector, we have Mi Mi = Mi .
2.
Let j > i. Then im Mj = im Πj−1 ∩ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nj ) ⊆ im Πj−1 ⊆ im Πi−1 ⊆ N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Πi−1 = ker Mi .
Hence, Mi Mj = 0. 3.
Let j < i. Then Qi Qj = 0. From here, Mi Mj = Πi−1 Qi Πj−1 Qj = Πi−1 Qi P0 P1 . . . Pi Πj−1 Qj
= Πi−1 Qi (I − Q0 )P1 . . . Pi Πj−1 Qj = Πi−1 (Qi − Qi Q0 )P1 . . . Pi Πj−1 Qj = Πi−1 Qi P1 . . . Pi Πj−1 Qj = ⋅ ⋅ ⋅ = Πi−1 Qi P1 . . . Pi Πj−1 Qj = 0.
This completes the proof. Theorem 3.8. Suppose that all conditions of Theorem 3.7 hold. Then Πi Πj = { Proof. 1.
Πi Πj
if if
i ≥ j, i < j.
Let i = j. Then Πi Πi = Πi .
2.
Let i > j. Take x ∈ ℝm arbitrarily. Suppose that x ∈ ker Πj . Then Πj x = 0
178 � 3 P-projectors. Matrix chains and Πi Πj x = 0. Note that ker Πj = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nj ⊆ N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni = ker Πi . Hence, x ∈ ker Πi and Πi x = 0. Therefore, Πi Πj x = Πi x. Let x ∈ im Πj . Then Πj x = x and Πi Πj x = Πi x. Because x ∈ ℝm was arbitrarily chosen, we conclude that Πi Πj = Πi . 3.
Let i < j. Take x ∈ ℝm arbitrarily. Then im Πj ⊆ im Πi . Hence, Πi Πj = Πj . This completes the proof.
Theorem 3.9. Let all conditions of Theorem 3.7 hold. Then Πi Mj = {
0
if
Mj
if
i ≥ j, i < j.
Proof. By the definition of Mj , we get Πi Mj = Πi (Πj−1 − Πj ) = Πi Πj−1 − Πi Πj .
(3.59)
3.6 Some properties of the projectors Πi and Mi
1.
� 179
Let i ≥ j. Then Πi Mj = Πi − Πi = 0.
2.
Let i < j. Then Πi Mj = Πj−1 − Πj = Mj . This completes the proof.
Theorem 3.10. Suppose that all conditions of Theorem 3.7 hold. Then M i Πj = {
Mi 0
if if
i > j, i ≤ j.
(3.60)
Proof. By the definition of Mj , we find Mi Πj = (Πi−1 − Πi )Πj = Πi−1 Πj − Πi Πj . 1.
Let i > j. Then Mi Πj = Πi−1 − Πi = Mi .
2.
Let i ≤ j. Then Mi Πj = Πj − Πj = 0. This completes the proof.
Theorem 3.11. Let all conditions of Theorem 3.7 hold. Then M i Qj = {
0
if
Mi
if
i > j, i = j.
Proof. We have Mi Qj = (Πi−1 − Πi )Qj = Πi−1 Qj − Πi Qj . 1.
Let i > j. Then, applying that Qi Qj = 0, we arrive at Mi Qj = Πi−1 Qi Qj = 0.
(3.61)
2.
Let i = j. Then Mi Qi = Πi−1 Qi Qi = Πi−1 Qi = Mi . This completes the proof.
Corollary 3.3. Suppose that all conditions of Theorem 3.11 hold. Then Mi Pi = 0.
(3.62)
Proof. By Theorem 3.11, we have Mi Qi = Mi . Then Mi Pi = Mi (I − Qi ) = Mi − Mi Qi = Mi − Mi = 0. This completes the proof. Theorem 3.12. Suppose that all conditions of Theorem 3.7 hold. Then Qi M i = Qi .
(3.63)
Proof. We have Qi Mi = Qi Πi−1 Qi = Qi P0 P1 . . . Pi−1 Qi = Qi (I − Q0 )P1 . . . Pi−1 Qi = (Qi − Qi Q0 )P1 . . . Pi−1 Qi = Qi P1 . . . Pi−1 Qi = ⋅ ⋅ ⋅ = Qi Qi = Qi . This completes the proof.
Theorem 3.13. Suppose that all conditions of Theorem 3.7 hold. Then Qj Pi Pi−1 . . . Pl = Qj ,
j > i > l.
(3.64)
Proof. We have Qj Pi Pi−1 . . . Pl = Qj (I − Qi )Pi−1 . . . Pl = (Qj − Qj Qi )Pi−1 . . . Pl = Qj Pi−1 . . . Pl = ⋅ ⋅ ⋅ = Qj Pl = Qj (I − Ql ) = Qj − Qj Ql = Qj .
This completes the proof. Theorem 3.14. Suppose that all conditions of Theorem 3.7 hold. Then Pk Pk−1 . . . Pi = I − Qi − Qi+1 − ⋅ ⋅ ⋅ − Qk ,
k ≥ i.
(3.65)
3.6 Some properties of the projectors Πi and Mi
� 181
Proof. For k = i, we have Pk = I − Qk . Let k > i. Applying (3.64), we get Pk Pk−1 . . . Pi = Pk Pk−1 Pk−2 . . . Pi = (I − Qk )Pk−1 Pk−2 . . . Pi
= Pk−1 Pk−2 . . . Pi − Qk Pk−1 Pk−2 . . . Pi = (I − Qk−1 )Pk−2 . . . Pi − Qk = ⋅ ⋅ ⋅ = Pi − Qi+1 − Qi+2 − ⋅ ⋅ ⋅ − Qk = I − Qi − Qi+ − ⋅ ⋅ ⋅ − Qk .
This completes the proof. Theorem 3.15. Let (A, B) be a matrix pair with tractability index ν ≥ 1. Then Gi Pi−1 = Gi−1 ,
i ∈ {1, . . . , ν}.
(3.66)
Proof. Fix i ∈ {1, . . . , ν}. Since ker Mi−1 = ker Qi−1 = im Pi−1 = im Gi−1 . Therefore, Gi−1 Qi−1 = 0 and Gi−1 Pi−1 = Gi−1 (I − Qi−1 ) = Gi−1 − Gi−1 Qi−1 = Gi−1 . Now, using that Gi = Gi−1 + Ci−1 Qi−1 , we find Gi Pi−1 = (Gi−1 + Ci−1 Qi−1 )Pi−1 = Gi−1 Pi−1 + Ci−1 Qi−1 Pi−1 = Gi−1 . This completes the proof. Theorem 3.16. Let (A, B) be a matrix pair with tractability index ν ≥ 1. Then G0 = Gi Pi−1 . . . P0 ,
i ∈ {1, . . . , ν}
(3.67)
Gi = Gν Pν−1 . . . Pi ,
i ∈ {0, . . . , ν}.
(3.68)
and
182 � 3 P-projectors. Matrix chains Proof. We apply (3.66) and we find G0 = G1 P0 = G2 P1 P0 = G3 P2 P1 P0 = ⋅ ⋅ ⋅ = Gi Pi−1 . . . P0 . Next, for i ∈ {0, . . . , ν}, we obtain Gi = Gi+1 Pi = Gi+2 Pi+1 Pi = ⋅ ⋅ ⋅ = Gν Pν−1 . . . Pi . This completes the proof. Theorem 3.17. Let (A, B) be a matrix pair with tractability index ν ≥ 1. Then Gν−1 G0 = I − Q0 − ⋅ ⋅ ⋅ − Qν−1 .
(3.69)
Moreover, Gν−1 Gi = I − Qi − ⋅ ⋅ ⋅ − Qν−1 ,
i ∈ {0, . . . , ν − 1}.
(3.70)
Proof. By (3.67), we find G0 = Gν Pν−1 . . . P0 , whereupon Gν−1 G0 = Pν−1 . . . P0 . Now, using (3.65), we find Gν−1 G0 = I − Q0 − ⋅ ⋅ ⋅ − Qν−1 . Next, for i ∈ {0, . . . , ν}, using (3.68), we find Gi = Gν Pν−1 . . . Pi , whereupon, using (3.65), we arrive at the equalities Gν−1 Gi = Pν−1 . . . Pi = I − Qi − ⋅ ⋅ ⋅ − Qν−1 . This completes the proof. Theorem 3.18. Let (A, B) be a regular matrix pair with tractability index ν ≥ 1. Then j
Δ
σ Gν−1σ CMj = −(Q1σ + ⋅ ⋅ ⋅ + Qjσ )Mj − ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMj . i=1
(3.71)
3.6 Some properties of the projectors Πi and Mi
�
183
Proof. By Lemma 3.18, it follows that i
Δ
σ Ci Πi = (C + Gi+1 + G1σ + ∑ Gjσ B−σ (BΠj B− ) B)Πi−1 . j=1
By (3.59), we have Πi−1 Mi = Mi . Hence, i
Δ
σ 0 = Ci Πi Mi = (C + Gi+1 + Giσ + ∑ Gjσ B−σ (BΠj B− ) B)Πi−1 Mi j=1
i
Δ
σ = (C + (Gi+1 − G1σ ) + ∑ Gjσ B−σ (BΠj B− ) B)Mi j=1
i
Δ
σ = CMi + (Gi+1 − G1σ )Mi + ∑ Gjσ B−σ (BΠj B− ) BMi . j=1
Therefore, i
Δ
σ Gν−1σ CMi = −(Gν−1σ Gi+1 − Gν−1σ G1σ )Mi − ∑ Gν−1σ Gjσ B−σ (BΠj B− ) BMi . j=1
Let i < ν − 1. Now, we apply (3.70) and we find σ σ σ σ Gν−1σ CMi = −(I − Qi+1 − ⋅ ⋅ ⋅ − Qν−1 − (I − Q1σ − ⋅ ⋅ ⋅ − Qiσ − Qi+1 − ⋅ ⋅ ⋅ − Qν−1 ))Mi i
Δ
σ = −(Q1σ − ⋅ ⋅ ⋅ − Qiσ )Mi − ∑(I − Qjσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠj .B− ) BMi . j=1
For i = ν − 1, we find σ σ Gν−1σ CMν−1 = −(I − (I − Q1σ − ⋅ ⋅ ⋅ − Qiσ − Qi+1 − ⋅ ⋅ ⋅ − Qν−1 ))Mν−1 ν−1
Δ
σ σ = −(Q1σ − ⋅ ⋅ ⋅ − Qν−1 )Mν−1 − ∑ (I − Qjσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠj B− ) BMν−1 . j=1
This completes the proof. As above, one can prove the following result. Theorem 3.19. Let (A, B) be a regular matrix pair with tractability index ν ≥ 1. Then 1. ̃ i Pi−1 = G ̃ i−1 , G
i ∈ {1, . . . , ν},
184 � 3 P-projectors. Matrix chains 2. ̃0 = G ̃ i Pi−1 . . . P0 , G
i ∈ {1, . . . , ν},
̃i = G ̃ ν Pν−1 . . . Pi , G
i ∈ {0, . . . , ν},
3.
4. ̃ −1 G ̃ G ν 0 = I − Q0 − ⋅ ⋅ ⋅ − Qν−1 , 5. ̃ −1 G ̃ G ν i = I − Qi − ⋅ ⋅ ⋅ − Qν−1 ,
i ∈ {0, . . . , ν},
6. j
̃ −1 CMj = Qj + ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− )Δ BMj . G ν i=1
Theorem 3.20. Let (A, B) be a regular matrix pair with tractability index ν ≥ 1. Define Vk = Qk Pk+1 . . . Pν−1 .
(3.72)
Then Vk is a projector. Proof. Applying (3.45), we get Vk Vk = Qk Pk+1 . . . Pν−1 Qk Pk+1 . . . Pν−1 = Qk (Pk+1 . . . Pν−1 Qk )Pk+1 . . . Pν−1 = Qk Qk Pk+1 . . . Pν−1 = Qk Pk+1 . . . Pν−1 = Vk ,
i. e., Vk is a projector. This completes the proof. Theorem 3.21. Let (A, B) be a regular matrix pair with tractability index ν ≥ 1. Then im Qk Pk+1 . . . Pj = Nk ,
ν ≥ j > k,
(3.73)
and ker Qk Pk+1 . . . Pj = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nj ⊕ im Πj ,
ν ≥ j > k.
(3.74)
3.6 Some properties of the projectors Πi and Mi
�
185
Proof. We will use the principle of the mathematical induction. 1. Let j = k + 1. Since Qk and Pk+1 are projectors, applying Theorem 3.6, we get im Qk Pk+1 = im Qk = Nk . Moreover, ker Qk Pk+1 = ker Pk+1 ⊕ (ker Qk ∩ im Pk+1 )
= Nk+1 ⊕ ((N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ im Πk ) ∩ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ ⊕ im Πk+1 )).
(3.75)
Note that im Πk+1 ⊆ im Πk and N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊆ N0 ⊕ ⋅ ⋅ ⋅ Nk−1 ⊕ Nk+1 . Since (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ) ∩ Nk = {0}, we have (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ im Πk ) ∩ Nk = 0. Then (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ im Πk+1 ) ∩ Nk ⊆ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ im Πk ) ∩ Nk = 0. Therefore, (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ im Πk ) ∩ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk ⊕ im Πk+1 ) = (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ im Πk+1 ). Now, applying (3.75), we arrive at ker Qk Pk+1 = Nk+1 ⊕ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ im Πk+1 ). 2.
Assume that im Qk Pk+1 . . . Pl = Nk
186 � 3 P-projectors. Matrix chains and ker Qk Pk+1 . . . Pl = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ N − l ⊕ im Πl 3.
for some l > k, l < j. We will prove that im Qk Pk+1 . . . Pl+1 = Nk and ker Qk Pk+1 . . . Pl+1 = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ N − l ⊕ im Πl+1 . Since Qk Pk+1 . . . Pl
and Pl+1
are projectors, by Theorem 3.6, we obtain im Qk Pk+1 . . . Pl+1 = im Qk Pk+1 . . . Pl Pl+1 = im Qk Pk+1 . . . Pl = Nk and ker Qk Pk+1 . . . Pl+1
= ker Qk Pk+1 . . . Pl Pl+1
= ker Pl+1 ⊕ ((ker Qk Pk+1 . . . Pl ) ∩ im Pl+1 )
= Nl+1 ⊕ ((N0 ⊕ ⋅ ⋅ ⋅ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nl ⊕ im Πl ) ∩ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nl ⊕ im Πl+1 )). Note that im Πl+1 ⊆ im Πl and N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nl ∩ Nl+1 = {0}. Then (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nl ⊕ im Πl+1 ) ∩ Nl+1 = 0 and (N0 ⊕ ⋅ ⋅ ⋅ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nl ⊕ im Πl ) ∩ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nl ⊕ im Πl+1 ) = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nl ⊕ im Πl+1 .
3.6 Some properties of the projectors Πi and Mi
� 187
Therefore, ker Qk Pk+1 . . . Pl+1 = Nl+1 ⊕ N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nl ⊕ im Πl+1 . Hence, and the principal of the mathematical induction, it follows that the assertion holds for any j > k, j ≤ ν. This completes the proof. Theorem 3.22. Let (A, B) be a regular matrix pair with tractability index ν ≥ 1. Then im Vk = Nk
(3.76)
ker Vk = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 ⊕ im Πν−1 .
(3.77)
and
Proof. By (3.73), it follows that im Vk = im Qk Pk+1 . . . Pν−1 = Nk . By (3.74), we get ker Vk = ker Qk Pk+1 . . . Pν−1
= N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 ⊕ im Πν−1 .
This completes the proof. Theorem 3.23. Let (A, B) be a regular matrix pair with tractability index ν ≥ 1. Define Uk = Mk Pk+1 . . . Pν−1 . Then Uk = Πk−1 Vk ,
(3.78)
im Uk = im Πk−1 ∩ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk )
(3.79)
ker Uk = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 ⊕ im Πν−1 .
(3.80)
and
Proof. We have Uk = Mk Pk+1 . . . Pν−1 = Πk−1 Qk Pk+1 . . . Pν−1 = Πk−1 Vk . Hence, applying Theorem 3.6, we get
188 � 3 P-projectors. Matrix chains im Uk = im Πk−1 Vk = im Πk−1 ∩ (ker Πk−1 ⊕ im Vk ) = im Πk−1 ∩ (N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk ) and ker Uk = ker Πk−1 Vk = ker Vk = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 ⊕ im Πν−1 . This completes the proof. Theorem 3.24. Let (A, B) be a regular matrix pair with tractability index ν ≥ 1. Then Uk { { { Uk Πi = {Uk − Mk { { {Uk − Mk Pk+1 . . . Pi Proof. 1.
if if if
i < k, i = k, i > k.
Let i < k. Then, applying (3.47), we find
Uk Πi = Mk Pk+1 . . . Pν−1 Πi
= Mk (Pk+1 . . . Pν−1 − Qi − Qi−1 Pi − Qi−1 Pi−1 Pi − ⋅ ⋅ ⋅ − Q0 P1 . . . Pi )
= Πk−1 Qk (Pk+1 . . . Pν−1 − Qi − Qi−1 Pi − Qi−1 Pi−1 Pi − ⋅ ⋅ ⋅ − Q0 P1 . . . Pi )
= Πk−1 (Qk Pk+1 . . . Pν−1 − Qk Qi − Qk Qi−1 Pi − Qk Qi−1 Pi−1 Pi − ⋅ ⋅ ⋅ − Qk Q0 P1 . . . Pi ) = Πk−1 Qk Pk+1 . . . Pν−1 = Mk Pk+1 . . . Pν−1 = Uk . 2.
Let i = k. Then Uk Πk = Mk Pk+1 . . . Pν−1 Πi
= Mk (Pk+1 . . . Pν−1 − Qk − Qk−1 Pk − Qk−1 Pk−1 Pk − ⋅ ⋅ ⋅ − Q0 P1 . . . Pk )
= Πk−1 Qk (Pk+1 . . . Pν−1 − Qk − Qk−1 Pi − Qk−1 Pk−1 Pk − ⋅ ⋅ ⋅ − Q0 P1 . . . Pk ) = Πk−1 (Qk Pk+1 . . . Pν−1 − Qk Qk − Qk Qk−1 Pk − Qk Qk−1 Pk−1 Pk − ⋅ ⋅ ⋅ − Qk Q0 P1 . . . Pk )
= Πk−1 Qk Pk+1 . . . Pν−1 − Πk−1 Qk = Mk Pk+1 . . . Pν−1 − Mk = Uk − Mk . 3.
Let i > k. Then, applying (3.47), we get Uk Πi = Mk Pk+1 . . . Pν−1 Πi
= Mk (Pk+1 . . . Pν−1 − Q0 P1 . . . Pi − Q1 P2 . . . Pi − ⋅ ⋅ ⋅ − Qk Pk+1 . . . Pν−1 )
= Πk−1 Qk (Pk+1 . . . Pν−1 − Q0 P1 . . . Pi − Q1 P2 . . . Pi − ⋅ ⋅ ⋅ − Qk Pk+1 . . . Pν−1 )
= Πk−1 (Qk Pk+1 . . . Pν−1 − Qk Q0 P1 . . . Pi − Qk Q1 P2 . . . Pi − ⋅ ⋅ ⋅ − Qk Qk Pk+1 . . . Pν−1 ) = Πk−1 (Qk Pk+1 . . . Pν−1 − Qk Pk+1 . . . Pi )
= Πk−1 Qk Pk+1 . . . Pν−1 − Πk−1 Qk Pk+1 . . . Pi = Uk − Mk Pk+1 . . . Pi . This completes the proof.
3.7 Advanced practical problems Problem 3.1. Let 𝕋 = 3ℕ0 and 0 A(t) = (2t 0
1 0 1
−1 0 ), 0
1 B(t) = (0 1
0 0 0
−1 1 ), 1
0 C(t) = (0 0
1 0 t2
−3 1 ), 0
t ∈ 𝕋.
Find: 1. Q0 , Q1 , Q2 , C0 , C1 , C2 and G0 , G1 , G2 , ̃ 0, G ̃ 1, G ̃ 2. 2. C̃0 , C̃1 , C̃1 and G Problem 3.2. Let {Q0 , . . . , Qk } and {Q0 , . . . , Qk } be two admissible up to level k projector sequences and {G0 , N0 , C0 , . . . , Gk , Nk , Ck } and {G0 , N 0 , C 0 , . . . , Gk , N k , C k } be the corresponding sequences. Prove: 1. P0 . . . Pi P0 . . . Pk = P0 . . . Pi , 0 ≤ k ≤ i, 2. P0 . . . P − iP − 0 . . . Pk = P0 . . . Pk , k ≥ i, 3. P0 . . . Pi Pj . . . P0 P0 . . . Pk = P0 . . . Pi , 0 ≤ j, k ≤ i, 4. P0 . . . P − iPj . . . P0 P0 . . . P − k = P0 . . . Pk , 0 ≤ j ≤ i, k ≥ i, 5. Pi Qj = Qj , i > j, 6. Qi Pj = Qi , i > j, 7. Pi Qj Pk = Qj , i > j > k, 8. Qi Pj Qi = Qi , i > j, 9. Qj Pi Pk = Qj , j > i, k, 10. Qj Pi Pk Pl = Qj , j > i, k, l. Problem 3.3. Let {Π0 , . . . , Πk } be an admissible up to level k projector sequence. Let also, Pi be the projector defined by ker Pi = Ni ,
im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Pi .
Prove that: 1. Πi−1 Pi Πj Pj−1 = {
Πj+1 Πi
if
j ≥ i,
if
j < i,
2.
Πi−1 Pi Πj Pj+1 Πl Pl+1
Πi { { { = {Πj+1 { { {Πl+1
if if if
j, l < i, j ≥ i, l, l ≥ i, j,
190 � 3 P-projectors. Matrix chains 3. Πi
if
Πj
if
Πi { { { Πi−1 Pi Πj Πl = {Πj { { {Πl
if
Πi−1 Pi Πj = {
i ≥ j, i < j,
4. i ≥ j, l,
if
j ≥ i, l,
if
l ≥ i, j,
5. Πi { { { Πi−1 Pi Πj Πl−1 Pl = {Πj { { {Πl
if if if
i ≥ j, l, j ≥ i, l, l ≥ i, j.
Problem 3.4. Let {Π0 , . . . , Πν } be an admissible up to level ν ≥ 1 projector sequence. Let also, Pi be the projector defined by ker Pi = Ni , and Qi = I − Pi . Prove that: 1. Mi Pj = Mi , i > j, 2. Πi−1 Pi Πj−1 Pj Qj+1 = Mj+1 ,
im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Pi
i > j.
Problem 3.5. Let {Π0 , . . . , Πν } be an admissible up to level ν ≥ 1 projector sequence. Let also, Pi be the projector defined by ker Pi = Ni ,
im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Pi
and Qi = I − Pi . Prove that: 1. Qi Pi+1 . . . Pν−1 Qi is a projector, 2. Qi Pi+1 . . . Pν−1 Qi = Qi , 3. Qi Pi+1 . . . Pν−1 Qm = 0, i ≠ m. Problem 3.6. Let {Π0 , . . . , Πν−1 } be an admissible up to level ν ≥ 1 projector sequence. Let also, Pi be the projector defined by ker Pi = Ni , and Qi = I − Pi . Prove that:
im Pi = N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni−1 ⊕ im Pi
1. Qj Pj+1 . . . Pν−1 (I − Πl ) = 0,
l ∈ {0, . . . , j − 1},
j ∈ {1, . . . , ν − 2},
2. Qν−1 (I − Πl ) = 0,
l ∈ {0, . . . , ν − 2},
3. Qj Pj+1 . . . Pν−1 (I − Πj ) = 0,
j ∈ {0, . . . , ν − 2},
4. Qν−1 (I − Πν−1 = 0, 5. Qj Pj+1 . . . Pj+s−1 Πj+s = 0,
s ∈ {0, . . . , ν − 1 − j},
j ∈ {0, . . . , ν − 2},
6. Qj Pj+1 . . . Pν−1 (I − Πj+s ) = Qj Pj+1 . . . Pj+s ,
s ∈ {1, . . . , ν − 1 − j},
j ∈ {0, . . . , ν − 2}.
3.8 Notes and references
In this chapter, we introduce the concepts of properly stated, preadmissible, admissible and regular matrix pairs. We construct matrix chains and prove that the matrix chain does not depend on the choice of the projector sequence. The equivalence of the constructed matrix chains is established, and some important properties of projector sequences are investigated and deduced. The results in this chapter generalize the classical results to arbitrary time scales. Some of the results in this chapter can be found in [5–7, 13–20].
4 First kind linear time-varying dynamic-algebraic equations Suppose that 𝕋 is a time scale with a forward jump operator and delta differentiation operator σ and Δ, respectively. Let I ⊆ 𝕋. In this chapter, we will investigate the following linear time-varying dynamicalgebraic equation: Aσ (t)(Bx)Δ (t) = C σ (t)x σ (t) + f (t),
t ∈ I,
(4.1)
where A : I → Mn×m , B : I → Mm×n , C : I → Mn×n and f : I → ℝn are given. The equation (4.1) is said to be a first kind linear time-varying dynamic-algebraic equation. We will consider the solutions of (4.1) within the space
C_B^1 (I) = {x : I → ℝm : Bx ∈ C 1 (I)}.
Below, we remove the explicit dependence on t for the sake of notational simplicity.
4.1 A classification In this section, we will give a classification of the dynamic-algebraic equation (4.1). Definition 4.1. The matrix pair (Aσ , B) will be said to be the (σ, 1)-leading term of the dynamic-algebraic equation (4.1). Definition 4.2. We will say that the first kind linear time-varying dynamic-algebraic equation (4.1) is (σ, 1)-properly stated if its (σ, 1)-leading term (Aσ , B) is properly stated. Definition 4.3. We will say that the first kind linear time-varying dynamic-algebraic equation (4.1) is (σ, 1)-algebraically nice at level 0 if its (σ, 1)-leading term (Aσ , B) is properly stated. Definition 4.4. We will say that the first kind linear time-varying dynamic-algebraic equation (4.1) is (σ, 1)-algebraically nice at level k ≥ 1 if it is (σ, 1)-algebraically nice at level k − 1 and (A5) and (A6) hold for i = k for some admissible up to level k projector sequence Q0 , . . . , Qk−1 . Definition 4.5. We will say that the first kind linear time-varying dynamic-algebraic equation (4.1) is (σ, 1)-nice at level k if it is (σ, 1)-algebraically nice at level k and there exists an admissible choice of Qk . The ranks ri of Gi , i ∈ {0, . . . , k}, are said to be characteristic values of (4.1). Definition 4.6. The first kind linear time-varying dynamic-algebraic equation (4.1) is said to be (σ, 1)-regular with tractability index 0 if both Aσ and B are invertible. https://doi.org/10.1515/9783111377155-004
Definition 4.7. The first kind linear time-varying dynamic-algebraic equation (4.1) is said to be (σ, 1)-regular with tractability index ν if there exists an admissible projector sequence {Q0 , . . . , Qν−1 } for which the matrices Gi are singular for 0 ≤ i < ν and Gν is nonsingular.
Definition 4.8. The first kind linear time-varying dynamic-algebraic equation (4.1) is said to be regular if it is regular with some nonnegative tractability index.
Note that the tractability index of (4.1) is ν if and only if it is nice up to level ν − 1, the matrices Gi , 0 ≤ i < ν, are singular and Gν is nonsingular. Since the dimension of the direct sum N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni increases, the tractability index cannot exceed m.
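In computational terms, Definitions 4.6–4.7 say that the tractability index is the first position at which the matrix chain becomes nonsingular. The helper below is an added illustrative sketch (the function name and the toy chain are our own choices, not from the text); it assumes the chain G0, G1, . . . has already been built from an admissible projector sequence.

```python
import numpy as np

def tractability_index(G_chain, tol=1e-12):
    """Return the smallest i with G_i nonsingular, mirroring Definitions 4.6-4.7.

    G_chain is a list [G_0, G_1, ...]; the function only inspects singularity,
    it does not construct the chain itself.
    """
    for i, G in enumerate(G_chain):
        if abs(np.linalg.det(G)) > tol:
            return i
    raise ValueError("no nonsingular matrix in the supplied chain")

# Toy chain: G0 singular, G1 nonsingular, so the index is 1.
G0 = np.array([[1.0, 0.0], [0.0, 0.0]])
G1 = np.array([[1.0, 1.0], [0.0, 1.0]])
assert tractability_index([G0, G1]) == 1
```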
4.2 A particular case Suppose that A, C : I → Mn×n . Consider the equation Aσ x Δ = C σ x σ + f .
(4.2)
We will show that the equation (4.2) can be reduced to the equation (4.1). Suppose that P is a C 1 -projector along ker Aσ . Then Aσ P = Aσ and Aσ x Δ = Aσ Px Δ = Aσ (Px)Δ − Aσ PΔ x σ . Hence, the equation (4.2) takes the form Aσ (Px)Δ − Aσ PΔ x σ = C σ x σ + f or Aσ (Px)Δ = (Aσ PΔ + C σ )x σ + f . Set C1σ = Aσ PΔ + C σ . Thus, (4.2) takes the form
Aσ (Px)Δ = C1σ x σ + f ,
(4.3)
i. e., the equation (4.2) is a particular case of the equation (4.1). Example 4.1. Let 𝕋 = 2ℕ0 and 1 A(t) = (0 0
0 −t 0
0 1) , 0
1 1 0
−t C(t) = ( 0 t
t 2t ) , 1
t ∈ 𝕋.
We have σ(t) = 2t,
t ∈ 𝕋,
and 1 Aσ (t) = (0 0
0 −2t 0
0 1) , 0
−2t C σ (t) = ( 0 2t
1 1 0
2t 4t ) , 1
We will find a vector y1 (t) y(t) = (y2 (t)) , y3 (t)
t ∈ 𝕋,
so that 0 Aσ (t)y(t) = (0) , 0
t ∈ 𝕋.
We have 0 1 (0) = (0 0 0
0 −2t 0
0 y1 (t) 1 ) (y2 (t)) 0 y3 (t)
y1 (t) = (−2ty2 (t) + y3 (t)) , 0
t ∈ 𝕋,
whereupon y1 (t) 0 (y2 (t)) = ( 1 ) , y3 (t) 2t
t ∈ 𝕋,
t ∈ 𝕋.
4.2 A particular case
�
and the null projector to Aσ (t), t ∈ 𝕋, is 0 Q(t) = (0 0
0 0 0
0 1), 2t
t ∈ 𝕋.
Hence, 1 P(t) = I − Q(t) = (0 0
0 1 0
0 0 0) − (0 1 0
0 0 0
0 1 1 ) = (0 2t 0
0 1 0
0 −1 ) , 1 − 2t
1 1 0
2t 4t ) 1
t ∈ 𝕋,
is a projector along ker Aσ . Note that 0 PΔ (t) = (0 0
0 0 0
0 0 ), −2
C1σ (t) = Aσ (t)PΔ (t) + C σ (t) 1 = (0 0
0 −2t 0
0 = (0 0
0 0 0
0 0 1 ) (0 0 0
0 0 0
0 −2t 0)+( 0 −2 2t
0 −2t −2) + ( 0 0 2t
1 1 0
2t −2t 4t ) = ( 0 1 2t
1 1 0
2t 4t − 2) . 1
The equation (4.2) can be written as follows: 1 (0 0
0 −2t 0
x1Δ (t) 0 −2t 1 ) (x2Δ (t)) = ( 0 0 2t x3Δ (t)
1 1 0
2t x1σ (t) f1 (t) 4t ) (x2σ (t)) + (f2 (t)) , 1 x3σ (t) f3 (t)
or x1Δ (t) = −2tx1σ (t) + x2σ (t) + 2tx3σ (t) + f1 (t),
−2tx2Δ (t) + x3Δ (t) = x2σ (t) + 4tx3σ (t) + f2 (t), 0 = 2tx1σ (t) + x3σ (t) + f3 (t),
This system, using (4.3), can be rewritten in the form
t ∈ 𝕋.
t ∈ 𝕋,
195
196 � 4 First kind linear time-varying dynamic-algebraic equations 1 (0 0
0 −2t 0
−2t =( 0 2t
0 1 1 ) ((0 0 0
0 1 0
Δ
0 x1 (t) −1 ) (x2 (t))) 1 − 2t x3 (t)
2t x1σ (t) f1 (t) 4t − 2) (x2σ (t)) + (f2 (t)) , 1 x3σ (t) f3 (t)
1 1 0
t ∈ 𝕋,
or 1 (0 0
0 −2t 0
Δ
0 x1 (t) −2tx1σ (t) + x2σ (t) + 2tx3σ (t) + f1 (t) 1 ) (x2 (t) − x3 (t)) = ( x2σ (t) + (4t − 2)x3σ (t) + f2 (t) ) , 0 (1 − 2t)x3 (t) 2tx1σ (t) + x3σ (t) + f3 (t)
t ∈ 𝕋,
or 1 (0 0
x1Δ (t) 0 ) 1) ( x2Δ (t) − x3Δ (t) 0 (1 − 4t)x3Δ (t) − 2x3 (t)
0 −2t 0
−2tx1σ (t) + x2σ (t) + 2tx3σ (t) + f1 (t) = ( x2σ (t) + (4t − 2)x3σ (t) + f2 (t) ) , 2tx1σ (t) + x3σ (t) + f3 (t)
t ∈ 𝕋,
or x1Δ (t) = −2tx1σ (t) + x2σ (t) + 2tx3σ (t) + f1 (t),
−2t(x2Δ (t) − x3Δ (t)) + (1 − 4t)x3Δ (t) − 2x3 (t) = x2σ (t) + (4t − 2)x3σ (t) + f2 (t), 0 = −2tx1σ (t) + x3σ (t) + f3 (t),
or x1Δ (t) = −2tx1σ (t) + x2σ (t) + 2tx3σ (t) + f1 (t),
−2tx2Δ (t) + (1 − 2t)x3Δ (t) = x2σ (t) + (4t − 2)x3σ (t) + 2x3 (t) + f2 (t), 0 = 2tx1σ (t) + x3σ (t) + f3 (t),
t ∈ 𝕋.
Exercise 4.1. Let 𝕋 = 2ℕ0 and 0 A(t) = (t + 1 1 1. 2. 3.
−1 t 0
1 0) , 0
Find the projector P along ker Aσ . Write the system (4.2). Write the system (4.3).
t C(t) = (0 t
0 0 0
t−1 t ), 0
t ∈ 𝕋.
t ∈ 𝕋,
4.3 Standard form index one problems
� 197
4.3 Standard form index one problems In this section, we will investigate the equation Aσ (Px)Δ = C σ x σ + f ,
(4.4)
where ker A is a C 1 -space, C ∈ C (I), P is a C 1 -projector along ker A. Then AP = A. Assume in addition that Q=I −P and (B1) the matrix A1 = A + CQ is invertible. The condition (B1) ensures the equation (4.4) to be regular with tractability index 1. We will start our investigations with the following useful lemma. Lemma 4.1. Suppose that (B1) holds. Then A−1 1 A=P and A−1 1 CQ = Q. Proof. We have A1 P = (A + CQ)P = AP + CQP = A. Since Q = I − P and ker P = ker A, we have im Q = ker A and AQ = 0. Then A1 Q = (A + CQ)Q = AQ + CQQ = CQ. This completes the proof.
Now, we multiply the equation (4.4) with (A1−1 )σ and we get
(A1−1 )σ Aσ (Px)Δ = (A1−1 )σ C σ x σ + (A1−1 )σ f .
Now, we employ the first equation of Lemma 4.1 and we get
Pσ (Px)Δ = (A1−1 )σ C σ x σ + (A1−1 )σ f .
(4.5)
We decompose x in the following way: x = Px + Qx. Then the equation (4.5) takes the following form: σ
σ
σ σ σ σ σ −1 Pσ (Px)Δ = (A−1 1 ) C (P x + Q x ) + (A1 ) f σ
σ
σ
σ σ σ −1 σ σ σ −1 = (A−1 1 ) C P x + (A1 ) C Q x + (A1 ) f .
Using the second equation of Lemma 4.1, the last equation can be rewritten as follows: σ
σ
σ σ σ σ σ −1 Pσ (Px)Δ = (A−1 1 ) C P x + Q x + (A1 ) f .
(4.6)
We multiply the equation (4.6) with the projector Pσ and using that PP = P,
PQ = 0,
we find σ
σ
σ σ σ σ σ σ σ −1 Pσ Pσ (Px)Δ = Pσ (A−1 1 ) C P x + P Q x + P (A1 ) f
or σ
σ
σ σ σ σ −1 Pσ (Px)Δ = Pσ (A−1 1 ) C P x + P (A1 ) f .
(4.7)
Note that Pσ (Px)Δ = (PPx)Δ − PΔ Px = (Px)Δ − PΔ Px. Hence and (4.7), we find σ
σ
σ
σ
σ σ σ σ −1 (Px)Δ − PΔ Px = Pσ (A−1 1 ) C P x + P (A1 ) f
or σ σ σ σ −1 (Px)Δ = PΔ Px + Pσ (A−1 1 ) C P x + P (A1 ) f .
(4.8)
4.3 Standard form index one problems
� 199
Now, we multiply the equation (4.6) by Qσ and we find σ
σ −σ σ σ σ σ σ σ σ −1 Qσ Pσ (Px)Δ = Qσ (A−1 1 )C B B P x + Q Q x + Q (A1 ) f
or σ
σ
σ −σ σ σ σ σ σ σ −1 0 = Qσ (A−1 1 ) C B B P x + Q x + Q (A1 ) f .
(4.9)
Set u = Px,
v = Qx.
Then, by (4.8) and (4.9), we get the system σ
σ
σ −σ σ σ −1 uΔ = PΔ u + Pσ (A−1 1 ) C B u + P (A1 ) f , σ
(4.10)
σ
σ −σ σ σ −1 vσ = −Qσ (A−1 1 ) C B u − Q (A1 ) f .
We find u ∈ C 1 (I) by the first equation of the system (4.10) and then we find vσ ∈ C (I) by the second equation of the system (4.10). Then, for the solution x of the equation (4.4) we have the following representation: x σ = uσ + vσ = Pσ x σ + Qσ x σ . Example 4.2. Let 𝕋 = ℕ and −1 A(t) = ( 0 0
t+1 0 2t + 2
−1 0 ), −2
0 C(t) = (0 0
1 P(t) = (0 0
0 0 −(t + 1)
0 0) , 1
t ∈ 𝕋.
σ(t) = t + 1,
t ∈ 𝕋.
0 −t 2
1 1) , 1
Here,
We have −1 A(t)P(t) = ( 0 0
t+1 0 2t + 2
−1 1 0 ) (0 −2 0
0 0 −(t + 1)
Therefore, P is a projector along ker A. Next,
0 −1 0) = ( 0 1 0
t+1 0 2t + 2
−1 0 ), −2
t ∈ 𝕋.
200 � 4 First kind linear time-varying dynamic-algebraic equations 1 Q(t) = I − P(t) = (0 0
0 1 0
0 1 0 ) − (0 1 0
0 0 −(t + 1)
0 0 0) = (0 1 0
0 1 t+1
0 0) , 0
1 0 1) (0 1 0
0 1 t+1
0 0) 0
t ∈ 𝕋,
and A1 (t) = A(t) + C(t)Q(t) −1 =(0 0
t+1 0 2t + 2
−1 0 0 ) + (0 −2 0
0 −t 2
−1 =(0 0
t+1 0 2t + 2
−1 0 0 ) + (0 −2 0
t+1 1 t+3
−1 =(0 0
2(t + 1) 1 3t + 5
−1 0 ), −2
0 0) 0
t ∈ 𝕋.
Note that det A1 (t) = 2 ≠ 0,
t ∈ 𝕋.
Thus, A1 is invertible. We will find its cofactors. We compute 0 1 1 0 0 0 = −2, a12 (t) = − = 0, a13 (t) = a11 (t) = 0 −1 0 3t + 5 = 0, 3t + 5 −2 2(t + 1) −1 = −(−4(t + 1) + 3t + 5) = −(−4t − 4 + 3t + 5) = t − 1, a21 (t) = − 3t + 5 −2 −1 −1 −1 2(t + 1) = 2, a23 (t) = − = 3t + 5, a22 (t) = 0 3t + 5 0 −2 2(t + 1) −1 −1 −1 = 1, a32 (t) = − = 0, a31 (t) = 0 0 0 1 −1 2(t + 1) = −1, t ∈ 𝕋. a33 (t) = 1 0 Consequently, A−1 1 (t)
−2 1 = (0 2 0
t−1 2 3t + 5
−1 1 0)=(0 −1 0
t−1 2
1
3t+5 2
1 2
0 ), − 21
t ∈ 𝕋.
4.3 Standard form index one problems
� 201
Hence, −1 Aσ (t) = ( 0 0
t+2 0 2t + 4
0 C σ (t) = (0 0
0 −t − 1 2
0 Qσ (t) = (0 0
0 1 t+2
−1 0 ), −2 1 1) , 1 0 0) , 0
σ (A−1 1 ) (t)
−1 =(0 0
1 Pσ (t) = (0 0
0 0 −(t + 2)
0 PΔ (t) = (0 0
0 0 −1
t 2
1
1 2
3t+8 2
0 0) , 0
0 ), − 21
0 0) , 1
t ∈ 𝕋.
Also, 1 P (t)(A ) (t) = (0 0 σ
P
σ
−1 σ
σ σ (t)(A−1 1 ) (t)C (t)
−1 =(0 0 0 = (0 0
Q
σ
σ (t)(A−1 1 ) (t)
0 = (0 0 0 = (0 0
Q
σ
σ σ (t)(A−1 1 ) (t)C (t)
0 0 −(t + 2) t 2
0
1 2
0 0 ) (0 − 21 0
t+4 2 − (t+2)(t−1) 2
0 − (t+2)(t+3) 2 0 1 t+2
1 2
−1 0 )=(0 0 − 21
1
3t+8 2
0 −t − 1 2
1 1) 1
t−1 2
0 ),
t+3 2
−1 0 0) ( 0 0 0
0 1
0 0) ,
0 1
0 0 0 ) (0 0 0
t+2
t 2
−1 0 0) ( 0 1 0
t 2
1 2
3t+8 2
1
0) − 21
0 −t − 1 2
1 1) 1
0
0 = (0 0
t+2
0 = (0 0
0 −t − 1 −(t + 1)(t + 2)
0 1 ), t+2
Then the equation (4.4) can be rewritten as follows:
t ∈ 𝕋.
t 2
0
t+4 2
1 2
0 ), − 21
202 � 4 First kind linear time-varying dynamic-algebraic equations t+2 0 2t + 4
−1 (0 0
0 = (0 0
−1 1 0 ) ((0 −2 0
0 −1 − t 2
0 0 −(t + 1)
Δ
0 x1 (t) 0) (x2 (t))) 1 x3 (t)
1 x1σ (t) f1 (t) 1) (x2σ (t)) + (f2 (t)) , 1 x3σ (t) f3 (t)
t ∈ 𝕋,
or −1 (0 0
t+2 0 2t + 4
Δ
−1 x1 (t) ) 0 )( 0 −2 −(t + 1)x2 (t) + x3 (t)
x1σ (t) f1 (t) = (−(t + 1)x2σ (t) + x3σ (t)) + (f2 (t)) , 2x2σ (t) + x3σ (t) f3 (t)
t ∈ 𝕋,
or −1 (0 0
t+2 0 2t + 4
−1 x1Δ (t) ) 0 )( 0 −2 −x2σ (t) − (t + 1)x2Δ (t) + x3Δ (t)
x1σ (t) f1 (t) = (−(t + 1)x2σ (t) + x3σ (t)) + (f2 (t)) , 2x2σ (t) + x3σ (t) f3 (t)
t ∈ 𝕋,
or −x1Δ (t) + x2σ (t) + (t + 1)x2Δ (t) − x3Δ (t) x1σ (t) + f1 (t) ( ) = (−(t + 1)x2σ (t) + x3σ (t) + f2 (t)) , 0 σ Δ Δ 2x2 (t) + 2(t + 1)x2 (t) − 2x3 (t) 2x2σ (t) + x3σ (t) + f3 (t) or −x1Δ (t) + x2σ (t) + (t + 1)x2Δ (t) − x3Δ (t) = x1σ (t) + f1 (t),
0 = −(t + 1)x2σ (t) + x3σ (t) + f2 (t),
2x2σ (t) + 2(t + 1)x2Δ (t) − 2x3Δ (t) = 2x2σ (t) + x3σ (t) + f3 (t),
t ∈ 𝕋,
or −x1Δ (t) + (t + 1)x2Δ (t) − x3Δ (t) = x1σ (t) − x2σ (t) + f1 (t),
0 = −(t + 1)x2σ (t) + x3σ (t) + f2 (t),
2(t + 1)x2Δ (t) − 2x3Δ (t) = x3σ (t) + f3 (t),
t ∈ 𝕋.
t ∈ 𝕋,
4.4 Decoupling of first kind linear time-varying dynamic-algebraic equations of index one
� 203
Exercise 4.2. Let 𝕋 = 4ℕ and 2 A(t) = (0 0 1. 2. 3.
t 0 4t
1 0) , 4
0 C(t) = (0 1
0 t −2
1 −1) , 1
1 P(t) = (0 0
0 1 0
0 1 ), −t + 1
t ∈ 𝕋.
Find the matrix A1 (t) = A(t) + C(t)Q(t), t ∈ 𝕋. Find the equation (4.4). Find the equation (4.10).
4.4 Decoupling of first kind linear time-varying dynamic-algebraic equations of index one Suppose that the equation (4.1) is (σ, 1)-regular with tractability index 1. In addition, assume that R is a continuous projector onto im B and along ker A. Set G0 = AB and take P0 to be a continuous projector along ker G0 and denote Q0 = I − P0 ,
B− BB− = B− ,
G1 = G0 + CQ0 ,
B− B = P0 ,
BB− B = B,
BB− = R,
N0 = ker G0 .
Since R = BB− and R is a continuous projector along ker A, we have A = AR = ABB− = G0 B− . Then we can rewrite the equation (4.1) as follows: Aσ Bσ B−σ (Bx)Δ = C σ x σ + f or G0σ B−σ (Bx)Δ = C σ x σ + f . Now, we multiply both sides of the last equation with (G1−1 )σ and we find σ
σ
σ
(G1−1 ) G0σ B−σ (Bx)Δ = (G1−1 ) C σ x σ + (G1−1 ) f . Note that by the definition of G0 and Q0 we have G0 Q0 = 0. Then
(4.11)
204 � 4 First kind linear time-varying dynamic-algebraic equations G1 (I − Q0 ) = (G0 + CQ0 )(I − Q0 ) = G0 + CQ0 − G0 Q0 − CQ0 Q0 = G0 + CQ0 − CQ0 = G0 , whereupon G1−1 G0 = I − Q0 . Thus, the equation (4.11) takes the form σ
σ
(I − Q0σ )B−σ (Bx)Δ = (G1−1 ) C σ x σ + (G1−1 ) f .
(4.12)
Note that Bσ P0σ (I − Q0σ ) = Bσ (P0σ − P0σ Q0σ ) = Bσ P0σ . Then we multiply the equation (4.12) by Bσ P0σ and we find σ
σ
Bσ P0σ B−σ (Bx)Δ = Bσ P0σ (G1−1 ) C σ x σ + Bσ P0σ (G1−1 ) f .
(4.13)
Note that Δ
Δ
Bσ P0σ B−σ (Bx)Δ = (BP0 B− Bx) − (BP0 B− ) Bx Δ
Δ
= (BP0 P0 x)Δ − (BP0 B− ) Bx = (BP0 x)Δ − (BP0 B− ) Bx.
From here, the equation (4.13) can be rewritten as follows: Δ
σ
σ
(BP0 x)Δ = (BP0 B− ) Bx + Bσ P0σ (G1−1 ) C σ x σ + Bσ P0σ (G1−1 ) f . Now, we use the decomposition x = P0 x + Q0 x, to get Δ
Δ
(BP0 x)Δ = (BP0 B− ) BP0 x + (BP0 B− ) BQ0 x σ
σ
σ
+ Bσ P0σ (G1−1 ) C σ P0σ x σ + Bσ P0σ (G1−1 ) C σ Q0σ x σ + Bσ P0σ (G1−1 ) f .
Since im Q0 = ker B, we have Δ
(BP0 B− ) BQ0 x = 0. Next, I = G1−1 (G0 + CQ0 ) = G1−1 G0 + G1−1 CQ0 = I − Q0 + G1−1 CQ0 , whereupon
4.4 Decoupling of first kind linear time-varying dynamic-algebraic equations of index one
G1−1 CQ0 = Q0 .
� 205
(4.14)
Then σ
Bσ P0σ (G1−1 ) C σ Q0σ = Bσ P0σ Q0σ = 0 and we arrive at the equation Δ
σ
σ
(BP0 x)Δ = (BP0 B− ) BP0 x + Bσ P0σ (G1−1 ) C σ P0σ x σ + Bσ P0σ (G1−1 ) f . Since B− BP0 = P0 P0 = P0 , the last equation can be rewritten in the form Δ
σ
σ
(BP0 x)Δ = (BP0 B− ) BP0 x + Bσ P0σ (G1−1 ) C σ B−σ Bσ P0σ x σ + Bσ P0σ (G1−1 ) f . We set u = BP0 x. Then we obtain the equation Δ
σ
σ
uΔ = (BP0 B− ) u + Bσ P0σ (G1−1 ) C σ B−σ uσ + Bσ P0σ (G1−1 ) f . Now, we multiply both sides of (4.12) by Q0σ and using that Q0σ (I − Q0σ ) = Q0σ − Q0σ Q0σ = Q0σ − Q0σ = 0, we find σ
σ
0 = Q0σ (G1−1 ) C σ x σ + Q0σ (G1−1 ) f , or σ
σ
Q0σ (G1−1 ) f = −Q0σ (G1−1 ) C σ x σ σ
= −Q0σ (G1−1 ) C σ (P0σ x σ + Q0σ x σ ) σ
σ
= −Q0σ (G1−1 ) C σ P0σ x σ − Q0σ (G1−1 ) C σ Q0σ x σ σ
= −Q0σ (G1−1 ) C σ P0σ x σ − Q0σ Q0σ x σ σ
= −Q0σ (G1−1 ) C σ P0σ x σ − Q0σ x σ , where we have used (4.14). We set
(4.15)
206 � 4 First kind linear time-varying dynamic-algebraic equations v = Q0 x. Then we get σ
σ
vσ = −Q0σ (G1−1 ) C σ uσ − Q0σ (G1−1 ) f . By the last equation and (4.15), we obtain the system Δ
σ
σ
uΔ = (BP0 B− ) u + Bσ P0σ (G1−1 ) C σ B−σ uσ + Bσ P0σ (G1−1 ) f , σ
σ
vσ = −Q0σ (G1−1 ) C σ B−σ uσ − Q0σ (G1−1 ) f .
(4.16)
By the first equation of (4.16), we find u and then by the second equation we find vσ . Then, for the solution x of (4.1), we have B−σ uσ + vσ = B−σ Bσ P0σ x σ + Q0σ x σ = P0σ x σ + Q0σ x σ = x σ . Now, we will show that the described process above is σ-reversible, i. e., if u ∈ C 1 (I), u ∈ im BP0 B− and vσ ∈ C (I) satisfy (4.16) and x σ = B−σ uσ + vσ , then x satisfies (4.1). Really, since u ∈ im BP0 B− , we have BP0 B− u = u and BB− u = BB− BP0 B− u = BP0 P0 B− u = BP0 B− u = u. By the second equation of (4.16), using that Q0σ Q0σ = Q0σ , we find vσ = Q0σ vσ . Because im Q0 = ker B, we have Bσ vσ = Bσ Q0σ vσ = 0. Using (4.17), we find Bσ P0σ x σ = Bσ P0σ (B−σ uσ + vσ ) = Bσ P0σ (B−σ uσ + Q0σ vσ ) = Bσ P0σ B−σ uσ = uσ .
Hence, B−σ uσ = B−σ Bσ P0σ x σ = P0σ P0σ x σ = P0σ x σ = x σ − Q0σ x σ
(4.17)
4.4 Decoupling of first kind linear time-varying dynamic-algebraic equations of index one
�
207
using (4.17), we find x σ = B−σ uσ + vσ = x σ − Q0σ x σ + vσ , whereupon vσ = Q0σ x σ . Next, Bσ x σ = Bσ (P0σ x σ + Q0σ x σ ) = Bσ (P0σ x σ + vσ ) = Bσ P0σ x σ + Bσ vσ = uσ .
Note that the equation (4.15) is restated by the equation (4.11). Then, multiplying (4.11) with Bσ P0σ , we find σ
σ
σ
Bσ P0σ (G1−1 ) G0σ B−σ (Bx)Δ = Bσ P0σ (G1−1 ) C σ x σ + Bσ P0σ (G1−1 ) f , which we premultiply by B−σ and using that B−σ Bσ P0σ = P0σ , we arrive at σ
σ
σ
P0σ (G1−1 ) G0σ B−σ (Bx)Δ = P0σ (G1−1 ) C σ x σ + P0σ (G1−1 ) f , from where σ
σ
σ
(4.18)
σ
σ
(4.19)
P0σ (G1−1 ) Aσ (Bx)Δ = P0σ (G1−1 ) C σ x σ + P0σ (G1−1 ) f . Next, we multiply (4.11) by Q0σ and we find σ
Q0σ (G1−1 ) Aσ (Bx)Δ = Q0σ (G1−1 ) C σ x σ + Q0σ (G1−1 ) f . Now, we add (4.18) and (4.19) and we find σ
σ
σ
(G1−1 ) Aσ (Bx)Δ = (G1−1 ) C σ x σ + (G1−1 ) f , whereupon we get (4.1). Definition 4.9. The equation (4.15) is said to be the inherent equation for the equation (4.1). Theorem 4.1. The subspace im P0 is an invariant subspace for the equation (4.15), i. e., u(t0 ) ∈ (im BP0 )(t0 ) for some t0 ∈ I if and only if u(t) ∈ (im BP0 )(t) for any t ∈ I.
208 � 4 First kind linear time-varying dynamic-algebraic equations Proof. Let u ∈ C 1 (I) be a solution to the equation (4.15) so that (BP0 )(t0 )u(t0 ) = u(t0 ). Hence, u(t0 ) = (BP0 )(t0 )u(t0 ) = (BP0 P0 P0 )(t0 )u(t0 ) = (BP0 B− BP0 )(t0 )u(t0 ) = (BP0 B− )(t0 )(BP0 )(t0 )u(t0 ) = (BP0 B− )(t0 )u(t0 ).
We multiply the equation (4.15) by I − Bσ P0σ B−σ and we get Δ
(I − Bσ P0σ B−σ )uΔ = (I − Bσ P0σ B−σ )(BP0 B− ) u
+ (I − Bσ P0σ B−σ )Bσ P0σ G1−1 C σ B−σ uσ σ
+ (I − Bσ P0σ B−σ )Bσ P0σ (G1−1 ) f Δ
= (I − Bσ P0σ B−σ )(BP0 B− ) u
+ (Bσ P0σ − Bσ P0σ B−σ Bσ P0σ )G1−1 C σ B−σ uσ + (Bσ P0σ − Bσ P0σ B−σ Bσ P0σ )G1−1 f Δ
= (I − Bσ P0σ B−σ )(BP0 B− ) u
+ (Bσ P0σ − Bσ P0σ )G1−1 C σ B−σ uσ + (Bσ P0σ − Bσ P0σ )G1−1 f
Δ
= (I − Bσ P0σ B−σ )(BP0 B− ) u, i. e., Δ
(I − Bσ P0σ B−σ )uΔ = (I − Bσ P0σ B−σ )(BP0 B− ) u.
(4.20)
Set v = (I − BP0 B− )u. Then Δ
vΔ = (I − Bσ P0σ B−σ )uΔ + (I − BP0 B− ) u Δ
Δ
= (I − Bσ P0σ B−σ )(BP0 B− ) u + (I − BP0 B− ) u Δ
Δ
Δ
= ((I − BP0 B− )BP0 B− ) u − (I − BP0 B− ) BP0 B− u + (I − BP0 B− ) u Δ
Δ
Δ
= (BP0 B− − BP0 B− BP0 B− ) u + (I − BP0 B− ) (I − BP0 B− )u = (I − BP0 B− ) v, i. e., Δ
vΔ = (I − BP0 B− ) v.
4.4 Decoupling of first kind linear time-varying dynamic-algebraic equations of index one
Note that v(t0 ) = u(t0 ) − (BP0 B− )(t0 )u(t0 ) = u(t0 ) − u(t0 ) = 0. Thus, we get the following IVP: Δ
on
on
I.
vΔ = (I − BP0 B− ) v
I,
v(t0 ) = 0. Therefore, v = 0 on I and then
BP0 B− u = u Hence, using that
im BP0 = im BP0 B− , we get BP0 u = u
on I.
This completes the proof. Example 4.3. Let 𝕋 = 2ℕ0 and t 0 A(t) = (0 0 0 0 0 C(t) = ( 0 −1 1
0 1 0 0 0 0 0 −1 1 0
0 0 t2 ) , 0 0 0 t 0 0 0
t B(t) = (0 0
−1 1 0 −t 2 0
1 0 0) , 0 t2
0 t2 0
t ∈ 𝕋.
We will find a vector y1 (t) y(t) = (y2 (t)) ∈ ℝ3 , y3 (t)
t ∈ 𝕋,
so that A(t)y(t) = 0,
0 0 1
t ∈ 𝕋.
0 0 0
0 0) , 0
� 209
210 � 4 First kind linear time-varying dynamic-algebraic equations We have 0 t 0 0 (0) = (0 0 0 0 0
0 1 0 0 0
0 0 y1 (t) t 2 ) (y2 (t)) 0 y3 (t) 0
ty1 (t) y2 (t) 2 = (t y3 (t)) , 0 0
t ∈ 𝕋.
Hence, y1 (t) = 0, y2 (t) = 0, y3 (t) = 0,
t ∈ 𝕋.
Then the null space projector of A is 0 R0 (t) = (0 0
0 0 0
0 0) 0
0 0 0) − (0 1 0
0 0 0
0 1 0 ) = (0 0 0
and 1 R(t) = I − R0 (t) = (0 0
0 1 0
0 1 0
0 0) , 1
0 0 t2 0 0
0 0 0 0 0
t ∈ 𝕋,
is a projector along ker A. Next, G0 (t) = A(t)B(t) t 0 = (0 0 0
0 1 0 0 0
We will find a vector
0 0 t t 2 ) (0 0 0 0
0 t2 0
0 0 1
0 0 0
t2 0 0 0) = ( 0 0 0 0
0 t2 0 0 0
0 0 0) , 0 0
t ∈ 𝕋.
4.4 Decoupling of first kind linear time-varying dynamic-algebraic equations of index one
z1 (t) z2 (t) z(t) = (z3 (t)) , z4 (t) z5 (t)
�
t ∈ 𝕋,
so that G0 (t)z(t) = 0,
t ∈ 𝕋.
We have 0 t2 0 0 (0) = ( 0 0 0 0 0
0 t2 0 0 0
0 0 t2 0 0
0 z1 (t) t 2 z1 (t) 0 z2 (t) t 2 z2 (t) 0) (z3 (t)) = (t 2 z3 (t)) , 0 z4 (t) 0 0 z5 (t) 0
0 0 0 0 0
t ∈ 𝕋,
whereupon z1 (t) = 0,
z2 (t) = 0,
z3 (t) = 0,
t ∈ 𝕋,
and 0 0 Q0 (t) = (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0) , 0 1
t ∈ 𝕋,
and 1 0 P0 (t) = I − Q0 (t) = (0 0 0 1 0 = (0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0) , 0 0
0 0 0 1 0
0 0 0 0 0) − (0 0 0 1 0
t ∈ 𝕋.
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0) 0 1
211
Then

G_1(t) = G_0(t) + C(t)Q_0(t)
= \begin{pmatrix} t^2 & 0 & 0 & 0 & 0 \\ 0 & t^2 & 0 & 0 & 0 \\ 0 & 0 & t^2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -t^2 & 0 \\ 0 & 0 & 0 & 0 & t^2 \end{pmatrix}
= \begin{pmatrix} t^2 & 0 & 0 & -1 & 1 \\ 0 & t^2 & 0 & 1 & 0 \\ 0 & 0 & t^2 & 0 & 0 \\ 0 & 0 & 0 & -t^2 & 0 \\ 0 & 0 & 0 & 0 & t^2 \end{pmatrix}, \quad t ∈ 𝕋.
Now, we will search a matrix

B^−(t) = \begin{pmatrix} b_{11}(t) & b_{12}(t) & b_{13}(t) \\ b_{21}(t) & b_{22}(t) & b_{23}(t) \\ b_{31}(t) & b_{32}(t) & b_{33}(t) \\ b_{41}(t) & b_{42}(t) & b_{43}(t) \\ b_{51}(t) & b_{52}(t) & b_{53}(t) \end{pmatrix}, \quad t ∈ 𝕋,

so that

B(t)B^−(t) = R(t), \quad B^−(t)B(t) = P_0(t), \quad B(t)B^−(t)B(t) = B(t), \quad B^−(t)B(t)B^−(t) = B^−(t), \quad t ∈ 𝕋.

We have

R(t) = B(t)B^−(t) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} t b_{11}(t) & t b_{12}(t) & t b_{13}(t) \\ t^2 b_{21}(t) & t^2 b_{22}(t) & t^2 b_{23}(t) \\ b_{31}(t) & b_{32}(t) & b_{33}(t) \end{pmatrix}, \quad t ∈ 𝕋.
Then

b_{11}(t) = \frac{1}{t}, \quad b_{12}(t) = 0, \quad b_{13}(t) = 0, \quad b_{21}(t) = 0, \quad b_{22}(t) = \frac{1}{t^2}, \quad b_{23}(t) = 0,
b_{31}(t) = 0, \quad b_{32}(t) = 0, \quad b_{33}(t) = 1, \quad t ∈ 𝕋,

and

B^−(t) = \begin{pmatrix} \frac{1}{t} & 0 & 0 \\ 0 & \frac{1}{t^2} & 0 \\ 0 & 0 & 1 \\ b_{41}(t) & b_{42}(t) & b_{43}(t) \\ b_{51}(t) & b_{52}(t) & b_{53}(t) \end{pmatrix}, \quad t ∈ 𝕋.

Now, we compute

P_0(t) = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} = B^−(t)B(t) = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ t b_{41}(t) & t^2 b_{42}(t) & b_{43}(t) & 0 & 0 \\ t b_{51}(t) & t^2 b_{52}(t) & b_{53}(t) & 0 & 0 \end{pmatrix}, \quad t ∈ 𝕋.

So,

b_{41}(t) = 0, \quad b_{42}(t) = 0, \quad b_{43}(t) = 0, \quad b_{51}(t) = 0, \quad b_{52}(t) = 0, \quad b_{53}(t) = 0, \quad t ∈ 𝕋,

and

B^−(t) = \begin{pmatrix} \frac{1}{t} & 0 & 0 \\ 0 & \frac{1}{t^2} & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad t ∈ 𝕋.
Next,

B^σ(t) = \begin{pmatrix} 2t & 0 & 0 & 0 & 0 \\ 0 & 4t^2 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}, \quad
B^{−σ}(t) = \begin{pmatrix} \frac{1}{2t} & 0 & 0 \\ 0 & \frac{1}{4t^2} & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},

C^σ(t) = \begin{pmatrix} 0 & 0 & 0 & -1 & 1 \\ 0 & 0 & 2t & 1 & 0 \\ 0 & -1 & 0 & 0 & 0 \\ -1 & 1 & 0 & -4t^2 & 0 \\ 1 & 0 & 0 & 0 & 4t^2 \end{pmatrix},

P_0^σ(t) = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \quad
Q_0^σ(t) = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad t ∈ 𝕋.

Note that

det G_1(t) = −t^{10} ≠ 0, \quad t ∈ 𝕋.
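Before inverting G1 , it is easy to spot-check this chain numerically at a sample point of 𝕋 = 2ℕ0 . The following is a minimal sketch, assuming numpy; the sample point t = 4 is an arbitrary illustrative choice, and the matrices are those of Example 4.3 as stated above.

```python
import numpy as np

t = 4.0  # a sample point of the time scale 2^{N_0}

A = np.array([[t, 0, 0], [0, 1, 0], [0, 0, t**2], [0, 0, 0], [0, 0, 0]])
B = np.array([[t, 0, 0, 0, 0], [0, t**2, 0, 0, 0], [0, 0, 1, 0, 0]])
C = np.array([[0, 0, 0, -1, 1],
              [0, 0, t, 1, 0],
              [0, -1, 0, 0, 0],
              [-1, 1, 0, -t**2, 0],
              [1, 0, 0, 0, t**2]])

G0 = A @ B                                   # = diag(t^2, t^2, t^2, 0, 0)
Q0 = np.diag([0.0, 0.0, 0.0, 1.0, 1.0])      # projector onto ker G0
G1 = G0 + C @ Q0

print(np.linalg.matrix_rank(G0))               # 3, so G0 is singular
print(np.isclose(np.linalg.det(G1), -t**10))   # True: det G1(t) = -t^10
```

Since G1 (t) is invertible for every t ∈ 𝕋, the chain terminates after one step, consistent with the index-one decoupling carried out in this section.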
We will find the cofactors of G1 (t), t ∈ 𝕋. We have t 2 0 g11 (t) = 0 0 0 0 g13 (t) = 0 0 0 0 g15 (t) = 0 0 t 2 0 g22 (t) = 0 0
0 t2 0 0
1 0 −t 2 0
0 0 0 0 8 = −t , g12 (t) = − 0 0 0 t 2 2 2 t 1 0 0 t 0 0 0 0 0 = 0, g14 (t) = − 0 0 0 −t 2 0 0 0 0 0 t 2 0 0 t2 0 1 2 2 0 t 0 0 t g21 (t) = − 2 = 0, 0 0 0 0 −t 0 0 0 0 0 t 2 0 −1 1 2 0 t 0 0 8 = −t , g23 (t) = − 2 0 0 −t 0 0 0 0 t 2
0 t2 0 0
1 0 −t 2 0 0 t2 0 0
0 0 = 0, 0 2 t
0 0 = 0, 0 2 t −1 1 0 0 = 0, −t 2 0 2 0 t 0 −1 1 0 0 0 = 0, 0 −t 2 0 2 0 0 t
0 0 0 1 0
0 0 0) , 0 1
t 2 0 g24 (t) = 0 0 0 2 t g31 (t) = 0 0 t 2 0 g33 (t) = 0 0 t 2 0 g35 (t) = 0 0 0 2 t g41 (t) = 0 0 t 2 0 g42 (t) = 0 0 t 2 0 g44 (t) = 0 0 0 2 t g51 (t) = 0 0 t 2 0 g52 (t) = − 0 0
t 2 0 0 −1 1 0 0 t 2 0 0 = 0, = 0, g25 (t) = 2 0 0 0 −t 0 0 0 0 t 2 0 t 2 0 −1 1 0 −1 1 0 0 0 1 0 1 0 = 0, = 0, g32 (t) = − 2 2 0 0 −t 0 −t 0 0 2 2 0 0 t 0 0 0 t t 2 0 0 1 0 −1 1 2 2 t 1 0 0 t 0 0 8 = 0, = −t , g34 (t) = − 2 0 0 0 0 0 −t 0 0 0 0 t 2 0 0 t 2 0 0 −1 t2 0 1 2 = 0, 0 0 −t 0 0 0 0 −1 1 t 2 0 0 t 2 0 1 0 1 0 6 = − 0 t 2 0 − 0 t 2 0 = −t , t 2 0 0 2 0 0 t 0 0 0 0 0 t 2 t 2 0 0 −1 1 0 1 0 0 t 2 0 1 0 2 6 = t t 2 0 0 = −t , g43 (t) = − 2 0 0 t 0 0 0 0 t 2 2 0 0 0 0 t 2 0 0 1 0 0 −1 t 2 t 2 0 0 0 t 0 1 8 = t , g45 (t) = − = 0, 2 2 0 0 t 0 t 0 0 0 0 0 0 0 0 t 2 0 −1 1 0 −1 1 0 1 0 2 2 0 0 = −t 2 (−t 4 ) = t 6 , = −t t t2 0 0 0 −t 2 0 2 0 −t 0 0 −1 1 0 1 0 0 1 0 2 2 2 4 6 = −t t 0 0 = −t (t ) = −t , t2 0 0 2 0 0 −t 0 −t 2 0 0 0 0 0
0 t2 0 0
1 0 −t 2 0
0 0 = 0, 0 t 2
t 2 0 g53 (t) = 0 0 t 2 0 g55 (t) = 0 0
0 t2 0 0
−1 1 0 −t 2
0 t2 0 0
0 0 t2 0
t 2 1 0 0 = 0, g54 (t) = − 0 0 0 0 −1 1 8 = −t , t ∈ 𝕋. 0 −t 2
0 t2 0 0
0 0 t2 0
1 0 = 0, 0 0
Consequently, −t 8 0 1 −1 G1 (t) = − 10 ( 0 t 0 0 1 t2
0
1 t2
0
( = (0 0
−t 6 −t 6 0 t8 0
0 0 −t 8 0 0
0
1 t4 1 t4
− t14
0
− t12
0
0
0
1 t2
0
0
0
(0
0 −t 8 0 0 0
t6 −t 6 0 ) 0 −t 8
1 t4
t ∈ 𝕋,
) 0 ),
0
1 t2
0
)
and 1 4t 2
0
0 ( ( (G ) (t) = ( 0 0
1 4t 2
(0
0
−1 σ
0 0
0 0
1 4t 2
0 0
1 16t 4 1 16t 4
0
− 4t12 0
− 16t1 4 1 16t 4
) 0 ) ), 0
1 4t 2
t ∈ 𝕋.
)
Moreover,
t − B(t)P0 (t)B (t) = (0 0
0 t2 0
0 0 1
0 0 0
1 0 0 0) (0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
1
0 t 0 0 0) ( 0 0 0 0 (0
0
1 t2
0 0 0
0
0 1) 0 0)
t = (0 0 1 0 = (0 0 0
0 t2 0
0 0 1
0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
1 t
0
0 0 0) , 0 0
0 0 1) 0 0)
1 t2
0 0 ( 0) 0 0 0 (0
0 0 0
t ∈ 𝕋,
and then

(BP0 B− )Δ (t) = 0, \quad t ∈ 𝕋.
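The identities B B− = R, B− B = P0 and the constancy of B P0 B− used above can also be verified directly. A minimal sketch, assuming numpy; the two sample points are arbitrary.

```python
import numpy as np

def B(t):
    return np.array([[t, 0, 0, 0, 0], [0, t**2, 0, 0, 0], [0, 0, 1, 0, 0]])

def Bminus(t):
    return np.array([[1/t, 0, 0], [0, 1/t**2, 0], [0, 0, 1], [0, 0, 0], [0, 0, 0]])

P0 = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])

for t in (2.0, 8.0):                                 # points of 2^{N_0}
    Bt, Bm = B(t), Bminus(t)
    assert np.allclose(Bt @ Bm, np.eye(3))           # B B^- = R
    assert np.allclose(Bm @ Bt, P0)                  # B^- B = P0
    assert np.allclose(Bt @ Bm @ Bt, Bt)             # B B^- B = B
    assert np.allclose(Bm @ Bt @ Bm, Bm)             # B^- B B^- = B^-
    assert np.allclose(Bt @ P0 @ Bm, np.eye(3))      # B P0 B^- is constant in t,
                                                     # hence (B P0 B^-)^Delta = 0
```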
Next, σ
Bσ (t)P0σ (t)(G1−1 ) (t) 2t = (0 0
2t = (0 0
0 4t 2 0
0 4t 2 0
0 0 1
0 0 1
0 0 0
0 0 0
1 0 0 0) (0 0 0 0
0 1 0 0 0
1 4t 2
0 0 ( 0) 0 0 0 (0
σ Bσ (t)P0σ (t)(G1−1 ) (t)C σ (t)B−σ (t) 1 2t
= (0 0
1 2t
= (0 0
0 1 0
0 1 0
0 0
0 0 0
0 0
0 0 0
1 4t 2
1 4t 2
0 0 0 0) ( 0 0 −1 1 0 0 0 ( 0 0) 0 − 2t1 (
and
1 2t
0 0 1 0 0
1
0 0 0 0 0
0
0 4t 2 0 0 0) ( 0 0 0 0 (0
0 0
1 4t 2
0 0 0 0 0
1 4t 2
0 0 0
0 0
0 0 −1 1 0
0 2t 0 0 0 0 0
− 4t12 1 4t 2
0
−1 1 0 −4t 2 0
0)
0
0 0 0
1 4t 2
1 4t 2
0)
1
1 2t 0 0 0 )(0 0 0 4t 2 (0
0
0 0
− 16t1 4
1 16t 4 1 16t 4
0
0 1 0 2t 0) = ( 0 0 0
0 2t 0 0 ) = (0 0
0
− 16t1 4 1 16t 4
0
0 0
− 4t12
0 1 0
0 0
0 ) 0
1 4t 2
0
1 4t 2
0
1 4t 2
0 0 0
0 2t ) , 0
0 0 0
)
0 0) , 0
0 0 1) 0 0)
t ∈ 𝕋,
Q0σ (t)(G1−1 )σ (t)
0 0 0 0 0
0 0 0 0 0
0 0 = (0 0
0 0 0 0
0 0 0 0
(0
σ
0
0
1
0 0 0 1 0 0 0 0
4t 2 0 0 0 ( 0) ( 0 0 0 1 (0
0 0 0 0
(0 1 2t
0 × (0 0 (0
0 0 =(0
0
0 =(0
1 8t 3 1 ( 8t3
0
0
− 4t12
0 0 0
− 4t12 0
0 1) 0 0)
0 0 0
0 0 0 0
0 0 0 1
0
0
0
0 0 0
− 16t1 4 0
0
0
− 4t12
0
0
1 4t 2
0
0
0
1 16t 4
) 0 )
0
1 4t 2
0
0 0 0 0 0 )( 0 0 −1 1 1 2)
0 0 −1 1 0
0 2t 0 0 0
1 0 2t 0 0 0) ( 0 0 0
0
0
4t
0 0 0
− 4t12
0
− 16t1 4
)
1 4t 2 )
0
0
1 4t 2
1 4t 2 1 ( 4t2
0
0 0 0 0
1 4t 2
1 16t 4 1 16t 4
0
0 0 0 ), 0
Q0σ (t)(G1−1 ) (t)C σ (t)B−σ 0 0 ( 0 = 0
0
0 0) , 0 0)
1) ( 0
1 4t 2
0 0 0
−1 1 0 −4t 2 0
1 0 0) 0 4t 2
0 1) 0 0)
t ∈ 𝕋.
Then the decoupling (4.16) takes the form u1Δ (t)
0 Δ (u2 (t)) = (0 0 uΔ (t) 3
0 0
− 16t1 4
1 0 u1σ (t) 2t 2t ) (u2σ (t)) + ( 0 0 0 u3σ (t)
0 1 0
0 f1 (t) 0 ) (f2 (t)) , 1 f3 (t) 4t 2
0 0 0 0 ( 0 ( 0 )=− 1 σ v0 (t) 8t 3 σ 1 v1 (t) ( 3 8t
0 0 ( 0 − 0 (0
0 0 0
0 0 u1σ (t) 0) (u2σ (t)) 0 u3σ (t)
− 16t1 4 0
0 0 0 0 0
0)
0 0 0 0
0 0 0
− 4t12
0
0
0 f1 (t) 0 f2 (t) 0 ) (f3 (t)) , 0 0 1 0 2)
t ∈ 𝕋,
4t
or

\begin{pmatrix} u_1^Δ(t) \\ u_2^Δ(t) \\ u_3^Δ(t) \end{pmatrix}
= \begin{pmatrix} 0 \\ 2t\,u_3^σ(t) \\ -\frac{1}{16t^4}\,u_2^σ(t) \end{pmatrix}
+ \begin{pmatrix} \frac{1}{2t}f_1(t) \\ f_2(t) \\ \frac{1}{4t^2}f_3(t) \end{pmatrix},
\qquad
\begin{pmatrix} 0 \\ 0 \\ 0 \\ v_0^σ(t) \\ v_1^σ(t) \end{pmatrix}
= -\begin{pmatrix} 0 \\ 0 \\ 0 \\ \frac{1}{8t^3}u_1^σ(t) - \frac{1}{16t^4}u_2^σ(t) \\ \frac{1}{8t^3}u_1^σ(t) \end{pmatrix}
- \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad t ∈ 𝕋,

or

u_1^Δ(t) = \frac{1}{2t}f_1(t),
u_2^Δ(t) = 2t\,u_3^σ(t) + f_2(t),
u_3^Δ(t) = -\frac{1}{16t^4}u_2^σ(t) + \frac{1}{4t^2}f_3(t),
v_0^σ(t) = -\frac{1}{8t^3}u_1^σ(t) + \frac{1}{16t^4}u_2^σ(t),
v_1^σ(t) = -\frac{1}{8t^3}u_1^σ(t), \quad t ∈ 𝕋.
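On 𝕋 = 2ℕ0 the graininess is μ(t) = t, so uΔ (t) = (u(2t) − u(t))/t and the decoupled inherent equation above can be advanced point by point. A minimal sketch, assuming numpy; the forcing f and the starting value are illustrative choices, not data of the example.

```python
import numpy as np

def M(t):   # coefficient of u^sigma in the decoupled inherent equation above
    return np.array([[0.0, 0.0, 0.0],
                     [0.0, 0.0, 2.0*t],
                     [0.0, -1.0/(16.0*t**4), 0.0]])

def N(t):   # coefficient of (f1, f2, f3)
    return np.diag([1.0/(2.0*t), 1.0, 1.0/(4.0*t**2)])

def f(t):   # illustrative forcing
    return np.array([1.0, 0.0, 1.0])

u, t = np.array([1.0, 0.0, 0.0]), 1.0     # value of u at t0 = 1
for _ in range(6):                        # walk through t = 2, 4, ..., 64
    # u^Delta(t) = M(t) u^sigma(t) + N(t) f(t) together with
    # u^sigma(t) = u(t) + t u^Delta(t) gives a linear system for u^sigma(t).
    u_sigma = np.linalg.solve(np.eye(3) - t*M(t), u + t*(N(t) @ f(t)))
    u, t = u_sigma, 2.0*t
    print(t, u)
```

Once u is known, the algebraic components v0σ and v1σ follow pointwise from the last two equations above.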
Exercise 4.3. Let 𝕋 = 3ℕ0 and

A(t) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & t+1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
B(t) = \begin{pmatrix} t+1 & 0 & 0 & 0 & 0 \\ 0 & t^2 & 0 & 0 & 0 \\ 0 & 0 & t & 0 & 0 \end{pmatrix},

C(t) = \begin{pmatrix} 0 & 0 & 0 & -t & t \\ 0 & 0 & t & t & 0 \\ 0 & -t & 0 & 0 & 0 \\ -t & t & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & -1 \end{pmatrix}, \quad t ∈ 𝕋.

Find the representation (4.16).
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2

4.5.1 A reformulation

Suppose that the equation (4.1) is (σ, 1)-regular with tractability index ν ≥ 2. Let R be a continuous projector onto im B and along ker Aσ . Denote G0 = Aσ B and let Π0 be a continuous projector along ker G0 . Set

M0 = I − Π0 , \quad C0 = C, \quad G1 = G0 + C0 M0 , \quad N0 = ker G0 ,
BB− = R, \quad B− B = Π0 , \quad BB− B = B, \quad B− BB− = B− .

Let Gi , Πi , i ∈ {1, . . . , ν − 1}, be as in (A5)–(A7) and let (A8) hold. Let also

Ciσ Πσi = (Cσi−1 + Ci Mi + Gi B− (BΠi B− )Δ Bσ )Πσi−1 , \quad i ∈ {1, . . . , ν − 1}.

Now, using that R = BB− and R is a continuous projector along ker Aσ , we get

Aσ = Aσ R = Aσ BB− = G0 B− .

Then we can rewrite the equation (4.1) as follows:

Aσ BB− (Bx)Δ = C σ x σ + f

or

G0 B− (Bx)Δ = C σ x σ + f .

Now, we multiply both sides of the last equation with Gν−1 and we find
Gν−1 G0 B− (Bx)Δ = Gν−1 C σ x σ + Gν−1 f .    (4.21)
By (3.69), we have Gν−1 G0 = I − Q0 − ⋅ ⋅ ⋅ − Qν−1 . Therefore, (4.21) takes the form (I − Q0 − ⋅ ⋅ ⋅ − Qν−1 )B− (Bx)Δ = Gν−1 C σ x σ + Gν−1 f .
(4.22)
Since Πν−1 projects along N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 and Ni = im Qi , we have Πν−1 (I − Q0 − ⋅ ⋅ ⋅ − Qν−1 ) = Πν−1 and then BΠν−1 (I − Q0 − ⋅ ⋅ ⋅ − Qν−1 ) = BΠν−1 . Then we multiply (4.22) by BΠν−1 and we get BΠν−1 (I − Q0 − ⋅ ⋅ ⋅ − Qν−1 )B− (Bx)Δ = BΠν−1 Gν−1 C σ x σ + BΠν−1 Gν−1 f or BΠν−1 B− (Bx)Δ = BΠν−1 Gν−1 C σ x σ + BΠν−1 Gν−1 f .
(4.23)
On the other hand,

Πν−1 B− B = Πν−1 Π0 = Πν−1 .

Since BΠν−1 B− and Bx are C 1 , we get

BΠν−1 B− (Bx)Δ = (BΠν−1 B− Bx)Δ − (BΠν−1 B− )Δ Bσ x σ = (BΠν−1 x)Δ − (BΠν−1 B− )Δ Bσ x σ .

Hence, (4.23) can be rewritten in the form

(BΠν−1 x)Δ − (BΠν−1 B− )Δ Bσ x σ = BΠν−1 Gν−1 C σ x σ + BΠν−1 Gν−1 f .

Now, we decompose x as follows:

x = Πν−1 x + (I − Πν−1 )x.

Then we find

(BΠν−1 x)Δ − (BΠν−1 B− )Δ Bσ (I − Πσν−1 + Πσν−1 )x σ = BΠν−1 Gν−1 C σ (I − Πσν−1 + Πσν−1 )x σ + BΠν−1 Gν−1 f ,

or

(BΠν−1 x)Δ − (BΠν−1 B− )Δ Bσ Πσν−1 x σ − (BΠν−1 B− )Δ Bσ (I − Πσν−1 )x σ
= BΠν−1 Gν−1 C σ Πσν−1 x σ + BΠν−1 Gν−1 C σ (I − Πσν−1 )x σ + BΠν−1 Gν−1 f .    (4.24)
By the definition of Cj , we obtain

Cjσ Πσj = (C σ + Gj+1 − G1 + ∑_{i=1}^{j} Gi B−σ (BΠi B− )Δ Bσ )Πσj−1 .

Hence, using that

Πj Mj = 0, \quad Πj−1 Mj = Mj ,

we arrive at the following relations:

0 = Cjσ Πσj Mjσ
= (C σ + Gj+1 − G1 + ∑_{i=1}^{j} Gi B− (BΠi B− )Δ Bσ )Πσj−1 Mjσ
= (C σ + Gj+1 − G1 + ∑_{i=1}^{j} Gi B− (BΠi B− )Δ Bσ )Mjσ
= C σ Mjσ + (Gj+1 − G1 )Mjσ + ∑_{i=1}^{j} Gi B− (BΠi B− )Δ Bσ Mjσ ,

from where

C σ Mjσ = −(Gj+1 − G1 )Mjσ − ∑_{i=1}^{j} Gi B− (BΠi B− )Δ Bσ Mjσ .

Multiplying the last equation with Gν−1 , we get

Gν−1 C σ Mjσ = −Gν−1 (Gj+1 − G1 )Mjσ − ∑_{i=1}^{j} Gν−1 Gi B− (BΠi B− )Δ Bσ Mjσ
= −(Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ − ∑_{i=1}^{j} (I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− )Δ Bσ Mjσ ,

i. e.,

Gν−1 C σ Mjσ = −(Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ − ∑_{i=1}^{j} (I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− )Δ Bσ Mjσ .    (4.25)
Then BΠν−1 Gν−1 C σ Mjσ = −BΠν−1 (Q1 + ⋅ ⋅ ⋅ Qj )Mjσ j
Δ
− ∑ BΠν−1 (I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ i=1 j
Δ
= − ∑ BΠν−1 B− (BΠi B− ) Bσ Mjσ i=1 j
j
Δ
Δ
= − ∑(BΠν−1 B− BΠi B− ) Bσ Mjσ + ∑(BΠν−1 B− ) Bσ Πσi B−σ Bσ Mjσ i=1
i=1
j
j−1
Δ
Δ
= − ∑(BΠν−1 B− BΠi B− ) Bσ Mjσ + ∑(BΠν−1 B− ) Bσ Πσi B−σ Bσ Mjσ i=1
− Δ σ
= −(BΠν−1 B ) B
i=1
Mjσ ,
i. e., Δ
Bσ Πσν−1 Gν−1 C σ Mjσ = −(BΠν−1 B− ) Bσ Mjσ . By the last equation, we find ν−1
Δ
ν−1
BΠν−1 Gν−1 C σ ∑ Mjσ = −(BΠν−1 B− ) Bσ ∑ Mjσ . j=0
j=0
Observe that ν−1
σ ∑ Mjσ = M0σ + M1σ + ⋅ ⋅ ⋅ + Mν−1
j=0
= I − Πσ0 + Πσ0 − Πσ1 + Πσ1 − Πσ2 + ⋅ ⋅ ⋅ + Πσν−2 − Πσν−1 = I − Πσν−1 .
From here, applying (4.26), we get Δ
BΠν−1 Gν−1 C σ (I − Πσν−1 ) = −(BΠν−1 B− ) Bσ (I − Πσν−1 ). By the last relation, the equation (4.24) takes the form Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) Bσ Πσν−1 x σ = BΠν−1 Gν−1 C σ Πσν−1 x σ + BΠν−1 Gν−1 f . Set u = BΠν−1 x. Then
(4.26)
224 � 4 First kind linear time-varying dynamic-algebraic equations C σ Πσν−1 x σ = C σ Πσ0 Πσν−1 x σ
= C σ B−σ Bσ Πσν−1 x σ = C σ B−σ uσ .
So, we arrive at the equation Δ
uΔ − (BΠν−1 B− ) uσ = BΠν−1 Gν−1 C σ B−σ uσ + BΠν−1 Gν−1 f , or Δ
uΔ = (BΠν−1 B− ) uσ + BΠν−1 Gν−1 C σ B−σ uσ + BΠν−1 Gν−1 f .
(4.27)
Definition 4.10. Equation (4.27) is said to be the inherent equation of the equation (4.1). Theorem 4.2. The subspace im Πν−1 is an invariant subspace for the equation (4.27), i. e., u(t0 ) ∈ (im BΠν−1 )(t0 ) for some t0 ∈ I if and only if u(t) ∈ (im BΠν−1 )(t) for any t ∈ I. Proof. Let u ∈ C 1 (I) be a solution to the equation (4.27) so that (BΠν−1 )(t0 )u(t0 ) = u(t0 ). Hence, u(t0 ) = (BΠν−1 )(t0 )u(t0 ) = (BΠν−1 Π0 Πν−1 )(t0 )u(t0 )
= (BΠν−1 B− BΠν−1 )(t0 )u(t0 ) = (BΠν−1 B− )(t0 )(BΠν−1 )(t0 )u(t0 ) = (BΠν−1 B− )(t0 )u(t0 ).
We multiply the equation (4.27) by I − BΠν−1 B− and we get Δ
(I − BΠν−1 B− )uΔ = (I − BΠν−1 B− )(BΠν−1 B− ) uσ
+ (I − BΠν−1 B− )BΠν−1 Gν−1 C σ B−σ uσ + (I − BΠν−1 B− )BΠν−1 Gν−1 f Δ
= (I − BΠν−1 B− )(BΠν−1 B− ) uσ + (BΠν−1 − BΠν−1 B− BΠν−1 )Gν−1 C σ B−σ uσ + (BΠν−1 − BΠν−1 B− BΠν−1 )Gν−1 f Δ
= (I − BΠν−1 B− )(BΠν−1 B− ) uσ
+ (BΠν−1 − BΠν−1 )Gν−1 C σ B−σ uσ + (BΠν−1 − BΠν−1 )Gν−1 f Δ
= (I − BΠν−1 B− )(BΠν−1 B− ) uσ , i. e.,
(I − BΠν−1 B− )uΔ = (I − BΠν−1 B− )(BΠν−1 B− )Δ uσ .
(4.28)
Set v = (I − BΠν−1 B− )u. Then Δ
vΔ = (I − BΠν−1 B− )uΔ + (I − BΠν−1 B− ) uσ Δ
Δ
= (I − BΠν−1 B− )(BΠν−1 B− ) uσ + (I − BΠν−1 B− ) uσ Δ
= ((I − BΠν−1 B− )BΠν−1 B− ) uσ Δ
Δ
− (I − BΠν−1 B− ) Bσ Πσν−1 B−σ uσ + (I − BΠν−1 B− ) uσ Δ
= (BΠν−1 B− − BΠν−1 B− BΠν−1 B− ) u Δ
Δ
+ (I − BΠν−1 B− ) (I − Bσ Πσν−1 B−σ )uσ = (I − BΠν−1 B− ) vσ ,
i. e., Δ
vΔ = (I − BΠν−1 B− ) vσ . Note that v(t0 ) = u(t0 ) − (BΠν−1 B− )(t0 )u(t0 ) = u(t0 ) − u(t0 ) = 0. Therefore, we obtain the following IVP: Δ
vΔ = (I − BΠν−1 B− ) vσ
on
v(t0 ) = 0. Therefore, v ≡ 0 on I and then
BΠν−1 B− u = u
on
I.
Hence, using that im BΠν−1 = im BΠν−1 B− , we get BΠν−1 u = u This completes the proof.
on
I.
I,
4.5.2 The component vσν−1
Consider the equation (4.21). Note that M0 + M1 + ⋅ ⋅ ⋅ + Mν−1 + Πν−1 = I − Π0 + Π0 − Π2 + ⋅ ⋅ ⋅ + Πν−2 − Πν−1 + Πν−1 = I. Then we decompose the solution x of the equation (4.21) in the form x = M0 x + M1 x + ⋅ ⋅ ⋅ + Mν−1 x + Πν−1 x = M0 x + M1 x + ⋅ ⋅ ⋅ + Mν−1 x + B− BΠν−1 x. Set vj = Mj x,
j ∈ {0, . . . , ν − 1}.
We multiply the equation (4.21) by Mν−1 and we find Mν−1 Gν−1 G0 B− (Bx)Δ = Mν−1 Gν−1 C σ x σ + Mν−1 Gν−1 f .
(4.29)
By (3.69), we have Mν−1 Gν−1 G0 = Mν−1 (I − Q0 − ⋅ ⋅ ⋅ − Qν−1 )
= Mν−1 − Mν−1 Q0 − ⋅ ⋅ ⋅ − Mν−1 Qν−1 = Mν−1 − Mν−1 = 0.
The equation (4.29) can be rewritten in the following manner: Mν−1 Gν−1 C σ x σ = −Mν−1 Gν−1 f .
(4.30)
From here, σ Mν−1 Gν−1 f = −Mν−1 Gν−1 C σ (M0σ x σ + M1σ x σ + ⋅ ⋅ ⋅ + Mν−1 x + B−σ Bσ Πσν−1 x σ .
Using (4.25), we arrive at Mν−1 Gν−1 C σ Mjσ = −Mν−1 (Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ j
Δ
− ∑ Mν−1 (I − Qi − Qi+1 − ⋅ ⋅ ⋅ − Qν−1 )B(BΠi B− ) Bσ Mjσ = 0,
i=1
j < ν − 1,
and σ σ Mν−1 Gν−1 C σ Mν−1 = −Mν−1 (Q1 + ⋅ ⋅ ⋅ + Qν−1 )Mν−1 ν−1
Δ
σ − ∑ Mν−1 (I − Qi − Qi+1 − ⋅ ⋅ ⋅ − Qν−1 )B(BΠi B− ) Bσ Mν−1 i=1
(4.31)
σ = −Mν−1 Mν−1 .
Then, by (4.31), we find σ −Mν−1 Mν−1 x σ − Mν−1 Gν−1 B− BΠν−1 x = −Mν−1 Gν−1 f
or Mν−1 vσν−1 = −Mν−1 Gν−1 C σ B−σ uσ + Mν−1 Gν−1 f .
(4.32)
4.5.3 The components vkσ In this section, we will give a representation of the components vk = Mk x, that were defined in the previous section. Recall the projectors Uk = Mk Pk+1 . . . Pν−1 which project along N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 ⊕ im Πν−1 . Then we have the following: Uk Qi = 0,
k ≠ i.
Now, using (3.69) and then (3.61), we obtain Uk Gν−1 G0 B− (Bx)Δ
= Uk (I − Q0 − ⋅ ⋅ ⋅ − Qν−1 )B− (Bx)Δ
= (Uk − Uk Q0 − ⋅ ⋅ ⋅ − Uk Qk−1 − Uk Qk − Uk Qk+1 − ⋅ ⋅ ⋅ − Uk Qν−1 )B− (Bx)Δ = (Uk − Uk Qk )B− (Bx)Δ = Uk (I − Qk )B− (Bx)Δ = Mk Pk+1 . . . Pν−1 (I − Qk )B− (Bx)Δ
= Mk (Pk+1 . . . Pν−1 − Pk+1 . . . Pν−1 Qk )B− (Bx)Δ
= Mk (Pk+1 . . . Pν−1 − Pk+1 . . . Pν−2 (I − Qν−1 )Qk )B− (Bx)Δ
= Mk (Pk+1 . . . Pν−1 − Pk+1 . . . Pν−2 (Qk − Qν−1 Qk ))B− (Bx)Δ .. .
= Mk (Pk+1 . . . Pν−1 − Pk+1 . . . Pν−2 Qk )B− (Bx)Δ = Mk (Pk+1 . . . Pν−1 − Pk+1 Qk )B− (Bx)Δ
= Mk (Pk+1 . . . Pν−1 − (I − Qk+1 )Qk )B− (Bx)Δ
228 � 4 First kind linear time-varying dynamic-algebraic equations = Mk (Pk+1 . . . Pν−1 − Qk + Qk+1 Qk )B− (Bx)Δ = Mk (Pk+1 . . . Pν−1 − Qk )B− (Bx)Δ
= (Mk Pk+1 . . . Pν−1 − Mk Qk )B− (Bx)Δ = (Mk Pk+1 . . . Pν−1 − Mk )B− (Bx)Δ = Mk (Pk+1 . . . Pν−1 − I)B− (Bx)Δ , i. e., Uk Gν−1 G0 B− (Bx)Δ = Mk (Pk+1 . . . Pν−1 − I)B− (Bx)Δ .
(4.33)
Note that we have Pk+1 . . . Pν−1 − I = −Qk+1 − Pk+1 Qk+2 − Pk+1 Pk+2 Qk+3 − ⋅ ⋅ ⋅ − Pk+1 ⋅ ⋅ ⋅ Pν−1 Qν−1 . Since Qj Mj = Qj , we get Pk+1 . . . Pν−1 − I = −Qk+1 Mk+1 − Pk+1 Qk+2 Mk+2
− Pk+1 Pk+2 Qk+3 Mk+3 − ⋅ ⋅ ⋅ − Pk+1 . . . Pν−2 Qν−1 Mν−1 .
Then (4.16) takes the form Uk Gν−1 G0 B− (Bx)Δ = Mk (−Qk+1 Mk+1 − Pk+1 Qk+2 Mk+2
− Pk+1 Pk+2 Qk+3 Mk+3 − ⋅ ⋅ ⋅ − Pk+1 . . . Pν−2 Qν−1 Mν−1 ).
(4.34)
For j > 0, we have BMj x = BMj B− Bx and then Δ
Δ
(BMj x)Δ = (BMj B− Bx) = (BMj B− ) Bσ x σ + BMj B− (Bx)Δ or Δ
BMj B− (Bx)Δ = (BMj x)Δ − (BMJ B− ) Bx. Since P0 = B − B
(4.35)
and Mj = P0 Mj = B− BMj , by (4.34), we find Uk Gν−1 G0 B− (Bx)Δ
= Mk (−Qk+1 B− BMk+1 B− (Bx)Δ − Pk+1 Qk+2 B− BMk+2 B− (Bx)Δ
− Pk+1 Pk+2 Qk+3 B− BMk+3 B− (Bx)Δ − ⋅ ⋅ ⋅ − Pk+1 . . . Pν−2 Qν−1 B− BMν−1 B− (Bx)Δ )
= Mk (−Qk+1 B− (Bvk+1 )Δ − Pk+1 Qk+2 B− (Bvk+2 )Δ
− Pk+1 Pk+2 Qk+3 B− (Bvk+3 )Δ − ⋅ ⋅ ⋅ − −Pk+1 . . . Pν−1 Qν−1 B− (Bvν−1 )Δ ) Δ
Δ
+ Mk (Qk+1 B− (BMk+1 B− ) Bσ x σ + Pk+1 Qk+2 B− (BMk+2 B− ) Bσ x σ Δ
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) Bσ x σ + ⋅ ⋅ ⋅ + Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) Bσ x σ ). Set Nkk+1 = −Mk Qk+1 B , −
Nkk+2 = −Mk Pk+1 Qk+2 B , −
Nkk+3 = −Mk Pk+1 Pk+2 Qk+3 B , . . . , Nkν−1 = −Mk Pk+1 . . . Pν−2 Qν−1 B . −
−
Therefore, ν−1
Uk Gν−1 G0 B− (Bx)Δ = ∑ Nkj (Bvj )Δ j=k+1
Δ
Δ
+ Mk (Qk+1 B− (BMk+1 B− ) Bx + Pk+1 Qk+2 B− (BMk+2 B− ) Bσ x σ Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) Bσ x σ + ⋅ ⋅ ⋅ Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) Bσ x σ ). Set Δ
Δ
J = Mk (Qk+1 B− (BMk+1 B− ) Bx + Pk+1 Qk+2 B− (BMk+2 B− ) Bσ x σ Δ
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) Bσ x σ + ⋅ ⋅ ⋅ + Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) Bσ x σ ).
Note that BM0 = 0 and ν−1
ν−1
j=0
j=1
Bx = BΠν−1 x + B ∑ Mj x = BΠν−1 x + B ∑ Mj x.
Then J takes the form Δ
J = Mk (Qk+1 B− (BMk+1 B− ) + Pk+1 Qk+2 B− (BMk+2 B− )
Δ
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) )Bσ Πσν−1 x σ Δ
+ Mk (Qk+1 B− (BMk+1 B− ) + Pk+1 Qk+2 B− (BMk+2 B− )
Δ
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) + ⋅ ⋅ ⋅
ν−1
Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) ) ∑ Bσ Mjσ x σ . j=0
Denote Δ
J1 = Mk (Qk+1 B− (BMk+1 B− ) + Pk+1 Qk+2 B− (BMk+2 B− )
Δ
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) )Bσ Πσν−1 x σ , Δ
J2 = Mk (Qk+1 B− (BMk+1 B− ) + Pk+1 Qk+2 B− (BMk+2 B− )
Δ
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
ν−1
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) ) ∑ Bσ Mjσ x σ . j=1
We have Mj B− BΠν−1 x = Mj P0 Πν−1 = Mj Πν−1 = 0. Hence, Δ
Δ
(BMj B− ) Bσ Πσν−1 x σ = −BMj B− (BΠν−1 x)Δ + (BMj B− BΠν−1 x) = −BMj B− (BΠν−1 x)Δ . Therefore, for J1 we get the following representation: Δ
Δ
J1 = Mk (Qk+1 B− (BMk+1 B− ) + Pk+1 Qk+2 B− (BMk+2 B− ) Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) )Bσ Πσν−1 x σ
= −Mk (Qk+1 B− BMk+1 B− + Pk+1 Qk+2 B− BMk+2 B− + Pk+1 Pk+2 Qk+3 B− BMk+3 B− + ⋅ ⋅ ⋅
+ Pk+1 . . . Pν−2 Qν−1 B− BMν−1 B− )(BΠν−1 x)Δ Bσ Πσν−1 x σ
= −Mk (Qk+1 Mk+1 + Pk+1 Qk+2 Mk+2 + Pk+1 Pk+2 Qk+3 Mk+3 + ⋅ ⋅ ⋅ + Pk+1 . . . Pν−2 Qν−1 Mν−1 )B− (BΠν−1 x)Δ Bσ Πσν−1 x σ
= −Mk (Qk+1 + Pk+1 Qk+2 + Pk+1 Pk+2 Qk+3 + ⋅ ⋅ ⋅ + Pk+1 . . . Pν−2 Qν−1 )B− (BΠν−1 x)Δ Bσ Πσν−1 x σ
= Mk (Pk+1 . . . Pν−1 − I)B− (BΠν−1 x)Δ Bσ Πσν−1 x σ
= Mk (Pk+1 . . . Pν−1 − I)B− (BΠν−1 x)Δ Bσ B−σ Bσ Πσν−1 x σ , i. e., J1 = Mk (Pk+1 . . . Pν−1 − I)B− (BΠν−1 x)Δ Bσ B−σ Bσ Πσν−1 x σ . Set Δ σ
K̃ k = Mk (Pk+1 . . . Pν−1 − I)B (BΠν−1 x) B . −
Thus, −σ σ J1 = K̃ kB u .
Next, Δ
J2 = Mk (Qk+1 B− (BMk+1 B− ) + Pk+1 Qk+2 B− (BMk+2 B− )
Δ
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) + ⋅ ⋅ ⋅
ν−1
Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) ) ∑ Bσ Mjσ x σ j=1
− Δ
= Mk (Qk+1 B (BMk+1 B ) + Pk+1 Qk+2 B− (BMk+2 B− ) −
Δ
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) + ⋅ ⋅ ⋅
ν−1
Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) )Bσ ∑ Mjσ B−σ Bσ Mjσ x σ . j=1
Let − Δ
− Δ
̃ M kj = −Mk (Qk+1 B (BMk+1 B ) + Pk+1 Qk+2 B (BMk+2 B ) −
−
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) )Bσ Mjσ B−σ Bσ . Therefore, ν−1
σ σ ̃ J2 = ∑ M kj Mj x . j=1
Note that
Δ
Δ
(BMj B− ) Bσ Mjσ B−σ = −BMj B− (BMj B− ) + (BMj B− BMj B− ) Δ
= (BMj B− ) − BMj B− (BMj B− )
Δ
Δ
and Δ
Δ
Δ
(BMi B− ) Bσ Mjσ B−σ = (BMi B− BMj B− ) − BMi B− (BMj B− ) Δ
= −BMi B− (BMj B− ) ,
i ≠ j.
Hence, for j > k, we have − Δ
− Δ
̃ M kj = −Mk (Qk+1 B (BMk+1 B ) + Pk+1 Qk+2 B (BMk+2 B ) −
−
Δ
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
Δ
σ + Pk+1 . . . Pj−1 Qj B− (BMj B− ) + Pk+1 . . . Pj Qj+1 B− (BMj+1 B− ) + ⋅ ⋅ ⋅ Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) )Bσ Mjσ B−σ Bσ Δ
Δ
= −Mk (−Qk+1 B− BMk+1 B− (BMj B− ) − Pk+1 Qk+2 B− BMk+2 B− (BMj B− ) Δ
− Pk+1 Pk+2 Qk+3 B− BMk+3 B− (BMj B− ) + ⋅ ⋅ ⋅ Δ
− Pk+1 . . . Pj−2 Qj−1 B− BMj−1 B− (BMj B− ) Δ
Δ
− Pk+1 . . . Pj−1 Qj B− BMj B− (BMj B− ) + Pk+1 . . . Pj−1 Qj (BMj B− ) Δ
− Pk+1 . . . Pj Qj+1 B− BMj+1 B− (BMj B− ) + ⋅ ⋅ ⋅ Δ
− Pk+1 . . . Pν−2 Qν−1 B− BMν−1 B− (BMj B− ) )Bσ
= Mk (Qk+1 + Pk+1 Qk+2 + ⋅ ⋅ ⋅ + Pk+1 . . . Pν−2 Qν−1 Δ
− Pk+1 . . . Pj−1 Qj )B− (BMj B− ) Bσ
Δ
= Mk (I − Pk+1 . . . Pν−1 − Pk+1 . . . Pj−1 Qj )B− (BMj B− ) Bσ . For j < k, we get − Δ
̃ M kj = −Mk (−Qk+1 B BMk+1 B (BMj B ) −
−
Δ
Δ
− Pk+1 Qk+2 B− BMk+2 B− (BMj B− ) − Pk+1 Pk+2 Qk+3 B− BMk+3 B− (BMj B− ) + ⋅ ⋅ ⋅ Δ
− Pk+1 . . . Pν−2 Qν−1 B− BMν−1 B− (BMj B− ) )Bσ
Δ
= Mk (Qk+1 + Pk+1 Qk+2 + ⋅ ⋅ ⋅ + Pk+1 . . . Pν−2 Qν−1 )B− (BMj B− ) Bσ Δ
= Mk (I − Pk+1 . . . Pν−1 )B− (BMj B− ) Bσ . Consequently, ν−1
−σ σ ̃ σ σ J = J1 + J2 = K̃ k B u + ∑ Mkj Mj x j=1
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
�
233
ν−1
−σ σ ̃ σ = K̃ k B u + ∑ Mkj vj j=1
and ν−1
Uk Gν− G0 B− (Bx)Δ = ∑ Nkj (Bvj )Δ j=k+1
ν−1
−σ σ ̃ σ + K̃ k B u + ∑ Mkj vj . j=1
4.5.4 Terms coming from Uk Gν−1 C σ x σ In this section, we will find representations of the terms Uk Gν−1 C σ x σ , using the decomposition ν−1
x = Πν−1 x + ∑ Mj x. j=0
We have ν−1
Uk Gν−1 Cx = Uk Gν−1 C σ (Πσν−1 x σ + ∑ Mjσ x σ ) j=0
ν−1
= Uk Gν−1 C σ Πσν−1 x σ + ∑ Uk Gν−1 C σ Mjσ x σ j=0
ν−1
= Mk Pk+1 . . . Pν−1 Gν−1 C σ Πσν−1 x σ + ∑ Mk Pk+1 . . . Pν−1 Gν−1 C σ Mjσ x σ . j=0
Set I1 = Mk Pk+1 . . . Pν−1 Gν−1 C σ Πσν−1 x σ ,
ν−1
I2 = ∑ Mk Pk+1 . . . Pν−1 Gν−1 C σ Mjσ x σ . j=0
Then Uk Gν−1 C σ x σ = I1 + I2 .
(4.36)
Denote −1 σ −σ
K̂ k = Mk Pk+1 . . . Pν−1 Gν C B
Hence,
.
I1 = Mk Pk+1 . . . Pν−1 Gν−1 C σ Πσν−1 x σ
= Mk Pk+1 . . . Pν−1 Gν−1 C σ P0σ Πσν−1 x σ = Mk Pk+1 . . . Pν−1 Gν−1 C σ B−σ Bσ Πσν−1 x σ σ = Mk Pk+1 . . . Pν−1 G−1 C σ B−σ uσ = K̂ ku , ν
i. e., σ I1 = K̂ ku .
(4.37)
Now, we consider I2 . We have ν−1
I2 = Mk Pk+1 . . . Pν−1 Gν−1 C σ M0σ x σ + Mk Pk+1 . . . Pν−1 Gν−1 C σ ∑ Mjσ x σ . j=0
Set −1 σ
Mk0 = Mk Pk+1 . . . Pν−1 Gν C ,
ν−1
J = Mk Pk+1 . . . Pν−1 Gν−1 C σ ∑ Mjσ x σ . j=0
Hence, I2 = Mk0 M0σ x σ + J = Mk0 uσ + J. We will simplify J. We have Mk Pk+1 . . . Pν−1 Gν−1 C σ Mjσ j
Δ
σ = Mk Pk+1 . . . Pν−1 (−(Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ − ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ ). i=1
We will consider the following cases: 1. Let j < k. Then Mk Pk+1 . . . Pν−1 (Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ = 0. Moreover, j
Δ
σ Mk Pk+1 . . . Pν−1 ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ i=1
j
Δ
σ = Mk Pk+1 . . . Pν−1 ∑(I − Qi − ⋅ ⋅ ⋅ − Qk−1 − Qk − Qk+1 − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ i=1
j
Δ
= ∑(Mk Pk+1 . . . Pν−1 − Mk Pk+1 . . . Pν−1 Qk )B− (BΠi B− ) Bσ Mjσ i=1
Δ
= ∑(Mk Pk+1 . . . Pν−1 − Mk Pk+1 . . . Pν−1 Qk )B− (BΠi B− ) Bσ Mjσ i=1 j
Δ
= ∑(Mk Pk+1 . . . Pν−1 − Mk Qk )B− (BΠi B− ) Bσ Mjσ i=1 j
Δ
= ∑(Mk Pk+1 . . . Pν−1 − Mk )B− (BΠi B− ) Bσ Mjσ i=1 j
Δ
= ∑ Mk (Pk+1 . . . Pν−1 − I)B− (BΠi B− ) Bσ Mjσ . i=1
Note that, for i < j, we have Δ
Δ
B− (BΠi B− ) Bσ Mjσ = B− (BΠi B− ) Bσ Mjσ B−σ Bσ Mjσ Δ
Δ
= B− (BΠi B− BMj B− ) Bσ Mjσ − B− BΠi B− (BMj B− ) Bσ Mjσ Δ
Δ
= B− (BMj B− ) Bσ Mjσ − Πi B− (BMj B− ) Bσ Mjσ Δ
= (I − Πi )B− (BMj B− ) Bσ Mjσ and using (3.47), we find (Pk+1 . . . Pν−1 − I)(I − Πi )
= Pk+1 . . . Pν−1 − I − Pk+1 . . . Pν−1 Πi + Πi
= Pk+1 . . . Pν−1 − I − Pk+1 . . . Pν−1 + Qi + Qi−1 Pi + Qi−2 Pi−1 Pi + ⋅ ⋅ ⋅ + Q0 P1 . . . Pi + Πi = −I + Πi + Q0 P1 . . . Pi + Q1 P2 . . . Pi + Q2 P3 . . . Pi + ⋅ ⋅ ⋅ + Qi−1 Pi + Qi = −I + P1 P2 . . . Pi + Q1 P2 . . . Pi + Q2 P3 . . . Pi + ⋅ ⋅ ⋅ + Qi−1 Pi + Qi = −I + P2 P3 . . . P − i + Q2 P3 . . . Pi + ⋅ ⋅ ⋅ + Qi−1 Pi + Qi + ⋅ ⋅ ⋅ = −I + Pi−1 Pi + Qi−1 Pi + Qi = −I + Pi + Qi = −I + I = 0. Therefore, for i < j, we have Δ
Mk (Pk+1 . . . Pν−1 − I)B− (BΠi B− ) Bσ Mjσ = 0. For i = j, we have Δ
Δ
B− (BΠj B− ) Bσ Mjσ = B− (BΠj B− ) Bσ Mjσ B−σ Bσ Mjσ Δ
Δ
= B− (BΠj B− BMj B− ) Bσ Mjσ − B− BΠj B− (BMj B− ) Bσ Mjσ Δ
= −Πj B− (BMj B− ) Bσ Mjσ and
236 � 4 First kind linear time-varying dynamic-algebraic equations (Pk+1 . . . Pν−1 − I)Πj
= Pk+1 . . . Pν−1 Πj − Πj
= Pk+1 . . . Pν−1 − Πj − Q0 P1 . . . Pj − Q1 P2 . . . Pj − ⋅ ⋅ ⋅ − Qj−1 Pj − Qj = Pk+1 . . . Pν−1 − P1 . . . Pj − Q1 P2 . . . Pj − ⋅ ⋅ ⋅ − Qj−1 Pj − Qj = Pk+1 . . . Pν−1 − Pj − Qj = Pk+1 . . . Pν−1 − I. Thus, Δ
Δ
Mk (Pk+1 . . . Pν−1 − I)B− (BΠj B− ) Bσ Mjσ = −Mk (Pk+1 . . . Pν−1 − I)B− (BMj B− ) Bσ Mjσ and Δ
Mk Pk+1 . . . Pν−1 Gν−1 CMj = −Mk (Pk+1 . . . Pν−1 − I)B− (BMj B− ) Bσ Mjσ . Set − Δ σ
Mkj = −Mk (Pk+1 . . . Pν−1 − I)B (BMj B ) B . −
Therefore, Mk Pk+1 . . . Pν−1 Gν−1 CMj = Mkj Mjσ x σ = Mkj vσj . 2.
Let j ≥ k. Then Mk Pk+1 . . . Pν−1 (Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ
= Mk Pk+1 . . . Pν−1 (Q1 + ⋅ ⋅ ⋅ + Qk−1 + Qk + Qk+1 + ⋅ ⋅ ⋅ + Qj )Mjσ
= Mk Pk+1 . . . Pν−1 Qk Mj + Mk Pk+1 . . . Pν−1 Qk+1 Mjσ + ⋅ ⋅ ⋅ + Mk Pk+1 . . . Pν−1 Qj Mjσ = Mk Qk Mjσ = Mk Mjσ .
Now, using the computations in the previous case, for j = k we get k
Δ
Mk Pk+1 . . . Pν−1 ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mkσ j=1
k−1
Δ
= Mk Pk+1 . . . Pν−1 ∑ (I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mkσ j=1
Δ
+ Mk Pk+1 . . . Pν−1 (I − Qk − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mkσ k−1
Δ
= Mk (Pk+1 . . . Pν−1 − 1) ∑ B− (BΠi B− ) Bσ Mkσ j=1
Δ
+ Mk (Pk+1 . . . Pν−1 − 1)B− (BΠi B− ) Bσ Mkσ
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2 k−1
Δ
= Mk (Pk+1 . . . Pν−1 − 1) ∑ (I − Πi )B− (BMk B− ) Bσ Mkσ j=1
Δ
− Mk (Pk+1 . . . Pν−1 − 1)B− (BMk B− ) Bσ Mkσ Δ
= −Mk (Pk+1 . . . Pν−1 − 1)B− (BMk B− ) Bσ Mkσ . Let − Δ σ
σ
Mkk = −Mk (Pk+1 . . . Pν−1 − 1)B (BMk B ) B Mk − Mk . −
Let j > k. Then j
Δ
Mk Pk+1 . . . Pν−1 ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ i=1
k−1
Δ
= Mk Pk+1 . . . Pν−1 ∑ (I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ i=1
j
Δ
+ Mk Pk+1 . . . Pν−1 ∑ (I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ i=k
k−1
Δ
= Mk (Pk+1 . . . Pν−1 − I) ∑ B− (BΠi B− ) Bσ Mjσ i=1
j
Δ
+ Mk (Pk+1 . . . Pν−1 − I) ∑ B− (BΠi B− ) Bσ Mjσ i=k
k−1
Δ
= Mk (Pk+1 . . . Pν−1 − I) ∑ (I − Πi )B− (BMj B− ) Bσ Mjσ i=1
j
Δ
+ Mk (Pk+1 . . . Pν−1 − I) ∑ (I − Πi )B− (BMj B− ) Bσ Mjσ i=k
Δ
− Mk (Pk+1 . . . Pν−1 − I)Πj B− (BMj B− ) Bσ Mjσ j
Δ
= Mk (Pk+1 . . . Pν−1 − I) ∑ (I − Πi )B− (BMj B− ) Bσ Mjσ i=k
Δ
− Mk (Pk+1 . . . Pν−1 − I)Πj B− (BMj B− ) Bσ Mjσ Δ
= Mk (Pk+1 . . . Pν−1 − I)(I − Πk )B− (BMj B− ) Bσ Mjσ j−1
Δ
+ Mk (Pk+1 . . . Pν−1 − I) ∑ (I − Πi )B− (BMj B− ) Bσ Mjσ i=k
Δ
− Mk (Pk+1 . . . Pν−1 − I)Πj B− (BMj B− ) Bσ Mjσ j−1
Δ
= Mk (Pk+1 . . . Pν−1 − I) ∑ (I − Πi )B− (BMj B− ) Bσ Mjσ i=k
238 � 4 First kind linear time-varying dynamic-algebraic equations Δ
− Mk (Pk+1 . . . Pν−1 − I)Πj B− (BMj B− ) Bσ Mjσ . Note that (Pk+1 . . . Pν−1 − I)(I − Πi )
= Pk+1 . . . Pν−1 − I + Πi − Pk+1 . . . Pν−1 Πi = Pk+1 . . . Pν−1 − I + Πi − Pk+1 . . . Pν−1
+ Q0 P1 . . . Pi + Q1 P2 . . . Pi + ⋅ ⋅ ⋅ + Qk Pk+1 . . . Pi
= −I + Πi + Q0 P1 . . . Pi + Q1 P2 . . . Pi + ⋅ ⋅ ⋅ + Qk Pk+1 . . . P − i = −I + P1 . . . Pi + Q1 P2 . . . Pi + ⋅ ⋅ ⋅ + Qk Pk+1 . . . Pi = −I + P2 . . . Pi + ⋅ ⋅ ⋅ + Qk Pk+1 . . . Pi = ⋅ ⋅ ⋅
= −I + Pk Pk+1 . . . Pi + Qk Pk+1 . . . Pi = −I + Pk+1 . . . Pi and (Pk+1 . . . Pν−1 − I)Πj
= Pk+1 . . . Pν−1 − I + (Pk+1 . . . Pν−1 − I)(Πj − I)
= Pk+1 . . . Pν−1 − I + I − Pk+1 . . . Pj = Pk+1 . . . Pν−1 − Pk+1 . . . Pj . Consequently, j
Δ
Mk Pk+1 . . . Pν−1 ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ i=1
j−1
Δ
= Mk ∑ (Pk+1 . . . Pi − I)B− (BMj B− ) Bσ Mjσ i=k
Δ
− Mk (Pk+1 . . . Pν−1 − Pk+1 . . . Pj )Πj B− (BMj B− ) Bσ Mjσ . Then Mk Pk+1 . . . Pν−1 Gν−1 C σ Mjσ j−1
Δ
= −Mk Mjσ − Mk ∑ (Pk+1 . . . Pi − I)B− (BMj B− ) Bσ Mjσ i=k
Δ
+ Mk (Pk+1 . . . Pν−1 − Pk+1 . . . Pj )Πj B− (BMj B− ) Bσ Mjσ j−1
Δ
= (−Mk − Mk ∑ (Pk+1 . . . Pi − I)B− (BMj B− ) Bσ i=k
Δ
+ Mk (Pk+1 . . . Pν−1 − Pk+1 . . . Pj )Πj B− (BMj B− ) Bσ )Mjσ .
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
�
239
Set j−1
− Δ σ
Mkj = −Mk − Mk ∑ (Pk+1 . . . Pi − I)B (BMj B ) B −
i=k
Δ
+ Mk (Pk+1 . . . Pν−1 − Pk+1 . . . Pj )Πj B− (BMj B− ) Bσ . Therefore, Mk Pk+1 . . . Pν−1 Gν−1 C σ Mjσ = Mkj vσj . From here, ν−1
I2 = Mk0 vσ0 + ∑ Mkj vσj j=1
and ν−1
σ σ Uk Gν−1 Cν = I1 + I2 = K̂ k u + ∑ Mkj vj .
(4.38)
j=1
We multiply the equation (4.21) by Uk and we find Uk Gν−1 G0 B− (Bx)Δ = Uk Gν−1 C σ x σ + Uk Gν−1 f . Hence, using (4.36) and (4.38), we arrive at ν−1
ν−1
σ ̃k B−σ uσ + ∑ M ̃ ∑ Nkj (Bvj )Δ + K kj vj j=0
j=k+1
ν−1
σ σ −1 = K̂ k u + ∑ Mkj vj + Uk Gν f , j=0
whereupon ν−1
ν−1
j=k+1
j=0
− σ −1 ̂ ̃ ∑ Nkj (Bvj )Δ + (K̃ k B − Kk ) + ∑ (Mkj − Mkj )vj + Uk Gν f .
Note that − Δ σ
̃ M kj − Mkj = Mk (I − Pk+1 . . . Pν−1 )B (BMj B ) B −
Δ
+ Mk (Pk+1 . . . Pν−1 − I)B− (BMj B− ) Bσ
=0
240 � 4 First kind linear time-varying dynamic-algebraic equations and − Δ σ
̃ M kk − Mkk = Mk (I − Pk+1 . . . Pν−1 )B (BMk B ) B −
Δ
+ Mk (Pk+1 . . . Pν−1 − I)B− (BMk B− ) Bσ + Mk
= Mk . Therefore, ν−1
ν−1
j=k+1
j=k+1
− σ −1 ̂ ̃ Mk vσk = ∑ Nkj (Bvj )Δ + (K̃ k B − Kk ) + ∑ (Mkj − Mkj )vj + Uk Gν f ,
(4.39)
vk = Mk vk . Since BMk x = BMk B− Bx and BMk B− = BΠk−1 B− − BΠk B− , and BMk B− x and Bx are in C 1 , we have that Bvk = BMk x is C 1 for any k ≥ 1.

4.5.5 Decoupling

In this section, we will use the notation from the previous sections in this chapter.

Theorem 4.3. Assume that the equation (4.1) is regular with tractability index ν on I and f is a sufficiently smooth function. Then x ∈ CB1 (I) solves (4.1) if and only if it can be written as

x = B− u + vν−1 + ⋅ ⋅ ⋅ + v1 + v0 ,
(4.40)
where u ∈ C 1 (I) solves the inherent equation (4.27) and vk ∈ C 1 (I) satisfies (4.39). Proof. If x ∈ C 1 (I) solves (4.1), then by the computations in the previous sections, we get (4.27) and (4.39). Now, we will prove the converse assertion. Note that the identity Bv0 = BM0 v0 = 0 implies that
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
� 241
Bx = BB− u + Bvν−1 + ⋅ ⋅ ⋅ + Bv1 = u + Bvν−1 + ⋅ ⋅ ⋅ + Bv1 ∈ C 1 (I), where we have used that u ∈ im BΠν−1 B− , i. e., u = BΠν−1 B− u and BB− u = BB− BΠν−1 B− u = BP0 Πν−1 B− u = BΠν−1 B− u = u. Now, using (3.58), (3.59), (3.60) and the decomposition (4.40), we find BΠν−1 x = BΠν−1 B− u + BΠν−1 vν−1 + ⋅ ⋅ ⋅ + BΠν−1 v1 + BΠν−1 v0
= u + BΠν−1 Mν−1 vν−1 + ⋅ ⋅ ⋅ + BΠν−1 M1 v1 + BΠν−1 M0 v0 = u,
i. e., u = BΠν−1 x. Now, we multiply (4.40) by Mk and we obtain Mk x = Mk B− u + Mk vν−1 + ⋅ ⋅ ⋅ + Mk vk + ⋅ ⋅ ⋅ + Mk v1 + Mk v0
= Mk B− u + Mk Mν−1 vν−1 + ⋅ ⋅ ⋅ + Mk Mk vk + ⋅ ⋅ ⋅ + Mk M1 v1 + Mk M0 v0 = Mk B− u + vk = Mk B− BΠν−1 B− u + vk = Mk Πν−1 B− u + vk = vk .
Note that the inherent equation (4.27) is restated by the equation (4.21). Then we multiply (4.21) by BΠν−1 and we find BΠν−1 Gν−1 G0 B− (Bx)Δ = BΠν−1 Gν−1 C σ x σ + BΠν−1 Gν−1 f , which we premultiply by B− and using that B− BΠν−1 = Πν−1 , we find B− BΠν−1 Gν−1 G0 B− (Bx)Δ = B− BΠν−1 Gν−1 C σ x σ + B− BΠν−1 Gν−1 f , or Πν−1 Gν−1 G0 B− (Bx)Δ = Πν−1 Gν−1 C σ x σ + Πν−1 Gν−1 f , from where, using (4.39) and the computations in the previous sections, we get
(4.41)
242 � 4 First kind linear time-varying dynamic-algebraic equations Uk Gν−1 A(Bx)Δ = Uk Gν−1 C σ x σ + Uk Gν−1 f ,
k = ν − 1, . . . , 0.
(4.42)
Since Qk M k = Qk , we obtain Qk U − k = Qk Mk Pk+1 . . . Pν−1 = Qk Pk+1 . . . Pν−1 = Vk . We multiply (4.42) by Qk and we find Qk Uk Gν−1 A(Bx)Δ = Qk Uk Gν−1 C σ x σ + Qk Uk Gν−1 f ,
k = ν − 1, . . . , 0,
or Vk Gν−1 Aσ (Bx)Δ = Vk Gν−1 C σ x σ + Vk Gν−1 f ,
k = ν − 1, . . . , 0.
(4.43)
Note that ν−1
I = Πν−1 + ∑ Vk . k=0
Then, by (4.41) and (4.43), we get ν−1
ν−1
ν−1
k=0
k=0
k=0
(Πν−1 + ∑ Vk )Gν−1 Aσ (Bx)Δ = (Πν−1 + ∑ Vk )Gν−1 C σ x σ + (Πν−1 + ∑ Vk )Gν−1 f or Gν−1 Aσ (Bx)Δ = Gν−1 C σ x σ + Gν−1 f , or Aσ (Bx)Δ = C σ x σ + f . This completes the proof. Example 4.4. Let 𝕋 = 2ℕ0 and 1 0 A(t) = (0 0 0
0 g(t) 0 0 0
0 0 1) , 0 0
1 B(t) = (0 0
0 h(t) 0
0 0 1
0 0 0
0 0) , 0
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
0 0 C(t) = ( 0 −1 1
0 0 −1 1 0
0 1 0 0 0
−1 1 0 0 0
1 0 0 ), 0 l(t)
t ∈ 𝕋,
where g, h, l ∈ C (𝕋), g(t) ≠ 0, h(t) ≠ 0, l(t) ≠ 0, t ∈ 𝕋. We have σ(t) = 2t,
t ∈ 𝕋,
and 1 0 Aσ (t) = (0 0 0
0 g(2t) 0 0 0
0 0 1) , 0 0
1 0 G0 (t) = Aσ (t)B(t) = (0 0 0 1 0 = (0 0 0
0 g(2t) 0 0 0
0 0 1 1 ) (0 0 0 0
0 0 1 0 0
0 0 0) , 0 0
t ∈ 𝕋.
y1 (t) y2 (t) y(t) = (y3 (t)) ∈ ℝ5 , y4 (t) y5 (t)
t ∈ 𝕋,
0 h(t)g(2t) 0 0 0
0 0 0 0 0
0 h(t) 0
Let
so that G0 (t)y(t) = 0, We have
t ∈ 𝕋.
0 0 1
0 0 0
0 0) 0
�
243
244 � 4 First kind linear time-varying dynamic-algebraic equations 0 1 0 0 (0) = (0 0 0 0 0
0 h(t)g(2t) 0 0 0
0 0 1 0 0
0 0 0 0 0
y1 (t) g(2t)h(t)y2 (t) =( ), y3 (t) 0 0
0 y1 (t) 0 y2 (t) 0) (y3 (t)) 0 y4 (t) 0 y5 (t)
t ∈ 𝕋,
whereupon y1 (t) = 0, y2 (t) = 0, y3 (t) = 0,
t ∈ 𝕋.
Take 0 0 Q0 (t) = (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 r1 (t) r3 (t)
0 0 0 ), r2 (t) r4 (t)
t ∈ 𝕋.
We will find r1 , r2 , r3 and r4 so that Q(t) = Q(t)Q(t),
t ∈ 𝕋.
We have 0 0 (0 0 0
0 0 0 0 0
0 0 = (0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 r1 (t) r3 (t) 0 0 0 0 0
0 0 0 ) r2 (t) r4 (t) 0 0 0 r1 (t) r3 (t)
0 0 0 0 0 ) (0 r2 (t) 0 r4 (t) 0
0 0 0 0 0
0 0 0 0 0
0 0 0 r1 (t) r3 (t)
0 0 0 ) r2 (t) r4 (t)
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
0 0 = (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 2 (r1 (t)) + r2 (t)r3 (t) r3 (t)(r1 (t) + r4 (t))
0 0 ), 0 r2 (t)(r1 (t) + r4 (t)) r2 (t)r3 (t) + (r4 (t))2
�
t ∈ 𝕋,
whereupon 2
r1 (t) = (r1 (t)) + r2 (t)r3 (t),
r2 (t) = r2 (t)(r1 (t) + r4 (t)),
r3 (t) = r3 (t)(r1 (t) + r4 (t)),
2
r4 (t) = r2 (t)r3 (t) + (r4 (t)) ,
t ∈ 𝕋.
Take r1 (t) = 1,
r2 (t) = 0,
r3 (t) = 0,
r4 (t) = 1,
t ∈ 𝕋.
Hence, 0 0 Q0 (t) = M0 (t) = (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0) , 0 1
t ∈ 𝕋,
and 1 0 P0 (t) = Π0 (t) = I − M0 (t) = (0 0 0 1 0 = (0 0 0 Next,
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0) , 0 0
0 1 0 0 0
0 0 1 0 0
t ∈ 𝕋.
0 0 0 1 0
0 0 0 0 0) − (0 0 0 1 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0) 0 1
245
246 � 4 First kind linear time-varying dynamic-algebraic equations G1 (t) = G0 (t) + C0 (t)Q0 (t) 1 0 = (0 0 0
0 h(t)g(2t) 0 0 0
0 0 +(0 −1 1
0 0 −1 1 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0) 0 0 1 0 0 0 ) ( 0 0 0 0 l(t) 0
−1 1 0 0 0
1 0 = (0 0 0
0 h(t)g(2t) 0 0 0
0 0 1 0 0
0 0 0 0 0
1 0 = (0 0 0
0 h(t)g(2t) 0 0 0
0 0 1 0 0
−1 1 0 0 0
0 0 0 0 0
0 0 0 0 0 ) + (0 0 0 0 0 1 0 0 ), 0 l(t)
0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0
0 0 0) 0 1 −1 1 0 0 0
1 0 0 ) 0 l(t)
t ∈ 𝕋.
We will search y1 (t) y2 (t) y(t) = (y3 (t)) ∈ ℝ5 , y4 (t) y5 (t)
t ∈ 𝕋,
such that G1 (t)y(t) = 0,
t ∈ 𝕋.
0 h(t)g(2t) 0 0 0
−1 1 0 0 0
We have 0 1 0 0 (0) = (0 0 0 0 0
0 0 1 0 0
1 y1 (t) 0 y2 (t) 0 ) (y3 (t)) 0 y4 (t) l(t) y5 (t)
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
y1 (t) − y4 (t) + y5 (t) h(t)g(2t)y2 (t) + y4 (t) =( ), y3 (t) 0 l(t)y5 (t)
t ∈ 𝕋,
whereupon y1 (t) − y4 (t) + y5 (t) = 0, h(t)g(2t)y2 (t) + y4 (t) = 0, y3 (t) = 0, y5 (t) = 0,
t ∈ 𝕋,
or y1 (t) = y4 (t), y1 (t) = −h(t)g(2t)y2 (t), y3 (t) = 0, y5 (t) = 0,
t ∈ 𝕋.
Take 0 0 Q1 (t) = (0 0 0
0 0 0 0 0
0 0 0 0 0
−h(t)g(2t)r1 (t) r1 (t) 0 −h(t)g(2t)r1 (t) 0
−h(t)g(2t)r2 (t) r2 (t) ), 0 −h(t)g(2t)r2 (t) 0
We will search r1 and r2 so that Q1 (t) = Q1 (t)Q1 (t),
t ∈ 𝕋.
We have 0 0 Q1 (t) = (0 0 0
0 0 0 0 0
0 0 0 0 0
−h(t)g(2t)r1 (t) r1 (t) 0 −h(t)g(2t)r1 (t) 0
−h(t)g(2t)r2 (t) r2 (t) ) 0 −h(t)g(2t)r2 (t) 0
t ∈ 𝕋.
� 247
248 � 4 First kind linear time-varying dynamic-algebraic equations 0 0 = (0 0 0
0 0 0 0 0
0 0 × (0 0 0 0 0 = (0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0
−h(t)g(2t)r1 (t) r1 (t) 0 −h(t)g(2t)r1 (t) 0 0 0 0 0 0
0 0 0 0 0
−h(t)g(2t)r2 (t) r2 (t) ) 0 −h(t)g(2t)r2 (t) 0
−h(t)g(2t)r1 (t) r1 (t) 0 −h(t)g(2t)r1 (t) 0
−h(t)g(2t)r2 (t) r2 (t) ) 0 −h(t)g(2t)r2 (t) 0
(h(t))2 (g(2t))2 (r1 (t))2 −h(t)g(2t)(r1 (t))2 0 (h(t))2 (g(2t))2 (r1 (t))2 0
(h(t))2 (g(2t))2 r1 (t)r2 (t) −h(t)g(2t)r1 (t)r2 (t) ), 0 (h(t))2 (g(2t))2 r1 (t)r2 (t) 0
from where 2
2
2
2
2
−h(t)g(2t)r1 (t) = (h(t)) (g(2t)) (r1 (t)) , −h(t)g(2t)r2 (t) = (h(t)) (g(2t)) r1 (t)r2 (t), 2
r1 (t) = −h(t)g(2t)(r1 (t)) , r2 (t) = −h(t)g(2t)r1 (t)r2 (t), 2
2
2
2
2
−h(t)g(2t)r1 (t) = (h(t)) (g(2t)) (r1 (t)) , −h(t)g(2t)r2 (t) = (h(t)) (g(2t)) r1 (t)r2 (t),
t ∈ 𝕋,
or r1 (t) = −
1 , h(t)g(2t)
t ∈ 𝕋.
Let 0 0 Q1 (t) = (0 0 (0
and
0 0 0 0 0
0 0 0 0 0
1 1 − h(t)g(2t) 0 1 0
0 0
0) , 0 0)
t ∈ 𝕋,
t ∈ 𝕋,
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
1 0 Π1 (t) = I − Q1 (t) = (0 0 0 1 0 = (0 0 (0
0 1
0 0 0
0 0
1 0 0
0 1 0 0 0 −1
0 0 1 0 0
1 h(t)g(2t)
0 0 0
0 0 0 0 ( 0) − 0 0 0 1 (0
0 0 0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
1 1 − h(t)g(2t) 0 1 0
0 0
0) 0 0)
t ∈ 𝕋.
0) , 0 0)
We will find a vector y1 (t) y(t) = (y2 (t)) , y3 (t)
t ∈ 𝕋,
so that A(t)y(t) = 0,
t ∈ 𝕋.
We have 0 1 0 0 (0) = (0 0 0 0 0
0 g(t) 0 0 0
0 y1 (t) 0 y1 (t) g(t)y2 (t) 1 ) (y2 (t)) = ( y3 (t) ) , 0 y3 (t) 0 0 0
whereupon y1 (t) = 0,
y2 (t) = 0,
y3 (t) = 0,
t ∈ 𝕋,
and 1 R(t) = (0 0
0 1 0
0 0) , 1
t ∈ 𝕋,
is a projector along ker A(t), t ∈ 𝕋. Now, we will search a matrix
�
t ∈ 𝕋,
249
250 � 4 First kind linear time-varying dynamic-algebraic equations b11 (t) b21 (t) B− (t) = (b31 (t) b41 (t) b51 (t)
b12 (t) b22 (t) b32 (t) b42 (t) b52 (t)
b13 (t) b23 (t) b33 (t)) , b43 (t) b53 (t)
B(t)B− (t) = R(t),
B− (t)B(t) = P0 (t),
t ∈ 𝕋,
so that
B(t)B− (t)B(t) = B(t),
B− (t)B(t)B− (t) = B− (t),
t ∈ 𝕋.
We have 1 R(t) = B(t)B− (t) = (0 0 b11 (t) = (h(t)b21 (t) b31 (t)
0 1 0
0 1 0) = (0 1 0
b12 (t) h(t)b22 (t) b32 (t)
0 h(t) 0
b13 (t) h(t)b23 (t)) , b33 (t)
0 0 1
0 0 0
b11 (t) 0 b21 (t) 0) (b31 (t) 0 b41 (t) b51 (t)
b12 (t) b22 (t) b32 (t) b42 (t) b52 (t)
b13 (t) b23 (t) b33 (t)) b43 (t) b53 (t)
t ∈ 𝕋.
Then b11 (t) = 1,
b12 (t) = 0, b13 (t) = 0, b21 (t) = 0, 1 b22 (t) = , b23 (t) = 0, b31 (t) = 0, b32 (t) = 0, h(t)
b33 (t) = 1,
and 1 0 B− (t) = ( 0 b41 (t) (b51 (t)
0
1 h(t)
0 b42 (t) b52 (t)
0 0 1 ), b43 (t) b53 (t))
Now, we compute 1 0 P0 (t) = (0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0) = B− (t)B(t) 0 0
t ∈ 𝕋.
t ∈ 𝕋,
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
1 0 =( 0 b41 (t) (b51 (t)
0
0 0 1 ) (0 1 0 b43 (t) b53 (t))
1 h(t)
0 b42 (t) b52 (t)
1 0 =( 0 tb41 (t) tb51 (t)
0 1 0
0 0 1 b43 (t) b53 (t)
t 2 b42 (t) t 2 b52 (t)
0 0 0 0 0
0 h(t) 0 0 0 0) , 0 0
0 0 1
0 0 0
� 251
0 0) 0
t ∈ 𝕋.
So, b41 (t) = 0,
b42 (t) = 0,
b43 (t) = 0,
b51 (t) = 0,
b52 (t) = 0,
b53 (t) = 0,
t ∈ 𝕋,
and 1 0 B− (t) = (0 0 (0
0
1 h(t)
0 0 0
0 0 1) , 0 0)
t ∈ 𝕋.
Note that 1 0 B− (t)B(t)B− (t) = (0 0 (0 1 0 = (0 0 (0
1 B(t)B− (t)B(t) = ( 1 0
0
1 h(t)
0 0 0
0
1 h(t)
0 0 0
0 h(t) 0
0 0 1 1) (1 0 0 0) 0 0 1 1 ) (0 0 0 0) 0 0 1
0 0 0
0 h(t) 0
0 1 0
0 0 1
0 0 0
1 0 0 0 ) (0 0 0 (0
1 0 0 0 ) = (0 1 0 (0
1 0 0 0) (0 0 0 (0
0
1 h(t)
0 0 0
0
1 h(t)
0 0 0
0 0 1 1) (1 0 0 0)
0
1 h(t)
0 0 0
0 0 1) 0 0)
0 0 − 1 ) = B (t), 0 0) 0 h(t) 0
0 0 1
0 0 0
0 0) 0
252 � 4 First kind linear time-varying dynamic-algebraic equations 1 = (1 0
0 h(t) 0
= B(t),
0 0 1
t ∈ 𝕋.
0 0 0
0 1 0) (0 0 0
0 1 0
0 1 0) = ( 1 1 0
0 1 0 0 0
0 0 1 0 0
0 h(t) 0
0 0 1
0 0 0
0 0) 0
Moreover, 1 B(t)Π0 (t)B− (t) = ( 1 0
0 h(t) 0
0 0 1
0 0 0
1 0 0 0 ) (0 0 0 0
1 = (0 0
0 h(t) 0
0 0 1
0 0 0
1 0 0 0 ) (0 0 0 (0
0
1 h(t)
0 0 0
1 0 0 0 0) (0 0 0 0 (0
0 0 0 0 0
0 0 1 ) = ( 0 1 0 0 0)
0
0 0 1) 0 0)
1 h(t)
0 0 0
0 1 0
0 0) , 1
t ∈ 𝕋.
Therefore, Δ
(BΠ0 B− ) (t) = 0,
t ∈ 𝕋.
Next, 1 0 M1 (t) = Π0 (t) − Π1 (t) = (0 0 0 1 0 = (0 0 (0
0 1
0 0 0
0 0
1 0 0
−1
1 h(t)g(2t)
0 −1 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0
0) , 0 0)
1 0 0 0 0) − (0 0 0 0 (0
0 1
0 0 0
0 0 1 0 0
−1
1 h(t)g(2t)
0 0 0
0 0
0) 0 0)
t ∈ 𝕋,
and B(t)Π1 (t)B− (t) 1 = (0 0
0 h(t) 0
0 0 1
0 0 0
1 0 0 0 ) (0 0 0 (0
0 1
0 0 0
0 0 1 0 0
−1
1 h(t)g(2t)
0 0 0
0 0
1 0 0) (0 0 0 0) (0
0
1 h(t)
0 0 0
0 0 1) 0 0)
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
1 = (0 0
0 h(t) 0
0 0 1
0 0 0
1 0 0 0 ) (0 0 0 (0
0
1 h(t)
0 0 0
0 0 1 ) = (0 1 0 0 0)
0 1 0
�
0 0) , 1
whereupon Δ
(BΠ1 B− ) (t) = 0,
t ∈ 𝕋.
Let c11 (t) c21 (t) C1 (t) = (c31 (t) c41 (t) c51 (t)
c12 (t) c22 (t) c32 (t) c42 (t) c52 (t)
c13 (t) c23 (t) c33 (t) c43 (t) c53 (t)
c14 (t) c24 (t) c34 (t) c44 (t) c54 (t)
c15 (t) c25 (t) c35 (t)) . c45 (t) c55 (t)
Hence, C1σ (t)Πσ1 (t)
= (C0σ (t) + C1 (t)M1 (t))Πσ0 (t) 0 0 =(0 −1 1
0 0 −1 1 0
0 1 0 0 0
−1 1 0 0 0
1 1 0 0 0 ) (0 0 0 l(2t) 0
0 1 0 0 0
1 c15 (t) 0 c25 (t) c35 (t)) (0 c45 (t) 0 c55 (t) (0
c11 (t) c21 (t) + (c31 (t) c41 (t) c51 (t)
c12 (t) c22 (t) c32 (t) c42 (t) c52 (t)
c13 (t) c23 (t) c33 (t) c43 (t) c53 (t)
1 0 × (0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0) 0 0
0 0 −1 1 0
0 1 0 0 0
0 0 0 0 0
0 0 0) 0 0
0 0 =(0 −1 1
c14 (t) c24 (t) c34 (t) c44 (t) c54 (t)
0 0 1 0 0
0 0 0 0 0
0 0 0) 0 0 0 1
0 0 0
0 0 1 0 0
−1
1 h(t)g(2t)
0 −1 0
0 0
0) 0 0)
253
254 � 4 First kind linear time-varying dynamic-algebraic equations c11 (t) c21 (t) + (c31 (t) c41 (t) c51 (t) 0 0 =(0 −1 1
0 0 −1 1 0
c11 (t) c21 (t) = ( c31 (t) c41 (t) − 1 c51 (t) + 1 c11 (2t) c21 (2t) = (c31 (2t) c41 (2t) c51 (2t)
c12 (t) c22 (t) c32 (t) c42 (t) c52 (t)
c13 (t) c23 (t) c33 (t) c43 (t) c53 (t)
0 1 0 0 0
0 c11 (t) 0 c21 (t) 0) + (c31 (t) 0 c41 (t) 0 c51 (t)
0 0 0 0 0
c12 (t) c22 (t) c32 (t) − 1 c42 (t) + 1 c52 (t) c12 (2t) c22 (2t) c32 (2t) c42 (2t) c52 (2t)
c14 (t) c24 (t) c34 (t) c44 (t) c54 (t)
c13 (t) c23 (t) + 1 c33 (t) c43 (t) c53 (t)
c13 (2t) c23 (2t) c33 (2t) c43 (2t) c53 (2t)
c14 (2t) c24 (2t) c34 (2t) c44 (2t) c54 (2t)
c11 (t)
c12 (t)
c13 (t)
(−c11 (t) +
c21 (t) ( ( = (c31 (t) c41 (t)
c22 (t)
c23 (t)
(−c21 (t) +
c32 (t)
c33 (t)
(−c31 (t) +
c42 (t)
c43 (t)
(−c41 (t) +
(c51 (t)
c52 (t)
c53 (t)
(−c51 (t) +
c15 (t) 1 c25 (t) 0 c35 (t)) (0 c45 (t) 0 c55 (t) 0 c12 (t) c22 (t) c32 (t) c42 (t) c52 (t) 0 0 0 0 0
0 1 0 0 0
c13 (t) c23 (t) c33 (t) c43 (t) c53 (t)
0 0 1 0 0 0 0 0 0 0
0 0 0 0 0
0 0 0) 0 0
0 0 0) 0 0
0 0 0) 0 0 1 c15 (2t) 0 c25 (2t) c35 (2t)) (0 c45 (2t) 0 c55 (2t) (0
c12 (2t) ) h(2t)g(4t) c22 (2t) ) h(2t)g(4t) c32 (2t) ) h(2t)g(4t) c42 (2t) ) h(2t)g(4t) c52 (2t) ) h(2t)g(4t)
0 1
0 0 0
0 0 1 0 0
0
0 ) 0) ), 0
t ∈ 𝕋.
0)
From here, c11 (t) = c11 (2t), c12 (t) = c12 (2t), c13 (t) = c13 (2t), c12 (2t) c11 (t) = , h(2t)g(4t) c21 (t) = c21 (2t), c22 (t) = c22 (2t), c23 (t) = c23 (2t), c22 (2t) c21 (t) = , c31 (t) = c31 (2t), c32 (t) = c32 (2t), h(2t)g(4t) c32 (2t) c33 (t) = c33 (2t), c31 (t) = , c41 (t) = c41 (2t), h(2t)g(4t) c42 (2t) c42 (t) = c42 (2t), c43 (t) = c43 (2t), c41 (t) = , h(2t)g(4t) c51 (t) = c51 (2t), c52 (t) = c52 (2t), c53 (t) = c53 (2t),
−1
1 h(2t)g(4t)
0 0 0
0 0
0) 0 0)
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
c51 (t) =
c52 (2t) , h(2t)g(4t)
255
�
t ∈ 𝕋.
Take c44 (t) =
1 , h(t)g(2t)l(t)
t ∈ 𝕋.
Then c11 (t) c21 (t) C1 (t)Q1 (t) = (c31 (t) c41 (t) c51 (t) 0 0 = (0 0 (0
0 0 0 0 0
c12 (t) c22 (t) c32 (t) c42 (t) c52 (t) 0 0
c13 (t) c23 (t) c33 (t) c43 (t) c53 (t)
c11 (t) c22 (t) − h(t)g(2t)
0 0 0
0 c44 (t) 0
c14 (t) c24 (t) c34 (t) c44 (t) c54 (t) 0 0
0) , 0 0)
0 c15 (t) 0 c25 (t) ( c35 (t)) 0 c45 (t) 0 c55 (t) (0
0 0
0 0
0 0 0
0 0 0
1 1 − h(t)g(2t) 0 1 0
t ∈ 𝕋,
and G2 (t) = G1 (t) + C1 (t)Q1 (t)
0 1 0 0 0 ) + (0 0 0 l(t) (0
1 0 = (0 0 0
0 h(t)g(2t) 0 0 0
0 0 1 0 0
−1 1 0 0 0
1 0 = (0
0 h(t)g(2t)
0 0
c11 (t) − 1 c22 (t) 1 − h(t)g(2t)
0 (0
0 0 0
1 0 0
0 c44 (t) 0
0 0 0 0 0
0 0
c22 (t) − h(t)g(2t)
0 0
0 c44 (t) 0
0) 0 0)
1 = 1 ≠ 0, h(t)g(2t)l(t)
t ∈ 𝕋.
1 0
0 ), 0 l(t))
0 0 0
c11 (t)
t ∈ 𝕋.
We compute det G2 (t) = h(t)g(2t)l(t)c44 (t) = h(t)g(2t)l(t)
Now, we will compute the cofactors of G2 (t), t ∈ 𝕋. We have
0 0
0) 0 0)
256 � 4 First kind linear time-varying dynamic-algebraic equations c22 (t) c22 (t) h(t)g(2t) 0 1 − h(t)g(2t) 0 0 1 − h(t)g(2t) 0 0 0 1 0 1 0 0 0 0 g11 (t) = = 1, g12 (t) = − = 0, 1 1 0 0 0 0 l(t)h(t)g(2t) 0 0 l(t)h(t)g(2t) 0 0 0 0 0 l(t) 0 l(t) c22 (t) 0 h(t)g(2t) 0 0 0 h(t)g(2t) 1 − h(t)g(2t) 0 0 0 0 1 0 0 0 0 g13 (t) = = 0, g (t) = − = 0, 14 1 0 0 0 0 0 0 0 l(t)h(t)g(2t) 0 0 0 l(t) 0 0 0 l(t) c22 (t) 0 0 c (t) − 1 0 0 h(t)g(2t) 0 1 − h(t)g(2t) 11 0 1 0 0 0 0 1 0 = 0, g15 (t) = = 0, g21 (t) = − 1 1 0 0 0 0 l(t)h(t)g(2t) 0 0 l(t)h(t)g(2t) 0 0 0 l(t) 0 0 0 0 1 0 c (t) − 1 1 0 c (t) − 1 0 0 11 11 0 1 0 0 0 0 0 0 1 = 0, g22 (t) = = , g (t) = − 1 1 0 0 23 0 h(t)g(2t) 0 0 0 l(t)h(t)g(2t) l(t)h(t)g(2t) 0 0 0 0 0 l(t) 0 l(t) 1 0 0 c (t) − 1 1 0 0 1 11 0 0 1 0 0 1 0 0 = 0, g24 (t) = = 0, g25 (t) = − 1 0 0 0 0 0 0 0 l(t)h(t)g(2t) 0 0 0 0 0 0 0 l(t) 0 0 c11 (t) − 1 1 h(t)g(2t) 0 1 − c22 (t) 0 h(t)g(2t) = 0, g31 (t) = 1 0 0 l(t)h(t)g(2t) 0 0 0 0 l(t) 1 0 c (t) − 1 1 1 0 c11 (t) − 1 0 11 c (t) c (t) 0 0 1 − 22 0 h(t)g(2t) 1 − 22 0 0 h(t)g(2t) h(t)g(2t) = 1, g32 (t) = − = 0, g33 (t) = 1 1 0 0 0 0 0 0 l(t)h(t)g(2t) l(t)h(t)g(2t) 0 0 0 0 l(t) 0 0 l(t) 1 0 0 c11 (t) − 1 1 0 0 1 0 h(t)g(2t) 0 0 0 h(t)g(2t) 0 1 − c22 (t) h(t)g(2t) g34 (t) = − = 0, g35 (t) = = 0, 1 0 0 0 0 0 0 0 l(t)h(t)g(2t) 0 0 0 0 l(t) 0 0 0 0 0 c11 (t) − 1 1 h(t)g(2t) 0 1 − c22 (t) 0 h(t)g(2t) = 0, g41 (t) = − 0 1 0 0 0 0 0 l(t)
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
1 0 c (t) − 1 1 11 c (t) 22 0 0 0 1 − h(t)g(2t) = 0, g42 (t) = 0 1 0 0 0 0 0 l(t) 1 0 c11 (t) − 1 0 0 h(t)g(2t) 1 − c22 (t) 0 h(t)g(2t) = 0, g43 (t) = − 0 0 0 0 0 0 0 l(t) 1 0 0 1 0 h(t)g(2t) 0 0 = l(t)h(t)g(2t), g44 (t) = 0 0 1 0 0 0 0 l(t) 1 0 0 c11 (t) − 1 c22 (t) 0 h(t)g(2t) 0 1 − h(t)g(2t) = 0, g45 (t) = − 0 0 1 0 0 0 0 0 0 0 c (t) − 1 1 1 0 c11 (t) − 1 11 c22 (t) 0 0 1 − c22 (t) h(t)g(2t) 0 1 − h(t)g(2t) 0 h(t)g(2t) g51 (t) = = 0, g52 (t) = − 0 1 0 1 0 0 0 1 1 0 0 l(t)h(t)g(2t) 0 0 0 l(t)h(t)g(2t) 0 c11 (t) − 1 1 1 h(t)g(2t) 1 − c22 (t) 0 c (t) h(t)g(2t) 0 h(t)g(2t) 1 − 22 0 h(t)g(2t) = g53 (t) = 1 0 0 0 1 0 0 1 0 0 1 l(t)h(t)g(2t) 0 0 0 l(t)h(t)g(2t) c22 (t) 1 = (1 − ), l(t)h(t)g(2t) h(t)g(2t) 1 0 0 1 0 h(t)g(2t) 0 0 = 0, g54 (t) = − 0 0 1 0 0 0 0 0 1 0 0 c11 (t) − 1 c22 (t) 0 h(t)g(2t) 0 1 − h(t)g(2t) = 1 , t ∈ 𝕋. g55 (t) = l(t) 0 0 1 0 1 0 0 0 l(t)h(t)g(2t)
� 257
1 0 = 0, 0 0
258 � 4 First kind linear time-varying dynamic-algebraic equations Consequently, 1 0
0
1 h(t)g(2t)
G2−1 (t) = (0
0
0 (0
0 0
0 0
0 0
1
0 0
1 (1 l(t)h(t)g(2t)
0
0 0
l(t)h(t)g(2t) 0
0
−
c22 (t) ) ) , h(t)g(2t)
1 l(t)
t ∈ 𝕋.
)
Hence, B(t)Π1 (t)G2−1 (t) 1 = (0 0
0 h(t) 0
0 0 1
1 0
0
1 0 0 0) (0
0 0 0
0 0 0
1 h(t)g(2t)
× (0
0
0 (0
1
0 0
0 0
0 (0
1 0 0 0) (0
0 0 0
0
1 = (0 0
0
0 0
0 0
0 0) ,
0 0 C(t)B− (t) = ( 0 −1 1
0
0
0 0 −1 1 0
0 1 0 0 0
0 0 σ −σ ( 0 C (t)B (t) = −1 (1
0 0
1 − h(2t) 1 h(2t)
0
0 (0
−1 1 0 0 0
1 h(t)g(2t)
0 0 0
0) 0 0)
0 0
1 (1 l(t)h(t)g(2t)
0
1 h(t)g(2t)
0 0 0
1 1 0 0 0 ) (0 0 0 l(t) (0
0 1 0) , 0 0)
0 0
−1
1 0 0
l(t)h(t)g(2t) 0
0 0 1
1
0 0 0
0
0 h(t) 0
0
0 0
0 0
1 = (0 0
1 g(2t)
0 1
0
−
c22 (t) ) ) h(t)g(2t)
1 l(t)
0 0
)
−l(t)h(t)g(2t) 1 (h(t))2 (g(2t))2
1 0 0
0 0 0
0
1 h(t)
0 0 0
0 0
0) 0 0)
0 0 0 0 ) ( 0 = 1 0 −1 0) ( 1
0 0
1 − h(t) 1 h(t)
0
0 1 0) ,
0 0)
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
�
259
B(t)Π1 (t)G2−1 (t)C σ (t)B−σ (t) 1 = (0 0
0
1 g(2t)
0
0 0 1
0 0 0 ( 0 0)
0 0
0
0
−1 (1
0 0
1 − h(t) 1 h(t)
0
0 1 1 0) = (0
0 0)
0 0
0
0
1 g(2t) ) ,
0
0
t ∈ 𝕋.
Thus, the inherent equation for the considered equation takes the form u1Δ (t)
1 (u2Δ (t)) = (0 0 u3Δ (t)
1 u1σ (t) 1 σ 0 ) ( ) + ( u (t) 2 g(2t) u3σ (t) 0 0
0 0
0
0
0
1 g(2t)
0
0 0 1
0 0
0
f1 (t) 0 f2 (t) 0) (f3 (t)) , 0 0 0
or u1σ (t) f1 (t) u1Δ (t) 1 1 σ Δ (u2 (t)) = ( g(2t) u3 (t)) + ( g(2t) f2 (t)) , u3Δ (t) 0 f3 (t)
t ∈ 𝕋,
or u1Δ (t) = u1σ (t) + f1 (t), 1 σ 1 u2Δ (t) = u (t) + f (t), g(2t) 3 g(2t) 2 u3Δ (t) = f3 (t),
t ∈ 𝕋.
Next, M1 (t)G2−1 (t) 1 0 = (0 0 (0
1 0 × (0 0 (0
0 1
0 0 0
0 0
−1
1 h(t)g(2t)
1 0 0
0 −1 0
0
1 h(t)g(2t)
0 0 0
0 0 1
0 0
0 0
0) 0 0)
0 0
0
l(t)h(t)g(2t) 0
0 0
1 (1 l(t)h(t)g(2t)
0
−
1 l(t)
c22 (t) ) ) h(t)g(2t)
)
t ∈ 𝕋,
260 � 4 First kind linear time-varying dynamic-algebraic equations 1 0 = (0
0
1 h(t)g(2t)
0 0 0
0 (0
0 0 1 0 0
0 0
−l(t)h(t)g(2t) l(t) 0 −l(t)h(t)g(2t) 0
0) , 0 0)
−l(t)h(t)g(2t) l(t)
0 0 ) ( 0 0
M1 (t)G1−1 (t)C σ (t)B−σ (t) 1 0 = (0
0
1 h(t)g(2t)
0 0 0
0 (0
0 0 1 0 0
−l(t)
( =( (
0 −l(t)h(t)g(2t) 0
− l(t)h(t)g(2t) h(2t) l(t) h(2t) 1 − h(2t) − l(t)h(t)g(2t) h(2t)
l(t)h(t)g(2t) 0
l(t)h(t)g(2t) 0 (
0 0
0
0 −1 0) ( 1
1 h(t)g(2t) )
0
0 0
0
), )
0 0
1 − h(2t) 1 h(2t)
0
0 1 0) 0 0)
t ∈ 𝕋.
)
Therefore, the equation (4.32) can be rewritten as follows: 1 0 (0 0 (0
0 1
0 0
0 0 0
1 0 0
−1
1 h(t)g(2t)
0 −1 0
l(t)h(t)g(2t)
( = −( (
−l(t) 0
l(t)h(t)g(2t) 0 (
1 0 + (0 0 (0
Moreover,
0
1 h(t)g(2t)
0 0 0
0 0
σ
0) v1 (t) 0 0)
− l(t)h(t)g(2t) h(2t) l(t) h(2t) 1 − h(2t) − l(t)h(t)g(2t) h(2t)
1 h(t)g(2t) )
0
0 0 1 0 0
0
−l(t)h(t)g(2t) l(t) 0 −l(t)h(t)g(2t) 0
) uσ (t) )
0
0 0 0 0
)
0) f (t), 0 0)
t ∈ 𝕋.
� 261
4.5 Decoupling of first kind linear dynamic-algebraic equations of index ≥ 2
−
N01 = −M0 (t)Q1 (t)B (t)
0 0 = − (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 = − (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0 0 ( 0) 0 0 0 1 (0
0 0
0 0
1 1 − h(t)g(2t)
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0
0 0 0 0 0) (0 0 0 1 0
0 0 0
0 0
1 0 0) (0 0 0 0) (0
0 1 0
0 0 0 0 0) = (0 0 0 0 0
0 0 0 0 0
0
0 0 1) 0 0)
1 h(t)
0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0) , 0 0
M0 (t)(P1 (t) − I)B− (t) 0 0 = (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 = (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0 0 0) (0 0 0 1 (0 0 0 0 0 0) (0 0 0 1 0
0 0
0 0
0 0 0 0 0
0 0 0 0 0) = (0 0 0 0 0
0 0 0
−1
1 h(t)g(2t)
0 0 0
0 −1 0
0 0
1 0 0 ) (0 0 0 −1) (0 0 0 0 0 0
0 0 0) , 0 0
0
1 h(t)
0 0 0
0 0 1) 0 0)
t ∈ 𝕋,
whereupon K̃ 0 vanishes. Also, B(t)M1 (t)B− (t) 1 = (0 0
0 h(t) 0
0 0 1
0 0 0
1 = (0 0
0 h(t) 0
0 0 1
0 0 0
1 0 0 0 ) (0 0 0 (0
0 1
0 0 0
1 0 0 0 ) (0 0 0 (0
0
1 h(t)
0 0 0
0 0
−1
1 h(t)g(2t)
1 0 0
0 −1 0
1 0 0) (0 0 0 0) (0
0 0 1 1 ) = (0 0 0 0)
From here, Δ
0 0
(BM1 B− ) (t) = 0,
t ∈ 𝕋,
0 1 0
0 0) , 1
0
1 h(t)
0 0 0
0 0 1) 0 0)
t ∈ 𝕋.
262 � 4 First kind linear time-varying dynamic-algebraic equations and ̃ M 01 (t) = 0,
t ∈ 𝕋.
So, the first equation of (4.39) vanishes. Exercise 4.4. Let 𝕋 = 4ℕ and
1. 2. 3.
1 0 A(t) = (0 0 0
0 t+1 0 0 0
0 0 C(t) = ( 0 −1 1
0 0 −1 1 0
0 0 2) , 0 0 0 1 0 0 0
−1 1 0 0 0
1 B(t) = (0 0 1 0 0 ), 0 t+2
0 t2 0
0 0 3
0 0 0
0 0) , 0
t ∈ 𝕋.
Prove that the equation (4.1) is (σ, 1)-regular with tractability index 2. Write the inherent equation of the equation (4.1). Write the equation (4.32).
4.6 Advanced practical problems Problem 4.1. Let 𝕋 = 3ℕ0 and t−3 A(t) = ( t 3 1 1. 2. 3.
0 0 t
t2 1), 0
−t C(t) = ( 0 t
1 1 −t
−1 t ), 0
t ∈ 𝕋.
Find the projector P along ker Aσ . Write the system (4.2). Write the system (4.3).
Problem 4.2. Let 𝕋 = 2ℕ0 and 2 A(t) = (0 0
3t 0 4t
3 0) , 4
−1 C(t) = ( 2 1
1 t 2
1 −1) , −1
1 P(t) = (0 0
0 1 0
0 1 ), −t + 1
t ∈ 𝕋.
4.6 Advanced practical problems �
1. 2. 3.
Find the matrix A1 (t) = A(t) + C(t)Q(t), t ∈ 𝕋. Find the equation (4.4). Find the equation (4.10).
Problem 4.3. Let 𝕋 = 4ℕ0 and 2 0 A(t) = (0 0 0
0 1 0 0 0
0 0 C(t) = ( 0 −t 1
0 0 1) , 0 0
0 0 −t − 1 t 0
1 B(t) = (0 0
0 t+1 0 0 0
t t+1 0 1 0
0 2 0
0 0 −t
−t 0 0 ), 0 −1
0 0 0
0 0) , 0
t ∈ 𝕋.
Find the representation (4.16). Problem 4.4. Let 𝕋 = 4ℕ0 and
1. 2. 3.
2 0 A(t) = (0 0 0
0 t +t 0 0 0
0 0 C(t) = ( 0 −1 1
0 0 −1 1 0
0 0 1) , 0 0
2
0 1 0 0 0
−1 1 0 0 0
2 B(t) = (0 0 1 0 0 ), 0 1 + t3
0 t+3 0
0 0 1
0 0 0
0 0) , 0
t ∈ 𝕋.
Prove that the equation (4.1) is (σ, 1)-regular with tractability index 2. Write the inherent equation of the equation (4.1). Write the equation (4.32).
Problem 4.5. Let 𝕋 = ℕ and 1 0 A(t) = (0 0 0
0 g(t) 0 0 0
0 0 2) , 0 0
h(t) B(t) = ( 0 0
0 −h(t) 0
0 0 1
0 0 0
0 0) , 0
263
264 � 4 First kind linear time-varying dynamic-algebraic equations 0 0 C(t) = ( 0 −1 1 1. 2. 3. 4.
0 0 −1 1 0
0 1 0 0 0
−1 1 0 0 0
1 0 0 ), 0 t+2
t ∈ 𝕋.
Prove that the equation (4.1) is (σ, 1)-regular with tractability index 3. Write the inherent equation of the equation (4.1). Write the equation (4.32). Write the equations (4.39).
4.7 Notes and references

In this chapter, we introduce first kind linear time-varying dynamic-algebraic equations and we classify them as (σ, 1)-properly stated, (σ, 1)-algebraically nice at level 0 and k ≥ 1, (σ, 1)-nice at level 0 and k ≥ 1, and (σ, 1)-regular with tractability index 0 and ν ≥ 1. We deduce the inherent equation for the equation (4.1) and give a decomposition of the solutions. We prove that the proposed process for decomposition of the solutions is reversible.
5 Second kind linear time-varying dynamic-algebraic equations

Suppose that 𝕋 is a time scale with forward jump operator σ and delta differentiation operator Δ. Let I ⊆ 𝕋. In this chapter, we will investigate the following linear time-varying dynamic-algebraic equation:

Aσ (t)(Bx)Δ (t) = C(t)x(t) + f (t),
t ∈ I,
(5.1)
where A, B, C : I → Mm×m and f : I → ℝm are given. Equation (5.1) is said to be a second kind linear time-varying dynamic-algebraic equation. We will consider the solutions of (5.1) within the space

CB1 (I) = {x : I → ℝm : Bx ∈ C 1 (I)}.
Below, we remove the explicit dependence on t for the sake of notational simplicity.
5.1 A classification

In this section, we will give a classification of the dynamic-algebraic equation (5.1).

Definition 5.1. The matrix pair (A, B) is said to be the leading term of the dynamic-algebraic equation (5.1).

Definition 5.2. We will say that the second kind linear time-varying dynamic-algebraic equation (5.1) is properly stated if its leading term (A, B) is properly stated.

Definition 5.3. We will say that the second kind linear time-varying dynamic-algebraic equation (5.1) is algebraically nice at level 0 if its leading term (A, B) is properly stated.

Definition 5.4. We will say that the second kind linear time-varying dynamic-algebraic equation (5.1) is algebraically nice at level k ≥ 1 if it is algebraically nice at level k − 1 and (A5) and (A6) hold for i = k for some admissible up to level k projector sequence Q0 , . . . , Qk−1 .

Definition 5.5. We will say that the second kind linear time-varying dynamic-algebraic equation (5.1) is nice at level k if it is algebraically nice at level k and there exists an admissible choice of Qk . The ranks ri of Gi , i ∈ {0, . . . , k}, are said to be characteristic values of (5.1).

Definition 5.6. The second kind linear time-varying dynamic-algebraic equation (5.1) is said to be regular with tractability index 0 if both A and B are invertible.
Definition 5.7. The second kind linear time-varying dynamic-algebraic equation (5.1) is said to be regular with tractability index ν if there exists an admissible projector sequence {Q0 , . . . , Qν−1 } for which the matrices Gi are singular for 0 ≤ i < ν and Gν is nonsingular.

Definition 5.8. The second kind linear time-varying dynamic-algebraic equation (5.1) is said to be regular if it is regular with any nonnegative tractability index.

Definition 5.9. The second kind linear time-varying dynamic-algebraic equation (5.1) is said to be (σ, 1)-regular if it is (σ, 1)-regular with any nonnegative tractability index.

Note that the tractability index of (5.1) is ν if and only if it is nice up to level ν − 1, the matrices Gi , 0 ≤ i < ν, are singular and Gν is nonsingular. Since the dimension of the direct sum N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni increases, the tractability index cannot exceed m.
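For constant matrices the chain behind Definitions 5.6–5.9 can be evaluated mechanically. The following is a minimal sketch, assuming numpy; it uses an orthogonal kernel projector and the simplified constant-coefficient update Ci+1 = Ci (I − Qi ), and the sample matrices are illustrative choices, not taken from the text.

```python
import numpy as np

def kernel_projector(G, tol=1e-10):
    # Orthogonal projector onto ker G, built from the right singular vectors of G.
    _, s, Vt = np.linalg.svd(G)
    rank = int((s > tol * max(s[0], 1.0)).sum())
    N = Vt[rank:].T                      # columns span ker G
    return N @ N.T

def tractability_index(A_sigma, B, C, max_index=10, tol=1e-10):
    # Constant-coefficient sketch of the chain: G0 = A^sigma B,
    # G_{i+1} = G_i + C_i Q_i, C_{i+1} = C_i (I - Q_i).
    n = A_sigma.shape[0]
    G, Ci = A_sigma @ B, C.copy()
    for i in range(max_index + 1):
        if np.linalg.matrix_rank(G, tol=tol) == n:
            return i
        Q = kernel_projector(G, tol)
        G, Ci = G + Ci @ Q, Ci @ (np.eye(n) - Q)
    raise ValueError("chain did not terminate; the pair may not be regular")

# Illustrative data: this pair has tractability index 1.
A_sigma = np.diag([1.0, 1.0, 0.0])
B = np.eye(3)
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0]])
print(tractability_index(A_sigma, B, C))   # 1
```

For time-varying coefficients the admissible projectors also involve delta derivatives of the chain, so this sketch only illustrates the constant-coefficient situation.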
5.2 A particular case

Consider the equation

Aσ x Δ = Cx + f .
(5.2)
We will show that equation (5.2) can be reduced to an equation of the form (5.1). Suppose that P is a C 1 -projector along ker A. Then AP = A and

Aσ x Δ = Aσ Pσ x Δ = Aσ (Px)Δ − Aσ PΔ x.

Hence, equation (5.2) takes the form

Aσ (Px)Δ − Aσ PΔ x = Cx + f

or

Aσ (Px)Δ = (Aσ PΔ + C)x + f .

Set

C1 = Aσ PΔ + C.
5.2 A particular case
Thus, (5.2) takes the form Aσ (Px)Δ = C1 x + f ,
(5.3)
i. e., equation (5.2) is a particular case of the equation (5.1). Example 5.1. Let 𝕋 = 2ℕ0 and the matrices A and C be as in Example 4.1. Then σ(t) = 2t,
t ∈ 𝕋,
and 1 Aσ (t) = (0 0
0 −2t 0
0 1) , 0
t ∈ 𝕋.
We will find a vector y1 (t) y(t) = (y2 (t)) , y3 (t)
t ∈ 𝕋,
0 A(t)y(t) = (0) , 0
t ∈ 𝕋.
so that
We have 0 1 (0) = (0 0 0
0 −t 0
0 y1 (t) y1 (t) 1 ) (y2 (t)) = (−ty2 (t) + y3 (t)) , 0 y3 (t) 0
whereupon y1 (t) 0 (y2 (t)) = ( 1 ) , y3 (t) t
t ∈ 𝕋,
and the null projector to A(t), t ∈ 𝕋, is 0 Q(t) = (0 0 Hence,
0 0 0
0 1) , t
t ∈ 𝕋.
t ∈ 𝕋,
268 � 5 Second kind linear time-varying dynamic-algebraic equations 1 P(t) = I − Q(t) = (0 0
0 1 0
0 0 0 ) − (0 1 0
0 0 0
0 1 1 ) = (0 t 0
0 1 0
0 −1 ) , 1−t
t ∈ 𝕋,
is a projector along ker A. Observe that 0 P (t) = (0 0 Δ
0 0 0
0 0 ), −1
1 C1 (t) = Aσ (t)PΔ (t) + C(t) = (0 0 0 = (0 0
0 0 0
0 −t −1) + ( 0 0 t
0 −2t 0 1 1 0
0 0 1 ) (0 0 0
0 0 0
0 −t 0)+(0 −1 t
t −t 2t ) = ( 0 1 t
1 1 0
t 2t − 1) 1.
1 1 0
t 2t ) 1
Equation (4.2) can be written as follows: 1 (0 0
0 −2t 0
x1Δ (t) 0 −t 1 ) (x2Δ (t)) = ( 0 0 t x3Δ (t)
1 1 0
t x1 (t) f1 (t) 2t ) (x2 (t)) + (f2 (t)) , 1 x3 (t) f3 (t)
t ∈ 𝕋,
or x1Δ (t) = −tx1 (t) + x2 (t) + tx3 (t) + f1 (t),
−2tx2Δ (t) + x3Δ (t) = x2 (t) + 2tx3 (t) + f2 (t), 0 = tx1 (t) + x3 (t) + f3 (t),
t ∈ 𝕋.
This system, using (5.3), can be rewritten in the form 1 (0 0
0 −2t 0
−t =(0 t
0 1 1 ) ((0 0 0 1 1 0
0 1 0
0 x1 (t) −1 ) (x2 (t))) 1−t x3 (t)
t x1 (t) f1 (t) 2t − 1) (x2 (t)) + (f2 (t)) , 1 x3 (t) f3 (t)
Δ
t ∈ 𝕋,
or 1 (0 0
0 −2t 0
Δ
0 x1 (t) −tx1 (t) + x2 (t) + tx3 (t) f1 (t) 1 ) (x2 (t) − x3 (t)) = ( x2 (t) + (2t − 1)x3 (t) ) + (f2 (t)) , 0 (1 − t)x3 (t) tx1 (t) + x3 (t) f3 (t)
t ∈ 𝕋,
5.3 Standard form index one problems
� 269
or 1 (0 0
0 −2t 0
x1Δ (t) 0 −tx1 (t) + x2 (t) + tx3 (t) + f1 (t) 1 ) ( x2Δ (t) − x3Δ (t) ) = ( x2 (t) + (2t − 1)x3 (t) + f2 (t) ) , 0 tx1 (t) + x3 (t) + f3 (t) (1 − 2t)x3Δ (t) − x3 (t)
t ∈ 𝕋,
or x1Δ (t) = −tx1 (t) + x2 (t) + tx3 (t) + f1 (t),
−2t(x2Δ (t) − x3Δ (t)) + (1 − 2t)x3Δ (t) − x3 (t) = x2 (t) + (2t − 1)x3 (t) + f2 (t), 0 = tx1 (t) + x3 (t) + f3 (t),
t ∈ 𝕋,
or x1Δ (t) = −tx1 (t) + x2 (t) + tx3 (t) + f1 (t),
−2tx2Δ (t) + x3Δ (t) = x2 (t) + 2tx3 (t) + f2 (t), 0 = tx1 (t) + x3 (t) + f3 (t),
t ∈ 𝕋.
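The computation of C1 in Example 5.1 can be verified numerically. The sketch below re-enters the matrices of the example and forms PΔ directly from its definition PΔ(t) = (P(σ(t)) − P(t))/μ(t), with σ(t) = 2t and μ(t) = t on 𝕋 = 2^ℕ0; it is a sanity check only and not part of the argument.

```python
import numpy as np

def A(t):  return np.array([[1., 0., 0.], [0., -t, 1.], [0., 0., 0.]])
def C(t):  return np.array([[-t, 1., t], [0., 1., 2 * t], [t, 0., 1.]])
def P(t):  return np.array([[1., 0., 0.], [0., 1., -1.], [0., 0., 1. - t]])

sigma = lambda t: 2 * t            # forward jump operator on T = 2^{N_0}
mu    = lambda t: sigma(t) - t     # graininess

def P_delta(t):
    # Delta derivative at a right-scattered point: (P(sigma(t)) - P(t)) / mu(t)
    return (P(sigma(t)) - P(t)) / mu(t)

def C1(t):
    # C1 = A^sigma P^Delta + C, as used in (5.3)
    return A(sigma(t)) @ P_delta(t) + C(t)

t = 4.0                            # a point of T = 2^{N_0}
print(np.allclose(A(t) @ P(t), A(t)))                                   # A P = A
expected = np.array([[-t, 1., t], [0., 1., 2 * t - 1.], [t, 0., 1.]])
print(np.allclose(C1(t), expected))                                     # matches Example 5.1
```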
Exercise 5.1. Let 𝕋 = 4^ℕ0 and

         ( t  −2  1+t )            ( 0  t  t−1 )
A(t) =   ( 0   t    0  ) ,  C(t) = ( t  0   t  ) ,   t ∈ 𝕋.
         ( 1   0  1−t )            ( 0  1   0  )

1. Find the projector P along ker A.
2. Write the system (5.2).
3. Write the system (5.3).
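For the first item of exercises of this type, one way (among many) to obtain a projector along ker A numerically is to take the orthogonal projector built from an SVD null-space basis; this choice is our own and need not coincide with the projector intended in the text. For illustration, the matrix below is the A(t) of Example 5.1 at t = 4.

```python
import numpy as np

def null_projector(A, tol=1e-10):
    # Orthogonal projector Q onto ker A (one admissible choice); P = I - Q satisfies A P = A.
    _, s, Vt = np.linalg.svd(A)
    N = Vt[s <= tol].T                       # columns: orthonormal basis of ker A
    Q = N @ N.T if N.size else np.zeros_like(A)
    return Q, np.eye(A.shape[0]) - Q

t = 4.0
A = np.array([[1., 0., 0.],
              [0., -t, 1.],
              [0., 0., 0.]])
Q, P = null_projector(A)
print(np.allclose(A @ P, A), np.allclose(P @ P, P), np.allclose(A @ Q, np.zeros_like(A)))
```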
5.3 Standard form index one problems

In this section, we will investigate the equation

Aσ (Px)Δ = Cx + f,  (5.4)

where ker A is a C1-space, C ∈ C(I) and P is a C1-projector along ker A. Then AP = A. Assume in addition that Q = I − P and

(C1) the matrix A₁σ = A + CQ is invertible.

The condition (C1) ensures that equation (5.4) is regular with tractability index 1. We will start our investigations with the following useful lemma.

Lemma 5.1. Suppose that (C1) holds. Then

(A₁σ)⁻¹ A = P   and   (A₁σ)⁻¹ CQ = Q.

Proof. We have

A₁σ P = (A + CQ)P = AP + CQP = A.

Since Q = I − P and ker P = ker A, we have im Q = ker A and AQ = 0. Then

A₁σ Q = (A + CQ)Q = AQ + CQQ = CQ.

This completes the proof.

Now, we multiply the equation (5.4) with (A₁σ)⁻¹ and we get

(A₁σ)⁻¹ Aσ (Px)Δ = (A₁σ)⁻¹ Cx + (A₁σ)⁻¹ f.

Now, we employ the first equation of Lemma 5.1 and we get

P(Px)Δ = (A₁σ)⁻¹ Cx + (A₁σ)⁻¹ f.  (5.5)

We decompose x in the following way:

x = Px + Qx.

Then the equation (5.5) takes the following form:

P(Px)Δ = (A₁σ)⁻¹ C(Px + Qx) + (A₁σ)⁻¹ f = (A₁σ)⁻¹ CPx + (A₁σ)⁻¹ CQx + (A₁σ)⁻¹ f.

Using the second equation of Lemma 5.1, the last equation can be rewritten as follows:

P(Px)Δ = (A₁σ)⁻¹ CPx + Qx + (A₁σ)⁻¹ f.  (5.6)

We multiply the equation (5.6) with the projector P and, using that

PP = P,   PQ = 0,

we find

PP(Px)Δ = P(A₁σ)⁻¹ CPx + PQx + P(A₁σ)⁻¹ f

or

P(Px)Δ = P(A₁σ)⁻¹ CPx + P(A₁σ)⁻¹ f.  (5.7)

Note that

P(Px)Δ = (PPx)Δ − PΔ Pσ xσ = (Px)Δ − PΔ Pσ xσ.

Hence, by (5.7), we find

(Px)Δ − PΔ Pσ xσ = P(A₁σ)⁻¹ CPx + P(A₁σ)⁻¹ f

or

(Px)Δ = PΔ Pσ xσ + P(A₁σ)⁻¹ CPx + P(A₁σ)⁻¹ f.  (5.8)

Now, we multiply the equation (5.6) by Q and we find

QP(Px)Δ = Q(A₁σ)⁻¹ CPx + QQx + Q(A₁σ)⁻¹ f

or

0 = Q(A₁σ)⁻¹ CPx + Qx + Q(A₁σ)⁻¹ f.  (5.9)

Set

u = Px,   v = Qx.

Then, by (5.8) and (5.9), we get the system

uΔ = PΔ uσ + P(A₁σ)⁻¹ Cu + P(A₁σ)⁻¹ f,
v = −Q(A₁σ)⁻¹ Cu − Q(A₁σ)⁻¹ f.  (5.10)
We find u ∈ C1(I) from the first equation of the system (5.10) and then we find v ∈ C(I) from the second equation of the system (5.10). Then the solution of the equation (5.4) is given by x = u + v = Px + Qx. Example 5.2. Let 𝕋 = ℕ and A, P and C be as in Example 4.2. Then the equation (5.4) can be rewritten as follows: −1 (0 0
t+2 0 2t + 4
0 = (0 0
−1 1 0 ) ((0 −2 0
0 −t 2
0 0 −(t + 1)
Δ
0 x1 (t) 0) (x2 (t))) 1 x3 (t)
1 x1 (t) f1 (t) 1) (x2 (t)) + (f2 (t)) , 1 x3 (t) f3 (t)
t ∈ 𝕋,
or −1 (0 0
t+2 0 2t + 4
Δ
−1 x1 (t) x1 (t) f1 (t) ) = (−tx2 (t) + x3 (t)) + (f2 (t)) , 0 )( 0 −2 −(t + 1)x2 (t) + x3 (t) 2x2 (t) + x3 (t) f3 (t)
t ∈ 𝕋,
or −1 (0 0
t+2 0 2t + 4
x1Δ (t) −1 x1 (t) f1 (t) ) = (−tx2 (t) + x3 (t)) + (f2 (t)) , 0 )( 0 −2 2x2 (t) + x3 (t) f3 (t) −(t + 2)x2Δ (t) − x2 (t) + x3Δ (t)
t ∈ 𝕋, or −x1Δ (t) + (t + 2)x2Δ (t) + x2 (t) − x3Δ (t) = x1 (t) + f1 (t), 2(t +
2)x2Δ (t)
+ 2x2 (t) −
0 = −tx2 (t) + x3 (t) + f2 (t),
2x3Δ (t)
= 2x2 (t) + x3 (t) + f3 (t),
or −x1Δ (t) + (t + 2)x2Δ (t) − x3Δ (t) = x1 (t) − x2 (t) + f1 (t), 2(t +
2)x2Δ (t)
0 = −tx2 (t) + x3 (t) + f2 (t),
−
2x3Δ (t)
= x3 (t) + f3 (t),
Next, we will rewrite the system (5.10). We have
t ∈ 𝕋.
t ∈ 𝕋,
5.3 Standard form index one problems
σ P(t)(A−1 1 (t))
1 = (0 0
0 0 −(t + 1) t 2
−1 P(t)(A (t)) C(t) = ( 0 0 −1
σ
1 2
0
t+6 2
2−t 2 2
0 = (0
0 0 ) (0 0 − 21
0
−1 ) = ( 0 0 − 21 0
1
3t+8 2
0 −t 2
1 1) 1
t 2
1 2
t 2
0
1 2
t+6 2
0 ), − 21
0 ),
t+5 2
0 σ Q(t)(A−1 (t)) = ( 0 1 0
0 1 t+1
−1 0 0) ( 0 0 0
0 σ Q(t)(A−1 (t)) C(t) = (0 0
0 1 t+1
0 0 0 ) (0 0 0
0 = (0 0
1 2
t−1 2
2 − t +6t+2 2
0
t 2
−1 0 0) ( 0 1 0
� 273
0 −t −t(t + 1)
1
3t+8 2
0 −t 2
0 1 ), t+1
0 0 ) = (0 0 − 21
0 1 t+1
0 0) , 0
1 1) 1
t ∈ 𝕋.
Hence, the system (5.10) takes the form u1Δ (t)
0 Δ (u2 (t)) = (0 0 u3Δ (t)
0 0 −1
−1 +(0 0 v1 (t) 0 (v2 (t)) = − (0 v3 (t) 0 0 − (0 0
0 u1σ (t) 0 σ 0) (u2 (t)) + (0 0 u3σ (t) 0 t 2
0
t+6 2
1 2
−t
2
0
+6t+2 2
f1 (t) 0 ) (f2 (t)) , f3 (t) − 21
0 −t −t(t + 1) 0 1 t+1
2−t 2 2
0 u1 (t) 1 ) (u2 (t)) t+1 u3 (t)
0 f1 (t) 0) (f2 (t)) , 0 f3 (t)
t ∈ 𝕋,
or 2
2−t u2 (t) + t−1 u (t) 0 2 2 3 Δ (u2 (t)) = ( 0 ) + ( ) 0 σ 2 Δ t +6t+2 t+5 −u (t) u3 (t) 2 − 2 u2 (t) + 2 u3 (t)
u1Δ (t)
t−1 2
u1 (t) 0 ) (u2 (t)) t+5 u3 (t) 2
274 � 5 Second kind linear time-varying dynamic-algebraic equations −f1 (t) + 2t f2 (t) + 21 f3 (t) +( ), 0 t+6 1 f (t) − 2 f3 (t) 2 2
v1 (t) 0 0 (v2 (t)) = ( ) − ( f2 (t) ) , tu2 (t) − u3 (t) v3 (t) t(t + 1)u2 (t) − (t + 1)u3 (t) (t + 1)f3 (t)
t ∈ 𝕋,
or 2 − t2 t−1 t 1 u2 (t) + u (t) − f1 (t) + f2 (t) + f3 (t), 2 2 3 2 2 t 2 + 6t + 2 t+5 t+6 1 u3Δ (t) = −u2σ (t) − u2 (t) + u3 (t) + f2 (t) − f3 (t) 2 2 2 2 v1 (t) = 0, u1Δ (t) =
v2 (t) = tu2 (t) − u3 (t) − f2 (t),
v3 (t) = t(t + 1)u2 (t) − (t + 1)u3 (t) − (t + 1)f3 (t),
t ∈ 𝕋.
Exercise 5.2. Let 𝕋 = 7ℕ and
1. 2. 3.
1 A(t) = (0 0
t+1 0 3t + 3
1 P(t) = (0 0
0 1 0
1 0) , 3
0 1 ), −t + 1
−1 C(t) = ( 1 −1
−2 t+4 −2
2 1) , 1
t ∈ 𝕋.
Find the matrix Aσ1 (t) = A(t) + C(t)Q(t), t ∈ 𝕋. Find the equation (5.4). Find the equation (5.10).
5.4 Decoupling of (σ, 1)-regular second-order linear time-varying dynamic equations of index one Suppose that the equation (5.1) is (σ, 1)-regular with tractability index one. Set G0 = Aσ B and denote with R a continuous projector along ker Aσ , P0 a continuous projector along ker G0 , Q0 = I − P 0 , B− the {1, 2}-inverse of B so that
G1 = G0 + CQ0 ,
5.4 Decoupling of (σ, 1)-regular second-order linear time-varying dynamic equations
BB− B = B,
B− BB− = B− ,
B − B = P0 ,
� 275
BB− = R.
Then Aσ R = Aσ = Aσ BB− = G0 B− . Hence, the equation (5.1) takes the form G0 B− (Bx)Δ = Cx + f . We multiply both sides of the last equation with G1−1 and we find G1−1 G0 B− (Bx)Δ = G1−1 Cx + G1−1 f .
(5.11)
Now, using that G1−1 G0 = I − Q0 and P0 G1−1 G0 = P0 , whereupon BP0 G1−1 G0 = BP0 , and multiplying the last equation with BP0 , we obtain BP0 B− (Bx)Δ = BP0 G1−1 Cx + BP0 G1−1 f . Hence, Δ
Δ
Δ
BP0 G1−1 Cx + BP0 G1−1 f = (BP0 B− x) − (BP0 B− ) Bσ x σ = (BP0 x)Δ − (BP0 B− ) Bσ x σ . Now, we use the decomposition x = P 0 x + Q0 x to find Δ
Δ
(BP0 x)Δ = (BP0 B− ) Bσ P0σ x σ + (BP0 B− ) Bσ Q0σ x σ + BP0 G1−1 CP0 x + BP0 G1−1 CQ0 x + BP0 G1−1 f Δ
= (BP0 B− ) Bσ P0σ x σ + BP0 G1−1 CB− BP0 x + BP0 G1−1 f ,
i. e.,
276 � 5 Second kind linear time-varying dynamic-algebraic equations Δ
(BP0 x)Δ = (BP0 B− ) Bσ P0σ x σ + BP0 G1−1 CB− BP0 x + BP0 G1−1 f ,
(5.12)
where we have used that BQ0 x = 0,
BP0 G1−1 CQ0 x = 0.
We set BP0 x = u. Then, using the equation (5.12), we arrive at the equation Δ
uΔ = (BP0 B− ) uσ + BP0 G1−1 CB− u + BP0 G1−1 f .
(5.13)
Now, we multiply both sides of the equation (5.11) by Q0 and using that Q0 G1−1 G0 = Q0 (I − Q0 ) = 0 and setting v = Q0 x, we find 0 = Q0 G1−1 Cx + Q0 G1−1 f = Q0 G1−1 CP0 x + Q0 G1−1 CQ0 x + Q0 G1−1 f = Q0 G1−1 CB− BP0 x + Q0 G1−1 CQ0 x + Q0 G1−1 f
= Q0 G1−1 CB− u + Q0 x + Q0 G1−1 f = Q0 G1−1 CB− u + v + Q0 G1−1 f , i. e., v = −Q0 G1−1 CB− u − Q0 G1−1 f . By the last equation and the equation (5.13), we arrive at the system Δ
uΔ = (BP0 B− ) uσ + BP0 G1−1 CB− u + BP0 G1−1 f , v = −Q0 G1−1 CB− u − Q0 G1−1 f .
(5.14)
By the first equation of the system (5.14), we find u and then by its second equation we get v. For the solution x of the equation (5.1), we have x = P0 x + Q0 x = B− BP0 x + Q0 x = B− u + v. Now, we will prove that the above described process is reversible. Let u ∈ C 1 (I), u ∈ im BP0 B− and v ∈ C (I) satisfy (5.14) and
5.4 Decoupling of (σ, 1)-regular second-order linear time-varying dynamic equations
x = B− u + v.
�
277 (5.15)
Then BP0 B− u = u and BB− u = BB− BP0 B− u = BP0 B− u = u. By the second equation of the system (5.14), we get v = Q0 v. Since im Q0 = ker B, we find Bv = BQ0 u = 0. Hence and (5.15), we arrive at BP0 x = BP0 B− u + BP0 v = BP0 B− u + BP0 Q0 v = BP0 B− u = u. By the last relation, we find B− u = B− BP0 x = P0 P0 x = P0 x = x − Q0 x and using (5.15), we obtain x = B− u + v = x − Q0 x + v, whereupon v = Q0 x. Next, Bx = B(P0 x + Q0 x) = B(P0 x + v) = BP0 x + Bv = BP0 x = u. Note that the equation (5.13) is restated by the equation (5.11). Then we multiply (5.11) by BP0 and we find BP0 G1−1 G0 B− (Bx)Δ = BP0 G1−1 Cx + BP0 G1−1 f , which we multiply by B− and using that
278 � 5 Second kind linear time-varying dynamic-algebraic equations B− BP0 = P0 , we find B− BP0 G1−1 G0 B− (Bx)Δ = B− BP0 G1−1 Cx + B− BP0 G1−1 f or P0 G1−1 G0 B− (Bx)Δ = P0 G1−1 Cx + P0 G1−1 f , or P0 G1−1 Aσ (Bx)Δ = P0 G1−1 Cx + P0 G1−1 f .
(5.16)
Now, we multiply (5.11) by Q0 and we obtain Q0 G1−1 Aσ (Bx)Δ = Q0 G1−1 Cx + Q0 G1−1 f . We add the last equation and the equation (5.16) and we obtain G1−1 Aσ (Bx)Δ = G1−1 Cx + G1−1 f , whereupon Aσ (Bx)Δ = Cx + f . Definition 5.10. The equation (5.13) is said to be the inherent equation for the equation (4.1). Theorem 5.1. The subspace im P0 is an invariant subspace for the equation (5.13), i. e., u(t0 ) ∈ (im BP0 )(t0 ) for some t0 ∈ I if and only if u(t) ∈ (im BP0 )(t) for any t ∈ I. Proof. Let u ∈ C 1 (I) be a solution to the equation (5.13) so that (BP0 )(t0 )u(t0 ) = u(t0 ). Then, using the proof of Theorem 4.1, we obtain u(t0 ) = (BP0 B− )(t0 )u(t0 ).
5.4 Decoupling of (σ, 1)-regular second-order linear time-varying dynamic equations
� 279
We multiply the equation (4.15) by I − BP0 B and we get Δ
(I − BP0 B)uΔ = (I − BP0 B)(BP0 B− ) uσ + (I − BP0 B)BP0 G1−1 CB− u + (I − BP0 B)BP0 G1−1 f Δ
= (I − BP0 B)(BP0 B− ) uσ + (BP0 − BP0 BBP0 )G1−1 CB− u + (BP0 − BP0 B− BP0 )G1−1 f Δ
= (I − BP0 B− )(BP0 B− ) uσ . Set
v = (I − BP0 B− )u. Then Δ
Δ
Δ
vΔ = (I − BP0 B− )uΔ + (I − BP0 B− ) uσ = (I − BP0 B− )(BP0 B− ) uσ + (I − BP0 B− ) uσ Δ
Δ
Δ
= ((I − BP0 B− )BP0 B− ) uσ − (I − BP0 B− ) Bσ P0σ B−σ u + (I − BP0 B− ) uσ Δ
Δ
= (I − BP0 B− ) (I − Bσ P0σ B−σ )uσ = (I − BP0 B− ) vσ , i. e., Δ
vΔ = (I − BP0 B− ) vσ . Note that v(t0 ) = 0. Thus, we get the following IVP: Δ
vΔ = (I − BP0 B− ) vσ
on
I,
v(t0 ) = 0.
Therefore, v = 0 on I and then, using the proof of Theorem 4.1, we obtain BP0 u = u
on I.
This completes the proof. Example 5.3. Let 𝕋 = 2ℕ0 and A, B, C be as in Example 4.3. Then Δ
(BP0 B− ) (t) = 0, and
t ∈ 𝕋,
280 � 5 Second kind linear time-varying dynamic-algebraic equations B(t)P0 (t)G1−1 (t) t = (0 0
t = (0 0
0 t2 0
0 t2 0
0 0 1
0 0 1
1 t
0
0
0
0
1 t2
= (0
1
0
1 0 0 0 ) (0 0 0 0
0 0 0
0 1 0 0 0
1 t2
1 t3 1 t2
0
0 0 0 0 0
1
0 t2 0 0 ( 0) ( 0 0 0 0 (0 1 − t14 t4
0
0
0 0 0
1 t2
0 0
0 0 0
0 0 0 −1 1
0 0 −1 1 0
0 t 0 0 0
1 t2
0 0 ( ) 0 0 0 0
0 0 0
0 0 1 0 0
(0
1 t4
0
0
0
0
1 t2
1 t
0
0
0
1 t2
= (0
= (0 0
1
1
0
0
0
1 t3 1 t2
− t13
0 0 = (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 = (0 0
0 0 0 0
0 0 0 0
(0
0
0 0 1 ( 0 ) t2 − 1t 0 (
Q0 (t)G1−1 (t)
0
0
1
0 0 0 1 0
0 t2 0 0 ( 0) ( 0 0 0 1 (0 0 0 0 0 0 0), − t12 0 0
− t12
0
1 t2
0
0
0
0
) 0 )
1 t2
0
0 ) 0 0 )
0
− t13 1 )( t2
0
0
0
1 t4
1 t4
− t13 1 ), t2
1 t3 1 t2
0
− t14
0
1 t2
B(t)P0 (t)G1−1 (t)C(t)B− (t) 1 t
0
1 t4 1 t4
0
1 t2 )
0 0 − t12 1 t2
1 t
0
0 − t24 t 0) = ( 0 0
0
0)
0
− t14
0
− t12
0
0
0
1 t2
0
0
0
1
1 t 0 0 0 ) (0 0 0 t2 (0
1 t4 1 t4
0
1 t2
−1 1 0 −t 2 0
0
0
1 t4
) 0 )
1 t2
)
1 t5 1 t4 − t14
0
1 t2
0 0 0 0
0
0 1) 0 0)
t) ,
0
)
5.4 Decoupling of (σ, 1)-regular second-order linear time-varying dynamic equations
Q0 (t)G1−1 (t)C(t)B− (t) 0 0 ( 0 = 0
0 0 0 0
0 0 0 0
0 0 0 − t12
0 0 ( 0 = 0
0 0 0 0
0 0 0 0
0 0 0 − t12
(0
(0
0
0
0
0 0 0 0 0)( 0 0 −1 1 1 2)
0
0
0 0 −1 1 0
t
0 0 0 0 ) ( 0 0 0 −1
1 1 t2 ) ( t
0
0 t 0 0 0
0 0 − t12 1 t2
t
0
−1 1 0 −t 2 0
1 1 t 0 0 0 ) (0 0 0 t2 (0
0 0 t 0 0) = ( 0 0
0)
0
1 t2
0 0 0
0 0 0 − t14
1 t3 1 ( t3
0
� 281
0 0 1) 0 0)
0 0 0) , 0
t ∈ 𝕋.
0)
Then (5.14) takes the form u1Δ (t)
− t24
u3Δ (t)
0
1 t5 1 t4 − t14
(u2Δ (t)) = ( 0
0 0 0 0 ( 0 ) = −(0 1 v0 (t) t3 1 v1 (t) (3 t
0
0 ( − (0 0
(0
u1 (t) t t ) (u2 (t)) + (0 u3 (t) 0 0
0 0 0 − t14 0
0
0
0
0
0 0 0
1
0
0 0 0
0
0
0
1 t2
1
1 t3 1 t2
0
f1 (t) f2 (t) 1 ) (f3 (t)) , t2 0 0 0
− t13
0
0 0 u1 (t) 0) (u2 (t)) 0 u3 (t) 0) 0
0
f1 (t) f2 (t) ) 0 ) (f3 (t)) , 0 0 1 0 t2 )
0
0
0
− t12 0
t ∈ 𝕋,
or u1Δ (t)
− t24 u1 (t) +
(u2Δ (t)) = ( u3Δ (t)
1 u (t) t5 2
1 u (t) + tu3 (t) t4 2 − t14 u2 (t)
1 f (t) t 1
) + ( f2 (t) ) , 1 f (t) t2 3
0 0 0 0 0 0 ) − ( 0) , 0 ( 0 ) = −( 1 u (t) − t14 u2 (t) v0 (t) 0 t3 1 1 v1 (t) u (t) 3 1 ( ) ( 0) t
t ∈ 𝕋,
282 � 5 Second kind linear time-varying dynamic-algebraic equations or u1Δ (t) = −
2 1 1 u1 (t) + 5 u2 (t) + f1 (t), 4 t t t
1 u (t) + tu3 (t) + f2 (t), t4 2 1 1 u3Δ (t) = − 4 u2 (t) + 2 f3 (t), t t 1 1 v0 (t) = − 3 u1 (t) + 4 u2 (t), t t 1 v1 (t) = − 3 u1 (t), t ∈ 𝕋. t u2Δ (t) =
Exercise 5.3. Let 𝕋 = 4ℕ0 and 1 1 A(t) = ( 1 0 0
1 0 0 1 0
0 t t + 2) , 0 1
0 0 C(t) = (0 t 1
0 0 3t + 1 −t 0
0 2t 0 0 0
−t t 0 −1 1
t B(t) = (0 0
0 t4 0
t 0 0) , 0 1
t ∈ 𝕋.
0 0 t2
0 0 0
0 0) , 0
Find the representation (5.14).
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2 5.5.1 A reformulation Suppose that the equation (5.1) is regular with tractability index ν. In addition, assume that R is a continuous projector onto im B and along ker A. Set G0 = AB and take Π0 to be a continuous projector along ker G0 and denote M0 = I − Π0 ,
B− BB− = B− ,
C0 = C,
B − B = Π0 ,
G1 = G0 + C0 M0 ,
BB− = R,
BB− B = B,
N0 = ker G0 .
Let Gi , Πi , i ∈ {1, . . . , ν − 1}, be as in (A5)–(A7). Assume that (A8) holds and let
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
�
283
Δ
Ci Πi = (Ci−1 + Ciσ Miσ + Giσ B−σ (BΠi B− ) B)Πi−1 , Gi = Gi−1 + Ci−1 Mi−1 ,
i ∈ {1, . . . , ν − 1}.
Since R = BB− and R is a continuous projector along ker A, we have A = AR = ABB− = G0 B− . Then we can rewrite the equation (4.22) as follows: Aσ Bσ B−σ (Bx)Δ = Cx + f or G0σ B−σ (Bx)Δ = Cx + f . Now, we multiply both sides of the last equation with Gν−1σ and we find Gν−1σ G0σ B−σ (Bx)Δ = Gν−1σ Cx + Gν−1σ f .
(5.17)
By (3.69), we have σ Gν−1σ G0σ = I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 .
Therefore, (5.17) takes the form σ (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (Bx)Δ = Gν−1σ Cx + Gν−1σ f .
Since Πν−1 projects along N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 and Ni = im Qi , we have σ Πσν−1 (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 ) = Πσν−1
and then σ Bσ Πσν−1 (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 ) = Bσ Πσν−1 .
Then we multiply (5.18) by Bσ Πσν−1 and we get σ Bσ Πσν−1 (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (Bx)Δ = Bσ Πσν−1 Gν−1σ Cx + Bσ Πσν−1 Gν−1σ f
or
(5.18)
284 � 5 Second kind linear time-varying dynamic-algebraic equations Bσ Πσν−1 B−σ (Bx)Δ = Bσ Πσν−1 Gν−1σ Cx + Bσ Πσν−1 Gν−1σ f .
(5.19)
On the other hand, Πν−1 B− B = Πν−1 Π0 = Πν−1 . Since BΠν−1 B− and Bx are C 1 , we get Δ
Δ
Bσ Πσν−1 B−σ (Bx)Δ = (BΠν−1 B− Bx) − (BΠν−1 B− ) Bx Δ
= (BΠν−1 x)Δ − (BΠν−1 B− ) Bx.
Hence, (5.19) can be rewritten in the form Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) Bx = Bσ Πσν−1 Gν−1σ Cx + Bσ Πσν−1 Gν−1σ f . Now, we decompose x as follows: x = Πν−1 x + (I − Πν−1 )x. Then we find Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) B(I − Πν−1 + Πν−1 )x
= Bσ Πσν−1 Gν−1σ C(I − Πν−1 + Πν−1 )x + Bσ Πσν−1 Gν−1σ f ,
or Δ
Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) BΠν−1 x − (BΠν−1 B− ) (I − Πν−1 )x
= Bσ Πσν−1 Gν−1σ CΠν−1 x + Bσ Πσν−1 Gν−1σ C(I − Πν−1 )x + Bσ Πσν−1 Gν−1σ f .
By Lemma 3.18, we have j
Δ
σ Cj Πj = (C + Gj+1 − G1σ + ∑ Giσ B−σ (BΠi B− ) B)Πj−1 i=1
and j
Δ
σ 0 = Cj Πj Mj = (C + Gj+1 − G1σ + ∑ Giσ B−σ (BΠi B− ) B)Πi−1 Mj i=1
j
Δ
σ = (C + Gj+1 − G1σ + ∑ Giσ B−σ (BΠi B− ) B)Mj i=1
j
Δ
σ = CMj + (Gj+1 − G1σ )Mj + ∑ Giσ B−σ (BΠi B− ) BMj , i=1
(5.20)
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
285
�
whereupon j
Δ
σ CMj = −(Gj+1 − G1σ )Mj − ∑ Giσ B−σ (BΠi B− ) BMj . i=1
We multiply the last equation with Gν−1σ and we find j
Δ
σ Gν−1σ CMj = −Gν−1σ (Gj+1 − G1σ )Mj − ∑ Gν−1σ Giσ B−σ (BΠi B− ) BMj i=1
j
Δ
σ = −(Q1σ + ⋅ ⋅ ⋅ Qjσ )Mj − ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMj , i=1
i. e., j
Δ
σ Gν−1σ CMj = −(Q1σ + ⋅ ⋅ ⋅ Qjσ )Mj − ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMj .
(5.21)
i=1
Then Bσ Πσν−1 Gν−1σ CMj = −Bσ Πσν−1 (Q1σ + ⋅ ⋅ ⋅ + Qjσ )Mj j
Δ
σ − ∑ Bσ Πσν−1 (I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMj i=1 j
Δ
= − ∑ Bσ Πσν−1 B−σ (BΠi B− ) BMj i=1 j
j
Δ
Δ
= − ∑(BΠν−1 B− BΠi B− ) BMj + ∑(BΠν−1 B− ) BΠi B− BMj i=1
i=1
j
j−1
Δ
Δ
= − ∑(BΠν−1 B− BΠi B− ) BMj + ∑(BΠν−1 B− ) BΠi B− BMj i=1 j
i=1
Δ
j−1
Δ
Δ
= − ∑(BΠν−1 B− ) BMj + ∑(BΠν−1 B− ) BMj = −(BΠν−1 B− ) BMj , i=1
i=1
i. e., Δ
Bσ Πσν−1 Gν−1σ CMj = −(BΠν−1 B− ) BMj . By the last equation, we find ν−1
Δ
ν−1
Bσ Πσν−1 Gν−1σ C ∑ Mj = −(BΠν−1 B− ) B ∑ Mj . j=0
j=0
(5.22)
286 � 5 Second kind linear time-varying dynamic-algebraic equations Note that ν−1
∑ Mj = M0 + M1 + ⋅ ⋅ ⋅ + Mν−1 = I − Π0 + Π0 − Π1 + Π1 − Π2 + ⋅ ⋅ ⋅ + Πν−2 − Πν−1 = I − Πν−1 .
j=0
Hence, by (5.22), we find Δ
Bσ Πσν−1 Gν−1σ C(I − Πν−1 ) = −(BΠν−1 B− ) B(I − Πν−1 ). By the last relation, the equation (5.20) takes the form Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) BΠν−1 x = Bσ Πσν−1 Gν−1σ CΠν−1 x + Bσ Πσν−1 Gν−1σ f . Set u = BΠν−1 x. Then CΠν−1 x = CΠ0 Πν−1 x = CB− BΠν−1 x = CB− u. Thus, we get the equation Δ
uΔ − (BΠν−1 B− ) u = Bσ Πσν−1 Gν−1σ CB− u + Bσ Πσν−1 Gν−1σ f , or Δ
uΔ = (BΠν−1 B− ) u + Bσ Πσν−1 Gν−1σ CB− u + Bσ Πσν−1 Gν−1σ f .
(5.23)
Definition 5.11. The equation (5.23) is said to be the inherent equation of the equation (5.1). Theorem 5.2. The subspace im Πν−1 is an invariant subspace for the equation (5.23), i. e., u(t0 ) ∈ (im BΠν−1 )(t0 ) for some t0 ∈ I if and only if u(t) ∈ (im BΠν−1 )(t) for any t ∈ I. Proof. Let u ∈ C 1 (I) be a solution to the equation (5.23) so that (BΠν−1 )(t0 )u(t0 ) = u(t0 ).
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
� 287
Hence, u(t0 ) = (BΠν−1 )(t0 )u(t0 )
= (BΠν−1 Π0 Πν−1 )(t0 )u(t0 ) = (BΠν−1 B− BΠν−1 )(t0 )u(t0 )
= (BΠν−1 B− )(t0 )(BΠν−1 )(t0 )u(t0 ) = (BΠν−1 B− )(t0 )u(t0 ). We multiply the equation (5.23) by I − Bσ Πσν−1 B−σ and we get Δ
(I − Bσ Πσν−1 B−σ )uΔ = (I − Bσ Πσν−1 B−σ )(BΠν−1 B− ) u
+ (I − Bσ Πσν−1 B−σ )Bσ Πσν−1 Gν−1σ CB− u + (I − Bσ Πσν−1 B−σ )Bσ Πσν−1 Gν−1σ f Δ
= (I − Bσ Πσν−1 B−σ )(BΠν−1 B− ) u
+ (Bσ Πσν−1 − Bσ Πσν−1 B−σ Bσ Πσν−1 )Gν−1σ CB− u + (Bσ Πσν−1 − Bσ Πσν−1 B−σ Bσ Πσν−1 )Gν−1σ f Δ
= (I − Bσ Πσν−1 B−σ )(BΠν−1 B− ) u
+ (Bσ Πσν−1 − Bσ Πσν−1 )Gν−1σ CB− u + (Bσ Πσν−1 − Bσ Πσν−1 )Gν−1σ f
Δ
= (I − Bσ Πσν−1 B−σ )(BΠν−1 B− ) u, i. e., Δ
(I − Bσ Πσν−1 B−σ )uΔ = (I − Bσ Πσν−1 B−σ )(BΠν−1 B− ) u. Set v = (I − BΠν−1 B− )u. Then Δ
vΔ = (I − Bσ Πσν−1 B−σ )uΔ + (I − BΠν−1 B− ) u Δ
Δ
= (I − Bσ Πσν−1 B−σ )(BΠν−1 B− ) u + (I − BΠν−1 B− ) u Δ
= ((I − BΠν−1 B− )BΠν−1 B− ) u Δ
Δ
− (I − BΠν−1 B− ) BΠν−1 B− u + (I − BΠν−1 B− ) u Δ
Δ
= (BΠν−1 B− − BΠν−1 B− BΠν−1 B− ) u + (I − BΠν−1 B− ) (I − BΠν−1 B− )u Δ
= −(I − BΠν−1 B− ) v, i. e., Δ
vΔ = −(I − BΠν−1 B− ) v.
(5.24)
288 � 5 Second kind linear time-varying dynamic-algebraic equations Note that v(t0 ) = u(t0 ) − (BΠν−1 B− )(t0 )u(t0 ) = u(t0 ) − u(t0 ) = 0. Thus, we get the following IVP: Δ
vΔ = −(I − BΠν−1 B− ) v
on
I,
v(t0 ) = 0. Therefore, v = 0 on I and then
BΠν−1 B− u = u
on
I.
Hence, using that im BΠν−1 = im BΠν−1 B− , we get BΠν−1 u = u
on
I.
This completes the proof. 5.5.2 The component vν−1 Consider the equation (5.17). Observe that I = M0 + M1 + ⋅ ⋅ ⋅ + Mν−1 + Πν−1 . Then we represent the solution x of the equation (5.17) in the form x = M0 x + M1 x + ⋅ ⋅ ⋅ + Mν−1 x + Πν−1 x = M0 x + M1 x + ⋅ ⋅ ⋅ + Mν−1 x + B− BΠν−1 x. Set vj = Mj x,
j ∈ {0, . . . , ν − 1}.
σ We multiply the equation (5.17) by Mν−1 and we find σ σ σ Mν−1 Gν−1σ G0σ B−σ (Bx)Δ = Mν−1 Gν−1σ Cx + Mν−1 Gν−1σ f .
By (3.69), we have σ σ σ Mν−1 Gν−1σ G0σ = Mν−1 (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 )
(5.25)
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
�
289
σ σ σ σ σ σ = Mν−1 − Mν−1 Q0σ − ⋅ ⋅ ⋅ − Mν−1 Qν−1 = Mν−1 − Mν−1 = 0.
Thus, the equation (5.25) can be rewritten as follows: σ σ Mν−1 Gν−1σ Cx = −Mν−1 Gν−1σ f .
(5.26)
Hence, σ σ Mν−1 Gν−1σ f = −Mν−1 Gν−1σ C(M0 x + M1 x + ⋅ ⋅ ⋅ + Mν−1 x + B− BΠν−1 x).
(5.27)
By (5.21), we get σ σ Mν−1 Gν−1σ CMj = −Mν−1 (Q1σ + ⋅ ⋅ ⋅ + Qjσ )Mj j
Δ
σ σ σ − ∑ Mν−1 (I − Qiσ − Qi+1 − ⋅ ⋅ ⋅ − Qν−1 )Bσ (BΠi B− ) BMj
= 0,
i=1
j < ν − 1,
and σ σ σ Mν−1 Gν−1σ CMν−1 = −Mν−1 (Q1σ + ⋅ ⋅ ⋅ + Qν−1 )Mν−1 ν−1
Δ
σ σ σ − ∑ Mν−1 (I − Qiσ − Qi+1 − ⋅ ⋅ ⋅ − Qν−1 )Bσ (BΠi B− ) BMν−1
=
i=1 σ −Mν−1 Mν−1 .
Then, by (5.27), we find σ σ σ Mν−1 Mν−1 x + Mν−1 Gν−1σ SB− BΠν−1 x = Mν−1 Gν−1σ f
or σ σ σ Mν−1 vν−1 = −Mν−1 Gν−1σ CB− u + Mν−1 Gν−1σ f .
(5.28)
5.5.3 The components vk In this section, we will find a suitable representation of the components vk = Mk x. For this aim, we will use the projectors Uk = Mk Pk+1 . . . Pν−1 which project along N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nk−1 ⊕ Nk+1 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 ⊕ im Πν−1 .
290 � 5 Second kind linear time-varying dynamic-algebraic equations Hence, Uk Qi = 0,
k ≠ i.
Now, using (3.69) and then (3.61), we obtain Ukσ Gν−1σ G0σ B−σ (Bx)Δ
σ = Ukσ (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (Bx)Δ
σ σ σ = (Ukσ − Ukσ Q0σ − ⋅ ⋅ ⋅ − Ukσ Qk−1 − Ukσ Qkσ − Ukσ Qk+1 − ⋅ ⋅ ⋅ − Ukσ Qν−1 )B−σ (Bx)Δ
= (Ukσ − Ukσ Qkσ )B−σ (Bx)Δ = Ukσ (I − Qkσ )B−σ (Bx)Δ
σ σ = Mkσ Pk+1 . . . Pν−1 (I − Qkσ )B−σ (Bx)Δ
σ σ σ σ = Mkσ (Pk+1 . . . Pν−1 − Pk+1 . . . Pν−1 Qkσ )B−σ (Bx)Δ
σ σ σ σ σ = Mkσ (Pk+1 . . . Pν−1 − Pk+1 . . . Pν−2 (I − Qν−1 )Qkσ )B−σ (Bx)Δ
σ σ σ σ σ = Mkσ (Pk+1 . . . Pν−1 − Pk+1 . . . Pν−2 (Qkσ − Qν−1 Qkσ ))B−σ (Bx)Δ σ σ σ σ = Mkσ (Pk+1 . . . Pν−1 − Pk+1 . . . Pν−2 Qkσ )B−σ (Bx)Δ = ⋅ ⋅ ⋅ σ σ σ = Mkσ (Pk+1 . . . Pν−1 − Pk+1 Qkσ )B−σ (Bx)Δ
σ σ σ = Mkσ (Pk+1 . . . Pν−1 − (I − Qk+1 ))B−σ (Bx)Δ
σ σ σ = Mkσ (Pk+1 . . . Pν−1 − Qkσ + Qk+1 Qkσ )B−σ (Bx)Δ σ σ = Mkσ (Pk+1 . . . Pν−1 − Qkσ )B−σ (Bx)Δ
σ σ = (Mkσ Pk+1 . . . Pν−1 − Mkσ Qkσ )B−σ (Bx)Δ σ σ = (Mkσ Pk+1 . . . Pν−1 − Mkσ )B−σ (Bx)Δ σ σ = Mkσ (Pk+1 . . . Pν−1 − I)B−σ (Bx)Δ ,
i. e., σ σ Ukσ Gν−1σ G0σ B−σ (Bx)Δ = Mkσ (Pk+1 . . . Pν−1 − I)B−σ (Bx)Δ .
Note that we have Pk+1 . . . Pν−1 − I = −Qk+1 − Pk+1 Qk+2 − Pk+1 Pk+2 Qk+3 − ⋅ ⋅ ⋅ − Pk+1 ⋅ ⋅ ⋅ Pν−1 Qν−1 . Since Qj Mj = Qj , we get Pk+1 . . . Pν−1 − I = −Qk+1 Mk+1 − Pk+1 Qk+2 Mk+2 − Pk+1 Pk+2 Qk+3 Mk+3
(5.29)
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
� 291
− ⋅ ⋅ ⋅ − Pk+1 . . . Pν−2 Qν−1 Mν−1 . Then (5.29) takes the form σ σ σ σ σ σ σ σ σ Ukσ Gν−1σ G0σ B−σ (Bx)Δ = Mkσ (−Qk+1 Mk+1 − Pk+1 Qk+2 Mk+2 − Pk+1 Pk+2 Qk+3 Mk+3
(5.30)
σ σ σ σ − ⋅ ⋅ ⋅ − Pk+1 . . . Pν−2 Qν−1 Mν−1 ).
For j > 0, we have BMj x = BMj B− Bx and then Δ
Δ
(BMj x)Δ = (BMj B− Bx) = (BMj B− ) Bx + Bσ Mjσ B−σ (Bx)Δ or Δ
Bσ Mjσ B−σ (Bx)Δ = (BMj x)Δ − (BMJ B− ) Bx.
(5.31)
Since P0 = B − B and Mj = P0 Mj = B− BMj , by (5.30), we find σ σ Ukσ Gν−1σ G0σ B−σ (Bx)Δ = Mkσ (−Qk+1 B−σ Bσ Mk+1 B−σ (Bx)Δ
σ σ σ − Pk+1 Qk+2 B−σ Bσ Mk+2 B−σ (Bx)Δ
σ σ σ σ − Pk+1 Pk+2 Qk+3 B−σ Bσ Mk+3 B−σ (Bx)Δ
⋅⋅⋅
σ σ σ σ − Pk+1 . . . Pν−2 Qν−1 B−σ Bσ Mν−1 B−σ (Bx)Δ )
σ σ σ = Mkσ (−Qk+1 B−σ (Bvk+1 )Δ − Pk+1 Qk+2 B−σ (Bvk+2 )Δ σ σ σ − Pk+1 Pk+2 Qk+3 B−σ (Bvk+3 )Δ
⋅⋅⋅
σ σ σ − Pk+1 . . . Pν−1 Qν−1 B−σ (Bvν−1 )Δ ) Δ
Δ
σ σ σ + Mkσ (Qk+1 B−σ (BMk+1 B− ) Bx + Pk+1 Qk+2 B−σ (BMk+2 B− ) Bx Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) Bx
⋅⋅⋅
292 � 5 Second kind linear time-varying dynamic-algebraic equations Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) Bx).
Set σ
σ
Nkk+1 = −Mk Qk+1 B Nkk+2 = Nkk+3 =
.. .
−σ
,
σ σ −Mkσ Pk+1 Qk+2 B−σ , σ σ σ −Mkσ Pk+1 Pk+2 Qk+3 B−σ ,
σ σ
σ
σ
Nkν−1 = −Mk Pk+1 . . . Pν−2 Qν−1 B
−σ
.
Therefore, ν−1
Ukσ Gν−1σ G0σ B−σ (Bx)Δ = ∑ Nkj (Bvj )Δ j=k+1
Δ
Δ
σ σ σ + Mkσ (Qk+1 B−σ (BMk+1 B− ) Bx + Pk+1 Qk+2 B−σ (BMk+2 B− ) Bx Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) Bx
⋅⋅⋅
Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) Bx).
Set Δ
Δ
σ σ σ J = Mkσ (Qk+1 B−σ (BMk+1 B− ) Bx + Pk+1 Qk+2 B−σ (BMk+2 B− ) Bx Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) Bx
⋅⋅⋅
Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) )Bx.
Note that BM0 = 0 and ν−1
ν−1
j=0
j=1
Bx = BΠν−1 x + B ∑ Mj x = BΠν−1 x + B ∑ Mj x. Then J takes the form Δ
σ σ σ J = Mkσ (Qk+1 B−σ (BMk+1 B− ) + Pk+1 Qk+2 B−σ (BMk+2 B− ) σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− )
Δ
Δ
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
⋅⋅⋅
Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) )BΠν−1 x Δ
σ σ σ + Mkσ (Qk+1 B−σ (BMk+1 B− ) + Pk+1 Qk+2 B−σ (BMk+2 B− )
Δ
Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) + ⋅ ⋅ ⋅
ν−1
Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) ) ∑ BMj x. j=1
Denote Δ
Δ
σ σ σ J1 = Mkσ (Qk+1 B−σ (BMk+1 B− ) + Pk+1 Qk+2 B−σ (BMk+2 B− ) Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) )BΠν−1 x, Δ
Δ
σ σ σ J2 = Mkσ (Qk+1 B−σ (BMk+1 B− ) + Pk+1 Qk+2 B−σ (BMk+2 B− ) Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
ν−1
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) ) ∑ BMj x. j=1
We have Mj B− BΠν−1 x = Mj P0 Πν−1 = Mj Πν−1 = 0. Hence, Δ
(BMj B− ) BΠν−1 x = −Bσ Mjσ B−σ (BΠν−1 x)Δ + (BMj B− BΠν−1 x)
Δ
= −Bσ Mjσ B−σ (BΠν−1 x)Δ .
Therefore, for J1 we get the following representation: Δ
Δ
σ σ σ J1 = Mkσ (Qk+1 B−σ (BMk+1 B− ) + Pk+1 Qk+2 B−σ (BMk+2 B− ) Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) )BΠν−1 x
σ σ σ σ σ = −Mkσ (Qk+1 B−σ Bσ Mk+1 B−σ + Pk+1 Qk+2 B−σ Bσ Mk+2 B−σ σ σ σ σ + Pk+1 Pk+2 Qk+3 B−σ Bσ Mk+3 B−σ + ⋅ ⋅ ⋅
σ σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ Bσ Mν−1 B−σ )(BΠν−1 x)Δ (BΠν−1 x)
σ σ σ σ σ σ σ σ σ = −Mkσ (Qk+1 Mk+1 + Pk+1 Qk+2 Mk+2 + Pk+1 Pk+2 Qk+3 Mk+3 + ⋅⋅⋅ σ σ σ σ + Pk+1 . . . Pν−2 Qν−1 Mν−1 )B−σ (BΠν−1 x)Δ BΠν−1 x
σ σ σ σ σ σ = −Mkσ (Qk+1 + Pk+1 Qk+2 + Pk+1 Pk+2 Qk+3 + ⋅⋅⋅
�
293
294 � 5 Second kind linear time-varying dynamic-algebraic equations σ σ σ + Pk+1 . . . Pν−2 Qν−1 )B−σ (BΠν−1 x)Δ BΠν−1 x
σ σ = Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BΠν−1 x)Δ BΠν−1 x
σ σ = Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BΠν−1 x)Δ BB− BΠν−1 x,
i. e., σ σ J1 = Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BΠν−1 x)Δ BB− BΠν−1 x.
Set σ
σ
σ
K̃ k = Mk (Pk+1 . . . Pν−1 − I)B
−σ
(BΠν−1 x)Δ B.
Thus, − J1 = K̃ k B u.
Next, Δ
σ σ σ J2 = Mkσ (Qk+1 B−σ (BMk+1 B− ) + Pk+1 Qk+2 B−σ (BMk+2 B− )
Δ
Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
ν−1
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) ) ∑ BMj x j=1
− Δ
σ σ σ = Mkσ (Qk+1 B−σ (BMk+1 B ) + Pk+1 Qk+2 B−σ (BMk+2 B− )
Δ
Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
ν−1
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) )B ∑ Mj B− BMj x. j=1
Let σ
σ
̃ M kj = −Mk (Qk+1 B
−σ
Δ
σ σ (BMk+1 B− ) + Pk+1 Qk+2 B−σ (BMk+2 B− ) Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− ) + ⋅ ⋅ ⋅ Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) )BMj B− B.
Therefore, ν−1
̃ J2 = ∑ M kj Mj x. j=1
Note that Δ
(BMj B− ) BMj B− = −Bσ Mjσ B−σ BMj B− )Δ + (BMj B− BMj B− )
Δ
Δ
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
Δ
Δ
= (BMj B− ) − Bσ Mjσ B−σ (BMj B− ) and Δ
Δ
(BMi B− ) BMj B− = (BMi B− BMj B− ) − Bσ Miσ B−σ (BMj B− ) Δ
= −Bσ Miσ B−σ (BMj B− ) ,
Δ
i ≠ j.
Hence, for j > k, we have σ
σ
̃ M kj = −Mk (Qk+1 B
−σ
Δ
σ σ (BMk+1 B− ) + Pk+1 Qk+2 B−σ (BMk+2 B− )
Δ
Δ
σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− )
⋅⋅⋅
σ σ + Pk+1 . . . Pj−1 Qjσ B−σ (BMj B− )
Δ Δ
σ σ + Pk+1 . . . Pjσ Qj+1 B−σ (BMj+1 B− ) + ⋅ ⋅ ⋅ Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) )BMj B− B Δ
σ σ = −Mkσ (−Qk+1 B−σ Bσ Mk+1 B−σ (BMj B− )
Δ
σ σ σ − Pk+1 Qk+2 B−σ Bσ Mk+2 B−σ (BMj B− )
Δ
σ σ σ σ − Pk+1 Pk+2 Qk+3 B−σ Bσ Mk+3 B−σ (BMj B− ) + ⋅ ⋅ ⋅ Δ
σ σ σ σ − Pk+1 . . . Pj−2 Qj−1 B−σ Bσ Mj−1 B−σ (BMj B− ) σ σ − Pk+1 . . . Pj−1 Qjσ B−σ Bσ Mjσ B−σ (BMj B− ) σ σ + Pk+1 . . . Pj−1 Qjσ (BMj B− )
Δ
Δ
Δ
σ σ σ − Pk+1 . . . Pjσ Qj+1 B−σ Bσ Mj+1 B−σ (BMj B− ) + ⋅ ⋅ ⋅ Δ
σ σ σ σ − Pk+1 . . . Pν−2 Qν−1 B−σ Bσ Mν−1 B−σ (BMj B− ) )B
σ σ σ σ σ σ = Mkσ (Qk+1 + Pk+1 Qk+2 + ⋅ ⋅ ⋅ + Pk+1 . . . Pν−2 Qν−1 Δ
σ σ − Pk+1 . . . Pj−1 Qjσ )B−σ (BMj B− ) B
Δ
σ σ σ σ = Mkσ (I − Pk+1 . . . Pν−1 − Pk+1 . . . Pj−1 Qjσ )B−σ (BMj B− ) B.
For j < k, we get σ
σ
̃ M kj = −Mk (−Qk+1 B
−σ σ
Δ
σ B Mk+1 B−σ (BMj B− )
σ σ σ − Pk+1 Qk+2 B−σ Bσ Mk+2 B−σ (BMj B− )
Δ Δ
σ σ σ σ − Pk+1 Pk+2 Qk+3 B−σ Bσ Mk+3 B−σ (BMj B− ) + ⋅ ⋅ ⋅ Δ
σ σ σ σ − Pk+1 . . . Pν−2 Qν−1 B−σ Bσ Mν−1 B−σ (BMj B− ) )B
Δ
σ σ σ σ σ σ = Mkσ (Qk+1 + Pk+1 Qk+2 + ⋅ ⋅ ⋅ + Pk+1 . . . Pν−2 Qν−1 )B−σ (BMj B− ) B
�
295
296 � 5 Second kind linear time-varying dynamic-algebraic equations Δ
σ σ = Mkσ (I − Pk+1 . . . Pν−1 )B−σ (BMj B− ) B.
Consequently, J = J1 + J2
ν−1
ν−1
j=1
j=1
− ̃ ̃ − ̃ = K̃ k B u + ∑ Mkj Mj x = Kk B u + ∑ Mkj vj
and ν−1
Ukσ Gν−σ G0σ B−σ (Bx)Δ = ∑ Nkj (Bvj )Δ j=k+1
ν−1
− ̃ + K̃ k B u + ∑ Mkj vj . j=1
5.5.4 Terms coming from Ukσ Gν−1σ Cx In this section, we will find representations of the terms Ukσ Gν−1σ Cx, using the decomposition ν−1
x = Πν−1 x + ∑ Mj x. j=0
We have ν−1
Ukσ Gν−1σ Cx = Ukσ Gν−1σ C(Πν−1 x + ∑ Mj x) j=0
ν−1
= Ukσ Gν−1σ CΠν−1 x + ∑ Ukσ Gν−1σ CMj x j=0
=
σ σ Mkσ Pk+1 . . . Pν−1 Gν−1σ CΠν−1 x ν−1 σ σ + ∑ Mkσ Pk+1 . . . Pν−1 Gν−1σ CMj x. j=0
Set σ σ I1 = Mkσ Pk+1 . . . Pν−1 Gν−1σ CΠν−1 x, ν−1
σ σ I2 = ∑ Mkσ Pk+1 . . . Pν−1 Gν−1σ CMj x. j=0
Then
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
� 297
Ukσ Gν−1σ Cx = I1 + I2 .
(5.32)
Denote σ σ
σ
K̂ k = Mk Pk+1 . . . Pν−1 Gν CB . −1σ
−
Hence, σ σ I1 = Mkσ Pk+1 . . . Pν−1 Gν−1σ CΠν−1 x
σ σ = Mkσ Pk+1 . . . Pν−1 Gν−1σ CP0 Πν−1 x
σ σ = Mkσ Pk+1 . . . Pν−1 Gν−1σ CB− BΠν−1 x = M σ Pσ . . . Pσ G−1σ CB− u = K̂ k u, k
k+1
ν−1 ν
i. e., I1 = K̂ k u.
(5.33)
Now, we consider I2 . We have ν−1
σ σ σ σ I2 = Mkσ Pk+1 . . . Pν−1 Gν−1σ CM0 x + Mkσ Pk+1 . . . Pν−1 Gν−1σ C ∑ Mj x. j=0
Set σ σ
σ
Mk0 = Mk Pk+1 . . . Pν−1 Gν C, −1σ
ν−1
σ σ J = Mkσ Pk+1 . . . Pν−1 Gν−1σ C ∑ Mj x. j=0
Hence, I2 = Mk0 M0 x + J = Mk0 u + J. We will simplify J. We have σ σ Mkσ Pk+1 . . . Pν−1 Gν−1σ CMj j
Δ
σ σ σ = Mkσ Pk+1 . . . Pν−1 (−(Q1σ + ⋅ ⋅ ⋅ + Qjσ )Mj − ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMj ). i=1
We will consider the following cases: 1. Let j < k. Then σ σ Mkσ Pk+1 . . . Pν−1 (Q1σ + ⋅ ⋅ ⋅ + Qjσ )Mj = 0.
298 � 5 Second kind linear time-varying dynamic-algebraic equations Moreover, j
Δ
σ σ σ Mkσ Pk+1 . . . Pν−1 )B−σ (BΠi B− ) BMj ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 i=1
j
Δ
σ σ σ σ σ = Mkσ Pk+1 . . . Pν−1 − Qkσ − Qk+1 − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMj ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qk−1 i=1
j
Δ
σ σ σ σ = ∑(Mkσ Pk+1 . . . Pν−1 − Mkσ Pk+1 . . . Pν−1 Qkσ )B−σ (BΠi B− ) BMj i=1 j
Δ
σ σ σ σ = ∑(Mkσ Pk+1 . . . Pν−1 − Mkσ Pk+1 . . . Pν−1 Qkσ )B−σ (BΠi B− ) BMj i=1 j
Δ
σ σ = ∑(Mkσ Pk+1 . . . Pν−1 − Mkσ Qkσ )B−σ (BΠi B− ) BMj i=1 j
Δ
σ σ = ∑(Mkσ Pk+1 . . . Pν−1 − Mkσ )B−σ (BΠi B− ) BMj i=1 j
Δ
σ σ = ∑ Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BΠi B− ) BMj . i=1
Note that, for i < j, we have Δ
Δ
B−σ (BΠi B− ) BMj = B−σ (BΠi B− ) BMj B− BMj Δ
Δ
= B−σ (BΠi B− BMj B− ) BMj − B−σ Bσ Πσi B−σ (BMj B− ) BMj Δ
Δ
= B−σ (BMj B− ) BMj − Πσi B−σ (BMj B− ) BMj Δ
= (I − Πσi )B−σ (BMj B− ) BMj and using (3.47), we find σ σ (Pk+1 . . . Pν−1 − I)(I − Πσi )
σ σ σ σ = Pk+1 . . . Pν−1 − I − Pk+1 . . . Pν−1 Πσi + Πσi
σ σ σ σ σ = Pk+1 . . . Pν−1 − I − Pk+1 . . . Pν−1 + Qiσ + Qi−1 Piσ σ σ + Qi−2 Pi−1 Piσ + ⋅ ⋅ ⋅ + Q0σ P1σ . . . Piσ + Πσi
σ = −I + Πσi + Q0σ P1σ . . . Piσ + Q1σ P2σ . . . Piσ + Q2σ P3σ . . . Piσ + ⋅ ⋅ ⋅ + Qi−1 Piσ + Qiσ
= −I + P1σ P2σ . . . Piσ + Q1σ P2σ . . . Piσ + Q2σ P3σ . . . Piσ σ + ⋅ ⋅ ⋅ + Qi−1 Piσ + Qiσ
σ = −I + P2σ P3σ . . . P − iσ + Q2σ P3σ . . . Piσ + ⋅ ⋅ ⋅ + Qi−1 Piσ + Qiσ = ⋅ ⋅ ⋅ σ σ = −I + Pi−1 Piσ + Qi−1 Piσ + Qiσ = −I + Piσ + Qiσ = −I + I = 0.
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
�
299
Therefore, for i < j, we have Δ
σ σ Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BΠi B− ) BMj = 0.
For i = j, we have Δ
Δ
B−σ (BΠj B− ) BMj = B−σ (BΠj B− ) BMj B− BMj Δ
Δ
= B−σ (BΠj B− BMj B− ) BMj − B−σ Bσ Πσj B−σ (BMj B− ) BMj Δ
= −Πσj B−σ (BMj B− ) BMj and σ σ σ σ (Pk+1 . . . Pν−1 − I)Πσj = Pk+1 . . . Pν−1 Πσj − Πσj
σ σ = Pk+1 . . . Pν−1 − Πσj − Q0σ P1σ . . . Pjσ − Q1σ P2σ . . . Pjσ σ − ⋅ ⋅ ⋅ − Qj−1 Pjσ − Qjσ
σ σ σ = Pk+1 . . . Pν−1 − P1σ . . . Pjσ − Q1σ P2σ . . . Pjσ − ⋅ ⋅ ⋅ − Qj−1 Pjσ − Qjσ σ σ σ σ = Pk+1 . . . Pν−1 − Pjσ − Qjσ = Pk+1 . . . Pν−1 − I.
Thus, Δ
σ σ Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BΠj B− ) BMj
Δ
σ σ = −Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BMj B− ) BMj
and Δ
σ σ σ σ Mkσ Pk+1 . . . Pν−1 Gν−1σ CMj = −Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BMj B− ) BMj .
Set σ
σ
σ
Mkj = −Mk (Pk+1 . . . Pν−1 − I)B
−σ
Δ
(BMj B− ) B.
Therefore, σ σ Mkσ Pk+1 . . . Pν−1 Gν−1σ CMj = Mkj Mj x = Mkj vj .
2.
Let j ≥ k. Then σ σ Mkσ Pk+1 . . . Pν−1 (Q1σ + ⋅ ⋅ ⋅ + Qjσ )Mj
σ σ σ σ = Mkσ Pk+1 . . . Pν−1 (Q1σ + ⋅ ⋅ ⋅ + Qk−1 + Qkσ + Qk+1 + ⋅ ⋅ ⋅ + Qjσ )Mj σ σ σ σ σ = Mkσ Pk+1 . . . Pν−1 Qkσ Mj + Mkσ Pk+1 . . . Pν−1 Qk+1 Mj σ σ + ⋅ ⋅ ⋅ + Mkσ Pk+1 . . . Pν−1 Qjσ Mj
300 � 5 Second kind linear time-varying dynamic-algebraic equations = Mkσ Qkσ Mj = Mkσ Mj . Now, using the computations in the previous case, for j = k we get k
Δ
σ σ σ Mkσ Pk+1 . . . Pν−1 )B−σ (BΠi B− ) BMk ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 j=1
k−1
Δ
σ σ σ = Mkσ Pk+1 . . . Pν−1 )B−σ (BΠi B− ) BMk ∑ (I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 j=1
+
σ σ Mkσ Pk+1 . . . Pν−1 (I
Δ
σ − Qkσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMk k−1
Δ
σ σ = Mkσ (Pk+1 . . . Pν−1 − 1) ∑ B−σ (BΠi B− ) BMk j=1
Δ
σ σ + Mkσ (Pk+1 . . . Pν−1 − 1)B−σ (BΠi B− ) BMk k−1
Δ
σ σ = Mkσ (Pk+1 . . . Pν−1 − 1) ∑ (I − Πσi )B−σ (BMk B− ) BMk j=1
−
σ σ Mkσ (Pk+1 . . . Pν−1
Δ
− 1)B−σ (BMk B− ) BMk Δ
σ σ = −Mkσ (Pk+1 . . . Pν−1 − 1)B−σ (BMk B− ) BMk .
Let σ
σ
σ
Mkk = −Mk (Pk+1 . . . Pν−1 − 1)B
−σ
Δ
(BMk B− ) BMk − Mkσ .
Let j > k. Then j
Δ
σ σ σ Mkσ Pk+1 . . . Pν−1 )B−σ (BΠi B− ) BMj ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 i=1
k−1
Δ
σ σ σ = Mkσ Pk+1 . . . Pν−1 )B−σ (BΠi B− ) BMj ∑ (I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 i=1
j
Δ
σ σ σ + Mkσ Pk+1 . . . Pν−1 )B−σ (BΠi B− ) BMj ∑ (I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 i=k
k−1
Δ
σ σ = Mkσ (Pk+1 . . . Pν−1 − I) ∑ B−σ (BΠi B− ) BMj i=1
j
Δ
σ σ + Mkσ (Pk+1 . . . Pν−1 − I) ∑ B−σ (BΠi B− ) BMj i=k
k−1
Δ
σ σ = Mkσ (Pk+1 . . . Pν−1 − I) ∑ (I − Πσi )B−σ (BMj B− ) BMj i=1
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2 j
Δ
σ σ + Mkσ (Pk+1 . . . Pν−1 − I) ∑ (I − Πσi )B−σ (BMj B− ) BMj i=k
−
σ σ Mkσ (Pk+1 . . . Pν−1
Δ
− I)Πσj B−σ (BMj B− ) BMj j
Δ
σ σ = Mkσ (Pk+1 . . . Pν−1 − I) ∑ (I − Πσi )B−σ (BMj B− ) BMj i=k
−
σ σ Mkσ (Pk+1 . . . Pν−1
Δ
− I)Πσj B−σ (BMj B− ) BMj Δ
σ σ = Mkσ (Pk+1 . . . Pν−1 − I)(I − Πσk )B−σ (BMj B− ) BMj j−1
Δ
σ σ + Mkσ (Pk+1 . . . Pν−1 − I) ∑ (I − Πσi )B−σ (BMj B− ) BMj i=k
−
σ σ Mkσ (Pk+1 . . . Pν−1
Δ
− I)Πσj B−σ (BMj B− ) BMj j−1
Δ
σ σ = Mkσ (Pk+1 . . . Pν−1 − I) ∑ (I − Πσi )B−σ (BMj B− ) BMj i=k
−
σ σ Mkσ (Pk+1 . . . Pν−1
Δ
− I)Πσj B−σ (BMj B− ) BMj .
Note that σ σ (Pk+1 . . . Pν−1 − I)(I − Πσi )
σ σ σ σ = Pk+1 . . . Pν−1 − I + Πσi − Pk+1 . . . Pν−1 Πσi σ σ σ σ = Pk+1 . . . Pν−1 − I + Πσi − Pk+1 . . . Pν−1
σ + Q0σ P1σ . . . Piσ + Q1σ P2σ . . . Piσ + ⋅ ⋅ ⋅ + Qkσ Pk+1 . . . Piσ
σ = −I + Πσi + Q0σ P1σ . . . Piσ + Q1σ P2σ . . . Piσ + ⋅ ⋅ ⋅ + Qkσ Pk+1 . . . P − iσ σ = −I + P1σ . . . Piσ + Q1σ P2σ . . . Piσ + ⋅ ⋅ ⋅ + Qkσ Pk+1 . . . Piσ σ = −I + P2σ . . . Piσ + ⋅ ⋅ ⋅ + Qkσ Pk+1 . . . Piσ = ⋅ ⋅ ⋅ σ σ = −I + Pkσ Pk+1 . . . Piσ + Qkσ Pk+1 . . . Piσ σ = −I + Pk+1 . . . Piσ
and σ σ (Pk+1 . . . Pν−1 − I)Πσj
σ σ σ σ = Pk+1 . . . Pν−1 − I + (Pk+1 . . . Pν−1 − I)(Πσj − I) σ σ σ = Pk+1 . . . Pν−1 − I + I − Pk+1 . . . Pjσ σ σ σ = Pk+1 . . . Pν−1 − Pk+1 . . . Pjσ .
Consequently,
� 301
302 � 5 Second kind linear time-varying dynamic-algebraic equations j
Δ
σ σ σ Mkσ Pk+1 . . . Pν−1 )B−σ (BΠi B− ) BMj ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 i=1
j−1
Δ
σ = Mkσ ∑ (Pk+1 . . . Piσ − I)B−σ (BMj B− ) BMj i=k
Δ
σ σ σ − Mkσ (Pk+1 . . . Pν−1 − Pk+1 . . . Pjσ )Πσj B−σ (BMj B− ) BMj .
Then σ σ Mkσ Pk+1 . . . Pν−1 Gν−1σ CMj
= −Mkσ Mj
j−1
Δ
σ − Mkσ ∑ (Pk+1 . . . Piσ − I)B−σ (BMj B− ) BMj i=k
+
σ σ Mkσ (Pk+1 . . . Pν−1
Δ
σ − Pk+1 . . . Pjσ )Πσj B−σ (BMj B− ) BMj
= (−Mkσ j−1
Δ
σ − Mkσ ∑ (Pk+1 . . . Piσ − I)B−σ (BMj B− ) B i=k
Δ
σ σ σ + Mkσ (Pk+1 . . . Pν−1 − Pk+1 . . . Pjσ )Πσj B−σ (BMj B− ) B)Mj .
Set σ
Mkj = −Mk
j−1
Δ
σ − Mkσ ∑ (Pk+1 . . . Piσ − I)B−σ (BMj B− ) B i=k
+
σ σ Mkσ (Pk+1 . . . Pν−1
Δ
σ − Pk+1 . . . Pjσ )Πσj B−σ (BMj B− ) B.
Therefore, σ σ Mkσ Pk+1 . . . Pν−1 Gν−1σ CMj = Mkj vj .
From here, ν−1
I2 = Mk0 v0 + ∑ Mkj vj j=1
and
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2 ν−1
Ukσ Gν−1σ Cν = I1 + I2 = K̂ k u + ∑ Mkj vj .
�
303
(5.34)
j=1
We multiply the equation (5.17) by Ukσ and we find Ukσ Gν−1σ G0σ B−σ (Bx)Δ = Ukσ Gν−1σ Cx + Ukσ Gν−1σ f . Hence, using (5.32) and (5.34), we arrive at ν−1
ν−1
̃k B− u + ∑ M ̃ ∑ Nkj (Bvj )Δ + K kj vj j=0
j=k+1
ν−1
σ −1σ = K̂ k u + ∑ Mkj vj + Uk Gν f , j=0
whereupon ν−1
ν−1
j=k+1
j=0
− σ −1σ ̂ ̃ ∑ Nkj (Bvj )Δ + (K̃ k B − Kk ) + ∑ (Mkj − Mkj )vj = Uk Gν f .
Note that σ
σ
σ
̃ M kj − Mkj = Mk (I − Pk+1 . . . Pν−1 )B
−σ
Δ
(BMj B− ) B
Δ
σ σ + Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BMj B− ) B
=0 and σ
σ
σ
̃ M kk − Mkk = Mk (I − Pk+1 . . . Pν−1 )B
−σ
Δ
(BMk B− ) B
Δ
σ σ + Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BMk B− ) B + Mkσ
= Mkσ . Therefore, ν−1
ν−1
j=k+1
j=k+1
− σ −1σ ̂ ̃ Mkσ vk + ∑ Nkj (Bvj )Δ + (K̃ k B − Kk ) + ∑ (Mkj − Mkj )vj = Uk Gν f ,
vk = Mk vk . Since BMk x = BMk B− Bx and
(5.35)
BMk B− = BΠk−1 B− − BΠk B−, and BMk B− and Bx are in C1(I), we have that Bvk = BMk x is C1 for any k ≥ 1.

5.5.5 Decoupling

In this section, we will show that the matrix chain defining the tractability index leads to the following characterization of the solutions of regular second kind linear time-varying dynamic-algebraic equations (5.1) of arbitrary index. We will use the notation from the previous sections of this chapter.

Theorem 5.3. Assume that the equation (5.1) is regular with tractability index ν on I and f is a sufficiently smooth function. Then x ∈ CB1(I) solves (5.1) if and only if it can be written as x = B− u + vν−1 + ⋅ ⋅ ⋅ + v1 + v0,
(5.36)
where u ∈ C 1 (I) solves the inherent equation (5.23) and vk ∈ C 1 (I) satisfies (5.35). Proof. If x ∈ C 1 (I) solves (5.1), then the assertion follows by the computations in Sections 5.5, 5.5.2, 5.5.3 and 5.5.4. Now, we will prove the converse assertion. We will show that the process described in Sections 5.5, 5.5.2, 5.5.3 and 5.5.4 is reversible. Note that the identity Bv0 = BM0 v0 = 0 implies that Bx = BB− u + Bvν−1 + ⋅ ⋅ ⋅ + Bv1 = u + Bvν−1 + ⋅ ⋅ ⋅ + Bv1 ∈ C 1 (I), where we have used that u ∈ im BΠν−1 B− , i. e., u = BΠν−1 B− u and BB− u = BB− BΠν−1 B− u = BP0 Πν−1 B− u = BΠν−1 B− u = u. Now, using (3.58), (3.59), (3.60) and the decomposition (5.36), we find BΠν−1 x = BΠν−1 B− u + BΠν−1 vν−1 + ⋅ ⋅ ⋅ + BΠν−1 v1 + BΠν−1 v0
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
�
305
= u + BΠν−1 Mν−1 vν−1 + ⋅ ⋅ ⋅ + BΠν−1 M1 v1 + BΠν−1 M0 v0 = u, i. e., u = BΠν−1 x. Now, we multiply (5.36) by Mk and we obtain Mk x = Mk B− u + Mk vν−1 + ⋅ ⋅ ⋅ + Mk vk + ⋅ ⋅ ⋅ + Mk v1 + Mk v0
= Mk B− u + Mk Mν−1 vν−1 + ⋅ ⋅ ⋅ + Mk Mk vk + ⋅ ⋅ ⋅ + Mk M1 v1 + Mk M0 v0 = Mk B− u + vk = Mk B− BΠν−1 B− u + vk = Mk Πν−1 B− u + vk = vk .
Note that the inherent equation (5.23) is restated by the equation (5.17). Then we multiply (5.17) by Bσ Πσν−1 and we find Bσ Πσν−1 Gν−1σ G0σ B−σ (Bx)Δ = Bσ Πσν−1 Gν−1σ Cx + Bσ Πσν−1 Gν−1σ f , which we premultiply by B−σ and using that B− BΠν−1 = Πν−1 , we find B−σ Bσ Πσν−1 Gν−1σ G0σ B−σ (Bx)Δ = B−σ Bσ Πσν−1 Gν−1σ Cx + B−σ Bσ Πσν−1 Gν−1σ f , or Πσν−1 Gν−1σ G0σ B−σ (Bx)Δ = Πσν−1 Gν−1σ Cx + Πσν−1 Gν−1σ f ,
(5.37)
from where, using (5.35) and the computations in Section (5.5.3), we get Ukσ Gν−1σ Aσ (Bx)Δ = Ukσ Gν−1σ Cx + Ukσ Gν−1σ f ,
k = ν − 1, . . . , 0.
Since Qk Mk = Qk , we obtain Qk Uk = Qk Mk Pk+1 . . . Pν−1 = Qk Pk+1 . . . Pν−1 = Vk . We multiply (5.38) by Qkσ and we find Qkσ Ukσ Gν−1σ Aσ (Bx)Δ = Qkσ Ukσ Gν−1σ Cx + Qkσ Ukσ Gν−1σ f ,
k = ν − 1, . . . , 0,
(5.38)
306 � 5 Second kind linear time-varying dynamic-algebraic equations or Vkσ Gν−1σ Aσ (Bx)Δ = Vkσ Gν−1σ Cx + Vkσ Gν−1σ f ,
k = ν − 1, . . . , 0.
(5.39)
Note that ν−1
I = Πν−1 + ∑ Vk . k=0
Then, by (5.37) and (5.39), we arrive at ν−1
ν−1
ν−1
k=0
k=0
k=0
(Πσν−1 + ∑ Vkσ )Gν−1σ Aσ (Bx)Δ = (Πσν−1 + ∑ Vkσ )Gν−1σ Cx + (Πσν−1 + ∑ Vkσ )Gν−1σ f or Gν−1σ Aσ (Bx)Δ = Gν−1σ Cx + Gν−1σ f , or Aσ (Bx)Δ = Cx + f . This completes the proof. Example 5.4. Let 𝕋 = 2ℕ0 and A, B, C be as in Example 4.4. Then 1 0 G0 (t) = A(t)B(t) = (0 0 0
0 g(t) 0 0 0
0 0 1 1 ) (0 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0) , 0 0
t ∈ 𝕋.
0 0 Q0 (t) = M0 (t) = (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0) , 0 1
1 0 = (0 0 0
0 h(t)g(t) 0 0 0
0 h(t) 0
0 0 1
0 0 0
As in Example 4.4, we find 0 0 0 1 0
t ∈ 𝕋,
0 0) 0
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
and 1 0 P0 (t) = Π0 (t) = I − M0 (t) = (0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0) , 0 0
t ∈ 𝕋.
Next, G1 (t) = G0 (t) + C0 (t)Q0 (t) 1 0 = (0 0 0
0 h(t)g(t) 0 0 0
0 0 +(0 −1 1
0 0 −1 1 0
0 0 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0) 0 0
−1 1 0 0 0
1 0 0 0 0 ) (0 0 0 l(t) 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
0 0 0 0 0) + (0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
−1 1 0 0 0
1 0 = (0 0 0
0 h(t)g(t) 0 0 0
0 0 1 0 0
0 0 0 0 0
1 0 = (0 0 0
0 h(t)g(t) 0 0 0
0 0 1 0 0
−1 1 0 0 0
0 0
0 0
1 0 0 ), 0 l(t)
t ∈ 𝕋.
As in Example 4.4, we get 0 0
Q1 (t) = (0 0 (0 and
0 0 0
0 0 0
1
1 − h(t)g(t)
0 1 0
0 0
0) , 0 0)
t ∈ 𝕋,
0 0 0) 0 1 1 0 0 ) 0 l(t)
� 307
308 � 5 Second kind linear time-varying dynamic-algebraic equations 1 0 Π1 (t) = I − Q1 (t) = (0 0 0 1 0 = (0
0 1
0 0
0 0 0
0 (0
1 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 ( 0) − 0 0 0 1 (0
0 0 0 1 0 0 0
−1
1 h(t)g(t)
0 0 0
0) , 0 0)
0 0 0 0 0
0 0 0 0 0
1 1 − h(t)g(t) 0 1 0
0 0
0) 0 0)
t ∈ 𝕋.
Next, 1 R(t) = (0 0
0 1 0
0 0) , 1
t ∈ 𝕋,
is a projector along ker A(t), t ∈ 𝕋, and 1 0 B− (t) = (0 0 (0
0
1 h(t)
0 0 0
0 0 1) , 0 0)
t ∈ 𝕋,
and Δ
(BΠ0 B− ) (t) = 0,
t ∈ 𝕋.
Further, 1 0 M1 (t) = Π0 (t) − Π1 (t) = (0 0 0 1 0 = (0 0 (0
and
0 1
0 0 0
0 0 1 0 0
−1
1 h(t)g(t)
0 −1 0
0 1 0 0 0
0 0 1 0 0 0 0
0) , 0 0)
0 0 0 0 0
1 0 0 0 0) − (0 0 0 0 (0 t ∈ 𝕋,
0 1
0 0 0
0 0 1 0 0
−1
1 h(t)g(t)
0 0 0
0 0
0) 0 0)
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
1 − B(t)Π1 (t)B (t) = (0 0
0 h(t) 0
0 0 1
0 0 0
1 = (0 0
0 h(t) 0
0 0 1
0 0 0
1 0 0 0) (0 0 0 (0 1 0 0 0) (0 0 0 (0
0 1
0 0
0 0 0
1 0 0
0
−1
1 h(t)g(t)
0 0 0
0 0
1 0 0) (0 0 0 0) (0
0 0 1 1 ) = (0 0 0 0)
1 h(t)
0 0 0
0 1 0
0 0) , 1
whereupon Δ
(BΠ1 B− ) (t) = 0,
t ∈ 𝕋.
Let c11 (t) c21 (t) C1 (t) = (c31 (t) c41 (t) c51 (t)
c12 (t) c22 (t) c32 (t) c42 (t) c52 (t)
c13 (t) c23 (t) c33 (t) c43 (t) c53 (t)
c14 (t) c24 (t) c34 (t) c44 (t) c54 (t)
c15 (t) c25 (t) c35 (t)) . c45 (t) c55 (t)
Hence, C1 (t)Π1 (t) = (C0 (t) + C1σ (t)M1σ (t))Π0 (t) 0 0 = (( 0 −1 1
0 0 −1 1 0
c11 (2t) c21 (2t) + (c31 (2t) c41 (2t) c51 (2t) 1 0 × (0 0 (0
0 1
0 0 0
0 1 0 0 0
c12 (2t) c22 (2t) c32 (2t) c42 (2t) c52 (2t) 0 0
1 0 0
1 0 0 ) 0 l(t)
−1 1 0 0 0
−1
c13 (2t) c23 (2t) c33 (2t) c43 (2t) c53 (2t)
1 h(2t)g(2t)
0 −1 0
c14 (2t) c24 (2t) c34 (2t) c44 (2t) c54 (2t)
c15 (2t) c25 (2t) c35 (2t)) c45 (2t) c55 (2t)
0 1 0 0 0)) (0 0 0 ))
0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0) 0 0
�
0
1 h(t)
0 0 0
309 0 0 1) 0 0)
310 � 5 Second kind linear time-varying dynamic-algebraic equations 0 0 =(0 −1 1
0 0 −1 1 0
0 1 0 0 0
c11 (2t) c21 (2t) + (c31 (2t) c41 (2t) c51 (2t) 0 0 =(0 −1 1
0 0 −1 1 0
0 0 0 0 0
c12 (2t) c22 (2t) c32 (2t) c42 (2t) c52 (2t) 0 1 0 0 0
c11 (2t) c21 (2t) = ( c31 (2t) c41 (2t) − 1 c51 (2t) + 1
0 0 0) 0 0
0 0 0 0 0
c13 (2t) c23 (2t) c33 (2t) c43 (2t) c53 (2t)
c14 (2t) c24 (2t) c34 (2t) c44 (2t) c54 (2t)
0 c11 (2t) 0 c21 (2t) 0) + (c31 (2t) 0 c41 (2t) 0 c51 (2t)
c12 (2t) c22 (2t) c32 (2t) − 1 c42 (2t) + 1 c52 (2t)
c13 (2t) c23 (2t) + 1 c33 (2t) c43 (2t) c53 (2t)
c12 (t) c22 (t) c32 (t) c42 (t) c52 (t)
c13 (t) c23 (t) c33 (t) c43 (t) c53 (t)
c14 (t) c24 (t) c34 (t) c44 (t) c54 (t)
c11 (t)
c12 (t)
c13 (t)
(−c11 (t) +
c33 (t)
(−c31 (t) +
(
c51 (t)
c22 (t)
c32 (t)
c42 (t) c52 (t)
c23 (t)
c43 (t) c53 (t)
c12 (2t) c22 (2t) c32 (2t) c42 (2t) c52 (2t) 0 0 0 0 0
c13 (2t) c23 (2t) c33 (2t) c43 (2t) c53 (2t)
(−c21 (t) + (−c41 (t) + (−c51 (t) +
0 1 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0) 0 0
0 0 0) 0 0
0 0 0) 0 0
1 c15 (t) 0 c25 (t) c35 (t)) (0 c45 (t) 0 c55 (t) (0
c11 (t) c21 (t) = (c31 (t) c41 (t) c51 (t) c21 (t) ( ( c = ( 31 (t) c41 (t)
c15 (2t) 1 c25 (2t) 0 c35 (2t)) (0 c45 (2t) 0 c55 (2t) 0
c12 (t) ) h(t)g(t) c22 (t) ) h(t)g(t) c32 (t) ) h(t)g(t) c42 (t) ) h(t)g(t) c52 (t) ) h(t)g(t)
0 1
0 0 0
0 0 1 0 0
−1
1 h(t)g(t)
0
0 ) 0) ), 0
0 0 0
0 0
0) 0 0)
t ∈ 𝕋.
0 )
From here, c11 (t) = c11 (2t), c12 (t) = c12 (2t), c13 (t) = c13 (2t), c (t) c11 (t) = 12 , c21 (t) = c21 (2t), c22 (t) = c22 (2t), c23 (t) = c23 (2t) + 1, h(t)g(t) c (t) c21 (t) = 22 , c31 (t) = c31 (2t), c32 (t) = c32 (2t) − 1, c33 (t) = c33 (2t), h(t)g(t) c32 (2t) c31 (t) = , c41 (t) = c41 (2t) − 1, c42 (t) = c42 (2t) + 1, c43 (t) = c43 (2t), h(2t)g(4t)
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
c42 (t) , h(t)g(t) c (t) c51 (t) = 52 , h(t)g(t)
c41 (t) =
c51 (t) = c51 (2t) + 1,
c52 (t) = c52 (2t),
� 311
c53 (t) = c53 (2t),
t ∈ 𝕋.
Take c44 (t) =
1 , h(t)g(t)l(t)
t ∈ 𝕋.
Then c11 (t) c21 (t) C1 (t)Q1 (t) = (c31 (t) c41 (t) c51 (t) 0 0 = (0 0 (0
0 0 0 0 0
c12 (t) c22 (t) c32 (t) c42 (t) c52 (t) 0 0
c13 (t) c23 (t) c33 (t) c43 (t) c53 (t)
c11 (t) c22 (t) − h(t)g(t)
0 0 0
0 c44 (t) 0
c14 (t) c24 (t) c34 (t) c44 (t) c54 (t) 0 0
0) , 0 0)
0 c15 (t) 0 c25 (t) c35 (t)) (0 c45 (t) 0 c55 (t) (0
0 0 0 0 0
0 0 0 0 0
1 1 − h(t)g(t) 0 1 0
t ∈ 𝕋,
and G2 (t) = G1 (t) + C1 (t)Q1 (t)
0 1 0 0 0 ) + (0 0 0 l(t) (0
1 0 = (0 0 0
0 h(t)g(t) 0 0 0
0 0 1 0 0
−1 1 0 0 0
1 0 = (0
0 h(t)g(t)
0 0
c11 (t) − 1 c22 (t) 1 − h(t)g(t)
0 (0
0 0 0
1 0 0
0 c44 (t) 0
1 0
0 ), 0 l(t))
0 0 0 0 0
0 0 0 0 0
c11 (t) c22 (t) − h(t)g(t) 0 c44 (t) 0
0 0
0) 0 0)
t ∈ 𝕋.
We compute det G2 (t) = h(t)g(t)l(t)c44 (t) = h(t)g(t)l(t)
1 = 1 ≠ 0, h(t)g(t)l(t)
t ∈ 𝕋.
0 0
0) 0 0)
312 � 5 Second kind linear time-varying dynamic-algebraic equations As in Example 4.4, we obtain 1 0 G−1 (t) = (0 2
0
1 h(t)g(t)
0
0 (0
0 0
0 0 1
0 0
0 0
0 0
1 (1 l(t)h(t)g(t)
0
l(t)h(t)g(t) 0
0
−
c22 (t) ) ) , h(t)g(t)
1 l(t)
t ∈ 𝕋.
)
Hence, B(t)Π1 (t)G2−1 (t) 1 = (0 0
0 h(t) 0
1 0 × (0
0 0 1
0 0 0
0
0 0 0
1 h(t)g(t)
0
0 (0
1 0 0 0) (0
0 0
1 0 0 0) (0
0 0 1
0 0 0
0
1 = (0 0
0
0 0
0 0
0 0) ,
1
0
0 0 −1 1 0
1 h(t)g(t)
0 0 0
0) 0 0)
1 (1 l(t)h(t)g(t)
0
c22 (t) ) ) h(t)g(t)
1 l(t)
0
1 h(t)g(t)
0 0 0
0 (0
−
0 0
1 0 0
)
−l(t)h(t)g(t) 1 (h(t))2 (g(t))2
0 0 0
0 0
0) 0 0)
0
1 Bσ (t)Πσ1 (t)G2−1σ (t) = (0 0 0 0 − C(t)B (t) = ( 0 −1 1
0 0
−1
0 0
l(t)h(t)g(t) 0
0 h(t) 0
0
1 0 0
0
1 = (0 0
1 g(t)
0 0 0
0 0
0 0
1
0 0
0 (0
0 1
0
1 g(2t)
0
0 1 0 0 0
−1 1 0 0 0
0 0 1
0 0
0
0 0) , 0
1 1 0 0 0 ) (0 0 0 l(t) (0
0
1 h(t)
0 0 0
0 0 0 0 ) ( 0 = 1 −1 0 0) ( 1
0 0
1 − h(t) 1 h(t)
0
0 1 0) , 0 0)
5.5 Decoupling of second kind linear dynamic-algebraic equations of index ≥ 2
�
313
B(t)σ Πσ1 (t)G2−1σ (t)C(t)B− (t) 1 = (0 0
0
1 g(2t)
0
0 0 1
0 0
0
0 0 0 ( 0 0) −1 0 (1
0 0
0 1 1 ) 0 = (0 0 0 0)
1 − h(t) 1 h(t)
0
0 0 0
0
1 g(2t) ) ,
t ∈ 𝕋.
0
Thus, the inherent equation for the considered equation takes the form u1Δ (t)
1 Δ 0 (u2 (t)) = ( 0 u3Δ (t)
0 0 0
0
1 u1 (t) 1 0 g(2t) ) (u2 (t)) + ( u3 (t) 0 0
0
0 0
1 g(2t)
0
1
0 0
0
f1 (t) 0 f2 (t) 0) (f3 (t)) , 0 0 0
t ∈ 𝕋,
or u1Δ (t)
u1 (t) f1 (t) 1 1 (u2Δ (t)) = ( g(2t) u3 (t)) + ( g(2t) f2 (t)) , 0 f3 (t) u3Δ (t)
t ∈ 𝕋,
or u1Δ (t) = u1 (t) + f1 (t), 1 1 u2Δ (t) = u (t) + f (t), g(2t) 3 g(2t) 2 u3Δ (t) = f3 (t),
t ∈ 𝕋.
Next, 1 0
M1 (t)G2−1 (t) = (0 0 (0
0 1
0 0 0 1 0
× (0 0 (0
0 0
−1
1 h(t)g(t)
1 0 0
0 −1 0
0
1 h(t)g(t)
0 0 0
0 0 1
0 0
0 0
0) 0 0) 0 0
0
l(t)h(t)g(t) 0
0 0
1 (1 l(t)h(t)g(t)
0
−
1 l(t)
c22 (t) ) ) h(t)g(t)
)
314 � 5 Second kind linear time-varying dynamic-algebraic equations 1 0 = (0
0
1 h(t)g(t)
0 0 0
0 (0
1 0 M σ (t)G−1σ (t) = (0 1
1
1
1
0 0 0
1 0 0
0
0 0
−l(2t)h(2t)g(2t) l(2t)
0 0 0
0 0
1 − h(t) 1 h(t)
−1 (1
0
l(2t)h(2t)g(2t)
( =( (
0) , 0 0)
−l(2t)h(2t)g(2t) l(2t)
1 h(2t)g(2t)
0 0 ( 0 ×
0 −l(t)h(t)g(t) 0
0 0
0 0
1 h(2t)g(2t)
0 (0
−l(t)h(t)g(t) l(t)
1 0 0
0
0 (0
1 0 M σ (t)G−1σ (t)C(t)B− (t) = (0
0 0
0 0
0 −l(2t)h(2t)g(2t) 0
1 0 0
0) , 0 0) 0 0
0 −l(2t)h(2t)g(2t) 0
0) 0 0)
0 1 0)
0 0)
− l(2t)h(2t)g(2t) h(t) l(2t) h(t) 1 − h(t) − l(2t)h(2t)g(2t) h(t)
−l(2t) 0
l(2t)h(2t)g(2t) 0 (
0
0
1 h(2t)g(2t) )
0
0 0
), ) )
Therefore, the equation (5.28) can be rewritten as follows: 1 0 (0 0 (0
0 1
0 0 0
0 0
1 0 0
−1
1 h(2t)g(2t)
0 −1 0
l(t)h(t)g(t)
( = −( (
−l(2t) 0
l(2t)h(2t)g(2t) 0 (
0 0 0) v1 (t)
0 0) − l(t)h(t)g(t) h(2t)
l(2t) h(t) 1 − h(t) − l(2t)h(2t)g(2t) h(t)
0
0
1 h(2t)g(2t) )
0
0 0
) u(t) ) )
t ∈ 𝕋.
5.6 Advanced practical problems �
1 0
0
0 0
1 h(2t)g(2t)
+ (0 0 (0
0 0 0
1 0 0
0 0
−l(2t)h(2t)g(2t) l(2t)
0) f (t), 0 0)
0 −l(2t)h(2t)g(2t) 0
t ∈ 𝕋.
Moreover, N01 (t) = 0,
K̃ 0 (t) = 0,
̃ M 01 (t) = 0,
t ∈ 𝕋.
Therefore, the first equation of (4.39) vanishes. Exercise 5.4. Let 𝕋 = 3ℕ + 2 and
1. 2. 3.
t2 0 A(t) = ( 0 0 0
0 1 0 0 0
0 0 C(t) = ( 0 −1 1
0 0 −1 1 0
0 0 t + 1) , 0 0 0 1 0 0 0
−1 1 0 0 0
t B(t) = (0 0 1 0 0 ), 0 t+2
0 t 0
0 0 1
0 0 0
0 0) , 0
t ∈ 𝕋.
Prove that the equation (5.1) is (σ, 1)-regular with tractability index 2. Write the inherent equation of the equation (5.1). Write the equation (5.28).
5.6 Advanced practical problems Problem 5.1. Let 𝕋 = 3ℕ0 and 1 A(t) = ( t + 3 1 − 4t 1. 2. 3.
t2 − 1 0 0
1+t 0 ), 1
Find the projector P along ker A. Write the system (5.2). Write the system (5.3).
t+2 C(t) = ( t 0
0 0 1
t−3 1 ), t+2
t ∈ 𝕋.
315
316 � 5 Second kind linear time-varying dynamic-algebraic equations Problem 5.2. Let 𝕋 = 3ℕ and
1. 2. 3.
−1 A(t) = ( 0 0
t 0 2t
−1 P(t) = ( 0 −1
0 1 0
4 8
3 C(t) = (t + 1 1
),
−1 1 ), t+4
0 t 4
−2 1 ), 2
t ∈ 𝕋.
Find the matrix Aσ1 (t) = A(t) + C(t)Q(t), t ∈ 𝕋. Find the equation (5.4). Find the equation (5.10).
Problem 5.3. Let 𝕋 = 7ℕ0 and 2 −1 A(t) = ( 2 0 0
2 0 0 1 0
0 −t t ), 1 1
0 0 C(t) = ( 0 −t 1
0 0 3 t 0
0 7t + 1 t 0 1
t2 + 1 B(t) = ( 0 0 t 2t 0 1 −1
−t 0 0 ), 1 −1
0 t+3 0
0 0 t
0 0 0
0 0) , 0
0 0 3
0 0 0
0 0) , 0
t ∈ 𝕋.
Find the representation (5.14). Problem 5.4. Let 𝕋 = ℕ + 11 and t 0 A(t) = (0 0 0
0 1 + t2 0 0 0
0 0 C(t) = ( 0 −1 1
0 0 −1 1 0
0 0 −1) , 0 0 0 1 0 0 0
−1 1 0 0 0
t B(t) = (0 0 1 0 0 ), 0 t+2
0 t + t2 0
t ∈ 𝕋.
5.7 Notes and references
1. 2. 3.
� 317
317 Prove that the equation (5.1) is (σ, 1)-regular with tractability index 2. Write the inherent equation of the equation (5.1). Write the equation (5.28).
Problem 5.5. Let 𝕋 = 3ℕ and
1. 2. 3. 4.
1 0 A(t) = (0 0 0
0 t2 + t 0 0 0
0 0 C(t) = ( 0 −1 1
0 0 −1 1 0
0 0 1) , 0 0 0 1 0 0 0
−1 1 0 0 0
t B(t) = (0 0 1 0 0 ), 0 t+2
0 −t 0
0 0 1
0 0 0
0 0) , 0
t ∈ 𝕋.
Prove that the equation (5.1) is (σ, 1)-regular with tractability index 3. Write the inherent equation of the equation (5.1). Write the equation (5.28). Write the equations (5.35).
5.7 Notes and references

In this chapter, we define second kind linear time-varying dynamic-algebraic equations and we classify them as properly stated, algebraically nice at level 0 and k ≥ 1, nice at level 0 and k ≥ 1, and regular with tractability index 0 and ν ≥ 1. We deduce a decomposition of the solutions of the defined equations and show that the described decomposition process is reversible.
6 Third kind linear time-varying dynamic-algebraic equations

Suppose that 𝕋 is a time scale with forward jump operator σ and delta differentiation operator Δ. Let I ⊆ 𝕋. In this chapter, we will investigate the following linear time-varying dynamic-algebraic equation:

A(t)(Bx)Δ(t) = C(t)xσ(t) + f(t),
t ∈ I,
(6.1)
where A, B, C : I → Mm×m and f : I → ℝm are given. The equation (6.1) will be said to be a third kind linear time-varying dynamic-algebraic equation. We will consider the solutions of (6.1) within the space CB1(I), defined in Chapter 5. Without loss of generality, we remove the explicit dependence on t.
6.1 A classification In this section, we will give a classification of the dynamic-algebraic equation (6.1). Definition 6.1. The matrix pair (A, B) will be said to be the leading term of the dynamicalgebraic equation (6.1). Definition 6.2. We will say that the third kind linear time-varying dynamic-algebraic equation (6.1) is properly stated if its leading term (A, B) is properly stated. Definition 6.3. We will say that the third kind linear time-varying dynamic-algebraic equation (6.1) is (1, σ)-properly stated if its leading term (A, Bσ ) is properly stated. Definition 6.4. We will say that the third kind linear time-varying dynamic-algebraic equation (6.1) is algebraically nice at level 0 if its leading term (A, B) is properly stated. Definition 6.5. We will say that the third kind linear time-varying dynamic-algebraic equation (6.1) is (1, σ)-algebraically nice at level 0 if its leading term (A, B) is (1, σ)properly stated. Definition 6.6. We will say that the third kind linear time-varying dynamic-algebraic equation (6.1) is algebraically nice at level k ≥ 1 if it is algebraically nice at level k − 1 and (A5) and (A6) hold for i = k for some admissible up to level k projector sequence Q0 , . . . , Qk−1 . Definition 6.7. We will say that the third kind linear time-varying dynamic-algebraic equation (6.1) is algebraically nice at level k ≥ 1 if it is algebraically nice at level k − 1 and (A5) and (A6) hold for i = k for some admissible up to level k projector sequence Q0 , . . . , Qk−1 . https://doi.org/10.1515/9783111377155-006
6.2 A particular case
�
319
Definition 6.8. We will say that the third kind linear time-varying dynamic-algebraic equation (6.1) is nice at level k if it is algebraically nice at level k and there exists an admissible choice of Qk . The ranks ri of Gi , i ∈ {0, . . . , k}, are said to be characteristic values of (6.1). Definition 6.9. The third kind linear time-varying dynamic-algebraic equation (6.1) is said to be regular with tractability index 0 if both A and B are invertible. Definition 6.10. The third kind linear time-varying dynamic-algebraic equation (6.1) is said to be (1, σ)-regular with tractability index 0 if both A and Bσ are invertible. Definition 6.11. The third kind linear time-varying dynamic-algebraic equation (6.1) is said to be regular with tractability index ν if there exists an admissible projector sequence {Q0 , . . . , Qν−1 } for which the matrices Gi are singular for 0 ≤ i < ν and Gν is nonsingular. Definition 6.12. The third kind linear time-varying dynamic-algebraic equation (6.1) is said to be (1, σ)-regular with tractability index ν if there exists an admissible projector sequence {Q0 , . . . , Qν−1 } for which the matrices Gi are singular for 0 ≤ i < ν and Gν is nonsingular. Definition 6.13. The third kind linear time-varying dynamic-algebraic equation (6.1) is said to be regular if it is regular with any nonnegative tractability index. Definition 6.14. The third kind linear time-varying dynamic-algebraic equation (6.1) is said to be (1, σ)-regular if it is (1, σ)-regular with any nonnegative tractability index. Note that the tractability index of (6.1) is ν if and only if it is nice up to level ν − 1, the matrices Gi , 0 ≤ i < ν, are singular and Gν are nonsingular. Since the dimension of the direct sum N0 ⊕ ⋅ ⋅ ⋅ ⊕ Ni increases, the tractability index can not exceed m.
6.2 A particular case In this section, we will investigate the equation Ax Δ = Cx σ + f .
(6.2)
We will show that it can be reduced to the equation (6.1). Let P be a C 1 -projector along ker A. Then AP = A
320 � 6 Third kind linear time-varying dynamic-algebraic equations and from here Ax Δ = APx Δ
= A(Px Δ )
= A((Px)Δ − PΔ x σ )
= A(Px)Δ − APΔ x σ . Then the equation (6.2) can be rewritten in the following manner: A(Px)Δ − APΔ x σ = Cx σ + f or A(Px)Δ = (APΔ + C)x σ + f . Denoting C1 = APΔ + C, we find A(Px)Δ = C1 x σ + f
(6.3)
and, therefore, we can consider the equation (6.2) as a particular case of the equation (6.1).

Example 6.1. Let 𝕋 = 2^{ℕ_0} and the matrices A and C be as in Example 4.1 and Example 5.1. Then σ(t) = 2t, t ∈ 𝕋, and

P(t) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1-t \end{pmatrix}, \quad t ∈ 𝕋,

is a projector along ker A. Also, by Example 5.1, we have

P^Δ(t) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}.

Hence,

C_1(t) = A(t)P^Δ(t) + C(t)
= \begin{pmatrix} 1 & 0 & 0 \\ 0 & -t & 1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix} + \begin{pmatrix} -t & 1 & t \\ 0 & 1 & 2t \\ t & 0 & 1 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix} + \begin{pmatrix} -t & 1 & t \\ 0 & 1 & 2t \\ t & 0 & 1 \end{pmatrix}
= \begin{pmatrix} -t & 1 & t \\ 0 & 1 & 2t-1 \\ t & 0 & 1 \end{pmatrix}.

The equation (6.2) can be written as follows:

\begin{pmatrix} 1 & 0 & 0 \\ 0 & -t & 1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1^Δ(t) \\ x_2^Δ(t) \\ x_3^Δ(t) \end{pmatrix} = \begin{pmatrix} -t & 1 & t \\ 0 & 1 & 2t \\ t & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1^σ(t) \\ x_2^σ(t) \\ x_3^σ(t) \end{pmatrix} + \begin{pmatrix} f_1(t) \\ f_2(t) \\ f_3(t) \end{pmatrix}, \quad t ∈ 𝕋,

or

x_1^Δ(t) = -t x_1^σ(t) + x_2^σ(t) + t x_3^σ(t) + f_1(t),
-t x_2^Δ(t) + x_3^Δ(t) = x_2^σ(t) + 2t x_3^σ(t) + f_2(t),
0 = t x_1^σ(t) + x_3^σ(t) + f_3(t),  t ∈ 𝕋.

This system, using (6.3), can be rewritten in the form

\begin{pmatrix} 1 & 0 & 0 \\ 0 & -t & 1 \\ 0 & 0 & 0 \end{pmatrix} \left( \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1-t \end{pmatrix} \begin{pmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{pmatrix} \right)^Δ = \begin{pmatrix} -t & 1 & t \\ 0 & 1 & 2t-1 \\ t & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1^σ(t) \\ x_2^σ(t) \\ x_3^σ(t) \end{pmatrix} + \begin{pmatrix} f_1(t) \\ f_2(t) \\ f_3(t) \end{pmatrix}, \quad t ∈ 𝕋,

or

\begin{pmatrix} 1 & 0 & 0 \\ 0 & -t & 1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1^Δ(t) \\ x_2^Δ(t) - x_3^Δ(t) \\ -x_3^σ(t) + (1-t)x_3^Δ(t) \end{pmatrix} = \begin{pmatrix} -t x_1^σ(t) + x_2^σ(t) + t x_3^σ(t) + f_1(t) \\ x_2^σ(t) + (2t-1)x_3^σ(t) + f_2(t) \\ t x_1^σ(t) + x_3^σ(t) + f_3(t) \end{pmatrix},

or

x_1^Δ(t) = -t x_1^σ(t) + x_2^σ(t) + t x_3^σ(t) + f_1(t),
-t(x_2^Δ(t) - x_3^Δ(t)) + (1-t)x_3^Δ(t) - x_3^σ(t) = x_2^σ(t) + (2t-1)x_3^σ(t) + f_2(t),
0 = t x_1^σ(t) + x_3^σ(t) + f_3(t),  t ∈ 𝕋,

or

x_1^Δ(t) = -t x_1^σ(t) + x_2^σ(t) + t x_3^σ(t) + f_1(t),
-t x_2^Δ(t) + x_3^Δ(t) = x_2^σ(t) + 2t x_3^σ(t) + f_2(t),
0 = t x_1^σ(t) + x_3^σ(t) + f_3(t),  t ∈ 𝕋.
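The computation in Example 6.1 is easy to recheck by machine. The following sketch (the matrices are the ones displayed above; sympy is an assumed dependency) verifies the property AP = A used in the reduction and recomputes P^Δ and C_1 on 𝕋 = 2^{ℕ_0}, where σ(t) = 2t and μ(t) = t, so that M^Δ(t) = (M(2t) − M(t))/t for any matrix function M.

```python
import sympy as sp

t = sp.symbols('t', positive=True)

A = sp.Matrix([[1, 0, 0], [0, -t, 1], [0, 0, 0]])
C = sp.Matrix([[-t, 1, t], [0, 1, 2*t], [t, 0, 1]])
P = sp.Matrix([[1, 0, 0], [0, 1, -1], [0, 0, 1 - t]])

# the property used in the reduction to (6.3)
assert sp.simplify(A * P - A) == sp.zeros(3, 3)

# delta derivative of P on T = 2^{N_0}: P^Delta(t) = (P(2t) - P(t)) / t
P_delta = sp.simplify((P.subs(t, 2 * t) - P) / t)

# C_1 = A P^Delta + C
C1 = sp.expand(A * P_delta + C)

print(P_delta)   # Matrix([[0, 0, 0], [0, 0, 0], [0, 0, -1]])
print(C1)        # Matrix([[-t, 1, t], [0, 1, 2*t - 1], [t, 0, 1]])
```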
Exercise 6.1. Let 𝕋 = 3^{ℕ_0} and

A(t) = \begin{pmatrix} 0 & -2t+1 & 1+t^2 \\ 1-3t & t & t \\ 0 & 0 & t+4 \end{pmatrix}, \quad
C(t) = \begin{pmatrix} t-2 & 0 & t \\ t-1 & 1 & t+2 \\ 0 & 1 & 2 \end{pmatrix}, \quad t ∈ 𝕋.

1. Find the projector P along ker A.
2. Write the system (6.2).
3. Write the system (6.3).
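For task 1 in exercises of this type, one concrete (though by no means unique) choice of a projector P with AP = A and ker P = ker A is the orthogonal projector P = A^+ A built from the Moore–Penrose pseudoinverse. The sketch below is a hypothetical illustration with sympy on a sample singular matrix; the worked examples of this chapter use differently structured, non-orthogonal projectors instead.

```python
import sympy as sp

# a sample singular matrix (the matrix A of Example 6.1 evaluated at t = 2);
# illustration only, not the matrix of Exercise 6.1
A = sp.Matrix([[1, 0, 0], [0, -2, 1], [0, 0, 0]])

P = A.pinv() * A        # orthogonal projector along ker A

assert sp.simplify(P * P - P) == sp.zeros(3, 3)   # idempotent
assert sp.simplify(A * P - A) == sp.zeros(3, 3)   # A P = A
print(P)  # Matrix([[1, 0, 0], [0, 4/5, -2/5], [0, -2/5, 1/5]])
```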
6.3 Standard form index one problems

Consider the equation

A(Px)^Δ = Cx^σ + f,
(6.4)
where ker A is a C^1-space, C ∈ C(I), and P is a projector such that P^σ is a C^1-projector along ker A. Denote Q = I − P. Then AP^σ = A,
AQσ = 0. Assume that (C1) the matrix
A1 = A + CQσ is invertible.
By the hypothesis (C1), it follows that the equation (6.4) is regular with tractability index 1. We have the following result.

Lemma 6.1. Let (C1) hold. Then we have the following equations:

A_1^{-1} A = P^σ,
σ σ A−1 1 CQ = Q .
Proof. By the definition of A1 , we get A1 Pσ = (A + CQσ )Pσ
= APσ + CQσ Pσ = APσ =A
and A1 Qσ = (A + CQσ )Qσ
= AQσ + CQσ Qσ = AQσ ,
which completes the proof. We multiply the equation (6.4) by A−1 1 and using the first equation and then the second equation of Lemma 6.1, we arrive at Δ −1 σ −1 A−1 1 A(Px) = A1 Cx + A1 f
or σ −1 Pσ (Px)Δ = A−1 1 Cx + A1 f
σ σ σ −1 = A−1 1 C(P + Q )x + A1 f
σ σ σ σ −1 = A−1 1 CP x + Q x + A1 f ,
i. e., σ σ σ σ −1 Pσ (Px)Δ = A−1 1 CP x + Q x + A1 f .
We multiply both sides of the equation (6.5) by Pσ and using that Pσ Pσ = Pσ we obtain
and
Pσ Qσ = 0,
(6.5)
324 � 6 Third kind linear time-varying dynamic-algebraic equations σ σ σ σ σ σ −1 Pσ Pσ (Px)Δ = Pσ A−1 1 CP x + P Q x + P A1 f
or σ σ σ −1 Pσ (Px)Δ = Pσ A−1 1 CP x + P A1 f .
(6.6)
Note that Pσ (Px)Δ = (PPx)Δ − PΔ Px = (Px)Δ − PΔ Px.
Then the equation (6.6) can be rewritten in the form σ σ Δ σ −1 (Px)Δ = Pσ A−1 1 CP x + P Px + P A1 f .
Setting Px = u, we find σ Δ σ −1 uΔ = Pσ A−1 1 Cu + P u + P A1 f .
Now, we multiply the equation (6.5) by Qσ and we find σ σ σ σ σ σ −1 Qσ Pσ (Px)Δ = Qσ A−1 1 CP x + Q Q x + Q A1 f
or σ σ σ σ σ −1 0 = Qσ A−1 1 CP x + Q x + Q A1 f .
Denoting v = Qx, by the last equation we find σ σ σ −1 0 = Qσ A−1 1 Cu + v + Q A1 f
or σ σ −1 vσ = −Qσ A−1 1 Cu − Q A1 f .
(6.7)
6.3 Standard form index one problems
� 325
By the last equation and the equation (6.7), we obtain the system σ Δ σ −1 uΔ = Pσ A−1 1 Cu + P u + P A1 f ,
(6.8)
σ σ −1 vσ = −Qσ A−1 1 Cu − Q A1 f .
By the first equation of (6.8), we find u and then we find v by the second equation of (6.8). Example 6.2. Let 𝕋 = ℕ and A, C, P be as in Example 4.2. Then 0 P (t) = (0 0
0 0 −1
1 Pσ (t) = (0 0
0 0 −t − 1
0 Qσ (t) = (0 0
0 1 t+1
Δ
P
σ
(t)A−1 1 (t)
1 = (0 0 −1
=(0 0
P
σ
(t)A−1 1 (t)C(t)
−1
=(0 0 0 = (0 0
0 0) , 1 0 0) , 0
0 0 −(t + 1) t−1 2
0
1
0) − 21
0 ), − 21
t+3 2 t−1 2
1 2
0 0 ) (0 0 − 21
0
t+3 2 − (t+1)(t−2) 2
t−2 2
− (t+1)(t+2) 2
t+2 2
0
0 −t 2
1 1) 1
0 ),
0 = (0 0
0 1 t+1
0 0) , 0
0 Qσ (t)A−1 (t)C(t) = ( 0 1 0
0 1 t+1
0 0 0) (0 0 0
Q
1 2
3t+5 2
1 2
−1 0 0) ( 0 0 0
(t)A−1 1 (t)
t−1 2
−1 0 0) ( 0 1 0
0 1 t+1
σ
0 = (0 0
0 0) , 0
t−1 2
1
3t+5 2
0 −t 2
1 2
0) − 21
1 1) 1
326 � 6 Third kind linear time-varying dynamic-algebraic equations 0 = (0 0
0 −t t(t + 1)
0 1 ), t+1
t ∈ 𝕋.
Hence, the system (6.8) takes the form u1Δ (t)
(u2Δ (t)) u3Δ (t)
0 = (0 0
− (t+1)(t−2) 2 0 − (t+1)(t+2) 2 −1
t−1 2
0
t+3 2
+(0 vσ1 (t) 0 (vσ2 (t)) = (0 vσ3 (t) 0
0
0 −t −t(t + 1)
t−2 2
1 2
u1σ (t) 0 0 ) (u2σ (t)) + (0 t+2 u3σ (t) 0 2
f1 (t) 0 ) (f2 (t)) , f3 (t) − 21
0 u1σ (t) 0 1 ) (u2σ (t)) − (0 t+1 u3σ (t) 0
0 0 −1
0 1 t+1
0 u1 (t) 0) (u2 (t)) 0 u3 (t)
0 f1 (t) 0) (f2 (t)) , 0 f3 (t)
t ∈ 𝕋,
or u1Δ (t)
− (t+1)(t−2) u2σ (t) + 2
u3Δ (t)
− (t+1)(t+2) u2σ (t) + 2
(u2Δ (t)) = (
0
−f1 (t) + +(
t−2 σ u (t) 2 3
)
t+2 σ u (t) 2 3 t−1 f (t) + 21 f3 (t) 2 2
0 − 21 f3 (t)
t+3 f (t) 2 2
),
vσ1 (t) 0 0 (vσ2 (t)) = − ( ) − ( f2 (t) ) , −tu2σ (t) + u3σ (t) vσ3 (t) −t(t + 1)u2σ (t) + (t + 1)u3σ (t) (t + 1)f2 (t)
t ∈ 𝕋,
or u1Δ (t) = − u2Δ (t) = 0
(t + 1)(t − 2) σ t−2 σ t−1 1 u2 (t) + u (t) − f1 (t) + f (t) + f3 (t), 2 2 3 2 2 2
(t + 1)(t + 2) σ t+2 σ t+3 1 u2 (t) + u (t) + f (t) − f3 (t) 2 2 3 2 2 2 vσ1 (t) = 0,
u3Δ (t) = −
vσ2 (t) = −tu2σ (t) + u3σ (t) − f2 (t),
vσ3 (t) = −t(t + 1)u2σ (t) + (t + 1)u3σ (t) + (t + 1)f2 (t),
t ∈ 𝕋.
6.4 Decoupling of third kind linear time-varying dynamic algebraic equations of index one
� 327
Exercise 6.2. Let 𝕋 = 2ℕ and 1 A(t) = (0 0
t+1 0 −4t − 4
−1 C(t) = ( 1 0 1 P(t) = (0 0 1. 2. 3.
1 t 2 0 1 0
1 0 ), −4
1 −1) , 1 0 1 ), −t + 1
t ∈ 𝕋.
Find the matrix Aσ1 (t) = A(t) + C(t)Q(t), t ∈ 𝕋. Find the equation (6.4). Find the equation (6.8).
6.4 Decoupling of third kind linear time-varying dynamic algebraic equations of index one Suppose that the equation (6.1) is (1, σ)-regular with tractability index one. Let R be a continuous projector so that Rσ is a continuous projector along ker Aσ , i. e., ARσ = A. Set G0σ = ABσ . Let also, P0 be a continuous projector along ker G0 and Q0 = I − P0 ,
G1σ = G0σ + CQ0σ , B− is the {1, 2}-inverse of B so that B− BB− = B− , BB− B = B,
B− B = P0 , BB− = R.
Then
328 � 6 Third kind linear time-varying dynamic-algebraic equations A = ARσ
= ABσ B−σ = G0σ B−σ .
Hence, the equation (6.1) takes the form G0σ B−σ (Bx)Δ = Cx σ + f , which we multiply by (G1−1 )σ and we find G1−1σ G0σ B−σ (Bx)Δ = G1−1σ Cx σ + G1−1σ f .
(6.9)
Note that G0σ Q0σ = 0 and G1σ Q0σ = (G0σ + CQ0σ )Q0σ
= G0σ Q0σ + CQ0σ Q0σ = CQ0σ .
Hence, Q0σ = G1−1σ CQ0σ
(6.10)
and I = G1−1σ (G0σ + CQ0σ )
= G1−1σ G0σ + G1−1σ CQ0σ = G1−1σ G0σ + Q0σ ,
or G1−1σ G0σ = I − Q0σ .
(6.11)
So, the equation (6.9) can be rewritten as follows: (I − Q0σ )B−σ (Bx)Δ = G1−1σ Cx σ + G1−1σ f . Now, using that Bσ P0σ (I − Q0σ ) = Bσ P0σ − Bσ P0σ Q0σ
(6.12)
6.4 Decoupling of third kind linear time-varying dynamic algebraic equations of index one
�
329
= Bσ P0σ and multiplying (6.12) by BP0σ , we find Bσ P0σ (I − Q0σ )B−σ (Bx)Δ = Bσ P0σ G1−1σ Cx σ + Bσ P0σ G1−1σ f or Bσ P0σ B−σ (Bx)Δ = Bσ P0σ G1−1σ Cx σ + Bσ P0σ G1−1σ f . Now, using that Δ
Δ
Bσ P0σ B−σ (Bx)Δ = (BP0 B− Bx) − (BP0 B− ) Bx Δ
= (BP0 x)Δ − (BP0 B− ) Bx
and the decomposition x = P0 x + Q0 x, we find Δ
Δ
(BP0 x)Δ = (BP0 B− ) BP0 x + (BP0 B− ) BQ0 x
+ Bσ P0σ G1−1σ CP0σ x σ + Bσ P0σ G1−1σ CQ0σ x σ + Bσ P0σ G1−1σ f Δ
= (BP0 B− ) BP0 x + Bσ P0σ G1−1σ CP0σ x σ + Bσ P0σ G1−σ f Δ
= (BP0 B− ) BP0 x + Bσ P0σ G1−1σ CB−σ Bσ P0σ x σ + Bσ P0σ G1−σ f . Setting u = BP0 x, we arrive at the equation Δ
uΔ = (BP0 B− ) u + Bσ P0σ G1−1σ CB−σ uσ + Bσ P0σ G1−1σ f . Now, we multiply (6.12) by Q0σ and using (6.10) and Q0σ (I − Q0σ ) = 0, and setting v = Q0 x, we find
(6.13)
330 � 6 Third kind linear time-varying dynamic-algebraic equations 0 = Q0σ (I − Q0σ )B−σ (Bx)Δ
= Q0σ G1−1σ Cx σ + Q0σ G1−1σ f
= Q0σ G1−1σ CP0σ x σ + Q0σ G1−1σ CQ0σ x σ + Q0σ G1−1σ f = Q0σ G1−1σ CP0σ x σ + Q0σ x σ + Q0σ G1−1σ f
= Q0σ G1−1σ CB−σ Bσ P0σ x σ + vσ + Q0σ G1−1σ f = Q0σ G1−1σ CB−σ uσ + vσ + Q0σ G1−1σ f , or vσ = −Q0σ G1−1σ CB−σ uσ − Q0σ G1−1σ f . By the last equation and the equation (6.13), we obtain the system Δ
uΔ = (BP0 B− ) u + Bσ P0σ G1−1σ CB−σ uσ + Bσ P0σ G1−1σ f , vσ = −Q0σ G1−1σ CB−σ uσ − Q0σ G1−1σ f .
(6.14)
By the first equation of the system (6.14), we find u and then by the second equation, we find vσ . Then, for the solution x of the equation (6.1), we obtain x σ = Bσ P0σ x σ + Q0σ x σ = u σ + vσ .
As in Section 4.5.2 and Section 5.5.2, one can prove that the above described process is reversible. Definition 6.15. The equation (6.13) is said to be the inherent equation of the equation (6.1). sult.
As we have proved Theorem 4.1 and Theorem 5.1, one can deduct the following re-
Theorem 6.1. The subspace im P0 is an invariant subspace for the equation (6.13), i. e., u(t0 ) ∈ (im BP0 )(t0 ) for some t0 ∈ I if and only if u(t) ∈ (im BP0 )(t) for any t ∈ I. Example 6.3. Let 𝕋 = 2ℕ0 and A, B, C be as in Example 4.3. Then Δ
(BP0 B− ) (t) = 0, and
t ∈ 𝕋,
6.4 Decoupling of third kind linear time-varying dynamic algebraic equations of index one
G0σ t) = A(t)Bσ (t) t 0 = (0 0 0
0 1 0 0 0
0 0 2t t2 ) ( 0 0 0 0
0 4t 2 0
2t 2 0 =( 0 0 0
0 4t 2 0 0 0
0 0 t2 0 0
0 0 0 0 0
0 0 0) , 0 0
2t 2 0 =( 0 0 0
0 4t 2 0 0 0
0 0 t2 0 0
0 0 0 0 0
0 0 0) 0 0
G1σ (t) = G0σ (t) + C(t)Q0σ (t)
0 0 +(0 −1 1
0 0 −1 1 0
0 t 0 0 0
−1 1 0 −t 2 0
2t 2 0 =( 0 0 0
0 4t 2 0 0 0
0 0 t2 0 0
0 0 0 0 0
2t 2 0 =( 0 0 0
0 4t 2 0 0 0
0 0 t2 0 0
−1 1 0 −t 2 0
0 0 1
0 0 0
1 0 0) , 0 t2
331
0 0) 0
1 0 0 0 0 ) (0 0 0 t2 0
0 0 0 0 0 ) + (0 0 0 0 0
�
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 −1 1 0 −t 2 0
0 0 0) 0 1 1 0 0) 0 t2
t ∈ 𝕋.
Hence, det G1σ t) = −8t 10 ≠ 0,
t ∈ 𝕋.
Therefore, the equation (6.1) is a third kind linear time-varying dynamic-algebraic equation with tractability index one. Now, we will find the cofactors of G1σ (t), t ∈ 𝕋. We have
332 � 6 Third kind linear time-varying dynamic-algebraic equations 4t 2 0 σ g11 (t) = 0 0
0 t2 0 0
1 0 −t 2 0
0 0 0 t 2
0 t2 0 0
1 0 −t 2 0
0 0 0 t 2
4t 2 0 0 0
1 0 −t 2 0
0 0 0 t 2
0 t2 0 0
0 0 0 t 2
= −4t 8 , 0 0 σ g12 (t) = − 0 0
= 0, 0 0 σ g13 (t) = 0 0 = 0,
0 4t 2 0 0 σ g14 (t) = − 0 0 0 0 = 0, 0 4t 2 0 0 σ g15 (t) = 0 0 0 0 = 0,
0 0 0 t 2 σ g21 (t) = − 0 0 0 0 = 0, 2t 2 0 0 t 2 σ g22 (t) = 0 0 0 0
= −2t 8 , 2t 2 0 σ g23 (t) = − 0 0 = 0,
0 0 0 0
0 t2 0 0
1 0 −t 2 0
−1 0 −t 2 0
1 0 0 t 2
−1 0 −t 2 0
1 0 0 t 2
−1 0 −t 2 0
1 0 0 2 t
6.4 Decoupling of third kind linear time-varying dynamic algebraic equations of index one
2t 2 0 σ g24 (t) = 0 0 = 0, 2t 2 0 σ g25 (t) = − 0 0 = 0, 0 2 4t σ g31 (t) = 0 0 = 0,
2t 2 0 σ g32 (t) = − 0 0 = 0, 2t 2 0 σ g33 (t) = 0 0
0 0 0 0 0 0 0 0 0 0 0 0
= 0, 0 2 4t σ g41 (t) = 0 0
1 0 0 t 2 0 t2 0 0
−1 0 −t 2 0 1 0 0 2 t
−1 1 −t 2 0 0 0 0 0
−1 1 −t 2 0
1 0 0 2 t
0 4t 2 0 0
−1 1 −t 2 0
1 0 0 t 2
0 0 0 0
1 0 0 t 2
= −8t 8 , 2t 2 0 σ g34 (t) = − 0 0 = 0, 2t 2 0 σ g35 (t) = 0 0
0 t2 0 0
0 4t 2 0 0 0 4t 2 0 0
0 0 0 0
−1 0 −t 2 0
0 0 t2 0
−1 1 0 0
1 0 0 t 2
� 333
334 � 6 Third kind linear time-varying dynamic-algebraic equations 0 = 4t t 0
1 0 0
2 2
= −4t 6 , 2t 2 0 0 0 σ g42 (t) = 2 0 t 0 0 0 1 2 2 = 2t t 0 0 0
= −2t 6 , 2t 2 0 σ g43 (t) = − 0 0
= 0, 2t 2 0 σ g44 (t) = 0 0
0 0 2 t −1 1 0 0 0 0 t 2
0 4t 2 0 0 0 4t 2 0 0
= 8t 8 , 2t 2 0 σ g45 (t) = − 0 0 = 0, 0 2 4t σ g51 (t) = 0 0
0 = −4t t 0
= 4t 6 , 2t 2 0 σ g52 (t) = − 0 0
0 0 t2 0
1 0 0 t 2 0 0 t2 0
−1 1 0 −t 2 −1 0 −t 2
2 2
0 0 t2 0
1 0 0 2 t
−1 1 0 0
0 4t 2 0 0 0 0 t2 0
1 0 0 t 2
−1 1 0 −t 2
−1 0 0 0 1 0 0 0 1 0 0 1 0 0 0
6.4 Decoupling of third kind linear time-varying dynamic algebraic equations of index one
0 1 = −2t t 0 0 −t 2 = 0, 2t 2 0 −1 2 1 0 4t σ g53 (t) = 0 0 0 0 0 −t 2 2 2
= 0,
2t 2 0 σ g54 (t) = − 0 0 = 0, 2t 2 0 σ g55 (t) = 0 0
0 4t 2 0 0 0 4t 2 0 0
= −8t 8 ,
0 0 0 1 0 0 0
0 0 t2 0 0 0 t2 0
1 0 0 0 −1 0 0 −t 2
t ∈ 𝕋.
Consequently, −4t 8 0 1 −1σ G1 (t) = − 10 ( 0 8t 0 0 1 2t 2
0
( =(0 0
(0
0 −2t 8 0 0 0 0
1 2t 4 1 4t 4
− 2t14
0
− t12
0
0
0
0
1 t2
0
0
1 4t 2
0
0 0 −8t 8 0 0 0
0
) 0 ),
0
1 t2
Hence, 2t Bσ (t)P0σ (t)G1−1σ (t) = ( 0 0
0 4t 2 0
0 0 1
−4t 6 −2t 6 0 8t 8 0
0 0 0
0 0) 0
)
4t 6 0 0 ) 0 −8t 8
t ∈ 𝕋.
� 335
336 � 6 Third kind linear time-varying dynamic-algebraic equations 1 0 = (0 0 0
0 1 0 0 0
2t = (0 0
0 4t 2 0
1 t
= (0
B
(t)P0σ (t)G1−1σ (t)C(t)B−σ (t)
0
0
1 t2
1 t
0
0
= (0 0
0
1 t2
0
0 0 ×(0 −1 1 1 t
= (0 0
=
0
1 t2
0
− 2t14 (− 2t13 0
1 t3 1 t2
0
0
− t12
0
0
0
0 0 0
1 t2
1 2t 4 1 4t 4
0
1 t2
0
0
1 4t 2
0 ) 0 )
0
0
0
1 t2 − 2t14
0
0 0 0
0 0
)
0 ) 0 0 )
0) , 0
0) 0
0 t 0 0 0
−1 1 0 −t 2 0
1
1 2t 0 0 0) ( 0 0 0 t2 (0
0 0 0) ( 0
0 0
0
0
− 2t1 1 ( 2t
t) ,
t ∈ 𝕋,
0
1 4t 5 1 4t 4 − 4t14
− 2t14
0
0
0 0 0) ( 0 0 0 (0
0
0
1 2t 4 1 4t 4
0
1 4t 2
1 2t 2
1 t3 1 t2
1 t3 1 t2
0
1
0 0 0
0
0 0 −1 1 0
0
0 2t 2 0 0 ( 0) ( 0 0 0 0 (0
0
0
1
1
0 0 0 0 0 0 0 1
0 1
0
σ
0 0 1 0 0
0 0
0
1 4t 2
0 0 0
0
0 1) 0 0)
− 4t12
0 t 0)
0
0)
1 4t 2
0
and 0 0 Q0σ (t)G1−1σ (t) = (0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 1 0
1
0 2t 2 0 0 ( 0) ( 0 0 0 1 (0
0
1 2t 4 1 4t 4
− 2t14
0
− t12
0
0
0
0
1 t2
0
0
1 4t 2
0
0
0
) 0 )
0
1 t2
)
6.4 Decoupling of third kind linear time-varying dynamic algebraic equations of index one
0 0 ( 0 = 0
0 0 0 0
0 0 0 0
0 0 0 − t12
0 0 σ −1σ −σ ( 0 Q0 (t)G1 (t)C(t)B (t) = 0
0 0 0 0
0 0 0 0
0 0 0 − t12
(0
(0
0
0
0
0 0 ( 0 =
0
1 t2 )
0
0 0 0 0 0)( 0 0 − 2t1
− 4t12
0 t 0)
0
0)
0 f1 (t) 0 f2 (t) 0 ) (f3 (t)) , 0 0 1 0 2)
t ∈ 𝕋,
1 1 t 2 ) ( 2t
0
0 0 0
1 2t 3 1 ( 2t3
0 0 0), 0
0 0 0) , 0
− 4t14 0
0 0
1 4t 2
0
t ∈ 𝕋.
0)
Then the system (6.14) takes the form − 2t14
u1Δ (t)
(u2Δ (t)) u3Δ (t)
=
(− 2t13 0
0
u1σ (t) t ) (u2σ (t)) u3σ (t) 0
1 t
0
0
0
0
1 t2
+ (0
0 0 0 0 ( 0 ( 0 )=− 1 vσ0 (t) 2t 3 σ 1 v1 (t) ( 3 2t
0 0 ( 0 − 0 (0
or
1 4t 5 1 4t 4 − 4t14
1
0
0 0 0
− 4t14 0
0 0 0 0 0
0 0 0 0 0
1 t3 1 t2
0
f1 (t) f2 (t) 0) (f3 (t)) , 0 0 0 0
0 0 u1σ (t) ) 0 (u2σ (t)) 0 u3σ (t) 0)
0 0 0 − t12 0
t
�
337
338 � 6 Third kind linear time-varying dynamic-algebraic equations u1Δ (t)
(u2Δ (t)) = u3Δ (t)
− 2t14 u1σ (t) +
1 σ u (t) 4t 5 2 1 σ 1 σ (− 2t3 u1 (t) + 4t4 u2 (t) + tu3σ (t)) − 4t14 u2σ (t)
1 f (t) t 1
+ ( f2 (t) ) , 1 f (t) t2 3
0 0 0 0 0 0 ( ) 0 ( 0 )=− − ( 0) , 1 σ 1 σ u (t) − u (t) vσ0 (t) 0 4 3 1 2 4t 2t 1 σ vσ1 (t) 0 u (t) 3 1 ( ) 2t
or 1 σ 1 1 u (t) + 5 u2σ (t) + f1 (t), t 2t 4 1 4t 1 σ 1 σ Δ u2 (t) = − 3 u1 (t) + 4 u2 (t) + tu3σ (t) + f2 (t), 4t 2t 1 σ 1 Δ u3 (t) = − 4 u2 (t) + 2 f3 (t), 4t t 1 1 vσ0 (t) = − 3 u1σ (t) + 4 u2σ (t), 4t 2t 1 σ σ v1 (t) = − 3 u1 (t), t ∈ 𝕋. 2t u1Δ (t) = −
Exercise 6.3. Let 1 5 15 𝕋 = {−1, 0, , , q, 7, , 8, 10, 12} 3 6 2 and 2 0 A(t) = (0 0 0
0 1 0 0 0
2t B(t) = ( 0 0
0 3t + 1 0
0 0 C(t) = ( t t −1 Find the representation (6.14).
0 0 4t ) , 0 0 2
0 0 2t + 1 0 0
0 0 t−1 t t 0 0 1
0 0 0 0 −t 0 1 0
0 0) , 0 t 0 1) , 0 2
t ∈ 𝕋.
t ∈ 𝕋,
6.5 Decoupling of third kind linear time-varying dynamic-algebraic equations of index ≥ 2
� 339
6.5 Decoupling of third kind linear time-varying dynamic-algebraic equations of index ≥ 2 6.5.1 A reformulation Suppose that the equation (6.1) is regular with tractability index ν and R is a continuous projector onto im B and along ker A. Denote G0 = AB and assume that Π0 is a continuous projector along ker G0 and set M 0 = I − Π0 , C0 = C,
G1 = G0 + C0 M0 ,
BB B = B, −
B− BB− = B− ,
B − B = Π0 , BB− = R,
N0 = ker G0 .
In addition, suppose that Gi and Πi satisfy (A5)–(A7) for i ∈ {1, . . . , ν − 1}. Also, assume that (A8) holds and let Δ
Ciσ Πσi = (Ci−1 + Ci Mi + Gi B− (BΠi B− ) Bσ )Πσi−1 . Now, using the definition of R, we find A = AR
= ABB−
= G0 B− . Then the equation (6.1) takes the form ABB− (Bx)Δ = Cx σ + f or G0 B− (Bx)Δ = Cx σ + f . We multiply both sides of the last equation with Gν−1 and we find
340 � 6 Third kind linear time-varying dynamic-algebraic equations Gν−1 G0 B− (Bx)Δ = Gν−1 Cx σ + Gν−1 f .
(6.15)
By (3.69), we have Gν−1 G0 = I − Q0 − ⋅ ⋅ ⋅ − Qν−1 and then the equation (6.15) takes the form (I − Q0 − ⋅ ⋅ ⋅ − Qν−1 )B− (Bx)Δ = Gν−1 Cx σ + Gν−1 f .
(6.16)
Next, using that Πν−1 projects along N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 and Ni = im Qi , we find Πν−1 (I − Q0 − ⋅ ⋅ ⋅ − Qν−1 ) = Πν−1 . Thus, multiplying (6.16) by BΠν− , we arrive at BΠν−1 (I − Q0 − ⋅ ⋅ ⋅ − Qν−1 )B− (Bx)Δ = BΠν−1 Gν−1 Cx σ + BΠν−1 Gν−1 f or BΠν−1 B− (Bx)Δ = BΠν−1 Gν−1 Cx σ + BΠν−1 Gν−1 f .
(6.17)
Now, using that Δ
Δ
BΠν−1 B− (Bx)Δ = (BΠν−1 B− Bx) − (BΠν−1 B− ) Bσ x σ Δ
= (BΠν−1 Π0 x)Δ − (BΠν−1 B− ) Bσ x σ .
Then the equation (6.17) can be rewritten as follows: Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) Bσ x σ = BΠν−1 Gν−1 Cx σ + BΠν−1 Gν−1 f . We decompose x as follows x = Πν−1 x + (I − Πν−1 )x and we obtain Δ
Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) Bσ Πσν−1 − (BΠν−1 B− ) Bσ (I − Πσν−1 )x σ
= BΠν−1 Gν−1 CΠσν−1 x σ + BΠν−1 Gν−1 C(I − Πσν−1 )x σ + BΠν−1 Gν−1 f .
Note that we have j
Δ
Cjσ Πσj = (C + Gj+1 − G1 + ∑ Gi B− (BΠi B− ) Bσ )Πσj−1 . i=1
(6.18)
6.5 Decoupling of third kind linear time-varying dynamic-algebraic equations of index ≥ 2
�
341
Then 0 = Cj Πσj Mjσ j
Δ
= (C + Gj+1 − G1 + ∑ Gi B− (BΠi B− ) Bσ )Πσj−1 Mjσ i=1 j
Δ
= (C + Gj+1 − G1 + ∑ Gi B− (BΠi B− ) Bσ )Mjσ i=1
j
Δ
= CMjσ + (Gj+1 − G1 )Mjσ + ∑ Gi B− (BΠi B− ) Bσ Mjσ i=1
and j
Δ
CMjσ = −(Gj+1 − G1 )Mjσ − ∑ Gi B− (BΠi B− ) Bσ Mjσ . i=1
We multiply the last equation by Gν−1 and we find Gν−1 CMjσ = −(Gν−1 Gj+1 − Gν−1 G1 )Mjσ j
Δ
− ∑ Gν−1 Gi B− (BΠi B− ) Bσ Mjσ i=1
= −(Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ j
Δ
− ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ , i=1
i. e., Gν−1 CMjσ = −(Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ j
Δ
− ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ . i=1
We multiply (6.19) by BΠν−1 and we get BΠν−1 Gν−1 CMjσ = −BΠν−1 (Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ j
Δ
− ∑ BΠν−1 (I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ i=1 j
Δ
= − ∑ BΠν−1 B− (BΠi B− ) Bσ Mjσ i=1
(6.19)
342 � 6 Third kind linear time-varying dynamic-algebraic equations j
Δ
= − ∑(BΠν−1 B− BΠi B− ) Bσ Mjσ i=1 j
Δ
+ ∑(BΠν−1 B− ) Bσ Πσi B−σ Bσ Mjσ i=1 j
Δ
= − ∑(BΠν−1 B− ) Bσ Mjσ i=1 j
Δ
+ ∑(BΠν−1 B− ) Bσ Πσi Mjσ i=1 j
Δ
= − ∑(BΠν−1 B− ) Bσ Mjσ i=1
j−1
Δ
+ ∑(BΠν−1 B− ) Bσ Mjσ i=1
Δ
= −(BΠν−1 B− ) Bσ Mjσ , i. e., Δ
BΠν−1 Gν−1 CMjσ = −(BΠν−1 B− ) Bσ Mjσ . Now, using that ν−1
∑ Mj = I − Πν−1 ,
j=0
we find ν−1
Δ
ν−1
BΠν−1 Gν−1 C ∑ Mjσ = −(BΠν−1 B− ) Bσ ∑ Mjσ . j=0
j=0
or Δ
BΠν−1 Gν−1 C(I − Πσν−1 ) = −(BΠν−1 B− ) Bσ (I − Πσν−1 ). By the last relation and (6.18), we arrive at Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) Bσ Πσν−1 x σ = BΠν−1 Gν−1 CΠσν−1 x σ + BΠν−1 Gν−1 f
= BΠν−1 Gν−1 CB−σ Bσ Πσν−1 x σ + BΠν−1 Gν−1 f .
We set BΠν−1 x = u.
6.5 Decoupling of third kind linear time-varying dynamic-algebraic equations of index ≥ 2
� 343
We get the equation Δ
uΔ = (BΠν−1 B− ) uσ + BΠν−1 Gν−1 CB−σ uσ + BΠν−1 Gν−1 f .
(6.20)
Definition 6.16. The equation (6.20) is said to be the inherent equation to the equation (6.1). Theorem 6.2. The subspace im Πν−1 is an invariant subspace for the equation (6.20), i. e., u(t0 ) ∈ (im BΠν−1 )(t0 ) for some t0 ∈ I if and only if u(t) ∈ (im BΠν−1 )(t) for any t ∈ I. Proof. Let u ∈ C 1 (I) be a solution to the equation (6.20) so that (BΠν−1 )(t0 )u(t0 ) = u(t0 ). Hence, u(t0 ) = (BΠν−1 )(t0 )u(t0 )
= (BΠν−1 Π0 Πν−1 )(t0 )u(t0 )
= (BΠν−1 B− BΠν−1 )(t0 )u(t0 )
= (BΠν−1 B− )(t0 )(BΠν−1 )(t0 )u(t0 ) = (BΠν−1 B− )(t0 )u(t0 ).
We multiply the equation (6.20) by I − BΠν−1 B and we get Δ
(I − BΠν−1 B−σ )uΔ = (I − BΠν−1 B)(BΠν−1 B− ) uσ
+ (I − BΠν−1 B)BΠν−1 Gν−1 CB−σ uσ + (I − BΠν−1 B)BΠν−1 Gν−1 f Δ
= (I − BΠν−1 B)(BΠν−1 B− ) uσ
+ (BΠν−1 − BΠν−1 B− BΠν−1 )Gν−1 CB−σ uσ + (BΠν−1 − BΠν−1 B− BΠν−1 )Gν−1 f Δ
= (I − BΠν−1 B− )(BΠν−1 B− ) uσ
+ (BΠν−1 − BΠν−1 )Gν−1 CB−σ uσ + (BΠν−1 − BΠν−1 )Gν−1 f
Δ
= (I − BΠν−1 B− )(BΠν−1 B− ) uσ ,
344 � 6 Third kind linear time-varying dynamic-algebraic equations i. e., Δ
(I − BΠν−1 B− )uΔ = (I − BΠν−1 B− )(BΠν−1 B− ) uσ . Set v = (I − BΠν−1 B− )u. Then, using (6.21), we find Δ
vΔ = (I − BΠν−1 B− )uΔ + (I − BΠν−1 B− ) uσ Δ
Δ
= (I − BΠν−1 B− )(BΠν−1 B− ) uσ + (I − BΠν−1 B− ) uσ Δ
= ((I − BΠν−1 B− )BΠν−1 B− ) uσ Δ
− (I − BΠν−1 B− ) Bσ Πσν−1 B−σ uσ Δ
+ (I − BΠν−1 B− ) uσ
Δ
= (BΠν−1 B− − BΠν−1 B− BΠν−1 B− ) uσ Δ
+ (I − BΠν−1 B− ) (I − Bσ Πσν−1 B−σ )uσ Δ
= −(I − BΠν−1 B− ) vσ , i. e.,
Δ
vΔ = (I − BΠν−1 B− ) vσ . Note that v(t0 ) = u(t0 ) − (BΠν−1 B− )(t0 )u(t0 ) = u(t0 ) − u(t0 ) = 0. Thus, we get the following IVP: Δ
vΔ = (I − BΠν−1 B− ) vσ
on
v(t0 ) = 0. Therefore, v = 0 on I and then
BΠν−1 B− u = u
on
I.
Hence, using that im BΠν−1 = im BΠν−1 B− ,
I,
(6.21)
6.5 Decoupling of third kind linear time-varying dynamic-algebraic equations of index ≥ 2
� 345
we get BΠν−1 u = u
on
I.
This completes the proof. σ 6.5.2 The Component vν−1
Consider the equation (6.15). We represent its solution in the form x = M0 x + M1 x + ⋅ ⋅ ⋅ + Mν−1 x + Πν−1 x
= M0 x + M1 x + ⋅ ⋅ ⋅ + Mν−1 x + B− BΠν−1 x.
Set vj = Mj x,
j ∈ {0, . . . , ν − 1}.
We multiply (6.15) by Mν−1 and we find Mν−1 Gν−1 G0 B− (Bx)Δ = Mν−1 Gν−1 Cx σ + Mν−1 Gν−1 f . Note that Mν−1 Gν−1 G0 = Mν−1 (I − Q0 − Q1 − ⋅ ⋅ ⋅ − Qν−1 )
= Mν−1 − Mμ−1 Q0 − Mν−1 Q1 − ⋅ ⋅ ⋅ − Mν−1 Qν−1 = Mν−1 − Mν−1 = 0.
Thus, Mν−1 Gν−1 Cx σ = −Mν−1 Gν−1 f , whereupon σ Mν−1 Gν−1 f = −Mν−1 Gν−1 C(M0σ x σ + M1σ x σ + ⋅ ⋅ ⋅ + Mν−1 x σ + B−σ Bσ Πσν−1 x σ ).
By (6.19), we get Mν−1 Gν−1 CMjσ = −Mν−1 (Q1 + ⋅ ⋅ ⋅ + Qj )Mjσ j
Δ
− Mν−1 ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mjσ = 0,
i=1
j < ν − 1,
(6.22)
346 � 6 Third kind linear time-varying dynamic-algebraic equations and σ σ Mν−1 Gν−1 CMν−1 = −Mν−1 (Q1 + ⋅ ⋅ ⋅ + Qν−1 )Mν−1 j
Δ
σ − Mν−1 ∑(I − Qi − ⋅ ⋅ ⋅ − Qν−1 )B− (BΠi B− ) Bσ Mν−1 i=1
σ = −Mν−1 Mν−1 .
Hence, by (6.22), we find σ Mν−1 Gν−1 f = Mν−1 Gν−1 CMν−1 x σ − Mν−1 Gν−1 CB−σ uσ σ = Mν−1 Mν−1 x σ − Mν−1 Gν−1 CB−σ uσ
= Mν−1 vσν−1 − Mν−1 Gν−1 CB−σ uσ or
Mν−1 vσν−1 = Mν−1 Gν−1 f + Mν−1 Gν−1 CB−σ uσ . 6.5.3 The components vkσ . Terms coming from Uk Gν−1 Cx σ As in Section 4.5.2, we get ν−1
Uk Gν− G0 B− (Bx)Δ = ∑ Nkj (Bvj )Δ j=k+1
ν−1
−σ σ ̃ σ + K̃ k B u + ∑ Mkj vj ,
(6.23)
j=1
ν−1
ν−1
j=k+1
j=k+1
− σ −1 ̂ ̃ Mk vσk = ∑ Nkj (Bvj )Δ + (K̃ k B − Kk ) + ∑ (Mkj − Mkj )vj + Uk Gν f ,
vk = Mk vk , and ν−1
σ σ Uk Gν−1 Cx σ = K̂ k u + ∑ Mkj vj , j=1
where Nkk+1 = −Mk Qk+1 B , −
Nkk+2 = −Mk Pk+1 Qk+2 B , −
Nkk+3 = −Mk Pk+1 Pk+2 Qk+3 B , −
6.5 Decoupling of third kind linear time-varying dynamic-algebraic equations of index ≥ 2
�
347
.. . Nkν−1 = −Mk Pk+1 . . . Pν−2 Qν−1 B , −
Δ σ
K̃ k = Mk (Pk+1 . . . Pν−1 − I)B (BΠν−1 x) B , −
− Δ
− Δ
̃ M kj = −Mk (Qk+1 B (BMk+1 B ) + Pk+1 Qk+2 B (BMk+2 B ) −
.. .
−
+ Pk+1 Pk+2 Qk+3 B− (BMk+3 B− )
Δ
Δ
+ Pk+1 . . . Pν−2 Qν−1 B− (BMν−1 B− ) )Bσ Mjσ B−σ Bσ , K̂ k = Mk Pk+1 . . . Pν−1 Gν CB −1
−σ
,
Mk0 = Mk Pk+1 . . . Pν−1 Gν C, −1
− Δ σ
Mkj = −Mk (Pk+1 . . . Pν−1 − I)B (BMj B ) B , −
− Δ σ
j < k, σ
Mkk = −Mk (Pk+1 . . . Pν−1 − 1)B (BMk B ) B Mk − Mk , −
Mkj = −Mk j−1
Δ
− Mk ∑ (Pk+1 . . . Pi − I)B− (BMj B− ) Bσ i=k
Δ
+ Mk (Pk+1 . . . Pν−1 − Pk+1 . . . Pj )Πj B− (BMj B− ) Bσ , K̂ k =
j > k,
Mk Pk+1 . . . Pν−1 Gν−1 CB−σ .
As in Chapter 4, we have the following result. Theorem 6.3. Assume that the equation (6.1) is regular with tractability index ν on I and f is enough smooth function. Then x ∈ CB1 (I) solves (6.1) if and only if it can be written as
x = B− u + vν−1 + ⋅ ⋅ ⋅ + v1 + v0 . Exercise 6.4. Let 𝕋 = 2ℕ and 1 0 A(t) = ( 1 0 0
0 t 0 0 0
t 1 2) , 0 0
t B(t) = (0 t
t+1 t2 0
0 t 3
0 0 0
0 0) , 0
348 � 6 Third kind linear time-varying dynamic-algebraic equations 0 0 C(t) = ( 0 −1 1 1. 2. 3.
0 0 −1 1 0
0 1 0 0 0
−1 1 0 0 0
1 0 0 ), 0 t+2
t ∈ 𝕋.
Prove that the equation (6.1) is regular with tractability index 2. Write the inherent equation of the equation (6.1). Write the equation (6.23).
6.6 Advanced practical problems Problem 6.1. Let 𝕋 = 2ℕ0 and t+1 A(t) = (1 − t t t C(t) = ( t 0 1. 2. 3.
4−t 0 0
2t + 1 −1 −1
1 + 2t + 3t 2 ), t t t t ), 2t + 5
t ∈ 𝕋.
Find the projector P along ker A. Write the system (6.2). Write the system (6.3).
Problem 6.2. Let 𝕋 = 5ℕ and
1. 2. 3.
1 A(t) = (0 0
t+1 0 5t + 5
1 C(t) = (1 1
t+2 t t−2
1 P(t) = (0 0
0 1 0
1 0) , 5 1 1 ), t+1
0 1 ), −t + 1
Find the matrix Aσ1 (t) = A(t) + C(t)Q(t), t ∈ 𝕋. Find the equation (6.4). Find the equation (6.8).
t ∈ 𝕋.
6.6 Advanced practical problems
Problem 6.3. Let 𝕋 = 2ℕ + 4 and 1 1 0 0 0
−1 0 A(t) = ( t 0 0 t B(t) = (0 0
0 t4 0
0 t C(t) = ( 0 −t 1
1 0 t + 1) , 0 0 0 0 t2
0 0 t −t 0
0 0 0 −1 t 0 0 1
0 0) , 0 t t 0 1 1
−t −1 0 ), 0 −1
t ∈ 𝕋.
Find the representation (6.14). Problem 6.4. Let 𝕋 = 3ℕ0 and t+3 0 A(t) = ( 0 0 0 1 B(t) = (0 0 0 0 C(t) = ( 0 −1 1 1. 2. 3.
0 t+1 0 0 0
0 1 + 4t 0 0 0 −1 1 0
0 0 1) , 0 0
0 0 2t + 7 0 1 0 0 0
−1 1 0 0 0
0 0 0
0 0) , 0
1 0 0 ), 0 t+2
t ∈ 𝕋.
Prove that the equation (6.1) is regular with tractability index 2. Write the inherent equation of the equation (6.1). Write the equation (6.23).
� 349
350 � 6 Third kind linear time-varying dynamic-algebraic equations Problem 6.5. Let 𝕋 = 3ℕ0 and 1 0 A(t) = (0 0 0 1+t B(t) = ( 0 0 0 0 C(t) = ( 0 −1 1 1. 2. 3.
0 1 + t + t2 0 0 0 0 −1 − t 0 0 0 −1 1 0
0 1 0 0 0
0 0 2) , 0 0 1 −1 1
0 0 0
0 0) , 1
−1 1 0 0 0
1 0 0 ), 0 t+2
t ∈ 𝕋.
Prove that the equation (6.1) is regular with tractability index 3. Write the inherent equation of the equation (6.1). Write the equation (6.23).
6.7 Notes and references

In this chapter, we investigate regular third kind linear time-varying dynamic-algebraic equations with tractability index ν ≥ 1. We deduce the inherent equation for the considered class of equations. In the chapter, a decoupling of the solutions is given and it is shown that the constructed decoupling process is reversible.
7 Fourth kind linear time-varying dynamic-algebraic equations

Suppose that 𝕋 is a time scale with forward jump operator σ and delta differentiation operator Δ. Let I ⊆ 𝕋. In this chapter, we will investigate the following linear time-varying dynamic-algebraic equation:

A(t)(Bx)^Δ(t) = C(t)x(t) + f(t),
t ∈ I,
(7.1)
where A : I → M_{n×m}, B : I → M_{m×n}, C : I → M_{n×n} and f : I → ℝ^n are given. The equation (7.1) is called a fourth kind linear time-varying dynamic-algebraic equation. We will consider the solutions of (7.1) within the space
C_B^1(I) = {x : I → ℝ^m : Bx ∈ C^1(I)}.
Below, we remove the explicit dependence on t for the sake of notational simplicity.
7.1 A classification

In this section, we will give a classification of the dynamic-algebraic equation (7.1).

Definition 7.1. The matrix pair (A, B) will be said to be the leading term of the dynamic-algebraic equation (7.1).

Definition 7.2. We will say that the fourth kind linear time-varying dynamic-algebraic equation (7.1) is properly stated if its leading term (A, B) is properly stated.

Definition 7.3. We will say that the fourth kind linear time-varying dynamic-algebraic equation (7.1) is algebraically nice at level 0 if its leading term (A, B) is properly stated.

Definition 7.4. We will say that the fourth kind linear time-varying dynamic-algebraic equation (7.1) is algebraically nice at level k ≥ 1 if it is algebraically nice at level k − 1 and (A5) and (A6) hold for i = k for some admissible up to level k projector sequence Q_0, . . . , Q_{k−1}.

Definition 7.5. We will say that the fourth kind linear time-varying dynamic-algebraic equation (7.1) is nice at level k if it is algebraically nice at level k and there exists an admissible choice of Q_k. The ranks r_i of G_i, i ∈ {0, . . . , k}, are said to be characteristic values of (7.1).

Definition 7.6. The fourth kind linear time-varying dynamic-algebraic equation (7.1) is said to be regular with tractability index 0 if both A and B are invertible.
Definition 7.7. The fourth kind linear time-varying dynamic-algebraic equation (7.1) is said to be regular with tractability index ν if there exists an admissible projector sequence {Q_0, . . . , Q_{ν−1}} for which the matrices G_i are singular for 0 ≤ i < ν and G_ν is nonsingular.

Definition 7.8. The fourth kind linear time-varying dynamic-algebraic equation (7.1) is said to be regular if it is regular with some nonnegative tractability index.

Note that the tractability index of (7.1) is ν if and only if it is nice up to level ν − 1, the matrices G_i, 0 ≤ i < ν, are singular and G_ν is nonsingular. Since the dimension of the direct sum N_0 ⊕ ⋅ ⋅ ⋅ ⊕ N_i increases, the tractability index cannot exceed m.
7.2 A particular case In this section, we will consider the equation Ax Δ = Cx + f .
(7.2)
Suppose that P is a C 1 -projector so that Pσ is a C 1 -projector along ker A. Then APσ = A and by the equation (7.2) we find Cx + f = APσ x Δ
= A(Px)Δ − APΔ x,
whereupon A(Px)Δ = (C + APΔ )x + f . We set A1 = APΔ + C. Then we get the equation A(Px)Δ = A1 x + f . Thus, the equation (7.2) can be reduced to the equation (7.1).
(7.3)
7.2 A particular case
�
Example 7.1. Let 𝕋 = 2ℕ0 and A, C be as in Example 4.1. We will search a vector y1 (t) y(t) = (y2 (t)) ∈ ℝ3 , y3 (t)
t ∈ 𝕋,
so that A(t)y(t) = 0,
t ∈ 𝕋.
We have 0 1 (0) = (0 0 0
0 −t 0
0 y1 (t) 1 ) (y2 (t)) 0 y3 (t)
y1 (t) = (−ty2 (t) + y3 (t)) , 0
t ∈ 𝕋,
whereupon y1 (t) = 0,
y3 (t) = ty2 (t),
t ∈ 𝕋,
and 0 Qσ (t) = (0 0
0 0 0
0 2), 2t
t ∈ 𝕋,
is so that A(t)Qσ (t) = 0,
t ∈ 𝕋.
We have 0 Q(t) = (0 0 and P(t) = I − Q(t)
0 0 0
0 2) , t
t ∈ 𝕋,
353
354 � 7 Fourth kind linear time-varying dynamic-algebraic equations 1 = (0 0
0 1 0
0 0 0 ) − (0 1 0
1 = (0 0
0 1 0
0 −2 ) , 1−t
0 0 0
0 2) t
t ∈ 𝕋,
and 1 Pσ (t) = (0 0
0 1 0
0 −2 ) , 1 − 2t
1 A(t)Pσ (t) = (0 0
0 −t 0
0 1 1 ) (0 0 0
1 = (0 0
0 −t 0
0 1) 0
t ∈ 𝕋.
Next,
= A(t),
0 1 0
0 −2 ) 1 − 2t
t ∈ 𝕋.
Therefore, Pσ is a C 1 -projector along ker A. Moreover, 0 P (t) = (0 0 Δ
0 0 0
0 0 ), −1
t ∈ 𝕋,
and A1 (t) = A(t)PΔ (t) + C(t) 1 = (0 0
0 −t 0
0 = (0 0
0 0 0
−t =(0 t
1 1 0
The equation (7.1) takes the form
0 0 1 ) (0 0 0
0 0 0
0 −t −1) + ( 0 0 t t −1 + 2t ) , 1
0 −t 0)+(0 −1 t 1 1 0
t 2t ) 1
t ∈ 𝕋.
1 1 0
t 2t ) 1
7.2 A particular case
1 (0 0
0 −t 0
x1Δ (t) 0 −t Δ 1 ) (x2 (t)) = ( 0 0 t x3Δ (t)
1 1 0
t x1 (t) f1 (t) 2t ) (x2 (t)) + (f2 (t)) , 1 x3 (t) f3 (t)
�
355
t ∈ 𝕋,
or x1Δ (t) Δ (−tx2 (t) + x3Δ (t)) 0
−tx1 (t) + x2 (t) + tx3 (t) f1 (t) =( ) + (f2 (t)) , x2 (t) + 2tx3 (t) tx1 (t) + x3 (t) f3 (t)
t ∈ 𝕋,
or x1Δ (t) = −tx1 (t) + x2 (t) + tx3 (t) + f1 (t),
−tx2Δ (t) + x3Δ (t) = x2 (t) + 2tx3 (t) + f2 (t), 0 = tx1 (t) + x3 (t) + f3 (t),
t ∈ 𝕋.
The equation (7.3) can be rewritten in the form 1 (0 0
0 −t 0
−t =(0 t
0 1 1 ) ((0 0 0 1 1 0
0 1 0
0 x1 (t) −2 ) (x2 (t))) 1−t x3 (t)
Δ
t x1 (t) f1 (t) ) ( ) + ( −1 + 2t x2 (t) f2 (t)) , 1 x3 (t) f3 (t)
t ∈ 𝕋,
or 1 (0 0
0 −t 0
Δ
0 x1 (t) −tx1 (t) + x2 (t) + tx3 (t) f1 (t) 1 ) (x2 (t) − 2x3 (t)) = ( x2 (t) + (−1 + 2t)x3 (t) ) + (f2 (t)) , 0 (1 − t)x3 (t) tx1 (t) + x3 (t) f3 (t)
t ∈ 𝕋,
or 1 (0 0
0 −t 0
0 x1Δ (t) −tx1 (t) + x2 (t) + tx3 (t) + f1 (t) Δ 1 ) ( x2 (t) − 2x3Δ (t) ) = ( x2 (t) + (−1 + 2t)x3 (t) + f2 (t) ) , 0 −x3 (t) + (1 − 2t)x3Δ (t) tx1 (t) + x3 (t) + f3 (t)
or x1Δ (t) = −tx1 (t) + x2 (t) + tx3 (t) + f1 (t),
−t(x2Δ (t) − 2x3Δ (t)) − x3 (t) + (1 − 2t)x3Δ (t) = x2 (t) + (−1 + 2t)x3 (t) + f2 (t), 0 = tx1 (t) + x3 (t) + f3 (t),
or
t ∈ 𝕋,
t ∈ 𝕋,
356 � 7 Fourth kind linear time-varying dynamic-algebraic equations x1Δ (t) = −tx1 (t) + x2 (t) + tx3 (t) + f1 (t),
−tx2Δ (t) + x3Δ (t) = x2 (t) + 2tx3 (t) + f2 (t), 0 = tx2 (t) + x3 (t) + f3 (t),
t ∈ 𝕋.
Exercise 7.1. Let 𝕋 = 3ℕ0 and
1. 2. 3.
t A(t) = (t 2 1
2t − 1 t 1+t
1 −3 ) , −4t
−t C(t) = ( 2 −1
1+t t3 0
−4 5t ) , 1
t ∈ 𝕋.
Find a C 1 -projector so that Pσ is a C 1 -projector along ker A. Find A1 (t) = A(t)PΔ (t) + C(t), t ∈ 𝕋. Write the system (7.3).
7.3 Standard form index one problems In this section, we will investigate the equation A(Px)Δ = Cx + f ,
(7.4)
where P is a C 1 -projector along ker A. Set Q = I − P. Suppose that (D1) A1 = A + CQ is invertible. By the condition (D1), it follows that the equation (7.4) is regular with tractability index one. By Lemma 4.1, it follows that A−1 1 A = P,
A−1 1 CQ = Q. We multiply (7.4) by A−1 1 and we find Δ −1 −1 A−1 1 A(Px) = A1 Cx + A1 f
or
7.3 Standard form index one problems
� 357
−1 P(Px)Δ = A−1 1 Cx + A1 f
−1 −1 = A−1 1 CPx + A1 CQx + A1 f −1 = A−1 1 CPx + Qx + A1 f ,
i. e., −1 P(Px)Δ = A−1 1 CPx + Qx + A1 f .
(7.5)
We multiply the equation (7.5) by P and using that PP = P,
P(Px)Δ = (Px)Δ − PΔ Pσ x σ , and setting Px = u, we arrive at the equation −1 (Px)Δ = PΔ Pσ x σ + A−1 1 CPx + A1 f ,
or −1 uΔ = PΔ uσ + A−1 1 Cu + A1 f .
(7.6)
Now, we multiply (7.5) by Q and setting Qx = v, we find −1 0 = QA−1 1 CPx + Qx + QA1 f −1 = QA−1 1 Cu + v + QA1 f .
By the last equation and the equation (7.7), we obtain −1 uΔ = PΔ uσ + A−1 1 Cu + A1 f , −1 v = −QA−1 1 Cu − QA1 f .
Example 7.2. Let 𝕋 = ℕ and A, C be as in Example 4.2. Let also 1 P(t) = (0 0
0 0 −(t + 1)
0 0) , 1
(7.7)
358 � 7 Fourth kind linear time-varying dynamic-algebraic equations 0 Q(t) = (0 0
0 1 t+1
0 0) , 0
t ∈ 𝕋.
We have 1 P(t) + Q(t) = (0 0 1 = (0 0
0 0 −(t + 1) 0 1 0
0 0 0) + (0 1 0
0 0) , 1
0 1 t+1
0 0) 0
0 0 −(t + 1)
0 0) 1
t ∈ 𝕋,
and −1 A(t)P(t) = ( 0 0
t+1 0 2(t + 1)
−1 1 0 ) (0 −2 0
−1 =(0 0
t+1 0 2(t + 1)
−1 0) −2
1 P(t)P(t) = (0 0
0 0 −(t + 1)
0 1 0) (0 1 0
1 = (0 0
0 0 −(t + 1)
0 0) 1
= A(t),
t ∈ 𝕋,
and
= P(t),
t ∈ 𝕋.
0 0 −(t + 1)
0 0) 1
1 0 1) (0 1 0
0 1 t+1
Thus, P is a projector along ker A. Next, we have A1 (t) = A(t) + C(t)Q(t) −1 =(0 0
t+1 0 2(t + 1)
−1 0 0 ) + (0 −2 0
0 −t 2
−1 =(0 0
t+1 0 2(t + 1)
0 −1 0 ) + (0 −2 0
t+1 1 t+3
0 0) 0
0 0) 0
7.3 Standard form index one problems
−1 =(0 0
2(t + 1) 1 3t + 5
−1 0 ),
t ∈ 𝕋.
−2
Hence, det A(t) = 2
≠ 0,
t ∈ 𝕋.
Therefore, A1 is invertible. We will find the cofactors of A1 . We have 0 1 a11 (t) = 3t + 5 −2 = −2, 0 0 a12 (t) = − 0 −2 = 0, 1 0 a13 (t) = 0 3t + 5 = 0, 2(t + 1) −1 a21 (t) = − 3t + 5 −2
= −(−4t − 4 + 3t + 5) = t − 1, −1 −1 a22 (t) = 0 −2 = 2, −1 2(t + 1) a23 (t) = − 3t + 5 0 = 3t + 5, 2(t + 1) −1 a31 (t) = 0 1 = 1, −1 −1 a32 (t) = − 0 0 = 0, −1 2(t + 1) a33 (t) = 1 0 = −1, t ∈ 𝕋. Hence,
� 359
360 � 7 Fourth kind linear time-varying dynamic-algebraic equations
A−1 1 (t) =
t−1 2 3t + 5
−2 1 (0 2 0 −1
t−1 2
0
3t+5 2
=(0
1 0) −1
1 2
1
0 ), − 21
t ∈ 𝕋,
and P(t)A−1 1 (t)
1 = (0 0 −1
=(0 0
P(t)A−1 1 (t)C(t)
−1
=(0 0
0 = (0 0
0 0 −(t + 1) t−1 2
0
1 2
0 ) ( 0 0 0 −1
0
t+3 2 2 − (t−2)(t+1) 2
t−2 2
− (t+1)(t+2) 2
t+2 2
0
0 = (0 0
0 1 t+1
0 0) , 0
0 Q(t)A−1 (t)C(t) = ( 0 1 0
0 1 t+1
0 0 0) (0 0 0
0 −t −t(t + 1) 0 0 −1
0) − 21
0 0) , 0
0 −t 2
1 1) 1
0 ),
−1 0 0) ( 0 0 0
0 PΔ (t) = (0 0
1
0 ), − 21
t+3 2 t−1 2
0 1 t+1
0 = (0 0
1 2
3t+5 2
1 2
0 = (0 0
Q(t)A−1 1 (t)
t−1 2
−1 0 ) ( 0 0 1 0
t−1 2
1
3t+5 2
0 −t 2
0 1 ), t+1 t ∈ 𝕋.
1 2
0) − 21
1 1) 1
7.4 Decoupling of fourth-order linear time-varying dynamic algebraic equations of index one
� 361
Exercise 7.2. Let 𝕋 = 3ℕ and t+1 2t 0
−1 A(t) = ( 4 −2 t+2 C(t) = ( 0 1 1. 2. 3. 4.
t2 t −2
t t2 ) , t+2 t 2t − t 2 ) , t+2
t ∈ 𝕋.
Find a C 1 -projector P along ker A. Find Q(t) = I − P(t), t ∈ 𝕋. Find A1 (t) = A(t) + C(t)Q(t), t ∈ 𝕋. Find the system (7.7).
7.4 Decoupling of fourth-order linear time-varying dynamic algebraic equations of index one Suppose that the equation (7.1) is regular with tractability index one. Assume that R is a continuous projector onto im B and along ker A. Set G0 = AB. Let P0 be a continuous projector along ker G0 and set Q0 = I − P0 ,
G1 = G0 + CQ0 .
Let also B− be the {1, 2}-inverse of B so that B− BB− = B− , BB− B = B,
B− B = P0 , BB− = R.
We have A = AR
= ABB−
= G0 B− . Then the equation (7.1) can be rewritten in the form
362 � 7 Fourth kind linear time-varying dynamic-algebraic equations G0 B− (Bx)Δ = Cx + f , which we multiply by G1−1 and we find G1−1 G0 B− (Bx)Δ = G1−1 Cx + G1−1 f .
(7.8)
Using that G1−1 G0 = I − Q0 ,
BP0 G1−1 G0 = BP0 , G1−1 CQ0 = Q0 ,
BQ0 −1 BP0 G1 CQ0
= 0, = 0,
we find BP0 G1−1 G0 B− (Bx)Δ = BP0 G1−1 Cx + BP0 G1−1 f or BP0 B− (Bx)Δ = BP0 G1−1 Cx + BP0 G1−1 f , whereupon Δ
Δ
Δ
(BP0 B− Bx) = (BP0 B− ) Bσ P0σ x σ + (BP0 B− ) Bσ Q0σ x σ
+ BP0 G1−1 CP0 x + BP0 G1−1 CQ0 x + BP0 G1−1 f ,
or Δ
(BP0 x)Δ = (BP0 B− ) Bσ P0σ x σ + BP0 G1−1 CB− BP0 x + BP0 G1−1 f . We set BP0 x = u and we get Δ
uΔ = (BP0 B− ) uσ + BP0 G1−1 CB− u + BP0 G1−1 f . Now, we multiply by Q0 the equation (7.8) and using that Q0 G1−1 Q0 = 0, and setting
(7.9)
7.4 Decoupling of fourth-order linear time-varying dynamic algebraic equations of index one
� 363
Q0 x = v, we find 0 = Q0 G1−1 Cx + Q0 G1−1 f
= Q0 G1−1 CP0 x + Q0 G1−1 CQ0 x + Q0 G1−1 f = Q0 G1−1 CB− BP0 x + Q0 x + Q0 G1−1 f = Q0 G1−1 CB− u + v + Q0 G1−1 f ,
or v = −Q0 G1−1 CB− u − Q0 G1−1 f . By the last equation and (7.9), we arrive at the system Δ
uΔ = (BP0 B− ) uσ + BP0 G1−1 CB− u + BP0 G1−1 f , v = −Q0 G1−1 CB− u − Q0 G1−1 f .
(7.10)
By the first equation of the system (7.10), we find u, and then by the second equation of (7.10), we find v. For the solution x of (7.1), we have the representation x = BP0 x + Q0 x = u + v.
As in Section 4.5.2 and Section 5.5.2, the above described process is reversible. Definition 7.9. The equation (7.9) is said to be the inherent equation for the equation (7.1). As we have proved Theorem 4.1 and Theorem 5.1, one can prove the following result. Theorem 7.1. The subspace im P0 is an invariant subspace for the equation (7.9), i. e., u(t0 ) ∈ (im BP0 )(t0 ) for some t0 ∈ I if and only if u(t) ∈ (im BP0 )(t) for any t ∈ I. Example 7.3. Let 𝕋 = 2ℕ0 and A, B and C be as in Example 4.3. By the computations in Example 4.3, we have
364 � 7 Fourth kind linear time-varying dynamic-algebraic equations 1 t
B(t)P0 (t)(G1−1 )(t) = (0 0
B(t)P0 (t)(G1−1 )(t)C(t)B− (t)
0
0
1
0
0
− t24
=( 0 0
1 t3 1 t2
1 t2 1 t5 1 t4 − t14
− t13 1 t2
0
0
),
0 t ∈ 𝕋,
t) ,
0
and 0 0 Q0 (t)(G1−1 )(t) = (0 0
0 0 0 0
0 0 0 0
(0 0 0 0 0 0 0 0 Q0 (t)(G1−1 )(t)C(t)B− (t) = ( 0 1 1 − t4 t3 1
0
( t3
0 0 0 − t12 0
0 0 0) , 0
0 0 0), 0
1 t2 )
t ∈ 𝕋.
0)
Then the decoupling (7.10) takes the form u1Δ (t)
(u2Δ (t)) u3Δ (t)
− t24 =( 0 0
1 t5 1 t4 − t14
0 0 0 0 ( 0 ( 0 )=− 1 v0 (t) t3 1 v1 (t) (3 t
0 0 ( 0 − 0 (0
or
1 0 u1 (t) t t ) (u2 (t)) + (0 0 u3 (t) 0
0 0 0 − t14
0
0 f1 (t) 0 ) (f2 (t)) , 1 f3 (t) t2
0 0 u1 (t) 0) (u2 (t)) 0 u3 (t)
0
0 0 0 0
0 1 0
0 0 0 0 0
0) 0 0 0 − t12 0
0 f1 (t) 0 f2 (t) 0 ) (f3 (t)) , 0 0 1 0 2)
t
t ∈ 𝕋,
7.4 Decoupling of fourth-order linear time-varying dynamic algebraic equations of index one
− t24 u1 (t) +
u1Δ (t)
(u2Δ (t)) = ( u3Δ (t)
1 u (t) t5 2
1 u (t) + tu3 (t) t4 2 − t14 u3 (t)
1 f (t) t 1
) + ( f2 (t) ) , 1 f (t) t2 3
0 0 0 0 0 0 ) − ( 0) , 0 ( 0 ) = −( 1 u (t) − t14 u2 (t) v0 (t) 0 t3 1 1 v1 (t) 0 u (t) 3 1 ( )
t ∈ 𝕋,
t
or u1Δ (t) = −
2 1 1 u1 (t) + 5 u2 (t) + f1 (t), 4 t t t
1 u (t) + tu3 (t) + f2 (t), t4 2 1 1 u3Δ (t) = − 4 u2 (t) + 2 f3 (t), t t 1 1 v0 (t) = − 3 u1 (t) + 4 u2 (t), t t 1 v1 (t) = − 3 u1 (t), t ∈ 𝕋. t u2Δ (t) =
Exercise 7.3. Let 𝕋 = 5ℕ0 and 1 1 A(t) = ( 1 0 0 t B(t) = (0 0 0 0 C(t) = (0 t 1 Find the representation (7.10).
−1 0 0 t 0
t t t) , 0 t
0 t2 0
t+1 t t+2
0 1 0
0 0) , 0
0 0 t t 0
t t−1 0 0 0
t t+2 0 t−1 0
t+1 0 0 ), 0 1
t ∈ 𝕋.
� 365
366 � 7 Fourth kind linear time-varying dynamic-algebraic equations
7.5 Decoupling of fourth kind linear dynamic-algebraic equations of index ≥ 2 7.5.1 A reformulation Suppose that the equation (7.1) is (1, σ)-regular with tractability index ν. In addition, assume that R is a continuous projector onto im Bσ and along ker A. Set G0σ = ABσ and take Π0 to be a continuous projector along ker G0 and denote M0 = I − Π0 , C0 = C,
G1σ = G0σ + C0 M0σ ,
BB− B = B,
B− BB− = B− ,
B − B = Π0 ,
Bσ B−σ = R,
N0 = ker G0 .
Let Gi , Πi , i ∈ {1, . . . , ν − 1} be as in (A5)–(A7). Assume that (A8) holds and let Δ
Ci Πi = (Ci−1 + Ci Miσ + Giσ B−σ (BΠi B− ) B)Πi−1 , σ σ Giσ = Gi−1 + Ci−1 Mi−1 ,
i ∈ {1, . . . , ν − 1}.
Since R = Bσ B−σ and R is a continuous projector along ker A, we have A = AR
= ABσ B−σ = G0 B−σ .
Then we can rewrite the equation (7.1) as follows: ABσ B−σ (Bx)Δ = Cx + f
7.5 Decoupling of fourth kind linear dynamic-algebraic equations of index ≥ 2
�
367
or G0σ B−σ (Bx)Δ = Cx + f . Now, we multiply both sides of the last equation with Gν−1σ and we find Gν−1σ G0σ B−σ (Bx)Δ = Gν−1σ Cx + Gν−1σ f .
(7.11)
Note that σ Gν−1σ G0σ = I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 .
Therefore, (7.11) takes the form σ (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (Bx)Δ = Gν−1σ Cx + Gν−1σ f .
(7.12)
Because Πν−1 projects along N0 ⊕ ⋅ ⋅ ⋅ ⊕ Nν−1 and Ni = im Qi , we have σ Πσν−1 (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 ) = Πσν−1
and from here, σ Bσ Πσν−1 (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 ) = Bσ Πσν−1 .
Now, we multiply (7.12) by Bσ Πσν−1 and we get σ Bσ Πσν−1 (I − Q0σ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (Bx)Δ = Bσ Πσν−1 Gν−1σ Cx + Bσ Πσν−1 Gν−1σ f
or Bσ Πσν−1 B−σ (Bx)Δ = Bσ Πσν−1 Gν−1σ Cx + Bσ Πσν−1 Gν−1σ f . Observe that Πσν−1 B−σ Bσ = Πσν−1 Πσ0 = Πσν−1 .
Since BΠν−1 B− and Bx are C 1 , we get Δ
Δ
Bσ Πσν−1 B−σ (Bx)Δ = (BΠν−1 B− Bx) − (BΠν−1 B− ) Bx Δ
= (BΠν−1 x)Δ − (BΠν−1 B− ) Bx.
Therefore, (7.13) can be rewritten in the form
(7.13)
368 � 7 Fourth kind linear time-varying dynamic-algebraic equations Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) Bx = Bσ Πσν−1 Gν−1σ Cx + Bσ Πσν−1 Gν−1σ f . Now, we decompose x as follows: x = Πν−1 x + (I − Πν−1 )x. We obtain Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) B(I − Πν−1 + Πν−1 )x
= Bσ Πσν−1 Gν−1σ C(I − Πν−1 + Πν−1 )x + Bσ Πσν−1 Gν−1σ f ,
or Δ
Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) BΠν−1 x − (BΠν−1 B− ) (I − Πν−1 )x
= Bσ Πσν−1 Gν−1σ CΠν−1 x + Bσ Πσν−1 Gν−1σ C(I − Πν−1 )x + Bσ Πσν−1 Gν−1σ f .
We compute j
Δ
σ Cj Πj = (C + Gj+1 − G1σ + ∑ Giσ B−σ (BΠi B− ) B)Πj−1 i=1
and 0 = Cj Πj Mj j
Δ
σ = (C + Gj+1 − G1σ + ∑ Giσ B−σ (BΠi B− ) B)Πi−1 Mj i=1 j
Δ
σ = (C + Gj+1 − G1σ + ∑ Giσ B−σ (BΠi B− ) B)Mj i=1
j
Δ
σ = CMj + (Gj+1 − G1σ )Mj + ∑ Giσ B−σ (BΠi B− ) BMj , i=1
from where j
Δ
σ CMj = −(Gj+1 − G1σ )Mj − ∑ Giσ B−σ (BΠi B− ) BMj . i=1
We multiply the last equation with Gν−1σ and we find j
Δ
σ Gν−1σ CMj = −Gν−1σ (Gj+1 − G1σ )Mj − ∑ Gν−1σ Giσ B−σ (BΠi B− ) BMj i=1
(7.14)
7.5 Decoupling of fourth kind linear dynamic-algebraic equations of index ≥ 2 j
� 369
Δ
σ = −(Q1σ + ⋅ ⋅ ⋅ Qjσ )Mj − ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMj , i=1
i. e., j
Δ
σ Gν−1σ CMj = −(Q1σ + ⋅ ⋅ ⋅ Qjσ )Mj − ∑(I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMj . i=1
(7.15)
Hence, Bσ Πσν−1 Gν−1σ CMj = −Bσ Πσν−1 (Q1σ + ⋅ ⋅ ⋅ Qjσ )Mj j
Δ
σ − ∑ Bσ Πσν−1 (I − Qiσ − ⋅ ⋅ ⋅ − Qν−1 )B−σ (BΠi B− ) BMj i=1 j
Δ
= − ∑ Bσ Πσν−1 B−σ (BΠi B− ) BMj i=1 j
j
Δ
Δ
= − ∑(BΠν−1 B− BΠi B− ) BMj + ∑(BΠν−1 B− ) BΠi B− BMj i=1
i=1
j
j−1
Δ
Δ
= − ∑(BΠν−1 B− BΠi B− ) BMj + ∑(BΠν−1 B− ) BΠi B− BMj i=1
i=1
j
Δ
j−1
Δ
= − ∑(BΠν−1 B− ) BMj + ∑(BΠν−1 B− ) BMj i=1
− Δ
i=1
= −(BΠν−1 B ) BMj , or Δ
Bσ Πσν−1 Gν−1σ CMj = −(BΠν−1 B− ) BMj . By the last equation, we obtain ν−1
Δ
ν−1
Bσ Πσν−1 Gν−1σ C ∑ Mj = −(BΠν−1 B− ) B ∑ Mj . j=0
j=0
Because ν−1
∑ Mj = I − Πν−1 ,
j=0
by (7.16), we find Δ
Bσ Πσν−1 Gν−1σ C(I − Πν−1 ) = −(BΠν−1 B− ) B(I − Πν−1 ).
(7.16)
370 � 7 Fourth kind linear time-varying dynamic-algebraic equations Using the equation (7.14), we find Δ
(BΠν−1 x)Δ − (BΠν−1 B− ) BΠν−1 x = Bσ Πσν−1 Gν−1σ CΠν−1 x + Bσ Πσν−1 Gν−1σ f . Set u = BΠν−1 x. Then CΠν−1 x = CB− u. Thus, we get the equation Δ
uΔ − (BΠν−1 B− ) u = Bσ Πσν−1 Gν−1σ CB− u + Bσ Πσν−1 Gν−1σ f , or Δ
uΔ = (BΠν−1 B− ) u + Bσ Πσν−1 Gν−1σ CB− u + Bσ Πσν−1 Gν−1σ f .
(7.17)
Definition 7.10. The equation (7.17) is said to be the inherent equation of the equation (7.1). As we have proved Theorem 5.1, one can prove the following result. Theorem 7.2. The subspace im Πν−1 is an invariant subspace for the equation (7.17), i. e., u(t0 ) ∈ (im BΠν−1 )(t0 ) for some t0 ∈ I if and only if u(t) ∈ (im BΠν−1 )(t) for any t ∈ I. 7.5.2 The component vν−1 We will use the decomposition x = M0 x + M1 x + ⋅ ⋅ ⋅ + Mν−1 x + B− BΠν−1 x. Set vj = Mj x,
j ∈ {0, . . . , ν − 1}.
7.5 Decoupling of fourth kind linear dynamic-algebraic equations of index ≥ 2
� 371
σ We multiply the equation (7.11) by Mν−1 and we find σ σ σ Mν−1 Gν−1σ G0σ B−σ (Bx)Δ = Mν−1 Gν−1σ Cx + Mν−1 Gν−1σ f .
(7.18)
Note that σ Mν−1 Gν−1σ G0σ = 0.
Thus, the equation (7.18) takes the following form: σ σ Mν−1 Gν−1σ Cx = −Mν−1 Gν−1σ f .
(7.19)
Hence, σ σ Mν−1 Gν−1σ f = −Mν−1 Gν−1σ C(M0 x + M1 x + ⋅ ⋅ ⋅ + Mν−1 x + B− BΠν−1 x).
(7.20)
Using (7.15), we get σ σ Mν−1 Gν−1σ CMj = −Mν−1 (Q1σ + ⋅ ⋅ ⋅ + Qjσ )Mj j
Δ
σ σ σ − ∑ Mν−1 (I − Qiσ − Qi+1 − ⋅ ⋅ ⋅ − Qν−1 )Bσ (BΠi B− ) BMj
= 0,
i=1
j < ν − 1,
and σ σ σ Mν−1 Gν−1σ CMν−1 = −Mν−1 (Q1σ + ⋅ ⋅ ⋅ + Qν−1 )Mν−1 ν−1
Δ
σ σ σ − ∑ Mν−1 (I − Qiσ − Qi+1 − ⋅ ⋅ ⋅ − Qν−1 )Bσ (BΠi B− ) BMν−1 i=1
σ = −Mν−1 Mν−1 .
Then, by (7.20), we find σ σ σ Mν−1 Mν−1 x + Mν−1 Gν−1σ SB− BΠν−1 x = Mν−1 Gν−1σ f
or σ σ σ Mν−1 vν−1 = −Mν−1 Gν−1σ CB− u + Mν−1 Gν−1σ f .
(7.21)
372 � 7 Fourth kind linear time-varying dynamic-algebraic equations 7.5.3 The components vk . Terms coming from Ukσ Gν−1σ Cx As in Section 5.5.2, we get ν−1
Ukσ Gν−σ G0σ B−σ (Bx)Δ = ∑ Nkj (Bvj )Δ j=k+1
ν−1
−σ ̃ + K̃ k B u + ∑ Mkj vj ,
(7.22)
j=1
ν−1
ν−1
j=k+1
j=k+1
− σ σ −1σ ̂ ̃ Mkσ vk = ∑ Nkj (Bvj )Δ + (K̃ k B − Kk ) + ∑ (Mkj − Mkj )vj + Uk Gν f ,
vk = Mk vk , and ν−1
Uk Gν−1 Cx σ = K̂ k u + ∑ Mkj vj , j=1
where σ
σ
Nkk+1 = −Mk Qk+1 B Nkk+2 = Nkk+3 =
.. .
−σ
,
σ σ −Mkσ Pk+1 Qk+2 B−σ , σ σ σ −Mkσ Pk+1 Pk+2 Qk+3 B−σ ,
σ σ
σ
σ
Nkν−1 = −Mk Pk+1 . . . Pν−2 Qν−1 B K̃ k =
̃ M kj = .. .
−σ
,
σ σ Mkσ (Pk+1 . . . Pν−1 − I)B−σ (BΠν−1 x)Δ B, Δ Δ σ σ σ −Mkσ (Qk+1 B−σ (BMk+1 B− ) + Pk+1 Qk+2 B−σ (BMk+2 B− ) Δ σ σ σ + Pk+1 Pk+2 Qk+3 B−σ (BMk+3 B− )
Δ
σ σ σ + Pk+1 . . . Pν−2 Qν−1 B−σ (BMν−1 B− ) )BMj B− B, σ σ
σ
−1σ
σ σ
σ
−1σ
K̂ k = Mk Pk+1 . . . Pν−1 Gν CB , −
Mk0 = Mk Pk+1 . . . Pν−1 Gν C,
Δ
σ
σ
σ
−σ
(BMj B− ) B,
σ
σ
σ
−σ
(BMk B− ) BMk − Mkσ ,
Mkj = −Mk (Pk+1 . . . Pν−1 − I)B Mkk = −Mk (Pk+1 . . . Pν−1 − 1)B σ
Δ
Mkj = −Mk
j−1
Δ
σ − Mkσ ∑ (Pk+1 . . . Piσ − I)B−σ (BMj B− ) B i=k
j < k,
7.6 Advanced practical problems �
Δ
σ σ σ + Mkσ (Pk+1 . . . Pν−1 − Pk+1 . . . Pjσ )Πσj B−σ (BMj B− ) B,
K̂ k =
σ σ Mkσ Pk+1 . . . Pν−1 Gν−1σ CB− .
373
j > k,
As in Chapter 5, we have the following result. Theorem 7.3. Assume that the equation (7.1) is regular with tractability index ν on I and f is enough smooth function. Then x ∈ CB1 (I) solves (7.1) if and only if it can be written as x = B− u + vν−1 + ⋅ ⋅ ⋅ + v1 + v0 . Exercise 7.4. Let 5 11 7 𝕋 = {2, , , , 4,′ 10, 18} 2 4 2 and t+1 0 A(t) = ( 1 0 0
0 t2 0 0 0
0 1+t 1 ), 0 0
1 B(t) = (0 t
0 t 2t
0 0 0
0 0 C(t) = ( 0 −1 1 1. 2. 3.
1 t 0
0 0 −1 1 0
0 1 0 0 0
0 0) , 0 −1 1 0 0 0
1 0 0 ), 0 t+2
t ∈ 𝕋.
Prove that the equation (7.1) is regular with tractability index 2. Write the inherent equation of the equation (7.1). Write the equation (7.22).
7.6 Advanced practical problems Problem 7.1. Let 𝕋 = 4ℕ0 and t2 + t A(t) = (t 2 − 2 0
t t2 t
1+t 3t ) , −4 + t
374 � 7 Fourth kind linear time-varying dynamic-algebraic equations 2t 2 + t C(t) = ( 2 + t −1
t t t+3
1 0) , 1
t ∈ 𝕋.
1.
Find a C 1 -projector so that Pσ is a C 1 -projector along ker A.
3.
Write the system (7.3).
2.
Find A1 (t) = A(t)PΔ (t) + C(t), t ∈ 𝕋.
Problem 7.2. Let 𝕋 = 4ℕ0 and t2 + t A(t) = ( −3 0
t t+2 t
t+2 C(t) = ( 1 −1
0 t+5 2
1.
Find a C 1 -projector P along ker A.
3.
Find A1 (t) = A(t) + C(t)Q(t), t ∈ 𝕋.
2.
4.
t − t3 t ), t+2
t2 2t ) , t2
t ∈ 𝕋.
Find Q(t) = I − P(t), t ∈ 𝕋. Find the system (7.7).
Problem 7.3. Let 𝕋 = 3ℕ and 0 2 A(t) = ( 1 0 1 0 B(t) = (0 0 1 0 C(t) = (−1 t 1 Find the representation (7.10).
1 0 −1 t 0 0 t 0
t+1 t−1 t + 1) , 1 t t+1 t+2 t
0 0 t t 0
t t+1 0 0 0
t t+1 t
t+1 0 ), t
t t 0 0 0
t+1 2 1 ), 0 t+2
t ∈ 𝕋.
7.6 Advanced practical problems �
Problem 7.4. Let 𝕋 = 7ℕ0 and
1. 2. 3.
3 0 A(t) = (0 0 0
0 4t + 1 0 0 0
0 0 2t ) , 0 0
t2 B(t) = ( 0 0
0 2 0
0 0 3t
0 0 0
0 0) , 0
0 t C(t) = ( 0 −1 1+t
t 0 −1 1 0
0 1 0 0 t
−1 1 0 t 0
1 0 0) , 0 2
t ∈ 𝕋.
Prove that the equation (7.1) is (σ, 1)-regular with tractability index 2. Write the inherent equation of the equation (7.1). Write the equation (7.22).
Problem 7.5. Let 𝕋 = 3ℕ and 1 0 A(t) = (0 0 0
0 t+4 0 0 0
t3 B(t) = ( 0 0
0 −t 3 0
1 0 C(t) = ( 1 −1 1+t 1. 2. 3.
0 0 2) , 0 0 0 0 1
1 0 −1 2 0
0 0 0 0 1 0 0 t
0 0) , 0 −1 1 1 0 0
1 0 0) , 0 2
t ∈ 𝕋.
Prove that the equation (7.1) is (σ, 1)-regular with tractability index 3. Write the inherent equation of the equation (7.1). Write the equation (7.22).
7.7 Notes and references

In this chapter, we introduce fourth kind linear time-varying dynamic-algebraic equations. We classify them as (1, σ)-regular and, using the properties of the leading term of the considered class of equations, we deduce the inherent equation for the considered equations. In the chapter, a procedure for decoupling of the considered equations is given and we prove that this procedure is reversible.
8 Jets and jet spaces

In this chapter, we introduce jets of one independent time scale variable, and jets of n independent real variables and one independent time scale variable. We deduce some of their properties and then we define jet spaces and the total derivative in jet variables. Suppose that 𝕋 is a time scale with forward jump operator σ and delta differentiation operator Δ.
8.1 The Taylor formula for a function of one independent time scale variable

We introduce the generalized monomials h_k : 𝕋 × 𝕋 → ℝ, k ∈ ℕ_0, defined recursively by

h_0(t, s) = 1,  h_k(t, s) = \int_s^t h_{k-1}(τ, s) Δτ,  k ∈ ℕ,  t, s ∈ 𝕋.
Then

h_1(t, s) = \int_s^t Δτ = t − s

and

h_k^{Δ_t}(t, s) = h_{k-1}(t, s),  k ∈ ℕ,
t, s ∈ 𝕋.
k ∈ ℕ,
t, s ∈ 𝕋.
k ∈ ℕ,
t ∈ 𝕋.
Example 8.1. Let 𝕋 = ℝ. Then hk (t, s) =
(t − s)k , k!
Example 8.2. Let 𝕋 = ℤ. We define t (0) = 1,
k−1
t (k) = ∏(t − j), j=0
Then h0 (t, s) = (t − s)(0) , t
h1 (t, s) = ∫ h0 (τ, s)Δτ s
https://doi.org/10.1515/9783111377155-008
378 � 8 Jets and jet spaces t
= ∫ Δτ s
=t−s (t − s)(1) = . 1! Assume that hk (t, s) =
(t − s)(k) , k!
t, s ∈ 𝕋,
for some k ∈ ℕ. We will prove that hk+1 (t, s) =
(t − s)(k+1) . (k + 1)!
Really, we have Δ
(
(t − s)(k+1) t (k+1) ) = (σ(t) − s) − (t − s)(k+1) (k + 1)! 1 = ((σ(t) − s)(σ(t) − s − 1) . . . (σ(t) − s − k) (k + 1)! − (t − s)(t − s − 1) . . . (t − s − k)) 1 = ((t + 1 − s)(t + 1 − s − 1) . . . (t + 1 − s − k) (k + 1)! − (t − s)(t − s − 1) . . . (t − s − k)) 1 = ((t + 1 − s)(t − s) . . . (t − s − k + 1) (k + 1)! − (t − s)(t − s − 1) . . . (t − s − k)) 1 = (t − s) . . . (t − s − k + 1)(t + 1 − s − t + s + k) (k + 1)! 1 = (t − s) . . . (t − s − k + 1)(k + 1) (k + 1)! 1 = (t − s)(t − s − 1) . . . (t − s − k + 1) k! (t − s)(k) = k! = hk (t, s), t, s ∈ 𝕋.
Therefore, (8.1) holds for any k ∈ ℕ.
(8.1)
8.1 The Taylor formula for a function of one independent time scale variable
Theorem 8.1. We have t
hk+m+1 (t, t0 ) = ∫ hk (t, σ(s))hm (s, t0 )Δs, t0
t, t0 ∈ 𝕋,
k, m ∈ ℕ0 . Proof. Let t
g(t) = ∫ hk (t, σ(s))hm (s, t0 )Δs. t0
Then, using the chain rule, we get g Δ (t) = hk (σ(t), σ(t))hm (σ(t), t0 ) t
+ ∫ hk−1 (t, σ(s)), hm (s, t0 )Δs t0
t
= ∫ hk−1 (t, σ(s))hm (s, t0 )Δs, t0
2
g Δ (t) = hk−1 (σ(t), σ(t))hm (t, t0 ) t
+ ∫ hk−2 (t, σ(s))hm (s, t0 )Δs t0
t
= ∫ hk−2 (t, σ(s))hm (s)Δs, .. . Δk
t0
t
g (t) = ∫ hm (s, t0 )Δs t0
g
Δk+1
gΔ This completes the proof.
k+m
= hm+1 (t, t0 ), (t) = hm+1 (t, t0 ), .. . (t) = hk+m+1 (t, t0 ).
�
379
380 � 8 Jets and jet spaces We define t
g0 (t, s) = 1,
gk+1 (t, s) = ∫ gk (σ(τ), s)Δτ, s
k ∈ ℕ0 ,
t, s ∈ 𝕋.
Lemma 8.1. Let n ∈ ℕ. If f is n-times differentiable and pk , 0 ≤ k ≤ n − 1, are differentiable at some t ∈ 𝕋 with pΔk+1 (t) = pσk (t)
for
all
0 ≤ k ≤ n − 2,
n ≥ 2,
then we have n−1
Δ
k Δk
n
( ∑ (−1) f (t)pk (t)) = (−1)n−1 f Δ (t)pσn−1 (t) + f (t)pΔ0 (t). k=0
Proof. We have n−1
k
Δ
n−1
k
( ∑ (−1)k f Δ (t)pk (t)) = ∑ (−1)k (f Δ (t)pk (t)) k=0
Δ
k=0 n−1
k+1
k
= ∑ (−1)k (f Δ (t)pσk (t) + f Δ (t)pΔk (t)) k=0 n−1
n−1
k+1
k
= ∑ (−1)k f Δ (t)pσk (t) + ∑ (−1)k f Δ (t)pΔk (t) k=0
k=0
n−2
k+1
n
= ∑ (−1)k f Δ (t)pσk (t) + (−1)n−1 f Δ (t)pσn−1 (t) k=0
n−1
k
0
+ ∑ (−1)k f Δ (t)pΔk (t) + f Δ (t)pΔ0 (t) k=0
n−2
k+1
n
= ∑ (−1)k f Δ (t)pΔk+1 (t) + (−1)n−1 f Δ (t)pσn−1 (t) k=0
n−2
k+1
+ ∑ (−1)k+1 f Δ (t)pΔk+1 (t) + f (t)pΔ0 (t) k=0
n
= (−1)n−1 f Δ (t)pσn−1 (t) + f (t)pΔ0 (t). This completes the proof. Lemma 8.2. The functions gn (t, s) satisfy for all t ∈ 𝕋 the relationship gn (ρk (t), t) = 0
for
all
n ∈ ℕ and
all
0 ≤ k ≤ n − 1.
8.1 The Taylor formula for a function of one independent time scale variable
Proof. Let n ∈ ℕ be arbitrarily chosen. Then gn (ρ0 (t), t) = gn (t, t) t
= ∫ gn−1 (σ(τ), t)Δτ t
= 0. Assume that gn−1 (ρk (t), t) = 0
and
gn (ρk (t), t) = 0
for some 0 ≤ k < n − 1. We will prove that gn (ρk+1 (t), t) = 0. Case 1. ρk (t) is left-dense. Then ρk+1 (t) = ρ(ρk (t)) = ρk (t). Consequently, using the induction assumption, we have gn (ρk+1 (t), t) = gn (ρk (t), t) = 0. Case 2. ρk (t) is left-scattered. Then ρ(ρk (t)) < ρk (t) and there is no s ∈ 𝕋 such that ρk+1 (t) < s < ρk (t). Hence, σ(ρk+1 (t)) = ρk (t). Therefore, gn (σ(ρk+1 (t)), t) = gn (ρk+1 (t), t) + μ(ρk+1 (t))gnΔ (ρk+1 (t), t) or gn (ρk (t), t) = gn (ρk+1 (t), t) + μ(ρk+1 (t))gnΔ (ρk+1 (t), t), whereupon gn (ρk+1 (t), t) = gn (ρk (t), t) − μ(ρk+1 (t))gnΔ (ρk+1 (t), t)
= gn (ρk (t), t) − μ(ρk+1 (t))gn−1 (σ(ρk+1 (t)), t)
�
381
382 � 8 Jets and jet spaces = gn (ρk (t), t) − μ(ρk+1 (t))gn−1 (ρk (t), t) = 0. This completes the proof. Lemma 8.3. Let n ∈ ℕ, and suppose that f is (n − 1)-times differentiable at ρn−1 (t). Then n−1
k
f (t) = ∑ (−1)k f Δ (ρn−1 (t))gk (ρn−1 (t), t). k=0
Proof. 1.
Let n = 1. Then 0
k
0
∑ (−1)k f Δ (ρ0 (t))gk (ρ0 (t), t) = (−1)0 f Δ (t)g0 (t, t)
k=0
2.
= f (t).
Assume that m−1
k
f (t) = ∑ (−1)k f Δ (ρm−1 (t))gk (ρm−1 (t), t) k=0
3.
for some m ∈ ℕ. We will prove that m
k
f (t) = ∑ (−1)k f Δ (ρm (t))gk (ρm (t), t). k=0
1. case. ρm−1 (t) is left-dense. Then ρm (t) = ρ(ρm−1 (t)) = ρm−1 (t). Hence and the induction assumption, we obtain m
k
∑ (−1)k f Δ (ρm (t))gk (ρm (t), t)
k=0
m−1
k
m
= ∑ (−1)k f Δ (ρm (t))gk (ρm (t), t) + (−1)m f Δ (ρm (t))gm (ρm (t), t) k=0
m−1
k
= ∑ (−1)k f Δ (ρm−1 (t))gk (ρm−1 (t), t) k=0
m
+ (−1)m f Δ (ρm−1 (t))gm (ρm−1 (t), t)
now we apply Lemma 8.2 (gm (ρm−1 (t), t) = 0)
8.1 The Taylor formula for a function of one independent time scale variable m−1
k
= ∑ (−1)k f Δ (ρm−1 (t))gk (ρm−1 (t), t) k=0
now we apply the induction assumption = f (t). 2. case. ρm−1 (t) is left-scattered. Then ρm (t) = ρ(ρm−1 (t)) < ρm−1 (t) and there is no s ∈ 𝕋 such that ρm (t) < s < ρm−1 (t). Also, σ(ρm (t)) = ρm−1 (t). Hence, gk (σ(ρm (t)), t) = gk (ρm−1 (t), t). Therefore, gk (ρm−1 (t), t) = gk (σ(ρm (t)), t)
= gk (ρm (t), t) + μ(ρm (t))gkΔ (ρm (t), t)
= gk (ρm (t), t) + μ(ρm (t))gk−1 (σ(ρm (t)), t) = gk (ρm (t), t) + μ(ρm (t))gk−1 (ρm−1 (t), t), whereupon gk (ρm (t), t) = gk (ρm−1 (t), t) − μ(ρm (t))gk−1 (ρm−1 (t), t). Consequently, m
k
∑ (−1)k f Δ (ρm (t))gk (ρm (t), t)
k=0
m
k
k=1 m
k
= f (ρm (t)) + ∑ (−1)k f Δ (ρm (t))gk (ρm (t), t) = f (ρm (t)) + ∑ (−1)k f Δ (ρm (t))gk (ρm−1 (t), t) k=1
�
383
384 � 8 Jets and jet spaces m
k
+ ∑ (−1)k−1 f Δ (ρm (t))μ(ρm (t))gk−1 (ρm−1 (t), t) k=1
m−1
k
= f (ρm (t)) + ∑ (−1)k f Δ (ρm (t))gk (ρm−1 (t), t) m Δm
+ (−1) f
k=1
(ρm (t))gm (ρm−1 (t), t)
m−1
k−1
+ ∑ (−1)k f Δ (ρm (t))μ(ρm (t))gk (ρm−1 (t), t) k=0
m−1
k
= ∑ (−1)k f Δ (ρm (t))gk (ρm−1 (t), t) k=0
m−1
k+1
+ ∑ (−1)k μ(ρm (t))f Δ (ρm (t))gk (ρm−1 (t), t) k=0
m−1
k
Δ
k
= ∑ (−1)k (f Δ (ρm (t)) + μ(ρm (t))(f Δ ) (ρm (t)))gk (ρm−1 (t), t) k=0
m−1
k
= ∑ (−1)k f Δ (σ(ρm (t)))gk (ρm−1 (t), t) k=0
m−1
k
= ∑ (−1)k f Δ (ρm−1 (t))gk (ρm−1 (t), t) k=0
= f (t). This completes the proof. n
Theorem 8.2 (Taylor’s formula). Let n ∈ ℕ. Suppose f is n-times differentiable on 𝕋κ . Let n−1 α ∈ 𝕋κ , t ∈ 𝕋. Then n−1
k
f (t) = ∑ (−1)k gk (α, t)f Δ (α) k=0
ρn−1 (t)
n
+ ∫ (−1)n−1 gn−1 (σ(τ), t)f Δ (τ)Δτ. α
Proof. We note that applying Lemma 8.1 for pk = gk , we have n−1
Δk
k
Δ
( ∑ (−1) gk (τ, t)f (τ)) k=0
n
τ
= (−1)n−1 f Δ (τ)gn−1 (σ(τ), t) + f (τ)g0Δ (τ, t) n
= (−1)n−1 f Δ (τ)gn−1 (σ(τ), t) for
all
The last relation we integrate from α to ρn−1 (t) and we get
n
τ ∈ 𝕋κ .
8.1 The Taylor formula for a function of one independent time scale variable
ρn−1 (t)
n−1
�
385
Δ
k
∫ ( ∑ (−1)k gk (τ, t)f Δ (τ)) Δτ k=0
α
ρ
n−1
τ
(t)
n
= ∫ (−1)n−1 f Δ (τ)gn−1 (σ(τ), t)Δτ α
or n−1
n−1
k
k
∑ (−1)k gk (ρn−1 (t), t)f Δ (ρn−1 (t)) − ∑ (−1)k gk (α, t)f Δ (α)
k=0
k=0
ρn−1 (t)
n
= ∫ (−1)n−1 f Δ (τ)gn−1 (σ(τ), t)Δτ. α
Hence, applying Lemma 8.3, n−1
ρn−1 (t)
Δk
k
n
f (t) − ∑ (−1) gk (α, t)f (α) = ∫ (−1)n−1 f Δ (τ)gn−1 (σ(τ), t)Δτ. k=0
α
This completes the proof. Theorem 8.3. The functions gn and hn satisfy the relationship hn (t, s) = (−1)n gn (s, t) n
for all t ∈ 𝕋 and all s ∈ 𝕋κ . n
Proof. Let t ∈ 𝕋 and s ∈ 𝕋κ be arbitrarily chosen. We apply Theorem 8.2 for α = s and f (τ) = hn (τ, s). We observe that k
f Δ (τ) = hn−k (τ, s),
0 ≤ k ≤ n.
Hence, k
0 ≤ k ≤ n − 1,
n
n+1
f Δ (s) = hn−k (s, s) = 0, f Δ (s) = h0 (s, s) = 1,
f Δ (τ) = 0.
From here, using Taylor’s formula, we get f (t) = hn (t, s) n
k
ρn (t)
n+1
= ∑ (−1)k gk (α, t)f Δ (α) + ∫ (−1)n gn (σ(τ), t)f Δ (τ)Δτ k=0
α
386 � 8 Jets and jet spaces
n
ρn (t)
Δk
k
n+1
= ∑ (−1) gk (s, t)f (s) + ∫ (−1)n gn (σ(τ), t)f Δ (τ)Δτ k=0
s
n−1
k
n
= ∑ (−1)k gk (s, t)f Δ (s) + (−1)n gn (s, t)f Δ (s) k=0
n
= (−1)n gn (s, t)f Δ (s) = (−1)n gn (s, t), i. e., hn (t, s) = (−1)n gn (s, t). This completes the proof. From Theorem 8.2 and Theorem 8.3, it follows the theorem, known as the Taylor formula of order n around α. n
Theorem 8.4 (Taylor’s formula). Let n ∈ ℕ. Suppose f is n-times differentiable on 𝕋κ . Let n−1 also α ∈ 𝕋κ , t ∈ 𝕋. Then n−1
ρn−1 (t)
k
n
f (t) = ∑ hk (t, α)f Δ (α) + ∫ hn−1 (t, σ(τ))f Δ (τ)Δτ. k=0
α
Now we will formulate and prove another variant of Taylor’s formula. Theorem 8.5 (Taylor’s formula). Let n ∈ ℕ. Suppose that the function f is n + 1-times difn+1 n+1 ferentiable on 𝕋κ . Let α ∈ 𝕋κ , t ∈ 𝕋, and t > α. Then n
k
t
n+1
f (t) = ∑ hk (t, α)f Δ (α) + ∫ hn (t, σ(τ))f Δ (τ)Δτ. k=0
α
Proof. Let
$$
g(t) = f^{\Delta^{n+1}}(t).
$$
Then $f$ solves the problem
$$
x^{\Delta^{n+1}} = g(t), \qquad x^{\Delta^k}(\alpha) = f^{\Delta^k}(\alpha), \quad k \in \{0, \dots, n\}.
$$
Note that $y(t, s) = h_n(t, \sigma(s))$ is the Cauchy function for $y^{\Delta^{n+1}} = 0$. Hence, it follows that
$$
f(t) = u(t) + \int_{\alpha}^{t} y(t, \sigma(\tau))\, g(\tau)\,\Delta\tau
     = u(t) + \int_{\alpha}^{t} h_n(t, \sigma(s))\, g(s)\,\Delta s, \tag{8.3}
$$
where $u$ solves the initial value problem
$$
u^{\Delta^{n+1}} = 0, \qquad u^{\Delta^m}(\alpha) = f^{\Delta^m}(\alpha), \quad m \in \{0, \dots, n\}.
$$
We set
$$
w(t) = \sum_{k=0}^{n} h_k(t, \alpha)\, f^{\Delta^k}(\alpha). \tag{8.4}
$$
We have
$$
w^{\Delta^m}(t) = \sum_{k=m}^{n} h_{k-m}(t, \alpha)\, f^{\Delta^k}(\alpha), \quad m \in \{0, \dots, n\},
$$
and hence
$$
w^{\Delta^m}(\alpha) = \sum_{k=m}^{n} h_{k-m}(\alpha, \alpha)\, f^{\Delta^k}(\alpha) = f^{\Delta^m}(\alpha), \quad m \in \{0, \dots, n\},
$$
i.e., $w$ solves the same initial value problem as $u$. Consequently, $w = u$. From this and (8.4), we obtain (8.2). This completes the proof.

Example 8.3. Let $\mathbb{T} = 2^{\mathbb{N}_0}$ and $f(t) = t^4 + t$, $t \in \mathbb{T}$.
We will apply the Taylor formula of order 3 for $f$ around $\alpha = 1$. Here $\sigma(t) = 2t$, and
$$
\begin{aligned}
h_0(t, 1) &= 1,\\
h_1(t, 1) &= \int_1^t \Delta s = t - 1, \quad t \in \mathbb{T},\\
h_2(t, 1) &= \int_1^t h_1(\tau, 1)\,\Delta\tau = \int_1^t (\tau - 1)\,\Delta\tau
 = \int_1^t \tau\,\Delta\tau - \int_1^t \Delta\tau
 = \frac{\tau^2}{3}\Big|_{\tau=1}^{\tau=t} - t + 1
 = \frac{t^2}{3} - \frac{1}{3} - t + 1
 = \frac{t^2}{3} - t + \frac{2}{3},\\
h_2(t, \sigma(\tau)) &= h_2(t, 2\tau) = \int_{2\tau}^t h_1(s, 2\tau)\,\Delta s
 = \int_{2\tau}^t (s - 2\tau)\,\Delta s
 = \int_{2\tau}^t s\,\Delta s - 2\tau \int_{2\tau}^t \Delta s\\
 &= \frac{s^2}{3}\Big|_{s=2\tau}^{s=t} - 2\tau(t - 2\tau)
 = \frac{t^2}{3} - \frac{4}{3}\tau^2 - 2\tau t + 4\tau^2
 = \frac{t^2}{3} - 2\tau t + \frac{8}{3}\tau^2, \quad t, \tau \in \mathbb{T}, \ \tau < t.
\end{aligned}
$$
Next,
$$
\begin{aligned}
f(1) &= 2,\\
f^{\Delta}(t) &= (2^3 + 2^2 + 2 + 1)t^3 + 1 = 15t^3 + 1, \qquad f^{\Delta}(1) = 16,\\
f^{\Delta^2}(t) &= 15(2^2 + 2 + 1)t^2 = 105t^2, \qquad f^{\Delta^2}(1) = 105,\\
f^{\Delta^3}(t) &= 105(2 + 1)t = 315t, \quad t \in \mathbb{T}.
\end{aligned}
$$
Now, applying the Taylor formula of order 3 around $t = 1$, we find
$$
\begin{aligned}
& f(1) + h_1(t, 1) f^{\Delta}(1) + h_2(t, 1) f^{\Delta^2}(1) + \int_1^t h_2(t, \sigma(\tau))\, f^{\Delta^3}(\tau)\,\Delta\tau\\
&\quad = 2 + 16(t - 1) + 105\Bigl(\frac{t^2}{3} - t + \frac{2}{3}\Bigr)
 + 315 \int_1^t \Bigl(\frac{t^2}{3} - 2\tau t + \frac{8}{3}\tau^2\Bigr)\tau\,\Delta\tau\\
&\quad = 2 + 16t - 16 + 35t^2 - 105t + 70
 + 105t^2 \int_1^t \tau\,\Delta\tau - 630t \int_1^t \tau^2\,\Delta\tau + 840 \int_1^t \tau^3\,\Delta\tau\\
&\quad = 35t^2 - 89t + 56
 + 105t^2\, \frac{\tau^2}{3}\Big|_{\tau=1}^{\tau=t}
 - 630t\, \frac{\tau^3}{2^2 + 2 + 1}\Big|_{\tau=1}^{\tau=t}
 + 840\, \frac{\tau^4}{2^3 + 2^2 + 2 + 1}\Big|_{\tau=1}^{\tau=t}\\
&\quad = 35t^2 - 89t + 56 + 35t^4 - 35t^2 - 90t^4 + 90t + 56t^4 - 56\\
&\quad = t^4 + t = f(t), \quad t \in \mathbb{T}.
\end{aligned}
$$
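The computation of Example 8.3 can be replayed numerically. The sketch below is ours (an illustration under the same setup as the previous sketch, whose `delta_int` and `h` helpers it reuses); it adds a delta-derivative helper and checks the order-3 Taylor formula for $f(t) = t^4 + t$ around $\alpha = 1$ at a few points of $\mathbb{T} = 2^{\mathbb{N}_0}$.

```python
# Reuses T, sigma, mu, delta_int and h from the previous sketch; delta_diff is new.

def delta_diff(fun, order=1):
    """order-th delta derivative on this isolated time scale:
    f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t)."""
    if order == 0:
        return fun
    prev = delta_diff(fun, order - 1)
    return lambda t: (prev(sigma(t)) - prev(t)) / mu(t)

f = lambda t: t ** 4 + t
alpha, n = 1, 2                                   # h_0, h_1, h_2 plus a remainder term

for t in (4, 16, 64):
    poly = sum(h(k, t, alpha) * delta_diff(f, k)(alpha) for k in range(n + 1))
    rem = delta_int(lambda tau: h(n, t, sigma(tau)) * delta_diff(f, n + 1)(tau), alpha, t)
    print(t, f(t), poly + rem)                    # the last two columns coincide
```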
Exercise 8.1. Let $\mathbb{T} = 3^{\mathbb{N}_0}$ and
$$
f(t) = \frac{1 + t + t^3}{t^5 + 1}, \quad t \in \mathbb{T}.
$$
Write the Taylor formula of order 3 for the function $f$ around $t = 1$.

Theorem 8.6. For any $k \in \mathbb{N}_0$, we have
$$
0 \le h_k(t, s) \le \frac{(t - s)^k}{k!}, \quad t \ge s. \tag{8.5}
$$
Proof. Let
$$
g(t) = (t - s)^{k+1}, \quad t, s \in \mathbb{T}, \ k \in \mathbb{N}.
$$
Then
$$
\begin{aligned}
g^{\Delta}(t) &= \lim_{y \to t} \frac{g(\sigma(t)) - g(y)}{\sigma(t) - y}
 = \lim_{y \to t} \frac{(\sigma(t) - s)^{k+1} - (y - s)^{k+1}}{\sigma(t) - y}\\
 &= \lim_{y \to t} \frac{(\sigma(t) - y) \sum_{\nu=0}^{k} (\sigma(t) - s)^{\nu} (y - s)^{k-\nu}}{\sigma(t) - y}
 = \lim_{y \to t} \sum_{\nu=0}^{k} (\sigma(t) - s)^{\nu} (y - s)^{k-\nu}\\
 &= \sum_{\nu=0}^{k} (\sigma(t) - s)^{\nu} (t - s)^{k-\nu}, \quad t, s \in \mathbb{T}, \ k \in \mathbb{N}.
\end{aligned}
$$
Note that the inequalities (8.5) are true for $k = 0$. Assume that the inequalities (8.5) are true for some $k \in \mathbb{N}$. We will prove the inequalities (8.5) for $k + 1$. We have
$$
\begin{aligned}
0 \le h_{k+1}(t, s) &= \int_s^t h_k(\tau, s)\,\Delta\tau
 \le \frac{1}{k!} \int_s^t (\tau - s)^k\,\Delta\tau
 = \frac{1}{(k+1)!} \int_s^t \sum_{\nu=0}^{k} (\tau - s)^k\,\Delta\tau\\
 &= \frac{1}{(k+1)!} \int_s^t \sum_{\nu=0}^{k} (\tau - s)^{\nu} (\tau - s)^{k-\nu}\,\Delta\tau
 \le \frac{1}{(k+1)!} \int_s^t \sum_{\nu=0}^{k} (\sigma(\tau) - s)^{\nu} (\tau - s)^{k-\nu}\,\Delta\tau\\
 &= \frac{1}{(k+1)!} \int_s^t g^{\Delta}(\tau)\,\Delta\tau
 = \frac{1}{(k+1)!}\, g(\tau)\Big|_{\tau=s}^{\tau=t}
 = \frac{1}{(k+1)!}\, (\tau - s)^{k+1}\Big|_{\tau=s}^{\tau=t}
 = \frac{(t - s)^{k+1}}{(k+1)!}, \quad t, s \in \mathbb{T}, \ t \ge s.
\end{aligned}
$$
By the principle of mathematical induction, it follows that (8.5) is true for any $k \in \mathbb{N}$. This completes the proof.
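The bound of Theorem 8.6 can be spot-checked numerically. The short sketch below is an illustration only, reusing the `h` helper from the earlier sketch on $\mathbb{T} = 2^{\mathbb{N}_0}$ and `math.factorial` for the classical bound.

```python
# A spot check of the bound (8.5) of Theorem 8.6 on T = 2^{N_0}, with h as above.
import math

for k in range(5):
    for s in (1, 2, 4):
        for t in (s, 2 * s, 8 * s, 32 * s):
            assert 0 <= h(k, t, s) <= (t - s) ** k / math.factorial(k) + 1e-9
print("0 <= h_k(t, s) <= (t - s)^k / k! at the sampled points")
```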
Let
$$
R_n(t, \alpha) = \int_{\alpha}^{\rho^{n-1}(t)} h_{n-1}(t, \sigma(\tau))\, f^{\Delta^n}(\tau)\,\Delta\tau.
$$
Theorem 8.7. Let $t \in \mathbb{T}$, $t \ge \alpha$ and
$$
M_n(t) = \sup\bigl\{\bigl|f^{\Delta^n}(\tau)\bigr| : \tau \in [\alpha, t]\bigr\}.
$$
Then
$$
\bigl|R_n(t, \alpha)\bigr| \le M_n(t)\, \frac{(t - \alpha)^n}{(n-1)!}.
$$
Proof. Let $\tau \in [\alpha, t)$. Then $\alpha \le \sigma(\tau) \le t$ and, applying (8.5), we get
$$
0 \le h_{n-1}(t, \sigma(\tau)) \le \frac{(t - \sigma(\tau))^{n-1}}{(n-1)!} \le \frac{(t - \tau)^{n-1}}{(n-1)!} \le \frac{(t - \alpha)^{n-1}}{(n-1)!}.
$$
Hence,
$$
\bigl|R_n(t, \alpha)\bigr|
 = \Bigl| \int_{\alpha}^{\rho^{n-1}(t)} h_{n-1}(t, \sigma(\tau))\, f^{\Delta^n}(\tau)\,\Delta\tau \Bigr|
 \le \int_{\alpha}^{t} h_{n-1}(t, \sigma(\tau))\, \bigl|f^{\Delta^n}(\tau)\bigr|\,\Delta\tau
 \le M_n(t) \int_{\alpha}^{t} \frac{(t - \alpha)^{n-1}}{(n-1)!}\,\Delta\tau
 = M_n(t)\, \frac{(t - \alpha)^n}{(n-1)!}.
$$
This completes the proof.
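To see how conservative the estimate of Theorem 8.7 is in practice, the following sketch (ours, under the assumptions of the earlier sketches, whose `h`, `delta_int` and `delta_diff` helpers it reuses) compares the actual remainder with the bound for $f(t) = t^4 + t$ on $\mathbb{T} = 2^{\mathbb{N}_0}$ with $n = 3$, $\alpha = 1$.

```python
# Compare the remainder R_n(t, alpha) with the bound M_n(t) (t - alpha)^n / (n-1)!.
import math

f = lambda t: t ** 4 + t
alpha, n = 1, 3

def remainder(t):
    # R_n(t, alpha) = int_alpha^{rho^{n-1}(t)} h_{n-1}(t, sigma(tau)) f^{Delta^n}(tau) Delta tau;
    # on this time scale the extra terms between rho^{n-1}(t) and t vanish, so we
    # may integrate up to t.
    return delta_int(lambda tau: h(n - 1, t, sigma(tau)) * delta_diff(f, n)(tau), alpha, t)

for t in (4, 16, 64):
    Mn = max(abs(delta_diff(f, n)(tau)) for tau in T if alpha <= tau <= t)
    bound = Mn * (t - alpha) ** n / math.factorial(n - 1)
    print(t, remainder(t), bound)        # |R_n| stays below the bound
```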
8.2 The Taylor formula for a function of n independent real variables and one independent time scale variable

Suppose that $S$ is an open and convex set in $\mathbb{R}^n$, $I \subseteq \mathbb{T}$, $t_0 \in I$, $f : S \times I \to \mathbb{R}$, $f \in C^{k+1}(S \times I)$, $(x_1^0, \dots, x_n^0), (x_1, \dots, x_n) \in S$, $t \in \mathbb{T}$, $t \ge t_0$. Then, applying the classical Taylor formula for a function of $n$ independent real variables and then the Taylor formula for a function of one independent time scale variable, we obtain
$$
\begin{aligned}
& f(x_1, \dots, x_n, t) - f(x_1^0, \dots, x_n^0, t_0)\\
&\quad = f(x_1, \dots, x_n, t) - f(x_1^0, \dots, x_n^0, t) + f(x_1^0, \dots, x_n^0, t) - f(x_1^0, \dots, x_n^0, t_0)\\
&\quad = \sum_{|\alpha| \le k} \frac{(x_1 - x_1^0)^{\alpha_1} \cdots (x_n - x_n^0)^{\alpha_n}}{\alpha_1! \cdots \alpha_n!}\,
 \frac{\partial^{\alpha_1} \cdots \partial^{\alpha_n}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}} f(x_1^0, \dots, x_n^0, t)
 + R_{n,k}(x_1 - x_1^0, \dots, x_n - x_n^0, t)\\
&\qquad + \sum_{l=0}^{k} h_l(t, t_0)\, \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0) + R_k^1(x_1^0, \dots, x_n^0, t_0)\\
&\quad = \sum_{|\alpha| \le k} \frac{(x_1 - x_1^0)^{\alpha_1} \cdots (x_n - x_n^0)^{\alpha_n}}{\alpha_1! \cdots \alpha_n!}\,
 \frac{\partial^{\alpha_1} \cdots \partial^{\alpha_n}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}
 \sum_{l=0}^{k} h_l(t, t_0)\, \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0)\\
&\qquad + R_{n,k}(x_1 - x_1^0, \dots, x_n - x_n^0, t) + R_k^1(x_1^0, \dots, x_n^0, t, t_0)
 + \sum_{l=0}^{k} h_l(t, t_0)\, \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0)\\
&\quad = \sum_{|\alpha| \le k} \sum_{l=0}^{k} h_l(t, t_0)\,
 \frac{(x_1 - x_1^0)^{\alpha_1} \cdots (x_n - x_n^0)^{\alpha_n}}{\alpha_1! \cdots \alpha_n!}\,
 \frac{\partial^{\alpha_1} \cdots \partial^{\alpha_n}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}
 \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0)\\
&\qquad + \sum_{l=0}^{k} h_l(t, t_0)\, \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0)
 + R_{n,k}(x_1 - x_1^0, \dots, x_n - x_n^0, t) + R_k^1(x_1^0, \dots, x_n^0, t, t_0),
\end{aligned}
$$
where $R_{n,k}(\cdot, \dots, \cdot, \cdot)$ is the remainder in the Taylor formula for a function of $n$ independent real variables, $R_k^1(\cdot, \dots, \cdot, \cdot)$ is the remainder in the Taylor formula for a function of one independent time scale variable, $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}_0^n$, $|\alpha| = \alpha_1 + \cdots + \alpha_n$. Therefore,
$$
\begin{aligned}
f(x_1, \dots, x_n, t) &= f(x_1^0, \dots, x_n^0, t_0)\\
&\quad + \sum_{|\alpha| \le k} \sum_{l=0}^{k} h_l(t, t_0)\,
 \frac{(x_1 - x_1^0)^{\alpha_1} \cdots (x_n - x_n^0)^{\alpha_n}}{\alpha_1! \cdots \alpha_n!}\,
 \frac{\partial^{\alpha_1} \cdots \partial^{\alpha_n}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}
 \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0)\\
&\quad + \sum_{l=0}^{k} h_l(t, t_0)\, \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0)\\
&\quad + R_{n,k}(x_1 - x_1^0, \dots, x_n - x_n^0, t) + R_k^1(x_1^0, \dots, x_n^0, t, t_0). \tag{8.6}
\end{aligned}
$$
Definition 8.1. The formula (8.6) is said to be the Taylor formula of order $(k, k)$ for a function of $n$ independent real variables and one independent time scale variable around $(x_1^0, \dots, x_n^0, t_0)$.

Let now $f \in C^k(S, C^m(I))$. As above, one can deduce the following formula:
$$
\begin{aligned}
f(x_1, \dots, x_n, t) &= f(x_1^0, \dots, x_n^0, t_0)\\
&\quad + \sum_{|\alpha| \le k} \sum_{l=0}^{m} h_l(t, t_0)\,
 \frac{(x_1 - x_1^0)^{\alpha_1} \cdots (x_n - x_n^0)^{\alpha_n}}{\alpha_1! \cdots \alpha_n!}\,
 \frac{\partial^{\alpha_1} \cdots \partial^{\alpha_n}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}
 \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0)\\
&\quad + \sum_{l=0}^{m} h_l(t, t_0)\, \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0)\\
&\quad + R_{n,k}(x_1 - x_1^0, \dots, x_n - x_n^0, t) + R_m^1(x_1^0, \dots, x_n^0, t, t_0). \tag{8.7}
\end{aligned}
$$
Definition 8.2. The formula (8.7) is said to be the Taylor formula of order $(k, m)$ for a function of $n$ independent real variables and one independent time scale variable around $(x_1^0, \dots, x_n^0, t_0)$.
8.3 Jets of a function of one independent time scale variable

Let $t_0 \in \mathbb{T}$.

Definition 8.3. Let $f$ be $(k+1)$-times delta differentiable in a neighborhood $U$ of $t_0$. Then the $k$-jet of $f$ at $t_0$ is defined to be the function
$$
(J_{t_0}^k f)(z) = f(t_0) + h_1(z, t_0)\, f^{\Delta}(t_0) + \cdots + h_k(z, t_0)\, f^{\Delta^k}(t_0).
$$
Example 8.4. Let $\mathbb{T} = 2^{\mathbb{N}_0}$ and let $f$ be as in Example 8.3. We will find $(J_1^2 f)(z)$. By the computations in Example 8.3, we have
$$
h_1(z, 1) = z - 1, \qquad h_2(z, 1) = \frac{z^2}{3} - z + \frac{2}{3}.
$$
Then
$$
(J_1^2 f)(z) = f(1) + h_1(z, 1)\, f^{\Delta}(1) + h_2(z, 1)\, f^{\Delta^2}(1)
 = 2 + 16(z - 1) + 105\Bigl(\frac{z^2}{3} - z + \frac{2}{3}\Bigr)
 = 2 + 16z - 16 + 35z^2 - 105z + 70
 = 35z^2 - 89z + 56.
$$
Exercise 8.2. Let $\mathbb{T} = 3^{\mathbb{N}_0}$ and
$$
f(t) = \frac{1 + t}{1 + 2t^2}, \quad t \in \mathbb{T}.
$$
Find $(J_1^3 f)(z)$.

Theorem 8.8. Let $f$ and $g$ be $(k+1)$-times delta differentiable in a neighborhood $U$ of $t_0$. Then
$$
(J_{t_0}^k (af + bg))(z) = a (J_{t_0}^k f)(z) + b (J_{t_0}^k g)(z)
$$
for any $a, b \in \mathbb{R}$.

Proof. We have
$$
\begin{aligned}
(J_{t_0}^k (af + bg))(z)
&= (af + bg)(t_0) + h_1(z, t_0)(af + bg)^{\Delta}(t_0) + \cdots + h_k(z, t_0)(af + bg)^{\Delta^k}(t_0)\\
&= a f(t_0) + a h_1(z, t_0) f^{\Delta}(t_0) + \cdots + a h_k(z, t_0) f^{\Delta^k}(t_0)\\
&\quad + b g(t_0) + b h_1(z, t_0) g^{\Delta}(t_0) + \cdots + b h_k(z, t_0) g^{\Delta^k}(t_0)\\
&= a (J_{t_0}^k f)(z) + b (J_{t_0}^k g)(z).
\end{aligned}
$$
This completes the proof.
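Jets per Definition 8.3 are straightforward to evaluate numerically. The sketch below is ours (it reuses the `h` and `delta_diff` helpers from the earlier sketches on $\mathbb{T} = 2^{\mathbb{N}_0}$; `jet`, `f` and `q` are illustrative names) and includes a spot check of the linearity property of Theorem 8.8.

```python
# A sketch computing k-jets per Definition 8.3 on T = 2^{N_0}.

def jet(fun, k, t0):
    """Return the k-jet of fun at t0 as a function of z."""
    coeffs = [delta_diff(fun, i)(t0) for i in range(k + 1)]
    return lambda z: sum(h(i, z, t0) * c for i, c in enumerate(coeffs))

f = lambda t: t ** 4 + t                          # the function of Examples 8.3 and 8.4
q = lambda t: t ** 2 + 3                          # a second test function

print(jet(f, 2, 1)(4))                            # 35*4**2 - 89*4 + 56 = 260

a, b = 2.0, -3.0
lhs = jet(lambda t: a * f(t) + b * q(t), 2, 1)(8)
rhs = a * jet(f, 2, 1)(8) + b * jet(q, 2, 1)(8)
assert abs(lhs - rhs) < 1e-9                      # Theorem 8.8: J^k(af + bq) = a J^k f + b J^k q
```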
Remark 8.1. Let $f$ and $g$ be $(k+1)$-times delta differentiable in a neighborhood $U$ of $t_0$. If $f^{\Lambda}$ exists for any $\Lambda \in S_l^{(k)}$, where $S_l^{(k)}$ is the set of all possible strings of length $k$ consisting of $\sigma$ exactly $l$ times and $\Delta$ exactly $k - l$ times, then
$$
(fg)^{\Delta^k} = \sum_{j=0}^{k} \Bigl( \sum_{\Lambda \in S_j^{(k)}} f^{\Lambda} \Bigr)\, g^{\Delta^j}.
$$
Therefore, in the general case it is impossible to deduce a representation of the $k$-jet $(J_{t_0}^k (fg))(z)$ in terms of jets of $f$ and jets of $g$. The same holds, in the general case, for a representation of a jet of the composition of $f$ and $g$ via jets of $f$ and jets of $g$.
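To make Remark 8.1 concrete, the short sketch below (ours, using `jet`, `f` and `q` from the previous sketch) compares the 2-jet of a product with one naive candidate, the product of the 2-jets of the factors; at a point far enough from $t_0$ the two values differ.

```python
# One naive candidate, multiplying the 2-jets of the factors, does not reproduce
# the 2-jet of the product on T = 2^{N_0}.

z = 16
print(jet(lambda t: f(t) * q(t), 2, 1)(z))        # 2-jet of the product at z
print(jet(f, 2, 1)(z) * jet(q, 2, 1)(z))          # product of the 2-jets; a different value
```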
8.4 Jets of a function of n independent real variables and one independent time scale variable

Definition 8.4. Let $S$ be an open and connected set in $\mathbb{R}^n$ that contains $x^0 = (x_1^0, \dots, x_n^0) \in \mathbb{R}^n$, let $t_0 \in \mathbb{T}$ and let $U$ be a neighborhood of $t_0$. Suppose that $f : S \times U \to \mathbb{R}$, $f \in C^{k+1}(S, C^{m+1}(U))$. Then the $(k, m)$-jet of $f$ at $(x^0, t_0)$ is defined to be
$$
\begin{aligned}
(J^{k,m}_{(x^0, t_0)} f)(z_1, \dots, z_n, z_{n+1})
&= f(x_1^0, \dots, x_n^0, t_0)\\
&\quad + \sum_{|\alpha| \le k} \sum_{l=0}^{m} h_l(z_{n+1}, t_0)\,
 \frac{(z_1 - x_1^0)^{\alpha_1} \cdots (z_n - x_n^0)^{\alpha_n}}{\alpha_1! \cdots \alpha_n!}\,
 \frac{\partial^{\alpha_1} \cdots \partial^{\alpha_n}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}
 \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0)\\
&\quad + \sum_{l=0}^{m} h_l(z_{n+1}, t_0)\, \frac{\partial^l}{\Delta t^l} f(x_1^0, \dots, x_n^0, t_0).
\end{aligned}
$$
When $k = m$, we will write $(J^k_{(x^0, t_0)} f)(z_1, \dots, z_n, z_{n+1})$.
Let f (x1 , . . . , xn , t) = f1 (x1 , . . . , xn )f2 (t), where f1 ∈ C k (S) and f2 ∈ C m (U). Then k,m (J(x 0 ,t ) f )(z1 , . . . , zn , zn+1 ) 0
= f (x10 , . . . , xn0 , t0 )
(z1 − x10 )α1 . . . (zn − xn0 )αn 𝜕α1 . . . 𝜕αn 𝜕l hl (zn+1 , t0 ) α1 f (x10 , . . . , xn0 , t0 ) αn l α ! . . . α ! Δt 𝜕x . . . 𝜕x 1 n n |α|≤k l=0 1 m
+ ∑ ∑ m
+ ∑ hl (zn+1 , t0 )
=
𝜕l f (x10 , . . . , xn0 , t0 ) Δt l
l=0 f1 (x10 , . . . , xn0 )f2 (t0 ) m (z1 − x10 )α1
+ ∑ ∑
|α|≤k l=0
l . . . (zn − xn0 )αn 𝜕α1 . . . 𝜕αn 0 0 𝜕 hl (zn+1 , t0 ) α1 αn f1 (x1 , . . . , xn ) l f2 (t0 ) α1 ! . . . αn ! Δt 𝜕x1 . . . 𝜕xn
m
+ f1 (x10 , . . . , xn0 ) ∑ hl (zn+1 , t0 )
=
l=0 0 0 f1 (x1 , . . . , xn )f2 (t0 ) (z1 − x10 )α1
+( ∑
|α|≤k m
. . . (zn − xn0 )αn 𝜕α1 . . . 𝜕αn 0 0 α f1 (x1 , . . . , lxn )) α α1 ! . . . αn ! 𝜕x1 1 . . . 𝜕xn n
× (∑ hl (zn+1 , t0 ) l=0
𝜕l f2 (t0 ) Δt l
𝜕l f2 (t0 )) Δt l
396 � 8 Jets and jet spaces + f1 (x10 , . . . , xn0 )(Jtm0 f ))(zn+1 )
= f (x10 , . . . , xn0 , t0 ) + (Jxk0 f1 )(z1 , . . . , zn )(Jtm0 f2 )(zn+1 ) + f1 (x10 , . . . , xn0 )(Jtm0 f ))(zn+1 ).
Example 8.5. Let 𝕋 = 2ℕ0 and f1 (x) = x 4 + x 2 + x,
x ∈ ℝ,
and g(x, t) = f1 (x)f (t),
x ∈ ℝ,
t ∈ 𝕋,
where f is the function in Example 8.4. We will find 3,2 (J(1,1) f )(z1 , z2 ).
We have g(x, t) = (x 4 + x 2 + x)(t 4 + t), g(1, 1) = 3 ⋅ 2
(x, t) ∈ ℝ × 𝕋,
=6
and f1 (1) = 3,
f1′ (x) = 4x 3 + 2x + 1, f1′ (1) = 7,
f1′′ (x) = 12x 2 + 2, f1′′ (1) = 14,
f1′′′ (x) = 24x, f1′′′ (1) = 24.
Then 1 1 (J13 f1 )(z1 ) = f1 (1) + f1′ (1)z1 + z21 f1′′ (1) + z31 f1′′′ (1) 2 6 = 3 + 7z1 + 7z21 + 4z31 . Now, using the computations in Example 8.4, we find 3,2 (J(1,1) g)(z1 , z2 ) = g(1, 1) + (J13 f1 )(z1 )(J12 f )(z2 )
8.5 Jet spaces
� 397
+ f1 (1)(J12 f2 )(z2 )
= 6 + (3 + 7z₁ + 7z₁² + 4z₁³)(35z₂² − 89z₂ + 56) + 3(35z₂² − 89z₂ + 56)
= 6 + (6 + 7z₁ + 7z₁² + 4z₁³)(35z₂² − 89z₂ + 56). Exercise 8.3. Let 𝕋 = 3ℤ and f(x₁, x₂, t) = x₁²x₂²t³ + (x₁ + x₂)e₁(t, 1),
(x1 , x2 , t) ∈ ℝ2 × 𝕋.
Find 3,2 (J(1,1,1) f )(z1 , z2 , z3 )
8.5 Jet spaces Suppose that f : ℝp+1 × 𝕋 → ℝ, f = f (x1 , . . . , xp−1 , t) depends on p − 1 independent real variables and one independent time scale variable t. For convenience, we introduce the notation xp = t and will write 𝜕xp instead of Δt. The function f has pk = (
p+k−1 ) k
different kth partial derivatives 𝜕J f =
𝜕k f 𝜕xj1 . . . 𝜕xjk
indexed by unordered multiindices J = (j1 , . . . , jk ),
1 ≤ jk ≤ p,
of order k = ♯J. Thus, if we have q dependent variables (u1 , . . . , uq ), we will require qk = qpk different coordinates ujα = 𝜕j f α (x)
398 � 8 Jets and jet spaces of a function u = f (x). For the total space E = X × U ≃ ℝp−1 × 𝕋 × ℝq , the nth jet space J n = J n E = X × U (n) is the Euclidean space of dimension p + q(n) = p + q (
p+n ), n
whose coordinates consist of the p independent variables xi , the q dependent variables uα and the derivative coordinates ujα , α = 1, . . . , q, of order 1 ≤ ♯J ≤ n. Definition 8.5. The space U (n) will be called the vertical space or the fiber. The points of the vertical space U (n) are denoted by u(n) and consists of all the dependent variables and their derivatives up to order n. Thus, the coordinates of a typical point z ∈ J n are denoted by (x, u(n) ). Because the derivative coordinates u(n) form a subset of the derivative coordinates u(n+k) , there is a projector πnn+k : J nk → J n on the jet space with πnn+k (x, u(n+k) ) = (x, u(n) ). In particular, we have that π0n (x, u(n) ) = (x, u) is the projector from J n to E = J 0 . If M ⊂ E is an open subset, then
8.6 Total derivatives
�
399
J n M = (π0n ) M ⊂ J n E −1
is the open subset of the nth jet space, which projects back down to M.
8.6 Total derivatives Definition 8.6. A smooth real-valued function F : J n → ℝ, defined on an open subset of the nth jet space is called a differentiable function. Definition 8.7. Let F(x, u(n) ) be a differentiable function of order n. The total derivative of F with respect to xi is the (n + 1)st order differential function Di F that satisfies Di F(x, f (n+1) (x)) =
𝜕 F(x, f (n) (x)) 𝜕xi
for any smooth function u = f (x). Example 8.6. In the case of one independent time scale variable t and one dependent variable, the total derivative of a given function, n
F(t, u, uΔ (t), . . . , uΔ (t)), the total derivative is given by n
Dt F(t, u(t), uΔ (∗t), . . . , uΔ (t)) =
.. .
n 𝜕 F(c1 , u(c1 ), uΔ (c1 ), . . . , uΔ (c1 )) Δt n 𝜕 + uΔ F(σ(c2 ), u(c2 ), uΔ (c2 ), . . . , uΔ (c2 )) 𝜕u
n+1
+ uΔ (t)
n 𝜕 Δn−1 (σ(cn+1 )), uΔ (cn+1 )) n F(σ(cn+1 ), u(σ(cn+1 )), . . . , u Δ 𝜕u
for some cj ∈ [t, σ(t)], j ∈ {1, . . . , n + 1}.
8.7 Advanced practical problems Problem 8.1. Let 𝕋 = 2ℤ. Find 2
(h2 (t, 1)) − h1 (t, 1)g2 (t, 1),
t ∈ 𝕋,
t > 1.
t ∈ 𝕋,
t > 2.
Problem 8.2. Let 𝕋 = 3ℕ0 . Find 3
h2 (t, 1)h3 (t, 2) − (g3 (t, 1)) ,
400 � 8 Jets and jet spaces Problem 8.3. Let 𝕋 = 4ℕ0 and f (t) =
1+t + e1 (t, 1), t2 + 1
t ∈ 𝕋.
Write the Taylor formula of order 3 for the function f around t = 1. Problem 8.4. Let 𝕋 = ℤ and f (t) = 1 + t 4 ,
t ∈ 𝕋.
Find 2(J02 f )(z) − 4(J03 f )(z). Problem 8.5. Let 𝕋 = 3ℕ0 and 2
f (x1 , x2 , x3 , t) = ex1 sin2 (t, 1) + t 2 (x22 − x33 ),
(x1 , x2 , x3 , t) ∈ ℝ3 × 𝕋.
Find 4,3 (J(1,1,1,1) f )(z1 , z2 , z3 , z4 ).
8.8 Notes and references In this chapter, we define jets of a function of one independent time scale variable and jets of a function of n independent real variables and one independent time scale variable. We introduce jet spaces and we give some of their properties. In the chapter, differentiable functions and total derivatives are defined.
9 Nonlinear dynamic-algebraic equations In this chapter, we will investigate the following nonlinear dynamic-algebraic equation: Δ
f ((g(x(t), t)) , x(t), t) = 0,
(9.1)
where f : ℝn × Df × If → ℝk , Df × If ⊆ ℝm × 𝕋 is continuous and has continuous classical derivatives fy , fx and continuous delta derivative ftΔ , g : Df × If → ℝn is continuous and has a continuous classical derivative gx and a continuous delta derivative gtΔ . A solution x of the nonlinear dynamic-algebraic equation (9.1) is a function defined on an interval I∗ ⊆ If , so that x(t) ∈ Df for any t ∈ I∗ , the function g(x(⋅), ⋅) is continuously delta differentiable on I∗ and x satisfies the equation (9.1) pointwise on I∗ .
9.1 Properly involved derivatives Define 1
1
2
F(y (t), y (t), x(t), t) = ∫ 0 1
2
1
G(x (t), x (t), t) = ∫ 0
𝜕 f (hy1 (t) + (1 − h)y2 (t), x(t), t)dh, 𝜕y 𝜕 g(hx 1 (t) + (1 − h)x 2 (t), t)dh, 𝜕x
where 𝜕 f (hy1 (t) + (1 − h)y2 (t), x(t), t) 𝜕y k,n
=(
𝜕 f (y11 (t), . . . , y1j−1 (t), hy1j (t) + (1 − h)y2j (t), y2j+1 (t), . . . , y2n (t), x(t), t)) 𝜕yj i,j=1
and 𝜕 g(hx 1 (t) + (1 − h)x 2 (t), t) 𝜕x n,m 𝜕 1 2 2 = ( g(x11 (t), . . . , xj−1 (t), hxj1 (t) + (1 − h)xj2 (t), xj+1 (t), . . . , xm (t), σ(t))) , 𝜕xj i,j=1 and y1 (t), y2 (t) ∈ ℝn , x(t), x 1 (t), x 2 (t) ∈ Df , t ∈ If . Definition 9.1. The nonlinear dynamic equation (9.1) has a properly involved derivative, also called a properly stated leading term, if im G and ker F are C 1 -subspaces and the transversality condition https://doi.org/10.1515/9783111377155-009
402 � 9 Nonlinear dynamic-algebraic equations ker F(y1 (t), y2 (t), x(t), t) ⊕ im G(x 1 (t), x 2 (t), t) = ℝn , y1 (t), y2 (t) ∈ ℝn ,
x(t), x 1 (t), x 2 (t) ∈ Df ,
(9.2)
t ∈ If .
holds. Example 9.1. Let 𝕋 = 2ℕ0 . Consider the following nonlinear dynamic-algebraic equation: Δ
(x1 (t) − 2x2 (t)x3 (t)) + x2 (t) − q1 (t) = 0, x1 (t) + x2 (t) − q2 (t) = 0,
x2 (t) − 2x3 (t) − q3 (t) = 0,
t ∈ 𝕋.
Here, n = 1,
k = m = 3.
We have 1 x2 f (y1 , y2 , x, t) = (0) y + ( x1 + x2 ) − q(t), 0 x2 − 2x3
x ∈ ℝ3 ,
y1 , y2 ∈ ℝ,
x 1 , x 2 ∈ ℝ3 ,
t ∈ 𝕋.
t ∈ 𝕋,
and g(x 1 , x 2 , t) = x1 (t) − 2x2 (t)x3 (t), Then 1 1 F(y1 (t), y2 (t), x(t), t) = ∫ (0) dh 0 0
1 = (0) , 0
x ∈ ℝ3 ,
y1 , y2 ∈ ℝ,
t ∈ 𝕋.
Hence, ker F(y1 (t), y2 (t), x(t), t) = {0},
y1 (t), y2 (t)ℝ,
x(t) ∈ ℝ3 ,
t ∈ 𝕋.
x 1 (t), x 2 (t) ∈ ℝ3 ,
t ∈ 𝕋.
Next, 1
G(x 1 (t), x 2 (t), t) = ∫(1, −2x32 (t), −2x21 (t))dh 0
= (1, −2x32 (t), −2x21 (t)),
9.2 Constraints and consistent initial values
� 403
Then ker G(x 1 (t), x 2 (t), t) = {0},
x 1 (t), x 2 (t) ∈ ℝ3 ,
t ∈ 𝕋,
and im G(x 1 (t), x 2 (t), t) = ℝ,
x 1 (t), x 2 (t) ∈ ℝ3 ,
t ∈ 𝕋.
Consequently, ker F(y1 (t), y2 (t), x(t), t) ⊕ im G(x 1 (t), x 2 (t), t) = ℝ, y1 (t), y2 (t) ∈ 𝕋,
x(t), x 1 (t), x 2 (t) ∈ ℝ3 ,
t ∈ 𝕋.
Thus, the considered nonlinear dynamic-algebraic equation has a properly involved derivative. Exercise 9.1. Let 𝕋 = ℤ. Prove that the nonlinear dynamic-algebraic equation Δ
(x1 (t)x2 (t) + 4x3 (t)) + x1 (t)x2 (t) − q2 (t) = 0, x2 (t) − q1 (t) = 0,
x1 (t) + x3 (t) − q3 (t) = 0,
t ∈ 𝕋,
has a properly involved derivative. via
Note that the nonlinear dynamic-algebraic equation (9.1) covers the equation (7.1) f (y, x, t) = A(t)y − C(t)x − f (t), g(x, t) = B(t)x.
If the equation (9.1) has a properly involved derivative, then F and G have constant rank on their definition domain.
9.2 Constraints and consistent initial values Consider the equation (9.1) subject to the initial condition x(t0 ) = x0 ,
(9.3)
where t0 ∈ If and x0 ∈ Df . Definition 9.2. For a given nonlinear dynamic-algebraic equation (9.1) and a given t0 ∈ If , the value x0 ∈ Df is said to be consistent if the IVP (9.1), (9.3) possesses a solution.
404 � 9 Nonlinear dynamic-algebraic equations Definition 9.3. For a nonlinear dynamic-algebraic equation (9.1) with a properly involved derivative, the set n
Δ
σ
M0 (t) = {x(t) ∈ Df : ∃y(t) ∈ ℝ : y(t) − gt (x(σ(t)), t) ∈ im G(x (t), x(t), t),
f (y(t), x(t), t) = 0}
is called an obvious restriction set or obvious constraint of the equation (9.1) at t ∈ If . Example 9.2. Let 𝕋 = ℤ. Consider the nonlinear dynamic equation x1Δ (t) = 2x1 (t),
4
2
3(x1 (t)) + (x2 (t)) = 1 + q(t),
(9.4)
t ∈ 𝕋,
with two equations and two unknown functions on Df = {x ∈ ℝ2 : x2 > 0}, If = 𝕋, q ∈ C (If ), q > −1 on If . This equation can be rewritten in the form (9.1) using f (y, x, t) = (
y − 2x1 ), 3x14 + x22 − 1 − q(t)
g(x, t) = x1 ,
x ∈ Df ,
y ∈ ℝ,
t ∈ If .
We have 1 F(y1 (t), y2 (t), x(t), t) = ( ) , 0 G(x 1 (t), x 2 (t), t) = (1, 0),
y1 (t), y2 (t) ∈ ℝ,
x(t), x 1 (t), x 2 (t) ∈ Df ,
t ∈ If .
From here, ker F(y1 (t), y2 (t), x(t), t) = ker G(x 1 (t), x 2 (t), t) = {0},
y1 (t), y2 (t) ∈ ℝ,
x 1 (t), x 2 (t), x(t) ∈ Df ,
Therefore, im G(x 1 (t), x 2 (t), t) = ℝ,
x 1 (t), x 2 (t) ∈ Df ,
t ∈ If ,
and ker F(y1 (t), y2 (t), x(t), t) ⊕ im G(x 1 (t), x 2 (t), t) = ℝ, y1 (t), y2 (t) ∈ ℝ,
x(t), x 1 (t), x 2 (t) ∈ Df ,
t ∈ If .
t ∈ If .
9.2 Constraints and consistent initial values
� 405
Thus, the equation (9.4) has a properly involved derivative. By the second equation of (9.4), we conclude that the solutions of (9.4) must lie in 4
2
M0 (t) = {x ∈ Df : 3x1 + x2 − 1 − q(t) = 0}.
Let x1 (0) = x10 . Then 4 . x20 = ±√1 + q(0) − 3x10
By the first equation of (9.4), we find x1 (t) = e2 (t, 0)x10 2
= x10 e∫0 log(1+2)Δτ = x10 et log 3 = x10 3t ,
t ∈ 𝕋,
and then 4 34t , x2 (t) = √1 + q(t) − 3x10
t ∈ 𝕋.
It is clear that through each point of M0 (0) there passes exactly one solution. Exercise 9.2. Let 𝕋 = 2ℕ0 . Find the set M0 (t) for the following IVP: x1Δ (t) + tx1 (t) = 0, 2
4
(x1 (t)) + (x2 (t)) = 1 + t 2 ,
t ∈ 𝕋,
on Df = {x ∈ ℝ2 : x1 > 0}. Now, we consider the equation (9.1). Suppose that x ∈ C 1 (If ) and set u(t) = g(x(t), t),
t ∈ If .
By the generalized Pötzsche chain rule (see the Appendix of this book), we find Δ
1
u (t) = (∫ g(x(t) + hμ(t)x Δ (t), t)dh)x Δ (t) + gtΔ (x(σ(t)), t) 0
406 � 9 Nonlinear dynamic-algebraic equations = G(x σ (t), x(t), t)x Δ (t) + gtΔ (x(σ(t)), t),
t ∈ If ,
or uΔ (t) − gtΔ (x(σ(t)), t) = G(x σ (t), x(t), t)x Δ (t),
t ∈ If .
By the inclusion uΔ (t) − gtΔ (x(σ(t)), t) ∈ im G(x σ (t), x(t), t),
t ∈ If ,
(9.5)
it follows that there is a w(t) ∈ im G(x σ (t), x(t), t), t ∈ If , so that uΔ (t) = G(x σ (t), x(t), t)w(t) + gtΔ (x σ (t), t),
t ∈ If .
Note that the inclusion (9.5) holds trivially in the case when G(x(t), t) has full row rank for any t ∈ If . For any solution of the equation (9.1), we have the following identities: f (uΔ (t), x(t), t) = 0,
t ∈ If ,
and f (G(x σ (t), x(t), t)w(t) + gtΔ (x σ (t), t)) = 0,
t ∈ If ,
are valid and then the values x(t) belong to the set n
̃ M 0 (t) = {x ∈ Df : ∃y ∈ ℝ : f (y, x, t) = 0}. ̃ The sets M0 (t) and M 0 (t) are defined for any t ∈ If . We have ̃ M0 (t) ⊆ M 0 (t),
t ∈ If .
If G(x σ (t), x(t), t) has full row rank, we have ̃ M0 (t) = M 0 (t),
t ∈ If .
Theorem 9.1. Let the equation (9.1) have a properly involved derivative. Then, for each t ∈ If and each x(t) ∈ M0 (t), there is a unique y(t) ∈ ℝn , so that y(t) − gtΔ (x(σ(t)), t) ∈ im G(x σ (t), x(t), t) and f (y(t), x(t), t) = 0.
9.2 Constraints and consistent initial values
� 407
Proof. Let t1 ∈ If and x 1 (t1 ) ∈ M0 (t1 ) be arbitrarily chosen. Suppose that there are y1 (t1 ), y2 (t1 ) ∈ ℝn , so that y1 (t1 ) − gtΔ (x 1 (σ(t1 )), x 1 (t1 ), t1 ) ∈ G(x 1 (σ(t1 )), x 1 (t1 ), t1 ),
y2 (t1 ) − gtΔ (x 1 (σ(t1 )), x 1 (t1 ), t1 ) ∈ G(x 1 (σ(t1 )), x 1 (t1 ), t1 ).
(9.6)
Let N = ker G(x 1 (σ(t1 )), x 1 (t1 ), t1 ) and w1 (t1 ) = G(x 1 (σ(t1 )), x 1 (t1 ), t1 ) (y1 (t1 ) − gtΔ (x 1 (t1 ), t1 )), +
w2 (t1 ) = G(x 1 (σ(t1 )), x 1 (t1 ), t1 ) (y2 (t1 ) − gtΔ (x 1 (t1 ), t1 )). +
Here, G(x 1 (σ(t1 )), x 1 (t1 ), t1 )+ is the Moore–Penrose inverse of G(x 1 (σ(t1 )), x 1 (t1 ), t1 ) (see the Appendix of this book). Then w1 (t1 ), w2 (t1 ) ∈ N ⊥ and G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w1 (t1 )
= G(x 1 (σ(t1 )), x 1 (t1 ), t1 )G(x 1 (σ(t1 )), x 1 (t1 ), t1 ) (y1 (t1 ) − gtΔ (x 1 (t1 ), t1 )), +
G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w2 (t1 )
= G(x 1 (σ(t1 )), x 1 (t1 ), t1 )G(x 1 (σ(t1 )), x 1 (t1 ), t1 ) (y2 (t1 ) − gtΔ (x 1 (t1 ), t1 )). +
By (9.6), it follows that there are q1 (t1 ), q2 (t1 ), so that y1 (t1 ) − gtΔ (x 1 (σ(t1 )), x 1 (t1 ), t1 ) = G(x 1 (σ(t1 )), x 1 (t1 ), t1 )q1 (t1 ),
y2 (t1 ) − gtΔ (x 1 (σ(t1 )), x 1 (t1 ), t1 ) = G(x 1 (σ(t1 )), x 1 (t1 ), t1 )q2 (t1 ). Then G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w1 (t1 )
= G(x 1 (σ(t1 )), x 1 (t1 ), t1 )G(x 1 (σ(t1 )), x 1 (t1 ), t1 ) G(x 1 (σ(t1 )), x 1 (t1 ), t1 )q1 (t1 ) +
= G(x 1 (σ(t1 )), x 1 (t1 ), t1 )q1 (t1 )
= y1 (t1 ) − gtΔ (x 1 (σ(t1 )), x 1 (t1 ), t1 ). As above, G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w2 (t1 ) = y2 (t1 ) − gtΔ (x 1 (σ(t1 )), x 1 (t1 ), t1 ). Thus, y1 (t1 ) − y2 (t1 ) = G(x 1 (σ(t1 )), x 1 (t1 ), t1 )(w1 (t1 ) − w2 (t1 ))
408 � 9 Nonlinear dynamic-algebraic equations and f (gtΔ (x 1 (σ(t1 )), t1 ) + G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w1 (t1 ), x 1 (t1 ), t1 ) = 0,
f (gtΔ (x 1 (σ(t1 )), t1 ) + G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w2 (t1 ), x 1 (t1 ), t1 ) = 0. Hence, 0 = f (gtΔ (x 1 (σ(t1 )), t1 ) + G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w1 (t1 ), x 1 (t1 ), t1 )
− f (gtΔ (x 1 (σ(t1 )), t1 ) + G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w2 (t1 ), x 1 (t1 ), t1 ) 1
𝜕 f (s(gtΔ (x 1 (σ(t1 )), t1 ) + G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w1 (t1 )) 𝜕y
= (∫ 0
+ (1 − s)(gtΔ (x 1 (σ(t1 )), t1 ) + G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w2 (t1 ))x 1 (t1 ), t1 )ds) × G(x 1 (σ(t1 )), x(t1 ), t1 )(w1 (t1 ) − w2 (t1 )). Since the equation (9.1) has a properly involved derivative, we have 1
ker((∫ 0
𝜕 f (s(gtΔ (x 1 (σ(t1 )), t1 ) + G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w1 (t1 )) 𝜕y
+ (1 − s)(gtΔ (x 1 (σ(t1 )), t1 ) + G(x 1 (σ(t1 )), x 1 (t1 ), t1 )w2 (t1 ))x 1 (t1 ), t1 )ds) × G(x 1 (σ(t1 )), x 1 (t1 ), t1 )) = ker G(x 1 (σ(t1 )), x 1 (t1 ), t1 ). Therefore, w1 (t1 ) − w2 (t1 ) ∈ N. Since w1 (t1 ), w2 (t2 ) ∈ N ⊥ , the last relation is possible if and only if w1 (t1 ) = w2 (t1 ). Then y1 (t1 ) = y2 (t1 ). This completes the proof.
9.2 Constraints and consistent initial values
� 409
Theorem 9.2. Let the equation (9.1) have a properly involved derivative and let ker F(y1 (t), y2 (t), x(t), t),
y1 (t), y2 (t) ∈ ℝn , x(t) ∈ Df ,
t ∈ If ,
does not depend on the choice of y1 and y2 . Suppose that there exists a projector R(x(t), t), x(t) ∈ Df , t ∈ If , so that im R(x(t), t) = im G(x 1 (t), x 2 (t), t),
ker R(x(t), t) = ker F(y1 (t), y2 (t), x(t), t), y1 (t), y2 (t) ∈ ℝn ,
x(t), x 1 (t), x 2 (t) ∈ Df ,
t ∈ If .
Then we have the following: 1. f (y(t), x(t), t) = f (R(x(t), t)y(t), x(t), t), 2. 3.
y(t) ∈ ℝn ,
x(t) ∈ Dd ,
t ∈ If .
R is continuously-differentiable on Df × If . ̃ M0 (t) = M 0 (t) for any t ∈ If .
Proof. 1.
Let t ∈ If , x(t) ∈ Df , y(t) ∈ ℝn be arbitrarily chosen. Set η(t) = (I − R(x(t), t))y(t).
Then f (y(t), x(t), t) − f (R(x(t), t)y(t), x(t), t) 1
=∫ 0
1
=∫ 0
𝜕 f (sy(t) + (1 − s)R(x(t), t)y(t), x(t), t)ds(I − R(x(t), t))y(t) 𝜕y 𝜕 f (sy(t) + (1 − s)R(x(t), t)y(t), x(t), t)dsη(t) 𝜕y
= F(y(t), R(x(t), t)y(t), x(t), t)η(t). Note that η(t) ∈ im(I − R(x(t), t)) = ker F(y(t), R(x(t), t)y(t), x(t), t). Therefore, F(y(t), R(x(t), t)y(t), x(t), t) = 0
(9.7)
410 � 9 Nonlinear dynamic-algebraic equations and f (y(t), x(t), t) = f (R(x(t), t)y(t), x(t), t). 2. 3.
The function R is continuously-differentiable because it is a projector defined on C 1 -subspaces. ̃0 (t) be arbitrarily chosen and fixed. Let also y1 (t) ∈ ℝn be such Let t ∈ If , x(t) ∈ M that 0 = f (y1 (t), x(t), t)
= f (R(x(t), t)y1 (t), x(t), t).
Define y(t) = R(x(t), t)y1 (t) + (I − R(x(t), t))gtΔ (x(σ(t)), t). Then y(t) − gtΔ (x(σ(t)), t) = R(x(t), t)(y1 (t) − gtΔ (x(σ(t)), t)) ∈ im R(x(t), t)
= im G(x 1 (t), x 2 (t), t). By the definition of y(t), we find R(x(t), t)y(t) = R(x(t), t)R(x(t), t)y1 (t)
+ R(x(t), t)(I − R(x(t), t))gtΔ (x(σ(t)), t)
= R(x(t), t)y1 (t). From here and using (1), we arrive at
f (y(t), x(t), t) = f (R(x(t), t)y(t), x(t), t)
= f (R(x(t), t)y1 (t), x(t), t) = f (y1 (t), x(t), t) = 0.
̃ Consequently, x(t) ∈ M0 (t). Because x(t) ∈ M 0 (t), and we get that it is an element of M0 (t), we get the inclusion ̃ M 0 (t) ⊆ M0 (t). Hence, using that
9.3 Linearization
� 411
̃ M0 (t) ⊆ M 0 (t), we get ̃ M0 (t) = M 0 (t). This completes the proof.
9.3 Linearization Define 1
H(y(t), x 1 (t), x 2 (t), t) = ∫ 0
𝜕 f (y(σ(t)), sx 1 (t) + (1 − s)x 2 (t), t)dh, 𝜕y
where 𝜕 f (y(t), hx 1 (t) + (1 − h)x 2 (t), t) 𝜕y 𝜕 1 2 2 = ( f (y(σ(t), x11 (t), . . . , xj−1 (t), hxj1 (t) + (1 − h)xj2 (t), xj+1 (t), . . . , xm (t), t)))k,m i,j=1 , 𝜕xj where y(t) ∈ ℝn , x 1 (t), x 2 (t) ∈ Df , t ∈ If . Let I∗ ⊆ I and x∗ ∈ C 1 (I∗ ) be such that x∗ (t) ∈ Df , t ∈ If , and g(x∗ (⋅), ⋅) ∈ C 1 (I∗ ). Set Δσ
Δ
A(t) = F((g(x∗ (t), t)) , (g(x∗ (t), t)) , x∗ (t), t), B(t) = G(x∗σ (t), x∗ (t), t), Δ
C(t) = H((g(x∗ (t), t)) , x∗σ (t), x∗ (t), t),
t ∈ I∗ .
Define the equation Δ
A(t)(B(t)x(t)) + C(t)x(t) = q(t),
t ∈ I∗ .
(9.8)
Definition 9.4. The equation (9.8) is said to be a linearization of the equation (9.1) along the reference function x∗ . Note that the reference function x∗ is not necessary to be a solution to the equation (9.1).
412 � 9 Nonlinear dynamic-algebraic equations Example 9.3. Let 𝕋 = ℤ. Consider the following nonlinear dynamic-algebraic equation: 2
Δ
3((x1 (t)) + x2 (t)x3 (t)) = q1 (t),
2x1 (t) + x3 (t) = q2 (t),
x2 (t) + x3 (t) = q3 (t),
t ∈ 𝕋.
Here, n = 1, k = m = 3 and σ(t) = t + 1,
t ∈ 𝕋.
We have 3 0 f (y, x, t) = (0) y + (2x1 + x3 ) − q(t), 0 x2 + x3 g(x, t) = x12 + x2 x3 .
Then 1 3 F(y1 (t), y2 (t), x(t), t) = ∫ (0) dh 0 0
3 = (0) , 0
y1 (t), y2 (t) ∈ ℝ,
x(t) ∈ ℝ3 ,
t ∈ If ,
and 1
G(x 1 (t), x 2 (t), t) = ∫(2hx11 (t) + 2(1 − h)x12 (t), x32 (t), x21 (t))dh 0
= (x11 (t) − x12 (t), x32 (t), x21 (t)),
x 1 (t), x 2 (t) ∈ ℝ3 ,
t ∈ If ,
and 0 H(y(t), x (t), x (t), t) = ∫ ( 2 0 0 1
2
0 0 1
1
0 = (2 0
0 0 1
0 1 ) dh 1 0 1) , 1
Let x∗ ∈ C 1 (𝕋) be arbitrarily chosen. Then
y(t) ∈ ℝ,
x 1 (t), x 2 (t) ∈ ℝ3 ,
t ∈ 𝕋.
9.3 Linearization
� 413
3 F(yσ (t), y(t), x∗ (t), t) = (0) , 0
σ σ G(x∗σ (∗t), x∗ (t), t) = (x∗1 (t) − x∗1 (t), x∗3 (t), x∗1 (t))
= (x∗1 (t + 1) − x∗1 (t), x∗3 (t), x∗1 (t + 1)),
0 H(y(t), x∗σ (t), x∗ (t), t) = ( 2 0
0 0 1
0 1) , 1
t ∈ 𝕋,
y(t) ∈ ℝ.
Next, Δ
(g(x∗ (t), t)) = G(x∗σ (t), x∗ (t), t)x∗Δ (t)
Δ x∗1 (t) Δ = (x∗1 (t + 1) − x∗1 (t), x∗3 (t), x∗1 (t + 1)) (x∗2 (t)) Δ x∗3 (t)
Δ Δ Δ = (x∗1 (t + 1) − x∗1 (t))x∗1 (t) + x∗3 (t)x∗2 (t) + x∗1 (t + 1)x∗3 (t), Δσ
Δ
A(t) = F((g(x∗ (t), t)) , (g(x∗ (t), t)) , x∗ (t), t) 3 = (0) , 0
B(t) = G(x∗σ (t), x∗ (t), t)
= (x∗1 (t + 1) − x∗1 (t), x∗3 (t), x∗1 (t + 1)), Δ
C(t) = H((g(x∗ (t), t)) , x∗σ (t), x∗ (t)) 0 = (2 0
0 0 1
0 1) , 1
t ∈ 𝕋.
Therefore, 3 x1 (t) (0) ((x∗1 (t + 1) − x∗1 (t), x∗3 (t), x∗1 (t + 1)) (x2 (t))) 0 x3 (t) 0 + (2 0 or
0 0 1
0 x1 (t) 1 ) (x2 (t)) = q(t), 1 x3 (t)
t ∈ 𝕋,
Δ
414 � 9 Nonlinear dynamic-algebraic equations 3 Δ (0) ((x∗1 (t + 1) − x∗1 (t))x1 (t) + x∗3 (t)x2 (t) + x∗1 (t + 1)x3 (t)) 0 0 + (2x1 (t) + x3 (t)) = q(t), x2 (t) + x3 (t)
t ∈ 𝕋,
or Δ
((x∗1 (t + 1) − x∗1 (t))x1 (t) + x∗3 (t)x2 (t) + x∗1 (t + 1)x3 (t)) = q1 (t),
2x1 (t) + x3 (t) = q2 (t),
x2 (t) + x3 (t) = q3 (t),
t ∈ 𝕋.
Exercise 9.3. Let 𝕋 = 3ℕ0 and x∗ (t) = 1 + 2t + 4t 2 ,
t ∈ 𝕋.
Find the linearization of the following nonlinear dynamic-algebraic equation: 3
2
Δ
((x1 (t)) + (x2 (t)) − x1 (t)x3 (t)) = q1 (t),
x1 (t) − x2 (t) + 2x3 (t) = q2 (t),
x1 (t) + 3x3 (t) = q3 (t),
t ∈ 𝕋.
If the equation (9.1) has a properly involved derivative, then the decomposition ker A(t) ⊕ im B(t) = ℝn ,
t ∈ I∗ ,
holds if ker A(t), t ∈ I∗ , and im B(t), t ∈ I∗ are C -subspaces. If the subspace ker F(y1 (t), y2 (t), x(t), t), y1 (t), y2 (t) ∈ ℝn , x(t) ∈ Df , t ∈ I∗ does not depend on y1 and y2 and x∗ ∈ C 1 (I∗ ), by Theorem 9.2 it follows that ker A(t), t ∈ I∗ and im B(t), t ∈ I∗ are C 1 -subspaces. We set B∗ (x(t), t) = G(x σ (t), x(t), t),
σ
A∗ (x 1 (t), x(t), t) = F((B∗ (x(t), t)x 1 (t) + gtΔ (x(σ(t)), t)) ,
B∗ (x(t), t)x 1 (t) + gtΔ (x(σ(t)), t), x(t), t),
C∗ (x 1 (t), x(t), t) = H(B∗ (x(t), t)x 1 (t) = gtΔ (x(σ(t)), t), x σ (t), x(t), t), x 1 (t), x(t) ∈ Df , t ∈ If . Then we have the following: Δσ
Δ
A(t) = F((g(x∗ (t), t)) , (g(x∗ (t), t)) , x∗ (t), t)
σ
= F((G(x∗σ (t), x∗ (t), t)x∗Δ (t) + gtΔ (x∗ (σ(t)), t)) ,
G(x∗σ (t), x∗ (t), t)x∗Δ (t) + gtΔ (x∗ (σ(t)), t), x∗ (t), t)
9.3 Linearization
� 415
σ
= F((B∗ (x∗ (t), t)x∗Δ (t) + gtΔ (x∗ (σ(t)), t)) ,
B∗ B∗ (x∗ (t), t)x∗Δ (t) + gtΔ (x∗ (σ(t)), t), x∗ (t), t)
= A∗ (x∗Δ (t), x∗ (t), t),
B(t) = G(x∗σ (t), x∗ (t), t) = B∗ (x∗ (t), t),
Δ
C(t) = H((g(x∗ (t), t)) , x∗σ (t), x∗ (t), t)
= H(G(x∗σ (t), x∗ (t), t)x∗Δ (t) + gtΔ (x∗ (σ(t)), t), x∗σ (t), x∗ (t), t) = H(B∗ (x∗ (t), t)x∗Δ (t) + gtΔ (x∗ (σ(t)), t), x∗σ (t), x∗ (t), t) = C∗ (x∗Δ (t), x∗ (t), t),
t ∈ I∗ .
Theorem 9.3. Let the equation (9.1) have a properly involved derivative. Then the decomposition ker A∗ (x 1 (t), x(t), t) ⊕ im B∗ (x(t), t) = ℝn
(9.9)
holds for any x 1 (t) ∈ ℝm , x(t) ∈ Df , t ∈ If , and ker A∗ and im B∗ are C -subspaces. Proof. Since the equation (9.1) has a properly involved derivative, we have ker F(y1 (t), y2 (t), x(t), t) ⊕ im G(x 1 (t), x 2 (t), t) for any y1 (t), y2 (t) ∈ ℝn , x 1 (t), x 2 (t) ∈ ℝm , x(t) ∈ Df , t ∈ If . For each triple (x(t), x(t), t) ∈ ℝm × Df × If , we set σ
y1 (t) = (B∗ (x(t), t)x(t) + gtΔ (x(σ(t)), t)) ,
y2 (t) = B∗ (x(t), t)x(t) + gtΔ (x(σ(t)), t),
x 1 (t) = x σ (t),
x 2 (t) = x(t). Then
F(y1 (t), y2 (t), x(t), t) = A∗ (x(t), x(t), t), G(x 1 (t), x 2 (t), t) = B∗ (x(t), t)
and ker A∗ (x(t), x(t), t) ⊕ im B∗ (x(t), t) = ℝn . Because A∗ and B∗ are continuous matrix functions with constant rank, we have that ker A∗ and im B∗ are C -subspaces. This completes the proof.
416 � 9 Nonlinear dynamic-algebraic equations Definition 9.5. Let the equation (9.8) have a properly involved derivative. The projector valued function R defined by im R(x 1 (t), x(t), t) = im B∗ (x(t), t),
ker R(x 1 (t), x(t), t) = ker A∗ (x 1 (t), x(t), t), x 1 (t) ∈ ℝm , x(t) ∈ Df , t ∈ If , is said to be a border projector function or border projector of the equation (9.1). The basic assumption below is as follows: (E1) 1. The function f is classical continuously-differentiable with respect to its first and second arguments and delta continuously-differentiable with respect to its third argument on ℝn × Df × If . The functions F and H are continuous on ℝn × ℝn × Df × If . The function g is classical continuously-differentiable with respect to its first argument and delta continuously-differentiable with respect to its second argument on Df × If . 2. The equation (9.1) has a properly involved derivative. 3. If ker F(y1 , y2 , x, t) depends on y1 and y2 , then suppose that g has continuous classical second derivative with respect to its first argument and continuous delta second derivative with respect to its second argument on Df × If . 4. The transversality conditions (9.2) and (9.9) are equivalent.
9.4 Regular linearized equations with tractability index one Suppose that (E1) holds and the matrices A∗ , B∗ , C∗ are defined as in the previous section. Then A∗ , B∗ , C∗ and the border projector are continuous matrix functions. Assume that the linearized equation (9.8) is regular with tractability index one. Denote N0 (x(t), t) = ker B∗ (x(t), t),
x(t) ∈ Df ,
t ∈ If ,
x(t) ∈ Df ,
t ∈ If .
and let Q0 be a projector onto B∗ , P0 (x(t), t) = I − Q0 (x(t), t),
We can choose P0 and Q0 to be continuous. Let B∗−1 be the {1, 2}-inverse of B∗ defined by B∗ (x(t), t)B∗− (x 1 (t), x(t), t)B∗ (x(t), t) = B∗ (x(t), t),
B∗− (x 1 (t), x(t), t)B∗ (x(t), t)B∗−1 (x 1 (t), x(t), t) = B∗− (x 1 (t), x(t), t), B∗ (x(t), t)B∗− (x 1 (t), x(t), t) = R(x 1 (t), x(t), t), B∗− (x 1 (t), x(t), t)B∗ (x(t), t) 1
(9.10)
1
= P0 (x (t), x(t), t),
x (t) ∈ ℝm ,
x(t) ∈ Df ,
t ∈ If ,
9.4 Regular linearized equations with tractability index one
� 417
where R(x 1 (t), x(t), t), x 1 (t) ∈ ℝm , x(t) ∈ Df , t ∈ If is a continuous projector along ker A∗ (x 1 (t), x(t), t), x 1 (t) ∈ ℝm , x(t) ∈ Df , t ∈ If . Note that B∗− is uniquely determined by (9.10). Suppose that G0 (x 1 (t), x(t), t) = A∗ (x 1 (t), x(t), t)B∗ (x(t), t), 1
Π0 (x(t), t) = P0 (x(t), t),
C0 (x (t), x(t), t) = C∗ (x 1 (t), x(t), t),
G1 (x 1 (t), x(t), t) = G0 (x 1 (t), x(t), t) + C0 (x 1 (t), x(t), t)Q0 (x(t), t),
N1 (x 1 (t), x(t), t) = ker G1 (x 1 (t), x(t), t),
Π1 (x 1 (t), x(t), t) = Π0 (x(t), t)P1 (x 1 (t), x(t), t),
x 1 (t) ∈ ℝm ,
x(t) ∈ Df ,
t ∈ If ,
where Q1 (x 1 (t), x(t), t), x 1 (t) ∈ ℝm , x(t) ∈ Df , t ∈ If , is a continuous projector onto N1 (x 1 (t), x(t), t), x 1 (t) ∈ ℝm , x(t) ∈ Df , t ∈ If and P1 (x 1 (t), x(t), t) = I − Q1 (x 1 (t), x(t), t),
x 1 (t) ∈ ℝm ,
x(t) ∈ Df ,
t ∈ If .
The total derivative of B∗ Π0 B∗− in jet variables will be denoted as follows: Diff1 (x 2 (t), x 1 (t), x(t), t) = Dt (B∗ Π0 B∗− )(x 1 (t), x(t), t), x 1 (t), x 2 (t) ∈ ℝm ,
x(t) ∈ Df ,
t ∈ If .
2
The new jet variable x 2 (t) ∈ ℝm , t ∈ If can be considered as a place holder for x Δ (t), t ∈ If . We have that there are c1 , c2 ∈ [t, σ(t)], so that Diff1 (x 2 (t), x Δ (t), x(t), t) =
𝜕 (B Π B− )(x Δ (t), x(t), t) Δt ∗ 0 ∗ 𝜕 + x Δ (t) (B∗ Π0 B∗− )(x Δ (c1 ), x(c1 ), σ(c1 )) 𝜕x 2 𝜕 + x Δ (t) 1 (B∗ Π0 B∗− )(x Δ (c2 ), x(σ(c2 )), σ(c2 )). 𝜕x
Example 9.4. Let 𝕋 = ℤ. Consider the following nonlinear dynamic-algebraic equation:
2
x1Δ (t) + 2x1 (t) = 0, 2
(x1 (t)) + (x2 (t)) − 2 = t 2 on Df = {x ∈ ℝ2 : x2 > 0}, Here, n = 1, m = k = 2 and
If = 𝕋.
418 � 9 Nonlinear dynamic-algebraic equations
f (y, x, t) = (
y + x1 ), x12 + x22 − 2 − t 2
g(x, t) = x1 ,
y ∈ ℝ,
x ∈ ℝ2 ,
t ∈ 𝕋.
x ∈ ℝ2 ,
t ∈ 𝕋.
Then 1 fy (y, x, t) = ( ) , 0 fx (y, x, t) = (
1 2x1
gx (x, t) = (1, 0),
gt (x, gt) = 0,
0 ), 2x2
y ∈ ℝ,
Hence, 1
1 F(y1 (t), y2 (t), x(t), t) = ∫ ( ) dh 0 0
1 = ( ), 0 1
G(x 1 (t), x 2 (t), t) = ∫(1, 0)dh 0
= (1, 0), 1
H(y(t), x 1 (t), x 2 (t), t) = ∫ ( 0
2sx11 (t)
1 + 2(1 − s)x12 (t)
2sx21 (t)
1 =( 1 2 s=1 x1 (t)s |s=0 − x12 (t)(1 − s)2 |s=1 s=0 =(
x11 (t)
1 + x12 (t)
x21 (t)
0 ) ds + 2(1 − s)x22 (t)
x21 (t)s2 |s=1 s=0
0
− x22 (t)(1 − s)2 |s=1 s=0
0 ), + x22 (t)
x 1 (t), x 2 (t) ∈ Df , y(t), y1 (t), y2 (t) ∈ ℝ,
t ∈ 𝕋.
Therefore, B∗ (x(t), t) = (1, 0),
1 A∗ (x 1 (t), x(t), t) = ( ) , 0
C∗ (x 1 (t), x(t), t) = H((1, 0)x 1 (t), x(t + 1), x(t), t) =(
1 x1 (t + 1) + x1 (t)
0 ), x2 (t + 1) + x2 (t)
)
9.4 Regular linearized equations with tractability index one
x 1 (t) ∈ nℝ2 ,
x(t) ∈ Df ,
t ∈ If .
Next, G0 (x 1 (t), x(t), t) = A∗ (x 1 (t), x(t), t)B∗ (x(t), t) 1 = ( ) (1, 0) 0 =(
1 0
0 ), 0
x 1 (t) ∈ ℝ2 ,
x(t) ∈ Df ,
t ∈ If .
Let z(t) = (
z1 (t) ) ∈ ℝ2 , z2 (t)
t ∈ 𝕋,
be such that G0 (x 1 (t), x(t), t)z(t) = 0,
x 1 (t) ∈ ℝ2 ,
x(t) ∈ Df ,
Then 1 ( 0
0 z1 (t) 0 )( ) = ( ), 0 z2 (t) 0
t ∈ 𝕋,
or (
z1 (t) 0 ) = ( ), 0 0
t ∈ 𝕋,
whereupon z1 (t) = 0,
t ∈ 𝕋.
Let Q0 = (
0 0
0 ), p
where p ∈ ℝ, p ≠ 0, is chosen so that Q0 = Q0 Q0 . We have (
0 0
0 0 )=( p 0
0 0 )( p 0
0 ) p
t ∈ 𝕋.
�
419
420 � 9 Nonlinear dynamic-algebraic equations
=(
0 0
0 ), p2
whereupon p2 = p and p = 1. Consequently, Q0 = (
0 0
0 ) 1
and G1 (x 1 (t), x(t), t) = G0 (x 1 (t), x(t), t) + C∗ (x 1 (t), x(t), t)Q0 =(
1 0
0 1 )+( 0 x1 (t + 1) + x1 (t)
=(
1 0
0 0 )+( 0 0
=(
1 0
0 ), x1 (t) + x2 (t)
0 0 )( x2 (t + 1) + x2 (t) 0
0 ) 1
0 ) x1 (t) + x2 (t) x 1 (t) ∈ ℝ2 ,
x(t) ∈ Df ,
t ∈ 𝕋.
Note that det G0 (x 1 (t), x(t), t) = x1 (t) + x2 (t) ≠ 0,
x 1 (t) ∈ ℝ2 ,
x(t) ∈ Df ,
t ∈ If .
Exercise 9.4. Let 𝕋 = 2ℤ. Consider the following nonlinear dynamic-algebraic equation:
4
x2Δ (t) + 4x2 (t) = 0, 4
(x1 (t)) + (x2 (t)) − 3 = t 2 on 1 Df = {x ∈ ℝ2 : x1 > }, 2 Find the matrices A∗ , B∗ , C∗ , G0 , P0 , Q0 , G1 .
If = 𝕋.
9.5 Advanced practical problems
� 421
9.5 Advanced practical problems Problem 9.1. Let 𝕋 = 2ℕ0 . Prove that the nonlinear dynamic-algebraic equation 2
x1Δ (t) + (x1 (t) + x2 (t) + t) = 0,
x1 (t) − 3x2 (t) = t 2 ,
t ∈ 𝕋,
has a properly involved derivative. Problem 9.2. Let 𝕋 = 3ℕ0 . Prove that the nonlinear dynamic-algebraic equation x1 (t)x1Δ (t) − x2 (t) = 0, x1 (t) − x2 (t) = 0,
t ∈ 𝕋,
has a properly involved derivative on the region {(x, t) ∈ Df × If : x1 = 0}. Problem 9.3. Let 𝕋 = 3ℕ0 . Find M0 (t) for the system x2Δ (t) − (t 2 + 1)x2 (t) = 0, 2
4
(x1 (t)) + 7(x2 (t)) = 2 + t + t 4 ,
t ∈ 𝕋,
on Df = {x ∈ ℝ2 : x1 > 0}. Problem 9.4. Let 𝕋 = 2ℕ0 and x∗ (t) = 1 + 2e1 (t, 1) + t 2 + t 3 ,
t ∈ 𝕋.
Find the linearization of the following nonlinear dynamic-algebraic equation: 3
4 Δ
(x1 (t)x2 (t)x3 (t) + (x2 (t)) − (x1 (t)) ) = q1 (t),
2x1 (t) + x2 (t) + 3x3 (t) = q2 (t),
x1 (t) + x2 (t) − x3 (t) = q3 (t),
t ∈ 𝕋.
Problem 9.5. Let 𝕋 = 2ℕ0 . Consider the following nonlinear dynamic-algebraic equation: 2
x1Δ (t) − 7x1 (t) = 0, 6
(x1 (t)) + (x2 (t)) − 1 = t 4 on
422 � 9 Nonlinear dynamic-algebraic equations Df = {x ∈ ℝ2 : x2 > 3},
If = 𝕋.
Find the matrices A∗ , B∗ , C∗ , G0 , P0 , Q0 , G1 . Problem 9.6. Let 𝕋 = 3ℕ0 . Consider the following nonlinear dynamic-algebraic equation: x1Δ (t) − x1 (t) = 0,
1 , 1+t 1 4 2 (x1 (t)) + (x3 (t)) − 3 = 1 + t2 2
2
(x1 (t)) + (x2 (t)) − 1 =
on Df = {x ∈ ℝ3 : x2 > 1,
x3 > 2},
If = 𝕋.
Find the matrices A∗ , B∗ , C∗ , G0 , P0 , Q0 , G1 .
9.6 Notes and references In this chapter, we investigate nonlinear dynamic-algebraic equations on arbitrary time scales. We define properly involved derivatives, constraints and consistent initial values for the considered equations. We introduce a linearization for nonlinear dynamicalgebraic equations and investigate the total derivative for regular linearized equations with tractability index one.
A Elements of theory of matrices In this chapter, we give some basic definitions and basic facts for real matrices, used in this book. For the proofs, we refer the reader to [11] and [12]. With Mm×n , we will denote the set of all real matrices of type m × n.
A.1 Equivalent pairs of matrices Definition A.1. Let A1 , B1 , A2 , B2 ∈ Mm×n . We say that the pairs (A1 , B1 ) and (A2 , B2 ) are equivalent if there are nonsingular matrices P ∈ Mm×m and Q ∈ Mn×n so that A2 = PA1 Q, B2 = PB1 Q.
We will write (A1 , B1 ) ∼ (A2 , B2 ). The relation induced in Definition A.1 is an equivalence relation. More precisely, let Aj , Bj ∈ Mm×n , j ∈ {1, 2, 3}. Then we have the following: 1. (A1 , B1 ) ∼ (A1 , B1 ). 2. If (A1 , B1 ) ∼ (A2 , B2 ), then (A2 , B2 ) ∼ (A1 , B1 ). 3. If (A1 , B2 ) ∼ (A2 , B2 ) and (A2 , B2 ) ∼ (A3 , B3 ), then (A1 , B1 ) ∼ (A3 , B3 ).
A.2 Kronecker canonical form Theorem A.1 (Kronecker canonical form). Let A, B ∈ Mm×n . Then there exist nonsingular matrices P ∈ Mm×m and Q ∈ Mn×n such that P(λA − B)Q = diag(Lϵ1 , . . . , Lϵp , Mη1 , . . . , Mηq , Gp1 , . . . , Gpr , Nσ1 , . . . , Nσs ), where the block entries have the following properties: 1. Any entry Lϵj is a bidiagonal block of size ϵj × (ϵj + 1), ϵj ∈ ℕ0 , of the form 0 λ(
https://doi.org/10.1515/9783111377155-010
1 ..
.
1
..
. 0
1
)−(
0 .. .
.. 1
.
0
).
424 � A Elements of theory of matrices 2.
Any entry Mηj is a bidiagonal matrix of size (ηj + 1) × ηj , ηj ∈ ℕ0 , of the form 1 0 λ(
3.
..
0 . .
1 0
)−(
1
.. ..
. .
0 1
).
Any entry Gpj is a bidiagonal matrix of size pj × pj , pj ∈ ℕ0 , λj ∈ ℝ, of the form 1 λ(
4.
..
..
λj .
..
1 ..
)−(
.
.
.. ..
1
. .
1 λj
).
Any entry Nσj is a nilpotent block of size σj × σj , σj ∈ ℕ0 , of the form 0 λ(
1 ..
.
.. ..
1 . .
1 0
)−(
..
.
..
.
). 1
A.3 Projectors and subspaces Let L(ℝm ) denotes the space of all linear maps from ℝm to ℝm . Definition A.2. A linear map Q ∈ L(ℝm ) is said to be a projector if Q2 = Q. Definition A.3. A projector Q ∈ L(ℝm ) is a called a projector onto S ⊆ ℝm if im Q = S. Definition A.4. A projector Q ∈ L(ℝm ) is said to be a projector along S ⊆ ℝm if S = ker Q. Definition A.5. A projector Q ∈ L(ℝm ) is said to be an orthogonal projector or orthoprojector if Q = QT . Theorem A.2. Let P1 , P2 ∈ L(ℝm ) be projectors and Q1 = I − P 1 ,
Q2 = I − P 2 . Then we have the following assertions: 1. z ∈ im Q1 if and only if z = Q1 z.
A.3 Projectors and subspaces
2.
�
425
If Q1 and Q2 project onto the same subspace S, then Q2 = Q1 Q2 , Q1 = Q2 Q1 .
3.
If P1 and P2 project along the same subspace S, then P2 = P2 P1 , P1 = P1 P2 .
4. 5. 6.
Q1 projects onto S if and only if P1 projects along S. Each matrix of the form I +PZQ with arbitrary matrix Z is nonsingular and its inverse is I − PZQ. Each projector P1 is diagonalizable. Its eigenvalues are 0 and 1. The eigenvalue 1 has multiplicity r = rank P1 .
Theorem A.3. Let A, D ∈ Mm×m , r = rank(AD). 1. If ker A ∩ im D = 0, im D = im A, then ker A ⊕ im D = ℝm . 2.
If ker A ⊕ im D = ℝm , then a) ker A ∩ im D = {0}, b) im(AD) = im A, c) ker(AD) = ker D, d) rank A = rank D = r.
Theorem A.4. Given matrices G, Π, N, W of suitable sizes such that ker G = im N,
ker(ΠN) = im W , then it holds that ker G ∩ ker Π = ker(NW ). Theorem A.5. Let M and N be subspaces of ℝm . Then (N + M)⊥ = N ⊥ + M ⊥ . Theorem A.6. 1.
Let N and X be two subspaces of ℝm , N ∩ X = {0}. Then dim N + dim X ≤ m
and there is a projector Q ∈ L(ℝm ) such that Q = N, ⊆ ker Q.
426 � A Elements of theory of matrices 2.
Let S and N be subspaces of ℝm . If S ⊕ N = ℝm , then there is a uniquely determined projector P ∈ L(ℝm ) such that im P = S,
3. 4.
ker P = N.
An orthogonal projector P projects onto S = im P and along S ⊥ = ker P. Let K and N be subspaces of ℝm and N̂ = K ∩ N; X ⊆ ℝm is complement of N̂ in K, i. e., K = N̂ ⊕ X. Then there is a projector Q ∈ L(ℝm ) onto N such that X ⊆ ker Q.
Definition A.6. Let Ω ⊆ ℝq be open and connected and F(x) ⊆ ℝn be a subspace for each x ∈ Ω. For l ∈ ℕ ∪ {0}, F is said to be a C l -subspace on Ω if there exists a projector function R ∈ V l (Ω) so that R(x) = R(x)2 ,
im R(x) = F(x),
x ∈ Ω.
A.4 Generalized inverses Definition A.7. For a matrix Z ∈ Mn×m , we will call the matrix Z − ∈ Mm×n reflexive generalized inverse if ZZ − Z = Z
(A.1)
Z − ZZ − = Z − .
(A.2)
and
Z − is said to be {1, 2}-inverse of Z. We have that ZZ − ∈ Mn×n and Z − Z ∈ Mm×m . Also, Z − Z and ZZ − are projectors, and rank(ZZ − Z) = rank(Z − ZZ − ) = rank Z − = rank Z. Theorem A.7. Suppose that (A.1) and (A.2) hold. Let R ∈ Mn×n be any projector onto im Z and P ∈ Mm×m be any projector along ker Z. If
A.4 Generalized inverses
� 427
Z−Z = P
(A.3)
ZZ − = R,
(A.4)
and
then the generalized reflexive inverse is uniquely determined. Definition A.8. If P and R are orthogonal projectors in (A.3) and (A.4), then Z − Z = (Z − Z)
∗
and ZZ − = (ZZ − ) . ∗
The resulting generalized inverse is called the Moore–Penrose inverse and it is denoted by Z − . To represent the generalized reflexive inverse Z − , we will use the decomposition S Z =U(
0
) V −1
with nonsingular matrices U, S and V . Such decomposition is available using the Householder decomposition of Z. A generalized reflexive inverse is given by Z− = V (
S −1 M1
M2 ) U −1 , M1 SM2
where M1 and M2 satisfy P = Z−Z =V(
I M1 S
0 −1 )V 0
and R = ZZ − I =U( 0
SM2 ) U −1 . 0
Theorem A.8. Let A, D ∈ Mm×m , r = rank(AD). 1. If ker A ∩ im D = 0, im D = im A, then ker A ⊕ im D = ℝm .
428 � A Elements of theory of matrices 2.
If ker A ⊕ im D = ℝm , then a) ker A ∩ im D = {0}, b) im(AD) = im A, c) ker(AD) = ker D, d) rank A = rank D = r.
A.5 Matrix pencils Let A, B ∈ Mm×m . Definition A.9. The matrix pencil (A, B) is defined as the one-parameter family, {λA + B : λ ∈ ℂ}. Definition A.10. If there exists a λ ∈ ℂ, so that λA + B is a nonsingular matrix, i. e., det(λA + B) does not vanish identically, the matrix pencil is said to be regular. Otherwise, the matrix pencil is said to be a singular matrix pencil. A regular matrix pencil (A, B) is equivalent to a matrix pencil (A1 , B1 ) so that A1 = (
Is 0
B1 = (
W 0
0 ), N 0
Im−s
),
for some s ∈ {0, . . . , m}, where W ∈ Ms×s , N ∈ Mm−s×m−s is a nilpotent matrix of index ν, ν ∈ {0, . . . , m}, i. e., N ν = 0 and N ν−1 ≠ 0, and A1 = GAH, B1 = GBH
(A.5)
for nonsingular G, H ∈ Mm×m . This defines the Weierstrass–Kronecker canonical form or Kronecker canonical form of the matrix pencil. Definition A.11. The nilpotency index ν is said to be the Weierstrass–Kronecker index or Kronecker index of the matrix pencil (A, B). Definition A.12. The spectrum of a regular matrix pencil (A, B) is defined as follows: σ((A, B)) = {λ ∈ ℂ : det(λA + B) = 0}. Suppose that (A, B) is a regular matrix pencil. Then det(λA + B) = det(G−1 G(λA + B)HH −1 )
A.5 Matrix pencils
�
429
= det(G−1 (λGAH + GBH)H −1 ) = det(G−1 (λA1 + B1 )H −1 )
= det G−1 det H −1 det(λA1 + B1 )
= det G−1 det H −1 det(λIs + W ), where we have used that det(λN + Im−s ) = 1. Thus, det(λA + B) = 0 if and only if det(λIs + W ) = 0. Therefore, σ((A, B)) = σ((Is , W )). We have the following results. Theorem A.9. Let A be a singular matrix and Q be a projector onto ker A. Then the matrix pencil (A, B) is regular with Kronecker index 1 if and only if the matrix A1 = A + BQ is nonsingular. Theorem A.10. Let A be a singular matrix. Then the matrix pencil (A, B) is regular with Kronecker index 1 if and only if the spaces N = ker A and S = {x ∈ ℝm : Bx ∈ im A} verify N ∩ S = {0}, i. e., N ⊕ S = ℝm .
430 � A Elements of theory of matrices Let Q be a projector onto ker A. Set A0 = A, B0 = B and Ai+1 = Ai + Bi Qi , Bi+1 = Bi Pi ,
i > 0,
where Qi is a projector onto Ni = ker Ai , Pi = I − Qi . We have the following result. Theorem A.11. A matrix pencil (A, B) is regular with Kronecker index ν if and only if the matrices Ai , i ∈ {0, . . . , ν − 1}, are singular and Aν is nonsingular. that
If the matrix pencil (A, B) is regular, then it is possible to choose the projectors Qi so
Qi Qj = 0,
j < i.
(A.6)
Definition A.13. Projectors Qi onto ker A that satisfy (A.6) is said to be an admissible projector sequence. Suppose that Qi , i ∈ {0, . . . , ν} is an admissible projector sequence. Then we have the following: 1. Pi Qj = Qj , j < i, 2. Pi Pj = Pi − Qj , i < j, 3. Qj Pk = Qj , k < j, 4. Pi Pj Pk = Pi − Qj − Qk , k < j < i, 5. Pk+1 Pk+2 . . . Pν−1 Pi = { 6. 7. 8.
Pk+1 Pk+2 . . . Pν−1 − Qi , Pk+1 Pk+2 . . . Pν−1 ,
i ≤ k,
k < i < ν − 1,
Qk Pk+1 Pk+2 . . . Pν−1 Pi = Qk Pk+1 Pk+2 . . . Pν−1 , i ≠ k, Qk Pk+1 Pk+2 . . . Pν−1 Pk = Qk (Pk+1 Pk+2 . . . Pν−1 − I), Qk Pk+1 Pk+2 . . . Pν−1 , { { { Qk Pk+1 Pk+2 . . . Pν−1 P0 . . . Pi = {Qk (Pk+1 Pk+2 . . . Pν−1 − I), { { {Qk (Pk+1 Pk+2 . . . Pν−1 − Pk+1 Pk+2 . . . Pi ,
i < k, i = k, i > k.
A.6 Parameter-dependent matrices and projectors
�
431
A.6 Parameter-dependent matrices and projectors Let I ⊆ 𝕋, 𝕋 be a time scale with forward jump operator and delta differentiation operator σ and Δ, respectively. Suppose that F : I → Mn×m and G : I → Mm×p . Definition A.14. The product FG : I → Mn×p is defined pointwise by (FG)(t) = F(t)G(t),
t ∈ I,
and the product rule applies to the derivatives (FG)Δ (t) = F Δ (t)G(t) + F σ (t)GΔ (t)
= F Δ (t)Gσ (t) + F(t)GΔ (t),
t ∈ I.
Theorem A.12. 1. If the matrix function P ∈ C 1 (I), P : I → Mm×m is projector valued, then it has constant rank r and there are r linearly independent functions η1 , . . . , ηr ∈ C 1 (I), η1 , . . . , ηr : I → ℝm such that im P(t) = span{η1 (t), . . . , ηr (t)}, 2.
3.
t ∈ I.
If a time-dependent subspace L(t) ⊆ ℝm , t ∈ I, with constant dimension r is spanned by the functions η1 , . . . , ηr ∈ C 1 (I), η1 , . . . , ηr : I → ℝm , then the orthoprojector function onto this subspace is continuously differentiable. Let A : I → Mm×m , A ∈ C k (I) has a constant rank r. Then there is a matrix function M ∈ C k (I), M : I → Mm×m that is pointwise nonsingular such that ̃ A(t)M(t) = [A(t), 0],
̃ = r, rank A(t)
t ∈ I.
Definition A.15. Let Ω ⊆ ℝp be open and connected and L(x) ⊆ ℝm , x ∈ Ω. For k ∈ ℕ ∪ {0}, L is said to be a C k -subspace on Ω if there is a projection function R ∈ C k (Ω), R : Ω → Mm×m for which im R(x) = L(x),
x ∈ Ω.
B Fréchet derivatives and Gâteaux derivatives B.1 Remainders Let X and Y be normed spaces. With o(X, Y ), we will denote the set of all maps r : X → Y for which there is a some map α : X → Y such that: 1. r(x) = α(x)‖x‖ for all x ∈ X, 2. α(0) = 0, 3. α is continuous at 0. Definition B.1. The elements of o(X, Y ) will be called remainders. Exercise B.1. Prove that o(X, Y ) is a vector space. Definition B.2. Let f : X → Y be a function and x0 ∈ X. We say that f is stable at x0 if there are some ϵ > 0 and some c > 0 such that ‖x − x0 ‖ ≤ ϵ implies f (x − x0 ) ≤ c‖x − x0 ‖. Example B.1. Let T : X → Y be a linear bounded operator. Then T(x − 0) = T(x)
x ∈ X.
≤ ‖T‖‖x‖, Hence, T is stable at 0.
Theorem B.1. Let X, Y , Z and W be normed spaces, r ∈ o(X, Y ), and assume f : W → X is stable at 0, g : Y → Z is stable at 0. Then r ∘ f ∈ o(W , Y ) and g ∘ r ∈ o(X, Z). Proof. Since r ∈ o(X, Y ), then there is a map α : X → Y such that r(x) = α(x)‖x‖,
x ∈ X,
α(0) = 0 and α is continuous at 0. Define β : W → Y such that β(w) = {
‖f (w)‖ α(f (w)) ‖w‖
0
if if
w ≠ 0, w = 0,
w ∈ W . Since f : W → Z is stable at 0, then there are constants ϵ > 0 and c > 0 such that ‖w‖ ≤ ϵ implies that f (w) ≤ c‖w‖. Hence, https://doi.org/10.1515/9783111377155-011
434 � B Fréchet derivatives and Gâteaux derivatives f (0) = 0
and f (0) = 0.
Next, β(0) = 0 and if w ≠ 0, ‖w‖ ≤ ϵ, we get ‖f (w)‖ β(w) = α(f (w)) ‖w‖ ≤ cα(f (w)). From here, using that f (w) → 0
as w → 0
and α(f (w)) → 0
as
w → 0,
we get β(w) → 0
as w → 0.
Therefore, β : W → Y is continuous at 0. Also, we have – if w = 0, then β(0) = 0,
r ∘ f (0) = α(f (0))f (0) =0
= β(0). –
if w ≠ 0, then r ∘ f (w) = α(f (w))f (w) ‖w‖β(w) = f (w) ‖f (w)‖ = ‖w‖β(w). Therefore, r ∘ f ∈ o(W , Y ).
Since g : Y → Z is stable at 0, then there are constants ϵ1 > 0 and c1 > 0 such that ‖w‖ ≤ ϵ1 implies g(w) ≤ c1 ‖w‖. Define γ : X → Y by
B.2 Definition and uniqueness of the Fréchet derivative
γ(x) = {
g(‖x‖α(x)) ‖x‖
0
if if
� 435
x ≠ 0, x = 0.
Then g(‖x‖α(x)) = ‖x‖γ(x),
x ∈ X.
For x ≠ 0, x ∈ X, we have ‖g(‖x‖α(x))‖ γ(x) = ‖x‖ c1 ‖x‖α(x) ≤ ‖x‖ = c1 α(x). Then γ(x) → 0
as x → 0,
x ∈ X.
Also, g ∘ r(x) = g(r(x))
= g(α(x)‖x‖) = γ(x)‖x‖,
x ∈ X.
This completes the proof.
B.2 Definition and uniqueness of the Fréchet derivative Suppose that X and Y are normed spaces, U is an open subset of X and x0 ∈ U. With L (X, Y ), we will denote the vector space of all linear bounded operators from X to Y . Definition B.3. We say that a function f : X → Y is Fréchet differentiable at x0 if there is some L ∈ L (X, Y ) and r ∈ o(X, Y ) such that f (x) = f (x0 ) + L(x − x0 ) + r(x − x0 ),
x ∈ U.
The operator L will be called the Fréchet derivative of the function f at x0 . We will write Df (x0 ) = L. Suppose that L1 , L2 ∈ L (X, Y ) and r1 , r2 ∈ o(X, Y ) are such that f (x) = f (x0 ) + L1 (x − x0 ) + r1 (x − x0 ),
f (x) = f (x0 ) + L2 (x − x0 ) + r2 (x − x0 ),
x ∈ U.
436 � B Fréchet derivatives and Gâteaux derivatives Then f (x0 ) + L1 (x − x0 ) + r1 (x − x0 ) = f (x0 ) + L2 (x − x0 ) + r2 (x − x0 ),
x ∈ U,
or L1 (x − x0 ) − L2 (x − x0 ) = r2 (x − x0 ) − r1 (x − x0 ),
x ∈ U.
Also, let α1 , α2 : X → Y be such that r1 (x) = ‖x‖α2 (x),
α1 (0) = α2 (0) = 0,
r2 (x) = ‖x‖α1 (x),
α1 and α2 are continuous at 0. Then L1 (x − x0 ) − L2 (x − x0 ) = ‖x − x0 ‖α1 (x − x0 ) − ‖x − x0 ‖α2 (x − x0 ) = ‖x − x0 ‖(α1 (x − x0 ) − α2 (x − x0 )),
x ∈ U.
Let x ∈ X be arbitrarily chosen. Then there is some h > 0 such that for all |t| ≤ h we have x0 + tx ∈ U. Hence, L1 (tx) − L2 (tx) = ‖tx‖(α1 (tx) − α2 (tx)) or t(L1 (x) − L2 (x)) = |t|‖x‖(α1 (tx) − α2 (tx)), or L1 (x) − L2 (x) = sign(t)‖x‖(α1 (tx) − α2 (tx)) →0
as t → 0.
Because x ∈ X was arbitrarily chosen, we conclude that L1 = L2 and r1 = r2 . Definition B.4. We denote by C 1 (U, Y ) the set of all functions f : U → Y that are Fréchet differentiable at each point of U and Df : U → L (X, Y ) is continuous. We denote by C 2 (U, Y ) the set of all functions f ∈ C 1 (U, Y ) such that Df : U → L (X, Y ) is Fréchet differentiable at each point of U and D(Df ) : U → L (X, L (X, Y )) is continuous. Theorem B.2. Let f1 , f2 : U → Y be Fréchet differentiable at x0 and a, b ∈ ℝ. Then af1 + bf2 is Fréchet differentiable at x0 .
B.2 Definition and uniqueness of the Fréchet derivative
�
437
Proof. Let r1 , r2 ∈ o(X, Y ) be such that f1 (x) = f1 (x0 ) + Df1 (x0 )(x − x0 ) + r1 (x − x0 ),
f2 (x) = f2 (x0 ) + Df2 (x0 )(x − x0 ) + r2 (x − x0 ),
x ∈ U.
Hence, (af1 + bf2 )(x) = a(f1 (x0 ) + Df1 (x0 )(x − x0 ) + r1 (x − x0 ))
+ b(f2 (x0 ) + Df2 (x0 )(x − x0 ) + r2 (x − x0 ))
= af1 (x0 ) + bf2 (x0 )
+ (aDf1 (x0 ) + bDf2 (x0 ))(x − x0 ) + (ar1 (x − x0 ) + br2 (x − x0 )),
x ∈ U.
Note that ar1 + br2 ∈ o(X, Y ). This completes the proof. Theorem B.3. A function f : U → Y is Fréchet differentiable at x0 if and only if there is some function F : U → L (X, Y ) that is continuous at x0 and for which f (x) − f (x0 ) = F(x)(x − x0 ), Proof. 1. and
x ∈ U.
Suppose that there is a function F : U → L (X, Y ) that is continuous at x0 f (x) − f (x0 ) = F(x)(x − x0 ),
x ∈ U.
Then f (x) − f (x0 ) = F(x)(x − x0 ) − F(x0 )(x − x0 ) + F(x0 )(x − x0 ) = F(x0 )(x − x0 ) + r(x − x0 ),
where r(x) = {
(F(x + x0 ) − F(x0 ))(x) 0
for for
x + x0 ∈ U, x + x0 ∉ U.
Define (F(x+x0 )−F(x0 ))(x) ‖x‖
{ { { α(x) = {0 { { {0 Then
for for for
x + x0 ∈ U, x + x0 ∉ U, x = 0.
x ≠ 0,
438 � B Fréchet derivatives and Gâteaux derivatives r(x) = α(x)‖x‖,
x ∈ X.
Let ϵ > 0 be arbitrarily chosen. Since F : U → L (X, Y ) is continuous at x0 , there exists some δ > 0 for which ‖x‖ < δ implies (F(x + x0 ) − F(x0 ))(x) ≤ F(x + x0 ) − F(x0 )‖x‖ < ϵ‖x‖. Therefore, α(x) < ϵ
2.
for ‖x‖ < δ, i. e., α is continuous at 0. From here, we conclude that r ∈ o(X, Y ) and F(x0 ) = Df (x0 ). Suppose that f is Fréchet differentiable at x0 . Then there is some r ∈ o(X, Y ) such that f (x) = f (x0 ) + Df (x0 )(x − x0 ) + r(x − x0 ),
x ∈ U,
where Df (x0 ) ∈ L (X, Y ). Since r ∈ o(X, Y ), there is some α : X → Y such that r(x) = α(x)‖x‖,
α(0) = 0,
α(x) → 0
as x → 0.
By the Hahn–Banach extension theorem, it follows that there is some λx ∈ X ∗ such that λx x = ‖x‖ and |λx v| ≤ ‖v‖,
v ∈ X.
Then r(x) = (λx x)α(x),
x ∈ X,
and f (x) = f (x0 ) + Df (x0 )(x − x0 ) + (λx−x0 (x − x0 ))α(x − x0 ), Let F : U → L (X, Y ) be defined as follows:
x ∈ U.
B.2 Definition and uniqueness of the Fréchet derivative
F(x)(v) = Df (x0 )(v) + (λx−x0 v)α(x − x0 ),
x ∈ U,
� 439
v ∈ X.
We have f (x) = f (x0 ) + F(x)(x − x0 ),
x ∈ U,
r(x − x0 ) = (λx−x0 (x − x0 ))α(x − x0 )
= f (x) − f (x0 ) − Df (x0 )(x − x0 )
= F(x)(x − x0 ) − Df (x0 )(x − x0 ),
x ∈ U.
Note that F(x)(v) − F(x0 )(v) = Df (x0 )(v) + (λx−x0 v)α(x − x0 ) − Df (x0 )(v) = (λx−x0 v)α(x − x0 ) = |λx−x0 v|α(x − x0 ) ≤ ‖v‖α(x − x0 ), x ∈ U, v ∈ X. Then F(x) − F(x0 ) ≤ α(x − x0 ),
x ∈ U.
Consequently, F is continuous at x0 . This completes the proof. Theorem B.4. Let Z be a normed space, assume f : U → Z is Fréchet differentiable at x0 , g : f (U) → Z is Fréchet differentiable at f (x0 ). Then g ∘ f : U → Z is Fréchet differentiable at x0 and D(g ∘ f )(x0 ) = Dg(f (x0 )) ∘ Df (x0 ). Proof. Let y0 = f (x0 ),
L1 = Df (x0 ),
L2 = Dg(y0 ). There exist r1 ∈ o(X, Y ), r2 ∈ o(Y , Z) such that f (x) = f (x0 ) + L1 (x − x0 ) + r1 (x − x0 ),
g(y) = g(y0 ) + L2 (y − y0 ) + r2 (y − y0 ),
x ∈ U,
y ∈ f (U).
Hence, g(f (x)) = g(f (x0 )) + L2 (f (x) − y0 ) + r2 (f (x) − y0 )
= g(y0 ) + L2 (L1 (x − x0 ) + r1 (x − x0 )) + r2 (L1 (x − x0 ) + r1 (x − x0 ))
= g(y0 ) + L2 (L1 (x − x0 )) + L2 (r1 (x − x0 )) + r2 (L1 (x − x0 ) + r1 (x − x0 )),
x ∈ U.
Define r3 : X → Z as follows: r3 (x) = r2 (L1 (x) + r1 (x)),
x ∈ U.
Fix c > ‖L1‖ and represent r1 as follows:

r1(x) = α1(x)‖x‖, x ∈ U.

We have that α1 : X → Y, α1(0) = 0 and α1 is continuous at 0. Then there exists some δ > 0 such that if ‖x‖ < δ, then ‖α1(x)‖ < c − ‖L1‖. Hence, if ‖x‖ < δ, then

‖r1(x)‖ ≤ (c − ‖L1‖)‖x‖.

Then ‖x‖ < δ implies

‖L1(x) + r1(x)‖ ≤ ‖L1(x)‖ + ‖r1(x)‖ ≤ ‖L1‖‖x‖ + (c − ‖L1‖)‖x‖ = c‖x‖.

Then x → L1(x) + r1(x) is stable at 0. Hence, by Theorem B.1, we get r3 ∈ o(X, Z). Define r : X → Z as follows:

r = L2 ∘ r1 + r3.

We have r ∈ o(X, Z) and

(g ∘ f)(x) = (g ∘ f)(x0) + (L2 ∘ L1)(x − x0) + r(x − x0), x ∈ U.
Since L1 ∈ L(X, Y), L2 ∈ L(Y, Z), we have L2 ∘ L1 ∈ L(X, Z). Therefore, g ∘ f is Fréchet differentiable at x0 and

D(g ∘ f)(x0) = L2 ∘ L1 = Dg(y0) ∘ Df(x0) = Dg(f(x0)) ∘ Df(x0).
This completes the proof.
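To illustrate Theorem B.4 in a finite-dimensional setting, consider for instance X = Y = ℝⁿ, Z = ℝ, f(x) = Ax + b with an n × n real matrix A and a vector b ∈ ℝⁿ, and g(y) = ‖y‖², where ‖ ⋅ ‖ denotes the Euclidean norm. Then Df(x0)(v) = Av, v ∈ ℝⁿ, and Dg(y0)(w) = 2⟨y0, w⟩, w ∈ ℝⁿ, so that

D(g ∘ f)(x0)(v) = Dg(f(x0))(Df(x0)(v)) = 2⟨Ax0 + b, Av⟩, v ∈ ℝⁿ,

which agrees with differentiating the map x → ‖Ax + b‖² directly.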
Theorem B.5. Let f1, f2 : U → ℝ be Fréchet differentiable at x0. Then f1 ⋅ f2 is Fréchet differentiable at x0 and

D(f1 ⋅ f2)(x0) = f2(x0)Df1(x0) + f1(x0)Df2(x0).

Proof. Let r1, r2 ∈ o(X, ℝ) be such that

f1(x) = f1(x0) + Df1(x0)(x − x0) + r1(x − x0),
f2 (x) = f2 (x0 ) + Df2 (x0 )(x − x0 ) + r2 (x − x0 ),
x ∈ U.
Hence, f1 (x)f2 (x) = (f1 (x0 ) + Df1 (x0 )(x − x0 ) + r1 (x − x0 ))
× (f2 (x0 ) + Df2 (x0 )(x − x0 ) + r2 (x − x0 ))
= f1 (x0 )f2 (x0 ) + f1 (x0 )Df2 (x0 )(x − x0 ) + f2 (x0 )Df1 (x0 )(x − x0 ) + f1 (x0 )r2 (x − x0 ) + Df1 (x0 )(x − x0 )Df2 (x0 )(x − x0 ) + Df1 (x0 )(x − x0 )r2 (x − x0 ) + r1 (x − x0 )f2 (x0 )
+ Df2(x0)(x − x0)r1(x − x0) + r1(x − x0)r2(x − x0), x ∈ U.

Let r : X → ℝ be defined as follows:

r(x) = f1(x0)r2(x) + Df1(x0)(x)Df2(x0)(x) + Df1(x0)(x)r2(x) + r1(x)f2(x0) + Df2(x0)(x)r1(x) + r1(x)r2(x), x ∈ U.
Then f1 (x)f2 (x) = f1 (x0 )f2 (x0 ) + f1 (x0 )Df2 (x0 )(x − x0 )
+ f2 (x0 )Df1 (x0 )(x − x0 ) + r(x − x0 ),
x ∈ U.
Note that

|Df1(x0)(x)Df2(x0)(x)| ≤ ‖Df1(x0)‖‖Df2(x0)‖‖x‖², x ∈ U.

Define α : X → ℝ as follows:

α(x) = Df1(x0)(x)Df2(x0)(x)/‖x‖ for x ≠ 0, and α(x) = 0 for x = 0.

Then

Df1(x0)(x)Df2(x0)(x) = α(x)‖x‖, x ∈ U,

and

|α(x)| = |Df1(x0)(x)Df2(x0)(x)|/‖x‖ ≤ ‖Df1(x0)‖‖Df2(x0)‖‖x‖²/‖x‖ = ‖Df1(x0)‖‖Df2(x0)‖‖x‖, x ∈ U, x ≠ 0.
Then α(x) → 0
as x → 0.
From here, r ∈ o(X, ℝ). This completes the proof.
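For instance, if X = ℝ², f1(x1, x2) = x1 and f2(x1, x2) = x2, then Df1(x0)(v) = v1 and Df2(x0)(v) = v2 for every x0 = (x01, x02) ∈ ℝ², and Theorem B.5 yields

D(f1 ⋅ f2)(x0)(v) = x02 v1 + x01 v2, v = (v1, v2) ∈ ℝ²,

which is the familiar derivative of the product x1x2.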
B.3 The Gâteaux derivative

Let X and Y be normed spaces and U be an open subset of X. Let also x0 ∈ U.

Definition B.5. Let f : U → Y. If there is some T ∈ L(X, Y) such that

lim_{t→0} (f(x0 + tv) − f(x0))/t = Tv

for any v ∈ X, we say that f is Gâteaux differentiable at x0. We write f′(x0) = T. If f is Gâteaux differentiable at any point of U, then we say that f is Gâteaux differentiable on U.

Example B.2. Let f : ℝ² → ℝ be defined as follows:

f(x1, x2) = x1⁴/(x1⁶ + x2³) for (x1, x2) ≠ (0, 0), and f(x1, x2) = 0 for (x1, x2) = (0, 0).
Let v = (v1, v2) ∈ ℝ², (v1, v2) ≠ (0, 0), be arbitrarily chosen. We have, for t ≠ 0,

f(0 + tv) = t⁴v1⁴/(t⁶v1⁶ + t³v2³) = tv1⁴/(t³v1⁶ + v2³),

and

lim_{t→0} (f(0 + tv) − f(0))/t = lim_{t→0} tv1⁴/(t(t³v1⁶ + v2³)) = lim_{t→0} v1⁴/(t³v1⁶ + v2³) = v1⁴/v2³.

Therefore,

f′(0, 0)(v1, v2) = v1⁴/v2³, (v1, v2) ∈ ℝ², (v1, v2) ≠ (0, 0).
This ends the example.

Theorem B.6. If f : U → Y is Fréchet differentiable at x0, then it is Gâteaux differentiable at x0.

Proof. Since f : U → Y is Fréchet differentiable at x0, there is some r ∈ o(X, Y) such that

f(x) = f(x0) + Df(x0)(x − x0) + r(x − x0),
x ∈ U,
and r(x) = α(x)‖x‖,
x ∈ X,
where α : X → Y, α(0) = 0, α is continuous at 0. Then, for v ∈ X and t ∈ ℝ, |t| small enough, we have

(f(x0 + tv) − f(x0))/t = (Df(x0)(tv) + r(tv))/t = (tDf(x0)(v) + |t|‖v‖α(tv))/t = Df(x0)(v) + sign(t)‖v‖α(tv) → Df(x0)(v) as t → 0.

This completes the proof.
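For instance, let X be a real inner product space with inner product ⟨⋅, ⋅⟩, Y = ℝ and f(x) = ⟨x, x⟩. Since f(x) − f(x0) − 2⟨x0, x − x0⟩ = ‖x − x0‖², f is Fréchet differentiable at any x0 ∈ X with Df(x0)(v) = 2⟨x0, v⟩, and indeed

lim_{t→0} (f(x0 + tv) − f(x0))/t = lim_{t→0} (2t⟨x0, v⟩ + t²‖v‖²)/t = 2⟨x0, v⟩ = Df(x0)(v), v ∈ X,

in agreement with Theorem B.6.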
C Pötzsche’s chain rule

C.1 Measure chains

Let 𝕋 be some set of real numbers.

Definition C.1. A triple (𝕋, ≤, ν) is called a measure chain provided it satisfies the following axioms:
(A1) The relation “≤” satisfies, for r, s, t ∈ 𝕋,
1. t ≤ t (reflexive),
2. if t ≤ r and r ≤ s, then t ≤ s (transitive),
3. if t ≤ r and r ≤ t, then t = r (antisymmetric),
4. either r ≤ s or s ≤ r (total).
(A2) Any nonvoid subset of 𝕋 which is bounded above has a least upper bound, i.e., the measure chain (𝕋, ≤) is conditionally complete.
(A3) The mapping ν : 𝕋 × 𝕋 → ℝ has the following properties, for r, s, t ∈ 𝕋:
1. ν(r, s) + ν(s, t) = ν(r, t) (cocycle property),
2. if r > s, then ν(r, s) > 0 (strong isotony),
3. ν is continuous (continuity).

Example C.1. Let 𝕋 be any nonvoid closed subset of real numbers, “≤” be the usual order relation between real numbers and

ν(r, s) = r − s, r, s ∈ 𝕋.
Definition C.2. The forward jump operator σ and the backward jump operator ρ are defined as follows:

σ(t) = inf{s ∈ 𝕋 : s > t}, ρ(t) = sup{s ∈ 𝕋 : s < t},

where σ(t) = t if t = max 𝕋 and ρ(t) = t if t = min 𝕋.
The graininess function is defined as follows: μ(t) = ν(σ(t), t),
t ∈ 𝕋.
The notions left-scattered, left-dense, right-scattered, right-dense, isolated and 𝕋κ are defined as in the case of time scales.
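For instance, for the measure chain of Example C.1 with 𝕋 = ℤ one has σ(t) = t + 1, ρ(t) = t − 1 and μ(t) = ν(t + 1, t) = 1 for all t ∈ ℤ, while for 𝕋 = ℝ one has σ(t) = ρ(t) = t and μ(t) = 0 for all t ∈ ℝ.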
Definition C.3. Let X be a Banach space with a norm ‖ ⋅ ‖. We say that f : 𝕋 → X is differentiable at t ∈ 𝕋 if there exists f^Δ(t) ∈ X such that for any ϵ > 0 there exists a neighborhood U of t such that

‖f(σ(t)) − f(s) − f^Δ(t)ν(σ(t), s)‖ ≤ ϵ|ν(σ(t), s)| for all s ∈ U.

In this case, f^Δ(t) is said to be the derivative of f at t.

Theorem C.1. We have

ν^Δ(⋅, t) = 1, t ∈ 𝕋.
Proof. Let t ∈ 𝕋. Let also ϵ > 0 be arbitrarily chosen and U be a neighborhood of t. Then

ν(σ(t), s) + ν(s, t) = ν(σ(t), t), s ∈ 𝕋,

and

|ν(σ(t), t) − ν(s, t) − ν(σ(t), s)| = |ν(σ(t), t) − ν(σ(t), t)| = 0 ≤ ϵ|ν(σ(t), s)|
for any s ∈ U. This completes the proof.

As in the case of time scales, one can prove the following assertion.

Theorem C.2. Let f, g : 𝕋 → X and t ∈ 𝕋.
1. If t ∈ 𝕋κ, then f has at most one derivative at t.
2. If f is differentiable at t, then f is continuous at t.
3. If f is continuous at t and t is right-scattered, then f is differentiable at t and

f^Δ(t) = (f(σ(t)) − f(t))/μ(t).

4. If f and g are differentiable at t ∈ 𝕋κ and α, β ∈ ℝ, then αf + βg is differentiable at t and

(αf + βg)^Δ(t) = αf^Δ(t) + βg^Δ(t).

5. If f and g are differentiable at t ∈ 𝕋κ and “⋅” is bilinear and continuous, then f ⋅ g is differentiable at t and

(f ⋅ g)^Δ(t) = f^Δ(t) ⋅ g(t) + f(σ(t)) ⋅ g^Δ(t).

6. If f and g are differentiable at t ∈ 𝕋κ and g is algebraically invertible, then f ⋅ g⁻¹ is differentiable at t with

(f ⋅ g⁻¹)^Δ(t) = (f^Δ(t) − (f ⋅ g⁻¹)(t) ⋅ g^Δ(t)) ⋅ g⁻¹(σ(t)).
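For instance, on the measure chain of Example C.1 with 𝕋 = ℤ we have σ(t) = t + 1, μ(t) = 1 and f^Δ(t) = f(t + 1) − f(t), and the product rule in item 5 reduces to the familiar identity

(f ⋅ g)(t + 1) − (f ⋅ g)(t) = (f(t + 1) − f(t)) ⋅ g(t) + f(t + 1) ⋅ (g(t + 1) − g(t)).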
C.2 Pötzsche’s chain rule

Throughout this section, we suppose that (𝕋, ≤, ν) is a measure chain with forward jump operator σ and graininess function μ. Assume that X and Y are Banach spaces and we will write ‖ ⋅ ‖ for the norms of X and Y. For a function f : 𝕋 × X → Y and x0 ∈ X, we denote the delta derivative of t → f(t, x0) by Δ1 f(⋅, x0), and for a t0 ∈ 𝕋 we denote the Fréchet derivative of x → f(t0, x) by D2 f(t0, ⋅), provided these derivatives exist.

Theorem C.3 (Pötzsche’s chain rule). For some fixed t0 ∈ 𝕋κ, let g : 𝕋 → X, f : 𝕋 × X → Y be functions such that g, f(⋅, g(t0)) are differentiable at t0, and let U ⊆ 𝕋 be a neighborhood of t0 such that f(t, ⋅) is differentiable for t ∈ U ∪ {σ(t0)}, D2 f(σ(t0), ⋅) is continuous on the line segment {g(t0) + hμ(t0)g^Δ(t0) ∈ X : h ∈ [0, 1]} and D2 f is continuous at (t0, g(t0)). Then the composition function F : 𝕋 → Y, F(t) = f(t, g(t)), is differentiable at t0 with derivative

F^Δ(t0) = Δ1 f(t0, g(t0)) + (∫₀¹ D2 f(σ(t0), g(t0) + hμ(t0)g^Δ(t0))dh)g^Δ(t0).
Proof. Let U0 ⊆ U be a neighborhood of t0 such that μ(t0 ) ≤ ν(t, σ(t0 )) for
t ∈ U0 .
Let Φ(t, h) = D2 f (t, g(t0 ) + h(g(t) − g(t0 ))),
t ∈ U0 ,
h ∈ [0, 1].
Note that there exists a constant C > 0 such that

‖Φ(σ(t0), h) − Φ(t0, h)‖ ≤ C ν(t, σ(t0))
for t ∈ U0 ,
h ∈ [0, 1].
Let ϵ > 0 be arbitrarily chosen. We choose ϵ1 > 0, ϵ2 > 0 small enough such that

ϵ1(1 + C + ∫₀¹ ‖Φ(σ(t0), h)‖dh) + ϵ2(ϵ1 + 2‖g^Δ(t0)‖) ≤ ϵ.
Since g and f(⋅, g(t0)) are differentiable at t0, there exists a neighborhood U1 ⊆ U0 of t0 such that

‖g(t) − g(t0)‖ ≤ ϵ1,
‖g(t) − g(σ(t0)) − ν(t, σ(t0))g^Δ(t0)‖ ≤ ϵ1 ν(t, σ(t0)),
‖f(t, g(t0)) − f(σ(t0), g(t0)) − ν(t, σ(t0))Δ1 f(t0, g(t0))‖ ≤ ϵ1 ν(t, σ(t0))

for t ∈ U1. Hence,

‖g(t) − g(t0)‖ = ‖g(t) − g(σ(t0)) − ν(t, σ(t0))g^Δ(t0) + g^Δ(t0)ν(t, σ(t0)) + g(σ(t0)) − g(t0)‖
≤ ‖g(t) − g(σ(t0)) − ν(t, σ(t0))g^Δ(t0)‖ + ‖g^Δ(t0)‖ν(t, σ(t0)) + ‖g(σ(t0)) − g(t0)‖
≤ ϵ1 ν(t, σ(t0)) + ‖g^Δ(t0)‖ν(t, σ(t0)) + ‖g^Δ(t0)‖μ(t0)
= (ϵ1 + ‖g^Δ(t0)‖)ν(t, σ(t0)) + ‖g^Δ(t0)‖μ(t0)
≤ (ϵ1 + 2‖g^Δ(t0)‖)ν(t, σ(t0)), t ∈ U1.

Since g is continuous at t0 and D2 f is continuous at (t0, g(t0)), there exists a neighborhood U2 ⊆ U of t0 so that

‖Φ(t, h) − Φ(t0, h)‖ ≤ ϵ2
for t ∈ U2 ,
h ∈ [0, 1].
Hence, 1 F(t) − F(σ(t0 )) − ν(t, σ(t0 ))(Δ1 f (t0 , g(t0 )) + ∫ Φ(σ(t0 ), h)dhg Δ (t0 )) 0 = f (t, g(t)) − f (σ(t0 ), g(σ(t0 ))) − f (σ(t0 ), g(t0 )) + f (σ(t0 ), g(t0 )) − f (t, g(t0 )) + f (t, g(t0 ))
− ν(t, σ(t0 ))Δ1 f (t0 , g(t0 )) 1
− ν(t, σ(t0 )) ∫ Φ(σ(t0 ), h)dhg Δ (t0 ) 1
0
− ∫ Φ(σ(t0 ), h)dh(g(t) − g(t0 )) 0
1
+ ∫ Φ(σ(t0 ), h)dh(g(t) − g(t0 )) 0
≤ f (t, g(t0 )) − f (σ(t0 ), g(t0 )) − ν(t, σ(t0 ))Δ1 f (t0 , g(t0 ))
1 + ∫ Φ(σ(t0 ), h)dh(g(t) − g(t0 ) − ν(t, σ(t0 ))g Δ (t0 )) 0 + f (t, g(t)) − f (t, g(t0 )) − (f (σ(t0 ), g(σ(t0 ))) − f (σ(t0 ), g(t0 ))) 1 − ∫ Φ(σ(t0 ), h)dh(g(t) − g(t0 )) 0 ≤ f (t, g(t0 )) − f (σ(t0 ), g(t0 )) − ν(t, σ(t0 ))Δ1 f (t0 , g(t0 )) 1 + ∫ Φ(σ(t0 ), h)dhg(t) − g(t0 ) − ν(t, σ(t0 ))g Δ (t0 ) 0 1 + ∫(Φ(t, h) − Φ(σ(t0 ), h))dh(g(t) − g(t0 )) 0 ≤ f (t, g(t0 )) − f (σ(t0 ), g(t0 )) − ν(t, σ(t0 ))Δ1 f (t0 , g(t0 )) 1 + ∫ Φ(σ(t0 ), h)dhg(t) − g(t0 ) − ν(t, σ(t0 ))g Δ (t0 ) 0 1 + ∫(Φ(t, h) − Φ(t0 , h))dhg(t) − g(t0 ) 0
1 + ∫(Φ(t0 , h) − Φ(σ(t0 ), h))dhg(t) − g(t0 ) 0
≤ ϵ1 ν(t, σ(t0)) + ϵ1 ν(t, σ(t0)) ∫₀¹ ‖Φ(σ(t0), h)‖dh + ϵ2(ϵ1 + 2‖g^Δ(t0)‖)ν(t, σ(t0)) + ϵ1 C ν(t, σ(t0))
= (ϵ1(1 + C + ∫₀¹ ‖Φ(σ(t0), h)‖dh) + ϵ2(ϵ1 + 2‖g^Δ(t0)‖))ν(t, σ(t0))
≤ ϵν(t, σ(t0)), t ∈ U1 ∩ U2.

This completes the proof.
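As a quick consistency check of Theorem C.3, consider the measure chain 𝕋 = ℤ with ν(r, s) = r − s, so that σ(t0) = t0 + 1, μ(t0) = 1, F^Δ(t0) = F(t0 + 1) − F(t0) and g^Δ(t0) = g(t0 + 1) − g(t0). In this case the formula of Theorem C.3 reads

F^Δ(t0) = f(t0 + 1, g(t0)) − f(t0, g(t0)) + ∫₀¹ D2 f(t0 + 1, g(t0) + hg^Δ(t0))(g^Δ(t0))dh,

and since the integrand is the derivative in h of h → f(t0 + 1, g(t0) + hg^Δ(t0)), the integral equals f(t0 + 1, g(t0 + 1)) − f(t0 + 1, g(t0)). Hence the right-hand side equals f(t0 + 1, g(t0 + 1)) − f(t0, g(t0)) = F(t0 + 1) − F(t0), as it should.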
C.3 A generalization of Pötzsche’s chain rule

Let g : 𝕋 × ℝⁿ → ℝ be a given function. Then for the function g(t, y1, . . . , yn) we denote by Δ1 g(⋅, y1, . . . , yn) its delta derivative.

Theorem C.4. For some fixed t0 ∈ 𝕋κ, let yj : 𝕋 → ℝ, j ∈ {1, . . . , n}, f : 𝕋 × ℝⁿ → ℝ be continuous functions such that f(⋅, y1(t0), . . . , yn(t0)) and yj, j ∈ {1, . . . , n}, are differentiable at t0. Let U ⊆ 𝕋 be a neighborhood of t0 such that:
1. f(t, ⋅, . . . , ⋅) is continuously differentiable for t ∈ U ∪ {σ(t0)},
2. Δ1 f(⋅, y1(⋅), . . . , yn(⋅)) is continuous at t0,
3. (∂/∂yj)f(σ(t0), y1(σ(t0)), . . . , yj−1(σ(t0)), ⋅, yj+1(t), . . . , yn(t)) is continuous in the line segment {yj(t) + h(yj(σ(t0)) − yj(t)) ∈ ℝ : h ∈ [0, 1]}, j ∈ {1, . . . , n}, ∀t ∈ U ∪ {t0},
4. (∂/∂yj)f, j ∈ {1, . . . , n}, is continuous at (t0, y1(t0), . . . , yn(t0)).
Then the composition function F : 𝕋 → ℝ, F(t) = f (t, y1 (t), y2 (t), . . . , yn (t)), is differentiable at t0 with derivative F Δ (t0 ) = Δ1 f (t0 , y1 (t0 ), y2 (t0 ), . . . , yn (t0 )) 1
+ (∫ 0
1
+ (∫ 0
+ ⋅⋅⋅
1
+ (∫ 0
𝜕 f (σ(t0 ), y1 (t0 ) + hμ(t0 )yΔ1 (t0 ), y2 (t0 ), . . . , yn (t0 ))dh)yΔ1 (t0 ) 𝜕y1 𝜕 f (σ(t0 ), y1 (σ(t0 )), y2 (t0 ) + hμ(t0 )yΔ2 (t0 ), . . . , yn (t0 ))dh)yΔ2 (t0 ) 𝜕y2 𝜕 f (σ(t0 ), y1 (σ(t0 )), y2 (σ(t0 )), . . . , yn−1 (σ(t0 )), 𝜕yn
yn (t0 ) + hμ(t0 )yΔn (t0 ))dh)yΔn (t0 ) = Δ1 f (t0 , y1 (t0 ), y2 (t0 ), . . . , yn (t0 )) 1
+ (∫ 0
1
+ (∫ 0
𝜕 f (σ(t0 ), (1 − h)y1 (t0 ) + hyσ1 (t0 ), y2 (t0 ), . . . , yn (t0 ))dh)yΔ1 (t0 ) 𝜕y1 𝜕 f (σ(t0 ), y1 (σ(t0 )), (1 − h)y2 (t0 ) + hyσ2 (t0 ), . . . , yn (t0 ))dh)yΔ2 (t0 ) 𝜕y2
+ ⋅⋅⋅
1
+ (∫ 0
𝜕 f (σ(t0 ), y1 (σ(t0 )), y2 (σ(t0 )), . . . , yn−1 (σ(t0 )), 𝜕yn
(1 − h)yn (t0 ) + hyσn (t0 ))dh)yΔn (t0 ). Proof. Let s ∈ (t0 − δ, t0 + δ) ∩ 𝕋, s ≠ σ(t0 ), for δ > 0 small enough, and s < σ(t0 ) if σ(t0 ) > t0 . Then F(σ(t0 )) − F(s)
= f (σ(t0 ), y1 (σ(t0 )), y2 (σ(t0 )), . . . , yn (σ(t0 ))) − f (s, y1 (s), y2 (s), . . . , yn (s)) = f (σ(t0 ), y1 (s), y2 (s), . . . , yn (s)) − f (s, y1 (s), y2 (s), . . . , yn (s))
+ f (σ(t0 ), y1 (σ(t0 )), y2 (s), . . . , yn (s)) − f (σ(t0 ), y1 (s), y2 (s), . . . , yn (s))
+ f (σ(t0 ), y1 (σ(t0 )), y2 (σ(t0 )), . . . , yn (s)) − f (σ(t0 ), y1 (σ(t0 )), y2 (s), . . . , yn (s)) + ⋅⋅⋅
+ f (σ(t0 ), y1 (σ(t0 ), y2 (σ(t0 )), . . . , yn (σ(t0 ))) − f (σ(t0 ), y1 (σ(t0 )), y2 (σ(t0 )), . . . , yn (s)) Then we have F(σ(t0 )) − F(s)
= f (σ(t0 ), y1 (s), y2 (s), . . . , yn (s)) − f (s, y1 (s), y2 (s), . . . , yn (s)) 1
+ (∫ 0
𝜕 f (σ(t0 ), y1 (s) + h(y1 (σ(t0 )) − y1 (s)), y2 (s), . . . , yn (s))dh) 𝜕y1
× (y1 (σ(t0 )) − y1 (s)) 1
+ (∫ 0
𝜕 f (σ(t0 ), y1 (σ(t0 )), y2 (s) + h(y2 (σ(t0 )) − y2 (s)), . . . , yn (s))dh) 𝜕y2
× (y2 (σ(t0 )) − y2 (s)) + ⋅⋅⋅ 1
+ (∫ 0
𝜕 f (σ(t0 ), y1 (σ(t0 )), y2 (σ(t0 )), . . . , yn (s) + h(yn (σ(t0 )) − yn (s)))dh) 𝜕yn
× (yn (σ(t0 )) − yn (s)). If σ(t0 ) > t0 , by the mean value theorem there exist ξ1 , ξ2 ∈ [s, σ(t0 )) = [s, t0 ] so that Δ1 f (ξ1 , y1 (s), y2 (s), . . . , yn (s))(σ(t0 ) − s)
≤ f (σ(t0 ), y1 (s), y2 (s), . . . , yn (s)) − f (s, y1 (s), y2 (s), . . . , yn (s))
452 � C Pötzsche’s chain rule ≤ Δ1 f (ξ2 , y1 (s), y2 (s), . . . , yn (s))(σ(t0 ) − s) and Δ1 f (t0 , y1 (t0 ), y2 (t0 ), . . . , yn (t0 )) = lim Δ1 f (ξ1 , y1 (s), y2 (s), . . . , yn (s)) s→t0
≤ lim
s→t0
1 (f (σ(t0 ), y1 (s), y2 (s), . . . , yn (s)) − f (s, y1 (s), y2 (s), . . . , yn (s))) σ(t0 ) − s
≤ lim Δ1 f (ξ2 , y1 (s), y2 (s), . . . , yn (s)) s→t0
= Δ1 f (t0 , y1 (t0 ), y2 (t0 ), . . . , yn (t0 )). If σ(t0 ) = t0 , by the mean value theorem, there exist ξ1 , ξ2 between s and t0 so that Δ1 f (ξ1 , y1 (s), y2 (s), . . . , yn (s))(t0 − s) ≤ f (t0 , y1 (s), y2 (s), . . . , yn (s)) − f (s, y1 (s), y2 (s), . . . , yn (s)) ≤ Δ1 f (ξ2 , y1 (s), y2 (s), . . . , yn (s))(t0 − s). In this case, if s < t0 we have Δ1 f (t0 , y1 (t0 ), y2 (t0 ), . . . , yn (t0 )) = lim Δ1 f (ξ1 , y1 (s), y2 (s), . . . , yn (s)) s→t0 −
≤ lim
s→t0 − t0
1 (f (σ(t0 ), y1 (s), y2 (s), . . . , yn (s)) − f (s, y1 (s), y2 (s), . . . , yn (s))) −s
≤ lim Δ1 f (ξ2 , y1 (s), y2 (s), . . . , yn (s)) s→t0 −
= Δ1 f (t0 , y1 (t0 ), y2 (t0 ), . . . , yn (t0 )), and if s > t0 we have Δ1 f (t0 , y1 (t0 ), y2 (t0 ), . . . , yn (t0 )) = lim Δ1 f (ξ1 , y1 (s), y2 (s), . . . , yn (s)) s→t0 +
≥ lim
s→t0 + t0
1 (f (σ(t0 ), y1 (s), y2 (s), . . . , yn (s)) − f (s, y1 (s), y2 (s), . . . , yn (s))) −s
≥ lim Δ1 f (ξ2 , y1 (s), y2 (s), . . . , yn (s)) s→t0 +
= Δ1 f (t0 , y1 (t0 ), y2 (t0 ), . . . , yn (t0 )). Moreover,
lim ((∫
s→t0
0
𝜕 f (σ(t0 ), y1 (σ(t0 )), . . . , yj−1 (σ(t0 )), yj (s) + h(yj (σ(t0 )) − yj (s)), 𝜕yj
yj+1 (s), . . . , yn (s))dh) 1
= lim (∫ s→t0
0
yj (σ(t0 )) − yj (t0 ) σ(t0 ) − s
yj (σ(t0 )) − yj (t0 )
s→t0
1 0
)
𝜕 f (σ(t0 ), y1 (σ(t0 )), . . . , yj−1 (σ(t0 )), yj (s) + h(yj (σ(t0 )) − yj (s)), 𝜕yj
yj+1 (s), . . . , yn (s))dh) lim = (∫
σ(t0 ) − s
𝜕 f (σ(t0 ), y1 (σ(t0 )), . . . , yj−1 (σ(t0 )), yj (t0 ) + h(yj (σ(t0 )) − yj (t0 )), 𝜕yj
yj+1 (t0 ), . . . , yn (t0 ))dh)yΔj (t0 ) 1
= (∫ 0
𝜕 f (σ(t0 ), y1 (σ(t0 )), . . . , yj−1 (σ(t0 )), yj (t0 ) + hμ(t0 )yΔj (t0 ), 𝜕yj
yj+1 (t0 ), . . . , yn (t0 ))dh)yΔj (t0 ),
j ∈ {1, . . . , n}.
Therefore, F(σ(t0 )) − F(s) σ(t0 ) − s f (σ(t0 ), y1 (s), y2 (s), . . . , yn (s)) − f (s, y1 (s), y2 (s), . . . , yn (s)) = lim s→t0 σ(t0 ) − s
lim
s→t0
1
+ lim ((∫ s→t0
×
0
y1 (σ(t0 )) − y1 (s) ) σ(t0 ) − s 1
+ lim ((∫ s→t0
×
𝜕 f (σ(t0 ), y1 (s) + h(y1 (σ(t0 )) − y1 (s)), y2 (s), . . . , yn (s))dh) 𝜕y1
0
𝜕 f (σ(t0 ), y1 (σ(t0 )), y2 (s) + h(y2 (σ(t0 )) − y2 (s)), . . . , yn (s))dh) 𝜕y2
y2 (σ(t0 )) − y2 (s) ) σ(t0 ) − s
+ ⋅⋅⋅
+ lim ((∫ s→t0
×
0
𝜕 f (σ(t0 ), y1 (σ(t0 )), y2 (σ(t0 )), . . . , yn (s) + h(yn (σ(t0 )) − yn (s)))dh) 𝜕yn
yn (σ(t0 )) − yn (s) ) σ(t0 ) − s
= Δ1 f (t0 , y1 (t0 ), y2 (t0 ), . . . , yn (t0 )) 1
+ (∫ 0
1
+ (∫ 0
𝜕 f (σ(t0 ), y1 (t0 ) + hμ(t0 )yΔ1 (t0 ), y2 (t0 ), . . . , yn (t0 ))dh)yΔ1 (t0 ) 𝜕y1 𝜕 f (σ(t0 ), y1 (σ(t0 )), y2 (t0 ) + hμ(t0 )yΔ2 (t0 ), . . . , yn (t0 ))dh)yΔ2 (t0 ) 𝜕y2
+ ⋅⋅⋅ 1
+ (∫ 0
𝜕 f (σ(t0 ), y1 (σ(t0 )), y2 (σ(t0 )), . . . , yn−1 (σ(t0 )), yn (t0 ) + hμ(t0 )yΔn (t0 ))dh) 𝜕yn
× yΔn (t0 ). This completes the proof. As above, one can prove the following generalization of the Pötzsche chain rule. Theorem C.5. For some fixed t0 ∈ 𝕋κ , let yj : 𝕋 → ℝ, j ∈ {1, . . . , n}, f : 𝕋×ℝn → ℝ be continuous functions such that f (⋅, y1 (t0 ), . . . , yn (t0 )), and yj , j ∈ {1, . . . , n}, are differentiable at t0 . Let U ⊆ 𝕋 be a neighborhood of t0 such that: 1. f (t, ⋅, . . . , ⋅) is continuously differentiable for t ∈ U ∪ {σ(t0 )}, 2. Δ1 f (⋅, yσ1 (⋅), . . . , yσn (⋅)) is continuous at t0 , 3. 𝜕 f (t , y (σ(t0 )), . . . , yj−1 (σ(t0 )), ⋅, yj+1 (t), . . . , yn (t)) 𝜕yj 0 1 is continuous in the line segment {yj (t) + h(yj (σ(t0 )) − yj (t)) ∈ ℝ : h ∈ [0, 1]}, 4.
𝜕 f 𝜕yj
j ∈ {1, . . . , n},
∀t ∈ U ∪ {t0 },
is continuous at (t0 , y1 (t0 ), . . . , yn (t0 )).
Then the composition function F : 𝕋 → ℝ, F(t) = f(t, y1(t), y2(t), . . . , yn(t)), is differentiable at t0 with derivative

F^Δ(t0) = Δ1 f(t0, y1(σ(t0)), y2(σ(t0)), . . . , yn(σ(t0)))
+ (∫₀¹ (∂f/∂y1)(t0, y1(t0) + hμ(t0)y1^Δ(t0), y2(t0), . . . , yn(t0))dh)y1^Δ(t0)
+ (∫₀¹ (∂f/∂y2)(t0, y1(σ(t0)), y2(t0) + hμ(t0)y2^Δ(t0), . . . , yn(t0))dh)y2^Δ(t0)
+ ⋅⋅⋅
+ (∫₀¹ (∂f/∂yn)(t0, y1(σ(t0)), y2(σ(t0)), . . . , yn−1(σ(t0)), yn(t0) + hμ(t0)yn^Δ(t0))dh)yn^Δ(t0)
= Δ1 f(t0, y1(σ(t0)), y2(σ(t0)), . . . , yn(σ(t0)))
+ (∫₀¹ (∂f/∂y1)(t0, (1 − h)y1(t0) + hy1^σ(t0), y2(t0), . . . , yn(t0))dh)y1^Δ(t0)
+ (∫₀¹ (∂f/∂y2)(t0, y1(σ(t0)), (1 − h)y2(t0) + hy2^σ(t0), . . . , yn(t0))dh)y2^Δ(t0)
+ ⋅⋅⋅
+ (∫₀¹ (∂f/∂yn)(t0, y1(σ(t0)), y2(σ(t0)), . . . , yn−1(σ(t0)), (1 − h)yn(t0) + hyn^σ(t0))dh)yn^Δ(t0).
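Note that for n = 1 the formula of Theorem C.4 reduces to

F^Δ(t0) = Δ1 f(t0, y1(t0)) + (∫₀¹ (∂f/∂y1)(σ(t0), y1(t0) + hμ(t0)y1^Δ(t0))dh)y1^Δ(t0),

which is Theorem C.3 specialized to X = Y = ℝ.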
Index {1, 2}-Inverse 426 (1, σ)-Algebraically Nice at Level 0 Third Kind Linear Time-Varying Dynamic-Algebraic Equation 318 (1, σ)-Properly Stated Third Kind Linear Time-Varying Dynamic-Algebraic Equation 318 (1, σ)-Regular Third Kind Linear Time-Varying Dynamic-Algebraic Equation 319 (1, σ)-Regular with Tractability Index 0 Third Kind Linear Time-Varying Dynamic-Algebraic Equation 319 (1, σ)-Regular with Tractability Index ν Third Kind Linear Time-Varying Dynamic-Algebraic Equation 319 (σ, 1)-Algebraically Nice at Level 0 First Kind Linear Time-Varying Dynamic-Algebraic Equation 192 (σ, 1)-Algebraically Nice at Level k First Kind Linear Time-Varying Dynamic-Algebraic Equation 192 (σ, 1)-Leading Term 192 (σ, 1)-Nice at Level k First Kind Linear Time-Varying Dynamic-Algebraic Equation 192 (σ, 1)-Properly Stated First Kind Linear Time-Varying Dynamic-Algebraic Equation 192 (σ, 1)-Regular Second Kind Linear Time-Varying Dynamic-Algebraic Equation 266 (σ, 1)-Regular with Tractability Index 0 First Kind Linear Time-Varying Dynamic-Algebraic Equation 192 (σ, 1)-Regular with Tractability Index ν First Kind Linear Time-Varying Dynamic-Algebraic Equation 193 σ-Generalized Eigenvalue 48 σ-Generalized Eigenvector 48 σ-Matrix Pencil 48 σ-Nonregular Matrix Pair 58 σ-Nonsingular Matrix Pencil 58 σ-Regular Matrix Pair 58, 149 σ-Regular Matrix Pair with Tractability Index ν 149 σ-Regular Matrix Pair with Tractability Index Zero 148 σ-Regular Matrix Pencil 58
Algebraically Nice at Level 0 Fourth Kind Linear Time-Varying Dynamic-Algebraic Equation 351 Algebraically Nice at Level 0 Second Kind Linear Time-Varying Dynamic-Algebraic Equation 265 Algebraically Nice at Level 0 Third Kind Linear Time-Varying Dynamic-Algebraic Equation 318 Algebraically Nice at Level k Fourth Kind Linear Time-Varying Dynamic-Algebraic Equation 351 Algebraically Nice at Level k Second Kind Linear Time-Varying Dynamic-Algebraic Equation 265 Algebraically Nice at Level k Third Kind Linear Time-Varying Dynamic-Algebraic Equation 318
Admissible Matrices 74 Admissible Projector 153 Admissible Projector Sequence 120 Admissible Projector Sequence up to Level k 153 Admissible Projectors 74
Gâteaux Derivative 442 Generalized Eigenvalue 55 Generalized Eigenvector 55 Generalized Reflexive Inverse 426 Graininess Function 445
Border Projector 416 C 1 (U, Y ) 436 C 2 (U, Y ) 436 Characteristic Values of Nice at Level k First Kind Linear Time-Varying Dynamic-Algebraic Equation 192 Characteristic Values of Nice at Level k Fourth Kind Linear Time-Varying Dynamic-Algebraic Equation 351 Characteristic Values of Nice at Level k Second Kind Linear Time-Varying Dynamic-Algebraic Equation 265 Characteristic Values of Nice at Level k Third Kind Linear Time-Varying Dynamic-Algebraic Equation 319 Competition Model 2 Completely Decoupling Projectors 110 Consistent Initial Values 403 Differentiable Function 399 Equivalent Pairs of Matrices 423 Fiber 398 Fréchet Derivative 435
Inherent Equation 224, 286, 330, 370 Inherent Equation for Third Kind Linear Time-Varying Dynamic Algebraic Equation 343 Jump Operator 445 (k, m)-Jet of a Function of n Independent Real Variables and One Independent Time Scale Variable 395 k-Jet of a Function of One Independent Time Scale Variable 393 Kronecker Canonical Form 424, 428 Kronecker Canonical Index 428 Kronecker Index of a Dynamic-Algebraic Equation 59 Kronecker Index of a Matrix Pair 59 Leading Term 265, 318, 351 Linearization 411 Liouville’s Formula 31 Logistic Equation 1 Matrix Exponential Function 25 Matrix Pencil 55, 428 Measure Chain 445 Moore–Penrose Inverse 427 Nice at Level k Fourth Kind Linear Time-Varying Dynamic-Algebraic Equation 351 Nice at Level k Second Kind Linear Time-Varying Dynamic-Algebraic Equation 265 Nice at Level k Third Kind Linear Time-Varying Dynamic-Algebraic Equation 319 Nonregular Matrix Pair 59 Nonsingular Matrix Pencil 59 Orthogonal Projector 424 Orthoprojector 424 pσ (⋅) 48 Preadmissible Projector Sequence 120 Preadmissible Projector Sequence up to Level k 153 Projector 424 Projector Along 424 Projector Onto 424 Properly Involved Derivative 402 Properly Stated Fourth Kind Linear Time-Varying Dynamic-Algebraic Equation 351 Properly Stated Leading Term 402
Properly Stated Matrix Pair 117 Properly Stated Second Kind Linear Time-Varying Dynamic-Algebraic Equation 265 Properly Stated Third Kind Linear Time-Varying Dynamic-Algebraic Equation 318 R 20 R(𝕋, ℝn×n ) 20 rd-continuous Matrix 20 Regressive Dynamic Equation 9 Regressive Equation 8 Regressive Matrix 20 Regular Admissible Matrices 74 Regular Admissible Projectors 74 Regular First Kind Linear Time-Varying Dynamic-Algebraic Equation 193 Regular Fourth Kind Linear Time-Varying Dynamic-Algebraic Equation 352 Regular Matrix Pair 59, 149 Regular Matrix Pair with Tractability Index ν 149 Regular Matrix Pair with Tractability Index Zero 148 Regular Matrix Pencil 59, 428 Regular Second Kind Linear Time-Varying Dynamic-Algebraic Equation 266 Regular Third Kind Linear Time-Varying Dynamic-Algebraic Equation 319 Regular with Tractability Index 0 Fourth Kind Linear Time-Varying Dynamic-Algebraic Equation 351 Regular with Tractability Index 0 Second Kind Linear Time-Varying Dynamic-Algebraic Equation 265 Regular with Tractability Index 0 Third Kind Linear Time-Varying Dynamic-Algebraic Equation 319 Regular with Tractability Index ν Fourth Kind Linear Time-Varying Dynamic-Algebraic Equation 352 Regular with Tractability Index ν Second Kind Linear Time-Varying Dynamic-Algebraic Equation 266 Regular with Tractability Index ν Third Kind Linear Time-Varying Dynamic-Algebraic Equation 319 Remainder 433 R(𝕋) 20 Singular Matrix Pencil 428 Spectrum of a Regular Matrix Pencil 428 Structural Characteristic Values 92
Taylor Formula of order (k, k) for a Function of n Independent Real Variables and One Independent Time Scale Variable 393 Taylor Formula of order (k, m) for a Function of n Independent Real Variables and One Independent Time Scale Variable 393 Taylor’s Formula 384, 386 The Putzer Algorithm 40
Transversality Condition 117 Vertical Space 398 Weierstrass–Kronecker Canonical Form 428 Weierstrass–Kronecker Canonical Index 428 Weierstrass–Kronecker Form of a Regular Matrix Pair 59 Widely Orthogonal Admissible Projector 79