Completeness Theorems and Characteristic Matrix Functions: Applications to Integral and Differential Operators 3031045076, 9783031045073

This monograph presents necessary and sufficient conditions for completeness of the linear span of eigenvectors and generalised eigenvectors.


English Pages 357 [358] Year 2022


Table of contents :
Preface
Review of Contents
Acknowledgements
Contents
List of Symbols
1 Preliminaries
1.1 Basic Elements of Operator Theory and Definition of Completeness
1.1.1 Elements of Banach Space Operator Theory
1.1.2 Definition of Completeness
1.1.3 Examples Illustrating Proposition 1.1.1 and Schmidt Representations of Compact Operators
1.2 Spectral Preliminaries I
1.3 Compact Hilbert Space Operator of Finite Order
1.3.1 The Operator Tg Revisited
2 Completeness Theorems for Compact Hilbert Space Operators
2.1 First Hilbert Space Completeness Theorem
2.2 Two Additional Completeness Theorems
2.3 A First Application of Theorem 2.2.2
2.3.1 Three Special Cases
2.4 Classical Completeness Theorems Revisited
2.5 The Dense Range Property
3 Compact Hilbert Space Operators of Order One
3.1 Some Remarks About Trace Class Operators
3.2 Preliminaries About Hilbert-Schmidt Operators
3.3 Resolvent Estimates for Compact Operators of Order One
3.4 A Completeness Theorem
3.5 Supplementary Remarks
4 Completeness for a Class of Banach Space Operators
4.1 A Special Class of Operators
4.2 Spectral Preliminaries II
4.3 Theorem 4.1.3 Reduced to the Case When z0 Is Zero
4.4 Proof of Theorem 4.1.3
4.5 An Additional Example
4.6 Some Additional Remarks
4.7 Theorem 3.4.1 Revisited
5 Characteristic Matrix Functions for a Class of Operators
5.1 Equivalence and Jordan Chains
5.1.1 Entire Matrix Functions
5.2 The Characteristic Matrix Function
6 Finite Rank Perturbations of Volterra Operators
6.1 The Characteristic Matrix Function
6.2 A Completeness Theorem
6.3 The Volterra Operator Replaced by a Quasi-Nilpotent Operator
6.4 Examples of Non-compact Quasi-Nilpotent Operators
7 Finite Rank Perturbations of Operators of Integration
7.1 Preliminaries
7.2 Rank One Perturbations of the Operator of Integration on C[0,1], Part 1
7.3 Rank One Perturbations of the Operator of Integration on C[0,1], Part 2
7.4 Rank One Perturbations of the Operator of Integration on L2[0,1]
8 Discrete Case: Infinite Leslie Operators
8.1 Definition of a Leslie Operator
8.2 Associated Boundary Value Systems
8.3 The Characteristic Function and Related Properties
8.4 Completeness for a Concrete Class of Leslie Operators
8.5 A Generalised Leslie Operator
9 Semi-Separable Operators and Completeness
9.1 Discrete Semi-Separable Operators
9.1.1 A Completeness Theorem (A Scalar Case)
9.2 Integral Operators with Semi-Separable Kernels
9.2.1 A Completeness Result for Semi-Separable Integral Operators
9.3 Intermezzo: Fundamental Solutions of ODE and Volterra Operators
9.3.1 A Related Volterra Operator
10 Periodic Delay Equations
10.1 Time Dependent Delay Equations
10.2 A Family of Time Dependent Delay Equations
10.3 A Two-Parameter Family of Solution Operators
10.4 Solution Operators for Periodic Delay Equations
11 Completeness Theorems for Period Maps
11.1 The Period Map and Its Generalisations
11.2 Spectral Properties of the Period Map
11.3 Completeness of the Period Map in Case the Period Is Equal to the Delay
11.4 Scalar Periodic Delay Equations and Completeness (One Periodic)
11.5 Scalar Periodic Delay Equations and Completeness (Two Periodic)
12 Completeness for Perturbations of Unbounded Operators
12.1 The Associated Characteristic Matrix Function
12.2 A Completeness Theorem for a Class of Unbounded Operators
13 Applications to Dynamical Systems
13.1 Mixed Type Functional Differential Equations
13.1.1 Three Examples
13.2 Age-Dependent Population Dynamics
13.3 The Zig-Zag Semigroup
14 Results from the Theory of Entire Functions
14.1 Basic Definitions
14.2 Applications of the Phragmén-Lindelöf Theorem
14.3 Applications of the Paley-Wiener Theorem
14.4 The Phragmén-Lindelöf Indicator Function
14.5 Properties of the Indicator Function
14.6 Entire Functions of Completely Regular Growth
14.7 The Dominating Property
14.8 Distribution of Zeros of Entire Functions and Related Properties
14.8.1 Distribution of Zeros and Completely Regular Growth
14.8.2 Genus, Convergence Exponent, and Order of Entire Functions
14.9 Vector-Valued and Operator-Valued Entire Functions
Epilogue
Bibliography
Subject Index

Operator Theory: Advances and Applications 288

Marinus A. Kaashoek Sjoerd M. Verduyn Lunel

Completeness Theorems and Characteristic Matrix Functions Applications to Integral and Differential Operators

Operator Theory: Advances and Applications Volume 288 Founded in 1979 by Israel Gohberg Editors: Joseph A. Ball (Blacksburg, VA, USA) Albrecht Böttcher (Chemnitz, Germany) Harry Dym (Rehovot, Israel) Heinz Langer (Wien, Austria) Christiane Tretter (Bern, Switzerland) Associate Editors: Vadim Adamyan (Odessa, Ukraine) Wolfgang Arendt (Ulm, Germany) B. Malcolm Brown (Cardiff, UK) Raul Curto (Iowa, IA, USA) Kenneth R. Davidson (Waterloo, ON, Canada) Fritz Gesztesy (Waco, TX, USA) Pavel Kurasov (Stockholm, Sweden) Vern Paulsen (Houston, TX, USA) Mihai Putinar (Santa Barbara, CA, USA) Ilya Spitkovsky (Abu Dhabi, UAE)

Honorary and Advisory Editorial Board: Lewis A. Coburn (Buffalo, NY, USA) J.William Helton (San Diego, CA, USA) Marinus A. Kaashoek (Amsterdam, NL) Thomas Kailath (Stanford, CA, USA) Peter Lancaster (Calgary, Canada) Peter D. Lax (New York, NY, USA) Bernd Silbermann (Chemnitz, Germany)

Subseries Linear Operators and Linear Systems Subseries editors: Daniel Alpay (Orange, CA, USA) Birgit Jacob (Wuppertal, Germany) André C.M. Ran (Amsterdam, The Netherlands)

Subseries Advances in Partial Differential Equations Subseries editors: Bert-Wolfgang Schulze (Potsdam, Germany) Jerome A. Goldstein (Memphis, TN, USA) Nobuyuki Tose (Yokohama, Japan) Ingo Witt (Göttingen, Germany)

Marinus A. Kaashoek • Sjoerd M. Verduyn Lunel

Completeness Theorems and Characteristic Matrix Functions Applications to Integral and Differential Operators

Marinus A. Kaashoek Department of Mathematics Vrije Universiteit Amsterdam Amsterdam, The Netherlands

Sjoerd M. Verduyn Lunel Mathematical Institute Utrecht University Utrecht, The Netherlands

ISSN 0255-0156 ISSN 2296-4878 (electronic) Operator Theory: Advances and Applications ISBN 978-3-031-04507-3 ISBN 978-3-031-04508-0 (eBook) https://doi.org/10.1007/978-3-031-04508-0 Mathematics Subject Classification: 30D15, 34K06, 34K40, 34L10, 39A06, 45C05, 47Axx, 47Bxx, 47Dxx, 47Exx © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Dedicated to the memory of Israel Gohberg, a wonderful mathematician who inspired us to write this monograph

Preface

In the theory of nonselfadjoint operators, completeness of the linear span of eigenvectors and generalised eigenvectors is an important topic with a long and interesting history (see [28, 32, 62, 65, 66] and also [51, 53]). In the Hilbert space setting the results often concern operators that are close to being selfadjoint. Classical results usually present sufficient conditions for completeness; the problem of giving necessary and sufficient conditions for completeness is more difficult to analyse.

In our study of completeness or noncompleteness for a given operator T acting on a Banach space X, an important role is played by the T-invariant linear subspace

ST := {x ∈ X | z → (I − zT)−1 x is an entire function},

which appears in a natural way in many examples presented in this book.

In this monograph, we extend the notion of a characteristic matrix function from [48] and present necessary and sufficient conditions for completeness of the linear span of eigenvectors and generalised eigenvectors for operators that admit a characteristic matrix function in a Banach space setting (see, e.g., Theorems 5.2.6 and 6.2.1). The results are based on sharp resolvent estimates near infinity using the theory of entire functions of completely regular growth. In the Hilbert space setting, our completeness results extend Keldysh type results to operators that are in some sense not close to selfadjoint. The examples presented in Chaps. 7, 8 and 9 yield many new completeness and noncompleteness results for operators that are a finite rank perturbation of a quasi-nilpotent operator.

An important application of our work is to completeness problems for period maps associated with linear periodic delay differential equations. The period maps turn out to be bounded linear Banach space operators that are finite rank perturbations of Volterra operators. See the results in Chap. 11, in particular, Theorems 11.4.4 and 11.5.3. Other applications include completeness problems for infinitesimal generators of semigroups of operators that arise in the study of autonomous delay equations, age-dependent population models and probability theory. See Chap. 13.


A large part of this book can be used as material for graduate courses or seminars; the different classes of equations that are analysed in detail in Chaps. 7–13 are a good source of examples.

Review of Contents

In this section, we review the contents of this book, chapter by chapter. The first chapter has an introductory character. The notion of completeness is defined and some classical completeness theorems are reviewed, in a Banach space setting as well as for Hilbert space operators.

In Chap. 2, we review a general classical condition for completeness based on the Phragmén-Lindelöf theorem from the theory of entire functions. We then continue, using delicate properties of the Phragmén-Lindelöf indicator function of an entire function, to refine the result, which we formulate in Theorem 2.2.2. In this theorem, we bring together a number of classical results on completeness. As corollaries, we recover some classical and new completeness results for trace class and Hilbert-Schmidt operators. The proofs of the main results in this chapter are presented in such a way that they allow generalisation to the Banach space setting. Section 2.3 serves as a motivation for the type of operators that are analysed in this book.

In Chap. 3, we collect and further develop some basic properties of compact Hilbert space operators of order one, and in Theorem 3.4.1, we formulate a new completeness result for this class of operators. These results are presented in such a way that they motivate the main results in Chaps. 4 and 5 regarding necessary and sufficient conditions for completeness. Compare, for example, Theorems 3.4.1 and 4.7.1.

In Chap. 4, we present in Theorem 4.1.3 a characterisation of the closure of the generalised eigenspace for classes of bounded Banach space operators T, not necessarily compact, for which the resolvent admits a certain representation. In fact, the resolvent of the operator T is of the form

(I − zT)−1 = (1/q(z)) P(z)   for z ∈ C such that q(z) ≠ 0,   (∗)

where q is a scalar entire function not zero in a neighbourhood of zero and P is an operator-valued entire function.

In Chap. 5, we extend the notion of a characteristic matrix function from [48] to bounded operators and present necessary and sufficient conditions for completeness for operators that admit a characteristic matrix function, using Theorem 4.1.3 of the preceding chapter. The results in Chaps. 4 and 5 complete the general theory, and with the main theorem of Chap. 5, Theorem 5.2.6, all tools are developed to solve completeness problems in concrete cases, in particular those appearing in the next eight chapters, not including Chap. 10.

In Chap. 6 we show that a finite rank perturbation of a Volterra operator V admits a characteristic matrix function. Moreover, the resolvent is of the form described

under (∗) above, which allows us to prove a sharp completeness result for this class of operators. In the final section of this chapter, it is shown that the completeness result remains true if the Volterra operator is replaced by a quasi-nilpotent operator.

Special classes of operators which are finite rank perturbations of Volterra operators as considered in Chap. 6 are the main object of study in each of the Chaps. 7, 8, 9 and 11. In Chap. 7, the Volterra operator is the operator of integration on the Banach space C[0, 1] or on the Hilbert space L2[0, 1]. In Chap. 8, the operator T is again a finite rank perturbation of a Volterra operator, see (8.12), but now it is an operator on ℓ2+(C) associated with an infinite Leslie model, where it can happen that there is no completeness for T while there is completeness for T∗. This result provides a new example of the phenomenon appearing in a classical example constructed in [43]. In this eighth chapter, we also consider a generalised Leslie operator which is a finite rank perturbation of a non-compact quasi-nilpotent operator, that is, the role of Volterra operators is taken over by quasi-nilpotent operators, as in Sect. 6.3.

In Chap. 9, the operator T is a semi-separable operator on ℓ2+(Cn) or L2([0, 1]; Cn). In the first case the operator T is discrete, but quite different from the Leslie operators considered in Chap. 8. In the second case T is a semi-separable integral operator and results of Chaps. 4, 5 and 6 are developed further.

Chapter 10 has a preliminary character, preparing for Chap. 11. In Chap. 11 the operator T is a period map associated with a periodic delay differential equation; we first prove that the period map is a finite rank perturbation of a Volterra operator. In this chapter we show the importance of completeness in the study of the qualitative behaviour of solutions of linear periodic differential equations. Elementary solutions that start at an eigenvector or generalised eigenvector of the period map can be represented by an exponential function times a polynomially bounded function. Completeness of the eigenvectors and generalised eigenvectors of the period map implies that on bounded intervals solutions of the periodic delay equation can be approximated by a sequence of elementary solutions. Furthermore, solutions starting at a vector in the complement of the closure of the linear span of eigenvectors and generalised eigenvectors cannot be approximated by elementary solutions and, in general, give rise to solutions of the differential equation that decay faster to zero than any exponential. See, in particular, Sect. 11.3.

In Chap. 12, we present a general class of unbounded operators that admit a characteristic matrix function, and we apply the results from Chap. 4 to prove sharp completeness theorems. In Chap. 13, the results of Chap. 12 are further specified in the context of three classes of unbounded operators that arise in the study of infinite dimensional dynamical systems. The first class concerns dichotomous operators associated with mixed type neutral functional differential equations, the second class involves age-dependent population models, and the third class is based on recent work [11] concerning the so-called Zig-Zag process. The completeness theorem of the first class is illustrated with three elementary examples of differential delay equations.

In Chap. 14, the final chapter, we discuss a number of properties of entire functions of completely regular growth that are used throughout in the proofs of

the completeness theorems. In particular, the connection between the distribution of zeros and the growth properties of an entire function has some new aspects and plays an important role. Special attention is given to entire functions of the form

f(z) = p(z) + q(z) ∫_{−a}^{a} e^{−zt} ϕ(t) dt,   z ∈ C,

where p and q ≠ 0 are polynomials, 0 < a < ∞ and ϕ is a non-zero square integrable function on the interval [−a, a]. To a certain extent, this final chapter has the character of an appendix reviewing and specialising classical results of the theory of entire functions developed by Phragmén and Lindelöf, and by Paley and Wiener.

In the Epilogue at the end of the book, with a view towards future research, a class of operators is presented for which the associated characteristic matrix function is only analytic on a proper subset of the complex plane.
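A concrete function of this type (an added illustration, not taken from the original preface) is obtained for p(z) = 1 + z, q ≡ 1, a = 1 and ϕ ≡ 1:

```latex
% Added illustration: a function of the class described above, with
% p(z) = 1 + z, q \equiv 1, a = 1 and \varphi \equiv 1 on [-1, 1]:
\[
f(z) = 1 + z + \int_{-1}^{1} e^{-zt}\,dt = 1 + z + \frac{e^{z} - e^{-z}}{z},
\]
% an entire function (the singularity at z = 0 is removable) of exponential type 1.
```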

Acknowledgements It is a great pleasure to thank A.C.M. Ran and F. van Schagen of Vrije Universiteit Amsterdam, J.J.O.O. Wiegerinck of the University of Amsterdam, and B.A.J. de Wolff of Freie Universität Berlin, who provided us with lists of remarks and corrections concerning all chapters of this book. Their contributions are gratefully acknowledged and highly appreciated. We also appreciate the comments of A. Böttcher, H. Dym and the anonymous referees, invited by the book series editors, on somewhat earlier versions of this manuscript. Amsterdam, The Netherlands Utrecht, The Netherlands

Marinus A. Kaashoek Sjoerd M. Verduyn Lunel

Contents

1 Preliminaries ...... 1
1.1 Basic Elements of Operator Theory and Definition of Completeness ...... 1
1.1.1 Elements of Banach Space Operator Theory ...... 1
1.1.2 Definition of Completeness ...... 4
1.1.3 Examples Illustrating Proposition 1.1.1 and Schmidt Representations of Compact Operators ...... 5
1.2 Spectral Preliminaries I ...... 12
1.3 Compact Hilbert Space Operator of Finite Order ...... 16
1.3.1 The Operator Tg Revisited ...... 18
2 Completeness Theorems for Compact Hilbert Space Operators ...... 19
2.1 First Hilbert Space Completeness Theorem ...... 19
2.2 Two Additional Completeness Theorems ...... 26
2.3 A First Application of Theorem 2.2.2 ...... 29
2.3.1 Three Special Cases ...... 34
2.4 Classical Completeness Theorems Revisited ...... 35
2.5 The Dense Range Property ...... 38
3 Compact Hilbert Space Operators of Order One ...... 45
3.1 Some Remarks About Trace Class Operators ...... 45
3.2 Preliminaries About Hilbert-Schmidt Operators ...... 46
3.3 Resolvent Estimates for Compact Operators of Order One ...... 49
3.4 A Completeness Theorem ...... 51
3.5 Supplementary Remarks ...... 52
4 Completeness for a Class of Banach Space Operators ...... 55
4.1 A Special Class of Operators ...... 55
4.2 Spectral Preliminaries II ...... 60
4.3 Theorem 4.1.3 Reduced to the Case When z0 Is Zero ...... 64
4.4 Proof of Theorem 4.1.3 ...... 71
4.5 An Additional Example ...... 82
4.6 Some Additional Remarks ...... 86
4.7 Theorem 3.4.1 Revisited ...... 87
5 Characteristic Matrix Functions for a Class of Operators ...... 91
5.1 Equivalence and Jordan Chains ...... 91
5.1.1 Entire Matrix Functions ...... 92
5.2 The Characteristic Matrix Function ...... 97
6 Finite Rank Perturbations of Volterra Operators ...... 105
6.1 The Characteristic Matrix Function ...... 105
6.2 A Completeness Theorem ...... 108
6.3 The Volterra Operator Replaced by a Quasi-Nilpotent Operator ...... 113
6.4 Examples of Non-compact Quasi-Nilpotent Operators ...... 115
7 Finite Rank Perturbations of Operators of Integration ...... 121
7.1 Preliminaries ...... 121
7.2 Rank One Perturbations of the Operator of Integration on C[0,1], Part 1 ...... 127
7.3 Rank One Perturbations of the Operator of Integration on C[0,1], Part 2 ...... 132
7.4 Rank One Perturbations of the Operator of Integration on L2[0,1] ...... 138
8 Discrete Case: Infinite Leslie Operators ...... 141
8.1 Definition of a Leslie Operator ...... 141
8.2 Associated Boundary Value Systems ...... 142
8.3 The Characteristic Function and Related Properties ...... 146
8.4 Completeness for a Concrete Class of Leslie Operators ...... 150
8.5 A Generalised Leslie Operator ...... 158
9 Semi-Separable Operators and Completeness ...... 161
9.1 Discrete Semi-Separable Operators ...... 161
9.1.1 A Completeness Theorem (A Scalar Case) ...... 166
9.2 Integral Operators with Semi-Separable Kernels ...... 177
9.2.1 A Completeness Result for Semi-Separable Integral Operators ...... 184
9.3 Intermezzo: Fundamental Solutions of ODE and Volterra Operators ...... 193
9.3.1 A Related Volterra Operator ...... 195
10 Periodic Delay Equations ...... 199
10.1 Time Dependent Delay Equations ...... 199
10.2 A Family of Time Dependent Delay Equations ...... 211
10.3 A Two-Parameter Family of Solution Operators ...... 216
10.4 Solution Operators for Periodic Delay Equations ...... 217
11 Completeness Theorems for Period Maps ...... 225
11.1 The Period Map and Its Generalisations ...... 225
11.2 Spectral Properties of the Period Map ...... 232
11.3 Completeness of the Period Map in Case the Period Is Equal to the Delay ...... 235
11.4 Scalar Periodic Delay Equations and Completeness (One Periodic) ...... 237
11.5 Scalar Periodic Delay Equations and Completeness (Two Periodic) ...... 247
12 Completeness for Perturbations of Unbounded Operators ...... 259
12.1 The Associated Characteristic Matrix Function ...... 259
12.2 A Completeness Theorem for a Class of Unbounded Operators ...... 262
13 Applications to Dynamical Systems ...... 269
13.1 Mixed Type Functional Differential Equations ...... 269
13.1.1 Three Examples ...... 273
13.2 Age-Dependent Population Dynamics ...... 275
13.3 The Zig-Zag Semigroup ...... 278
14 Results from the Theory of Entire Functions ...... 287
14.1 Basic Definitions ...... 287
14.2 Applications of the Phragmén-Lindelöf Theorem ...... 291
14.3 Applications of the Paley-Wiener Theorem ...... 295
14.4 The Phragmén-Lindelöf Indicator Function ...... 306
14.5 Properties of the Indicator Function ...... 315
14.6 Entire Functions of Completely Regular Growth ...... 318
14.7 The Dominating Property ...... 322
14.8 Distribution of Zeros of Entire Functions and Related Properties ...... 328
14.8.1 Distribution of Zeros and Completely Regular Growth ...... 329
14.8.2 Genus, Convergence Exponent, and Order of Entire Functions ...... 333
14.9 Vector-Valued and Operator-Valued Entire Functions ...... 337
Epilogue ...... 341
Bibliography ...... 345
Subject Index ...... 349

List of Symbols

adj Δ(z): the adjugate of the matrix Δ(z), page 100
M(T; λ): the generalised eigenspace of T at λ, page 2
m(Δ(λ0)): the algebraic multiplicity of the entire matrix function Δ at λ0, page 93
Tr (A): the trace of an n × n matrix A, page 95
g dominates f: see Definition 14.7.1, page 320
hf: the indicator function of f, page 304
k(λ, Δ): the order of λ as pole of the matrix function Δ(·)−1, page 92
m(λ, Δ): the order of λ0 as a zero of the entire matrix function Δ, page 94
m(T; λ): the algebraic multiplicity of T at λ, page 3
N ⊕ L: the direct sum of linear spaces N and L, not necessarily orthogonal, page 5
n(r, f): the number of zeros of f in the closed disc with centre zero and radius r, page 330
NBV[0, 1]: all functions η of bounded variation on [0, 1] normalised such that η(0) = 0 and η is continuous from the right on the open interval (0, 1), page 40
∅: empty set, page 258
ρ(T): the resolvent set of the operator T, page 2
σ(T): the spectrum of the operator T, page 2
⟨x∗, x⟩: a bilinear form ⟨·, ·⟩ : X∗ × X → C, page 2
⟨y, x⟩: a linear, conjugate linear form ⟨·, ·⟩ : H × H → C, page 3
{q, P}: an ordered pair of entire functions such that (4.1) holds, page 56
T∗: the adjoint or conjugate of a Banach space operator T, page 2
T∗: the adjoint of a Hilbert space operator T : H → H, page 3
L1loc[−h, ∞): all integrable functions on [−h, ∞) that locally belong to L1, i.e., their restriction to compact subsets of [−h, ∞) belong to L1, page 199
MT: the generalised eigenspace of the operator T, that is, MT is the linear span of the eigenvectors and generalised eigenvectors corresponding to the non-zero eigenvalues of T, page 3
M̄T: the closure of the generalised eigenspace of the operator T, page 3
ST: the closed linear space consisting of all x ∈ X such that the map z → (I − zT)−1 x is an entire function, page 4
Su,A: the closed linear space consisting of all x ∈ X such that the map z → (zI − A)−1 x is an entire function, page 260
FT,Δ,z0: the linear space of vectors x ∈ X such that the entire function q(z0 + z) dominates the entire function (z0 + z)P(z0 + z)x, page 57
R−,λ0: the singular part in the Laurent expansion of a meromorphic operator function R at λ0, page 95
dτ: indicates that the integration is with respect to τ, page 199
ord(T): the order of a compact operator on a Hilbert space, page 17
Var[−h,0] η: the variation of a function of bounded variation η over the interval [−h, 0], page 199

Chapter 1

Preliminaries

This chapter has an introductory character. Basic elements of operator theory are reviewed, and the notion of completeness is defined for Banach space operators and specified further for Hilbert space operators. Elements of spectral theory are presented, and a few illustrative examples are given. The chapter consists of three sections, of which the first consists of four subsections. Throughout, the Banach and Hilbert spaces are assumed to be complex spaces.

1.1 Basic Elements of Operator Theory and Definition of Completeness This first section consists of four subsections. In the first subsection we review some classical elements of Banach space operator theory. The second subsection presents an example illustrating the role of spectral decompositions in the qualitative theory of dynamical systems. The definition of completeness is given in the third subsection, which also presents the main theme of the methods used in the present monograph. In the fourth subsection, properties of completeness are presented in a Hilbert space setting.

1.1.1 Elements of Banach Space Operator Theory Throughout this monograph X will denote a complex Banach space and H will denote a complex Hilbert space. The Banach algebra of bounded linear operators on X endowed with the operator norm is denoted by L(X). Further, if X and Y denote



complex Banach spaces, then L(X, Y) denotes the vector space of all bounded linear operators between X and Y endowed with the operator norm. The dual space of X will be denoted by X∗ and consists of all bounded linear functionals on X. Elements of X∗ are denoted by x∗, and the corresponding bounded linear functional on X is given by x → ⟨x∗, x⟩. So x∗(x) := ⟨x∗, x⟩ defines a bilinear form for each x ∈ X and x∗ ∈ X∗. If T ∈ L(X), then T∗ denotes the Banach adjoint of T, which is the unique operator on X∗ defined by ⟨x∗, Tx⟩ = ⟨T∗x∗, x⟩ for all x ∈ X and x∗ ∈ X∗; see Theorem 4.10 in [74]. Instead of "Banach adjoint" the operator T∗ is also called the "conjugate" operator; see, e.g., [36, page 307] or [78, Section 4.8]. If N is a linear subset of the Banach space X, then N⊥ denotes the annihilator of N, i.e., the set of all x∗ in the Banach dual X∗ such that ⟨x∗, u⟩ = 0 for each u ∈ N. If L is a linear subset of the Banach space X∗, then ⊥L is the set of all x ∈ X such that ⟨x, y∗⟩ = 0 for each y∗ ∈ L. Summarising, we have

N⊥ = {x∗ ∈ X∗ | ⟨x∗, u⟩ = 0, for all u ∈ N},

(1.1)



(1.2)

L = {x ∈ X | y ∗ , x = 0, for all y ∗ ∈ L}.

It is well-known (see, e.g., Theorem 4.7 in [74]) that ⊥

(N ⊥ ) = N , where N is the norm-closure of N;

(1.3)

(⊥ L)⊥ =  L, where  L is the weak∗ -closure of L.

(1.4)

Furthermore, since  L is the closed convex balanced hull of L (see [16, Theorem V.1.8]), we have L ⊂ L ⊂  L. Let T ∈ L(X). A complex number λ belongs to the resolvent set ρ(T ) of T if and only if the resolvent (λI − T )−1 exists and is bounded. The spectrum σ (T ) is by definition the complement of ρ(T ) in C. Since the adjoint of λI − T is equal to λI − T ∗ , it follows that ρ(T ) = ρ(T ∗ )

and σ (T ) = σ (T ∗ ).

(1.5)

The point spectrum σp (T ) is the set of those λ ∈ C for which λI − T is not one-to-one, i.e., T ϕ = λϕ for some ϕ = 0. One then calls λ an eigenvalue and ϕ an eigenvector corresponding to λ. The null space Ker (λI − T ) is called the eigenspace and its dimension is called the geometric multiplicity of λ. The generalised eigenspace M(T ; λ) is the smallest closed linear subspace that contains all Ker (λI − T )j for j = 1, 2, . . .. In other words M(T ; λ) =

∞  j =1

Ker (λI − T )j .

(1.6)

1.1 Basic Elements of Operator Theory and Definition of Completeness

3

The dimension m(T ; λ) of M(T ; λ) is called the algebraic multiplicity of λ. If, in addition, λ is an isolated point in σ (T ) and m(T ; λ) is finite, then λ is called an eigenvalue of finite type. When m(T ; λ) = 1 we say that λ is a simple eigenvalue. The spectral projection Pλ : X → X corresponding to the isolated eigenvalue λ onto M(T ; λ) is given by the Riesz projection which is defined by Pλ =

1 2πi



(zI − T )−1 dz,

(1.7)

λ

where λ is a closed contour in the resolvent set of T such that λ is the only eigenvalue of T inside λ . See [32, Section 1.2] for more information on spectral decompositions using the Riesz projection. From the definition of the generalised eigenspace it follows that X = M(T ; λ) ⊕ Ker Pλ .

(1.8)

An operator T : X → X is called compact if the closure of the image under T of the unit ball in X is a compact set in X. If T : X → X is compact, then the spectrum of T is a compact, at most countable, set with 0 as its only possible accumulation point. Furthermore, any non-zero point in the spectrum of T belongs to the point spectrum of T and is an eigenvalue of finite type. The latter is equivalent to the corresponding Riesz projection (1.7) having finite rank (see Chapter II of [32]). We will order the non-zero points in the spectrum by decreasing modulus denoted by λ1 (T ), λ2 (T ), . . . , λν(T ) (T ), where each eigenvalue is repeated as many times as the value of its multiplicity, and where ν(T ) denotes the cardinality of the non-zero spectrum of T . The linear span of the eigenvectors and generalised eigenvectors corresponding to the non-zero eigenvalues of T will be denoted by MT . With some abuse of terminology the space MT is called the generalised eigenspace of the operator T . Note that MT = ⊕λ∈σ (T )\{0} Im Pλ

and MT ∗ = ⊕λ∈σ (T )\{0} Im Pλ∗

(1.9)

The second identity follows from the first by using the identities in (1.5). The closure of MT in X is denoted by MT . If T : H → H is a bounded linear operator on a Hilbert space H , then for x, y ∈ H the inner product y, T x is linear in y, conjugate linear in T x and bounded. Therefore there exists a unique adjoint operator T ∗ : H → H for which y, T x =

T ∗ y, x for all x, y ∈ H ; see Theorem 12.9 in [74]. In particular, in the Hilbert space setting the identities in (1.5) are replaced by ρ(T ) = ρ(T ∗ )

and σ (T ) = σ (T ∗ ).

(1.10)

By abuse of notation, we use the same symbol for Banach and Hilbert space adjoints throughout this book.

4

1 Preliminaries

1.1.2 Definition of Completeness In this subsection T is a compact operator on the Banach space X. If the linear span of all eigenvectors and generalised eigenvectors of T including those corresponding to λ = 0 (if λ = 0 is an eigenvalue) is dense in the space X, then the system of eigenvectors and generalised eigenvectors of T is said to be complete; see, e.g., [32, page 29]. If T : X → X is one-to-one, then the system of eigenvectors and generalised eigenvectors of T is complete if and only if the space MT is dense in X, and the latter happens if and only if λ∈σ (T )\{0}

Ker Pλ∗ = {0}.

(1.11)

Although condition (1.11) above is a necessary and sufficient condition for completeness of the eigenvectors and generalised eigenvectors of T (provided T is one-to-one) this condition is not always useful in the study of completeness. This is a result of the fact that the left hand side of (1.11) requires the study of the spectral properties of the adjoint operator T ∗ on the dual space X∗ . In the Banach space setting, in case of non-completeness, the left hand side of (1.11) will not always lead to a direct sum decomposition of X consisting of MT and a complementary T -invariant subspace. In our analysis of completeness an important role will be played by the linear space ST := {x ∈ X | z → (I − zT )−1 x is an entire function}.

(1.12)

Using the notation introduced in the preceding subsection, it is straightforward to show that ST is given by ST =

λ∈σ (T )\{0}

Ker Pλ .

(1.13)

Indeed, if x ∈ ST , then definition (1.12) tells us that z → (I − zT )−1 x is an entire function, and hence it follows from the Cauchy theorem that the contour integral in identity (1.7) equals

zero. This proves that Pλ x = 0 for any non-zero eigenvalue λ of T . Thus x ∈ λ∈σ (T )\{0} Ker Pλ , and we have proved that ST is a subset of the space defined by the right hand side of (1.13). On the other hand, if Pλ x = 0 for any non-zero λ, then (λ − T )−1 x is analytic at any 0 = λ ∈ σ (T ), and hence (I − zT )−1 x is an entire function. The latter tells us that x ∈ ST , and (1.13) is proved. This space ST is an appropriate candidate for a complementary subspace of MT which is also invariant under T . For instance, if the non-zero part of σ (T ) is finite, then MT ⊕ ST = X trivially. If the non-zero part of σ (T ) is infinite, then the space MT is infinite dimensional and does not have to be closed. Therefore it is a natural question whether the decomposition MT ⊕ ST = X has to be replaced into



a decomposition of closed T -invariant subspaces: X = MT ⊕ ST ?

(1.14)

Unfortunately the above decomposition does not hold in general, and in Chap. 8 one will see examples (see Theorem 8.4.3 in particular) such that ST = {0}, but T does not have a complete span of eigenvectors and generalised eigenvectors so that decomposition (1.14) fails to hold in this case. We will also present examples in Chap. 13 such that ST = {0}, M ∩ ST = {0}, and the direct sum MT ⊕ ST is dense in X but not closed (so a direct sum of closed subspaces that is not closed itself). Nevertheless, we have the following positive result. Proposition 1.1.1 Let MT and MT ∗ be the closure of MT in X and of MT ∗ in X∗ , respectively. Assume that MT ∩ ST = {0} and MT ∗ ∩ ST ∗ = {0}.

(1.15)

Then the space X decomposes as follows: X = MT ⊕ ST .

(1.16)

The two identities in (1.15) are rather natural in various applications, but these two identities are not always satisfied. In general additional conditions on the operator T are required to obtain (1.15). Our main results given in Theorem 4.1.3 and Theorem 6.2.1 present conditions on T such that (1.15) holds, in concluding necessary and sufficient conditions such that ST = {0}. Note that ST = {0} is a necessary condition in order to expand elements of X into eigenfunctions and generalised eigenfunctions of T . The subject of eigenfunction expansion is a fundamental part of operator theory and plays an essential role in applications. For example, to study whether boundary value problems are wellposed and in the qualitative analysis of infinite dimensional dynamical systems. The classical spectral theorem for compact selfadjoint operators on a Hilbert space provides an important class of examples and is a motivation for the problems addressed in this monograph. The proof of Proposition 1.1.1 is given in a somewhat more general setting in Sect. 1.2, directly after Corollary 1.2.2 and its proof.
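For orientation, the following finite-dimensional sketch (an editorial addition, not from the original text) exhibits the decomposition X = MT ⊕ ST for a matrix; in finite dimensions ST is simply the generalised eigenspace of the eigenvalue zero, so the decomposition always holds, and the substance of Proposition 1.1.1 lies in the infinite-dimensional situation.

```python
# Editorial sketch (not from the book): for a 4x4 matrix T, S_T = Ker T^4 (the vectors
# on which (I - zT)^{-1} x is a polynomial in z) and M_T is spanned by the eigenvectors
# of the non-zero eigenvalues; together they span C^4, and the sum is direct.
import numpy as np

T = np.array([[0.0, 1.0, 0.0, 0.0],     # 2x2 nilpotent Jordan block at 0 ...
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],     # ... together with non-zero eigenvalues 1 and 0.5
              [0.0, 0.0, 0.0, 0.5]])

_, s, vh = np.linalg.svd(np.linalg.matrix_power(T, 4))
S_T = vh[s < 1e-10].T                    # orthonormal basis of Ker T^4  (= S_T here)

eigval, eigvec = np.linalg.eig(T)
M_T = eigvec[:, np.abs(eigval) > 1e-10]  # eigenvectors of the non-zero eigenvalues

print(np.linalg.matrix_rank(np.hstack([M_T, S_T])))   # 4: C^4 = M_T (+) S_T
```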

1.1.3 Examples Illustrating Proposition 1.1.1 and Schmidt Representations of Compact Operators In this subsection we present three cases, in a Hilbert space setting, when the two conditions in (1.15) are fulfilled, and hence in these cases equality (1.16) is satisfied



too. In all three cases the operator T is a compact operator acting on a Hilbert space H , and the Schmidt-representation of T plays an important role. The first class concerns trace class operators, the second is an operator of integration on L2 [0, 1], and the third concerns operators that are rank one perturbations of an operator of integration. We begin with some spectral preliminaries. Compact Selfadjoint Hilbert Space Operators Let T be a compact selfadjoint operator on a Hilbert space H . Put H0 = Ker T and H1 = Ker T ⊥ . Then H0 and H1 are closed T -invariant subspaces of H , the space H1 = Im T and the spectral theorem (see [36, Theorem IV.5.1]) gives us an orthonormal basis of eigenvectors {ϕj }j ≥1 corresponding to the non-zero eigenvalues {λj }j ≥1 of T on H1 such that Tx =

ω

λj ϕj , x ϕj ,

x ∈ H1 .

(1.17)

j =1

In particular, MT = H1 , and since T is selfadjoint, we have ST = H0 = Ker T . This shows that MT ⊕ ST = H . In particular, the decomposition (1.16) holds for compact selfadjoint operators. In general, the non-zero spectrum of a compact operator might be empty, and then one has no representations of the form (1.17). The Analogue of (1.17) for Arbitrary Compact Hilbert Space Operators To investigate the spectral properties of an arbitrary compact operator T : H → H , it is useful to study the eigenvalues of the compact positive selfadjoint operator T ∗ T associated with T ; see the last paragraph of Sect. 1.1.1 for the definition of T ∗ in a Hilbert space setting. If λ1 (T ∗ T ) ≥ λ2 (T ∗ T ) ≥ · · · > 0 denote the positive eigenvalues of T ∗ T , where each eigenvalue is repeated as many times as the value of its multiplicity, then the singular values of T are defined to be sj (T ) := λj (T ∗ T )1/2 ,

j ≥ 1.

(1.18)

Using the representation (1.17) for T ∗ T one can prove the following representation for T Tx =

ω

sj (T ) ψj , x ϕj ,

x ∈ H,

(1.19)

j =1

where {ϕj }j ≥1 and {ψj }j ≥1 are orthonormal systems in H and ω ∈ N or ω = ∞. Formula (1.19) is called the Schmidt representation of T ; see [32, Theorem VI.1.1]). It is a natural generalisation of formula (1.17) for the non-selfadjoint case.
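In finite dimensions the Schmidt representation (1.19), Tx = Σj sj⟨ψj, x⟩ϕj, is simply the singular value decomposition. The following short sketch (an editorial addition, not part of the original text) verifies the expansion numerically for a random matrix.

```python
# Editorial illustration (not from the book): in finite dimensions the Schmidt
# representation (1.19) is the singular value decomposition,
#     T x = sum_j s_j <psi_j, x> phi_j .
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))
phi, s, psi_conj = np.linalg.svd(T)      # T = phi @ diag(s) @ psi_conj

x = rng.standard_normal(4)
Tx = sum(s[j] * (psi_conj[j] @ x) * phi[:, j] for j in range(4))
print(np.allclose(Tx, T @ x))            # True: the expansion reproduces T x
```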



The above result is a beautiful example of what completeness can do for us. Indeed, to prove formula (1.19) we used completeness of compact self-adjoint operators. Thus a non-self-adjoint compact operator T has a representation of the form (1.19) while the operator T might not have a complete span of eigenvectors and generalised eigenvectors. The Case When T Is a Trace Class Operator A compact operator T : H → H is called of trace class if the sum of the singular values converges. Recall (see Sect. 1.1.1) that a non-zero eigenvalue λ of T is said to be simple if the algebraic multiplicity of λ is equal to one, that is, m(T , λ) = 1. Proposition 1.1.2 Let T be an operator of trace class such that Ker T = Ker T ∗ . If T has simple eigenvalues λj that, with a possible exception of a finite number, are in the sector  = {reiϕ | r ≥ 0, |π − ϕ| <  or |ϕ| < } with 0 <  < π/2, and are such that the corresponding eigenvectors {ϕj } of T at λj and {ψj } of T ∗ at λj , j = 1, . . . , ω, where ω ∈ N or ω = ∞, form a bi-orthonormal system in H , then (1.15) holds, that is, MT ∩ ST = {0} and MT ∗ ∩ ST ∗ = {0}, and thus the space H decomposes as H = MT ⊕ ST .

(1.20)

Proof From the assumption that the eigenvectors of T are simple and form an orthonormal system we have the representation Tx =

ω j =1

Pλj x =

ω

λj ψj , x ϕj ,

x ∈ MT ,

(1.21)

j =1

where Pλj denotes the Riesz spectral projection given by (1.7). Suppose that x ∈ MT ∩ ST . Then z → (I − zT )−1 x

is an entire function.

Using representation (1.21), we can for x ∈ MT compute (I − zT )−1 x as follows. Let v = (I − zT )−1 x, then x = v − zT v and hence from (1.21) x =v−z

ω j =1

λj ψj , v ϕj .

(1.22)



So if we take the inner product with ψk , k = 1, . . . , ω, on both sides of (1.22) and use the assumption that {ϕj } and {ψj } are a bi-orthonormal system in H we obtain

ψk , v = ψk , x + zλj ψk , v ,

k ≥ 1.

(1.23)

Substituting the solution to (1.23) into (1.22) yields a representation for v v = (I − zT )−1 x = x +

ω j =1

zλj

ψj , x ϕj . 1 − zλj

(1.24)

Since x ∈ ST , it follows from [32, Theorem X.2.1] that the right hand side of (1.24) is an entire function of zero exponential type. Now using the fact that T is of trace class and that the eigenvalues of T belong to , it follows that there exists a positive constant M such that we can estimate the entire function in (1.24) along the imaginary axis as follows (I − iνT )

−1

x ≤ 1 +

M 1 − sin2 

|ν|

ω

|λj | x,

ν ∈ R.

(1.25)

j =1

Since  < π/2, this shows that the entire function z → (I − zT )−1 x is an entire function of zero exponential type bounded by a polynomial of degree one along the imaginary axis. Therefore, it follows from the Paley-Wiener theorem, see Proposition 14.3.4, that the entire function z → (I − zT )−1 x equals a polynomial of degree one. Thus from the Neumann series it follows that (I − zT )−1 x = x + zT x,

z ∈ C.

This implies that x ∈ Ker T 2 , but x ∈ MT and hence T x ∈ M T ∩ Ker T = {0}. This shows T x = 0 and hence x ∈ M T ∩ Ker T = {0}. This completes the proof that if x ∈ MT ∩ ST , then x = 0. Similarly, we can repeat the argument for T replaced by T ∗ using the corresponding representation for T ∗ on MT ∗ to arrive at T ∗x =

ω

λj x, ϕj ψj ,

x ∈ MT ∗ .

(1.26)

j =1

The proof of the proposition now follows from Proposition 1.1.1.

 

The Case When T Is a Volterra Operator A compact operator with no nonzero spectrum is called Volterra operator. A classical example (see, for instance, Example 4 in Section 13.1 of [36]) of such a compact operator is the operator of



integration T on H = L2 [0, 1] which is given by   T ϕ (t) = 2i



1

0 ≤ t ≤ 1.

ϕ(s) ds,

(1.27)

t

This operator T is a Hilbert-Schmidt operator but not a trace class operator (cf., Proposition IX.1.1 in [32])). Since the spectrum of T consists of the number zero only, the generalised eigenspace of T consists of the zero vector only, that is, MT = {0}. Furthermore, since the function z → (I −zT )−1 x is an entire function for each x ∈ H = L2 [0, 1] we have ST = H . Hence MT ⊕ ST = H Thus in this case the identity (1.20) is satisfied trivially. For the Volterra operator T given by (1.27) it can be proved that the entire function z → (I −zT )−1 is of exponential type (see Sect. 14.9 for the definition and properties of vector-valued entire functions). For details and more examples about Volterra operators, see Chapters VI and IX of [32] and the examples given in Section V.7 of [78]. In what follows we shall derive the Schmidt representation of T , i.e., formula (1.19) for the case when T is given by(1.27). To do this we need the adjoint T ∗ of T which is given by: 

 T ∗ ϕ (t) = −2i



t

ϕ(s) ds,

0 ≤ t ≤ 1,

(1.28)

0 ≤ t ≤ 1.

(1.29)

0

and hence the selfadjoint operator T ∗ T is given by  ∗  T T ϕ (t) = 4

 t  0

1

ϕ(σ ) dσ ds,

s

To compute the resolvent (I − zT ∗ T )−1 of T ∗ T , we need to solve the equation ψ = ϕ − zT ∗ T ϕ with ψ given and ϕ as the unknown. By differentiating the latter equation twice we arrive at the following boundary value problem ϕ  + 4zϕ = ψ  ,

ϕ(0) = ψ(0),

ϕ  (1) = ψ  (1).

(1.30)
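For the reader's convenience, the differentiation step leading to (1.30) can be written out as follows (an editorial addition; the computation is formal):

```latex
% Editorial note: the differentiation step behind (1.30), written out.  With
% (T^*T\varphi)(t) = 4\int_0^t \int_s^1 \varphi(\sigma)\,d\sigma\,ds and
% \psi = \varphi - z\,T^*T\varphi, one (formal) differentiation gives
\[
\psi'(t) = \varphi'(t) - 4z\int_t^1 \varphi(\sigma)\,d\sigma ,
\]
% and differentiating once more gives \psi'' = \varphi'' + 4z\varphi, that is,
% \varphi'' + 4z\varphi = \psi''.  Evaluating \psi = \varphi - zT^*T\varphi at
% t = 0 and the display above at t = 1 yields \varphi(0) = \psi(0) and
% \varphi'(1) = \psi'(1), the boundary conditions in (1.30).
```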



The general solution of the homogeneous part of the differential equation in (1.30) is given by √ √ ϕ(t) = c1 cos(2t z) + c2 sin(2t z),

0 ≤ t ≤ 1,

c1 , c2 ∈ R.

(1.31)

A particular solution of the differential equation in (1.30) is given by 

t

ϕp (t) =

 √  cos 2(t − s) z ψ  (s) ds,

0 ≤ t ≤ 1.

(1.32)

0

The first boundary condition in (1.30) implies that the solution (1.31) can be written as √ √ ϕ(t) = cos(2t z)ψ(0) + c2 sin(2t z) + ϕp (t),

0 ≤ t ≤ 1.

(1.33)

We can determine c2 from the second boundary condition in (1.30). Differentiating (1.33) and substituting t = 1, we obtain √ √ √ √ ϕ  (1) = −2 z sin(2 z)ψ(0) + 2 zc2 cos(2 z) + ψ  (1)  1 √ √ sin(2(1 − s) z)ψ  (s) ds. −2 z 0

Using the boundary condition ϕ  (1) = ψ(1) and solving for c2 yields 1 √ √ sin(2 z)ψ(0) + 0 sin(2(1 − s) z)ψ  (s) ds . c2 = √ cos(2 z) This computation shows that the resolvent (I − zT ∗ T )−1 ψ can be written as (I − zT ∗ T )−1 ψ =

1 P (z)ψ, q(z)

(1.34)

where P (z) : L2 [0, 1] → L2 [0, 1] is defined by   √ √ √ √ P (z)ψ (t) = cos(2 z) cos(2t z) + sin(2 z) sin(2t z) ψ(0)  + 

1

√ √ sin(2(1 − s) z)ψ  (s) ds sin(2t z)

t

 √  cos 2(t − s) z ψ  (s) ds,

0

+ 0

0≤t ≤1



and √ q(z) = cos(2 z). Note that q is an entire function of order 1/2 (see Sect. 14.1). Resolvent representations of the form (1.34) will play an important role in this book (see, e.g., Sect. 4.1). The zeros of q are real and simple and are given by zk =

1 4

(2k − 1)π

2

k = 1, 2, 3, . . .

,

(1.35)

and correspond to the poles of the resolvent (I − zT ∗ T )−1 . This shows that the eigenvalues λk of T ∗ T are indeed positive and given by λk = zk−1 ,

k = 1, 2, 3, . . .

(1.36)

with corresponding normalised eigenfunctions   2 t ψk = sin (2k − 1)π , (2k − 1)π 2

k = 1, 2, 3, . . . .

(1.37)

This shows that the singular values of T are given by sk (T ) =



λk =

4 , (2k − 1)π

k = 1, 2, 3, . . . .

(1.38)

Summarising, we have proved the following proposition (cf., page 99 in [32]). Proposition 1.1.3 The representation (1.19) for the Volterra operator T is given by Tϕ =

∞ k=1

4

ϕ, ψk ϕk , (2k − 1)π

(1.39)

where ψk is given by (1.37) and ϕk = 14 (2k − 1)π T ψk for k = 1, 2, 3, . . . . We conclude this subsection with another, but related, example of a compact operator for which we would like to study the problem of completeness. The compact operator we have in mind is the operator Tg : L2 [0, 1] → L2 [0, 1] given by 

 Tg x (t) =



t 0



1

x(s) ds +

g(s)x(s) ds, 0

0 ≤ t ≤ 1,

(1.40)



where g ∈ L2 [0, 1]. This operator is a rank one perturbation of a Volterra operator, namely, a rank one perturbation of the operator of integration 

t

(T x)(t) =

x(s) ds

0 ≤ t ≤ 1.

0

Standard theorems to verify completeness results do not apply to the operator Tg because Tg is not a trace class operator. This motivated us to develop the completeness results in this monograph. We return to the operator Tg in Sect. 1.3.1.
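The singular values (1.38) can be checked numerically. The sketch below (an editorial addition, not part of the original text) discretises the operator of integration (1.27), (Tϕ)(t) = 2i ∫_t^1 ϕ(s) ds, by a midpoint rule and compares the leading singular values with 4/((2k − 1)π).

```python
# Editorial numerical check (not from the book) of the singular values (1.38).
import numpy as np

n = 1000
h = 1.0 / n
t = (np.arange(n) + 0.5) * h                     # midpoints of [0, 1]

A = 2j * h * (t[None, :] > t[:, None])           # full cells lying to the right of t_i
A += 1j * h * np.eye(n)                          # half of the cell containing t_i itself

s = np.linalg.svd(A, compute_uv=False)
exact = 4 / ((2 * np.arange(1, 6) - 1) * np.pi)  # 4/pi, 4/(3 pi), ...
print(np.round(s[:5], 4))                        # agrees with `exact` to about 4 decimals
print(np.round(exact, 4))                        # [1.2732 0.4244 0.2546 0.1819 0.1415]
```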

1.2 Spectral Preliminaries I In this section we collect together a number of elementary spectral facts concerning the sets MT and ST introduced in Sect. 1.1, and about the analogous sets for the Banach adjoint T ∗ . We shall not restrict to compact operators, but the operators concerned will be Banach space operators of which the non-zero part of the spectrum consists of isolated points only. For such an operator T the spaces MT and ST are defined as the first formula in (1.9) and in (1.12), respectively. Thus MT = ⊕λ∈σ (T )\{0} Im Pλ ,

(1.41)

ST = {x ∈ X | z → (I − zT )−1 x is an entire function}.

(1.42)

Here Pλ is the spectral projection defined by (1.7). If the non-zero part of the spectrum of T consists of eigenvalues of finite type only, then T is known as a Riesz operator; see [85]. Lemma 1.2.1 Let T be a bounded linear operator on a complex Banach space X such that the non-zero part of the spectrum of T consists of isolated points only. Furthermore, let ST be the space defined by (1.42), and let ST ∗ be the corresponding space for the Banach adjoint T ∗ . Then ST =



0=λ∈σ (T )

ST ∗ =



Ker Pλ

0=λ∈σ (T )

Ker Pλ∗

and Ker T n ⊂ ST

(n = 1, 2, . . .),

and Ker (T ∗ )n ⊂ ST ∗

(n = 1, 2, . . .),

MT ∩ ST = {0} and MT ∗ ∩ ST ∗ = {0}.

(1.43) (1.44) (1.45)

In particular, ST and ST ∗ are closed subspaces of X and X∗ , respectively. Finally, if X = Im T k ⊕ Ker T k

for some positive integer k,

(1.46)



then

MT ∩ Ker T k = {0} and MT ∗ ∩ Ker (T ∗)k = {0}.

(1.47)

Proof In what follows we shall use a few times the fact that x ∈ ST is equivalent to the statement that the function μ → (μI − T )−1 x is analytic on C \ {0}. Now take x ∈ ST , and let 0 = λ ∈ σ (T ). Furthermore, let λ be a small circle such that λ is the only point in σ (T ) inside λ . Then, using (1.7) and the analyticity of (μI − T )−1 x at λ, we see that Pλ x =

1 2πi



(μI − T )−1 x dμ = 0.

(1.48)

λ

Thus ST ⊂ Ker Pλ for each 0 = λ ∈ σ (T ). Conversely, if x ∈ Ker Pλ for each 0 = λ ∈ σ (T ), then (1.48) tells us that 1 2πi



(μI − T )−1 x dμ = 0,

0 = λ ∈ σ (T ),

λ

which implies that (μI −T )−1 x is analytic on C\{0}. Thus x ∈ ST . This completes the proof of the first part of (1.43). Next, using (I − zT )(I + zT + z2 T 2 + · · · + zk−1 T k−1 ) = I − zk T k , we see that (I −zT )−1 x = (I +zT +z2 T 2 +· · ·+zn−1 T n−1 )x for each x ∈ Ker T n , and hence Ker T n ⊂ ST for each positive integer n, which proves the second part of (1.43). Since σ (T ∗ ) = σ (T ) (see (1.5)) and (I − zT ∗ )−1 = ((I − zT )−1 )∗ analogous results hold with T ∗ in place of T , that is, (1.44) holds. Next we prove the identities in (1.45). First observe that  1 (I − zT )−1 − I , z 1 1 (I − zT )−1 + I. (λI − T )(I − zT )−1 = λ − z z

T (I − zT )−1 =

(1.49) (1.50)

Furthermore, if T ϕλ = λϕλ , then (I − zT )−1 ϕλ =

1 ϕλ . 1 − λz

(1.51)

Using mathematical induction we next prove that (λI − T )m ϕλ = 0 implies (I − zT )−1 ϕλ =

m−1 k=0

(−z)k (λI − T )k ϕλ . (1 − λz)k+1

(1.52)

14


Note that for m = 1 the formula reduces to (1.51). So assume that (1.52) holds for m and that (λI − T )m+1 ϕλ = 0. Then (I − zT )−1 (λI − T )ϕλ =

m−1 k=0

(−z)k (λI − T )k+1 ϕλ . (1 − λz)k+1

(1.53)

Using (1.50) we can rewrite the left hand side of (1.53) to arrive at

λ−

m−1 1 (−z)k 1 (I − zT )−1 ϕλ = (λI − T )k+1 ϕλ − ϕλ . k+1 z (1 − λz) z k=0

Therefore (I − zT )−1 ϕλ =

m−1 k=0

=

m k=1

=

m k=0

(−z)k+1 1 ϕλ (λI − T )k+1 ϕλ + (1 − λz)k+2 1 − λz (−z)k 1 ϕλ (λI − T )k ϕλ + (1 − λz)k+1 1 − λz (−z)k (λI − T )k+1 ϕλ . (1 − λz)k+1

This completes the proof of the induction step and proves that (1.52) holds. Next we show that the identities in (1.45) follow from formula (1.52). Indeed, suppose that ϕ ∈ MT , then there exist constants cj ∈ C, λj ∈ σ (T ) and integers mj , j = 1, . . . , k, such that ϕ has the representation ϕ=

k

cj ϕ λ j

with (λj I − T )mj ϕλj = 0.

(1.54)

j =1

From (1.54) and (1.52) it follows that z → (I − zT )−1 ϕ is a bounded rational function which tends to zero for Re z → ∞. If, in addition, ϕ ∈ ST , then z → (I − zT )−1 ϕ is also entire. But this shows that (I − zT )−1 ϕ is identical zero and since ρ(T ) = ∅ we conclude that ϕ = 0. This proves that ϕ ∈ MT ∩ ST implies ϕ = 0. Since the analogous result holds with T ∗ in place of T , we have completed the proof of (1.45). It remains to prove the identities in (1.47). To do this recall that the space MT is given by MT = ⊕0=λ∈σ (T ) Im Pλ , i.e., MT is the linear hull of all Im Pλ , 0 = λ ∈ σ (T ). If 0 = λ ∈ σ (T ), then T maps Im Pλ in a one-to-one way onto Im Pλ . Since MT is the linear hull of all Im Pλ , 0 = λ ∈ σ (T ), we conclude that T maps MT in a one-to-one way onto MT . In particular, MT ⊂ Im T k , and thus MT ⊂ Im T k for any positive integer k. Using the identity (1.46) this proves the first part of (1.47).

1.2 Spectral Preliminaries I

15

To prove the second part of (1.47) we use basic properties of duality and annihilators in Banach spaces which can be found in [74] and are partially summarised in the second paragraph of Sect. 1.1.1. In particular, we have M⊥ T =



0=λ∈σ (T )

(Im Pλ )⊥ =



0=λ∈σ (T )

Ker Pλ∗ = ST ∗ ,

(1.55)

and ST⊥





⊂ weak -closure



(Ker Pλ )





0=λ∈σ (T )



= weak -closure

=  ⊕

0=λ∈σ (T )

Im Pλ∗

 .

(1.56)

Since Im T k ∩ Ker T k = {0} by (1.46), duality yields  ⊥   {0} = Im T k ⊕ Ker T k = Ker (T ∗ )k ∩ weak∗ -closure Im (T ∗ )k .

(1.57)

From the preceding paragraph with T ∗ in place of T we know that MT ∗ ⊂ T ∗ is a subset of the weak∗ -closure of Im (T ∗ )k . But then we Im (T ∗ )k . Thus M T ∗ ∩ Ker (T ∗ )k = {0}. see from (1.57) that M   Together (1.55) and (1.56) yield the formulas M⊥ T = ST ∗

and ST⊥ ⊂ MT ∗ .

(1.58)

If the underlying space X is a Hilbert space or a reflexive Banach space, the above results can be specified further. For instance for Hilbert spaces we have the following corollary. Corollary 1.2.2 Let T be a bounded linear operator on a complex Hilbert space X, and assume that the non-zero part of the spectrum of T consists of isolated points only. Furthermore, let ST be the set defined by (1.42), and let ST ∗ be the corresponding set for T ∗ . Then M⊥ T = ST ∗

and M⊥ T ∗ = ST

(1.59)

Proof The first identity follows from (1.55). Applying the first identity with T ∗ in ∗∗ = T because X is Hilbert space. ∗∗ place of T , we get M⊥   T ∗ = ST . But T Proof of Proposition 1.1.1 Proof Let MT and MT ∗ be the closure of MT in X and of MT ∗ in X∗ , respectively. Assume that MT ∩ ST = {0} and MT ∗ ∩ ST ∗ = {0}.

(1.60)

16

1 Preliminaries

We have to prove that MT ⊕ ST = X. We will do this by contradiction. Assume that MT ⊕ ST is not dense in X. Then there exists a non-zero vector x ∗ in X∗ such ⊥  that x ∗ ∈ MT ⊕ ST . Using the formulas in (1.58) we see that  ⊥  ⊥  x ∗ ∈ MT ⊕ ST = MT ∩ ST⊥ ⊂ (ST ∗ ∩ MT ∗ ) = (MT ∗ ∩ ST ∗ ) = {0}. This contradicts the fact that x ∗ is non-zero. Thus MT ⊕ ST = X as desired.

 

The next corollary shows that Proposition 1.1.1 is an “if and only if” result in the Hilbert space case. Corollary 1.2.3 Let T be a bounded linear operator on a complex Hilbert space X, and assume that the non-zero part of the spectrum of T consists of isolated points only. Then the identity X = MT ⊕ ST implies that the two identities in (1.60) are satisfied. Proof Assume X = MT ⊕ ST . Then trivially the first identity in (1.60) is satisfied. To prove the second identity in (1.60), note that X = MT ⊕ ST implies that the space (MT ⊕ST )⊥ consists of the zero vector only. But then, using Corollary 1.2.2, we have ⊥  ⊥ {0} = MT ⊕ ST = M⊥ T ∩ ST = ST ∗ ∩ MT ∗ . Thus MT ∗ ∩ ST ∗ = {0}. But then MT ∗ ∩ ST ∗ is equal to the zero space too. Hence the second identity in (1.60) is also satisfied.  

1.3 Compact Hilbert Space Operator of Finite Order Completeness theorems for trace class operators are well-known; see, for example, Theorem VII.8.1 in [32]. In Chap. 3 we shall extend these results to compact Hilbert space operators of finite order. In the present section we shall recall the definition of order for compact Hilbert space operators. We conclude this section with a class of compact operators of order one that are not of trace class which will play a prominent role in this book. A related completeness theorem will be given in Sect. 2.3. Definition 1.3.1 Let H be a complex Hilbert space and let T be a compact operator on H . The order rod (T ) of T is defined to be ∞   sj (T )r < ∞ , rod (T ) = inf r > 0 | j =1

where sj (T ), j = 1, 2, . . ., denote the singular values of T .

(1.61)

1.3 Compact Hilbert Space Operator of Finite Order

17

If the set in the right hand of (1.61) is empty, then T is said to be of infinite order. Compact operators of infinite order do exist. Indeed, an example of such an operator is the diagonal operator T on 2+ given by T = diag (a1 , a2 , a3 , · · · ),

aj =

1 , log(j + 1)

j = 1, 2, 3, . . . .

Since aj ↓ 0 if j → ∞, the operator T is compact and its j -th singular value is aj for j = 1, 2, 3, . . .. Using formula (45) in [71, Section 8], we see that for each r > 0 we have  r x (x + 1)−r =x → ∞ (x → ∞). (1.62) (log(x + 1))r log(x + 1) In particular, for each r > 0 there exists a positive integer j0 (r) such that ajr =

1 1 > , r (log(j + 1)) j

j ≥ j0 (r).

In other words, for each r > 0 the number ajr goes to zero slower than 1/j when ∞ r  r j → ∞. It follows that ∞ j =1 sj (T ) = j =1 aj is divergent for each r > 0, and hence T is of  infinite order. r Note that ∞ j =1 sj (T ) < ∞ for r = 0 happens if and only if T has finite rank. Thus rod (T ) = 0 for finite rank operators. On the other hand, there are many operators of order zero which are not of finite rank. An example is the diagonal operator T on 2+ given by 1 1 1

T = diag 1, 2 , 3 , 4 , . . . . 2 3 4 In general, if rod (T ) > 0, the infimum in (1.61) is not attained. For instance, the operator of integration T given by (1.27) is of order one but the infimum is not attained (as one sees from (1.38)). In other words, T is an example of an order one operator which is not trace class operator. However, note that all compact operators of order strictly less than one are trace class operators. Lemma 1.3.2 If T is a compact operator of finite order and C is an operator of finite rank, then T + C is also of finite order and rod (T + C) = rod (T ). Proof The result is a direct corollary of the fact (see [32, Theorem VI.1.5]) that for any compact operator T on a Hilbert space H the j -th singular value sj (T ) is given by sj (T ) = min{T − K | K ∈ L(H ), rank K ≤ j − 1}.

18

1 Preliminaries

Thus, in our 0 case sj (T )r = sj (T + C) for all j ≥ 1 + rank C. Hence ∞ for each r > r is the series ∞ s (T ) is convergent if and only if the series s (T + C) j j j =1 j =1 convergent. The latter implies that rod (T + C) = rod (T ).   The class of Hilbert-Schmidt operators of order one which are not trace class turns out to be very important in applications. The following subsection concerns the example we met at the end of Sect. 1.1.3.

1.3.1 The Operator Tg Revisited Let Tg be the operator on L2 [0, 1] defined by (1.40), i.e., Tg is given by   Tg x (t) =



t 0



1

x(s) ds +

g(s)x(s) ds,

0 ≤ t ≤ 1.,

(1.63)

0

This operator is a rank one perturbation of the operator of integration. Hence Tg is a compact operator of order one by Lemma 1.3.2 above. But Tg is not trace class. To investigate the system of eigenvectors and generalised eigenvectors for such operators is a main theme in the book. See Sects. 2.3 and 2.3.1 for first completeness results involving Tg . The reader will meet the operator Tg also as an operator on the Banach space C[0, 1]; see, e.g., the paragraph after Theorem 4.4.5. These operators appear again in Sects. 7.2 and 7.3.

Chapter 2

Completeness Theorems for Compact Hilbert Space Operators

This chapter consists of five sections. In the first and second section the first three main completeness theorems are presented with the second and third theorem (Theorems 2.2.1 and 2.2.2) being further refinements of the first theorem (Theorem 2.1.2). In the third section we illustrate the first completeness theorem using the operator Tg appearing in Sect. 1.3.1, and a few additional examples are presented in Sect. 2.3.1. In Sect. 2.4 a number of classical completeness theorems presented in [32] are reviewed from the point of view of Theorems 2.2.1 and 2.2.2. In the final section we present some auxiliary results that will be used to verify the assumptions of the completeness theorems in concrete cases. Elements of the theory of entire functions as presented in Chap. 14 play an important role in the analysis in this chapter.

2.1 First Hilbert Space Completeness Theorem For a compact selfadjoint operator T acting on a Hilbert space H we always have H = MT ⊕ Ker T , and hence such an operator has a complete system of eigenvectors and generalised eigenvectors. In this section we deal with the nonselfadjoint case. We start with a sufficient condition guaranteeing completeness for a compact Hilbert space operator using that such an operator has a finite order. The approach is based on standard estimates for the resolvent of T . In the sequel we let ray (θ ; z0 , s0 ) denote the half-line in the complex plane with angle θ with the positive real axis, base point at z0 ∈ C, and outside a disk centred at z0 with radius s0 ≥ 0: ray (θ ; z0 , s0 ) = {z0 + reiθ | r > s0 }.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_2

(2.1)

19

20

2 Completeness Theorems for Compact Hilbert Space Operators

θ z0

s0

Fig. 2.1 The ray (θ; z0 , s0 )

See Fig. 2.1 on the next page. If z0 = 0 and s0 = 0, then we suppress z0 and s0 in the notation and write ray (θ ) = {reiθ | r > 0}. A vector function f will be called polynomially bounded on the half line ray (θ ; z0 , s0 ) if there exists a positive integer m and constant M such that f (z) ≤ M(1 + |z|m )

for z ∈ ray (θ ; z0 , s0 ).

Next, we introduce the notion of an α-admissible set of such half-lines. Definition 2.1.1 Let α be a positive real number. An ordered set of real numbers 0 < θ1 < θ2 < · · · < θκ ≤ 2π is said to be α-admissible whenever θj +1 − θj ≤

π α

(j = 1, . . . κ − 1),

(θ1 + 2π) − θκ ≤

π . α

(2.2)

If 0 < θ1 < θ2 < · · · < θκ ≤ 2π satisfies (2.2), z0 ∈ C and s0 ≥ 0, we refer to {ray (θj ; z0 , s0 ) | j = 1, . . . , κ} as an α-admissible set of half-lines in the complex plane. See Fig. 2.2. Since 0 < θ1 < θ2 < · · · < θκ ≤ 2π, condition (2.2) is automatically fulfilled when α ≤ 1/2. In that case it may happen that κ = 1. Furthermore, if κ = 1, then the second part of (2.2) shows that α ≤ 1/2. Thus for α > 1/2 the integer κ must be strictly larger than one. Theorem 2.1.2 Let H be a complex Hilbert space, and let T be a compact operator on H of finite order such that Ker T = Ker T ∗ . Suppose that there exist a point z0 ∈ C, a non-negative real number s0 , and an α-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, such that the following two conditions are satisfied: (i) I − zT is invertible for each z ∈ ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ,

2.1 First Hilbert Space Completeness Theorem

z0

21

s0



π α

Fig. 2.2 An α-admissible set of κ half-lines

(ii) there exists an integer m such that for every x ∈ H there exists a constant M = M(x) with (I − zT )−1 x ≤ M(1 + |z|m ) for z ∈ ray (θj ; z0 , s0 ), 1 ≤ j ≤ κ.

(2.3)

If, in addition, α > rod (T ), where rod (T ) is the order of T , then H = MT ⊕ Ker T , and hence T has a complete system of eigenvectors and generalised eigenvectors. The above theorem remains true if the condition α > rod (T ) in the final sentence is replaced by T has order at most r and α > r. To see this note that the statement T has order at most r is equivalent to rod (T ) ≤ r. But then α > r implies that α > rod (T ), and hence we get completeness. It will be convenient first to present the following two lemmas. The first is a further refinement of Lemma II.3.4 in [32]. The second lemma is just Theorem X.2.2 of [32]. Lemma 2.1.3 Let H be a complex Hilbert space, and let T be a compact operator on H of finite order such that Ker T = Ker T ∗ . Let P be the orthogonal projection onto M⊥ T , and define  ⊥ W = P T M ⊥ : M ⊥ T → MT .

(2.4)

T

Then W is a Volterra operator, rod (W ) ≤ rod (T ) < ∞, and Ker W ∗ = Ker T . Proof The fact that W is a Volterra operator is covered by Lemma II.3.4 in [32].

22

2 Completeness Theorems for Compact Hilbert Space Operators

The assumption Ker T = Ker T ∗ implies that H = Im T ⊕ Ker T . Since MT ⊂ Im T , it follows that H admits the orthogonal direct sum decomposition H = MT ⊕ L ⊕ Ker T where L = Im T  MT . In particular, M⊥ T = L ⊕ Ker T



and Im T = MT ⊕ L.

From these orthogonal direct sum decompositions, and the fact that the spaces MT and Im T are invariant under T , it follows that T and W admit the following block operator matrix representations: ⎡ T11 T12 T = ⎣ 0 T22 0 0

⎤ 0 0⎦ , 0

 W =

 T22 0 . 0 0

(2.5)

Thus T ∗ and W ∗ are represented by ⎡ ∗ ⎤ T11 0 0 ∗ T ∗ 0⎦ , T ∗ = ⎣T12 22 0 0 0



 ∗ 0 T22 . W = 0 0 ∗

  ∗ . Then the vector Now let x ∈ Ker T22 0 x 0  belongs to Ker T ∗ . According to our assumptions Ker T ∗ = Ker T . Thus x ∈ L and x also belongs to Ker T . This can ∗ is one-to-one, which is equivalent only happen when x = 0. We conclude that T22 to Ker W ∗ = Ker T . Next we deal with the order of the Volterra operator W . If the order of T is infinite, then obviously rod (W ) ≤ rod (T ). Therefore, in what follows we assume that T is of finite order. From the definition of the order operator, see  of a compact rod (T )+ converges. (1.61), it follows that for every  > 0, the series ∞ s (T ) j =1 j According to Proposition 1.3 of Chapter VI of [32] this implies that ∞

sj (W )rod (T )+ < ∞.

(2.6)

j =1

Therefore W has order at most rod (T ), that is, rod (W ) ≤ rod (T ).

 

Lemma 2.1.4 ([32, Theorem X.2.2 ]) Let V be a Volterra operator on a Hilbert  r space, and assume that ∞ j =1 sj (V ) < ∞ for some r > 0. Then for given δ > 0, there exists a constant C = C(δ) such that   (I − zV )−1  ≤ C exp δ|z|r

for z ∈ C.

(2.7)

Proof of Theorem 2.1.2 As in the proof of Lemma 2.1.3 we use the orthogonal direct sum decomposition H = MT ⊕ L ⊕ Ker T , and W is the Volterra operator defined by (2.4). We have to show that L consists of the zero vector only. For this

2.1 First Hilbert Space Completeness Theorem

23

purpose we need an estimate for resolvent of W which we derive from Lemma 2.1.4. Indeed, applying Lemma 2.1.4 with V = W and r = r0 (W )+ we see that for every  > 0 and δ > 0 there exists a constant C = C(, δ) such that the entire function z → (I − zW )−1 satisfies the exponential estimate

(I − zW )−1  ≤ C exp δ|z|rod (W )+

for z ∈ C.

(2.8)

Now, fix a vector x ∈ L. For every y ∈ L define f (z; y) = y, (I − (z + z0 )W )−1 x .

(2.9)

Recall that z0 is the base point of the α-admissible set of half-lines ray (θj ; z0 , s0 ), j = 1, . . . , κ. From the resolvent estimate (2.8) and the fact that  and δ in (2.8) are positive numbers, we see that f = f (·; y) is an entire function of order ρ at most rod (W ), that is, ρ ≤ rod (W ). Since f is continuous, there exists a constant M˜ 1 such that |f (z)| ≤ M˜ 1

for |z| ≤ s0 .

(2.10)

From (2.10), (2.3) and (I − zT )

−1



 (I − zT11 )−1 = 0 (I − zW )−1

 (2.11)

it follows that for some constant M˜ 2 |f (z)| ≤ M˜ 2 (1 + |z|m ) for z ∈ ray (θj ),

j = 1, 2, . . . , κ.

(2.12)

But then, since α > rod (T ) ≥ rod (W ) and ρ ≤ rod (W ), we see that the order ρ of f satisfies α > ρ, and we can apply Theorem 14.2.1, a corollary of a PhragménLindelöf theorem, to show that f (z, y) is a polynomial in z of degree at most m. On the other hand using the Taylor expansion at zero for z → (I − zW )−1 , we can write f (z; y) = y, (I − (z + z0 )W )−1 x = y, x + (z + z0 ) y, W x + (z + z0 )2 y, W 2 x + · · · .

(2.13)

Since f (z, y) is a polynomial in z of degree at most m, it follows from (2.13) that

y, W m+1 x = 0. Recall that both x and y belong to L. But then the operator matrix representation of W in (2.5) shows that m+1 x = y, W m+1 x = 0.

y, T22

(2.14)

24

2 Completeness Theorems for Compact Hilbert Space Operators

Here T22 is the operator defined by (2.5). From the proof of Lemma 2.1.3 we know ∗ is one-to-one. Using (2.14) and the fact that y is an arbitrary vector in that T22 m+1 m+1 L we obtain T22 x = 0. But x ∈ L is also arbitrary. Thus T22 = 0. From m+1 ∗ ∗ ∗ ∗ is onem+1 m+1 (T22 ) = (T22 ) = 0 we see that (T22 ) = 0. On the other hand, T22 ∗ m+1 ∗ = 0 and T22 is one-to-one can only happen when L = {0}. to-one. But (T22 ) Therefore L = {0} and we have completeness.   Before we continue with a refinement of Theorem 2.1.2, we first present a lemma and consider two corollaries. The following lemma can be proved using an approximation by finite rank operators. See Lemma X.4.2 of [32] and, in particular, formula (6) on page 173 of [32]. Lemma 2.1.5 Let H be a complex Hilbert space, let K be a compact selfadjoint operator on H with Ker K = {0}, and let T be a compact operator on H . Given ω, 0 < ω < π/2, define  = {reiθ | r ≥ 0, |θ | < ω or |π − θ | < ω }.

(2.15)

Then (I − zK)−1  ≤ lim

z∈, z→∞

1 sin ω

for z ∈ ,

and

T (I − zK)−1  = 0.

(2.16) (2.17)

Moreover, the convergence in (2.17) is uniform on . The first corollary presents a further specification of Theorem 2.1.2 for the case when the order of T is strictly less than 1/2. Corollary 2.1.6 Let H be a complex Hilbert space, and let T be a compact operator on H of order rod (T ) < 1/2 such that Ker T = Ker T ∗ . Suppose that there exist a point z0 ∈ C, a non-negative real number s0 , and a half-line ray (θ ) such that (i) I − zT is invertible for each z ∈ ray (θ ; z0 , s0 ), (ii) there exists an integer m such that for every x ∈ H there exists a constant M = M(x) with (I − zT )−1 x ≤ M(1 + |z|m ) for z ∈ ray (θ ; z0 , s0 ). Then H = MT ⊕ Ker T , and hence T has a complete system of eigenvectors and generalised eigenvectors. Proof Take α = 1/2. Without loss of generality we may assume that 0 < θ ≤ 2π. Note that (θ + 2π) − θ = 2π ≤ π/α. Hence we can apply Theorem 2.1.2 with α =

2.1 First Hilbert Space Completeness Theorem

25

1/2, κ = 1, and θ1 = θ . Since rod (T ) < 1/2 by assumption, we have α > rod (T ), and thus Theorem 2.1.2 yields the desired completeness result.   Next we derive the famous Keldysh completeness theorem (see Theorem X.4.1 in [32]) as a corollary of Theorem 2.1.2. Theorem 2.1.7 Let H be a complex Hilbert space, and let T = K(I + S) where K and S are compact operators on H with the following properties: K is selfadjoint with Ker K = {0} and I + S is invertible. If, in addition, rod (T ) is finite, then T has a complete system of eigenvectors and generalised eigenvectors. Proof (New Proof) We begin with some preliminaries. Let 0 < ω < π/2, and let  = {reiθ | r ≥ 0, |θ | < ω or |π − θ | < ω }. Define R = S(I + S)−1 , then R is compact and I − R = (I + S)−1 . Applying Lemma 2.1.5 with T = R and with K and ω as above, we see that the operator I − zK is invertible for each z ∈ . Moreover, formulas (2.16) and (2.17) are satisfied. Using the invertibility I − zK we obtain:

I − zT = I − zK(I + S) = (I + S)−1 − zK (I + S) = (I − R − zK) (I + S)

= I − R(I − zK)−1 (I − zK)(I + S),

z ∈ .

(2.18)

Furthermore, from (2.17) we know that there exists s0 ≥ 0 such that the estimate R(I − zK)−1  ≤ 1/2 for each z ∈  and |z| > s0 . Hence

−1  I − R(I − zK)−1  ≤ 2,

z ∈ , |z| > s0 .

But then, using (2.16), we see that (I − zT )−1  ≤

2 (I + S)−1 , sin ω

z ∈ , |z| > s0 .

(2.19)

Now fix an integer  ≥ 2 such that  > rod (T ), and put ω = π/(2) < π/2. Furthermore, let θj = ω + (j − 1)2ω,

j = 1, 2, . . . , κ = 2.

We shall apply the results of the preceding paragraph with this choice of ω. Since I + S is invertible and K is selfadjoint with Ker K = {0}, we have Ker T = Ker T ∗ = {0}. The fact that K is selfadjoint also implies that I − zK is invertible for each nonreal z. But then the calculation in (2.18) shows that the same holds true for I − zT . It follows that item (i) in Theorem 2.1.2 is satisfied. Finally, using (2.19) we see

26

2 Completeness Theorems for Compact Hilbert Space Operators

that (2.3) holds with m = 0. Thus item (ii) in Theorem 2.1.2 is also satisfied. So Theorem 2.1.2 implies that T has a complete system of eigenvectors and generalised eigenvectors.   Remark 2.1.8 Note that in the Keldysh theorem as presented in [32] the compact operator K is assumed to be of finite order rather than T . However, the fact that T = K(I + S) with I + S being invertible implies (use [32, Proposition VI.1.3]) that sj (T ) ≤ sj (K)I + S

and sj (K) ≤ sj (T )(I + S)−1  (j ≥ 1).

Thus for each r > 0 we have ∞ j =1

sj (T )r ≤ I + Sr



sj (K)r ,

j =1



sj (K)r ≤ (I + S)−1 r

j =1



sj (T )r .

j =1

These inequalities show that T is of finite order if and only if K is of finite order, and in that case rod (T ) = rod (K). Remark 2.1.9 The Keldysh theorem as presented in [32] also contains information on the location of the eigenvalues of the operator T = K(I + S). Indeed, from the inequality (2.19) it follows that the operator I − zT is invertible for z ∈ C \  and |z| ≥ s0 . Since (C \ ) ∪ {∞} is invariant under the map z → 1/z, we see that the non-zero eigenvalues of T in C \  lie outside the disc |λ| ≤ 1/s0 . This shows that all the eigenvalues of T , except for a finite number, are in .

2.2 Two Additional Completeness Theorems Next we present two additions to Theorem 2.1.2. Both theorems cover the case when the positive number α appearing in Theorem 2.1.2 is equal to the order of the compact operator T . Theorem 2.2.1 Let H be a complex Hilbert space, and let T be a compact operator on H of finite order rod (T ) > 0. Assume that the infimum in (1.61) is attained and that Ker T = Ker T ∗ . Suppose that there exist a point z0 ∈ C, a nonnegative real number s0 , and an α-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, such that the following two conditions are satisfied: (i) I − zT is invertible for each z ∈ ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ, (ii) there exists an integer m such that for every x ∈ H there exists a constant M = M(x) with (I −zT )−1 x ≤ M(1+|z|m )

for z ∈ ray (θj ; z0 , s0 ), 1 ≤ j ≤ κ.

(2.20)

2.2 Two Additional Completeness Theorems

27

If, in addition, α = rod (T ), then H = MT ⊕ Ker T , and hence T has a complete system of eigenvectors and generalised eigenvectors. Note that the condition “the infimum in (1.61) is attained” is equivalent to the  rod (T ) < ∞. Thus the above theorem s requirement that rod (T ) > 0 and ∞ j j =0 (T ) does not apply to operators of order zero. On the other hand, ro (T ) = 0 and  ∞ rod (T ) < ∞ is equivalent to T being of finite rank, and for that case j =0 sj (T ) completeness is trivially obtained. Proof We follow the same line of reasoning as in the proof of Theorem 2.1.2, but now employing two corollaries of the Phragmén-Lindelöf theorem, namely Theorems 14.2.1 and 14.2.2 in the final chapter of this book. In particular, as in the proof of Theorem 2.1.2 we use the orthogonal direct sum decomposition H = MT ⊕ L ⊕ Ker T , and W is the Volterra operator defined by (2.4). Again we have to show that L consists of the only.  zero vector rod (T ) < ∞. Since r (W ) ≤ r (T ) by According to our hypotheses ∞ s (T ) od od j =1 j Lemma 2.1.3 and sj (W ) ≤ j (T ) for each j = 1, 2, . . . by [32, Proposition VI.1.3.], s∞ it follows that the series j =1 sj (W )rod (W ) converges. But then, by Lemma 2.1.4 applied with V = W and r = rod (W ), we see that for every δ > 0 there exists a constant C = C(δ) such that

(I − zW )−1  ≤ C exp δ|z|rod (W ) for z ∈ C.

(2.21)

Next, fix a vector x ∈ L. For every y ∈ L define f = f (·; y) by (2.9). As before, we can use (2.20), see (2.10) and (2.11), to conclude that (2.12) holds. Using (2.21) ˜ we see that for each δ > 0 there exists some constant C˜ = C(δ) such that

for z ∈ C. |f (z)| = |f (z; y)| ≤ C˜ exp δ|z|rod (W )

(2.22)

Since δ > 0 is arbitrary, we conclude that lim sup r→∞

1 r rod (W )

log |f (reiϕ )| ≤ 0 for every 0 < ϕ ≤ 2π.

(2.23)

From (2.22) it also follows that the order ρ of f is at most rod (W ). Recall that α = rod (T ) ≥ rod (W ). Hence α ≥ ρ. If α > ρ we apply Theorem 14.2.1 as before, and if α = ρ we use (2.23) and apply Theorem 14.2.2. In both cases, using assumption (ii), the conclusion is that f is a polynomial of degree at most m. Using the Taylor expansion of (I −zW )−1 at zero, see (2.13), it follows that y, W m+1 x = 0, and we can complete the proof following the same line of reasoning as in the final paragraph of the proof of Theorem 2.1.2.   The next theorem is our second addition to Theorem 2.1.2. It will provide the first step towards necessary and sufficient conditions in order that T has a complete

28

2 Completeness Theorems for Compact Hilbert Space Operators

span of eigenvectors and generalised eigenvectors. As Theorem 2.2.1 it concerns the limiting case α = rod (T ). Theorem 2.2.2 Let H be a complex Hilbert space, and let T be a compact operator on H of finite order rod (T ) > 0 such that Ker T = Ker T ∗ . Suppose that there exist a point z0 ∈ C, a non-negative real number s0 , and an α-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, such that the following two conditions are satisfied: (i) I − zT is invertible for each z ∈ ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ, (ii) there exists an integer m such that for every x ∈ H there exists a constant M = M(x) with (I − zT )−1 x ≤ M(1 + |z|m ) for z ∈ ray (θj ; z0 , s0 ), 1 ≤ j ≤ κ.

(2.24)

(iii) for every x ∈ H , there exist complex numbers ϕj such that θj < ϕj < θj +1 (j = 1, 2, . . . , κ − 1), θκ < ϕκ < θ1 + 2π, and lim sup r→∞

1 log ||(I − reiϕj T )−1 x ≤ 0, rα

j = 1, 2, . . . , κ.

(2.25)

If, in addition, α = rod (T ) then H = MT ⊕ Ker T , and hence T has a complete span of eigenvectors and generalised eigenvectors. Proof We follow the same line of reasoning as in the proof of Theorem 2.1.2, employing Theorem 14.2.2 in place of Theorem 14.2.1. As in the proof of Theorem 2.1.2 we use the orthogonal direct sum decomposition H = MT ⊕ L ⊕ Ker T , and W is the Volterra operator defined by (2.4). Again we have to show that L consists of the zero vector only. Fix a vector x ∈ L. For every y ∈ L define f = f (·; y) by (2.9). As W is a Volterra operator of order at most rod (T ) (see Lemma 2.1.3), the function f is an entire function of order at most rod (T ). As before, we can use (2.24), see (2.10) and (2.11), to conclude that (2.12) holds. This shows that there exists a constant M˜ such that ˜ + |z|m ) |f (z)| ≤ M(1

for z ∈ ray (θj ),

j = 1, 2, . . . , κ.

From (2.25) and the operator block matrix representation of (I − zT )−1 given by (2.11), we also obtain that lim sup r→∞

1 log |f (reiϕj )| ≤ 0, r rod (T )

j = 1, 2, . . . , κ − 1.

But then we can apply Theorem 14.2.2 to show that f (z, y) is a polynomial in z of degree at most m. Using the Taylor expansion of (I − zW )−1 at zero, see (2.13), it

2.3 A First Application of Theorem 2.2.2

29

follows that y, W m+1 x = 0. Proceeding as in the final paragraph of the proof of Theorem 2.1.2 we conclude that L = {0}   Remark 2.2.3 Observe from the proof of Theorem 2.2.2 that condition (2.25) is only needed for x ∈ M⊥ T . In Sect. 2.4, we discuss a class of operators for which (2.25) holds for every x ∈ M⊥ T if and only if (2.25) holds for every x ∈ H . In general, we do not know whether these two conditions are equivalent even in a Hilbert space. If the conditions of Theorem 2.1.2 are satisfied, then it follows from Lemma 2.1.3 that (2.25) holds for every x ∈ M⊥ T.

2.3 A First Application of Theorem 2.2.2 We return to the integral operator Tg on L2 [0, 1] given by (1.40), that is,   Tg x (t) =



t



1

x(s) ds +

g(s)x(s) ds,

0

0 ≤ t ≤ 1.

(2.26)

0

Recall that g ∈ L2 [0, 1]. Since Tg is a rank one perturbation of a Volterra operator of order 1, the order of Tg is one, that is, rod (Tg ) = 1. Note that Tg is onot to-one. Indeed, if T x = 0, then the function y given by y(t) = 0 x(s) ds is 1 constant on [0, 1] with the constant being given by − 0 g(s)x(s) ds. But then y is differentiable, and its derivative y  = 0. On the other hand y  = x. Thus x = 0, and T is one-to-one. Furthermore, for a large class of g ∈ L2 [0, 1], the operator Tg also has a dense range. Thus for these g we have Ker Tg = Ker Tg∗ = {0}. However it may happen that g ∈ L2 [0, 1] and Tg has no dense range. For instance, this happens when g(t) = t − 1 for 0 ≤ t ≤ 1. See Sect. 2.5 for further details. Let us compute the resolvent of Tg explicitly. If y = (I −zTg )−1 x, then y satisfies the integral equation 

t

y(t) − z

 y(s) ds − z

0

1

g(s)y(s) ds = x(t),

0 ≤ t ≤ 1.

(2.27)

0

Assume for the moment that x is continuously differentiable. Then y is continuously differentiable too, and (2.27) is equivalent to the inhomogeneous differential equation y(t) ˙ − zy(t) = x(t) ˙

(2.28)

with boundary condition  y(0) − z 0

1

g(s)y(s) ds = x(0).

(2.29)

30

2 Completeness Theorems for Compact Hilbert Space Operators

Using the variation-of-constants formula to solve (2.28) and (2.29), we obtain 

t

y(t) = ezt y(0) +

ez(t −σ )x(σ ˙ ) dσ

0



t

= ezt y(0) + x(t) − ezt x(0) + z

ez(t −σ )x(σ ) dσ.

(2.30)

0

If we substitute the expression for y given by (2.30) in the boundary condition (2.29) and solve for y(0), then we arrive at y(0) = x(0) +



z



1

1−z

1

g(s)x(s) ds+ 0

zs

e g(s) ds 0

1  1

 +z 0

 ez(s−σ )g(s) ds x(σ ) dσ ,

σ

(2.31) where we have used that 



1

s

g(s) 0



ez(s−σ )x(σ ) dσ ds =

0

1  1 0

ez(s−σ )g(s) ds x(σ ) dσ.

σ

Using this expression for y(0) in (2.30) yields y(t) = x(t) +



zezt



1

1−z 0



+z t

−z

t

e−zσ x(σ ) dσ +

0

ez(s−σ )g(s) ds x(σ ) dσ

σ

 t  0

 g(s)x(s) ds +

0

ezs g(s) ds 1  1

1

σ

 ez(s−σ )g(s) ds x(σ ) dσ ,

0 ≤ t ≤ 1.

0

(2.32) Since the continuously differentiable functions on [0, 1] are dense in L2 [0, 1], we conclude that y = (I − zTg )−1 x is given by (2.32) for each x ∈ L2 [0, 1]. Summarising, we obtain the following lemma. Lemma 2.3.1 Let Tg be the operator given by (2.26), and put  q(z) = 1 − z 0

1

ezs g(s) ds.

(2.33)

2.3 A First Application of Theorem 2.2.2

31

Then I − zTg is invertible if and only if q(z) = 0, and in that case 1 P (z), q(z)

(I − zTg )−1 =

q(z) = 0,

(2.34)

where P (z) is the entire operator-valued function given by (P (z)x)(t) = q(z)x(t) + zezt



1



0



1  1

+z t

−z

t

g(s)x(s) ds +

e−zσ x(σ ) dσ +

0

ez(s−σ )g(s) ds x(σ ) dσ

σ

 t  0

σ

 ez(s−σ )g(s) ds x(σ ) dσ ,

0 ≤ t ≤ 1.

0

Recall (see the paragraph containing (1.40)) that Tg is a compact operator of order 1. Hence the same holds true for (I − zTg )−1 − I . From representation (2.32) for (I − zTg )−1 x, it follows that the resolvent (I − zTg )−1 is polynomially bounded along the imaginary axis. Note, however, that the resolvent of Tg is unbounded along ray (θ ) for all θ with π/2 < θ < 3π/2. Therefore the assumptions of Theorem 2.1.2 are not satisfied, and we cannot use Theorem 2.1.2 to decide whether or not Tg has a complete system of eigenvectors and generalised eigenvectors. It is our aim to apply Theorem 2.2.2 with T = Tg and α = 1. This brings us to the following corollary of Theorem 2.2.2. Corollary 2.3.2 Let g ∈ L2 [0, 1], and let Tg be the operator on L2 [0, 1] defined in (1.40), and let q be the entire function given by (2.33). Assume the following three conditions are satisfied: (i) there exists a real number σ0 and a positive real number δ0 such that |q(z)| > δ0 on the line Re z = σ0 , (ii) there exists a real number r0 and a positive real number δ1 such that |q(r)| > δ1 on the negative real axis for r < r0 , (iii) for every  > 0, there exists a real number r1 such that for r > r1 |q(r)| > e(1−)r .

(2.35)

Then the operator Tg is ono-to-one, the range of Tg is dense in L2 [0, 1], and the generalised eigenspace MT is dense in L2 [0, 1]. In particular, Tg has a complete span of eigenvectors and generalised eigenvectors. Proof We split the proof into four parts. in the first part we show that Tg is one-toone and has a dense range. Part 1.

We already know (see the first paragraph of the present section) that Tg is ono-to-one. To prove that Tg has a dense range we reason by contradiction using condition (ii). Indeed, assume that Tg does not have a dense range.

32

2 Completeness Theorems for Compact Hilbert Space Operators

Then by Proposition 2.5.2 there exist ϕ ∈ L2 [0, 1] such that 

1

g(t) =

ϕ(s) ds

(0 ≤ t ≤ 1) and g(0) = −1.

t

Using these conditions on g in the definition of the function q in (2.33) we obtain  1   1 ezs ϕ(t) dt ds q(z) = 1 − z 

0

=1−z

1  t

0



1  t

  ezs ds ϕ(t) dt = 1 −

0

1

=1−

s



0



1

ezt − 1 ϕ(t) dt = 1 −

0

 zezs ds ϕ(t) dt

0



1

ezt ϕ(t) dt +

0

ϕ(t) dt. 0

1 1 Now recall that 0 ϕ(t) dt = g(0) = −1. Thus q(z) = − 0 ezt ϕ(t) dt, and using the Cauchy-Schwarz inequality we obtain  |q(r)|2 = 



1 0

Part 2.

2 e2r − 1 ert ϕ(t) dt  ≤ 2r



1

|ϕ(t)|2 dt,

r ∈ R \ {0}.

0

Using this inequality we see that q(r) → 0 if r tends to −∞. Hence Tg does not have a dense range implies that q(r) → 0 if r tends to −∞. The latter contradicts condition (ii). Thus the range of Tg is dense in L2 [0, 1] as desired. In this part we verify the assumptions (i) and (ii) of Theorem 2.2.2. To do this we use the explicit representation for the resolvent (I − zTg )−1 given in (2.32). Recall that   y(t) = (I − zTg )−1 x (t). From (2.32) using the Cauchy-Schwarz inequality, we derive for z ∈ C with Re z ≥ 0 and q(z) = 0 that |y(t)| ≤ |x(t)| +

    |z|eRe zt  g2 + 1 + |z| eRe z(1−t ) + 1 g2 x2 , |q(z)|

where 0 ≤ t ≤ 1. Therefore, there exists a constant M1 such that for z ∈ C with Re z ≥ 0 and q(z) = 0 we have |z|2 eRe z x2 . y2 ≤ 1 + M1 |q(z)|

(2.36)

2.3 A First Application of Theorem 2.2.2

33

Similarly, for z ∈ C with Re z ≤ 0 and q(z) = 0, we have |y(t)| ≤ |x(t)| +

    |z|eRe zt  g2 + e−Re zt + |z| 1 + e−Re zt g2 x2 . |q(z)|

Therefore, there exists a constant M2 such that for z ∈ C with Re z ≤ 0 and q(z) = 0 |z|2 x2 . y2 ≤ 1 + M2 |q(z)|

Part 3.

(2.37)

Next we use the assumptions about the behaviour of |q(z)| for z ∈ C, where q(z) is given by (2.33). From the first two conditions in Corollary 2.3.2 and representation (2.32), it follows that we can choose a 1-admissible set with κ = 2, θ1 = π/2 and θ2 = 3π/2 such that condition (i) of Theorem 2.2.2 is satisfied with z0 = σ0 and s0 = 0. Furthermore, from estimate (2.37), it follows that condition (2.24) is also satisfied with m = 2. In this part we verify condition (iii) of Theorem 2.2.2. To do this we take ϕ1 = π, φ2 = 2π, and α = 1. From the third condition in Corollary 2.3.2 and estimate (2.36) we obtain

y2 ≤ 1 + M1 r 2 er x2 ,

r ≥ r0 .

Since  > 0 is arbitrary, we conclude that for every x ∈ L2 [0, 1] lim sup r→∞

1 log (I − rT )−1 x ≤ 0. r

From the second condition in Corollary 2.3.2 and estimate (2.37) we obtain that also for every x ∈ L2 [0, 1] lim sup r→∞

Part 4.

1 log (I + rT )−1 x ≤ 0. r

So condition (2.25) is satisfied, and therefore all assumptions of Theorem 2.2.2 are satisfied. We also know that Tg is a compact operator of order one (see, e.g., the first paragraph of the present section). Thus the fact that Tg has a complete span of eigenvectors and generalised eigenvectors now follows by applying Theorem 2.2.2.  

34

2 Completeness Theorems for Compact Hilbert Space Operators

2.3.1 Three Special Cases To illustrate the above corollary, we consider three special cases. First we take g(t) ≡ 1, next g(t) = t and finally g(t) ≡ −1, in each case on [0, 1]. Assume g(t) ≡ 1 on [0, 1]. By Proposition 2.5.2 the range of T1 is dense in L2 [0, 1]. Note that for this choice of g we have 

1

q(z) = 1 − z

ezs g(s) ds = 2 − ez .

0

The entire function q(z) is bounded from below by one both on the imaginary axis and on the negative real axis, and hence conditions (i) and (ii) in Corollary 2.3.2 are satisfied with σ0 = 0 and r0 = 0. Furthermore, on the positive real axis we have |q(r)| = |2 − ez | ≥ er − 2 >

1 r e , 2

r ≥ log 4.

Thus condition (iii) in Corollary 2.3.2 is also satisfied. Therefore the operator   T1 x (t) =



t



1

x(s) ds +

0

x(s) ds,

0 ≤ t ≤ 1,

0

has a complete span of eigenvectors and generalised eigenvectors. Next, we consider the case when g(t) = t on [0, 1]. Again by Proposition 2.5.2 the range of the corresponding operator Tg is dense in L2 [0, 1]. Furthermore 

1

q(z) = 1 − z

ezs s ds = 1 − ez +

0

=

 1 1 − ez z

1 (1 − ez )(z − 1). z

The entire function q(z) is bounded from below by e−1 (e − 1) on the line Re z = −1 and on the negative real axis for Re z ≤ −1. Hence conditions (i) and (ii) in Corollary 2.3.2 are satisfied with σ0 = −1 and r0 = −1. Furthermore, on the positive real axis we have |q(r)| =

 e2 − 1 r 1 1 |1 − ez | ≥ er − 1 ≥ e , 2 2 2e2

r ≥ 2.

Thus condition (iii) in Corollary 2.3.2 is also satisfied. Therefore the operator   Tg x (t) =

 0

t



1

x(s) ds +

sx(s) ds, 0

0 ≤ t ≤ 1,

(2.38)

2.4 Classical Completeness Theorems Revisited

35

has a complete span of eigenvectors and generalised eigenvectors. Finally, assume that g(t) ≡ −1. Then 

1

q(z) = 1 − z

ezs g(s) ds = ez

0

is not bounded from below in the left half plane. Therefore we cannot apply Corollary 2.3.2. However, note that in this particular case with g(t) ≡ −1, we have 

 T−1 x (t) =



t 0



1

x(s) ds − 0



1

x(s) ds =

x(s) ds,

0 ≤ t ≤ 1.

t

Thus T−1 is a Volterra operator and does not have a complete span of eigenvectors and generalised eigenvectors. These examples indicate that the conditions in Theorem 2.2.2 might actually be sharp. This is indeed the case and from the results that we are going to prove in the later sections, it actually follows that the two conditions (i) and (ii) in Corollary 2.3.2 are not only sufficient but also necessary for Tg to have a complete span of eigenvectors and generalised eigenvectors (see Theorem 7.4.1 and the final paragraphs of Sect. 7.4). Remark 2.3.3 Although in the above example the resolvent computations still could be done explicitly by solving a boundary value problem, these calculations quickly become rather quite involved for more complicated operators. One of the aims of this book is to present a simple approach towards resolvent computations using the notion of a characteristic matrix function, see Chap. 7 and the examples in Chaps. 8 and 10.

2.4 Classical Completeness Theorems Revisited At the end of Sect. 2.1 the Keldysh completeness theorem (Theorem X.4.1 in [32]) was derived as a corollary of Theorem 2.1.2. In the present section we show that several of the other classical completeness theorems presented in Part II of [32] can be obtained as by-products of Theorems 2.2.1 and 2.2.2. As we shall see our results also allow for sharper versions of some of these classical theorems. We begin with Theorem X.3.1 of [32] which we derive as a corollary of Theorem 2.2.1. Theorem 2.4.1 ([32, Theorem X.3.1]) Let H be a complex Hilbert space, and let T be a compact operator on H . Let α ≥ 1, and assume that the numerical range WT of T , WT = { T x, x ∈ C | x ∈ H, x = 1 },

36

2 Completeness Theorems for Compact Hilbert Space Operators

is contained in a closedα angle with vertex at zero and opening π/α. If α is such that the series ∞ j =1 sj (T ) converges, then T has a complete span of eigenvectors and generalised eigenvectors. Proof Since α ≥ 1, the numerical range WT is contained in a closed angle with vertex at zero and opening at most π. According to Lemma X.3.3 of [32], this implies that Ker T = Ker T ∗ . Furthermore, from identity (5) on page 168 of [32], it ˜ with vertex at zero and opening π/α such follows that there exists an open angle  that (I − zT )−1  ≤ 2|z|,

˜ z ∈ .

Therefore, we can choose an α-admissible set of half-lines in the complex plane, {ray (θj ) | j = 1, . . . , κ} for some appropriate choice of κ, such that (2.3) holds with m = 0. But then we can apply Theorem 2.1.2 (if α > rod (T )) or Theorem 2.2.1 (if α = rod (T )) with m = 1, z0 = 0, and s0 = 0 to obtain the desired result.   If we take α = 1 in Theorem 2.4.1, then we obtain the following classical completeness result for trace class operators as a corollary. Corollary 2.4.2 ([32, Theorem VII.8.1]) Let H be a complex Hilbert space, and   let T be a trace class operator on H . If the imaginary part 1/(2i) T − T ∗ is nonnegative, then T has a complete span of eigenvectors and generalised eigenvectors. Proof The fact that the imaginary part of T is non-negative implies (actually, is equivalent to the statement) that the numerical range WT of T is contained in the closed right half plane. Thus WT is contained in a closed angle  with vertex at zero and opening π. Moreover, since T is trace class, the series ∞ j =1 sj (T ) converges. Thus, indeed, applying Theorem 2.4.1 with α = 1 yields that T has a complete span of eigenvectors and generalised eigenvectors.   If T is an operator of order 1 that is not trace class (and hence the infimum in (1.61) is not attained), then Corollary 2.4.2 is not true. For example, the operator of integration, see (1.27), has a non-negative imaginary part, but the operator is Volterra and has no non-zero eigenvalues. As a corollary of Theorem 2.2.2 we can now present a sharper version of Theorem 2.4.1, i.e., a sharper version of [32, Theorem X.3.1]. Theorem 2.4.3 Let H be a complex Hilbert space, and let T be a compact operator on H of finite order rod (T ) ≥ 1. Assume that the numerical range WT of T , WT = { T x, x ∈ C | x ∈ H, x = 1 }, is contained in a closed angle with vertex at zero and opening π/rod (T ). If there exists a rod (T )-admissible set of real numbers 0 < ϕ1 < ϕ2 < · · · < ϕκ ≤ 2π such that lim sup r→∞

1 log ||(I − reiϕj T )−1  = 0, r rod (T )

j = 1, . . . , κ,

(2.39)

2.4 Classical Completeness Theorems Revisited

37

then T has a complete span of eigenvectors and generalised eigenvectors. Proof Since rod (T ) ≥ 1, the numerical range WT is contained in a closed angle with vertex at zero and opening less than π. According to Lemma X.3.3 of [32], this implies that Ker T = Ker T ∗ . Furthermore, from identity (5) on page 168 of [32], it follows that there exists an open angle  with vertex at zero and opening π/rod (T ) such that ((I − zT )−1  ≤ 2|z|,

z ∈ .

This estimate implies that given ϕ1 < ϕ2 < · · · < ϕκ as in the statement of the lemma, we can find θ1 < θ2 < · · · < θκ such that θj < ϕj < θj +1 (j = 1, 2, . . . , κ − 1), θκ < ϕκ < θ1 + 2π, and the ordered κ-set of rays {ray (θj ) | j = 1, . . . , κ} is an ρ-admissible set of half-lines in the complex plane. Hence (2.39) implies (2.25). Therefore the assumptions of Theorem 2.2.2 with m = 1, z0 = 0, and s0 = 0, are satisfied and the results follows by applying this theorem.   The next result, which is a generalisation of Corollary 2.4.2, is obtained as corollary of Theorem 2.4.3. Corollary 2.4.4 Let H be a complex Hilbert space, and let T be a compact  operator of order one. If the imaginary part 1/(2i) T − T ∗ is non-negative, and there exists a number ϕ, 0 < ϕ ≤ π, such that for every x ∈ H lim sup r→∞

1 log ||(I ± reiϕ T )−1 x = 0, r

(2.40)

then T has a complete span of eigenvectors and generalised eigenvectors. Proof As we know, the fact that the imaginary part of T is non-negative implies that the numerical range WT of T is contained in the closed right half plane. Thus WT is contained in a closed angle with vertex at zero and opening π. Since rod (T ) = 1 by assumption, (2.40) tells us that (2.39) is satisfied with κ = 2 and with ϕ1 = ϕ and ϕ2 = ϕ + π. Thus the completeness follows by applying Theorem 2.4.3.   Example 2.4.5 I To illustrate the above corollary, let us consider the compact operator T = 2iT1, where T1 is the integral operator on L2 [0, 1] given by (1.40) with g = 1. We seen that T has order one. Note that the imaginary part   have already T := 1/(2i) T − T ∗ of T is given by   T x (t) = 3



1

x(s) ds,

0 ≤ t ≤ 1,

0

and 

T x, x = 3

 0

1

2 x(s) ds  ≥ 0.

38

2 Completeness Theorems for Compact Hilbert Space Operators

Thus T is an operator of order 1 with non-zero imaginary part, and it follows from Corollary 2.4.4 that T has a complete span of eigenvectors and generalised eigenvectors if (2.40) is satisfied. In the next section, we will compute the resolvent of Tg given by (1.40) explicitly, see (2.32). Since (I − zT )−1 = (I − 2izT1 )−1 , we can use representation (2.32) with g = 1 and z → 2iz, and the estimates (2.36) and (2.37) to conclude that (2.40) is satisfied for ϕ = π/2. Thus T has a complete span of eigenvectors and generalised eigenvectors. See the next section for a further analysis of the integral operator Tg in (1.40). Remark 2.4.6 Not all classical completeness theorems appearing in Part II of [32] can be derived as corollaries of the results in Sects. 2.1 and 2.2. This seems in particular to be the case for [32, Theorems VII.8.2 and VIII.3.1].

2.5 The Dense Range Property In this section, we present some results that will be used to verify the assumptions appearing in Theorem 2.2.2 for some special cases. Let X be a Banach space, and assume that a linear operator T : X → X is given by T = V + R, where V and R are bounded linear operators. We assume that V has a dense range, and that R is a rank one operator given by Rx = η, x u,

(x ∈ X).

(2.41)

Here u and η are non-zero vectors in X and X∗ , respectively, where X∗ is the dual space of X. An example of such an operator is the operator Tg given by (1.40) which acts on X = L2 [0, 1]. We are interested to know when T has a dense range (cf., the first paragraph of the preceding section). The next lemma answers this question in the abstract setting described above. Lemma 2.5.1 Let T = V + R, where V and R are bounded linear operators on a Banach space X. Assume V has a dense range, and let R be the rank one operator given by (2.41). Then the range of T is not dense in X if and only if there exists ϕ in the dual space X∗ such that V ∗ϕ = η

and ϕ, u = −1.

(2.42)

Proof First observe that T has no dense range if and only if the null space of the dual operator T ∗ is non-trivial. Assume Im T is not dense. According to the above observation, there exists a non-zero functional ϕ ∈ X∗ such that T ∗ ϕ = 0. We first show that ϕ, u = 0.

2.5 The Dense Range Property

39

Since T ∗ = V ∗ + R ∗ and T ∗ ϕ = 0, we have V ∗ ϕ = −R ∗ ϕ. Using the definition of R in (2.41) we see that

ϕ, Rx = η, x ϕ, u ,

x ∈ X.

(2.43)

If ϕ, u = 0, then ϕ, Rx = 0 for each x ∈ X. Hence R ∗ ϕ = 0. Since V ∗ ϕ = −R ∗ ϕ, it follows that V ∗ ϕ = 0. However, by assumption, the range of V is dense. Thus V ∗ ϕ = 0 implies ϕ = 0, which contradicts our assumption on ϕ. Therefore

ϕ, u = 0. Put c = ϕ, u . From the result of the previous paragraph we know that c = 0. Using T ∗ = V ∗ + R ∗ and T ∗ ϕ = 0, we obtain

V ∗ ϕ, x = T ∗ ϕ, x − R ∗ ϕ, x = − R ∗ ϕ, x = − ϕ, Rx = − η, x ϕ, u = −c η, x ,

x ∈ X.

Hence V ∗ ϕ = −cη. Now put ϕ˜ = (−c)−1 ϕ. Then the identities in (2.42) hold with ϕ˜ in place of ϕ. Next we prove the reverse implication. Assume that the two conditions in (2.42) are satisfied. Then, using (2.43), we see that

ϕ, Rx = − η, x = − V ∗ ϕ, x = − ϕ, V x ,

x ∈ X.

Thus ϕ, T x = ϕ, V x + Rx = 0 for each x ∈ X, and hence T ∗ ϕ, x = 0 for each x ∈ X, which yields T ∗ ϕ = 0. The second condition in (2.42) also implies that ϕ = 0. Thus T has no dense range.   Next we apply the above lemma to the operator T = Tg given by (1.40). This yields the following proposition. Proposition 2.5.2 Let T be the operator on L2 [0, 1] given by   T x (t) =



t 0



1

x(s) ds +

x(s)g(s) ds

(0 ≤ t ≤ 1) x ∈ L2 [0, 1],

(2.44)

0

where g ∈ L2 [0, 1]. Then T has no dense range in L2 [0, 1] if and only if g ∈ Im V ∗ and g(0) = −1. In particular, the range of T is not dense in L2 [0, 1] when, for instance, g(t) = t − 1. On the other hand, if g is continuous in a neighbourhood of zero and g(0) = −1, then the range of T is dense in L2 [0, 1]. Proof We apply Lemma 2.5.1 with X equal to the Hilbert space L2 [0, 1], using Hilbert space adjoints in place of Banach space adjoints and for ·, · we take the

40

2 Completeness Theorems for Compact Hilbert Space Operators

usual Hilbert space inner product. In this case T = V + R, where V and R are the integral operators given by 

 V x (t) =





t

and

x(s) ds

 Rx (t) =

0



1

x(s)g(s) ds,

0 ≤ t ≤ 1.

0

(2.45) Thus T is a rank one perturbation of the operator of integration. The adjoint of V is given by 

 V y (t) = ∗



1

y(s) ds

(0 ≤ t ≤ 1)

y ∈ L2 [0, 1].

t

Thus V ∗ is one-to-one, and hence the range of V is dense in L2 [0, 1]. Next, let u and η be the functions defined by u(s) = 1 and η(s) = g(s)

(0 ≤ s ≤ 1).

Then, by the second part of (2.45), the identity (2.41) holds. Since η = g, the condition g ∈ Im V ∗ is equivalent to the requirement that there exists ϕ ∈ Im V ∗ such that V ∗ ϕ = η. Furthermore, 

1

u, ϕ =

ϕ(s) ds = η(0) = g(0),

0

and hence the second condition in (2.42) is equivalent to the requirement that g(0) = −1. Thus, applying Lemma 2.5.1, the range of T is not dense if and only if g ∈ Im V ∗ and g(0) = −1. Next, if g(t) = t − 1, then g = V ∗ e− , where e− is the function identically equal to −1. Thus g ∈ Im V ∗ and g(0) = −1, and hence for this choice of g the range of T is not dense in L2 [0, 1]. Finally, assume g is continuous in a neighbourhood of zero. Then g(0) is welldefined, and hence g(0) = −1 implies that the range of T is dense.   We proceed by specifying Lemma 2.5.1 for the case when X = C[0, 1]. From the Riesz representation theorem [25, Theorem 7.17], it follows that the dual space C[0, 1]∗ is isometrically isomorphic to the space of complex Radon measures and hence, using [25, Theorem 3.29], isometrically isomorphic to the space NBV [0, 1]. Here the space NBV [0, 1] consists of all functions η of bounded variation on [0, 1] normalised such that η(0) = 0 and η is continuous from the right on the open interval (0, 1). See [25, Section 3.5] for an introduction to the theory of functions of bounded variation and the Lebesgue-Stieltjes integral. See [68, Chapter 12] for an introduction to the Riemann-Stieltjes integral without using measure theory.

2.5 The Dense Range Property

41

Proposition 2.5.3 Let T be the operator on C[0, 1] defined by   T x (t) =



t



1

x(s) ds +

(0 ≤ t ≤ 1)

x(s) dη(s)

0

x ∈ C[0, 1],

(2.46)

0

where η ∈ NBV [0, 1]. Then the range of T is not dense in C[0, 1] if and only if there exists ϕ ∈ NBV [0, 1] such that 



t

η(t) = ϕ(1)t −

ϕ(s) ds

1

(0 ≤ t ≤ 1) and

0

dϕ(s) = −1.

(2.47)

0

Proof Note that T = V + R, where 



t

(V x)(t) =

x(s) ds

1

and (Rx)(t) =

x(s) dη(s)

0

(0 ≤ t ≤ 1).

(2.48)

0

Clearly, V has a dense range and R has rank one. Furthermore, in this case the vector u appearing in the second part of (2.42) is the function identically equal to one on [0, 1]. Let ϕ be any function in NBV [0, 1]. First we compute V ∗ ϕ. Take x ∈ C[0, 1]. Using partial integration (see, e.g., Theorem 12.12 in [68] or Theorem 6.3 in [82]) we have  s   1 ∗

V ϕ, x = ϕ, V x = dϕ(s) x(t) dt 0



s

= ϕ(s)

0

0

 = ϕ(1)

1   x(t) dt  −

1



ϕ(s)x(s) ds 0

1

x(t) dt −

0

0 1

ϕ(s)x(s) ds. 0

t Now, let ϕ˜ be the function defined by ϕ(t) ˜ = 0 ϕ(s) ds. Since ϕ ∈ NBV [0, 1], it follows that ϕ is Riemann integrable (see [22, Theorem 6.2.8]) and therefore we have that ϕ˜ ∈ NBV [0, 1] (see [25, Corollary 3.33]. The above calculation shows that ∗



V ϕ, x =

1

  x(t) d ϕ(1)t − ϕ(t) ˜ ,

x ∈ C[0, 1].

0

Thus for any ϕ ∈ NBV [0, 1] the function V ∗ ϕ is given by (V ∗ ϕ)(t) = ϕ(1)t − ϕ(t) ˜ = ϕ(1)t −



t

ϕ(s)dt, 0

0 ≤ t ≤ 1.

(2.49)

42

2 Completeness Theorems for Compact Hilbert Space Operators

Using u ≡ 1 (see the first paragraph of the proof) and the formula for V ∗ given by (2.49) we can apply Lemma 2.5.1 to show that T has no dense range if and only there exists ϕ ∈ NBV [0, 1] such that 



t

η(t) = ϕ(1)t −

1

(0 ≤ t ≤ 1) and

ϕ(s)ds 0

dϕ(s) = −1.

0

This proves that the condition (2.47) is necessary and sufficient for T to have no dense range.   Corollary 2.5.4 Let T be the operator on C[0, 1] defined by 

 T x (t) =



t

x(s) ds + x(1),

0 ≤ t ≤ 1 (x ∈ C[0, 1]).

0

Then T is one-to-one and the range of T is dense in C[0, 1]. t Proof Assume T x = 0. Then 0 x(s) ds = −x(1) for all 0 ≤ t ≤ 1, and hence x is identically zero. Thus T is one-to-one. To prove that T has a dense range we apply the preceding proposition with η being given by η(t) = 0 for 0 ≤ t < 1 and η(1) = 1. Assume that the range of T is not dense. Then there exists ϕ ∈ NBV [0, 1] such that (2.47) holds. This implies that  ϕ(t)t −

t

 ϕ(s) ds = 0

1

(0 ≤ t < 1) and ϕ(1) −

0

ϕ(s) ds = 1.

0

From the first identity it follows that ϕ(1) − second identity. Thus T has a dense range.

1 0

ϕ(s) ds = 1 which contradicts the  

Corollary 2.5.5 Let g ∈ L2 [0, 1], and let Tg be the operator on C[0, 1] defined by (1.40). Then Tg is one-to-one, and the range of Tg is not dense in C[0, 1] if and only 1 if there exists ϕ ∈ NBV [0, 1] such that 0 dϕ(s) = −1 and g(·) ˜ := ϕ(1) − ϕ(·) is equal to g(·) almost everywhere on [0, 1].

(2.50)

In particular, if g is continuous and g(1) = 0, then the range of Tg is dense in C[0, 1]. Proof The fact that Tg is ono-to-one is well-known. To prove the no dense range statement we apply Proposition 2.5.3 with T = Tg and 

t

η(t) =

g(s) ds 0

(0 ≤ t ≤ 1).

2.5 The Dense Range Property

43

In that case (2.47) is equivalent to 

t

 g(s) ds = ϕ(1)t −

0



t

ϕ(s) ds

(0 ≤ t ≤ 1)

1

and

0

dϕ(s) = −1.

0

(2.51) Thus, by Proposition 2.5.3, the range of Tg is not dense in C[0, 1] if and only if there exists ϕ ∈ NBV [0, 1] such that (2.51) holds. However, if ϕ ∈ NBV [0, 1] satisfies (2.51), then the function g˜ defined by g(t) ˜ := ϕ(1) − ϕ(t), 0 ≤ t ≤ 1, is equal to g almost everywhere on [0, 1]. Thus (2.50) holds true. Conversely, assume that (2.50) holds true, with ϕ ∈ NBV [0, 1] and with 1 0 dϕ(s) = −1. Then 

t



t

g(s) ds =

0



g(s) ˜ ds

0 t

= 0



t

ϕ(1) ds − 0



t

ϕ(s) ds = ϕ(1)t −

ϕ(s) ds

(0 ≤ t ≤ 1).

0

Thus (2.51) holds. To prove the final statement, assume that g is continuous and g(1) = 0. We prove that Im Tg is dense in C[0, 1] by contradiction. So assume that Im Tg is not dense in C[0, 1]. Then there exists a ϕ ∈ NBV [0, 1] such that (2.51) is fulfilled. Since g is continuous, it follows that g(t) ˜ = g(t) on 0 ≤ t ≤ 1. In particular, ϕ is continuous, ϕ(0) = 0, ϕ(1) = −1 and g(1) = g(1) ˜ = −1 − ϕ(1) = 0. But this contradicts the assumption g(1) = 0, which completes the proof.  

Chapter 3

Compact Hilbert Space Operators of Order One

In this chapter we further specify Theorem 2.2.2 for compact Hilbert space operators of order one. Such operators are Hilbert-Schmidt operators and not necessarily trace class operators. We begin with some remarks about the latter class of operators. Throughout this chapter we shall use terminology and basic facts from the theory of entire functions which can be found in Chap. 14.

3.1 Some Remarks About Trace Class Operators Let T be a trace class operator on a Hilbert space H . From Theorem VII.4.3 of [32] we know that the function q(z) := det(I − zT ) is an entire function with the additional property that for each  > 0 there exists a positive constant C such that |q(z)| = | det(I − zT )| ≤ C e|z|

for each z ∈ C.

In other words, q is an entire function of order at most one, and if its order is one, then it is of type zero. For the definitions of “order” and “type” of an entire function we refer to Sect. 14.1. Furthermore, see Theorem VII.7.1 of [32], the function z → P (z) := det(I − zT )(I − zT )−1

for z ∈ C with det(I − zT ) = 0,

(3.1)

extends to a vector-valued entire function with similar growth properties as satisfied by the function q. Indeed, from Theorem X.1.1 of [32], it follows that | det(I − zT )|(I − zT )−1  ≤

∞ 

(1 + |z|sj (T )).

(3.2)

j =1

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_3

45

46

3 Compact Hilbert Space Operators of Order One

To ∞estimate this infinite product, given δ > 0, choose m and δ1 < δ such that j =m+1 sj (T ) < δ1 . Since 1 + |z| ≤ exp(|z|), we obtain that ∞ 

(1 + |z|sj (T )) ≤ eδ1 |z|

j =1

m 

(1 + |z|sj (T )).

j =1

Next use that the last term on the right is a polynomial in z, and so we can choose δ2 |z| . Together this implies M and δ2 = δ − δ1 such that m j =1 (1 + |z|sj (T )) ≤ Me that | det(I − zT )|(I − zT )−1  ≤ Meδ|z| .

(3.3)

The latter inequality implies that the function defined by (3.1) is an entire function of order at most one, and if its order is one, then it is of type zero. Using the definitions of the functions q and P we have (I − zT )−1 =

1 P (z) q(z)

for z ∈ C such that q(z) = 0.

Factorisation of this type will be studied in the next chapter and also in the later chapters of this book. They will be of great help in deriving estimates on the growth of the resolvent. Our aim is to extend the above results to compact operators of order one (Theorem 3.3.1) and to use the extension to derive a completeness theorem (Theorem 3.4.1). The latter theorem will be developed further in terms of the distribution of the eigenvalues in Sect. 4.7.

3.2 Preliminaries About Hilbert-Schmidt Operators Since compact operators of order one are Hilbert-Schmidt, we start to recall some basic properties of Hilbert-Schmidt operators, and then, we continue with the study of special properties of compact operators of order one. We will present a basic theorem in which a sharp evaluation is given of the norm of the resolvent operator of a compact operator of order one in terms of the regularised determinant and the singular values. These estimates can be used to prove a strong completeness theorem for compact operators of order one. A compact linear operator T on a Hilbert space H is called a Hilbert-Schmidt operator if the singular values of T are square summable. The vector space of Hilbert-Schmidt operators is denoted by S2 . On S2 we can define a norm T 2 :=

∞  j =1

sj (T )2

1/2

,

(3.4)

3.2 Preliminaries About Hilbert-Schmidt Operators

47

and with this norm S2 becomes a normed operator ideal, whose norm comes from a Hilbert space structure in which the finite rank operators are dense (see Theorem VIII.2.3 of [32]). With every T ∈ S2 we can associate the number det2 (I − T ) =

ν(T )

(1 − λj (T ))eλj (T )

j =1

which is called the regularised determinant of the operator I − T (see Section IV.2 of [28]). For a Volterra operator V belonging to S2 , we define det2 (I − V ) = 1. For z ∈ C, we call the determinant det2 (I − zT ) =

ν(T )

(1 − zλj (T ))ezλj (T ) ,

(3.5)

j =1

the regularised characteristic determinant of T (see Section IV.2 of [28]). If T is trace class, then det2 (I − zT ) = det(I − zT )ezTr T , ν(T )

where det(I − zT ) = j =1 (1 − zλj (T )) denotes the determinant of a trace class operator, see Theorem VII.6.1 of [32]. The fundamental result given in Theorem IV.2.1 of [28] yields that the regularised characteristic determinant depends continuously upon the operator T , i.e., given a closed bounded set  ⊂ C, for every  > 0, there exists δ > 0 such that for any S ∈ S2 with T − S2 < δ, we have   max det2 (I − zT ) − det2 (I − zS) < , z∈

(3.6)

where  · 2 and det2 are defined in, respectively, (3.4) and (3.5). We continue with a number of consequences of this result (see [28, page 169]). Let T ∈ S2 , and let ϕ1 , ϕ2 , . . . be an arbitrary orthonormal basis of H . Then n  n det2 (I − T ) = lim det δj k − (T ϕj , ϕk ) j,k=1 e j=1 (T ϕj ,ϕj ) . n→∞

In fact, if Tn =

n

j =1 ( · , ϕj )T ϕj ,

(3.7)

then

n  n det2 (I − Tn ) = det δj k − (T ϕj , ϕk ) j,k=1 e j=1 (T ϕj ,ϕj ) .

(3.8)

Since the sequence of operators {Tn }∞ n=1 tends to T in the S2 -norm, we obtain det2 (I − T ) = lim det2 (I − Tn ). n→∞

(3.9)

48

3 Compact Hilbert Space Operators of Order One

Lemma 3.2.1 If T , S ∈ S2 , then the operator U = I − (I − T )(I − S) is a HilbertSchmidt operator and we have det2 (I − U )eTr T S = det2 (I − T )det2 (I − S).

(3.10)

Proof Since U = T + S − T S, the operator U is Hilbert-Schmidt. Now, let ϕ1 , ϕ2 , . . . be an arbitrary orthonormal basis of H , and let Pn the projection onto the linear span of ϕj , j = 1, 2, . . . , n. If Tn = Pn T Pn and Sn = Pn SPn , n ≥ 1, then det2 (I − Tn )det2 (I − Sn ) = det(I − Tn ) det(I − Sn )eTr (Tn +Sn )   = det (I − Tn )(I − Sn ) eTr (Tn +Sn )   = det2 (I − Tn )(I − Sn ) eTr Tn Sn .  

Passing to the limit we obtain (3.10). Lemma 3.2.2 If T ∈ S2 , then I − T is invertible if and only if det2 (I − T ) = 0.

Proof Suppose that I − T is not invertible, then 1 is an eigenvalue of finite type for T . From the definition (3.5), it follows that det2 (I − T ) = 0. On the other hand, if I − T is invertible, then (I − T )−1 = I − S, where S = −T (I − T )−1 . Thus S is Hilbert-Schmidt and using property (3.10), we have   det2 (I − T )det2 (I − S) = det2 (I − T )(I − S) eTr (T S) = eTr T S = 0,  

which completes the proof.

Lemma 3.2.3 Let T be a compact operator of order one, and assume that the series ∞ j =1 |λj (T )| is divergent, where λ1 (T ), λ2 (T ), λ3 (T ), . . . are the non-zero eigenvalues of T ordered by decreasing modulus taking multiplicities into account. Then T is not a trace class operator, and the function q(z) = det2 (I − zT ) is an entire function of order precisely one.  1+ < ∞ for all  > 0. From Proof Since T is of order one, we have ∞ j =1 sj (T ) [32, Corollary VI.2.4] it follows that ∞ j =1

|λj (T )|1+ ≤



sj (T )1+ < ∞ for every  > 0.

j =1

 On the other hand, the series ∞ j =1 |λj (T )| is divergent. This implies that the convergence exponent  of the sequence aj = λj (T )−1 , j = 1, 2, . . ., is precisely

3.3 Resolvent Estimates for Compact Operators of Order One

49

one. Furthermore, using ∞

|λj (T )|2 < ∞

and

j =1



|λj (T )| is divergent,

j =1

we can apply Proposition 14.8.10 with p = 1 and aj = λj (T )−1 , j = 1, 2, . . .. It follows that q is an entire function of order ρ ≤  = 1. Next we prove that T cannot be a trace class operator. Indeed, if T is trace class, then ∞

|λj (T )| ≤

j =1



sj (T ) < ∞.

j =1

 But then ∞ j =1 |λj (T )| is convergent which contradicts divergence condition. Thus T is not trace class. According to Definition 14.8.7 the function q has genus μ = 1. But then we can apply Hadamard’s factorisation theorem (Theorem 14.8.9) to show that the genus μ of q is less than or equal to the order ρ of q, that is, 1 ≤ ρ. We already know that ρ ≤ 1. Thus the order of q is precisely one.  

3.3 Resolvent Estimates for Compact Operators of Order One In this section we give a sharp norm estimate of (I − zT )−1 whenever T is compact operator of order one and det2 (I − zT ) does not vanish identically. This will be the first step in the resolvent estimates needed to prove the completeness results (see the next section). Theorem 3.3.1 Let T be a compact operator of order one. Then the map z → det2 (I − zT )(I − zT )−1 is a vector-valued entire function of order at most one, i.e., for every  > 0, there is an M > 0 and a δ > 0 such that |det2 (I − zT )| (I − zT )−1  ≤ Meδ|z|

1+

.

Proof Observe that Corollary VIII.1.2 in [32] implies that for every  > 0 ∞ j =1

sj (T 2 )

1+ 2



∞ j =1

sj (T )1+ .

(3.11)

50

3 Compact Hilbert Space Operators of Order One

Hence the operator T 2 is a trace class operator and from the inequality (3.2) we obtain | det(I − zT 2 )| (I − zT 2 )−1  ≤



 1 + |z|sj (T 2 ) .

(3.12)

j =1

To estimate this infinite product, fix 0 <  < 1, and put r = (1 + )/2 < 1. Then we can find β ≥ 1 such 1 + |z| ≤ exp(β|z|r ) for all z ∈ C. Next given that ∞ δ1 > 0, choose m such that j =m+1 sj (T 2 ) < δ1 β −1 . Therefore we can estimate the infinite product in (3.12) to obtain ∞ m

  2 δ1 |z|r 1 + |z|sj (T ) ≤ e 1 + |z|sj (T 2 ) . j =1

(3.13)

j =1

Since the last term on the right hand side is a polynomial, we conclude that for every δ > δ1 , there exists a constant C such that m 

r

(1 + |z|sj (T 2 )) ≤ Ce(δ−δ1 )|z| .

j =1

Using the latter inequality in (3.13) we obtain det(I − zT 2 )| (I − zT 2 )−1  ≤ Ceδ|z|

1+ 2

.

In particular, we see that for every δ > 0, there exists a constant C such that | det(I − z2 T 2 )| (I − z2 T 2 )−1  ≤ Ceδ|z|

1+

.

(3.14)

Now we use that (I − z2 T )2 = (I + zT )(I − zT ). Since T 2 is trace class, it follows from (3.10) that det(I − z2 T 2 ) = j ≥1 (I − z2 λj (T 2 ))    = j ≥1 (I − zλj (T )) j ≥1 (I + zλj (T ))    = j ≥1 (I − zλj (T )) j ≥1 (I + zλj (T ))



= j ≥1 (I − zλj (T ))ezλj (T ) j ≥1 (I + zλj (T ))e−zλj (T ) = det2 (I + zT ) det2 (I − zT ).

(3.15)

Finally, together with representation (3.15), estimate (3.14), and the fact that I + zT is a polynomial in z, we have that for every δ > 0, there exists a constant M such

3.4 A Completeness Theorem

51

that |det2 (I − zT )| (I − zT )

−1

   det(I − z2 T 2 )   (I − z2 T 2 )−1    ≤ I + zT   det2 (I + zT )  1+

≤ Meδ|z|

.  

Thus (3.11) holds true, and the proof is complete.

More precise knowledge of the density of the sequence {λn (T )} provides additional information about the exponential type of det2 (I − zT ), see Sect. 14.8 and Lemma 14.8.11 in particular. These ideas will be further exploited in the next chapter.

3.4 A Completeness Theorem We are now ready to specify Theorem 2.2.2 for compact operators of order one. Theorem 3.4.1 Let H be a complex Hilbert space, and let T be a compact operator of order one on H such that Ker T = Ker T ∗ . Suppose that the operator norm of (I − zT )−1 is polynomially bounded along the imaginary axis. If, in addition, lim sup r→∞

1 log det2 (I − rT )(I − rT )−1  ≤ r ≤ lim sup r→∞

lim sup r→∞

1 log |det2 (I − rT )|, r

(3.16)

1 log |det2 (I + rT )|, r

(3.17)

1 log det2 (I + rT )(I + rT )−1  ≤ r ≤ lim sup r→∞

then T has a complete span of eigenvectors and generalised eigenvectors. Proof We apply Theorem 2.2.2 with z0 = 0 and s0 = 0, and with k = 2,

θ1 = π/2,

θ2 = 3π/2 and ϕ1 = π,

ϕ2 = 2π.

Since (I − zT )−1 is assumed to be polynomially bounded along the imaginary axis, conditions (i) and (ii) in Theorem 2.2.2 are satisfied. It remains to check condition (iii). To do this we use that T is a Hilbert-Schmidt operator. For simplicity put q(z) = det2 (I − zT )

and P (z) = q(z)(I − zT )−1 .

52

3 Compact Hilbert Space Operators of Order One

Then q and P are both entire functions, and (3.16) and (3.17) can be rewritten as lim sup r→∞

lim sup r→∞

1 1 log P (r) ≤ lim sup log |q(r)|, r r→∞ r

(3.18)

1 1 log P (−r) ≤ lim sup log |q(−r)|. r r→∞ r

(3.19)

Now take x ∈ X, and let r > 0. Then

1 1 log q(r)−1 P (r)x = log |q(r)−1 | P (r)x r r

1 = − log |q(r)| + log P (r)x r

1 ≤ − log |q(r)| + log (P (r) x) r

1 = − log |q(r)| + log P (r) + log x . r Now using (3.18) we see that lim sup r→∞

1 log (I − reiϕ2 T )−1 x ≤ 0. r

In a similar way one shows that lim sup r→∞

1 log (I − reiϕ1 T )−1 x ≤ 0. r

Thus condition (iii) in Theorem 2.2.2 is satisfied too. Hence T has a complete span of eigenvectors and generalised eigenvectors.  

3.5 Supplementary Remarks So far, we have considered sufficient conditions for completeness of compact Hilbert space operators. Generally, it is not so easy to formulate necessary conditions, aside from trivial conditions such as the condition that the non-zero spectrum of T is infinite. In the next chapter, we present a class of linear operators for which it is possible to present necessary and sufficient conditions for completeness. From our results it follows in particular that if det2 (I −zT ) is an entire function of completely regular growth, see Definition 14.6.1, then the condition (3.16) and (3.17) are necessary and sufficient in order to have a complete span of eigenvectors and generalised eigenvectors for T , see Theorem 4.7.1. Furthermore, as we shall see, the

3.5 Supplementary Remarks

53

completeness results presented in the next chapter apply to Banach space operators as well. For the reader mainly interested in sufficient completeness results in the Hilbert space setting, we would like to stress that there are alternative proofs available in the large literature on the subject that in some cases are shorter than the proofs presented here in Chaps. 2 and 3. See [75] for a broad overview of the completeness problem with many old and new results. In our presentation of the material in Chaps. 2 and 3, we focused in the proofs on those arguments that can be generalised to the Banach space setting, and also allow for the formulation of necessary and sufficient conditions for completeness. Motivated by applications, we are particularly interested in noncompleteness results, and also in completeness results in a general Banach space setting for which the system of eigenvectors and generalised eigenvectors does not satisfy the (unconditional) base properties. For positive results regarding eigenfunction expansion in the context of boundary value problems in a Hilbert space setting, associated with finite-dimensional perturbations of Volterra operators, we refer to [5, 57, 81], and to [52] and the references given there. Using the theory of characteristic matrix functions and their deep connection to operator theory, the purpose of this book is to present necessary and sufficient conditions for completeness in terms of properties of the corresponding characteristic matrix function. This will be the main topic in Chaps. 4 and 5, and in the subsequent chapters we focus on classes of operators for which we can explicitly verify the necessary and sufficient conditions for completeness in terms of the corresponding characteristic matrix function.

Chapter 4

Completeness for a Class of Banach Space Operators

In this chapter we consider the completeness problem for a more general class of bounded linear operators then those considered in Chaps. 2 and 3. Moreover noncompact operators are included too, and we allow the underlying spaces to be complex Banach spaces.

4.1 A Special Class of Operators In this section we describe the class of operators, let q be a scalar entire function, and let P be an operator-valued entire function, P : C → L(X, X), where X is a Banach space. We say that an operator T on X is related to the pair of entire functions {q, P } if q is not zero in a neighbourhood of zero and for each z ∈ C the operator I − zT is invertible whenever q(z) = 0 and in that case q(z) = 0

⇒

(I − zT )−1 =

1 P (z). q(z)

(4.1)

Such an operator T is uniquely determined by the functions {q, P }. Indeed, assume that T1 on X is also related to the functions {q, P }. Then, by (4.1), the functions (I − zT )−1 and (I − zT1 )−1 coincide in a neighbourhood of zero, which implies that I − zT and I − zT1 coincide in a neighbourhood of zero, and thus T = T1 . Since q does not vanish identically, the fact that T is related to the entire functions {q, P } implies that the non-zero part of the spectrum of T consists of isolated points only. Thus for each 0 = λ ∈ σ (T ) the spectral projection Pλ is well defined, and so is the linear space MT := span{Im Pλ | 0 = λ ∈ σ (T )}.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_4

55

56

4 Completeness for a Class of Banach Space Operators

The problem is to obtain necessary and sufficient guaranteeing MT to be dense in X, or more generally, to find a natural closed linear subspace L in X such that MT ⊕ L is dense in X. The converse in (4.1) does not have to be true. More precisely, if I − zT is invertible, then it may happen that q(z) = 0. To see this, let T be related to the pair of entire functions {q, P }. Choose 0 = z0 ∈ C such that q(z0 ) = 0, and define q1 (z) = (z − z0 )q(z) and P1 (z) = (z − z0 )P (z). Then q1 is a scalar entire function, and q1 is not zero in a neighbourhood of zero. Now fix 0 = z ∈ C. Then, using (4.1), we obtain q1 (z) = 0 ⇒ q(z) = 0 ⇒

(I − zT )−1 =

1 1 P (z) = P1 (z). q(z) q1 (z)

It follows that T is related to the entire functions {q1 , P1 }. Since q(z0 ) = 0, we know that I − z0 T is invertible. However, q1 (z0 ) = 0. Thus the right hand side of (4.1) is satisfied for the pair {q1 , P1 } but the left hand side is not. We summarise this discussion in the following definition. Definition 4.1.1 We say that an operator T on X is determined by the (ordered pair of) entire functions {q, P } if q is not zero in a neighbourhood of zero and for each z ∈ C the operator I − zT is invertible whenever q(z) = 0 and that (4.1) holds. There are many examples of operators of the type described above. In fact, we already met a few examples. For instance, if T = Tg , where Tg is the operator appearing in the final paragraph of Sect. 1.1 (see (1.40)), then we know from (2.34) in Lemma 2.3.1 that (4.1) holds with 

1

q(z) = 1 − z

ezs g(s) ds

and P (z) = q(z)(I − zT )−1 .

0

Furthermore, if T is Hilbert-Schmidt, then we know from Lemmas 3.2.3 and 3.2.2 that the operator T is determined by the entire functions {q, P } with q(z) = det2 (I − zT )

  and P (z) = det2 (I − zT ) (I − zT )−1 .

Note that in both cases the pair {q, P } is optimal. To solve the problems referred to above we exploit the availability of detailed information about the asymptotic behaviour of entire functions using the PhragménLindelöf indicator function and entire functions of completely regular growth. The theory of these functions is reviewed in Chap. 14. On the one hand this final chapter presents an overview of the classical theory of entire functions, while on the other hand a number of new results are presented which are relevant for the results and examples discussed in the present and later chapters.

4.1 A Special Class of Operators

57

In what follows we shall need the notion of a scalar entire function “dominating” a vector-valued entire function. See Definition 14.7.1 where this notion is defined for scalar functions. Definition 4.1.2 Let q be an entire function of finite non-zero order ρ and finite type, and let F : C → X be a vector-valued entire function of order at most ρ (see the final section of Chap. 14 for the definition and properties of vector-valued entire functions). We say that q dominates F if for every x ∗ ∈ X∗ the function q dominates z → x ∗ , F (z) in the sense of Definition 14.7.1. We are now ready to formulate the main theorem of this chapter. For the notion of “completely regular growth” which plays an important role in the theorem, we refer to Definition 14.6.1. Theorem 4.1.3 Let T be a bounded linear operator on the Banach space X, and assume that there exists a positive integer k such that X = Im T k ⊕ Ker T k .

(4.2)

Assume that T is determined by the pair of entire functions {q, P }, where (1) the scalar entire function q is of finite non-zero order ρ and has infinitely many zeros, (2) the operator-valued entire function P (z) : X → X is of order at most ρ. Moreover, let Y be a Banach space, and suppose that there is an operator-valued polynomial : C → L(X, Y ) such that

(z)P (z)x = 0

( for all z ∈ C) ⇒ x = 0.

(4.3)

Furthermore, suppose that there exist a ρ-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, and a non-negative integer m and a positive constant M such that I − z0 T is invertible and for each z ∈ ray (θj ; z0 , s0 ) with j = 1, 2, . . . , κ, we have 0 < |q(z)| ≤ M(1 + |z|m )

and (I − zT )−1  ≤ M(1 + |z|m ).

(4.4)

Finally, suppose the entire function z → q(z + z0 ) is of completely regular growth, and define   FT , ,z0 := x ∈ X | q(z0 + z) dominates (z0 + z)P (z0 + z)x .

(4.5)

Then FT , ,z0 is a closed linear subspace of X and the closure of the generalised eigenspace of T has the following properties: MT ⊕ Ker T k = FT , ,z0

and X = MT ⊕ ST ,

(4.6)

58

4 Completeness for a Class of Banach Space Operators

FT , ,z0 ∩ ST = Ker T k ,

(4.7)

FT , ,z0 ∩ Im T k = MT ,

(4.8)

where ST is the closed linear subspace of X defined by ST = {x ∈ X | z → (I − zT )−1 x is an entire function}. Remark 4.1.4 If q in item (1) of the above theorem has only a finite number of zeros, then (4.1) implies that the non-zero part of the spectrum of T consists of a finite number of non-zero eigenvalues, λ1 , . . . , λn say. The latter implies that MT = ⊕1≤j ≤n Im Pλj

and ST = ∩1≤j ≤n Ker Pλj .

It follows that MT and ST are closed subspaces of X, and X = MT ⊕ ST . But in that case the completeness problem is not interesting. Both the first and second part of item (1) exclude that q is a polynomial. Finally, if q has a finite number of zeros, then this does not exclude that q can be of non-zero order. Indeed, assume q is a polynomial, and put q1 (z) = exp(z)q(z). Then q1 has a finite number of zeros, but the order of q1 is one. Moreover, (I − zT )−1 =

1 1 P (z) = P1 (z), q(z) q1 (z)

where P1 (z) is the entire function P1 (z) := exp(−z)P (z). Remark 4.1.5 If Y = X and (z) is the identity operator on X = Y for all z, then (4.3) is automatically fulfilled. Indeed, from condition (4.1) it follows that P (0) = q(0)I and q(0) = 0. But then 0 = P (0)x = q(0)x implies that x = 0, as desired. Remark 4.1.6 Since P (z) = q(z)(I −zT )−1 by (4.1), the inequalities in (4.4) imply that P (z) is also polynomially bounded on each ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ. Since (z) is a polynomial, it follows that the entire function z → (z)P (z) has the same property, that is, (z)P (z) is also polynomially bounded on each ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ. The latter will play an important role when we prove Theorem 4.1.3 in Sect. 4.4. In the more concrete settings treated in the next two chapters the choice of the entire functions q and P determining the operator T and the choice of the operator polynomial (z) will be rather natural. See the remarks preceding Theorems 5.2.6 and 6.2.1.

4.1 A Special Class of Operators

59

Corollary 4.1.7 Let T be a bounded linear operator on a complex Banach space X satisfying all the assumptions of Theorem 4.1.3. Then T has a complete span of eigenvectors and generalised eigenvectors if and only if FT , ,z0 = X, where FT , ,z0 is defined by (4.5). Moreover, in that case MT = Im T k

and ST = Ker T k .

Proof The “if and only if” statement follows from the first identity in (4.6). Furthermore, given FT , ,z0 = X, the first part of the final statement follows from the identity (4.8) while the second part follows from identity (4.7).   Note that condition (4.2) is satisfied with k = 1 whenever T is one-to-one and has dense range. Indeed, in that case we have Ker T = {0} and (4.2) reduces to Im T = X. Moreover, if X = H is a Hilbert space, then (4.2) holds with k = 1 if Ker T = Ker T ∗ , and in that case the right hand side of (4.2) is an orthogonal direct sum decomposition. Furthermore, condition (4.2) implies that Ker T k+1 = Ker T k and Im T k+1 = Im T k . In fact, the following proposition holds. Proposition 4.1.8 Let T be a bounded linear operator on a complex Banach space X. Assume that there exists a positive integer integer k such that condition (4.2) is satisfied. Then Ker T k = Ker T n

and Im T k = Im T n

(n = k + 1, k + 2, . . .).

(4.9)

In order to prove the above proposition we first derive an auxiliary result. Lemma 4.1.9 If a bounded linear operator T on a complex Banach space X is one-to-one and has dense range, then T n has the same properties for each positive integer n. Proof Assume T is one-to-one and has dense range, and let n be a positive integer. Then clearly T n is one-to-one. The fact that T has dense range implies that T ∗ on the Banach dual space X∗ is one-to-one, and hence (T ∗ )n is one-to-one too. But (T ∗ )n = (T n )∗ , and therefore

  Im T n = ⊥ (Im T n )⊥ = ⊥ Ker (T n )∗ = ⊥ {0} = X, which completes the proof.

 

Proof of Proposition 4.1.8 In the sequel X0 = Im T k and T0 = T |X0 . We split the proof into five parts. Part 1.

We prove the first part of (4.9). Let n ≥ k + 1 be an integer. Since Ker T k ⊂ Ker T n it suffices to show that Ker T n ⊂ Ker T k . Therefore take x ∈ Ker T n , and put y = T n−1 x. Then T y = T n x = 0. Thus y ∈ Ker T ⊂ Ker T k . Also, y ∈ Im T n−1 ⊂ Im T k . But then

60

Part 2.

Part 3.

4 Completeness for a Class of Banach Space Operators

y ∈ Ker T k ∩ Im T k , and (4.2) tells us that y = 0. Thus x ∈ Ker T k , and if follows that Ker T n ⊂ Ker T k . The operator T0 is one-to-one. Indeed, if T0 x0 = 0 for some x0 ∈ X0 . Then x0 ∈ Ker T ⊂ Ker T k . Thus x0 ∈ X0 ∩ Ker T k . But the latter intersection consists of the zero vector only. Therefore x0 = 0, and T0 is one-to-one. We prove that Im T n ⊂ Im T0n for each integer n ≥ k. Take y ∈ Im T n . Then y = T n x for some x ∈ X0 ⊕ Ker T k . Thus there exist uj ∈ X0 and vj ∈ Ker T k , j = 1, 2, . . . such that x = limj →∞ T n (uj + vj ). Since n ≥ k, we have T n vj = 0 for each j = 1, 2, . . . , and therefore y = T n x = lim T n (uj + vj ) = lim T n uj = lim T0n uj ∈ Im T0n . j →∞

Part 4.

Part 5.

j →∞

j →∞

Hence Im T n ⊂ Im T0n . We prove that T0 has dense range. From the previous step we know that Im T k ⊂ Im T0k , and thus X0 = Im T k ⊂ Im T0k ⊂ X0 . Therefore Im T0k = X0 . On the other hand Im T0k ⊂ Im T0 , and hence the range of T0 is dense in X0 . We prove the second part of (4.9). Let n ≥ k. We know that T0 is one-toone and has dense range (by Parts 2 and 4). Thus Im T0n = X0 . From Part 3 we know that Im T n ⊂ Im T0n ⊂ Im T n . Thus Im T n = Im T0n = X0 = Im T k , as desired.  

The remaining part of this chapter consists of six sections. The first presents some spectral preliminaries. The second shows that in order to prove Theorem 4.1.3 it suffices to consider the case when z0 is zero. The third section then contains the proof of Theorem 4.1.3. The fourth presents an example illustrating the main theorem. Some additional remarks are presented in the fifth section. In the sixth section we revisit Theorem 3.4.1.

4.2 Spectral Preliminaries II Before we prove Theorem 4.1.3, we first derive two lemmas. These two lemmas present some basic properties of sets of the form FT , ,z0 (see formula (4.5)), using vector-valued versions of the lemmas presented in Sect. 14.7. What follows may be seen as an addition to the spectral results presented in Sect. 1.2. Lemma 4.2.1 Let X be a complex Banach space, and let T be a bounded linear operator on X such that T is determined by the entire functions {q, P }. Furthermore, let Y be a Banach space, and let : C → L(X, Y ) be an operatorvalued polynomial. Assume that

4.2 Spectral Preliminaries II

61

(1) q is of finite non-zero order ρ and of completely regular growth, (2) (z)P (z) is of order at most ρ. Put   F := x ∈ X | q(z) dominates (z)P (z)x .

(4.10)

Then the set F is a T -invariant linear subspace of X, and x ∈ F if and only if T x ∈ F . Furthermore, for z ∈ C we have q(z) = 0 ⇒ (I − zT )−1 F ⊂ F .

(4.11)

Finally, if there exists a ρ-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, and a non-negative integer m such that for every x ∈ X there is a constant M = M(x), depending continuously on x, with  (z)P (z)xY ≤ M(1 + |z|m )

for z ∈ ray (θj ; z0 , s0 ) and j = 1, 2, . . . , κ,

(4.12)

then F is norm-closed. Proof Fix x1 ∈ F , x2 ∈ F , and y ∗ ∈ Y ∗ . Define fj (z) = y ∗ , (z)P (z)xj ,

z∈C

(j = 1, 2).

Put f (z) = y ∗ , (z)P (z)(x1 + λx2 ) . Note that f1 , f2 , and f are entire functions of order at most ρ, and f (z) = f1 (z)+λf2 (z). Furthermore, q dominates f1 and f2 . Since q is of completely regular growth, q is of normal type by definition. But then we can apply Lemma 14.7.2 to show that f is dominated by q. This holds for each choice of y ∗ in X∗ . It follows that x1 + λx2 ∈ F . Therefore F is a linear subspace of X. In order to show that F is T -invariant, we first observe that (I − zT )−1 T = T (I − zT )−1 =

 1 (I − zT )−1 − I , z

and therefore we can rewrite z (z)P (z)T x as follows z (z)P (z)T x = zq(z) (z)(I − zT )−1 T x = q(z) (z)(I − zT )−1 x − q(z) (z)x = (z)P (z)x − q(z) (z)x.

(4.13)

62

4 Completeness for a Class of Banach Space Operators

Since (z) is an operator polynomial, it follows from Theorem 14.1.1 that the order of q equals the order of q . Therefore Lemma 14.7.4 yields that q(z) dominates q(z) (z)x for x ∈ X. But then it follows from (4.13) that q(z) dominates z (z)P (z)T x if and only if q(z) dominates (z)P (z)x. Furthermore, again by Lemma 14.7.4, the function q(z) dominates z (z)P (z)T x if and only if q(z) dominates (z)P (z)T x. This proves that x ∈ F if and only if T x ∈ F . To prove (4.11), take z1 ∈ C and assume that q(z1 ) = 0. Then I − z1 T is invertible. Define y := (I − z1 T )−1 (z − z1 )T x = (z − z1 )T (I − z1 T )−1 x.

(4.14)

From what has been proved in the preceding paragraphs, it follows that x ∈ F if and only if (z − z1 )T x ∈ F . Therefore, it suffices to prove that x ∈ F implies that y ∈ F . Since z1 ∈ C with q(z1 ) = 0, we have for x ∈ F and for y given by (4.14) that (I − zT )−1 x − (I − z1 T )−1 x = (I − zT )−1 y, and hence

(z)P (z)y = (z)P (z)x − q(z) (z)(I − z1 T )−1 x.

(4.15)

From Lemma 14.7.4 it follows that q(z) dominates q(z) (z)(I − z1 T )−1 x. So we obtain from (4.15) that q(z) dominates (z)P (z)x if and only if q(z) dominates

(z)P (z)y, and this proves (4.11). Finally, using (4.12), we prove that F is closed. Let {xk }k≥1 be a sequence in F such that xk → x in X as k → ∞. We have to prove that x ∈ F . Fix y ∗ ∈ Y ∗ . Define fk : C → C (k ≥ 1), and f : C → C by f (z; y ∗) = y ∗ , (z)P (z)x

and fk (z; y ∗ ) = y ∗ , (z)P (z)xk ,

k ≥ 1.

Obviously, f and fk (k ≥ 1), are entire functions of order at most ρ. According to our hypotheses (see (4.12) in particular) there exists a ρ-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, such that fk (z) ≤ M(1 + |z|m )

for z ∈ ray (θj ; z0 , s0 ) and j = 1, 2, . . . , κ.

The fact that M does not depend on k follows from xk → x as k → ∞ using that M = M(x) in (4.12) depends continuously on x. Since fk → f pointwise we can apply Lemma 14.7.6 with q in place of g to show that q dominates f . Thus q dominates the entire function z → y ∗ , (z)P (z)x for each y ∗ ∈ Y ∗ . Hence x ∈ FT . Thus F is closed.  

4.2 Spectral Preliminaries II

63

Lemma 4.2.2 Let X be a complex Banach space, and let T be a bounded linear operator on X such that T is determined by the entire functions {q, P }. Furthermore, let Z be a Banach space, and let ∗ : C → L(X∗ , Z) be an operatorvalued polynomial. Assume that (1) q is of finite non-zero order ρ and of completely regular growth, (2) ∗ (z)P (z)∗ is of order at most ρ. Put   F∗ = x ∗ ∈ X∗ | q(z) dominates ∗ (z)P (z)∗ x ∗ .

(4.16)

Then the set F∗ is a T  -invariant linear subspace of X∗ . Moreover, if there exists a ρ-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, and a non-negative integer m such that for every x ∗ ∈ X∗ there is a constant M = M(x  ) with  ∗ (z)P (z)∗ x ∗ Z ≤ M(1 + |z|m )

for z ∈ ray (θj ; z0 , s0 ) and j = 1, 2, . . . , κ, . (4.17)

then the set F∗ is a weak∗ -closed linear subspace of X∗ . Proof Let T ∗ denote the Banach adjoint of T on the Banach dual space X∗ . Then the resolvent operator (I − zT ∗ )−1 admits the representation (I − zT ∗ )−1 =

1 P (z)∗ . q(z)

(4.18)

The above representation for (I − zT ∗ )−1 follows directly from the definition of the adjoint of a Banach space operator. Indeed, if x ∗ is a continuous linear functional on X, then for each x ∈ X we have

(I − zT ∗ )−1 x ∗ , x = x ∗ , (I − zT )−1 x = =

1

x ∗ , P (z)x q(z)

1

P (z)∗ x ∗ , x . q(z)

Thus we can apply Lemma 4.2.1 with X := X∗ and T := T ∗ to obtain that the set FT ∗ is a T ∗ -invariant linear subspace of X∗ . Note that P (z)∗ is of order at most ρ. Thus we can apply the first part of Lemma 4.2.1 with X := X∗ and T := T ∗ , with

64

4 Completeness for a Class of Banach Space Operators

P (z)∗ in place of P (z), and with ∗ (z) in place of (z). This yields that F∗ is a T ∗ -invariant linear subspace of X∗ . It remains to show F∗ is weak∗ -closed. To do this, let {xk∗ }k≥1 be a sequence in FT ∗ such that xk∗ → x in X as k → ∞. We have to prove that x ∗ belongs to FT ∗ as well. For y ∗ ∈ Y ∗ define fk : C → C, k ≥ 1, and f : C → C, respectively, by f (z; y ∗ ) = y ∗ , ∗ (z)P (z)∗ x ∗

and fk (z; y ∗ ) = y ∗ , ∗ (z)P (z)∗ xk∗ ,

k ≥ 1.

First note that fk → f pointwise and that the functions f and fk are entire functions of order at most ρ which are polynomially bounded on a ρ-admissible {ray (θj ; z0 , s0 ) | j = 1, . . . , κ} by (4.17). Since {xk∗ }k≥1 is a sequence in FT ∗ , it follows that q dominates fk for k ≥ 1. Therefore an application of Lemma 14.7.6 implies that q dominates f for every y ∗ ∈ Y ∗ . This completes the proof that FT ∗ is weak∗ -closed.  

4.3 Theorem 4.1.3 Reduced to the Case When z0 Is Zero In this section we show that it suffices to prove Theorem 4.1.3 in case the point z0 is zero. First note that if z0 is zero, then we can rephrase Theorem 4.1.3 as follows. Theorem 4.3.1 Let T be an bounded linear operator on the Banach space X, and assume that there exists a positive integer k such that X = Im T k ⊕ Ker T k .

(4.19)

Assume that T is determined by the entire functions {q, P }, where (1) the scalar entire function q is of finite non-zero order ρ, has infinite many zeros, and is of completely regular growth, (2) the operator-valued entire function P (z) : X → X is of order at most ρ. Moreover, let Y be a Banach space, and suppose that there is an operator-valued polynomial : C → L(X, Y ) such that

(z)P (z)x = 0

( for all z ∈ C) ⇒ x = 0.

(4.20)

Furthermore, suppose that there exist a ρ-admissible set of half-lines in the complex plane, {ray (θj ; 0, s0 ) | j = 1, . . . , κ}, and a non-negative integer m and a constant M such that for each z ∈ ray (θj ; 0, s0 ), j = 1, 2, . . . , κ, we have 0 < |q(z)| ≤ M(1 + |z|m )

and (I − zT )−1  ≤ M(1 + |z|m ).

(4.21)

4.3 Theorem 4.1.3 Reduced to the Case When z0 Is Zero

65

Finally, define   FT , := x ∈ X | q(z) dominates (z)P (z)x .

(4.22)

Then FT , is closed and the closure of the generalised eigenspace of T has the following properties: MT ⊕ Ker T k = FT ,

and X = MT ⊕ ST ,

(4.23)

FT , ∩ ST = Ker T k ,

(4.24)

FT , ∩ Im T k = MT .

(4.25)

Here as usual ST = {x ∈ X | z → (I − zT )−1 x is entire }. Theorem 4.3.1 not only appears as a special case of Theorem 4.1.3. In fact our aim is to show that the converse statement, that is, Theorem 4.1.3 appears as a corollary of Theorem 4.3.1, also holds true. In other words we shall prove the following proposition. Proposition 4.3.2 In order to prove Theorem 4.1.3 it suffices to prove the theorem for the case when z0 = 0. It will be convenient first to prove the following auxiliary result. Lemma 4.3.3 Let T be a bounded operator on the Banach space X determined by the entire functions {q, P }. Assume I − z0 T is invertible. Define T = (I − z0 T )−1 T ,

 q (z) = q(z + z0 ),

P(z) = P (z + z0 )(I − z0 T ).

 is determined by the pair { Then T q , P}. Furthermore, the following holds: (a) λ ∈ σ (T) \ {0} if and only if 1 + λz0 = 0

and

λ ∈ σ (T ) \ {0}; 1 + λz0

(4.26)

(b) μ ∈ σ (T ) \ {0} if and only if 1 − μz0 = 0

and

μ ∈ σ (T) \ {0}. 1 − μz0

(4.27)

Moreover the map λ →

λ , 1 + λz0

λ ∈ σ (T) \ {0}

(4.28)

66

4 Completeness for a Class of Banach Space Operators

is well defined and maps σ (T) \ {0} in a one-to-one way onto σ (T) \ {0}, and the inverse map is given by μ →

μ , 1 − μz0

μ ∈ σ (T ) \ {0}.

(4.29)

Furthermore, if λ ∈ σ (T) \ {0} and μ ∈ σ (T ) \ {0} and λ = μ(1 − μz0 )−1 , then  at λ and the Riesz projection of T at μ coincide. the Riesz projection of T Proof The fact that T is determined by the entire functions { q , P} follows from the following calculation:

−1 ) = q(z + z0 ) I − z(I − z0 T )−1 T )−1  q (z)(I − zT = q(z + z0 ) ((I − z0 T ) − zT )−1 (I − z0 T ) = q(z + z0 ) (I − (z + z0 )T )−1 (I − z0 T ) (z). = P (z + z0 )(I − zT0 ) = P  are determined by a pair of entire functions, both σ (T ) \ {0} Since both T and T  and σ (T ) \ {0} are countable sets consisting of isolated points only. Next, observe that  = λI − (I − z0 T )−1 T = (I − z0 T )−1 (λI − λz0 T − T ) λI − T

= (I − z0 T )−1 λI − (1 + λz0 )T .

(4.30)

Note that the various operators appearing in (4.30) commute with each other. Proof of Item (a) Let λ ∈ σ (T) \ {0}. If 1 + λz0 = 0, then the identity (4.30)  is invertible. The reduces to λI − T = λ(I − z0 T )−1 which implies that λI − T latter contradicts the fact that λ ∈ σ (T). Thus 1 + λz0 = 0. But then (4.30) can be rewritten as λI − T = (1 + λz0 )(I − z0 T )−1



λ I −T . 1 + λz0

(4.31)

The left hand side of the above identity is not invertible by assumption. This can only happen if the operator λ(1 + λz0 )−1 I − T in the right hand side of (4.31) is not invertible too. Hence we have proved the two statements in (4.26). Next, we consider the reverse implication. So assume that the two parts of (4.26) hold true. First, note that the second part of (4.26) implies that λ is not zero. Given the first part of (4.26), we can rewrite (4.30) as (4.31). In the present case we know that the operator λ(1 + λz0 )−1 I − T in the right hand side of (4.31) is not invertible.  is also not invertible. Thus λ ∈ σ (T) as desired. This implies that λ − T

4.3 Theorem 4.1.3 Reduced to the Case When z0 Is Zero

67

Proof of Item (b) Let μ ∈ σ (T ) \ {0}. If 1 − μz0 = 0. Then μ(I − z0 T ) = μI − T . In that case, since I − z0 T is invertible, μ = 0 implies that μI − T is invertible. However, the latter contradicts the fact that μ ∈ σ (T ). Thus 1 − μz0 = 0. Put μ . 1 − μz0

λ :=

(4.32)

Since μ = 0, the same holds true for λ. From the definition of λ in (4.32) it also follows that λ(1 − μz0 ) = μ and hence λ = μ(1 + λz0 ).

(4.33)

As λ = 0, we also have (1 + λz0 ) = 0. Using (4.30) we see that λI − T = (1 + λz0 )(I − z0 T )−1



λ I −T . 1 + λz0

(4.34)

According to the second part of (4.33) we have λ(1+λz0 )−1 = μ. Moreover, μI −T  is not invertible too. is not invertible by assumption. Thus (4.34) shows that λI − T We conclude that λ ∈ σ (T) \ {0}, and (4.27) is proved. We continue with reverse implication. Since 1 − μz0 = 0, we can define λ :=

μ 1 − μz0

(4.35)

According to the second part of (4.27), we have λ = 0. But then also μ = 0 because of (4.35). From (4.35) it follows that λ − λμz0 = μ, and hence λ = μ(1 + λz0 ). Since, λ = 0, we conclude that both μ and 1 + λz0 are none zero. Furthermore, from (4.30) we see that  = (1 + λz0 )(I − z0 T )−1 λI − T



λ I −T 1 + λz0

= (1 + λz0 )(I − z0 T )−1 (μI − T ).

(4.36)

From the second part of (4.27) and using (4.35), we know that λI − T is not invertible, because of (4.36), and hence the same is true for μI − T . We proved that μ ∈ σ (T ) \ {0}, as desired. The Remaining Part of the Proof We begin with an elementary observation. Let λ and μ be complex numbers, and assume that both 1 + λz0 and 1 − μz0 are non-zero. Then μ=

λ 1 + λz0

⇐⇒

λ=

μ . 1 − μz0

(4.37)

68

4 Completeness for a Class of Banach Space Operators

Given (4.37) and items (a) and (b), it is straightforward to prove the one-to-one and onto property of the maps in (4.28) and (4.29). Next, we prove the statement about the Riesz projections. Let λ ∈ σ (T) \ {0}  we denote the and μ ∈ σ (T ) \ {0}, and assume that λ = μ(1 − μz0 )−1 . By P  and λ0 , and P denotes the Riesz projections Riesz projection corresponding to T corresponding to T at μ0 . We shall prove that P = P . In order to do this, let 

be a  denote the corresponding open disc. We choose the circle with centre λ0 , and let  radius  r of the circle 

in such a way that  in  ∪ (i) there are no other spectral points of T

;   (ii) 1 + λz0 is non-zero for all λ ∈  ∪ . From item (i) it follows that = 1 P 2πi

 

)−1 d λ. (λI − T

Since 1 − μ0 z0 is invertible, the same holds true for 1 − μz0 provided the distance |μ − μ0 | is sufficiently small. This leads to a further restriction on the radius  r of the circle 

. Put

= {μ | μ =

λ , 1 + λz0

λ∈

},

 = {μ | μ =

λ , 1 + λz0

}. λ∈

Now we put an additional constraint on the radius  r of 

, namely in such a way that the following additional properties are satisfied: (iii) there are no other spectral point of T in  ∪ ; (iv) 1 − μz0 is non-zero for all μ ∈  ∪ . Next note that  1 + λz0 = 1 +

μ 1 − μz0

 z0 =

1 − μz0 + μz0 1 = . 1 − μz0 1 − μz0

Thus 1 + λz0 = (1 − μz0 )−1 . Using the latter identity in (4.34) we see that )−1 = (1 − μz0 )(I − z0 T )(μ − T )−1 . (λI − T Next, using λ = μ(1 − μz0 )−1 , we obtain that dλ (1 − μz0 ) − μ(−z0 ) 1 = = . 2 dμ (1 − μz0 ) (1 − μz0 )2

(4.38)

4.3 Theorem 4.1.3 Reduced to the Case When z0 Is Zero

69

It follows that 1 P = 2πi

 

(λI − T)−1 d λ

1 

1 (1 − μz0 )(μI − T )−1 d μ 2πi (1 − μz0 )2  1

1 (μI − T )−1 d μ = (I − z0 T ) 2πi 1 − μz0

= (I − z0 T )

= (I − z0 T )(I − z0 T )−1 P = P . Here the one but last equality results from the spectral calculation theory (see, e.g. [32].) We conclude that P = P , and the proof is complete.   Proof of Proposition 4.3.2 Let T satisfy all conditions in Theorem 4.1.3. In particular, we know that I − z0 T is invertible. Define T = (I − z0 T )−1 T ,

 q (z) = q(z + z0 ),

P(z) = P (z + z0 )(I − z0 T ).

The invertibility of I − z0 T implies that condition (4.2) remains true if T is replace . In fact, we have by T Ker Tj = Ker T j

and Im Tj = Im T j

(j = 0, 1, 2, . . .).

(4.39)

It follows that (4.19) holds true if and only if (4.2) is satisfied. From the preceding lemma we know that T is determined by the entire functions { q , P}. If f is an entire function, then f(z) = f (z + z0 ) is also entire. Moreover, f and f are of the same order (see Lemma 14.1.4) and have the same number of zeros. It follows that  q is of finite non-zero order ρ and as for q(z) the function  q (z) has infinitely many zeros. Also, as for P , the entire function P is of order at most ρ. We also know (see the sentence after (4.4)) that  q is of completely regular growth. Summarising we have (1)  q is of finite non-zero order ρ, has infinitely many zeros, and is of completely regular growth, (2) P is of order at most ρ. (z) = (z + z0 ). Then

 is an operator polynomial,

 :C→ Next, define

L(X, Y ). Moreover, (z)P(z)x = 0 ( for all ∈ C) ⇒

⇒ (z + z0 )P (z + z0 )(I − z0 T )x = 0 ( for all ∈ C) ⇒ (z)P (z)(I − z0 T )x = 0 ( for all ∈ C) ⇒ (I − z0 T )x = 0 ⇒ x = 0.

70

4 Completeness for a Class of Banach Space Operators

 and P in place of and P , respectively. Next we consider Hence (4.3) holds with

the ρ-admissible set of half-lines {ray (θj ; 0, s0 ) | j = 1, · · · , κ}. From the first inequality in (4.4) we know that there exist a non-negative integer m and a constant M such that for each z ∈ ray (θj ; 0, s0 ), j = 1, 2, . . . κ, we have   0 < |q((z + z0 ) ≤ M 1 + |z + z0 |m ,

(4.40)

  (I − (z + z0 )T )−1  ≤ M 1 + |z + z0 |m .

(4.41)

A straightforward calculation (considering two cases, |z| ≤ 1 and |z| > 1) we see that there exists a constant c > 0 such that   1 + |z + z0 |m ≤ c 1 + |z|m ,

for all z ∈ C.

Given the latter inequality, we obtain   01/r λj exists; (3) limr→∞ 1r {j | |λj | ≥ 1/r} < ∞. Then T has a complete span of eigenvectors and generalised eigenvectors if and only if conditions (4.76) and (4.77) are satisfied.

90

4 Completeness for a Class of Banach Space Operators

Proof From the assumption Ker T = Ker T ∗ , it follows that H = Ker T ⊕ Im T . Hence we can apply Theorem 4.1.3 with k = 1. Let q(z) = det2 (I − zT ). From Lemma 14.8.13, it follows that q is an entire function of completely regular growth and that the indicator function of q(z) is given by ⎧ ⎨hq (0) cos θ hq (θ ) = ⎩−h (π) cos θ q

when − π/2 ≤ θ ≤ π/2, when π/2 ≤ θ ≤ 3π/2.

Therefore, as in the proof of Theorem 4.7.1, the conditions (4.78) and (4.79) hold if and only if q dominates P (z)x for each x ∈ X with P (z)x = q(z)(I − zT )−1 x. But then Theorem 4.1.3, with k = 1, θ1 = π/2 and θ2 = (3π)/2, z0 = 0, and for some s0 > 0, yields that H = FT if and only if (4.76) and (4.77) are satisfied.   Final Remark In the second part of this book, we introduce a number of classes of operators defined in terms of a characteristic matrix function to which Theorem 4.1.3 can be applied. Using the characteristic matrix function, the verification of the condition whether q dominates P (z)x used in the definition of FT , can be reduced to a finite dimensional problem. Furthermore, the characterisation of MT given by the identities in (4.6) reduces from a vector-valued characterisation to a scalar-valued characterisation easily verifiable in examples. This will drastically simplify the computations, see Chap. 7.

Chapter 5

Characteristic Matrix Functions for a Class of Operators

In this chapter we extend the notion of characteristic matrix function, as defined in [48] for unbounded operators, to bounded operators. Classes of Banach space operators are introduced for which the assumptions of Theorem 4.1.3 can easily be verified.

5.1 Equivalence and Jordan Chains Let X, Y, X , Y  be Banach spaces, and suppose that L : U → L(X, Y ) and M : U → L(X , Y  ) are operator-valued functions, analytic on the open subset U ⊂ C. The two operator-valued functions L and M are called equivalent on U (see, for example, [31] or Section 2.4 in [6]) if there exist analytic operator-valued functions E : U → L(X , X) and F : U → L(Y, Y  ), whose values are invertible operators, such that, M(z) = F (z)L(z)E(z),

z ∈ U.

(5.1)

Let L : U → L(X, Y ) be an analytic operator-valued function. A point λ0 ∈ U is called a root of L if there exists a vector x0 ∈ X, x0 = 0, such that, L(λ0 )x0 = 0. An ordered set (x0 , x1 , . . . , xk−1 ) of vectors in X is called a Jordan chain for L at λ0 if x0 = 0 and L(z)[x0 + (z − λ0 )x1 + · · · + (z − λ0 )k−1 xk−1 ] = O((z − λ0 )k ).

(5.2)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_5

91

92

5 Characteristic Matrix Functions for a Class of Operators

The number k is called the length of the chain and the maximal length of the chain starting with x0 is called the rank of x0 . The analytic function k−1

(z − λ0 )l xl

(5.3)

l=0

in (5.2) is called a root function of L corresponding to λ0 . Proposition 5.1.1 If two analytic operator functions L and M are equivalent on U, then there is a one-to-one correspondence between their Jordan chains. Proof The equivalence relation (5.1) is symmetric, and thus it suffices to show that Jordan chains for L yield Jordan chains for M. If (x0 , . . . , xk−1 ) is a Jordan chain for L at λ0 , then E(z)−1 (x0 + (z − λ0 )x1 + · · · + (z − λ0 )k−1 xk−1 ) = y0 + (z − λ0 )y1 + · · · + (z − λ0 )k−1 yk−1 + h.o.t. and (y0 , . . . , yk−1 ) is a Jordan chain for M at λ0 . Here h.o.t. stands for the higher order terms. Furthermore, the equivalence yields that the null spaces Ker L(λ0 ) and Ker M(λ0 ) are isomorphic and this proves the proposition.  

5.1.1 Entire Matrix Functions Let  : C → L(Cn ) denote an entire n × n matrix function. If the determinant of  is not identically zero, then we define m(λ, ) to be the order of λ as a zero of det  and k(λ, ) is the order of λ as pole of the matrix function (·)−1 . Let λ0 be an isolated root of , then the Jordan chains for  at λ0 have finite length, and we can organise the chains according to the procedure described by Gohberg and Sigal [30]. Choose an eigenvector, say x1,0 , with maximal rank, say r1 . Next, choose a Jordan chain (x1,0 , . . . , x1,r1 −1 ) of length r1 and let N1 be the complement in Ker (λ0 ) of the subspace spanned by x1,0. In N1 we choose an eigenvector x2,0 of maximal rank, say r2 , and let (x2,0 , . . . , x2,r2 −1 ) be a corresponding Jordan chain of length r2 . We continue as follows, let N2 be the complement in N1 of the subspace spanned by x2,0 and replace N1 by N2 in the above described procedure.

5.1 Equivalence and Jordan Chains

93

In this way, we obtain a basis {x1,0, . . . , xp,0 } of Ker (λ0 ) and a corresponding canonical system of Jordan chains x1,0 , . . . , x1,r1 −1 , x2,0 , . . . , x2,r2 −1 , xp,0 , . . . , xp,rp −1

(5.4)

for  at λ0 . It is easy to see that the rank of any eigenvector x0 corresponding to the root λ0 is always equal to one of the rj for 1 ≤ j ≤ p. Thus, the integers r1 , . . . , rp do not depend on the particular choices made in the procedure described above and are called the zero-multiplicities of  at λ0 . Their sum r1 + · · · + rp is called the algebraic multiplicity of  at λ0 and will be denoted by m((λ0 )). The Case When  Is a Linear Matrix Function Assume (z) = zI − A with A an n × n matrix. In that case a Jordan chain (x0 , . . . , xk−1 ) for  at λ0 satisfies (A − λ0 )x0 = 0,

(A − λ0 )x1 = x0 , . . . , (A − λ0 )xk−1 = xk−2 ,

and hence {(xi,0 , . . . , xi,ri −1 ) | i = 1, 2, . . . , p} is a canonical basis of eigenvectors and generalised eigenvectors for  at λ0 . Thus, in the linear case the algebraic multiplicity of  at λ0 equals the dimension of the generalised eigenspace of A at λ0 , and the definition of the algebraic multiplicity of  at λ0 using Jordan chains is a proper extension to the nonlinear case. In general, however, the system of Jordan chains for  at λ0 is not a basis for the generalised null space of (λ0 ), and hence dim Ker (λ0 )k = m((λ0 )),

where k = 1, 2, 3 . . . .

(5.5)

For example, if  : C → L(C2 ) is given by   (z) = diag (z − λ0 )2 , (z − λ0 )2 , then we have for k = 1, 2, 3, . . ., dim Ker (λ0 )k = 2 and m((λ0 )) = 4. Jordan Chains and Local Smith Form Next we recall the connection between the Jordan chains and the local Smith form for an entire n × n matrix function  : C → L(Cn ) with det  ≡ 0. Let λ0 ∈ C. Then there exist a neighbourhood U of λ0 and analytic matrix functions E and F on U whose values are invertible operators such that (z) = F (z)D(z)E(z),

z ∈ U,

(5.6)

94

5 Characteristic Matrix Functions for a Class of Operators

where   D(z) = diag (z − λ0 )ν1 , . . . , (z − λ0 )νn ,

z ∈ U.

(5.7)

The integers {ν1 , . . . , νn } are uniquely determined by  and the diagonal matrix D in (5.7) is called the local Smith form for  at λ0 . See [34, Theorems 1.2 and 1.3] for a proof of (5.6)–(5.7). For the local Smith form D the Jordan chains are easily determined and it is clear that the set of zero multiplicities is given by {ν1 , . . . , νn }. Hence, the equivalence (5.6) and Proposition 5.1.1 show that the algebraic multiplicity of  at λ is given by m((λ)) =

n

νl .

l=1

On the other hand the equivalence (5.6) yields det (z) = det F (z)(z − λ0 )

n

l=1 νl

det E(z)

with det E(λ0 ) = 0 and det F (λ0 ) = 0. So, the multiplicity of λ0 as zero of det  equals m(λ0 , ) =

n

νl

l=1

as well. This shows that the algebraic multiplicity of  at λ equals the multiplicity of λ as zero of det , i.e., m(λ, ) = m((λ)). In what follows we consider a finite meromorphic operator function R : U → L(X, Y ) on an open set U ⊂ C. This implies that the Laurent expansion of R in a neighbourhood O of λ0 ∈ U has the form ∞

R(z) =

(z − λ0 )l Rl ,

z ∈ O.

(5.8)

l=−n

where Rj , j = −n, −n + 1, −n + 2, . . . are operators of finite rank. The singular part in the Laurent expansion of R at λ0 is denoted by −1     R −,λ = R(z) −,λ := (z − λ0 )ν Rν , 0

0

ν=−n

z ∈ O,

(5.9)

5.1 Equivalence and Jordan Chains

95

and the trace of the singular part of R is defined by −1     (z − λ0 )ν Tr Rν , Tr R −,λ :=

z ∈ O,

0

(5.10)

ν=−n

  where Tr Rν denotes the trace of the finite rank operator Rν . An interesting application of the local Smith form (to which we return in the next section) is the following matrix-valued multiplicity theorem first proved in the Gohberg-Sigal paper [30]. Theorem 5.1.2 Let  : C → L(Cn ) be an entire n × n matrix function with det  ≡ 0. If λ0 is an isolated zero of det , then

1  d (z)−1 (z) dz , m(λ0 , ) = Tr 2πi λ0 dz

(5.11)

where λ0 is a small circle surrounding λ0 and no other zeros of det . Here Tr (A) denotes the trace of an n × n matrix A. The proof of (5.11) is based on the local Smith form (5.6) and the following lemma due to Gohberg and Sigal [30]. Lemma 5.1.3 Let L and M be finite meromorphic operator-valued functions on C. If λ0 ∈ C is a pole of L, then     Tr LM −,λ = Tr ML −,λ . 0

(5.12)

0

Proof Let L1 and M1 be the operator-valued functions defined by   L1 (z) := L(z) − L −,λ

  and M1 (z) := M(z) − M −,λ .

0

0

Then L1 and M1 are analytic at z = λ0 and     L1 M1 −,λ = M1 L1 −,λ = 0. 0

0

Furthermore           Tr LM −,λ = Tr L −,λ M −,λ + Tr L −,λ M1 −,λ 0 0 0 0 0     + Tr L1 M −,λ −,λ 0 0          = Tr M −,λ L −,λ + Tr L −,λ M1 −,λ 0 0 0 0      + Tr L1 M −,λ −,λ 0 0           = Tr M −,λ L −,λ + Tr M1 L −,λ −,λ 0

0

0

0

96

5 Characteristic Matrix Functions for a Class of Operators

    + Tr M −,λ L1 −,λ 0 0          = Tr M −,λ L −,λ + Tr M1 L −,λ −,λ 0 0 0 0    + Tr M −,λ L1 −,λ 0 0   = Tr ML −,λ , 0

 

and (5.12) is proved. Proof of Theorem 5.1.2. First we use the local Smith form (5.6) to rewrite (z)

−1

  d d −1 −1 −1 (z) = E(z) D(z) F (z) F (z) D(z)E(z) dz dz   d d D(z) E(z) + E(z)−1 E(z). + E(z)−1 D(z)−1 dz dz

Next we take the trace of the singular part on both sides of the identity and use Lemma 5.1.3 to obtain       d d d Tr (z)−1 (z) −,λ = Tr F (z)−1 F (z) −,λ + Tr D(z)−1 D(z) −,λ 0 0 0 dz dz dz   d + Tr E(z)−1 E(z) −,λ 0 dz   d = Tr D(z)−1 D(z) −,λ 0 dz Now use that D is the diagonal matrix function given by (5.7) to derive that n   d νl . Tr D(z)−1 D(z) −,λ = 0 dz (z − λ0 ) l=1

Together this shows the identity Tr

1 

1  d d (z)−1 (z) dz = Tr D(z)−1 D(z) dz 2πi λ0 dz 2πi λ0 dz =

n

νl = m(λ0 , ).

l=1

  A Final Remark The factorisation (5.6) can also be seen as a special case of an abstract ring theoretical statement concerning matrices with entries in a principal ideal domain R (see Sections 8 and 10 in Chapter III of Jacobson [47]). To get

5.2 The Characteristic Matrix Function

97

the representation (5.7) one takes for R the ring of all germs of complex functions analytic at λ0 .

5.2 The Characteristic Matrix Function The characteristic matrix function defined in the present section is an extension of the one defined in [48] and adapted for bounded operators. Definition 5.2.1 Let T be a bounded operator on a complex Banach space X, and let  : C → L(Cn ) be an entire n × n matrix function. We call  a characteristic matrix function for T on C if there exist entire operator functions E and F , E : C → L(Cn ⊕ X) and F : C → L(Cn ⊕ X), whose values are invertible operators, such that     I n (z) 0 0 = F (z) C E(z), z ∈ C. (5.13) 0 IX 0 I − zT The operator function appearing in the left hand side of (5.13) is called the Xextension of . The equivalence of the type in (5.13) is closely related to the notion of “equivalence after extension” appearing in [7, Section 4.4]. Note that the equivalence relation (5.13) implies that det (z) does not vanish identically. In fact, taking z = 0 in (5.13) we see that det (0) = 0. Furthermore, I − zT is invertible if and only if det (z) is non-zero, and in that case E(z)

    0 I n (z)−1 0 F (z) = C , 0 IX 0 (I − zT )−1

det (z) = 0.

(5.14)

The operator functions F and E appearing in (5.13) are also described as 2 × 2 matrix functions of which the entries are entire operator functions too. For instance, for F we have        c F11 (z) F12 (z) c F11 (z)c + F12 (z)x F (z) = . = x F21 (z) F22 (z) x F21 (z)c + F22 (z)x Using these partitioning of E(z) and F (z) the equivalence relation (5.14) yields a useful representation for the resolvent operator (I − zT )−1 of T , namely (I − zT )−1 = E21 (z)(z)−1 F12 (z) + E22 (z)F22 (z).

(5.15)

98

5 Characteristic Matrix Functions for a Class of Operators

If Q(z) := E(z)−1 and R(z) := F (z)−1 , then it follows from (5.14) that Q12 (z)(I − zT )−1 = (z)−1 F12 (z),

(5.16)

(I − zT )−1 R21 (z) = E21 (z)(z)−1 .

(5.17)

Since the zeros of det (z) do not have an accumulation point in C, we see from (5.15) that the non-zero part of the spectrum of T consists of isolated eigenvalues only. The next theorem is an adapted version of Theorem 2.1 of [48] for bounded operators and justifies the terminology introduced above. Theorem 5.2.2 Let T be a bounded linear operator on a Banach space X, and let  be a characteristic matrix function for T such that det  ≡ 0. Then (i) the set σ (T ) \ {0} consists of eigenvalues of finite type and σ (T ) \ {0} = {λ−1 ∈ C | det (λ) = 0};

(5.18)

−1 (ii) for λ−1 0 ∈ σ (T ) \ {0}, the partial multiplicities of λ0 as an eigenvalue of T are equal to the zero-multiplicities of  at λ0 ; −1 (iii) for λ−1 ∈ σ (T ) \ {0}, the algebraic multiplicity m(T , λ−1 0 0 ) of λ0 as an eigenvalue of T equals m = m(λ0 , ), the order of λ0 as a zero of det ; −1 −1 (iv) for λ−1 0 ∈ σ (T ) \ {0}, the ascent k(T , λ0 ) of λ0 equals k = k(λ0 , ), the order of λ0 as a pole of −1 and dim Ker (I − λ0 T )k = m.

Proof By definition of a characteristic matrix function, the identity (5.18) holds. Since the values of E and F are bijective operators, it follows from (5.14) that the function z → (I − zT )−1 is finite meromorphic, that is, for every λ−1 0 ∈ σ (T ) \ {0} the Laurent expansion in a neighbourhood of λ0 has the form (I − zT )−1 =



(z − λ0 )l Rl

(5.19)

l=−n

with R−1 , . . . , R−n operators of finite rank. In particular, R−1 has finite rank and item (i) follows. To prove item (ii) let x1,0, . . . , x1,ν1 −1 , x2,0 , . . . , x2,ν2 −1 , xp,0 , . . . , xp,νp −1 with ν1 ≤ · · · ≤ νp be a canonical system of Jordan chains for  at λ0 . For i = 1, . . . , p consider the polynomial ϕi (z) = xi,0 + (z − λ0 )xi,1 + · · · + (z − λ0 )νi −1 xi,νi −1 .

5.2 The Characteristic Matrix Function

99

We know that   (z)ϕi (z) = O (z − λ0 )νi .

(5.20)

Put 

ψi (z) = 0 IX



  ϕi (z) E(z) 0

= yi,0 + (z − λ0 )yi,1 + · · · + (z − λ0 )νi −1 yi,νi −1 + h.o.t. Here, as before, h.o.t. stands for the higher order terms. From (5.20) and the equivalence (5.13), it follows that   (I − zT )ψi (z) = O (z − λ0 )νi , and thus (I −λ0 T )yi,0 = 0, (I −λ0 T )yi,1 = yi,0 , . . . , (I −λ0 T )yi,νi −1 = yi,νi −2 .

(5.21)

We shall prove that y1,0, . . . , y1,ν1 −1 , y2,0 , . . . , y2,ν2 −1 , yp,0 , . . . , yp,νp −1

(5.22)

is a canonical basis of eigenvectors and generalised eigenvectors of T at λ−1 0 . Note that N : Cn → X defined by 

c → 0 IX



  c E(λ0 ) 0

maps Ker (λ0 ) is a one-one way onto Ker I − λ0 T . It follows that the vectors y1,0, . . . , yp,0 are linearly independent. But then we can use (5.21) to show that the set of vectors (5.22) is linearly independent. To finish the proof we show that m(T ; λ−1 0 ), the algebraic multiplicity of T −1 at λ0 , is equal to m(λ0 , ). First note that by definition m(T ; λ−1 0 ) equals the dimension of the range of the spectral projection Pλ−1 which equals the trace of the 0 spectral projection. Thus it suffices to prove that m(λ0 , ) = Tr

1 

(I − zT )−1 dz . 2πi λ0

(5.23)

100

5 Characteristic Matrix Functions for a Class of Operators

For the proof of the key identity (5.23) we use the identities (5.13), (5.14) and Lemma 5.1.3 to obtain     d d Tr (z)−1 (z) −,λ = Tr L(z)−1 L(z) −,λ , 0 0 dz dz

(5.24)

where L(z) is the function given by L(z) =

  ICn 0 . 0 I − zT

(5.25)

Therefore it follows from (5.24) and Theorem 5.1.2 that 1 

m(λ0 , ) = Tr −(I − zT )−1 T dz . 2πi λ0 The identity (5.23) now follows by observing that Im Pλ−1 ⊂ Im T and that T is 0   one-to-one on Im Pλ−1 and hence Tr Pλ−1 T = Tr Pλ−1 which completes the proof 0 0 0 of item (ii) in the theorem. To prove item (iii), it remains to remark that  for an analytic matrix function  with det  = 0, the algebraic multiplicity M (λ0 ) equals the multiplicity of λ0 as a zero of det  (see Sect. 5.1). For the proof of item (iv) note that the ascent of λ0 equals the order of λ0 as a pole of the resolvent z → (I − zT )−1 . But the order of a pole is invariant under equivalence and (iv) follows from (5.14).   Lemma 5.2.3 Let T be a bounded linear operator, and let  be a characteristic matrix function for T determined by (5.13). Put q(z) = det (z) and P (z) = q(z)(I − zT )−1 .

(5.26)

Then q and P are both entire functions, and T is determined by {q, P }. Moreover, I − zT is invertible if and only if q(z) = 0. Proof We already know that I − zT is invertible if and only if q(z) = 0. In what follows adj (z) denotes the adjugate of the matrix (z). Since the entries of (z) are entire functions, the same holds true for the entries of adj (z) because the adjugate is the matrix of co-factors of (z). Thus adj (z) is an entire function. Furthermore, (z)−1 =

1 adj (z), det (z)

q(z) = 0.

(5.27)

5.2 The Characteristic Matrix Function

101

Using the two identities in (5.26) and formula (5.15) we have P (z) = q(z)(I − zT )−1

= det (z) E21 (z)(z)−1 F12 (z) + E22 (z)F22 (z)     1 adj (z) F12 (z) + E22 (z)F22 (z) . = det (z) E21 (z) det (z) = E21 (z) (adj (z)) F12 (z) + det (z)E22 (z)F22 (z).

(5.28)

All terms in the right hand side of (5.28) are entire functions, and hence P is an entire function too.   Definition 5.2.4 The characteristic matrix function  is called nondegenerate with respect to (5.13) if the following holds true: x∈X

and

(adj (z)) F12 (z)x = 0 for all z ∈ C ⇒ x = 0.

(5.29)

Definition 5.2.5 Let T be a bounded operator on a complex Banach space X, and let  : C → L(Cn ) be an entire n × n matrix function of order at most ρ. We call  a ρ-characteristic matrix (function) for T on C if det  is an entire function of finite non-zero order ρ, and there exist entire operator functions E : C → L(Cn ⊕ X) and F : C → L(Cn ⊕ X) of order at most ρ, whose values are bijective operators, such that the identity (5.13) holds. We are now ready to formulate a special case of Theorem 4.1.3 which provides a class of operators that have a representation for the resolvent as in (4.1) but now (see (5.26)) with q(z) being an entire finite matrix function. Theorem 5.2.6 Let T be a bounded linear operator on the Banach space X, which is one-to-one and has a dense range. Let  be a characteristic matrix function for T on C as in (5.13), assume that det (z) has infinitely many zeros. Let ρ be a positive number, and assume that (a)  is a ρ-characteristic matrix (function) for T ; (b) the function Q12 (z) with Q(z) = E(z)−1 is an operator polynomial; (c)  is nondegenerate with respect to (5.13). Furthermore, suppose that there exist a complex number z0 , a non-negative real number s0 , and a ρ-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, and there exist a non-negative integer m and a constant M such that I − z0 T is invertible and for each z ∈ ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ, we have 0 < | det (z)| ≤ M(1 + |z|m )

and (I − zT )−1  ≤ M(1 + |z|m ).

(5.30)

102

5 Characteristic Matrix Functions for a Class of Operators

If, in addition, the entire function z → det (z+z0 ) is of completely regular growth, then   MT = x ∈ X | det (z0 + z) dominates adj (z0 + z)F12 (z0 + z)x

(5.31)

and X = MT ⊕ ST ,

(5.32)

where, as usual, ST = {x ∈ X | z → (I − zT )−1 x is entire}. Moreover, ST is also given by ST = {x ∈ X | z → (z)−1 F12 (z)x is entire}.

(5.33)

Proof We split the proof into two parts. In the first part we show that the conditions of Theorem 4.1.3 are satisfied. In the second part we prove the theorem applying Theorem 4.1.3. Part 1.

In order to apply Theorem 4.1.3, we start with some preparations. Let q and P be the entire functions defined by (5.26). By definition, since  is assumed to be a ρ-characteristic matrix function, q is an entire function of finite non-zero order ρ. Furthermore, our conditions on  imply that adj (z) is an entire function of order at most ρ. But then, since E and F are also of order at most ρ, we can use the identity (5.28) to see that the order of P is less than or equal to ρ. Thus items (1) and (2) in Theorem 4.1.3 are satisfied. Next, put Y = Cn ,

and (z) = Q12 (z).

Since Q12 (z) is assumed to be an operator polynomial, the same holds true for (z). With this choice of Y and the implication (4.3) is satisfied. To see this, note that

(z)P (z) = Q12 (z)q(z)(I − zT )−1 = q(z)Q12 (z)(I − zT )−1 = q(z)(z)−1 F12 (z) = (det (z))(z)−1 F12 (z) = adj (z)F12 (z). It follows that (z)P (z)x = 0 for all z ∈ C implies that adj (z)F12 (z)x = 0 for all z ∈ C. According to item (c) the function  is nondegenerate with respect to (5.13), and hence the nondegenerate condition (5.29) implies that x = 0. This shows that (4.3) is satisfied.

5.2 The Characteristic Matrix Function

Part 2.

103

Next observe that (5.30) tells us that (4.4) is fulfilled. Finally, also by assumption, the entire function z → q(z + z0 ) is of completely regular growth. We are now ready to apply Theorem 4.1.3 using the data referred to in the preceding part. Since T is one-to-one and has a dense range, (4.2) holds with k = 1. Put FT ,λ,z0 = {x ∈ X | q(z + z0 ) dominates (z + z0 )P (z + z0 )}. Then, by Theorem 4.1.3, using Ker T = {0}, we have MT = FT ,λ,z0

and X = MT ⊕ ST .

Thus (5.31) and (5.32) are proved. Finally, from (5.16) and using the fact that Q12 is entire, it follows that x ∈ ST implies that the function (z)−1 F12 (z)x is entire. On the other hand, if (z)−1 F12 (z)x is entire then the identity (5.15) implies that x ∈ ST because the functions E21 , E22 and F22 appearing in (5.15) are all entire. Thus (5.33) is proved.  

Chapter 6

Finite Rank Perturbations of Volterra Operators

In this chapter we introduce an important class of operators that have a characteristic matrix function in the sense of Definition 5.2.1. The chapter consists of three sections. In the first section the characteristic matrix function is defined. The main theorem is a completeness theorem which is proved in the second section. In the final session we show that the results of the first two sessions remain true if the Volterra operator is replaced by a quasi-nilpotent operator.

6.1 The Characteristic Matrix Function Throughout this section X is a Banach space, and T = V + R, where V : X → X is a Volterra operator and R : X → X is an operator of finite rank. The fact that R has finite rank allows us to factor R as R = BC, where B : Cn → X and C : X → Cn , with n ≥ rank R. If n is equal to the rank of R we call R = BC a minimal rank factorisation. With V and the factorisation R = BC we associate the n × n matrix function (z) = ICn − zC(I − zV )−1 B,

z ∈ C.

(6.1)

Here I is the identity operator on X, and ICn is the n × n identity matrix. Formula (6.1) is of special interest if the pair {C, V } is observable, that is, if for each x ∈ X the identity CV j x = 0 for each j = 0, 1, 2, . . . implies x = 0. In other words, the pair {C, V } is observable if ∞

Ker CV j = {0}.

j =0

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_6

105

106

6 Finite Rank Perturbations of Volterra Operators

The notion of observability has its roots in mathematical system theory (see, e.g., [33, page 719]). The next theorem tells us that (z) is a characteristic matrix function for the operator T on C in the sense of Definition 5.2.1. Theorem 6.1.1 Let T = V + R be as in the paragraph preceding the present theorem. Then the n × n entire matrix function  defined by (6.1) is a characteristic matrix function for the operator T = V + R. Indeed, we have     (z) 0 0 ICn = F (z) E(z) 0 IX 0 I − zT

z ∈ C,

(6.2)

where the entire operator-valued functions E(z) : Cn ⊕ X → Cn ⊕ X and F (z) : Cn ⊕ X → Cn ⊕ X are given by $

C(I − zV )−1

(z)

E(z) =

% ,

−z(I − zV )−1 B (I − zV )−1 $ % (z) −C(I − zV )−1 F (z) = . zB IX

(6.3)

(6.4)

The inverses E(z)−1 : Cn ⊕ X → Cn ⊕ X and F (z)−1 : Cn ⊕ X → Cn ⊕ X are the operator-valued functions given by $ E(z)

−1

= $

F (z)−1 =

ICn

−C

zB I − zT ICn

% (6.5)

,

C(I − zV )−1

−zB I − zBC(I − zV )−1

% .

(6.6)

Finally, the characteristic matrix function  is nondegenerate with respect to (6.2) if and only if the pair {C, V } is observable. Proof Fix z ∈ C. We apply Theorem 4.7 in [7] with 

   M11 M12 I − zV zB M= = : X ⊕ Cn → X ⊕ Cn . M21 M22 C ICn Note that both M11 and M22 are invertible operators. Hence the Schur complements of M11 and M22 in M are well defined and are given by −1

1 := M22 − M21 M11 M12 = ICn − zC(I − zV )−1 B = (z);

(6.7)

−1 M21 = I − zV − zBC = I − zT .

2 := M11 − M12 M22

(6.8)

6.1 The Characteristic Matrix Function

107

Put     −1 −M21 M11

1 (z) −C(I − zV )−1 E1 (z) = = , −1 −1 (I − zV )−1 z(I − zV )−1 B M11 M11 M21     −1 −M11 M12 I −z(I − zV )−1 B I = F1 (z) = . −1 −1 −1 M21 M11 M12 M22 M21 ICn − M22 (z) C Then, using the identities (6.7) and (6.8), Theorem 4.7 in [7] tells us that     I − zT 0 (z) 0 = E1 (z) F1 (z). 0 ICn 0 IX

(6.9)

Moreover, the operators E1 (z) and F1 (z) are invertible and E1 (z)

−1

F1 (z)−1

    −1 −M12 M22

2 −zB I − zT = = , −1 −1 M22 M22 M21 C ICn   −1 −M22 M21 ICn = = −1 −1 −1 I − M11 M12 M22 M21 M11 M12   −C ICn = . I − z(I − zV )−1 BC z(I − zV )−1 B

(6.10)

(6.11)

Now put    I n 0 0 ICn F1 (z) C , 0 (I − zV )−1 IX 0     0 IX 0 ICn F (z) = . E1 (z) ICn 0 0 I − zV 

E(z) =

Then (6.3) and (6.4) hold. The identity (6.9) yields (6.2). Indeed,   0 ICn E(z) = F (z) 0 I − zT      0 I ICn 0 0 ICn = E1 (z) × ICn 0 0 I − zV 0 I − zT     I n 0 0 ICn F1 (z) C × I 0 0 (I − zV )−1       0 0 I n I − zT 0 I n F1 (z) C = C E1 (z) 0 I − zV 0 (I − zV )−1 0 ICn

108

6 Finite Rank Perturbations of Volterra Operators

    ICn 0 0 (z) 0 ICn = 0 I − zV 0 (I − zV )−1 0 I   (z) 0 = . 0 I Furthermore, using (6.10) and (6.11), we see that E(z) and F (z) are invertible with the inverses being given by (6.5) and (6.6), respectively. It remains to prove the final statement. To do this note that according to (6.4) the (1, 2)-entry of F (z) is equal to the function −C(I − zV )−1 , and hence adj (z)F12 (z) = − (det (z)) (z)−1 C(I −zV )−1 ,

det (z) = 0.

(6.12)

Fix x ∈ X. Assume that adj (z)F12 (z)x = 0 for all z ∈ C. Using (6.1) we see that (z) is invertible for |z| sufficiently small. But then the identity (6.12) tells us that C(I − zV )−1 x = 0 for |z| sufficiently small. This implies that CV j x = 0 for j = 0, 1, 2, . . .. Conversely, assume that CV j x = 0 for j = 0, 1, 2, . . .. Then C(I − zV )−1 x = 0 for |z| sufficiently small and hence for all z ∈ C. It follows that adj (z)F12 (z)x = 0 for all z ∈ C. Thus adj (z)F12 (z)x = 0 for all z ∈ C ⇐⇒ CV j x = 0

(j = 0, 1, 2, . . .).

Hence using Definition 5.2.4 we see that  is nondegenerate with respect to (6.2) if and only if the pair {C, V } is observable.   In the context of the present chapter the identities (5.15), (5.16) and (5.17) yield: (I − zT )−1 = z(I − zV )−1 B(z)−1 C(I − zV )−1 + (I − zV )−1 ,

(6.13)

(z)C(I − zT )−1 = C(I − zV )−1 ,

(6.14)

(I − zT )−1 B(z) = (I − zV )−1 B.

(6.15)

6.2 A Completeness Theorem As an application of Theorem 4.1.3 we shall derive the following result which is a further specification of Theorem 5.2.6. Theorem 6.2.1 Let T = V + R, where V is a Volterra operator on X such that (I − zV )−1 is an entire function of order at most ρ, and where R is an operator of finite rank on X. Put (z) = ICn − zC(I − zV )−1 B,

z ∈ C,

(6.16)

6.2 A Completeness Theorem

109

where R = BC is a minimal rank factorisation of R, and assume that the entire function det  is of finite non-zero order ρ and has infinitely many zeros. Assume T is one-to-one and has a dense range. Furthermore, suppose that there exist a complex number z0 , a non-negative real number s0 , and a ρ-admissible set of halflines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, such that det (z0 ) = 0 and (a) there exists a δ0 > 0 with | det (z)| ≥ δ0 > 0 for z ∈ ray (θj ; z0 , s0 ),

j = 1, 2, . . . , κ,

(6.17)

z ∈ ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ.

(6.18)

(b) there exists an integer m and a constant M with (I − zV )−1  ≤ M(1 + |z|m ) for

If, in addition, the entire function z → det (z0 +z) is of completely regular growth, then MT = {x ∈ X | det (z0 + z) dominates adj (z0 + z)C(I − (z0 + z)V )−1 x},

(6.19)

X = MT ⊕ ST ,

(6.20)

and

where, as usual, ST = {x ∈ X | z → (I − zT )−1 x is entire}. Furthermore, we have (i) the closure of the generalised eigenspace MT of T is the full space if and only if ST consists of the zero vector only; (ii) if the pair {C, V } is not observable, i.e., if there exists a non-zero x ∈ X such that C(I − zV )−1 x = 0 for all z ∈ C, then ST = {0} and T does not have a complete set of eigenvectors and generalised eigenvectors. Proof By assumption T is one-to-one and has dense range. Thus condition (4.2) is fulfilled with k = 1. Put q(z) := det (z),

(6.21) 

 P (z) := z(I − zV )−1 B adj (z) C(I − zV )−1 + q(z)(I − zV )−1 .

(6.22)

Both q and P are entire functions, and (I − zT )−1 =

1 P (z), q(z)

q(z) = 0.

(6.23)

110

6 Finite Rank Perturbations of Volterra Operators

The latter identity follows from (6.13). Indeed, using (6.13) and assuming q(z) = 0, we obtain

1 1 P (z) = z(I − zV )−1 B adj (z) C(I − zV )−1 + (I − zV )−1 q(z) det (z) = z(I − zV )−1 B(z)−1 C(I − zV )−1 + (I − zV )−1 = (I − zT )−1 .

(6.24)

Since q(z) is non-zero for z in a neighbourhood of zero, it follows that T is determined by the entire functions {q, P }. By assumption, q(z) is an entire function of finite non-zero order ρ and has infinitely many zeros. Moreover (I − zV )−1 is entire of order at most ρ. Since (z) is given by (6.16), it follows that (z) is also entire of order at most ρ. Using these facts formula (6.22) then tells us that the same holds true for P (z). In particular, the functions q and P satisfy conditions (1) and (2) in Theorem 4.1.3. Next we show that (4.3) holds true with Y = X and (z) = I − zV . Obviously, (z) is an operator-valued polynomial. Now fix x ∈ X, and assume that (z)P (z)x = 0 for all z ∈ C. As I − zV is invertible for all z, it follows that P (z)x = 0 for all z ∈ C, and thus (use (6.23)) we have q(z)(I − zT )−1 x = 0,

whenever q(z) = 0.

Since q(0) = 1, the above identity for z = 0 yields q(0)x = 1, and thus x = 0. In the remaining part {ray (θj ; z0 , s0 ) | j = 1, . . . , κ} is the set of ρ-admissible half-lines appearing in items (a) and (b). Here z0 is a complex number, s0 is a nonnegative real number, and det (z0 ) = 0. The latter implies that I −z0 T is invertible too. From (6.15) and formula (6.16) it follows that (z) is polynomially bounded on the rays {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, and hence the same holds true for q(z) and adj (z). In particular, the first part of (4.4) is proved. Next, using item (a) and the identity (5.27) it follows that (z)−1 is polynomially bounded too. Using the latter in formula (6.13) we see that the second part of (4.4) also holds true. As in Theorem 4.1.3 the entire function z → q(z0 + z) is assumed to be of completely regular growth. Let x ∈ X. We have to find out when q(z0 + z) dominates (z0 + z)P (z0 + z)x. To do this recall that (z) = I − zV and P (z) is given by (6.22). It follows that

(z0 + z)P (z0 + z)x =   = (z0 + z)B adj (z0 + z) C(I − (z0 + z)V )−1 + q(z0 + z). Using Lemmas 14.7.2 and 14.7.4 we see that q(z0 + z) dominates (z0 + z)P (z0 + z)x ⇐⇒   ⇐⇒ q(z0 + z) dominates adj (z0 + z) C(I − (z0 + z)V )−1 .

6.2 A Completeness Theorem

111

But then we can use the identities (4.5), (4.6), and (4.7) in the final part of Theorem 4.1.3 to obtain the corresponding identities (6.19) and (6.20) in the present theorem. Item (i) in the final paragraph is a direct consequence of (6.20). To prove item (ii), let x = 0, and assume C(I − zV )−1 x = 0 for all z ∈ C. Multiplying identity (6.13) from the right by x yields (I − zT )−1 x = z(I − zV )−1 B(z)−1 C(I − zV )−1 x + (I − zV )−1 x = (I − zV )−1 x.

(6.25)

But (I − zV )−1 x is entire, and hence the same holds true for (I − zT )−1 x, that is, x ∈ ST . Since x = 0, it follows that ST = {0}, and by item (i) there is no completeness.   If the pair {C, V } is observable, then Theorem 6.2.1 can also be obtained as a corollary of Theorem 5.2.6. Corollary 6.2.2 Let T = V + R, where V is a Volterra operator on the Banach space X such that (I − zV )−1 is an entire function of order at most ρ, and where R is an operator of finite rank on X. Put (z) = ICn − zC(I − zV )−1 B,

z ∈ C,

(6.26)

where R = BC is a minimal rank factorisation of R, and assume that the entire function det  is of non-zero finite order ρ, and has infinitely many zeros. Assume both T and its conjugate T ∗ are one-to-one and have a dense range. Furthermore, suppose that there exist a complex number z0 , a non-negative real number s0 , and a ρ-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, such that det (z0 ) = 0 and (a) there exists a δ0 > 0 with | det (z)| > δ0

for z ∈ ray (θj ; z0 , s0 ),

j = 1, 2, . . . , κ,

(6.27)

(b) there exists an integer m and a constant M with (I − zV )−1  ≤ M(1 + |z|m ) for z ∈ ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ.

(6.28)

If, in addition, the entire function z → det (z0 +z) is of completely regular growth, then T has a complete set of eigenvectors and generalised eigenvectors if and only if T ∗ has a complete set of eigenvectors and generalised eigenvectors.

112

6 Finite Rank Perturbations of Volterra Operators

Proof Note that T ∗ = V ∗ + R ∗ , the operator V ∗ is a Volterra operator on X∗ , and R ∗ = C ∗ B ∗ is a minimal rank factorisation. Furthermore ∗ (z) := (z)∗ = ICn − zB ∗ (I − zV ∗ )−1 C ∗ ,

z ∈ C.

Since det ∗ (z) = det (z)∗ , it follows that det ∗ is of non-zero finite order ρ and has infinitely many zeros. Next, observe that the assumptions (a), (b) and z → det (z0 +z) is of completely regular growth appearing in the corollary are invariant under taking conjugates. Therefore T ∗ satisfies the same assumptions as T . We want to show that MT = X

⇐⇒

MT ∗ = X ∗ .

The conditions appearing in our corollary are precisely the conditions appearing in Theorem 6.2.1. But then we can apply the result given by item (i) in Theorem 6.2.1 to both T and T ∗ . It follows that MT = X if and only if ST consists of the zero vector only

(6.29)

MT ∗ = X∗ if and only if ST ∗ consists of the zero vector only.

(6.30)

∗ Now assume MT = X. According to (1.55) we have M⊥ T = ST . Thus our assumption implies that ST ∗ consists of the zero vector only. But then we can use (6.30) to conclude that MT ∗ = X∗ , and hence we proved that

MT = X

⇒

MT ∗ = X∗ .

To prove the reserve implication, assume that MT ∗ = X∗ . Since T ∗ satisfies the same conditions as T , we can apply (1.55) again but now with T ∗ in place of T ∗∗ which yields M⊥ T ∗ = ST , and hence ∗ ⊥ ST ∗∗ = M⊥ T ∗ = (X ) = {0}.

We conclude that ST ∗∗ consists of the zero vector only. Now take x ∈ ST , that is, (I − zT )−1 x is an entire function. Let j be the canonical embedding of X into X∗∗ . Thus x, f = f, j x for each f ∈ X∗ . But then

(I − zT )−1 x, f = x, (I − zT ∗ )−1 f = (I − zT ∗ )−1 f, j x = f, (I − zT ∗∗ )−1 j x ,

f ∈ X∗ .

(6.31) (6.32)

Since (I − zT )−1 x is entire, it follows that the first term in (the left hand side) of (6.31) is entire for each f ∈ X∗ . But then the function in (6.32) is also entire for each f ∈ X∗ , which implies (see Section V.1 in [78]) that (I − zT ∗∗ )−1 j x is entire. Thus j x ∈ ST ∗∗ . But ST ∗∗ consists of the zero vector only. Hence j x = 0, and thus

6.3 The Volterra Operator Replaced by a Quasi-Nilpotent Operator

113

x = 0. We proved that ST consists of the zero vector only too, and (6.29) shows that MT = X, which completes the proof.   In Chap. 8, when studying infinite Leslie operators, we will see that condition (6.28) in Corollary 6.2.2 is necessary.

6.3 The Volterra Operator Replaced by a Quasi-Nilpotent Operator A bounded operator W on a Banach space X is said to be quasi-nilpotent if the spectrum of W consists of the point zero only. In other words, W is quasi-nilpotent if and only if the operator I −zW is invertible for each z ∈ C. If W is quasi-nilpotent, then the same is true for W ∗ , the conjugate of W , because the spectra of W and W ∗ coincide. Obviously, a Volterra operator is quasi-nilpotent but the converse is not always true because a quasi-nilpotent is not necessarily compact. A few examples of non-compact quasi-nilpotent operators are given in the next section. See also Lemma 8.5.1 which presents an example of a generalised Leslie operator which is a quasi-nilpotent operator and not compact. Let T on X be a finite rank perturbation of the quasi-nilpotent operator W . Thus T = W + R where R is of finite rank. The latter implies that R factors as R = BC, where B : Cn → X and C : X → Cn , with n ≥ rank R. As before, this allows us to define the n × n matrix function (z) = ICn − zC(I − zW )−1 B,

z ∈ C.

(6.33)

Here I is the identity operator on X, and ICn is the n × n identity matrix. The next theorem tells us that (z) is a characteristic matrix function on C for the operator T in the sense of Definition 5.2.1. Theorem 6.3.1 The n × n entire matrix function  defined by (6.33) is a characteristic matrix function for the operator T = W + R. Indeed, we have     (z) 0 ICn 0 = F (z) E(z) 0 I 0 I − zT

z ∈ C,

(6.34)

where the entire operator-valued functions E(z) : Cn ⊕ X → Cn ⊕ X and F (z) : Cn ⊕ X → Cn ⊕ X are given by $ E(z) =

(z)

C(I − zW )−1

−z(I − zW )−1 B (I − zW )−1 $ % (z) −C(I − zW )−1 F (z) = . zB I

% ,

(6.35)

(6.36)

114

6 Finite Rank Perturbations of Volterra Operators

The inverses E(z)−1 : Cn ⊕ X → Cn ⊕ X and F (z)−1 : Cn ⊕ X → Cn ⊕ X are the operator-valued functions given by $ E(z)

−1

= $

F (z)−1 =

ICn

−C

%

zB I − zT ICn

(6.37)

,

C(I − zW )−1

−zB I − zBC(I − zW )−1

% .

(6.38)

The proof of Theorem 6.3.1 in Sect. 6.3 only uses the fact that the spectrum of the operator V consists of the point zero only. Compactness of the operator V does not play a role in the proof of Theorem 6.1.1. Therefore the proof of Theorem 6.3.1 is the same as the proof of Theorem 6.1.1 with V being replaced by W . Also the proof of Theorem 6.2.1 in Sect. 6.2 the compactness of V does not play a role either and Theorem 6.2.1 remains also true if V is replaced by a quasi-nilpotent operator W . Summarising, we have the following completeness result for operators that can be represented by a finite rank perturbation of a quasi-nilpotent operator. Theorem 6.3.2 Let T = W + R, where W is a quasi-nilpotent operator on X such that (I −zW )−1 is an entire function of order at most ρ, and where R is an operator of finite rank on X. Put (z) = ICn − zC(I − zW )−1 B,

z ∈ C,

(6.39)

where R = BC is a minimal rank factorisation of R, and assume that the entire function det  is of finite non-zero order ρ and has infinitely many zeros. Assume T is one-to-one and has a dense range. Furthermore, suppose that there exist a complex number z0 , a non-negative real number s0 , and a ρ-admissible set of halflines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, such that det (z0 ) = 0 and (a) there exists a δ0 > 0 with | det (z)| ≥ δ0 > 0 for z ∈ ray (θj ; z0 , s0 ),

j = 1, 2, . . . , κ,

(6.40)

z ∈ ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ.

(6.41)

(b) there exists an integer m and a constant M with (I − zW )−1  ≤ M(1 + |z|m ) for

If, in addition, the entire function z → det (z0 +z) is of completely regular growth, then MT = {x ∈ X | det (z0 + z) dominates adj (z0 + z)C(I − (z0 + z)W )−1 x},

(6.42)

6.4 Examples of Non-compact Quasi-Nilpotent Operators

115

and X = MT ⊕ ST ,

(6.43)

where, as usual, ST = {x ∈ X | z → (I − zT )−1 x is entire}. Furthermore, we have (i) the closure of the generalised eigenspace MT of T is the full space if and only if ST consists of the zero vector only; (ii) if the pair {C, W } is not observable, i.e., if there exists a non-zero x ∈ X such that C(I − zW )−1 x = 0 for all z ∈ C, then ST = {0} and T does not have a complete set of eigenvectors and generalised eigenvectors.

6.4 Examples of Non-compact Quasi-Nilpotent Operators In this section we present four examples of quasi-nilpotent operators that are not compact. For this purpose we need some preliminaries. Let X and Y be Banach spaces. Recall that a bounded linear operator A : X → Y is compact if for each bounded subset  of X the closure of the set A() is a compact subset in Y . The following lemma and corollary present the sequential view on compactness, as one can find in the first part of page 98 in the book [74], or at the end of page 293 together with the beginning of page 294 in the book [78], or on page 198 in [69]. Lemma 6.4.1 The operator A : X → Y is compact if and only if for each bounded sequence x1 , x2 , x3 , . . . in X the sequence Ax1 , Ax2 , Ax3 , . . . contains a subsequence converging to some vector in Y . Corollary 6.4.2 The operator A : X → Y is non-compact if and only if there exists a bounded sequence x1 , x2 , x3 , . . . in X such that the sequence Ax1 , Ax2 , Ax3 , . . . has no subsequence convergent in Y . The proof of the above lemma is based on the fact that compact sets and sequential compact sets are topologically the same. For the latter see Chapter 2 of [71]. The first two examples that will be presented are closely related, and the same is true for the other two examples. Example 6.4.3 Let X = H1 ⊕ H2 , where H1 and H2 are infinite dimensional Hilbert spaces, and let W be the operator on X defined by       0A H1 H1 W = → . : H2 H2 0 0

(6.44)

116

6 Finite Rank Perturbations of Volterra Operators

Here A be an operator from H2 → H1 . Obviously W 2 = 0, and hence, trivially, W is a (quasi-)nilpotent operator. Since        H1  IH1 H1 W = → , A 0 IH2 : 0 H2 H2

(6.45)

it follows that W is compact if and only if A is compact. The latter we shall prove in the next proposition. Proposition 6.4.4 The operator W defined by (6.44) is a non-compact operator if and only if A is not compact Proof The proof consists of two steps. Step 1.

Assume that A is compact. According to (6.45) we have W = BAC where B and C are bounded linear operators,     H1 IH1 : H1 → B= 0 H2

Step 2.



and C = 0 IH2



  H1 → H2 . : H2

But then item (ii) in Theorem 16.1 in the book [36] tells us that the operator BAC is compact too. Thus W is compact. Assume that A is not compact. Then we know (using the equivalence property given in the first paragraph of this section) that there exists a bounded sequence y1 , y2 , y3 , . . . in H2 such that the sequence Ay1 , Ay2 , Ay3 , . . . has no subsequence which is convergent. This follows from the definition of compactness as given in the first paragraph of the present section. Now put   0 xn = yn

     0A 0 Ayn and W xn = = 0 0 yn 0

(n = 1, 2, 3, . . .).

Since the sequence {Ay1 , Ay2 , Ay3 , . . .} has no subsequence which is convergent, the above identity implies that also the sequence {W x1 , W x2 , W x3 , . . .} has no subsequence which is convergent. The latter implies that W is not compact.   Example 6.4.5 As a corollary of Proposition 6.4.4 above we shall show that the shift operator W : L2 [0, 1] → L2 [0, 1] defined by (W x)(t) =

& x(t + 1/2) for 0 ≤ t ≤ 1/2, 0

is quasi-nilpotent but not compact.

for 1/2 ≤ t ≤ 1.

(6.46)

6.4 Examples of Non-compact Quasi-Nilpotent Operators

117

Since L2 [0, 1] can be identified with L2 [0, 1/2] ⊕ L2 [1/2, 1], the operator W defined by (6.46) is given by 2 × 2 operator matrix, namely $ W=

0J 00

% $ :

% L2 [0, 1/2] L2 [1/2, 1]

$ →

% L2 [0, 1/2] L2 [1/2, 1]

.

where J is the operator from L2 [1/2, 1] to L2 [0, 1/2] given by (J x)(t) = x(t + 1/2),

0 ≤ t ≤ 1/2 (x ∈ L2 [1/2, 1]).

Obviously, the operator J is a one-to-one surjective operator, and hence J is not compact. But then Example 6.4.3 tells us that W is quasi-nilpotent and not compact. Example 6.4.6 Let B be a Banach space. We denote by C([0, 1]; B) the linear space of all B-valued continuous functions on [0, 1]. In what follows Y denotes the space C([0, 1]; B) endowed with the norm f Y = max{f (s)B : 0 ≤ s ≤ 1},

f ∈ Y.

An operator W on Y is said to be the operator of integration on Y if 

t

(Wf )(t) =

f (τ ) dτ,

f ∈ Y = C([0, 1]; B).

(6.47)

0

Lemma 6.4.7 The operator W of integration on Y = C([0, 1]; B) is quasinilpotent and its resolvent is given by 



 (I − zW )−1 f (t) = f (t) + zν W ν f (t), ν=1



t

= f (t) + z

ez(t −τ )f (τ ) dτ,

z ∈ C.

(6.48)

0

Moreover, the operator W is compact if and only if the space B is finite dimensional. Proof To show that the spectral radius of W is zero, it suffices to show that W k 1/ k → 0 as k → ∞. Let f ∈ Y. Then (W k f )(t)B ≤

tk f Y , k!

k = 1, 2, 3, · · ·

This implies that W k Y ≤

1 k!

if k → ∞.

(0 ≤ t ≤ 1).

118

6 Finite Rank Perturbations of Volterra Operators

Since (k!)1/ k → ∞ as k → ∞, it follows that the spectral radius of W is zero. Thus W is quasi-nilpotent and a direct computation shows that (6.48) holds. To prove the final statement first we note that the unit ball in B is compact if and only if B is finite dimensional. Next, assume that the space B is infinite dimensional. Let X = B, and let A = I be the identity operator on X. Then A is not a compact operator. But then we know from Corollary 6.4.2 that there exists a bounded sequence b1 , b2 , b3 , . . . in B such that this sequence has no convergent subsequence. Now define a sequence fn ∈ Y by fn (t) = bn for 0 ≤ t ≤ 1. Then Wfn = tbn , and since by construction the sequence b1 , b2 , b3 , . . . has no convergent subsequence, the sequence Wf1 , Wf2 , Wf3 , . . . has no convergent subsequence too. This shows that W is not compact. Finally, assume B is finite dimensional, of dimension n say. Then B is similar to Cn , and W is similar to a direct sum W1 ⊕ · · · ⊕ W1 of n copies of W1 where  (W1 f )(t) =

t

f ∈ C[0, 1],

f (τ ) dτ,

0 ≤ t ≤ 1.

(6.49)

0

From Example 3 in Section 13.1 of [36] or from Example 3 in Section V.7 of [78] we know that W1 is compact (a direct proof is given below), and hence the same is true for the direct sum W1 ⊕ · · · ⊕ W1 . Thus W is compact too.   The proof of the compactness of the operator W1 in (6.49) is actually based on the Arzelà-Ascoli theorem. To see the connection we present here the Arzelà-Ascoli theorem as it appears in [78, page 295]. The Arzelà-Ascoli Theorem Let M be a compact metric space, and let F be a uniformly bounded equicontinuous set in C(M). Then every sequence of functions in F contains a uniformly convergent subsequence. We apply the Arzelà-Ascoli theorem with M = [0, 1] and F = {W1 f | f ∈ C(M),

f  ≤ 1}.

Recall that W1 is given by (6.49). Note that W1 f  ≤ f  ≤ 1 for each f ∈ F , and hence F is uniformly bounded in C(M). Furthermore, using f  ≤ 1, we have  |(W1 f )(t2 ) − (W1 f )(t1 )| ≤

t2

|f (τ )| dτ

t1

≤ |t2 − t1 |,

for all f ∈ F ,

0 ≤ t1 , t2 ≤ 1.

This shows that F is uniformly equicontinuous. Then the Arzelà-Ascoli theorem tells us that every sequence in F contains a uniformly convergent subsection. But then W1 is a compact operator by Lemma 6.4.1. To prove compactness of the operator of integration defined on the Banach space Lp [0, 1] one can use the Arzelà-Ascoli theorem in a similar way as used in the proof of the previous paragraph. In the case p = 2 an alternative proof uses the fact that

6.4 Examples of Non-compact Quasi-Nilpotent Operators

119

in the Hilbert space setting the operator of integration is a Hilbert Schmidt operator, see (1.28). The Arzelà-Ascoli theorem also plays a comparable role in the proof of Theorem 11.1.2 in Chap. 11. Example 6.4.8 Let X = C ([0, 1] × [0, 1]), and let W be the operator on X defined by 

t

(W x)(t, s) =

x(τ, s)dτ,

(t, s) ∈ [0, 1] × [0, 1], (x ∈ X ).

(6.50)

0

In the next proposition we shall prove that W is a non-compact quasi-nilpotent operator. Proposition 6.4.9 The operator W on X = C ([0, 1] × [0, 1]) defined by (6.50) is a non-compact quasi-nilpotent operator, and for each z ∈ C the resolvent is given by   (I − zW )−1 x (t, s) = x(t, s) + z



t

ez(t −τ )x(τ, s) dτ.

(6.51)

0

Proof The proof will be given in three steps. Step 1.

First we prove that W is not compact. To do this let X0 = C[0, 1] × C[0, 1]. The space X0 is a closed subspace of X , and from (6.50) we see that X0 is invariant under W . In what follows W0 := W |X0 , the restriction of W to X0 . To prove that W is not compact it suffices to prove that W0 is not compact. The latter follows from the fact that $ W0 =

V 0 0 I

% $ :

% C[0, 1] C[0, 1]



$ % C[0, 1] C[0, 1]

.

(6.52)

where V is the operator of integration acting on C[0, 1], that is, 

t

(V a)(t) =

a(τ ) dτ,

0 ≤ t ≤ 1 (a ∈ C[0, 1]).

(6.53)

0

Step 2.

As we know the operator V is a Volterra operator and hence compact but the identity operator on the infinite dimensional Banach space C[0, 1] is not compact, and hence W0 is not compact. Next we show that W is quasi-nilpotent. To do this let Y be the Banach space defined by Y = C([0, 1]; B) with B = C[0, 1].

120

6 Finite Rank Perturbations of Volterra Operators

Given this Banach space Y, let WY be the corresponding operator of integration, that is,  (WY y)(t) =

Step 3.

t

y(τ ) dτ,

y∈Y

(0 ≤ t ≤ 1).

0

Applying Lemma 6.4.7 with B = C[0, 1] we know that WY is quasinilpotent. Hence to prove that W is quasi-nilpotent it suffices to show that the operators W and WY are similar. To prove that W and WY are similar, recall that X = C ([0, 1] × [0, 1]) , Y = C([0, 1]; B) where B = C[0, 1]. Let x ∈ X , and let y = J x be defined by y(t) := x(t, ) ∈ C[0, 1],

0 ≤ t ≤ 1.

Note y ∈ Y, and J is a bounded linear operator from X to Y. On the other hand, let K be the linear operator from Y to X given by x(t, s) := (Ky)(t, s) = (y(t))(s),

0 ≤ t ≤ 1 and 0 ≤ s ≤ 1.

Straightforward calculations show that KJ and J K are the identity operators on X and Y, respectively. Furthermore, J W = WY K, and hence W and WY are similar.   In this section the emphasis has been on quasi-nilpotent operators that are not compact. The reverse problem to give conditions such that a quasi-nilpotent operator is compact also important. See, for example, [70], Proposition IX.1.1 in [32], and the first section of the next chapter.

Chapter 7

Finite Rank Perturbations of Operators of Integration

In this chapter we specify further the results of the previous chapter for the case when the Volterra operator V is an operator of integration. Completeness results will be given for three different cases. The first section has a preliminary character.

7.1 Preliminaries Throughout this section V is the operator of integration viewed as an operator on X = C[0, 1] or X = L2 [0, 1]. Thus V is given by 

t

(V x)(t) =

x(s) ds,

0 ≤ t ≤ 1,

x ∈ X.

(7.1)

0

The norm of x ∈ X = C[0, 1] is denoted by x◦ and by x2 when x ∈ X = L2 [0, 1]. Note that C[0, 1] is a subspace of L2 [0, 1], and the canonical embedding J of C[0, 1] into L2 [0, 1] is a contraction. Indeed,  J x2 = 0

1

1/2 |x(t)| dt 2



1

≤ 0

1/2 x2◦ dt

= x◦ ,

x ∈ C[0, 1].

From the final statement of Lemma 6.4.7 we know that the operator V defined by (7.1) is a Volterra operator on X = C[0, 1]. The same is true for X = L2 [0, 1];

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_7

121

122

7 Finite Rank Perturbations of Operators of Integration

see, for example, Proposition IX.1.1 in [32]. Furthermore, the resolvent of V is given by  t

(I − zV )−1 x (t) = x(t) + z ez(t −s)x(s) ds 

0 t

= x(t) + z

eτ z x(t − τ ) dτ,

0 ≤ t ≤ 1,

x ∈ X.

0

(7.2) For each x ∈ X this function is an entire function of order at most one that is polynomially bounded along any line Re z = a with a ∈ R. To see this, first note that | exp(τ z)| = exp(τ Re z) ≤ exp(τ |z|),

τ ≥ 0 and z ∈ C.

(7.3)

Using the first equality and representation (7.2) we arrive at   (I − zV )−1 x◦ ≤ 1 + |z| max 1, eRe z x◦ ,

x ∈ C[0, 1],

(7.4)

  (I − zV )−1 x2 ≤ 1 + |z| max 1, eRe z x2 ,

x ∈ L2 [0, 1].

(7.5)

Thus both on X = C[0, 1] and on X = L2 [0, 1], the resolvent (I − zV )−1 is polynomially bounded along any line Re z = a with a ∈ R. Using the latter inequality in (7.3) and the inequalities (7.4) and (7.5) we obtain

(I − zV )−1 x◦ ≤ 1 + |z|e|z| x◦ ,

x ∈ C[0, 1],

(7.6)



(I − zV )−1 x2 ≤ 1 + |z|e|z| x2 ,

x ∈ L2 [0, 1].

(7.7)

From these two inequalities it follows that for each x ∈ X the vector-valued function (I − zV )−1 x is an entire function of order at most one. We shall be dealing with functions f (z) = f (z; x) = C(I − zV )−1 x, where x ∈ X and C : X → C is a continuous linear functional on X. Clearly, f (z) is an entire function, and from (7.4) and (7.5) it follows that |f (z; x)| = |C(I − zV )−1 x| ≤ C(I − zV )−1 x   ≤ 1 + |z| max 1, eRe z Cx, x ∈ X.

(7.8)

7.1 Preliminaries

123

Here C is the norm of C as a continuous linear functional on X, and x = x◦ if X = C[0, 1] and x = x2 if X = L2 [0, 1]. The inequality (7.8) implies that the order of f (z) is at most one. We shall need the following lemma. Lemma 7.1.1 Let X act on C[0, 1] or L2 [0, 1], let V be the operator of integration on X, and let C be a continuous linear function on X. Assume the pair {C, V } is observable, and let f be the entire function given by f (z; x) = C(I − zV )−1 x, where x is a given non-zero vector in X. Then f belongs to the Paley-Wiener class PW (see Definition 14.3.3). Furthermore, there exists σ1 , 0 ≤ σ1 ≤ 1, depending on x, such that the indicator function of f is given by & hf (θ ) =

σ1 cos θ, −π/2 ≤ θ ≤ π/2; π/2 ≤ θ ≤ 3π/2.

0,

In particular, hf (θ ) ≤ H (θ ), where H : [−π/2, 3π/2] → R is the function defined by & H (θ ) =

cos θ, −π/2 ≤ θ ≤ π/2, 0,

π/2 ≤ θ ≤ 3π/2.

(7.9)

Proof We split the proof into two parts. Throughout f (z) = f (z, x), and x = 0. We already know that the order of f is at most one. Part 1.

Part 2.

In this part it is assumed that {C, V } is observable, and we show that the order of f cannot be strictly less than one. We shall argue by contradiction. So let us assume that the order of f is strictly less than one. This assumption allows us to apply the Phragmén-Lindelöf theorem (Theorem 14.2.1) with α = 1, with κ = 2, and with θ1 = π/2 and θ2 = 3π/2. The estimate in (7.8) shows that f is polynomially bounded on the imaginary axis, and hence (14.11) in Theorem 14.2.1 is fulfilled. Applying Theorem 14.2.1 then shows that f is a polynomial of degree n say. But then CV j x = 0 for j = n + 1, n + 2, . . .. In other words C(I − zV )−1 V n+1 x = 0. The fact that {C, V } is observable then shows that V n+1 x = 0. But the operator V is one-to-one. Hence x = 0, which contradicts the fact that x = 0. Now assume that the order of f is one. Then (7.8) shows that f is of exponential type and of type at most one. But then the fact that f is bounded on the imaginary axis implies (see Proposition 14.3.4) that f belongs to the Paley-Wiener class. We use f ∈ PW to prove the remaining part of the lemma. Since f ∈ PW, we know from Theorem 14.6.3 that f is an entire function of completely regular growth. Furthermore, there exists σ1 , 0 ≤ σ1 ≤ 1,

124

7 Finite Rank Perturbations of Operators of Integration

such that hf (θ ) =

& σ1 cos θ, −π/2 ≤ θ ≤ π/2; π/2 ≤ θ ≤ 3π/2.

0,

This shows that hf (θ ) ≤ H (θ ), where H is given by (7.9).

 

A linear map C : C[0, 1] → C is a continuous linear functional on C[0, 1] if and only if there exists η ∈ NBV [0, 1] such that the action of C is given by 

1

Cx =

x(s) dη(s)

(x ∈ C[0, 1]).

(7.10)

0

In that case η is uniquely determined by C, and the norm of C is equal to the total variation V01 (η) of η; see Theorem III.5.5 in [78]. Similarly, a linear map C : L2 [0, 1] → C is a continuous linear functional on L2 [0, 1] if and only if there exists g ∈ L2 [0, 1] such that the action of C is given by  Cx =

1

g(s)x(s) ds

(x ∈ L2 [0, 1]).

(7.11)

0

In that case g is uniquely determined (almost everywhere on [0, 1]) by C, and the norm of C = g2 ; see Theorem III.5.1 in in [78]. We conclude this section with two lemmas about observable pairs {C, V } on X = C[0, 1] and X = L2 [0, 1]. Throughout g ∈ L2 [0, 1], and by C and C◦ we denote the corresponding continuous linear functionals on L2 [0, 1] and C[0, 1], respectively. Thus C is given by (7.11), and C◦ is given by (7.10) with 

t

η(t) =

g(s) ds,

0 ≤ t ≤ 1.

0

Furthermore, to avoid confusion, the operator of integration on C[0, 1] will be denoted by V◦ and on L2 [0, 1] we will just used the symbol V . Lemma 7.1.2 The pair {C, V } is observable if and only if the pair {C◦ , V◦ } is observable. Proof Recall that the notion of observability is defined in the first paragraph of Sect. 6.1. Let J be the canonical embedding of C[0, 1] into L2 [0, 1]. Thus (Jf )(t) = f (t), 0 ≤ t ≤ 1. The operator J is a one-to-one, and its range is dense in L2 [0, 1] because the set of all continuous functions on [0, 1] is dense in L2 [0, 1]. Furthermore, we have J V◦ = V J

and C◦ = CJ.

(7.12)

7.1 Preliminaries

125

Now assume the pair {C, V } is observable, and let C◦ (I −zV◦ )−1 x = 0 for all z ∈ C for some x ∈ C[0, 1]. From (7.12) it follows that C◦ (I − zV◦ )−1 = C(I − zV )−1 J,

z ∈ C,

and thus C(I − zV )−1 J x = 0, z ∈ C. But {C, V } is observable. Hence J x = 0, and x = 0 because J is one-to-one. This proves that {C◦ , V◦ } is observable. Conversely, let {C◦ , V◦ } be observable, and assume that let C(I − zV )−1 x = 0 for all z ∈ C for some x ∈ L2 [0, 1]. Then Cx = 0, and hence 0 = C(I −zV )−1 x = Cx+zC(I −zV )−1 V x = zC(I −zV )−1 V x,

for all z ∈ C.

Thus C(I − zV )−1 V x = 0. Since V x ∈ C[0, 1], we have 0 = C(I − zV )−1 V x = C◦ (I − zV◦ )−1 V x. By assumption {C◦ , V◦ } is observable. Hence V x = 0. Since V is one-to-one, we conclude that x = 0. This proves that {C, V } is observable.   For the L2 [0, 1] case we have a necessary and sufficient condition for observability. To state the result we need the following definition. Definition 7.1.3 Let g ∈ L2 [0, 1]. We say that 1 belongs to the support of g if there does not exist a number δ, 0 < δ < 1, such that g|[1−δ,1] = 0 almost everywhere. In other words, 1 does not belong to the support of g if and only if there exists 0 < δ < 1 such that g|[1−δ,1] = 0 almost everywhere. Obviously, 1 belongs to the support of g implies that g = 0. If g is continuous from the left at t = 1 and g(1) = 0, then 1 belongs to the support of g. Lemma 7.1.4 Let V be the operator of integration on L2 [0, 1], and let C be the continuous linear functional on L2 [0, 1] given by (7.11), where g ∈ L2 [0, 1]. Then the pair {C, V } is observable if and only if 1 belongs to the support of g. Proof Let V◦ and C◦ be the operators on C[0, 1] introduced in the paragraph preceding Lemma 7.1.2. Given Lemma 7.1.2 it suffices to prove the present lemma for {C◦ , V◦ } in place of {C, V }. Using the first identity in (7.2) for X = C[0, 1] and interchanging the order of integration, we see that C◦ (I − zV◦ )−1 x = 

1

+z 0

e−zτ



1

g(s)x(s)ds+ 0



1

ezs g(s) ds x(τ ) dτ,

τ

In what follows the proof is divided into two parts.

x ∈ C[0, 1].

(7.13)

126

Part 1.

7 Finite Rank Perturbations of Operators of Integration

First it is assumed that 1 belongs to the support of g and that the pair {C◦ , V◦ } is not observable. It then follows from (7.13) that there exists a non-zero x ∈ C[0, 1] such that 

1

e−zτ



0

1

ezs g(s) ds x(τ ) dτ = 0

for all z ∈ C.

(7.14)

τ

First rewrite the left hand side of (7.14) as follows 

1

e−zτ



0

1



ezs g(s) ds x(τ ) dτ = 0

τ

 =

1  1 τ

1  1−τ

0

 =

ez(s−τ ) g(s) ds x(τ ) dτ

ezσ g(σ + τ ) dσ x(τ ) dτ

0 1



ezσ

0

1−σ 0



1

= −ez

e−zt

g(σ + τ )x(τ ) dτ dσ



0

t

ϕ(t − τ )x(τ ) dτ dt,

0

where σ = s − τ , t = 1 − σ and ϕ(s) = g(1 − s) for 0 ≤ s ≤ 1. This shows that (7.14) is equivalent to 

1

e−zt



0

t

ϕ(t − τ )x(τ ) dτ dt = 0

for all z ∈ C.

(7.15)

0

Note that the left hand side of (7.15) is the Laplace transform of a convolution product. Since the Laplace transform converts convolution products into algebraic products (see Lemma 3.1 in Section I.3 of [21]), we have 

1

e

−zt



0

t



1

ϕ(t − τ )x(τ ) dτ dt =

0

e 0

−zt



1

ϕ(t) dt

e−zt x(t) dt

0

and hence (7.15) is equivalent to 

1

e 0

−zt



1

ϕ(t) dt

e−zt x(t) dt = 0

for all z ∈ C.

(7.16)

0

Since 1 belongs to the support of g, the point 0 belongs to the support of ϕ and hence ϕ is non-zero L2 -integrable function. Since we assumed that x ∈ C[0, 1] is nonzero, it follows that x is non-zero L2 -integrable function as well. Therefore it follows from the Paley-Wiener theorem (see Theorem 14.3.1) that the left hand side of (7.16) is a product of two nonzero entire functions of exponential type.

7.2 Rank One Perturbations of the Operator of Integration on C[0, 1], Part 1

Part 2.

127

In particular, the left hand side of (7.16) is the product of two entire functions of order 1, but now it follows from Theorem 14.1.1 that the order of the product is 1 as well. Therefore the left hand side of (7.16) is a non-zero entire function of order 1, This contradicts identity (7.16). Therefore x = 0 and this shows that the pair {C◦ , V◦ } is observable. In this part it is assumed that 1 does not belong to the support of g. Then there exists 0 < δ < 1 such that g|[1−δ,1] is zero almost everywhere. Now choose 0 = x ∈ C[0, 1] such that x(t) = 0 for 0 ≤ t ≤ 1 − δ. Then 

1

 ezs g(s) ds x(s) = 0

for every τ ∈ [0, 1] and all z ∈ C.

τ

Using the identity (7.13), this implies that C◦ (I − zV◦ )−1 x = 0 for all z ∈ C, that is, the pair {C◦ , V◦ } is not observable.   Another sufficient condition for observability in the C[0, 1] setting, with the continuous linear functional C as in (7.10), is given in Lemma 7.2.6.

7.2 Rank One Perturbations of the Operator of Integration on C[0, 1], Part 1 As a first illustration of Theorem 6.2.1 we will study completeness for rank one perturbations of the operator of integration on C[0, 1]. More precisely in the present section X = C[0, 1] and   T x (t) =



t



1

x(s) ds +

0

x(s)dη(s),

x ∈ C[0, 1].

(7.17)

0

Here η ∈ NBV [0, 1]. Clearly, the operator T is a rank one perturbation of a Volterra operator. In fact, T = V + R where V and R on X = C[0, 1] are defined by   V x (t) =



t

x(s) ds,

  Rx (t) =

0



1

x(s)dη(s)

(0 ≤ t ≤ 1).

0

We shall use Theorem 6.2.1 to prove the following theorem. Theorem 7.2.1 Let T be the operator on C[0, 1] defined by (7.17), and assume that lim η(t) = η(1). t ↑1

(7.18)

Then T has a complete span of eigenvectors and generalised eigenvectors. More precisely, MT = C[0, 1].

128

7 Finite Rank Perturbations of Operators of Integration

We shall say that η has an atom at 1 whenever condition (7.18) is satisfied. Lemma 7.2.2 The operator T on C[0, 1] defined by (7.17) is one-to-one, and T has a dense range if η has an atom at 1. Proof To prove that T is one-to-one we can use the same line of reasoning as used in the first paragraph of Sect. 2.3, where this is proved for the case when T acts on L2 [0, 1]. As we have seen in Sect. 2.5, the range of T may not be dense in C[0, 1]. In fact, from Proposition 2.5.3, we know that the range of T is not dense in C[0, 1] implies that there exists a ϕ ∈ NBV [0, 1] such that 

t

η(t) = −t −

ϕ(s) ds

(0 ≤ t ≤ 1).

(7.19)

0

Since ϕ ∈ NBV [0, 1], it follows that ϕ is integrable (see [22, Theorem 6.2.8]), and hence identity (7.19) shows that η is continuous on [0, 1]. But the latter contradicts the fact that η has an atom at 1. Thus the range of T is dense in C[0, 1].   Next we define the characteristic function, and we proceed with a number of related auxiliary results. The Characteristic Function Let C : X → C and B : C → X be given by 

1

Cx =

x(s)dη(s) (x ∈ X),

(Bc)(t) = c

(0 ≤ t ≤ 1).

(7.20)

0

Then R = BC is a minimal rank factorisation. The corresponding characteristic matrix function is the scalar function (z) = I − zC(I − zV )−1 B = 1 − z



1

ezs dη(s).

(7.21)

0

Indeed, using (7.2) with x = Bc, where c ∈ C, we see that  t

(I − zV )−1 Bc (t) = c + z ezτ c dτ = ezt c

(0 ≤ t ≤ 1),

(7.22)

0

and hence 

1

(I − zV )

(z)c = c − z 0

−1

  Bc (s)dη(s) = 1 − z

1

 zs

e dη(s) c. 0

Since c ∈ C is arbitrary, this proves (7.21). The next lemma shows that the characteristic function  is of order ρ = 1.

7.2 Rank One Perturbations of the Operator of Integration on C[0, 1], Part 1

129

Lemma 7.2.3 Assume η is not constant on (0, 1], and let σ = inf{τ ∈ (0, 1] | η is constant on (τ, 1]}.

(7.23)

Then the characteristic function  belongs to the Paley-Wiener class PW, and its indicator h is given by h (θ ) =

& σ cos θ, −π/2 ≤ θ ≤ π/2, 0,

π/2 ≤ θ ≤ 3π/2.

(7.24)

Furthermore,  is of completely regular growth. In fact, more generally, the entire function 0 (z) = (z + z0 ) is of completely regular growth for any choice of z0 , and the corresponding indicator functions, h and h0 , coincide. Finally, if η has an atom at 1, then the indicator h of  is equal to the function H given by (7.9). Proof The first result (including formula (7.24)) is an immediate corollary of Proposition 14.4.4, with −z replaced by z. The fact that  is of completely regular growth follows from Theorem 14.6.3. The final statement follows from the fact (see Proposition 14.4.8) that  ∈ PW implies that the entire function z → (z0 + z) also belongs to the class PW, hence this entire function is also of completely regular growth. Furthermore (again using Proposition 14.4.8) the functions (z) and (z + z0 ) have the same indicator function. 1 To prove the final statement, recall (see (7.21)) that (z) = 1 − z 0 ezs dη(s). Since η has an atom at 1, it follows that inf{τ ∈ (0, 1] | η is constant on (τ, 1]} = 1. But then a direct application of Proposition 14.4.4 shows that h = H .

(7.25)  

Proposition 7.2.4 If η is not constant on (0, 1], then the entire function 

1

(z) = 1 − z

ezs dη(s)

(7.26)

0

has infinitely many zeros. Proof The result follows from Corollary 14.8.4 using Remark 14.8.5.

 

Further on we shall apply Theorem 6.2.1 with ρ = 1, θ1 = π/2, θ2 = 3π/2. For that purpose we need to analyse condition (6.17) in item (a) of Theorem 6.2.1. The following lemma will be used in the estimates. The lemma also allows us to prove that the pair {C, V } is observable when η has an atom at 1.

130

7 Finite Rank Perturbations of Operators of Integration

Lemma 7.2.5 If η has an atom at 1, then there exist real number c0 and positive constants M and m such that  M(1 + |z|)eRe z ≥ 



1

 ezs dη(s) ≥ meRe z ,

Re z > c0 .

(7.27)

0

Proof The upper bound in (7.27) follows by using partial integration (and does not require that η has an atom at 1). Indeed, 

1 0

1   ezs dη(s) = ezs η(s) − 0

 = e η(1) − z z

1

η(s) dezs

0 1

η(s)ezs ds

0

  = e η(1) − z



1

z

η(s)e

z(s−1)

(7.28)

ds .

0

Note that s − 1 ≤ 0 in (7.28). Hence (s − 1)Re z ≤ c0 and thus 



1

|

η(s)e

z(s−1)

0

1

ds| ≤

 |η(s)|e

(s−1)Re z

1

ds ≤

0

|η(s)|ec0 (s−1) ds.

0

The latter inequality together with the equality (7.28) yields the upper bound in (7.27). Let  > 0 be given. To prove the lower bound in (7.27) choose δ > 0 such that the variation of η over the interval [1 − δ, 1) is smaller than . Rewrite 

  e dη(s) = η(1) − η(1−) ez +

1

zs

0



1−



1−δ

e dη(s) + zs

1−δ

ezs dη(s)

0

to estimate  



1

   ezs dη(s) ≥ eRe z |η(1) − η(1−)| −  − va(η)(1 − δ)e−δRe z ,

0

where va(η) denotes the variation of η over the interval [0, 1]. So for c0 sufficiently large, there exists a positive constant m such that (7.27) holds.   From Lemma 7.2.5 it follows that if η has an atom at 1, then the lower estimate (6.17) is satisfied with θ1 = π/2, θ2 = 3π/2, provided z0 ∈ R is chosen such that z0 > max{c0 , 1/m}. Indeed, with this choice of z0 , there exists a δ0 > 0 such that  1 − z



1 0

   ezs dη(s) ≥ z0 mez0 − 1 ≥ δ0

for Re z = z0 .

7.2 Rank One Perturbations of the Operator of Integration on C[0, 1], Part 1

131

Furthermore, z0 ∈ R can be chosen such that I − z0 T is invertible, and hence (z0 ) = 0. Lemma 7.2.6 If η has an atom at 1, then the pair {C, V } is observable. Proof Assume that η has an atom at 1. To prove the observability of {C, V }, let x ∈ X, and assume C(I − zV )−1 x = 0 for all z ∈ C. Using C(I − zV )

−1



1

x= 

 dη(s)x(s) + z

0 1

=

s

dη(s) 

0 1

dη(s)x(s) + z

0



1

e 0

 −zτ



ez(s−τ )x(τ ) dτ

0 1

 ezs dη(s) x(τ ) dτ,

(7.29)

τ

and the uniqueness of the Laplace transform, we see that C(I − zV )−1 x = 0 for all z ∈ C, yields  

1

 ezs dη(s) x(τ ) = 0

for every τ ∈ [0, 1] and all z ∈ C.

(7.30)

τ

Thus if we take z on the line Re z = a0 with a0 > c0 , then it follows from Lemma 7.2.5 that 

1

ezs dη(s) = 0 for every τ ∈ [0, 1] and all z with Re z = a0 .

(7.31)

τ

Together with (7.30) this implies that x(τ ) = 0 for every τ ∈ [0, 1]. This shows that the observability condition holds.   Corollary 7.2.7 Assume η has an atom at 1, and put FT = {x ∈ C[0, 1] | (z) dominates C(I − zV )−1 x}

(7.32)

Then the indicator of the characteristic function  is equal to H , and hence  dominates the function f (z, x) for each x ∈ C[0, 1]. In other words, FT = C[0, 1]. Proof Since η has an atom at 1, the final statement of Lemma 7.2.3 shows that h is equal to the function H defined by (7.9). Now let f (z) = C(I −zV )−1 x with x = 0. Since {C, V } is observable (by the preceding lemma) we know from Lemma 7.1.1 that hf (θ ) ≤ H (θ ). Thus hf (θ ) ≤ h (θ ) which shows that (z) dominates f for each x, and thus FT = C[0, 1].   Proof of Theorem 7.2.1 Throughout we assume that η has an atom at 1. Then we know from Lemma 7.2.2 that the operator T given by (7.17) is ono-to-one and has a dense range. Moreover, the characteristic function  ∈ PW, and according to Proposition 7.2.4 the function  has infinitely many zeros. Furthermore, from Lemma 7.2.5 we know that with θ1 = π/2 and θ2 = 3π/2 condition (6.17) is fulfilled (see the paragraph after the proof of Lemma 7.2.5). The fact that the

132

7 Finite Rank Perturbations of Operators of Integration

same holds true for (6.28) follows from (7.4). We also know that the function z → (z + z0 ) belongs to the Paley-Wiener class PW, and hence this function is of completely regular growth. Thus all assumptions of Theorem 6.2.1 are satisfied, and thus we know from (6.19) that MT = {x ∈ X | (z + z0 ) dominates C(I − (z + z0 )V )−1 x}.

(7.33)

Here we used that in the present setting  is a scalar function, and so adj (z) = 1 for any choice of z. The functions (z) and (z+z0 ) have the same indicator function. Thus in order to complete the proof, i.e., in order to obtain MT = C[0, 1], we have to show (z) dominates C(I − (z + z0 )V )−1 x for each x ∈ C[0, 1].

(7.34)

We know that  belongs to the class PW and that its indicator is equal to the function H given by (7.9). Next consider the functions f (z) = f (z; x) = C(I −zV )−1 x

and f0 (z) = f (z; x) = C(I −(z+z0 )V )−1 x.

By Lemma 7.2.6 the pair {C, V } is observable. But then Lemma 7.1.1 tells us that f belongs to the class PW, and hf (θ ) ≤ H (θ ). Thus  dominates f . From Proposition 14.4.8 we know that the function f0 also belongs to the class PW and that the indicator functions of f and f0 coincide. Thus  dominates f0 too. Since x is an arbitrary vector in C[0, 1], we see that (7.34) holds true, and we are done.   Example 7.2.8 Assume that η consists of a single atom at 1 of value −e. In other words, η(t) = 0 for 0 ≤ t < 1 and η(1) = −e. Thus T is given by 

t

(T x)(t) =

x(s) ds − x(1)e,

0 ≤ t ≤ 1,

x ∈ C[0, 1],

0

and (7.21) shows that the corresponding characteristic function  is given by (z) = 1 + zez+1 . According to Theorem 7.2.1 the operator T has a complete span of eigenvectors and generalised eigenvectors.

7.3 Rank One Perturbations of the Operator of Integration on C[0, 1], Part 2 In the present section we study completeness of the operator T given by (7.17) for the case when η has no atom at 1. For simplicity, we consider the case when 

t

η(t) =

g(s) ds, 0

(7.35)

7.3 Rank One Perturbations of the Operator of Integration on C[0, 1], Part 2

133

where g is a non-zero square integrable function. In this case the operator T defined by (7.17) reduces to the operator Tg introduced in (1.40) acting on C[0, 1] in place of L2 [0, 1]. In other words we shall deal with the operator Tg given by:   Tg x (t) =



t



1

x(s) ds +

0

g(s)x(s) ds,

x ∈ C[0, 1].

(7.36)

0

The corresponding characteristic function is the entire function (z) given by (z) = 1 − C(I − zV )

−1



1

B =1−z

ezs g(s) ds,

z ∈ C.

(7.37)

0

Here the operators V , B, C are defined as in previous section using (7.35). Applying Proposition 7.2.4 with η given by (7.35) we obtain the following result. Proposition 7.3.1 Let (z) be the entire function given by the right hand side of (7.37), and let g be non-zero. Then the entire function (z) has infinitely many zeros. Recall (using Lemmas 7.1.2 and 7.1.4) that the pair {C, V } is observable if and only if 1 belongs to the support of g, that is, if and only if there does not exist a number δ, 0 < δ < 1, such that g|[1−δ,1] = 0 almost everywhere. The following two theorems are the main results of the present section. Theorem 7.3.2 Let g ∈ L2 [0, 1], and let Tg be the operator on C[0, 1] given by (7.36). Furthermore, let  be the corresponding characteristic function defined by formula (7.37), and assume that (A) there exists a0 ∈ R such that (z) is bounded from below in the left half plane Re z ≤ a0 , that is, for some δ0 > 0 we have |(z)| ≥ δ0 > 0

for all Re z ≤ a0 .

If, in addition, g is continuous from the left at t = 1 and g(1) = 0, then Tg has a complete span of eigenvectors and generalised eigenvectors. We shall derive the above theorem as a corollary of the following theorem which presents necessary and sufficient conditions in order that the operator Tg has a complete span of eigenvectors and generalised eigenvectors. Theorem 7.3.3 Let g ∈ L2 [0, 1], and let Tg be the operator on C[0, 1] given by (7.36). Assume that Tg has a dense range. Furthermore, let  be the corresponding characteristic function defined by (7.37), and assume that (A) there exists a0 ∈ R such that (z) is bounded from below in the left half plane Re z ≤ a0 , that is, for some δ0 > 0 we have |(z)| ≥ δ0 > 0

for all Re z ≤ a0 .

(7.38)

134

7 Finite Rank Perturbations of Operators of Integration

Then C[0, 1] = MTg ⊕ ST ,

(7.39)

and the following two statements are equivalent: (i) MTg = C[0, 1] or, by definition, Tg has a complete span of eigenvectors and generalised eigenvectors; (ii) 1 belongs to the support of g. The following proposition will be needed in the proof of the above theorem. The proposition is based on various elements of the theory of entire functions presented in Chap. 14. Proposition 7.3.4 Let g ∈ L2 [0, 1], and assume that 1 belongs to the support of g. Then the corresponding characteristic function  be defined by (7.37) belongs to the Paley-Wiener class PW, and its type is σ = 1. Furthermore, its indicator function h is given by h = H , where H is the function given by (7.9). Proof The fact that 1 belongs to the support of g implies that g is non-zero. But then we know from Definition 14.3.3 that  belongs to the Paley-Wiener class PW. In particular,  is of exponential type by item (i) in Proposition 14.3.4. Furthermore, using Corollary 14.3.5 and Lemma 14.3.6, the fact that 1 belongs to the support of g implies that the type of  is equal to one, and we can apply Proposition 14.4.3 with f (z) = (−z) and therefore hf (θ ) = h (θ + π). This shows that h = H , where H is the function given by (7.9).   Proof of Theorem 7.3.3. We shall apply Theorem 6.2.1 with Tg on C[0, 1] in place of T on X. From the first part of Lemma 7.2.2 we know that Tg is one-toone. Furthermore, from Proposition 7.3.1 it follows that the entire function  has infinitely many zeros. The remaining of the proof is split into three parts. Part 1.

In this part we prove the identity (7.39). From Proposition 7.3.4 we know that  belongs to the class PW. In particular,  is an entire function of order ρ = 1. Furthermore, from the inequality (7.4) it follows that (6.28) is satisfied with m = 1, θ1 = π/2, θ2 = 3π/2, z0 = a, where a ∈ R and s0 ≥ 0 are still arbitrary. In the sequel we take s0 = 0, and we choose z0 = a ≤ a0 such that (z0 ) = 0. Here a0 is the real number appearing in (7.38). Note that a ≤ a0 implies that (6.17) is also satisfied (by condition (A) which holds by assumption). Furthermore, (7.4) shows that (6.18) is satisfied too. Thus items (a) and (b) in Theorem 6.2.1 are both satisfied. Moreover, the fact that (z) belongs to the class PW implies (see Proposition 14.4.8) that z → (z0 + z) also belongs to the class PW, and hence the map z → (z0 + z) is of completely regular growth (by Theorem 14.6.3). We can now use (6.20) in Theorem 6.2.1 to conclude that the identity (7.39) holds. Furthermore, the identity (6.19) in

7.3 Rank One Perturbations of the Operator of Integration on C[0, 1], Part 2

135

Theorem 6.2.1 yields the identity: MTg = {x ∈ C[0, 1] | (z + z0 ) dominates C(I − (z + z0 )V )−1 x}. (7.40) Part 2.

In this part we show that item (ii) implies item (i), i.e., we assume that 1 belongs to the support of g, and we shall prove that Tg has a complete span of eigenvectors and generalised eigenvectors. Given (7.40) and the identity (7.39), it suffices to prove that (z + z0 ) dominates C(I − (z + z0 )V )−1 x for each x ∈ C[0, 1]. (7.41)  We begin with some preliminaries. Put (z) = (z + z0 ), and f(z) = f(z; x) = C(I − (z + z0 )V )−1 x, f (z) = f (z; x) = C(I − zV )−1 x.

Part 3.

Here x ∈ C[0, 1]. Since  belongs to the class PW, the same holds . Moreover,  and   have the same indicator function (by true for  Proposition 14.4.8). From Lemma 7.1.1 we know that f belongs to the class PW. Furthermore, if f belongs to the class PW, then the function f also belongs to the class PW, and f and f have the same indicator. On the other hand, if the order of f is strictly less than one, then the same holds true for f. In fact f and fhave the same order (see Lemma 14.1.4). Hence in order to prove (7.41) we may without loss of generality assume that z0 = 0. Now fix x ∈ C[0.1], and assume that 1 belongs to the support of g. Then Lemma 7.1.4 tells us that the pair {C, V } is observable. Next apply Proposition 7.3.4 to show that h = H , where H is the function given by (7.9). On the other hand, according to Lemma 7.1.1, we have hf ≤ H . Thus (7.41) holds for each x ∈ C[0, 1], and hence MTg = C[0, 1]. It remains to prove item (i) implies item (ii). This will be done by contradiction. So let us assume that item (ii) is not satisfied. Then, by Lemmas 7.1.2 and 7.1.4, the pair {C, V } is not observable. But then item (ii) in Theorem 6.2.1 tells us that STg = {0}, and we can use the identity (7.39) to show that Tg does not have a complete span of eigenvectors and generalised eigenvectors which contradicts our assumption (i). This completes the proof.  

Proof of Theorem 7.3.2. Assume that g is continuous from the left at t = 1 and g(1) = 0. Then 1 belongs to the support of g (see the paragraph directly after Definition 7.1.3), and from Corollary 2.5.5 we know that the range of Tg is dense in C[0, 1]. Thus Tg has a complete span of eigenvectors and generalised eigenvectors by Theorem 7.3.3.  

136

7 Finite Rank Perturbations of Operators of Integration

We conclude this section with a remark and a lemma concerning the assumption “the operator Tg has a dense range” appearing in Theorem 7.3.3. First we present an example showing that condition (A) in Theorem 7.3.3 does not imply that the range of Tg is dense in C[0, 1]. Let g be defined by ⎧ ⎨ −1 for t = 0, g(t) = a for 0 < t < 1, ⎩ 0 for t = 1. Here a is an arbitrary real number. Clearly, g ∈ L2 [0, 1]. Put ϕ(t) :=

⎧ ⎨

0 for t = 0, −1 − a for 0 < t < 1, ⎩ −1 for t = 1.

Note that ϕ is of bounded variation, ϕ(0) = 0, and ϕ is continuous on the open interval (0, 1). Thus, see the paragraph preceding Proposition 2.5.3, the function ϕ 1 belongs to the class NBV [0, 1] with 0 dϕ(s) = −1. Next put g(t) ˜ = −1 − ϕ(t) for 0 ≤ t ≤ 1. Note that g(0) ˜ = −1,

g(t) ˜ = a,

g(1) ˜ = 0.

Thus g˜ = g everywhere on [0, 1] and this shows that (2.50) of Corollary 2.5.5 is satisfied and hence the range of Tg is not dense in C[0, 1]. On the other hand for this function g the characteristic function  is given by  (z) = 1 − z

1



1

ezs g(s) ds = 1 − az

0

1  = 1 − aezs  = 1 + a − aez .

ezs ds

0

0

It follows that (z) → 1 + a if Re z → −∞. Hence for a = −1, condition (A) is satisfied while the range of Tg is not dense in C[0, 1]. Lemma 7.3.5 Let g ∈ L2 [0, 1], and let  be the characteristic function defined by g. Assume that g is continuous on [0, δ] for some 0 < δ < 1. Then condition (A) in Theorem 7.3.3 implies that the range of Tg is dense in C[0, 1]. Proof The proof goes by contradiction. Let g be continuous on [0, δ] for some δ > 0, and assume that the range of Tg is not dense in C[0, 1]. We shall prove that under these conditions (−r) → 0

(r → ∞).

(7.42)

7.3 Rank One Perturbations of the Operator of Integration on C[0, 1], Part 2

137

Since the range of Tg is not dense in C[0, 1], there exists ϕ ∈ NBV [0, 1] such that ϕ(1) = −1 and g(·) ˜ := −1 − ϕ(·) is equal to g(·) almost everywhere on [0, 1]. Using this function ϕ and the fact that g = g˜ almost everywhere on [0, 1] we see that the characteristic function is given by 

1

(z) = 1 − z 



0

e 

 zs



− 1 − ϕ(s) ds

0 1

=1+z

ezs g(s) ˜ ds

0 1

=1−z

1

ezs g(s) ds = 1 − z



1

ezs ds +

0

1   = 1 + ezs  + 0

zezs ϕ(s) ds

0 1



ϕ(s) dezs = ez +

0

1

ϕ(s) dezs .

0

Next using ϕ(0) = 0 and ϕ(1) = −1 we obtain 

1 0

1   ϕ(s) dezs = ϕ(s)ezs  − 0

1



1

ezs d ϕ(s) = −ez −

0

ezs d ϕ(s).

0

We conclude that 

1

(z) = −

ezs d ϕ(s).

0

In what follows Vab (ϕ) denotes the total variation of the function ϕ over the interval [a, b] ⊂ [0, 1]. Take 0 < c < 1. Using standard properties of functions of bounded variation (see, e.g., paragraph 5.20 in [82] or Section 5.3 in [22]), we have  



1 c

 e−rs d ϕ(s) ≤ e−rc Vc1 (ϕ) → 0  



c 0

(r → ∞),

 e−rs d ϕ(s) ≤ V0c (ϕ).

(7.43) (7.44)

Since ϕ(·) = −1 − g(·) almost everywhere on [0, 1], we have V0t (ϕ) = V0t (−1 − g(·)) for 0 ≤ t ≤ 1. But then fact that the function 1 − g(·) is continuous on [0, δ] implies (see Section 5.5 in [22]) that t → V0t (ϕ) is also continuous on [0, δ]. The later yields V0t (ϕ) → 0

if t ↓ 0.

138

7 Finite Rank Perturbations of Operators of Integration

Now fix  > 0. Then the above continuity property implies that there exists 0 < c < 1 such that V0c (ϕ) < 12 . Furthermore, using the limit (7.43) we see that there exits r0 > 0 such that  

 c

 1 e−rs d ϕ(s) ≤  2



c

1

r ≥ r0 .

It follows that  



1

e

−rs

  d ϕ(s) ≤ 

0

e

−rs

  d ϕ(s) + 



0

1

 e−rs d ϕ(s) ≤ ,

r > r0 .

c

But then  |(−r)| = 



1

 ezs d ϕ(s) → 0 if r → ∞,

0

 

and (7.42) is proved. Since (7.42) contradicts condition (A), we are done.

7.4 Rank One Perturbations of the Operator of Integration on L2 [0, 1] In this final section we assume that the underlying space X is the Hilbert space L2 [0, 1]. We shall prove the following result which can be viewed as a further refinement of Corollary 2.3.2. The theorem will be obtained as a corollary of Theorem 6.2.1 in much the same way as Theorem 7.3.3 has been derived as a corollary of Theorem 6.2.1. Throughout Tg is the operator on L2 [0, 1] given by 

 Tg x (t) =



t



1

x(s) ds +

0

g(s)x(s) ds,

x ∈ L2 [0, 1].

(7.45)

0

The corresponding characteristic function is the entire function (z) given by (z) = 1 − zC(I − zV )

−1



1

B =1−z

ezs g(s) ds,

z ∈ C.

(7.46)

0

Here the operator V is the operator of integration on L2 [0, 1], and the operators B : C → L2 [0, 1] and C : L2 [0, 1] → C are defined by  Cx =

1

g(s)x(s)ds 0

(x ∈ L2 [0, 1]),

(Bc)(t) = c

(0 ≤ t ≤ 1).

7.4 Rank One Perturbations of the Operator of Integration on L2 [0, 1]

139

From Proposition 7.3.1 we know that the entire function (z) has infinitely many zeros whenever g is non-zero. Theorem 7.4.1 Let g ∈ L2 [0, 1] be non-zero, and let Tg be the operator on L2 [0, 1] given by (7.45). Furthermore, let (z) be the corresponding characteristic function given by (7.46), and assume that (A) there exists a0 ∈ R such that (z) is bounded from below in the left half plane Re z ≤ a0 , that is, for some δ0 > 0 we have |(z)| ≥ δ0 > 0

for all Re z ≤ a0 .

(7.47)

Then Tg has a dense range, L2 [0, 1] = MTg ⊕ ST ,

(7.48)

and the following two statements are equivalent: (i) MTg = L2 [0, 1], i.e., Tg has a complete span of eigenvectors and generalised eigenvectors; (ii) 1 belongs to the support of g. Proof We first show that the range of Tg is dense in L2 [0, 1]. This is done by contradiction. Indeed, assume that Tg is not dense in L2 [0, 1]. Then (see Part 1 of the proof of Corollary 2.3.2 and use that q(z) in Corollary 2.3.2 is equal to (z)) it follows that (−r) → 0

(r → ∞).

But the latter contradicts the requirements in (A). Hence Tg is dense in L2 [0, 1]. The remaining part of the proof is split into three parts. Part 1.

In this part we prove the identity (7.48). From (7.36) and (7.45) we know that the characteristic functions (z) for L2 [0, 1] and C[0, 1] are the same, and hence Proposition 7.3.4 tells us that that  belongs to the class PW. In particular,  is an entire function of order ρ = 1. Furthermore, from the inequality (7.5) it follows that (6.28) is satisfied with m = 1, θ1 = π/2, θ2 = 3π/2, z0 = a, where a ∈ R and s0 ≥ 0 are still arbitrary. In the sequel we take s0 = 0, and we choose z0 = a ≤ a0 such that (z0 ) = 0. Here a0 is the real number appearing in (7.47). Note that a ≤ a0 implies that (6.18) is also satisfied (by condition (A) which holds by assumption). Thus items (a) and (b) in Theorem 6.2.1 are both satisfied. Moreover, the fact that (z) belongs to the class PW implies (see Proposition 14.4.8) that z → (z0 + z) also belongs to the class PW, and hence the map z → (z0 + z) is of completely regular growth (by Theorem 14.6.3). We can now use (6.20) in Theorem 6.2.1 to conclude that the identity (7.48) holds. Furthermore, the identity (6.19) in

140

7 Finite Rank Perturbations of Operators of Integration

Theorem 6.2.1 yields the identity: MTg = {x ∈ C[0, 1] | (z + z0 ) dominates C(I − (z + z0 )V )−1 x}. (7.49) Part 2.

In this part we prove that item (ii) implies item (i) is satisfied. Thus we assume that 1 belongs to the support of g, and we shall prove that Tg has a complete span of eigenvectors and generalised eigenvectors. Given (7.49) and the identity (7.48), it suffices to prove that (z + z0 ) dominates C(I − (z + z0 )V )−1 xfor each x ∈ C[0, 1]. (7.50)  We begin with some preliminaries. Put (z) = (z + z0 ), and f(z) = f(z; x) = C(I − (z + z0 )V )−1 x,

Part 3.

f (z) = f (z; x) = C(I − zV )−1 x.

Here x ∈ L2 [0, 1]. Since  belongs to the class PW, the same holds . Moreover,  and   have the same indicator function (by true for  Proposition 14.4.8). From Lemma 7.1.1 we know that f belongs to the class PW assuming x = 0. Furthermore, if f belongs to the class PW, then f also belongs to the class PW, and the entire functions f and f have the same indicator. Hence in order to prove (7.50) we may without loss of generality assume that z0 = 0. Now fix x ∈ L2 [0, 1], and assume that 1 belongs to the support of g. Then Lemma 7.1.4 tells us that the pair {C, V } is observable. Next apply Proposition 7.3.4 to show that h = H , where H is the function given by (7.9). On the other hand, according to Lemma 7.1.1 we have hf ≤ H . Thus (7.50) holds for each x ∈ L2 [0, 1], and hence MTg = L2 [0, 1]. It remains to prove that item (i) implies item (ii). This implication follows from item (ii) in Theorem 6.2.1 (cf., Part 3 in the proof of Theorem 7.3.3). This completes the proof.  

It is interesting to compare Theorem 7.4.1 and its proof with the results and proofs in Sect. 2.3 which also deal with case when Tg acts on L2 [0, 1]. In order to illustrate how the approach based on Theorem 6.2.1 simplifies the calculations presented in Sect. 2.3, let us compare (7.13) with the expression (2.31) for y(0). Observe that q(z) = (z) and that (z) dominates C(I − zV )−1 x if and only if q(z) dominates q(z)y(0), where y(0) is given by (2.31). This observation shows that while in Sect. 2.3 in order to apply Theorem 2.2.2 estimates for y(t), 0 ≤ t ≤ 1, were needed, but (as we see now), it actually suffices to have estimates for y(0) only and then apply Theorem 6.2.1. This reduction to a finite dimensional condition will also play a significant role in the computations in the examples presented in the next two chapters.

Chapter 8

Discrete Case: Infinite Leslie Operators

In this chapter we discuss a class of operators acting on the Hilbert space 2+ (C). The operators are infinite versions of Leslie matrices [54], and therefore they are called Leslie operators. These operators are again of the form finite rank perturbation of a Volterra operator. The chapter consists of four sections. The first two present the definition of a Leslie operator and describe the connection with boundary value systems. The third section has an auxiliary character. In the fourth section we present a class of Leslie operators T with special completeness properties, namely T ∗ has a complete span of eigenvectors and generalised eigenvectors, while the latter is not true for the operator T . In the final section we consider completeness for a generalised Leslie operator which is a finite rank perturbation of a non-compact quasi-nilpotent operator, i.e., the role of Volterra operators is taken over by quasinilpotent operator, as in Sect. 6.3.

8.1 Definition of a Leslie Operator In the study of age-structured populations with the population divided in age classes, Leslie [54] introduced non-negative matrices to study the stationary age distribution of the population. The extension to infinite Leslie matrices was studied by Demetrius [18] and attracted some recent interest [39]. The Leslie operators we shall deal with can be defined as follows.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_8

141

142

8 Discrete Case: Infinite Leslie Operators

Let 2+ = 2+ (C) denote the Hilbert space of infinite sequences that are square summable in absolute value, and let T be the operator on 2+ defined by ⎡

r0 ⎢v0 ⎢ ⎢ T = ⎢0 ⎢0 ⎣ .. .

r1 0 v1 0 .. .

r2 0 0 v2 .. .

⎤ ··· ··· · · · · · ·⎥ ⎥ · · · · · ·⎥ ⎥ 0 · · ·⎥ ⎦ .. .. . .

(8.1)

∞ 2 Here r = {rj }∞ j =0 and v = {vj }j =0 sequences in + satisfying ∞

|rj |2 < ∞ and

j =1



|vj |2 < ∞.

j =1

Such an operator T we will call a Leslie operator. We shall see in Sect. 8.3 that the operator T in (8.1) is the sum of a Volterra operator and a rank one operator, and hence T is an operator of the type considered in Sect. 6.2. The latter will allow us to see when a Leslie operator has a complete set of eigenvectors.

8.2 Associated Boundary Value Systems With the operator T we can associate the following time variant boundary value system ⎧ x1,n+1 = x1,n + rn un , ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ x2,n+1 = un

n = 0, 1, 2, . . .

(1) (2) ⎪ ⎪ yn = γn x1,n + γn x2,n ⎪ ⎪ ⎪ ⎩ limn→∞ x1,n = 0 and x2,0 = 0.

(8.2)

Here * γn(1) =

−1 if n = 0, 0 if n ≥ 1;

* and γn(2) =

0 if n = 0, vn−1 if n ≥ 1.

(8.3)

8.2 Associated Boundary Value Systems

143

The state xn at time n of the system (8.2) is the vector xn = (x1,n , x2,n ) and the state space is C2 . Furthermore, the boundary values of the system (8.2) are wellposed, that is, the only solution of the homogeneous system ⎧ x1,n+1 = x1,n , n = 0, 1, , 2, . . . ⎪ ⎪ ⎨ x2,n+1 = 0 ⎪ ⎪ ⎩ limn→∞ x1,n = 0 and x2,0 = 0

(8.4)

is the trivial solution. In that case the system has a well-defined input-output operator; see [29, Section V.3], where the term “transfer operator” is used instead of “input-output operator”. The following lemma shows that the input-output operator of system (8.2) is given by the operator T . Lemma 8.2.1 Given an input sequence (u0 , u1 , u2 , . . .) in 2+ the system (8.2) has a unique output sequence (y0 , y1 , y2 , . . .) which also belongs to 2+ . Moreover the relation between input sequence and output sequence is given by ⎡ ⎤ ⎡ y0 r0 ⎢y1 ⎥ ⎢v0 ⎢ ⎥ ⎢ ⎢y2 ⎥ ⎢ 0 ⎢ ⎥=⎢ ⎢y ⎥ ⎢ 0 ⎣ 3⎦ ⎣ .. .. . .

r1 0 v1 0 .. .

r2 0 0 v2

r3 0 0 0 .. .

⎤⎡ ⎤ ··· u0 ⎥ ⎢ · · ·⎥ ⎢u1 ⎥ ⎥ ⎢ ⎥ · · ·⎥ ⎥ ⎢u2 ⎥ . ⎥ ⎢u ⎥ ⎦ ⎣ 3⎦ .. .. . .

(8.5)

Proof Take (u0 , u1 , u2 , . . .) in 2+ . Then x1,n = x1,n−1 + rn−1 un−1 = x1,n−2 + rn−2 un−2 + rn−1 un−1 = − − − − −− = x1,0 +

n−1

rk uk ,

n = 1, 2, 3, . . . .

k=0

 Thus limn→∞ x1,n = x1,0 + ∞ k=0 rk uk . But then the first boundary value condition  (1) (2) in (8.2) tells us that x1,0 = − ∞ k=0 rk uk . Using the definitions of γn and γn in (8.3) together with the second boundary value condition in (8.2) we see that y0 = γ0(1) x1,0 + γ0(2) x2,0 = γ0(1) x1,0 =



rk uk ,

k=0

yn = γn(1) x1,n + γn(2) x2,n = γn(2) x2,n = vn−1 un−1 , Thus the identity (8.5) holds true.

n = 1, 2, . . . .  

144

8 Discrete Case: Infinite Leslie Operators

Next let z be a complex number, and consider the operator I − zT , where T is given by (8.1). From Lemma 8.2.1 and its proof it follows that I − zT is the input-output operator of the following boundary value system: ⎧ x1,n+1 = x1,n + rn un , ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ x2,n+1 = un

n = 0, 1, 2, . . . (8.6)

(1) (2) ⎪ ⎪ yn = −zγn x1,n − zγn x2,n + un ⎪ ⎪ ⎪ ⎩ limn→∞ x1,n = 0 and x2,0 = 0.

Interchanging input and output in the above system we obtain the so-called inverse system: ⎧  (1)  (2) ⎪ ⎪ x1,n+1 = 1 + zrn γn x1,n + zrn γn x2,n + rn yn , ⎪ ⎪ ⎪ ⎪ ⎨ x2,n+1 = zrn γn(1)x1,n + zrn γn(2) x2,n + yn

n = 0, 1, 2, . . .

(1) (2) ⎪ ⎪ un = zγn x1,n + zγn x2,n + yn ⎪ ⎪ ⎪ ⎪ ⎩ limn→∞ x1,n = 0 and x2,0 = 0.

(8.7)

Lemma 8.2.2 Let z be a complex number. The operator I − zT is invertible if and only if the boundary value conditions of the inverse system (8.7) are well-posed, and in that case the operator (I − zT )−1 is the input-output operator of (8.7). Furthermore, the boundary value conditions of the inverse system (8.7) are wellposed if and only (z) = 0, where (z) = 1 − z



zν rν

ν=0

Here we use the convention that

−1 j =0 vj

ν−1 

vj .

(8.8)

j =0

= 1.

Proof Fix the given complex number z. From [29, TheoremV.3.3] we know that the system (8.7) has well-posed boundary value conditions if and only the operator I − zT is invertible, and in that case the operator (I − zT )−1 is the input-output operator of (8.7). It remains to show that well-posedness of the boundary value conditions is equivalent to (z) = 0, where (z) is given by (8.8). It will be convenient to first present a more concise representation for the system (8.6), namely as follows ⎧ xn+1 = An xn + Bn un , n = 0, 1, 2, . . . ⎪ ⎪ ⎨ yn = −zCn xn + un ⎪ ⎪ ⎩ N1 x0 + limn→∞ N2 xn = 0.

(8.9)

8.2 Associated Boundary Value Systems

145

Here   10 An = , 00

    r Bn = n , Cn = γn(1) γn(2) 1     00 10 N1 = , N2 = . 01 00

(n = 0, 1, , 2, . . .),

The corresponding inverse system (8.7) is then given by ⎧ n = 0, 1, 2, . . . xn+1 = A× ⎪ n xn + Bn yn , ⎪ ⎨ un = zCn xn + un ⎪ ⎪ ⎩ N1 x0 + limn→∞ N2 xn = 0.

(8.10)

Here, by definition, A× n = An + zBn Cn . In other words, A× 0 = A× n

       10 1 − zr0 0 r  , + z 0 −1 0 = 1 −z 0 00

       10 1 zrn vn−1 rn  , = +z 0 vn−1 = 1 0 zvn−1 00

n ≥ 1.

By induction one proves that × A× n · · · An−k

βk (z) = z

k ν=0



1 βk (z) = 0 δk (z)

zk−ν rn−ν

k 

 (n ≥ k + 1), where

vn−1−j ,

δk (z) = zk+1

j =ν

k 

vn−1−j .

j =0

It follows that × × A× n · · · A1 A0



  1 β1 (z) 1 − zr0 0 = 0 δ1 (z) −z 0 

1 − zr0 − zβ1 (z) 0 = 0 −zδ1 (z)  =

 ρ1,n (z) 0 , ρ2,n (z) 0



(8.11)

146

8 Discrete Case: Infinite Leslie Operators

where ρ1,n (z) = 1 − zr0 − zβ1 (z) = 1 − zr0 − z2

n−1

zn−1−ν rn−ν

n

vn−1−j

j =ν

ν=0

=1−z

n−1 

zn−ν rn−ν

n 

vn−1−j = 1 − z

j =ν

ν=0

ρ2,n = −zδ1 (z) = −zn+1

n−1 

n

zν rν

ν=0

ν−1 

vj ;

j =0

vj .

j =0 × × Here we use the convention that v−1 = 1. Using formula (8.11) for A× n · · · A1 A0 , we see that × × N2 xn = N2 A× n−1 · · · A1 A0 x0



    1 0 ρ1,n (z) 0 ρ1,n (z)x1,0 = x0 = . 0 0 0 ρ2,n (z) 0 If follows that     (z)x1,0 ρ1,n (z)x1,0 = , n→∞ x2 , 0 x2 , 0

N1 x0 + lim N2 xn = lim n→∞

where (z) is given by (8.8). Indeed, (z) = 1 − z

∞ ν=0

zν rν

ν−1  j =0

vj = lim ρ1,n (z). n→∞

We conclude that the boundary value conditions in (8.10) can be rewritten as (z)x1,0 = 0 and x2,0 = 0. Hence these conditions are well-posed if and only if (z) = 0.  

8.3 The Characteristic Function and Related Properties This section has an auxiliary character. As a first step, note that the operator T given by (8.1) is a rank one perturbation R of a lower triangular operator V , namely

8.3 The Characteristic Function and Related Properties

147

T = V + R, where ⎡

0 ⎢v0 ⎢ ⎢ V =⎢0 ⎢0 ⎣ .. .

0 0 v1 0 .. .

0 0 0 v2 .. .

··· ··· ··· 0 .. .

⎡ r0 ⎢0 ⎢ ⎢ R = ⎢0 ⎢0 ⎣ .. .

⎤ ··· · · ·⎥ ⎥ · · ·⎥ ⎥, · · ·⎥ ⎦ .. .

r1 0 0 0 .. .

··· ··· ··· 0 .. .

r2 0 0 0 .. .

⎤ ··· · · ·⎥ ⎥ · · ·⎥ ⎥, · · ·⎥ ⎦ .. .

(8.12)

both acting on 2+ . The fact that V is a weighted shift with weights vj → 0 for j → ∞ implies that V is compact, see [17, Corollary 4.27.5]. From the explicit representation for the resolvent operator (I −zV )−1 given in (8.15) below, it follows that V is a Volterra operator. In particular, T is a compact operator. Moreover, the operator R factors as R = B◦ C◦ , where B◦ : C → 2+ and C◦ : 2+ → C are given by ⎡ ⎤ u0 ∞ ⎢u1 ⎥ ⎢ ⎥ rj uj . C◦ ⎢u ⎥ = ⎣ 2⎦ j =0 .. .

⎡ ⎤ u ⎢0 ⎥ ⎢ ⎥ B◦ u = ⎢0⎥ , ⎣ ⎦ .. .

(8.13)

Obviously, this factorisation is a minimal rank factorisation. It follows that the function ◦ (z) = 1 − zC◦ (I − zV )−1 B◦ is a characteristic scalar function for T in the sense of Theorem 6.1.1. The next lemma shows that the function ◦ (z) coincides with the function (z) defined by (8.8), and hence completeness of the operator T can be studied in terms of the function (z) defined by (8.8) using Theorem 6.2.1. Lemma 8.3.1 The characteristic function ◦ (z) = 1−zC◦ (I −zV )−1 B◦ coincides with the function (z) defined by (8.8). Furthermore, C◦ (I − zV )−1 ϕ =



rν ϕν +

ν=0



zm

m=1





m+ν−1 

rm+ν ⎝

ν=0

⎞ vj ⎠ ϕν .

(8.14)

j =ν

Proof We start by showing that the resolvent operator (I −zV )−1 is explicitly given by ψ = (I − zV )−1 ϕ with ψ0 = ϕ0 and ψk = ϕk +

k m=1

⎛ zm ⎝

k−1 

⎞ vj ⎠ ϕk−m ,

k ≥ 1.

(8.15)

j =k−m

We prove (8.15) by induction. First note that ψ = (I − zV )−1 ϕ implies that ϕ = ψ − zV ψ which yields ψ0 = ϕ0 and ψk = ϕk + zvk−1 ψk−1 ,

k ≥ 1.

(8.16)

148

8 Discrete Case: Infinite Leslie Operators

Using ψ0 = ϕ0 the identity (8.16) shows that (8.15) holds for k = 1. Now assume that (8.15) holds for some k ≥ 1. Then using (8.16) with k + 1 in place of k we see that ⎛ ⎛ ⎞ ⎞ k k−1  zm ⎝ vj ⎠ ϕk−m ⎠ ψk+1 = ϕk+1 + zvk ⎝ϕk + j =k−m

m=1 k

= ϕk+1 + zvk ϕk +

zm+1 ⎝

m=1 k+1

= ϕk+1 + zvk ϕk +

= ϕk+1 +

k+1





⎞ vj ⎠ ϕk−m

j =k−m



k 

zν ⎝



vj ⎠ ϕk−ν+1

j =k−ν+1

ν=2

zν ⎝

k 

k 



vj ⎠ ϕ(k+1)−ν .

j =(k+1)−ν

ν=1

But then (8.15) is proved for k + 1 in place of k, and by induction this identity holds for each k. Next we compute Q(z) = (I − zV )−1 B◦ . Using the definition of B◦ , the representation (8.15) and ψ0 = ϕ0 , it follows that Q(z) k = zk k−1 j =0 vj for k ≥ 0, where the product is equal to one when k = 0. Hence ◦ (z) = 1 − zC◦ (I − zV )−1 B◦ = 1 − z

=1−z



⎛ rk ⎝

k−1 

∞   rk Q(z) k k=0

⎞ vj ⎠ zk .

(8.17)

j =0

k=0

Comparing the latter result with (8.8) we see that ◦ (z) = (z), as desired. Finally, for ϕ ∈ 2 (C), using ψ0 = ϕ0 and (8.15), we see that C0 (I − zV )−1 ϕ = r0 ϕ0 +



rk ϕk +

k=1

=

=



rk ϕk +

k=1 ∞

k=0

m=1





k=0

rk ϕk +



m=1

zm

∞ k=m

z

m

∞ ν=0

rk



k

zm ⎝

rk ⎝

⎞ vj ⎠ ϕk−m

j =k−m

m=1



k−1 

k−1 



vj ⎠ ϕk−m

j =k−m



m+ν−1 

rm+ν ⎝

j =ν

⎞ vj ⎠ ϕν .

8.3 The Characteristic Function and Related Properties

149

 

Thus (8.14) is proved.

Next in Lemma 8.3.2, Lemma 8.4.1 and Lemma 8.4.2 we present conditions on ∞ the sequences {rj }∞ j =0 and {vj }j =0 that guarantee that the operator T given by (8.1) is one-to-one, has dense range, the pair {C◦ , V◦ } is observable, and for every x ∈ H the entire function z → C◦ (I − zV◦ )−1 x is of Paley-Wiener class. Obviously, T is one-to-one if vj = 0 for all j = 0, 1, 2, . . .. The next lemma tells us when T has a dense range. Lemma 8.3.2 Let T be given (8.1), and assume that vj = 0 for all j = 0, 1, 2, . . .. Then T is one-to-one, and the range of T is dense in 2+ if and only the sequence {|rj /vj |}∞ j =0 is not square summable. ∞ 2 Proof Let T u = y, where u = {uj }∞ j =0 and y = {yj }j =0 are in + . Then yj +1 = vj uj for j ≥ 0. Thus y = 0 implies that vj uj = 0 for each j ≥ 0. But vj = 0 for each j ≥ 0. It follows that u = 0 when y = 0. Thus T is one-to-one. Next we prove that the range of T is not dense if and only if the sequence {|rj /vj |}∞ j =0 is square summable. To do this we apply Lemma 2.5.1 with V and R being given by (8.12) and with X = 2+ . In this case Rx = x, η u, where

η = (r0 , r1 , r2 , . . .)

and u = (1, 0, 0, . . .).

2 It follows that for any ϕ = {ϕj }∞ j =0 in + we have

V ∗ ϕ = η ⇐⇒ vj ϕj +1 = rj

(j = 0, 1, 2, . . .);

u, ϕ = −1 ⇐⇒ ϕ0 = −1.

(8.18) (8.19)

Now assume that the range of T is not dense in 2+ . Then, according to Lemma 2.5.1, there exist ϕ ∈ 2+ such that V ∗ ϕ = η, and hence the equivalence in (8.18) shows that the sequence {|rj /vj |}∞ j =0 is square summable. Conversely, assume that the sequence {|rj /vj |}∞ is square summable. Put j =0 & ϕj =

−1 when j = 0, / rj −1 vj −1 when j = 1, 2, . . . .

Then ϕ = (ϕ0 , ϕ1 , ϕ2 , . . .) ∈ 2+ , and the equivalences in (8.18) and (8.19) imply that V ∗ ϕ = η and u, ϕ = −1. But then the range of T is not dense in 2+ by Lemma 2.5.1.  

150

8 Discrete Case: Infinite Leslie Operators

8.4 Completeness for a Concrete Class of Leslie Operators In this section we assume that vj = rj =(j + 1)−1 for j = 0, 1, 2, . . .. Throughout V◦ : 2+ → 2+ is the operator defined by the first identity in (8.12) with vj = (j + 1)−1 , j = 0, 1, 2, . . . , and C◦ : 2+ → C is the operator defined by the second identity in (8.13) with rj = (j + 1)−1 , j = 0, 1, 2, . . . . We first prove two lemmas. In the first lemma we show that the observability condition holds, i.e., we show that the pair {C◦ , V◦ } is observable. Note that in this 2 case observability just means that x = {xj }∞ j =0 ∈ + and ∞

rn+ν

ν=0

ν! xν = 0 (n + ν)!

for n = 0, 1, 2, . . . .

(8.20)

implies that x = 0. Lemma 8.4.1 Assume vj = rj = (j + 1)−1 for j = 0, 1, 2, . . .. Then the pair {C◦ , V◦ } is observable. Proof Consider the operators: J : 2+ → L2 [0, 1],

∞ xν ν+1 t (0 ≤ t ≤ 1); ν+1 ν=0  t (V1 f )(t) = f (s) ds (0 ≤ t ≤ 1);

(J x)(t) =

V1 : L2 [0, 1] → L2 [0, 1],

 C1 : L [0, 1] → C, 2

C1 f =

(8.21)

(8.22)

0 1

(8.23)

f (s) ds. 0

The operator J is a well-defined bounded linear operator and J is one-to-one. Indeed, fix x ∈ 2+ , then 0∞ 11/2 11/2 0 ∞ ∞ |xν | ν+1 1 ≤ |xν |2 (t 2(ν+1) ) t |(J x)(t)| = ν +1 (ν + 1)2 ν=0



0∞ ν=0

1 (ν + 1)2

11/2

ν=0

x < ∞,

ν=0

0 ≤ t ≤ 1.

Thus J is well defined and bounded. Also note that since  ∞   xv     ν + 1  < ∞, ν=0

8.4 Completeness for a Concrete Class of Leslie Operators

151

it follows that J x is a continuous function on [0, 1]. Next, let 0 = x ∈ 2+ , and assume J x = 0. Choose n◦ such that xj = 0 for j = 0, . . . , n◦ − 1 while xn◦ = 0. Then ⎛ 0 = (J x)(t) = t n◦ +1 ⎝

∞ j =n◦

⎞ 1 xj t j −n◦ ⎠ , j +1

0 ≤ t ≤ 1.

 −1 j −n◦ is also identically equal to The latter implies that g(t) := ∞ j =n◦ (j + 1) xj t zero. But this contradicts the fact that g(0) = (n◦ +1)xn◦ = 0. Thus J is one-to-one. The pair {C1 , V1 } is observable. To prove this, note that for n = 0, 1, 2, . . . we have  1 t (V1n+1 f )(t) = (t − s)n f (s) ds, 0 ≤ t ≤ 1 (f ∈ L2 [0, 1]). n! 0 Now take f such that C1 V1n+1 f = 0 for n = 0, 1, 2, . . .. Then the above identity yields    s 1 1 (s − r)n f (r) dr ds n! 0 0 0   1  1 1 (s − r)n ds f (r) dr = n! 0 r  1 1 (1 − r)n+1 f (r) dr = (n + 1)! 0  1 1 = t n+1 f (1 − t) dt, n = 0, 1, 2, . . . . (8.24) (n + 1)! 0 

0 = C1 V1n+1 f =

1

(V1n+1 f )(s) ds =

1 1 Since 0 f (1 − t) dt = 0 f (t) dt = C1 f = 0 and the polynomials are dense L2 [0, 1], we conclude that f = 0. Hence {C1 , V1 } is observable. Next we show that the following intertwining relations hold: J V◦ = V1 J

and C◦ V◦ = C1 J.

(8.25)

To see this let ej be the j -th vector in the standard orthonormal basis of 2+ . Note that (J ej )(t) = (j + 1)−1 t j +1 , 0 ≤ t ≤ 1. It follows that 1 j +2 1 1 (J ej +1 )(t) = t ; j +1 j +1j +2  t  t 1 j +2 1 1 t (V1 J ej )(t) = (J ej )(s) ds = s j +1 ds = . j +1 0 j +1j +2 0 (J V◦ ej )(t) =

152

8 Discrete Case: Infinite Leslie Operators

This proves the first identity in (8.25). To prove the second identity, note that for j = 0, 1, 2, . . . we have 

1

C1 J ej =

(J ej )(s) ds =

0

1 j +1



1

t j +1 dt =

0

1 1 , j +1j +2

1 1 1 1 C◦ ej +1 = rj +1 = . C◦ V◦ ej = j +1 j +1 j +1j +2 Thus (8.25) is proved. Now we are ready to prove that {C◦ , V◦ } is observable. Let x ∈ 2+ , and assume that C◦ V◦n x = 0 for n = 0, 1, 2, . . .. Then C◦ V◦n x = 0 for n = 1, 2, . . . or, equivalently, C◦ V◦n+1 x = 0 for n = 0, 1, 2, . . .. Using the identities in (8.25), we obtain 0 = C◦ V◦n+1 x = C1 J V◦n x = C1 V1n J x,

n = 0, 1, 2, . . . .

But the pair {C1 , V1 } is observable. Thus J x = 0. Since J is one-to-one, we obtain x = 0. Hence the pair {C◦ , V◦ } is observable.   In the next lemma, given x ∈ 2+ , we analyse the entire function f (z) := f (z; x) = C◦ (I − zV◦ )−1 x,

z ∈ C.

(8.26)

Lemma 8.4.2 Assume vj = rj = (j + 1)−1 for j = 0, 1, 2, . . .. Then for each 0 = x ∈ 2+ the entire function f given by (8.26) belongs to the Paley-Wiener class PW. In particular, the function f is of exponential type. Furthermore, its type σ is less than or equal to one, and its indicator function hf (θ ) is given by & hf (θ ) =

σ cos θ , −π/2 ≤ θ ≤ π/2, π/2 ≤ θ ≤ 3π/2.

0,

Proof In what follows we use the operators J : 2+ → L2 [0, 1],

V1 : L2 [0, 1] → L2 [0, 1],

C1 : L2 [0, 1] → C

defined by (8.21), (8.22), (8.23), respectively. Put g := J x. Then C◦ (I − zV◦ )−1 x = C1 (I − zV1 )−1 J x =

∞ n=0

z

n

C1 V1n g

= C1 g +

∞ n=1

zn C1 V1n g.

(8.27)

8.4 Completeness for a Concrete Class of Leslie Operators

153

Using the identity (8.24) with g in place of f , we see that C◦ (I − zV◦ )

−1

x = C1 g +

∞ n  z n=1



1

= C1 g −

n!

1

t n g(1 − t) dt

0

g(1 − t) dt +

0



1

= C1 g −

n!

n=0



1

g(t) dt +

0

∞ n  z

1

t n g(1 − t) dt

0

ezt g(1 − t) dt.

0

Using the definition of C1 in (8.23), we see that f (z) = C◦ (I − zV◦ )−1 x =



1

 ezt g(1 − t) dt =

0

0 −1

e−zt g(1 + t) dt.

(8.28)

Next we apply Lemma 14.3.7 with a = 1 and with ϕ(t) =

& 0, g(1 + t),

0 < t ≤ 1, −1 ≤ t ≤ 0.

(8.29)

Since x = 0 and J is one to one, the function g is non-zero as a function in L2 [0, 1]. It follows that f belongs to the Paley-Wiener class PW. In particular, f is of exponential type. Moreover, using (14.39), we obtain that the type σ of f is given by σ = inf{τ ∈ (0, 1] | ϕ|[−1,−τ ] = 0 a.e.}. It follows that σ is at most one, as desired. Finally, using Proposition 14.4.3 with p(λ) identically equal to zero, with q(λ) identically equal to one, with a = 1, and with ϕ as in (8.29), we see that the indicator function hf is given by (8.27).   We are now ready to prove the main result in this section. Theorem 8.4.3 Let T : 2+ → 2+ be given by (8.1). If vj = rj = (j + 1)−1 for j = 0, 1, 2, . . ., then ST = {0} and ST ∗ = {0}.

(8.30)

MT ∩ ST = {0} and MT ∗ ∩ ST ∗ = {0}.

(8.31)

Furthermore,

In particular, the operator T ∗ has a complete span of eigenvectors and generalised eigenvectors, while the latter is not true for the operator T .

154

8 Discrete Case: Infinite Leslie Operators

Proof First note that in this case rj /vj = 1, and thus the sequence rj /vj is not square summable. Hence T is one-to-one and has a dense range (Lemma 8.3.2). Furthermore, from Lemma 8.4.1 we see that the observability condition holds. We split the following text into three parts. Part 1.

In this part we prove that ST = {0}. This will be done by contradiction. Assume that 0 = x ∈ ST . Since x ∈ ST , the function z → (I − zT )−1 x is an entire function. Therefore, using (6.14), it follows that z → C◦ (I − zT )−1 x = (z)−1 C◦ (I − zV◦ )−1 x

(8.32)

is an entire function as well. Here the characteristic function (z) is given by (z) = 1 − zC◦ (I − zV◦ )−1 B◦ , where B◦ and C◦ are given by (8.13). Using (8.17) with vj = rj = (j + 1)−1 , j = 0, 1, 2, . . ., we see that C◦ V◦n B◦ =

1 , (n + 1)!

for n = 0, 1, 2, . . . ,

and hence (z) = 1 − z

∞ n=0

1 zn = 2 − e z . (n + 1)!

(8.33)

This shows that (z) is an entire function of exponential type with type one. Furthermore,  is bounded on the imaginary axis. Hence, by Proposition 14.3.4 the function  belongs to the Paley-Wiener class PW. But then we know, use Theorem 14.6.3 with z replaced by −z, that  is of completely regular growth, and its indicator function is given by h (θ ) =

& cos θ , −π/2 ≤ θ ≤ π/2, 0,

π/2 ≤ θ ≤ 3π/2.

(8.34)

We claim that (z) dominates the entire function f (z; x) = C◦ (I − zV◦ )−1 x for any x ∈ 2+ . For x = 0 this is trivially true. Assume x = 0. Then from Lemma 8.4.2 we know that f (z) := f (z; x) belongs to the Paley-Wiener class PW. In particular, the order of f is one. Furthermore, its type σ is less than or equal to one and its indicator function is given by hf (θ ) =

& σ cos θ , 0,

−π/2 ≤ θ ≤ π/2, π/2 ≤ θ ≤ 3π/2.

(8.35)

8.4 Completeness for a Concrete Class of Leslie Operators

155

Comparing (8.35) with (8.34) we see that hf (θ ) ≤ h (θ ). Thus (z) dominates f (z) = f (z; x). An application of Lemma 14.7.5 with g =  and f (z) = f (z; x) then shows that the entire function given by (8.32) equals a polynomial p. Indeed, since  and f both belong to the class PW, the inequalities in (14.102) hold true with κ = 2, with θ1 = π/2, with θ2 = 3π/2, and with appropriate choices of the positive constants m and M. But then item (b) in Lemma 14.7.5 tells us that −1 f is a polynomial. From the previous paragraph we know that z → C◦ (I − zT )−1 x is the polynomial p. But then it follows from (8.28) with g = J x that f (z) = Co (I − zVo )

−1



1

x=

ezt g(1 − t) dt.

0

On the other hand (8.32) and (8.33) give f (z) = (z)p(z) = (2 − ez )p(z) and hence 

1

ezt g(1 − t) dt = (2 − ez )p(z).

(8.36)

0

Taking the limit to −∞ along the negative real axis in (8.36) shows that p = 0. Thus C◦ (I − zV◦ )−1 x = 0.

Part 2.

Since {C◦ , V◦ } is observable, it follows that x = 0. This contradicts our assumption that x = 0. Thus ST = {0}. In this part we prove that ST ∗ = {0}. To do this we use the identity (6.13). Taking adjoints and using V◦ in place of V we obtain (I − zT ∗ )−1 = z(I − zV◦∗ )−1 C◦∗ (z)−1 B◦∗ (I − zV◦∗ )−1 + (I − zV◦∗ )−1 . Therefore, it suffices to construct 0 = x ∗ ∈ X∗ = 2+ such that z → (z)−1 B◦∗ (I − zV◦∗ )−1 x ∗

is entire.

(8.37)

Using the definition of B◦ in (8.13) and representation (8.15) for (I − zV )−1 with V◦ in place of V , we obtain (I − zV◦ )

−1

 B◦ =

zn n!

 , n≥0

(8.38)

156

8 Discrete Case: Infinite Leslie Operators

and thus ∞ n z

B◦∗ (I − zV◦∗ )−1 x ∗ =

n=0

n!

xn∗ ,

x ∗ ∈ 2+ .

∗ Using  ∗  (8.33)2 it follows from (8.37) that we have to construct 0 = x = xn n≥0 in + such that ∞ 1 zn ∗ x 2 − ez n! n

is entire.

(8.39)

n=0

We do this construction of x ∗ in two steps. First define a sequence (bn )n≥0 by b0 = 1 and bn = −1 for n = 1, 2, . . . to obtain the identity ∞ 1 zn bn = 1. 2 − ez n! n=0

Although this is an entire function, observe that (bn )n≥0 ∈ 2+ . In order to repair the next step is to choose a sequence (an )n≥0 such that the sum ∞ this, n /n!)a is an entire function and the sequence (c ) (z n n n≥0 defined by n=0 the identity 0∞ zn n=0

n!

10 an

∞ n z n=0

n!

1 bn

=

∞ n z n=0

n!

cn ,

(8.40)

is a sequence in 2+ . Note that (8.40) is equivalent to 0 n 1 n   zk zn n zn zn−k cn = ak bn−k = ak bn−k , n = 0, 1, 2, . . . k n! k! (n − k)! n! k=0

k=0

 n Thus we seek a sequence (an )n≥0 such that the sum ∞ n=0 (z /n!)an is an entire function and the sequence (cn )n≥0 defined by cn =

n   n−1   n n ak bn−k = an − ak k k k=0

= 2an −

k=0

n  k=0

 n ak , k

n = 0, 1, 2, . . . ,

(8.41)

8.4 Completeness for a Concrete Class of Leslie Operators

157

belongs to 2+ . To do this let P be the infinite matrix of which the n-th column Pn is given by Pn :=

n n 0

1

···

n n

 0 0 ··· ,

n = 0, 1, 2 . . . .

Then (8.41) can be rewritten in the following way ⎡ ⎤ ⎡ ⎤ c0 a0 ⎢ c1 ⎥ ⎢a1 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢c ⎥ = 2 ⎢a ⎥ − P 2 ⎣ ⎦ ⎣ 2⎦ .. .. . .

⎡ ⎤ a0 ⎢a1 ⎥ ⎢ ⎥ ⎢a ⎥ . ⎣ 2⎦ .. .

Now let u = −e, where e ∈  (0, 1),and  take aj = uj , j = 0, 1, 2, . . .. Using he identity (1 + u)n = nk=0 nk uk for n = 0, 1, 2, . . . we see that ⎡ ⎤ ⎡ ⎤ 1 c0 ⎢u⎥ ⎢c1 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ 2⎥ ⎢ ⎥ c := ⎢c2 ⎥ = 2 ⎢u ⎥ − P ⎢u3 ⎥ ⎢c ⎥ ⎣ ⎦ ⎣ 3⎦ .. .. . .

⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 1 1 1 ⎢u⎥ ⎢u⎥ ⎢ 1+u ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 2⎥ ⎢u2 ⎥ ⎢ 2⎥ ⎢ ⎢ ⎥ = 2 ⎢u ⎥ − ⎢(1 + u) ⎥ . ⎢u3 ⎥ ⎢u3 ⎥ ⎢(1 + u)3 ⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ .. .. .. . . .

Hence, using u = −e, we have cn = 2(−e)n − (1 − e)n ,

Part 3.

n = 0, 1, 2, . . . .

Since both e and 1 − e belong to (0, 1), it follows that the sequences (an )n≥0 and (cn )n≥0 both are square summable, and thus both (an )n≥0 and (cn )n≥0 belong to 2+ as desired. So if x ∗ ∈ 2+ is such that xn∗ = cn for n = 0, 1, 2, . . ., then x ∗ solves (8.39) and hence x ∗ ∈ ST ∗ . Thus ST ∗ = {0}. In this part we prove the final statement. Since in the present setting the underlying space is a Hilbert space, we can use Corollary 1.2.2. Thus M⊥ T = ST ∗

and M⊥ T ∗ = ST .

(8.42)

∗ But then the second identity in (8.30) tells us that M⊥ T = ST = {0}. Since, by Lemma 8.3.2, the operator T is one-to-one, the fact that M⊥ T = {0} implies that we have no completeness for T . On the other hand, by the first identity in (8.30), we have M⊥ T ∗ = ST = {0}. Since T has a dense range (by Lemma 8.3.2), the operator T ∗ is one to one, and thus ∗ M⊥ T ∗ = {0} implies that T has a complete span of eigenvectors and generalised eigenvectors. Property (8.31) now directly follows from the fact that MT ∗ = 2+ and ST ∗ = {0},  

158

8 Discrete Case: Infinite Leslie Operators

In the following corollary we summaries the main properties of the operator T appearing in Theorem 8.4.3. Corollary 8.4.4 Let T : 2+ → 2+ be given by (8.1), and assume that vj = rj = (j + 1)−1 for j = 0, 1, 2, . . .. Then ∗ = {0}, and hence there is no (a) the operator T is one-to-one, M⊥ T = ST completeness for the operator T ; (b) the adjoint operator T ∗ is one-to-one and M⊥ T ∗ = ST = {0}, and hence there is completeness for the operator T ∗ .

Since MT ∗ = 2+ and ST ∗ = {0}, we have MT ∗ ∩ ST ∗ = ST ∗ = {0}. Thus the second condition in (1.15) is not satisfied, and we cannot apply Proposition 1.1.1 in this case. Also note that the direct sum MT ∗ ⊕ ST ∗ does not make sense. Furthermore, we have MT ⊕ ST = MT = 2+ because M⊥ T = {0}. Note that the representation (8.15) for (I −zV )−1 implies that condition (6.18) in Theorem 6.2.1 fails in the present chapter. On the other hand, all other assumptions in Theorem 6.2.1 are satisfied in the present context. Since (8.31) shows that the conclusion of the theorem given in (6.20) fails in the present setting, we conclude that condition (6.18) in Theorem 6.2.1 cannot be omitted in that theorem.

8.5 A Generalised Leslie Operator In this section we construct an operator T given by (8.1) such that T is a rank one perturbation R of a quasi-nilpotent operator W , where ⎡

0 ⎢w0 ⎢ ⎢ W =⎢0 ⎢0 ⎣ .. .

0 0 w1 0 .. .

0 0 0 w2 .. .

··· ··· ··· 0 .. .

⎤ ··· · · ·⎥ ⎥ · · ·⎥ ⎥, · · ·⎥ ⎦ .. .

⎡ r0 ⎢0 ⎢ ⎢ R = ⎢0 ⎢0 ⎣ .. .

r1 0 0 0 .. .

r2 0 0 0 .. .

··· ··· ··· 0 .. .

⎤ ··· · · ·⎥ ⎥ · · ·⎥ ⎥, · · ·⎥ ⎦ .. .

(8.43)

both acting on 2+ . In that case T is said to be a generalised Leslie operator. Lemma 8.5.1 Let wj be defined by wj =

& 1 1 j (j +1)

if j is even if j is odd

(8.44)

8.5 A Generalised Leslie Operator

159

and consider the operator W◦ given by W as in (8.43) and wj as in (8.44). Then the operator W◦ is quasi-nilpotent but not compact. Proof Since W◦ is a weighted shift with weights wj that do not tend to zero, the operator W◦ is not compact. To see that W◦ is quasi-nilpotent we use the explicit representation for ψ = (I − zW◦ )−1 ϕ, ϕ ∈ 2+ , given in (8.15): ψk = ϕk +

k

⎛ zm ⎝



k−1 

wj ⎠ ϕk−m ,

k ≥ 1.

(8.45)

j =k−m

m=1

To prove that W◦ is quasi-nilpotent it suffices to prove that for z ∈ C fixed, the sequence ψk , k ≥ 0, defined in (8.45) belongs to 2+ . First observe that ∞

|ψk |2 ≤

k=0



|ϕk |2 +

k=0

=

=



|ϕk |2 +



k=0

m=1







|z|2m

k−1 

|ϕk | + 2

|wj |2 )|ϕ|2k−m

j =k−m

k=1 m=1

k=0



∞ k

∞ k−1 

|z|2m

|wj |2 |ϕ|2k−m

k=m j =k−m

|z|

2m

m=1

∞ m+ν−1 

|wj |2 |ϕ|2ν

j =ν

ν=0

∞ m+ν−1

 |ϕk |2 1 + |wj |2 |z|2m . max

k=0

ν≥0

m=1

(8.46)

j =ν

Using the definition of wj given in (8.44), we have max ν≥0

m+ν−1 

|wj |2 =

j =ν

m−1 

|wj |2

j =0

=

⎧ ⎨

1 (m!)2 1 ⎩ ((m+1)!)2

if m is even if m is odd.

It now follows from the ratio test applied to the even and odd terms of the infinite power series in (8.46) separately that this series converges for every z ∈ C. This proves that for z ∈ C fixed, the sequence ψk defined by (8.45) with vj given by (8.44) belongs to 2+ . This completes the proof that V is quasi-nilpotent.  

160

8 Discrete Case: Infinite Leslie Operators

Next define & rj =

1 j +1

if j is even

1

if j is odd

(8.47)

The operator R factors as R = B◦ C◦ , where B◦ : C → 2+ and C◦ : 2+ → C are given by ⎡ ⎤ u ⎢0 ⎥ ⎢ ⎥ B◦ u = ⎢0⎥ , ⎣ ⎦ .. .

⎡ ⎤ u0 ∞ ⎢u1 ⎥ ⎢ ⎥ C◦ ⎢u ⎥ = rj uj . ⎣ 2⎦ j =0 .. .

(8.48)

Obviously, this factorisation is a minimal rank factorisation. It follows that the function ◦ (z) = 1 − zC◦ (I − zW◦ )−1 B◦ is a characteristic scalar function for T in the sense of Theorem 6.3.1 and is explicitly given by ◦ (z) = 1 − z

∞ k=0

=1−z

∞ k=0

⎛ rk ⎝

k−1 

⎞ wj ⎠ zk

j =0

zk = 2 − ez . (k + 1)!

(8.49)

The next result is a consequence of Theorem 6.3.1. Corollary 8.5.2 Let z be a complex number. The operator I − zT is invertible if and only if ◦ (z) = 0, where ◦ (z) is defined by (8.49).

Chapter 9

Semi-Separable Operators and Completeness

In this chapter we specify further the results of Chaps. 4, 5 and 6 for semi-separable operators. The chapter consists of three sections (and two subsections). The first section deals with completeness results for a class of discrete semi-separable operators studied in Section V.3 of [29]. Such operators are quite different from the Leslie operators considered in Chap. 8. The theory developed in Chap. 4 is useful in studying completeness for these discrete semi-separable operators. The second section deals with completeness results for semi-separable integral operators. The results in Chapter IX of [32], together with the results developed in Chaps. 5 and 6, are used to derive completeness results for the semi-separable integral operators. The final section has a preparatory character, not directly related to semi-separable operators. We collect some results about the fundamental solution of an ordinary differential equation, and we present an explicit resolvent formula for a class of integral operators and related Volterra operators which will play a role in the next chapter and in Chap. 11.

9.1 Discrete Semi-Separable Operators Throughout T is an operator acting on 2+ (Cm ). Thus T has a block matrix representation: ⎡ T00 ⎢T10 ⎢ T = ⎢T ⎣ 20 .. .

T01 T11 T21 .. .

T02 T12 T22 .. .

⎤ ··· · · ·⎥ ⎥ , · · ·⎥ ⎦ .. .

where Tkj ∈ Cm×m for each k and j.

(9.1)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_9

161

162

9 Semi-Separable Operators and Completeness

The operator T is said to be semi-separable if for each k, j = 0, 1, 2, . . . the entry Tkj in the block matrix representation (9.1) can be written in the following form: Tkj =

(1)

(2)

where Fk , Fk

⎧ ⎨ F (1) G(1) when 0 ≤ j < k < ∞, k j

(9.2)

⎩ −F (2) G(2) when 0 ≤ k ≤ j < ∞, k j

(1)

(2)

and Gj , Gj are operators, Fk(1) : Cn1 → Cm

m n1 and G(1) j :C →C ,

(9.3)

Fk(2) : Cn2 → Cm

m n2 and G(2) j :C →C ,

(9.4)

the positive integers n1 and n2 do not depend on the particular choice of k, j , and it is required that ∞

(i)

Fk 2 < ∞

and



(i)

Gj 2 < ∞

(i = 1, 2).

(9.5)

j =0

k=0

Summarising the four Eqs. (9.1)–(9.4) we obtain ⎡ (2) (2) (2) (2) −F (2) G(2) 0 −F0 G1 −F0 G2 ⎢ 0 ⎢ (1) (1) (2) (2) ⎢ F1 G0 −F1(2)G(2) 1 −F1 G2 ⎢ T = ⎢ (1) (1) ⎢F G F2(1) G(1) −F2(2) G(2) 0 1 2 ⎢ 2 ⎣ .. .. .. . . .

⎤ ⎡ ⎤ ⎡ m⎤ ··· Cm C ⎥ ⎢ ⎢ ⎥ ⎥ ⎢ m⎥ ⎢ Cm ⎥ · · ·⎥ ⎢ C ⎥ ⎥ ⎢ ⎥ ⎥ ⎢ ⎥ ⎥ ⎥ : ⎢ m⎥ → ⎢ ⎢ Cm ⎥ , ⎥ · · ·⎥ ⎢ C ⎥ ⎢ ⎥ ⎣ ⎦ ⎦ ⎣ . ⎦ .. .. . . . .

(9.6)

With a semi-separable operator we associate a time-variant boundary value system  as follows. For n = 0, 1, 2, . . . put $ An =

ICn1

0

0 ICn2

$

% , Bn =

G(1) n (2)

Gn

% , Cn =



(1) Fn

(2) Fn



, P =

$ % 0 0 0 ICn2

.

Using these operators the system  we have in mind is defined by ⎧ xn+1 = An xn + Bn un , n = 0, 1, 2, . . . ⎪ ⎪ ⎨  yn = Cn xn ⎪ ⎪ ⎩ P x0 = 0 and limn→∞ (I − P )xn = 0.

(9.7)

9.1 Discrete Semi-Separable Operators

163

Theorem 9.1.1 ([29, Theorem V.3.2]) Let T be a semi-separable operator acting on 2+ (Cm ), and let the entries Tkj in the block matrix representation (9.1) of T be given by (9.2). Then T is the input-output operator of the system  given by (9.7). Remark Concerning Leslie Operators In general, the Leslie operators defined in the previous chapter are not semi-separable. To be more precise, let T on 2+ be given by (8.1), i.e., ⎡

r1 0 v1 0 .. .

r0 ⎢v0 ⎢ ⎢ T = ⎢0 ⎢0 ⎣ .. .

r2 0 0 v2 .. .

··· ··· ··· 0 .. .

⎤ ··· · · ·⎥ ⎥ · · ·⎥ ⎥. · · ·⎥ ⎦ .. .

Then T is not semi-separable whenever all vj , j = 0, 1, 2, . . . , are non-zero. To prove this we argue by contradiction. Assume that T is semi-separable. Then there exist Fk : Cn → C

(k = 1, 2, . . .),



Fk 2 < ∞,

and

k=1

Gj : C → Cn

(j = 0, 1, 2, . . .),



Gj 2 < ∞,

j =0

such that Tkj = Fk Gj ,

0 ≤ j < k < ∞.

Put ⎡ T10 F1 ⎢T20 ⎢F2 ⎥   ⎢ ⎢ ⎥ ⎢ K := ⎢F ⎥ G0 G1 G2 · · · = ⎢T30 ⎢T ⎣ 3⎦ ⎣ 40 .. .. . . ⎡



⎡ v0 ⎢0 ⎢ ⎢ = ⎢0 ⎢0 ⎣ .. .

∗ v1 0 0 .. .

∗ ∗ v2 0 .. .

∗ T21 T31 T41 .. .

⎤ ∗ ··· ∗ · · ·⎥ ⎥ ∗ · · ·⎥ ⎥. ⎥ v3 ⎦ .. .

∗ ∗ T32 T42 .. .

⎤ ∗ ··· ∗ · · ·⎥ ⎥ ∗ · · ·⎥ ⎥ ⎥ T43 ⎦ .. .

(9.8)

(9.9)

164

9 Semi-Separable Operators and Completeness

From the definition of K in (9.8) it follows that K is an operator of finite rank. On the other hand, since all vj , j = 0, 1, 2, . . . , are non-zero by assumption, (9.9) tells us that K is of infinite rank. Thus T cannot be semi-separable. Main Topic The question whether or not a discrete semi-separable operator T has a complete set of eigenvectors and generalised eigenvectors will be the main topic of the remainder of this section. First note that the semi-separable operator T on 2+ (Cm ) given by (9.6) can be written as T = V +R, where V is a strictly (block) lower triangular Hilbert-Schmidt operator and R is a finite rank operator, both acting on 2+ (Cm ). These operators are defined as follows: & (1) (1) Fk Gj + Fk(2)G(2) j when 0 ≤ j < k < ∞, (9.10) Vkj = 0 when 0 ≤ k ≤ j < ∞; Rkj = −Fk(2)G(2) j

when j, k = 0, 1, 2, . . . .

(9.11)

The fact that V is a Hilbert-Schmidt operator follows from the two inequalities in (9.5). These two inequalities also allow us to define the following finite rank operators: ⎡

F (i)

G(i)

⎤ F0(i) ⎢ (i) ⎥ ⎢ F1 ⎥ ni 2 m ⎥ =⎢ ⎢F 2(i) ⎥ : C → + (C ), (i = 1, 2); ⎣ ⎦ .. .   (i) (i) = G(i) : 2+ (Cm ) → Cni , (i = 1, 2). G G · · · 0 1 2

(9.12)

(9.13)

Note that (9.11) tells us that R = −F (2)G(2) .

(9.14)

Proposition 9.1.2 Let T = V + R where V and R are given in the previous paragraph. Then the operator V is a Volterra operator, and the entire function (z) = I + zG(2) (I − zV )−1 F (2)

(9.15)

is a characteristic n2 × n2 matrix function associated with T . Proof We already know that V is a compact operator. Thus in order to prove that V is a Volterra operator it satisfies to show that V has no non-zero eigenvalue. To prove the latter, we use that V is a strictly (block) lower triangular. Thus V admits

9.1 Discrete Semi-Separable Operators

165

the following block matrix representation ⎡

0 ⎢V10 ⎢ ⎢ V = ⎢V20 ⎢V ⎣ 30 .. .

0 0 V21 V31 .. .

0 0 0 V32 .. .

0 0 0 0 .. .

⎤ ··· · · ·⎥ ⎥ · · ·⎥ ⎥. · · ·⎥ ⎦ .. .

(9.16)

  Now assume V x = λx where x = x0 x1 x2 x3 · · · ∈ 2+ (Cm ) and λ = 0. Since V x = λx, the block matrix representation (9.16) of V shows that λx0 = 0

and λxn = Vn0 x0 + Vn1 x1 + . . . + Vn,n−1 xn−1

(n > 0).

(9.17)

The fact that λ = 0 then implies that x0 = 0. Next assume that xj = 0 for j = 0, . . . , n − 1. The second part of (9.17) then shows that λxn = 0, and thus xn = 0. Since n is arbitrary, we conclude that xj = 0 for all j = 0, 1, 2, . . ., that is, x = 0. Hence V is a Volterra operator. Since V is a Volterra operator and R = −F (2)G(2) , a direct application of Theorem 6.1.1 shows that the entire function (z) given by (9.15) is a characteristic matrix function for T .   So far we assumed that T = V + R with V being strictly block lower triangular. In the next subsection we shall also consider the adjoint case when T ∗ = V ∗ + R ∗ with V and R being given by (9.16) and (9.14), respectively. In that case ⎡

∗ 0 V10 ⎢0 0 ⎢ ⎢ ∗ V = ⎢0 0 ⎢0 0 ⎣ .. .. . .

∗ V20 ∗ V21 0 0 .. .

∗ V30 ∗ V31 ∗ V32 0 .. .

⎤ ··· · · ·⎥ ⎥ · · ·⎥ ⎥ ⎥ ⎦ .. .

and R ∗ = −G(2)∗F (2)∗ .

Here for any Hilbert space operator, M ∗ denotes the adjoint of the operator M. Since V is a Volterra operator, the same is true for V ∗ , and obviously R ∗ is of finite rank. Furthermore, from (6.1) and Theorem 6.1.1 we know that the functions (z) = Im + zG(2)(I − zV )−1 F (2) ,

z ∈ C,

∗ (z) = Im + zF (2)∗(I − zV ∗ )−1 G(2)∗,

z ∈ C,

are the characteristic matrix functions for T and T ∗ , respectively. The latter allows us to use Theorem 5.2.2 to describe the spectral properties of T and T ∗ . Also, note that ∗ (z) = (z)∗ ,

z ∈ C,

(9.18)

166

9 Semi-Separable Operators and Completeness

For simplicity we shall restrict the analysis of T and T ∗ to the scalar case when the operator V acts on 2+ (C) and the elements Vkj , 0 ≤ k < j < ∞, are complex numbers.

9.1.1 A Completeness Theorem (A Scalar Case) In this subsection we present a completeness theorem for a class of discrete semiseparable operators. We restrict ourselves to the scalar case when the entries Tkl in (9.1) are just complex numbers. We start with two operators   F = f0 f1 f2 · · · : C → 2+ (C);

(9.19)

  G = g0 g1 g2 · · · : 2+ (C) → C.

(9.20)

Furthermore, the discrete semi-separable operator T we shall be dealing with is given by T = V + R where V and R are operators on 2+ (C). The operator V is the strictly lower triangular operator on 2+ (C) given by ⎡

0 ⎢v10 ⎢ ⎢ V = ⎢v20 ⎢v ⎣ 30 .. .

0 0 v21 v31 .. .

0 0 0 v32 .. .

0 0 0 0 .. .

⎤ ··· · · ·⎥ ⎥ · · ·⎥ ⎥, · · ·⎥ ⎦ .. .

vkj = fk gj

(0 ≤ j < k < ∞).

(9.21)

The operator R is the finite rank operator on 2+ (C) given by R = −ν −1 F G.

(9.22)

Here ν is a non-zero complex number, and F and G are defined by (9.19) and (9.20), respectively. Since T = V + R, it follows that the kj -th entry of T is given by & tkj =

−ν −1 fk gj

when 0 ≤ k ≤ j < ∞,

(1 − ν −1 )fk gj

when 0 ≤ j < k < ∞.

(9.23)

Thus T is determined by only the two operators F and G together with the nonzero number ν. Finally, in this setting, following the first paragraph of Sect. 6.1 with B = −ν −1 F and C = G, the characteristic function of T is the scalar function

9.1 Discrete Semi-Separable Operators

167

(z) given by (z) = 1 + zν −1 G(I − zV )−1 F.

(9.24)

The next lemma can be used to compute the characteristic function (z) explicitly. Lemma 9.1.3 Let T = V + R where V and R are the operators defined by (9.21) and (9.22), and let F and G be the operators defined by (9.19) and (9.20). Then G(I − zV )−1 x =

∞  ∞   1 + zfj gj gn xn , n=0 j =n+1

x ∈ 2+ (C).

(9.25)

In particular, the characteristic function (z) defined by (9.24) is given by ∞  1 1  (z) = 1 − + 1 + zfj gj . ν ν

(9.26)

j =0

Proof Let x ∈ 2+ (C), and put y = (I − zV )−1 x. Then y − zV y = x and xk = yk − z

k−1

vkj yj = yk − zfk

j =0

k−1

gj yj ,

k ≥ 0.

(9.27)

j =0

Define uk :=

k

gj yj ,

k = 0, 1, 2, . . . .

(9.28)

j =0

Observe that uk satisfies the following inhomogeneous difference equation k   uk+1 = gk+1 yk+1 + uk = gk+1 xk+1 + zfk+1 gj yj + uk j =0

= gk+1 xk+1 + zgk+1 fk+1

k

gj yj + uk

j =0

  = 1 + zfk+1 gk+1 uk + gk+1 xk+1 ,

k ≥ 0.

(9.29)

The solution to (9.29) is given by uk = gk xk +

k−1  k  n=0 j =n+1

1 + zfj gj



gn xn ,

k ≥ 0.

(9.30)

168

9 Semi-Separable Operators and Completeness

To prove the representation (9.30) we use induction. If k = 0 we know from (9.28) that u0 = g0 y0 which shows that (9.30) holds for k = 0. Next suppose (9.30) holds for k, then substitution of the formula for uk into (9.29) yields k−1 k+1      uk+1 = 1 + zfk+1 gk+1 gk xk + 1 + zfj gj gn xn + gk+1 xk+1 n=0 j =n+1

=

k k+1    1 + zfj gj gn xn + gk+1 xk+1 , n=0 j =n+1

and this proves (9.30). Next we take the limit k → ∞ in (9.30) which tells us that Gy =



gj yj =

j =0

∞  ∞   1 + zfj gj gn xn . n=0 j =n+1

Since G is given by (9.20) and y = (I − zV )−1 x, the above formula yields (9.25). It remains to prove the identity (9.26). To do this recall that (z) is given by (9.24). Now put hn (z) :=

∞  

 1 + zfj gj ,

n = 0, 1, 2 . . . .

(9.31)

j =n

and use the identity (9.25) with xn = ν −1 fn . This yields z G(I − zV )−1 F ν ∞ 1 =1+ hn+1 (z)zfn gn ν

(z) = 1 +

n=0

∞    1  =1+ hn+1 (z) 1 + zfn gn − hn+1 (z) ν n=0

=1+

∞  1  1 hn (z) − hn+1 (z) = 1 + (h0 (z) − 1) ν ν n=0

∞  1 1  + 1 + zfj gj . = 1− ν ν j =0

This proves (9.26).

 

9.1 Discrete Semi-Separable Operators

169

Here we return to the representations (9.10)–(9.13) and relate the role of the parameter ν in (9.22) to these representations of T . First consider the case ν = 1. In this case (9.23) tells us that tkj = 0 for 0 ≤ j < k, ∞, and hence the operator T (2) is upper triangular. In case ν = 2, we have Fj(1) = Fj(2) , and G(1) j = Gj and so ∗ T = −T in the real case. In the next lemma we derive some basic properties of the characteristic matrix function (z) given by (9.26). Lemma 9.1.4 Let T = V + R where V and R are the operators defined by (9.21) and (9.22), and let F and G be the operators defined by (9.19) and (9.20). If the convergent exponent (see (14.130) for the definition) of the sequence {(fj gj )−1 }j ≥0 equals 1/2, i.e., if ∞  1  |fj gj |r < ∞ = , inf r > 0 | 2 j =0

then the function z → G(I −zV )−1 x, given by (9.25) with x ∈ 2+ (C) fixed, and the characteristic function (z), given by (9.26), are entire functions of order at most 1/2. Furthermore, if {fj }j ≥0 and {gj }j ≥0 are real sequences with fj gj < 0 for j ≥ 0, then the characteristic function (z) is bounded on the positive real axis and has maximal growth on the negative real axis. Proof Let n(r) denote the number of points of the sequence (fj gj )−1 inside the disk |z| < r. Since the convergent exponent of the sequence is 1/2, we have that for every real α with 1/2 < α < 1, there exists a constant Cα such that n(r) ≤ Cα r α . Fix k ∈ N. To estimate the order of the entire function ∞ j =k (1 + zfj gj ) observe that for |z| < r ∞ ∞       log (1 + zfj gj ) ≤ log 1 + r|fj gj | = j =k

∞ 0

j =k



 



 r dn(t) log 1 + t

r n(t) 1 =r n(t) dt ≤ dt + t (t + r) t 0 0  r  ∞ ≤ Cα t α−1 dt + Cα t α−2 dt 0

=

Cα α−1 Cα α r + r . α 1−α



∞ r

n(t) dt t2

r

(9.32)

Here we have used integration by parts and the estimate for n(t) derived in the first paragraph of the proof. Using (14.3), it follows from (9.32) that the entire function ∞ j =k (1 + zfj gj ) has order at most 1/2. Furthermore note that the estimate (9.32) does not depend on k. To estimate the order of z → G(I − zV )−1 x for fixed x ∈ 2+ (C) observe that it

170

9 Semi-Separable Operators and Completeness

follows from (9.32) that ∞  ∞ ∞ ∞         (1 + zfj gj )gn xn  ≤ (1 + zfj gj )|gn xn |  n=0 j =n+1

n=0 j =n+1 ∞

1/2  Cα α  ≤ exp r gn2 x. α n=0

Since α > 1/2 is arbitrary, this shows that z → G(I − zV )−1 x is an entire function of order at most 1/2. Similarly, we can use (9.32) with k = 0 to show that (z) is an entire function of order at most 1/2 as well. The last statement directly follows from the explicit representation for the characteristic function (z) given in (9.26).   Two Examples The two examples that will be presented are applications of Lemma 9.1.3 and they illustrate Lemma 9.1.4. Recall that T = V + R, and the kj -th entry of T (see (9.23)) is given by & tkj =

−ν −1 fk gj

when 0 ≤ k ≤ j < ∞,

(1 − ν −1 )fk gj when 0 ≤ j < k < ∞,

and determined by the operators F and G together with the non-zero number ν. Furthermore, the characteristic matrix function of T is given by (9.24). We consider two different data sets. For the first example ν is an arbitrary nonzero complex number and the operators F and G are given by (9.19) and (9.20) using − fj = gj =

1 , j +1

j ≥ 0.

(9.33)

Then (9.26) tells us that the characteristic function (z) of T defined by (9.24) is explicitly given by  √  ∞    1  1  z 1  1 sin π z (z) = 1 − + + . 1− = 1− √ ν ν (j + 1)2 ν ν π z j =0

In particular, in case ν = 1, the operator T is lower triangular & tkj =

−fk gj when 0 ≤ k ≤ j < ∞, 0

when 0 ≤ j < k < ∞,

(9.34)

9.1 Discrete Semi-Separable Operators

171

and the characteristic matrix function of T is given by  √  sin π z . (z) = √ π z If ν = 2, then T = −T ∗ and the characteristic matrix function of T is given by  √  sin π z 1 1+ . (z) = √ 2 π z For the second example take − fj = gj =

1 , j + 1/2

j ≥ 0.

(9.35)

Then the characteristic function (z) of T defined by (9.24) is explicitly given by ∞   √    1  1  z 1 1 (z) = 1 − + + cos π z . 1− = 1− ν ν (j + 1/2)2 ν ν

(9.36)

j =0

Notice that in each of the examples the characteristic function (z) of T , given by (9.34) or (9.36), is a scalar entire function of order ρ = 1/2 which is bounded on the positive real axis confirming the conclusion of Lemma 9.1.4. We also need the next lemma which is an adjoint version of Lemma 9.1.3. In what follows, if h is a complex number, then h∗ denotes the complex conjugate of h, that is, h∗ = h. Lemma 9.1.5 Let T = V + R where V and R are the operators defined by (9.21) and (9.22), and let F and G be the operators defined by (9.19) and (9.20). Then F ∗ (I − zV ∗ )−1 x = −

∞ n−1   1 + zfj∗ gj∗ fn∗ xn , n=0 j =0

x ∈ 2+ (C).

(9.37)

Proof Let y = (I − zV ∗ )−1 x. Then x = y − zV ∗ y. It follows that xk = yk − z



gk∗ fj∗ yj ,

k = 0, 1, 2, . . . .

(9.38)

lim wk = 0.

(9.39)

j =k+1

Define wk = −

∞ j =k

fj∗ yj ,

k ≥ 0,

k→∞

172

9 Semi-Separable Operators and Completeness

Using (9.39) and (9.38), in this order, we see that wk , k = 0, 1, 2, . . ., satisfies the following equation wk+1 = wk + fk∗ yk = wk + fk∗ xk + zfk∗ gk∗



fj∗ yj

j =k+1

= wk +

fk∗ xk

− zfk∗ gk∗ wk+1 ,

k = 0, 1, 2, . . . .

(9.40)

Rewriting Eq. (9.40) by changing the order of wk and wk+1 yields   wk = 1 + zfk∗ gk∗ wk+1 − fk∗ xk .

(9.41)

The solution to (9.41) is given by wk = −

∞ n−1   1 + zfj∗ gj∗ fn∗ xn ,

k = 0, 1, 2, . . . .

(9.42)

n=k j =k

To prove the representation (9.42) we use induction. If k tends to infinity, we know from (9.39) that limk→∞ wk = 0. This shows that (9.42) holds as k tends to infinity. Next suppose that (9.42) holds for k + 1, then substitution of the formula for wk+1 into (9.41) yields ∞ n−1      wk = − 1 + zfk∗ gk∗ 1 + zfj∗ gj∗ fn∗ xn − fk∗ xk n=k+1 j =k+1

=−

∞ n−1   1 + zfj∗ gj∗ fn∗ xn − fk∗ xk n=k+1 j =k

=−

∞ n−1 

1 + zfj∗ gj∗



fn∗ xn

n=k j =k

and this proves (9.42). Finally, we take k = 0 in (9.42) to derive (9.37).

 

We are now ready to formulate a completeness result. Theorem 9.1.6 Let T = V + R where V and R are the operators defined by (9.21) and (9.22), respectively. Suppose that {fj }j ≥0 and {gj }j ≥0 are non-zero real sequences such that −1 (i) the convergent exponent of the sequence {(f j gj ) }j ≥0 equals 1/2; ∞ (ii) for β ∈ R, the product  j =0 1 + iβfj gj  tends to infinity as β → ±∞.

9.1 Discrete Semi-Separable Operators

173

Then the operators T and T ∗ both have a complete span of eigenvectors and generalised eigenvectors. Proof The proof is divided into three parts. Part 1.

In this part we prove that ST = {0}. This will be done by contradiction. Assume that 0 = x ∈ ST . Since x ∈ ST , the function z → (I − zT )−1 x is an entire function. Therefore, using (6.14), it follows that f (z) = G(I − zT )−1 x = (z)−1 G(I − zV )−1 x

(9.43)

is an entire function. From Lemma 9.1.4 it follows that z → G(I − zV )−1 x and (z) are entire functions of order at most 1/2. Therefore it follows from Theorem 14.1.1 that f is an entire function of order at most 1/2. If we define 1 (z) :=

∞ 

(1 + zfj gj ),

j =0

and use (9.25), then on the imaginary axis z = iβ, β ∈ R, we have f (iβ) = (iβ)−1 1 (iβ)

∞  n

(1 + iβfj gj )−1 gn xn .

(9.44)

n=0 j =0

Note that |1 + iβfj gj | =

2

1 + (βfj gj )2 ≥ 1

for j ≥ 0.

(9.45)

To estimate f along the imaginary axis, we first estimate |(z)−1 1 (z)| along the imaginary axis for |z| large. Because of the second assumption (ii), we can choose R ≥ 0 such that ∞   |1 (iβ)| =  (1+iβfj gj ) ≥ 2|ν − 1|+1

for |β| ≥ R.

(9.46)

j =0

Hence 1  |(iβ)−1 1 (iβ)| ≤  |(1 − ν −1 )1 (iβ)−1 | − |ν|−1  ≤

1 |ν|−1 − |2ν|−1

≤ 2|ν| for |β| ≥ R.

(9.47)

174

9 Semi-Separable Operators and Completeness

Take M0 > 0 such that max |f (iβ)| ≤ M0 .

−R≤β≤R

Using (9.44), (9.45) and (9.47), we can estimate f along the imaginary axis for |β| ≥ R as follows |f (iβ)| ≤ 2|ν|

∞  n

|1 + iβfj gj |−1 |gn xn |

n=0 j =0

−1 ∞  n 2 2 ≤ 2|ν| 1 + (βfj gj ) |gn xn | n=0 j =0

≤ 2|ν|



|gn xn | ≤ 2|ν|Gx for |β| ≥ R.

n=0

This proves that f is an entire function of order at most 1/2 that is bounded along the imaginary axis. Thus it follows from Theorem 14.2.2 that f equals a constant and hence (z)−1 G(I − zV )−1 x = Gx

for all z ∈ C.

(9.48)

Therefore it follows from (9.46) that we have the identity 1 (iβ)−1 G(I − iβV )−1 x = 1 (iβ)−1 (iβ)Gx

for all |β| ≥ R.

This implies that ∞

ν −1 + (1 − ν −1 )

∞ 

(1 + iβfj gj )−1 −

j =0

n=0

n 

(1 + iβfj gj )−1 gn xn = 0.

j =0

(9.49) If we let β  → ∞ in (9.49) and use the second assumption (ii), we obtain that Gx = ∞ j =0 gj xj = 0. This proves G(I − zV )−1 x = 0 for all z ∈ C.

(9.50)

To prove that x = 0, it remains to prove that the pair {G, V } is observable. From (9.50) and (9.25), it follows that on the imaginary axis we have ∞  j =1

(1 + iβfj gj )

∞  n n=0 j =1

(1 + iβfj gj )−1 gn xn = 0.

9.1 Discrete Semi-Separable Operators

175

Hence it follows from it follows from (9.46) that for |β| ≥ R ∞  n

(1 + iβfj gj )−1 gn xn = 0.

(9.51)

n=0 j =1

Taking β → ∞ in (9.51) yields that g0 x0 = 0 and hence x0 = 0. Using induction, assuming that x0 = x1 = · · · = xk = 0 for k ≥ 0, we have ∞  n

(1 + iβfj gj )−1 gn xn = 0.

(9.52)

n=k j =k+1

Part 2.

Taking β → ∞ in (9.52) yields that gk+1 xk+1 = 0 and hence xk+1 = 0. This proves x = 0 and completes the proof that ST = {0}. In this part we prove that ST ∗ = {0}. This will be done by contradiction. Assume that 0 = x ∗ ∈ ST ∗ . Since x ∗ ∈ ST ∗ , the function z → (I − zT ∗ )−1 x is an entire function. Therefore, using (6.14) and taking adjoints, it follows that f (z) = F ∗ (I − zT ∗ )−1 x = (z)−1 F ∗ (I − zV ∗ )−1 x ∗ .

(9.53)

Recall that (z) is an entire function of order at most 1/2. It follows from (9.37) and the proof of Lemma 9.1.4 that z → F ∗ (I − zV ∗ )−1 x ∗ is an entire function of order at most 1/2. So it follows from Theorem 14.1.1 that f is an entire function of order at most 1/2. Similar as done in part 1, using (9.37), (9.26), (9.45) and (9.47), we can estimate f along the imaginary axis for |β| ≥ R as follows |f (iβ)| ≤ 2|ν|

∞  ∞

|1 + iβfj gj |−1 |fn xn∗ |

n=0 j =n

≤ 2|ν|

∞  ∞ 2

1 + (βfj gj )2

−1

|fn xn∗ |

n=0 j =n

≤ 2|ν|



|fn xn∗ | ≤ 2|ν|F x ∗  for |β| ≥ R.

n=0

Since we can choose M0 > 0 such that max |f (iβ)| ≤ M0 ,

−R≤β≤R

176

9 Semi-Separable Operators and Completeness

we obtain that f is an entire function of order at most 1/2 that is bounded along the imaginary axis. Thus it follows from Theorem 14.2.2 that f equals a constant and hence (z)−1 F ∗ (I − zV ∗ )−1 x ∗ = F ∗ x ∗

for all z ∈ C.

(9.54)

Therefore it follows from (9.46) that we have the identity 1 (iβ)−1 F ∗ (I − iβV ∗ )−1 x ∗ = 1 (iβ)−1 (iβ)F ∗ x ∗

for all |β| ≥ R.

This implies that ∞

ν −1 + (1 − ν −1 )

n=0

∞ 

(1 + iβfj gj )−1 +

∞ 

(1 + iβfj gj )−1 fn xn∗ = 0.

j =n

j =0

(9.55) If we let β →∞ in (9.55) and use the second assumption (ii) we obtain ∗ that F ∗ x ∗ = ∞ j =0 fj xj = 0. This proves F ∗ (I − zV ∗ )−1 x ∗ = 0

for all z ∈ C.

(9.56)

To prove that x = 0, it remains to prove that the pair {F ∗ , V ∗ } is observable. From (9.56) and (9.37), it follows that on the real axis z = α with α ∈ R we have f0 x0∗

+

∞ n−1 

(1 + αfj gj )fn xn∗ = 0.

n=1 j =0

Taking α = −(f0 g0 )−1 yields f0 x0∗ = 0 and hence x0∗ = 0. Using induction, assuming that x0∗ = x1∗ = · · · = xk∗ = 0 for k ≥ 0, we have k 

∞  ∗ (1 − αfj gj ) fk+1 xk+1 +

j =0

Part 3.

n−1 

 (1 − αfj gj )fn xn∗ = 0.

n=k+2 j =k+1

∗ = 0 and hence Taking again α = −(fk+1 gk+1 )−1 yields fk+1 xk+1 ∗ ∗ xk+1 = 0. This proves x = 0 and completes the proof that ST ∗ = {0}. In this part we prove the statement of the theorem. Since in the present setting the underlying space is a Hilbert space, we can use Corollary 1.2.2. Thus

M⊥ T = ST ∗

and M⊥ T ∗ = ST .

9.2 Integral Operators with Semi-Separable Kernels

177

This shows M⊥ T = {0} and hence T has a complete span of eigenvectors and generalised eigenvectors. Similarly, since M⊥ T ∗ = ST = {0} it follows that T ∗ has a complete span of eigenvectors and generalised eigenvectors as well. This completes the proof of the theorem.   Corollary 9.1.7 Let T be the operator on 2+ (C) whose kj -th entry is given by (9.23) assuming − fj = gj =

1 , j +1

j ≥ 0.

(9.57)

Then the characteristic function (z) is given by  ∞   1 1  z (z) = 1 − 1− + ν ν (j + 1)2 j =0

 √   1  1 sin π z , = 1− + √ ν ν π z

(9.58)

and T has a complete span of eigenvectors and generalised eigenvectors. Proof The operator T is a special case of the operators appearing in the first example given by (9.34) on page 157. In particular, for this special case the formula for the characteristic function is a corollary of (9.33). This shows (9.58). From (9.57) and (9.58) it follows that the assumptions of Theorem 9.1.6 are satisfied and this implies the desired result.  

9.2 Integral Operators with Semi-Separable Kernels   Throughout this section X = L2 [0, 1]; Cm and T : X → X is the integral operator given by 

1

(T x)(t) =

k(t, s)x(s) ds,

0 ≤ t ≤ 1,

(9.59)

0

with & k(t, s) =

F1 (t)G1 (s) when 0 ≤ s ≤ t ≤ 1, F2 (t)G2 (s) when 0 ≤ t < s ≤ 1.

(9.60)

178

9 Semi-Separable Operators and Completeness

Here Fj (t) and Gj (t) (j = 1, 2) are matrices of sizes m × nj and nj × m, respectively, and as functions of t the entries of Fj (t) and Gj (t) are square integrable on [0, 1]. In this case the kernel function k is called a semi-separable kernel function; see [32, Chapter IX]. Given (9.60) the action of the operator T is also given by 

t

(Tf )(t) = F1 (t)



1

G1 (s)f (s) ds + F2 (t)

0

G2 (s)f (s) ds,

0 ≤ t ≤ 1,

t

(9.61) From representation (9.61) it follows that T is a finite rank perturbation of a Volterra operator. In fact, T = V + R, where V and R are the operators acting on X = L2 [0, 1]m defined by 



t

(V x)(t) = F1 (t)

t

G1 (s)x(s) ds − F2 (t)

0

G2 (s)x(s) ds,

0 ≤ t ≤ 1,

0

(9.62) 

1

(Rx)(t) = F2 (t)

G2 (s)x(s) ds,

0 ≤ t ≤ 1.

(9.63)

0

From [32, Proposition IX.1.1] we know that T is a Hilbert-Schmidt operator. On the other hand, in general, a nonzero integral operator with a semi-separable kernel is not a trace class operator but it can be an operator of order one. To derive a representation of the resolvent of V we follow [32, Section IX.2] in the context of the theory developed in Chap. 4. This requires some additional notation. First we introduce three auxiliary matrix-valued functions, namely $

%    n  Cn1 C 1 α(t) = : → , n 2 C Cn2 −G2 (t)F1 (t) −G2 (t)F2 (t) $

G1 (t)F1 (t)

G1 (t)F2 (t)

0 ≤ t ≤ 1; (9.64)

%

 Cn1 , 0 ≤ t ≤ 1; Cn2 −G2 (t)     Cn1 γ (t) = F1 (t) F2 (t) : → Cn1 , 0 ≤ t ≤ 1. Cn2 β(t) =

G1 (t)



: Cm →

(9.65)

(9.66)

Obviously, α(t) = β(t)γ (t) for 0 ≤ t ≤ 1. Given the functions α, β and γ , and using X = L2 [0, 1]m , we define three additional operators, namely A : X × X → X,

: X → X × X,

: X × X → X,

9.2 Integral Operators with Semi-Separable Kernels

179

These operators are defined as follows:      x x (t) A 1 (t) = α(t) 1 , x2 (t) x2

x1 , x2 ∈ X,

( x)(t) = β(t)x(t), x ∈ X, 0 ≤ t ≤ 1;      x1 (t) x1 , x1 , x2 ∈ X, (t) = γ (t)

x2 (t) x2

0 ≤ t ≤ 1;

(9.67) (9.68)

0 ≤ t ≤ 1.

(9.69)

Since α(t) = β(t)γ (t) for 0 ≤ t ≤ 1, the product

is equal to the operator A. Thus A =

. We are now ready to prove the first theorem. Theorem 9.2.1 Let V be the integral operator defined by (9.62). The resolvent operator (I − zV )−1 exists for all z ∈ C and is given by 

 (I −zV )−1 x (t) = x(t)+zγ (t)



t

Y (t, s; z)β(s)x(s) ds,

0 ≤ t ≤ 1,

(9.70)

0

where Y (t, s; z) with Y (s, s; z) = I , 0 ≤ s ≤ t ≤ 1, denotes the fundamental matrix solution of the ordinary differential equation y(t) ˙ = zα(t)y(t),

0 ≤ t ≤ 1.

(9.71)

Here α, β and γ are the the functions defined by (9.64), (9.65) and (9.66), respectively. Proof We split the proof in three parts. The first two parts have an auxiliary character.     Part 1. For j = 1, 2 let Vj : L2 [0, 1]; Cnj → L2 [0, 1]; Cnj be the operator of integration, i.e., 

 Vj x (t) =



t

x(s) ds,

0 ≤ t ≤ 1,

for j = 1.2.

(9.72)

0

We shall show that V =

  V1 0

. 0 V2

(9.73)

180

9 Semi-Separable Operators and Completeness

Using the definition of V as given by (9.62) together with the definitions of the functions β and γ as given by (9.65) and (9.66), respectively, we see that % G1 (s) x(s) d(s) −G2 (s) 0 % 1 0 $  t V1 0 = γ (t)

x (t), β(s)x(s) d(s) = 0 V2 0

  (V x)(t) = F1 (t) F2 (t)

Part 2.

t

$

  x ∈ L2 [0, 1]; Cm ,

which yields the identity  (9.73). In this part y ∈ L2 [0, 1]; Cm . With this function y we associate the function U defined by    V1 0

y (t), U (t) = 0 V2

0 ≤ t ≤ 1.

(9.74)

We claim that the function U is differentiable. In fact we shall show that U˙ (t) = ( y)(t),

0 ≤ t ≤ 1 and U (0) = 0.

(9.75)

To prove this note that y is the function β(·)y(·). It follows that  V1 U (t) = 0  V1 = 0 $ t

     0 V1 0

y (t) = β(·)y(·) (t) V2 0 V2    0 G1 (·) y(·) (t) V2 −G2 (·) % 0 G1 (s)y(s) ds = . 0 ≤ t ≤ 1. t − 0 G2 (s)y(s) ds

Part 3.

(9.76)

Using the final identity in (9.76) it is clear that the function U is differentiable and that its derivative is given by (9.75). Finally, from (9.76) it also follows that  U (0) = 0. Now let x ∈ L2 [0, 1]; Cm , and put y = (I − zV )−1 x. Then y satisfies the equation y − zV y = x.

(9.77)

Furthermore, let U be the function (see (9.74)) given by    V1 0

y (t), U (t) = 0 V2

0 ≤ t ≤ 1.

(9.78)

9.2 Integral Operators with Semi-Separable Kernels

181

By substitution of (9.73) in (9.77) and by applying to both sides of (9.77), we obtain 

x = y − z V y = y − z

 V1 0

y = y − zAU. 0 V2

(9.79)

Here we used that

is equal to the operator A defined by (9.67). From (9.75) we know that U˙ = y and U (0) = 0. Furthermore, (9.79) tells us that y = x + zAU . Summarising, we have U˙ − zAU = x

and U (0) = 0.

(9.80)

Finally, using the variation-of-constants formula, the solution to (9.80) is given by 

t

U (t; z) =

Y (t, s; z)β(s)x(s) ds, 0

where Y (t, s; z) denotes the fundamental solution of (9.71). Next observe from (9.73) and (9.78) that V y = U, and hence it follows from (9.77) that y = x + z U and this proves (9.70).   The operator R in (9.63) is of finite rank and can be represented as R = BC with C : X → Cn2 and B : Cn2 → X given by 

1

Cx =

G2 (s)x(s) ds,



 Bc (t) = F2 (t)c,

0 ≤ t ≤ 1.

(9.81)

0

Using Theorem 6.1.1, we can now derive that T admits a characteristic matrix function (z). In fact, using the explicit representation of the resolvent of V given in (9.70), we obtain the following representation for (z). Theorem 9.2.2 Let T = V + R, where V and R are given by (9.62) and (9.63), respectively. Write R = BC, where B and C are given by (9.81). Then the matrix function (z) : Cn2 → Cn2 given by (z) = ICn2 − zC(I − zV )−1 B,

z ∈ C,

(9.82)

is a characteristic matrix function for T . Moreover, (z) = Y22 (1, 0; z),

for all z ∈ C,

(9.83)

182

9 Semi-Separable Operators and Completeness

and C(I − zV )−1 x =



1

Y21 (1, s; z)G1 (s)x(s) ds+ 0



1

+

Y22 (1, s; z)G2 (s)x(s) ds,

z ∈ C.

(9.84)

0

Here Y (t, s; z) denotes the fundamental solution of (9.71) with Y (s, s; z) = I , in the block matrix notation given by $ Y (t, s; z) =

Y11 (t, s; z) Y12 (t, s; z) Y21 (t, s; z) Y22 (t, s; z)

% 0 ≤ s ≤ t ≤ 1.

,

Proof The fact that (z) is a characteristic matrix function for T follows from Theorem 6.1.1. It remains to prove the representation (9.83) for (z). We start with some preliminary observations. Since   β(s) Bc (s) =

0

G1 (s)F2 (s)c

1

−G2 (s)F2 (s)c

= second column of the matrixα(s),

(9.85)

where α(t) is defined in (9.64). The fundamental matrix solution Y (t, s; z) of (9.71) satisfies the differential equations d Y (t, s; z) = zα(t)Y (t, s; z), dt d Y (t, s; z) = −zY (t, s; z)α(s), ds

0 ≤ s < t ≤ 1,

Y (s, s; z) = I

0 ≤ s < t ≤ 1,

(9.86)

Y (t, t; z) = I.

(9.87)

c ∈ Cn2 .

(9.88)

Therefore we obtain from (9.85) and (9.87) that   zY (t, s; z)β(s) Bc (s) = −

0d

ds Y12 (t, s; z)c d ds Y22 (t, s; z)c

1 ,

Using these observations we can rewrite representation (9.70) to arrive at   (I − zV )−1 Bc (t) = F2 (t)c − γ (t)

 t 0 d Y (t, s; z)c1 ds 12 0

d ds Y22 (t, s; z)c

ds

= F1 (t)Y12 (t, 0; z)c + F2 (t)Y22 (t, 0; z)c,

c ∈ Cn2 . (9.89)

9.2 Integral Operators with Semi-Separable Kernels

183

Next we apply zC to both sides of (9.89) to obtain zC(I − zV )−1 Bc = z

 1   G2 (τ )F1 (τ )Y12 (τ, 0; z)c + G2 (τ )F2 (τ )Y22 (τ, 0; z)c dτ 0

 1 d =− Y22 (τ, 0; z)c dτ dτ 0 = c − Y22 (1, 0; z)c,

c ∈ Cn2 ,

where in the second step we have used (9.86) and (9.71). This shows that (z) = I − zC(I − zV )−1 B = Y22 (1, 0; z). Finally we compute C(I − zV )−1 . Fix x ∈ X. Then C(I − zV )−1 x = Cx + z





1

σ

G2 (σ )γ (σ ) 0



1  1

= Cx + 0

Y (σ, s; z)β(s)x(s) ds dσ 0

 G2 (σ )γ (σ )Y (σ, s; z) dσ β(s)x(s) ds.

s

Note that G2 (σ )γ (σ ) equals the second row of α(σ ). So using (9.86), we find C(I − zV )−1 x = Cx +

1  1

 

0

0 1

=



 dσ β(s)x(s) ds,

1

= Cx + 

s

d d dσ Y21 (σ, s; z) dσ Y22 (σ, s; z)

 Y21 (1, s; z) Y22 (1, s; z) − I β(s)x(s) ds 

1

Y21 (1, s; z)G1 (s)x(s) ds +

0

and this completes the proof of (9.84).

Y22 (1, s; z)G2 (s)x(s) ds 0

 

In [32, Chapter IX] the matrix Y22 (1, 0; z) is called the indicator of the operator I − zT . Representation (9.82) is proved in [32, Section IX.2] for z = 1, and for arbitrary z the equality follows by replacing the kernel function k by zk as is shown in [32, Section IX.3]. Here we have given a different direct proof using the explicit representation of the resolvent of V given in (9.70).

184

9 Semi-Separable Operators and Completeness

9.2.1 A Completeness Result for Semi-Separable Integral Operators To use our results to study completeness of integral operators with semi-separable kernel, we need precise estimates on the behaviour of the characteristic matrix function (9.82). In order to derive an explicit representation for this characteristic matrix function, we restrict to the case m = 1 in this subsection. Let us start with an illustrative example, by proving the following proposition. Proposition 9.2.3 Let T be the operator on L2 [0, 1] given by   T x (t) =



1

e−|t −s|x(s) ds,

0 ≤ t ≤ 1.

(9.90)

0

This operator T is a semi-separable integral operator and has a complete span of eigenvectors and generalised eigenvectors. Moreover, T is one-to-one and has a dense range, and thus MT = L2 [0, 1]. Proof Clearly, the kernel function k(t, s) = e−|t −s| is semi-separable. In fact & k(t, s) =

F1 (t)G1 (s) when 0 ≤ s ≤ t ≤ 1, F2 (t)G2 (s) when 0 ≤ t < s ≤ 1,

with F1 (t) = G2 (t) = e−t and F2 (t) = G1 (t) = et . Furthermore, since in this case X is the space L2 [0, 1], the operator A defined in (9.67) is given by %$ % $    x1 (t) 1 e2t x1 , (t) = A x2 −e−2t −1 x2 (t)

0 ≤ t ≤ 1.

and $ α(t) =

1

e2t

%

−e−2t −1

,

0 ≤ t ≤ 1.

Moreover, a direct computation shows that the fundamental matrix solution of the ordinary differential equation y(t) ˙ = zα(t)y(t) is given by   Y11 (t, s; z) Y12 (t, s; z) Y (t, s; z) = Y21 (t, s; z) Y22 (t, s; z)

9.2 Integral Operators with Semi-Separable Kernels

185

with entries 1  1 − z  −λ(t −s)  e−λ(t −s) + eλ(t −s) + e − eλ(t −s) 2 2λ t  ze  λ(t −s) Y12 (t, s; z) = − e−λ(t −s) e 2λ  ze−t  −λ(t −s) e Y21 (t, s; z) = − eλ(t −s) 2λ 1  1 − z  λ(t −s)  e−λ(t −s) + eλ(t −s) + e Y22 (t, s; z) = e−t − e−λ(t −s) , 2 2λ √ where λ = λ(z) = 1 − 2z. See [32, Section IX.3]. The entries of Y (t, s; z) are entire functions of z for fixed 0 ≤ s ≤ t ≤ 1 of order 1/2 that are polynomially bounded on the positive real axis, and have maximal growth on the negative real axis. Note that for z = Re z < 1/2, we have that λ = λ(z) is on the positive real axis. Also for z = Re z > 1/2, we have that λ = λ(z) is on the positive imaginary axis. Furthermore since the operator T is self-adjoint, the entire function Y22 (1, 0; z) has its zeros on the (positive) real axis. Similarly if z belongs to the ray Y11 (t, s; z) = et

z = (1 + xπ 2 )/2 + (π 2 i)/2,

x > 0,

we have √ √ 1 − 2z = iπ x + i 1 − z = (1 − π 2 x − π 2 i)/2 and   √ √ 1 − π 2 (x + i) sin π x + i , Y22 (1, 0; z) = e−1 cos π x + i + √ 2 x+i

x > 0.

Since eY22 (1, 0; z) = √ √ √ √ −π 2 x + i 1 = sin π x + i + cos π x + i + √ sin π x + i, 2 2 x+i

186

9 Semi-Separable Operators and Completeness

we derive that |eY22 (1, 0; z)| ≥    √    π2 x + i √  √ √ 1    sin π x + i  − cos π x + i + √ sin π x + i  .    2 2 x+i (9.91) From the fact that, as x → ∞, the first term on the right hand side of (9.91) goes to infinity and the second term stays bounded, we conclude that Y22 (1, 0; z) stays bounded away from zero. In fact, we can show that the right hand side of (9.91) is 2 strictly positive on the √ ray(0, iπ /2; 1/2). To show this let x + i = a + bi with a and b positive real numbers. Thus √ z = 1/2√+ π 2 (a + ib)2 /2. Then we have that 2ab = 1 and a ∈ [ 2/2, ∞) and b ∈ (0, 2/2]. For the first term in the right hand side of (9.91) we find a lower bound by  √  π2 x + i  π 2 √a 2 + b 2  eiπa e−πb − e−iπa eπb  √     sin π x + i  =      2 2 2   π 2  eπb − e−πb  π 3 ≥ ≥ 4 . 4b  2 For the second term in the right hand side of (9.91) we have the following upper bound   iπa −πb √ e e + e−iπa eπb  | cos π x + i| =   2 √   πb   √  e + e−πb   eπ/ 2 + e−π/ 2    ≤   ≤   2 2 and  iπa −πb  e e − e−iπa eπb     2i  √ √  1 eπb + e−πb  eπ/ 2 + e−π/ 2  ≤ √ ≤ .   2 4 2 a 2 + b2

   sin π(a + bi)  1    2(a + bi)  = √ 2 2 a + b2

So combining these estimates we find, for z on the ray(0, iπ 2 /2; 1/2), |Y22 (1, 0; z)| ≥

√ 3 π/√2 π3 3 − e + e−π/ 2 ≥ . 4e 4e 4e

(9.92)

9.2 Integral Operators with Semi-Separable Kernels

187

Furthermore, since 0 ≤ s ≤ t ≤ 1 we have 0 ≤ t − s ≤ 1 and hence Y22 (1, 0; z) dominates Yij (t, s; z) for 0 ≤ s ≤ t ≤ 1 and i, j = 1, 2. This can easily be seen by considering the functions z → Yij (t, s; z) on the negative real axis where all entries have maximal growth. Observe that, using (9.62) and (9.63), we can write T as T = V + R. From representation (9.70) for the resolvent (I − zV )−1 using the above properties of Y (t, s; z), we conclude that z → (I − zV )−1 x is an entire function of order one half (ρ = 1/2) which is polynomially bounded along the positive real axis. From representation (9.83) for (z) it also follows that there exists a δ0 > 0 such that |(z)| ≥ δ0 > 0 along the ray (0; iπ 2/2, 0). Furthermore, it follows from (9.84) and the fact that Y22 (1, 0; z) dominates Yij (t, s; z) for 0 ≤ s ≤ t ≤ 1 and i, j = 1, 2 that (z) dominates C(I − zV )−1 x for every x ∈ L2 [0, 1]. This proves that the operator T given by (9.90) has a complete span of eigenvectors and generalised eigenvectors according to Theorem 6.2.1. To prove the final part of the proposition, it suffices to show that T is one-to-one and has a dense range. Recall that T is the operator on L2 [0, 1] defined by (9.90). Obviously, the operator T is selfadjoint, and thus it suffices to prove that T is oneto-one. Take x ∈ L2 [0, 1] such that T x = 0. Then 

1

0= 

e−|t −s|x(s) ds

0 t

=

e−(t −s)x(s) ds +



0

1

e(t −s)x(s) ds

for all 0 ≤ t ≤ 1.

(9.93)

t

Differentiation with respect to t yields  x(t) −

t

e−(t −s)x(s) ds − x(t) +

0



1

e(t −s)x(s) ds = 0.

(9.94)

t

Therefore it follows from (9.93) and (9.94) that 

t

e−(t −s)x(s) ds = 0.

(9.95)

0

Differentiation of (9.95) yields 

t

x(t) =

e−(t −s)x(s) ds

(9.96)

0

and combining (9.95) and (9.96) yields that x = 0. This shows that T is one-to-one, and the proof of the proposition is complete.   The above proposition motivates the following theorem.

188

9 Semi-Separable Operators and Completeness

Theorem 9.2.4 Let T : L2 [0, 1] → L2 [0, 1] be an integral operator with semiseparable kernel given by (9.59)–(9.60). Define 

 G1 (t)F1 (t) G1 (t)F2 (t) α(t) = , −G2 (t)F1 (t) −G2 (t)F2 (t)

0 ≤ t ≤ 1,

and let Y (t, s; z) with Y (s, s; z) = I , 0 ≤ s ≤ t ≤ 1, denote the fundamental matrix of the ordinary differential equation y(t) ˙ = zα(t)y(t),

t ≥ 0.

Assume T is one-to-one and has a dense range. Furthermore, suppose that there exist a complex number z0 , a non-negative real number s0 , and a ρ-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, such that Y22 (1, 0; z0 ) = 0 and (a) there exists a δ0 > 0 with |Y22 (1, 0; z)| ≥ δ0 > 0 for z ∈ ray (θj ; z0 , s0 ),

j = 1, 2, . . . , κ,

(9.97)

(b) there exists an integer m and a constant M with for 0 ≤ s ≤ t ≤ 1 (Y (t, s; z) ≤ M(1 + |z|m ) for z ∈ ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ. (9.98) Suppose that the entire function z → Y22 (1, 0; z) has infinitely many zeros and is of completely regular growth. If Y22 (1, 0; z) dominates Yij (t, s; z) for all 0 ≤ s ≤ t ≤ 1 and i, j = 1, 2, then the closure of the generalised eigenspace MT of T is the full space. Proof The proof follows from the representation Theorems 9.2.1 and 9.2.2 and the completeness result presented in Theorem 6.2.1.   We conclude this subsection with the following corollary. Corollary 9.2.5 Let T be the integral operator on L2 [0, 1] given by   T x (t) =



1

k(t, s)x(s) ds,

0 ≤ t ≤ 1,

(9.99)

0

with the kernel function k(t, s) being given by & k(t, s) =

e−|t −s|

when 0 ≤ s ≤ t ≤ 1,

−e−|t −s| when 0 ≤ t < s ≤ 1.

(9.100)

9.2 Integral Operators with Semi-Separable Kernels

189

Then T is a semi-separable integral operator, and T has a complete span of eigenvectors and generalised eigenvectors. Moreover, T is one-to-one and has a dense range, and thus MT = L2 [0, 1]. Proof We split the proof into four parts. The main part is the second part which analysis the fundamental matrix solution of an associate differential equation. Part 1.

First let us prove that the operator T defined by (9.99) and (9.100) is semi-separable. To do this note that & k(t, s) =

F1 (t)G1 (s) when 0 ≤ s ≤ t ≤ 1,

(9.101)

F2 (t)G2 (s) when 0 ≤ t < s ≤ 1.

with F1 (t) = −G2 (t) = e−t and F2 (t) = G1 (t) = et . Furthermore, $ α(t) =

G1 (t)F1 (t)

G1 (t)F2 (t)

−G2 (t)F1 (t) −G2 (t)F2 (t)

%

$ =

1 e2t

%

e−2t 1

,

0 ≤ t ≤ 1. (9.102)

Part 2.

Next we compute the fundamental matrix solution Y (t, s; z) of the differential equation y(t) ˙ = zα(t)y(t) with α(t) being defined by (9.102). The system of differential equations becomes y˙1 (t) = zy1 (t) + e2t zy2 (t)

(9.103)

y˙2 (t) = e−2t zy1 (t) + zy2 (t).

(9.104)

In particular, e−2t y˙1 (t) = y˙2 (t) and y2 (t) =

e−2t y˙1 (t) − e−2t y1 (t). z

Using these identities, differentiation of the first equation for y1 yields y¨1 (t) = zy˙1 (t) + 2ze2t y2 (t) + ze2t y˙2 (t) = zy˙1 (t) + 2y˙1 (t) − 2zy1 (t) + zy˙1 (t) = (2z + 2)y˙1(t) − 2zy1 (t).

(9.105)

This is a second order linear differential equation with constant coefficients and can be solved explicitly. The characteristic polynomial of (9.105) and its roots are given by λ2 − (2 + 2z)λ + 2z = 0,

λ1,2 = λ1,2 (z) = 1 + z ±



1 + z2 . (9.106)

190

9 Semi-Separable Operators and Completeness

Therefore the solution of (9.103)–(9.104) becomes y1 (t) = c1 eλ1 t + c2 eλ2 t e−2t y˙1 (t) − e−2t y1 (t) z c1 (λ1 − z) (λ1 −2)t c2 (λ2 − z) (λ2 −2)t e e + , = z z

y2 (t) =

where the constants c1 and c2 have to be determined from the initial conditions. Observe that we can rewrite the solution of (9.103)–(9.104) in a symmetric way as follows   y1 (t) = e(z+1)t c1 eλ(z)t + c2 e−λ(z)t y2 (t) =

 e(z−1)t  c1 (1 + λ(z))eλ(z)t + c2 (1 − λ(z))e−λ(z)t z

where λ(z)2 = 1 + z2 and the constants c1 and c2 have to be determined from the initial conditions. The first column of Y (t, s; z) corresponds to the solution (y1 (t), y2 (t))T with initial data (y1 (s), y2 (s))T = (1, 0)T and the second column of Y (t, s; z) corresponds to the solution (y1 (t), y2 (t))T with initial data (y1 (s), y2 (s))T = (0, 1)T . This allows us to compute Y (t, s; z) explicitly and $ % Y11 (t, s; z) Y12 (t, s; z) Y (t, s; z) = Y21 (t, s; z) Y22 (t, s; z) with entries Y11 (t, s; z) =

 e(z+1)(t −s)  (λ(z) − 1)eλ(z)(t −s) + (λ(z) + 1)e−λ(z)(t −s) 2λ(z)

Y12 (t, s; z) =

 ze(z+1)(t −s)  λ(z)(t −s) − e−λ(z)(t −s) e 2λ(z)

Y21 (t, s; z) =

 ze(z−1)(t −s)  λ(z)(t −s) − e−λ(z)(t −s) e 2λ(z)

Y22 (t, s; z) =

 e(z−1)(t −s)  (λ(z) + 1)eλ(z)(t −s) + (λ(z) − 1)e−λ(z)(t −s) , 2λ(z)

where λ(z)2 = 1 + z2 . Note that the entries of Y (t, s; z) are entire functions of order one (ρ = 1) that are of completely regular growth.

9.2 Integral Operators with Semi-Separable Kernels

191

From the relation λ(z)2 = 1 + z2 we observe that if z is restricted to the line Re z = α, α = 0, then Re λ(z) satisfies  − 1 + α 2 ≤ Re λ(z) ≤ −α

or

α ≤ Re λ(z) ≤



1 + α2 . (9.107)

Indeed, z = α+iν for some ν and λ(z) = a(z)+b(z)i then λ(z)2 = 1+z2 implies 1 + α 2 − ν 2 = a(z)2 − b(z)2

and αν = a(z)b(z).

Note that in particular a(z) = 0 and we can eliminate b(z) to arrive at 1 + α 2 − ν 2 = a(z)2 −

α2 ν 2 a(z)2

and hence  a(z)2  1 + α 2 − a(z)2 = ν 2 . 2 2 a(z) − α

Part 3.

Since the right side is non-negative this proves the claim. This shows that the entries of Y (t, s; z) are polynomially bounded along lines Re z = α, α = 0, and have maximal growth on the positive real axis. Recall that the characteristic function (z) is given by (9.82), and hence (z) = Y22 (1, 0; z) =

 e(z−1)  λ(z)  e(z−1)  λ(z) e e − e−λ(z) + + e−λ(z) . 2λ(z) 2 (9.108)

We first show that if z is restricted to the line Re z = α with α > 0, then (z) = 0. Since (z) = 0 is equivalent to (λ(z) + 1)e2λ(z) = −(λ(z) − 1). Using λ(z) = a(z) + ib(z) we can write

|(λ(z) + 1)e2λ(z)|2 = (a(z) + 1)2 + b(z)2 e2a(z) On the other hand |1 − λ(z)|2 = (a(z) − 1)2 + b(z)2 .

192

9 Semi-Separable Operators and Completeness

Using (9.107) we have that either a(z) > α > 0 or a(z) < −α < 0. Therefore we conclude that if a(z) > 0, then |(λ(z) + 1)e2λ(z)| > | − λ(z) + 1|, and thus (z) = 0. If a(z) < 0, then |(λ(z) + 1)e2λ(z)| < | − λ(z) + 1|, and thus (z) = 0 also in this case. This proves that (z) = 0 on the line Re z = α with α > 0. Next we show that if z = α + iν, then limν→∞ |(z)| = 0. Using Part 2 and the notation λ(z) = a(z) + ib(z), it follows that b(z) → ∞ as ν → ∞. Since either a(z) > α > 0 or a(z) < −α < 0 and |λ(z)| → ∞, it follows from (9.108) that limν→∞ |(z)| = 0. This proves that there exists δ0 > 0 such that |Y22 (1, 0; z)| ≥ δ0

Part 4.

for z = α + iν with α > 0 fixed.

Finally it follows from (9.84) that (z) dominates C(I − zV )−1 x for every x ∈ L2 [0, 1] and this proves that the operator T given by (9.90) has a complete span of eigenvectors and generalised eigenvectors according to Theorem 6.2.1. Finally, let us prove that T defined by (9.99) and (9.100) is one-to-one and has a dense range. To do this note that T ∗ = −T , and hence T has a dense range when T is one-to-one. so it suffices to prove that T is one-to-one. Take x ∈ L2 [0, 1] such that T x = 0. Then 

t

e−(t −s)x(s) ds −

0



1

e(t −s)x(s) ds = 0

for all 0 ≤ t ≤ 1.

t

(9.109)

Differentiation with respect to t yields 

t

x(t) −

e−(t −s)x(s) ds + x(t) −

0



1

e(t −s)x(s) ds = 0.

(9.110)

t

Therefore it follows from (9.109) and (9.110) that  x(t) =

t

e−(t −s)x(s) ds.

(9.111)

0

Differentiation of (9.111) yields that x˙ = 0 and hence x is constant. But now it follows from (9.111) that x = 0. This shows that T is one-to-one. Thus T is one-to-one and has a dense range. This shows that MT = L2 [0, 1] because of the completeness result proved in the preceding part.  

9.3 Intermezzo: Fundamental Solutions of ODE and Volterra Operators

193

9.3 Intermezzo: Fundamental Solutions of ODE and Volterra Operators We have seen that in the computation of the resolvent of an integral operator, the fundamental solution of an ordinary differential equation appears naturally, see for example Theorem 9.2.1. In this section we collect some basic results about the fundamental matrix solution and present a resolvent formula for a class of integral operators that will be useful in Chap. 11, see Lemma 11.4.7. Let A(t) be an n×n matrix of which the entries are Lebesgue integrable functions on the interval [r, c] where r < c < ∞. Then [32, Lemma IX.2.2] tells us that there exists a unique continuous n × n matrix function F on [r, c] such that  F (t) = ICn +

t

r ≤ t ≤ c.

A(s)F (s) ds,

(9.112)

r

Moreover, from the theory of integration (see, e.g., [72, Section 8.15]) we know that F defined by (9.112) is absolutely continuous on [r, c] and d F (t) = A(t)F (t), dt

for all t ∈ [r, c] a.e.

(9.113)

Here the term a.e. stands for almost everywhere and refers to the fact the equality in (9.113) holds except for a set of measure zero. The function F defined by (9.112) is called the fundamental (matrix) solution of (9.113) normalised to ICn at t = r of the homogeneous ordinary differential equation x(t) ˙ = A(t)x(t),

r ≤ t ≤ c.

(9.114)

The absolutely continuous solution of equation (9.114) is given by F (t)x where x is any arbitrary vector in Cn , assuming the equality in (9.114) is understood as almost everywhere on [r, c]. In what follows, given F we define Y (t, s) := F (t)F (s)−1 ,

r ≤ s ≤ t ≤ c.

(9.115)

The function Y (·, s) is called the fundamental solution of (9.114) normalised such that Y (s, s) = ICn at s. From [32, Lemma IX.2.2] we also know that the function F (·)−1 is continuous on [r, c] and F (t)

−1

 =I

Cn

t



F (s)−1 A(s) ds,

r ≤ t ≤ c.

r

In particular, the function F (·)−1 is absolutely continuous on [r, c].

(9.116)

194

9 Semi-Separable Operators and Completeness

If A is a scalar function, or more generally, if A(t) is a diagonal matrix for each t ≥ r, then the fundamental solution Y (t, s) is given explicitly by ⎧   t  ⎪ ⎪ A(τ )dτ , r ≤ s ≤ t ≤ c, ⎨ exp s Y (t, s) = s   ⎪ ⎪ ⎩ exp − A(τ )dτ , r ≤ t ≤ s ≤ c.

(9.117)

t

This fact will be used in Sect. 11.4. For the general case such explicit formulas are not available. If A(t) is a continuous function on [r, c], then the fundamental solution F is continuously differentiable on [r, c]. For that case the results presented below are well known and can be found in classical text books; see, e.g., [14, Sections 3.2– 3.5] and [41, Chapter III]. Lemma 9.3.1 Let F be the fundamental solution of (9.114) normalised to I at r and let Y (·, s) be the fundamental  solution of (9.114) normalised to I at s with r ≤ s ≤ c. Let ϕ ∈ C [r, c], Cn , and let u be a vector in Cn . Then the function x defined by 

t

x(t) = F (t)u + F (t)

F (s)−1 ϕ(s) ds

r



t

= Y (t, 0)u +

Y (t, s)ϕ(s) ds,

r ≤ t ≤ c,

r

is the unique absolutely continuous solution of the nonhomogeneous equation x(t) ˙ = A(t)x(t) + ϕ(t)

for all t ∈ [r, c] a.e. and x(r) = u.

(9.118)

Proof From the properties of F it follows that x is absolutely continuous. A direct computation shows that for almost all t ∈ [r, c] we have : 

t

x(t) ˙ = A(t)F (t)u + A(t)F (t)

F (s)−1 ϕ(s) ds + F (t)F (t)−1 ϕ(t)

r

   t −1 F (s) ϕ(s) ds + ϕ(t) = A(t)x(t) + ϕ(t). = A(t) F (t)u + F (t) r

Thus x is a solution of equation (9.118). To prove uniqueness assume y is another absolutely continuous solution of equation (9.118). Then ψ := x − y satisfies ˙ ψ(t) = A(t)ψ(t),

for all t ∈ [r, c] a.e.,

and ψ(r) = 0.

This implies that ψ(t) = F (t)ψ(r) for r ≤ t ≤ c. But ψ(r) = 0, and hence x = y.  

9.3 Intermezzo: Fundamental Solutions of ODE and Volterra Operators

195

Lemma 9.3.2 Assume the matrix function A(t) in (9.114) is periodic with period ω > 0. Then Y (t + ω, s + ω) = Y (t, s),

r ≤ s ≤ t ≤ c.

(9.119)

Proof Let X(·) = Y (·, s + ω) and put (·) = X(· + ω). So (t) = Y (t + ω, s + ω) ˙ and det (t) = det X(t + ω) = 0. On the other hand, since X(t) = A(t)X(t) and X(s + ω) = I , it follows that ˙ + ω) ˙ (t) = X(t = A(t + ω)X(t + ω) = A(t)X(t + ω) = A(t)(t). Thus  is a fundamental solution of (9.114). Furthermore, (s) = X(s + ω) = I . Thus (·) = Y (·, s). So we can conclude that Y (t + ω, s + ω) = X(t + ω) = (t) = Y (t, s),  

which completes the proof.

The above lemma shows that in general a fundamental solution of (9.114) is not ω-periodic in t if the function A is ω-periodic. On the other hand, the lemma suggests the following result which is known as Floquet’s theorem (see [23]). For the proof we refer to Theorem 5.1 in [14] and Theorem 7.1 in [41] (also pay attention to the footnote on page 118 of [41]). Lemma 9.3.3 Assume that the function A is ω-periodic. Then any fundamental solution  of Eq. (9.114) can be written as (t) = P (t)et Q ,

t ≥ r,

where P (t) is ω-periodic non-singular matrix function, and Q is a constant n × n matrix.

9.3.1 A Related Volterra Operator We continue to use the notation and terminology introduced in the preceding paragraphs. Fix r < c < ∞. We shall be dealing with the operator V on  C [r, c]; Cn defined by 

t

(V ϕ)(t) =

A(s)ϕ(s) ds, r

r ≤ t ≤ c.

(9.120)

196

9 Semi-Separable Operators and Completeness

As before A(t) is an n × n matrix of which the entries are Lebesgue integrable functions on the interval [r, c]. To analyse the spectral properties of V we need z ∈ C as an extra parameter in (9.114), as follows x(t) ˙ = zA(t)x(t),

r ≤ t ≤ c.

(9.121)

In the sequel F (t; z) denotes the fundamental solution of the differential equation (9.121) normalised to I at r and Y (t, s; z) denotes the fundamental solution of the differential equation (9.121) normalised to I at s with r ≤ s ≤ c. Lemma 9.3.4 If the entries of A are Lebesgue integrable functions on [r, c], then the operator V defined by (9.120) is a Volterra operator, and for r ≤ t ≤ c we have

−1

(I − zV )



ϕ (t) = ϕ(t) + zF (t; z)

t

F (s; z)−1 A(s)ϕ(s) ds.

r



t

= ϕ(t) + z

Y (t, s; z)A(s)ϕ(s) ds.

(9.122)

r

Proof We split the proof into three parts. In the first part we prove that V is a compact operator and in the second that V is a Volterra operator. The final part deals with equality (9.122).   Part 1. Let ϕ ∈ C [r, c]; Cn . From the definition of V in (9.120) it follows that 

t

V ϕ = sup  r≤t ≤c



c

A(s)ϕ(s) ds ≤

r

 A(s) ds ϕ.

(9.123)

r

  Since the polynomials are dense in C [r, c]; Cn (see Example 1 on page 265 of [36]), there exists a sequence of n × n matrices A1 , A2 , A3 , . . . of which the entries are polynomials such that 

c

A(s) − Aj (s) ds → 0 (j → ∞).

r

  Now let Vj be the operator on C [r, c]; Cn defined by (9.120) with Aj in place of A. Then Vj is a compact operator for each j = 1, 2, 3, . . ., and the estimate given by (9.123) implies that 

c

V − Vj  ≤

A(s) − Aj (s) ds → 0

(j → ∞).

r

Hence V is the limit in operator norm of a sequence of compact operators. Thus V is compact too.

9.3 Intermezzo: Fundamental Solutions of ODE and Volterra Operators

Part 2.

197

In order to prove that the compact operator V is Volterra, it remains to show that I − zV is one-to-one for each z ∈ C. From (9.120) it follows  that for ψ, ϕ ∈ C [r, c]; Cn we have for each z ∈ C (I − zV )ϕ = ψ ⇐⇒ ψ(t) = ϕ(t) − z

 t A(s)ϕ(s) ds r

for each r ≤ t ≤ c. (9.124)

Now fix z ∈ C, and assume that (I − zV )ϕ = 0, then 

t

ϕ(t) = z

A(s)ϕ(s) ds

for each r ≤ t ≤ c.

r

Hence ϕ is absolutely continuous and satisfies the initial value differential equation: ϕ(t) ˙ = zA(t)ϕ(t),

Part 3.

( for all t ∈ [r, c] a.e.),

ϕ(r) = 0.

So ϕ(t) = F (t; z)ϕ(r) = 0 for all r ≤ t ≤ c and hence ϕ = 0. Thus I − zV is invertible and therefore the operator V is Volterra.  Next, we prove the identity (9.122). Let ϕ ∈ C [r, c]; Cn , and let x = (I − zV )−1 ϕ. Then  x(t) =

t

zA(s)x(s) ds + ϕ(t),

r ≤ t ≤ c.

r

First we assume that ϕ is absolutely continuous on [r, c]. The preceding identity shows that x is also absolutely continuous. Furthermore, x satisfies the nonhomogeneous differential equation x(t) ˙ = zA(t)x(t) + ϕ(t) ˙

( for all t ∈ [r, c] a.e.) and x(r) = ϕ(r). (9.125)

But then, applying Lemma 9.3.1 with zA(t) in place of A(t), with ϕ˙ in place of ϕ, and with u = ϕ(r), we obtain 

t

x(t) = F (t, z)ϕ(r) + F (t; z)

F (s; z)−1 ϕ(s) ˙ ds, r ≤ t ≤ c.

r

(9.126)   Next, let ϕ be an arbitrary function in C [r, c]; Cn , and put ψ = V ϕ. Then x = (I −zV )−1 ϕ = ϕ+z(I −zV )−1 V ϕ = ϕ+z(I −zV )−1 ψ.

(9.127)

198

9 Semi-Separable Operators and Completeness

Note that ψ is absolutely continuous. Thus we can apply the result of the preceding paragraph with ψ in place of ϕ. Since ψ(r) = 0 and ψ˙ = Aϕ, it follows that  t

−1 (I − zV ) ψ (t) = F (t; z) F (s; z)−1 A(s)ϕ(s) ds, r ≤ t ≤ c. r

Using the latter identity in (9.127) yields (9.122). The final statement follows from the fact that Y (t, s; z) = F (t; z)F (s; z)−1 .  

Chapter 10

Periodic Delay Equations

In this chapter we introduce linear periodic functional differential equations as infinite dimensional dynamical systems. The first two sections have a preliminary character and concern time dependent delay equations and the associated fundamental solutions. In the third section we show that a time dependent delay equation has an associated two parameter family of solution operators. In the last section of this chapter we introduce the period maps for periodic delay equations, and in Theorem 10.4.5 we collect the spectral properties of the period maps that will be used in the next chapter when studying completeness problems for period maps.

10.1 Time Dependent Delay Equations We shall be dealing with linear functional differential equations of the following type: ⎧  ⎪ ⎨ x(t) ˙ =

0

[dτ η(t, τ )] x(t + τ ) −h ⎪ ⎩ x(θ ) = ϕ(θ ) (−h ≤ θ ≤ 0).

(t ≥ 0),

(10.1)

  Here ϕ is a given function in C [−h, 0]; Cn . The integral in (10.1) is a RiemannStieltjes integral and the subscript τ in dτ indicates that the integration is with respect to τ . Throughout we assume that for each t ≥ 0 the function η(t, ·) is a n×n matrix of which the entries are real functions of bounded variation on [−h, 0] and continuous from the left on (−h, 0), and η(t, 0) = 0. Moreover, it is assumed that there is a nondecreasing bounded function m ∈ L1loc [−h, ∞), the space of integrable

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_10

199

200

10 Periodic Delay Equations

functions that locally belong to L1 , such that the variation of η(t, ·) satisfies the estimate Var[−h,0] η(t, ·) ≤ m(t),

t ≥ 0.

(10.2)

The non-negative real number h in (10.1) will be called the delay number of Eq. (10.1). In the sequel we will refer to h as the delay. Note that Eq. (10.1) has the character of an initial value problem. Theorem 10.1.1 Under the above conditions, Eq. (10.1) defines a well-posed dynamical system, that is, Eq. (10.1) has a unique absolutely continuous solution x on [0, ∞). The above theorem is a special case of Theorem 6.1.1 in [42]. We shall derive the theorem as a corollary of Theorem 10.1.4 given further on in this section. See Corollary 10.1.6 and Theorem 10.1.9. In what follows it will be convenient to extend for each t ≥ 0 the function η(t, ·) to a function on (−∞, ∞) by η(t, s) := η(t, 0)

for s ≥ 0 and η(t, s) := η(t, −h)

for s ≤ −h.

(10.3)

Since by assumption η(t, 0) = 0, the first identity in (10.3) implies that η(t, s) = 0 for each s ≥ 0. To illustrate the Riemann-Stieltjes integral in Eq. (10.1) with an example, take h = 1, and let η be given by

η(t, τ ) =

⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩

0 a(t)

for τ = 0, for − 1 < τ < 0,

(10.4)

a(t) + b(t) for τ ≤ −1.

In this case Eq. (10.1) reduces to &

x(t) ˙ = a(t)x(t) + b(t)x(t − 1) x(t) = ϕ(t)

(−1 ≤ t ≤ 0).

(t ≥ 0),

(10.5)

Obviously, in this case (10.2) is satisfied too. We shall be dealing with Eq. (10.5) in Sect. 11.4 assuming a and b are one periodic, and in Sect. 11.5 assuming a and b are two periodic. To derive a representation for the solution of equation (10.1) we first present an equivalent version of (10.1) which will allows us to rewrite Eq. (10.1) as a lower triangular integral equation of the second kind.

10.1 Time Dependent Delay Equations

201

As a first step, we replace τ in (10.1) by −τ , and put ζ(t, τ ) := η(t, −τ ). Furthermore, following (10.3), for each t ≥ 0 we define ζ (t, s) := ζ (t, 0)

for s ≤ 0

and ζ(t, s) := ζ(t, h) for s ≥ h.

(10.6)

Note that our conditions on η imply that for each t ≥ 0 the entries of the matrix function ζ (t, ·) belong to NBV [0, h]. In particular, ζ(t, 0) = 0, and hence the first part of (10.6) tells us that ζ(t, s) = 0 for s ≤ 0.   Lemma 10.1.2 Let ϕ ∈ C [−h, 0]; Cn be given, and let η(t, τ ) be as in the first paragraph of this subsection. Put ζ(t, τ ) = η(t, −τ ) for any −∞ < τ < ∞, and define k(t, s) := ζ(t, t − s),

0 ≤ s < ∞,

g(t) := ζ (t, t)ϕ(0) + (ϕ)(t),

0 ≤ t < ∞,

0 ≤ t < ∞,

(10.7) (10.8)

    where  is the linear map from C [−h, 0]; Cn to L1 [0, ∞); Cn given by ⎧ h ⎪ ⎨ [dτ ζ(t, τ )]ϕ(t − τ ), 0 ≤ t ≤ h, (ϕ)(t) = t ⎪ ⎩ 0, t ≥ h.

(10.9)

Then x is a solution of the delay equation (10.1) if and only if 

t

x(t) ˙ −

k(t, s)x(s) ˙ ds = g(t),

t ≥ 0.

(10.10)

0

Proof Recall that for any t ≥ 0 we have ζ(t, τ ) = ζ(t, h) whenever τ ≥ h. This implies that 

h



t

[dτ ζ (t, τ )]x(t − τ ) =

0

[dτ ζ(t, τ )]x(t − τ ),

for all t ≥ h.

(10.11)

0

Using the definition of ζ(t, τ ), the definition of the linear map  in (10.9), and the identity (10.11), one obtains the following equivalent version of (10.1):

x(t) ˙ =

⎧ t ⎪ ⎨ [dτ ζ(t, τ )]x(t − τ ) + (ϕ)(t), ⎪ ⎩

0

x(0) = ϕ(0).

t ≥ 0,

(10.12)

202

10 Periodic Delay Equations

Applying integration by parts the first identity in (10.12) yields  x(t) ˙ =

t

0

 =

t

=

τ =t  ζ(t, τ )x(t ˙ − τ ) dτ + ζ(t, τ )x(t − τ ) + (ϕ)(t) τ =0

0



[dτ ζ(t, τ )]x(t − τ ) + (ϕ)(t)

t

ζ(t, t − s)x(s) ˙ ds + ζ (t, t)x(0) + (ϕ)(t),

0

 =

t

ζ(t, t − s)x(s) ˙ ds + ζ (t, t)ϕ(0) + (ϕ)(t),

t ≥ 0.

0

Together with the definitions of k and g this yields (10.10).

 

Since η(t, s) = η(t, 0) = 0 for s ≥ 0, the kernel function k defined by (10.7) is lower triangular, that is, k(t, s) = 0 for s ≥ t. Also note that the second identity in (10.3) implies that k(t, s) = ζ(t, h),

for all s + h ≤ t.

Furthermore, the function k is entirely determined by the given data. The same is true for the function g appearing in (10.8). Thus Lemma 10.1.2 tells us that solving the linear functional differential equation (10.1) is equivalent to solving a lower triangular integral equation of the second kind of which the kernel function and the right hand side are known. For later purposes we rewrite k in terms of η in place of ζ which yields: k(t, s) = η(t, s − t),

0 ≤ s < ∞, 0 ≤ t < ∞.

(10.13)

Let x be an absolutely continuous solution of the delay equation (10.1). Using x(0) = ϕ(0) and formula (10.10) we obtain 

t

x(t) = x(0) +

x(a) ˙ da

0

= ϕ(0) +

 t  0

a 0

 k(a, s)x(s) ˙ ds



t

da +

g(a) da. 0

(10.14)

10.1 Time Dependent Delay Equations

203

Next, using the definition of the function k in Lemma 10.1.2, changing the order of integration, and applying integration by parts we see that  t  0

a

 k(a, s)x(s) ˙ ds

0

=

 t  0

=

a

0

 t  0

t

=− 

 ζ (a, a − s)x(s) ˙ ds

0 t

=− 0

da

 ζ (a, a − s) da x(s) ˙ ds

t

s



da =



d ds



d ds



t



t

ζ (a, a − s) da x(s) ds +

s



t



s=t  ζ (a, a − s) da x(s)

s=0

s t

ζ (a, a − s) da x(s) ds −

ζ (a, a) da ϕ(0),

0 ≤ a ≤ t.

0

s

Thus the integral equation (10.10) with x˙ as the unknown is equivalent to a second order lower triangular integral equation with x as the unknown given by 

t

x(t) = x(0) +

x(a) ˙ da

0



t

=− 0

d ds



t

 ζ(a, a − s) da x(s) ds

s

 t

 t + ϕ(0) 1 − ζ (a, a) da + g(a) da. 0

(10.15)

0

This allows us to study (10.1) under weaker regularity assumptions on the initial data ϕ. The kernel function k defined by (10.7) has a number of interesting properties. To state these properties requires some additional definitions. Resolvent Kernel Functions Let k(t, s) be an n × n matrix function defined on [0, ∞) × [0, ∞), and assume that its entries are measurable on [0, ∞) × [0, ∞). We say that k is a lower triangular kernel function of type L1loc if k(t, s) = 0 for   0 ≤ t ≤ s < ∞ and for each 0 < c < ∞ and each f ∈ L1 [0, c]; Cn we have  sup

f ≤1 0



c

t

(

k(t, s)f (s) ds)dt < ∞,

0 ≤ t ≤ c.

0

  Here f  denotes the norm of f as function belonging to L1 [0, c]; Cn . In that case, for each 0 < c < ∞, the map 

t

f →

k(t, s)f (s) ds, 0

0 ≤ t ≤ c,

204

10 Periodic Delay Equations

  defines a bounded linear operator on L1 [0, c]; Cn which we shall denote by K (or by Kc if it is convenient to stress the dependence on c). The linear space of lower triangular kernel functions on [0, c] × [0, c] of type L1loc endowed with the norm  c 

k1 := sup

f ≤1 0



s∈[0,c]

k(t, s)f (s) ds dt

0 c

= ess sup

t

k(t, s) dt

(10.16)

0

is a Banach space  (see Theorem 9.2.4  and Proposition 9.2.7 of [40]) which we will denote by L1+ [0, c] × [0, c]; Cn×n . Now let k be a lower triangular kernel function of type L1loc . We call an n × n matrix function r(t, s) on [0, ∞) × [0, ∞) a resolvent kernel function of k if r is a lower triangular kernel function of type L1loc and 

t

r(t, s) = k(t, s) +

r(t, a)k(a, s) da,

0 ≤ s ≤ t < ∞,

(10.17)

k(t, a)r(a, s) da,

0 ≤ s ≤ t < ∞.

(10.18)

s



t

= k(t, s) + s

Note that the identity (10.18) implies that for 0 < c < ∞ we have 

t

(Kc Rc f )(t) =

k(t, s)(Rc f )(s) ds 

0



t

=

k(t, s) 0

=



s

r(s, τ )f (τ ) dτ 0

 t 

t

ds



k(t, s)r(s, τ ) ds f (τ ) dτ 0

τ

 t

= r(t, τ ) − k(t, τ ) f (τ ) dτ 0

= (Rc f )(t) − (Kc f )(t),

0 ≤ t ≤ c.

It follows that Kc Rc = Rc −Kc . Similarly, using (10.17), we have Rc Kc = Rc −Kc . This yields Kc Rc = Rc Kc and (I − Kc )(I + Rc ) = (I + Rc )(I − Kc ) = I,

(10.19)

  where I is the identity operator on L1 [0, c]; Cn . Thus I − Kc is an invertible   operator on L1 [0, c]; Cn , and its inverse is given by I + Rc .

10.1 Time Dependent Delay Equations

205

Proposition 10.1.3 Let the conditions on η presented in the first paragraph of this subsection be satisfied. Then k(t, s) = η(t, s − t), 0 ≤ s, t < ∞, is a lower triangular kernel function of type L1loc . Proof From the paragraph preceding (10.13) we know that k(t, s) = η(t, s − t) implies that k is lower triangular. Furthermore, from (10.2) it follows that k is of   type L1loc . Theorem 10.1.4 If k(t, s) is a lower triangular kernel function of type L1loc , then k(t, s) has a unique resolvent kernel function of type L1loc . Proof The proof will be done in three steps. Throughout k(t, s) is a lower triangular kernel function of type L1loc . Step 1.

First note that if k1 and k2 are lower triangular kernel functions on [0, c]× [0, c], then the same holds true for the functions 



t

k1 (t, a)k2 (a, s) da s

Step 2.

t

and

k2 (t, a)k1 (a, s) da. s

Furthermore, from the discussion in the paragraph preceding the present theorem it follows that a resolvent kernel function of type L1loc is unique whenever it exists. Because of the uniqueness of the resolvent kernel of type L1loc , it suffices to prove existence of a resolvent kernel on [0, c] for every c ∈ (0, ∞). To do this fix c ∈ (0, ∞). Assume first that k1 ≤ 1 with k1 given by (10.16), then for each c ∈ (0, ∞) the map  r(t, s) →

t

k(t, a)r(a, s) da + k(t, s)

s

Step 3.

  is a contraction on L1 [0, c]×[0, c]; Cn×n . This shows that (10.18) (and, using (10.19), similarly (10.17)) has a unique resolvent solution on [0, c] for each c, and this solution is a resolvent kernel of type L1loc . Since k(t, s) is a lower triangular kernel function of type L1loc , we define a scaled lower triangular kernel function of type L1loc by 3 k(t, s) = e−γ (t −s)k(t, s). Since 3 k1 = ess sup s∈[0,c]



c 0

3 k(t, s) dt = ess sup s∈[0,c]



c 0

e−γ (t −s)k(t, s) dt.

206

10 Periodic Delay Equations

ˆ 1 < 1. From Step 2, it follows that the Next choose γ so large that k equation  t 3 k(t, a)3 r(a, s) da, 3 r(t, s) = 3 k(t, s) + s

has a unique solution 3 r ∈ L1 [0, c]n×n for each c ∈ (0, ∞). Therefore, we have  t 3 r(t, s) = e−γ (t −s)k(t, s) + e−γ (t −a)k(t, a)3 r(a, s) da, s

and hence eγ (t −s)3 r(t, s) = k(t, s) +



t

k(t, a)eγ (a−s)3 r(a, s) da.

s

Thus 

t

r(t, s) = k(t, s) +

k(t, a)r(a, s) da,

(10.20)

s

where r(t, s) = eγ (t −s)3 r(t, s). This completes the proof.

 

Proposition 10.1.5 Assume the conditions on η presented in the first paragraph of this subsection are satisfied. If k is the kernel function defined by (10.13), then k(t, s) ≤ m(t) for 0 ≤ s ≤ t. On the other hand, if k(t, s) is a lower triangular kernel function satisfying the previous estimate and r(t, s) is the corresponding resolvent kernel function, then  r(t, s) ≤ m(t) exp



t

 m(σ ) dσ ,

0 ≤ s ≤ t < ∞.

(10.21)

s

Proof From the definition of k given by (10.13) and the estimate (10.2) it follows that k(t, ·) ≤ m(t), where m is nondecreasing and bounded. Using the latter estimate and (10.18), we obtain the following integral inequality for the function u(t, s) := r(t, s) on 0 ≤ s ≤ t. 

t

u(t, s) ≤ m(t) + m(t)

u(a, s) da,

0 ≤ s ≤ t < ∞.

(10.22)

s

Now fix s ∈ [0, ∞), and put  q(t) = exp −



t

m(σ ) dσ s





t s

u(a, s) da. t ≥ s.

(10.23)

10.1 Time Dependent Delay Equations

207

Differentiation of q with respect to t yields  t   dq (t) = −m(t)q(t) + exp − m(σ ) dσ u(t, s) dt s  t  t     u(a, s) da exp − m(σ ) dσ = u(t, s) − m(t)  ≤ m(t) exp −

s



t

s

 m(σ ) dσ ,

s

where we have used (10.22). Integration from s to t yields the inequality 

t

q(t) ≤

 m(a) exp −



s

a

  m(σ ) dσ da = 1 − exp −

s



t

 m(σ ) dσ .

s

Together with the definition of q in (10.23) we arrive at 

t

m(t)

  t  u(a, s) da = m(t) exp m(σ ) dσ q(t)

s

s

 ≤ −m(t) + m(t) exp



t

 m(σ ) dσ .

s

Substitution into (10.22) yields 

t

u(t, s) ≤ m(t) exp[

m(σ ) dσ ],

0 ≤ s ≤ t < ∞,

s

 

which completes the proof.

Together Lemma 10.1.2 (formula (10.10) in particular), Theorem 10.1.4 and the identity (10.19) yield the following corollary. Corollary 10.1.6 Assume the conditions on η presented in the first paragraph of this subsection are satisfied. Let k be the kernel function defined by (10.13), and let r(t, s) be the corresponding resolvent kernel function. Then x is a solution of the delay equation (10.1) if and only if 

t

x(t) ˙ = g(t) +

r(t, s)g(s) ds,

0 ≤ t < ∞,

(10.24)

0

where g is given by (10.8). Moreover, the solution x is unique whenever it exists. Note that the uniqueness of x follows from Eq. (10.24) and the fact that x(0) is uniquely determined by ϕ. More precisely, x(0) = ϕ(0) by the second part of (10.1).

208

10 Periodic Delay Equations

In the special case when η is given by (10.4), the lower triangular kernel function k defined by (10.7) is given by k(t, s) =

& a(s) 0 ≤ s < t

when t < 1; 0 s≥t ⎧ ⎪ ⎪ ⎨a(s) + b(s) 0 ≤ s < t − 1 when t ≥ 1; k(t, s) = a(s) t −1≤s h that can be handled similarly. We assume that 0 ≤ a − σ ≤ h. Then  (ϕ)(t) =



h



h

[dτ ζσ (t, τ )] ϕ(t − τ ) =

t

[dτ ζ(t + σ, τ )] ϕ(t − τ ).

t

Taking t = a − σ the previous identity yields ϕ)(a − σ ) = (



h a−σ

[dτ ζ(a, τ )] ϕ ((a − σ ) − τ ) .

216

10 Periodic Delay Equations

On the other hand, by (10.37), we have  (σ ϕ)(a) =

h

[dτ ζ(a, τ )] ϕ ((a − σ ) − τ ) .

a−σ

Thus (10.48) is proved.  

10.3 A Two-Parameter Family of Solution Operators   Let t ≥ s ≥ 0. Given ϕ ∈ C [−h, 0]; Cn , let x(·) = x(· ; s, ϕ) be the unique solution of (10.38) with s in place of σ , that is, ⎧  ⎪ ⎨ x(t) ˙ =

h

[dτ ζ(t, τ )] x(t − τ ) (t ≥ s ≥ 0), 0 ⎪ ⎩ x(θ + s) = ϕ(θ ) (−h ≤ θ ≤ 0).

(10.49)

  Furthermore, for t ≥ s ≥ 0 we define U (t, s) to be the operator on C [−h, 0]; Cn given by 

 U (t, s)ϕ (θ ) = x(t +θ ; s, ϕ),

−h ≤ θ ≤ 0,

  ϕ ∈ C [−h, 0]; Cn .

(10.50)

In the sequel the family U (t, s), t ≥ s ≥ 0, is called the two-parameter family of solution operators defined by (10.50). The words “of solution operators" will often be omitted. The following proposition is the main result of the present section. Additional results will be given in the next section where the delay equations are assumed to be periodic. Proposition 10.3.1 The two-parameter family U (t, s), t ≥ s ≥ 0, defined by (10.50) has the following properties:   (i) U (s, s) is the identity operator on C [−h, 0]; Cn for all s ≥ 0, (ii) U (t, s)U (s, σ ) = U (t, σ ) for all t ≥ s ≥ σ ≥ 0.   Proof Let ϕ ∈ C [−h, 0]; Cn . Since x(·) = x(· ; s, ϕ) is the unique solution of equation (10.34) with s in place of σ , the second identity in (10.34) tells us that x(s + θ ; s, ϕ) = x(θ + s) = ϕ(θ ),

−h ≤ θ ≤ 0.

Thus U (s, s)ϕ = x(s + · ; s, ϕ) = ϕ, and hence item (i) is proved. Item (i) implies that (ii) is satisfied when s = σ or s = t. Therefore, in  what follows we assume that t > s > σ . Fix ϕ ∈ C [−h, 0]; Cn , and let

10.4 Solution Operators for Periodic Delay Equations

217

  x(·) = x(· ; σ, ϕ). Furthermore, let ψ ∈ C [−h, 0]; Cn be defined by ψ(θ ) := (U (s, σ )ϕ) (θ ) = x(s + θ ; σ, ϕ) = y(s + θ ),

−h ≤ θ ≤ 0,

(10.51)

where y is the restriction of x to [s−h, ∞). Using (10.49), (10.51), and the definition of y, it follows that ⎧  h ⎪ ⎨ y(t) ˙ = [dτ ζ(t, τ )] y(t + τ ), (t ≥ s ≥ 0), (10.52) 0 ⎪ ⎩ y(θ + σ ) = ψ(θ ) (−h ≤ θ ≤ 0). This implies that (U (t, s)ψ) (θ ) = y(t + θ ) = x(t + θ ) = x(t + θ ; σ, ϕ) = (U (t, σ )ϕ) (θ ),

−h ≤ θ ≤ 0.

It follows that U (t, s)U (s, σ )ϕ = U (t, s)ψ = U (t, σ )ϕ as desired.

 

10.4 Solution Operators for Periodic Delay Equations In the present subsection we shall be dealing with delay differential equations of the type (10.1), but now with an additional periodicity condition. We shall assume that η(t + ω, ·) = η(t, ·)

for all t ≥ 0.

(10.53)

Here ω is a positive real number which we require to satisfy ω ≥ h. We call ω the period number associated with the delay equation (10.1). In the sequel we will refer to ω as the period. In what follows we shall assume that the conditions listed in the first paragraph of  Sect. 10.1 are all fulfilled. Thus, given ϕ ∈ C [−h, 0]; Cn and s ≥ 0, Eq. (10.49) has a unique solution which we denote by x(·; s, ϕ). Furthermore, U (t,s), t ≥ s ≥  0, is the two-parameter family of solution operators on C [−h, 0]; Cn defined by (10.50). The main result in this subsection concerns the spectral properties of the operator U (t + ω, t); see Theorem 10.4.5. To derive these spectral results, we need a number of preliminary results which are additions to those given in Proposition 10.3.1. First we show that the periodicity condition (10.53) implies a periodicity condition on U (t, s). Lemma 10.4.1 If the ω-periodicity condition (10.53) is satisfied, then U (t + ω, s + ω) = U (t, s),

for all t ≥ s ≥ 0.

(10.54)

218

10 Periodic Delay Equations

In order to prove the above lemma, we first derive some auxiliary results. Put k(t, s) = η(t, s − t) = ζ(t, t − s),

t ≥ s ≥ 0,

and let r(t, s) be the corresponding resolvent kernel function. Then k(t + ω, s + ω) = η (t + ω, (s + ω) − (t + ω)) = η(t + ω, s − t) = η(t, s − t) = k(t, s),

t ≥ s ≥ 0.

and  r(t + ω, s + ω) = k(t + ω, s + ω) +

r(t + ω, a)k(a, s + ω) da

s+ω



t

= k(t, s) + 

t +ω

r(t + ω, a + ω)k(a + ω, s + ω) da

s t

= k(t, s) +

r(t + ω, a + ω)k(a, s) da,

t ≥ s ≥ 0,

s

which implies that r(t + ω, s + ω) = r(t, s),

t ≥ s ≥ 0.

Using the preceding identities we obtain  X(t + ω, s + ω) = I

Cn

t +ω

+ 

= ICn +  = ICn +

r(a, s + ω) da

s+ω t

r(a + ω, s + ω) da

s t

r(a, s) da = X(t, s),

t ≥ s ≥ 0.

s

We are now ready to prove Lemma 10.4.1.   Proof Let ϕ ∈ C [−h, 0]; Cn , and fix t ≥ s ≥ 0. Put ψ = U (t, s)ϕ

and ψω = U (t + ω, s + ω)ϕ.

We have to show that ψ = ψω . Using the definition of the operators U (t, s) and U (t + ω, s + ω) we obtain   ψ(θ ) = U (t, s)ϕ (θ ) = x(t + θ ; s, ϕ) 

t +θ

= X(t + θ, s)ϕ(0) + s

X(t + θ, a)(s ϕ)(a) da,

−h ≤ θ ≤ 0,

10.4 Solution Operators for Periodic Delay Equations

219

and   ψω (θ ) = U (t + ω, s + ω)ϕ (θ ) = x(t + ω + θ ; s + ω, ϕ) = X(t + ω + θ, s + ω)ϕ(0)+ 

t +ω+θ

+

X(t + ω + θ, a)(s+ω ϕ)(a) da,

−h ≤ θ ≤ 0.

s+ω

Recall that 

h

(s ϕ)(a) =

[dτ ζ(a, τ )]ϕ(a − s − τ ),

0 ≤ a − s ≤ h,

a−s

 (s+ω ϕ)(a) =

h

[dτ ζ(a, τ )]ϕ(a − s − ω − τ ),

0 ≤ a − s − ω ≤ h.

a−s−ω

Replacing a − ω by a  , the last identity yields 



(s+ω ϕ)(a + ω) =

a  −s

 =

h

h a  −s

[dτ ζ(a  + ω, τ )]ϕ(a  − s − τ ) [dτ ζ(a  , τ )]ϕ(a  − s − τ ),

0 ≤ a  − s ≤ h.

It follows that (s ϕ)(a) = (s+ω ϕ)(a + ω),

0 ≤ a − s ≤ h.

(10.55)

Furthermore, we have (s ϕ)(a) = 0 when a − s ≥ h, (s+ω ϕ)(a) = 0

when a − s − ω ≥ h.

Again replacing a − ω by a  , the last identity shows that (s ϕ)(a) = (s+ω ϕ)(a + ω) = 0

when a − s ≥ h

(10.56)

Combining the two results (10.55) and (10.56) we conclude that (s ϕ)(a) = (s+ω ϕ)(a + ω)

for all a ≥ s.

(10.57)

220

10 Periodic Delay Equations

Finally, fix −h ≤ θ ≤ 0. Using the various identities derived above we obtain ψω (θ ) = X(t + ω + θ, s + ω)ϕ(0)+ 

t +ω+θ

+

X(t + ω + θ, a)(s+ω ϕ)(a) da

s+ω

= X(t + θ, s)ϕ(0)+ 

t +θ

+

X(t + ω + θ, a  + ω)(s+ω ϕ)(a  + ω) da 

s



t +θ

= X(t + θ, s)ϕ(0) +

X(t + θ, a  )(s ϕ)(a  ) da  .

s

Thus ψω = ψ, and we are done.

 

Corollary 10.4.2 Let the ω-periodicity condition (10.53) be satisfied. Then the family U (t, s), t ≥ s ≥ 0, has the following properties: (i) U (t + j ω, j ω) = U (t, 0) for t ≥ 0 and j ∈ N, (ii) U (t + j ω, t) = U (t + ω, t)j for t ≥ 0 and j ∈ N, (iii) U (t + ω, 0) = U (t, 0)U (ω, 0) for t ≥ 0. Proof The first item directly follows from Lemma 10.4.1 by induction on j . Indeed, assume that for some j ∈ N the identity in (i) is satisfied. Then using the identity (10.54) in Lemma 10.4.1, we have U (t + (j + 1)ω, (j + 1)ω) = U (t + j ω + ω, j ω + ω) = U (t + j ω, j ω) = U (t, 0). Applying (10.54) with s = 0 we see that for j = 1 the identity in (i) is satisfied. Hence (i) is proved by induction. To prove (ii) we also use induction on j . Suppose that the assertion holds for some j ∈ N and all t ≥ 0, then by using Proposition 10.3.1 and Lemma 10.4.1 twice we obtain U (t + (j + 1)ω, t) = U ((t + ω) + j ω, t) = U ((t + ω) + j ω, t + ω) U (t + ω, t). Now put t  = t + ω. Using the induction hypothesis, we have   U (t + ω) + j ω, t + ω = U (t  + j ω, t  ) = U (t  + ω, t  )j = U ((t + ω) + ω, t + ω)j = U (t + ω, t)j .

10.4 Solution Operators for Periodic Delay Equations

221

Hence U (t + (j + 1)ω, t) = U (t + ω, t)j U (t + ω, t) = U (t + ω, t)j +1 , and (ii) is proved by induction, as desired. Finally, to prove (iii) note that U (t + ω, 0) = U (t + 2ω, ω) using (10.54) = U (t + 2ω, 2ω)U (2ω, ω) using Proposition 10.3.1, item (ii) = U (t, 0)U (ω, 0) using (10.54),  

and (iii) is proved.

The importance of the period map in the study of the asymptotic behaviour of solutions is made precise in the next corollary. Corollary 10.4.3 Let the ω-periodicity condition (10.53) be satisfied, and let s ∈ R be given. Assume that the spectrum of U (s +ω, s) is contained in the open unit disk. Then there exist positive constants M and  such that U (t, s) ≤ Me−(t −s)

for all t ≥ s.

Proof From the spectral radius formula, we know that there exists k0 ∈ N such that U (s + ω, s)k0  < 1.   Since the curves t → U (t, s)ϕ are continuous for ϕ ∈ C [−h, 0]; Cn , the family of maps U (t, s), s ≤ t ≤ s + ω, is pointwise bounded, and hence by the uniform boundedness principle there exists a constant M1 such that U (t, s) ≤ M1 ,

s ≤ t ≤ s + ω.

Put M = M1 max{U (s + ω, s)j  | j = 0, . . . , k0 − 1}, and let t ≥ s be given. Choose the largest integer p ∈ Z+ such that s + pk0 ω ≤ t and choose the largest q ∈ {0, 1, . . . , k0 − 1} such that s0 := s + pk0 ω + qω ≤ t. Then by using Proposition 10.3.1 and Corollary 10.4.2 we have U (t, s) = U (t, s0 )U (s0 , s + pk0 ω)U (s + pk0 ω, s) = U (t − pk0 ω − qp, s)U (s + qω, s)U (s + pk0 ω, s) = U (t − pk0 ω − qp, s)U (s + ω, s)q U (s + ω, s)pk0 .

222

10 Periodic Delay Equations

Therefore U (t, s) ≤ MU (s + ω, s)k0 p , and we can define  := − log U (s + ω, s)k0  > 0  

to complete the proof of the corollary.

Corollary 10.4.4 If the ω-periodicity condition (10.53) is satisfied and j ∈ N is such that s + (j − 1)ω ≤ t < s + j ω, then (i) U (t + ω, t) = U (t + ω, s + j ω)U (s + j ω, t); (ii) U (s + ω, s) = U (s + j ω, t)U (t + ω, s + j ω). Proof The first item directly follows from Proposition 10.3.1 since t + ω > s + j ω. To prove (ii) observe that U (s + ω, s) = U (s + ω + j ω, s + j ω) = U (s + (j + 1)ω, t + ω)U (t + ω, s + j ω) = U (s + j ω, t)U (t + ω, s + j ω),  

and item (ii) is proved.

In the next theorem we shall assume that the operator U (t + ω, t) is a compact operator for each t ≥ 0, an assumption that will be proved in the next section. Theorem 10.4.5 Assume that the operator U (t + ω, t) is compact for each t ≥ 0. Let Mλ,t , λ ∈ σ (U (t + ω, t)), denote the generalised eigenspace at λ of the operator U (t + ω, t). If t ≥ s and 0 = λ ∈ C, then (i) λ ∈ σ (U (t + ω, t)) if and only if λ ∈ σ (U (s + ω, s)); (ii) if λ ∈ σ (U (t + ω, t)), then U (t, s) maps Mλ,s in a one-to-one way onto Mλ,t . Proof Let λ ∈ σ (U (t + ω, t)) and λ = 0, and let Pλ,t denote the Riesz projection (see [32, Section I.2]) of the operator U (t + ω, t) onto the generalised eigenspace Mλ,t . We first prove that U (t +ω, s +j ω)Pλ,s = Pλ,t U (t +ω, s +j ω),

t ≥ s +(j −1)ω ≥ 0.

(10.58)

Using Corollary 10.4.4 to expand U (s + ω, s) and U (t + ω, t), it follows that the following identity holds true     U (t + ω, s + j ω) zI − U (s + ω, s) = zI − U (t + ω, t) U (t + ω, s + j ω),

10.4 Solution Operators for Periodic Delay Equations

223

whenever t ≥ s + (j − 1)ω ≥ 0. Together with the representation of the spectral projection as a contour integral Pλ,t =

1 2πi



(zI − U (t + ω, t))−1 dz,

where denotes a small circle enclosing only the isolated point λ of σ (U (t +ω, t)), this proves (10.58). Similarly, we can prove that U (s + j ω, t)Pλ,t = Pλ,s U (s + j ω, t),

s + j ω > t ≥ 0.

(10.59)

Let λ ∈ C with λ = 0 be given. We next show that ϕ → U (s + j ω, t)ϕ maps Mλ,t in a one-to-one onto Mλ,s . From (10.58) U (t + ω, s + j ω)Mλ,s ⊂ Mλ,t . Therefore, by Corollary 10.4.4, we have Mλ,s = U (s + ω, s)Mλ,s = U (s + j ω, t)U (t + ω, s + j ω)Mλ,s ⊂ U (s + j ω, t)Mλ,t ⊂ Mλ,s , where we have used (10.58) and (10.59). This shows that the map ϕ → U (s + j ω, t)ϕ maps Mλ,t onto Mλ,s . Injectivity of U (s + j ω, t) on Mλ,t follows from U (t + ω, s + j ω)U (s + j ω, t) = U (t + ω, t) and the fact that U (t + ω, t) is injective on Mλ,t . This proves that the map ϕ → U (s + j ω, t)ϕ is an isomorphism from Mλ,t onto Mλ,s . Similarly, we have that the map ϕ → U (t + ω, s + jp)ϕ is an isomorphism from Mλ,t onto Mλ,s as well. Assertion (i) now follows directly from the above arguments and assertion (ii) follows from U (t, s) = U (t, s + (j − 1)ω)U (s + (j − 1)ω, s) = U (t + ω, s + j ω)U (s + ω, s)j −1 , where j ∈ N is such that s + (j − 1)ω ≤ t < s + j ω. Since U (s + ω, s) is an isomorphism of Mλ,s and U (t + ω, s + j ω) defines an isomorphism of Mλ,s onto Mλ,t , assertion (ii) follows.  

Chapter 11

Completeness Theorems for Period Maps

We consider completeness problems for period maps associated with periodic functional differential equations. These maps are bounded linear operators acting on Banach spaces of continuous functions. In the first section we show that these period maps are compact operators which can be written as the sum of a Volterra operator and a finite rank operator. The significance of completeness theorems for period maps is explained in the second and third section. Two completeness theorems for the period map of certain concrete scalar periodic delay equations are presented in the fourth and the fifth section, first for one-periodic equations and next for twoperiodic equations.

11.1 The Period Map and Its Generalisations We begin with the definition of the period map associated with a periodic time dependent delay equation. Definition 11.1.1 If the ω-periodicity condition (10.53) is satisfied, then the   period map associated with (10.1) is defined by the operator T on C [−h, 0]; Cn given by T = U (ω, 0), where U (t, s) is given by (10.50). Using the formula for x(·) = x(· ; 0, ϕ) in (10.40) (see also (10.32)) it follows that T has the representation (T ϕ)(θ ) = X(ω + θ, 0)ϕ(0) +  ω+θ X(ω + θ, a)(ϕ)(a) da, +

−h ≤ θ ≤ 0,

(11.1)

0

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_11

225

226

11 Completeness Theorems for Period Maps

where  is the operator defined by (10.9) and X(· , ·) is the fundamental solution of equation (10.1) defined by (10.25). We shall first show that the operators U (σ +ω, σ ), σ ≥ 0, satisfy the assumption in Theorem 10.4.5, i.e., are compact. In addition we shall  prove that U (σ + h, σ ) is a finite rank perturbation of a Volterra operator on C [−h, 0]; Cn . Using these results we will show that the period map T = U (ω, 0) has a characteristic matrix function in the case ω = h and in the case that ω is a multiple of the delay h. See Corollary 11.1.4. Theorem 11.1.2 Assume Eq. (10.1) is ω-periodic. For t − σ ≥ h, the operator U (t, σ ) is of the form U (t, σ ) = V (t, σ ) + R(t, σ ),   where V (t, σ ) and R(t, σ ), t − σ ≥ h, are operators on C [−h, 0]; Cn given by 





t +θ

V (t, σ )ϕ (θ ) =



  X(t + θ, a) σ ϕ (a) da,

−h ≤ θ ≤ 0,

(11.2)

σ

 R(t, σ )ϕ (θ ) = X(t + θ, σ )ϕ(0),

−h ≤ θ ≤ 0.

(11.3)

Here σ is given by (10.37) and X(t, s) denotes the fundamental matrix solution defined in (10.25). Moreover, V (t, σ ) is a compact operator and R(t, σ ) is an operator of finite rank, and hence U (t, σ ) is a compact operator for t − σ ≥ h. Finally, the operator V (σ + h, σ ) is Volterra. Proof Fix t and σ such that t − σ ≥ h. The representation of U (t, σ ) = V (t, σ ) + R(t, σ ) follows from (10.40), and it is clear that R(t, σ ) defined by (11.3) is of finite rank, in fact of rank at most n. Therefore, it remains to show that the operator V (t, σ ) given by (11.2) is compact and that the operator V (σ +h, σ ) is Volterra , i.e., a compact operator with no non-zero spectrum. We split the proof into four parts; the first three use the Arzelà-Ascoli theorem, as given in Sect. 6.4 below Lemma 6.4.7, to prove that V (t, σ ) is compact. Part 1.

We shall apply the Arzelà-Ascoli theorem for the case when M = [−h, 0] and F = {V (t, σ )ϕ | ϕ ∈ C(M; Cn ), ϕ ≤ 1}. (11.4) In this first part our aim is to show that F is uniformly bounded in C(M; Cn ). To do this note that from the definition of σ in (10.37) and the estimate (10.2) it follows that (σ ϕ)(a) ≤ m(a)ϕ,

0 ≤ a ≤ h,

(11.5)

11.1 The Period Map and Its Generalisations

227

where the function m is as defined in the first paragraph of Sect. 10.1. Therefore, since t − σ ≥ h, we have by using Lemma 10.1.8 

t+θ

(V (t, σ )ϕ)(θ ) ≤





σ



 t



t+θ

exp

m(s) ds





t

exp

m(s) ds

σ

m(a) da ϕ

a

m(a) da ϕ,

−h ≤ θ ≤ 0.

σ

Since ϕ ≤ 1, the preceding inequality implies that V (t, σ )ϕ ≤

 t



σ

Part 2.



t

exp

m(s) ds

m(a) da,

t − σ ≥ h.

σ

From the definition of F in (11.4), it follows that F is uniformly bounded in the Banach space C(M; Cn ). To show that F is equicontinuous, let ϕ ≤ 1 and −1 < θ1 < θ2 < 0, and use (10.27) and (10.25) to estimate (V (t, σ )ϕ)(θ2 ) − (V (t, σ )ϕ)(θ1 ) ≤  4  t +θ1   ≤ 4 X(t + θ2 , t + θ1 ) − I X(t + θ1 , a) σ (a) da  +  ≤

t +θ2 t +θ1

σ t +θ2 t +θ1

4   X(t + θ2 , a) σ (a) da 4 

t +θ1

|r(a, t + θ2 )| da  +

  X(t + θ1 , a) σ (a) da

σ t +θ2 t +θ1

  X(t + θ2 , a) σ (a) da.

This estimate together with Lemma 10.1.8 and Corollary 10.1.5 shows that there exists a positive constant K such that (V (t, σ )ϕ)(θ2 ) − (V (t, σ )ϕ)(θ1 ) ≤ K|θ2 − θ1 |,

Part 3.

t − σ ≥ h. (11.6)

Hence the set F = {V (t, σ )ϕ | ϕ ≤ 1} is equicontinuous for t − σ ≥ h fixed. Combining the results of the first two parts, and using that M = [−h, 0] is a compact metric space, the Arzelà-Ascoli theorem tells us that every sequence in F contains a uniformly convergent subsection. In other words, for each bounded sequence x1 , x2 , x3 , . . . in C(M; Cn ) the sequence V (t, σ )x1 , V (t, σ )x2 , V (t, σ )x3 , . . . contains a subsequence converging to some vector in C(M; Cn ). But then, applying Lemma 6.4.1

228

Part 4.

11 Completeness Theorems for Period Maps

with using A = V (t, σ ) and X = Y = C(M; Cn ), we see that the operator V (t, σ ) is compact. To prove that V (σ +h, σ ) has no non-zero spectrum we have to show that ϕ − zV (σ + h, σ )ϕ = ψ

(11.7)

  has a unique solution ϕ for each z ∈ C and ψ ∈ C [−h, 0]; Cn . The proof is in a weighted norm. Define  based a contraction   mapping principle  F : C [−h, 0]; Cn → C [−h, 0]; Cn by 

 F ϕ (θ ) = ψ(θ ) + zV (σ + h, σ )ϕ,

−h ≤ θ ≤ 0,

(11.8)

  and define for γ ∈ R the weighted norm  · γ on C [−h, 0]; Cn by ϕγ := max

−h≤θ≤0

eγ θ ϕ(θ ).

(11.9)

From (11.8) we have that F ϕ1 − F ϕ2 γ = |z|V (σ + h, σ )(ϕ1 − ϕ2 )γ , and to show that F is a contraction, it suffices to prove that there exists γ and real c with 0 < c < 1 such that |z|V (σ + h, σ )ϕγ ≤ cϕγ

(11.10)

Since (10.1) is periodic, there exists a positive constant M0 such that the function m defined in (10.2) is bounded by M0 . Therefore, using Lemma 10.1.8, it follows that X(t, s) ≤ exp[M0 (t − s)],

t ≥ s ≥ 0.

(11.11)

From (11.2) it follows that   eγ θ z V (σ + h, σ )ϕ (θ) = z



σ +h+θ

   + h + θ, a)eγ (a−σ −h) σ ϕ (a) da, X(σ

σ

(11.12)  a) := eγ (t −a)X(t, a) satisfies where the scaled fundamental solution X(t, 4 4 4 X(t, a)4 ≤ e(M0 +γ )(t −a),

a ≤ t ≤ a + h.

(11.13)

11.1 The Period Map and Its Generalisations

229

Here we have used (11.11). Furthermore using (10.37) we obtain for γ < 0     eγ (a−σ −h)  σ ϕ (a) =   ≤





 eγ (τ −h) dτ ζ (a, τ )eγ (a−σ −τ ) ϕ(a − σ − τ )

h

a−σ

 eγ (τ +σ −a) dτ ζ (a, τ )eγ (a−σ −τ ) ϕ(a − σ − τ )

h

a−σ

≤ M0 ϕγ ,

σ ≤ a ≤ σ + h.

(11.14)

Combining the estimates (11.14) and (11.13) we conclude from (11.12) that  γθ     e z V (σ + h, σ )ϕ (θ ) = z



σ +h+θ

    + h + θ, a)eγ (a−σ −h) σ ϕ (a) da  X(σ

σ

 ≤ |z|

σ +h+θ

e(M0 +γ )(σ +h+θ −a) daM0 ϕγ

σ



  |z| 1 − e(M0 +γ )(h+θ ) M0 ϕγ |M0 + γ |



  |z| 1 − e(M0 +γ )h M0 ϕγ . |M0 + γ |

Hence |z|V (σ + h, σ )ϕγ ≤

  |z| 1 − e(M0 +γ )h M0 ϕγ . |M0 + γ |

This shows that F defined by (11.8) is a contradiction for γ sufficiently negative. This completes the proof that (11.7) has a unique solution for each z ∈ C and hence the operator V (σ + h, σ ) is Volterra.   Recall that the period map T is given by T = U (ω, 0). Therefore an application of Theorem 11.1.2 with σ = 0 yields the following corollary. Corollary 11.1.3  Assume Eq.  (10.1) is ω-periodic with ω = h equal to the delay, and let T on C [−h, 0]; Cn be the corresponding period map. Then T is of the form T = V + R,   where V and R are the operators on C [−h, 0]; Cn given by   V ϕ (θ ) =



h+θ 0

  X(h + θ, a) ϕ (a) da,

−h ≤ θ ≤ 0,

(11.15)

230

11 Completeness Theorems for Period Maps

  Rϕ (θ ) = X(h + θ, 0)ϕ(0),

−h ≤ θ ≤ 0.

(11.16)

Moreover, V is a Volterra operator and R is an operator of finite rank. Corollary 11.1.4 Assume Eq. (10.1) is ω-periodic with ω = nh for some integer n. Then the period map T = U (nh, 0) is of the form T = V + R, where V is a Volterra operator and R is an operator of finite rank. Proof Observe that if ω is an integer multiple of the delay h, then we can write the period map as follows T = U (nh, 0) = U (nh, (n − 1)h) U ((n − 1)h, (n − 2)h) · · · U (h, 0). An application of Theorem 11.1.2 yields U ((m + 1)h, mh) = V (mh + h, mh) + R(mh + h, mh),

m = 0, . . . , n − 1,

where V (mh + h, mh), m = 0, . . . , n − 1 is a Volterra operator. Therefore to prove that U (nh, 0) is a finite rank perturbation of a Volterra operator, it suffices to prove that the operator V defined by V = V (nh, (n − 1)h) V ((n − 1)h, (n − 2)h) · · · V (h, 0) is a Volterra operator as well. We already know from Theorem 11.1.2 that V is compact. An inspection of Part 4 of the proof of Theorem 11.1.2 shows that it suffices to prove that, for z ∈ C fixed, there exists γ and real c with 0 < c < 1 such that |z|V ϕγ ≤ cϕγ ,

(11.17)

and we will use induction to prove (11.17). For n = 1, (11.17) is equal to (11.10) and hence holds true. Assume that |z|V (mh, (m − 1)h) V ((m − 1)h, (m − 2)h) · · · V (h, 0)ϕγ ≤ cϕγ ,

(11.18)

we have to prove that (11.18) implies |z|V ((m + 1)h, m)h) V ((mh, (m − 1)h) · · · V (h, 0)ϕγ ≤ cϕγ ,

(11.19)

11.1 The Period Map and Its Generalisations

231

From (11.10) it follows that |z|V ((m + 1)h, m)h) V ((mh, (m − 1)h) · · · V (h, 0)ϕγ ≤ ≤ c V ((mh, (m − 1)h) · · · V (h, 0)ϕγ ≤ c2 ϕγ , where in the last inequality we have used the induction hypothesis (11.18). This proves (11.17) and it follows from Part 4 of the proof of Theorem 11.1.2 that V is a Volterra operator. Therefore U (nh, 0) is a finite rank perturbation of a Volterra operator as well.   In the general case in which the period ω is not an integer multiple of the delay, we can also write the period map as a finite rank perturbation of a Volterra operator. The construction however becomes more involved and will be presented in a separate publication. In Sect. 11.5 we will present a class of examples in case ω = 2h, and derive explicit formulas for the Volterra operator and the finite rank perturbation. For example, see the formulae (11.62) and (11.63). Together with Theorem 6.1.1, it follows that in case the period ω is a multiple of the delay h, then the period map T associated with (10.1) has a characteristic matrix function that can be computed explicitly. We summarise this result in a corollary in case when ω = h. Corollary 11.1.5 Let ω = h and V and R be the operators given by (11.15) and (11.16), respectively. Then T = V + R is the period map associated with (10.1), and T has a characteristic matrix function, namely the function (z) given by (z) = I − zC(I − zV )−1 B,

(11.20)

    where R = BC with B : Cn → C [−1, ]; Cn and C : C [−1, ]; Cn → Cn are defined by: (Bu)(θ ) = X(ω + θ, 0)u (−ω ≤ θ ≤ 0) and Cϕ = ϕ(0).

(11.21)

Proof From the definitions of B and C in (11.21) we see that (BCϕ)(θ ) = (Bϕ(0)(θ ) = X(ω + θ, 0)ϕ(0) = (Rϕ)(θ )

(−ω ≤ θ ≤ 0).

Thus R = BC, and Theorem 6.1.1 yields that (z) is a characteristic matrix function for T in the sense of Definition 5.2.1.   As one will see, in Sects. 11.4 and 11.5 we explicitly compute (I −zV )−1 . Using Corollary 11.1.3 this yields a formula for the characteristic matrix function for a given periodic functional differential equation. The computations, however, become rather quickly quite involved. An alternative approach is to use the contraction

232

11 Completeness Theorems for Period Maps

argument in the proof of Theorem 11.1.2 to compute the characteristic matrix function numerically using continuation techniques. This approach was used in [77] using the theory developed in [48]. The new theory presented in this chapter, provides a direct and simpler approach that allows an extension the results and numerical computations in [77].

11.2 Spectral Properties of the Period Map As in the previous subsection we deal with Eq. (10.1) throughout assuming that the additional periodicity condition (10.53) is satisfied. Thus we assume that ω ≥ h and η(t + ω, ·) = η(t, ·)

for all t ≥ 0.

(11.22)

Furthermore, we know from Theorem 11.1.2 that the period map T is a well defined  that T = compact operator on C [−h, 0]; Cn . Recall   U (ω, 0), and that for any t ≥ s ≥ 0 the operator U (t, s) on C [−h, 0]; Cn is given by (10.50). From Theorem 10.4.5 it follows that U (t, s) maps in a one-to-way the space Mμ,s onto Mμ,t . Here Mμ,t denotes the generalised eigenspace of U (t + ω, t) at μ. In this subsection we explain the significance of completeness theorems for the period map T . We begin with some notation and terminology (mainly taken from [42, Section 8.1]). If μ belongs to the non-zero (point) spectrum of T , then μ is called a characteristic multiplier of (10.1), and λ for which μ = exp(λω) (unique up to multiples of 2πi) is called a characteristic exponent of (10.1). Let μ = 0 be an eigenvalue of T and let mμ denote the algebraic multiplicity  of μ. Assume that ϕ1 , . . . , ϕmμ in C [−h, 0]; Cn is a basis of eigenvectors and generalised eigenvectors of T at μ, and let Mμ = span {ϕ1 , . . . , ϕmμ } be the corresponding generalised eigenspace. Furthermore, let Φ0 be the mμ -row vector defined by Φ0 = [ϕ1 , . . . , ϕmμ ], viewed as a linear operator from Cmμ into   C [−h, 0]; Cn . Since Mμ is invariant under T , there exists a mμ × mμ matrix L with scalar entries such that T Φ0 = Φ0 L, and the only eigenvalue of L is μ = 0. But then, there is an mμ × mμ matrix B with scalar entries such that L = exp(ωB), and thus L exp(−ωB) is the mμ ×mμ identity matrix. Moreover, the unique eigenvalue λ of B satisfies the identity μ = exp(ωλ). From Theorem 10.4.5 it follows that if   Φ(t) = U (t, 0)ϕ1 · · · U (t, 0)ϕmμ ,

for t ≥ 0,

11.2 Spectral Properties of the Period Map

233

then (T Φ0 )(t) = Φ(t)L,

t ≥ 0.

Next, let P(t), t ≥ 0, be the block mμ -row vector given by P(t) = U (t, 0)Φ0 exp(−tB) = Φ(t) exp(−tB),

t ≥ 0.

  Thus P(t) has size 1 × mμ and its entries are in C [−h, 0]; Cn . Lemma 11.2.1 The function P(t), t ≥ 0, is periodic with period ω. Furthermore, we have U (t, 0)Φ0 = P(t) exp(tB),

t ≥ 0.

(11.23)

Proof From item (iii) in Corollary 10.4.2 we know that U (t + ω, 0) = U (t, 0)T for t ≥ 0. Using the latter identity we see that P(t + ω) = U (t + ω, 0)Φ0 e−(t +ω)B = U (t, 0)T Φ0 e−ωB e−t B = U (t, 0)Φ0 Le−ωB e−t B = U (t, 0)Φ0 e−t B = P(t),

t ≥ 0.

(11.24)

Thus P(t) is periodic with period ω, and the final equality in (11.24) yields (11.23).   The solution of (10.1) with initial value ϕ ∈ Mμ is of Floquet type. This means a solution of the form x(t; ϕ) = p(t) exp(tB)c,

(11.25)

where c ∈ Cmμ is such that ϕ = Φ0 c. The matrix B has size mμ × mμ and has only one eigenvalue at λ with μ = eλω and p(t) = p(t + ω) is a periodic function. Indeed, from Lemma 11.2.1 it follows that     x(t; ϕ) = U (t, 0)Φ0 c (0) = P(t) (0) exp(tB)c.   Put p(t) = P(t) (0), then p(t) = p(t + ω) and this shows (11.25). Furthermore, since λ is the only eigenvalue of B, it follows from the Jordan decomposition of B that we can write et B c = q(t)eλt , where q is a polynomial of degree at most mμ , and hence x(t; ϕ) = p(t)q(t) exp(λt).

234

11 Completeness Theorems for Period Maps

The next theorem is a corollary to Corollary 10.4.3 and relates the Floquet  type  solutions to an arbitrary solution x(t; ϕ) of (10.1) with ϕ ∈ C [−h, 0]; Cn . Theorem 11.2.2 Let μj , j = 1, 2, . . ., denote the non-zero eigenvalues of the period map T ordered by decreasing modulus taking multiplicities into account,  let Pμj denote the Riesz projection onto Mμj , and let ϕ ∈ C [−h, 0]; Cn . Furthermore, let x(t; ϕ) denote the unique solution of (10.1). There are positive constants  and M such that for t ≥ 0 and γ ∈ R     x(t; ϕ) − x(t; ϕγ ) = x(t; ϕ − ϕγ ) ≤ Me(γ −)t ϕ − ϕγ ,

(11.26)

where ϕγ =

N

Pμj ϕ,

j =1

with N ∈ N chosen such that |μN | ≥ exp(γ ω) > |μN+1 |. Here by definition ϕγ = 0 if eγ ω > |μ1 |. Furthermore, if the period map T has a complete span of eigenvectors and generalised eigenvectors, then each solution of (10.1) can be represented as a convergent infinite series of solutions of Floquet type solutions of the form (11.25). Proof Put  = {μ1 , μ2 , . . . , μN } and define M = ⊕μ∈ Mμ

and Q = ∩ Qμ . μ∈

  Then, one has C [−h, 0]; Cn = M ⊕ Q . To prove the exponential estimate, we define the restriction operators of T and U (t, s) to Q by 3 := T |Q T

3(t, s) := U (t, s)|Q . and U

By construction, the spectral radius of e−γ ω T3 is less than one. Therefore, an application of Corollary 10.4.3 with s = 0 yields 3(t, 0) ≤ Me(γ −)t . U Since   3(t, 0)(ϕ − ϕγ ) (0), x(t; ϕ) − x(t; ϕγ ) = U this completes the first part of the proof.

11.3 Completeness of the Period Map in Case the Period Is Equal to the Delay

235

To prove the final statement notice that the solution x(t; ϕ) of (10.1) can be approximated by the solution x(t; ϕγ ). The solution x(t; ϕγ ) can itself be written as a linear combination of Floquet type solutions (11.25) since x(t; ϕγ ) =

N

x(t; Pμj ϕ) =

j =1

N

pj (t) exp(tBj )cj .

j =1

m

Here cj ∈ R μj is such that Pμj ϕ = Φ0 cj , Bj has size mμj × mμj , and only one eigenvalue at λj with μj = eλj ω and pj (t) = pj (t + ω) is a periodic function for 1 ≤ j ≤ N.  

11.3 Completeness of the Period Map in Case the Period Is Equal to the Delay A central problem in dynamical system theory is to understand the behaviour of the system as time tends to infinity. For a period delay equation, the period map T given by formula (11.1) in Definition 11.1.1 determines the long-term behaviour of the solutions, see Corollary 10.4.3 and Theorem 10.4.5. The purpose of this section is to illustrate the operator approach towards periodic delay equations in case the period is equal to the delay, i.e., ω = h. From the explicit representation of the period map T ϕ = x(h + ·; 0, ϕ), it follows that T n ϕ = x(nh + ·; 0, ϕ), n = 1, 2, . . .. Using the notation xn (θ ) = x(n + θ ), −h ≤ θ ≤ 0, it follows that the periodic delay equation (10.1) satisfying  ω = h can be rewritten as a discrete  (10.53) with time dynamical system on C [−h, 0]; Cn xk+1 = T xk ,

k = 0, 1, 2, . . .

  and x0 = ϕ ∈ C [−h, 0]; Cn ,

(11.27)

where T is the period map. Obviously, the solution to (11.27) is given by xn = T n x0 for n ∈ N. To understand thebehaviour of xn as n tends to infinity, we decompose the state  space C [−h, 0]; Cn of initial data into a finite dimensional T -invariant subspace and a T -invariant complementary subspace on which we can estimate the norm of T n . Such a decomposition can be realised using appropriately chosen spectral sets of T . For β > 0, define

= (β) = {λ ∈ σ (T ) | |λ| > e−β }.   We can decompose C [−h, 0]; Cn with respect to into two closed T -invariant subspaces   C [−h, 0]; Cn = M (β) ⊕ Q (β) ,

(11.28)

236

11 Completeness Theorems for Period Maps

where as before, see Sect. 1.1, M = ⊕λ∈ Im Pλ and Q = ∩λ∈

 Ker Pλ , and the spectral projection P : X → X onto M is given by P = λ∈ Pλ . The operator T n restricted to Q is given by T n (I − P ). Let r(T ) = max{|λ| | λ ∈ σ (T )} denote the spectral radius of T . It follows from the formula r(T ) = lim T n 1/n n→∞

that the norm of T n (I − P ) satisfies the exponential estimate T n (I − P ) ≤ Ke(−β+δ)n I − P ,

n ≥ 0.

Here δ is any positive real and K a constant that only depends on δ. The proof of this estimate directly   follows from Theorem 11.2.2 by taking t = nh + θ , and by observing that T n ϕ (θ ) = x(n + θ ; ϕ) and T n ϕγ =

N

μnj Pμj ϕ,

n = 1, 2, . . . .

j =1

Thus, if we combine this estimate with the canonical Jordan representation of T on the finite dimensional space M , we conclude that T n ϕ can be represented as T nϕ =



Jλn ϕ + O(e(−β+δ)n ),

(11.29)

λ∈ (β)

where Jλ denotes the Jordan matrix representation of the operator T restricted to the finite dimensional invariant subspace Mλ (T ). To study what happens when we let β → ∞ in the representation (11.29), we need our completeness result Theorem 6.2.1. In the context of periodic delay equations with ω = h, it follows from Corollary 11.1.3 and Corollary 11.1.5 that Theorem 6.2.1 can be formulated as follows. Theorem 11.3.1 Assume that the entire function det  with  given by (11.20) is of non-zero finite order ρ, and has infinitely many zeros. Furthermore, suppose that there exist a complex number z0 , a non-negative real number s0 , and a ρ-admissible set of half-lines in the complex plane, {ray (θj ; z0 , s0 ) | j = 1, . . . , κ}, such that det (z0 ) = 0 and (a) there exists a δ0 > 0 with | det (z)| ≥ δ0 > 0

for z ∈ ray (θj ; z0 , s0 ),

j = 1, 2, . . . , κ,

11.4 Scalar Periodic Delay Equations and Completeness (One Periodic)

237

(b) there exists an integer m and a constant M with (I − zV )−1  ≤ M(1 + |z|m ) for z ∈ ray (θj ; z0 , s0 ), j = 1, 2, . . . , κ, where V is given by (11.15). If, in addition, the entire function z → det (z0 + z) is of completely regular growth, then MT = {x ∈ X | det (z0 + z) dominates adj (z0 + z)C(I − (z0 + z)V )−1 x}, and X = MT ⊕ ST , where, as usual, ST = {x ∈ X | z → (I − zT )−1 x is entire}. It follows from Theorem 11.3.1 that under the assumptions of the theorem, the decomposition(11.28) in the  limit as β → ∞ extends to a decomposition of a dense subspace of C [−h, 0], Cn . In case of noncompleteness there exists a nonzero T invariant subspace ST such that the norm of restriction of T n to ST decays faster than any exponential. This last observation follows from the fact that by construction the spectral radius of T restricted to ST is zero and hence T n 1/n → 0 as n → ∞. In the next two sections we shall verify the assumptions of this completeness result for specific classes of periodic delay equations.

11.4 Scalar Periodic Delay Equations and Completeness (One Periodic) Consider the scalar periodic delay equation &

x(t) ˙ = a(t)x(t) + b(t)x(t − 1) (t ≥ 0), x(t) = ϕ(t)

(−1 ≤ t ≤ 0).

(11.30)

Here a and b are complex-valued periodic functions of period   one and Lebesgue integrable on finite intervals. Furthermore, ϕ ∈ C [−1, 0], C is given. Since a and b are assumed to be one periodic, we may without loss of generality assume that a and b are one periodic functions defined on the full real line. As a first step towards the solution of the above equation we need the following lemma which is a variant of Lemma 9.3.1 with r = 0 and c = 1, with b(s)ϕ(s − 1) in place of ϕ(s), and with u = ϕ(0).

238

11 Completeness Theorems for Period Maps

Lemma 11.4.1 Let Y (t, s) be the fundamental solution of the equation x(t) ˙ = a(t)x(t),

t ≥ −1,

  normalised to one at t = s, and let ϕ ∈ C [−1, 0], C . The function 

t

x(t) := Y (t, 0)ϕ(0) +

Y (t, s)b(s)ϕ(s − 1) ds,

0 ≤ t ≤ 1,

(11.31)

0

is the unique absolutely continuous solution on [0, 1] of the initial value nonhomogeneous equation &

x(t) ˙ = a(t)x(t) + b(t)ϕ(t − 1)

( for all t ∈ [0, 1] a.e.),

x(0) = ϕ(0).

(11.32)

  Corollary 11.4.2 Given ϕ ∈ C [−1, 0]; C , define ψ(θ ) = x(θ + 1), where x is the unique solution on [0, 1] of the initial value nonhomogeneous equation (11.32). Then ψ is given by  ψ(θ ) = Y (θ, −1)ϕ(0) +

θ −1

Y (θ, s)b(s)ϕ(s) ds,

−1 ≤ θ ≤ 0.

(11.33)

Furthermore, using that a and b are one periodic, (11.32) can be rewritten as &

˙ ) = a(θ )ψ(θ ) + b(θ )ϕ(θ ) ( for all θ ∈ [−1, 0] a.e.), ψ(θ ψ(−1) = ϕ(0).

(11.34)

Proof We use the identity (9.119) and the fact that the function b has period one. Let −1 ≤ θ ≤ 0. Since ψ(θ ) = x(θ + 1), we have 

θ+1

ψ(θ ) = Y (θ + 1, 0)ϕ(0) +  = Y (θ + 1, 0)ϕ(0) +  = Y (θ + 1, 0)ϕ(0) +  = Y (θ, −1)ϕ(0) +

Y (θ + 1, s)b(s)ϕ(s − 1) ds,

0 θ −1 θ −1

Y (θ + 1, s˜ + 1)b(˜s + 1)ϕ(˜s ) d s˜, Y (θ + 1, s + 1)b(s)ϕ(s) ds

θ −1

Y (θ, s)b(s)ϕ(s) ds,

11.4 Scalar Periodic Delay Equations and Completeness (One Periodic)

239

which proves (11.33). It remains to derive (11.34). To do this we use (11.32) with t = θ + 1 and −1 ≤ θ ≤ 0. Since ψ(t − 1) = x(t) for 0 ≤ t ≤ 1, we obtain &

˙ ) = a(θ + 1)ψ(θ ) + b(θ + 1)ϕ(θ ) ( for all θ ∈ [−1, 0] a.e.), ψ(θ ψ(−1) = ϕ(0). (11.35)

But a(θ + 1) = a(θ ) and b(θ + 1) = b(θ ) because a and b are both one periodic. Thus (11.35) yields (11.34), and we are done.     From the above corollary it follows that the operator T on C [−1, 0], C defined by  (T ϕ)(θ ) = Y (θ, −1)ϕ(0) +

θ −1

Y (θ, s)b(s)ϕ(s) ds,

−1 ≤ θ ≤ 0

(11.36)

  is a well-defined bounded linear operator on C [−1, 0], C . This operator T is the period map defined by the periodic delay equation (11.30). The next theorem shows that all solutions of the delay equation (11.30) can be describe in terms of period map. Theorem 11.4.3 Assume a and b are periodic with  period one, and let T be the period map defined by Eq. (11.36). Given ϕ ∈ C [−1, 0]; C , put xn = T n ϕ, n = 0, 1, 2, . . . , and let x be the function on [−1, ∞) defined by x(n + θ ) = xn (θ )

−1≤θ ≤0

(n = 0, 1, 2, . . .).

(11.37)

Then x is an absolutely continuous solution of the delay equation (11.30) and all absolutely continuous solutions of (11.30) are obtained in this way. Proof From the definition of x it follows that x(θ ) = x0 (θ ) = ϕ(θ ), −1 ≤ θ ≤ 0, and thus the initial condition in (11.30) is satisfied. Next, let n ≥ 1. Then xn = T n ϕ = T xn−1 , and we can apply Corollary 11.4.2 with ϕ = xn−1 and ψ = xn . Using (11.34) we obtain &

x˙n (t) = a(t)xn (t) + b(t)xn−1 (t)

( for all t ∈ [−1, 0] a.e.),

xn (−1) = xn−1 (0). It follows that x is continuous on [0, ∞) and absolutely continuous on each finite interval [0, c], and (11.30) is satisfied. The reverse implication is proved in a similar way.  

240

11 Completeness Theorems for Period Maps

  Given ϕ ∈ C [−1, 0]; C we denote by Lϕ the function x on [−1, ∞) defined by (11.37). If ϕ is an eigenvector of the period map T with corresponding eigenvalue λ, then (Lϕ)(t) = λn ϕ(t − n),

n − 1 ≤ t ≤ n (n = 0, 1, 2, . . .).

Moreover, if |λ| < 1, then the function Lϕ is stable in the sense that lim (Lϕ)(t) = 0.

t →∞

The importance of completeness of the eigenvectors of the period map T in the study of the asymptotic behaviour of solutions was made precise in Theorem 11.2.2. In particular, see the statement in the last paragraph of Theorem 11.2.2. Completeness of the Period Map The next result is a completeness theorem for the period map. Theorem 11.4.4 Let a and b be Lebesgue integrable real-valued periodic   functions with period 1, and let T be the associate period map on C [−1, 0]; C defined by (11.36). Assume that T is one-to-one and that  m(b) :=

0 −1

b(s) ds = 0.

(11.38)

Then the operator T has complete span of eigenvectors and generalised eigenvectors if and only if the periodic function b does not change sign. We begin with some preliminaries. Throughout F (t) is the fundamental solution associated with the differential equation x(t) ˙ = a(t)x(t), t ≥ −1, normalised to I at −1, and Y (t, s) = F (t)F (s)−1 is the fundamental solution, normalised to I at  t = s. Since T on C [−1, 0]; C is given by (11.36), it follows that T = V + R, where  (V ϕ)(θ ) :=

θ −1

 Y (θ, s)b(s)ϕ(s) ds = F (θ )

θ

−1

F (s)−1 b(s)ϕ(s) ds,

(Rϕ)(θ ) := Y (θ, −1)ϕ(0) = F (θ )F (−1)−1 ϕ(0) = F (θ )ϕ(0).

(11.39) (11.40)

Furthermore, the rank one operator R admits a minimal factorisation R = BC, where   B : C → C [−1, 0]; C ,

(Bu)(θ ) = F (θ )u,

  C : C [−1, 0]; C → C,

Cϕ = ϕ(0),

for all u ∈ C, −1 ≤ θ ≤ 0, (11.41)   for all ϕ ∈ C [−1, 0]; C . (11.42)

11.4 Scalar Periodic Delay Equations and Completeness (One Periodic)

241

  Remark 11.4.5 Let MF be the multiplication operator on C [−1, 0]; C defined by (MF ϕ)(θ ) = F (θ )ϕ(θ ),

−1 ≤ θ ≤ 0.

Since F (θ ) = 0 for −1 ≤ θ ≤ 0, the operator MF is invertible and MF−1 = MF (·)−1 . Using MF we define V0 := MF−1 V MF ,

R0 := MF−1 RMF ,

B0 := MF−1 B,

C0 := CMF . (11.43)

Then R0 = B0 C0 and T = V + R = MF (V0 + R0 )MF−1 . Hence it suffices to prove Theorem 11.4.4 for T0 := MF−1 T MF in place of T . Lemma 11.4.6 The operator V0 defined by the first identity in (11.43) is given by  (V0 ϕ)(θ ) =

θ −1

b(s)ϕ(s) ds,

  for all ϕ ∈ C [−1, 0], C .

−1 ≤ θ ≤ 0

(11.44)   Furthermore, for each u ∈ C and ϕ ∈ C [−1, 0]; C we have (B0 u)(θ ) = u

(−1 ≤ θ ≤ 0)

and (C0 ϕ) = F (0)ϕ(0).

(11.45)

Proof Using the first identity in (11.43) and the formula for V in (11.39) we have (V0 ϕ)(θ ) = (MF−1 V MF ϕ)(θ ) = F (θ )−1 (V MF ϕ)(θ )  θ −1 = F (θ ) F (θ ) F (s)−1 b(s)(MF ϕ)(s) ds  =

−1

θ −1

F (s)

−1

b(s)F (s)ϕ(s) ds =



θ

−1

b(s)ϕ(s) ds,

−1 ≤ θ ≤ 0.

This proves identity (11.44). The two identities in (11.45) follow directly from (11.41) and (11.42), using the last two identities in (11.43).   Note that the above lemma has also a matrix-valued analogue. However in that case the function b in (11.44) has to be replaced by b˜ where ˜ = F (t)−1 b(t)F (t), b(t)

−1 ≤ t ≤ 0.

242

11 Completeness Theorems for Period Maps

Lemma 11.4.7 The operator V0 defined by (11.44) is a Volterra operator, and its resolvent is given by 

(I − zV0 )−1 ϕ (θ ) = ϕ(θ ) + z

θ −1

G(θ, s; z)−1 b(s)ϕ(s) ds, −1 ≤ θ ≤ 0. (11.46)

Here G(t, s; z) is the fundamental solution of the homogeneous equation x(t) ˙ = zb(t)x(t),

t ∈ R,

normalised to one at t = s. In fact, a direct computation yields that G(t, s; z) is given by 



t

G(t, s; z) = exp

zb(σ ) dσ ,

t ≥ s.

(11.47)

s

Moreover,

(I − zV0 )−1 B0 (θ ) = G(θ ; z),

−1 ≤ θ ≤ 0,

(11.48)

(z) := 1 − zC0 (I − zV0 )−1 B0 = 1 − zc◦ ezm(b) ,

(11.49)

where  c◦ = exp



0 −1

a(s) ds

 and m(b) =

0 −1

b(s) ds.

(11.50)

Proof The first part of the theorem follows directly from Lemma 9.3.4. Formula (11.48) is a corollary of the identity (9.126), and from the second part of (11.45), using the identity (11.48), it follows that  (z) = 1 − z exp  = 1 − z exp which proves (11.50).

0 −1 0 −1

 a(s) ds G(0, z)  a(s) ds exp



0 −1

 zb(s) ds = 1 − zc◦ ezm(b) ,  

Proof of Theorem 11.4.4 We shall apply Theorem 6.2.1 with T = T0 and V = V0 , where T0 = V0 + R0 and R0 = B0 C0 . Here V0 is given by (11.44) and the operators B0 and C0 are given by (11.45). The proof is divided into four parts. Part 1.

In this part we show that the characteristic function  has the desired properties. Note that in this case the characteristic function  is the scalar

11.4 Scalar Periodic Delay Equations and Completeness (One Periodic)

243

function given by (11.50). Since m(b) = 0 by assumption, it follows that  belongs to the Paley-Wiener class PW. To see this we can apply Lemma 7.2.3. Indeed, let η be the function on [0, 1] defined by η(t) =

& 0

if 0 ≤ t < 1, if t = 1.

1

Then  (z) = 1 − zc◦ e

zm(b)

= 1 − zc◦

1

ezm(b)s dη(s).

0

But then Lemma 7.2.3 tells us that  ∈ PW. In particular,  is of order one. Moreover, the function  is of completely regular growth by Theorem 14.6.3. Note that η has an atom at t = 1, The latter implies (again use Lemma 7.2.3) that the indicator function h is given by h (θ ) =

& m(b) cos θ, −π/2 ≤ θ ≤ π/2,

(11.51)

π/2 ≤ θ ≤ 3π/2.

0,

In fact, more generally, 0 (z) = (z + z0 ) is of completely regular growth for any choice of z0 , and the corresponding indicator functions coincide. Furthermore, by Proposition 7.2.4, the fact that η is not constant on (0, 1] also implies that  has infinitely many zeros. Finally, again using that η is not constant on (0, 1], Lemma 7.2.5 allows us to show (see the paragraph before Lemma 7.2.6) that there exist a δ0 > 0 and a real number z0 such that |(z)| ≥ δ0 ,

Part 2.

for all z ∈ z0 + iR.

(11.52)

The latter implies that item (a) in Theorem 6.2.1 is fulfilled with z0 as above, with s0 = 0, and with θ1 = π/2 and θ2 = 3π/2. Note (see (7.27)) that the real number z0 in (11.52) can always be chosen such that z0 > 0. −1 We proceed with an analysis of the an   resolvent (I − zV0 ) . Let ϕ be −1 arbitrary function in C [−1, 0]; C . Using the formula for (I − zV0 ) given by (11.46) in Lemma 11.4.7 we know that for −1 ≤ θ ≤ 0 we have

(I − zV0 )

−1



ϕ (θ ) = ϕ(θ ) + z

θ −1

G(θ ; z)G(s; z)−1b(s)ϕ(s) ds.

Recall that G(t, z) is defined by (11.47) which yields G(θ ; z)G(s; z)

−1

 = exp

θ

 zb(τ ) dτ ,

s

244

11 Completeness Theorems for Period Maps

and hence 

(I − zV0 )−1 ϕ (θ ) = ϕ(θ ) + z



θ

−1

θ

exp

 zb(τ ) dτ b(s)ϕ(s) ds.

s

(11.53) Since b is real-valued and of constant sign, we assume in what follows that b(t) ≥ 0 on −1 ≤ t ≤ 0, which we may do without loss of generality. Recall that | exp μ| = exp(Re μ) ≤ exp(|μ|) (μ ∈ C). Using this fact it follows that for each θ ∈ [−1, 0] we have  θ   θ    exp zb(τ ) dτ b(s)ϕ(s) ds  ≤  −1

s

 ≤

θ

−1

 =

θ −1

    exp

θ s

   zb(τ ) dτ b(s) ds ϕ

  exp Re z

θ

  b(τ ) dτ b(s) ds ϕ

s

For Re z ≥ 0 this yields  θ   θ    exp zb(τ ) dτ b(s)ϕ(s) ds  ≤  −1

s

 ≤

0 −1

  exp Re z

  = exp Re z

0 −1

  b(τ ) dτ b(s) ds ϕ

0

−1

b(τ ) dτ



0

−1

b(τ ) dτ.

Using the above inequality in the identity (11.53) together with the nonzero quantity m(b) defined by (11.38) we obtain (I − zV0 )−1  ≤ 1 + |z|m(b) exp (m(b)Re z) ,

Re z ≥ 0.

(11.54)

Now let z0 be any positive real number. Then (11.54) is satisfied, which implies that item (b) in Theorem 6.2.1 is satisfied with the given z0 , and with s0 = 0, and θ1 = π/2 and θ2 = 3π/2. Moreover, if we choose the positive real number z0 in such a way that (11.52) is fulfilled too, then with this choice of z0 and with s0 = 0, and θ1 = π/2 and θ2 = 3π/2 as before both items (a) and (b) in Theorem 6.2.1 are fulfilled. But then we can apply Theorem 6.2.1 to show that T has complete span of eigenvectors and generalised eigenvectors whenever the entire function (z) dominates all entire functions f (z) = C0 (I − zV0 )−1 ϕ, where ϕ

11.4 Scalar Periodic Delay Equations and Completeness (One Periodic)

Part 3.

245

  is an arbitrary function in C [−1, 0]; C . The latter will be proved in the next part. −1 To prove that (z) dominates f (z)   = C0 (I − zV0 ) ϕ, where ϕ is an arbitrary function in C [−1, 0]; C , we use formula (11.46) to obtain   C0 (I − zV0 )−1 ϕ = F (0) ϕ(0) + z

0

−1

  exp z

0

  b(s) dτ b(s)ϕ(s) ds .

s

(11.55) Using the representation (11.55) and Proposition 14.3.4 it follows that the function f (z) = C0 (I − zV0)−1 ϕ belongs to the Paley-Wiener class PW. Therefore we can use Proposition 14.4.3 to conclude that the indicator function of f (z) is given by ⎧  0 ⎪ ⎨ b(s) ds cos θ, −π/2 ≤ θ ≤ π/2, hf (θ ) = (11.56) σ0 ⎪ ⎩ 0, π/2 ≤ θ ≤ 3π/2, where σ0 = max{s ∈ [−1, 0] | ϕ|[0,s] = 0}.

Part 4.

(11.57)

Therefore, using (11.51), we conclude that if b has constant sign, then (z) always dominates the function C(I − zV )−1 ϕ. This completes the proof that T has a complete span of eigenvectors and generalised eigenvectors if b has constant sign. In this part we prove the “only if part” of the theorem. In order to this, assume that b does not have a constant sign. The latter implies that there exist 0 ≤ t1 < t2 ≤ 1 such that  t2 b(σ ) dσ = 0 and b = 0 on [t1 , t2 ]. (11.58) t1

  According to item (ii) in Theorem 6.2.1 we have MT0 = C [−1, 0]; C whenever the pair {C0 , V0 } is not observable. Hence to complete the proof it suffices to show that (11.58) implies that there exists a non-zero ϕ ∈  C [−1, 0]; C such that the entire function z → C0 (I −zV0 )−1 ϕ vanishes identically on C. So fix 0 ≤ t1 < t2 ≤ 1, and assume (11.58) holds. Let χ[t1 ,t2 ] (t) denote the characteristic function of the interval [t1 , t2 ], and let ϕ0 be the function on [−1, 0] defined by ⎧ t  t ⎨ b(s) ds for t1 ≤ t ≤ t2 , b(s)χ[t1 ,t2 ] (s) ds = ϕ0 (t) = t1 ⎩ −1 0 for t ∈ [−1, 0]\[t1, t2 ]. (11.59)

246

11 Completeness Theorems for Period Maps

We shall prove that C0 (I − zV0 )−1 ϕ0 vanishes identically on C. Obviously, ϕ0 (0) = 0. Using (11.55), we see that it suffices to show that the function   0   0 x0 (z) := exp z b(τ ) dτ b(s)ϕ0 (s) ds vanishes identically on C. −1

s

From the definition of ϕ0 in (11.59) and the identity (11.58) it follows that x0 (z) =

 t2

0  1  s  0 exp z b(τ ) dτ b(s) b(t) dτ ds

t1

s

t1

s

t1

0  1  s   0  t2 t2 exp z b(τ ) dτ + z b(τ ) dτ b(s) b(t) dτ ds = t2

t1

1 0    t   s  0 t2 2 b(τ ) dτ exp z b(τ ) dτ b(s) b(t) dτ ds. = exp z t1

t2

s

t1

Given (11.58) and the definition of ϕ0 , we have 

t2



s

b(τ ) dτ = −

b(τ ) dτ = −ϕ0 (s),

t1 ≤ s ≤ t2 .

t1

s

But then, using the change of variables theorem, we conclude that   x0 (z) = exp z   = exp z



0

t2

b(τ ) dτ t2



0

  exp − zϕ0 (s) ϕ0 (s) dϕ0 (s)

t1 ϕ0 (t2 )

b(τ ) dτ

  exp − zt t dt = 0.

ϕ0 (t1 )

t2

Thus the pair {C0 , V0 } is not observable and we are done.

 

Example 11.4.8 Assume b(t) = sin(2πt) for all t ≥ 0 and consider the differential delay equation x(t) ˙ = a(t)x(t) + b(t)x(t − 1), where the function a is a 1-periodic integrable function. Then  m(b) =

0 −1

sin(2πt) dt = −

1 2π



0

−1

d cos(2πt) = −

0 1  cos(2πt) = 0. −1 2π

Since m(b) = 0, Theorem 11.4.4 does not apply in this case. Nevertheless, our results can be used to show that the period map T has no completeness with this

11.5 Scalar Periodic Delay Equations and Completeness (Two Periodic)

247

choice of a and b. Indeed, it follows from the formula (11.50) for the characteristic function that  with c0 = exp

(z) = 1 − zc0



1

 a(s) ds .

0

It follows from Theorem 6.1.1 and Theorem 5.2.2 that the non-zero spectrum of the period map consists of one simple eigenvalue c0−1 . This proves that the closed space of generalised eigenspace of the period is one-dimensional. Remark that Theorem 11.2.2 yields the following result. If the period map does not have a complete span of eigenvectors and generalised eigenvectors, then there exist solutions of (10.1) that in norm decays faster than any exponential. This is a phenomenon that cannot happen in linear periodic ordinary differential equations. To construct such a solution x = x(·; ϕ) take an initial function ϕ ∈ ST (recall the definition of ST from Theorem 6.2.1), then z → (I − zT )−1 ϕ is an entire function. Thus it follows from the Neumann series of the resolvent and from the Cauchy root test that lim

 n

n→∞

T n  = 0.

Using Theorem 11.3.3 we derive that lim

n→∞

 n

xn  = 0

and this implies that lim eαn xn  = 0

n→∞

for all α ∈ R.

Since xn  = sup{x(n + θ ) | −1 ≤ θ ≤ 0} we conclude that the solution x(t) decays faster than any exponential as t tends to infinity.

11.5 Scalar Periodic Delay Equations and Completeness (Two Periodic) In this subsection we consider the delay equation (11.30) in case a and b are periodic functions of period 2. For purpose of presentation we restrict to a representative class of  equationsfor which we can still do all computations explicitly. As before ϕ ∈ C [−1, 0], C and the functions a and b are assumed to be Lebesgue integrable. The situation is similar to the one periodic case, but the computation of the period map and its resolvent become more involved.

248

11 Completeness Theorems for Period Maps

Assume that a = 0 and b is of the form & b0 (t) 0 ≤ t mod 2 < 1 b(t) = αb0 (t) 1 ≤ t mod 2 < 2,

(11.60)

where α ∈ R \ {0} and b0 is a periodic function of   period 1.  The period two map T : C [−1, 0]; C → C [−1, 0]; C for the periodic delay equation x(t) ˙ = b(t)x(t − 1) with b given by (11.60) becomes   T ϕ (θ ) = ϕ(0) +



−1

 = ϕ(0) +



0

0

−1

b0 (s)ϕ(s) ds + α

θ

−1

b0 (s) ϕ(0) + 

b0 (s)ϕ(s) ds + αϕ(0)  +α



θ −1

b0 (s)





s −1

 b0 (σ )ϕ(σ ) dσ ds

θ −1

b0 (s) ds

s −1

b0 (σ )ϕ(σ ) dσ ds.

(11.61)

From the representation (11.61) for T we conclude that T = V + R where V and R  are operators acting on C [−1, 0]; C given by   V ϕ (θ ) = α 





θ

−1

b0 (s)

 Rϕ (θ ) = ϕ(0) +



0 −1

s −1

(11.62)

b0 (σ )ϕ(σ ) dσ ds, 

b0 (s)ϕ(s) ds + αϕ(0)

θ −1

b0 (s) ds.

(11.63)

Furthermore, the rank two operator R admits a factorisation R = BC, where   B : C2 → C [−1, 0]; C ,   C : C [−1, 0]; C → C2 ,

   θ c1 (θ ) = c1 + c2 b0 (s) ds, c2 −1 0 1 0 ϕ(0) + −1 b0 (s)ϕ(s) ds Cϕ = . αϕ(0) B

(11.64)

(11.65)

Lemma 11.5.1 The operator V defined by (11.62) is a Volterra operator, and its resolvent is given by   (I − zV )−1 ϕ (θ ) = ϕ(θ ) +



θ

−1

∂g (α, z; θ, s)ϕ(s) ds, ∂s

(11.66)

11.5 Scalar Periodic Delay Equations and Completeness (Two Periodic)

249

where g(α, z; θ, s) =

√ 1 exp αz 2 +



θ

 b0 (σ ) dσ +

s

 √ 1 exp − αz 2



θ

b0 (σ ) dσ ),

−1 ≤ s ≤ θ ≤ 0.

s

(11.67) Proof Put ψ = (I − zV )−1 ϕ, then we need to solve the equation ψ − zV ψ = ϕ

(11.68)

with ϕ given and ψ as the unknown. By differentiating (11.68) we arrive at the following initial value problem ψ  (θ ) − αzb0 (θ )



θ

−1

b0 (σ )ψ(σ ) dσ = ϕ  (θ ), − 1 ≤ θ ≤ 0, ψ(−1) = ϕ(−1).

(11.69)

The general solution of the homogeneous part of the differential equation in (11.69) is given by ψ(θ ) = g(α, z; θ, −1)ψ(−1),

−1 ≤ θ ≤ 0,

(11.70)

where g(α, z; θ, −1) is given by (11.67) with s = −1. A particular solution ψp of the differential equation in (11.69) with ψp (−1) = 0 is given by  ψp (θ ) =

θ

−1

g(α, z; θ, s)ϕ  (s) ds,

(11.71)

where g(α, z; θ, s) is given by (11.67) and we have used that g(α, z; θ, θ ) = 1 and ∂s g(α, z; s, s) = 0. This shows that the solution of the initial value problem (11.69) is given by  ψ(θ ) = g(α, z; θ, −1)ϕ(−1) +  = ϕ(θ ) +

θ −1

θ −1

g(α, z; θ, s)ϕ  (s) ds

(11.72)

∂g (α, z; θ, s)ϕ(s) ds, ∂s

where we have used integration by parts in the last identity. This completes the proof of the lemma.   The characteristic matrix function (z) = I −zC(I −zV )−1 B is computed next.

250

11 Completeness Theorems for Period Maps

Lemma 11.5.2 The characteristic matrix function (z) : C2 → C2 associated with the operator T defined by (11.61) is given by $ (z) =

 % 1 − zγ1 (z) − α1 γ2 (z) − α1 γ1 (z) + γ2 (z) − 1 1 − γ2 (z)

−zαγ1 (z)

,

(11.73)

where

√ √ 1 exp( αzm(b0 )) + exp(− αzm(b0 )) , −1 ≤ θ ≤ 0. 2 √

√ √ αz exp( αzm(b0 )) − exp(− αzm(b0 )) , −1 ≤ θ ≤ 0, γ2 (z) := 2

γ1 (z) :=

where m(b0 ) =

0

−1 b0 (σ ) dσ .

Moreover,

det (z) = 1 −

1 + α γ2 (z) − z. α

(11.74)

Proof Observe using (11.64) and (11.72) that    θ  c1  −1 (I − zV ) B (θ ) = g(α, z; θ, −1)c1 + c2 b0 (s)g(α, z; θ, s) ds. c2 −1 (11.75) Using (11.67) we can rewrite the last term in (11.75) as follows 

θ

−1

1 x+ 2  √ b0 (s) exp αz

b0 (s)g(α, z; θ, s) ds =  x=  y=

θ −1 θ −1

1 y 2 θ

s



b0 (s) exp −

√ αz

where

(11.76)

 b0 (σ ) dσ ds,



θ

 b0 (σ ) dσ ds.

s

θ √ Now put k(s) = s b0 (σ ) dσ , c = αz, and ϕ(s) = ck(s). Note that both k(θ ) and ϕ(θ ) are zero. Furthermore, we have 

     1 θ  k  (s) exp ck(s) ds = − ϕ (s) exp ϕ(s) ds c −1 −1    1 1 1 θ = − exp(ϕ(s)) = − + exp ck(−1) . −1 c c c

x=−

θ

11.5 Scalar Periodic Delay Equations and Completeness (Two Periodic)

Similarly 

     1 θ k  (s) exp − ck(s) ds = −ϕ  (s) exp − ϕ(s) ds c −1 −1    1 1 1 θ = exp(−ϕ(s)) = − exp − ck(−1) . −1 c c c θ

y=−

Summarising and using (11.76) we have 

θ

−1

b0 (s)g(α, z; θ, s) ds =

    1 1 exp ck(−1) − exp − ck(−1) 2c 2c  θ  θ √   √  1 = √ exp αz b0 (σ ) dσ − exp − αz b0 (σ ) dσ . 2 αz −1 −1 =

In particular, 

0 −1

b0 (s)g(α, z; 0, s) ds =

1 γ2 (z). αz

Since g(α, z; 0, −1) = γ1 (z) this shows    1 c  −1 (I − zV ) B 1 (0) = γ1 (z)c1 + γ2 (z)c2 . c2 αz Similarly using (11.67) with θ = t and s = −1 we have 

0

−1

 x˜ =  y˜ =

Now put (t) =

1 x˜ + 2  t √ b0 (t) exp αz

b0 (t)g(α, z; t, −1) dt =

t

0 −1 0 −1

−1 b0 (σ ) dσ ,



−1

 √ b0 (t) exp − αz and let c =

1 y˜ 2

where

 b0 (σ ) dσ dt,



t −1

 b0 (σ ) dσ dt.

√ αz. Then  (t) = b0 (t), and hence

 0   1  (t) exp cl(t) dt = exp c(t)  −1 c −1  0   1 1 = exp c b0 (σ ) dσ − . c c −1

x˜ =

0

251

252

11 Completeness Theorems for Period Maps

An analogous calculation with y˜ in place of x˜ yields 

 0   1  (t) exp − cl(t) dt = − exp − c(t)  −1 c −1  0   1 1 = − exp − c b0 (σ ) dσ + . c c −1

y˜ =

0

It follows that 

0 −1

b0 (t)g(α, z; t, −1) dt = 

√ 1 exp( αz = √ 2 αz =

0 −1

√ b0 (σ ) dσ ) − exp(− αz



0 −1

b0 (σ ) dσ )

1 γ2 (z). αz

Furthermore 



0 −1

b0 (t)

s −1

b0 (σ )g(α, z; t, σ ) dσ dt =

 1  γ1 (z) − 1 . αz

Thus using (11.65) it follows that C(I − zV )−1 B can be written as 0 C(I − zV )

−1

B=

γ1 (z) +

1 1 αz γ2 (z) αz

 1 γ1 (z) + γ2 (z) − 1

αγ1 (z)

1 z γ2 (z)

.

This proves that (z) is given by (11.73). Moreover      det (z) = 1 − zγ1 (z) − α −1 γ2 (z) 1 − γ2 (z) − zγ1 (z) γ1 (z) + γ2 (z) − 1 = 1 − α −1 γ2 (z) − γ2 (z) + α −1 γ22 − zγ12 . Next observe that α −1 γ22

− zγ12



−1

= −z

√αz  √ √  2 e αzm(b0 ) − e− αzm(b0 ) 2 1 √ √  2 e αzm(b0 ) + e− αzm(b0 ) −z 2

11.5 Scalar Periodic Delay Equations and Completeness (Two Periodic)

253

Thus det (z) = 1 −

1 + α α

γ2 (z) − z,

(11.77)  

and this completes the proof of the lemma.

To illustrate these results with an example, we consider the case α = 1 and b0 ≡ 1 and the case α = −1 and b0 ≡ 1. In case α = 1 and b0 ≡ 1, the operator  T defined by (11.61) can be written as T = T12 , where T1 : C [−1, 0]; C → C [−1, 0]; C is given by 

 T1 ϕ (θ ) = ϕ(0) +



θ −1

ϕ(s) ds.

(11.78)

It follows from Lemma 11.4.7 and in particular (11.49) that the operator T1 admits a characteristic function 1 (z) = 1 − zez . It follows from Lemma 11.5.2 that √  √  √ det (z) = 1 − z e z − e− z − z √   √ √  √ = 1 − ze z 1 + ze− z √ √ = 1 ( z)1 (− z).

Since T = T12 , this result is in agreement with the fact that λ ∈ σ (T ) \ {0} if and only if

√ √ λ or − λ belongs to σ (T1 ) \ {0}.

    In general, if we define the operator Tα : C [−1, 0]; C → C [−1, 0]; C by 

 Tα ϕ (θ ) = ϕ(0) + α



θ −1

b0 (s)ϕ(s) ds,

α = 0,

(11.79)

then the operator T defined by (11.61) can be written as T = Tα T1 . In case α = −1, the operator T becomes T = T−1 T1 . In this case it follows from Lemma 11.4.7 that det (z) = 1 − z and σ (T ) \ {0} = {1}. Therefore if b0 ≡ 1 both T−1 and T1 do have a complete span of eigenvectors and generalised eigenvectors (see Theorem 11.4.4 with a = 0 and b = ±1), while the product T = T−1 T1 has a finite non-zero spectrum, and hence does not have a complete span of eigenvectors and generalised eigenvectors.

254

11 Completeness Theorems for Period Maps

We continue with an application of Theorem 6.2.1 to derive the following result about completeness of the operator T defined by (11.61). Theorem 11.5.3 If α ∈ R \ {−1, 0} and b0 = 0, then the period two map T defined by (11.61) has a complete span of eigenvectors and generalised eigenvectors. Proof Recall from Lemma 11.5.2 that det (z) is given by det (z) = 1 −

α + 1  √ √  e αzm(b0 ) − e− αzm(b0 ) − z. 2α

(11.80)

From the representation of det (z) it follows that det (z) is an entire function of order 1/2 with type b0 and with infinitely many zeros. Moreover, det (z) is of completely regular growth. First assume that α > 0. Since the order ρ of det (z) equals 1/2, a ρ-admissible set of halflines consists of one halfline only. Take θ1 = π. On the negative real axis, we can rewrite det (z) as follows det (−r) = 1 −

α + 1 √ i sin αr + r. α

This shows that | det (z)| ≥ 1

for z ∈ ray (π).

Moreover, it follows from (11.66) that (I − zV )−1  ≤ M(1 + |z|)

for z ∈ ray (π).

This shows that the assumptions of Theorem 6.2.1 are satisfied for α > 0. Therefore in order to prove that T has a complete span of eigenvectors and generalised eigenvectors, it suffices to prove that   det (z) dominates adj (z)C(I − zV )−1 ϕ for all ϕ ∈ C [−1, 0]; C .

(11.81)

The indicator function of det (z) can be directly computed from the representation of det (z) and is given by hdet  (θ ) =

  √ α cos θ/2 ,

0 ≤ θ ≤ 2π.

(11.82)

To investigate for which ϕ statement (11.81) holds, we first compute C(I −zV )−1 ϕ. Using (11.66) this yields ⎛

ϕ(0) +

C(I − zV )−1 ϕ = ⎝ αϕ(0)

0



−1 b(s)ϕ(s) ds ⎠ + 0 + α −1 b(s)ϕ(s) ds

(11.83)

11.5 Scalar Periodic Delay Equations and Completeness (Two Periodic)

⎛ +⎝

255



0 σ 0 −1 ∂s g(α, z; 0, s)ϕ(s) ds + −1 −1 ∂s g(α, z; σ, s)ϕ(s) dsdσ ⎠ . 0 ∂ g(α, z; 0, s)ϕ(s) ds s −1 (11.84)

From the explicit representation of g(α, z; θ, s) given in (11.67) it follows that   det (z) dominates C(I − zV )−1 ϕ for all ϕ ∈ C [−1, 0]; C . Furthermore from (11.73) we have adj (z) =

$ 1 − γ2 (z) zαγ1 (z)

1 α (γ1 (z) + γ2 (z) − 1) 1 − zγ1 (z) − α1 γ2

% (11.85)

and from the explicit representation of γ1 (z) and γ2 (z) (see Lemma 11.5.2) it follows that det (z) dominates adj (z). Therefore to investigate for which ϕ the statement (11.81) holds, we only have to consider the terms that are not already directly dominated by det (z). This yields adj (z)C(I − zV )−1 ϕ = ⎞ ⎛ 0 0 σ ∂s g(α, z; 0, s)ϕ(s) ds − γ2 (z) −1 ∂ g(α, z; σ, s)ϕ(s) dsdσ γ1 (z) −1 s −1 ⎠+ =⎝ 0 σ 0 zαγ1 (z) −1 ∂ g(α, z; σ, s)ϕ(s) dsdσ − γ (z) 2 −1 s −1 ∂s g(α, z; 0, s)ϕ(s) ds  √  (11.86) + O e αz .

We next show that the terms in (11.86) are dominated by det (z). Rewriting the double integral by changing the order of integration yields 

0



σ

−1 −1

 ∂s g(α, z; σ, s)ϕ(s) dsdσ =  =

0  0 −1

s

0 −1

 ∂s g(α, z; σ, s) dσ ϕ(s) ds

 g(α, z; 0, s) − 1 ϕ(s) ds.

256

11 Completeness Theorems for Period Maps

So the first term of (11.86) becomes  γ1 (z)



0 −1

∂s g(α, z; 0, s)ϕ(s) ds − γ2 (z)

0 −1

 g(α, z; 0, s) − 1 ϕ(s) ds

√  0 √ √ √  αz  √  αz =− e + e −αz e αzs − e− αzs ϕ(s) ds 4 −1 √  0 √ √ √  αz  √   √  αz − − e −αz e αzs + e− αzs ϕ(s) ds + O e αz e 4 −1 √  0 √ √  αz(1+s)   √  αz =− e − e− αz(1+s) ϕ(s) ds + O e αz 2 −1   and this last term is dominated by det (z) for all ϕ ∈ C [−1, 0]; C . The second term of (11.86) becomes 

1

 g(α, z; 0, s) − 1 ϕ(s) ds − γ2 (z)

zαγ1 (z) 0

=−

αz 4



+

0 √ αz −1

αz 4

e



+e

√  √αzs −αz

0 √ αz −1

e

e

−e



0 −1

∂s g(α, z; 0, s)ϕ(s) ds

√  αzs

+ e−

√  √αzs −αz

e

ϕ(s) ds

√  αzs

− e−

 √  ϕ(s) ds + O e αz

√  0 √ √  αz(1+s)   √  αz = e + e− αz(1+s) ϕ(s) ds + O e αz 2 −1   and this last term is again dominated by det (z) for all ϕ ∈ C [−1,  0]; C .  This proves that if α > 0 then (11.81) holds for all ϕ ∈ C [−1, 0]; C . This   implies that MT = C [−1, 0]; C and T has a complete span of eigenvectors and generalised eigenvectors. For α < 0 and α = −1, we can repeat the same argument but now with θ1 = 2π and the negative real axis replaced by the positive real axis. The functions involved are now polynomially bounded on the positive real axis and have maximal (exponential growth) on the negative real axis. This completes the proof that if α ∈ R \ {−1, 0} then T has a complete span of eigenvectors and generalised eigenvectors.   Recall (see formula (11.80)) that m(b0 ) = 0 implies that z = 1 is the only zero of det (z). In this case completeness of the corresponding period map obviously fails. As an example consider the 2-periodic delay equation x(t) ˙ = cos(πt)x(t − 1).

(11.87)

11.5 Scalar Periodic Delay Equations and Completeness (Two Periodic)

257

Define & b0 (t) =

cos(πt)

0 ≤ t mod 2 < 1,

− cos(πt)

1 ≤ t mod 2 < 2.

(11.88)

Then b0 is 1-periodic and  m(b0 ) = −

0

−1

cos(πs) ds = 0.

Furthermore cos(πt) =

& b0 (t) −b0 (t)

0 ≤ t mod 2 < 1, 1 ≤ t mod 2 < 2.

Thus it follows that b(t) = cos(πt) satisfies (11.60) with b0 (t) defined by (11.88) and α = −1. Consequently, the spectrum of the period map associated with (11.87) consists of a single point only.

Chapter 12

Completeness for Perturbations of Unbounded Operators

The operators considered in this chapter will be unbounded Banach space operators. In the first section the theory of characteristic matrix functions, as introduced in Chap. 5, is extended to closed operators. Here we follow the procedure as developed in [48]. In the second section the characteristic matrix function is used to develop an analogue of Theorem 5.2.6 for unbounded Banach space operators.

12.1 The Associated Characteristic Matrix Function Let X be a complex Banach  space, and let the operator A(X → X) be a closed linear operator with domain D A ⊂ X. We start this section by recalling the construction of closed operators having characteristic matrix functions as given in [48]. For the construction of such operators we need to introduce three auxiliary operators D, L and M.   The operator D(X → X) is a closed linear operator with domain D D ⊂ X and D is assumed to satisfy the conditions (H1) and (H2): (H1) N := Ker D is finite dimensional and N = {0}; (H2) the operator D has a restriction D0 (X → X) such that     D D = N ⊕ D D0 and with resolvent set ρ(D0 ) = C. Apart from unbounded the operator D we need two bounded linear operators L : XD → Cn ,

M : X → Cn .

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_12

(12.1)

259

260

12 Completeness for Perturbations of Unbounded Operators

Here n = dim N and XD is the domain of D endowed with the graph norm. One may think about D as a maximal operator and about L and M as generalised boundary value operators. In [48] we just assume that the resolvent set ρ(D0 ) is non-empty. Here we deal with the case when ρ(D0 ) = C which fits well with the examples considered in the next chapter. Note operator  that ρ(D0 ) = C means that for each λ ∈ C the −1 maps X one to-one onto X and the inverse map (λ − D λ − D0 maps D D ) 0 0  onto D D0 . With D, L and M as above we associate the operator A(X → X) defined as follows       D A = x ∈ D D | MDx = Lx and Ax = Dx.

(12.2)

Lemma 12.1.1 The operator A defined by (12.2) is a well-defined closed linear operator. ∞ Proof It is clear  that A is well-defined. To prove that A is closed, let (xn )n=0 be a sequence in D A , and assume that x = limn→∞ xn and y = limn→∞ Axn . We have   to prove (see the definition   of aclosed  operator in [36, Section 12.2])  that x ∈ D A and Ax = y. Since D A ⊂ D D and Au = Du for each u ∈ D A , the sequence (xn )∞ n=0 is a sequence in D D  and y = limn→∞ Dxn . But D is a closed operator by assumption. Thus x ∈ D D and Dx = y. It follows that Lx = limn→∞ Lxn , ∞ because   L is bounded on XD . Furthermore, using that (xn )n=0 is a sequence in D A , we have

MDx = M lim (Dxn ) = lim MDxn = lim Lxn = Lx. n→∞

n→∞

n→∞

  Thus x ∈ D A and Ax = Dx = y, and hence is A is a closed operator.

 

Inwhat follows the complex Banach space XA is defined by the domain of A, D A , endowed with the graph norm,  · A given by xA := x + Ax. Note that A : XA → X becomes a bounded operator. To describe the spectral properties of the operator A we shall use the n × n entire matrix function (z) = −(zM − L)D0 (z − D0 )−1 j : Cn → Cn

(z ∈ C),

(12.3)

where j : Cn → N is a given invertible linear map. In general the choice of j will depend on the kind of application one is dealing with. The function  is the characteristic matrix function associated with the operator A as introduced in [48]; see formula (12.4) in the theorem below. Theorem 12.1.2 Let A be the operator associated with D, L and M defined in (12.2), and assume conditions (H1) and (H2) are satisfied. Furthermore, let (z) be the matrix function defined by (12.3). Put Q(z) = −D0 (z − D0 )−1 j , where

12.1 The Associated Characteristic Matrix Function

261

j : Cn → N is an invertible linear map and D0 is the operator appearing in (H2), and assume that there exists a z0 ∈ C such that det (z0 ) is non-zero. Then     I n (z) 0 0 = F (z) C E(z), 0 IX 0 zI − A

z ∈ C,

(12.4)

where E(z) : Cn ⊕ X → Cn ⊕ XA and F (z) : Cn ⊕ X → Cn ⊕ X are the entire operator-valued functions given by     c (MD − L)[Q(z)c +(z − D0 )−1 x]  E(z) = I − Q(z0 )(z0 )−1 (MD − L) [Q(z)c + (z − D0 )−1 x] x (12.5)     c c − (MD − L)(z − D0 )−1 x , (12.6) F (z) = (z − z0 )Q(z0 )(z0 )−1 c + x x and the inverses E(z)−1 : Cn ⊕ X → Cn ⊕ XA and F (z)−1 : Cn ⊕ X → Cn ⊕ X are given by     c j Q(z0 )(z0 )−1 c + j (x − D0−1 Dx) E(z) = zQ(z0 )(z0 )−1 c + (z − D)x x     c c + (MD − L)(z − D0 )−1 x , F (z)−1 = × (z)x −(z − z0 )Q(z0 )(z0 )−1 c + F22 x −1

(12.7) (12.8)

× where F22 (z) = I − (z − z0 )Q(z0 )(z0 )−1 (MD − L)(z − D0 )−1 .

For the proof of the above theorem we refer to Theorem 3.2 in [48, Section I.3], and the formulas for E(z) and F (z) appear in the proof of that theorem. Those for E(z)−1 and F (z)−1 can be derived in an analogous way from the formulas of E(z) and F (z). It is not excluded that the operator M is the zero operator on X. In that case the characteristic matrix function reduces to (z) = LD0 (zI − D0 )−1 j. This case appears naturally in applications. For example, in Sects. 13.2 and 13.3 of the next chapter one will encounter such cases; in particular, see the first identity in (13.20). For the proof of the next corollary we refer to Theorem 2.1 in [48, Section I.2]. Alternatively, one can prove this corollary by applying Theorem 5.2.2 to the operator T defined by T = (z0 I − A)−1 with z0 ∈ ρ(A) a fixed complex number. Corollary 12.1.3 Let A be the operator associated with D, L and M defined in (12.2), and assume conditions (H1) and (H2) are satisfied. In addition, let (z)

262

12 Completeness for Perturbations of Unbounded Operators

be the matrix function defined by (12.3). Then ρ(A) = ∅ if and only if det (z) does not vanish identically, and the spectrum of A is given by σ (A) = {λ ∈ C | det (λ) = 0}. Furthermore the multiplicity theorem holds, that is, for λ ∈ σ (A), dim Ker (λI − A)kλ = mλ , where kλ is order of λ as a pole of (z)−1 and mλ is the order of λ as a zero of det (z).

12.2 A Completeness Theorem for a Class of Unbounded Operators In this section we use the characteristic matrix function (z) introduced in the previous section (see formula (12.3)) to prove a completeness theorem for the operator A associated with the operators D, L and M defined by (12.2). The next result can be viewed as an analogue of Theorem 5.2.6, see the remarks at the end of this section. Theorem 12.2.1 Let A be the operator associated with D, L and M defined in (12.2) satisfying conditions (H1) and (H2), and let the resolvent set ρ(A) be nonempty. In addition, assume that (zI − D0 )−1 is an entire function of exponential type such that the operator norm of (zI − D0 )−1 is polynomially bounded along the imaginary axis. Put Su,A = {x ∈ X | z → (zI − A)−1 x

is entire}.

(12.9)

(i) Suppose that (MD − L)(zI − D0 )−1 x = 0 for all z ∈ C implies x = 0. Then the closure of the generalised eigenspace of A is given by MA = {x ∈ X | det (z) dominates adj (z)(MD − L)(zI − D0 )−1 x}, and X = MA ⊕ Su,A . (ii) If there exist non-zero x ∈ X such that (MD − L)(zI − D0 )−1 x = 0 for all z ∈ C, then Su,A = {0} and the operator A does not have a complete span of eigenvectors and generalised eigenvectors. Proof The proof is divided into four parts.

12.2 A Completeness Theorem for a Class of Unbounded Operators

Part 1.

263

Recall that the n × n matrix function  is given by (12.3). From the assumptions on (zI − D0 )−1 , it follows that  is an entire function of exponential type which is polynomially bounded along the imaginary axis. Therefore, it follows from Theorem 14.3.4 that  belongs to the class PW. From Theorem 12.1.2 we know that  is a characteristic matrix function for A. Indeed, we have     ICn (z) 0 0 = F (z) E(z), z ∈ C, (12.10) 0 IX 0 zI − A where E and F are the entire operator-valued functions given by (12.5) and (12.6). In particular, F12 (z) = (MD −L)(zI −D0 )−1 . Hence, among other things, we have to show that MA = {x ∈ X | det (z) dominates adj (z)F12 (z)x}.

Part 2.

Fix z0 ∈ ρ(A), and define T : X → X to be the bounded operator defined by T := (z0 I − A)−1 . The strategy of proof will be to apply Theorem 5.2.6 to the operator T . In this part we show that T has a characteristic matrix function. Note that   I − zT = I − z(z0 I − A)−1 = (z0 I − A) − z (z0 I − A)−1   = (z0 − z)I − A (z0 I − A)−1 . Furthermore,      0 0 0 ICn ICn ICn . = 0 I − zT 0 (z0 − z)I − A 0 (z0 I − A)−1

(12.11)

It follows that z0 − λ ∈ σ (A) if and only if λ = 0 and λ−1 ∈ σ (T ). Moreover, z0 − λ is an eigenvalue of finite type of A if and only if λ = 0 and λ−1 is an eigenvalue of finite type of T , and in that case the corresponding spectral subspaces coincide: Mλ−1 (T ) = Mz0 −λ (A). Since A has a characteristic matrix function on C, the same holds true for T . In fact we have     (z) 0  ICn 0   = F (z) E(z), z ∈ C, (12.12) 0 IX 0 I − zT

264

12 Completeness for Perturbations of Unbounded Operators

where  (z) = (z0 − z),   0 ICn  E(z) = E(z0 − z), 0 z0 I − A

(12.13)

(z) = F (z0 − z). F Part 3.

Here , E, and F are defined as in (12.10). In this part we show that T satisfies the hypotheses in Theorem 5.2.6. By  is a construction, the operator T is one-to-one, has dense range, and   (z) = (z0 − z) and  characteristic matrix function for T on C. Since   belongs to the class PW too. belongs to the class PW it follows that   is of completely regular But then it follows from Theorem 14.6.3 that  growth in the sense of Definition 5.2.5. From the formulas for E(z)−1 and F (z)−1 given in (12.7)–(12.8), we see that the polynomially boundedness condition on (zI − D0 )−1 implies that (zI − A)−1 is polynomially bounded on the imaginary axis as well. Therefore (I − zT )−1 is polynomially bounded on the imaginary axis as well. From (12.7) it follows that E(z)−1 is an operator polynomial, and  −1 is also an operator polynomial. hence, using (12.13), we see that E(z) Finally, from the assumption that adj (z)F12 (z)x = 0 for all z ∈ C implies x = 0, it follows that this property also holds with z − z0 in place of z. Hence, if 12 (z)x = 0 for all z ∈ C, then x = 0. (z)F adj 

Part 4.

 is nondegenerate relative to the equivalence (12.12). This implies that  Therefore all assumptions of Theorem 5.2.6 are satisfied for the operator T . Note that 12 (z)x} (z) dominates adj  (z)F MT = {x ∈ X | det  = {x ∈ X | det (z0 − z) dominates adj (z0 − z)F12 (z0 − z)x} = {x ∈ X | det (z) dominates adj (z)F12 (z)x} = MA . Analogously, we have 12 (z)x is entire}  F ST = {x ∈ X | (z)  0 − z)F 12 (z0 − z)x is entire} = {x ∈ X | (z = {x ∈ X | (z)F12 (z)x is entire} = Su,A , where we have used (12.10) and (12.9). Thus MT = MA and ST = Su,A .

12.2 A Completeness Theorem for a Class of Unbounded Operators

265

Since T satisfies all assumptions of Theorem 5.2.6, we can apply the latter theorem to the bounded operator T to obtain the desired results for the unbounded operator A.   Note that similarly as before, we can generalise Theorem 12.2.1 and replace the condition that (zI − D0 )−1 is of exponential type and polynomially bounded on the imaginary axis by (zI − D0 )−1 is an entire function of finite non-zero order ρ, that is, of completely regular growth and polynomially bounded on rays as in Theorem 5.2.6. We continue this section to present some further motivation for our interest in noncompleteness and to discuss the relation with the existence of nontrivial small solutions of the evolutionary system du = Au, dt

t ≥ 0, u(0) = u0 ∈ X.

(12.14)

If the operator A is the infinitesimal generator of a strongly continuous semigroup T (t), then the solution to (12.14) is given by u(t) = T (t)u0 , see [67] and u is called a small solution if for every γ ∈ R we have lim eγ t u(t) = 0.

t →∞

A related notion for strongly continuous semigroups was introduced in [4]. A semigroup T (t) is called superstable if for every σ > 0, there exists Mσ such that T (t) ≤ Mσ e−σ t ,

t ≥ 0.

In particular, if there exist α > 0 such that Ker T (α) is non-zero, then the restriction of the semigroup T (t) to Ker T (α) is a superstable semigroup, and we call Ker T (α) a superstable invariant subspace of T (t). The following corollary of Theorem 12.2.1 uses an observation that was also used by Sinclair [76]. Corollary 12.2.2 Let A satisfy the assumptions of Theorem 12.2.1. Define α to be the exponential type of the resolvent operator of A. If the operator A is the infinitesimal generator of a strongly continuous semigroup T (t) on X, then Su,A = Ker T (α). Furthermore, the operator A does not have a complete span of eigenvectors and generalised eigenvectors if and only if the semigroup generated by A has a non-zero superstable subspace. Proof The proof consists of three parts.

266

Part 1.

12 Completeness for Perturbations of Unbounded Operators

First observe that an application of Theorem 12.1.2, in particular part 1 of the proof and (12.10), yields that the resolvent operator of the operator A is given by   (zI − A)−1 = Q(z)(z)−1 MD − L (zI − D0 )−1 + (zI − D0 )−1 , (12.15) where Q(z) is defined as in Theorem 12.1.2. From the assumptions on (zI − D0 )−1 , it follows that for x ∈ Su,A , the function z → (zI − A)−1 x is an entire function of exponential type α which is polynomially bounded along the imaginary axis. Let T (t) denote the strongly continuous semigroup generated by A. It follows from basic semigroup theory, see [67], that the resolvent operator of the generator A equals the Laplace transform of the semigroup T (t), i.e.,  ∞ −1 (zI − A) x = e−zt T (t)x dt, for Re z ≥ ω0 , (12.16) 0

Part 2.

where ω0 denotes the growth bound of the semigroup T (t). In this part we show that Ker T (s) ⊂ Su,A for every s ≥ 0. If x ∈ Ker T (s), then 

∞ 0

Part 3.

e−zt T (t)x dt =



s

e−zt T (t)x dt.

0

So it follows from (12.16) that z → (zI − A)−1 x is an entire function. Hence x ∈ Su,A .   To prove the reverse inclusion, we note that if x ∈ D A ∩Su,A , then Ax ∈ Su,A , and we can analyse the restriction of the semigroup T (t) to Su,A .   3 Su,A → Su,A be the part of A in Su,A , defined by, Let A       3 = Ax for x ∈ D A 3. 3 = D A ∩Su,A and Ax D A 3 is the infinitesimal generator of a strongly continuous The operator A 3 semigroup T (t) given by the restriction of T (t) to Su,A . By construction, 3 is given by the resolvent operator of A the resolvent operator of A restricted to Su,A . 3 is an entire It follows from (12.15) that the resolvent operator of A function of exponential type at most α as well. ∗ , and define Fix x ∈ Su,A and x ∗ ∈ Su,A 3 −1 x . f (z) := x ∗ , (zI − A)

12.2 A Completeness Theorem for a Class of Unbounded Operators

267

Note that from (12.16), it follows that  f (σ + iω) =



e−iωt e−σ t x ∗ , T3(t)x dt

0

3 −1 x . = x ∗ , ((σ + iω)I − A) So f (σ + iω) is an entire function of exponential type at most α which is L2 -integrable along the line Re z = σ . An application of the Paley-Wiener theorem, see Theorem 14.6.3, yields that there exists a L2 integrable function ψ such that f has the following representation 

α

f (z) =

e−zt ψ(t) dt.

0

This implies that x ∗ , T3(t)x = 0 for t ≥ α. The fact that x ∗ can be chosen arbitrary implies that T3(t)x = 0 for t ≥ α and x ∈ Ker T (α). This shows that Su,A ⊂ Ker T (α), where α is the exponential type of the revolvent of A. The last statement directly follows from Theorem 12.2.1(ii) and this completes the proof of the corollary.   To see the relation between Theorems 5.2.6, 6.3.2 and 12.2.1, we first observe from (12.15) that the resolvent of A is a finite rank perturbation of the resolvent of D0 . Next, for z0 ∈ ρ(A) fixed, define T = (z0 I − A)−1 and W = (z0 I − D0 )−1 . Note that   I − zW = (z0 − z)I − D0 (z0 I − D0 )−1 , and hence −1  (I − zW )−1 = (z0 I − D0 ) (z0 − z)I − D0 ,

z ∈ C.

This shows that W is quasi-nilpotent and hence that T is a finite rank perturbation of a quasi-nilpotent operator W in the setting of Sect. 6.3. In the first two sections in the next chapter, the Arzelà-Ascoli theorem, see Sect. 6.4, can be used to show that W = (z0 I − D0 )−1 is a compact operator, compare the representation for the resolvent of D0 respectively given in (13.6) and (13.21). For the third example, the representation for the resolvent of D0 is given in (13.39) and here one has to use Sobolev embedding theorems to prove compactness of W , see [11] for details.

Chapter 13

Applications to Dynamical Systems

In this chapter we consider three different classes of unbounded operators that arise in applications. In the first section we consider exponential dichotomous operators that appear in the study of functional differential equations. In the second section, we consider the infinitesimal generator of the semigroup associated with age-dependent population models. In the third section, we consider unbounded operators that arise in the study of Markov processes. In each of the three sections the unbounded operators concerned are operators A of the kind appearing in (12.2) of the previous chapter. The results concerning completeness obtained in this chapter can be viewed as generalisations of Theorem 6.2.1.

13.1 Mixed Type Functional Differential Equations   Let r1 and r2 be positive reals, and let X = C [−r1 , r2 ], Cn denote the Banach space of continuous functions endowed with the supremum norm. For a continuous function x : R → Cn and t ∈ R, we define xt ∈ X by xt (θ ) = x(t + θ ), −r1 ≤ θ ≤ r2 . A mixed type linear functional differential equation (MFDE) is given by the following relation d Mxt = Lxt , dt

t ∈ R.

(13.1)

Here L and M : X → Cn are bounded linear maps. Together with the initial condition x0 = ϕ, ϕ ∈ X, one can ask whether this relation defines a well-posed dynamical system. From the Riesz representation theorem, it follows that every bounded linear mapping from X into Cn can be represented by a matrix function of bounded © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_13

269

270

13 Applications to Dynamical Systems

variation, i.e., the maps L and M can be given by  Lϕ =

r2 −r1

 [dη(τ )]ϕ(τ ),

Mϕ =

r2 −r1

[dμ(τ )]ϕ(τ ),

(13.2)

where η(θ ) and μ(θ ), −r1 ≤ θ ≤ r2 , are m × m-matrix whose elements are of bounded variation, normalised so that η and μ are continuous from the left on (−r1 , r2 ) and vanish at the endpoint θ = r2 . Without loss of generality we can assume that r1 and r2 are chosen such that not both η and μ are constant in neighbourhoods of r1 and r2 . The abstract Cauchy problem associated with (13.1) is given by du = Au, dt

u(0) = ϕ,

ϕ ∈ X,

where the associated unbounded operator A is given by     dϕ dϕ D A = ϕ ∈ X| ∈ X, M = Lϕ , dθ dθ

Aϕ =

dϕ . dθ

(13.3)

If r2 = 0 and M defined by (13.2) is atomic at zero, i.e.,  Mϕ = ϕ(0) −

0 −r1

dμ0 (τ )ϕ(τ ),

where μ0 is continuous at zero, then the operator A generates a C0 -semigroup T (t) on X given by translation along the solution of (13.1), i.e., T (t)ϕ = xt ( · ; ϕ) and completeness properties of the infinitesimal operator have been studied. See [80], Chapter 9 of [42] and [20] for more information, and see [13, 49] for applications towards hyperbolicity and stability conditions. If r2 > 0, then the real part of the spectrum of A is unbounded in both half planes and the operator A no longer generates a C0 -semigroup, but A is still an exponential dichotomous operator, see [60] for the linear theory for such equations. We start to show that the operator A defined by (13.3) has a characteristic matrix function. Lemma 13.1.1 Let D(X → X) be the operator defined by     D D = ϕ ∈ X | ϕ˙ ∈ X ,

Dϕ =

dϕ . dθ

(13.4)

The operator A defined by (13.3) is the operator defined in (12.2) associated with D, L and M on  = C, where L and M are defined by (13.2). Furthermore, the associated operator D0 (X → X) given by       D D0 = ϕ ∈ D D | ϕ(0) = 0 ,

D0 ϕ = Dϕ,

(13.5)

13.1 Mixed Type Functional Differential Equations

271

has a resolvent (zI − D0 )−1 of exponential type, bounded along the imaginary axis, and the operator function Q introduced in Theorem 12.1.2 is given by Q(z) = −D0 (zI − D0 )−1 j = ezθ , where j : Cn → N is given by j (c) = cI

where

−r1 ≤ θ ≤ r2 .

I (θ ) = 1,

Proof Clearly, the kernel of D consists of the constant functions. It follows that N = Ker D has dimension n. For    D0 (X → X) we take the operator defined by (13.5). We have D D = N ⊕ D D0 , and for each z ∈ C the operator zI − D0 is invertible and the resolvent (zI − D0 )−1 is given by   (zI − D0 )−1 ϕ (θ ) = −



θ

e(θ−σ )z ϕ(σ ) dσ,

−r1 ≤ θ ≤ r2 .

(13.6)

0

Therefore D satisfies (H1), (H2) of Sect. 12.1 and A is the operator associated with D, L and M defined in (12.2). From the explicit representation of the resolvent of D0 , it follows that (zI − D0 )−1 is of exponential type, bounded along the imaginary axis, and Q(z) = ezθ .   It follows from Lemma 13.1.1 and Theorem 12.1.2 that the operator A defined by (13.3) has a characteristic matrix function (z) given by (z) = (zM − L)Q(z)  r2  =z ezτ dμ(τ ) − −r1

r2 −r1

ezτ dη(τ ).

(13.7)

From Theorem 14.6.3 it follows that the entries of (z) and det (z) are entire functions of completely regular growth, and if for the matrix-valued function , we define the function h by h (θ ) = max1≤i,j ≤n hij (θ ), where hij denotes the indicator function of ij , then h (θ ) =

& r2 cos θ, −r1 cos θ,

−π/2 ≤ θ ≤ π/2, π/2 ≤ θ ≤ 3π/2.

Furthermore, it follows from Lemma 14.9.1 that hdet  (θ ) ≤ nh (θ ).

(13.8)

272

13 Applications to Dynamical Systems

From the representation (13.6) for (zI − D0 )−1 , it also follows that for ϕ ∈ X, the function f (z; ϕ) = (MD − L)(zI − D0 )−1 ϕ   r2 −zθ e (zdμ(θ ) − dη(θ )) = −r1

θ

eτ z ϕ(τ ) dτ

(13.9)

0

is an entire function of completely regular growth and hf ( · ;ϕ)(θ ) ≤ h (θ ). Furthermore, from Theorem 14.6.3, it also follows that there exist a vector ϕ0 ∈ X such that hf ( · ;ϕ0) (θ ) = h (θ ).

(13.10)

Theorem 13.1.2 Let A be the closed operator given by (13.3) and let (z) denote the characteristic matrix function given by (13.7). The system of eigenvectors and generalised eigenvectors of A is complete if and only if the indicator function of det  satisfies hdet  = nh . Furthermore, if Su,A = {ϕ ∈ X | z → (zI − A)−1 ϕ

is entire},

then X = MA ⊕ Su,A . Proof First assume that there exists a θ0 with hdet  (θ0 ) < nh (θ0 ). Put f (z; ϕ) = adj (z)(MD − L)(zI − D0 )−1 ϕ0 . From (13.10) and Lemma 14.9.1, it follows that det (z) does not dominate f (z; ϕ0 ). Therefore, Theorem 12.2.1 implies that ϕ0 ∈ MA if the nondegeneracy condition holds. However, if the nondegeneracy condition fails, we have that Su,A = {0} and also find MA = X. Therefore, in this case completeness fails. If the indicator function of det  satisfies hdet  (θ ) = nh (θ ), −π < θ ≤ π, then Lemma 14.9.1 implies that det (z) dominates f (z; ϕ) for every ϕ ∈ X. Therefore, Theorem 12.2.1 implies that MA = X if the nondegeneracy condition holds. To complete the proof of the theorem, it remains to show that in this case the nondegeneracy condition holds. Suppose that this condition fails, then there exists a ϕ = 0 such that 

r2

−r1

e

−zθ

 (zdμ(θ ) − dη(θ )) 0

θ

eτ z ϕ(τ ) dτ = 0 for all z ∈ C.

(13.11)

13.1 Mixed Type Functional Differential Equations

273

Let us first consider the part of the integral on [0, r2 ] in (13.11). Changing the order of integration and defining 

r2

e−zθ (zdμ(θ ) − dη(θ )) = a(z; τ )A(τ ),

τ

where a(z; τ ) is a scalar function and A(τ ) is a matrix independent of z, yields 

r2

ezτ a(z; τ )A(τ )ϕ(τ ) dτ = 0

for all z ∈ C.

(13.12)

0

Since the indicator function of det  satisfies hdet  = nh , it follows from Theorem 14.6.3 that the matrix A(τ ) is nonsingular on [0, r2 ], but then it follows from the uniqueness of the Laplace transform that (13.12) implies that ϕ = 0 on [0, r2 ]. Similarly, we can handle the case when we restrict to the interval [−r1 , 0]. This shows that the nondegeneracy condition holds and this completes the proof of the theorem.  

13.1.1 Three Examples We illustrate Theorem 13.1.2 with three examples. Consider a neutral delay differential equation ⎧ ⎪ ⎪ ⎨x˙1 (t) − x˙3 (t − 1) = x2 (t − 1) x˙2 (t) − x˙2 (t − 1) = x1 (t − 1) + x3 (t − 1) ⎪ ⎪ ⎩x˙ (t) = x1 (t − 1) − x2 (t − 1) + x3 (t − 1). 3

(13.13)

In this case r2 = 0 and M is atomic at zero. The corresponding operator A, defined by (13.3) with M and L given by ⎡ ⎤ ⎡ ⎡ ⎤ ⎡ ⎤ ⎤ ϕ1 (0) − ϕ3 (−1) ϕ2 (−1) ϕ1 ϕ1 ⎦, M ⎣ϕ2 ⎦ = ⎣ϕ2 (0) − ϕ2 (−1)⎦ and L ⎣ϕ2 ⎦ = ⎣ ϕ1 (−1) + ϕ3 (−1) ϕ3 ϕ3 ϕ3 (0) ϕ1 (−1) − ϕ2 (−1) + ϕ3 (−1)) generates a C0 -semigroup on X. See Chapter 9 of [42] and [20] for more information about the general theory for such equations. The characteristic matrix function (13.7) associated to (13.13) is given by ⎡

⎤ z −e−z −ze−z (z) = ⎣−e−z z(1 − e−z ) −e−z ⎦ −e−z e−z z − e−z

274

13 Applications to Dynamical Systems

and the determinant of (z) is given by det (z) = z3 (1 − e−z ) − z2 e−z + z(z + 1)e−3z . So the indicator function satisfies hdet  (θ ) = 3h (θ ), −π < θ ≤ π. Therefore, the system of eigenvectors and generalised eigenvectors of the operator associated with (13.13) is complete in X and the system has no nontrivial small solutions. In the second example, we consider the retarded delay differential equation ⎧ ⎪ ⎪ ⎨x˙1 (t) x˙ (t) ⎪ 2 ⎪ ⎩x˙ (t) 3

= x2 (t − 1) = x1 (t − 1) + x3 (t − 1)

(13.14)

= x1 (t − 1) − x2 (t − 1) + x3 (t − 1).

In this case r2 = 0 and Mϕ = ϕ(0). The operator A defined by (13.3) generates a C0 -semigroup on X. See Chapter III of [21] and [20] for more information about the general theory for such equations. The characteristic matrix function associated to (13.14) is given by ⎡

⎤ z −e−z 0 (z) = ⎣−e−z z −e−z ⎦ −z −z −e e z − e−z and the determinant of (z) is given by det (z) = z3 − z2 e−z . So the indicator function satisfies hdet  (θ ) < 3h (θ ), −π < θ ≤ π. Therefore the system of eigenvectors and generalised eigenvectors of the infinitesimal generator associated with (13.14) is not complete in X and there exist nontrivial small solutions to system (13.14). As the final example, we consider the mixed type differential delay equation & x˙1 (t) − x˙2 (t + 1) = x1 (t − 1) + x2 (t + 1) x˙2 (t) + x˙2 (t + 1) = x1 (t − 1) − x2 (t + 1).

(13.15)

See [60, 61] for more information about the general linear theory for such equations. The characteristic matrix function associated to (13.15) is given by   z − e−z −(z + 1)e−z (z) = −e−z (z + 1)e−z

13.2 Age-Dependent Population Dynamics

275

and the determinant of (z) is given by det (z) = z2 (1 − e2z ) + z(ez − e2z − e−z − 1) − 2. So the indicator function satisfies hdet  (θ ) < 2h (θ ), −π < θ ≤ π. Therefore the system of eigenvectors and generalised eigenvectors of the operator associated with (13.15) is not complete in X. The theory of characteristic matrix functions developed in this book, in particular Theorem 12.1.2, plays an important role in the stability analysis of delay differential equations. See [21, 42] for more information; in particular, Chapter IV of [21]. The characteristic matrix function (z) given by (13.7) is also used in the formulation of the abstract Hopf bifurcation theorem in Chapter X of [21]. See [19] for recent developments in this direction.

13.2 Age-Dependent Population Dynamics In the second section, we consider a basic model from age-dependent population dynamics. See [21, 64, 84] for details and analysis of the model. The model is formulated as a conservation law and the vector variable p denotes the various population densities with respect to age a at time t. It is assumed that age remains bounded and takes values in the interval [0, ω] with ω < ∞. The time evolution is governed by the Lotka-McKendrick-von Förster equation ∂p ∂p (a, t) + (a, t) = −μ(a)p(a, t), ∂a ∂t

t > 0, 0 < a < ω,

(13.16)

with boundary condition 

ω

p(0, t) =

β(a)p(a, t) da,

t > 0,

(13.17)

0

and initial condition p(a, 0) = ϕ(a),

0 ≤ a ≤ ω.

(13.18)

  Here the state ϕ belongs to X = L1 [0, ω], Rn and the n × n-matrices μ and β have entries that belong to L∞ [0, ω] and β is not zero almost everywhere in a neighbourhood of ω. The matrix μ represents the mortality modulus and, similarly, β denotes the birth modulus. Without loss of generality we can assume that ω belongs to the support of β.

276

13 Applications to Dynamical Systems

The abstract evolutionary system associated with (13.16)–(13.18) is given by du = Au, dt

u(0) = ϕ,

ϕ ∈ X,

where A(X → X) is defined by &    ω  D A = ϕ ∈ X | ϕ ∈ W 1,1 [0, ω], ϕ(0) = 0 β(a)ϕ(a)da dϕ Aϕ = − da − μϕ.

(13.19)

Here W 1,1 [0, ω] can be identified with the space of absolutely continuous functions on [0, ω]. Using the method of characteristics (see Webb [84]), we can solve the initial value problem and, consequently, A generates a C0 -semigroup on X given by T (t)ϕ = p( · , t; ϕ). Lemma 13.2.1 Let D(L1 [0, ω] → L1 [0, ω]) be the operator defined by &   D D Dϕ

= {ϕ ∈ L1 [0, ω] | ϕ ∈ W 1,1 [0, ω]} dϕ = − da − μϕ.

The operator A defined by (13.19) is the operator defined in (12.2) associated with D, L and M, where L and M are defined by  Mϕ = 0,

ω

Lϕ = ϕ(0) −

β(a)ϕ(a)da.

(13.20)

0

Furthermore, the associated operator D0 (L1 [0, ω] → L1 [0, ω]) given by     D D0 = {ϕ ∈ D D : ϕ(0) = 0},

D0 ϕ = Dϕ.

has a resolvent (zI − D0 )−1 of exponential type, bounded along the imaginary axis, and the operator function Q introduced in Theorem 12.1.2 is given Q(z) = −D0 (zI − D0 )−1 j = eza Y (a, 0), where Y (a, y) denotes the fundamental solution of the differential equation Dϕ = 0, and j : Cn → N is given by j (c) = Y (a, 0)c, c ∈ Cn . Proof Put X = L1 [0, ω]. To find the kernel of D we solve the equation dϕ (a) = −μ(a)ϕ(a), da

0 ≤ a ≤ ω.

13.2 Age-Dependent Population Dynamics

277

The solution is given by ϕ(a) = Y (a, 0)ϕ(0) for 0 ≤ a ≤ ω, where Y (a, y) denotes the fundamental solution with Y (y, y) = I for 0 ≤ y ≤ a ≤ ω. It follows that N = Ker D has dimension n. Clearly, D D = Ker D ⊕ D D0 . The resolvent of D0 is given by 

 (z − D0 )−1 ϕ (a) =



a

e−z(a−y)Y (a, y)ϕ(y) dy,

0 ≤ a ≤ ω.

(13.21)

0

Therefore D satisfies the hypotheses (H1), (H2) of Sect. 12.1 and A is the operator associated with D, L and M appearing in (12.2). From the explicit representation of the resolvent of D0 , it follows that (zI − D0 )−1 is of exponential type, bounded along the imaginary axis, and Q(z) = eza Y (a, 0).   It follows from Lemma 13.2.1 and Theorem 12.1.2 that the operator A defined by (13.19) has a characteristic matrix function (z) on C given by 

ω

(z) = (−L)Q(z) = I −

e−za β(a)Y (a, 0) da.

(13.22)

0

For a matrix-valued function  : Cn → Cn , we define the indicator function h by h (θ ) =

& 0,

−π/2 ≤ θ ≤ π/2,

−ω cos θ,

π/2 ≤ θ ≤ 3π/2.

(13.23)

From the representation (13.21) for (zI − D0 )−1 , it also follows that for ϕ ∈ X, the function f (z; ϕ) = (−L)(zI − D0 )−1 ϕ  ω  a = β(a) e−z(a−y)Y (a, y)ϕ(y) dy da 0

0

is an entire function of completely regular growth and hf ( · ;ϕ)(θ ) ≤ h (θ ). Furthermore, from Theorem 14.6.3, it also follows that there exist ϕ0 ∈ L1 [0, ω] such that hf ( · ;ϕ0) (θ ) = h (θ ).

(13.24)

Theorem 13.2.2 Let A be the infinitesimal generator given by (13.19), and let (z) denote the characteristic matrix function given by (13.22). Then the system of eigenvectors and generalised eigenvectors of A is complete if and only if the indicator function of det  satisfies hdet  = mh .

278

13 Applications to Dynamical Systems

Proof Let us first assume that there exists a θ0 such that hdet  (θ0 ) < nh (θ0 ). From (13.24) and Lemma 14.9.1, it follows that there exists a ϕ0 such that det (z) does not dominate (−L)(zI − D0 )−1 ϕ0 . Therefore, Theorem 12.2.1 implies that ϕ0 ∈ MA if the nondegeneracy condition holds. However, if the nondegeneracy condition fails, we have that Su,A = {0} and hence MA = L1 [0, ω]. Therefore, completeness fails in this case. If the indicator function satisfies hdet  (θ ) < nh (θ ), −π < θ ≤ π, then Lemma 14.9.1 implies that det (z) dominates (−L)(zI − D0 )−1 ϕ for every ϕ ∈ L1 [0, ω]. To complete the proof of the theorem, it remains to show that in this case the nondegeneracy condition of Theorem 12.2.1 holds. Suppose that this condition fails, then there exists a ϕ = 0 such that 



ω

a

β(a) 0

e−z(a−y)Y (a, y)ϕ(y) dy da = 0

for all z ∈ C.

(13.25)

0

Changing the order of integration and defining 

ω

e−za β(a)Y (a, y) da = b(z; y)A(y),

y

where b(z; y) is a scalar function and A(y) is a matrix independent of z, yields 

ω

ezy b(z; y)A(y)ϕ(y) dy = 0

for all z ∈ C.

(13.26)

0

Since the indicator function of det  satisfies hdet  = nh , it follows from Lemma 14.6.3 that the matrix A(y) is nonsingular on [0, ω], but then it follows from the uniqueness of the Laplace transform that (13.26) implies that ϕ = 0 on [0, ω]. This completes the proof of the theorem.  

13.3 The Zig-Zag Semigroup In recent years piecewise deterministic Markov processes have emerged as a useful computational tool in stochastic simulation, for example for Bayesian statistics or statistical physics. A particular instance of such a process is the zig-zag process, described in [10, 11]. In the present setting the zig-zag process is a Markov process (V (t), #(t)) in the state space E := R × {−1, +1}. Conditional on the velocity process (#(t))t ≥0 , the position process (V (t))t ≥0 in R is completely determined by the relation 

t

V (t) = V (0) +

#(s) ds. 0

13.3 The Zig-Zag Semigroup

279

The velocity process (#(t))t ≥0 in {−1, +1} switches sign at inhomogeneous Poisson rate λ(V (t), #(t)), where λ : E → [0, ∞) is a continuous function. Thus the switching intensity function λ has a crucial impact on the dynamics exhibited by the zigzag process. For a given absolutely continuous potential function U : R → R, the switching intensity λ will be chosen as   λ(x, θ ) = max θ U  (x), 0 ,

(x, θ ) ∈ E.

This condition results in the zig-zag process having stationary distribution μ on E with marginal density on R proportional to e−U . This observation makes the zig-zag process into a useful computational device for the stochastic simulation of a given target density proportional to e−U . In order to obtain quantitative results on the rate of convergence to stationarity, it is natural to investigate the spectral properties of the Markov semigroup connected to the zig-zag process, see [11]. Equip E = R × {−1, +1} with the product topology generated by the usual topologies on R and {−1, +1}. We also consider E with the product σ -algebra generated by products of Lebesgue sets in R and all subsets of {−1, +1}. The construction of the Markov semigroup connected to the zig-zag process involves measures ν on R and μ on E, respectively defined by 

e−U (x) dx,

ν(S) =

and μ(S × {θ }) = ν(S),

S ∈ B(R), θ ∈ {−1, +1}.

S

We denote the associated complex Hilbert spaces of square integrable functions 1,2 (E) if f (·, θ ) ∈ W 1,2 (F ) respectively by L2 (R, ν) and L2 (E, μ). We say f ∈ Wloc for any bounded open set F ⊂ R with θ ∈ {−1, +1}. In the sequel we assume that the potential U is is absolutely continuous and  unimodular, that is,U (0) = 0, U  (x) ≤ 0 for  x ≤ 0 and U (x) ≥ 0 for x ≥ 0. 2 2 The generator C L (E, μ) → L (E, μ) associated with the Markov semigroup connected to the zig-zag process can now be defined as follows (see [10, 11])     1,2 (E), Cf ∈ L2 (E, μ) , D C = f ∈ L2 (E, μ) | f ∈ Wloc Cf (x, θ ) = θ ∂x f (x, θ ) + λ(x, θ )(f (x, −θ ) − f (x, θ )),

(x, θ ) ∈ E. (13.27)

The aim of this section is to show that C corresponds to an operator associated with D, L and M as defined in (12.2). Before we can do this, we start with some preparations. For f : E → C we define f ± : R → C by f ± = f (·, ±1).

280

13 Applications to Dynamical Systems

From the assumption that the potential U is unimodular, we observe that +

λ (x) =

& 0 U  (x)

for x ≤ 0, for x > 0,

(13.28)

and −

λ (x) =

& −U  (x) for x ≤ 0, 0

for x > 0.

(13.29)

This implies  that we can rewrite the operator C given by (13.27) in block operator form as A L2 (R, ν)2 → L2 (R, ν)2 with  +   6   5 f+ f 1,2 2 2 ± ∈ L ∈ L2 (R, ν)2 , (R, ν) | f ∈ W (R), A D A = loc − − f f (13.30)  +     f λ+ ∂x − λ+ f+ A = . (13.31) f− λ− −∂x − λ− f −   Define the unbounded operator D L2 (R, ν)2 → L2 (R, ν)2 by     5 f+ 1,2 D D = ([0, ∞)), ∈ L2 (R, ν)2 | f ± ∈ Wloc f−  + 6 f 1,2 2 2 f ± ∈ Wloc ∈ L , ((−∞, 0)), D (R, ν) f− (13.32)  +  + f f D =A . (13.33) f− f−   Lemma 13.3.1 The operator D L2 (R, ν)2 → L2 (R, ν)2 defined by (13.32)– (13.33) is a closed linear operator. Furthermore, the kernel of D is two dimensional and given by Ker D =

5 f +  f−

∈ L2 (R, ν)2 | f + (x) = f − (0+), f − (x) = f − (0+), x ≥ 0, 6 and f + (x) = f + (0−), f − (x) = f + (0−), x < 0 . (13.34)

13.3 The Zig-Zag Semigroup

281

So N := Ker D is two-dimensional and the invertible linear map j : C2 → N is given by        c c1 c1  1 − H (x) + 2 H (x), j = c2 c1 c2

(13.35)

where H denotes the Heaviside function given by H (x) = 1 for x ≥ 0 and H (x) = 0 for x < 0.   Proof Put fn = (fn+ , fn− )T , f = (f + , f − )T , and let (fn )n≥0 ⊂ D A be a converging sequence in L2 (R, ν)2 with limn→∞ fn = f ∈ L2 (R, ν)2 . Define for n ≥ 0 the function gn = Afn and suppose that the sequence (gn )n≥0 converges in L2 (R, ν)2 to g. Note that    ∂x fn± (x) = ± gn± (x) − λ± (x) fn∓ (x) − fn± (x) .

(13.36)

Define    h± (x) := ± g ± (x) − λ± (x) f ∓ (x) − f ± (x) . 1,2 (R) and that ∂x f ± = h± . We will first show that f ± ∈ Wloc ∞ Let φ be a smooth C -function with compact support in R. Then

 −

R

f ± ∂x φ dx = −





 lim fn± ∂x φ dx = − lim

R n→∞



= lim

n→∞ R

  ∂x fn± φ dx =





f ± ∂x φ dx n→∞ R n



 lim ∂x fn± φ dx =

R n→∞

 R

h± φ dx.

This proves that ∂x f ± = h± . Furthermore, taking a subsequence,   ∂x fn±k = gn±k − λ± fn∓k − fn±k   → g − λ± f ∓ − f ± = ∂x f ±

almost everywhere

(by L2 -convergence of fn± and gn± ). Therefore   gn±k = Afn±k → ±∂x f ± + λ± f ∓ − f ±   almost everywhere. Since also gn±k → g ± in L2 (R, ν) we have f ∈ D A and Af = g. For x ≥ 0, it follows from (13.28), (13.29) and (13.31) that  +   f 0 D = − f 0

(13.37)

282

13 Applications to Dynamical Systems

is equivalent to the following system of differential equations  f + = U f + − U f −,  −  f = 0.



Therefore   f + (x) = eU (x) f + (0+) − f − (0+) + f − (0+), f − (x) = f − (0+),

x ≥ 0,

x ≥ 0.

Since f + ∈ L2 (R, ν), this yields f + (0+) = f − (0+) and f + (x) = f − (0+),

x ≥ 0,

f − (x) = f − (0+),

x ≥ 0.

Similarly, for x < 0, we have that (13.37) is equivalent to the system of differential equations  +  f = 0,  −  f = U f − − U f +. Therefore x < 0, f + (x) = f + (0−),   f − (x) = eU (x) f − (0−) − f + (0−) + f + (0−),

x < 0.

Since f − ∈ L2 (R, ν), this yields f − (0−) = f + (0−) and f + (x) = f + (0−),

x < 0,

f − (x) = f + (0−),

x < 0.

This completes the proof of the lemma.   Define the restriction D0 L2 (R, ν)2 → L2 (R, ν)2 of D by

 

5 f +  6     2 2 + − D D0 = D D ∩ (R, ν) | f (0−) = 0, f (0+) = 0 . ∈ L f− (13.38)   Note that it follows  from Lemma 13.3.1 that D0 has a trivial kernel and that D D = Ker D ⊕ D D0 .

13.3 The Zig-Zag Semigroup

283

Lemma 13.3.2 The resolvent set of the restriction D0 defined by (13.38) of the operator D L2 (R, ν)2 → L2 (R, ν)2 defined by (13.32)–(13.33) equals C. Furthermore, if  +  + f −1 g ) = (zI − D , 0 f− g−

(13.39)

then &

 ∞  ξ ezx+U (x) x e−zξ −U (ξ ) g + (ξ ) + U  (ξ )e−zξ 0 ezs g − (s) ds dξ, f (x) = 0 ezx x e−zξ g + (ξ ) dξ, +

x ≥ 0, x < 0,

and & x e−zx 0 ezξ g − (ξ ) dξ, f (x) =  x  0 e−zx+U (x) −∞ ezξ −U (ξ ) g − (ξ ) − U  (ξ )ezξ ξ e−zs g + (s) ds dξ, −

x ≥ 0, x < 0.

Proof From (13.39) it follows that  +  +  + f f g D0 =z − − . f− f− g Therefore, for x ≥ 0, we have to solve the following system of differential equations  +  f = (z + U  )f + − U  f − − g + ,  −  f = −zf − + g − . with the initial condition f + (0) = f − (0) = 0. The second equation can be solved explicitly and using the variation of constants formula and the initial condition. This yields f − (x) = e−zx



x

ezξ g − (ξ ) dξ,

x ≥ 0.

0

Substitution of this expression for f − into the first equation, we can again use variation of constants formula with the boundary condition that f + ∈ L2 (R, ν) to obtain f + (x) = ezx+U (x)

 x



 e−zξ −U (ξ ) g + (ξ ) + U  (ξ )e−zξ

 0

ξ

 ezs g − (s) ds dξ,

x ≥ 0.

284

13 Applications to Dynamical Systems

Similarly, for x < 0, we have to solve the following system of differential equations  +  f = zf + − g + ,  −  f = (−z + U  )f − − U  f + − g − , with the initial condition f + (0) = f − (0) = 0. Now the first equation can be solved directly using the initial condition. This yields 

+

f (x) = e

0

zx

e−zξ g + (ξ ) dξ,

x < 0.

x

Substitution of this expression for f + into the second equation, we can again use variation of constants formula with the boundary condition that f − ∈ L2 (R, ν) to obtain f − (x) = e−zx+U (x)



x −∞

 ezξ −U (ξ ) g − (ξ ) − U  (ξ )ezξ



0

 e−zs g + (s) ds dξ,

x < 0.

ξ

This shows that ρ(D0 ) = C and proves the representation of (zI − D0 )−1 given in (13.39).   From Lemmas 13.3.1 and 13.3.2 it follows that the operator D defined by (13.32)–(13.33) satisfies the assumptions (H1) and (H2) from Sect. 12.1. Associated with the operatorD we take the operators L and M from (12.2) to be such that M = 0 and L : D D → C2 is given by L

 +  +  f f (0+) − f + (0−) = . f− f − (0−) − f − (0+)

(13.40)

Observe that from (13.30) it follows that       D A = f ∈ D D | Lf = 0 ,

Af = Df.

Thus A is the operator associated with D, L and M appearing in (12.2). The characteristic matrix function of A is given by (12.3) and in the present setting given by (z) = LD0 (z − D0 )−1 j,

(13.41)

where j : C2 → N is given by (13.35). An application of Theorem 12.1.2 shows that all spectral properties of the operator A can be derived from the characteristic matrix function (13.41).

13.3 The Zig-Zag Semigroup

285

Using the explicit representation for the resolvent of D0 presented in Lemma 13.3.2 we can derive an explicit formula for the characteristic matrix function in terms of the potential U . Theorem 13.3.3 The operator A defined by (13.30)–(13.31) has a characteristic matrix function in sense of Theorem 12.1.2 and in this case (z) is given by $

1

(z) =  0

−∞ e

2zξ −U (ξ ) U  (ξ ) dξ



∞ 0

% e−2zξ −U (ξ )U  (ξ ) dξ . 1

(13.42)

Proof Put  +   f c1 −1 ) j = (z − D , 0 f− c2 where j : C2 → N is given by (13.35). Using the explicit representation for the resolvent of D0 presented in Lemma 13.3.2 yields +

f (x) =

⎧   ⎨ezx+U (x) ∞ e−zξ −U (ξ ) 1 + ⎩1 z

x

1−e

 zx

 U  (ξ )  1 − e−zξ dξ c2 , z

x ≥ 0, x < 0,

c1 ,

and f − (x) =

⎧   ⎨ 1 1 − e−zx c2 ,

x ≥ 0,

z

⎩ −zx+U (x)  x zξ −U (ξ ) 1− e −∞ e

 U  (ξ )  1 − ezξ dξ c1 , z

x < 0.

Next we use D0 (z − D0 )−1 j = −j + z(z − D0 )−1 j to obtain that if  +   f c −1 = D0 (z − D0 ) j 1 , f− c2 then f + (x) =

⎧ ⎨−ezx+U (x)  ∞ e−2zξ −U (ξ )U  (ξ ) dξ c2 ,

x ≥ 0,

⎩−ezx c , 1

x < 0,

x

286

13 Applications to Dynamical Systems

and f − (x) =

⎧ ⎨−e−zx c2

x ≥ 0,  x ⎩e−zx+U (x) 2zξ −U (ξ )U  (ξ ) dξ c , x < 0. 1 −∞ e

Finally apply L given by (13.40) to this last expression to arrive at formula (13.42) for (z).   The detailed spectral properties of the operator A defined by (13.30)–(13.31) were studied in [11] using direct methods. Theorem 13.3.3 in combination with Theorem 12.1.2 provides a new approach to study the spectral properties of the operator A.

Chapter 14

Results from the Theory of Entire Functions

In this chapter, which consists of nine sections, we present elements from the theory of entire functions that are used throughout the book. The emphasis is to derive corollaries of the classical results of Phragmén-Lindelöf and Paley-Wiener that are used to derive completeness results for classes of operators. We fine-tune these results in various directions so that natural conditions in operator theory allow us to directly apply fundamental results from the theory of entire functions. In particular, we focus on the connection between the distribution of zeros and the growth properties of an entire function of completely regular growth. Such functions play an important role in the completeness results derived in this book. Using entire functions of the form  a f (z) = p(z) + q(z) e−zt ϕ(t) dt, z ∈ C, (14.1) −a

where p and q = 0 are polynomials, 0 < a < ∞, and ϕ is a non-zero square integrable function on the interval [−a, a], we show how to use the classical results from complex analysis to derive very detailed properties of these functions.

14.1 Basic Definitions Throughout f is a complex-valued analytic function on C which we will refer to as an entire function. Put   M(r; f ) = max |f (z)| | |z| = r .

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 M. A. Kaashoek, S. M. Verduyn Lunel, Completeness Theorems and Characteristic Matrix Functions, Operator Theory: Advances and Applications 288, https://doi.org/10.1007/978-3-031-04508-0_14

(14.2)

287

288

14 Results from the Theory of Entire Functions

We say that f is of finite order if there exist positive numbers a and r_0(a) such that
\[
M(r; f) < \exp(r^{a}), \qquad r \ge r_0(a), \tag{14.3}
\]
and in that case the greatest lower bound of all such a is called the order of the entire function f. If no such number a exists, then f is said to be of infinite order. From the maximum modulus principle for entire functions, it follows that M(r; f) is either constant or strictly increasing to infinity as r → ∞. Therefore, assuming f is not constant, if f is of finite order, then the order ρ of f is given by
\[
\rho = \limsup_{r \to \infty} \frac{\log \log M(r; f)}{\log r}. \tag{14.4}
\]

All polynomials are entire functions of order zero. The converse is not true; see the example given in the paragraph directly after (14.8). The order of a product of two entire functions f1 and f2 of finite order is less than or equal to the larger of the orders of the factors. This follows from (14.4); see Theorem 14.1.1 below for more information on products.

The growth of an entire function f of finite non-zero order ρ is specified further by its type. We say that f is of finite ρ-type or, in short, of finite type if there exist positive numbers K and r_1(K) such that
\[
M(r; f) < \exp(K r^{\rho}), \qquad r \ge r_1(K), \tag{14.5}
\]
and in that case the greatest lower bound of all such K is called the type of f. If no such K exists, then f is said to be of infinite type. In the sequel, when we deal with an entire function of finite or infinite type, we will always assume implicitly that the functions involved are of finite non-zero order. If an entire function f of finite non-zero order is of finite type, then the type σ is given by
\[
\sigma = \limsup_{r \to \infty} \frac{\log M(r; f)}{r^{\rho}}. \tag{14.6}
\]
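The two limsups in (14.4) and (14.6) can be approximated on concrete examples. The Python sketch below is an illustrative aside, not part of the original text; it samples log M(r; f) on circles of growing radius and uses crude finite-radius surrogates for the limsups (the test functions, radii and grid sizes are ad hoc choices).

```python
import numpy as np

def log_max_modulus(logabs_f, r, n=4000):
    """Approximate log M(r; f) by sampling log|f| at n points on the circle |z| = r."""
    z = r * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
    return np.max(logabs_f(z))

def order_and_type(logabs_f, radii):
    """Finite-radius surrogates for the order (14.4) and the type (14.6)."""
    logM = np.array([log_max_modulus(logabs_f, r) for r in radii])
    # slope of log log M(r; f) against log r approximates the order rho
    rho = (np.log(logM[-1]) - np.log(logM[0])) / (np.log(radii[-1]) - np.log(radii[0]))
    sigma = np.max(logM / radii ** rho)          # surrogate for the type
    return rho, sigma

radii = np.geomspace(10.0, 200.0, 6)
# f(z) = exp(3 z^2): order 2, type 3; here log|f(z)| = Re(3 z^2).
print(order_and_type(lambda z: np.real(3.0 * z ** 2), radii))
# f(z) = z - exp(-z): order 1, type 1.
print(order_and_type(lambda z: np.log(np.abs(z - np.exp(-z))), radii))
```

Both estimates land close to the exact values (2, 3) and (1, 1), in line with the examples (14.7) discussed below.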

An entire function of finite non-zero order is said to be of minimal type if σ = 0 and of normal type if 0 < σ < ∞. We say that f is of exponential type if its order ρ is one and its type σ is finite. Let us note that some authors use a somewhat different definition of exponential type; for instance, the definition used in [73] includes all entire functions of order strictly less than one (see [73, Chapter 19]). With our definition of exponential type we follow the book [55]. Let us consider a few classical examples:
\[
f_1(z) = \exp(s z^{n}), \qquad f_2(z) = z - \exp(-z), \qquad f_3(z) = \exp(\exp z). \tag{14.7}
\]


If n is a positive integer and s ∈ R, then f1 has order ρ = n and type σ = |s|. For the function f2 both order and type are equal to one. In particular, f2 is of exponential type. The function f3 is of infinite order. Examples of entire functions of finite order and infinite type are more complex. The reciprocal of the Gamma function provides an example. Indeed, Γ(λ)⁻¹ is an entire function of order one and infinite type (see the final paragraph of page 307 and Corollary 54.1 on page 319, both in [63, Volume 2], or the paper [27] and the references therein).

The order and type of an entire function can be read off from the asymptotic behaviour of the coefficients in the Taylor expansion at zero. Indeed, if $f(z) = \sum_{n=0}^{\infty} a_n z^n$ is an entire function, then (see Section I.2 in [55]) the order ρ and type σ of f are given by
\[
\rho = \limsup_{n \to \infty} \frac{n \log n}{-\log |a_n|},
\qquad
\sigma = \frac{1}{e\rho} \limsup_{n \to \infty} n |a_n|^{\rho/n} \quad \text{if } 0 < \rho < \infty. \tag{14.8}
\]
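As a brief numerical aside (not part of the original text), the second identity in (14.8) can be tested on f(z) = exp(3z²), whose non-zero Taylor coefficients are a_{2k} = 3^k/k!; working with log|a_n| avoids overflow, and the truncated limsup slowly approaches the type σ = 3 for the known order ρ = 2.

```python
import math

def type_from_log_taylor(log_abs_coeffs, rho):
    """Truncated surrogate for sigma = (1/(e*rho)) * limsup_n n |a_n|^(rho/n); see (14.8).

    log_abs_coeffs: dict {n: log|a_n|} over the non-zero Taylor coefficients of f.
    """
    terms = [n * math.exp(rho * la / n)
             for n, la in sorted(log_abs_coeffs.items()) if n > 0]
    return max(terms[-50:]) / (math.e * rho)    # tail maximum as a limsup surrogate

# f(z) = exp(3 z^2) = sum_k 3^k z^(2k) / k!  has order rho = 2 and type sigma = 3.
log_coeffs = {2 * k: k * math.log(3.0) - math.lgamma(k + 1) for k in range(1, 400)}
print(type_from_log_taylor(log_coeffs, rho=2.0))   # about 2.97, increasing towards 3
```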

For instance, using the first identity it is straightforward to check that the function f3 in (14.7) is of infinite order and that the function $f(z) = \sum_{n=0}^{\infty} e^{-n^{2}} z^{n}$ is an entire function of order zero.

The following theorem (Theorem 12 on page 22 of [55]) presents precise information about the growth of the product of two entire functions.

Theorem 14.1.1 Let f1 and f2 be entire functions of order ρ1 and ρ2, respectively. If ρ1 < ρ2, then the order ρ of the product f1 f2 is given by ρ = ρ2. Furthermore, assume that ρ1 and ρ2 are both finite and non-zero, and let σ1 and σ2 be the types of f1 and f2, respectively. Then the following holds:

(i) If ρ1 < ρ2, then the type σ of the product f1 f2 is given by σ = σ2.
(ii) If ρ = ρ1 = ρ2, then the order of the product f1 f2 is given by ρ and its type is the maximum of σ1 and σ2 provided one of the following conditions is satisfied: (a) σ1 = 0 and 0 < σ2 < ∞, (b) σ1 is finite and σ2 = ∞.

Proposition 14.1.2 Let f be an entire function of finite order ρ, and let q be a non-zero polynomial. Then qf is an entire function of order ρ, and the functions f and qf have the same type.

Proof The result is trivial when q is a non-zero constant. Therefore let us assume that the degree n of q is positive. Thus q(z) = p(z) + a_0 z^n, where p is a polynomial of degree at most n − 1 and a_0 ≠ 0. Note that
\[
|a_0| - \frac{|p(z)|}{|z|^{n}} \le \frac{|q(z)|}{|z|^{n}} \le \frac{|p(z)|}{|z|^{n}} + |a_0|.
\]


It follows that there exists r_0 > 0 such that
\[
\tfrac{1}{2}\, |a_0|\, |z|^{n} \le |q(z)| \le 2\, |a_0|\, |z|^{n} \quad \text{for } |z| \ge r_0. \tag{14.9}
\]

It is then straightforward to check that f and qf have the same finite non-zero order and that both have the same type. ∎

Corollary 14.1.3 Let f(z) = p(z) + q(z)g(z), where p and q are polynomials, q ≠ 0, and g is an entire function of exponential type. Then f is an entire function of exponential type, and its type is equal to the type of g.

Proof The identities in (14.8) imply that the sum of a polynomial p and an entire function h of exponential type with type equal to σ is again an entire function of exponential type, and its type is equal to the type of h. Indeed, p + h and h have the same Taylor coefficients at zero except maybe the first n, where n is the degree of p. From this remark it follows that it suffices to prove the corollary for the case when p = 0. But then the corollary is an immediate consequence of the preceding proposition. ∎

Further information about the product of f1 and f2 for the case when f1 and f2 both have finite non-zero order and both are of finite type is presented in Theorem 14.6.5 and Lemma 14.7.4. This additional information applies to functions of completely regular growth, which are defined in Sect. 14.6.

We conclude with an elementary observation that is frequently used in the proofs of the completeness results in Chap. 4.

Lemma 14.1.4 Let f be an entire function, and let g be the entire function given by g(z) = f(z + z0), where z0 is a complex number. Then f and g have the same order and the same type.

Proof Put a = |z0|. By the maximum modulus principle (see the picture below) we have
\[
M(g; r) \le M(f; a + r) \quad \text{and} \quad M(f; r) \le M(g; a + r). \tag{14.10}
\]
Using the first inequality in (14.10) we see that
\[
\rho(g) = \limsup_{r \to \infty} \frac{\log\log M(g; r)}{\log r}
\le \limsup_{r \to \infty} \frac{\log\log M(f; a + r)}{\log(a + r)} \cdot \frac{\log(a + r)}{\log r} = \rho(f).
\]
Here we used that
\[
\frac{\log(a + r)}{\log r} = \frac{\log\bigl((a/r + 1)\, r\bigr)}{\log r} = \frac{\log(a/r + 1)}{\log r} + 1 \to 1 \quad \text{as } r \to \infty.
\]


Using f(z) = g(−z0 + z), we obtain ρ(f) ≤ ρ(g) in a similar way. Thus f and g have the same order. Assume ρ := ρ(f) = ρ(g) is finite, and let σ(f) and σ(g) be the types of f and g, respectively. Then
\[
\sigma(g) = \limsup_{r \to \infty} \frac{\log M(r; g)}{r^{\rho}}
\le \limsup_{r \to \infty} \frac{\log M(a + r; f)}{r^{\rho}}
= \limsup_{r \to \infty} \frac{\log M(a + r; f)}{(a + r)^{\rho}} \cdot \frac{(a + r)^{\rho}}{r^{\rho}} = \sigma(f).
\]
Thus σ(g) ≤ σ(f). Interchanging the roles of f and g we also get σ(f) ≤ σ(g), and hence f and g have the same type. ∎

A survey of the basic definitions and elementary properties of vector-valued and matrix-valued entire functions is given in Sect. 14.9.

14.2 Applications of the Phragmén-Lindelöf Theorem

In the proofs of the completeness theorems in Sects. 14.2 and 14.3, we need results on the growth of an entire function inside angular regions of the complex plane. We start with two consequences of the Phragmén-Lindelöf theorem (see Theorem VI.4.1 of [15]). In the following statements we use the notion of an α-admissible set of half-lines in the complex plane, see Definition 2.1.1.

Theorem 14.2.1 Let f be an entire function of order at most ρ < ∞. Suppose that there exists an α-admissible set of half-lines in the complex plane, {ray(θj) | j = 1, …, κ}, an integer m and a positive constant M such that
\[
|f(z)| \le M(1 + |z|^{m}) \quad \text{for } z \in \operatorname{ray}(\theta_j), \; j = 1, 2, \ldots, \kappa. \tag{14.11}
\]

If α > ρ, then f is a polynomial of degree at most m.

Proof Define
\[
g(z) = \frac{1}{z^{m}} \Bigl( f(z) - \sum_{j=0}^{m-1} \frac{f^{(j)}(0)}{j!}\, z^{j} \Bigr). \tag{14.12}
\]
By construction, the function g is entire of order at most ρ < ∞. It follows that for every ε > 0, there is a number r_0 such that
\[
|g(z)| < \exp\bigl(|z|^{\rho + \varepsilon}\bigr), \qquad |z| \ge r_0.
\]


Furthermore, again by construction (using (14.11)), we have |g(z)| ≤ M, z ∈ ray(θj), for each j = 1, …, κ. We next consider two cases.

Assume that α > 1/2. Then the number of half-lines must be strictly larger than one (see the paragraph after Definition 2.1.1). Thus κ > 1. In this case it suffices to prove the theorem for one specific sector (bounded by ray(θj) and ray(θj+1) for a given j) and then repeat the argument for all other sectors of this form. Without loss of generality, we can assume that the specific sector is given by Ω = {z ∈ C | |arg z| < π/(2α)}. Since by construction |g(z)| ≤ M for all z ∈ ∂Ω, it follows from a corollary of the Phragmén-Lindelöf theorem, see Corollary VI.4.2 of [15], that |g(z)| ≤ M for all z ∈ Ω. Repeating the same argument for the other sectors yields that |g(z)| ≤ M for all z ∈ C. But now Liouville's theorem implies that g is constant. Thus from (14.12) it follows that f is a polynomial of degree at most m.

Assume that α ≤ 1/2. Then the number of half-lines is greater than or equal to one, that is, κ ≥ 1. Therefore, the function g is bounded on at least one half-line ray(θ1). Since α > ρ, the order of g is strictly less than 1/2, but then it follows from the Phragmén-Lindelöf theorem applied to the open domain Ω = {z ∈ C | z ∉ ray(θ1)}, see Corollary 4.9.40 of [9], that |g(z)| ≤ M for all z ∈ C and hence g is constant. Thus again from (14.12) it follows that f is a polynomial of degree at most m. ∎

If we have further information about the function f in Theorem 14.2.1, then we can reduce the opening of the angles.

Theorem 14.2.2 Let f be an entire function of order at most ρ < ∞. Suppose that there exists an α-admissible set of half-lines in the complex plane, {ray(θj) | j = 1, …, κ}, with the following properties:

(a) there exist an integer m and a positive constant M such that
\[
|f(z)| \le M(1 + |z|^{m}) \quad \text{for } z \in \operatorname{ray}(\theta_j), \; j = 1, 2, \ldots, \kappa; \tag{14.13}
\]

(b) there exist ϕj such that θj < ϕj < θj+1 (j = 1, 2, …, κ − 1) and θκ < ϕκ < θ1 + 2π, such that
\[
\limsup_{r \to \infty} \frac{1}{r^{\alpha}} \log |f(r e^{i\varphi_j})| \le 0 \quad \text{for } j = 1, \ldots, \kappa. \tag{14.14}
\]

If, in addition, α = ρ, then f is a polynomial of degree at most m.

Note that condition (14.14) is equivalent to the condition that for each ε > 0 there exists a positive number r_0(ε) such that
\[
|f(r e^{i\varphi_j})| < \exp(\varepsilon r^{\alpha}) \quad (r \ge r_0(\varepsilon)), \qquad j = 1, \ldots, \kappa. \tag{14.15}
\]


In order to prove Theorem 14.2.2 we need the following proposition, which is a somewhat stronger version of Corollary VI.4.2 of [15]. On the other hand, as we shall see, Corollary VI.4.2 of [15] plays a major role in the proof of the proposition.

Proposition 14.2.3 Let Ω = {z ∈ C | |arg z| < π/(2a)}, where 2a ≥ 1, and let g be a function analytic on Ω, continuous on the closure of Ω, such that the following three conditions are satisfied:

(i) there exists M such that |g(z)| ≤ M for z ∈ ∂; (ii) for each  > 0 there exists r0 = r0 () > 0 such that   |g(z)| < exp |z|a+ , (iii) there exists ϕ, |ϕ|
r0

(z ∈ );

such that lim supr→∞

1 ra

(14.16)

log |g(reiϕ )| ≤ 0.

Then G is bounded on , in fact, |g(z)| ≤ M, z ∈ , where M is as in item (i). We shall use the above proposition in the proof of Theorem 14.2.2 assuming that g is an entire function. In that case condition (ii) just means that we require g to be of order at most a. Proof Let h be the function defined by h(z) = g(z) exp(−σ za ),

(14.17)

where σ > 0 is fixed. First we show that item (i) in the above proposition holds true for h in place of g. Clearly, h is analytic on  and continuous on . To prove that h is also bounded by M on ∂, let h1 (z) = exp(−za ). Take z ∈ . Thus z = r exp(iθ ), where |θ | ≤ π/2a, and hence Re za = r a cos aθ

where |aθ | ≤ π/2.

It follows that 0 ≤ cos aθ ≤ 1, and thus |h1 (z)| = exp(−r a cos aθ ) ≤ 1. Thus h1 is uniformly bounded by one on . Since h(z) = g(z) exp(−σ zρ ) = g(z) (h1 (z))σ ,

z ∈ ,

we conclude that h is uniformly bounded by M on ∂. From (14.16) and the definition of h in (14.17), we see that item (ii) also holds for h in place of g, that is, for each  > 0 there exists r2 = r2 () such that |h(z)| < exp(|z|ρ+ ),

|z| ≥ r2 .

(14.18)

Now choose 0 <  < σ . Note that condition (iii) implies that there exists r1 = r1 () > 0 such that

  |g reiϕ | < exp r a ,

for r ≥ r1 .

(14.19)

294

14 Results from the Theory of Entire Functions

From (14.19) and the definition of h, it follows that there exist a constant C and a number r3 = r3 () such that   |h(reiϕ )| ≤ C exp ( − σ )r a

r ≥ r3 .

and therefore |h(reiϕ )| → 0 as r → ∞. In particular, h is bounded on ray ϕ. We define M• := sup{|h(reiϕ )| | r > 0} < ∞.

(14.20)

Put θ+ = π/2a and θ− = −π/2a, and define 1 = {reiθ ∈  | r > 0,

θ− < θ < ϕ},

2 = {reiθ ∈  | r > 0,

ϕ < θ < θ+ }.

By definition θ+ − θ− = π/a. Thus there exists a number a• > 1/2 such that ϕ − θ− < π/a•

and θ+ − ϕ < π/a• .

Moreover |h(z)| ≤ M˜ for all z ∈ ∂1 and also for all z ∈ ∂2 , where M˜ = max{M, M• }.

(14.21)

But then, applying Corollary VI.4.2 of [15] using the domains 1 and 2 separately, we see that |h(z)| ≤ M˜ for all z ∈ 1 and |h(z)| ≤ M˜ for all z ∈ 2 . We conclude that |h(z)| ≤ M˜ for all z ∈ . It remains to show that M˜ = M. According (14.21) we have M˜ ≥ M. Assume ˜ M > M. Then M˜ = M• > M, and it follows that |h| assumes it maximal value at r• exp(iϕ) for some r• , 0 < r• < ∞, an interior point of . Indeed, |h(r iϕ )| → 0 as r → ∞, and from the first paragraph of the proof we know that |h(z)| ≤ M < M• for all z ∈ ∂. But then the maximum principle will imply that h is a constant. ˜ and |h(z)| ≤ M for all z ∈ , and hence for all z ∈ C. Therefore M = M,   Proof of Theorem 14.2.2 We follow the same reasoning as in the proof Theorem 14.2.1 using Proposition 14.2.3 instead of Corollary VI.4.2 of [15]. Let g be the function given by (14.12). By construction, the function g is entire of order at most ρ and satisfies (14.14) with f = g. Since α = ρ, we know (see the remark directly after Proposition 14.2.3) that for each  > 0 there exists |g(z)| < exp(|z|α+ ),

|z| ≥ r0 .

(14.22)

Assume that α ≥ 1/2. Then ρ ≥ 1/2, because α = ρ by assumption. Note that in this case κ = 1 is not excluded. It suffices to prove the theorem for one specific open domain  = {z ∈ C | z ∈ ray (θ1 )} when κ = 1, and for a sector 



(bounded by ray (θj ) and ray (θj +1 ) for a given j ) and then repeat the argument for all other sectors of this form when k ≥ 2. Observe that by construction, we have that |g(z)| ≤ M for all z ∈ ∂. Thus condition (i) in Proposition 14.2.3 is satisfied. By (14.22) the same holds true for condition (ii). Furthermore, since (14.14) holds with g in place of f , condition (iii) in Proposition 14.2.3 is satisfied too. But then we can apply Proposition 14.2.3 to show that |g(z)| ≤ M for each z ∈ . Repeating the same argument for the other sectors we conclude that |g(z)| ≤ M for each z ∈ C, and hence, by Liouville’s theorem the function g is a constant. But then, by (14.12), it follows that f is a polynomial of degree at most m. Next, we assume α < 1/2, and thus ρ < 1/2 too. In particular, (14.22) holds with α replaced by 1/2, in other words, for each  > 0 there exists |g(z)| < exp(|z| 2 + ), 1

|z| ≥ r0 .

(14.23)

Furthermore, there exists at least one half-line ray (θ ) such that |g(z)| ≤ M for z = reiθ , r ≥ 0, and there exists a least one ϕ such that ray (ϕ) does not coincide with ray (θ ) and lim supr→∞ r1α log |g(reiϕ )| = 0. Since α < 1/2, the latter implies that lim sup r→∞

1 r 1/2

log |g(reiϕ )| = 0.

But then we can apply Proposition 14.2.3 with a = 1/2 to show that |g(z)| ≤ M for each z ∈ C. Hence, by Liouville’s theorem, g is a constant function and f is a polynomial of degree at most m.  

14.3 Applications of the Paley-Wiener Theorem We start with the following version of the Paley-Wiener theorem. Theorem 14.3.1 Let f be an entire function. Then the following two statements are equivalent: (a) f is of exponential type with type at most σ < ∞ and f is L2 -integrable along the imaginary axis; (b) there exists a non-zero L2 -integrable function ϕ, uniquely determined by f , such that  σ f (z) = e−zt ϕ(t) dt, z ∈ C. (14.24) −σ
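To make the equivalence concrete, the following numerical sketch (an illustrative aside, not part of the original text) takes the simplest choice φ ≡ 1 on [−1, 1], for which (14.24) gives f(z) = (e^z − e^{−z})/z; it then checks that log M(r; f)/r approaches the type σ = 1 on large circles and that the restriction f(iy) = 2 sin(y)/y is square integrable on the imaginary axis (with integral 4π by Plancherel).

```python
import numpy as np

# phi = 1 on [-1, 1]; then (14.24) gives f(z) = (exp(z) - exp(-z)) / z.
def f(z):
    z = np.asarray(z, dtype=complex)
    out = np.full(z.shape, 2.0, dtype=complex)      # limit value at z = 0
    nz = np.abs(z) > 1e-12
    out[nz] = (np.exp(z[nz]) - np.exp(-z[nz])) / z[nz]
    return out

# (a) exponential type at most 1: log M(r; f) / r should approach 1 from below.
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
for r in (20.0, 40.0, 80.0):
    print(r, np.max(np.log(np.abs(f(r * np.exp(1j * theta))))) / r)

# (b) square integrability along the imaginary axis: f(iy) = 2 sin(y) / y.
y = np.linspace(-500.0, 500.0, 2_000_001)
print(np.trapz(np.abs(f(1j * y)) ** 2, y))          # close to 4*pi = 12.566...
```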

Proof (a) ⇒ (b) Let f be an entire function of exponential type, and assume its type is at most σ . Then for every  > 0, there exists a constant M = M() such that |f (z)| ≤ Me(σ +)|z| .

(14.25)



It follows from the Paley-Wiener theorem, see Theorem 19.3 of [73] with the imaginary axis in place of the real line, that there is a ϕ = ϕ with ϕ in L2 [−(σ + ), σ + ] such that  f (z) =

σ + −(σ +)

e−zt ϕ(t) dt,

z ∈ C.

(14.26)

Since f is L2 -integrable along the imaginary axis, the representation (14.26) implies that ϕ is the inverse Fourier transform of the function g with g(y) = f (iy), y ∈ R, i.e.,  √ 2πϕ(t) =

∞ −∞

f (iy)eiyt dy

and hence ϕ is uniquely determined by f and independent of . This shows that ϕ has to vanish outside [−σ, σ ] and proves that f is given by (14.24). (b) ⇒ (a) Next, we prove the reverse statement. Let f be given by (14.24), where ϕ is a non-zero L2 -integrable function. Then there exists a positive constant M such that |f (z)| ≤ Meσ |z| ,

z ∈ C.

(14.27)

If ϕˆ denotes the Fourier transform of ϕ, then  √ 2π ϕ(y) ˆ = f (iy) =

σ −σ

e−iyt ϕ(t) dt,

y ∈ R.

Since ϕ belongs to L2 [−σ, σ ], it follows from the Plancherel theorem, see Theorem 9.13 of [73], that f is L2 -integrable along the imaginary axis. From (14.27) we know that the order ρ of f is at most 1. Next we prove that ρ = 1. Assume ρ = 1. Then the order ρ is strictly less than 1, and it follows from (14.5) that for any 0 < τ < σ , there exists a constant M(τ ) such that |f (z)| ≤ M(τ )eτ |z| ,

z ∈ C.

Again using the Paley-Wiener theorem (see [73, Theorem 19.3]), there exists a L2 integrable function ϕτ such that  f (z) =

τ

−τ

e−zt ϕτ (t) dt,

z ∈ C.

(14.28)

Define ϕτ to be zero on [−σ, σ ]\[−τ, τ ]. Then the two representations (14.24) and (14.28) show that 

σ

−σ

e−zt (ϕ(t) − ϕτ (t)) dt = 0,

z ∈ C.



The latter implies that ϕ = ϕτ . Thus ϕ is zero on [−σ, σ ]\[−τ, τ ] for each 0 < τ < σ . Hence ϕ = 0. But ϕ = 0 by assumption. Thus the order ρ of f is 1. It remains to prove that the type of f is at most σ . Using (14.27), we see that for each  > 0 there exists r() such that M(r; f ) < exp((σ + )r),

r ≥ r().

Since the order ρ of f is equal to one, comparing the above inequality with (14.5) shows that the type of f is finite and at most σ . This completes the proof of the theorem.   Note that the set of entire functions of the form (14.24) does not include non-zero polynomials. Remark 14.3.2 The proof that item (b) in Theorem 14.3.1 implies that the order ρ of the entire function f is 1 can also be obtained as a corollary of the PhragménLindelöf theorem [15, Theorem VI.4.1]. To see this note that (14.24) implies that  |f (z)| ≤ =

a

−a  a −a

|e−zt ||ϕ(t)| dt ≤ |ϕ(t)| dt < ∞,



a −a

e−t Re z |ϕ(t)| dt

z ∈ iR.

(14.29)

Hence f is uniformly bounded on the imaginary axis iR. Now assume that ρ < 1. Then we can apply Theorem 14.2.1 – a corollary of Phragmén-Lindelöf theorem – with α = 1, θ1 = π/2 and θ2 = 3π/2 to show that f is a constant. But then ϕ = 0 which is a contradiction. Thus ρ = 1. Next we introduce a class of entire functions that will play a major role in applications. See [74, Theorem 7.23] for an alternative definition using distributions with compact support. Definition 14.3.3 We say that an entire function f belongs to the Paley-Wiener class PW whenever f is given by  f (z) = p(z) + q(z)

a

−a

e−zt ϕ(t) dt,

z ∈ C,

(14.30)

where p and q = 0 are polynomials, 0 < a < ∞, and ϕ is a non-zero square integrable function on the interval [−a, a]. Theorem 14.3.1 allows us to give an alternative definition of the class PW. Proposition 14.3.4 An entire function f belongs to the class PW if and only if the following two conditions are satisfied: (i) f is of exponential type,



(ii) f is polynomially bounded on the imaginary axis, that is, there exist a nonnegative integer m and a positive constant M such that |f (z)| ≤ M(1 + |z|m )

for z ∈ iR.

(14.31)

Proof Let f ∈ PW. Write f = p + qh, where p and q are polynomials, h = 0, and  a h(z) = e−zt ϕ(t) dt, z ∈ C, −a

where ϕ is a non-zero square integrable function on the interval [−a, a]. By Theorem 14.3.1, (b) ⇒ (a), the function h is of exponential type. But then we can apply Corollary 14.1.3 to show that the same holds true for f . Thus item (i) is proved. By applying Remark 14.3.2 to h in place of f we see that h is uniformly bounded on the imaginary axis. The latter implies that f = p + qh is polynomially bounded on the imaginary axis, and (ii) is proved. The converse statement is also true, that is, if f is an entire function such that items (i) and (ii) are satisfied, then f admits a representation of the form (14.30). To see this, define g(z) =

m 1  f (j ) (0) j  f (z) − z . j! zm+1

(14.32)

j =0

The fact that f is an entire function of exponential type, implies that the same holds true for g. Furthermore, using (14.31) and the definition of g in (14.32), we see that along the imaginary axis we have the estimate     g(z) ≤ M min 1, 1 , |z|

z ∈ iR.

In particular, g is square integrable on the imaginary axis. But then an application of Theorem 14.3.1, (a) ⇒ (b), yields the existence of a non-zero integrable function ϕ ∈ L2 [−σ, σ ] such that  σ g(z) = e−zτ ϕ(τ ) dτ. −σ

Using the latter in (14.32) yields the representation (14.30) for f .

 

Corollary 14.3.5 If g ∈ PW and f = p + qg, where p and q are polynomials, q = 0, then f ∈ PW. Moreover, f and g have the same type. Proof Since g ∈ PW, we know from Definition 14.3.3 that  g(z) = p1 (z) + q1 (z)

a

−a

e−zt ϕ(t) dt,

z ∈ C,



where p1 and q1 are polynomials, q1 = 0 and ϕ is square integrable. But then  f (z) = (p(z) + q(z)p1 (z)) + q(z)q1(z)

a −a

e−zt ϕ(t) dt,

z ∈ C.

The fact that both q and q1 are non-zero implies that qq1 is a non-zero polynomial. Furthermore, p + qp1 is a polynomial. Using again Definition 14.3.3 we conclude that f ∈ PW. The fact that f and g have the same type follows from Corollary 14.1.3.   The next three lemmas provide formulas to compute the type of entire functions from the class PW. Lemma 14.3.6 Let a > 0, and let ϕ be a non-zero square integrable function on the interval [−a, a]. Put  f (z) =

a −a

e−zt ϕ(t) dt,

z ∈ C.

(14.33)

Then f ∈ PW , the function f is of exponential type, and its type σ is given by σ = max{σ1 , σ2 }, where σ1 := inf{τ ∈ [0, a] | ϕ|[τ,a] = 0 a.e.},

(14.34)

σ2 := inf{τ ∈ [0, a] | ϕ|[−a,−τ ] = 0 a.e.}.

(14.35)

In particular, f is of normal type. Proof From Definition 14.3.3 it is clear that f ∈ PW, and hence we know from Proposition 14.3.4 that f is an entire function of exponential type. Furthermore, from Theorem 14.3.1, (b) ⇒ (a), we know that f is L2 -integrable along the imaginary axis. Let α denote the type of f . Formula (14.33) implies that α ≤ σ , where σ is defined as in the statement of the lemma. Suppose that α < σ . Theorem 14.3.1, (a) ⇒ (b), shows that there is a unique ψ with ψ ∈ L2 [−α, α] such that  f (z) =

α

−α

e−zt ψ(t) dt,

z ∈ C.

(14.36)

From the representations (14.33) and (14.36) for f , and from the uniqueness property of the Laplace transform, it follows that the function ϕ has to vanish almost everywhere outside [−α, α]. Since α < σ , this contradicts the definition of σ . This shows that α = σ , Finally, since ϕ is non-zero as a L2 function, it follows that σ = max{σ1 , σ2 } > 0. Thus the type of f is positive, and hence f is of normal type.  



The next two lemmas extend the previous lemma to larger classes of entire functions. The first tells us that Lemma 14.3.6 remains true if ϕ is a non-zero integrable function, not necessarily square integrable. Lemma 14.3.7 Let a > 0, and let ϕ be a non-zero integrable function on the interval [−a, a]. Put  f (z) =

a −a

e−zt ϕ(t) dt,

z ∈ C.

(14.37)

Then f ∈ PW , the function f is of exponential type, and its type σ is given by σ = max{σ1 , σ2 }, where σ1 := inf{τ ∈ (0, a] | ϕ|[τ,a] = 0 a.e.},

(14.38)

σ2 := inf{τ ∈ (0, a] | ϕ|[−a,−τ ] = 0 a.e.}.

(14.39)

Proof We shall derive the lemma as a corollary of the preceding lemma. The proof will be split into two parts. In the first part we show that the entire function f defined by (14.37) belongs to the class PW. More precisely, we show that f can be written in the form (14.30) with ϕ square integrable. Part 1.

Put  f+ (z) =  ψ+ (t ) = −

a

e−zt ϕ(t ) dt,

0

a

ϕ(s) ds

 f− (z) =

(0 < t ≤ a),

t

0

−a

e−zt ϕ(t ) dt

ψ− (t ) =



t

−a

ϕ(s) ds

(z ∈ C), (−a ≤ t ≤ 0).

Note that both ψ+ and ψ− are differentiable, ψ+ on the interval [0, a] and ψ− on [−a, 0]. Furthermore, ψ+ (z) = ϕ(z)|[0,a] and ψ− (z) = ϕ(z)|[−a,0] . Finally, let ψ be the function on [−a, a] given by ψ(t) =

⎧ ⎨ψ+ (t)

0 < t ≤ a,

⎩ψ (t) −

−a ≤ t ≤ 0.

(14.40)

Using these facts it follows that  f (z) = c + z

a −a

e−zt ψ(t) dt,

z ∈ C,

(14.41)



where c is a constant. Indeed, we have  f (z) = f+ (z) + f− (z) = 

a

= 0

a 0

e−zt ψ+ (t) dt +

  a  = e−zt ψ+ (t) + z 0

e−zt ϕ(t) dt +



0 −a

a 0



0

−a

e−zt ϕ(t) dt

e−zt ψ− (t) dt

 e−zt ψ+ (t) dt +

  0  + e−zt ψ− (t) + z −a

 = −ψ+ (0) + ψ− (0) + z  = −ψ+ (0) + ψ− (0) + z  =c+z

Part 2.

a

−0

a 0 a −a

0 −a

e−zt ψ+ (t) dt + z

e−zt ψ− (t) dt



0 −a



e−zt ψ(t) dt

e−zt ψ(t) dt

e−zt ψ(t) dt.

This proves (14.41) with c = ψ− (0) + −ψ+ (0). Since ψ+ and ψ− are continuous on the closed intervals [0, a] and [−a, 0], respectively, it follows that ψ is square integrable on the [−a, a]. But then, using Definition 14.3.3, the identity (14.41) tells us that f ∈ PW. Since f ∈ PW, we know that f is of exponential type. It remains to compute its type σ . To do this let g be the entire function defined by  g(z) =

a −a

e−zt ψ(t) dt,

z ∈ C.

Thus f = c +zg. Since ψ is square integrable, Lemma 14.3.6 tells us that g is of exponential type and that its type σ is given by σ = max{σ1 , σ2 }, where σ1 := inf{τ ∈ (0, a] | ψ|[τ,a] = 0 a.e.}, σ2 := inf{τ ∈ (0, a] | ψ|[−a,−τ ] = 0 a.e.}. Moreover, from Corollary 14.1.3 and f = c + zg it follows that the type of f is equal tot the type of g. Finally, observe that for each τ ∈ (0, a] we have ψ|[τ,a] = 0 a.e. ⇐⇒ ϕ|[τ,a] = 0 a.e., ψ|[−a,−τ ] = 0 a.e. ⇐⇒ ϕ|[−a,−τ ] = 0 a.e..



 

We conclude that f has all desired properties.

Lemma 14.3.8 Let a > 0, and let η be a function of bounded variation on [−a, a]. Put  a e−zt dη(t), z ∈ C. (14.42) f (z) = −a

If f is not a constant, then f ∈ PW, the function f is of exponential type, and its type σ is given by σ = max{σ1 , σ2 }, where σ1 = inf{τ ∈ (0, a] | η is constant on (τ, a]},

(14.43)

σ2 = inf{τ ∈ (0, a] | η is constant on [−a, −τ )}.

(14.44)

Proof We follow the same line of reasoning as in the proof of the preceding lemma. The proof is split into two parts. In the first part we show that f admits a representation of the form  f (z) = c + z

a −a

e−zt ψ(t) dt,

z ∈ C,

(14.45)

where c is a constant and ψ is integrable. Part 1.

Put  f+ (z) =

a



e−zt dη(t),

f− (z) =

0

η+ (t) = η(t) − η(a)

(0 ≤ t ≤ a),

0

−a

e−zt dη(t)

(z ∈ C),

η− (t) = η(t) − η(−a)

(−a ≤ t ≤ 0).

Note that  f+ (z) =

a

e

−zt



0

 a  = e−zt η+ (t) + z 0

 f− (z) =

0

−a

a

dη(t) =

e−zt dη(t) =

0 a 0



e−zt dη+ (t) = −η+ (0) + z 0

−a

 0  = e−zt η− (t) + z −a

e−zt dη+ (t)

0 −a



a

0

e−zt η+ (t)dt

e−zt dη− (t) e−zt η− (t)dt = η− (0) + z



0 −a

e−zt η− (t)dt.



It follows that f (z) = f+ (z) + f− (z)



= −η+ (0) + η− (0) + z

a 0

e−zt η+ (t)dt + z



0 −a

e−zt η− (t)dt.

Thus (14.45) is proved with c = −η+ (0)+η− (0)

Part 2.

and ψ(t) =

⎧ ⎨η+ (t)

0 < t ≤ a,

⎩η (t) −

−a ≤ t ≤ 0.

(14.46)

Since η+ and η− are of bounded variation, these functions are integrable on [0, a] and [−a, 0], hence ψ is integrable on [−a, a]. Furthermore, since f is not a constant (14.45) tells us that ψ is non-zero. Put  a e−zt ψ(t) dt, z ∈ C. g(z) = −a

Since ψ is integrable and ψ is non-zero, we know from Lemma 14.3.7 that g ∈ PW. But then, by Corollary 14.3.5, the function f = c + zg also belongs to PW. Hence f is of exponential type, and from Corollary 14.1.3 it follows that the type of f is equal to the type σ of g. According Lemma 14.3.7 the type σ is given by σ = max{σ1 , σ2 }, where σ1 = inf{τ ∈ (0, a] | ψ|[τ,a] = 0 a.e.}, σ2 = inf{τ ∈ (0, a] | ψ|[−a,−τ ] = 0 a.e.}. On the other from the definition of ψ in (14.46) it follows that for each τ ∈ (0, a] we have η is constant on (τ, a] ⇐⇒ ψ|[τ,a] = 0 a.e., η is constant on [−a, −τ ) ⇐⇒ ψ|[−a,−τ ] = 0 a.e.. This completes the proof.

 

Lemmas 14.3.7 and 14.3.8 together with Corollary 14.3.5 yield the following corollaries.



Corollary 14.3.9 Let p and q be polynomials, q = 0, and let ϕ be a non-zero integrable function on [−a, a], where a > 0. Put  f (z) = p(z) + q(z)

a

−a

e−zt ϕ(t) dt,

z ∈ C.

Then f belongs to the class PW. Corollary 14.3.10 Let p and q be polynomials, q = 0, and let  f (z) = p(z) + q(z)

a

−a

e−zt dη(t),

z ∈ C,

where a > 0 and η is of bounded variation. If f is not a polynomial, then f ∈ PW. We conclude this section with another set of entire functions belonging to the Paley-Wiener class PW. Proposition 14.3.11 Let f be an entire function of order one, and assume that (a) f is polynomially bounded on the imaginary axis, i.e., there exist an integer m and a positive constant M such that |f (z)| ≤ M(1 + |z|m )

for z ∈ iR.

(14.47)

log |f (−r)| . r

(14.48)

(b) the following lim sups exist and are finite: lim sup r→∞

log |f (r)| r

and

lim sup r→∞

Then f is of finite type and f belongs to the class PW. Proof Given item (a) it follows from Proposition 14.3.4 that it suffices to show that f is of exponential type. But the order of f is one, and thus we only have to prove that the type of f is finite. With m as in (14.47) define the function g by (14.12), i.e., f (j ) (0)  1  zj . f (z) − m z j! m−1

g(z) =

j =0

By construction, since the order of f is one, the function g is entire of order one too, and hence for every  > 0, there is a number r0 such that |g(z)| < exp(|z|1+ ),

|z| ≥ r0 .

(14.49)



Furthermore, the fact that f satisfies (14.48), implies that the same holds for g. Using (14.47), we see that along the imaginary axis we have the estimate |g(z)| ≤ M,

z ∈ iR.

(14.50)

Since by construction g is of exponential type if and only if f is of exponential type, it suffices to prove that g is of finite type. In what follows, the first lim sup in (14.48) is denoted by σ+ and the second by σ− . We first consider the growth of g in the right half plane. Let h+ (z) = g(z) exp(−σ+ z).

(14.51)

Since exp(−σ+ z) is bounded in the closed right half plane by one and g satisfies (14.49) and (14.50), we see that the same estimates hold for h+ (z). Furthermore, observe that lim sup log |h+ (r)| ≤ lim sup log |g(r)| − σ+ = 0. r→∞

r→∞

Thus we can apply Proposition 14.2.3 with a = 1 and with  the right half plane. It follows that h+ is bounded on the right half plane, in fact |h+ (z)| ≤ M for Re z ≥ 0. This shows using (14.51) that |g(z)| ≤ Meσ+ Re z

for Re z ≥ 0.

(14.52)

Similarly, for the left half plane, we can define h− (z) = g(z) exp(σ− z),

(14.53)

to conclude that |g(z)| ≤ Me−σ− Re z

for Re z ≤ 0.

(14.54)

It now follows from (14.52) and (14.54) that g is an entire function of order one and of finite type.   Remark 14.3.12 Let f be an entire function of order strictly less than one, and assume that conditions (a) and (b) in the previous proposition are satisfied. Then f is a polynomial of degree at most m. This follows from Theorem 14.2.1 by applying the latter theorem with α = 1 and κ = 2. At the end of the next section (see Proposition 14.4.7) we shall show that the three conditions appearing in Proposition 14.3.11 completely describe the class PW.



14.4 The Phragmén-Lindelöf Indicator Function Let f be an entire function of finite non-zero order ρ. In order to describe the asymptotic behaviour of f along rays in the complex plane Phragmén and Lindelöf introduced the indicator function hf (θ ) which is the 2π periodic function defined by hf (θ ) = lim sup r→∞

log |f (reiθ )| , rρ

−π < θ ≤ π.

(14.55)
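As a numerical aside (not part of the original text), the limsup in (14.55) can be probed by sampling log|f(re^{iθ})|/r^ρ along a fixed ray; for f2(z) = z − e^{−z}, which has order one, the finite-radius values reproduce the indicator recorded in (14.59) below, up to the slow log r / r decay on rays where the indicator vanishes.

```python
import numpy as np

def indicator_estimate(logabs_f, theta, rho, radii):
    """Finite-radius surrogate for h_f(theta) = limsup_r log|f(r e^{i theta})| / r^rho; see (14.55)."""
    return max(logabs_f(r * np.exp(1j * theta)) / r ** rho for r in radii)

# f2(z) = z - exp(-z): order 1; expected indicator 0 for |theta| <= pi/2, -cos(theta) beyond.
logabs_f2 = lambda z: np.log(np.abs(z - np.exp(-z)))
radii = np.linspace(50.0, 400.0, 200)
for theta in (0.0, np.pi / 2, 3 * np.pi / 4, np.pi):
    print(round(theta, 3), indicator_estimate(logabs_f2, theta, rho=1.0, radii=radii))
# Expected: ~0, ~0, ~0.707 (= -cos(3*pi/4)), ~1.
```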

If, in addition, f is of finite type, then hf is bounded (see Lemma 14.4.1 below) and in that case hf is continuous as well (see Theorem 14.5.3). For the case when f is of exponential type a different proof can be given for hf being bounded, using the identity (14.73) in Theorem 14.5.1; see Corollary 14.5.2 in Sect. 14.5. Lemma 14.4.1 Let f be an entire function of finite non-zero order. If, in addition, f is of finite type, then hf is bounded. Proof Suppose that f is of finite type σ . To prove that hf is bounded, first observe that by comparing the formula for σ in (14.6) with (14.55), we have that hf (θ ) ≤ σ . To prove a lower bound for hf , we fix a real number R > 0 and apply the minimum modulus theorem (see Theorem 11 on page 21 of [55]) with respect to the disk D = {z ∈ C | |z| ≤ 4eR}. This allows us to conclude that there exists r ∈ [R, 2R] such that on the circle |z| = r we have log |f (z)| > −H (η) log M(f ; 4eR), 3e and 0 < η < where H (η) = 2 + log 2η

3e 2.

|z| = r,

This yields

log M(f ; 4eR) log |f (z)| > −H (η) , ρ (4eR) (4eR)ρ

|z| = r.

Since R < r < 2R, we have (2er)ρ < (4eR)ρ < (4er)ρ , and this shows that log |f (z)| log M(f ; 4eR) > −(4e)ρ H (η) , rρ (4eR)ρ

|z| = r.

From the fact that f is of order ρ and of finite type, we also know that lim sup R→∞

log M(f ; 4eR) = σ. (4eR)ρ

(14.56)



Together with (14.56), this shows that there exist R0 > 0 and positive constants m and C such that for any R ≥ R0 , there exists r ∈ [R, 2R] such that min |f (z)| ≥ me−Cr . ρ

|z|=r

(14.57)

Thus for this r we have

log |f (reiθ )| ≥ log min |f (z)| |z|=r

ρ ≥ log me−Cr = log m − Cr ρ . Since C is independent of r, and for every R ≥ 1 there exist such a r ∈ [R, 2R], we obtain that hf (θ ) = lim sup r→∞

log |f (reiθ )| ≥ −C. rρ  

This completes the proof that hf is bounded.

In general, the lim sup in (14.55) cannot be replaced by an ordinary limit. To see this, take a function with infinitely many zeros on a ray with angle θ , for example, f (z) = 1 − exp(−z) and θ = π/2. Note that the order of f is equal to one. Furthermore, since θ = π/2, we have reiθ = r cos θ + ir sin θ = ir, and thus 1 1 log |1 − exp (−ir)|2 = log |1 − cos(−r) − i sin(−r)|2 2 2

1 = log (1 − cos r)2 + (sin r)2 2 1 = log 2 (1 − cos r) = −∞ for r = (2π)n, n = 0, 1, 2, . . . . 2

log |f (reiθ )| =

On the other hand, for this choice of f and with θ = π/2 we have hf (θ ) = 0 because the order of f is one. Thus, in general, we cannot take the limit in (14.55). Let f1 and f2 be the entire functions appearing in (14.7). A straightforward computation shows that hf1 (θ ) = s cos nθ, −π < θ ≤ π. * 0 when − π/2 ≤ θ ≤ π/2, hf2 (θ ) = − cos θ when π/2 ≤ θ ≤ 3π/2.

(14.58) (14.59)

More generally (see [55, page 52]), let f (z) = exp ((a − ib)z), where a and b are real numbers. Then |f (reiθ | = exp ((a cos ρθ + b sin ρθ )r ρ ) , and hence hf (θ ) = a cos ρθ + b sin ρθ for each − π < θ ≤ π.



The next proposition is an addition to Proposition 14.1.2. Proposition 14.4.2 Let f be an entire function of finite non-zero order ρ, and let q be a non-zero polynomial. Then f and qf have the same indicator function. Proof Since q is a non-zero polynomial, we know from Proposition 14.1.2 that qf has the same order as f . In particular, the indicator functions hf and hqf are well defined. Furthermore, using the inequalities in (14.9), we see that it suffices to show hf = hqf for the case when q(z) = zn for some integer n > 0. In that case log |(qf )(reiθ )| = log |r n f (reiθ )| = n log r + log |f (reiθ )|. Since f and qf have the same order, a direct application of (14.55) shows that   hf = hqf . The next two propositions, which are additions to Lemmas 14.3.6 and 14.3.8, provide a large set of entire function f for which one has an explicit formula for the indicator function hf (θ ). The first proposition deals with entire functions belonging the class PW; see the paragraph before Proposition 14.3.4. Proposition 14.4.3 Let f ∈ PW, that is, f is an entire function given by  a e−zt ϕ(t) dt, z ∈ C, (14.60) f (z) = p(z) + q(z) −a

where p and q = 0 are polynomials, 0 < a < ∞, and ϕ is a non-zero square integrable function on the interval [−a, a]. Define σ = max{σ1 , σ2 }, where σ1 and σ2 are given by (14.34) and (14.35), respectively. Assume σ1 > 0. Then f is an entire function of exponential type, its type is σ , and its indicator function hf is given by the following formulas. (a) If σ2 > 0, then hf (θ ) =

⎧ ⎨σ2 cos θ

when − π/2 ≤ θ ≤ π/2,

⎩−σ cos θ 1

when π/2 ≤ θ ≤ 3π/2.

(b) If σ2 = 0 and the polynomial p is non-zero, then ⎧ ⎨0 when − π/2 ≤ θ ≤ π/2, hf (θ ) = ⎩−σ cos θ when π/2 ≤ θ ≤ 3π/2.

(14.61)

(14.62)

1

(c) If σ2 = 0 and p = 0, then ⎧ ⎨−σ3 cos θ hf (θ ) = ⎩−σ cos θ 1

when − π/2 ≤ θ ≤ π/2, when π/2 ≤ θ ≤ 3π/2,

(14.63)



where σ3 is defined by σ3 := sup{τ ∈ [0, a] | ϕ|[0,τ ] = 0 a.e.}. In particular, hf is continuous on −π/2 ≤ θ ≤ 3π/2. Proof The fact that f is an entire function of exponential type with type σ follows from Corollary 14.1.3 and Lemma 14.3.6. Proof of Item (a) From item (ii) in Proposition 14.3.4 we know that f is polynomially bounded on the imaginary axis, and thus hf (±π/2) = 0. Next we compute the indicator function hf in the left half plane. Given Lemma 14.3.6, it follows from (14.60) that for every  > 0 there exists a r0 > 0 such that |f (reiθ )| ≤ Mer e−rσ1 cos θ ,

r ≥ r0 ,

π/2 ≤ θ ≤ 3π/2.

Furthermore, from the definition of σ1 , it follows that hf (π) = σ1 > 0. Suppose that there exist α with π/2 0 and h(α) +  < (−σ1 + δ) cos α. Put g(z) = e(σ1 −δ)z f (z). Consider g in the sector given by  = {z ∈ C | |π − arg z| < |π − α|}. Since by construction |g(z)| ≤ M for all z ∈ ∂, it follows from a corollary of the Phragmén-Lindelöf theorem, see Corollary VI.4.2 of [15], that |g(z)| ≤ M for all z ∈ . However, this implies that |f (−r)| ≤ Me(σ1−δ)r ,

r ≥ r0 .

A contradiction to hf (π) = σ1 . This shows that hf (θ ) = −σ1 cos θ for π/2 ≤ θ ≤ 3π/2. In order to prove (14.61) for the right half plane we argue in similar way as in the preceding paragraph, replacing the left half plane by the right half plane. Given Lemma 14.3.6, it follows from (14.60) that for every  > 0 there exists a r0 such that |f (reiθ )| ≤ Mer erσ2 cos θ ,

r ≥ r0 ,

−π/2 ≤ θ ≤ π/2.

Furthermore, from the definition of σ2 , it follows that hf (0) = σ2 > 0. Suppose that there exist α with −π/2 0 and hf (α) +  < (σ2 − δ) cos α, and put g(z) = e−(σ2 −δ)z f (z). Consider g in the sector given by  = {z ∈ C | | arg z| < |α|}. Since by construction |g(z)| ≤ M for all z ∈ ∂, it follows from a corollary of the Phragmén-Lindelöf theorem, see Corollary VI.4.2 of [15], that |g(z)| ≤ M for all z ∈ . But, this implies that |f (r)| ≤ Me(σ2 −δ)r ,

r ≥ r0 .

A contradiction to h(0) = σ2 . This shows that hf (θ ) = σ2 cos θ for −π/2 ≤ θ ≤ π/2. This completes the proof of item (a). Proof of Item (b) The computation of hf in the left half plane follows the same line of reasoning as in the proof of item (a). The argument for the right half plane is different. Since σ2 = 0 and p = 0, there exists a R > 0 such that f is bounded from below in the right half plane Re z > R. On the other hand, if n is the degree of p, then for some constant M we have |f (z)| ≤ M|z|n for Re z ≥ 0. Together these two results imply that hf (θ ) = 0 for −π/2 ≤ θ ≤ π/2 and item (b) is proved. Proof of Item (c) Finally, to prove (14.63), observe that if σ2 = 0 and p = 0, then it follows from (14.60) that for every  > 0 there exists a r0 such that |f (reiθ )| ≤ Mer e−rσ3 cos θ ,

r ≥ r0 , −π/2 ≤ θ ≤ π/2.

Furthermore, from the definition of σ3 , it follows that hf (0) = −σ3 ≤ 0. Suppose that there exist α with −π/2 < α < π/2 such that hf (α) < −σ3 cos α. Given  > 0, we choose a positive δ such that σ3 + δ > 0 and hf (α) +  < −(σ3 + δ) cos α. Put g(z) = e(σ3 +δ)z f (z). Consider g in the sector given by  = {z ∈ C | | arg z| < |α|}.



Since by construction |g(z)| ≤ M for all z ∈ ∂, it follows from a corollary of the Phragmén-Lindelöf theorem, see Corollary VI.4.2 of [15], that |g(z)| ≤ M for all z ∈ . But, this implies that |f (r)| ≤ Me−(σ3+δ)r ,

r ≥ r0 .

A contradiction to the assumption that h(0) = σ3 . This shows that hf (θ ) = −σ3 cos θ for −π/2 ≤ θ ≤ π/2. Together with the computation of hf in the left half plane, this completes the proof of (14.63).   Proposition 14.4.4 Let a > 0, and let η ∈ NBV [0, a], i.e., η is a function of bounded variation, normalised such that η(0) = 0 and η is continuous from the right on the open interval (0, a). Put  f (z) = p(z) + q(z)

a

e−zt dη(t),

z ∈ C,

(14.64)

0

where p and q = 0 are polynomials. Let σ1 be given by (14.43), and assume σ1 > 0. Then f is an entire function of exponential type with type σ1 , and its indicator function hf is given by the following formulas. (a) If p = 0, then hf (θ ) =

⎧ ⎨0

when − π/2 ≤ θ ≤ π/2,

⎩−σ cos θ 1

when π/2 ≤ θ ≤ 3π/2.

⎧ ⎨−σ3 cos θ

when − π/2 ≤ θ ≤ π/2,

⎩−σ cos θ 1

when π/2 ≤ θ ≤ 3π/2,

(14.65)

(b) If p = 0, then hf (θ ) =

(14.66)

where σ3 is defined by σ3 := sup{τ ∈ [0, a] | η|[0,τ ] = 0}. In particular, hf is continuous on −π/2 ≤ θ ≤ 3π/2. Proof The fact that f is an entire function of exponential type and that its type is σ1 follows from Corollary 14.1.3 and Lemma 14.3.8. Proof of Item (a) First observe that the function f is polynomially bounded on the imaginary axis to conclude that hf (±(π/2)i) = 0. Next compute the indicator function hf in the left half plane. Given Lemma 14.3.8, it follows from (14.64) that for every  > 0 there exists a r0 > 0 such that |f (reiθ )| ≤ Mer e−rσ1 cos θ ,

r ≥ r0 , π/2 ≤ θ ≤ 3π/2.



Furthermore, from the definition of σ1 , it follows that hf (π) = σ1 . Similarly as in the proof of Proposition 14.4.3, we have hf (θ ) = −σ1 cos θ for π/2 ≤ θ ≤ 3π/2. If p = 0, then there exists a R > 0 such that f is bounded from below in the right half plane Re z > R. This shows that hf (θ ) = 0 for −π/2 ≤ θ ≤ π/2, and completes the proof of (14.65). Proof of Item (b) If p = 0, then it follows from (14.64) that for every  > 0 there exists a r0 such that |f (reiθ )| ≤ Mer e−rσ3 cos θ ,

r ≥ r0 , −π/2 ≤ θ ≤ π/2.

Similarly as in the proof of Proposition 14.4.3, we have hf (θ ) = −σ3 cos θ for −π/2 ≤ θ ≤ π/2. This completes the proof of the proposition.   Remark 14.4.5 Let f be an entire function of exponential type, and assume that f is polynomially bounded on the imaginary axis. In other words, let f belong to the class PW. Then we know that f is of the form (14.60), and hence its indicator function hf can be computed by using formulas (14.61)–(14.63) in Proposition 14.4.3. Moreover, in this case hf is uniquely determined by its values at θ = 0 and θ = π. In fact, from (14.61)–(14.63) it follows that ⎧ ⎨hf (0) cos θ when − π/2 ≤ θ ≤ π/2, hf (θ ) = (14.67) ⎩−h (π) cos θ when π/2 ≤ θ ≤ 3π/2. f

The above formula holds in greater generality; see item (2) in Theorem 14.8.1. To illustrate the above remark, let us use this remark to obtain the indicator functions corresponding to the functions f1 (z) = exp(sz)

and f2 (z) = z − exp(−z).

In the first identity s is any real number. Both f1 and f2 are of exponential type and uniformly bounded on the imaginary axis. Hence, by Proposition 14.3.4, the functions f1 and f2 belong to the class PW. Furthermore, we have ⎧ ⎨f1 (r) = exp(sr) when θ = 0, f1 (reiθ ) = ⎩f (−r) = exp(−sr) when θ = π. 1

(14.68)

Using the definition of the indicator function in (14.55), we see that (14.68) implies that hf1 (0) = s and hf1 (π) = −s. But then we can apply (14.67) to obtain hf1 (θ ) = s cos θ,

−π < θ ≤ π.

(14.69)



Analogously, f2 (reiθ ) =

⎧ ⎨f2 (r) = r − exp(−r)

when θ = 0, ⎩f (−r) = −r − exp(r) when θ = π. 2

(14.70)

It follows that hf2 (0) = 0 and hf1 (π) = 1. Again applying (14.67) we obtain that & hf2 (θ ) =

0

when − π/2 ≤ θ ≤ π/2,

− cos θ when π/2 ≤ θ ≤ 3π/2.

Note that the latter formula proves (14.59), while (14.69) yields (14.59) for the case when n = 1. Remark 14.4.6 Note that the sum of two entire functions of finite non-zero order does not have to be of finite non-zero order. To see this let f1 be an entire function of exponential type, and take f2 = p − f1 , where p is a polynomial. Then we know from Corollary 14.1.3 that f2 is also of exponential type. Thus f1 and f2 are both of order one, but f1 + f2 is a polynomial, and hence f1 + f2 is of zero order. Thus, if hf1 and hf2 are well defined, it does not follow that hf1 +f2 is well defined. We shall come back to this issue in Lemma 14.7.2. See Theorem 14.6.5 for the indicator function of a product of entire functions. We conclude this section with two additional results. The first result is an addition to Proposition 14.3.11, and the second result concerns the transformation f (z) → f (z + z0 ). Proposition 14.4.7 An entire function f belongs to the class PW if and only if the following three conditions are satisfied: (i) f has order one, (ii) f is polynomially bounded on the imaginary axis, (iii) the following lim sups exist and are finite: lim sup r→∞

log |f (r)| r

and

lim sup r→∞

log |f (−r)| . r

(14.71)

Proof If the three conditions are fulfilled, then we know from Proposition 14.3.11 that f belongs to the class PW. To prove the reverse statement, assume that f ∈ PW. Then we know from item (i) in Proposition 14.3.4 that f is of exponential type, and hence f has order one. Furthermore, item (ii) in Proposition 14.3.4 is the just same as item (ii) in the present proposition, Finally, since f ∈ PW, it follows from Lemma 14.4.1 that the indicator function hf (θ ) is well defined and bounded. In particular, the lim sups in (14.71) are well defined and are finite. This completes the proof.  



Proposition 14.4.8 Let f be an entire function from the class PW, and let z0 ∈ C. Then the entire function f(z) = f (z + z0 ) also belongs the class PW, and the functions f and fare of the same type and have the same indicator function. Proof According to Definition 14.3.3, the function f is given by  f (z) = p(z) + q(z)

a

−a

e−zt ϕ(t) dt,

z ∈ C,

(14.72)

where p and q = 0 are polynomials, 0 < a < ∞, and ϕ is a non-zero square integrable function on the interval [−a, a]. From (14.72) it follows that f(z) = f (z + z0 ) = p(z + z0 ) + q(z + z0 )  =p (z) +  q (z)

a −a

e−zt  ϕ (t) dt,



a

−a

e−(z+z0 )t ϕ(t) dt

z ∈ C,

where p (z) = p(z + z0 ),

 q (z) = q(z + z0 ),

ϕ (t) = ez0 t ϕ(t). 

Obviously, p  and  q are polynomials,  q = 0, and  ϕ is a non-zero square integrable function on the interval [−a, a]. But then Definition 14.3.3 tells us that f belongs to the class PW. Next put σ1 := inf{τ ∈ (0, a] | ϕ|[τ,a] = 0 a.e.}, σ2 := inf{τ ∈ (0, a] | ϕ|[−a,−τ ] = 0 a.e.}, σ2 be the analogous quantities with ϕ being replaced by  ϕ . Obviously, and let  σ1 and  since ϕ(t) = 0 if and only if  ϕ (t) = 0, we have σ1 =  σ1

and σ2 =  σ2 .

Furthermore σ := max{σ1 , σ2 } is equal to  σ := max{ σ1 ,  σ2 }, and we can apply Proposition 14.4.3 to show that f and fhave the same type and the same indicator function.  

14.5 Properties of the Indicator Function


14.5 Properties of the Indicator Function In this section we present a number of results involving various properties of indicator functions. Theorem 23 on page 53 of [55] implies that the indicator function hf satisfies the following fundamental trigonometric convexity estimate. Theorem 14.5.1 Let f be an entire function of finite non-zero order ρ. Then the indicator function hf of f defined by (14.55) satisfies the relation hf (θ1 ) sin ρ(θ2 −θ3)+hf (θ2 ) sin ρ(θ3 −θ1 )+hf (θ3 ) sin ρ(θ1 −θ2 ) ≤ 0

(14.73)

for all θ1 < θ2 < θ3 with θ3 − θ1 < π/ρ. From the fundamental relation (14.73), one obtains a number of useful analytic properties of the indicator function hf . The argument of proof is similar to those used for convex functions but now exploiting the trigonometric convexity of the indicator function expressed by (14.73). Using the identity sin(2x) + sin(2y) − sin(2(x + y)) = 4 sin x sin y sin(x + y),

x, y ∈ R,

with 2x = ρ(θ2 − θ1 ) and 2y = ρ(θ3 − θ2 ) we obtain sin(ρ(θ2 − θ1 )) + sin(ρ(θ3 − θ2 )) − sin(ρ(θ3 − θ1 )) =       ρ(θ2 − θ1 ) ρ(θ3 − θ2 ) ρ(θ3 − θ1 ) = 4 sin sin sin . 2 2 2

(14.74)

Here, as Theorem 14.5.1, we assume that θ1 < θ2 < θ3 and θ3 − θ1 < π/ρ. From (14.73) we know that hf (θ2 ) sin(ρ(θ3 − θ1 )) − hf (θ3 ) sin(ρ(θ2 − θ1 )) ≤ hf (θ1 ) sin(ρ(θ3 − θ2 )). (14.75) Together the identities (14.74) and (14.75) yield hf (θ2 ) sin(ρ(θ3 − θ1 )) − hf (θ3 ) sin(ρ(θ2 − θ1 ))+   + hf (θ1 ) sin(ρ(θ2 − θ1 )) − sin(ρ(θ3 − θ1 )) ≤   ≤ hf (θ1 ) sin(ρ(θ3 − θ2 )) + sin(ρ(θ2 − θ1 )) − sin(ρ(θ3 − θ1 ) =       ρ(θ3 − θ2 ) ρ(θ3 − θ1 ) ρ(θ2 − θ1 ) sin sin . = 4hf (θ1 ) sin 2 2 2



Dividing both sides of the preceding inequality by sin(ρ(θ1 − θ2 )) sin(ρ(θ3 − θ2 )) yields hf (θ3 ) − hf (θ1 ) hf (θ2 ) − hf (θ1 ) ≤ + sin(ρ(θ2 − θ1 )) sin(θ3 − θ1 ) ρ(θ − θ ) ρ(θ − θ ) −1 ρ(θ − θ )  3 1 2 1 3 2 cos cos + hf (θ1 ) sin , 2 2 2 (14.76) for θ1 < θ2 < θ3 and 0 < θ3 − θ1 < π/ρ. The next corollary, which a special case of Lemma 14.4.1, is proved by using Theorem 14.5.1. Corollary 14.5.2 If f is an entire function of exponential type, then the indicator function hf of f is bounded. Proof We will show that min hf (θ ) ≥ − max hf (θ ).

−π≤θ≤π

−π≤θ≤π

(14.77)

Since we have already seen that f is bounded above, this proves the lemma. To prove (14.77), we use (14.73) with ρ = 1 and θ3 − θ2 = π/2. This yields hf (θ3 − π/2) sin(θ3 − θ1 ) − hf (θ3 ) sin(θ3 − θ1 − π/2) ≤ hf (θ1 ).

(14.78)

Next choose θ3 such that sin(θ3 −θ1 ) < 0 and sin(θ3 −θ1 −π/2) = cos(θ3 −θ1 ) > 0. Then, we obtain from (14.78) hf (θ1 ) ≥ hf (θ3 − π/2) sin(θ3 − θ1 ) − hf (θ3 ) cos(θ3 − θ1 )   ≥ max hf (θ ) sin(θ3 − θ1 ) − cos(θ3 − θ1 ) −π≤θ≤π

≥ − max hf (θ ). −π≤θ≤π

This proves (14.77) and completes the proof of the lemma.

 

Using identity (14.76), one can derive the following result, see pages 54–57 of [55]. Theorem 14.5.3 Let f be an entire function of order ρ, and assume that the indicator function hf defined by (14.55) is bounded. Then hf is continuous, has a derivative from the left and from the right at every point, and the right hand derivative is greater than or equal to the left hand derivative. The next result plays an important role in the resolvent estimates in this book. The theorem appears in [55, page 56] without proof.



Theorem 14.5.4 Let f be an entire function of order ρ, and assume that the indicator function hf is bounded. If hf has a local maximum or minimum at θ0 , then hf (θ ) ≥ h(θ0 ) cos ρ(θ − θ0 ),

|θ − θ0 | ≤ π/ρ.

(14.79)

Proof Let h := hf . Let h+ denote the right hand derivative and h− the left hand derivative which both exist by Theorem 14.5.3. This theorem also tells us that h+ (θ0 ) = h− (θ0 ) = 0 if h has a maximum at θ = θ0 . On the other hand, if h has a minimum at θ = θ0 , then we only know that h+ (θ0 ) ≥ 0 ≥ h− (θ0 ). Set θ1 = θ0 in (14.76) and let θ2 converge to θ0 from above. For the left hand side this yields lim

θ2 ↓θ0

1 1 h(θ2 ) − h(θ0 ) h(θ2 ) − h(θ0 ) θ2 − θ0 = lim = h+ (θ0 ) ≥ 0. sin(ρ(θ2 − θ0 )) θ2 ↓θ0 θ2 − θ0 sin(ρ(θ2 − θ0 )) ρ ρ

Using θ1 = θ0 and θ2 ↓ θ0 in the right hand side of (14.76), then shows that ρ(θ − θ ) −1 ρ(θ − θ )  h(θ3 ) − h(θ0 ) 3 0 3 0 + h(θ0 ) sin cos ≥ sin(ρ(θ3 − θ0 )) 2 2 ≥

1  h (θ0 ) ≥ 0. ρ +

Since sin 2x = 2 sin x cos x and cos 2x = 1 − 2(sin x)2 , this yields h(θ3 ) ≥ h(θ0 ) cos ρ(θ3 − θ0 )

for 0 ≤ θ3 − θ0 ≤ π/ρ.

(14.80)

Note the boundary cases θ3 − θ0 = 0 and θ3 − θ0 = π/ρ are satisfied trivially. Finally, taking θ3 = θ0 in (14.76) and letting θ2 converge to θ0 from below, one obtains in a similar way the estimate h(θ1 ) ≥ h(θ0 ) cos ρ(θ1 − θ0 )

for

− π/ρ ≤ θ1 − θ0 ≤ 0.

Together (14.80) and (14.81) prove the estimate (14.79).

(14.81)  

In order to develop a precise calculus for indicator functions that will keep track of growth properties of products and sums of entire functions of order ρ, the following results, see Theorem 28 on page 71 and Theorem 31 on page 73 of [55], are crucial. Theorem 14.5.5 Let f be an entire function of finite non-zero order ρ and of finite type, and let the indicator function hf be defined by (14.55). Then hf is bounded, and for every  > 0, there exists a positive constant R such that log |f (reiθ )| < [hf (θ ) + ]r ρ

− π < θ ≤ π, r ≥ R.

(14.82)



Furthermore, if −π < θ∗ ≤ π is given, then for every  > 0 and 0 < ω < 1, there exists a δ > 0 and a sequence of intervals [rn , rn (1 + δ)] with rn → ∞ as n → ∞, such that for every n the following lower bound log |f (reiθ∗ )| > [hf (θ∗ ) − ]r ρ ,

rn ≤ r ≤ rn (1 + δ)

(14.83)

holds except on a set of measure not exceeding ωδrn . As mentioned in [55, page 73] estimate (14.83) is due to S.N. Bernstein [8].

14.6 Entire Functions of Completely Regular Growth To describe the intimate connection between the distribution of zeros and the growth properties of an entire function f , we recall the notion of an entire function of completely regular growth (see Chapter III of [55]). Definition 14.6.1 Let f be an entire function of finite non-zero order ρ and of normal type. Then f is said to be of completely regular growth if the following limit lim∗

r→∞

log |f (reiθ )| rρ

exists for

− π < θ ≤ π,

(14.84)

takingvalues in some set Eθ ⊂ R+ where lim∗ means that r tends to infinity without  of relative length zero, i.e., limr→∞ r −1 m Eθ ∩ (0, r) = 0. Equivalently, an entire function f of finite non-zero order ρ and of normal type is of completely regular growth if there exists disks D(λk , rk ) with λk ∈ C and rk > 0, k = 1, 2, . . ., satisfying

rk = o(r)

(14.85)

|λk |≤r

such that log |f (reiθ )| = hf (θ )r ρ + o(r ρ )

for reiθ ∈ ∪ D(λk , rk ) k≥1

(14.86)

as r → ∞, uniformly in θ , where hf denotes the indicator function of f defined in (14.55). We refer to Chapter V of [55] (see also Chapter 7 of [12]) for a complete discussion of entire functions of completely regular growth. The functions f1 and f2 defined by (14.7) are both of finite order and of finite type. Furthermore, for these two functions the limit in (14.84) exists for each θ . In particular, f1 and f2 are both of completely regular growth.



There are several different definitions of entire functions of completely regular growth that all turn out to be equivalent (see [3, Section 5.6]) and [38]). For instance, for entire functions of order ρ and finite type σ , Gol’dberg [37] introduced the lower indicator function gf of f by gf (θ ) = sup

5 lim inf

C∈C r→∞, reiθ ∈C

log |f (reiθ )| 6 , rρ

where C is a collection of subsets of the complex plane such that C ∈ C if C can be covered by a union of disks Bδj (zj ) such that lim

r→∞

1 δj = 0. r |zj | 0.

In particular, (14.88) is satisfied. The identity (14.89) follows from Remark 14.4.5. Indeed, using (14.67) we have hf (π/2) = hf (0) cos

π 2

=0

π and hf (−π/2) = hf (0) cos − = 0. 2

Hence hf (π/2) and hf (−π/2) are both zero, and thus (14.89) is satisfied trivially. But then Theorem 14.6.2 tells us that f is of completely regular growth and that (14.87) is satisfied.   We present an example to illustrate the preceding theorem. The example is used in Sect. 7.2. 1 Example 14.6.4 Let f (z) = z − 0 e−zt dη(t), where η is a function of bounded variation, more precisely η ∈ NBV [0, 1]. Using Lemma 14.3.8 we see that f is of exponential type. Furthermore, f is polynomially bounded on the imaginary axis.


Thus f ∈ PW, and hence f is of completely regular growth. If 1 belongs to the essential support of η, i.e.,

sup{ t ∈ [0, 1] | ∫_{t−ε}^{t} |dη(s)| ≠ 0 for every ε > 0 } = 1,

then the indicator function of f is given by

h_f(θ) = 0 when −π/2 ≤ θ ≤ π/2,    h_f(θ) = −cos θ when π/2 ≤ θ ≤ 3π/2.
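For instance, if η consists of a single unit jump at t = 1, then f(z) = z − e^{−z}, the point 1 obviously belongs to the essential support of η, and the displayed indicator is precisely the function h_g used for g(z) = z − exp(−z) in the example following Definition 14.7.1 below.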

The importance of the class of entire functions of completely regular growth in connection with resolvent estimates is contained in the following results, which allow a precise calculus for indicator functions. Using the Bernstein estimate (14.83), we have the following result, see pages 159–160 of [55].

Theorem 14.6.5 Let f_1 and f_2 be entire functions of finite non-zero order ρ and of finite type, and assume that f_2 is of completely regular growth. Then f_1 f_2 is of finite non-zero order ρ and of finite type (by Theorem 14.1.1), and

h_{f_1 f_2}(θ) = h_{f_1}(θ) + h_{f_2}(θ),    −π < θ ≤ π.    (14.91)

Furthermore, if f_1/f_2 is entire of finite non-zero order, then

h_{f_1/f_2}(θ) = h_{f_1}(θ) − h_{f_2}(θ),    −π < θ ≤ π.    (14.92)
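As a simple check of this calculus, take f_1(z) = e^{az} and f_2(z) = e^{bz} with 0 < b < a, so that h_{f_1}(θ) = a cos θ, h_{f_2}(θ) = b cos θ, and f_2 is of completely regular growth. The product f_1 f_2 = e^{(a+b)z} has indicator (a + b) cos θ = h_{f_1}(θ) + h_{f_2}(θ), in agreement with (14.91), and the quotient f_1/f_2 = e^{(a−b)z} is entire of order one with indicator (a − b) cos θ = h_{f_1}(θ) − h_{f_2}(θ), in agreement with (14.92). The point of Theorem 14.6.5 is that these identities persist for much less explicit functions, as long as one of the factors is of completely regular growth.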

Lemma 14.6.6 Let f be an entire function of finite non-zero order ρ and of finite type, and suppose that there exist a non-negative integer m, a positive constant M, and a ρ-admissible set {ray(θ_j) | j = 1, ..., κ} such that

|f(z)| ≤ M(1 + |z|^m)    for z ∈ ray(θ_j), j = 1, 2, ..., κ.    (14.93)

Fix j, 1 ≤ j ≤ κ. Then θ_{j+1} − θ_j < π/ρ implies that

h_f(θ) = 0    for θ_j ≤ θ ≤ θ_{j+1}.    (14.94)

On the other hand, if θ_{j+1} − θ_j = π/ρ, then

h_f(θ) = h_f(θ_*) cos ρ(θ − θ_*)    for θ_j ≤ θ ≤ θ_{j+1},    (14.95)

where θ_* = (θ_{j+1} + θ_j)/2 and θ_{κ+1} = θ_1 + 2π.

Proof Since {ray(θ_j) | j = 1, ..., κ} is a ρ-admissible set of half-lines in the complex plane, we have for given j, j = 1, ..., κ, that θ_{j+1} − θ_j ≤ π/ρ.


If θ_{j+1} − θ_j < π/ρ, then it follows from the same reasoning as in the proof of Theorem 14.2.1, using Proposition 14.2.3 instead of Corollary VI.4.2 of [15], that

|f(z)| ≤ M(1 + |z|^m)    for z = re^{iθ} with θ_j ≤ θ ≤ θ_{j+1}.    (14.96)

This proves (14.94).

Next assume that θ_{j+1} − θ_j = π/ρ. Since 0 < θ_1 < θ_2 < ··· < θ_κ ≤ 2π, it follows that ρ > 1/2. Consider the open sector {z ∈ C | θ_j < arg z < θ_{j+1}}. Take θ_*, θ_j ≤ θ_* ≤ θ_{j+1}, such that h_f has a local maximum at θ = θ_*. From Theorem 14.5.4 we then have that

h_f(θ) ≥ h_f(θ_*) cos ρ(θ − θ_*)    for |θ − θ_*| ≤ π/ρ.    (14.97)

Since h_f(θ_j) = h_f(θ_{j+1}) = 0, it follows from (14.97) that θ_* = (θ_{j+1} + θ_j)/2. Put d(z) = exp(h_f(θ_*)(e^{−iθ_*} z)^ρ); then d is an analytic function on this sector. Furthermore, if g(z) = f(z)/d(z), then

|g(z)| ≤ M(1 + |z|^m)    for z ∈ ray(θ_j) ∪ ray(θ_{j+1}),

and

lim sup_{r→∞}  log |g(re^{iθ_*})| / r^ρ = 0.

Following the same reasoning as in the proof of Theorem 14.2.1, using Proposition 14.2.3 instead of Corollary VI.4.2 of [15], yields that

|g(z)| ≤ M(1 + |z|^m)    for z = re^{iθ} with θ_j ≤ θ ≤ θ_{j+1},

and hence, for z = re^{iθ} with θ_j ≤ θ ≤ θ_{j+1},

|f(z)| ≤ M(1 + |z|^m) exp(h_f(θ_*) r^ρ cos ρ(θ − θ_*)),    θ_j ≤ θ ≤ θ_{j+1}.    (14.98)

This proves that h_f(θ) ≤ h_f(θ_*) cos ρ(θ − θ_*) for θ_j ≤ θ ≤ θ_{j+1}, and together with (14.97) this completes the proof of (14.95). □
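To see (14.95) at work in a concrete case, take ρ = 1, f(z) = e^z, and the two rays ray(π/2) and ray(3π/2), on which |e^z| = 1; the two gaps are exactly π = π/ρ, the admissible borderline case, so only the second alternative of the lemma applies. For the sector π/2 ≤ θ ≤ 3π/2 the bisecting direction is θ_* = π with h_f(π) = −1, and (14.95) gives h_f(θ) = −cos(θ − π) = cos θ; for the complementary sector θ_* = 2π with h_f(2π) = 1, and (14.95) gives h_f(θ) = cos(θ − 2π) = cos θ. This is indeed the indicator of e^z. The alternative (14.94) is the degenerate case: when a gap is strictly smaller than π/ρ, the polynomial bound propagates from the bounding rays into the whole sector, and the indicator vanishes there.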

14.7 The Dominating Property

The following definition presents a crucial notion in the resolvent estimates developed in this book.


Definition 14.7.1 Let g be an entire function of finite non-zero order ρ, and let f be an entire function of finite order. We say that g dominates f if one of the following two conditions is satisfied:

(a) the order of f is strictly less than ρ;
(b) the order of f is equal to ρ and

h_f(θ) ≤ h_g(θ),    −π < θ ≤ π.    (14.99)

To illustrate Definition 14.7.1 with an example, let us take f(z) = exp(−z/2) and g(z) = z − exp(−z). Note that f = f_1 and g = f_2, where f_1 is the first function appearing in (14.7) with n = 1 and s = −1/2, and f_2 is the second function appearing in (14.7). Thus both f and g are entire functions of order one and of finite type, that is, both are of exponential type. Furthermore, h_g is given by the right hand side of (14.59). On the other hand, h_f(θ) = −(1/2) cos θ. Now note that

h_f(θ) = −(1/2) cos θ ≤ 0 = h_g(θ)    for −π/2 ≤ θ ≤ π/2,
h_f(θ) = −(1/2) cos θ ≤ −cos θ = h_g(θ)    for π/2 ≤ θ ≤ 3π/2.

Thus g dominates f.

Since g in Definition 14.7.1 has a non-zero order, item (a) is satisfied for all entire functions of order zero, and hence all entire functions of order zero are dominated by any entire function of finite non-zero order. In particular, all polynomials are dominated by any entire function of finite non-zero order.

Lemma 14.7.2 Let g be an entire function of finite non-zero order, and assume that g is of finite type. Put

F_g = {f is an entire function | f is dominated by g}.    (14.100)

Then F_g is a linear space.

Proof Let f ∈ F_g. We first show that cf ∈ F_g for any c ∈ C. For c = 0 this is trivially true. Thus assume that c ≠ 0. Then cf has the same order as f. Thus if the order of f is strictly less than the order of g, then the same holds true for cf, and hence cf ∈ F_g by definition. Next assume the order of f is equal to the order ρ of g. Then

h_{cf}(θ) = lim sup_{r→∞} log |(cf)(re^{iθ})| / r^ρ = lim sup_{r→∞} log |c| / r^ρ + lim sup_{r→∞} log |f(re^{iθ})| / r^ρ = h_f(θ) ≤ h_g(θ),    −π < θ ≤ π.

Thus cf ∈ F_g for any c ∈ C.


Next we show that F_g is closed under addition. Let f_j be an entire function of finite order ρ_j, and assume that g dominates f_j, j = 1, 2. We want to show that f = f_1 + f_2 ∈ F_g. Without loss of generality we may assume that ρ_1 ≤ ρ_2. From (14.4) it follows that f is of finite order ρ_f and ρ_f ≤ ρ_2. Furthermore, since g dominates f_2, we have ρ_2 ≤ ρ, where ρ is the order of g, and hence ρ_f ≤ ρ_2 ≤ ρ. If ρ_f < ρ_2, then ρ_f < ρ, and item (a) in Definition 14.7.1 tells us that g dominates f. Therefore in what follows we may assume that the order ρ_f of f is equal to ρ_2. If ρ_f = ρ_2 = 0, then (see the paragraph preceding the present lemma) f is dominated by g. Thus we may assume that ρ_f = ρ_2 > 0. Similarly, if ρ_2 < ρ, then ρ_f < ρ, and again f is dominated by g. Thus we may assume that ρ_f = ρ_2 = ρ. In that case we have

h_f(θ) = lim sup_{r→∞} log |f(re^{iθ})| / r^{ρ_f}
       = lim sup_{r→∞} log |f_1(re^{iθ}) + f_2(re^{iθ})| / r^{ρ_2}
       ≤ lim sup_{r→∞} log( 2 max{|f_1(re^{iθ})|, |f_2(re^{iθ})|} ) / r^{ρ_2}
       ≤ lim sup_{r→∞} max{ log(2|f_1(re^{iθ})|), log(2|f_2(re^{iθ})|) } / r^{ρ_2}.    (14.101)

If, in addition, ρ_1 = ρ_2, then it follows from (14.101) that

h_f(θ) = h_{f_1+f_2}(θ) ≤ max{h_{f_1}(θ), h_{f_2}(θ)} ≤ h_g(θ).

Thus g dominates f, as desired.

To complete the proof it remains to consider the case when ρ_1 < ρ_2 = ρ_f = ρ. Recall that f_2 ∈ F_g. Thus g dominates f_2. As the order of f_2 is the order of g, it follows that h_{f_2}(θ) ≤ h_g(θ) for all −π < θ ≤ π by (14.99). Moreover, since g is assumed to be of finite type, we know that h_g is uniformly bounded from above. But then the same holds true for h_{f_2}. Next, since f_1 is of finite order ρ_1, and using (14.2) and (14.3), we know that for all ε > 0 there exists R = R(ε) > 0 such that log |f_1(re^{iθ})| < r^{ρ_1+ε} for all r > R. Thus with 0 < ε < ε_0 < ρ_2 − ρ_1 we arrive at

lim sup_{r→∞} log(2|f_1(re^{iθ})|) / r^{ρ_2} ≤ lim sup_{r→∞} r^{ρ_1−ρ_2+ε_0} = 0.

But then the final inequality in (14.101) and the fact that h_{f_2} is uniformly bounded from above show that h_f(θ) ≤ h_{f_2}(θ) for all −π < θ ≤ π.


Recall that h_{f_2}(θ) ≤ h_g(θ) for all −π < θ ≤ π. We conclude that h_f(θ) ≤ h_g(θ) for all −π < θ ≤ π. Thus g dominates f, which completes the proof. □

The linear space F_g defined in (14.100) plays an important role in the resolvent estimates in Chap. 4. The purpose of the subsequent lemmas is to prove further properties of F_g in case we have further information about the entire function g. In particular, we will assume in the sequel that g is an entire function of finite non-zero order ρ and of completely regular growth. Recall that, by definition, the latter implies that g is of normal type. Hence, in particular, the function g is of finite type.

Lemma 14.7.3 Let g be an entire function of finite non-zero order ρ and of completely regular growth, and let F_g be defined by (14.100). If f is an entire function such that f/g is a rational function, then f ∈ F_g.

Proof Since g dominates the function identically zero, we can assume that f ≠ 0. Fix a polynomial p_1 such that p_1 f/g equals a polynomial p_2. From Proposition 14.1.2 we obtain that the order of p_2 g equals ρ. Since p_1 f = p_2 g, this proves that the order of f equals ρ, and it follows from Proposition 14.4.2 that f and g have the same indicator function. This proves f ∈ F_g. □

Lemma 14.7.4 Let g be an entire function of finite non-zero order ρ and of completely regular growth, and let F_g be defined by (14.100). If f ∈ F_g and p is a polynomial, then pf ∈ F_g.

Proof Let f ∈ F_g and p a polynomial. According to Theorem 14.1.1, the order of the product pf equals the order of f. If the order ρ_f of f is less than ρ, then the order of pf is less than ρ, and hence pf ∈ F_g. If ρ_f = ρ, then it follows from Proposition 14.4.2 that f and pf have the same indicator function. So f ∈ F_g implies pf ∈ F_g. □

Lemma 14.7.5 Let g be an entire function of finite non-zero order ρ and of completely regular growth, and let F_g be defined by (14.100). Let f ∈ F_g, and assume that there exist a non-negative integer m, a positive constant M, and a ρ-admissible set {ray(θ_j) | j = 1, ..., κ} such that

0 < |g(z)| ≤ M(1 + |z|^m)    and    |f(z)| ≤ M(1 + |z|^m)    for z ∈ ray(θ_j), j = 1, 2, ..., κ.    (14.102)

In addition, assume that p := f/g is an entire function. Then the following two statements hold true.

(a) If the order of f is strictly less than ρ, then g has finitely many zeros.
(b) If the order of f equals ρ, then p = f/g is a polynomial of degree at most m.

Proof Let f ∈ F_g and ρ_f ≤ ρ, where ρ_f denotes the order of f. If ρ_f < ρ, then it follows from Theorem 14.2.1 that f is a polynomial of degree at most m. Since f is a polynomial and f/g is entire, it follows that g has finitely many zeros. This proves (a).


Next assume that ρ_f = ρ. Recall that p = f/g and p is an entire function. From (14.99) we obtain that h_f(θ) ≤ h_g(θ) for −π < θ ≤ π, and using (14.102) we have

|p(z)| ≤ M(1 + |z|^m)    for z ∈ ray(θ_j), j = 1, 2, ..., κ.    (14.103)

Furthermore, it follows from Theorem 14.5.5 that for every ε > 0 there exists a positive constant R such that

log |f(re^{iθ})| < [h_g(θ) + ε] r^ρ,    −π < θ ≤ π, r ≥ R.    (14.104)

On the other hand, since g is an entire function of finite non-zero order ρ and of completely regular growth, the function g is of normal type by definition, and we know from (14.85) and (14.86) that there exist disks D(λ_k, r_k) with λ_k ∈ C and r_k > 0, k = 1, 2, ..., such that

log |g(re^{iθ})| = h_g(θ) r^ρ + o(r^ρ)    for re^{iθ} ∉ ∪_{k≥1} D(λ_k, r_k).

Combining this estimate with (14.104) we arrive at

log |p(re^{iθ})| = log |f(re^{iθ})| − log |g(re^{iθ})| ≤ ε r^ρ + o(r^ρ)    for r ≥ R and re^{iθ} ∉ ∪_{k≥1} D(λ_k, r_k).    (14.105)

Therefore it follows from (14.103) and (14.105) that p satisfies the assumptions of Theorem 14.2.2, and an application of Theorem 14.2.2 yields that p is a polynomial. □

We also need the following result.

Lemma 14.7.6 Let g be an entire function of finite non-zero order ρ and of completely regular growth, and let F_g be defined by (14.100). Let f_k ∈ F_g, k = 1, 2, ..., be such that there exist a non-negative integer m, a positive constant M, and a ρ-admissible set {ray(θ_j) | j = 1, ..., κ} such that

|f_k(z)| ≤ M(1 + |z|^m)    for z ∈ ray(θ_j), j = 1, 2, ..., κ.    (14.106)

If f is an entire function of order at most ρ and f_k → f pointwise on C as k → ∞, then f ∈ F_g.

Proof Given (14.106), the fact that f_k → f pointwise on C implies that

|f(z)| ≤ M(1 + |z|^m)    for z ∈ ray(θ_j), j = 1, 2, ..., κ.    (14.107)

Next observe that if the order of f is less than ρ, then by definition f ∈ Fg . Therefore, in the sequel we can assume that the order of f equals ρ. If the order of


f_k is strictly less than ρ, then it follows from (14.106) and Theorem 14.2.1 that f_k is a polynomial of degree at most m. Therefore, if there are only finitely many f_k of order ρ, then there exists a subsequence f_{k_j}, j = 1, 2, ..., such that each f_{k_j} is a polynomial of degree at most m. But then f is also a polynomial of degree at most m, and hence f belongs to F_g. Thus in the sequel we can assume that there exists a subsequence k_j, j = 1, 2, ..., such that the order of f_{k_j} equals ρ for every j. Since the set of half-lines in the complex plane {ray(θ_j) | j = 1, ..., κ} is ρ-admissible, we have for given j, j = 1, ..., κ, that θ_{j+1} − θ_j ≤ π/ρ, where θ_{κ+1} = θ_1 + 2π. Hence it suffices to prove that

h_f(θ) ≤ h_g(θ)    for θ_j ≤ θ ≤ θ_{j+1}    (j = 1, ..., κ).

Fix j, 1 ≤ j ≤ κ. We consider two cases.

Case 1 Assume θ_{j+1} − θ_j < π/ρ. Then it follows from the proof of Lemma 14.6.6, see (14.96), that

|f_k(z)| ≤ M(1 + |z|^m)    for z = re^{iθ} with θ_j ≤ θ ≤ θ_{j+1},    (14.108)

and hence h_{f_k}(θ) = 0 for θ_j ≤ θ ≤ θ_{j+1} and k = 1, 2, .... Since g dominates f_k and the order of f_k equals ρ, we obtain

0 = h_{f_k}(θ) ≤ h_g(θ)    for θ_j ≤ θ ≤ θ_{j+1}.    (14.109)

Moreover, since f_k converges pointwise to f, it follows from (14.108) that

|f(z)| ≤ M(1 + |z|^m)    for z = re^{iθ} with θ_j ≤ θ ≤ θ_{j+1}.

Since the order of f is ρ, we can use Lemma 14.6.6 to obtain

h_f(θ) = 0    for θ_j ≤ θ ≤ θ_{j+1}.

Thus it follows from (14.109) that

h_f(θ) ≤ h_g(θ)    for θ_j ≤ θ ≤ θ_{j+1}.    (14.110)

Case 2 Assume θ_{j+1} − θ_j = π/ρ. Then it follows from the proof of Lemma 14.6.6, see (14.98), that for all k_n such that the order of f_{k_n} equals ρ, and z = re^{iθ} with θ_j ≤ θ ≤ θ_{j+1}, we have

|f_{k_n}(z)| ≤ M(1 + |z|^m) exp(h_{f_{k_n}}(θ_*) r^ρ cos ρ(θ − θ_*))

and

h_{f_{k_n}}(θ) = h_{f_{k_n}}(θ_*) cos ρ(θ − θ_*),    θ_j ≤ θ ≤ θ_{j+1}.    (14.111)


Here θ_* = (θ_{j+1} + θ_j)/2 and θ_{κ+1} = θ_1 + 2π. From the fact that f_{k_n} ∈ F_g has order ρ, we obtain that h_{f_{k_n}}(θ) ≤ h_g(θ) for all n such that f_{k_n} has order ρ. Put α = sup_{n≥1} h_{f_{k_n}}(θ_*). Then

α cos ρ(θ − θ_*) ≤ h_g(θ),    θ_j ≤ θ ≤ θ_{j+1}.    (14.112)

Recall that if the order of f_k is less than ρ, then it follows from (14.106) and Theorem 14.2.1 that f_k is a polynomial of degree at most m. If f_k has order ρ, then f_k satisfies (14.111) with k = k_n for some k_n. This shows that for all k = 1, 2, ... we have the estimate

|f_k(z)| ≤ M(1 + |z|^m) exp(α r^ρ cos ρ(θ − θ_*))    (14.113)

for z = re^{iθ} with θ_j ≤ θ ≤ θ_{j+1}. Since f_k → f pointwise on C and f satisfies (14.107), it follows from (14.113) that for z = re^{iθ} with θ_j ≤ θ ≤ θ_{j+1} we have

|f(z)| ≤ M(1 + |z|^m) exp(α r^ρ cos ρ(θ − θ_*)).

This shows that

h_f(θ) ≤ α cos ρ(θ − θ_*)    for θ_j ≤ θ ≤ θ_{j+1}.    (14.114)

Therefore, it follows from (14.112) that

h_f(θ) ≤ h_g(θ)    for θ_j ≤ θ ≤ θ_{j+1}.    (14.115)

Together with (14.110), this completes the proof that g dominates f .
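By way of illustration of how the class F_g is built up, return to g(z) = z − exp(−z) from the example following Definition 14.7.1, for which h_g(θ) = 0 on −π/2 ≤ θ ≤ π/2 and h_g(θ) = −cos θ on π/2 ≤ θ ≤ 3π/2. For every fixed 0 ≤ s ≤ 1 the function e^{−sz} has indicator −s cos θ, and since −s cos θ ≤ 0 on the first range and −s cos θ ≤ −cos θ on the second, each such exponential is dominated by g. Because F_g is a linear space (Lemma 14.7.2) and contains all polynomials, it therefore contains every finite sum

p_0(z) + Σ_{j=1}^{N} c_j e^{−s_j z},    c_j ∈ C,  0 ≤ s_j ≤ 1,

with p_0 a polynomial. Functions of this form, and integral analogues such as the one in Example 14.6.4, reappear in Corollary 14.8.4 below.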

 

14.8 Distribution of Zeros of Entire Functions and Related Properties

In this monograph we study completeness of eigenvectors and generalised eigenvectors of operators in terms of growth estimates of certain entire functions. However, completeness of eigenvectors and generalised eigenvectors is also intimately connected with the distribution of the eigenvalues themselves. The purpose of this section is to explain and illustrate the deep connection between the distribution of zeros and the growth properties of entire functions.

Let f be an entire function. By n(r, f) we denote the number of zeros of the entire function f in the disk {z ∈ C | |z| ≤ r}, taking multiplicities into account. There are classical results that describe the connection between log M(r; f) and


the counting function n(r, f). In particular, an application of Jensen's formula (see Theorem 15.18 of [73]) yields that

lim sup_{r→∞}  log n(r, f) / log r  ≤ ρ,    where ρ is the order of f.

In what follows, if d := lim_{r→∞} n(r, f)/r exists and is finite, then d is called the density of the zeros of f or just the zero density of f.
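A short way to see this bound (sketched here for convenience; the argument is the standard one) is as follows. Assume first that f(0) ≠ 0. Jensen's formula gives

n(r, f) log 2 ≤ ∫_r^{2r} n(t, f)/t dt ≤ log M(2r; f) − log |f(0)|,

and since log M(2r; f) ≤ (2r)^{ρ+ε} for every ε > 0 and all sufficiently large r, it follows that log n(r, f) ≤ (ρ + ε) log(2r) + O(1), which yields the stated inequality; a zero of f at the origin is handled by first dividing f by the appropriate power of z.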

14.8.1 Distribution of Zeros and Completely Regular Growth

We begin with a motivating example. Given a sequence ν_1, ν_2, ... in C, it is clear that the linear span of the exponential functions exp(−ν_j t), j = 1, 2, ..., is dense in L²[0, 1] if and only if the only function ϕ ∈ L²[0, 1] satisfying

∫_0^1 e^{−ν_j t} ϕ(t) dt = 0,    j = 1, 2, ...,    (14.116)

is ϕ = 0. Now fix a non-zero ϕ ∈ L²[0, 1] satisfying (14.116), and put

f(z) = ∫_0^1 e^{−zt} ϕ(t) dt.    (14.117)

Then the function f is an entire function with zeros at z = ν_j, j = 1, 2, .... According to Lemma 14.3.7 the function f belongs to the Paley-Wiener class PW and f is of exponential type with type at most one. Furthermore, from Theorem 14.6.3 it follows that f is of completely regular growth.

For entire functions of order one of completely regular growth there is precise information about the distribution of the zeros. We summarise these properties in the following theorem, which is based on Theorems V.8 and V.11 in [55] (see also the Ahlfors-Heins theorem, Theorem 7.2.6 of [12]). The following theorem is an addition to Theorem 14.6.2.

Theorem 14.8.1 Let f be a non-zero entire function of exponential type satisfying the two conditions (14.88) and (14.89). Then

(1) f is of completely regular growth and its zeros μ_j, j = 1, 2, ..., satisfy the condition

Σ_{j=1}^∞ |Re(1/μ_j)| < ∞.    (14.118)


(2) The indicator function of f is given by

h_f(θ) = h_f(0) cos θ for −π/2 ≤ θ ≤ π/2;    h_f(θ) = −h_f(π) cos θ for π/2 ≤ θ ≤ 3π/2.    (14.119)

(3) The density of the zeros of f satisfies

lim_{r→∞} n(r, f)/r = (1/2π) ∫_{−π}^{π} h_f(θ) dθ.    (14.120)

In particular, if the right hand side of (14.120) is non-zero, then f has infinitely many zeros.

For the proof of the above theorem we refer to the proofs of Theorems V.8 and V.11 in [55]. To be more specific, note that in [55] functions satisfying (14.118) (with the real part replaced by the imaginary part) are said to belong to class A (see [55, Chapter V]). Item (1) then follows from [55, Theorem V.8] by applying this theorem twice, using the upper and lower half planes. The proof of item (2) is covered by the proof of [55, Theorem V.8] as given on pages 245 and 246 in [55], again using the upper and lower half planes. Item (3) is covered by item (2) of [55, Theorem V.11].

Remark 14.8.2 Given (2) and (3) in Theorem 14.8.1 above, the integral in the right hand side of (14.120) can be rewritten as follows:

∫_{−π}^{π} h_f(θ) dθ = ∫_{−π/2}^{π/2} h_f(θ) dθ + ∫_{π/2}^{3π/2} h_f(θ) dθ
                    = h_f(0) ∫_{−π/2}^{π/2} cos θ dθ − h_f(π) ∫_{π/2}^{3π/2} cos θ dθ
                    = h_f(0) [sin θ]_{−π/2}^{π/2} − h_f(π) [sin θ]_{π/2}^{3π/2}
                    = 2h_f(0) + 2h_f(π).    (14.121)

Applying Theorem 14.8.1 to the function f given by (14.117) we obtain the following corollary.

Corollary 14.8.3 Let f be the entire function given by (14.117), and let c(r) be the counting function given by c(r) = #{j | |ν_j| ≤ r}. Assume that

lim inf_{r→∞} c(r)/r > 1/π.    (14.122)

Then the linear span of {exp(−ν_j t) | j = 1, 2, ...} is dense in L²[0, 1].
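For instance, if ν_j = ij, j = 1, 2, ..., then c(r) = ⌊r⌋ and lim inf_{r→∞} c(r)/r = 1 > 1/π, so Corollary 14.8.3 shows that the linear span of the functions e^{−ijt}, j = 1, 2, ..., is dense in L²[0, 1]. On the other hand, for ν_j = 4ij the left hand side of (14.122) equals 1/4 < 1/π, and the corollary gives no conclusion; note that (14.122) is only a sufficient condition.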


Proof We argue by contradiction. Assume we do not have completeness. Then there exists a non-zero ϕ ∈ L²[0, 1] such that

∫_0^1 e^{−ν_j t} ϕ(t) dt = 0,    j = 1, 2, ....    (14.123)

Given this ϕ, define f by (14.117). Since ϕ is non-zero, the same is true for f. It follows that σ := inf{τ ∈ [0, 1] | ϕ|_{[τ,1]} = 0 a.e.} > 0. Obviously, σ ≤ 1. Applying Proposition 14.4.3, we see that

h_f(θ) = 0 when −π/2 ≤ θ ≤ π/2,    h_f(θ) = −σ cos θ when π/2 ≤ θ ≤ 3π/2.

But then (14.121) tells us that

(1/2π) ∫_{−π}^{π} h_f(θ) dθ = σ/π ≤ 1/π.

Using (14.120) we conclude that

lim_{r→∞} n(r, f)/r ≤ 1/π.

On the other hand, in this case c(r) ≤ n(r, f) by (14.123). This implies that (14.122) is not satisfied, a contradiction. Hence completeness holds, which proves the corollary. □

Corollary 14.8.4 Let η ∈ NBV[0, a], and assume η is not constant on (0, a], where a > 0. Let f be the entire function given by

f(z) = p(z) + q(z) ∫_0^a e^{−zs} dη(s),    (14.124)

where p and q are non-zero polynomials. Then f has infinitely many zeros.

Proof From Definition 14.3.3 we know that f belongs to the Paley-Wiener class, and according to Proposition 14.4.4 its indicator function h_f is given by

h_f(θ) = 0 for −π/2 ≤ θ ≤ π/2,    h_f(θ) = −σ cos θ for π/2 ≤ θ ≤ 3π/2.    (14.125)


Here σ is given by σ = inf{τ ∈ (0, a] | η is constant on (τ, a]}. Since η is not constant on (0, a] and a > 0, we have 0 < σ ≤ a. Using (14.121) it follows that

∫_{−π}^{π} h_f(θ) dθ = 2σ > 0.    (14.126)

From (14.125) it also follows that h_f(−π/2) + h_f(π/2) = 0. Furthermore, since f ∈ PW, the function f is polynomially bounded on the imaginary axis iR, which implies that

∫_0^∞  log |f(iy) f(−iy)| / (1 + y²)  dy    converges.

Hence conditions (14.88) and (14.89) in Theorem 14.6.2 are fulfilled. Finally, the fact that f ∈ PW implies that f is of exponential type. But then we can apply Theorem 14.8.1. Using (14.126) we see that item (3) in Theorem 14.8.1 shows that

lim_{r→∞} n(r, f)/r = (1/2π) ∫_{−π}^{π} h_f(θ) dθ = σ/π > 0.    (14.127)

Here n(r, f) is the number of zeros of f in the closed disc with centre zero and radius r. Since the right hand side of (14.127) is non-zero, we conclude that f has infinitely many zeros. □

Remark 14.8.5 Corollary 14.8.4 remains true if e^{−zs} in (14.124) is replaced by e^{zs}. To see this, note that

f(z) = p(z) + q(z) ∫_0^a e^{zs} dη(s)    ⟺    f̃(z) = p̃(z) + q̃(z) ∫_0^a e^{−zs} dη(s),

where f̃(z) = f(−z), p̃(z) = p(−z), and q̃(z) = q(−z). Obviously, f̃ has infinitely many zeros if and only if f has infinitely many zeros. Furthermore, p̃ and q̃ are non-zero polynomials if and only if p and q are non-zero polynomials. Thus applying Corollary 14.8.4 with f, p, q replaced by f̃, p̃, q̃, respectively, shows that the corollary remains true if e^{−zs} is replaced by e^{zs}.

If the polynomial p in (14.124) is zero, it may happen that f has no zeros at all. The simplest example is given by f(z) = e^{−z}. This entire function has no zeros and is of the form (14.124) with p(z) = 0 and q(z) = 1 for all z ∈ C, and with

η(t) = 0 if 0 ≤ t < 1,    η(t) = 1 if t = 1.


Note that in this case h_f(θ) = −cos θ, and hence h_f(0) + h_f(π) = 0. On the other hand, using the same line of reasoning as in the proof of Corollary 14.8.4, one can obtain the following more positive result.

Corollary 14.8.6 Let η ∈ NBV[0, a], and assume η is not constant on (0, a], where a > 0. Let f be the entire function given by

f(z) = q(z) ∫_0^a e^{−zs} dη(s),    where q is a non-zero polynomial.    (14.128)

Then f has infinitely many zeros if σ ≠ σ_∘, where

σ = inf{τ ∈ (0, a] | η is constant on (τ, a]},    σ_∘ = sup{τ ∈ [0, a] | η|_{[0,τ]} = 0}.

Proof In this case h_f(0) = −σ_∘ and h_f(π) = σ. Thus the condition σ ≠ σ_∘ is equivalent to h_f(0) + h_f(π) being non-zero. But then, using a reasoning similar to the one given in the proof of Corollary 14.8.4, one obtains that f has infinitely many zeros. □

In order to continue with the description of the connection between the growth properties of an entire function f and the distribution of its zeros, we need some additional definitions.
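Before turning to these definitions, we note a simple instance of Corollary 14.8.6; the computation, including the normalisation of η, is only meant as an illustration. If η has unit jumps at s = 0 and s = 1, so that ∫_0^1 e^{−zs} dη(s) = 1 + e^{−z}, then σ_∘ = 0 and σ = 1, hence σ ≠ σ_∘, and indeed 1 + e^{−z} has the infinitely many zeros z = −(2k + 1)πi, k ∈ Z. For f(z) = e^{−z}, by contrast, σ = σ_∘ = 1 and the corollary does not apply, consistent with the fact that this function has no zeros at all.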

14.8.2 Genus, Convergence Exponent, and Order of Entire Functions

Let f be a given entire function, and let {μ_1, μ_2, ...} denote the non-zero zeros of f repeated according to multiplicity and arranged such that |μ_1| ≤ |μ_2| ≤ ···. The function f is said to be of finite rank p if p is a non-negative integer such that

Σ_{j=1}^∞ |μ_j|^{−(p+1)} < ∞    and    Σ_{j=1}^∞ |μ_j|^{−p}    diverges.

Definition 14.8.7 We say that the entire function f has finite genus if f has finite rank p and f can be factored as

f(z) = z^m e^{g(z)} Π_{j=1}^∞ E_p(z/μ_j),    (14.129)


where m is a non-negative integer, g is a polynomial, and E_p is defined by

E_0(u) = 1 − u,    E_p(u) = (1 − u) exp( u + u²/2 + ··· + u^p/p ),    p ≥ 1.

In that case the genus μ of f is given by μ = max{p, q}, where q is the degree of the polynomial g.

The following two theorems relate genus to order and vice versa.

Theorem 14.8.8 (See Corollary XI.2.16 in [15]) If f is an entire function of finite genus μ, then f is of finite order ρ ≤ μ + 1.

Next let us consider an entire function f of finite non-zero order ρ. The following theorem is known as the Hadamard factorization theorem.

Theorem 14.8.9 (See Theorem XI.3.4 in [15]) If f is an entire function of finite order ρ, then f has finite genus μ ≤ ρ.

In the next proposition we use the notion of convergence exponent. Consider a sequence of complex numbers

a_1, a_2, a_3, ...,    a_j ≠ 0 (j = 1, 2, 3, ...),    lim_{j→∞} |a_j| = ∞.

By definition, the convergence exponent of the sequence {a_j} is the number

inf{ r | Σ_{j=1}^∞ 1/|a_j|^r < ∞ }.    (14.130)

Note that the set in the right hand side of (14.130) can be empty. In that case we say that the convergence exponent is infinite. An example of a sequence with an infinite convergence exponent is given by the sequence a_j = log(j + 1), j = 1, 2, 3, ...; see the paragraph after Definition 1.3.1.
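For orientation: for a_j = j the convergence exponent equals 1, since Σ_{j≥1} j^{−r} converges precisely for r > 1, and for a_j = j² it equals 1/2. For the non-zero zeros kπ, k ∈ Z \ {0}, of sin z the convergence exponent is again 1; correspondingly sin z has rank p = 1, and its factorization sin z = z Π_{k=1}^∞ (1 − z²/(k²π²)) is of the form (14.129) with g ≡ 0, so the genus equals max{1, 0} = 1, in agreement with Theorem 14.8.8 (the order 1 does not exceed μ + 1 = 2) and Theorem 14.8.9 (the genus 1 does not exceed the order 1).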

|aj |

−(p+1)